One proposal would be very simple: enhance the existing UVW Map modifier by adding the ability to apply UV mapping not to the mesh as a whole, but to each element of the mesh.
I know it's possible to achieve the same result by selecting each element in Edit Mesh or Edit Poly mode, detaching those elements from the single mesh, and ending up with lots of objects to which you can individually apply a UVW Map modifier. But in some cases, when you have a complex mesh consisting of lots of elements, that can be very time-consuming work. These complex meshes are often the result of importing geometry from other 3D applications.
So the idea is to apply, say, a spherical or box or whatever type of UV projection to each separate mesh element, without the need to divide the mesh and detach parts of it.
Now if we have a complex model, say a tree, or, well, anything really, and we are not planning to produce perfect UVs by pelt mapping or by selecting separate polygons and applying separate planar maps to parts of the mesh, we would still want some simple method to get UV coordinates fast and semi-accurately. In the current situation we can simply apply a UVW Map modifier to the whole object and hope that a box or cylindrical projection will do fine, and in some cases it does: say the model is far from the camera and we have no time to produce good UV coordinates. But imagine we had some middle-quality solution, something between creating precise UV maps for separate polygon selections (or having to use pelt mapping) and simply applying, say, a cylindrical projection to the whole mesh.
I propose adding a "per element" button to the UVW Map modifier which would apply the selected projection method not to the whole object but to each separate mesh element. We would also get the ability to manipulate (scale, rotate, move) the UV projection of every single element of the mesh within one interface, without having to apply tens or hundreds of Mesh Select and UVW Map modifiers and getting lost in a huge modifier stack.
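To make the idea a bit more concrete, here is a minimal sketch in plain Python (not the 3ds Max API; every name here is my own illustration) of what a "per element" projection could boil down to: find the connected elements by walking shared vertices, then give each element its own simple planar projection fitted to that element's bounding box, which is the simplest face of a box map.

from collections import defaultdict

def connected_elements(faces):
    """Group face indices into elements (faces connected through shared vertices)."""
    vert_to_faces = defaultdict(set)
    for fi, face in enumerate(faces):
        for v in face:
            vert_to_faces[v].add(fi)
    seen, elements = set(), []
    for start in range(len(faces)):
        if start in seen:
            continue
        stack, element = [start], []
        while stack:
            fi = stack.pop()
            if fi in seen:
                continue
            seen.add(fi)
            element.append(fi)
            for v in faces[fi]:
                stack.extend(vert_to_faces[v] - seen)
        elements.append(element)
    return elements

def per_element_planar_uvs(vertices, faces):
    """Give every element its own 0..1 UV square, projected from the top (Z axis)."""
    uvs = {}
    for element in connected_elements(faces):
        vids = {v for fi in element for v in faces[fi]}
        xs = [vertices[v][0] for v in vids]
        ys = [vertices[v][1] for v in vids]
        min_x, min_y = min(xs), min(ys)
        span_x = (max(xs) - min_x) or 1.0
        span_y = (max(ys) - min_y) or 1.0
        for v in vids:
            # each element fills the full UV square on its own
            uvs[v] = ((vertices[v][0] - min_x) / span_x,
                      (vertices[v][1] - min_y) / span_y)
    return uvs

A real modifier would of course do proper box, cylindrical, or spherical projections and respect gizmo transforms; the only point of the sketch is the grouping by element.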
Let's look at the images below.
1. Cylindrical map projection applied to a single mesh.

2. Lots of UVW Map and Poly Select modifiers to achieve accurate projections for the whole model.

3. Proposed projection-per-element mode.

4. The same, but with a floating toolbox for selecting each element and applying a different (cylindrical, box, etc.) projection mode to it.
Some additional thoughts.
Why think only about mesh elements? Can we use material IDs for the same purpose? Could we apply separate UVW projection methods per material ID as well? That could come in handy too.

mapIDs for UV coordinate sets

The next question would be: can we use both material IDs and mesh elements? Is there an easy and fast (one-button) way to convert separate mesh elements to material IDs, or to convert material IDs to mesh elements? And if we were to write such a tool, where would we place it? Should it be part of the Edit Poly modifier? Edit Mesh? Or the UVW Map modifier?
Any ideas?

A strange title, I know. But it says what I was thinking about, and what tool I would like to see in 3ds Max.
OK, let's imagine a situation where one has a 3D model with UV mapping.
Say we want to use a displacement map, and we have no time to use ZBrush, or even pelt mapping or Unwrap UVW, to generate nice UVs for drawing the displacement map in Photoshop.
Say our needs are basic, and we are fine using a noise, smoke, or whatever map for displacement.
Now let's also presume our model has big as well as small elements (imagine a tree: big trunk, small branches; or a human: big chest, small fingers; etc.).
So our noise map generates black and white areas, one of which will be extruded outwards (or inwards). Say the average edge length in the "big" elements of the model is 10 units, the average edge length in a "small" element is 0.1 units, and our maximum displacement value is 1 unit. The result: a nice extrusion on the big elements of the model, and a total mess (pardon the expression :) on the small elements of the mesh. Why? See the example below.

displacement
It's quite hard to see, but the green object shows the geometry and all its edges. Where it says "huge" distortion, I mean huge compared to the width of that part of the object; and where it says "small" distortion, again it is small compared to the general size of that part or element of the object. So as you can see, the teapot's main body is big and the displacement is only a small percentage of its overall size, while on the thin lines the same displacement well exceeds the width of the line.
So to reach a situation where big parts of the mesh get bigger displacement values and smaller parts get smaller ones, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (or extrusion height, to be more precise); the map itself (image, noise, or whatever) stays the same.
Have a look at the images:

displacement II

So what about a situation where we have big, small, and medium-sized elements? Or where the object's parts gradually get smaller or bigger? Say the trunk and branches of a tree. If we had nice UVs we could draw the displacement map in Photoshop: the big parts would get a black and white image whose color extremes run from RGB 255,255,255 to 0,0,0, while the smallest branches would get extremes of something like RGB 188,188,188 to 170,170,170. To put it another way, we would have a contrasty image for the big parts of the mesh, and a very gray image, as if seen through fog, for the small parts of the model.
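Just to pin down what I mean by changing the contrast per element, here is a tiny sketch (my own guess at the math, not an existing 3ds Max feature): compress the map values toward mid-gray in proportion to a per-element size factor between 0 and 1.

def remap_contrast(value, size_factor, mid=0.5):
    """Compress a 0..1 displacement value around mid-gray.

    size_factor = 1.0 keeps full contrast (big elements);
    size_factor near 0.0 flattens the map toward mid-gray (small elements)."""
    return mid + (value - mid) * size_factor

# Example: on a thin branch with size_factor 0.1, pure white (1.0) becomes 0.55
# and pure black (0.0) becomes 0.45 -- the "gray, as if seen through fog" image.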

But we described a situation where we don't have nice UVs and have to use a procedural map for displacement. So no Photoshop. What to do?
We should have a tool which automatically changes the contrast levels of the map used for displacement according to the size of the mesh elements.
Let's think how on earth to do that. How can our software know which element is big and which is small? OK, it's easy if we have separate mesh elements, or material IDs: then we simply compare their sizes and can work out the contrast levels for the displacement values. But what if we have one single object? Let's look at this image:

object

So: it is one single mesh, no material IDs specified, just one single element. But as a human you can easily tell that the left side of the object is big and the right side is smaller. How can our software know that?
This is the main question of this blog entry. I do not have an answer to it, but I do have some thoughts. Let's see.
First of all, we have the bounding box of the whole object. That is the space we will be working in.

So, thought one.
What we do have is the position of each vertex in space. Now let's imagine we take each vertex and cast some number of rays in random directions from it. The idea is to measure the distance each ray travels until it hits another part of the mesh; in other words, we want to see where the ray intersects the geometry again. Some rays will go off to infinity, but we simply eliminate those: a ray that passes the boundary of the object's bounding box is not interesting to us.
So say we cast 10 rays from one vertex. Some of them will "get lost" and some will return distance values. Imagine our object is a sphere. We take one vertex and cast rays; some go inwards, inside the sphere, and eventually hit the other side of it, while others go outwards, leave the object's bounding box, and are forgotten. The rays that hit give us a distance to the other side of the sphere. Here we could also record the normal direction of the intersected polygon: by comparing the angle between the ray and the polygon's normal we could probably tell whether we hit the "inside" or the "outside" of the object.
Let's have a look at the image:

objects size determination

So what can we do with all these numbers? Can we add up all the ray values per vertex into one number, and assume that the bigger that number, the bigger the chance that this particular vertex belongs to a "big" element of the object? Because with our sphere we would get that almost all vertices have more or less the same values, so they all belong to one big element, which makes sense. And if, close by, we had a small element of the mesh, the values we got from its rays would be lower, therefore we could assume it is a smaller element. Actually, simply adding up the ray distances would be wrong; we should take the average, I guess... or does it make any difference? Also, values where, judging by the face normal, we "assume" the ray has hit a polygon facing us should be treated with caution, while values where we "assume" the ray has hit a polygon facing away could be more acceptable, since we can guess it is part of the same object, that it is the "other side" of it. This part should be investigated more.
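Here is a rough sketch of that thought in Python. The actual ray/mesh intersection is left as a stub (intersect_mesh is a hypothetical helper, not a real API call); the point is only how the per-vertex distances could be gathered and averaged into a kind of "thickness" value.

import math
import random

def random_direction():
    """A random unit vector (rejection-sampled inside the unit ball)."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        n = math.sqrt(sum(c * c for c in d))
        if 1e-6 < n <= 1.0:
            return tuple(c / n for c in d)

def vertex_thickness(vertex, intersect_mesh, max_distance, rays=10):
    """Average hit distance of random rays cast from one vertex.

    intersect_mesh(origin, direction) is assumed to return the distance to the
    first hit, or None. Rays that miss, or that travel farther than
    max_distance (roughly the bounding-box diagonal), are ignored."""
    hits = []
    for _ in range(rays):
        d = intersect_mesh(vertex, random_direction())
        if d is not None and d <= max_distance:
            hits.append(d)
    return sum(hits) / len(hits) if hits else None

Vertices with a large average would then be marked as belonging to a "big" part, vertices with a small average to a "small" part.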

face that ray hits, is it same object or another?

situationC

Oh, some more thoughts on these rays. How do we know where one object ends and another begins? I mean, imagine we have two spheres in one single mesh. How do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits something, and we determine that the face it has just hit is oriented toward the vertex that emitted the ray. The first thought would be that it belongs to another object, and we should ignore all distances that this ray gave us. But it's still possible that the face belongs to the same object, and that the distance data is relevant for determining the object's size.
So the first way would be to check all neighboring vertices and see whether the face our ray has just hit contains one of them; if not, check the neighbors of the neighbors, and so on, until we have checked all the polygons or vertices of the whole mesh. If it's not in the list, we are sure it does not connect to our vertex, it is indeed a separate entity, and we should ignore it. Or, say, it is in the list but it is 1000 faces away, so we assume it is "too far away" to take into account.
I think this method has two weaknesses. First, I assume it is too time-consuming to check all the vertices. Second, if the face does connect to the vertex that emitted the ray, we need to decide how far is far enough to be "too far", and that depends on the mesh topology and many other things. How can one decide on some value that is too much? That's why I think we should not go this way.
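For completeness, here is roughly what that connectivity check would look like (the method I just argued against): a breadth-first walk over vertex neighbors with a hop limit. adjacency is an assumed precomputed table mapping each vertex to its directly connected neighbors.

from collections import deque

def connected_within(start_vertex, hit_face_vertices, adjacency, max_hops):
    """True if any vertex of the hit face is reachable from start_vertex
    in at most max_hops edge steps; False means "separate entity or too far"."""
    targets = set(hit_face_vertices)
    seen = {start_vertex}
    queue = deque([(start_vertex, 0)])
    while queue:
        v, hops = queue.popleft()
        if v in targets:
            return True
        if hops == max_hops:
            continue
        for n in adjacency[v]:
            if n not in seen:
                seen.add(n)
                queue.append((n, hops + 1))
    return False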

Anyway, at the end of the day we would have all (or some, if we optimize our solution) vertices numbered or arranged by the "size" of the part of the object they belong to. It could be just a numeric value, or a color value. Maybe the user could manually correct errors produced by our generator simply by painting on the vertices.

After that, our new displacement map would know how to edit the original displacement map to fit the size of the 3D model's parts.

Ahh, I have confused even myself now. Anyone have any thoughts on the matter? Gee, I wonder if it was possible to understand anything from what I wrote... :)

Oh, I have a blog? Seems I forgot about it for a while :) So, some ideas on textures.

We use different textures for creating all kinds of materials for our 3D models. In some cases we use tiling and mirroring as different ways to apply textures to existing 3D objects with UV coordinates. And in some cases we use textures in architectural situations where we have, say, a couple of tiles to cover a whole wall: with a nice texture of just one tile, tiling and mirroring let us cover the whole wall. To achieve more varied results we use the procedural brick map generator (in 3ds Max), which basically helps us lay out our tile image in a more varied fashion.
But if you have ever looked at Islamic art and architectural tradition, you will have seen that some of the ornaments and tiling can blow your mind with their complexity.

Here are some articles which are very interesting regarding this topic:

Geometry feat cloaked in medieval Islamic tile,
“Cintamani” and Islamic Tiles,
A Discovery in Architecture: 15th Century Islamic Architecture Presages 20th Century Mathematics,
Q1 Project – Islamic Tiles,
The Art of Mathematics Islamic patterns,
Sometimes you can look at a wall and still fail to see the contours of the single pattern that is being repeated.
tower decoration pattern

To some extent the same can be said about baroque wallpapers in Europe, and probably in any culture and time period you could find very complicated pattern arrangements.

So while analysing a couple of Arabic patterns I thought: why do I have to spend hours tiling, mirroring, and rotating stuff in Photoshop if, theoretically, it could be done directly in the material editor?
Many patterns have a basic triangular shape. So if we have the right texture, containing our desired basic pattern element in a triangular shape, we could rotate it instead of tiling or mirroring it. Since all bitmaps, as far as I know, are rectangular, we would need to find and identify the triangular element we want to use for rotation. Technically, I guess we could develop a type of bitmap which is triangular in shape; its resolution would be defined not as, say, 640×480 but as, I don't know, 1 pixel in the top row, 640 in the bottom row, 480 pixels in height, or something like that.
But we will try to work with traditional bitmaps here. So say we open a texture in our brand new material editor. Then we should define a triangle one way or another; I guess our material editor should let us draw a triangle on top of the texture, something like guides in Photoshop. We would make sure these guides match the triangular shape in the bitmap as closely as possible. The next step would be to choose which corner of the triangle is our pivot point, or in other words where its rotation centre will be.
rotation of images
According to the angles of our triangle, the new material editor should copy and rotate the texture the number of times that comes closest to a full circle (say the pivot angle of our triangle is 45 degrees; the material editor would then copy/rotate our original image exactly 8 times, if I can still count :) ).

So the next thing, once we have a nice circle of texture that repeats itself not by tiling or mirroring but by rotation, is to describe how to make many such circles cover the full wall with the new pattern.
Here I discovered there are 3 types of triangles: perfect ones, good ones, and bad ones.
A perfect triangle is one which has a 60-degree angle in each corner (picture a beehive).
A good one is when its pivot angle is such that dividing 360 by it gives a whole number like 1, 2, 3, 4, 5..., and not something like 5.22321445524...; that is also the description of a bad triangle. What I mean is that the number of copies needed to form a full circle should always be a whole number.
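A tiny sketch of that classification, in my own formulation: from the pivot angle, compute how many rotated copies are needed to close a full circle, and call the triangle perfect, good, or bad accordingly.

def classify_triangle(pivot_angle_degrees, tolerance=1e-9):
    """'perfect': 60 degrees, six copies tile the plane with no gaps;
    'good': 360 / angle is a whole number, so the copies close a full circle;
    'bad': the copies never close the circle exactly."""
    copies = 360.0 / pivot_angle_degrees
    if abs(pivot_angle_degrees - 60.0) < tolerance:
        return "perfect", 6
    if abs(copies - round(copies)) < tolerance:
        return "good", int(round(copies))
    return "bad", None

print(classify_triangle(60))  # ('perfect', 6)
print(classify_triangle(45))  # ('good', 8) -- the 8 copies mentioned above
print(classify_triangle(50))  # ('bad', None) -- 360 / 50 = 7.2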

a hexagon is formed out of 6 triangles which have 60-degree corners

Anyway: a triangle with 60-degree corners can cover any surface. All the other "good" triangles also cover the whole surface, but with some gaps between themselves.

good and bad triangles

This is also OK, but we should have an option to export this "gap" to a bitmap file, so the user can draw whatever he or she wants there in Photoshop and import it back into our map generator. So basically, if we use any triangle other than the 60-degree one, we need two textures, not one, to form our surface.

More coming soon.

Just a quick idea. It happens a lot that I have to wait for some operation in Max: shorter than rendering for 2 hours, but way too long for normal interaction with the program. Let's say creating a blob mesh, applying MultiRes, subtracting objects via Booleans, subdividing, or something like that. I always have the Task Manager window open and minimized in the tray, so I can see how overloaded my processor is. But I have to sit there and wait, staring at that small indicator, until I can finally move my mouse again. That's so annoying.
Couldn't we have a small beep? Let's say Max starts to use 100% of the CPU for longer than, I don't know, 20 seconds; our beeper program monitors this CPU usage. Then, after Max's CPU usage drops by, say, 10%, the beeper still waits a bit to make sure it's not just a temporary pause in Max's CPU usage, and, let's say, after 5 seconds it beeps. Meanwhile the user has applied some heavy operation and gone to make coffee; they don't need to stare at the CPU indicator at all, all they care about is the coffee, until they hear the beep.
I think it could be helpful, no? Or annoying? Of course it should not be on by default; the user would turn it on only if he or she needs it...
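A minimal sketch of such a beeper, using the psutil library for the CPU readings and the terminal bell as the "beep". The thresholds (90%, 20 s busy, 5 s quiet) are arbitrary example values, and a real version would watch the Max process itself rather than the whole machine.

import time
import psutil

BUSY_THRESHOLD = 90.0   # % CPU load considered "Max is crunching"
MIN_BUSY_SECONDS = 20   # only arm the beeper after this long at high load
QUIET_SECONDS = 5       # the load must stay low this long before beeping

busy_since = None
quiet_since = None
armed = False

while True:
    load = psutil.cpu_percent(interval=1)  # blocks ~1 second, returns 0-100
    if load >= BUSY_THRESHOLD:
        busy_since = busy_since or time.time()
        quiet_since = None                 # it was just a temporary pause
        if time.time() - busy_since >= MIN_BUSY_SECONDS:
            armed = True
    else:
        busy_since = None
        if armed:
            quiet_since = quiet_since or time.time()
            if time.time() - quiet_since >= QUIET_SECONDS:
                print("\a Operation seems to be finished!")  # the beep
                armed = False
                quiet_since = None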

What do you say?

So I had this idea about how to make clouds in 3D.

I think it works quite well, but as with all things in life it has drawbacks. First, the image:

clouds

And here is how I did it.

The foreground clouds are done using a blob mesh rendered in mental ray with depth of field. The background clouds are simple 3ds Max volume fogs.

My goal here was to create clouds that are quick to render and quite easy to set up. I used a standard volume fog in the background, which is computationally cheap, and in the foreground a blob mesh (also known as implicit surfaces) was used to create the clouds, rendered in mental ray with depth of field.

Depth of field adds the cloud-like appearance: a softness that is quite tricky to get with a simple material and falloffs in the opacity channel.

So even though I like the results of this experiment, unfortunately it's not the fastest, I mean in rendering time. But oh well :)

and a screenshot:

clouds1

One other good thing: your mesh topology can be as ugly as it gets, depth of field hides it :)

So it's one way of using implicit surfaces in practice.

—————–

Further reading at veterrain.org (look at the research section).

Lately I was working with some textures. I had a model, and for texture work in ZBrush and Photoshop I had to use Unwrap UVW. I guess that's the term in 3ds Max; I'm not sure what it's called in other software, but I guess the name, like the function, should be similar. So I selected parts of my 3D model, corrected seams, and used pelt mapping in most cases. Actually, it doesn't really matter; the main point is that at the end I had something like this:
So when it comes to ZBrush, you naturally don't see any seams where two texture blocks connect; you work without knowing that they exist (more or less).
But when it comes to Photoshop, you yourself must make sure that all pixels along the edge of one texture block match the corresponding pixels of the other texture block, being either the same or a very similar color, so that when you use the texture back in your 3D program, no one can see where the texture blocks connect.

But how on earth do you do that? How do you manipulate the image so that corresponding edge pixels in two separate texture blocks end up the same? I am sure there must be solutions or workarounds; I would be very happy to hear how you do it.

But I was thinking, what if we had a plugin for Photoshop (GIMP? Corel Photo-Paint? ...?) that would help with that?

The idea is to create two "guidelines", one for each texture block; they could resemble curves in Photoshop. So first, here is an image to better explain what I am talking about:

seems

So let's say I use the brush tool and draw in one texture block, and at some point the brush moves out of that texture block, crossing one seam, and "magically" it appears in the other texture block, crossing the second seam. It would be like working with tiling textures. Or a better example: like that old game "Snake", where you have to eat stuff and as a result you get longer, but you cannot bump into yourself; you can, however, hit the right side of the screen and end up on the left side. The idea is the same: you hit the green seam line on one side, and you appear crossing the red seam in the other texture block.
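A rough sketch of the wrap-around itself, under my own simplifying assumptions: each seam is a straight 2D segment, the two paired segments are given explicitly, and a brush point that crosses seam A is rebuilt at the same relative position along seam B, pushed out by the same overshoot distance. The relative (parametric) position handles seams of different lengths; keeping the overshoot in absolute pixels is a deliberate simplification.

def _sub(a, b): return (a[0] - b[0], a[1] - b[1])
def _add(a, b): return (a[0] + b[0], a[1] + b[1])
def _scale(v, s): return (v[0] * s, v[1] * s)
def _dot(a, b): return a[0] * b[0] + a[1] * b[1]
def _length(v): return _dot(v, v) ** 0.5

def wrap_across_seams(point, seam_a, seam_b):
    """Map a brush point that just crossed seam A to the matching spot past seam B.

    seam_a, seam_b: ((x0, y0), (x1, y1)) segments in image space, oriented so
    that the start of A pairs with the start of B."""
    a0, a1 = seam_a
    b0, b1 = seam_b
    da, db = _sub(a1, a0), _sub(b1, b0)
    # relative position of the point along seam A (0 at its start, 1 at its end)
    t = _dot(_sub(point, a0), da) / _dot(da, da)
    # signed distance of the point from seam A (positive = already crossed over)
    na = (-da[1] / _length(da), da[0] / _length(da))
    overshoot = _dot(_sub(point, a0), na)
    # rebuild the point at the same t along seam B, pushed out by the same amount
    nb = (-db[1] / _length(db), db[0] / _length(db))
    return _add(_add(b0, _scale(db, t)), _scale(nb, overshoot))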

OK, and now let's imagine a bit more complex situation.
Well, maybe it's not really more complex; it just is for the sake of this example.

seems1


 seems2
So we see a situation where the same edge is shared by 3 texture blocks, and where that edge, as represented in each block, has a different length and shape.

free transform
OK, my depiction of the "Free Transform" tool is not very accurate, but I hope it's good enough for understanding. I guess the "Free Transform" tool is quite limited, so you would be able to control it only on one texture block, and the other one (on the next texture block) would be generated automatically... or we write our own free transform tool too :)
Now, in the case of the brush (look at the second image), it might at some points appear in 3 different locations on the screen, on 3 different texture blocks.

Well, anyone need an idea for a computer graphics degree project? This could be it :)

P.S.
If such a tool existed, the author could go further and write another plugin for Max/Maya/XSI to export seams... or?
It would take years to define all the seams in Photoshop for more complex objects.

4.4.2 Hole Determination Based Upon Overall Tree Structure

In this method a more advanced hole-generation approach is used, whereby holes are created at the intersections of big branches. The first potential place is the intersection of the trunk and the first branch. The further the branching is from the trunk, the less reason there is for a highly detailed model; therefore there should be a limit, and intersections between small branches should not generate holes and cracks. Another good position for possible holes could be the beginning of the roots, where a hole is placed in an area in which most of the vertex angles are low, and also in places where the roots meet the trunk's surface.

Now that I have described the tools and two possible methods for hole detection, I will provide the necessary steps for this process. The proposed process of hole generation is based on the 3d studio max 9 workflow and tools, but could easily be implemented in any other 3D modeling application or created as a standalone solution. For the sake of simplicity, random vertices on a polygonal plane will be used instead of vertices with low angular levels in a tree trunk model.

The first step in hole generation is to select the candidate group of vertices which should form a hole. After the selection is made, a chamfer tool is applied; the value describing the distance at which new vertices are created could be half of the average edge length of the selected vertices. The next step is to connect the newly created vertices: the connect tool creates new edges between them. The edge selection is then converted to a polygon selection. See Figure 33, which illustrates these first four steps.

max hole

Fig 33. The first four steps in hole creation. The red dots represent selected vertices, red lines selected edges, and the red shaded area represents selected polygons.

As Figure 33 shows, the resulting polygon selection is not accurate. This is because the conversion of the edge selection to a polygon selection includes all neighboring polygons. The polygon selection has to be shrunk to get the desired result.

max hole

Fig 34. Proposals visualization.

This is shown in Figure 34. The next step is to erase all selected polygons, then select the remaining edge outline and perform an edge extrusion. The last step shown in Figure 34 is a subdivision algorithm applied to the mesh.

The subdivision algorithm not only generates a more detailed mesh structure, but also softens its edges. The steps I have described are rather easy to perform thanks to the ability of 3d studio max to remember the last used selection and to convert vertex selections to edge or polygon selections. This process should be possible in any 3D modeling software, though it might require a different order of steps, or additional steps, to achieve the same result.

Figures 33 and 34 show steps which are very easily performed manually. But the steps are also simple enough that the actions could be automated. Manually creating the hole shown in Figures 33 and 34 took around 5 minutes; with a real trunk model of high mesh density and many holes to generate, this process could take very long.
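As an illustration of how that automation could look, here is a sketch written against an imaginary mesh-editing API; select_vertices, chamfer, connect and the rest are hypothetical wrappers around the operations described above, not actual 3d studio max calls.

def generate_hole(mesh, candidate_vertices):
    """Automate the manual steps from Figures 33 and 34 (hypothetical API)."""
    # Step 1: select the vertex group that should form the hole
    selection = mesh.select_vertices(candidate_vertices)

    # Step 2: chamfer, using half the average edge length of the selection
    chamfer_amount = 0.5 * mesh.average_edge_length(selection)
    new_vertices = mesh.chamfer(selection, amount=chamfer_amount)

    # Step 3: connect the newly created vertices with new edges
    new_edges = mesh.connect(new_vertices)

    # Step 4: convert to a polygon selection and shrink it, because the
    # conversion also picks up all neighboring polygons (Figure 33)
    polygons = mesh.edges_to_polygons(new_edges)
    polygons = mesh.shrink_selection(polygons)

    # Remaining steps: delete the polygons, extrude the open edge outline
    # inwards, and subdivide to smooth the result (Figure 34)
    mesh.delete_polygons(polygons)
    outline = mesh.open_edge_outline()
    mesh.extrude_edges(outline, depth=-chamfer_amount)
    mesh.subdivide(iterations=1)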

My recommendations, if implemented, would decrease the amount of time involved in current tree generation.

————————-

You can visit my web page here


Chapter 4: Proposal for Automatic Tree Trunk Generator

This Chapter outlines a proposal for an automatic tree trunk generator. The first section focuses on the shape of a tree trunk and outlines six steps involved in the process of trunk generation. What follows describes the biological origins of a hole in a tree trunk as well as a proposal for automatic hole generation and possible tools for this approach as part of a trunk generator.

4.1 Overview of Trunk Shape

There are many solutions for increasing a 3-dimensional tree's complexity so that it reflects the complexity we see in nature. For example, a great deal of research has recently been done on bark generation in order to achieve photorealistic results in tree simulation. Yet there is a lack of solutions regarding overall trunk shape. Real trees tend to have not only complex bark, but also complex trunk structures, and in older trees these structures are even more complex and harder to describe. Most tree generators create quite simple trunk shapes. This can be observed in Figure 26, which shows screenshots generated by the application Vue6.

Fig 26. Generated using Vue6 personal learning edition.

As we can see from Figure 27, a natural tree's trunk shape can be much more complex.

Fig 27. Pictures taken in Berlin, 2007.

In order to achieve a more complex tree trunk surface and shape, I propose a new trunk surface simulation strategy. This strategy would greatly increase the speed of the process as well as improve upon current tree generators. The idea follows and expands upon the traditional cylindrical extrusion approach: tree trunks and branches in 3D tree simulators are usually treated as cylinders, normally starting wider and becoming narrower toward the top.

My approach follows the same idea yet generates a more detailed result. The general concept is to combine many cylinders with different properties and shapes into one shape. In my proposal the cylinders are slightly offset from the center of the trunk, and the result is created by grouping the shapes to form the trunk's outer surface. The defining step is the removal of all inner parts of the intersecting cylindrical shapes; this unifies the surface, resulting in a single trunk shape. The next sections describe the overall steps in detail.
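To illustrate the offset-cylinder idea (a sketch under my own assumptions, not the implementation described in the following sections): generate several cylinders jittered off the trunk axis, and treat the trunk cross-section at a given height as the union of their circles; a real implementation would then mesh that union and discard all interior geometry, for example with a boolean union.

import random

def make_offset_cylinders(count, trunk_radius, seed=0):
    """Create 'count' cylinders, each slightly offset from the trunk axis
    and with its own base and top radius (narrowing toward the top)."""
    rng = random.Random(seed)
    cylinders = []
    for _ in range(count):
        offset = (rng.uniform(-0.4, 0.4) * trunk_radius,
                  rng.uniform(-0.4, 0.4) * trunk_radius)
        base_radius = trunk_radius * rng.uniform(0.4, 0.8)
        top_radius = base_radius * rng.uniform(0.2, 0.5)
        cylinders.append((offset, base_radius, top_radius))
    return cylinders

def inside_trunk(cylinders, height_fraction, point):
    """True if a 2D point lies inside the union of the cylinders' circular
    cross-sections at the given height (0 = base, 1 = top). The boundary of
    this union is the trunk outline; everything inside it is the inner part
    to be removed when the final surface is built."""
    x, y = point
    for (ox, oy), base_r, top_r in cylinders:
        r = base_r + (top_r - base_r) * height_fraction
        if (x - ox) ** 2 + (y - oy) ** 2 <= r * r:
            return True
    return False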

You can also visit my portfolio.