A strange title, I know. But it tells what I was thinking about, and what tool I would like to see in 3ds Max.
OK. Let's imagine a situation where one has a 3D model with UV mapping.
Say we want to use a displacement map, and we have no time to use ZBrush, or even pelt mapping or Unwrap UVW, to generate nice UVs for painting a displacement map in Photoshop.
Say our needs are basic, and we are OK with using a Noise, Smoke, or whatever procedural map for displacement.
Now let's also presume our model has big as well as small elements (imagine a tree: big trunk, small branches; or a human: big chest, small fingers; etc.).
So our noise map generates black and white areas, and one of them will be extruded outwards (or inwards). Say the average edge length in the "big" elements of the model is 10 units, the average edge length in a "small" element is 0.1 units, and our maximum displacement value is 1 unit. The result is: a nice extrusion on the big elements of the model, and total 3$%$#% (pardon the expression :) on the small elements of the mesh. On the big parts the displacement is only a tenth of the local edge length, while on the small parts it is ten times the local edge length. Why? See the example below.

displacement
It's quite hard to see, but the green object shows the geometry with all its edges. Where it says "huge" distortion, I mean huge compared to the width of that part of the object. And where it says "small" distortion, again it is small compared to the general size of that part or element of the object.
So as you can see, the teapot's main body is big and the displacement is only a small percentage of its general size, while on the thin lines the displacement well exceeds their width.
So to reach a situation where the big components of the mesh get bigger displacement values and the small elements get smaller ones, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (or the extrusion height, to be more precise); the map itself, image, noise or whatever, remains the same.
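A rough sketch of that workaround, reusing the 10-unit / 0.1-unit edge lengths from above. BASE_AMOUNT and REFERENCE_SIZE are names I made up for this sketch, not 3ds Max settings:

```python
BASE_AMOUNT = 1.0        # displacement in scene units for the biggest part
REFERENCE_SIZE = 10.0    # average edge length of the biggest part

def displacement_amount(avg_edge_length):
    # Scale each material's displacement by the rough size of the part
    # it is assigned to, relative to the biggest part.
    return BASE_AMOUNT * (avg_edge_length / REFERENCE_SIZE)

print(displacement_amount(10.0))  # big body:  1.0 unit
print(displacement_amount(0.1))   # thin part: 0.01 units
```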
Have a look at the images:

displacement II

So what about a situation where we have big, small, and medium-sized elements? Or where the object's parts gradually get smaller or bigger?
Say the trunk and branches of a tree. If we had nice UVs we could paint the displacement map in Photoshop. The big parts would get a black-and-white image whose color extremes go from RGB 255,255,255 to 0,0,0, and the smallest branches would get color extremes of something like RGB 188,188,188 to 170,170,170. To put it in other words, we would have a contrasty image for the big parts of the mesh, and a very gray, as-if-seen-through-a-fog image for the small parts of the model.
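Just to pin the idea down, here is a minimal sketch of that contrast squeeze in Python. remap_displacement is a name I made up, and for simplicity it centres the gray band on mid-gray rather than on the ~179 midpoint of the 170..188 example:

```python
def remap_displacement(value, size_factor):
    # value: grayscale displacement in 0..1; size_factor: 1.0 for the
    # biggest parts (full black..white contrast), shrinking toward 0.0
    # for the smallest (a narrow gray band, the "fog" look).
    return 0.5 + (value - 0.5) * size_factor

print(remap_displacement(0.0, 1.0), remap_displacement(1.0, 1.0))    # 0.0 1.0
print(remap_displacement(0.0, 0.07), remap_displacement(1.0, 0.07))  # 0.465 0.535
```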

But we described a situation where we don't have nice UVs and have to use some procedural map for displacement. So no Photoshop. What to do?
We should have a tool which would automatically change the contrast levels of the map used for displacement according to the size of the mesh elements.
Let's think: how on earth do we do that? How can our software know which element is big and which is small? OK, it's easy if we have separate mesh elements, or material IDs. Then we simply compare their sizes and can work out the contrast levels for the displacement values. But what if we have one object? Let's see this image:

object

So. It is one single mesh, no material IDs specified, just one single element. But as a human you can easily determine that the left side is big and the right side of the object is smaller. But how can our software know that?
So this is the main question of this blog entry. I do not have an answer to it, but I do have some thoughts. Let's see...
First of all, we have the bounding box of the whole object. That is the space we will be working in.

So thought one.
What we do have is the position of each vertex in space. Now let's imagine we take each vertex and cast some number of rays in random directions from it. The idea is to calculate the distance each of these rays travels before it hits another part of the mesh. Well... how to put it... we want to see where the ray intersects the geometry again. Some rays will go to infinity, but we just eliminate those: a ray that passes the boundary of the object's bounding box is of no interest to us. So say we cast 10 rays from one vertex. Some of them will "get lost" and some will return distance values. Imagine our object is a sphere. We take one vertex and cast rays; some will go inwards, inside the sphere, and will eventually hit its other side, while others will go outwards, leave the object's bounding box, and we forget those. The rest give us a distance to the other side of the sphere. Here we could also collect the normal direction of the polygon that was intersected. By calculating the angle between the ray and the polygon's normal we could probably determine whether it hit the "inside" or the "outside" of the object.
Let's have a look at the image:

object size determination
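To make "thought one" concrete, here is a rough sketch in plain Python, with no 3ds Max API at all: the mesh is just a list of triangles, and every name here (random_direction, ray_triangle, probe_vertex) is mine, not an existing function. A ray that hits nothing has left the bounding box and is dropped; for each hit we also keep the sign of the ray-against-normal angle, to tell front-facing hits from back-facing ones:

```python
import math
import random

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                         a[2]*b[0]-a[0]*b[2],
                         a[0]*b[1]-a[1]*b[0])

def random_direction():
    # Uniform random direction on the unit sphere.
    z = random.uniform(-1.0, 1.0)
    t = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(t), r * math.sin(t), z)

def ray_triangle(orig, d, tri, t_min=1e-4):
    # Moller-Trumbore: distance along d to triangle tri, or None on a miss.
    # t_min also skips the faces touching the start vertex itself (t ~ 0).
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < 1e-12:
        return None                      # ray parallel to the triangle plane
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > t_min else None

def probe_vertex(vertex, triangles, n_rays=10):
    # Cast n_rays random rays from one vertex. Each nearest hit is kept as
    # (distance, facing): facing > 0 means we hit the back of the polygon
    # (likely the "other side" of the same object), facing < 0 means the
    # polygon faces the ray, which is the suspicious case discussed below.
    hits = []
    for _ in range(n_rays):
        d = random_direction()
        best = None
        for tri in triangles:
            t = ray_triangle(vertex, d, tri)
            if t is not None and (best is None or t < best[0]):
                n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
                best = (t, dot(d, n))
        if best is not None:             # no hit at all = the ray "got lost"
            hits.append(best)
    return hits
```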

So what can we do with all these numbers? Can we add up the values of all rays per vertex into one number? Can we assume that the bigger the number we get, the bigger the chance that this particular vertex belongs to a "big" element of the object? Because with our sphere we would find that almost all vertices get more or less the same values, so they all belong to one big object, which makes sense. And if, close by, we have a small element of the mesh, the values we would get from its rays would be lower, therefore we could assume that it is a smaller element. Now, simply adding up the ray distances would be wrong; we should take the average, I guess... or does it make any difference? Also, values where, judging by the face normal's direction, we "assume" the ray has hit a facing polygon should be treated with caution. And values where we "assume" the ray has hit a polygon that faces away could be more acceptable, since we can guess that it is part of the same object, that it is the "other side" of that object. Now this part should be investigated more.
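A sketch of the scoring itself, using the (distance, facing) pairs from the probe above. My own guess on the sum-versus-average question: averaging keeps the score independent of how many rays got lost, while a plain sum would punish vertices whose rays mostly escaped. Front-facing hits are not thrown away here, just down-weighted; front_weight is an arbitrary knob of mine:

```python
def size_score(hits, front_weight=0.25):
    # Weighted average ray distance for one vertex: back-facing hits
    # (facing > 0, "other side" of the object) count fully, while the
    # suspicious front-facing hits count with reduced weight.
    if not hits:
        return 0.0                       # every ray got lost
    total = weight = 0.0
    for dist, facing in hits:
        w = 1.0 if facing > 0.0 else front_weight
        total += w * dist
        weight += w
    return total / weight
```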

face that ray hits, is it same object or another?

situationC

Oh, some more thoughts on these rays. How do we know where one object ends and another begins? I mean, imagine we have two spheres in one single mesh. How do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits something, and we determine that the face the ray has just hit is facing the vertex that emitted the ray. The first thought would be: it is part of another object, and we should ignore all distances this ray gave us. But it's still possible that the face belongs to the same object, and the distance data is relevant for determining the object's size.
So the first way would be to check all the neighboring vertices and see if the face our ray has just hit belongs to one of them. No? Then check the neighbors of the neighbors, and so on and on, until we have checked all the polys or vertices of the whole mesh. If it's not in the list, we are sure it does not connect to our vertex, it is indeed a separate entity, and we should ignore it. Or say it is in the list, but it is 1000 faces away... then we assume it's "too far away" to take into account (a sketch of this walk follows below).
I think this method has two weaknesses. First, I assume it's too time-consuming to check all the vertices. Second, if the face does connect to the vertex that emitted the ray, we need to decide how far is far enough to be "too far". And that depends on the mesh topology and many other things. How can one decide on some value that is too much? That's why I think we should not go this way.
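For completeness, a sketch of that neighbor walk as a plain breadth-first search over a hypothetical vertex adjacency map; max_hops is exactly the arbitrary "how far is too far" cutoff that makes me distrust the method:

```python
from collections import deque

def connected_within(adjacency, start_vertex, hit_vertex, max_hops=1000):
    # Walk neighbors, then neighbors of neighbors, up to max_hops rings.
    # adjacency: dict mapping a vertex id to the set of its neighbor ids.
    seen = {start_vertex}
    queue = deque([(start_vertex, 0)])
    while queue:
        v, hops = queue.popleft()
        if v == hit_vertex:
            return True                  # same object, within the cutoff
        if hops < max_hops:
            for n in adjacency.get(v, ()):
                if n not in seen:
                    seen.add(n)
                    queue.append((n, hops + 1))
    return False                         # separate entity (or "too far away")
```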

Anyway, at the end of the day we would have all (or some, if we optimize our solution) vertices numbered or ranked by the "size" of the object's part they belong to. It could be just a numeric value, or it could be a color value. Maybe the user could even manually correct the errors produced by our generator by simply painting on the vertices.
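One last little sketch, again with made-up names: normalizing the raw scores to 0..1 so they could be stored as a grayscale vertex color and painted over by hand:

```python
def normalise_scores(scores):
    # Map raw per-vertex size scores to 0..1 for use as a paintable
    # grayscale vertex color (1.0 = biggest part, 0.0 = smallest).
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)       # everything is the same "size"
    return [(s - lo) / (hi - lo) for s in scores]
```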

After that, our new displacement map would know how to edit the original displacement map to fit the size of the 3D model's parts, with something like the contrast remap sketched earlier.

Ahh... I am confused myself now. Anyone have any thoughts on the matter? Gee... I wonder if it was possible to understand anything from what I wrote... :)