So, another minor complaint about 3ds Max's UI. It's a very small thing, and in essence just a tiny detail, but I personally think it could, and should, be improved. It's about the camera position (not exactly the camera, just the viewport) when you are modeling. I guess many of us Max users use the “zoom to selected” viewport tool. I use it a lot, especially when modeling. So the situation is this. You are modeling something and select a couple of vertices which happen to be in the same position. You can't really see anything, because you're looking at the whole model, so you hit zoom to selected, and bam, most likely you see... nothing. Or just some part of your object, which has suddenly lost its shape. It's viewport clipping, I guess. The zoom tool zoomed so close to the selected vertices that Max decided it was too close and just clipped the model. Or that's what I think happened. Look at the pictures, or try it yourself. I'm not sure that is exactly what happens, but it sure looks that way. Why does Max do this?

just an object

(Usually this happens when we model, say, a character, and we work on only one half of the model. The other half we mirror, and later on we want to weld the vertices that are duplicated between the original and the mirrored object. At least that is one situation where this occurs.)

So what do we do? Just scroll back a bit, and bam, the model is back, and we are close to our selected elements, ready to work. But if you are inexperienced, you might go “Ouch, what happened???”, or you just scroll back like you've done a million times. But couldn't we tweak the viewport camera positioning in the zoom to selected tool's code so it doesn't go to such extremes that the model gets clipped? Anyone? So it's just another small complaint. Yes, I know there are a million other things that are more important to improve, and this is just a tiny annoyance. But if it would take just a couple of lines of code, might it be worthwhile to write them down? It should be quite a straightforward thing; it's not like someone has to write Lagoa for Max or anything like that... (yes, Lagoa would be cooler to get... :) ) Am I right? Am I wrong?
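
Since I'm asking for “a couple of lines of code”, here is roughly what I have in mind, as a minimal sketch in plain Python (all names here are hypothetical; this is not the Max SDK): when framing a selection, never move the eye so close that the near clip plane cuts into it.

```python
import math

def frame_selection_distance(selection_radius, near_clip, fov_degrees, margin=1.2):
    """Camera distance that frames the selection without near-plane clipping.

    selection_radius: radius of the selection's bounding sphere
    near_clip: the viewport's near clipping distance
    fov_degrees: vertical field of view of the viewport camera
    margin: extra breathing room around the selection
    """
    # Distance at which the bounding sphere just fits in the view frustum.
    fit = selection_radius * margin / math.tan(math.radians(fov_degrees) / 2.0)
    # Never closer than this, or the near plane slices the selection
    # (this is the clamp "zoom to selected" seems to be missing).
    minimum = 2.0 * near_clip + selection_radius
    return max(fit, minimum)

# Two coincident vertices: radius ~0, so the naive fit distance is ~0,
# but the clamp keeps the camera a sane distance away.
print(frame_selection_distance(0.0001, near_clip=1.0, fov_degrees=45.0))
```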

———————–

1. (in image) The clipped area that is not shown….

Like many of you, I enjoy working with the “extrude along spline” tool. It makes modeling so much faster in some cases, and it has the flexibility to achieve various results quickly. I found it very good when my task is not very constrained, when I work on something I can change and create at will. But some time ago I needed to do something more precise. To be exact, I had to draw some lines on a sphere's surface. I remembered a nice tool we have in the NURBS arsenal called “create CV spline on surface”. Once I had my splines on the surface, I needed to use them as splines for extrusion in my model. And then I had some trouble with the extrude along spline tool. It just doesn't seem to actually follow the given curve. Or, to be more precise, the result has the same shape as the curve, but its orientation is all wrong. So I had to find out how to fix it. If any of you happen to have the same problem, here are some screenshots with the problem and its solution.

object and the spline

Just a picture of an object and a curve.

extrusion


So we select the faces, go to extrude along spline, and choose our spline.

aligning

“Align: Aligns the extrusion with the face normal, which, in most cases, makes it perpendicular to the extruded polygon(s). When turned off (the default), the extrusion is oriented the same as the spline. When aligned to the face normal, the extrusion does not follow the original orientation of the spline; it's reoriented to match the face normals, or averaged normals for contiguous selections. The Rotation option is available only when Align To Face Normal is on.” Description from the 3ds Max help file.

rotation

“Rotation: Sets the rotation of the extrusion. Available only when Align To Face Normal is on. Default=0. Range=-360 to 360.” Again, from the 3ds Max help.

make it first

So here we select the vertex closest to the extruded polygon, so we can make it “first”.

voilà!


So there we have it. Now our extrusion actually follows the given spline. This seems to solve my problem. Hope it helps you too.
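
For the curious, here is the logic of the “make it first” trick as a tiny Python sketch (toy data structures, not MAXScript): the extrusion walks the spline from its first knot, so the end nearest the extruded face has to come first.

```python
def orient_spline_for_extrude(knots, face_center):
    """Return the knots ordered so the end nearest face_center comes first.

    knots: list of (x, y, z) points along the spline
    face_center: (x, y, z) center of the polygon being extruded
    """
    def dist_sq(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    # If the far end is currently first, reverse the knot order; this is
    # what setting the closest vertex as "first" does for us in Max.
    if dist_sq(knots[0], face_center) > dist_sq(knots[-1], face_center):
        return list(reversed(knots))
    return list(knots)

# A spline whose first knot is the far end gets flipped:
spline = [(10.0, 0.0, 0.0), (5.0, 2.0, 0.0), (0.0, 0.0, 0.0)]
print(orient_spline_for_extrude(spline, (0.0, 0.0, 1.0))[0])  # (0.0, 0.0, 0.0)
```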


Here it is, a title which, as usual, is quite confusing. First of all, we will not discuss the mathematical nature of various random number generators here (which is also an interesting topic). Instead we will focus on the aesthetics of randomness. Random... what? Well, anything random, I guess.

As I see it, there are two ways of making things aesthetically more interesting*. One would be depicting more. Say we have a plain wall; we would want to see the 5 layers of paint which were applied, and in some parts the paint is cracked and we see bricks, and we also see a spider web, and so on. This is one approach, and it is the best way to achieve visual detail and make our frame more interesting. But it takes time to construct the wall, and the spiders, and the dirt, and so on. Another approach is to increase detail by adding random elements, which by themselves do not depict anything in particular, but in a certain context might be perceived as intended detail.

plain white wall

rich white wall

Again, noise is just noise. But if we place gray noise above a depicted fireplace, the viewer will think it's smoke. If we add the same gray pattern of noise to a plain wall, the viewer might interpret it as dirt on the wall. This way is much faster. And this is what we will talk about.

First of all, let's look at a simple Perlin noise pattern, and at multiple scaled Perlin noise patterns (image A).

perlin noise, taken from wikipedia

And in picture “B” we see variations of Perlin noise (here I am not certain it is actually Perlin noise, probably not; corrections are welcome) which are intended to depict something more precise than the abstract noise itself.

procedural maps “Cellular, Dent, Perlin Marble, Marble”, 3ds Max, image B

So a simple comparison could sound like this.
Perlin noise: a boring, plain noise which is rather abstract and doesn't depict anything in particular. Which is also a good thing in an element used to construct more sophisticated noise-like effects.
Now, there is nothing better than multiple scaled Perlin noises layered together. The result has elements of different sizes (see image A) and is almost a god in the CG world (no?). And then there are variations of Perlin noise, which can be modified to depict wood and other interesting patterns. But what is the difference between Perlin noise and multiple scaled Perlin noises? Well, I would say it's all the same plain noise, but it moves toward depiction, toward sophistication, away from randomness.
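
To make the layering concrete, here is a short sketch of “multiple scaled noises” (often called a fractal sum, or fBm). I substitute a cheap hash-based value noise for true Perlin noise so the snippet stays self-contained; the summing of scaled copies is the point.

```python
import math

def value_noise(x, y):
    """Smooth pseudo-random value in [0, 1] at point (x, y)."""
    def hash2(ix, iy):
        # Cheap lattice hash; stands in for Perlin's gradient table.
        h = math.sin(ix * 127.1 + iy * 311.7) * 43758.5453
        return h - math.floor(h)
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # Smoothstep weights, then bilinear blend of the four lattice corners.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = hash2(ix, iy) * (1 - sx) + hash2(ix + 1, iy) * sx
    bottom = hash2(ix, iy + 1) * (1 - sx) + hash2(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bottom * sy

def fbm(x, y, octaves=4):
    """Sum octaves of noise: each one twice the frequency, half the amplitude."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalize back to [0, 1]

print(fbm(1.3, 2.7))  # one sample; image A is this evaluated per pixel
```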

So the desired noise for image enhancement should look like intended detail rather than just random detail. What I want to say is: it's not the detail itself that is valuable, it's detail that has something behind it. And it's quite tricky to make automatic algorithms that would add not just detail such as Perlin noise, but noise of a sort that somehow depicts something, without the necessity of actually taking care of what it depicts. For that, I think we need to look into procedural maps such as “Planet” in 3ds Max. It is gone in new Max versions... (why?). Also, in some older posts I suggested building materials, or procedural maps, that would “feel” the geometry they are applied to. The last link especially talks about extra detail on the edges of a mesh.

One example from the real world. Some days ago I was walking down a street and saw a window. It was dirty and covered in paint. But the paint had run not downwards, but upwards. Now, the first thought would be that this is an unneeded detail which, if it were a CG shot and not real life, would make no sense and would complicate the reading of the image. But then I found a possible reason for the paint running “upside down”: most likely the glass was already dirty, and the person who made the window used the dirty glass without orienting it in any particular way with respect to the paint already on it. What I am trying to say is that the observer will always try to find explanations, and be they correct or incorrect, one can always find a reason behind something. And here is the question: if it had been a CG shot, would it be a bad shot, because noticing this makes the viewer uncomfortable and adds an unnecessary level of thinking? Or is it a good shot, because rather than having a clean, boring, “plain” glass we have some visual detail...

So I guess the point of this post is to note that we do want detail, but not just any detail; we want it to be “in the context”. It has to depict something, somehow. And if it doesn't actually depict anything in particular, we should strive for the viewer to find its meaning on their own. It's a tricky task, I guess.

—————————————————————————————————————————————————————–

*Here I have to make a note and remind you that I am by no means implying that more complicated is better, that more detailed is better, or that visually overwhelming is desirable. But in some cases it might be so (or some of these statements might be, at least). Just imagine a white wall, shot in close-up, so that all you see is the wall. It covers the whole screen. Now say the wall is white. So we can imagine a situation where the whole image consists only of pixels with the RGB value 255,255,255. If you did not know beforehand that it is a wall, wouldn't you say it's just a blank white screen? So in some cases more detail is better. And the whole of today's post is dedicated only to this situation, where more detail is better.

——————————————————————–

I remembered this blog article of mine because I found this cool interview with Ken Perlin himself! One nice idea he has: if something is too complicated, add another dimension to it. What does that mean? Listen to his interview at fxpodcast here:

link to fxpodcast with Ken Perlin about noise, aesthetics, and Perlin noise too!

another article here: link

As usual, it has been quite a long time since I posted something here. Unfortunately, even though I have some drafts, I just can't sit down and finish them. Therefore I decided to go for an easy post. It's like writing a software review or a movie review (meaning a reflection on something rather than a creation; and here I am simplifying things a bit, there are surely complicated and “heavy” movie reviews as well as shallow “creation” stories). Anyway, this is going to be a rather shallow story. But oh well...

As all of us weak human beings do, from time to time we do stuff which we are not supposed to be doing. Sometimes it's something very wrong, but more often than not it's just something we are not supposed to be doing because we are supposed to be doing something else. And most of the time we are watching movies or TV, reading a stupid book, or... I don't know. There are hundreds of things to do in order to avoid the one thing you actually must do. So as a member of the human race, I too do stupid things. But luckily for us (humans), from time to time, on rare occasions, these stupid things lead to something. And be it not something great, or of huge value, it's still sort of positive. Well, the judgment here can vary. Still. As it happens, during one of these procrastination sessions I happened to be inspired by one of my own ink drawings, and tried to do something of that sort in 3D. “Ah, finally,” you will say, “he talks about 3D, it's about time.” So here it is, a small, rather dumb project. Who knows, when ‘the stars are right’ :) , one day I might actually make something good out of this. But for now we have:

A. A hand drawing of a... I don't know; plankton, a bug, a molecule, a microscopic being, an alien, a virus of sorts?

ink version of a bug

B. We have a wireframe of its recreation in 3D. Then we use some materials with falloff in the opacity channel, add some depth of field in the mental ray rendering engine, invert the image, and here is what we have:

wireframe of a bug

C. The final output:

final render of a bug

One proposal would be very simple. It would be to enhance the existing UVW Map modifier by adding the ability to apply UV mapping not to a single mesh as a whole, but to each element of the mesh. I know it's possible to achieve the same result by selecting each element in Edit Mesh or Edit Poly mode and detaching those elements from the single mesh, ending up with lots of objects to which you can individually apply a UVW Map modifier. But in some cases, when you have a complex mesh which consists of lots of elements, that could be very time-consuming work. These complex meshes are often the result of importing geometry from other 3D applications. So the idea is to apply, say, a spherical or box or whatever type of UV projection to each separate mesh element, without the need to divide the mesh and detach parts of it.

Now, if we have a complex model, say a tree, or, I don't know, anything really, and we are not planning to create perfect UVs by pelt mapping or by selecting separate polygons and applying separate planar maps to parts of the mesh, we would want some simple method to get UV coordinates fast and semi-accurately. In the current situation we can simply apply a UVW Map modifier to the whole object and hope that a box or cylindrical projection will do fine. And in some cases it does: say, the model is far away from the camera and we have no time to produce good UV coordinates. But imagine we had a middle-quality solution, something between creating precise UV maps for separate polygon selections (or having to use pelt mapping) and simply applying, say, a cylindrical projection to the whole mesh.

I propose adding a “per element” button to the UVW Map modifier, which would apply the selected projection method not to the whole object but to each separate mesh element. We would also have the ability to manipulate (scale, rotate, move) the UV projection of each single element within one interface, without the need to apply tens or hundreds of Mesh Select and UVW Map modifiers and getting lost in a huge modifier stack.
Let's look at the images below; a rough code sketch follows them.
1. Cylindrical map projection applied to a single mesh.

2. Lots of UVW Map and Polygon Select modifiers to achieve accurate projections for the whole model.

3. Proposed projection-per-element mode.

4. The same, but with a floating toolbox for selecting each element and applying different (cylindrical, box, etc.) projection modes to it.
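
And here is a rough sketch of what a “per element” mode could do under the hood (plain Python over a toy mesh representation, not the Max SDK): find the connected elements once, then run the same projection once per element instead of once per object.

```python
def connected_elements(num_verts, faces):
    """Group face indices into connected mesh elements (union-find)."""
    parent = list(range(num_verts))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for face in faces:
        for v in face[1:]:
            parent[find(v)] = find(face[0])
    groups = {}
    for i, face in enumerate(faces):
        groups.setdefault(find(face[0]), []).append(i)
    return list(groups.values())

def planar_uvs_per_element(verts, faces):
    """Give every element its own 0..1 planar projection (XY plane here)."""
    uvs = [None] * len(verts)
    for element in connected_elements(len(verts), faces):
        vids = {v for f in element for v in faces[f]}
        xs = [verts[v][0] for v in vids]
        ys = [verts[v][1] for v in vids]
        w = (max(xs) - min(xs)) or 1.0
        h = (max(ys) - min(ys)) or 1.0
        for v in vids:
            uvs[v] = ((verts[v][0] - min(xs)) / w, (verts[v][1] - min(ys)) / h)
    return uvs

# Two separate quads in one mesh each get their own full 0..1 UV space.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (5, 5, 0), (9, 5, 0), (9, 7, 0), (5, 7, 0)]
faces = [(0, 1, 2, 3), (4, 5, 6, 7)]
print(planar_uvs_per_element(verts, faces)[5])  # (1.0, 0.0)
```
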
Some additional thoughts. Why do we think only about elements of a mesh? Can we use material IDs for the same purpose? Can we apply separate UVW projection methods per material ID as well? That could come in handy too...?

mapIDs for uv coordinate sets

The next question would be: can we use both material IDs and mesh elements? Or is there an easy and fast (one-button) way to convert separate mesh elements to material IDs? Or can we easily convert material IDs to mesh elements? And if we were to write such a tool, where would we place it? Should it be part of the Edit Poly modifier? Edit Mesh? Or the UVW Map modifier?
Any ideas?

A strange title, I know. But it tells you what I was thinking about, and what tool I would like to see in 3ds Max.
OK. Let's imagine a situation where one has a 3D model with UV mapping. Say we wanna use a displacement map. And say we have no time to use ZBrush, or even to use pelt mapping or Unwrap UVW to generate nice UVs for drawing a displacement map in Photoshop. Say our needs are basic, and we are OK with using a noise, smoke, or whatever map for displacement. Now also let's presume our model has big as well as small elements (imagine a tree: big trunk, small branches; or a human: big chest, small fingers; etc.).
So our noise map generates black and white areas, one of which will be extruded outwards (or inwards). Say the average edge length in the “big” elements of the model is 10 units, the average edge length in a “small” element is 0.1 units, and our maximum displacement value is 1 unit. The result: a nice extrusion in the big elements of the model, and total 3$%$#% (pardon the expression :) ) in the small elements of the mesh. Why? The displacement is a tenth of an edge length in the big parts, but ten times the edge length in the small parts. See the example below.

displacement
It's quite hard to see, but the green object shows the geometry with all its edges. Where it says “huge” distortion, I mean it is huge compared to the width of that part of the object. And where it says “small” distortion, again, it is small compared to the general size of that part or element of the object.
So as you can see, the teapot's main body is big and the displacement amounts to only a small percentage of its overall size, while on the thin lines the displacement well exceeds the width of the line.
So to reach a situation where big mesh components have bigger displacement values and smaller elements of the mesh have smaller ones, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (the extrusion height, to be more precise; the map, image, noise or whatever remains the same).
Have a look at the images:

displacement II

So what about a situation where we have big, small, and medium-size elements? Or where the object's parts gradually get smaller or bigger? Say, the trunk and branches of a tree. If we had nice UVs we could draw the displacement map in Photoshop. The big parts would get a black and white image whose color extremes run from RGB 255,255,255 to 0,0,0, and the smallest branches would get color extremes of something like RGB 188,188,188 to 170,170,170. To put it in other words, we would have a contrasty image for the big parts of the mesh, and a very gray image, as if seen through fog, for the small parts of the model.
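
The remap itself is trivial once a relative size factor per part is known. A minimal sketch, in plain Python; note that compressing symmetrically around mid-gray is my simplification of the 188/170 example above:

```python
def remap_displacement(value, size_factor, mid=127.5):
    """Compress an 8-bit displacement value toward mid-gray.

    value: original map value, 0..255
    size_factor: 1.0 for the biggest parts, near 0.0 for the smallest
    """
    return mid + (value - mid) * size_factor

# Big parts keep full contrast; small parts get the "foggy" version.
print(remap_displacement(255, 1.0))   # 255.0
print(remap_displacement(255, 0.47))  # ~187, roughly the 188 above
print(remap_displacement(0, 0.47))    # ~68 (symmetric, unlike the 170 above)
```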

But we described a situation where we don't have nice UVs and have to use some procedural map for displacement. So, no Photoshop. What to do?
We should have a tool which automatically changes the contrast levels of the map used for displacement according to the size of the mesh elements.
Let's think how on earth to do that. How can our software know what is a big and what is a small element? OK, it's easy if we have mesh elements, or material IDs: then we simply compare their sizes and we can work out contrast levels for the displacement values. But what if we have one single object? Let's see this image:

object

So: it is one single mesh, no material IDs specified, just one single element. But as a human you can easily determine that the left side is big and the right side of the object is smaller. But how can our software know that?
This is the main question of this blog entry. I do not have an answer to it, but I do have some thoughts on the matter. Let's see.
First of all, we have the bounding box of the whole object. That is the space we will be working in.

So, thought one.
What we do have is the position of each vertex in space. Now let's imagine we take each vertex and cast some number of rays in random directions from it. The idea is to measure the distance each ray travels before it hits another part of the mesh; in other words, we want to see where the ray intersects the geometry again. Some rays will go off to infinity, and we simply eliminate those: a ray that passes the boundary of the object's bounding box is of no interest to us.
So say we cast 10 rays from one vertex; some of them will “get lost” and some will return distance values. Imagine our object is a sphere. We take one vertex and cast rays: some go inwards, inside the sphere, and eventually hit its other side, while others go outwards, leave the object's bounding box, and are forgotten. The rays that hit give us the distance to the other side of the sphere. Here we could also record the normal direction of each intersected polygon; by comparing the angle between the ray and the polygon's normal we could probably determine whether we hit the “inside” or the “outside” of the object.
Let's have a look at the image:

objects size determination

So what can we do with all these numbers? Can we sum all the ray values per vertex into one number? Can we assume that the bigger the number, the bigger the chance that this particular vertex belongs to a “big” element of the object? Because, say, with our sphere we would find that almost all vertices have more or less the same values, so they all belong to one big object, which makes sense.
And if, close by, we have a small element of the mesh, the values we would get from its rays would be lower, so we could assume that it is a smaller element. Now, simply adding up the ray distances would be wrong; we should take the average, I guess... or does it make any difference? Also, values where, judging by the face normal direction, we “assume” the ray has hit a polygon facing us should be treated with caution, while values where we “assume” the ray has hit a polygon facing away could be more acceptable, since we can guess it is part of the same object, that it is the “other side” of that object. Now, this part should be investigated more.
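
Here is a toy version of this thought in plain Python: fire random rays from a point and average the distances of the rays that hit something. I use an analytic sphere as the stand-in “mesh” so the snippet is self-contained; a real tool would raycast against the triangles instead.

```python
import math
import random

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit ray to the sphere's far surface, or None."""
    ox = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, ox))
    c = sum(o * o for o in ox) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b + math.sqrt(disc)  # far root: we start on the surface itself
    return t if t > 1e-6 else None  # outward rays "get lost"

def estimate_thickness(point, center, radius, samples=10):
    """Average hit distance of random rays cast from a surface point."""
    hits = []
    for _ in range(samples):
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in d))
        d = [x / n for x in d]  # uniform random direction
        t = ray_sphere_hit(point, d, center, radius)
        if t is not None:
            hits.append(t)
    return sum(hits) / len(hits) if hits else 0.0

# A vertex on a radius-10 sphere reports a "size" on the order of 10;
# the same test on a radius-0.5 sphere would report a much smaller value.
print(estimate_thickness((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 10.0))
```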

face that the ray hits: is it the same object or another?

situation C

Oh, some more thoughts on these rays. How do we know where one object starts and another begins? I mean, imagine we have two spheres in one single mesh. How do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits something, and we determine that the face it has just hit is facing the vertex that emitted the ray. The first thought would be: it is part of another object, so we should ignore all distances this ray gave us. But it's still possible that that face belongs to the same object, and that the distance data is relevant for determining the object's size.
So the first way would be to check all neighboring vertices and see if the face our ray has just hit uses one of them. No? Then check the neighbors of the neighbors, and so on and on, until we have checked all the polys or vertices of the whole mesh. If it's not in the list, we are sure it does not connect to our vertex; it is indeed a separate entity, and we should ignore it. Or, say, it is in the list but it is 1000 faces away... so we assume it's “too far away” to take into account.
I think this method has two weaknesses. First, I assume it's too time-consuming to check all the vertices. Second, if the face does connect to the vertex that emitted the ray, we need to decide how far is far enough to be “too far”. And that depends on the mesh topology and many other things. How can one decide on some value that is too much? That's why I think we should not go this way.
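
For completeness, here is roughly what that rejected check would look like (plain Python, with a made-up adjacency table): a breadth-first walk from the emitting vertex that gives up after a chosen number of hops, which is exactly the arbitrary “too far” threshold I don't like.

```python
from collections import deque

def connected_within(adjacency, start, target, max_steps):
    """True if `target` is reachable from `start` in at most max_steps edges."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        v, depth = queue.popleft()
        if v == target:
            return True
        if depth < max_steps:
            for n in adjacency[v]:
                if n not in seen:
                    seen.add(n)
                    queue.append((n, depth + 1))
    return False

adjacency = {0: [1], 1: [0, 2], 2: [1], 3: []}  # vertex 3 is a separate piece
print(connected_within(adjacency, 0, 2, max_steps=5))  # True
print(connected_within(adjacency, 0, 3, max_steps=5))  # False
```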

Anyway, at the end of the day we would have all (or, if we optimize our solution, some) vertices numbered or ranked by the “size” of the object part they belong to. It could be just a numeric value, or it could be a color value. Maybe the user could manually correct the errors produced by our generator by simply painting on the vertices.

After that, our new displacement tool would know how to adjust the original displacement map to fit the size of the 3D model's parts.

Ahh... I have confused myself now. Anyone have any thoughts on the matter? Gee... I wonder if it was even possible to understand anything from what I wrote... :)

Somehow I can't finish my tiling/rotating/mirroring article. So while I am struggling with part 3 of the tiling posts, here is something else...

Also, one very small remark. I noticed it a long time ago; it's such a small detail that it's not worth writing a post about. But it has happened so many times. It's my love and my curse. Well, as usual, I am talking about 3ds Max's user interface. It's nothing less than the right mouse button. It's a blessing and a curse.

Imagine you selected something, and now you are dragging that stuff across the screen to match it to something else... You are so concentrated, sweat running down your back, you can't blink, it must be so precise... and you push your mouse so hard. And then it happens: you accidentally click the right mouse button. The last thing you did is gone. Damn it!

On the other hand, sometimes you see that you are selecting stuff wrong. You click the right mouse button and the last operation is gone. So nice. Not like in AutoCAD (right?).
OK, it shouldn't have been a post, but whatever :)

OK, since it's such an uninformative post, we'll add some more. Some complaining, of course :)

What do I find annoying about Max? Its Mirror and Clone tools. What's the problem, you ask? Say I have to clone (and move) one element many times. I want to see it from very close to move it exactly the right amount. So I enlarge it to fill the whole viewport, hit Shift, and move it exactly the right distance. And then I need to specify the number of copies. But it often happens that for that I need to see the whole scene, not just one object, to count how many copies I need. OK, I am not sure that's clear, so to put it in other words: once the Clone or Mirror dialog box is open, we are no longer able to navigate the viewport. And that sucks. Look at the Extrude, Bevel, and other tools under Edit Poly: with those we can have the dialog box open and still navigate the viewport. That saves time, and quite a lot, I'd say.

So, something to consider for the Max UI developers.
