Here it is, a title which, as usual, is quite confusing. First of all, here we will not discuss the mathematical nature of various random number generators (which is also an interesting topic). Instead we will focus on the aesthetics of randomness. Random… what? Well, anything random, I guess.

So, as I see it, there are two ways of making things aesthetically more interesting*. One would be depicting more. Say we have a plain wall; we would want to see the 5 layers of paint which were applied, and in some parts the paint is cracked and we see bricks, and also a spider web and so on. This is one approach, and it is the best way to achieve visual detail and make our frame more interesting. But it takes time to construct the wall, the spiders, the dirt and so on. Another approach is to increase detail by adding random elements, which by themselves do not depict anything in particular, but in a certain context might be perceived as intended detail.

plain white wall

rich white wall

Again, noise is just noise. But if we place gray noise above a depicted fireplace, the viewer will think it is smoke. If we add the same gray pattern of noise to a plain wall, the viewer might interpret it as dirt on the wall. This way is much faster. And this is what we will talk about.

First of all, let's see a simple Perlin noise pattern, and multiple scaled Perlin noise patterns (image A).

Perlin noise, taken from Wikipedia

And in picture “B” we see variations of Perlin noise (here I am not certain if it is actually Perlin noise, probably not, corrections are welcome) which intend to depict something more precise than the abstract noise itself.

procedural maps “Cellular, Dent, Perlin Marble, Marble”, 3ds Max, image B

So a simple comparison could sound like this.
Perlin noise: a boring, plain noise which is rather abstract and doesn't depict anything in particular. Which is also a good thing, as an element to construct more sophisticated noise-like effects.
Now there is nothing better than multiple scaled Perlin noises. It has elements of different sizes (see image A) and is almost a god in the CG world (no?). And then there are variations of Perlin noise, which can be modified to depict wood and other interesting patterns. But what is the difference between Perlin noise and multiple scaled Perlin noises? Well, I would say that it's all the same plain noise, but it moves towards depiction, towards sophistication, towards non-randomness.
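To make "multiple scaled" a bit more concrete, here is a rough Python sketch of the usual construction: several copies of a noise function summed at increasing frequency and decreasing amplitude (the classic octave / fractal-noise trick). The noise primitive below is simple value noise standing in for real Perlin noise, just for illustration.

import math
import random

def value_noise_2d(x, y, seed=0):
    """A stand-in noise primitive: smooth interpolation of pseudo-random
    values on an integer lattice (value noise, not true Perlin noise)."""
    def lattice(ix, iy):
        random.seed((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return random.random()
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    # smoothstep weights for soft interpolation
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(x0, y0) * (1 - sx) + lattice(x0 + 1, y0) * sx
    bottom = lattice(x0, y0 + 1) * (1 - sx) + lattice(x0 + 1, y0 + 1) * sx
    return top * (1 - sy) + bottom * sy

def multi_scale_noise(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Sum several scaled copies: each octave is finer and weaker than the last."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for i in range(octaves):
        total += amplitude * value_noise_2d(x * frequency, y * frequency, seed=i)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm  # normalised back to 0..1

The single-octave call gives the flat, boring pattern; the multi-octave sum gives the "different size elements" look from image A.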

So the desired noise for image enhancement should look like intended detail rather than just random detail. What I want to say is: it's not the detail itself that is valuable, it's a detail that has something behind it. And it's quite tricky to make automatic algorithms that would not only add detail such as Perlin noise, but a noise of a sort that somehow depicts something, without the necessity to actually take care of what it depicts. And for that I think we need to look into such procedural maps as "Planet" in 3ds Max. It is gone in newer Max versions.. (why?). Also, in some older posts I suggested building materials, or procedural maps, that would "feel" the geometry they are applied upon. Especially the last link talks about extra detail on the edges of a mesh.

One example from the real world. Some days ago I was walking down a street and saw a window. It was dirty and covered in paint. But the paint was not running down; it ran upwards. Now, the first thought would be that it is an unneeded detail which, if it were a CG shot and not real life, would make no sense and would complicate the understanding of an image or a shot. And then I found a possible reason for the paint running "upside down": most likely the glass was already dirty, and the person who made the window used the dirty glass without orienting it in any particular way with respect to the paint on it. What I am trying to say is that an observer will always try to find explanations, and be it correct or incorrect, one can always find a reason behind something. And here is the question: if it had been a CG shot, would it be a bad shot, because if one notices this it makes them uncomfortable, adds an unnecessary level of thinking and so on? Or is it a good shot, because rather than having a clean, boring "plain" glass we have some visual detail…

So I guess the point of this post is to make a note that we do want detail, but not just any detail; we want it to be "in the context". It has to depict something, somehow. And if it actually doesn't depict anything in particular, we should strive for the viewer to find its meaning on his own. It's a tricky task, I guess.

—————————————————————————————————————————————————————–

*Here I have to make a note and remind you that I am by no means implying that more complicated is better, more detailed is better, or that visually overwhelming is desirable. But in some cases it might be (or some of these statements at least). Just imagine a white wall, shot in close-up, and all you see is just the wall. It covers the whole screen. Now say the wall is white. So we can imagine a situation where we have a whole image consisting only of pixels of RGB value 255,255,255. If you did not know beforehand that it is a wall, wouldn't you say it's just a blank white screen? So in some cases more detail is better. And the whole of today's post is dedicated only to this situation, where more detail is better.

——————————————————————–

I remembered this blog article of mine because I found this cool interview with Ken Perlin himself! One nice idea he has is: if something is too complicated, add another dimension to it. What does that mean? Listen to his interview at fxpodcast here:

link to fxpodcast with Ken Perlin about noise, aesthetics and Perlin noise too!

another article here: link

So, as usual, it has been quite a long time since I posted something here. Unfortunately, even though I have some drafts, I just can't sit down and finish them. Therefore I decided to go for an easy post. Well, it's like writing a software review or a movie review (meaning a reflection on something rather than a creation, and here I am simplifying things a bit; there are surely complicated and "heavy" movie reviews as well as shallow "creation" stories). Anyway, this is going to be a rather shallow story. But oh well..

As all of us weak human beings do, from time to time we do stuff which we are not supposed to be doing. Sometimes it's something very wrong, but more often than not it's just something that we are not supposed to be doing because we are supposed to be doing something else. And most of the time we are watching movies or TV, reading a stupid book, or, I don't know.. there are hundreds of things to do in order to avoid something you actually must do. It could be anything. So, as a member of the human race, I too do stupid things. But lucky us (humans), from time to time, on rare occasions, these stupid things lead to something. And be it not something great, or of huge value, it's still sort of positive. Well, the judgment here can vary. Still. As it happens, during one of these times I was procrastinating I happened to be inspired by my own drawing in ink, and tried to do something of that sort in 3D. Ah, finally, you will say, he talks about 3D, it's about time. So here it is, a small, rather dumb project. Who knows, when 'the stars are right' :) , one day I might actually do something good out of this. But for now we have:

A. A hand drawing of a… I don't know, plankton, bug, molecule, microscopic being, alien, virus of sorts?

ink version of a bug

B. We have a wireframe of its recreation in 3D. Then we use some materials with a falloff in the opacity channel, we add some depth of field in the mental ray rendering engine, we invert the image, and here is what we have:

wireframe of a bug

C. The final output:

final render of a bug

A strange title, I know. But it tells what I was thinking about, and, well, what tool I would like to see in 3ds Max.
OK. Let's imagine a situation where one has a 3D model with UV mapping.
Say we want to use a displacement map. And say we have no time to use ZBrush, or even a pelt map or Unwrap UVW, to generate nice UVs for drawing the displacement map in Photoshop.
Say our needs are basic, and we are OK with using a Noise, Smoke or whatever map for displacement.
Now let's also presume our model has big as well as small elements (imagine a tree: big trunk, small branches; or a human: big chest, small fingers, etc.).
So our noise map generates black and white areas, and one of them will be extruded outwards (or inwards). Say the average edge length in the "big" elements of the model is 10 units, and the average edge length in a "small" element is 0.1 units; now say our maximum displacement value is 1 unit. The result is: a nice extrusion in the big elements of the model, and total 3$%$#% (pardon the expression :) in the small elements of the mesh. Why? See the example below.

displacement
It's quite hard to see, but the green object shows the geometry and all the edges. Where it says "huge" distortion, I mean it is huge compared to the width of that part of the object. And where it says "small" distortion, again it is small compared to the general size of that part or element of the object.
So as you can see, the teapot's main body is big and the displacement makes up only a small percentage of its general size, while in the thin lines this displacement well exceeds the width of the line.
So to reach a situation where big mesh components have bigger displacement values and smaller elements of the mesh have smaller ones, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (or extrusion height, to be more precise); the map, image, noise or whatever remains the same.
Have a look at the images:

displacement II

So what about a situation where we have big, small and medium-sized elements? Or where the object's parts gradually get smaller or bigger?
Say, the trunk and branches of a tree. If we had nice UVs we could draw the displacement map in Photoshop. Big parts would have a black and white image where the color extremes would be RGB 255,255,255 to 0,0,0, and the smallest branches would have color extremes of something like RGB 188,188,188 to 170,170,170.
To put it in other words, we would have a contrasty image for the big parts of the mesh, and a very gray image, as if seen through fog, for the small parts of the model.
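The remapping itself is trivial once we know how big a part is. A little Python sketch, assuming we already have some relative size in the 0..1 range (1 = the biggest part, small values = the thinnest parts; how to get that number is the real question, see below):

def remap_displacement(value, relative_size, mid=127.5):
    """Squeeze a 0..255 displacement value towards middle gray.

    relative_size: 0..1, where 1 means the biggest part of the mesh
    (full contrast) and small values mean thin parts (a narrow gray band).
    """
    return mid + (value - mid) * relative_size

print(remap_displacement(255, 1.0))    # 255.0, the extremes stay 0..255
print(remap_displacement(255, 0.07))   # ~136
print(remap_displacement(0, 0.07))     # ~119, an 18-value band like the 170..188 example above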

But we described a situation where we don't have nice UVs and we have to use some procedural map for displacement. So no Photoshop. What to do?
We should have a tool which would automatically change the contrast levels of the map (the one used for displacement) according to the size of the mesh elements.
Let's think: how on earth to do that? How can our software know which element is big and which is small? OK, it's easy if we have separate mesh elements, or material IDs.
Then we simply compare their sizes and we can work out contrast levels for the displacement values. But what if we have one single object? Let's see this image:

object

So. It is one single mesh, no material IDs specified, just one single element. But as a human you can easily determine that the left side is big and the right side of the object is smaller. But how can our software know that?
So this is the main question of this blog entry. I do not have an answer to it, but I do have some thoughts. Let's see..
First of all, we have the bounding box of the whole object. That is the space we will be working in.

So, thought one.
What we do have is the position of each vertex in space. Now let's imagine we take each vertex and cast some number of rays in random directions from that vertex. The idea is to calculate the distance these rays travel until they hit other parts of the mesh. Well… how to put it… we want to see where the ray intersects the geometry again.
Some rays will go off to infinity, but we just eliminate these. A ray that passes the boundary of the object's bounding box is not interesting for us.
So say we cast 10 rays from one vertex. Some of them will "get lost" and some will return distance values. Imagine our object is a sphere. We take one vertex, cast rays, and some will go inwards, inside the sphere, and will eventually hit the other side of it, while others will go out of the object's bounding box.. and we forget these. The rest will give us a distance to the other side of the sphere. Here we could also get the normal direction of the polygon that was intersected. By calculating the angle between the ray and the polygon's normal we could probably determine whether the hit is on the "inside" or the "outside" of the object.
Let's have a look at the image:

object's size determination

So what can we do with all these numbers? Can we add all the values of all rays per vertex into one number? Can we assume that the bigger the number we get, the bigger the chance is that this particular vertex belongs to a "big" element of the object? Because say we use our sphere: we would get that almost all vertices have more or less the same values, so they all belong to one big object, which makes sense.
And if, close by, we have a small element of the mesh, the values we would get from its rays would be lower, so we could assume that it is a smaller element. Now, simply adding the ray distances would be wrong; we should take the average, I guess.. or does it make any difference? Also, values where, by checking the face normal direction, we "assume" that the ray has hit a polygon facing us should be treated with caution.
And values where we "assume" that the ray has hit a polygon that faces away could be more acceptable, since we can guess that it is part of the same object, that it is the "other side" of that object. Now, this part should be investigated more.
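Roughly, in Python, the whole per-vertex probe could look something like this. The ray_cast(origin, direction) helper is assumed, returning a hit distance and the hit face's normal, or None on a miss; the weighting of "facing us" hits is just one possible choice:

import random
from math import sqrt

def random_direction():
    """Roughly uniform random unit vector (rejection sampling inside a sphere)."""
    while True:
        v = [random.uniform(-1, 1) for _ in range(3)]
        length = sqrt(sum(c * c for c in v))
        if 0 < length <= 1:
            return [c / length for c in v]

def vertex_size_estimate(vertex, ray_cast, bbox_diagonal, rays=10):
    """Weighted average hit distance of random rays cast from a vertex.

    ray_cast(origin, direction) -> (distance, hit_normal) or None  (assumed helper)
    Rays that miss, or that travel further than the bounding box, are ignored.
    Hits on faces pointing away from us count fully (probably the "other side"
    of the same part); hits on faces pointing towards us are weighted down.
    """
    total, weight_sum = 0.0, 0.0
    for _ in range(rays):
        d = random_direction()
        hit = ray_cast(vertex, d)
        if hit is None:
            continue
        distance, normal = hit
        if distance > bbox_diagonal:          # effectively "got lost"
            continue
        dot = sum(a * b for a, b in zip(d, normal))
        weight = 1.0 if dot > 0 else 0.25     # facing-away hit vs facing-us hit
        total += distance * weight
        weight_sum += weight
    return total / weight_sum if weight_sum else None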

face that ray hits: is it the same object or another?

situation C

Oh, some more thoughts on these rays. How do we know where one object starts and another begins? I mean, imagine we have two spheres in one single mesh. How do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits something, and we determine that the face the ray has just hit faces the vertex that emitted the ray. The first thought would be that it is part of another object, and we should ignore all distances that this ray gave us. But it's still possible that that face belongs to the same object, and the distance data is relevant for determining the size of the object.
So the first way would be to check all neighboring vertices and see if the face our ray has just hit belongs to one of them. No? Then check the neighbors of the neighbors..
And so on and on, until we have checked all polygons or vertices of the whole mesh. If it's not in the list, we are sure it does not connect to our vertex, it is indeed a separate entity, and we should ignore it. Or say it is in the list but it is 1000 faces away… so we assume it's "too far away" to take into account.
I think this method has two weaknesses. First, I assume it's too time-consuming to check all the vertices. Second, if it does connect to the vertex that emitted the ray, we need to decide how far is far enough to be "too far". And that depends on the mesh topology and many other things. How can one decide on some value which is too much? That's why I think we should not go this way.

Anyway, at the end of the day we would have all (or some, if we optimize our solution) vertices numbered or arranged by the "size" of the object's part they belong to. It could be just a numerical value, or it could be a color value. Maybe the user could manually correct errors produced by our generator by simply painting on the vertices.

After that, our new displacement map would know how to edit the original displacement map to fit the size of the 3D model's parts.
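Putting the sketches above together (still only a hypothetical pipeline): normalise the per-vertex estimates to 0..1 and feed them into the contrast remap as the relative size.

def normalised_sizes(size_per_vertex):
    """Map raw per-vertex distance estimates to 0..1 (1 = biggest part)."""
    valid = [s for s in size_per_vertex.values() if s is not None]
    if not valid:
        return {v: 1.0 for v in size_per_vertex}
    lo, hi = min(valid), max(valid)
    span = (hi - lo) or 1.0
    return {v: ((s - lo) / span if s is not None else 1.0)
            for v, s in size_per_vertex.items()}

# sizes  = {vertex_id: vertex_size_estimate(...)}   from the sketch above
# factor = normalised_sizes(sizes)[vertex_id]
# displaced_value = remap_displacement(noise_value, factor)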

Ahh.. I am confused myself now. Anyone have any thoughts on the matter? Gee… I wonder if it was possible to understand anything from what I wrote… :)

So, probably like so many of us, I use the Noise and Smoke procedural maps quite a lot. They are kind of basic building blocks for many organic shaders. Normally when we model anything.. I don't know, say a wall… there are areas where we would like to use different textures for different parts of a simple, flat wall. The wall beneath a window can have very different colors and bumps and such. There is more dirt gathering there, the patterns of rain and wind affect it differently… also we spit through the window, throw stuff… it all leaves marks on the wall. Normally an artist would have to create different materials for all these things, and then draw a mask defining where to use one material and where the second one should go. Why not help the artist a bit and generate some of those masks automatically?

And how do we do that? Let's see an image here:

procedural angular mask

So let's say we apply our new material, or map, to a geometry. What it does is build a list of angular values of the vertices, or rather the edges. Then, according to those values, we generate gradients, which are our masks. And of course, just using angular values would not be enough; we would want the ability to change the contours from linear, to add noise, smoke and so on….. It's something like the Falloff map in Max… Would it work???

ps. Also, some easy ready-made function should be added for horizontal stripes, something like what you get from running water, aka rain. Could be simple stretched noise, or something…
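A tiny Python sketch of the angular part of this, assuming we can read a normal per face or per vertex (the mesh access itself is left out): the mask is just the angle between the normal and some reference direction, remapped to 0..1, with an optional noise callable to break up the contour, very much like mixing a Noise map into a Falloff map.

from math import acos, pi, sqrt

def angular_mask(normal, reference=(0.0, 1.0, 0.0), noise=None, noise_amount=0.2):
    """Mask value in 0..1 from the angle between a normal and a reference direction.

    0 = facing the reference (e.g. straight up), 1 = facing directly away.
    An optional noise(x) callable perturbs the contour, like adding a Noise
    map on top of a plain Falloff gradient.
    """
    length = sqrt(sum(c * c for c in normal)) or 1.0
    dot = sum(n / length * r for n, r in zip(normal, reference))
    dot = max(-1.0, min(1.0, dot))
    mask = acos(dot) / pi                      # 0..1, linear in angle
    if noise is not None:
        mask += (noise(mask) - 0.5) * noise_amount
    return max(0.0, min(1.0, mask))

The same idea with a horizontal reference direction, plus noise stretched along one axis, would give the rain-streak mask from the ps above.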


So let's imagine we want to create a new material. We go to the Material Editor, we select a type of material, and hurray, we have it. Now we want to add a Noise, or the no-longer-existing Planet :) map, to our diffuse channel. So we click the "None" button. Nice, we are in the Material/Map Browser.

Material/Map Browser

So far so good. Now let's count how many clicks we need to apply, say, a Speckle map. And here we go, it's not in view; unless your screen is extremely huge, you need to scroll down. That's normal. But the faster you scroll, the harder it is to see whether you have reached the wanted map (some people have no problem with this). The slower you scroll, well, the more time it takes (some others get annoyed). So what do we do? Simple. The same thing we do in the modifier selection list.

Modifier List

So we hover over the menu with the mouse pointer, we don't even click as in many other programs (which is a cool innovation by Max), we hit a letter on the keyboard, and here you go: no more scrolling, you are almost at the Speckle map (provided there are no other maps starting with S; there are, but still, it takes a fraction of a second to spot the required map). Why can't we have this functionality in the Material/Map Browser?
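The behaviour itself is trivial to describe; a toy sketch of "type a letter, jump to the first matching entry" over a list of map names (the list here is just an example):

def jump_to_letter(entries, letter, start=0):
    """Return the index of the first entry at or after `start` whose name
    begins with `letter`, wrapping around like the Modifier List does."""
    letter = letter.lower()
    order = list(range(start, len(entries))) + list(range(0, start))
    for i in order:
        if entries[i].lower().startswith(letter):
            return i
    return start  # no match: stay where we are

maps = ["Cellular", "Dent", "Noise", "Smoke", "Speckle", "Splat", "Stucco"]
print(maps[jump_to_letter(maps, "s")])             # Smoke
print(maps[jump_to_letter(maps, "s", start=4)])    # Speckle, on the next press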

OK, let's continue with the most crazy 3D ideas ever. It was a restless night for me, therefore two posts are coming.

So we all know how, in 3D, rendered objects have those ugly (well, mostly) edges. They are so sharp, so CG, so unreal.

Many methods were created to avoid this. We could use falloffs in the opacity channel, or put things out of focus….

What if we mixed the materials? I don't know what would happen, but this is a place to theorize and think.

So free your mind :)

So look at the picture:

scheme of pixel material blending

So I am just thinking how things would look if we used a kind of antialiasing, but one that would actually apply blended materials in the pixels where two objects are close to each other. It would be cool if we could control the "size" of the blending; what I mean here is ..

If we look at the picture as it is right now, we'd see only 1 pixel has a blended material, I mean only the one which has a neighboring pixel of the other material. What if we could apply this blended material to 2, 3 or more pixels? What I mean is…. shit, how to put this…..

Another picture?

pixel blending

And what if we could tie, let's say, depth (z-buffer?) to the number of pixels blended?

Like, close to the view plane only 1 pixel at the edge is blended, while further away more pixels would be blended?
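Something like this, as an image-space Python sketch (assuming we already have, per pixel, a shaded color, an object ID and a normalized depth; so it's a post-process toy, not a real renderer): a pixel is averaged with nearby pixels that belong to a different object, and the search radius grows with depth.

def blend_edges(color, obj_id, depth, width, height, max_radius=3):
    """Average a pixel with nearby pixels of a different object.

    color[y][x]  = (r, g, b) shaded pixel
    obj_id[y][x] = which object covers the pixel
    depth[y][x]  = 0..1, 0 at the view plane; bigger depth -> wider blend
    """
    out = [row[:] for row in color]
    for y in range(height):
        for x in range(width):
            radius = 1 + int(depth[y][x] * (max_radius - 1))
            samples = [color[y][x]]
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height and obj_id[ny][nx] != obj_id[y][x]:
                        samples.append(color[ny][nx])
            if len(samples) > 1:   # only pixels near a material boundary change
                out[y][x] = tuple(sum(c[i] for c in samples) / len(samples) for i in range(3))
    return out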

Has this been done already? Would it work? Would it be cool or ugly?

Anyone want to write a renderer?

:)

You can also visit my portfolio.
Don't like English? You can (try) reading it in Lithuanian or any other language here! (The alternative has no Lithuanian :( )