ideas for thesis, research


Here it is, a title which, as usual, is quite confusing. First of all, we will not discuss the mathematical nature of various random number generators here (which is also an interesting topic). Instead we will focus on the aesthetics of randomness. Random… what? Well, anything random, I guess.

As I see it, there are two ways of making things aesthetically more interesting*. One is depicting more. Say we have a plain wall: we would want to see the 5 layers of paint that were applied over the years, and in some parts the paint is cracked and we see bricks, plus a spider web, and so on. This is one approach, and it is the best way to achieve visual detail and make our frame more interesting. But it takes time to construct the wall, the spiders, the dirt and all the rest. Another approach is to increase detail by adding random elements which by themselves do not depict anything in particular, but which in a certain context might be perceived as intended detail.

plain white wall

rich white wall

Again, noise is just noise. But if we place gray noise above a depicted fireplace, the viewer will think it is smoke. If we add the same gray pattern of noise to a plain wall, the viewer might interpret it as dirt on the wall. This way is much faster. And this is what we will talk about.

First of all, let's look at a simple Perlin noise pattern and at multiple scaled Perlin noise patterns (image A).

Perlin noise, taken from Wikipedia

And in picture "B" we see variations of Perlin noise (here I am not certain whether it actually is Perlin noise, probably not, corrections are welcome) which intend to depict something more precise than the abstract noise itself.

procedural maps "Cellular, Dent, Perlin marble, Marble", 3ds max, image B

So a simple comparison could sound like this.
Perlin noise: a boring, plain noise which is rather abstract and doesn't depict anything in particular. That is also a good thing, since it works as an element for constructing more sophisticated noise-like effects.
Then there is nothing better than multiple scaled Perlin noises layered together. They contain elements of different sizes (see image A) and are almost a god in the CG world (no?). And there are variations of Perlin noise which can be modified to depict wood and other interesting patterns. But what is the difference between plain Perlin noise and multiple scaled Perlin noises? Well, I would say it is all the same plain noise, but it moves towards depiction, towards sophistication, away from randomness.
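As an aside, here is a minimal sketch of that layering idea in Python: several layers of smooth noise at different scales, each weaker and finer than the last, summed into one map. Plain value noise is used as a stand-in for real Perlin noise, and all names and parameters are illustrative only.

```python
import numpy as np

def smooth_noise(size, cells, rng):
    """One layer: random values on a coarse grid, bilinearly upsampled to size x size."""
    grid = rng.random((cells + 1, cells + 1))
    xs = np.linspace(0, cells, size)
    x0 = np.floor(xs).astype(int).clip(0, cells - 1)
    t = xs - x0
    c00 = grid[np.ix_(x0, x0)]
    c10 = grid[np.ix_(x0 + 1, x0)]
    c01 = grid[np.ix_(x0, x0 + 1)]
    c11 = grid[np.ix_(x0 + 1, x0 + 1)]
    tx, ty = t[:, None], t[None, :]
    return (c00 * (1 - tx) * (1 - ty) + c10 * tx * (1 - ty)
            + c01 * (1 - tx) * ty + c11 * tx * ty)

def layered_noise(size=256, octaves=5, persistence=0.5, seed=0):
    """Sum of progressively finer and weaker noise layers ("multiple scaled noises")."""
    rng = np.random.default_rng(seed)
    total = np.zeros((size, size))
    amplitude, cells = 1.0, 4
    for _ in range(octaves):
        total += amplitude * smooth_noise(size, cells, rng)
        amplitude *= persistence    # each layer contributes less...
        cells *= 2                  # ...but has smaller elements
    return (total - total.min()) / (total.max() - total.min())
```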

So the desired noise for image enhancement should look like intended detail rather than just random detail. What I want to say is that it is not the detail itself that is valuable; it is detail that has something behind it. And it is quite tricky to make automatic algorithms that would add not just detail such as Perlin noise, but noise of a sort that somehow depicts something, without the necessity to actually take care of what it depicts. For that, I think we need to look into procedural maps such as "planet" in 3ds max. It is gone in newer max versions.. (why?). Also, in some older posts I suggested building materials, or procedural maps, that would "feel" the geometry they are applied to. Especially the last link, which talks about extra detail on the edges of a mesh.

One example from the real world. Some days ago I was walking down a street and saw a window. It was dirty and covered in paint. But the paint was not running down; it was running upwards. The first thought would be that this is an unneeded detail which, if it were a CG shot and not real life, would make no sense and would complicate the understanding of the image or shot. And then I found a possible reason for the paint running "upside down": most likely the glass was already dirty, and the person who made the window used the dirty glass without orienting it in any particular way with respect to the paint. What I am trying to say is that the observer will always try to find explanations, and be they correct or incorrect, one can always find a reason behind something. And here is the question: if it had been a CG shot, would it be a bad shot, because noticing this makes the viewer uncomfortable and adds an unnecessary level of thinking? Or is it a good shot, because rather than having a clean, boring, "plain" glass we have some visual detail…

So I guess the point of this post is to note that we do want detail, but not just any detail: we want it to be "in context". It has to depict something, somehow. And if it actually doesn't depict anything in particular, we should strive to let the viewer find its meaning on their own. It's a tricky task, I guess.

—————————————————————————————————————————————————————–

*Here I have to make a note and remind you that I am by no means implying that more complicated is better, that more detailed is better, or that visually overwhelming is desirable. But in some cases it might be (or some of these statements might hold, at least). Just imagine a white wall shot in close-up, so that all you see is the wall; it covers the whole screen. Now say the wall is white. We can imagine a situation where the whole image consists only of pixels with RGB value 255,255,255. If you did not know beforehand that it is a wall, wouldn't you say it is just a blank white screen? So in some cases more detail is better. And the whole of today's post is dedicated only to this situation, where more detail is better.

——————————————————————–

I remembered this blog article of mine since I found this cool interview with Ken Perlin himself! One nice idea he has: if something is too complicated, add another dimension to it. What does that mean? Listen to his interview at fxpodcast here:

link to the fxpodcast with Ken Perlin about noise, aesthetics and Perlin noise too!

another article here: link

A strange title, I know. But it tells what I was thinking about and, well, what tool I would like to see in 3ds max.
OK. Let's imagine a situation where one has a 3D model with UV mapping.
Say we want to use a displacement map. And say we have no time to use ZBrush, or even to use pelt mapping or Unwrap UVW to generate nice UVs for drawing a displacement map in Photoshop.
Say our needs are basic and we are fine with using a noise, smoke or whatever map for displacement.
Now let's also presume our model has big as well as small elements (imagine a tree: big trunk, small branches; or a human: big chest, small fingers, etc.).
So our noise map generates black and white areas, one of which will be extruded outwards (or inwards). Say the average edge length in the "big" elements of the model is 10 units, and the average edge length in a "small" element is 0.1 units; now say our maximum displacement value is 1 unit. The result: a nice extrusion on the big elements of the model, and total 3$%$#% (pardon the expression :) ) on the small elements of the mesh. Why? See the example below.

displacement
It's quite hard to see, but the green object shows the geometry and all its edges. Where it says "huge" distortion, I mean it is huge compared to the width of that part of the object. And where it says "small" distortion, again it is small compared to the general size of that part or element of the object.
So as you can see, the teapot's main body is big and the displacement is only a small percentage of its overall size, while on the thin lines this displacement well exceeds the width of the line.
So to reach a situation where big mesh components get bigger and small mesh elements get smaller displacement values, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (or extrusion height, to be more precise); the map, image, noise or whatever remains the same.
Have a look at the images:

displacement II

So what about a situation where we have big, small and medium-size elements? Or where the object's parts gradually get smaller or bigger?
Say the trunk and branches of a tree. If we had nice UVs we could draw the displacement map in Photoshop. Big parts would get a black-and-white image whose color extremes run from RGB 255,255,255 to 0,0,0, and the smallest branches would get color extremes of something like RGB 188,188,188 to 170,170,170.
In other words, we would have a contrasty image for the big parts of the mesh, and a very gray image, as if seen through fog, for the small parts of the model.
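As a rough sketch of that idea in Python: the grayscale displacement map gets pulled toward mid-gray for small mesh parts, so thin branches receive only a fraction of the displacement the trunk gets. The size_factor here is a hand-fed number standing in for whatever automatic size estimate we end up with.

```python
import numpy as np

def remap_contrast(disp_map, size_factor, mid=0.5):
    """disp_map: float array in 0..1; size_factor 1.0 keeps full contrast,
    small values squeeze everything toward `mid` (a foggy, low-contrast map)."""
    return mid + (disp_map - mid) * size_factor

noise = np.random.rand(256, 256)            # stand-in displacement map
trunk_map  = remap_contrast(noise, 1.0)     # full 0..1 range for big parts
branch_map = remap_contrast(noise, 0.08)    # roughly 0.46..0.54, almost flat gray
```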

But we described a situation where we don't have nice UVs and we have to use some procedural map for displacement. So no Photoshop. What to do?
We should have a tool which would automatically change the contrast levels of the map used for displacement according to the size of the mesh elements.
Let's think how on earth to do that. How can our software know which element is big and which is small? OK, it's easy if we have separate mesh elements, or material IDs.
Then we simply compare their sizes and can work out contrast levels for the displacement values. But what if we have one object? Let's see this image:

object

So. It is one single mesh, no material IDs specified, just one single element. But as a human you can easily determine that the left side is big and the right side of the object is smaller. But how can our software know that?
This is the main question of this blog entry. I do not have answers to it, but I do have some thoughts. Let's see..
First of all, we have the bounding box of the whole object. That is the space we will be working in.

So, thought one.
What we do have is the position of each vertex in space. Now let's imagine we take each vertex and cast some number of rays in random directions from that vertex. The idea is to calculate the distance these rays travel until they hit other parts of the mesh. Well… how to put it… we want to see where the ray intersects the geometry again.
Some rays will go off to infinity, but we just eliminate these: a ray that passes the boundary of the object's bounding box is not interesting for us.
So say we cast 10 rays from one vertex. Some of them will "get lost" and some will return distance values. Imagine our object is a sphere. We take one vertex and cast rays; some will go inwards, inside the sphere, and will eventually hit the other side of it, while others will go outwards, leave the object's bounding box, and be forgotten. The rest will give us a distance to the other side of the sphere. Here we could also collect the normal direction of the polygon that was intersected; by calculating the angle between the ray and the polygon's normal we could probably determine whether we hit the "inside" or the "outside" of the object.
Lets have a look at the image:

object size determination

So what can we do with all these numbers? Can we add up all the ray values per vertex into one number? Can we assume that the bigger this number, the bigger the chance that this particular vertex belongs to a "big" element of the object? Because, taking our sphere, we would find that almost all vertices get more or less the same values, so they all belong to one big object, which makes sense.
And if close by we have a small mesh element, the values we get from its rays would be lower, so we could assume it is a smaller element. Actually, simply summing the ray distances would be wrong; we should take the average, I guess.. or does it make any difference? Also, values where, judging by the face normal direction, we "assume" the ray has hit a polygon facing us should be treated with caution.
Values where we "assume" the ray has hit a polygon facing away could be more acceptable, since we can guess it is part of the same object, its "other side". This part should be investigated more.
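To make the ray idea a bit more concrete, here is a sketch in Python leaning on the trimesh library (an assumption on my side; any ray/mesh intersection routine would do). For every vertex a handful of random rays is cast, hits on polygons facing away from the ray are treated as "the other side of the same part", and the average hit distance becomes a rough per-vertex size value.

```python
import numpy as np
import trimesh

def vertex_size_estimate(mesh, rays_per_vertex=10, seed=0):
    """Rough per-vertex 'size' of the mesh part each vertex belongs to."""
    rng = np.random.default_rng(seed)
    sizes = np.zeros(len(mesh.vertices))
    for vi, v in enumerate(mesh.vertices):
        dirs = rng.normal(size=(rays_per_vertex, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        # nudge the origins slightly along each ray so we don't hit our own face
        # (the 1e-4 offset assumes scene units of roughly that scale)
        origins = np.tile(v, (rays_per_vertex, 1)) + dirs * 1e-4
        locations, index_ray, index_tri = mesh.ray.intersects_location(origins, dirs)
        hits = []
        for loc, ri, ti in zip(locations, index_ray, index_tri):
            # accept only hits on faces pointing away from us ("the other side
            # of the same part"); front-facing hits are treated with caution
            if np.dot(mesh.face_normals[ti], dirs[ri]) > 0:
                hits.append(np.linalg.norm(loc - v))
        sizes[vi] = np.mean(hits) if hits else 0.0
    return sizes

# usage sketch: mesh = trimesh.load("teapot.obj"); sizes = vertex_size_estimate(mesh)
```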

the face the ray hits: is it the same object or another?

situation C

Oh, some more thoughts on these rays. How do we know where one object starts and another begins? I mean, imagine we have two spheres in one single mesh. How do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits something, and we determine that the face it just hit faces the vertex that emitted the ray. The first thought would be: it is part of another object, so we should ignore all the distances this ray gave us. But it is still possible that that face belongs to the same object, and the distance data is relevant for determining the object's size.
So the first way would be to check all neighbouring vertices and see whether the face our ray has just hit belongs to one of them. Not? Then check the neighbours of the neighbours.. and so on, until we have checked all polygons or vertices of the whole mesh. If it is not in the list, we are sure it does not connect to our vertex, it is indeed a separate entity, and we should ignore it. Or say it is in the list but it is 1000 faces away… then we assume it is "too far away" to take into account.
I think this method has two weaknesses. First, I assume it is too time-consuming to check all the vertices. Second, if the face does connect to the vertex that emitted the ray, we need to decide how far is far enough to be "too far". That depends on mesh topology and many other things; how can one decide on a value that is too much? That is why I think we should not go this way.
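For completeness, here is a small sketch of the neighbour-of-neighbours walk just described (the one I doubt): a breadth-first search over vertex adjacency with a hop limit, where "too far away" is simply max_hops.

```python
from collections import deque

def within_hops(adjacency, start_vertex, target_vertices, max_hops):
    """adjacency: dict vertex -> set of neighbouring vertices.
    Returns True if any target vertex is reachable within max_hops edges."""
    targets = set(target_vertices)
    seen = {start_vertex}
    frontier = deque([(start_vertex, 0)])
    while frontier:
        vertex, hops = frontier.popleft()
        if vertex in targets:
            return True
        if hops == max_hops:
            continue                      # the "too far away" cut-off
        for neighbour in adjacency[vertex] - seen:
            seen.add(neighbour)
            frontier.append((neighbour, hops + 1))
    return False
```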

Anyway, at the end of the day we would have all (or some, if we optimize our solution) vertices numbered or ranked by the "size" of the object part they belong to. It could be just a numeric value, or a color value. Maybe the user could manually correct errors produced by our generator by simply painting on the vertices.

After that, our new displacement map would know how to edit the original displacement map to fit the size of the 3D model's parts.

Ahh.. I am confused myself now. Anyone have any thoughts on the matter? Gee… I wonder if it was possible to understand anything from what I wrote… :)

The next thing we could consider is having our source texture (which is now rotated and nicely distributed along our wall or whatever surface) flipped or mirrored in each triangle. Here I guess I should add an image to show what I mean.

 

orientation: flipping, mirroring and so on...

So, as you can see, the first image shows our source texture, and the second one is a nice hexagon which we created by rotating that source texture.
Now the third one is more complex. As in the second one, we get its shape and structure by rotating the original source texture, but in addition to that we mirror, flip and rotate the source texture after it is already in its correct position, meaning only after we rotate it into place.
Well, again, this is hard to explain in words; just have a closer look at the third image and try to imagine what actions must be taken to get this result using only the source image.
OK, so let's imagine our new tool is already capable of achieving what we described above. Why stop there, let's add some more cool stuff.
The next thing would be more textures. In this case we used one source texture, we created a triangle out of it, then we rotated and copied it, and we have our imaginary surface nicely covered.
Now we see that all elements of our surface, meaning the original texture, are the same everywhere. It is rotated, flipped, mirrored, but essentially it is the same texture, and we don't need much time to spot that. Feels so unreal, right?
So imagine now that we have a set of original textures. They are the same in shape (the image in the bitmap forms the same triangle), and presumably the image is the same too, but with slightly different coloring; or some of the source textures have some cracks or some dirt; or maybe they are just made out of different glass and therefore reflect light differently; or maybe some of them have mirrors where others have plain color.
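A tiny sketch of how the variant picking could work: each tile of the pattern picks one of the variant bitmaps from its grid position, stable between renders and weighted so the cracked or dirty versions stay rare. File names and weights here are made up for illustration.

```python
import hashlib

VARIANTS = ["tile_clean.png", "tile_tinted.png", "tile_cracked.png", "tile_dirty.png"]
WEIGHTS  = [0.70, 0.20, 0.05, 0.05]   # mostly clean tiles, a few defects

def variant_for_tile(ix, iy, seed=42):
    """Stable pseudo-random pick for the tile at grid position (ix, iy)."""
    digest = hashlib.md5(f"{seed}:{ix}:{iy}".encode()).digest()
    r = int.from_bytes(digest[:8], "big") / 2**64     # uniform 0..1, repeatable
    acc = 0.0
    for name, weight in zip(VARIANTS, WEIGHTS):
        acc += weight
        if r < acc:
            return name
    return VARIANTS[-1]
```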

a hexagon: with the same.. or different triangles forming it?

So we need the ability to use different source textures in our final image generation. It might be random (for forming cracks) or very precise and deliberate (say, for constructing other patterns with different materials while preserving the original image).
Now, the interface for this is still quite a question. Any input?
After we implement such an option in our generator, the next logical step would be context awareness. What do I mean by that? Well, we said that we possibly want cracks or dirt, and we achieve that by randomly adding some individual tiles which have some dirt applied in the source image.
The next step would be for our generator to recognise the world. It should assume a rain direction (from up towards down along the z coordinate, with some random directional shifts). And this is all because dirt does not appear on walls in totally random places: we can distinguish areas where there will be more of it. That would be where the wall reaches the ground, for one; also all sorts of dirty water forms nice patterns beneath windows. And so on. I will discuss something very similar in the next post. (Can't link to it before I actually write the next post, can I :) )

more coming in part 3, very soon :)

Oh, I have a blog? Seems I forgot about it for a while :) So, some ideas on textures.

We use different textures for creating all kinds of materials for our 3D models.
In some cases we use tiling and mirroring as different ways of applying textures to existing 3D objects with UV coordinates.
But in some cases we use textures in architectural situations where we have, say, a couple of tiles to cover a whole wall. Having a nice texture of, say, one tile, we can make it cover the whole wall by tiling and mirroring. To achieve more varied results we use the brick procedural map generator (in 3ds max), which basically helps us lay out our tile image in a more varied fashion.
But if you have ever looked at Islamic art and the architectural tradition, you will see that some of the ornaments and tiling can blow your mind with their complexity.

Here I add some articles which are very interesting regarding this topic.

Geometry feat cloaked in medieval Islamic tile,
“Cintamani” and Islamic Tiles,
A Discovery in Architecture: 15th Century Islamic Architecture Presages 20th Century Mathematics,
Q1 Project – Islamic Tiles,
The Art of Mathematics Islamic patterns,
Sometimes you look at the wall but fail to see the contours of the single pattern that is repeated.
tower decoration pattern

To some extent the same can be said about baroque wallpapers in Europe. And probably in any culture and time period you could find very complicated pattern arrangements.

So while analysing a couple of Arabic patterns, I thought: why do I have to spend hours tiling, mirroring and rotating stuff in Photoshop if, theoretically, it could be done directly in the material editor?
Many patterns have a basic triangle shape. So if we have a correct texture which contains our desired basic pattern element in a triangle shape, we could rotate it instead of tiling or mirroring it. Since all bitmaps, as far as I know, are rectangular in shape, we would need to find and identify the triangle element which we will use for rotation. Technically, I guess we could develop a type of bitmap which is triangular in shape; I guess its resolution would be defined not as, say, 640×480 but as, I don't know, 1 pixel in the top row, 640 in the bottom row, 480 pixels of height, or something like that.
But we will try to work with traditional bitmaps here. So say we open a texture in our brand-new material editor. Then we should define a triangle in one way or another. I guess our material editor should let us draw a triangle on top of the texture, something like guidelines in Photoshop. We would make sure these guidelines match the triangle shape in the bitmap as closely as possible. The next step would be to choose which corner of the triangle is our pivot point, or in other words, where its rotation centre will be.
rotation of images
According to the angles of our triangle, the new material editor should copy and rotate the texture the number of times that comes closest to a full circle (say the pivot corner of our triangle is 45 degrees; then the material editor would copy/rotate our original image exactly 8 times, if I am still able to count :) ).

So the next thing: after we have a nice circle with a texture that repeats itself not by tiling or mirroring but by rotation, we need to describe how to make many such circles to cover our whole wall with the new pattern.
Here I discovered there are 3 types of triangles: perfect ones, good ones and bad ones.
A perfect triangle is one which has a 60-degree angle in each corner (picture a bee hive).
A good one is when its pivotal angle is such a number that dividing 360 by it gives you something like 1, 2, 3, 4, 5 .. 25 – a whole number, and not 5.22321445524…, which is also the description of a bad triangle. What I mean is that the number of copies needed to form a full circle should always be a whole number.
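A minimal sketch of that "perfect / good / bad" test: the pivot angle is good if a whole number of copies closes the full circle, i.e. 360 divided by the angle leaves no remainder (with a small tolerance for non-integer angles).

```python
def classify_pivot_angle(angle_deg, tol=1e-6):
    """'perfect' = 60 degrees, 'good' = 360/angle is whole, 'bad' = everything else."""
    copies = 360.0 / angle_deg
    if abs(copies - round(copies)) > tol:
        return "bad", None                    # e.g. 70 degrees -> 5.142... copies
    copies = int(round(copies))
    kind = "perfect" if abs(angle_deg - 60.0) < tol else "good"
    return kind, copies

print(classify_pivot_angle(60))   # ('perfect', 6) -> the bee-hive / hexagon case
print(classify_pivot_angle(45))   # ('good', 8)    -> the 8-copy example above
print(classify_pivot_angle(70))   # ('bad', None)
```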

a hexagon is formed out of 6 triangles which have 60-degree corners

Anyway, a triangle with 60-degree corners can cover any surface. All other "good" triangles also cover the whole surface, but with some gaps between themselves.

good and bad triangles

This is also OK, but we should have an option to export this "gap" to a bitmap file, so the user can draw whatever he or she wants there in Photoshop and import it back into our map generator. So basically, if we use any triangle other than the 60-degree one, we need two textures, not one, to form our surface.

more, coming soon.

So, probably like so many of us, I use the noise and smoke procedural maps quite a lot. They are kind of basic building blocks for many organic shaders. Normally, when we model anything.. I don't know, say a wall… there are areas where we would like to use different textures for different parts of a simple, flat wall. The wall beneath a window can have very different colors and bumps and so on. More dirt gathers there, the patterns of rain and wind affect it differently… also we spit through the window, throw stuff… it all leaves marks on a wall. Normally the artist would have to create different materials for all these things and then draw a mask defining where one material goes and where the second one should go. Why not help the artist a bit and generate some of these masks automatically?

And how do we do that? Let's see an image here:

procedural angular mask

So let's say we apply our new material, or map, to a geometry. What it does is build a list of angular values of the vertices, or rather the edges. Then, according to those values, we generate gradients, which are our masks. And of course just using angular values would not be enough; we would want the ability to change the contours from linear, to add noise, smoke and so on ….. it's something like the falloff map in max… would it work???
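One possible reading of this, as a sketch: a per-vertex value derived from the angle between the vertex normal and the world "up" axis, which can then be broken up with noise before being used as a material mask. The vertex normals are assumed to already be available as an (N, 3) array.

```python
import numpy as np

def angular_mask(vertex_normals, up=(0.0, 0.0, 1.0), noise_amount=0.15, seed=0):
    """Mask from the angle between each vertex normal and the world up axis."""
    up = np.asarray(up, dtype=float)
    up /= np.linalg.norm(up)
    cos_a = vertex_normals @ up               # 1 = facing straight up, -1 = facing down
    mask = 0.5 * (1.0 - cos_a)                # 0 on upward faces, 1 underneath
    rng = np.random.default_rng(seed)
    mask += noise_amount * (rng.random(len(mask)) - 0.5)   # crude stand-in for a noise map
    return np.clip(mask, 0.0, 1.0)
```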

P.S. Also, some easy ready-made function should be added for horizontal stripes, something like what you get from running water, a.k.a. rain. It could be simply stretched noise, or something…
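And a tiny sketch of those stretched-noise streaks: ordinary random noise blurred heavily along one axis only (down the image here), which reads as running-water marks. It uses scipy's gaussian filter, which is an assumption, not a must.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def water_streaks(height=512, width=512, streak_length=40.0, seed=0):
    """Random noise smeared strongly along one axis -> running-water stripes."""
    noise = np.random.default_rng(seed).random((height, width))
    streaks = gaussian_filter(noise, sigma=(streak_length, 1.0))  # long blur down the image
    streaks -= streaks.min()
    return streaks / streaks.max()
```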


I recently had an idea for a small animation.
I was thinking of using some moving objects which would generate a blob mesh, but all of it should be contained in one form or shape. I know it's hard to understand what I wanted to do, but that does not matter; here I am concerned only with the technology, or rather a way to do it.
So I was thinking of using the boolean thingy to "contain" my object, or rather to cut off the parts of it which go outside my "container object".
And I noticed that this seems to be impossible.
My plan was to use an animated object to generate a blob mesh, and all of it should have a very specific shape which would be rather impossible to model directly, but very easy to cut out using a boolean.
But it seems a boolean does not exist as a modifier, only as a separate object, which I find quite disturbing.
Why can't it be a modifier? Maybe the reason is the unstable topology of the resulting mesh? Or is it not implemented because of computing time? I noticed that using booleans on something with more than 10 vertices becomes quite slow :) (well, the numbers here are somewhat ironic, obviously).
So be it, but I still want it to work. It can have a big annoying message, "are you crazy", or something… (are you sure you want to proceed and compute a boolean operation for each frame). Anyway, it could just be calculated for each frame separately, almost as a different object.
But for now, how can I do it? Basically, all I can imagine is to create my animation, let's say it's 100 frames, perform the boolean operation on 100 copies of my animated object, and then render or use only one of these resulting objects per frame. But that would be madness. Any ideas?????

Lately I was working with some textures. I had a model, and for texture work in ZBrush and Photoshop I had to use Unwrap UVW. I guess that's the term in 3ds max; not sure what it is called in other software, but I guess the name, as well as the function, should be similar.
So I selected parts of my 3D model, corrected the seams, and used pelt mapping in most cases.
Again, it does not actually matter. The main idea is that at the end I had something like this:
When it comes to ZBrush, you naturally don't see any seams where two texture blocks connect; you work without knowing that they exist (more or less).
But when it comes to Photoshop, you yourself must make sure that all pixels at the edges of a texture block are the same or a very similar color as the corresponding pixels of the other texture blocks, so that when you use the texture back in your 3D program, no one can see where the texture blocks connect.

But how on earth do you do that?
How do you manipulate the image so that each edge pixel in two separated texture blocks ends up the same?
I am sure there must be solutions or workarounds; I would be very happy to hear how you do it.

But I was thinking: what if we had a plug-in for Photoshop (GIMP? Corel Photo-Paint? …?) that would help with that?

The idea is to create two "guidelines", one for each texture block; they could resemble curves in Photoshop.
So first, here is an image to better understand what I am talking about:

seems

So let's say I use the brush tool and draw in one block of the texture, and at some point the brush moves out of that texture block, crossing one seam, and "magically" it appears in the other texture block, crossing the second seam.
It would be like working with tiling textures. Or, a better example, like the OLD game "Snake", where you have to eat stuff and as a result you get longer, but you cannot bump into yourself; you can, however, hit the right side of your screen and end up on the left side of the screen. The idea is the same.
You hit the green seam line from one side, and you appear crossing the red seam in the other texture block.
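A minimal sketch of that "wrap": the two copies of a shared edge are stored as 2D segments in texture space (seam_a and seam_b, with endpoints listed in corresponding order), and a stroke crossing seam_a at some parameter t continues from the point at the same t along seam_b. The coordinates below are made up for illustration.

```python
import numpy as np

def wrap_across_seam(crossing_point, seam_a, seam_b):
    """Map a point crossing seam_a onto the matching point on seam_b."""
    a0, a1 = np.asarray(seam_a, dtype=float)
    b0, b1 = np.asarray(seam_b, dtype=float)
    edge = a1 - a0
    # parameter of the crossing point along seam_a, clamped to the segment
    t = np.clip(np.dot(np.asarray(crossing_point) - a0, edge) / np.dot(edge, edge), 0.0, 1.0)
    return b0 + t * (b1 - b0)      # matching point on the other texture block

# e.g. a stroke leaving block 1 half-way along its green seam re-enters block 2
# half-way along the red seam (here the two seams have different lengths):
print(wrap_across_seam((0.30, 0.50), seam_a=[(0.3, 0.2), (0.3, 0.8)],
                                     seam_b=[(0.7, 0.9), (0.9, 0.4)]))
```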

OK, and now let's imagine a slightly more complex situation.
Well, maybe it's not more complex; it's just for the sake of this example.

seems1


 seems2
So we see a situation where the same edge is shared by 3 texture blocks, and how the same edge, as represented in each block, has a different length and shape.

free transform
OK, my representation of the "free transform" tool is not very accurate, but I hope it is good enough for understanding.
I guess the "Free Transform" tool is quite limited, so you would be able to control it only on one texture block, and the other one (on the next texture block) would be generated automatically… or we write our own free transform tool too :)
Now, in the case of the brush (look at the second image), it might at some points appear in 3 different locations on screen, on 3 different texture blocks.

Well, anyone need an idea for a computer graphics degree project? That could be it :)

P.S.
If such a tool existed…
the author could go further and write another plugin for max/maya/xsi to export the seams….. or?
It would take ages to define all the seams in Photoshop for more complex objects..

Lots of real-time as well as non-real-time 3D solutions share the same strategies for dealing with intersecting meshes. We find them in the Havok physics engine, in any cloth or hair simulator, and so on. Despite the average to very good results these tools provide, we still sometimes find objects or meshes which intersect.

That might happen for many reasons. Sometimes it is imperfect physics simulation tools, but more likely simple computing restrictions: not all objects, in their highest resolution, can be added to the physics engine – it would take years to calculate "everything". Also, sometimes some objects are keyframed and do not obey the physics engine; I don't know if that is a real reason, but I imagine the physics engine could, in some situations, have a hard time dealing with such "restricted" situations. Also small spaces, different scales and so on…. all these things do not help to achieve the best results.

There is a long history of research, as well as commercial tools available: RealFlow is at version 4, Syflex is at version 3.9.

But I have not yet heard of research on "hiding" problems instead of solving them. What does a rendering engine do if we have two polygons in exactly the same space?

From my experience they usually go mad; the shading becomes really fu$$$$ up. Therefore my question is: is there any research on hiding problems in situations where we cannot solve them? Imagine tree leaves moving in the wind.. it takes quite a bit of computing power to make sure that none of the individual leaves intersect each other. Wouldn't it be cool to mask these problems when they do happen?

Also, we often use two or more intersecting planes for making trees. In many games we make trees like this:

billboard

It's only two polygons, 4 triangles I think, and both carry the texture of the tree plus an alpha map, so we see only the tree's shape; we should not see the shape of the real geometry of our 3D tree.

What if the renderer automatically blended the textures in the intersection area, so we don't see a line?

Is this already done? Maybe it used to be a problem only a long time ago?

Anyone know?

Leave me a comment if you know something about the topic.

So, recently I was working a bit with architectural visualizations.
And I ran into a small problem. There are people who are always very tidy and do things in the "right" order. And there are others who prefer a "creative mess", so to speak.
And I am sure both ways can be good. Sometimes, at least :)
In my case, I have a tendency to get lost in the modeling stage, adding more and more small details. So at the end of the day I have a scene full of detail, but no lights. Some say it is a good idea to draft your scene from simple cubes first, then to make "rough" lighting, so that later on you only need to tweak minor details. But when you have a billion objects and it's hard to navigate the viewport… you add a light and it takes ages to render even a small test. So it's kind of a stupid way to go.
So I was thinking (this is about 3ds max only, I guess).
Let's imagine a situation where we have a heavy scene with dense meshes, something like this:

interior, lots of polys

And then let's turn on the viewport display mode "show as box":

interior, view as bounding boxes

So if max can already display bounding boxes, it would be nice to have a tool which could do the same but export these boxes as a mesh.
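A small sketch of what such an export could do: for every object, take the min/max of its vertices and emit the 8 corners and 12 triangles of that box, all merged into one light proxy mesh for test renders. Objects are assumed to be plain (N, 3) vertex arrays here; in practice the tool would read them from the scene.

```python
import numpy as np

def bounding_box_mesh(vertex_arrays):
    """Build one merged mesh of axis-aligned boxes, one box per input object."""
    verts, faces = [], []
    # the 12 triangles of a box, as indices into its 8 corners
    box_faces = np.array([[0, 1, 2], [0, 2, 3], [4, 6, 5], [4, 7, 6], [0, 4, 5], [0, 5, 1],
                          [1, 5, 6], [1, 6, 2], [2, 6, 7], [2, 7, 3], [3, 7, 4], [3, 4, 0]])
    for i, v in enumerate(vertex_arrays):
        lo, hi = v.min(axis=0), v.max(axis=0)
        corners = np.array([[x, y, z] for z in (lo[2], hi[2])
                                      for x, y in [(lo[0], lo[1]), (hi[0], lo[1]),
                                                   (hi[0], hi[1]), (lo[0], hi[1])]])
        verts.append(corners)
        faces.append(box_faces + 8 * i)
    return np.vstack(verts), np.vstack(faces)
```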

An even more advanced tool could somehow (no idea how, still thinking) detect objects which resemble boxes (walls) and objects with more complex shapes (trees), and, by using the MultiRes tool (which reduces the number of polygons), automatically generate a whole scene just for test renders. That would be a time saver, I guess, or?

What do you think? Maybe there is a similar tool already and I am just not aware of it? Let me know if that's the case!

And, as usual, a link to my other website here, where you can find the interior used as the example rendered. The direct link is here. By the way, it's a PDF :)

So, as we can see in the previous post (if we read the comments), there were some concerns about my proposal's validity. Mainly Delt0r raised some valid questions. Therefore I have been struggling to improve my initial idea, and here is what I came up with. Now, it still has some flaws, and questions, but it's for you to judge, since I have no clue about maths, and here it is quite important, I mean the math.

So the idea is this.

A. We have a mesh (the base mesh), which we will use to determine where the "blobs" appear.

B. We get the normal of each vertex.

C. Then the slicing planes should be generated (here you can look at my old post or at the drawings below).

Here comes the first problem. What I want to find out is: is there a way to limit this slice plane's effect? Here no one will understand me, so let's have a look at the picture:

screenshot of the blob mesh section in max

Here we have a blob mesh and a slice tool. The white plane is the slice plane, and the green line is the outline that is generated. The problem is that the green outline goes outside the boundaries of the slice plane, and correct me if I am wrong, but I am sure the algorithm behind this procedure works this way. So what we would like to have here would look like this:

blob mesh, facke intersection

blob mesh, fake intersection

OK, how to do it? I have no clue. It might require rewriting all the algorithms behind this operation, or doing it the old way and then subtracting the unneeded parts, which is probably easier.

So now let's imagine our slicing works the way we want, and let's move on.

So, in order to understand my "hand" drawings, here is another diagram for explanation:

normals

So this diagram shows the mesh we will be thinking about. In the next drawings I draw only 3 vertices of this mesh. In this image we can also see averaged normals (left), because normally each plane is just a plane :) so all 4 vertices should look like in the image on the right. But we can take the averaged values.

how to make planes

You can enlarge this image to see it better, but the idea is to make planes for each vertex separately.

Here as well we have problems. First of all, we would want to arrange the planes so they would form a "continuity", I mean they would go like this: /\/\/\/\/\/\ Shit, I have no idea how to explain…. look at the drawings again. Hm…. please tell me, does anyone understand what I am talking about? Ah? Anyone?

And the last step would be to "weld" vertices which are very close to each other.

Again, I wonder how easy it would be to make so many "restricted" planes for slicing.

But from what I understand, if it worked it could improve the mesh topology, or?

any ideas?
