Not sure if this is worth a separate post, but since I haven’t written here for months and months… here goes :)

softimage

So here at CGChannel.com I read that the Softimage development team, which had worked on the product for more than 10 years, was moved to work on Maya-related projects, and a new team of developers was assigned to Softimage: “Softimage development will now be carried out by a six-person team in Singapore, headed up by Senior Software Development Manager Chun-Pong Yu.” The strange thing is that I didn’t find any news about this on the Area or on XSI Base… Is it a rumor? Anyone?

——————

Also read about the death of Softimage, nr. 1.

So, another minor complaint about 3ds Max’s UI. It’s very minor, and in essence just a tiny thing, but I still think it could, and should, be improved. It’s about the camera (not exactly the camera, just the viewport) position when you are modeling. I guess many of us Max users use the “zoom to selected” viewport tool. I use it a lot, especially when modeling. So the situation is this: you are modeling something and select a couple of vertices which happen to be in the same position. Now we can’t really see anything, because we are looking at the whole model, so we hit zoom to selected and, bam, most likely we see… nothing. Or just some part of our object, which has suddenly lost its shape. It’s viewport clipping, I guess. The zoom tool zoomed so close to the selected vertices that Max decided it was too close and just clipped the model. Or that’s what I think happened. Look at the pictures, or try it yourself. I’m not sure that is exactly what happens, but it sure looks like it. Why does Max do this?

just an object

(Usually this happens when we model, say, a character, or whatever, and we work on only one half of the model. The other half we mirror, and later on we want to weld the vertices which are duplicated between the original and the mirrored object.. at least that is one of the situations where this occurs..)

So what do we do? Just scroll back a bit and, bam, the model is back, and we are close to our selected elements to work with. But if you are inexperienced, you might go… Ouch, what happened??? Or you just scroll back, like you have done a million times. But couldn’t we make a tweak to the viewport camera positioning in the zoom-to-selected tool’s code, so it doesn’t go to such extremes that the model gets clipped? Anyone? So it’s just another small complaint. Yes, I know there are a million other things that are more important to improve, and this is just a tiny annoyance.. but if it would take just a couple of lines of code, might it be worthwhile to write them down? I suppose it should be quite a straightforward thing; it’s not like one has to write Lagoa for Max or anything like that… (Yes, Lagoa would be cooler to get… :) Am I right? Am I wrong?
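For illustration only, here is a minimal sketch in plain Python (not MaxScript, and not how Max actually implements anything) of how such a clamp could look. The function name, the margin parameter and the bounding-sphere framing rule are all assumptions of mine:

```python
# A minimal sketch of clamping a "zoom to selected" distance so the viewport
# near clipping plane never cuts into the model. Plain Python for illustration.

import math

def zoom_to_selected_distance(selection_radius, fov_deg, near_clip, margin=1.5):
    """Distance from camera to selection centre that frames the selection
    without pushing it inside the near clipping plane."""
    # Distance at which the selection's bounding sphere just fits the frustum.
    fit_distance = selection_radius / math.tan(math.radians(fov_deg) / 2.0)
    # Never move closer than the near plane plus some breathing room,
    # otherwise parts of the surrounding mesh get clipped away.
    min_distance = near_clip + selection_radius * margin
    return max(fit_distance, min_distance)

# Two coincident vertices selected -> tiny bounding sphere, but the camera
# still keeps a sensible distance instead of diving into the mesh.
print(zoom_to_selected_distance(selection_radius=0.01, fov_deg=45.0, near_clip=1.0))
```

The point of the sketch is only that the tool would need one extra comparison against the near clip distance, nothing more.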

———————–

1. (in image) The clipped area that is not shown….

It was quite a long time ago that I found this video somewhere, and it did impress me. Here we are talking about 3D modeling, actually about a very fast, SketchUp-like 3D modeling interface based on filmed video. It definitely cannot be a solution to all modeling needs, but one could benefit from such a fast prototyping tool. Please have a look at the video, it shows everything rather clearly. If anyone knows anything about the future of this product / research, please drop me a line in the comments,

thanks.

A strange title, I know. But it tells what I was thinking about, and, well, what tool I would like to see in 3ds Max.
Ok. Let’s imagine a situation where one has a 3D model with UV mapping.
Say we want to use a displacement map. And say we have no time to use ZBrush, or even to use pelt mapping or Unwrap UVW to generate nice UVs for drawing a displacement map in Photoshop.
Say our needs are basic, and we are ok with using a noise, smoke or whatever map for displacement.
Now let’s also presume our model has big as well as small elements. (Imagine a tree: big trunk, small branches. Or a human: big chest, small fingers, etc.)
So our noise map generates black and white areas, one of which will be extruded outwards (or inwards). Say the average length of an edge in the “big” elements of the model is 10
units, and the average length of an edge in a “small” element is 0.1 units, and say our maximum displacement value is 1 unit. That 1 unit is only 10% of a big edge but 1000% of a small one, so the result is:
a nice extrusion in the big elements of the model, and total 3$%$#% (pardon the expression :) in the small elements of the mesh. Why? See the example below.

displacement
It’s quite hard to see, but the green object shows the geometry and all its edges. Where it says “huge” distortion, I mean it is huge compared to the width of that part of the object. And where it says “small” distortion, again, it is small compared to the general size of that part or element of the object.
So as you can see, the teapot’s main body is big and the displacement is only a small percentage of its general size, while on the thin lines this displacement well exceeds the width of the thin line.
So to reach a situation where big components of the mesh get bigger displacement values and small elements of the mesh get smaller ones, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (or extrusion height, to be more precise; the map itself, the noise image or whatever, remains the same).
Have a look at the images:

displacement II

So what about a situation where we have big, small and medium-sized elements? Or where the object’s parts gradually get smaller or bigger?
Say the trunk and branches of a tree. If we had nice UVs we could draw the displacement map in Photoshop. The big parts would get a black-and-white image whose color extremes go from RGB 255,255,255 to 0,0,0, and the smallest branches would get color extremes of something like RGB 188,188,188 to 170,170,170.
To put it in other words, we would have a contrasty image for the big parts of the mesh, and a very grey image,
as if seen through fog, for the small parts of the model.
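To make the idea concrete, here is a rough sketch (plain Python; everything about it is an assumption of mine, not an existing 3ds Max feature) of how a grayscale displacement value could be squeezed towards mid-grey in proportion to element size. Squeezing towards a lighter grey, like the 170..188 band above, would just mean a different centre value:

```python
# A rough sketch of the remapping idea: reduce the contrast of a displacement
# value in proportion to how small the mesh element is.

def remap_displacement(value, element_size, reference_size):
    """value          -- original map value in 0..255
    element_size   -- characteristic size of the element the texel covers
    reference_size -- size of the biggest element, which keeps full contrast"""
    scale = min(1.0, element_size / reference_size)  # 1.0 for big parts, small for thin parts
    mid = 127.5
    return mid + (value - mid) * scale

# Big trunk (size 10 of 10): the full 0..255 range is kept.
print(remap_displacement(255, 10.0, 10.0), remap_displacement(0, 10.0, 10.0))   # 255.0 0.0
# Thin branch (size 0.7 of 10): the range collapses to roughly 119..136,
# a narrow "foggy" band like the one described above.
print(remap_displacement(255, 0.7, 10.0), remap_displacement(0, 0.7, 10.0))
```

A real tool would of course need the per-element sizes first, which is exactly the problem discussed next.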

But we described a situation where we don’t have nice UVs and we have to use some procedural map for displacement. So no Photoshop. What to do?
We should have a tool which would automatically change the contrast levels of the map (which is used for displacement) according to the size of the mesh elements.
Let’s think how on earth to do that. How can our software know which element is big and which is small? Ok, it’s easy if we have separate mesh elements. Or material IDs.
Then we simply compare their sizes and can work out contrast levels for the displacement values. But what if we have one single object? Let’s see this image:

object

So. It is one single mesh, no material IDs specified, just one single element. But as a human you can easily determine that the
left side is big and the right side of the object is smaller. But how can our software know that?
This is the main question of this blog entry. I do not have answers to it, but I do have some thoughts. Let’s see..
First of all we have the bounding box of the whole object. That is the space we will be working in.

So, thought one.
What we do have is the position of each vertex in space. Now let’s imagine we take each vertex and cast some number of rays in random directions from that vertex. The idea is to calculate the distance these rays travel until they hit another part of the mesh. Well… how to put it… we want to see where the ray intersects the geometry again.
Some rays will go to infinity, but we just eliminate those. A ray that passes the boundary of the object’s bounding box is not interesting for us.
So say we cast 10 rays from one vertex. Some of them will “get lost” and some will return distance values. Imagine our object is a sphere. We take one vertex,
cast rays, and some will go inwards, inside the sphere, and will eventually hit the other side of it, while others will go outwards, leave the object’s bounding
box… and we forget those. The rest will give us the distance to the other side of the sphere. Here we could also get the normal direction of the polygon that was intersected. By comparing the angle between the ray and the polygon’s normal we could probably determine whether we hit the “inside” or the “outside” of an object.
Lets have a look at the image:

objects size determination

So what can we do with all these numbers? Can we add all the values of all rays per vertex into one number? Can we assume that the bigger the number we get, the bigger the chance that this particular vertex belongs to a “big” element of the object? Because, say, with our sphere we would find that almost all vertices have more or less the same values, so they all belong to one big object, which makes sense.
And if, close by, we have a small element of the mesh, the values we would get from its rays would be lower, therefore we could assume it is a smaller element. Now, just adding up the ray distances would be wrong; we should take the average, I guess.. or does it make any difference? Also, values where (by looking at the face normal’s direction) we “assume” the ray has hit a polygon facing us should be taken into consideration with caution.
And values where we “assume” the ray has hit a polygon that faces away could be more acceptable, since we can guess it is part of the same object, that it is the “other side” of that object. This part should be investigated more.
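Here is a sketch of that per-vertex estimate in Python. The `mesh.intersect(origin, direction)` call is a hypothetical stand-in for whatever ray-intersection function the host application provides (assumed to return the hit distance and hit normal, or None if the ray leaves the bounding box), and the 0.5 weight for “facing” hits is an arbitrary assumption:

```python
# Sketch of the per-vertex "thickness" estimate described above.

import math, random

def random_direction():
    # Uniform-ish random direction on the unit sphere (rejection sampling).
    while True:
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        length = math.sqrt(x * x + y * y + z * z)
        if 1e-6 < length <= 1.0:
            return (x / length, y / length, z / length)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def vertex_size_estimate(mesh, vertex_pos, rays=10):
    distances = []
    for _ in range(rays):
        direction = random_direction()
        hit = mesh.intersect(vertex_pos, direction)   # hypothetical API
        if hit is None:
            continue                      # ray left the bounding box, forget it
        distance, hit_normal = hit
        if dot(direction, hit_normal) < 0:
            # The hit polygon faces the emitting vertex: it may belong to
            # another element, so trust this distance less (weight is a guess).
            distances.append(distance * 0.5)
        else:
            # Hit the "other side" of the same element: trust it fully.
            distances.append(distance)
    # Average, not sum, so the result does not depend on how many rays got lost.
    return sum(distances) / len(distances) if distances else 0.0
```

Bigger averages would then hint at bigger parts of the object, exactly as argued above.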

face that ray hits, is it same object or another?

situationC

Oh, some more thoughts on these rays. How do we know where one object starts and another begins? I mean, imagine we have two spheres in one single mesh. How do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits something, and we determine
that the face the ray has just hit is facing the vertex that emitted the ray. The first thought would be: it is part of another object, so we should ignore all distances this ray gave us. But it is still possible that that face belongs to the same object, and the distance data is relevant for determining the object’s size.
So the first way would be to check all neighboring vertices, and see if the face our ray has just hit contains one of them. No? Then check the neighbors of the neighbors..
and so on and on, until we have checked all polygons or vertices of the whole mesh. If it is not in the list, we are sure it does not connect to our vertex, it is indeed a separate entity, and we should ignore it. Or, say, it is in the list but it is 1000 faces away… so we assume it is “too far away” to take into account.
I think this method has two weaknesses. First, I assume it is too time-consuming to check all of the vertices. Second, if it does connect to the vertex that emitted the ray, we need to decide how far is far enough to be “too far”. And that depends on mesh topology and many other things. How can one decide on some value that is too much? That’s why I think we should not go this way.
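For completeness, the neighbor-of-neighbors check described (and then rejected) above would essentially be a breadth-first search with a hop limit. A sketch, assuming an `adjacency` table mapping each vertex to the vertices it shares an edge with has already been built from the mesh:

```python
# Sketch of the "check neighbors of neighbors" idea with a hop limit.

from collections import deque

def is_connected_within(adjacency, start_vertex, target_vertex, max_hops=1000):
    """Breadth-first walk from start_vertex; True if target_vertex is reached
    within max_hops edges, otherwise False ("separate entity" or "too far away")."""
    visited = {start_vertex}
    queue = deque([(start_vertex, 0)])
    while queue:
        vertex, hops = queue.popleft()
        if vertex == target_vertex:
            return True
        if hops == max_hops:
            continue                       # do not walk further than the cap
        for neighbor in adjacency[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, hops + 1))
    return False
```

The sketch also makes both weaknesses visible: the walk can touch the whole mesh in the worst case, and `max_hops` is exactly the arbitrary “how far is too far” value complained about above.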

Anyway, at the end of the day we would have all (or some, if we optimize our solution) vertices numbered or arranged by the “size” of the object’s part they belong to. It can be just a numeric value, or it could be a color value. Maybe the user could manually correct errors produced by our generator by simply painting on the vertices.

After that, our new displacement map would know how to edit the original displacement map to fit the size of the 3D model’s parts.

Ahh.. I am confused myself now. Anyone, any thoughts on the matter? Gee… I wonder if it was possible to understand anything from what I wrote…  :)

I just saw this crazy video on Vimeo (sorry, I embedded the YouTube version, I don’t know how to embed Vimeo) and it’s quite shocking. Ok, all of this is a first impression, so pardon me for strong words and emotions, but it looks so bloody amazing. To be honest I am not sure whether this is a plugin written by someone (Thiago Costa?) that will be sold separately, or whether it will be part of Softimage. But it looks quite amazing. (To say the least.)

Ok, on second thought… we have stuff to simulate similar things. But two things I haven’t seen yet are:

1. At the beginning of the video you see stuff breaking apart. It’s like a solid material breaking apart. Hard to explain; I mean the stuff is breaking into both small and big pieces. You can imagine big rocks and debris… and I haven’t seen this yet. Of course one could make this easily with two separate simulations, I guess… like having one solution for big things cracking and another for small dust-like particles.. but this seems to be a “one button” solution. Also I am sure one could achieve something like this in RealFlow from Next Limit. Still, it does look amazing. And it’s nice to see someone challenging Next Limit; it seems to be the king of this physics simulation game :)

and

2. It is semi-liquid, blobby stuff which tears apart. In other words, a mesh that not only deforms but also breaks apart..

This I have never seen before. I would love to read some papers on it. Anyone? How do we solve the mesh topology problems?????? I have no clue, my imagination doesn’t help here… any ideas? Anyone?

anyways, be sure to see all of this:

http://vimeo.com/thiagocosta

I am no expert on it, but I would guess that ICE was something that made it possible, or at least helped… or?

Oh, I have a blog? Seems I forgot about it for a while :)  So, some ideas on textures.

We use different textures for creating all kinds of materials for our 3D models.
In some cases we use tiling and mirroring as different ways to apply textures to existing 3D objects with UV coordinates.
But in some cases we use textures in architectural situations where we have, say, a couple of tiles to cover a whole wall. Having a nice texture of, say, one tile, we can
make it cover the whole wall by tiling and mirroring. To achieve more varied results we use the brick procedural map generator (in 3ds Max), which basically helps us to lay out our
tile image in a more varied fashion.
But if you ever look at Islamic art and the architectural tradition, you will see that some of the ornaments and tiling can blow your mind with their complexity.

Here I add some articles which are very interesting regarding this topic.

Geometry feat cloaked in medieval Islamic tile,
“Cintamani” and Islamic Tiles,
A Discovery in Architecture: 15th Century Islamic Architecture Presages 20th Century Mathematics,
Q1 Project – Islamic Tiles,
The Art of Mathematics Islamic patterns,
Sometimes you look at a wall and fail to see the contours of the single pattern which is repeated.
tower decoration pattern

To some extent the same can be said about baroque wallpapers in Europe. And probably in any culture and time period you could find very complicated pattern arrangements.

So while analysing a couple of Arabic patterns I thought: why do I have to spend hours tiling, mirroring and rotating stuff in Photoshop, if it theoretically
could be done directly in the material editor?
Many patterns have a basic triangle shape. So if we have a correct texture which contains our desired basic pattern element in a triangle shape, we could,
instead of tiling or mirroring it, rotate it. Since all bitmaps, as far as I know, are rectangular in shape, we would need to find and identify the triangle element which we
would use for rotation. Technically, I guess, we could develop a type of bitmap which is triangular in shape. I guess its resolution would be defined not as,
say, 640×480 but as, I don’t know, 1 pixel in the top row, 640 in the bottom row, 480 pixels high, or something like that.
But we will try to work with traditional bitmaps here. So say we open a texture in our brand new material editor. Then we should define a triangle in
one way or another. I guess our material editor should let us draw a triangle on top of the texture, something like guides in Photoshop. We would
make sure these guides match the triangle shape in the bitmap as closely as possible. The next step would be to choose which corner of the triangle is our pivot point,
or in other words where its rotation centre will be.
rotation of images
According to the angles of our triangle, the new material editor should copy and rotate the texture the number of times that comes closest to a full circle.
(Say the pivot angle of our triangle is 45 degrees; the material editor would then copy/rotate our original image exactly 8 times, if I am still able to count :) )

So the next thing: after we have a nice circle with a texture which repeats itself not by tiling or mirroring but by rotation, we need to describe how to make many such
circles to cover our full wall with the new pattern.
Here I discovered there are 3 types of triangles: perfect ones, good ones and bad ones.
A perfect triangle is one which has a 60 degree angle in each corner (picture a beehive).
A good one is when dividing 360 by its pivotal angle gives a whole number: 1, 2, 3, 4, 5, 25 and so on,
and not something like 5.22321445524… which, in turn, is the description of a bad triangle. What I mean is that the number of copies needed to form a full circle should always be a whole number.
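A tiny sketch of that test, just to make the arithmetic explicit (the tolerance value is an assumption):

```python
# "Perfect / good / bad" triangle test: the pivot angle must divide 360
# into a whole number of copies to close the circle.

def classify_pivot_angle(angle_deg, tolerance=1e-9):
    copies = 360.0 / angle_deg
    if abs(copies - round(copies)) > tolerance:
        return "bad", copies                     # e.g. 5.1428... copies: cannot close the circle
    if abs(angle_deg - 60.0) < tolerance:
        return "perfect", int(round(copies))     # 6 copies, tiles the plane with no gaps
    return "good", int(round(copies))            # whole number of copies, but leaves gaps

print(classify_pivot_angle(60))   # ('perfect', 6)
print(classify_pivot_angle(45))   # ('good', 8)  -- the 8 copies mentioned above
print(classify_pivot_angle(70))   # ('bad', 5.142857142857143)
```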

hexagon is formed out of 6 triangles which have 60 degree corners

Anyway. So a triangle with 60 degree corners can cover any surface. All other “good” triangles also cover the whole surface, but with some gaps between themselves.

good and bad triangles

This is also ok, but we should have an option to export this “gap” to a bitmap file, so the user can draw whatever he or she wants there in Photoshop and import
it back into our map generator. So basically, if we use any triangle other than a 60 degree one, we need two textures, not one, to form our surface.

more, coming soon.

Ugly non-uniform rational B-splines – DIE !!!!!

(Click below to see the video. A MUST SEE.)

Rhonda is here :

more about “NURBS must die! Rhonda is here”

Ok, ok.. we are all quick to announce deaths when new things show up. Painting was dead when photography was invented. Softimage was dead when it was bought by Autodesk, I think Maya too :)

Ok, ok, Rhonda is probably not threatening Rhino. Or NURBS in general. But I still think it could be a very nice tool for two things. Maybe three.

1. Sketching. The speed at which you can (it seems) create is incredible (Google’s SketchUp, look out :)

2. Creating models for animation films in a very distinct style. (Well, I wonder how usable the models will be, and whether it will be lines only or whether we will be able to make surfaces out of them, and how.)

3. Making 3D more accessible to people who do not belong to this industry. Again, look at SketchUp.

Imagine something like Rhonda as an interface for drawing NURBS… that would make me create some models with NURBS instead of polys, for sure.

Lots of real-time as well as non-real-time 3D solutions have the same strategies for dealing with intersecting meshes. We find that in the Havok physics engine, in any cloth or hair simulator, and so on. Despite the on-average very good results these tools provide, we still sometimes find objects or meshes which intersect.

That might happen for many reasons. Sometimes it’s imperfect physics simulation tools, but more likely simple computing restrictions (not all the objects, in their highest resolution, can be added to the physics engine; it would take years to calculate “everything”). And sometimes some objects are keyframed and do not obey the physics engine; I don’t know if that is a real reason, but I imagine a physics engine could, in some situations, have a hard time dealing with such “restricted” situations. Also small spaces, different scales and so on… all these things do not help to achieve the best results.

There is a long history of research, as well as commercial tools available. RealFlow is at version 4, Syflex is at version 3.9.

But I have not yet heard of research on “hiding” problems instead of solving them. What does a rendering engine do if we have two polygons in exactly the same space?

From my experience they usually go mad; the shading becomes really fu$$$$ up. Therefore my question is: is there any research on hiding problems in situations where we cannot solve them? Imagine tree leaves moving in the wind.. it takes quite a bit of computing power to make sure that none of the individual leaves intersect with each other. Wouldn’t it be cool to mask these problems when they do happen?

Also, we often use two or more intersecting planes for making trees. In many games we make trees like this:

billboard

It’s only two polygons (4 triangles, I think), and both carry the texture of a tree plus an alpha map, so we see only the tree shape; we should not see the shape of the real geometry of our 3D tree.

What if the renderer would automatically blend the textures in the intersection area, so we don’t see a line?
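As far as I know, something in this spirit exists in games under the name “soft particles” or depth fade: a fragment is faded out as its depth approaches the depth already stored in the z-buffer, which softens exactly this kind of seam. A rough sketch of the blend factor, in plain Python standing in for shader code (the fade distance is an assumed parameter):

```python
# Sketch of a depth-based fade that hides the seam where two surfaces intersect.

def intersection_blend(fragment_depth, scene_depth, fade_distance=0.2):
    """Return an opacity multiplier in 0..1.

    fragment_depth -- depth of the billboard/leaf fragment being shaded
    scene_depth    -- depth already stored in the z-buffer at that pixel
    fade_distance  -- over how many units the blend happens (assumption)"""
    difference = scene_depth - fragment_depth
    if difference <= 0.0:
        return 0.0                 # fragment is behind existing geometry
    return min(1.0, difference / fade_distance)

# Right at the intersection line the two planes have (almost) equal depth,
# so opacity drops towards 0 and the textures blend instead of forming a hard edge.
print(intersection_blend(10.00, 10.02))   # 0.1  -> mostly the other surface shows
print(intersection_blend(10.00, 10.50))   # 1.0  -> fully this surface
```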

Has it already been done? Maybe it was only done a long time ago?

Anyone know?

Leave me a comment if you know something about the topic.

Ok, let’s continue with the craziest 3D ideas ever. It was a restless night for me, therefore two posts are coming.

So we all know how, in 3D, rendered objects have those (well, mostly) ugly edges. They are so sharp, so CG, so unreal.

Many methods were created to avoid this. We could use falloffs in the opacity channel, or put things out of focus….

What if we mixed their materials? I don’t know what would happen, but this is a place to theorize and think.

So free your mind :)

So look at the picture:

scheme of pixel material blending

So I am just thinking how things would look if we used a kind of anti-aliasing, but one that actually applies a blended material to the pixels where two objects are close to each other. It would be cool if we could control the “size” of the blending. What I mean here is..

If we look at the picture as it is right now, we’d see that only 1 pixel has a blended material, I mean only the one which has a neighboring pixel of the other material. What if we could apply this blended material to 2, 3 or more pixels? What I mean is…. shit, how to put this…..

Another picture?

pixel blending

And what if we could tie, let’s say, depth (z-buffer?) to the number of pixels blended?

Like, close to the view plane only 1 pixel at an edge is blended, while further away more pixels would be blended?
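Just to pin the idea down, here is a rough post-process style sketch (plain Python over tiny grayscale “images”; the depth-to-radius rule and the simple averaging are both assumptions of mine, a real renderer would do something smarter per fragment):

```python
# Sketch: blend materials only in pixels that sit on an object boundary,
# with a blend radius that grows with depth (z-buffer value).

def blend_radius(depth, near=0.0, far=100.0, max_radius=3):
    """1 pixel near the camera, up to max_radius pixels far away (assumed rule)."""
    t = max(0.0, min(1.0, (depth - near) / (far - near)))
    return 1 + int(t * (max_radius - 1))

def blend_edges(color, object_id, depth):
    """color, object_id, depth are 2D lists of the same size (renderer buffers)."""
    height, width = len(color), len(color[0])
    result = [row[:] for row in color]
    for y in range(height):
        for x in range(width):
            radius = blend_radius(depth[y][x])
            neighbors = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < height and 0 <= nx < width:
                        neighbors.append((ny, nx))
            # Only blend pixels that have a neighbor belonging to another object.
            if any(object_id[ny][nx] != object_id[y][x] for ny, nx in neighbors):
                result[y][x] = sum(color[ny][nx] for ny, nx in neighbors) / len(neighbors)
    return result
```

The blend only kicks in where object IDs differ, so flat interiors stay untouched and only the edges get the “mixed material” look.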

Has this been done already? Would it work? Would it be cool, or ugly?

Anyone want to write a renderer?

:)

You can also visit my portfolio.
Don’t like English? You can (try) reading it in Lithuanian or any other language here! The alternative has no Lithuanian :(

live shadows / crazy ideas I
This will be a thought about an effect which could be achieved using a modified ray tracing engine. It has to do with non-photorealistic rendering, and a magical look. Ok, so first of all, I am no programmer or mathematician of any sort, so please judge my ideas with that in mind.
First of all, what is ray tracing, or what are ray-traced shadows? I will try to describe it in one paragraph, as I understand it myself.
As far as I understand, first we have a viewing plane, from which we “cast a ray” towards an object in the scene. At the point where it touches the object we find out the object’s properties and return a color value to the rendered image, to the particular pixel of the view plane we started casting the ray from. Next we cast a secondary ray, which starts from the point where the first ray touched the object. This ray goes towards a light source. If any object is in its way, we know that the light we just mentioned casts a shadow there. I hope it’s possible to follow me, but actually it doesn’t matter that much, because anyone can look up ray tracing on Wikipedia: http://en.wikipedia.org/wiki/Ray_tracing
The idea that comes to my mind is: what if we treated light differently? What if the rays would not travel in a straight line, what if they would curve?
As far as I understand, a curved secondary ray would result in curved, distorted shadows. Which might be nice. Let’s have a look at the pictures, if that helps understanding.

crazy raytracing?
Of course we would want to curve our rays only very slightly, to give a subtle feel to it :)
What if we could animate the curves along which the light is traced? What if we could make the wind move the shadows themselves? Realistic? Not at all, but it might be an interesting visual effect?
Any ideas?
Any ray tracing engine writers out there?
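I am not one of them, but here is a toy sketch of what a “bent” shadow ray could look like: instead of testing a straight segment towards the light, march in short steps and nudge the direction a little each step. The `occluded(point)` function is a hypothetical stand-in for an “is this point inside some object” test, and the bend vector is an arbitrary assumption; animating it over time would be the “wind moving shadows” idea.

```python
# Toy sketch of a curved shadow ray: march towards the light in short steps,
# bending the direction slightly each step, and report whether anything blocks
# the curved path.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def curved_shadow_ray(hit_point, light_pos, occluded, steps=32, bend=(0.02, 0.0, 0.0)):
    point = hit_point
    for step in range(steps):
        to_light = tuple(l - p for l, p in zip(light_pos, point))
        remaining = sum(c * c for c in to_light) ** 0.5
        direction = normalize(to_light)
        # Nudge the direction a little; a different or animated `bend`
        # would give differently distorted, drifting shadows.
        direction = normalize(tuple(d + b for d, b in zip(direction, bend)))
        point = tuple(p + d * (remaining / (steps - step)) for p, d in zip(point, direction))
        if occluded(point):
            return True        # something blocks the curved path: in shadow
    return False               # reached (roughly) the light unobstructed

# Example: a unit sphere at the origin as the only occluder; a point below it
# is in its (slightly curved) shadow.
inside_sphere = lambda p: sum(c * c for c in p) < 1.0
print(curved_shadow_ray((0.0, -2.0, 0.0), (0.0, 3.0, 0.0), inside_sphere))   # True
```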

————————-

