The default size of pelt maps in 3ds Max: a question.

So this one is, again, a call for a new tool. Then again, it might well be that some of you know a workaround or some other way to achieve this; if so, please share your knowledge with us.

image of big head and small eyes

So it's all about texturing and laying out UV maps. Let's say I have a character which I am planning to texture using the Unwrap UVW modifier and, say, the pelt mapper. My character has a separate mesh for the head, the torso, the clothes, the hands and so on. After I pelt each of them, my character's head texture, hand texture and all the rest come out at the same size: they are all fitted to the same square. Now, if I want to hand-paint textures, things like the width of a line in the texture matter. So in a case where I have, let's say, a giant head and tiny, tiny eyes, the unwrapper will make them the same size. And that gives me a problem: if I apply the same texture, say one with dots, the dots on the head will be huge compared to the ones on the eyes…

Naturally, in the unwrapper I can resize everything, but as far as I know that is all done by hand. So the question remains: how do you make many pelt maps from many separate objects so that their sizes stay proportional to the sizes of the actual geometry?

default size of a pelt map in the unwrapper
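For what it's worth, the core of the fix is small. Here is a minimal sketch (plain Python, not MAXScript, and the numbers are invented) of how one could rescale each pelt island so that all islands share the same texel density, i.e. so UV size stays proportional to real surface area:

```python
import math

def island_scale(surface_area_3d, uv_area, target_density):
    # Texel density of an island ~ sqrt(uv_area / surface_area_3d).
    # Return the factor to scale the island by so it matches the target.
    current_density = math.sqrt(uv_area / surface_area_3d)
    return target_density / current_density

# Hypothetical numbers: a 400-square-unit head and a 1-square-unit eye,
# both pelt-mapped into a full 0..1 UV square (uv_area = 1.0).
head = island_scale(400.0, 1.0, 0.05)  # -> 1.0, stays as it is
eye  = island_scale(1.0, 1.0, 0.05)    # -> 0.05, shrinks 20x
print(head, eye)
```

Run over every pelt island with one shared target density, this would give exactly the "sizes proportional to the actual geometry" behaviour the question asks for.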

——————————

Ok! It seems I have found an answer myself :) I found this great tool – Unwrella! From here on it's sort of advertising, but I found that it solves exactly this problem: it makes all the pelt UVs the correct sizes relative to each other! Text from their website:

Unwrella is an exact unwrapping plug-in for Autodesk 3DSMAX and Autodesk Maya. It is a single click solution which allows you to automatically unfold your 3D models with exact pixel to model surface aspect ratio, speeding up texture baking UV map production significantly.

  • Automatic one-click solution – Just apply the Unwrella modifier
  • Precise – Preserves user created UV Seams
  • Smart – Reduces texture mapping seams almost completely and minimizes surface stretching
  • Efficient – Chunks are kept large and are arranged on the UV surface with maximal use of available space
  • User-friendly – User defined pixel based padding between UV chunks
  • Excellent for all kinds of models (organic, human, industrial)

A small UI problem with the viewport camera position when using the “zoom to selected” tool.

So here is another minor complaint about 3ds Max's UI. It is very minor, in essence just a tiny thing, but I personally think it could, and should, be improved. It is about the camera position (not exactly the camera, just the viewport) when you are modeling. I guess many of us Max users use the “zoom to selected” viewport tool; I use it a lot, especially when modeling. So the situation is this: you are modeling something and select a couple of vertexes which happen to be in the same position. You can't really see anything, because right now you are looking at the whole model, so you hit zoom to selected and, bam, most likely you see… nothing. Or just some part of your object, which has suddenly lost its shape. It is viewport clipping, I guess: the zoom tool zoomed so close to the selected vertexes that Max decided it was too close and simply clipped the model. Or that is what I think happened; I am not sure it is exactly what happens, but it sure looks like it. Look at the pictures, or try it yourself. Why does Max do this?

just an object

(Usually this happens when we model, say, a character and work on only one half of the model. The other half we mirror, and later on we want to weld the vertexes which are duplicated between the original and the mirrored object… at least that is one situation where this occurs.)

So what do we do? Just scroll back a bit and, bam, the model is back, and we are close to our selected elements, ready to work. But if you are inexperienced, you might go… “Ouch, what happened???”, or just scroll back like you have done a million times. But couldn't the viewport camera positioning and the zoom-to-selected code be tweaked so it doesn't go to such extremes that the model gets clipped? Anyone? So it's just another small complaint. Yes, I know there are a million other things that are more important to improve, and this is just a tiny annoyance… but if it took only a couple of lines of code, might it not be worthwhile to write them? I suppose it should be quite a straightforward thing; it's not like someone has to write Lagoa for Max or anything like that… (Yes, Lagoa would be cooler to get… :) Am I right? Am I wrong?

———————–

1. (in image) The clipped area that is not shown….
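Since we are wishing for code anyway, here is a tiny sketch of what such a tweak might look like. This is purely hypothetical Python, not Max's actual viewport code; the idea is simply to clamp the selection's bounding radius to a fraction of the scene before computing the camera distance, so coincident vertexes can never pull the camera inside the near clip plane:

```python
import math

def zoom_to_selected_distance(sel_radius, scene_radius, fov_deg, near_clip,
                              min_fraction=0.005):
    # Never let the framing radius collapse to zero: treat the selection
    # as at least min_fraction of the whole scene's bounding radius.
    radius = max(sel_radius, scene_radius * min_fraction)
    # Back the camera off far enough for that radius to fill the FOV.
    dist = radius / math.tan(math.radians(fov_deg) / 2.0)
    # And never go closer than twice the near clip plane.
    return max(dist, near_clip * 2.0)

# Two coincident vertexes (radius 0) in a scene of radius 100:
print(zoom_to_selected_distance(0.0, 100.0, 45.0, 0.1))  # ~1.21, not 0
```

A couple of lines of clamping, basically, which is why I suspect the real fix would indeed be cheap.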

Extrude along spline in 3D Studio Max

As many of you do, I enjoy working with the “extrude along spline” tool. It makes modeling much faster in some cases, and it has the flexibility to achieve various results quickly. I find it very good when my task is not very constrained, when I work on something that I can change and create at will. But some time ago I needed to do something more precise. To be exact, I had to lay some lines on a sphere's surface. I remembered a nice tool we have in the NURBS arsenal called “create CV spline on surface”. Once I had my splines on the surface, I needed to use them as the splines for extrusion in my model. And then I had some trouble with the extrude along spline tool. It just doesn't seem to actually follow the given curve; or, to be more precise, the result has the same shape as the curve, but its orientation is all wrong. So I had to find out how to fix it. If any of you happen to have the same problem, here are some screenshots of the problem and its solution.

 

object and the spline

 

Just a picture of the object and the curve.

extrusion

So we select the faces, go to extrude along spline, and choose our spline.

 

aligning

“Align
Aligns the extrusion with the face normal, which, in most cases, makes it perpendicular to the extruded polygon(s). When turned off (the default), the extrusion is oriented the same as the spline. When aligned to the face normal, the extrusion does not follow the original orientation of the spline; it is reoriented to match the face normals, or the averaged normals for contiguous selections. The Rotation option is available only when Align To Face Normal is on.” – description from the 3ds Max help file.

 

rotation

 

“Rotation
Sets the rotation of the extrusion. Available only when Align To Face Normal is on. Default=0. Range=-360 to 360.” – again, from the 3ds Max help.

make it first

 

So here we select the vertex which is closest to the extruded polygon, so we can make it “first”.

voilà!

So there we have it: now our extrusion actually follows the given spline. This seems to solve my problem; I hope it helps you too.
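If you wanted to script the fix instead of clicking “Make First” by hand, the underlying logic is tiny. A sketch in plain Python (not MAXScript; the spline here is just a list of XYZ points, an invention of mine for illustration):

```python
def put_first_vertex_near_face(spline_points, face_center):
    # The extrusion starts at the spline's *first* vertex. If the far end
    # of the spline happens to be first, the extrusion comes out with the
    # wrong orientation, as in the screenshots above. Reversing the point
    # order is the scripted equivalent of the manual "Make First" fix.
    def dist2(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))
    if dist2(spline_points[0], face_center) > dist2(spline_points[-1], face_center):
        return list(reversed(spline_points))
    return list(spline_points)

# The end at (0,0,0) is nearer the face, so the order gets flipped:
print(put_first_vertex_near_face([(5.0, 0, 0), (0.0, 0, 0)], (0, 0, 0)))
```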

 

 

Noise, Perlin noise and the aesthetics of random

Here it is, a title which, as usual, is quite confusing. First of all, we will not discuss the mathematical nature of various random number generators here (though that is also an interesting topic). We will focus on the aesthetics of random. Random… what? Well, anything random, I guess.

As I see it, there are two ways of making things aesthetically more interesting*. One is depicting more. Say we have a plain wall: we might want to see the five layers of paint that were applied, and in some parts the paint is cracked and we see bricks, plus a spider web, and so on. This is one approach, and it is the best way to achieve visual detail and make our frame more interesting. But it takes time to construct the wall, the spiders, the dirt and all the rest. The other approach is to increase detail by adding random elements which by themselves do not depict anything in particular, but in a certain context might be perceived as intended detail.

plain white wall
rich white wall

Again, noise is just noise. But if we place gray noise above a depicted fireplace, the viewer will think it is smoke. If we add the same gray pattern of noise to a plain wall, the viewer might interpret it as dirt on the wall. This way is much faster. And this is what we will talk about.

First of all, let's look at a simple Perlin noise pattern and at multiple scaled Perlin noise patterns (image A).

Perlin noise, taken from Wikipedia

And in picture “B” we see variations of Perlin noise (here I am not certain it is actually Perlin noise, probably not; corrections are welcome) which are intended to depict something more precise than abstract noise itself.

procedural maps “Cellular, Dent, Perlin Marble, Marble”, 3ds Max, image B

So a simple comparison could sound like this.
Perlin noise: a boring, plain noise which is rather abstract and doesn't depict anything in particular. That is also a good thing when it is used as a building block for more sophisticated noise-like effects.
Then there is nothing better than multiple scaled Perlin noises: they contain elements of different sizes (see image A) and are almost a god of the CG world (no?). And then there are variations of Perlin noise, which can be shaped to depict wood and other interesting patterns. But what is the difference between Perlin noise and multiple scaled Perlin noises? Well, I would say it is all the same plain noise, but it moves towards depiction, towards sophistication, away from randomness.
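By the way, “multiple scaled Perlin noises” is what is usually called fractal or fBm noise: several copies of the base noise are summed, each octave at double the frequency and half the amplitude of the previous one. A minimal sketch in Python, using a cheap hash-based value noise in place of true Perlin gradient noise (an approximation, for illustration only):

```python
import math

def value_noise(x, y, seed=0):
    # Hash the four surrounding lattice points, then smooth-interpolate.
    def h(ix, iy):
        n = ix * 374761393 + iy * 668265263 + seed * 144665
        n = (n ^ (n >> 13)) * 1274126177
        return ((n ^ (n >> 16)) & 0xFFFF) / 0xFFFF
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    sx = fx * fx * (3 - 2 * fx)  # smoothstep fade
    sy = fy * fy * (3 - 2 * fy)
    top = h(x0, y0)     + sx * (h(x0 + 1, y0)     - h(x0, y0))
    bot = h(x0, y0 + 1) + sx * (h(x0 + 1, y0 + 1) - h(x0, y0 + 1))
    return top + sy * (bot - top)

def fbm(x, y, octaves=5):
    # Sum the noise at several scales: each octave doubles the frequency
    # and halves the amplitude, giving "different size elements" at once.
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return total / norm  # stays in [0, 1]

print(fbm(3.7, 1.2))
```

One octave gives the plain, boring noise; several octaves give the richer multi-scale pattern described above.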

So the desired noise for image enhancement should look like intended detail rather than just random detail. What I want to say is, it's not the detail itself that is valuable; it's detail that has something behind it. And it's quite tricky to make automatic algorithms that would add not just detail, such as Perlin noise, but noise of a sort that somehow depicts something, without the necessity to actually take care of what it depicts. For that, I think we need to look at procedural maps such as “Planet” in 3ds Max. It is gone in the new Max versions… (why?). Also, in some older posts I suggested building materials, or procedural maps, that would “feel” the geometry they are applied to. In particular, the last link talks about extra detail on the edges of a mesh.

One example from the real world. Some days ago I was walking down a street and saw a window. It was dirty and covered in paint. But the paint had run not downwards but upwards. The first thought would be that this is unneeded detail which, if it were a CG shot and not real life, would make no sense and would complicate the reading of the image or shot. And then I found a possible reason for the paint running “upside down”: most likely the glass was already dirty, and the person who made the window used the dirty glass without orienting it in any particular way with respect to the paint. What I am trying to say is that an observer will always try to find explanations, and be they correct or incorrect, one can always find a reason behind something. And here is the question: if this had been a CG shot, would it be a bad shot, because noticing this makes the viewer uncomfortable and adds an unnecessary level of thinking? Or is it a good shot, because rather than a clean, boring, “plain” glass we have some visual detail…

So I guess the point of this post is to note that we do want detail, but not just any detail; we want it to be “in context”. It has to depict something, somehow. And if it doesn't actually depict anything in particular, we should strive for the viewer to find its meaning on their own. It's a tricky task, I guess.

—————————————————————————————————————————————————————–

*Here I have to make a note and remind you that I am by no means implying that more complicated is better, that more detailed is better, or that visually overwhelming is desirable. But in some cases it might be so (or some of these statements might be, at least). Just imagine a white wall shot in close-up, so that all you see is the wall; it covers the whole screen. Now say the wall is white. We can imagine a situation where the whole image consists only of pixels of RGB value 255,255,255. If you did not know beforehand that it was a wall, wouldn't you say it's just a blank white screen? So in some cases more detail is better. And today's whole post is dedicated only to this situation, where more detail is better.

——————————————————————–

I remembered this blog article of mine since I found this cool interview with Ken Perlin himself! One nice idea of his: if something is too complicated, add another dimension to it. What does that mean? Listen to his interview at fxpodcast here:

link to the fxpodcast with Ken Perlin, about noise, aesthetics and Perlin noise too!

another article here: link

3D multicellular vs hand drawing in ink :)

As usual, it has been quite a long time since I posted anything here. Unfortunately, even though I have some drafts, I just can't sit down and finish them. Therefore I decided to go for an easy post. It's like writing a software review or a movie review (meaning a reflection on something rather than a creation; and here I am simplifying a bit, as there are surely complicated and “heavy” movie reviews, as well as shallow “creation” stories). Anyway, this is going to be a rather shallow story. But oh well…

Like all of us weak human beings, from time to time we do things we are not supposed to be doing. Sometimes it's something very wrong, but more often it's just something we are not supposed to be doing because we are supposed to be doing something else. And most of the time we are watching movies or TV, reading a stupid book, or… I don't know. There are hundreds of things to do in order to avoid the thing you actually must do. It could be anything. As a member of the human race, I too do stupid things. But lucky for us humans, from time to time, on rare occasions, these stupid things lead to something. And even if it is not something great, or of huge value, it's still sort of positive. Well, the judgment may vary. Still. As it happens, during one of these times I was procrastinating, I happened to be inspired by one of my own ink drawings and tried to do something of that sort in 3D. “Ah, finally,” you will say, “he talks about 3D; it's about time.” So here it is, a small, rather dumb project. Who knows, when ‘the stars are right’ :) , one day I might actually make something good out of it. But for now we have:

A. A hand drawing of a… I don't know: a plankton, a bug, a molecule, a microscopic being, an alien, a virus of sorts?

ink version of a bug

B. A wireframe of its recreation in 3D. Then we use some materials with falloff in the opacity channel, add some depth of field in our mental ray rendering engine, invert the image, and here is what we have:

wireframe of a bug

C. The final output:

final render of a bug

About blogging and a 3D magazine called “Digital Production”

99 winner

Some weeks ago I had a lucky day. It happened like this: I clicked on one of those “your IP address is the lucky 100000th visitor, come and get your free iPhone” ads. Ok, it was not quite so. I commented on a blog I tend to read from time to time. And guess what happened? I got a reply saying: you are the 1000th commenter on my blog, and I want to give you a present; I'll buy you a book at Amazon, or something like that. And I chose a magazine.

the post has arrived!

And guess what? Here I am holding a package from Berlin! A lucky day, isn't it? Well, everything comes at a price; I also received this small note:

a note

The language in this note is hard to understand; it's not German or English – it's “handwriting”. Do you know it? :)

Well, it says that when I get the 1000th comment on my blog, I should send a similar gift to the commenter, just the way I got my present. Now, I wouldn't consider myself a blogger. The reason is that I don't read many blogs, and this blog is more of a tool to talk about 3D and CG. So I guess what I am trying to say is, I am not sure whether this is a popular trend among bloggers or whether “Aidenium” thought of it himself. But I think this gift-giving is a cool idea.

So I hereby promise to all bloggers and readers that my 1000th commenter will receive a similar prize. Now, to tell you the truth, it will not happen very soon, as I have only 88 comments so far :) But it's the nice idea that counts, right?

Ok. Since this blog is not exactly about my personal life, I will try to bend this post towards this blog's topic, which is 3D. And here it is: a magazine I have been reading for a couple of years now. Why do I love it so much? Well, to tell you the truth, I love it more than 3D World. Why? Well, this is only my personal opinion.

3D World cover

But… I find 3D World to be more of a collection of very nice images and interviews with the people who make them. And these interviews are more personal in style: what inspired you to create this art piece? What games do you like, what books do you like to read? That is all definitely very interesting, but I find it lacks depth. It's more of a casual read with inspiring pictures, while Digital Production is a much harder read.

Digital Production, the magazine

It's much more technical, and as such more interesting. Ok, this statement probably is not fair; it's probably like comparing apples and oranges. But suppose I can have only one magazine (it's expensive for me to buy and ship magazines): the big, fat Digital Production costs 14 euros, while the thin 3D World costs about 6 pounds, if I remember correctly. Digital Production is 130 pages. I don't have a copy of 3D World at hand; does anyone know how many pages it has? Again, one could say that counting pages is not fair, since Digital Production covers more than just 3D. And that is true. But I think a 3D artist should know what's going on in the compositing world, in 3D cinema, and so on… So judge for yourself which you like better. Oh, and one small thing… Digital Production is all in German… except for some titles :) I forgot to mention that, so take out your fat German dictionary and let's read it :) Here I will post some images I grabbed from the latest issue; tell me if you find something similar in 3D World.

different images from articles posted in Digital Production

By the way, what's your favorite 3D-related magazine? And of course, dear blogger, what do you think of the 1000th-comment idea?

A proposal for the UVW Map modifier

This proposal is very simple: enhance the existing UVW Map modifier with the ability to apply UV mapping not to a single mesh as a whole, but to each element of the mesh.

I know it is possible to achieve the same result by selecting each element in Edit Mesh or Edit Poly mode and detaching those elements from the single mesh, ending up with lots of objects to which you can individually apply a UVW Map modifier. But in some cases, when you have a complex mesh consisting of lots of elements, that can be very time-consuming work. Such complex meshes are often the result of importing geometry from other 3D applications. So the idea is to apply, say, a spherical or box or whatever type of UV projection to each separate mesh element, without needing to divide the mesh and detach parts of it.

Now, if we have a complex model, say a tree, or, I don't know, anything really, and we are not planning to get perfect UVs by pelt mapping or by selecting separate polygons and applying separate planar maps to parts of the mesh, we would want some simple method of getting UV coordinates quickly and semi-accurately. In the current situation we can simply apply a UVW Map modifier to the whole object and hope that a box or cylindrical projection will do fine, and in some cases it does: say the model is far from the camera and we have no time to produce good UV coordinates. But imagine we had a middle-quality solution, something between creating precise UV maps for separate polygon selections (or pelt mapping) and simply applying, say, a cylindrical projection to the whole mesh.

I propose giving the UVW Map modifier a “per element” button which would apply the selected projection method not to the whole object but to each separate mesh element. We would also have the ability to manipulate (scale, rotate, move) the UV projection of every single element of one mesh within a single interface, without needing to apply tens or hundreds of Mesh Select and UVW Map modifiers and getting lost in a huge modifier stack. Let's look at the images below; a rough sketch of the idea follows them.
1. cylindrical map projection applied to a single mesh.

2. lots of UVW Map and Poly Select modifiers needed to achieve accurate projections for the whole model.

3. proposed per-element projection mode

4. the same, but with a floating toolbox for selecting each element and the ability to apply a different projection mode (cylindrical, box, etc.) to each.
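As promised above, here is a rough sketch of the per-element logic. It is plain Python over an abstract vertex/face representation, not the Max SDK, and only a planar top-down projection is shown; box or cylindrical projections would slot into the same loop:

```python
def mesh_elements(faces):
    # Group faces into connected "elements" (in the 3ds Max sense)
    # using a union-find over shared vertex indices.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for f in faces:
        for v in f[1:]:
            parent[find(f[0])] = find(v)
    groups = {}
    for i, f in enumerate(faces):
        groups.setdefault(find(f[0]), []).append(i)
    return list(groups.values())

def planar_uvs_per_element(verts, faces):
    # Give every element its own planar projection, normalized to that
    # element's own bounding box -- the proposed "per element" button.
    uvs = [None] * len(verts)
    for element in mesh_elements(faces):
        vids = {v for fi in element for v in faces[fi]}
        xs = [verts[v][0] for v in vids]
        ys = [verts[v][1] for v in vids]
        w = (max(xs) - min(xs)) or 1.0
        h = (max(ys) - min(ys)) or 1.0
        for v in vids:
            uvs[v] = ((verts[v][0] - min(xs)) / w,
                      (verts[v][1] - min(ys)) / h)
    return uvs
```

The per-element manipulation UI (scale, rotate, move each projection gizmo) would then just edit one transform per group instead of per object.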
Some additional thoughts.
Why think only about mesh elements; could we use material IDs for the same purpose? Could we apply a separate UVW projection method per material ID as well? That could come in handy too..?

mapIDs for UV coordinate sets

The next question would be: can we use both material IDs and mesh elements? Is there an easy and fast (one-button) way to convert separate mesh elements to material IDs, or to convert material IDs to mesh elements? And if we were to write such a tool, where would we place it? Should it be part of the Edit Poly modifier? Edit Mesh? Or the UVW Map modifier?
Any ideas?

VideoTrace – 3D modelling using real video

It was quite a long time ago that I found this video somewhere, and it impressed me. It's about 3D modeling; actually, about a very fast, SketchUp-like 3D modeling interface based on filmed video. It definitely cannot be a solution to all modeling needs, but one could benefit from such a fast prototyping tool. Please have a look at the video; it shows everything rather clearly. If anyone knows anything about the future of this product / research, please drop me a line in a comment,

thanks.

Displacement mapping and mesh.

A strange title, I know. But it says what I was thinking about, and what tool I would like to see in 3ds Max.
Ok. Let's imagine a situation where one has a 3D model with UV mapping. Say we want to use a displacement map, and say we have no time to use ZBrush, or even to use pelt mapping or Unwrap UVW to generate nice UVs for painting a displacement map in Photoshop. Say our needs are basic, and we are fine using a Noise, Smoke or whatever map for displacement. Let's also presume our model has big as well as small elements (imagine a tree: big trunk, small branches; or a human: big chest, small fingers, etc.).
So our noise map generates black and white areas, one of which will be extruded outwards (or inwards). Say the average length of an edge in the “big” elements of the model is 10 units, the average edge length in a “small” element is 0.1 units, and our maximum displacement value is 1 unit. The result: a nice extrusion in the big elements of the model, and total 3$%$#% (pardon the expression :) in the small elements of the mesh. Why? See the example below.

displacement
It's quite hard to see, but the green object shows the geometry and all the edges. Where it says “huge” distortion, I mean huge compared to the width of that part of the object; and where it says “small” distortion, again it is small compared to the general size of that part or element of the object. So as you can see, the teapot's main body is big, and the displacement is only a small percentage of its overall size, while on the thin lines the displacement well exceeds the width of the line.
So, to reach a situation where big mesh components have bigger displacement values and smaller elements of the mesh have smaller ones, we have to apply different materials to different parts of the object. The only difference between the materials would be the amount of displacement (or the extrusion height, to be more precise; the map, image, noise or whatever remains the same).
Have a look at the images:

displacement II

So what about a situation where we have big, small and medium-size elements? Or where the object's parts gradually get smaller or bigger? Say the trunk and branches of a tree. If we had nice UVs we could paint the displacement map in Photoshop: the big parts would get a black-and-white image whose color extremes run from RGB 255,255,255 to 0,0,0, and the smallest branches would get color extremes of something like RGB 188,188,188 to 170,170,170. To put it in other words, we would have a contrasty image for the big parts of the mesh, and a very gray image, as if seen through fog, for the small parts of the model.
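The remapping itself is trivial once some relative size per region is known. A sketch (plain Python, invented numbers) of pulling the map towards mid-gray in proportion to element size:

```python
def remap_displacement(value, local_size, max_size, midpoint=0.5):
    # value: displacement sample in [0, 1]
    # local_size / max_size: how big this mesh element is relative to
    # the biggest one (1.0 = trunk, small fractions = thin branches)
    contrast = local_size / max_size
    return midpoint + (value - midpoint) * contrast

# A full-white sample (1.0) on a branch one tenth the trunk's size is
# pulled down to 0.55 -- the "gray, as if seen through fog" version.
print(remap_displacement(1.0, 1.0, 10.0))   # 0.55
print(remap_displacement(1.0, 10.0, 10.0))  # 1.0, trunk keeps full contrast
```

The hard part, as the rest of the post explains, is getting that local size automatically.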

But we described a situation where we don't have nice UVs and have to use some procedural map for displacement. So no Photoshop. What to do? We should have a tool which automatically adjusts the contrast levels of the map used for displacement according to the size of the mesh elements. Let's think how on earth to do that: how can our software know which element is big and which is small? Ok, it's easy if we have mesh elements, or material IDs; then we simply compare their sizes and can work out contrast levels for the displacement values. But what if we have one single object? Let's see this image:

object

So: it is one single mesh, no material IDs specified, just one single element. As a human you can easily determine that the left side is big and the right side of the object is smaller. But how can our software know that?
This is the main question of this blog entry. I do not have an answer to it, but I do have some thoughts. Let's see…
First of all, we have the bounding box of the whole object. That is the space we will be working in.

So, thought one.
What we do have is the position of each vertex in space. Now let's imagine we take each vertex and cast some number of rays in random directions from that vertex. The idea is to measure the distance each ray travels until it hits another part of the mesh; we want to see where the ray intersects the geometry again. Some rays will go off to infinity, but we just eliminate these: a ray that passes the boundary of the object's bounding box is of no interest to us.
Say we cast 10 rays from one vertex. Some of them will “get lost” and some will return distance values. Imagine our object is a sphere. We take one vertex and cast rays; some will go inwards, inside the sphere, and will eventually hit its other side, while others will go outwards, leave the object's bounding box, and be forgotten. The rest give us distances to the other side of the sphere. Here we could also read the normal direction of the intersected polygon: by comparing the angle between the ray and the polygon's normal, we could probably determine whether the hit is on the “inside” or the “outside” of the object.
Let's have a look at the image:

object size determination

So what can we do with all these numbers? Can we sum the values of all rays per vertex into one number? Can we assume that the bigger the number, the bigger the chance that this particular vertex belongs to a “big” element of the object? Because, say, with our sphere we would find that almost all vertexes have more or less the same values, so they all belong to one big object, which makes sense. And if, close by, we have a small element of the mesh, the values we get from its rays would be lower, so we could assume it is a smaller element. Actually, simply adding up the ray distances would be wrong; we should take the average, I guess… or does it make any difference? Also, values where, judging by the face normal's direction, we “assume” the ray has hit a front-facing polygon should be treated with caution, while values where we “assume” the ray has hit a polygon facing away could be more acceptable, since we can guess it is part of the same object, that it is the “other side” of that object. This part should be investigated more.

the face that the ray hits: is it the same object or another?
situation C

Some more thoughts on these rays. How do we know where one object ends and another begins? I mean, imagine we have two spheres in one single mesh; how do we know which polygon or vertex belongs to which? Here I can think of two methods. Say our ray from a vertex hits a face which faces the vertex that emitted the ray. The first thought would be: it is part of another object, so we should ignore all distances this ray gave us. But it's still possible that that face belongs to the same object, and that the distance data is relevant for determining the object's size.
So the first way would be to check all neighboring vertexes and see if the face our ray has just hit contains one of them; if not, check the neighbors of the neighbors, and so on, until we have checked all the polys or vertexes of the whole mesh. If it's not in the list, we are sure it does not connect to our vertex: it is indeed a separate entity, and we should ignore it. Or say it is in the list, but it is 1000 faces away… so we assume it's “too far away” to take into account.
I think this method has two weaknesses. First, I assume it's too time-consuming to check all the vertexes. Second, if the face does connect to the vertex that emitted the ray, we need to decide how far is far enough to be “too far”, and that depends on the mesh topology and many other things. How can one decide on such a threshold? That's why I think we should not go this way.

Anyway, at the end of the day we would have all (or, if we optimize our solution, some) vertexes numbered, or ranked, by the “size” of the part of the object they belong to. It could be just a numeric value, or it could be a color value. Perhaps the user could manually correct errors produced by our generator by simply painting on the vertexes.
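Here is a rough sketch of the ray-casting idea in plain Python. It brute-forces every triangle per ray (so it is slow), and it leaves out the facing-direction and connectivity checks discussed above; this is purely an illustration of the distance-sampling core:

```python
import math
import random

def ray_triangle(origin, direction, tri, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection; returns distance or None.
    def sub(u, v): return [u[i] - v[i] for i in range(3)]
    def cross(u, v): return [u[1]*v[2] - u[2]*v[1],
                             u[2]*v[0] - u[0]*v[2],
                             u[0]*v[1] - u[1]*v[0]]
    def dot(u, v): return sum(u[i] * v[i] for i in range(3))
    a, b, c = tri
    e1, e2 = sub(b, a), sub(c, a)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle
    t_vec = sub(origin, a)
    u = dot(t_vec, p) / det
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) / det
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) / det
    return t if t > 1e-6 else None       # ignore hits at the origin itself

def local_size(vertex, triangles, n_rays=10, max_dist=1e6):
    # Cast random rays from the vertex and average the nearest-hit
    # distances; rays that escape the scene are simply discarded, just
    # as the post discards rays leaving the bounding box.
    dists = []
    for _ in range(n_rays):
        d = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in d))
        d = [x / n for x in d]
        hits = [t for tri in triangles
                if (t := ray_triangle(vertex, d, tri)) is not None]
        if hits and min(hits) < max_dist:
            dists.append(min(hits))
    return sum(dists) / len(dists) if dists else None
```

Vertexes with a large average come out as parts of “big” elements, and small averages flag the thin parts; the averaging (rather than summing) matches the correction made above.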

After that, our new displacement tool would know how to adjust the original displacement map to fit the sizes of the 3D model's parts.

Ahh… I have confused myself now. Anyone have any thoughts on the matter? Gee… I wonder if it was possible to understand anything of what I wrote… :)

Softimage goes crazy!!! Physics simulation teaser!!!! Lagoa Multiphysics 1.0

I just saw this crazy video on Vimeo (sorry, I embedded YouTube; I don't know how to embed Vimeo), and it's quite shocking. Ok, all of this is a first impression, so pardon my strong words and emotions. But it looks so bloody amazing. To be honest, I am not sure whether this is a plugin written by someone (Thiago Costa?) that will be sold separately, or whether it will be part of Softimage. But it looks quite amazing (to say the least).

Ok, on second thought… we have tools to simulate similar things. But two things I haven't seen before are:

1. At the beginning of the video you see stuff breaking apart. It's like a solid material breaking apart; hard to explain. I mean, stuff is breaking into both small and big pieces. You can imagine big rocks and debris… and I haven't seen this before. Of course, one could make this easily with two separate simulations, I guess… like having one solution for big things cracking and another for small, dust-like particles… but this seems to be a “one button” solution. I am also sure one could achieve something like this in RealFlow from Next Limit. Still, it does look amazing. And it is nice to see someone challenging Next Limit; it seems to be the king of this physics simulation game :)

and

2. The semi-liquid, blobby stuff that tears apart. In other words, a mesh that not only deforms but also breaks apart…

I have never seen that before. I would love to read some papers on this. Anyone? How do we solve the mesh topology problems?????? I have no clue; my imagination doesn't help here… any ideas? Anyone?

Anyway, be sure to watch all of this:

http://vimeo.com/thiagocosta

I am no expert on it, but I would guess that ICE was something that made it happen, or helped… or?