So, a list of links to podcasts, YouTube videos and the like on various VR-related things. All of them (I hope) have something related to cinematic VR, even though none of them are exclusively about it, so we have to pick out the relevant parts. Still, for now the net is not oversaturated with thoughts on VR techniques, so I’d say it’s worth seeing them all :) The links are in no particular order.
Mike Alger talks about interfaces and interaction design in VR, as well as doing fast UI iterations using After Effects, Illustrator and Cinema 4D. A podcast and two videos.
So basically I care most about cinematic VR. But it does help, IMO, to know the basics: history, real-time rendering technology and all the rest. Even though VR is an old “thing”, VR content used to be very scarce and closely tied to specific hardware, usually the so-called CAVEs. Basically, unless you were lucky enough to be around in the 70s in some US, UK or German university mathematics, engineering or IT department, or in the military… well, then for you VR content was non-existent :)
So there was some scientific research, but very little “artistic” stuff. And even though there were some artists doing installations, there were not enough of them for a common “language” of VR to emerge.
Now that we have the so-called second wave of VR, many more people are trying to make content. Most make game-related content, which makes sense: as soon as we have positional tracking we need real-time 3D rendering (unless, again, you are on a research team working with light fields or similar).
Others go for real-life filming, so-called action footage. Animators are somewhere in between these two extremes: on one hand we are closer to video, because we are used to making stories, aka “films”; on the other hand we create things more than we capture them from real life, and in that regard we are closer to game creators. Of course all these things overlap and there can be no clear-cut definitions and boundaries…
So people from the second wave have been creating content for about 3 years now, and some share their knowledge and experience publicly. Here I have gathered some material from YouTube and other places. If you work on VR content, some of it might be relevant to you. Most of it falls either into the real-time 3D category or 360 video, but one can always find relevant information even if it is not exactly from your field – most things are connected. So here it goes, my link collection:
After making my last drawn panorama I started a new one. Actually I started this one even before that, but at the time it was just a regular drawing that I later decided to turn into a panorama. That proved to be a mistake – I ended up needing to cover quite an area with doodles :) So it’s actually been longer than a year that I have been making this one pano. Obviously I don’t draw daily, it’s more like… monthly… I got bored of it quite some time ago :) But only 2/6 of the area remains to be covered, so there is light at the end of the tunnel… I think. Here I’ll share 3 images: a general view and a close-up.
So here it is, finally done. It’s much bigger than my older attempts at panoramas, so don’t forget to scroll to zoom in :) I started this illustration not thinking of panoramas; it was meant for something else entirely. Later on, while drawing, I thought: why not turn it into a panorama? I calculated the required image size way too late in the process, so I ended up with a huge image to fill up. One year later it was done. (Naturally I didn’t draw daily – I got quite bored of it, so there were months of not drawing.) For the bottom I used a collection of other small drawings, and for the top I used one image that, again, was made for something entirely different. But here it is, all stitched up. Below you will find 3 fragments of the panorama, and clicking this link will take you to an interactive version.
all little things I
all little things II
all little things III
making of “all little things” drawn panorama
And this is a little making-of for “all little things”. While drawing this panorama I would from time to time set up my camera and take some shots; here they are combined into a little time-lapse “video”.
Sadly I didn’t keep shooting the whole time, and didn’t maintain the composition.
So, finally I am done. Done with the hands. For good. I hope… :) I must admit I learned a thing or two working on this piece, especially regarding the anatomy of the left hand. If you look hard you can find a couple of right hands too, but they were harder to draw since I couldn’t just look at my hand for reference and draw it at the same time :))) Actually you can also find one foot here, and, well, two other things which will remain unnamed :)
madness of hands
In general I can conclude that I am able to make a spherical image, and this format is quite suited for such “mad” subjects as this one. The hard part is composition. Does it even exist in an interactive piece? How do you deal with it? These are the questions to be explored in the next work. We’ll see how that goes :)
To see the interactive version just click on an image. After you are redirected to the 360 Cities website, click and drag (anywhere on the picture). To unleash the full madness, left-click and choose “little planet view” from the menu, then click and drag. Have fun!
So here it is, my first test of a drawn pano. It wasn’t easy, but now I understand how things work. The next one will be much better :)
Sorry, I have no way to make it work directly in this blog post, so pardon me, you have to click on the image to actually see it.
Spherical panoramas are used by photographers to give viewers the ability to look around the photograph rather than view it from a fixed position. I used this technique for my drawings.
Long time no post – must be the number one starting sentence in blogs :)
Small animation test I did:
If you cannot see video directly (not logged in Facebook?) try looking here: http://littlstar.com/videos/3c44d6ae
So here I will share my techniques and mistakes – what I would do differently next time.
So I decided to find out whether it is possible to draw animation for VR. That means drawing in equirectangular projection. Strictly speaking that’s not quite right – one could draw images on cube faces or in other projections…
Still, we are most familiar with the equirectangular projection, and cube faces would require matching images across 6 separate pictures. Though this could be an approach if there are small objects to be drawn and they remain mostly stationary in the frame – in that case we would get less distortion and it would probably be easier to draw.
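To get a feel for where that distortion comes from, here is a little sketch (my own illustration, not part of the original workflow) of how a normalized equirectangular pixel coordinate maps to a viewing direction:

```javascript
// Map a normalized equirectangular coordinate (u, v, both in [0, 1])
// to a unit direction vector. u = 0.5, v = 0.5 looks straight ahead;
// v = 0 is the zenith (straight up), v = 1 the nadir (straight down).
function equirectToDirection(u, v) {
  var lon = (u - 0.5) * 2 * Math.PI; // longitude: -PI .. PI
  var lat = (0.5 - v) * Math.PI;     // latitude:  PI/2 .. -PI/2
  return [
    Math.cos(lat) * Math.sin(lon),
    Math.sin(lat),
    Math.cos(lat) * Math.cos(lon)
  ];
}
```

Near the top and bottom of the image (v close to 0 or 1), cos(lat) goes to zero, so a whole row of pixels squeezes into almost a single direction – which is exactly why drawn shapes near the poles have to be stretched so much.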
NO SCENARIO?
First, it’s always nice to have at least some sort of scenario, something for the audience to care about. That’s the first thing I will make sure I have for my next “film”. There was a plan in the beginning to add a fourth scene – an abstract one… but that was before I learned about the problems of camera movement and other ways of confusing people and making them vomit…
So the plan was: after our “viewer” drops “dead” into the water, he starts “hallucinating”, and the animation is shown in this manner:
This GIF file turned out not very representative of the idea or of how I did it… oh well. I still want to experiment with more “hardcore” / abstract imagery, but… well… maybe some other time. Mind you, I have never seen my work on a DK2 or even Cardboard…
So I made most of my animation in 3D and rendered a rather flat-looking image that I afterwards processed a bit to make it look less “3D”. I also used a cel shader to render the outlines of my animated models and printed these so faintly that they were barely visible. That was for rotoscoping.
PRINTER / SCANNER
So in the planning stage I counted that I’d need to draw close to 1000 frames. Then there was the question of how to scan it all. I asked around in print shops and it was around 100 EUR just for the scanning, so I decided to try and buy a scanner myself. There were not many options in nearby shops, and eBay was out of the question because shipping costs would have been prohibitive. So I found this weird brand I had never heard of – BROTHER. There was also a cheap RICOH option, and I don’t remember why I went for the Brother instead… might be because it was A3 instead of A4. The BROTHER MFC-J6520DW has an automatic page feeder (around 30 pages) and can scan or print A3 pages. The scanner / printer is definitely a “budget” thing, but it did the job. If you google this model you will find quite mixed comments about it. One has to be careful loading pages: they come out not as straight as I would like, some seem to rotate a couple of degrees, and basically it’s impossible to use the scanned images unless you stabilize the motion.
PRINTING AND STABILIZATION
OK, so I am using the automatic feeder, which means each page is positioned imperfectly while scanning, and I get quite a bit of wobbling. The first images I printed for drawing were A4 and covered most of the page’s width, like this:
So not only was each frame in a slightly different position than on the previous page, but in some cases I would lose parts of the scanned image. Next time I will not print so close to the borders: even if the pages rotate and shift, I will not lose any image. The green lines show the point placement for image stabilization in After Effects.
For all or most of the background images I drew only around 7 frames. Then I used an expression in After Effects that would randomly load these frames for the required duration:
fr = 12; // drawing frame rate (12 fps)
numFrames = 8; // number of drawn frames in the sequence
seedRandom(1, true); // timeless seed, so the random walk is identical on every render
seg = Math.floor(time * fr); // index of the current 1/12 s segment
f = Math.floor(random(numFrames)); // starting frame
// step through the segments, each step jumping to a different frame
for (i = 0; i < seg; i++)
    f = (f + Math.floor(random(1, numFrames))) % numFrames;
f
The sound was another problem. I wanted to experiment with binaural sound… but I got so fed up with this animation that in the end I just added some sound effects from a free sound library, just so it wouldn’t be a silent film. And again, sound changes your experience so much… if there is a chance, I’ll get a professional sound person next time…
So for some time now I have been quite interested in a process called photogrammetry. Basically, you take some photos of an object from different viewpoints and dedicated software tries to reconstruct a 3D model out of these images. That takes practice. First you need to take the pictures in a specific way, then you need a lot of RAM in your computer and a long time to wait :) Alternatively, you can use a cloud-based solution that will compute the 3D model for you. Then you usually have to correct all the errors the software produced. In this case I was shooting public sculptures in Vilnius. I have some more photographed, and I have covered probably a third of the city’s sculptures :) Here is a test with uncleaned geometry; only the sculptures far away were reduced so they have fewer polygons. The rendering process took some days. It is a 15000 x 7500 px equirectangular projection that I got using mental ray’s “wrap” shader. Click this link for the interactive version of this panorama.
The most popular software would probably be Agisoft, 123D Catch and ReCap. You can find out more about photogrammetry in this talk from Autodesk. My personal observations and recommendations for shooting images intended for photogrammetry:
1. Always shoot raw files (not .jpg – see what format your camera supports: .nef, .pef, .dng). If you do, you can import your raw photos into Lightroom or a similar photo editing program, reduce the highlights to the max and reduce (or bring out) the shadows. You get a rather dull picture with less contrast than one would expect from a normal image, but that’s the point: you recover details in the very bright areas, which become gray (usually) but contain more detail, and the shadows become less dark, so you regain detail there too. You can also experiment with adding “detail” in Lightroom – it’s a sort of local contrast / sharpening thing.
2. Everything should be captured in focus. Use a fairly small aperture to get more DOF, get to know your lens, and find the hyperfocal distance if it’s a prime lens. I have never tried this for photogrammetry, but when shooting very small objects one could try image stacking in Photoshop to get everything in focus. It’s under Automate – stack images, if I remember correctly.
3. Avoid the sky. There is no information there, and even if there is, it’s useless – if you have moving clouds, for instance – and it messes up your exposure. Try using manual exposure. Again, I have never tried this for photogrammetry, but if your object has very different lighting conditions – say one side of a building is in shadow while the facade is in harsh sunlight – you could try HDR photography.
Try to have as much background detail as possible. If you are shooting a small object, make sure it’s not on a single-color table with no detail on its surface; use a newspaper with text for a backdrop.
Which camera angles to choose? This info should be provided by the software you are using – here are Agisoft’s tutorials. In general you have to change the position of the camera in each shot, and each shot should contain some parts of the object that were visible in the previous shot. And get some extra RAM!
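On point 2 above: the hyperfocal distance has a standard formula, H = f²/(N·c) + f. A tiny sketch of it (the 0.03 mm circle of confusion below is my assumption for a full-frame sensor; crop sensors use a smaller value):

```javascript
// Hyperfocal distance, all lengths in mm.
// f = focal length, N = f-number, c = circle of confusion
// (0.03 mm is a common full-frame assumption).
function hyperfocalMM(f, N, c) {
  return (f * f) / (N * c) + f;
}
```

For a 35 mm prime at f/8 this gives about 5.1 m; focus there and everything from roughly half that distance to infinity is acceptably sharp, which is what you want when every part of the object has to be usable for reconstruction.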
Not sure if this is worth a separate post, but since I haven’t written here for months and months… here it goes :)
So here at CG Channel I read that the Softimage development team, which worked on the product for more than 10 years, was moved to Maya-related projects, and a new team of developers was assigned to Softimage: “Softimage development will now be carried out by a six-person team in Singapore, headed up by Senior Software Development Manager Chun-Pong Yu.” The strange thing is that I didn’t find any news on this on Area or at the XSI Base pages… Is it a rumor? Anyone?
So this one will be, again, a call for a new tool. Now it might very well be that some of you know a workaround or some way to achieve this – if so, please share your knowledge with us. It’s all about texturing and laying out UV maps. Let’s say I have a character which I am planning to texture using the Unwrap UVW modifier and, say, a pelt mapper. Now my character has a separate model for the head, torso, clothes, hands and so on. After I peel each of them, my character’s head texture, hand texture and everything else come out the same size – they are all fitted to the one square. But if I want to hand-paint textures, things like the width of a line in the texture matter. So in the case where I have, let’s say, a giant head and tiny tiny eyes, the unwrapper will make them the same size. And that gives me a problem: if I apply the same texture, say with dots, the dots on the head will be huge compared to the ones on the eyes…
Naturally, in the unwrapper I can resize everything, but as far as I know it’s all done by hand. So the question remains: how do you make many pelt maps from many separate objects so that their sizes stay proportional to the sizes of the actual geometry?
————————– OK! Seems I have found the answer myself :) I found this great tool – Unwrella! From here on it’s sort of advertising, but I found it solves exactly this problem – it makes the sizes of all the pelt UVs correct relative to each other! Text from their website:
Unwrella is an exact unwrapping plug-in for Autodesk 3DSMAX and Autodesk Maya. It is a single click solution which allows you to automatically unfold your 3D models with exact pixel to model surface aspect ratio, speeding up texture baking UV map production significantly.
Automatic one-click solution – Just apply the Unwrella modifier
Precise – Preserves user created UV Seams
Smart – Reduces texture mapping seams almost completely and minimizes surface stretching
Efficient – Chunks are kept large and are arranged on the UV surface with maximal use of available space
User-friendly – User defined pixel based padding between UV chunks
Excellent for all kinds of models (organic, human, industrial)