So I decided to try selling some of my photogrammetry models.
These models were made in two cemeteries in Vilnius (Wilno), Lithuania – Bernardinų cemetery and Rasų cemetery. Most graves are from the end of the 19th century and most inscriptions are in Polish, since it was quite a Polish city at the time. All the graves are broken, rusty and run down. So if you are looking for some background objects for your new game – maybe horror, maybe historical, or just something set in Europe – you might be interested in these models. The models are actually low poly but still have quite a bit of detail, so they could be used for more than background objects. Poly counts are in the 2-4k range depending on the model. All have diffuse (albedo) and normal maps; some have ambient occlusion, height or bump maps. Textures are mostly in 4k resolution. You can probably use these models not only for real-time graphics but for visualizations or film as well.
The models are sold as separate gravestones/crosses, as a collection from a particular graveyard, or all in one.
Here are links to Sketchfab showing two sets – one from each graveyard.
So I stitched / rendered / drew my 360 at the biggest resolution I could. What resolution should I choose for viewing…
It seems the more you investigate, the messier it gets. Most online platforms will let you upload whatever you have and re-encode it the way they can distribute your video online. Littlstar specifies a 5 GB limit and h264 compression. More details
If you do stereo 360, you have to contact them for further details.
YouTube is like… give me whatever…
If you are planning to show your video on a specific device with specific software (say, Gear VR using its player), you will have to prepare the video in the correct resolution, at the right bit rate and with the correct encoder.
Today’s HMDs have limited screen resolutions:
PSVR: 1920×1080 (960×1080 per eye)
Vive: 2160×1200 (1080×1200 per eye)
Rift: 2160×1200 (1080×1200 per eye)
Gear VR: 2560×1440 (1280×1440 per eye)
But that’s the number of pixels they can show. Your image typically has to be much bigger. (These devices do not directly show rectangular images – due to lens distortion, you (or the HMD’s software) have to distort the image the opposite way of the HMD’s lens distortion, so in the end you see an OK-looking image.)
To achieve this you need more pixels. Another thing is field of view. We create an equirectangular image that covers the full sphere, but in an HMD you see only a small portion of this image (when you rotate your head you see another part of the image).
So the big resolution we feed to the HMD gets diminished quite a bit by the time we actually see it through the HMD.
Then there is a whole other mess with codecs and their limitations.
So what is the recommended (?) / optimal (?) / max (?) output resolution for:
Recommendations taken from here.
The next level of squeezing more quality out of the limited HMD resolution would be clever tricks like the ones Chris Milk is doing.
The idea is to use images in part of your video. But this is complicated witchcraft, not for us mortals… :)
And probably the third level is building your own viewer. And that’s programming, so I’ll stop here.
Someone should make a calculator: you input your equirectangular resolution, your HMD’s field of view and resolution per eye – and you get the final “on screen” resolution. Anyone?
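Until someone builds that calculator, a rough version is just two proportions: the equirectangular image covers 360° horizontally and 180° vertically, and the HMD shows only its field of view of that. Here is a sketch in JavaScript (the function name and the per-eye FOV numbers are my own assumptions, and it ignores lens distortion entirely):

```javascript
// Rough estimate of how many source pixels from an equirectangular
// image actually end up inside the HMD's field of view per eye.
// Lens distortion is ignored, so treat this as a ballpark figure.
function onScreenResolution(eqWidth, eqHeight, hFovDeg, vFovDeg) {
  return {
    width: Math.round(eqWidth * hFovDeg / 360),   // image spans 360 deg horizontally
    height: Math.round(eqHeight * vFovDeg / 180), // and 180 deg vertically
  };
}

// Example: a 4096x2048 master on a Vive-like HMD, assuming roughly
// 100 x 110 deg per eye. Only about 1138 x 1252 source pixels are
// inside the view at any one moment.
const visible = onScreenResolution(4096, 2048, 100, 110);
```

So even a 4k master leaves you with roughly per-eye-panel resolution on current headsets, which is why the recommendations above push resolution as high as the codecs allow.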
If you are not using a DSLR / CG, you need a true 360 camera. Here is a list with specs:
More reading on these topics:
The ideal format for gen 1 PC HMDs would be 4096×4096 @ 90 FPS, which roughly matches the angular resolution and refresh rate of CV1/Vive, but this can’t be achieved without H.265 Level 6.1 (which is itself impractical due to CPU load – no hardware acceleration for H.265 yet). H.265 Level 6.1 also would provide 5944×5944@60, which is ideal for Gear VR, but again, probably could not be decoded in real time on that device.
So, a list of links to podcasts, YouTube videos and the like regarding various VR-related things. All of them (I hope) have something related to cinematic VR, even though none of them are exclusively about it, so we have to pick the relevant parts. Still, as of today the net is not oversaturated with thoughts about VR techniques, so I’d say it’s good to see them all :) The links are in no particular order.
Mike Alger talks about interfaces and interaction design in VR, as well as doing fast UI iterations using After Effects, Illustrator and Cinema 4D. A podcast and two videos.
So basically I care most about cinematic VR. But it does help, IMO, to know the basics: history, real time, rendering technology and all the rest. Even though VR is an old “thing”, VR content used to be very scarce and closely tied to specific hardware – usually so-called caves. Basically, unless you were lucky enough to be around in the 70s in some US, UK or German university, in a mathematics, engineering or IT department, or in the military… well, then for you VR content was non-existent :)
So there was some scientific research, but very little “artistic” stuff. And even though there were some artists doing installations, there were not enough of them for a common “language” of VR to emerge.
Now that we have the so-called second wave of VR, many more people are trying to make content. Most do game-related content – which makes sense: as soon as we have positional tracking we need real-time 3D rendering. (Unless, again, you are in a research team that works with light fields or similar…)
Others go for real-life filming or so-called action footage. Animators are somewhere in between these two extremes: on one hand we are closer to video, because we are used to making stories, aka “films”; on the other hand we create things more than we capture them from real life, and in that regard we are closer to game creators. Of course all these things overlap, and there can be no clear-cut definitions and boundaries…
So people from the second wave have been creating content for about 3 years now, and some share their knowledge and experience publicly. Here I have gathered some material from YouTube and other places. If you work on VR content, some of it might be relevant to you. Most of it falls either in the real-time 3D category or 360 video, but one can always find relevant information even if it is not exactly from your field – most things are connected. So here it goes, my link collection:
After finishing my last drawn panorama I started a new one. Actually I started this one even before that, but at the time it was just a regular drawing that I later decided to turn into a panorama. That proved to be a mistake – I ended up needing to cover quite an area with doodles :) So it’s actually more than a year that I have been making this one pano. Obviously I don’t draw daily; it’s more like… monthly… I got bored of it quite some time ago :) But only 2/6 of the area remains to be covered, so there is light at the end of the tunnel… I think. Here I’ll share 3 images: a general view and close-ups.
So here it is, finally done. It’s much bigger compared to my older attempts at making panoramas, so don’t forget to scroll to zoom in :) I started this illustration not thinking of panoramas; it was meant for something else entirely. Later on, while drawing, I thought: why not turn it into a panorama? I calculated the required size of the image way too late in the process, so I ended up with a huge image to fill up. One year later it was done. (Naturally I didn’t draw it daily – I was quite bored of it, so there were months of not drawing.) For the bottom I used a collection of other small drawings, and for the top I used one image that, again, was made for something entirely different. But here it is, all stitched up… Below you will find 3 fragments of the panorama, and clicking this link will take you to an interactive version.
all little things I
all little things II
all little things III
making of “all little things” drawn panorama
And this is a little making-of for “all little things”… While drawing this panorama I would, from time to time, set up my camera and take some shots. Here they are combined into a little time-lapse “video”.
Sadly I didn’t continue shooting the whole time, and I didn’t maintain the composition.
So, finally I am done. Done with the hands. For good. I hope… :) I must admit I learned a thing or two working on this piece, especially regarding the anatomy of the left hand. If you look hard you can find a couple of right hands too, but those were harder to draw since I couldn’t just look at my hand for reference and draw it at the same time :))) Actually you can also find one foot here, and, well, two other things which will remain unnamed :)
madness of hands
In general I can conclude that I am able to make a spherical image. And this format is quite suited for “mad” subjects like this one. The hard part is composition. Does it even exist in an interactive piece? How do you deal with it? These are questions to be explored in the next work. We’ll see how that goes :)
To see the interactive version just click on the image. After you are redirected to the 360 Cities website, click and drag (anywhere on the picture). To unleash the full madness, left click and choose “little planet view” from the menu, then click and drag. Have fun!
And it seems WordPress finally supports 360 images, so let’s have it here without embeds:
So here it is, my first test of a drawn pano. It wasn’t easy, but now I understand how things work. The next one will be much better :)
Sorry, I have no way to make it work directly in this blog post, so pardon me – you have to click on the image to actually see it.
Spherical panoramas are used by photographers to provide the viewers with the ability to move around the photograph rather than to view it from a fixed position. I used this technique for my drawings.
Long time no post – must be the number one starting sentence in blogs :)
Small animation test I did:
If you cannot see video directly (not logged in Facebook?) try looking here: http://littlstar.com/videos/3c44d6ae
So here I will share my techniques and mistakes – what I would do differently next time.
So I decided to test whether it is possible to draw animation for VR. That means in equirectangular projection. Actually that’s probably an incorrect statement – one could draw images on cube faces or in other projections…
Still, we are most familiar with the equirectangular projection, and cube faces would require matching images across 6 separate pictures. Though that could be an approach if there were small objects to be drawn that remained fairly stationary in the frame; in that case we would get less distortion and it would probably be easier to draw.
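For intuition, the mapping an equirectangular image encodes is simple: horizontal position is longitude, vertical position is latitude. A little JavaScript sketch (the function name is mine, just for illustration):

```javascript
// Map an equirectangular pixel (x, y) to a direction on the unit sphere.
// The full image width covers 360 deg of longitude, the height covers
// 180 deg of latitude - the stretching near the poles is exactly why
// drawing directly in this projection is hard.
function equirectToDirection(x, y, width, height) {
  const lon = (x / width) * 2 * Math.PI - Math.PI;  // -PI .. +PI
  const lat = Math.PI / 2 - (y / height) * Math.PI; // +PI/2 (top) .. -PI/2
  return {
    x: Math.cos(lat) * Math.sin(lon),
    y: Math.sin(lat),
    z: Math.cos(lat) * Math.cos(lon),
  };
}

// The centre of the image looks straight ahead (z = 1); the top and
// bottom rows all collapse onto the poles.
const center = equirectToDirection(2048, 1024, 4096, 2048);
```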
NO SCENARIO?
First, it’s always nice to have at least some sort of scenario, something for the audience to care about. That’s the first thing I will make sure I have for my next “film”. In the beginning there was a plan to add a fourth scene – an abstract one… but that was before I learned about the problems of camera movement and other ways of confusing people and making them vomit…
So the plan was, after our “viewer” drops “dead” in water he starts “hallucinating” and animation is shown in this manner:
This gif file turned out not very representative of the idea or of how I did it… oh well… I still want to experiment with more “hardcore” / abstract imagery, but… well… maybe some other time. Mind you, I have never seen my work on a DK2 or even Cardboard…
So I made most of my animation in 3D and rendered a sort of flat-looking image that I afterwards processed a bit to make it look less “3D”. I also used a cel shader to render the outlines of my animated models and printed them so they would be barely visible. That was for rotoscoping.
PRINTER / SCANNER
In the planning stage I had counted that I’d need to draw close to 1000 frames. Then there was the question of how to scan it all. I asked around in print shops and it was around 100 EUR just for scanning, so I decided to try and buy a scanner for myself. There were not many options in nearby shops, and eBay was out of the question because shipping costs would be prohibitive for me. So I found this brand I had never heard of – Brother. There was also a cheap Ricoh option, and I don’t remember why I went for the Brother instead… might be because it was A3 instead of A4. The Brother MFC-J6520DW has an automatic page feeder (around 30 pages) and can scan or print A3 pages. The scanner/printer is definitely a “budget” thing, but it did the job. If you google this model you will find quite mixed reviews. One has to be careful loading pages: they come out not as straight as I would like, some seem to rotate a couple of degrees, and basically it’s impossible to use the scanned images unless you stabilize the motion.
PRINTING AND STABILIZATION
OK, so I am using the automatic feeder, which means each page is positioned imperfectly while scanning, and I do get quite a bit of wobbling. The first images I printed for drawing were A4 and covered most of the page’s width, like this:
So not only was each frame in a slightly different position from the previous page, but in some cases I would lose parts of the scanned image. Next time I will not print so close to the borders; even if the pages rotate and move, I won’t lose any of the image. The green lines show the point placement for image stabilization in After Effects.
For all or most of the background images I drew only around 7 frames. Then I used an expression in After Effects that would randomly pick from these frames for the required duration.
fr = 12; // frame rate of the drawn animation
numFrames = 8; // size of the pool of drawn frames
seg = Math.floor(time * fr); // which 12 fps segment we are in
seedRandom(0, true); // fixed seed so every evaluation replays the same sequence
f = Math.floor(random(numFrames)); // starting frame index
for (i = 0; i < seg; i++) {
    // step by 1..numFrames-1 so the same frame never shows twice in a row
    f = (f + Math.floor(random(1, numFrames))) % numFrames;
}
f // the expression evaluates to the chosen frame index
The sound was another problem. I wanted to experiment with binaural sound… but I got so fed up with this animation that in the end I just added some sound effects from a free sound library, just so it wouldn’t be a silent film. Sound really does change your experience so much… if there is a chance, I’ll get a professional sound person next time…
So, for some time now I have been quite interested in a process called photogrammetry. Basically, you take photos of an object from different viewpoints and dedicated software tries to reconstruct a 3D model out of these images. It takes practice: first you need to take the pictures in a specific way, then you need a lot of RAM in your computer and a long time to wait :) Alternatively, you can use cloud-based solutions that will compute the 3D model for you. Then you usually have to correct all the errors the software produced. In this case I was shooting public sculptures in Vilnius. I have more of them photographed, and I have covered probably a third of the city’s sculptures :) Here is a test with uncleaned geometry; only the sculptures far away were reduced to fewer polygons. The rendering process took some days. It is a 15000 px × 7500 px equirectangular projection that I got using mental ray’s “wrap” shader. Click the link for an interactive version of this panorama here.
The most popular software is probably Agisoft, 123D Catch and ReCap. You can find out more about photogrammetry in this talk from Autodesk. My personal observations and recommendations for shooting images intended for photogrammetry:
1. Always shoot raw files (no .jpg – see what format your camera supports: .nef, .pef, .dng). If you do so, you can import your raw photos into Lightroom or a similar photo editing program and reduce highlights to the max, and also reduce (or bring out) the shadows. You get quite a dull picture with less contrast than one would expect from a normal image. What you want is to recover detail from all the very bright areas, so they become gray (usually) but contain more detail, and the shadows become less dark – you regain the details in them. You can also experiment with adding “details” in Lightroom; it’s a sort of local contrast / sharpening thing.
2. Everything should be captured in focus. Use a fairly closed aperture to get more depth of field. Get to know your lens; find the hyperfocal distance if it’s a prime lens. I have never tried this for photogrammetry, but when shooting very small objects one could try image stacking in Photoshop to get everything in focus. It’s under Automate – Stack Images, if I remember correctly.
3. Avoid the sky. There is no information there, and even if there is, it’s useless – if you have moving clouds, for instance – and it messes up your exposure. Try using manual exposure. Again, I have never tried this for photogrammetry, but if your object is in very different lighting conditions – say one side of a building is in shadow while the facade is in harsh sunlight – you could try HDR photography.
4. Try to have as much background detail as possible. If you are shooting a small object, make sure it’s not on a single-color table with no detail on its surface; use a newspaper with text as a backdrop.
5. Which camera angles to choose? This info should be provided by the software you are using; here are Agisoft’s tutorials. In general you have to change the position of the camera in each shot, and each shot should contain some parts of the object that were visible in the previous shot. And get some extra RAM!
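The hyperfocal distance mentioned in point 2 is easy to compute yourself from the standard formula. A quick JavaScript sketch (the function name and the circle-of-confusion figures are my own choices, the usual rule-of-thumb values):

```javascript
// Hyperfocal distance: focus at this distance and everything from
// roughly half of it out to infinity is acceptably sharp.
// focalMm  = focal length in mm
// fNumber  = aperture (e.g. 8 for f/8)
// cocMm    = circle of confusion in mm (~0.03 full frame, ~0.02 APS-C)
function hyperfocalMm(focalMm, fNumber, cocMm) {
  return (focalMm * focalMm) / (fNumber * cocMm) + focalMm;
}

// Example: a 28 mm prime at f/8 on an APS-C body (c ~ 0.02 mm)
// gives a hyperfocal distance of about 4.9 m.
const hMm = hyperfocalMm(28, 8, 0.02);
```

So for typical outdoor sculpture shots at f/8, focusing a few metres out keeps both the object and the background detail sharp, which is what the reconstruction software wants.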