Tuesday 21 May 2019

Depth maps, volumetric experiences and stereo panoramas.


"3D Photos"
Consider this Facebook "3D Photo" scene of mine. If you move your mouse over it, it will animate in a sort of 3D fashion.

Facebook first introduced this for some recent phones a few months ago -- phones that have extra camera units embedded that give additional parallax information and permit the construction of depth maps.

https://techcrunch.com/2018/06/07/how-facebooks-new-3d-photos-work/

Mostly the multi-camera/depth-map technology on phones is used for shallow depth-of-field effects in portrait shots, but these newer "3D" animation-style effects are becoming more common, on phones and on desktops. On phones the effect can play back as you tilt the phone up and down or side to side, as well as responding to swiping.

More recently, Facebook has let anyone upload a pair of images -- the source image, and a depth image with the same name as the source image except for the suffix "_depth" -- and Facebook will calculate and deliver the 3D animation effect in a few seconds.
https://www.oculus.com/blog/introducing-new-features-for-3d-photos-on-facebook/
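The naming convention above is easy to script. Here is a minimal sketch, assuming Pillow and NumPy are available; the filenames and the gradient "depth map" are purely illustrative stand-ins (a real depth map would come from a phone's depth sensor, stereo matching, or hand-painting):

```python
# Sketch: prepare a source/_depth image pair in Facebook's naming convention.
# All filenames and pixel data here are illustrative placeholders.
from PIL import Image
import numpy as np

w, h = 640, 480

# Stand-in source photo (in practice, your actual photograph).
src = Image.new("RGB", (w, h), (90, 140, 70))
src.save("garden.jpg")

# Stand-in depth map: a simple top-to-bottom gradient, white = close.
depth = np.tile(np.linspace(255, 0, h, dtype=np.uint8)[:, None], (1, w))

# Same base filename plus "_depth" before the extension.
Image.fromarray(depth, mode="L").save("garden_depth.jpg")
```

Uploading both files together is what triggers Facebook's 3D Photo processing.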

So what is this "depth map" you speak of? Imagine you are a monochromatic creature, an achromatope:
https://en.wikipedia.org/wiki/Monochromacy
And you are beset with a thick blackish fog. Things will have normal sorts of tones up close, but in the distance everything merges into a black haze. The effect is used by a few CG-based artists -- often for gloomy, ethereal themes, e.g. Kazuki Takamatsu:
https://metropolisjapan.com/wp-content/uploads/2017/11/we-ride-our-decoration-WEB-860x480.jpg
Or the depth encoding can be reversed, with close things darker and more distant things white.

Here is my depth map for the gardens scene above -- with white close (the Facebook 3D Photo convention).
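Converting between the two conventions is trivial: for an 8-bit greyscale map you just subtract each pixel from 255. A minimal sketch, with made-up pixel values:

```python
# Sketch: flip a depth map between white-is-close and black-is-close.
# Assumes an 8-bit single-channel map; the pixel values are made up.
import numpy as np

depth_white_close = np.array([[255, 128],
                              [ 64,   0]], dtype=np.uint8)

# Inverting the encoding: close (255) becomes 0 and vice versa.
depth_black_close = 255 - depth_white_close
```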

The accuracy of the depth encoding in a depth map depends on the number of bits available, so a 32-bit depth image can hold far more depth information than an 8-bit one. Sometimes colour depth maps are used, as the extra colour channels provide more data room as well as being visually informative.

Acquisition of depth maps: people with normal vision can discriminate depth in close scenes with great precision. According to some evolutionary scientists, it is all to do with our fruit-hunting, tree-dwelling ancestors:


But good depth maps are difficult to produce at present. We can see errors in depth-map stereo visualisations very clearly, particularly now in virtual reality headsets, but so far this has not translated into an intuitive feedback loop for better depth-map creation and correction.

.... work in progress from this point on in this post:
Depth-aware stereo image editing method ...  (Google)
https://patents.google.com/patent/US20140368506

Depth-aware coherent line drawings
http://juliankratt.info/depth_line_drawing

Depth-aware neural style transfer
https://feedforward.github.io/blog/depth-aware/

Interactive Depth-Aware Effects for Stereo Image Editing
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=4711&context=etd