Tuesday, 28 September 2021

 Depth maps -- tools and applications

Facebook 3d Photos: this is one of the best ways of showing parallax effects with an image via a depth map -- for publishing your 3D conversion work, or checking depth map retouching etc.

For three or four years now you have been able to upload a photo plus multi-lens/depth-sensor data from a recent iPhone -- to a Facebook personal feed or Facebook Group -- and have a depth map based parallax effect image made in the cloud.

Or more recently anyone can upload -- to a Facebook post or Group, at the same time -- an image (taken with any phone), say "xxx.jpg", and a depth map you have made of the same dimensions, named "xxx_depth.jpg".

The depth map must be encoded with the foreground of the scene lighter and the background darker. If it is a png the depth map must be RGB rather than greyscale. After a few seconds of cloud processing the Facebook 3d Photo will appear -- and you can try it and see if the depth map needs more work.
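
For example, here is a minimal Python (Pillow) sketch of that preparation step -- the filenames are just placeholders:

from PIL import Image

# Facebook wants an RGB depth map if it is a png -- and the "_depth" naming convention
img = Image.open("xxx.jpg")
depth = Image.open("my_depthmap.png").convert("RGB")   # greyscale -> RGB
assert depth.size == img.size                          # must match the source image dimensions
depth.save("xxx_depth.jpg", quality=95)                # partner file for "xxx.jpg"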

Facebook 3d Photo workflow 
Generally the depth map must look very low in contrast to work well as a Facebook 3d Photo. If there is too much contrast there will be distortions in the parallax animation as the user interacts with it -- especially at the edges of objects in the scene.

btw -- I think this usual requirement of low-contrast depth maps for a good Facebook 3d Photo effect may have to do with the small separation of the lenses on the phones originally used to generate the effect (with Facebook's AI processing) -- as in Google's "Stereo Magnification" paper: with such small baselines, the depth maps from these cameras were inherently low in contrast.
https://arxiv.org/pdf/1805.09817.pdf
"we explore an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones"

Anyway, if your depth map has too much contrast to work well with Facebook 3d Photos you can use Levels (perhaps with selections) to reduce the black density and/or increase the white density. Sometimes it helps to convert the image to 16 or 32 bit first (and back to 8 bit before saving as jpg/png), as this can reduce banding in the depth map from the Levels operation.
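
Here is a rough scripted equivalent of that Levels move -- a sketch assuming a single-channel 8-bit depth map and numpy/Pillow, doing the remap in floating point and only quantizing once at the end (which is the point of the 16/32 bit detour):

import numpy as np
from PIL import Image

d8 = np.asarray(Image.open("xxx_depth.jpg").convert("L"))

# Levels-style output remap: raise the black point and lower the white point,
# compressing the overall contrast of the depth map
out_black, out_white = 0.20, 0.85            # fractions of full scale -- tune to taste
frac = d8.astype(np.float32) / 255.0         # work in float to avoid banding
frac = out_black + frac * (out_white - out_black)

Image.fromarray(np.round(frac * 255).astype(np.uint8)).save("xxx_depth.jpg", quality=95)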

Another thing you can do to reduce distortions from too much contrast is the "white dot" technique. Look at your depth map and use the Eyedropper in Photoshop to sample the lightest tone in the image -- this becomes the Foreground Color. Click on the Foreground Color icon and select a tone a bit lighter than that (or even pure white). Use the brush or pencil tool to paint a small area in a corner of the depth map with the new brighter tone. Now there will be less distortion in the generated Facebook 3d Photo (and since the image is cropped slightly anyway, you won't see the spot you have made). Presumably Facebook looks at the entire tonal range of the depth map image before estimating the amount of parallax to generate. This "white dot" process can reduce the depth "detail" of the effect though, and make the Facebook 3d Photo less interesting depth-wise.
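
The same trick in script form -- again just a sketch, with an 8-bit single-channel map assumed and an arbitrary patch size and tone offset:

import numpy as np
from PIL import Image

d = np.asarray(Image.open("xxx_depth.jpg").convert("L")).copy()
spot = min(int(d.max()) + 20, 255)   # a tone a bit lighter than anything in the scene
d[0:8, 0:8] = spot                   # small patch in a corner -- lost in FB's slight crop
# Facebook presumably scales the parallax to the full tonal range, so the scene
# itself now sits lower in that range and animates with less distortion
Image.fromarray(d).save("xxx_depth.jpg", quality=95)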

Re this "cropping" of the Facebook 3d Photo effect image compared with the original photo/painting etc. This can be very annoying especially at top and bottom of the frame. So often what I do is use Canvas Size to increase the height of the image (and depth map)  -- maybe 10%. Then I use Photoshop's new Content-Aware Fill on both image and depth map to smartly add detail in the extended top and bottom areas.

Facebook 3d Photo Groups: the most popular (in terms of number of subscribers) Facebook 3d Photo Group is, I think, this one --
https://www.facebook.com/groups/3dphotographs

Another Group is this one:
https://www.facebook.com/groups/LetsConvert2DImagesTo3D
... but this one is about 3d conversion of 2d images in general, so there are anaglyph, side-by-side format etc. images as well as Facebook 3d Photos. You have to join both groups to post, but they are publicly accessible otherwise.

Embedding Facebook 3d Photos into blogs etc: it is possible, I think -- but it may be a bit browser dependent.
For example:  https://rostaman.pp.ua/lotus-ode-zhang-liying-1962.html
See: https://developers.facebook.com/blog/post/2018/02/20/3d-posts-facebook/

Alternatives to Facebook 3d Photos for VR viewing: for a year or two after the feature was introduced it was possible -- by accessing Facebook 3d Photo urls with a VR web browser on some VR headsets, for example the Oculus Quest -- to see them in very impressive 3d in VR. It was like standing in front of a large hologram the height of a person. There was some limited 6DOF freedom of movement about the experience: if you moved too close or too far laterally, the picture moved away from you and/or re-centered itself in front of you. But then unfortunately Facebook turned off that capability.

Recently though there is something similar in the form of Owl3D, a (paid) service that makes AI depth maps from images and presents them in VR headsets (from a URL) in the same sort of way (though it creates what looks more like a 3d reconstruction than a hologram). https://www.owl3d.ai/

3D conversion of still photographs or paintings: there are a variety of methods, non-AI and increasingly AI, that people use to make 2D pictures 3D -- and most of them involve creating depth maps somewhere in the process.
 
Here is an extensive article on various (mostly non-AI) 3d still conversion techniques:
shorturl.at/csvJ5
(this file opens in document edit mode in Chrome -- to make the urls in the article clickable go to View/Mode/Viewing).

Here is a recent YouTube tutorial covering 3d conversion and other AI innovations for stereographers:
https://www.youtube.com/watch?v=LuTEkytsTu4

Boosting Monocular Depth (BMD) -- in mid-2021 there was a big advance in the quality of AI depth maps from single images, in the form of this research:
https://www.youtube.com/watch?v=lDeI17pHlqo

For using the Colab version there are a couple of tutorials:
https://youtu.be/LuTEkytsTu4?t=922  
btw he has a bugfix version of the Colab here (I am not sure what bug):
https://colab.research.google.com/drive/18an1b-ffk6K_lz3p2ZIdSfa8Qzhk5r1P?usp=sharing
https://www.youtube.com/watch?v=_-kYlCmIqyA
https://www.youtube.com/watch?v=SCbfV80bZeE

Here are some of my workflow notes with this AI software and its outputs:
Upscaling:
If you want to get very detailed depth maps with BMD it is important that the source image be as large (but still detailed) as possible.
So sometimes I use the new Enhance filter in Adobe Camera Raw on the source image (it can 2X upres and denoise). The source image must be in a Raw format or a DNG file.

For example here is a 3d conversion (anaglyph) from a monoscopic 360 panorama of a biker funeral. The original was 7200*3600 pixels and the upres Enhance version was 14400*7200.
https://www.facebook.com/groups/LetsConvert2DImagesTo3D/posts/3972563566198774/
The stitched panorama original was a jpg, so I had to convert it to DNG -- which is non-trivial these days. The only way I have found to do it now is with Topaz Adjust AI, which luckily I had.

Here is another Enhance usage example -- a depth map with BMD from a YouTube video of an urban explorer in Vorkuta, Siberia -- 2X from a 4K video frame: https://www.facebook.com/groups/3dphotographs/posts/862064171143568/

Panorama conversion and depth map effect previewing: sometimes BMD does a great job making a depth map from a 360 equirectangular image directly .. but sometimes one needs to split and transform the source image into overlapping wide angle rectilinear images, generate depth maps with BMD from each rectilinear image, and then stitch the images and the derived depth maps back into equi format (using PTGui Pro for these splitting and joining steps -- a sketch of the underlying remapping appears below). That was the case with this historic 360 panorama of Sydney (1873):

I think, with the degraded original source image quality (scans of 10 overlapping vintage prints) and the limited vertical view of the historic panorama, BMD needed the converging verticals of my downwards-tilted extracted wide angle views (the panorama was shot from a tower) to work out the geometry of the scene (presumably BMD -- with the MiDaS or LeReS AIs -- could use "Manhattan World" depth cues with this scene type).
https://www.facebook.com/groups/LetsConvert2DImagesTo3D/posts/4101448026643660/
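
PTGui Pro handles those extraction and re-stitching steps interactively, but for the record the underlying remap is simple enough to sketch (numpy + OpenCV; my own rough version, not PTGui's code -- yaw/pitch/FOV pick the extracted view):

import numpy as np
import cv2

def equi_to_rect(equi, yaw_deg, pitch_deg, hfov_deg, out_w, out_h):
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(hfov_deg) / 2)    # pinhole focal length in pixels
    xx, yy = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    dirs = np.stack([xx, yy, np.full_like(xx, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)  # ray directions (x right, y down, z forward)
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(yw), 0, np.sin(yw)], [0, 1, 0], [-np.sin(yw), 0, np.cos(yw)]])
    dirs = dirs @ (Ry @ Rx).T                             # tilt then pan the view
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))         # -pi/2 .. pi/2
    map_x = ((lon / np.pi) * 0.5 + 0.5) * (W - 1)
    map_y = ((lat / (np.pi / 2)) * 0.5 + 0.5) * (H - 1)
    return cv2.remap(equi, map_x.astype(np.float32), map_y.astype(np.float32),
                     cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)

# e.g. four overlapping 75-degree views, tilted down, for BMD:
# views = [equi_to_rect(pano, y, -20, 75, 1600, 1600) for y in (0, 90, 180, 270)]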

I have been retouching the depth map of the panoramic scene with the aid of PTViewer3D -- which is a great image+depth 360 3D viewer -- using my 3D TV, which runs as one of my monitors, in squeezed SBS mode (one of the output formats supported by PTViewer3D) ... see my Reddit post here:
https://www.reddit.com/r/depthMaps/comments/pt35si/ptviewer3d_an_imagedepth_map_panorama_player/

Sometimes I use a VR headset (Oculus Rift) instead of -- or alternating with -- my 55" 3D TV/monitor (switched, for direct non-headset viewing with glasses, into squeezed SBS mode) as the 3d display for depth map effect previewing and retouching work, still seated at my desk with my graphics tablet handy. Virtual Desktop runs in the headset showing the squeezed SBS output from PTViewer3D, giving a stereo view of the image on a virtual screen. I have configured the settings in Virtual Desktop (running from Steam) so this virtual screen appears about 1m away, and of a width that subtends an 80 degree view -- i.e. very large and close, much like my real-world 3d monitor aka 3D TV view, only with the depth errors even more visible. And as I lean in, the 3d effect in headtracked VR is heightened in a quasi-6DOF fashion that makes the depth map errors clearer still.

Alternatively, rather than using the stitched 360 panorama/depth map image pair with PTViewer3D, I work with the extracted tilted wide angle (about 75 degrees wide) rectilinear views and their BMD-derived depth maps -- where the depth map retouching is often easier, as all the buildings in the depth map now have straight lines rather than equirectangular curves.

In this case I use Stereophoto Maker (SPM) and its depth map tools -- converting from image/depth to squeezed SBS images (or anaglyph) -- for visualizing errors. http://stereo.jpn.org/eng/stphmkr/ . With SPM you can import an image + depth pair as though it were a regular L/R stereo image pair, then in the Depth Tools section of the Edit menu set the "stereo base" to a high value and output a real squeezed SBS L/R pair with exaggerated depth. And again view that on a virtual screen up close in VR space with Virtual Desktop in a headset.
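
What SPM's Depth Tools are doing there can be imitated in a few lines -- a crude forward-warp sketch (numpy; assumes "white = near"; max_shift plays the role of the stereo base, and occlusion holes are simply left black where a real tool would inpaint them):

import numpy as np

def synth_view(img, depth, max_shift, sign):
    H, W = depth.shape
    disp = depth.astype(np.float32) / 255.0 * max_shift
    out = np.zeros_like(img)
    xs = np.arange(W)
    for y in range(H):
        xt = np.clip((xs + sign * disp[y]).astype(int), 0, W - 1)
        order = np.argsort(disp[y])          # near pixels written last, so they win occlusions
        out[y, xt[order]] = img[y, order]
    return out

left = synth_view(img, depth, 12, +1)        # exaggerate depth by raising max_shift
right = synth_view(img, depth, 12, -1)
sbs = np.concatenate([left, right], axis=1)  # full-width SBS; halve the widths for "squeezed"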

More links:

https://arxiv.org/pdf/2109.07547.pdf  RAFT-Stereo

"Additionally, we introduce multi-level
GRU units that maintain hidden states at multiple resolutions with cross-connections but still generate a single high resolution disparity update. This improves the ability of the update operator to propagate information across the image, improving the global consistency of the disparity field."










Tuesday, 3 March 2020

Stereo fisheye viewfinders, peripersonal vision, AI video retiming and resizing, and drawing in fisheye/panoramic perspective -- and Black Ink

This post is mostly about subjects not specifically stereo related -- but relevant to stereo in various ways.

But first something specifically stereo: stereo viewfinders. In the olden days, when I had a couple of Nikon Fs and twin Nikkor 24mm wide angles, I arranged them vertically, eyepiece to eyepiece at eye separation distance (65mm for me) -- and walked around using the rig as a high quality optical TTL hyperstereo world viewer -- and I was very pleased with the wide angle hyperstereo effect.

And more recently I have done the same thing with digital SLRs and twin Nikkor fisheyes for VR180 stereo snapshot photography.

Luckily for me the minimum eyepiece distance is 65mm -- and I love using it for stereo snapshots -- but the lens interaxial is large (140mm), so not good for close subjects -- though the new PTGui (v12.0) is great for handling alignment with hyperstereo pairs.

I mostly prefer to use a base-to-base, vertically oriented DSLR rig for stereo snapshots -- with a more generally usable lens interaxial of 85mm -- but then I can't have this TTL stereo viewfinder thing happening. But then I noticed I had managed to collect a couple of Lomography fisheye viewfinders ... the one here:


These are pretty mediocre viewfinders, but they have an accessory shoe. And I had a couple of accessory mount tripod adapters, so it was easy to make a very portable stereo fisheye viewfinder rig with them mounted side by side on a small piece of aluminium bar. (You need to swivel the mounts inwards a bit till they align when you are looking through them.) In stereo, btw, the view looks much sharper than through either one singly -- yet another proof that eye and brain are a synergistic system stereo-wise! This is a really good compact rig for previewing scenes in very wide angle stereo.
I have done more experiments -- holding one fisheye viewer adjacent to the eyepiece of one of my fisheye DSLRs (vertical) -- and that surprisingly also gives a good stereo view. The coverage is not the same, but the visual scale is (accidentally) the same -- and despite the drastic difference in clarity, the stereo summation effect much reduces the expected blur (if you are looking through the DSLR with your dominant eye).

Preview links for the other topics:

Peripersonal stereo vision:
https://sourceforge.net/projects/genua-pesto-usage-code/
https://www.ukdiss.com/examples/peripersonal-space-self-frontier-and-impact-prediction.php

AI video retiming and resizing and colorization:
https://www.youtube.com/watch?v=6FN06Hf1iFk
https://www.facebook.com/Denis.Sergeevitch
https://grisk.itch.io/dain-app
https://arxiv.org/pdf/2002.11616.pdf
https://www.sciencealert.com/ai-neural-networks-have-upscaled-a-classic-19th-century-movie-into-4k

Drawing in fisheye/panoramic perspective:
https://www.youtube.com/watch?v=ze5SWc-yN2c (Open Toonz)
https://www.facebook.com/panopainter/
http://www.mediavr.com/drawingontheland.htm
Black Ink paint app: I bought this recently on Steam when it was heavily discounted. I am really interested in nodal paint apps and there aren't many -- and I think it can be very good for depth map retouching:
http://blackink.bleank.com/
https://store.steampowered.com/app/233680/Black_Ink/












Saturday, 11 January 2020

More depth map developments


For depth from stereo software workflows the big news for me has been the release of Photoshop 2020. This incorporates Adobe Sensei AI into its selection tools:
https://photoshopcafe.com/photoshop-2020-upgrade-new-features-use/
"The Object Selection tool is very exciting. It comes in 2 flavors, rectangle and lasso. This amazing tool enables you to drag over and object in a photo or image and then Photoshop magically selects it for you, like Select Subject, but more focused. You could select it with the rectangle and then fine tune with the Lasso option. As always Shift Adds to the selection, while Alt/Option subtracts. This is like a Magic wand and Magnetic lasso in one, but its powered by Sensei, Adobe’s AI engine."

This makes masking so much simpler for depth map correction and background removal for better depth map calculation for foreground elements. It is very interesting how it really seems to know what kind of objects you are interested in and it works really quickly so the process is interactive. Btw you can fine tune your contours by slightly going over into the subject's area with the object lasso tool in a second pass -- something that is not intuitive. Topaz AI Mask is still useful I think for semi-transparent detail like hair.

I have been paying attention to AI scene segmentation software developments on arXiv etc. This is dividing up all of an image into meaningful objects. I am sure this is the future for depth map extraction. There is also the notion of "panoptic scene segmentation". This is not necessarily to do with panoramas -- it is more to do with a complete analysis of every pixel in the frame -- something we have to do when creating depth maps:
"In the panoptic segmentation task we need to classify all the pixels in the image as belonging to a class label, yet also identify what instance of that class they belong to."
https://medium.com/@danielmechea/what-is-panoptic-segmentation-and-why-you-should-care-7f6c953d2a6a
https://scholar.google.com.au/scholar?q=panoptic+scene+segmentation&hl=en&as_sdt=0&as_vis=1&oi=scholart

Stereo panorama photographer and software developer Thomas Sharpless has published an interesting workflow for two-row stereo panorama capture and stitching (with PTGui 11), with extremely sharp looking results -- on the 360 Stereo Panoramas Facebook group. PTGui 11 has powerful warp-to-fit features. The two-row approach serves to constrain drastic warping to the bottom row (closer to the ground and hence where it is more needed) ... and also to better remove vertical parallax, for subsequent depth map extraction from the stitched stereo panorama pair.
https://www.facebook.com/groups/3dstereopanoramas/permalink/2495865310679441/
https://www.facebook.com/groups/3dstereopanoramas/

Currently my main preoccupation with depth map improvement involves selecting (stereo) layers of foreground elements in the source stereo pairs, using depth from stereo (Fusion/KartaVR) on those, then compositing the depth elements back together. I have also been investigating guided depth map improvement (guided by the source -- aka "feature" -- image), as I described in my last post with JointWMF:
https://stereopanoramas.blogspot.com/2019/11/depth-from-stereo-depth-map-retouching.html

Now I am trying to see if I can get this new "PixTransform" depth map superresolution software working -- it is primarily for smart upscaling of depth maps, adding detail from the source image, but I think it might work for depth map improvement generally.
https://arxiv.org/abs/1904.01501
https://github.com/riccardodelutio/PixTransform
https://medium.com/ecovisioneth/guided-super-resolution-as-pixel-to-pixel-transformation-dad13dfc76cb

More news:
Lucid -- who work with Red and eYs3D, and had an early VR180-type consumer camera -- have released a beta AI phone app for depth maps from monocular images: LucidPix 3D Photo Creator.
https://www.lucidpix.com/
https://www.marketwatch.com/press-release/lucid-partners-with-eys3d-etrons-subsidiary-to-create-the-first-vr180-depth-camera-module-2019-01-08

Raspberry Pi have a stereo/360 camera solution now -- and there is also a hardware chip for realtime AI depth videos:
https://www.raspberrypi.org/blog/stereoscopic-photography-stereopi-raspberry-pi/
https://www.crowdsupply.com/luxonis/depthai
There are two camera types available I think -- one is an M12 lens mount sensor that can take fisheye lenses, e.g. Entaniya etc; the other is a smaller sensor. I am not sure if the AI board will work with fisheye input. In one post on 3dphoto.net they say the sync is good for stills but not so good for video. You can stream stereo fisheye views directly from a Raspberry Pi into an Oculus Go.

Google says the next version of ARCore for Android phones with its Depth API will provide depth capture for regular Android phones (no special lenses etc required).
https://www.theverge.com/2019/12/9/20999646/google-arcore-augmented-reality-updates-occlusion-physics-depth

Intel have released details of the next version of their Realsense depth camera, the Realsense Lidar -- these are small enough to clip onto stereo rigs etc. without being too obstructive:
https://www.anandtech.com/show/15220/intel-announces-realsense-lidar-depth-camera-for-indoor-applications









Tuesday, 19 November 2019

Depth from stereo -- depth map retouching

A Facebook 3d Photo ... Newtown Festival

Workflow steps for depth from stereo:
1. Generate a "*.exr" (HDR) format image from Fusion Studio (using the KartaVR depth from stereo template). This is 32 bit.
2. Convert it to a 16 bit tif in Photoshop (or sometimes with Topaz Studio or HDR Efex Pro 2).
3. Adjust the balance and contrast of the 16 bit tif version globally and locally with Photoshop. I often use the Viveza plugin for local density/contrast and "structure" corrections of the depth map at this point. The idea is to have depth map detail visually apparent everywhere you want to see it in the end experience (6DOF, Facebook 3d Photos etc.).
4. Convert the depth map tif to a jpg. Now the depth map looks like this:


You can see at this point there are depth map tonalities varying continuously and pretty evenly from foreground to distance. You can check it now in 3D with Stereophoto Maker's depth tools (in anaglyph, or on a 3d TV with squeezed SBS) or in browser/headset previews with krpano after converting the fisheye image and depth to equirectangular. You only have so much depth representation capability in an 8 bit depth map, so it has to be used efficiently. You can see that the depth information looks sort of accurate, except that the contours of most of the objects need a lot of refinement.
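
The 32 bit -> 16 bit -> 8 bit funnel in steps 1-4 can also be scripted. A sketch with OpenCV (EXR reading needs an OpenEXR-enabled build, and newer builds want an environment flag; the percentile clipping is my own habit, to stop a few outlier pixels wasting the tonal range):

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"   # required by newer OpenCV builds
import cv2
import numpy as np

d = cv2.imread("depth.exr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
if d.ndim == 3:
    d = d[..., 0]                              # depth repeated per channel -- keep one
lo, hi = np.percentile(d, (0.5, 99.5))         # clip outliers
d = np.clip((d - lo) / (hi - lo), 0.0, 1.0)
# d = 1.0 - d                                  # invert here if the output has white = far
cv2.imwrite("depth16.tif", (d * 65535).astype(np.uint16))
cv2.imwrite("depth8.jpg", (d * 255).astype(np.uint8))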

Currently I am using "feature image"-guided smoothing software on the depth map as a preliminary step before detailed contour correction -- JointWMF.
There is an application download on this page, with various *.bat templates for different filtering applications; I use a version of the "demo_texture_smooth" one. You can run multiple iterations of the filter with the "t" parameter (number of repeats) -- I usually use somewhere from 2 to 8. Beyond a certain number of iterations the depth map contours stop improving and density banding (posterisation) starts occurring. The feature image is your source image (one of the original stereo pair); the depth map for that image is the one you are processing with the filter; the improved depth map is the output image. Two iterations take about a minute on my hardware with big files.
The bat file I used looks like this (but I didn't do much testing of possible parameter combinations -- you might need to do your own matrix of tests):


JointWMF -i data1/newtownRdepthmap.jpg -f data1/newtownRsourceimage.jpg -o data1/correctedRdepthmapimage.jpg -r 30 -si 25 -w exp -nF 1028 -t 8

Here is the filtered depth map after JointWMF:

Now the job is standard Photoshop work. I have been working with the image and depth map side by side as a single image, but a separate pair works too. Import the single composite image or image/depth pair into Stereophoto Maker as if it were a standard stereo file (SBS or L/R pair) and then use the Depth Tool to make it into an anaglyph or other format, e.g. squeezed SBS for a 3d TV, to check your retouching. SPM will remember your source, so you just need to reload it to check further retouching.

The natural thing in Photoshop is to work from foreground to background. What I do is this (with my image+depth single composite image): I select a rectangular area of the source image where I want to fix the depth map, and copy that area in register onto the corresponding region of the depth map (using Difference mode etc. to fine tune registration). I select a foreground object or figure on the image layer, hide that layer, make the depth layer active, and invert the selection to fix the depth map contour by cloning from the adjacent background depth map region around the selection. Where the depth map is totally wrong or missing in a region, e.g. the fence railings, I select the region on the image and fix the depth map by filling with depth gradients (linear or radial).
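
The gradient fills are easy to script too, once you have the region as a mask -- a sketch of the linear (vertical ramp) case in numpy, where top_val and bottom_val are depth tones sampled from the surrounding map:

import numpy as np

def fill_linear(depth, mask, top_val, bottom_val):
    # fill the masked region with a vertical depth ramp (8-bit map assumed)
    rows = np.where(mask.any(axis=1))[0]
    y0, y1 = rows.min(), rows.max()
    ramp = np.interp(np.arange(depth.shape[0]), [y0, y1], [top_val, bottom_val])
    full = np.broadcast_to(ramp[:, None], depth.shape)
    depth[mask] = full[mask].astype(depth.dtype)
    return depth

# e.g. fence railings: near (light) at the bottom, receding (darker) towards the top
# depth = fill_linear(depth, railing_mask, top_val=90, bottom_val=200)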

Sunday, 20 October 2019

Creating and publishing stereoscopic depth map panoramas

Since my previous posts there have been very significant developments with this!

On the publishing side:
Most significantly, krpano has released a version with support for image+depth 360 panoramas. It supports anaglyph html output for the desktop -- and WebVR immersive 6DOF or 3DOF stereoscopic panorama viewing in supporting browsers (e.g. Firefox Reality) on VR headsets, e.g. Rift, Vive, Go and Quest.

https://krpano.com/forum/wbb/index.php?page=Thread&postID=79218

On the 360 depth map creation side:
Most significantly, Holokilo has released an alpha version of an application -- Deep360 (free) -- that uses AI to automatically produce a depth map from monoscopic equirectangular 360 panorama images.
https://holokilo.wikividi.com

The developer Kaj Toet says he is mainly working on a version for 360 mono videos but this works amazingly well for some 360 stills. It outputs depth maps and over-under stereo. (He also has an Android app for any kind of image, not necessarily panoramic -- https://play.google.com/store/apps/details?id=net.kajos.holokilo&hl=en   )

More recent versions of his Windows app have a number of controls:
"Limit Stereo Edge" means fade off the stereo at zenith and nadir (to avoid stereo viewing failing there with the Over-Under output -- a universal issue with ODS (Omnidirectional Stereo) formats generally -- but absent with image+depth panos as correct stereo is generated dynamically in any direction with image+depth pano players)

Other ODS issues are problems when your head is "rolled", and with very wide angle, even mildly tilted, views. Dersch says ODS is "poison" for good panoramic stereo in one of his articles. (Depth maps are not the only way to avoid being poisoned -- light field technologies, e.g. Google's Welcome to Light Fields demo, have perfect stereo panoramic viewing in any direction as well as some volume of 6DOF movement capability. https://store.steampowered.com/app/771310/Welcome_to_Light_Fields/ )

"3D strength" means what it says -- it controls the amount of disparity produced -- which corresponds with the depth map to how contrasty it is. I usually have it set to between 3 and 6.

"Dilate filter" controls how much the depth map edges are expanded past the corresponding edges of the source pano image elements ie. how much it overlaps itI am not sure why he has this -- it improves the over-under output quality I think. But I usually set it to zero as I am primarily interested in the depth map output and clean contours are more generally useful I think (I can generate stereo formats from image+depth with Stereophoto Maker with one of its depth  tools or with After Effects with the Displacement filter).
http://stereo.jpn.org/eng/

btw ... about ODS rendering: https://developers.google.com/vr/jump/rendering-ods-content.pdf


How does Deep360 do its magic? I don't know ... "Deep" means AI, of course -- but what kinds of AI? One common AI approach to depth from mono -- for one kind of subject -- seems to use the cool-sounding "Manhattan Scene Understanding" concepts:
 http://grail.cs.washington.edu/projects/manhattan/manhattan.pdf

"We focus on the problem of recovering depth maps, as opposed to full object models. The key idea is to replace the smoothness prior used in traditional methods with priors that are more appropriate. To this end we invoke the so-called Manhattan-world assumption [10], which states that all surfaces in the world are aligned with three dominant directions, typically corresponding to the X, Y, and Z axes; i.e., the world is piecewise-axis-alignedplanar."

Other methods of producing depth from monocular 360 images
Depth (or modeling) from monocular 360 images is a very popular AI research topic, but there are few software implementations that just anyone -- without machine-learning software compilation know-how -- can use.

Matterport, the commercial real estate 3d scanning company, has 360 image support. They call their technology Cortex AI. It is for generating models of building interiors with 360 photography.
https://matterport.com/360-cameras/

Everpano, an image-based modeling application, is a sister program for krpano image+depth panorama publishing. It is rather basic in the sorts of models it can produce (it is mainly for interiors) -- but it can model directly from equirectangular panoramas. It also has scripting features (the Navigator Plugin) for 6DOF navigation, with krpano, around the models Everpano makes from 360 pictures.
https://everpano.com/
krpano has a utility which will produce a panoramic depth map from an Everpano model if your primary output is going to be image+depth. It also has a tool for matching the scale of the produced depth map to the real world .. important for VR exploration.

In the next release, the krpano developer says, with Everpano's Navigator Plugin we will be able to navigate intuitively in 6DOF -- in VR with a Quest, say -- with image+depth maps alone, by clicking on or moving around multiple panoramas.
https://krpano.com/forum/wbb/index.php?page=Thread&threadID=16984

More depth from monocular AI software:
Masuji Suto's (of Stereophoto Maker) implementation of Google Mannequin Challenge software:
 "Learning the Depths of Moving People by Watching Frozen People."https://www.youtube.com/watch?v=prMk6Znm4Bc
https://github.com/google/mannequinchallenge
http://stereo.jpn.org/jpn/stphmkr/google/indexe.html 
This requires a recent CUDA card to work (I have it working on a PC with an Nvidia GTX 1650 -- I think the most basic deep-learning-capable card it will run on). The output resolution is highly dependent on the amount of video RAM on the card. The maximum it will produce from a panorama with this card (4GB of video memory) is 512*256. (By processing the panorama in sections you might be able to get more resolution, but I don't know how well the depth map would stitch.)

Retouching automatic depth-from-monoscopic-panorama depth maps: there are multiple AI depth from mono programs now that are quite specialized in the sorts of subjects they work with. Using different AIs for different problem areas in a 360 pano depth map can be an efficient method.

FACES
For instance there is this online service for automatic face models with AI
3D Face Reconstruction from a Single Image     
https://cvl-demos.cs.nott.ac.uk/vrn/
https://www.engadget.com/2017/09/19/AI-3d-face-single-photo/?guccounter=1

So you cut out the faces in your mono panorama, upload them, download the models, and produce the depth maps. How to produce a depth map from a model? You could export a depth channel with After Effects ... or use this workflow with Meshlab (free), which has nice controls for the depth map tones:
https://community.glowforge.com/t/tutorial-creating-a-depth-map-from-a-3d-model-for-3d-engraving/6659
http://www.meshlab.net/
... and then fix the faces in your panoramic depth map image.
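
As an alternative to the Meshlab route, rendering a depth map from the downloaded model can be scripted -- a sketch with trimesh + pyrender, where the camera pose and FOV are guesses you would tune until the render lines up with the cut-out face:

import numpy as np
import cv2
import trimesh
import pyrender

tm = trimesh.load("face_model.obj", force="mesh")
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tm))

cam = pyrender.PerspectiveCamera(yfov=np.radians(45))
pose = np.eye(4)
pose[2, 3] = 0.5                           # back the camera off along +z -- tune per model
scene.add(cam, pose=pose)

r = pyrender.OffscreenRenderer(512, 512)
_, depth = r.render(scene)                 # depth in scene units, 0 where there is no geometry

valid = depth > 0
lo, hi = depth[valid].min(), depth[valid].max()
out = np.zeros_like(depth)
out[valid] = 1.0 - (depth[valid] - lo) / (hi - lo)   # invert so white = near
cv2.imwrite("face_depth.png", (out * 255).astype(np.uint8))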

Or there are mobile phone apps that do a good job producing depth maps of faces and bodies ... popular, I guess, with phone users whose phones do not do depth maps in hardware .. and who want depth of field control for portraits.
eg. DPTH
"uses AI to predict depth even on single camera phones"
https://play.google.com/store/apps/details?id=ru.roadar.dpth&hl=en
...the paid version lets you save the depth map .. for depth map capture from panoramic images you can put the image on a screen and photograph the relevant area with the phone.

More generally ..
Often with a panoramic depth map you would want to extract rectilinear views from the area of the source image that needs retouching -- with PTGui, say -- build a model from that image area, make a depth map from the model, and insert that depth map back into the equirectangular depth map image -- again with PTGui.

So, for instance, if you wanted a depth map of a panorama of that amazing giant crystal cave in Mexico, you could do it like this in Photoshop -- much more easily in a rectilinear view than in equirectangular:
"Tutorial on manually creating depth-maps in Photoshop for pictures with complex depths."
https://www.youtube.com/watch?v=GUGafT3WWl4

Gimpel3d as an image+depth editor and previewer
Gimpel3d supports 360 cylindrical and dome panoramic format input images -- as well as rectilinear. It only runs on one of my PCs (?) -- but it is very interesting, though hard to get a handle on documentation-wise. It has SBS export, depth map input and export, and depth brushes. It works very well on the desktop as an interactive anaglyph 6DOF explorer/player of wide angle image+depth extracts from Deep360 panos.
https://sourceforge.net/projects/gimpel3d/
http://www.freeterrainviewer.com/G3DSite/index.html
https://www.youtube.com/watch?v=jxnlFqjQIVk
https://www.stereoscopynews.com/hotnews/3d-technology/2d-to-3d-conversion/1647-gimpel3d2-conversion-software.html
http://3dstereophoto.blogspot.com/2013/10/2d-to-3d-conversion-using-gimpel3d.html

https://www.google.com/search?q=depth+from+monoscopic+panoramas:
... one year's (2019) worth of studies:
https://richardt.name/publications/megaparallax/MegaParallax-BertelEtAl-TVCG2019.pdf

https://hal-mines-paristech.archives-ouvertes.fr/hal-01915197/file/Cinematic_Virtual_Reality_with_Motion_Parallax_from_a_Single_Monoscopic_Omnidirectional_Image.pdf


Post-stitching Depth Adjustment for Stereoscopic Panorama
https://ieeexplore.ieee.org/document/8682724
 "For those regions that still suffer from visible depth error after the global correction, we select control points in those depth anomaly patches and utilize the Thin-Plate-Spline technique to warp those features into their target position."
Thin-Plate-Spline" warping is in ImageJ (Fiji). It is a really fast filter.
https://imagej.net/BigWarp
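
BigWarp does this interactively in Fiji. For a scripted taste of the same idea, SciPy's RBFInterpolator has a thin-plate-spline kernel -- a sketch (control points are matched N x 2 arrays of (x, y) positions; evaluating on a full-resolution grid is slow for big images):

import numpy as np
import cv2
from scipy.interpolate import RBFInterpolator

def tps_warp(img, src_pts, dst_pts):
    # backward warp: for each output pixel, work out where to sample the source
    h, w = img.shape[:2]
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
    gy, gx = np.mgrid[0:h, 0:w]
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
    mapped = tps(grid).reshape(h, w, 2)
    return cv2.remap(img, mapped[..., 0].astype(np.float32),
                     mapped[..., 1].astype(np.float32), cv2.INTER_LINEAR)

# warped = tps_warp(depth, src_pts, dst_pts)  # nudge anomaly patches to their targets
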
More than you ever wanted to know on image stitching tactics generally:
A survey on image and video stitching.
https://www.sciencedirect.com/science/article/pii/S2096579619300063

https://www.researchgate.net/publication/336304243_Real-Time_Panoramic_Depth_Maps_from_Omni-directional_Stereo_Images_for_6_DoF_Videos_in_Virtual_Reality
https://www.youtube.com/watch?v=XLOcrAZtc7w
This is a very clear talk. They are working with Suometry -- who have a fast realtime stereo pano capture camera technology. They use stereo panos and normal maps to train realtime depth from mono.

https://web.stanford.edu/~jayantt/data/icme16.pdf
Depth Augmented Stereo Panorama for Cinematic Virtual Reality with Head-Motion Parallax
.. a cure for ODS .. "The proposed representation supports head-motion parallax for translations along arbitrary directions."

Motion parallax for 360° RGBD video
http://webdiis.unizar.es/~aserrano/projects/VR-6dof
They have 3 video layers to reduce occlusion errors ... there is a demo:
http://webdiis.unizar.es/~aserrano/projects/VR-6dof.html#downloads

https://www.vr-if.org/wp-content/uploads/Reality-Capture-Present-and-Future.pdf
his prediction 2018-2019:
'“3DOF+” is introduced and begins to gain traction in high-end offline productions, supplanting stereoscopic 360 almost entirely'   mmm..








Tuesday, 21 May 2019

Depth maps, volumetric experiences and stereo panoramas.


"3D Photos"
Consider this Facebook "3D Photo" scene of mine. If you move your mouse over it, it will animate in a sort of 3D fashion.

Facebook first introduced this for some recent phones a few months ago -- phones that have extra camera units embedded that give additional parallax information and permit the construction of depth maps.

https://techcrunch.com/2018/06/07/how-facebooks-new-3d-photos-work/

Mostly the multi-camera/depth map technology on phones is used for shallow depth of field effects for portraits, but these newer "3d" animation-style effects are becoming more common, on phones and desktops. On phones the effect can play back when you tilt the phone up and down or side to side, as well as respond to swiping.

More recently  Facebook lets anyone upload a pair of images -- the source image and a depth image with the same name as the source image except with the suffix "_depth" -- and Facebook will calculate and deliver the 3d animation effect in a few seconds.
https://www.oculus.com/blog/introducing-new-features-for-3d-photos-on-facebook/

So what is this "depth map" you speak of? Imagine you are a monochromatic creature, an achromatope:
https://en.wikipedia.org/wiki/Monochromacy
And you are beset with a thick blackish fog. Things will have normal sorts of tones up close, but in the distance everything merges into a black haze. The effect is used by a few CG-based artists -- often for gloomy, ethereal themes, e.g. Kazuki Takamatsu:
https://metropolisjapan.com/wp-content/uploads/2017/11/we-ride-our-decoration-WEB-860x480.jpg
Or the depth encoding can be reversed, with close things darker and more distant things lighter.

Here is my depth map for the gardens scene above -- with white close (the Facebook 3d Photo convention)

The accuracy of the depth encoding in a depth map depends on the number of bits available, so a 32 bit depth image could have more depth information than an 8 bit depth image. Sometimes color depth maps are used as the extra color channels provide more data room as well as being visually informative.
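
A quick back-of-envelope on what those bits buy you, assuming (unrealistically) a linear depth encoding over a made-up scene range:

near, far = 2.0, 50.0                           # scene depth range in metres -- invented numbers
for bits in (8, 16):
    step = (far - near) / (2 ** bits - 1)
    print(f"{bits} bit: {step:.4f} m per grey level")
# 8 bit: ~0.19 m steps; 16 bit: ~0.0007 m; 32 bit float is effectively continuous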

Acquisition of depth maps: People with normal vision can discriminate close scenes depth-wise with great precision. According to some evolutionary scientists it is all to do with our fruit-hunting, tree-dwelling ancestors:


But good depth maps are difficult at present. We can see errors in depth map stereo visualisations very clearly, particularly now in virtual reality headsets, but currently this does not translate into a very intuitive feedback loop for better depth map creation and correction.

.... work in progress from this point on in this post:
Depth-aware stereo image editing method ...  (Google)
https://patents.google.com/patent/US20140368506

Depth-aware coherent line drawings
http://juliankratt.info/depth_line_drawing

Depth-aware neural style transfer
https://feedforward.github.io/blog/depth-aware/

Interactive Depth-Aware Effects for Stereo Image Editing
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=4711&context=etd


Tuesday, 9 April 2019

Handheld HDR VR180 stereo snapshots -- exposure alignment and deghosting

First some summary notes and links on new AI desktop imaging apps -- then a detailed discussion of HDR workflow with Aurora HDR and its AI-aided exposure blending and deghosting -- and then workflow notes on going from stereo fisheye captures to aligned equirectangular (PTGui), vertical-disparity-corrected (After Effects), depth-equalized (Evil Twin 3D) VR180 stereo pairs.

AI desktop imaging
A considerable number of AI plugins and apps for masking, HDR exposure merging, noise reduction, sharpening and resizing, scene analysis and object detection and tracking and depth map extraction have appeared recently:

eg. Topaz's AI Clear, AI Sharpen and AI Resize, ON1's Photo RAW, and Skylum's Luminar and Aurora HDR.

https://photofocus.com/software/why-im-loving-topaz-studio-a-i-clear/
https://petapixel.com/2019/03/04/review-topaz-sharpen-ai-is-amazing/
https://skylum.com/blog/ai-sky-enhancer-a-breakthrough
https://skylum.com/newsroom/aurora-hdr-2019-introduces-aipowered-quantum-hdr-engine
https://www.on1.com/blog/new-ai-quick-mask-tool-on1-photo-raw-2019/

and ...
Matterport has AI-powered photogrammetry cloud processing for 360 2D panorama scene capture of interiors. Autodesk's Flame 2020 has added AI-aided human face modeling from 2D video and scene segmentation for depth mapping. Kognat has impressive AI masking of a large variety of video sequences via scene analysis and object detection. Arraiy enables realtime AI-aided virtual set camera and object tracking.

http://360rumors.com/2019/01/matterport-insta360-ricoh-theta.html
http://www.cgchannel.com/2019/04/autodesk-unveils-flame-2020/
https://www.fxguide.com/quicktakes/flame-embraces-deep-learning/
https://kognat.com/
https://www.fxguide.com/quicktakes/rotobot-bringing-machine-learning-to-roto/
https://www.arraiy.com/
http://vfxvoice.com/the-new-artificial-intelligence-frontier-of-vfx/

Aurora HDR
Some (very technical) notes to self. If you have 3 exposures, say, you can choose one to be the reference for deghosting purposes. That exposure will not be changed scale-wise in the blended result. Say the exposures are +2, 0 and -2EV, and -2EV is the deghosting reference. Then the other possible blends, e.g. +2 with -2, and 0 with -2, will be the same dimension-wise as the 3-exposure combo, but with slightly different croppings. By canvas sizing them to the same dimensions you will now have 4 images -- the original -2EV image and the 3 possible exposure blends (-2, 0, +2), (-2, 0) and (+2, -2) -- that are very closely aligned. All can be aligned very exactly by using Photoshop's Auto-Align feature on them as layers, once or twice, with the "choose align method automatically" option. So you have plenty of options for compositing different parts of the blends or the original deghost reference exposure.

to be continued ...