Tuesday, 3 March 2020

Stereo fisheye viewfinders, peripersonal vision, AI video retiming and resizing, and drawing in fisheye/panoramic perspective -- and Black Ink

This post is mostly about subjects not specifically stereo related -- but relevant to stereo in various ways.

But first something specifically stereo: stereo viewfinders. In the olden days, when I had a couple of Nikon Fs and twin Nikkor 24mm wide angles, I arranged them vertically, eyepiece to eyepiece, at eye separation distance (65mm for me) -- and walked around using the pair as a high quality optical TTL hyperstereo world viewer -- and I was very pleased with the wide angle hyperstereo effect.

And more recently I have done the same thing with digital SLRs and twin Nikkor fisheyes for VR180 stereo snapshot photography.

Luckily for me the minimum eyepiece distance is 65mm -- and I love using the rig for stereo snapshots -- but the lens interaxial is large (140mm), so it is not good for close subjects -- though the new PTGui (v.12.0) is great for handling alignment with hyperstereo pairs.

I mostly prefer to use a base-to-base, vertically oriented DSLR rig for stereo snapshots -- with a more generally usable lens interaxial of 85mm -- but then I can't have this TTL stereo viewfinder thing happening. Then I noticed I had managed to collect a couple of Lomography fisheye viewfinders ... the one here:


These are pretty mediocre viewfinders, but they have an accessory shoe, and I had a couple of accessory-mount tripod adapters, so it was easy to make a very portable stereo fisheye viewfinder rig with them mounted side by side on a small piece of aluminium bar. (You need to swivel the mounts inwards a bit until they align when you are looking through them.) In stereo, by the way, the view looks much sharper than when looking through either viewfinder singly -- yet another proof that eye and brain are a synergistic system stereo-wise! This is a really good compact rig for previewing scenes in very wide angle stereo.
I have done more experiments -- holding one fisheye viewer adjacent to the eyepiece of one of my fisheye DSLRs (vertical) -- and that surprisingly works too for giving a good stereo view. The coverage is not the same, but the visual scale is (accidentally) the same -- and despite the drastic difference in clarity, the stereo summation effect much reduces the expected blur (if you are looking through the DSLR with your dominant eye).

Preview links for the other topics:

Peripersonal stereo vision:
https://sourceforge.net/projects/genua-pesto-usage-code/
https://www.ukdiss.com/examples/peripersonal-space-self-frontier-and-impact-prediction.php

AI video retiming, resizing and colorization:
https://www.youtube.com/watch?v=6FN06Hf1iFk
https://www.facebook.com/Denis.Sergeevitch
https://grisk.itch.io/dain-app
https://arxiv.org/pdf/2002.11616.pdf
https://www.sciencealert.com/ai-neural-networks-have-upscaled-a-classic-19th-century-movie-into-4k

Drawing in fisheye/panoramic perspective:
https://www.youtube.com/watch?v=ze5SWc-yN2c (OpenToonz)
https://www.facebook.com/panopainter/
http://www.mediavr.com/drawingontheland.htm

Black Ink paint app: I bought this recently on Steam when it was heavily discounted. I am really interested in nodal paint apps and there aren't many -- and I think it can be very good for depth map retouching.
http://blackink.bleank.com/
https://store.steampowered.com/app/233680/Black_Ink/

Saturday, 11 January 2020

More depth map developments


For depth from stereo software workflows, the big news for me has been the release of Photoshop 2020. This incorporates Adobe Sensei AI into its selection tools:
https://photoshopcafe.com/photoshop-2020-upgrade-new-features-use/
"The Object Selection tool is very exciting. It comes in 2 flavors, rectangle and lasso. This amazing tool enables you to drag over and object in a photo or image and then Photoshop magically selects it for you, like Select Subject, but more focused. You could select it with the rectangle and then fine tune with the Lasso option. As always Shift Adds to the selection, while Alt/Option subtracts. This is like a Magic wand and Magnetic lasso in one, but its powered by Sensei, Adobe’s AI engine."

This makes masking so much simpler for depth map correction, and for background removal for better depth map calculation of foreground elements. It is very interesting how it really seems to know what kind of objects you are interested in, and it works so quickly that the process is interactive. Btw you can fine-tune your contours by slightly going over into the subject's area with the object lasso tool in a second pass -- something that is not intuitive. Topaz AI Mask is still useful, I think, for semi-transparent detail like hair.

I have been paying attention to AI scene segmentation software developments on Arxiv etc. This means dividing up all of an image into meaningful objects. I am sure this is the future for depth map extraction. There is also the notion of "panoptic segmentation". This is not necessarily to do with panoramas -- it is more to do with a complete analysis of every pixel in the frame -- something we have to do when creating depth maps.
"In the panoptic segmentation task we need to classify all the pixels in the image as belonging to a class label, yet also identify what instance of that class they belong to."
https://medium.com/@danielmechea/what-is-panoptic-segmentation-and-why-you-should-care-7f6c953d2a6a
https://scholar.google.com.au/scholar?q=panoptic+scene+segmentation&hl=en&as_sdt=0&as_vis=1&oi=scholart
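
To make that concrete, here is a minimal sketch of running an off-the-shelf panoptic segmentation model with Facebook's Detectron2 library (the input file name is a placeholder of mine; the config and weights names are Detectron2's published COCO panoptic baselines):

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# load a pretrained panoptic FPN model from the Detectron2 model zoo
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml")
predictor = DefaultPredictor(cfg)

im = cv2.imread("left_frame.jpg")  # e.g. one eye of a stereo pair
panoptic_seg, segments_info = predictor(im)["panoptic_seg"]
# panoptic_seg: a per-pixel map of segment ids covering the whole frame;
# segments_info: which class (and instance) each segment id belongs to
```

Every pixel ends up labelled, which is exactly the complete-analysis-of-every-pixel property that matters for depth maps.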

Stereo panorama photographer and software developer Thomas Sharpless has published an interesting workflow on the 360 Stereo Panoramas Facebook group for two-row stereo panorama capture and stitching (with PTGui 11), with extremely sharp-looking results. PTGui 11 has powerful warp-to-fit features. The two-row approach serves to constrain drastic warping to the bottom row (closer to the ground and hence more in need of it) ... and also to better remove vertical parallax, for subsequent depth map extraction from the stitched stereo panorama pair.
https://www.facebook.com/groups/3dstereopanoramas/permalink/2495865310679441/
https://www.facebook.com/groups/3dstereopanoramas/

Currently my main preoccupation with depth map improvement involves selecting (stereo) layers of foreground elements in the source stereo pairs, using depth from stereo (Fusion/KartaVR) on those, and then compositing the depth elements back together. I have also been investigating guided depth map improvement (guided by the source -- aka "feature" -- image), as I described in my last post about JointWMF:
https://stereopanoramas.blogspot.com/2019/11/depth-from-stereo-depth-map-retouching.html
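
For anyone who wants to experiment with the guided filtering idea outside of JointWMF itself, OpenCV's ximgproc module ships a weighted median filter that takes a guide ("feature") image. A minimal sketch -- the file names and filter radius here are placeholders of mine, not values from my workflow:

```python
import cv2

# depth map to be cleaned up, plus the source ("feature") image as guide
depth = cv2.imread("foreground_depth.png", cv2.IMREAD_GRAYSCALE)
guide = cv2.imread("left_source.jpg")

# weighted median filtering: edges in the guide image steer the median
# window, so depth edges snap to image edges while speckle is suppressed
refined = cv2.ximgproc.weightedMedianFilter(guide, depth, 7)

cv2.imwrite("foreground_depth_refined.png", refined)
```

(This needs the opencv-contrib-python build, which includes ximgproc.)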

Now I am working to see if I can get this new "PixTransform" depth map super-resolution software working. It is primarily for smart upscaling of depth maps -- adding detail from the source image -- but I think it might work for depth map improvement generally.
https://arxiv.org/abs/1904.01501
https://github.com/riccardodelutio/PixTransform
https://medium.com/ecovisioneth/guided-super-resolution-as-pixel-to-pixel-transformation-dad13dfc76cb
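
As I understand the paper, the trick is to fit a tiny per-image network that maps guide-image pixels to depth values, supervised only by requiring the downsampled prediction to match the low-resolution depth map. Here is a simplified PyTorch sketch of that idea -- not the repo's actual API (the real method also feeds in pixel coordinates and uses a more elaborate network):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def guided_upsample(lowres_depth, guide, steps=2000, lr=1e-3):
    """Fit a per-image pixel-to-pixel net: guide colours -> depth.
    lowres_depth: (1, 1, h, w) tensor; guide: (1, 3, H, W) tensor."""
    scale = guide.shape[-1] // lowres_depth.shape[-1]
    net = nn.Sequential(              # 1x1 convs = a per-pixel MLP
        nn.Conv2d(3, 32, 1), nn.ReLU(),
        nn.Conv2d(32, 32, 1), nn.ReLU(),
        nn.Conv2d(32, 1, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        pred = net(guide)             # high-res depth prediction
        # consistency loss: averaged prediction must match the source
        loss = F.mse_loss(F.avg_pool2d(pred, scale), lowres_depth)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(guide).detach()        # detailed, guide-aligned depth
```

The appeal for depth map improvement generally is that nothing here is specific to upscaling -- the same consistency trick should let a rough depth map borrow edge detail from the source image.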

More news:
Lucid -- who work with Red and eYs3D, and had an early VR180-type consumer camera -- have released a beta AI phone app for depth maps from monocular images: LucidPix 3D Photo Creator.
https://www.lucidpix.com/
https://www.marketwatch.com/press-release/lucid-partners-with-eys3d-etrons-subsidiary-to-create-the-first-vr180-depth-camera-module-2019-01-08

Raspberry Pi now has a stereo/360 camera solution -- and there is also a hardware board for real-time AI depth video:
https://www.raspberrypi.org/blog/stereoscopic-photography-stereopi-raspberry-pi/
https://www.crowdsupply.com/luxonis/depthai
There are two camera types available, I think -- one is an M12 lens-mount sensor that can take fisheye lenses (e.g. Entaniya), the other is a smaller sensor. I am not sure if the AI board will work with fisheye input. In one post on 3dphoto.net they say the sync is good for stills but not so good for video. You can stream stereo fisheye views directly from a Raspberry Pi into an Oculus Go.
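
The stereo capture side is scriptable from Python. A minimal sketch, assuming a StereoPi (or a Pi Compute Module, which has the two CSI ports that stereo mode needs) and the picamera library -- the resolution here is my guess at a sensible value:

```python
import picamera

# side-by-side stereo: both sensors composited into a single frame
with picamera.PiCamera(stereo_mode='side-by-side',
                       stereo_decimate=False) as camera:
    camera.resolution = (1280, 480)    # two 640x480 eyes, side by side
    camera.capture('stereo_pair.jpg')  # stills; video sync is reportedly weaker
```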

Google says the next version of ARCore, with its Depth API, will provide depth capture on regular Android phones (no special lenses etc. required).
https://www.theverge.com/2019/12/9/20999646/google-arcore-augmented-reality-updates-occlusion-physics-depth

Intel have released details of the next version of their RealSense depth camera, the RealSense lidar camera -- these are small enough to clip onto stereo rigs etc. without being too obstructive:
https://www.anandtech.com/show/15220/intel-announces-realsense-lidar-depth-camera-for-indoor-applications
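
Current RealSense units already talk to Python through Intel's pyrealsense2 bindings, and presumably the lidar model will use the same pipeline API. A minimal depth-read sketch (the stream settings are my assumption -- check what the actual device supports):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# assumed depth stream mode: 640x480, 16-bit depth, 30fps
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # distance in metres at the centre pixel of the depth frame
    print("centre distance: %.2f m" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```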