Written by ipac:
Mmm, but a small question mark over the cameras sitting closer together on the iPhone than on the Vision, as mentioned.
So dad can maybe still justify wearing the headset to the kids' birthday party, I figure.
"Well, better stereo effect, duh!" (on further reflection, I seem to recall that mom was absent anyway, so the divorce may already have happened = no argument)
Written by Halldin:
It probably gets help from the LiDAR and software magic as well, but yes, how good the result turns out is a bit of a pig in a poke.
An informative post on RoadToVR about this:
TL;DR: It doesn't capture 3D stereoscopic or VR 180 video; it captures spatial/true depth information, which is better and can be displayed as 3D stereoscopic or VR 180 video.
The naming is somewhat arbitrary, but given that the high-end iPhones include true-depth sensors, already store that depth information alongside the regular pixels, and that Apple uses "spatial video" to describe it, their "three-dimensional" should not mean "only" stereoscopic images with no actual depth information, but true spatial data, where the 2D pixels are associated with depth values (at a much lower resolution).
It should always be possible to create a stereoscopic/dual 2D image from a spatial/actual 3D image (with some limits), but not the other way around. Creating depth information from stereoscopic images boils down to comparing the edges of the two images, interpreting shifts between them as the result of parallax, and deriving depth from those shifts. This only works with a sufficient distance between the cameras and enough contrast in the images to detect edge shifts, and therefore fails with uniformly colored objects, bright scenes lacking shadows, and curved surfaces.
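A minimal sketch of that depth-from-parallax relation, using the standard rectified pinhole model (all numbers are illustrative assumptions, not actual iPhone or Vision Pro specs):

```python
def depth_from_disparity(disparity_px: float,
                         baseline_m: float,
                         focal_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px: horizontal pixel shift of a matched edge between the views
    baseline_m:   distance between the two camera centers
    focal_px:     focal length expressed in pixels
    """
    if disparity_px <= 0:
        raise ValueError("no measurable parallax -> depth is unresolvable")
    return focal_px * baseline_m / disparity_px

# A narrow baseline (cameras close together, as on a phone) yields tiny
# disparities for distant objects, so small matching errors blow up:
for baseline in (0.012, 0.064):   # ~12 mm phone-like spacing vs ~64 mm human IPD (assumed)
    z = depth_from_disparity(disparity_px=2.0, baseline_m=baseline, focal_px=1500.0)
    print(f"baseline {baseline * 1000:.0f} mm, 2 px disparity -> depth {z:.1f} m")
```

The same two-pixel disparity corresponds to very different depths at the two baselines, which is exactly why the edge-matching approach degrades when the cameras sit close together.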
But if you have actual depth information, you don't need the parallax, and therefore you also don't need two camera images recorded at roughly the human IPD apart, or even a second image at all. You can turn a monoscopic spatial image into a 6DoF representation you can walk around in, though obviously with the back side of everything missing. The only advantage of stereoscopy here would be that you get some pixels from the sides of objects right in front of the viewer, with one or both sides located between the eyes, but it comes with several disadvantages when trying to determine the depth of complex shapes.
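Turning one RGB-D frame into such a walkable representation amounts to unprojecting every pixel into camera space. A minimal sketch, assuming hypothetical intrinsics (FX, FY, CX, CY are made-up values; real ones come from camera calibration):

```python
import numpy as np

FX = FY = 1500.0        # assumed focal length in pixels
CX, CY = 960.0, 540.0   # assumed principal point for a 1920x1080 frame

def unproject(depth_m: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Turn one RGB-D frame into an (N, 6) point cloud [X, Y, Z, R, G, B].

    Each pixel (u, v) with depth Z maps to camera-space coordinates
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy -- no second image needed.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3)
    valid = pts[:, 2] > 0   # drop pixels with no depth reading
    return np.hstack([pts[valid], cols[valid]])
```

Once the points live in 3D, any new viewpoint is just a rigid transform plus reprojection, which is where the "6DoF wiggle room" mentioned below comes from.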
So the iPhone most likely will not record stereoscopic video, but spatial video that can be displayed/projected on a stereoscopic display with the correct depth. I'd expect them to merge the images from the normal and wide-angle lenses instead of storing both, possibly recovering some of the missing image information from the sides of small objects that way. In contrast to stereoscopic video, this spatial video should allow for a lot of "6DoF wiggle room" in situations where stereoscopic footage fails or produces heavy depth artifacts because the images don't contain enough visible edges to correctly reconstruct the parallax. In a similar way, the Quest 3 with a true depth sensor should provide better "spatial" tracking than the Quest 2's "stereoscopic" tracking.
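Projecting such spatial video onto a stereoscopic display is essentially the inverse of the disparity calculation above: each pixel is shifted by the parallax its depth implies for each eye. A rough sketch (focal length and IPD are assumed values, and a real renderer would fill the disocclusion holes this naive version leaves black):

```python
import numpy as np

FOCAL_PX = 1500.0   # assumed render focal length in pixels
IPD_M = 0.064       # assumed viewer interpupillary distance

def render_eye(rgb: np.ndarray, depth_m: np.ndarray, eye: str) -> np.ndarray:
    """Synthesize one eye's view from a single RGB-D frame."""
    h, w, _ = rgb.shape
    sign = -1 if eye == "left" else 1
    out = np.zeros_like(rgb)
    u = np.arange(w)
    for v in range(h):
        # per-pixel disparity = f * (half the IPD) / depth
        disp = FOCAL_PX * (IPD_M / 2) / np.maximum(depth_m[v], 1e-3)
        tgt = np.clip((u + sign * disp).astype(int), 0, w - 1)
        out[v, tgt] = rgb[v, u]   # holes at disocclusions stay black
    return out

# left, right = render_eye(rgb, depth, "left"), render_eye(rgb, depth, "right")
```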
Stereoscopy is somewhat "overrated" anyway, because we now associate stereoscopic VR with 3D vision. But the resolution of the eyes/displays allows parallax to be perceived only out to a few dozen meters; beyond that, our depth perception relies on other cues such as shadows, size, object occlusion, and haze. The brain is easily fooled by optical illusions, because much of "seeing" is actually "interpreting based on experience". You can use VR HMDs with one eye (closed); the ability to look around provides most of the actual immersion, and the stereoscopic view helps mostly with hand-eye coordination and estimating close, ambiguous distances.
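A back-of-envelope check on that "few dozen meters" claim (the pixels-per-degree figure is a rough assumption about current consumer headsets): the angular disparity of an object at distance d, relative to the background, is roughly IPD / d radians, and once that falls below one display pixel the headset cannot render any parallax at all.

```python
import math

IPD_M = 0.064            # assumed interpupillary distance
PIXELS_PER_DEGREE = 25   # assumed angular resolution of a current consumer HMD

pixel_rad = math.radians(1 / PIXELS_PER_DEGREE)
max_dist = IPD_M / pixel_rad   # distance where disparity shrinks to one pixel
print(f"parallax drops below one display pixel at ~{max_dist:.0f} m")
# -> ~92 m with these numbers; since you need a couple of pixels of disparity
#    to actually notice the effect, the usable range lands in the tens of meters.
```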