Ooooh I have an original Lytro and of course it just sits there. I am going to have to try this!
What's wild is that it turns out we were only a few years away from technology that could serve the same purpose as the cinema camera, running on off-the-shelf GPUs in a normal desktop (NeRFs).
NeRFs are cool and someday will do what the Lytro cinema camera claimed. But today it’s nowhere close. See how Lytro took 10 years to go from lab science to a cinema quality camera. NeRFs have a similar journey ahead of them. Today they kinda mostly sorta work in the lab.
Also just from a fidelity perspective it seems better to fully capture all the light rays than to capture part of the scene and try to recompute it later.
Indeed. NeRFs are impressive from an interpolation standpoint, but they can't magically beat Nyquist - any component in the radiance field with a frequency higher than the sampling rate will be missed, and probably aliased. In practical terms, this means things like highly specular reflections, "glittery" or "twinkly" surfaces, and magnifying optics - anything which could "project" an obvious pattern onto the capture plane.
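To make the Nyquist point concrete, here's a minimal sketch (my own toy example, not anything NeRF-specific): a 9 Hz "glitter" signal sampled at 10 Hz is indistinguishable from a slow 1 Hz signal, which is the 1D analogue of a twinkly surface reconstructing as matte.

```python
import numpy as np

# A 9 Hz signal sampled at 10 Hz (Nyquist limit: 5 Hz).
# The high-frequency component aliases down to |9 - 10| = 1 Hz,
# so the samples look like a slow, smooth signal.
true_freq = 9.0       # Hz, above Nyquist
sample_rate = 10.0    # Hz
t = np.arange(0, 1, 1 / sample_rate)
samples = np.sin(2 * np.pi * true_freq * t)

# The exact same samples are produced by a 1 Hz signal of opposite sign:
alias = np.sin(2 * np.pi * 1.0 * t)
print(np.allclose(samples, -alias))  # True
```

Nothing in the samples alone tells you which of the two signals was really there - exactly the "no data for what happens in between" problem with sparse camera positions.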
An example would be the surface of a swimming pool - light reflecting off the water will project a pattern like this [0] onto the nearest wall, and if that's where your camera positions are, they are effectively sampling that image; there is no data for what happens in between. The resulting NeRF reconstruction will look dull and flat, with the water taking on a matte or satin texture.
Another example would be a laser beam - all the cameras see is a weak translucent line, if they see anything at all. You cannot infer that you would receive a blinding amount of light if you put your eye right in the beam unless you happen to know about lasers, and even if you do, if the laser itself is not in shot, you cannot determine which way the beam is going.
Assuming that nothing in the scene is emitting perversely high-frequency signals like a laser, you could in theory make a guess at some of this information by inferring a lot more about the scene - calculating the position of lights, computing normal maps, guessing that certain sets of points are made from the same material and therefore likely have the same BSDF. But current NeRFs don't do any of that, they just run volume rendering on a set of emissive points.
By the way don't miss this video about the failed 755 megapixel 300fps Lytro Cinema camera, a contraption the size of a car with off the charts data storage requirements.
https://www.youtube.com/watch?v=pfjYecJHMRU