Dead-eyed models, and other beautiful tragedies
Holographic VR requires imagery of objects not just in every direction, but from every perspective. As you move in VR, objects should parallax correctly, and reflections and refractions should look correct too: the image of the sky reflecting from the hood of a car, the glint off your virtual grandma’s glasses, etc.
If done well, the 3D model approach can give pretty good parallax under good conditions. Computer vision algorithms can pretty reliably find object edges, calculate object distances, and estimate how much of the background should become visible around an object at each interpolated view.
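To make that concrete, here’s a toy sketch of the depth-based reprojection such systems rely on (a minimal illustration in Python; the scene values are made up and this isn’t any particular BBOC’s pipeline). Each pixel shifts by a disparity proportional to its inverse depth, and holes open up behind foreground objects: exactly the regions the interpolation step has to fill in from neighboring views.

```python
import numpy as np

# Toy 1-D scene: a foreground bar (5 m away) in front of a background wall (10 m).
colors = np.array(["bg"] * 12, dtype=object)
depths = np.full(12, 10.0)                      # meters
colors[4:7], depths[4:7] = "fg", 5.0

def reproject(colors, depths, scale=20.0):
    """Synthesize a shifted viewpoint: move each pixel by a disparity
    proportional to inverse depth, keeping the nearest surface on collisions."""
    n = len(colors)
    out = np.array([None] * n, dtype=object)    # None = hole (disocclusion)
    out_depth = np.full(n, np.inf)
    for x in range(n):
        nx = x + int(round(scale / depths[x]))  # nearer pixels shift farther
        if 0 <= nx < n and depths[x] < out_depth[nx]:
            out[nx], out_depth[nx] = colors[x], depths[x]
    return out

print(reproject(colors, depths))
# The None entries behind the foreground bar are exactly the regions the
# interpolation step has to in-paint from other cameras' views.
```

With real footage, a neighboring camera’s view supplies those holes, which is why edge detection and depth accuracy matter so much here.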

But what this approach misses is the very nature of light fields. If a scene can be fully represented as solid objects painted with pixels, there’s no need for light fields: a set of 3D objects is a compact and effective way to describe such a simple scene. But of course, the difference between cheap animation and the real world is exactly the part that painted-on pixels can’t capture.
3D animators will be familiar with both the terminology and the stakes here. When we simulate light bouncing around a scene, we get much more accurate results if we allow the light to reflect specularly off of shiny surfaces, rather than simply scattering diffusely. Pixel data can fake this effect for a single perspective: the surface finish is ‘encoded’ in the pixels captured by the BBOC, so baked-in highlights and shading can ‘trick’ the eye into seeing more detail in a model. There’s a fun analog for this in the real world: it’s how trompe l’oeil paintings work, using shading to trick you into misreading an object’s underlying geometry.

(From http://www.hikarucho.com/)
Strangely effective. But these trompe l’oeil illusions tend to break when you start to move around; the implied geometry (and shading) is only accurate from a single point of view. As you walk around a perspective painting, the highlights and specularity stay ‘glued’ to the surface, which breaks the illusion. The implied geometry won’t parallax correctly either; objects won’t occlude each other (or themselves) in quite the right way. Here’s an extreme example:

So the quality of these reconstructions will hinge on the accuracy of the 3D model and on the subject matter; and no matter how good the model is, it can’t reproduce specularity unless it’s informed by light field techniques.
Because of this, some materials simply won’t look correct: everything will look like it’s been painted matte. Having a 3D artist ‘touch up’ a model captured by a BBOC is absolutely a possibility (the best VR right now tends to blur the line between live action and CG), but ohmygod it’s expensive. In other words, BBOCs that aren’t trying for holographic reconstruction may make their subjects look… strange, barring significant retouching in post. This isn’t a small issue: the dead eyes of really bad 3D animation are partly due to a lack of specularity.
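To see just how view-dependent the missing ingredient is, here’s a minimal Blinn-Phong-style sketch (standard textbook shading, not any particular BBOC’s pipeline). The diffuse term is the part you can bake into pixels; the specular term changes with every head position, which is exactly why baked highlights stay glued on.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, shininess=64.0):
    """Lambertian diffuse + Blinn-Phong specular at one surface point."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(np.dot(n, l), 0.0)                 # the same for every viewer
    h = normalize(l + v)                             # half-vector
    specular = max(np.dot(n, h), 0.0) ** shininess   # depends on the viewer
    return diffuse, specular

normal, light = np.array([0, 0, 1.0]), np.array([1, 0, 1.0])
for view in ([-1, 0, 1.0], [1, 0, 1.0]):             # two different head positions
    d, s = shade(normal, light, np.array(view))
    print(f"view {view}: diffuse={d:.2f}, specular={s:.2f}")
# Move your head and the diffuse term stays put while the highlight jumps
# from 1.00 to ~0.00 -- bake it into pixels and it can't jump at all.
```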
We are the 99.8%
So, I claimed the current BBOC approach is only 0.2% of the way there. As our consulting holographer Law describes it, current camera arrays face an uphill struggle.
“It’s not going to come close to what a real hologram’s going to give you… you’ve got an infinite number of views in a real hologram.” Of course, nothing ‘infinite’ is very practical, and even ‘uncountably large’ is generally just the province of the analog domain. How close can we get in the digital one?
Let’s use the Jaunt One as our example of a cutting-edge BBOC. Model it as a ~40 cm diameter ball: the incident light field passes through about 0.5 square meters of spherical surface (4πr² with r = 0.2 m). With 16 lenses, each having an aperture about 1 millimeter across (~3 mm focal length divided by f/2.9), the array samples at best about 0.00001 square meters of that surface: a few thousandths of a percent of the incident light field, which makes even my 0.2% figure generous. The remaining 99.8+% of the light is absorbed by the cool industrial design (and the lenses’ irises). The numbers come out similarly for other arrays.
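If you want to poke at the assumptions (the ball diameter, lens count, and aperture size above are all rough guesses, not spec-sheet numbers), the whole back-of-envelope fits in a few lines of Python:

```python
import math

# All assumed figures, not measurements: a ~40 cm diameter ball with 16
# lenses, each stopped to roughly f/2.9 at a ~3 mm focal length.
ball_diameter = 0.40                     # meters
n_lenses = 16
aperture_diameter = 0.003 / 2.9          # focal length / f-number, ~1 mm

sphere_area = 4 * math.pi * (ball_diameter / 2) ** 2
sampled_area = n_lenses * math.pi * (aperture_diameter / 2) ** 2

print(f"light field through the ball: {sphere_area:.2f} m^2")
print(f"area actually sampled:        {sampled_area:.1e} m^2")
print(f"fraction captured:            {sampled_area / sphere_area:.4%}")
```

Swap in your own numbers; the punchline barely moves.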

Okay, so are we sunk? If lenses require us to throw away most of our light field, maybe we’ll never get our holographic camera. Except… maybe we don’t need to get all of it.
Pretty ripples in the fabric of space-time
Now, it’s time to get a little hand-wavey. Our goal isn’t to literally grab every possible photon and count it; our goal is to reconstruct the incident light field well enough that it looks like we captured the whole thing. Current BBOCs don’t sample nearly densely enough to do this: if a bright glint falls between cameras, it won’t be captured, and can’t be reconstructed. Can we visualize what those glints look like, to figure out how many cameras we’d need? Conveniently enough, we can:

The bright lines on the boat’s hull are places where the sun is reflecting off the water, magnified by the shape of a ripple; the pattern shows how the image of the sun changes across the surface of the boat. Now, this is an extreme case: you can literally see the effect projected onto a surface. But the same phenomenon is true of any high-spatial-frequency imagery: we can see the sun’s light on the boat, but if you were standing in the same place looking toward the water, you’d see the sun, sky, clouds, probably even nearby trees and objects reflected in the water, all changing with your position.
And if those reflections didn’t move correctly as you moved, or didn’t have the correct stereo perspective… it wouldn’t look like water. It might look like a picture of water, like a poster laid on the ground. It depends on how it’s handled, but I am confident the footage won’t look like water unless it’s sampled and reconstructed correctly.
That’s a pretty crappy constraint for a camera system. It’s okay if a camera won’t perform well in low light, or flares easily, or doesn’t handle high contrast: professionals are used to those kinds of issues. But you can’t film water? Or cars, or glass? C’mon, that’s barely a camera. At best it’s a VFX tool for creating source material for animation. (This may also be why some BBOCs, like the Ozo, don’t bother with positional tracking: if professionals need to animate the footage anyway, why bother with half-measures?)
So, how good does a BBOC have to be to film water? It’s going to be a sliding scale, I think. More cameras equals more perspectives equals more subjects you can film. Eventually, with dense enough sampling, it may appear perfect.
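Just to get a feel for the scale before we put a real number on it, here’s a crude sketch; every figure in it is an assumption. Treat a flat-water sun glint as visible only within the sun’s ~0.5° angular size, so by the time it reaches the rig it sweeps a patch roughly distance × angle across, and camera spacing has to be finer than that patch for some lens to catch it.

```python
import math

# Illustrative only -- every number here is an assumption, not a measurement.
sun_angular_size = math.radians(0.5)     # the sun is about half a degree across
surface_distance = 5.0                   # meters from the rig to the water
ball_diameter = 0.40                     # the capture ball, as before

# From one point on flat water, the sun's reflection spans ~0.5 degrees, so
# at 5 m the bright bundle is only about this wide when it crosses the rig:
glint_footprint = surface_distance * sun_angular_size    # ~0.044 m

# To guarantee some camera lands inside that bundle, camera spacing on the
# ball must be finer than the footprint. A rough tiling count:
sphere_area = 4 * math.pi * (ball_diameter / 2) ** 2
cameras_needed = sphere_area / glint_footprint ** 2

print(f"glint footprint at the rig:   {glint_footprint * 100:.1f} cm")
print(f"cameras needed (rough count): {cameras_needed:.0f}")
```

Even this generous version lands in the hundreds of cameras, and that’s for flat water under a big, bright source; rippled water can sharpen glints well below the sun’s half-degree, which only pushes the count higher. Sixteen lenses aren’t in the neighborhood.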
But… I want my holographic camera soon, and “probably won’t have crippling artifacts” is pretty wishy-washy. So let’s try to put a number on this thing, and figure out how far off we are. (More after the break.)