How to hack yourself a 3D room camera

I sometimes get the feeling that Google is watching me.

I don’t mean the way it emails me three hours before I have to catch a plane, or how it recommends news articles I actually would like to read, or how it does that thing where ads follow me all around the internet. That’s not paranoia. We all know they’re watching us for that stuff.

No, I’m talking about REA Hack Day projects.

A little over a year ago, our hack day team built a DIY, portable VR headset, which ran off a mobile phone. We used the OpenDive prototype, added a bit of REA creativity, and it was a runaway success. Then a couple of months later, Google released Cardboard!

I know, what are the odds?

A few weeks ago, Google announced Jump, a stereoscopic 3D camera designed to work in VR. Just a few weeks earlier, at the end of our most recent REA hack day, we released The Eyes of Sauron.

A wooden and plastic unit with many loose cables and connectors

Practically identical to the Google one, right?

Okay, so maybe our build was slightly less polished than the Google one. And it didn’t do video. Or integrate with YouTube. Or work quite right every time. But we did get it built mostly in the space of a hack day!

Here’s how it works. (You should absolutely try this at home.)

We see 3D because each of our eyes receives images of the world at a slightly different angle. Our brains use the differences in those images to figure out depth. The more substantial the difference between those two images, the closer the object must be to us.

This is called stereoscopic vision. And a long time ago, some very clever people figured out that it could be tricked by showing each eye a slightly different flat image. If the images are aligned correctly, our brains perceive those images as though we were looking into three-dimensional space.

The easiest way to create this illusion is simply to take two photos with two cameras spaced eye-distance apart. Viewing them then needs only a little bit of hardware with fancy lenses that show one photo to each eye.
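If you want to play with the geometry, the standard pinhole-stereo relationship fits in a couple of lines of Python. The focal length and camera spacing below are made-up illustration numbers, not the actual specs of our rig:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: disparity (how far a point shifts
    horizontally between the two images) is inversely
    proportional to its distance from the cameras."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 6.5 cm camera spacing.
near = depth_from_disparity(700, 0.065, 40)  # big shift -> close object
far = depth_from_disparity(700, 0.065, 5)    # small shift -> far object
```

The further away an object is, the smaller the shift between the two images, which is exactly why distant scenery looks flat even to real eyes.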

That is how we’ve been faking 3D since Victorian times.

A hand-held, Victorian-era wooden stereoscope pointing to two nearly identical photos of a cart

Not actually made by us

Luckily for us, modern mobile phones come with gyroscopes and accelerometers, which make it easy to detect when people move their heads. We can use that information to adjust the position of those two photos, so that it seems like you are standing inside a 3D world. The trick is to capture a stereoscopic pair of images that remains correct no matter which way you’re facing. Then you can look around a 3D space as though you were actually there.
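The head-to-photo mapping is simpler than it sounds. For a full 360-degree equirectangular panorama, yaw maps straight onto a horizontal pixel offset. A minimal sketch (the function name and numbers are just for illustration):

```python
def panorama_column(yaw_deg, pano_width_px):
    """Map a head yaw angle onto a horizontal pixel offset in a
    360-degree equirectangular panorama: a full turn of the head
    sweeps across the full width of the image."""
    return int((yaw_deg % 360) / 360.0 * pano_width_px)

# Facing forward shows the left edge; a quarter-turn right shifts
# the view a quarter of the way across the panorama.
ahead = panorama_column(0, 3600)
right = panorama_column(90, 3600)
```

Do that once per eye, with the two panoramas kept in lockstep, and the stereo illusion holds as you look around.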

Sounds perfect for inspecting houses, right?

Thus, The Eyes.

Two identical cameras are secured to a panel at eye distance apart.

This panel rotates about its local X-axis to cover nearly 180 degrees of vision.

The panel, in turn, is attached to a platform with a larger gear, which rotates about its Y-axis.

Both of these rotations are driven by a combination of stepper motors and 3D-printed gears.
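The arithmetic behind those rotations is straightforward. Here is a rough sketch of converting a desired platform angle into motor steps; the step count and gear ratio below are stand-in assumptions, not our actual hardware:

```python
MOTOR_STEPS_PER_REV = 200  # a common 1.8-degree stepper (assumption)
GEAR_RATIO = 3             # hypothetical 3:1 printed-gear reduction

def steps_for_angle(angle_deg):
    """How many motor steps it takes to turn the platform by
    angle_deg, given the gear reduction between motor and platform."""
    steps_per_platform_rev = MOTOR_STEPS_PER_REV * GEAR_RATIO
    return round(angle_deg / 360.0 * steps_per_platform_rev)
```

A gear reduction like this also buys you finer angular resolution and more torque than driving the platform directly, which matters when the motor is a cheap hobby stepper.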

A flat-faced metal panel attached to two vertical wooden supports, which are in turn attached to a flat wooden board. There is a small motor attached to the wooden board.

Camera-eyes not yet included

The whole unit, along with a couple of Raspberry Pis, network adapters and power connectors, is then attached to a standard photography tripod to keep the contraption stable and at an appropriate height.

To reduce the creepiness of such a rig, we added a cardboard picture of a face.

The illusion of a face, with two camera lenses as eyes and a nose and mouth drawn on a piece of cardboard

Take that, uncanny valley!

The end result is a machine which, via some fancy Python code, takes a total of 96 photos: 48 from each camera, in four rows of twelve. It exports those photos as high-quality images to an SD card, which we run through Hugin to generate two panoramas. That’s one panorama per camera.
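The capture schedule itself is easy to sketch. This isn’t our actual hack-day code, just a minimal reconstruction of the four-rows-of-twelve sweep:

```python
def capture_plan(rows=4, cols=12, tilt_span_deg=180.0):
    """(pan, tilt) angles for one camera's sweep: `cols` pan stops
    spaced evenly around a full circle, `rows` tilt stops covering
    nearly 180 degrees of vertical view."""
    pan_step = 360.0 / cols
    tilt_step = tilt_span_deg / rows
    return [(col * pan_step, row * tilt_step)
            for row in range(rows) for col in range(cols)]

plan = capture_plan()  # 48 positions per camera, 96 photos in total
```

Run the same plan on both cameras simultaneously and you get a matched stereo pair at every position, which is what Hugin needs to build the two panoramas.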

A panorama showing a 360 degree view of a large meeting room

And a second one, which is only very slightly different

At this point, all that remained was to get it into a VR headset. Using Unity3D and the Dive SDK, we were able to send one panorama to the left side of the phone’s screen and the other to the right, while keeping the motion controls required to simulate VR. We exported the app as an Android APK and stuck it into a Cardboard viewer.

To our utter shock (and relief, plenty of relief), it actually worked. There were some glitches and a couple of not-quite-right spots where the photo stitching wasn’t perfect, but all in all, it worked!

Four people holding the 3D room camera

The team: Michael, Angus, Erica, Andrew, Luke (Sir Not Appearing In This Photo) and The Eyes

A few lessons learned though, if you’re planning on making one of these yourself:

  • You can do a lot of very clever and complicated mathematics to make sure you step your motor to the optimal position for each rotation. You probably don’t need to, though; just pick a good-enough number of steps per rotation. Stitching software is designed to cope with photos from random tourists with shaky hands. It can almost certainly handle being a couple of degrees out.
  • Light is tricky. More specifically, cameras that adjust their settings based on ambient light are tricky. They darken photos of the light parts of your room, and lighten photos of the dark parts. Luckily, you can turn this feature off in the Raspberry Pi cameras. Make sure you do that!
  • Keep the cameras level. Unlevel cameras make for a surreal and nauseating experience. Even a couple of millimeters of misalignment is enough to result in a VR experience that feels positively Lovecraftian.
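To put a number on that “good enough”: with twelve shots per revolution, neighbouring shots are 30 degrees apart, so a camera with a field of view anywhere near 54 degrees (roughly what a Pi camera sees horizontally; treat that figure as an assumption) still overlaps its neighbours generously. A quick sanity check:

```python
def stitch_overlap_deg(shots_per_rev, camera_hfov_deg):
    """Angular overlap between neighbouring shots in one row.
    The stitcher needs some overlap to match features between
    frames; more shots per revolution means more overlap."""
    return camera_hfov_deg - 360.0 / shots_per_rev

overlap = stitch_overlap_deg(12, 54)  # ~24 degrees of shared view
```

Almost half of each frame is shared with its neighbour, which is why a couple of degrees of stepper slop simply doesn’t matter.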

Our next challenge, of course, is figuring out what to make for our next hack day.

And just in case Google really is watching our hack days for project inspiration…

Maybe it’s time to hack ourselves a hover-board?