Augmented Reality will Never be a Reality

Augmented Reality is today’s Virtual Boy: it’s expensive hype no one will buy.

Technologists these days have been hard at work building 3D visual overlays that augment how you see the world. Meanwhile, fanboys and fangirls have been hard at work telling us about our future in homemade videos; but, as technology advances, the real world will only get more real.

For the Luddites and techies among my readership, watch the video below as a point of discussion.

For the same reasons humanoid robots never seem to make the shelves of Walmart, this future vision (double-meaning intended) will never happen for the mass market—it’s too costly.

Let’s start with the cost of providing vision modification technology, enumerating by scenario:

Case 1: Camera Hacks. Some iPhone apps, like the Yelp application, have basic augmented reality features that overlay information on a live video view of whatever you’re looking at. The hardware cost is low, but the fact remains that your augmented vision requires you to hold a piece of technology out in front of you like a goober. Offloading vision augmentation into a handheld device is clumsy and usually inconvenient; it’s a neat trick, but not much more.

Case 2: Super Glasses. Science fiction (e.g. Snow Crash, Accelerando, Caprica, Iron Man) often features HUD-enhanced glasses that identify other people, overlay environmental information, or display text or video messages from others. Yet fiction forgets that mobile embedded devices have (and will continue to have) trouble trading off performance for reliable power. Modifying a scene in believable real-time 3D is difficult enough for an array of 3D rendering machines at Pixar, much less a pair of Ray-Bans. The power and heat requirements would simply be too taxing to prove usable, and vision augmentation would be limited to short bursts, not useful for regular wear.

Not to mention, glasses move around on faces throughout the day. The display would have to constantly correct for the minor but highly perceptible shifts as the glasses slide ever so slightly on the wearer’s moving head. And, like watching Avatar in 3D, you’ll develop a slight headache unless the optics are near perfect and consistent.

If—big if—you manage to mitigate these issues, how much is it going to cost you?

Case 3: Tiny Projectors. Imagine a micro-projector mounted somewhere near your eye so that an image can be projected onto your retina, fooling your eye into seeing things that aren’t actually there. Can’t imagine it? Neither can I, mostly for the reasons mentioned in Case 2.

Case 4: Optic Nerve Hacks. Imagine a device that could intercept the signal the retina sends along the optic nerve before it hits the vision cells of the neocortex, offloading visual rendering and modification to a nearby machine. Even then, you still have to deal with the matter of bandwidth: rendering an enhanced scene and feeding it back to your neocortex in a form it can make sense of. But if that technology were possible, why would you waste time, effort, and cost on merely making things look more real or understandable? Why not make things simply more real or understandable at the fundamental level of understanding?…

…which brings us to

My Hunch: As technology moves forward, there’s little doubt that we’ll eventually find a way to make visual image enhancement commonplace. (Naysayers: thirty years ago, what if I told you that people would, en masse, elect to have lasers reshape their corneas, circumventing the need for glasses?)

If we’re at the point, as in Case 4, that we would elect to enhance vision directly to the neocortex, why not enhance the neocortex itself?

Strange as it may sound, the neurons in the neocortex that handle and make sense of your taste, your touch, your smell, and your sight are identical. Depending on what input they’re connected to, the neurons arrange and adapt themselves to make sense of the signals coming in.

Neurons are pretty neat in this respect. Watch how sensors on the tongue can help the blind to see.

Imagine, if you could, connecting a sensor to a portion of your neocortex (presumably an area that was of very little use to you) and training your brain to make sense of the information coming in. What if it were a digital source, like the entire contents of Wikipedia?

As of January 2010, Wikipedia, including all of its images and all of its text, totals 2.8 terabytes, or 2,867 gigabytes. If memory density increases 20% a year (as it has been) for the next 21 years, you’ll be able to fit Wikipedia into memory the size of the fingernail on your pinky. You could certainly fit a pinky nail underneath your skull.
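For the curious, here’s that compounding math as a minimal Python sketch. The 2.8 TB figure, the 20% annual growth rate, and the 21-year horizon come from the paragraph above; everything else (including the variable names) is my own plain arithmetic, not a measured projection.

    # Back-of-the-envelope check of the compounding claim above.
    WIKIPEDIA_GB = 2867        # ~2.8 TB as of January 2010
    GROWTH_PER_YEAR = 0.20     # assumed 20% annual increase in memory density
    YEARS = 21

    # Compounded density gain over the horizon
    density_multiplier = (1 + GROWTH_PER_YEAR) ** YEARS   # roughly 46x

    # How much 2010-era capacity a pinky-nail-sized chip would need today
    # for all of Wikipedia to fit in that footprint 21 years out
    baseline_needed_gb = WIKIPEDIA_GB / density_multiplier

    print(f"Density gain after {YEARS} years: {density_multiplier:.1f}x")
    print(f"2010 capacity needed in that footprint: {baseline_needed_gb:.0f} GB")

Under those assumptions, density grows about 46-fold, so a pinky-nail-sized chip holding roughly 60 GB in 2010 terms would do the job two decades later.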

So, if you could implant information directly into your brain and your neocortex could make sense of it, why would you need augmented reality? Your brain would do the work automatically. Say, for instance, that you wanted to look up “portmanteau” in 2010: you’d have to pick up a dictionary or type the word into the Internet, read the definition, understand the definition, and then apply it contextually. With a chip on your neocortex, you’d just know it. You would know it just like you can read this sentence without thinking too much about the character-by-character construction of its words. You would just know.

By the same token, when you looked at someone, you would just know their name. Or, when you looked at the Eiffel Tower, you would know when it was built, who designed it, who installed the elevators, and its mass in kilograms (or pounds) as easily as you see that it’s colored dark brown.

With deep vision into everything you were looking at, why in the world would you need something as crude as a live-drawn diagram to tell you how to make a pot of tea?

You wouldn’t; it’s too costly. And, as discussed above, you would just know the motions and the recipe by heart.

By the time technology capable of feeding modifications to your vision arrives, we should be able to augment your neocortex. That, in turn, could create real knowledge inside your head based on linked data pools. It’d be the end of visual infographics and the start of just data.

Linking data in your head, live, is cheaper, faster, more reliable, no matter how you slice it. And, until we can connect to the data inside your head, always-on Augmented Reality is too expensive—socially, technologically, economically—to become a reality.

Rag Doll Physics and You

What was perhaps most disturbing about 2010 Olympic luger Nodar Kumaritashvili’s death was its familiarity.

When I first watched the video of Nodar Kumaritashvili’s accident, I was struck not by the gruesome or graphic nature of the clip, but by its familiarity. Like many gamers, I’ve seen this sort of thing before. Countless times:

Life doesn’t have a reset button. But, when videographers and reporters depict events in a similar fashion—showing only the incident and none of the aftermath—the mind tends to catalog the event in abstraction. Without the sense of finality or consequence, significance is lost.

Those sensitive to violence will have more trouble letting go of what they just saw: for them, the image shocks, and its significance is less likely to be lost. But for others used to violence and realistic depictions of it, the clip is more likely to be stored as just another data point for how a human body can crumple at speed.

When I discussed this with my friend Ben Edwards, he remarked how starkly responses differ by age group: on average, the older people are, the more upset they tend to be. And it makes sense: the younger you are, the greater the chance you’ve been exposed to abstracted violence. The older you are, the greater the chance you’ve either experienced real violence or none at all.

I’m not claiming that familiarity with violence is the problem here, but rather that presenting violence in the same cut-away shot a video game uses reduces its meaning and impact. And, while I understand that the “money shot” is in those critical albeit violent moments, the media should take care to craft a story that does not shy away from the aftermath of the incident. The Huffington Post has an appropriate feature.

How we remember what we’ve seen is more important than what we’ve seen. And, in order to distinguish real events from virtual events, we need to be mindful: how we frame violence changes the way it’s absorbed. A viewer needn’t review or watch the aftermath of a violent event. But it’s important that we frame the violence appropriately so we can make sense of it, remembering that the victim often doesn’t get a reset button. Or, if you’re not going to frame it properly, don’t show it at all.

An Original, Unoriginal Thought

I don’t think human beings are capable of original thought.

In essence, the brain is a pattern machine. Thoughts and ideas are stored in neurons in the cerebral cortex as a nest of patterns, patterns established by physical limitations (the body) and by the environment. Emotion, circumstance, and social interaction help dictate the patterns the brain understands and values; thought follows only from those.

I don’t mean to say we don’t think. (Or, at least, I think we think.) What we call thought is (I think) our brains’ attempt to pattern-match our lifetimes’ worth of experiences onto whatever problem, circumstance, or question confronts us. And when racking our own brains isn’t enough, we turn to research and randomness.

By way of example, recall Kubrick’s 2001: A Space Odyssey, the scene where primates discovered tool use by bludgeoning skulls with a loose femur. The act of banging was behavioral, its proximity to skulls coincidental, and thus its use random. Skulls, the primates knew, once belonged to live animals, and thus they concluded: the femur could be used against other primates. A novel idea, translated from random happenstance.

Similarly, the major leaps of man are random acts of pattern discovery: patterns observed, learned, and translated into other situations. In this sense, original thought is nothing more than discovery and translational application.

This is also not to say humans are incapable of complex thought, quantum leaps, or extraordinary thinking; I’m only suggesting that those leaps and complexities are based on systems we know or happened upon: our imaginations are limited to our experiences and to the patterns we innately understand by circumstance of being human.

Consciousness is our gift. Pure creation is not. (Insert your preferred dogmatic implications here.)

Which, if I’m right, is rather frustrating… if I’m right, I never really came up with this idea—it just happened upon me.