Augmented Reality Will Never Be a Reality

Augmented Reality is today’s Virtual Boy: it’s expensive hype no one will buy.

Technologists these days have been hard at work building 3D visual overlays, augmenting how you see the world. Meanwhile, fanboys and fangirls have been hard at work telling us about our future in homemade videos; but, as technology advances, the real world will only get more real.

For the Luddites and techies among my readership: watch the video below as a point of discussion.

For the same reasons humanoid robots never seem to make the shelves of Walmart, this future vision (double-meaning intended) will never happen for the mass market—it’s too costly.

Let’s start with the cost of providing vision modification technology, enumerating by scenario:

Case 1: Camera Hacks. Some iPhone apps, like the Yelp application, have basic augmented reality features that overlay information on a live video view of whatever it is you’re looking at. The hardware cost is low, but the fact remains that your augmented vision requires you to hold a piece of technology out in front of you like a goober. Offloading vision augmentation onto a handheld device is clumsy and usually inconvenient; it’s a neat trick, but not much more.

Case 2: Super Glasses. Science fiction (e.g. Snow Crash, Accelerando, Caprica, Iron Man) often features HUD-enhanced glasses that identify other people, overlay environmental information, or display text or video messages from others. Yet fiction forgets that mobile embedded devices have (and will continue to have) issues trading off performance for reliable power. Modifying a scene in believable real-time 3D is difficult enough for an array of 3D rendering machines at Pixar, much less a pair of Ray-Bans. The power and heat requirements would simply be too taxing to prove usable, and vision augmentation would be limited to short bursts, not useful for regular wear.

Not to mention, glasses move around on faces throughout the day. The display would have to constantly correct for the minor but highly noticeable differences as the glasses shift ever so slightly on the wearer’s moving head. And, like watching Avatar in 3D, you’ll develop a slight headache unless the optics are near perfect and consistent.

If—big if—you manage to mitigate these issues, how much is it going to cost you?

Case 3: Tiny Projectors. Imagine a micro-projector mounted somewhere near your eye, from which an image can be projected onto your retina, fooling your eye into seeing things that aren’t actually there. Can’t imagine it? Neither can I, mostly for the reasons mentioned in Case 2.

Case 4: Optical Nerve Hacks. Imagine a device that could intercept the signal relayed from the retina along the optic nerve before it hits the vision cells of the neocortex, offloading visual rendering and modification to a nearby machine. Even then, you still have to deal with the matter of bandwidth: rendering an enhanced vision for your neocortex so that it can make sense of it. But, if that technology were possible, why would you waste time, effort, and cost on only making things look more real or understandable? Why not make things simply more real or understandable at the fundamental level of understanding?…

…which brings us to

My Hunch: As technology moves forward, there’s little doubt that we’ll eventually find a way to make visual image enhancement commonplace. (Naysayers: thirty years ago, what if I told you that people would, en masse, elect to have lasers reshape their corneas, circumventing the need for glasses?)

If we’re at the point, as in Case 4, that we would elect to enhance vision directly to the neocortex, why not enhance the neocortex itself?

Strange as it may sound, the neurons in the neocortex that handle and make sense of your taste, your touch, your smell, and your sight are identical. Rather than being specialized from the start, the neurons arrange and adapt themselves, depending on what input they’re connected to, to make sense of the signals coming into them.

Neurons are pretty neat in this respect. Watch how sensors on the tongue can help the blind to see.

Imagine, if you could, connecting a sensor to a portion of your neocortex (presumably an area that was of very little use to you) and training your brain to make sense of the information coming in. What if it were a digital source, like the entire contents of Wikipedia?

As of January 2010, Wikipedia, including all of its images and all of its text, totals 2.8 Terabytes, or 2867 Gigabytes. If memory density increases 20% a year (as it has been) for the next 21 years, you’ll be able to fit Wikipedia into memory the size of the fingernail on your pinky. You could certainly fit a pinky nail underneath your skull.
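If you want to sanity-check that claim, here’s a minimal sketch of the arithmetic. The 2.8 terabyte total, the 20% growth rate, and the 21-year horizon are the numbers above; the sketch just compounds the density growth and works backwards to the 2010 chip capacity the claim implies.

```python
# Back-of-the-envelope check of the compounding claim above. The 2.8 TB
# figure, the 20%-per-year growth rate, and the 21-year horizon all come
# from the paragraph above; the rest is plain arithmetic.

WIKIPEDIA_GB = 2867        # all of Wikipedia, text and images, January 2010
GROWTH_RATE = 0.20         # assumed annual improvement in memory density
YEARS = 21                 # 2010 -> 2031

# 20% compounding for 21 years: the same silicon area holds ~46x more.
density_multiplier = (1 + GROWTH_RATE) ** YEARS
print(f"Density multiplier after {YEARS} years: {density_multiplier:.1f}x")

# Working backwards: how much would a fingernail-sized chip have to hold
# in 2010 for all of Wikipedia to fit on it 21 years later?
baseline_needed_gb = WIKIPEDIA_GB / density_multiplier
print(f"Required 2010 starting point: ~{baseline_needed_gb:.0f} GB in a fingernail-sized chip")
```

At roughly 46x the density, the claim works out so long as you can cram about 60 GB into a fingernail-sized chip today, which is in the neighborhood of a high-end flash card circa 2010.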

So, if you could implant information directly in your brain and your neocortex could make sense of it, why would you need augmented reality? Your brain would do the work automatically. Say, for instance, that you, in 2010, wanted to look up “portmanteau”: you’d have to pick up a dictionary or type the word into the Internet, read the definition, understand the definition, and then apply it contextually. With a chip on your neocortex, you’d just know it. You would know it just like you can read this sentence without thinking too much about the character-by-character construction of its words. You would just know.

By the same token, when you looked at someone, you would just know their name. Or, when you looked at the Eiffel Tower, you would know when it was built, who designed it, who installed the elevators, and its mass in kilograms (or pounds) as easily as you see that it’s colored dark brown.

With deep vision into everything you were looking at, why in the world would you need something as crude as a live-drawn diagram to tell you how to make a pot of tea?

You wouldn’t; it’s too costly. And, as discussed above, you would just know the motions and the recipe by heart.

By the time technology capable of feeding modifications into your vision arrives, we should be able to augment your neocortex. That, in turn, could create real knowledge inside your head based on linked data pools. It’d be the end of visual infographics and the start of just data.

Linking data in your head, live, is cheaper, faster, more reliable, no matter how you slice it. And, until we can connect to the data inside your head, always-on Augmented Reality is too expensive—socially, technologically, economically—to become a reality.

No

No. Let me explain.

It begins with the letter ‘N’, and while you reckoned on a three-letter response, the one you received has only two. Not one of them is a ‘Y’, an ‘E’, or an ‘S’, as you may have fancied, so I strongly advise you to reassess things.

It is, you’ll find, the exact opposite of your expectations. You were likely anticipating a wide smile, corners curling upward in affirmation; but, instead, you were met with tight lips. Do not misinterpret this as a playful “kissy-face”; if anything, you may sense derisive undertones… and you would be quite astute.

Further, do not presume any change. Ever. When the Earth reverses polarity, when North becomes South, concrete direction may prove ambiguous; however, rest assured, my answer will not.

No means no.