Augmented Reality will Never be a Reality

Augmented Reality is today’s Virtual Boy: it’s expensive hype no one will buy.

Technologists these days have been hard at work building 3D visual overlays, augmenting how you see the world. Meanwhile, fanboys and fangirls have been hard at work telling us about our future in homemade videos; but, as technology advances, the real world will only get more real.

For the luddites and techies among my readership, please watch the video below as a point of discussion.

For the same reasons humanoid robots never seem to make the shelves of Walmart, this future vision (double-meaning intended) will never happen for the mass market—it’s too costly.

Let’s start with the cost of providing vision modification technology, enumerating by scenario:

Case 1: Camera Hacks. Some iPhone apps, like the Yelp application, have basic augmented reality features that overlay information on a live video feed of whatever it is you’re looking at. The hardware cost is low, but the fact remains that your augmented vision requires you to hold a piece of technology out in front of you like a goober. Offloading vision augmentation to a handheld device is clumsy and usually inconvenient; it’s a neat trick, but not much more.

Case 2: Super Glasses. Science fiction (e.g. Snow Crash, Accelerando, Caprica, Iron Man) often features HUD-enhanced glasses that identify other people, overlay environmental information, or display text or video messages from others. Yet fiction forgets that mobile embedded devices have (and will continue to have) issues trading off performance for reliable power. Modifying a scene in believable real-time 3D is difficult enough for an array of rendering machines at Pixar, much less a pair of Ray-Bans. The power and heat requirements would simply be too taxing to prove usable, and vision augmentation would be limited to short bursts—not useful for regular wear.

Not to mention, glasses move around on faces throughout the day. The display would have to constantly correct for the minor but highly sensitive differences as the glasses shift ever so slightly on the wearer’s moving head. And, like watching Avatar in 3D, you’ll develop a slight headache unless the optics are near perfect and consistent.

If—big if—you manage to mitigate these issues, how much is it going to cost you?

Case 3: Tiny Projectors. Imagine a micro-projector outfitted somewhere on your head where an image can be projected onto your retina, fooling your eye into seeing things that aren’t actually there. Can’t imagine it? Neither can I—mostly for the reasons mentioned in Case 2.

Case 4: Optic Nerve Hacks. Imagine a device that could intercept the signal relayed from the retina along the optic nerve as it hits the vision cells of the neocortex, offloading visual rendering and modification to a nearby machine. Even then, you still have to deal with the matter of bandwidth in rendering an enhanced vision for your neocortex so that it can make sense of it. But if that technology were possible, why would you waste time, effort, and cost on only making things look more real or understandable? Why not make things simply more real or understandable at the fundamental level of understanding?…

…which brings us to

My Hunch: As technology moves forward, there’s little doubt that we’ll eventually find a way to make visual image enhancement commonplace. (Naysayers: thirty years ago, what if I told you that people would, en masse, elect to have lasers reshape their corneas, circumventing the need for glasses?)

If we’re at the point, as in Case 4, that we would elect to enhance vision directly to the neocortex, why not enhance the neocortex itself?

Strange as it may sound, the neurons in the neocortex that handle and make sense of your taste, your touch, your smell, and your sight are identical. Depending on what input they’re connected to, the neurons arrange and adapt themselves to make sense of the signals coming into them.

Neurons are pretty neat in this respect. Watch how sensors on the tongue can help the blind to see.

Imagine, if you could, connecting a sensor to a portion of your neocortex (presumably an area that was of very little use to you) and training your brain to make sense of the information coming in. What if it were a digital source, like the entire contents of Wikipedia?

As of January 2010, Wikipedia, including all of its images and all of its text, totals 2.8 Terabytes, or 2867 Gigabytes. If memory density increases 20% a year (as it has been) for the next 21 years, you’ll be able to fit Wikipedia into memory the size of the fingernail on your pinky. You could certainly fit a pinky nail underneath your skull.
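The compounding arithmetic behind that claim can be sketched quickly. Note the baseline here is my own assumption, not the essay’s: I take a pinky-nail-sized chip to hold roughly 64 GB in 2010, in the ballpark of the era’s microSD cards.

```python
# Sketch of the essay's density arithmetic: how many years of 20%/year
# improvement until a fingernail-sized chip holds all of Wikipedia?
# Baseline capacity is a hypothetical assumption (~64 GB in 2010).
wikipedia_gb = 2867   # total size of Wikipedia, January 2010, per the essay
baseline_gb = 64      # assumed fingernail-sized capacity in 2010
growth = 1.20         # 20% density improvement per year

years = 0
capacity = baseline_gb
while capacity < wikipedia_gb:
    capacity *= growth
    years += 1

print(years)  # -> 21, matching the essay's figure
```

With that baseline, the math lands on exactly the 21 years the essay cites; a smaller starting capacity would stretch the timeline by only a few years, since growth compounds.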

So, if you could implant information directly in your brain and your neocortex could make sense of it, why would you need augmented reality? Your brain would do the work automatically. Say, for instance, that in 2010 you wanted to look up “portmanteau”: you’d have to pick up a dictionary or type the word into the Internet, read the definition, understand the definition, and then apply it contextually. With a chip on your neocortex, you’d just know it. You would know it just like you can read this sentence without thinking too much about the character-by-character construction of its words. You would just know.

By the same token, when you looked at someone, you would just know their name. Or, when you looked at the Eiffel Tower, you would know when it was built, who designed it, who installed the elevators, and its mass in kilograms (or pounds) as easily as you see that it’s colored dark brown.

With deep vision into everything you were looking at, why in the world would you need something as crude as a live-drawn diagram to tell you how to make a pot of tea?

You wouldn’t—it’s too costly. And, as discussed above, you would already know the motions and the recipe by heart.

By the time technology capable of feeding modifications to your vision arrives, we should be able to augment your neocortex. This can, in turn, create real knowledge inside your head based on linked data pools. It’d be the end of visual infographics and the start of just data.

Linking data in your head, live, is cheaper, faster, and more reliable, no matter how you slice it. And, until we can connect to the data inside your head, always-on Augmented Reality is too expensive—socially, technologically, economically—to become a reality.

Better Off Dead?

If resurrection becomes permissible, would reanimating legends diminish their utility?

Walt Disney, Albert Einstein, Ben Franklin, and other pioneers all had measurable impact on our world. Their contributions continue to resonate through time; but, if we had the power to bring them back, I’m skeptical that their intellectual currency has kept pace with inflation—perhaps they work best in our memory.

Let’s draw an example:

It’s often said by baseball pundits that Babe Ruth, arguably the best player of all time, couldn’t hold a candle to modern major leaguers. They argue that, these days, it’s hard to fathom him edging out stars who’ve been trained since diapers in highly competitive talent-development leagues.

If Babe Ruth—The Great Bambino—were to miraculously return to baseball, we’d be risking what he means to baseball, possibly tainting what he is to so many people. (Just ask any three-year-old who the best baseball player of all time is.)

In this way, Babe’s most useful as an ideal, not as a player.

Reanimating the thinkers and doers first mentioned in this entry could have a similar effect: it’s not that restoring Albert Einstein wouldn’t be beneficial to science and mankind; it’s that in all likelihood, he’s no smarter or able than modern scientists who have followed in his footsteps.

A living Einstein couldn’t possibly sustain the edifice a dead one can; much less could he meet the demand for his time. Likely, his active involvement in the scientific community would be lackluster compared with the great expectations set for him; and, likely, he’d be on par with the rest of the active community.

Like Babe, Einstein’s most useful as an ideal, not as a player. Compounding the risk, if Einstein proved not to be a modern-day Einstein, his reanimation could detract from his story.

Perhaps this is true for living legends as well.

Though, as an afterthought, a postmortem comeback to the top would be an impressive feat, one that would set a new benchmark for legend; but we do need to recognize the possible (and likely) deleterious effects of returning the idea of a specific person—an idealized person—to human form. Having it be a net benefit would be a long shot, one pragmatism should prohibit in almost all circumstances.

An Original, Unoriginal Thought

I don’t think human beings are capable of original thought.

In essence, the brain is a pattern machine. Thoughts and ideas are stored in neurons in the cerebral cortex as a nest of patterns—patterns established by physical limitations (the body) and by the environment. Emotion, circumstance, and social interaction help dictate the patterns the brain understands and values—and thought follows only from those.

I don’t mean to say we don’t think. (Or, at least I think we think.) What we call thought is (I think) our brains’ attempt to pattern-match our lifetimes’ worth of experiences onto whatever problem, circumstance, or question confronts us. When racking our own brains fails, we turn to research and randomness.

By way of example, recall the scene in Kubrick’s 2001: A Space Odyssey where primates discovered tool use by bludgeoning skulls with a loose femur. The act of banging was behavioral, its proximity to skulls coincidental, and thus its use random. Skulls, the primates knew, once belonged to live animals, and thus they concluded: the femur could be used against other primates. A novel idea, translated from random happenstance.

Similarly, the major leaps of man are random acts of pattern discovery: patterns observed, learned, and translated into other situations. In this sense, original thought is nothing more than discovery and translational application.

This is also not to say humans are incapable of complex thought, quantum leaps, or extraordinary thinking—I’m only suggesting that those leaps and complexities are based on systems that we know or that we happened upon: our imaginations are limited to our experiences and the patterns we innately understand by circumstance of being human.

Consciousness is our gift. Pure creation is not. (Insert your preferred dogmatic implications here.)

Which, if I’m right, is rather frustrating… if I’m right, I never really came up with this idea—it just happened upon me.