Augmented Reality Will Never Be a Reality

Augmented Reality is today’s Virtual Boy: it’s expensive hype no one will buy.

Technologists these days have been hard at work building 3D visual overlays that augment how you see the world. Meanwhile, fanboys and fangirls have been hard at work telling us about our future in homemade videos; but as technology advances, the real world will only get more real.

For the luddites and techies among my readership, watch the video below as a point of discussion.

For the same reasons humanoid robots never seem to make the shelves of Walmart, this future vision (double-meaning intended) will never happen for the mass market—it’s too costly.

Let’s start with the cost of providing vision modification technology, enumerating by scenario:

Case 1: Camera Hacks. Some iPhone apps, like the Yelp application, have basic augmented reality features that overlay information on a live camera view of whatever you’re looking at. The hardware cost is low, but the fact remains that your augmented vision requires you to hold a piece of technology out in front of you like a goober. Offloading vision augmentation onto a handheld device is clumsy and usually inconvenient; it’s a neat trick, but not much more.

Case 2: Super Glasses. Science fiction (e.g. Snow Crash, Accelerando, Caprica, Iron Man) often features HUD-enhanced glasses that identify other people, overlay environmental information, or display text or video messages from others. Yet fiction forgets that mobile embedded devices have (and will continue to have) trouble trading off performance for reliable power. Modifying a scene in believable real-time 3D is difficult enough for an array of 3D rendering machines at Pixar, much less a pair of Ray-Bans. The power and heat requirements would simply be too taxing to be usable, and vision augmentation would be limited to short bursts, not regular wear.

Not to mention, glasses move around on faces throughout the day. The display would have to constantly correct for the minor but highly sensitive shifts as the glasses slide ever so slightly on the wearer’s moving head. And, like watching Avatar in 3D, you’ll develop a slight headache unless the optics are near perfect and consistent.

If—big if—you manage to mitigate these issues, how much is it going to cost you?

Case 3: Tiny Projectors. Imagine a micro-projector outfitted somewhere on your person that projects an image directly onto your retina, fooling your eye into seeing things that aren’t actually there. Can’t imagine it? Neither can I—mostly for the reasons mentioned in Case 2.

Case 4: Optic Nerve Hacks. Imagine a device that could intercept the signal relayed from the retina along the optic nerve before it hits the vision cells of the neocortex, offloading visual rendering and modification to a nearby machine. Even then, you still have to deal with the matter of bandwidth in rendering an enhanced vision for your neocortex so that it can make sense of it. But if that technology were possible, why would you waste time, effort, and cost on only making things look more real or understandable? Why not make things simply more real or understandable at the fundamental level of understanding?…

…which brings us to

My Hunch: As technology moves forward, there’s little doubt that we’ll eventually find a way to make visual image enhancement commonplace. (Naysayers: thirty years ago, what if I told you that people would, en masse, elect to have lasers reshape their corneas, circumventing the need for glasses?)

If we’re at the point, as in Case 4, that we would elect to enhance vision directly to the neocortex, why not enhance the neocortex itself?

Strange as it may sound, the neurons in the neocortex that handle and make sense of your taste, your touch, your smell, and your sight are essentially identical. Depending on what input they’re connected to, the neurons arrange and adapt themselves to make sense of the signals coming in.

Neurons are pretty neat in this respect. Watch how sensors on the tongue can help the blind to see.

Imagine, if you could, connecting a sensor to a portion of your neocortex (presumably an area of very little use to you) and training your brain to make sense of the information coming in. What if it were a digital source, like the entire contents of Wikipedia?

As of January 2010, Wikipedia, including all of its images and all of its text, totals 2.8 Terabytes, or 2867 Gigabytes. If memory density increases 20% a year (as it has been) for the next 21 years, you’ll be able to fit Wikipedia into memory the size of the fingernail on your pinky. You could certainly fit a pinky nail underneath your skull.
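For the curious, here’s a back-of-the-envelope sketch of that claim. The 2,867 GB figure comes from the paragraph above; the 64 GB starting point (roughly what a fingernail-sized microSD card held around 2010) is my own assumption, so treat the output as illustrative, not authoritative.

```python
# Back-of-the-envelope check of the fingernail-sized-Wikipedia claim.
# Assumptions: 2,867 GB for Wikipedia (from the post), 20% annual growth in
# memory density (from the post), and ~64 GB for a fingernail-sized card in
# 2010 (my assumption, not the post's).
wikipedia_gb = 2867          # text + images, January 2010
growth_per_year = 1.20       # 20% annual increase in density
years = 21

fingernail_gb_2010 = 64      # assumed fingernail-sized capacity in 2010
multiplier = growth_per_year ** years
fingernail_gb_future = fingernail_gb_2010 * multiplier

print(f"Density multiplier after {years} years: {multiplier:.1f}x")
print(f"Fingernail-sized capacity then: ~{fingernail_gb_future:.0f} GB")
print(f"Fits Wikipedia ({wikipedia_gb} GB)? {fingernail_gb_future >= wikipedia_gb}")
```

Under those assumptions the density multiplier works out to roughly 46x, which lands just above the 2,867 GB needed.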

So, if you could implant information directly in your brain and your neocortex could make sense of it, why would you need augmented reality? Your brain would do the work automatically. Say, for instance, that you, in 2010, wanted to look up “portmanteau”: you’d have to pick up a dictionary or type the word into a search engine, read the definition, understand it, and then apply it contextually. With a chip on your neocortex, you’d just know it. You would know it just like you can read this sentence without thinking too much about the character-by-character construction of its words. You would just know.

By the same token, when you looked at someone, you would just know their name. Or, when you looked at the Eiffel Tower, you would know when it was built, who designed it, who installed the elevators, and its mass in kilograms (or pounds) as easily as you see that it’s colored dark brown.

With deep vision into everything you were looking at, why in the world would you need something as crude as a live-drawn diagram to tell you how to make a pot of tea?

You wouldn’t—it’s too costly. And, as discussed above, you would just know the motions and the recipe by heart.

By the time technology capable of feeding modifications to your vision arrives, we should be able to augment your neocortex. That, in turn, could create real knowledge inside your head based on linked data pools. It’d be the end of visual infographics and the start of just data.

Linking data in your head, live, is cheaper, faster, more reliable, no matter how you slice it. And, until we can connect to the data inside your head, always-on Augmented Reality is too expensive—socially, technologically, economically—to become a reality.

(Too Many) Variations on a Theme

It’s great that people blog; I just wish they’d stop saying the same thing.

Through school, students write papers to demonstrate subject knowledge, less so to articulate original thought. Old habits die hard, people start blogging, and in this age of instant worldwide publishing, we end up chewing on a lot of cud.

It’s not that people are boring, stupid, or have nothing to say (though that’s debatable…). Years of response-based writing incline people to offer reactions rather than articulate their own, original ideas.

It’s much easier to write reactions than to create ideas and risk being wrong. To say nothing of the social anxiety of being wrong, describing new ideas is a hard thing to do.

People tend to follow the path of least resistance and thus the blogosphere saturates itself with commentary. And, since the blogosphere moves with such great velocity, it’s near impossible to keep track of everything that’s been said. 

Unfortunately, all contributions — and I use that term loosely — are indexed and compiled into the same channel. We call it “Google”, and the signal-to-noise ratio goes down. Way down.

Responses typically fall into certain categories. (Ask anyone who grades papers or reads hundreds of blogs.) With blogging, there’s just more. It seems more people are interested in demonstrating knowledge than contributing new thought.

My theory is that this happens subconsciously. Years of response-based education create this need; it’s how we were graded by our superiors and evaluated by our peers. People need to show that they know something.

There’s no problem with that, except that this need generates millions of blog posts. As a result, we saturate our knowledge space and make it nearly impossible to wade through.

Good People Day 2008, Part I


It takes more than a good person to declare a flash holiday; it takes one genuinely good person.

Outside the SXSW Bloghaus in Austin last month, some guy was hanging near the door handing out wristbands. A sucker for swag, I approached the guy and said, “Hey, can I have one?” He turned to me, said “Sure!”, and handed me a wristband. “Thanks!” I said. “My name’s Michael. Who are you and what’s your story?”

And that’s how I met Gary Vaynerchuk. Up until that moment, I hadn’t heard of Gary or winelibrary.tv. We spoke for a couple of minutes about how crazy I thought he was for answering his thousands of daily e-mails in lieu of delegating. Then it struck me as not so crazy: here’s a guy who cared so much about his job (wine) and his community that he made it his lifeblood. (I’m omitting a joke about transubstantiation right here.)

I ran into him later that night in the lobby of a hotel where about a hundred people had gathered. I went over to say hello, but before I could open my mouth he put a bottle of wine in my hand and raised another to toast mine: “Gruen! Take this!” (I wasn’t wearing a name tag.) At 2am, this man had energy.

“Gary, we’re so hanging out when we get back to New York.”

“Definitely! Now DRINK!” [sic]

Three days ago, I went to New York’s NextWeb Meetup and ran into Gary. Though we hadn’t talked since SXSW, he remembered me and we went right back to shooting the shit, with me making fun of his e-mailing habits.

So, it should come as no surprise that Gary could galvanize the social media world and beyond with an unedited two-minute video clip. Today is Good People Day 2008, so pass it on.