“Wearable technology.” These days, the phrase conjures up images of laughably impractical watch-phone hybrids, single-function devices like the FitBit, and gigantic head-mounted displays that are useless for anything but watching movies for about 20 minutes at a time. But information leaking out of the shadowy inner test labs known as Google X indicates that the company is working on nothing less than a personal heads-up display (HUD), a staple of science fiction for decades and the goal of many converging technologies like transparent screens and microscopic transistors.

According to an anonymous source speaking to 9to5Google, Google’s take on HUD glasses is nearing the end of its prototype stage. The idea is that a pair of glasses overlays information onto transparent lenses, with a focus on memory assistance – think things like a Google Maps Navigation overlay, or facial recognition that can display people’s names when you can’t recall them. If you’re a literary geek, go read Daniel Suarez’s Daemon for an idea of the possibilities – the implications are staggering. According to the early leak, the device, which SlashGear is calling the literal Google Goggles, will run Android but will not be dependent upon an Android device to function; the mobile data connection and any other necessary hardware will be contained within the thick frames. Google co-founder Sergey Brin is said to be taking a personal interest in the project.

All this is so speculative that calling it a rumor would be an understatement. But Google has a history of sinking tons of money into previously unexplored territory, like its self-driving car program. And it’s not as if the applications don’t already exist – the military uses helmet-mounted HUD systems for fighter pilots and infantry, and even recreational activities like skiing are starting to apply the ideas behind wearable displays.

Imagine the possibilities behind current and future tech: you receive a text party invite via your Google Voice account, it displays on your Google Goggles, you use voice input (perhaps Majel?) to respond and RSVP, then do a voice search for the address, eye-tracking sensors let you indicate the right spot, and you activate a Google Navigation overlay – all without pressing a button. Fantastical? Maybe. Impossible? Maybe not.
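As a thought experiment, here’s roughly what the middle of that chain could look like with today’s stock Android APIs – a minimal, hypothetical sketch that assumes nothing about Google’s actual HUD software. It strings together two things that already ship on Android phones: the RecognizerIntent speech API to capture a spoken address, and the public google.navigation: URI scheme to kick off a Maps Navigation session.

```java
// Hypothetical sketch only – illustrates the "voice search for the address,
// then activate a Navigation overlay" step using stock Android APIs.
// Not based on any leaked Google code.
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

public class VoiceNavDemoActivity extends Activity {
    private static final int REQ_SPEECH = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Step 1: ask the platform's speech recognizer for the destination.
        Intent listen = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        listen.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        listen.putExtra(RecognizerIntent.EXTRA_PROMPT, "Where is the party?");
        startActivityForResult(listen, REQ_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                // Step 2: hand the spoken address to Google Maps Navigation
                // via the public google.navigation: URI scheme.
                Uri dest = Uri.parse("google.navigation:q=" + Uri.encode(matches.get(0)));
                startActivity(new Intent(Intent.ACTION_VIEW, dest));
            }
        }
    }
}
```

Nothing exotic there – both pieces already exist on shipping Android handsets, which is exactly why the glasses rumor feels less far-fetched than it sounds.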

[via SlashGear]
