It is the next potential evolution in personal computing and a significant step towards what has been termed the "singularity," the point when computers and people become so intertwined you can't distinguish between them. Don't worry, there is still a long way to go. You aren't waking up as a Borg tomorrow.
The Next Computing Leap
Currently, we mostly connect to our computers through displays and direct them with text. Yes, we use some voice and a few gestures, but for the most part we are computing much the way we did 20 years ago. Even with smartphones, we are making displays bigger as we did with PCs and using some form of keyboard much of the time; our amazing advancement is that our finger is now our mouse.
Now look at HoloLens. We don't look at the display; the display moves with us and flows around things we otherwise need to see. Rather than a mouse or a finger redundantly pointing at the things we want to interact with, our eyes are the pointer, and we can point at both real and virtual objects, something we can't do at all today. For commands, we switch from keyboards to voice and begin to add gestures in free space (which, by the way, suggests that someone who knows sign language may at some point have a critical computing skill).
This is much closer human-machine integration: out of the box, HoloLens couples far more closely to how we instinctively do things than any computer we have ever used before. Finally, we wear it, so eventually it will always be with us, enhancing what we do wherever we are (with the possible exceptions of sleeping and bathing).
Now, as noted, this is very much like the Apple II in that the offering's full capability will likely come with future versions.
The Evolution of HoloLens
Over time, I expect the simple hand gestures we start with will give way to sign language, and I would expect that some form of sign language currently in existence would, for the sake of time to market, be preferred. I would expect the display to gain features like the ability to darken to block out bright light, or to use the camera to enhance hard-to-see images both in darkness and at a distance. I would expect it to be increasingly tied to AI systems that would monitor what you are doing and suggest better alternatives, perhaps preventing many of the poorly thought-out things we all do on a regular basis.
It would, as demonstrated during the launch, help with advice on how to do things, but increasingly that advice would come from centralized expert systems dynamically assigned based on your needs and what you were doing. It would monitor your health and the health of others, suggesting paths away from danger and medical procedures, and connect you with both virtual and real medical help in an emergency. It would show you the potential future of planned developments or the historic past of an area as you go on tours or explore someplace you've never been before. It would keep track of names tied to faces, provide information needed to improve relationships, and analyze body language to warn you of lies and potential hostility.