Microsoft HoloLens: Hands-On at Build 2015

During the last few minutes of the Day 1 Keynote at Build 2015, Microsoft's Alex Kipman announced that "hundreds" of HoloLens devices had been brought to Build for developers to try out. The instant excitement was just as quickly followed by disappointment: Alex went on to explain that attendees were required to sign up (via a web site) for a chance at getting their hands on the much-anticipated device. I, along with a friend who joined me at Build, signed up immediately. A few hours later, we both received e-mail messages letting us know that we had not been selected to take part in the HoloLens activities. Boo!

However, about five hours later, I received a follow-up message letting me know that I had been confirmed as a participant for the Holographic Academy taking place the next morning (apparently I was close enough to the top of the waiting list). Unfortunately, my friend did not receive the same follow-up message. Boo!

The next day, I met up with the rest of the lucky participants at a hotel close to the Moscone Center, where Build was taking place. The event was scheduled for roughly four hours, and we were asked to show up a few minutes early. Although the existence of the Microsoft HoloLens is (obviously) public, Microsoft was taking no chances. We had to show our confirmation e-mail messages (on our phones) to the host downstairs before we were permitted to head up to the fifth floor for the event. Once on the fifth floor, our participation was confirmed again, we were given another (Holographic Academy) badge, and we each got a locker key; all electronics, including our phones, had to be stored in the lockers. Lastly, before being allowed into the Holographic Academy training area "lobby," each of us had the distance between our pupils measured (interpupillary distance, or IPD). This number was later used to configure the HoloLens as each developer set theirs up for the first time.

Once inside, each pair of developers was assigned a mentor: a Microsoft employee with in-depth HoloLens experience. My mentor was Chris, who was there to answer any questions we had about the HoloLens and to make sure we didn't get lost during the development exercises.

As we have all heard multiple times now, the HoloLens runs Windows 10 and, therefore, runs Universal Windows Apps (UWAs). We were to go through multiple (scripted) exercises that used Unity and Visual Studio 2015 to build our holographic (Universal Windows) apps. However, before getting into the nitty-gritty of coding, we first configured our HoloLens, fitted it to our heads, and tried out a pre-installed holographic app: a monster truck that we could drive around the room.

The HoloLens device isn't what I'd call lightweight, but it isn't exactly heavy, either. Once it was adjusted to fit correctly on my head, it was relatively comfortable to wear. It stayed pretty snug and didn't slip around, though putting it on each time always seemed to take several seconds to get it adjusted properly.

After trying that initial app, my first impression was, "OMG, it actually works!"

Having previously seen the various demonstrations of HoloLens, I was a bit skeptical that the real thing would look anything like what was demonstrated. My skepticism was shed within the first few seconds of seeing my first hologram. The one question I really had going in was: do the holograms really stay in place as you move around? The answer was a resounding yes! Amazingly, no matter how fast I moved my head or at what angle I turned it, the hologram did not budge from the point where I dropped it.

Interacting with the hologram was much easier than I thought it would be. I could simply look (gaze) at a location on the floor, do a "finger tap" to place a waypoint, and watch the holographic truck come to life and zoom to the next waypoint. Even cooler, if an obstacle got in its way, say someone's foot, it would attempt to go over it or simply get stuck. The spatial mapping capabilities of the HoloLens were simply amazing (for such a small, completely untethered, and relatively new device)!
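I never saw the demo app's source, but gaze-plus-tap waypoint placement in a Unity HoloLens app essentially comes down to raycasting along the head's forward direction and dropping a marker wherever an air-tap lands. The sketch below is my own reconstruction using Unity's HoloLens gesture API, not the demo's actual code; the waypoint prefab and the exact namespace (it has moved between Unity versions) are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.WSA.Input; // GestureRecognizer (UnityEngine.VR.WSA.Input in older Unity builds)

// Hypothetical sketch: drop a waypoint marker wherever the user is gazing
// at the moment an air-tap is recognized.
public class WaypointPlacer : MonoBehaviour
{
    public GameObject waypointPrefab;   // assumed: any small marker prefab assigned in the Inspector
    private readonly List<GameObject> waypoints = new List<GameObject>();
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            // The tap event hands us the head ray at the moment of the tap; whatever
            // it hits (for example, the spatially mapped floor) becomes the next waypoint.
            RaycastHit hit;
            if (Physics.Raycast(headRay.origin, headRay.direction, out hit))
                waypoints.Add(Instantiate(waypointPrefab, hit.point, Quaternion.identity));
        };
        recognizer.StartCapturingGestures();
    }
}
```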

After running our highly animated holograms around for a bit, it was time to get down to the business of creating our own. The staff walked us through a number of programming exercises (in Unity and Visual Studio 2015) where we built our first holograms: some papier-mâché blocks, spheres, and airplanes. We added a holographic cursor (a red circle) that followed our gaze around the holograms and kept the correct orientation to each hologram's surface as it moved across the 3-dimensional terrain. We added gestures so we could "tap" on an object to make it drop, and voice recognition so we could issue spoken commands to perform various actions.
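To give a flavor of what those exercise scripts looked like, here is a rough approximation of the gaze cursor and voice-command pieces. It is not the academy's actual code: the cursor object, the "Hologram" tag, and the keyword phrases are all placeholders, and the air-tap handling would follow the same GestureRecognizer pattern as the waypoint sketch above.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;   // KeywordRecognizer

// Rough sketch of a gaze cursor that hugs hologram surfaces, plus simple voice commands.
public class GazeCursorAndVoice : MonoBehaviour
{
    public GameObject cursor;                 // assumed: the red-circle cursor object
    private KeywordRecognizer keywordRecognizer;

    void Start()
    {
        // Placeholder phrases; each recognized phrase is broadcast to every tagged hologram.
        keywordRecognizer = new KeywordRecognizer(new[] { "drop sphere", "reset world" });
        keywordRecognizer.OnPhraseRecognized += args =>
        {
            string message = args.text == "drop sphere" ? "OnDrop" : "OnReset";
            foreach (var hologram in GameObject.FindGameObjectsWithTag("Hologram"))
                hologram.SendMessage(message, SendMessageOptions.DontRequireReceiver);
        };
        keywordRecognizer.Start();
    }

    void Update()
    {
        // Gaze is simply a ray from the head (the main camera) straight forward.
        var head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit))
        {
            cursor.SetActive(true);
            cursor.transform.position = hit.point;
            // Rotate the cursor so it lies flat against whatever surface the gaze hits,
            // which keeps it oriented correctly as it slides across the 3D terrain.
            cursor.transform.rotation = Quaternion.FromToRotation(Vector3.up, hit.normal);
        }
        else
        {
            cursor.SetActive(false);
        }
    }
}
```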

Adding sound to our holograms added an entirely new dimension. As we moved away from our melodious hologram, the sound got quieter; as we moved closer, it got louder. Turning your head shifted the sound accordingly. You could literally close your eyes and still know where your hologram was based on sound alone.
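In Unity, this kind of positional audio mostly comes down to marking the hologram's AudioSource as fully spatialized (with a spatializer plugin selected in the project's audio settings). A minimal sketch, with rolloff distances that are just my guesses:

```csharp
using UnityEngine;

// Minimal sketch: make a hologram's audio fully 3D so it gets quieter with
// distance and pans as the listener's head turns. Assumes an AudioClip is
// assigned to the AudioSource in the Inspector.
[RequireComponent(typeof(AudioSource))]
public class HologramAudio : MonoBehaviour
{
    void Start()
    {
        var source = GetComponent<AudioSource>();
        source.spatialBlend = 1.0f;                         // fully 3D: position drives volume and panning
        source.spatialize = true;                           // hand the source to the spatializer plugin
        source.rolloffMode = AudioRolloffMode.Logarithmic;  // quieter as you walk away
        source.minDistance = 0.5f;                          // assumed values
        source.maxDistance = 20f;
        source.loop = true;
        source.Play();
    }
}
```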

We were able to place our holograms on real-world surfaces by turning on spatial mapping and having the HoloLens draw a wireframe mesh over the objects around the room. To wrap it all up, we had some fun by having the holographic spheres, when dropped, blow a hole open in the floor below us, where we could look down and see a snaking river, rocky terrain, and papier-mâché birds flying around an azure sky. For a really cool effect (whether intended or not), I realized that if I placed the surface of my hologram at eye level and then let the sphere blow a hole through it, I was left standing inside the underground cavern (because I was essentially below the hologram), and everywhere I looked I saw the cavernous world superimposed onto the room around me. Sweet!
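I don't recall exactly which spatial mapping pieces were prebuilt for us, but conceptually Unity exposes this through a SurfaceObserver: you watch a volume around the user, and as the device reports surfaces you bake their meshes into renderable (and collidable) GameObjects. Here is a hedged sketch of that loop; the wireframe material, volume size, and triangle density are assumptions:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.WSA;   // SurfaceObserver, WorldAnchor (UnityEngine.VR.WSA in older Unity builds)

// Hedged sketch: render the room's spatial mapping data as a wireframe mesh.
public class SpatialMappingWireframe : MonoBehaviour
{
    public Material wireframeMaterial;   // assumed: any wireframe shader/material
    private SurfaceObserver observer;
    private readonly Dictionary<SurfaceId, GameObject> surfaces =
        new Dictionary<SurfaceId, GameObject>();

    void Start()
    {
        observer = new SurfaceObserver();
        // Watch a 10 m cube centered on where the app started (assumed size).
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, Vector3.one * 10f);
        StartCoroutine(PollForChanges());
    }

    IEnumerator PollForChanges()
    {
        while (true)
        {
            observer.Update(OnSurfaceChanged);   // reports added/updated/removed surfaces
            yield return new WaitForSeconds(2f);
        }
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change, Bounds bounds,
                          System.DateTime updateTime)
    {
        if (change == SurfaceChange.Removed)
            return;

        GameObject surface;
        if (!surfaces.TryGetValue(id, out surface))
        {
            surface = new GameObject("Surface-" + id.handle);
            surface.AddComponent<MeshFilter>();
            surface.AddComponent<MeshRenderer>().sharedMaterial = wireframeMaterial;
            surface.AddComponent<MeshCollider>();
            surface.AddComponent<WorldAnchor>();
            surfaces[id] = surface;
        }

        // Ask the device to bake the latest mesh for this surface into our components.
        var data = new SurfaceData(id,
                                   surface.GetComponent<MeshFilter>(),
                                   surface.GetComponent<WorldAnchor>(),
                                   surface.GetComponent<MeshCollider>(),
                                   500f,     // triangles per cubic meter (assumed density)
                                   true);    // also bake the collider
        observer.RequestMeshAsync(data, OnMeshReady);
    }

    void OnMeshReady(SurfaceData data, bool outputWritten, float elapsedBakeTimeSeconds)
    {
        // The MeshFilter and MeshCollider are now populated; the wireframe
        // material takes care of the visual effect.
    }
}
```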

As cool as all this was (and it was cool!), there were still a few wrinkles yet to be worked out. Probably the biggest one for me was the field of view. Based on the conference demos, you might think you literally see holograms all around you. In practice, holograms are only displayed within a rectangular region directly in front of you. If you turn your head and then move your eyes to the side, you will not see your hologram; the HoloLens has to be physically pointing in the direction of the hologram for you to see it. Maybe this is something that will be improved in the future? I really have no idea.

I had many questions for my mentor, such as: How many sensors does the HoloLens have? How does it determine the distance from the HoloLens to a hologram? How does it sense my head turning in 3-D space? When, exactly, will it be released? What type of display does it have? What will the final cost be? Will there be more than one version (e.g., with more/fewer features)? And so on. Pretty much every off-script question I asked was met with the typical "I really have nothing to comment at this time." :-/

All that said, I am grateful that I had the opportunity to spend four hours programming for, and interacting with, the HoloLens. I am sure that all of the questions I've raised will be answered in the coming months. There are a lot of potential applications for a device like this; I firmly believe it's a game changer for this category of device, and I look forward to seeing where it shows up in everyday life.
