Quick Intro to Mixed Reality

Mixed Reality is a blend of physical and digital worlds, unlocking natural and intuitive 3D human, computer, and environmental interactions. While Augmented Reality focuses on enhancing the physical world and Virtual Reality immerses the user in a completely digital one, Mixed Reality promises the best of both. It is based on advancements in computer vision, graphical processing, display technologies, input systems, and cloud computing. If you are still curious about mixed reality, you can find out more here.

To explore this subject in more depth, we launched our Research and Development initiative in 2021! Using Microsoft’s HoloLens 2, we started our learning journey with a Proof of Technology – Assisted Indoor Navigation. We gained so many interesting insights along the way that we’re here to share the key findings. So, if you’re curious about our experience, keep on scrolling. You will find the features we managed to implement, the limitations and mitigations we observed throughout the HoloLens 2 research, the technical insights we gained, and the challenges we faced.

Why HoloLens 2?

HoloLens 2 is a cutting-edge device ahead of its time: it gives us a mixed reality option where we can see holograms in the real world around us without altering it, simply by wearing the device like a pair of glasses. It’s backed by a strong brand – Microsoft, one of the leading innovators in the mixed reality field. And, we have to admit, we really enjoyed our experience with it.

Proof of Technology – Assisted Indoor Navigation

We started a research project in which we developed a HoloLens 2 application that assists with navigation to indoor points of interest. We were curious about the potential results, so we thought it was the right place to start. To showcase the device’s capabilities, we built the application to be entirely offline and self-contained.

Implemented features

For the indoor navigation PoT that we developed, we explored the following features:

  • Hand-tracked main menu

When the user holds their left palm open, facing them, the menu appears near its top-right corner. By default, the menu’s position is tracked to the left hand. It can also be pinned in space to free the left hand; while pinned, it can be grabbed and moved by a handle in its corner. The user can press buttons and scroll intuitively, just as they would on a touchscreen device. The buttons at the bottom of the menu switch between tabs/pages.
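
For readers curious how such a palm-attached menu can be wired up, here is a minimal sketch using MRTK 2.x’s SolverHandler and HandConstraintPalmUp components. The class name is hypothetical, and the exact offsets, safe zone, and activation settings used in our PoT differ (most of them are tuned in the Unity editor rather than in code).

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

// Hypothetical setup script: attaches the menu object to the user's left palm.
public class HandMenuSetup : MonoBehaviour
{
    private void Awake()
    {
        // The SolverHandler tells MRTK solvers which transform to follow.
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.HandJoint;
        handler.TrackedHandedness = Handedness.Left;
        handler.TrackedHandJoint = TrackedHandJoint.Palm;

        // HandConstraintPalmUp shows and positions the menu only while the palm
        // is open and facing the user. Which side of the palm it sits on, plus
        // offsets and rotation, are tuned in the inspector.
        gameObject.AddComponent<HandConstraintPalmUp>();
    }
}
```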


  • Selecting an existing Point of Interest

The “POIs” tab contains the scrollable list of previously added POIs. Pressing on an item opens a submenu with options to toggle navigation to the POI, edit, or remove it. The submenu closes automatically when the hand is moved away.

  • Adding a new Point of Interest

The “Add” tab lets the user add several types of new POIs. A POI can be represented by just a name label or by an additional 3D object – for now an arrow, a cube, or a sphere. After being added, an item can be grabbed and moved in 3D space. It can be renamed by selecting it in the “POIs” tab and pressing the “Rename” button.
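
Making a freshly spawned POI grabbable is mostly a matter of adding the right MRTK components. Below is a simplified, hypothetical spawner (prefab and field names are illustrative), assuming MRTK 2.x’s ObjectManipulator and NearInteractionGrabbable:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Hypothetical POI spawner: instantiates a marker prefab (arrow/cube/sphere)
// half a meter in front of the user and makes it grabbable with MRTK.
public class PoiSpawner : MonoBehaviour
{
    [SerializeField] private GameObject markerPrefab; // e.g. the arrow, cube or sphere prefab

    public GameObject AddPoi(string poiName)
    {
        var cam = Camera.main.transform;
        var poi = Instantiate(markerPrefab, cam.position + cam.forward * 0.5f, Quaternion.identity);
        poi.name = poiName;

        // A collider is required for hand interactions.
        if (poi.GetComponent<Collider>() == null)
        {
            poi.AddComponent<BoxCollider>();
        }

        // MRTK components for near grabbing and far/near manipulation (grab and move).
        poi.AddComponent<NearInteractionGrabbable>();
        poi.AddComponent<ObjectManipulator>();
        return poi;
    }
}
```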


  • Navigating to a Point of Interest

Navigation directions to any POI can be toggled from the list of POIs. Turning navigation on enables several features:

  • A suggested path is highlighted on the floor by animated guide arrows. 
  • A larger 3D arrow in the center of the view shows the current direction to take and the distance to the POI (a minimal sketch of this follows the list).
  • The POI is highlighted with a colored outline.
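
The guide arrow itself comes down to simple transform math. The sketch below is hypothetical and simplified (component and field names are illustrative); it assumes the arrow and its distance label already float in front of the camera, for example via an MRTK solver:

```csharp
using TMPro;
using UnityEngine;

// Hypothetical guide arrow: each frame, point a 3D arrow at the active POI
// and show the remaining straight-line distance.
public class GuideArrow : MonoBehaviour
{
    [SerializeField] private Transform arrow;            // the 3D arrow model
    [SerializeField] private TextMeshPro distanceLabel;  // text shown under the arrow

    public Transform Target { get; set; }                // the active POI

    private void LateUpdate()
    {
        if (Target == null) { return; }

        var user = Camera.main.transform.position;
        var toTarget = Target.position - user;

        // Orient the arrow towards the POI and display the distance in meters.
        arrow.rotation = Quaternion.LookRotation(toTarget.normalized, Vector3.up);
        distanceLabel.text = $"{toTarget.magnitude:0.0} m";
    }
}
```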


  • Visualizing the indoor space in 3D

The space scanned by the HoloLens’ sensors can be visualized in 3D together with the POIs and active navigation paths. By default, it shows a top-down view that moves and rotates centered around the user, like a video game minimap. It can also be resized and dragged to show different parts of the building, which can be combined with pinning and moving the menu.


Limitations and Mitigations

Like all devices in the early stages of their life cycle, HoloLens 2 is not perfect and has room for improvement. We did our best to overcome the challenges with intuitive solutions or workarounds.

A big challenge for the app is understanding the environment we’re in so we can show it on the minimap and compute paths through it. For the current implementation, we rely on the 3D environment data built by the HoloLens’ sensors (camera and depth) and its software. This has 2 main limitations:

  • Limitation: App uses on-device 3D mapping data

When the app starts, it queries the previously collected 3D environment data. This means the device must have already mapped the building before the app is used, so that a complete representation of it can be built.

  • Solution: Map the space before using the app

Prompt or remind the user to map the space before using the app. Fortunately, this doesn’t require much interaction, since HoloLens 2 constantly updates the spatial mapping within a radius of a few meters around the user. This happens whether an app is open or not, so all the user has to do is walk around and look at all the surfaces inside the building. The current spatial mapping can be visualized by air-tapping anywhere when no app is open. Our app also allows visualizing and re-querying the spatial mapping, so it doesn’t have to be restarted to use newly collected data.
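
As a rough illustration of the re-querying step, the sketch below collects the spatial mesh surfaces currently known to MRTK’s spatial awareness system. It assumes the default MRTK mesh observer is enabled in the profile; the helper name is hypothetical.

```csharp
using System.Collections.Generic;
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;
using UnityEngine;

// Hypothetical helper: gathers the spatial mesh filters currently held by
// MRTK's spatial awareness mesh observers, so they can be re-processed
// (e.g. rebuilt into the minimap) without restarting the app.
public static class SpatialMapReader
{
    public static List<MeshFilter> GetCurrentSurfaces()
    {
        var surfaces = new List<MeshFilter>();

        var access = CoreServices.SpatialAwarenessSystem as IMixedRealityDataProviderAccess;
        if (access == null) { return surfaces; }

        foreach (var observer in access.GetDataProviders<IMixedRealitySpatialAwarenessMeshObserver>())
        {
            foreach (var meshObject in observer.Meshes.Values)
            {
                if (meshObject.Filter != null)
                {
                    surfaces.Add(meshObject.Filter);
                }
            }
        }
        return surfaces;
    }
}
```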

  • Limitation: Some walkable areas are not automatically detected

This is a limitation of the way spatial data is collected and processed by the Scene Understanding API. Two main cases were identified:

  • doors that were closed during spatial mapping are seen as walls
  • stairs are not detected as walkable areas

  • Solution: Have the user indicate paths through unknown walkable areas with 2 interactions

The user clicks a button before and after walking through a doorway or up a flight of stairs; the app then automatically determines the placement of the new path segment by aligning it with the floor. The functionality is currently provided by the PathHelper addable object.
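
The idea behind PathHelper can be sketched in a few lines. Everything below except the PathHelper name is hypothetical and simplified; in the PoT, the resulting segment would feed the path planner instead of only being drawn for debugging.

```csharp
using UnityEngine;

// Minimal sketch of the PathHelper idea: the user marks a start point and an
// end point (one button press each); the segment between them is projected
// onto the floor and treated as walkable. "floorY" is assumed to come from
// the detected floor surface.
public class PathHelper : MonoBehaviour
{
    [SerializeField] private float floorY;
    private Vector3? start;

    // Called by the "mark" button: the first press stores the start point,
    // the second press closes the segment.
    public void Mark()
    {
        Vector3 here = Camera.main.transform.position;
        here.y = floorY; // align with the floor

        if (start == null)
        {
            start = here;
        }
        else
        {
            CreateWalkableSegment(start.Value, here);
            start = null;
        }
    }

    private void CreateWalkableSegment(Vector3 from, Vector3 to)
    {
        // In the PoT this segment would be merged into the navigation graph;
        // here we only visualize it for illustration.
        Debug.DrawLine(from, to, Color.green, 60f);
    }
}
```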

Technical insights and challenges

Our team is developing the project using Unity 2021.2 (an engine for building real-time 3D projects across games, animation, and many other industries) and the Mixed Reality Toolkit (MRTK) 2.7.3 (a Microsoft-driven project providing a set of components and features that accelerate cross-platform MR app development in Unity).

Here are some insights from the implementation challenges that we’ve encountered:

1. Menu placement and opening

Initial implementations placed the menu directly over the user’s open left palm. Some problems were identified:

  • The menu occasionally froze in mid-air instead of following the left palm when the palm was covered by the right hand, because hand tracking was lost. The menu was moved to the right of the palm to reduce these occlusion issues.
  • The limited field of view of the HoloLens 2 required either holding the hand up or tilting the head down to see the menu, which would become uncomfortable with prolonged use. The menu was moved up and rotated so the hand can rest in a more comfortable position during use.

A toggle was also added to pin the menu’s position. In case the user wanders too far after pinning the menu, it can be brought back with a button that appears near the left hand.
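
As a minimal sketch of the pinning logic, assuming the menu follows the hand through an MRTK SolverHandler (as in the earlier hand-menu sketch), pinning can simply disable that solver; the recall button then repositions the pinned menu in front of the user. Names are illustrative.

```csharp
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

// Hypothetical pin/recall logic for the hand menu: pinning stops the menu
// from following the hand; recall brings the pinned menu back to the user.
public class MenuPinController : MonoBehaviour
{
    [SerializeField] private SolverHandler solverHandler; // drives hand tracking of the menu

    public bool IsPinned { get; private set; }

    // Bound to the pin toggle on the menu.
    public void TogglePin()
    {
        IsPinned = !IsPinned;
        solverHandler.enabled = !IsPinned; // stop following the hand while pinned
    }

    // Bound to the "bring back" button that appears near the left hand.
    public void RecallMenu()
    {
        var cam = Camera.main.transform;
        transform.position = cam.position + cam.forward * 0.5f;
        transform.rotation = Quaternion.LookRotation(cam.forward, Vector3.up);
    }
}
```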

2. Scroll functionality

MRTK provides 3D scrolling list functionality. We augmented it by writing a generic populator that can dynamically add items to or remove items from the list, and we implemented a familiar scroll bar that shows the current position whenever the list is scrolled.
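
A simplified, hypothetical version of that populator might look like the following, assuming MRTK 2.x’s ScrollingObjectCollection component; our actual populator is generic over item types and also drives the scroll bar.

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Simplified list populator: instantiates one item prefab per POI name and
// hands it to MRTK's ScrollingObjectCollection. Prefab/field names are illustrative.
public class PoiListPopulator : MonoBehaviour
{
    [SerializeField] private ScrollingObjectCollection scrollView;
    [SerializeField] private GameObject listItemPrefab;

    public GameObject AddItem(string poiName)
    {
        var item = Instantiate(listItemPrefab);
        item.name = poiName;

        scrollView.AddContent(item);   // parent the item into the scroll container
        scrollView.UpdateContent();    // recompute layout and clipping
        return item;
    }

    public void RemoveItem(GameObject item)
    {
        scrollView.RemoveItem(item);
        Destroy(item);
        scrollView.UpdateContent();
    }
}
```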

3. Saving/Loading POIs

The app automatically saves the location and info of placed POIs when they are created, so they can be loaded seamlessly on the next app run. This is done using persisted world anchors and metadata saved to persistent storage.

Anchoring

Anchoring is an AR concept: the device remembers an exact position and orientation relative to the real world using sensors such as cameras and depth sensors, preventing shifts caused by tracking drift. Persisting anchors means saving them so they can be reused across different app sessions or even devices.

Saving POIs

Each time a POI object is created or updated, 3 things happen in the background (see the sketch after this list):

  • An Anchor is created, locking the object’s position to the real world
  • The Anchor is persisted, so it can be loaded in the next app session
  • The object’s metadata is saved to the local storage
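
A minimal sketch of the saving path is shown below. It assumes AR Foundation’s ARAnchor component for anchoring and Unity’s JsonUtility for the metadata; the anchor persistence itself is hidden behind a hypothetical IAnchorPersistence interface, which in practice would be backed by the platform anchor store (for example, the Mixed Reality OpenXR plugin’s anchor store).

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Metadata stored next to each anchor.
[System.Serializable]
public class PoiData
{
    public string id;
    public string displayName;
    public string markerType; // "label", "arrow", "cube", "sphere"
}

// Hypothetical wrapper around the platform anchor store.
public interface IAnchorPersistence
{
    void Persist(ARAnchor anchor, string name);
}

public static class PoiSaver
{
    public static void Save(GameObject poiObject, PoiData data, IAnchorPersistence anchors)
    {
        // 1. Anchor the object to the real world.
        var anchor = poiObject.GetComponent<ARAnchor>();
        if (anchor == null)
        {
            anchor = poiObject.AddComponent<ARAnchor>();
        }

        // 2. Persist the anchor so it survives app restarts.
        anchors.Persist(anchor, data.id);

        // 3. Save the metadata to local storage.
        var path = Path.Combine(Application.persistentDataPath, data.id + ".json");
        File.WriteAllText(path, JsonUtility.ToJson(data));
    }
}
```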

Loading POIs

When the app is started, 3 things happen in order to load the previously saved objects (see the sketch after this list):

  • All persisted Anchors are loaded
  • All anchor metadata is loaded
  • For each Anchor, the relevant 3D object is instantiated and initialized based on the associated metadata, with optional specific initialization logic
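
And here is a matching, equally hypothetical sketch of the loading path, reusing the PoiData type and the anchor-store assumption from the saving sketch above:

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical wrapper around the platform anchor store: given a persisted
// anchor name, it recreates the ARAnchor at the remembered real-world pose.
public interface IAnchorLocator
{
    ARAnchor Load(string name);
}

public static class PoiLoader
{
    public static void LoadAll(IAnchorLocator anchors, GameObject markerPrefab)
    {
        // Load every saved metadata file (PoiData, as in the saving sketch)
        // together with its persisted anchor.
        foreach (var file in Directory.GetFiles(Application.persistentDataPath, "*.json"))
        {
            var data = JsonUtility.FromJson<PoiData>(File.ReadAllText(file));
            var anchor = anchors.Load(data.id);
            if (anchor == null) { continue; }

            // Instantiate the 3D object at the anchored pose and initialize it
            // from the metadata (marker type, display name, ...).
            var poi = Object.Instantiate(markerPrefab, anchor.transform);
            poi.name = data.displayName;
        }
    }
}
```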

4. 3D minimap

When the spatial mapping data is loaded, the surfaces detected as walls and floors are copied into a single mesh that is used as the minimap. This mesh is clipped using a ClippingBox component to fit the app menu.

The world position and Y rotation of the device are copied into the position and rotation of the minimap relative to the menu, to keep it centered around the user.
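
A hedged sketch of how this can fit together is shown below: the surfaces (for example, those returned by the SpatialMapReader sketch above) are merged into one mesh, registered with the ClippingBox, and re-centered on the user every frame. Class and field names are illustrative, and the exact sign conventions depend on how the minimap is parented under the menu.

```csharp
using System.Collections.Generic;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Hypothetical minimap: merges detected wall/floor surfaces into one mesh,
// clips it with MRTK's ClippingBox so it fits the menu, and keeps it centered
// on the user (position + Y rotation only).
public class Minimap : MonoBehaviour
{
    [SerializeField] private Transform mapContent;     // holds the combined mesh (MeshFilter + MeshRenderer)
    [SerializeField] private ClippingBox clippingBox;  // sized to fit the app menu
    [SerializeField] private float scale = 0.01f;      // world meters -> minimap meters

    public void Build(IEnumerable<MeshFilter> surfaces)
    {
        var combine = new List<CombineInstance>();
        foreach (var filter in surfaces)
        {
            combine.Add(new CombineInstance
            {
                mesh = filter.sharedMesh,
                transform = filter.transform.localToWorldMatrix
            });
        }

        var combined = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        combined.CombineMeshes(combine.ToArray());

        mapContent.GetComponent<MeshFilter>().sharedMesh = combined;
        clippingBox.AddRenderer(mapContent.GetComponent<MeshRenderer>());
    }

    private void LateUpdate()
    {
        // Undo the user's yaw and offset so their current location stays at the
        // minimap's pivot, with their heading along the map's forward axis.
        var cam = Camera.main.transform;
        var yaw = Quaternion.Euler(0f, -cam.rotation.eulerAngles.y, 0f);

        mapContent.localScale = Vector3.one * scale;
        mapContent.localRotation = yaw;
        mapContent.localPosition = yaw * (-cam.position * scale);
    }
}
```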

Possible commercial uses:

  • guided tours
  • data center navigation
  • warehouse navigation

Final Thoughts

Navigating through an office space is not difficult, but imagine a big warehouse, thousands of square meters with products stacked one on top of the other, or a hospital where new employees feel lost and cannot find what they are looking for. A few days with a HoloLens will feel like having your own guide, always available, always accurate. It can enhance our physical reality by adding layers of information to doors, rooms, products in your warehouse, or a complicated surgery room. The possibilities are infinite, and they are waiting for you to leverage HoloLens technology to make your work easier, faster, and more relevant.

Even though we ran into limitations and had to overcome them with workarounds, we enjoyed using and creating applications for HoloLens. Our R&D initiative will continue, as we see HoloLens as a device of the future: not yet perfect, but heading in the right direction. As the technology is relatively new and evolving fast, whoever adopts it early might gain a significant competitive advantage. Do you have an idea of where it can be used? Call us and let’s discuss it.

HoloLens 2 is one of the most capable mixed reality devices available, allowing the development of industry-leading solutions that deliver an immersive experience. It’s enhanced by the reliability, security, and scalability of Microsoft’s cloud and AI services, allowing integration with many of its products. It’s a device worth considering for growing your business or the knowledge in your IT department. All in all, we are looking forward to delivering many commercial projects to our clients.


If you want to continue reading on the topic, here are some useful resources:

https://docs.microsoft.com/en-us/hololens/

https://docs.microsoft.com/en-us/hololens/hololens2-hardware

https://www.microsoft.com/en-us/hololens

Article written by Mihai Zăvoian, with a little bit of help from Vasile Tomoioagă & Daniela Goadă
Edited by Ruxandra Mazilu