NAVVIS was awarded the TUM IdeAward for their innovative indoor positioning technology.
This annual competition is organized by Technische Universität München, UnternehmerTUM GmbH, and Zeidler-Forschungs-Stiftung.
The award aims to encourage researchers to set up their own innovative and competitive start-up enterprises.
The jury, consisting of Vice President Dr. Evelyn Ehrenberger (TUM), Dr. Helmut Schönenberger (UnternehmerTUM), Sylvia Philipp (Zeidler-Forschungs-Stiftung), Dr. Lothar Stein (former director of McKinsey Munich), and Prof. Fritz Frenkler (Institute for Industrial Design), ranked NAVVIS first among 51 highly competitive applications.
The awards ceremony took place in the TUM registration hall on February 28, 2013.
We are expanding the team. If you are interested in joining us, please contact us as soon as possible.
If you have ever wondered how we map the extensive indoor environments that can be browsed with the NAVVIS IndoorViewer, have a look at this short video showing our mapping trolley in action. Besides the laser scanners and wheel odometry used to determine the current location with an accuracy of approximately 1 cm, the trolley is equipped with a 360° panoramic camera and two high-resolution DSLRs to record images of the environment. Additional LED lighting avoids motion blur in the images, allowing us to record at walking speed; a track of about 1.2 km is mapped in about 2 hours. To capture the 3D geometry of indoor areas, a vertically mounted laser scanner incrementally builds a complete point cloud as we move through corridors and rooms.
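To illustrate the last step, the sketch below (Python with NumPy; the function, scan data, and poses are hypothetical, not our actual pipeline) places individual vertical 2D scans into a global frame using the trolley pose and stacks them into a point cloud:

```python
import numpy as np

def scan_to_cloud(ranges, angles, pose):
    """Place one vertical 2D laser scan into the global frame.

    ranges, angles: polar measurements of the vertical scanner
    pose: (x, y, heading) of the trolley in the map frame
    """
    x, y, heading = pose
    # Points in the scanner's vertical plane: lateral offset and height.
    lateral = np.asarray(ranges) * np.cos(angles)
    height = np.asarray(ranges) * np.sin(angles)
    # The scan plane is perpendicular to the direction of travel,
    # so the lateral axis points to the trolley's left.
    px = x - lateral * np.sin(heading)
    py = y + lateral * np.cos(heading)
    pz = height
    return np.column_stack([px, py, pz])

# Example: two synthetic scans taken while moving straight along +x.
angles = np.array([0.0, np.pi / 2])          # horizontal and straight up
scans = [np.array([2.0, 3.0]), np.array([2.0, 3.0])]
poses = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # 1 m apart, heading along +x

# Accumulate all scans along the trajectory into one point cloud.
cloud = np.vstack([scan_to_cloud(r, angles, p) for r, p in zip(scans, poses)])
```

In the real system the pose comes from laser-based localization and wheel odometry rather than being given directly, and the full trolley orientation (not just heading) is taken into account.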
For more detailed information, please have a look at the corresponding publication and the Download section. Our extensive indoor datasets, comprising several thousand high-resolution images with camera pose information as well as 2D grid maps and 3D point clouds, are available free of charge for download. The data may be used both commercially and non-commercially (attribution is required).
At ACM Multimedia 2012, we present a method that enables meter-accurate indoor positioning using visual information recorded by a smartphone’s camera. The position and orientation of the device are estimated by comparing the camera images to a database of previously computed virtual views. This comparison is carried out by an optimized image search engine and can be done within milliseconds. The paper covers the view generation process and explains how the virtual views are used for visual localization.
In a nutshell, the approach employs a novel combination of a content-based image retrieval engine and a method to generate virtual viewpoints for the reference database. In a preparation phase, virtual views are computed by transforming the viewpoint of images that were captured during the mapping run. The virtual views are represented by their respective bag-of-features vectors and image retrieval techniques are applied to determine the most likely pose of query images. As virtual image locations and orientations are decoupled from actual image locations, the system is able to work with sparse reference imagery and copes well with perspective distortion.
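As a rough illustration of the retrieval step, the following Python sketch compares normalized bag-of-features histograms by cosine similarity; the tiny vocabulary, word assignments, and pose labels are made up, and the actual system uses an optimized image search engine with a much larger vocabulary rather than this brute-force comparison:

```python
import numpy as np

def bof_vector(word_ids, vocab_size):
    """L2-normalized histogram of visual-word occurrences for one image."""
    v = np.bincount(word_ids, minlength=vocab_size).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Hypothetical tiny vocabulary of 8 visual words; each virtual view is
# represented by the IDs of the words its local features quantize to.
vocab_size = 8
virtual_views = {
    "pose_A": bof_vector(np.array([0, 1, 1, 3]), vocab_size),
    "pose_B": bof_vector(np.array([4, 5, 5, 7]), vocab_size),
}
query = bof_vector(np.array([1, 1, 3, 0]), vocab_size)

# Rank reference poses by cosine similarity to the query vector;
# the best-matching virtual view yields the most likely camera pose.
scores = {pose: float(v @ query) for pose, v in virtual_views.items()}
best = max(scores, key=scores.get)
```

In practice, inverted-file indexing and TF-IDF weighting make this comparison fast enough to run within milliseconds over many thousands of virtual views.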
The video shows our lab demonstrator running on an Android phone. By analyzing the camera images for distinctive visual features, the position and orientation of the smartphone are recognized and displayed on the map. This demonstration relies on visual information alone; no other localization sources are used to improve the vision-based results. For comparison, Android’s network-based position estimate (WiFi and cellular networks) is displayed as well.
At IPIN 2012, we present a visual odometry system for indoor navigation with a focus on long-term robustness and consistency. As our work targets mobile phones, we employ monocular SLAM to jointly estimate a local map and the device’s trajectory. We specifically address the problem of estimating the scale factor of both the map and the trajectory.
State-of-the-art solutions approach this problem with an Extended Kalman Filter (EKF), which estimates the scale by fusing inertial and visual data, but strongly relies on good initialization and takes time to converge. Each visual tracking failure introduces a new arbitrary scale factor, forcing the filter to re-converge.
We propose a fast and robust method for scale initialization that exploits basic geometric properties of the learned local map. Using random projections, we efficiently compute geometric properties from the feature point cloud produced by the visual SLAM system. From these properties (e.g., corridor width or height) we estimate scale changes caused by tracking failures and update the EKF accordingly. As a result, previously achieved convergence is preserved despite re-initializations of the map.
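The following Python sketch illustrates the idea under simplifying assumptions (a synthetic corridor-shaped feature map and a plain minimum-over-random-projections width heuristic; not our exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def min_extent(points_xy, dirs):
    """Smallest spread of a point cloud over a set of projection directions;
    for a corridor-like map this approximates its width (up to map scale)."""
    proj = points_xy @ dirs.T                      # (n_points, n_dirs)
    return (proj.max(axis=0) - proj.min(axis=0)).min()

# Random unit directions in the ground plane, shared between both maps.
thetas = rng.uniform(0.0, np.pi, 128)
dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

# Hypothetical feature map of a 2 m wide corridor, and the map rebuilt
# after a tracking failure at a new, arbitrary SLAM scale (here 0.5x).
map_before = rng.uniform([-1.0, -10.0], [1.0, 10.0], size=(500, 2))
map_after = 0.5 * map_before

# The ratio of apparent widths gives the relative scale change to feed
# into the EKF, so the previously converged metric scale is carried over.
scale_change = min_extent(map_after, dirs) / min_extent(map_before, dirs)
# scale_change == 0.5
```

In the real system the projected points come from the visual SLAM feature cloud, and the recovered scale change updates the EKF state instead of forcing it to re-converge from scratch.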
We evaluate our approach on extensive and diverse indoor datasets. The results demonstrate that errors and convergence times for scale estimation are considerably reduced, ensuring consistent and accurate scale estimation. This enables long-term odometry despite tracking failures, which are inevitable in realistic scenarios.
Thanks to the recent press release by TU Munich, NAVVIS has received considerable attention from the press over the last few days. We would like to thank TUM and all authors for recognizing and reporting on our indoor navigation system!
Below you will find a short selection of related articles:
As of today, we offer a PhD position in the NAVVIS team at the Institute for Media Technology at TU Munich to further strengthen the ongoing development of our visual indoor localization system. For a general description of the NAVVIS project, please have a look at the main page and the recent press release.
NAVVIS involves research in the areas of computer vision and pattern recognition, information theory, and machine learning. More specifically, we are working on content-based image retrieval, simultaneous localization and mapping (SLAM), and 3D reconstruction (point clouds, meshing, image-based rendering). Experience in one of these areas is advantageous; more important, however, is passion for the project, which could ultimately lead to a start-up. Since it does not require complex and expensive infrastructure, we believe that NAVVIS has the potential to be the enabling technology for indoor location-based services such as navigation, virtual tourist guides, mobile yellow pages, and many others. The market for indoor location-based services is estimated to be in the range of “several billion dollars”.
For applications and enquiries please contact:
Institute for Media Technology (Prof. Steinbach)
We just uploaded our VidSnaps and VidSnaps-Offtrack query images to the dataset page. We use these images to evaluate the quality of algorithms for visual localization. The 768 images in the VidSnaps dataset were recorded close to the mapping trajectory, i.e., there are reference images in the 2011-11-28 dataset with a similar perspective. The 252 images in the VidSnaps-Offtrack dataset, in contrast, were recorded a few meters away from the mapping trajectory, hence they have a different perspective than the most similar reference images. At each location, we took six images and manually annotated the ground-truth position.
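As an illustration of how these query images might be used, the following Python sketch (with made-up estimates and ground-truth coordinates) computes per-query position errors for a hypothetical localization algorithm:

```python
import numpy as np

def localization_errors(estimated, ground_truth):
    """Euclidean position error per query image, in map units (meters)."""
    return np.linalg.norm(np.asarray(estimated) - np.asarray(ground_truth), axis=1)

# Hypothetical estimates for three query images with known ground truth.
ground_truth = np.array([[10.0, 4.0], [12.5, 4.0], [15.0, 6.0]])
estimated    = np.array([[10.5, 4.0], [12.5, 5.0], [14.0, 6.0]])

errors = localization_errors(estimated, ground_truth)
print(f"mean error: {errors.mean():.2f} m, median: {np.median(errors):.2f} m")
# prints: mean error: 0.83 m, median: 1.00 m
```

Aggregating such errors over the full VidSnaps and VidSnaps-Offtrack sets allows comparing algorithms on queries near and away from the mapping trajectory.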
The NAVVIS Indoor Viewer has just been updated. The point cloud data as well as the high-resolution imagery recorded by the two DSLRs are now visualized. Please have a look at the Indoor Viewer menu to display the point cloud and adjust its properties.
There is also an animated tour through TU Munich. Feel free to modify the track of the tour by adding new keyframes!
The NAVVIS Indoor Viewer is now public; head over to the “Indoor Viewer” tab to try it out. The Indoor Viewer is a browser-based research tool for exploring the available TUMindoor datasets and evaluating feature extraction as well as localization results. Please contact us if you are interested in working with this viewer.