Sevensense Stories: the Hilti SLAM Challenge
With the Hilti SLAM Challenge, Hilti wants to push the frontiers of current navigation technology: the challenge puts state-of-the-art mapping and localization algorithms to the test and advances them to a state where they can be deployed in real environments.
Learn about the Hilti SLAM Challenge in an Interview with Michael Helmberger
The Hilti Corporation is a multinational company that develops, manufactures, and markets products for the construction, building maintenance, energy, and manufacturing industries. With the Hilti SLAM Challenge, the company wants to push the frontiers of current navigation technology by putting state-of-the-art mapping and localization algorithms to the test and advancing them to a state where they can be deployed in real environments. We at Sevensense believe in this goal and supported Hilti by providing our core research technology, which was used to record the challenge dataset.
In this exclusive interview with Michael Helmberger, we tell you all the details about this project:
Michael Helmberger - A Short Introduction
Michael joined Hilti in 2019 as a research engineer for visual computing in the corporate research department. His main research areas are mobile robotics and the fusion of different sensor modalities for accurate and robust navigation and mapping. Prior to Hilti, he was developing computer vision algorithms for high-accuracy reality capture platforms. Michael holds a Master’s degree in computer vision from Graz University of Technology.
Hilti recently announced a SLAM Challenge. What is that about?
Accurate and robust pose estimation is a fundamental capability for autonomous systems to navigate, map, and perform tasks. This task is especially difficult in construction environments because of sparsity, varying illumination conditions, and dynamic objects. With our challenge, we want to help advance current academic research in SLAM towards more accurate and robust algorithms that can eventually be deployed in real-world use cases.
Who are the ideal participants in the challenge, and why should they participate?
We encourage everybody in the robotics and visual computing community to download our dataset and participate. Our dataset offers data from visual, inertial, and lidar sensors, and we took special care with sensor synchronization and calibration. This is extremely important for algorithms that make use of multiple sensors at the same time. Additionally, each dataset includes accurate ground truth to allow direct testing of SLAM results. Once the challenge is over, we will release the ground truth for all datasets, in the hope of establishing a useful dataset for the community.
Of course, we also offer more tangible incentives: participants can win a total of 10,000 USD and can present their approach at Perception and Navigation for Autonomous Robotics at the IROS 2021 conference.
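A side note from us at Sevensense: to make the "direct testing of SLAM results" against ground truth concrete, here is a minimal Python sketch of the commonly used absolute trajectory error (ATE) metric, computed after a rigid alignment of the estimated trajectory to the ground truth. This is only an illustration under our own assumptions (time-associated position arrays, a Umeyama/Horn-style alignment without scale); the challenge's official evaluation may score submissions differently.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error (RMSE) after rigid alignment.

    est_xyz, gt_xyz: (N, 3) arrays of time-associated positions.
    """
    # Center both trajectories.
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    # Best-fit rotation from the SVD of the cross-covariance matrix (Kabsch/Umeyama).
    U, _, Vt = np.linalg.svd(E.T @ G)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # reflection correction
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    aligned = (R @ est_xyz.T).T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))

if __name__ == "__main__":
    # Toy example: a rigidly rotated and shifted copy of the ground truth aligns perfectly.
    gt = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0]])
    Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    est = (Rz @ gt.T).T + np.array([5.0, -3.0, 1.0])
    print("ATE RMSE:", ate_rmse(est, gt))  # ~0.0
```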
Why is Hilti interested in robotics and, particularly, in SLAM technology?
Robots on construction sites promise improved worker safety, higher task productivity, and high-quality data capture. Some dangerous tasks, like concrete chainsawing, are obvious candidates for automation, but it is the more repetitive and ergonomically difficult tasks, such as overhead drilling and installation, that result in many worker injuries. Construction robotics offers a way to remove these hazards from workers, while also improving task scheduling and progress monitoring. To achieve that goal, however, automation requires a wide array of technologies and techniques to perceive, map, and navigate the environment. This is where SLAM comes into play.
In your opinion, what is the role of Camera Vision in the future of SLAM?
Cameras are a critical component of a multi-sensor navigation and perception stack, and together with inertial measurement units, I think they will be the main sensors for SLAM. Another big advantage of cameras, apart from their price, is that they can be used not only for localization and mapping: the semantic information extracted from the images can also turn raw data into information for decision making.
The challenge’s datasets are recorded using Sevensense’s multi-camera technology. Why did you choose us?
Your hardware offers high-quality cameras and an IMU that are extremely well calibrated and synchronized. The custom exposure algorithm is great for scenarios with large changes in lighting conditions, where you would otherwise lose all features in visual SLAM. Also, the integration into our system was super easy with PTP and ROS.
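For readers curious what such an integration looks like in practice, here is a minimal, hypothetical ROS 1 sketch that subscribes to a camera stream and an IMU stream and prints their timestamps, which share a common clock when the sensors are PTP-synchronized. The topic names and queue sizes are our own assumptions for illustration, not the exact driver configuration used in the challenge setup.

```python
#!/usr/bin/env python
# Minimal sketch: checking hardware timestamps of PTP-synchronized camera and IMU streams in ROS 1.
import rospy
from sensor_msgs.msg import Image, Imu

def on_image(msg):
    # With PTP synchronization, header.stamp carries the sensor-side capture time.
    rospy.loginfo("image stamp: %.6f", msg.header.stamp.to_sec())

def on_imu(msg):
    rospy.loginfo("imu stamp:   %.6f", msg.header.stamp.to_sec())

if __name__ == "__main__":
    rospy.init_node("sync_check")
    # Topic names are illustrative assumptions.
    rospy.Subscriber("/alphasense/cam0/image_raw", Image, on_image, queue_size=10)
    rospy.Subscriber("/alphasense/imu", Imu, on_imu, queue_size=200)
    rospy.spin()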
If you had to describe Sevensense with 3 words, what would they be? And why?
I would describe you as passionate about your product, which has the potential to change our daily lives in the future. Also, customer-oriented comes to mind when thinking about our collaboration. Overall, I experience you as a very professional company with a clear goal and the right people to accomplish it.
Thanks to Michael Helmberger for this insightful and interesting conversation. We are glad and proud to be able to contribute to their project and are looking forward to seeing the results!