This talk gives a glimpse of recent developments in computer vision algorithms for simultaneous localization and mapping (SLAM). In particular, it introduces direct methods for visual SLAM. In contrast to classical keypoint-based methods, direct methods exploit all available brightness information. As a consequence, they provide drastic improvements in precision and robustness.
Furthermore, it will be demonstrated how one can further boost precision and robustness by fusing visual information with inertial and GPS measurements. Ultimately, this leads to a system for real-time localization and mapping with unprecedented precision and robustness that can be deployed in a multitude of autonomous systems, ranging from robots and drones to self-driving cars.
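The core idea behind direct methods, minimizing a photometric error over raw pixel intensities rather than matching sparse keypoints, can be sketched as follows. This is a toy illustration only: a brute-force search over small integer translations stands in for the Gauss-Newton optimization over full 6-DoF camera poses used in actual direct SLAM systems, and all function names are illustrative.

```python
import numpy as np

def photometric_error(img_ref, img_cur, shift):
    """Mean squared brightness difference for an integer-pixel shift.

    Direct methods compare raw intensities at every pixel instead of
    matching a sparse set of detected keypoints.
    """
    h, w = img_ref.shape
    dy, dx = shift
    # Crop both images to the region that overlaps under the shift.
    ref = img_ref[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    cur = img_cur[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    r = ref - cur
    return float(np.sum(r * r)) / r.size

def align_translation(img_ref, img_cur, search=3):
    """Pick the translation minimizing the photometric error
    (toy stand-in for iterative pose optimization)."""
    candidates = ((dy, dx)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1))
    return min(candidates,
               key=lambda s: photometric_error(img_ref, img_cur, s))

# Synthetic example: a random texture shifted by (1, 2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))
print(align_translation(ref, cur))  # recovers (1, 2)
```

Because every pixel contributes to the error, such alignment remains informative in weakly textured regions where keypoint detectors find few features, which is one source of the robustness gains the talk describes.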