SLAM is a complicated process because localization requires a map, while mapping requires a good position estimate. This "chicken or egg" problem was long considered a fundamental obstacle to robot autonomy, but breakthrough research in the 1980s and mid-1990s resolved SLAM conceptually and theoretically. Since then, a variety of SLAM methods have been developed, most of which are probabilistic.
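To illustrate what "the concept of probability" means here, the following is a minimal sketch of a 1D discrete Bayes filter, the building block behind many probabilistic localization and SLAM methods. The world map, sensor model, and probabilities are illustrative assumptions, not taken from any specific system.

```python
def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def sense(belief, world, measurement, p_hit=0.8, p_miss=0.2):
    """Measurement update: weight each cell by how well it explains the reading."""
    posterior = []
    for cell, prior in zip(world, belief):
        likelihood = p_hit if cell == measurement else p_miss
        posterior.append(prior * likelihood)
    return normalize(posterior)

def move(belief, step):
    """Motion update: shift the belief to model robot motion (exact, for brevity)."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

# Assumed map: which of 5 cells contain a door.
world = ['door', 'wall', 'door', 'wall', 'wall']
belief = [0.2] * 5                       # uniform prior: position unknown
belief = sense(belief, world, 'door')    # robot's sensor reports a door
belief = move(belief, 1)                 # robot moves one cell to the right
belief = sense(belief, world, 'wall')    # sensor now reports a wall
print([round(b, 3) for b in belief])     # belief concentrates on consistent cells
```

After two measurements and one motion step, the probability mass concentrates on the cells consistent with the observation sequence; this interplay of motion and measurement updates is exactly why mapping and localization depend on each other.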
SLAM and sensor fusion
To perform SLAM more accurately, sensor fusion comes into play. Sensor fusion is the process of combining data from multiple sensors and databases to obtain improved information. It is a multi-level process that deals with the correlation, relevance, and combination of data. Compared with a single data source, the fused result can be cheaper, of higher quality, or more relevant.
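As a minimal sketch of this idea, the snippet below fuses two noisy measurements of the same range by inverse-variance weighting, the same rule a Kalman filter update applies. The sensor names and noise values are illustrative assumptions.

```python
def fuse(z1, var1, z2, var2):
    """Fuse two Gaussian estimates; the less noisy sensor gets more weight."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    mean = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # fused variance is smaller than either input's
    return mean, var

# Hypothetical readings of the same obstacle: lidar precise, ultrasonic noisy.
lidar_range, lidar_var = 10.2, 0.04
sonar_range, sonar_var = 10.9, 0.36

mean, var = fuse(lidar_range, lidar_var, sonar_range, sonar_var)
print(mean, var)
```

The fused estimate lands between the two readings, closer to the more trustworthy sensor, and its variance is lower than either input's, which is the "higher quality than a single source" claim in concrete form.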
Two different AI approaches are commonly used for all the processing and decision-making between sensor data and motion:
1. Sequentially decompose the driving task into the stages of a layered pipeline. Each step (sensing, localization, path planning, motion control) is handled by a dedicated software component, and each component of the pipeline feeds its output to the next;
2. An end-to-end solution based on deep learning that is responsible for all of these functions.
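The first approach can be sketched as a chain of small components, each feeding the next. Everything here is an illustrative placeholder (stage logic, data shapes, thresholds), not a real AV stack:

```python
def sensing(raw_ranges):
    """Sensing: turn raw range readings into detected obstacles."""
    return {"obstacles": [r for r in raw_ranges if r < 20.0]}

def localization(scene):
    """Localization: attach a (stubbed) pose estimate to the scene."""
    return {**scene, "pose": (0.0, 0.0)}

def path_planning(state):
    """Path planning: steer away if any obstacle is close."""
    nearest = min(state["obstacles"], default=float("inf"))
    return {**state, "steer": 0.3 if nearest < 5.0 else 0.0}

def motion_control(plan):
    """Motion control: emit the final actuator command."""
    return {"steering_angle": plan["steer"], "throttle": 0.5}

def drive(raw_ranges):
    # Each component of the pipeline feeds its output to the next.
    state = raw_ranges
    for stage in (sensing, localization, path_planning, motion_control):
        state = stage(state)
    return state

print(drive([3.2, 18.0, 42.0]))  # a close obstacle triggers a steering command
```

The appeal of this decomposition is that each stage can be developed, tested, and replaced independently; the cost is that errors propagate through hand-designed interfaces between stages.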
Which method is best for autonomous vehicles (AVs) remains an area of constant debate. The traditional and most common approach decomposes the autonomous driving problem into multiple sub-problems and solves each in turn with a dedicated machine learning technique. These techniques span computer vision, sensor fusion, localization, control theory, and path planning.
End-to-end (e2e) learning, by contrast, applies iterative learning to the entire complex system at once and has gained popularity in the context of deep learning. Because it sidesteps the challenges of engineering and integrating many specialized sub-systems, it has attracted growing attention as a solution for autonomous vehicles.
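The following toy sketch shows the end-to-end idea at miniature scale: a single model is trained directly from "sensor features" to a steering command, with no hand-built sub-modules in between. A real system would use a deep network on camera images; here a linear model, synthetic data, and plain gradient descent keep the example self-contained. All names and numbers are assumptions for illustration.

```python
import random

random.seed(0)

# Hypothetical training data: 3 sensor features -> steering angle.
# The underlying rule (unknown to the model) is steer = 0.5*f0 - 0.2*f2.
data = []
for _ in range(200):
    f = [random.uniform(-1, 1) for _ in range(3)]
    data.append((f, 0.5 * f[0] - 0.2 * f[2]))

w = [0.0, 0.0, 0.0]   # model weights, learned end to end
lr = 0.1
for epoch in range(50):
    for features, target in data:
        pred = sum(wi * fi for wi, fi in zip(w, features))
        err = pred - target
        # Gradient step on squared error: the whole system is optimized
        # against the driving label alone, with no intermediate stages.
        w = [wi - lr * err * fi for wi, fi in zip(w, features)]

print([round(wi, 2) for wi in w])  # weights approach the underlying rule
```

The single loss on the final driving output drives all learning, which is the defining property of e2e training; the flip side, at full scale, is that the learned mapping is much harder to inspect than the explicit stages of a layered pipeline.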