Currently, different types of machine learning algorithms are used for different applications in autonomous vehicles. Essentially, machine learning maps a set of inputs to a set of outputs based on the set of training data provided.
Three deep learning methods are commonly used:
1. Convolutional Neural Network (CNN);
2. Recurrent Neural Network (RNN);
3. Deep Reinforcement Learning (DRL).
CNN: the most common deep learning method applied to autonomous driving. CNNs mainly process images and spatial information to extract features of interest and identify objects in the environment. These networks are built from convolutional layers: sets of convolutional filters that each respond to distinct elements of the image or input data. The output of the convolutional layers is fed into further layers that combine the detected features to predict the best description of the image. The final software component is often called an object classifier, because it can classify objects in the image, such as street signs or other cars.
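To make the idea of a convolutional filter concrete, here is a minimal sketch in plain numpy. The `conv2d` function and the hand-written edge kernel are illustrative assumptions, not part of any real perception stack; a trained CNN would learn many such filters automatically.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter: the kind of low-level feature a CNN's first
# convolutional layer typically learns on its own.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy "image": dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

features = conv2d(image, edge_kernel)
# The filter activates only where the brightness changes, i.e. at the edge.
```

Stacking many such filters, followed by layers that combine their responses, is what lets the network move from raw pixels to labels like "street sign" or "car".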
RNN: a powerful tool for processing temporal information such as video. In these networks, the output of the previous step is fed back into the network as an input, so that information can persist and each new step is processed in the context of what came before.
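The feedback loop can be sketched in a few lines of numpy. This is a bare single-layer recurrence with made-up dimensions, purely to show how the hidden state carries context from frame to frame:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state mixes the current input
    with the previous hidden state, so context persists across time."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# A short "video": 5 frames, each summarised as a 4-number feature vector.
sequence = rng.normal(size=(5, input_dim))

h = np.zeros(hidden_dim)
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
# h now summarises the whole sequence, not just the final frame.
```

Practical systems usually use gated variants (LSTM, GRU) of this basic recurrence, which keep context over much longer sequences.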
DRL: a combination of deep learning (DL) and reinforcement learning. DRL enables software-defined "agents" to use rewards to learn the best actions to take in a virtual environment in order to achieve their goals. These goal-oriented algorithms learn how to reach an objective, or how to maximize a reward over many steps. Despite its broad promise, the main challenge for DRL in driving is designing the correct reward function for the vehicle; in self-driving cars, deep reinforcement learning is still considered to be in its early stages.
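The reward-driven learning loop can be illustrated with tabular Q-learning on a toy problem. This is only a sketch: the environment (a five-cell "road"), the reward, and all hyperparameters are invented for illustration, and in genuine DRL a deep network replaces the Q-table.

```python
import numpy as np

# Toy 1-D "road" with 5 cells; reaching the last cell earns reward +1.
# Actions: 0 = move left, 1 = move right.
n_states, n_actions = 5, 2
goal = n_states - 1

Q = np.zeros((n_states, n_actions))   # value table (a deep net in real DRL)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(42)

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly take the best-known action, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = max(0, min(goal, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: nudge the value toward reward + discounted future value.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

policy = np.argmax(Q, axis=1)  # best action per state after training
```

After training, the learned policy moves right in every non-goal state. The hard part in driving, as noted above, is that real reward functions must encode safety, comfort, and progress at once, which is far less obvious than "reach the last cell".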
These methods do not necessarily exist in isolation. Companies such as Tesla rely on hybrid approaches, combining multiple methods to improve accuracy and reduce computing requirements.
Training a network on multiple tasks at once is common practice in deep learning and is often referred to as multi-task training or auxiliary task training. One motivation is to avoid overfitting, a common problem with neural networks: an algorithm trained for a single task can become so focused on imitating its training data that its output becomes useless when it has to interpolate or extrapolate. By training on multiple tasks, the core of the network is pushed toward learning general features that are useful for all of them, rather than features specific to one task. This tends to make the output more robust and useful for the application.
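Structurally, multi-task training usually means a shared "trunk" with one small "head" per task. The sketch below is an assumed toy architecture (forward pass only, random weights, hypothetical steering and classification heads), just to show where the sharing happens:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared "trunk": one set of weights reused by every task.
W_shared = rng.normal(scale=0.1, size=(8, 16))

# Task-specific "heads": e.g. one for steering angle, one for object class.
W_steer = rng.normal(scale=0.1, size=(16, 1))
W_class = rng.normal(scale=0.1, size=(16, 3))

def forward(x):
    features = np.maximum(0.0, x @ W_shared)   # ReLU trunk shared by all tasks
    steering = features @ W_steer              # regression head
    logits = features @ W_class                # classification head
    return steering, logits

x = rng.normal(size=(2, 8))    # a batch of two input feature vectors
steering, logits = forward(x)
# During training, gradients from BOTH heads would flow into W_shared,
# pushing the trunk toward features that serve every task.
```

Because every task's loss shapes the same trunk, the shared features cannot overfit to the quirks of any single task, which is exactly the regularizing effect described above.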