
Autonomous vehicles are important because they promise improved safety and fuel efficiency. Most prior techniques address only a single task, for example recognition, prediction, or planning, using supervised learning. Previous studies have several limitations: (1) human bias introduced by learning from human demonstrations; (2) the need for multiple components, such as localization and road mapping, combined through complicated fusion logic; and (3) in reinforcement learning, attention has focused mostly on the learning algorithms and less on evaluating different sensors and reward functions. We describe an end-to-end reinforcement learning approach for an autonomous car that uses only a single reinforcement learning model. Further, we design a new, efficient reward function that makes the agent learn faster (an 18% improvement over the baseline reward function across all settings) and build the car with only the necessary perception and sensors. We show that the approach performs better with state-of-the-art off-policy reinforcement learning algorithms for continuous action spaces (SAC and TD3).
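To make the setup concrete, below is a minimal sketch (not the authors' code) of training an off-policy continuous-control agent such as SAC on a driving environment with a shaped reward. The environment id `CarRacing-v2`, the wrapper, and the shaping weight are illustrative assumptions; the paper's actual simulator, sensors, and reward function are not reproduced here.

```python
# Hedged sketch: SAC on a continuous-action driving task with a
# hypothetical reward-shaping wrapper (assumptions, not the paper's setup).
import gymnasium as gym
import numpy as np
from stable_baselines3 import SAC


class ShapedRewardWrapper(gym.Wrapper):
    """Illustrative reward shaping: keep the environment reward and add a
    small penalty so the agent learns smoother control. The exact reward
    function proposed in the paper is not shown here."""

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Example shaping term (assumption): penalize large steering inputs.
        steering_penalty = 0.01 * float(np.abs(action[0]))
        return obs, reward - steering_penalty, terminated, truncated, info


env = ShapedRewardWrapper(gym.make("CarRacing-v2", continuous=True))
model = SAC("CnnPolicy", env, buffer_size=50_000, verbose=1)
model.learn(total_timesteps=50_000)
```

TD3 could be substituted for SAC with the same interface, which is one way to compare the two off-policy algorithms under a fixed reward function.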
