Decision making in autonomous driving is traditionally based on handcrafted, rule-based
approaches. These approaches are fully transparent, meaning it is always possible to trace back
the decision-making process and investigate the causes of a given choice. However, rule-based
systems generally struggle with complex situations and corner cases. Reinforcement
learning is a learning paradigm based on interaction with the environment, whose progress is guided
by a reward function: a high-level specification of the behavior the agent must
learn. This can potentially handle the complexity of many driving scenarios, but, to date,
the best-performing reinforcement learning algorithms are not transparent.
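As a toy illustration of this paradigm (not taken from this work), the sketch below runs tabular Q-learning on a hypothetical one-dimensional corridor: the reward function alone specifies the target behavior (reach the goal cell), and the agent discovers the policy through interaction.

```python
import random

# Hypothetical corridor environment: agent starts at cell 0, goal is cell 4.
# The reward function is the only specification of the desired behavior:
# +1 for reaching the goal, 0 otherwise.
N_STATES = 5
ACTIONS = [-1, +1]  # move left / move right
GOAL = N_STATES - 1

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: Q[s][a] estimates the return of taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for _ in range(500):                      # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Update toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy moves right (action index 1) in every non-goal state.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy[:GOAL])  # expected: [1, 1, 1, 1]
```

The reward signal never states *how* to drive toward the goal; the behavior emerges from the updates, which is precisely the generality (and, in deep variants, the opacity) discussed above.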
We combine the strengths of these two techniques, pairing the generality of reinforcement
learning with the transparency of rule-based approaches in a decision-making algorithm tested
extensively in simulation as well as in real-world scenarios.
Decision Making Designer