Improvement of the AEB activation algorithm based on the pedestrian reaction


This work is an initial activity of the OPREVU project (RTI2018-096617-B-100). The project, funded by the Spanish Ministry of Science and Innovation and by the Community of Madrid (S2018/EMT-4362 SEGVAUTO-4.0-CM), applies Virtual Reality (VR) techniques to determine the behavior patterns of Vulnerable Road Users (VRU) in pedestrian collisions, in order to optimize the Autonomous Emergency Braking (AEB) systems incorporated in new generations of commercial vehicles.

To develop and optimize current pedestrian identification systems, vehicle tests are performed on the INSIA-UPM test track with different vehicles to analyze the behavior of their AEB systems. These systems are equipped with a LiDAR and a camera, whose joint operation detects the proximity of the pedestrian and provides the variables of interest for assessing the automatic intervention of the braking system. The tests are inspired by the CPNA50 and CPNA25 scenarios used by Euro NCAP to validate and certify AEB systems.

The reference variables are the Time-to-Collision (TTC) and the TFCW time. The Forward Collision Warning (FCW) is a visual and acoustic signal, shown as a warning light or digitally on the instrument panel, that warns of an obstacle in the vehicle's path. TFCW is the sum of the driver's average reaction time and the time needed to come to a full stop once the driver presses the brake pedal. TTC, in turn, is computed from the distance and relative speed of the vehicle with respect to the pedestrian. If the TTC falls below TFCW, the system intervenes.

By means of the CARSIM© simulation tool (vehicle-pedestrian-road), it is possible to modify certain boundary parameters, such as the initial conditions of motion of the pedestrian and the vehicle, as well as their initial relative positions at the start of each test.
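The activation rule described above (intervene when TTC < TFCW) can be sketched in a few lines. This is a minimal illustration, not the algorithm implemented in the paper: the function names, the default reaction time, and the constant-deceleration braking model are assumptions, and the closing speed is taken as the vehicle speed, which holds for a pedestrian crossing laterally.

```python
def time_to_collision(rel_distance_m: float, closing_speed_ms: float) -> float:
    """TTC: time until impact from the current relative distance and closing speed."""
    if closing_speed_ms <= 0.0:
        return float("inf")  # vehicle is not closing on the pedestrian
    return rel_distance_m / closing_speed_ms


def t_fcw(speed_ms: float, reaction_time_s: float = 1.0,
          braking_decel_ms2: float = 7.0) -> float:
    """TFCW: driver's average reaction time plus the time to a full stop,
    assuming constant deceleration once the brake pedal is pressed
    (reaction time and deceleration values are illustrative defaults)."""
    return reaction_time_s + speed_ms / braking_decel_ms2


def aeb_should_intervene(rel_distance_m: float, speed_ms: float) -> bool:
    """The system intervenes when TTC drops below TFCW."""
    return time_to_collision(rel_distance_m, speed_ms) < t_fcw(speed_ms)
```

For example, at 10 m/s a pedestrian 50 m ahead gives TTC = 5 s, above TFCW ≈ 2.43 s, so no intervention; at 20 m the TTC of 2 s falls below TFCW and the AEB would trigger.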
Along these lines, the virtual model incorporates reaction patterns for the pedestrian, such as stopping, running, or changing direction while crossing the road. These reaction patterns are defined by means of VR tests. The CARSIM© vehicle model integrates the fusion of camera and LiDAR data and an operating algorithm that controls AEB activation. It is thus possible to build vehicle models based on behavioral patterns derived from the values obtained in the real tests, and to correlate the corresponding TTC and TFCW values with parameters measured by the calibration equipment, such as the maximum speed and the time at which it is reached, the initial relative distance and the relative distance at the moment of AEB activation, or the average deceleration from the start of braking, among others. Hence, the data obtained in the INSIA-UPM track tests allow the virtual models to be validated. Furthermore, the novelty of the approach lies in considering the pedestrian's reactions just before the collision, extracted through user experiments conducted in VR.
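The reaction patterns mentioned above (stop, run, change of direction while crossing) can be sketched as a simple kinematic model of the pedestrian's lateral position. This is a hypothetical illustration of how such patterns might be parameterized for simulation, not the VR-derived models used in the project; the speeds, reaction time, and pattern names are assumptions.

```python
def pedestrian_lateral_position(t: float, walk_speed: float = 1.5,
                                reaction: str = "keep", t_react: float = 1.0,
                                run_speed: float = 3.0) -> float:
    """Lateral crossing position (meters from the curb) at time t for a simple
    reaction pattern triggered at t_react seconds:
      'keep' - continue walking at walk_speed
      'stop' - freeze at the position reached at t_react
      'run'  - accelerate instantly to run_speed after t_react
    All parameter values are illustrative placeholders."""
    if reaction == "keep" or t <= t_react:
        return walk_speed * t
    reached = walk_speed * t_react  # position when the reaction occurs
    if reaction == "stop":
        return reached
    if reaction == "run":
        return reached + run_speed * (t - t_react)
    raise ValueError(f"unknown reaction pattern: {reaction}")
```

Feeding such trajectories into the vehicle model changes where the pedestrian is at the moment the AEB decision is taken, which is exactly why reaction-dependent patterns affect the optimal activation timing.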

Published in: 
Proceedings of the FISITA 2021 World Congress
Publication date: 
September 2021
Other Authors: 
Ángel Losada Arias; Javier Páez Ayuso; Juan José Herrero Villamor