A Benchmark for V2V Interaction and Cooperative Perception in CARLA
The way we train and evaluate self-driving systems today relies on closed-loop
benchmarks: settings in which the environment continuously reacts to the agent's
behavior, requiring real-time decision-making. Existing benchmarks such
as Bench2Drive or NAVSIM focus primarily on single-vehicle driving, even when they
explore highly challenging or unrealistic scenarios. However, modern autonomous
driving increasingly depends on cooperative perception, where multiple vehicles share
information via V2V communication to improve safety and awareness.
This thesis shifts the focus from single-agent evaluation to multi-vehicle interaction
scenarios. The goal is to design and implement a closed-loop V2V benchmark in
CARLA that captures complex, real-world situations where cooperation is essential,
such as emergency vehicle handling, roundabouts, occlusions at intersections, and
dense traffic negotiation. These scenarios will require the student to define meaningful
use cases where collaboration between vehicles provides a measurable advantage.
In a second stage, the benchmark will be used to evaluate existing perception and
driving models, establishing baseline performance and enabling systematic
comparison between individual and cooperative approaches.
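To make the individual-vs-cooperative comparison concrete, a minimal sketch of late-fusion cooperative perception is shown below. All names (`Detection`, `fuse_detections`, the merge radius) are hypothetical illustrations, not part of the CARLA API or of any model evaluated in the thesis; the sketch only shows why a detection visible to a transmitting vehicle but occluded for the ego vehicle survives fusion.

```python
# Illustrative late-fusion sketch (hypothetical names, not CARLA API):
# each vehicle holds detections in a shared world frame; a naive cooperative
# step merges them, deduplicating near-identical detections by distance.
from dataclasses import dataclass
from math import dist

@dataclass
class Detection:
    x: float      # world-frame position (m)
    y: float
    score: float  # detector confidence in [0, 1]

def fuse_detections(local, received, merge_radius=2.0):
    """Late fusion: keep all local detections, then add received ones that
    are not within merge_radius of a detection already in the fused set."""
    fused = list(local)
    for det in received:
        if all(dist((det.x, det.y), (f.x, f.y)) > merge_radius for f in fused):
            fused.append(det)
    return fused

# The far object, occluded for the ego vehicle, is only recoverable via V2V:
ego = [Detection(10.0, 0.0, 0.9)]
other = [Detection(10.5, 0.3, 0.8), Detection(40.0, 5.0, 0.7)]
print(len(fuse_detections(ego, other)))  # 2: the near-duplicate is merged
```

Real cooperative-perception baselines fuse earlier (features or raw point clouds), but even this late-fusion toy makes the benchmark's core question measurable: how many task-relevant objects does the ego agent see with and without communication?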
Kind of Work
The student will:
- Design a suite of closed-loop V2V interaction scenarios in CARLA, moving
  beyond single-vehicle benchmarks.
- Implement multi-agent simulations where vehicles must react to each other and
  the environment dynamically.
- Define and model cooperative perception use cases, such as:
  o emergency vehicle right-of-way,
  o roundabout coordination,
  o occluded object detection,
  o long-range awareness via shared information.
- Develop evaluation protocols and metrics for:
  o task success,
  o safety and collision avoidance,
  o reaction to other agents,
  o benefits of cooperation vs. single-agent perception.
- Integrate and run existing autonomous driving / perception models within the
  benchmark.
- Produce baseline results comparing:
  o individual perception vs. cooperative perception,
  o performance under varying communication assumptions.
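The evaluation protocol above can be sketched as an episode-level aggregator. The record fields (`success`, `collisions`, message counters) and the derived metrics are assumptions chosen for illustration, not an existing benchmark API; the actual metric set is part of the thesis work.

```python
# Hypothetical episode-level metrics aggregator for the benchmark;
# field and metric names are illustrative assumptions, not an existing API.
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    success: bool           # task completed (e.g., cleared the roundabout)
    collisions: int         # collision events logged by the simulator
    messages_sent: int      # V2V messages transmitted
    messages_received: int  # V2V messages actually delivered

def aggregate(results):
    """Reduce per-episode logs to benchmark-level metrics."""
    n = len(results)
    return {
        "success_rate": sum(r.success for r in results) / n,
        "collision_free_rate": sum(r.collisions == 0 for r in results) / n,
        # Delivery ratio captures the communication assumption actually
        # realized in the run (packet drops, range limits, ...).
        "delivery_ratio": (
            sum(r.messages_received for r in results)
            / max(1, sum(r.messages_sent for r in results))
        ),
    }

episodes = [
    EpisodeResult(True, 0, 100, 95),
    EpisodeResult(True, 1, 100, 80),
    EpisodeResult(False, 2, 100, 40),
]
m = aggregate(episodes)
print(m)
```

Running the same aggregator over individual-perception and cooperative runs, while sweeping the delivery ratio, would yield exactly the "cooperation vs. single-agent" and "varying communication assumptions" comparisons listed above.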
Framework of the Thesis
Expected Outcome
- A novel benchmark suite for V2V interaction scenarios in CARLA.
- A set of realistic, reproducible cooperative driving tasks.
- Baseline evaluations of existing models in multi-agent settings.
- Insights into when and how cooperative perception improves performance.
Expected Student Profile
- Strong background in Computer Vision and/or Robotics.
- Experience with Python; familiarity with simulation tools is beneficial.
- Interest in autonomous driving, multi-agent systems, and 3D perception.
- Bonus: prior exposure to CARLA, V2V/V2X systems, or autonomous driving.