Developing Cooperative Perception Systems for Autonomous Driving under Realistic V2X Communication Constraints
Context
Cooperative perception is a key enabler of next-generation autonomous driving systems,
allowing multiple vehicles to share information and jointly understand their environment. By
exchanging sensor data or learned features, vehicles can see beyond line-of-sight, improving
safety in complex urban scenarios with occlusions, dynamic objects, and limited visibility.
Most existing research in cooperative perception assumes ideal communication between
vehicles: instantaneous, reliable, and high-bandwidth data exchange. In real-world
deployments, however, communication is far from perfect. Latency, packet loss, bandwidth
limitations, and transmission errors can significantly affect how well vehicles collaborate and
make decisions.
At the same time, recent advances in deep learning have made cooperative perception models
increasingly powerful and flexible. This opens an exciting research question: how do
communication constraints shape the design of next-generation perception systems in
autonomous driving, and how can autonomous vehicles adapt to imperfect communication
conditions?
Objective
This thesis aims to build a realistic, learning-aware communication pipeline and to integrate it
into the CARLA autonomous driving simulator. This integration creates a strong testbed for the
cooperative perception community and the autonomous driving industry to develop safer
next-generation perception systems that are robust to communication imperfections.
Framework of the Thesis
Description of Work
The thesis will be structured into the following steps:
Literature Review:
Review existing work on state-of-the-art cooperative perception, multi-agent deep
learning, and vehicle-to-everything (V2X) communication. Special focus will be given to
methods that consider bandwidth constraints, feature sharing, and robustness to
missing or delayed information.
Simulation Framework Development:
Extend the CARLA autonomous driving simulator by integrating a modular V2X
communication pipeline. This pipeline will simulate:
o Communication latency and delays
o Bandwidth limitations and rate constraints
o Packet loss and bit/symbol errors
o Variable communication quality across agents
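The four impairments above can be prototyped independently of the simulator before full CARLA integration. Below is a minimal, self-contained sketch of such a channel model; the class name, parameter names, and default values (`latency_ms`, `drop_prob`, etc.) are illustrative assumptions, not part of any CARLA API.

```python
import random

class V2XChannel:
    """Toy model of an imperfect V2X link (all parameters are hypothetical)."""

    def __init__(self, latency_ms=50.0, bandwidth_kbps=1000.0,
                 drop_prob=0.1, bit_error_prob=0.0, seed=None):
        self.latency_ms = latency_ms          # propagation + processing delay
        self.bandwidth_kbps = bandwidth_kbps  # rate constraint (1 kbps = 1 bit/ms)
        self.drop_prob = drop_prob            # independent packet-loss probability
        self.bit_error_prob = bit_error_prob  # per-byte chance of a flipped bit
        self.rng = random.Random(seed)

    def transmit(self, payload: bytes, send_time_ms: float):
        """Return (arrival_time_ms, payload), or None if the packet is lost."""
        if self.rng.random() < self.drop_prob:
            return None  # packet dropped
        # serialization delay imposed by the bandwidth limit
        tx_ms = len(payload) * 8 / self.bandwidth_kbps
        # inject symbol errors by flipping random bits
        corrupted = bytearray(payload)
        for i in range(len(corrupted)):
            if self.rng.random() < self.bit_error_prob:
                corrupted[i] ^= 1 << self.rng.randrange(8)
        return send_time_ms + self.latency_ms + tx_ms, bytes(corrupted)
```

Variable quality across agents then falls out naturally: instantiate one channel per agent pair with different parameters.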
Integration with AI Models:
Implement or adapt state-of-the-art cooperative perception models (e.g., feature fusion
or intermediate representation sharing). Connect these models to the communication
pipeline to enable realistic multi-agent interaction.
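As a concrete illustration of intermediate representation sharing, the sketch below fuses an ego vehicle's bird's-eye-view (BEV) feature map with whichever neighbor features actually arrived over the channel. Element-wise max fusion is one common choice in the cooperative perception literature; the function name and array shapes here are assumptions for illustration.

```python
import numpy as np

def fuse_bev_features(ego_feat, received_feats):
    """Element-wise max fusion over the ego feature map and all neighbor
    feature maps that survived transmission. Dropped packets simply mean
    a shorter list, so the output shape is unaffected by losses."""
    return np.max(np.stack([ego_feat] + received_feats, axis=0), axis=0)
```

A downstream detection head can then consume the fused map exactly as in the single-vehicle case, which is what makes this fusion point a natural place to connect the communication pipeline.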
Experimental Analysis:
Evaluate how different communication conditions affect perception and planning
performance in autonomous driving (e.g., object detection accuracy, tracking stability).
Analyze trade-offs between communication cost and AI performance.
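Detection accuracy under varying channel conditions can be scored with standard intersection-over-union (IoU) matching; a minimal axis-aligned 2D version is sketched below (the `(x1, y1, x2, y2)` box format is an assumption).

```python
def iou_2d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap width/height, clamped at zero for disjoint boxes
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Sweeping the channel parameters (e.g., packet-loss probability) and recording mean IoU or mAP then yields the communication-cost versus performance trade-off curves.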
Exploration of Learning-Based Improvements (Optional/Advanced):
Investigate strategies to improve robustness in perception and planning, such as:
o Learning what information to transmit (adaptive feature compression)
o Handling missing or delayed data using temporal models
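Both directions can be prototyped with very simple non-learned baselines before moving to learned variants. The sketch below shows (a) top-k sparsification as a stand-in for adaptive feature compression and (b) exponential down-weighting of stale features as a stand-in for temporal models; `k` and the decay constant `tau_ms` are hypothetical tuning parameters.

```python
import numpy as np

def topk_compress(feat, k):
    """Keep only the k largest-magnitude activations; zero out the rest.
    A crude baseline for 'learning what to transmit'."""
    flat = feat.ravel().copy()
    keep = np.argsort(np.abs(flat))[-k:]   # indices of the k largest |values|
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(feat.shape)

def age_weighted(feat, age_ms, tau_ms=100.0):
    """Exponentially down-weight a feature map by its age, so delayed
    packets contribute less than fresh ones."""
    return float(np.exp(-age_ms / tau_ms)) * feat
```

A learned variant would replace the fixed top-k rule with a policy trained end-to-end against the downstream perception loss, and the fixed decay with a recurrent or attention-based temporal model.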
Expected Student Profile
Strong knowledge of machine learning and deep learning
Solid experience with Python and frameworks such as PyTorch
Interest in autonomous systems, AI, and robotics
Willingness to work with simulation tools and in a vibrant team