3rd International Workshop on "Data Driven Intelligent Vehicle Applications"

DDIVA 2021

     July 11, 2021 (8:00-14:30 CET)

A workshop in conjunction with IV 2021 in Nagoya, Japan

Registration via the portal site is open, and free registration is possible for viewing the conference content. After registration, the content will be made available shortly. Please check the conference website for details.

Zoom Link: https://zoom.us/j/99581217036?pwd=QzE3NEdUZ2FwZGZtNitmek92b0d0dz09

Home

Recent advancements in processing units have improved our ability to construct a variety of architectures for understanding the surroundings of vehicles. Deep learning methods developed for geometric and semantic understanding of environments in driving scenarios aim to increase the success of full autonomy, at the cost of requiring large amounts of data.

Recently proposed methods challenge this dependency by pre-processing the data and by enhancing, collecting, and labeling it intelligently. The dependency can also be relieved by generating synthetic data, which comes with cost-free annotations, and by using test-drive data from the sensors and hardware mounted on a vehicle. Moreover, the state of the driver and passengers inside the cabin is also of great importance for traffic safety and for a holistic spatio-temporal perception of the environment.

The aim of this workshop is to form a platform for exchanging ideas and linking the scientific community active in the intelligent vehicles domain. This workshop will provide an opportunity to discuss applications and their data-dependent demands for spatio-temporal understanding of the surroundings as well as the inside of a vehicle, while addressing how the data can be exploited to improve results instead of changing the proposed architectures.

Please click here to view the previous editions of the workshop.


Important Dates

DDIVA Workshop: July 11, 2021

Please also check the conference web page for updates.

Workshop paper submission: (extended) May 10th, 2021

Notification of workshop paper acceptance: May 15th, 2021

Final Workshop paper submission: May 31st, 2021


Workshop Program

Start End (Time Zone: CET)
8:00 8:10 Introduction & Welcome
8:10 8:50 Keynote - Abhinav Valada ((Self-)Supervised Learning for Perception and Tracking in Autonomous Driving)
8:50 9:30 Keynote - Akshay Rangesh (Semi Automatic Labelling Techniques for Driver Behavior Models)
9:30 9:45 Break
9:45 10:25 Keynote - Nazım Kemal Üre (How to Get the Most Out of Your Simulation Data for Designing Decision Making Systems for Autonomous Driving)
10:25 10:40 Accepted Paper - Hariprasath Govindarajan (Self-Supervised Representation Learning for Content Based Image Retrieval of Complex Scenes)
10:40 11:20 Keynote - Alexander Carballo (Recent research topics in data-driven driving behavior and driving scene understanding at Takeda Lab.)
11:20 12:20 Lunch
12:20 13:00 Keynote - Julian Kooij (New sensing modalities for IV: Data-driven perception with acoustics and low-level radar)
13:00 13:40 Keynote - Fabian Oboril (Using the CARLA simulator for AV test and validation)
13:40 14:20 Panel Discussion
14:20 14:30 Closing

Confirmed Keynote Speakers

Speaker: Akshay Rangesh
Affiliation: Laboratory for Intelligent & Safe Automobiles, UC San Diego
Title of the talk: Semi Automatic Labelling Techniques for Driver Behavior Models
Abstract

Modern supervised machine learning models require large amounts of labelled data for timely convergence during training and good generalization during testing. Thus, labelling large amounts of data has become an inevitable bottleneck in the model production pipeline. In this talk, I will present techniques and workflows that could considerably reduce the labelling time and/or effort, and in some cases remove the need to label altogether. To further illustrate this, I will present three case studies where these ideas have been put to use in practice, and discuss the resulting outcomes of our approach. In particular, this talk will cover concepts such as simultaneous data and label capture, automatic labelling using auxiliary sensors, and task-specific data augmentation schemes. These techniques are meant for general use, and could be applied to or adapted for tasks beyond the ones covered in this talk.
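
As a rough illustration of the automatic-labelling theme (a generic sketch, not the speaker's actual pipeline), the snippet below uses a pretrained visual detector to generate pseudo-labels for unlabelled images; the model choice and confidence threshold are assumptions made for illustration.

    # Pseudo-labelling sketch: auto-label unlabelled images with a pretrained
    # detector. Model choice and threshold are illustrative assumptions.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    detector.eval()

    def pseudo_label(image_path, score_threshold=0.8):
        """Return high-confidence boxes and labels usable as training labels."""
        image = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            pred = detector([image])[0]
        keep = pred["scores"] > score_threshold  # keep confident detections only
        return pred["boxes"][keep], pred["labels"][keep]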

Speaker: Prof. Dr. Abhinav Valada
Affiliation: Robot Learning Lab, Albert-Ludwigs-Universität Freiburg
Title of the talk: (Self-)Supervised Learning for Perception and Tracking in Autonomous Driving
Abstract

Scene understanding and object tracking play a critical role in enabling autonomous vehicles to navigate in dense urban environments. The last decade has witnessed unprecedented progress in these tasks by exploiting learning techniques to improve performance and robustness. Despite these advances, most techniques today still require a tremendous amount of human-annotated training data and do not generalize well to diverse real-world environments. In this talk, I will discuss some of our efforts targeted at addressing these challenges by leveraging self-supervision and multi-task learning for various tasks ranging from panoptic segmentation to cross-modal object tracking using sound. Finally, I will conclude the talk with a discussion on opportunities for further scaling up the learning of these tasks.
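
As a minimal sketch of one widely used self-supervision recipe (contrastive learning over two augmented views, NT-Xent style), shown only to make the idea concrete: it is not the specific method from the talk, and the temperature value is an assumption.

    # Contrastive (NT-Xent style) loss over two augmented views of a batch.
    # A generic self-supervision recipe, not the talk's specific method.
    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """z1, z2: (N, D) embeddings of two views of the same N samples."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
        sim = z @ z.t() / temperature                       # scaled cosine similarity
        n = z1.size(0)
        sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)                # positives are the paired views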

Speaker: Dr. Julian F.P. Kooij
Affiliation: Intelligent Vehicles Group, TU Delft
Title of the talk: New sensing modalities for IV: Data-driven perception with acoustics and low-level radar
Abstract

Most research on data-driven perception in Intelligent Vehicles focuses on camera and lidar perception tasks, including object detection and scene segmentation. In this talk, I will present our group's research on data-driven methods for other sensing modalities that traditionally rely on signal processing (radar), or are even completely ignored in current IV research (acoustics).
First, I will demonstrate how low-level radar data, represented as a 3D azimuth-range-Doppler tensor, provides valuable information to classify radar targets before clustering them for multi-class object detection [Palffy,RA-L’20]. We train our low-level radar detection network using annotations obtained from a visual detector on synchronized radar and camera recordings with our group's research vehicle. Extrinsic multi-modal sensor calibration for our vehicle will also be briefly discussed [Domhof,T-IV’21].
Second, I present novel research [Schulz,RA-L’21] that shows how a vehicle-mounted microphone array can be used to detect occluded traffic around corners 1 second before the camera-based detection system can. For these experiments we collected a new dataset with our research vehicle at various narrow alleys in the Delft inner city.
Overall, we seek to exploit easily collected multi-modal sensor data. Still, as we lack truly huge datasets and annotations needed for end-to-end learning, we rely on a combination of data-driven learning and classic signal processing techniques.

[Palffy,RA-L’20]: “CNN based Road User Detection using the 3D Radar Cube”, A. Palffy et al., IEEE Robotics and Automation Letters (RA-L), 2020
[Domhof,T-IV’21]: “A Joint Extrinsic Calibration Tool for Radar, Camera and Lidar”, J. Domhof et al., IEEE Trans. on Intelligent Vehicles (T-IV), 2021
[Schulz,RA-L’21]: “Hearing What You Cannot See: Acoustic Vehicle Detection Around Corners”, Y. Schulz et al., IEEE Robotics and Automation Letters (RA-L), 2021
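
To make the radar-cube idea concrete, here is a toy 3D-CNN classifier over an azimuth-range-Doppler tensor; all tensor shapes and layer sizes are invented for illustration and do not reproduce the architecture of [Palffy,RA-L’20].

    # Toy 3D-CNN over an azimuth-range-Doppler radar tensor; shapes and layer
    # sizes are illustrative only.
    import torch
    import torch.nn as nn

    class RadarCubeClassifier(nn.Module):
        def __init__(self, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),                    # global pooling
            )
            self.classifier = nn.Linear(16, num_classes)

        def forward(self, x):                               # x: (B, 1, azimuth, range, Doppler)
            return self.classifier(self.features(x).flatten(1))

    logits = RadarCubeClassifier()(torch.randn(2, 1, 16, 32, 32))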

Speaker: Dr.-Ing. Fabian Oboril
Affiliation: Research Scientist for Dependable Driving Automation, Intel Labs
Title of the talk: Using the CARLA simulator for AV test and validation
Abstract

Automated vehicles (AVs) are gaining increasing interest and their development is making great progress. However, assuring safe driving operation under all possible road and environment conditions is still an open challenge. In this regard, vehicle simulation is seen as a major cornerstone for test and validation. Recorded real-world challenges can be rebuilt in simulation (e.g., NHTSA pre-crash scenarios), and artificial corner cases can be added on top. These can then be utilized to test the complex software stack in various configurations to find possible safety or availability issues. For example, the same situation can be tested with different settings of the planning modules (driving policy) or road conditions to ensure that all possibilities result in safe driving. In this talk, we will present how the CARLA vehicle simulator, in combination with an open-source scenario editor, can be used to re-create traffic scenarios and replay them under various operating conditions, thereby taking one step towards safe autonomous driving.
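
CARLA exposes a Python client API, so varying a scenario's operating conditions can be scripted directly. Below is a minimal sketch under assumed host, port, town, and weather values; it illustrates the idea rather than the tooling presented in the talk.

    # Minimal CARLA client sketch: load a map and sweep one operating condition
    # (rain) over a scenario. Host/port, town, and values are assumptions.
    import carla

    client = carla.Client("localhost", 2000)   # assumes a running CARLA server
    client.set_timeout(10.0)
    world = client.load_world("Town03")

    for rain in (0.0, 50.0, 100.0):            # replay under varied weather
        weather = world.get_weather()
        weather.precipitation = rain
        weather.wetness = rain
        world.set_weather(weather)
        # ... spawn actors and replay the recorded scenario here ...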

Speaker: Dr. Alexander Carballo
Affiliation: Designated Associate Professor, Nagoya University
Title of the talk: Recent research topics in data-driven driving behavior and driving scene understanding at Takeda Lab
Abstract

One of the most important efforts by Prof. Kazuya Takeda has focused on signal processing technology research, in particular understanding human behavior through data-centric approaches. Faithful to that tradition, in this talk we will introduce our recent data-driven work on driving behavior at the Takeda Laboratory.
In the first part, we will discuss the generation of personalized lane change maneuvers based on subjective risk modeling, to analyze differences in individual driving behavior based on drivers' subjective perception of risk. In the second part, we will cover the extraction of driving behaviors from human experts; latent features are clustered into behaviors used to create different velocity profiles, allowing an autonomous driving agent to drive in a human-like manner. In the last part, we will explain a graph convolutional network approach for predicting potential semantic relationships from object proposals; relationship data provides a human-readable description of the objects' behavior.
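
As a generic illustration of the behavior-extraction step (clustering latent features into discrete behaviors), the sketch below runs k-means on stand-in features; the feature source and the number of clusters are assumptions, not the lab's actual models.

    # Cluster latent driving features into discrete behaviors; the features
    # here are random stand-ins and k is an illustrative choice.
    import numpy as np
    from sklearn.cluster import KMeans

    latents = np.random.randn(1000, 32)        # stand-in for learned latent features
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(latents)
    behavior_ids = kmeans.labels_              # one behavior label per sample
    # Each cluster could then be mapped to its own velocity profile.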

Speaker: Dr. Nazım Kemal Üre
Affiliation: Artificial Intelligence Research Center, Istanbul Technical University & Eatron Technologies
Title of the talk: How to Get the Most Out of Your Simulation Data for Designing Decision Making Systems for Autonomous Driving
Abstract

Developing reinforcement learning (RL) algorithms for automated tactical decision making has been an attractive topic in recent years. It is evident that designing RL-based autonomous driving systems can help tremendously with handling the performance and safety issues of alternative planning and decision making approaches. That being said, most RL algorithms are trained in simulators and require large amounts of data to converge to good solutions. Thus, designing sample-efficient RL algorithms is important for accelerating design cycles and verifying the safety and robustness of RL solutions. In addition, good performance in simulation does not always imply good performance in real-life tests, so additional measures need to be taken to guarantee that trained RL agents generalize to real-life situations. In this talk, we go over three case studies that deal with these issues: i) how to utilize curriculum RL to boost an autonomous driving agent's performance using a limited amount of data; ii) how to take advantage of offline RL to inject external driving demonstrations for improving sample efficiency; and iii) how to use simulators of multiple fidelities to transfer simulated performance to real life.
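
A schematic sketch of the curriculum idea (ordering training scenarios from easy to hard) follows; make_driving_env is a hypothetical environment factory, and the stages, policy, and step counts are assumptions rather than the case-study setup.

    # Schematic curriculum-RL loop: train one agent on increasingly difficult
    # scenario configurations. `make_driving_env` is a hypothetical factory.
    from stable_baselines3 import PPO

    def make_driving_env(traffic_density, num_lanes):
        """Hypothetical: return a gym-style driving environment."""
        raise NotImplementedError("plug in your simulator binding here")

    stages = [
        {"traffic_density": 0.1, "num_lanes": 2},   # easy
        {"traffic_density": 0.5, "num_lanes": 3},   # medium
        {"traffic_density": 0.9, "num_lanes": 4},   # hard
    ]

    model = None
    for stage in stages:
        env = make_driving_env(**stage)
        if model is None:
            model = PPO("MlpPolicy", env, verbose=0)
        else:
            model.set_env(env)                      # keep learned weights across stages
        model.learn(total_timesteps=100_000)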

Call For Papers

Spatio-temporal data is crucial for improving accuracy in deep learning applications. In this workshop, we mainly focus on data and deep learning, since data enables applications to infer more information about the environment for autonomous driving. This workshop will provide an opportunity to discuss applications and their data-dependent demands for understanding the environment of a vehicle, while addressing how the data can be exploited to improve results instead of changing the proposed architectures. The ambition of this full-day DDIVA workshop is to form a platform for exchanging ideas and linking the scientific community active in the intelligent vehicles domain.

To this end we welcome contributions with a strong focus on (but not limited to) the following topics within Data Driven Intelligent Vehicle Applications:

Data Perspective:

  • Synthetic Data Generation
  • Sensor Data Synchronization
  • Sequential Data Processing
  • Data Labeling
  • Data Visualization
  • Data Discovery

Application Perspective:

  • Visual Scene Understanding
  • Large Scale Scene Reconstruction
  • Semantic Segmentation
  • Object Detection
  • In Cabin Understanding
  • Emotion Recognition

Contact workshop organizers: walter.zimmer( at )tum.de


Submission

Please check the conference webpage for the details of submission guidelines.

Authors are encouraged to submit high-quality, original research (i.e., not previously published or accepted for publication in substantially similar form in any peer-reviewed venue, including a journal, conference, or workshop). Accepted workshop papers will be published in the conference proceedings. For publication, at least one author needs to be registered for the workshop and the conference and present their work.

While preparing your manuscript, please follow the formatting guidelines of IEEE available here and listed below. Papers submitted to this workshop as well as IV2021 must be original, not previously published or accepted for publication elsewhere, and they must not be submitted to any other event or publication during the entire review process.

Manuscript Guidelines:

  • Language: English
  • Paper size: US Letter
  • Paper format: Two-column format in the IEEE style
  • Paper limit: For the initial submission, a manuscript can be 6-8 pages. For the final submission, a manuscript should be 6 pages; 2 additional pages are allowed at an extra charge ($100 per page)
  • Abstract limit: 200 words
  • File format: A single PDF file, please limit the size of PDF to be 10 MB
  • Compliance: check here for more info

The paper template is identical to that of the main IV2021 symposium.

To go to the paper submission site, please click here.


Workshop Organizers