Unsupervised Anomaly Detection Improves Imitation Learning for Autonomous Racing
About This Presentation
Slides presented by Ivan Ruchkin at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) on October 22, 2025 in Hangzhou, China.
Video: https://youtu.be/RjJ3nZR6_RQ
Abstract:
Imitation Learning (IL) has shown significant promise in autonomous driving, but its performance heavily depends on the quality of training data. Noisy or corrupted sensor inputs can degrade learned policies, leading to unsafe behavior. This paper presents an unsupervised anomaly detection approach to automatically filter out abnormal images from driving datasets, thereby enhancing IL performance. Our method leverages a Convolutional Autoencoder with a novel latent reference loss, which forces abnormal images to reconstruct with higher errors than normal images. This enables effective anomaly detection without requiring manually labeled data. We validate our approach on the realistic DonkeyCar autonomous racing platform, demonstrating that filtering videos significantly improves IL policies, as measured by a 25–40% reduction in cross-track error. Compared to baseline and ablation models, our method achieves superior anomaly detection across three real-world video corruptions: collision-based occlusions, transparent obstructions, and raindrop interference. The results highlight the effectiveness of unsupervised video anomaly detection in improving the robustness and performance of IL-based autonomous control.
Slide Content
Unsupervised Anomaly Detection Improves
Imitation Learning for Autonomous Racing
Yuang Geng, Yang Zhou, Yuyang Zhang, Zhongzheng Ren Zhang,
Kang Yang, Tyler Ruble, Giancarlo Vidal, Ivan Ruchkin
Department of Electrical and Computer Engineering
Trustworthy Engineered Autonomy (TEA) Lab
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Hangzhou, China
October 22, 2025
Slide 2: Motivation
➢ Imitation learning for autonomous racing
  • Leverages large-scale expert demonstrations
  • Requires clean, high-quality training data
Can we remove abnormal data without human supervision?
[Figure: training data collection pipeline, from raw data to cleaned data]
Slide 3: Reconstruction-based Data Cleaning
[Pipeline diagram] Unlabeled driving videos → extract frames → input batch → encoder → latent space → decoder → reconstruction loss. The latent reference loss, computed against a random reference batch, brings dirty and clean data closer. Frames whose reconstruction error is above the threshold are flagged as dirty images and removed; the remaining filtered images are kept.
Imitation learning is then trained on all images (baseline) and on the filtered images, and racing is evaluated via cross-track error.
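To make the pipeline on this slide concrete, here is a minimal PyTorch sketch of a convolutional autoencoder with a latent reference term and reconstruction-error filtering. The network sizes, the exact form of the reference term, the loss weight `lam`, and the thresholding rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch; architecture, reference term, loss weight, and threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for 3x64x64 driving frames."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def training_loss(model, input_batch, reference_batch, lam=0.1):
    """Reconstruction loss plus a latent reference term that pulls the input batch's
    latent codes toward those of a randomly drawn reference batch (one possible
    reading of the slide's 'brings dirty and clean data closer')."""
    recon, z = model(input_batch)
    with torch.no_grad():
        _, z_ref = model(reference_batch)
    recon_loss = F.mse_loss(recon, input_batch)
    ref_loss = F.mse_loss(z, z_ref.mean(dim=0, keepdim=True).expand_as(z))
    return recon_loss + lam * ref_loss

@torch.no_grad()
def filter_frames(model, frames, threshold):
    """Keep frames whose per-image reconstruction error stays below the threshold;
    high-error frames are treated as dirty and dropped."""
    recon, _ = model(frames)
    per_image_err = ((recon - frames) ** 2).mean(dim=(1, 2, 3))
    keep = per_image_err < threshold
    return frames[keep], per_image_err
```

How the threshold is chosen is not shown on the slide; taking a percentile of the per-frame error distribution is one common choice, used here only as a placeholder.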
Slide 4: Anomaly Detection Results
Reconstruction quality is measured by the Pearson Correlation Coefficient (PCC) between the input and its reconstruction.
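For reference, a PCC-based quality score can be computed as below; flattening each frame and its reconstruction into vectors before correlating them is an assumption about how the metric is applied, not a detail taken from the slides.

```python
import numpy as np

def reconstruction_pcc(image, reconstruction):
    """Pearson Correlation Coefficient between an image and its reconstruction,
    computed over flattened pixel values (closer to 1 = better reconstruction)."""
    x = np.asarray(image, dtype=np.float64).ravel()
    y = np.asarray(reconstruction, dtype=np.float64).ravel()
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
```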
Slide 5: Cleaned Data Improves Autonomous Driving
The imitation learning policies are measured by cross-track error (CTE) from a top-down camera.
25–40% reduction in CTE with cleaned data.
Unsupervised cleaning boosts imitation learning performance with no human effort.
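For context, cross-track error is typically the lateral distance between the vehicle and the track centerline. A simple nearest-point version is sketched below; the densely sampled centerline array and the use of pre-extracted (x, y) positions from the top-down camera are assumptions, not the authors' evaluation code.

```python
import numpy as np

def cross_track_error(car_xy, centerline_xy):
    """Distance from the car position to the nearest sampled centerline point.

    car_xy: (2,) array-like with the car's (x, y) position (e.g., estimated from
    the top-down camera). centerline_xy: (N, 2) array of densely sampled
    track-centerline points."""
    diffs = np.asarray(centerline_xy, dtype=float) - np.asarray(car_xy, dtype=float)[None, :]
    return float(np.min(np.linalg.norm(diffs, axis=1)))
```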
Slide 6: Summary
1. Problem: Unsupervised anomaly detection in human driving data
2. Solution: Reference loss groups dirty & clean data, making detection easier
3. Outcome: 25–40% reduction in tracking error after cleaning the data