Poster: Unsupervised Anomaly Detection Improves Imitation Learning for Autonomous Racing
About This Presentation
Poster presented by Ivan Ruchkin at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) on October 22, 2025 in Hangzhou, China.
Video: https://youtu.be/RjJ3nZR6_RQ
Abstract:
Imitation Learning (IL) has shown significant promise in autonomous driving, but its performance heavily depends on the quality of training data. Noisy or corrupted sensor inputs can degrade learned policies, leading to unsafe behavior. This paper presents an unsupervised anomaly detection approach to automatically filter out abnormal images from driving datasets, thereby enhancing IL performance. Our method leverages a Convolutional Autoencoder with a novel latent reference loss, which forces abnormal images to reconstruct with higher errors than normal images. This enables effective anomaly detection without requiring manually labeled data. We validate our approach on the realistic DonkeyCar autonomous racing platform, demonstrating that filtering videos significantly improves IL policies, as measured by a 25-40% reduction in cross-track error. Compared to baseline and ablation models, our method achieves superior anomaly detection across three real-world video corruptions: collision-based occlusions, transparent obstructions, and raindrop interference. The results highlight the effectiveness of unsupervised video anomaly detection in improving the robustness and performance of IL-based autonomous control.
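To make the approach concrete, below is a minimal PyTorch sketch of a convolutional autoencoder trained with a standard reconstruction loss plus a latent reference loss computed against a randomly drawn reference batch. This is an illustration, not the authors' implementation: the layer sizes, the mean-latent formulation of the reference term, and the weight lambda_ref are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for 3x64x64 driving frames (sizes are illustrative)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def training_step(model, batch, reference_batch, lambda_ref=0.1):
    """One training step: reconstruction loss + latent reference loss.

    The latent reference term here penalizes the distance between the mean
    latent code of the current batch and that of a randomly sampled reference
    batch, pulling dirty and clean data closer together in latent space.
    This is one plausible instantiation; the paper's exact formulation may differ.
    """
    recon, z = model(batch)
    with torch.no_grad():
        _, z_ref = model(reference_batch)      # latent codes of the reference batch
    recon_loss = F.mse_loss(recon, batch)      # standard reconstruction term
    ref_loss = F.mse_loss(z.mean(dim=0), z_ref.mean(dim=0))  # latent reference term
    return recon_loss + lambda_ref * ref_loss
```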
Slide Content
Unsupervised Anomaly Detection Improves
Imitation Learning for Autonomous Racing
Yuang Geng, Yang Zhou, Yuyang Zhang, Zhongzheng Ren Zhang,
Kang Yang, Tyler Ruble, Giancarlo Vidal, Ivan Ruchkin
Electrical & Computer Engineering, University of Florida, USA
How can we automatically detect and remove corrupted training videos in an
unsupervised way to improve imitation learning for autonomous racing?
Although not all disturbances appear fully repaired to the eye, the Root Mean-Squared Cross-Track Errors (m, ↓) indicate that the repair models drastically improve performance.
[Figure: Autoencoder-based unsupervised anomaly detection and training data cleaning pipeline]
Unlabeled driving videos → extract frames → input batch → Encoder → latent space → Decoder → image reconstruction, trained with a reconstruction loss plus a latent reference loss against a random reference batch (the reference loss makes dirty and clean data closer in latent space). Frames are split by a threshold on reconstruction quality into filtered images (✔) and dirty images (✖). Imitation learning is then trained on the filtered images (ours) and on all images (baseline), and the resulting racing policies are evaluated via CTE.
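Under the same assumptions, the cleaning step might look like the following sketch: score each frame by the Pearson correlation between the input and its reconstruction (as described below) and keep only frames above a threshold. The batch size, the threshold value, and the model interface (forward returning reconstruction and latent code, as in the sketch above) are illustrative assumptions.

```python
import numpy as np
import torch

def clean_dataset(model, frames, threshold=0.9, device="cpu"):
    """Split frames into kept (clean) and dropped (dirty) sets by thresholding
    a per-frame reconstruction-quality score. The score here is the Pearson
    correlation between each input frame and its reconstruction; the threshold
    of 0.9 is illustrative, not taken from the poster.

    frames: float tensor of shape (N, 3, H, W) with values in [0, 1].
    """
    model.eval().to(device)
    kept, dropped = [], []
    with torch.no_grad():
        for i in range(0, len(frames), 64):              # process in mini-batches
            batch = frames[i:i + 64].to(device)
            recon, _ = model(batch)                      # assumes forward -> (recon, latent)
            for x, r in zip(batch.cpu().numpy(), recon.cpu().numpy()):
                score = np.corrcoef(x.ravel(), r.ravel())[0, 1]   # PCC of input vs. reconstruction
                (kept if score >= threshold else dropped).append(x)
    return kept, dropped

# An imitation-learning policy (e.g., behavior cloning) is then trained on
# `kept` only (ours) and compared against training on all frames (baseline).
```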
Unsupervised Cleaning Performance / Improving Autonomous Racing
[Figure: Raw training video vs. cleaned training video, produced with no human effort]
Normal images reconstruct well, abnormal ones poorly; the gap is measured by the Pearson Correlation Coefficient (PCC) between the input and its reconstruction.
Performance is evaluated by Cross-Track Error (CTE).
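For reference, here is a small sketch of how a Root-Mean-Squared Cross-Track Error could be computed from a logged trajectory and a track centerline. The nearest-point approximation and the array layout are assumptions; platforms such as the DonkeyCar simulator may report a per-step CTE directly.

```python
import numpy as np

def rms_cross_track_error(positions, centerline):
    """RMS cross-track error (m): for each logged (x, y) position of the car,
    take the distance to the nearest point on the track centerline, then take
    the root mean square over the whole run. Both inputs are (N, 2) arrays.
    This nearest-point approximation is an assumption, not the poster's exact metric."""
    positions = np.asarray(positions, dtype=float)
    centerline = np.asarray(centerline, dtype=float)
    # Pairwise distances between every car position and every centerline point.
    d = np.linalg.norm(positions[:, None, :] - centerline[None, :, :], axis=-1)
    cte = d.min(axis=1)                       # distance to nearest centerline point
    return float(np.sqrt(np.mean(cte ** 2)))  # root mean square

# The poster's reported 25-40% reduction in cross-track error corresponds to
# a drop in this kind of RMS value when training on filtered rather than raw frames.
```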