Generalizable Image Repair for Robust Visual Control
About This Presentation
Slides presented by Ivan Ruchkin at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) on October 22, 2025 in Hangzhou, China.
Video: https://youtu.be/C3-WlZpYBm8
Paper: https://arxiv.org/abs/2503.05911
Slide Content
Generalizable Image Repair for Robust Visual Control
Carson Sobolewski, Zhenjiang Mao,
Kshitij Maruti Vejre, Ivan Ruchkin
Trustworthy Engineered Autonomy (TEA) Lab
Department of Electrical and Computer Engineering
University of Florida
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Hangzhou, China
October 22, 2025
Vision-based Control System and Disturbances
Vision-based Autonomous Racing
● Vision-based controller h: Y → U; fixed and realized by a convolutional neural network
Five Visual Disturbances: Darkness, Salt/Pepper, Rain, Fog, Snow
[Figure: the original observation alongside each of the five disturbed versions]
Image Repair Setting
Our goal: train an image repair model r: Y → Y that translates images from the testing distribution back to the training distribution.
Repair goal: minimize the trajectory difference between the system running on normal vs. repaired images, measured with cross-track error (CTE).
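The cross-track error used as the repair metric can be sketched as the distance from the vehicle's position to the nearest point on a reference raceline. A minimal pure-Python sketch, where the polyline raceline representation and the function name are illustrative assumptions, not the paper's implementation:

```python
import math

def cross_track_error(pos, raceline):
    """Distance from the vehicle position to the nearest point on a
    piecewise-linear reference raceline (list of (x, y) waypoints)."""
    best = math.inf
    for (x1, y1), (x2, y2) in zip(raceline, raceline[1:]):
        # Project pos onto the segment, clamped to its endpoints.
        dx, dy = x2 - x1, y2 - y1
        seg_len2 = dx * dx + dy * dy
        t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
            ((pos[0] - x1) * dx + (pos[1] - y1) * dy) / seg_len2))
        px, py = x1 + t * dx, y1 + t * dy
        best = min(best, math.hypot(pos[0] - px, pos[1] - py))
    return best

# Straight raceline along the x-axis; vehicle offset laterally by 0.5.
line = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(cross_track_error((1.0, 0.5), line))  # → 0.5
```

Averaging this quantity over a trajectory gives one number per run, which is how the later result tables compare repair methods.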
Our Repair Approach
Key Idea: Controller Loss
Augment the training of the repair model with a controller loss:
➢ Penalize the control difference between the repaired and original images
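One plausible reading of this loss term, sketched with toy stand-ins (the controller here is a hypothetical scalar function, not the paper's CNN): the repair model is penalized whenever the fixed controller h outputs a different action on the repaired image than on the corresponding original image.

```python
# Toy sketch of the controller loss: squared difference between the
# controller's actions on the original vs. repaired observation.

def controller_loss(h, y_original, y_repaired):
    """Penalty on the control difference induced by repair."""
    return (h(y_original) - h(y_repaired)) ** 2

# Stand-in "controller": mean pixel intensity mapped to a steering value.
h = lambda img: sum(img) / len(img)

original = [0.2, 0.4, 0.6]
repaired = [0.2, 0.5, 0.6]   # imperfect repair shifts one pixel
print(controller_loss(h, original, repaired))
```

The key design choice is that h stays frozen during training; gradients flow only into the repair model, steering it toward repairs that preserve control behavior rather than mere pixel fidelity.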
Training Architecture: CycleGAN
[Architecture diagram: two generator/discriminator cycles coupled through the fixed controller h]
Forward cycle: corrupted observation ŷ_t → Generator → repaired observation r(ŷ_t) → Generator → reconstructed observation ȳ_t; a Discriminator scores r(ŷ_t).
Backward cycle: uncorrupted observation y_t → Generator → fake corrupted observation g(y_t) → Generator → reconstructed observation ȳ'_t; a Discriminator scores g(y_t).
Controller loss (ours): the fixed Controller h maps each observation to an action (h(ŷ_t), h(r(ŷ_t)), h(y_t), h(g(y_t))), and differences between corresponding actions are penalized.
Intuition: Cyclical mapping between the distributions of clean, corrupted, and repaired data.
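The cyclical mapping above can be illustrated numerically with toy stand-ins for the two generators (these simple invertible maps are illustrative assumptions, not the paper's networks): sending an image corrupted → repaired → re-corrupted should return something close to the input, and the cycle-consistency loss measures the residual.

```python
# Toy sketch of CycleGAN's cycle-consistency idea.

def repair(img):        # stand-in generator r: brighten a darkened image
    return [p * 2.0 for p in img]

def corrupt(img):       # stand-in inverse generator g: darken
    return [p / 2.0 for p in img]

def cycle_loss(img):
    """Mean absolute error between the input and its reconstruction
    after a full corrupted -> repaired -> corrupted round trip."""
    recon = corrupt(repair(img))          # ȳ_t = g(r(ŷ_t))
    return sum(abs(a - b) for a, b in zip(img, recon)) / len(img)

dark = [0.1, 0.2, 0.3]
print(cycle_loss(dark))  # → 0.0 for these exactly-inverse toy maps
```

In the real architecture neither generator is an exact inverse of the other, so this loss stays positive and pushes the pair toward mutually consistent mappings between the clean and corrupted distributions.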
Experimental Results
Qualitative Results: Restored Images
[Figure: corrupted observations before (left) vs. after repair (right)]
Takeaway: repaired images have high-frequency noise.
Evaluating Generalization
1. Repair models are trained on normal images and three types of noise
2. Then tested on the other two types of noise
● Two experimental configurations:
  A. Trained on Normal, Darkened, Salt/Pepper, Rain; tested on Snow, Fog
  B. Trained on Normal, Darkened, Salt/Pepper, Fog; tested on Rain, Snow
Quantitative Results: Raceline Tracking Loss

Config  Setup                                CTE (Unseen)  CTE (All)
--      Original Controller                  2.131         2.045
--      Lucy-Richardson                      2.362         2.471
--      Variational Bayes                    2.646         2.571
A       Robust Controller                    2.295         1.810
A       VAE                                  2.085         2.171
A       pix2pix (No Controller Loss)         1.734         1.605
A       CycleGAN (Original Controller Loss)  1.507         1.394
B       Robust Controller                    1.870         1.684
B       VAE                                  1.998         2.167
B       pix2pix (Robust Controller Loss)     1.523         1.643
B       CycleGAN (Original Controller Loss)  1.770         1.530

Winner: CycleGAN + controller loss.
● Even better than the original controller on original images and unseen corruptions.
Summary
1. Problem: Vision-based control under unseen distribution shift
2. Solution: Generative adversarial networks trained with a controller loss
3. Outcome: Improved autonomous racing with unseen visual disturbances