Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Size: 4.43 MB
Language: en
Added: May 27, 2024
Slides: 36 pages
Slide Content
Security and Privacy Research
Democratizing
Fuzzing at Scale
Abhishek Arya
May 27, 2024
About me
●Engineering Director, Google Open Source and
Supply Chain Security
●Founding member and TAC representative,
Open Source Security Foundation (OpenSSF)
●Founding Chrome Security member
What is fuzzing?
Automated bug finding with
unexpected inputs
Fuzzing: art of controlled chaos
Reward = Security vulnerability ||
Stability bug ||
State assertion
Input = Malicious or unexpected data
Agenda
History: The Early Days
Platform: Pillars of Fuzzing
Community: Scaling Research
AI/ML: The Next Frontier
Future: Trends and Challenges
History
The Early Days
1988: The origin story: Barton Miller CS736
(1) Operating System Utility Program Reliability −
The Fuzz Generator: The goal of this project is to evaluate the
robustness of various UNIX utility programs, given an unpredictable
input stream. This project has two parts. First, you will build a fuzz
generator. This is a program that will output a random character
stream. Second, you will take the fuzz generator and use it to attack
as many UNIX utilities as possible, with the goal of trying to break
them. For the utilities that break, you will try to determine what type
of input caused the break.
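Miller's fuzz generator is simple enough to sketch. The C++ fragment below (the function name and seeding are my own, not from the assignment) emits the kind of random character stream the project feeds to UNIX utilities:

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <string>

// Emit n random bytes: the "fuzz" stream. A fixed seed makes a failing
// run reproducible, which the original purely random streams were not.
std::string fuzz_stream(std::size_t n, std::uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> byte(0, 255);
    std::string out;
    out.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        out.push_back(static_cast<char>(byte(rng)));
    return out;
}
```

In Miller's setup such a stream would be piped into each utility and the exit status checked for crashes and hangs.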
2008: MS SAGE: Automated Whitebox Testing
…evaluates the recorded trace, and
gathers constraints on inputs
capturing how the program uses these.
The collected constraints are then
negated one by one and solved with a
constraint solver, producing new inputs
that exercise different control paths in
the program. This process is repeated
with the help of a code-coverage
maximizing heuristic designed to find
defects as fast as possible.
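The negate-and-solve loop can be illustrated with a toy; the target program, predicate encoding, and brute-force "solver" below are my own stand-ins for SAGE's trace recorder and constraint solver:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct Constraint {
    std::size_t pos;                         // which input byte the branch reads
    std::function<bool(std::uint8_t)> pred;  // branch condition on that byte
    bool taken;                              // outcome on the concrete run
};

// Hypothetical target: reaches its deepest code only when input spells "ba".
std::vector<Constraint> trace(const std::string& in) {
    std::vector<Constraint> t;
    auto is_b = [](std::uint8_t c) { return c == 'b'; };
    t.push_back({0, is_b, !in.empty() && is_b(in[0])});
    if (t.back().taken) {
        auto is_a = [](std::uint8_t c) { return c == 'a'; };
        t.push_back({1, is_a, in.size() > 1 && is_a(in[1])});
    }
    return t;
}

// Negate the last branch and "solve" it by trying all 256 byte values,
// producing a new input that exercises a different control path.
std::string negate_last(const std::string& in) {
    auto t = trace(in);
    const Constraint& c = t.back();
    std::string out = in;
    if (out.size() <= c.pos) out.resize(c.pos + 1, '\0');
    for (int v = 0; v < 256; ++v) {
        if (c.pred(static_cast<std::uint8_t>(v)) != c.taken) {
            out[c.pos] = static_cast<char>(v);
            return out;
        }
    }
    return out;
}
```

Starting from "xx", one negation produces an input taking the first branch, and a second negation reaches the deepest path; real whitebox fuzzers do this with SMT solvers over full instruction traces.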
2009: Tavis O: Automated Corpus Distillation
…simply calculate the cardinality of our large
corpus, and then attempt to find the smallest
sub-collection such that the union of those
inputs has the same cardinality.
…Just simple mutation of our distilled corpus
would break most software (or a corpus distilled
using coverage data for program A would break
similar program B without modification!)
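Corpus distillation is the minimal set-cover problem Ormandy describes; a greedy approximation, with coverage abstracted to sets of edge IDs (names are illustrative), looks like:

```cpp
#include <cassert>
#include <set>
#include <vector>

using Coverage = std::set<int>;  // edge IDs covered by one input

// Greedily pick a sub-collection of inputs whose union of covered
// edges equals the whole corpus's coverage; returns their indices.
std::vector<std::size_t> distill(const std::vector<Coverage>& corpus) {
    Coverage target, have;
    for (const auto& c : corpus) target.insert(c.begin(), c.end());
    std::vector<std::size_t> picked;
    while (have != target) {
        std::size_t best = 0, best_gain = 0;
        for (std::size_t i = 0; i < corpus.size(); ++i) {
            std::size_t gain = 0;
            for (int e : corpus[i]) gain += !have.count(e);
            if (gain > best_gain) { best_gain = gain; best = i; }
        }
        if (best_gain == 0) break;  // defensive; cannot trigger here
        picked.push_back(best);
        have.insert(corpus[best].begin(), corpus[best].end());
    }
    return picked;
}
```

Mutating only the distilled inputs preserves the corpus's reach while drastically cutting the work per fuzzing iteration.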
2010-11: Structured File Format Fuzzing
●Randomized, black-box testing
with no feedback loop
●Good understanding of file
formats (parsers, pits, etc)
●Mutations focused on generating
almost-valid testcases
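A minimal sketch of the "almost-valid" idea, assuming a made-up format whose first four bytes are a magic value: mutate only past the header, so the parser gets beyond its initial checks.

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <string>

const std::string kMagic = "RIFF";  // assumed file magic, illustrative only

// Flip a few random bits in the body while preserving the magic header,
// yielding an almost-valid testcase that exercises deeper parsing code.
std::string mutate_body(std::string file, std::uint32_t seed, int flips) {
    std::mt19937 rng(seed);
    if (file.size() <= kMagic.size()) return file;  // nothing to mutate
    std::uniform_int_distribution<std::size_t> pos(kMagic.size(),
                                                   file.size() - 1);
    for (int i = 0; i < flips; ++i) {
        std::size_t p = pos(rng);
        std::uint8_t bit = static_cast<std::uint8_t>(1u << (rng() % 8));
        file[p] = static_cast<char>(file[p] ^ bit);
    }
    return file;
}
```

Real structured fuzzers of this era encoded far richer format knowledge (field types, lengths, checksums), but the principle is the same: stay close to valid.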
Platform
Pillars of Modern Fuzzing
Platform Goals
●Testing: simple-to-write, easy-to-integrate fuzzer unit tests in day-to-day developer workflows
●Instrumentation: reliably reproduce a fault testcase with negligible overhead
●Automation: automate all parts of the continuous fuzzing pipeline, including build management, crash handling, regression analysis and fix verification
●Scale: find regressions before they impact users
Testing: AFL (American Fuzzy Lop)
●First widely adopted coverage-guided fuzzer
●Supports both fast compiler instrumentation
and QEMU mode for binary-only targets
●Efficient forkserver spawns processes without
repeated execve() calls
●Novel mutation strategies - bit flipping,
input fragment splicing, dictionary insertions, etc
●Several triage features, e.g. minimization
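AFL's deterministic stage starts with single-bit flips; a sketch of that mutation (simplified from AFL's actual walking-bit implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Input = std::vector<std::uint8_t>;

// Produce every single-bit-flip neighbour of the input: len*8 mutants,
// walking each byte's bits from most-significant to least, as AFL does.
std::vector<Input> bitflip_1(const Input& in) {
    std::vector<Input> out;
    for (std::size_t i = 0; i < in.size() * 8; ++i) {
        Input m = in;
        m[i / 8] ^= static_cast<std::uint8_t>(0x80u >> (i % 8));
        out.push_back(m);
    }
    return out;
}
```

AFL layers wider flips, arithmetic increments, dictionary insertions, and randomized "havoc" stacking on top of this base stage.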
Testing: libFuzzer
●First in-process evolutionary fuzzer (later “persistent” mode in AFL)
●Foundation for developer-focused fuzzer unit tests
●Novel mutation strategies, e.g. value profiling
●Support for custom mutators - libprotobuf-mutator (also FuzzTest)
●Natively integrated in the LLVM toolchain
#include "libxml/parser.h"
#include "libxml/tree.h"
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
if (auto doc = xmlReadMemory(reinterpret_cast<const char *>(data), size, "noname.xml", NULL, 0))
xmlFreeDoc(doc);
return 0;
}
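Conceptually, libFuzzer wraps a target like the one above in an in-process evolutionary loop. This self-contained toy (the string-matching "target" and its feature definition are mine, not libFuzzer's) shows how keeping inputs that reach new features lets random byte mutations make steady progress:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <random>
#include <set>
#include <string>
#include <vector>

// Stand-in target: the "feature" it reports is the length of the
// matched prefix of a magic token, a proxy for code coverage.
int target(const std::string& data) {
    const std::string key = "FUZZ";
    int n = 0;
    while (n < (int)key.size() && n < (int)data.size() && data[n] == key[n])
        ++n;
    return n;
}

// Evolutionary loop: mutate a corpus entry in-process, keep mutants that
// hit a feature not seen before, return the best input found.
std::string fuzz_loop(int iters, std::uint32_t seed) {
    std::mt19937 rng(seed);
    std::vector<std::string> corpus = {"AAAA"};
    std::set<int> features = {target(corpus[0])};
    for (int i = 0; i < iters; ++i) {
        std::string m = corpus[rng() % corpus.size()];
        m[rng() % m.size()] = static_cast<char>(rng() % 256);
        int f = target(m);
        if (!features.count(f)) { features.insert(f); corpus.push_back(m); }
    }
    return *std::max_element(corpus.begin(), corpus.end(),
        [](const std::string& a, const std::string& b) {
            return target(a) < target(b); });
}
```

Each newly matched prefix character is retained as a stepping stone toward the full token, the same dynamic that lets coverage feedback beat blind random testing.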
Instrumentation: catch bugs reliably
●Sanitizers for all platforms
●Static instrumentation over DBI: ~1.5-2x slowdown vs 10-50x
●Reliable, comprehensive coverage
for bug classes (e.g. stack, global,
container overflows, undef behavior)
●Enable Security ASSERTs.
Automation: The ClusterFuzz Platform
●Continuous fuzzing on main/master
●Automated build mgmt, crash dedup,
triage, regression and fixed testing
●Automated corpus cross-pollination,
variant analysis, corpus culling, etc
●Support for custom mutators
●Ensemble fuzzing, incl support for
popular fuzzing engines and tools
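One of those automations, crash deduplication, is commonly done by hashing the top few stack frames; this sketch is a generic heuristic, not ClusterFuzz's exact algorithm:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Bucket a crash by its top N frames rather than the full trace, so the
// same bug reached through different entry points dedupes together.
std::size_t crash_bucket(const std::vector<std::string>& frames,
                         std::size_t top_n = 3) {
    std::string key;
    for (std::size_t i = 0; i < frames.size() && i < top_n; ++i)
        key += frames[i] + "|";
    return std::hash<std::string>{}(key);
}
```

Production systems also normalize frames (inlined functions, sanitizer wrappers, offsets) before hashing, which this sketch omits.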
ClusterFuzz: Sample Testcase Report
Scale: catch regressions before stable
●OSS-Fuzz: Large-scale Linux cluster on GCP
●ClusterFuzz supports Win/Android/Mac,
though these are less relevant for fuzzer unit tests
●Auto-scale based on project criticality, new fuzzers,
coverage changes, roadblocks, etc
●~77% of all bugs are regressions
~100k cores
Community
Scaling Research through Collaboration
OSS-Fuzz: continuous fuzzing for OSS
●Finds Heartbleed in a few seconds
●Project integration in <100 LoC
●Focus on automation, ease-of-use for
resource-constrained OSS devs
●1.2K Projects, 12K vulns, 91% fix rate
●Follows Google 90 day disclosure policy
OSS-Fuzz Rewards: fueling a Safer OSS
Initial integration: Up to $5,000
Fuzz targets need to be checked into their upstream repository and integrated into the build system with sanitizer support.
Projects are accepted by the OSS-Fuzz team based on their criticality, e.g. >=0.7 criticality score, or if they are used as part of critical infrastructure and/or have a large user base.
Ideal fuzzing integration: Up to $15,000, based on the following criteria:
○The upstream development process has CIFuzz enabled to fuzz all pull requests.
○The fuzzing coverage is at least 50% across the entire project, and targets are efficient.
○At least 2 reported bugs are fixed.
○Discretionary bonus to recognize outstanding work.
Fuzzing Research: Lost in the Noise
Evaluating Fuzz Testing
George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, Michael Hicks
…Such new ideas are primarily evaluated experimentally so an important
question is: What experimental setup is needed to produce trustworthy results?
We surveyed the recent research literature and assessed the experimental
evaluations carried out by 32 fuzzing papers. We found problems in every
evaluation we considered. We then performed our own extensive experimental
evaluation using an existing fuzzer. Our results showed that the general problems
we found in existing experimental evaluations can indeed translate to actual
wrong or misleading assessments.
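Among the paper's recommendations: run many trials and compare fuzzers with a nonparametric test such as Mann-Whitney U. A sketch of the U statistic (p-value computation omitted):

```cpp
#include <cassert>
#include <vector>

// U for sample a vs sample b: the number of pairwise comparisons that a
// wins, counting ties as half a win. Inputs are per-trial results, e.g.
// distinct bugs found by each fuzzer across repeated runs.
double mann_whitney_u(const std::vector<double>& a,
                      const std::vector<double>& b) {
    double u = 0.0;
    for (double x : a)
        for (double y : b)
            u += (x > y) ? 1.0 : (x == y ? 0.5 : 0.0);
    return u;
}
```

A real evaluation would convert U to a p-value (e.g. via the normal approximation) and report an effect size such as Vargha-Delaney A12, which is simply U divided by the number of pairs.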
Fuzzer Benchmarking: FuzzBench and Magma
FuzzBench (initially coverage-based) vs Magma (bug-based)
FuzzBench: community benchmarking service
●Foster innovations beyond afl / libFuzzer
●Understand capability differences of
current fuzzing engines
●Zero-cost research experiments
●Diverse, real-world OSS-Fuzz benchmarks
●Fully reproducible results
●Code coverage and bug based evals
●Support for private experiments
FuzzBench: impact stories (e.g. AFL++)
Preregistration-based publication process
Stage 1: Evaluate for novelty and
significance of idea / approach.
Authors submit a full paper, including
a detailed description of the
methodology to be used to obtain the
study results, as well as preliminary
results demonstrating the feasibility of
the approach, minus the results of
the proposed study.
Stage 2: Validate agreed methodology
and correct interpretation of results.
Authors submit the full paper, including
the results of their study and
non-design related revisions if any.
AI-powered Fuzzing
The Next Frontier of Bug Hunting
The formidable barrier: code coverage wall
“After weeks or months of continuous testing, fuzzing
can hit an unexpected plateau, limiting the ability to find
critical vulnerabilities in unexplored code paths”
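The plateau is easy to detect mechanically; in this sketch (the window and threshold are my own arbitrary choices, not from the talk) a campaign is flagged once recent measurements add almost no edges:

```cpp
#include <cassert>
#include <vector>

// Given periodic snapshots of total covered edges, report a plateau when
// growth over the last `window` snapshots falls below a small fraction
// of the current total.
bool plateaued(const std::vector<int>& edges_over_time,
               std::size_t window = 5, double min_growth = 0.001) {
    if (edges_over_time.size() < window + 1) return false;
    int now = edges_over_time.back();
    int then = edges_over_time[edges_over_time.size() - 1 - window];
    return now - then < min_growth * now;
}
```

Detecting the wall is the easy half; the talk's point is that breaking through it, e.g. with better fuzz targets, is where AI assistance comes in.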
FuzzIntrospector: Interesting functions to fuzz
Function name | Function source file | Accumulated cyclomatic complexity | Code coverage
tinyxml2::XMLElement::ShallowClone(tinyxml2::XMLDocument*) | /src/tinyxml2/tinyxml2.cpp | 115 | 0.0%
tinyxml2::XMLDocument::LoadFile(char const*) | /src/tinyxml2/tinyxml2.cpp | 112 | 0.0%
tinyxml2::XMLElement::SetAttribute(char const*, char const*) | /src/tinyxml2/tinyxml2.h | 106 | 0.0%
tinyxml2::XMLPrinter::VisitEnter(tinyxml2::XMLElement const&, …) | /src/tinyxml2/tinyxml2.cpp | 104 | 0.0%
tinyxml2::XMLDocument::LoadFile(_IO_FILE*) | /src/tinyxml2/tinyxml2.cpp | 102 | 0.0%
tinyxml2::XMLElement::FindOrCreateAttribute(char const*) | /src/tinyxml2/tinyxml2.cpp | 102 | 0.0%
tinyxml2::XMLElement::BoolText(bool) const | /src/tinyxml2/tinyxml2.cpp | 101 | 0.0%
tinyxml2::XMLElement::QueryBoolText(bool*) const | /src/tinyxml2/tinyxml2.cpp | 99 | 0.0%
tinyxml2::XMLDocument::SaveFile(char const*, bool) | /src/tinyxml2/tinyxml2.cpp | 92 | 0.0%
tinyxml2::XMLElement::Int64Text(long) const | /src/tinyxml2/tinyxml2.cpp | 91 | 0.0%
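The table above suggests a simple prioritization; this sketch ranks functions by complexity weighted by uncovered fraction (the scoring formula is illustrative, not FuzzIntrospector's actual metric):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct Fn {
    std::string name;
    int complexity;   // accumulated cyclomatic complexity
    double coverage;  // fraction of the function already covered, 0..1
};

// Highest score first: lots of unreached complexity means an interesting
// candidate for a new fuzz target.
std::vector<Fn> rank_targets(std::vector<Fn> fns) {
    std::sort(fns.begin(), fns.end(), [](const Fn& a, const Fn& b) {
        return a.complexity * (1.0 - a.coverage) >
               b.complexity * (1.0 - b.coverage);
    });
    return fns;
}
```

On the tinyxml2 data above, the zero-coverage, high-complexity functions like ShallowClone and LoadFile would rank first.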
OSS-Fuzz-Gen: LLM-powered fuzzing framework
OSS-Fuzz-Gen loop: an existing OSS-Fuzz project supplies a function signature plus project context to the LLM, which produces fuzz targets; OSS-Fuzz-Gen builds and fuzzes them, extracts compilation errors and runtime crashes from the raw build and runtime logs, feeds those back to the LLM, and then builds and evaluates the refined fuzz targets.
OSS-Fuzz-Gen: tinyxml2 case study
Fuzz target #1: +11.12% coverage
Fuzz target #3: +3.54% coverage
#include <string>
#include "tinyxml2.h"
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  std::string data_string(reinterpret_cast<const char*>(data), size);
  tinyxml2::XMLDocument doc;
  doc.Parse(data_string.c_str());
  return 0;
}
OSS-Fuzz-Gen: early impact on 160+ OSS projects
Future of Fuzzing
Trends and Challenges
Fuzzing: Open Challenges
Coverage-guided AI Testing
Find reproducible cases of unexpected behavior in AI models (e.g. prompt injection)
LLM-powered Fuzz Target Writing
Given a project's source code, use the AI model to generate new, efficient fuzz targets
LLM-powered Fuzzer Generator
Given a project's source code, use the AI model to suggest code that can generate valid testcases
Thank you!
We look forward to collaborating
closely with you on fuzzing research