AI-Fuzzing: From Stochastic Chaos to Directed Precision
EthCC [9] · April 2026
It was an absolute pleasure speaking at the EthCC main conference on AI-fuzzing in Cannes 🇫🇷. With AI security moving forward rapidly, I shared my prototype and methodology for combining AI and fuzzing techniques to further improve our security measures.
Fuzzing should no longer be stochastic: we combine AI with fuzzing to turn security testing into directed precision.
From Stochastic Chaos to Directed Precision: watch the talk on YouTube.
What I covered in my talk
1) Why does random fuzzing fail on smart contracts?
- Each parameter adds a dimension to the search space, so most randomly generated paths never reach the target execution. Multi-contract dependencies make this worse fast.
- Even with today's advanced LLM static analysers, we cannot reach a specific target execution without a directed fuzzing technique.
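To make the dimensionality point concrete, here is an illustrative back-of-the-envelope calculation (my own, not from the talk): if each guard on a path accepts only a narrow window of a uint256 input, the probability that blind random mutation satisfies all guards at once shrinks geometrically with the number of parameters.

```python
def hit_probability(guard_widths_bits, input_bits=256):
    """Probability that uniformly random inputs satisfy every guard,
    assuming each guard accepts a window of 2**w values out of the
    2**input_bits possible values of its parameter."""
    p = 1.0
    for w in guard_widths_bits:
        p *= 2.0 ** (w - input_bits)
    return p

# One guard accepting a 2**16-value window of a uint256 parameter:
p1 = hit_probability([16])            # 2**-240, already hopeless
# Three such guards on three independent parameters:
p3 = hit_probability([16, 16, 16])    # 2**-720
```

Each added parameter multiplies in another vanishingly small factor, which is why undirected mutation essentially never reaches deep targets.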
2) What is the core framework for understanding smart contract exploits?
- Functions + Parameters = Value Extraction
- The fuzzer needs to know what to call and how to call it.
3) How do we combine LLM and fuzzing?
A) Taint analysis
By building a semantic knowledge graph of the contract(s) and scoring risk per variable or storage slot, we can output a fuzzing schedule that targets the highest-suspicion paths first, prioritizing paths or even constructing suspected action sequences.
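A minimal sketch of the scheduling idea, with hypothetical names throughout (the graph shape, the `writes` map, and the `risk` scores stand in for the knowledge graph and the LLM's per-slot suspicion output):

```python
# Knowledge graph reduced to its simplest projection:
# function -> storage slots it writes (hypothetical example contract).
writes = {
    "deposit":  ["balance"],
    "setRate":  ["rate"],
    "withdraw": ["balance", "rate"],
}

# Assumed LLM-assigned suspicion score per storage slot.
risk = {"balance": 0.4, "rate": 0.9}

def schedule(writes, risk):
    """Order candidate entry points by the total risk of the slots
    they touch, so the fuzzer spends its budget on the most
    suspicious paths first."""
    scored = [(sum(risk[s] for s in slots), fn) for fn, slots in writes.items()]
    return [fn for _, fn in sorted(scored, reverse=True)]
```

Here `schedule(writes, risk)` puts `withdraw` first because it touches both suspicious slots; a real schedule would rank call sequences rather than single functions.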
B) Integration of external call and long-call chain tracer
Problem: long call chains with external calls. The $120M Balancer exploit lived deep in the chain batchSwap() → … → onSwap() → … → _swapGivenOut() → … → _upscale().
- Existing fuzzers handle this kind of complexity poorly, particularly across external call operations (Vault → composable pool contract, for example).
- Our LLM tracer maps caller→callee parameter dependencies so mutations stay semantically valid across contract boundaries.
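One way to picture such a dependency map (a sketch under assumed names; the function names echo the Balancer chain above, the scaling factor and derivation rules are invented for illustration): when the fuzzer mutates a top-level parameter, dependent inner-call parameters are recomputed rather than mutated independently, so the cross-contract call stays consistent.

```python
SCALING = 10 ** 12  # e.g. a 6-decimal token upscaled to 18 decimals

# Each entry: caller parameter -> list of (callee, parameter, derivation)
# the tracer inferred from source. All names here are hypothetical.
dependencies = {
    ("Vault.batchSwap", "amount"): [
        ("Pool.onSwap",   "swapAmount", lambda a: a),
        ("Pool._upscale", "scaled",     lambda a: a * SCALING),
    ],
}

def propagate(call, param, value):
    """Given a mutated caller parameter, return the concrete values every
    dependent callee parameter must take for the chain to stay valid."""
    out = {(call, param): value}
    for callee, cparam, derive in dependencies.get((call, param), []):
        out[(callee, cparam)] = derive(value)
    return out
```

Mutating `amount` once then fans the new value out through the chain, instead of letting each hop drift into a semantically invalid combination.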
C) Complex data generation
Bytes and calldata inputs are opaque to random mutation.
- The talk demonstrates handling complex calldata types using the MoonHacker exploit ($318K) as an example.
- The LLM reverse-engineers the encoding logic from source and generates a script that produces well-formed calldata as a fuzz input.
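For intuition, here is the shape such a generated encoder script might take (an assumed sketch, not the MoonHacker artifact): ABI-encode a static `(address, uint256)` call so the fuzzer mutates meaningful fields instead of raw opaque bytes. The selector is hardcoded because real selectors are `keccak256(signature)[:4]` and the Python standard library has no keccak.

```python
def encode_uint256(value: int) -> bytes:
    """One 32-byte big-endian ABI word."""
    return value.to_bytes(32, "big")

def encode_address(addr: int) -> bytes:
    """A 20-byte address left-padded to a 32-byte ABI word."""
    return addr.to_bytes(20, "big").rjust(32, b"\x00")

def build_calldata(selector: bytes, addr: int, amount: int) -> bytes:
    """Well-formed calldata for a static (address, uint256) function:
    4-byte selector followed by two 32-byte words."""
    assert len(selector) == 4
    return selector + encode_address(addr) + encode_uint256(amount)

# Hypothetical selector; mutate addr/amount, keep the framing intact.
data = build_calldata(b"\xaa\xbb\xcc\xdd", addr=0xDEAD, amount=10 ** 18)
```

Dynamic types (`bytes`, arrays, nested tuples) add offset words and length prefixes on top of this, which is exactly the structure random mutation destroys and a generated encoder preserves.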
D) Dynamic revert feedback (a significant advancement for fuzzers)
Problem: stochastic execution reverts constantly, and most fuzzers simply discard the reverted result, wasting the signal.
Solution:
- We extract the raw trace, decode the revert, and send it to the LLM.
- We obtain actionable input bounds.
- We guide the fuzzer's next mutation accordingly.
- The fuzzer learns from failures in real time.
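A minimal sketch of that feedback loop: decoding the standard `Error(string)` revert payload is real Solidity ABI (selector `0x08c379a0`, then offset, length, and the UTF-8 message); the bound-extraction step here is a trivial regex standing in for the LLM call.

```python
import re

ERROR_SELECTOR = bytes.fromhex("08c379a0")  # Error(string)

def decode_revert(data: bytes) -> str:
    """Decode a standard Error(string) revert payload into its message."""
    assert data[:4] == ERROR_SELECTOR
    strlen = int.from_bytes(data[36:68], "big")  # word 2: string length
    return data[68:68 + strlen].decode()

def bounds_from_message(msg: str):
    """Stand-in for the LLM step: pull a numeric upper bound if stated."""
    m = re.search(r"max (\d+)", msg)
    return (0, int(m.group(1))) if m else None

# Construct a sample payload as a contract would emit it.
msg = b"amount exceeds max 1000"
payload = (ERROR_SELECTOR
           + (32).to_bytes(32, "big")        # offset to string data
           + len(msg).to_bytes(32, "big")    # string length
           + msg.ljust(32, b"\x00"))         # padded string bytes
```

Feeding `bounds_from_message(decode_revert(payload))` back into the mutator constrains the next inputs to the range the contract just told us about, instead of sampling blindly again.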
4) Visualized result
Our talk features a well-visualized result: chaotic CFG → path reduction → a single directed path (in yellow) hitting the exact vulnerable segment. From stochastic chaos to directed precision.
There are always other innovative ways of combining AI with traditional software testing techniques, for example:
- LLMs to generate specifications (Yang et al., 2025: arxiv.org/abs/2506.09550)
- Generating SMT solver hints, or a lightweight LLM-based symbolic execution engine (Li et al., 2025: arxiv.org/abs/2505.13452)
- Generating invariants; unit test generation (Xu et al., 2025: arxiv.org/abs/2506.02943)
- Property generation (Xiong et al., 2026: arxiv.org/abs/2604.13463)
- Bug oracles; standalone LLM CI/CD integration during code development; LLM vulnerability hunters with automatic PoC generation in Foundry.
I am always happy to discuss AI integrations from any security perspective. Let me know if you have any thoughts, and if you want the full narrative, watch the talk: AI-Fuzzing: From Stochastic Chaos to Directed Precision.