SafeLibs
SafeLibs builds memory-safe (Rust) reimplementations of critical, load-bearing C/C++ libraries used throughout open-source infrastructure, while attempting to preserve drop-in compatibility at compile time and runtime.
Mission
- Reimplement widely used C/C++ libraries in Rust for memory safety.
- Preserve C ABI and behavioral compatibility so existing consumers can relink without source changes.
- Keep ports practical for production by retaining performance characteristics.
Priorities
- Drop-in compile-time and runtime compatibility.
- Performance.
- Memory safety.
Non-goal: Maintainability
Long-term maintainability of each individual translated codebase is not a primary goal. The working model is retranslating from upstream as the original projects evolve (eventually, maybe nightly!), then re-validating compatibility and performance. In fact, we will NOT accept code PRs to the ported libraries themselves: we don't know the codebases well enough to reason about malicious patches, etc. We'll happily accept issues against the repo with an example of the failing workflow, then sic our agents on them, though!
Pipeline
This repo uses a four-stage pipeline driven by Juvenal (https://github.com/zardus/juvenal), a workflow manager named after the Roman poet — riffing on "who watches the watchmen?" Each stage uses one or more Juvenal workflows to achieve and verify its goals despite agentic laziness, and each successful stage produces a git tag in the respective repo.
Juvenal leans on a behavioral quirk of coding agents: they will happily cut corners on their own work, but have no incentive to cover for another agent's shortcuts. After every implementation step, several fresh-context validation agents check the result against different criteria; anything missing bounces back to the implementer, dozens of times if necessary. The Port stage in particular doesn't share a workflow across libraries — a planning agent drafts and iteratively refines a library-specific workflow before any porting starts, because no single template fits every C library.
The full pipeline definitions live in the pipeline repo.
Recon
Pull the original source (via Ubuntu's source packages) and existing CVEs, so the port has historical context for known non-memory issues.
Setup
Prepare the source for porting, rewriting tests to use public library APIs (so they survive a clean reimplementation) and adding new ones, both directly against the library and through dependent applications.
Port
Do the Rust port. A planning agent first builds a library-specific workflow, then implementation and validation agents execute and check it step by step.
Test
Exercise the port against additional client applications and the validator suite.
Project Structure
Each target library lives in its own port-LIBNAME repo in the https://github.com/safelibs org. This website lives in the https://github.com/safelibs/safelibs.github.io repo, and the pipeline itself lives in https://github.com/safelibs/pipeline.
Port Status
As of April 21, 2026, the SafeLibs org has 16 library port-* repositories that pass the validator proof at port-04-test, plus the shared port-template repository. They are all private for now, so this public site deliberately does not show a live scoreboard yet. Once the verification artifacts are ready to expose, this section will show actual compatibility results instead of vibes.
Port Effort Stats
Per-library token spend, agent time, and unsafe-block counts across the 16 validating port-* repos. The validator at https://safelibs.github.io/validator/site-data.json drives this list, and rows are filtered at site-build time so the table follows live validator results. In total: 4,243 sessions, 9.45B tokens, 597.0 agent-hours from March 27 to April 11, 2026 UTC, with a stage-token split of 0.98B recon, 1.54B setup, 4.27B port, 2.67B test.
Stage columns follow each repo's 01-recon/02-setup/03-port/04-test tags; sessions past the last completed tag count toward the next in-progress stage, which is why ports still in earlier stages show partial column coverage. Agent time is the sum of per-session wall time, so parallel sessions add up as parallel agent-hours.
The unsafe columns split each port's unsafe { ... } blocks two ways: ABI unsafe is forced by the C surface (functions taking *const T/*mut T, extern "C" functions, or unsafe fn exposed across the FFI boundary), and Other unsafe is everything else — transmutes, raw allocator handoff, intrinsics, static mut, etc. Across all 16 validating ports: 9,944 blocks total, 8,490 (85.4%) ABI and 1,454 (14.6%) other. The ABI share is largely a tax for staying drop-in compatible with existing C consumers; relaxing that constraint (e.g. for a Rust-only consumer story or a non-ABI-stable distro target) would let a future pipeline drop a large fraction of those blocks.
| Library | Completed stage | Sessions | Total tokens | Recon tokens | Setup tokens | Port tokens | Test tokens | Agent time | Calendar span | Total unsafe | ABI unsafe | Other unsafe |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| cjson | 04-test | 121 | 292.5M | 3.8M | 77.1M | 93.7M | 117.8M | 18.1h | 17.1h | 5 | 5 | 0 |
| giflib | 04-test | 202 | 281.5M | 2.5M | 56.2M | 121.4M | 101.4M | 22.7h | 22.2h | 522 | 510 | 12 |
| libarchive | 04-test | 294 | 731.3M | 8.3M | 88.9M | 376.8M | 257.2M | 37.3h | 35.0h | 574 | 348 | 226 |
| libbz2 | 04-test | 157 | 237.0M | 2.3M | 31.5M | 139.4M | 63.7M | 18.1h | 16.8h | 83 | 3 | 80 |
| libcsv | 04-test | 96 | 121.4M | 4.4M | 11.7M | 40.1M | 65.1M | 11.0h | 10.1h | 116 | 81 | 35 |
| libexif | 04-test | 840 | 986.7M | 3.1M | 117.4M | 149.9M | 716.3M | 73.8h | 92.7h | 1,205 | 1,165 | 40 |
| libjansson | 04-test | 219 | 330.8M | 2.9M | 38.1M | 231.7M | 58.1M | 28.6h | 69.6h | 805 | 790 | 15 |
| libjpeg-turbo | 04-test | 255 | 804.2M | 2.5M | 91.0M | 406.2M | 304.6M | 72.3h | 57.7h | 79 | 1 | 78 |
| libjson | 04-test | 143 | 328.7M | 2.6M | 50.1M | 109.0M | 167.0M | 23.3h | 21.9h | 104 | 93 | 11 |
| liblzma | 04-test | 120 | 470.7M | 2.7M | 33.5M | 357.6M | 76.9M | 21.2h | 19.1h | 296 | 24 | 272 |
| libsodium | 04-test | 136 | 276.7M | 6.8M | 22.0M | 190.4M | 57.5M | 18.9h | 18.2h | 532 | 497 | 35 |
| libtiff | 04-test | 204 | 577.5M | 6.4M | 23.6M | 478.2M | 69.3M | 30.9h | 27.3h | 333 | 327 | 6 |
| libuv | 04-test | 532 | 1,489.1M | 918.0M | 84.8M | 292.1M | 194.3M | 80.6h | 101.2h | 3,410 | 3,024 | 386 |
| libwebp | 04-test | 154 | 390.8M | 2.9M | 26.9M | 215.8M | 145.1M | 20.9h | 20.4h | 194 | 133 | 61 |
| libxml | 04-test | 463 | 1,757.6M | 6.1M | 749.3M | 884.8M | 117.4M | 85.2h | 75.5h | 1,560 | 1,425 | 135 |
| libyaml | 04-test | 307 | 376.3M | 1.9M | 34.0M | 187.2M | 153.3M | 34.1h | 37.9h | 126 | 64 | 62 |
Other Efforts
- DARPA's TRACTOR program (Translating All C To Rust) is the broader DoD-funded push behind agentic C-to-Rust translation, and a conceptual ancestor of work like this.
- The "ralph loop" pattern at https://github.com/snarktank/ralph is a popular minimal harness for keeping agents on-task across long jobs; SafeLibs uses a structured planning + validation pipeline instead, but starts from the same observation that agents quit early on big tasks.
Compatibility Contract
A completed SafeLibs port should provide:
- Binary-compatible exported symbols.
- C headers compatible with existing consumer builds.
- Equivalent runtime semantics for valid inputs.
- Upstream test-suite parity plus consumer integration validation.
Verification Philosophy
SafeLibs verification is clean-room by design:
- Run baseline tests against the original Ubuntu C library package.
- Purge original runtime/dev packages from the test environment.
- Install SafeLibs-generated .deb replacements.
- Re-run the exact same tests and consumer checks.
If a port still forwards to the original C implementation, the replacement stage fails after purge.
FAQ
Why Rust instead of some other memory-safe language?
▐▛███▜▌ Claude Code
▝▜█████▛▘ Optimus 9000.1 · Claude ALL_YOUR_MONEY_PLAN
▘▘ ▝▝ /home/YOU
────────────────────────────────────────────────────────────────────────────────
❯ Change this repository from Rust to YOUR_LANGUAGE_HERE
────────────────────────────────────────────────────────────────────────────────
How many tokens does this take?
Any answer to this question will be meaningless, because it will be obsolete in a few weeks and I don't want to update this README that often.
Do you guarantee I won't get hacked?
rofl
In all seriousness, no one but the AI has ever looked at this code, and that goes for both the actual library reimplementations and the pipeline itself. Even if the libraries are perfect (and I'm sure they're not!), the best they'll do is protect against some memory safety issues in the original library. The actual programs using these libraries can still be vulnerable, or other libraries can be vulnerable, or the unsafe parts of these libraries can still be vulnerable, or there could be new and exotic non-memory errors in these libraries, or these libraries might set your computer on fire and turn your AI agent against you. I don't use these things, and you'd be crazy to, but they're there if you want to try them out!
Are these libraries correct?
These libraries pass adapted test cases from the original projects, and work in a drop-in manner with at least one client application. Beyond that, who knows!
But should I use these libraries?
If your AI agent still runs in a sandbox, you should probably not use these libraries. If your agent's sandbox is off, you have already made the leap!
Why does this pipeline use codex and not claude?
Unfortunately, despite Claude's dominance, Codex gives FAR more tokens in their top plan, so that's what we're using here. If you harness it brutally enough, like it's harnessed here, it sometimes works!
How are the agents harnessed?
See the Pipeline section above: a four-stage, Juvenal-driven pipeline (Recon, Setup, Port, Test) in which fresh-context validation agents re-check every implementation step and bounce shortfalls back to the implementer until the stage's criteria are met.
What about upgrading to new library versions?
Each port is a regenerable artifact — the workflow can be rerun from scratch against a new upstream release. There's also a dedicated upgrade stage in the pipeline that's intended to be cheaper than full retranslation, though it hasn't been battle-tested yet.
Will these work outside of Ubuntu?
The current focus is Ubuntu drop-in replacements via apt, which forces strict ABI compatibility and shapes a sizable chunk of the unsafe-block count. Distributions with stricter, content-addressed semantics (like Nix) would be a better fit for agent-generated replacements; if you're interested in extending coverage there, get in touch.
How do I report a port bug?
We can't accept code patches against the ported libraries — we don't audit the generated Rust closely enough to reason about adversarial PRs. We do accept reproducer testcases against the validator repo — the Test stage picks those up and re-runs the affected port until it passes.