90,000 Commits, One Developer: The OpenClaw Story

Key insights
- Steinberger built OpenClaw essentially alone using AI agents, something he describes as impossible for any single human before AI coding tools
- He warns against 'the agentic trap': spending time optimizing your AI setup instead of actually building things
- PRs are becoming 'prompt requests' where the intent matters more than the code itself
This article is a summary of Builders Unscripted: Ep. 1 - Peter Steinberger, Creator of OpenClaw. Watch the video →
In Brief
Peter Steinberger, the Austrian developer behind the PDF framework PSPDFKit, spent 13 years running his company before burning out and stepping away from tech. When he came back, AI coding tools changed everything. In this first episode of OpenAI's "Builders Unscripted" series, Steinberger sits down with Romain Huet to tell the story of OpenClaw, an open-source personal AI agent that started as a WhatsApp bot and grew into a global phenomenon with its own conference, a Wall Street Journal mention, and 2,000 open pull requests. According to the video description, this conversation was recorded before Steinberger joined OpenAI.
From burnout to builder
Steinberger created PSPDFKit, a PDF framework (a code library that other apps use to display and edit PDFs) for mobile apps, and ran the company behind it for 13 years. By the end, he was burned out. He describes it plainly: he had been running at full speed for over a decade without knowing how to manage the toll (4:24).
He followed tech news during his break but says nothing really clicked until he felt the urge to build again. The breakthrough came when he fed an unfinished project into AI tools. He took the entire codebase, turned it into a single Markdown file (a plain text format with basic formatting), dragged it into Gemini Studio (Google's AI tool) to generate a spec (a written description of what the software should do), and then handed that spec to Claude Code (Anthropic's AI coding tool) (6:36).
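The "entire codebase into a single Markdown file" step can be sketched in a few lines. This is my illustration of the idea, not Steinberger's actual script: walk a project directory and emit each source file as a heading plus a fenced code block, ready to paste into an AI tool.

```python
from pathlib import Path

FENCE = "`" * 3  # triple backtick, built indirectly so this snippet stays a valid fence itself

def codebase_to_markdown(root: str, extensions=(".py", ".js", ".ts")) -> str:
    """Concatenate every matching source file under `root` into one
    Markdown string: a heading per file, then its contents in a fence."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            rel = path.relative_to(root)
            body = path.read_text(encoding="utf-8")
            parts.append(f"## {rel}\n\n{FENCE}\n{body}\n{FENCE}\n")
    return "\n".join(parts)
```

A real version would likely skip build artifacts and binary files, but the core move is just this flattening: one file the model can read end to end.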
The result was rough. The model told him it was "100% production ready," and it crashed immediately (7:02). But after hooking up Playwright (a browser automation tool) to let the model check its own work, things started working within an hour. Steinberger describes this as the moment that changed everything: not because the output was good, but because the process felt like a leap forward (7:30).
How OpenClaw went from playground to phenomenon
OpenClaw didn't start with a grand plan. Steinberger wanted a personal AI agent that could access his WhatsApp and handle tasks for him. He built a prototype, then paused, expecting the major AI labs to ship something similar. They didn't (9:06).
By November 2025, he had built the first version that would become OpenClaw. Getting the prototype running took less than an hour because, as he puts it, "you just yield things into existence" (9:43).
The real test came during a weekend trip to Marrakesh, where spotty internet made WhatsApp the most reliable channel. Steinberger found himself using the bot constantly: translating, finding restaurants, looking things up on his computer remotely. When he showed it to friends, they immediately wanted it (10:24).
To show a wider audience what the bot could do, he dropped it into a public Discord (a group chat platform) channel with no security and let people interact with it in real time. He debugged OpenClaw with OpenClaw while people watched (14:16).
The community took off from there. ClawCon, a community-organized event in San Francisco, drew roughly 1,000 attendees (1:20). A Vienna meetup had 300 people signed up before it even happened (1:46). The Wall Street Journal wrote about the project (0:32).
The voice message that proved AI can problem-solve
One story from the episode stands out. Steinberger sent his bot a voice message through WhatsApp. He hadn't programmed the bot to handle audio (10:52).
The bot replied anyway.
When he asked the model how it managed, the answer revealed a chain of problem-solving steps: WhatsApp had delivered the voice message as a file with no file extension. The model examined the file header (the first few bytes that identify a file's type), identified it as Opus (an audio codec, a way of encoding sound), and used FFmpeg (a tool for processing audio and video) to convert it. It then needed to transcribe the audio to text but didn't have Whisper (OpenAI's speech-to-text model) installed locally. So it searched the environment, found an OpenAI API (Application Programming Interface, a way for programs to talk to each other) key, and used cURL (a command-line tool for transferring data) to send the file to OpenAI's transcription service. Then it replied (10:52).
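The first step in that chain, identifying a file with no extension by its header bytes, is easy to illustrate. The sketch below is mine, not the bot's actual code: WhatsApp voice notes are typically Opus audio inside an Ogg container, which announces itself with an "OggS" magic number and an "OpusHead" marker.

```python
def sniff_audio_type(data: bytes) -> str:
    """Guess an audio container from the first bytes of a file
    (the 'file header'), the way file(1) or libmagic does."""
    if data[:4] == b"OggS":
        # Ogg container; voice notes carry Opus inside it, signalled
        # by the "OpusHead" marker in the first Ogg page.
        return "ogg/opus" if b"OpusHead" in data[:64] else "ogg"
    if data[:3] == b"ID3" or data[:2] == b"\xff\xfb":
        return "mp3"
    if data[:4] == b"RIFF" and data[8:12] == b"WAVE":
        return "wav"
    return "unknown"
```

The remaining steps the model took would follow from this result: convert with FFmpeg (for example to a format the transcription service accepts), then POST the file to OpenAI's transcription endpoint with cURL and the API key found in the environment.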
Steinberger points out that people were alarmed that the bot used his API key. His response: he put the key in the environment specifically for the bot to use. The bot operated within its intended scope (12:22).
90,000 contributions and zero team
Huet highlights Steinberger's GitHub profile during the conversation: 90,000 contributions across more than 120 projects in a single year (18:22). The activity graph starts sparse early in the year, turns light green through summer, and goes dark green around October and November, which Steinberger says is when he switched to Codex (OpenAI's agentic coding tool) (18:43).
His GitHub shows contributions to over 120 projects, of which around 40 are ones he built himself. About half of those feed into OpenClaw (8:16).
When people try to reach OpenClaw's HR department or ask to speak with the CEO, Steinberger laughs. There is no team. He calls it "just me hacking from my cave" (17:33). He has brought on maintainers (trusted developers who help manage the project) and receives pull requests (PRs, proposed code changes) from contributors, but the core of the project was built by one person with AI agents.
His assessment is direct: "This would not have been possible by any one human" (17:48).
| Metric | Value |
|---|---|
| GitHub contributions | ~90,000 in one year |
| Projects built | ~40, about half feeding into OpenClaw |
| Open PRs on OpenClaw | ~2,000 |
| ClawCon attendance | ~1,000 |
| Vienna meetup signups | 300 |
The "agentic trap" and how to avoid it
Steinberger warns about what he calls "the agentic trap": the tendency to spend time optimizing your AI setup instead of actually building things (20:06). He admits he fell into it too.
His advice is to keep things simple. He doesn't use worktrees (isolated copies of a codebase for parallel work). He just maintains numbered checkout directories and focuses on the actual problems (21:33).
The core of his workflow is conversation. He describes it as something different from pair programming. He tells the model what he wants, then asks: "Do you have any questions?" (20:26). He says this single question is critical because models are trained to solve problems immediately, which means they make assumptions. Those assumptions aren't always right, especially since models are trained on a lot of older code (20:40).
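This "ask before you build" pattern can be baked into how a task prompt is assembled. The sketch below is illustrative (the wording and function names are mine, not from the episode): state the goal, then explicitly block the agent from starting until its assumptions are surfaced.

```python
def build_task_prompt(task: str, context: str = "") -> str:
    """Wrap a task description in the framing described in the episode:
    state the goal, then invite questions before any code is written."""
    parts = [f"Goal: {task}"]
    if context:
        parts.append(f"Context: {context}")
    parts.append(
        "Before writing any code, do you have any questions? "
        "List your assumptions and do not start until they are confirmed."
    )
    return "\n\n".join(parts)
```

The point is not the wrapper itself but the habit it encodes: forcing the model to externalize assumptions that would otherwise be silently baked into the output.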
He also pushes back on the term "vibe coding." His view is blunt: "Vibe coding is a slur" (19:05). His argument is that working with AI agents is a genuine skill, like picking up a new instrument. You won't be good on the first day, and dismissing the whole approach after a bad first experience misses the point (19:08).
When PRs become prompt requests
With 2,000 open PRs on OpenClaw (23:49), Steinberger has had to rethink how code review works. He calls them "prompt requests" rather than pull requests, because the intent behind the change matters more than the code itself (24:04).
His review process starts by asking the model: do you understand the intent of this PR? He doesn't read the code first. He reads the idea. Many contributors don't have the full system in their heads, so their changes tend to be local fixes that might not fit the broader architecture (24:38).
He notes a paradox: reviewing external PRs often takes longer than building the feature himself, because he trusts the model not to be malicious more than he trusts an unknown contributor (24:10). Still, he re-implements features rather than rejecting contributions outright, and he credits the original contributor to build community (26:14).
Most code, he says, is boring: it transforms one shape of data into another. For that kind of work, he watches the model's output stream and checks it against his mental model. If the two match roughly, the code ships (22:51).
Security, open source, and growing pains
The security side of OpenClaw has been a source of tension. Security researchers gave the project a CVSS (Common Vulnerability Scoring System, a standard for rating security flaws) score of 10.0, the maximum (28:37).
Steinberger says the context matters. The web server was originally built for local debugging and was only meant to be accessible within a trusted network. But because the project is designed to be tinkered with and customized, users put it on the open internet despite documentation saying not to (27:53).
He has since brought on a dedicated security expert and shifted focus toward preventing users from harming themselves with the tool (28:47).
On the model side, Steinberger keeps a file called mysoul.md that defines his values, personality, and how the agent should operate. He keeps the contents secret (15:00). When he dropped the bot into a public Discord, people tried prompt injection (a technique where users try to trick an AI into ignoring its instructions) to extract the file. The model refused. Steinberger acknowledges prompt injection remains unsolved, but says the latest models handle it better than people expect (14:53).
The sandboxing story is telling. Steinberger's bot runs on his Mac Studio, which it proudly calls "The Castle" (16:33). When he moved it into a bare Docker container (an isolated lightweight environment for running software) with almost nothing installed, the model found a C compiler (a tool that turns programming code into runnable software) and built its own version of cURL from scratch so it could access the web (16:55).
How to interpret these claims
Survivor bias (when we only hear the success stories)
Steinberger's story is remarkable, but it's a single data point. One developer building a viral open-source project with AI tools does not prove the method works for everyone. Steinberger has 13 years of experience running a software company, deep domain knowledge in developer tooling, and a high public profile that helped the project gain attention. Developers without that background may have very different results.
What 90,000 contributions actually means
GitHub contribution counts include commits, issues, pull requests, and code reviews. A high number does not by itself tell us about code quality, maintainability, or long-term viability. Steinberger mentions that he ships code he doesn't always read closely and optimizes his codebase for agent productivity rather than human readability. Whether that approach scales as the project grows and adds more contributors remains an open question.
OpenClaw is still very early
The project went from personal playground to global phenomenon in weeks. It has 2,000 open PRs, a CVSS 10.0 security rating from researchers, and a creator who describes it as something he built for one-on-one use. The gap between a passionate community and an enterprise-ready product is real, and Steinberger himself acknowledges the tension between making something "mom can install" and keeping it "fun and hackable" (26:55).
Practical implications
For developers new to AI tools
Steinberger's advice is to start playfully. Build something you've always wanted to build. Don't over-optimize your setup. Talk to the model like a coworker and always ask, "Do you have any questions?" before letting it start (30:12). He references a quote attributed to Nvidia CEO Jensen Huang: "You're not going to be replaced by AI, you're going to be replaced by someone who uses AI" (30:31).
For open-source maintainers
The "prompt request" framing is worth watching. As more contributors use AI to generate code, the traditional PR review process may need to shift from reading code to reading intent. Steinberger's approach of crediting contributors while re-implementing their changes is one model for managing this transition.
For experienced developers
The transition from expert in one domain to beginner in another is what Steinberger calls "not hard, but painful" (5:43). AI tools can bridge that gap by letting broad architectural knowledge translate into working code, even in unfamiliar languages or frameworks. But the skill of directing AI agents still requires investment. Steinberger's gut feeling for how long a prompt will take came from months of practice, not a single breakthrough moment.
Glossary
| Term | Definition |
|---|---|
| Agentic coding | A way of building software where an AI agent writes the code while the developer directs the overall plan and architecture. |
| API (Application Programming Interface) | A way for programs to talk to each other. When the bot needed to transcribe audio, it called OpenAI's API to use their speech-to-text service. |
| CVSS (Common Vulnerability Scoring System) | A standard scale for rating how serious a security flaw is, from 0 (harmless) to 10 (critical). |
| cURL | A command-line tool for transferring data over the internet. Often used to call APIs or download files from a terminal. |
| Docker container | An isolated lightweight environment for running software. Like a sealed room where a program runs without affecting the rest of the computer. |
| FFmpeg | A widely used open-source tool for processing audio and video files. It can convert between formats, extract audio, and more. |
| Opus | An audio codec (a way of encoding sound). WhatsApp uses it for voice messages. |
| PR (Pull Request) | A proposed code change submitted to a project. Other developers review it before merging it into the main codebase. |
| Prompt injection | A technique where users try to trick an AI into ignoring its instructions by hiding commands in their input. |
| Sandboxing | Running software in an isolated environment to limit what it can access. If something goes wrong, the damage stays contained. |
Sources and resources
Want to go deeper? Watch the full video on YouTube →