OpenClaw vs. Claude: I Was Wrong, and Maybe Jensen Huang Was Too

Adam Rutkowski
March 23, 2026
5 min read
ai-agents · openclaw · claude · automation · product

A few days ago I published a post about OpenClaw skills — the ones that matter versus the ones that are just API wrappers. I stood behind the thesis that real skills are backed by real infrastructure, and I still do.

But I need to correct something about what I built around that thesis.


What I Built

After Jensen Huang’s GTC keynote — where he compared OpenClaw adoption to Linux and called it “the operating system for personal AI” — I did what builders do. I built.

I spun up a dedicated VPS. Hardened it. Configured a secure OpenClaw server with authentication, SSL, and network isolation. Published two skills on ClawHub — one for investigative intelligence, one for general document ETL and entity extraction. I wrote the blog post. I tested it with an autonomous agent that ran an entire investigation end-to-end. The agent gave it an A.

The infrastructure worked. The skills worked. The thesis held up.

And then I replaced all of it.


What Actually Happened

I had a separate problem to solve: a monitoring effort I needed to stand up. I needed a system that would continuously run across a variety of verticals, score each finding for relevance, deduplicate similar results, and surface only the ones relevant to me, all without me touching it.

The obvious approach would have been to build another OpenClaw skill: wire up the relevant connectors, a scoring agent, and a database writer, then run it on the OpenClaw server I’d already built.

Instead, I wrote a Python script that fetches the relevant data, stores it in a database, and exits. I wrote a shell script that chains the fetch, checks for unscored findings, and invokes my personal AI to score and promote them. I added one line to my crontab:

17 */2 * * * /source_directory/scripts/run_cycle.sh
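The fetch step really is that small. Here is a sketch of what a script like mine looks like; the table layout and field names are illustrative, not my actual schema, and the source list is left empty as a placeholder:

```python
#!/usr/bin/env python3
"""fetch.py sketch: pull new findings, store them, exit."""
import json
import sqlite3
import urllib.request

DB_PATH = "findings.db"
SOURCES = []  # fill in real endpoints, e.g. "https://api.example.com/feed"


def ensure_schema(conn):
    conn.execute(
        """CREATE TABLE IF NOT EXISTS findings (
               id TEXT PRIMARY KEY,   -- source-provided ID
               title TEXT,
               body TEXT,
               score REAL             -- NULL until the scoring pass runs
           )"""
    )


def fetch_source(url):
    # Expected payload: a JSON list of {"id", "title", "body"} objects.
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


def store(conn, items):
    # INSERT OR IGNORE makes repeat runs idempotent: known IDs are skipped.
    conn.executemany(
        "INSERT OR IGNORE INTO findings (id, title, body) VALUES (?, ?, ?)",
        [(i["id"], i["title"], i["body"]) for i in items],
    )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(DB_PATH)
    ensure_schema(conn)
    for url in SOURCES:
        store(conn, fetch_source(url))
    conn.close()
```

The `INSERT OR IGNORE` is what lets cron re-run this every two hours without accumulating duplicates of the same source IDs.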

Every two hours, the system fetches new findings. My personal AI scores them against my criteria, deduplicates similar stories, and promotes anything above the threshold to a review queue. When I have 15 minutes, I start a conversation with my personal AI and say “what’s new?” The AI pulls the promoted findings, auto-fetches the relevant information for the high-scoring ones, summarizes each with context and a recommended action, and drafts correspondence for review if I ask for it.
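The "deduplicates similar stories" step, incidentally, doesn’t need an LLM at all. A rough standard-library sketch, where the 0.85 similarity threshold is an arbitrary illustration rather than my tuned value:

```python
from difflib import SequenceMatcher


def dedupe(findings, threshold=0.85):
    """Drop findings whose title is near-identical to one already kept.

    `findings` is a list of dicts with a "title" key; `threshold` is the
    minimum similarity ratio treated as a duplicate (illustrative value).
    """
    kept = []
    for f in findings:
        is_dup = any(
            SequenceMatcher(None, f["title"].lower(), k["title"].lower()).ratio()
            >= threshold
            for k in kept
        )
        if not is_dup:
            kept.append(f)
    return kept
```

Cheap string similarity catches the obvious repeats before the LLM ever sees them, which keeps the scoring pass small.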

No OpenClaw server. No skill registry. No authentication layer. No VPS. A cron job, a shell script, and my personal AI.

It took an afternoon to build. It’s been running for days. It works.


Why This Matters

The OpenClaw ecosystem is real. The skill registry is growing. NemoClaw will bring enterprise security controls. Jensen Huang isn’t wrong that agents need infrastructure behind them.

But for the majority of agent use cases I’ve encountered as a builder — and I include my own production automation in this — the OpenClaw layer adds complexity without adding capability.

Here’s what I mean. My monitor needs to:

  1. Fetch data from APIs on a schedule
  2. Score findings with an LLM
  3. Store results in a database
  4. Present summaries on demand

None of these require an OpenClaw server. fetch.py calls the APIs. run_cycle.sh invokes the LLM with a prompt that includes database access. Cron handles scheduling. The database handles persistence. That’s it.
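The score-and-promote pass is equally plain. A sketch with the LLM call abstracted behind a callable you supply (in my setup that callable wraps the personal-AI invocation; the table layout and 0.7 threshold here are hypothetical):

```python
import sqlite3

PROMOTE_THRESHOLD = 0.7  # illustrative cutoff


def score_and_promote(conn, score_fn, threshold=PROMOTE_THRESHOLD):
    """Score unscored findings and flag high scorers for the review queue.

    `score_fn(title, body)` returns a relevance score in [0.0, 1.0];
    in practice it wraps an LLM call with the scoring-criteria prompt.
    Returns the number of findings scored this pass.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS findings "
        "(id TEXT PRIMARY KEY, title TEXT, body TEXT, "
        " score REAL, promoted INTEGER DEFAULT 0)"
    )
    rows = conn.execute(
        "SELECT id, title, body FROM findings WHERE score IS NULL"
    ).fetchall()
    for fid, title, body in rows:
        score = score_fn(title, body)
        conn.execute(
            "UPDATE findings SET score = ?, promoted = ? WHERE id = ?",
            (score, int(score >= threshold), fid),
        )
    conn.commit()
    return len(rows)
```

Because only `WHERE score IS NULL` rows are touched, each cron cycle pays LLM cost for new findings only.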

The OpenClaw server I built was solving a problem that didn’t need solving. It added an authentication layer, a transport protocol, a skill definition format, and a server process, all between my automation and the LLM that was already available on the same machine. It also costs tokens to run, and it can’t ride on an existing subscription to the AI service of your choosing.


The Cowork Moment

Then Anthropic shipped Dispatch on Claude Desktop Cowork (it appears under the same name in the Claude mobile app). One persistent conversation thread, accessible from your phone or desktop. Assign tasks to Claude from your phone while you’re at the gym. Claude works on your desktop using local files, connectors, and tools. Results are waiting for you when you get back.

When I saw that announcement, I realized the direction is toward simpler and more trusted, not toward more complexity, and not toward a service like OpenClaw with its known security vulnerabilities. The value isn’t in skill registries and protocol layers. It’s in giving an LLM access to your actual infrastructure — your files, your databases, your APIs — and letting it work.

That’s exactly what my cron setup does, minus the mobile interface. The LLM has access to the database and a prompt that defines the scoring criteria; it runs, scores, and writes results. No intermediary.


Where OpenClaw Still Matters

I want to be precise about what I’m saying. OpenClaw matters for a specific use case: when you need to expose infrastructure to agents you don’t control.

My Ingestigate skills on ClawHub are a good example. When someone else’s agent — one I’ve never seen, running on infrastructure I don’t manage — needs to search a document corpus, extract entities, or traverse a relationship graph, it needs a standardized interface. OpenClaw provides that. The skill definition, the authentication, the transport protocol — all of it exists so that an unknown agent can safely interact with my platform.

That’s a real problem, and OpenClaw solves it. But when both the agent and the infrastructure are yours, any locally running LLM can handle the same interaction directly, given the right guidance, structure, and setup. When I’m building automation for my own systems, the OpenClaw layer is overhead. A cron job and a prompt get the job done faster, with fewer moving parts and fewer security concerns, since I manage the infrastructure myself.


The Correction

In my previous post, I wrote that “every company needs an OpenClaw strategy.” I was echoing Jensen Huang. NemoClaw benefits Nvidia, and NemoClaw builds on OpenClaw, so it’s natural for that company’s CEO to make that declaration. But that doesn’t make it right for everyone. I still believe he was right about the direction: people and companies will keep looking for ways to benefit from automation, and that includes AI agents.

But “every company” was too broad. Many companies will need an automation strategy. They will need agents that do useful work on their infrastructure. They will not necessarily need an OpenClaw server to make that happen.

OpenClaw is the right tool for some situations. But if your use case is automating your own workflows, you probably need nothing more than cron, a good prompt, and a database. The rest is optional. And if you control the entire setup, you have more control over the security.

I built both. The OpenClaw server was a good exercise. The local implementation is what I will actually use.

Ingestigate’s OpenClaw skills are available on ClawHub — CorpusGraph and Ingestigate — and the source is on GitHub (CorpusGraph, Ingestigate) for teams that want to connect their own agents.


For more on what Ingestigate does for AI agents that need real document infrastructure, see the Agents page. For the original analysis of OpenClaw’s skill ecosystem, read the first post.