readyforagents.ai

Are you ready
for agents?

Most companies aren't blocked by AI capability.
They're blocked by their own chaos and a skills gap nobody prepared for.

This page is also available in 🇩🇪 German.


The model isn't the problem.
Your business is.

For years, the AI conversation has been about model capability. Is it smart enough? Can it reason? Will it hallucinate? Those questions are mostly answered, or under control once models operate inside proper agent harnesses. Today's models can do the work. What they can't do is operate inside a business that was never designed for them.

Agents are sophisticated amnesiacs. Drop one into a typical company and it hits a wall immediately. Not because it lacks intelligence, but because the environment it needs to operate in doesn't exist.

You cannot automate a mess.


Agents aren't chatbots.
And they're already inside your company.

Today's agents don't just answer questions; they act. Claude Cowork writes documents, analyzes data, and manages workflows. Perplexity Computer builds and runs sophisticated workflows that can last for hours, even months. Coding agents like Claude Code and Codex write, test, and ship production code autonomously.

Knowledge workers who aren't using these tools are already competing at a disadvantage. And many who do have access still use them like chatbots: asking a question here, translating a paragraph there, instead of delegating real work. The shift from "ask it something" to "give it a job" is harder than it sounds. That gap widens every month.

But here's what early adopters are finding out about actually using agents:

Delegation. Using agents well suddenly requires a skill most roles never demanded: clear delegation. Communicating intent, specifying expected outcomes, providing an explicit definition of done. These are management skills, and now every knowledge worker needs them, whether they've ever managed anyone or not.

Pace. Tools like Claude Cowork and Perplexity Computer handle their own orchestration well: sub-agents, parallel execution, that part works. But each task takes time, and that idle time invites you to spawn the next one. And the next. Before long, you're juggling fifteen tasks across five unrelated projects, and the tools are so capable that you become the bottleneck. That kind of agentic multitasking takes real practice, can be surprisingly draining, and is a skill most teams are still figuring out on their own.

Your people need new skills to work with agents. But that's only half of it. Your organization needs to be ready too.


Agents need three things most businesses don't have.

The biggest one is legibility: how much of your business is actually visible to an agent. But agents also need the ability to act.

Let's break it down.

01

Your data exists. Agents can't reach it.

Agents don't browse dashboards. They don't log into your CRM and click around. They need data exposed through APIs, structured formats, or queryable interfaces, not locked behind GUIs designed for human eyes.
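To make "queryable" concrete, here is a minimal Python sketch. The data, field names, and figures are invented for illustration; in practice an interface like this would sit behind an HTTP endpoint or a database view. The point is the shape: parameters in, structured data out, no screen involved.

```python
import json

# Toy stand-in for data locked inside a CRM. A dashboard renders this
# for human eyes; an agent needs the same data structured and queryable.
# All names and figures are illustrative.
REVENUE = [
    {"quarter": "Q2", "region": "EMEA", "revenue_eur": 1_200_000},
    {"quarter": "Q3", "region": "EMEA", "revenue_eur": 1_380_000},
]

def query_revenue(quarter: str) -> str:
    """A queryable interface: filter by parameter, return structured JSON."""
    rows = [r for r in REVENUE if r["quarter"] == quarter]
    return json.dumps({"period": quarter, "rows": rows})
```

An agent can call this, parse the JSON, and move on. It cannot do the equivalent with a chart rendered in a browser tab.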

02

Agents act through APIs. Do yours have any?

An agent can only take action in systems that let it. No API means no writes, no updates, no automation. Just observation at best.

Modern SaaS tools usually have this covered. Legacy systems, internal tools, and heavily customized ERPs often don't. Finding these gaps is step one of any readiness audit.
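The read/write gap can be sketched in a few lines, using an imaginary ticketing system (nothing here models a real product's API). The legacy version only renders state for humans; the modern one adds a method an agent can actually call.

```python
class LegacyTicketSystem:
    """GUI-only: an agent can scrape what it sees, but cannot change anything."""
    def __init__(self) -> None:
        self._status = "open"

    def render_dashboard(self) -> str:
        # The only output: a string meant for human eyes.
        return f"Ticket #42 - status: {self._status}"

class ModernTicketSystem(LegacyTicketSystem):
    """Adds a write API: now an agent can act, not just observe."""
    def update_status(self, status: str) -> str:
        self._status = status
        return self._status
```

A readiness audit is, in large part, an inventory of which of your systems look like the first class and which look like the second.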

03

If it's not written down, agents can't follow it.

This is the hardest one. In most companies, essential pieces of critical business logic aren't documented anywhere. They live as tribal knowledge: "ask Sarah," "it depends," "we've always done it that way."

A human hire can shadow Sarah for three weeks and absorb it. An agent cannot. If a procedure doesn't exist as a text artifact (a document, a spec, a checklist), it doesn't exist for the agent.
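What "exists as a text artifact" can look like in practice: a short standard operating procedure, written down once, versioned, and owned. This fragment is hypothetical; every name in it is invented.

```
SOP: Quarterly revenue report (owner: finance, v1)

1. Pull the quarter's figures from the finance system (service account, read-only).
2. Compare against the previous quarter; a delta above 15% needs human review.
3. Render the PDF from the brand template.
4. Email stakeholders and post a confirmation in Slack.

Escalation: anything ambiguous goes to the finance lead, in writing.
```

Five lines like these are worth more to an agent than three weeks of shadowing Sarah, because the agent can actually read them.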


Agents need what good managers need:
clarity.

What people in the trenches report consistently: deploying agents isn't just a technology problem. It's a management problem.

When you delegate to a human, they read the room and fill gaps with common sense. Agents don't have that intuition. Effective delegation to an agent requires something most employees have never had to do: specify exactly what "done" looks like.

Not "it looks good." Not "you'll know it when you see it." Done means: a state in a database, a generated file, a passed checklist, a sent confirmation. Binary. Verifiable. Testable.
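"Binary. Verifiable. Testable." can be taken literally: done is a predicate you can evaluate, not a feeling. A minimal sketch, with hypothetical file names:

```python
from pathlib import Path

def report_is_done(pdf_path: Path, confirmation_sent: bool) -> bool:
    """Done as a checkable predicate: the file exists, is non-empty,
    and the confirmation actually went out. Binary, verifiable, testable."""
    return pdf_path.is_file() and pdf_path.stat().st_size > 0 and confirmation_sent
```

If you can't write the predicate, the task isn't specified yet, for an agent or for a new hire.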

Delegating to a human vs. delegating to an agent:

Brief
  Human: "Hey, can you handle the quarterly report? You know the drill."
  Agent: Generate Q3 revenue summary from api/v1/revenue?period=Q3, format as PDF using template reports/quarterly.tmpl.

Data source
  Human: "Check with finance, they'll know where the numbers are."
  Agent: GET /api/finance/metrics — authenticate via service account reports-bot.

Done criteria
  Human: "Just make it look good. You'll know when it's ready."
  Agent: PDF generated using brand template reports/quarterly.tmpl, stored at /reports/Q3-[date].pdf, email sent to stakeholders, Slack confirmation posted.

Edge cases
  Human: "Use your judgment. If anything weird comes up, ask Sarah."
  Agent: If revenue delta >15% from Q2: flag for review, do not auto-send. If data source returns 4xx: retry 3x, then escalate.

Escalation
  Human: "If something feels off, use your judgment on when to loop me in."
  Agent: If any step fails or data looks anomalous (defined in rules/escalation.yaml): create ticket in Linear project OPS, assign to @finance-lead, attach error log.
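The "retry 3x, then escalate" rule above can be sketched as a small wrapper. Here, fetch and escalate are stand-ins for your real API call and ticketing integration; nothing in this sketch is a real library API.

```python
def run_with_escalation(fetch, escalate, retries: int = 3):
    """Try fetch() up to `retries` times; on repeated failure, hand the
    last error to escalate() (e.g. create a ticket, attach the log)."""
    last_error = None
    for _ in range(retries):
        try:
            return fetch()
        except Exception as err:  # e.g. a 4xx from the data source
            last_error = err
    escalate(last_error)
    return None
```

The rule itself is trivial; the discipline is deciding it in advance and writing it down, so the agent never has to improvise.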

The companies that win in the agentic era aren't the ones with the most powerful models. They're the ones that have disciplined their organizations enough to write the work down.


I've been doing AI research and building production systems since 2016.
I know where things break.

From 2016 to 2024, I ran the AI Technology Lab at NIM, the Nuremberg Institute for Market Decisions. Over eight years, I built AI systems across computer vision, speech synthesis, social robotics, voice-based AI interviewing, synthetic respondents, and autonomous market research workflows. I still collaborate with NIM on research projects.

In 2024 I was chief founding engineer at ZML, building a high-speed AI inference framework in Zig and MLIR, squeezing maximum performance out of NVIDIA, AMD, Google TPU, and AWS Trainium hardware. As close to the metal as it gets.

Since 2025 I've been running my own AI research company. I build production agentic systems and the infrastructure they run on, including an AI agent built from scratch, no frameworks, that acts as my virtual co-founder and CFO. It runs my business daily.

2016 — 2024

NIM

AI Technology Lab Lead. Computer vision, speech synthesis, embodied conversational agents, voice-based AI interviewing, synthetic respondents, autonomous market research workflows. Ongoing research collaboration.

2024

ZML

Chief Founding Engineer. High-speed AI inference in Zig and MLIR, squeezing maximum performance on NVIDIA, AMD, Google TPU, and AWS Trainium hardware.

2025 —

AI Research & Technology Lab

Own company. Production agentic systems, research tools, and an AI co-founder running the business alongside me.

I'm not a consultant who learned AI. I'm an AI researcher and engineer who can help you figure out where you stand, and how to move forward in the agentic era.


I help you become agent-ready.

Not by selling you a platform. Not by deploying a chatbot and calling it AI. I meet you where you are, whether that's getting your teams up to speed or laying the groundwork for custom agent solutions.

Concretely, that can mean helping your teams get the most out of tools like Claude Cowork or coding agents, identifying gaps in process documentation, extending your infrastructure with API layers, or building custom agents. It depends on how far you want to take it.

Start a readiness conversation: rene@technologylab.ai

No pitch deck. No sales funnel. Just a conversation about where you stand.

Get in touch