Inside the Tiny AI Revolution

Kristi Cantor

Real Wins from Danielson Labs

There’s a war happening in AI right now, and most people are fighting the wrong battle.

Everyone’s obsessed with bigger. Bigger models. More parameters. Smarter agents. The pitch is always the same: “Our AI can handle anything.”

Meanwhile, at P3 Adaptive, something different is brewing. Our president, Kellan Danielson, has been locked in what we’re now calling “Danielson Labs” (basically a skunkworks operation that was never officially founded; it just happened), and he’s going the other direction entirely.

And winning.

The Tiny Jobs Philosophy

Here’s what Kellan figured out: AI doesn’t need to run the whole show. It just needs to do the one thing regular code can’t.

That’s it. That’s the insight.

(You can stop reading now. But you won’t, because the breakdown below is where this gets good.)

You don’t need a massive language model making every decision in your workflow. You need it making one decision. The fuzzy one. The one that requires understanding context or interpreting messy human input. Then you hand control back to normal, reliable, deterministic code.

This isn’t a limitation. It’s a strategy.

What This Actually Looks Like

Let me show you what I mean with real applications that Rob Collie has built.

Example 1: The Fantasy Football Commissioner

Rob runs multiple fantasy football leagues here at P3 Adaptive (I’m not doing well this season). The social part matters: you need a water cooler to gather around, or the whole thing’s just scoreboard watching.

Rob’s first attempt: Let an LLM browse ESPN’s website, figure out what’s happening, and write weekly recaps.

Total disaster. The LLM spent all its energy just trying to navigate web pages. Click links. Wait for pages to load. Parse HTML. It burned through kilowatt-hours of GPU time doing something that should’ve taken seconds. And it barely worked.

Rob’s second attempt, post-Danielson Labs: Regular code hits ESPN’s public API. Grabs all the stats. Drops them in a local database. Zero AI involved.

Then the LLM comes in. Queries the database. Writes the emails. That’s its only job: turn structured data into entertaining prose.

Three hours to build. Works perfectly. And when there was a bug (triple-counting stats), it was just regular code doing regular code things. Fixed in five minutes.
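
If you want the shape of that second attempt in code, here’s a minimal sketch. The endpoint, table, and llm client are stand-ins, not Rob’s actual build; the point is that deterministic code fetches and stores, and the model only ever sees clean rows.

```python
# A minimal sketch of the pipeline shape, not Rob's actual code.
# The endpoint, table schema, and llm client are illustrative stand-ins.
import sqlite3
import requests

def fetch_scores(league_id: int, week: int) -> list[dict]:
    # Deterministic step: a plain HTTP call to a stats API. No AI involved.
    url = f"https://fantasy.example.com/api/leagues/{league_id}/scoreboard"
    resp = requests.get(url, params={"week": week}, timeout=30)
    resp.raise_for_status()
    return resp.json()["matchups"]

def store_scores(db: sqlite3.Connection, matchups: list[dict]) -> None:
    # Deterministic step: structured data lands in a local database.
    db.executemany(
        "INSERT INTO matchups (week, home, away, home_pts, away_pts) VALUES (?, ?, ?, ?, ?)",
        [(m["week"], m["home"], m["away"], m["home_pts"], m["away_pts"]) for m in matchups],
    )
    db.commit()

def write_recap(db: sqlite3.Connection, week: int, llm) -> str:
    # The one fuzzy step: hand the LLM clean rows and ask for entertaining prose.
    rows = db.execute(
        "SELECT home, away, home_pts, away_pts FROM matchups WHERE week = ?", (week,)
    ).fetchall()
    prompt = f"Write a funny weekly recap email for these fantasy results: {rows}"
    return llm.complete(prompt)  # llm is whatever client you already use
```

Notice where a triple-counting bug would live: in the SQL and the loops, not inside the model. That’s why it was a five-minute fix.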

Example 2: The Candidate Screener

Hiring right now? It’s a free-for-all.

Job seekers use AI to apply to 500 positions. Hiring managers get buried in applications where 90% of the answers sound identical because they are identical—three or four different LLMs answering the same questions over and over.

“Tell us about yourself.” Guaranteed the word “curious” appears by word six. “Continuous learning” shows up somewhere. Half the applicants are apparently volunteering with underprivileged youth to teach them technology.

None of it’s real. But you can’t tell from one application. You need volume to spot the pattern.

Rob built a custom screening agent. Regular code pulls applications from the system. Regular code checks basic qualifications. Regular code flags the joke question everyone’s missing (AI always misses it).

Then one tiny AI call per candidate: Does this answer sound authentic, or is it the same generic response we’ve seen 200 times?

That’s it. The AI doesn’t make hiring decisions. It doesn’t “score” candidates. It just flags the ones worth a human’s time.

Result: 400+ applications whittled down to about 20 real people giving real answers. The ones we can actually evaluate fairly.
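
Here’s roughly what that looks like as code. The field names and the llm client are made up for illustration; what matters is where the one AI call sits relative to everything else.

```python
# A rough sketch of the screening pattern; field names and the llm client are stand-ins.
def screen_candidates(applications: list[dict], llm) -> list[dict]:
    worth_a_look = []
    for app in applications:
        # Deterministic checks first: plain rules, no AI.
        if not app.get("meets_basic_requirements"):   # pre-computed by regular code upstream
            continue
        if not app.get("caught_the_joke_question"):   # the tell that a human actually read the form
            continue

        # The one tiny AI call: a single narrow judgment on authenticity.
        verdict = llm.complete(
            "Does this answer read like a real person, or like a generic "
            "AI-written application? Answer AUTHENTIC or GENERIC.\n\n"
            + app["free_text_answer"]
        )
        if "AUTHENTIC" in verdict.upper():
            worth_a_look.append(app)

    # No hiring decision, no score: just a shortlist for a human to read.
    return worth_a_look
```

The model answers one narrow question per candidate. The qualifications check, the joke question, and the actual hiring call all stay with code and humans.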

Why Tiny Wins

Here’s what happens when you keep AI’s job small:

  • It’s fast. One focused AI call takes seconds. You’re not waiting for some agent to “think through” your entire process.
  • It’s testable. You can check if AI categorized the email correctly. You can’t easily check if AI “handled customer service well.”
  • It’s fixable. When AI screws up one specific task, you fix that one thing. When AI’s responsible for everything, you’re debugging a black box.
  • It’s composable. You can chain tiny AI calls together inside larger workflows. Each one does its job. Code stitches them together. You get power without chaos.
  • It’s actually reliable. Because you’re not asking AI to make judgment calls about stuff it can’t see. You’re asking it to do the one thing humans are bad at: processing repetition without losing their minds.

The Anti-Agent Approach

I know what you’re thinking. “But everyone’s building agents now. Agents that can do whole tasks end-to-end.”

Yeah. And most of them are unreliable.

Because agents are trying to be autonomous. They’re trying to figure out the whole problem, make all the decisions, handle all the exceptions.

That’s a great demo. It’s a terrible product.

What we’re building is different. It’s not “let AI run free.” It’s “let AI do the one thing that needs AI, then immediately hand control back to something predictable.”

This isn’t sexy. You can’t make a viral video about “AI that outputs category codes.”

But you can build tools people actually trust. Tools that work on Tuesday the same way they worked on Monday.

Smaller Is the Strategy

You might not believe it, but making AI useful isn’t about making it smarter. It’s about making the job simpler.

Not because AI isn’t capable. Because simplicity scales. Because one well-defined task is worth ten vague tasks that expect the model to guess what you meant.

Every time we break a problem into tiny pieces, the same thing happens. People ask, “That’s it? That’s all the AI does?”

Yep. And here’s why that matters. The AI part is intentionally small because the real power comes from what we connect it to. We tie that tiny AI skill into Power BI, Fabric, and Power Automate so it kicks off real workflows, updates real systems, and drives real decisions.

Then we ship it. And it works. Day after day. No meltdowns. No hallucinations. No “well, it worked yesterday.”

One small piece of AI turns into a massive win because everything around it is built to move.

That’s when they get it.

The Compound Effect

Here’s the kicker: once you’ve got one tiny AI job working, you add another. Then another.

Each one’s small. Each one’s reliable. But you stack enough of them inside a workflow, and suddenly you’ve automated something that used to take hours.

Not with one giant AI agent. With twenty tiny AI calls, each doing exactly one thing, wrapped in rock-solid code that handles everything else.
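
In code, that stacking looks something like this. It’s a hypothetical sketch (the categories, ticket shape, and llm client are invented), but it shows the pattern: each AI call is one narrow job, and ordinary code carries the workflow between them.

```python
# Illustrative only: the categories, ticket shape, and llm client are stand-ins.
def handle_inbound_email(raw_email: str, llm) -> tuple[dict, str]:
    # Tiny AI job #1: classify the fuzzy input into a known category.
    category = llm.complete(
        "Classify this email as BILLING, SUPPORT, or SALES. Reply with one word.\n\n" + raw_email
    ).strip().upper()
    if category not in {"BILLING", "SUPPORT", "SALES"}:
        category = "SUPPORT"  # deterministic fallback when the model goes off-script

    # Everything between the AI calls is plain, testable code.
    ticket = {"category": category, "body": raw_email, "status": "open"}

    # Tiny AI job #2: draft a reply from structured data; a human still hits send.
    draft = llm.complete(f"Draft a short, friendly acknowledgement for this ticket: {ticket}")
    return ticket, draft
```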

That’s the revolution.

Not “AI replaces humans.” Not “AI runs your business.”

Just: AI does the fuzzy parts. Code does the deterministic parts. Humans check the results and make the calls that actually need judgment.

Listen to the episode that inspired this blog!

What We’re Not Doing

We’re not building chatbots that try to sound human.

We’re not building agents that “autonomously manage workflows.”

We’re not promising that AI will “understand your business.”

What we are doing: taking the parts of your process that are tedious, repetitive, and require just a tiny bit of interpretation, and building AI to handle those parts. Then handing everything else back to tools that were working fine already.

It’s not revolutionary tech. It’s revolutionary thinking.

Bigger models aren’t the answer. Smaller jobs are.

Every time we shrink the AI’s responsibility down to one clear task, reliability goes up. Speed goes up. Trust goes up.

Every time someone tries to make AI “do more,” they end up with a system that’s impressive in demos and useless in production.

We’re done being impressed. We’re here to ship stuff that works.

Tiny jobs. Big impact.

That’s the revolution.

If you’re ready for a tiny takeover, pick one tedious task and we’ll turn it into momentum. Schedule a call to get started!

