Garbage in, garbage out: why your AI pilot cohort is everything

You've been burned before. A consultant who promised the world, took the fee, and left you with a report nobody read. Or a software rollout that looked great in the demo and died on contact with real work.

So this time, you are starting with a pilot.  
Smart move.

The people in your pilot matter more than the tool you pick

What not to do?

Fill it with enthusiasts.

Seems like a logical move. Pick the enthusiasts, the lovers of the shiny and new. They'll really want AI to succeed. They'll move fast, stay positive, and the feedback will come back strong.

Except you'll learn nothing useful. All you're really doing is measuring how much eager people enjoy a tool they were already excited about.

What happens when you give it to a sceptic? Or to a busy site manager? You'll have no precedent, and you won't be ready for problems because you won't have encountered any.

Who actually belongs in a pilot?

The best pilot group has three qualities.

1. They're willing and have not been press-ganged into it.

2. They do the same kind of work as most people in the company.

3. They sit inside a workflow where you can actually measure what changes.

A single department, end-to-end, is far more useful than a horizontal slice across the whole business. One team. One manager who can see before-and-after. One set of coherent use cases rather than random tests with people who don't work together.

What to avoid

1. Senior management only. They'll be genuinely interested while they're in the room. Then they'll walk back to their desks and not touch it again for a fortnight. Their data will be thin. Their attention, elsewhere.

2. Younger staff only, on the assumption they'll just figure it out. They won't; not in any way you can measure. They won't have enough lived experience of 'how we do things around here' to judge whether a new way of working will stick.

3. Wildly mixed seniority in the same group. A principal engineer and a junior administrator are operating in entirely different worlds. The group dynamics will distort the data before you've even started analysing it.

4. People who are mid-crisis. A team firefighting a major project deadline will default to what they know. The pilot becomes one more imposition. Results will suffer - not because AI underperformed, but because nobody had the headroom to try.

5. People with a strong stake in the status quo. A long-serving process owner whose professional identity is built around a manual system they designed will comply minimally. Their presence skews everything around them.

Choosing your pilot cohort is like selecting a jury

Fans of The Lincoln Lawyer know the score. He doesn't just take whoever shows up; he studies who's in the room, understands their biases, their openness, their resistance, and constructs a group that will actually surface the truth.

Your pilot cohort works the same way. Get it wrong and you don't get useful data. Stack it with enthusiasts and you'll see great satisfaction scores that tell you nothing about real adoption. Stack it with sceptics and you'll manufacture failure.

The right cohort is a deliberate cross-section: some who love tech, some who are cautious, and some who hate it. Because if AI works for the person who didn't want it to, that's your most powerful internal case study. That's the person their colleagues will believe.

We help you think through who goes in the room before anyone sends a calendar invite.

How we built our pilot process

Before any training begins, we use a voice-activated AI interview tool to take a baseline. We ask your team directly - what's slowing you down? Where does the work pile up? What part of the job quietly drains the day?

It's not so much a survey as a conversation at scale, one that finds the real problems, not just the ones people think you want to hear.

Then we design the course material around what comes back. The workbooks, the sessions, the video recordings - all of it is built around solving the problems your specific team named. Not a generic AI curriculum dropped into your business.

Then we measure again at the end.

Same tool. Same questions. Different answers and a clear, documented picture of what shifted.

That's what before-and-after actually looks like.

An AI pilot is risk management. That's rational.

But a poorly designed AI pilot trades one risk for another by generating data that feels like proof and isn't. That's why it's so important to choose the pilot group carefully. Define success upfront and measure it properly.

Do that, and you'll have a pilot group that actually tells you something worth knowing.

AI optimised summary

A practical guide for construction firms running AI pilots. Covers who to include, who to avoid, and how to design a cohort that produces data you can actually trust - including how the AI Institute measures baseline pain points before training begins and tracks what shifts at the end.
