From pilot to production: what a real warehouse AMR deployment looks like

Most vendor content about autonomous mobile robot deployment reads like a press release: "seamless integration," "rapid deployment," "minimal disruption." What's almost never published is what actually happens. The real sequence of events, the decisions that get made, the things that go unexpectedly well or sideways, and what it genuinely takes to go from an initial conversation to a live autonomous operation.

This post is our attempt to change that. What follows is a week-by-week account of how a Thoro deployment actually unfolds. If you're an operator evaluating whether autonomous systems are right for your facility, this is the most honest account we can give you.

Before week one: the conversation that matters most

The most important thing that happens in a Thoro deployment doesn't happen on-site. It happens in the scoping conversation before any work starts.

The question we're trying to answer isn't "can we deploy here?" We can deploy almost anywhere. The question is: what workflow are we actually solving, and is autonomous technology the right solution for it?

This sounds obvious. It isn't. We've had conversations with operators who wanted to automate a workflow that moves too irregularly to predict, and with facilities where the honest answer was that autonomy wasn't the right fit yet. Getting to that answer before deployment is the whole point of the scoping conversation.

The scoping conversation covers: the specific workflows being targeted (not "pallet transport" but "inbound receiving, zone A to staging, 6AM–2PM shift, approximately 40 moves per hour"), the existing systems the robot will need to interact with, the facility's physical characteristics and rules, the operator team who will work alongside the robot, and what a successful outcome actually looks like in measurable terms.

We come out of this conversation with a deployment hypothesis: if we put this machine in this workflow, we expect to see these outcomes. The entire deployment is a test of that hypothesis.
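To make that concrete, here is one way a deployment hypothesis could be written down as measurable targets and checked against observed pilot data. This is an illustrative sketch, not a Thoro interface; every field name and number below is hypothetical.

```python
# Illustrative sketch of a deployment hypothesis as measurable targets.
# All names and numbers are hypothetical examples, not a real API.
from dataclasses import dataclass


@dataclass
class DeploymentHypothesis:
    workflow: str
    target_moves_per_hour: float
    target_autonomy_rate: float  # fraction of missions needing no intervention


def evaluate(hypothesis: DeploymentHypothesis,
             observed_moves_per_hour: float,
             observed_autonomy_rate: float) -> dict:
    """Return a per-metric pass/fail against the hypothesis."""
    return {
        "moves_per_hour": observed_moves_per_hour >= hypothesis.target_moves_per_hour,
        "autonomy_rate": observed_autonomy_rate >= hypothesis.target_autonomy_rate,
    }


h = DeploymentHypothesis("inbound receiving, zone A to staging", 40.0, 0.95)
print(evaluate(h, observed_moves_per_hour=38.5, observed_autonomy_rate=0.97))
# prints {'moves_per_hour': False, 'autonomy_rate': True}
```

The point isn't the code; it's that every target is a number someone agreed to before the robot arrived, so week six can't turn into an argument about what "working" means.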

Week 1: On-site deployment & operator training

Day 1 - Site mapping, route configuration, operator training

The deployment team arrives on-site, maps the facility, and configures routes and waypoints for the target workflow. Operator training happens the same day: the people who will work alongside the robot learn how it behaves, how to pause and redirect it, and who to call when something looks wrong.

Day 2 - Supervised operation and fine-tuning

The robot runs the live workflow under supervision. The team watches every mission, adjusts routes and speeds where reality differs from the map, and tightens behavior around the places where people and equipment actually move.

Day 3+ - Unit handoff and support

Once the robot is running the workflow reliably, day-to-day control hands off to the facility's own team. Our team stays reachable for support and keeps monitoring remotely as the deployment moves into its first full week.

Trust between human workers and autonomous systems is built through transparency and predictability

We've seen deployments struggle not because the technology failed but because floor staff didn't trust it. In the absence of trust, humans work around the robot in ways that undermine the workflow it was designed to support.

Week 2: Proactive monitoring & optimization

The first full week of operation runs with our deployment team on-site or closely monitoring remotely. The robot is live in the actual workflow, but we're watching every run.

This week is about calibration in the real-environment sense: not reconfiguring sensors, but closing the gap between the deployment hypothesis and operational reality.

Real operations are messier than scoping conversations. The workflow we planned for might run at 40 moves per hour on a theoretical Tuesday, but on the real Tuesday after the deployment goes live, there's a receiving surge that changes the pattern entirely. The charging station position we chose might turn out to be 20 meters farther from the primary workflow zone than optimal, adding three minutes per cycle. A floor area that looked clear on the site plan turns out to be where the incoming freight team parks their pallet jacks for 90 minutes every morning.

None of these things are failures. They're data. A waypoint moved, a charging schedule updated, an exclusion zone refined: small adjustments like these are what translate into long-term performance.

Week 3: Fleet handoff and independent operation

By week three, the system has a track record in the real workflow, and our team steps back. Monitoring moves fully remote, mission starts and exception handling belong to the facility's own operators, and the success metrics defined in scoping become the shared scorecard.

Handoff doesn't mean disappearance. We keep watching performance data, push software updates, and stay on call for support. But the deployment is now the facility's operation, not our project. That's the real test of the original hypothesis: the workflow runs, measurably, without us standing next to it.

What makes deployments fail?

No deployment guide is honest if it doesn't address failure modes. Here's what we've seen go wrong.

Undefined success criteria. Deployments that don't start with clear, measurable success criteria drift. At week six, nobody can agree whether it's working or not because nobody agreed on what "working" meant. We now insist on defining success metrics before deployment begins.

Workflow mismatch. Deploying autonomous systems into a workflow that isn't a good fit. The most common version: deploying robots into workflows with high exception rates. If a robot completes 60% of its missions autonomously and requires human intervention 40% of the time, you haven't automated the workflow. You've made it more complicated.
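As a back-of-the-envelope illustration of that 60/40 example (hypothetical counts, not measured data):

```python
# Hypothetical counts for the 60% autonomy example above.
missions = 100        # missions attempted in a shift
autonomy_rate = 0.60  # fraction completed without a person stepping in

# Every failed mission still needs a human, on top of the overhead of
# coordinating with the robot at all.
interventions = round(missions * (1 - autonomy_rate))
print(interventions)  # 40 of 100 missions still need a person
```

At 40 interventions per 100 missions, the floor team is effectively running two workflows: the manual one they had before, plus a robot to babysit. The autonomy rate that makes a deployment worthwhile is much closer to 100% than most pilots assume.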

Insufficient floor staff engagement. The operator's floor staff are not a passive audience for a technology deployment. Deployments that treat them as obstacles to work around consistently underperform deployments that treat them as the primary audience for how the system is introduced.

Treating the pilot as a proof of concept rather than a production deployment. Pilots deployed with "we'll clean this up later" configurations rarely get cleaned up. They become the production configuration, with all the shortcuts built in. We run pilots at production standards precisely to avoid this.

Get Started

Driven With Vision.

Deployed for Results

Embed Thoro in your platform

Power your AMR with our autonomy stack. Autonomy hardware integration, embedded work, software updates, training, and deployment support included.

Deploy across your operations

From single-site pilots to multi-facility rollouts. See how Thoro solutions fit into your workflows, WMS, and existing fleet.

thoro.ai

Autonomy that fits where you are and scales where you're going

©2026 Thoro.ai. All rights reserved
