01 - Perception & Localization
Visual-SLAM
Knowing exactly where you are
Thoro's multi-modal SLAM architecture fuses 3D LiDAR, depth camera, and IMU data into a highly accurate picture of the robot's position and environment - all on-edge, with minimal infrastructure required. No reflectors. No guide wires.
360° FOV, no blind spots
Full surround awareness (walls, racks, pallets, people) updated faster than human reaction time
Multi-Modal Sensor Fusion
LiDAR handles range and geometry. Camera provides texture and scene context. Together, they eliminate the blind spots each sensor has alone
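One common way to combine two sensors with different noise characteristics is inverse-variance weighting. The sketch below is purely illustrative, assuming hypothetical noise figures (not Thoro's actual sensor specs or fusion pipeline): a precise LiDAR range and a coarser camera-derived range are blended so the more reliable sensor dominates.

```python
# Illustrative sketch only: inverse-variance fusion of two range estimates.
# The variances below are hypothetical examples, not real sensor specs.

def fuse_ranges(lidar_range_m, lidar_var, cam_range_m, cam_var):
    """Fuse two noisy range measurements by inverse-variance weighting."""
    w_lidar = 1.0 / lidar_var
    w_cam = 1.0 / cam_var
    fused = (w_lidar * lidar_range_m + w_cam * cam_range_m) / (w_lidar + w_cam)
    fused_var = 1.0 / (w_lidar + w_cam)   # fused estimate is tighter than either input
    return fused, fused_var

# LiDAR is typically far more precise on range; the camera adds a coarse check.
fused, var = fuse_ranges(4.02, 0.01**2, 4.30, 0.25**2)
```

Because the LiDAR variance is tiny, the fused range stays close to the LiDAR reading while the camera still contributes context the LiDAR lacks.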
3D Obstacle Detection
3D LiDAR lets us identify obstacles with pinpoint accuracy the moment they come into view.
Augmented with AI
Domain-specific models for greater environmental awareness.

Pallet Detection Model

Our highest-performing model. Trained on real warehouse data from our customer workflows. Detects pallet corners, entry faces, and alignment geometry with sub-centimeter accuracy, even with overwrap, partial occlusion, or non-standard load configurations.
Person Detection

Real-time detection of people in the robot's operating zone, including partial occlusion behind racks or pallets.
Ground Plane Detection

Real-time detection of free, drivable space the robot can use to navigate and execute tasks.
02 - Physical AI
03 - Safety-First Approach
Safety built into the architecture
Thoro's safety system isn't a layer on top of the autonomy stack; it's woven through it. From dynamic safety fields that adapt in real time to system-level certification, safety is a first-class design constraint.
Functional Safety
Our autonomy platform is designed and certified to align with internationally recognized ISO and CSA safety standards.
Dynamic Safety Fields
Safety zones that adapt to real-time machine speed, orientation, and direction of travel. Tighter at low speeds, expanded at high speeds. Always ready to slow down.
Redundant Sensor Coverage
No single point of failure. LiDAR, camera, and IMU provide overlapping safety coverage whenever units are operating autonomously.

STOP ZONE
Immediate Halt
WARNING
Reduce Speed
MONITOR
Awareness
Flexible Autonomy
Sensor Suite
Off the Shelf
NVIDIA Autonomy Kit
Lightweight Compute
Thoro Autonomy
Shared Stack
Standardized Interfaces
Universal
Works Across
Pallet Jack
Stacker
Scrubber
Tugger
04 - Speed to Market
CoreFlex is Thoro's next-generation modular autonomy architecture. A flexible, platform-agnostic kit that dramatically compresses the time and cost of bringing a new autonomous product to market. Built on NVIDIA Jetson modules, designed for any form factor, and backed by Thoro's full software and operations stack.
01
No Costly Redesigns
02
World-Class UI Ready to Ship
03
Proven in Production from Day One
05 - One Unified Platform
Fleet management, software intelligence, and cloud connectivity. All unified under one platform, accessible through one interface, compounding with every deployment.
Fleet Management
Every Machine, One View.
Complete fleet visibility across machines, zones, and sites from a single intuitive interface.
Real-time robot status and telemetry
Mission dispatch and override
Workzone liveview with safety events
Fleet-wide performance analytics
OTA map and software updates
Autonomy Software
Intelligence that compounds
SLAM, path planning, fleet orchestration, and AI models - shared across every Thoro-powered machine.
Shared map across all machine types
Multi-robot dynamic path planning
AI models, growing with each deployment
Fleet-wide learning from every mission
Simulation-validated before real-world deployment
Progressive Cloud
Edge-native, Cloud-enabled
Edge-first autonomy means no cloud dependency and no single point of failure. Cloud capabilities layer in progressively as you scale.
100% on-edge autonomous operation
WMS / ERP / WCS API
Bi-directional mission integration
Remote waypoint management
Asynchronous updates - works offline, catches up
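"Works offline, catches up" is typically implemented as an offline-first queue: events are always accepted locally and flushed in order once connectivity returns. The sketch below is a minimal illustration of that pattern; the class name `TelemetryQueue` and its methods are hypothetical, not Thoro's actual API.

```python
# Illustrative sketch only: an offline-first event queue. Events accumulate
# locally while the cloud is unreachable and flush in order on reconnect.
# TelemetryQueue and its methods are hypothetical names, not a real API.

from collections import deque

class TelemetryQueue:
    def __init__(self):
        self._pending = deque()   # durable local buffer (in-memory here)
        self.delivered = []       # stand-in for events received by the cloud
        self.online = False

    def record(self, event):
        """Always accept events; deliver immediately when online."""
        self._pending.append(event)
        if self.online:
            self.flush()

    def set_online(self, online):
        self.online = online
        if online:
            self.flush()          # catch up on everything queued offline

    def flush(self):
        while self._pending:
            self.delivered.append(self._pending.popleft())  # stand-in for a cloud call

q = TelemetryQueue()
q.record("mission_started")       # offline: queued locally, operation continues
q.record("pallet_picked")
q.set_online(True)                # reconnect: queue drains in original order
```

Because the robot never blocks on the cloud call, autonomy keeps running through an outage and the fleet view simply backfills once the link is restored.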