AgiBot X2

AgiBot X2 is a 1.31 m bipedal humanoid with GO-1 embodied AI, 120 Nm joint torque, and up to 30 DoF (Ultra) for service, retail, education, and light industrial applications.
Software Type
None
Software Package
Uses the GO-1 foundation model and the proprietary ViLLA cognitive stack, together with proprietary Xyber-DCU and Xyber-Edge controllers.
Aparobot Readiness Score (ARS)
0
Actuators
Proprietary AgiBot hollow-shaft joint modules. Peak joint torque: 120 Nm. Managed by Xyber-Edge (cerebellum-level) and Xyber-DCU (domain-level) dedicated controllers for low-latency, high-precision joint coordination.
Compute
Uses two RK3588 SoCs as the main compute boards. (Only the X2 Ultra is additionally equipped with an NVIDIA Orin NX delivering 157 TOPS.)
Sensors
Equipped with interactive RGB cameras and a head-touch sensor. Uses a microphone array, wireless microphone, and speaker for audio interaction.
Max Op. time
120
mins

Recent Robot Videos

*Aparobot claims no ownership of videos posted unless otherwise stated.
  1. AgiBot X2 | A Generalist Humanoid Robot masters in human motions (2025-09-26): https://www.youtube.com/watch?v=kBGNvPq0Iyg

Robot Brief

The AgiBot X2, also referred to as the Lingxi X2, is a commercially available bipedal humanoid robot developed by AgiBot and launched in 2025 as the company's primary agile service and interaction platform. Designed to function as a general-purpose embodied agent in commercial, educational, and light industrial environments, the X2 combines a compact 1,310 mm standing height with 25 to 30 degrees of freedom depending on the variant. Proprietary hollow-shaft joint modules deliver a peak joint torque of 120 Nm, and a 500 Wh swappable battery provides approximately 2 hours of operation. It is powered by AgiBot's GO-1 foundation model and ViLLA (Vision-Language-Locomotion-Action) cognitive architecture, enabling autonomous navigation, natural-language interaction, and generalised task execution across unstructured environments. The series is available in two configurations: the base X2 and the higher-specification X2 Ultra.
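The quoted 500 Wh capacity and 2-hour endurance imply a rough average power budget, which can be sanity-checked directly. This is a back-of-the-envelope sketch using only the figures above; real draw varies with gait, compute load, and battery derating:

```python
# Average power draw implied by the spec-sheet figures.
# Assumes the nominal 500 Wh pack and 120 min max runtime.
battery_wh = 500          # swappable pack capacity, Wh
runtime_h = 120 / 60      # quoted max operating time, hours

avg_power_w = battery_wh / runtime_h
print(f"Implied average draw: {avg_power_w:.0f} W")  # → 250 W
```

A ~250 W average is plausible for a 35 kg biped mixing standing interaction with intermittent locomotion, though peak draw during dynamic movement will be far higher.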

Use Cases

  • Autonomous Indoor Navigation and Self-Charging:
    The X2 Ultra navigates independently using LiDAR and RGB-D cameras for SLAM-based localisation and obstacle avoidance, and autonomously docks at a charging station when the battery is low, enabling sustained unattended operation.
  • Multimodal Human-Robot Interaction:
    Engages users through visual facial recognition, voice dialogue via microphone array and speaker, head-touch gesture response, and on-screen display, supporting natural, context-aware service interactions in public-facing roles.
  • Zero-Code Motion Creation via LinkCraft:
    Operators can create and deploy custom motion sequences, including dances, gestures, and task-specific actions, by uploading a human motion reference video to the LinkCraft platform without requiring programming expertise.
  • Generalised Task Execution via GO-1:
    The GO-1 foundation model supports zero-shot task generalisation, enabling the X2 to handle tools and object interactions not explicitly pre-programmed, with task completion performance reported as 30% above prior AgiBot models in controlled evaluations.
  • Dynamic Locomotion and Performance:
    Capable of humanoid gait, complex dance routines, running, and coordinated head-and-body movements at up to 1.8 m/s, enabling use in entertainment, demonstrations, and public engagement contexts.
  • Teleoperation and Secondary Development (X2 Ultra):
    Supports full-body remote control via a handheld controller, and exposes secondary development interfaces for integration with third-party AI systems and custom application development.

Industries

  • Service Robotics & Entertainment:
    The robot's capacity for complex movements (dancing, gestures) and its emotional interaction capabilities position it for commercial showcases, in-store reception, and public-facing roles where engagement is key.
  • Education:
    Used as a programmable humanoid platform for robotics courses, AI demonstrations, and student development projects, with mobile app control and secondary development access available on the X2 Ultra.
  • Industrial and Retail Uses:
    The X2 is aimed at general industrial and personal assistance tasks. Its ability to handle tools and perform zero-shot tasks makes it viable for light manufacturing, component sorting, and inventory management in retail spaces.
  • Embodied AI Development and Data Generation:
    The platform acts as a critical hardware base for testing and validating the GO-1 foundation model, which is central to AgiBot’s strategy for training general AI robots.

Specifications

Length: 210 mm
Width: 460 mm
Height (Rest): -
Height (Stand): 1310 mm
Height (Min): -
Height (Max): -
Weight (with battery): 35 kg
Weight (without battery): -
Max Step Height: -
Max Slope: -
Op. Temp (Min): -10 °C
Op. Temp (Max): 40 °C
Ingress Rating: -

Intro

The AgiBot X2 and X2 Ultra share identical physical dimensions of 1,310 mm (H) x 460 mm (W) x 210 mm (L). The base X2 weighs approximately 35 kg while the X2 Ultra is approximately 39 kg. The X2 provides 25 total degrees of freedom distributed as: 5 DoF per arm (10 total), 3 DoF at the waist, and 6 DoF per leg (12 total), with no dedicated neck DoF. The X2 Ultra expands to 30 DoF by adding 7 DoF per arm (14 total) and 1 DoF for neck articulation. Arm reach in both variants extends to 558 mm excluding the end-effector. The maximum payload capacity is 3 kg in specific static postures, with a continuous full-range payload of 1 kg or below, reflecting the arm's design optimisation for interaction rather than sustained load-bearing. All joints are driven by hollow-shaft actuators delivering a peak joint torque of 120 Nm, with the Xyber-Edge cerebellum controller and Xyber-DCU domain controller managing low-latency joint coordination.
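The degree-of-freedom totals above can be tallied from the per-joint-group figures to confirm they add up for both variants. The dictionary structure below is purely illustrative; the per-side counts come from the paragraph above:

```python
# DoF accounting for both variants, per the breakdown above:
# X2: 5 DoF per arm, 3 waist, 6 per leg, no neck.
# X2 Ultra: 7 DoF per arm, 3 waist, 6 per leg, 1 neck.
x2 = {"arms": 2 * 5, "waist": 3, "legs": 2 * 6, "neck": 0}
x2_ultra = {"arms": 2 * 7, "waist": 3, "legs": 2 * 6, "neck": 1}

print(sum(x2.values()))        # → 25
print(sum(x2_ultra.values()))  # → 30
```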

Both variants are powered by a 500 Wh swappable battery with a recharge time of approximately 90 minutes or less (rated below 1.5 hours at 54.6 V, 10 A), and support input voltages of 100 to 220 V. The main compute platform for both is two RK3588 processors; the X2 Ultra additionally integrates an NVIDIA Orin NX module at 157 TOPS for AI inference workloads. The base X2's perception system includes an interactive RGB camera and a head-touch sensor. The X2 Ultra significantly expands this with front dual RGB cameras, rear RGB, RGB-D depth camera, multi-line LiDAR for 3D SLAM, and 4G/5G cellular connectivity in addition to the base WiFi and Bluetooth. Both variants include a microphone array, wireless microphone, and speaker for voice interaction, along with an interactive display screen and LED lighting effects. OTA updates and mobile app control are supported on both variants; secondary development access and end-effector compatibility (OmniHand and OmniPicker) are exclusive to the X2 Ultra.
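The rated charger output (54.6 V at 10 A) fixes the maximum charging power, from which an idealised constant-power recharge time follows. This lower-bound estimate assumes no conversion losses and no charge-taper near full, so the rated "below 1.5 hours" figure is consistent with it:

```python
# Charger throughput implied by the rated output above.
charger_v, charger_a = 54.6, 10   # rated charger output
battery_wh = 500                  # pack capacity, Wh

charger_w = charger_v * charger_a          # max charging power, W
ideal_hours = battery_wh / charger_w       # idealised recharge time
print(f"{charger_w:.0f} W, ideal recharge ~{ideal_hours * 60:.0f} min")
```

Lithium packs charge at full current only during the constant-current phase and taper thereafter, so the real ~90-minute figure sitting well above the ~55-minute ideal is expected.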

Connectivity

  • WiFi: Supported, Bluetooth: Supported (both variants)
  • USB: Type-A x1 and Type-C x1 (X2); Type-A x2 and Type-C x2 (X2 Ultra)
  • Ethernet: RJ45 x2 (X2 Ultra only)
  • Power Output: 12 V/3 A and 48 V/5 A (both variants)
  • Handheld Controller: Included with both variants
  • OTA supported on both variants
  • Charger Input: 100 to 220 V AC; charger output 54.6 V, 10 A
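The two accessory power outputs listed above define the budget available to end-effectors and third-party payloads. A quick tally, assuming (the spec sheet does not confirm this) that both rails can be loaded at their full ratings simultaneously:

```python
# Peripheral power budget from the rated accessory outputs.
# Simultaneous full loading of both rails is an assumption.
rails = {"12 V rail": 12 * 3, "48 V rail": 48 * 5}  # volts * amps = watts

for name, watts in rails.items():
    print(f"{name}: {watts} W")
print(f"Combined: {sum(rails.values())} W")  # → 276 W
```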

Capabilities

  • GO-1 Foundation Model with ViLLA Architecture:
    Processes multi-modal inputs (vision, language, and joint state) through a unified Vision-Language-Locomotion-Action model, enabling generalised task execution at reported speeds exceeding 10,000 interactions per second for millisecond-level response latency.
  • LinkCraft Zero-Code Motion Platform:
    Allows operators to generate and upload robot motion sequences from human video references, removing the need for motion programming expertise and enabling rapid customisation for entertainment and demonstration roles.
  • Autonomous Navigation and SLAM (X2 Ultra):
    Multi-line LiDAR fused with RGB-D cameras enables 3D SLAM localisation, autonomous route planning, and proactive obstacle avoidance for unsupervised indoor operation.
  • Autonomous Self-Docking (X2 Ultra):
    When battery charge is low, the X2 Ultra independently locates its dedicated charging station and initiates recharging without manual assistance, supporting extended unattended deployment.
  • Swappable Battery with OTA Support:
    The 500 Wh battery is field-swappable to minimise downtime, and the full software stack, including motion libraries and AI models, is updated over the air.
  • Secondary Development and End-Effector Compatibility (X2 Ultra):
    Exposes development interfaces for custom AI and application integration, with hardware expansion ports (USB, RJ45, 12 V and 48 V power outputs) and compatibility with AgiBot OmniHand and OmniPicker end-effectors.