Deepen AI launches multi-sensor calibration for physical AI applications


Deepen AI has released its latest targetless calibration platform built to simplify and accelerate calibration of complex self-driving vehicles, automotive ADAS, and robotics sensor suites.

The platform supports a wide range of configurations, including GNSS receivers, multiple LiDARs, radars, cameras, and inertial measurement units (IMUs). All inputs are processed in one pass using a single continuous dataset, such as a ROS bag.

As sensor stacks become more sophisticated, traditional calibration methods are becoming a bottleneck in deploying autonomous systems at scale. These approaches are often manual, iterative, and dependent on physical targets. Deepen AI’s solution introduces a fully automated, integrated approach that calibrates all sensors simultaneously.

The platform estimates intrinsic, extrinsic, and temporal parameters across the entire sensor suite in a single streamlined workflow, eliminating the need for per-sensor calibration. This approach streamlines operations while delivering high performance, achieving up to 0.05° angular accuracy and 0.7 cm positional accuracy, exceeding traditional target-based calibration techniques.
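To make the terms concrete: extrinsic parameters describe the rigid-body transform (rotation plus translation) between two sensor frames. A minimal sketch of applying such a transform, assuming a hypothetical LiDAR-to-camera extrinsic expressed as a yaw angle and a translation vector (all values illustrative, not taken from Deepen AI's platform):

```python
import math

def lidar_to_camera(point, yaw_deg, translation):
    """Map a point from the LiDAR frame to the camera frame using a
    rigid-body extrinsic: rotation about the Z axis plus a translation.
    The parameter values used below are illustrative only."""
    yaw = math.radians(yaw_deg)
    x, y, z = point
    tx, ty, tz = translation
    # Rotate about Z, then translate by the sensor lever arm.
    xc = math.cos(yaw) * x - math.sin(yaw) * y + tx
    yc = math.sin(yaw) * x + math.cos(yaw) * y + ty
    zc = z + tz
    return (xc, yc, zc)

# A point 1 m ahead of the LiDAR, with a 90° yaw offset and a small lever arm.
print(lidar_to_camera((1.0, 0.0, 0.0), 90.0, (0.1, 0.0, 0.2)))
```

Calibration is the inverse problem: estimating the yaw and translation (and, in the full 3D case, all six degrees of freedom plus time offsets) from sensor data rather than assuming them.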

Features include:

  • Simultaneous calibration of all sensors using a single dataset
  • Support for multi-LiDAR, camera, radar, IMU, and GNSS configurations
  • Accuracy up to 0.05° and 0.7 cm
  • No strict requirements for loop closure or fixed driving patterns
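To put the 0.05° figure in context, an angular misalignment produces a cross-range error that grows with distance. A quick back-of-the-envelope check (the 50 m range is an arbitrary example, not a vendor figure):

```python
import math

def cross_range_error(angular_error_deg, range_m):
    """Cross-range displacement caused by an angular misalignment
    at a given range (exact tan(), which matches the small-angle
    approximation for errors this small)."""
    return range_m * math.tan(math.radians(angular_error_deg))

# A 0.05 degree misalignment at 50 m shifts a point by roughly 4.4 cm.
err = cross_range_error(0.05, 50.0)
print(f"{err * 100:.1f} cm")  # → 4.4 cm
```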

“Calibration has traditionally been one of the most time-consuming, complex and fragmented steps in autonomous system deployment,” said Mohammad Musa, Founder and CEO of Deepen AI. “This release enables teams to move to a system-level approach that uses real-world data to deliver both speed and accuracy.”

The system is designed to operate without a controlled environment or strict data collection protocols, allowing teams to integrate calibration seamlessly into existing workflows for both research and large-scale production deployments. Only simple, practical conditions are needed: calibration can even be performed in a mostly stationary environment with few moving objects, such as a parking lot, garage, or quiet street. At least 30 seconds of continuous driving data is required.

The platform is already being deployed by customers working on highly complex sensor configurations that require multiple LiDARs and cameras to be calibrated together as one system. In one such deployment, a complete sensor stack was calibrated during normal driving in a parking lot or small residential area, without any special driving patterns or loop trajectories.

Deepen AI performed intrinsic, extrinsic, and temporal calibration of all sensors simultaneously in a single workflow using only short-term operational data. This unified approach not only simplifies operation and improves consistency, but also delivers greater accuracy than traditional target-based calibration methods, making it suitable for both research and production environments.
