
What Role do Algorithms Play in Mobile Robotics?

By Rajesh Mahapatra, Anil Sripadarao

Algorithms put the autonomy in autonomous mobile robots (AMRs). They help mobile robots process sensor data, detect obstacles, and make real-time driving decisions to better navigate their environment. While algorithms are continuously evolving due to advancements in artificial intelligence (AI) and machine learning, you can broadly classify them under three categories:

  • Sensing
  • Actuating
  • Communicating

These three capabilities give mobile robots everything they need to sense their surroundings, actuate or control their movements, and communicate efficiently between different systems within the robot. This blog will focus on the sensing and actuating algorithm types.


Figure 1: Concept diagram of potential path options for autonomous vehicles

Let’s consider the scale of the task: navigating from Point A to Point B with a potentially unlimited number of obstacles along the way. Humans use their eyes to assess the world around them, make judgments based on those assessments, and set off in the desired direction. As they proceed, they may evaluate whether a faster or shorter path is available.

Mobile robots operate similarly, using sensing and actuating algorithms to make informed decisions from collected sensor data.  
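
To make the Point A to Point B idea concrete, here is a deliberately simplified path-planning sketch: a breadth-first search over a 2D occupancy grid. The grid, start, and goal below are hypothetical examples, and real AMRs use far richer planners and continuously replan as sensor data arrives.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid: list of lists, 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Toy map: 1s block the direct route, forcing a detour through column 2.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = shortest_path(grid, (0, 0), (2, 0))
```

Because BFS explores cells in order of distance, the first path it finds is the shortest one on the grid, much as a human would notice the detour around a blocked aisle.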
 

Sensing Algorithms

Sensing algorithms help mobile robots perceive their environment. AMRs gather data from many sources, such as time-of-flight (ToF) cameras, inertial measurement units (IMUs), and lasers. Sensors provide autonomous mobile robots with an accurate and detailed understanding of their surroundings using algorithms such as:

  • Simultaneous Localization and Mapping (SLAM): LiDAR-based SLAM algorithms help robots build maps of their surroundings and determine their location within the map. Techniques like Iterative Closest Point (ICP) and graph-based SLAM can be used.
  • Point Cloud Processing Algorithms: ToF and LiDAR sensors generate point clouds, which represent the detected surfaces of objects. The resulting 3D data can be used for segmentation, clustering, and object detection.
  • Object Detection Algorithms: For spatial awareness tasks like obstacle detection and avoidance, ToF sensors utilize a range of algorithms, from connected-component labeling to advanced deep learning algorithms.
  • Sensor Fusion Algorithms (ex. Complementary Filter, Kalman Filter, and Extended Kalman Filter): Sensor fusion algorithms combine data from accelerometers, gyroscopes, and magnetometers to provide accurate estimates of orientation, position, and velocity. The Kalman Filter, for instance, is commonly used for real-time sensor fusion because of the advantages it offers in navigation and stabilization.
  • Attitude and Heading Reference Systems (AHRS): AHRS algorithms build on IMU data to calculate precise orientation and heading, which is crucial for drones, self-balancing robots, and navigation.
  • Human-Robot Interaction: In challenging lighting environments, a vision sensor such as ToF can enable gesture recognition. In quiet environments, key phrase detection may be used instead. Both interaction modes can be implemented by running deep learning inference on the robot.
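
As a minimal sketch of the sensor-fusion idea above, a one-axis complementary filter blends a gyroscope (trusted over short timescales) with an accelerometer (trusted over long timescales). The blend factor and update model here are illustrative assumptions, not a production design:

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary filter for a single axis.

    angle:       previous angle estimate (rad)
    gyro_rate:   angular rate from the gyroscope (rad/s)
    accel_angle: angle implied by the accelerometer's gravity vector (rad)
    alpha:       blend factor -- trust the gyro short-term, accel long-term
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_pitch(ax, ay, az):
    """Pitch angle implied by a (static) accelerometer reading in m/s^2."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

# Simulate a stationary robot tilted 0.1 rad: the estimate should settle
# on the accelerometer's reading, since the gyro reports no rotation.
angle, dt = 0.0, 0.01
for _ in range(2000):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=0.1, dt=dt)
```

A Kalman Filter achieves a similar blend but adapts the weighting from measurement noise statistics, which is why it is preferred when accuracy matters more than simplicity.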
     

Actuating Algorithms

Actuation focuses on controlling the movement of robotic components such as wheels, arms, or grippers. Naturally, precision is a priority, as these algorithms enable robots to carry out complex movements and interact with their environment in real time. Some examples are:

  • Proportional-Integral-Derivative (PID) Control ensures smooth, stable, and accurate motion control by minimizing errors between desired and actual states.
  • Model Predictive Control (MPC) predicts the robot's future state based on its current state and uses optimization techniques to determine the best set of actions over a given time. This is advantageous in dynamic and complex environments.
  • Trajectory Generation Algorithms compute smooth, collision-free paths for robotic motion. Examples include polynomial trajectory planning and minimum jerk paths.
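
The PID idea above can be sketched in a few lines. The gains and the first-order "plant" below are hypothetical toy values chosen for illustration; real controllers are tuned against the actual motor and load dynamics:

```python
class PID:
    """Minimal discrete PID controller (illustrative, untuned gains)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy first-order wheel-speed model toward 1.0 m/s.
pid = PID(kp=2.0, ki=2.0, kd=0.05)
speed, dt = 0.0, 0.01
for _ in range(1000):
    command = pid.update(1.0, speed, dt)
    speed += (command - speed) * dt   # plant: speed relaxes toward command
```

The proportional term reacts to the current error, the integral term removes steady-state offset, and the derivative term damps overshoot, which is what produces the smooth, stable motion described above.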


Figure 2: Sensing and actuation algorithm options for autonomous vehicles
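
The minimum jerk paths mentioned under trajectory generation have a compact closed form worth showing: position follows a fifth-order polynomial in normalized time, which gives zero velocity and acceleration at both endpoints. The function below is a one-dimensional illustrative sketch:

```python
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a move from x0 to xf over T seconds.

    Uses the standard quintic profile s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5,
    which starts and ends with zero velocity and zero acceleration.
    """
    tau = min(max(t / T, 0.0), 1.0)   # normalized time, clamped to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# A 2-second move from 0 to 1: starts at 0, passes the midpoint at t = 1 s,
# and arrives exactly at 1 with no abrupt starts or stops.
samples = [min_jerk(0.0, 1.0, 2.0, t) for t in (0.0, 1.0, 2.0)]
```

Profiles like this matter for actuation because abrupt acceleration changes (jerk) excite vibration in arms and grippers and degrade positioning accuracy.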
 

Mobile Robotics Algorithms in Context

The algorithms discussed here leverage the output of hardware devices to deliver more complete solutions in robotics applications. Each algorithm serves a specific purpose within the overall robotic design, and together they allow autonomous mobile robots to adapt to dynamic and complex environments.

IMUs, Hall sensors, depth computing engines, and motion control ICs have sophisticated algorithms built into them, making it possible to run other algorithms on top of them and achieve greater speed and power performance. As algorithms advance, the speed and ease of operating robots will continue to improve.

Learn more about ADI’s Mobile Robotics Solutions

More from the Mobile Robotics EZ Blogs series