Robotic Path-Planning Fleet
In this project, I worked with a team of 4 members over 4 weeks with the objective:
To develop a decentralized path planner for a fleet of robots, integrating sensor fusion techniques and multi-agent communication for autonomous fleet navigation.
System Diagram
Before starting this project, my team and I drafted a diagram of the different nodes in the local software for a single robot (Neato).
Over the course of the 4 weeks, we adjusted this system diagram to the following. The pipeline includes two main parts: 1) State Estimation and 2) Path Planning.
State Estimation
Errors tend to stack up in a multi-agent system, and when the goal is for each robot to avoid collisions, an accurate pose estimate becomes critical. The state estimation sub-pipeline uses an Extended Kalman Filter (EKF) that fuses a pose from wheel odometry with a pose from the camera computed with visual odometry.
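As a rough illustration of this kind of fusion, here is a minimal EKF sketch assuming a simple (x, y, θ) state, with wheel odometry driving the prediction step and the visual odometry pose used as the measurement. The noise matrices are hypothetical tuning values, not the ones from our actual node.

```python
import numpy as np

# Minimal 2D EKF sketch: predict with a wheel-odometry increment, correct
# with a visual-odometry pose. State is [x, y, theta]; Q and R are
# hypothetical tuning values. Angle wrapping is omitted for brevity.
class PoseEKF:
    def __init__(self):
        self.x = np.zeros(3)                    # state estimate [x, y, theta]
        self.P = np.eye(3) * 0.1                # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])    # wheel-odometry process noise
        self.R = np.diag([0.10, 0.10, 0.05])    # visual-odometry measurement noise

    def predict(self, d, dtheta):
        """Propagate the state with a wheel-odometry step (distance d, turn dtheta)."""
        theta = self.x[2]
        self.x += np.array([d * np.cos(theta), d * np.sin(theta), dtheta])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1, 0, -d * np.sin(theta)],
                      [0, 1,  d * np.cos(theta)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct the state with a visual-odometry pose z = [x, y, theta]."""
        H = np.eye(3)                           # VO measures the full pose directly
        y = z - H @ self.x                      # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```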
Path planning
A decentralized path planner allows each robot to independently plan and adjust its path as information is shared across the network. The path planning sub-pipeline first uses the A* algorithm to find a shortest path to a goal. Each robot's next planned step and current pose estimate are shared across the network, and these two pieces of information determine a navigation hierarchy for the robots.
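For reference, a minimal grid-based A* sketch is shown below. The occupancy-grid representation and unit step cost here are assumptions for illustration; the project's actual map format and cost function may differ.

```python
import heapq

def a_star(grid, start, goal):
    """grid[r][c] == 0 means free space; start and goal are (row, col) tuples."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]      # priority queue ordered by f = g + h
    came_from = {}
    g_score = {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            # Walk the parent links backwards to recover the path
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue
            if grid[nr][nc] != 0:       # skip occupied cells
                continue
            tentative = g_score[current] + 1
            if tentative < g_score.get(nbr, float("inf")):
                g_score[nbr] = tentative
                came_from[nbr] = current
                heapq.heappush(open_set, (tentative + h(nbr), nbr))
    return None                         # goal unreachable

# Example: path = a_star(occupancy_grid, (0, 0), (5, 7))
```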
My Work - Monocular Visual Odometry
MVO Pipeline Overview
- Calibrate the Neato camera to get the camera intrinsic matrix
- Receive an image from the Neato every 1 second
- Extract SIFT features
- Match features with a FLANN-based matcher and filter good matches by distance
- Estimate motion with essential matrix decomposition
- Publish a pose
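Below is a minimal sketch of the frame-to-frame step of this pipeline, assuming K is the intrinsic matrix from calibration. The FLANN parameters and the Lowe's-ratio threshold used for distance filtering are illustrative values, not necessarily the ones we tuned.

```python
import cv2
import numpy as np

def estimate_motion(prev_img, curr_img, K):
    """Estimate R, t between two grayscale frames (t is only known up to scale)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_img, None)   # SIFT features
    kp2, des2 = sift.detectAndCompute(curr_img, None)

    # FLANN matching with a KD-tree index, filtered with Lowe's ratio test
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then recover the relative rotation and
    # unit-scale translation
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```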
Testing consisted of manually pushing the Neato 0.5m every 6 seconds and comparing the recorded distance to the real distance. (A velocity of 0.083 m/s was just a magic number that worked best for matching features between frames.)
Below is a graph showing the differences between the output X pose (in red) and each 0.5m step in real life (in blue). There was a large inconsistency in the output scaling and positive/negative sign for each test, which made the estimates seem very unreliable.
The first 8 tests we performed used cv2.recoverPose() to extract the rotation and translation. In the second set of tests, we tried directly implementing the singular value decomposition math to extract the matrices. Given the random spikes, we decided to continue with the OpenCV function. The next step toward extracting the correct translation is to compute the scale for each image pair; the translation recovered this way has an inherent scale ambiguity that we cannot control.
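For reference, a sketch of the direct SVD decomposition we experimented with (following the standard Hartley-Zisserman construction). It yields four candidate (R, t) pairs; cv2.recoverPose() is what additionally performs the cheirality check that selects the physically valid one.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into four (R, t) candidates via SVD."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U *= -1
    if np.linalg.det(Vt) < 0:
        Vt *= -1
    W = np.array([[0, -1, 0],
                  [1,  0, 0],
                  [0,  0, 1]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                 # translation direction, up to an unknown scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```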
The second set of calibration tests we performed was with the rotation matrix. Unlike the translation, this yielded very accurate results over time. In this calibration test, the real and computed angles were measured every 45 degrees, going from 0 to 180 degrees clockwise and counter-clockwise. In the graph below, blue represents the manual turning motion and red is the estimated output. The initial 45-90 degrees were very inaccurate, as the estimated motion would always be significantly lower. However, as the Neato reached 180 degrees, the angles would adjust to be more accurate, usually settling within 10 degrees.
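For the angle comparison, a small sketch of how a heading can be recovered from the estimated rotations, assuming the yaw of each frame-to-frame rotation maps to the Neato's heading (the exact axis convention depends on how the camera is mounted):

```python
import numpy as np

def accumulate_heading(rotations):
    """Sum per-frame yaw increments (in degrees) from a list of 3x3 rotation matrices."""
    return sum(np.degrees(np.arctan2(R[1, 0], R[0, 0])) for R in rotations)
```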
In the end, the pose estimate from monocular visual odometry was not fully integrated due to the inaccuracy. Something that would be very helpful would be to create more visualizations, e.g. drawing out the epipolar lines and using RViz to show how the poses change.
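As a starting point for that visualization (not something we implemented), here is a sketch of drawing epipolar lines with OpenCV, assuming pts1 and pts2 are matched pixel coordinates from two frames:

```python
import cv2
import numpy as np

def draw_epipolar_lines(img1, pts1, pts2):
    """Draw, on img1, the epipolar lines induced by the matched points in image 2."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    vis = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
    # Each line in image 1 corresponding to a point in image 2 is (a, b, c)
    # with ax + by + c = 0
    lines = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F).reshape(-1, 3)
    h, w = img1.shape[:2]
    for (a, b, c), pt in zip(lines, pts1):
        if abs(b) < 1e-9:           # skip (nearly) vertical lines for brevity
            continue
        x0, y0 = 0, int(-c / b)
        x1, y1 = w, int(-(c + a * w) / b)
        cv2.line(vis, (x0, y0), (x1, y1), (0, 255, 0), 1)
        cv2.circle(vis, tuple(int(v) for v in pt), 4, (0, 0, 255), -1)
    return vis
```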