As we reach the end of the semester, I find myself reflecting on the last project and how far I have come. At the beginning of the semester, I was a stranger to autonomous systems, with only a hazy understanding of the field. While I wouldn’t claim expertise by any means, I can now genuinely appreciate the immense effort that goes into self-driving vehicles. My own robot project, with its obstacle avoidance and goal-seeking challenges, has brought that appreciation home in a profound way.
Project 3 was one of the most interesting assignments I have done so far. Focusing on path planning and hardware integration, it offered an intricate blend of software programming and mechanical design. I am fortunate to have had the chance to work on both parts of the project. In the following paragraphs, I will first discuss Path Planning and then Hardware Integration.
Path Planning:
Path planning is a central problem in autonomous vehicle navigation: it lets an autonomous vehicle or robot find the shortest obstacle-free path from a start state to a goal state.
SLAM, or Simultaneous Localization and Mapping, is a pivotal concept in robotics, especially in the context of autonomous navigation. The video provided by the University of Michigan on SLAM was an extremely informative resource, but I dived deeper and looked at the mechanism more closely. SLAM is very complex, with many variants rather than one canonical implementation, so a quick search won’t turn up a single definitive algorithm. What we used in class was SLAM software, which essentially abstracts away the implementation.
In essence, SLAM involves a robot mapping an unknown environment while simultaneously keeping track of its location within that space.
Here’s how SLAM works (again, not the gory details but the abstraction):
- Localization: The Mbot continually estimates its position relative to its starting point. This is done by interpreting data from sensors and previously mapped areas.
- Mapping: Concurrently, the Mbot builds a map of its surroundings using input from its sensors, in this case the ultrasonic sensors. It’s like drawing a map while journeying through unknown territory.
Simultaneous localization and mapping is vital for path planning, as it allows the Mbot to navigate effectively while avoiding obstacles and making real-time adjustments based on the evolving map. For this, we had to implement a path planning algorithm such as breadth-first search.
I implemented three core path planning algorithms: Depth-First Search (DFS), Breadth-First Search (BFS), and A-Star (A*) Search. Each of these algorithms has its unique approach to navigating the robot through the map.
- Depth-First Search (DFS) Implementation:
- Overview: DFS is a traversal algorithm that explores as far as possible along each branch before backtracking. In the context of path planning, it means the algorithm explores a path until it hits an obstacle or a dead end, then retraces its steps to explore alternative paths.
- Pseudocode:
function depthFirstSearch(graph, start, goal):
    initialize stack and visited set
    stack.push(start)
    while stack is not empty:
        current = stack.pop()
        mark current as visited
        if current == goal:
            return reconstructPath(current)
        for each neighbor of current:
            if neighbor is not visited:
                mark parent of neighbor as current
                stack.push(neighbor)
    return path_not_found
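The DFS pseudocode above can be sketched concretely. This is a minimal illustration on a tiny hypothetical occupancy grid (1 = obstacle), not the actual course map, and `dfsPath` is just my name for the helper; the real project code differs.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <stack>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;  // (row, col) on the grid

// Depth-first path search on a 4-connected occupancy grid.
// Cells are marked visited when pushed, via the parent map.
std::vector<Cell> dfsPath(const std::vector<std::vector<int>>& grid,
                          Cell start, Cell goal) {
    int rows = grid.size(), cols = grid[0].size();
    std::stack<Cell> frontier;
    std::map<Cell, Cell> parent;  // child -> parent; also marks "visited"
    frontier.push(start);
    parent[start] = start;
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while (!frontier.empty()) {
        Cell cur = frontier.top();
        frontier.pop();
        if (cur == goal) {
            // Walk parents back from goal to start, then reverse.
            std::vector<Cell> path;
            for (Cell c = goal; c != start; c = parent[c]) path.push_back(c);
            path.push_back(start);
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int i = 0; i < 4; ++i) {
            Cell nxt{cur.first + dr[i], cur.second + dc[i]};
            if (nxt.first < 0 || nxt.first >= rows ||
                nxt.second < 0 || nxt.second >= cols) continue;
            if (grid[nxt.first][nxt.second] == 1) continue;  // obstacle
            if (parent.count(nxt)) continue;                 // visited
            parent[nxt] = cur;
            frontier.push(nxt);
        }
    }
    return {};  // no path found
}
```

The stack is the only thing that makes this depth-first: the most recently discovered cell is expanded next, so one branch is followed to its end before backtracking.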
- Breadth-First Search (BFS) Implementation:
- Overview: BFS explores all neighbor nodes at the present depth before moving on to the nodes at the next depth level. It’s excellent for finding the shortest path on unweighted graphs, like our grid map.
- Pseudocode:
function breadthFirstSearch(graph, start, goal):
    initialize queue and visited set
    queue.enqueue(start)
    visited.add(start)
    while queue is not empty:
        current = queue.dequeue()
        if current == goal:
            return reconstructPath(current)
        for each neighbor of current:
            if neighbor is not visited:
                queue.enqueue(neighbor)
                visited.add(neighbor)
                mark parent of neighbor as current
    return path_not_found
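Here is a concrete sketch of the BFS pseudocode, again on a small hypothetical occupancy grid (1 = obstacle) rather than the real course map; `bfsPath` is an illustrative name, not the project’s actual function.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <queue>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;  // (row, col) on the grid

// Breadth-first path search on a 4-connected occupancy grid.
// Because every step has equal cost, the first time the goal is
// reached the path is guaranteed to be a shortest one.
std::vector<Cell> bfsPath(const std::vector<std::vector<int>>& grid,
                          Cell start, Cell goal) {
    int rows = grid.size(), cols = grid[0].size();
    std::queue<Cell> frontier;
    std::map<Cell, Cell> parent;  // child -> parent; also marks "visited"
    frontier.push(start);
    parent[start] = start;
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while (!frontier.empty()) {
        Cell cur = frontier.front();
        frontier.pop();
        if (cur == goal) {
            // Walk parents back from goal to start, then reverse.
            std::vector<Cell> path;
            for (Cell c = goal; c != start; c = parent[c]) path.push_back(c);
            path.push_back(start);
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int i = 0; i < 4; ++i) {
            Cell nxt{cur.first + dr[i], cur.second + dc[i]};
            if (nxt.first < 0 || nxt.first >= rows ||
                nxt.second < 0 || nxt.second >= cols) continue;
            if (grid[nxt.first][nxt.second] == 1) continue;  // obstacle
            if (parent.count(nxt)) continue;                 // visited
            parent[nxt] = cur;
            frontier.push(nxt);
        }
    }
    return {};  // no path found
}
```

Swapping the stack for a queue is the only structural difference from DFS, yet it changes the guarantee: the frontier expands level by level, so shorter paths are always found first.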
- A-Star (A*) Search Implementation:
- Overview: A* is an informed search algorithm, ideal for pathfinding and graph traversals. It combines features of both BFS and the Greedy Best-First-Search, considering both the cost to reach the node and the estimated cost from that node to the goal.
- Pseudocode: I implemented this algorithm in my local repository; here is the pseudocode.
function aStarSearch(graph, start, goal):
    initialize priority_queue, cost_to_start, and visited set
    priority_queue.enqueue(start, 0)
    cost_to_start[start] = 0
    while priority_queue is not empty:
        current = priority_queue.dequeue()
        if current == goal:
            return reconstructPath(current)
        for each neighbor of current:
            tentative_cost = cost_to_start[current] + cost(current, neighbor)
            if neighbor has no cost_to_start or tentative_cost < cost_to_start[neighbor]:
                cost_to_start[neighbor] = tentative_cost
                priority = tentative_cost + heuristic(neighbor, goal)
                priority_queue.enqueue(neighbor, priority)
                mark parent of neighbor as current
    return path_not_found
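The A* pseudocode can be sketched as follows. This assumes unit step costs and a Manhattan-distance heuristic (admissible on a 4-connected grid); the grid is a tiny hypothetical example, and `aStarPath` is an illustrative name rather than the project’s actual code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <functional>
#include <map>
#include <queue>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;  // (row, col) on the grid

// Manhattan distance: an admissible heuristic for 4-connected grids.
int heuristic(Cell a, Cell b) {
    return std::abs(a.first - b.first) + std::abs(a.second - b.second);
}

// A* search on an occupancy grid (1 = obstacle), unit step cost.
std::vector<Cell> aStarPath(const std::vector<std::vector<int>>& grid,
                            Cell start, Cell goal) {
    int rows = grid.size(), cols = grid[0].size();
    using Entry = std::pair<int, Cell>;  // (f = g + h, cell)
    // Min-heap: the entry with the smallest f-value is expanded first.
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    std::map<Cell, int> costToStart;  // best known g-value per cell
    std::map<Cell, Cell> parent;
    open.push({heuristic(start, goal), start});
    costToStart[start] = 0;
    parent[start] = start;
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while (!open.empty()) {
        Cell cur = open.top().second;
        open.pop();
        if (cur == goal) {
            std::vector<Cell> path;
            for (Cell c = goal; c != start; c = parent[c]) path.push_back(c);
            path.push_back(start);
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int i = 0; i < 4; ++i) {
            Cell nxt{cur.first + dr[i], cur.second + dc[i]};
            if (nxt.first < 0 || nxt.first >= rows ||
                nxt.second < 0 || nxt.second >= cols) continue;
            if (grid[nxt.first][nxt.second] == 1) continue;  // obstacle
            int tentative = costToStart[cur] + 1;            // unit step cost
            if (!costToStart.count(nxt) || tentative < costToStart[nxt]) {
                costToStart[nxt] = tentative;
                parent[nxt] = cur;
                open.push({tentative + heuristic(nxt, goal), nxt});
            }
        }
    }
    return {};  // no path found
}
```

The heuristic is what steers the search: nodes that look closer to the goal are expanded sooner, so A* typically visits far fewer cells than BFS while still returning an optimal path when the heuristic never overestimates.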
Each algorithm has its strengths and weaknesses:
- DFS is simple but can be inefficient in large graphs.
- BFS is excellent for finding the shortest path in unweighted graphs.
- A*, though more complex, is highly efficient and effective in pathfinding in weighted graphs due to its use of heuristics.
Here’s a picture of the path created by my robot as it traversed the maze while simultaneously localizing and mapping.
This was the penultimate stage in my path planning assignment. I had to fine-tune my code repeatedly to make it work for different maze sizes and shapes. It was an arduous task but definitely worthwhile. At the end, I had to make a map of the area where robots are tested in the makerspace (above) and make my robot traverse the path.
Pros and Cons of Path Planning (Global Search) vs. Bug Algorithm (Local Search)
Although path planning seems the best approach, both path planning and the bug algorithm have their merits and demerits. Global search algorithms consider all possible paths in an environment, providing comprehensive coverage and thorough exploration. Because they consider every possible path, they guarantee the shortest path to the goal (assuming the environment is static). Path planning also has disadvantages. Global search algorithms such as A* are computationally expensive, especially in large or complex environments; that’s why our code was running slowly. Additionally, global search algorithms are memory intensive, as they require significant memory to store information about the entire environment. Finally, global search does not work well for environments that constantly change, like a shifting maze.
As you can see in the picture above, it’s the planner file for the map of the environment at the makerspace. The environment was small, but the file size is 10.7 megabytes. If we wanted to map an environment 100 times bigger than the area at the makerspace, the file size would be roughly 1 gigabyte (1024 megabytes).
Pros and Cons of Bug Algorithm (Local Search)
Local search has its own advantages and disadvantages. The bug algorithm is simpler to implement and is well suited to straightforward navigation tasks. Moreover, it’s computationally less expensive, since the robot doesn’t have to acquire knowledge of the entire environment and store it in memory. Adaptability is another benefit of local search: it can react to changes in the environment in real time. For example, I remember that in bug navigation we tried several approaches, like wall following and bang-bang control, and we could easily change the maze without affecting the robot’s ability to reach the goal. However, local search is not all glorious; it has its downsides. The algorithm does not guarantee the most optimal path and can lead to longer routes.
Overall, as I reflect on the implementation, I feel extremely proud of myself. I remember spending hours in the cold makerspace trying to make my algorithms work. I definitely faced some challenges: my BFS and DFS were not working optimally, and my robot would either not take the optimal path or hit obstacles, so I had to fine-tune them a lot. For the BFS algorithm to work optimally, I had to read into the sensor data and the robot’s movement dynamics. Additionally, one day I told Dr. Wu that my robot was hitting the wall; she suggested I look more into collision checking, and adding those checks to my BFS algorithm solved the problem. To make my robot take the most optimal path, I also considered a scoring system that would score each candidate path and take the best one, but it would have made my code more complicated. I’m happy with the results, nonetheless.
Overall, the process was iterative and required extensive testing and modifications, but overcoming these challenges was a significant learning experience in adapting theoretical algorithms to practical robotic applications.
Hardware Integration
Once I finished Path Planning, I had ample time to work on the hardware integration part of the project. For the circuit design, I had to mount three ultrasonic sensors on the robot without interfering with its existing hardware.
I collected enough wires, three ultrasonic sensors, and tape. I then had to figure out how to mount the sensors on the Mbot, so I designed a mechanical mounting plate to hold the three sensors, one on each side, taking into consideration the robot’s spatial constraints and the sensors’ optimal positioning. This design process involved careful planning and several iterations.
As for the tool, I used Autodesk software for my drawing and modeling. I measured the dimensions of the sensors and the available space on the Mbot to ensure a snug and secure fit. To verify my measurements, I also did some research online on the dimensions and diameter of the ultrasonic sensor. I designed the cavities slightly larger than the sensors’ diameter to facilitate effortless insertion and removal while minimizing any potential for vibration or movement that could affect sensor readings. I also included holes on the extended arm for screws, which would be used to firmly attach the mounting plate to the Mbot’s chassis. The positioning of these holes was carefully calculated to align with the Mbot’s existing structural supports, thereby enhancing the stability of the mounted sensors.
For the circuit design, I used a breadboard and connected the ultrasonic sensors to the Raspberry Pi’s GPIO pins.
In the final checkpoint, our task was to program the Raspberry Pi to collect and process data from all three ultrasonic sensors. We developed functions in C++ to calculate the distance to obstacles in the direction of the Mbot’s travel. This process required a deep understanding of sensor data handling and involved referencing several resources, including the ROS Wiki tutorial on Ultrasonic Distance Measurement and the Ninedof blog post on HC-SR04 sensors.
Throughout the project, we faced several challenges. One of the primary ones was ensuring the accuracy and consistency of the ultrasonic sensors in different environmental conditions. Another challenge was integrating these sensors’ data with the Mbot’s existing navigation system, requiring extensive testing and calibration for effective obstacle avoidance.
Here’s a picture of the robot
The picture above shows how everything works. Later, though, I clipped together the wires using tape to make the design look better. For a video demonstration of the design, please visit this link https://drive.google.com/file/d/1R0dIgvY2lXRVQqyPJdItsI4qvAhIWSiL/view?usp=sharing.
I didn’t face any huge challenges because the instructions were clearly given to us — thanks to Dr. Wu and Dr. Jones. However, if I had to mention a challenge, I would say time constraints were truly challenging.
As for the code, I used the wiringPi library to manipulate the GPIO pins on the Raspberry Pi for reading inputs from the sensors. At the top of the code, I defined constants for each sensor’s trigger and echo pins, using wiringPi’s numbering scheme. I then used wiringPi’s pinMode() function to set up the pins as input or output. For an ultrasonic sensor to measure distance, it needs a trigger pulse: I used digitalWrite() to send a high signal to the trigger pin, followed by a low signal. To read the sensor’s distance measurement, I monitored the echo pins using digitalRead(), and I used the formula given in the video to calculate distance.
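The distance conversion itself is pure arithmetic and can be sketched separately from the GPIO handling. This assumes the common HC-SR04 approach of timing the echo pulse; the 0.0343 cm/µs figure is the usual room-temperature approximation for the speed of sound, and `pulseToDistanceCm` is an illustrative helper name, not the project’s actual function.

```cpp
#include <cassert>
#include <cmath>

// Converts an HC-SR04 echo pulse duration (microseconds) to a distance
// in centimeters. Sound travels at roughly 343 m/s, i.e. 0.0343 cm/us,
// and the pulse covers the round trip to the obstacle and back, hence
// the division by two.
double pulseToDistanceCm(double pulseMicroseconds) {
    return pulseMicroseconds * 0.0343 / 2.0;
}
```

In the real sensor loop, the pulse duration would come from timing how long the echo pin stays high after the trigger pulse; only that timing step needs wiringPi, so the conversion can be unit-tested on its own.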
As for sources, I didn’t use any outside sources.
Written by Ali Ramazani on December 3, 2023 for Intro to Autonomous Systems taught by Dr. Jones and Dr. Wu.