pl649@cornell.edu
Hello! My name is Peng-Ru. I am an M.Eng. student in Electrical and Computer Engineering at Cornell University. I have a strong passion for hands-on experiments and have documented my Fast Robots Lab experience on this website. Hope you enjoy!
Lab 1 is divided into two parts:
Part A focuses on getting familiar with the SparkFun RedBoard Artemis Nano: programming the board, using the onboard LED, reading and writing serial messages over USB, and using the temperature sensor and Pulse Density Microphone.
Part B involves establishing Bluetooth communication between the computer and the Artemis board. Python commands will be sent from a Jupyter notebook to the Artemis, and data can be sent from the Artemis to the computer via Bluetooth for future labs.
We first installed Arduino IDE from this link. Then, following the setup instructions provided on this link, we added the SparkFun Arduino Artemis to the IDE, ensuring that the latest SparkFun Apollo3 board package was installed. Finally, we selected RedBoard Artemis Nano as our board.
This program blinks the onboard LED by setting the LED pin HIGH for one second and then LOW for one second. The basic "Blink" example is used to test the LED on the module. As the video below shows, the code uploads to the board correctly and the onboard LED works properly.
The code tests serial communication between the computer and the Artemis board. The Artemis board sends a counting test to the serial monitor via UART and echoes any string sent from the computer, demonstrating that serial communication is set up correctly. When using serial communication, ensure the baud rate on the serial monitor matches the one defined in the code (9600 in this case) for proper communication. See the demo video below.
The code tests the temperature sensor on the board. It uses the analogReadTemp() function to obtain the raw ADC counts from the temperature sensor and send the data to the serial monitor. In the video, when the finger is removed, the temperature decreases from around 33,900 to about 33,500.
The code tests the microphone on the Artemis board. As shown in the video below, the Artemis is able to capture frequency readings from the ambient sound. When I whistle in the video, you can observe the frequency change.
With the example code from Arduino (task 4), we can modify the program to turn on the LED on the Artemis board when the note "C" is played. First, the frequency of the note "C" is measured to determine the appropriate frequency range for lighting up the LED (I set the range to 520-530 Hz here). The LED pin is then set to HIGH if the detected loudest frequency falls within that range, and set to LOW otherwise.
Before the lab, install Python and pip3 with the latest release. Then, create a virtual environment using the following commands:
python3 -m pip install --user virtualenv
python3 -m venv FastRobots_ble
Activate the environment and start Jupyter Lab with the following commands:
.\FastRobots_ble\Scripts\activate
jupyter lab
To establish a Bluetooth connection, I needed to know my device's MAC address, which I printed to the serial monitor. I also needed a UUID specific to my board to avoid connecting to the wrong one. All of this information was added to the connections.yaml file in the demo codebase. The following screenshot shows how I obtained the MAC address and UUID.
We need the Artemis Board to send and modify strings. To achieve this, I created an ECHO function. In the Arduino code, the function adds the prefix 'Robot says ->' to the original message before sending it. My code and results are shown below.
We need the board to send floats as well. We created the SEND_THREE_FLOATS command to send three floats and extract the values in the Arduino sketch. The code and results are shown below.
Next, I created a command called GET_TIME_MILLIS and used the onboard timer to track and send timestamps through the Bluetooth connection. The two following screenshots show the Arduino code and the code with results on the Python side.
Next, I set up a notification handler in order to receive strings from the Artemis board and extract the time value.
Then, I wrote a loop to continuously send time data with the code below. On the Python side, I collected the information from the notification handler. From the results below, the timestamps range from 25,272 to 50,709 ms, a span of 25,437 milliseconds. With 1,000 messages of 7 bytes each, the effective data rate is 1000 × 7 / 25.437 ≈ 275.18 bytes/s.
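As a quick sanity check, the same calculation can be reproduced in a few lines of Python (a minimal sketch using the numbers measured above):

# Effective BLE data rate from the timing experiment above
start_ms, end_ms = 25272, 50709        # first and last timestamps (ms)
n_messages = 1000                      # messages sent in the loop
bytes_per_message = 7                  # size of each message
elapsed_s = (end_ms - start_ms) / 1000.0            # 25.437 s
rate = n_messages * bytes_per_message / elapsed_s   # ~275.2 bytes/s
print(f"Effective data rate: {rate:.2f} bytes/s")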
I created a globally defined array to store timestamps. The SEND_TIME_DATA command first stores the timestamps in the array, then loops through the array and sends each entry to my computer. The code for the SEND_TIME_DATA command and the result received by the computer are shown below.
Next, I added a second array of the same size as the timestamp array to store temperature data. I created a command, GET_TEMP_READINGS, to collect both time and temperature readings and store them in both arrays. Then, I merged the two arrays and sent the combined data to the computer. The code for GET_TEMP_READINGS and the results are shown below.
Live data transmission provides real-time updates but comes with higher overhead, potential data loss, and BLE speed limitations. It's suitable for small data points and systems that can handle interruptions. Storing data for later transmission ensures reliability, reduces data loss, and is more robust in unstable conditions, but requires more memory and introduces delays.
The board has 384 kB of RAM. Storing only time (4 bytes) allows for 96,000 data points, while storing both time and temperature (8 bytes) limits storage to 48,000 data points. With a sampling rate of 100 Hz, each second stores 800 bytes (100 samples × 8 bytes). The total storage required for 48,000 data points is 384,000 bytes, which lasts for 480 seconds (or 8 minutes) before memory runs out. If storage is full, overwriting previous data or reducing the sampling frequency can help extend data collection time.
We sent data ranging from 5 to 120 bytes, increasing by 5 bytes, and calculated the data rate for both 5-byte and 120-byte replies. The results showed that the effective data rate increases with message size. The delay is determined by Bluetooth latency, and larger replies help reduce overhead, which in turn can increase the data rate. The screenshot below shows how I tested it and the results I obtained.
I created the command RELIAVITY_TEST to make the board send 1000 messages, each with its sequence number. On the Python side, the screenshot below shows that all 1000 messages were received, confirming that data transfer via Bluetooth is reliable.
Learning how to establish Bluetooth communication between the board and the computer will be important for future labs. Additionally, comparing different transmission methods and their lengths will help me design more efficient communication strategies, considering both time and memory constraints.
The purpose of this lab was to test the IMU sensor, which includes an accelerometer, gyroscope, and magnetometer. We also received our actual robots and observed their capabilities.
To prepare for this lab, I read about the IMU and familiarized myself with its functionality.
First, I connected the IMU to the Artemis board using the QWIIC connector, as shown in the photo below.
Next, I tested that the IMU works by running the example code while rotating, flipping, and accelerating the IMU, as shown in the video below.
The AD0_VAL represents the last bit of the I2C address in the IMU. I set AD0_VAL to 1, which is the default. However, it can be changed to 0 when the ADR jumper is closed, which is useful when using multiple IMUs.
The accelerometer measures linear acceleration. When the board is flat on a table, the X and Y accelerations are minimal, while Z is around 1,000 mg (about 1 g, or 9.8 m/s²) due to gravity. The X, Y, and Z values change as the device moves along each axis. The gyroscope measures angular velocity. It outputs approximately zero when still but responds on the corresponding axis when the board rotates.
I added a visual indication to show that the board is running by blinking the LED three times on startup. I did this using the example code 'blink' from Lab 1, as shown in the video below.
With the accelerometer, we can calculate θ = atan2(a_x, a_z) and ψ = atan2(a_y, a_z). Since the results are in radians, we multiply by 180 and divide by π to convert them to degrees. The Arduino code for this is shown below.
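Alongside the Arduino code, here is a minimal Python sketch of the same conversion (the function name and example values are mine):

import math

def accel_to_angles(ax, ay, az):
    # atan2 handles the signs of both arguments and avoids division by zero
    pitch = math.degrees(math.atan2(ax, az))  # theta
    roll  = math.degrees(math.atan2(ay, az))  # psi
    return pitch, roll

# Example: board lying flat, gravity only on the z-axis (readings in mg)
print(accel_to_angles(0.0, 0.0, 1000.0))  # -> (0.0, 0.0)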
I placed the board flat on the table and rotated it in different directions, testing the pitch and roll at {-90, 0, 90} degrees. The results are shown below.
Pitch=0, Roll=0
Roll=90 Roll=-90
Pitch=90 Pitch=-90
To prepare for future labs, I wrote a function in Jupyter where the computer sends requests, and the Arduino replies with data. The data is then plotted on a graph with time on the x-axis and data on the y-axis. The code and result are shown below.
I performed a two-point calibration by positioning the board at 90° and -90° (against the wall) for roll and pitch with specific rotation directions. Using the IMU measurements, I obtained the accelerometer readings for the X, Y, and Z axes, calculated the angles, and compared them with the ideal values. The plot below shows that the error is very small.
From the two-point calibration, we can use the formula:
CorrectedValue = (((RawValue-RawLow) * ReferenceRange) / RawRange) + ReferenceLow
to understand how roll and pitch can be corrected based on the raw computed angles.
The equation for roll is:
CorrectedValue = (((RawValue - 90) * 180) / 180.849) - 90
and the equation for pitch is:
CorrectedValue = (((RawValue - 90) * 180) / 177.548) - 90
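To make the correction concrete, here is a minimal Python sketch of the generic two-point formula; the raw endpoints in the example are assumptions for illustration, chosen so that the raw range matches the 180.849 degrees measured for roll:

def two_point_correct(raw, raw_low, raw_high, ref_low=-90.0, ref_high=90.0):
    # Map the measured interval [raw_low, raw_high] onto [ref_low, ref_high]
    raw_range = raw_high - raw_low
    ref_range = ref_high - ref_low
    return (raw - raw_low) * ref_range / raw_range + ref_low

# Assumed raw endpoints: -90.0 at the -90 degree position, 90.849 at +90
print(two_point_correct(0.0, -90.0, 90.849))  # -> about -0.42 degrees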
The accelerometer data was noisy, and placing it on a car would increase noise further. To reduce this, I collected data (by randomly shaking the board) and performed a Fourier Transform in Jupyter Notebook using NumPy, SciPy, and Matplotlib to determine the cutoff frequency. The images below show the original angle data and its FFT result for analysis.
Based on the FFT result shown in the graph, the most significant spikes occur between 1-3 Hz, so I set the cutoff frequency at 3 Hz. Using the formula:
α = T/(T + RC), where RC = 1/(2πf_c) and T = 1/f_s (the sampling period).
Given a sampling rate of 49.09 Hz, I calculated α ≈ 0.2774.
With the low pass filter, the equation is:
θ_LPF[n] = α·θ_RAW[n] + (1 − α)·θ_LPF[n−1]
θ_LPF[n−1] = θ_LPF[n]
By applying this, we can obtain the data with the low-pass filter. The graph below compares the original and filtered data, showing reduced vibration in the filtered signal, which also confirms that the selected cutoff frequency effectively removes unwanted noise without distorting the primary motion data.
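To make the cutoff-to-alpha step and the filter loop concrete, here is a minimal Python sketch using the values above (3 Hz cutoff, 49.09 Hz sampling rate); the list of raw angles is a placeholder:

import math

f_s = 49.09                  # sampling rate (Hz)
f_c = 3.0                    # chosen cutoff frequency (Hz)
T = 1.0 / f_s
RC = 1.0 / (2.0 * math.pi * f_c)
alpha = T / (T + RC)         # ~0.2774

def low_pass(raw_angles, alpha):
    # theta_LPF[n] = alpha * theta_RAW[n] + (1 - alpha) * theta_LPF[n-1]
    filtered = [raw_angles[0]]
    for theta_raw in raw_angles[1:]:
        filtered.append(alpha * theta_raw + (1 - alpha) * filtered[-1])
    return filtered

print(round(alpha, 4))       # 0.2774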
Using the low-pass filter, we can observe that the signal’s vibration decreases. To be more precise, I placed the IMU on a table and induced noise by gently tapping the table. Below are the original data, its FFT, and the low-pass filtered data. The comparison shows that the filter removes high-frequency noise while preserving motion data.
Using the gyroscope data, we can calculate the raw pitch, roll, and yaw by updating the angles as follows:
pitch_g = pitch_g + myICM.gyrY() * dt
roll_g = roll_g + myICM.gyrX() * dt
yaw_g = yaw_g + myICM.gyrZ() * dt
With a sampling rate of approximately 422Hz, we can generate three plots: one showing the roll, pitch, and yaw from the gyroscope; one displaying the roll from both the gyroscope and accelerometer, along with the low-pass filtered accelerometer data; and another showing the pitch from both the gyroscope and accelerometer, along with the low-pass filtered accelerometer data.
With a sampling rate of approximately 153.5Hz, I generate the following results shown below.
With a sampling rate of approximately 32.91 Hz, with the movement rotating the board around the x-axis from 0° to 90° twice, followed by a rotation around the y-axis from 0° to -90° once, I generated the following results shown below.
The accelerometer is noisy, especially at high sampling rates (the noise is less pronounced at lower rates), but it does not suffer from drift. The gyroscope is less influenced by noise and excels at detecting fast movements, but it drifts over time. Therefore, a complementary filter can be used to combine the strengths of both sensors.
I use a complementary filter to compute an estimate of pitch and roll that is both accurate and stable, using the equation shown below.
θ = (θ + θ_gyro·dt)·(1 − α) + θ_accel·α
I tested by rotating the board around the x-axis from 0° to 90° twice, followed by a rotation around the y-axis from 0° to -90° once. I then compared the original accelerometer data, low-pass filtered accelerometer data, gyroscope data, and the complementary filter output with alpha values of 0.7 and 0.3.
When alpha is large, the filter relies more on the accelerometer, reducing drift but increasing noise, making it suitable for static applications. When alpha is small, it relies more on the gyroscope, reducing vibrations and noise, but increasing drift, making it ideal for fast-moving environments.
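A minimal Python sketch of the complementary-filter update, assuming per-sample gyroscope rates (deg/s), low-pass filtered accelerometer angles (deg), and the time step dt for each sample:

def complementary_filter(gyro_rates, accel_angles, dts, alpha=0.3, theta0=0.0):
    # theta = (theta + gyro_rate * dt) * (1 - alpha) + accel_angle * alpha
    theta = theta0
    estimates = []
    for rate, acc, dt in zip(gyro_rates, accel_angles, dts):
        theta = (theta + rate * dt) * (1 - alpha) + acc * alpha
        estimates.append(theta)
    return estimates

# Larger alpha trusts the accelerometer more; smaller alpha trusts the gyroscope more.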
I sampled data 200 times without any Serial.print, only checking readiness, and storing it in an array. I measured the start and end times, with the sampling rate calculated as 435 Hz. Also, when the data was not ready, I incremented a variable count. The result showed count = 0, which means that as soon as the program started running, the data was already prepared. This indicates that the IMU updates faster than the Artemis main loop. The code below shows how I measured the time gap to calculate the sampling rate.
I also conducted a test where I stored the time after one sample and recorded the time before the IMU was ready for the next sample to determine how quickly new values could be sampled. The result was approximately a few milliseconds.
Next, I added flags to start/stop data recording, with Jupyter sending control signals to collect and store time-stamped IMU data in arrays within the main loop. The screenshots below show my code and the result (I set it to print the number of measurements taken to calculate the sampling rate).
I need to store at least 5 seconds of data. Each 32B measurement includes acceleration (float), angle values (float), and a timestamp (int). With 384kB of memory, I can store 12,000 measurements. At ~330 Hz (slightly reduced due to storage and stop command handling), this allows ~33 seconds of recording.
I tested my car by driving it straight, moving backward, and turning right and left. I also tested its ability to move forward and turn, as well as move backward and turn. Based on my observations, the acceleration and spinning work well. It runs very fast at full speed, but the braking is quite weak. Also, when attempting to move straight, the movement is not perfectly straight, so I may need to implement some calibration in future labs.
The purpose of this lab is to equip our robots with distance sensors to detect surrounding objects.
Both ToF sensors default to the same address 0x52. If both communicate simultaneously, a bus collision occurs. However, since their addresses are programmable, we can use the XSHUT pin to disable one, set a new address for the other, and then use independent addresses.
To prepare for this lab, I designed my wiring diagrams, as shown below.
My robot will use two ToF sensors—one in the front for obstacle detection and one in the back for rear detection, as shown in the diagram below. At the same time, I am also considering whether placing sensors on the sides would help with rotation. However, assuming that the wheels may influence rotation detection, I chose not to place ToF sensors on the sides. The mounting position may be adjusted based on future results.
Since ToF sensors require more flexible placement than the IMU, I chose long QWIIC cables to allow for better positioning on the robot.
First, I connected the ToF sensor to the QWIIC breakout board to verify that it works.
I obtained the I2C address by running an example program in Arduino.
It shows that the address is 0x29, which does not match the default (0x52). However, we can see that 0x29 is derived from 0x52 >> 1. The last bit of the I2C address indicates 0 for a write and 1 for a read, so only the first 7 bits are considered.
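The shift can be checked in one line of Python:

print(hex(0x52 >> 1))  # dropping the R/W bit gives '0x29', the 7-bit I2C address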
The ToF sensor has three modes: Short, Medium, and Long. Short mode offers the highest accuracy in ambient light and the fastest response time, while Long mode provides the longest detection range but is more affected by ambient light and has a slower response. Medium mode balances accuracy, range, and speed.
Since my robot will operate indoors with ambient light and does not move very fast, a longer detection range is not necessary. I initially considered Short mode, but if it proves insufficient, I may switch to a different mode for future experiments. The picture below shows how I used the tape measure and conducted the testing.
I tested the ToF sensor from 50 mm to 1500 mm, repeating the measurement 10 times and calculating the standard deviation, as shown in the plot and table below.
From the results, I conclude that the ToF sensor is generally acceptable. However, there are sometimes significant variations in the measurements. In future tests, I may need to repeat the measurements or combine it with other sensors to improve accuracy.
As shown in the pre-lab section, I soldered the XSHUT pin of one ToF sensor and wired it to pin 8 on the Artemis. The XSHUT pin is initially set low to put that sensor in standby mode, then the other sensor's I2C address is changed to 0x30. Setting XSHUT high afterwards allows both sensors to operate on the same I2C bus with different addresses. The setup code below demonstrates this, using the short distance mode, and the screenshot below shows the results.
After ensuring that both ToF sensors work simultaneously, I added the IMU to our testing. The video below shows that they can operate in parallel.
Next, I focused on recording data as quickly as possible. The first step was to remove all delays and unnecessary data collection points. I also moved the .startRanging() command into the void setup() function so that the loop wouldn't call it every time. Then, I added a time print in the loop() function. With some basic math, I can determine how much time each loop takes. The code is as follows:
As shown in the screenshot below, each loop takes about 8-9 ms, corresponding to a sampling rate of roughly 120 Hz. Since some of that time is spent on serial printing, the I2C communication between the Artemis and the sensor is the limiting factor when collecting data.
I recorded time-stamped ToF data for about 30 seconds (with the ToF sensors in the same horizontal position, moving further away from the wall), and then sent these data over Bluetooth to my computer for plotting.
Next, I combined all the sensors, as shown in the setup below. I tested the system with Jupyter Lab sending commands to start and stop data recording, and the board transmitting the data over Bluetooth to the computer. The plots below show IMU accelerometer data vs. time and data from two ToF sensors vs. time.
Infrared Triangulation Sensors:
Pros: Cheap
Cons: Susceptible to interference, Accuracy depends on surface reflectivity
Infrared Reflectance Sensors:
Pros: Small, Cheap, Good for small-range detection
Cons: Short range
Infrared Time-of-Flight (ToF) Sensors (used in this lab):
Pros: Small, long range, fast response
Cons: Expensive
Although our sensor is more expensive, it offers a greater range, a smaller size for a better fit on the car, and a faster response, making it more suitable for placement on our robot car.
I tested the sensor's sensitivity to colors and textures by taking distance measurements with our ToF sensor in short distance mode, targeting different objects (a white wall, a red bag, a blue bag, and a black fluffy jacket). The picture shows the objects used for testing, and the plot on the right shows the measurements. Although some variations occur above 1200 mm (with a maximum distance of 1350 mm in short distance mode), the results are acceptable and not significantly affected by color or distance.
In this lab, we used the components in hand, soldered them onto our robot car, got our robot moving, and achieved open-loop control of the car.
To prepare for this lab, I drew the wire diagram shown below, using A13–A16 as the control pins for motors.
Our motors are connected to a separate power supply from the Artemis board and other sensors because the motor supply generates more noise and requires higher current, which could potentially interfere with the Artemis board and sensors. Additionally, the motor drivers require more power than the Artemis, so we use an 850mAh battery for the motor drivers and a 650mAh battery for the Artemis.
After soldering the Artemis board to the motor driver, we used an oscilloscope and a power supply to test our PWM signal. The picture below shows the setup.
Based on the DRV8833 datasheet, the motor voltage can range from 2.7V to 10.8V. With the plan to drive the motors with a 3.7V battery, I decided to set the voltage supply to 3.7V.
Next, I tested the motor driver by using the code below to generate a PWM signal. I set pins 13, 14, 15, and 16 as output pins for motor control. Since the PWM signal ranges from 0 (always off) to 255 (always on), I increment the signal up to 255.
With the code below running, I obtained the result on the oscilloscope with two channels connected to each motor driver, which is shown in the video below.
After checking the PWM signal with the oscilloscope, I tested the wheels to ensure they ran as expected with the code below. I also verified movement in both directions, as shown in the video below.
Next, I tested my robot car with batteries—one for the Artemis board and another for the motor driver—using the same code, as shown in the video below.
After soldering all the components and powering the robot car with a battery, I secured everything to the car, as shown in the picture below.
Through experimentation by manually changing the PWM value, I found that the minimum PWM for the robot car to move forward was 33 (a duty ratio of 33/255 ≈ 0.129). Below this value, the car lacks the power to start moving.
For turning, I found that the minimum PWM for the robot car to move was 78 (a duty ratio of 78/255 ≈ 0.306)
However, I noticed that sometimes the minimum value increases by about 5-10 as the same battery is used over time.
To compensate for motor speed differences, I calibrated the robot to move straight for at least 2 meters. I marked the 2m point with small blue tape. The video and code are shown below. With manual adjustment, I found the calibration factor to be 1.125.
Next, I tested the robot to move forward, turn right, turn left, and move backward. The video and code are shown below.
To measure the analogWrite PWM frequency, I let it generate 10 periods and measured the time from start to end. The calculated frequency is approximately 184.3 Hz.
From the result, I think analogWrite is fast enough, as the PWM frequency for our motor driver is 50 kHz (based on the DRV8833 datasheet). Generating a faster PWM signal could also help make the robot car move more smoothly.
Next, I tried to find the lowest PWM value while in motion. I started from a value of 35 and decreased it by 1 every second, with the LED blinking once. Based on the calculation, I found that the lowest PWM value in motion is 26.
I also conducted a test to check how fast the robot could settle at its slowest speed. I started by running the lowest PWM value (33) that made the robot start moving for 0.5 seconds, then changed it to the lowest PWM value in motion (26). Through the experiment, I found that this was the fastest way the robot could settle at its slowest speed. The video is shown below.
Cameron Urban for helping me with soldering during office hours.
Wenyi Fu (2024) for using the blink method when finding the lowest PWM value.
The purpose of this lab is to become familiar with PID control and to use a ToF sensor for implementing position control. My plan is to implement full PID control for position control.
To prepare for this lab, I set up Bluetooth commands to send and receive data. I already have a command, GET_DATA, to retrieve the recorded data, which I completed earlier. I also included the SET_PID command to adjust PID values without re-uploading the board and the START_MOVE command to initiate the car's forward motion with PID control while optionally recording data. Additionally, I added the STOP_MOVE command to stop the pid control and the car. The code is shown below:
Additionally, I added a notification handler to help me receive the data sent from the robot car. This allows me to gather the data, plot it, and also use it for debugging and adjusting my strategy.
A PID controller can be implemented using the equation above, which consists of three components: proportional control Kp, integral control Ki, and derivative control Kd.
The proportional term Kp directly responds to the current error, which is the difference between the current position and the target position. Increasing Kp enhances the correction speed, helping the system reach the desired position more quickly. However, an excessively high Kp may lead to overshooting.
The integral term Ki compensates for accumulated error over time. Even if the error is small, it can build up gradually. While Ki helps eliminate steady-state error, setting it too high may result in excessive overshooting.
The derivative term Kd acts as a damping factor by considering the rate of change of error. It helps slow down corrections when the PI terms produce aggressive responses. However, one drawback of Kd is that it can amplify noise, potentially leading to unstable behavior.
I started building my PID controller with a P controller. Since the task is to run a distance of 2-4 meters, my plan is to run at 100% speed when the error is about 4000mm and at 10% speed when the error is about 400mm. Thus, I began testing with Kp = 0.025. To be more conservative at the beginning, I limited the maximum speed to 45% (with a PWM value of about 100) if the speed exceeded 45%. However, with this setting, my robot still ran into the wall. I tried decreasing Kp slightly, and it turned out to work. As a result, I decided on Kp = 0.02. For other parameter settings, I set pid_pos_target to 500mm since I don't want my car to bump into the wall. The sc_pwm value is 2.55, which helps convert my p term to a PWM signal.
Below is the code and the video.
I also plotted graphs showing ToF sensor data vs. time, P term vs. time, and PWM data vs. time. The ToF sensor shows the distance (mm) to the wall, and the P term represents the current error (the distance to the wall minus 500 mm) multiplied by Kp (which I set to 0.02). The PWM data represents the value I assigned to the motor. These help me better understand how my car works and how I can adjust my plan.
Before moving to the I/D term, I realized that I need to think about how to decouple sampling rate and the PID controlling rate. In my previous work, the main loop waited for the ToF sensor to be ready before continuing, which meant that the PID operation was restricted and delayed by the ToF sampling rate. Based on tests from previous labs, I concluded that the main loop runs faster than the ToF sensor. To prevent delays caused by the ToF sensor, I attempted to extrapolate an estimate of the car’s distance to the wall using the last two data readings from the ToF sensor. Consequently, I revised my code accordingly, as shown below.
Additionally, I calculated both the main loop execution rate and the ToF sampling rate. The main loop runs at 120.779Hz, and the ToF sampling rate is 10.223Hz, which is slower than the main loop. Thus, it is necessary to use the estimation and let the PID controller keep running instead of delaying the PID loop as I did previously.
I plotted the measured ToF data and the estimated data, as shown below.
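For reference, here is a minimal Python sketch of the extrapolation idea (names are my own): keep the last two ToF readings and linearly extend them to the current loop time whenever a new reading is not yet available.

def extrapolate_distance(t_now, t_prev, t_last, d_prev, d_last):
    # Linear extrapolation from the last two ToF readings:
    # (t_prev, d_prev) is the older reading, (t_last, d_last) the newer one
    slope = (d_last - d_prev) / (t_last - t_prev)
    return d_last + slope * (t_now - t_last)

# Example: 1200 mm at t = 0 ms and 1150 mm at t = 100 ms,
# estimated distance at t = 150 ms:
print(extrapolate_distance(150, 0, 100, 1200, 1150))  # -> 1125.0 mm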
Next, I started to implement the PI controller by adding some code, which is shown below.
I only added a few lines to implement the PI controller, introducing a new term—the integrated error—into my overall PWM calculation. I manually adjusted the Ki value, starting with Ki = 0.01. However, when the car needed to slow down and stop, the integral term prevented it from decreasing speed properly. So, I reduced Ki and found that Ki = 0.00001 allowed the car to stop in front of the wall.
I also clamp the integral term between -15 and 15, based on multiple tests.
I also made some small adjustments. After observing my PI terms and distance-speed changes, I found that the I term was still decreasing slowly when the robot car was about to bump into the wall. Thus, I adjusted the I term to 0 when the robot car passed the target point, with the code shown below.
And the video is shown below, along with my plotted graphs displaying ToF sensor data vs. time, P term vs. time, and PWM data vs. time.
Finally, I added the derivative term to the PID controller. First, I calculated the difference between the current error and the last error, then divided by dt. The derivative update is skipped on the first iteration, since there is no previous error yet. I added a low-pass filter (LPF) because the ToF data is noisy, which made the derivative unstable. Adding the LPF helps stabilize the derivative term and makes it more reasonable. Through experimentation, I found that an alpha value of 0.15 works well. The code is shown below.
After multiple tests, I set my Kd to 0.008.
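To summarize the structure of the controller, here is a minimal Python sketch of the same PID logic (proportional term, clamped integral term, low-pass filtered derivative with alpha = 0.15). This is an illustrative sketch with my gains plugged in, not a copy of my Arduino code:

class PIDController:
    def __init__(self, kp, ki, kd, alpha=0.15, i_limit=15.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha           # LPF constant for the derivative term
        self.i_limit = i_limit       # clamp for the integral term
        self.i_term = 0.0
        self.d_filt = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Integral term, accumulated and clamped to [-i_limit, i_limit]
        self.i_term += self.ki * error * dt
        self.i_term = max(-self.i_limit, min(self.i_limit, self.i_term))

        # Derivative term, skipped on the first call, then low-pass filtered
        d_raw = 0.0
        if self.prev_error is not None and dt > 0:
            d_raw = (error - self.prev_error) / dt
        self.d_filt = self.alpha * d_raw + (1 - self.alpha) * self.d_filt
        self.prev_error = error

        return self.kp * error + self.i_term + self.kd * self.d_filt

# Example with gains close to the ones used above (error in mm, dt in s)
pid = PIDController(kp=0.02, ki=0.00001, kd=0.008)
print(pid.update(error=1500, dt=0.01))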
Below is the plot of distance vs time, PID term vs time, PWM vs time, as well as the demo video.
Note: The D term is close to 0 but not exactly 0.
Integrator wind-up occurs when the I term of a PID controller accumulates excessive error because the actuator hits its saturation limit and cannot provide more control. This can lead to overshooting or slow recovery once the system starts responding again. Therefore, preventing wind-up is essential to maintaining stable control.
I believe I have already implemented a solution where the integral term resets to zero when the robot passes the target point. However, I made a small modification to my code: now, the integral term resets when the robot passes the correction point, regardless of direction. The updated code is shown below.
I also tested the system with and without wind-up protection on both a wooden floor and a carpet. The demo video shows the robot running on both surfaces, with and without wind-up protection, and the results of the integral (I) term are shown below. Note: I still clamp the I term between -15 and 15.
With wind-up prevention, the I term won't keep increasing when the robot should be decreasing. In the video above, it seems that with or without wind-up, the results are not significantly different, as both cases stop before hitting the wall. The reason they stop before the wall is that the I term remains small, the velocity is low, and the robot requires a higher PWM value to move on the carpet. However, upon closer observation, the robot with wind-up prevention stops closer to the set point, whereas the robot without wind-up tends to overshoot the target (though luckily, it still stops before the wall).
There are still some problems with my system. Instead of adjusting its position to a target point, my robot tends to stop before the wall without fine-tuning its distance from the wall.
To address this, I made some adjustments through experimental testing. I changed my target point to 300 mm.
Additionally, due to limitations with the motor actuators, the robot won't move if the PWM signal is less than 33 (a value determined from a previous lab). To compensate for this, I added 30 to the PWM signal when the robot needs to move backward but the signal is below the threshold, allowing for small adjustments.
In the end, I found that the appropriate PID parameters are Kp = 0.015, Ki = 0.00001, and Kd = 0.008.
The plots below show that the robot successfully stops before the wall, with a final position of 300 mm from the wall.
Note: The D term is close to 0 but not exactly 0.
The maximum speed of my final system, calculated from the ToF sensor data and timestamps, is 1.3 m/s.
Here is the demo video, with three repeated runs.
Thanks to Ben Liao and Shuchang Wen for their tips on this lab (how to receive Bluetooth data and how to tune the PID), and also to the TAs for their support (ToF sensor setup and tips for the I controller).
The purpose of this lab is to use an IMU for implementing orientation control. My plan is to implement full PID control.
Similar to what I did in Lab 5, I added the command for setting PID parameters for orientation. Based on the original command for start and end control, I added an additional parameter for starting/ending/recording orientation control. I also added the command for sending the orientation information. The code is shown below.
Also, I added a notification handler to help me receive the data sent from the robot car. This allows me to gather the data, plot it, and also use it for debugging and adjusting my strategy.
Next, I experimented with the DMP, which can correct errors and drift by integrating data from the ICM’s 3-axis gyroscope, accelerometer, and magnetometer. I began with Example7_DMP_Quat6_EulerAngles.
I incorporated the DMP initialization to the setup() function on my Artemis.
I am getting orientation measurements from the DMP as quaternions and converting them to an Euler angle to determine the yaw in the get_yaw() function, which will be called in the BLE loop.
PID Input Signal
Integrating gyroscope data over time can lead to drift, causing errors to accumulate. To address this, sensor fusion techniques that incorporate accelerometer and magnetometer data can help stabilize the integration. The gyroscope also has a constant bias, which affects the integrated data. To prevent drift from influencing the system, I'm considering using the Digital Motion Processor (DMP). Additionally, the gyroscope's default maximum spin rate is 250 degrees/sec (dps), which might be too low since my robot can spin faster than that.
Derivative Term
Since the yaw we use in this lab comes from the DMP, it is its own signal, so taking its derivative makes sense. However, when the setpoint changes, there will be a derivative kick that causes a large jump in the output value. I will need a low-pass filter on the D term to smooth the response.
Programming Implementation
For my system, my controller can keep running and continuously receive Bluetooth commands since it operates within the main loop, and receiving Bluetooth commands is also part of the same loop. I think I will need to update the setpoint in real time. I am considering setting the new setpoint automatically, and I believe driving forward and backward, as well as controlling orientation at the same time, is totally workable. I have a controller for position and also one for orientation. I am thinking of letting them work together. I plan to work on this later (not in this lab).
A PID controller can be implemented using the equation above, which consists of three components: proportional control Kp, integral control Ki, and derivative control Kd.
The proportional term Kp directly responds to the current error, which is the difference between the current position and the target position. Increasing Kp enhances the correction speed, helping the system reach the desired position more quickly. However, an excessively high Kp may lead to overshooting.
The integral term Ki compensates for accumulated error over time. Even if the error is small, it can build up gradually. While Ki helps eliminate steady-state error, setting it too high may result in excessive overshooting.
The derivative term Kd acts as a damping factor by considering the rate of change of error. It helps slow down corrections when the PI terms produce aggressive responses. However, one drawback of Kd is that it can amplify noise, potentially leading to unstable behavior.
To get more familiar with the PID controller and also challenge myself, I decided to implement a PID controller.
I started building my PID controller with a P controller. The task is orientation control, where I can assign an angle, and the robot car will rotate back to the assigned angle no matter how I push or rotate it. I assign a target angle in the command SET_ORI_PID.
I first get the new yaw and calculate the error between the target and the current yaw, then multiply by Kp. I also check whether the error exceeds 180 degrees; if it does, I wrap it so the robot rotates through the smaller angle (referencing Stephan Wagner, 2024). The sc_ori_pwm value is 2.55, which helps convert the P term into a PWM signal. I set the highest PWM value to 200 since I don't want my robot to spin too fast. I began testing with Kp = 2 and gradually increased the value. After multiple tests, I finally chose Kp = 3.5. Below is the code.
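A minimal Python sketch of the error-wrapping step (so the robot always rotates through the smaller angle), assuming the yaw and target are both expressed in the range -180 to 180 degrees:

def yaw_error(target_deg, yaw_deg):
    # Wrap the error into [-180, 180] so the robot takes the shorter rotation
    error = target_deg - yaw_deg
    while error > 180:
        error -= 360
    while error < -180:
        error += 360
    return error

print(yaw_error(170, -170))  # -> -20, not 340: turn 20 degrees the short way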
I also plotted graphs showing yaw data vs. time, P term vs. time, and PWM data vs. time. These plots help me better understand how my car operates and how I can adjust my plan. And the demo video shown below.
I only added a few lines to implement the I term, which integrates the error, adds it to the I term, and uses it in the speed calculation. From multiple tests, I set Ki to 1.
I plotted graphs showing yaw data vs. time, P/I term vs. time, and PWM data vs. time. These plots help me better understand how my car operates and how I can adjust my plan. And the demo video shown below.
Finally, I added the derivative term to the PID controller, computing the error difference divided by dt. The derivative update is skipped on the first iteration. To stabilize the derivative, I applied a low-pass filter (LPF), finding that an alpha value of 0.15 works well. The code is shown below. With some testing, I found that Kd = 1.1 performs well.
The graphs below show yaw data vs. time, P/I/D term vs. time, and PWM data vs. time, helping me better understand how my car operates and how to adjust my plan. The demo video is shown below.
Note here, the D term is close to 0 but not exactly zero.
I also tested the DMP sampling rate and the main loop rate. The results are shown below. Compared to the ToF sampling rate from Lab 5, the DMP prepares the data faster than the ToF sensor.
Integrator wind-up occurs when the I term accumulates excessive error due to actuator limits, causing overshooting or slow recovery. To prevent this, I added the code below to reset it when the direction should change.
Note here, the D term is close to 0 but not exactly zero.
In my system, the P term plays a larger role in determining the PWM input, so the influence of windup is not very noticeable. However, from the plot, we can observe that when the car needs to change direction, the I term resets to 0 instead of gradually decreasing as I had originally planned.
I also tested my system on a carpet. Since the minimum force required for the car to rotate is relatively large on the carpet, it moves very slowly, but it is still returning to the assigned angle. It can run faster; however, my system is limited to a max PWM value of 200. I may need to increase the max PWM value and also adjust the weight of Ki so that the integral term can keep accumulating, which would lead to more response. However, I didn't plan to implement this, as I don't expect to run my car on the carpet most of the time. That said, the results show that my car can work on different surfaces.
Note here, the D term is close to 0 but not exactly zero.
In this lab, I will use a Kalman Filter to supplement my slowly sampled ToF sensor.
I need to estimate the drag and momentum terms for the A and B matrices to build the state-space model for my system. With the equations below, I can determine how to obtain the drag and momentum terms.
To find these values, I used a BLE command to set the PWM signal to 100, which was the maximum value in Lab 5, and drove toward a wall while collecting ToF sensor data. With the ToF readings and time information, I can calculate the velocity.
I found that the steady-state speed is about 2.35 m/s, and the 90% rise time is roughly 2 seconds. The speed at the 90% rise time is about 2 m/s, and u is 1 (meaning a PWM of 100 is my maximum input). Thus, the drag and the mass of the system are:
With these two values, I can compute the matrices A and B:
Next, I discretize my matrices and identify the C matrix. C is an m×n matrix (where n is the number of dimensions in the state space and m is the number of outputs). I also initialized the state vector x with the first of the ToF distance readings collected earlier. The code is shown below.
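As a cross-check on the numbers, here is a minimal Python sketch of the model construction using the measured steady-state speed (2.35 m/s) and 90% rise time (2 s), with u normalized so that a PWM of 100 corresponds to u = 1; the loop period dt below is an assumption for illustration:

import numpy as np

v_ss = 2.35                  # steady-state speed (m/s) at u = 1
t_90 = 2.0                   # 90% rise time (s)

d = 1.0 / v_ss               # drag, ~0.426
m = -d * t_90 / np.log(0.1)  # mass, ~0.370

A = np.array([[0.0, 1.0],
              [0.0, -d / m]])
B = np.array([[0.0],
              [1.0 / m]])

dt = 0.009                   # assumed main-loop period (s)
Ad = np.eye(2) + dt * A      # discretized A
Bd = dt * B                  # discretized B
C = np.array([[1.0, 0.0]])   # we observe only the distance state (sign depends on convention)

print(round(d, 3), round(m, 3))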
To implement the Kalman filter, I still need to specify the process noise and sensor noise covariance matrices, which require a total of three covariance values.
For the process noise, the sampling time for distance and velocity is 0.09 s. As for the measurement noise, I assume that the value is 20 mm, indicating that for each ToF sensor measurement, the likely error is less than 20 mm.
From these values, I can define the process noise and sensor noise covariance matrices, which are computed below.
I copied the Kalman Filter function provided in the lab guide into Jupyter Lab, and I created an array, kf_data, to store the results of the Kalman Filter.
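For reference, the lab-guide function implements the standard Kalman filter prediction and update; a minimal sketch of those equations, written against the Ad, Bd, C, sig_u, and sig_z defined here, looks like this:

import numpy as np

def kf(mu, sigma, u, y, Ad, Bd, C, sig_u, sig_z):
    # Prediction step: propagate the state and its uncertainty through the model
    mu_p = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma.dot(Ad.T)) + sig_u

    # Update step: correct the prediction with the new ToF measurement y
    sigma_m = C.dot(sigma_p.dot(C.T)) + sig_z
    kf_gain = sigma_p.dot(C.T).dot(np.linalg.inv(sigma_m))
    mu_new = mu_p + kf_gain.dot(y - C.dot(mu_p))
    sigma_new = (np.eye(len(mu_p)) - kf_gain.dot(C)).dot(sigma_p)
    return mu_new, sigma_new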
I need to specify initial states and parameters. I converted the PWM values into the input sc_pwm, scaling them to the range [0, 1] by dividing by the step size of 100. The initial x vector is initialized with the ToF readings, and the sig matrix represents the initial state uncertainty. These variables are updated in each iteration of the Kalman Filter. The Kalman Filter is then called, and the results are stored in kf_data.
sig=[[5**2,0], [0,5**2]], sig_z=[400], sig_u=[[1111,0], [0,1111]]
With the result above, I tried adjusting sig_z because a value of 400 caused the Kalman Filter to trust the sensor data less. By reducing it to 100, the filter can trust the sensor data more, which helps decrease the lag in the estimation.
sig=[[5**2,0], [0,5**2]], sig_z=[100], sig_u=[[1111,0], [0,1111]]
sig_u reflects how much we trust the model (larger values mean less trust). I tried increasing sig_u to sig_u=[[2500,0], [0,2500]], which decreased the confidence in the model, and got the following result:
sig=[[5**2,0], [0,5**2]], sig_z=[100], sig_u=[[2500,0], [0,2500]]
Update (after grading): There is a spike at the beginning. This is due to an incorrect setting—some of the positive or negative signs were set incorrectly. For example, when the car runs into a wall, the ToF sensor distance decreases, which means the derivative is negative. Therefore, the motor input we provide should also be negative (I hope my understanding is correct).
I set up two Bluetooth commands for running the Kalman filter on my robot car. One is RUN_KF, which sets the PID parameters, gets the initial distance for the initial setting of the Kalman filter, and triggers the PID controller with the Kalman filter. The other is GET_KF_DATA, which sends the data to the Jupyter Notebook, helping me with debugging and plotting. The code is shown below.
I implemented a notify handler to receive and organize the data, which assists me in plotting and debugging.
Next, I add the Kalman filter to my Arduino code, as shown below.
I made some modifications to the PID controller from Lab 5, using the Kalman filter to estimate the value. The modified code is shown below.
The first time I tested my Kalman filter, I got the following result, which shows that the filter was not working correctly and was updating too slowly.
After carefully checking, I found that my delta t was wrong. It should be the dt between iterations of the main loop, which represents the prediction rate, but I had set it to the dt between new ToF sensor readings. After fixing it, the result is reasonable. Below are the Kalman filter estimate and the measured data vs. time, my P/I/D terms vs. time, and the motor PWM vs. time, along with the testing video.
In this lab, I need to combine everything I've done so far and perform a fast stunt. I want to make use of orientation control, so I chose to perform a drift-style stunt. The robot will start at the designated line, drive forward at high speed, initiate a 180-degree turn when it is within 914 mm of the wall, and go back to the starting point.
Most of the prelab work was already done in Labs 5 to 7, and I didn’t do anything specifically for the prelab.
I set the starting point to be 2500 mm from the wall. Next, I separated my controlled stunt into three steps. The first is position control, which includes a Kalman filter and stops the robot within 914 mm from the wall. The second step uses orientation control to initiate a 180-degree turn after stopping in front of the wall. The final step is returning to the starting point, which is performed using open-loop control without any sensors or PID.
For the first step, I used what I implemented in Lab 7. The only change was adjusting the target distance to 900 mm. The recorded video is shown below.
After making sure the Kalman filter works and the robot stops within 914 mm of the wall, I added orientation control to implement the 180-degree turn.
With the Lab 6 orientation control in hand, I directly used the function I previously created to handle orientation. I added some code to trigger when the robot reaches 914 mm from the wall—at that point, it stops the current position control, assigns the target angle for a 180-degree turn based on the current orientation, and then starts the orientation control.
Initially, when attempting a 180-degree turn, the robot would spin in multiple circles before settling near the target angle, as shown in the first video below. After re-tuning the PID parameters for orientation control by decreasing the proportional term and increasing the derivative term (Kp = 0.4, Ki = 0.1, Kd = 20), the robot is now able to perform the 180-degree turn more effectively—though it still occasionally deviates by about 5 degrees. This improved behavior is demonstrated in the second video below.
Next, I added the movement for my robot to return to the starting point. After testing, I found that a motor PWM of 230 for 550 ms, followed by stopping, works well. Below are my code and video.
After that, I attempted to increase the robot's speed by raising the proportional term in the position control PID. The final tuned values were Kp = 0.031, Ki = 0.00001, and Kd = 0.008. Below, I present the corresponding distance data, as well as the PID P-term, I-term, D-term, and motor PWM output.
I also recorded the degree data along with the PID P-term, I-term, D-term, and motor PWM output for orientation control.
Three final demonstration videos are included to show the results.
In conclusion, I used a drifting approach to perform the stunt. A Kalman filter was applied for position estimation, and a PID controller was used to move the robot forward toward the wall. Orientation control was implemented for turning, assisted by the Digital Motion Processor (DMP), and the robot was guided back to the starting point by assigning appropriate motor values.
The total fastest time for the robot to start from the initial position, move forward, perform a 180-degree turn, and return to the starting point was 3.4 seconds.
In this lab, I am going to map out a static room, which will be used later for localization and navigation tasks. I will place my robot in different marked locations in the room and have it spin around its axis while collecting ToF readings. With the data collected, I will apply transformation matrices, plot the points, and build the map.
In this lab, I need the orientation control implemented in Lab 6, and I should also review Lecture 2 on transformation matrices.
The first step was to have the robot rotate on its axis to collect ToF sensor readings. I decided to use Option 2: orientation control in this lab, with approximately 20-degree increments per full 360-degree rotation, resulting in 18 ToF data readings. This should provide enough resolution to build the map.
I added the turn-around function here, which sets the target angle to the current angle plus 20 degrees. It also calls the orientation PID control from Lab 6 to turn and collect distance data from the Time-of-Flight sensor. Based on the PID controller in Lab 6, I added a condition so that if the error is less than 2 degrees, it will trigger the turn-around function again. This process will repeat 18 times.
I have the video below, along with the degrees vs. time data.
Note: The video and the graph do not match. The video was recorded while I was still making modifications to the PID parameters and the deadband value. The graph was recorded after I completed those modifications.
Based on the data shown in the graph, the robot’s turns are mostly reliable; however, there is about a 10 cm drift after each turn. Therefore, it’s possible that I will have an average error of around 10 cm when running the robot in the middle of a 4×4 m empty room.
The robot was placed at the following points: (-3, -2), (0, 3), (5, 3), (5, -3), and (0, 0). The image below shows the location of each point in the lab.
I also added the code to read the ToF data after each 20-degree increment, which was implemented earlier in the control section.
Next, I added the command GET_TURN_DATA, so the robot can send the data to the computer.
I started with the same orientation each time, where the ToF sensor was positioned at y = 0, facing the positive x-direction, and began rotating to the right. The distance vs. time plots are shown below.
Point 1 (-3, -2)
Point 2 ( 5, 3)
Point 3 ( 0, 3)
Point 4 ( 5, -3)
Point 5 ( 0, 0)
I recorded the data twice and found that the values were not always consistent. It’s possible that the angle measurements were affected by noise in the IMU, causing the system to record data from slightly different positions. It’s also possible that the ToF sensor did not function properly at a few data points.
With distance and angle data, I can calculate the x and y values. To align more closely with the map, I added 8 cm to my distance data since the distance from the center to my ToF sensor is 8 cm.
For the angle obtained from orientation control, the value ranges from -180 to 180, so I adjusted it so that the first value is 0 and transformed it into a 0-360 range.
However, I found that the measurements were slightly skewed from what I expected, which may be due to DMP drifting. To fix this, I added a constant angle offset to the data sets.
Next, I calculated the x distance by multiplying the distance value with the cosine of the angle, and the y distance by multiplying the distance value with the negative sine of the angle. The code is shown below.
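A minimal Python sketch of this transformation, assuming the ToF distances are in mm, the angles are in degrees (0-360, increasing clockwise because the robot turns to the right), an 80 mm sensor offset from the center, and a conversion to feet so the points line up with the map coordinates:

import numpy as np

MM_PER_FT = 304.8
SENSOR_OFFSET_MM = 80.0      # ToF sensor sits about 8 cm from the robot's center

def tof_to_global(distances_mm, angles_deg, x_offset_ft, y_offset_ft):
    d = (np.asarray(distances_mm) + SENSOR_OFFSET_MM) / MM_PER_FT
    theta = np.radians(np.asarray(angles_deg))
    x = d * np.cos(theta) + x_offset_ft    # robot-frame x, then translate
    y = -d * np.sin(theta) + y_offset_ft   # minus sine because the scan is clockwise
    return x, y

# Example for the point at (-3, -2) ft with two placeholder readings
x, y = tof_to_global([1200, 900], [0, 20], x_offset_ft=-3, y_offset_ft=-2)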
Below are polar plots, and scatter plot on the XY coordinates (without making adjustments to each point).
Point 1 (-3, -2)
Next, I transformed the data into the global frame by subtracting 3 from the x values and 2 from the y values.
Point 2 ( 5, 3)
Next, I transformed the data into the global frame by adding 5 to the x values and 3 to the y values.
Point 3 ( 0, 3)
Next, I transformed the data into the global frame by adding 3 to the y values.
Point 4 ( 5, -3)
Next, I transformed the data into the global frame by adding 5 to the x values and subtracting 3 from the y values.
Point 5 ( 0, 0)
Since it's the origin, I didn't need to do anything else.
By combining all the data points, I obtained the following graph.
By running the data collection process twice, it became easier to identify and plot the lines representing the walls and boxes. Although there were still a few outlier points, I chose to ignore them, only recognizing the presence of dense clusters of consistent data that indicated a clear linear pattern.
Next, I manually entered the coordinates for the walls and boxes, and plotted them on my map along with the data points. The code is shown below.
I now have four variables—wall_starts, wall_ends, box_starts, and box_ends—each containing coordinate pairs:
wall_starts = [[0.5, -4.5], [6.5, -4.5], [6.5, 4.6], [6.5, 4.6], [-2.45, 4.6], [-2.45, 0.7], [-5.25, 0.7], [-5.25, -4.2], [-0.7, -4.2], [-0.7, -2.5]]
wall_ends = [[6.5, -4.5], [6.5, 4.6], [6.5, 4.6], [-2.45, 4.6], [-2.45, 0.7], [-5.25, 0.7], [-5.25, -4.2], [-0.7, -4.2], [-0.7, -2.5], [0.5, -2.5]]
box_starts = [[2.5, -0.8], [4.5, -0.8], [4.5, 1.7], [2.5, 1.7]]
box_ends = [[4.5, -0.8], [4.5, 1.7], [2.5, 1.7], [2.5, -0.4]]
I’ll later import these lists into the simulator to visualize the map.
I would like to thank Ben Liao for helping me tune my robot’s turns, and Shuchang Wen for lending me a battery. I’m also grateful to the TA for generously extending office hours by half an hour on Tuesday night, which allowed me to collect my data.
In this lab, I am going to implement grid localization using Bayes filter.
To prepare for this lab, I need to set up the simulator: install the pip packages, make sure tkinter is set up correctly, and install the Box2D package. I also need to read the background material on localization and the Bayes filter.
Bayesian filtering helps a robot estimate its position by updating its belief using sensor measurements and control actions. With new data, it combines prior information to produce a probabilistic estimate of the robot’s location, effectively handling uncertainty in both sensor readings and robot motion. In this lab, I will implement several helper functions for the Bayes Filter, as described in the following report.
I implemented five helper functions—compute_control, odom_motion_model, prediction_step, sensor_model, and update_step.
compute_control
The compute_control function computes the odometry model parameters shown below.
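A minimal sketch of compute_control, assuming poses are given as (x, y, yaw in degrees) and using a small helper (the name is mine) to wrap angles into [-180, 180):

import math

def normalize_angle(a):
    # Wrap an angle in degrees into [-180, 180)
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    x, y, yaw = cur_pose
    px, py, pyaw = prev_pose

    delta_trans = math.hypot(x - px, y - py)
    delta_rot_1 = normalize_angle(math.degrees(math.atan2(y - py, x - px)) - pyaw)
    delta_rot_2 = normalize_angle(yaw - pyaw - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2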
odom_motion_model
The odom_motion_model function calculates the likelihood of the robot transitioning from its previous position to its current one based on the given control inputs. To account for the uncertainties inherent in these movements, it uses Gaussian distributions.
prediction_step
The prediction_step function is used in the prediction phase of the Bayes Filter algorithm. It calls odom_motion_model to compute the likelihood of transitioning from the previous pose to each possible current pose. To improve computational efficiency, any belief value less than 0.001 is ignored, although this may cause a slight loss of accuracy. The final belief is then updated by combining the transition probabilities with the prior beliefs for each pose.
sensor_model
I need my sensor model to provide a probability distribution over observations. This is done by comparing each actual observation (loc.obs_range_data[i]) and the expected observation (obs[i]) using a Gaussian distribution.
update_step
The update_step multiplies the measurement probability (from the sensor model) by the predicted belief (from the prediction model) to obtain the updated belief.
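A minimal sketch of that multiplication and normalization, assuming the predicted belief and the per-cell measurement likelihoods are stored as NumPy arrays over the (x, y, theta) grid (the variable names here are mine, not necessarily the lab's exact API):

import numpy as np

def update_step(bel_bar, measurement_likelihood):
    # bel_bar: predicted belief over the (x, y, theta) grid
    # measurement_likelihood: same shape, product of the per-ray sensor-model
    #   probabilities for each grid cell
    bel = bel_bar * measurement_likelihood
    bel /= np.sum(bel)           # normalize so the belief sums to 1
    return bel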
I conducted two simulations: one for localization without the Bayes filter and one for localization with the Bayes filter. In the video below, the raw odometry model is shown in red, the ground truth in green, and the Bayes filter in blue. From both videos, we can observe that the Bayes filter provides a much more accurate estimate of the location.
Localization Without Bayes Filter
Localization With Bayes Filter
The formula above is directly screenshotted from the lecture slides.
In this lab, I will implement localization using the Bayes filter on a real robot. Since the robot's motion is too noisy to make the prediction step useful, I will perform only the update step, using data from the ToF sensor during 360-degree scans.
To prepare for the lab, I need the background knowledge covered in the lecture, as well as the simulation setup and the observation loop for a 360-degree turn with 20-degree increments and ToF readings, which I already completed in the previous lab, along with setting up the base code.
I ran the notebook lab11_sim.ipynb and took a screenshot of the final plot. The raw odometry model is shown in red, the ground truth in green, and the Bayes filter in blue.
To collect data, we rotate the robot counterclockwise by 20 degrees each time and read the ToF sensor data—a procedure I already implemented in a previous lab. I added the observation_loop command to the Arduino to send the necessary data, and I also added a notify handler in Python to receive the data.
I implemented the member function perform_observation_loop of the RealRobot class to collect and organize the data that will be used in the update step.
Below are the results at the four assigned marked positions and at (0,0).
(-3 ft ,-2 ft ,0 deg)
Ground Truth (ft, ft, angle): (-3,-2,0)
Ground Truth (m, m, angle): (-0.914,-0.607,0)
Belief (m, m, angle): (-0.914,-0.610,10)
Belief Probability: 1.0
(0 ft,3 ft, 0 deg)
Ground Truth (ft, ft, angle): (0,3,0)
Ground Truth (m, m, angle): (0,0.914,0)
Belief (m, m, angle): (0,0.914,-10)
Belief Probability: 1.0
(5 ft,-3 ft, 0 deg)
Ground Truth (ft, ft, angle): (5,-3,0)
Ground Truth (m, m, angle): (1.524,-0.914,0)
Belief (m, m, angle): (1.524,-0.914,10)
Belief Probability: 0.9999998
(5 ft,3 ft, 0 deg)
Ground Truth (ft, ft, angle): (5,3,0)
Ground Truth (m, m, angle): (1.524,0.914,0)
Belief (m, m, angle): (1.524,0.610,-10)
Belief Probability: 0.9999999
(0 ft,0 ft, 0 deg)
Ground Truth (ft, ft, angle): (0,0,0)
Ground Truth (m, m, angle): (0,0,0)
Belief (m, m, angle): (0,0,-10)
Belief Probability: 0.9999999
The result is better than I expected; most of the points are accurate, with belief probabilities close to 1. This may be due to grid quantization that causes nearby positions to be rounded together. Although some results fall on the same position, the angles still show some mismatches. This may be because certain locations appear similar with small angle changes, or because the robot’s rotation and the ToF sensor readings are not entirely accurate.
I also reran the entire localization process and found that, except for the point (5 ft, 3 ft, 0), all other points still matched their ground truth, which is great. As for the point (5 ft, 3 ft, 0), there is still a slight error, which I suspect is due to off-axis precession during data collection, or possibly because the sensor slightly missed the box corner, causing the measurements to indicate a lower position.
I also took a video during the localization run.
In this lab, I need to build on the previous lab in order to follow the assigned path.
My initial plan was to build on the work I completed in Lab 11, which achieved good localization results, and use it for position estimation in this lab. By using the current localization value and the coordinates of the next target point, I could estimate the required rotation angle (implemented in Lab 6) and calculate the distance between the two points to determine an approximate PWM value for a specific duration (to be determined through testing). If there was a wall in the robot's forward path, I planned to use the ToF sensor (implemented in Lab 5) to detect obstacles and make the system more stable.
However, when I completed my code and began testing it on the first three points in a real-world scenario, I found that the localization process was quite slow (about 2 minutes). The data often had to be resent, and the localization results were not consistently stable. After observing that other students achieved very good results using non-localization methods, I decided to reduce the difficulty of the lab in order to make the system more reliable and easier to manage.
For each rotation between waypoints, I directly used the orientation control implemented in Lab 6. The required rotation angle was calculated based on the coordinates of two consecutive waypoints.
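A minimal Python sketch of that calculation (waypoint coordinates in feet, headings in degrees; the example coordinates are placeholders):

import math

def turn_to_next(current_heading_deg, current_wp, next_wp):
    # Heading of the segment from the current waypoint to the next one
    dx = next_wp[0] - current_wp[0]
    dy = next_wp[1] - current_wp[1]
    desired = math.degrees(math.atan2(dy, dx))
    # Rotation needed, wrapped into [-180, 180)
    turn = (desired - current_heading_deg + 180.0) % 360.0 - 180.0
    return desired, turn

# Example: facing 0 degrees at (-4, -3), next waypoint at (-2, -1) -> turn 45 degrees
print(turn_to_next(0.0, (-4, -3), (-2, -1)))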
For the first three and the last straight-line segments, I used simple PWM control with a time limit. The parameters (PWM value and duration) were determined through experimental testing.
For the remaining three straight-line segments, I applied the position control implemented in Lab 5 to determine how far the robot should move, as illustrated in the diagram shown below.
Based on the plan described above, I wrote the code as shown below.
Each cycle begins with a command that rotates the robot car to face the direction of the next waypoint. After issuing the rotation command, the system pauses for 8 seconds to allow the robot sufficient time to reach the correct orientation, as it sometimes takes a few seconds to adjust. Once the orientation is aligned, the robot moves forward—either using a time-based PWM control or the PID-based position control—until it reaches the next point. This process is repeated for all 7 segments between the waypoints.
Through several tests, the run was still not perfect.
In some cases, when the robot slightly deviated from the intended path and required minor adjustments, I manually intervened by gently pushing the car to help it continue along the correct trajectory.
The recorded videos are shown below.
Since some segments are controlled using open-loop methods while others utilize sensors (IMU for rotation and ToF for distance), the overall accuracy is not always consistent.
In particular, when the robot rotates and accidentally hits a wall, it lacks the ability to self-correct or recover from the misalignment, which can result in significant deviation from the intended path.
Additionally, the orientation control is not always precise. When traveling long distances, even a small angular error can accumulate and cause the robot to veer off course and potentially crash into a wall.
Although the robot may not stop exactly on the designated waypoints, the overall trajectory still visibly follows the intended path.
In this lab, I worked independently. However, I would like to thank the TA for helping me identify an issue with a bent Qwiic pin, which had caused incorrect IMU readings and took me a significant amount of time to troubleshoot.