Interactive Detection Robot

The interactive detection robot lets the user wirelessly control the device and see its surroundings through the camera module. The RPLidar provides a 360° view of the robot's environment.

On the left is a picture of the visual model that was used; on the right is the model that was built

Initial Goal vs. Accomplished Goal

Our initial project expectation was to create an autonomous robot capable of 2D SLAM mapping a room and using that map to detect and avoid objects while optimizing the path it takes. Due to many difficulties with the RPLIDAR A1 and getting it to work with RoboStudio, our focus for this project shifted to creating an interactive robot that the user can control wirelessly while getting visual feedback from the camera and lidar.

Goals Accomplished

  • Finished building hardware (robot car, RPLIDAR, and RPLIDAR stand)
  • Tested the software program using Python 3
  • Tested robot mobility and sensors
The above shows the parts ordered for the car

The above pictures show the building of the robot car from individual parts

This is a video of testing the motors on the car

Testing mobility and motors of the car

This is a visual representation of the code running to make the robot move around a room wirelessly
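
The actual project software is in the zip file linked at the bottom of this page. Purely as an illustration of the structure, here is a minimal sketch of a TCP command server that could run on the Raspberry Pi; the port number and motor functions are placeholders, not names from the project code:

```python
import socket

HOST, PORT = "0.0.0.0", 9999   # placeholder port, not from the project code

# Placeholder motor functions; the real robot would drive its motor HAT here.
def forward():  print("driving forward")
def backward(): print("driving backward")
def left():     print("turning left")
def right():    print("turning right")
def stop():     print("stopping")

COMMANDS = {b"w": forward, b"s": backward, b"a": left, b"d": right, b"x": stop}

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    conn, addr = server.accept()          # wait for the operator's laptop
    print("connected:", addr)
    with conn:
        while True:
            key = conn.recv(1)            # one-byte commands from the client
            if not key:
                break                     # client disconnected
            COMMANDS.get(key, stop)()     # unknown input stops the robot
```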

RPLIDAR A1

RPLIDAR Recap:

The RPLIDAR A1 is based on the laser triangulation ranging principle and uses high-speed vision acquisition and processing hardware developed by Slamtec.
The system performs more than 8,000 distance measurements per second.
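
For a sense of what consuming that data stream looks like, here is a minimal sketch using the open-source rplidar Python package; the serial port path is an assumption and depends on how the lidar is connected:

```python
from rplidar import RPLidar

lidar = RPLidar("/dev/ttyUSB0")   # assumed serial port for the USB adapter

try:
    # Each scan is a list of (quality, angle in degrees, distance in mm)
    # tuples covering roughly one full 360° rotation.
    for i, scan in enumerate(lidar.iter_scans()):
        print(f"scan {i}: {len(scan)} measurements")
        if i >= 4:                # stop after a few rotations for this demo
            break
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()
```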

Goals Accomplished:

  • Using a Raspberry Pi 4 Model B to visualize the data obtained from the RPLidar (a sketch of this kind of display appears below the photos)
  • Integrating the RPLidar onto the robot car

The above shows the RPLIDAR scanning and displaying the scanned object on the screen
This is the stand for the RPLidar
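
As an illustration of the visualization goal above, here is a minimal sketch that grabs one full rotation from the lidar and draws it as a polar scatter plot with matplotlib; this is an approximation, not the exact code used:

```python
import math
import matplotlib.pyplot as plt
from rplidar import RPLidar

lidar = RPLidar("/dev/ttyUSB0")             # assumed serial port

try:
    scan = next(lidar.iter_scans())         # grab one full rotation
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()

# Convert (quality, angle in degrees, distance in mm) tuples to polar coords.
angles = [math.radians(angle) for _, angle, _ in scan]
dists  = [dist for _, _, dist in scan]

ax = plt.subplot(projection="polar")
ax.scatter(angles, dists, s=2)
ax.set_title("One RPLIDAR A1 scan (distance in mm)")
plt.show()
```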

Attempted Approaches for Mapping

  1. The first approach we tried was to use RoboStudio for SLAM mapping, which did not work because RoboStudio was not able to SSH into the car.
  2. The second approach was to extract the distance and angle from the RPLidar program and use this information for object avoidance. The problem with this approach is that when the RPLidar code was imported into the program that runs the car, execution entered the RPLidar function call and never returned, so the rest of the program never ran. Many attempts at a trigger call were made, but without success.
  3. Connecting an ultrasonic sensor would have measured the distance from the car to any object in the room; however, this did not work because the HAT on the Raspberry Pi blocked the GPIO pins that needed to be connected to the ultrasonic sensor in order to run.
This is our third approach, using an ultrasonic sensor for object detection; we used the breadboard as a voltage divider stepping the sensor's 5 V output down to 3.3 V to accommodate the Raspberry Pi
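
Had the GPIO pins been free, the sensor read itself would have been straightforward. Below is a minimal sketch for an HC-SR04-style ultrasonic sensor using the RPi.GPIO library; the pin numbers are assumptions, and the echo pin is the one that needs the 5 V to 3.3 V divider described above:

```python
import time
import RPi.GPIO as GPIO

TRIG = 23   # assumed free GPIO pin driving the sensor's trigger
ECHO = 24   # assumed pin reading echo, through the 5 V -> 3.3 V divider

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10 microsecond pulse on TRIG starts one measurement.
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)

    # The echo pulse width is the round-trip time of the sound burst.
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    # Distance = round-trip time * speed of sound (34300 cm/s) / 2.
    return (end - start) * 34300 / 2

try:
    while True:
        print(f"{distance_cm():.1f} cm")
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```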

Possible Alternative Approaches

  1. A different approach would be to get the RPLidar program to extract the angle and distance. From there, this data would be integrated into the car program, making sure each part works on its own before combining the software (see the sketch after this list).
  2. Building the software from scratch would cut out the unnecessary complexity in the RPLIDAR program as well as the car program.
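
As a concrete sketch of alternative 1, the blocking behavior from attempted approach 2 can be avoided by reading the lidar on a background thread; the class below is illustrative, with an assumed serial port, not existing project code:

```python
import threading
from rplidar import RPLidar

class LidarReader(threading.Thread):
    """Reads scans in the background so the car's control loop
    never blocks inside the RPLidar library call."""

    def __init__(self, port="/dev/ttyUSB0"):   # assumed serial port
        super().__init__(daemon=True)
        self._lidar = RPLidar(port)
        self._lock = threading.Lock()
        self._latest = []                      # newest (angle, distance) pairs

    def run(self):
        for scan in self._lidar.iter_scans():
            pairs = [(angle, dist) for _, angle, dist in scan]
            with self._lock:
                self._latest = pairs

    def snapshot(self):
        """Return the most recent scan without blocking."""
        with self._lock:
            return list(self._latest)

reader = LidarReader()
reader.start()
# The car program can now poll reader.snapshot() each control cycle
# and steer away from any (angle, distance) pair that is too close.
```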

The following zip file contains the main software used in the development and execution of the Interactive Detection Robot.
