Project Update 9: System Integration Progress and Computer Vision Updates

The team successfully integrated the camera, motor, and audio subsystems with the rover drivetrain. Pictures of the subsystems and the initial integrated prototype are shown below.

Camera/Audio subsystem configured with Raspberry Pi
Fully Integrated Rover Prototype

We used VNC Viewer to establish a wireless remote desktop connection between an external laptop and the Raspberry Pi, allowing for improved interfacing capabilities. The preliminary testing results of the new teleoperated control using this setup are shown below.

Computer Vision Updates

Integrating computer vision with the Arduino motor controller logic has proven successful through the use of the PySerial library. However, due to budget constraints, we were unable to obtain a stereo camera for improved accuracy in locating objects. Nevertheless, our team has focused on building a capable semi-autonomous system, which is now operational. The system uses a single camera, but with computer vision integrated into a GUI it provides users with useful information about the location and distance of detected objects (the distance measurement is currently only shown in the terminal). This extra guidance enhances the user's experience and enables smoother operation of the device.
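
To illustrate the PySerial link between the vision code and the Arduino, here is a minimal sketch. The serial port name, baud rate, one-letter command protocol, and the known-width/focal-length distance estimate are illustrative assumptions rather than our exact code.

```python
# Minimal sketch: forward a detection to the Arduino over PySerial.
# Port name, baud rate, command letters, and calibration constants are
# illustrative assumptions, not the project's exact values.
import serial

KNOWN_WIDTH_CM = 20.0     # assumed real-world width of the target object
FOCAL_LENGTH_PX = 600.0   # assumed camera focal length from calibration

def estimate_distance_cm(bbox_width_px):
    """Monocular distance estimate from apparent size (similar triangles)."""
    return (KNOWN_WIDTH_CM * FOCAL_LENGTH_PX) / bbox_width_px

def steer_toward(ser, bbox_center_x, frame_width, deadband=40):
    """Send a one-letter drive command based on where the object sits in frame."""
    offset = bbox_center_x - frame_width / 2
    if offset < -deadband:
        ser.write(b'L\n')   # object left of center -> turn left
    elif offset > deadband:
        ser.write(b'R\n')   # object right of center -> turn right
    else:
        ser.write(b'F\n')   # roughly centered -> drive forward

if __name__ == "__main__":
    ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)  # assumed Arduino port
    # Example detection: bounding box center x = 250 px, width = 120 px,
    # in a 640 px wide frame (values would come from the vision pipeline).
    print(f"Estimated distance: {estimate_distance_cm(120):.1f} cm")  # terminal only
    steer_toward(ser, bbox_center_x=250, frame_width=640)
    ser.close()
```

On the Arduino side, the corresponding logic would read each command character off the serial line and set the motor driver outputs accordingly.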

The image below shows these features in the GUI interface.

Project Update 6: Fall Demo

The rover currently features fully working teleoperated functionality through wireless keyboard control. The power supply issue was resolved by adding a separate source to power the Raspberry Pi, and the Pi can now also be connected through serial communication on a shared WiFi network.

Below are two video demos highlighting the following:

  • Differential drive for all movement – a unique steering algorithm controlled through the teleoperated function (see the sketch after this list)
  • Smooth rotation using controlled speeds and directions, built on modular and abstracted programming principles
  • Ease of use through a simple user interface
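
As a rough illustration of the differential-drive steering idea (not our exact implementation), the sketch below mixes a forward command and a turn command into left/right wheel speeds. The key bindings, speed range, and function names are hypothetical.

```python
# Minimal sketch of differential-drive mixing for keyboard teleoperation.
# Key bindings, the 0-255 speed range, and function names are assumptions.

def mix_differential(throttle, turn, max_speed=255):
    """Convert forward (throttle) and turn commands in [-1.0, 1.0] into
    left/right wheel speeds for a differential drivetrain."""
    left = max(-1.0, min(1.0, throttle + turn)) * max_speed
    right = max(-1.0, min(1.0, throttle - turn)) * max_speed
    return int(left), int(right)

# Assumed key-to-command mapping for the teleoperated interface.
KEY_COMMANDS = {
    'w': (1.0, 0.0),    # forward
    's': (-1.0, 0.0),   # reverse
    'a': (0.0, -0.5),   # rotate left in place at reduced speed
    'd': (0.0, 0.5),    # rotate right in place at reduced speed
}

if __name__ == "__main__":
    for key, (throttle, turn) in KEY_COMMANDS.items():
        left, right = mix_differential(throttle, turn)
        print(f"key '{key}': left wheel {left}, right wheel {right}")
```

Mixing the throttle and turn commands symmetrically is what gives the smooth in-place rotation: the two wheels run at equal speeds in opposite directions.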

Computer Vision Updates

Along with the rover demo, we also showed the computer vision capability of classifying up to 80 different object classes from the COCO dataset using an implementation of the YOLO model. The image below is a screenshot of our rover's camera view with the object detection Python code running on the Raspberry Pi.
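
For reference, here is a minimal sketch of how a YOLO model trained on COCO can be run against a single camera frame using OpenCV's DNN module. The specific model files (yolov3-tiny.cfg, yolov3-tiny.weights, coco.names), the 0.5 confidence threshold, and the single-frame capture are assumptions for illustration and may differ from our on-rover implementation, which runs on continuous video.

```python
# Minimal sketch: detect COCO objects in one camera frame with a YOLO model
# via OpenCV's DNN module. Model file names are illustrative assumptions.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]   # the 80 COCO class labels

cap = cv2.VideoCapture(0)                    # Raspberry Pi / USB camera
ret, frame = cap.read()
cap.release()
if not ret:
    raise SystemExit("No camera frame available")

# Preprocess the frame into the network's expected input blob.
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Report any detection above a confidence threshold.
for output in outputs:
    for det in output:
        scores = det[5:]                     # per-class scores follow the box fields
        class_id = int(scores.argmax())
        confidence = float(scores[class_id])
        if confidence > 0.5:
            print(f"Detected {classes[class_id]} ({confidence:.2f})")
```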

The images below describe how the YOLO model works and what the COCO dataset is.

The image below shows how the overall system works as a block diagram.

Project Update 1: Basic Motor Test and Camera for Rover

This week our team was able to add the DC motors (without encoders) back onto the metal chassis frame. The original wires of the motors needed to be lengthened and tied together, which we were able to accomplish by soldering them to wires within conduits and using electrical tape to cover the junctions.

We set up the Arduino-motor circuit within the base frame and used a simple test script to drive the motors with the wheels mounted on the prototype. The video below shows the wheels and motors spinning in both directions and at different commanded speeds.

Rover Camera

While testing the rover motors, we were also looking at camera options to buy. Although we want a stereo camera that works with the Raspberry Pi, it may be a bit out of budget at the moment, so we are planning to look further into single-camera options with infrared sensors to help with nighttime object detection.

Stereo Camera Option

Single Camera Option (with infrared sensors)