Carlos L. Castillo, PhD.

 

Projects

Current Projects

Currently, I am working on the following projects:

Past Projects

Self-Driving Specialization - University of Toronto (Coursera)

I have completed the Self-Driving Specialization offered by the University of Toronto (Coursera). This specialization is composed of the following four courses:

  1. Introduction to Self-Driving Cars (Completed)

    Course1Certificate

  2. State Estimation and Localization for Self-Driving Cars (Completed)

    Course2Certificate

  3. Visual Perception for Self-Driving Cars (Completed)

    Course3Certificate

  4. Motion Planning for Self-Driving Cars (Completed)

    Course4Certificate

  5. Self-Driving Specialization Certificate (Completed)

    SpecializationCertificate

Outdoor Robot: Building and training an outdoor robot using the Donkey Car framework
(http://www.donkeycar.com/)

The goal of this project is to build an outdoor robot that I can use for experimenting with autonomous vehicle algorithms. The main goal is to be able to run the robot with two software frameworks: the Donkey Car framework and the Robot Operating System (ROS) framework.

  1. First Stage is the hardware design and implementation of the robot. This is still work in progress, but I would say that the prototype version 0.1 is done. The pictures below show the basic structure of the robot. Later, I will provide more details about the physical components of this prototype.

    Training Track


    Outdoor Robot


  2. Second stage is the training of the CNN used in the Donkey Car framework. This milestone is composed of the acquisition of the pictures. I have decided to try to get it to learn to follow the sidewalks in my neighborhood. Next is a video of my son helping to get the data.


  3. Third stage is still in progress. More training data collection is required. I will post some results as soon as possible. (Video: youtube.com/embed/pdbSD5bOjV0)
    • Some hardware modifications were planned to be able to accommodate additional sensors like GPS, IMU, and LIDAR.
    • I will be making some improvements to this platform very soon.
    • I am planning to start working on a ROS robot in parallel with the improvements to the Donkey Car.

Deep Learning Specialization by Andrew Ng (Coursera)

I have completed the online Deep Learning Specialization taught by Dr. Andrew Ng. This specialization is composed of the following five courses:

  1. Neural Networks and Deep Learning (Completed)

    Course1Certificate

  2. Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Completed)

    Course2Certificate

  3. Structuring Machine Learning Projects (Completed)

    Course3Certificate

  4. Convolutional Neural Networks (Completed).

    Course4Certificate

  5. Sequence Models (Completed)

    Course5Certificate

  6. Deep Learning Specialization Final Certificate.

    Course5Certificate

Building and Training a Donkey Car (http://www.donkeycar.com/)

The goal of this project is to use a Donkey Car for the development and testing of Machine Learning algorithms (mainly Convolutional Neural Networks, CNNs) oriented to the Autonomous/Self-Driving Vehicles field. The plan for this project is:

  1. First Milestone is to assemble the Donkey Car. Next, the first, the second, and the third versions of the Donkey Car are presented. The first version was completely based on the "Donkey Car template", which is built on a standard RC car. I added some extra sensors with the idea of possible future sensor-fusion experimentation. I had a lot of trouble trying to get consistent data (throttle) at low speed; this is a well-known limitation of RC cars at low speeds. The second version was based on a differential-drive robot. With it I was able to get consistent data and to have the robot drive autonomously for several laps. I also decided to keep the sensors for a ROS robot on which I am working now (information below). For the third version of my Donkey Car, I used a two-level structure: the first level holds mainly the motor drivers and the battery for the motors, and on the second level I put the Raspberry Pi 3B, the fisheye camera, and the battery that powers them.

    Training Track

  2. Second milestone is the training of the CNN. This milestone is composed of the acquisition of the pictures; a sketch of the kind of network involved appears at the end of this section.
    The track to be used is shown below.
    Training Track
    My first attempt to start the acquisition of the training pictures is shown below.

    I have collected more than 50K pictures. Editing/selection of the pictures to be used in the training is under way.

  3. Third milestone is to have my Donkey Car doing a whole lap autonomously. This milestone took a lot of data gathering and a lot of pruning of the data.
  4. The next video shows the latest autonomous drive of the Donkey Car, obtained using the latest version of the robot.


    • Some hardware modifications were needed to be able to acquire consistent data from the Donkey Car. I built a new Donkey Car based on a differential-steering robot. Below is a picture of the Donkey Car that I finally used to get the robot to perform an autonomous drive successfully.
    • I will be making some improvements to this platform very soon.
    • I am planning to start working on a ROS robot in parallel with the improvements to the Donkey Car.
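For reference, the Donkey Car framework provides its own Keras "pilot" models; the network below is not the framework's exact architecture, only a minimal sketch of the kind of behavioral-cloning CNN involved, mapping a camera frame to steering and throttle outputs (the input size and layer widths are assumptions):

    from tensorflow.keras import layers, models

    # Illustrative behavioral-cloning CNN: camera image in, steering/throttle out.
    # The input size and layer widths are assumptions, not the Donkey Car defaults.
    def build_pilot(input_shape=(120, 160, 3)):
        img = layers.Input(shape=input_shape)
        x = layers.Conv2D(24, 5, strides=2, activation="relu")(img)
        x = layers.Conv2D(32, 5, strides=2, activation="relu")(x)
        x = layers.Conv2D(64, 5, strides=2, activation="relu")(x)
        x = layers.Conv2D(64, 3, strides=1, activation="relu")(x)
        x = layers.Flatten()(x)
        x = layers.Dense(100, activation="relu")(x)
        x = layers.Dense(50, activation="relu")(x)
        steering = layers.Dense(1, name="steering")(x)
        throttle = layers.Dense(1, name="throttle")(x)
        model = models.Model(inputs=img, outputs=[steering, throttle])
        model.compile(optimizer="adam", loss="mse")
        return model

Such a model is trained on the recorded camera frames paired with the logged steering and throttle values.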

Robotic car for Experimenting with Autonomous Vehicle Algorithms using the ROS Navigation Stack (In progress)

The final goal of this project is to develop a small robotic car for experimenting with autonomous vehicle algorithms in several areas of interest, such as motion planning, sensor fusion, and computer vision.
In the field of motion planning techniques for autonomous vehicles, I am interested in implementing interpolation-based planners, such as clothoids, or graph search-based planners, such as A*; a minimal A* sketch is included below.
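As a point of reference only (not the planner that will actually run on the robot), the following minimal Python sketch shows graph search-based planning with A* on a small occupancy grid; the 4-connected grid and Manhattan-distance heuristic are simplifying assumptions:

    import heapq

    def astar(grid, start, goal):
        """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
        start and goal are (row, col) tuples; returns a list of cells or None."""
        def h(a, b):  # Manhattan-distance heuristic
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        rows, cols = len(grid), len(grid[0])
        open_set = [(h(start, goal), start)]
        came_from = {}
        g_score = {start: 0}
        while open_set:
            _, cell = heapq.heappop(open_set)
            if cell == goal:
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g_score[cell] + 1
                    if ng < g_score.get(nxt, float("inf")):
                        g_score[nxt] = ng
                        came_from[nxt] = cell
                        heapq.heappush(open_set, (ng + h(nxt, goal), nxt))
        return None

For example, astar([[0, 0], [1, 0]], (0, 0), (1, 1)) returns [(0, 0), (0, 1), (1, 1)].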
This is a relatively long project. I will have to accomplish several milestones:

First milestone: Successful integration of the ROS Navigation Stack with my custom-built robot

Accomplishing this will require meeting several hardware and software requirements. These requirements are presented at http://wiki.ros.org/navigation.
Hardware requirements. Three main hardware requirements must be met, as presented below:

  1. The custom-built robot must be a differential-drive or holonomic wheeled robot. The Navigation Stack assumes that the mobile base is controlled by sending desired velocity commands to achieve, in the form of an x velocity, a y velocity, and a theta velocity (a minimal example of publishing such a command is sketched after this list).
  2. A planar laser mounted somewhere on the mobile base is required. This laser is used for map building and localization.
  3. The Navigation Stack was developed on a square robot, so its performance will be best on robots that are nearly square or circular. It does work on robots of arbitrary shapes and sizes, but it may have difficulty with large rectangular robots in narrow spaces like doorways.
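Below is a minimal sketch of what such a velocity command looks like on the software side, assuming a ROS 1 / rospy setup and the conventional cmd_vel topic name (the actual topic is whatever the base driver subscribes to):

    import rospy
    from geometry_msgs.msg import Twist

    # Publish the velocity commands the Navigation Stack expects the base to accept.
    rospy.init_node("cmd_vel_example")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.2    # forward velocity, m/s
        cmd.angular.z = 0.1   # yaw rate (theta velocity), rad/s
        pub.publish(cmd)
        rate.sleep()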

Software requirements. The software requirements are determined by the Software setup of the Navigation Stack as shown in the diagram below.

ROSNavStackSetup
I will be addressing the hardware and software requirements as independently as possible.
The first step on the hardware side is building the base and installing the motors. Two brushed DC motors with encoders will be used. The next figure shows one motor with its encoder and the wheel attached.
Motor with Encoder
The assembled robot is shown in the next figure.
Robotic Base Level 1
The microcontroller board currently planned for processing the encoder signals, producing the motor driver commands, and communicating with the onboard single-board computer is the STM32F429I-DISC1, shown below: Base Controller
The base controller will use a real-time operating system, the mbed RTOS.
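The base controller itself will run on the mbed RTOS; purely as a language-neutral illustration of the kind of computation it has to perform, the Python sketch below integrates differential-drive odometry from incremental encoder counts (the ticks-per-revolution, wheel radius, and wheel base values are placeholders, not the real robot's parameters):

    import math

    # Placeholder robot geometry; real values depend on the motors/encoders used.
    TICKS_PER_REV = 64 * 30   # encoder counts per wheel revolution (encoder x gearbox)
    WHEEL_RADIUS = 0.05       # meters
    WHEEL_BASE = 0.30         # distance between the wheels, meters

    x = y = theta = 0.0       # integrated pose

    def update_odometry(d_ticks_left, d_ticks_right):
        """Integrate one odometry step from incremental encoder counts."""
        global x, y, theta
        d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
        d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
        d_center = (d_left + d_right) / 2.0
        d_theta = (d_right - d_left) / WHEEL_BASE
        x += d_center * math.cos(theta + d_theta / 2.0)
        y += d_center * math.sin(theta + d_theta / 2.0)
        theta += d_theta
        return x, y, theta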

Fisheye Camera Calibration using OpenCV and MATLAB

Using a set of checkerboard-pattern pictures taken with the camera, I used the OpenCV library to obtain the camera's intrinsic parameters and the lens distortion coefficients.
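A minimal sketch of the OpenCV side of this process is shown below, assuming a 9x6 inner-corner checkerboard and images stored in a hypothetical checkerboard/ folder (the real pattern size, file locations, and calibration flags may differ):

    import glob
    import cv2
    import numpy as np

    # Checkerboard inner-corner count (assumed 9x6; adjust to the actual pattern).
    CB = (9, 6)

    # One set of 3D object points, shape (1, N, 3) as expected by cv2.fisheye.calibrate.
    objp = np.zeros((1, CB[0] * CB[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:CB[0], 0:CB[1]].T.reshape(-1, 2)

    objpoints, imgpoints = [], []
    for fname in glob.glob("checkerboard/*.jpg"):  # hypothetical folder name
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, CB)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners.reshape(1, -1, 2))

    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, _, _ = cv2.fisheye.calibrate(
        objpoints, imgpoints, gray.shape[::-1], K, D, flags=flags)

    # Undistort the last image read, using the estimated parameters.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, gray.shape[::-1], cv2.CV_16SC2)
    undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)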


Original checkerboard images

The undistorted images are presented below


Undistorted images

Traffic Sign Detection using OpenCV and MATLAB

This is a basic example of the detection of a stop sign. The program was implemented using OpenCV (Python) on a Raspberry Pi 3B+. Next is a video of the robot approaching the stop sign and stopping as soon as it detects the sign.
Next is a video of what the camera sees, and of the moment the stop sign detector function actually recognizes the sign.
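The exact detection method used in the program is not detailed here; as one common OpenCV (Python) approach, the sketch below runs a pre-trained Haar cascade on each camera frame and marks any detected stop sign (the cascade file name is a placeholder, and a trained stop-sign cascade has to be supplied):

    import cv2

    # Hypothetical cascade file: a pre-trained stop-sign Haar cascade must be supplied.
    cascade = cv2.CascadeClassifier("stop_sign_classifier.xml")

    cap = cv2.VideoCapture(0)  # camera index 0 (e.g. the Raspberry Pi camera via V4L2)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        signs = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in signs:
            # Draw a box around each detection; the robot would stop the motors here.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.imshow("stop sign detector", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()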

Simultaneous Localization and Mapping(SLAM) - Custom-robot (2013)

This project was also oriented to general experimentation with the Navigation Stack of the Robot Operating System (ROS). The selected robot base was the Rex-16C, which comes equipped with an HP HEDS 5500 two-channel optical encoder on each wheel. A Hokuyo URG-04LX-UG01 laser range-finder was acquired. Additional equipment needed was an Arduino MEGA microcontroller board, a Sabertooth Dual 10A 6V-24V Regenerative Motor Driver board, and the main computer system. The main computer system is a dual-core PC based on a Mini-ITX motherboard. The operating system selected was Ubuntu 11.04 "Natty". The final assembled robot is presented in the figure below.
TurtleBot
In order to test the whole system, the custom-built hardware robot and the software interface with ROS, a small "L" corridor was implemented as shown in the figure below.
TurtleBot
The developed robot successfully produced the map of the L-corridor setup and was able to navigate through it. The left figure below shows a screenshot of RViz, which is a 3D visualization environment for robots using ROS. The right figure below shows a zoomed view of the map generated by the SLAM algorithm.

TurtleBot
TurtleBot

Simultaneous Localization and Mapping (SLAM) (2012)

The aim of this project was to start experimenting with the Robot Operating System (ROS), particularly its Navigation Stack. The robot used for this project was the TurtleBot, shown in the image below.
TurtleBot
The ROS Navigation Stack was configured and tuned to work with the TurtleBot robot. The tuning of the navigation stack proved to be a delicate process. The TurtleBot successfully produced a map of a section of the second floor of the Corley building, as shown in the figure below:
TurtleBot

Set-up of the Global Vision System for Coordinated Control of Multi-Robot Systems

In this research, a hardware/software setup of three robots for the study of control and coordination of multi-robot systems was built. First, a set of three small robots was selected and equipped with XBee modules to provide them with wireless communication capabilities. The Pololu 3pi robots were selected because of their size, price, and the technical support available for their configuration and programming. Due to space constraints in the Robotics Lab, size was a fundamental concern. The Pololu 3pi robot has a diameter of only 3.7 inches, which makes it possible to control several of them inside relatively small areas.
Multi-Robot Set
Two AVT Guppy FireWire 1/3" CCD Color cameras were purchased for the Global Vision system. These cameras are designed to be high performance, cost effective alternatives to analog cameras.
Multi-Robot Set
The Global Vision System was implemented based on the open-source Small Size League Vision (SSL-Vision) software. Its function is to detect the robot to be controlled inside a given defined area; the control and coordination algorithms should keep the robot inside this area. Figure 2 shows the Global Vision System implemented based on the SSL-Vision software.
Multi-Robot Set
The following figures show the detected edges of the working area, the results of the camera calibration, and a zoomed screenshot of the robots detected using a segmentation algorithm.
Multi-Robot Set
Multi-Robot Set
Multi-Robot Set

Solar Powered Robot (2013)

The ultimate goal of this project is to develop a robot for "surveillance", capable of completely autonomous operation and 24/7 service. GPS capabilities and a video link will be included in the robot. The main computer will be running Ubuntu, and the guidance, navigation, and control system is expected to be implemented using the Robot Operating System (ROS).
Next, we tested that the robot was capable of maneuvering while supporting the weight of the 100 W solar panel (15 lb). This solar panel, under full sunlight, is capable of providing all the power needed to operate the computer, camera, video transmitter, microcontroller board, and motors, and at the same time recharge the battery.

Autonomous Hexacopter

The goal of this project is to build a hexacopter. The frame has been built and is presented in the next figure.

Hexacopter

Now, the APM 2.6 autopilot system has been installed, as shown in the next figure.
Hexacopter with the APM 2.6
The software configuration is in progress. Some testing has been done, but a motor needs to be replaced.

System Identification of Electrical Motors

The aim of this project is to obtain linear and nonlinear models of brushed DC motors, brushless DC motors (BLDC), permanent magnet synchronous motors (PMSM), and AC induction motors. Currently, a simple test stand has been implemented to obtain the linear model of brushed DC motors. This test stand consists of a Pololu MC33926 Motor Driver Carrier, a small protoboard, a quad 2-input NAND gate IC (74LS00), and a National Instruments low-cost M Series multifunction data acquisition PCI-6229 card. The DC motor shown in the figure has an optical quadrature encoder with 300 PPR (pulses per revolution), a 30:1 gear ratio, a rated speed of 200 RPM, and a 12 V rating.


A LabVIEW VI has been developed to record the speed response when the duty cycle of the H-bridge connected to the motor is varied randomly. The PWM signal has a frequency of 20 kHz. The duty cycle was allowed to vary from about 10% to 100%. The next picture presents the Front Panel of the LabVIEW VI.



The corresponding Block Diagram is presented next



The random duty-cycle signal generated and the output response (the encoder's frequency) were saved to a file. After this, the data was imported into MATLAB and processed using the freely available University of Newcastle Identification Toolbox (UNIT). The discrete-time DC motor speed model obtained with UNIT is an ARX (autoregressive exogenous) model with the following structure:

Model Structure
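UNIT performs the estimation itself; purely as an illustration of what fitting an ARX structure involves, the Python sketch below estimates the A- and B-polynomial coefficients of the model A(q)y(t) = B(q)u(t) + e(t) by least squares (the orders na, nb and the delay nk here are placeholders, not the ones identified by UNIT):

    import numpy as np

    def fit_arx(u, y, na=2, nb=2, nk=1):
        """Least-squares fit of an ARX model:
        y[t] + a1*y[t-1] + ... + a_na*y[t-na] = b1*u[t-nk] + ... + b_nb*u[t-nk-nb+1] + e[t]
        u, y: input (duty cycle) and output (speed) sequences of equal length.
        Returns the estimated a and b coefficient vectors."""
        u, y = np.asarray(u, float), np.asarray(y, float)
        n0 = max(na, nb + nk - 1)  # first sample with a full regressor
        rows = [[-y[t - i] for i in range(1, na + 1)] +
                [u[t - nk - j] for j in range(nb)]
                for t in range(n0, len(y))]
        theta, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
        return theta[:na], theta[na:]

With the logged duty-cycle and speed data loaded into u and y, fit_arx(u, y) returns the coefficient estimates.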

Next, some of the analysis plots displayed by UNIT during the identification process are presented:

Output Data vs Model Output

Crosscorrelation

Estimated Frequency Response

The discrete-time model obtained is shown next:

DC Motor Discrete-Time Model