Autonomous Driving Car
The project mainly aims to build a self-driving car model that can follow some traffic rules by detecting traffic signs encountered while driving. It was implemented with neural network algorithms using TensorFlow, with OpenCV for image processing. The project is built around a Raspberry Pi, which collects input from a camera module and an ultrasonic sensor. The input images are processed with convolutional neural network algorithms to detect traffic signs, and control signals for car mobility are generated based on the detected sign.
Initially, we explored lane-detection algorithms using different OpenCV techniques. Later, we analysed a dataset of traffic sign images. We worked on aligning the car's movement with the detected lane by predicting the steering angle, so that the car follows the lanes properly; this is done with the help of a small camera fitted to the front of the car. Next, we explored algorithms for traffic sign detection and implemented a Convolutional Neural Network (CNN) to classify the signs from the images. The final step was to integrate all the algorithms, sensors, and the camera with the Raspberry Pi. Thus, the car follows the lane properly and learns to drive by itself. It also learns the meaning of some of the signs, such as STOP.
So, our first task was to write the algorithm for lane detection: finding the lanes in the raw camera image and producing an image with the detected lanes overlaid. We did this using Canny edge detection, Gaussian blur, and other OpenCV techniques.
Our next task was to detect traffic signs. We used a German traffic sign dataset and applied a neural network model to classify the signs.
Our traffic sign classifier was based on the LeNet architecture, a convolutional neural network designed to recognize visual patterns directly from pixel images with minimal preprocessing.
Our model is adapted from LeNet as follows:
● The inputs are 32x32 (RGB - 3 channels) images
● The activation function is ReLU except for the output layer which uses Softmax
● The output has 43 classes
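A LeNet-style classifier matching the description above can be sketched in TensorFlow/Keras like this; the filter counts and dense-layer sizes follow the classic LeNet-5 and are assumptions about our exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(num_classes=43):
    """LeNet-style CNN as described: 32x32 RGB input, ReLU activations,
    and a 43-way softmax output. Layer sizes are illustrative."""
    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),              # 32x32 RGB image
        layers.Conv2D(6, (5, 5), activation="relu"),  # first conv block
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (5, 5), activation="relu"), # second conv block
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # 43 sign classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With integer class labels, `sparse_categorical_crossentropy` avoids one-hot encoding the 43 classes; swap in `categorical_crossentropy` if the labels are one-hot.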
We used the Raspberry Pi (RPi) because we needed to send the camera images to the computer. We also used socket programming: the car was a remote-control car, and commands were issued to it through the RPi based on the outputs of the neural network and the OpenCV code.
Thus, our car was controlled through the RPi and an Arduino. Through the Arduino we issued motor commands, and through the RPi we captured the camera feed; by processing this information we could detect the traffic signs and signals and automatically command the car in which direction it should move.
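The RPi-to-Arduino side can be sketched as a simple mapping from classifier labels to single-character serial commands; the label names, command letters, serial device, and baud rate below are all illustrative assumptions:

```python
# Map classifier outputs to the single-character commands the Arduino
# sketch would expect over serial (labels and letters are assumptions).
SIGN_TO_COMMAND = {
    "stop": "S",         # halt the motors
    "turn_left": "L",
    "turn_right": "R",
    "speed_limit": "F",  # continue forward at reduced speed
}

def command_for_sign(sign_label):
    """Return the serial command for a detected sign; default to forward."""
    return SIGN_TO_COMMAND.get(sign_label, "F")

# On the RPi, the command would be written to the Arduino over serial,
# e.g. with pyserial (device path and baud rate are assumptions):
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   port.write(command_for_sign("stop").encode())
```

Keeping the Arduino protocol to single characters makes the microcontroller side a trivial `switch` over the received byte.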
In future work, we can implement YOLO for object detection so that the car can avoid obstacles, allowing it to move in real traffic while avoiding cars, trees, human beings, and animals. If we include GPS, we can specify the car's starting position and destination. We can also specify the speed at which it should move according to the traffic.
For this project, we learned image processing, machine learning, and deep learning. We also learned how to use a Raspberry Pi. Most importantly, we learned that there are many approaches to a project, so we should keep searching for alternative ways and gain knowledge of different things.
We can implement small models of autonomous driving cars focusing on a few aspects of the problem. We mainly concentrated on algorithms for traffic sign classification, steering angle prediction, and road lane detection. Our traffic sign classifier reached high accuracy when trained for more epochs. As an overview: input images from the camera module are processed by the lane-detection algorithm and the traffic sign classifier, and after a traffic sign is detected, car movement control commands are sent from the computer to the RPi wirelessly.
● Harshitha S
● Rithik Jain
● Karan Jain
● M Nikhil
● Soumy Mondal