Owing to advances in surveillance systems, object tracking has been an active research topic in the computer vision community over the last two decades, as it is an essential prerequisite for analyzing and understanding video data. Tracking is usually performed in the context of higher-level applications, which in turn require the location, shape, or color of the object in every captured frame. Accordingly, several assumptions must be made to constrain the tracking problem for a particular application.
A great deal of interest in object tracking has been generated by (i) the recent evolution of high-speed computers, (ii) the availability of high-quality, inexpensive sensors (video cameras), and (iii) the increasing demand for automated real-time video analysis. Tracking an object means obtaining its spatio-temporal information by estimating its trajectory in the image plane as it moves around a scene, which in turn helps to study and predict its future behavior. There are three main steps in video analysis: detection of interesting moving objects, tracking of those objects from frame to frame, and analysis of the recognized object tracks to determine their behavior. Examples of object tracking include tracking anomalous movements of people in security systems, tracking rockets in a military defense system, tracking customers in a market, monitoring a child or patient from a remote location, tracking players in a sports game, lane and obstacle detection in semi-automated vehicles, and tracking cars in a street. Tracking these objects supports military defense, goods arrangement, sports strategy, security, and traffic control, respectively. Tracking can be done with one or multiple types of sensors; radar, sonar, infrared, laser radar, and video cameras are the most commonly used.
One enhancement of object tracking systems is to build a pan-tilt camera that moves according to the movements of the detected object, by combining object tracking and computer vision technologies with microcontrollers. Such systems can continue tracking even when the object moves beyond the field of view of a normal still camera. Most modern surveillance systems use pan-tilt tracking cameras to keep an eye on a specific object over a wide range. These systems can typically track an object along the X axis from 0 to 180 degrees and along the Y axis from 0 to 180 degrees, a huge range compared to a still tracking camera system.
1.1. Background
The manufacture of object tracking pan-tilt cameras is one of the biggest technological advances in surveillance systems. Most existing systems are designed with advanced hardware technologies and use complex computer vision software, so they are expensive and difficult to handle. Because of this cost and complexity, most designers use a static video camera when building surveillance systems. The goal of the Object Tracking Automatic Camera is to introduce a low-cost, more efficient, and accurate alternative to existing object tracking systems.
1.2. Motivation
Object tracking with a pan-tilt camera has been one of the most highlighted research areas of the last two decades. For my final year research project I wanted an interesting new topic, and there is currently no true low-cost solution that combines computer vision object tracking with hardware control. Since there is no established international standard for such a system, I was not constrained by an existing frame, and I was able to develop my own algorithms to control the hardware using signals sent from the PC, which are generated by processing the input video from the video camera.
1.3. Goals
The Object Tracking Automatic Camera is a regular object tracking system enhanced by making the camera rotate along the X and Y axes. It is a low-cost alternative to expensive pan-tilt object tracking systems. The system is designed to improve on current object tracking systems in the following ways:
Track the object pointed to by the user, based on its color
Always keep the moving object at the center of the screen by rotating the camera in the direction the object moves
Save the video for future analysis
1.4. Achievement in brief
This dissertation presents a pan-tilt object tracking camera system. The system takes a video input from the camera, and a desktop application shows the captured video to the user, who can then point to any object in the video. The application processes the video, isolates the pointed object, calculates its position, and sends that position to the microcontroller. The microcontroller rotates the servo motor mechanism to which the camera is attached, keeping the pointed object at the center of the display screen. All of these steps were completed successfully in the project.
1.5. Structure of the dissertation
The main purpose of this dissertation is to present an overall description of the Object Tracking Automatic Camera. It is organized into the following chapters:
Chapter 1 - Introduction gives a general overview of the proposed Object Tracking Automatic Camera, including the project background, motivation, goals, and achievement in brief.
Chapter 2 - Literature survey. Several research projects have addressed object tracking and object tracking with a single pan-tilt camera; this chapter gives a summary of those projects.
Chapter 3 - Describes the methodology used to develop the system.
Chapter 4 - The design of the object tracking automatic camera.
Chapter 5 - Description of the implementation of the object tracking automatic camera.
Chapter 6 - Presents the results of the system through experiments and discusses the limitations of the final developed system.
Chapter 7 - Presents conclusions and discusses improvements that could be achieved in future research.
Chapter 2
Literature survey
2.1 Object tracking
The goal of object tracking is to estimate the locations and motion parameters of a target in an image sequence, given its initial position in the first frame. Research in tracking plays a key role in understanding the motion and structure of objects. A typical tracking system consists of three components:
Object representation
Dynamic model
Search mechanism
A taxonomy of tracking methods is shown below:
Figure 2.1:
Taxonomy of tracking methods
Point Tracking: In point tracking, objects detected in consecutive frames are represented as points, and the association of points is based on the previous object state, which can include object position and motion. This approach requires an external mechanism to detect the objects in every frame. Point correspondence methods can be divided into two broad categories:
Deterministic Method
Statistical Method
Kernel Tracking: The word kernel refers to the object's shape and appearance. Kernel tracking is performed by computing the motion of the object, represented by a primitive object region, from one frame to the next. These algorithms differ in the appearance representation used, the number of objects tracked, and the method used to estimate the object motion. A rectangular template or an elliptical shape is a typical example of a kernel. The motion in kernel tracking takes the form of a parametric transformation such as a translation, affine transformation, or rotation. These tracking methods fall into two subcategories:
Templates and density-based appearance models
Multi-view appearance models
Silhouette Tracking: In this form of tracking, the object region is estimated in each frame, making use of the information encoded inside the object region. This information can take the form of density and shape models, which are usually given as edge maps. Silhouette-based methods provide an accurate shape description for objects with complex shapes. The goal of a silhouette-based object tracker is to find the object region in each frame by means of an object model generated from the previous frames. This model can be a color histogram, object edges, or the object contour. Silhouette tracking is divided into two categories:
Shape Matching
Contour Tracking
2.2. Pan Tilt servo control
Today there are plenty of pan-tilt controllers available. Most of them use pan-tilt modules provided by manufacturers such as Lynxmotion, while some projects use handmade pan-tilt modules. For the controlling circuit, most projects use pre-designed controller boards such as the Phidget Advanced Servo controller. The Phidget Advanced Servo 8-Motor controls the position, velocity, and acceleration of up to 8 RC servo motors. It requires an external 6-15 VDC power supply; its switching power supply allows it to operate efficiently from 6 to 15 VDC, so it can be used with a wide range of batteries.
The Phidget Advanced Servo 8-Motor has a high resolution of 125 steps per degree; it measures the power consumption of each servo, and its switching regulator protects the motors from overvoltage. It powers servo motors drawing up to 3.4 A and continuously measures the current consumed by each motor with an accuracy of ±10%. The Advanced Servo connects directly to a computer's USB port and can be programmed in C# using the Phidgets library, along the lines sketched below.
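As a rough illustration, driving a servo from C# with this controller follows the pattern below. This is a minimal sketch from memory of the Phidget21 .NET library; the exact class and method names should be checked against the official Phidgets documentation.

```csharp
// Minimal sketch only: based on the Phidget21 .NET library (namespace
// Phidgets); verify all names against the official documentation.
using Phidgets;

class PhidgetServoDemo
{
    static void Main()
    {
        AdvancedServo controller = new AdvancedServo();
        controller.open();                    // open the first attached device
        controller.waitForAttachment(3000);   // wait up to 3 s for USB attach

        controller.servos[0].Engaged = true;  // power the servo on channel 0
        controller.servos[0].Position = 90.0; // move to 90 degrees

        controller.close();
    }
}
```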
The Endurance Robotics Pan and Tilt PT-3 base is a rugged pan-tilt system built around standard-sized hobby servos. Featuring rigid 1/4" ABS laser-cut construction throughout, the PT-3 base is well suited to general R/C, robotics, hobby, and photography applications. The PT-3 can handle most point-and-shoot cameras as well as larger SLRs and video cameras.
Figure 2.2:
PT-3 Assembled with Servos
The PT-3 is compatible with most standard servo sizes and can be used with the Endurance R/C PCTx and Servo Controller units for control.
The performance of this base depends directly on the servos chosen. A unit fitted with HS-645MG servos provides 106.93/133.31 oz-in of torque at 4.8 V/6 V.
Hitec servos are recommended for use with this base. Other standard-sized servos can be used, although modifications to the servo horns may be needed.
Figure 2.3:
PIC 18F4550 USB Interface Circuit
The PIC 18F4550 and 18F2550 are powerful microcontrollers that include a full-speed USB 2.0 compliant interface, and many developers use them to build servo controllers. The PIC 18F4550 operates at 5 V and does not need an external power supply; micro servo motors can also be connected to the microcontroller without an external supply.
Chapter 3
Methodology
This chapter describes the methodology used to develop the Object Tracking Automatic Camera. The methodology has two major parts: digital image processing and USB communication.
3.1. Digital Image Processing
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.
In the proposed system I process the video from the camera frame by frame, each frame being a still image. The user can click on any object in this image, and the color of the selected point is read as RGB values. The colors of the image are then filtered according to the selected color: the filtering algorithm keeps the colors that match the selected color and fills the rest of the image with black. Next, the blobs of the same color are found and stored in an array. Many blobs can appear, for two reasons: there can be other areas in the image with the same color as the selected one, and image noise can generate spurious blobs. The blob array is therefore sorted by blob size and the largest blob is taken, which identifies the selected object reliably. A rectangle is drawn around the largest blob to clearly mark the selected object, and its center X and Y coordinates are found with respect to the display pane. This procedure is repeated for every frame, so any object in the video stream can be tracked by color using this algorithm. A sketch of this pipeline appears below.
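The following is a condensed sketch of this pipeline using the AForge.NET classes named in Chapter 5; the color radius and minimum blob dimensions are illustrative values, not the ones tuned for the final system.

```csharp
// Sketch of the color-filter-and-largest-blob step using AForge.NET.
// The filtering radius and minimum blob size are illustrative assumptions.
using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

static class ObjectFinder
{
    public static Rectangle? FindSelectedObject(Bitmap frame, Color selected)
    {
        // Keep only pixels close to the selected color; the rest turn black.
        var filter = new EuclideanColorFiltering(
            new RGB(selected.R, selected.G, selected.B), 60);
        filter.ApplyInPlace(frame);

        // Collect the remaining connected regions (blobs), largest first,
        // ignoring tiny blobs that are usually image noise.
        var blobCounter = new BlobCounter
        {
            ObjectsOrder = ObjectsOrder.Area,
            FilterBlobs = true,
            MinWidth = 10,
            MinHeight = 10
        };
        blobCounter.ProcessImage(frame);
        Rectangle[] rects = blobCounter.GetObjectsRectangles();

        // The largest remaining blob is taken to be the selected object.
        return rects.Length > 0 ? rects[0] : (Rectangle?)null;
    }
}
```

The center of the returned rectangle, compared against the center of the display pane, is what drives the camera movement described in Section 3.2.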
3.2. USB Communication
USB communication is essential for reading and writing data to the external hardware. First the serial port over which communication will take place has to be configured, including its baud rate. The port is then opened and data are written to it. The microcontroller attached to the port is programmed to read the data from the port, and the program running on the PC can likewise read data coming back from the microcontroller via USB.
The data written to the port have a special format: identifying characters are attached to the data so that the microcontroller can easily recognize the data and determine which value should drive which servo motor. The microcontroller is then programmed to use these data to control the motors attached to it, as sketched below.
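With .NET's System.IO.Ports.SerialPort class, opening the port and writing one framed value looks roughly like the following. The port name, baud rate, and the "X" identifier are assumptions; the "@" terminator and leading identifier character match the framing used by the microcontroller program (Chapter 5).

```csharp
// Minimal sketch of the serial link. "COM3" and 9600 baud are assumptions
// and must match the microcontroller; "X" is an assumed servo identifier,
// and "@" terminates each message, as described in Chapter 5.
using System.IO.Ports;

class SerialDemo
{
    static void Main()
    {
        SerialPort port = new SerialPort("COM3", 9600);
        port.Open();

        // One framed message: identifier + angle + terminator,
        // here "rotate the X servo to 90 degrees".
        port.Write("X90@");

        port.Close();
    }
}
```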
In the proposed system I initially set both servo motor positions to 90 degrees and assign 90 to the two variables that hold the X and Y values written to the serial port. Then the display pane's center X coordinate is subtracted from the largest blob's center X coordinate: if the result is negative, the X variable is decreased and written to the serial port; if it is positive, the X variable is increased and written to the serial port; if it is zero, nothing is done. In parallel, the display pane's center Y coordinate is subtracted from the largest blob's center Y coordinate: if the result is negative, the Y variable is increased and written to the serial port; if it is positive, the Y variable is decreased and written to the serial port; if it is zero, nothing is done. These two parallel processes run inside an infinite loop until the program stops. This algorithm keeps the tracked object at the center of the display pane at all times by rotating the camera towards the object. A sketch of this step follows.
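This per-frame update reduces to the small step function sketched below. The angle variables start at 90 degrees; the one-degree step size and the SendToArduino helper are illustrative names, not the exact code of the final system.

```csharp
// Sketch of the per-frame centering step described above. The one-degree
// step and the SendToArduino helper are illustrative assumptions.
using System;
using System.Drawing;

class Tracker
{
    int panAngle = 90, tiltAngle = 90;   // both servos start centered

    public void CenterStep(Point blobCenter, Point paneCenter)
    {
        // Horizontal error: positive means the object is right of center.
        int dx = blobCenter.X - paneCenter.X;
        if (dx > 0) panAngle = Math.Min(180, panAngle + 1);
        else if (dx < 0) panAngle = Math.Max(0, panAngle - 1);
        if (dx != 0) SendToArduino('X', panAngle);

        // Vertical error: image Y grows downward, so the sign is flipped.
        int dy = blobCenter.Y - paneCenter.Y;
        if (dy > 0) tiltAngle = Math.Max(0, tiltAngle - 1);
        else if (dy < 0) tiltAngle = Math.Min(180, tiltAngle + 1);
        if (dy != 0) SendToArduino('Y', tiltAngle);
    }

    void SendToArduino(char servo, int angle)
    {
        // Placeholder for the framed serial write shown in Section 3.2.
    }
}
```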
Chapter 4
Design
This chapter describes the design of the object tracking automatic camera.
4.1. Overview
The Object Tracking Automatic Camera system has two basic sections: a hardware part and a software part. The hardware part contains the pan-tilt mechanism with two servo motors, a high-quality camera attached to the pan-tilt mechanism, and an Arduino board that processes signals from the PC and controls the servo motors. The software part contains the desktop application that processes the video and generates control outputs for the Arduino.
4.2. Functional View of object tracking automatic camera
Figure 4.1:
Block diagram of object tracking automatic camera
Figure 4.1 shows the general view of the system: its main components and how they are attached together. Figure 4.2 shows the functions that track an object and rotate the camera towards it in the proposed system.
Figure 4.2:
Functional view of the object tracking automatic camera
4.3. Assumptions
Several assumptions have been made in developing the proposed object tracking automatic camera system. The main assumption is that the background lighting does not change during the tracking period: since the object is identified by its color, changes in lighting could alter the selected color. Another assumption is that there are no fast-moving objects, because the system is not capable of tracking a fast-moving object.
Chapter 5
Implementation
This chapter describes the implementation of the Object Tracking Automatic Camera. The desktop application was implemented in the C# programming language using the Visual Studio 2012 IDE, with the AForge.NET image processing libraries used for image processing.
Figure 5.1:
Basic UI
Figure 5.2:
Tracking objects
Figure 5.1 shows the main UI of the system, and Figure 5.2 shows how the system tracks objects based on color. In Figure 5.2, area 1 shows the video from the camera, area 3 shows the selected color and its RGB components, and area 2 shows the isolated object. Two main algorithms are used in the application: the image processing algorithm and the tracking algorithm.
5.1. Image Processing Algorithm
First, a bitmap image is input to the algorithm: a clone of the current video frame is taken and its mirror image is produced. This image is then passed to the processing function, as shown in Figure 5.3 and sketched after it.
Figure 5.3:
Code for taking a mirror object and passing it to the processing function
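The code in Figure 5.3 is not reproduced here; the step it shows amounts to roughly the following, assuming AForge's Mirror filter performs the mirroring and with ProcessFrame standing in for the system's processing function.

```csharp
// Sketch of the clone-and-mirror step; AForge's Mirror filter is assumed,
// and ProcessFrame stands in for the system's processing function.
using System.Drawing;
using AForge.Imaging.Filters;

static class FrameGrabber
{
    static void OnNewFrame(Bitmap currentVideoFrame)
    {
        // Work on a copy so the original frame is left untouched.
        Bitmap frame = (Bitmap)currentVideoFrame.Clone();

        // Mirror around the Y axis so the display behaves like a mirror.
        new Mirror(false, true).ApplyInPlace(frame);  // (mirrorX, mirrorY)

        ProcessFrame(frame);
    }

    static void ProcessFrame(Bitmap frame)
    {
        /* color filtering and blob search, as described in Section 5.1 */
    }
}
```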
The processing function filters the image by the selected color and finds the blobs of the same color. It then finds the biggest blob and draws a rectangle around it to identify the tracked object.
Figure 5.4:
Code for Image processing algorithm
In this function I calculate the center coordinates of the largest blob and the center coordinates of the display pane. The difference between these two coordinates is then computed and the values are passed to the tracking algorithm.
5.2. Tracking Algorithm
This algorithm passes data to the Arduino based on the difference between the two center coordinates. If the X coordinate difference is positive, signals are sent to the Arduino to increase the relevant servo position; if it is negative, the servo position is decreased; if it is zero, nothing is done. If the Y coordinate difference is positive, signals are sent to the Arduino to decrease the relevant servo position; if it is negative, the servo position is increased; if it is zero, nothing is done.
Figure 5.5:
Code for Tracking Algorithm
5.3. Implementing the Hardware
The hardware part consists of two micro servo motors, home-made pan-tilt brackets, a high-quality camera, an Arduino board, and its motor shield.
Figure 5.6:
Pan Tilt Bracket
Figure 5.7:
Micro Servo Motor
Figure 5.8:
Arduino Board
Figure 5.9:
Motor shield
The Arduino is programmed in C++ using the Arduino IDE. The "@" character identifies the end of a data stream; the first character identifies the servo motor, and the remaining data give the angle used to rotate that servo motor. A sketch consistent with this protocol follows Figure 5.10.
Figure 5.10:
Code to program the Arduino
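Figure 5.10 is likewise not reproduced here. Since the board is programmed in C++, a minimal Arduino sketch consistent with the protocol just described might look like the following; the pin numbers and the 'X'/'Y' identifiers are assumptions, not the original code.

```cpp
// Sketch consistent with the described protocol: '@' ends a message and
// the first character selects the servo. Pins 9/10 and the 'X'/'Y'
// identifiers are assumptions.
#include <Servo.h>

Servo pan, tilt;
String buffer = "";

void setup() {
  Serial.begin(9600);   // must match the PC application's baud rate
  pan.attach(9);
  tilt.attach(10);
  pan.write(90);        // start both servos centered, as in Section 3.2
  tilt.write(90);
}

void loop() {
  while (Serial.available() > 0) {
    char c = Serial.read();
    if (c == '@') {                              // end of one framed message
      if (buffer.length() > 1) {
        int angle = buffer.substring(1).toInt();
        if (buffer[0] == 'X') pan.write(angle);  // first char picks the servo
        if (buffer[0] == 'Y') tilt.write(angle);
      }
      buffer = "";                               // ready for the next message
    } else {
      buffer += c;
    }
  }
}
```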
Chapter 6
Testing and Evaluation
6.1. Testing
Each function in this system was tested independently. The image processing algorithm was tested during its development, and some parameters were adjusted to obtain the best results. The algorithm was tested with a set of color ranges for filtering the images, and finally objects of different colors were used to test it.
At the initial stage, still images were used instead of camera video to test the image processing algorithm: still images were given as parameters to the algorithm, which was then run against a wide range of colors.
The algorithm was then enhanced for video input and regression tests were performed. It was tested with different object colors and different background colors, and also with varying moving speeds of the object.
Initially the PC was configured for the Arduino board. The Arduino was then programmed with several test programs to check whether the board worked as needed.
The USB communication was tested using a separate C# application. The program on the microcontroller was the same as in the final system, and a test C# application capable of sending values manually to the microcontroller was implemented. The microcontroller was tested to check whether the attached hardware worked properly with the supplied values.
After testing these two functions separately, the image processing algorithm and the USB communication were integrated into the GUI to test the tracking algorithm.
The integrated system was tested with several test cases, whose main purpose was to find any breaking point in the desktop application. The application was tested with a high-quality digital camera to check whether the image processing algorithm was robust enough to process high-resolution images.
Integration tests were performed on the combined components; tests were run each time two different modules were integrated together.
After integrating all modules into a single system, the system was tested with several moving objects of different colors and shapes. Each object was tracked correctly, and the system moved the camera to the right position.
Chapter 7
Conclusion and future work
7.1. Conclusions, Remarks and Discussion
The system is capable of tracking objects based on color. Initially the video stream from the camera is loaded into the application and the user clicks on the object he/she wishes to track. The system then detects the color of the object, and the image processing algorithm filters the image based on the selected color, isolates the detected object, and draws a rectangle around it.
The tracking algorithm passes values to the Arduino, namely the angles through which the servo motors should rotate. The algorithm on the Arduino identifies the relevant servo motor and rotates it by the angle passed from the PC application. The camera, attached to the pan-tilt mechanism operated by the two servo motors, can therefore always stay focused on the object the user points out.
In summary, the accuracy of the system depends strongly on the background lighting and on the moving speed of the object.
7.2. Recommendations for Future Research
This research proposed and successfully implemented a system that tracks a moving object by its color using a pan-tilt camera, and it concluded after one year with a working prototype. In addition to the features available in that system, a few further implementation steps can be identified as future work:
The video from the camera could be saved to the hard drive for future analysis.
The tracking video could be streamed over a network so that a remote user can access the live video.
A client application could be implemented so that a remote user can operate the server camera and run the application to track a remote object.