Object Detection and Tracking with Raspberry Pi and OpenCV [Series]

Ben
@benjislab

Introduction

Welcome to the third article in our series on Raspberry Pi and OpenCV! After diving into the fundamentals of image processing, it's time to move on to more advanced topics. In this article, we'll focus on object detection and tracking, two crucial aspects of computer vision. We'll start by exploring basic object detection techniques, then delve into object tracking algorithms, and finally, we'll build a real-time object tracking system. So, let's get started!

Part 7: Basic Object Detection

Using Contour Detection to Identify Objects

Contour detection is one of the most commonly used techniques for object detection. A contour is a curve joining the continuous points along an object's boundary, so finding contours essentially outlines the shapes in an image. OpenCV provides the findContours function to facilitate this.

Here's a simple example:

import cv2

# Load the image in colour and make a grayscale copy for thresholding
image = cv2.imread('objects.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Convert to a binary image: pixels above 127 become white, the rest black
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Find the contours in the binary image
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw all contours on the colour image in green
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)

cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

In this example, we load an image, convert it to grayscale, and turn it into a binary image with thresholding. We then find the contours in the binary image using the findContours function and draw them on the original colour image using the drawContours function.

Applications of Contour Detection

Contour detection is widely used in various applications such as:

  • Object sorting and identification in industrial automation.
  • Shape analysis in research (see the short sketch below).
  • Object recognition in robotics.
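
As a quick illustration of the shape-analysis use case, here is a minimal sketch that approximates each contour with a simpler polygon and labels it by its number of vertices. It assumes a clean, high-contrast image named shapes.jpg, and the 0.04 approximation factor is just a common starting point rather than a fixed rule:

import cv2

image = cv2.imread('shapes.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    # Skip tiny contours that are likely noise
    if cv2.contourArea(contour) < 100:
        continue

    # Approximate the contour with a simpler polygon and count its vertices
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * perimeter, True)
    if len(approx) == 3:
        label = 'triangle'
    elif len(approx) == 4:
        label = 'quadrilateral'
    else:
        label = 'other'

    # Label the shape near its top-left corner
    x, y, _, _ = cv2.boundingRect(approx)
    cv2.putText(image, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imshow('Shapes', image)
cv2.waitKey(0)
cv2.destroyAllWindows()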

Part 8: Object Tracking Algorithms

Introduction to Mean-Shift Algorithm

Mean-Shift is a non-parametric, feature-space analysis algorithm used in computer vision for tracking objects. The idea is to iteratively shift a search window toward the region of highest data-point density, which for tracking is typically the peak of a colour-histogram back-projection.

Here's a basic example using OpenCV's cv2.meanShift function. It assumes that frame is a BGR frame read from a video source; in a live tracker the histogram is computed once from the initial region of interest and back-projected onto each new frame:

import cv2
import numpy as np

# Initial location of the tracking window: (column, row, width, height)
r, h, c, w = 200, 20, 300, 20
track_window = (c, r, w, h)

# Build a hue histogram of the region of interest in the first frame
roi = frame[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves by less than 1 pixel
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# For each new frame, back-project the histogram and shift the window onto it
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
ret, track_window = cv2.meanShift(dst, track_window, term_crit)

Introduction to KLT (Kanade-Lucas-Tomasi) Algorithm

The KLT algorithm is another popular object tracking approach. It works by detecting distinctive corner points in one frame and tracking them into subsequent frames using optical flow. OpenCV provides the cv2.goodFeaturesToTrack function to find such corners, which can then be tracked using the cv2.calcOpticalFlowPyrLK function.

Here's a basic example:

# Detect strong corners in the previous grayscale frame
corners = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

# Track those corners into the current frame with pyramidal Lucas-Kanade optical flow
new_corners, status, errors = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, corners, None, **lk_params)
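
The snippet above assumes that old_gray and frame_gray are consecutive grayscale frames and that feature_params and lk_params are already defined. Here's a minimal, self-contained sketch of how that setup might look in a webcam loop; the parameter values are common starting points, not tuned settings:

import cv2

# Typical starting values; tune these for your own scene
feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

cap = cv2.VideoCapture(0)
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if corners is None or len(corners) == 0:
        # No corners left to follow; detect again in the current frame
        corners = cv2.goodFeaturesToTrack(frame_gray, mask=None, **feature_params)
        old_gray = frame_gray.copy()
        continue

    # Track the corners from the previous frame into the current one
    new_corners, status, errors = cv2.calcOpticalFlowPyrLK(
        old_gray, frame_gray, corners, None, **lk_params)

    # Keep only the points that were tracked successfully and draw them
    good_new = new_corners[status.flatten() == 1]
    for point in good_new:
        x, y = point.ravel()
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)

    # The surviving points become the starting points for the next frame
    old_gray = frame_gray.copy()
    corners = good_new.reshape(-1, 1, 2)

    cv2.imshow('KLT Tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()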

Part 9: Real-Time Object Tracking

Building a Real-Time Object Tracking System

After understanding the basics of object detection and various tracking algorithms, let's combine these concepts to build a real-time object tracking system.

Steps to Build the System

  1. Initialize Webcam: Use OpenCV's VideoCapture function to initialize the webcam.
  2. Object Detection: Use contour detection or any other object detection algorithm to identify the object you want to track.
  3. Object Tracking: Use Mean-Shift, KLT, or any other tracking algorithm to track the object in real-time.
  4. Display Tracking: Draw a bounding box or contour around the object being tracked to visualize the tracking process.

Here's some skeleton code to get you started:

import cv2
import numpy as np

# Step 1: initialize the webcam
cap = cv2.VideoCapture(0)

# Initial tracking window: (column, row, width, height) - simply hardcoded values
r, h, c, w = 200, 20, 300, 20
track_window = (c, r, w, h)
roi_hist = None

# Termination criteria for Mean-Shift: 10 iterations or a shift of less than 1 pixel
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Step 2: object detection - find the largest contour in the thresholded frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    if len(contours) != 0:
        largest_contour = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest_contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Re-initialize the tracking window and its hue histogram from the detection
        track_window = (x, y, w, h)
        roi = frame[y:y+h, x:x+w]
        hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
        roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

    # Step 3: object tracking - shift the window onto the histogram back-projection
    if roi_hist is not None:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        ret, track_window = cv2.meanShift(dst, track_window, term_crit)

        # Step 4: display tracking - draw the tracked window
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    cv2.imshow('Object Tracking', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Conclusion

In this article, we've explored the fascinating world of object detection and tracking. We started with basic object detection techniques like contour detection and then moved on to more advanced tracking algorithms like Mean-Shift and KLT. Finally, we combined these concepts to build a real-time object tracking system. This knowledge will serve as a strong foundation for more complex computer vision projects you may undertake in the future. Happy coding!