Practical Applications of Computer Vision with Raspberry Pi and OpenCV [Series]
Series Quick Links
- OpenCV Basics & Introduction
- OpenCV Image Processing Fundamentals
- OpenCV Object Detection and Tracking
- OpenCV Advanced Computer Vision
- OpenCV Practical Applications of Computer Vision
- OpenCV Performance Optimization in Computer Vision
- OpenCV Future Trends and Application in Computer Vision
Introduction
Welcome to another article in our series on Raspberry Pi and OpenCV! So far, we've covered the basics of image processing, object detection, tracking, and even some advanced topics like facial and gesture recognition. Now, it's time to put all that knowledge to practical use. In this article, we'll focus on building real-world applications, including a home security system, smart traffic monitoring, and a DIY wildlife monitoring system. Let's get started!
Part 13: Building a Home Security System
Using OpenCV for Motion Detection and Alerting
Home security is a critical concern for many, and computer vision can play a significant role in making security systems smarter and more efficient. One of the most straightforward applications is motion detection.
Basic Motion Detection
The idea behind motion detection is simple: compare consecutive frames and identify the differences between them; any significant difference indicates motion.
Here's a simple Python code snippet for motion detection:
```python
import cv2

cap = cv2.VideoCapture(0)
ret, frame1 = cap.read()

while cap.isOpened():
    ret, frame2 = cap.read()
    if not ret:
        break

    # The difference between consecutive frames highlights anything that moved
    diff = cv2.absdiff(frame1, frame2)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(blur, 20, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        # Skip small contours that are likely just noise
        if cv2.contourArea(contour) < 900:
            continue
        (x, y, w, h) = cv2.boundingRect(contour)
        cv2.rectangle(frame1, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('Motion Detection', frame1)
    frame1 = frame2

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
Alerting Mechanism
Once motion is detected, you can add an alerting mechanism, such as sending an email or triggering an alarm. Python libraries like smtplib can be used for email notifications.
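As an illustration, here's a minimal sketch of an email alert helper built on Python's standard smtplib and email modules. The SMTP server address, port, and credentials are placeholders you'd replace with your own provider's settings (most providers require an app-specific password for scripts like this).
```python
import smtplib
from email.message import EmailMessage

# Placeholder settings -- replace with your own SMTP provider and credentials
SMTP_SERVER = "smtp.example.com"
SMTP_PORT = 587
SENDER = "pi@example.com"
PASSWORD = "your-app-password"
RECIPIENT = "you@example.com"

def send_motion_alert(subject="Motion detected",
                      body="Your Raspberry Pi camera detected movement."):
    """Send a simple email alert over an authenticated STARTTLS connection."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = SENDER
    msg["To"] = RECIPIENT
    msg.set_content(body)

    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
        server.starttls()              # upgrade to an encrypted connection
        server.login(SENDER, PASSWORD)
        server.send_message(msg)
```
You would call send_motion_alert() from inside the motion detection loop, ideally behind a cooldown timer so a single event doesn't trigger an email for every frame.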
Part 14: Smart Traffic Monitoring
Analyzing Traffic Patterns and Vehicle Counts
Traffic monitoring is another area where computer vision can make a significant impact. From counting the number of vehicles passing a point to identifying traffic violations, the applications are numerous.
Vehicle Counting
Using object detection algorithms like YOLO (You Only Look Once) or SSD (Single Shot Detector), you can count the number of vehicles in real time.
Here's a simplified example using YOLO:
```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # Use 0 for webcam, or replace with video file path

# Load the YOLOv3 network and look up the names of its output layers
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
output_layers = net.getUnconnectedOutLayersNames()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    height, width, channels = frame.shape

    # Convert the frame to a 416x416 blob, scaled to [0, 1], and run a forward pass
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)
    outs = net.forward(output_layers)

    class_ids = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                # Detections are normalized centre coordinates plus width and height
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    # Non-maximum suppression removes overlapping boxes for the same object
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)

    vehicle_count = 0
    for i in np.array(indexes).flatten():
        x, y, w, h = boxes[i]
        if class_ids[i] in (2, 5):  # 2 = car, 5 = bus in the standard COCO ordering
            vehicle_count += 1
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.putText(frame, f'Vehicles: {vehicle_count}', (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('Smart Traffic Monitoring', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
Note:
- You'll need to download the YOLOv3 weights and configuration file and place them in the same directory as your script, or provide the full path to those files in the cv2.dnn.readNet() function.
- The class IDs for cars and buses might differ based on the specific YOLO model you are using, so you may need to adjust those IDs in the code.
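Rather than hard-coding the numeric IDs, you can look them up from a labels file. The sketch below assumes you've also downloaded the coco.names file that ships with the standard YOLOv3 release and placed it next to the weights.
```python
# Load the class labels that correspond to the YOLOv3 COCO weights
# (assumes a coco.names file with one label per line in the working directory)
with open("coco.names") as f:
    class_names = [line.strip() for line in f]

# Collect the numeric IDs of every class we want to count as a vehicle
vehicle_labels = {"car", "bus", "truck", "motorbike"}
vehicle_ids = {i for i, name in enumerate(class_names) if name in vehicle_labels}

print(vehicle_ids)  # e.g. {2, 3, 5, 7} with the standard coco.names ordering
```
In the counting loop you could then replace the hard-coded check with `if class_ids[i] in vehicle_ids:`.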
Part 15: DIY Wildlife Monitoring System
Setting Up a Camera to Monitor and Identify Wildlife
Wildlife monitoring can be both a fascinating hobby and a crucial part of environmental conservation efforts. With a Raspberry Pi, a camera module, and some OpenCV magic, you can create a DIY wildlife monitoring system.
Basic Setup
The basic setup involves a Raspberry Pi with a camera module placed in a natural habitat. The Pi can be powered by a solar power bank to make it self-sufficient.
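Before adding any detection logic, it's worth verifying the camera end to end. Here's a minimal sketch of a snapshot logger that saves a timestamped image every few minutes; it assumes the camera is visible to OpenCV as device 0 (for example, a USB webcam or the Pi camera exposed through the V4L2 driver).
```python
import os
import time
from datetime import datetime

import cv2

SNAPSHOT_DIR = "snapshots"      # output folder, created if it doesn't exist
INTERVAL_SECONDS = 300          # capture one frame every 5 minutes

os.makedirs(SNAPSHOT_DIR, exist_ok=True)
cap = cv2.VideoCapture(0)

try:
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # Save the frame with a timestamped filename, e.g. 20240101_120000.jpg
        filename = datetime.now().strftime("%Y%m%d_%H%M%S") + ".jpg"
        cv2.imwrite(os.path.join(SNAPSHOT_DIR, filename), frame)
        time.sleep(INTERVAL_SECONDS)
finally:
    cap.release()
```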
Animal Identification
You can use pre-trained machine learning models to identify the types of animals that appear in the camera feed. TensorFlow's Object Detection API offers a range of models that can be used for this purpose.
Here's a simple example:
```python
import tensorflow as tf
import cv2
import numpy as np

# Load a pre-trained SSD MobileNet v2 model exported in SavedModel format
model = tf.saved_model.load("ssd_mobilenet_v2_coco/saved_model")
infer = model.signatures["serving_default"]

cap = cv2.VideoCapture(0)  # Use 0 for webcam, or replace with video file path

# Map COCO class IDs to the animals we care about (16 = bird, 17 = cat, 18 = dog
# in the standard COCO label map); add more entries as needed
animal_classes = {16: 'bird', 17: 'cat', 18: 'dog'}

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Resize to the model's expected input size and add a batch dimension
    input_frame = cv2.resize(frame, (300, 300))
    input_frame = np.expand_dims(input_frame, axis=0)
    detections = infer(tf.convert_to_tensor(input_frame, dtype=tf.uint8))

    detection_boxes = detections['detection_boxes'][0].numpy()
    detection_classes = detections['detection_classes'][0].numpy().astype(int)
    detection_scores = detections['detection_scores'][0].numpy()

    for i in range(len(detection_boxes)):
        if detection_classes[i] in animal_classes and detection_scores[i] > 0.5:
            # Boxes are normalized [ymin, xmin, ymax, xmax]; scale back to pixels
            box = detection_boxes[i] * np.array([frame.shape[0], frame.shape[1],
                                                 frame.shape[0], frame.shape[1]])
            (y1, x1, y2, x2) = box.astype("int")
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, animal_classes[detection_classes[i]], (x1, y1 - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

    cv2.imshow('DIY Wildlife Monitoring', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
Note:
- The animal_classes mapping should be customized to match the label map of the specific model you are using; the IDs shown assume the standard COCO label map.
- You'll need to download the pre-trained TensorFlow model and place it in the directory specified, or provide the full path to the saved model directory.
- The code assumes that the model's input size is 300x300 pixels. You may need to adjust this based on your specific model's requirements.
Conclusion
In this article, we've explored how to apply the computer vision techniques we've learned in practical, real-world scenarios. We've covered building a home security system with motion detection, creating a smart traffic monitoring system, and even setting up a DIY wildlife monitoring system. These projects not only serve as excellent learning experiences but also have real-world applications that can make our lives better and safer. Whether you're a hobbyist or a professional, the possibilities with Raspberry Pi and OpenCV are endless. Happy building!