Experiment No 7
Objective: To perform face recognition: generating the data for face recognition,
preparing the training data, loading the data, and recognizing faces.
Theory:
Generating the Data for Face Recognition:
Generating data for face recognition typically involves capturing images or videos
of individuals' faces. This can be done through cameras or video recording devices.
It's essential to ensure good lighting conditions and a variety of facial expressions
and poses to train a robust face recognition model. These images or video frames
are used as the dataset for training and testing the face recognition system.
Recognizing Faces:
• Face recognition is the process of identifying and verifying individuals based on
their facial features. It involves the following steps:
• Face Detection: Locate and isolate faces within an image or video frame. This
can be done using methods like Haar Cascade face detection or deep
learning-based techniques.
• Feature Extraction: Extract facial features from the detected faces. Common
methods include Eigenfaces, Local Binary Patterns (LBP), and deep-learning-based
feature extraction using Convolutional Neural Networks (CNNs).
• Training: Train a face recognition model using a labeled dataset. This model
learns to map the feature vectors to the corresponding individuals' identities.
• Recognition: In the recognition phase, the model is used to identify individuals in
new images or video frames. The input features are compared to the learned
representations, and the model outputs the most likely identity or a confidence
score.
• Data Loading: Load the video frames or images into your program or
application.
• Face Representation: Represent the features as a feature vector for each face.
• Model Loading: Load the pre-trained face recognition model that has been
trained on your labeled dataset.
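The training and recognition steps above can be sketched as follows. This is a toy nearest-neighbour matcher over flattened pixel vectors, standing in for the LBP or CNN features mentioned above; the function names train and recognize and the tiny 4x4 "faces" are illustrative assumptions, not part of any library API.

```python
import numpy as np

def train(faces, labels):
    # "Training" here just stores one feature vector per labeled face
    return np.array([f.ravel().astype(float) for f in faces]), list(labels)

def recognize(model, face):
    # Compare the input features to the stored representations and
    # return the most likely identity plus a distance-based score
    vectors, labels = model
    dists = np.linalg.norm(vectors - face.ravel().astype(float), axis=1)
    i = int(np.argmin(dists))
    return labels[i], float(dists[i])

# Toy 4x4 "faces": a bright one for alice, a dark one for bob
alice = np.full((4, 4), 200, dtype=np.uint8)
bob = np.full((4, 4), 30, dtype=np.uint8)
model = train([alice, bob], ['alice', 'bob'])

# A probe close in appearance to alice is matched to her
name, score = recognize(model, np.full((4, 4), 190, dtype=np.uint8))
print(name)  # alice
```

A real system would replace the raw-pixel vectors with LBP histograms or CNN embeddings, but the map from feature vector to identity follows the same pattern.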
Code:
import cv2
import datetime
from google.colab.patches import cv2_imshow

# Open the WebM video file (replace 'video.webm' with your WebM video file)
video_path = '/content/child.webm'
cap = cv2.VideoCapture(video_path)

# Load the Haar cascade face detector bundled with OpenCV
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Detect faces in the grayscale frame and draw a rectangle around each one
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Overlay the current timestamp ("YYYY-MM-DD HH:MM:SS") in the top-left corner
    timestamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    cv2.putText(frame, timestamp, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    cv2_imshow(frame)

cap.release()
cv2.destroyAllWindows()
Output:
Conclusion:
We created a Python script that uses OpenCV to process a WebM video, detect
faces, and overlay timestamps on the frames. The script's main highlights are:
• The OpenCV library is used to open and read a WebM video file.
• A Haar cascade face detector is used to identify and highlight faces in each
frame.
• A timestamp in the format "YYYY-MM-DD HH:MM:SS" is added to the top-left
corner of each frame.
• Frames with detected faces and timestamps are displayed, and the font size,
position, and color of the overlay can be customized.
• The output can optionally be limited to only the first and last frames of the
video, allowing for greater flexibility in specific use cases.