Blur and Anonymize Faces With OpenCV and Python
by Adrian Rosebrock on April 6, 2020
In this tutorial, you will learn how to blur and anonymize faces using OpenCV and Python.
Today’s blog post is inspired by an email I received last week from PyImageSearch reader, Li
Wei:
I’m in charge of creating the dataset but my professor has asked me to “anonymize” each
image by detecting faces and then blurring them to ensure privacy is protected and that no
face can be recognized (apparently this is a requirement at my institution before we publicly
distribute the dataset).
Do you have any tutorials on face anonymization? How can I blur faces using OpenCV?
Thanks,
Li Wei
Li asks a great question — we often utilize face detection in our projects, typically as the first
step in a face recognition pipeline.
But what if we wanted to do the “opposite” of face recognition? What if we instead wanted
to anonymize the face by blurring it, thereby making it impossible to identify the face?
To learn how to blur and anonymize faces with OpenCV and Python, just keep reading!
In the first part of this tutorial, we’ll briefly discuss what face blurring is and how we can use
OpenCV to anonymize faces in images and video streams.
From there, we’ll discuss the four-step method to blur faces with OpenCV and Python.
We’ll then review our project structure and implement two methods for face blurring with OpenCV: a simple Gaussian blur and a pixelated blur.
Given our two implementations, we’ll create Python driver scripts to apply these face blurring
methods to both images and video.
We’ll then review the results of our face blurring and anonymization methods.
What is face blurring, and how can it be used for face anonymization?
Face blurring is a computer vision method used to anonymize faces in images and video.
An example of face blurring and anonymization can be seen in Figure 1 above — notice how
the face is blurred, and the identity of the person is indiscernible.
Applying face blurring with OpenCV and computer vision is a four-step process.
Step #1 is to perform face detection to localize faces in the image or video stream. Any face detector can be used here, provided that it can produce the bounding box coordinates of a face in an image or video stream, including:
• Haar cascades
• HOG + Linear SVM
• Deep learning-based face detectors.
You can refer to this face detection guide for more information on how to detect faces in an
image.
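If you want to see Step #1 in isolation, here is a minimal sketch using a Haar cascade, one of the detector options listed above (the cascade filename below assumes the copy bundled with OpenCV; the driver scripts later in this post use the deep learning detector instead):

import cv2

# load OpenCV's bundled frontal face Haar cascade
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# load an image, convert it to grayscale, and detect faces
image = cv2.imread("examples/adrian.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
rects = detector.detectMultiScale(gray, scaleFactor=1.1,
    minNeighbors=5, minSize=(30, 30))

# each detection is an (x, y, w, h) bounding box
for (x, y, w, h) in rects:
    print("face at x={}, y={}, w={}, h={}".format(x, y, w, h))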
Once you have detected a face, Step #2 is to extract the Region of Interest (ROI):
Figure 4: The second step for blurring faces with Python and OpenCV is to extract the face region of
interest (ROI).
Your face detector will give you the bounding box (x, y)-coordinates of a face in an image.
You can then use this information to extract the face ROI itself, as shown in Figure 4 above.
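Concretely, the extraction is a single NumPy slice. In the short sketch below, the bounding box values are purely illustrative stand-ins for what your detector would return:

import cv2

# assume Step #1 produced this bounding box (illustrative values)
(startX, startY, endX, endY) = (120, 80, 280, 260)

# extract the face ROI via NumPy array slicing
image = cv2.imread("examples/adrian.jpg")
face = image[startY:endY, startX:endX]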
With the face ROI in hand, Step #3 is to actually blur and anonymize the face. Typically, you’ll apply a Gaussian blur to anonymize the face. You may also apply methods to pixelate the face if you find the end result more aesthetically pleasing.
Exactly how you “blur” the image is up to you — the important part is that the face is
anonymized.
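Continuing the illustrative snippet above, a simple Gaussian blur of the ROI could be a single line (the kernel size here is an arbitrary choice; the helper function we implement later derives it from the ROI dimensions automatically):

# heavily blur the face ROI with a large, odd Gaussian kernel
blurredFace = cv2.GaussianBlur(face, (99, 99), 0)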
With the face blurred and anonymized, Step #4 is to store the blurred face back in the
original image:
Figure 6: The fourth and final step for face blurring with Python and OpenCV is to replace the
original face ROI with the blurred face ROI.
Using the original (x, y)-coordinates from the face detection (i.e., Step #2), we can take the
blurred/anonymized face and then store it back in the original image (if you’re utilizing
OpenCV and Python, this step is performed using NumPy array slicing).
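Still continuing the same illustrative snippet, Step #4 is simply the reverse slice assignment:

# store the blurred face back in the original image via NumPy array slicing
image[startY:endY, startX:endX] = blurredFace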
The face in the original image has been blurred and anonymized — at this point the face
anonymization pipeline is complete.
Let’s see how we can implement face blurring and anonymization with OpenCV in the
remainder of this tutorial.
To follow my face blurring tutorial, you will need OpenCV installed on your system. I
recommend installing OpenCV 4 using one of my tutorials:
I recommend the pip installation method for 99% of readers — it’s also how I typically install
OpenCV for quick projects like face blurring.
If you think you might need the full install of OpenCV with patented algorithms, you should
consider either the second or third bullet depending on your operating system. Both of these
guides require compiling from source, which takes considerably longer as well, but can (1)
give you the full OpenCV install and (2) allow you to optimize OpenCV for your operating
system and system architecture.
Once you have OpenCV installed, you can move on with the rest of the tutorial.
Note: I don’t support the Windows OS here at PyImageSearch. See my FAQ page.
Project structure
Go ahead and use the “Downloads” section of this tutorial to download the source code,
example images, and pre-trained face detector model. From there, let’s inspect the contents:
$ tree --dirsfirst
├── examples
│ ├── adrian.jpg
│ ├── chris_evans.png
│ ├── robert_downey_jr.png
│ ├── scarlett_johansson.png
│ └── tom_king.jpg
├── face_detector
│ ├── deploy.prototxt
│ └── res10_300x300_ssd_iter_140000.caffemodel
├── pyimagesearch
│ ├── __init__.py
│ └── face_blurring.py
├── blur_face.py
└── blur_face_video.py
3 directories, 11 files
The first step of face blurring is to perform face detection to localize faces in an image/frame. We’ll use the deep learning-based Caffe face detector included in the face_detector/ directory.
Our two driver scripts, blur_face.py and blur_face_video.py, first detect faces and then apply face blurring to images and video streams, respectively. We will step through both scripts so that you can adapt them for your own projects.
Both of our face blurring helper functions live inside the pyimagesearch module in the face_blurring.py file.
Figure 7: Gaussian face blurring with OpenCV and Python (image source).
We’ll be implementing two helper functions to aid us in face blurring and anonymity:
• anonymize_face_simple: Performs a simple Gaussian blur on the face ROI (such as in Figure 7 above)
• anonymize_face_pixelate: Creates a pixelated blur-like effect (which we’ll cover in the next section)
Let’s start with the anonymize_face_simple method. Open up the face_blurring.py file in the pyimagesearch module, and insert the following code:
# import the necessary packages
import numpy as np
import cv2

def anonymize_face_simple(image, factor=3.0):
    # automatically determine the size of the blurring kernel based
    # on the spatial dimensions of the input image
    (h, w) = image.shape[:2]
    kW = int(w / factor)
    kH = int(h / factor)
    # ensure the kernel width and height are odd
    if kW % 2 == 0:
        kW -= 1
    if kH % 2 == 0:
        kH -= 1
    # apply a Gaussian blur to the input image using our computed
    # kernel size
    return cv2.GaussianBlur(image, (kW, kH), 0)
Our face blurring utilities require NumPy and OpenCV imports as shown on Lines 2 and 3.
Beginning on Line 5, we define our anonymize_face_simple function, which accepts an input image and a blurring factor.
Lines 8-18 derive the blurring kernel’s width and height as a function of the input image
dimensions:
• The larger the kernel size, the more blurred the output face will be
• The smaller the kernel size, the less blurred the output face will be
Increasing the factor will therefore increase the amount of blur applied to the face.
When applying a blur, our kernel dimensions must be odd integers such that the kernel can be
placed at a central (x, y)-coordinate of the input image (see my tutorial on convolutions with
OpenCV for more information on why kernels must be odd integers).
Finally, we apply OpenCV’s GaussianBlur using the computed kW and kH kernel dimensions and return the blurred image to the caller.
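As a quick sanity check (the image path and factor value here are only for illustration), you could exercise the helper on a whole image like this:

from pyimagesearch.face_blurring import anonymize_face_simple
import cv2

# blur an entire test image with the Gaussian helper
image = cv2.imread("examples/adrian.jpg")
blurred = anonymize_face_simple(image, factor=3.0)
cv2.imshow("Simple Blur", blurred)
cv2.waitKey(0)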
In the next section, we’ll cover an alternative anonymity method: pixelated blurring.
Creating a pixelated face blur with OpenCV
Figure 8: Creating a pixelated face effect on an image with OpenCV and Python (image source).
The second method we’ll be implementing for face blurring and anonymization creates a
pixelated blur-like effect — an example of such a method can be seen in Figure 8.
Notice how we have pixelated the image and made the identity of the person indiscernible.
This pixelated type of face blurring is typically what most people think of when they hear
“face blurring” — it’s the same type of face blurring you’ll see on the evening news, mainly
because it’s a bit more “aesthetically pleasing” to the eye than a Gaussian blur (which is
indeed a bit “jarring”).
Let’s learn how to implement this pixelated face blurring method with OpenCV — open up
the
face_blurring.py
file (the same file we used in the previous section), and append the following code:
def anonymize_face_pixelate(image, blocks=3):
    # divide the input image into NxN blocks
    (h, w) = image.shape[:2]
    xSteps = np.linspace(0, w, blocks + 1, dtype="int")
    ySteps = np.linspace(0, h, blocks + 1, dtype="int")
    # loop over the blocks in both the x and y direction
    for i in range(1, len(ySteps)):
        for j in range(1, len(xSteps)):
            # compute the (x, y)-coordinates of the current block
            startX = xSteps[j - 1]
            startY = ySteps[i - 1]
            endX = xSteps[j]
            endY = ySteps[i]
            # fill the block with its mean color to "pixelate" it
            roi = image[startY:endY, startX:endX]
            (B, G, R) = [int(x) for x in cv2.mean(roi)[:3]]
            cv2.rectangle(image, (startX, startY), (endX, endY), (B, G, R), -1)
    # return the pixelated blurred image
    return image
Our anonymize_face_pixelate method accepts an input image and the number of pixel blocks.
Lines 26-28 grab our face image dimensions and divide it into MxN blocks.
From there, we proceed to loop over the blocks in both the x and y directions (Lines 31 and
32).
In order to compute the starting and ending bounding coordinates for the current block, we use our step indices, xSteps and ySteps (Lines 35-38).
Subsequently, we extract the current block ROI and compute the mean RGB pixel intensities
for the ROI (Lines 43 and 44).
We then annotate a rectangle on the block using the computed mean RGB values, thereby creating the “pixelated”-like effect (Lines 45 and 46).
Note: To learn more about OpenCV drawing functions, be sure to spend some time on my
OpenCV Tutorial.
Finally, the pixelated face image is returned to the caller.
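Mirroring the earlier sanity check (again, the path and block count are only illustrative), the pixelated helper can be exercised the same way:

from pyimagesearch.face_blurring import anonymize_face_pixelate
import cv2

# pixelate an entire test image using 20 blocks per dimension
image = cv2.imread("examples/tom_king.jpg")
pixelated = anonymize_face_pixelate(image, blocks=20)
cv2.imshow("Pixelated", pixelated)
cv2.waitKey(0)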
Now that we have our two face blurring methods implemented, let’s learn how we can apply
them to blur a face in an image using OpenCV and Python.
Open up the blur_face.py file in your project structure, and insert the following code:
# import the necessary packages
from pyimagesearch.face_blurring import anonymize_face_pixelate
from pyimagesearch.face_blurring import anonymize_face_simple
import numpy as np
import argparse
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to input image")
ap.add_argument("-f", "--face", required=True, help="path to face detector model directory")
ap.add_argument("-m", "--method", type=str, default="simple",
    choices=["simple", "pixelated"], help="face blurring/anonymization method")
ap.add_argument("-b", "--blocks", type=int, default=20, help="# of blocks for the pixelated blurring method")
ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
Our most notable imports are both our face pixelation and face blurring functions from the
previous two sections (Lines 2 and 3).
Our script accepts five command line arguments, the first two of which are required:
• --image: The path to the input image containing faces
• --face: The path to the face detector model directory
• --method: Either the simple blurring or pixelated method can be chosen with this flag. The simple method is the default
• --blocks: For pixelated face anonymity, you must provide the number of blocks you want to use, or you can keep the default of 20
• --confidence: The minimum probability to filter weak face detections is set to 50% by default
Given our command line arguments, we’re now ready to perform face detection:
# load our serialized face detector model from disk
prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
weightsPath = os.path.sep.join([args["face"],
    "res10_300x300_ssd_iter_140000.caffemodel"])
net = cv2.dnn.readNet(prototxtPath, weightsPath)

# load the input image from disk, clone it, and grab the image spatial
# dimensions
image = cv2.imread(args["image"])
orig = image.copy()
(h, w) = image.shape[:2]

# construct a blob from the image
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
    (104.0, 177.0, 123.0))

# pass the blob through the network and obtain the face detections
net.setInput(blob)
detections = net.forward()
After loading our serialized face detector from disk, we load the input --image, clone it, and construct a blob for inference (Lines 33-39). Read my How OpenCV’s blobFromImage works tutorial to learn the “why” and “how” behind the function call on Lines 38 and 39.
Deep learning face detection inference (Step #1) takes place on Lines 43 and 44.
# loop over the detections
for i in range(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the detection
    confidence = detections[0, 0, i, 2]
    # filter out weak detections
    if confidence > args["confidence"]:
        # compute the (x, y)-coordinates of the bounding box for the object
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        # extract the face ROI
        face = image[startY:endY, startX:endX]
Here, we loop over detections and check the confidence, ensuring it meets the minimum
threshold (Lines 47-54).
Assuming so, we then extract the face ROI (Step #2) via Lines 57-61.
        # check to see if we are applying the "simple" face blurring method
        if args["method"] == "simple":
            face = anonymize_face_simple(face, factor=3.0)
        # otherwise, apply the "pixelated" face anonymization method
        else:
            face = anonymize_face_pixelate(face,
                blocks=args["blocks"])
        # store the blurred face in the output image
        image[startY:endY, startX:endX] = face
Depending on the --method, we apply either the simple Gaussian blur or the pixelated anonymization to the face (Lines 65-72). Step #4 then stores the blurred face back in the output image using NumPy array slicing.
Steps #2-#4 are then repeated for all faces detected in the input --image.
# display the original image and the output image with the blurred
# face(s) side by side
output = np.hstack([orig, image])
cv2.imshow("Output", output)
cv2.waitKey(0)
To wrap up, the original and altered images are displayed side by side until a key is pressed
(Lines 79-81).
Let’s now put our face blurring and anonymization methods to work.
Go ahead and use the “Downloads” section of this tutorial to download the source code,
example images, and pre-trained OpenCV face detector.
Figure 9: Left: A photograph of me. Right: My face has been blurred with OpenCV and Python using a
Gaussian approach.
On the left, you can see the original input image (i.e., me), while the right shows that my face
has been blurred using the Gaussian blurring method — without seeing the original image,
you would have no idea it was me (other than the tattoos, I suppose).
Let’s try another image, this time applying the pixelated blurring technique:
Figure 10: Tom King’s face has been pixelated with OpenCV and Python; you can adjust the block
settings until you’re comfortable with the level of anonymity. (image source)
On the left, we have the original input image of Tom King, one of my favorite comic writers.
Then, on the right, we have the output of the pixelated blurring method — without seeing the
original image, you would have no idea whose face was in the image.
Our previous example only handled blurring and anonymizing faces in images — but what if
we wanted to apply face blurring and anonymization to real-time video streams?
Is that possible?
Open up the
blur_face_video.py
file in your project structure, and let’s learn how to blur faces in real-time video with OpenCV:
# import the necessary packages
from pyimagesearch.face_blurring import anonymize_face_pixelate
from pyimagesearch.face_blurring import anonymize_face_simple
from imutils.video import VideoStream
import numpy as np
import argparse
import imutils
import time
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--face", required=True, help="path to face detector model directory")
ap.add_argument("-m", "--method", type=str, default="simple",
    choices=["simple", "pixelated"], help="face blurring/anonymization method")
ap.add_argument("-b", "--blocks", type=int, default=20, help="# of blocks for the pixelated blurring method")
ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
We begin with our imports on Lines 2-10. For real-time face blurring in video, we’ll use the threaded VideoStream class from imutils.
Our command line arguments are the same as previously (Lines 13-23).
We’ll then load our face detector and initialize our video stream:
# load our serialized face detector model from disk
prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
weightsPath = os.path.sep.join([args["face"],
    "res10_300x300_ssd_iter_140000.caffemodel"])
net = cv2.dnn.readNet(prototxtPath, weightsPath)

# initialize the video stream and allow the camera sensor to warm up
vs = VideoStream(src=0).start()
time.sleep(2.0)
We’ll then proceed to loop over frames in the stream and perform Step #1 — face detection:
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # grab the dimensions of the frame and then construct a blob from it
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300),
        (104.0, 177.0, 123.0))

    # pass the blob through the network and obtain the face detections
    net.setInput(blob)
    detections = net.forward()
Once faces are detected, we’ll ensure they meet the minimum confidence threshold:
    # loop over the detections
    for i in range(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with the detection
        confidence = detections[0, 0, i, 2]
        # filter out weak detections
        if confidence > args["confidence"]:
            # compute the (x, y)-coordinates of the bounding box for the object
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            # extract the face ROI
            face = frame[startY:endY, startX:endX]
            # check to see if we are applying the "simple" face blurring method
            if args["method"] == "simple":
                face = anonymize_face_simple(face, factor=3.0)
            # otherwise, apply the "pixelated" face anonymization method
            else:
                face = anonymize_face_pixelate(face,
                    blocks=args["blocks"])
            # store the blurred face in the output frame
            frame[startY:endY, startX:endX] = face
Looping over the detections and filtering out the weak ones, we extract the face ROI (Step #2) on Lines 55-69. Depending on the --method, we then blur or pixelate the face and store the result back in our camera’s frame (Line 83).
Finally, we display the output frame and check for a quit keypress:
    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
If the q key is pressed, we break out of the loop and perform cleanup.
We are now ready to apply face blurring with OpenCV to real-time video streams.
Start by using the “Downloads” section of this tutorial to download the source code and pre-
trained OpenCV face detector.
From there, launch the blur_face_video.py script using the default simple (Gaussian) blurring method.
Notice how my face is blurred in the video stream using the Gaussian blurring method.
Let’s now apply the pixelated face blurring method by running the same script with the --method pixelated flag.
Again, my face is anonymized/blurred using OpenCV, but using the more “aesthetically
pleasing” pixelated method.
The face blurring method we’re applying here assumes that a face can be detected in each and
every frame of our input video stream.
But what happens if our face detector misses a detection, such as in the video at the top of this section?
If our face detector misses a face detection, then the face cannot be blurred, thereby defeating
the purpose of face blurring and anonymization.
So what do we do in those situations?
Typically, the easiest method is to take the last known location of the face (i.e., the
previous detection location) and then blur that region.
Faces don’t tend to move very quickly, so blurring the last known location will help ensure
the face is anonymized even when your face detector misses the face.
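A minimal sketch of that idea might look like the helper below (this is not code from the tutorial’s downloads; the boxes list, the prevBox variable, and the blur_fn callback are hypothetical names used purely for illustration):

def blur_with_fallback(frame, boxes, prevBox, blur_fn):
    # prefer a detection from the current frame; otherwise fall back to
    # the last known face location
    box = boxes[0] if len(boxes) > 0 else prevBox

    # if we have a box (current or previous), blur that region in place
    if box is not None:
        (startX, startY, endX, endY) = box
        face = frame[startY:endY, startX:endX]
        frame[startY:endY, startX:endX] = blur_fn(face)

    # return the box so the caller can remember it for the next frame
    return box

Inside the video loop, you would call this helper once per frame, pass anonymize_face_simple or anonymize_face_pixelate as blur_fn, and feed the returned box back in on the next iteration.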
A more advanced option is to use dedicated object trackers similar to what we do in our
people/footfall counter guide.
This method is more computationally complex than the simple “last known location,” but it’s
also far more robust.
I’ll leave implementing those methods up to you (although I am tempted to cover them in a
future tutorial, as they are pretty fun methods to implement).
Summary
In this tutorial, you learned how to blur and anonymize faces in both images and real-time
video streams using OpenCV and Python.
1. Step #1: Apply a face detector (i.e., Haar cascades, HOG + Linear SVM, deep learning-based
face detectors) to detect the presence of a face in an image
2. Step #2: Use the bounding box (x, y)-coordinates to extract the face ROI from the input
image
3. Step #3: Blur the face in the image, typically with a Gaussian blur or pixelated blur, thereby
anonymizing the face and protecting the identity of the person in the image
4. Step #4: Store the blurred/anonymized face back in the original image
We then implemented this entire pipeline using only OpenCV and Python.
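For reference, here is a compact, self-contained sketch that condenses the whole pipeline into one function (the file paths, confidence threshold, and blur factor below are illustrative, and the helper recomputes the odd kernel size inline rather than calling the module’s functions):

import numpy as np
import cv2

def anonymize_image(image, net, conf=0.5, factor=3.0):
    # Step #1: detect faces with the Caffe SSD face detector
    (h, w) = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()

    for i in range(0, detections.shape[2]):
        if detections[0, 0, i, 2] > conf:
            # Step #2: extract the face ROI
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            face = image[startY:endY, startX:endX]

            # Step #3: compute an odd, ROI-dependent Gaussian kernel size
            kW = max(int(face.shape[1] / factor) | 1, 3)
            kH = max(int(face.shape[0] / factor) | 1, 3)

            # Step #4: store the blurred face back in the image
            image[startY:endY, startX:endX] = cv2.GaussianBlur(face, (kW, kH), 0)

    return image

net = cv2.dnn.readNet("face_detector/deploy.prototxt",
    "face_detector/res10_300x300_ssd_iter_140000.caffemodel")
output = anonymize_image(cv2.imread("examples/adrian.jpg"), net)
cv2.imshow("Output", output)
cv2.waitKey(0)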