Vazquez ImageProcessFundamentals

The document discusses fundamental image processing techniques including enhancing images through filtering, calibration, and adjusting brightness, contrast, and gamma as well as checking images by measuring intensity and analyzing particles.

Image Processing

Fundamentals
Nicolas Vazquez
Principal Software Engineer
National Instruments
Agenda
• Objectives and Motivations
• Enhancing Images
• Checking for Presence
• Locating Parts
• Measuring Features
• Identifying and Verifying Components

2
Class Goals
• Teach the fundamental image processing tools available
in machine vision software
• Provide some background into how the algorithms work
• Accelerate your machine vision learning curve

• What not to expect


– Learn how to develop a complete vision system
– 3D vision, color or other advanced topics
– Learn specific, proprietary functions
– Discussion of the benefits of different application development
environments

3
Image Processing for Machine Vision
• Objective
– To extract useful information present in an image, in a limited
amount of time
• Secondary
– To display an image for users
• Not
– Improve appearance of image in general
• Used for
– Image pre-processing
• Minimize variations of information in the image
• Prepare the image for processing and measurement
– Application specific processing
• Use image to count, locate, and measure attributes

4
Image Types
• Grayscale
– 8-bit: pixel values range from 0 to 255
– 16-bit: pixel values range from 0 to 65535
• Includes 10- to 16-bit images
• Color
– Composed of 3 grayscale images (RGB)
• Other image types
– Binary: 0 or 1 pixel values
– Floating point
– Complex
5
What is an Ideal Image?
• Range of grayscale values
– Spread out between 0 and 255
– No pixels “saturated” at 255 (for most applications)
• Loss of information, impossible to distinguish between saturated pixels
• Good contrast
– Between the parts of the image of interest
• Repeatable

In short, an ideal image requires the fewest image processing steps to obtain the result.

6
Class Organization

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
Enhance Images

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
Image Preprocessing
• Process an image so that the resulting image is more
suitable than the original for a specific application

• A preprocessing method that works well for one


application may not be the best method for another
application

9
Image Preprocessing
• Pre-processing occurs before the application-specific processing

Acquisition → Preprocessing → Application-Specific Processing

• Preprocessing examples: Shading Correction; De-blurring; De-noising; Contrast Enhancement; Feature Enhancement
• Application-specific processing examples: Intensity Measurements; OCR; Pattern Matching; Gauging; Barcode; Particle Analysis

10
Enhancement Techniques
• Spatial Domain: pixel-wise operations
– Brightness, contrast and gamma
– Lookup tables
– Gray morphology
– Filtering (smoothing, median, general convolution)
– Flat-field correction
• Frequency Domain
– Deblurring
– Filtering

11
Lookup Tables
• A lookup table is a function that maps grayscale values in an
image to new grayscale values, to create a new result image
– For example: reverse, square, power…
Original Image → Lookup Table → Result Image
Lookup table: y = ((x/255)^0.6) * 255

12
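As a sketch of how a lookup table is applied (NumPy assumed; this is a generic illustration, not a specific vendor's function):

```python
import numpy as np

# Build a 256-entry lookup table for the power mapping y = ((x/255)^0.6) * 255
lut = (((np.arange(256) / 255.0) ** 0.6) * 255.0).astype(np.uint8)

# Applying the LUT to an 8-bit image is a single indexing operation:
# every pixel value x is replaced by lut[x]
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
result = lut[image]
```

Because the table is precomputed once, applying any mapping (reverse, square, power, …) costs the same single pass over the image.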
Brightness
Original Image Result Image

• Adds a constant grayscale value to every pixel in the image
• Can improve appearance, but is not useful for most image processing steps

13
Contrast
Original Image Result Image

• Used to increase or decrease contrast
• Normal = the 45-degree line
– High = steeper than 45 degrees
– Low = flatter than 45 degrees
• Typical use is to improve edge detection
• Sacrifices one range of values to improve another

14
Gamma
Original Image Result Image

• Nonlinear contrast
enhancement
• Higher gamma improves
contrast for larger grayscale
values
• Does not cause saturation

15
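The three adjustments can be sketched together as one pixel-wise function (NumPy assumed; the mid-gray pivot of 128 and the exact gamma form are illustrative choices, not a particular product's implementation):

```python
import numpy as np

def adjust(image, brightness=0.0, contrast=1.0, gamma=1.0):
    """Illustrative brightness/contrast/gamma adjustment for an 8-bit image.
    brightness: constant offset added to every pixel
    contrast:   slope around mid-gray 128 (1.0 = the 45-degree line)
    gamma:      nonlinear curve; gamma > 1 expands contrast among brighter
                values without pushing pixels past 255 on its own"""
    x = image.astype(np.float64)
    x = (x - 128.0) * contrast + 128.0 + brightness   # linear part
    x = np.clip(x, 0.0, 255.0)
    x = ((x / 255.0) ** gamma) * 255.0                # gamma part
    return np.rint(x).astype(np.uint8)
```

Note how brightness and contrast can saturate pixels at 0 or 255, while the gamma curve maps [0, 255] onto [0, 255] and so does not saturate by itself.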
Histogram
• Indicates the number of pixels at each gray level
• Provides a good description of the composition of an image
• Helps to identify various populations, assess image quality

[Histogram: # Pixels vs. pixel values, showing pixel populations 1, 2, 3 and a saturation spike at 255]

16
Histogram Equalization
• Alters grayscale values of pixels so that they become evenly distributed across the full grayscale range
• The function associates an equal number of pixels per constant grayscale interval
• Takes full advantage of the available shades of gray
• Enhances contrast of the image without modifying its structure
[Figures: Original Image, Equalized]
17
Histogram Equalization
[Figure: original, result, and cumulative histograms for bright, dark, low-contrast, and high-contrast images]

18
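A minimal equalization sketch built directly on the cumulative histogram (NumPy assumed; 8-bit images with at least two gray levels):

```python
import numpy as np

def equalize(image):
    """Histogram equalization sketch: map gray levels through the normalized
    cumulative histogram so pixels spread over the full 0..255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    # Normalize the cumulative histogram to 0..1 (assumes > 1 gray level)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = (cdf * 255.0).astype(np.uint8)  # equalization is just a lookup table
    return lut[image]
```

Note that the result is itself a lookup-table operation: the structure of the image is untouched, only the gray-level mapping changes.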
Spatial Filtering
• Linear High Pass: Gradient, Laplacian
• Linear Low Pass: Smoothing, Gaussian
• Nonlinear High Pass: Gradient, Roberts, Sobel, Prewitt, Differentiation, Sigma
• Nonlinear Low Pass: Median, Nth Order
[Figure: example results for Gradient, Gaussian, Sobel, and Median filters]

19
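A bare-bones spatial filter sketch (NumPy assumed; this computes the correlation sum at each pixel with zero padding, which equals convolution for the symmetric kernels shown):

```python
import numpy as np

def convolve2d(image, kernel):
    """Minimal 2D filtering sketch: slide the kernel over the image and
    sum the products at each position (zero padding at the border)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 smoothing (linear low-pass) kernel and a Laplacian (high-pass) kernel
smooth = np.full((3, 3), 1.0 / 9.0)
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
```

On a uniform region the smoothing kernel leaves values unchanged and the Laplacian returns zero, which is why high-pass outputs highlight only edges.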
Frequency Domain Filtering
• Standard filtering can be done in frequency domain
– Low Pass, High Pass, Band Pass, Band Stop, etc.
– Deblurring
• Compute the Fourier transform of the image
• Multiply with the transfer function of the filter
• Take the inverse Fourier transform to get the filtered image

22
Frequency Domain Filtering

Input Image I(x,y) → FFT → F(u,v) → multiply by H(u,v) → F(u,v)·H(u,v) → IFFT → Output Image R(x,y)

[Figure: periodic noise image, FFT magnitude, bandstop-filtered FFT, bandstop-filtered result]

23
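The FFT → H(u,v) → IFFT pipeline can be sketched directly (NumPy assumed; the ideal low-pass transfer function is an illustrative choice):

```python
import numpy as np

def frequency_filter(image, transfer):
    """Filter in the frequency domain: Fourier transform the image,
    multiply by the filter's transfer function H(u,v), then invert."""
    F = np.fft.fft2(image)
    R = np.fft.ifft2(F * transfer)
    return np.real(R)

def ideal_lowpass(shape, cutoff):
    """Ideal low-pass H(u,v): 1 for frequencies within `cutoff`, else 0."""
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    return (np.sqrt(u ** 2 + v ** 2) <= cutoff).astype(np.float64)
```

A band-stop filter for periodic noise follows the same pattern, with H(u,v) set to 0 only in the ring of frequencies occupied by the noise.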
Spatial Filtering: Low Pass Examples
• Low Pass with Frequency Domain Filter

• Low Pass with Gaussian Filter

24
ENHANCE IMAGES:
IMAGE CALIBRATION
Spatial Calibration

• Corrects for lens and perspective distortion
• Allows the user to take real-world measurements from an image based on pixel locations
[Figures: lens distortion, perspective, known orientation offset]

26
Calibrating Your Image Setup
• Acquire an image of a calibration grid with known real-world distances between the dots
• Learn the calibration (mapping information) from its
perspective and distortion
• Apply this mapping information to subsequent
images and measurements

27
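The simplest form of the learned mapping — a single real-world-units-per-pixel scale from two grid dots — can be sketched as follows (a toy model; real calibration also models the lens and perspective distortion described above):

```python
import numpy as np

def learn_scale(dot_a, dot_b, real_distance):
    """Learn a scale factor (real-world units per pixel) from two calibration
    grid dots whose true separation is known. Toy model: ignores lens and
    perspective distortion, which full calibration must also correct."""
    pixel_distance = float(np.hypot(dot_b[0] - dot_a[0], dot_b[1] - dot_a[1]))
    return real_distance / pixel_distance

def pixels_to_real(pixel_length, scale):
    """Apply the learned mapping to a pixel measurement in a later image."""
    return pixel_length * scale
```

The same learn-once / apply-many pattern holds for the full distortion model: the grid image is processed once, and the resulting mapping converts every subsequent pixel measurement.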
Image Correction
• Use calibration to adjust image geometry so features are represented properly.
Calibration grid Corrected Image

Acquired image Corrected Image

28
Image Spatial Calibration Example

Acquired image

Calibration grid Calibrated measurements


29
Types of Calibration
• 2D spatial calibration
– Applied only to a plane
– Corrects for lens and perspective distortion
– Does not improve resolution of a measurement
– Cannot compensate for poor lighting or unstable conditions
• 3D spatial calibration: x, y, z
• Intensity calibration
• Color

30
Check Presence/Absence

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
Region of Interest (ROI)
• Region of Interest (ROI)
– A portion of the image upon which an image processing step may be performed
– Can be defined statically (fixed) or dynamically (based on features located in the
image)
• Used to process only pixels that are interesting
– Increases reliability
– Reduces processing time

32
Measure Intensity
• Intensity statistics of Region of Interest (Search Area) can
be used as a very simple check for presence/absence

33
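This check can be sketched in a few lines (NumPy assumed; the rectangular ROI bounds and tolerance are illustrative):

```python
import numpy as np

def check_presence(image, roi, expected_mean, tolerance):
    """Presence/absence by intensity: compare the mean gray level inside a
    rectangular ROI (y0, y1, x0, x1) against an expected value."""
    y0, y1, x0, x1 = roi
    mean = float(image[y0:y1, x0:x1].mean())
    return abs(mean - expected_mean) <= tolerance, mean
```

Statistics such as the standard deviation or min/max of the ROI can be added the same way when the mean alone is not discriminating enough.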
Thresholding
• Thresholding
– Converts each pixel value in an image to 0 or 1 according to
the value of the original pixel
– Helps extract significant structures in an image

34
Histogram and Thresholding
Original Image Binary Image

Image Histogram

Threshold Range

35
Finding Gray Objects
• Can also set upper and lower limits for pixel values
• Pixels inside the bounds of the limit (red region) are
set to 1, and those outside the limit are set to 0

36
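Both the simple threshold and the two-sided "gray object" threshold reduce to one comparison per pixel (NumPy assumed):

```python
import numpy as np

def threshold(image, lower, upper=255):
    """Binary threshold sketch: pixels inside [lower, upper] become 1,
    everything else becomes 0."""
    return ((image >= lower) & (image <= upper)).astype(np.uint8)
```

With the default `upper=255` this is the ordinary threshold; passing both bounds extracts objects in a middle gray band, as on the slide above.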
Automatic Thresholding
Original Image 1 Thresholded Image 1
Automatic Threshold Settings

Original Image 2 Thresholded Image 2


Automatic Threshold Settings

37
Global vs. Local Threshold
Global Threshold Result

Original Image

Local Threshold Result

38
Particles and Connectivity
• Thresholding creates binary image:
– Pixels are either ‘0’ or ‘1’
• A particle in a binary image is a group of
connected ‘1’ pixels
• Defining connectivity

39
How Connectivity Affects a Particle
• Connectivity-4: two pixels are considered part of the same particle if they are horizontally or vertically adjacent
• Connectivity-8: two pixels are considered part of the same particle if they are horizontally, vertically, or diagonally adjacent
[Figure: the same pixel grid contains 3 particles under connectivity-4 but only 2 under connectivity-8]
40
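The connectivity distinction can be sketched with a small flood-fill labeler (pure Python/NumPy; production tools use faster labeling algorithms):

```python
import numpy as np
from collections import deque

def label_particles(binary, connectivity=4):
    """Label groups of connected '1' pixels; connectivity is 4 or 8."""
    if connectivity == 4:
        neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        neighbors = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and not labels[i, j]:
                count += 1                      # start a new particle
                queue = deque([(i, j)])
                labels[i, j] = count
                while queue:                    # flood-fill its neighbors
                    y, x = queue.popleft()
                    for dy, dx in neighbors:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0]
                                and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two diagonal pixels: two particles with connectivity-4, one with connectivity-8
b = np.array([[1, 0], [0, 1]], dtype=np.uint8)
```

Running both settings on `b` shows exactly the effect described above: diagonal neighbors merge only under connectivity-8.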
Binary Morphology
• Binary morphological functions extract and alter the
structure of particles in a binary image
• Morphological functions remove unwanted artifacts introduced by the thresholding process:
– Noise particles
– Holes within particles
– Particles touching the border of an image
– Particles touching each other
– Particles with uneven borders

41
Erosion
• Decreases the size of objects in an image
– Removes a layer of pixels along the boundary of the particle
– Eliminates small isolated particles in the background and removes
narrow peninsulas
• Use Erode to:
– Separate particles for counting

Erosion

42
Dilation
• Increases the size of objects in an image
– Adds a layer of pixels around the boundary of an object (including the
inside boundary for objects with holes)
– Eliminates tiny holes in objects
– Removes gaps or bays of insufficient width
• Use Dilate to connect particles

Dilation

43
Erosion vs. Dilation
[Figure: original vs. result — erosion (connectivity-4) removes a layer of boundary pixels (–); dilation (connectivity-4) adds a layer of boundary pixels (+)]

44
Open
• An erosion followed by a dilation
– Remove small particles and smooth boundaries
– Does not significantly alter the size or shape of particles
– Borders removed by the erosion process are replaced by the dilation process
• Use Open To:
– Eliminate small particles that constitute noise

Open

45
Close
• A dilation followed by an erosion
– Fills holes and creates smooth boundaries
– Does not significantly alter the size or shape of particles
– Particles that do not connect after the dilation are not changed
• Use Close To:
– Eliminate small holes that constitute noise

Close

46
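A connectivity-4 sketch of all four operations (NumPy assumed; zero padding at the image border):

```python
import numpy as np

def dilate(binary):
    """Connectivity-4 dilation: a pixel becomes 1 if it or any 4-neighbor is 1."""
    p = np.pad(binary, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:]).astype(np.uint8)

def erode(binary):
    """Connectivity-4 erosion: a pixel stays 1 only if it and all 4-neighbors are 1."""
    p = np.pad(binary, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:]).astype(np.uint8)

def open_op(binary):
    return dilate(erode(binary))   # erosion then dilation: removes small particles

def close_op(binary):
    return erode(dilate(binary))   # dilation then erosion: fills small holes
```

An isolated pixel survives dilation but vanishes under erosion and open, while a one-pixel hole inside a particle survives open but is filled by close — exactly the behavior the slides describe.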
Advanced Morphology
• Advanced morphological functions are combinations of the primitive operations
• These functions execute the following tasks:
– Remove small or large particles
– Remove particles from an image border
– Fill holes
– Separate particles
– Keep or remove particles identified by morphological parameters
– Segment the image

47
Particle Filtering
• Keeps or removes particles based on geometric features
– Area, width, height, aspect ratio and other features are
commonly used to filter
• Typically used on binary images
• Cleans up noisy images

[Figure: Threshold → Particle Filter (keep Heywood Circularity Factor < 1.1)]

48
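Given a labeled particle image (0 = background, positive integers per particle), a filter on one geometric feature — area — can be sketched as (NumPy assumed; other features like width or aspect ratio follow the same keep/remove pattern):

```python
import numpy as np

def filter_particles_by_area(labels, min_area, max_area):
    """Keep only labeled particles whose pixel count (area) lies within
    [min_area, max_area]; removed particles are set back to background."""
    areas = np.bincount(labels.ravel())            # pixel count per label
    keep = (areas >= min_area) & (areas <= max_area)
    keep[0] = False                                # background is not a particle
    return np.where(keep[labels], labels, 0)
```

This is the essence of cleaning a noisy binary image: tiny noise particles fall below `min_area` and disappear in one pass.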
Application: Particle counting

Original

Particle Filter Settings


Threshold

Particle Filter
Particle measurements
Measures particle features including:
location, orientation, area, perimeter, holes, shape
equivalences, moments…

50
Locate Objects

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
LOCATING PARTS:
PATTERN MATCHING
Introduction to Matching
• Locates regions of a grayscale image that match a
predefined template
– Calculate a score for each matching region
– Score indicates quality of match
• Returns the XY coordinates, rotation angle and scale
for each match

53
Applications
• Presence Detection
• Counting
• Locating
• Inspection

54
How It Works
Two step process:
• Step 1: Learn Template
– Extract information useful for uniquely characterizing the
pattern
– Organize information to facilitate faster search of the
pattern in the image
• Step 2: Match
– Use the information present in the template to locate
regions in the target image
– Emphasis is on search methods that quickly and accurately
locate matched regions

55
Pattern Matching Methods
• Different ways to perform pattern matching based on
the information extracted from the template
• Common methods:
– Correlation Pattern Matching
• Relies on the grayscale information in the image for
matching
– Geometric Pattern Matching
• Relies on edges and geometric features in the image for
matching
–…

56
Correlation Pattern Matching
• Assumes grayscale information is present in the image
• Directly uses the underlying grayscale information in the image for matching
• Grayscale values in the pattern are matched to regions in the image using normalized cross-correlation
• Score ranges from 0 to 100 or 0 to 1000
• A score threshold allows imperfect matches to be accepted
[Figure: Template (Pattern)]

57
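Normalized cross-correlation at a single offset, plus an exhaustive search, can be sketched as follows (NumPy assumed; real implementations use pyramid search and other accelerations rather than this brute force, and rescale the score to 0–100 or 0–1000):

```python
import numpy as np

def ncc_score(region, template):
    """Normalized cross-correlation between an image region and a template
    of the same shape. Returns a value in [-1, 1]; near 1 means a match.
    Mean subtraction and normalization give tolerance to uniform light changes."""
    r = region.astype(np.float64) - region.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((r ** 2).sum() * (t ** 2).sum())
    return float((r * t).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Exhaustive search: slide the template and keep the best-scoring offset."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc_score(image[i:i + th, j:j + tw], template)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best
```

The normalization step is what makes the method robust to uniform lighting changes — and the lack of any geometric model is why it breaks down under occlusion and scale changes.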
Correlation Pattern Matching
• When to use:
– Template primarily characterized by grayscale information
– Matching under uniform light changes
– Little occlusion and small scale changes in image
– Good for the general case

Good template Bad template

58
Correlation Pattern Matching
• When NOT to use
correlation-based pattern
matching:
– Non-uniform lighting

– Occlusion more than 10%

– Scale changes

59
Geometric Pattern Matching (GPM)
• Matching tool you can use to locate parts that
contain distinct edge information
• Not useful when template is predominantly defined
by texture.

60
GPM is Tolerant to…
• Occlusion
• Scale changes
• Non-uniform lighting
• Background changes

61
GPM – Feature-based
Template Image and Target Image → Extract Curves → Extract Features (e.g., circles, parallel lines) → Match Features
62
Feature Comparison
• Template contains texture-like information: Correlation yes, Geometric no
• Template contains geometric information: Correlation yes, Geometric yes
• Find multiple match locations: both yes
• Rotation: both yes
• Scale: Geometric only
• Occlusion: Geometric only
• Matching under non-uniform lighting: Geometric only
• Sub-pixel match locations: both yes

63
LOCATING PARTS:
COORDINATE SYSTEMS
Region of Interest (ROI)
• Region of Interest (ROI)
– a.k.a. Search Area
– A portion of the image upon which an
image processing step may be
performed.
– The object under inspection must
always appear inside the defined ROI
in order to extract measurements
from that ROI.
• ROIs need to be repositioned when
the location of the part varies

65
Coordinate System/Fixture
• Defined by a reference point (origin) and angle within the image, or by the
lines that make up its axes
• Allows you to define search areas that can move around the image with
the object you are inspecting
• Usually based on a characteristic feature of the object under inspection
– Use pattern match, edge detection or geometry tools to locate features
– Use features to establish coordinate system

66
Coordinate Systems – Set Up
1) Define a reference coordinate system
– Locate an easy-to-find feature in your reference image. The feature must be stable from image to image.
– Set a coordinate system based on the feature location and orientation.

2) Set up measurement ROIs relative to the new location/orientation of the coordinate system
– Acquire a new image
– Locate the feature's new location and orientation
– Reposition measurement ROIs
67
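Step 2 — repositioning measurement ROIs once the feature's new location and orientation are known — can be sketched as a rotation plus translation of the ROI corner points (NumPy assumed; points given as (x, y) row vectors):

```python
import numpy as np

def reposition_roi(roi_corners, ref_origin, new_origin, angle_deg):
    """Move ROI corner points, defined relative to the reference coordinate
    system's origin, to the feature's new location and orientation."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Express corners relative to the reference origin, rotate, then translate
    rel = np.asarray(roi_corners, dtype=np.float64) - ref_origin
    return rel @ rot.T + new_origin
```

With this in place, every measurement ROI in the inspection follows the part automatically from image to image.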
Measure Objects

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
Edge Detection Overview
• Process of detecting transitions in an image
• One of the most commonly used machine vision tools
• Attractive because:
– Simple to understand and use
– Localized processing – fast
– Applicable to many applications
– Tolerant to illumination changes
What is an Edge?
• Contrast: change in pixel value across the edge
• Width: size of the region bordering the edge over which the mean pixel value is computed
• Steepness: size of the transition
• Polarity: direction of the transition (rising or falling)
[Figure: edge profile annotated with contrast, edge location, width, and steepness]
1D Edge Detection
Detect edge points along a line. Basic operation:
1) Get pixel values along the line
2) Compute gradient information
3) Find peaks and valleys (edge locations) based on contrast, width, steepness…
4) Select edge(s) based on:
– Order: first, last, first & last
– Polarity: rising, falling
– Score: best edge
[Figure: pixel values and gradient values vs. position along the line (pixels)]
71
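The four steps can be sketched as follows (NumPy assumed; the contrast threshold and polarity handling are simplified illustrations of the width/steepness criteria):

```python
import numpy as np

def detect_edges_1d(profile, min_contrast=20, polarity=None):
    """1D edge detection sketch along a line profile.
    Computes a simple gradient and returns positions of gradient extrema
    whose magnitude exceeds `min_contrast`.
    polarity: 'rising', 'falling', or None for both."""
    profile = np.asarray(profile, dtype=np.float64)
    gradient = np.diff(profile)              # step 2: gradient information
    edges = []
    for k in range(1, len(gradient) - 1):
        g = gradient[k]
        if abs(g) < min_contrast:
            continue
        # step 3: keep local extrema of the gradient (peaks and valleys)
        if abs(g) >= abs(gradient[k - 1]) and abs(g) >= abs(gradient[k + 1]):
            if polarity == 'rising' and g < 0:
                continue
            if polarity == 'falling' and g > 0:
                continue
            edges.append(k)
    return edges
```

Step 4's order and score selections are then just list operations on the returned positions (first, last, or the position with the strongest gradient).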
Subpixel Accuracy
Subpixel location of an edge can be computed using parabolic interpolation of the gradient values.
[Figure: parabolic fit through gradient samples near the peak; the vertex gives the subpixel edge location]

In practice: prefer increasing system resolution.

72
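The parabolic fit reduces to a closed form on the three gradient samples around the detected peak:

```python
def subpixel_peak(g_prev, g_peak, g_next):
    """Parabolic interpolation: fit a parabola through three gradient samples
    at integer positions -1, 0, +1 and return the fractional offset of its
    vertex relative to the middle (peak) sample."""
    denom = g_prev - 2.0 * g_peak + g_next
    if denom == 0:
        return 0.0  # flat neighborhood: no refinement possible
    return 0.5 * (g_prev - g_next) / denom
```

Adding the returned offset to the integer peak position gives the subpixel edge location; for symmetric samples the offset is zero, as expected.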
Edge Detector Tools
• Several high-level edge tools are built on the single-edge detectors
• Rake:
– Used to find multiple edges and fit a
shape through them
• Configurable search directions,
filtering options and sub-sampling
ratios.

73
Straight Edge (Line) Detection
• Detect straight lines in an image
– Extension of 1D edge detection
• Straight edge detection options:
– Rake-based
– Projection-based
– Hough transform-based
[Figures: rake-based find straight edge; projection-based find straight edge; locating multiple straight edges]


74
Edge Detection Applications
• Detect
Features
• Alignment
• Gauging
• Inspection

75
Application: Inspecting Parts

Locating part using find straight edge Check for remnant plastic using intensity measurement

76
Check liquid level using find edge Check tips using pattern matching
Identify Parts

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
Identify Parts

• 1D and 2D Codes
• Marking methods
• Reading Text
• Examples

79
1D Codes
• Applications using 1D bar codes have been around for
over 40 years
• Barcode data is an index into a large central data storage
• Code is easily read by laser scanners
• Low data capacity in large footprint

EAN 13 Code 3 of 9 Code 128

80
2D Codes
• Usually not an index into a database
• Camera-based vision systems are preferred reading
method
• High data capacity in small footprint

PDF 417

Data Matrix QR Code

81
1D vs. 2D
1D Codes: Low data capacity; Index into large database; Large footprint; Redundancy in Y dimension; Readable by laser scanner; Requires as much as 80% contrast
2D Codes: High data capacity; Self-contained data; Small footprint; Error correction capability; Requires camera-based reader; Can be read in low contrast

82
Optical Character Recognition (OCR)

83
OCR/OCV
OCR: Optical Character Recognition:
• Reads (printed) characters

Typical steps:
• Region of Interest around lines of text
• Threshold
• Character Segmentation
• Compare to Library (classification)
• Character is learned or recognized

Optical Character Verification (OCV):


• Compares (verifies) the recognized characters against a golden reference

84
Class Organization

Enhance: Filter noise or unwanted features; Remove distortion; Calibrate images
Check: Measure intensity; Create particles; Analyze particles
Locate: Match patterns; Match geometry; Set up coordinate systems
Measure: Detect edges; Measure distance; Calculate geometry
Identify: Read text (OCR); Read 1D barcodes; Read 2D codes
Contact Information

Nicolas Vazquez
Principal Software Engineer
National Instruments
11500 N. Mopac Expwy
Austin, Texas 78759
USA
Phone: +1 512-683-8494
Email: [email protected]
www.ni.com
