Computer Vision CH2

1. Digital image processing involves acquiring, enhancing, analyzing and compressing digital images using computer algorithms. 2. The basic steps are image acquisition, enhancement, restoration, segmentation, representation and description, analysis, and synthesis and compression. 3. Digital image processing has applications in medical imaging, remote sensing, computer vision, and multimedia.

Chapter 2

Digital Image Fundamentals

2.1. Basic Concept of Image
• Digital Image Processing means processing a digital image by means of a digital
computer.
• We can also say that it is the use of computer algorithms either to obtain an
enhanced image or to extract some useful information from it.
• Digital image processing is the use of algorithms and mathematical models to
process and analyze digital images.
• The goal of digital image processing is to enhance the quality of images, extract
meaningful information from images, and automate image-based tasks.

The basic steps involved in digital image processing are:
1. Image acquisition: involves capturing an image using a digital camera or
scanner, or importing an existing image into a computer.
2. Image enhancement: involves improving the visual quality of an image, such
as increasing contrast, reducing noise, and removing artifacts.
3. Image restoration: involves removing degradation from an image, such as
blurring, noise, and distortion.
4. Image segmentation: This involves dividing an image into regions or
segments, each of which corresponds to a specific object or feature in the
image.
5. Image representation and description: This involves representing an image
in a way that can be analyzed and manipulated by a computer, and describing
the features of an image in a compact and meaningful way.

6. Image analysis: This involves using algorithms and mathematical models to
extract information from an image, such as recognizing objects, detecting
patterns, and quantifying features.
7. Image synthesis and compression: This involves generating new images or
compressing existing images to reduce storage and transmission requirements.
Digital image processing is widely used in a variety of applications, including
medical imaging, remote sensing, computer vision, and multimedia.

Image processing mainly includes the following steps:

1. Importing the image via image acquisition tools;
2. Analyzing and manipulating the image;
3. Output, in which the result can be an altered image or a report based on
analyzing that image.
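The three steps above can be sketched in Python. The tiny hard-coded 3×3 image, the brightness offset, and the bright-pixel report below are hypothetical stand-ins for a real acquisition device and analysis task:

```python
# Minimal sketch of the import -> analyze/manipulate -> output pipeline.

def acquire_image():
    """Step 1: import the image (a hard-coded 3x3 grayscale stand-in)."""
    return [[10, 50, 200],
            [30, 120, 220],
            [5, 90, 250]]

def enhance(image, offset=40):
    """Step 2: manipulate the image - a brightness shift, clipped to 255."""
    return [[min(255, p + offset) for p in row] for row in image]

def report(image, threshold=128):
    """Step 3: output - a report counting pixels brighter than a threshold."""
    bright = sum(p > threshold for row in image for p in row)
    return {"bright_pixels": bright, "total": sum(len(row) for row in image)}

print(report(enhance(acquire_image())))  # -> {'bright_pixels': 5, 'total': 9}
```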
What is an image?
• An image is defined as a two-dimensional function, F(x, y), where x and y are
spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is
called the intensity of that image at that point.
• When x, y, and amplitude values of F are finite, we call it a digital image.
• In other words, an image can be defined by a two-dimensional array specifically
arranged in rows and columns.
• A digital image is composed of a finite number of elements, each of which has a
particular value at a particular location.
• These elements are referred to as picture elements, image elements, or pixels.
• Pixel is the term most widely used to denote the elements of a digital image.
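As a concrete (made-up) illustration, a small digital image can be written as a finite two-dimensional function in Python, where F(x, y) returns the intensity at discrete coordinates (x, y):

```python
# A digital image as a finite 2D function F(x, y): x and y are discrete
# spatial coordinates, and the stored value is the intensity at that point.
image = [[0, 64, 128],
         [32, 96, 160],
         [64, 128, 255]]

def F(x, y):
    """Intensity of the image at spatial coordinates (x, y)."""
    return image[x][y]

print(F(0, 0))   # intensity at the origin -> 0
print(F(2, 2))   # intensity at the last pixel -> 255
```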

Types of an image

1. BINARY IMAGE– The binary image, as its name suggests, contains only two
pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This
image is also known as a monochrome image.
2. BLACK AND WHITE IMAGE– An image that consists of only black and
white pixels is called a black and white image.
3. 8-bit COLOR FORMAT– It is the most common image format. It has 256
different shades and is commonly known as a grayscale image. In this
format, 0 stands for black, 255 stands for white, and 127 stands for gray.
4. 16-bit COLOR FORMAT– It is a color image format with 65,536 distinct
values. It is also known as High Color Format. In this format, the
distribution of values differs from that of a grayscale image.
• A 16-bit format is often divided into three further channels: Red,
Green, and Blue, the familiar RGB format.
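The level counts quoted above follow directly from the bit depth (2 levels for 1 bit, 256 for 8 bits, 65,536 for 16 bits), and a binary image can be derived from a grayscale one by thresholding. A small sketch with made-up pixel values:

```python
# Distinct intensity levels for a given bit depth: 2 ** bits.
for bits in (1, 8, 16):
    print(f"{bits}-bit format: {2 ** bits} levels")
# -> 2 (binary), 256 (grayscale), 65536 (high color)

def to_binary(gray_row, threshold=127):
    """Threshold 8-bit grayscale values into a binary (0/1) row."""
    return [1 if p > threshold else 0 for p in gray_row]

print(to_binary([0, 100, 127, 128, 255]))  # -> [0, 0, 0, 1, 1]
```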
Fig.2.1: Fundamental Phases in Digital Image Processing

Fundamental Phases in Digital Image Processing
1. Image acquisition is the first process shown in the above figure.
• Generally, the image acquisition stage involves preprocessing, such as
scaling.
2. Image enhancement is among the simplest and most appealing areas of
digital image processing.
• Basically, the idea behind enhancement techniques is to bring out detail that
is obscured, or simply to highlight certain features of interest in an image.
• A familiar example of enhancement is when we increase the contrast of an
image because "it looks better." It is important to keep in mind that
enhancement is a very subjective area of image processing.
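A familiar enhancement of this kind is contrast stretching. The sketch below linearly maps a row of bunched-up (hypothetical) intensities onto the full 0-255 range:

```python
# Contrast stretching: linearly map the data's actual intensity range
# [lo, hi] onto the full displayable range [0, 255].
def stretch_contrast(pixels):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:              # flat data: nothing to stretch
        return pixels[:]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# A low-contrast row (all values bunched between 100 and 150)
print(stretch_contrast([100, 110, 125, 140, 150]))  # -> [0, 51, 128, 204, 255]
```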

3. Image restoration is an area that also deals with improving the appearance of an
image.
• However, unlike enhancement, which is subjective, image restoration is objective,
in the sense that restoration techniques tend to be based on mathematical or
probabilistic models of image degradation.
• Enhancement, on the other hand, is based on human subjective preferences
regarding what constitutes a “good” enhancement result.
4. Color image processing is an area that has been gaining in importance because of
the significant increase in the use of digital images over the Internet.
• Color is also used as the basis for extracting features of interest in an image.
5. Wavelets and multiresolution processing are the foundation for representing
images in various degrees of resolution.
• In particular, this material is used in this course for image data compression and for
pyramidal representation, in which images are subdivided successively into smaller
regions.
6. Compression, as the name implies, deals with techniques for reducing the storage
required to save an image, or the bandwidth required to transmit it.
7. Morphological processing deals with tools for extracting image components that
are useful in the representation and description of shape.
8. Segmentation procedures partition an image into its constituent parts or objects.
• In general, autonomous segmentation is one of the most difficult tasks in digital
image processing.
• A rugged segmentation procedure brings the process a long way toward successful
solution of imaging problems that require objects to be identified individually.
• On the other hand, weak or erratic segmentation algorithms almost always
guarantee eventual failure.
• In general, the more accurate the segmentation, the more likely recognition is to
succeed.

9. Representation and description almost always follow the output of a
segmentation stage, which usually is raw pixel data, constituting either the
boundary of a region (i.e., the set of pixels separating one image region from
another) or all the points in the region itself.
• In either case, converting the data to a form suitable for computer processing is necessary.
• The first decision that must be made is whether the data should be represented as a
boundary or as a complete region.
• Boundary representation is appropriate when the focus is on external shape
characteristics, such as corners and inflections.
• Regional representation is appropriate when the focus is on internal properties, such as
texture or skeletal shape.
• In some applications, these representations complement each other. Choosing a
representation is only part of the solution for transforming raw data into a form suitable
for subsequent computer processing. A method must also be specified for describing the
data so that features of interest are highlighted.
• Description, also called feature selection, deals with extracting attributes
that result in some quantitative information of interest or are basic for
differentiating one class of objects from another.
10. Recognition is the process that assigns a label (e.g., "vehicle") to an object
based on its descriptors.

The interaction between the knowledge base and the processing
modules in Fig. 2.1.
• Knowledge about a problem domain is coded into an image processing system
in the form of a knowledge database.
• This knowledge may be as simple as detailing regions of an image where the
information of interest is known to be located, thus limiting the search that has
to be conducted in seeking that information.
• The knowledge base also can be quite complex, such as an interrelated list of all
major possible defects in a materials inspection problem or an image database
containing high-resolution satellite images of a region in connection with
change-detection applications.

Cont’d
• In addition to guiding the operation of each processing module, the knowledge
base also controls the interaction between modules.
• This distinction is made in Fig. 2.1 by the use of double headed arrows between
the processing modules and the knowledge base, as opposed to single-headed
arrows linking the processing modules.
• In general, however, as the complexity of an image processing task increases, so
does the number of processes required to solve the problem.

Overlapping Fields with Image Processing

• According to block 1, if the input is an image and we get an image as the output, then it is termed Digital Image Processing.
• According to block 2, if the input is an image and we get some kind of information or description as the output, then it is termed Computer Vision.
• According to block 3, if the input is some description or code and we get an image as the output, then it is termed Computer Graphics.
• According to block 4, if the input is a description, some keywords, or some code and we get a description or keywords as the output, then it is termed Artificial Intelligence.
Components of Image Processing System
• Although large-scale image processing systems are still sold for massive imaging
applications, such as processing of satellite images, the trend continues toward
miniaturizing and blending general-purpose small computers with specialized
image processing hardware.
• Figure 2.2 shows the basic components comprising a typical general-purpose
system used for digital image processing.
• The function of each component is discussed in the following paragraphs,
starting with image sensing.

Figure 2.2 Components of a general purpose image processing system

• With reference to sensing, two elements are required to acquire digital images.
• The first is a physical device that is sensitive to the energy radiated by the object
we wish to image.
• The second, called a digitizer, is a device for converting the output of the
physical sensing device into digital form.
• For instance, in a digital video camera, the sensors produce an electrical output
proportional to light intensity.
• The digitizer converts these outputs to digital data.

• Specialized image processing hardware usually consists of the digitizer just
mentioned, plus hardware that performs other primitive operations, such as an
arithmetic logic unit (ALU), which performs arithmetic and logical operations in
parallel on entire images.
• One example of how an ALU is used is in averaging images as quickly as they
are digitized, for the purpose of noise reduction.
• This type of hardware sometimes is called a front-end subsystem, and its most
distinguishing characteristic is speed.
• In other words, this unit performs functions that require fast data throughputs
(e.g., digitizing and averaging video images at 30 frames/s) that the typical main
computer cannot handle.
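The frame averaging described above can be sketched in pure Python. The three "noisy frames" below are made-up values for the same 2×2 scene:

```python
# Averaging several frames of the same scene pixel-wise: random noise
# tends to cancel while the underlying signal is preserved.
def average_frames(frames):
    """Pixel-wise mean of equally sized grayscale frames, rounded."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[round(sum(f[r][c] for f in frames) / n) for c in range(cols)]
            for r in range(rows)]

# Three hypothetical noisy captures of the same 2x2 scene
frames = [[[100, 198], [52, 250]],
          [[104, 202], [48, 255]],
          [[96, 200], [50, 245]]]
print(average_frames(frames))  # -> [[100, 200], [50, 250]]
```

In a real system this runs in front-end hardware at video rate; the logic, however, is exactly this pixel-wise mean.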

• The computer in an image processing system is a general-purpose computer and
can range from a PC to a supercomputer.
• In dedicated applications, sometimes specially designed computers are used to
achieve a required level of performance, but our interest here is on general-
purpose image processing systems.
• In these systems, almost any well-equipped PC-type machine is suitable for
offline image processing tasks.

• Software for image processing consists of specialized modules that perform
specific tasks. A well-designed package also includes the capability for the user
to write code that, as a minimum, utilizes the specialized modules. More
sophisticated software packages allow the integration of those modules and
general-purpose software commands from at least one computer language.
• Mass storage capability is a must in image processing applications.
• An image of size 1024 X 1024 pixels, in which the intensity of each pixel is an
8-bit quantity, requires one megabyte of storage space if the image is not
compressed.
• When dealing with thousands, or even millions, of images, providing adequate
storage in an image processing system can be a challenge.
• Digital storage for image processing applications falls into three principal
categories: (1) short-term storage for use during processing, (2) on-line storage
for relatively fast recall, and (3) archival storage, characterized by infrequent
access. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes),
Mbytes (one million bytes), Gbytes (meaning giga, or one billion, bytes), and
Tbytes (meaning tera, or one trillion, bytes).
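The one-megabyte figure above can be checked with a little arithmetic (using the binary convention of 2^20 bytes per Mbyte):

```python
# Uncompressed image size: rows * columns * bytes per pixel.
def storage_bytes(rows, cols, bits_per_pixel):
    return rows * cols * bits_per_pixel // 8

size = storage_bytes(1024, 1024, 8)
print(size, "bytes =", size / 2**20, "Mbyte")  # -> 1048576 bytes = 1.0 Mbyte
```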
• One method of providing short-term storage is computer memory. Another is by
specialized boards, called frame buffers, that store one or more images and can
be accessed rapidly, usually at video rates (e.g., at 30 complete images per
second).
• The latter method allows virtually instantaneous image zoom, as well as scroll
(vertical shifts) and pan (horizontal shifts). Frame buffers usually are housed in
the specialized image processing hardware unit shown in fig. 2.2. Online storage
generally takes the form of magnetic disks or optical-media storage. The key
factor characterizing on-line storage is frequent access to the stored data.
Finally, archival storage is characterized by massive storage requirements but
infrequent need for access. Magnetic tapes and optical disks housed in
"jukeboxes" are the usual media for archival applications.

• Image displays in use today are mainly color (preferably flat screen) TV
monitors. Monitors are driven by the outputs of image and graphics display
cards that are an integral part of the computer system. Seldom are there
requirements for image display applications that cannot be met by display cards
available commercially as part of the computer system. In some cases, it is
necessary to have stereo displays, and these are implemented in the form of
headgear containing two small displays embedded in goggles worn by the user.
• Hardcopy devices for recording images include laser printers, film cameras,
heat-sensitive devices, inkjet units, and digital units, such as optical and CD-
ROM disks. Film provides the highest possible resolution, but paper is the
obvious medium of choice for written material. For presentations, images are
displayed on film transparencies or in a digital medium if image projection
equipment is used. The latter approach is gaining acceptance as the standard for
image presentations.
• Networking is almost a default function in any computer system in use today.
Because of the large amount of data inherent in image processing applications,
the key consideration in image transmission is bandwidth. In dedicated
networks, this typically is not a problem, but communications with remote sites
via the Internet are not always as efficient. Fortunately, this situation is
improving quickly as a result of optical fiber and other broadband technologies.

2.2. Digital Image Representation
• We will use two principal ways to represent digital images. Assume that an
image f(x, y) is sampled so that the resulting digital image has M rows and N
columns.
• The values of the coordinates (x, y) now become discrete quantities.
• For notational clarity and convenience, we shall use integer values for these
discrete coordinates.
• Thus, the values of the coordinates at the origin are (x, y) = (0, 0). The next
coordinate values along the first row of the image are represented as (x, y) = (0,
1).
• It is important to keep in mind that the notation (0, 1) is used to signify the
second sample along the first row. It does not mean that these are the actual
values of physical coordinates when the image was sampled.
• Figure 2.3 shows the coordinate convention used.
Fig 2.3 Coordinate convention used to represent digital images
• The notation introduced in the preceding paragraph allows us to write the
complete M x N digital image in the following compact matrix form:

  f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
              f(1, 0)      f(1, 1)      ...  f(1, N-1)
              ...          ...               ...
              f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

• The right side of this equation is by definition a digital image. Each element
of this matrix array is called an image element, picture element, pixel, or pel.
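In code, the same M x N matrix view, with the origin at (x, y) = (0, 0) as in the convention above, might look like this (the values are arbitrary):

```python
# An M x N digital image as a matrix: f[x][y] is the sample in row x,
# column y, with the origin at (x, y) = (0, 0).
M, N = 2, 3
f = [[10, 20, 30],   # row x = 0: samples (0,0), (0,1), (0,2)
     [40, 50, 60]]   # row x = 1: samples (1,0), (1,1), (1,2)

print(f[0][0])  # value at the origin -> 10
print(f[0][1])  # second sample along the first row, (x, y) = (0, 1) -> 20
```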

2.3. Image Sampling and Quantization
• We can capture a picture using the sensors in imaging tools, such as a camera.
The sensors produce an image in the form of an analog signal, which is then
digitized to produce a digital image.
• A digital image is a two-dimensional function f(x, y) where x and y indicate the
position in an image. The f(x, y) function holds a discrete value called the
intensity value. Let’s see an example of the visual representation of spatial
coordinates and amplitude of a digital image:

• The digital image contains a collection of elements called pixels or picture
elements, each having its own intensity value.
• In order to enhance an image and use it for some applications, we apply various
operations that are part of the image processing method.
• Sampling and quantization operations are part of the image processing method
that converts continuous voltage signals obtained from sensors into digital
images.
• A digital image f(x, y) consists of coordinates x and y. Additionally, it contains
the function’s amplitude. Quantization refers to digitizing the amplitudes, while
sampling refers to digitizing the coordinate values.
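A minimal sketch of the two operations on a made-up one-dimensional signal: sampling keeps every k-th coordinate, and quantization snaps each amplitude to one of a finite set of levels.

```python
# Sampling digitizes the coordinates; quantization digitizes the amplitudes.
def sample(signal, step):
    """Keep every step-th value of the signal (coordinate digitization)."""
    return signal[::step]

def quantize(values, levels=4, vmax=1.0):
    """Map amplitudes in [0, vmax] onto integer levels 0..levels-1."""
    return [min(levels - 1, int(v / vmax * levels)) for v in values]

analog = [0.05, 0.20, 0.40, 0.55, 0.70, 0.85, 0.95, 1.00]
sampled = sample(analog, 2)   # -> [0.05, 0.40, 0.70, 0.95]
digital = quantize(sampled)   # -> [0, 1, 2, 3]
print(sampled, digital)
```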
• The sensors placed in the image acquisition device capture the projected
continuous image. Later, this is digitized to form a digital image suitable for
real-time applications. For example, let's see the difference between a
continuous and a digital image:
Continuous vs. digital image:

• Digital images are basically of three types: monochrome or binary images,
grayscale images, and color images.
• The pixel value of a binary image at a specific location (x, y) holds either the
value 0 for black or 1 for white.
• Grayscale images have intensity values ranging from 0 to 255, where 0 is black,
gradually fading to 255, which is white. Additionally, color images such as RGB
images contain three channels: red, green, and blue. Each channel in an
RGB image has intensity values ranging from 0 to 255.

• Let’s see how each image looks with the help of 4×4 samples from binary,
grayscale, and RGB image:
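The 4×4 samples below are illustrative, made-up values standing in for the original figure:

```python
# Hypothetical 4x4 samples of the three image types.
binary = [[0, 1, 1, 0],
          [1, 0, 0, 1],
          [1, 0, 0, 1],
          [0, 1, 1, 0]]        # each pixel is 0 (black) or 1 (white)

gray = [[0, 85, 170, 255],
        [85, 170, 255, 0],
        [170, 255, 0, 85],
        [255, 0, 85, 170]]     # each pixel is an intensity in 0..255

# An RGB image stores one (red, green, blue) triple per pixel.
rgb = [[(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
       for _ in range(4)]

assert all(p in (0, 1) for row in binary for p in row)
assert all(0 <= p <= 255 for row in gray for p in row)
assert all(len(px) == 3 for row in rgb for px in row)
```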

• Sampling and quantization result in a matrix of rows and columns consisting of
discrete intensity values.
• Further, let's say a photo is 250 x 350. Here, the width is 250, and the image
height is 350.
• This implies that the digital image has 250 columns and 350 rows, respectively.
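The width/height-to-columns/rows mapping can be checked directly (the 250 x 350 photo here is just the example above):

```python
# A 250 x 350 photo: width 250, height 350, so the pixel matrix has
# 350 rows and 250 columns.
width, height = 250, 350
image = [[0] * width for _ in range(height)]

rows, cols = len(image), len(image[0])
print(rows, cols)  # -> 350 250
```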

Sampling and Quantization Applications
• Sampling and quantization allow images to be used in medicine for accurate
diagnosis: visualizing human body parts with the help of X-rays, MRI, and
other scans makes computer-aided diagnosis easier.
• Furthermore, we can use sampling and quantization in remote sensing, which
plays a major role in studying objects in space at lower cost.
• Moreover, the digitization of images made the computer vision field possible.
Machine learning and deep learning algorithms have proven effective with the
help of digital images.

Mathematical Tools used in Digital Image Processing
