DSPA Lab Manual
Laboratory Plan
Exp. No. | Title of Experiment | Session
EXPERIMENT NO.1
Aim: Write a C++ program with a GUI to capture images using a remotely placed camera and read an uncompressed TIFF image to perform the following functions (menu driven). Use of overloading and polymorphism is expected. Image Frame 1 is used for displaying the original image and Image Frame 2 is used for displaying the result of the action performed.
Requirements:
H/W Requirements:
• 8 GB RAM
• 500GB/1TB HDD
• Web Camera
S/W Requirements:
Theory:
• TIFF (Tagged Image File Format)
TIFF (Tagged Image File Format) is a common format for exchanging raster graphics (bitmap) images between application programs, including those used for scanner images. A TIFF file can be identified as a file with a ".tiff" or ".tif" file name suffix. The TIFF format was developed in 1986 by an industry committee chaired by the Aldus Corporation (now part of Adobe Systems). Microsoft and Hewlett-Packard were among the contributors to the format. One of the most common graphic image formats, TIFF files are commonly used in desktop publishing, faxing, 3-D applications, and medical imaging applications.
TIFF files can be in any of several classes, including gray scale, color palette, or RGB full color,
and can include files with JPEG, LZW, or CCITT Group 4 standard run-length image
compression.
Compression
Baseline TIFF readers must handle the following three compression schemes:[8]
• No compression
• CCITT Group 3 1-Dimensional Modified Huffman run-length encoding
• PackBits compression
Many TIFF readers support tags additional to those in Baseline TIFF, but not every reader supports every extension. As a consequence, Baseline TIFF features became the lowest common
denominator for TIFF format. Baseline TIFF features are extended in TIFF Extensions (defined
in the TIFF 6.0 Part 2 specification) but extensions can also be defined in private tags.
The TIFF Extensions are formally known as TIFF 6.0, Part 2: TIFF Extensions. Here are some
examples of TIFF extensions defined in TIFF 6.0 specification:
Compression
• JPEG-based compression (TIFF compression scheme 7) uses the DCT (Discrete Cosine Transform) introduced in 1974 by N. Ahmed, T. Natarajan and K. R. Rao; see Reference 1 in Discrete cosine transform. For more details, see the Adobe document.
Image types
• CMYK Images
• YCbCr Images
• HalftoneHints
• Tiled Images
Many TIFF images in use contain only uncompressed 32-bit CMYK or 24-bit RGB images.
Image Trees
A baseline TIFF file can contain a sequence of images (IFDs). Typically, all the images are related but represent different data, such as the pages of a document. In order to explicitly support multiple views of the same data, the SubIFD tag was introduced.[17] This allows the images to be
defined along a tree structure. Each image can have a sequence of children, each child being
itself an image. The typical usage is to provide thumbnails or several versions of an image in
different color spaces.
Other extensions
According to the TIFF 6.0 specification (Introduction), all TIFF files using proposed TIFF extensions that are not approved by Adobe as part of Baseline TIFF (typically for specialized uses of TIFF that do not fall within the domain of publishing or general graphics or picture interchange) should either not be called TIFF files or should be marked in some way so that they will not be confused with mainstream TIFF files.
Private tags
Developers can apply for a block of "private tags" to enable them to include their own
proprietary information inside a TIFF file without causing problems for file interchange. TIFF
readers are required to ignore tags that they do not recognize, and a registered developer's private
tags are guaranteed not to clash with anyone else's tags or with the standard set of tags defined in
the specification.
TIFF Tags numbered 32768 or higher, sometimes called private tags, are reserved for
information meaningful only for some organization or for experiments with a new compression
scheme within TIFF. Upon request, the TIFF administrator (Adobe) will allocate and register one
or more private tags for an organization, to avoid possible conflicts with other organizations.
Organizations and developers are discouraged from choosing their own tag numbers, because
doing so could cause serious compatibility problems. However, if there is little or no chance that
TIFF files will escape a private environment, organizations and developers are encouraged to
consider using TIFF tags in the "reusable" 65000-65535 range. There is no need to contact
Adobe when using numbers in this range.
TIFF Compression Tag
The TIFF Tag 259 (0103 in hexadecimal) stores the information about the compression method. The default value is 1 = no compression.
Most TIFF writers and TIFF readers support only some of the existing TIFF compression schemes. Commonly used values of the Compression tag include 1 (no compression), 2 (CCITT Group 3 Modified Huffman RLE), 5 (LZW), 7 (JPEG), and 32773 (PackBits).
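As a sketch of how the Compression tag is laid out on disk, the snippet below builds a bare-bones little-endian TIFF header plus a one-entry IFD and reads Tag 259 back with the standard struct module. The byte layout follows the TIFF 6.0 header/IFD structure, but the helper names are illustrative and the result is not a complete, viewable image file.

```python
import struct

# Minimal little-endian TIFF: header, then one IFD with a single entry for
# Tag 259 (Compression), type SHORT (3), count 1, value 1 (= no compression).
header = struct.pack("<2sHI", b"II", 42, 8)              # byte order, magic 42, IFD at offset 8
entry = struct.pack("<HHI4s", 259, 3, 1, struct.pack("<HH", 1, 0))
ifd = struct.pack("<H", 1) + entry + struct.pack("<I", 0)  # entry count, entry, next-IFD offset
tiff = header + ifd

def read_compression(data):
    """Return the value of Tag 259 from the first IFD of a little-endian TIFF."""
    ifd_offset = struct.unpack_from("<I", data, 4)[0]
    n_entries = struct.unpack_from("<H", data, ifd_offset)[0]
    for i in range(n_entries):
        off = ifd_offset + 2 + 12 * i                     # each IFD entry is 12 bytes
        tag, typ, count = struct.unpack_from("<HHI", data, off)
        if tag == 259:
            return struct.unpack_from("<H", data, off + 8)[0]  # SHORT stored inline
    return None

compression = read_compression(tiff)
```

A real reader would also honor the "MM" (big-endian) byte order and values stored at offsets rather than inline.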
Overloading
It allows functions in computer languages such as C++ and C# to have the same name but with
different parameters.
For example, rather than having a differently named function to sort each type of array:

Sort_Int(IntArrayType);
Sort_Double(DoubleArrayType);

a single overloaded Sort() can be called with either argument type.
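C++ resolves overloaded names at compile time. Python, the other language used in this manual, has no compile-time overloading, but a comparable effect can be sketched with functools.singledispatch, which picks an implementation by the type of the first argument (the sort_items name here is hypothetical):

```python
from functools import singledispatch

@singledispatch
def sort_items(data):
    # Fallback when no registered overload matches the argument type.
    raise TypeError(f"unsupported type: {type(data).__name__}")

@sort_items.register
def _(data: list):
    return sorted(data)          # one name, list parameter

@sort_items.register
def _(data: tuple):
    return tuple(sorted(data))   # same name, tuple parameter
```

Calling sort_items([3, 1, 2]) and sort_items((3, 1, 2)) dispatches to different bodies under a single name, which is the effect the C++ declarations above achieve at compile time.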
Polymorphism
In programming languages, polymorphism means that some code or operations or objects behave
differently in different contexts.
Typically, when the term polymorphism is used with C++, however, it refers to using virtual
methods, which we'll discuss shortly.
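As a minimal sketch of this idea (the Shape/Square/Circle names are illustrative), the same area() call behaves differently depending on the object's actual class; method overriding in Python plays the role of C++ virtual methods:

```python
import math

class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):              # overrides the base method
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):              # same call, different behaviour
        return math.pi * self.radius ** 2

shapes = [Square(2), Circle(1)]
areas = [s.area() for s in shapes]   # the call site never names the concrete types
```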
Procedure:
Mathematical Model:
1. Let S be a system that describes the capture and processing of an image.
S = {…..}
2. Identify input as I.
S = {I, N, …..}
Ii ∈ I
N = number of images
3. Identify output as O.
S = {I, N, O, …..}
4. Identify process as P.
S = {I, N, O, P, ……}
P = {Ps, Pb, Pc, Pr}
Ps = the process of sharpening the image.
5. Identify success as Su.
The user will capture the image properly and will be able to perform the menu-driven operations on the image.
S = I ∪ N ∪ O ∪ P
Observations:
Results:
• Sharpen the Image
• Blur the Image
Conclusion:
EXPERIMENT NO.2
Objectives: To understand & implement Sampling of a sine wave and reconstructing it using
samples
Aim: Write a C++/Python program to generate a sine wave of programmable frequency and capture samples at a programmable frequency (maximum as per the Nyquist sampling theorem), and reconstruct the sine wave from the collected samples using an ARM Cortex A5/A9. Use an oscilloscope to calculate the signal frequency. Write your observations. Store a data file in SAN (BIGDATA).
Requirements:
H/W Requirements:
• 8 GB RAM
• 500GB/1TB HDD
• ARM Cortex M4/A5
• Oscilloscope
• Signal Generator
S/W Requirements:
Theory:
Sampling
(This section is adapted from Tim Wescott, Wescott Design Services, "What Nyquist Didn't Say, and What to Do About It.")

So what is sampling, and what does it do? Sampling is the process by which continuous-time signals, such as voltages or water levels or altitudes, are turned into discrete-time signals. This is usually done by translating the signal in question into a voltage, then using an analog-to-digital converter (ADC) to turn this continuous, analog signal into a discrete, digital one. The ADC both samples the voltage and converts it to a digital signal.

The sampling process itself is easy to represent mathematically: given a continuous signal x(t) to be sampled, and a sample interval T, the sampled version of x is simply the continuous version of x taken at integer multiples of T:

x[n] = x(nT)

Figure 1 shows the result of sampling a signal. The upper trace is the continuous-time signal, while the lower trace shows the signal after being sampled once per millisecond. You may wonder why the lower trace shows no signal between samples. This is because after sampling there is no signal between samples — all the information that existed between the samples in the original signal is irretrievably lost in the sampling process.
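The relation x[n] = x(nT) can be sketched directly in Python (the function and variable names are illustrative):

```python
import math

def sample(x, T, n_samples):
    """x[n] = x(n*T): evaluate the continuous signal at integer multiples of T."""
    return [x(n * T) for n in range(n_samples)]

# A 50 Hz sine sampled at 1 kHz (T = 1 ms), well above the 100 Hz Nyquist rate.
f, fs = 50.0, 1000.0
samples = sample(lambda t: math.sin(2 * math.pi * f * t), 1.0 / fs, 20)
```

Reconstruction on the target board would interpolate between these values; the list above is all the information the sampled system retains.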
Aliasing
By ignoring anything that goes on between samples, the sampling process throws away information about the original signal. This information loss must be taken into account during system design. Most of the time, when folks are designing systems they are doing their analysis in the frequency domain. When you are doing your design from this point of view you call this effect aliasing, and you can easily express it and model it as a frequency-domain phenomenon.

To understand aliasing, consider a signal that is a pure sinusoid, x(t) = sin(2πft), and look at its sampled version: x[n] = sin(2πf nT) = sin(2π(f + k/T) nT) for any integer k. This means that given a pair of sampled versions of the signals, one of the lower-frequency sinusoid and one of the higher, you will have no way of distinguishing these signals from one another. This ambiguity between two signals of different frequencies (or two components of one signal) is aliasing, and it is happening all the time in the real world, anywhere that a real-world signal is being sampled.

Figure 2 shows an example of aliasing. Two possible input sine waves are shown: one has a frequency of 110 Hz, the other has a frequency of 1110 Hz. Both are sampled at 1000 Hz. The dots show the value of the sine waves at the sampling instants. As indicated by (1), these two possible inputs both result in exactly the same output: after sampling you cannot tell these two signals apart.

It is rare, however, for real-world signals to resemble pure sine waves. In general, real-world continuous-time signals are more complex than simple sine waves. But we can use what we learn about the system's response to pure sine wave input to predict the behavior of a system that is presented with a more complex signal. This is because more complex continuous-time signals can be represented as sums of collections of sine waves at different frequencies and amplitudes. For many systems we can break the signal down into its component parts, analyze the system's response to each part separately, then add these responses back together to get the system's response.

When you break a signal down into its component sine waves, you see that the signal's energy is distributed as a function of frequency. This distribution of a signal's energy over frequency can be shown as a plot of spectral density vs. frequency, such as the solid plot in the center of Figure 3. When you have a signal such as the one mentioned above, and you sample it, aliasing will cause the signal components to be replicated endlessly. These replica signals are the signal's aliases. The spacing between these aliases will be even and equal to the sampling rate. These aliases are indistinguishable from 'real' signals spaced an integer number of sampling rates away: there is no way, once the signal is sampled, to know which parts of it are 'real' and which parts are aliased.

To compound our trouble, any real-world signal will have a power spectrum that's symmetrical around zero frequency, with 'negative' frequency components; after sampling, these components of the original signal will appear at frequencies that are lower than the sample rate. Figure 3 shows this effect. The central frequency-density plot is the signal that's being sampled; the others are the signal's aliases in sampled time. If you sampled this signal as shown, then after sampling the signal energy would appear to "fold back" at 1/2 the sampling rate. This can be used to demonstrate part of the Nyquist-Shannon sampling theorem: if the original signal were band-limited to 1/2 the sampling rate, then after aliasing there would be no overlapping energy, and thus no ambiguity caused by aliasing.
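The 110 Hz / 1110 Hz example from Figure 2 can be checked numerically. Since 1110 = 110 + 1000, the two sinusoids agree at every 1 kHz sampling instant (a small stand-alone sketch):

```python
import math

fs = 1000.0                      # sampling rate in Hz
f_low, f_high = 110.0, 1110.0    # the two candidate inputs from Figure 2

low = [math.sin(2 * math.pi * f_low * n / fs) for n in range(50)]
high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(50)]

# f_high = f_low + fs, so sin(2*pi*f_high*n/fs) = sin(2*pi*f_low*n/fs + 2*pi*n):
identical = all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(low, high))
```

Once sampled, nothing in the data can say which of the two inputs produced it; only an analog anti-alias filter in front of the ADC can.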
ARM® Cortex®
The ARM Cortex-A5 processor is the smallest, lowest-cost, and lowest-power ARMv7 application processor, ideal as a standalone processor within current and future generations of smart wearable devices. It is capable of delivering the internet to the widest possible range of devices, from smart devices like wearables, feature phones and low-cost, entry-level smartphones, to a range of pervasive embedded, consumer and industrial devices.
Overview
The Cortex-A5 processor is the most mature, most configurable, smallest and lowest-power ARMv7-A CPU. It provides a high-value migration path for existing ARM926EJ-S™ and ARM1176JZ-S™ processor designs. It achieves better performance than the ARM1176JZ-S processor, better power and energy efficiency than the ARM926EJ-S, and 100% Cortex-A compatibility.
These processors deliver high-end features to power and cost-sensitive applications, featuring:
• High performance memory system including caches and Memory Management Unit (MMU)
Applications
The Cortex-A5 is designed for applications that require virtual memory management for high-
level operating systems within an extremely low power profile.
The Cortex-A5 is the smallest and lowest-power applications processor, delivering rich functionality to high-performance wearables. As ARM's most energy-efficient ARMv7 applications processor, the Cortex-A5 gets more work done per unit of energy. This corresponds to longer battery life and less heat dissipation in wearable and mobile devices.
• Reduces leakage
Compatibility
The Cortex-A5 processor provides full instruction and feature compatibility with the higher
performance Cortex-A8 and Cortex-A9 processors at one-third of the area and power. The
Cortex-A5 processor also maintains backwards application compatibility with Classic ARM
processors including the ARM926EJ-S, ARM1176JZ-S, and ARM7TDMI®.
Procedure:
• Reconstruct the sine wave from the collected samples using an ARM Cortex A5/A9.
Mathematical Model:
1. Let S be a system that describes the sampling and reconstruction of a sine wave.
S = {…..}
2. Identify input as I.
S = {I, N, …..}
Ii ∈ I
N = number of samples.
3. Identify output as O.
S = {I, N, O, …..}
4. Identify process as P.
S = {I, N, O, P, ……}
P = {Ps, Pb, Pc, Pr}
5. Identify success as Su.
The user will generate the sine wave of the required frequency, sample it, and reconstruct it from the collected samples.
S = I ∪ N ∪ O ∪ P
Observations:
Results:
Conclusion:
EXPERIMENT NO.3
Objectives: To understand & generate a square wave of programmable frequency and also study different functions to generate a pole-zero diagram using multicore programming.
Requirements:
H/W Requirements:
• 8 GB RAM
• 500GB/1TB HDD
• Oscilloscope
• Signal Generator
S/W Requirements:
Theory:
• Pole-Zero Diagrams:
• Stability
Stability:
In signal processing, specifically control theory, BIBO stability is a form
of stability for linear signals and systems that take inputs. BIBO stands for bounded-input,
bounded-output. If a system is BIBO stable, then the output will be bounded for every input to
the system that is bounded.
• A signal is bounded if there is a finite value B > 0 such that the signal magnitude never exceeds B, that is, |x(t)| ≤ B for all t.
The radius of convergence r is a nonnegative real number or ∞ such that the series converges if |z − a| < r and diverges if |z − a| > r.
In other words, the series converges if z is close enough to the center and diverges if it is too far
away. The radius of convergence specifies how close is close enough. On the boundary, that is,
where |z − a| = r, the behavior of the power series may be complicated, and the series may
converge for some values of z and diverge for others. The radius of convergence is infinite if the
series converges for all complex numbers z.
A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O.
A pole-zero plot can represent either a continuous-time (CT) or a discrete-time (DT) system. For a CT system, the plane in which the poles and zeros appear is the s plane of the Laplace transform. In this context, the parameter s represents the complex angular frequency, which is the domain of the CT transfer function. For a DT system, the plane is the z plane, where z represents the domain of the Z-transform.
Continuous-time system
In general, a rational transfer function for a continuous-time LTI system has the form:
Where,
Either M or N or both may be zero, but in real systems, it should be the case that M ≤ N; otherwise the gain would be unbounded at high frequencies.
• If the ROC extends leftward from the pole with the smallest real-part (but not at negative
infinity), then the system is anti-causal.
The ROC is usually chosen to include the imaginary axis since it is important for most
practical systems to have BIBO stability.
Example
and
Notice that these two poles are complex conjugates, which is the necessary and
sufficient condition to have real-valued coefficients in the differential equation
representing the system.
Discrete-time systems
Where,
• If the ROC includes the unit circle, then the system is bounded-input, bounded-output
(BIBO) stable.
• If the ROC extends outward from the pole with the largest (but not infinite) magnitude,
then the system has a right-sided impulse response. If the ROC extends outward from the
pole with the largest magnitude and there is no pole at infinity, then the system is causal.
• If the ROC extends inward from the pole with the smallest (nonzero) magnitude, then the
system is anti-causal.
• The ROC is usually chosen to include the unit circle since it is important for most
practical systems to have BIBO stability.
Example
If and are completely factored, their solution can be easily plotted in the z-plane.
For example, given the following transfer function:
The only (finite) zero is located at: , and the two poles are located at:
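As a worked discrete-time example (the transfer function below is hypothetical, not the one from the text), take H(z) = (z − 1)/(z² − 1.5z + 0.56). Its poles follow from the quadratic formula, and the BIBO test above is just checking they lie inside the unit circle:

```python
import cmath

def quad_roots(a, b, c):
    """Roots of a*z**2 + b*z + c = 0 via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Hypothetical H(z) = (z - 1) / (z**2 - 1.5*z + 0.56): one zero, two poles.
zeros = (1.0,)
poles = quad_roots(1.0, -1.5, 0.56)     # factors as (z - 0.8)(z - 0.7)

# All poles strictly inside the unit circle -> the causal system is BIBO stable.
stable = all(abs(p) < 1.0 for p in poles)
```

On a pole-zero plot the poles at 0.8 and 0.7 would be marked with X and the zero at 1 with O; since the ROC extending outward from |z| = 0.8 includes the unit circle, the causal system is stable.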
• What Is a Multicore?
A multicore is an architecture design that places multiple processors on a single die (computer chip). Each processor is called a core. As chip capacity increased, placing multiple processors on a single chip became practical. These designs are known as Chip Multiprocessors (CMPs) because they allow for single-chip multiprocessing. Multicore is simply a popular name for CMP or single-chip multiprocessors. The concept of single-chip multiprocessing is not new, and chip manufacturers have been exploring the idea of multiple cores on a uniprocessor since the early 1990s.

Recently, the CMP has become the preferred method of improving overall system performance. This is a departure from the approach of increasing the clock frequency or processor speed to achieve gains in overall system performance. Increasing the clock frequency has started to hit its limits in terms of cost-effectiveness. Higher frequency requires more power, making it harder and more expensive to cool the system. This also affects sizing and packaging considerations. So, instead of trying to make the processor faster to gain performance, the response is now just to add more processors. The simple realization that this approach is better has prompted the multicore revolution. Multicore architectures are now center stage in terms of improving overall system performance.

For software developers who are familiar with multiprocessing, multicore development will be familiar. From a logical point of view, there is no real significant difference between programming for multiple processors in separate packages and programming for multiple processors contained in a single package on a single chip. There may be performance differences, however, because the new CMPs are using advances in bus architectures and in connections between processors. In some circumstances, this may cause an application that was originally written for multiple processors to run faster when executed on a CMP. Aside from the potential performance gains, the design and implementation are very similar. We discuss minor differences throughout the book. For developers who are only familiar with sequential programming and single-core development, the multicore approach offers many new software development paradigms.
Multicore Architectures
CMPs come in multiple flavors: two processors (dual core), four processors (quad core), and eight processors (octa-core) configurations. Some configurations are multithreaded; some are not. There are several variations in how cache and memory are approached in the new CMPs. The approaches to processor-to-processor communication vary among different implementations. The CMP implementations from the major chip manufacturers each handle the I/O bus and the Front Side Bus (FSB) differently. Again, most of these differences are not visible when looking strictly at the logical view of an application that is being designed to take advantage of a multicore architecture. Figure 1-1 illustrates three common configurations that support multiprocessing.
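A minimal multicore sketch in Python (names and chunking scheme are illustrative): the standard-library ProcessPoolExecutor farms independent chunks of a sum out to separate worker processes, one per core, exactly the "add more processors" approach described above:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Worker: sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into one chunk per worker and combine the partial results."""
    step = n // workers
    chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Logically identical to a sequential loop; the work just runs on several cores.
    total = parallel_sum_of_squares(10_000)
```

Note that the program's logic is unchanged from the sequential version; only the chunking and the pool are new, which mirrors the point that CMP programming is logically the same as classic multiprocessing.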
• Square wave of programmable frequency
Square wave
A square wave is a non-sinusoidal periodic waveform (which can be represented as an infinite
summation of sinusoidal waves), in which the amplitude alternates at a steady frequency between
fixed minimum and maximum values, with the same duration at minimum and maximum. The
transition between minimum to maximum is instantaneous for an ideal square wave; this is not
realizable in physical systems. Square waves are often encountered in electronics and signal
processing. Its stochastic counterpart is a two-state trajectory. A similar but not necessarily
symmetrical wave, with arbitrary durations at minimum and maximum, is called a rectangular
wave (of which the square wave is a special case).
Figure 2. Square waves
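The statement that a square wave is an infinite summation of sinusoids can be sketched by Fourier synthesis: summing odd harmonics sin(2πkft)/k scaled by 4/π approaches +1 and −1 on alternate half-cycles (the frequency value and names below are illustrative):

```python
import math

def square_partial_sum(t, freq, n_harmonics):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*freq*t) / k."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):   # odd harmonics 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * freq * t) / k
    return 4.0 / math.pi * total

f = 50.0                                       # a programmable frequency, in Hz
approx = square_partial_sum(0.25 / f, f, 100)  # centre of the positive half-cycle
```

With 100 odd harmonics the value at the centre of the positive half-cycle is within about half a percent of the ideal +1; the residual ripple near the transitions is the Gibbs phenomenon, reflecting that an ideal instantaneous transition is not physically realizable.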
Mathematical Model:
1. Let S be a system that describes the generation of a square wave.
S = {…..}
2. Identify input as I.
S = {I, …..}
Ii ∈ I
3. Identify output as O.
S = {I, O, …..}
4. Identify process as P.
S = {I, O, P, ……}
P = {Ps, Pb, Pc, Pr}
5. Identify success as Su.
The user will generate the square wave of the required frequency and its pole-zero diagram.
S = I ∪ O ∪ P
Observations:
Results:
Conclusion:
Experiment No. 4
Title: To capture a signal and perform various operations for analyzing it.
H/W Requirements:
• 8 GB RAM
• 500GB/1TB HDD
• Signal Generator
S/W Requirements:
Theory:
• ARM
Conceptually the Cortex-M4 is a Cortex-M3 plus DSP Instructions, and optional floating-point
unit (FPU). If a core contains an FPU, it is known as a Cortex-M4F, otherwise it is a Cortex-M4.
Key features of the Cortex-M4 core are:
• ARMv7E-M architecture
• Instruction sets
• Thumb (entire)
• Thumb-2 (entire)
• 1-cycle 32-bit hardware multiply, 2-12 cycle 32-bit hardware divide, saturated
math support
• DSP extension: Single cycle 16/32-bit MAC, single cycle dual 16-bit MAC, 8/16-
bit SIMD arithmetic
Silicon options:
Chips
• Freescale Kinetis K
The following microcontrollers are based on the Cortex-M4F (M4 + FPU) core:
• Freescale Kinetis K
• Infineon XMC4000
• Spansion FM4F
• Toshiba TX04
Square wave
A square wave is a non-sinusoidal periodic waveform (which can be represented as an infinite
summation of sinusoidal waves), in which the amplitude alternates at a steady frequency between
fixed minimum and maximum values, with the same duration at minimum and maximum. The
transition between minimum to maximum is instantaneous for an ideal square wave; this is not
realizable in physical systems. Square waves are often encountered in electronics and signal
processing. Its stochastic counterpart is a two-state trajectory. A similar but not necessarily
symmetrical wave, with arbitrary durations at minimum and maximum, is called a rectangular
wave (of which the square wave is a special case).
Square waves are universally encountered in digital switching circuits and are naturally
generated by binary (two-level) logic devices. They are used as timing references or "clock
signals", because their fast transitions are suitable for triggering synchronous logic circuits at
precisely determined intervals. However, as the frequency-domain graph shows, square waves
contain a wide range of harmonics; these can generate electromagnetic radiation or pulses of
current that interfere with other nearby circuits, causing noise or errors. To avoid this problem in
very sensitive circuits such as precision analog-to-digital converters, sine waves are used instead
of square waves as timing references.
In musical terms, they are often described as sounding hollow, and are therefore used as the basis
for wind instrument sounds created using subtractive synthesis. Additionally, the distortion effect
used on electric guitars clips the outermost regions of the waveform, causing it to increasingly
resemble a square wave as more distortion is applied.
The sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the function sine, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time (t) is:

y(t) = A sin(2πft + φ)

where A is the amplitude, f is the frequency, and φ is the phase.
The sine wave is important in physics because it retains its wave shape when added to another
sine wave of the same frequency and arbitrary phase and magnitude. It is the only periodic
waveform that has this property. This property leads to its importance in Fourier analysis and
makes it acoustically unique.
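The closure property stated above can be verified numerically: adding two sinusoids of the same frequency via phasor addition yields a single sinusoid with a computable amplitude and phase (a sketch; the names are illustrative):

```python
import math

def add_sinusoids(a1, p1, a2, p2):
    """Combine a1*sin(wt + p1) + a2*sin(wt + p2) into R*sin(wt + phi) via phasors."""
    x = a1 * math.cos(p1) + a2 * math.cos(p2)   # in-phase component
    y = a1 * math.sin(p1) + a2 * math.sin(p2)   # quadrature component
    return math.hypot(x, y), math.atan2(y, x)

amp, phase = add_sinusoids(1.0, 0.0, 1.0, math.pi / 2)

# Spot-check the identity at an arbitrary time instant:
w, t = 2 * math.pi * 50.0, 0.0123
lhs = math.sin(w * t) + math.sin(w * t + math.pi / 2)
rhs = amp * math.sin(w * t + phase)
```

Here two unit sinusoids 90° apart combine into a single sinusoid of amplitude √2 and phase π/4; no other periodic waveform shape is preserved under such addition.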
A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices.
The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption
across both enterprise and small to medium sized business environments.
A SAN does not provide file abstraction, only block-level operations. However, file systems built
on top of SANs do provide file-level access, and are known as SAN filesystems or shared disk file
systems.
Sharing storage usually simplifies storage administration and adds flexibility since cables and
storage devices do not have to be physically moved to shift storage from one server to another.
Other benefits include the ability to allow servers to boot from the SAN itself. This allows for a
quick and easy replacement of faulty servers since the SAN can be reconfigured so that a
replacement server can use the LUN of the faulty server. While this area of technology is still
new, many view it as being the future of the enterprise datacenter.
SANs also tend to enable more effective disaster recovery processes. A SAN could span a distant
location containing a secondary storage array. This enables storage replication either
implemented by disk array controllers, by server software, or by specialized SAN devices. Since
IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP
(FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The
traditional physical SCSI layer could only support a few meters of distance - not nearly enough
to ensure business continuance in a disaster.
The economic consolidation of disk arrays has accelerated the advancement of several features
including I/O caching, snapshotting, and volume cloning (Business Continuance Volumes or
BCVs).
Procedure:
Mathematical Model:
1. Let S be a system that describes the generation of signal along with an emulated signal.
S = {…..}
2. Identify input as I
S = {I,N,…..}
Ii ∈ I
N = number of waves
3. Identify output as O.
S = {I, N, O, …..}
O = V–T response curve, i.e., a graph of voltage (y-axis) versus time (x-axis)
4. Identify process as P.
S = {I, N, O, P, ……}
P = {Ps, Pb, Pc, Pr}
5. Identify success as Su.
S = {I, N, O, P, Su, F, ……}
The user will capture the signal, filter it, and compare the results while storing them in the SAN.
S = I ∪ N ∪ O ∪ P
Observations:
Results:
•
•
Conclusion:
EXPERIMENT NO.5
Title: To grab an image from a camera and apply an edge detection algorithm.
Aim: Write a Python program to grab the image from the camera and apply the edge detection algorithm (overloaded with Sobel variants and others) to find the edges; use BBB / ARM Cortex A5/A9/M4 mobile boards. Store the images in a SAN (for BIGDATA analytics).
Requirements:
H/W Requirements:
• 8 GB RAM
• 500GB/1TB HDD
• Web Camera
S/W Requirements:
Theory:
Consider a one-dimensional row of pixel intensities with a large jump between the 4th and 5th pixels:

5 | 7 | 6 | 4 | 152 | 148 | 149
If the intensity difference were smaller between the 4th and the 5th pixels and if the intensity
differences between the adjacent neighboring pixels were higher, it would not be as easy to say
that there should be an edge in the corresponding region. Moreover, one could argue that this
case is one in which there are several edges.
5 | 7 | 6 | 41 | 113 | 148 | 149
Hence, to firmly state a specific threshold on how large the intensity change between two
neighboring pixels must be for us to say that there should be an edge between these pixels is not
always simple. Indeed, this is one of the reasons why edge detection may be a non-trivial
problem unless the objects in the scene are particularly simple and the illumination conditions
can be well controlled (see for example, the edges extracted from the image with the girl above).
Approaches
There are many methods for edge detection, but most of them can be grouped into two
categories, search-based and zero-crossing based. The search-based methods detect edges by first
computing a measure of edge strength, usually a first-order derivative expression such as the
gradient magnitude, and then searching for local directional maxima of the gradient magnitude
using a computed estimate of the local orientation of the edge, usually the gradient direction. The
zero-crossing based methods search for zero crossings in a second-order derivative expression
computed from the image in order to find edges, usually the zero-crossings of the Laplacian or
the zero-crossings of a non-linear differential expression. As a pre-processing step to edge
detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also
noise reduction).
The edge detection methods that have been published mainly differ in the types of smoothing
filters that are applied and the way the measures of edge strength are computed. As many edge
detection methods rely on the computation of image gradients, they also differ in the types of
filters used for computing gradient estimates in the x- and y-directions.
A survey of a number of different edge detection methods can be found in (Ziou and Tabbone
1998) see also the encyclopedia articles on edge detection in Encyclopedia of Mathematics and
Encyclopedia of Computer Science and Engineering.
John Canny considered the mathematical problem of deriving an optimal smoothing filter given
the criteria of detection, localization and minimizing multiple responses to a single edge. He
showed that the optimal filter given these assumptions is a sum of four exponential terms. He
also showed that this filter can be well approximated by first-order derivatives of Gaussians.
Canny also introduced the notion of non-maximum suppression, which means that given the presmoothing filters, edge points are defined as points where the gradient magnitude assumes a local maximum in the gradient direction. Looking for the zero crossing of the 2nd derivative along the gradient direction was first proposed by Haralick. It took less than two decades to find a modern geometric variational meaning for that operator that links it to the Marr–Hildreth (zero crossing of the Laplacian) edge detector. That observation was presented by Ron Kimmel and Alfred Bruckstein.[10]
Although his work was done in the early days of computer vision, the Canny edge detector
(including its variations) is still a state-of-the-art edge detector. [11] Unless the preconditions are
particularly suitable, it is hard to find an edge detector that performs significantly better than the
Canny edge detector.
The Canny-Deriche detector was derived from similar mathematical criteria as the Canny edge
detector, although starting from a discrete viewpoint and then leading to a set of recursive filters
for image smoothing instead of exponential filters or Gaussian filters.[12]
The differential edge detector described below can be seen as a reformulation of Canny's method
from the viewpoint of differential invariants computed from a scale space representation leading
to a number of advantages in terms of both theoretical analysis and sub-pixel implementation.
The gradient of the image can be estimated by applying suitable filter masks to the image data.
The well-known and earlier Sobel operator is based on the following filters:

    Gx = [ -1  0  +1 ]        Gy = [ +1  +2  +1 ]
         [ -2  0  +2 ]             [  0   0   0 ]
         [ -1  0  +1 ]             [ -1  -2  -1 ]

Given such estimates of first-order derivatives, the gradient magnitude is then computed as:

    |G| = sqrt(Gx^2 + Gy^2)
Other first-order difference operators for estimating the image gradient have been proposed,
such as the Prewitt operator, the Roberts cross, and the Frei-Chen operator.
It is possible to extend the filter dimension to avoid the problem of recognizing edges in
low-SNR images, at the cost of a loss of resolution. Examples are the Extended Prewitt 7x7
and Abdou operators.
Edge thinning
Edge thinning is a technique used to remove the unwanted spurious points on the edges in an
image. This technique is employed after the image has been filtered for noise (using median,
Gaussian filter etc.), the edge operator has been applied (like the ones described above) to detect
the edges and after the edges have been smoothed using an appropriate threshold value. This
removes all the unwanted points and if applied carefully, results in one pixel thick edge elements.
Advantages:
• Sharp and thin edges lead to greater efficiency in object recognition.
• If Hough transforms are used to detect lines and ellipses, then thinning could give much
better results.
• If the edge happens to be the boundary of a region, then thinning could easily give the
image parameters like perimeter without much algebra.
There are many popular algorithms used to do this, one such is described below:
• Choose a type of connectivity, like 8, 6 or 4.
• 8 connectivity is preferred, where all the immediate pixels surrounding a particular pixel
are considered.
• Remove points from the north, south, east and west.
• Do this in multiple passes, i.e. after the north pass, use the same semi-processed image in
the other passes, and so on.
The number of passes across direction should be chosen according to the level of accuracy
desired.
while the second-order directional derivative in the v-direction of Lv should be negative, i.e.,

    ∂vv(Lv) ≤ 0.

Written out as an explicit expression in terms of local partial derivatives Lx, Ly, ..., Lyyy, this
edge definition can be expressed as the zero-crossing curves of the differential invariant

    Lv^2 Lvv = Lx^2 Lxx + 2 Lx Ly Lxy + Ly^2 Lyy = 0

that satisfy a sign condition on the differential invariant

    Lv^2 Lvvv = Lx^3 Lxxx + 3 Lx^2 Ly Lxxy + 3 Lx Ly^2 Lxyy + Ly^3 Lyyy ≤ 0.
Higher-order derivatives for the third-order sign condition can be obtained in an analogous
fashion.
Conceptually the Cortex-M4 is a Cortex-M3 plus DSP Instructions, and optional floating-point
unit (FPU). If a core contains an FPU, it is known as a Cortex-M4F, otherwise it is a Cortex-M4.
Key features of the Cortex-M4 core are:
• ARMv7E-M architecture
• Instruction sets
• Thumb-1 (entire)
• Thumb-2 (entire)
• 1-cycle 32-bit hardware multiply, 2-12 cycle 32-bit hardware divide, saturated
math support
• DSP extension: Single cycle 16/32-bit MAC, single cycle dual 16-bit MAC, 8/16-
bit SIMD arithmetic
Silicon options:
Chips
The following microcontrollers are based on the Cortex-M4 core:
• Freescale Kinetis K
The following microcontrollers are based on the Cortex-M4F (M4 + FPU) core:
• Freescale Kinetis K
• Infineon XMC4000
• Spansion FM4F
• Toshiba TX04
Square wave
A square wave is a non-sinusoidal periodic waveform (which can be represented as an infinite
summation of sinusoidal waves), in which the amplitude alternates at a steady frequency between
fixed minimum and maximum values, with the same duration at minimum and maximum. The
transition between minimum to maximum is instantaneous for an ideal square wave; this is not
realizable in physical systems. Square waves are often encountered in electronics and signal
processing. The square wave's stochastic counterpart is a two-state trajectory. A similar but not necessarily
symmetrical wave, with arbitrary durations at minimum and maximum, is called a rectangular
wave (of which the square wave is a special case).
Square waves are universally encountered in digital switching circuits and are naturally
generated by binary (two-level) logic devices. They are used as timing references or "clock
signals", because their fast transitions are suitable for triggering synchronous logic circuits at
precisely determined intervals. However, as the frequency-domain graph shows, square waves
contain a wide range of harmonics; these can generate electromagnetic radiation or pulses of
current that interfere with other nearby circuits, causing noise or errors. To avoid this problem in
very sensitive circuits such as precision analog-to-digital converters, sine waves are used instead
of square waves as timing references.
In musical terms, they are often described as sounding hollow, and are therefore used as the basis
for wind instrument sounds created using subtractive synthesis. Additionally, the distortion effect
used on electric guitars clips the outermost regions of the waveform, causing it to increasingly
resemble a square wave as more distortion is applied.
The sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation.
It is named after the function sine, of which it is the graph. It occurs often in pure and applied
mathematics, as well as physics, engineering, signal processing and many other fields. Its most
basic form as a function of time (t) is:

    y(t) = A sin(2πft + φ)

where A is the amplitude, f is the frequency (oscillations per second), and φ is the phase.
The sine wave is important in physics because it retains its wave shape when added to another
sine wave of the same frequency and arbitrary phase and magnitude. It is the only periodic
waveform that has this property. This property leads to its importance in Fourier analysis and
makes it acoustically unique.
A storage area network (SAN) is a dedicated network that provides access to consolidated,
block-level data storage. SANs are primarily used to make storage devices, such as disk
arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like
locally attached devices to the operating system. A SAN typically has its own network of storage
devices that are generally not accessible through the local area network (LAN) by other devices.
The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption
across both enterprise and small to medium sized business environments.
A SAN does not provide file abstraction, only block-level operations. However, file systems built
on top of SANs do provide file-level access, and are known as SAN filesystems or shared disk file
systems.
Sharing storage usually simplifies storage administration and adds flexibility since cables and
storage devices do not have to be physically moved to shift storage from one server to another.
Other benefits include the ability to allow servers to boot from the SAN itself. This allows for a
quick and easy replacement of faulty servers since the SAN can be reconfigured so that a
replacement server can use the LUN of the faulty server. While this area of technology is still
new, many view it as being the future of the enterprise datacenter.
SANs also tend to enable more effective disaster recovery processes. A SAN could span a distant
location containing a secondary storage array. This enables storage replication either
implemented by disk array controllers, by server software, or by specialized SAN devices. Since
IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP
(FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The
traditional physical SCSI layer could only support a few meters of distance - not nearly enough
to ensure business continuance in a disaster.
The economic consolidation of disk arrays has accelerated the advancement of several features
including I/O caching, snap shotting, and volume cloning (Business Continuance Volumes or
BCVs).
Procedure:
• To grab the image from the camera.
Mathematical Model:
1. Let S be a system that grabs the image and applies an edge detection algorithm to it.
S = {…..}
2. Identify input as I
S = {I, N, …}
Ii ∈ I
N = number of images
3. Identify output as O.
S = {I, N, O, …}
4. Identify the set of processes as P.
S = {I, N, O, P, …}
P = {Ps, Pb, Pc, Pr}
5. Identify success as Su and failure as F.
S = {I, N, O, P, Su, F, …}
The user captures an image, finds its edges using the edge detection algorithm, and stores the result in the SAN.
S = I ∪ N ∪ O ∪ P
Observation:
Results:
Conclusion: