Embedded Systems GMR Lan
Lecture No: 1
The main objective of this class is to give a detailed description of a general
computing system and to compare it with an embedded system.
This classification is based on the order in which embedded processing systems
evolved from the first version to where they are today. As per this criterion,
embedded systems are classified as
i. First generation
ii. Second Generation
iii. Third Generation
iv. Fourth Generation
First Generation :
The early embedded systems were built around 8-bit microprocessors like the 8085
and Z80, and 4-bit microcontrollers. They were simple in hardware, with firmware
developed in assembly code. Digital telephone keypads, stepper motor control units,
etc. are examples of this generation.
Second Generation :
These are embedded systems built around 16-bit microprocessors and 8- or 16-bit
microcontrollers, following the first generation embedded systems. The instruction
sets of the second generation processors/controllers were much more complex and
powerful than those of the first generation processors/controllers.
Third Generation :
With advances in processor technology, embedded system developers started making
use of powerful 32-bit processors and 16-bit microcontrollers in their designs. A new
concept of application- and domain-specific processors emerged, such as Digital Signal
Processors (DSP) and Application Specific Integrated Circuits (ASIC).
Fourth Generation : The advent of System on Chips (SoC), reconfigurable processors and
multicore processors is bringing high performance, tight integration and
miniaturization into the embedded device market. The SoC technique implements a
total system on a chip by integrating different functionalities with a processor core on
a single integrated circuit.
Card readers: barcode and smart card readers, handheld devices, etc.
Banking and retail: automatic teller machines (ATM), currency counters, point of
sale (POS) terminals.
Measurement and instrumentation: digital multimeters, digital CROs, logic analyzers,
PLC systems, etc.
Computer networking systems: network routers, switches, hubs, firewalls, etc.
Computer peripherals: printers, scanners, fax machines, etc.
For the output pin to function properly, it should be pulled to the desired
voltage for the output (o/p) device through a pull-up resistor. The output signal of the IC is fed to
A task is the execution of a sequential program. It starts with reading of the input
data and of the internal state of the task, and terminates with the production of the results
and updating the internal state. The control signal that initiates the execution of a task
must be provided by the operating system. The time interval between the start of the task
and its termination, given an input data set x, is called the actual duration dact(task,x) of
the task on a given target machine. A task that does not have an internal state at its point
of invocation is called a stateless task; otherwise, it is called a task with state.
Simple Task (S-task):
If there is no synchronization point within a task, we call it a simple task (S-task),
i.e., whenever an S-task is started, it can continue until its termination point is reached.
Because an S-task cannot be blocked within the body of the task, the execution time of an
S-task is not directly dependent on the progress of the other tasks in the node, and can be
determined in isolation. It is possible for the execution time of an S-task to be extended
by indirect interactions, such as by task preemption by a task with higher priority.
Complex Task (C-Task):
A task is called a complex task (C-Task) if it contains a blocking synchronization
statement (e.g., a semaphore operation "wait") within the task body. Such a "wait"
operation may be required because the task must wait until a condition outside the task is
satisfied, e.g., until another task has finished updating a common data structure, or until
input from a terminal has arrived. If a common data structure is implemented as a
protected shared object, only one task may access the data at any particular moment
(mutual exclusion). All other tasks must be delayed by the "wait" operation until the
currently active task finishes its critical section. The worst-case execution time of a
complex task in a node is therefore a global issue because it depends directly on the
progress of the other tasks within the node, or within the environment of the node.
A task can be in one of the following states: running, waiting, or ready-to-run.
A task is said to be in the running state if it is being executed by the CPU.
A task is said to be in the waiting state if it is waiting for another event to occur.
A task is said to be in the ready-to-run state if it is waiting in a queue for the CPU.
Task Scheduler:
An application in real-time embedded system can always be broken down into a number
of distinctly different tasks. For example,
Keyboard scanning
Display control
Input data collection and processing
Responding to and processing external events
Communicating with host or others
Each of the tasks can be represented by a state machine. However, implementing a single
sequential loop for the entire application can prove to be a formidable task. This is
because of the various time constraints on the tasks - the keyboard has to be scanned,
the display controlled, the input channel monitored, etc. One method of solving the
above problem is to use a simple task scheduler. The various tasks are handled by the
scheduler in an orderly manner. This produces the effect of simple multitasking with a
single processor. A bonus of using a scheduler is the ease of implementing the sleep
mode in microcontrollers, which can reduce the power consumption dramatically (from
mA to µA). This is important in battery operated embedded systems.
There are several ways of implementing the scheduler - preemptive or cooperative, round
robin or with priority. In a cooperative or non-preemptive system, tasks cooperate with
each other and voluntarily give up control of the CPU.
The embedded firmware is responsible for controlling the various peripherals of
the embedded hardware and generating responses in accordance with the functional
requirements mentioned in the requirements document for the particular embedded product.
Firmware is considered the master brain of the embedded system. Imparting
intelligence to an embedded system is a one-time process and it can happen at any
stage: it can be immediately after the fabrication of the embedded hardware, or at a
later stage. Once intelligence is imparted to the embedded product, by embedding the
firmware in the hardware, the product starts functioning properly and will continue
serving the assigned task till a hardware breakdown occurs or a corruption in the
embedded firmware occurs. In the case of hardware breakdown, the damaged
component may need to be replaced by a new component, and for firmware corruptions
the firmware should be re-loaded to bring back the embedded product to normal
functioning. Coming back to the newborn baby example, the newborn baby is very
adaptive in terms of intelligence, meaning it learns from mistakes and updates its memory
each time a mistake or a deviation in expected behavior occurs, whereas most
embedded systems are less adaptive or non-adaptive. For most embedded
products the embedded firmware is stored in a permanent memory (ROM) and is
not alterable by end users. Some of the embedded products used in the control and
instrumentation domain are adaptive. This adaptability is achieved by making use of
configurable parameters which are stored in an alterable permanent memory area. The
parameters get updated in accordance with deviations from expected behavior, and the
firmware makes use of these parameters for creating the response the next time
similar variations occur.
Designing embedded firmware requires understanding of the particular embedded
product hardware, like the interfacing of the various components, memory map details,
I/O port details, configuration and register details of the various hardware chips used,
and some programming language (either a target processor/controller specific low level
assembly language or a high level language like C, C++, or Java).
B.Anilkumar, P Kalyan Chakravarthi, ECE Department, GMRIT, RAJAM
Embedded Systems Unit 3
Lecture No: 38
Embedded firmware development starts with the conversion of the firmware
requirements into a program model using modeling tools like UML or flowchart based
representation. The UML diagrams or flowcharts give a diagrammatic representation of
the decisions to be taken and the tasks to be performed. Once the program model is
created, the next step is the implementation of the tasks and actions by capturing the
model using a language which is understandable by the target processor/controller. The
following sections give an overview of the various steps involved in
embedded firmware design and development.
Embedded Firmware Design Approaches :
The firmware design approach for an embedded product depends purely on the
complexity of the functions to be performed, the speed of operation required, etc. Two
basic approaches are used for embedded firmware design. They are 'Conventional
Procedure Based Firmware Design' and 'Embedded Operating System (OS) Based
Design'. The conventional procedure based design is also known as the 'Super Loop
Model'. We will discuss each of them in detail in the following sections.
The Super Loop Based Approach :
The super loop based firmware development approach is adopted for applications that
are not time critical and where the response time is not so important (embedded systems
where missing deadlines is acceptable). It is very similar to conventional procedural
programming, where the code is executed task by task. The task listed at the top of the
program code is executed first, and the tasks just below the top are executed after
completing the first task. This is a truly procedural approach. In a multiple-task based
system, each task is executed in series in this approach. The firmware execution flow for
this will be:
1. Configure the common parameters and perform initialization for the various hardware
components (memory, registers, etc.)
2. Start the first task and execute it
3. Execute the second task
4. Execute the next task
5. …
6. …
7. Execute the last defined task
8. Jump back to the first task and follow the same flow
We can use either a target processor/controller specific language (generally known as
assembly language or low level language), or a target processor/controller independent
language (like C, C++, or Java, commonly known as a high level language), or a
combination of assembly and high level language. We will discuss where each of these
approaches is used, and the relative merits and de-merits of each, in the following sections.
Assembly language based development:
Assembly language is the human readable notation of 'machine language', whereas
'machine language' is a processor understandable language. Processors deal only with
binaries: machine language is a binary representation and it consists of 1s and 0s. Machine
language is made readable by using specific symbols called 'mnemonics'. Hence
assembly language can be considered as an interface between the processor and the
programmer. Assembly language and machine language are processor/controller
dependent, and an assembly program written for one processor/controller family will
not work with others.
Assembly language programming is the task of writing processor specific machine code
in mnemonic form; an assembler converts the mnemonics into the actual processor
instructions (machine language) and associated data.
Assembly language programming was the most common type of programming adopted in the
beginning of the software revolution. If we look back at the history of programming, we can
see that a large number of programs were written entirely in assembly language. Even in
the 1990s, the majority of console video games were written in assembly language,
including most popular games written for the Sega Genesis and the Super Nintendo
Entertainment System. The popular arcade game NBA Jam, released in 1993, was also
coded entirely in assembly language.
Even today, almost all low level, system related programming is carried out using
assembly language. Some operating system dependent tasks require low-level
languages. In particular, assembly language is often used in writing the low level
interaction between the operating system and the hardware, for instance in device drivers.
The meaning of 'interrupt' is to break the sequence of operation. While the CPU is
executing a program, an 'interrupt' breaks the normal sequence of execution of
instructions and diverts execution to some other program called an Interrupt Service
Routine (ISR). After executing the ISR, control is transferred back to the main program.
Interrupt processing is an alternative to polling.
Need for Interrupts: Interrupts are particularly useful when interfacing I/O devices that
provide or require data at relatively low data transfer rates.
Types of Interrupts: There are two types of Interrupts in 8086.
They are:
(i)Hardware Interrupts and
(ii)Software Interrupts
(i) Hardware Interrupts (External Interrupts). The Intel microprocessors support
hardware interrupts through:
• Two pins that allow interrupt requests, INTR and NMI
• One pin that acknowledges, INTA, the interrupt requested on INTR.
Dedicated Interrupts:
• Type 0 The divide error occurs whenever the result of a division overflows or an
attempt is made to divide by zero.
• Type 1 Single-step or trap occurs after execution of each instruction if the trap
(TF) flag bit is set.
– upon accepting this interrupt, TF bit is cleared so the interrupt service
procedure executes at full speed
• Type 2 The non-maskable interrupt occurs when a logic 1 is placed on the NMI
input pin to the microprocessor.
– non-maskable—it cannot be disabled
• Type 3 A special one-byte instruction (INT 3) that uses this vector to access its
interrupt-service procedure.
– often used to store a breakpoint in a program for debugging
• Type 4 Overflow is a special vector used with the INTO instruction. The INTO
instruction interrupts the program if an overflow condition exists.
– as reflected by the overflow flag (OF)
HARDWARE INTERRUPTS:
• The two processor hardware interrupt inputs:
– non-maskable interrupt (NMI)
– interrupt request (INTR)
• When NMI input is activated, a type 2 interrupt occurs
– Because NMI is internally decoded
• The INTR input must be externally decoded to select a vector.
• Any interrupt vector can be chosen for the INTR pin, but we usually use an interrupt
type number between 20H and FFH.
• Intel has reserved interrupts 00H - 1FH for internal and future expansion.
• INTA is also an interrupt pin on the processor.
• It is an output used in response to INTR input to apply a vector type number to the data
bus connections D7–D0 .
• Fig. 2 shows the three user interrupt connections on the microprocessor.
Fig. 1: The timing of the INTR input and INTA output. *This portion of the data
bus is ignored and usually contains the vector number.
Fig. 2: A simple method for generating interrupt vector type number FFH in
response to INTR
Direct Memory Access (DMA) is a capability provided by some computer bus architectures that
allows data to be sent directly from an attached device (such as a disk drive) to the
system's memory, without involving the processor in the transfer.
This chapter is about giving the reader some practical processes and techniques that have
proven useful over the years. Defining the system and its architecture, if done correctly,
is the phase of development which is the most difficult and the most important of the
entire development cycle. Figure shows the different phases of development as defined
by the Embedded System Design and Development Lifecycle Model.
This model indicates that the process of designing an embedded system and taking that
design to market has four phases:
1. Phase 1. Creating the Architecture, which is the process of planning the design of
the embedded system.
2. Phase 2. Implementing the Architecture, which is the process of developing the
embedded system.
3. Phase 3. Testing the System, which is the process of testing the embedded system
for problems, and then solving those problems.
4. Phase 4. Maintaining the System, which is the process of deploying the embedded
system into the field and providing technical support for its remaining lifetime.
Embedded systems are large in numbers, and those numbers are growing every
year as more electronic devices gain a computational element. Embedded systems
possess several common characteristics that differentiate them from desktop systems, and
that pose several challenges to designers of such systems. The key challenge is to
optimize design metrics, which is particularly difficult since those metrics compete with
one another. One particularly difficult design metric to optimize is time-to-market,
because embedded systems are growing in complexity at a tremendous rate, and the rate
at which productivity improves every year is not keeping up with that growth. This book
seeks to help improve productivity by describing design techniques that are standard and
others that are very new, and by presenting a unified view of software and hardware
design. This goal is worked towards by presenting three key technologies for embedded
systems design: processor technology, IC technology, and design technology. Processor
technology is divided into general-purpose, application-specific, and single-purpose
processors. IC technology is divided into custom, semi-custom, and programmable logic
ICs. Design technology is divided into compilation/synthesis, libraries/IP, and
test/verification. Design technology involves the manner in which we convert our concept
of desired system functionality into an implementation. We must not only design the
implementation to optimize design metrics, but we must do so quickly. As described
earlier, the designer must be able to produce larger numbers of transistors every year, to
keep pace with IC technology. Hence, improving design technology to enhance
productivity has been a focus of the software and hardware design communities for
decades.
To understand how to improve the design process, we must first understand the
design process itself. Variations of a top-down design process have become popular in
the past decade, an ideal form of which is illustrated in Figure. The designer refines the
system through several abstraction levels. At the system level, the designer describes the
desired functionality in some language, often a natural language like English, but
preferably an executable language like C; we shall call this the system specification. The
designer refines this specification by distributing portions of it among chosen processors
(general or single purpose), yielding behavioral specifications for each processor. The
Having the explicit architecture documentation helps the engineers and programmers on
the development team to implement an embedded system that conforms to the
requirements. Throughout this book, real-world suggestions have been made for
implementing various components of a design that meet these requirements. In addition
to understanding these components and recommendations, it is important to understand
what development tools are available that aid in the implementation of an embedded
system. The development and integration of an embedded system’s various hardware and
software components are made possible through development tools that provide
everything from loading software into the hardware to providing complete control over
the various system components. Embedded systems aren't typically developed on one
system alone—for example, the hardware board of the embedded system—but usually
require at least one other computer system connected to the embedded platform to
manage development of that platform. In short, a development environment is typically
made up of a target (the embedded system being designed) and a host (a PC, Sparc
Station, or some other computer system where the code is actually developed). The target
and host are connected by some transmission medium, whether serial, Ethernet, or other
method. Many other tools, such as utility tools to burn EPROMs or debugging tools, can
be used within the development environment in conjunction with host and target. The key
development tools in embedded design can be located on the host, on the target, or can
exist stand-alone. These tools typically fall under one of three categories: utility,
translation, and debugging tools. Utility tools are general tools that aid in software or
hardware development, such as editors (for writing source code), VCS (Version Control
Software) that manages software files, ROM burners that allow software to be put onto
ROMs, and so on. Translation tools convert code a developer intends for the target into a
form the target can execute, and debugging tools can be used to track down and correct
bugs in the system. Development tools of all types are as critical to a project as the
architecture design, because without the right tools, implementing and debugging the
system would be very difficult, if not impossible.
The Main Software Utility Tool: Writing Code in an Editor or IDE:
Hardware and firmware engineering design teams often run into problems and conflicts
when trying to work together. They come from different development environments,
have different tool sets and use different terminology. Often they are in different
locations within the same company or work for different companies. The two teams have
to work together, but often have conflicting differences in procedures and methods. Since
their resulting hardware and firmware work have to integrate successfully to build a
product, it is imperative that the hardware/firmware interface – including people,
technical disciplines, tools and technology – be designed properly
This article provides seven principles of hardware/firmware codesign that, if followed,
will ensure that such collaborations are a success. They are:
Collaborate on the Design;
Set and Adhere to Standards;
Balance the Load;
Design for Compatibility;
Anticipate the Impacts;
Design for Contingencies; and
Plan Ahead.
Collaborate on the Design
Designing and producing an embedded product is a team effort. Hardware engineers
cannot produce the product without the firmware team; likewise, firmware engineers
cannot produce the product without the hardware team.
Even though the two groups know that the other exists, they sometimes don’t
communicate with each other very well. Yet it is very important that the interface where
the hardware and firmware meet—the registers and interrupts—be designed carefully
and with input from both sides.
Collaborating implies proactive participation on both sides. Figure 2.1 shows a picture of
a team rowing a boat. Some are rowing on the right side and some on the left. There is a
leader steering the boat and keeping the team rowing in unison. Both sides have to work
and work together. If one side slacks off, it is very difficult for the other side and the
leader to keep the boat going straight.
In order to collaborate, both the hardware and firmware teams should get together to
discuss a design or solve a problem. Collaboration needs to start from the very early
stages of conceptual hardware design all the way to the late stages of final firmware
development. Each side has a different perspective, that is, a view from their own
environment, domain, or angle.
Collaboration helps engineers increase their knowledge of the system as a whole,
allowing them to make better decisions and provide the necessary features in the design.
The quality of the product will be higher because both sides are working from the same
agenda and specification.
Documentation is the most important collaborative tool. It ranges from high-level product
specification down to low-level implementation details. The hardware specification
written by hardware engineers with details about the bits and registers forming the
hardware/ firmware interface is the most valuable tool for firmware engineers. They have
to have this to correctly code up the firmware. Of course, it goes without saying that this
specification must be complete and correct.
Software tools are available on the market to assist in collaborative efforts. In some, the
chip specifications are entered and the tool generates a variety of hardware (Verilog,
VHDL. . . ), firmware (C, C++ . . . ), and documentation (*.rtf, *.xls, *.txt . . . ) files.
Other collaborative tools aid parallel development during the hardware design phase,
such as co-simulation, virtual prototypes, FPGA-based prototype boards, and modifying
old products.
Collaboration needs to happen, whether it is achieved by walking over to the desk on the
same floor, or by using email, phone, and video conferencing, or by occasional trips to
another site in the same country or halfway around the world.
This principle, collaboration, is the foundation to all of the other principles. As we shall
see, all of the other principles require some amount of collaboration between the
hardware and firmware teams to be successful.
Set and Adhere to Standards
Standards need to be set and followed within the organization. I group standards into
industry standards and internal standards.
For example, USB is widely known and used for connecting devices to computers. If this
standard is adhered to, any USB-enabled device can plug into any computer and a well-
defined behavior will occur (even if it is "unknown USB device installed").
Industry standards evolve but still behave in a well-defined manner. USB has evolved,
from 1.1, to 2.0, and now 3.0, but it still has a well-defined behavior when plugging one
version into another.
By internal standards, I mean that you have set standards, rules, and guidelines that
everybody must follow within your organization. Modules are written in a certain
fashion, specific quality checks are performed, and documentation is written in a
specified format. Common practices and methods are defined to promote reuse and avoid
the complexity of multiple, redundant ways of doing the same thing.
In the same way that industry standards allow many companies to produce similar
products, following internal standards allows many engineers to work together and
encourages them to make refinements to the design. It provides consistency among
modules, creation of common test suites and debugging tools, and it spreads expertise
among all the engineers.
Look at the standards within your organization. Look for best practices that are being
used and formalize them to make them into standards that everybody abides by. There are
many methods and techniques in the industry that help with this, such as CMMI
(capability maturity model integration, an approach for improving processes;
sei.cmu.edu/cmmi), ISO (International Organization for Standardization, international
standards for business, government, and society; iso.org), and Agile (software
development methods promoting regular inspection and adaptation; agilealliance.org).
Adapt and change your internal standards as necessary. If a change needs to be made, it
needs to go through a review and approval process by all interested parties.
The hardware components within an embedded system can only directly transmit, store,
and execute machine code, a basic language consisting of ones and zeros. Machine code
was used in earlier days to program computer systems, which made creating any complex
application a long and tedious ordeal. In order to make programming more efficient,
machine code was made visible to programmers through the creation of a
hardware-specific set of instructions, where each instruction corresponded to one or
more machine code operations. These hardware-specific sets of instructions were
referred to as assembly language. Over time, other programming languages, such as C,
C++, Java, etc., evolved with instruction sets that were (among other things) more
hardware-independent. These are commonly referred to as high-level languages because
they are semantically further away from machine code, they more closely resemble
human languages, and are typically independent of the hardware. This is in contrast to a
low-level language, such as assembly language, which more closely resembles machine
code. Unlike high-level languages, low-level languages are hardware dependent, meaning
there is a unique instruction set for processors with different architectures. Table
outlines this evolution of programming languages. Because machine
code is the only language the hardware can directly execute, all other languages need
some type of mechanism to generate the corresponding machine code. This mechanism
usually includes one or some combination of preprocessing, translation, and
interpretation. Depending on the language, these mechanisms exist on the programmer’s
host system (typically a nonembedded development system, such as a PC or Sparc
station), or the target system (the embedded system being developed). See Figure .
Preprocessing is an optional step that occurs before either the translation or interpretation of source code.
Figure is a snapshot of a popular standard circuit simulator, called PSpice. This circuit
simulation software is a variation of another circuit simulator that was originally
developed at University of California, Berkeley called SPICE (Simulation Program with
Integrated Circuit Emphasis). PSpice is the PC version of SPICE, and is an example of a
simulator that can do several types of circuit analysis, such as nonlinear transient,
nonlinear dc, linear ac, noise, and distortion, to name a few. As shown in the figure,
circuits created in this simulator can be made up of a variety of active and/or passive elements.
Many commercially available electrical circuit simulator tools are generally similar to
PSpice in terms of their overall purpose, and mainly differ in what analysis can be done,
what circuit components can be simulated, or the look and feel of the user interface of the
tool. Because of the importance of and costs associated with designing hardware, there
are many industry techniques in which CAD tools are utilized to simulate a circuit. Given
a complex set of circuits in a processor or on a board, it is very difficult, if not
impossible, to perform a simulation on the whole design, so a hierarchy of simulators and
models are typically used. In fact, the use of models is one of the most critical factors in
hardware design, regardless of the efficiency or accuracy of the simulator. At the highest
Locator: produces the target machine code (which the locator glues into the RTOS), and the
combined code (called a map) gets copied into the target ROM. The locator doesn't stay in
the target environment; hence all addresses are resolved, guided by locating tools and
directives, prior to running the code.
EMBEDDED SOFTWARE DEVELOPMENT TOOLS
Locating Program Components – Segments
The unchanging embedded program (binary code) and constants must be kept in ROM so
they are retained even on power-off.
Among the goals of testing and assuring the quality of a system are finding bugs within a
design and tracking whether the bugs are fixed. Quality assurance and testing is similar to
debugging, discussed earlier in this chapter, except that the goals of debugging are to
actually fix discovered bugs. Another main difference between debugging and testing the
system is that debugging typically occurs when the developer encounters a problem in
trying to complete a portion of the design, and then typically tests-to-pass the bug fix
(meaning tests only to ensure the system minimally works under normal circumstances).
With testing, on the other hand, bugs are discovered as a result of trying to break the
system, including both testing-to-pass and testing-to-fail, where weaknesses in the system
are probed. Under testing, bugs usually stem either from the system not adhering to the
architectural specifications (behaving in a way it shouldn't according to the
documentation, not behaving in a way it should according to the documentation, or
behaving in a way not mentioned in the documentation at all) or from the inability to test
the system.
The types of bugs encountered in testing depend on the type of testing being done. In
general, testing techniques fall under one of four models: static black box testing, static
white box testing, dynamic black box testing, or dynamic white box testing (see the matrix
in Figure 12-9). Black box testing occurs with a tester that has no visibility into the
internal workings of the system (no schematics, no source code, etc.). Black box testing is
based on general product requirements documentation, as opposed to white box testing
(also referred to as clear box or glass box testing) in which the tester has access to source
code, schematics, and so on. Static testing is done while the system is not running,
whereas dynamic testing is done when the system is running.
Within each of the models (shown in Figure 12-10), testing can be further broken down
to include unit/module testing (incremental testing of individual elements within the
system), compatibility testing (testing that the element doesn’t cause problems with other
elements in the system), integration testing (incremental testing of integrated elements),
system testing (testing the entire embedded system with all elements integrated),
regression testing (rerunning previously passed tests after system modification), and
manufacturing testing (testing to ensure that manufacturing of system didn’t introduce
bugs), just to name a few. From these types of tests, an effective set of test cases can be
derived that verify that an element and/or system meets the architectural specifications, as
well as validate that the element and/or system meets the actual requirements, which may
or may not have been reflected correctly or at all in the documentation. Once the test
cases have been completed and the tests are run, how the results are handled can vary.
Debugging Tools:
Aside from creating the architecture, debugging code is probably the most difficult task
of the development cycle. Debugging is primarily the task of locating and fixing errors
within the system. This task is made simpler when the programmer is familiar with the
various types of debugging tools available and how they can be used (the type of
information shown in Table ). As seen from some of the descriptions in Table ,
debugging tools reside and interconnect in some combination of standalone devices, on
the host, and/or on the target board.
Some of these tools are active debugging tools and are intrusive to the running of the
embedded system, while other debug tools passively capture the operation of the system
with no intrusion as the system is running. Debugging an embedded system usually
requires a combination of these tools in order to address all of the different types of
problems that can arise during the development process.