ES QA Notes
Embedded systems are classified into four categories based on their performance
and functional requirements: stand-alone, real-time, networked, and mobile
embedded systems.
Stand-alone embedded systems do not require a host system such as a computer;
they work by themselves. They take input through analog or digital input ports,
process, calculate and convert the data, and deliver the result to a connected
output device, which controls, drives or displays it.
Examples of stand-alone embedded systems are MP3 players, digital cameras,
video game consoles, microwave ovens and temperature measurement systems.
A real-time embedded system is one that produces the required output within a
specified time. These systems must meet time deadlines for the completion of a
task. Real-time embedded systems are classified into two types: soft and hard
real-time systems.
Networked embedded systems are connected to a network to access resources.
The network can be a LAN, a WAN or the Internet, and the connection can be
wired or wireless. This is the fastest growing area of embedded system
applications. In an embedded web server, all embedded devices are connected to
a web server and are accessed and controlled through a web browser. An example
of a LAN-networked embedded system is a home security system in which all
sensors are connected and run on the TCP/IP protocol.
Mobile embedded systems are used in portable devices such as cell phones,
digital cameras, MP3 players and personal digital assistants. The basic
limitation of these devices is their restricted memory and other resources.
2. Memory:
Electronic memory is an important part of embedded systems; three essential
types can be described: RAM (Random Access Memory), ROM (Read Only
Memory) and cache.
RAM is the hardware component where data is temporarily stored while the
system executes.
ROM contains the I/O routines that the system needs at boot time.
Cache is used by the processor as temporary storage during the processing and
transfer of data.
3. System Clock:
The system clock provides the precise timing reference required by all
processes running on an embedded system.
This clock is generally composed of an oscillator and some associated digital
circuitry.
4. Peripherals:
Peripheral devices are provided on embedded system boards for easy
integration.
Typical devices include serial ports, parallel ports, network ports, keyboard
and mouse ports, a memory unit and a monitor port.
Some specialised embedded systems also have other interfaces such as a
CAN bus.
Harvard architecture:
1. Computers have separate memory areas for program instructions and data.
2. There are two or more internal data buses, which allow simultaneous
access to both instructions and data.
3. The CPU fetches program instructions on the program memory bus.
4. Easier to pipeline, so high performance can be achieved.
5. Comparatively higher cost.
6. The 8051 microcontrollers (MCS-51) have an 8-bit data bus. They can
address 64K of external data memory and 64K of external program
memory.
7. These may be separate blocks of memory, so that up to 128K of memory
can be attached to the microcontroller.
8. Separate blocks of code and data memory are referred to as the Harvard
architecture.
Q. 5 Differentiate between RISC and CISC processors.
Ans.:
1. RISC stands for Reduced Instruction Set Computer; CISC stands for Complex
Instruction Set Computer.
2. RISC processors have simple instructions taking about one clock cycle; the
average Clock cycles Per Instruction (CPI) of a RISC processor is 1.5. CISC
processors have complex instructions that take multiple clock cycles to
execute; the average CPI of a CISC processor is between 2 and 15.
3. In RISC, hardly any instructions refer to memory; in CISC, most of the
instructions refer to memory.
4. RISC processors have a fixed instruction format; CISC processors have a
variable instruction format.
5. The RISC instruction set is reduced, i.e. it has only a few, mostly
primitive, instructions. The CISC instruction set has a variety of different
instructions that can be used for complex operations.
6. RISC has fewer addressing modes, and most of the instructions in the
instruction set use register-to-register addressing. CISC has many different
addressing modes and can thus represent higher level programming language
statements more efficiently.
7. In RISC, complex addressing modes are synthesized using software; CISC
already supports complex addressing modes in hardware.
8. RISC processors have multiple register sets; CISC processors have only a
single register set.
9. RISC processors are highly pipelined; CISC processors are normally not
pipelined or are less pipelined.
Little Endian
In little endian, you store the least significant byte in the smallest address. Here's
how it would look:
Address Value
1000 CD
1001 12
1002 AB
1003 90
Notice that this is in the reverse order compared to big endian. To remember which
is which, recall whether the least significant byte is stored first (thus, little endian)
or the most significant byte is stored first (thus, big endian).
Notice the term here is "byte", not "bit": endianness concerns the least or
most significant byte. These are sometimes abbreviated as LSB and MSB, with a
capital 'B' referring to byte and a lowercase 'b' to bit; the most and least
significant bytes are referred to only when discussing endianness.
Which Way Makes Sense?
Different ISAs use different endianness. While one way may seem more natural to
you (most people think big-endian is more natural), there is justification for either
one.
For example, DEC and IBMs are little endian, while Motorolas and Suns are big
endian. MIPS processors allowed you to select a configuration where it would be
big or little endian.
Why is endianness so important? Suppose you are storing int values to a file,
then you send the file to a machine which uses the opposite endianness and
reads in the values. You'll run into problems because of endianness: you'll
read in reversed values that won't make sense.
Endianness is also a big issue when sending numbers over the network. Again, if
you send a value from a machine of one endianness to a machine of the opposite
endianness, you'll have problems. This is even worse over the network, because
you might not be able to determine the endianness of the machine that sent you the
data.
The solution is to send 4-byte quantities using network byte order, which is
defined to be big endian. If your machine has the same endianness as network
byte order, then great, no change is needed. If not, then you must reverse the
bytes.
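As a sketch (assuming a POSIX host where <arpa/inet.h> is available), the standard htonl()/ntohl() helpers perform exactly this host-to-network conversion and back:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>    /* htonl(), ntohl() */

int main(void)
{
    uint32_t host_value = 0x90AB12CD;
    uint32_t wire_value = htonl(host_value);   /* host order -> network (big-endian) order */
    uint32_t back       = ntohl(wire_value);   /* network order -> host order              */

    printf("host 0x%08X  wire 0x%08X  back 0x%08X\n", host_value, wire_value, back);
    return 0;
}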
I²C uses only two bidirectional open-drain lines, Serial Data Line (SDA) and
Serial Clock Line (SCL), pulled up with resistors. Typical voltages used are +5
V or +3.3 V, although systems with other voltages are permitted.
The I²C reference design has a 7-bit or a 10-bit (depending on the device
used) address space.
The reference design is a bus with a clock (SCL) and data (SDA) lines with 7-
bit addressing. The bus has two roles for nodes: master and slave:
Master node – node that generates the clock and initiates communication with
slaves.
Slave node – node that receives the clock and responds when addressed by the
master.
The bus is a multi-master bus, which means that any number of master nodes can
be present. Additionally, master and slave roles may be changed between messages
(after a STOP is sent).
There are four potential modes of operation for a given bus device:
master transmit – master node is sending data to a slave,
master receive – master node is receiving data from a slave,
slave transmit – slave node is sending data to the master,
slave receive – slave node is receiving data from the master.
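As one concrete illustration of the master-transmit mode (a hedged sketch assuming a Linux host with the i2c-dev driver; the adapter /dev/i2c-1, the 7-bit slave address 0x48 and the two data bytes are placeholders only):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>   /* I2C_SLAVE */

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);      /* open the I2C adapter */
    if (fd < 0)
        return 1;

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {     /* select the 7-bit slave address */
        close(fd);
        return 1;
    }

    uint8_t msg[2] = { 0x01, 0x60 };          /* register number, value to write */
    write(fd, msg, sizeof msg);               /* START, address+W, two data bytes, STOP */

    close(fd);
    return 0;
}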
Message protocols:
I²C defines basic types of messages, each of which begins with a START and ends
with a STOP:
Data transmission (SPI):
To begin communication, the bus master configures the clock, using a
frequency supported by the slave device, typically up to a few MHz.
The master then selects the slave device with a logic level 0 on the select
line. If a waiting period is required, such as for an analog-to-digital
conversion, the master must wait for at least that period of time before
issuing clock cycles.
During each SPI clock cycle, a full duplex data transmission occurs. The
master sends a bit on the MOSI line and the slave reads it, while the slave
sends a bit on the MISO line and the master reads it.
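To make the full-duplex, one-bit-per-clock behaviour concrete, here is a minimal bit-banged sketch (the helper routines SPI_SET_MOSI, SPI_READ_MISO, SPI_SCK_HIGH and SPI_SCK_LOW are hypothetical placeholders for the target's pin accesses; SPI mode 0, MSB first is assumed):

#include <stdint.h>

/* Hypothetical GPIO helper routines: replace with the target's register accesses. */
extern void SPI_SET_MOSI(int level);
extern int  SPI_READ_MISO(void);
extern void SPI_SCK_HIGH(void);
extern void SPI_SCK_LOW(void);

/* Shift one byte out on MOSI while shifting one byte in from MISO. */
uint8_t spi_transfer_byte(uint8_t tx)
{
    uint8_t rx = 0;

    for (int bit = 7; bit >= 0; bit--) {
        SPI_SET_MOSI((tx >> bit) & 1);   /* master drives MOSI while SCK is low */
        SPI_SCK_HIGH();                  /* rising edge: both sides sample      */
        rx = (uint8_t)((rx << 1) | (SPI_READ_MISO() & 1));
        SPI_SCK_LOW();                   /* falling edge: prepare the next bit  */
    }
    return rx;
}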
Data framing:
Application:
Transmitting and receiving UARTs must be set for the same bit speed,
character length, parity, and stop bits for proper operation.
The receiving UART may detect some mismatched settings and set a
"framing error" flag bit for the host system; in exceptional cases the
receiving UART will produce an erratic stream of mutilated characters and
transfer them to the host system.
4. 1-WIRE INTERFACE
1-Wire is a device communications bus system designed by Dallas
Semiconductor Corp. that provides low-speed data, signaling, and power
over a single conductor.
1-Wire is similar in concept to I²C, but with lower data rates and longer
range.
A network of 1-Wire devices with an associated master device is called
a MicroLAN.
When developing and/or troubleshooting the 1-Wire bus, examination of
hardware signals can be very important. Logic analyzers and bus
analyzers are tools which collect, analyze, decode, and store signals to
simplify viewing the high-speed waveforms.
USB, short for Universal Serial Bus, is an industry standard that defines cables,
connectors and communications protocols for connection, communication, and
power supply between computers and devices.
USB was designed to standardize the connection of computer
peripherals (including keyboards, pointing devices, digital cameras,
printers, portable media players, disk drives and network adapters)
to personal computers, both to communicate and to supply electric power.
It has largely replaced a variety of earlier interfaces, such as serial
ports and parallel ports, as well as separate power chargers for portable
devices, and has become commonplace on a wide range of devices.
Also, there are 5 modes of USB data transfer, in order of increasing bandwidth:
3. FIREWIRE
IEEE 1394 is an interface standard for a serial bus for high-speed communications
and isochronous real-time data transfer. It was developed in the late 1980s and
early 1990s by Apple, which called it FireWire.
The copper cable it uses in its most common implementation can be up to
4.5 metres (15 ft) long. Power is also carried over this cable allowing
devices with moderate power requirements to operate without a separate
power supply.
FireWire is also available in wireless, Cat 5, fiber optic,
and coaxial versions.
The 1394 interface is comparable to USB though USB requires a master controller
and has greater market share.
Technical specifications
FireWire can connect up to 63 peripherals in a tree or daisy-chain topology
It allows peer-to-peer device communication - such as communication
between a scanner and a printer - to take place without using system
memory or the CPU.
FireWire also supports multiple hosts per bus.
It is designed to support plug and play and hot swapping. The copper cable
it uses in its most common implementation can be up to 4.5 metres (15 ft)
long and is more flexible than most parallel SCSI cables. In its six-
conductor or nine-conductor variations, it can supply up to 45 watts of
power per port at up to 30 volts, allowing moderate-consumption devices to
operate without a separate power supply.
4. BLUETOOTH
Bluetooth is a wireless technology standard for exchanging data over short
distances from fixed and mobile devices, and building personal area
networks (PANs).
Invented by telecom vendor Ericsson in 1994, it was originally conceived as a
wireless alternative to RS-232 data cables.
Watchdog Timer
In desktop Windows systems, if we feel our application is behaving in an
abnormal way or if the system hangs up, we have 'Ctrl + Alt + Del' to
come out of the situation.
What if it happens to our embedded system? Do we really have a 'Ctrl + Alt
+ Del' to take control of the situation? Of course not, but we have a
watchdog to monitor the firmware execution and reset the system
processor/microcontroller when the program execution hangs up.
A watchdog timer, or simply a watchdog, is a hardware timer for
monitoring the firmware execution. Depending on the internal
implementation, the watchdog timer increments or decrements a free-running
counter with each clock pulse and generates a reset signal to reset
the processor if the count reaches zero (for a down-counting watchdog) or
the highest count value (for an up-counting watchdog).
If the watchdog counter is in the enabled state, the firmware can write a
zero (for an up-counting watchdog implementation) to it before starting the
execution of a piece of code (a subroutine or portion of code which is
susceptible to execution hang-up) and the watchdog will start counting.
If the firmware execution doesn't complete, due to malfunctioning, within
the time required by the watchdog to reach the maximum count, the counter
will generate a reset pulse and this will reset the processor (if it is
connected to the reset line of the processor).
If the firmware execution completes before the expiration of the watchdog
timer, you can reset the count by writing a 0 (for an up counting watchdog
timer) to the watchdog timer register.
Most processors implement the watchdog as a built-in component and
provide a status register to control the watchdog timer (for example,
enabling and disabling watchdog functioning) and a watchdog timer register
for writing the count value.
If the processor/controller doesn't contain a built-in watchdog timer, the
same can be implemented using an external watchdog timer IC circuit.
The external watchdog timer uses hardware logic for enabling/disabling,
resetting the watchdog count, etc., instead of the firmware-based writing to
the status and watchdog timer registers.
The microprocessor supervisor IC DS1232 integrates a hardware watchdog
timer in it. In modern systems running on embedded operating systems, the
watchdog can be implemented in such a way that, when a watchdog timeout
occurs, an interrupt is generated instead of resetting the processor; the
interrupt handler for this then handles the situation in an appropriate fashion.
Figure illustrates the implementation of an external watchdog timer based
microcontroller supervisor circuit for a small scale embedded system.
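A minimal sketch of how firmware typically services ('kicks') an up-counting watchdog around a risky code section (the register addresses and the names WDT_CTRL, WDT_COUNT and WDT_ENABLE are hypothetical; real devices define their own registers and often a specific reload sequence):

#include <stdint.h>

/* Hypothetical memory-mapped watchdog registers (device-specific in practice). */
#define WDT_CTRL   (*(volatile uint32_t *)0x40001000u)
#define WDT_COUNT  (*(volatile uint32_t *)0x40001004u)
#define WDT_ENABLE 0x1u

static void wdt_kick(void)
{
    WDT_COUNT = 0u;               /* restart the up-counting watchdog from zero */
}

void risky_routine(void)
{
    WDT_CTRL |= WDT_ENABLE;       /* enable the watchdog before the risky code  */
    wdt_kick();                   /* start the timeout window                   */

    /* ... code that might hang; if it does, the counter reaches its maximum
       value and the watchdog asserts the processor reset line ... */

    wdt_kick();                   /* completed in time: reload before timeout   */
}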
Q. 10 What is the use of sensors and actuators in an embedded system? Give
examples of different types of sensors and actuators.
Ans.: Sensors and actuators are two critical components of every closed loop control
system. Such a system is also called a mechatronics system. A typical mechatronics
system consists of a sensing unit, a controller, and an actuating unit. A sensing unit
can be as simple as a single sensor or can consist of additional components such as
filters, amplifiers, modulators, and other signal conditioners. The controller accepts
the information from the sensing unit, makes decisions based on the control
algorithm, and outputs commands to the actuating unit. The actuating unit consists
of an actuator and optionally a power supply and a coupling mechanism.
Sensors: A sensor is a device that, when exposed to a physical phenomenon
(temperature, displacement, force, etc.), produces a proportional output
signal (electrical, mechanical, magnetic, etc.). The term transducer is often
used synonymously with sensor. However, ideally, a sensor is a device that
responds to a change in the physical phenomenon. On the other hand, a
transducer is a device that converts one form of energy into another form of
energy. Sensors are transducers when they sense one form of energy input
and output in a different form of energy. For example, a thermocouple
responds to a temperature change (thermal energy) and outputs a
proportional change in electromotive force (electrical energy). Therefore, a
thermocouple can be called a sensor and/or a transducer.
Listed below are various types of sensors, classified by their measurement
objectives:
1. Linear/Rotational sensors
2. Acceleration sensors
3. Force, torque and pressure sensors
4. Flow sensors
5. Temperature sensors
6. Proximity sensors
7. Light sensors
8. Smart material sensors
9. Micro and nano sensors
Timing Devices :
Real-Time clock (RTC) is a system component responsible for keeping
track of time
The RTC holds information like the current time (in hours, minutes and
seconds, in 12 hour/24 hour format), date, month, year, day of the week, etc.
and supplies a timing reference to the system.
The RTC is intended to function even in the absence of power.
The RTC chip contains a microchip for holding the time and date related
information and a backup battery cell for functioning in the absence of power,
in a single IC package.
For an operating system based embedded device, a timing reference is essential
for synchronizing the operations of the OS kernel.
The RTC can interrupt the OS kernel by asserting the interrupt line of the
processor/controller to which the RTC interrupt line is connected
The OS kernel identifies the interrupt in terms of the Interrupt Request
(IRQ) number generated by an interrupt controller.
Q. 12 What is the requirement of firmware for embedded systems? Explain the
process of embedded firmware development.
Ans.: Embedded firmware refers to the control algorithm (program instructions) and/or
the configuration settings that an embedded system developer dumps into the
code (program) memory of the embedded system. It is an unavoidable part of an
embedded system. There are various methods available for developing the
embedded firmware. They are listed below:
1) Write the program in a high level language like Embedded C/C++ using an
Integrated Development Environment (the IDE will contain an editor, compiler,
linker, debugger, simulator, etc.). IDEs are different for different families of
processors/controllers. For example, the Keil µVision3 IDE is used for all
family members of the 8051 microcontroller, since it contains the generic 8051
C compiler C51.
2) Write the program in Assembly language using the instructions supported by
your application's target processor/controller.
The instruction set for each family of processors/controllers is different, and
the program written by either of the methods given above should be converted
into processor-understandable machine code before loading it into the program
memory.
The process of converting the program written in either a high level language
or processor/controller specific Assembly code into machine readable binary
code is called 'HEX File Creation'.
The method used for 'HEX File Creation' differs depending on the
programming technique used. If the program is written in Embedded C/C++
using an IDE, the cross compiler included in the IDE converts it into the
corresponding processor/controller understandable 'HEX File'.
If you are following the Assembly language based programming technique
(method 2), you can use the utilities supplied by the processor/controller vendor
to convert the source code into a 'HEX File'. Third party tools, which may be
free of cost, are also available for this conversion.
For a beginner in the embedded software field, it is strongly recommended to use
the high level language based development technique.
The reasons for this are: writing code in a high level language is easy, and
code written in a high level language is highly portable, which means you can
use the same code to run on different processors/controllers with little or no
modification.
The only thing you need to do is re-compile the program with the required
processor's IDE, after replacing the include files for that particular processor.
Also, programs written in high level languages are not developer dependent.
Any skilled programmer can trace out the functionality of the program by just
having a look at the program.
It will be much easier if the source code contains the necessary comments and
documentation lines. It is very easy to debug, and the overall system
development time is greatly reduced.
The embedded software development process in assembly language is tedious
and time consuming.
The developer needs to know about all the instruction sets of the
processor/controller or at least s/he should carry an instruction set reference
manual with her/him.
A programmer using the assembly language technique writes the program
according to his/her own view and taste. Often he/she may implement a method
or functionality, which from an experienced person's point of view could be
achieved with a single instruction, using two or three instructions in his/her
own style.
So the program will be highly dependent on the developer. It is very difficult for
a second person to understand the code written in Assembly even if it is well
documented.
We will discuss both approaches of embedded software development in detail in
a later chapter dealing with the design of embedded firmware. Two types of
control algorithm design exist in embedded firmware development. The first
type of control algorithm development is known as the infinite loop or 'super
loop' based approach, where the control flow runs from top to bottom and then
jumps back to the top of the program in a conventional procedure.
It is similar to the while(1) { } based technique in C. The second method deals
with splitting the functions to be executed into tasks and running these tasks
using a scheduler which is part of a General Purpose or Real Time Embedded
Operating System (GPOS/RTOS).
2)Support:
a) Final phase involves maintenance and regular required updates. This step is
when end users can fine-tune the system, if they wish, to boost
performance, add new capabilities or meet additional user requirements.
b) The maintenance phase of the SDLC occurs after the product is in full
operation. Maintenance of software can include software upgrades, repairs,
and fixes of the software if it breaks.
c) Software applications often need to be upgraded or integrated with new
systems the customer deploys. It is often necessary to provide additional
testing of the software or version upgrades. During the maintenance phase,
errors or defects may exist, which would require repairs during additional
testing of the software. Monitoring the performance of the software is also
included during the maintenance phase.
d) The Support stage of the SDLC deals with the on-going support and
maintenance of the business solution. The long-term support branch owners
(may be AIMS, CS&P or O&S depending on support type) are responsible
for the maintenance and upkeep of all project-delivered documentation
used to facilitate ongoing support and maintenance.
Q. 20 Explain the relation between Upgrade, Retirement and Need phases of EDLC?
Ans.: The following figure depicts the different phases in EDLC:
Need:
The need may come from an individual or from the public or from a company.
'Need' should be articulated to initiate the Development Life Cycle; a 'Concept
Proposal' is prepared, which is reviewed by senior management for approval.
The need can fall under any one of the following three categories:
New or Custom Product Development.
Product Re-engineering.
Product Maintenance.
Upgrades:
Deals with the development of upgrades (new versions) for the product which is
already present in the market.
Product upgrade results as an output of major bug fixes.
During the upgrade phase, the system is subject to design modification to fix the
major bugs reported.
Retirement/Disposal
The retirement/disposal of the product is a gradual process.
This phase is the final phase in a product development life cycle where the product
is declared as discontinued from the market.
The disposal of a product is essential due to the following reasons
Rapid technology advancement
Increased user needs.
UNIT-2
Q. 21 Explain the Sequential Program Model for Seat Belt Warning System.
Ans.: SEQUENTIAL PROGRAM MODEL
In the sequential programming model, the functions or processing
requirements are executed in sequence .
It is the same as conventional procedural programming.
Here the program instructions are iterated and executed conditionally and
the data gets transformed through a series of operations.
Finite State Machines (FSMs) are a good choice for sequential program
modeling.
Another important tool used for modeling a sequential program is the Flow
Chart.
The FSM approach represents the states, events, transitions and actions,
whereas the Flow Chart models the execution flow.
The execution of functions in a sequential program model for the 'Seat Belt
Warning' system is as follows:
#define ON 1
#define OFF 0
#define YES 1
#define NO 0
void seat_belt_warn()
{
    wait_10sec();                        /* wait 10 seconds after ignition turn-on */
    if (check_ignition_key() == ON)
    {
        if (check_seat_belt() == OFF)
        {
            set_timer(5);                /* alarm for a maximum of 5 seconds */
            start_alarm();
            /* alarm stays on until the belt is fastened, the ignition is
               turned off, or the 5-second timer expires */
            while ((check_seat_belt() == OFF) && (check_ignition_key() == ON) &&
                   (timer_expire() == NO));
            stop_alarm();
        }
    }
}
Controller Architecture:
-Controller architecture implements the finite state machine model using a
state register and two combinational circuits.
-The state register holds the present state, and the combinational circuits
implement the logic for the next state and the output.
De-multiplexer.
For software:
Allowing the operating system direct access to hardware resources
Implementing only primitives
Implementing an interface for non-driver software (e.g., TWAIN)
Implementing a language, sometimes quite high-level (e.g., PostScript)
So choosing and installing the correct device drivers for given hardware is often a
key component of computer system configuration.
1. Each of the source files must be compiled or assembled into an object file.
2. All of the object files that result from the first step must be linked together
to produce a single object file, called the relocatable program.
3. Physical memory addresses must be assigned to the relative offsets within
the relocatable program in a process called relocation.
The result of the final step is a file containing an executable binary image
that is ready to run on the embedded system.
The embedded software development process thus has these three steps. Each
of these development tools takes one or more files as input and produces a
single output file.
Each of the steps of the embedded software build process is a
transformation performed by software running on a general-purpose
computer. To distinguish this development computer (usually a PC or Unix
workstation) from the target embedded system, it is referred to as the host
computer. The compiler, assembler, linker, and locator run on a host
computer rather than on the embedded system itself. Yet, these tools
combine their efforts to produce an executable binary image that will
execute properly only on the target embedded system.
The job of a compiler is mainly to translate programs written in some
human-readable language into an equivalent set of opcodes for a particular
processor. In that sense, an assembler is also a compiler (you might call it
an ―assembly language compiler‖), but one that performs a much simpler
one-to-one translation from one line of human-readable mnemonics to the
equivalent opcode. Everything in this section applies equally to compilers
and assemblers.
Together these tools make up the first step of the embedded software build
process. Of course, each processor has its own unique machine language, so
you need to choose a compiler that produces programs for your specific
target processor. In the embedded systems case, this compiler almost always
runs on the host computer. It simply doesn‘t make sense to execute the
compiler on the embedded system itself. A compiler such as this—that runs
on one computer platform and produces code for another—is called a cross-
compiler. The use of a cross-compiler is one of the defining features of
embedded software development.
The GNU C compiler (gcc) and assembler (as) can be configured as either
native compilers or cross-compilers. These tools support an impressive set
of host-target combinations. The gcc compiler will run on all common PC
and Mac operating systems. The target processor support is extensive,
including AVR, Intel x86, MIPS, PowerPC, ARM, and SPARC. Additional
information about gcc can be found online at http://gcc.gnu.org.
Regardless of the input language (C, C++, assembly, or any other), the
output of the cross-compiler will be an object file. This is a specially
formatted binary file that contains the set of instructions and data resulting
from the language translation process. Although parts of this file contain
executable code, the object file cannot be executed directly. In fact, the
internal structure of an object file emphasizes the incompleteness of the
larger program.
The contents of an object file can be thought of as a very large, flexible data
structure. The structure of the file is often defined by a standard format such
as the Common Object File Format (COFF) or Executable and Linkable
Format (ELF). If you‘ll be using more than one compiler (i.e., you‘ll be
writing parts of your program in different source languages), you need to
make sure that each compiler is capable of producing object files in the
same format; gcc supports both of the file formats previously mentioned.
Although many compilers (particularly those that run on Unix platforms)
support standard object file formats such as COFF and ELF, some others
produce object files only in proprietary formats. If you‘re using one of the
compilers in the latter group, you might find that you need to get all of your
other development tools from the same vendor.
Most object files begin with a header that describes the sections that follow.
Each of these sections contains one or more blocks of code or data that
originated within the source file you created. However, the compiler has
regrouped these blocks into related sections. For example, in gcc all of the
code blocks are collected into a section called text, initialized global
variables (and their initial values) into a section called data, and
uninitialized global variables into a section called bss.
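As a small illustration of the gcc section names mentioned above (where constants end up, for example in a read-only .rodata section, depends on the toolchain):

#include <stdint.h>

uint32_t boot_count = 7;       /* initialized global   -> data section */
uint32_t error_log[16];        /* uninitialized global -> bss section  */
const char banner[] = "v1.0";  /* constant data -> read-only section (toolchain dependent) */

int add(int a, int b)          /* executable instructions -> text section */
{
    return a + b;
}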
There is also usually a symbol table somewhere in the object file that
contains the names and locations of all the variables and functions
referenced within the source file. Parts of this table may be incomplete,
however, because not all of the variables and functions are always defined
in the same file. These are the symbols that refer to variables and functions
defined in other source files. And it is up to the linker to resolve such
unresolved references.
UML is not a single language, but a set of notations, syntax and semantics to
allow the creation of families of languages for particular applications.
Extension mechanisms in UML like profiles, stereotypes, tags, and constraints
can be used for particular applications.
Use-case modelling to describe system environments, user scenarios, and test
cases.
UML has support for object-oriented system specification, design and
modelling.
Growing interest in UML from the embedded systems and real-time
community.
Support for state-machine semantics which can be used for modelling and
synthesis.
UML supports object-based structural decomposition and refinement.
Q. 31 Explain embedded firmware design approaches.
Ans.: The firmware design approach for an embedded product depends purely
on the complexity of the functions to be performed, the speed of operation
required, etc.
Two basic approaches are used for embedded firmware design.
They are conventional procedural based firmware design and embedded
operating system based design.
1. The super loop based approach/conventional procedural based
firmware design.
Conventional procedural programming is executed task by
task.
The task listed at the top of the program code is executed
first and the tasks just below the top are executed after
completing the first task.
In a multiple task based system, each task is executed in
serial in this approach.
Non-ending repetition is achieved by using an infinite loop;
this approach is also referred to as the super loop based approach.
The tasks are running inside an infinite loop, the only way to
come out of the loop is either a hardware reset or an interrupt
assertion.
A hardware reset brings the program execution back to the
main loop.
Super loop based design doesn't require an operating system,
since there is no need for scheduling which task is to be
executed or for assigning a priority to each task.
In a super loop based design, the priorities are fixed and the
order in which the tasks to be executed are also fixed.
This design is deployed in low-cost products and in products
where the response time is not time critical.
An example of a super loop based design is an electronic video
game toy containing a keypad and a display unit; a minimal
sketch is given below.
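A minimal super loop sketch in C (the task functions init_system(), read_keypad(), run_game_logic() and update_display() are hypothetical placeholders for the product's actual tasks):

#include <stdint.h>

/* Hypothetical task functions for the example product. */
extern void init_system(void);
extern void read_keypad(void);
extern void run_game_logic(void);
extern void update_display(void);

void main(void)
{
    init_system();              /* one-time hardware and variable initialization */

    while (1)                   /* the super loop: tasks run one after another,  */
    {                           /* forever, in a fixed order with fixed priority */
        read_keypad();
        run_game_logic();
        update_display();
    }
    /* never reached: only a hardware reset or an interrupt breaks the loop */
}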
2. The embedded operating system based approach.
The general purpose OS (GPOS) based design is very similar
to conventional PC based application development, where an
operating system runs on the hardware and user applications
are created and run on top of it.
An example of an embedded product using the Microsoft
Windows XP OS is a PDA.
Real time operating system (RTOS) based design approach is
employed in embedded products demanding real time
response.
RTOS respond in a timely and predictable manner to events.
An RTOS contains a real time kernel with a scheduler responsible
for scheduling tasks, multiple threads, etc.
RTOS allows flexible scheduling of system resources like
the CPU and memory and offers some way to communicate
between tasks.
Time Management
1. Accurate time management is essential for providing a precise time reference
for all applications.
2. The time reference to kernel is provided by a high-resolution real time
clock (RTC) hardware chip.
3. The hardware timer is programmed to interrupt the processor/controller at a
fixed rate. This timer interrupt is referred as timer tick.
4. The timer tick is taken as the timing reference by the kernel;
the timer tick interval may vary depending on the hardware timer. A sketch of
a tick ISR is given below.
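A hedged sketch of a timer tick handler (the ISR name, the tick counter and the flag-clear register address are hypothetical; a real kernel hooks its own handler to the hardware timer):

#include <stdint.h>

#define TIMER_FLAG_CLEAR (*(volatile uint32_t *)0x40002008u)  /* hypothetical register */

volatile uint32_t g_tick_count = 0;     /* kernel's timing reference, in ticks */

void timer_tick_isr(void)               /* fires at the programmed fixed rate  */
{
    TIMER_FLAG_CLEAR = 1u;              /* acknowledge the hardware timer      */
    g_tick_count++;                     /* one more tick of elapsed time       */
    /* a real kernel would also update delay lists and trigger the scheduler */
}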
Non-Functional Requirements:
Non-functional requirements of RTOS are as follows:
o Custom Developed or Off the shelf
o Cost
o Development and Debugging tools Availability
o Ease of Use
o After sales
Cost: The total cost for developing or buying OS and maintaining it in terms of
commercial product and custom build needs to be evaluated before taking a
decision on the selection.
After sales: For a commercial embedded RTOS, after-sales support in the form
of email and on-call services, bug fixes, critical patch updates and support
for production issues, etc., should be analysed thoroughly.
Scheduler
Device drivers & device manager
Testing & System Debugging
Functions for IPCs using signals, event flag groups and semaphore
handling functions.
FEATURES:
Priority definitions for the tasks & ISTs.
Limited number of tasks.
Device imaging tools & device drivers.
Basic kernel functions & scheduling.
Host-target tools.
Support for clock, time & timer functions.
Support for a number of processor architectures.
TYPES OF RTOS:
In-House developed RTOS.
Broad-based commercial RTOS.
General purpose OS with RTOS.
Special focus RTOS.
2)Cycle time:
It is the time interval between one read/write operation and the next
read/write operation, i.e. the time interval from the start of one read or
write operation until the start of the next.
It is a measure of how quickly the memory can be repeatedly
accessed.
3)Block size:
It is the collection of words in memory.
When quantities of data are transferred within the system, the
units of transfer are called blocks.
The block size specifies the number of words in such a collection.
4)Bandwidth:
Memory bandwidth is a measure of the word transmission rate to/from
memory via the memory I/O bus.
When data is read from the memory, the pattern on each data
line will be a square wave.
The highest frequency of that square wave is the memory
bandwidth.
5)Latency:
It is the amount of time required to access the first sequence of
words.
It measures the time necessary to compute the address of that
sequence and to locate its first block of words in memory.
6)Block access time:
It is the time required to access the entire block from start for read.
It includes the time to find the 0th word of the block and then transfer
the remaining words.
7)Page:
It is a logical view placed on a larger collection of words in
memory.
A page generally consists of blocks; the size of a page can be given in
words or blocks.
Q. 40 What are some of the factors that should be considered when designing a
memory map for an embedded system?
Ans.: In computer science, a memory map is a structure of data (which usually resides in
memory itself) that indicates how memory is laid out. Memory maps can have a
different meaning in different parts of the operating system. It is the fastest and
most flexible cache organization which uses an associative memory. The
associative memory stores both the address and content of the memory word.
In the boot process, a memory map is passed on from the firmware in order to
instruct an operating system kernel about memory layout. It contains the
information regarding the size of total memory, any reserved regions and may also
provide other details specific to the architecture.
In virtual memory implementations and memory management units, a memory map
refers to page tables, which store the mapping between a certain process's virtual
memory layout and how that space relates to physical memory addresses.
In native debugger programs, a memory map refers to the mapping between loaded
executable/library files and memory regions. These memory maps are used to
resolve memory addresses (such as function pointers) to actual symbols.
Types of RAM
The RAM family includes two important memory devices: static RAM (SRAM)
and dynamic RAM (DRAM). The primary difference between them is the lifetime
of the data they store. SRAM retains its contents as long as electrical power is
applied to the chip. If the power is turned off or lost temporarily, its contents will
be lost forever. DRAM, on the other hand, has an extremely short data lifetime-
typically about four milliseconds. This is true even when power is applied
constantly.
In short, SRAM has all the properties of the memory you think of when you hear
the word RAM. Compared to that, DRAM seems kind of useless. By itself, it is.
However, a simple piece of hardware called a DRAM controller can be used to
make DRAM behave more like SRAM. The job of the DRAM controller is to
periodically refresh the data stored in the DRAM. By refreshing the data before it
expires, the contents of memory can be kept alive for as long as they are needed. So
DRAM is as useful as SRAM after all.
Types of ROM
Memories in the ROM family are distinguished by the methods used to write new
data to them (usually called programming), and the number of times they can be
rewritten. This classification reflects the evolution of ROM devices from hardwired
to programmable to erasable-and-programmable. A common feature of all these
devices is their ability to retain data and programs forever, even during a power
failure.
The very first ROMs were hardwired devices that contained a preprogrammed set
of data or instructions. The contents of the ROM had to be specified before chip
production, so the actual data could be used to arrange the transistors inside the
chip. Hardwired memories are still used, though they are now called "masked
ROMs" to distinguish them from other types of ROM. The primary advantage of a
masked ROM is its low production cost. Unfortunately, the cost is low only when
large quantities of the same ROM are required.
An EPROM (erasable-and-programmable ROM) is programmed in exactly the
same manner as a PROM. However, EPROMs can be erased and reprogrammed
repeatedly. To erase an EPROM, you simply expose the device to a strong source
of ultraviolet light. (A window in the top of the device allows the light to reach the
silicon.) By doing this, you essentially reset the entire chip to its initial-
unprogrammed-state. Though more expensive than PROMs, their ability to be
reprogrammed makes EPROMs an essential part of the software development and
testing process.
Hybrid types
As memory technology has matured in recent years, the line between RAM and
ROM has blurred. Now, several types of memory combine features of both. These
devices do not belong to either group and can be collectively referred to as hybrid
memory devices. Hybrid memories can be read and written as desired, like RAM,
but maintain their contents without electrical power, just like ROM. Two of the
hybrid devices, EEPROM and flash, are descendants of ROM devices. These are
typically used to store code. The third hybrid, NVRAM, is a modified version of
SRAM. NVRAM usually holds persistent data.
EEPROMs are electrically-erasable-and-programmable. Internally, they are similar
to EPROMs, but the erase operation is accomplished electrically, rather than by
exposure to ultraviolet light. Any byte within an EEPROM may be erased and
rewritten. Once written, the new data will remain in the device forever-or at least
until it is electrically erased. The primary tradeoff for this improved functionality is
higher cost, though write cycles are also significantly longer than writes to a RAM.
So you wouldn't want to use an EEPROM for your main system memory.
Flash memory combines the best features of the memory devices described thus far.
Flash memory devices are high density, low cost, nonvolatile, fast (to read, but not
to write), and electrically reprogrammable. These advantages are overwhelming
and, as a direct result, the use of flash memory has increased dramatically in
embedded systems. From a software viewpoint, flash and EEPROM technologies
are very similar. The major difference is that flash devices can only be erased one
sector at a time, not byte-by-byte. Typical sector sizes are in the range 256 bytes to
16KB. Despite this disadvantage, flash is much more popular than EEPROM and is
rapidly displacing many of the ROM devices as well.
Strengths
Many memory studies provide evidence to support the distinction between STM
and LTM (in terms of encoding, duration and capacity). The model can account
for primacy and recency effects.
The model is influential as it has generated a lot of research into memory.
The model is supported by studies of amnesiacs, for example the HM case study.
HM has marked problems in long-term memory after brain surgery. He remembered
little of personal events (the death of his mother and father) or public events
(Watergate, the Vietnam War) that occurred over the last 45 years. However,
his short-term memory remains intact.
Weaknesses
The model is oversimplified, in particular when it suggests that both short-term
and long-term memory each operate in a single, uniform fashion. We now know
this is not the case.
It has now become apparent that both short-term and long-term memory are more
complicated than previously thought. For example, the Working Memory Model
proposed by Baddeley and Hitch (1974) showed that short-term memory is more
than just one simple unitary store and comprises different components (e.g. the
central executive, the visuo-spatial sketchpad, etc.).
In the case of long-term memory, it is unlikely that different kinds of knowledge,
such as remembering how to play a computer game, the rules of subtraction and
remembering what we did yesterday, are all stored within a single long-term
memory store. Indeed, different types of long-term memory have been identified,
namely episodic (memories of events), procedural (knowledge of how to do things)
and semantic (general knowledge).
The model suggests rehearsal helps to transfer information into LTM, but this
is not essential. Why are we able to recall information which we did not rehearse
(e.g. swimming) yet unable to recall information which we have rehearsed (e.g.
reading your notes while revising)? Therefore, the role of rehearsal as a means of
transferring from STM to LTM is much less important than Atkinson and Shiffrin
(1968) claimed in their model.
However, the model's main emphasis was on structure and it tends to neglect the
process elements of memory (e.g. it only focuses on attention and rehearsal).
The multi-store model has also been criticized for being a passive/one-way/linear
model.
Another benefit of cache memory is that the CPU does not have to use the
motherboard's system bus for data transfer. Each time data passes through the
system bus, the transfer rate is limited by the motherboard; the CPU can process
data much faster by avoiding the bottleneck created by the system bus.
Memory Trace
◦A temporal sequence of memory references (addresses) from a real program.
Temporal Locality
◦If an item is referenced, it will tend to be referenced again soon
Spatial Locality
◦If an item is referenced, nearby items will tend to be referenced soon.
1. Locality of reference -
Execution generally occurs either sequentially or in small loops with a small
number of instructions.
Such behaviour means that the overall forward progress through a program
proceeds at a much lower rate than the access time of the fastest memory.
Put another way, with respect to the entire program, actual execution takes
place within a small window that moves forward through the program.
Formally, such a phenomenon is called sequential locality of reference.
Because the program is executing only a few instructions within a small
window, if those few instructions can be kept in fast memory, the program
will appear to be executing out of that fast memory, provided the area of the
program in which the application is currently executing lies within the local
window.
Two other types of locality of reference are spatial and temporal.
Spatial locality suggests that a future access of a resource (a memory address
in this case) is going to be physically near one previously accessed.
Temporal locality suggests that a future access of a resource (again, a memory
address) is going to be temporally near one recently accessed. Using locality
of reference knowledge can significantly improve memory access time
performance.
Temporal locality:-
Temporal locality is when the cache memory is referenced once, and then again
shortly afterwards. The data accessed is stored in memory, and when it is
accessed again it can be done so much quicker, as a reference point is created.
Spatial locality:-
Spatial locality is when a specific location of memory is accessed. The knock-on
effect of this is that nearby points of memory will most likely be accessed in the
near future and the size of the memory needed is predicted and this allows for faster
access, in the short term, and over a longer period of time.
Branch locality:-
Branch locality occurs when there are not many options for the path in the co-
ordinate space. The instruction most likely to result in this type of locality of
reference is one that is structured simply and has the ability for different reference
points to be situated a distance away from each other.
Equidistant locality:-
Equidistant locality is when a linear function is used to determine which location of
the cache memory will be needed in certain situations. The equidistant locality is so
called, as it is halfway between the spatial locality and the branch locality.
Locality of reference is important as it predicts behaviour in computers and can
avoid the computer having future problems with the memory.
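As a concrete illustration of spatial and temporal locality (a generic C sketch, not tied to any particular cache), traversing a two-dimensional C array row by row touches consecutive addresses and is therefore far more cache friendly than stepping through it column by column:

#include <stdint.h>

#define ROWS 256
#define COLS 256

static uint32_t table[ROWS][COLS];      /* stored row-major in C */

uint32_t sum_row_major(void)            /* good spatial locality: consecutive   */
{                                       /* addresses, so cache lines are reused */
    uint32_t sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += table[r][c];
    return sum;
}

uint32_t sum_column_major(void)         /* poor spatial locality: each access   */
{                                       /* jumps COLS elements ahead            */
    uint32_t sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += table[r][c];
    return sum;
}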
2. Including constant data files: These are the files for the constant data and
may have the extension '.const'.
3. Including string data files: These are the files for the strings and may have
the extension '.strings', '.str' or '.txt'. For example, # include
"netDrvConfig.txt".
4. Including initial data files: These are files for the initial or default data for
the shadow ROM of the embedded system. The boot-up program is copied
later into the RAM and may have the extension '.init'. On the other hand,
RAM data files have the extension '.data'.
5. Including basic variables files: These are the files for the local or global
static variables that are stored in the RAM because they do not possess
initial (default) values. Static means that there is not more than one
instance of that variable address and that it has a static memory
allocation. There is only one real time clock, and therefore only one instance
of that variable address. These basic variables are stored in files with the
extension '.bss'.
Also included are the header files for the codes in assembly, for the I/O
operations (conio.h), and for the OS functions and RTOS functions. # include
"vxWorks.h" in Example 5.1 is a directive to the compiler to include the VxWorks
RTOS functions.
GNU C/C++ compilers (called gcc) find extensive use in the C++ environment
in embedded software development. Embedded C++ is a new programming tool
with a compiler that provides a small runtime library.
An embedded system C++ compiler (other than gcc) is Diab compiler from
Diab Data. It also provides the target (embedded system processor) specific
optimisation of the codes. The runtime analysis tools check the expected run
time error and give a profile that is visually interactive.
Q. 57 Illustrate the use of Infinite loops with example in embedded system design.
Ans.: Use Of Infinite Loops In Embedded System Design:-
Infinite loops are never desired in usual programming. The program will never
end and never exit or proceed further to the codes after the loop. Infinite loop is
a feature in embedded system programming!
The hardware equivalent of an infinite loop is a ticking system clock (real time
clock) or a free running counter.
The following example gives a 'C' program design in which the program starts
executing from the main ( ) function. There are calls to functions and calls on
interrupts in between. It has to return to the start; the system's main program
is never in a halt state. Therefore, main ( ) is in an infinite loop within its
start and end.
Example
# define false 0
# define true 1
void main (void)
{
/* The Declarations here and initialization here */
..
/* Infinite while loop follows. Since the condition set for the while loop is
always true, the statements within the curly braces continue to execute */
while (true)
{
/* Codes that repeatedly execute */
..
}
}
Assume that the function main does not have a waiting loop and simply passes
the control to an RTOS. Consider a multitasking program. The OS can create a
task. The OS can insert a task into the list. It can delete from the list. Let an OS
kernel pre-emptively schedule the running of the various listed tasks. Each task
will then have the codes in an infinite loop.
How do more than one infinite loops co-exist?
The code inside waits for a signal or event or a set of events that the kernel
transfers to it to run the waiting task. The code inside the loop generates a
message that transfers to the kernel. It is detected by the OS kernel, which passes
another task message and generates another signal for that task, and pre-empts
the previously running task.
Let an event be setting of a flag, and the flag setting is to trigger the running of a
task whenever the kernel passes it to the waiting task. The instruction, ‗if (flag1)
{...};‘ is to execute the task function for a service if flag1 is true.
Case (ii): Modifier 'auto' or no modifier, if inside the function block, means
there is ROM allocation for the variable by the locator if it is
initialised in the program. There is no RAM allocation by the
locator.
Case (iii): Modifier 'unsigned' is a modifier for a short, int or long data type.
It is a directive to permit only positive values, of 16, 32 or 64
bits, respectively.
Case (iv): Modifier 'static' declaration is inside a function block. A static
declaration is a directive to the compiler that the variable should be
accessible outside the function block also and that there is to be
reserved memory space for it. It is then not saved on the stack on
context switching to another task. When several tasks are executed
in cooperation, the static declaration helps.
There is ROM allocation by the locator if it is initialised in the
program. There is RAM allocation by the locator if it is not
initialised in the program.
Case (viii): Modifier interrupt. It directs the compiler to save all processor
registers on entry to the function codes and restore them on return
from that function.
Case (ix): Modifier extern. It directs the compiler to look for the data type
declaration or the function in a module other than the one currently
in use.
Q. 59 What are main features of source code engineering tools for embedded
C/C++?
Ans.: Source Code Engineering Tools For Embedded C/C++:-
A source code engineering tool is of great help for source-code
development, compiling and cross compiling. The tools are commercially
available for embedded C/C++ code engineering, testing and debugging.
The features of a typical tool are comprehension, navigation and browsing,
editing, debugging, configuring (disabling and enabling the C++ features)
and compiling. A tool for C and C++ is SNiFF+. It is from WindRiver®
Systems. A version, SNiFF+ PRO has full SNiFF+ code as well as debug
module.
Main features of the tool are as follows:
1. It searches and lists the definitions, symbols, hierarchy of the classes,
and class inheritance trees.
2. It searches and lists the dependencies of symbols and defined symbols,
variables, functions (methods) and other symbols.
3. It monitors, enables and disables the implementation virtual functions.
4. It finds the full effect of any code change on the source code.
5. It searches and lists the dependencies and hierarchy of included header
files.
6. It navigates to and fro between the implementation and symbol
declaration.
7. It navigates to and fro between the overridden and overriding methods.
8. It browses through information regarding instantiation (object creation)
of a class.
9. It browses through the encapsulation of variables among the members
and browses through the public, private and protected visibility of the
members.
10. It browses through object component relationships.
11. It automatically removes error-prone and unused tasks.
12. It provides easy and automated search and replacement.
3. Call to a function:
Consider an example: 'if (delay_F == true && SWTDelayIEnable == true)
ISR_Delay ( );'.
There is a call on fulfilling a condition. The call can occur several times
and can be repeatedly made. On each call, the values of the arguments
given within the pair of bracket pass for use in the function statements.
2. When an operation is not atomic, that function should not operate on any
variable, which is declared outside the function or which an interrupt service
routine uses or which is a global variable but passed by reference and not
passed by value as an argument into the function.
The following is an example that clarifies it further. Assume that at a server
(software), a 32-bit variable count counts the number of clients (software)
that need service. There is no option except to declare the count as a
global variable that is shared with all clients. Each client, on a connection to
the server, sends a call to increment the count. The implementation by the
assembly code for the increment at that memory location is non-atomic when (i)
the processor is of eight bits, and (ii) the server compiler design is such that it
does not account for the possibility of an interrupt in-between the four
instructions that implement the increment of a 32-bit count on an 8-bit processor.
There will be a wrong value with the server after an instance when an interrupt
occurs midway during implementing an increment of count.
3. That function does not call any other function that is not itself reentrant. Let
RTI_Count be a global declaration. Consider an ISR, ISR_RTI. Let an
'RTI_Count ++;' instruction be present, where RTI_Count is the variable that
counts real-time clock interrupts. Here ISR_RTI is not a reentrant routine
because the second condition may not be fulfilled in the given processor
hardware. No precaution need be taken here by the programmer against shared
data problems at the address of RTI_Count because there may be no operation
that modifies RTI_Count in any routine or function other than ISR_RTI. But if
there is another operation that modifies RTI_Count, the shared-data problem
will arise; a sketch of this shared-data hazard is given below.
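A hedged sketch of the shared-data hazard described above and one common protection (the interrupt control routines DISABLE_INTERRUPTS()/ENABLE_INTERRUPTS() are hypothetical; real targets provide their own intrinsics or a critical-section API):

#include <stdint.h>

/* Hypothetical interrupt control routines: substitute the target's own. */
extern void DISABLE_INTERRUPTS(void);
extern void ENABLE_INTERRUPTS(void);

static volatile uint32_t count;     /* shared between the ISR and background code */

void client_isr(void)
{
    count++;                        /* on an 8-bit CPU this takes several          */
}                                   /* instructions, i.e. it is not atomic         */

void background_task(void)
{
    DISABLE_INTERRUPTS();           /* make the multi-instruction read atomic      */
    uint32_t snapshot = count;      /* consistent copy while interrupts are off    */
    ENABLE_INTERRUPTS();

    (void)snapshot;                 /* ... use the consistent snapshot here ...    */
}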
Q. 63 Explain program elements: Macros and Functions used in embedded system
programming
Ans.: Macros:
A macro is a collection of codes that is defined in a program by a name. It
differs from a function in the sense that once a macro is defined by a name, the
compiler puts the corresponding codes for it at every place where that macro
name appears.
Whenever the name of the macro appears, the compiler places the codes designed for it. Macros, called test macros or test vectors, are also designed and used for debugging a system.
Macros are used for short codes only. This is because, if a function call is used instead of a macro, the overheads (context saving and other actions on function call and return) will take time.
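A short sketch contrasting a macro with a function for the same short operation; the memory-mapped port name and address are illustrative only:

```c
#include <stdint.h>

/* Illustrative memory-mapped output port; the address is a placeholder. */
#define PORTB (*(volatile uint8_t *)0x0025u)

/* Macro: the compiler pastes these codes at every place the name appears,
   so there is no call/return or context-saving overhead. */
#define LED_TOGGLE()  (PORTB ^= 0x01u)

/* Function: a single copy of the codes; every use pays the call and
   return overhead but saves code memory when the body is long. */
void led_toggle(void)
{
    PORTB ^= 0x01u;
}
```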
Functions:
A function executes a named set of codes with values passed by the calling program through its arguments. It also returns a data object when it is not declared void. It has context-saving and retrieving overheads.
Main Function
Declarations of functions and data types, typedef, and either
(i) executes a named set of codes, calls a set of functions and calls the ISRs on interrupts, or
(ii) starts an OS kernel.
Interrupt Service Routine or Device Driver
Declarations of functions and data types, typedef, and
(i) executes a named set of codes; must be short so that other sources of interrupts are also serviced within their deadlines;
(ii) must be either a reentrant routine or must have a solution to the shared-data problem.
Recursive Function
A function that calls itself. It must also be a reentrant function. Most often its use is avoided in embedded systems due to memory constraints.
Reentrant Function
A reentrant function is usable by several tasks and routines synchronously (at the same time).
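A minimal sketch of a reentrant function under the conditions listed earlier: it operates only on its arguments and local (stack) variables, so several tasks or ISRs may use it at the same time:

```c
/* Reentrant: operates only on its arguments and local (stack) variables,
   reads/writes no static or global data and calls no non-reentrant code,
   so it may be interrupted and re-entered safely. */
int sum_of_squares(int a, int b)
{
    int sa = a * a;   /* locals live in the caller's own stack frame */
    int sb = b * b;
    return sa + sb;
}
```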
2. There may be, at the beginning, input data, for example received call numbers in a phone, which are saved onto a stack in RAM in order to be retrieved later in LIFO mode, as shown in Figure (b).
Consider, for example, that on each push the following are saved on a stack: (i) four pointers (addresses, each of 4 bytes); (ii) four integers (each of 4 bytes); (iii) four floating-point numbers (each of 4 bytes).
Memory allocation required for a stack structure for pushing the function
parameters = 4 × 4 + 4 × 4 + 4 × 4 = 48 B.
3. An application may also create the run-time stack structures. There can be
multiple data stacks at the different memory blocks, each having a separate
pointer address. There can be multiple stacks shown as Stack 1, …, Stack N
in Figure (c).
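As a rough check of the 48 B figure above, a small sketch is given below; it assumes a 32-bit target where pointers, int and float each occupy 4 bytes (the sizes are platform-dependent):

```c
#include <stdio.h>

/* The parameters pushed on each call: four pointers, four integers and
   four single-precision floating-point numbers. */
struct pushed_params {
    void  *p[4];   /* 4 x 4 bytes on a typical 32-bit target */
    int    i[4];   /* 4 x 4 bytes */
    float  f[4];   /* 4 x 4 bytes */
};

int main(void)
{
    /* Expected: 4*4 + 4*4 + 4*4 = 48 bytes on such a target. */
    printf("stack space per push = %zu bytes\n", sizeof(struct pushed_params));
    return 0;
}
```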
Figure: (a) A pipe from a queue (b) A queue between two sockets
(c) The queues of the packets on the network
Q. 67 Explain the use of queue in interrupt handling.
Ans.: FreeRTOS Queues:-
Queues are the primary form of intertask communications. They can be used to
send messages between tasks, and between interrupts and tasks. In most cases
they are used as thread safe FIFO (First In First Out) buffers with new data
being sent to the back of the queue, although data can also be sent to the front.
2. Using queues that pass data by copy does not prevent queues from being used
to pass data by reference. When the size of a message reaches a point where it
is not practical to copy the entire message into the queue byte for byte, define
the queue to hold pointers and copy just a pointer to the message into the
queue instead. This is exactly how the FreeRTOS+UDP implementation
passes large network buffers around the FreeRTOS IP stack.
3. The kernel takes complete responsibility for allocating the memory used as
the queue storage area.
7. A separate API is provided for use inside an interrupt. Separating the API used from an RTOS task from that used from an interrupt service routine means that the implementations of the RTOS API functions do not carry the overhead of checking their call context each time they execute. Using a separate interrupt API also means that, in most cases, creating RTOS-aware interrupt service routines is simpler for end users than it is with alternative RTOS products.
Blocking on Queues:-
Queue API functions permit a block time to be specified.
When a task attempts to read from an empty queue the task will be placed into
the Blocked state (so it is not consuming any CPU time and other tasks can run)
until either data becomes available on the queue, or the block time expires.
When a task attempts to write to a full queue the task will be placed into the
Blocked state (so it is not consuming any CPU time and other tasks can run)
until either space becomes available in the queue, or the block time expires.
If more than one task blocks on the same queue, then the task with the highest priority will be the task that is unblocked first.
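A minimal FreeRTOS sketch of the points above, assuming a queue of int32_t values shared between an interrupt and a task; the function names are illustrative, the ISR uses the separate FromISR API, and the task blocks with a finite timeout:

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

static QueueHandle_t xDataQueue;

void vSetupQueue(void)
{
    /* Queue of up to 8 items; each item is copied into the queue by value. */
    xDataQueue = xQueueCreate(8, sizeof(int32_t));
}

/* Interrupt service routine: must use the separate FromISR API. */
void vSampleISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    int32_t lSample = 0;   /* e.g. a value read from a device register */

    xQueueSendFromISR(xDataQueue, &lSample, &xHigherPriorityTaskWoken);

    /* Request a context switch if sending woke a higher-priority task. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

/* Task: blocks, consuming no CPU time, until data arrives or 100 ms pass. */
void vConsumerTask(void *pvParameters)
{
    int32_t lReceived;

    (void)pvParameters;
    for (;;) {
        if (xQueueReceive(xDataQueue, &lReceived, pdMS_TO_TICKS(100)) == pdPASS) {
            /* process lReceived here */
        }
    }
}
```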
There have to be three pointers: one for the front (*QHEAD), a second for the back (*QTAIL) and a third, the tempfront (*QACK). Two of the pointers are the same as in every queue. The third pointer defines the point up to which an acknowledgement has been received.
The acknowledgement is for a byte inserted (placed) at the queue back. The
insertion into the queue is at the back (*QTAIL). There is a predefined limiting
difference between front and back (*QTAIL). There is a predefined time-
interval up to which insertions can occur at the back (*QTAIL). There is a
predefined limiting maximum permitted difference between tempfront
(*QACK) and front (*QHEAD).
This design gives a necessary feature. There can be a variable amount of delay in transmitting a byte as well as in receiving its acknowledgement or its successor's acknowledgement. The receiver does not acknowledge every byte. There is an acknowledgement only at successive predefined time intervals. The design can be called FIPO (First In Provisionally Out).
Note that the window between the N-th sequence, pointed to by QACK, and the waiting sequence, pointed to by QTAIL, is shown sliding as a function of time after receipt of the acknowledgement from the receiver for the N-th sequence. [Refer to the left-to-right changes in the figure.]
1. front (*QHEAD) equals back (*QTAIL) as well as tempfront (*QACK) at
the beginning of the transmission.
2. When there is an acknowledgement, front (*QHEAD) resets and equals
tempfront (*QACK).
3. The transmission starts from the tempfront (*QACK) again.
4. There is a limiting maximum time interval for transmission from the tempfront (*QACK); after that time, if tempfront (*QACK) is not equal to front (*QHEAD), then front (*QHEAD) resets and equals tempfront (*QACK) again. That is, after the limit the two pointers are forced to be equal, because the receiver did not acknowledge within the stipulated time interval.
Figure: FIPO queue accounting for the acknowledgements on networks with Go-back-N (sliding window protocol) for transmission flow control.
Note the three pointer addresses at three instances: at the beginning of
transmission, on acknowledgement and after acknowledgement.
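A minimal C sketch of such a three-pointer queue descriptor, using indices in place of pointer addresses; the buffer size, types and function names are illustrative only and follow the description above rather than any particular protocol stack:

```c
#include <stdint.h>

#define QSIZE 256u   /* illustrative circular-buffer size */

/* Descriptor for the three-pointer queue described above; indices are
   used here in place of the pointer addresses *QHEAD, *QACK and *QTAIL. */
typedef struct {
    uint8_t  buf[QSIZE];
    uint16_t qhead;   /* front: the retrieval point, as in every queue       */
    uint16_t qack;    /* tempfront: point up to which an ack has been received */
    uint16_t qtail;   /* back: where the next byte is inserted               */
} fipo_queue_t;

/* Insertion into the queue is at the back (*QTAIL). */
void fipo_insert(fipo_queue_t *q, uint8_t byte)
{
    q->buf[q->qtail] = byte;
    q->qtail = (uint16_t)((q->qtail + 1u) % QSIZE);
}

/* On an acknowledgement up to position acked_upto, tempfront (*QACK)
   records it and front (*QHEAD) resets to equal tempfront (*QACK). */
void fipo_on_ack(fipo_queue_t *q, uint16_t acked_upto)
{
    q->qack  = (uint16_t)(acked_upto % QSIZE);
    q->qhead = q->qack;
}
```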
Q. 69 Explain with example multiple function calls in the main program.
Ans.: Multiple Function Calls In The Main Program:-
One of the most common methods is for the multiple function-calls to be made in a
cyclic order in an infinite loop of the main. Recall the 64 kbps network problem of
Example 4.1. Let us design the C codes given in Example 5.3 for an infinite loop
for this problem. Example 5.4 shows how the multiple function calls are defined in
the main for execution in cyclic order. Figure 5.1 shows the model adopted here; a minimal sketch of this model follows the figure caption below.
Figure 5.2: A typical finite state machine for task execution states.
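Below is a minimal sketch of the cyclic call model, with hypothetical task functions standing in for the functions of Examples 5.3 and 5.4 (which are not reproduced here):

```c
#include <stdbool.h>

/* Hypothetical task functions standing in for the functions that the
   examples call in cyclic order. */
static void task_read_port(void)  { /* read characters from the port     */ }
static void task_process(void)    { /* process/convert the received data */ }
static void task_transmit(void)   { /* transmit on the 64 kbps network   */ }

int main(void)
{
    /* Initialisations (ports, timers, ISR vectors) would be done here. */

    /* Multiple function calls made in cyclic order in an infinite loop. */
    while (true) {
        task_read_port();
        task_process();
        task_transmit();
    }
    /* never reached */
}
```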
Q. 71 What are the advantages and disadvantages of Java for embedded system
programming?
Ans.: Java has advantages for embedded programming as follows:
1. Java is completely an OOP language.
2. Java has in-built support for creating multiple threads. It obviates the need for
an operating system (OS) based scheduler for handling the tasks.
3. Java is the language for most Web applications and allows machines of different
types to communicate on the Web.
4. There is a huge class library on the network that makes program development
quick.
5. Platform independence in hosting the compiled codes on the network is because
Java generates the byte codes. These are executed on an installed JVM (Java
Virtual Machine) on a machine. [Virtual machines (VM) in embedded systems
are stored at the ROM.] Platform independence gives portability with respect to
the processor used.
6. Java does not permit pointer manipulation instructions. So it is robust in the sense that memory leaks and memory-related errors do not occur. A memory-related error occurs, for example, when attempting to write past the end of a bounded array.
Java has the following disadvantages for embedded programming:
1. The Java byte codes that are generated need a larger memory when a method has more than 3 or 4 local variables.
2. Java, being platform independent, is expected to run on a machine with RISC-like instruction execution with few addressing modes only.
Q. 74 What do you understand by memory optimization? How will you optimise the
use of memory in an embedded system?
Ans.: Memory Optimization:-
When codes are made compact and fitted in small memory areas without
affecting the code performance, it is called memory optimization.
It also reduces the total number of CPU cycles, and thus, the total energy
requirements.
Certain coding steps can be taken to reduce the need for memory and to obtain a compact code.
Following rules should be kept in mind while optimizing memory needs:-
1. Use a declaration as unsigned byte if there is a variable which always has a value between 0 and 255. When using data structures, limit the maximum size of queues, lists and stacks to 256. Byte arithmetic takes less time than integer arithmetic.
As a rule, use unsigned bytes for a short or an integer if possible, to optimise the use of the RAM and ROM available in the system. Avoid, if possible, the use of 'long' integers and 'double'-precision floating-point values (see the sketch after this list).
3. When the software designer fully knows the instruction set of the target processor, assembly codes should be used. This also allows the efficient use of memory. Device driver programs in assembly are especially efficient because of the need to use the bit set-reset instructions for the control and status registers. Only a few assembly codes are needed for using the device I/O port addresses, control and status registers. The best use is made of the available features for the given application. Assembly coding also helps in coding for atomic operations. The 'register' modifier can be used in the C program for fast access to a frequently used variable.
As a rule, use assembly codes for simple functions like configuring the device control registers, port addresses and bit manipulations if the instruction set is clearly understood. Use assembly codes for the atomic operations of increment and addition. Use the modifier 'register' for a frequently used variable.
5. As long as the shared-data problem does not arise, the use of global variables can be optimised. Global variables are then not used as arguments for passing values. A good function is one that has no arguments to be passed. The passed values are saved on the stacks in the case of interrupt service calls and other function calls. Besides obviating the need for repeated declarations, the use of global variables will thus reduce the worst-case interrupt latency and the time and stack overheads of function call and return. But this is at the cost of the codes needed for eliminating the shared-data problem. When a variable is declared static, the processor accesses it with fewer instructions than it would a variable on the stack.
As a rule, use global variables if shared-data problems are tackled, and use static variables in place of variables that would otherwise need saving frequently on the stack.
6. Combine two functions if possible. For example, LElSearch (boolean present, const LElType & item) is a combined function. The search functions for finding the pointer to a list item and the pointer to the previous list item combine into one. If present is false, the pointer of the previous list item is used to retrieve the one that has the item.
As a rule, whenever feasible, combine two functions that have more or less similar code.
7. Recall the use of a list of running timers and a list of initiated tasks. A list of all the timers, with a conditional statement that changes the count input in the case of a running count and does not change it in the case of an idle-state timer, could also have been used. However, more calls would then be needed, not once but repeatedly on each real-time clock interrupt tick, and more RAM would be needed. Therefore, creating a list of only the running counters is a more efficient way. Similarly, bringing the tasks first into an initiated-task list will reduce the frequent interactions with the OS, the context savings and retrievals on the stacks, and the time overheads. Optimise the RAM use for the stacks. This is done by reducing the number of tasks that interact with the OS. One function calling another function, and that calling a third, and so on, means nested calls. Reduce the number of nested calls and, at best, call only one more function from a function. This optimises the use of the stack.
As a rule, reduce the use of frequent function calls and nested calls and thus reduce the time and RAM memory needed for the stacks, respectively.
9. Use the delete function when a data set is no longer needed after a set of statements has executed.
As a rule, to free the RAM used by data that is no longer needed, use the delete function and the destructor functions.
10. When using C++, configure the compiler for not permitting multiple inheritance, templates, exception handling, new-style casts, virtual base classes and namespaces.
As a rule, when using C++, use classes without multiple inheritance, without templates, with runtime identification and with throwable exceptions.
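A short C sketch illustrating rules 1, 3 and 5 above (unsigned byte-sized variables, the 'register' modifier and a global counter); the names and the operation are illustrative only:

```c
#include <stdint.h>

/* Rule 5: a file-scope (global) counter avoids passing a value on the
   stack at every call; usable only if shared-data problems are tackled. */
static uint8_t error_count;

/* Rule 1: an unsigned byte suffices for values between 0 and 255, and
   byte arithmetic takes less time than integer arithmetic on 8-bit CPUs. */
uint8_t sum_bytes(const uint8_t *buf, uint8_t len)
{
    /* Rule 3: the 'register' modifier asks for fast access to a
       frequently used variable (the compiler may ignore the hint). */
    register uint8_t i;
    uint8_t sum = 0u;

    for (i = 0u; i < len; i++) {
        sum = (uint8_t)(sum + buf[i]);
    }
    return sum;
}

void record_error(void)
{
    error_count++;   /* no arguments passed, so no stack overhead */
}
```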
UNIT-5
Q. 75 What are the open standards, frameworks and alliances presents in the
market?
Ans.: An open standard is a standard that is publicly available and has various rights to
use associated with it, and may also have various properties of how it was
designed (e.g. open process). There is no single definition and interpretations
vary with usage.
The terms open and standard have a wide range of meanings associated with their
usage. There are a number of definitions of open standards which emphasize
different aspects of openness, including the openness of the resulting
specification, the openness of the drafting process, and the ownership of rights in
the standard.
The term "standard" is sometimes restricted to technologies approved by
formalized committees that are open to participation by all interested parties and
operate on a consensus basis.
3. Android:-
Android is a Linux-based operating system for mobile devices such as
smartphones and tablet computers.
Android is an open source software platform for mobile, embedded and
wearable devices. The first Android phone was the HTC G1.
Android was specially developed for applications. Android allows writing managed code in the Java language and includes a Java API for developing applications.
There are more than 2.6 million apps in the Android market. Android is not a device or a product.
Android has its own virtual machine i.e. DVM (Dalvik Virtual Machine),
which is used for executing the android applications.
Android provides a rich application framework that permits you to construct innovative applications and games for mobile devices in a Java language environment.
4. Openmoko:-
Openmoko Linux is an operating system for smartphones developed by the
Openmoko project. It is based on the Ångström distribution, comprising
various pieces of free software.
The main targets of Openmoko Linux were the Openmoko Neo 1973 and the
Neo FreeRunner. Furthermore, there were efforts to port the system to other
mobile phones.
Openmoko Linux was developed from 2007 to 2009 by Openmoko Inc. The
development was discontinued because of financial problems. Afterwards the
development of software for the Openmoko phones was taken over by the
community and continued in various projects, including SHR, QtMoko and
Hackable1.
C. Reconfigurable Processors
It is a processor with reconfigurable hardware features.
Depending on the requirement, reconfigurable processors can change their
functionality to adapt to the new requirement. Example: A reconfigurable
processor chip can be configured as the heart of a camera or that of a
media player.
These processors contain an Array of Programming Elements (PE) along
with a microprocessor. The PE can be used as a computational engine like
ALU or a memory element.
1. PROCESSOR TRENDS :
Following are some of the points of difference between the first generation of processors/controllers and today's processors/controllers.
Number of ICs per chip: Early processors had a small number of ICs/gates per chip. Today's processors, with Very Large Scale Integration (VLSI) technology, can pack together tens of thousands of ICs/gates per processor.
Need for individual components: Early processors needed different components like brown-out circuits, timers and DACs/ADCs to be separately interfaced if they were required in the circuit. Today's processors have all these components on the same chip as the processor.
Speed of execution: Early processors were slow in terms of the number of instructions executed per second. Today's processors, with advanced architectures, support features like instruction pipelining that improve the execution speed.
B. Embedded Software
It is the software that runs on the host computer and is responsible for
interfacing with the embedded system
It is the user application that executes on top of the embedded system on a
host computer.
B. .NET CF
It stands for .NET Compact Framework.
.NET CF is a compact version of the original .NET Framework, intended for use on embedded systems.
The CF version is customized to contain all the necessary components for
application development.
The Original version of .NET Framework is very large and hence not a
good choice for embedded development.
The .NET Framework is a collection of precompiled libraries.
Common Language Runtime (CLR) is the runtime environment of .NET. It
provides functions like memory management, exception handling, etc.
Applications written in .NET are compiled to a platform neutral language
called Common Intermediate Language (CIL).
For execution, the CIL is converted to target specific machine instructions
by CLR.
8. The table given below explains the meaning and use of each bit:
H (Half Carry Flag): Set when a carry is generated out of bit 3 (bit index starts from 0) in an arithmetic operation; useful in BCD arithmetic.
T (Bit Copy Storage): Acts as the source and destination for the bit copy storage instructions, bit load (BLD) and bit store (BST), respectively. BLD loads the specified bit in the register with the value of T; BST loads T with the value of the specified bit in the specified register.
4. Multiply Instructions:
Multiply (MUL), Multiply Accumulate (MLA), Multiply Long (MULL) and Multiply Long Accumulate (MLAL) instructions are supported by the ARM instruction set.
6. Branch Instructions:
Used for changing the program execution flow.
Branching can be either conditional or unconditional.