Unit v - Bocs2020 Final

This document provides an overview of assembly language programming, detailing its features, advantages, and disadvantages, as well as the structure of assembly programs and the necessary development tools. It explains key programming techniques such as looping, counting, and indexing, alongside various instruction types including data transfer, arithmetic, logical, branching, and machine control instructions. The document emphasizes the importance of assembly language as a bridge between high-level programming and machine code, highlighting its role in microcontroller programming.


UNIT IV

APPLYING THE PRINCIPLES OF ASSEMBLY PROGRAMMING


ASSEMBLY LANGUAGE PROGRAMS

Introduction

An assembly language is a type of low-level programming language that is intended to communicate directly with a computer’s hardware.

Low-level programming languages such as assembly language are a necessary bridge between the underlying hardware of a computer and higher-level programming languages, such as Python or JavaScript, which are used to write modern software programs.

Unlike machine language, which consists of binary characters, assembly languages are designed
to be readable by humans.

To convert assembly language into machine code, it must be translated using an assembler.

The assembler converts each statement into the specific machine code needed for the hardware on
which it is being run.

There is a one-to-one relationship between an assembly language instruction and its machine code
equivalent.

Each CPU has its own version of machine code and assembly language.

Features of Assembly Language

The features of the assembly language are outlined below:

1. It uses mnemonic code instead of a numeric code.

For example, it allows the programmer to use mnemonics when writing source code
programs, like ‘ADD’ (addition), ‘SUB’ (subtraction), JMP (jump) etc.

2. It also reports any errors in the code.

3. This language aids in the specification of the symbolic operand, removing the need to
provide the operand's machine address.

For example, variables are represented by symbolic names, not as memory locations; in MOV A, ‘A’ is the variable.

4. Data and memory addresses can be represented by symbols.

Things to know if you intend to program a microcontroller using Assembly language:


By now you should have a good idea about assembly language.

Before programming target hardware with it, you need to be aware of the following:

 Complete instruction set provided for the hardware.

 Register organization of the hardware.

 Development environment/tool chain (assembler, directives, linkers, etc.)

 Addressing modes and peripheral features of target hardware

How Assembly Languages Work

Fundamentally, the most basic instructions executed by a computer are binary codes, consisting of
ones and zeros.

Those codes are directly translated into the “on” and “off” states of the electricity moving through
the computer’s physical circuits.

In essence, these codes form the basis of “machine language”, the most fundamental variety of
programming language.

Of course, no human would be able to construct modern software programs by explicitly programming ones and zeros.

Instead, human programmers rely on various layers of abstraction that allow them to articulate their commands in a format that is more intuitive to humans.

Specifically, modern programmers issue commands in so-called “high-level languages”, which utilize intuitive syntax such as whole English words and sentences, as well as logical operators such as “And”, “Or”, and “Else” that are familiar from everyday usage.

Ultimately, however, these high-level commands need to be translated into machine language.

Rather than doing so manually, programmers rely on assemblers and compilers, whose purpose is to automatically translate between these high-level and low-level languages.

Advantages of assembly language

Below are the advantages:

1. Machine language instructions are replaced by mnemonics, which are easier to remember.
2. Memory Efficient.

3. It is not required to keep track of memory locations.

4. Faster execution speed.

5. Easy to make insertions and deletions.

6. Hardware Oriented.

7. Requires fewer instructions to accomplish the same result.

Disadvantages of assembly language

Below mentioned are the disadvantages:

1. Long programs written in such languages cannot be executed on small-sized computers.

2. It takes a lot of time to code or write the program, as it is more complex in nature.

3. Difficult to remember the syntax.

4. Lack of portability of program between computers of different makes.

Assembly Program Section

Assembly language is dependent upon the instruction set and the architecture of the processor.

An assembly program can be divided into three sections;

a) The data Section

The data section is used for declaring initialized data or constants.

This data does not change at runtime.

You can declare various constant values, file names, or buffer size, etc., in this section.

The syntax for declaring data section is

section .data

b) The bss Section

The bss section is used for declaring variables.


The syntax for declaring bss section is

section .bss

c) The text section

The text section is used for keeping the actual code.

This section must begin with the declaration global _start, which tells the kernel where the
program execution begins.

The syntax for declaring text section is;

section .text

global _start

_start:

Assembly Language Statements

Assembly language programs consist of three types of statements;

a) The executable instructions or simply instructions tell the processor what to do.

Each instruction consists of an operation code (opcode).

Each executable instruction generates one machine language instruction.

b) The assembler directives or pseudo-ops tell the assembler about the various aspects of the
assembly process.

These are non-executable and do not generate machine language instructions.

c) Macros are basically a text substitution mechanism.

Syntax of Assembly Language Statements

A basic instruction has two parts: the first is the name of the instruction (the mnemonic) to be executed, and the second is the operands or parameters of the command.

Assembly language statements are entered one statement per line.

Each statement follows the following format

[Label] mnemonic [operands] [; comment]


The fields in the square brackets are optional.

Assembly language comment begins with a semicolon (;).

The following are some examples of typical assembly language statements

 INC COUNT ; Increment the memory variable COUNT

 MOV TOTAL, 48 ; Transfer the value 48 in the memory variable TOTAL

 ADD AH, BH ; Add the content of the BH register into the AH register

 AND MASK1, 128 ; Perform AND operation on the variable MASK1 and 128

 ADD MARKS, 10 ; Add 10 to the variable MARKS

 MOV AL, 10 ; Transfer the value 10 to the AL register
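As an illustration, the statement format above can be parsed with a short Python sketch (a hypothetical helper, not part of any real assembler; it assumes labels end with a colon, a common assembler convention):

```python
def parse_statement(line):
    """Split one assembly statement into (label, mnemonic, operands, comment),
    following the format:  [Label] mnemonic [operands] [; comment]"""
    # Strip off the comment first (everything after ';').
    code, _, comment = line.partition(';')
    comment = comment.strip() or None
    tokens = code.split()
    label = None
    if tokens and tokens[0].endswith(':'):
        label = tokens.pop(0).rstrip(':')
    mnemonic = tokens.pop(0) if tokens else None
    operands = [op.strip() for op in ' '.join(tokens).split(',')] if tokens else []
    return label, mnemonic, operands, comment

print(parse_statement("START: MOV TOTAL, 48 ; Transfer 48 into TOTAL"))
# -> ('START', 'MOV', ['TOTAL', '48'], 'Transfer 48 into TOTAL')
```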

Assembly Language Development Tools:

To develop an assembly language program we need certain program development tools.

The various development tools required for 8086 programming are explained below.

1) Editor:

An Editor is a program which allows us to create a file containing the assembly language
statements for the program.

Examples of editors are PC-Write and WordStar.

As we type the program, the editor stores the ASCII codes for the letters and numbers in successive RAM locations.

If we make a typing mistake, the editor will alert us to correct it.

If we leave out a program statement an editor will let you move everything down and insert
a line.

After typing the whole program, we save it to a hard disk.

This file is called the source file.

The next step is to process the source file with an assembler.


While using TASM or MASM, we should give the file a name with the extension .ASM, e.g., SAMPLE.ASM.

2) Assembler

An Assembler is used to translate the assembly language mnemonics into machine language (i.e., binary codes).

When you run the assembler it reads the source file of your program from where you have
saved it.

The assembler generates two files.

The first file is the Object file with the extension .OBJ.

The object file consists of the binary codes for the instructions and information about the
addresses of the instructions.

After further processing, the contents of the file will be loaded into memory and run.

The second file is the assembler list file with the extension .LST.

3) Linker

A linker is a program used to connect several object files into one large object file.

While writing large programs it is better to divide the large program into smaller modules.

Each module can be individually written, tested and debugged.

Then all the object modules are linked together to form one, functioning program.

These object modules can also be kept in library file and linked into other programs as
needed.

A linker produces a link file which contains the binary codes for all the combined modules.

The linker also produces a link map file which contains the address information about the
linked files.

The linkers which come with TASM or MASM assemblers produce link files with the
.EXE extension.
4) Locator

A locator is a program used to assign the specific addresses of where the segments of object
code are to be loaded into memory.

A locator program called EXE2BIN comes with the IBM PC Disk Operating System
(DOS).

EXE2BIN converts a .EXE file to a .BIN file which has physical addresses.

5) Debugger

A debugger is a program which allows you to load your object code program into system memory, execute the program, and troubleshoot or debug it.

The debugger allows you to look into the contents of registers and memory locations after the program runs.

We can also change the contents of registers and memory locations and rerun the program.

Some debuggers allow you to stop the program after each instruction so that you can check or alter memory and register contents.

This is called single-step debugging.

A debugger also allows you to set a breakpoint at any point in the program.

If we insert a break point, the debugger will run the program up to the instruction where
the breakpoint is put and then stop the execution.

6) Emulator

An emulator is a mixture of hardware and software.

It is usually used to test and debug the hardware and software of an external system such
as the prototype of a microprocessor based instrument.

PROGRAMMING TECHNIQUES SUCH AS LOOPING, COUNTING AND INDEXING


ADDRESSING MODES

Looping:

The programming technique used to instruct the microprocessor to repeat tasks is called looping.

This task is accomplished by using jump instructions.


Classification of Loops:

1. Continuous Loop:

A set of instructions in a program that are repeated until interrupted.

A program with a continuous loop does not stop repeating the tasks until the system is
reset.

This loop is set up by using the unconditional jump instruction.

Unconditional Jump Instructions: Transfers the program sequence to the described memory address.

2. Conditional Loop:

A conditional loop repeats a task until certain data conditions are met.

A conditional loop is set up by conditional jump instructions.

These instructions check flags (Z, CY, P, S) and repeat the tasks if the conditions are satisfied.

These loops include counting and indexing.

A conditional loop keeps repeating until a specific condition is met.

Conditional Jump Instructions: Transfers the program sequence to the described memory address only if the condition is satisfied.

For example

The program might keep asking a user to enter their password until they enter the right one.

The loop will keep going round repeating the code until they actually enter the right
password.
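That password loop can be sketched in Python; the scripted input list and password value are invented for the example:

```python
# A conditional loop in a high-level language: keep going round until the
# condition (correct password) is met.
CORRECT = "secret"   # made-up password for illustration

def prompt_until_correct(attempts):
    """Simulate the password loop over a scripted list of inputs."""
    tries = 0
    for guess in attempts:      # stand-in for repeated user input
        tries += 1
        if guess == CORRECT:    # loop condition satisfied -> exit
            return tries
    return None                 # condition never met

print(prompt_until_correct(["1234", "letmein", "secret"]))  # -> 3
```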

Importance of loops in programming

 Programming Application:

When programmers write code, loops allow them to shorten what could be hundreds of
lines of code to just a few.
This allows them to write the code once and repeat it as many times as needed, making it more likely for the program to run as expected.

Looping Flow Chart

The processor executes initialization section and result section only once, while it may execute
processing section and loop control section many times.

Thus, the execution time of the loop will be mainly dependent on the execution time of the
processing section and loop control section.

The flowchart shows typical program loop.

The processing section in this flowchart is always executed at least once.


1. The initialization section establishes the starting values of loop counters for counting how
many times loop is executed, address registers for indexing which give pointers to memory
locations and other variables.

2. The actual data manipulation occurs in the processing section.

This is the section which does the work.

3. The loop control section updates counters, indices (pointers) for the next iteration.

4. The result section analyzes and stores the results.
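As a sketch, the four sections of the loop flowchart can be mapped onto a short Python routine (the function and data are made up for illustration):

```python
def sum_list(data):
    # 1. Initialization section: establish starting values of the loop
    #    counter, the index (pointer), and the accumulator variable.
    total = 0
    index = 0
    count = len(data)

    while count > 0:
        # 2. Processing section: the actual data manipulation.
        total += data[index]

        # 3. Loop control section: update the index and counter
        #    for the next iteration.
        index += 1
        count -= 1

    # 4. Result section: analyze/store the result.
    return total

print(sum_list([10, 20, 30]))  # -> 60
```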

Counting:

This programming technique uses INR or DCR instructions.

 INR is a mnemonic that stands for 'INcRement', and 'R' stands for any of the following registers or the memory location M pointed to by the HL pair.

R = A, B, C, D, E, H, L, or M.

This instruction is used to add 1 to the contents of R, so the previous value in R is increased by 1.

 DCR is a mnemonic that stands for 'DeCRement', and 'R' stands for any of the registers or the memory location M pointed to by the HL pair.

This instruction is used to decrease the content of register R; that is, it subtracts 1 from the register's content.

A loop is established to update the count, and each count is checked to determine whether it has reached the final number; if not, the loop is repeated.

A counter is a typical application of the conditional loop.

A microprocessor needs a counter and a flag to accomplish the looping task.

The counter is set up by loading an appropriate count in a register.

Counting is performed by either incrementing or decrementing the counter.

The end of counting is indicated by a flag.

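To illustrate, here is a toy Python simulation of an 8085-style down-counter using DCR and the zero flag; this models the register behavior and is not real 8085 code:

```python
def count_down(start):
    """Decrement register C until the Z flag is set, recording each value."""
    c = start                 # counter register, loaded with the count
    trace = []
    while True:
        c = (c - 1) & 0xFF    # DCR C: subtract 1, stay within 8 bits
        z_flag = (c == 0)     # zero flag indicates the end of counting
        trace.append(c)
        if z_flag:            # a JNZ instruction would loop back while Z is clear
            break
    return trace

print(count_down(3))  # -> [2, 1, 0]
```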

Indexing:

It means counting or referencing objects with sequential numbers.

Data bytes are stored in memory locations and those data bytes are referred to by their memory
locations.

Example:

Steps to add ten bytes of data stored in memory locations starting at a given location and display
the sum

The microprocessor needs

1. A counter to count 10 data bytes.

2. An index or a memory pointer to locate where data bytes are stored.

3. To transfer data from a memory location to the microprocessor(ALU)

4. To perform addition

5. Registers for temporary storage of partial answers

6. A flag to indicate the completion of the task

7. To store or output the result.
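The steps above can be sketched in Python as a simulation; the start address 0x2050 and the data bytes are made up for the example, and memory is modeled as a simple dict:

```python
# Hypothetical memory image: address -> data byte.
MEMORY = {0x2050 + i: v for i, v in enumerate([2, 4, 6, 8, 10, 1, 3, 5, 7, 9])}

def sum_ten_bytes(start_addr):
    counter = 10             # a counter to count 10 data bytes
    pointer = start_addr     # an index/memory pointer (like the HL pair)
    total = 0                # register for temporary partial answers
    while counter > 0:
        total += MEMORY[pointer]   # transfer a byte and perform addition
        pointer += 1               # advance the pointer to the next byte
        counter -= 1               # count down; reaching zero ends the loop
    return total                   # store/output the result

print(sum_ten_bytes(0x2050))  # -> 55
```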

INSTRUCTION TYPES

All these instructions are associated with a variety of addressing modes.

1. Data Transfer Instructions

Data transfer instructions transfer data between memory and processor registers, between processor registers and I/O devices, and from one processor register to another.

There are eight commonly used data transfer instructions.

Each instruction is represented by a mnemonic symbol.

The table shows the eight data transfer instructions and their respective mnemonic symbols.

Data Transfer Instructions


Name Mnemonic Symbols
Load LD
Store ST
Move MOV
Exchange XCH
Input IN
Output OUT
Push PUSH
Pop POP

These instructions can be described as follows;

 Load - The load instruction is used to transfer data from the memory to a processor
register, which is usually an accumulator.

 Store - The store instruction transfers data from processor registers to memory.

 Move - The move instruction transfers data from processor register to memory, from memory to processor register, or between processor registers themselves.
 Exchange - The exchange instruction swaps information either between two
registers or between a register and a memory word.

 Input - The input instruction transfers data between the processor register and the
input terminal.

 Output - The output instruction transfers data between the processor register and
the output terminal.

 Push and Pop - The push and pop instructions transfer data between a processor
register and memory stack.
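As a rough illustration, several of these transfers can be modeled in Python; the helper functions below are toy stand-ins named after the mnemonics, not real instruction implementations:

```python
# Toy model: registers as a dict, the memory stack as a list.
registers = {"A": 0, "B": 0}
stack = []

def LD(reg, value):            # Load: memory/immediate -> register
    registers[reg] = value

def MOV(dst, src):             # Move: register -> register
    registers[dst] = registers[src]

def XCH(r1, r2):               # Exchange: swap two registers
    registers[r1], registers[r2] = registers[r2], registers[r1]

def PUSH(reg):                 # Push: register -> memory stack
    stack.append(registers[reg])

def POP(reg):                  # Pop: memory stack -> register
    registers[reg] = stack.pop()

LD("A", 7); MOV("B", "A"); LD("A", 9); XCH("A", "B")
PUSH("A"); LD("A", 0); POP("A")
print(registers["A"], registers["B"])  # -> 7 9
```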

2. Arithmetic Instructions:

This group includes the instructions that perform addition, subtraction, increment, or decrement operations.

The flag conditions are altered after execution of an instruction in this group.

3. Logical Instructions:

The instructions that perform logical operations like AND, OR, EXCLUSIVE-OR, complement, compare, and rotate are grouped under this heading.

The flag conditions are altered after execution of an instruction in this group.

These instructions perform various logical operations with the contents of the accumulator.
4. Branching Instructions:

The instructions that are used to transfer the program control from one memory location to
another memory location are grouped under this heading.


5. Machine Control Instructions

These instructions control machine functions such as Halt, Interrupt, or do nothing.

ARITHMETIC AND LOGIC OPERATIONS

Inside a computer, there is an Arithmetic Logic Unit (ALU), which is capable of performing logical
operations (e.g. AND, OR, Ex-OR, Invert etc.) in addition to the arithmetic operations (e.g.
Addition, Subtraction etc.).
The control unit supplies the data required by the ALU from memory, or from input devices, and
directs the ALU to perform a specific operation based on the instruction fetched from the memory.

ALU is the “calculator” portion of the computer.

An arithmetic logic unit (ALU) is a major component of the central processing unit of the computer
system.

It does all processes related to arithmetic and logic operations that need to be done on instruction
words.

Besides performing calculations related to addition and subtraction, ALUs handle the multiplication of two integers, as they are designed to execute integer calculations; hence, the result is also an integer.

However, division operations are commonly not performed by the ALU, as division may produce a result that is a floating-point number.

Instead, the floating-point unit (FPU) usually handles division; other non-integer calculations can also be performed by the FPU.

In some microprocessor architectures, the ALU is divided into the arithmetic unit (AU) and the
logic unit (LU).

An ALU can be designed by engineers to calculate many different operations.

When the operations become more and more complex, then the ALU will also become more and
more expensive and also takes up more space in the CPU and dissipates more heat.

That is why engineers make the ALU powerful enough to ensure that the CPU is also powerful
and fast, but not so complex as to become prohibitive in terms of cost and other disadvantages.
ALU is also known as an Integer Unit (IU).

Depending on how the ALU is designed, it can make the CPU more powerful, but it also consumes
more energy and creates more heat.

Therefore, there must be a balance between how powerful and complex the ALU is and how
expensive the whole unit becomes.

This is why faster CPUs are more expensive, consume more power and dissipate more heat.

Different operation as carried out by ALU can be categorized as follows;

 Logical operations - These include operations like AND, OR, NOT, XOR, NOR, NAND,
etc.

 Bit-Shifting Operations - These shift the positions of bits by a certain number of places toward the right or left, which amounts to a multiplication or division by a power of two.

 Arithmetic operations - This refers to bit addition and subtraction.

Although multiplication and division instructions are sometimes provided, these operations are more expensive to implement.

Multiplication and division can also be done by repetitive addition and subtraction, respectively.
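These three categories can be demonstrated in a few lines of Python (illustrative only; the `multiply` helper is a made-up name showing multiplication by repeated addition):

```python
a, b = 0b1100, 0b1010

# Logical operations
print(bin(a & b), bin(a | b), bin(a ^ b))   # AND, OR, XOR

# Bit-shifting: a left shift by n multiplies by 2**n, a right shift divides.
print(a << 1, a >> 2)   # -> 24 3

# Arithmetic by repetitive addition: multiplication as repeated adds.
def multiply(x, y):
    total = 0
    for _ in range(y):
        total += x      # add x, y times over
    return total

print(multiply(6, 7))   # -> 42
```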

Arithmetic Logic Unit (ALU) Signals

The ALU contains a variety of input and output electrical connections, which carry the digital signals between the ALU and the external electronics.

The ALU's inputs receive signals from external circuits, and in response, the ALU sends output signals to the external electronics.

 Data: The ALU contains three parallel buses: two for input operands and one for the output operand.

Each of these buses handles the same number of signals.

 Opcode: The operation selection code tells the ALU what type of operation it is going to perform, an arithmetic or a logic operation.
 Status

 Output: The status outputs are multiple signals that provide supplemental data about the result of an ALU operation.

General ALUs usually provide status signals such as overflow, zero, carry-out, and negative.

When the ALU completes an operation, the status output signals are stored in external registers, making them available for future ALU operations.

 Input: The status inputs allow the ALU to access further information needed to complete an operation successfully.

For example, the stored carry-out from a previous ALU operation is fed back in as a single "carry-in" bit.
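As a sketch, an 8-bit addition producing the status signals described above might be modeled like this in Python (the function and flag names are assumptions for illustration):

```python
def alu_add(a, b, carry_in=0):
    """8-bit addition that returns the result plus status signals."""
    raw = a + b + carry_in
    result = raw & 0xFF                   # keep only the low 8 bits
    flags = {
        "carry": raw > 0xFF,              # carry-out of bit 7
        "zero": result == 0,              # result is all zeros
        "negative": bool(result & 0x80),  # sign bit (bit 7) is set
    }
    return result, flags

print(alu_add(0xF0, 0x10))  # -> (0, {'carry': True, 'zero': True, 'negative': False})
print(alu_add(0x7F, 0x01))  # the carry-out here could feed a later carry-in
```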

Configurations of the ALU

The description of how ALU interacts with the processor is given below.

Every arithmetic logic unit includes the following configurations:

 Instruction Set Architecture

 Accumulator

The accumulator holds the intermediate result of every operation, which keeps the Instruction Set Architecture (ISA) simple because instructions only need to specify one operand.

Accumulator machines are generally fast and less complex, but additional code needs to be written to fill the accumulator with the proper values.

Unfortunately, with a single accumulator, it is very difficult to exploit parallelism.

An example of an accumulator machine is the desktop calculator.

 Stack

The most recent operations are stored on the stack, a small register area that holds values in top-down order.

When new values are added, they are pushed on top of the old ones.

 Register to Register

It includes a place for 1 destination instruction and 2 source instructions, also known as a
3-register operation machine.

This Instruction Set Architecture must be longer in order to store three operands: 1 destination and 2 sources.

After the operations end, writing the results back to the registers can be difficult, and the word length needs to be longer.

However, it can cause synchronization issues if a write-back rule is followed here.

The MIPS component is an example of the register-to-register Architecture.

For input, it uses two operands, and for output, it uses a third distinct component.

The storage space is hard to maintain, as each operand needs distinct memory; therefore, it comes at a premium at all times.

Moreover, some operations might be difficult to perform.

 Register Stack

Generally, the combination of Register and Accumulator operations is known as Register-Stack Architecture.
The operations that need to be performed in the register-stack Architecture are pushed onto
the top of the stack.

And its results are held at the top of the stack.

With the help of the reverse Polish method, more complex mathematical operations can be broken down.

Some programmers, to represent operands, use the concept of a binary tree.

It means that the reverse polish methodology can be easy for these programmers, whereas
it can be difficult for other programmers.

To carry out push and pop operations, new hardware needs to be created.

 Register Memory

In this architecture, one operand comes from the register, and the other comes from the
external memory as it is one of the most complicated architectures.

The reason behind it is that every program might be very long as they require to be held in
full memory space.

Generally, this technology is integrated with Register-Register technology and practically cannot be used separately.

Advantages of ALU

ALU has various advantages, which are as follows:

 It supports parallel architecture and applications with high performance.

 It has the ability to get the desired output simultaneously and combine integer and floating-
point variables.

 It has the capability of performing instructions on a very large set and has a high range of
accuracy.

 Two arithmetic operations in the same code like addition and multiplication or addition
and subtraction, or any two operands can be combined by the ALU.

For example, A + B * C.

 Through the whole program, they remain uniform and are spaced so that no part is interrupted in between.
 In general, it is very fast; hence, it provides results quickly.

 There are no sensitivity issues and no memory wastage with ALU.

 They are less expensive and minimize the logic gate requirements.

Disadvantages of ALU

The disadvantages of ALU are discussed below:

 With the ALU, floating variables have more delays, and the designed controller is not easy
to understand.

 Bugs would occur in the result if memory space were finite.

 It is difficult for amateurs to understand, as the circuit is complex; also, the concept of pipelining is complex to understand.

 A proven disadvantage of ALU is that there are irregularities in latencies.

 Another demerit is rounding off, which impacts accuracy.

DYNAMIC DEBUGGING

Introduction

Every programmer, at some point, experiences bugs or errors in their code while developing an operating system, application, or any other program.

In such cases, developers use debugging techniques and tools to find the bugs in the code and make the program error-free.

Debugging makes it possible to identify the bug and find where it has occurred in the entire program.

In software technology, this is an important process to find bugs in any new program or any
application process.

Definition

Debugging is the process of identifying the bug, finding the source of the bug and correcting the
problem to make the program error-free.

It is a multistep process in software development and refers to the identification of errors in the program logic, machine codes, and execution.
In software development, the developer can locate the code error in the program and remove it
using this process.

Hence, it plays a vital role in the entire software development lifecycle.

Types of Possible Errors

To understand different types of debugging, we need to understand the different types of errors.

Errors are usually of three types:

1. Build and compile-time errors

These errors happen at the development stage when the code is being built.

These errors are thrown by the compiler or the interpreter while building the source code.

In other words, build or compile-time errors prevent the application from even starting.

These errors often result from syntax errors, like missing semicolons at the end of a
statement or class not found.

These errors are easy to spot and rectify because most IDE or compilers find them for you.

The compiler or the interpreter will tell you the exact piece of code that is causing the
problem.

2. Runtime errors

These errors occur and can be identified only while running the application.

They occur only when the source code doesn’t have any compiler or syntax error, and the
compiler or the interpreter cannot identify the runtime error during the build stage.

Mostly, runtime errors depend on the user input or the environment.

3. Logic errors

These errors occur after the program is successfully compiled and running and it gives you
an output.

A logic error is when the result of the program is incorrect.

Logic errors are also called semantic errors, and they occur due to some incorrect logic
used by the developer to solve a problem while building the application.
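A minimal illustration in Python (the function names and values are invented for the example): the code runs without complaint, but the result is wrong.

```python
# A logic error in action: no compile-time or runtime error, wrong output.
def average(values):
    return sum(values) // len(values)   # bug: integer division truncates

def average_fixed(values):
    return sum(values) / len(values)    # corrected logic

print(average([1, 2]))        # prints 1 -- incorrect
print(average_fixed([1, 2]))  # prints 1.5 -- correct
```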
Types of Debugging

Not all errors can be treated or debugged in the same way.

The developer has to set up the right strategy to fix different errors.

There are two types of debugging techniques: reactive debugging and preemptive debugging.

Most debugging is reactive - a defect is reported in the application or an error occurs, and the
developer tries to find the root cause of the error to fix it.

Solving build errors is easier, and generally, the compiler or build system clearly tells you the
errors.

Solving a runtime error can be done using a debugger, which can provide additional information
about the error and the stack trace of the error.

It will tell you exactly the line where the fault is happening or exception is raised.

Solving logic errors is trickier.

We don’t get clues as we do in other errors from the compiler or the stack trace.

We need to use other strategies to identify the code causing the error.

1. Reactive Debugging

Reactive debugging refers to any debugging protocol that is employed after the bug
manifests itself.

Reactive debugging is deployed to reduce runtime and logic errors.

Examples of reactive debugging are print debugging and using a debugger.
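A sketch of print debugging in Python (the function and values are made up for illustration): temporary output statements expose intermediate values after the bug has manifested.

```python
def buggy_total(prices, discount):
    total = 0
    for p in prices:
        total += p - discount
        # Temporary trace line: watch the running total per iteration.
        print(f"debug: p={p}, running total={total}")
    return total

# The trace reveals that the discount is applied per item, not once overall.
buggy_total([10, 20], 2)
```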

2. Preemptive Debugging

Preemptive debugging involves writing code that doesn’t impact the functionality of the
program but helps developers, either catch bugs sooner or debug the source code easily
when the bug occurs.

Debugging Process

The process of finding bugs or errors and fixing them in any application or software is called
debugging.
To make the software programs or products bug-free, this process should be done before releasing
them into the market.

The steps involved in this process are,

a) Identifying the error - It saves time and avoids the errors at the user site.

Identifying errors at an earlier stage helps to minimize the number of errors and wastage
of time.

b) Identifying the error location - The exact location of the error should be found to fix the
bug faster and execute the code.

c) Analyzing the error - To understand the type of bug or error and reduce the number of
errors we need to analyze the error.

Solving one bug may lead to another bug that stops the application process.

d) Prove the analysis - Once the error has been analyzed, we need to prove the analysis.

It uses a test automation process to write the test cases through the test framework.

e) Cover the lateral damage - Fixing one bug can affect nearby code, so the appropriate changes are made and the surrounding parts of the program are checked before moving on to fix other errors.

f) Fix and Validate - In this final stage, the fix is applied, the software is checked for any new errors introduced by the changes, and the application is executed again.

Debugging Software

This software plays a vital role in the software development process.

Software developers use it to find the bugs, analyze the bugs and enhance the quality and
performance of the software.

The process of resolving the bugs using manual debugging is very tough and time-consuming.

We need to understand the program, its working, and the causes of errors by creating breakpoints.

As soon as the code is written, the code is combined with other stages of programming to form a
new software product.

Several strategies like unit tests, code reviews, and pair programming are used to debug the large
program (contains thousands of lines of code).
The standard debugger tool or the debug mode of an Integrated Development Environment (IDE) helps determine the code’s logging and error messages.

The steps involved in debugging software are,

a) The bug is identified in a system and defect report is created.

This report helps the developer to analyze the error and find the solutions.

b) The debugging tool is used to know the cause of the bug and analyze it by step-by-step
execution process.

c) After identifying the bug, we need to make the appropriate changes to fix the issues.

d) The software is retested to ensure that no error is left and that no new errors were introduced into the software during the debugging process.

e) A sequence-based method used in this process makes it easier and more convenient for the developer to find and fix bugs by following the code sequences.

Debugging Techniques

To perform the debugging process easily and efficiently, it is necessary to follow some techniques.

The most commonly used debugging strategies are;

1. Debugging by brute force is the most commonly used technique.

This is done by taking memory dumps of the program, which contain a large amount of information including intermediate values, and analyzing them; however, sifting through this information to find the bugs wastes time and effort.

2. Induction strategy includes locating relevant data, organizing the data, devising a hypothesis (which provides possible causes of the error), and proving the hypothesis.

3. Deduction strategy includes identifying the possible causes of bugs and eliminating candidate causes by analyzing the information one by one.

4. The backtracking strategy is used to locate errors in small programs.

When an error occurs, the program is traced one step backward during the evaluation of
values to find the cause of bug or error.

Debugging by testing is used in conjunction with the debugging-by-induction and debugging-by-deduction techniques.
The test cases used in debugging are different from the test cases used in the testing process.

Debugging Tools

A software tool or program used to test and debug the other programs is called a debugger or a
debugging tool.

It helps to identify the errors of the code at the various stages of the software development process.

These tools analyze the test run and find the lines of codes that are not executed.

Simulators, another kind of debugging tool, allow the user to observe the display and behavior of the operating system or any other computing device.

Many open-source tools and scripting languages don’t run inside an IDE, so they require a more manual debugging process.

Mostly used Debugging Tools are GDB, DDD, and Eclipse.

 GDB Tool: This type of tool is used in UNIX programming.

GDB comes pre-installed on most Linux systems; if not, it can be installed, typically alongside the GCC compiler package.

 DDD Tool: DDD means Data Display Debugger, which is used to run a Graphic User
Interface (GUI) in UNIX systems.

 Eclipse: An IDE integrates an editor, a build tool, a debugger and other development tools. Eclipse is the most popular IDE tool.

It works more efficiently when compared to the DDD, GDB and other tools.

The list of debugging tools is listed below.

 AppPuncher Debugger is used for debugging Rich Internet Applications


 AQtime debugger
 CA/EZ TEST is a CICS interactive test/debug software package
 CharmDebug is a Debugger for Charm++
 CodeView debugger
 DBG is a PHP Debugger and Profiler
 dbx debugger
 Distributed Debugging Tool (Allinea DDT)
 DDTLite — Allinea DDTLite for Visual Studio 2008
 DEBUG is the built-in debugger of DOS and Microsoft Windows
 Debugger for MySQL
 Opera Dragonfly
 The dynamic debugging technique (DDT)
 Embedded System Debug Plug-in is used for Eclipse
 FusionDebug
 OpenGL, OpenGL ES, and OpenCL debugger and profiler for Windows, Linux, Mac OS X, and iPhone
 GNU Debugger (GDB), GNU Binutils
 Intel Debugger (IDB)
 In-circuit debugger for Embedded Systems
 Interactive Disassembler (IDA Pro)
 Java Platform Debugger Architecture based Java debuggers
 LLDB
 MacsBug
 IBM Rational Purify
 TRACE32 is circuit debugger for Embedded Systems
 VB Watch Debugger — debugger for Visual Basic 6.0
 Microsoft Visual Studio Debugger
 WinDbg
 Xdebug — PHP debugger and profiler

STACK – SUBROUTINE - CONDITIONAL CALL AND RETURN INSTRUCTIONS

Subroutine

Most programs are organized into multiple blocks of instructions called subroutines rather than a
single large sequence of instructions.

Subroutines are located apart from the main program segment and are invoked by a subroutine
call.

This call is a type of branch instruction that temporarily jumps the microprocessor’s PC to the
subroutine, allowing it to be executed.

When the subroutine has completed, control is returned to the program segment that called it via a
return from subroutine instruction.

Subroutines provide several benefits to a program, including modularity and ease of reuse.

A modular subroutine is one that can be relocated in different parts of the same program while still
performing the same basic function.

An example of a modular subroutine is one that sorts a list of numbers in ascending order.
This sorting subroutine can be called by multiple sections of a program and will perform the same
operation on multiple lists.

Reuse is related to modularity and takes the concept a step further by enabling the subroutine to
be transplanted from one program to another without modification.

This concept greatly speeds the software development process.

Almost all microprocessors provide inherent support for subroutines in their architectures and
instruction sets.

Recall that the program counter keeps track of the next instruction to be executed and that branch
instructions provide a mechanism for loading a new value into the PC.

Most branch instructions simply cause a new value to be loaded into the PC when their specific
branch condition is satisfied.

Some branch instructions, however, not only reload the PC but also instruct the microprocessor to
save the current value of the PC off to the side for later recall.

This stored PC value, or subroutine return address, is what enables the subroutine to eventually
return control to the program that called it.

Subroutine call instructions are sometimes called branch-to-subroutine or jump-to subroutine, and
they may be unconditional.

When a branch-to-subroutine is executed, the PC is saved into a data structure called a stack.

Stack

The stack is a region of data memory that is set aside by the programmer specifically for the purpose of storing the microprocessor’s state information when it branches to a subroutine.

A stack is a last-in, first-out memory structure.

When data is stored on the stack, it is pushed on.

When data is removed from the stack, it is popped off.

Popping the stack recalls the most recently pushed data.

The first datum to be pushed onto the stack will be the last to be popped.

A stack pointer (SP) holds a memory address that identifies the top of the stack at any given time.
The SP decrements as entries are pushed on and increments as they are popped off, thereby growing the stack downward in memory as data is pushed on, as shown in the figure below.
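The push/pop mechanics described above can be sketched as a toy model (Python used for illustration; the memory size and stored values are hypothetical, not those of any particular processor):

```python
MEM_SIZE = 16
memory = [0] * MEM_SIZE
SP = MEM_SIZE            # stack pointer starts just past the top of memory

def push(value):
    global SP
    SP -= 1              # SP decrements as entries are pushed on...
    memory[SP] = value   # ...so the stack grows downward in memory

def pop():
    global SP
    value = memory[SP]   # popping recalls the most recently pushed datum
    SP += 1              # SP increments as entries are popped off
    return value

push(0x1234)             # e.g. save a return address
push(0x00FF)             # then another value on top of it
print(pop() == 0x00FF)   # last in, first out: prints True
print(pop() == 0x1234)   # prints True
```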

By pushing the PC onto the stack during a branch-to-subroutine, the microprocessor now has a
means to return to the calling routine at any time by restoring the PC to its previous value by simply
popping the stack.

This operation is performed by a return-from-subroutine instruction.

Many microprocessors push not only the PC onto the stack when calling a subroutine, but the
accumulator and ALU status flags as well.

While this increases the complexity of a subroutine call and return somewhat, it is useful to
preserve the state of the calling routine so that it may resume control smoothly when the subroutine
ends.

The stack can store multiple entries, enabling multiple subroutines to be active at the same time.

If one subroutine calls another, the microprocessor must keep track of both subroutines’ return
addresses in the order in which the subroutines have been called.

This subroutine nesting process of one calling another subroutine, which calls another subroutine,
naturally conforms to the last-in, first-out operation of a stack.

To implement a stack, a microprocessor contains a stack pointer register that is loaded by the
programmer to establish the initial starting point, or top, of the stack.

The figure below shows the hypothetical microprocessor in more complete form with a stack
pointer register.
Like the PC, the SP is a counter that is automatically modified by certain instructions.

Not only do subroutine branch and return instructions use the stack, there are also general-purpose
push/pop instructions provided to enable the programmer to use the stack manually.

The stack can make certain calculations easier by pushing the partial results of individual
calculations and then popping them as they are combined into a final result.

The programmer must carefully manage the location and size of the stack.

A microprocessor will freely execute subroutine call, subroutine return, push, and pop instructions
whenever they are encountered in the software.

If an empty stack is popped, the microprocessor will oblige by reading back whatever data value
is present in memory at the time and then incrementing the SP.

If a full stack is pushed, the microprocessor will write the specified data to the location pointed to
by the SP and then decrement it.

Depending on the exact circumstances, either of these operations can corrupt other parts of the
program or data that happens to be in the memory location that gets overwritten.

It is the programmer’s responsibility to leave enough free memory for the desired stack depth and
then to not nest too many subroutines simultaneously.

The programmer must also ensure that there is symmetry between push/pop and subroutine
call/return operations.

Subroutine Call and Return

A subroutine is a self-contained sequence of instructions that performs a given computational task.


The instruction that transfers program control to a subroutine is known by different names.

The most common names used are call subroutine, jump to subroutine, branch to subroutine, or
branch and save address.

The instruction is executed by performing two operations:

1) The address of the next instruction available in the program counter (the return address) is
stored in a temporary location so the subroutine knows where to return

2) Control is transferred to the beginning of the subroutine.

Different computers use a different temporary location for storing the return address.

Some store the return address in the first memory location of the subroutine, some store it in a
fixed location in memory, some store it in a processor register, and some store it in a memory
stack.

The most efficient way is to store the return address in a memory stack.

The advantage of using a stack for the return address is that when a succession of subroutines is
called, the sequential return addresses can be pushed into the stack.

The return from subroutine instruction causes the stack to pop and the contents of the top of the
stack are transferred to the program counter.

A subroutine call is implemented with the following micro operations:

SP ←SP - 1 Decrement stack pointer

M [SP] ←PC Push content of PC onto the stack

PC ← effective address Transfer control to the subroutine

If another subroutine is called by the current subroutine, the new return address is pushed into the
stack and so on.

The instruction that returns from the last subroutine is implemented by the Micro operations:

PC←M [SP] Pop stack and transfer to PC

SP ←SP + 1 Increment stack pointer


Program Interrupt

Program interrupt refers to the transfer of program control from a currently running program to
another service program as a result of an external or internal generated request.

Control returns to the original program after the service program is executed.

The interrupt procedure is, in principle, quite similar to a subroutine call except for three variations:

1) The interrupt is usually initiated by an internal or external signal rather than from the
execution of an instruction (except for software interrupt);

2) The address of the interrupt service program is determined by the hardware rather than
from the address field of an instruction.

3) An interrupt procedure usually stores all the information necessary to define the state of the CPU, rather than storing only the program counter.

The state of the CPU at the end of the execute cycle (when the interrupt is recognized) is
determined from:

1) The content of the program counter

2) The content of all processor registers

3) The content of certain status conditions

Program status word: the collection of all status bit conditions in the CPU is sometimes called a
program status word or PSW.

The PSW is stored in a separate hardware register and contains the status information that
characterizes the state of the CPU.

Types of Interrupts

There are three major types of interrupts that cause a break in the normal execution of a Program.

They can be classified as:

a) External interrupts come from input-output (I/O) devices, from a timing device, from a
circuit monitoring the power supply, or from any other external source.

b) Internal interrupts arise from illegal or erroneous use of an instruction or data.

Internal interrupts are also called traps.


Examples of interrupts caused by internal error conditions are register overflow, attempt to
divide by zero, an invalid operation code, stack overflow, and protection violation.

c) A software interrupt is initiated by executing an instruction.

Software interrupt is a special call instruction that behaves like an interrupt rather than a
subroutine call.

It can be used by the programmer to initiate an interrupt procedure at any desired point in
the program.
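The interrupt behavior above can be sketched as a toy model (Python for illustration; the addresses, PSW value, and register layout are hypothetical): on entry the hardware saves the PC and program status word, and on return it restores them so the interrupted program resumes unchanged.

```python
stack = []
cpu = {"PC": 100, "PSW": 0b0010}   # hypothetical CPU state

ISR_ADDRESS = 500                   # fixed by the hardware, not by an
                                    # address field in an instruction

def interrupt():
    # Save enough state for the original program to resume exactly
    # where it left off
    stack.append(cpu["PSW"])
    stack.append(cpu["PC"])
    cpu["PC"] = ISR_ADDRESS

def return_from_interrupt():
    cpu["PC"] = stack.pop()
    cpu["PSW"] = stack.pop()

interrupt()
assert cpu["PC"] == 500             # now executing the service routine
return_from_interrupt()
assert cpu["PC"] == 100 and cpu["PSW"] == 0b0010
```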
