Structures, System Software, Performance

UNIT 3

Syllabus: Basic Structure Of Computers: Functional unit, Basic Operational concepts, Bus
structures, System Software, Performance
Computer: A computer is a fast electronic calculating machine that accepts digitized input information,
processes it according to a list of internally stored instructions, and produces the resulting output
information. The list of instructions is called a computer program, and the internal storage is called the
computer memory.
Types of Languages: Just as humans use language to communicate, and different regions have different
languages, computers also have their own languages that are specific to them. Different kinds of languages
have been developed to perform different types of work on the computer. Basically, languages can be divided
into two categories according to how the computer understands them.

➢ Low-Level Languages: A language that corresponds directly to a specific machine. Low-level computer
languages are either machine codes or are very close to them. A computer cannot understand instructions
given to it in high-level languages or in English. It can only understand and execute instructions given in
the form of machine language, i.e. binary. There are two types of low-level languages:

• Machine Language: a language that is directly executed by the hardware. Machine language is
the lowest and most elementary level of programming language and was the first type of
programming language to be developed. Machine language is the only language that a
computer can understand directly, and it is usually written in hexadecimal. It is represented inside the
computer by a string of binary digits (bits), 0 and 1. The symbol 0 stands for the absence of an electric
pulse and 1 stands for the presence of an electric pulse. Since a computer is capable of recognizing
electric signals, it understands machine language.
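As an illustration of the point above, the same instruction word can be viewed either as a string of binary digits or as hexadecimal shorthand. A minimal sketch, using a made-up 8-bit opcode value:

```python
# A machine instruction is stored as a string of bits; hexadecimal is just a
# compact way of writing the same bit pattern. The opcode value is hypothetical.
add_opcode = 0b00000100            # the pattern of pulses as stored in memory
print(format(add_opcode, '08b'))   # binary digits: 00000100
print(format(add_opcode, '02x'))   # hex shorthand: 04
```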
Advantages:
➢ Machine language makes fast and efficient use of the computer.
➢ It requires no translator to translate the code. It is directly understood by the computer.
Disadvantages:
➢ All operation codes have to be remembered
➢ All memory addresses have to be remembered.
➢ It is hard to amend or find errors in a program written in the machine language.
• Assembly Language: A slightly more user-friendly language that corresponds directly to machine
language. Assembly language was developed to overcome some of the many inconveniences of
machine language. This is another low-level but very important language, in which operation codes
and operands are given in the form of alphanumeric symbols instead of 0’s and 1’s.
These alphanumeric symbols are known as mnemonic codes; they are short combinations of up
to about five letters, e.g. ADD for addition, SUB for subtraction, START, LABEL, etc.
Because of this feature, assembly language is also known as a ‘Symbolic Programming Language.'
Advantages:
➢ Assembly language is easier to understand and use as compared to machine language.
➢ It is easy to locate and correct errors.
➢ It is easily modified.
Disadvantages:
➢ Like machine language, it is also machine dependent/specific.

➢ Since it is machine dependent, the programmer also needs to understand the hardware.

➢ High-Level Languages: Any language that is independent of the machine. High-level computer
languages use formats that are similar to English. The purpose of developing high-level languages was to
enable people to write programs easily, in their own native language environment (English).
High-level languages are basically symbolic languages that use English words and/or mathematical
symbols rather than mnemonic codes. Each instruction in the high-level language is translated into many
machine language instructions that the computer can understand.
Advantages:
o High-level languages are user-friendly
o They are easier to learn.
o They are easier to maintain
o A program written in a high-level language can be translated into many different machine
languages, so the same program can be run on many different computers

Disadvantages:
o A high-level language has to be translated into machine language by a translator, which takes
additional time

Computer Types: Based on the capacity, technology used, and performance of a computer, computers are
classified in two ways:
 According to computational ability
 According to generation
According to computational ability (Based on Size, cost and performance):
There are mainly 4 types of computers. These include:
a) Micro computers
b) Mainframe computers
c) Mini computers
d) Super computer
a) Micro computers: -
Micro computers are the most common type of computers in existence today, whether at work, in
school, or on the desk at home. These computers include:
1. Desktop computer
2. Personal digital assistants (more commonly known as PDA's)
3. Palmtop computers
4. Laptop and notebook computers
Micro computers were the smallest, least powerful and least expensive of the computers of the time.
The first micro computers could only perform one task at a time, while bigger computers ran multi-tasking
operating systems and served multiple users. Referred to as a personal computer or "desktop computer",
micro computers are generally meant to serve one user (person) at a time. By the late 1990s, all personal
computers ran a multi-tasking operating system, but they were still intended for a single user.
b) Mainframe Computers :-
The term Mainframe computer was created to distinguish the traditional, large, institutional computer
intended to service multiple users from the smaller, single user machines. These computers are capable of
handling and processing very large amounts of data easily and quickly. A mainframe's speed is so fast that it
is sometimes described in millions of tasks per millisecond (MTM). While other computers became smaller,
Mainframe computers stayed large to maintain the ever growing memory capacity and speed.
Mainframe computers are used in large institutions such as government, banks and large corporations.
These institutions were early adopters of computer use, long before personal computers were available to
individuals. "Mainframe" often refers to computers compatible with the computer architectures established in
the 1960's. Thus, the origin of the architecture also affects the classification, not just processing power.
c) Mini Computers / Workstation :-
Mini computers, or Workstations, were computers that are one step above the micro or personal
computers and a step below mainframe computers. They are intended to serve one user, but contain special
hardware enhancements not found on a personal computer. They run operating systems that are normally
associated with mainframe computers, usually one of the variants of the UNIX operating system.

d) Super Computer:-
A Super computer is a specialized variation of the mainframe. Where a mainframe is intended to
perform many tasks, a Super computer tends to focus on performing a single program of intense numerical
calculations. Weather forecasting systems, Automobile design systems, extreme graphic generator for
example, are usually based on super computers.

According to Generations of Computers:


The history of computer development is often referred to in reference to the different generations of
computing devices. Each generation of computer is characterized by a major technological development that
fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful
and more efficient and reliable devices.
a) First Generation (1940-1956): Vacuum Tubes:
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and
were often enormous, taking up entire rooms. They were very expensive to operate and in addition to
using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a time. Input was
based on punched cards and paper tape, and output was displayed on printouts.
Example: The UNIVAC and ENIAC computers are examples of first-generation computing devices.
The UNIVAC was the first commercially produced computer in the United States; its first unit was
delivered to the U.S. Census Bureau in 1951.
b) Second Generation (1956-1963): Transistors:-
Transistors replaced vacuum tubes and ushered in the second generation of computers. The
transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The
transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more
energy-efficient and more reliable than their first-generation predecessors. Though the transistor still
generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the
vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level programming
languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These
were also the first computers that stored their instructions in their memory, which moved from a magnetic
drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
c) Third Generation (1964-1971): Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased
the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to run many
different applications at one time with a central program that monitored the memory. Computers for the first
time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
d) Fourth Generation (1971-Present): Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the
palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer—
from the central processing unit and memory to input/output controls—on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life
as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks,
which eventually led to the development of the Internet. Fourth generation computers also saw the
development of GUIs, the mouse and handheld devices.
e) Fifth Generation (Present and Beyond): Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though
there are some applications, such as voice recognition, that are being used today. The use of parallel
processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and
molecular and nanotechnology will radically change the face of computers in years to come. The goal of
fifth-generation computing is to develop devices that respond to natural language input and are capable of
learning and self-organization.
Functional Unit (Or) Structure of a Computer System:
Every digital computer system consists of five distinct functional units. These units are as follows:
1. Input unit
2. Memory unit
3. Arithmetic logic unit
4. Output unit
5. Control Unit

These units are interconnected by electrical buses to permit communication between them. A
computer must receive both data and program statements to function properly and be able to solve problems.
Feeding data and programs into a computer is accomplished by an input device. Computer
input devices read data from a source, such as magnetic disks, and translate that data into electronic
impulses for transfer into the CPU. Examples of input devices are a keyboard, a mouse, or a scanner.
Central Processing Unit: The brain of a computer system is the central processing unit (CPU). The
CPU processes data transferred to it from one of the various input devices. It then transfers either an
intermediate or final result from the CPU to one or more output devices. A central control section and
work areas are required to perform calculations or manipulate data. The CPU is the computing center
of the system. It consists of a control section, an arithmetic-logic section, and an internal storage section
(main memory). Each section within the CPU serves a specific function and has a particular relationship
with the other sections within the CPU.
Input Unit: An input device is usually a keyboard or mouse, the input device is the conduit through
which data and instructions enter a computer.
1. The most common input device is the keyboard, which accepts letters, numbers, and commands
from the user.
2. Another important type of input device is the mouse, which lets you select options from on-
screen menus. You use a mouse by moving it across a flat surface and pressing its buttons. A variety
of other input devices work with personal computers, too:
3. The trackball and touchpad are variations of the mouse and enable you to draw or point on the
screen.
4. The joystick is a swiveling lever mounted on a stationary base that is well suited for playing
video games.
Memory unit: Memory is used to store programs and data. There are two classes of storage, called
primary and secondary.
Primary storage: It is a fast memory that operates at electronic speeds. Programs must stay in memory
while they are being executed. The memory contains a large number of semiconductor storage cells, each
capable of storing one bit of information. To provide easy access to any word in the memory, a distinct
address is associated with each word location. Addresses are numbers that identify successive locations. A
given word is accessed by specifying its address and issuing a control command.
The number of bits in each word is referred as the word length of the computer. Typical word
lengths range from 16 to 64 bits.
Programs must reside in the memory during execution. Instructions and data can be written into the
memory or read out under the control of the processor.
1. Memory in which any location can be reached in a short and fixed amount of time after
specifying its address is called random access memory (RAM).
2. The time required to access one word is called the memory access time.
3. The small, fast RAM units are called caches. They are tightly coupled with the processor and
are often contained on the same integrated circuit chip to achieve high performance.
4. The largest and slowest units are referred to as the main memory.

Secondary storage: Secondary storage is used when large amounts of data and many programs have
to be stored, particularly for information that is accessed infrequently.
Examples for secondary storage devices are Magnetic Disks, Tape and Optical disks.

Arithmetic-Logic Unit: The arithmetic-logic section performs arithmetic operations, such as
addition, subtraction, multiplication, and division.
Arithmetic-Logic Unit usually called the ALU is a digital circuit that performs two types of
operations— arithmetic and logical.
Arithmetic operations are the fundamental mathematical operations consisting of addition,
subtraction, multiplication and division.
Logical operations consist of comparisons. That is, two pieces of data are compared to see whether
one is equal to, less than, or greater than the other. The ALU is a fundamental building block of the central
processing unit of a computer.
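As a rough sketch of the two kinds of ALU operations described above, the following toy function performs arithmetic on two operands and a three-way comparison. The operation names are invented for illustration, not taken from any real instruction set:

```python
# Toy ALU: arithmetic on two operands, plus a logical comparison that
# reports whether a is less than, equal to, or greater than b.
def alu(op, a, b):
    if op == 'ADD':
        return a + b
    if op == 'SUB':
        return a - b
    if op == 'MUL':
        return a * b
    if op == 'DIV':
        return a // b                # integer division
    if op == 'CMP':
        return (a > b) - (a < b)     # -1 if a < b, 0 if equal, +1 if a > b
    raise ValueError(f'unknown operation {op}')

print(alu('ADD', 6, 2))   # 8
print(alu('CMP', 3, 7))   # -1, meaning 3 is less than 7
```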
Output Unit: An output device is any piece of computer hardware equipment used to communicate the
results of data processing carried out by an information processing system (such as a computer) to the outside
world.
In computing, input/output, or I/O, refers to the communication between an information processing
system (such as a computer), and the outside world. Inputs are the signals or data sent to the system, and
outputs are the signals or data sent by the system to the outside.
Examples of output devices:
• Speaker
• Headphones
• Screen
• Printer
Control Unit: All activities inside the machine are directed and controlled by the control unit. The control
unit is the part of the computer's central processing unit (CPU) which directs the operation of the
processor. A control unit works by receiving input information, which it converts into control signals
that are then sent to the central processor.
The Basic Operational Concepts of a Computer:-
1. A program, consisting of a list of instructions, is stored in the memory.
2. Individual instructions are brought from the memory into the processor, which
executes the specified operations.
3. Data to be used as operands are also stored in the memory.

Add R1, R2, R3
In this instruction, Add is the operation performed on operands R1 and R2, and the result is placed in R3.
The top level view of the computer is as follows:
1. Instruction register (IR):
1. The instruction register holds the instruction that is currently being executed.
2. Its output is available to the control circuits, which generate the timing signals that control the various
processing elements involved in executing the instruction.
2. Program counter (PC):
1. The program counter is another specialized register.
2. It keeps track of the execution of a program.
3. It contains the memory address of the next instruction to be fetched and executed.
4. During the execution of an instruction, the contents of the PC are updated to correspond to the
address of the next instruction to be executed

3. Memory address register (MAR) & Memory data register(MDR):-


1. These two registers facilitate communication with the memory.
2. The MAR holds the address of the location to be accessed.
3. The MDR contains the data to be written into or read out of the addressed location.

4. Operating steps for Program execution (or) Instruction Cycle :


1. Execution of the program (stored in memory) starts when the PC is set to point to the first
instruction of the program.
2. The contents of the PC are transferred to the MAR and a Read control signal is sent to the
memory.
3. The addressed word is read out of the memory and loaded into the MDR. Next, the contents of
the MDR are transferred to the IR. At this point, the instruction is ready to be decoded and
executed.
4. If the instruction involves an operation to be performed by the ALU, it is necessary to obtain
the required operands.
5. If an operand resides in memory (it could also be in a general purpose register in the processor), it has
to be fetched by sending its address to the MAR and initiating a Read cycle.
6. When the operand has been read from the memory into the MDR, it is transferred from the MDR to
ALU.
7. After one or more operands are fetched in this way, the ALU can perform the desired operation.
8. If the result of the operation is to be stored in the memory, then the result is sent to the MDR.
9. The address of the location where the result is to be stored is sent to the MAR, and a write cycle is
initiated.
10. At some point during the execution of the current instruction, the contents of the PC are incremented
so that the PC points to the next instruction to be executed.
11. Thus, as soon as the execution of the current instruction is completed, a new instruction fetch may be
started.
12. In addition to transferring data between the memory and the processor, the computer accepts data
from input devices and sends data to output devices. Thus, some machine instructions with the
ability to handle I/O transfers are provided.
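The operating steps above can be sketched as a toy simulation of the fetch-decode-execute cycle. The instruction format, the addresses, and the two-instruction "instruction set" below are all made up for illustration; a real machine encodes instructions as bit patterns, not tuples:

```python
# Toy fetch-decode-execute loop using PC, MAR, MDR, and IR as in the steps above.
memory = {0: ('LOAD', 'R1', 100),          # program: load two operands,
          1: ('LOAD', 'R2', 101),          # add them, then halt
          2: ('ADD', 'R3', 'R1', 'R2'),
          3: ('HALT',),
          100: 25, 101: 17}                # data words
regs = {'PC': 0, 'IR': None, 'MAR': None, 'MDR': None, 'R1': 0, 'R2': 0, 'R3': 0}

while True:
    regs['MAR'] = regs['PC']               # step 2: PC -> MAR, issue Read
    regs['MDR'] = memory[regs['MAR']]      # step 3: memory word -> MDR
    regs['IR'] = regs['MDR']               #         MDR -> IR, ready to decode
    regs['PC'] += 1                        # step 10: PC points to next instruction
    op = regs['IR'][0]
    if op == 'HALT':
        break
    if op == 'LOAD':                       # steps 5-6: fetch operand from memory
        _, dst, addr = regs['IR']
        regs['MAR'], regs['MDR'] = addr, memory[addr]
        regs[dst] = regs['MDR']
    elif op == 'ADD':                      # step 7: ALU performs the operation
        _, dst, a, b = regs['IR']
        regs[dst] = regs[a] + regs[b]

print(regs['R3'])   # 42, the sum of the two data words 25 and 17
```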

Bus Structures:-
1. BUS: A group of lines (wires) that serves as a connecting path for several devices of a computer is
called a bus.
The following are different types of busses:
1. Address Bus 2. Data Bus 3. Control Bus
The Data bus carries (transfers) data from one component (the source) to another component
(the destination) connected to it. The data bus consists of 8, 16, 32 or more parallel signal lines. The data
bus lines are bi-directional. This means that the CPU can read data on these lines from memory or from a
port, as well as send data out on these lines to a memory location.
The Address bus is the set of lines that carry (transfer) address information about where in memory
the data is to be transferred to or from. It is a unidirectional bus. The address bus consists of 16, 20, 24 or
more parallel signal lines. On these lines the CPU sends out the address of the memory location.
The Control Bus carries the control and timing information. Besides these three, the following
are other common types of buses:
System Bus: A system bus is usually a combination of the address bus, data bus, and control bus.
Internal Bus: The bus that operates only within the internal circuitry of the CPU.
External Bus: A bus that connects the computer to external devices is called an external bus.
Back Plane: A back plane bus includes a row of connectors into which system modules can be plugged.
I/O Bus: The bus used by I/O devices to communicate with the CPU is usually referred to as the I/O bus.
Synchronous Bus: With a synchronous bus, data transmission between the source and destination units
takes place in a given timeslot which is already known to both units.
Asynchronous Bus: In this case the data transmission is governed by handshaking control signals.
The Bus interconnection Scheme:-
Single bus structure :-
1. A group of lines(wires) that serves as a connecting path for several devices of a computer is called
a bus.
2. In addition to the lines that carry the data, the bus must have lines for address and control
purposes.
3. The simplest way to interconnect functional units is to use a single bus, as shown below.

[Figure: a single bus interconnecting the input, output, memory, and processor units]

4. All units are connected to this bus. Because the bus can be used for only one transfer at a time, only
two units can actively use the bus at any given time.
5. Bus control lines are used to arbitrate multiple requests for use of the bus.

ADVANTAGE:
Low cost and flexibility for attaching peripheral devices.

DISADVANTAGE:
Low performance, because only one transfer can take place at a time.
Traditional / Multiple bus Structure: There is a local bus that connects the processor to cache memory and
that may support one or more local devices. There is also a cache memory controller that connects this cache
not only to this local bus but also to the system bus.
On the system, the bus is attached to the main memory modules. In this way, I/O transfers to and
from the main memory across the system bus do not interfere with the processor’s activity. An expansion bus
interface buffers data transfers between the system bus and the I/O controllers on the expansion bus.
Some typical I/O devices that might be attached to the expansion bus include: Network cards (LAN), SCSI
(Small Computer System Interface), Modem, Serial Com etc..

Advantages: better performance


Disadvantage: increased cost.
Performance
Performance: The most important measure of the performance of a computer is how quickly it can
execute programs. The speed with which a computer executes programs is affected by the design of its
hardware and its machine language instructions. To represent the performance of a processor, we should
consider only the periods during which the processor is active.

At the start of execution, all program instructions and the required data are stored in the memory as
shown below. As execution proceeds, instructions are fetched one by one over the bus into the
processor, and a copy is placed in the cache. When the execution of instruction calls for data located in
the main memory, the data are fetched and a copy is placed in the cache. Later, if the same instruction or
data item is needed a second time, it is read directly from the cache.
Computer performance is often described in terms of clock speed (usually in MHz or GHz). This
refers to the cycles per second of the main clock of the CPU. Performance of a computer depends on the
following factors.
a) Processor clock:-
1. Processor circuits are controlled by a timing signal called a clock. A clock is a microchip that
regulates speed and timing of all computer functions.
2. Clock Cycle is the speed of a computer processor, or CPU, which is the amount of time between two
pulses of an oscillator. Generally speaking, the higher number of pulses per second, the faster the
computer processor will be able to process information
3. CPU clock speed, or clock rate, is measured in Hertz — generally in gigahertz, or GHz. A
CPU's clock speed rate is a measure of how many clock cycles a CPU can perform per second
4. To execute a machine instruction, the processor divides the action to be performed into a sequence of
basic steps, such that each step can be completed in one clock cycle.
5. The length P of one clock cycle is an important parameter that affects processor performance.
6. Its inverse is the clock rate, R = 1/P, which is measured in cycles per second.
7. If the clock rate is R = 500 MHz (500 million cycles per second), then the corresponding clock period
is P = 1/R = 2 nanoseconds.
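The reciprocal relationship between clock rate and clock period from step 6 can be checked numerically:

```python
# Clock rate and clock period are reciprocals: R = 1/P, so P = 1/R.
R = 500e6            # clock rate: 500 MHz, in cycles per second
P = 1 / R            # clock period in seconds
print(P * 1e9)       # ~2.0: a 500 MHz clock has a 2 ns period
```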
b) Basic performance equation:- The Performance Equation is a term used in computer science. It refers
to the calculation of the performance or speed of a central processing unit (CPU).
Basically the Basic Performance Equation [BPE] is an equation with 3 parameters which are
required for the calculation of "Basic Performance" of a given system. It is given by

T = (N*S)/R
Where 'T' is the processor time [Program Execution Time] required to execute a given program
written in some high level language .The compiler generates a machine language object program
corresponding to the source program.
'N' is the total number of steps required to complete program execution. 'N' is the actual number
of instruction executions, not necessarily equal to the total number of machine language instructions in the
object program. Some instructions are executed more than others (loops) and some are not executed at all
(conditions).
'S' is the average number of basic steps each instruction execution requires, where each basic
step is completed in one clock cycle. We say average as each instruction contains a variable number of steps
depending on the instruction.
'R' is the clock rate [In cycles per second]
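Plugging assumed values into the basic performance equation gives a quick sense of the magnitudes involved; the values of N and S below are invented for illustration:

```python
# Evaluating T = (N * S) / R for an assumed program and processor.
N = 10_000_000    # instruction executions (assumed)
S = 4             # average basic steps per instruction (assumed)
R = 500e6         # clock rate: 500 MHz
T = (N * S) / R
print(T)          # 0.08 -> the program needs 0.08 seconds of processor time
```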

c) Pipelining and Super scalar operation:-


1. A substantial improvement in performance can be achieved by overlapping the execution of
successive instructions, using a technique called pipelining.
2. Consider the instruction
Add R1, R2, R3
which adds the contents of registers R1 and R2, and places the sum into R3.
3. The contents of R1 and R2 are first transferred to the inputs of the ALU.
4. After the add operation is performed, the sum is transferred to R3.
5. The processor can read the next instruction from the memory while the addition operation is being
performed.
6. Then, if that instruction also uses the ALU, its operands can be transferred to the ALU inputs at the
same time that the result of the Add instruction is being transferred to R3.
7. Thus, pipelining increases the rate of executing instructions significantly.
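The effect of this overlap on throughput can be approximated with a small cycle-count model, assuming an ideal pipeline with no stalls (a simplification of the behavior just described):

```python
# Cycle counts for n instructions of k basic steps each.
def cycles_sequential(n, k):
    return n * k             # no overlap: each instruction takes k cycles

def cycles_pipelined(n, k):
    return k + (n - 1)       # ideal overlap: one instruction completes per cycle
                             # after the first one fills the pipeline

n, k = 100, 4
print(cycles_sequential(n, k))   # 400 cycles without pipelining
print(cycles_pipelined(n, k))    # 103 cycles with an ideal 4-stage pipeline
```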
d) Super scalar operation:-
1. A higher degree of concurrency can be achieved if multiple instruction pipelines are implemented in
the processor.
2. This means that multiple function units are used, creating parallel paths through which different
instructions can be executed in parallel.
3. With such an arrangement, it becomes possible to start the execution of several instructions in every
clock cycle.
4. This mode of execution is called super scalar operation.
e) Clock rate:-
1. There are two possibilities for increasing the clock rate, R.
2. First, improving the Integrated Circuit technology makes logic circuit faster, which reduces the
needed to complete a basic step. This allows the clock period, P, to be reduced and the clock rate, R,
to be increased.
3. Second, reducing the amount of processing done in one basic step also makes it possible to reduce the
clock period, P.
f) Instruction set: CISC and RISC:-
1. The terms CISC and RISC refer to design principles and techniques.
2. RISC: Reduced instruction set computers.
3. Simple instructions require a small number of basic steps to execute.
4. For a processor that has only simple instructions, a large number of instructions may be needed to
perform a given programming task. This leads to a large value of N and a small value for S.
5. It is much easier to implement efficient pipelining in processors with simple instruction sets.
6. CISC: Complex instruction set computers.
7. Complex instructions involve a large number of steps.
8. If individual instructions perform more complex operations, fewer instructions will be needed,
leading to a lower value of N and a larger value of S.
9. Complex instructions combined with pipelining would achieve good performance.
g) Optimizing Compiler:-
1. A compiler translates a high-level language program into a sequence of machine instructions.
2. To reduce N, we need to have a suitable machine instruction set and a compiler that makes good use
of it.
3. An optimizing compiler takes advantage of various features of the target processor to reduce the
product N * S.
4. The compiler may rearrange program instructions to achieve better performance.
h) Performance measurement:-
1. SPEC rating.
2. A nonprofit organization called the “System Performance Evaluation Corporation” (SPEC) selects and
publishes representative application programs for different application domains.
3. The SPEC rating is computed as follows.
4. SPEC rating = (Running time on the reference computer) / (Running time on the computer under test).
5. Thus a SPEC rating of 50 means that the computer under test is 50 times faster than the reference
computer for that particular benchmark program.
6. The test is repeated for all the programs in the SPEC suite, and the geometric mean of the results is
computed.
7. Let SPECi be the rating for program ‘i’ in the suite.
The overall SPEC rating for the computer is given by

SPEC rating = (SPEC1 × SPEC2 × … × SPECn)^(1/n)

where n is the number of programs in the suite.
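The overall rating can be computed as sketched below. The running times are made-up numbers, chosen so that every per-program rating is 50, which makes the geometric mean easy to check by eye:

```python
# SPEC rating per program = reference time / test time; the overall rating
# is the geometric mean of the per-program ratings. Timings are illustrative.
ref_times  = [500.0, 800.0, 200.0]   # seconds on the reference computer
test_times = [10.0, 16.0, 4.0]       # seconds on the computer under test

ratings = [r / t for r, t in zip(ref_times, test_times)]   # [50.0, 50.0, 50.0]
n = len(ratings)
overall = 1.0
for s in ratings:
    overall *= s                     # product of the n ratings
overall **= (1.0 / n)               # n-th root -> geometric mean
print(overall)                       # ~50: every program ran 50x faster
```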


FREQUENTLY ASKED QUESTIONS

1. Discuss the generations of computers based on the development technologies used to fabricate the processors,
memories and I/O units.
2. What are the functional units of a computer system? Explain the way of handling information by each of
them.
3. “System software is responsible for coordination of all activities in a computing system”-Justify this statement
with the functionalities of it.
4. Write a short note on bus structures used in computer system.
5. Explain the importance of instruction set in measuring the performance of a computer system.
6. Discuss various computer types with their applications in real world environment.
7. What is the role of Processor clock, clock rate in the performance of computer system? Explain.
8. Suppose two numbers located in memory are to be added. What are the functional units of digital computer
system will carry out this? Explain how.
9. Define system software.
10. Mention different types of Bus structures.
11. Explain the structure of a computer system.
12. Write about logical structure of a simple personal computer.
13. Explain the organization of a computer system and its input-output processor.
14. Write differences between RISC and CISC.
