
INSTITUTE OF ENGINEERING & SCIENCE

DEPARTMENT OF CE, CST, CSIT


SESSION JULY-DEC 2018

SOLUTION MID TERM TEST I

BASIC COMPUTER ENGINEERING

Q.1 (A) Define the classification of computers in detail.

Classification of Computers

Computer systems can be classified on the following bases:

1. On the basis of size
2. On the basis of functionality
3. On the basis of data handling

Classification on the basis of size

1. Super computers: Supercomputers are the highest-performing systems. A supercomputer is a
computer with a high level of performance compared to a general-purpose computer. The actual
performance of a supercomputer is measured in FLOPS (floating-point operations per second)
rather than MIPS. All of the world's 500 fastest supercomputers run Linux-based operating
systems. Additional research is being conducted in China, the US, the EU, Taiwan and Japan to
build even faster and more technologically advanced supercomputers. Supercomputers play an
important role in the field of computation and are used for intensive computation tasks in
various fields, including quantum mechanics, weather forecasting, climate research, oil and gas
exploration, molecular modeling, and physical simulations. Throughout history, supercomputers
have been essential in these fields.

2. Mainframe computers: Commonly called "big iron", these are usually used by large
organisations for bulk data processing such as statistics, census data processing and
transaction processing, and are widely used as servers, since these systems have a higher
processing capability than the other classes of computers. Most mainframe architectures were
established in the 1960s; research and development have continued over the years, and the
mainframes of today are far better than the earlier ones, particularly in size.
Eg: IBM z Series, System z9 and System z10 servers.

3. Mini computers: These computers came onto the market in the mid 1960s and were sold at a
much cheaper price than the mainframes. They were originally designed for control,
instrumentation, human interaction, and communication switching, as distinct from calculation
and record keeping; later, as they evolved, they became very popular for personal use.
The term "minicomputer" was coined in the 1960s to describe the smaller computers that became
possible with the use of transistors and core memory technologies, minimal instruction sets
and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR. They usually
took up one or a few rack cabinets, compared with the large mainframes that could fill a room.
Eg: DEC PDP-8, PDP-11.

4. Micro computers: A microcomputer is a small, relatively inexpensive computer with a
microprocessor as its CPU. It includes a microprocessor, memory, and minimal I/O circuitry
mounted on a single printed circuit board. The computers that preceded them, mainframes and
minicomputers, were comparatively much larger, harder to maintain and more expensive.
Microcomputers formed the foundation for the present-day computers and smart gadgets that we
use today.

Classification on the basis of functionality

1. Servers: Servers are dedicated computers that are set up to offer services to clients.
They are named according to the type of service they offer. Eg: security server, database
server.

2. Workstation: These are computers designed primarily to be used by a single user at a time.
They run multi-user operating systems. They are the ones we use for our day-to-day personal
and commercial work.

3. Information Appliances: These are portable devices designed to perform a limited set of
tasks such as basic calculations, playing multimedia and browsing the internet. They are
generally referred to as mobile devices. They have very limited memory and flexibility and
generally run on an "as-is" basis.

4. Embedded computers: These are computing devices used inside other machines to serve a
limited set of requirements. They execute instructions from non-volatile memory and are not
required to be rebooted or reset. The processing units used in such devices are built for
those basic requirements only and differ from the ones used in personal computers, better
known as workstations.

Classification on the basis of data handling

1. Analog: An analog computer is a form of computer that uses continuously variable aspects
of physical phenomena, such as electrical, mechanical, or hydraulic quantities, to model the
problem being solved. Anything that varies continuously with respect to time can be considered
analog, just as an analog clock measures time by the distance travelled by its hands around
the circular dial.

2. Digital: A digital computer performs calculations and logical operations with quantities
represented as digits, usually in the binary number system of "0" and "1". It is capable of
solving problems by processing information expressed in discrete form. By manipulating
combinations of binary digits, it can perform mathematical calculations, organize and analyze
data, control industrial and other processes, and simulate dynamic systems such as global
weather patterns.

3. Hybrid: A hybrid computer processes both analog and digital data. It is a digital computer
that accepts analog signals, converts them to digital form and processes them digitally.

(B) Discuss the generations of computers with examples.

Generations of Computers
The computers of today find their roots in the second half of the twentieth century. Later as time
progressed, we saw many technological improvements in physics and electronics. This has eventually
led to revolutionary developments in the hardware and software of computers. In other words, soon
the computer started to evolve. Each such technological advancement marks a generation of
computers. Let us begin with the first one.

First Generation: Vacuum Tubes (1940-1956)

The first computer systems used vacuum tubes for circuitry and magnetic drums for memory, and
were often enormous, taking up entire rooms. These computers were very expensive to operate and in
addition to using a great deal of electricity, the first computers generated a lot of heat, which was
often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a time. It
would take operators days or even weeks to set up a new problem. Input was based on punched cards
and paper tape, and output was displayed on printouts.

Second Generation: Transistors (1956-1963)

The world would see transistors replace vacuum tubes in the second generation of computers. The
transistor was invented at Bell Labs in 1947 but did not see widespread use in computers until the late
1950s.

The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster,
cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the
transistor still generated a great deal of heat that subjected the computer to damage, it was a vast
improvement over the vacuum tube. Second-generation computers still relied on punched cards for
input and printouts for output.

Third Generation: Integrated Circuits (1964-1971)

The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically
increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers
through keyboards and monitors and interfaced with an operating system, which allowed the device to
run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller and
cheaper than their predecessors.

Fourth Generation: Microprocessors (1971-Present)

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an entire room could now fit in
the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the
computer—from the central processing unit and memory to input/output controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas
of life as more and more everyday products began to use microprocessors.

Fifth Generation: Artificial Intelligence (Present and Beyond)


Fifth generation computing devices, based on artificial intelligence, are still in development, though
there are some applications, such as voice recognition, that are being used today. The use of parallel
processing and superconductors is helping to make artificial intelligence a reality.

Quantum computation and molecular and nanotechnology will radically change the face of computers
in years to come. The goal of fifth-generation computing is to develop devices that respond to natural
language input and are capable of learning and self-organization.

(A) Describe the organization of a computer in detail.

Organization of Computer:

Introduction

Computer Organization refers to the level of abstraction above the digital logic level, but below the
operating system level.

In computer engineering, micro-architecture, also called computer organization, is the way a given
instruction set architecture is implemented on a processor. A given ISA may be implemented with
different micro-architectures.

Computer organization consist of following parts

1. CPU – central processing unit

2. Memory

3. Input devices

4. Output devices

CPU – central processing unit

Introduction

Alternatively referred to as the brain of the computer, the processor, the central processor,
or the microprocessor, the CPU (short for Central Processing Unit) was first developed at
Intel with the help of Ted Hoff in the early 1970s. The CPU is responsible for handling all
instructions it receives from hardware and software running on the computer.

The CPU is considered the brain of the computer. It performs all types of data processing
operations, stores data, intermediate results and instructions (programs), and controls the
operation of all parts of the computer.
Memory

Computer memory is any physical device capable of storing information temporarily or
permanently. For example, Random Access Memory (RAM) is a type of volatile memory that stores
information on an integrated circuit and is used by the operating system, software, hardware,
or the user.

Computer memory is divided into two parts:

1. Volatile memory

Volatile memory is temporary memory that loses its contents when the computer or hardware
device loses power, e.g. RAM.

2. Non-volatile memory

Non-volatile memory keeps its contents even if power is lost. ROM and EPROM are good examples
of non-volatile memory.

Input Devices

A device that can be used to insert data into a computer system is called an input device. It
allows people to supply information to computers. An input device is any hardware device that
sends data to the computer; without any input devices, a computer would only be a display
device and would not allow users to interact with it, much like a TV. The most fundamental
pieces of information are keystrokes on a keyboard and clicks with a mouse. These two input
devices are essential for interacting with your computer. Input devices represent one type of
computer peripheral.
Examples of input devices include keyboards, mice, scanners, digital cameras and joysticks.

Output Devices

A device that is used to display results from a computer is called an output device. It allows
people to receive information from computers. An output device is any peripheral that receives
or displays output from a computer; for example, an inkjet printer is an output device that
can make a hard copy of anything being displayed on a monitor. An output device is electronic
equipment connected to a computer and used to transfer data out of the computer in the form of
text, images, sounds or print.

Examples of output devices include printers, monitors, speakers, etc.

(B) Define all types of I/O Devices.

Following are some of the important input devices which are used in a computer:

- Keyboard
- Mouse
- Joystick
- Light pen
- Trackball
- Scanner
- Graphic Tablet
- Microphone
- Magnetic Ink Card Reader (MICR)
- Optical Character Reader (OCR)
- Bar Code Reader
- Optical Mark Reader (OMR)

Q.2 (A) Describe the difference between procedure-oriented
programming and object-oriented programming.

Both OOP and POP are programming paradigms, where OOP stands for "Object Oriented Programming"
and POP stands for "Procedure Oriented Programming". Both use high-level programming to solve
a problem, but with different approaches; these approaches are technically known as
programming paradigms. A programmer can take different approaches to write a program because
there is no single direct approach to solving a particular problem. This is where programming
paradigms come into the picture: a paradigm makes it easier to solve the problem using the
right approach. Object-oriented programming and procedure-oriented programming are two such
paradigms.

What is Object Oriented Programming (OOP)?

OOP is a high-level programming language where a program is divided into small chunks called
objects using the object-oriented model, hence the name. This paradigm is based on objects and
classes.

- Object – An object is basically a self-contained entity that bundles both data and the
procedures to manipulate that data. Objects are merely instances of classes.

- Class – A class, in simple terms, is a blueprint of an object which defines all the common
properties of one or more objects associated with it. A class can be used to define multiple
objects within a program.

The OOP paradigm focuses mainly on the data rather than the algorithm. It creates modules by
dividing a program into data and functions that are bundled within objects. The modules need
not be modified when a new object is added, and non-member functions are restricted from
accessing the data. Methods are the only way to access the data.

Objects can communicate with each other through member functions. This process is known as
message passing. This data hiding among the objects is what makes the program secure. A
programmer can create a new object from already existing objects by inheriting most of their
features, thus making the program easy to implement and modify.
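The ideas above can be sketched in a short C++ example (the Account class and all of its member names are invented purely for illustration):

#include <iostream>
using namespace std;

// A class is a blueprint: it bundles data and the
// member functions (methods) that manipulate that data.
class Account {
private:
    int balance;   // data is hidden from non-member functions
public:
    Account(int initial) : balance(initial) {}
    void deposit(int amount) { balance += amount; }  // methods are the
    int getBalance() { return balance; }             // only way to access data
};

int main()
{
    Account acc(100);   // an object is an instance of the class
    acc.deposit(50);    // "message passing": calling a member function
    cout << acc.getBalance() << endl;
    return 0;
}

Because balance is private, the only way to read or change it is through deposit() and getBalance(); this is the data hiding that keeps non-member code from corrupting an object's state.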
What is Procedure Oriented Programming (POP)?

POP follows a step-by-step approach to break down a task into a collection of variables and routines
(or subroutines) through a sequence of instructions. Each step is carried out in order in a systematic
manner so that a computer can understand what to do. The program is divided into small parts called
functions and then it follows a series of computational steps to be carried out in order.

It follows a top-down approach to actually solve a problem, hence the name. Procedures correspond to
functions and each function has its own purpose. Dividing the program into functions is the key to
procedural programming. So a number of different functions are written in order to accomplish the
tasks.
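As a minimal sketch of this top-down decomposition (the function names and the stubbed marks value are invented for this example), the task "grade a student" breaks into three functions called in order:

#include <iostream>
using namespace std;

// Procedural style: the task is broken into functions that are
// called step by step; data is passed freely between them.
int readMarks() { return 75; }                               // step 1 (stubbed input)
int computeGrade(int marks) { return marks >= 60 ? 1 : 0; }  // step 2
void printResult(int pass) { cout << (pass ? "Pass" : "Fail") << endl; }  // step 3

int main()
{
    int marks = readMarks();
    int pass = computeGrade(marks);
    printResult(pass);
    return 0;
}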

In the initial stages, all computer programs were procedural: you feed the computer a set of
instructions on how to move from one piece of code to another, thereby accomplishing the task.
As most of the functions share global data, data moves freely around the system from function
to function, making the program vulnerable. These basic flaws gave rise to the concept of
object-oriented programming, which is more secure.

Difference between OOP and POP

1. Definition

OOP stands for Object-oriented programming and is a programming approach that focuses on data
rather than the algorithm, whereas POP, short for Procedure-oriented programming, focuses on
procedural abstractions.

2. Programs

In OOP, the program is divided into small chunks called objects which are instances of classes,
whereas in POP, the main program is divided into small parts based on the functions.

3. Accessing Mode
Three accessing modes are used in OOP to access attributes or functions – 'Private', 'Public',
and 'Protected'. In POP, on the other hand, no such accessing mode is required to access the
attributes or functions of a particular program.

4. Focus

The main focus is on the data associated with the program in case of OOP while POP relies on
functions or algorithms of the program.

5. Execution

In OOP, various functions can work simultaneously while POP follows a systematic step-by-step
approach to execute methods and functions.

6. Data Control

In OOP, the data and functions of an object act like a single entity so accessibility is limited to the
member functions of the same class. In POP, on the other hand, data can move freely because each
function contains different data.

7. Security

OOP is more secure than POP, thanks to the data hiding feature which limits the access of data to the
member function of the same class, while there is no such way of data hiding in POP, thus making it
less secure.

8. Ease of Modification

New data objects can be created easily from existing objects making object-oriented programs easy to
modify, while there‘s no simple process to add data in POP, at least not without revising the whole
program.

9. Process

OOP follows a bottom-up approach for designing a program, while POP takes a top-down approach to
design a program.

10. Examples

Commonly used OOP languages are C++, Java, VB.NET, etc., while C, Pascal and Fortran are commonly used POP languages.

(B) Explain data types and operators used in C/C++.


Operators in C++

Operators are special symbols or functions that take one or more arguments and produce a new
value. For example, addition (+), subtraction (-) and multiplication (*) are all operators.
Operators are used to perform various operations on variables and constants.
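For instance, a few of these operators applied to two integer variables (a small illustrative snippet; the variable names are invented):

#include <iostream>
using namespace std;

int main()
{
    int a = 10, b = 3;
    cout << a + b << endl;   // addition
    cout << a - b << endl;   // subtraction
    cout << a * b << endl;   // multiplication
    cout << a / b << endl;   // integer division (fraction discarded)
    cout << a % b << endl;   // remainder (modulus)
    return 0;
}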
C++ Data Types

All variables use a data type during declaration to restrict the type of data to be stored.
Therefore, we can say that data types tell variables the type of data they can store.
Whenever a variable is defined in C++, the compiler allocates some memory for that variable
based on the data type with which it is declared. Every data type requires a different amount
of memory.

Data types in C++ are mainly divided into two types:

1. Primitive Data Types: These data types are built-in or predefined data types and can be
used directly by the user to declare variables, for example int, char, float and bool. The
primitive data types available in C++ are:

- Integer
- Character
- Boolean
- Floating Point
- Double Floating Point
- Valueless or Void
- Wide Character

2. Abstract or user-defined data types: These data types are defined by the user, for example
by defining a class or a structure in C++.
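The primitive types above can be sketched in one short program (the variable names and values are invented; sizes other than char vary by platform, so only sizeof(char), which is always exactly 1 byte, is printed):

#include <iostream>
using namespace std;

int main()
{
    int count = 42;        // integer
    char letter = 'A';     // character
    bool flag = true;      // boolean
    float price = 9.5f;    // floating point
    double pi = 3.14159;   // double floating point
    wchar_t wide = L'B';   // wide character
    (void)wide;            // declared only to illustrate the type

    // sizeof reports the memory a type occupies; char is always 1 byte
    cout << sizeof(letter) << " " << count << " " << letter << " " << flag << endl;
    cout << price << " " << pi << endl;
    return 0;
}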

(A) Describe the I/O operators used in C/C++ with examples.

C++ Output Operator

The output operator ("<<") ("put to"), also called the stream insertion operator, is used to
direct a value to standard output.
C++ Output Operator Example

Here is an example program, uses C++ output operator.

/* C++ Output Operator */

#include <iostream>

using namespace std;

int main()
{
    cout << "Welcome to codescracker.com";
    cout << "\nYou are learning C++ here\n";
    cout << "You can learn all about C++ here";
    cout << "\nThis is C++ Input and Output Operators Tutorial";
    return 0;
}

C++ Input Operator

The input operator (">>") ("get from"), also known as the stream extraction operator, is used
to read a value from standard input.

C++ Input Operator Example

Here is an example program that uses the C++ input operator to store integer values:

/* C++ Input Operator */

#include <iostream>

using namespace std;

int main()
{
    int val1, val2;
    cin >> val1 >> val2;
    cout << "You entered: " << val1 << " and " << val2;
    return 0;
}
C Input and Output

Input means to provide the program with some data to be used in the program and Output means to
display data on screen or write the data to a printer or a file.

C programming language provides many built-in functions to read any given input and to display data
on screen when there is a need to output the result.

In this tutorial, we will learn about such functions, which can be used in our program to take input
from user and to output the result on screen.

All these built-in functions are present in C header files; we will also specify the name of
the header file in which a particular function is defined while discussing it.

scanf() and printf() functions

The standard input-output header file, named stdio.h contains the definition of the
functions printf() and scanf(), which are used to display output on screen and to take input from user
respectively.

#include <stdio.h>

int main()
{
    // defining a variable
    int i;

    /*
      displaying a message on the screen,
      asking the user to input a value
    */
    printf("Please enter a value...");

    /* reading the value entered by the user */
    scanf("%d", &i);

    /* displaying the number as output */
    printf("\nYou entered: %d", i);

    return 0;
}

(B) Define variables, constants, character set and tokens with examples.

The C Character sets

The character set is the smallest unit for representing information; it consists of the
digits, characters and special symbols allowed in the C language.

a) Alphabets: A, B, C, D ……… Y, Z

a, b, c, d ……… y, z

b) Digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9

c) Special symbols: @ # % ^ & * ( ) _ - + = | \ { } [ ] ; : " ' < > , . ? /

d) White spaces

Tokens

The smallest individual units in the C programs are known as Tokens. There are 6 types of tokens in
C, which are:

1. Keywords - int, do, while
2. Identifiers - unit, amount
3. Constants - 3.14, -100
4. Special symbols - !, {}
5. Punctuation marks - Double quotes (" "), single quotes (' ')
6. Strings and operators - "ABC" and +, -, --, /
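To illustrate, a single C statement can be broken into its tokens (the annotated statement is an invented example, not from the original answer):

// One statement broken into its tokens:
//
//     int amount = 100 + 50;
//
//     int      -> keyword
//     amount   -> identifier
//     =, +     -> operators
//     100, 50  -> constants
//     ;        -> special symbol
#include <iostream>
using namespace std;

int main()
{
    int amount = 100 + 50;
    cout << amount << endl;
    return 0;
}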

Keywords

The alphabets, digits and special symbols, properly combined, form a limited number of C
keywords. The C compiler already knows the meaning of these keywords. You cannot use a C
keyword as a variable name; assigning a new meaning to a keyword is not allowed in C. Thus
the keywords are also known as 'reserved words'.

There are 32 keywords available in the C language:

auto, double, union, void, int, char, long, float, short, sizeof, goto, continue, return, struct, switch,
case, break, enum, register, volatile, typedef, extern, const, unsigned, signed, for, do, while, default, if,
else, static

Identifiers in C
While writing a C program, a user needs to define various programming constructs like arrays,
functions, variables, constants, etc. each with a different name (under specified rules), are termed as
identifiers.

A programmer should give an identifier a name relevant to what it is meant for; this makes the
C program more readable. For instance, if you are defining a variable for storing the names of
students, it is better to name it something like 'studentname' or 'st_name' rather than 'abc'.
This makes your variable more meaningful and understandable to anyone.
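A small sketch contrasting a descriptive identifier with a vague one (both variable names and the marks value are hypothetical; both declarations are equally legal):

#include <iostream>
using namespace std;

int main()
{
    // A descriptive identifier makes the intent clear ...
    float student_marks = 87.5f;

    // ... whereas a vague one does not, even though the compiler
    // accepts both under the same identifier rules:
    float x = 87.5f;

    cout << student_marks << " " << x << endl;
    return 0;
}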
