Unit-1 Important Questions & Answers
UNIT-1
10M Q&A
1. Centralized Model
The centralized model refers to an architecture where all services, resources, and processing power are located
on a single, central server or system. Clients or end users connect to this central server to access the resources and
services they need.
Characteristics:
Single Point of Control: The central server handles all processing, data storage, and service management.
Thin Clients: Clients in a centralized model typically act as simple interfaces or terminals, relying on the
central server for processing.
Tight Control: Centralized control allows for easier management, security, and resource allocation since all
data and processes are housed in one location.
Examples:
o Early mainframe systems where users accessed applications via terminals connected to a central
mainframe.
o Traditional banking systems with centralized data centers.
Advantages:
Simplified Maintenance: Since everything is located in one place, it’s easier to maintain, update, and manage
the system.
Strong Security: With all data residing on one server, security measures can be more easily enforced
and monitored.
Cost-Efficient for Small Scale: For small organizations, centralizing resources can be cost-effective as
there is no need to distribute hardware or computing resources across many nodes.
Disadvantages:
Single Point of Failure: If the central server fails, the entire system goes down, disrupting service for
all users.
Scalability Issues: As more users and demands grow, the central server can become a bottleneck, making it
harder to scale the system efficiently.
Performance Constraints: A centralized system can become slow if all clients depend on one server for
processing and resources, especially with a large number of users.
2. Client/Server Model
The client/server model is a distributed architecture where services, data, and processing tasks are shared
between servers and client devices. In this model, multiple clients (users or devices) request services from one or
more servers, which respond with the requested resources or data.
Characteristics:
Division of Labor: The server handles processing, data management, and service provision, while the client can
perform its own local tasks (e.g., UI rendering, local processing).
Clients and Servers Communicate Over a Network: Clients connect to the server through a network (e.g.,
LAN, WAN, internet) to request services or resources.
Decentralized Processing: Unlike the centralized model, the client/server model distributes some processing
tasks to the clients, reducing the load on the server.
Examples:
o Web browsers (clients) accessing websites (servers).
o Email systems where the client (email application) interacts with an email server.
o Modern business applications like ERP or CRM systems.
Advantages:
Scalability: As the system grows, more servers can be added to handle increased loads, and clients can be
upgraded or replaced without affecting the entire system.
Better Performance: By distributing some processing tasks to clients, the system can be more efficient, reducing
the load on the server and improving response times.
Fault Tolerance: The failure of a single client doesn’t affect the rest of the system, and in some cases, servers can
be replicated to avoid single points of failure.
Disadvantages:
Complexity: The client/server model is more complex to set up and manage because it requires network
configuration, server management, and ensuring clients and servers work together efficiently.
Security Challenges: Distributed systems often require more robust security measures, as data is transmitted
between clients and servers, making them potentially vulnerable to attacks like eavesdropping or man-in-the-middle attacks.
Higher Costs: Depending on the scale, setting up multiple servers and managing the network infrastructure can
be expensive.
3. Define DBMS. Explain database users in detail.
ANS: A Database Management System (DBMS) is software that allows users to define, create, store, manage, and retrieve the data in a database in a controlled and secure way. A Database User is a person who interacts with data daily, updating, reading, and modifying the given data. Database users can access and retrieve data from the database through Database Management System (DBMS) applications and interfaces.
Types of Database Users
Database users are categorized based on their interaction with the database. There are seven types of database users in DBMS. The Database Administrator is described below:
1. Database Administrator (DBA)
A Database Administrator (DBA) is a person or team who defines the schema and controls all three levels of the database. The DBA creates a new account ID and password for a user who needs to access the database. The DBA is also responsible for database security, allowing only authorized users to access or modify the database, and for problems such as security breaches and poor system response time.
DBA also monitors the recovery and backup and provides technical support.
The DBA has a DBA account in the DBMS which is called a system or superuser account.
DBA repairs damage caused due to hardware and/or software failures.
DBA is the one having privileges to perform DCL (Data Control Language) operations such as GRANT and REVOKE, to allow or restrict a particular user from accessing the database (a brief example follows).
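A hedged sketch of such DCL operations (the user name, password, and Employee table are purely illustrative, and exact CREATE USER syntax differs slightly between DBMSs):

-- Create an account for a new user (illustrative name and password; syntax varies by DBMS)
CREATE USER clerk1 IDENTIFIED BY 'StrongPassword1';
-- Allow the user to read and insert rows in an illustrative Employee table
GRANT SELECT, INSERT ON Employee TO clerk1;
-- Later, withdraw the insert privilege from that user
REVOKE INSERT ON Employee FROM clerk1;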
Architecture of DBMS
1. Query Processor:
It interprets the requests (queries) received from the end user via an application program into instructions. It also executes the user's request using the instructions received from the DML compiler.
Query Processor contains the following components –
DML Compiler: It processes the DML statements into low-level instructions so that they can be executed.
DDL Interpreter: It processes the DDL statements into a set of tables containing metadata (data about data).
Embedded DML Pre-compiler: It processes DML statements embedded in an application program into procedural calls.
Query Optimizer: It optimizes the instructions generated by the DML compiler by choosing the most efficient plan for evaluating a query (a small illustration follows this list).
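To illustrate the optimizer's role, the same declarative query can be answered through different access paths (index lookup, full table scan, different join orders), and the optimizer chooses the cheapest plan. Many DBMSs (e.g., MySQL and PostgreSQL) expose the chosen plan through EXPLAIN; the table and column names below are only illustrative:

-- The user states WHAT data is needed, not HOW to fetch it
SELECT name, salary FROM Employee WHERE dept_id = 10;
-- Display the evaluation plan the optimizer selected for the same query
EXPLAIN SELECT name, salary FROM Employee WHERE dept_id = 10;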
2. Storage Manager:
Storage Manager is a program that provides an interface between the data stored in the database and the queries
received. It is also known as Database Control System. It maintains the consistency and integrity of the database by
applying the constraints and executing the DCL statements. It is responsible for updating, storing, deleting, and
retrieving data in the database.
It contains the following components –
Authorization Manager: It ensures role-based access control, i.e., it checks whether a particular person is privileged to perform the requested operation or not.
Integrity Manager: It checks the integrity constraints when the database is modified.
Transaction Manager: It controls concurrent access by scheduling the operations of the transactions it receives, thus ensuring that the database remains in a consistent state before and after the execution of a transaction (a short example follows this list).
File Manager: It manages the file space and the data structure used to represent information in the database.
Buffer Manager: It is responsible for cache memory and the transfer of data between the secondary
storage and main memory.
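A small sketch of the transaction manager's guarantee from the user's point of view: the two updates below either both become permanent or both roll back, keeping the database consistent. The Account table and account numbers are illustrative, and some DBMSs write START TRANSACTION instead of BEGIN:

BEGIN;                                                            -- start the transaction
UPDATE Account SET balance = balance - 500 WHERE acc_no = 101;   -- debit one account
UPDATE Account SET balance = balance + 500 WHERE acc_no = 202;   -- credit the other
COMMIT;                                                           -- make both changes permanent together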
3. Disk Storage:
It contains the following components:
Data Files: It stores the data.
Data Dictionary: It contains information about the structure of every database object. It is the repository that stores the metadata.
Indices: They provide faster retrieval of data items (see the example below).
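As a small illustration of how an index supports faster retrieval (the table and column names are assumed for the example):

-- Build an index on the column that is searched most often
CREATE INDEX idx_emp_name ON Employee (name);
-- Lookups such as this one can now use the index instead of scanning the whole table
SELECT * FROM Employee WHERE name = 'Ravi';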
Internal Level:
This level represents the physical storage of data in the database. It is responsible for storing and retrieving data from
the storage devices, such as hard drives or solid-state drives. It deals with low-level implementation details such as
data compression, indexing, and storage allocation.
Conceptual Level:
This level represents the logical view of the database. It deals with the overall organization of data in the database and
the relationships between them. It defines the data schema, which includes tables, attributes, and their relationships.
The conceptual level is independent of any specific DBMS and can be implemented using different DBMSs.
External Level:
This level represents the user’s view of the database. It deals with how users access the data in the database. It allows
users to view data in a way that makes sense to them, without worrying about the underlying implementation details.
The external level provides a set of views or interfaces to the database, which are tailored to meet the needs of specific
user groups.
The three levels are connected through a schema mapping process that translates data from one level to another.
The schema mapping process ensures that changes made at one level are reflected in the other levels.
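To make the three levels concrete, the following hedged sketch (with illustrative object names) shows how they roughly correspond to everyday SQL objects: the base table sits at the conceptual level, the index is an internal-level storage detail, and the view is an external-level user view.

-- Conceptual level: the logical table and its attributes
CREATE TABLE Student (
    roll_no INT PRIMARY KEY,
    name    VARCHAR(50),
    marks   INT
);
-- Internal level: a physical access structure, hidden from ordinary users
CREATE INDEX idx_student_marks ON Student (marks);
-- External level: a restricted view tailored to one user group
CREATE VIEW topper_list AS
SELECT roll_no, name FROM Student WHERE marks >= 75;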
6. What is data independence and how does a DBMS support it? Explain.
ANS:
Data independence is a key feature of a Database Management System (DBMS) that allows changes to be made
to the database schema at one level without affecting the schema at higher levels. It provides the flexibility to
modify the data structure or storage without impacting how users or applications access and interact with the data.
The DBMS supports data independence in the following ways:
1. Three-Schema Architecture:
o The DBMS uses three levels of schemas (internal, conceptual, and external) so that users and application programs do not interact with the physical storage directly, enabling both physical and logical data independence.
2. Mappings Between Levels:
o DBMS maintains mappings between these schemas to translate between levels of abstraction.
Mapping between Internal and Conceptual Schemas: Supports physical data
independence by allowing changes in the physical schema without affecting the logical
schema.
Mapping between Conceptual and External Schemas: Supports logical data independence
by allowing changes in the logical schema without affecting external user views.
3. Query Processing and Optimization:
o The query processor ensures that users can write high-level queries (like SQL) without worrying
about the physical storage details.
o Query optimization ensures that queries are executed in the most efficient way, regardless of how
the data is stored physically, supporting physical data independence.
4. Data Abstraction:
o DBMS provides multiple levels of data abstraction (view level, logical level, and physical level)
that hide the implementation details from users and applications.
o This abstraction ensures that higher-level users and developers work with data in a simplified way,
while the DBMS handles the complexity of data storage and access.
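A minimal sketch of logical data independence in practice, assuming a hypothetical Employee table: application programs query a view, so a later change to the base table does not require their queries to change.

-- Applications use this view rather than the base table directly
CREATE VIEW emp_contact AS
SELECT emp_id, name, phone FROM Employee;
-- The conceptual schema changes (new column added), but the view is unaffected
ALTER TABLE Employee ADD COLUMN date_of_birth DATE;
-- Existing application queries continue to work exactly as before
SELECT name, phone FROM emp_contact;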
7. List and explain various data models used for database design.
ANS:
A data model in a Database Management System (DBMS) is a set of concepts used to describe and summarize the structure of a database. Data models give us a clear picture of the data, which helps us in creating an actual database, and they guide us from the design of the data to its actual implementation.
Types of Data Models
Data models are broadly classified into three categories:
1. Conceptual Data Model
2. Representational (Logical) Data Model
3. Physical Data Model
Some commonly used data models are described below:
1. Hierarchical Model
The hierarchical model is one of the oldest data models; it was developed by IBM in the 1960s. In a hierarchical model, data is viewed as a collection of tables, or segments, that form a hierarchical relation. The data is organized into a tree-like structure where each record has one parent record and may have many children. Even though the segments are connected in a chain-like structure by logical associations, the resulting structure can be a fan structure with multiple branches. These logical associations between segments are called directional associations.
2. Network Model
The network model was formalized by the Database Task Group (DBTG) in the late 1960s. This model is a generalization of the hierarchical model: a segment can have multiple parent segments, and segments are grouped into levels, with logical associations between segments belonging to any level. Mostly, there exists a many-to-many logical association between any two segments.
3. Object-Oriented Data Model
In the Object-Oriented Data Model, data and their relationships are contained in a single structure which is referred to
as an object in this data model. In this, real-world problems are represented as objects with different attributes. All
objects have multiple relationships between them. Basically, it is a combination of Object Oriented programming and
a Relational Database Model.
4. Flat Data Model
The flat data model basically consists of a two-dimensional array of data elements that does not contain any duplicate elements. This data model has one drawback: it cannot store a large amount of data, that is, the tables cannot be of large size.
5. Context Data Model
The context data model is simply a data model which consists of more than one data model. For example, a context data model may combine the ER model, the object-oriented data model, etc. This model allows users to do more than what each individual data model can do on its own.
6. Semi-Structured Data Model
Semi-Structured data models deal with the data in a flexible way. Some entities may have extra attributes and some
entities may have some missing attributes. Basically, you can represent data here in a flexible way.
Advantages of Data Models
1. Data Models help us in representing data accurately.
2. It helps us in finding the missing data and also in minimizing Data Redundancy.
3. Data Model provides data security in a better way.
4. The data model should be detailed enough to be used for building the physical database.
5. The information in the data model can be used for defining the relationship between tables, primary and
foreign keys, and stored procedures.
Disadvantages of Data Models
1. In the case of a vast database, sometimes it becomes difficult to understand the data model.
2. You must have the proper knowledge of SQL to use physical models.
3. Even a small change made in the structure may require modification of the entire application.
4. There is no set data manipulation language in DBMS.
5. To develop a data model, one should know the characteristics of the physically stored data.
8. Define Schema. Explain Three level schema architecture in DBMS.
ANS :
Schema
The skeleton of the database, formed by the attributes and the relationships among them, is called the schema.
The schema mentions logical constraints like tables, primary keys, etc.
The schema does not represent the data type of each attribute.
[Figure: details of a Customer record and the corresponding Customer schema]
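A hedged sketch of what the Customer schema above might look like in SQL (the column names are assumed for illustration); the rows inserted afterwards form an instance of that schema:

-- Schema: the structure of the Customer table (illustrative columns)
CREATE TABLE Customer (
    cust_id INT PRIMARY KEY,
    name    VARCHAR(50),
    city    VARCHAR(30)
);
-- Instance: the data actually stored at a particular moment
INSERT INTO Customer VALUES (1, 'Anita', 'Ongole');
INSERT INTO Customer VALUES (2, 'Ravi', 'Vijayawada');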
Database Schema
A database schema is a logical representation of data that shows how the data in a database should be stored
logically. It shows how the data is organized and the relationship between the tables.
A database schema contains tables, fields, views, and relationships between different keys such as primary keys and foreign keys.
Data stored in plain files is unstructured in nature, which makes accessing the data difficult; to resolve this, the data is organized in a structured way with the help of a database schema.
A database schema provides the organization of data and the relationships between the stored data.
A database schema defines a set of guidelines that control the database, and it also provides information about the way of accessing and modifying the data.
Types of Database Schemas
There are 3 types of database schema:
Physical Database Schema
A physical schema defines how the data or information is stored physically in the storage systems in the form of files and indices. This is the actual code or syntax needed to create the structure of the database; when we design a database at the physical level, the result is called the physical schema.
The Database administrator chooses where and how to store the data in the different blocks of storage.
Logical Database Schema
A logical database schema defines all the logical constraints that need to be applied to the stored data, and also
describes tables, views, entity relationships, and integrity constraints.
The Logical schema describes how the data is stored in the form of tables & how the attributes of a table are
connected.
Using ER modelling, the relationships between the components of the data are maintained.
In the logical schema, different integrity constraints are defined in order to maintain the quality of data insertion and updates (a brief example follows).
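A brief sketch of a logical schema with integrity constraints, using two illustrative tables; the NOT NULL, CHECK, and foreign-key clauses are the kinds of constraints defined at this level:

CREATE TABLE Department (
    dept_id   INT PRIMARY KEY,
    dept_name VARCHAR(40) NOT NULL
);
CREATE TABLE Employee (
    emp_id  INT PRIMARY KEY,
    name    VARCHAR(50) NOT NULL,
    salary  DECIMAL(10,2) CHECK (salary >= 0),     -- domain constraint
    dept_id INT REFERENCES Department(dept_id)     -- referential integrity
);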
View Database Schema
A view schema is a view-level design that defines the interaction between the end user and the database.
The user interacts with the database through this interface without knowing much about the underlying storage mechanism of the data.
A view hides the complexity of the underlying tables and also allows one to access those objects without needing multiple SQL statements for multiple transactions.
UNIT-1
2M Q&A
1. Define Instances and schemas of database.
2. List any two advantages of database systems.
3. Discuss about Data Definition Language.
4. Define Data Abstraction?
5. Discuss about Data Manipulation Language.
6. Explain the disadvantages of file processing system.
2. List any two advantages of database systems.
ANS:
1. Data Integrity:
Database systems ensure that data remains accurate, consistent, and reliable across different applications. Integrity constraints like primary keys, foreign keys, and data validation rules help prevent data anomalies and maintain consistency. For example, a database system will ensure that no duplicate entries exist for a unique identifier like a social security number, preserving data accuracy (a short sketch of this follows point 2).
2. Data Security:
Database systems provide robust security mechanisms to control access to the data. User permissions and access
levels can be defined to restrict unauthorized users from viewing, modifying, or deleting sensitive data. This
ensures that only authorized individuals or applications can interact with certain parts of the data, enhancing
overall security.
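A short hedged sketch of the social-security-number example above, assuming a hypothetical Person table: once a UNIQUE constraint is declared, the DBMS itself rejects duplicate entries.

-- Enforce uniqueness of the SSN at the database level (Person table is illustrative)
ALTER TABLE Person ADD CONSTRAINT uq_person_ssn UNIQUE (ssn);
-- An insert that repeats an existing SSN is now rejected automatically
INSERT INTO Person (person_id, name, ssn) VALUES (2, 'Duplicate Entry', '123-45-6789');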
3. Discuss about Data Definition Language.
ANS:
Data Definition Language (DDL) is a subset of SQL (Structured Query Language) used to define and manage all
aspects of database structures. It provides commands for creating, modifying, and deleting database objects, such
as tables, indexes, and schemas.
CREATE: To create new tables, indexes, views, or schemas.
ALTER: To modify existing database objects.
DROP: To remove existing database objects.
TRUNCATE: To remove all records from a table while retaining the table structure.
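A short hedged example that applies each of the DDL commands above to an illustrative Course table (ALTER and TRUNCATE syntax can differ slightly between DBMSs):

CREATE TABLE Course (
    course_id INT PRIMARY KEY,
    title     VARCHAR(60)
);                                            -- create a new table
ALTER TABLE Course ADD COLUMN credits INT;    -- modify the existing table
TRUNCATE TABLE Course;                        -- remove all rows, keep the structure
DROP TABLE Course;                            -- remove the table entirely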