MCS-023 Introduction to Database Management Systems

IGNOU database assignment — uploaded by Md Qasim

Q1

Q1 (a)
SQL (Structured Query Language) is a standardized programming language used for managing and
manipulating relational databases. SQL is widely used to query, insert, update, delete, and manage data
stored in a database. It allows interaction with databases by executing various types of queries.

Important features of SQL:

1. Data Retrieval: SQL is used to retrieve data from a database using SELECT queries. You can filter,
group, and order results.
2. Data Manipulation: SQL allows you to add (INSERT), modify (UPDATE), and delete (DELETE) data.
3. Data Definition: SQL is used to define the structure of a database using commands like CREATE TABLE, ALTER TABLE, and DROP TABLE.
4. Data Control: SQL provides commands to control data access and permissions with GRANT and REVOKE.
5. Transaction Control: SQL supports transaction management with commands like COMMIT, ROLLBACK, and SAVEPOINT.
6. Joins: SQL allows joining tables to retrieve data from multiple related tables using various types of joins, such as INNER JOIN, LEFT JOIN, etc.
7. Functions and Subqueries: SQL supports built-in functions for performing calculations and operations, and subqueries can be nested inside other queries for more complex operations.
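The feature categories above can be exercised in one short session. The following is an illustrative sketch using Python's built-in sqlite3 module; the `emp` table, its columns, and the sample rows are invented for the demonstration, not part of the assignment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data Definition: CREATE TABLE
cur.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept TEXT, salary REAL)")

# Data Manipulation: INSERT, UPDATE, DELETE
cur.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)",
                [(1, "Asha", "HR", 40000), (2, "Ravi", "IT", 55000), (3, "Meena", "IT", 60000)])
cur.execute("UPDATE emp SET salary = salary + 5000 WHERE dept = 'IT'")
cur.execute("DELETE FROM emp WHERE id = 1")

# Data Retrieval: SELECT with filtering, grouping, and ordering
cur.execute("SELECT dept, COUNT(*), AVG(salary) FROM emp GROUP BY dept ORDER BY dept")
rows = cur.fetchall()
print(rows)   # [('IT', 2, 62500.0)]

# Transaction Control: COMMIT makes the changes permanent
conn.commit()
conn.close()
```

SQLite does not support GRANT/REVOKE, so the Data Control commands are omitted here; they apply to server DBMSs such as Oracle or MySQL.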

Q1 (b)
BOOK (Book_ID, Title, Publisher_ID, Year_of_Pub, Price)
AUTHOR (Author_ID, Book_ID, Author_Name)
PUBLISHER (Publisher_ID, Book_ID, Address, Name_of_Pub, No._of_Copies)

(i) Find the name of authors whose books are published by "ABC Press".

SELECT A.Author_Name
FROM AUTHOR A
JOIN PUBLISHER P ON A.Book_ID = P.Book_ID
WHERE P.Name_of_Pub = 'ABC Press';

(ii) Find the name of the author and price of the book, whose Book_ID is '100'.

SELECT A.Author_Name, B.Price
FROM AUTHOR A
JOIN BOOK B ON A.Book_ID = B.Book_ID
WHERE B.Book_ID = '100';

(iii) Find the title of the books which are published by Publisher_ID '20' and are published in
the year 2011.

SELECT B.Title
FROM BOOK B
WHERE B.Publisher_ID = '20' AND B.Year_of_Pub = 2011;
(iv) Find the address of the publisher who has published Book_ID "500".

SELECT P.Address
FROM PUBLISHER P
WHERE P.Book_ID = '500';
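The four queries can be checked against the given schema with a few made-up sample rows (the data below is invented for illustration; `No._of_Copies` is renamed `No_of_Copies` to form a legal SQL identifier):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE BOOK (Book_ID TEXT, Title TEXT, Publisher_ID TEXT, Year_of_Pub INTEGER, Price REAL);
CREATE TABLE AUTHOR (Author_ID TEXT, Book_ID TEXT, Author_Name TEXT);
CREATE TABLE PUBLISHER (Publisher_ID TEXT, Book_ID TEXT, Address TEXT, Name_of_Pub TEXT, No_of_Copies INTEGER);
INSERT INTO BOOK VALUES ('100', 'DBMS Basics', '20', 2011, 450.0);
INSERT INTO BOOK VALUES ('500', 'SQL in Depth', '30', 2012, 600.0);
INSERT INTO AUTHOR VALUES ('A1', '100', 'R. Sharma');
INSERT INTO AUTHOR VALUES ('A2', '500', 'K. Gupta');
INSERT INTO PUBLISHER VALUES ('20', '100', 'Delhi', 'ABC Press', 1000);
INSERT INTO PUBLISHER VALUES ('30', '500', 'Mumbai', 'XYZ House', 500);
""")

# (i) authors published by 'ABC Press'
r1 = cur.execute("""SELECT A.Author_Name FROM AUTHOR A
                    JOIN PUBLISHER P ON A.Book_ID = P.Book_ID
                    WHERE P.Name_of_Pub = 'ABC Press'""").fetchall()
print(r1)   # [('R. Sharma',)]

# (ii) author and price for Book_ID '100'
r2 = cur.execute("""SELECT A.Author_Name, B.Price FROM AUTHOR A
                    JOIN BOOK B ON A.Book_ID = B.Book_ID
                    WHERE B.Book_ID = '100'""").fetchall()
print(r2)   # [('R. Sharma', 450.0)]

# (iii) titles from Publisher_ID '20' published in 2011
r3 = cur.execute("""SELECT B.Title FROM BOOK B
                    WHERE B.Publisher_ID = '20' AND B.Year_of_Pub = 2011""").fetchall()
print(r3)   # [('DBMS Basics',)]

# (iv) address of the publisher of Book_ID '500'
r4 = cur.execute("SELECT P.Address FROM PUBLISHER P WHERE P.Book_ID = '500'").fetchall()
print(r4)   # [('Mumbai',)]
```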

Q2
Q2 (a)
Anomalies in a Database:
Anomalies occur in poorly designed databases, usually when all the data is stored in a single table without
normalization. These include insertion, deletion, and updation anomalies.

1. Insertion Anomaly: Occurs when certain data cannot be inserted into the database due to the
absence of other data.
Example: Consider a table STUDENT_COURSE with columns (Student_ID, Student_Name,
Course_ID, Course_Name, Instructor). If a new course needs to be added, but there are no
students enrolled yet, we cannot insert the course into this table since there is no student
information.
Solution: Normalize the database by creating separate tables for STUDENTS and COURSES .
This allows adding course information without needing student data.
2. Deletion Anomaly: Occurs when deletion of some data results in the unintended loss of additional
data.
Example: In the same STUDENT_COURSE table, if we delete a student who was the only person
enrolled in a particular course, we will also lose the information about the course.
Solution: Normalizing the database can help avoid this by keeping course information in a
separate table.
3. Updation Anomaly: Occurs when updating redundant data in one place doesn't reflect across all
instances of that data in the table.
Example: If the instructor for a course changes, and this information is repeated across
multiple rows (one for each student in the course), updating one row will leave other rows with
outdated information.
Solution: This anomaly can be removed by normalizing the table and keeping instructor
information separate from student enrollment data.

Mechanism to remove anomalies:

Normalization: The process of structuring a relational database to reduce redundancy and improve
data integrity. It involves breaking a table into smaller, related tables and defining relationships
between them.
First Normal Form (1NF): Eliminates repeating groups by ensuring each cell contains atomic
values.
Second Normal Form (2NF): Eliminates partial dependencies, ensuring every non-key
attribute is fully dependent on the primary key.
Third Normal Form (3NF): Removes transitive dependencies, ensuring non-key attributes
are not dependent on other non-key attributes.
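The updation anomaly and its normalized fix can be demonstrated concretely. This sketch uses sqlite3 with invented student and instructor names; it shows the redundant table drifting inconsistent after a partial update, while the normalized COURSE table needs only one change:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE STUDENT_COURSE "
            "(Student_ID, Student_Name, Course_ID, Course_Name, Instructor)")
cur.executemany("INSERT INTO STUDENT_COURSE VALUES (?,?,?,?,?)", [
    (1, "Amit", "C1", "DBMS", "Dr. Rao"),
    (2, "Neha", "C1", "DBMS", "Dr. Rao"),
])
# Careless update touches only one of the redundant rows
cur.execute("UPDATE STUDENT_COURSE SET Instructor = 'Dr. Iyer' WHERE Student_ID = 1")
cur.execute("SELECT COUNT(DISTINCT Instructor) FROM STUDENT_COURSE WHERE Course_ID = 'C1'")
inconsistent = cur.fetchone()[0]
print(inconsistent)   # 2 -- the same course now reports two different instructors

# After normalization the instructor is stored once, so a single UPDATE suffices
cur.execute("CREATE TABLE COURSE (Course_ID PRIMARY KEY, Course_Name, Instructor)")
cur.execute("INSERT INTO COURSE VALUES ('C1', 'DBMS', 'Dr. Rao')")
cur.execute("UPDATE COURSE SET Instructor = 'Dr. Iyer' WHERE Course_ID = 'C1'")
fixed = cur.execute("SELECT Instructor FROM COURSE").fetchall()
print(fixed)          # [('Dr. Iyer',)]
```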
Q2 (b)
(i) Creation of views

Definition: A view is a virtual table created using a SQL query that can combine data from one or
more tables. It simplifies data access and can be used to present a specific subset of data.

-- Create a view to show all students enrolled in courses
CREATE VIEW StudentCourses AS
SELECT Student_Name, Course_Name
FROM STUDENT S
JOIN COURSE C ON S.Course_ID = C.Course_ID;

(ii) Creation of sequences

Definition: A sequence is an object that generates a sequence of unique numbers, often used for auto-
incrementing primary keys.

-- Create a sequence to generate unique student IDs starting from 100
CREATE SEQUENCE student_seq
START WITH 100
INCREMENT BY 1;

(iii) Outer join

Definition: An outer join returns records from two tables even if there is no matching record in one of
the tables. There are three types: LEFT, RIGHT, and FULL outer join.
-- LEFT OUTER JOIN to show all students, including those without a course
SELECT S.Student_Name, C.Course_Name
FROM STUDENT S
LEFT OUTER JOIN COURSE C ON S.Course_ID = C.Course_ID;

(iv) To give access permission to any user

Definition: Granting permissions allows a user to perform specific actions (like SELECT, INSERT) on
database objects.
-- Grant SELECT permission on STUDENT table to user 'user1'
GRANT SELECT ON STUDENT TO user1;
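The view and outer-join statements from (i) and (iii) can be run as-is in SQLite (which, note, supports neither CREATE SEQUENCE nor GRANT; those require a server DBMS such as Oracle or PostgreSQL). The sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE STUDENT (Student_ID INTEGER, Student_Name TEXT, Course_ID TEXT);
CREATE TABLE COURSE (Course_ID TEXT, Course_Name TEXT);
INSERT INTO STUDENT VALUES (1, 'Amit', 'C1'), (2, 'Neha', NULL);
INSERT INTO COURSE VALUES ('C1', 'DBMS');
CREATE VIEW StudentCourses AS
  SELECT Student_Name, Course_Name
  FROM STUDENT S JOIN COURSE C ON S.Course_ID = C.Course_ID;
""")

view_rows = cur.execute("SELECT * FROM StudentCourses").fetchall()
print(view_rows)   # [('Amit', 'DBMS')] -- the inner join hides Neha, who has no course

join_rows = cur.execute("""SELECT S.Student_Name, C.Course_Name
                           FROM STUDENT S LEFT OUTER JOIN COURSE C
                           ON S.Course_ID = C.Course_ID
                           ORDER BY S.Student_ID""").fetchall()
print(join_rows)   # [('Amit', 'DBMS'), ('Neha', None)] -- outer join keeps Neha
```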

Q3
Q3 (a)
Integrity constraints are rules that ensure the accuracy, consistency, and reliability of data in a relational
database. They enforce certain conditions on the data and restrict invalid data from being entered into the
database.

Types of Integrity Constraints:


1. Entity Integrity: Ensures that each table has a primary key and that the primary key column(s)
cannot have NULL values.
Example: In a STUDENT table, Student_ID is the primary key and must be unique and not
null.
2. Referential Integrity: Ensures that foreign keys in a table correctly reference primary keys in
another table, and it prevents actions that would leave orphan records.
Example: If a COURSE table has a foreign key Student_ID that references the STUDENT table,
the value of Student_ID in COURSE must match an existing Student_ID in the STUDENT table.
3. Domain Integrity: Ensures that all data in a column adheres to a specific predefined data type,
format, or range of values.
Example: In an EMPLOYEE table, the Age column must contain only positive integer values
between 18 and 65.
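All three constraint types can be declared and watched in action. A minimal sketch in sqlite3 (table contents invented; note SQLite enforces foreign keys only when the pragma is enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite checks FKs only when this is on
conn.executescript("""
CREATE TABLE STUDENT (Student_ID INTEGER PRIMARY KEY, Name TEXT NOT NULL);
CREATE TABLE COURSE (Course_ID TEXT PRIMARY KEY,
                     Student_ID INTEGER REFERENCES STUDENT(Student_ID));
CREATE TABLE EMPLOYEE (Emp_ID INTEGER PRIMARY KEY,
                       Age INTEGER CHECK (Age BETWEEN 18 AND 65));
INSERT INTO STUDENT VALUES (1, 'Amit');
""")

violations = []
for sql in [
    "INSERT INTO STUDENT VALUES (1, 'Dup')",   # entity integrity: duplicate primary key
    "INSERT INTO COURSE VALUES ('C1', 99)",    # referential integrity: no student 99
    "INSERT INTO EMPLOYEE VALUES (1, 17)",     # domain integrity: CHECK range fails
]:
    try:
        conn.execute(sql)
    except sqlite3.IntegrityError as e:
        violations.append(str(e))

print(len(violations))   # 3 -- every bad row was rejected by its constraint
```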

Q3 (b)
Relationship between Database Security and Database Integrity:

Database security focuses on protecting the database from unauthorized access, tampering, or
misuse, ensuring that only authorized users can modify or view the data. On the other hand, database
integrity ensures the correctness and consistency of the data stored in the database.
These two concepts are related because ensuring the security of a database (by restricting access and
modifications) also helps in maintaining the integrity of the data. If unauthorized users gain access,
they might change or corrupt the data, leading to integrity issues.

Levels of Security Measures to Protect the Database:

1. Physical Security: Protecting the hardware and infrastructure that hosts the database from
physical threats such as unauthorized access, theft, or damage (e.g., securing servers in data centers).
2. Network Security: Safeguarding the communication channels between users and the database,
ensuring that data is transmitted securely. Techniques like encryption, firewalls, and secure protocols
(e.g., SSL/TLS) are used to protect the network.
3. Access Control: Ensuring that only authorized users can access or modify the data. This includes
using authentication methods (passwords, biometrics) and role-based access control (RBAC) to grant
users different permission levels.
4. Application Security: Implementing security mechanisms within the application that interacts
with the database, such as input validation, to prevent attacks like SQL injection.
5. Data Encryption: Protecting sensitive data within the database by encrypting it so that
unauthorized users cannot read it even if they gain access.

Q3 (c)
F = {A → D, {A, B} → C, D → E}
Assume R is decomposed into R1(A, B, C) and R2(A, D, E).
Is this decomposition lossless? Justify. (4 Marks)
To determine if the decomposition of relation R into R1 (A, B, C) and R2 (A, D, E) is lossless, we use the
lossless join property. A decomposition is lossless if the original relation can be recreated by joining the
decomposed relations without losing any information.

The given functional dependencies are:

1. A → D
2. {A, B} → C
3. D → E

Check whether the decomposition is lossless:
A binary decomposition {R1, R2} is lossless if the common attributes R1 ∩ R2 functionally determine all attributes of at least one of the two relations, i.e., R1 ∩ R2 is a key of R1 or of R2.
The common attribute between R1(A, B, C) and R2(A, D, E) is A.
Compute A+ : from A → D we obtain D, and from D → E we obtain E, so A+ = {A, D, E}, which contains every attribute of R2.

Thus A, the only shared attribute, is a key of R2, so each R1 tuple matches exactly one (D, E) pair when the fragments are joined on A, and no spurious tuples arise. Therefore the decomposition is lossless.

Justification:
Since R1 ∩ R2 = {A} and A → ADE = R2, the lossless-join condition (R1 ∩ R2) → R2 holds, so joining R1 and R2 recreates R exactly.
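The argument can be checked on a concrete instance: take a relation that satisfies the three FDs, project it onto R1 and R2, natural-join on A, and compare. The tuple values below are invented for the check:

```python
# A relation over (A, B, C, D, E) satisfying A -> D, AB -> C, D -> E:
# 'a1' always pairs with ('d1', 'e1'), 'a2' with ('d2', 'e2').
r = {
    ('a1', 'b1', 'c1', 'd1', 'e1'),
    ('a1', 'b2', 'c2', 'd1', 'e1'),
    ('a2', 'b1', 'c3', 'd2', 'e2'),
}
r1 = {(a, b, c) for (a, b, c, d, e) in r}    # projection onto R1(A, B, C)
r2 = {(a, d, e) for (a, b, c, d, e) in r}    # projection onto R2(A, D, E)

# Natural join of the two fragments on the shared attribute A
joined = {(a, b, c, d, e)
          for (a, b, c) in r1
          for (a2_, d, e) in r2 if a2_ == a}

print(joined == r)   # True: no tuples lost and no spurious tuples created
```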

Q4
Q4 (a)
Log-based recovery is a technique used in databases to ensure that the system can recover from failures
(like system crashes) and restore the database to a consistent state. This scheme relies on maintaining a log
(a record of all changes made to the database) to redo or undo operations in case of a failure.

Example:
Let's assume there are two transactions T1 and T2 modifying bank account balances.

1. Initial State: Account A = 500, Account B = 300


2. Transaction T1: Transfer 100 from A to B.
Log:
<T1, Start>
<T1, A, 500, 400> (Account A’s balance changes from 500 to 400)
<T1, B, 300, 400> (Account B’s balance changes from 300 to 400)
<T1, Commit>
3. Transaction T2: Transfer 50 from A to B.
Log:
<T2, Start>
<T2, A, 400, 350> (Account A’s balance changes from 400 to 350)
<T2, B, 400, 450> (Account B’s balance changes from 400 to 450)
System Crashes before Commit

Recovery Process:

Redo Phase: The system checks for committed transactions. It finds that T1 committed, so it redoes all of T1's operations (if not already applied).
Undo Phase: The system finds that T2 did not commit, so it undoes all changes made by T2, restoring Account A and Account B to their values before T2 started (i.e., A = 400 and B = 400).
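The redo/undo passes over the log records above can be simulated in a few lines. This is a toy sketch, not a real recovery manager: each update record is (txn, item, old_value, new_value), and the crash is modelled by the missing commit record for T2:

```python
log = [
    ("T1", "start"), ("T1", "A", 500, 400), ("T1", "B", 300, 400), ("T1", "commit"),
    ("T2", "start"), ("T2", "A", 400, 350), ("T2", "B", 400, 450),
    # system crashes here: no <T2, commit> record
]

committed = {rec[0] for rec in log if rec[1] == "commit"}

db = {}
# Redo phase: forward scan, re-apply new values of committed transactions
for rec in log:
    if len(rec) == 4 and rec[0] in committed:
        db[rec[1]] = rec[3]
# Undo phase: backward scan, restore old values of uncommitted transactions
for rec in reversed(log):
    if len(rec) == 4 and rec[0] not in committed:
        db[rec[1]] = rec[2]

print(db)   # {'A': 400, 'B': 400} -- T1's transfer kept, T2 rolled back
```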

Q4 (b)
Given Functional Dependencies (F):

1. A → BC
2. CD → E
3. B → D
4. E → A

Step 1: Compute attribute closures to find candidate keys. The closure X+ of an attribute set X is everything X can determine by repeatedly applying the functional dependencies.

1. Closure of A:

From A → BC, we infer B and C.
From B → D, we infer D.
From CD → E, we infer E (since we now have both C and D).
From E → A, we infer A (already present, closing the cycle).

Thus A+ = {A, B, C, D, E}, meaning A determines all attributes, so A is a candidate key.

2. Closures of the other single attributes:

B+ = {B, D} (from B → D).
C+ = {C} (no dependency applies).
D+ = {D} (no dependency applies).
E+ = {E, A, B, C, D} (from E → A, then proceed as for A+), so E is also a candidate key.

3. Closures of pairs not containing A or E (pairs containing a key are superkeys, not minimal):

BC+ : B → D gives D, CD → E gives E, E → A gives A, so BC+ = {A, B, C, D, E}. BC is a candidate key, and it is minimal since neither B+ nor C+ alone covers R.
CD+ : CD → E gives E, E → A gives A, A → BC gives B, so CD+ = {A, B, C, D, E}. CD is likewise a minimal candidate key.
BD+ = {B, D}, so BD is not a key; no other pair of non-key attributes remains.

List of Candidate Keys for R:

A candidate key is a minimal set of attributes whose closure contains every attribute of the relation. From the closure analysis, the candidate keys of R(A, B, C, D, E) are:

A
E
BC
CD

Each is minimal: removing any attribute from BC or CD leaves a set whose closure no longer covers all of R.
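The closure computation can be automated with a small script (illustrative, not part of the assignment). It computes X+ for every attribute subset and keeps the minimal ones; note it confirms that, besides A and E, the composite sets BC and CD are also candidate keys:

```python
from itertools import combinations

fds = [({'A'}, {'B', 'C'}), ({'C', 'D'}, {'E'}), ({'B'}, {'D'}), ({'E'}, {'A'})]
attrs = set('ABCDE')

def closure(x):
    """Attribute closure X+ under fds: apply every usable FD until nothing changes."""
    x = set(x)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= x and not rhs <= x:
                x |= rhs
                changed = True
    return x

# A superkey is any set whose closure is the whole relation schema
superkeys = [set(c) for n in range(1, 6)
             for c in combinations(sorted(attrs), n)
             if closure(c) == attrs]
# A candidate key is a superkey with no proper subset that is also a superkey
keys = [k for k in superkeys if not any(s < k for s in superkeys)]
print(sorted(''.join(sorted(k)) for k in keys))   # ['A', 'BC', 'CD', 'E']
```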

Q5
Q5 (a)
Limitations of File-based Systems:

1. Data Redundancy: In a file-based system, the same data may be duplicated across multiple files,
leading to inconsistencies and wastage of storage space.
2. Data Inconsistency: Due to redundancy, data may become inconsistent if one file is updated and
others are not, resulting in different values for the same data in different places.
3. Lack of Data Integration: File-based systems store data in separate files, which makes it difficult
to access related data together. There is no mechanism for creating relationships between different
files.
4. Data Isolation: Data is stored in different files in a scattered manner, making it hard to retrieve
and access the necessary data when needed.

How DBMS Overcomes These Limitations:

1. Reduction in Data Redundancy: DBMS uses a central database that stores data only once,
eliminating redundancy.
2. Data Consistency: Since data is centralized and managed efficiently, any updates to the database
automatically reflect across the system, maintaining consistency.
3. Data Integration: DBMS allows related data to be stored in multiple tables with relationships (like
primary and foreign keys), which enables easy integration of data across different entities.
4. Data Access: DBMS provides powerful querying tools (like SQL) that allow users to retrieve data
efficiently from multiple tables using joins and conditions.

Q5 (b)
Importance of File Organization in Databases:
File organization refers to how data is stored in the database at the physical level. It is important because it
directly impacts the performance of the database in terms of data retrieval, updating, and storage efficiency.
Proper file organization ensures:

1. Efficient Data Access: Well-organized files allow quick retrieval of data, reducing response time.
2. Optimized Storage: File organization helps in minimizing the storage requirements and ensures
effective space utilization.
3. Ease of Maintenance: Proper organization facilitates easy management, modification, and
maintenance of data over time.
4. Data Integrity: Organized files help in maintaining data integrity by ensuring that related data is
stored in a logical order.

Types of File Organizations:

1. Heap (Unordered) File Organization: Records are stored in no specific order. New records are
simply added at the end of the file.
2. Sequential File Organization: Records are stored in a sequential order based on a key field. This
makes it easier to access records in a specific order.
3. Indexed File Organization: A separate index is maintained, pointing to the actual data. This
allows quick access to records based on key values.
4. Hashed File Organization: A hash function is used to calculate the address of the data record,
allowing fast direct access to records.

Detailed Discussion of Sequential File Organization:

Sequential File Organization: In this type of organization, records are stored in a specific sequence
based on the value of one or more fields, typically a primary key. This is useful when you need to
access records in a sorted order or process them sequentially, such as for reports or batch processing.
Advantages:
1. Efficient Access for Sequential Reads: When records are read in the same sequence as
they are stored, access is very fast.
2. Easy to Implement: Sequential file organization is relatively simple to implement and
manage.
Disadvantages:
1. Slow for Random Access: Randomly accessing a record in the middle of the file can be slow
since the system may have to scan the file sequentially.
2. Insertion Overhead: Inserting a new record in the correct position can be slow, as it may
require shifting existing records to maintain the order.
Example:
Let’s assume a file stores employee records, sorted by Employee_ID . If we need to access the records in
order of Employee_ID , we can quickly iterate through them. However, if we need to insert a new
employee record, the system must insert it at the correct position based on Employee_ID and may need
to shift other records to maintain the order.
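The insertion overhead can be sketched with a sorted Python list standing in for the sequential file (the employee IDs and names are made up): a binary search finds the slot quickly, but keeping the order means shifting every record after the insertion point.

```python
import bisect

# A "sequential file" of employee records, sorted by Employee_ID
records = [(101, "Asha"), (205, "Ravi"), (310, "Meena")]

new = (150, "Kiran")
pos = bisect.bisect_left(records, new)   # binary search for the correct slot
records.insert(pos, new)                 # shifts every later record by one

print([emp_id for emp_id, _ in records])   # [101, 150, 205, 310] -- order maintained
```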

Q6
Q6 (a)
The Two-Phase Locking Protocol (2PL) ensures serializability in database transactions: each transaction first acquires all the locks it needs (the growing phase) and only afterwards releases them (the shrinking phase), never acquiring a new lock after releasing one. This guarantees that any resulting schedule is conflict-serializable, leaving the database in a correct, consistent state, and it prevents race conditions by providing concurrency control.

Disadvantages of Basic 2PL:

1. Deadlocks: 2PL can lead to deadlocks, where two or more transactions are waiting for each other
to release locks, and none can proceed.
2. Cascading Aborts: In some cases, when a transaction fails, other transactions dependent on it may
also need to be aborted, leading to a cascading effect.
3. Starvation: A transaction may be delayed indefinitely if it keeps waiting for a resource that is always
locked by other transactions, causing starvation.

Ways to Overcome the Disadvantages:

1. Deadlock Prevention and Detection: Algorithms like wait-die or wound-wait can be used for
deadlock prevention, or a deadlock detection mechanism can periodically check for deadlocks
and resolve them.
2. Timestamp-based Protocols: These protocols assign timestamps to transactions to decide the
order of execution. This helps in avoiding some of the limitations of 2PL, especially deadlocks.
3. Strict 2PL: In strict 2PL, locks are held until the transaction commits or aborts. This reduces the
chances of cascading aborts by ensuring that once a transaction releases locks, its results are
finalized.
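Deadlock detection, mentioned in point 1, usually means looking for a cycle in the wait-for graph, where an edge T1 → T2 records that T1 is waiting for a lock held by T2. A minimal illustrative sketch:

```python
def has_cycle(graph):
    """Depth-first search for a cycle in a wait-for graph {txn: [txns it waits on]}."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            # A neighbour already on the DFS stack closes a cycle: deadlock
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# T1 waits on T2 and T2 waits on T1: classic deadlock
print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))   # True
# T2 holds no pending waits, so it can finish and T1 then proceeds
print(has_cycle({"T1": ["T2"], "T2": []}))       # False
```

A real DBMS would run such a check periodically and abort one transaction in the cycle (the "victim") to break the deadlock.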

Q6 (b)
The four basic properties of a transaction are known as ACID properties. These properties ensure that
database transactions are processed reliably and maintain the integrity of the database.

1. Atomicity:
Definition: A transaction is an atomic unit of work that either completes in its entirety or
does not execute at all. If any part of the transaction fails, the entire transaction is rolled back,
and the database is left unchanged.
Example: Consider a banking transaction where a customer transfers ₹500 from Account A to
Account B. The transaction involves two steps:
1. Debit ₹500 from Account A.
2. Credit ₹500 to Account B.
If the system crashes after debiting Account A but before crediting Account B, the
entire transaction is rolled back, leaving both accounts unchanged, preserving
atomicity.
2. Consistency:
Definition: A transaction must take the database from one consistent state to another
consistent state. Any integrity constraints defined on the database must be satisfied at the
end of the transaction.
Example: Assume a bank has a rule that the total balance of all accounts must remain the
same. If a transaction moves ₹500 from one account to another, the total balance should
remain unchanged, ensuring consistency. If the initial total is ₹10,000, the total must still be
₹10,000 after the transaction.
3. Isolation:
Definition: Transactions should operate independently without interference from other
concurrent transactions. The intermediate states of a transaction should not be visible to other
transactions.
Example: Consider two transactions:
1. Transaction 1: Transfers ₹500 from Account A to Account B.
2. Transaction 2: Reads the balance of Account A.
Even if Transaction 2 runs concurrently with Transaction 1, it will see either the old
balance (before Transaction 1) or the new balance (after Transaction 1), but not an
intermediate state where ₹500 has been deducted from Account A but not yet added to
Account B.
4. Durability:
Definition: Once a transaction has been committed, its changes are permanent, even in the
event of a system failure. The changes made by the transaction must be recorded in non-
volatile storage.
Example: After the transfer of ₹500 from Account A to Account B is completed and the
transaction is committed, the changes are stored in the database permanently. Even if the
system crashes right after the commit, the balances of Accounts A and B will remain updated
when the system is restored, ensuring durability.
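Atomicity in particular is easy to observe with sqlite3: wrapping the two transfer steps in one transaction means a failed debit rolls back the whole transfer. The account names and the CHECK constraint standing in for "insufficient funds" are invented for this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 500), ("B", 300)])
conn.commit()

try:
    with conn:   # one transaction: commit on success, rollback on any error
        # Debit 600 from A fails the CHECK (balance would go negative)...
        conn.execute("UPDATE account SET balance = balance - 600 WHERE name = 'A'")
        conn.execute("UPDATE account SET balance = balance + 600 WHERE name = 'B'")
except sqlite3.IntegrityError:
    pass         # ...so neither update survives

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)   # {'A': 500, 'B': 300} -- both accounts unchanged, atomicity preserved
```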

Q7
Q7 (a)
Fragmentation of a Database refers to the process of dividing a database into smaller, logical pieces
(fragments) that can be stored across different locations in a distributed database management system
(DDBMS). The fragments are managed in such a way that the database's integrity and usability are
maintained.

Need for Fragmentation in DDBMS:

1. Improved Performance: Fragmentation allows data to be stored closer to where it is needed, reducing access time and improving performance.
2. Increased Reliability and Availability: By distributing fragments across different sites, the
system can continue to operate even if one or more sites fail, increasing fault tolerance.
3. Local Autonomy: Each site can operate independently with its fragment, enabling local decision-
making and resource management.

Types of Fragmentation:

1. Horizontal Fragmentation:
In horizontal fragmentation, a relation (table) is divided into subsets of tuples (rows) based on
certain conditions, where each fragment contains a subset of rows that satisfy a specific
condition.
Example: Consider a CUSTOMER table with attributes Customer_ID , Name , and City .
Horizontal fragmentation could divide the table into two fragments:
Fragment 1: All customers in 'New York'.
Fragment 2: All customers in 'Los Angeles'.
2. Vertical Fragmentation:
In vertical fragmentation, a relation is divided into subsets of attributes (columns), where each
fragment contains a subset of columns, and the primary key is typically included in each
fragment to maintain uniqueness.
Example: For a CUSTOMER table with Customer_ID , Name , and City , vertical fragmentation
could divide it into two fragments:
Fragment 1: Customer_ID , Name
Fragment 2: Customer_ID , City
3. Mixed (Hybrid) Fragmentation:
Mixed fragmentation is a combination of both horizontal and vertical fragmentation. A relation
is first horizontally fragmented, and then each horizontal fragment is further fragmented
vertically, or vice versa.
Example: First, fragment the CUSTOMER table horizontally by City , and then further
fragment each horizontal fragment vertically.
Horizontal Fragment 1: All customers in 'New York'.
Vertical Fragment 1A: Customer_ID , Name for New York customers.
Vertical Fragment 1B: Customer_ID , City for New York customers.
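Both fragmentation styles on the CUSTOMER example can be sketched with plain Python tuples (the customer rows are invented); the reconstruction steps show that, done correctly, fragmentation loses no information:

```python
customers = [
    (1, "Asha", "New York"),
    (2, "Ravi", "Los Angeles"),
    (3, "Meena", "New York"),
]

# Horizontal fragmentation: subsets of rows selected by a condition on City
ny = [row for row in customers if row[2] == "New York"]
la = [row for row in customers if row[2] == "Los Angeles"]
horizontal_ok = sorted(ny + la) == sorted(customers)   # union restores the relation
print(horizontal_ok)   # True

# Vertical fragmentation: subsets of columns, each keeping the key Customer_ID
frag1 = {cid: name for cid, name, city in customers}   # (Customer_ID, Name)
frag2 = {cid: city for cid, name, city in customers}   # (Customer_ID, City)
rebuilt = [(cid, frag1[cid], frag2[cid]) for cid in sorted(frag1)]
print(rebuilt == customers)   # True: joining on the key reconstructs the table
```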

Q7 (b)
Need for Distributed DBMS (DDBMS) over Centralized DBMS:

1. Data Locality: In a centralized DBMS, all data is stored at one central site, which can lead to
performance bottlenecks and higher communication costs when users from different locations need
to access the data. DDBMS stores data closer to where it is needed, reducing latency and improving
access speed.
2. Reliability and Availability: Centralized DBMS systems have a single point of failure. If the
central server fails, the entire system becomes unavailable. In a DDBMS, data is distributed across
multiple locations, providing better fault tolerance and ensuring that the system remains operational
even if one or more sites go down.
3. Scalability: Centralized systems struggle to scale efficiently as the amount of data and number of
users increase. DDBMS can scale horizontally by adding more sites and distributing the load across
multiple locations.

Structure of Distributed DBMS:

A Distributed DBMS consists of several interconnected sites that store and manage their own data. The
structure can be visualized as follows:

1. Local Sites (Nodes):
Each site has its own database and local DBMS that manages the data at that site. The local
DBMS handles local transactions and queries on local data.
2. Global Coordinator:
A coordinator or control system is responsible for managing distributed transactions that span
multiple sites. It ensures that the ACID properties (Atomicity, Consistency, Isolation,
Durability) of transactions are maintained across the distributed system.
3. Communication Network:
All sites in a distributed DBMS are connected through a communication network. The network
enables sites to exchange data and coordinate transactions that involve multiple locations.
4. Distributed Query Processor:
The query processor breaks down global queries into sub-queries that can be processed locally
at each site and then combines the results from each site to generate the final answer.

Q8
To model this scenario as an Entity-Relationship (ER) diagram, we need to identify the entities,
relationships, constraints, and cardinality mappings involved in the organization's medical facility
system for employees. Let's break this down step by step.

Identified Entities:
1. Employee: The organization provides medical facilities to its employees. Each employee can avail
themselves of services and is associated with dependents.
2. Doctor: A list of doctors is provided by the organization for employees to receive medical services.
3. Hospital: Employees can also receive medical care from designated hospitals.
4. Test Centre: Employees can get medical tests from test centres that are on the organization's list.
5. Dependents: Employees can extend the medical facility to their dependents (family members).
6. Organization: The entity that pays for the services provided to employees and dependents.

Relationships:
1. Employee-Doctor: An employee can receive services from multiple doctors, and a doctor can serve
multiple employees. The relationship between them can be named Receives Treatment.
2. Employee-Hospital: An employee can be treated at multiple hospitals, and a hospital can treat
many employees. The relationship can be called Receives Treatment.
3. Employee-Test Centre: An employee may undergo tests at different test centres, and a test centre
serves many employees. This relationship can be called Receives Test.
4. Employee-Dependents: Each employee may have dependents, and dependents can avail the same
services from doctors, hospitals, and test centres. The relationship can be called Has Dependents.
5. Doctor-Hospital: Doctors may work at one or more hospitals, and hospitals may have multiple
doctors. This relationship is optional but can be included if relevant.
6. Organization-Doctors, Hospitals, Test Centres: The organization pays for the services
provided by doctors, hospitals, and test centres. This relationship can be called Raises Bill.

Assumptions:
Each employee may have multiple dependents, but each dependent is associated with exactly one
employee.
Doctors, hospitals, and test centres are assumed to be contracted directly by the organization, and
employees do not pay for services.
A doctor or hospital can raise a bill only after providing a service to the employee or dependent.

Cardinality Mappings:
Employee-Doctor: Many-to-Many (An employee can consult many doctors, and each doctor can
treat multiple employees).
Employee-Hospital: Many-to-Many (An employee can go to multiple hospitals, and each hospital
can treat multiple employees).
Employee-Test Centre: Many-to-Many (An employee can undergo tests at multiple test centres, and
each test centre serves multiple employees).
Employee-Dependent: One-to-Many (An employee can have multiple dependents, but a dependent
is linked to only one employee).
Organization-Doctors/Hospitals/Test Centres: One-to-Many (The organization pays each
doctor, hospital, or test centre directly).

ER Diagram:
We can now represent this scenario in an ER diagram. Here's a textual description of how it would look:

1. Entities:
Employee (Employee_ID, Name, Department)
Doctor (Doctor_ID, Name, Specialization)
Hospital (Hospital_ID, Name, Location)
Test Centre (Test_Centre_ID, Name, Location)
Dependent (Dependent_ID, Name, Relation)
Organization (Org_ID)
2. Relationships:
Receives Treatment between Employee and Doctor (Many-to-Many)
Receives Treatment between Employee and Hospital (Many-to-Many)
Receives Test between Employee and Test Centre (Many-to-Many)
Has Dependents between Employee and Dependent (One-to-Many)
Raises Bill between Organization and Doctors/Hospitals/Test Centres (One-to-Many)

The ER diagram will contain entities as rectangles, relationships as diamonds, and attributes as ovals. Arrows
between the entities will represent the cardinality mappings (1-to-many, many-to-many, etc.).

One-to-Many relationships (like Employee and Dependents) will have an arrow on the dependent
side.
Many-to-Many relationships (like Employee and Doctor) will have no arrows, representing that both
sides can have multiple entities associated.

You might also like