This PPT covers functional dependencies, Armstrong's inference rules, and data normalization (1NF, 2NF, and 3NF). It also explains full functional dependencies, multivalued dependencies, and transitive dependencies.
Functional dependencies play a key role in database design and normalization. A functional dependency (FD) is a constraint stating that one set of attributes determines another: given the value of the left side, the value of the right side is uniquely determined. Armstrong's axioms are used to derive implied FDs from a given set of FDs. The closure of an attribute set (or of a set of FDs) finds all attributes (or FDs) that are logically implied. Normalization aims to eliminate anomalies and is assessed using normal forms such as 1NF, 2NF, 3NF, and BCNF, which impose progressively stricter constraints on table designs.
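The closure computation mentioned above can be sketched in a few lines. This is a minimal illustration, not from the slides; the relation R(A, B, C, D) and its FDs are invented for the example.

```python
# Attribute-closure computation under a set of FDs (a sketch;
# the relation R(A, B, C, D) and its FDs are invented examples).

def closure(attrs, fds):
    """Return the closure of attrs under fds, a list of (lhs, rhs) sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If every attribute of the left side is already determined,
            # the right side is determined too (transitivity).
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example FDs: A -> B, B -> C, {A, C} -> D
fds = [({"A"}, {"B"}), ({"B"}, {"C"}), ({"A", "C"}, {"D"})]
print(sorted(closure({"A"}, fds)))  # ['A', 'B', 'C', 'D'] -- A is a key
```

Since the closure of {A} covers every attribute, {A} is a (candidate) key of this relation.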
The document discusses database normalization and functional dependencies. It defines normalization as imposing rules on database tables to eliminate anomalies during data manipulation. A functional dependency is defined as a relationship in which one set of attributes determines another. The inference rules for functional dependencies (reflexivity, augmentation, transitivity, union, and decomposition) are explained with examples. Normalization and an understanding of functional dependencies help design high-quality databases without redundancies or anomalies.
Functional dependencies in Database Management System, by Kevin Jadiya
The slides attached here mainly describe functional dependencies in a database management system, how to find the closure of a set of functional dependencies, and finally how decomposition is performed on database tables.
Database Systems - Normalization of Relations (Chapter 4/3), by Vidyasagar Mundroy
The document discusses normalization, which is a process for relational database design that reduces data redundancy and improves data integrity. It involves decomposing relations to eliminate anomalies like insertion, deletion, and modification anomalies. Several normal forms are described - 1NF, 2NF, 3NF, BCNF, 4NF, and 5NF - each addressing different types of dependencies and anomalies. The goal of normalization is to organize the data in a logical manner and break relations into smaller, less redundant relations without affecting the information contained.
Normalization is the process of organizing data in a database. This includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency.
The document defines functional dependencies and describes how they constrain relationships between attributes in a database relation. A functional dependency X → Y means the Y attribute is functionally determined by the X attribute(s). The closure of a set of functional dependencies includes all dependencies that can be logically derived. Normalization aims to eliminate anomalies by decomposing relations based on their functional dependencies until a desired normal form is reached.
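The definition of X → Y can be tested directly against a concrete relation instance: no two tuples may agree on X while differing on Y. A minimal sketch; the sample student rows are invented for illustration.

```python
# Checking whether X -> Y holds in a relation instance (a sketch;
# the sample rows below are invented for illustration).

def fd_holds(rows, X, Y):
    """True iff no two rows agree on the X attributes but differ on Y."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in X)
        val = tuple(row[a] for a in Y)
        if key in seen and seen[key] != val:
            return False  # same X-value, different Y-value: FD violated
        seen[key] = val
    return True

students = [
    {"id": 1, "name": "Ana", "dept": "CS"},
    {"id": 2, "name": "Ben", "dept": "CS"},
    {"id": 2, "name": "Ben", "dept": "EE"},  # same id, two departments
]
print(fd_holds(students, ["id"], ["name"]))  # True
print(fd_holds(students, ["id"], ["dept"]))  # False
```

Note that an instance can only refute an FD, not prove it: an FD is a constraint on all legal states, not just on one snapshot.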
This document provides an overview of the relational model and relational algebra operations. It defines key concepts like relations, attributes, tuples, domains, keys and foreign keys. It describes common relational algebra operations like selection, projection, joins and set operations. Examples are provided to demonstrate how to write relational algebra queries using selection and projection operations on sample student and employee tables.
This document discusses dependency preserving decomposition in relational databases. It defines dependency preservation as decomposing a relation such that the set of functional dependencies is preserved. An algorithm is presented to check if a decomposition preserves dependencies by iterating through each dependency and checking if the right hand side is contained within the closure of the left hand side within the decomposed relations. An example is provided to demonstrate how to apply the algorithm to verify a decomposition preserves dependencies.
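The algorithm just described can be sketched as follows. For each FD X → Y, the set X is grown by repeatedly closing it inside each fragment; the decomposition preserves dependencies iff Y is always reached. The relation, FDs, and decompositions below are invented examples.

```python
# Dependency-preservation check for a decomposition (a sketch;
# the relation R(A, B, C), its FDs, and the fragments are examples).

def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def preserves_dependencies(fds, fragments):
    """For each X -> Y, grow X using only FDs visible in fragments."""
    for lhs, rhs in fds:
        reached = set(lhs)
        changed = True
        while changed:
            changed = False
            for ri in fragments:
                # close the part of `reached` inside fragment ri,
                # then keep only what lies inside ri
                grown = closure(reached & ri, fds) & ri
                if not grown <= reached:
                    reached |= grown
                    changed = True
        if not rhs <= reached:
            return False  # this FD cannot be enforced on the fragments
    return True

# R(A, B, C) with A -> B and B -> C
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(preserves_dependencies(fds, [{"A", "B"}, {"B", "C"}]))  # True
print(preserves_dependencies(fds, [{"A", "B"}, {"A", "C"}]))  # False
```

The second decomposition fails because B → C spans fragments and cannot be checked within either one alone.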
This document discusses schema refinement and normalization in database design. It begins by introducing schema refinement and normalization. It then discusses decomposition and properties of decompositions such as lossless decomposition and dependency preservation. It covers different types of functional dependencies and how to reason about them using Armstrong's axioms. The document also discusses various normal forms including 1NF, 2NF, 3NF, BCNF, 4NF and 5NF. It provides examples of insertion, update and deletion anomalies and how normalization helps avoid them. Finally, it discusses decomposition techniques such as lossless join decomposition and lossy join decomposition.
The document discusses relational databases and functional dependencies. It begins by defining a relational database as a set of tables containing data organized in columns. Each table represents a relation between attributes. The document then provides examples of relations and attributes. It introduces the concept of functional dependencies, where a dependency a->b means that the value of b is determined by a. It provides examples and rules for determining if a functional dependency holds. The document also discusses closure of functional dependencies and canonical forms.
The document discusses functional dependencies and database normalization. It provides examples of functional dependencies and explains key concepts like:
- Functional dependencies define relationships between attributes in a relation.
- Armstrong's axioms are properties used to derive functional dependencies.
- Decomposition aims to eliminate redundancy and anomalies by breaking relations into smaller, normalized relations while preserving information and dependencies.
- A decomposition is lossless if it does not lose any information, and dependency preserving if the original dependencies can be maintained on the decomposed relations.
This slide explains the conversion procedure from ER Diagram to Relational Schema.
1. Entity set to Relation
2. Relationship set to Relation
3. Attributes to Columns, Primary key, Foreign Keys
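The three conversion steps can be sketched as SQL DDL, here executed through Python's sqlite3 module. The Student/Course/Enrolls schema is an invented illustration, not taken from the slides.

```python
# A sketch of ER-to-relational mapping in SQL via sqlite3
# (the Student/Course/Enrolls schema is invented for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- 1. Entity sets become relations; attributes become columns,
--    with the key attribute as primary key.
CREATE TABLE Student (sid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Course  (cid INTEGER PRIMARY KEY, title TEXT);
-- 2. A many-to-many relationship set becomes its own relation,
-- 3. holding foreign keys to the participating entity sets.
CREATE TABLE Enrolls (
    sid INTEGER REFERENCES Student(sid),
    cid INTEGER REFERENCES Course(cid),
    PRIMARY KEY (sid, cid)
);
""")
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['Course', 'Enrolls', 'Student']
```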
Functional dependencies (FDs) describe relationships between attributes in a database relation. FDs constrain the values that can appear across attributes for each tuple. They are used to define database normalization forms.
Some examples of FDs are: student ID determines student name and birthdate; sport name determines sport type; student ID and sport name determine hours practiced per week.
FDs can be trivial, non-trivial, multi-valued, or transitive. Armstrong's axioms provide rules for inferring new FDs. The closure of a set of attributes includes all attributes functionally determined by that set according to the FDs. Closures are used to identify keys, prime attributes, and equivalence of FDs.
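Key identification via closures, as mentioned above, amounts to finding the minimal attribute sets whose closure covers the whole schema. A sketch with an invented schema and FD set:

```python
# Finding candidate keys by closure (a sketch; R(A, B, C) and its
# FDs are invented examples).
from itertools import combinations

def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def candidate_keys(schema, fds):
    """Minimal attribute sets whose closure equals the whole schema."""
    keys = []
    for size in range(1, len(schema) + 1):
        for combo in combinations(sorted(schema), size):
            s = set(combo)
            if any(k <= s for k in keys):
                continue  # proper superset of a key: not minimal
            if closure(s, fds) == schema:
                keys.append(s)
    return keys

# R(A, B, C) with A -> B, B -> C: A is the only candidate key
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(candidate_keys({"A", "B", "C"}, fds))  # [{'A'}]
```

The prime attributes are then exactly the attributes appearing in some candidate key (here, only A).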
The document discusses various SQL concepts like views, triggers, functions, indexes, joins, and stored procedures. Views are virtual tables created by joining real tables, and can be updated, modified or dropped. Triggers automatically run code when data is inserted, updated or deleted from a table. Functions allow reusable code and improve clarity. Indexes allow faster data retrieval. Joins combine data from different tables. Stored procedures preserve data integrity.
SQL language includes four primary statement types: DML, DDL, DCL, and TCL. DML statements manipulate data within tables using operations like SELECT, INSERT, UPDATE, and DELETE. DDL statements define and modify database schema using commands like CREATE, ALTER, and DROP. DCL statements control user access privileges with GRANT and REVOKE. TCL statements manage transactions with COMMIT, ROLLBACK, and SAVEPOINT to maintain data integrity.
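The statement categories can be exercised end to end with Python's sqlite3 module. A sketch with an invented table; note that SQLite itself has no GRANT/REVOKE, so the DCL category is not shown.

```python
# DDL, DML, and TCL statements via sqlite3 (a sketch; the emp table
# and its data are invented; SQLite has no GRANT/REVOKE, so DCL is
# not demonstrated here).
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")  # DDL
cur.execute("INSERT INTO emp VALUES (1, 'Ana')")                     # DML
con.commit()                                                         # TCL
cur.execute("INSERT INTO emp VALUES (2, 'Ben')")                     # DML
con.rollback()                                      # TCL: undo Ben's insert
rows = cur.execute("SELECT id, name FROM emp").fetchall()            # query
print(rows)  # [(1, 'Ana')] -- the rolled-back insert is gone
```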
The document discusses normalization of database tables. It covers normal forms including 1NF, 2NF, 3NF, BCNF and 4NF. The process of normalization reduces data redundancies and helps eliminate data anomalies. Normalization is done concurrently with entity-relationship modeling to produce an effective database design. In some cases, denormalization may be needed to generate information more efficiently.
Normalization is the process of removing redundant data from your tables to improve storage efficiency, data integrity, and scalability.
Normalization generally involves splitting existing tables into multiple ones, which must be re-joined or linked each time a query is issued.
Why normalization?
The relation derived from a user view or data store will most likely be unnormalized. The problem usually arises when an existing system stores data in unstructured files, e.g. MS Excel spreadsheets.
This document provides an overview of Boyce-Codd normal form (BCNF) which is a type of database normalization. It explains that BCNF was developed in 1974 and aims to eliminate redundant data and ensure data dependencies make logical sense. The document outlines the five normal forms including 1NF, 2NF, 3NF, BCNF, and 4NF. It provides examples of converting non-BCNF tables into BCNF by identifying and removing overlapping candidate keys and grouping remaining items into separate tables based on functional dependencies.
This document is a student assignment on joins and their types in database management systems. It defines joins as combining related tuples from two relations based on matching conditions. The main types of joins discussed are inner joins (theta, equi, natural), and outer joins (left, right, full). Inner joins return only tuples that satisfy the condition, while outer joins return all tuples from one or both relations whether or not they match. Examples are provided to illustrate each join type.
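The inner-versus-outer distinction can be seen directly in SQL, here run through sqlite3. The dept/emp tables are invented for illustration.

```python
# Inner vs. left outer join (a sketch; the dept/emp tables are
# invented for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dept (did INTEGER PRIMARY KEY, dname TEXT);
CREATE TABLE emp  (eid INTEGER PRIMARY KEY, ename TEXT, did INTEGER);
INSERT INTO dept VALUES (1, 'CS'), (2, 'EE');
INSERT INTO emp  VALUES (10, 'Ana', 1), (11, 'Ben', NULL);
""")
# Inner (equi) join: only tuples that satisfy the join condition
inner = con.execute("""SELECT ename, dname FROM emp
                       JOIN dept ON emp.did = dept.did
                       ORDER BY eid""").fetchall()
print(inner)  # [('Ana', 'CS')]
# Left outer join: every employee, with NULL where nothing matches
outer = con.execute("""SELECT ename, dname FROM emp
                       LEFT JOIN dept ON emp.did = dept.did
                       ORDER BY eid""").fetchall()
print(outer)  # [('Ana', 'CS'), ('Ben', None)]
```

Ben, who has no department, disappears from the inner join but survives the left outer join with a NULL department name.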
The document discusses relational database design and normalization. It covers first normal form, functional dependencies, and decomposition. The goal of normalization is to avoid data redundancy and anomalies. First normal form requires attributes to be atomic. Functional dependencies specify relationships between attributes that must be preserved. Decomposition breaks relations into smaller relations while maintaining lossless join properties. Higher normal forms like Boyce-Codd normal form and third normal form further reduce redundancy.
The document discusses functional dependency and normalization. It defines functional dependency and outlines Armstrong's axioms for functional dependencies. It also defines normalization objectives and normal forms including 1NF, 2NF and 3NF. The document provides examples of functional dependencies and canonical covers. It discusses anomalies that can occur in 1NF relations including insertion, deletion and update anomalies. Finally, it defines partial and transitive dependencies.
Normalisation is a process that structures data in a relational database to minimize duplication and redundancy while preserving information. It aims to ensure data is structured efficiently and consistently through multiple forms. The stages of normalization include first normal form (1NF), second normal form (2NF), third normal form (3NF), Boyce-Codd normal form (BCNF), fourth normal form (4NF) and fifth normal form (5NF). Higher normal forms eliminate more types of dependencies to optimize the database structure.
Normalization is a process that organizes data to minimize redundancy and dependency. It divides tables to relate data without duplicating information. There are three common normal forms. The first normal form structures data into tables without repeating groups. The second normal form removes attributes not dependent on the primary key. The third normal form removes transitive dependencies so each non-key attribute depends directly on the primary key. Examples show how data can be normalized through multiple forms to eliminate anomalies and inconsistencies.
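The 3NF step, removing a transitive dependency, can be sketched with plain Python data. The student/department rows are invented; the point is that the decomposition collapses redundancy yet rejoins losslessly.

```python
# Removing a transitive dependency (3NF step) -- a sketch with
# invented data. In the flat table, student_id -> dept and
# dept -> dept_head, so dept_head depends on the key only transitively.
flat = [
    {"student_id": 1, "dept": "CS", "dept_head": "Dr. Rao"},
    {"student_id": 2, "dept": "CS", "dept_head": "Dr. Rao"},
    {"student_id": 3, "dept": "EE", "dept_head": "Dr. Lee"},
]

# Decompose into Student(student_id, dept) and Dept(dept, dept_head)
students = [{"student_id": r["student_id"], "dept": r["dept"]} for r in flat]
depts = {r["dept"]: r["dept_head"] for r in flat}  # redundancy collapses

# A natural join reconstructs the original relation losslessly
rejoined = [dict(s, dept_head=depts[s["dept"]]) for s in students]
assert rejoined == flat
print(len(depts), "dept rows instead of", len(flat))  # 2 dept rows instead of 3
```

After the decomposition, changing a department head is a single-row update instead of an update scattered across every student of that department.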
Distributed database management systems, by Dhani Ahmad
This chapter discusses distributed database management systems (DDBMS). A DDBMS governs storage and processing of logically related data across interconnected computer systems. The chapter covers DDBMS components, levels of data and process distribution, transaction management, and design considerations like data fragmentation, replication, and allocation. Transparency and optimization techniques aim to make the distributed nature transparent to users.
This document discusses SQL commands for creating tables, adding data, and enforcing integrity constraints. It covers the core SQL commands: DDL for defining schema, DML for manipulating data, DCL for controlling access, DQL for querying data, and TCL for transactions. Specific topics summarized include data types, primary keys, foreign keys, indexes, views, stored procedures, functions and triggers. Integrity constraints like NOT NULL, UNIQUE, CHECK, DEFAULT are explained. The document also covers SQL queries with filtering, sorting, patterns and ranges. Authorization using GRANT and REVOKE commands is briefly covered.
The document discusses database normalization and related concepts. It defines functional dependencies and different normal forms including 1NF, 2NF, 3NF, BCNF. Anomalies like insertion, update and deletion anomalies are explained using an example. The concepts of primary key, candidate key, composite key and partial vs full dependencies are also covered. Different types of functional dependencies like trivial, non-trivial and transitive are defined. The process of normalization up to BCNF is summarized.
The document discusses normalization and different normal forms. It begins by explaining anomalies that can occur in a database that is not properly normalized, such as insertion, update, and deletion anomalies. It then discusses 1NF, 2NF, 3NF and BCNF. Key topics covered include functional dependencies, closures, candidate keys, primary keys, and how to decompose relations to eliminate anomalies through normalization.
Normalization is a process of organizing data in a database to reduce data redundancy and inconsistencies. There are three main types of anomalies that can occur when data is not normalized: insertion anomaly, update anomaly, and deletion anomaly. These are demonstrated through an example of an employee table that is not normalized. To overcome these anomalies and normalize the data, tables need to comply with certain normal forms like 1NF, 2NF, 3NF and BCNF. These normal forms impose rules to remove anomalies through techniques like decomposing tables and removing transitive dependencies.
The document discusses various normal forms of database relations including 1NF, 2NF, 3NF, BCNF, 4NF and 5NF. It defines key concepts like functional dependencies, multi-valued dependencies, join dependencies, transitive dependencies and different types of anomalies. Various examples are provided to illustrate normalization techniques to decompose relations to higher normal forms by removing partial, transitive and non-trivial dependencies.
This document discusses database normalization and functional dependencies. It defines functional dependencies and describes how to identify them. The goals of normalization are to avoid redundant data, ensure relationships between attributes are represented, and facilitate checking updates. Normalization is done in steps to produce relations in first normal form, second normal form, third normal form and higher. Functional dependencies are used to test for superkeys and check if other dependencies are implied. The document also covers closure of attribute sets and computing the canonical cover of a set of dependencies.
Normalization is a process that converts a relation into smaller, more stable relations to reduce data redundancy and inconsistencies. It involves analyzing functional dependencies and transforming relations into normal forms like 1NF, 2NF and 3NF by removing anomalies like insert, update and delete anomalies. The document provides examples of normalization techniques like decomposing relations to remove partial, transitive and multivalued dependencies to ensure relations are free of anomalies.
The document discusses functional dependencies in databases. It defines functional dependencies as constraints between attribute sets in a relation such that the values of one attribute set uniquely determine the values of another attribute set. It provides examples of functional dependencies and discusses key concepts like full functional dependencies, closure of a set of functional dependencies, and inference rules for deriving new functional dependencies. It also distinguishes between trivial and non-trivial functional dependencies.
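A full functional dependency can be tested mechanically: X → Y must hold, and no proper subset of X may already determine Y. A sketch with invented practice data (the student-ID/sport/hours example echoes the FDs listed earlier).

```python
# Testing for a *full* functional dependency (a sketch; the sample
# rows are invented). X -> Y is full if the FD holds and no proper
# subset of X also determines Y.
from itertools import combinations

def fd_holds(rows, X, Y):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in X)
        val = tuple(row[a] for a in Y)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

def is_full_fd(rows, X, Y):
    if not fd_holds(rows, X, Y):
        return False
    for size in range(1, len(X)):
        for sub in combinations(X, size):
            if fd_holds(rows, list(sub), Y):
                return False  # a proper subset suffices: only partial
    return True

practice = [
    {"sid": 1, "sport": "golf", "hours": 5},
    {"sid": 1, "sport": "chess", "hours": 2},
    {"sid": 2, "sport": "golf", "hours": 3},
]
print(is_full_fd(practice, ["sid", "sport"], ["hours"]))  # True
print(fd_holds(practice, ["sid"], ["hours"]))             # False
```

Here hours is fully dependent on {sid, sport}: neither the student alone nor the sport alone determines it.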
The document discusses various types of normal forms in database normalization including 1NF, 2NF, 3NF, BCNF, and 4NF. It defines each normal form and provides examples to illustrate situations that violate the given normal form and how to normalize the data to satisfy that normal form. The goal of normalization is to organize data to eliminate issues like data redundancy, insertion anomalies, update anomalies, and deletion anomalies.
The document provides an overview of database design and normalization. It discusses informal design guidelines for relational schemas, functional dependencies, and various normal forms including 1NF, 2NF, 3NF, BCNF, and 4NF. It defines concepts such as candidate keys, prime attributes, and dependency preservation. It also describes anomalies like insertion, deletion, and update anomalies that can occur without normalization and the benefits of normalization.
This document discusses database normalization. It defines normalization as removing anomalies from database design, including insertion, update, and deletion anomalies. The document then explains the concepts of first, second, third, and Boyce-Codd normal forms. It provides examples of functional and transitive dependencies. The goal of normalization is to break relations into smaller relations without anomalies, reaching at least third normal form or ideally Boyce-Codd normal form. Fourth normal form is also introduced as removing multi-valued dependencies.
The document discusses database schema refinement through normalization. It introduces the concepts of functional dependencies and normal forms including 1NF, 2NF, 3NF and BCNF. Decomposition is presented as a technique to resolve issues like redundancy, update anomalies and insertion/deletion anomalies that arise due to violations of normal forms. Reasoning about functional dependencies and computing their closure is also covered.
Normalization is a process of removing redundancy from tables by splitting them into multiple tables in a sequence of normal forms. It addresses problems like inconsistent changes during updates by separating entities, attributes, and values into tables. The normal forms are first normal form (1NF), second normal form (2NF), third normal form (3NF), and Boyce-Codd normal form (BCNF). Higher normal forms impose stronger rules to remove dependencies between attributes like transitive and partial dependencies that can cause data anomalies.
Constraint specification techniques in EER Model.
The model that results from adding more semantic constructs to the original ER model is called the Extended Entity Relationship (EER) Model, or Enhanced Entity Relationship Model.
The most important new modeling construct incorporated in the EER model is the supertype/subtype relationship. This facility allows us to model a general entity type, called a “supertype”, and then subdivide it into several specialized entity types called “subtypes”.
1. Subtype Discriminator: A subtype discriminator is the attribute in the supertype entity that determines to which subtype a supertype occurrence is related, i.e. into which subtype an entity instance should be inserted.
In the EER diagram below, the subtype discriminator Emp_Type determines the subtype to which an instance is related: if the value of the discriminator is ‘P’ the instance belongs to the subtype entity PILOT, if it is ‘M’ to MECHANIC, and if it is ‘A’ to ACCOUNTANT.
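One common way to realize this pattern in SQL is a discriminator column constrained by CHECK, with each subtype as its own table keyed by the supertype's key. A sketch via sqlite3; the column names (emp_id, emp_type, licence) are assumptions, not taken from the diagram.

```python
# Discriminator pattern in SQL via sqlite3 (a sketch; the column
# names emp_id/emp_type/licence are assumptions for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE Employee (
    emp_id   INTEGER PRIMARY KEY,
    emp_type TEXT CHECK (emp_type IN ('P', 'M', 'A'))  -- discriminator
);
CREATE TABLE Pilot (
    emp_id  INTEGER PRIMARY KEY REFERENCES Employee(emp_id),
    licence TEXT
);
INSERT INTO Employee VALUES (1, 'P');  -- 'P' routes to the PILOT subtype
INSERT INTO Pilot VALUES (1, 'ATPL');
""")
rows = con.execute("""SELECT e.emp_id, e.emp_type, p.licence
                      FROM Employee e JOIN Pilot p USING (emp_id)""").fetchall()
print(rows)  # [(1, 'P', 'ATPL')]
```

The CHECK constraint enforces the legal discriminator values; enforcing that a row with emp_type = 'P' actually appears in Pilot would need application logic or triggers.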
Constraints allow us to capture some of the important business rules that apply to the relationships. The two most important types of constraints are Disjointness and Completeness constraints.
2. Disjointness Constraints: An entity supertype can have disjoint or overlapping entity subtypes. The disjointness constraint addresses whether an instance of a supertype may simultaneously be a member of two or more subtypes. It has two possible rules:
a. Disjointness Rule b. Overlapping Rule
a) Disjointness Rule: Disjoint subtypes are also known as non-overlapping subtypes. Each entity instance of the supertype can appear in only one of the subtypes: if an entity instance of the supertype is a member of one subtype, it cannot simultaneously be a member of any other subtype in the hierarchy.
b) Overlapping Rule: The overlap rule specifies that an entity instance can simultaneously be a member of two or more subtypes; each entity instance of the supertype may appear in more than one subtype.
3. Completeness Constraints: The completeness constraint specifies whether each supertype entity occurrence must also be a member of at least one subtype. The completeness constraint may be either partial or total.
a) Partial Completeness: It means that not every supertype occurrence is a member of a subtype. There may be some supertype occurrences that are not members of any of its subtypes.
b) Total Completeness: Total completeness constraint means that every supertype entity occurrence must be a member of atleast one of its subtypes.
Notation:
Specialization: Specialization is the top-down process of defining low-level and more specific subtypes of the supertype and forming supertype/subtype relationships. Each subtype is formed based on some distinguishing characteristics such as attributes specific to the s
2. OBJECTIVES
To understand functional dependencies.
To work through trivial, non-trivial, full, and transitive dependencies.
To know Armstrong's inference rules.
To understand data normalization.
3. Functional Dependencies
1. Functional dependencies play an important role in separating good database designs from bad ones.
2. A functional dependency is a type of constraint that generalizes the notion of a key.
3. A functional dependency (FD) describes how one attribute determines another attribute in a database management system (DBMS).
4. A functional dependency in a DBMS, as the name suggests, is a relationship in which attributes of a table depend on one another. Introduced by E. F. Codd, it helps in preventing data redundancy and in recognizing bad designs.
5. It is represented by the -> (arrow) sign, e.g. P -> Q.
6. It is a constraint on the set of legal relations and depicts a relationship between attributes.
7. For any relation r containing attributes P and Q, if every valid value of P uniquely determines the value of Q, this is expressed as:
P -> Q
P determines Q
Q depends on P, or Q is determined by P
4. Think of an FD P -> Q as a function f: for each value of P, f yields exactly one value of Q.
P and Q are sets of attributes; the tables below show the values stored under P and Q.
In Fig. 1, no value of P is repeated, so every value of P maps to exactly one value of Q. If you search for a1, you find a single answer, so P -> Q holds.
In Fig. 2, we cannot say what the value for a1 is, because a1 appears with two different Q values:
F(P) -> Q
a1 10
a1 20
b1 10
In this fragment, two different determinant values may share the same dependent value (both a1 and b1 map to 10, which is allowed), but one determinant value may not map to two different dependent values (a1 mapping to both 10 and 20 violates P -> Q).
Fig. 1:
P Q
a1 10
b1 20
c1 30
d1 40
Fig. 2:
P Q
a1 10
a1 20
c1 30
d1 40
5. A functional dependency is a relationship that exists when one attribute uniquely determines another.
If R is a relation with attributes P and Q, and P uniquely determines Q (P -> Q), then Q is functionally dependent on P.
An FD is thus a constraint between two sets of attributes in a relation.
P then acts as an identification key: if every attribute Q of R depends on P, then P is a primary key.
In the table below, SSN -> Fname and SSN -> Lname, so the name attributes depend on SSN within this table.
SSN Fname Lname
101 Ravi Kumar
102 Piyush Tiwari
103 Kartik Mishra
104 Harish Prasad
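The instance-level meaning of an FD, that no two rows agree on the determinant but differ on the dependent, can be checked mechanically. Below is a minimal Python sketch; the `fd_holds` helper and the dictionary encoding of the SSN table are my own illustration, not part of the slides:

```python
def fd_holds(rows, lhs, rhs):
    """Check whether the FD lhs -> rhs holds in this table instance.

    rows: list of dicts mapping attribute name -> value.
    lhs, rhs: lists of attribute names.
    """
    seen = {}  # determinant tuple -> dependent tuple
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same determinant, two different dependent values
    return True

emp = [
    {"SSN": 101, "Fname": "Ravi",   "Lname": "Kumar"},
    {"SSN": 102, "Fname": "Piyush", "Lname": "Tiwari"},
    {"SSN": 103, "Fname": "Kartik", "Lname": "Mishra"},
    {"SSN": 104, "Fname": "Harish", "Lname": "Prasad"},
]
print(fd_holds(emp, ["SSN"], ["Fname", "Lname"]))  # True: SSN determines both names
bad = [{"P": "a1", "Q": 10}, {"P": "a1", "Q": 20}, {"P": "b1", "Q": 10}]
print(fd_holds(bad, ["P"], ["Q"]))                 # False: a1 maps to 10 and 20
```

Note that a check like this can only refute an FD on a given instance; the FD itself is a statement about all legal instances of the relation.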
6. Fully Functional Dependency:
ABC -> D
1. D is fully functionally dependent on ABC.
2. D must not depend on any proper subset of ABC; that is, none of
BC -> D
C -> D
A -> D
may hold: BC cannot determine D, C cannot determine D, and A cannot determine D.
In ABC -> D, D is fully functionally dependent on ABC only under these conditions.
Sid sname tid smarks
101 Ravi 201 34
102 Piyush 202 44
103 Kartik 203 55
104 Harish 204 33
7. Here {sid, tid} is the identification key of this table, so smarks is fully functionally dependent on sid & tid together. This is full functional dependency on a combination of two attributes.
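Full functional dependency can be tested on an instance by checking the FD for the whole determinant and for each of its proper subsets. A hedged Python sketch; the helper names and the sample rows (extended so that sid and tid repeat, making the subset FDs visibly fail) are my own:

```python
from itertools import combinations

def fd_holds(rows, lhs, rhs):
    """True if lhs -> rhs holds in this table instance."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

def fully_dependent(rows, lhs, rhs):
    """rhs is FULLY functionally dependent on lhs: lhs -> rhs holds
    and no proper subset of lhs determines rhs."""
    if not fd_holds(rows, lhs, rhs):
        return False
    for k in range(1, len(lhs)):
        for sub in combinations(lhs, k):
            if fd_holds(rows, list(sub), rhs):
                return False  # a proper subset already determines rhs
    return True

marks = [
    {"sid": 101, "tid": 201, "smarks": 34},
    {"sid": 101, "tid": 202, "smarks": 44},
    {"sid": 102, "tid": 201, "smarks": 55},
]
print(fully_dependent(marks, ["sid", "tid"], ["smarks"]))  # True
print(fd_holds(marks, ["sid"], ["smarks"]))                # False: sid alone is not enough
```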
Transitive Dependency:
Consider attributes P, Q and R.
When P -> Q and Q -> R hold, P -> R also holds, and R is transitively dependent on P through Q.
Consider two tables, Employee and Department, with attributes Eno, Email, Dno and Dname.
Eno -> Dno plays the role of P -> Q, and Dno -> Dname the role of Q -> R.
(Eno -> Email is likewise a direct dependency of the form P -> Q.)
From Eno -> Dno and Dno -> Dname we get Eno -> Dname; that is, P -> Q and Q -> R give P -> R. This is a transitive dependency.
8. Trivial functional dependency:
An FD P -> Q is trivial if Q is a subset of P, i.e. the dependent attributes are already included in the determinant.
Ex: [sid, sname] -> sid is a trivial dependency, as {sid} is a subset of {sid, sname}.
Non-trivial functional dependency: an FD P -> Q is non-trivial when it holds and Q is not a subset of P.
Ex: sid -> sname (P -> Q) and sname -> sdob (Q -> R) in the tables below are non-trivial.
Sid sname
101 Ravi
102 Piyush
103 Kartik
104 Harish
Sid sname sdob
101 Ravi 1
102 Piyush 2
103 Kartik 3
104 Harish 4
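Whether an FD is trivial is a purely syntactic test: the dependent side must be a subset of the determinant. A one-function Python sketch (the helper name is illustrative):

```python
def is_trivial(lhs, rhs):
    """An FD lhs -> rhs is trivial exactly when rhs is a subset of lhs."""
    return set(rhs) <= set(lhs)

print(is_trivial(["sid", "sname"], ["sid"]))  # True: [sid, sname] -> sid is trivial
print(is_trivial(["sid"], ["sname"]))         # False: sid -> sname is non-trivial
```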
9. Armstrong Inference Rules:
The term Armstrong axioms refers to the sound and complete set of inference rules or axioms, introduced by William W. Armstrong, that is used to test the logical implication of functional dependencies. If F is a set of functional dependencies, then the closure of F, denoted F+, is the set of all functional dependencies logically implied by F. Armstrong's axioms are a set of rules that, when applied repeatedly, generate the closure of a set of functional dependencies.
These are the basic rules.
They are used to derive functional dependencies over a relational database.
An inference rule is a type of assertion. It can be applied to a set of FDs (functional dependencies) to derive other FDs.
By using these rules, we can derive additional functional dependencies from the initial set.
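The standard way to apply these rules mechanically is the attribute-closure algorithm: repeatedly fire every FD whose left side is already contained in the result set. A hedged Python sketch; the `closure` function and the eid/ename/eage FDs are my own illustration:

```python
def closure(attrs, fds):
    """Compute attrs+, the set of attributes determined by attrs
    under the given FDs (each FD is a (lhs, rhs) pair of sets)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the whole left side is known, the right side follows.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [({"eid"}, {"ename"}), ({"ename"}, {"eage"})]
print(sorted(closure({"eid"}, fds)))  # ['eage', 'eid', 'ename']: eid -> eage by transitivity
```

An FD X -> Y then follows from the set F exactly when Y is contained in the closure of X.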
11. 1. Reflexive Rule
According to the reflexive rule, if B is a subset of A, then A determines B.
If A ⊇ B then A → B. For example, eid → eid, or
A = {eid, ename, eage}
B = {eid, ename}
gives A → B.
2. Augmentation Rule
In augmentation, if A determines B, then AC determines BC for any C.
If A → B then AC → BC
Example:
For R(ABC), if A → B then AC → BC.
12. 3. Transitive Rule
In the transitive rule, if A determines B and B determines C, then A must also determine C.
If A → B and B → C then A → C
4. Union Rule
The union rule says: if A determines B and A determines C, then A must also determine B and C together.
If A → B and A → C then A → BC
Proof:
1. A → B (given)
2. A → C (given)
3. A → AB (augmentation on 1 with A, where AA = A)
4. AB → BC (augmentation on 2 with B)
5. A → BC (transitivity on 3 and 4)
13. 5. Decomposition Rule
The decomposition rule is also known as the project rule; it is the reverse of the union rule.
This rule says: if A determines B and C together, then A determines B and A determines C separately.
If A → BC then A → B and A → C
Proof:
1. A → BC (given)
2. BC → B (reflexivity)
3. A → B (transitivity on 1 and 2)
6. Pseudo-transitive Rule
If A determines B and BC determines D, then AC determines D.
If {A → B} and {BC → D}, then {AC → D}
Proof:
1. A → B (given)
2. BC → D (given)
3. AC → BC (augmentation on 1 with C)
4. AC → D (transitivity on 3 and 2)
14. Example:
Consider relation E = (P, Q, R, S, T, U) with the set of functional dependencies (FDs):
P -> Q P -> R
QR -> S Q -> T
QR -> U PR -> U
Derive the following members of the closure using the axioms:
1. P -> T
2. PR -> S
3. QR -> SU
4. PR -> SU
Solution:
1. P -> T
In the above FD set, P -> Q and Q -> T.
Using the transitive rule: if {P -> Q} and {Q -> T}, then {P -> T}.
∴ P -> T
15. 2. PR -> S
In the above FD set, P -> Q and QR -> S.
Using the pseudo-transitivity rule: if {A -> B} and {BC -> D}, then {AC -> D}.
If P -> Q and QR -> S, then PR -> S.
∴ PR -> S
3. QR -> SU
In the above FD set, QR -> S and QR -> U.
Using the union rule: if {A -> B} and {A -> C}, then {A -> BC}.
If QR -> S and QR -> U, then QR -> SU.
∴ QR -> SU
4. PR -> SU
From result 2 we have PR -> S, and PR -> U is given.
Using the union rule: if {A -> B} and {A -> C}, then {A -> BC}.
If PR -> S and PR -> U, then PR -> SU.
∴ PR -> SU
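The four derivations above can be cross-checked with the attribute-closure algorithm: an FD X -> Y is implied by the set exactly when Y lies inside X+. A Python sketch using the slide's FD set (the `closure` helper is my own illustration):

```python
def closure(attrs, fds):
    """attrs+ under fds, where each FD is a (lhs, rhs) pair of sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# The FD set from the example: P -> Q, P -> R, QR -> S, Q -> T, QR -> U, PR -> U
fds = [({"P"}, {"Q"}), ({"P"}, {"R"}), ({"Q", "R"}, {"S"}),
       ({"Q"}, {"T"}), ({"Q", "R"}, {"U"}), ({"P", "R"}, {"U"})]

print("T" in closure({"P"}, fds))              # True: P -> T
print("S" in closure({"P", "R"}, fds))         # True: PR -> S
print({"S", "U"} <= closure({"Q", "R"}, fds))  # True: QR -> SU
print({"S", "U"} <= closure({"P", "R"}, fds))  # True: PR -> SU
```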
30. Data Normalization
Database Normalization is a technique of organizing the data in the database.
Normalization is a systematic approach of decomposing tables to eliminate data
redundancy(repetition) and undesirable characteristics like Insertion, Update and Deletion
Anomalies. It is a multi-step process that puts data into tabular form, removing duplicated
data from the relation tables.
Normalization is used mainly for two purposes:
Eliminating redundant (useless) data.
Ensuring data dependencies make sense, i.e. that data is logically stored.
Problems Without Normalization
If a table is not properly normalized and has data redundancy, it will not only eat up extra memory space but will also make it difficult to handle and update the database without facing data loss. Insertion, update and deletion anomalies are very frequent if the database is not normalized. To understand these anomalies, let us take the example of an employee table.
31. Anomalies in DBMS
There are three types of anomalies that occur when the database is not normalized: insertion, update and deletion anomalies.
Example: Suppose a software company stores employee details in a table named emp with five attributes: eid (employee id), ename (employee name), eadd (employee address), edept (the department in which the employee works) and emno (employee mobile number). The table below is not normalized:
eid ename eadd edept emno
101 Ravi Kanpur ed101 777799
101 Ravi Kanpur ed103 777799
102 Karim Kannauj ed105 555555
103 Raju Hamirpur ed110 434434
103 Raju Hamirpur ed111 343434
32. Update anomaly: In the above table we have two rows for employee Ravi, as he belongs to two departments of the software company. If we want to update the address of Ravi, we have to update it in both rows, or the data will become inconsistent.
Insert anomaly: Suppose a new employee joins the company who is under training and is currently not assigned to any department. We would not be able to insert his data into the table if the edept field does not allow nulls.
Delete anomaly: If at some point the company closes department ed105, then deleting the rows having edept ed105 would also delete the information of employee Karim, since he is assigned only to this department.
There are different normal forms. In these slides we explain only 1NF, 2NF and 3NF.
33. First Normal Form (1NF): The first normal form expects you to follow a few simple rules while designing your database:
Rule 1: Single-valued attributes
Rule 2: The attribute domain should not change
Rule 3: Unique names for attributes/columns
Rule 4: Order doesn't matter
eid ename eadd edept emno
101 Ravi Kanpur ed101 777799, 777799
102 Karim Kannauj ed105 555555
103 Raju Hamirpur ed110 434434, 434434
104 Piyush Lucknow ed111 343434
(The emno cells holding more than one value violate Rule 1: every attribute must be single-valued.)
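Converting such a table to 1NF means splitting every multi-valued cell into separate rows so that each cell is atomic. A hedged Python sketch; `to_1nf` and the sample rows (with an invented list-valued edept cell) are my own illustration:

```python
def to_1nf(rows, multi_attr):
    """Emit one row per value of a multi-valued cell so that every
    cell holds a single atomic value (Rule 1 of 1NF)."""
    flat = []
    for row in rows:
        values = row[multi_attr]
        if not isinstance(values, (list, tuple)):
            values = [values]  # already atomic
        for v in values:
            new_row = dict(row)
            new_row[multi_attr] = v
            flat.append(new_row)
    return flat

emp = [
    {"eid": 101, "ename": "Ravi",  "edept": ["ed101", "ed103"], "emno": 777799},
    {"eid": 102, "ename": "Karim", "edept": "ed105",            "emno": 555555},
]
for r in to_1nf(emp, "edept"):
    print(r)  # three rows, each with a single edept value
```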
35. Second Normal Form (2NF):
A table is said to be in 2NF if both of the following conditions hold:
The table is in 1NF (first normal form).
No non-prime attribute is dependent on a proper subset of any candidate key of the table; there should be no partial dependency.
An attribute that is not part of any candidate key is known as a non-prime attribute.
Example: Suppose a college wants to store data about teachers and the subjects they teach. They create a table that looks like the one below. Since a teacher can teach more than one subject, the table can have multiple rows for the same teacher.
tid tsub tage
101 C 30
101 C++ 30
102 Python 32
103 C++ 35
103 DBMS 35
36. Candidate key: {tid, tsub}
Non-prime attribute: tage
The table is in 1NF because each attribute has atomic values. However, it is not in 2NF, because the non-prime attribute tage depends on tid alone, which is a proper subset of the candidate key. This violates the rule for 2NF: "no non-prime attribute is dependent on a proper subset of any candidate key of the table". To make the table comply with 2NF we can break it into two tables, as below. Now the tables comply with second normal form (2NF).
tid tage
101 30
102 32
103 35
tid tsub
101 C
101 C++
102 Python
103 C++
103 DBMS
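The decomposition shown above, projecting the partial dependency tid -> tage into its own table, can be sketched in Python. The `decompose_2nf` helper and the dict encoding of the rows are my own illustration:

```python
def decompose_2nf(rows, key_part, dependent, full_key):
    """Remove a partial dependency key_part -> dependent by projecting
    the relation onto (key_part + dependent) and onto the full key."""
    t1 = {tuple(r[a] for a in key_part + dependent) for r in rows}
    t2 = {tuple(r[a] for a in full_key) for r in rows}
    return sorted(t1), sorted(t2)

teach = [
    {"tid": 101, "tsub": "C",      "tage": 30},
    {"tid": 101, "tsub": "C++",    "tage": 30},
    {"tid": 102, "tsub": "Python", "tage": 32},
    {"tid": 103, "tsub": "C++",    "tage": 35},
    {"tid": 103, "tsub": "DBMS",   "tage": 35},
]
ages, subs = decompose_2nf(teach, ["tid"], ["tage"], ["tid", "tsub"])
print(ages)  # [(101, 30), (102, 32), (103, 35)]
print(subs)  # [(101, 'C'), (101, 'C++'), (102, 'Python'), (103, 'C++'), (103, 'DBMS')]
```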
37. Third Normal Form (3NF):
A table design is said to be in 3NF if both of the following conditions hold:
The table is in 2NF.
Every transitive functional dependency of a non-prime attribute on any super key has been removed.
An attribute that is not part of any candidate key is known as a non-prime attribute. In other words, a table is in 3NF if it is in 2NF and, for each functional dependency X -> Y, at least one of the following conditions holds:
X is a super key of the table, or
Y is a prime attribute of the table.
An attribute that is part of one of the candidate keys is known as a prime attribute.
Example: Suppose a company wants to store the complete address of each customer. They create a table named cust that looks like this:
38. cid cname cpincode cstate ccity cdistrict
101 Ravi 110010 MP Satna abc
102 Rahul 110019 CG Raipur ccd
103 Rakesh 110017 CG Bilaspur ppd
104 Lalit 110013 UP Kanpur lll
105 Piyush 110014 AP Tirupati ooo
Super keys: {cid}, {cid, cname}, {cid, cname, cpincode} … and so on.
Candidate key: {cid}
Non-prime attributes: all attributes except cid, as they are not part of any candidate key.
Here, cstate, ccity & cdistrict depend on cpincode, and cpincode depends on cid; this makes the non-prime attributes (cstate, ccity & cdistrict) transitively dependent on the super key (cid), which violates the rule of 3NF.
To make this table comply with 3NF we have to break it into two tables to remove the transitive dependency: