Project Report
on
Online School Management System
Submitted in partial fulfillment of the requirements
for the award of the degree of
Bachelor of Computer Application
Academic session (2016-2019)
Submitted by
Kriti Kumar (URN-16141099)
Under the supervision of
Mr. Abhay Kr. Mishra
(Assistant Professor)
Department of Computer Science
Maharaja College, Veer Kunwar Singh University
Ara
CERTIFICATE
The evaluation committee has thoroughly examined this project and has found it acceptable.
Dr. Abhay Kr. Mishra
(Assistant Professor)
Department of Computer Science
CERTIFICATE
H.O.: Raja Bajar, 800014, Patna - 1
Date:...............
This is to certify that Miss Kriti Kumari is a bonafide student of this institute and has successfully completed the project "Online School Management System" from our institute under the guidance of Mr. Abhinave Kumar.
(Director)
Email- [email protected]
ACKNOWLEDGEMENT
Intention, dedication, concentration and hard work are essential to complete any task, but a task still needs the support, guidance and co-operation of many people to be successful.
Last but not the least, I would like to thank my friends and all the individuals who took time out of their busy schedules to give their valuable information for this work.
Thanking you!
Sl. No.  Particulars
1.  Introduction of the Project
2.  Objective of Online School Management System
3.  Scope
4.  Waterfall Model
5.  Front End
6.  Back End
7.  Data Storage
8.  Analysis and Design
11. Bibliography
INTRODUCTION
At present the school management and all of its procedures are entirely manual. This creates a lot of problems due to wrong entries, mistakes in totalling, and so on. This system avoids such mistakes through proper checks and validation controls on student records, fee deposit particulars, teachers' schedules, examination reports, the issue of transfer certificates, etc. SSMS is a software application for an educational establishment to manage students. Apart from storing information, it maintains that information by properly updating the database.
OBJECTIVE
The objective of developing such a computerised system is to reduce the paperwork and save time in school management, thereby increasing efficiency and decreasing the workload.
The project provides information about student records, school faculty, the school timetable and school fees. The system must provide the flexibility of generating the required documents on screen as well as on a printer, as and when required.
The manual system creates a lot of problems due to wrong entries, mistakes in totalling, and so on. This system avoids such mistakes through proper checks and validation controls on student records, fee deposit particulars, teachers' schedules, examination reports, the issue of transfer certificates, etc. SSMS is a software application for an educational establishment to manage student data. It provides capabilities for entering student test and other assignment scores through an electronic grade book, building student schedules, and managing many other student-related data needed in a college or institute. It is also known as a student records system (SRS) or student management system (SMS), and it is helpful for students as well as for college or school authorities. In the current system all the activities are done manually, which is very time consuming and costly.
SCOPE
At present the scope of this application is wide, because people have less time to meet and talk to each other, and this application helps them communicate. The project provides information about student records, school faculty, the school timetable, school fees, school examination results and library management. The system must provide the flexibility of generating the required documents on screen as well as on a printer, as and when required.
SOFTWARE ENGINEERING PARADIGM APPLIED
To solve actual problems, a software engineer or a team of engineers must incorporate a development strategy that encompasses the process, methods and tools, and the generic phases. This strategy is often referred to as a process model or software engineering paradigm. A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required.
As the development process specifies the major development and quality-assurance activities that need to be performed in the project, the development process forms the core of the software process. The management process is decided based on the development process. Because of the importance of the development process, various models have been proposed, e.g. the waterfall model (linear sequential model), the spiral model, the prototyping model, the RAD model, etc. In our project we used the waterfall model for software development, as follows:
WATERFALL MODEL
The waterfall model is sometimes called the classic life cycle model or linear sequential model. It suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing and maintenance. Modelled after the conventional engineering cycle, the linear sequential model encompasses these activities.
MINIMUM HARDWARE & SOFTWARE REQUIREMENTS
MINIMUM HARDWARE REQUIREMENTS
PROCESSOR : PIV 2.8 GHz processor and above
MONITOR :
MOUSE :
FRONT END AND BACK END
FRONT END
HISTORY OF ASP.NET
After four years of development, and a series of beta releases in 2000 and 2001, ASP.NET 1.0 was released on January 5, 2002 as part of version 1.0 of the .NET Framework. Even prior to the release, dozens of books had been written about ASP.NET,[5] and Microsoft promoted it heavily as part of its platform for Web services. Scott Guthrie became the product unit manager for ASP.NET, and development continued apace, with version 1.1 being released on April 24, 2003 as a part of Windows Server 2003. This release focused on improving ASP.NET's support for mobile devices.
INTRODUCTION TO ASP.NET
ASP.NET is an open-source server-side web application framework designed for Web development to produce dynamic web pages. It was developed by Microsoft to allow programmers to build dynamic web sites, web applications and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language. The ASP.NET SOAP extension framework allows ASP.NET components to process SOAP messages. ASP.NET is in the process of being re-implemented as a modern and modular web framework, together with other frameworks like Entity Framework.
ASP.NET is not just a simple upgrade or the latest version of ASP. It combines unprecedented developer productivity with performance, reliability, and ease of deployment, and it redesigns the whole process. It is still easy to grasp for newcomers, but it provides many new ways of managing projects. ASP.NET makes building real-world Web applications dramatically easier. ASP.NET server controls enable an HTML-like style of declarative programming that lets you build great pages with far less code than with classic ASP. Displaying data, validating user input, and uploading files are all easy. Best of all, ASP.NET pages work in all browsers, including Netscape, Opera, AOL, and Internet Explorer.
ASP.NET stands for Active Server Pages .NET and is developed by Microsoft. It is used to create web pages and web technologies and is an integral part of Microsoft's .NET framework vision. As a member of the .NET framework, ASP.NET is a very valuable tool for programmers and developers, as it allows them to build dynamic, rich web sites and web applications using compiled languages like VB and C#. ASP.NET is not limited to scripting languages; it allows you to make use of .NET languages like C#, J#, VB, etc. It allows developers to build very compelling applications using Visual Studio, the development tool provided by Microsoft. ASP.NET is purely server-side technology: it is built on the Common Language Runtime, which can be used on any Windows server to host powerful ASP.NET web sites and technologies.
In the early days of the Web, i.e. before the release of Internet Information Services (IIS) in 1997, the contents of web pages were largely static. These web pages needed to be constantly, and manually, modified. There was an urgent need to create web sites that were dynamic and would update automatically.
Microsoft's Active Server Pages (ASP) was brought to the market to meet this need. ASP executed on the server side, with its output sent to the user's web browser, thus allowing the server to generate dynamic web pages based on the actions of the user.
Below are the features of ASP.NET:
1. ASP.NET drastically reduces the amount of code required to build large applications.
4. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.
5. ASP.NET provides simplicity, as it makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration.
6. The source code and HTML are together, so ASP.NET pages are easy to maintain and write. Also, the source code is executed on the server, which gives a lot of power and flexibility to the web pages.
7. All the processes are closely monitored and managed by the ASP.NET runtime, so that if a process dies, a new process can be created in its place, which helps keep your application constantly available to handle requests.
8. ASP.NET is purely server-side technology, so ASP.NET code executes on the server before it is sent to the browser.
9. Being language-independent, it allows you to choose the language that best applies to your application, or to partition your application across many languages.
10. ASP.NET makes for easy deployment. There is no need to register components, because the configuration information is built in.
11. The Web server continuously monitors the pages, components and applications running on it. If it notices any memory leaks, infinite loops or other illegal activities, it immediately destroys those activities and restarts itself.
12. ASP.NET works easily with ADO.NET using data-binding and page formatting features, so an application runs faster and copes with large volumes of users without performance problems.
CHARACTERISTICS
ASP.NET Web pages, known officially as Web Forms, are the main building blocks for application development in ASP.NET.[7] There are two basic methodologies for Web Forms: a web application format and a web site format.[8] Web applications need to be compiled before deployment, while web site structures allow the user to copy the files directly to the server without prior compilation. Web Forms are contained in files with an ".aspx" extension; these files typically contain static (X)HTML markup or component markup. The component markup can include server-side Web Controls and User Controls that have been defined in the framework or the web page. For example, a textbox component can be defined on a page as <asp:TextBox id="tb1" runat="server" />, which will be rendered into an HTML input box. Additionally, dynamic code, which runs on the server, can be placed in a page within a block <% -- dynamic code -- %>, which is similar to other Web development technologies such as PHP, JSP, and ASP. With ASP.NET Framework 2.0, Microsoft introduced a new code-behind model which allows static text to remain on the .aspx page, while dynamic code goes in an .aspx.vb, .aspx.cs or .aspx.fs file (depending on the programming language used).
ASP.NET makes building real-world Web applications dramatically easier. ASP.NET server controls enable an HTML-like style of declarative programming that lets you build great pages with far less code than with classic ASP. Displaying data, validating user input, and uploading files are all easy. Best of all, ASP.NET pages work in all browsers, including Netscape, Opera, AOL, and Internet Explorer.
ASP.NET lets you leverage your current programming language skills. Unlike classic ASP, which supports only interpreted VBScript and JScript, ASP.NET supports more than 25 .NET languages (with built-in support for VB.NET, C#, and JScript.NET), giving us unprecedented flexibility in the choice of language.
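The declarative server controls and the code-behind split described above can be sketched in a single page file. All control IDs and handler names below are illustrative assumptions, not taken from the original project; a server-side script block stands in for a separate .aspx.cs code-behind file so that the sketch is self-contained:

```aspx
<%-- Default.aspx: static markup plus server-side controls --%>
<%@ Page Language="C#" %>
<script runat="server">
    // Runs on the server when the button is clicked
    protected void btnSave_Click(object sender, EventArgs e)
    {
        lblMsg.Text = "Hello, " + txtName.Text;
    }
</script>
<html>
<body>
  <form id="form1" runat="server">
    <asp:TextBox ID="txtName" runat="server" />
    <asp:Button ID="btnSave" runat="server" Text="Save"
                OnClick="btnSave_Click" />
    <asp:Label ID="lblMsg" runat="server" />
  </form>
</body>
</html>
```

In a real project the handler would normally live in a code-behind file (Default.aspx.cs) referenced from the @ Page directive, as the text above explains.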
We can harness the full power of ASP.NET using any text editor, even Notepad. But Visual Studio .NET adds the productivity of Visual Basic-style development to the Web. Now we can visually design ASP.NET Web Forms using familiar drag-drop-double-click techniques, and enjoy full-fledged code support including statement completion and colour-coding. VS.NET also provides integrated support for debugging and deploying ASP.NET Web applications. The Enterprise versions of Visual Studio .NET deliver life-cycle features to help organizations plan, analyse, design, build, test, and coordinate teams that develop ASP.NET Web applications. These include UML class modelling, database modelling (conceptual, logical, and physical models), testing tools (functional, performance and scalability), and enterprise frameworks and templates, all available within the integrated Visual Studio .NET environment.
COMPILED EXECUTION
ASP.NET is much faster than classic ASP, while preserving the "just hit save" update model of ASP. No explicit compile step is required: ASP.NET will automatically detect any changes, dynamically compile the files if needed, and store the compiled results to reuse for subsequent requests. Dynamic compilation ensures that the application is always up to date, and compiled execution makes it fast. Most applications migrated from classic ASP see a 3x to 5x increase in pages served.
ASP.NET output caching can dramatically improve the performance and scalability of the application. When output caching is enabled on a page, ASP.NET executes the page just once, and saves the result in memory in addition to sending it to the user. When another user requests the same page, ASP.NET serves the cached result from memory without re-executing the page. Output caching is configurable, and can be used to cache individual regions or an entire page. Output caching can dramatically improve the performance of data-driven pages by eliminating the need to query the database on every request.
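Output caching of a whole page is enabled declaratively with a page directive. The fragment below is a minimal sketch; the page content is only an illustration:

```aspx
<%-- Cache the rendered output for 60 seconds: the page executes
     once, then subsequent requests are served from memory. --%>
<%@ Page Language="C#" %>
<%@ OutputCache Duration="60" VaryByParam="None" %>
<html>
<body>
  Generated at: <%= DateTime.Now.ToLongTimeString() %>
</body>
</html>
```

Requesting the page repeatedly within the 60-second window shows the same timestamp, confirming that the page body was not re-executed.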
ENHANCED RELIABILITY
ASP.NET automatically detects and recovers from errors like deadlocks and memory leaks to ensure our application is always available to our users. For example, say that our application has a small memory leak, and that after a week the leak has tied up a significant percentage of our server's virtual memory. ASP.NET will detect this condition, automatically start up another copy of the ASP.NET worker process, and direct all new requests to the new process. Once the old process has finished processing its pending requests, it is gracefully disposed and the leaked memory is released. Automatically, without administrator intervention or any interruption of service, ASP.NET has recovered from the error.
EASY DEPLOYMENT
ASP.NET takes the pain out of deploying server applications with "no touch" application deployment. ASP.NET dramatically simplifies installation of our application: we can deploy an entire application as easily as an HTML page, just by copying it to the server. There is no need to run regsvr32 to register any components, and configuration settings are stored in an XML file within the application.
ASP.NET now lets us update compiled components without restarting the web server. In the past, with classic COM components, the developer would have to restart the web server each time he deployed an update. With ASP.NET, we simply copy the component over the existing DLL; ASP.NET will automatically detect the change and start using the new code.
INTRODUCTION TO .NET
Microsoft also produces an integrated development environment largely for .NET software, called Visual Studio. The .NET Framework started out as a proprietary framework, although the company worked to standardize the software stack almost immediately, even before its first release. Despite the standardization efforts, developers, particularly those in the free and open-source software communities, expressed their uneasiness with the selected terms and the prospects of any free and open-source implementation, especially with regard to software patents. Since then, Microsoft has changed .NET development to more closely follow a contemporary model of a community-developed software project, including issuing an update to its patent promise to address the concerns. The .NET Framework family also includes two versions for mobile or embedded device use. A reduced version of the framework, the .NET Compact Framework, is available on Windows CE platforms, including Windows Mobile devices such as smart phones. Additionally, the .NET Micro Framework is targeted at severely resource-constrained devices.
C# LANGUAGE
BACK END
SQL Server
This section gives an overview of Structured Query Language (SQL) and of SQL commands, which provide immediate results. SQL is the language of databases; it includes database creation, deletion, fetching rows and modifying rows, etc. SQL is an ANSI (American National Standards Institute) standard, but there are many different versions of the SQL language.
What is SQL?
2. SQL supports procedural extensions such as PL/SQL.
3. SQL allows users to define the data in a database and manipulate that data.
4. SQL allows embedding within other languages using SQL modules, libraries & pre-compilers.
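As a small illustration of these capabilities, the statements below create, populate and query a hypothetical student table of the kind the SSMS would store. All table and column names are assumptions made for this sketch:

```sql
-- Define a hypothetical student table (data definition)
CREATE TABLE Student (
    RollNo    INT           PRIMARY KEY,
    Name      VARCHAR(50)   NOT NULL,
    ClassName VARCHAR(10),
    FeePaid   DECIMAL(10, 2)
);

-- Manipulate the data (data manipulation)
INSERT INTO Student (RollNo, Name, ClassName, FeePaid)
VALUES (1, 'Kriti Kumari', 'BCA-3', 12000.00);

-- Fetch rows
SELECT Name, FeePaid
FROM Student
WHERE ClassName = 'BCA-3';
```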
SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database. Specialized indexing methods were made available for XML data. XML data is queried using XQuery; SQL Server 2005 added some extensions to the T-SQL language to allow embedding XQuery queries in T-SQL. In addition, it also defines a new extension to XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server 2005 also allows a database server to be exposed over web services using Tabular Data Stream (TDS) packets encapsulated within SOAP (protocol) requests. When the data is accessed over web services, results are returned as XML.
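The xml data type, embedded XQuery and XML DML described above can be sketched as follows; the table, column and element names are assumptions for this illustration:

```sql
-- A column typed as xml
CREATE TABLE ReportCard (
    StudentId INT PRIMARY KEY,
    Marks     XML
);

INSERT INTO ReportCard (StudentId, Marks)
VALUES (1, '<marks><subject name="Maths">81</subject></marks>');

-- Embedded XQuery via the xml type's query() method
SELECT Marks.query('/marks/subject[@name="Maths"]')
FROM ReportCard;

-- XML DML: query-based modification of the stored XML
UPDATE ReportCard
SET Marks.modify('insert <subject name="English">74</subject>
                  as last into (/marks)[1]')
WHERE StudentId = 1;
```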
Common Language Runtime (CLR) integration was introduced with this version,
enabling one to write SQL code as Managed Code by the CLR. For relational data, T-
SQL has been augmented with error handling features (try/catch) and support for
recursive queries with CTEs (Common Table Expressions). SQL Server 2005 has also
been enhanced with new indexing algorithms, syntax and better error recovery systems.
Data pages are checksummed for better error resiliency, and optimistic concurrency
support has been added for better performance. Permissions and access control have been
made more granular and the query processor handles concurrent execution of queries in a
more efficient way. Partitions on tables and indexes are supported natively, so scaling out
a database onto a cluster is easier. SQL CLR was introduced with SQL Server 2005 to let
it integrate with the .NET Framework.
SQL Server 2005 introduced Multi-Version Concurrency Control. User facing features
include new transaction isolation level called SNAPSHOT and a variation of the READ
COMMITTED isolation level based on statement-level data snapshots.
SQL Server 2005 introduced "MARS" (Multiple Active Results Sets), a method of
allowing usage of database connections for multiple purposes.
SQL Server 2005 introduced DMVs (Dynamic Management Views), which are
specialized views and functions that return server state information that can be used to
monitor the health of a server instance, diagnose problems, and tune performance.
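For example, two commonly used DMVs can be queried like ordinary views; sys.dm_exec_requests and sys.dm_os_wait_stats ship with SQL Server 2005:

```sql
-- Currently executing requests on the server instance
SELECT session_id, status, command, wait_type
FROM sys.dm_exec_requests;

-- Cumulative wait statistics, often used to diagnose bottlenecks
SELECT TOP 5 wait_type, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```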
Service Pack 1 (SP1) of SQL Server 2005 introduced Database Mirroring, a high
availability option that provides redundancy and failover capabilities at the database
level.[11] Failover can be performed manually or can be configured for automatic failover.
Automatic failover requires a witness partner and an operating mode of synchronous
(also known as high-safety or full safety).[12] Database Mirroring was included in the first
release of SQL Server 2005 for evaluation purposes only. Prior to SP1, it was not
enabled by default, and was not supported by Microsoft.
A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS and SQL Server. These systems allow users to create, update and extract information from their databases. A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name.
With the lowest implementation and maintenance costs in the industry, SQL Server 2005
delivers rapid return on the data management investment. SQL Server 2005 supports the
rapid development of enterprise-class business applications that can give our company a
critical competitive advantage.
Benchmarked for scalability, speed, and performance, SQL Server 2005 is a fully
enterprise-class database product, providing core support for Extensible Mark-up
Language (XML) and Internet queries.
ARCHITECTURE
The protocol layer implements the external interface to SQL Server. All operations that
can be invoked on SQL Server are communicated to it via a Microsoft-defined format,
called Tabular Data Stream (TDS). TDS is an application layer protocol, used to transfer
data between a database server and a client. Initially designed and developed by Sybase
Inc. for their Sybase SQL Server relational database engine in 1984, and later by
Microsoft for Microsoft SQL Server, TDS packets can be encased in other physical
transport dependent protocols, including TCP/IP, named pipes, and shared memory.
Consequently, access to SQL Server is available over these protocols. In addition, the
SQL Server API is also exposed over web services.
DATA STORAGE
Data storage is a database, which is a collection of tables with typed columns. SQL Server supports different data types, including primary types such as Integer, Float, Decimal, Char (including character strings), Varchar (variable-length character strings), Binary (for unstructured blobs of data), and Text (for textual data), among others. The rounding of floats to integers uses either Symmetric Arithmetic Rounding or Symmetric Round Down (Fix) depending on arguments: SELECT Round(2.5, 0) gives 3.
Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined and used. It also makes server statistics available as virtual tables and views (called Dynamic Management Views or DMVs). In addition to tables, a database can also contain other objects including views, stored procedures, indexes and constraints, along with a transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with a maximum file size of 2^60 bytes (1 exabyte). The data in the database are stored in primary data files with an extension .mdf. Secondary data files, identified with an .ndf extension, are used to allow the data of a single database to be spread across more than one file, and optionally across more than one file system. Log files are identified with the .ldf extension.
Storage space allocated to a database is divided into sequentially numbered pages, each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. A page is marked with a 96-byte header which stores metadata about the page, including the page number, page type, free space on the page and the ID of the object that owns it. The page type defines the data contained in the page: data stored in the database, index, allocation map (which holds information about how pages are allocated to tables and indexes), change map (which holds information about the changes made to other pages since the last backup or logging), or large data types such as image or text. While the page is the basic unit of an I/O operation, space is actually managed in terms of an extent, which consists of 8 pages. A database object can either span all 8 pages in an extent ("uniform extent") or share an extent with up to 7 more objects ("mixed extent"). A row in a database table cannot span more than one page, so it is limited to 8 KB in size. However, if the data exceeds 8 KB and the row contains Varchar or Varbinary data, the data in those columns are moved to a new page (or possibly a sequence of pages, called an allocation unit) and replaced with a pointer to the data.
For physical storage of a table, its rows are divided into a series of partitions (numbered 1
to n). The partition size is user defined; by default all rows are in a single partition. A
table is split into multiple partitions in order to spread a database over a computer cluster.
Rows in each partition are stored in either a B-tree or a heap structure. If the table has an
associated clustered index to allow fast retrieval of rows, the rows are stored in order
according to their index values, with a B-tree providing the index. The actual data is in the
leaf nodes, with the other nodes storing the index values for the leaf data reachable
from the respective nodes. If the index is non-clustered, the rows are not sorted according
to the index keys. An indexed view has the same storage structure as an indexed table. A
table without a clustered index is stored in an unordered heap structure. However, the
table may have non-clustered indices to allow fast retrieval of rows. In some situations
the heap structure has performance advantages over the clustered structure. Both heaps
and B-trees can span multiple allocation units.
USER-DEFINED FUNCTIONS
SQL Server has always provided the ability to store and execute SQL code routines via stored procedures. In addition, SQL Server has always supplied a number of built-in functions. Functions can be used almost anywhere an expression can be specified in a query. This was one of the shortcomings of stored procedures: they couldn't be used inline in queries, in select lists, where clauses, and so on. Perhaps we want to write a routine to calculate the last business day of the month. With a stored procedure, we have to exec the procedure, passing in the current month as a parameter and returning the value into an output variable, and then use the variable in our queries. If only we could write our own function that we could use directly in the query, just like a system function. In SQL Server 2005, we can.
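A minimal sketch of such a user-defined function, usable inline exactly like a system function; the function name, table and discount rule are purely illustrative:

```sql
-- Hypothetical scalar UDF: fee after a fixed 10% discount
CREATE FUNCTION dbo.DiscountedFee (@fee DECIMAL(10, 2))
RETURNS DECIMAL(10, 2)
AS
BEGIN
    RETURN @fee * 0.90;
END;
GO

-- Unlike a stored procedure, the function can be used
-- directly in a select list or a WHERE clause.
SELECT Name, dbo.DiscountedFee(FeePaid) AS PayableFee
FROM Student;
```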
INDEXED VIEWS
Views are often used to simplify complex queries, and they can contain joins and aggregate functions. However, in the past, queries against views were resolved to queries against the underlying base tables, and any aggregates were recalculated each time we ran a query against the view. In SQL Server 2005 Enterprise or Developer Edition, we can define indexes on views to improve query performance against the view. When creating an index on a view, the result set of the view is stored and indexed in the database. Existing applications can take advantage of the performance improvements without needing to be modified.
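Creating an indexed view follows the pattern below: the view must be schema-bound, an aggregate view needs COUNT_BIG(*), and the first index must be unique and clustered. The table and view names here are assumptions for this sketch:

```sql
-- A view over a hypothetical fee table, bound to its schema
CREATE VIEW dbo.ClassFeeTotals
WITH SCHEMABINDING
AS
SELECT ClassName,
       SUM(FeePaid) AS TotalFee,
       COUNT_BIG(*) AS RowsCounted  -- required for indexed aggregate views
FROM dbo.Student
GROUP BY ClassName;
GO

-- Materializing the view: a unique clustered index
-- stores and indexes the view's result set.
CREATE UNIQUE CLUSTERED INDEX IX_ClassFeeTotals
ON dbo.ClassFeeTotals (ClassName);
```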
SQL Server 7.0 provided the ability to create partitioned views using the UNION ALL statement in a view definition. It was limited, however, in that all the tables had to reside within the same SQL Server where the view was defined. SQL Server 2005 expands the ability to create partitioned views by allowing us to horizontally partition tables across multiple SQL Servers. The feature helps to scale out one database server to multiple database servers, while making the data appear as if it comes from a single table on a single SQL Server. In addition, partitioned views can now be updated.
SQL Server 2005 introduces three new data types. Two of these can be used as data types for local variables, stored procedure parameters and return values, user-defined function parameters and return values, or table columns:
bigint: an 8-byte integer that can store values from -2^63 through 2^63 - 1.
sql_variant: a variable-sized column that can store values of various SQL Server-supported data types, with the exception of text, ntext, timestamp, and sql_variant.
The third new data type, the table data type, can be used only as a local variable data type
within functions, stored procedures, and ORACLEbatches. The table data type cannot be
passed as a parameter to functions or stored procedures, nor can it be used as a column
data type. A variable defined with the table data type can be used to store a result set for
later processing. A table variable can be used in queries anywhere a table can be
specified.
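A brief sketch (names illustrative): a table variable is declared, filled from a query, and then used like an ordinary table later in the batch:

```sql
-- Hypothetical: hold fee defaulters for later processing in the batch
DECLARE @Defaulters TABLE (
    RollNo INT PRIMARY KEY,
    Due    MONEY
)

INSERT INTO @Defaulters (RollNo, Due)
SELECT RollNo, SUM(FeeDue)
FROM StudentFee
GROUP BY RollNo
HAVING SUM(FeeDue) > 0

-- A table variable can appear anywhere a table can
SELECT s.Name, d.Due
FROM Student AS s
JOIN @Defaulters AS d ON d.RollNo = s.RollNo
```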
In previous versions of ORACLEServer, text and image data was always stored on a
separate page chain from where the actual data row resided. The data row contained only
a pointer to the text or image page chain, regardless of the size of the text or image data.
SQL Server 2005 provides a new text in row table option that allows small text and
image data values to be placed directly in the data row, instead of requiring a separate
data page. This can reduce the amount of space required to store small text and image
data values, as well as reduce the amount of I/O required to retrieve rows containing
small text and image data values.
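The option is enabled per table with sp_tableoption (the table name is illustrative); the optional value gives the size limit, in bytes, below which text/image values are stored in the row itself:

```sql
-- Hypothetical: store text values up to 1000 bytes directly in the data row
EXEC sp_tableoption 'StudentRecord', 'text in row', '1000'

-- Passing 'OFF' reverts to the separate text/image page chain
-- EXEC sp_tableoption 'StudentRecord', 'text in row', 'OFF'
```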
CASCADING RI CONSTRAINTS
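Cascading referential-integrity constraints let a DELETE or UPDATE on a referenced key propagate automatically to the referencing rows; a sketch with illustrative tables:

```sql
-- Hypothetical parent/child tables: deleting a student
-- automatically removes that student's fee-deposit rows
CREATE TABLE Student (
    RollNo INT PRIMARY KEY,
    Name   VARCHAR(50)
)
CREATE TABLE FeeDeposit (
    ReceiptNo INT PRIMARY KEY,
    RollNo    INT NOT NULL
        REFERENCES Student (RollNo)
        ON DELETE CASCADE
        ON UPDATE CASCADE,
    Amount    MONEY
)
```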
XML SUPPORT
Extensible Mark-up Language has become a standard in Web-related programming to
describe the contents of a set of data and how the data should be output or displayed on a
Web page. XML, like HTML, is derived from the Standard Generalized Markup
Language (SGML). When linking a Web application to SQL Server, a translation
needs to take place from the result set returned from SQL Server to a format that can
be understood and displayed by a Web application. Previously, this translation needed to
be done in a client application.
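With the FOR XML clause the server itself can return the result set as an XML fragment instead of a rowset (table and column names illustrative):

```sql
-- Hypothetical: return each student's marks as XML attributes
-- rather than as an ordinary rowset
SELECT RollNo, Subject, Marks
FROM MarksRecord
FOR XML AUTO
```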
LOG SHIPPING
The Enterprise Edition of SQL Server 2005 now supports log shipping, which we
can use to copy and load transaction log backups from one database to one or more
databases on a constant basis. This allows us to have a primary read/write database with
one or more read-only copies of the database that are kept synchronized by restoring the
logs from the primary database. The destination database can be used as a warm standby
for the primary database, to which we can switch users over in the event of a primary
database failure. Additionally, log shipping provides a way to offload read-only
query processing from the primary database to the standby copies.
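The mechanism is the ordinary transaction-log backup/restore cycle, automated: logs backed up on the primary are restored WITH STANDBY on the secondary, which stays read-only but queryable between restores (paths and database names are illustrative):

```sql
-- On the primary server: back up the transaction log
BACKUP LOG SchoolDB
TO DISK = 'D:\LogShip\SchoolDB_1.trn'

-- On the standby server: restore it, leaving the database
-- read-only between restores (STANDBY keeps an undo file)
RESTORE LOG SchoolDB
FROM DISK = 'D:\LogShip\SchoolDB_1.trn'
WITH STANDBY = 'D:\LogShip\SchoolDB_undo.dat'
```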
Analysis and Design
SOFTWARE REQUIREMENT
SPECIFICATION (SRS)
A software requirements specification is a document which is used as a communication
medium between the customer and the supplier. When the software requirement
specification is completed and is accepted by all parties, the end of the requirements
engineering phase has been reached. This is not to say that, after the acceptance phase,
the requirements cannot be changed, but the changes must be tightly controlled.
The software requirement specification should be edited by both the customer and the
supplier, as initially the supplier does not know what is required and the customer does
not know what is feasible.
A software requirement specification, in its most basic form, is a formal document used in
communicating the software requirements between the customer and the developer. With
this in mind, the minimum amount of information that the software requirement
specification should contain is a list of requirements which has been agreed by both
parties. The types of requirements are defined below. The requirements, to fully satisfy
the user, should have the characteristics defined below. However, the requirements alone
give only a narrow view of the system, so more information is required to place the system
into a context which defines the purpose of the system, an overview of the system's
functions and the type of user that the system will have. This additional information will
aid the developer in creating a software system which is aimed at the user's ability and the
client's function.
1. Functional requirements
2. Performance requirements
3. Interface requirements
4. Operational requirements
5. Resource requirements
6. Verification requirements
8. Documentation requirements
9. Quality requirements
FUNCTIONAL REQUIREMENTS
PERFORMANCE REQUIREMENTS
All performance requirements must have a value which is measurable and quantitative,
not a value which is merely perceived. Performance requirements are stated in measurable
values, such as rate, frequency, speed and level. The values specified must also be in
some recognized unit, for example metres, square centimetres, bar, kilometres per hour,
etc. The performance values are based either on values extracted from the system
specification, or on an estimated value.
INTERFACE REQUIREMENTS
Interface requirements, at this stage are handled separately, with hardware requirements
being derived separately from the software requirements. Software interfaces include
dealing with an existing software system, or any interface standard that has been
requested. Hardware requirements, unlike software requirements, give room for trade-offs
if they are not fully defined; however, all assumptions should be defined and carefully
documented.
OPERATIONAL REQUIREMENTS
Operational requirements give an "in the field" view to the specification, detailing such
things as: how the system will operate, what is the operator syntax, how the system will
communicate with the operators, how many operators are required and their qualification,
what tasks each operator will be required to perform, what assistance/help is provided by
the system, any error messages and how they are displayed, and what the screen layout
looks like.
RESOURCE REQUIREMENTS
Resource requirements divulge the design constraints relating to the utilization of the
system hardware. Software restrictions may be placed on only using specific, certified,
and standard compilers and databases. Hardware restrictions include the amount,
percentage or mean use of the available memory and the amount of memory available. The
definition of available hardware is especially important when extension of the
hardware late in the development life cycle is impossible or expensive.
VERIFICATION REQUIREMENTS
Verification requirements take into account how customer acceptance will be conducted
at the completion of the project. Here a reference should be made to the verification plan
document.
Verification requirements specify how the functional and the performance requirements
are to be measured and verified. The measurements taken may include simulation,
emulation and live tests with real or simulated inputs. The requirements should also state
whether the measurement tests are to be staged.
DOCUMENTATION REQUIREMENTS
QUALITY REQUIREMENTS
Quality requirements will specify any international as well as local standards which
should be adhered to. The quality requirements should be addressed in the quality
assurance plan, which is a core part of the quality assurance document. Typical quality
requirements include following procedures. The National Aeronautics and Space
Administration's software requirement specification goes to the extent of having
subsections detailing relevant quality criteria and how they will be met. These sections
are –
Quality Factors
Correctness
Reliability
Efficiency
Integrity
Usability
Maintainability
Testability
Flexibility
Portability
Reusability
Interoperability
Additional Factors
Some of these factors can be addressed directly by requirements; for example, reliability
can be stated as an average period of operation before failure. However, most of the
factors detailed above are subjective and may only be realized during operation or post-
delivery maintenance. For example, the system may be vigorously tested, but it is not
always possible to test all permutations of possible inputs and operating conditions. For
this reason errors may be found in the delivered system. With correctness, the
subjectiveness of how correct the system is remains open to interpretation and needs to be
put into context with the overall system and its intended usage. An example of this can be
taken from the widely publicized floating-point rounding error found in Pentium
processors. On the whole, most users of the processor will not be interested in values of
that order, so as far as they are concerned the processor meets their correctness quality
criteria; however, for a laboratory assistant performing minute calculations for an experiment,
this level of error may mean that the processor does not have the required quality of
correctness.
SAFETY REQUIREMENTS
Safety requirements cover not only human safety, but also equipment and data safety.
Human safety considerations include protecting the operator from moving parts,
electrical circuitry and other physical dangers. There may be special operating
procedures which, if ignored, may lead to a hazardous or dangerous condition occurring.
Equipment safety includes safeguarding the software system from unauthorized access,
either electronically or physically. An example of a safety requirement may be that a
monitor used in the system will conform to certain screen emission standards, or that the
system will be installed in a Faraday cage with a combination door lock.
RELIABILITY REQUIREMENTS
Reliability requirements are those which the software must meet in order to perform a
specific function under certain stated conditions, for a given period of time. The level of
reliability required can depend on the type of system, i.e. the more critical or
life-threatening the system, the higher the level of reliability required. Reliability can be
measured in a number of ways, including the number of bugs per x lines of code, or mean
time to failure as a percentage of the time the system will be operational before crashing or
an error occurring. Davis states, however, that mean time to failure and percent reliability
should not be an issue: if the software is fully tested, errors will either show themselves
during the initial period of use, when the system is asked to perform a function it was not
designed to do, or when the hardware/software configuration of the software host has been
changed [Davis '90]. Davis suggests the following hierarchy when considering the detail
of reliability in a software requirement specification: destroy all humankind; destroy large
numbers of human beings; kill a few people; injure people; cause major financial loss;
cause major embarrassment; cause minor financial loss; cause mild inconvenience.
MAINTAINABILITY REQUIREMENTS
Maintainability requirements look at the long-term life of the proposed system.
Requirements should take into consideration any expected changes in the software system
and any changes of the computer hardware configuration, and special consideration should
be given to software
2. Use Case Diagram
ACTORS
Admin
Teacher
Student
1. ADMIN
The admin is the administrator of the project, who handles the overall project
functionality. He has full authorization to add or delete any user. He performs the lead
role in this project.
WORK OF ADMIN
1. Firstly he must log in to the application, or sign up in the app.
2. TEACHER
The teacher is also a main actor in this project, because he can teach a subject, course,
etc. So he is a person who uses the application.
WORK OF TEACHER
1. Like the admin, the teacher must also log in or sign up in the app.
3. STUDENT
2. Edit profile.
CASE DIAGRAM
A use case diagram is a graphical depiction of the interaction among the elements of a
system. A use case is a methodology used in system analysis to identify, clarify and
organize system requirements. System objectives can include planning overall
requirements, validating a hardware design, testing and debugging a software product
under development, creating an online help reference, or performing a consumer-service-
oriented task.
Fig.2 Admin Case Diagram
Fig.2 shows the use case diagram of the admin: it depicts, in diagrammatic form, the
process control handled by the admin.
Fig.4 Student Case Diagram
USE CASE DIAGRAM
A use case diagram is a graphical depiction of the interaction among the elements of a
system. A use case is a methodology used in system analysis to identify, clarify and
organize system requirements. In this context, the term "system" refers to something
being developed or operated, such as a mail-order product sales and service web site.
System objectives can include planning overall requirements, validating a hardware
design, testing and debugging a software product under development, creating an online
help reference, or performing a consumer-service-oriented task.
DATA DICTIONARY
A data dictionary (DD) is a tool for recording and processing information (metadata)
about the data that an organization uses: a central catalogue for metadata. It can be
integrated within the DBMS or be separate. It may be referenced during system design and
programming, and by actively executing programs. It can be used as a repository for
common code (e.g. library routines).
BENEFITS OF A DD
Benefits of a DD are mainly due to the fact that it is a central store of information about
the database. These include:
5. Simpler programming.
DD FACILITIES
A DD should provide two sets of facilities:
1. To record and analyse data requirements independently of how they are going to be
met – the conceptual schema.
2. To record design decisions in terms of the database or file structures implemented and
the programs which access them – the internal schema.
One of the main functions of a DD is to show the relationship between the conceptual
and implementation views. The mapping should be consistent – inconsistencies are an
error and can be detected here.
DD INFORMATION
1. The names associated with that element (aliases).
3. Details of ownership.
5. Details of the systems and programs which refer to or update the element.
6. Details on any privacy constraints that should be associated with the item.
7. Details about the data element in data processing systems, such as the length of the
data item in characters, whether it is numeric, alphabetic or another data type, and which
logical files include the data item.
10. The validation rules for each element (e.g. acceptable values).
DD MANAGEMENT
1. With so much detail held in the DD, it is essential that an indexing and cross-
referencing facility is provided by the DD.
2. The DD can produce reports for use by the data administration staff (to investigate
the efficiency of use and storage of data), systems analysts, programmers, and users.
3. DD can provide a pre-printed form to aid data input into the database and DD.
4. A query language is provided for ad-hoc queries. If the DD is tied to the DBMS, then
the query language will be that of the DBMS itself.
MANAGEMENT OBJECTIVES
From a management point of view, the DD should:
2. Provide details of application usage and data usage once a system has been
implemented, so that analysis and redesign may be facilitated as the environment
changes.
ADVANCED FACILITIES
1. Automatic input from source code of data definitions (at compile time).
2. The recognition that several versions of the same programs or data structures may exist
at the same time: live and test states of the programs or data; programs and data
structures which may be used at different sites; data set up under different software or
validation routines.
MANAGEMENT ADVANTAGES
A number of possible benefits may come from using a DD:
2. Allows accurate assessment of cost and time scale to effect any changes.
3. Reduces the clerical load of database administration, and gives more control over the
design and use of the database.
5. Aids the recording, processing, storage and destruction of data and associated
documents.
MANAGEMENT DISADVANTAGES
3. It needs careful planning: defining the exact requirements, designing its contents,
testing, implementation and evaluation.
4. The cost of a DD includes not only the initial price of its installation and any hardware
requirements, but also the cost of collecting the information, entering it into the DD,
keeping it up to date and enforcing standards.
PROJECT INFORMATION (DD)
In our project there are four data tables: the library record, marks record, attendance
record, and subject record. These records tell us what the attributes of each record are,
and how they relate to the other tables.
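A minimal sketch of what one such table might look like; the column names, sizes and constraints here are illustrative assumptions, not the project's actual dictionary entries:

```sql
-- Hypothetical attendance-record table
CREATE TABLE AttendanceRecord (
    RollNo    INT         NOT NULL,  -- student roll number
    AttDate   DATETIME    NOT NULL,  -- date of attendance
    SubjectCd VARCHAR(10) NOT NULL,  -- subject code
    Status    CHAR(1)     NOT NULL
        CHECK (Status IN ('P', 'A')),  -- Present / Absent
    PRIMARY KEY (RollNo, AttDate, SubjectCd)
)
```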
DATA DICTIONARY
[Four data dictionary tables follow — library, marks, attendance and subject records —
each with the column headings: Sr. No | Field Name | Character & Size | Constraints.]
SAMPLE OUTPUT
DATA FLOW DIAGRAM
Data flow diagrams were proposed by Larry Constantine, the original developer of
structured design, based on Martin and Estrin's "Data Flow Graph" model of
computation. Starting in the 1970s, data flow diagrams (DFD) became a popular way to
visualise the major steps and data involved in software system processes. DFDs were
usually used to show data flows in a computer system, although they could in theory be
applied to business process modelling. DFD were useful to document the major data
flows or to explore a new high-level design in terms of data flow.
A data flow diagram (DFD) is a graphical representation of the "flow" of data through
an information system, modelling its process aspects. A DFD is often used as a
preliminary step to create an overview of the system, which can later be elaborated.
DFDs can also be used for the visualization of data processing (structured design).
A DFD shows what kind of information will be input to and output from the system,
where the data will come from and go to, and where the data will be stored. It does not
show information about the timing of process or information about whether processes
will operate in sequence or in parallel (which is shown on a flowchart).
A DFD, also known as a ‘bubble chart’, has the purpose of clarifying system requirements
and identifying major transformations. It shows the flow of data through a system. It is a
graphical tool because it presents a picture. The DFD may be partitioned into levels that
represent increasing information flow and functional detail.
DATA FLOW
PROCESS
EXTERNAL ENTITY
DATA STORE
DATA FLOW
The previous three symbols may be interconnected with data flows. These represent the
flow of data to or from a process. The symbol is an arrow, with a brief description next to
it of the data that is represented. There are some interconnections, though, that are not
allowed:
• Between a data store and another data store. This would imply that one data store could
independently decide to send some of its information to another data store; in practice
this must involve a process.
• Between an external entity and a data store. This would mean that an external entity
could read or write to the data store, having direct access; again, in practice this must
involve a process.
Also, it is unusual to show interconnections between external entities. We are not
normally concerned with information exchanges between two external entities, as they
are outside our system and of no interest to us.
The data flow is used to describe the movement of information from one part of the
system to another part. Flows represent data in motion. It is a pipe line through which
information flows. Data flow is represented by an arrow.
Data flow
Fig.6 data flow
PROCESS
Processes are actions that are carried out with the data that flows around the system. A
process accepts input data needed for the process to be carried out and produces data that
it passes on to another part of the DFD. The processes that are identified on a design
DFD will be provided in the final artefact. They may be provided for using special
screens for input and output or by the provision of specific buttons or menu items. Each
identifiable process must have a well-chosen process name that describes what the
process will do with the information it uses and the output it will produce. Process names
must be well chosen to give a precise meaning to the action to be taken. It is good
practice to always start with a strong verb and to follow with not more than four or five
words.
Try to avoid using the verb ‘process’, otherwise it is easy to use this for every process.
We already know from the symbol it is a process so this does not help us to understand
what kind of a process we are looking at.
A circle or bubble represents a process that transforms incoming data to outgoing data.
Process shows a part of the system that transforms inputs to outputs.
Process
Fig.7 Process
EXTERNAL ENTITY
External entities are those things that are identified as needing to interact with the system
under consideration. The external entities either input information to the system, output
information from the system or both. Typically they may represent job titles or other
systems that interact with the system to be built. Some examples are given below in
Figure 1. Notice that the SSADM symbol is an ellipse. If the same external entity is
shown more than once on a diagram (for clarity) a diagonal line indicates this.
A square defines a source or destination of system data; it supplies information to or
receives information from the system but is not a part of the system. External entities
represent any entity that supplies or receives data.
EXTERNAL ENTITY
Fig.8 External entity
DATA STORE
Data stores are places where data may be stored. This information may be stored either
temporarily or permanently by the user. In any system you will probably need to make
some assumptions about which relevant data stores to include. How many data stores you
place on a DFD somewhat depends on the case study and how far you go in being
specific about the information stored in them. It is important to remember that unless we
store information coming into our system it will be lost.
The data store represents a logical file. The data store symbol can represent either a data
structure or a physical file on disk.
This context-level DFD is next "exploded", to produce a Level 1 DFD that shows some
of the detail of the system being modelled. The Level 1 DFD shows how the system is
divided into sub-systems (processes), each of which deals with one or more of the data
flows to or from an external agent, and which together provide all of the functionality of
the system as a whole. It also identifies internal data stores that must be present in order
for the system to do its job, and shows the flow of data between the various parts of the
system.
Data flow diagrams are one of the three essential perspectives of the structured-systems
analysis and design method SSADM. The sponsor of a project and the end users will
need to be briefed and consulted throughout all stages of a system's evolution. With a
data flow diagram, users are able to visualize how the system will operate, what the
system will accomplish, and how the system will be implemented. The old system's
dataflow diagrams can be drawn up and compared with the new system's data flow
diagrams to draw comparisons to implement a more efficient system. Data flow diagrams
can be used to provide the end user with a physical idea of where the data they input
ultimately has an effect upon the structure of the whole system from order to dispatch to
report. How any system is developed can be determined through a data flow diagram
model. In the course of developing a set of levelled data flow diagrams the
analyst/designer is forced to address how the system may be decomposed into component
sub-systems, and to identify the transaction data in the data model. Data flow diagrams
can be used in both Analysis and Design phase of the SDLC.
There are different notations for drawing data flow diagrams (Yourdon & Coad and
Gane & Sarson), defining different visual representations for processes, data stores, data
flows, and external entities.
PHYSICAL DFD
A physical DFD shows how the system is actually implemented, either at the moment
(Current Physical DFD), or how the designer intends it to be in the future (Required
Physical DFD). Thus, a Physical DFD may be used to describe the set of data items that
appear on each piece of paper that move around an office, and the fact that a particular
set of pieces of paper are stored together in a filing cabinet. It is quite possible that a
Physical DFD will include references to data that are duplicated, or redundant, and that
the data stores, if implemented as a set of database tables, would constitute an un-
normalised (or de-normalised) relational database. In contrast, a Logical DFD attempts to
capture the data flow aspects of a system in a form that has neither redundancy nor
duplication.
‘0’ LEVEL DFD
Context Diagrams and DFD Layers and Levels. Context Diagram. A context diagram is a
top level (also known as "Level 0") data flow diagram. It only contains one process node
("Process 0") that generalizes the function of the entire system in relationship to external
entities. A context level DFD is the most basic form of DFD. It aims to show how the
entire system works at a glance. There is only one process in the system and all the data
flows either into or out of this process. Context-level DFDs demonstrate the
interactions between the process and external entities. They do not contain data
stores. When drawing a context-level DFD, we must first identify the process, all the
external entities and all the data flows. We must also state any assumptions we make
about the system. It is advised that we draw the process in the middle of the page. We
then draw our external entities in the corners and finally connect our entities to our
process with the data flows.
This DFD provides an overview of the data entering and leaving the system. It also
shows the entities that are providing or receiving that data. These correspond usually to
the people that are using the system we will develop. The context diagram helps to define
our system boundary to show what is included in, and what is excluded from, our system.
‘0’ LEVEL DFD
[Figure: context-level (Level 0) DFD — a single central "Virtual classroom" process
connected to the external entities.]
LEVEL 1 DFD
Level 1 DFDs aim to give an overview of the full system. They look at the system in
more detail. Major processes are broken down into sub-processes. Level 1 DFDs also
identify the data stores that are used by the major processes. When constructing a Level 1
DFD, we must start by examining the context-level DFD. We must break up the single
process into its sub-processes. We must then pick out the data stores from the text we are
given and include them in our DFD. As with the context-level DFD, all entities, data
stores and processes must be labelled. We must also state any assumptions made from the
text.
LEVEL 2 DFD
E-R DIAGRAM
INTRODUCTION
The three-schema approach to software engineering uses three levels of ER models that
may be developed.
Conceptual data model:
This is the highest-level ER model, in that it contains the least granular detail but
establishes the overall scope of what is to be included within the model set. The
conceptual ER model normally defines master reference data entities that are commonly
used by the organization. Developing an enterprise-wide conceptual ER model is useful
to support documenting the data architecture for an organization.
A conceptual ER model may be used as the foundation for one or more logical data
models (see below). The purpose of the conceptual ER model is then to establish
structural metadata commonality for the master data entities between the set of logical
ER models. The conceptual data model may be used to form commonality relationships
between ER models as a basis for data model integration.
Logical data model:
A logical ER model does not require a conceptual ER model, especially if the scope of
the logical ER model includes only the development of a distinct information system.
The logical ER model contains more detail than the conceptual ER model. In addition to
master data entities, operational and transactional data entities are now defined. The
details of each data entity are developed and the relationships between these data entities
are established. The logical ER model is, however, developed independently of the
technology in which it can be implemented.
The data modeling technique can be used to describe any ontology (i.e. an overview and
classifications of used terms and their relationships) for a certain area of interest. In the
case of the design of an information system that is based on a database, the conceptual
data model is, at a later stage (usually called logical design), mapped to a logical data
model, such as the relational model; this in turn is mapped to a physical model during
physical design. Note that sometimes, both of these phases are referred to as "physical
design". It is also used in database management systems.
ATTRIBUTES
Fig.13 Attributes
KEY ATTRIBUTES
COMPOSITE ATTRIBUTES
MULTIVALUED ATTRIBUTES
Fig.18 Weak Entity Set
RELATIONSHIP
Fig.19 Relationship
LINKS
Fig.20 Links
RELATIONSHIP
[Figure: a mapping between entity set A = {A1, A2, A3} and entity set B = {B1, B2, B3}.]
ONE TO MANY RELATIONSHIPS
[Figures: mappings between entity set A = {A1, A2, A3, A4} and entity set
B = {B1, B2, B3} illustrating one-to-many association.]
1. SUPER KEY OR CANDIDATE KEY
It is an attribute (or set of attributes) of a table that can uniquely identify a row in the
table. Such keys contain unique values and can never contain NULL values. There can be
more than one super key or candidate key in a table; e.g. within a STUDENT table, Roll
and Mobile No. can both serve to uniquely identify a student.
2. PRIMARY KEY
It is one of the candidate keys that are chosen to be the identifying key for the entire
table. E.g. although there are two candidate keys in the STUDENT table, the college
would obviously use Roll as the primary key of the table.
3. ALTERNATE KEY
This is the candidate key which is not chosen as the primary key of the table. They are
named so because although not the primary key, they can still identify a row.
4. COMPOSITE KEY
Sometimes one key is not enough to uniquely identify a row. E.g. in a single class Roll is
enough to find a student, but in the entire school, merely searching by the Roll is not
enough, because there could be 10 classes in the school and each one of them may
contain a certain roll no. 5. To uniquely identify the student we have to say something like
“class VII, roll no. 5”. So two or more attributes are combined to create a unique
combination of values, such as Class + Roll.
5. FOREIGN KEY
Sometimes we may have to work with a table that does not have a primary key of its
own. To identify its rows, we have to use the primary key of a related table. Such a
copy of another related table’s primary key is called a foreign key.
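The key types above can be sketched in two table definitions (names and sizes are illustrative):

```sql
-- STUDENT: Roll is the primary key; MobileNo is an alternate
-- (candidate) key, enforced here with a UNIQUE constraint
CREATE TABLE STUDENT (
    Roll     INT         NOT NULL PRIMARY KEY,
    MobileNo VARCHAR(15) NOT NULL UNIQUE,
    Name     VARCHAR(50)
)

-- MARKS: composite primary key (Roll + SubjectCd), and Roll is a
-- foreign key referencing the primary key of STUDENT
CREATE TABLE MARKS (
    Roll      INT NOT NULL REFERENCES STUDENT (Roll),
    SubjectCd VARCHAR(10) NOT NULL,
    Score     INT,
    PRIMARY KEY (Roll, SubjectCd)
)
```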
Based on the concept of the foreign key, there may arise a situation where we have to
relate an entity having a primary key of its own and an entity not having a primary key of
its own. In such a case, the entity having its own primary key is called a strong entity, and
the entity not having its own primary key is called a weak entity. Whenever we need to
relate a strong and a weak entity together, the ERD changes just a little. Say, for
example, we have a statement “A Student lives in a Home.” STUDENT is obviously a
strong entity, having a primary key Roll. But HOME may not have a unique primary key,
as its only attribute, Address, may be shared by many homes (what if it is a housing
estate?). HOME is a weak entity in this case. The ERD of this statement would be like
the following:
[Figure: a strong entity linked through a relationship to a weak entity.]
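The STUDENT/HOME example can be sketched as a weak-entity table whose primary key borrows the strong entity's key (table and column names are illustrative):

```sql
-- HOME is weak: its key is the partial key Address combined with
-- the owning student's Roll, borrowed from the strong entity STUDENT
CREATE TABLE HOME (
    Roll    INT          NOT NULL REFERENCES STUDENT (Roll),
    Address VARCHAR(100) NOT NULL,
    PRIMARY KEY (Roll, Address)  -- identifying relationship
)
```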
ENTITY–RELATIONSHIP MODELLING
An entity may be defined as a thing capable of an independent existence that can be uniquely identified. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world. An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically, as a concept). Although the term entity is the one most commonly used, following Chen we should really distinguish between an entity and an entity-type. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym for it. Entities can be thought of as nouns. Examples: a computer, an employee, a song, a mathematical theorem.

A relationship captures how entities are related to one another. Relationships can be thought of as verbs linking two or more nouns. Examples: an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, a proves relationship between a mathematician and a conjecture.

The model's linguistic aspect described above is utilized in the declarative database query language ERROL, which mimics natural-language constructs. ERROL's semantics and implementation are based on reshaped relational algebra (RRA), a relational algebra that is adapted to the entity–relationship model and captures its linguistic aspect.

Entities and relationships can both have attributes. Examples: an employee entity might have a Social Security Number (SSN) attribute; the proves relationship may have a date attribute. Every entity (unless it is a weak entity) must have a minimal set of uniquely identifying attributes, which is called the entity's primary key.

Entity–relationship diagrams do not show single entities or single instances of relationships. Rather, they show entity sets (all entities of the same entity type) and relationship sets (all relationships of the same relationship type). Example: a particular song is an entity; the collection of all songs in a database is an entity set. The eaten relationship between a child and her lunch is a single relationship; the set of all such child–lunch relationships in a database is a relationship set. In other words, a relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member of the relation. Certain cardinality constraints on relationship sets may be indicated as well.
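The correspondence between a relationship set and a mathematical relation becomes concrete when the ER model is mapped to tables: a relationship set such as performs becomes a table of (artist, song) pairs, and it can carry its own attribute, just as described above. A sketch under assumed names (performed_on is a hypothetical date attribute added for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
# Two entity sets as tables:
conn.execute("CREATE TABLE artist (artist_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE song (song_id INTEGER PRIMARY KEY, title TEXT)")
# The 'performs' relationship set: each row is one member of the relation,
# and the relationship itself carries an attribute (performed_on).
conn.execute("""
    CREATE TABLE performs (
        artist_id    INTEGER REFERENCES artist (artist_id),
        song_id      INTEGER REFERENCES song (song_id),
        performed_on TEXT,
        PRIMARY KEY (artist_id, song_id)
    )
""")
conn.execute("INSERT INTO artist VALUES (1, 'A. Singer')")
conn.execute("INSERT INTO song VALUES (10, 'School Anthem')")
conn.execute("INSERT INTO performs VALUES (1, 10, '2018-08-15')")
pairs = conn.execute("SELECT COUNT(*) FROM performs").fetchone()[0]
```

Each row of `performs` is one relationship; the whole table is the relationship set, i.e. the relation in the mathematical sense.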
ENTITY RELATIONSHIP DIAGRAM
[Figure: Entity Relationship Diagram of the proposed system]
DESCRIPTION
This is the Entity Relationship Diagram of the proposed system. The E-R diagram contains four entity sets: library record, marks record, attendance record, and subject record. Each entity set is connected to its attributes. The library record has a login, which is a 1-to-1 relation. It can create the marks record, which is a 1-to-many relation. It can also see the attendance record, which is a 1-to-many relation. Here the relations 'create' and 'see' are represented by an aggregation. Likewise, the subject record also has a login, which is a many-to-many relation.
12. Future Scope:
Presently the website is used for primary-school students, but in future it can be utilized for middle schools, high schools and colleges with some minor modifications.
The database may remain available for a long time, so its information can be used at any time.
An SMS facility may be added in future.
13. Conclusion:
Towards the end of the Hill Fort Public School project, I would like to say that the target which was initially set up was achieved to a good extent. The project made me realise the significance of developing software for a client, where the sole aim is to learn.
During the Hill Fort Public School project, the real importance of following all the principles of system analysis and design dawned on me. I felt the necessity of going through its several stages.
As we have completed the initial investigation, we can now say that it is possible to create this application. But as the project progresses, there may be some changes in the functionality of the project.
14. Bibliography:
Website visited:
1. www.msdn.microsoft.com
2. www.projectcode.com
3. www.plus2net.com/sql_tutoria
4. www.W3Schools.com for CSS Tutorials
5. http://forums.asp.net