Selected Software
The Microsoft .NET Framework is a software technology that is available with several
Microsoft Windows operating systems. It includes a large library of pre-coded solutions to
common programming problems and a virtual machine that manages the execution of programs
written specifically for the framework. The .NET Framework is a key Microsoft offering and is
intended to be used by most new applications created for the Windows platform.
The pre-coded solutions that form the framework's Base Class Library cover a large range of
programming needs in a number of areas, including user interface, data access, database
connectivity, cryptography, web application development, numeric algorithms, and network
communications. The class library is used by programmers, who combine it with their own code
to produce applications.
Programs written for the .NET Framework execute in a software environment that manages
the program's runtime requirements. Also part of the .NET Framework, this runtime environment
is known as the Common Language Runtime (CLR). The CLR provides the appearance of an
application virtual machine so that programmers need not consider the capabilities of the specific
CPU that will execute the program. The CLR also provides other important services such as
security, memory management, and exception handling. The class library and the CLR together
compose the .NET Framework.
Principal design features
Common Language Runtime
The Common Language Runtime (CLR) is the virtual machine component of the .NET
Framework. All .NET programs execute under the supervision of the CLR, guaranteeing
certain properties and behaviors in the areas of memory management, security, and
exception handling.
Base Class Library
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library
of functionality available to all languages using the .NET Framework. The BCL provides
classes which encapsulate a number of common functions, including file reading and
writing, graphic rendering, database interaction, and XML document manipulation.
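As a small illustration of the class library in use, here is a minimal sketch that reads and
writes a text file through the BCL's System.IO types (the file name is illustrative):

using System;
using System.IO;

class BclDemo
{
    static void Main()
    {
        // File is a Base Class Library type from the System.IO namespace.
        File.WriteAllText("notes.txt", "hello from the BCL");
        string text = File.ReadAllText("notes.txt");
        Console.WriteLine(text);
    }
}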
Simplified Deployment
Installation of computer software must be carefully managed to ensure that it does not
interfere with previously installed software, and that it conforms to security requirements.
The .NET framework includes design features and tools that help address these
requirements.
Security
The design is meant to address some of the vulnerabilities, such as buffer overflows, that
have been exploited by malicious software. Additionally, .NET provides a common security
model for all applications.
Portability
The core aspects of the .NET framework lie within the Common Language
Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral platform for
application development and execution, including functions for exception handling, garbage
collection, security, and interoperability. Microsoft's implementation of the CLI is called the
Common Language Runtime or CLR.
Assemblies
Metadata
All CIL is self-describing through .NET metadata. The CLR checks the metadata to
ensure that the correct method is called. Metadata is usually generated by language compilers but
developers can create their own metadata through custom attributes. Metadata contains
information about the assembly, and is also used to implement the reflective programming
capabilities of .NET Framework.
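As a brief sketch of this mechanism, with illustrative names: the compiler emits the custom
attribute below into the assembly's metadata, and reflection reads it back at run time.

using System;

[AttributeUsage(AttributeTargets.Class)]
public class AuthorAttribute : Attribute
{
    public string Name;
    public AuthorAttribute(string name) { Name = name; }
}

// The attribute below is stored as metadata in the compiled assembly.
[Author("A. Developer")]
public class Report { }

class MetadataDemo
{
    static void Main()
    {
        // Reflection retrieves the custom attribute from the assembly metadata.
        object[] attrs = typeof(Report).GetCustomAttributes(typeof(AuthorAttribute), false);
        Console.WriteLine(((AuthorAttribute)attrs[0]).Name);
    }
}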
Security
.NET has its own security mechanism with two general features: Code Access Security
(CAS), and validation and verification. Code Access Security is based on evidence that is
associated with a specific assembly. Typically the evidence is the source of the assembly
(whether it is installed on the local machine or has been downloaded from the intranet or
Internet). Code Access Security uses evidence to determine the permissions granted to the code.
Other code can demand that calling code is granted a specified permission. The demand causes
the CLR to perform a call stack walk: every assembly of each method in the call stack is checked
for the required permission; if any assembly is not granted the permission a security exception is
thrown.
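A minimal sketch of such a demand, assuming the classic .NET Framework CAS types in
System.Security.Permissions (the file path is illustrative):

using System;
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void Main()
    {
        try
        {
            // Demand triggers a stack walk: every caller on the stack must
            // have been granted read access, or a SecurityException is thrown.
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data\report.txt").Demand();
            Console.WriteLine("Permission granted.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("A caller lacks the required permission.");
        }
    }
}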
When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. During validation the CLR checks that the assembly contains valid
metadata and CIL, and whether the internal tables are correct. Verification is not so exact. The
verification mechanism checks to see if the code does anything that is 'unsafe'. The algorithm
used is quite conservative; hence occasionally code that is 'safe' does not pass. Unsafe code will
only be executed if the assembly has the 'skip verification' permission, which generally means
code that is installed on the local machine.
The Base Class Library (BCL) includes a small subset of the entire class library and is
the core set of classes that serve as the basic API of the Common Language Runtime. The classes
in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a
part of the BCL. The BCL classes are available in both the .NET Framework and its alternative
implementations, including the .NET Compact Framework, Microsoft Silverlight, and Mono.
The Framework Class Library (FCL) is a superset of the BCL classes and refers to the
entire class library that ships with .NET Framework. It includes an expanded set of libraries,
including Windows Forms, ADO.NET, ASP.NET, Language Integrated Query, Windows Presentation
Foundation, and Windows Communication Foundation, among others. The FCL is much larger in
scope than standard libraries for languages like C++, and comparable in scope to the standard
libraries of Java.
Memory management
The .NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing up when done); instead it does the memory management itself. To this
end, the memory allocated to instantiations of .NET types (objects) is done contiguously from
the managed heap, a pool of memory managed by the CLR. As long as there exists a reference to
an object, which might be either a direct reference to an object or via a graph of objects, the
object is considered to be in use by the CLR. When there is no reference to an object, and it
cannot be reached or used, it becomes garbage. However, it still holds on to the memory
allocated to it. .NET Framework includes a garbage collector which runs periodically, on a
separate thread from the application's thread, that enumerates all the unusable objects and
reclaims the memory allocated to them.
The collector first marks every object that is still reachable from the application's live
references; all objects not marked as reachable are garbage (the mark phase). Since the memory
held by garbage is of no consequence, it is considered free space. However, this leaves chunks of
free space between objects which were initially contiguous, so the surviving objects are then
compacted together, copying them over the free space to make them contiguous again. Any
reference to an object invalidated by moving the object is updated to
reflect the new location by the GC. The application is resumed after the garbage collection is
over.
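A short illustrative sketch of this life cycle using the BCL's GC class (the forced collection
is for demonstration only; normally the collector decides when to run):

using System;

class GcDemo
{
    static void Main()
    {
        byte[] buffer = new byte[10000000];          // allocated from the managed heap
        Console.WriteLine(GC.GetTotalMemory(false)); // approximate bytes currently allocated

        buffer = null;                 // no reference remains, so the array is now garbage
        GC.Collect();                  // request a collection (for illustration only)
        GC.WaitForPendingFinalizers();
        Console.WriteLine(GC.GetTotalMemory(true));  // memory after collection
    }
}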
Versions
Microsoft started development on the .NET Framework in the late 1990s originally under the
name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions of
.NET 1.0 were released.
[Figure: The .NET Framework stack.]
ASP.NET
SERVER APPLICATION DEVELOPMENT
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.
In a typical deployment, managed code runs in several different server environments:
servers such as IIS and SQL Server perform standard operations while your application logic
executes through the managed code.
ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.
The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web services are built
on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format),
and WSDL (the Web Services Description Language). The .NET Framework is built on these
standards to promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with the .NET
Framework SDK can query an XML Web service published on the Web, parse its WSDL
description, and produce C# or Visual Basic source code that your application can use to become
a client of the XML Web service. The source code can create classes derived from classes in the
class library that handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly, the Web Services
Description Language tool and the other tools contained in the SDK facilitate your development
efforts with the .NET Framework.
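For example, an invocation of the tool (wsdl.exe) might look like the following; the service
URL and output file name are illustrative:

wsdl /language:CS /out:MathServiceProxy.cs http://example.com/MathService.asmx?WSDL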
If you develop and publish your own XML Web service, the .NET Framework provides a
set of classes that conform to all the underlying communication standards, such as SOAP,
WSDL, and XML. Using those classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by distributed software
development.
Finally, like Web Forms pages in the managed environment, your XML Web service will run
with the speed of native machine language using the scalable communication of IIS.
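A minimal sketch of such a service using the classic ASMX model (the class, namespace, and
method names are illustrative):

using System.Web.Services;

[WebService(Namespace = "http://example.org/math/")]
public class MathService : WebService
{
    // The WebMethod attribute exposes this method over SOAP; the framework
    // generates the WSDL and handles the XML serialization.
    [WebMethod]
    public int Add(int a, int b)
    {
        return a + b;
    }
}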
ACTIVE SERVER PAGES.NET
ASP.NET is a programming framework built on the common language runtime that can
be used on a server to build powerful Web applications. ASP.NET offers several important
advantages over previous Web development models:
Scalability and Availability. ASP.NET has been designed with scalability in mind, with
features specifically tailored to improve performance in clustered and multiprocessor
environments. Further, processes are closely monitored and managed by the ASP.NET
runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its
place, which helps keep your application constantly available to handle requests.
Customizability and Extensibility. ASP.NET delivers a well-factored architecture that
allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to
extend or replace any subcomponent of the ASP.NET runtime with your own custom-written
component. Implementing custom authentication or state services has never been easier.
Security. With built-in Windows authentication and per-application configuration, you
can be assured that your applications are secure.
LANGUAGE SUPPORT
The Microsoft .NET Platform currently offers built-in support for three languages: C#,
Visual Basic, and JScript.
The ASP.NET Web Forms page framework is a scalable common language runtime
programming model that can be used on the server to dynamically generate Web pages. It
provides:
1. The ability to create and use reusable UI controls that can encapsulate common
functionality and thus reduce the amount of code that a page developer has to write.
2. The ability for developers to cleanly structure their page logic in an orderly fashion (not
"spaghetti code").
3. The ability for development tools to provide strong WYSIWYG design support for pages
(existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx
resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework
class. This class can then be used to dynamically process incoming requests. (Note that the .aspx
file is compiled only the first time it is accessed; the compiled type instance is then reused across
multiple requests).
An ASP.NET page can be created simply by taking an existing HTML file and changing
its file name extension to .aspx (no modification of code is required). For example, the following
sample demonstrates a simple HTML page that collects a user's name and category preference
and then performs a form post back to the originating page when a button is clicked:
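(A minimal reconstruction of such a page; the field names and category values are illustrative.)

<html>
<body>
   <form action="intro.aspx" method="post">
      Name: <input id="Name" type="text"/>
      Category: <select id="Category">
                   <option>psychology</option>
                   <option>business</option>
                   <option>popular_comp</option>
                </select>
      <input type="submit" value="Lookup"/>
   </form>
</body>
</html>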
ASP.NET provides syntax compatibility with existing ASP pages. This includes support
for <% %> code render blocks that can be intermixed with HTML content within an .aspx file.
These code blocks execute in a top-down manner at page render time.
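For example, a code render block intermixed with HTML (assuming the page language is set
to C#):

<%@ Page Language="C#" %>
<html>
<body>
   <% for (int i = 0; i < 3; i++) { %>
      <div>Item number <%= i %></div>
   <% } %>
</body>
</html>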
ASP.NET supports two methods of authoring dynamic pages. The first is the method
shown in the preceding samples, where the page code is physically declared within the
originating .aspx file. An alternative approach--known as the code-behind method--enables the
page code to be more cleanly separated from the HTML content into an entirely separate file.
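A minimal sketch of the code-behind split, with illustrative file and class names: the .aspx
file holds only the markup and points at a class, while the page logic lives in a separate C# file.

Sample.aspx:

<%@ Page Language="C#" Inherits="SamplePage" Src="Sample.aspx.cs" %>
<html>
<body>
   <form runat="server">
      <asp:Label id="Message" runat="server"/>
      <asp:Button id="Go" Text="Go" OnClick="Go_Click" runat="server"/>
   </form>
</body>
</html>

Sample.aspx.cs:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class SamplePage : Page
{
    // The field name matches the control id declared in the markup.
    protected Label Message;

    // Wired up by the OnClick attribute on the Button control.
    protected void Go_Click(object sender, EventArgs e)
    {
        Message.Text = "Posted back at " + DateTime.Now;
    }
}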
In addition to (or instead of) using <% %> code blocks to program dynamic content,
ASP.NET page developers can use ASP.NET server controls to program Web pages. Server
controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a
runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the
controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.
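For example, an intrinsic HTML tag marked to run on the server (it is handled by the
System.Web.UI.HtmlControls.HtmlInputText control; the id is illustrative):

<form runat="server">
   <input id="UserName" type="text" runat="server"/>
</form>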
Server controls automatically maintain any client-entered values between round trips to
the server. This control state is not stored on the server (it is instead stored within an <input
type="hidden"> form field that is round-tripped between requests). Note also that no client-side
script is required.
1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library or
cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by
third parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server
controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server data
validation.
C#.NET
ADO.NET OVERVIEW
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the web with
scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and also
introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and
DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct from any
data stores. Because of that, the DataSet functions as a standalone entity. You can think of the
DataSet as an always disconnected recordset that knows nothing about the source or destination
of the data it contains. Inside a DataSet, much like in a database, there are tables, columns,
relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed while the
DataSet held the data. In the past, data processing has been primarily connection-based. Now, in
an effort to make multi-tiered apps more efficient, data processing is turning to a message-based
approach that revolves around chunks of information. At the center of this approach is the
DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its
source data store. It accomplishes this by issuing the appropriate SQL commands against the
data store.
The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as collections and
data types. No matter what the source of the data within the DataSet is, it is manipulated through
the same set of standard APIs exposed through the DataSet and its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill, and persist
the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers
(System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide
four basic objects: the Command, Connection, DataReader and DataAdapter. In the
remaining sections of this document, we'll walk through each part of the DataSet and the OLE
DB/SQL Server .NET Data Providers explaining what they are, and how to program against
them.
The following sections will introduce you to some objects that have evolved, and some that are
new. These objects are:
Connections:
Connections are used to 'talk to' databases, and are represented by provider-specific
classes such as SqlConnection. Commands travel over connections and resultsets are returned in
the form of streams which can be read by a DataReader object, or pushed into a DataSet object.
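A minimal sketch of opening a connection with the SQL Server provider (the connection
string assumes a local Northwind database and is illustrative):

using System;
using System.Data.SqlClient;

class ConnectionDemo
{
    static void Main()
    {
        // Adjust the connection string for your own server and database.
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            conn.Open();
            Console.WriteLine("Connection state: " + conn.State);
        } // the using block closes and disposes the connection automatically
    }
}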
Commands:
Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SqlCommand. A command can be a stored procedure call, an
UPDATE statement, or a statement that returns results. You can also use input and output
parameters, and return values as part of your command syntax. The example below shows how to
issue an INSERT statement against the Northwind database.
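A minimal sketch of such an INSERT, with illustrative column values (parameters are used
rather than string concatenation):

using System;
using System.Data.SqlClient;

class InsertDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)", conn);
            // Parameters handle quoting and guard against SQL injection.
            cmd.Parameters.AddWithValue("@name", "Speedy Express 2");
            cmd.Parameters.AddWithValue("@phone", "(503) 555-0100");

            conn.Open();
            int rows = cmd.ExecuteNonQuery(); // number of rows inserted
            Console.WriteLine(rows + " row(s) inserted.");
        }
    }
}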
DataReaders:
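A DataReader provides a fast, forward-only, read-only stream over the results of a command
while the connection remains open. A minimal sketch with SqlDataReader (the query and
connection string are illustrative):

using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Read() advances to the next row; false signals the end.
                while (reader.Read())
                    Console.WriteLine(reader["CustomerID"] + ": " + reader["CompanyName"]);
            }
        }
    }
}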
DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with
one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns, relationships,
and constraints. However, though a DataSet can and does behave much like a database, it is
important to remember that DataSet objects do not interact directly with databases, or other
source data. This allows the developer to work with a programming model that is always
consistent, regardless of where the source data resides. Data coming from a database, an XML
file, from code, or user input can all be placed into DataSet objects. Then, as changes are made
to the DataSet they can be tracked and verified before updating the source data. The
GetChanges method of the DataSet object actually creates a second DataSet that contains only
the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update
the original data source.
The DataSet has many XML characteristics, including the ability to produce and consume XML
data and XML schemas. XML schemas can be used to describe schemas interchanged via Web
services. In fact, a DataSet with a schema can actually be compiled for type safety and
statement completion.
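A short sketch of these capabilities (the table, columns, and file name are illustrative):

using System;
using System.Data;

class DataSetXmlDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("Store");
        DataTable customers = ds.Tables.Add("Customers");
        customers.Columns.Add("CustomerID", typeof(string));
        customers.Columns.Add("CompanyName", typeof(string));
        customers.Rows.Add("ALFKI", "Alfreds Futterkiste");

        // Persist both the data and its schema as XML.
        ds.WriteXml("customers.xml", XmlWriteMode.WriteSchema);

        // GetChanges returns a second DataSet holding only the changed rows,
        // ready to hand to a DataAdapter for updating the source.
        DataSet changes = ds.GetChanges();
        Console.WriteLine(changes.Tables["Customers"].Rows.Count);
    }
}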
DATAADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and
SqlConnection) can increase overall performance when working with Microsoft SQL Server
databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter
object and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been
made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command;
using the Update method calls the INSERT, UPDATE or DELETE command for each changed
row. You can explicitly set these commands in order to control the statements used at runtime to
resolve changes, including the use of stored procedures. For ad-hoc scenarios, a
CommandBuilder object can generate these at run time based upon a SELECT statement.
However, this run-time generation requires an extra round-trip to the server in order to gather
required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at
design time will result in better run-time performance.
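A minimal sketch tying Fill, the CommandBuilder, and Update together (the connection
string and table are illustrative):

using System;
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT ShipperID, CompanyName FROM Shippers", conn);

            // For ad-hoc scenarios, the CommandBuilder derives the INSERT,
            // UPDATE, and DELETE commands from the SELECT at run time.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Shippers");   // executes the SELECT

            ds.Tables["Shippers"].Rows[0]["CompanyName"] = "Renamed Shipper";
            adapter.Update(ds, "Shippers"); // executes the generated UPDATE
        }
    }
}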
A database management system, or DBMS, gives the user access to their data and helps
them transform the data into information. Such database management systems include dBase,
Paradox, IMS, and SQL Server. These systems allow users to create, update, and extract
information from their databases.
During an SQL Server Database design project, the analysis of your business needs
identifies all the fields or attributes of interest. If your business needs change over time, you
define any additional fields or change the definition of existing fields.
SQL Server stores records relating to each other in a table. Different tables are created
for the various groups of information. Related tables are grouped together to form a database.
PRIMARY KEY
Every table in SQL Server has a field or a combination of fields that uniquely identifies
each record in the table. This unique identifier is called the Primary Key, or simply the Key. The
primary key provides the means to distinguish one record from all others in a table. It allows the
user and the database system to identify, locate and refer to one particular record in the database.
RELATIONAL DATABASE
Sometimes all the information of interest to a business operation can be stored in one
table. SQL Server makes it very easy to link the data in multiple tables. Matching an employee
to the department in which they work is one example. This is what makes SQL Server a
relational database management system, or RDBMS. It stores data in two or more tables and
enables you to define relationships between the tables.
FOREIGN KEY
When a field in one table matches the primary key of another table, that field is referred
to as a foreign key. A foreign key is a field or a group of fields in one table whose values match
those of the primary key of another table.
REFERENTIAL INTEGRITY
Not only does SQL Server allow you to link multiple tables, it also maintains consistency
between them. Ensuring that the data among related tables is correctly matched is referred to as
maintaining referential integrity.
DATA ABSTRACTION
A major purpose of a database system is to provide users with an abstract view of the
data. This system hides certain details of how the data is stored and maintained. Data abstraction
is divided into three levels.
Physical level: This is the lowest level of abstraction at which one describes how the data are
actually stored.
Conceptual level: At this level of database abstraction, what data are actually stored is
described, along with the entities and the relationships among them.
View level: This is the highest level of abstraction at which one describes only part of the
database.
ADVANTAGES OF RDBMS
Redundancy can be avoided
Inconsistency can be eliminated
Data can be Shared
Standards can be enforced
Security restrictions can be applied
Integrity can be maintained
Conflicting requirements can be balanced
Data independence can be achieved.
FEATURES OF SQL SERVER
SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched
performance, continuous operation and support for every database.
SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is specially designed
for online transaction processing and for handling large database applications.
SQL SERVER with the transaction processing option offers features which contribute to a very
high level of transaction processing throughput.
The unrivaled portability and connectivity of the SQL SERVER DBMS enables all the
systems in the organization to be linked into a singular, integrated computing resource.
PORTABILITY
SQL SERVER is fully portable to more than 80 distinct hardware and operating system
platforms, including UNIX, MS-DOS, OS/2, Macintosh, and dozens of proprietary platforms.
This portability gives complete freedom to choose the database server platform that meets the
system requirements.
OPEN SYSTEMS
SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's
open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's
most comprehensive collection of tools, applications, and third-party software products. SQL
Server's open architecture provides transparent access to data from other relational databases
and even non-relational databases.
SQL Server's networking and distributed database capabilities allow access to data stored
on remote servers with the same ease as if the information were stored on a single local
computer. A single SQL statement can access data at multiple sites. You can store data where
system requirements such as performance, security, or availability dictate.
UNMATCHED PERFORMANCE
The most advanced architecture in the industry allows the SQL SERVER DBMS to
deliver unmatched performance.
Real-world applications demand access to critical data. With most database systems,
applications become contention-bound, where performance is limited not by CPU power
or by disk I/O, but by users waiting on one another for data access. SQL Server employs full,
unrestricted row-level locking and contention-free queries to minimize, and in many cases
entirely eliminate, contention wait times.
NO I/O BOTTLENECKS
SQL Server's fast commit, group commit, and deferred write technologies dramatically
reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit
time, SQL Server commits transactions with at most a single sequential write to the log file on
disk. On high-throughput systems, one sequential write typically commits multiple transactions
as a group. Data read by a transaction remains in shared memory so that other transactions may
access that data without reading it again from disk. Since fast commit writes all data necessary
for recovery to the log file, modified blocks are written back to the database independently of
the transaction commit.
SYSTEM TESTING
INTRODUCTION:
Testing is the process of detecting errors. Testing performs a very critical role for quality
assurance and for ensuring the reliability of software. The results of testing are also used later,
during maintenance.
The aim of testing is often taken to be to demonstrate that a program works by showing
that it has no errors. The basic purpose of the testing phase, however, is to detect the errors that
may be present in the program. Hence one should not start testing with the intent of showing that
a program works; the intent should be to show that a program doesn't work. Testing is the
process of executing a program with the intent of finding errors.
Testing Objectives
The main objective of testing is to uncover a host of errors, systematically and with
minimum effort and time. Stated formally: a good test case is one that has a high probability of
finding an error, if one exists.
Levels of Testing
In order to uncover the errors introduced in different phases, we have the concept of levels
of testing. The basic levels of testing pair each development artifact with the test level that
verifies it; for example, client needs are verified by acceptance testing, and code is verified by
unit testing.
Testing Strategies:
A strategy for software testing integrates software test case design methods into a
well-planned series of steps that result in the successful construction of software.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software, i.e. the module.
Using the detailed design and the process specifications, testing is done to uncover errors within
the boundary of the module. All modules must pass the unit test before integration testing
begins.
TEST PLAN:
A number of activities must be performed for testing software. Testing starts with a test
plan. The test plan identifies all testing-related activities that need to be performed, along with
the schedule and guidelines for testing. The plan also specifies the levels of testing that need to
be done, by identifying the different units. For each unit specified in the plan, the test cases are
produced first, then the test reports; these reports are then analyzed.
The test plan is a general document for the entire project, which defines the scope, the
approach to be taken, and the personnel responsible for the different testing activities. The
inputs for forming the test plan are:
1. Project plan
2. Requirements document
3. System design
White Box Testing
I tested every piece of code stepwise, taking care that every statement in the code is
executed at least once. I generated a list of test cases with sample data, which is used to check
all possible combinations of execution paths through the code at every module level.
Black Box Testing
This testing method considers a module as a single unit and checks the unit at its interface
and its communication with other modules, rather than getting into details at the statement level.
Here the module is treated as a black box that takes some input and generates output. Output for
a given set of input combinations is forwarded to other modules.
Black Box Testing in this project: I tested each and every module by considering each module
as a unit. I prepared a set of input combinations and checked the outputs for those inputs. I also
tested whether the communication from one module to another performs well.
Integration Testing
After the unit testing we have to perform integration testing. The goal here is to see if
modules can be integrated properly or not. This testing activity can be considered as testing the
design and hence the emphasis on testing module interactions. It also helps to uncover a set of
errors associated with interfacing. The inputs to integration testing are the unit-tested
modules.
In Bottom-Up Integration Testing, each submodule is tested separately and then the full
system is tested.
Integration Testing in this project: In this project, integrating all the modules forms the main
system. This means I used Bottom-Up Integration Testing for this project. When integrating all
the modules, I checked whether the integration affects the working of any of the services by
giving different combinations of inputs with which the two services ran perfectly before
integration.
System Testing
Project testing is an important phase, without which the system cannot be released to the
end users. It is aimed at ensuring that all the processes perform accurately, according to the
specification.
System Testing in this project: Here the entire system has been tested against the requirements
of the project, and it has been checked whether all the requirements are satisfied.
Alpha Testing
This refers to the system testing that is carried out by the test team within the organization.
Beta Testing
This refers to the system testing that is performed by a select group of friendly customers.
Acceptance Testing
Acceptance Test is performed with realistic data of the client to demonstrate that the
software is working satisfactorily. Testing here is focused on external behavior of the system; the
internal logic of program is not emphasized.
Acceptance Testing in this project: In this project I collected some data belonging to the
University and tested whether the project works correctly with it.
I conclude that this system has been tested using a variety of tests and no errors were
found. Hence the testing process is complete.