
SOFTWARE DESCRIPTION

INTRODUCTION TO .NET FRAMEWORK

The Microsoft .NET Framework is a software technology that is available with several
Microsoft Windows operating systems. It includes a large library of pre-coded solutions to
common programming problems and a virtual machine that manages the execution of programs
written specifically for the framework. The .NET Framework is a key Microsoft offering and is
intended to be used by most new applications created for the Windows platform.

The pre-coded solutions that form the framework's Base Class Library cover a large range of
programming needs in a number of areas, including user interface, data access, database
connectivity, cryptography, web application development, numeric algorithms, and network
communications. The class library is used by programmers, who combine it with their own code
to produce applications.

Programs written for the .NET Framework execute in a software environment that manages
the program's runtime requirements. Also part of the .NET Framework, this runtime environment
is known as the Common Language Runtime (CLR). The CLR provides the appearance of an
application virtual machine so that programmers need not consider the capabilities of the specific
CPU that will execute the program. The CLR also provides other important services such as
security, memory management, and exception handling. The class library and the CLR together
compose the .NET Framework.
Principal design features

Interoperability

Because interaction between new and older applications is commonly required, the .NET Framework provides means to access functionality that is implemented in programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is provided using the P/Invoke feature.
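
As an illustration, the following minimal sketch uses P/Invoke to call the Win32 MessageBox function from user32.dll (the message text is arbitrary):

    using System;
    using System.Runtime.InteropServices;

    class PInvokeDemo
    {
        // Declaration of the unmanaged Win32 MessageBox function from user32.dll.
        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            // The CLR marshals the managed strings for the unmanaged call.
            MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke demo", 0);
        }
    }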

Common Runtime Engine

The Common Language Runtime (CLR) is the virtual machine component of the .NET
framework. All .NET programs execute under the supervision of the CLR, guaranteeing
certain properties and behaviors in the areas of memory management, security, and
exception handling.

Base Class Library

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library
of functionality available to all languages using the .NET Framework. The BCL provides
classes which encapsulate a number of common functions, including file reading and
writing, graphic rendering, database interaction and XML document manipulation.

Simplified Deployment

Installation of computer software must be carefully managed to ensure that it does not
interfere with previously installed software, and that it conforms to security requirements.
The .NET framework includes design features and tools that help address these
requirements.

Security

The design is meant to address some of the vulnerabilities, such as buffer overflows, that
have been exploited by malicious software. Additionally, .NET provides a common security
model for all applications.

Portability

The design of the .NET Framework allows it to theoretically be platform agnostic, and thus cross-platform compatible. That is, a program written to use the framework should run
without change on any type of system for which the framework is implemented. Microsoft's
commercial implementations of the framework cover Windows, Windows CE, and the Xbox
360. In addition, Microsoft submits the specifications for the Common Language Infrastructure
(which includes the core class libraries, Common Type System, and the Common Intermediate
Language), the C# language, and the C++/CLI language to both ECMA and the ISO, making
them available as open standards. This makes it possible for third parties to create compatible
implementations of the framework and its languages on other platforms.

Visual overview of the Common Language Infrastructure (CLI)

Common Language Infrastructure

The core aspects of the .NET framework lie within the Common Language
Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral platform for
application development and execution, including functions for exception handling, garbage
collection, security, and interoperability. Microsoft's implementation of the CLI is called the
Common Language Runtime or CLR.

Assemblies

The intermediate CIL code is housed in .NET assemblies. As mandated by specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform
for all DLL and EXE files. The assembly consists of one or more files, one of which must
contain the manifest, which has the metadata for the assembly. The complete name of an
assembly (not to be confused with the filename on disk) contains its simple text name, version
number, culture, and public key token. The public key token is a short hash derived from the publisher's public key, so two assemblies with the same public key token are guaranteed to come from the same publisher from the point of view of the framework. A private key, known only to the creator of the assembly, can also be specified; it is used for strong naming and guarantees that a new version of the assembly comes from the same author (this is required to add an assembly to the Global Assembly Cache).
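
The complete name can be inspected at run time through reflection; a minimal sketch:

    using System;
    using System.Reflection;

    class AssemblyNameDemo
    {
        static void Main()
        {
            // The full name combines simple name, version, culture and public key token,
            // e.g. "mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089".
            Assembly asm = typeof(string).Assembly;
            Console.WriteLine(asm.FullName);
        }
    }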

Metadata

All CIL is self-describing through .NET metadata. The CLR checks the metadata to
ensure that the correct method is called. Metadata is usually generated by language compilers but
developers can create their own metadata through custom attributes. Metadata contains
information about the assembly, and is also used to implement the reflective programming
capabilities of .NET Framework.
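
For example, a developer can attach a custom attribute to a type and read it back through reflection; a minimal sketch:

    using System;

    [AttributeUsage(AttributeTargets.Class)]
    class AuthorAttribute : Attribute
    {
        public readonly string Name;
        public AuthorAttribute(string name) { Name = name; }
    }

    [Author("Example Author")]   // custom metadata attached to the class
    class MetadataDemo
    {
        static void Main()
        {
            // Reflection reads the attribute back out of the assembly's metadata.
            object[] attrs = typeof(MetadataDemo).GetCustomAttributes(typeof(AuthorAttribute), false);
            Console.WriteLine(((AuthorAttribute)attrs[0]).Name);
        }
    }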
Security

.NET has its own security mechanism with two general features: Code Access Security
(CAS), and validation and verification. Code Access Security is based on evidence that is
associated with a specific assembly. Typically the evidence is the source of the assembly
(whether it is installed on the local machine or has been downloaded from the intranet or
Internet). Code Access Security uses evidence to determine the permissions granted to the code.
Other code can demand that calling code is granted a specified permission. The demand causes
the CLR to perform a call stack walk: every assembly of each method in the call stack is checked
for the required permission; if any assembly is not granted the permission a security exception is
thrown.
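
As a sketch of how such a demand might look in code (the file path and permission are illustrative):

    using System.Security.Permissions;

    class CasDemo
    {
        static void Main()
        {
            // Demand read access to a (hypothetical) file; the CLR walks the call
            // stack and throws SecurityException if any caller lacks the permission.
            FileIOPermission perm = new FileIOPermission(
                FileIOPermissionAccess.Read, @"C:\data\config.xml");
            perm.Demand();
        }
    }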

When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. During validation the CLR checks that the assembly contains valid
metadata and CIL, and whether the internal tables are correct. Verification is not so exact. The
verification mechanism checks to see if the code does anything that is 'unsafe'. The algorithm
used is quite conservative; hence occasionally code that is 'safe' does not pass. Unsafe code will
only be executed if the assembly has the 'skip verification' permission, which generally means
code that is installed on the local machine.

.NET Framework uses appdomains as a mechanism for isolating code running in a process. Appdomains can be created and code loaded into or unloaded from them independent of
other appdomains. This helps increase the fault tolerance of the application, as faults or crashes
in one appdomain do not affect the rest of the application. Appdomains can also be configured
independently with different security privileges. This can help increase the security of the
application by isolating potentially unsafe code. The developer, however, has to split the
application into sub domains; it is not done by the CLR.
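
A minimal sketch of creating and unloading an appdomain:

    using System;

    class AppDomainDemo
    {
        static void Main()
        {
            // Create an isolated domain; faults in it need not crash the whole process.
            AppDomain isolated = AppDomain.CreateDomain("Isolated");
            Console.WriteLine(isolated.FriendlyName);
            // ... load assemblies into the domain and run code here ...
            AppDomain.Unload(isolated);   // unload independently of other appdomains
        }
    }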
Class library

Namespaces in the BCL:

System
System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions
Microsoft .NET Framework includes a set of standard class libraries. The class library
is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either the System.*
or Microsoft.* namespaces. It encapsulates a large number of common functions, such as file
reading and writing, graphic rendering, database interaction, and XML document manipulation,
among others. The .NET class libraries are available to all .NET languages. The .NET
Framework class library is divided into two parts: the Base Class Library and the Framework
Class Library.

The Base Class Library (BCL) includes a small subset of the entire class library and is
the core set of classes that serve as the basic API of the Common Language Runtime. The classes
in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a part of the BCL. The BCL classes are available in both the .NET Framework and its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight and Mono.

The Framework Class Library (FCL) is a superset of the BCL classes and refers to the
entire class library that ships with .NET Framework. It includes an expanded set of libraries,
including Windows Forms, ADO.NET, ASP.NET, Language Integrated Query, Windows Presentation
Foundation, Windows Communication Foundation among others. The FCL is much larger in
scope than standard libraries for languages like C++, and comparable in scope to the standard
libraries of Java.

Memory management

The .NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing up when done); instead it does the memory management itself. To this
end, the memory allocated to instantiations of .NET types (objects) is done contiguously from
the managed heap, a pool of memory managed by the CLR. As long as a reference to an object exists, either directly or via a graph of objects, the object is considered to be in use by the CLR. When there is no longer any reference to an object and it cannot be reached or used, it becomes garbage; however, it still holds on to the memory allocated to it. The .NET Framework includes a garbage collector which runs periodically, on a separate thread from the application's thread, and which enumerates all the unusable objects and reclaims the memory allocated to them.
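
The reachability rule can be observed with a weak reference, which tracks an object without keeping it alive; a minimal sketch (the forced collection is only for demonstration, since normal GC runs are non-deterministic):

    using System;

    class GcDemo
    {
        static void Main()
        {
            // The array is only weakly referenced, so it becomes garbage immediately.
            WeakReference weak = new WeakReference(new byte[1024 * 1024]);
            Console.WriteLine(weak.IsAlive);   // True: not yet collected

            GC.Collect();                      // force a collection for the demo
            GC.WaitForPendingFinalizers();

            Console.WriteLine(weak.IsAlive);   // False: the memory has been reclaimed
        }
    }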

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a certain amount of memory has been used or there is
enough pressure for memory on the system. Since it is not guaranteed when the conditions to
reclaim memory are reached, the GC runs are non-deterministic. Each .NET application has a set
of roots, which are pointers to objects on the managed heap (managed objects). These include
references to static objects and objects defined as local variables or method parameters currently
in scope, as well as objects referred to by CPU registers. When the GC runs, it pauses the
application, and for each object referred to in the root, it recursively enumerates all the objects
reachable from the root objects and marks them as reachable. It uses .NET metadata and
reflection to discover the objects encapsulated by an object, and then recursively walk them. It
then enumerates all the objects on the heap (which were initially allocated contiguously) using
reflection.

All objects not marked as reachable are garbage. This is the mark phase. Since the
memory held by garbage is not of any consequence, it is considered free space. However, this
leaves chunks of free space between objects which were initially contiguous. The objects are
then compacted together by copying them over to the free space to make them
contiguous again. Any reference to an object invalidated by moving the object is updated to
reflect the new location by the GC. The application is resumed after the garbage collection is
over.

The GC used by .NET Framework is actually generational. Objects are assigned a generation; newly created objects belong to Generation 0. The objects that survive a garbage
collection are tagged as Generation 1, and the Generation 1 objects that survive another
collection are Generation 2 objects. The .NET Framework uses up to Generation 2 objects.
Higher generation objects are garbage collected less frequently than lower generation objects.
This helps increase the efficiency of garbage collection, as older objects tend to have a larger
lifetime than newer objects. Thus, by removing older (and thus more likely to survive a
collection) objects from the scope of a collection run, fewer objects need to be checked and
compacted.
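
The generation of an object can be queried directly; a small sketch:

    using System;

    class GenerationDemo
    {
        static void Main()
        {
            object obj = new object();
            Console.WriteLine(GC.GetGeneration(obj)); // 0: newly allocated

            GC.Collect();
            Console.WriteLine(GC.GetGeneration(obj)); // 1: survived one collection

            GC.Collect();
            Console.WriteLine(GC.GetGeneration(obj)); // 2: survived another collection
        }
    }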

Versions

Microsoft started development on the .NET Framework in the late 1990s originally under the
name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions of
.NET 1.0 were released.
The .NET Framework stack.

Version   Version Number   Release Date
1.0       1.0.3705.0       2002-01-05
1.1       1.1.4322.573     2003-04-01
2.0       2.0.50727.42     2005-11-07
3.0       3.0.4506.30      2006-11-06
3.5       3.5.21022.8      2007-11-09

ASP.NET
SERVER APPLICATION DEVELOPMENT

Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.

The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard
operations while your application logic executes through the managed code.

SERVER-SIDE MANAGED CODE

ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.

XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based
applications, XML Web services components have no UI and are not targeted for browsers such
as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable
software components designed to be consumed by other applications, such as traditional client
applications, Web-based applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and deployment into the highly
distributed environment of the Internet.
If you have used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms
pages in any language that supports the .NET Framework. In addition, your code no longer needs
to share the same file with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other managed application,
they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted
and interpreted. ASP.NET pages are faster, more functional, and easier to develop than
unmanaged ASP pages because they interact with the runtime like any managed application.

The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web services are built
on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format),
and WSDL (the Web Services Description Language). The .NET Framework is built on these
standards to promote interoperability with non-Microsoft solutions.

For example, the Web Services Description Language tool included with the .NET
Framework SDK can query an XML Web service published on the Web, parse its WSDL
description, and produce C# or Visual Basic source code that your application can use to become
a client of the XML Web service. The source code can create classes derived from classes in the
class library that handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly, the Web Services
Description Language tool and the other tools contained in the SDK facilitate your development
efforts with the .NET Framework.
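
A typical invocation of the tool might look like the following (the service URL and output file name are hypothetical):

    wsdl.exe /language:CS /out:StockServiceProxy.cs http://example.com/stockservice.asmx?WSDL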

If you develop and publish your own XML Web service, the .NET Framework provides a
set of classes that conform to all the underlying communication standards, such as SOAP,
WSDL, and XML. Using those classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by distributed software
development.
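
A minimal sketch of such a service, written as a single .asmx file (the names and namespace URI are illustrative):

    <%@ WebService Language="C#" Class="HelloService" %>

    using System.Web.Services;

    [WebService(Namespace = "http://example.org/hello")]
    public class HelloService : WebService
    {
        // Marking a method with [WebMethod] lets the framework handle
        // the SOAP and WSDL plumbing automatically.
        [WebMethod]
        public string HelloWorld()
        {
            return "Hello, World";
        }
    }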

Finally, like Web Forms pages in the managed environment, your XML Web service will run
with the speed of native machine language using the scalable communication of IIS.
ACTIVE SERVER PAGES.NET

ASP.NET is a programming framework built on the common language runtime that can
be used on a server to build powerful Web applications. ASP.NET offers several important
advantages over previous Web development models:

Enhanced Performance. ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early
binding, just-in-time compilation, native optimization, and caching services right out of the
box. This amounts to dramatically better performance before you ever write a line of code.
World-Class Tool Support. The ASP.NET framework is complemented by a rich
toolbox and designer in the Visual Studio integrated development environment. WYSIWYG
editing, drag-and-drop server controls, and automatic deployment are just a few of the
features this powerful tool provides.
Power and Flexibility. Because ASP.NET is based on the common language runtime, the
power and flexibility of that entire platform is available to Web application developers.
The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly
accessible from the Web. ASP.NET is also language-independent, so you can choose the
language that best applies to your application or partition your application across many
languages. Further, common language runtime interoperability guarantees that your existing
investment in COM-based development is preserved when migrating to ASP.NET.
Simplicity. ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration. For example, the
ASP.NET page framework allows you to build user interfaces that cleanly separate
application logic from presentation code and to handle events in a simple, Visual Basic - like
forms processing model. Additionally, the common language runtime simplifies
development, with managed code services such as automatic memory management and garbage collection.

Manageability. ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because
configuration information is stored as plain text, new settings may be applied without the aid
of local administration tools. This "zero local administration" philosophy extends to
deploying ASP.NET Framework applications as well. An ASP.NET Framework application is
deployed to a server simply by copying the necessary files to the server. No server restart is
required, even to deploy or replace running compiled code.

Scalability and Availability. ASP.NET has been designed with scalability in mind, with
features specifically tailored to improve performance in clustered and multiprocessor
environments. Further, processes are closely monitored and managed by the ASP.NET
runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its
place, which helps keep your application constantly available to handle requests.
Customizability and Extensibility. ASP.NET delivers a well-factored architecture that
allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to
extend or replace any subcomponent of the ASP.NET runtime with your own custom-written
component. Implementing custom authentication or state services has never been easier.
Security. With built-in Windows authentication and per-application configuration, you
can be assured that your applications are secure.
LANGUAGE SUPPORT

The Microsoft .NET Platform currently offers built-in support for three languages: C#,
Visual Basic, and JScript.

WHAT IS ASP.NET WEB FORMS?

The ASP.NET Web Forms page framework is a scalable common language runtime
programming model that can be used on the server to dynamically generate Web pages.

Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web Forms framework has been specifically designed to address a number of key deficiencies in the previous model. In particular, it provides:

The ability to create and use reusable UI controls that can encapsulate common functionality
and thus reduce the amount of code that a page developer has to write.
The ability for developers to cleanly structure their page logic in an orderly fashion (not
"spaghetti code").
The ability for development tools to provide strong WYSIWYG design support for pages
(existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx
resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework
class. This class can then be used to dynamically process incoming requests. (Note that the .aspx
file is compiled only the first time it is accessed; the compiled type instance is then reused across
multiple requests).

An ASP.NET page can be created simply by taking an existing HTML file and changing
its file name extension to .aspx (no modification of code is required). For example, the following
sample demonstrates a simple HTML page that collects a user's name and category preference
and then performs a form post back to the originating page when a button is clicked:
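
A minimal sketch of such a page (the file name, field names and categories are illustrative; the file would simply be saved with an .aspx extension):

    <html>
    <body>
        <form action="userinfo.aspx" method="post">
            Name: <input type="text" name="Name" />
            Category:
            <select name="Category">
                <option>psychology</option>
                <option>business</option>
                <option>popular_comp</option>
            </select>
            <input type="submit" value="Lookup" />
        </form>
    </body>
    </html>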

ASP.NET provides syntax compatibility with existing ASP pages. This includes support
for <% %> code render blocks that can be intermixed with HTML content within an .aspx file.
These code blocks execute in a top-down manner at page render time.
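
For instance, a render block can emit HTML from a C# loop; a minimal sketch:

    <% for (int i = 1; i <= 3; i++) { %>
        <font size="<%=i%>"> Hello World! </font> <br />
    <% } %>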

CODE-BEHIND WEB FORMS

ASP.NET supports two methods of authoring dynamic pages. The first is the method
shown in the preceding samples, where the page code is physically declared within the
originating .aspx file. An alternative approach--known as the code-behind method--enables the
page code to be more cleanly separated from the HTML content into an entirely separate file.
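
A minimal code-behind sketch (file and class names are illustrative; the .aspx page would reference the class through its Inherits attribute and wire the handler with OnClick):

    // Hello.aspx.cs -- hypothetical code-behind file for Hello.aspx
    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class HelloPage : Page
    {
        // Maps to <asp:Label id="Message" runat="server" /> declared in the .aspx file.
        protected Label Message;

        protected void SubmitBtn_Click(object sender, EventArgs e)
        {
            Message.Text = "Clicked at " + DateTime.Now;
        }
    }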

INTRODUCTION TO ASP.NET SERVER CONTROLS

In addition to (or instead of) using <% %> code blocks to program dynamic content,
ASP.NET page developers can use ASP.NET server controls to program Web pages. Server
controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the
controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.

Server controls automatically maintain any client-entered values between round trips to
the server. This control state is not stored on the server (it is instead stored within an <input
type="hidden"> form field that is round-tripped between requests). Note also that no client-side
script is required.

In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the following sample demonstrates
how the <asp:adrotator> control can be used to dynamically display rotating ads on a page.
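
The control declaration itself is a single tag (the advertisement file name is illustrative; the XML file lists the ads to rotate):

    <asp:AdRotator id="Ads" AdvertisementFile="ads.xml" BorderWidth="1" runat="server" />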

1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library or
cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by
third parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server
controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server data
validation.

C#.NET

ADO.NET OVERVIEW
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the web with
scalability, statelessness, and XML in mind.

ADO.NET uses some ADO objects, such as the Connection and Command objects, and also
introduces new objects. Key new ADO.NET objects include the DataSet, DataReader and DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct from any
data stores. Because of that, the DataSet functions as a standalone entity. You can think of the
DataSet as an always disconnected recordset that knows nothing about the source or destination
of the data it contains. Inside a DataSet, much like in a database, there are tables, columns,
relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed while the
DataSet held the data. In the past, data processing has been primarily connection-based. Now, in
an effort to make multi-tiered apps more efficient, data processing is turning to a message-based
approach that revolves around chunks of information. At the center of this approach is the
DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its
source data store. It accomplishes this by means of requests to the appropriate SQL commands
made against the data store.
The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as collections and
data types. No matter what the source of the data within the DataSet is, it is manipulated through
the same set of standard APIs exposed through the DataSet and its subordinate objects.

While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill, and persist
the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers
(System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide
four basic objects: the Command, Connection, DataReader and DataAdapter. In the
remaining sections of this document, we'll walk through each part of the DataSet and the OLE
DB/SQL Server .NET Data Providers explaining what they are, and how to program against
them.

The following sections will introduce you to some objects that have evolved, and some that are
new. These objects are:

Connections. For connecting to and managing transactions against a database.
Commands. For issuing SQL commands against a database.
DataReaders. For reading a forward-only stream of data records from a SQL Server data source.
DataSets. For storing, remoting and programming against flat data, XML data and relational data.
DataAdapters. For pushing data into a DataSet, and reconciling data against a database.
When dealing with connections to a database, there are two different options: SQL Server
.NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider, which is written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).

Connections:

Connections are used to 'talk to' databases, and are represented by provider-specific
classes such as SqlConnection. Commands travel over connections and resultsets are returned in
the form of streams which can be read by a DataReader object, or pushed into a DataSet object.
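
A minimal sketch of opening a connection and starting a transaction (the connection string is hypothetical):

    using System.Data.SqlClient;

    class ConnectionDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "server=(local);database=Northwind;Integrated Security=SSPI"))
            {
                conn.Open();
                // Transactions are managed on the connection object.
                SqlTransaction tx = conn.BeginTransaction();
                // ... execute commands enlisted in tx here ...
                tx.Commit();
            }   // the connection is closed when the using block exits
        }
    }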

Commands:

Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SqlCommand. A command can be a stored procedure call, an
UPDATE statement, or a statement that returns results. You can also use input and output
parameters, and return values as part of your command syntax. The example below shows how to
issue an INSERT statement against the Northwind database.
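
A minimal sketch of such an INSERT (the connection string, table and values are illustrative):

    using System;
    using System.Data.SqlClient;

    class InsertDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "server=(local);database=Northwind;Integrated Security=SSPI"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", conn);
                cmd.Parameters.AddWithValue("@id", "DEMO1");
                cmd.Parameters.AddWithValue("@name", "Demo Company");
                int rows = cmd.ExecuteNonQuery();   // number of rows affected
                Console.WriteLine(rows);
            }
        }
    }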

DataReaders:

The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object
is returned after executing a command against a database. The format of the returned
DataReader object is different from a recordset. For example, you might use the DataReader to
show the results of a search list in a web page.
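
A minimal sketch of reading rows with a DataReader (the query and connection string are hypothetical):

    using System;
    using System.Data.SqlClient;

    class ReaderDemo
    {
        static void Main()
        {
            using (SqlConnection conn = new SqlConnection(
                "server=(local);database=Northwind;Integrated Security=SSPI"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "SELECT CustomerID, CompanyName FROM Customers", conn);
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    // Forward-only: each row is visited exactly once.
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}",
                            reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }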
DATASETS AND DATAADAPTERS:

DataSets

The DataSet object is similar to the ADO Recordset object, but more powerful, and with
one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns, relationships,
and constraints. However, though a DataSet can and does behave much like a database, it is
important to remember that DataSet objects do not interact directly with databases, or other
source data. This allows the developer to work with a programming model that is always
consistent, regardless of where the source data resides. Data coming from a database, an XML
file, from code, or user input can all be placed into DataSet objects. Then, as changes are made
to the DataSet they can be tracked and verified before updating the source data. The
GetChanges method of the DataSet object actually creates a second DataSet that contains only
the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update
the original data source.

The DataSet has many XML characteristics, including the ability to produce and consume XML
data and XML schemas. XML schemas can be used to describe schemas interchanged via
WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and
statement completion.

DATAADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and
SqlConnection) can increase overall performance when working with Microsoft SQL Server databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter
object and its associated OleDbCommand and OleDbConnection objects.

The DataAdapter object uses commands to update the data source after changes have been
made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command;
using the Update method calls the INSERT, UPDATE or DELETE command for each changed
row. You can explicitly set these commands in order to control the statements used at runtime to
resolve changes, including the use of stored procedures. For ad-hoc scenarios, a
CommandBuilder object can generate these at run-time based upon a select statement.
However, this run-time generation requires an extra round-trip to the server in order to gather
required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at
design time will result in better run-time performance.
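
A minimal sketch of the fill/update round trip (the connection string and table are hypothetical; here a CommandBuilder generates the update commands at run time, with the performance caveat noted above):

    using System.Data;
    using System.Data.SqlClient;

    class AdapterDemo
    {
        static void Main()
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerID, CompanyName FROM Customers",
                "server=(local);database=Northwind;Integrated Security=SSPI");
            // Wires itself to the adapter and generates INSERT/UPDATE/DELETE commands.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");     // SELECT runs here

            // Work on the disconnected cache, then reconcile the changes.
            ds.Tables["Customers"].Rows[0]["CompanyName"] = "Renamed Company";
            adapter.Update(ds, "Customers");   // UPDATE runs here for each changed row
        }
    }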

1. ADO.NET is the next evolution of ADO for the .NET Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new
objects, the DataSet and DataAdapter, are provided for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in order to do
inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert,
update, or delete it.
Also, you can use a DataSet to bind to the data, move through the data, and navigate data
relationships.
SQL SERVER -2000

A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS and SQL Server. These systems allow users to create, update and extract information from their databases.

A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name.

During an SQL Server Database design project, the analysis of your business needs
identifies all the fields or attributes of interest. If your business needs change over time, you
define any additional fields or change the definition of existing fields.

SQL SERVER TABLES

SQL Server stores records relating to each other in a table. Different tables are created
for the various groups of information. Related tables are grouped together to form a database.

PRIMARY KEY

Every table in SQL Server has a field or a combination of fields that uniquely identifies each record in the table. This unique identifier is called the primary key, or simply the key. The primary key provides the means to distinguish one record from all others in a table. It allows the user and the database system to identify, locate and refer to one particular record in the database.

RELATIONAL DATABASE
Sometimes all the information of interest to a business operation can be stored in one table; often, however, it is spread across several related tables. SQL Server makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example. This is what makes SQL Server a relational database management system, or RDBMS: it stores data in two or more tables and enables you to define relationships between the tables.

FOREIGN KEY

When a field in one table matches the primary key of another table, that field is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table.
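
A sketch in SQL of a foreign key tying two hypothetical tables together:

    CREATE TABLE Department (
        DeptID   INT PRIMARY KEY,
        DeptName VARCHAR(50) NOT NULL
    )

    CREATE TABLE Employee (
        EmpID   INT PRIMARY KEY,
        EmpName VARCHAR(50) NOT NULL,
        DeptID  INT REFERENCES Department(DeptID)   -- foreign key to Department
    )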

REFERENTIAL INTEGRITY

Not only does SQL Server allow you to link multiple tables, it also maintains consistency
between them. Ensuring that the data among related tables is correctly matched is referred to as
maintaining referential integrity.

DATA ABSTRACTION

A major purpose of a database system is to provide users with an abstract view of the
data. This system hides certain details of how the data is stored and maintained. Data abstraction
is divided into three levels.

Physical level: This is the lowest level of abstraction at which one describes how the data are
actually stored.

Conceptual level: At this level of database abstraction, one describes what data are actually stored, together with the entities and the relationships among them.

View level: This is the highest level of abstraction at which one describes only part of the
database.

ADVANTAGES OF RDBMS
Redundancy can be avoided
Inconsistency can be eliminated
Data can be Shared
Standards can be enforced
Security restrictions can be applied
Integrity can be maintained
Conflicting requirements can be balanced
Data independence can be achieved.

DISADVANTAGES OF DBMS

A significant disadvantage of the DBMS system is cost. In addition to the cost of purchasing or developing the software, the hardware has to be upgraded to allow for the
extensive programs and the workspace required for their execution and storage. While
centralization reduces duplication, the lack of duplication requires that the database be
adequately backed up so that in case of failure the data can be recovered.

FEATURES OF SQL SERVER (RDBMS)


SQL SERVER is one of the leading database management systems (DBMS) because it is the only database that meets the uncompromising requirements of today's most demanding information systems. From complex decision support systems (DSS) to the most rigorous online transaction processing (OLTP) applications, even applications that require simultaneous DSS and OLTP access to the same critical data, SQL Server leads the industry in both performance and capability.

SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched
performance, continuous operation and support for every database.

The SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is specially designed for online transaction processing and for handling large database applications.

SQL SERVER with the transaction processing option offers features which contribute to a very high level of transaction processing throughput, among them the row-level lock manager.

ENTERPRISE WIDE DATA SHARING

The unrivaled portability and connectivity of the SQL SERVER DBMS enables all the
systems in the organization to be linked into a singular, integrated computing resource.

PORTABILITY

SQL SERVER is fully portable to more than 80 distinct hardware and operating system platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary platforms.
This portability gives complete freedom to choose the database server platform that meets the
system requirements.

OPEN SYSTEMS
SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's most comprehensive collection of tools, applications, and third-party software products. SQL Server's open architecture provides transparent access to data from other relational databases and even non-relational databases.

DISTRIBUTED DATA SHARING

SQL Server's networking and distributed database capabilities allow you to access data stored on a remote server with the same ease as if the information were stored on a single local computer. A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate.

UNMATCHED PERFORMANCE

The most advanced architecture in the industry allows the SQL SERVER DBMS to
deliver unmatched performance.

SOPHISTICATED CONCURRENCY CONTROL

Real-world applications demand access to critical data. With most database systems, applications become contention bound, in which performance is limited not by CPU power or by disk I/O, but by users waiting on one another for data access. SQL Server employs full, unrestricted row-level locking and contention-free queries to minimize, and in many cases entirely eliminate, contention wait times.

NO I/O BOTTLENECKS
SQL Server's fast commit, group commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL Server commits transactions with at most a single sequential write to the log file. On high-throughput systems, one sequential write typically commits multiple transactions as a group. Data read by a transaction remains in shared memory so that other transactions may access that data without reading it again from disk. Since fast commit writes all the data necessary for recovery to the log file, modified blocks are written back to the database independently of the transaction commit, when flushed from memory to disk.

SYSTEM TESTING
INTRODUCTION:

Testing is the process of detecting errors. Testing performs a very critical role for quality
assurance and for ensuring the reliability of software. The results of testing are used later on
during maintenance also.

The aim of testing is often taken to be demonstrating that a program works by showing that it has no errors. However, the basic purpose of the testing phase is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should be to show that a program doesn't work. Testing is the process of executing a program with the intent of finding errors.

Testing Objectives

The main objective of testing is to uncover a host of errors, systematically and with
minimum effort and time. Stating formally, we can say,

Testing is a process of executing a program with the intent of finding an error.

A successful test is one that uncovers an as yet undiscovered error.

A good test case is one that has a high probability of finding error, if it exists.

If no errors are found, it may only mean that the tests were inadequate to detect possibly present errors.

Levels of Testing

In order to uncover the errors present in different phases we have the concept of levels of
testing. The basic levels of testing are as shown below
Client Needs   ->  Acceptance Testing
Requirements   ->  System Testing
Design         ->  Integration Testing
Code           ->  Unit Testing

Testing Strategies:

A strategy for software testing integrates software test case design methods into a well-
planned series of steps that result in the successful construction of software.

Unit Testing

Unit testing focuses verification effort on the smallest unit of software i.e. the module.
Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of the module. All modules must pass the unit test before integration testing begins.

Unit Testing in this project


In this project each service can be thought of as a module. There are many modules, such as Login, New Registration, Change Password, Post Question and Modify Answer. Each module was tested during development and again on completion, so that each module works without any error. The inputs are validated when accepted from the user.
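
As an illustration only, a unit test for input validation might look like the following sketch, assuming an NUnit-style framework and a hypothetical validation helper (neither is part of this project's actual code):

    using NUnit.Framework;

    static class Validator
    {
        // Hypothetical rule: a user name must be non-empty and at most 20 characters.
        public static bool IsValidUserName(string name)
        {
            return !string.IsNullOrEmpty(name) && name.Length <= 20;
        }
    }

    [TestFixture]
    public class LoginValidationTests
    {
        [Test]
        public void RejectsEmptyUserName()
        {
            Assert.IsFalse(Validator.IsValidUserName(""));
        }

        [Test]
        public void AcceptsWellFormedUserName()
        {
            Assert.IsTrue(Validator.IsValidUserName("john_doe"));
        }
    }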

TEST PLAN:

A number of activities must be performed for testing software. Testing starts with a test plan. The test plan identifies all testing-related activities that need to be performed, along with the schedule and guidelines for testing. The plan also specifies the levels of testing that need to be done, by identifying the different units. For each unit specified in the plan, the test cases are first produced and reports are generated; these reports are then analyzed.

A test plan is a general document for the entire project, which defines the scope, the approach to be taken, and the personnel responsible for the different activities of testing. The inputs for forming test plans are:

1. Project plan

2. Requirements document

3. System design

White Box Testing


White Box Testing mainly focuses on the internal workings of the product. Here one part is taken at a time and tested thoroughly at statement level to find the maximum possible errors. Loops are also constructed in such a way that each part is tested within a range; that is, the part is executed at its boundary values and within its bounds for the purpose of testing.

White Box Testing in this Project

I tested every piece of code stepwise, taking care that every statement in the code is executed at least once. I generated a list of test cases with sample data, which was used to check all possible combinations of execution paths through the code at every module level.

Black Box Testing

This testing method considers a module as a single unit and checks the unit at its interface and its communication with other modules, rather than getting into details at the statement level. Here the module is treated as a black box that takes some input and generates output. Output for a given set of input combinations is forwarded to other modules.

Black Box Testing in this Project: I tested each and every module by considering each module as a unit. I prepared a set of input combinations and checked the outputs for those inputs. I also tested whether the communication between modules performs correctly.

Integration Testing
After the unit testing we have to perform integration testing. The goal here is to see if
modules can be integrated properly or not. This testing activity can be considered as testing the
design and hence the emphasis on testing module interactions. It also helps to uncover a set of
errors associated with interfacing. Here the input to these modules will be the unit tested
modules.

Integration testing is classified into two types:

1. Top-Down Integration Testing.

2. Bottom-Up Integration Testing.

In Top-Down Integration Testing, modules are integrated by moving downward through the control hierarchy, beginning with the main control module.

In Bottom-Up Integration Testing each sub module is tested separately and then the full
system is tested.

Integration Testing in this project: In this project, integrating all the modules forms the main system; that is, I used Bottom-Up Integration Testing. When integrating all the modules I checked whether the integration affects the working of any of the services, by giving different combinations of inputs with which the services ran perfectly before integration.

System Testing
Project testing is an important phase without which the system can't be released to the end users. It is aimed at ensuring that all the processes conform accurately to the specification.

System Testing in this project: Here the entire system has been tested against the requirements of the project, and it was checked whether all the requirements have been satisfied.

Alpha Testing

This refers to the system testing that is carried out by the test team within the organization.

Beta Testing

This refers to the system testing that is performed by a select group of friendly customers.

Acceptance Testing

Acceptance Test is performed with realistic data of the client to demonstrate that the
software is working satisfactorily. Testing here is focused on external behavior of the system; the
internal logic of program is not emphasized.

Acceptance Testing in this project: In this project I collected some data belonging to the University and tested whether the project works correctly with it.

I conclude that this system has been tested using a variety of tests and no errors were found. Hence the testing process is complete.
