Client/Server Computing
Important 5 Mark Questions
1. How to dispel the myths in client/server computing?
Many myths surround client/server computing. Some are promoted by marketing literature.
Others are promoted by the press, more by omission than inclusion.
a) Client/Server Computing Is Easily Implemented:-
• Implementing any technology that requires integration of hardware and software from
multiple vendors is not easy. Implementing client/server computing is no different.
• To many mainframe-oriented IS professionals, client/server computing is part of the micro
world. Without understanding the capabilities of the micro and its related software and how
the mainframe capabilities can complement them, applications cannot be designed to take
advantage of both.
• Micro-oriented professionals need to understand why it is important that a server include
host-like functions, such as backup and recovery, security, reliability capabilities, and
network management.
• Even if the developers understand the host capabilities that should be included and the micro
capabilities that should be included, there is still the network to be dealt with.
• In the name of progress, some micro applications are being forced to fit into a client/server
architecture when they belong entirely in the micro world. The same holds true for
mainframe applications. It is critical that an organization review the placement of each
application with an open mind, and that the review be based on business reasons, not personal
or technical reasons.
b) Current Desktop Machines Are Sufficient:-
• Organizations cannot always use their existing desktop hardware to support a client/server
environment. The AT and 286 class machines do not have the power required for this
environment and will have to be upgraded or replaced. Most client software requires at least
a 386 machine (ideally 33 MHz) with a minimum of 2 Mbytes of memory and 40 Mbytes of
hard disk capacity.
• If the client/server application is just providing a GUI to an existing mainframe application,
a basics-only client machine will suffice. As the applications get more complex, the
capabilities of the desktop machine must be reviewed. If a great deal of processing must be
done on the client machine, that machine needs more memory, greater hard disk capacity,
increased memory caching, and, possibly, a faster cycle speed.
c) Minimal Training Is Required:-
• Marketing literature gives organizations the impression that they can train a few of their best
IS professionals in client/server technology and be off and running.
• Now IS needs experts in networks, network software, client software, and server software. In
addition, they have to maintain this new environment in geographically dispersed locations.
• The design, installation, and management of LANs alone require experts who understand the
interdependencies among hardware, software, and networks. If ever there was a need for
cross-training, client/server computing is it.
d) All Data Are Relational:-
• Relational DBMS vendors continue to push relational technology as the only structure
required to manage all the data in an organization! Since it is based on SQL access,
client/server technology assumes any data needed by a user can be accessed via SQL requests
or through a translator that accepts SQL statements and converts them to another access
language.
• There are still a lot of legacy systems with file structures, not database structures. And new
structures are being introduced, such as object-oriented databases.
• Relational data structures require that the user understand the data and know something about
relational technology.
• Intelligent databases can relieve the user of this requirement, but not all server databases are
intelligent databases. Intelligent SQL interfaces can also relieve the user of the requirement to
understand relational technology.
e) Development Time is Shorter:-
• Compared to host-based applications, client/server applications are typically smaller in scope
and designed for smaller user communities. The automated development tools for building
client/server applications are easy to use and shorten development time. These tools were
designed, from scratch, specifically for the client/server environment. In addition, many of
these tools are based on object technology. As a result, development should be relatively easy
and short.
• Be careful when using productivity as a benefit without actually giving quantifiable measures.
If the development of typical client/server applications will take 50 percent less time than
their mainframe counterparts, state it as such.
• Build the learning curve, integration issues, system tuning, and debugging into time estimates.
Be sure to manage management's perception of what shorter really is.
Client/server computing is not the easy task that marketing literature would have us believe.
There are some very real, and painful, hurdles on the road to success.
a) Costs:-
• Potential cost savings prompt organizations to consider client/server computing. The
combined base price of hardware (machines and network) and software for client/server
systems is often a tenth of the cost of host machines and software. Conservative figures for
the cost per MIPS (millions of instructions per second) are:
* IBM mainframes - about $100,000.
* Midrange - about $50,000 (includes some bandwidths processor)
* Desktop - about $300.
• In addition, for smaller systems, the cost of the network operating system and the cost of the
server database could be more than the cost of the server hardware itself.
• Training costs and long learning curves must be anticipated. The client/server environment
seems simple and straightforward, and yet it requires experts in mainframe, midrange, micro,
and LAN technologies.
• Conversion costs can be misleading. There are few products that convert 3GL code, such as
COBOL, into C, one of the preferred languages for client/server computing.
b) Mixed Platforms:-
• In the past, most large organizations operated homogeneous centralized mainframe and mid-
range processors and terminal networks. Each was managed separately with independent
systems and protocols.
• Today's organizations have at least two micro operating systems, multiple network operating
systems, a variety of network topologies and different mainframe and midrange platforms
and data sources.
• The goal of client/server computing is to have all these hardware and software platforms
working together. It is not an easy task.
c) Maintenance:-
• Maintenance is the bane of every IS organization. It is costly and time consuming.
Client/server computing might shorten the backlog but won't do away with maintenance.
• Client/server applications are modular in nature. The process of updating application code is
more straightforward than it is in host-based systems, primarily due to the use of object
paradigms. The modularity of client/server computing and the use of object technology make
the maintenance task (the coding and testing) easier.
• However, consider this new wrinkle to maintenance: if some of the application logic is
processed on the client and there are hundreds of clients in the network, any updates to the
application logic have to be distributed to all those clients without impacting processing.
d) Reliability:-
• When a server goes down, the organization does not wait for the vendor to come and repair
it; they fix it themselves.
• Hardware must be stable and include backup units, fault tolerant systems, and monitoring
software for the system itself.
• The system software (database software, communication software, and the operating system,
maybe more than one of each) must be very robust and easily integrated. Backup and recovery
procedures must be easy to use.
• The backup and recovery procedures taken for granted in the mainframe world must be
provided for the client/server environment.
e) Restructuring Corporate Architecture:-
• Client/server computing puts computing management in the hands of the user group, while
control and administration is still in the hands of IS. These two groups have not been known
to work well together in the past. For client/server computing to work, both end users and IS
must begin to look at computing power as a resource to solve business problems.
• The focus has to be on the business problems, not the technology. IS professionals are
beginning to report to functional managers on the same level as the business users. Business
users also need to understand why centralization is still important and why IS should be
responsible for control and administration of the computer resource.
• Client/server technology also restructures the workflow processing, partly by placing it closer
to the work node.
• The use of electronic data interchange (EDI) is an example of a modification in workflow
processing.
• Client/server applications are being written to provide a GUI for accessing corporate
data as illustrated in Figure.
• These query oriented applications provide a single window to the data of the
organization. In some cases these applications are read-only, in others they are read-
write.
• The benefits of these systems are also ease of use and increased worker productivity.
The productivity for these systems is not measured by how easily a worker deals with
an application, as is the case with screen-emulation systems.
• Some of the tools for this category of client/server applications are offered by server
DBMS vendors.
• These tools work best with the vendor's DBMS, although links are usually provided
to some other data sources. Tools from non-server DBMS vendors usually access a
wider variety of data sources.
• As networks become more sophisticated, they need operating system software to shield
application programs from direct communication with the hardware. A network operating
system manages the services of the server in the same manner that an operating system
manages the services of the hardware.
• Today's leading network operating systems offer the reliability, performance, security, and
internetworking capability once associated primarily with mainframe and mid-range
computers.
• A network operating system manages the services of the server. It shields application
programs from direct communication with the hardware.
• GUIs are an overlay to the network operating system. Originally offered as tools that
supported the concept of a server, network operating systems have evolved into enablers of
other software packages and network management tools.
• Microsoft's Windows New Technology (Windows NT) is based on the Mach variation of the
UNIX kernel. The microkernel architecture ensures its compatibility with applications not
written specifically for Windows NT or future supported operating systems.
• The Windows NT Executive kernel provides basic operating system functions and supports
additional subsystems layered above it. In its initial release, Windows NT supports five
subsystems: 32-bit Windows, 16-bit Windows, DOS, POSIX, and OS/2 (but only in
character mode, without the Presentation Manager GUI).
• Windows NT can be used on a client and on a server in networked environments and can
perform redirecting services for LAN Manager, NetWare, and LAN Server. Windows NT
runs on the Intel 386 and 486, the MIPS R3000 and R4000 RISC chips, and the Digital Alpha chip.
• Windows NT supports multiprocessor systems. Each application thread, even the Windows
NT kernel itself, can run on any processor in a multiprocessor box. IBM has announced plans
to introduce multiprocessing capabilities into OS/2 in the future, but as yet with no specific
time frame.
• Windows NT incorporates several fault-tolerant features to further enhance stability. These
include a built-in, fully recoverable file system, which incorporates features such as disk
mirroring, duplexing, and striping to minimize file damage from power outages and hardware
failures.
• In addition, Windows NT includes exception-handling routines to catch program anomalies
and imposes strict quotas on each process in order to protect system resources.
6. List out the factors involved in the success of Client/Server Computing.
There does not seem to be a consensus on what client/server means, but there is agreement on
the two critical success factors for the technology. Unless a client/server architecture can provide
internetworking and interoperability, it is doomed.
It is important to identify the business motivation for the change to client/server computing.
Whether it is reduced costs, increased productivity, or a competitive advantage, the implementation
of the new technology must be carefully monitored to ensure that it does achieve the desired goal.
* Internetworking.
* Interoperability.
* Compatible Environment.
* Perceived Benefits.
Internetworking:-
• Internetworking deals with how separate platforms are linked. Some of the terms used when
discussing internetworking are bridges, routers, gateways, and LANs. By adhering to a set of
standards, such as TCP/IP and the OSI modul, products to provide link capabilities can be
developed.
• It is also important that the internetwork support the way information really flows within the
organization. If the internetwork does not support that flow, the organization must determine
whether the flow itself is faulty and could change, or whether the network needs modification
and tuning to support the existing flow.
Interoperability:-
• All the pieces of a successful client/server application should have the ability to interact:
interoperability. This is no small feat considering a client/server application has client
machines, client software, a network, network software, a server, and server software. All six
pieces have to be able to communicate reliably.
• To provide this interoperability, current client/server software products can transparently
retrieve data from many different sources, and APIs are being written to support additional
data sources.
• Some of the client/server application development tools create GUI formats that can be re-
compiled for another windowing environment. Conversion products can be used to translate
a GUI format to another environment.
Compatible Environments:-
• Whenever possible, avoid mixing architectures. The successful implementations of
client/server technology replicated a proven architecture. It is not necessary that all the pieces
be supplied by the same vendor. However, all the elements must work together and be treated
as a single entity. They become the internal standards for the three components of client/server
computing.
• When new components are added to the environment, they must work as-is. Be wary of
enhancements under development that will make the new component fit the existing
environment. It is critical that the organization stick with its standards. The fewer the
exceptions, the fewer the headaches.
Perceived Benefits:-
• Managing expectations is an important part of any application that uses new technology. If
the users or management expect more than they actually get, the success of the application is
in question. More of what? Benefits, ease of use, cost savings, response time, functionality,
accessible data; the list goes on.
• Many companies have started with host-based applications where only the client portion of
the application is modified.
• Much time, effort, and money was spent to convert 3270/5250-based screens into GUI-based
screens. No changes were made to the network or the application running on the host.
• The primary benefit of this conversion is increased end-user productivity.
• Know the users. Developers must understand the users' orientation and work profile.
How comfortable are they with computers? Do they prefer a keyboard or a mouse or both?
What terminology do they use for the functions in the application?
• Simplify often-used tasks. These tasks should be on a tool bar for quick access or
reflected in the order of items on a menu. Accelerator keys allow users to use the keyboard
to quickly go through a menu series. Users should be allowed to turn off tasks, which
removes them from the tool bar, if they don't expect to use them.
• Provide feedback to the user. The cursor should be changed to an hourglass to
signify a short wait while the system is processing. A long wait (more than 10 seconds)
should be indicated by a message box with a progress indicator. Beeps should be used only
for actions that require immediate attention. Error message boxes should explain the error
(in user-ese) and offer suggestions for correcting the error.
• Be consistent. Don't let developers stretch their creative juices too far. Consistency
guidelines should include grammar, syntax, icons, and color. If users are happy with the
products from a major vendor, copy the look of those products. Consistency is necessary if
users are expected to go quickly from one application to another.
• Test early and often. In addition to the users of the application, GUIs should be tested by
other developers and users of other applications.
The components of open systems are standards-based products and technology, an open
development infrastructure, and management directives.
• Standards-based products and technology: Open systems technology provides portability
and interoperability; closed technologies will dead-end progress. While some proprietary
technology is beneficial, the trick is to provide proprietary functionality within a structure of
common APIs.
• Open development infrastructure: Standards must ensure that current implementations of
technology build on prior implementations and are able to support future implementations.
• Management directives: These directives, established by consensus by those with a vested
interest in the IS infrastructure, ensure that technology does not benefit one group at the
expense of the organization.
A host-based processing application usually does not require additional server hardware
because the host running the application is acting as the server. Other classes of client/server
applications will require either a micro/server, a superserver, a database server machine, a
mid-range computer, or a fault-tolerant machine.
Micro/Server:-
• A micro/server, an upright micro that can fit under a desk, has been optimized for server
functionality. Micro/servers use Intel 486 chips, run at 33 MHz or higher, and have expanded
memory capabilities.
• Internal hard drives offer capacities ranging from 80 Mbytes to 500 Mbytes. Intelligent disk
subsystems can provide access to several gigabytes of data.
Superservers:-
• Superservers, developed specifically for the client/server architecture, are an important
option for server hardware.
• Superservers should not be confused with UNIX-based servers that provide hardware
features such as multiple processors, large amounts of memory, and massive high-speed disk
arrays and were built for specialized applications, such as technical or scientific applications.
Database Machines:-
• Using specialized hardware and software, database machines can run as high-speed database
servers or as application servers. Because they usually support parallel processing, data
searches are performed at high speeds. Large database machines can have hundreds of
processors.
Fault-Tolerant Machines:-
• As mainframe data is moved to servers, it becomes critical that server machines remain
operational. Businesses are looking to fault-tolerant computers to keep these applications
available to end-users.
• A fully fault-tolerant machine uses a dual-redundant hardware architecture with two
complete processing systems in one enclosure to ensure no point of failure.
• Each system bus, together with its corresponding CPU, memory, EISA module, and attached
peripherals, comprises a system.
• Stored procedures are a collection of SQL statements that are compiled and stored on
the server database. When an SQL command is sent to the server database, the server parses
the command, checks the syntax and names used, checks the protection levels, and then
generates an execution plan for the query. The comparison to interactive queries is illustrated
in the figure.
• Stored procedures allow developers to code queries and other groups of statements into stored
procedures, compile them, store them on the server, and invoke them directly from
applications. The stored procedures are parsed and checked for syntax the first time the
procedure is called. The compiled version is stored in cache memory. Subsequent calls to the
procedure use the compiled version in cache.
• Since they are stored as compiled code, stored procedures execute quickly. They are
automatically re-compiled when changes are made to the objects they affect. Since stored
procedures accept parameters, they can be used by multiple applications with a variety of
data. Stored procedures can be nested and remote calls can be made to procedures on other
systems.
• Stored procedures can also be used to enforce business rules and data integrity. In the case of
the banking transaction, the logic for the debit and the credit as well as a validity check to
ensure that the debited account had enough funds to cover the transfer could be coded into a
stored procedure called transfer-amt. This procedure could be used by any transaction that
transferred money between accounts. The parameters used when the procedure was invoked
would specify which accounts.
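As a rough illustration only, the transfer procedure described above might be written along the
following lines. The accounts table, its columns, and the parameter names are assumptions made
for the example (the procedure is spelled transfer_amt here because SQL identifiers cannot contain
a hyphen), and the syntax follows a Transact-SQL-style dialect; other server DBMSs use different
dialects.

    -- Hypothetical sketch of the transfer-amt procedure described above.
    -- The accounts table and every name used here are assumptions for
    -- illustration, not taken from any particular product or application.
    CREATE PROCEDURE transfer_amt
        @from_acct int,
        @to_acct   int,
        @amount    money
    AS
    BEGIN
        BEGIN TRANSACTION

        -- Validity check: the debited account must have enough funds
        IF (SELECT balance FROM accounts WHERE acct_no = @from_acct) < @amount
        BEGIN
            ROLLBACK TRANSACTION
            RAISERROR ('Insufficient funds to cover the transfer', 16, 1)
            RETURN
        END

        -- Debit one account and credit the other as a single unit of work
        UPDATE accounts SET balance = balance - @amount WHERE acct_no = @from_acct
        UPDATE accounts SET balance = balance + @amount WHERE acct_no = @to_acct

        COMMIT TRANSACTION
    END

Once compiled and stored on the server, any application that moves money between accounts can
invoke it, for example with EXECUTE transfer_amt @from_acct = 1001, @to_acct = 2002,
@amount = 250, and subsequent calls reuse the compiled version held in cache.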
13. Write a note on testing interfaces.
Testing character - based screens is easy – do a screen capture and compare screens. If they
match, success; if not, the detective work begins.
Automated Test Facility (ATF) from Softbridge, Inc. supports unattended testing of
single-user or multiuser applications running under OS/2 or Windows. The performance analysis
software includes two components: the Controller and the Executive.
SQA:Manager and SQA:Robot for Windows from Software Quality Automation help
proceduralize testing of Windows software. SQA:Robot for Windows is a
capture/playback/comparison tool that captures windows instead of screens. SQA:Manager serves
as a project management tool for software testing. It helps developers plan, design, and organize
resources; track problems; apply measurement; and generate reports.
Microsoft's Test for Windows has been used for years as an internal tool at Microsoft. It was
used to test Windows 3.1. Microsoft Test can be used to examine applications and automate test
scripts for Windows programs.
Test Dialogs capture and compare Windows controls such as menus, buttons and dialog
boxes.
Test Screen captures and compares screen bitmaps.
Test Event and Test Control are DLLs that simulate any combination of mouse or keyboard
input. Users can control the timing of events and identify and change the availability and state of any
individual control by name. They can also be called from any programming language that supports
DLL access, such as C, Pascal or Visual Basic.
Test Driver includes an enhanced version of the Basic language, a recorder, and a debugger
with single stepping and breakpoints.
Client/server applications demand more than just management of data: more than data storage,
data integrity, some degree of security, and recovery and backup procedures. Client/server
applications require that some of the application logic be stored with the data in the database. The
logic can be stored as an integrity check, a trigger, or a stored procedure. By defining the logic in the
database, it is written once and used when the "protected" data is accessed.
Most server-stored logic is vendor-dependent. The stored logic will execute only with the
server database software. Some client/server application development products that are not tied to
server database software allow developers to compile and store vendor-neutral logic. One such
product is Ellipse from Cooperative Solutions. The logic is written in ANSI SQL and controlled by
the Ellipse software. If the application moves to new server database software, the logic will still
execute against the new software.
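For instance, the trigger form of server-stored logic mentioned above might look like the sketch
below. The accounts table and the business rule (no negative balances) are assumptions made
purely for illustration, and the syntax again follows a Transact-SQL-style dialect; as noted above,
the exact syntax of server-stored logic is vendor-dependent.

    -- Hypothetical sketch: reject any update that would leave an account
    -- overdrawn. Table, column, and trigger names are assumptions.
    CREATE TRIGGER trg_no_overdraft
    ON accounts
    FOR UPDATE
    AS
    BEGIN
        IF EXISTS (SELECT * FROM inserted WHERE balance < 0)
        BEGIN
            RAISERROR ('Update rejected: account balance may not go negative', 16, 1)
            ROLLBACK TRANSACTION
        END
    END

Because the rule lives in the database, it is written once and enforced no matter which client
program touches the protected data.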
The server database software should handle referential integrity, which ensures that related
data in different tables is consistent. If a parent row is deleted (for example, a customer), the rows
related to that parent row in the children tables (for example, accounts such as savings, checking,
and loans) should also be deleted. This centralizes the control of data integrity, ensures that the rules
are followed no matter which node accesses the data, and frees the developers from having to code
integrity rules into the front-end programs.
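A minimal sketch of how the customer/accounts example above could be declared so that the
server enforces the rule; the table and column definitions are hypothetical and use ANSI-style SQL:

    -- Hypothetical parent and child tables for the example above.
    CREATE TABLE customer (
        cust_no   INT         NOT NULL PRIMARY KEY,
        cust_name VARCHAR(40)
    );

    CREATE TABLE account (
        acct_no   INT           NOT NULL PRIMARY KEY,
        cust_no   INT           NOT NULL,
        acct_type VARCHAR(10),              -- e.g. savings, checking, loan
        balance   DECIMAL(12,2),
        -- Referential integrity declared once, in the database: deleting a
        -- customer row automatically deletes its dependent account rows,
        -- no matter which client node issues the delete.
        FOREIGN KEY (cust_no) REFERENCES customer (cust_no)
            ON DELETE CASCADE
    );

With the rule declared this way, a statement such as DELETE FROM customer WHERE
cust_no = 1001 also removes that customer's account rows, and none of the front-end programs
has to repeat the integrity check.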
In addition, there may be procedural logic associated with the data and allocated to the server,
rather than the client, for execution. This might be done for reasons of load balancing or speed.
Open Systems -
• For client/server computing to be effective, multiple environments must be supported.
• Tools that allow the systems administrator to manage the network (configuration, console,
problems, modification) and monitor its performance must be developed.
• The benefits of open systems, interoperability and portability, result from adherence to
standards.
• The third class of client/server applications uses a fully cooperative peer-to-peer processing
approach.
• In a true cooperative approach, all components of the system are equal and can request or
provide services to each other.
• The client formats the data and executes any run-time calculations, such as row totals or
column computations.
• Data manipulation may be performed on both the client and the server, whichever is more
appropriate.
• Application data may exist on both the client and the server. Local data, as well as server
data, might be used in generating a report.
• Cooperative processing requires a great deal of coordination and there are many integrity
and control issues that must be dealt with.
• Client-based processing applications do some cooperative processing because data
validation, stored procedures, and triggers may be executed on the server.
18. Narrate the popular operating systems used on client machines.
The most popular operating systems used on client machines are:
1. Microsoft's MS-DOS, IBM's PC-DOS, or a DOS clone such as DR DOS from Novell (DR
DOS was formerly from Digital Research)
2. IBM's OS/2
3. A UNIX-based operating system, such as USL's UNIX System V Release 4, IBM's AIX, and
HP's HP-UX.
1 – DOS with Windows 3.x:-
• One of the disadvantages of DOS, a 16-bit operating system, has been the memory ceiling of
640 kbytes. Any memory over this limit is used for caching. The latest version of Microsoft
DOS, MS-DOS 5.0, has improved memory management, data protection, and online help.
• To free up memory for application use, MS-DOS 5.0 is automatically loaded into extended
memory on 286 or higher machines. Also, in 386 or higher machines, device drivers,
terminate-and-stay-resident (TSR) software, and network software can be loaded into upper
memory.
• Windows 3.x augments the capabilities of DOS with its own memory management routines,
simulates multitasking operations, and provides queued input messages that permit event-
driven interaction. TSR software can be run in virtual machines and accessed via a hot-key.
The recommended minimum configuration is 4 Mbytes of memory and a 40-Mbyte hard
drive.
2 – OS/2:-
• OS/2 2.0, a 32-bit operating system from IBM, is supported on most IBM-compatible 386SX
micros and above, provides true multitasking support and recognizes and uses all available
memory-there is no 640 kbyte limitation. An application can use up to 48 Mbytes. The
recommended minimum configuration is 6 Mbytes of memory and an 80-Mbyte hard drive.
• Its new icon-driven interface called WorkPlace Shell (WPS) replaces the Group Manager
interface used by OS/2 1.x. Desktop functions are represented as a menu with icons for
frequently-used applications. Icons are also used to represent objects, such as files, folders,
and devices.
• OS/2 2.0 can simultaneously run DOS, Windows (currently not Windows 3.1, however), and
OS/2 applications in separate windows on the same screen. Users can launch the Windows
programs directly from the Presentation Manager WPS or launch the Windows Program
Manager and run their applications from that interface. Cut-and-paste and DDE are also
supported.
3 – UNIX – Based:-
• UNIX-based operating systems are, in many cases, overkill as client operating systems.
UNIX was designed to operate in a multitasking, multi user environment and, as such, is more
likely to be installed on a workstation than a micro. OS/2, which also provides multitasking,
was designed to provide such support to a single user or multiple users, making it a cost-
effective candidate for client machines.
19. What are the six application tasks of client/server computing? Describe them.
• GUIs require a great deal of processing power to create the screen the user sees. Since the
processing costs on a micro are lower than a host (mainframe or midrange), presentation
processing is best done by the desktop machine, freeing up host resources for other processing
requirements and requiring no changes to the host application. As application tasks were split
between the host and the screen-generating desktop machine, illustrated in Figure 1.2, the
idea of client/server computing was born.
• A file is locked once it is sent to a user machine-even if only parts of it are sent. Early versions
of file servers did not differentiate between a browse access and an update access. Current
LAN software recognizes the difference, eliminating some access bottlenecks.
• As the software became more robust and the power of the desktop machines increased, some
of the data validation was moved to the client. It made sense to move the error-checking and
validation routines to the client-the user received quick turnaround for errors and omissions
and the host did not receive faulty requests.
• In addition, some portions of the application processing were moved from the client to the
server, especially number-crunching activities such as sorting processes and large
consolidations, as well as stored procedures and triggers. The distribution of processing
between the client and server is accommodated by the client/server model, as illustrated in
Figure 1.3.
• When to make this split and how to make this split continues to be up to the developer. Some
products that facilitate building client/server applications can easily incorporate this split-
some even do the partitioning automatically. Otherwise, the split must be coded into the
client/server application.
• SQL Access Group is an industry consortium working on the definition and implementation
of specifications for heterogeneous SQL data access using accepted international standards.
• SQL Access Group is supported by most DBMS developers except IBM. The focus of the
group is to provide an environment where any client front-end can work with any compliant
server database. SQL Access Group supports TCP/IP as well as OSI protocols.
• The Open SQL standards from SQL Access Group are based on ANSI SQL. Current
specifications include SQL Access and a call-level interface. Compliance to these
specifications allows relational DBMS (RDBMS) products from different vendors to access
distributed databases without using special gateways.
• SQL Access Group is also working on a Persistent SQL Function, which uses a high-
performance compiler to compile and store an SQL statement, and a data type strategy for
two-phase commits.
• Microsoft's Open Database Connectivity (ODBC) is an extension of SQL Access Group's
call-level interface (CLI) specification. The SQL Access available in Windows 3.x is CLI
compliant, as is Borland's Object Component Architecture. X/Open has announced its intent
to use the SQL Access Group standards in its Portability Guide.
Micro/Server:- A micro/server, an upright micro that can fit under a desk, has been
optimized for server functionality. Micro/servers use Intel 486 chips, run at 33 MHz or higher, and
have expanded memory capabilities.
Super Server:- Superservers, developed specifically for the client/server architecture, are
an important option for server hardware. Superservers should not be confused with UNIX-based
servers that provide hardware features such as multiple processors, large amounts of memory, and
massive high-speed disk arrays and were built for specialized applications, such as technical or
scientific applications.
A superserver has the following advantages over a micro/server:
• Increased processing power, using multiple processors.
• Increased I/O capabilities, using multiple buses or I/O processor modules for I/O support.
• Increased disk capacity, using arrays of inexpensive disks, which are treated as a single
logical drive.
• Improved memory management, using faster memory chips, better memory architectures,
and optimal use of large amounts of memory and memory caches.
• Improved reliability, using redundant components and ECC memory.
• Improved maintainability, using built-in routines for remote troubleshooting and
management; they are easier to configure and maintain.
23. What is Host-Based Processing? Compare it with Client-Based Processing.
________________________________END______________________________________