
CLIENT / SERVER COMPUTING

Important 5 Marks
1. How to dispel the myths in client/server computing?

Many myths surround client/server computing. Some are promoted by marketing literature.
Others are promoted by the press, more by omission than inclusion.
a) Client/Server Computing Is Easily Implemented:-
• Implementing any technology that requires integration of hardware and software from
multiple vendors is not easy. Implementing client/server computing is no different.
• To many mainframe-oriented IS professionals, client/server computing is part of the micro
world. Without understanding the capabilities of the micro and its related software and how
the mainframe capabilities can complement them, applications cannot be designed to take
advantage of both.
• Micro-oriented professionals need to understand why it is important that a server include
host-like functions, such as backup and recovery, security, reliability capabilities, and
network management.
• Even if the developers understand the host capabilities that should be included and the micro
capabilities that should be included, there is still the network to be dealt with.
• In the name of progress, some micro applications are being forced to fit into a client/server
architecture, when they belong entirely in the micro world. The same holds true for
mainframe applications. It is critical that an organization review the placement of each
application with an open mind, and that the review be based on business reasons, not personal
or technical reasons.
b) Current Desktop Machines Are Sufficient:-
• Organizations cannot always use their existing desktop hardware to support a client/server
environment. The AT and 286 class machines do not have the power required for this
environment and will have to be upgraded or replaced. Most client software requires at least
a 386 machine (ideally 33 MHz) with a minimum of 2-Mbytes memory and 40 Mbytes of
hard disk capacity.
• If the client/server application is just providing a GUI to an existing mainframe application,
a basics-only client machine will suffice. As the applications get more complex, the
capabilities of the desktop machine must be reviewed. If a great deal of processing must be
done on the client machine, that machine needs more memory, greater hard disk capacity,
increased memory caching, and, possibly, a faster cycle speed.
c) Minimal Training Is Required:-
• Marketing literature gives organizations the impression that they can train a few of their best
IS professionals in client/server technology and be off and running.
• Now IS needs experts in networks, network software, client software, and server software. In
addition, they have to maintain this new environment in geographically dispersed locations.
• The design, installation, and management of LANs alone require experts who understand the
interdependencies among hardware, software, and networks. If ever there was a need for
cross-training, client/server computing is it.
d) All Data Are Relational:-
• Relational DBMS vendors continue to push relational technology as the only structure
required to manage all the data in an organization! Since it is based on SQL access,
client/server technology assumes any data needed by a user can be accessed via SQL requests
or through a translator that accepts SQL statements and converts them to another access
language.
• There are still a lot of legacy systems with file structures, not database structures. And new
structures are being introduced, such as object-oriented databases.
• Relational data structures require that the user understand the data and know something about
relational technology.
• Intelligent databases can relieve the user of this requirement but not all server databases are
intelligent databases. Intelligent SQL interfaces can also relieve the user of the requirement to
understand relational technology.
e) Development Time is Shorter:-
• Compared to host-based applications, client/server applications are typically smaller in scope
and designed for smaller user communities. The automated development tools for building
client/server applications are easy to use and shorten development time. These tools were
designed, from scratch, specifically for the client/server environment. In addition, many of
these tools are based on object technology. As a result, development should be relatively easy
and short.
• Be careful when using productivity as a benefit without actually giving quantifiable measures.
If the development of typical client/server applications will take 50 percent less time than
their mainframe counterparts, state it as such.
• Build the learning curve, integration issues, system tuning, and debugging into time estimates.
Be sure to manage management's perception of what shorter really is.

2. Describe the Obstacles of client/server computing?

Client/server computing is not the easy task that marketing literature would have us believe.
There are some very real, and painful, hurdles on the road to success.
a) Costs:-
• Potential cost savings prompt organizations to consider client/server computing. The
combined base price of hardware (machines and network) and software for client/server
systems is often a tenth of the cost of host machines and software. Conservative figures for
the cost per MIPS (millions of instructions per second) are:
* IBM mainframes - about $100,000.
* Midrange - about $50,000 (includes some bandwidths processor)
* Desktop - about $300.
• In addition, for smaller systems, the cost of the network operating system and the cost of the
server database could be more than the cost of the server hardware itself.
• Training costs and long learning curves must be anticipated. Client/server computing seems simple
and straightforward, and yet requires experts in mainframe, midrange, micro, and LAN
technologies.
• Conversion costs can be misleading. There are few products that convert 3GL code, such as
COBOL, into C, one of the preferred languages for client/server computing.
b) Mixed Platforms:-
• In the past, most large organizations operated homogeneous centralized mainframe and mid-
range processors and terminal networks. Each was managed separately with independent
systems and protocols.
• Today's organizations have at least two micro operating systems, multiple network operating
systems, a variety of network topologies and different mainframe and midrange platforms
and data sources.
• The goal of client/server computing is to have all these hardware and software platforms
working together. It is not an easy task.
c) Maintenance:-
• Maintenance is the bane of every IS organization. It is costly and time consuming.
Client/server computing might shorten the backlog but won't do away with maintenance.
• Client/server applications are modular in nature. The process of updating application code is
more straightforward than it is in host-based systems, primarily due to the use of object
paradigms. The modularity of client/server computing and the use of object technology make the
maintenance task (the coding and testing) easier.
• However, consider this new wrinkle to maintenance: if some of the application logic is
processed on the client and there are hundreds of clients in the network, any updates to the
application logic have to be distributed to all those clients without impacting processing.
d) Reliability:-
• When a server goes down, the organization does not wait for the vendor to come and repair
it-they fix it themselves.
• Hardware must be stable and include backup units, fault tolerant systems, and monitoring
software for the system itself.
• The system software (database software, communication software, and the operating system-
maybe more than one of each) must be very robust and easily integrated. Backup and recovery
procedures must be easy to use.
• The backup and recovery procedures taken for granted in the mainframe world must be
provided for the client/server environment.
e) Restructuring Corporate Architecture:-
• Client/server computing puts computing management in the hands of the user group, while
control and administration is still in the hands of IS. For client/server computing to work,
both end users and IS must begin to look at computing power as a resource to solve business
problems.
• The focus has to be on the business problems, not the technology. IS professionals are
beginning to report to functional managers on the same level as the business users. Business
users also need to understand why centralization is still important, why IS should be
responsible for control and administration of the computer resource.
• Client/server technology also restructures the workflow processing, partly by placing it closer
to the work node.
• The use of electronic data interchange (EDI) is an example of a modification in workflow
processing.

3. Write a note on Database Access Method.

• Client/server applications are being written to provide a GUI for accessing corporate
data as illustrated in Figure.
• These query oriented applications provide a single window to the data of the
organization. In some cases these applications are read-only, in others they are read-
write.
• The benefits of these systems are also ease of use and increased worker productivity.
The productivity for these systems is not measured by how easily a worker deals with
an application, as is the case with screen-emulation systems.
• Some of the tools for this category of client/server applications are offered by server
DBMS vendors.
• These tools work best with the vendor's DBMS, although links are usually provided
to some other data sources. Tools from non-server DBMS vendors usually access a
wider variety of data sources.

4. Highlight the features of a network operating system.

• As networks become more sophisticated, they need operating system software to shield
application programs from direct communication with the hardware. A network operating
system manages the services of the server in the same manner that an operating system
manages the services of the hardware.
• Today's leading network operating systems offer the reliability, performance, security, and
internetworking capability once associated primarily with mainframe and mid – range
computers.
• A network operating system manages the services of the server. It shields application
programs from direct communication with the hardware.
• GUIs are an overlay to the network operating system. Originally offered as tools that
supported the concept of a server, network operating systems have evolved into enablers of
other software packages and network management tools.

5. Brief about the functions of Windows NT.

• Microsoft's Windows New Technology (Windows NT) is based on the Mach variation of the
UNIX kernel. The microkernel architecture ensures its compatibility with applications not
written specifically for Windows NT or future supported operating systems.
• The Windows NT Executive kernel provides basic operating system functions and supports
additional subsystems layered above it. In its initial release, Windows NT supports five
subsystems: 32-bit Windows, 16-bit Windows, DOS, POSIX, and OS/2 (but only in
character-mode, without the Presentation Manager GUI).
• Windows NT can be used on a client and on a server in networked environments and can
perform redirecting services for LAN Manager, NetWare, and LAN Server. Windows NT
runs on the Intel 386 and 486, the Mips R3000, and R4000 RISC and the Digital Alpha chips.
• Windows NT supports multiprocessor systems. Each application thread-even the Windows
NT kernel itself-can run on any processor in a multiprocessor box. IBM has announced plans
to introduce multiprocessing capabilities into OS/2 in the future, but as yet with no specific
time frame.
• Windows NT incorporates several fault-tolerant features to further enhance stability. These
include a built-in, fully-recoverable file system, which incorporates features such as disk
mirroring, duplexing, and striping to minimize file damage from power outages and hardware
failures.
• In addition, Windows NT includes exception handling routines to catch program anomalies and
imposes strict quotas on each process in order to protect system resources.

6. List out the factors involved in the success of Client / Server Computing.

There does not seem to be a consensus on what client/server means but there is agreement on
the two critical success factors for the technology. Unless a client server architecture can provide
internetworking and interoperability, it is doomed.
It is important to identify the business motivation for the change to client/server computing.
Whether it is reduced costs, increased productivity, or a competitive advantage, the implementation
of the new technology must be carefully monitored to ensure that it does achieve the desired goal.
* Internetworking.
* Interoperability.
* Compatible Environment.
* Perceived Benefits.
Internetworking:-
• Internetworking deals with how separate platforms are linked. Some of the terms used when
discussing internetworking are bridges, routers, gateways, and LANs. By adhering to a set of
standards, such as TCP/IP and the OSI model, products to provide link capabilities can be
developed.
• It is also important that the internet support the way information really flows within the
organization. If the internet does not support that flow, the organization must determine
whether the flow itself is faulty and could change, or if the network needs modification and
tuning to support the existing flow.
Interoperability:-
• All the pieces of a successful client/server application should have the ability to interact-
interoperability. This is no small feat considering a client/server application has client
machines, client software, a network, network software, a server, and server software. All six
pieces have to be able to communicate reliably.
• To provide this interoperability, current client/server software products can transparently
retrieve data from many different sources. APIs are being written to access additional data sources.
• Some of the client/server application development tools create GUI formats that can be re-
compiled for another windowing environment. Conversion products can be used to translate
a GUI format to another environment.
Compatible Environments:-
• Whenever possible, avoid mixing architectures. The successful implementations of
client/server technology replicated a proven architecture. It is not necessary that all the pieces
be supplied by the same vendor. However, all the elements must work together and be treated as
a single entity. They become the internal standards for the three components of client server
computing.
• When new components are added to the environment, they must work as-is. Be wary of
enhancements under development that promise to make the new component work with the existing
environment. It is critical that the organization stick with its standards. The fewer the
exceptions, the fewer the headaches.
Perceived Benefits:-
• Managing expectations is an important part of any application that uses new technology. If
the users or management expect more than they actually get, the success of the application is
in question. More of what?-benefits, ease-of-use, cost savings, response time, functionality,
accessible data, and the list goes on.
• Many companies have started with host – based applications where only the client portion of
the application is modified.
• Much time, effort and money was spent to convert 3270 / 5250 based screens into GUI -
based screens. No changes were made to the network or the application running on the host.
• The primary benefit of this conversion is increased end – user productivity.

7. Discuss about the basic principles of effective GUI designs.

• Know the users. Developers must understand the users' orientation and work profile.
How comfortable are they with computers? Do they prefer a keyboard or a mouse or both?
What terminology do they use for the functions in the application?
• Simplify often used tasks. These tasks should be on a tool bar for quick access or
reflected in the order of items on a menu. Accelerator keys allow users to use the keyboard
to quickly go through a menu series. Users should be allowed to turn off tasks, which
removes them from the tool bar, if they don't expect to use them.
• Provide feedback to the user. The cursor should be changed to an hourglass to
signify a short wait while the system is processing. A long wait (more than 10 seconds)
should be indicated by a message box with a progress indicator. Beeps should be used only
for actions that require immediate attention. Error message boxes should explain the error
(in user-ese) and offer suggestions for correcting the error.
• Be consistent. Don't let developers stretch their creative juices too far. Consistency
guidelines should include grammar, syntax, icons, and color. If users are happy with the
products from a major vendor, copy the look of their products. Consistency is necessary if
users are expected to go quickly from one application to another.
• Test early and often. In addition to the users of the application, GUIs should be tested by
other developers and users of other applications.

8. Discuss about the network management environment.


As IS works towards a distributed computing environment, the management of networked
systems, regardless of their hardware and software platforms, becomes a necessity.
1 – Distributed Management Environment:-
The Open Software Foundation (OSF) is focusing on building distributed computing and
management software and developing a new role as manager of middleware technologies developed
by outside companies. Middleware sits between the operating system and the applications.
DME Provides:-
• An object-oriented application interface for administration and management of
multivendor objects, such as network devices, systems, applications, files, and databases.
• A standard API to manage a wide range of networked systems via either OSI's Common
Management Information Protocol (CMIP) or Simple Network Management Protocol
(SNMP).
• A common GUI across different network management applications and services.
2 – Object Management Architecture:-
Object Management Group’s Object Management Architecture (OMA) combines distributed
processing with object – oriented computing. It provides a standard method for creating, preserving,
locating and communicating objects.
The OMA performs as a layer above existing operating systems and communication
transports that support standard RPCs such as SunSoft’s Open Network Computing (ONC). The
OMA consists of four main components:
• Object Request Broker (ORB), the interface that must be used and the information that
must be presented for an object to communicate with another object.
• Object services, utility-like objects that can be called on to help perform basic object-
oriented housekeeping chores and provide for consistency, integrity, and security of objects
and the messages that pass between objects.
• Common facilities, functions commonly used by applications such as printing and spooling
or error reporting.
• Application objects, applications or components of applications, which are created by
independent software vendors or in-house software developers.
3 – UI – Atlas:-
UI – Atlas, the open systems solution from UNIX International Inc.(UI) specifies how to
create an open systems architecture using hardware, software, networking and other standards –
based components.
UI – Atlas includes:
• The OSI seven-layer network services model.
• A GUI that uses one API to support both Motif and OpenLook.
• An expanded version of Sun's Network File System that operates over wide area networks
and supports file replications.
• A global naming support system.
• A system management framework.
• A distributed object management model that complies with OMG specifications.

9. Explain the common capabilities of operating systems.

The common capabilities of operating systems include:
• Portability in the underlying kernel and application programming interfaces (APIs) that sit
above it.
• Further use of 32-bit architectures (CISC and RISC).
• Support for symmetric multiprocessing hardware.
• Extensions for multimedia and pen-based computing.
• Compliance with POSIX (Portable Operating System Interface for UNIX).

10. Describe the components of open system.

The components of Open systems are Standards – based products and technology, Open
development infrastructure and Management directives.
• Standards – based products and technology: Open systems technology provides portability
and interoperability; closed technologies will dead-end progress. While some proprietary
technology is beneficial, the trick is to provide proprietary functionality within a structure of
common APIs.
• Open development infrastructure: Standards must ensure that current implementations of
technology build on prior implementations and are able to support future implementations.
• Management directives: These directives, established by consensus by those with a vested
interest in the IS infrastructure, ensure that technology does not benefit one group at the
expense of the organization.

11. Illustrate the classes of the server machines.

A host – based processing application usually does not require additional server hardware
because the host running the application is acting as the server. Classes of client / server applications
will require either a micro/server, a super server, a database server machine, mid – range computer,
or a fault tolerant machine.
Micro / Server:-
• A micro / server, an upright micro that can fit under a desk, has been optimized for server
functionality. Micro / Server use Intel 486 chips, run at 33 MHz or higher and have expanded
memory capabilities.
• Internal hard drives offer capacities ranging from 80 Mbytes to 500 Mbytes. Intelligent disk
subsystems can provide access to several gigabytes of data.
Superservers:-
• Superservers, developed specifically for the client / server architecture, are an important
option for server hardware.
• Superservers should not be confused with UNIX – based servers that provide hardware
features such as multiple processors, large amounts of memory and massive high – speed disk
arrays and were built for specialized applications, such as technical or scientific applications.
Database Machines:-
• Using specialized hardware and software, database machines can run as high – speed database
servers or as application servers. Because they usually support parallel processing, data
searches are performed at high speeds. Large database machines can have hundreds of
processors.
Fault – Tolerant Machines:-
• As mainframe data is moved to servers, it becomes critical that server machines remain
operational. Businesses are looking to fault – tolerant computers to keep these applications
available to end – users.
• A fully fault – tolerant machine uses a dual – redundant hardware architecture with two
complete processing systems in one enclosure to ensure no point of failure.
• Each system bus, together with its corresponding CPU, Memory, EISA module and attached
peripherals, comprises a system.

12. Explain in detail about stored procedure.

• Stored procedures are a collection of SQL statements that are compiled and stored on
the server database. When an SQL command is sent to the server database, the server parses
the command, checks the syntax and names used, checks the protection levels, and then
generates an execution plan for the query. The comparison to interactive queries is illustrated
in Figure.
• Stored procedures allow developers to code queries and other groups of statements into stored
procedures, compile them, store them on the server, and invoke them directly from
applications. The stored procedures are parsed and checked for syntax the first time the
procedure is called. The compiled version is stored in cache memory. Subsequent calls to the
procedure use the compiled version in cache.
• Since they are stored as compiled code, stored procedures execute quickly. They are
automatically re-compiled when changes are made to the objects they affect. Since stored
procedures accept parameters, they can be used by multiple applications with a variety of
data. Stored procedures can be nested and remote calls can be made to procedures on other
systems.
• Stored procedures can also be used to enforce business rules and data integrity. In the case of
the banking transaction, the logic for the debit and the credit as well as a validity check to
ensure that the debited account had enough funds to cover the transfer could be coded into a
stored procedure called transfer-amt. This procedure could be used by any transaction that
transferred money between accounts. The parameters used when the procedure was invoked
would specify which accounts.
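The following is a minimal sketch of the transfer-amt procedure described above (written here as transfer_amt, since hyphens are not valid in SQL identifiers), using Transact-SQL-style syntax; stored procedure syntax varies by server DBMS. The accounts table and its acct_no and balance columns are assumptions for illustration.

CREATE PROCEDURE transfer_amt
    @from_acct INT,
    @to_acct   INT,
    @amount    MONEY
AS
BEGIN
    -- Validity check: the debited account must cover the transfer
    IF (SELECT balance FROM accounts WHERE acct_no = @from_acct) < @amount
    BEGIN
        RAISERROR('Insufficient funds for transfer', 16, 1)
        RETURN
    END
    -- Debit one account and credit the other as a single unit of work
    BEGIN TRANSACTION
        UPDATE accounts SET balance = balance - @amount WHERE acct_no = @from_acct
        UPDATE accounts SET balance = balance + @amount WHERE acct_no = @to_acct
    COMMIT TRANSACTION
END

Any application can invoke the compiled procedure directly and supply the accounts as parameters, for example: EXEC transfer_amt @from_acct = 1001, @to_acct = 2002, @amount = 250.00
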
13. Write a note on Testing interface.

Testing character - based screens is easy – do a screen capture and compare screens. If they
match, success; if not, the detective work begins.
Automatic Testing Facility (ATF) from Softbridge, Inc. supports unattended testing of
single or multiuser applications running under OS/2 or Windows. The performance analysis software
includes two components: The Controller and the Executive.

SQA: Manager and SQA: Robot for Windows from Software Quality Automation help
proceduralize testing of Windows software. SQA: Robot for Windows is a
capture/playback/comparison tool that captures windows instead of screens. SQA: Manager serves
as a project management tool for software testing. It helps developers plan, design and organize
resources; track problems; apply measurement; and generate reports.
Microsoft’s Test for Windows has been used for years as an internal tool at Microsoft. It was
used to test Windows 3.1. Microsoft Test can be used to examine applications and automate test
scripts for Windows programs.
Test Dialogs capture and compare Windows controls such as menus, buttons and dialog
boxes.
Test Screen captures and compares screen bitmaps.
Test Event and Test Control are DLLs that simulate any combination of mouse or keyboard
input. Users can control the timing of events and identify and change the availability and state of any
individual control by name. They can also be called from any programming language that supports
DLL access, such as C, Pascal or Visual Basic.
Test Driver includes an enhanced version of the Basic language, a recorder and a debugger
with single stepping and breakpoints.

14. Describe about basic client requirement.

The basic client requirements are:


* GUI Design Standards
* Open GUI Standards
* Interface Independence
* Testing Interface
* Development Aids
1 – GUI Design Standards:-
Most GUI development tools assume that their users know how to build effective GUI
applications. But a GUI-based application does not automatically guarantee an increase in
productivity. There is such a thing as a bad GUI design!
GUI development tools must begin to incorporate some intelligence for building tasks, such
as screen design. In addition, each organization should develop standards and preferences for GUIs
so that all GUIs developed for the organization will look alike and act alike. Even if it is something
as simple as All OK buttons will be green and All Cancel buttons will be red.
2 – Open GUI Standards:-
Consistent interfaces are key to the success of open systems and client/server computing.
Each new interface requires retraining and modifications to applications. Many GUIs are not portable
to other GUI environments (although there are products that will do the necessary translations, as
discussed later).
After an organization has invested in developing GUI guidelines for their current platform,
they do not want to lose that look-and-feel when they move the application to another platform.
Ideally there should be a standard GUI that would have the same look – and – feel on every platform
but such a standard is unlikely.
3 – Interface Independence:-
Portable applications are one of the most discussed benefits of client/server computing.
However, if the GUI is not portable, the application is not portable. For a GUI to be truly portable,
the GUI interfaces should be able to be moved, for example from Motif to Windows 3.x, with no
application changes and no impact on the fundamental look-and-feel of the interface. This reduces
application development time and allows an application to be written for multiple target GUI
platforms. However, to be truly portable, most tools cannot take advantage of the native GUI toolkit.
When the native GUI tool kits are used, the development tool supports portability for only those
functions that are common to the supported GUI environments.
4 – Testing interface:-
Testing character-based screens is easy-do a screen capture and compare screens. If they
match, success; if not, the detective work begins.
Windowed interfaces are not tested that easily. A user can customize a window by changing
its color, size, and position. As applications are accessed, windows overlay windows or are reduced
to icons. Objects do not appear the same on a VGA monitor and a Super VGA monitor. In fact, it is
rare that screens would match, but a mismatch might not represent a software error.
5 – Development Aids:-
Products that allow developers to focus more on the business problem and less on the
programming details will improve productivity. This is usually accomplished through built-in
interface intelligence. There are three areas in which an interface can have built-in intelligence. They
are:
• GUI
• Data-access (SQL access and data dictionaries)
• OLTP.

15. Brief a note on intelligent databases.

Client/server applications demand more than just management of data-more than data storage,
data integrity, some degree of security, and recovery and backup procedures. Client/server
applications require that some of the application logic be stored with the data in the database. The
logic can be stored as an integrity check, a trigger, or a stored procedure. By defining the logic in the
database, it is written once and used when the "protected" data is accessed.
Most server-stored logic is vendor-dependent. The stored logic will execute only with the
server database software. Some client/server application development products that are not tied to
server database software allow developers to compile and store vendor-neutral logic. One such
product is Ellipse from Cooperative Solutions. The logic is written in ANSI SQL and controlled by
the Ellipse software. If the application moves to new server database software, the logic will still
execute against the new software.
The server database software should handle referential integrity, which ensures that related
data in different tables is consistent. If a parent row is deleted (for example, a customer), the rows
related to that parent row in the children tables (for example, accounts such as savings, checking,
and loans) should also be deleted. This centralizes the control of data integrity, ensures that the rules
are followed no matter which node accesses the data, and frees the developers from having to code
integrity rules into the front-end programs.
In addition, there may be procedural logic associated with the data and allocated to the server,
rather than the client, for execution. This might be done for reasons of load balancing or speed.
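The parent/child rule described above can be captured declaratively in the server database. The following is a minimal sketch using ANSI-style SQL DDL; the table and column names are assumptions for illustration, and exact cascade support varies by server DBMS.

CREATE TABLE customers (
    cust_id    INTEGER     NOT NULL PRIMARY KEY,
    cust_name  VARCHAR(40) NOT NULL
);

CREATE TABLE savings_accounts (
    acct_no  INTEGER       NOT NULL PRIMARY KEY,
    cust_id  INTEGER       NOT NULL,
    balance  DECIMAL(12,2),
    -- Deleting a customer (parent) row automatically deletes the related
    -- account (child) rows; no front-end program has to code this rule.
    FOREIGN KEY (cust_id) REFERENCES customers (cust_id)
        ON DELETE CASCADE
);

Because the rule is defined once in the database, it is enforced no matter which node or application deletes the customer row.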

16. Describe the benefits of Client / Server computing.


There is little disagreement that the implementation of client/server computing can result in
current and future savings, but this new technology usually cannot be justified on cost/benefit
analysis alone. The other major benefits are intangible and hard to quantify.
1. Dollar Savings
2. Increased Productivity
3. Flexibility and Scalability
4. Resource Utilization
5. Centralized Control
6. Open Systems
Dollar Savings -
• Mainframe environments are costly to maintain; the hardware, software, and staff required to
maintain and develop applications are very expensive.
• Fewer staff are required to maintain client/server platforms and maintenance contracts are
moderate in cost.
• Networks are now providing mainframe-like security at much lower costs.
• Client/server technology allows organizations to protect current investments by using
existing equipment.
Increased Productivity -
• Both users and developers are more productive using client/server tools. Users are more
involved in the development process and in control of the application, once it is operational.
• End User Productivity: Flexible data access for end users was first provided by fourth
generation languages, although early versions only provided access to their own proprietary
databases. Later versions included transparent access to other data sources as well. But the
interface was command – line driven. The user had to know the commands and their
arguments.
• Developer Productivity: Developers can be more productive using client/server
development tools. Applications may be designed, implemented, and tested in a client/server
environment much faster than in a mainframe environment. The development platform is the
desktop machine. All phases of application development – designing, coding, testing,
executing, and maintaining – can be performed from the desktop machine.
Flexibility and Scalability -
• By segmenting the application tasks, an organization can easily migrate to new technologies
or enhance existing technologies with little or no interruption.
• An application does not have to be redesigned to use new interface software or be moved
to a new platform.
• An upgrade to a server should have little impact on the applications themselves.
• Client/server computing is modular.
Resource Utilization -
• A basic tenet of client/server computing is effective use of computing power. In a balanced
client/server network, the processing power at every node in the network is efficiently used.
• The first client/server implementation in an organization may not require new equipment.
• One of the important features of client/server computing is being able to link existing
hardware and applications. It allows an organization to use the equipment they already have
more effectively.
Centralized Control -
• We have come full circle in the architecture of IS facilities.
• The centralized facilities of the 1960s and 1970s became decentralized without network links.
• Data was transmitted via a channel or data feed. Data was not easily shared and compliance
with control standards and procedures was difficult to enforce.
• Client/server computing allows today's IS facilities to combine the best of both centralized
and decentralized architectures.

Open Systems -
• For client/server computing to be effective, multiple environments must be supported.
• Tools that allow the systems administrator to manage the network (configuration, console,
problems, modification) and monitor its performance must be developed.
• The benefits of open systems-interoperability and portability result from adherence to
standards.

17. Explain the concept of cooperative processing.

• The third class of client/server applications uses a fully cooperative peer-to-peer processing
approach.
• In a true cooperative approach, all components of the system are equal and can request or
provide services to each other.
• The client formats the data and executes any run-time calculations, such as row totals or
column computations.
• Data manipulation may be performed on both the client and the server, whichever is more
appropriate.
• Application data may exist on both the client and the server. Local data, as well as server
data, might be used in generating a report.
• Cooperative processing requires a great deal of coordination and there are many integrity
and control issues that must be dealt with.
• Client-based processing applications do some cooperative processing because data
validation, stored procedures, and triggers may be executed on the server.

18. Narrate the popular operating systems used on client machines.
The most popular operating systems used on client machines are:
1. Microsoft's MS-DOS, IBM's PC-DOS, or a DOS clone such as DR DOS from Novell (DR
DOS was formerly from Digital Research)
2. IBM's OS/2
3. A UNIX-based operating system, such as USL's UNIX System V Release 4, IBM's AIX, and
HP's HP-UX.
1 – DOS with Windows 3.x:-
• One of the disadvantages of DOS, a 16-bit operating system, has been the memory ceiling of
640 kbytes. Any memory over this limit is used for caching. The latest version of Microsoft
DOS, MS-DOS 5.0, has improved memory management, data protection, and online help.
• To free up memory for application use, MS-DOS 5.0 is automatically loaded into extended
memory on 286 or higher machines. Also, in 386 or higher machines, device drivers,
terminate-and-stay-resident (TSR) software, and network software can be loaded into upper
memory.
• Windows 3.x augments the capabilities of DOS with its own memory management routines,
simulates multitasking operations, and provides queued input messages that permit event-
driven interaction. TSR software can be run in virtual machines and accessed via a hot-key.
The recommended minimum configuration is 4 Mbytes of memory and a 40-Mbyte hard
drive.
2 – OS/2:-
• OS/2 2.0, a 32-bit operating system from IBM, is supported on most IBM-compatible 386SX
micros and above, provides true multitasking support and recognizes and uses all available
memory-there is no 640 kbyte limitation. An application can use up to 48 Mbytes. The
recommended minimum configuration is 6 Mbytes of memory and an 80-Mbyte hard drive.
• Its new icon-driven interface called WorkPlace Shell (WPS) replaces the Group Manager
interface used by OS/2 1.x. Desktop functions are represented as a menu with icons for
frequently-used applications. Icons are also used to represent objects, such as files, folders,
and devices.
• OS/2 2.0 can simultaneously run DOS, Windows (currently not Windows 3.1, however), and
OS/2 applications in separate windows on the same screen. Users can launch the Windows
programs directly from the Presentation Manager WPS or launch the Windows Program
Manager and run their applications from that interface. Cut-and-paste and DDE are also
supported.
3 – UNIX – Based:-
• UNIX-based operating systems are, in many cases, overkill as client operating systems.
UNIX was designed to operate in a multitasking, multi user environment and, as such, is more
likely to be installed on a workstation than a micro. OS/2, which also provides multitasking,
was designed to provide such support to a single user or multiple users, making it a cost-
effective candidate for client machines.

19. What are the six application tasks of client / server computing ? Describe them.

An application can be broken into six tasks:


1. User interface, what the user actually sees.
2. Presentation logic, what happens when the user interacts with the form on the screen.
3. Application logic.
4. Data requests and results acceptance.
5. Data integrity, such as validation, security, completeness.
6. Physical data management, such as update, retrieval, deletion, and addition.
• When applications are entirely mainframe based, the mainframe file management system
handles the physical data management. The programs in the application handle the other
components. Early online systems did not actually change this division of duties even though
the user interfaced with the application via a screen.
• The acceptance of database management systems (DBMSs) began to change the division of
duties by including some data integrity functionality within the physical data management
software. This integration allowed early query languages to access data for retrieval without
compromising its integrity. Query languages have since evolved to support update, create,
retrieve, and delete functions, all under the security of the DBMS.

• GUIs require a great deal of processing power to create the screen the user sees. Since the
processing costs on a micro are lower than a host (mainframe or midrange), presentation
processing is best done by the desktop machine, freeing up host resources for other processing
requirements and requiring no changes to the host application. As application tasks were split
between the host and the screen-generating desktop machine, illustrated in Figure 1.2, the
idea of client/server computing was born.
• A file is locked once it is sent to a user machine-even if only parts of it are sent. Early versions
of file servers did not differentiate between a browse access and an update access. Current
LAN software recognizes the difference, eliminating some access bottlenecks.
• As the software became more robust and the power of the desktop machines increased, some
of the data validation was moved to the client. It made sense to move the error-checking and
validation routines to the client-the user received quick turnaround for errors and omissions
and the host did not receive faulty requests.
• In addition, some portions of the application processing were moved from the client to the
server, especially number-crunching activities such as sorting processes and large
consolidations, as well as stored procedures and triggers. The distribution of processing
between the client and server is accommodated by the client/server model, as illustrated in
Figure 1.3.
• When to make this split and how to make this split continues to be up to the developer. Some
products that facilitate building client/server applications can easily incorporate this split-
some even do the partitioning automatically. Otherwise, the split must be coded into the
client/server application.

20. Describe about the SQL Access Group.

• SQL Access Group is an industry consortium working on the definition and implementation
of specifications for heterogeneous SQL data access using accepted international standards.
• SQL Access Group is supported by most DBMS developers except IBM. The focus of the
group is to provide an environment where any client front-end can work with any compliant
server database. SQL Access Group supports TCP/IP as well as OSI protocols.
• The Open SQL standards from SQL Access Group are based on ANSI SQL. Current
specifications include SQL Access and a call-level interface. Compliance to these
specifications allows relational DBMS (RDBMS) products from different vendors to access
distributed databases without using special gateways.
• SQL Access Group is also working on a Persistent SQL Function, which uses a high-
performance compiler to compile and store an SQL statement, and a data type strategy for
two-phase commits.
• Microsoft's Open Database Connectivity (ODBC) is an extension of SQL Access Group's
call-level interface (CLI) specification. The SQL access available in Windows 3.x is CLI
compliant, as is Borland's Object Component Architecture. X/Open has announced its intent
to use the SQL Access Group standards in its Portability Guide.

21. Explain the distributed computing environment with neat diagram.

DCE (Distributed Computing Environment) provides a framework of services for distributing
applications in heterogeneous hardware and software environments.

DCE provides two sets of services:


◦ Basic distributed services, which allow developers to build applications.
◦ Data – sharing services, which require no programming by the end users, include a
distributed file system, diskless system support, and micro integration.
The tools provided by DCE as basic distributed services are:
• Remote Procedure Calls: DCE's remote procedure calls (RPCs) allow an application's
programs to execute on more than one server in the network, regardless of the other
machines' architectures or physical locations.
• Distributed Directory Service: This service provides a single naming model throughout
the distributed network. Users locate and access servers, files or print queues by name,
not their physical location.
• Time service: This software-based service synchronizes system clocks among host
computers in both LAN and WAN environments. It provides an accurate timestamp for
application development files that must be stored in sequence. The Time Service also
supports time values from external services used for distributed sites using the Network
Time Protocol. The Time Service is integrated with other DCE components such as RPC,
Directory, and Security services.
• Threads Service: DCE requires threads for operation. The Threads Service allows
multiple threads of execution in a single process and synchronizes the Global data access.
The Threads Service is used for RPCs; Security, Directory, and Time Services and
Distributed File System.
• Security Service: Data integrity and privacy are provided by three facilities:
✓ Authentication is based on the Kerberos Version 5 standard from MIT. It verifies a
user through a third server.
✓ Once users are authenticated, the authorization facility decides if they should have
access to the requested resources.
✓ A user registry facilitates the management of user information. The registry ensures
that user names are unique across the network. It also maintains a log of user and
login activity.
• Distributed File System:- Distributed File System (DFS) allows users to access data on
another system via the network. In DCE’s DFS terms, the user’s system is the client and the
system where the data is stored is the server.
DFS provides the following advanced distributed file system features:
✓ Access Security and Protection
✓ Data Reliability
✓ Data Availability
✓ Integrated Support
✓ Global File Space
• Desktop Support:- DCE supports the distribution of network processing power among a
large number of computers and allows interconnected clients to work with other DCE –
compliant systems and to access files and peripherals.
• DCE Client / Server Model:- DCE is more than just a server software package. DCE
components are placed between applications and networking services on both the client and
the server. The interaction between the layers is transparent to end users.

22. Compare the micro servers with super servers.

Micro / Server:- A micro / server, an upright micro that can fit under a desk, has been
optimized for server functionality. Micro / Server use Intel 486 chips, run at 33 MHz or higher and
have expanded memory capabilities.
Super Server:- Super Servers, developed specifically for the client / server architecture, are
an important option for server hardware. Super servers should not be confused with UNIX – based
servers that provide hardware features such as multiple processors, large amounts of memory, and
massive high – speed disk arrays and were built for specialized applications, such as technical or
scientific applications.
A Super Server has the following advantages over a micro / server:
• Increased processing power, using multiple processors.
• Increased I/O capabilities, using multiple buses or I/O processor modules for I/O support.
• Increased disk capacity, using arrays of inexpensive disks, which are treated as a single
logical drive.
• Improved memory management, using faster memory chips, better memory architectures,
and optimal use of large amounts of memory and memory caches.
• Improved reliability, using redundant components and ECC memory.
• Improved maintainability, using built-in routines for remote troubleshooting and
management; they are easier to configure and maintain.

23. What is Host – Based Processing ? Compare it with Client- Based Processing.

HOST – BASED PROCESSING vs. CLIENT – BASED PROCESSING

1. Host – Based: The most basic client/server application has a presentation layer running on
   the desktop machine, with all the application processing running on the server/host.
   Client – Based: The client-based processing class of client/server application puts all the
   application logic on the client machine.
2. Host – Based: Host-based processing applications have less functionality than the other
   classes of client/server applications.
   Client – Based: Client-based processing applications have greater functionality compared to
   host-based processing applications.
3. Host – Based: This type of processing application needs less coordination.
   Client – Based: This type of environment requires coordination between the platforms and
   the software running on the platforms.

24. Outline the tools provided by distribution computing environment.

The tools provided by DCE as basic distributed services are:


• Remote Procedure Calls: DCE's remote procedure calls (RPCs) allow an application's
programs to execute on more than one server in the network, regardless of the other machines'
architectures or physical locations.
• Distributed Directory Service: This service provides a single naming model throughout the
distributed network. Users locate and access servers, files or print queues by name, not their
physical location.
• Time service: This software-based service synchronizes system clocks among host
computers in both LAN and WAN environments. It provides an accurate timestamp for
application development files that must be stored in sequence. The Time Service also supports
time values from external services used for distributed sites using the Network Time Protocol.
The Time Service is integrated with other DCE components such as RPC, Directory, and
Security services.
• Threads Service: DCE requires threads for operation. The Threads Service allows
multiple threads of execution in a single process and synchronizes the Global data access.
The Threads Service is used for RPCs; Security, Directory, and Time Services and
Distributed File System.
• Security Service: Data integrity and privacy are provided by three facilities:
✓ Authentication is based on the Kerberos Version 5 standard from MIT. It verifies a
user through a third server.
✓ Once users are authenticated, the authorization facility decides if they should have
access to the requested resources.
✓ A user registry facilitates the management of user information. The registry ensures
that user names are unique across the network. It also maintains a log of user and
login activity.

25. Explain the various server requirements.

• Platform Independence: Platform independence is the major benefit of client / server
computing. Upgrading hardware should be almost as simple as backing up the data and
restoring it on a more powerful machine running the same server database software.
• Transaction Processing: Transaction processing forces additional requirements on the
server database software. A transaction is one or more operations that are performed together
to complete a task. A postal address change is a simple transaction. A slightly more complex
transaction is a banking transaction that debits one account and credits another.
• Connectivity: Server software must provide access to a variety of data sources and not be
restricted to vendor supplied sources. Server data management and access tools provide such
links. This connectivity is in part due to vendors “opening” up access to their data structures
via pack.
• Intelligent Database: Client/server applications demand more than just management of data-
more than data storage, data integrity, some degree of security, and recovery and backup
procedures. Client/server applications require that some of the application logic be stored
with the data in the database. The logic can be stored as an integrity check, a trigger, or a
stored procedure. By defining the logic in the database, it is written once and used when the
"protected" data is accessed.
• Stored Procedures: Stored procedures are a collection of SQL statements that are compiled
and stored on the server database. When an SQL command is sent to the server database, the
server parses the command, checks the syntax and names used, checks the protection levels,
and then generates an execution plan for the query.
• Triggers: Triggers are special stored procedures that are automatically invoked by server
database software. Stored procedures are explicitly called; triggers, which are associated with
particular tables, are executed when attempts are made to modify data in those tables. Triggers
and rules are both associated with particular tables, but rules can perform only simple checks
on the data. Triggers can perform complex checks on the data since they can use the full
power of SQL. (A minimal trigger sketch appears after this list.)
• Load Leveling: One early premise of client/server computing was that all application logic
should be performed on the client machine. Original implementations of client/server
computing followed that premise to the letter.
• Optimizer: A robust data server software package should provide an optimizer, which
analyzes a generated SQL statement and determines the most efficient way to process the
request. Most optimizers available today are cost-based. They analyze the index distribution
statistics and table sizes to determine the number of disk reads required, which determines
the cost, and then uses the least expensive process.
• Testing and Diagnostic Tools: Data and generated indexes can be corrupted by system errors
(tables sharing disk space) or bad disk sectors. Utilities that can diagnose problems and
recover from them are slow in coming, as are SQL debugging tools. Mainframe development
environments offer powerful debugging tools with break points and real-time values of
variables and support embedding of debugging commands. These types of tools are
desperately needed in client/server environments.
• Reliability: The reliability of client/server platforms becomes an integral part of IS planning. Even
if these systems seem micro-based, the micro philosophy of "if one goes down, we'll just run
it from backups on another machine" does not hold true. This might hold true for the client
machines, but certainly not for the server or the network. It is important that IS have
alternatives in case a server goes down or the network crashes.
• Backup and Recovery Mechanisms: There are similarities between mainframes and LANs
in the area of backup, recovery, and archiving. The problems for both environments are
identical but the solutions change from tier to tier. There are two types of backups: tape – based
backups and host – based backups.
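As a companion to the stored procedure example under question 12, the following is a minimal trigger sketch in Transact-SQL-style syntax (trigger syntax varies by server DBMS). The accounts table, its balance column, and the business rule shown are assumptions for illustration.

CREATE TRIGGER check_balance
ON accounts
FOR UPDATE
AS
BEGIN
    -- Complex check on the modified data: reject any update that would
    -- leave an account with a negative balance.
    IF EXISTS (SELECT 1 FROM inserted WHERE balance < 0)
    BEGIN
        RAISERROR('Account balance may not go negative', 16, 1)
        ROLLBACK TRANSACTION
    END
END

Unlike the transfer_amt procedure, this logic is never called explicitly; the server database software invokes it automatically whenever an attempt is made to modify data in the accounts table.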

________________________________END______________________________________
