Testing, implementation and maintenance

The document outlines the processes involved in testing, implementation, and maintenance of information systems, emphasizing the importance of thorough testing throughout the development lifecycle. It discusses various testing levels, including unit, integration, system, and acceptance testing, as well as the challenges and considerations in implementing and maintaining systems. Additionally, it highlights the significance of user training and engagement to ensure successful system adoption and ongoing support.


Testing, implementation and maintenance

Information System Design Process


Teppo Saarenpää
Rest of the course
• Until now
• Development models (3 weeks)
• Modelling (3 weeks)
• Requirement specification (3 weeks)
• Design methods (3 weeks)
• Concretization of design
• Testing, implementation and maintenance
• Prototype user testing
• Project work presentations (15 minutes for each group)
• Groups present in numerical order (group 1, group 2, group 3, …)
Testing, deployment and maintenance
• Three important areas that unfortunately often take a back seat when planning schedules and
budgets...
• Testing
• Continuous testing must be taken into account throughout the development work.
• Deployment
• Well-planned deployment reduces setbacks.
• Maintenance
• Supporting the system throughout its lifecycle.

Example of software engineering lifecycle

Analysis → Planning → Implementation → Testing → Maintenance
System verification and validation
• Verification is the process of evaluating a system or component to determine whether the products of a given development stage meet the conditions set at the beginning of that stage.
• Have we developed the software correctly? Does it meet the specifications?
• Validation is the process of evaluating a system or component during or at the end of a development process to determine whether it meets certain requirements.
• Have we developed the right software? Is this what we, and the customer, wanted?
What is software testing?

• Software testing refers to executing software on specific inputs in an attempt to uncover errors in the program.
• The basic idea is to strengthen confidence in the correctness of the code by finding as many errors in the program as possible and eliminating them.
• Testing is an important part of software engineering: an estimated 50% of development costs come from testing.
• Since testing is financially a big part of the project, more efficient testing methods can save costs and improve product quality.
V-model for testing
Test levels: unit/module testing
• A single module is being tested.
• A module usually consists of 100-1000 program lines.
• The operation of the module is compared with the definition made in its software specification.
• Testing is usually carried out by the module developer.
• To carry out testing, a test environment (test bed) can be implemented in which the functionality of the module can be tested.
• The test environment simulates the production environment; the necessary test drivers and temporary stub modules must be created for it.

(Haikala & Märijärvi 2006)
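A minimal sketch of the stub idea above: the module under test depends on a service that does not exist in the test bed, so a stub stands in for it. The names here (`invoice_total`, `get_rate`) are illustrative examples, not from the course material.

```python
from unittest.mock import Mock

# Hypothetical module under test: an invoice calculator that depends on a
# tax-rate service that is not available in the test environment.
def invoice_total(net_amount, tax_service):
    """Compute the gross amount using a rate from an external service."""
    rate = tax_service.get_rate()
    return round(net_amount * (1 + rate), 2)

def test_invoice_total_with_stub():
    # The stub stands in for the missing lower-level module: it returns a
    # fixed rate so the module's own logic can be tested in isolation.
    stub = Mock()
    stub.get_rate.return_value = 0.24
    assert invoice_total(100.0, stub) == 124.0
```

The test driver (here, the test function itself) exercises the module from above; the stub replaces the module below it, which matches the bottom-up picture in the integration-testing slide.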


Test levels: integration testing
• Ready-made modules are combined into functional entities and the operation of module groups,
i.e. subsystems, is tested.
• The main focus of testing is on testing interfaces between modules.
• The results of the testing are compared with the technical specification.
• Usually proceeds in parallel with module testing.
• Integration usually proceeds in a bottom-up manner from the lowest level modules upwards.
• In top-down integration, integration proceeds in the opposite direction, from the top-level modules downwards.

(Haikala & Märijärvi 2006)


Test levels: system testing
• The entire system is examined and the results are compared with specification documentation
and customer documentation.
• System testers should be as independent of the actual software implementation as possible, as
the people who created the program may not be able to look at their output completely
objectively.
• The functional and non-functional (reliability, safety, load tests, etc.) characteristics of the system
are tested.
• Correcting errors found during testing can create new errors, and often the system undergoes
regression testing, where system testing is performed again on the program.
• System testing may also include field testing and acceptance testing.

(Haikala & Märijärvi 2006)


Testing levels: acceptance testing
• In acceptance testing (UAT, User Acceptance Testing):
• it is tested that business processes work exactly as they should from start to finish before deployment
• it is assessed whether the system is fit for purpose
• and whether the product is "good enough" for production.
• Acceptance testing must be carried out in an operating environment as close to the final one as possible and with real data.
• Includes a risk-based approach to determining whether the information system can be released:
• are the existing risks acknowledged and accepted?
Test cases
• To test the program, test cases are created for it; their details are recorded in test case specification documents.
• A test case includes a series of inputs to the program, the expected result with those inputs, the preconditions for performing the test, and the postconditions for verifying the correctness of the program's state after the test.
• The preconditions describe the state in which the program and its environment must be before the test case can be run. The postconditions tell what state the program should be in after execution.
• In addition, the test case specification document briefly explains the functions and components of
the program tested by the test case, as well as possible interfaces with other test objects.
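The four parts of a test case can be sketched in code. The bank-account example below is hypothetical, chosen only to make the precondition/input/expected-result/postcondition structure concrete.

```python
# Hypothetical system under test: a bank account with a withdraw operation.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

def run_withdraw_test_case():
    account = Account(balance=100)   # precondition: balance is 100
    result = account.withdraw(30)    # input: withdraw 30
    assert result == 70              # expected result with the given input
    assert account.balance == 70     # postcondition: program state after the test
    return account.balance
```

A specification document would record the same four items in prose; the code form makes the test repeatable.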
Static vs. dynamic testing
• Static testing
• The code is examined but not executed
• Reviews and walkthroughs
• Helps testers and developers fix mistakes early in software development
• Can be done manually or with the help of a tool

• Dynamic testing
• The code is executed with different test cases
• Can be started even before the program is fully finished by testing individual modules or functions
• Ensures that the software meets the user's requirements and expectations
Regression testing
• Regression testing refers to the retesting of a program after a change.
• The goal of regression testing is to reveal any errors caused by a change to the functionality that
existed in the program before the change.
• This is done by testing the program with regression test cases, some of which originate from the
program's old test set and some are new regression test cases intended to test previously tested
functionality.
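The composition of a regression test set described above can be sketched as follows. The `price` function and its discount change are hypothetical; the point is that old cases are re-run unchanged alongside new cases written for the change.

```python
# Sketch: after a change (here, adding discount support to a price
# function), the regression suite re-runs the old test cases alongside
# a new case that targets the changed functionality.
def price(amount, discount=0.0):
    return round(amount * (1 - discount), 2)

OLD_CASES = [((100.0,), 100.0), ((49.99,), 49.99)]  # from the old test set
NEW_CASES = [((100.0, 0.1), 90.0)]                  # added for the change

def run_regression_suite():
    for args, expected in OLD_CASES + NEW_CASES:
        assert price(*args) == expected
    return len(OLD_CASES) + len(NEW_CASES)
```

If the change had broken the unchanged behaviour, one of the old cases would fail, which is exactly the error class regression testing is meant to reveal.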
Box approach
• There are two basic approaches to test case selection:
• Black box (functional testing)
• Test cases are selected based on the specifications of the program under test, without looking at its implementation.
• Glass/white box, structural testing
• The tester tests the software "from within", i.e. data structures, algorithms, code.
• The concept of grey-box testing can also be used: it utilizes partial knowledge of the program's implementation principles and combines black-box and white-box techniques.
• From the created set, the test cases to be performed are selected.
• The selection of test cases also depends on the type of testing to be performed.
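The contrast between the two approaches can be shown with a small sketch. The specification, function, and tests below are hypothetical: suppose `classify(age)` must return "minor" for ages 0–17 and "adult" for 18 and over, with negative ages an error.

```python
def classify(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

def test_black_box_boundaries():
    # Black box: cases chosen from the specification alone, here the
    # boundary values around 18, without reading the code.
    assert classify(0) == "minor"
    assert classify(17) == "minor"
    assert classify(18) == "adult"

def test_white_box_error_branch():
    # White box: the tester reads the code and sees an error branch that
    # the specification-derived cases above never execute.
    try:
        classify(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("negative age should raise ValueError")
```

A grey-box tester would combine both: specification-based boundaries plus branch-targeting cases informed by the implementation.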
Test coverage

• It is difficult to measure the coverage of testing, because all errors in a large system are almost impossible to detect. Many errors are not clearly visible.
• Not every error found in a project gets a corresponding fix: because projects have time limits, not all errors can even be fixed.
• Resources are limited, so testing cases and
modules to be tested should be prioritized and
acceptance criteria set for stopping testing.
• Especially in module testing, the adequacy of testing can be assessed with coverage metrics and by seeding known errors into the code.
Implementation of information systems
• The implementation of information systems can refer to quite different stages in the life cycle of a system, depending also on whether it is a new system or an upgrade from an older version to a newer one.
• In reality, the life cycle of an implementation should begin at the design stage, when its
environment, users, description of operations and division of work during the project are defined.
• Eventually, the information system is taken into use, whereby, in addition to delivering the software:
• documentation is handed over
• guidance on how to use the system is provided
• any necessary installations are made.
• There is no single method of implementing an information system, but it always depends on the
prevailing organizational culture and its characteristics.
Example implementation process and its stages
1. Define the responsibilities for the system, involving not only the IT side but also specialists from other departments.
2. Build an operating environment that includes servers and workstations and their operating systems.
3. Install the system in a test environment and test its functionality.
4. Collect the necessary data for the system and create a plan for transferring existing data to the new system.
5. Test the models and the user interface together with the users, then make changes to the system if necessary.
6. In the pilot phase, test the system and its data-related functionality as far as possible in situations that resemble real production.
7. In the actual implementation phase, transfer the system to the real production environment and instruct the end users in its use.
8. Hand over finalized documents and support users when needed.
Implementation issues
• The implementation of information systems is always risk-prone and the end result may not be in
line with plans and expectations.
• Problems include the lack of a phased implementation process, time pressure, and user resistance caused by the upcoming changes.
• Risk factors related to the implementation of a system project may include, for example*:
• Inefficient communication between the parties
• Inadequate organisation and standardisation
• Inadequate training of end-users
• Business process reengineering failure
• Inability to engage users in development.

*Sumner, M. Journal of Information Technology, no. 15


Issues affecting implementation
• The implementation of the information system should take into account, among other things,
user training, motivation, commitment, support and performance evaluation.
• When designing processes, people can be treated as just another resource, like systems and other automation. As a resource, however, people differ from the others because they have, for example, a will of their own.
• The implementation can be promoted by taking into account, for example, the following
issues:
• The state and situation in which the system is used
• Individual disabilities and functional abilities of users
• Tasks with the system
• Different practices in technical environments
• Cultural aspects such as language and cultural practices

(Sinkkonen, Kuoppala, Parkkinen, Vastamäki 2002)


Maintenance of information systems
• Maintenance work is usually divided into development work and upkeep work, which are carried out on the system after commissioning until the information system is retired as unnecessary or replaced with a new one.
• In development work, an information system or part thereof is renewed when:
• There have been changes in the company's operations
• Technological development offers better opportunities.
• Maintenance work includes:
• Incident management and error correction
• Mandatory changes, such as statutory changes
• Other maintenance work
• Maintenance often also includes user or customer support work.
Forms of maintenance
• Corrective maintenance improves the quality of the product and ensures that problems that have
occurred after commissioning are corrected.
• For example, a bug that hinders use.
• Adaptive maintenance makes the changes required when the system's environment changes.
• For example, changes in partner systems.
• Perfective maintenance implements new functional requirements by complementing an existing system, either by adding new features or by changing existing ones.
• For example, features that improve the user experience.
• Preventive maintenance anticipates the future. A cost-effective way to take care of system
maintenance, as timely preventive maintenance can avoid major problems in the system.
• For example, known future changes in regulations.
Maintenance problems
• If there have been no operational standards in place, situations can arise that are
nightmares for administrators.
• Documents and descriptions are sometimes made only for the user or building an
information system, and not with the administrator in mind.
• Maintenance problems usually fall into three main groups:
• Maintenance is hardly visible in training or methods
• Negative or dismissive attitudes towards maintenance
• Information systems are not built to be easy to maintain (quality).
• Consequences of maintenance problems:
• Increase costs and reduce the quality of the information system
• Slow down change and development work
• Cause extra work for administrators and system users
• Complicate customer service and can affect the company's image.
High-quality maintenance
• An information system is of high quality and maintainable when:
• Descriptions and documentation are up-to-date and free of contradictions
• The information system is easy to perceive as a whole
• The programs are clear, modular and well-commented
• The links between the programs are simple and clear.
• Maintenance is systematic when:
• responsibilities have been defined and maintenance projects are monitored
• Alarms are recorded and the situation is communicated appropriately
• Requests for changes are presented in a specified form through the contact person
• All changes are tested and logged
• Change requests are gathered together into maintenance projects
• Changes to the information system will be communicated
• Goals have been set for maintenance, which are steered and monitored.
Lesson assignment 12
The case of a fictional medical center.

1. The management of the medical center has planned training for its staff related to the implementation of the new information system. The employees are already busy with their own work and have no motivation to learn new things. Consider ways to promote the change in this situation.

2. The patient information system was changed because of the previous service provider's poor ability to react to the system's development needs. Why does insufficient attention to the maintenance phase of an information system lead to problems, and how can the situation be prevented?
Weekly assignment 12
Creating a testing plan for project work.

• Take a look at the Test plan dashboard file.pdf.

• Your group can use this form template to plan user testing and prepare its own testing plan, even
though this form is mainly intended for usability testing.

• Add matters related to user testing to the project work development plan or as an appendix to it.
There is no separate submission box.
Further reading for those interested…

• The Encyclopedia of Human-Computer Interaction, 2nd Ed.
• Contextual design
• https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/contextual-design
• Design-support tools, e.g.
• https://servicedesigntools.org/tools

• Haikala, I. & Märijärvi, J. 2006. Ohjelmistotuotanto. Talentum.

• ISO/IEC/IEEE 29119 Software and systems engineering – Software testing

• Rytkönen, J. 2020. Ohjelmistokehittäjien näkemyksiä käyttäjien osallistumisesta sosiaali- ja terveydenhuollon tietojärjestelmien kehittämiseen. https://erepo.uef.fi/bitstream/handle/123456789/23829/urn_nbn_fi_uef-20201498.pdf

• Sinkkonen, Kuoppala, Parkkinen, Vastamäki. 2002. Käytettävyyden psykologia. IT Press.
