
CSE364

SOFTWARE ENGINEERING
CONCEPTS & TOOLS

In computer science, coupling (or dependency) is the degree to which each program module
relies on the other modules.

Coupling is usually contrasted with cohesion. Low coupling often correlates with high
cohesion, and vice versa. The software quality metrics of coupling and cohesion were
invented by Larry Constantine, an original developer of Structured Design who was also an
early proponent of these concepts (see also SSADM). Low coupling is often a sign of a well-
structured computer system and a good design, and when combined with high cohesion,
supports the general goals of high readability and maintainability.

Types of coupling

Conceptual model of coupling

Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some
types of coupling, in order of highest to lowest coupling, are as follows:

Content coupling (high)
Content coupling is when one module modifies or relies on the internal workings of
another module (e.g., accessing local data of another module). Therefore, changing the
way the second module produces data (location, type, timing) will force a change in the
dependent module.
Common coupling
Common coupling is when two modules share the same global data (e.g., a global
variable).
Changing the shared resource implies changing all the modules using it.
External coupling
External coupling occurs when two modules share an externally imposed data format,
communication protocol, or device interface.
Control coupling
Control coupling is one module controlling the flow of another, by passing it
information on what to do (e.g., passing a what-to-do flag).
Stamp coupling (Data-structured coupling)
Stamp coupling is when modules share a composite data structure and use only a part
of it, possibly a different part (e.g., passing a whole record to a function that only
needs one field of it).
This may lead to changing the way a module reads a record because a field that the
module doesn't need has been modified.
Data coupling
Data coupling is when modules share data through, for example, parameters. Each
datum is an elementary piece, and these are the only data shared (e.g., passing an
integer to a function that computes a square root). A short sketch contrasting control
and data coupling follows this list.
Message coupling (low)
This is the loosest type of coupling. It can be achieved by state decentralization (as in
objects), with components communicating only via parameters or message passing.
No coupling
Modules do not communicate at all with one another.
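To make the middle of this scale concrete, here is a minimal sketch (the function names, the "mode" flag, and the formatting rules are invented for illustration, not taken from the text) contrasting control coupling, where the caller passes a what-to-do flag, with data coupling, where only elementary data items cross the interface:

# Control coupling: the caller steers the callee's internal flow with a flag.
def format_value(value, mode):
    # 'mode' is a what-to-do flag; the caller must know the callee's internals.
    if mode == "currency":
        return f"${value:,.2f}"
    elif mode == "percent":
        return f"{value:.1%}"
    return str(value)


# Data coupling: each function receives only the elementary data it needs.
def format_currency(value):
    return f"${value:,.2f}"


def format_percent(value):
    return f"{value:.1%}"


if __name__ == "__main__":
    print(format_value(1234.5, "currency"))   # control-coupled call
    print(format_currency(1234.5))            # data-coupled call
    print(format_percent(0.173))

Replacing the flag-driven function with separate, single-purpose functions removes the caller's knowledge of the callee's internal branching, which is the step from control coupling down to data coupling.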

Module coupling
Coupling in software engineering can also be measured. One widely used module coupling
metric is built from the following counts.

For data and control flow coupling:

 di: number of input data parameters
 ci: number of input control parameters
 do: number of output data parameters
 co: number of output control parameters

For global coupling:

 gd: number of global variables used as data
 gc: number of global variables used as control

For environmental coupling:

 w: number of modules called (fan-out)
 r: number of modules calling the module under consideration (fan-in)
These counts are combined as

Coupling(C) = 1 - 1/(di + 2*ci + do + 2*co + gd + 2*gc + w + r)

so that the value grows larger the more coupled the module is. The number ranges from
approximately 0.67 (low coupling) to 1.0 (highly coupled).

For example, if a module has only a single input and a single output data parameter and
is called by one other module (r = 1),

C = 1 - 1/(1 + 0 + 1 + 0 + 0 + 0 + 0 + 1) = 1 - 1/3 ≈ 0.67

If a module has 5 input and output data parameters, an equal number of control parameters,
and accesses 10 items of global data, with a fan-in of 3 and a fan-out of 4,

C = 1 - 1/(5 + 2*5 + 5 + 2*5 + 10 + 0 + 4 + 3) = 1 - 1/47 ≈ 0.98
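The arithmetic above can be checked with a small helper; this is only a sketch, and the function name and keyword defaults are my own rather than part of any standard tool:

def module_coupling(di=0, ci=0, do=0, co=0, gd=0, gc=0, w=0, r=0):
    """Module coupling from the counts defined above; higher means tighter coupling."""
    return 1 - 1 / (di + 2 * ci + do + 2 * co + gd + 2 * gc + w + r)


# Single input and output data parameter, called by one module.
print(round(module_coupling(di=1, do=1, r=1), 2))   # 0.67

# 5 input/output data parameters, 5 input/output control parameters,
# 10 global data items, fan-out 4, fan-in 3.
print(round(module_coupling(di=5, ci=5, do=5, co=5, gd=10, w=4, r=3), 2))   # 0.98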

(2)
ADVANTAGES

 Provides very efficient "high-speed" retrieval

 Simplicity

The network model is conceptually simple and easy to design.

 Ability to handle more relationship types

The network model can handle the one-to-many and many-to-many relationships.

 Ease of data access

In network database terminology, a relationship is a set. Each set comprises two types
of records: an owner record and a member record. In a network model, an application
can access an owner record and all the member records within a set (a rough sketch
follows this list of advantages).

 Data Integrity

In a network model, no member can exist without an owner. A user must therefore
first define the owner record and then the member record. This ensures data integrity.

 Data Independence

The network model draws a clear line of demarcation between programs and the
complex physical storage details. The application programs work independently of
the data. Any changes made in the data characteristics do not affect the application
program.
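As a loose illustration of the owner/member sets mentioned under "Ease of data access" (the record fields and names here are invented, and real network DBMSs use their own record and set definitions), navigation from an owner record to its member records might look like this:

# A "set" in the network model pairs one owner record with its member records.
teacher = {"name": "Prof. Rahman"}                        # owner record (hypothetical)
courses = [{"code": "CSE364"}, {"code": "CSE401"}]        # member records (hypothetical)

teacher_courses = {"owner": teacher, "members": courses}  # one set occurrence

# An application reaches the members by navigating from the owner of the set.
for member in teacher_courses["members"]:
    print(teacher_courses["owner"]["name"], "teaches", member["code"])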

DISADVANTAGES

 System complexity

In a network model, data are accessed one record at a time. This makes it essential
for the database designers, administrators, and programmers to be familiar with the
internal data structures to gain access to the data. Therefore, a user-friendly
database management system cannot be created using the network model.

 Lack of Structural independence.

Making structural modifications to the database is very difficult in the network
database model as the data access method is navigational. Any changes made to
the database structure require the application programs to be modified before they
can access data. Though the network model achieves data independence, it still fails
to achieve structural independence.

(3)
The traditional object model is insufficient in the context of real-time systems. Here a
completely new aspect has to be added to the object concept, namely time. It has to be
investigated how to annotate the functional specification of types with timing constraints
and how to guarantee and implement these timing specifications. Other concepts that were
already included in the traditional object model also have to be reviewed in the context of
a real-time system, due to the difficulty of obtaining deterministic timing behavior. These
concepts include inheritance, dynamic binding, dynamic memory allocation, concurrency,
and synchronization. A lot of research has been undertaken in order to resolve the inherent
contradiction between object-orientation and real-time. In the following subsections,
several different approaches to real-time objects will be reviewed.

(4). A consistent user interface may be impossible to produce for complex systems
with a large number of interface options. In such systems, there is a wide imbalance
between the extent of usage of different commands, so it is desirable to have shortcuts
for frequently used commands. Unless all commands have shortcuts, consistency is
impossible.

It may also be the case in complex systems that the entities manipulated are of quite
different types and it is inappropriate to have consistent operations on each of these
types.

An example of such a system is an operating system interface. Even MacOS, which
has attempted to be as consistent as possible, has inconsistent operations that users
nevertheless like. For example, to delete a file it is dragged to the trash, but dragging a
disk image to the trash does not delete it; it unmounts that disk.

(5).

Software verification is a broader and more complex discipline of software engineering
whose goal is to assure that software fully satisfies all the expected requirements.

There are two fundamental approaches to verification:

 Dynamic verification, also known as Test or Experimentation - this is good for
finding bugs.
 Static verification, also known as Analysis - this is useful for proving correctness of a
program, although it may result in false positives.

Dynamic verification (Test, experimentation)

Dynamic verification is performed during the execution of software, and dynamically checks
its behaviour; it is commonly known as the Test phase. Verification is a review process.
Depending on the scope of tests, we can categorize them into three families:

 Test in the small: a test that checks a single function or class (unit test); a sketch of
such a test follows this list
 Test in the large: a test that checks a group of classes, such as
o Module test (a single module)
o Integration test (more than one module)
o System test (the entire system)
 Acceptance test: a formal test defined to check the acceptance criteria for a software
product
o Functional test
o Non-functional test (performance, stress test)
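As an illustration of a "test in the small" (a minimal sketch; the square_root function and the test class are hypothetical and not taken from the text), a unit test exercising a single function could look like this:

import math
import unittest


def square_root(x):
    """Function under test: returns the non-negative square root of x."""
    if x < 0:
        raise ValueError("x must be non-negative")
    return math.sqrt(x)


class SquareRootUnitTest(unittest.TestCase):
    # "Test in the small": checks one function in isolation.
    def test_perfect_square(self):
        self.assertEqual(square_root(9), 3.0)

    def test_negative_input_rejected(self):
        with self.assertRaises(ValueError):
            square_root(-1)


if __name__ == "__main__":
    unittest.main()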

Software verification is often confused with software validation. The difference between
verification and validation:

 Software verification asks the question, "Are we building the product right?"; that is,
does the software conform to its specification?
 Software validation asks the question, "Are we building the right product?"; that is, is
the software doing what the user really requires?

The aim of software verification is to find the errors introduced by an activity, i.e., to check
that the product of the activity is as correct as its input was at the start of the activity.

Static verification (Analysis)

Static verification is the process of checking that software meets requirements by inspecting
the code without executing it. For example (a small illustration follows this list):

 Code conventions verification
 Bad practices (anti-pattern) detection
 Software metrics calculation
 Formal verification
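As a small illustration of static analysis (assuming Python type hints and an external checker such as mypy; none of this is prescribed by the text), a type error can be reported without ever executing the program:

def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)


print(average([2.0, 4.0, 6.0]))   # prints 4.0 when actually run

# A static checker such as mypy would flag the call below without running it,
# because a string is not a list of floats:
#   average("not a list")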

Software validation is a quality assurance process of establishing evidence that provides a
high degree of assurance that a product, service, or system accomplishes its intended
requirements. This often involves acceptance of fitness for purpose with end users and other
product stakeholders.

It is sometimes said that validation can be expressed by the query "Are you building
the right thing?" and verification by "Are you building it right?" "Building the right
thing" refers back to the user's needs, while "building it right" checks that the
specifications are correctly implemented by the system. In some contexts, it is required
to have written requirements for both as well as formal procedures or protocols for
determining compliance.

Validation work can generally be categorized by the following functions:

 Prospective validation - the activities conducted before new items are released, to make
sure their characteristics of interest function properly and meet the safety standards.
Some examples could be legislative rules, guidelines or proposals, methods,
theories/hypotheses/models, products and services.
 Retrospective validation - a process for items that are already in use, distribution or
production. The validation is performed against the written specifications or
predetermined expectations, based upon the historical data/evidence that have been
documented/recorded. If any critical data are missing, the work cannot be processed
or can only be completed partially. The tasks are considered necessary if
o prospective validation is missing, inadequate or flawed;
o a change of legislative regulations or standards affects the compliance of the
items being released to the public or market; or
o out-of-use items are being revived.

Some examples could be the validation of

 ancient scriptures that remain controversial
 clinical decision rules
 data systems

 Full-scale validation
 Partial validation - often used for research and pilot studies if time is constrained. The
most important and significant effects are tested. From an analytical chemistry
perspective, those effects are selectivity, accuracy, repeatability, linearity and its
range.
 Cross-validation
 Re-validation/locational or periodical validation - carried out for an item of interest
that is dismissed, repaired, integrated/coupled, relocated, or after a specified time lapse.
Examples of this category could be relicensing/renewing a driver's license, recertifying
an analytical balance that has expired or been relocated, and even revalidating
professionals. Re-validation may also be conducted when a change occurs during the
course of activities, such as scientific research or transitions between clinical trial
phases. Examples of these changes could be
o sample matrices
o production scales
o population profiles and sizes
o out-of-specification (OOS) investigations, due to the contamination of testing
reagents or glassware, the aging of equipment/devices, or the depreciation of
associated assets, etc.

In GLP-accredited laboratories, verification/revalidation is often conducted against the
monographs of the USP and BP to cater for domestic needs.

 Concurrent validation - conducted during the routine processing of services,
manufacturing or engineering, etc. Examples of these could be
o duplicate sample analysis for a chemical assay
o triplicate sample analysis for trace impurities at marginal levels of the
detection limit and/or quantification limit
o single sample analysis for a chemical assay by a skilled operator with
multiple online system suitability tests

Verification:
1. It is a quality improvement process.
2. It involves reviewing and evaluating the process.
3. It is conducted by the QA team.
4. Verification is about correctness.
5. Are we producing the product right?

Validation:
1. It ensures the functionality.
2. It is conducted by the development team with help from the QC team.
3. Validation is about truth.
4. Validation follows verification.
5. Are we producing the right product?

(6).

There are two major problems encountered when modifying systems:

1. Understanding which entities in a program are logically part of some greater
entity.
2. Ensuring that changes do not have unanticipated side-effects, i.e. a change to one
entity having an undesirable effect on some other entity.

Object-oriented development helps to reduce these problems as it supports the grouping
of entities (in object classes) and therefore simplifies program understanding. It also
provides protection for entities declared within objects so that access from outside the
object is controlled (the entity may not be accessible, its name may be accessible but not
its representation, or it may be fully accessible). This reduces the probability that changes
to one part of the system will have undesirable effects on some other part.
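The controlled-access idea can be sketched as follows (an informal example; the Account class and its attributes are invented, and Python enforces hiding only by convention):

class Account:
    """Groups related data and controls access to it from outside."""

    def __init__(self, opening_balance):
        self._balance = opening_balance   # representation hidden by convention

    @property
    def balance(self):
        # The name 'balance' is accessible, but not its representation,
        # so the stored form can change without affecting callers.
        return self._balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


acct = Account(100)
acct.deposit(50)
print(acct.balance)   # 150, read through the controlled interface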

In all models of the software development cycle, the software specification phase is
always followed by the design phase. This is true no matter whether a model
encompasses a full range of activities, as described in IEEE Std. 1074, or a limited
subset composed of the four primary phases of specification, design, coding, and
testing, as in our case. Once the requirements have been frozen, which means that
no more significant changes to the requirements document can be expected, the
developers should be ready to start the design process. At this stage, one has to
organize the design activities in some systematic manner by choosing an appropriate
notation for this phase, techniques to derive a specific representation of software from
a more general one, and automatic software tools to assist in the derivation process.
A complete set of these three elements (a method, techniques, and tools) is called a
software design (or development) methodology.

Simplifying things a little, one can say that the design activities usually concentrate
on two aspects:
- architectural design, which for the most part describes the structure of software
- detailed design, which provides insight into the functioning of the structural
elements of software developed at the architectural level.

There are two primary approaches to developing the software architecture: the structured
approach (also called the functional approach) and the object-oriented design approach.
The difference between the two is significant but, in fact, both approaches have a
common origin. The common roots and drastic differences become clear if we look
at the operation of software from an abstract perspective. Since their inception in the
1940s, computers and their software, no matter which level we consider (microcode,
processor instruction level, operating system, programming language, etc.), involve
only two primary entities: data and operations on these data.
The way the structured (functional) approach treats these two entities is completely
opposite to the way they are treated in object-oriented approaches. In structured
approaches, the primary focus is on operations. This means that in the structured
development process, we first determine the operations to be executed and their
order, encapsulate those operations in the form of a module, function, procedure,
subroutine, etc., and then add the data paths in the form of parameters or
arguments, through which data can flow in and out of these design units. This is an
operations-centered approach.

Principal difference between the functional and object-oriented approach.
In contrast to that, an object-oriented approach is data-centered. This means that in
the development process, we first determine what data objects are going to be used
and consequently design functions, technically called methods, to access these data
objects. To illustrate both approaches graphically, let us assume that squares
represent operations and circles represent data. In a structured approach, a big
square would represent the respective operations encapsulated in a module, and little
circles on the boundaries of this square would represent data transferred as
arguments to the calls accessing operations in the square. In an object-oriented
approach, a big circle would represent all data items encapsulated as a structure,
with little squares spread over the boundary of this circle representing the methods
used to access these data items. Because of this principal difference between the
two approaches, the respective notations for software design are different. The primary
notation used in structured software development is based on data flow diagrams.
The notation adopted for object-oriented development is based on class and object
diagrams.
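To make the contrast concrete, here is a rough sketch with invented names (neither fragment comes from the text): the same small task written first in an operations-centered style and then in a data-centered style.

# Operations-centered (structured) style: the operation comes first,
# and the data flow in and out through parameters.
def compute_area(width, height):
    return width * height


print(compute_area(3, 4))   # 12


# Data-centered (object-oriented) style: the data object comes first,
# and methods are designed to access its encapsulated data.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


print(Rectangle(3, 4).area())   # 12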
Based on this primary distinction, there have been many approaches to real-time
software development, which claimed to be methodologies, but in fact they were just
methods or specific notations. Historically, one can mention several methods and
their notations, which have been used in real-time software design, for example:
- SCOOP (Software Construction by Object-Oriented Pictures)
- MASCOT (Modular Approach to Software Construction, Operation and Test)
- HOOD (Hierarchical Object-Oriented Design)
- DARTS (Design Approach to Real-Time Systems)

None of these methods, however, gained widespread popularity, because they lacked
well-defined transformation techniques and, in particular, automatic software tools to
help in the development process. In addition, even though these notations capture quite
adequately the various intricacies of real-time software, they are not suitable for
expressing timing requirements.
