
DRAFT DEF STAN 00-42 Draft Issue 3.2, Publication Date xx December 2013

Title: Reliability and Maintainability (R&M) Assurance Activity Part 4: Testability

WARNING: This Draft Document is not to be used as an agreed Defence Standard

The following changes have been incorporated into the new issue of this Def Stan:

Clause No. Change Made Reason for Change


Whole Document Rewrite Update Content

DEF STAN 00-42 Part 4 Draft Issue 3.2

Contents
Foreword ............................................................................................................................................................ iii

0 Introduction .................................................................................................................................................... iv

1 Scope ................................................................................................................................................................ 1

2 Warning ............................................................................................................................................................ 1

3 Normative References ................................................................................................................................... 1

4 Definitions ....................................................................................................................................................... 2

5 Abbreviations .................................................................................................................................................. 8

5 Testability Concepts ...................................................................................................................................... 9

6 Testability Objectives .................................................................................................................................... 9

7 Testability And The Product Life Cycle .................................................................................................... 10

8 Contracting For Testability ......................................................................................................................... 11

9 Implementing Testability............................................................................................................................. 13

10 Testability Verification ................................................................................................................................. 16

11 Testability Risks ........................................................................................................................................... 18

Annex A - Informative Annex Providing Supporting/Descriptive Material Relating To Main Sections Of Standard ......................................................................................................................................................... 21

Annex B - Standardisation Task Validation Certificate (STVC) .................................................................... 27



Foreword
AMENDMENT RECORD

Amd No. Date Text Affected Signature and Date

REVISION NOTE
This standard is raised to issue 3 to update its content.

HISTORICAL RECORD
This standard supersedes Def Stan 00-42 Part 4 Issue 2. The complete Def Stan 00-42 Reliability and
Maintainability Assurance Activity comprises:
Part 1: One-Shot Devices/Systems

Part 3: R&M Case

Part 4: Testability (this document)

Part 5: In-Service Reliability Demonstrations

Part 6: Maintainability Demonstrations

a) This standard provides MOD practices, procedures and requirements for testability during the design
process.
b) This standard has been raised to issue 3 to update its content. It has been produced on behalf of the
Ministry of Defence (MOD) by the Reliability Advice and Guidance Group for UK Defence
Standardization (DStan).
c) This standard has been agreed following broad consensus amongst the authorities concerned with its
use and is intended to be used whenever relevant in all future designs, contracts, orders etc. and,
whenever practicable, by amendment to those already in existence. If any difficulty arises which
prevents application of the Defence Standard, DStan shall be informed so that a remedy may be
sought.
d) Please address any enquiries regarding the use of this standard in relation to an invitation to tender or
to a contract in which it is incorporated, to the responsible technical or supervising authority named in
the invitation to tender or contract.
e) Compliance with this Defence Standard shall not in itself relieve any person from any legal obligations
imposed upon them.
f) This standard has been devised solely for the use of the MOD and its contractors in the execution of
contracts for the MOD. To the extent permitted by law, the MOD hereby excludes all liability whatsoever
and howsoever arising (including, but without limitation, liability resulting from negligence) for any loss
or damage however caused when the standard is used for any other purpose.


0 Introduction

0.1 The provision of defence materiel with acceptable levels of availability is essential to the achievement
of operational effectiveness, economy of in-service maintenance support and optimised life cycle costs.

0.2 Reliability and Maintainability (R&M) engineering involves processes that aim to optimise Availability by
providing strategies that identify risk areas and provide mitigation. Strategies should be flexible and provide
progressive R&M assurance, such that the results can be reviewed against the R&M requirements and the
programme modified as necessary to ensure the requirements are achieved.

0.3 One key aspect of R&M with the ability to optimise Availability is Testability. Defence Standard 00-42
Part 4 describes the concepts of Testability and how its timely specification and implementation contribute
effectively to the R&M process.

0.4 Informative Annex A provides guidance on interpretation, discussion and further descriptive material
on the main body of the standard.


Reliability and Maintainability (R&M) Assurance Activity: Testability


1 Scope

1.1 This Defence Standard specifies requirements for procedural and technical aspects and guidelines to
be followed in relation to testability during development of equipment for the UK armed forces. It should be
used in conjunction with BS EN 60706-5 Edition 2, Maintainability of Equipment, Part 5 Testability and
Diagnostic Testing.

1.2 This document is intended to be used throughout the acquisition chain from the customer to the prime
contractor, down to individual sub-contractors and suppliers of Commercial Off The Shelf (COTS) equipment.

1.3 The activities required to achieve Testability are likely to be specific to each project. The concepts can
be applied to all types of products (bespoke, COTS), regardless of the technology (mechanical, electrical,
hydraulic, software) they employ.

1.4 The objective of the standard is to ensure that testability requirements are defined by the customer
early in the procurement phase, incorporated into the design, implemented, documented and verified.

1.5 The Standard provides guidance on the Testability process including:

• a definition of testability

• why it is carried out

• when it should be carried out

• how and what should be specified

• what it means in practice

• how it should be assessed

• other aspects that should be considered.

2 Warning

2.1 The Ministry of Defence (MOD), like its contractors, is subject to both United Kingdom and European
laws regarding Health and Safety at Work. Many Defence Standards set out processes and procedures that
could be injurious to health if adequate precautions are not taken. Adherence to those processes and
procedures in no way absolves users from complying with legal requirements relating to Health and Safety at
Work.

3 Normative References

3.1 The publications shown below are referred to in the text of this standard. Publications are grouped and
listed in alpha-numeric order.



Note: Def Stans can be downloaded free of charge from the DStan website by visiting
<http://www.dstan.dii.r.mil.uk> for those with RLI access or <https://www.dstan.mod.uk> for all other
users. All referenced standards were correct at the time of publication of this standard (see 3.2, 3.3 & 3.4
below for further guidance). If you have difficulty obtaining any referenced standard, please contact the
DStan Helpdesk in the first instance.

ARMP 7 NATO R&M Terminology applicable to ARMPs;

Def Stan 00-40 Reliability and Maintainability;

Part 1: Management Responsibilities and Requirements for Programmes and Plans;

Def Stan 00-42 Reliability and Maintainability (R&M) Assurance Activity:

Part 1: One Shot Devices/Systems

Part 3: R&M Case

Part 5: In-Service Reliability Demonstrations

Part 6: Maintainability Demonstrations

Def Stan 00-49 Reliability and Maintainability - MOD Guide to Terminology Definitions

Def Stan 00-52 The General Requirements for Product Acceptance and Maintenance
Test Specifications And Test Schedules

Def Stan 00-70 Standard Serviceability Testing

Def Stan 05-57 Configuration Management

BS EN 60706-5 Maintainability of Equipment – Part 5: Testability and Diagnostic Testing

BS EN 60050-191 International Electrotechnical Vocabulary – Dependability and
Quality of Service

3.2 Reference in this Standard to any normative references means in any Invitation to Tender or contract
the edition and all amendments current at the date of such tender or contract unless a specific edition is
indicated. Care should be taken when referring out to specific portions of other standards to ensure that they
remain easily identifiable where subsequent amendments and supersessions might be made. For some
standards the most recent editions shall always apply due to safety and regulatory requirements.

3.3 In consideration of clause 3.2 above, users shall be fully aware of the issue, amendment status and
application of all normative references, particularly when forming part of an Invitation to Tender or contract.
Correct application of standards is as defined in the ITT or contract.

3.4 DStan can advise regarding where to obtain normative referenced documents. Requests for such
information can be made to the DStan Helpdesk. Details of how to contact the helpdesk are shown on the
outside rear cover of Defence Standards.

4 Definitions

4.1 For the purpose of this standard, the definitions shown below apply.



4.2
Ambiguity Group
A collection of items to which a diagnostic sequence can isolate a fault

4.3
Built In Test (BIT)
An integral capability of the equipment that provides an on-board test capability to detect, diagnose, or
isolate system failures. The fault detection and, possibly, isolation capability is used for periodic or
continuous monitoring of a system’s operational health, and for observation and, possibly, diagnosis as a
prelude to maintenance action (ARMP-7 definition).

4.4
Built In Test Equipment (BITE)
Any device permanently mounted in the equipment and used for the express purpose of testing the
equipment, either independently or in association with external test equipment (ARMP-7 definition).

4.5
COTS
Designates items readily available commercially (BS EN 60706-5 definition).

4.6
Criticality
Significance attached to a malfunction

NOTE: Criticality is expressed in grades: the higher the grade, the more severe the consequences to be
expected from the malfunction (BS EN 60706-5 definition).

4.7
Depth of Test
Specification of the level to which the unit or sub-unit is to be identified (BS EN 60706-5 definition) i.e. the
level to which replaceable entities are required to be identified by the test being undertaken.

For example, the system test must identify the failed unit to be replaced at this level, but the test may also
have to identify the sub-unit that will have to be repaired/replaced at the next level of repair. In this situation,
the test coverage may have to be defined for each depth of test level. Trade-off studies to establish the most
cost-effective depth of test should be an integral part of the Level of Repair Analysis (LORA) (the process of
determining the most suitable maintenance level for repairing items of equipment).
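The depth-of-test trade-off described above can be illustrated with a simple per-fault cost comparison. The sketch below is illustrative only and is not part of the standard, nor of any LORA method; all cost figures, and the option names, are hypothetical.

```python
# Hypothetical per-fault cost of stopping diagnosis at each depth of
# test: isolating only to unit level means replacing a whole
# (expensive) unit; isolating to sub-unit level costs more test
# effort but allows a cheaper replacement part.
options = {
    "unit level":     {"test_cost": 50.0,  "replacement_cost": 4000.0},
    "sub-unit level": {"test_cost": 400.0, "replacement_cost": 300.0},
}

def cost_per_fault(option: str) -> float:
    c = options[option]
    return c["test_cost"] + c["replacement_cost"]

best = min(options, key=cost_per_fault)
print(best)  # with these figures, the deeper test is cheaper per fault
```

In practice such comparisons would be weighted by failure rates, spares costs and facility costs across the whole maintenance organisation, which is what a full LORA addresses.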

4.8
Design Level
Level to which the design elements (functional and/or physical units), when they already exist,
are assigned within the product breakdown structure.

NOTE: In some cases “design level” is known as “indenture level” (BS EN 60706-5 definition).

4.9
Diagnosis Correctness
Proportion of faults of an item that can be correctly diagnosed under given conditions (BS EN 60706-5
definition).

4.10
Diagnostic Testing
Test procedure carried out in order to make a diagnosis (BS EN 60706-5 definition).



4.11
External Test System
Stand-alone equipment provided to determine the serviceability of an entity.

4.12
Failure / Fault
The inability of an item to perform within previously specified limits (ARMP-7 definition).

Failure and Fault are synonymous in this document.

4.13
Fault Detection
The ability of the system to recognise any Fault and inform the user of its presence.

4.14
Fault Diagnosis
Actions taken for fault recognition, fault location and cause identification.

4.15
Fault Isolation
The identification of the location of a Fault in an equipment or system.

4.16
False Alarm
An indication of Fault that cannot be confirmed following subsequent fault finding activities.

4.17
False Alarm Rate
The percentage of False Alarms in the total number of Fault indications, or the number of False Alarms per
unit time.

4.18
Fault Detection Capability
The ratio of the sum of detectable faults to the sum of all faults.

4.19
Fault Detection Probability
The ratio of the sum of the failure rates of detectable faults to the total sum of failure rates.
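The ratios defined in 4.17 to 4.19 can be shown with a short worked computation. The sketch below is illustrative only and is not part of the standard; the fault list, failure-rate figures and alarm counts are hypothetical.

```python
# Illustrative computation of the ratios defined in 4.17-4.19.
# Each fault mode carries a failure rate (failures per 10^6 hours)
# and a flag indicating whether the test set can detect it.
faults = [
    {"rate": 12.0, "detectable": True},
    {"rate": 3.0,  "detectable": True},
    {"rate": 5.0,  "detectable": False},
]

# Fault Detection Capability (4.18): detectable faults / all faults.
capability = sum(f["detectable"] for f in faults) / len(faults)

# Fault Detection Probability (4.19): the same ratio weighted by
# failure rate, so frequently occurring faults dominate.
probability = (sum(f["rate"] for f in faults if f["detectable"])
               / sum(f["rate"] for f in faults))

# False Alarm Rate (4.17), first form: false alarms as a percentage
# of all fault indications (both counts hypothetical).
false_alarms, indications = 4, 50
false_alarm_rate = 100.0 * false_alarms / indications

print(capability)        # 2 of 3 fault modes are detectable
print(probability)       # 15.0 of 20.0 failures/10^6 h are detectable
print(false_alarm_rate)  # percentage of indications that were false
```

The difference between capability and probability shows why rate-weighted measures are preferred when failure-rate data are available: an undetectable but rare fault mode matters less in service than the unweighted count suggests.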

4.20
Fault Recognition Time
Period of time between the instant of Failure and Fault Recognition (BS EN 60706-5 definition).

4.21
Fault Simulation
Inclusion of faults by non-destructive interventions in the hardware units and/or, where necessary, simulation
via software in order to verify the diagnostic capability (BS EN 60706-5 definition).

4.22
Fault Tolerant Design
Fault Tolerance is the attribute of an item that makes it able to perform a required function in the presence
of certain, given sub-item faults (ARMP-7 definition).


4.23
Function
Performance required of the item.

NOTE: A function is always associated with an item of a given level in the product breakdown structure (BS
EN 60706-5 definition).

4.24
Functional Model
A conceptual representation of an item describing the interrelationship and dependencies between its stimuli
and measurement (response) terminals.

NOTE: The functional model, which arises during the development of a product, is in principle a block
diagram showing the functions of the product and supplemented to include the test paths envisaged by the
developer (BS EN 60706-5 definition).

4.25
Functional Test
Testing of all the specified Functions of hardware units to prove their functional capability (BS EN 60706-5
definition).

4.26
Hardware Unit
Design element which represents Functions and/or sub-functions in the form of hardware, possibly including
software components (BS EN 60706-5 definition).

4.27
Line Replaceable Unit, LRU
Replaceable hardware or software unit which can be replaced directly on the equipment by the user or by a
maintenance support facility (BS EN 60706-5 definition).

4.28
Maintenance Concept
Interrelationship between the Design Levels and the levels of maintenance to be applied for the maintenance
of an item (BS EN 60706-5 definition).

4.29
Maintenance Policy
A description of the interrelationship between the maintenance echelons, the indenture levels and the levels
of maintenance to be applied for the maintenance of an item (BS EN 60050-191 definition).

4.30
Monitoring
Automatic supervision of the Functions required for the operation in a selected operational mode; operation
should not be affected by this (BS EN 60706-5 definition).

4.31
No Fault Found
A Fault that cannot be related to a defined Unit or number of Units (isolated as containing a Fault) upon
subsequent testing of the unit.

4.32
Operational Context
Circumstances in which an item is expected to operate (BS EN 60706-5 definition).



4.33
Parameter
Physical quantity that specifies a Function (BS EN 60706-5 definition).

4.34
Product
Specified deliverable goods or service.

NOTE 1: In the context of dependability, a Product may be simple (e.g. a device, a software algorithm) or
complex (e.g. a system or an integrated network comprising hardware, software, human elements, support
facilities and activities).

NOTE 2: A Product has its own life cycle phases.

4.35
Product Breakdown Structure
Hierarchical tree visualizing the physical composition of a Product by assemblies of units and sub-units (BS
EN 60706-5 definition).

4.36
Prognostics
A technique which allows data to be collected and analysed on the operational status of an entity so that
predictions can be made as to when Failures / Faults are likely to occur. Prognostics can be considered as
a subset of Testability, but the storage of data and the instantaneous analysis of data can be highly
complex, so it is usually only applied to critical performance attributes.

4.37
Shop Replaceable Unit, SRU
Replaceable hardware or software unit which can be replaced by the user depot/workshop or by the
maintenance support facility at the same level or in the company's workshops (BS EN 60706-5 definition).

4.38
Signal
Variation of a physical quantity used to represent data.

NOTE: A signal is represented by one or several Parameters (BS EN 60706-5 definition).

4.39
Specification
Detailed definition of the functions of an item for a given level of the product breakdown structure.

NOTE: Specifications should be derived from the systems requirements and be verifiable (BS EN 60706-5
definition).

4.40
Stimulus
Input Signal with defined Parameters for the purpose of exercising/triggering Functions (BS EN 60706-5
definition).

4.41
Sub-function
Sub-division of a Function (BS EN 60706-5 definition).

4.42



Terminal
Generic term for the physical access points to the Signals of a test item.

Examples of physical implementations or relevant terms for synonymous expressions are:


– pin
– connector
– plug/plug-type connector
– test point
– interface
– port

NOTE: A terminal is generally identified by a unique identifier (BS EN 60706-5 definition).

4.43
Test Concept
Description of the results of the system testability requirements analysis and stipulation of the method of how
the requirements are to be met (BS EN 60706-5 definition).

4.44
Test Coverage
The ratio of the number of Faults that can be tested to the total number of Faults that could occur during
operational use for a defined failure criticality and a defined test scenario.

4.45
Test Effectiveness
A superset of Test Coverage. It is defined as the ratio of the failure rate of all tested Faults to the failure
rate of all Faults within a system.
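The relationship between Test Coverage (4.44) and Test Effectiveness (4.45) can be sketched as follows. The snippet is illustrative only and is not part of the standard; the fault table and failure-rate figures are hypothetical.

```python
# Hypothetical fault table: each entry is (failure rate per 10^6 h,
# covered by the defined test scenario?).
fault_table = [
    (10.0, True),
    (1.0,  True),
    (1.0,  False),
    (8.0,  True),
]

# Test Coverage (4.44): count ratio of testable faults to all faults.
coverage = sum(covered for _, covered in fault_table) / len(fault_table)

# Test Effectiveness (4.45): failure-rate-weighted ratio, so a covered
# high-rate fault contributes more than a covered low-rate one.
effectiveness = (sum(rate for rate, covered in fault_table if covered)
                 / sum(rate for rate, _ in fault_table))

print(coverage)       # three of four fault modes are testable
print(effectiveness)  # the one untested fault has a low failure rate
```

With these figures effectiveness exceeds coverage because the untested fault is rare; the two measures diverge most when test gaps coincide with high failure rates.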

4.46
Test Equipment
Tools (hardware and/or software) required for conducting tests

NOTE: Due to the technology involved, these are divided into internal (BITE) and external test equipment
(BS EN 60706-5 definition).

4.47
Test Instruction
Document describing how the tests required in the test specification are to be implemented (BS EN 60706-5
definition).

4.48
Test Path
Description of the assignment of hardware units to their terminals, taking the associated test steps into
account.

NOTE: In addition, the test path defines the (functional) relationship between the stimulus and the response
(BS EN 60706-5 definition).

4.49
Test Sequence
Series of Test Steps (BS EN 60706-5 definition).

4.50
Test Specification
Document in which Test Sequences, Parameters and Functions are specified (BS EN 60706-5 definition).


4.51
Test Step
Smallest unit in a test conducted on a hardware unit (BS EN 60706-5 definition).

4.52
Test Task
Sum of all the tests necessary to meet the specifications of fault recognition and isolation (BS EN 60706-5
definition).

4.53
Testability
A design characteristic which determines the degree to which an item can be functionally tested under stated
conditions (BS EN 60706-5 definition)

5 Abbreviations
Designation Title

ARMP Allied Reliability and Maintainability Publication

BIT Built in Test

BITE Built in Test Equipment

CADMID Concept, Assessment, Demonstration, Manufacture, In-Service, Disposal

COTS Commercial Off The Shelf

Def Stan Defence Standard

DStan UK Defence Standardization

FMECA Failure Mode Effect and Criticality Analysis

FTA Fault Tree Analysis

GFE Government Furnished Equipment

ITT Invitation to Tender

LCC Life Cycle Cost

LORA Level of Repair Analysis

LRU Line Replaceable Unit

MOD Ministry of Defence


MTTF Mean Time to Failure

PM Programme Manager

R&M Reliability & Maintainability

RU Replaceable/Repairable Unit

5 TESTABILITY CONCEPTS

5.1 The most important criteria for product selection and procurement are performance characteristics
and Life Cycle Costs. LCC is directly related to availability, and hence to effective operation and
maintainability, which are optimised by considering testability.

5.2 Testability is applicable to the principal phases of the CADMID life cycle; specifically from the time
when the need for a product is established, through the design and development phase, the manufacturing
and installation phase and finally to the operation and maintenance phase.

5.3 Testability analysis provides a quantitative assessment of how testable a product is and helps to
optimise LCC for both user and supplier. Provided testability analysis is conducted early in the product life
cycle, it can identify opportunities for design improvements to increase availability. These opportunities
include the following methods:

a) improved test access

b) redundancy in which failure of a critical unit is negated by the operation of one or more standby units
operating in parallel

c) re-configurability based upon condition monitoring

d) prognostics to predict statistically the time to failure of critical units so that they can be replaced or
repaired before they fail.
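As a worked illustration of item b), the availability gain from parallel redundancy can be estimated with elementary probability. The sketch below is illustrative only and is not part of the standard; it assumes independent, identical units, and the availability figure is hypothetical.

```python
# Availability of n identical units in parallel, assuming independent
# failures: the group is unavailable only when every unit is down.
def parallel_availability(unit_availability: float, n: int) -> float:
    return 1.0 - (1.0 - unit_availability) ** n

a = 0.95  # hypothetical single-unit availability
print(parallel_availability(a, 1))  # no redundancy
print(parallel_availability(a, 2))  # one standby unit in parallel
```

Even one standby unit reduces the notional unavailability from 5% to 0.25% under these assumptions, which is why redundancy features among the design improvements that testability analysis can identify.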

5.4 Testability can also identify situations where testing is impractical:

a) because the overall system operation cannot be verified due to safety constraints, or because the test
would be destructive

b) when consideration is given to criticality of the failure

c) due to costs associated with alternative design or testing (including test equipment procurement,
maintenance costs and the test activity costs).

5.5 The testability requirement must consider all of the above, including the outputs from trade-off studies
into mission criticality, cost, and resources, at the earliest opportunity.

6 TESTABILITY OBJECTIVES

6.1 From the operational user's viewpoint, the main objective is to achieve maximum (if not total) mission
availability. Testability plays a key role in design and development because it has a major impact on
availability, maintainability and through-life costs. The criteria for this include:



a) minimum operational downtime

b) rapid fault-detection and isolation

c) minimising ambiguity group size to reduce the level of spares carried for repair and replacement.

6.2 From the developer perspective, testability requirements such as Test Effectiveness, Test Coverage
and Fault Detection Capability are incorporated into a test policy, which scopes and characterises tests for
the product life cycle. To ensure the test policy can meet the operational requirements, testability analysis
should commence on the top level conceptual design and progress through successive levels of detail in
order to enhance the design as required.

6.3 This is summarised in the definition of testability taken from MIL-HDBK-2165:

“Testability is a design characteristic which allows the status (operable, inoperable, or degraded) of an item
to be determined and the isolation of faults within the item to be performed in a timely manner.”

7 TESTABILITY AND THE PRODUCT LIFE CYCLE

7.1 Deciding the extent to which tests are necessary at each stage of the lifecycle can have a significant
impact on life cycle costs.

7.2 The extent of testing required can be determined by calculating test coverage at each stage. Test
coverage is cumulative, and one aspect of the testability process should be to assess the extent to which
system components are exercised by testing throughout the product lifecycle.

7.3 The extent to which tests are conducted throughout a product’s operational life is dependent upon:

a) its criticality i.e. how important it is that the product performs its function on demand – e.g. if a TV fails,
it is inconvenient; if an aircraft fuel system fails it can be catastrophic

b) whether the expected life of subsystems and associated equipment exceeds the life of the product, in
which case only testing at the product level is cost effective

c) whether the cost of repair is viable when compared with item or product replacement cost.

7.4 Design and Development Phase

7.4.1 As soon as the need for a product has been identified, its operational requirements begin to be
formulated with particular attention being given to its field of application and availability. Availability is
dependent upon component Failure Rates / Mean Time Between Failures and Mean Time To Repair. When
coupled with field of application, these measures directly influence the Repair Strategy and Level of Spares
required, which are in turn largely determined by the product design and the components employed.

7.4.2 Product specifications should include testability requirements, minimally in the form of fault
detection and fault isolation capability, so that they are incorporated into the design and contribute a
meaningful measure of design performance. As the design progresses through successive levels of detail,
feedback of the testability results enhances the successive design phases. This requires close cooperation
between the design, test, manufacturing, maintenance and support organizations.

7.4.3 During the design phase the functions of the product should be tested individually to ensure that
they can perform to their respective specifications and meet their requirements. This is often referred to as
design proving or design qualification testing. At this stage of testing all of the individual functions can be



exercised to ensure they meet their performance requirements in terms of timing, parametric values and
even beyond their normal operational range. It is important that testing covers as many of the function
components as possible. This may also be the only opportunity to fully exercise functions that are
components of, or interface with, safety critical or single shot functions before they are integrated into the
complete system.

7.4.4 Once design proving at the functional level is completed, the functions are progressively integrated
with other product functions as the design proving exercise extends to cover the whole product.

7.5 Manufacturing and Installation Phase

7.5.1 During manufacture the emphasis is on ensuring the product is functional; tests check that
components are correctly orientated, located and installed (e.g. soldered in the case of electronic circuitry).
Each stage of system integration can add to the test coverage although there may be limitations regarding
the extent to which safety or single shot items can be tested. Once a system successfully completes its
production acceptance tests the product can enter into service.

7.5.2 It is essential that appropriate testing is developed at this phase to maximise manufacturing
throughput and yield through effective Fault Detection and Fault Isolation strategies.

7.5.3 Consideration should be given to whether the testing approach used during this phase can be
effectively developed from the design and development phase. Subsystems should be tested prior to their
integration into the final product. Tests may take the form of performance tests but frequently subsystems
are passed out by means of tests to detect manufacturing faults and prove overall functionality. Production
tests often use standalone test systems to exercise the unit through the subsystem interfaces. Sometimes
BIT may be used, supported by external test systems to simulate data passing over the subsystem interfaces
in the final product. Diagnostics have a particularly important role to play in production to facilitate repair and
maximise yield. Once again, production acceptance tests are employed with a variation in emphasis to
achieve final product pass out and delivery to the customer or end user.

7.6 Operation and Maintenance Phase

7.6.1 In service the principal objective is to maximise availability which requires effective fault detection
using functional tests and fault isolation using diagnostic tests.

7.6.2 Tests at this stage may be developed from manufacturing functional tests to reduce the potential
for False Alarms and No Fault Founds in-service.

7.6.3 The effect of obsolescence on test equipment often has to be dealt with in systems having a long
operational life cycle and particularly those that have the potential for extension of life cycle beyond that
originally planned. Mitigation should address not only replacement of test equipment but also migration of
tests to new platforms to ensure consistent test performance.

8 CONTRACTING FOR TESTABILITY

8.1 This section provides guidance to both the MOD and the Contractor to ensure that the testability
requirements and activities required to fulfil those requirements meet the following objectives:

a) that the testability requirements are stated explicitly and clearly in the contractual documents

b) that the testability requirements are fulfilled by a programme of activities planned and implemented by
the Contractor



c) that assurance is provided by the Contractor to confirm that the testability requirements are being met
at all stages of the contract.

8.2 It is extremely important to recognise that changes to the requirements may have an impact on both
time and cost.

8.2.1 The consequences of failing to meet the specified requirements should be defined in advance,
including any requirement to change the detection process or system design.

8.3 Testability Measures

8.3.1 The testability requirement should be specified in terms of:

a) test effectiveness

b) test coverage

c) fault detection

d) fault isolation

e) No Fault Found frequency

f) effect of BIT on system reliability

g) impact on the system of BIT Failure

h) identification of safety related functions for which the fault detection target is 100%

i) identification of critical functions for which the required test coverage target is 100% with a
requirement to produce evidence to justify where this cannot be achieved

j) the level (lowest acceptable limit) of test coverage for all other functions

k) maximum time for fault detection

l) maximum time for fault isolation

m) minimum acceptable sizes for ambiguity groups

n) maximum acceptable value for false alarm rate

o) the process to verify testability requirements have been implemented

p) the process to verify that safety requirements have been implemented.

8.4 Further elements to be included in the testability requirements include:

8.4.1 Verification and Validation of Testability

8.4.1.1 The testability requirement shall specify the level of documentation and metrics to be
provided, not only to verify that the testability requirements have been met but also, and most importantly,
to show how the system will meet through-life support requirements.

8.4.1.2 Suppliers should be required to describe the testability measures that they intend to use and how
these will be validated.

8.4.2 Built-in Test (BIT)

8.4.2.1 The testability requirement may express a preference for BIT to reduce the need for external test
equipment.

8.4.3 External Test Equipment

8.4.3.1 Where the testability requirement permits its use, external test equipment may be used on its own
or in conjunction with BIT.

8.4.3.2 When used on its own, external test equipment requires access to the equipment via its test
connectors and if the equipment is a subsystem of the total design, it may also use the subsystem interfaces.

8.4.4 Testability of the Test System

8.4.4.1 If an external test system is employed, the testability requirement shall specify the extent to which it
too shall be testable up to and including the interface with the unit under test.

8.4.5 Product Acceptance / Demonstration

8.4.5.1 The supplier should be required to provide a Maintainability Demonstration Plan, described in Def
Stan 00-42 Part 6, which details how the prescribed testability characteristics of the equipment are to be
demonstrated.

8.4.6 False Alarms and No Fault Founds

8.4.6.1 The system test may detect and report a failure to the user when no failure truly exists. This
may have an adverse impact upon operations by causing unnecessary investigations, the report of a
false alarm and the replacement of a fully functional unit.

8.4.6.2 False alarm rates for the various resolutions should be included in the testability requirement. The
testability requirement should state that they shall not exceed a predetermined limit and ideally this limit
should be close to zero. Methods that are recognised to minimise these occurrences may involve feedback
systems that constantly assess the environment and recalibrate themselves to allow for component
degradation within the defined criteria.
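The feedback approach described above can be sketched as follows. This is a minimal illustration, not a prescribed method: the function name, the parameter values and the simple exponential-average recalibration are all invented for the example; a real system would use the project's defined degradation criteria.

```python
# Sketch of a self-recalibrating test threshold: gradual drift within the
# allowed limit tracks the baseline (component degradation) rather than
# raising a false alarm; a sudden departure from the baseline is a fault.
def make_adaptive_check(nominal, band, drift_limit, alpha=0.1):
    baseline = nominal
    def check(measurement):
        nonlocal baseline
        # Fault only if the reading leaves the band around the tracked baseline.
        fault = abs(measurement - baseline) > band
        if not fault:
            # Recalibrate slowly, but never track further than the drift limit.
            candidate = (1 - alpha) * baseline + alpha * measurement
            if abs(candidate - nominal) <= drift_limit:
                baseline = candidate
        return fault
    return check

check = make_adaptive_check(nominal=5.0, band=0.2, drift_limit=0.3)
# Slow drift within the defined criteria: no false alarms raised.
print([check(5.0 + 0.01 * i) for i in range(10)])
```

A step change well outside the band (e.g. a reading of 5.5) would still be reported as a fault, so genuine failures are not masked by the recalibration.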

8.4.6.3 The contract should also require the supplier to provide metrics on the achieved NFF or False
Alarm Rates once the equipment is in service. This places an equivalent requirement on the user to supply
appropriate data from usage.

8.4.6.4 Unless the environmental conditions can be accurately duplicated, repeating a test to eliminate a
false alarm serves only to increase the test time, provides no benefit and should be avoided.

8.4.7 Testability Verification

8.4.7.1 Design Proving, or Verification Testing, provides assurance that a system will function correctly
given defined inputs and within a defined environment.

8.4.7.2 Testability analysis can determine test coverage and offer the opportunity to improve the design by
identifying where additional test access would be beneficial.

9 IMPLEMENTING TESTABILITY


9.1 Testability in the Development Process

9.1.1 Figure 1 illustrates three milestones in the testability development process:

a) Testability Requirements - should be set out at the start of the development process

b) detailed testability analysis - should be conducted in parallel with the design as it progresses to
provide feedback and opportunities for design improvements

c) testability verification - part of the acceptance process to demonstrate the Testability Requirements
have been satisfied.

Figure 1 – Testability Milestones of the Development Process

9.1.2 The product development process should commence only when the testability requirements of the
product in operation have been determined. Successful product development is dependent upon having the
correct processes and procedures in place to provide a common understanding to all parties of the
requirements relating to the development, update and modification phases of a product. These processes
are as important as the more familiar activities of technical project management, quality assurance and
configuration management. The processes usually include:

a) System Requirements: ensure customer system requirements are complete. Where incomplete,
ambiguous or deficient the customer must provide clarification

b) Product Specification: analyse the customer requirements to generate a Product Specification and
Functional Design

c) System Architecture: assign the functionality of the functional design to the top level system
architecture, i.e. equipments and sub-assemblies

d) Detailed Design: decompose the System Design into functions/sub-functions and show the interfaces
between them. Develop the system components to effect the functions and sub-functions

e) Design Proving, Product Manufacture and System Integration commence when the Detailed Design is
complete

f) Product Delivery and Acceptance: prepare, conduct and document the verification activities.

9.2 Functional Design

9.2.1 The testability development process requires that all functions and sub-functions that make up the
product should be verifiable. To be effective, testability analysis should be conducted throughout the design
process in order that deficiencies in test coverage are identified in a timely fashion to facilitate design
changes including the introduction of additional test access.

9.3 Design criteria for testability

9.3.1 The following should be considered to meet testability design criteria:

a) the functions should be assigned to groups that have clearly-arranged and well defined interfaces.
Where possible, the physical and electrical partitioning of the design should reflect the functional
assignment to facilitate diagnostics, maintenance and logistics

b) components and design elements should be selected so that straightforward diagnostic tests can
readily determine their operational status

c) where possible it is desirable that tests are developed that can re-use existing test equipment during
maintenance. Physical and electrical access is a key component in realising this objective

d) depth of test achieved by self-test should be consistent with the test policy

e) test points and connectors for external test equipment should be clearly identified

f) defining what constitutes a failure and its criticality is a major consideration in establishing the
testability criteria

g) the status of a system after power on should be clearly defined to facilitate effective fault isolation e.g.
check reset logic states after power up.

h) test procedures and stimuli should be repeatable and non-detrimental to the equipment or overall
system

i) ideally the design should provide adequate test access to data buses

j) any re-installation of functional software post maintenance should be readily verifiable

k) any diagnostic software for maintenance purposes should be easy to load, operate and well
documented

l) where different functional elements can operate in multi-role or different scenarios, each role or
scenario should be addressed.

9.4 Use of commercial off-the-shelf products (COTS)


9.4.1 As components of the overall system, COTS products must meet the testability, maintenance and
the availability requirements set out in the testability requirement, including provision of in-service
documentation. Where the COTS item falls short of the testability requirements, the missing aspects must be
mitigated by the system functionality.

9.4.2 Suppliers should make provision for errors detected by COTS equipment to be detectable and
reportable by system BIT e.g. low battery power.

9.4.3 If the prime contractor and COTS supplier are separate parties, the contract must ensure the item
meets the testability requirements. In order to substantiate reliability and maintainability claims, the supplier
may need to furnish operational and environmental characterization data and the results of testing as well as
evidence that the testability processes do not compromise the designed-in reliability and maintainability
characteristics.

9.4.4 The prime contractor should utilize the data from the supplier to ensure the product testability
requirement is achieved.

9.5 Logistic support

9.5.1 The logistic support required is likely to have an impact on the product design to ensure that the
maintenance plan is achievable e.g. through higher priority for BIT functions and so on.

9.6 Availability and diagnostic testing

9.6.1 Initial trade-off studies into reducing operating costs should determine the extent to which testability
features and characteristics need to be incorporated into the design.

9.6.2 Where items wear out, preventive maintenance can be a significant contributor to operating costs
and so the design needs to optimise maintenance intervals, their duration, diagnostic testing and the
resources needed to conduct maintenance activities.

9.6.3 Where no wear-out occurs such as for most solid state electronic systems, preventive maintenance
costs can be negligible. The highest expenditure arises from the capital costs, which are largely dependent
on the achieved availability. Availability is increased by minimising down time and key factors in achieving
this are component reliability, efficient fault detection and fault isolation.
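The relationship between down time and availability in 9.6.3 can be illustrated with a simple steady-state calculation; all figures below are invented, and the split of mean down time into detection, isolation and repair contributions is an assumption made for the example.

```python
# Illustrative steady-state availability: MTBF / (MTBF + MDT), where the
# fault detection and fault isolation times are components of the mean
# down time (MDT). Improving either directly increases availability.
mtbf = 2000.0                              # mean time between failures, hours
detect, isolate, repair = 0.5, 1.5, 4.0    # hours contributing to MDT
mdt = detect + isolate + repair

availability = mtbf / (mtbf + mdt)
print(f"{availability:.4f}")
```

Halving the isolation time in this sketch raises availability, which is the sense in which efficient fault detection and isolation are "key factors" in the clause above.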

9.7 Test Documentation

9.7.1 The appropriate test documentation should be provided, see Def Stan 00-52.

9.8 Configuration Management

9.8.1 Configuration management requirements are specified within Def Stan 05-57. Changes in a
system configuration may require associated changes to the testing process. Designing for testability should
take into consideration the flexibility of the testing process to accommodate system variants, modifications,
and build states.

10 TESTABILITY VERIFICATION

10.1 Verification and Validation of Testability


10.1.1 Testability and Diagnostic Testing capability must be verified. Appropriate verification concepts
include verification by analysis and verification by tests.

10.1.2 To ensure that all relevant aspects of testability are addressed, checklists can be used by both
the contractor and the MOD, but they are of limited value. They cannot be applied equally to all testable
products and as such any figure of merit compiled from them may be misleading. Checklists should only be
used as an “aide memoire” to ensure that all aspects of the testability requirement have been addressed.

10.1.3 Contractors shall describe the Test Measures that they have used, how they meet the testability
requirement and how they will support and substantiate the verification and validation processes.

10.2 Verification by analysis

10.2.1 The fundamental elements of verification by analysis are:

a) analytical investigation of the item and the assessment of its testability features may be effected by
FMECA and/or specialist software tools

b) review of documents supplied in response to the testability requirement for completeness and
accuracy.

10.3 Verification by tests

10.3.1 Verification by tests is conducted on the product and, where required by the testability
requirement, it should be conducted using the product acceptance test specification. Where practical, faults
should be introduced into the product using non-destructive hardware interventions and/or, where
appropriate, by software simulation.

10.3.2 Where critical items cannot be tested (e.g. one shot items, safety interlocks), they must be clearly
identified and evidence must be provided to substantiate the efficacy of the design, such as testing during
development and/or production processes prior to system integration.
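The software-simulation approach to fault injection can be sketched as below. The three-stage signal chain, its test points and the diagnostic routine are all hypothetical; the point illustrated is only that each modelled failure mode is injected in turn and the diagnostic must isolate it correctly.

```python
def run_chain(faulty=None):
    """Simulate test-point readings for stages a, b, c of a signal chain;
    a faulty stage corrupts its own output and everything downstream."""
    readings = {}
    ok = True
    for stage in ("a", "b", "c"):
        if stage == faulty:
            ok = False
        readings[stage] = ok
    return readings

def diagnose(readings):
    """Isolate the fault to the first stage whose test point reads bad."""
    for stage in ("a", "b", "c"):
        if not readings[stage]:
            return stage
    return None

# Inject each modelled failure mode and confirm correct isolation.
for injected in ("a", "b", "c"):
    assert diagnose(run_chain(faulty=injected)) == injected
print("all injected faults isolated")
```

In a real programme the "model" would be the product itself or a qualified simulation of it, and the injected faults would be drawn from the FMECA failure modes.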

10.4 Testability documentation

10.4.1 Evidence of testability analysis should be provided in a systematic and consistent format to
facilitate verification.

10.4.2 It is important to ensure that full documentation of the diagnostics is supplied in the relevant service
format, so that all the information is made available to the operators and maintainers of the system.

10.4.3 The design of the documentation should take into account the operational context for usage and
maintenance, including levels of skill, and degree of training to be given.

10.5 Demonstration

10.5.1 The Maintainability Demonstration Plan, described in Def Stan 00-42 Part 6, shall detail how the
prescribed testability characteristics of the equipment are to be demonstrated. It must be recognised that the
demonstration, due to constraints on time and cost, may not be able to address all the possible permutations
of equipment failure.

10.5.2 Additionally, the prime contractor shall supply evidence, collected throughout the development
cycle, of how the testability requirements have been satisfied in respect of all failure modes. Initially this may
be on a theoretical basis, but it must be substantiated by practical evidence collected as the system
develops.

10.5.3 The testing of a one-shot device or a one-shot system shall be in accordance with Def Stan 00-42
Part 1.

10.5.4 In addition to the previous sub-clause the prime contractor shall identify which, if any, mission or
safety critical entities cannot be tested and provide alternative assurance to the satisfaction of the
appropriate authority that these entities will function when required.

11 TESTABILITY RISKS

11.1 Introduction

11.1.1 Introducing Testability into a system will affect its Reliability, Maintainability and Availability. An
optimised Testability regime will result in increased Availability. However, there are associated risks that
must be considered when determining the level of Testability introduced.

11.1.2 Testability is concerned with the determination of the operational status of an entity and the
isolation of failed parts. Both of these elements can be affected by the access to, and the frequency of,
reported information. Risks to be considered include:

a) BIT (Built In Test) causing a reduction in operational reliability. Proper management of the test timing
can minimise the risk to operational reliability while achieving the required level of test

b) test issues that increase weight, demand more power and increase system complexity can influence
system performance

c) system safety that can be affected by the need to test weapon systems in safe environments

d) system life that can be affected by the degree of testing, particularly when wear occurs and/or
consumables are required

e) compatibility across test equipment (calibration and rate of degradation)

f) compatibility across test equipment and entity (build standard mismatches)

g) go/no go tests using limits that are too restrictive, resulting in test failures although the system functions correctly

h) test tolerances that are too close to the go/no go boundary increase the possibility of accepting bad
and rejecting good units

i) induced system failure risks caused by testing that has a weakening effect, by test failures that
generate system failures and by failures induced by maintainers

j) COTS products increase risks associated with testing, particularly when integrating multi-COTS
systems

k) logistic risks associated with increased turn round times and test equipment costs

l) logistic risks with delivery delays

m) facility variations (power, temperature, humidity) impacting upon test performance

n) human factor risks.

11.2 Reliability Impact


11.2.1 The main purpose for determining the operational status is to provide a level of confidence that the
system will function satisfactorily when required.

11.2.2 The reliability of the operational system is calculated to provide a level of confidence of enduring
satisfactory function. The level of confidence is influenced by the following factors:

a) the reliability of the system post design and manufacture

b) the test coverage ratio at a given criticality level, weighted by failure rates

c) the failure detection ratio at a given criticality level, weighted by failure rates

d) the reliability impacted by the general testability risk issues, not already incorporated within the
previous three factors.
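The failure-rate weighting in b) and c) can be illustrated with a short calculation. The entities, failure rates and detectability flags below are invented; the ratio itself follows the usual definition of a failure-rate-weighted measure (detected failure rate over total failure rate at a given criticality level), as in BS EN 60706-5.

```python
# Failure-rate-weighted fault detection ratio: the fraction of the total
# failure rate at a criticality level whose failure modes are detectable.
failure_modes = [
    # (failure rate per 10^6 h, criticality, detected by test?)
    (120.0, "critical", True),
    (40.0,  "critical", True),
    (10.0,  "critical", False),
    (300.0, "non-critical", True),
    (150.0, "non-critical", False),
]

def weighted_detection_ratio(modes, level):
    """Sum of detected failure rates over total failure rate at a level."""
    rates = [(rate, det) for rate, crit, det in modes if crit == level]
    total = sum(rate for rate, _ in rates)
    detected = sum(rate for rate, det in rates if det)
    return detected / total

print(round(weighted_detection_ratio(failure_modes, "critical"), 3))      # 0.941
print(round(weighted_detection_ratio(failure_modes, "non-critical"), 3))  # 0.667
```

Weighting by failure rate means a rarely failing undetectable mode degrades the ratio far less than a frequently failing one, which is the intent of factors b) and c).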

11.2.3 Test coverage may be overly pessimistic in assuming that parts that cannot be tested (such as
one-shot entities) may be unreliable. If historic evidence exists to support the likely reliability of untested
entities, then a more realistic measure of system status can be found by adjusting the test coverage to
include untested entities with a proven reliability record.
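A minimal illustration of this adjustment, with invented figures:

```python
# Adjusted test coverage: untestable entities with a proven in-service
# reliability record are counted as covered rather than assumed unreliable.
tested_rate = 800.0           # total failure rate of testable entities (per 10^6 h)
untested_rate = 200.0         # total failure rate of untestable entities
proven_untested_rate = 150.0  # untestable entities with a proven reliability record

raw_coverage = tested_rate / (tested_rate + untested_rate)
adjusted_coverage = (tested_rate + proven_untested_rate) / (tested_rate + untested_rate)

print(raw_coverage)       # 0.8
print(adjusted_coverage)  # 0.95
```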

11.2.4 The use of prognostics can contribute to operational reliability over a specified future mission. The
limitations of prognostics in the early phases of product deployment must also be recognised.

11.3 Maintainability Impact

Fault isolation can have a major impact upon maintainability. A process should be used to generate a
critical-entities list as a means of identifying those entities requiring periodic maintenance or frequent
repair/replacement. One factor influencing the logistics of a system is the number of ‘No Fault Found’
failures. Failure isolation should be used constructively to minimise the occurrences of ‘No Fault Found’, as
these will directly impact upon maintenance resources, spares and the through life costs of a system.

11.3.1 Test system(s) must be capable of detecting when system reconfiguration has occurred
automatically in response to faults detected in operation. The test system(s) must also be capable of
isolating the original failed entity/function (dormant failure) so that appropriate corrective action can be taken.
The criticality of the system will determine whether the operator should be notified that reconfiguration has
occurred during the mission or whether such notification is only necessary at the start or end of a mission, or
during maintenance.

11.4 Safety Considerations

11.4.1 Any tests identified by the testability analysis as hazardous to either personnel or equipment
must be flagged and documented so that they can be performed in a safe and reliable manner.

11.4.2 Such hazardous conditions may arise from:

a) Failure mode of the unit under test, test equipment or operator error

b) Use of equipment at its stress limits

c) Forbidden or dangerous operations.

11.5 Non-Conformities


11.5.1 Recommendations from audits and failures identified during design verification or production
testing that were not detected by the test regime must be fed back into the relevant test specification,
corrected by design changes or recorded as non-conformities.

ANNEX A - Informative Annex providing supporting/descriptive material relating to main sections of the Standard

A1 Objectives of Testability

A1.1 Testability ensures that faults are identified to the level specified by the test policy as quickly,
unambiguously and cost effectively as possible. For each function at the levels identified by the test policy,
testability aims to:

A1.2 Identify functional tests to detect faults or verify that the equipment is operational. It is an objective
that functional tests identify all faults.

A1.3 Identify diagnostic or isolation tests that can isolate faults unambiguously. When a fault has been
detected, these tests systematically exercise the equipment to isolate the failure. It is important that the fault
is isolated to the correct level and the smallest ambiguity group in accordance with the testability isolation
requirements.

A1.4 Although typically exercised during the product maintenance cycle, the above types of tests can be
implemented as functions of:

a) Built-In-Test, BIT, where the data can be analysed while the equipment is in service

b) Condition Monitoring, CM which captures parametric data from the equipment during its normal
operation to determine whether equipment is degrading over time. (Condition Monitoring is sometimes
referred to as Continuous Monitoring or Continuous BIT.)

A1.5 Identify tests that are impractical to implement because they exercise single-shot devices, could
have a destructive effect on the function, or are subject to safety constraints.

A1.6 Identify functions that cannot be tested, whose failure is undetectable or whose failure has no effect
on normal operation.

A1.7 Ensure that there is sufficient access to permit the tests to be carried out.

A1.8 During the manufacturing process, where the expected time taken to test an item is likely to have
an impact on the potential rate of production, testability analysis should consider the time it takes to execute
each test and the number of tests required, and then develop an optimised test execution sequence. If these
measures are still insufficient to eliminate potential delays or bottlenecks in the manufacturing process, the
analysis identifies the need for additional test system capacity.

A2 Prognostics

A2.1 During operational service, prognostics can be used to generate and revise preventive
maintenance plans. They are particularly useful for critical components whose failure consequences are too
severe to allow them to run to failure.

A2.2 However, prognostics can be effective only if comprehensive failure, operating and environmental
data is available, because the aim is to predict the optimum time to replace parts before they actually fail.
This tends to limit the effectiveness of prognostics at the beginning of the product life cycle.

A2.3 Furthermore, such data can prove expensive to collect and analyse, and the cost of developing and
using it has to be balanced against the cost and impact of failures in operation.

A3 Further BIT Aspects

A3.1 BIT typically falls into one of three categories:

a) Power On BIT – a form of self-test that is conducted whenever the system is powered up

b) Interruptive BIT (sometimes referred to as Demanded BIT) – requires the system to be offline or its
operation to be suspended while it is carried out so as not to disrupt normal operation. Interruptive BIT
conducts a more extensive testing regime and is often used for diagnostic purposes

c) Continuous BIT (sometimes referred to as Periodic BIT or Condition Monitoring) – runs in the
background without disrupting system operation. A particularly useful form is used for detecting
failures in electronic equipment, often by simply monitoring supply voltages, discrete signals and data
checksums. These types of test require no stimuli, though hardware features may need to be
incorporated into the design to conduct the signal measurements.
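A Continuous BIT cycle of the kind described in c) might be sketched as below. The sensor-reading functions are stand-ins for real hardware access, and the 5 V +/- 5 % tolerance and monitored data region are invented for the example.

```python
# Minimal sketch of one Continuous BIT background cycle: a supply-voltage
# check against a tolerance band and a data checksum check. No test
# stimuli are required; the checks only observe the running system.
import zlib

def read_supply_voltage():
    """Stand-in for an ADC reading of a monitored supply rail."""
    return 5.02

def read_memory_block():
    """Stand-in for a monitored data region."""
    return b"configuration-table"

REFERENCE_CRC = zlib.crc32(b"configuration-table")

def continuous_bit_cycle():
    """One background cycle: return (check name, passed) pairs."""
    results = []
    voltage = read_supply_voltage()
    results.append(("supply_voltage", 4.75 <= voltage <= 5.25))  # 5 V +/- 5 %
    checksum_ok = zlib.crc32(read_memory_block()) == REFERENCE_CRC
    results.append(("data_checksum", checksum_ok))
    return results

for name, passed in continuous_bit_cycle():
    print(name, "PASS" if passed else "FAIL")
```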

A3.2 Since BIT can contribute to the failure of a system, it is advisable to specify that BIT should not
contribute to the system complexity by more than a predetermined proportion of the failure rate, total cost,
component count or power requirement.

A4 External Test Equipment

A4.1 Subsystem interfaces are used to provide stimuli to the function being tested and access to the test
point. The test equipment assesses the outcome of the test.

A4.2 External test equipment does not impinge upon the reliability of the system, as BIT does.

A4.3 Where external test equipment is used in conjunction with BIT it is frequently used in a
complementary capacity, such as to stimulate the unit under test via its subsystem interfaces, conduct tests
that cannot be conducted by BIT or to access, analyse and present BIT results.

A4.4 In all cases, external test equipment has cost, testability and maintenance considerations of its own
and in systems with a long in-service life, the impact of test equipment obsolescence must be incorporated
into the life cycle cost.

A5 False Alarms and No Fault Founds

A5.1 Faults indicated where no faults exist can be due to a number of causes, which should be
eliminated wherever possible:

a) Where the system test detects a fault and the diagnostic test identifies a faulty item but, upon removal
and independent testing of the item, no fault is found, the cause may be errors in either the system test
specification or the item test specification

b) Design tolerance-tiering problems: a failure at system test may not register as a failure at unit test.
This may be caused by units having extreme values that are within the tolerance band of their
operation so that, when combined with a similar rogue unit, the combined effect of their respective
tolerance bands can result in continual rejection


c) Poor diagnostics: the test does not adequately identify the faulty part.

d) Interface problems: a connector fault may have resulted in a unit being removed and a re-connection
removes the failure

e) Intermittent or transient fault: rogue units can sometimes contain faults that are not evident except
when operating in certain conditions

f) Software faults: faults appearing during usage that are the result of inadequate software and
integration tests

g) Inadequate maintainer and/or user training.

A6 Testability Measures

A6.1 Characteristic values of testability are employed to facilitate assessment. The most frequently used
values are Test Coverage, False Alarms, Fault Detection and Fault Isolation. These measures not only
quantify the testability performance, but they are also applicable to all products and are independent of the
technology employed by the product.

A6.2 These values should be specified in the testability requirement and used to assess and verify that
the testability criteria have been achieved prior to product acceptance. Any consequences of failing to meet
the requirements shall be defined in the testability requirement.

A6.3 Formulae for the calculation of Fault Detection and Fault Isolation are available in BS EN 60706-5
Annex A.

A6.4 Fault Detection

A6.4.1 Fault Detection methods include BIT, BITE, inspection and self-test; all of which can be utilised
individually, or combined, to confirm the serviceability of a system.

A6.4.2 An entity will have a defined number of failure modes, caused by component failures or
malfunctions within the entity. The failures are usually determined, and judged for their effect on the
performance of the entity, by an appropriate failure assessment method such as FMECA.

A6.4.3 The criteria for the Fault Detection methods must be stated and justified. These may include:

a) the percentage of failures defined as critical that the detection method shall be able to detect, and
the percentage of these failures that are detectable with or without operator intervention

b) a performance test to confirm the operational serviceability of the system, without dismantling the
equipment, within a specified time. The time taken for this may vary in conjunction with the mission
profile

c) the percentage of detected and testable failures that may be automatically diagnosed, which must
be in accordance with the repair policy of the system

d) the detection method shall not reduce the performance of the system to below its operational
requirement

e) the service level (e.g. Forward and Depth) at which the method is to be performed

f) the standards to be employed to develop the test specifications and routines.


A6.4.4 It is important that the criticality of failures is recognised in the determination of the failure detection
ratio. Those testable failures that remain undetected in the entity should, ideally, be those having the least
effect on the success of the mission.

A6.4.5 A failure occurrence must be detected as quickly as possible.

A6.5 Test Coverage

A6.5.1 As an entity is designed, its failure modes become apparent and consequently it becomes
possible to identify which functions should be tested. An estimate of the probable test coverage of other
failures must be made. The method of arriving at the figure, and any associated reasoning for prioritising the
failures, may be offered as evidence that all failure scenarios have been considered.

A6.5.2 It is important that the test coverage accurately reflects the coverage achieved by the tests that are
being conducted.

A6.5.3 Test coverage as a single measure for an entire system is likely to be insufficient if it contains
mission critical or safety related functions. Any critical functions (mission or safety) that cannot be tested
must be identified and the reasons for not achieving them provided.

A6.5.4 Non-coverage becomes particularly significant if the non-exercised items have a critical impact on
mission success, in which case effort has to be expended in the design phase to ensure the critical items
can be exercised.

A6.5.5 Test coverage does not dictate the test procedure though it can have a significant role in
determining which tests are essential. Testing at the system level, with subsequent deeper testing if
necessary can provide substantial timesaving and greatly improve user friendliness and diagnostic
capability.

A6.5.6 Test sensors should be included in the coverage analysis.

A6.5.7 Non-critical items, i.e. those having no functional effect, should not be included in the test coverage
calculation.
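Taken together, A6.5.3 and A6.5.7 suggest a coverage calculation of the following shape. The item list is invented for illustration; the key point is that items with no functional effect are excluded from the denominator.

```python
# Illustrative test coverage calculation: only items with a functional
# effect count, and coverage is the exercised fraction of those items.
items = [
    # (name, has functional effect?, exercised by a test?)
    ("rx_filter", True, True),
    ("tx_amp", True, True),
    ("safety_interlock", True, False),   # untested critical item: must be justified
    ("case_badge", False, False),        # no functional effect: excluded
]

relevant = [(name, tested) for name, effect, tested in items if effect]
coverage = sum(1 for _, tested in relevant if tested) / len(relevant)
print(f"test coverage = {coverage:.0%}")   # 2 of 3 relevant items
```

Note that the untested `safety_interlock` here is exactly the kind of critical function that A6.5.3 requires to be identified and justified separately, not hidden inside an aggregate figure.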

A6.5.8 The aim of testing is to determine the operational state of the function.

A6.5.9 The method of detection may also be specified, e.g. operator alert, displayed in normal operation,
routine BIT results record.

A6.6 Fault Isolation

A6.6.1 The specification for fault isolation may include a number of elements, including:

a) the measure of the ability to detect and isolate faults to a single or group of LRUs and/or to a single
LRU or component, with appropriate accuracy

b) the required, more discrete (in depth) ability to diagnose a fault to within the appropriate sections of
an LRU

c) the ability to isolate faults to the interfaces, e.g. the connectors fitted between LRUs

d) the ability of the test equipment to discriminate between a specific software failure and general
equipment failure

e) the greater percentage of fault isolation which the BITE should be capable of achieving with
operator intervention, as opposed to automatically without it


f) the still greater percentage of fault isolation which the BITE should be capable of achieving when
the system is off-line for maintenance

g) the approach to demonstrating the percentage of fault isolation the BITE can achieve, i.e. by failure
injection (as for a maintainability demonstration).

A6.6.2 The time taken to eliminate false alarms and isolate a fault must be minimised.

A6.7 No Fault Found Frequency is the ratio of the number of No Fault Founds to the total number of
detected faults in a specified period.

A6.7.1 The testability requirement should specify an acceptable limit for the project (ideally zero or as
close to zero as possible). Example: the NFF rate should be zero; the NFF rate shall be less than 2 NFFs per
100 faults detected. It is important that a requirement is placed on the supplier to report the NFF rates
achieved during the in-service period of the lifecycle.
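
By way of illustration only (this sketch is not part of the Standard), the NFF frequency defined above, and a check against a "less than 2 NFFs per 100 detected faults" style limit, can be computed as follows. The in-service figures used are hypothetical.

```python
# Illustrative sketch: NFF frequency per A6.7, i.e. the ratio of No Fault
# Found events to total detected faults in a specified period.
# The reporting-period figures below are hypothetical.

def nff_frequency(nff_count, detected_faults):
    """NFF frequency = NFF events / total detected faults in the period."""
    if detected_faults == 0:
        raise ValueError("no detected faults in the period")
    return nff_count / detected_faults

rate = nff_frequency(nff_count=2, detected_faults=150)
limit_per_100 = 2.0  # e.g. "shall be less than 2 NFFs per 100 faults detected"
compliant = rate * 100 < limit_per_100
print(f"NFF frequency: {rate:.4f} "
      f"({rate * 100:.1f} per 100 detected faults); compliant: {compliant}")
```

A supplier reporting such figures across each in-service period would allow the acquirer to track whether the contracted limit continues to be met.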

A7 Functional Design

A7.1 An appropriate procedure for the achievement of the testability requirement is described in Annex
B of BS EN 60706: Part 5. It breaks the product down into functions and sub-functions (to the required
level), assigns them to hardware functions, and represents the interconnections between them and their
terminals by means of functional links. It is a type of Failure Mode and Effects Analysis (FMEA) method
based only on a functional model.

A8 Criteria for Evaluation of Alternative Diagnostic Designs

A8.1 The supplier should assess the possible alternative diagnostic designs using the following criteria:

a) Capital cost based on:


i. numbers of systems required according to usage and availability
ii. development cost
iii. non-recurring software cost
iv. maintenance installation cost

b) Equipment unit cost:


i. complexity
ii. quantity to be produced
iii. degree of redundancy
iv. degree of BIT capability
v. quality standard

c) Maintenance related cost:


i. complexity
ii. preventive and corrective maintenance cost
iii. savings by diagnostic system preventing consequential damage
iv. degree of automation considered against training levels
v. planned equipment life

d) Means to increase operating profit:


i. improved handling characteristics
ii. increased availability, e.g. by quick diagnosis and repair of faults or by use of redundancy
iii. increased utilization

e) Additional factors to consider:


i. modification potential
ii. technical and cost impact of additional hardware or software on safety and reliability of
equipment


iii. capability to recognize transient and intermittent failures
iv. revert mode capability of diagnostic system
v. cost efficiency gains using BITE compared with external test equipment
vi. use of COTS

A8.2 Software tools can be employed to model the system design and tests to assess the testability of
the design by calculating test coverage, fault detection, fault isolation and so on. Such tools can provide an
effective method of assessing alternative scenarios to optimise the functional and diagnostic test strategies of
the system at each stage of the lifecycle. In addition, their use often provides means of generating consistent
documentary evidence.


Standardization Task Validation Certificate (STVC)

From: To: UK Defence Standardization

Address: Room 1138


Kentigern House
65 Brown Street
Glasgow
G2 8EX

Date:

STANDARDIZATION TASK VALIDATION CERTIFICATE (STVC)


The following Defence Standard proposal is forwarded for approval:

Def Stan Number (if known):

Title (proposed, if new):

Sponsoring Authority:

Task Description and Priority:


Insert the existing title of the standard for a revision.
For a new standard, identify the technology area and a basic description.
Insert the appropriate 6, 12 or 24 month priority, with justification and a proposed time scale.

Search for suitable existing standard:


Insert findings from your search for a suitable existing standard.
Confirm that no suitable new standard is in the course of production. If a new standard is being produced,
identify by whom and confirm that its time scale is unacceptable to MOD; consult DStan as necessary.

Task Justification including the identity of intended standard users:


Justify the need and the added value to MOD in producing this Defence Standard.
Insert details of users and potential users within MOD and Industry of this standard.


As the sponsor of the document, in completing the section below I am aware that it is my
obligation and responsibility to ensure that the Defence Standard complies with MOD
Standardization policy and will continue to be supported.

Sponsor/Committee Chairman UK Defence Standardization

Name:

Designation:

Email:

Tel:

Signature:

Date:

DStan File Reference:


D/DSTAN/

Additional Comments:


Defence Standard Change Control Table

This table is to be completed during the revision process; it should be placed on the front cover of the draft
standard.

The following changes have been incorporated into the new issue of the Def Stan:

Clause No. Change Made Reason for Change

©Crown Copyright 2013

Copying Only as Agreed with DStan

Defence Standards are published by and obtainable from:

Defence Equipment and Support

UK Defence Standardization

Kentigern House

65 Brown Street

GLASGOW

G2 8EX

DStan Helpdesk

Tel: +44 (0) 141 224 2531/2

Fax: +44 (0) 141 224 2503

Internet e-mail: [email protected]

File Reference
The DStan file reference relating to work on this standard is D/DStan/.

Contract Requirements
When Defence Standards are incorporated into contracts, users are responsible for their correct
application and for complying with contractual and statutory requirements. Compliance with a Defence
Standard does not in itself confer immunity from legal obligations.

Revision of Defence Standards


Defence Standards are revised as necessary by an up issue or amendment. It is important that users
of Defence Standards should ascertain that they are in possession of the latest issue or amendment.
Information on all Defence Standards can be found on the DStan Website www.dstan.mod.uk,
updated weekly and supplemented regularly by Standards in Defence News (SID News). Any person
who, when making use of a Defence Standard, encounters an inaccuracy or ambiguity is encouraged
to notify UK Defence Standardization (DStan) without delay in order that the matter may be
investigated and appropriate action taken. Sponsors and authors shall refer to Def Stan 00-00 before
proceeding with any standards work.
