BS ISO/IEC TR 13233:1995 Information technology. Interpretation of accreditation requirements in ISO/IEC Guide 25. Accreditation of information technology and telecommunications testing laboratories for software and protocol testing services

Standard number: BS ISO/IEC TR 13233:1995
Pages: 56
Released: 1996-11-15
ISBN: 0 580 26595 1
Status: Standard
DESCRIPTION

This standard, BS ISO/IEC TR 13233:1995, is classified in these ICS categories:
  • 35.020 Information technology (IT) in general
  • 03.120.20 Product and company certification. Conformity assessment

0.1 This Technical Report provides guidance for assessors and testing laboratories on the specific interpretation of the accreditation requirements applicable to testing (including validation of means of testing and test tools) in the field of Information Technology and Telecommunications (IT&T), specifically in relation to software and protocol testing services. This Technical Report does not apply to the accreditation of inspection, certification and quality assurance assessment activities.

0.2 However, ISO/IEC Guide 25 and any other applicable ISO/IEC Guides take precedence over the interpretation given in this Technical Report.

0.3 This Technical Report covers the use by accredited testing laboratories of services for the validation of means of testing (MOT) and test tools, and also applies to the possibility of accreditation of MOT and test tool validation services, because such a validation service is just a specialised form of software testing service.

NOTE — In many areas of IT&T, it may be impractical to require the use of accredited MOT and test tool validation services, both economically and given the state of the art in the particular area. It is important to recognise that the mere existence of an applicable accredited validation service does not mean that relevant accredited testing laboratories should be required to use it, as other suitable forms of MOT and test tool validation may exist. Other factors outside the scope of this Technical Report will determine if and when use of accredited MOT and test tool validation services might become a requirement.

0.4 This Technical Report is intended to be generally applicable across the whole software and protocol testing area, wherever accreditation to ISO/IEC Guide 25 applies. However, it does not cover all the requirements of ISO/IEC Guide 25. Laboratories are reminded that, in order to obtain and maintain accreditation, they shall fully comply with ISO/IEC Guide 25. This Technical Report interprets the ISO/IEC Guide 25 requirements in this field; it does not in any way replace them. Furthermore, there may be other interpretations of ISO/IEC Guide 25 which are sector independent, perhaps focusing on just one aspect of accreditation; such generally applicable interpretations continue to apply, and are not replaced by this interpretation.

0.5 This interpretation applies to conformance testing and other types of objective testing of software. Specific guidance is provided for OSI, telecommunications protocols, product data exchange (as defined by ISO TC184), graphics, POSIX and compilers. The testing of physical properties of hardware is outside the scope of this interpretation, but may be covered elsewhere. Evaluation of systems and products, as in IT&T Security and Software Quality evaluation (ISO/IEC 9126), is also not included in the scope of this interpretation. Safety-critical software and general application software testing are also not included in this edition.

0.6 Specific text is given in this interpretation for conformance testing. However, the general interpretations given in this Technical Report are applicable to all types of objective testing, including measuring some objective aspects of performance (e.g. as in compiler testing for some programming languages) and types of testing that are particular to a single area within the IT&T field. Analysis by the test operator in order to produce the final result for a test case, in accordance with procedures that lead to objective results, is included in this interpretation.

NOTES

  1. Normally, each individual test case in a test suite (set of test cases) will be designed to yield a test verdict, that is a statement of pass, fail or inconclusive.

  2. Conformance testing involves testing the implementation against the conformance requirements specified in one or more standards (or other normative specifications). The standards against which implementations are tested for conformance will often be International Standards, although they may be ITU-T Recommendations, regional or national standards, or even a manufacturer's specification when the manufacturer is seeking independent confirmation that the implementation conforms.

  3. The test cases to be used in conformance testing may also be standardized, but (in the fields of software and protocol testing) are usually distinct from the standards which specify the requirements to which implementations are supposed to conform.

  4. Each test verdict should be made with respect to the purpose of the test case and the requirements of the relevant standard(s). Optionally, a particular test suite may specify various classes of pass, fail or inconclusive test verdict (e.g. fail class 1: severe non-conformance; fail class 2: invalid behaviour but satisfied the test purpose), but this does not alter the general points about test verdicts.
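The verdict scheme described in Notes 1 and 4 can be modelled as a small data structure. The following is a minimal sketch; the names (`Verdict`, `TestResult`, `summarize`) are illustrative and not part of the Technical Report.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Verdict(Enum):
    """The three verdict values of Note 1."""
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"


@dataclass
class TestResult:
    """Outcome of one test case. The optional verdict class (Note 4)
    refines the base verdict but does not replace it."""
    test_case_id: str
    verdict: Verdict
    verdict_class: Optional[str] = None  # e.g. "fail class 1: severe non-conformance"


def summarize(results):
    """Tally base verdicts across a test suite run."""
    counts = {v: 0 for v in Verdict}
    for result in results:
        counts[result.verdict] += 1
    return counts
```

A verdict class such as "fail class 1" is carried alongside the base verdict, so summaries over pass/fail/inconclusive remain unaffected by any finer classification.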

Requirements in ISO/IEC Guide 25, IT&T Interpretations and Definitions, Guidance and Examples
1 Scope  
  No IT&T specific interpretation is required for clause 1 of ISO/IEC Guide 25. See clause 0 for the scope of this Technical Report. Note that this Technical Report applies to testing laboratories but not to calibration laboratories. The relevant laboratories, however, include validation laboratories that offer validation services for means of testing and/or test tools to be used by testing laboratories; in this case, the item to be validated is to be regarded as a system or implementation under test.
2 References  
  No IT&T specific interpretation is required for the references of ISO/IEC Guide 25.  
  The following standards contain provisions which, through reference in this text, constitute provisions of this Technical Report. At the time of publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based on this Technical Report are encouraged to investigate the possibility of applying the most recent editions of the standards indicated below. Members of IEC and ISO maintain registers of currently valid International Standards.  
  ISO/IEC Guide 25: 1990, General requirements for the competence of calibration and testing laboratories.  
  ISO 9000-3: 1991, Quality management and quality assurance standards - Part 3: Guidelines for the application of ISO 9001 to the development, supply and maintenance of software.  
  These are the only normative references required by this interpretation. Informative references used in this Technical Report are given in Annex B.  
3 Definitions  
  No IT&T specific interpretation is required for the definitions of ISO/IEC Guide 25. However, ISO/IEC Guide 25 subclause 3.7 is referenced in 10.A.1 and subclause 3.15 is referenced in 4.B.  
  Additional definitions are required in this Technical Report; for these, see Annex A. As far as possible standard definitions are used. Even where this is not possible, the intent is not to standardize new definitions but rather to explain the meaning of terms as used in this Technical Report.
  The distinction between a means of testing (MOT) and a test tool is important in this interpretation. The complexity of MOT and test tools varies from one area of software testing to another. For example, in OSI and telecommunications protocol testing, each MOT is a very complex hardware and software system which plays a major part in the testing, whereas in compiler testing, in addition to the test suite (of programs) itself, only a few ancillary software test tools are used.
  For the purposes of this Technical Report, a means of testing is hardware and/or software, and the procedures for its use, including the executable test suite itself, used to carry out the testing required. In an accredited testing service, the MOT is run under the control of the testing laboratory.  
  For the purposes of this Technical Report, a test tool is hardware and/or software, excluding the test suite itself, used to carry out or assist in carrying out the testing required. It may be concerned with running the test cases, analysing the results, or both. Those concerned with running the test cases may also involve parameterization, selection or even generation of the test cases.  
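The distinction drawn above between a means of testing and a test tool can be expressed as a simple data model. This is a sketch only; the class and field names are illustrative, not defined by the Technical Report.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestTool:
    """Hardware and/or software that carries out or assists the testing,
    excluding the test suite itself: running test cases, analysing
    results, or both (possibly including parameterization, selection
    or generation of test cases)."""
    name: str
    version: str
    role: str  # e.g. "result analyser", "test case selector"


@dataclass
class MeansOfTesting:
    """Everything used to carry out the required testing, including
    the executable test suite itself and the procedures for its use.
    In an accredited service the MOT runs under the laboratory's control."""
    name: str
    version: str
    executable_test_suite: List[str]  # test case identifiers
    tools: List[TestTool] = field(default_factory=list)
    procedures: List[str] = field(default_factory=list)
```

In this model the test suite belongs to the MOT but never to a test tool, mirroring the definitions: a compiler-testing MOT might hold a large suite and only a few ancillary tools, whereas a protocol-testing MOT is a complex system in its own right.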
4 Organization and management  
4.1 No IT&T specific interpretation is required for subclause 4.1 of ISO/IEC Guide 25.  
4.2 ISO/IEC Guide 25 subclause 4.2 is interpreted in 4.A and 4.B.  
4.A Use of commercial reference implementations  
  In ISO/IEC Guide 25, subclause 4.2 requires the following:  
  'The laboratory shall...  
 
  1. have arrangements to ensure its personnel are free from any commercial, financial and other pressures which might adversely affect the quality of their work;

  2. be organized in such a way that confidence in its independence of judgement and integrity is maintained at all times;...'

 
  In IT&T, these requirements shall be interpreted as follows.  
4.A.1 If a commercial implementation (not designed to be a reference implementation) is used by a testing laboratory or validation laboratory as a reference implementation for the purposes of MOT validation within the laboratory, then the adequacy of the technical coverage with respect to the other implementations that are available shall be kept under review by the laboratory, in consultation with the accreditation body. If it is agreed that the technical coverage has become inadequate compared to other implementations, then the commercial implementation should be replaced, within a time period to be agreed with the accreditation body, by one of the following, as appropriate:
  1. another implementation with better coverage;

  2. a set of implementations from different suppliers; or

  3. an implementation which is designed to be a reference implementation.

See 9.B for a description of the use of reference implementations in MOT validation.

It is recognised that for some IT&T standards there may be no alternative to using a normal commercial implementation as a reference implementation against which to validate the MOT. In such cases, the publication of the identity of the reference implementation (in order to be open about the nature of the MOT validation conducted) may inadvertently give commercial advantage to the supplier.

The decision to use a normal commercial implementation as a reference implementation, and the choice of which commercial implementation to use in this way, are decisions to be made by the laboratory.

In some cases, it may be necessary to use multiple reference implementations for MOT validation, in order to ensure that adequate coverage of the MOT behaviour is checked. This arises because a given commercial implementation may only support a subset (or "profile") of the relevant specification(s). This may be acceptable as a temporary solution, particularly if the market is primarily interested in that subset, but is inadequate as a longer term solution if the testing service is to cover the specification(s) in full.

  If a set of implementations is chosen, the set shall be chosen to give better technical coverage of the relevant specification(s) and not for commercial reasons.  
4.B Proficiency testing  
  In ISO/IEC Guide 25, subclause 4.2 j) requires the following:  
  'The laboratory shall, where appropriate, participate in interlaboratory comparisons and proficiency testing programmes.'  
  The definition of proficiency testing given in subclause 3.15 of ISO/IEC Guide 25 is as follows:  
  'Determination of the laboratory ... testing performance by means of interlaboratory comparisons.'  
  In ISO/IEC Guide 25, subclause 5.6 requires the following:  
  'In addition to periodic audits the laboratory shall ensure the quality of results ... by implementing checks. These checks ... shall include, as appropriate, but not limited to:  
  ...  

  1. participation in proficiency testing or other interlaboratory comparisons;'

 
  In IT&T, these requirements shall be interpreted as follows.  
4.B.1 If a laboratory claims to offer a harmonised testing or validation service, it shall provide evidence of its participation in the relevant inter-laboratory comparisons to ensure that the declared harmonisation is achieved and maintained. There may be numerous inter-laboratory comparison schemes organised for IT&T. IT&T Agreement Groups have been and are being formed to operate mutual recognition agreements whereby the group of testing laboratories establish the means to recognise the mutual equivalence of their corresponding testing services. Such Agreement Groups provide one formalized way of participating in inter-laboratory comparisons. They may require that testing service harmonization and demonstrations of equivalence are carried out, and that all participating testing laboratories become accredited for the services they offer (within a reasonable period of time). Agreement Groups may also provide inter-laboratory comparison schemes for MOT validation services. If it is not practical or economic for the laboratory to participate in inter-laboratory comparisons, then the laboratory shall not claim that the service is harmonised.
A laboratory may decide not to join an Agreement Group and therefore not to claim to provide a harmonised testing or validation service. It may nevertheless be required by the accreditation body to participate in some informal inter-laboratory comparison exercises, in order to overcome any doubts there may be about the objectivity of its test or validation results.
5 Quality system, audit and review  
5.1 No IT&T specific interpretation is required for this subclause.  
5.2 ISO/IEC Guide 25 subclause 5.2 is interpreted in 5.A and 5.B.  
5.3 No IT&T specific interpretation is required for this subclause.  
5.4 No IT&T specific interpretation is required for this subclause.  
5.5 No IT&T specific interpretation is required for this subclause.  
5.6 ISO/IEC Guide 25 subclause 5.6 is interpreted in 4.B.  
5.A Maintenance procedures  
  In ISO/IEC Guide 25, subclause 5.2 requires the following:  
  'The quality manual and related quality documentation shall also  
  ...  

  1. reference procedures for ... verification and maintenance of equipment;

  ...'  
  In ISO/IEC Guide 25, subclause 8.2 requires the following:  
  'All equipment shall be properly maintained. ...'  
  In ISO/IEC Guide 25, subclause 9.1 requires the following:  
  'All ... testing equipment having an effect on the accuracy or validity of ... tests shall be ... verified before being put into service. The laboratory shall have an established programme for the ... verification of its ... test equipment.'  
  In IT&T, these requirements shall be interpreted as follows.  
5.A.1 A testing laboratory shall have procedures defining the checking to be performed whenever major or minor changes are made to the MOT or other test tools, in order to ensure that harmonisation is maintained as appropriate with other testing laboratories and that correctness is maintained with respect to the relevant standard(s) or specification(s).  
5.A.2 A validation laboratory, or a testing laboratory which conducts its own MOT or test tool validations, shall have procedures defining the checking to be performed whenever major or minor changes are made to the reference implementation or other means of validation, in order to ensure that harmonisation is maintained as appropriate with other validation laboratories and that correctness is maintained with respect to the relevant standard(s) or specification(s).  
5.B Documentation of MOT and test tool validation  
  In ISO/IEC Guide 25, subclause 5.2 requires the following:  
  'The quality manual and related quality documentation shall also contain:  
  ...  

  1. reference to procedures for ... verification and maintenance of equipment;

  ...'  
  In ISO/IEC Guide 25, subclause 10.1 requires the following:  
  'The laboratory shall have documented instructions on the use and operation of all relevant equipment ...'  
  In ISO/IEC Guide 25, subclause 9.2 requires the following:  
  'The overall programme of ... validation of equipment shall be designed and operated so as to ensure that, wherever applicable, measurements made by the laboratory are traceable to national standards of measurement where available.'  
  In the context of MOT and test tool validation, these requirements shall be interpreted as follows.  
5.B.1 The procedures for carrying out MOT and test tool validations shall be documented by the laboratory. If an accredited external validation service is used, then the procedures merely need to refer to the use of that service. If a non-accredited external validation service is used, then the laboratory should provide adequate procedures for the selection and monitoring of the results of the service (see 15.A.2). If the laboratory carries out its own validations, then the procedures should include those for selecting which test cases to run.
5.B.2 If, for a given MOT or test tool, there is no suitable validation service available outside the testing laboratory to which accreditation is applicable, and there is no suitable reference implementation that could be used by the testing laboratory to validate the MOT or test tool, then the testing laboratory shall define and document the procedures and methods that it uses to check on the correct operation of the MOT or test tool, and provide evidence that these procedures and methods are applied whenever the MOT or test tool is modified. The suitability of an external validation service may depend not only on its relevance to the given MOT or test tool, but also on the cost-effectiveness of using the service compared to alternative means of validation that may be available and acceptable.
The locally defined procedures could involve arbitrarily complex arrangements of other hardware and software tools. They could also involve some checking of the MOT or test tool by one or more other testing laboratories. ISO/IEC Guide 25, subclause 9.3, cites inter-laboratory comparison as one of the means of providing satisfactory evidence of correlation of results.
  Such checking is required to fulfil the requirements in ISO/IEC Guide 25 subclauses 8.2 and 9. See 9.A and 9.B on the validation of the MOT and test tools.
6 Personnel  
6.1 ISO/IEC Guide 25 subclause 6.1 is interpreted in 6.A.  
6.2 ISO/IEC Guide 25 subclause 6.2 is interpreted in 6.A.  
6.3 No IT&T specific interpretation is required for this subclause.  
6.A Maintaining Competence  
  In ISO/IEC Guide 25, subclause 6.1 requires the following:  
  'The testing laboratory shall have sufficient personnel having the necessary education, training, technical knowledge and experience for their assigned functions.'  
  In ISO/IEC Guide 25, subclause 6.2 requires the following:  
  'The testing laboratory shall ensure that the training of its personnel is kept up-to-date.'  
  In IT&T, these requirements shall be interpreted as follows.  
6.A.1 The testing laboratory shall have a procedure to define how to maintain competence of its test operators, particularly in the absence of clients for a given testing or validation service. It is recommended that in order to maintain their competence in operating a particular testing or validation service, the relevant test operators should be required to conduct at least one test campaign or validation exercise per year, going through all the steps in the process, but not necessarily running all the test cases. Thus, in the absence of clients for a given testing or validation service, special training exercises should be arranged for this purpose.
7 Accommodation and environment  
7.1 No IT&T specific interpretation is required for this subclause.  
7.2 ISO/IEC Guide 25 subclause 7.2 is interpreted in 7.A, 7.B, 7.C, 7.D, 11.A.2 and 12.A.  
7.3 No IT&T specific interpretation is required for this subclause.  
7.4 ISO/IEC Guide 25 subclause 7.4 is interpreted in 7.A.  
7.5 No IT&T specific interpretation is required for this subclause.  
7.6 ISO/IEC Guide 25 subclause 7.6 is interpreted in 11.A.  
7.A Separation of data partitions  
  In ISO/IEC Guide 25, subclause 7.4 requires the following:

'There shall be effective separation between neighbouring areas when the activities therein are incompatible.'

Although subclause 7.4 normally refers to separation between different testing services, in some IT&T testing or validation services there could also be a need for clear separation of the software components used within a single testing or validation service.
  In ISO/IEC Guide 25, subclause 7.2 requires the following:  
  'The environment in which these activities are undertaken shall not invalidate the results.'  
  In ISO/IEC Guide 25, subclause 11.1 requires the following:  
  'The laboratory shall have a documented system for uniquely identifying the items to be ... tested, to ensure that there can be no confusion regarding the identity of such items at any time.'  
  In IT&T, these requirements shall be interpreted as follows.  
7.A.1 There shall be a clear separation of test tools, the implementation under test (IUT) and the rest of the environment, even when part of that environment is within the same computer system as the IUT. This is particularly relevant when there is not a clear physical separation between test tools, IUT and the rest of the environment, as is the case in compiler, graphics, portable operating system interfaces (POSIX) and application portability testing. This is also the case when using the Distributed and Coordinated test methods in Open Systems Interconnection (OSI) or telecommunications protocol testing.
7.A.2 The testing laboratory and client shall agree in writing what constitutes the IUT and what constitutes the environment within the system under test (SUT).  
7.A.3 Insofar as the IUT is integrated in a system also provided by the client, the laboratory shall ensure that there is no interference from other activities in the environment part of the SUT which could affect the test or validation results, and shall instruct the client to take the necessary actions to achieve this. As far as is practical, it should be ensured that the SUT is not supporting any activities while the tests or validation are being conducted, other than those required for the testing or validation.
For example, in the case of POSIX testing, it has to be ensured that the SUT is not running any task which could interfere with the correct installation, configuration or execution of the test suite, and that no other user is logged onto the system. A 'clean', empty and secure directory should be set up for the test campaign.
7.B Identification of environment  
  In ISO/IEC Guide 25, subclause 7.2 requires the following:  
  'The environment in which these activities are undertaken shall not invalidate the results or adversely affect the required accuracy of measurement.'  
  In ISO/IEC Guide 25, subclause 11.1 requires the following:  
  'The laboratory shall have a documented system for uniquely identifying the items to be ... tested, to ensure that there can be no confusion regarding the identity of such items at any time.'  
  In IT&T, these requirements shall be interpreted as follows.  
7.B.1 The laboratory shall obtain from the client and record, as far as possible, the correct and complete identification of the test environment within the SUT (or the validation environment within the MOT). The client shall take responsibility for this identification. In addition, the laboratory shall record the values of any parameters that are used to control the test environment within the SUT during testing (or to control the validation environment within the MOT during validation). In the context of IT&T testing and validation, the test (or validation) environment means the software and hardware operating environment in which testing or validation takes place. Part of this may be within the SUT (or MOT) and part may be outside the SUT (or MOT). The record of the software test (or validation) environment commences with its definition. Depending on the type of testing or validation to be undertaken, the test (or validation) environment within the SUT (or MOT) may consist of a number of parameters (e.g. processor, disk drive, operating system, compiler, compiler switches, linker, run-time system — this list is not comprehensive). All such parameters should be precisely defined, including as a minimum the name, model and version number. In some specific cases, it is known that even individual components need to be identified.
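The identification required by 7.B.1 (each environment component recorded with at least a name, model and version, plus any control parameters) can be captured in a simple record. A minimal sketch with illustrative names:

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass(frozen=True)
class EnvironmentComponent:
    """Minimum identification per 7.B.1: name, model and version."""
    name: str
    model: str
    version: str


@dataclass
class EnvironmentRecord:
    """Record of the test (or validation) environment within the SUT
    (or MOT), plus parameters used to control it during the campaign."""
    components: Dict[str, EnvironmentComponent] = field(default_factory=dict)
    control_parameters: Dict[str, str] = field(default_factory=dict)

    def add(self, kind, name, model, version):
        """Register one component, keyed by its kind (e.g. 'compiler')."""
        self.components[kind] = EnvironmentComponent(name, model, version)

    def is_complete(self, required_kinds):
        """True if every kind of component required for this type of
        testing has been identified."""
        return all(kind in self.components for kind in required_kinds)
```

The list of required kinds (processor, operating system, compiler, linker, run-time system, and so on) depends on the type of testing, so it is supplied by the laboratory's procedure rather than fixed here.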
7.B.2 The test (or validation) environment outside the SUT (or MOT) shall be sufficiently recorded and controlled by the laboratory to ensure, as far as possible, correct and complete identification of the test (or validation) environment outside the SUT (or MOT) at any time.  
7.C Remote testing over a network  
  In ISO/IEC Guide 25, subclause 7.2 requires the following:  
  'The environment in which these activities are undertaken shall not invalidate the results or adversely affect the required accuracy of measurement. Particular care shall be taken when such activities are undertaken at sites other than the permanent laboratory premises.'  
  In IT&T, these requirements shall be interpreted as follows.  
7.C.1 For the purposes of this Technical Report, remote testing is testing which is conducted over a network, with the SUT connected to but separate from the network, and the network being part of the test environment. Note that the interpretation in 7.C.2 applies whether or not the SUT is in a different physical location from the MOT. It is also possible that both the main part of the MOT and the SUT are physically remote from the test operators. It is sometimes impractical for testing laboratory staff to be physically present with each separate part of the MOT and with the SUT, but this need not undermine the trustworthiness of the test results.
7.C.2 In the context of remote testing over a network, the laboratory shall use test methods and procedures that ensure that the behaviour of the network does not invalidate the test or validation results. Special consideration needs to be given to the requirements on environmental control when testing remotely over a network. In particular, any environmental conditions which might affect the correct operation of the network should be noted. If the requirements are not met, then no testing or validation should take place.
    Prior to running conformance or performance tests, a number of specified tests should be executed in order to establish that the network is operating correctly. If a failure is encountered, diagnostic codes should be checked to determine whether the failure was in the network or in the IUT. If a network error is detected, then checks should be carried out to ensure that no previously run tests have been affected by the error.
    Problems may be encountered during remote testing if the network is being heavily used. In such situations, it may be necessary to reschedule testing or validation to a time when usage of the network is not so high.
    If, during test execution, interference is caused by network behaviour (e.g. a network generated reset or disconnect) then the verdict should be considered to be inconclusive and the test case should be rerun.
    In order for the laboratory to be assured that the IUT is the one stated by the client, the laboratory will normally need to rely on contractual commitment. In some special cases, it may be appropriate for the laboratory to require the use of software authentication techniques. If all the observations of the SUT behaviour required for the purpose of determining test results can be made remotely from the SUT, and the SUT is at a separate physical location from at least part of the MOT, then it should not be necessary for a test operator to be present at the SUT location as well as the main MOT location.
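The handling of network behaviour described in 7.C.2 (pre-checks before testing, interference treated as inconclusive, affected test cases rerun) might be orchestrated as below. This is a sketch only: `execute` and the network check callables are assumed to be supplied by the laboratory, and the verdict strings are illustrative.

```python
# Sentinel result meaning the network interfered with the run
# (e.g. a network-generated reset or disconnect).
NETWORK_ERROR = "network_error"


def run_campaign(test_cases, execute, network_checks, max_reruns=2):
    """Run a remote test campaign per 7.C.2.

    - Refuse to test at all if any network pre-check fails.
    - Rerun a test case whose run was disturbed by the network.
    - If interference persists after the allowed reruns, record the
      verdict as inconclusive rather than pass or fail.
    """
    if not all(check() for check in network_checks):
        raise RuntimeError("network pre-checks failed; no testing should take place")

    verdicts = {}
    for tc in test_cases:
        verdict = NETWORK_ERROR
        attempts = 0
        while verdict == NETWORK_ERROR and attempts <= max_reruns:
            verdict = execute(tc)
            attempts += 1
        verdicts[tc] = "inconclusive" if verdict == NETWORK_ERROR else verdict
    return verdicts
```

A real procedure would also log diagnostic codes for each network error and re-examine previously run test cases once a network fault is detected, as the clause requires.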
7.D Checking the testing or validation environment  
  In ISO/IEC Guide 25, subclause 7.2 requires the following:  
  'The environment in which these activities are undertaken shall not invalidate the results or adversely affect the required accuracy of measurement. Particular care shall be taken when such activities are undertaken at sites other than the permanent laboratory premises.'  
  In IT&T, these requirements shall be interpreted as follows.  
7.D.1 The laboratory shall make it clear to the client that it is their responsibility to ensure that all equipment supplied by them is available and operating correctly. The laboratory shall ensure that pre-test or pre-validation checks are performed on the equipment, to establish its suitability for purpose. This is particularly important when testing at a site other than the permanent laboratory premises, because the client may be asked to provide equipment on which test software and/or test suites are to be mounted.
7.D.2 A testing laboratory shall ensure that the correct version of each MOT and each test tool is used and that they have not been modified in any way that might lead to incorrect test results. This is particularly important when the client has received a copy of the MOT or test tool prior to the date on which the testing laboratory begins the testing. The best way to be sure that the MOT or test tool is the correct version and is unmodified is to use a new copy of the MOT or test tool, brought by the testing laboratory. If it is impractical to use a new copy of the MOT or test tool (e.g. because of the time needed to configure it for use with the SUT) then an integrity check will need to be performed on the MOT or test tool, either by using some checking tool or by comparing the MOT or test tool with a reference copy brought by the testing laboratory.
7.D.3 A validation laboratory, or a testing laboratory which conducts its own MOT or test tool validations, shall ensure that the correct version of each reference implementation or other means of validation is used and that they have not been modified in any way that might lead to incorrect validation results.  
7.D.4 For all software testing and validation, it shall be ensured that any files containing old results or old test programs on the SUT cannot be confused with the current test programs and test or validation results.  
8 Equipment and reference materials  
8.1 ISO/IEC Guide 25 subclause 8.1 is interpreted in 9.B.1.  
8.2 ISO/IEC Guide 25 subclause 8.2 is interpreted in 5.A, 5.B.2, 8.B, 8.C, 9.A.1 and 9.B.1.4.  
8.3 No IT&T specific interpretation is required for this subclause.  
8.4 ISO/IEC Guide 25 subclause 8.4 is interpreted in 8.A, 8.C, 8.D, and 9.A.2.  
8.A Means of testing and test tool validation records  
  In ISO/IEC Guide 25, subclause 8.4 requires all equipment records to include 'details of maintenance carried out...' and 'history of any... modification or repair'.
  In IT&T, these requirements shall be interpreted as follows.  
8.A.1 A testing laboratory shall maintain a record of all MOT validations and re-validations carried out. See 9.A and 9.B for a description of the MOT validation concept and the use of reference implementations in MOT validation.
8.A.2 If the MOT validations are carried out by the testing laboratory itself, its records shall include the reasons for the test cases being run, the date, environmental information if appropriate, and a summary of the results obtained plus the detail of any discrepancies from the expected results. When MOT validation is carried out by the testing laboratory itself using a reference implementation, these requirements also mean that the testing laboratory shall document fully the expected results (i.e. previously obtained results) from using the full conformance test suite to test the nominated reference implementation. When an implementation is first being considered as a possible reference implementation, there will need to be a period in which it is tested using the full conformance test suite and the results obtained are scrutinized to check their validity. Once they have been checked in this way, those results (possibly as amended as a consequence of the scrutiny) become the 'expected results' to be looked for on a subsequent validation or re-validation of the MOT.

Records of MOT validation should include clarification of the coverage of the validation with respect to the test suite being used.
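The comparison of obtained verdicts against the documented expected results can be sketched as below. This is a hypothetical illustration (the record format and verdict labels are invented for the example): it summarizes a validation run as the test cases that matched plus the detail of every discrepancy, as the record-keeping requirement above demands.

```python
# Illustrative sketch: compare an MOT validation run against the documented
# 'expected results' for the nominated reference implementation.
def compare_with_expected(expected: dict[str, str],
                          obtained: dict[str, str]) -> dict[str, list[str]]:
    """expected/obtained map test case id -> verdict. Returns the matching
    cases plus a detailed list of discrepancies for scrutiny."""
    discrepancies = [
        f"{case}: expected {expected[case]}, obtained {obtained.get(case, 'NOT RUN')}"
        for case in sorted(expected)
        if obtained.get(case) != expected[case]
    ]
    matched = [c for c in expected if obtained.get(c) == expected[c]]
    return {"matched": matched, "discrepancies": discrepancies}
```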

8.A.3 If the MOT validations are carried out by one or more external organisations, the testing laboratory shall keep in its records the validation reports, each including the identity of the organisation(s) carrying out the validation, date(s), and results of the validation. It is important for traceability that the validation of each MOT or test tool results in a validation report, provided to the MOT supplier to be copied to all testing laboratories that use that MOT or test tool.
8.B Procedures for handling errors in the MOT and test tools  
  In ISO/IEC Guide 25, subclause 8.2 requires the following:
'Any item of equipment which has been subjected to overloading or mishandling, or which gives suspect results, or has been shown by verification or otherwise to be defective, shall be taken out of service... until it has been repaired and shown by... verification or test to perform satisfactorily.'
In order to interpret these requirements in the IT&T field, it needs to be realised that an 'item of equipment' for a testing laboratory does not need to be a complete MOT. The interpretation of 'item of equipment' depends on the extent to which the location of the fault can be isolated from the rest of the MOT.
  In IT&T, these requirements shall be interpreted as follows.  
8.B.1 If a defect in an item of equipment is such that it impacts only a few specific test cases, then those test cases shall be taken out of service. These test cases shall not be put into service again unless revised procedures are introduced for analysing the results of those test cases to overcome the defect. In particular, this may well be the case for defects in results analysis tools (i.e. test tools which analyse the recorded observations made during testing in order to produce test or validation reports).

If a testing service is such that there are specified test cases that are run, and the discrepancies seem to be isolated to particular test cases, then only those test cases need be suspended from use. If, however, the concept of test cases does not apply or if the discrepancies indicate a general problem with the MOT or test tool, then the MOT or test tool needs to be suspended.

8.B.2 Thus, the testing laboratory shall document and use adequate procedures, for use whenever any MOT or test tool is suspected or found to contain errors which make it defective or unfit for use. These procedures shall include establishing that there is a genuine error, reporting the error to the appropriate maintenance authority, withdrawing the MOT, test tool or test case(s) from service, as appropriate, correcting the errors, and then re-validating the MOT or test tool, as appropriate.  
8.B.3 Similarly, a validation laboratory shall document and use adequate procedures, for use whenever any reference implementation or other means of validation is suspected or found to contain errors which make it defective or unfit for use. These procedures shall include establishing that there is a genuine error, reporting the error to the appropriate maintenance authority, withdrawing the reference implementation or other means of validation from service, as appropriate, correcting the errors, and then re-checking the reference implementation or other means of validation, as appropriate. It is important to recognise the difference between errors in a reference implementation which make it unfit for use, and deliberate error generation capabilities built into the reference implementation to make its use in MOT validation more effective. It is only the former that need to be removed.
8.C MOT and test tool maintenance  
  In ISO/IEC Guide 25, subclause 8.2 requires the following:  
  'All equipment shall be properly maintained. Maintenance procedures shall be documented.'
  In ISO/IEC Guide 25, subclause 8.4 requires that equipment records shall include the following:
  'the manufacturer's name...'
  In IT&T, these requirements shall be interpreted as follows.  
8.C.1 A testing laboratory shall keep a record of the design and maintenance authority for each MOT, test tool and executable test suite. For some MOT, the executable test suite may be maintained by a different organisation from the test tool on which it runs. It is important that test case errors can be reported directly to the appropriate maintenance authority.
8.C.2 A validation laboratory, or a testing laboratory which conducts its own MOT or test tool validations, shall keep a record of the design and maintenance authority for each reference implementation and each tool used in other means of validation.  
8.D Identification of equipment  
  In ISO/IEC Guide 25, subclause 8.4 specifies requirements for equipment records, including identification. In the IT&T context, the use of the term 'equipment' in this clause refers only to 'test equipment'. The 'equipment under test' is referred to as the 'test item' in ISO/IEC Guide 25 and is the subject of clause 11 rather than 8.
In each equipment record for an MOT, the MOT should be identified by its name, version number, and supplier; and a reference should be given to the latest validation report, if relevant, for that MOT.
9 Validation and traceability (Measurement traceability and calibration)  
9.1 ISO/IEC Guide 25 subclause 9.1 is interpreted in 5.A, 9.A.2, 5.B.2 and 9.F.1.  
9.2 ISO/IEC Guide 25 subclause 9.2 is interpreted in 5.B and 9.D.1.  
9.3 ISO/IEC Guide 25 subclause 9.3 is interpreted in 9.A.2 and 9.B.1, and referenced in 5.B.2.  
9.4 No IT&T specific interpretation is required for this subclause.  
9.5 No IT&T specific interpretation is required for this subclause.  
9.6 ISO/IEC Guide 25 subclause 9.6 is interpreted in 9.A.2, 9.B.1 and 9.E.  
9.7 No IT&T specific interpretation is required for this subclause.  
9.A The validation concept  
9.A.1 In IT&T, the concept of MOT validation is used to meet the ISO/IEC Guide 25 requirements for verification of equipment. Since measurement traceability and calibration are not generally directly relevant to software and protocol testing, the title of clause 9 in this interpretation has been changed to 'Validation and traceability'. However, one exception is given in 9.F.
  In ISO/IEC Guide 25, subclause 8.2 requires the following:

'All equipment shall be properly maintained.... '


In ISO/IEC Guide 25, subclause 8.2 also requires the following:


'Any item of equipment which has been subject to... mishandling, or which gives suspect results, or has been shown by verification or otherwise to be defective shall be taken out of service, ... until it has been repaired and shown by... verification or test to perform satisfactorily. The laboratory shall examine the effect of this defect on the previous... tests.'

The general accreditation requirements (as specified by ISO/IEC Guide 25, clause 9) include requirements concerning calibration and traceability which are applicable to the use of measuring equipment in testing. These concepts are not directly applicable in this interpretation to software and protocol testing. This is because the relevant MOT and test tools are not what is meant by 'measuring equipment', because they observe and control the operation of the tests, rather than making any physical measurements. The one obvious exception is in the case of Physical layer communications testing, which is mostly outside the scope of this interpretation. Furthermore, there are no relevant 'primary standards' of measurement to which traceability, via a chain of calibrations, can refer.
  Nevertheless, the requirements from ISO/IEC Guide 25 quoted here are applicable to software and protocol testing. Taken together, they require some means of checking the validity of the MOT and test tools both when they are new and whenever they become suspect (e.g. whenever they have undergone a major update, or whenever errors are detected in the results produced from their use). Thus, there is a need for the idea of checking the MOT and test tools against some appropriate reference.
  In ISO/IEC Guide 25, the list in subclause 8.4 requires that test equipment records shall include the following:
  1. details of maintenance carried out to date and planned for the future;

  2. history of any damage, modification or repair.'

 
  In ISO/IEC Guide 25, subclause 9.1 requires the following:
  'All... testing equipment having an effect on the accuracy or validity of... tests shall be... verified before being put into service. The laboratory shall have an established programme for the... verification of its... test equipment.'
  In this context, 'verification of test equipment' is to be interpreted to mean 'validation of the MOT and test tools'.
  In ISO/IEC Guide 25, subclause 9.3 requires the following:  
  'Where traceability... is not applicable, the laboratory shall provide satisfactory evidence of correlation of results, for example by participation in a suitable programme of interlaboratory comparisons...'
  In ISO/IEC Guide 25, subclause 9.6 requires the following:  
  'Where relevant, ... testing equipment shall be subjected to in-service checks between... verifications.'
  In ISO/IEC Guide 25, subclause 10.6 requires the following:  
  'Calculation and data transfers shall be subject to appropriate checks.'
  In ISO/IEC Guide 25, subclause 10.7 requires the following:  
  'Where computers or automated equipment are used for the capture, processing, manipulation, recording, reporting, storage or retrieval of... test data, the laboratory shall ensure that:
  ...  
 
  1. computer software is documented and adequate for use;

  2. procedures are established and implemented for protecting the integrity of data;...

  3. computer and automated equipment is maintained to ensure proper functioning and provided with the environmental and operating conditions necessary to maintain the integrity of... test data;

 
  ...  
  In IT&T, these requirements shall be interpreted as follows.  
9.A.2 For the purposes of this Technical Report, validation of an MOT or test tool is the process of verifying as far as possible that the MOT or test tool will behave properly and produce results that are consistent with the specifications of the relevant test suites, with any relevant standards and, if applicable, with a previously validated version of the MOT or test tool. This concept of validation can be applied to an MOT, to a test tool consisting of test software mounted on the SUT, to test tools that are separate from the SUT, and in some cases to the relationship between the specification and the realization of a test suite. If validation is carried out by checking the behaviour of the MOT or test tool against some kind of 'reference implementation', then it has some similarity to calibration, with the contrasting concept of 'reference implementation' used instead of the concept of a 'reference material'. If it is carried out by checking that the particular version of the MOT or test tool is derived in a reliable manner from a previously validated or 'master' version, then it has some similarity to checking traceability, with the 'master' version being a substitute for the 'primary standard'. Thus, whenever the requirements call for re-calibration, this should be interpreted as requiring re-validation to take place.
The term 'validation' in this Technical Report is not the same as 'validation' in the phrase 'compiler validation'. The only aspect of 'compiler validation' referred to in this Technical Report is conformance testing of compilers, which is referred to as 'compiler testing'.
See 8.A for requirements concerning the keeping of records of MOT and test tool validations.
9.B MOT and test tool validation  
    In general, there can be a variety of different methods of performing MOT and test tool validation. For example, some may involve testing the MOT or test tool in some way, and others may involve analysing the source code of the test software in some way. Accreditation bodies should not impose a particular method of MOT and test tool validation. However, there is only one method of MOT validation which is sufficiently widely used and sufficiently well understood for specific interpretation and guidance to be given concerning its use. This method involves the testing of the MOT by running it against one or more known implementations (the reference implementations) and comparing the results with those expected.
9.B.1 Use of reference implementations  
  For the purposes of this Technical Report, a reference implementation is an implementation of one or more standards or specifications, against which any MOT for those standards or specifications may be run for the purposes of validation of that MOT. It may be thought that a reference implementation should be derived faithfully from the relevant standard(s) or specification(s). The problem is that the state of the art of software testing is such that it is unclear what faithful derivation of a reference implementation from a standard or specification actually means. For example, there may be very many parameters in the standard, with many more variations possible than could be present in a single implementation. In such cases, it might be thought that an ideal reference implementation should be a configurable implementation that can be made to behave like a whole range of different implementations for different tests in the validation process. If so, however, it will be difficult to verify the 'traceability' back to the standard.
Several attempts have been made to produce reference implementations of compilers. For various reasons, none of these implementations can have the same status as, for instance, reference materials for hardness measurement. Hence, the use of such reference implementations for compiler testing is restricted to that of checking the test suite. In some cases, the reference implementation may be a 'model implementation' which is not representative of ordinary commercial implementations. It may perform tests on the test suite which are not feasible for ordinary commercial implementations; this is often an advantage but may also be a disadvantage if undue weight is given to such 'unrealistic' tests. Furthermore, it may not be feasible to upgrade such 'model implementations' to match the test suite exactly.
Lack of the means to validate an MOT is an unsatisfactory situation to maintain for any great length of time in an accredited testing laboratory; therefore, an approach is proposed which allows arrangements acceptable to the accreditation body to be used by testing laboratories until such time as there are available high quality means of MOT validation. What is proposed is that, if there is no suitable available external MOT validation service, testing laboratories should nominate a particular implementation as a reference implementation for the purpose of validating the relevant MOT. When higher quality means of MOT validation become available, then the testing laboratory should be required to use such means within a realistic timescale to be agreed with the accreditation body.
Improvements to the quality of MOT validation should not be considered to invalidate previous results of the testing service and should therefore not of themselves lead to a requirement for re-testing previously tested products.
Uncertainty frequently arises in respect of how many test cases are definitely working correctly. This arises because it is quite likely that not all test cases will be applicable to the reference implementation and some of those that are may produce inconclusive results. Methods and procedures should be used to minimise, as far as possible, this uncertainty about the status of test cases not applicable to the reference implementation, given the state of the art in the particular area; but it is unlikely that such uncertainty can be reliably estimated and hence it is not advisable to include an estimate of it on test reports.
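One way of making the status of test cases explicit in a validation record is a simple tally of applicability and inconclusive results, as sketched below. This is an invented illustration, not a procedure from the Technical Report; the verdict labels are assumptions.

```python
# Illustrative sketch: summarize what a validation run against a reference
# implementation actually checked, for inclusion in the validation record.
from collections import Counter

def coverage_summary(verdicts: dict[str, str]) -> dict[str, int]:
    """verdicts maps test case id -> one of 'PASS', 'FAIL', 'INCONCLUSIVE',
    'NOT APPLICABLE'. Returns counts so the record can state the coverage
    achieved with respect to the test suite."""
    counts = Counter(verdicts.values())
    applicable = len(verdicts) - counts["NOT APPLICABLE"]
    return {
        "total": len(verdicts),
        "applicable": applicable,
        "inconclusive": counts["INCONCLUSIVE"],
        "checked": applicable - counts["INCONCLUSIVE"],
    }
```

The gap between "total" and "checked" is the residual uncertainty discussed above: it can be documented, but not reliably estimated as a figure on a test report.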
  In ISO/IEC Guide 25, subclause 8.1 requires 'correct performance of... tests.'  
  Subclause 9.3 requires 'satisfactory evidence of correlation of results.'
  In ISO/IEC Guide 25, subclause 9.6 requires the following:  
  'Where relevant, reference standards and... testing equipment shall be subject to in-service checks between... verifications.'
  In ISO/IEC Guide 25, subclause 10.1 requires the following:  
  'The laboratory shall have documented instructions on the use and operation of all relevant equipment, ... where the absence... could jeopardize the... tests.'
  In ISO/IEC Guide 25, subclause 10.2 requires the following:  
  'The laboratory shall use appropriate methods and procedures... consistent with the accuracy required, and with any standard specifications relevant to the... tests concerned.'
  In IT&T, these requirements shall be interpreted as follows.  
9.B.1.1 A particular implementation may be used as a reference implementation only if its behaviour when tested by the relevant conformance test suite is repeatable and if the coverage of the conformance test cases that it is capable of exercising is appropriate for the range of implementations that may have to be tested by that conformance test suite. For the purposes of MOT validation, a reference implementation does not need to be totally error-free, nor does it need to support every single option of the standard, but it should provide reasonable coverage of the options in the standard and should behave in a repeatable and predictable manner. It is the behaviour of the MOT when tested against such a reference implementation that has to be derived faithfully from the relevant standard or specification. However, any errors found in a reference implementation with respect to the relevant standard or specification should be documented and due note taken of them when deciding what results are acceptable for MOT validation purposes.

In compiler testing, instrumented reference implementations have also been used to check the coverage of test suites.


See 4.A for requirements concerning the use of an ordinary commercial implementation as a reference implementation.

9.B.1.2 The 'satisfactory evidence of correlation of... results' requirement of ISO/IEC Guide 25, subclause 9.3, shall be interpreted as meaning that, when at least one suitable means of MOT validation becomes available, the relevant MOT shall be validated using such means within a period of time to be agreed with the accreditation body. The period allowed between a means of MOT validation becoming available, and its use for initial validation of each relevant MOT becoming a requirement, will depend on how difficult it is for testing laboratories to begin using this means of validating the relevant MOT. A typical period is expected to be 6 months.
9.B.1.3 In ISO/IEC Guide 25, subclause 9.1 requires the following:  
  'All... testing equipment... shall be... verified before being put into service.'
  In IT&T, these requirements shall be interpreted as follows.  
  If the method of MOT validation is to use a reference implementation, then initial validation of an MOT shall be made by testing the MOT against the reference implementation, using all the test cases from the complete conformance test suite that are applicable to the reference implementation. Results analysis tools (i.e. test tools which analyse the recorded observations made during testing in order to produce test or validation reports) do not need to be validated against a reference implementation as such, but rather against a conformance log produced as the result of running the relevant test suite against a reference implementation. Thus, the use of the full conformance test suite would be replaced by the use of a conformance log produced as a result of running the full conformance test suite.
9.B.1.4 In ISO/IEC Guide 25, subclause 8.2 requires the following:  
  'All equipment shall be properly maintained. ... Any item of the equipment... which gives suspect results ... shall be taken out of service ... until it has been repaired and shown by... verification or test to perform satisfactorily.'  
  In addition, ISO/IEC Guide 25, subclause 9.6 requires the following:  
  'Where relevant, ... testing equipment shall be subjected to in-service checks between ... verifications.'  
  In IT&T, these requirements shall be interpreted as follows.  
  Whenever there is any doubt about the correct operation of the MOT, the MOT shall be re-validated. When the method of MOT validation is to use a reference implementation, then the re-validation shall use an appropriate subset of the conformance test suite, selected in accordance with specified procedures of the laboratory. In addition, whenever any discrepancy is shown by the running of such a subset of the test suite or whenever any major change is made to the MOT or testing environment, then the MOT shall be validated against the reference implementation using the complete conformance test suite before any further testing of clients' systems takes place. The laboratory should specify the procedures to be used for re-validation following minor changes, which, if a reference implementation is to be used, should include procedures for subsetting the conformance test suite. The aim should be to select those test cases that are most likely to highlight the effects of the changes that have been made to the MOT or testing environment, or highlight the areas in which there is doubt about the operation of the MOT.

For results analysis tools, the use of a subset of the test suite would be replaced by the use of the conformance log produced as a result of running a subset of the test suite.

For compiler testing, a simple regression test against a validated compiler would suffice for re-validation following minor changes.
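The subsetting procedure for re-validation after a minor change can be sketched as follows. This is a hypothetical illustration only: it assumes the laboratory maintains a record of which MOT components each test case exercises, so that the test cases most likely to show the effects of a change can be selected. Real laboratories would derive such a mapping from their own MOT documentation.

```python
# Illustrative sketch: select a re-validation subset of the conformance test
# suite, given which MOT components each test case exercises and which
# components a minor change has touched.
def select_revalidation_subset(affects: dict[str, set[str]],
                               changed: set[str]) -> list[str]:
    """affects maps test case id -> set of MOT components it exercises.
    Returns, sorted, the test cases that exercise any changed component."""
    return sorted(case for case, comps in affects.items() if comps & changed)
```

Any discrepancy shown by running the selected subset would, per 9.B.1.4, trigger a full re-validation against the complete conformance test suite.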

9.C Validation of test software in the system under test  
  In ISO/IEC Guide 25, subclause 10.7 requires the following:  
  'Where computers or automated equipment are used... the laboratory shall ensure that

a) the requirements of this guide are complied with;

b) computer software is documented and adequate for use;

...

d) computer and automated equipment is maintained to ensure proper functioning...'

 
  In IT&T, these requirements shall be interpreted as follows.  
9.C.1 These requirements apply to every MOT and test tool. In particular, they apply to test software in the SUT. This in turn is to be interpreted as meaning that the requirements given in the following two subclauses shall be met. For some test methods, it is necessary to mount some or all of the test software on the SUT. For example, in some test methods, a portable implementation of an 'upper tester' (sometimes called a 'test responder') may need to be installed above the IUT. On the other hand, in compiler testing, the whole test suite (of programs) needs to be mounted on the SUT prior to being submitted to the compiler under test. Such mounting of test software on the SUT raises issues similar to conventional calibration and traceability.

In some special cases, this requirement will apply to firmware as well as test software.

This requirement also applies to software which is part of a means of validation and which is to be mounted on the system which contains the MOT or test tool to be validated.

9.C.2 Whenever the test method used in an accredited testing or validation service requires test software to be mounted on the SUT, the testing laboratory shall nominate or specify a set of confidence tests (possibly a subset of the conformance test suite) and specify the procedures to run them to check that the test software has been mounted correctly. Firstly, a small set of tests will need to be run to check that the test software has been mounted correctly on the SUT. For example, cyclic redundancy checks are used in Pascal compiler testing to check that the test programs have been mounted correctly. Similarly, in OSI and telecommunications protocol testing, test responder software may be checked by using confidence tests with a special test harness prior to coupling the test responder to the protocol implementation to be tested; but for some simple test responders, the necessary confidence checks may be performed by the initial few test cases in the conformance test suite.
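The CRC-style mounting check mentioned above (as used in Pascal compiler testing) can be sketched as below. This is an invented illustration: it assumes the laboratory records a CRC-32 for the master copy of each test program, and checks each mounted copy against it before testing begins.

```python
# Illustrative sketch: check that test programs mounted on the SUT match the
# master copies held by the testing laboratory, using CRC-32 checksums.
import zlib

def crc32_of(data: bytes) -> int:
    """CRC-32 of a byte string, masked to an unsigned 32-bit value."""
    return zlib.crc32(data) & 0xFFFFFFFF

def verify_mounting(master_crcs: dict[str, int],
                    mounted: dict[str, bytes]) -> list[str]:
    """Return the names of test programs that are missing from the SUT or
    whose mounted copy does not match the recorded master CRC."""
    return sorted(
        name for name, crc in master_crcs.items()
        if name not in mounted or crc32_of(mounted[name]) != crc
    )
```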
9.C.3 The laboratory shall also specify procedures which ensure that all test software mounted on a SUT, for the purposes of an accredited testing or validation service, is derived faithfully from the appropriate master version held by the laboratory. This addresses the second issue: the 'traceability' of the version of the test software which is mounted on the SUT, back to a master version held by the testing laboratory.
9.D Traceability  
9.D.1 Traceability of validations  
  In ISO/IEC Guide 25, subclause 9.2 requires the following:  
  'The overall programme of ... validation of equipment shall be designed and operated so as to ensure that, wherever applicable, measurements made by the laboratory are traceable to national standards of measurement where available.'  
  In IT&T, these requirements shall be interpreted as follows.  
9.D.1.1 A testing laboratory shall have an overall programme of MOT and test tool validation, and this shall be designed and operated so as to ensure the objectivity of the results of MOT and test tool validation. Such a programme should include a policy statement, plans for when and how each MOT and test tool should be validated, and what additional checks need to be performed before each MOT and test tool can be put into service by the testing laboratory. High quality validation is likely to need little in the way of additional acceptance checks, but lower quality validation is likely to require significant additional acceptance checks to ensure objectivity.

Appropriate harmonization agreements should be used, wherever applicable, to check the reliability of MOT and test tool validation results.

9.D.1.2 Validation reports shall, wherever applicable, indicate the traceability to the authoritative test suite specifications or other relevant authoritative standards or specifications, and shall provide the comparison of the results obtained and those expected, and also list known defects. For example, in OSI and telecommunications protocol testing, the validation reports should be traceable to the relevant abstract test suite specification. If, however, there is any doubt about the accuracy of the abstract test suite or if there is no abstract test suite, then they should be traceable back to the relevant profile specification, if applicable, or to the relevant base protocol specification(s).

There should be a procedure to deal with the discovery that one of the expected results of an MOT validation is erroneous. There should also be a procedure to deal with the situation in which the set of expected results is incomplete (e.g. using those expected results that are available but using analysis from first principles where no expected results exist).

9.D.2 Traceability of test results  
9.D.2.1 The requirements stated in clause 9.C shall be interpreted as meaning that test results produced by the testing laboratory shall be traceable to international standard test suites, when available, or otherwise to the relevant authoritative test suite.  
9.D.2.2 Test reports shall, wherever applicable, identify test suites to which the results are traceable.  
9.D.3 Traceability of test cases  
9.D.3.1 ISO/IEC Guide 25 subclause 10.7 b) as quoted in 9.C also applies to test cases, because in software and protocol testing these are themselves considered to be computer software. This leads to the requirement given in the following subclause. Even if they were not considered to be computer software, their validation would be required as a result of ISO/IEC Guide 25 subclause 9.3 ('satisfactory evidence of correlation of... results').
9.D.3.2 In those technical areas where there is a major distinction between the specification and realization of test cases, the testing laboratory shall show how each realization of a test case is derived faithfully from its specification, with preservation of assignment of verdicts or measurements to the corresponding sets of observations. In OSI, telecommunications protocols and product data exchange (e.g. Initial Graphics Exchange Specification — IGES), the specification of test cases is in a substantially different form from their realization. For example, test cases are specified in a human-readable notation in a manner independent of the test tools on which they may be run; these are known as 'abstract test cases'. In order to run such test cases, they have to be translated into a form suitable for compilation (if necessary) and execution on a specific test tool; they are then known as 'executable test cases'. In such areas, the 'abstract test cases' are the master versions and it is necessary to show 'traceability' from the 'executable' versions back to the 'abstract' ones. This means showing the precise mapping from one to the other and demonstrating that this mapping preserves the meaning and verdict assignments (as specified in ISO/IEC 9646-4 and ITU-T X.293).

Systems used solely for producing the executable test cases that are to be run need not be validated provided that the executable test cases are themselves validated (i.e. shown to be 'traceable' back to the abstract test suite).
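One 'traceability' check implied by 9.D.3.2 can be sketched as below: confirming that every executable test case maps back to an abstract test case and that the translation has preserved the verdict assignments. This is a hypothetical illustration; the record format (a mapping from observation sets to verdicts) is invented for the example.

```python
# Illustrative sketch: check that executable test cases are traceable to
# abstract test cases with verdict assignments preserved.
def check_verdict_preservation(abstract: dict[str, dict[str, str]],
                               executable: dict[str, dict[str, str]]) -> list[str]:
    """abstract/executable map test case id -> {observation set id: verdict}.
    Returns a list of traceability problems found."""
    problems = []
    for case, verdicts in executable.items():
        if case not in abstract:
            problems.append(f"{case}: no corresponding abstract test case")
        elif verdicts != abstract[case]:
            problems.append(f"{case}: verdict assignment not preserved")
    return problems
```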

9.D.4 Traceability of test tools  
9.D.4.1 A testing laboratory shall specify the procedures and methods it uses to validate new versions of each MOT or test tool, including, if feasible, its 'traceability' to the master copy and, where relevant, the consistency with previous versions. The MOT and test tools are normally designed and maintained outside the testing laboratory to which accreditation applies. Nevertheless, the testing laboratory is responsible for ensuring that they work correctly and consistently when used to provide an accredited testing service. Correctness of operation may, for example, be demonstrated by comparing the results of a validation with those obtained by another testing laboratory using the same reference implementation. Consistent operation may be demonstrated by re-validation. 'Traceability' in this context means showing the mapping back to the master version and demonstrating that that mapping preserves the essential meaning of the original.
9.E Confidence checks  
  In ISO/IEC Guide 25, subclause 9.6 requires the following:  
  'Where relevant, ... testing equipment shall be subjected to in-service checks between ... verifications.'

In IT&T, this requirement shall be interpreted as follows.

 
9.E.1 Each MOT and test tool shall be subject to hardware and software in-service checks by the testing laboratory, which can be made periodically, between MOT and test tool validations, before beginning to test each new IUT, or even during the period of testing a single implementation, as appropriate. These checks shall include authentication of the version of the MOT or test tool to be used. If faults are detected, then the procedures for dealing with defective equipment shall be applied. When the MOT or test tool is separate from the client's SUT, it may have to be restarted before testing of each new IUT commences. Where appropriate, this process should involve shutting down the operating system and processor and then re-booting the system to force the execution of built-in hardware checks, start-up scripts and file system checks. All output messages should be monitored and any unexpected ones should be investigated.

Checks should be carried out to ensure that the correct version of the MOT is loaded and that there has been no unauthorised access or change to the MOT. The checks should include the execution of a specified subset of the test cases to ensure that the software has loaded correctly and not been corrupted.
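A minimal sketch of such a confidence check might combine version authentication (comparing a digest of the tool's files against the digest recorded when the tool was validated) with a small fixed subset of test cases run as a smoke test. The file layout, digest scheme and smoke-test interface below are illustrative assumptions, not requirements of this Technical Report.

```python
# Illustrative in-service confidence check for an MOT or test tool:
# 1) authenticate the version by digest comparison against the validated copy;
# 2) run a specified subset of test cases to detect loading errors/corruption.
import hashlib
from pathlib import Path

def digest_of(paths):
    """SHA-256 digest over the tool's files, in a deterministic order."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(Path(p).read_bytes())
    return h.hexdigest()

def confidence_check(tool_files, recorded_digest, smoke_cases, run_case):
    """Return (ok, messages). run_case(case) -> 'pass'/'fail'/'inconclusive'."""
    messages = []
    if digest_of(tool_files) != recorded_digest:
        messages.append("version authentication failed: tool files differ "
                        "from the validated master copy")
        return False, messages          # defective-equipment procedures apply
    for case in smoke_cases:
        verdict = run_case(case)
        if verdict != "pass":
            messages.append(f"smoke test {case} gave {verdict}")
    return not messages, messages
```

Any failure would trigger the laboratory's documented procedures for dealing with defective equipment before client testing proceeds.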

9.F Calibration of test tools used to measure software timers
  In ISO/IEC Guide 25, subclause 9.1 requires the following:  
  'All measuring and testing equipment... shall be calibrated... before being put into service. '  
  In IT&T, these requirements shall be interpreted as follows.  
9.F.1 Any test tool used to measure software timers shall be calibrated. If the testing or validation service makes use of measurement of software timers, then in this unusual case software or protocol testing (or validation) involves measurement of a physical quantity and hence conventional calibration and traceability are applicable.
10 Test methods (Calibration and test methods)
10.1 ISO/IEC Guide 25 subclause 10.1 is interpreted in 5.B, 9.B.1, 10.B, 13.A.4 and 13.A.5.  
10.2 ISO/IEC Guide 25 subclause 10.2 is interpreted in 9.B.1, 10.B, 10.C, 10.D, 13.A.7, 13.A.8 and 13.A.9.  
10.3 ISO/IEC Guide 25 subclause 10.3 is interpreted by 10.C.  
10.4 ISO/IEC Guide 25 subclause 10.4 is interpreted in 10.B and 13.A.6.  
10.5 No IT&T specific interpretation is required for this subclause.  
10.6 ISO/IEC Guide 25 subclause 10.6 is interpreted in 9.A.2.  
10.7 ISO/IEC Guide 25 subclause 10.7 is interpreted in 9.A.2, 9.C, 9.D.3, 10.E and 12.A.  
10.8 No IT&T specific interpretation is required for this subclause.  
10.A Introduction to test methods  
10.A.1 For the purposes of this Technical Report, a test method for IT&T testing comprises: the individual test cases which constitute a test suite (set of test cases); the test tools (both hardware and software) used to run those test cases and the way in which they are used; and the procedures used to select and run the test cases and to analyse the observations and state the results. This use of the term 'test method', which is consistent with the use in general accreditation criteria (ISO/IEC Guide 25, 3.7), should not be confused with the more restrictive notion of an 'abstract test method' as defined in ISO/IEC 9646 and the ITU-T X.290 series for conformance testing for OSI (including telecommunications protocols).

The criteria that are applied to the suitability of test methods depend to some extent on the type of testing. Below, there are, firstly, some general criteria which apply to all types of testing; secondly, there are additional criteria which apply only to conformance testing.

10.B Repeatability and reproducibility  
  In ISO/IEC Guide 25, subclause 10.1, paragraph 1 requires the following:  
  'The laboratory shall have documented instructions on the use and operation of all relevant equipment,... where the absence of such instructions could jeopardize the... tests.'  
  In ISO/IEC Guide 25, subclause 10.2 requires the following:  
  'The laboratory shall use appropriate methods and procedures for all ... tests and related activities ... They shall be consistent with the accuracy required, and with any standard specifications relevant to the... tests concerned.'  
  In ISO/IEC Guide 25, subclause 10.4 requires the following:  
  'Where it is necessary to employ methods that have not been established as standard, these shall be subject to agreement with the client, be fully documented and validated, and be available to the client and other recipients of the relevant reports.'  
  In ISO/IEC Guide 25, subclause 12.1 requires the following:  
  'The records for each ... test shall contain sufficient information to permit their repetition.'  
  In IT&T, these requirements are to be interpreted as including the requirement that test methods shall meet the following two criteria of repeatability and reproducibility.  
10.B.1 A test method is said to meet the criterion of repeatability if repeated testing of the same implementation by the same testing laboratory using the same set of tests employing that test method produces results that are consistent with those produced on earlier occasions. This criterion of repeatability should not be taken to mean that repeated testing has to produce identical results from corresponding tests, but only that the results from corresponding tests are consistent with one another. For example, measurements which vary only by amounts less than the measurement uncertainty can be taken as being consistent.

Furthermore, repeatability has to be interpreted more flexibly for types of testing that involve making performance measurements. For example, measurements of the time taken to run a specific test case can vary considerably as the result of what seem to be insignificant changes to the environment or parameterization of the software implementation. Thus, the test methods for making performance measurements need to specify the precise environmental conditions to ensure repeatability. The results should be presented so as to avoid giving spurious accuracy. The relevant test methods may involve re-running test cases with a selection of different environmental conditions or parameter settings. In all cases of making performance measurements, the results need to be accompanied by an explanation of the environment in which the test cases were run.

There is no implied time period over which the repeatability should hold true.

Note — this usage of the term 'repeatability' is consistent with the definitions given in ISO 5725, the Guide for determination of repeatability and reproducibility for a standard test method by inter-laboratory tests, and in the International Vocabulary of Basic and General Terms in Metrology.
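The notion of consistency in 10.B.1 — repeated results need not be identical, only within the measurement uncertainty of one another — can be expressed as a simple predicate. The symmetric uncertainty model below is an assumption chosen for illustration.

```python
# Minimal sketch of the 10.B.1 consistency notion: two measurements of the
# same quantity are treated as consistent when they differ by less than the
# stated measurement uncertainty.

def consistent(measurement_a, measurement_b, uncertainty):
    """True if the two results agree within the measurement uncertainty."""
    return abs(measurement_a - measurement_b) < uncertainty

def repeatable(runs, uncertainty):
    """True if every pair of repeated measurements is mutually consistent."""
    return all(consistent(a, b, uncertainty)
               for i, a in enumerate(runs) for b in runs[i + 1:])
```

For example, three timing measurements of 10.02 s, 10.05 s and 9.99 s are mutually consistent under an uncertainty of 0.1 s, whereas 10.0 s and 10.5 s are not.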

10.B.2 A test method is said to meet the criterion of reproducibility if testing of the same implementation by two different testing laboratories using the same set of tests employing that test method leads to the results produced by one testing laboratory being consistent with those produced by the other. Reproducibility will be particularly difficult to achieve for any aspects of a testing service which are inherently subjective. For example, in some standards there is a requirement to produce certain documentation, and the testing service may attempt to check the adequacy of the documentation. In such cases, reproducibility should still be sought by having clear objective criteria against which such qualities are to be judged. Accreditation can be given for testing against objective criteria but not for the subjective interpretation of the results of testing.
    Reproducibility will also be difficult to assess if the test cases are not determined in advance. Such a situation may be considered necessary for some types of performance measurement. Once again, reproducibility should be sought by having clear objective criteria against which the measurements are interpreted, irrespective of the exact means used to make the observations.
    There is no implied time period over which the reproducibility should hold true.
    Reproducibility requires any necessary measurements to be made accurately within specified tolerances of measurement uncertainty.
    Any doubts regarding reproducibility should be investigated, where feasible, by means of interlaboratory comparisons.
    Note — this usage of the term 'reproducibility' is consistent with the definitions given in ISO 5725, the Guide for determination of repeatability and reproducibility for a standard test method by inter-laboratory tests, and in the International Vocabulary of Basic and General Terms in Metrology.
10.B.3 In conformance testing, the selection and parameterization of test cases to be run to test the conformance of a specific implementation to the relevant standard(s) or specification(s) shall meet the criteria of repeatability and reproducibility with respect to all other conformance assessments based on the same test suite. IT&T standards usually include a number of different options. It is, therefore, to be expected that test case selection will be needed to choose those test cases suited to the options which the IUT is claimed to support. Furthermore, some test cases will need to be parameterized or modified to tailor them to the corresponding parameters of the IUT. In such cases, the parameter values used and modifications made should be documented in the test report, possibly annexed to it.
  This is a consequence of 10.B.1 and 10.B.2. In OSI, telecommunications protocol and product data exchange conformance testing, for example, the supplier provides two documents which give the information necessary to perform the test case selection and parameterization. One is the Implementation Conformance Statement (ICS), which lists the options supported, including the ranges of, or constraints on, the parameter values supported; a proforma for this is, in many cases, being standardized. The other document is the Implementation eXtra Information for Testing (IXIT), which gives additional information, particularly concerning parameter values to be used and which features are untestable. In protocol conformance testing, the proforma for this should be partly as standardized in ISO/IEC 9646-5 and ITU-T X.294, partly specified with the test suite concerned, and partly defined by the testing laboratory.

There is no equivalent of a standardized ICS proforma for some other areas such as compiler testing, although test selection may need to be performed. For Ada compiler testing, for example, there are implicit options available due to limitations of the target hardware upon which the Ada program executes. For instance, if the hardware does not support floating point arithmetic with 20 digit precision, then certain test cases in the test suite are inapplicable and are not processed during testing. Unfortunately, the complete characterization of such inapplicable test cases is not straightforward and therefore rests on analysis by a test operator, rather than upon simple pre-defined rules (or selection by a test tool). Nevertheless, procedures are needed to ensure that the test selection is repeatable, reproducible and objective, otherwise it is probable that such testing is not suitable for accreditation.
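Rule-based selection of the kind 10.B.3 calls for — deterministic, so that the same ICS always yields the same set of applicable test cases — might be sketched as follows. The data layout and option names are hypothetical.

```python
# Hypothetical sketch of repeatable, rule-based test case selection driven by
# an Implementation Conformance Statement (ICS): each test case declares the
# options it requires, and a case is selected only when the ICS states that
# every required option is supported.

def select_test_cases(test_suite, ics_options):
    """test_suite: dict of test-case id -> set of required option names.
    ics_options: dict of option name -> bool (supported or not).
    Returns a sorted, deterministic list of applicable test-case ids."""
    return sorted(case_id for case_id, required in test_suite.items()
                  if all(ics_options.get(opt, False) for opt in required))
```

For instance, a test case requiring 20-digit floating point precision is automatically deselected when the ICS declares that option unsupported, and the same ICS always produces the same selection — which is what makes the selection repeatable and reproducible.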

10.C Internationally agreed test methods  
  As stated in 9.B.1, ISO/IEC Guide 25 subclause 10.3 requires the following:  
  'Where methods are not specified the laboratory shall wherever possible, select methods that have been published in international or national standards,...'  
  In ISO/IEC Guide 25, subclause 10.2 requires the following:  
  'The laboratory shall use appropriate methods and procedures ... consistent with the accuracy required, and with any standard specifications relevant to the... tests concerned.'  
  In IT&T, these requirements shall be interpreted as follows.  
  For some testing services, it may be appropriate to use test methods that comply with national, rather than international, standards (or other forms of agreement). However, given the international nature of the IT&T industry, it is desirable that test methods be subject to international agreement and review. An international agreement could be between testing laboratories, as is the case with compiler testing, rather than through a standards body such as ISO. Also, where there are relevant standards, these may be supplemented by international agreements between testing laboratories.
10.C.1 Unless otherwise agreed with the client, each test method should be consistent with and, if applicable, comply with the relevant international standard(s), if any exist. This is not intended to restrict a testing laboratory from using non-standard test methods, but if such non-standard test methods are used the testing laboratory shall state this clearly to its clients and in the resulting test reports. In OSI and telecommunications protocol testing, test methods should be in accordance with the most up to date version available of the various parts of ISO/IEC 9646 or the ITU-T X.290 series recommendations. This means that abstract test suites should comply with ISO/IEC 9646-2 or ITU-T X.291 (and ISO/IEC 9646-3 or ITU-T X.292 if they are written in TTCN), test realization should comply with ISO/IEC 9646-4 or ITU-T X.293, and the testing laboratory and client should comply with ISO/IEC 9646-5 or ITU-T X.294 in the conformance assessment process. In addition, profile test specifications should comply with the latest version of ISO/IEC 9646-6 or ITU-T X.295.
    For Office Documentation Architecture (ISO/IEC 8613), test methods should be in accordance with ISO/IEC TR 10183-1 (1992). In addition, abstract test cases should be in accordance with the most up to date version available of ISO/IEC TR 10183-2.
    In the graphics area, test methods should be in accordance with ISO/IEC 10641.
    For POSIX, test methods should be in accordance with the most up to date version available of IEEE P2003 or ISO/IEC 13210.
    For software packages, test methods should be in accordance with the most up to date version available of ISO/IEC 12119.
    For language processors, such as compilers, test methods should be in accordance with ISO/IEC TR 9547.
10.D Measurement uncertainty  
  In ISO/IEC Guide 25, subclause 10.2 requires the following:  
  'The laboratory shall use appropriate methods and procedures for all ... tests and related activities within its responsibility (including ... estimation of uncertainty of measurement ...). They shall be consistent with the accuracy required...'  
  In ISO/IEC Guide 25, subclause 13.2 l) requires that test reports shall include the following:  
  'a statement of estimated uncertainty of the... test result (where relevant)'.  
  In IT&T, these requirements shall be interpreted as follows.  
10.D.1 These requirements are considered only to apply to software and protocol testing in those relatively rare cases in which measurements of physical quantities are made, and where the testing involves the use of floating point arithmetic or other approximate representations of data.  
10.E Configuration management  
  In ISO/IEC Guide 25, subclause 10.7 requires the following:  
  'Where computers or automated equipment are used for the capture, processing, manipulation, recording, reporting, storage or retrieval of... test data, the laboratory shall ensure that:  
  b) computer software is documented and adequate for use;  
  d) computer and automated equipment is maintained to ensure proper functioning and provided with the environmental and operating conditions necessary to maintain the integrity of... test data;...'  
  In IT&T, these requirements shall be interpreted to include the following.  
10.E.1 All software developed or installed by the laboratory and having a bearing on the test or validation results shall be maintained within an appropriate configuration management system consistent with the requirements of ISO/IEC 9000-3. A software configuration management system is a set of procedures that, when followed, ensures adequate identification, control, visibility and security of the version number, issue and any changes made to the software.
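As a concrete illustration of "identification, control, visibility and security", a configuration management record might minimally capture the software's identification (name, version, issue), an audit trail of changes, and a content digest for integrity checking. The record layout below is a hypothetical sketch, not a layout prescribed by ISO/IEC Guide 25 or ISO/IEC 9000-3.

```python
# Illustrative sketch of a minimal configuration management record for
# laboratory software: identification, change history, and an integrity
# digest. The layout is hypothetical.
import hashlib

class ConfigurationRecord:
    def __init__(self, name, version, issue, content: bytes):
        self.name, self.version, self.issue = name, version, issue
        self.digest = hashlib.sha256(content).hexdigest()
        self.history = []  # (superseded version, superseded issue, change)

    def record_change(self, version, issue, description, content: bytes):
        """Every change bumps the identification and is logged."""
        self.history.append((self.version, self.issue, description))
        self.version, self.issue = version, issue
        self.digest = hashlib.sha256(content).hexdigest()

    def verify(self, content: bytes):
        """Integrity check: does the installed software match the record?"""
        return hashlib.sha256(content).hexdigest() == self.digest
```

The `verify` step is what links configuration management to the in-service checks of 9.E: the installed software can be authenticated against the controlled record at any time.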
11 Handling of test items (Handling of calibration and test items)
11.1 ISO/IEC Guide 25 subclause 11.1 is interpreted in 7.A and 7.B.  
11.2 No IT&T specific interpretation is required for this subclause.  
11.3 ISO/IEC Guide 25 subclause 11.3 is interpreted by 11.A.  
11.4 ISO/IEC Guide 25 subclause 11.4 is interpreted by 11.A.  
11.A Protection against viruses and other agents of corruption  
  In ISO/IEC Guide 25, subclause 11.3 requires the following:  
  'The laboratory shall have documented procedures and appropriate facilities to avoid deterioration or damage to the... test item.'  
  In ISO/IEC Guide 25, subclause 7.6 requires the following:  
  'Adequate measures shall be taken to ensure good housekeeping in the laboratory. '  
  In ISO/IEC Guide 25, subclause 11.4 requires the following:  
  'The laboratory shall have documented procedures for the receipt, retention and safe disposal of... test items, including all provisions necessary to protect the integrity of the laboratory.'  
  In IT&T, these requirements shall be interpreted as follows.
11.A.1 The testing laboratory shall provide an adequate assurance to the client that the test tools or test suite will not corrupt any hardware or software belonging to the client by introducing viruses or other agents of corruption. When the IUT is on disc or tape, the testing laboratory should take all reasonable steps to avoid introducing any viruses or other agents of corruption onto the disc or tape, and if requested by the client may give a written undertaking to this effect. Similarly, the other way round, if the test tool or test suite is on a disc or tape, the testing laboratory should give a written undertaking that it has taken all reasonable precautions to ensure that the test tool and/or test suite contains no known viruses or other agents of corruption.
11.A.2 In addition, ISO/IEC Guide 25 subclause 7.2 requires the following:  
  'The environment in which these activities are undertaken shall not invalidate the results or adversely affect the required accuracy of measurement.'  
  This requirement, when taken together with ISO/IEC Guide 25 subclause 7.6, shall be interpreted as requiring adequate procedures to be documented and used to prevent viruses from client-supplied items contaminating the MOT or test tools, and to check that no modifications to the MOT or test tools are made during testing by any software or systems provided by the client.  
  When the IUT is on disc or tape, the testing laboratory should conduct a suitable virus scan, if available, before loading the IUT into any test tool. If the test tool or test suite is on a disc or tape, it may be unsafe to allow the client to run a virus scan on the test tool or test suite, because the testing laboratory cannot then prevent the client from introducing viruses or otherwise corrupting the test tool or test suite. However, when the testing is over, the testing laboratory should run a check that no viruses have been introduced and that no modification of the test tool or test suite has taken place. Any discrepancy would potentially invalidate the test results.
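The post-testing modification check can be sketched as a digest snapshot taken before testing and compared afterwards. The snapshot format and file handling below are illustrative assumptions.

```python
# Minimal sketch of the post-testing integrity check: digest the test tool
# and test suite files before testing, then verify afterwards that nothing
# was modified, added or removed while client-supplied software was present.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file to its SHA-256 digest."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def modified_files(before, after):
    """Files whose digests changed, or which appeared/disappeared."""
    changed = {p for p in before.keys() & after.keys()
               if before[p] != after[p]}
    return sorted((set(before) ^ set(after)) | changed)
```

Any non-empty result would trigger investigation, since it potentially invalidates the test results. (Note that a digest comparison detects modification; it is a complement to, not a substitute for, a proper virus scan.)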
12 Records  
12.1 ISO/IEC Guide 25 subclause 12.1 is interpreted in 10.B and 13.A.6.  
12.2 ISO/IEC Guide 25 subclause 12.2 is interpreted in 12.A.  
12.A Confidentiality and integrity of records
  In ISO/IEC Guide 25, subclause 12.2 requires the following:  
  'All records... shall be safely stored, held secure and in confidence to the client.'  
  In IT&T testing, records include on-line records made during testing (e.g. in OSI and telecommunications protocol testing, the conformance log).
  In ISO/IEC Guide 25, subclause 10.7 requires the following:  
  'Where computers ... are used ... the laboratory shall ensure that...  
  c) procedures are established and implemented for protecting integrity of data;'...  
  In ISO/IEC Guide 25, subclause 7.2 requires the following:  
  'The environment in which these activities are undertaken shall not invalidate the results or adversely affect the required accuracy of measurement. Particular care shall be taken when such activities are undertaken at sites other than the permanent laboratory premises. '  
  In IT&T, these requirements shall be interpreted as follows.  
12.A.1 The testing laboratory shall take steps to ensure that no third party can gain access to the on-line records either during or after testing.  
12.A.2 Thus, if a client's system on which testing is conducted is potentially open to access by third parties during testing, the testing laboratory shall ensure that the client controls the testing environment in such a way that third parties do not gain access to that system during testing. This protection is needed to ensure both confidentiality and integrity of the test records.
12.A.3 Furthermore, if a client's system on which testing is conducted is open to access by third parties after completion of the testing, and the client is to retain the test records on the system, then the client shall provide a written undertaking to indicate full responsibility for the test records. Unless such an undertaking is given by the client, the testing laboratory shall delete all records generated during the testing process on the client's system, having first sought the agreement of the client on the appropriate means of deletion. The potential for this situation is greatest in relation to testing conducted on a site other than the testing laboratory's own premises.

This situation can arise, for example, in OSI and telecommunications protocol testing when the testing is to be conducted at the client's site, and to avoid the testing laboratory having to transport the hardware on which the MOT is to run, the client lends the testing laboratory an appropriate computer system on which to mount the MOT software. This computer system may be coupled to a computer network and the client may expect it to be returned after testing in the same state as it was before testing began. In such a case, if test records are left on the system and subsequently accessed by a third party, without the client having accepted responsibility for the test records, then the testing laboratory may be held liable for the breach of confidence.

When the testing laboratory needs to remove such test records from the client's system, care should be taken to try to find a means of deletion which completely removes the records from the system, rather than just flagging them for subsequent removal. To find such a means of deletion may require the client's assistance.
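A deletion routine of the kind intended here might overwrite a record's contents before unlinking it, rather than merely removing the directory entry. This is an illustrative sketch only: on journaling file systems, storage with wear levelling, or systems with backups, an in-place overwrite does not guarantee complete removal — which is precisely why the text recommends seeking the client's assistance in choosing a means of deletion.

```python
# Illustrative sketch: overwrite a test record's stored bytes before
# unlinking, instead of just flagging the file for subsequent removal.
# NOT a guarantee of complete removal on all storage technologies.
import os
from pathlib import Path

def overwrite_and_delete(path):
    p = Path(path)
    size = p.stat().st_size
    with open(p, "r+b") as f:
        f.write(b"\x00" * size)   # overwrite the stored record in place
        f.flush()
        os.fsync(f.fileno())      # push the overwrite to the device
    p.unlink()                    # then remove the directory entry
```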

13 Certificates and reports  
13.1 ISO/IEC Guide 25 subclause 13.1 is interpreted in 13.A and 13.B.  
13.2 ISO/IEC Guide 25 subclause 13.2 is interpreted in 10.D.  
13.3 No IT&T specific interpretation is required for this subclause.  
13.4 No IT&T specific interpretation is required for this subclause.  
13.5 No IT&T specific interpretation is required for this subclause.  
13.6 No IT&T specific interpretation is required for this subclause.  
13.7 No IT&T specific interpretation is required for this subclause.  
13.A Objectivity  
  In ISO/IEC Guide 25, subclause 13.1 requires the following:  
  'The results of each ... test or series of... tests carried out by the testing laboratory shall be reported accurately, clearly, unambiguously and objectively,...'  
  In IT&T, these requirements, taken in conjunction with the criteria stated in 10.B.1 and 10.B.2, are to be interpreted as requiring objective procedures, as follows.  
13.A.1 Subjective comments and results are outside the scope of accreditation. Thus, if the testing laboratory includes any subjective comments or results in a test report, it shall state clearly that these comments and results are outside the scope of its accreditation.  
13.A.2 Whenever test cases are such that analysis of the observations by a test operator is required in order to interpret the results, before the results can be stated in a test report, the testing laboratory shall define objective procedures to be followed by the test operators performing the analysis, sufficient to ensure that repeatability, reproducibility, and objectivity are maintained; and the test operators shall follow the defined procedures. Some test cases may require a test operator to analyse the observations in detail after the test case has been run, before the results can be stated. With such test cases, there is clearly a danger of both subjectivity and a lack of consistency in the results. By defining detailed objective procedures to be followed by the test operator making such analyses, it is possible to use analysis by a test operator during testing without compromising repeatability, reproducibility, or objectivity.

These procedures should not be confused with the ones to be followed when arbitration is called for (i.e. when the client challenges the results produced by the testing laboratory).

13.A.3 Test methods for conformance testing shall have the property that the verdicts shall be assigned consistently and objectively with respect to the standard(s) or specification(s) against which conformance is being tested, and thereby be repeatable, reproducible and objective. Repeatability, reproducibility and objectivity of conformance testing means repeatability, reproducibility and objectivity of verdict assignment. This in turn means that the verdicts have to be consistent and objective with respect to the relevant standards.
  This is a consequence of ISO/IEC Guide 25 subclause 13.1 (as quoted above), and the interpretations in 10.B.1 and 10.B.2. If repeated testing of the same implementation results in different verdicts for the same test case, inconclusive verdicts may be regarded as consistent with either pass or fail verdicts, but a pass verdict should never be considered consistent with a fail verdict.
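The consistency rule for verdicts stated above reduces to a simple predicate: an inconclusive verdict is consistent with anything, while pass and fail are consistent only with themselves.

```python
# Minimal sketch of the verdict consistency rule in 13.A.3:
# 'inconclusive' is consistent with any verdict; 'pass' and 'fail'
# are consistent only with themselves.

def verdicts_consistent(v1, v2):
    return "inconclusive" in (v1, v2) or v1 == v2
```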
13.A.4 The testing laboratory shall define and document the procedures to be followed by its staff concerning the re-running of test cases, and these shall be followed in all applicable circumstances. They shall include objective criteria to determine whether or not a test case is to be re-run, and procedures for ensuring that the repeatability, reproducibility and objectivity of the verdict assignment is maintained. Sometimes, in order to come to a conclusive verdict, it may be necessary to run a particular test case several times. For example, if a test case intermittently produces inconclusive verdicts through no fault of the SUT, then it should be re-run (until some stated criterion is reached) to try to produce a verdict of either pass or fail. However, procedures are needed to avoid introducing inconsistency (and thereby breaching the criteria of repeatability, reproducibility and objectivity) as a result of re-running such test cases.
  This is a consequence of ISO/IEC Guide 25 subclauses 10.1 and 13.1 (as quoted above), and the interpretations in 10.B.1, 10.B.2 and 13.A.2. In some fields of testing (e.g. compiler testing), re-running individual test cases may be inappropriate, in which case, in order to resolve inconclusive verdicts, it may be necessary to re-run the whole test suite.
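A documented re-run procedure with an objective stopping criterion, as 13.A.4 requires, might look like the following sketch: an inconclusive test case is re-run up to a stated maximum number of attempts, and every attempt is recorded so that the records permit repetition. The interface and limit are hypothetical.

```python
# Hypothetical sketch of a documented re-run procedure: a test case giving
# an inconclusive verdict is re-run up to a stated maximum number of
# attempts, and every attempt is logged.

def run_with_reruns(run_case, case_id, max_attempts=3):
    """run_case(case_id) -> 'pass' | 'fail' | 'inconclusive'.
    Returns (final verdict, list of all verdicts observed)."""
    attempts = []
    for _ in range(max_attempts):
        verdict = run_case(case_id)
        attempts.append(verdict)
        if verdict != "inconclusive":
            break  # objective stopping criterion: first conclusive verdict
    return attempts[-1], attempts
```

Because the limit and stopping rule are fixed in advance, re-running cannot be used selectively to coax a favourable verdict, which is how repeatability and objectivity are preserved.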
13.A.5 Where there are procedures to be followed by test operators performing analysis of the observations prior to assigning a verdict, those procedures shall ensure that there can be no ambiguity over the verdict to be assigned in each case. Indeed, the verdicts assigned shall be as given in the relevant test case. Clearly, the repeatability, reproducibility and objectivity of verdict assignments also applies to any procedures to be followed by test operators performing analysis of the observations prior to assigning a verdict. In some cases, there may be international mechanisms established which should govern the procedures to be followed by test operators performing such analysis.
  This is a consequence of ISO/IEC Guide 25 subclauses 10.1 and 13.1 (as quoted above), and the interpretation in 13.A.1.  
13.A.6 If analysis of the observations by a test operator leads to modification or qualification of results produced automatically, then the test report shall be written in such a way that there can be no ambiguity over what verdict has been assigned. Such modifications or qualifications of the results shall not conflict with the verdicts as given in the relevant test case. The justification for each such modification, together with the endorsement by management, shall be noted in all the relevant records. If it is known that the results of certain test cases will be subject to analysis by test operators before the test report can be written, then it must not appear that an automatically produced verdict is changed by the test operator. There are two possible ways of avoiding this. The first is to make the uncertainty clear in the output produced by the MOT or test tool (e.g. the MOT or test tool could print 'unexpected situation needs investigation' or even 'provisional fail' rather than 'fail'). The second solution, which is applicable to a different class of tests, is to make it clear that the test operator has produced an interpretation of the results produced by the MOT or test tool, rather than a change in the verdict (e.g. the test operator could explain the reason for an inconclusive verdict or the significance of a fail verdict in some obscure error state).
  This is a consequence of ISO/IEC Guide 25 subclauses 10.4, 12.1 and 13.1 (as quoted above), and the interpretation in 13.A.4. The need for documenting each such modification comes from ISO/IEC Guide 25 subclause 10.4, which requires the following:  
  'Where it is necessary to employ methods that have not been established as standard, these shall be... fully documented...'  
  For example, in Physical layer protocol testing, sometimes the results of an MOT or test tool need to be analysed with respect to an accuracy specification, because such a check is not built into the available MOT or test tool. Such checks can, in extreme cases, result in changing a 'provisional pass' produced by the MOT or test tool into a 'fail' in the test report.
13.A.7 If a test case which is supposed to specify the verdicts does not completely define which verdicts are to be assigned to which sets of observations, then the testing laboratory shall either:

a) define and use testing procedures which ensure that any unspecified verdict assignments meet the criteria of repeatability, reproducibility and objectivity with respect to the relevant standard(s); or

b) assign an inconclusive verdict to any unspecified situation until the test suite has been corrected by the maintenance authority for that test suite.

The testing laboratory shall also take appropriate steps to get such (incomplete) test cases revised so that they completely define all allowed verdict assignments.

Sometimes an MOT or test tool (equipment, systems or software) is such that it will assign verdicts in all cases according to the verdict assignments defined in the test cases (e.g. this is typically the case for compiler testing and becoming fairly common in OSI and telecommunications protocol testing). Unfortunately, some test cases may be incompletely specified, leaving some outcomes undefined. This can occur, for example, in an extreme form in OSI and telecommunications protocol testing if the notation used is only capable of expressing the most likely sequences of events, under the false assumption that all other sequences are necessarily invalid. Careful use of well-defined default behaviour can avoid such cases of unspecified verdict assignments, without unduly burdening the test case with information concerning unlikely events. Ideally, test cases which do not completely specify all verdict assignments should be considered to be in error, but since in some cases this deficiency may apply to the whole test suite rather than just to individual test cases, testing laboratories should have the option of applying procedures to overcome the problem rather than be forced to withdraw the testing service.

This is a consequence of ISO/IEC Guide 25 subclauses 10.2 and 13.1 (as quoted above), and the interpretations in 10.B.1,10.B.2 and 13.A.2.
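Option b) above amounts to a total verdict-assignment function: every observation sequence the test case leaves unspecified defaults to inconclusive. The following sketch (not part of this Technical Report; the function and event names are hypothetical, and Python is used purely for illustration) shows the idea:

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"

def assign_verdict(observations, specified_verdicts):
    """Return the verdict a test case specifies for a set of observations.

    `specified_verdicts` maps each observation sequence the test case
    covers to a Verdict. Per interpretation 13.A.7 b), any sequence the
    test case leaves unspecified receives an inconclusive verdict until
    the test suite is corrected by its maintenance authority.
    """
    return specified_verdicts.get(tuple(observations), Verdict.INCONCLUSIVE)

# The (incomplete) test case specifies verdicts only for the most
# likely sequences of events.
spec = {
    ("connect", "data", "disconnect"): Verdict.PASS,
    ("connect", "reject"): Verdict.FAIL,
}
```

An unanticipated sequence such as `["connect", "reset"]` then yields `Verdict.INCONCLUSIVE` rather than an undefined outcome, which keeps the verdict assignment repeatable and objective even for test cases with gaps.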

13.A.8 If it is not possible in the accepted conformance testing methodology for a particular area for conformance test cases to specify the actual verdicts to be assigned, then the test case shall specify verdict criteria which meet the requirements of repeatability, reproducibility and objectivity, so that it is well defined what verdict should be assigned in every situation resulting from the execution of each test case.

For example, this applies in the area of product data exchange (e.g. the Initial Graphics Exchange Specification, IGES). When the results of a test case are produced, analysis tools provide or extract the information necessary to ascertain whether or not the verdict criteria have been met.

This is a consequence of ISO/IEC Guide 25 subclauses 10.2 and 13.1 (as quoted above), and the interpretations in 10.B.1, 10.B.2 and 13.A.2.
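Verdict criteria of the kind 13.A.8 describes can be thought of as objective predicates evaluated over properties that an analysis tool extracts from the test-case results. The sketch below (not from this Technical Report; the property names and tolerance are invented for illustration) shows one way such criteria yield a well-defined verdict for every outcome:

```python
def verdict_from_criteria(extracted, criteria):
    """Evaluate objective verdict criteria over properties extracted
    from test-case results by an analysis tool (cf. 13.A.8).

    `criteria` maps a property name to a predicate on its value. The
    verdict is defined for every outcome: 'pass' only if every
    criterion is met, 'inconclusive' if a required property could not
    be extracted from the results, and 'fail' otherwise.
    """
    if any(name not in extracted for name in criteria):
        return "inconclusive"
    if all(check(extracted[name]) for name, check in criteria.items()):
        return "pass"
    return "fail"

# Hypothetical IGES-style exchange check: entities must survive the
# round trip, and coordinates must stay within a stated tolerance.
criteria = {
    "entities_preserved": lambda v: v is True,
    "coordinate_error": lambda v: v <= 1e-6,
}
```

Because each criterion is an explicit predicate, two laboratories analysing the same results reach the same verdict, which is what the repeatability, reproducibility and objectivity requirements demand.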

  In many cases, the maintenance authority for the test suite will be a standardization body.  
13.A.9 If a test case specifies two possible sets of observations, either of which could (because of the test method being used) result from the same behaviour on the part of the SUT, then the two sets of observations shall be given consistent verdicts. If the test case specifies inconsistent verdicts in this situation (e.g. one set of observations is assigned pass while the other is assigned fail), then the testing laboratory shall assign an inconclusive verdict if either of these outcomes arises in practice. The testing laboratory shall also seek to have the test case corrected by the maintenance authority for that test suite.

In OSI and telecommunications protocol testing, one type of badly designed test case is one which distinguishes between two sets of observations both of which could result from the same valid behaviour by the SUT. Such situations can occur when testing across a network, because the network adds a degree of non-determinism to the observations. For example, 'expedited data' may be allowed to overtake 'normal data' in the network, so the order of arrival at the MOT is not necessarily indicative of the order of transmission by the SUT. Another example is that destructive events (like 'disconnect' or 'reset') can destroy data which is already in transit; thus, if a SUT sends data followed by a destructive event, the data may or may not reach the MOT, depending on the time gap between the events and on the network performance at the time.
  This is a consequence of ISO/IEC Guide 25 subclauses 10.2 and 13.1 (as quoted above). In such circumstances, where the same behaviour by the SUT can manifest itself in different ways to the MOT, it would be inconsistent to assign a verdict of pass to one possible set of observations but fail to the other. However, it is acceptable to assign the verdict inconclusive to one but pass or fail to the other. This should not necessarily be regarded as an inconsistency, because the observations that led to an inconclusive verdict could alternatively have resulted from some other behaviour by the SUT.

It is recognised that if the same test suite is used by other testing laboratories then withdrawal of a test case may only be possible by agreement with the others, to avoid affecting mutual recognition agreements.

In some cases the assignment of verdicts will be made by an MOT or test tool maintained by an organization that is outside the testing laboratory's control. In such cases, the testing laboratory will need to add to its procedures the intervention of a test operator to change provisional verdicts to inconclusive as necessary to meet this requirement.
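The reconciliation 13.A.9 requires can be sketched as a post-processing step over the verdicts a test case specifies (again, this is an illustrative sketch, not part of this Technical Report; the event names are hypothetical). Observation sequences that the test method cannot distinguish, such as the two arrival orders of reordered expedited data, are grouped, and any group mixing pass and fail is downgraded:

```python
def reconcile_equivalent_outcomes(verdicts, equivalent_groups):
    """Apply interpretation 13.A.9: observation sets that can result
    from the same SUT behaviour must carry consistent verdicts.

    `verdicts` maps an observation sequence to the verdict the test
    case specifies. `equivalent_groups` lists groups of sequences the
    test method cannot tell apart (e.g. because the network may
    reorder 'expedited data' past 'normal data'). Where a group mixes
    'pass' and 'fail', every member is downgraded to 'inconclusive'.
    A pass/inconclusive or fail/inconclusive mix is left unchanged,
    since 13.A.9 does not regard that as an inconsistency.
    """
    fixed = dict(verdicts)
    for group in equivalent_groups:
        assigned = {verdicts[seq] for seq in group}
        if {"pass", "fail"} <= assigned:
            for seq in group:
                fixed[seq] = "inconclusive"
    return fixed

# A badly designed test case: both orders can arise from the same
# valid transmission by the SUT, yet the verdicts disagree.
verdicts = {
    ("normal", "expedited"): "pass",
    ("expedited", "normal"): "fail",
}
groups = [[("normal", "expedited"), ("expedited", "normal")]]
```

Where the provisional verdicts are produced by an MOT or test tool outside the laboratory's control, the same downgrade would be performed manually by the test operator, as the preceding paragraph notes.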

13.B Clear and unambiguous test reports  
13.B.1 In order to fulfil the requirement in ISO/IEC Guide 25 subclause 13.1 for clear and unambiguous test reports, if a test report contains both conformance testing results and measurements that are not conformance testing results, these shall be distinguished clearly from one another.
13.B.2 The test report should contain a precise identification of both the IUT and the SUT, including their parameterization.  
14 Sub-contracting of calibration or testing
  No IT&T specific interpretation is required for the sub-contracting of calibration or testing requirements.  
15 Outside support services and supplies
15.1 No IT&T specific interpretation is required for this subclause.  
15.2 ISO/IEC Guide 25 subclause 15.2 is interpreted in subclause 15.A.  
15.3 No IT&T specific interpretation is required for this subclause.  
15.A Provision of MOT and test tool validation services
In ISO/IEC Guide 25, subclause 15.2 requires the following:

'Where no independent assurance of the quality of outside support services or supplies is available, the laboratory shall have procedures to ensure that purchased equipment... and services comply with specified requirements. The laboratory should, wherever possible, ensure that purchased equipment... are not used until they have been... verified as complying with any standard specifications relevant to the... tests concerned.'

The term 'outside' should be taken to mean 'outside the testing laboratory to which accreditation applies'. Thus, the validation service may be provided by another part of the same organisation; as long as it is a separate organisational unit from the testing laboratory itself, it is to be considered an outside support service, and hence ISO/IEC Guide 25 subclause 15.2 is applicable. An MOT or test tool validation service is just one example of an outside service, but it is the only one for which an IT&T specific interpretation is considered necessary.

In IT&T, these requirements shall be interpreted as follows.
15.A.1 These requirements apply to the use of any MOT or test tools and any validation service supplied from outside the testing laboratory to which accreditation applies.  
15.A.2 For validation services provided outside the testing laboratory to which accreditation applies, the testing laboratory shall obtain an independent assurance of the quality of each such service. If no such assurance can be obtained, the testing laboratory has to have procedures to ensure that the MOT and test tools comply with the specified requirements; in other words, it has to provide its own validation service, making the use of the outside validation service irrelevant.
16 Complaints  
  No IT&T specific interpretation is required for the complaints requirements.