Released: 09.08.2024

IEEE 3168-2024 - IEEE Standard for Robustness Evaluation Test Methods for a Natural Language Processing Service That Uses Machine Learning

Format             Availability          Price and currency
English PDF        Immediate download    54.43 EUR
English Hardcopy   In stock              67.07 EUR
Standard number: IEEE 3168-2024
Released: 09.08.2024
ISBN: 979-8-8557-0910-0
Pages: 29
Status: Active
Language: English
DESCRIPTION

IEEE 3168-2024

This standard specifies test methods for evaluating the robustness of a natural language processing (NLP) service that uses machine learning. NLP models generally have a discrete input space and, for some tasks, a nearly infinite output space. The robustness of an NLP service is affected by various perturbations, including adversarial attacks. A methodology for categorizing the perturbations is specified, along with test cases for evaluating the robustness of an NLP service against the different perturbation categories. Metrics for evaluating the robustness of an NLP service are defined. NLP use cases and the corresponding applicable test methods are also described.
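The standard's perturbation categories and test cases are normative content not reproduced in this listing. As a rough illustration only, the following minimal Python sketch shows how character-level "general corruption" test cases of the kind described (deletion, addition, repetition) might be generated from a clean input; the function names and parameters are assumptions for illustration, not the standard's specification.

```python
import random

# Illustrative only: character-level general corruptions (deletion, repetition,
# addition). The actual perturbation categories and test cases are defined
# normatively in IEEE 3168-2024.

def delete_char(text: str, rng: random.Random) -> str:
    if len(text) < 2:
        return text
    i = rng.randrange(len(text))
    return text[:i] + text[i + 1:]

def repeat_char(text: str, rng: random.Random) -> str:
    if not text:
        return text
    i = rng.randrange(len(text))
    return text[:i + 1] + text[i] + text[i + 1:]

def add_char(text: str, rng: random.Random) -> str:
    i = rng.randrange(len(text) + 1)
    return text[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + text[i:]

def make_test_cases(sentence: str, n: int = 5, seed: int = 0) -> list[str]:
    """Generate n perturbed variants of a clean input sentence."""
    rng = random.Random(seed)
    perturbations = [delete_char, repeat_char, add_char]
    return [rng.choice(perturbations)(sentence, rng) for _ in range(n)]
```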

The purpose of this standard is to provide test methods for evaluating the robustness of an NLP service. The test methods can be used by service developers, service providers, and service users to determine the robustness of an NLP service.

New IEEE Standard - Active. Natural language processing (NLP) services that use machine learning are applied to a wide range of tasks and are widely deployed, usually accessed through application programming interface (API) calls. The robustness of these services is challenged by various well-known general corruptions and by adversarial attacks. Inadvertent or random deletion, addition, or repetition of characters or words are examples of general corruptions. Adversarial attacks generate adversarial character, word, or sentence samples that cause the models underpinning the NLP services to produce incorrect results. This standard proposes a method for quantitatively evaluating the robustness of NLP services. Under the method, the different cases the evaluation needs to be performed against are specified, and robustness metrics and their calculation are defined. With this standard, service stakeholders, including service developers, service providers, and service users, can develop an understanding of the robustness of the services. The evaluation can be performed during various phases in the life cycle of an NLP service, such as the testing phase, the validation phase, after deployment, and so forth.
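The specific robustness metrics and their calculation are defined in the standard itself and are not reproduced here. As a hedged sketch of the general shape of such an evaluation, the snippet below computes a generic output-consistency ratio for an NLP service accessed through an API wrapper; the `service`, `perturb`, and `robustness_ratio` names are hypothetical and do not correspond to metrics defined by IEEE 3168-2024.

```python
from typing import Callable, Iterable

# Illustrative only: a generic "output consistency under perturbation" ratio.
# The metrics defined by IEEE 3168-2024 and their calculation are specified in
# the standard; this sketch only shows the shape of a perturbation-based
# robustness evaluation against an NLP service reachable via API calls.

def robustness_ratio(
    service: Callable[[str], str],   # e.g., a wrapper around the service's API
    clean_inputs: Iterable[str],
    perturb: Callable[[str], str],   # one perturbation category
) -> float:
    """Fraction of inputs whose perturbed output matches the clean output."""
    total = 0
    consistent = 0
    for text in clean_inputs:
        total += 1
        if service(perturb(text)) == service(text):
            consistent += 1
    return consistent / total if total else 0.0
```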