
IEEE 3300-2024 - IEEE Standard Adoption of Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) Technical Specification Multimodal Conversation--Version 2


Format           | Availability       | Price and currency
English PDF      | Immediate download | 100.12 EUR
English Hardcopy | In stock           | 124.42 EUR
Standard number: IEEE 3300-2024
Released: 14.11.2024
ISBN: 979-8-8557-1417-3
Pages: 89
Status: Active
Language: English
DESCRIPTION

IEEE 3300-2024

Multimodal Conversation (MPAI-MMC) specifies:

1. Data Formats for the analysis of text, speech, and other non-verbal components as used in human-machine and machine-machine conversation applications.
2. Use Cases implemented in the AI Framework, using Data Formats from MPAI-MMC and other MPAI standards, and providing recognized applications in the Multimodal Conversation domain.

This Technical Specification includes the following Use Cases:

1. Conversation with Personal Status (CPS), enabling conversation and question answering with a machine able to extract the inner state of the entity it is conversing with, and showing itself as a speaking digital human able to express a Personal Status. By adding or removing minor components of this general Use Case, five Use Cases are spawned:
2. Conversation About a Scene (CAS), where a human converses with a machine, pointing at the objects scattered in a room and displaying Personal Status in their speech, face, and gestures, while the machine responds displaying its Personal Status in speech, face, and gesture.
3. Virtual Secretary for Videoconference (VSV), where an avatar not representing a human in a virtual avatar-based videoconference extracts Personal Status from Text, Speech, Face, and Gesture, displays a summary of what other avatars say, and receives and acts on comments.
4. Human-Connected Autonomous Vehicle Interaction (HCI), where humans converse with a machine displaying Personal Status after having been properly identified by the machine through their speech and face in outdoor and indoor conditions, while the machine responds by displaying its Personal Status in speech, face, and gesture.
5. Conversation with Emotion (CWE), supporting audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
6. Multimodal Question Answering (MQA), supporting requests for information about a displayed object.
7. Three Use Cases supporting text and speech translation applications. In each Use Case, users can specify whether speech or text is used as input and, if it is speech, whether their speech features are preserved in the interpreted speech:
7.1. Unidirectional Speech Translation (UST).
7.2. Bidirectional Speech Translation (BST).
7.3. One-to-Many Speech Translation (MST).
8. The Personal Status Extraction Composite AIM, which estimates the Personal Status conveyed by the Text, Speech, Face, and Gesture of a real or digital human.
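For orientation only, the sketch below shows one hypothetical way the Personal Status Extraction Composite AIM described in item 8 could be modeled in code: per-modality estimates from Text, Speech, Face, and Gesture are fused into a single Personal Status. This is not taken from the standard; the class names (PersonalStatus, PSEComposite), the field split into emotion, cognitive state, and social attitude, and the trivial fusion rule are illustrative assumptions. The normative Data Formats and AIM interfaces are those defined in MPAI-MMC V2 and the MPAI AI Framework.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical containers; field names and types are illustrative
# assumptions, not the normative MPAI-MMC Data Formats.

@dataclass
class PersonalStatus:
    """Illustrative stand-in for the Personal Status of one entity."""
    emotion: Optional[str] = None          # e.g. "happy"
    cognitive_state: Optional[str] = None  # e.g. "confused"
    social_attitude: Optional[str] = None  # e.g. "polite"


class PSEComposite:
    """Sketch of a Personal Status Extraction Composite AIM: it fuses
    per-modality estimates (Text, Speech, Face, Gesture) into a single
    Personal Status for a real or digital human."""

    def extract(self,
                text: Optional[str] = None,
                speech: Optional[bytes] = None,
                face: Optional[bytes] = None,
                gesture: Optional[bytes] = None) -> PersonalStatus:
        estimates: List[PersonalStatus] = []
        if text is not None:
            estimates.append(self._from_text(text))
        if speech is not None:
            estimates.append(self._from_speech(speech))
        if face is not None:
            estimates.append(self._from_face(face))
        if gesture is not None:
            estimates.append(self._from_gesture(gesture))
        return self._fuse(estimates)

    # Placeholder per-modality estimators; a real AIM would wrap trained
    # models registered in the AI Framework.
    def _from_text(self, text: str) -> PersonalStatus:
        return PersonalStatus()

    def _from_speech(self, speech: bytes) -> PersonalStatus:
        return PersonalStatus()

    def _from_face(self, face: bytes) -> PersonalStatus:
        return PersonalStatus()

    def _from_gesture(self, gesture: bytes) -> PersonalStatus:
        return PersonalStatus()

    def _fuse(self, estimates: List[PersonalStatus]) -> PersonalStatus:
        # Trivial fusion: keep the first non-empty value per factor.
        fused = PersonalStatus()
        for e in estimates:
            fused.emotion = fused.emotion or e.emotion
            fused.cognitive_state = fused.cognitive_state or e.cognitive_state
            fused.social_attitude = fused.social_attitude or e.social_attitude
        return fused

Usage would look like PSEComposite().extract(text="Hello", speech=audio_bytes), returning one PersonalStatus object regardless of which modalities are present; how the factors are actually estimated and combined is left entirely to the standard's normative definitions.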



New IEEE Standard - Active. This standard adopts MPAI Technical Specification Version 2 as an IEEE Standard. Multimodal Conversation (MPAI-MMC) specifies use cases, all of which share the use of artificial intelligence (AI) to enable a complete and intense form of human-machine conversation.