Abstract

Natural human-computer interaction based on multimodal feature fusion involves complex intelligent architectures that must cope with unexpected errors and mistakes made by users. These architectures should react to events that occur simultaneously, and possibly redundantly, across different input media. Intelligent agent-based generic architectures for multimedia multimodal dialog protocols are proposed. Global agents are decomposed into their relevant components, and each element is modeled separately using timed colored Petri nets. The elementary models are then linked together to obtain the full architecture. The generic components of the application are then monitored by an agent-based expert system that performs dynamic reconfiguration, adaptation, and evolution at the architectural level. For validation purposes, the proposed multi-agent architecture and its dynamic reconfiguration are each applied to practical examples.
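To make the timed colored Petri net modeling step concrete, the following is a minimal Python sketch of one elementary component, assuming a simple speech/gesture fusion scenario. The class names (Token, Place, Transition), the agreement guard, and the timing values are illustrative assumptions for this sketch and are not taken from the paper's models.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Token:
        color: object   # token value, e.g. a recognized speech or gesture event
        time: float     # earliest time at which the token is available

    @dataclass
    class Place:
        name: str
        tokens: list = field(default_factory=list)

    @dataclass
    class Transition:
        name: str
        inputs: list      # input places
        outputs: list     # output places
        guard: callable   # predicate over the tuple of consumed colors
        delay: float      # firing delay added to produced tokens
        action: callable  # maps consumed colors to the produced color

        def enabled(self, now):
            """Pick one available token per input place, if the guard holds."""
            choice = []
            for place in self.inputs:
                candidates = [t for t in place.tokens if t.time <= now]
                if not candidates:
                    return None
                choice.append(min(candidates, key=lambda t: t.time))
            colors = tuple(t.color for t in choice)
            return choice if self.guard(colors) else None

        def fire(self, now):
            """Consume one token per input place and emit delayed output tokens."""
            choice = self.enabled(now)
            if choice is None:
                return False
            for place, token in zip(self.inputs, choice):
                place.tokens.remove(token)
            out_color = self.action(tuple(t.color for t in choice))
            for place in self.outputs:
                place.tokens.append(Token(out_color, now + self.delay))
            return True

    # Hypothetical usage: fuse a redundant speech event and gesture event
    # into a single command token.
    speech = Place("speech_in", [Token("select", 0.0)])
    gesture = Place("gesture_in", [Token("select", 0.1)])
    fused = Place("fused_cmd")

    fuse = Transition(
        "fuse",
        inputs=[speech, gesture],
        outputs=[fused],
        guard=lambda cs: cs[0] == cs[1],  # accept only agreeing (redundant) events
        delay=0.05,                       # assumed fusion latency
        action=lambda cs: cs[0],
    )

    fuse.fire(now=0.2)
    print(fused.tokens)  # [Token(color='select', time=0.25)]

Under these assumptions, firing the fuse transition consumes one agreeing token from each input place and deposits a fused command token whose timestamp reflects the modeled fusion latency; linking several such components place-to-place would correspond to composing the elementary models into the full architecture described above.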
