University of Surrey


Multimodal emotion recognition

Haq, S and Jackson, PJB (2010) Multimodal emotion recognition. pp. 398-423.

Full text not available from this repository.


Recent advances in human-computer interaction technology go beyond the successful transfer of data between human and machine by seeking to improve the naturalness and friendliness of user interactions. An important augmentation, and potential source of feedback, comes from recognizing the user's expressed emotion or affect. This chapter presents an overview of research efforts to classify emotion using different modalities: audio, visual and audio-visual combined. Theories of emotion provide a framework for defining emotional categories or classes. The first step, then, in the study of human affect recognition involves the construction of suitable databases. The authors describe fifteen audio, visual and audio-visual data sets, and the types of feature that researchers have used to represent the emotional content. They discuss data-driven methods of feature selection and reduction, which discard noise and irrelevant information to maximize the concentration of useful information. They focus on the popular types of classifier that are used to decide to which emotion class a given example belongs, and methods of fusing information from multiple modalities. Finally, the authors point to some interesting areas for future investigation in this field, and conclude. © 2011, IGI Global.
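As a concrete illustration of the fusion step the abstract describes, the sketch below shows decision-level (late) fusion, where per-class scores from separate audio and visual classifiers are combined by a weighted sum and the top-scoring emotion is selected. This is a minimal, hypothetical example for exposition only — the class labels, scores, and weight are invented, and the chapter itself surveys several fusion strategies rather than prescribing this one.

```python
# Illustrative decision-level (late) fusion of audio and visual
# emotion classifiers. All probabilities below are made up.

def fuse_scores(audio_probs, visual_probs, audio_weight=0.5):
    """Weighted sum of per-class probabilities from two modalities."""
    w = audio_weight
    classes = audio_probs.keys() | visual_probs.keys()
    return {c: w * audio_probs.get(c, 0.0) + (1 - w) * visual_probs.get(c, 0.0)
            for c in classes}

# Hypothetical posteriors for one utterance over three emotion classes.
audio = {"anger": 0.6, "happiness": 0.3, "sadness": 0.1}
visual = {"anger": 0.2, "happiness": 0.7, "sadness": 0.1}

fused = fuse_scores(audio, visual, audio_weight=0.4)
prediction = max(fused, key=fused.get)
print(prediction)  # the modality weighting tips the decision to "happiness"
```

Here the visual channel is weighted more heavily (0.6), so its strong "happiness" score dominates the audio channel's "anger" score. Feature-level (early) fusion, by contrast, would concatenate the audio and visual feature vectors before a single classifier.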

Item Type: Book chapter
Divisions: Surrey research (other units)
Authors: Haq, S and Jackson, PJB
Date: 1 December 2010
DOI: 10.4018/978-1-61520-919-4.ch017
Depositing User: Symplectic Elements
Date Deposited: 17 May 2017 13:16
Last Modified: 24 Jan 2020 23:42





© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800