
Fulltext:
179525.pdf
Embargo:
until further notice
Size:
349.2 KB
Format:
PDF
Description:
Publisher’s version
Publication year
2017
Author(s)
Publisher
S.l. : Institute of Electrical and Electronics Engineers (IEEE)
In
2017 International Joint Conference on Neural Networks (IJCNN), pp. 3688-3695
Annotation
2017 International Joint Conference on Neural Networks (IJCNN) (Anchorage, Alaska, 14-19 May 2017)
Publication type
Article in monograph or in proceedings

Organization
SW OZ DCC CO
SW OZ DCC AI
Languages used
English (eng)
Book title
2017 International Joint Conference on Neural Networks (IJCNN)
Page start
p. 3688
Page end
p. 3695
Subject
Action, intention, and motor control; Cognitive artificial intelligence; DI-BCB_DCC_Theme 2: Perception, Action and Control; DI-BCB_DCC_Theme 4: Brain Networks and Neuronal Communication
Abstract
This paper reviews and discusses research advances on "explainable machine learning" in computer vision. We focus on a particular area of the "Looking at People" (LAP) thematic domain: first impressions and personality analysis. Our aim is to make the computational intelligence and computer vision communities aware of the importance of developing explanatory mechanisms for computer-assisted decision-making applications, such as automated recruitment. Judgments based on personality traits are routinely made by human resource departments to evaluate candidates' capacity for social integration and their potential for career growth. However, inferring personality traits, and, more generally, the process by which we humans form a first impression of people, is highly subjective and may be biased. Previous studies have demonstrated that learning machines can mimic human decisions. In this paper, we go one step further and formulate the problem of explaining the models' decisions as a means of identifying which visual aspects are important, understanding how they relate to the suggested decisions, and possibly gaining insight into undesirable negative biases. We design a new challenge on the explainability of learning machines for first impressions analysis. We describe the setting, scenario, evaluation metrics, and preliminary outcomes of the competition. To the best of our knowledge, this is the first challenge on explainability in computer vision. In addition, our challenge design comprises several other quantitative and qualitative elements of novelty, including a "coopetition" setting, which combines competition and collaboration.
This item appears in the following Collection(s)
- Academic publications [233365]
- Electronic publications [116752]
- Faculty of Social Sciences [28938]