Fast, invariant representation for human action in the visual system

Title: Fast, invariant representation for human action in the visual system
Publication Type: CBMM Memo
Year of Publication: 2016
Authors: Poggio T, Isik L, Tacchetti A.
Date Published: 01/2016
Publication Language: English
Abstract

The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography (MEG) decoding and a computational model to study action recognition from a novel dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by five actors at five viewpoints. We show for the first time that actor- and view-invariant representations for action arise in the human brain as early as 200 ms. We next extend a class of biologically inspired hierarchical computational models of object recognition to recognize actions from videos and to explain the computations underlying our MEG findings. This model achieves 3D viewpoint invariance through the same biologically inspired computational mechanism it uses to build invariance to position and scale. These results suggest that robustness to complex transformations, such as 3D viewpoint invariance, does not require special neural architectures, and further provide a mechanistic explanation of the computations driving invariant action recognition.
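The view-invariance claim rests on a cross-decoding analysis: a classifier trained to read out action identity from MEG responses at one set of viewpoints is tested on a held-out viewpoint at each time point. The sketch below illustrates that style of analysis; the data shapes, variable names, and synthetic data are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of time-resolved cross-view action decoding from MEG data.
# Assumes trial-by-sensor-by-time data; replace the synthetic arrays with real
# epoched MEG recordings. Not the authors' actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_sensors, n_times = 250, 306, 120      # e.g. 306-channel MEG, 120 samples
X = rng.standard_normal((n_trials, n_sensors, n_times))  # placeholder MEG data
actions = rng.integers(0, 5, n_trials)            # 5 action labels
views = rng.integers(0, 5, n_trials)              # 5 viewpoint labels

held_out_view = 0
train_idx = views != held_out_view
test_idx = views == held_out_view

accuracy = np.zeros(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
    clf.fit(X[train_idx, :, t], actions[train_idx])
    # Above-chance accuracy at time t indicates an action representation
    # that generalizes across viewpoint, i.e. a view-invariant readout.
    accuracy[t] = clf.score(X[test_idx, :, t], actions[test_idx])

print(accuracy.round(2))
```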

URL: https://cbmm.mit.edu/sites/default/files/publications/1601.01358v1.pdf
Citation Key: 23

CBMM Memo No.: 42