Comparing the ability of humans and DNNs to recognise closed contours in cluttered images

2018 | journal article


Cite this publication

Funke, C.; Borowski, J.; Wallis, T.; Brendel, W.; Ecker, A. & Bethge, M. (2018). Comparing the ability of humans and DNNs to recognise closed contours in cluttered images. Journal of Vision, 18(10), 800. DOI: https://doi.org/10.1167/18.10.800

Documents & Media

License

GRO License

Details

Authors
Funke, Christina; Borowski, Judy; Wallis, Thomas; Brendel, Wieland; Ecker, Alexander; Bethge, Matthias
Abstract
Given the recent success of machine vision algorithms in solving complex visual inference tasks, it becomes increasingly challenging to find tasks for which machines are still outperformed by humans. We seek to identify such tasks and test them under controlled settings. Here we compare human and machine performance in one candidate task: discriminating closed and open contours. We generated contours using simple lines of varying length and angle, and minimised statistical regularities that could provide cues. It has been shown that deep neural networks (DNNs) trained for object recognition are very sensitive to texture cues (Gatys et al., 2015). We use this insight to maximise the difficulty of the task for the DNN by adding random natural images to the background. Humans performed a two-interval forced-choice (2IFC) task discriminating closed and open contours (100 ms presentation) with and without background images. We trained a readout network to perform the same task using the pre-trained features of the VGG-19 network. With no background image (contours black on grey), humans reach a performance of 92% correct on the task, dropping to 71% when background images are present. Surprisingly, the model's performance is very similar to that of humans, at 91% dropping to 64% with background. One factor contributing to the drop in human performance with background images is that dark lines become difficult to discriminate from the natural images, whose average pixel values are dark. Changing the polarity of the lines from dark to light improved human performance (96% without and 82% with background image) but not model performance (88% without and 64% with background image), indicating that humans could largely ignore the background image whereas the model could not. These results show that the human visual system is able to discriminate closed from open contours in a more robust fashion than transfer learning from the VGG network.
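The abstract does not specify which VGG-19 layer the readout was trained on or what form the readout network took. The following is a minimal sketch of the transfer-learning setup described above, assuming PyTorch/torchvision, a frozen ImageNet-pretrained VGG-19 feature extractor, and a simple linear readout head; the layer choice, head architecture, and training details are illustrative assumptions, not taken from the publication.

# Minimal sketch: binary closed-vs-open contour readout on frozen VGG-19 features.
# Hypothetical details: cut-off layer, global average pooling, linear head.
import torch
import torch.nn as nn
from torchvision import models

class VGGReadout(nn.Module):
    def __init__(self, feature_layer: int = 21, n_classes: int = 2):
        super().__init__()
        # ImageNet-pretrained VGG-19 (torchvision >= 0.13 "weights" API)
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        # Keep the convolutional features up to `feature_layer` and freeze them;
        # only the readout head is trained. Index 21 ends in block 4 (512 channels).
        self.features = nn.Sequential(*list(vgg.features.children())[:feature_layer])
        for p in self.features.parameters():
            p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d(1)       # global average pooling (assumed)
        self.readout = nn.Linear(512, n_classes)  # linear readout head (assumed)

    def forward(self, x):
        with torch.no_grad():                     # features stay fixed
            f = self.features(x)
        f = self.pool(f).flatten(1)
        return self.readout(f)

# Usage: train only the readout on (image, label) pairs,
# with label 0 = open contour, 1 = closed contour.
model = VGGReadout()
optimizer = torch.optim.Adam(model.readout.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

In this sketch, only the linear head receives gradient updates, which mirrors the idea of reading out a new task from fixed, object-recognition-trained features rather than fine-tuning the whole network.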
Issue Date
2018
Journal
Journal of Vision 
ISSN
1534-7362
Language
English
