• HIGH-RESOLUTION fMRI AT 7 TESLA: The Tong Lab uses high-resolution fMRI at 7 Tesla to investigate the functional role of the early visual system in visual perception, attentional selection, figure-ground processing, predictive coding, and visual working memory.
  • FACE AND OBJECT PROCESSING: We study the neurocomputational bases of face and object processing using behavioral methods, functional neuroimaging, TMS, and the development of convolutional neural networks (CNNs) as a model for human recognition performance.
  • MECHANISMS OF VISUAL ATTENTION: Our lab investigates how bottom-up saliency and top-down attentional signals interact in the early visual system. We are also developing and testing a neurocomputational model of object-based attentional selection.
  • VISUAL WORKING MEMORY: The Tong Lab pursues research on the behavioral and neural bases of visual working memory. Our goal is to characterize and model the neural representations that underlie visual working memory.
  • DEEP LEARNING NETWORKS OF VISUAL PROCESSING: A growing focus of the lab is the application and development of deep learning networks as potential models of human visual performance, especially for object recognition tasks. We are currently working with a variety of networks, including AlexNet, VGG-19, GoogLeNet, ResNet, and Inception-v3. We have also begun constructing our own deep network architectures.



Noise-trained deep neural networks effectively predict human vision and its neural responses to challenging images.

Jang, H., McCormack, D., & Tong, F. (2021).

PLOS Biology, 19(12), e3001418.

Convolutional neural networks trained with a developmental sequence of blurry to clear images reveal core differences between face and object processing.

Jang, H. & Tong, F. (2021).

Journal of Vision, 21(12), 6.

Resolving the spatial profile of figure enhancement in human V1 through population receptive field modeling.

Poltoratski, S., & Tong, F. (2020).

Journal of Neuroscience, 40(16), 3292-3303.

Figure-ground modulation in the human lateral geniculate nucleus is distinguishable from top-down attention.

Poltoratski, S., Maier, A., Newton, A. T., & Tong, F. (2019).

Current Biology, 29(12), 2051-2057.


March 2022

The Tong Lab is recruiting a research assistant.

October 2021

The Tong Lab is recruiting a postdoctoral fellow.

August 2021

Hojin successfully defends his PhD thesis, "Exploring the robust nature of human visual object recognition through comparisons with convolutional neural networks." Congrats, Dr. Jang!

December 2020

The lab begins a 5-year NIH-funded project on medical image perception and lung nodule detection, supported by the National Cancer Institute.

August 2020

Kaylee Bashor has joined the lab as a research analyst. Welcome Kaylee!

September 2019

Tim Kietzmann begins his new position as assistant professor at the Donders Institute for Brain, Cognition and Behaviour. Congrats Tim!

July 2019

At the Medical Image Perception Society conference, Frank presents a new branch of the lab's research on how people gain expertise at detecting lung nodules in chest X-rays.

March 2019

Sonia and Frank's paper showing figure-ground modulation in the human lateral geniculate nucleus is published in Current Biology.