Mission

The Machine and Neuromorphic Perception Laboratory (a.k.a. kLab) in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology (RIT) uses machine learning to solve problems in computer vision. The lab's primary interests are goal-driven scene understanding and lifelong learning, and almost all of its current research uses deep learning. The lab also studies learning and vision in animals as a source of principles for creating brain-inspired algorithms. kLab is part of RIT's Multidisciplinary Vision Research Laboratory (MVRL) and is directed by Dr. Christopher Kanan.

Recent projects have included visual question answering algorithms, incremental learning in neural networks, low-shot deep learning, new methods for eye movement analysis, top-down and bottom-up saliency algorithms, perception systems for autonomous ships, neural-network-based tracking in video, active vision algorithms, and feature learning in hyperspectral imagery.


Research Topics & Selected Publications

Visual Question Answering (VQA) - VQA algorithms answer natural language questions about images.
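
For illustration only, here is a minimal PyTorch sketch of a common VQA baseline: encode the question with a recurrent network, project pooled CNN image features, fuse the two, and classify over a fixed answer vocabulary. All names and dimensions are illustrative assumptions, not the lab's specific architecture.

```python
import torch
import torch.nn as nn

class SimpleVQA(nn.Module):
    """Hypothetical VQA baseline: question encoder + image projection + answer classifier."""
    def __init__(self, vocab_size=10000, embed_dim=300, img_dim=2048,
                 hidden_dim=1024, num_answers=3000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.question_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_answers))

    def forward(self, img_feats, question_tokens):
        # img_feats: (batch, img_dim) pooled CNN features
        # question_tokens: (batch, seq_len) integer word indices
        _, h = self.question_rnn(self.embed(question_tokens))
        q = h.squeeze(0)                         # question summary, (batch, hidden_dim)
        v = torch.relu(self.img_proj(img_feats))
        return self.classifier(q * v)            # element-wise fusion -> answer logits

# Example forward pass with random inputs.
model = SimpleVQA()
logits = model(torch.randn(2, 2048), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 3000])
```

Treating VQA as classification over a set of frequent answers is one common design choice; many systems additionally attend over image regions rather than using a single pooled feature vector.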

Deep & Self-Taught Learning - We were early pioneers in self-taught feature learning, and we heavily use deep learning.

Brain-Inspired Computer Vision - We use principles from learning and vision in animals to build recognition algorithms:

  • Yousefhussien, M., Browning, N.A., and Kanan, C. (2016) Online Tracking using Saliency. In: WACV-2016.
  • Wang, P., Cottrell, G., and Kanan, C. (2015) Modeling the Object Recognition Pathway: A Deep Hierarchical Model Using Gnostic Fields. In: CogSci-2015.
  • Kanan, C. (2014) Fine-Grained Object Recognition with Gnostic Fields. In: WACV-2014. doi:10.1109/WACV.2014.6836122
  • Khosla, D., Huber, D.J., and Kanan, C. (2014) A Neuromorphic System for Visual Object Recognition. Biologically Inspired Cognitive Architectures, 8: 33-45.
  • Kanan, C. (2013) Recognizing Sights, Smells, and Sounds With Gnostic Fields. PLoS ONE, 8(1): e54088.

Human Eye Movements - People make roughly 180,000 eye movements per day. We have developed algorithms that predict what a person is doing from their eye movements, as well as saliency models that predict where a person will look in an image.
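
To illustrate what a saliency model produces, here is a small NumPy/SciPy sketch of a classical bottom-up approach (a simplified variant of spectral-residual saliency). It is only an example of the kind of map such models output, not the lab's gaze-analysis or saliency method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_residual_saliency(gray_image):
    """Return a normalized saliency map for a 2-D grayscale image (float array)."""
    spectrum = np.fft.fft2(gray_image)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # The "residual" is the log amplitude minus a smoothed copy of itself.
    residual = log_amplitude - gaussian_filter(log_amplitude, sigma=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=8)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

# Example: compute a saliency map for a random image.
sal = spectral_residual_saliency(np.random.rand(240, 320))
print(sal.shape, float(sal.min()), float(sal.max()))
```

High values in the map mark regions predicted to attract fixations; learned saliency models instead train on recorded eye-tracking data rather than using a hand-crafted frequency-domain rule.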

Active Computer Vision - Motivated by human eye movements, we build computer vision algorithms that sequentially sample regions of an image in order to recognize objects.
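
As a rough sketch of the idea (with assumed names and sizes, not the lab's actual models), the PyTorch example below classifies an image from a short sequence of small glimpses, with a recurrent state deciding where to look next.

```python
import torch
import torch.nn as nn

class GlimpseClassifier(nn.Module):
    """Hypothetical active-vision model: recognize an image from a few small crops."""
    def __init__(self, glimpse_size=16, hidden=128, num_classes=10):
        super().__init__()
        self.glimpse_size = glimpse_size
        self.encode = nn.Linear(glimpse_size * glimpse_size, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.locate = nn.Linear(hidden, 2)       # next fixation (x, y) in [-1, 1]
        self.classify = nn.Linear(hidden, num_classes)

    def forward(self, image, num_glimpses=4):
        # image: (batch, H, W) grayscale, values in [0, 1]
        b, H, W = image.shape
        g = self.glimpse_size
        h = image.new_zeros(b, self.rnn.hidden_size)
        loc = image.new_zeros(b, 2)              # start looking at the image center
        for _ in range(num_glimpses):
            # Convert normalized locations to top-left crop corners and extract patches.
            ys = ((loc[:, 1] + 1) / 2 * (H - g)).long()
            xs = ((loc[:, 0] + 1) / 2 * (W - g)).long()
            patches = torch.stack([image[i, int(ys[i]):int(ys[i]) + g,
                                         int(xs[i]):int(xs[i]) + g] for i in range(b)])
            h = self.rnn(torch.relu(self.encode(patches.reshape(b, -1))), h)
            loc = torch.tanh(self.locate(h))     # decide where to look next
        return self.classify(h)                  # classify after the final glimpse

# Example: four 16x16 glimpses from 64x64 images.
logits = GlimpseClassifier()(torch.rand(2, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```

In practice the fixation policy is usually trained with reinforcement learning or a differentiable attention mechanism, since hard image crops are not differentiable.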

Lifelong Learning - Lifelong learning concerns algorithms that learn incrementally from data streams, which poses unique challenges such as catastrophic forgetting.
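
One widely used ingredient is rehearsal: keep a small memory of past examples and mix them into each update. The PyTorch sketch below (with illustrative sizes and a toy data stream, not the lab's specific method) shows the basic loop with a reservoir-sampled replay buffer.

```python
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

buffer, buffer_cap, seen = [], 200, 0            # small replay memory

def toy_stream(num_steps=1000):
    # Stand-in for a real, possibly non-stationary data stream.
    for _ in range(num_steps):
        yield torch.randn(32), torch.randint(0, 10, ())

for x, y in toy_stream():
    # Mix the new example with a few replayed ones, then take one gradient step.
    replay = random.sample(buffer, min(8, len(buffer)))
    xs = torch.stack([x] + [bx for bx, _ in replay])
    ys = torch.stack([y] + [by for _, by in replay])
    opt.zero_grad()
    loss_fn(model(xs), ys).backward()
    opt.step()

    # Reservoir sampling keeps the buffer a uniform random sample of the stream.
    seen += 1
    if len(buffer) < buffer_cap:
        buffer.append((x, y))
    elif random.random() < buffer_cap / seen:
        buffer[random.randrange(buffer_cap)] = (x, y)
```

Without the replay step, training on a stream whose distribution shifts over time typically causes the network to forget earlier data.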


Lab Members

Dr. Christopher Kanan

Lab Director & Principal Investigator

Machine Learning, Computer Vision, Theoretical Neuroscience/Psychology

Dr. Ashish Gupta

Postdoc

Computer Vision

Kushal Kafle

Imaging Science PhD Candidate

Deep Learning, Visual Question Answering

Manoj Acharya

Imaging Science PhD Student

VQA, Object Detection

Tyler Hayes

Imaging Science PhD Student
Co-Advisor: Nathan Cahill

Lifelong Machine Learning

Ryne Roady

Imaging Science PhD Student

Uncertainty in Neural Networks

Robik Shrestha

Imaging Science PhD Student

Deep Learning

Rodney Sanchez

Electrical Engineering BS Student

Deep Reinforcement Learning, Robotics

Usman Mahmood

Imaging Science PhD Student

Deep Learning for Radiology

Zhongchao Qian

Imaging Science PhD Student

Lifelong Learning

Justin Namba

CIT BS Student

Computer Vision

Sophia Kotok

Math Modeling PhD Student

Lifelong Learning

Frank Cwitkowitz

Computer Engineering MS Student

Music Technology, Deep Learning


Affiliate Lab Members

Aneesh Rangnekar

Imaging Science PhD Student
Main Advisor: Matt Hoffman

Deep Reinforcement Learning, Tracking

Michal Kucer

Imaging Science PhD Student
Main Advisor: Dave Messinger

Deep Learning for Aesthetics

Anjali Jogeshwar

Imaging Science PhD Student
Main Advisor: Jeff Pelz

Brain-Inspired Vision Models