Doctoral Project: Efficient Multi-task Deep Learning

Principal Investigators

Description of the doctoral project

The successful candidate will carry out experiments training deep learning models and analyzing the learned representations across a battery of visual tasks. Various transfer learning paradigms will be implemented and analyzed to shed light on the following questions (a minimal sketch of one such paradigm follows the list):

  • Can one find representations that are optimal for multiple but related objectives?
  • Does deep learning construct representations in which multi-task solutions outperform specialized, single-task representations, thereby reducing the need for large datasets when new tasks are added?
  • Can the presence of multiple tasks be used to isolate elements of the representation, assign roles to them, and identify the conditions under which the network makes decisions?
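
By way of illustration, the listing below sketches one multi-task paradigm of the kind mentioned above: a single shared encoder feeding several task-specific heads that are trained jointly. The architecture, the task names, and the equal loss weighting are illustrative assumptions for exposition, not a design prescribed by the project.

    # Minimal multi-task sketch: a shared encoder with task-specific heads.
    # Task names, layer sizes, and the loss weighting are illustrative only.
    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, num_classes_per_task):
            super().__init__()
            # Shared representation learned across all tasks.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # One lightweight head per task on top of the shared features.
            self.heads = nn.ModuleDict({
                task: nn.Linear(32, n) for task, n in num_classes_per_task.items()
            })

        def forward(self, x):
            z = self.encoder(x)  # shared representation
            return {task: head(z) for task, head in self.heads.items()}

    model = MultiTaskNet({"objects": 10, "textures": 5})
    x = torch.randn(8, 3, 64, 64)
    targets = {"objects": torch.randint(0, 10, (8,)),
               "textures": torch.randint(0, 5, (8,))}
    outputs = model(x)
    # Joint loss: equally weighted sum over tasks (the weighting is a free choice).
    loss = sum(nn.functional.cross_entropy(outputs[t], targets[t]) for t in outputs)
    loss.backward()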

Deep representations will be compared with visual representations in the human brain recorded during object recognition tasks. Efficient transfer learning techniques will be evaluated in the context of a synthetic system for rapid object recognition and visual search.
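
As an example of how such a comparison could be set up, the sketch below uses representational similarity analysis (RSA), one common way of relating model activations to recorded brain responses. The project description does not commit to this particular method, and the arrays below are random placeholders standing in for real activations and recordings.

    # RSA sketch: compare the representational geometry of model features
    # with brain responses to the same stimuli. Data here are placeholders.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    n_stimuli = 50
    model_features = np.random.randn(n_stimuli, 512)   # e.g. penultimate-layer activations
    brain_responses = np.random.randn(n_stimuli, 200)  # e.g. voxel or sensor patterns

    # Representational dissimilarity matrices (condensed form): pairwise
    # correlation distance between stimulus-evoked patterns.
    rdm_model = pdist(model_features, metric="correlation")
    rdm_brain = pdist(brain_responses, metric="correlation")

    # Similarity of the two representational geometries.
    rho, _ = spearmanr(rdm_model, rdm_brain)
    print(f"model-brain RSA (Spearman rho): {rho:.3f}")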


Project start date: October 1, 2019 (an earlier starting date may be possible)

Prerequisites

Applicants must hold a Master's degree in Computational Neuroscience, Computer Science, Physics, Mathematics, or a related field. Applicants should have very good programming skills, strong competence in machine learning, and a strong interest in working at the interface of machine learning and cognitive science. Practical experience with deep learning techniques is a plus.

Contact
