Beyond Supervised Learning

What would human-like artificial visual perception look like? We may not know the answer to this question, but we can foresee that such perception would have at least the following characteristics: it will not demand many examples to learn a new task; it can establish abstract relationships between learned tasks and transfer knowledge across them; and it can accumulate, preserve, and enhance learned tasks.

The field of computer vision has undergone a major shift with the dominance of neural networks, which have achieved remarkable success on a variety of problems. The vast majority of employed methods use fully supervised learning, which essentially aims for a task-specific system trained on a massive amount of curated data. Despite its notable success, this approach faces prohibitive scalability issues, both in the amount of required training data and, more importantly, in the tasks it can address, since such systems fail to generalize efficiently to novel problems. In other words, they constitute narrow perception rather than broad perception. They also leave an unsatisfying sense that more efficient approaches to learning are feasible; for instance, cognitive studies suggest that living organisms can perform a wide range of tasks for which they never received direct supervision, by learning proxy tasks [Held1983, Smith2005, Rader1980]. This suggests that success can be achieved outside the paradigm of fully supervising a task-specific system.

It is therefore natural to invest in alternatives to supervised learning. This is particularly important for the academic community, as experimental trends show that common neural networks are nearing practically satisfactory performance on predefined tasks when supervised with enough training data, and industry is mastering this approach for its needs.

Workshop Goals

With this motivation, several alternative approaches, such as unsupervised learning, self-supervised learning, generic/broad perception, learning through exploration, and learning to learn, have been investigated. Despite notable progress, they still constitute a small fraction of research in the vision community and suffer from a considerable performance gap relative to supervised methods. In this workshop, we wish to foster a discussion around alternatives to fully supervised learning to pave the way for the inevitable future research on this topic. In particular, we would like to address:

1) Why do we need to go beyond fully supervised learning? What are the necessities, promises, and challenges?
2) What are the particularly unexplored directions in this context that the field should investigate in the future?
3) Active agents and learning through exploration, exemplified by reinforcement learning and similar approaches.
4) What can we learn from the only working example of an all-encompassing intelligence: the brain?
5) What are the conceptual reasons behind the performance gap with supervised learning, and how can we close it?
6) What are the most successful practices when fully supervised learning is not the employed framework?
7) "Elephant in the room" topics that are visible on the horizon but barely addressed, such as lifelong learning, compositional reasoning, memory-based inference, accumulation of tasks/skills, and evolutionary approaches.

We are also open to hearing the other side of this argument. Do you believe anything beyond fully supervised learning is insignificant or should not be pursued for now? Or that most of the value lies in fully supervised learning? Drop us a line or just show up at the panel!

Papers and Dates


Call for Papers

The workshop is open to any researcher studying any sub-topic of computer vision pertinent to going beyond the fully supervised learning paradigm. Although we discourage half-baked results, we encourage substantiated, brave new ideas and provocative new avenues of thinking.

Important: We are specifically looking for high-quality papers that rigorously target alternatives to fully supervised learning. We would rather keep the accepted papers limited to only a few high-quality ones, presented orally. We will not accept any papers if no submission passes the set quality bar.

Submission Guidelines: The paper submission guidelines, template, and maximum length are the same as the main conference's (8 pages, exclusive of references). Please refer to the conference paper submission guidelines for details and the author kit.

CMT submission website

Important Dates:

Submission Deadline: May 18, 2018, 11:59 PM PT

Reviews Due: Jun 1, 2018

Decision Notification: Jun 4, 2018

Camera-ready: Jun 11, 2018

Workshop Date: Jun 22, 2018 (full day)


Amir R. Zamir
Jitendra Malik
Alexei Efros
Leo Guibas
Josh Tenenbaum
Silvio Savarese