Schedule

This event has now ended. The videos of the talks will be posted here once provided by the recording crew.

Workshop location: Ballroom G

Time Name Title
09:00-09:02 Opening
09:02-09:20 Andrew Owens (UC Berkeley) Self-supervising Sight, Sound, and Image Forensics
09:20-10:00 Honglak Lee (Google, U of Michigan) Learning hierarchical generative models with structured variables
10:00-10:30 Coffee Break
10:30-11:00 Devi Parikh (Facebook, Georgia Tech) Forcing Vision and Language Models to Not Just Talk but Also Actually See
11:00-12:00 Virginia de Sa - Keynote (UCSD) Recognizing, Creating, and Exploiting Multiple Views
12:00-14:00 Lunch Break
14:00-14:30 Dan Yamins (Stanford) An Algorithmic (and Experimental) Roadmap for Cognitively-Inspired Self-Supervised Sensory Learning
14:30-15:00 Carl Vondrick (Google, Columbia U) Learning from Unlabeled Video
15:00-15:30 Paolo Favaro (U of Bern) Unsupervised Learning and Data Redundancy
15:30-16:00 Coffee Break
16:00-16:30 Amir Zamir (Stanford, UC Berkeley) Taskonomy Transfer Learning: Game of Tasks
16:30-17:00 Adrien Gaidon (Toyota Research Inst) Beyond Supervised Driving
17:00-17:30 Panel, Q&A Panelists: Virginia de Sa, Paolo Favaro, Adrien Gaidon, Honglak Lee, Carl Vondrick, Dan Yamins

Beyond Supervised Learning

What would human-like artificial visual perception look like? We may not know the full answer to this question, but we can foresee that such a perception model should have at least the following intriguing characteristics, to name a few: it will not demand a massive amount of supervision to learn new tasks; it can establish abstract relationships between learned tasks and subtasks and transfer knowledge across them; it can accumulate, preserve, and enhance learned tasks; and at least part of its behavior and solved tasks will emerge from experience without needing to be explicitly defined.

The field of computer vision has undergone a major shift with the rise of neural networks, which have achieved remarkable success on a variety of problems. However, the vast majority of solutions employ fully supervised learning, which typically targets a task-specific system with predefined inputs and outputs and relies on a massive amount of curated training data. This approach has prohibitive scalability issues, both in the amount of training data required and, more importantly, in the breadth of tasks addressed, since such systems fail to generalize efficiently to novel problems and do not yield solutions that "emerge" automatically. More efficient alternative approaches to learning perception appear feasible: cognitive studies suggest that living organisms can perform a wide range of tasks for which they never received direct supervision, by learning proxy tasks [Held1983,Smith2005,Rader1980] or through exploration [Gopnik2000,Smith2005].
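As a concrete illustration of such a proxy task (a sketch for clarity, not part of the workshop materials): one well-known self-supervised setup trains a model to predict which rotation was applied to an unlabeled image, so the supervisory signal is manufactured from the raw data itself. The function name and data below are hypothetical.

```python
import numpy as np

def make_rotation_pretext(images, rng=None):
    """Turn unlabeled images into a self-supervised classification task:
    each image is rotated by 0/90/180/270 degrees, and the 'label' is the
    rotation class (0..3). No human annotation is needed."""
    if rng is None:
        rng = np.random.default_rng(0)
    rotated, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))        # rotation class: 0, 1, 2, or 3
        rotated.append(np.rot90(img, k))   # rotate by k * 90 degrees
        labels.append(k)
    return np.stack(rotated), np.array(labels)

# Stand-in for unlabeled data: 8 random 32x32 grayscale "images".
unlabeled = np.random.default_rng(1).random((8, 32, 32))
x, y = make_rotation_pretext(unlabeled)
```

A classifier trained on `(x, y)` pairs like these must learn about object structure and orientation to succeed, which is the point of the proxy task: the features it acquires transfer to downstream tasks that do have scarce labels.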

Therefore, it is worthwhile to invest in alternatives to supervised learning. This is particularly important for the academic community, as experimental trends show that common neural networks are nearing practically satisfactory performance on predefined pattern recognition tasks when supervised with enough training data, and industry is mastering this approach for its needs.


Workshop goals

With this motivation, several approaches have been investigated, such as unsupervised learning, self-supervised learning, transfer learning, learning by exploration, and learning to learn. Despite notable progress, they still constitute a small fraction of research in the vision community and often yield performance inadequate for practical use. In this workshop, we wish to foster a discussion around alternatives to fully supervised learning to pave the way for future research on this topic. In particular, we would like to address:

1) Why do we need to go beyond fully supervised learning? What are the necessities, promises, and challenges?
2) What are the particularly unexplored directions in this context that the field should investigate in the future?
3) What role can active agents and learning through exploration play, exemplified by reinforcement learning and similar approaches?
4) What can we learn from the only working example of an all-encompassing intelligence: the brain?
5) What are the conceptual reasons behind the performance gap with supervised learning, and how can we close it? Are we using the right metrics?
6) What are the most successful practices when fully supervised learning is not the employed framework?
7) "Elephant in the room" topics that are visible over the horizon but underdiscussed, such as lifelong learning, compositional reasoning, memory-based inference, and evolutionary approaches.


We are also open to hearing the other side of this argument. Do you believe the practical value of going beyond fully supervised learning is insignificant right now, or that this isn't the right time for pursuing it? Show up at the panel!


Papers and Dates

Call for Papers

The workshop is open to any researcher studying any sub-topic of computer vision pertinent to going beyond the fully supervised learning paradigm. While we discourage half-baked results, we encourage well-substantiated new ideas and provocative new avenues of thinking.

Important: We are specifically looking for high-quality papers that rigorously target alternatives to fully supervised learning. We prefer to keep the accepted papers limited to a few high-quality ones presented orally, and we will not accept any papers if no submission meets the quality bar.

Submission Guidelines: The paper submission guidelines, template, and maximum length are the same as the main conference's (8 pages, exclusive of references). Please refer to the conference paper submission guidelines for the details and the author kit.

Submission webpage

Important Dates:

Submission Deadline: May 18, 2018, 11:59 PM PT

Reviews Due: Jun 1, 2018

Decision Notification: Jun 4, 2018

Camera-ready: Jun 11, 2018

Workshop Date: Jun 22, 2018 (full day)


Organizers

Amir R. Zamir
Jitendra Malik
Alexei Efros
Leo Guibas
Josh Tenenbaum
Silvio Savarese