ICDM 2022 Workshop
Foundation Models in Vision and Language


State-of-the-art AI systems can learn directly from whatever information they perceive, without relying on heavily labeled datasets for guidance. Such easy-to-collect data provide a more flexible form of supervision and a more affordable path to data scalability. By training large deep neural networks with many parameters on such heterogeneous data, recent foundation models have shown great promise in generality and usability.

One appealing property of these foundation models is their astonishing performance on zero-shot and few-shot adaptation to a variety of new real-world tasks. We organize the "Foundation Models in Vision and Language (FOMO-VL)" workshop to bring together the academic and industry communities working on foundation models for real-world problems, focusing on the challenge of building scalable AI models that can learn from heterogeneous data to gain generalized task-level transfer ability. This year, the FOMO-VL workshop will be held (tentatively in a hybrid mode) in conjunction with ICDM 2022 in Orlando, FL, USA.

Important Dates

  • Workshop paper submission deadline: October 10, 2022
  • Acceptance notification to authors: October 13, 2022
  • Workshop date: November 28, 2022

How to Submit

Please submit your papers through the Online Submission System. Refer to the Call for Papers for details on the topics and other related information. Thanks to the support of Amazon, cash awards will be given to the best papers. We look forward to your excellent work!

Invited Speakers (TBD)

Danqi Chen
Princeton University

Xifeng Yan
UC Santa Barbara

Tengyu Ma
Stanford University

Letitia Parcalabescu
University of Heidelberg

Jason Baldridge
Google Brain

Lu Yuan
Microsoft Cloud and AI

Jiasen Lu
Allen Institute for Artificial Intelligence

Justin Lin
DAMO Academy, Alibaba Group

Advisory Committee -- Panelists (TBD)

Jianfeng Gao
Microsoft Research, Redmond

Ruslan Salakhutdinov
Carnegie Mellon University

Ludwig Schmidt
University of Washington, OpenCLIP

Mohammad Norouzi
Google Brain

Changyou Chen
University at Buffalo, Amazon

Chunyuan Li
Microsoft Research, Redmond

Jiahui Yu
Google Brain

Hongxia Yang
Alibaba Group

Paul Pu Liang
Carnegie Mellon University

Yi Xu

Son Tran

Belinda Zeng
