EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone

1Johns Hopkins University, 2Meta AI, 3University of Toronto, 4National University of Singapore

TL;DR

We introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement over the previous generation, by incorporating cross-modal fusion directly into the video and language backbones.

EgoVLPv2 Framework

Computation of the three objectives, LEgoNCE, LMLM, and LVTM. We insert cross-modal fusion into the uni-modal backbones with a gating mechanism. During pre-training, every forward iteration consists of three steps: (i) the cross-attention modules are switched off, EgoVLPv2 acts as a dual encoder, and LEgoNCE is computed; (ii) cross-attention is switched on, EgoVLPv2 acts as a fusion encoder, and a video paired with a masked narration is fed into EgoVLPv2 to compute LMLM; (iii) cross-attention is kept on, and hard-negative video-narration pairs are fed into EgoVLPv2 to compute LVTM. This fusion-in-the-backbone strategy results in a more lightweight and flexible model than using fusion-specific transformer layers.
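The sketch below illustrates how one such pre-training iteration could look in PyTorch. It is a minimal sketch, not the released code: the helper names (ego_nce, mask_tokens, mine_hard_negatives), the fusion flag, and the head argument are our own placeholders standing in for the paper's components.

import torch
import torch.nn.functional as F

# Hypothetical helpers (not the official implementation):
#   ego_nce             - contrastive loss over uni-modal embeddings
#   mask_tokens         - randomly masks narration tokens, returns inputs and labels
#   mine_hard_negatives - builds hard-negative video-narration pairs from step (i) similarities
def pretrain_step(model, video, narration, optimizer):
    # (i) Cross-attention OFF: EgoVLPv2 acts as a dual encoder and the
    #     EgoNCE contrastive loss is computed on the uni-modal embeddings.
    v_emb, t_emb = model(video, narration, fusion=False)
    loss_egonce = ego_nce(v_emb, t_emb)

    # (ii) Cross-attention ON: EgoVLPv2 acts as a fusion encoder; the video and
    #      a masked narration are fed in to compute the MLM loss.
    masked_narration, labels = mask_tokens(narration)
    mlm_logits = model(video, masked_narration, fusion=True, head="mlm")
    loss_mlm = F.cross_entropy(mlm_logits.transpose(1, 2), labels, ignore_index=-100)

    # (iii) Cross-attention kept ON: hard-negative video-narration pairs go
    #       through the fusion encoder for the video-text matching loss.
    hard_video, hard_narration, match_labels = mine_hard_negatives(video, narration, v_emb, t_emb)
    vtm_logits = model(hard_video, hard_narration, fusion=True, head="vtm")
    loss_vtm = F.cross_entropy(vtm_logits, match_labels)

    # Sum the three objectives and take one optimizer step.
    loss = loss_egonce + loss_mlm + loss_vtm
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()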

Abstract

Video-language pre-training (VLP) has become increasingly important due to its ability to generalize to various vision and language tasks. However, existing egocentric VLP frameworks utilize separate video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of a unified system.

In this work, we introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement over the previous generation, by incorporating cross-modal fusion directly into the video and language backbones. EgoVLPv2 learns strong video-text representations during pre-training and reuses the cross-modal attention modules to support different downstream tasks in a flexible and efficient manner, reducing fine-tuning costs. Moreover, our proposed fusion in the backbone strategy is more lightweight and compute-efficient than stacking additional fusion-specific layers.
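As a rough illustration of the fusion-in-the-backbone idea, the self-contained sketch below adds a gated cross-attention path to an otherwise uni-modal transformer block; when fusion is switched off, the block reduces to its uni-modal form (dual-encoder mode). The gate parameterization (a zero-initialized tanh gate) and the toy dimensions are our assumptions for illustration, not the paper's exact design.

import torch
import torch.nn as nn

class GatedFusionBlock(nn.Module):
    # Uni-modal transformer block with an optional, gated cross-attention path.
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # starts closed, learned during pre-training (our assumption)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x, other=None, fusion: bool = False):
        # Standard self-attention path (always active).
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Gated cross-attention to the other modality; skipped when fusion is off,
        # so the block behaves like its uni-modal counterpart (dual encoder).
        if fusion and other is not None:
            h = self.norm2(x)
            x = x + self.gate.tanh() * self.cross_attn(h, other, other, need_weights=False)[0]
        return x + self.ffn(self.norm3(x))

# Toy usage: video and text token sequences with the same hidden size.
video_tokens = torch.randn(2, 16, 256)
text_tokens = torch.randn(2, 8, 256)
block = GatedFusionBlock()
dual = block(video_tokens, fusion=False)               # dual-encoder mode (EgoNCE step)
fused = block(video_tokens, text_tokens, fusion=True)  # fusion-encoder mode (MLM / VTM steps)
print(dual.shape, fused.shape)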

Extensive experiments on a wide range of VL tasks demonstrate the effectiveness of EgoVLPv2, which achieves consistent state-of-the-art performance over strong baselines across all downstream tasks.


Main Results


Cross Attention Visualizations

QFVS Results


Summarized Video 2 of QFVS

Query: All scenes containing stores and hands

Summarized Video 3 of QFVS

Query: All scenes containing faces and chocolates

BibTeX

@article{pramanick2023egovlpv2,
  author    = {Pramanick, Shraman and Song, Yale and Nag, Sayan and Lin, Kevin Qinghong and Shah, Hardik and Shou, Mike Zheng and Chellappa, Rama and Zhang, Pengchuan},
  title     = {EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone},
  journal   = {arXiv preprint arXiv:2307.05463},
  year      = {2023}
}

Acknowledgement

This codebase is built on the EgoVLP, LaViLa, FIBER, and VSLNet repositories. We would like to thank the respective authors for their help, and the Meta AI team for discussions and feedback. Shraman Pramanick and Rama Chellappa were partially supported by a MURI program from the Army Research Office under the grant W911NF17-1-0304. This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The template of this website is borrowed from the Nerfies website.