Towards Debiasing Temporal Sentence Grounding in Video

Abstract

The temporal sentence grounding in video (TSGV) task is to locate a temporal moment in an untrimmed video that matches a language query, i.e., a sentence. Without considering bias in moment annotations (e.g., start and end positions in a video), many models tend to capture statistical regularities of the moment annotations and do not learn cross-modal reasoning between video and language query well. In this paper, we propose two debiasing strategies, data debiasing and model debiasing, to force a TSGV model to capture cross-modal interactions. Data debiasing performs data oversampling through video truncation to balance the temporal distribution of moments in the training set. Model debiasing leverages video-only and query-only models to capture the distribution bias and forces the main model to learn cross-modal interactions. Using VSLNet as the base model, we evaluate the impact of the two strategies on two datasets that contain out-of-distribution test instances. Results show that both strategies are effective in improving model generalization capability. Equipped with both debiasing strategies, VSLNet achieves the best results on both datasets.
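The abstract describes the two strategies only at a high level. Below is a minimal, illustrative Python sketch of how they could be realized; the function names (`truncate_video`, `debiased_loss`), the random choice of truncation points, the log-space logit fusion, and the detached single-modality branches are assumptions for illustration, not the paper's actual implementation.

```python
import random
import torch
import torch.nn.functional as F

def truncate_video(video_len, start, end):
    """Data debiasing sketch: randomly drop part of the video before and
    after the annotated moment, shifting the moment's normalized position.
    In practice, truncated copies would be oversampled so that moment
    positions become more uniformly distributed over the training set."""
    keep_prefix = random.uniform(0.0, start)            # seconds kept before the moment
    keep_suffix = random.uniform(0.0, video_len - end)  # seconds kept after the moment

    clip_start = start - keep_prefix
    clip_end = end + keep_suffix
    new_len = clip_end - clip_start

    # Moment annotation re-expressed relative to the truncated clip.
    return new_len, start - clip_start, end - clip_start

def debiased_loss(fusion_logits, video_only_logits, query_only_logits, target):
    """Model debiasing sketch: single-modality branches absorb the
    annotation-distribution bias, so the cross-modal branch cannot rely on
    position priors alone. `target` indexes the ground-truth boundary
    position over a (batch, num_positions) logit grid."""
    # Fuse cross-modal logits with the bias-only branches in log space;
    # the bias branches are detached so gradients only shape the
    # cross-modal branch through the residual it must explain.
    fused = fusion_logits + video_only_logits.detach() + query_only_logits.detach()

    # Main loss on the fused prediction; auxiliary losses keep the bias
    # branches predictive of the annotation distribution.
    loss_main = F.cross_entropy(fused, target)
    loss_vid = F.cross_entropy(video_only_logits, target)
    loss_qry = F.cross_entropy(query_only_logits, target)
    return loss_main + loss_vid + loss_qry
```

At test time only `fusion_logits` would be used for prediction, so the grounding decision comes from cross-modal reasoning rather than from the moment-position prior captured by the single-modality branches.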

Publication
arXiv preprint arXiv:2111.04321
Hao Zhang