[TPAMI2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset
Python · Updated Dec 25, 2024
State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!