What Changed
- Existing benchmarks largely assess audio and video in isolation or rely on coarse embedding similarity, failing to capture the fine-grained joint correctness required by realistic prompts.
- The paper introduces AVGen-Bench, a task-driven benchmark for text-to-audio-video (T2AV) generation featuring high-quality prompts across 11 real-world categories.
Why It Matters
Context
Existing benchmarks largely assess audio and video in isolation or rely on coarse embedding similarity, failing to capture the fine-grained joint correctness required by realistic prompts. The paper introduces AVGen-Bench, a task-driven benchmark for text-to-audio-video (T2AV) generation featuring high-quality prompts across 11 real-world categories. To support comprehensive assessment, the authors propose a multi-granular evaluation framework that combines lightweight specialist models with Multimodal Large Language Models (MLLMs), enabling evaluation from perceptual quality to fine-grained semantic controllability. Their evaluation reveals a pronounced gap between strong audio-visual aesthetics and weak semantic reliability, including persistent failures in text rendering, speech coherence, and physical reasoning, as well as a universal breakdown in musical pitch control. Code and benchmark resources are available at http://aka.ms/avgenbench.
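To make the "multi-granular" idea concrete, here is a minimal sketch of how specialist-model scores and an MLLM judgment might be combined into one aggregate. All class names, score fields, and weights below are illustrative assumptions for exposition, not AVGen-Bench's actual API or scoring formula.

```python
# Hypothetical sketch of a multi-granular A/V evaluation: lightweight
# specialist models cover perceptual quality, while an MLLM judges
# fine-grained semantic adherence. Names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class SampleScores:
    video_quality: float   # e.g. from a lightweight video-aesthetics model, in [0, 1]
    audio_quality: float   # e.g. from a specialist audio-quality model, in [0, 1]
    semantic_match: float  # e.g. an MLLM's prompt-adherence judgment, in [0, 1]
    av_sync: float         # e.g. an audio-visual synchrony score, in [0, 1]

def aggregate(scores: SampleScores,
              weights=(0.25, 0.25, 0.35, 0.15)) -> float:
    """Weighted aggregate across granularities (weights are illustrative)."""
    parts = (scores.video_quality, scores.audio_quality,
             scores.semantic_match, scores.av_sync)
    return sum(w * p for w, p in zip(weights, parts))

# A sample with strong aesthetics but weak semantics: the semantic term
# drags the aggregate down, mirroring the gap the benchmark reports.
sample = SampleScores(video_quality=0.9, audio_quality=0.85,
                      semantic_match=0.4, av_sync=0.7)
print(aggregate(sample))
```

Reporting per-granularity scores alongside the aggregate (rather than only the weighted total) is what exposes failure modes like weak text rendering or pitch control even when overall aesthetics look strong.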
For Builders