Multi-Level Feature Dynamic Fusion Neural Radiance Fields for Audio-Driven Talking Head Generation

Bibliographic Details
Title: Multi-Level Feature Dynamic Fusion Neural Radiance Fields for Audio-Driven Talking Head Generation
Authors: Wenchao Song, Qiong Liu, Yanchao Liu, Pengzhou Zhang, Juan Cao
Source: Applied Sciences, Vol 15, Iss 1, p 479 (2025)
Publisher Information: MDPI AG, 2025.
Publication Year: 2025
Collection: LCC:Technology; LCC:Engineering (General). Civil engineering (General); LCC:Biology (General); LCC:Physics; LCC:Chemistry
Subject Terms: talking head generation, neural radiance fields, audio-visual feature fusion, cross-modal content generation, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
More Details: Audio-driven cross-modal talking head generation has advanced significantly in recent years; it aims to generate a talking head video that corresponds to a given audio sequence. Among these approaches, NeRF-based methods can generate videos of a specific person with more natural motion than one-shot methods. However, previous approaches fail to distinguish the importance of different facial regions, losing features from information-rich regions. To alleviate this problem and improve video quality, we propose MLDF-NeRF, an end-to-end method for talking head generation that achieves better vector representations through multi-level feature dynamic fusion. Specifically, we design two modules in MLDF-NeRF to enhance the cross-modal mapping between audio and different facial regions. We first develop a multi-level tri-plane hash representation that uses three sets of tri-plane hash networks with different resolution limits to capture the dynamic information of the face more accurately. We then draw on the idea of multi-head attention and design an efficient audio-visual fusion module that explicitly fuses audio features with image features from the different planes, improving the mapping between audio features and spatial information. This design also minimizes interference from facial areas unrelated to the audio, further improving the quality of the representation. Quantitative and qualitative results show that our method generates talking heads with natural motion and realistic details, and that it outperforms previous methods in image quality, lip synchronization, and other aspects.
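
Illustrative Sketch: The abstract describes two components: a multi-level tri-plane representation sampled at several resolutions, and an attention-based audio-visual fusion module. The following is a minimal sketch (not the authors' code) of those two ideas in PyTorch; all module names, dimensions, and the use of dense feature planes in place of the paper's hash-encoded planes are illustrative assumptions.

    # Sketch of multi-level tri-plane feature sampling plus multi-head
    # cross-attention fusion of audio and per-plane visual features.
    # Dense planes stand in for the hash grids used in the paper (assumption).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiLevelTriPlane(nn.Module):
        """Three feature planes (XY, XZ, YZ) per resolution level."""
        def __init__(self, resolutions=(64, 128, 256), feat_dim=16):
            super().__init__()
            self.levels = nn.ModuleList([
                nn.ParameterList([
                    nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r))
                    for _ in range(3)              # one plane per axis pair
                ]) for r in resolutions            # one set per level
            ])

        def forward(self, xyz):                    # xyz: (N, 3) in [-1, 1]
            coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
            feats = []
            for planes in self.levels:
                per_plane = []
                for plane, uv in zip(planes, coords):
                    grid = uv.view(1, -1, 1, 2)    # grid_sample layout
                    f = F.grid_sample(plane, grid, align_corners=True)
                    per_plane.append(f.view(plane.shape[1], -1).t())  # (N, C)
                feats.append(torch.stack(per_plane, dim=1))           # (N, 3, C)
            return torch.cat(feats, dim=1)         # (N, 3 * levels, C)

    class AudioVisualFusion(nn.Module):
        """Per-plane visual features attend to an audio feature window."""
        def __init__(self, feat_dim=16, audio_dim=32, heads=4):
            super().__init__()
            self.audio_proj = nn.Linear(audio_dim, feat_dim)
            self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)

        def forward(self, vis, audio):             # vis: (N, P, C), audio: (N, T, A)
            a = self.audio_proj(audio)             # project audio to feat_dim
            fused, _ = self.attn(query=vis, key=a, value=a)
            return vis + fused                     # residual keeps plane features

    # Usage: fuse features for 1024 sampled points with an 8-frame audio window.
    planes = MultiLevelTriPlane()
    fusion = AudioVisualFusion()
    pts = torch.rand(1024, 3) * 2 - 1
    audio = torch.randn(1024, 8, 32)
    out = fusion(planes(pts), audio)               # (1024, 9, 16)

Because each plane's features form a separate query token, the attention weights can emphasize audio-correlated regions (e.g., the mouth) over unrelated ones, which is the behavior the abstract attributes to the fusion module.
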
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2076-3417
Relation: https://www.mdpi.com/2076-3417/15/1/479; https://doaj.org/toc/2076-3417
DOI: 10.3390/app15010479
Access URL: https://doaj.org/article/2ecba286f9bd42cb992b27cd0ca3b7e2
Accession Number: edsdoj.2ecba286f9bd42cb992b27cd0ca3b7e2
Database: Directory of Open Access Journals