Shohei Iwase, Takuya Kato, Shugo Yamaguchi, Yukitaka Tsuchiya, Shigeo Morishima

Song2Face: Synthesizing Singing Facial Animation from Audio

SIGGRAPH Asia 2020 Technical Communications

Article No. 12, pp. 1–4

https://doi.org/10.1145/3410700.3425435

We present Song2Face, a deep neural network capable of producing singing facial animation from an input of singing voice and singer label. The network architecture is built upon our insight that, although facial expression when singing varies between individuals, singing voices carry valuable information such as pitch, breath, and vibrato, to which expressions may be attributed. Therefore, our network consists of an encoder that extracts relevant vocal features from audio, and a regression network, conditioned on a singer label, that predicts control parameters for facial animation. In contrast to prior audio-driven speech animation methods, which initially map audio to text-level features, we show that vocal features can be learned directly from singing voice without any explicit constraints. Our network is capable of producing movements for all parts of the face as well as rotational movement of the head itself. Furthermore, stylistic differences in expression between singers are captured via the singer label, so the singing style of the resulting animation can be manipulated at test time.
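The high-level architecture described in the abstract, an audio encoder followed by a singer-conditioned regression network, can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the mel-spectrogram input, layer sizes, the learned singer embedding, and the number of animation control parameters are all illustrative assumptions.

```python
# Minimal sketch of an encoder + singer-conditioned regressor, assuming
# mel-spectrogram input and a learned singer embedding. All hyperparameters
# (mel bins, hidden size, singer count, control count) are placeholders,
# not values from the paper.
import torch
import torch.nn as nn

class Song2FaceSketch(nn.Module):
    def __init__(self, n_mels=80, n_singers=8, n_controls=52, hidden=256):
        super().__init__()
        # Encoder: extracts vocal features (pitch, breath, vibrato cues)
        # directly from audio, with no text-level intermediate.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Singer label conditions the regressor, capturing per-singer style.
        self.singer_embed = nn.Embedding(n_singers, hidden)
        # Regressor: maps vocal features + singer style to per-frame
        # facial and head control parameters.
        self.regressor = nn.Sequential(
            nn.Linear(hidden * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_controls),
        )

    def forward(self, mel, singer_id):
        # mel: (batch, n_mels, time); singer_id: (batch,)
        feats = self.encoder(mel).transpose(1, 2)    # (batch, time, hidden)
        style = self.singer_embed(singer_id)         # (batch, hidden)
        style = style.unsqueeze(1).expand(-1, feats.size(1), -1)
        # Concatenate vocal features with singer style at every frame.
        return self.regressor(torch.cat([feats, style], dim=-1))
```

In a design like this, swapping `singer_id` at inference would change only the style embedding, mirroring the test-time manipulation of singing style described in the abstract.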