Language-driven Scene Synthesis using Multi-conditional Diffusion Model



Vuong, AD, Vu, MN, Nguyen, TT, Huang, B, Nguyen, D, Vo, T and Nguyen, A ORCID: 0000-0002-1449-211X
(2023) Language-driven Scene Synthesis using Multi-conditional Diffusion Model. In: Conference on Neural Information Processing Systems (NeurIPS).

PDF
2023_NeurIPS_LSDM.pdf - Author Accepted Manuscript (24MB)

Abstract

Scene synthesis is a challenging problem with several industrial applications. Recently, substantial effort has been directed at synthesizing scenes from human motions, room layouts, or spatial graphs as the input. However, few studies have addressed this problem from multiple modalities, especially by combining text prompts. In this paper, we propose language-driven scene synthesis, a new task that integrates text prompts, human motion, and existing objects for scene synthesis. Unlike single-condition synthesis tasks, our problem involves multiple conditions and requires a strategy for processing and encoding them into a unified space. To address this challenge, we present a multi-conditional diffusion model, which differs from the implicit unification approaches in the diffusion literature by explicitly predicting guiding points for the original data distribution. We show that our approach is theoretically well-grounded. Extensive experimental results show that our method outperforms state-of-the-art baselines and enables natural scene-editing applications. The source code and dataset can be accessed at https://lang-scene-synth.github.io/.
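To make the multi-conditional setup described above concrete, the following is a minimal, illustrative sketch (not the authors' LSDM implementation). It assumes simple concatenation-based fusion, arbitrary embedding sizes, and an x0-prediction training objective; the encoder names and dimensions are hypothetical.

```python
# Minimal illustrative sketch only -- NOT the authors' LSDM implementation.
# Assumptions (not from the paper): embedding sizes, concatenation-based
# fusion, and the x0-prediction objective used in the usage example below.
import torch
import torch.nn as nn

class MultiConditionalDenoiser(nn.Module):
    """Toy denoiser that fuses text, human-motion, and existing-object
    conditions and predicts guiding points for the clean data distribution."""
    def __init__(self, point_dim=3, cond_dim=128, hidden=256):
        super().__init__()
        # Separate encoders map each modality into a shared conditioning space.
        self.text_enc = nn.Linear(512, cond_dim)    # e.g. a text-prompt embedding
        self.motion_enc = nn.Linear(72, cond_dim)   # e.g. a human-motion feature
        self.object_enc = nn.Linear(256, cond_dim)  # e.g. an existing-object feature
        self.time_emb = nn.Embedding(1000, cond_dim)
        self.net = nn.Sequential(
            nn.Linear(point_dim + 4 * cond_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, point_dim),  # predicted guiding point
        )

    def forward(self, noisy_points, t, text_feat, motion_feat, object_feat):
        # Encode every condition into the same space, then concatenate.
        conds = torch.cat(
            [
                self.text_enc(text_feat),
                self.motion_enc(motion_feat),
                self.object_enc(object_feat),
                self.time_emb(t),
            ],
            dim=-1,
        )
        return self.net(torch.cat([noisy_points, conds], dim=-1))

# Usage sketch: one training step with a guiding-point (x0) prediction loss.
model = MultiConditionalDenoiser()
x0 = torch.randn(8, 3)                      # clean guiding points (toy data)
t = torch.randint(0, 1000, (8,))
noise = torch.randn_like(x0)
alpha_bar = torch.rand(8, 1)                # placeholder noise schedule
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
pred = model(x_t, t,
             torch.randn(8, 512), torch.randn(8, 72), torch.randn(8, 256))
loss = nn.functional.mse_loss(pred, x0)     # regress guiding points directly
loss.backward()
```

In this toy version, predicting the clean guiding points directly (rather than the noise) mirrors the explicit-guidance idea mentioned in the abstract; the actual architecture, fusion mechanism, and objective are described in the paper.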

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 15 Dec 2023 16:16
Last Modified: 04 May 2024 05:33
URI: https://livrepository.liverpool.ac.uk/id/eprint/3177439