Adjustable Visual Appearance for Generalizable Novel View Synthesis

1Chalmers University of Technology   2KTH Royal Institute of Technology  

Preprint

AVA-NVS allows the visual appearance of an observed scene to be changed while preserving the underlying geometry

Abstract

We present a generalizable novel view synthesis method that enables modifying the visual appearance of an observed scene so that rendered views match a target weather or lighting condition, without any scene-specific training or access to reference views at the target condition.

Our method is based on a pretrained generalizable transformer architecture and is fine-tuned on synthetically generated scenes under different appearance conditions. This allows novel views to be rendered in a 3D-consistent manner for scenes that were not included in the training set, with the ability to (i) modify their appearance to match the target condition and (ii) smoothly interpolate between different conditions. Qualitative and quantitative experiments on real and synthetic scenes show that our method generates 3D-consistent renderings while making realistic appearance changes.
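To illustrate the interpolation between appearance conditions mentioned above, the sketch below blends two learned condition embeddings before passing the result to a conditioned renderer. This is a minimal, hypothetical illustration, not the paper's released code: the names `condition_embeddings`, `interpolate_conditions`, `renderer`, and `target_pose` are assumptions, and the embedding dimension and number of conditions are placeholders.

```python
# Minimal sketch of appearance-condition interpolation, assuming the model is
# conditioned on a learned per-condition embedding vector. All names here are
# hypothetical and not taken from the paper's implementation.
import torch

# Hypothetical lookup table of learned appearance embeddings,
# e.g. indexed as {"sunny": 0, "foggy": 1, "night": 2}.
condition_embeddings = torch.nn.Embedding(num_embeddings=3, embedding_dim=64)

def interpolate_conditions(idx_a: int, idx_b: int, alpha: float) -> torch.Tensor:
    """Linearly blend two appearance embeddings; alpha in [0, 1]."""
    e_a = condition_embeddings(torch.tensor(idx_a))
    e_b = condition_embeddings(torch.tensor(idx_b))
    return (1.0 - alpha) * e_a + alpha * e_b

# Usage (placeholders): render the same novel view under a 50/50 mix of two
# conditions, where `renderer` stands in for the generalizable transformer
# and `target_pose` for the novel camera pose.
# image = renderer(source_views, target_pose,
#                  appearance=interpolate_conditions(0, 1, 0.5))
```

Sweeping `alpha` from 0 to 1 would then produce a smooth transition between the two target conditions for a fixed viewpoint.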

BibTeX


@misc{bengtson2023adjustable,
  title         = {Adjustable Visual Appearance for Generalizable Novel View Synthesis}, 
  author        = {Josef Bengtson and David Nilsson and Che-Tsung Lin and Marcel Büsching and Fredrik Kahl},
  year          = {2023},
  eprint        = {2306.01344},
  archivePrefix = {arXiv},
}