This is the PyTorch implementation of StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN.
Web Demo
Integrated into Hugging Face Spaces with Gradio. See the demo for Panorama Generation for Landscapes.
Abstract:
Recently, StyleGAN has enabled various image manipulation and editing tasks thanks to its high-quality generation and disentangled latent space. However, additional architectures or task-specific training paradigms are usually required for different tasks. In this work, we take a deeper look at the spatial properties of StyleGAN. We show that a pretrained StyleGAN, together with some simple operations and no additional architecture, can perform comparably to state-of-the-art methods on various tasks, including image blending, panorama generation, generation from a single image, controllable and local multimodal image-to-image translation, and attribute transfer.
Everything you need to get started is in the Colab notebook.
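For a quick taste of the kind of operations involved, many of the tasks boil down to manipulating the latent codes of a pretrained generator. The sketch below illustrates plain style mixing for attribute transfer, assuming a rosinality-style StyleGAN2 generator; the variable names and the layer split index are illustrative assumptions, not the exact recipe used in the notebook.

```python
import torch

# w_source, w_ref: inverted W+ codes of shape (1, n_latent, 512),
# e.g. obtained from projector.py (see below). Both names are placeholders.
w_mixed = w_source.clone()

# Copy the fine (high-resolution) layers from the reference code, so coarse
# structure comes from the source and texture/appearance from the reference.
# The split index 8 is an illustrative choice, not a fixed rule.
w_mixed[:, 8:] = w_ref[:, 8:]

# Generate the mixed image (rosinality-style generator API).
img, _ = generator([w_mixed], input_is_latent=True)
```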
For toonification, you can train a new model yourself by running

```
python train.py
```

For Disney toonification, we use the Disney dataset here. Feel free to experiment with different datasets.
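As background, toonification in this line of work typically fine-tunes a face generator on the cartoon data and then combines the original and fine-tuned weights. Below is a minimal, hypothetical sketch of weight blending between two rosinality-style checkpoints; the filenames, the `g_ema` key, and the uniform blending scheme are assumptions for illustration only.

```python
import torch

# Placeholder checkpoint names; "g_ema" is the EMA generator key used by
# rosinality-style StyleGAN2 checkpoints.
ffhq = torch.load("ffhq.pt")["g_ema"]
toon = torch.load("disney.pt")["g_ema"]

blended = {}
for name, param in ffhq.items():
    # A real layer-swapping scheme picks weights per resolution (e.g. coarse
    # layers from the toonified model, fine layers from the original).
    # A uniform interpolation is used here purely for illustration.
    blended[name] = 0.5 * param + 0.5 * toon[name]

torch.save({"g_ema": blended}, "blended.pt")
```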
To perform GAN inversion with Gaussian regularization in W+ space, run

```
python projector.py xxx.jpg
```

The inversion code will be saved in ./inversion_codes/xxx.pt, which you can load with

```python
source = load_source(['xxx'], generator, device)
source_im, _ = generator(source)
```
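For intuition, projection with a Gaussian prior optimizes a W+ code to reconstruct the target image while penalizing its deviation from the mean latent (measured in units of the latent standard deviation). The sketch below outlines the idea; the generator API, the LPIPS perceptual loss, and all hyperparameters are assumptions rather than the exact contents of projector.py.

```python
import torch
import lpips  # pip install lpips

device = "cuda"
percept = lpips.LPIPS(net="vgg").to(device)
# target: the image to invert, a (1, 3, H, W) tensor scaled to [-1, 1].

# Estimate the mean/std of W by pushing random z through the mapping network
# (generator.style in rosinality-style code).
with torch.no_grad():
    z = torch.randn(10000, 512, device=device)
    w = generator.style(z)
    w_mu, w_std = w.mean(0), w.std(0)

# One latent per layer (W+ space), initialized at the mean latent.
w_plus = w_mu.detach().clone().repeat(generator.n_latent, 1).unsqueeze(0)
w_plus.requires_grad_(True)
optimizer = torch.optim.Adam([w_plus], lr=0.01)

for step in range(1000):
    img, _ = generator([w_plus], input_is_latent=True)
    loss = percept(img, target).sum()
    # Gaussian regularization: penalize normalized distance from the mean.
    loss = loss + 0.01 * ((w_plus - w_mu) / w_std).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```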
If you use this code or ideas from our paper, please cite our paper:

```
@article{chong2021stylegan,
  title={StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN},
  author={Chong, Min Jin and Lee, Hsin-Ying and Forsyth, David},
  journal={arXiv preprint arXiv:2111.01619},
  year={2021}
}
```
This code borrows from the StyleGAN2 implementation by rosalinity.
