Paper: Taming Transformers for High-Resolution Image Synthesis (arXiv:2012.09841)
This model is the Taming Transformers VQGAN tokenizer with a 10-bit vocabulary (2^10 = 1024 codebook entries), converted into a format compatible with the MaskBit codebase. It uses a downsampling factor of 16 and is trained on ImageNet at a resolution of 256×256.
You can find more details on the VQGAN in the original repository or paper. All credits for this model belong to Patrick Esser, Robin Rombach and Björn Ommer.
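The snippet below is a minimal sketch of the token geometry implied by the hyperparameters stated above (256×256 input, downsampling factor 16, 1024-entry codebook); the `tokens` tensor is a random stand-in for the encoder output, not a call into the actual MaskBit or VQGAN API.

```python
import torch

IMAGE_RESOLUTION = 256    # training resolution stated on this card
DOWNSAMPLE_FACTOR = 16    # downsampling factor stated on this card
VOCAB_SIZE = 2 ** 10      # 10-bit codebook -> 1024 entries

# A batch of 256x256 RGB images ...
images = torch.randn(1, 3, IMAGE_RESOLUTION, IMAGE_RESOLUTION)

# ... is encoded into a grid of discrete code indices.
grid = IMAGE_RESOLUTION // DOWNSAMPLE_FACTOR            # 256 / 16 = 16
tokens = torch.randint(0, VOCAB_SIZE, (1, grid, grid))  # stand-in for the encoder output

print(tokens.shape)  # torch.Size([1, 16, 16]) -> 256 tokens per image
assert tokens.max().item() < VOCAB_SIZE
```

In other words, each 256×256 image is represented as a 16×16 grid of indices, each index selecting one of 1024 codebook vectors.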