This model card focuses on the model associated with the Stable Diffusion Upscaler, available here. The model was trained for 1.25M steps on a 10M subset of LAION containing images >2048x2048. It was trained on crops of size 512x512 and is a text-guided latent upscaling diffusion model. In addition to the textual input, it receives a noise_level as an input parameter, which can be used to add noise to the low-resolution input according to a predefined diffusion schedule.

Developed by: Robin Rombach, Patrick Esser
Model type: Diffusion-based text-to-image generation model
Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (OpenCLIP-ViT/H).
License: CreativeML Open RAIL++-M License
Resources for more information: GitHub Repository.
Cite as: Rombach et al., "High-Resolution Image Synthesis with Latent Diffusion Models", CVPR 2022.

Use it with the stablediffusion repository: download the x4-upscaler-ema.ckpt here. Alternatively, you can use Hugging Face's Diffusers library to run Stable Diffusion 2 in a simple and efficient manner:

pip install diffusers transformers accelerate scipy safetensors

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# load the model and scheduler in half precision
model_id = "stabilityai/stable-diffusion-x4-upscaler"
pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

# load a low-resolution input image (path is illustrative)
low_res_img = Image.open("low_res_cat.png").convert("RGB")

# upscale it, guided by a text prompt
prompt = "a white cat"
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
upscaled_image.save("upsampled_cat.png")
```
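Since the pipeline expects an RGB PIL image and produces output at 4x the input resolution, the input is usually downsized before upscaling. The sketch below shows only that preprocessing step (the `prepare_low_res` helper, the 128x128 size, and the `noise_level=20` value are illustrative assumptions, not part of the original card):

```python
from PIL import Image

def prepare_low_res(img: Image.Image, size=(128, 128)) -> Image.Image:
    # The upscaler expects an RGB image; a 128x128 input yields a 512x512 output.
    return img.convert("RGB").resize(size)

# Illustrative input: a blank grayscale image standing in for a real photo.
low_res = prepare_low_res(Image.new("L", (640, 480)))
print(low_res.size, low_res.mode)  # → (128, 128) RGB

# The prepared image would then be passed to the pipeline, optionally with
# noise_level to add noise per the predefined diffusion schedule, e.g.:
# pipeline(prompt="a white cat", image=low_res, noise_level=20).images[0]
```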