Curated knowledge about art and AI
6 results tagged ui

Collage-diffusion-ui : use a familiar Photoshop-like interface and let the AI harmonize the details.
https://github.com/linden-li/collage-diffusion-ui

  • ui
  • image2image
  • text2image
  • remix

Collage Diffusion is a novel interface for interacting with image generation models. It allows you to specify the composition of an image in a familiar Photoshop-like interface. Our modified version of Stable Diffusion takes in the layers and produces a harmonized image, ensuring that everything from perspective to lighting is plausible. Unlike the text prompting supported by traditional diffusion interfaces, Collage Diffusion lets you precisely outline how a scene should be composed: from where objects sit relative to each other to what they look like.
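
The gist can be sketched with off-the-shelf tools. The snippet below uses the stock diffusers img2img pipeline, not the project's modified Stable Diffusion, and the file names, paste position, and prompt are made up for the example; it just re-renders a naively pasted collage so the details blend together.

    # Rough illustration only: plain diffusers img2img, NOT the modified
    # Stable Diffusion that Collage Diffusion ships. File names, paste
    # position, and the prompt are placeholders.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    background = Image.open("background.png").convert("RGB")
    subject = Image.open("subject.png").convert("RGBA")   # cut-out layer

    # Naive collage: paste the subject onto the background.
    collage = background.copy()
    collage.paste(subject, (200, 150), mask=subject)

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Moderate strength keeps the composition but re-renders the details,
    # so lighting and perspective become consistent across layers.
    result = pipe(
        prompt="a cat sitting on a park bench, golden hour lighting",
        image=collage,
        strength=0.45,
        guidance_scale=7.5,
    ).images[0]
    result.save("harmonized.png")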

2 months ago Permalink
Related:
  • AUTOMATIC1111 Stable Diffusion web UI : The Stable Diffusion WebUI offers a range of features for generating and processing images, including original txt2img and img2img modes, outpainting,...
  • Hard Prompts Made Easy (PEZ) : The strength of modern generative models lies in their ability to be controlled through text-based prompts. Typical "hard" prompts are made from inte...
  • ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation : The study aims to improve the ability of large text-to-image models to express customized concepts without excessive computation or memory burden. The...
  • IP-Adapter: The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with image prompt. : IP-Adapter, an effective and lightweight adapter to achieve image prompt capability for the pre-trained text-to-image diffusion models. An IP-Adapter ...
  • ComfyUI: A stable diffusion GUI with a graph/nodes interface : ComfyUI is a powerful and modular stable diffusion GUI and backend that enables users to design and execute advanced stable diffusion pipelines using ...

StableStudio: Community interface for generative AI
https://github.com/Stability-AI/StableStudio

  • stable_diffusion
  • ui

Stability AI has released an open-source version of its DreamStudio text-to-image consumer application, called StableStudio. The company intends to work with the broader community to create a world-class user interface for generative AI that users fully control. DreamStudio was first conceived as an animation studio and shifted its focus to image generation with the arrival of Stable Diffusion in the summer of 2022. StableStudio enables local-first development through WebGPU and a desktop installation of its Stable Diffusion tool. It is also compatible with ControlNet tools and with local inference through the AUTOMATIC1111 stable-diffusion-webui tool.

5 months ago Permalink
Related:
  • Ainodes engine : aiNodes is a Python-based AI image/motion picture generator node engine that facilitates creativity in the creation of images and videos. The engine i...
  • AUTOMATIC1111 Stable Diffusion web UI : The Stable Diffusion WebUI offers a range of features for generating and processing images, including original txt2img and img2img modes, outpainting,...
  • ComfyUI: A stable diffusion GUI with a graph/nodes interface : ComfyUI is a powerful and modular stable diffusion GUI and backend that enables users to design and execute advanced stable diffusion pipelines using ...
  • Absolute beginner's guide to Stable Diffusion AI image - Stable Diffusion Art : The article provides a beginner's guide to using Stable Diffusion, an AI model that generates images from text input. It includes an overview of Stabl...
  • Diffusers gallery : Another HuggingFace DreamBooth library browser

Ainodes engine
https://github.com/XmYx/ainodes-engine

  • visual_programming
  • stable_diffusion
  • ui
  • deforum

aiNodes is a Python-based, open-source desktop node engine for generating AI images and motion pictures. The engine is fully modular and can download node packs at runtime, and it features RIFE and FILM interpolation integration, colored background drop, and node creation with IDE annotations. Installation requires Python 3.10, Git, and an NVIDIA GPU with CUDA drivers installed. Supported features include Deforum, Stable Diffusion, upscalers, Kandinsky, ControlNet, LoRAs, TI embeddings, hypernetworks, background separation, human matting/masking, and compositing, among others.
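
A small, unofficial pre-flight check for those stated prerequisites (my own helper, not part of aiNodes) might look like this:

    # Unofficial sanity check: Python 3.10, Git on PATH, and a CUDA-capable
    # NVIDIA GPU visible to PyTorch.
    import shutil
    import sys

    assert sys.version_info[:2] == (3, 10), \
        f"Python 3.10 required, found {sys.version.split()[0]}"
    assert shutil.which("git"), "Git not found on PATH"

    try:
        import torch  # assumes a CUDA build of PyTorch is installed
        assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"
        print("GPU OK:", torch.cuda.get_device_name(0))
    except ImportError:
        print("PyTorch not installed; install a CUDA build to check the GPU")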

5 months ago Permalink
Related:
  • AUTOMATIC1111 Stable Diffusion web UI : The Stable Diffusion WebUI offers a range of features for generating and processing images, including original txt2img and img2img modes, outpainting,...
  • StableStudio: Community interface for generative AI : Stability AI has released an open-source version of its DreamStudio text-to-image consumer application called StableStudio. The company intends to wor...
  • ComfyUI: A stable diffusion GUI with a graph/nodes interface : ComfyUI is a powerful and modular stable diffusion GUI and backend that enables users to design and execute advanced stable diffusion pipelines using ...
  • PIXART-α : PIXART-α is a low-cost and efficient text-to-image (T2I) model that produces high-quality images. By utilizing a Transformer-based diffusion model, PI...
  • AI Art Panic | Opinionated Guides : Artificial intelligence (AI) has advanced to the point where it is capable of generating art that is often better than what many human artists can cre...

ComfyUI: A stable diffusion GUI with a graph/nodes interface
https://github.com/comfyanonymous/ComfyUI

  • stable_diffusion
  • ui
  • text2image
  • controlnet

ComfyUI is a powerful and modular stable diffusion GUI and backend that enables users to design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart-based interface. It supports SD1.x and SD2.x and provides an asynchronous queue system with many optimizations. It can load ckpt, safetensors, and diffusers models/checkpoints, as well as standalone VAEs and CLIP models. Other features include embeddings/textual inversion, area composition, inpainting with both regular and inpainting models, ControlNet and T2I-Adapter, upscale models, unCLIP models, and more. ComfyUI starts up quickly and works fully offline without downloading anything. Workflows can be saved and loaded as JSON files, and the nodes interface can be used to create complex workflows.
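
Because workflows are plain JSON, the same graph you build in the UI can also be queued programmatically. A minimal sketch, assuming the default local server at 127.0.0.1:8188 and a workflow exported from the UI in API JSON format; the file name and node id below are placeholders that depend on your graph:

    # Minimal sketch: queue a saved ComfyUI workflow over HTTP. Assumes the
    # default local server (127.0.0.1:8188); "workflow_api.json" and the
    # node id "6" are placeholders.
    import json
    import urllib.request

    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # Optionally tweak a node input before queueing, e.g. a prompt text:
    # workflow["6"]["inputs"]["text"] = "a cozy cabin in the woods"

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode())  # JSON with a prompt id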

See also: https://github.com/BlenderNeko/ComfyUI_Cutoff

7 months ago Permalink
Related:
  • AUTOMATIC1111 Stable Diffusion web UI : The Stable Diffusion WebUI offers a range of features for generating and processing images, including original txt2img and img2img modes, outpainting,...
  • Fondant : Large-scale data processing made easy and reusable : Fondant is an open-source framework designed to simplify and accelerate large-scale data processing. It allows for the reuse of containerized componen...
  • Stable Diffusion Sketch, an Android app used on automatic1111's Stable Diffusion Web UI : Stable Diffusion Sketch is an Android app that allows users to create colorful sketches and enhance them using various modes of Stable Diffusion. User...
  • Ebsynth_utility: AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth. : This AUTOMATIC1111 UI extension allows users to create edited videos using ebsynth without requiring AE. The extension is confirmed to work properly w...
  • Erasing Concepts from Diffusion Models : Making sure that diffusion model-generated images are safe from undesirable content and copyrighted material is a serious concern. Previous methods fo...

Hard Prompts Made Easy (PEZ)
https://tomg-group-umd-pez-dispenser.hf.space/

  • prompt
  • text2image
  • gradio
  • generator
  • ui

The strength of modern generative models lies in their ability to be controlled through text-based prompts. Typical "hard" prompts are made from interpretable words and tokens, and must be hand-crafted by humans. There are also "soft" prompts, which consist of continuous feature vectors. These can be discovered using powerful optimization methods, but they cannot be easily interpreted, re-used across models, or plugged into a text-based interface.
We describe an approach to robustly optimize hard text prompts through efficient gradient-based optimization. Our approach automatically generates hard text-based prompts for both text-to-image and text-to-text applications. In the text-to-image setting, the method creates hard prompts for diffusion models, allowing API users to easily generate, discover, and mix and match image concepts without prior knowledge of how to prompt the model. In the text-to-text setting, we show that hard prompts can be discovered automatically that are effective for tuning LMs on classification tasks.

This space can either generate a text fragment that describes your image, or it can shorten an existing text prompt. The space uses OpenCLIP-ViT/H, the same text encoder used by Stable Diffusion V2. After you generate a prompt, try it out with Stable Diffusion or Midjourney. For a quick PEZ demo, try clicking on one of the examples at the bottom of the page.
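
In the text-to-image setting, PEZ optimizes prompts against this kind of CLIP image-text similarity. As a small illustration (my own sketch, not the PEZ code; the image path and prompts are placeholders), candidate prompts can be scored against a target image with OpenCLIP ViT-H/14:

    # Score candidate hard prompts against a target image with OpenCLIP
    # ViT-H/14 (the encoder named above). Illustration only, not PEZ itself.
    import torch
    import open_clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-H-14", pretrained="laion2b_s32b_b79k")
    model = model.to(device).eval()
    tokenizer = open_clip.get_tokenizer("ViT-H-14")

    image = preprocess(Image.open("target.jpg")).unsqueeze(0).to(device)
    prompts = ["a watercolor fox in a snowy forest",
               "fox watercolor snow forest, soft light"]
    tokens = tokenizer(prompts).to(device)

    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(tokens)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        scores = (img @ txt.T).squeeze(0)  # cosine similarity per prompt

    for prompt, score in zip(prompts, scores.tolist()):
        print(f"{score:.3f}  {prompt}")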

Note: generation with 1000 steps takes ~60 seconds on a T4. Don't want to wait? You can run it on Google Colab instead, or reduce the number of steps.

More examples here: https://bird.trom.tf/tomgoldsteincs/status/1623358917110538240#m

Code on GitHub

7 months ago Permalink
Related:
  • Versatile Diffusion : Versatile Diffusion (VD), the first unified multi-flow multimodal diffusion framework, as a step towards Universal Generative AI. VD can natively sup...
  • Random Drawing Prompt Generator : The random drawing prompt generator provides users with easy drawing ideas by generating a stream of random prompts. The generator is not based on AI ...
  • AUTOMATIC1111 Stable Diffusion web UI : The Stable Diffusion WebUI offers a range of features for generating and processing images, including original txt2img and img2img modes, outpainting,...
  • Collage-diffusion-ui : use a familiar Photoshop-like interface and let the AI harmonize the details. : Collage Diffusion is a novel interface for interacting with image generation models. It allows you to specify the composition of an image in a familia...
  • CLIP Interrogator 2.1 : Want to figure out what a good prompt might be to create new images like an existing one? The CLIP Interrogator is here to get you answers! This vers...

AUTOMATIC1111 Stable Diffusion web UI
https://github.com/AUTOMATIC1111/stable-diffusion-webui

  • stable_diffusion
  • image_generation
  • ui
  • web
  • python
  • essential_tool
  • text2image

The Stable Diffusion WebUI offers a range of features for generating and processing images, including original txt2img and img2img modes, outpainting, inpainting, color sketch, prompt matrix, stable diffusion upscale, attention, loopback, X/Y/Z plot, textual inversion, extras tab with GFPGAN, CodeFormer, RealESRGAN, ESRGAN, SwinIR, Swin2SR, LDSR, and more. The tool also offers various functionalities such as resizing aspect ratio options, sampling method selection, interrupt processing, 4GB video card support, live prompt token length validation, generation parameter saving, negative prompt, styles, variations, seed resizing, CLIP interrogator, prompt editing, batch processing, Img2img alternative, high-res fix, reloading checkpoints, checkpoint merger, custom scripts, composable-diffusion, deepdanbooru integration, xformers, history tab, generate forever option, training tab, clip skip, hypernetworks, Loras, API support, and more.
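
The API support mentioned above makes the webui scriptable. A minimal sketch, assuming the webui was launched with the --api flag on the default local port; the prompt, payload values, and output path are placeholders:

    # Minimal sketch of calling the webui's txt2img API. Assumes the server
    # was started with --api and listens on 127.0.0.1:7860.
    import base64
    import json
    import urllib.request

    payload = {
        "prompt": "a lighthouse at dusk, oil painting",
        "negative_prompt": "blurry, low quality",
        "steps": 20,
        "width": 512,
        "height": 512,
    }
    req = urllib.request.Request(
        "http://127.0.0.1:7860/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    response = json.loads(urllib.request.urlopen(req).read())

    # Images come back base64-encoded; strip a data-URI prefix if present.
    image_b64 = response["images"][0].split(",", 1)[-1]
    with open("lighthouse.png", "wb") as f:
        f.write(base64.b64decode(image_b64))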

7 months ago Permalink
Related:
  • Paella: Simple & Efficient Text-To-Image generation : Paella is an easy-to-use text-to-image model that can turn text into pictures. It was inspired by earlier models but has simpler code for training and...
  • IF by DeepFloyd Lab : DeepFloyd IF is a text-to-image model that utilizes the large language model T5-XXL-1.1 as a text encoder to generates intelligible and coherent image...
  • ComfyUI: A stable diffusion GUI with a graph/nodes interface : ComfyUI is a powerful and modular stable diffusion GUI and backend that enables users to design and execute advanced stable diffusion pipelines using ...
  • Erasing Concepts from Diffusion Models : Making sure that diffusion model-generated images are safe from undesirable content and copyrighted material is a serious concern. Previous methods fo...
  • Cutting Off Prompt Effect : This stable-diffusion-webui extension aims to limit the influence of certain tokens in language models by rewriting them as padding tokens. This is im...


The personal, minimalist, super-fast, database-free bookmarking service by the Shaarli community.