Text-2-image
Stable Diffusion 1.5 & Stable Diffusion XL
With this node you can generate images of anything. More than 80 models available
Quick menu: click on any section to quickly go to it
Key features
  • Realistic and artistic images
    Capable of generating in numerous styles depending on the prompt. High variability
  • Stable Diffusion 1.5 and XL
    Available with 2 state-of-the-art models: Stable Diffusion 1.5, with the option to train the model and 80+ available styles, and Stable Diffusion XL for better quality
  • Generate at up to 1524x1524. Upscaling up to 4K
    Stable Diffusion XL gives incredible results at higher resolution, whereas Stable Diffusion 1.5 provides better quality at lower resolution
  • Wide choice of custom models
    With custom checkpoints (Styles or models) you can get the desired images in various styles, including photorealism.
    More than 80 styles are available, including the most popular models, DreamShaper and Juggernaut. Includes SFW and NSFW models
  • Fast generations
    Generations take as little as 10 seconds, and you can batch-generate up to 20 images per generation (depending on your subscription)
  • Basic and advanced settings
    Whether you're new to AI art or an experienced user, you will find all the necessary parameters and high customizability for your use case
How to use the node
Click on each step to see the corresponding image as an example and a tutorial to follow
Difference between Stable Diffusion 1.5 and Stable Diffusion XL
Stable Diffusion 1.5
More models
SD 1.5 has more than 80 available custom checkpoints, meaning more variability
Trained Models
Possibility to use trained AI models
Recommended resolution
512x512
Stable Diffusion XL
Higher quality
By default it generates 1024x1024, meaning fewer artifacts and better images
Easier prompts
Works much better with short prompts than SD 1.5 does
Recommended resolution
1024x1024
Need inspiration for prompts?
Check out our prompt collection!
Parameters
Generally, as a basic user you only need to type a prompt and a negative prompt, and set the resolution if needed. However, you might need to change other settings, so here are descriptions of what each parameter does.
My models
A dropdown list of your DreamBooth models
Available in Stable Diffusion 1.5

When turned on, it will use your trained model from AI Photoshoot node or Train Panel.

To use it, the subject you trained the model on has to appear in the Text prompt. When you choose a model, the subject is added to your prompt automatically, but if you erase it, you have to type it again

Important note: you can NOT use a DreamBooth model and a Style (custom checkpoint) at the same time
Styles
A dropdown list of custom checkpoints, custom models or Styles
Available in Stable Diffusion 1.5 and Stable Diffusion XL

We currently offer a large number of checkpoints (80+ at the moment)

If you have suggestions for custom models you would like to see added to Phygital+, message us on Discord
Steps
The range: from 1 to 200
Available in Stable Diffusion 1.5 and Stable Diffusion XL

It defines how detailed and how close to the prompt your image will be. The higher the number of steps, the longer it takes to generate one image.
Good results can be achieved in 30-50 steps

Usually more steps means better quality, but we recommend experimenting with 50 or 100 steps, as they offer the best trade-off between time spent and quality
Seed
The range: from 1 to 4294967296
Available in Stable Diffusion 1.5 and Stable Diffusion XL

The generation starts from a particular image (or noise). The same seed number will give you similar results each time, whereas changing it to another number may dramatically influence the final output.

We recommend trying different seed numbers while you're experimenting, and keeping the seed fixed as soon as you find the perfect result
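To make the seed behavior concrete, here is a minimal sketch in plain Python (not the actual service code): the seed initializes the pseudo-random noise that generation starts from, so the same seed reproduces the same starting point.

```python
import random

def starting_noise(seed: int, size: int = 4) -> list[float]:
    """Hypothetical stand-in for the initial noise a generation starts from."""
    rng = random.Random(seed)  # the seed fully determines the random stream
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

# Same seed -> identical starting noise; a new seed -> a different output.
assert starting_noise(42) == starting_noise(42)
assert starting_noise(42) != starting_noise(43)
```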

Width and height

Range: from 512 to 1024 (in SD 1.5) and to 1524 (SD XL)
Available in Stable Diffusion 1.5 and Stable Diffusion XL

It defines the width (x) and height (y) of the final image.

We recommend using
horizontal orientation for landscapes and environments,
vertical for characters and creatures
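The orientation rule of thumb can be captured in a tiny helper (hypothetical, for illustration only; the names and the 1.5x ratio are assumptions, not part of the product):

```python
def recommended_size(subject: str, base: int = 512) -> tuple[int, int]:
    """Pick (width, height) following the orientation rule of thumb."""
    if subject in ("landscape", "environment"):
        return (int(base * 1.5), base)  # horizontal, e.g. 768x512
    if subject in ("character", "creature"):
        return (base, int(base * 1.5))  # vertical, e.g. 512x768
    return (base, base)  # default: square

assert recommended_size("landscape") == (768, 512)
assert recommended_size("character") == (512, 768)
```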

No (optional)

Available in Stable Diffusion 1.5 and Stable Diffusion XL

Negative prompt. It tells the neural network what it should NOT generate

Scale uncond (CFG)

The range: from 0 to 100
Available in Stable Diffusion 1.5 and Stable Diffusion XL

It defines how close the image will be to the initial seed. It also impacts image contrast and how closely the prompt is followed.
The higher the number, the less detailed and the more blurry, noisy, and vibrant the image will be.

We recommend keeping it in the range of 7.5 to 15
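Under the hood, this scale drives classifier-free guidance: the model makes one prediction with your prompt and one without, and the scale controls how far the result is pushed toward the prompted prediction. A simplified one-dimensional sketch (an illustration of the general technique, not the production code):

```python
def apply_cfg(uncond: float, cond: float, scale: float) -> float:
    """Classifier-free guidance: push the prediction toward the prompt."""
    return uncond + scale * (cond - uncond)

# scale = 1 returns the prompted prediction unchanged; larger scales
# exaggerate the prompt's influence, which is why very high values
# over-saturate and distort the image.
assert apply_cfg(0.0, 1.0, 1.0) == 1.0
assert apply_cfg(0.0, 1.0, 7.5) == 7.5
```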

Eta (Guidance Rescale)

The range: from 0 to 1
Available in Stable Diffusion 1.5 and Stable Diffusion XL

Impacts image depth and color range



We recommend to keep it around 0.8 so that the picture is more balanced.

Samplers

There are multiple samplers
Available in Stable Diffusion 1.5 and Stable Diffusion XL

It defines how the AI will generate the image. There is almost no difference in the output image depending on which sampler you use.

We recommend changing it only if you're an experienced user

Start image skip

Range: from 0.05 to 0.95
Available in Stable Diffusion 1.5 and Stable Diffusion XL (img2img)

It defines how close to the Start image the final image will be.
The higher the number is, the more different from the Start image a generated artwork will be.

We recommend keeping it between 0.5 and 0.6 if you use a Start image
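In typical img2img pipelines this parameter works by skipping part of the denoising schedule: the higher the value, the more denoising steps run on top of the start image, so the further the result drifts from it. A rough sketch of the assumed step math (not the exact service implementation):

```python
def steps_to_run(total_steps: int, skip: float) -> int:
    """How much of the denoising schedule is applied to the start image."""
    return max(1, int(total_steps * skip))

# With 50 steps, skip=0.5 runs half the schedule, keeping the result
# fairly close to the start image; skip=0.95 rebuilds it almost entirely.
assert steps_to_run(50, 0.5) == 25
assert steps_to_run(50, 0.95) == 47
```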

Tiling X axis / Tiling Y axis

Available in Stable Diffusion 1.5
Makes the image seamless and tileable

Perfect for creating textures

Artistic Mode

Available in Stable Diffusion 1.5
It turns any generation into an artistic masterpiece.

Currently available in 4 styles:
• General Artistic mode: for anything from objects to personas
• Portrait: for personas and any humanoid-like creatures
• Full-body: for full-height images of humanoid-like creatures
• Landscapes: for cinematic scenes

Denoising end

SD XL parameter
The range: from 0 to 1

If set to 1, the refiner is not used in generation.
The lower the number, the larger the share of generation steps handled by the refiner instead of the base model
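The split can be sketched as follows (a simplified illustration of the assumed behavior; the exact scheduling inside SD XL may differ):

```python
def split_steps(total_steps: int, denoising_end: float) -> tuple[int, int]:
    """Steps handled by the base model vs. the refiner."""
    base = round(total_steps * denoising_end)
    return base, total_steps - base

assert split_steps(50, 1.0) == (50, 0)   # refiner not used at all
assert split_steps(50, 0.8) == (40, 10)  # last 20% of steps go to the refiner
```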

Number of images

Available in Stable Diffusion 1.5 and Stable Diffusion XL

Determines how many images will be generated per node launch.
The maximum is 20 for SD 1.5 for Plus subscribers
Getting bad results? Try one of these!
1
Use a different Style

Some Styles perform exceptionally well with short prompts. Try these Styles:
DreamShaper, Deliberate, Juggernaut

2
Change Resolution
A square image doesn't always give a good result. Try adjusting the image size:
• characters: vertical orientation (e.g., 512x768)
• landscapes: horizontal orientation (e.g., 768x512)
3
Try a negative prompt
Fill in the 'No (optional)' text field.
Write down what you do NOT want to generate. It can help a lot with results

Try adding these words: weird, ugly, deformed, poor quality, blurred
Related materials and tutorials
How to
Explore the most accessible ways to write a text prompt in Stable Diffusion and learn how to quickly get wow-results

Beginner level
How to
Learn how to use your sketch as a starting point for generation to create an AI image in Stable Diffusion

Beginner level
Collection
Get to know how you can use the latest AI tools to create HQ textures and make them voluminous in 3D software

Advanced level
How to
Explore the workflow of creating high quality icons using AI

Advanced level
How to
Learn about Zoom out feature that will help you expand your Stable Diffusion generation and add missing details with simple text

Beginner level
How to
Learn how to create multiple frames using Stable Diffusion and generate 360 animated panorama with it

Beginner level
How to
Create a prototype of an object in several options

Beginner level
How to
Learn how to train AI on your photos to create endless avatars in any style with Stable Diffusion using node interface

Beginner level
How to
Based on AI training, you will learn how to create game assets in your unique game style

Advanced level
How to
Learn how to use Magic Canvas in Phygital+ for generation, inpainting and outpainting

Beginner level
Related templates
Generate game or design assets
Change elements on your photos or AI images
Quickly change clothes on any photo
Generate unique tileable textures
Create landscapes and edit them like a pro
Explore how you can create AI concept image from simple sketch
Create variations of one in-game asset for your UI or in-game object description
Delete unwanted elements on generations
Generate any person in any style using DreamBooth
Create a prototype for your in-game badges in different variations (color, material)
Branded content. Train once, generate endlessly
Train AI on your own style to generate game assets
Create stylized concepts and assets using only one image and your style references
Unique stylized assets from simple text. Based on your own unique style
Generate character concept and get the back view using ControlNet