How to remove artifacts in generations with people. Fixing weird faces in DALL-E 3
A simple workflow to follow, by Artemiy Kalinin.
DALL-E 3 follows prompts extremely well, but some generations end up looking a little creepy, especially images with people. This prompt adherence makes DALL-E 3 an incredible tool, yet the final result often falls short of production quality.

With Stable Diffusion, you can get better-quality results without distorted faces by using custom models and various extensions and modifications such as the HiRes fix. But DALL-E 3, being a relatively new and closed tool, doesn't yet have a community of that size building such add-ons.

Here we suggest a simple way to improve your generations using image2image (AI generation that takes not only a prompt as input, but also a start image).

Step 1. Generate image in DALL-E 3
For example, for our project we need an image of a group of people at the beach. We decided to write a simple prompt: a photo of happy girl at the beach party, surrounded by friends, surfboards.
We got a pretty good generation with one girl in the front, but the skin on her face is not fully realistic and she has some weird teeth. As for the people in the background: no comment, you can see it all yourself :)
Step 2. Crop the face with artifacts
To do that, take the Image Cropper node and connect our image to it. Using the Width and Height parameters you can set a desired resolution, or you can select the cropped area manually. The main point here is to cut out the face and leave a little bit of area around it.
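If you would rather script this step than use the Image Cropper node, here is a minimal sketch with Pillow. The file name and crop box coordinates are hypothetical, example values only: pick a box that covers the face plus a small margin around it.

```python
from PIL import Image

# Stand-in for the DALL-E 3 output; in practice: Image.open("generation.png")
img = Image.new("RGB", (1024, 1024), "skyblue")

# (left, upper, right, lower) pixel coordinates of the face region,
# with a little extra area around the face (example values only)
face_box = (520, 180, 840, 500)
face = img.crop(face_box)
print(face.size)  # (320, 320)
```

Note down the box you used: the same coordinates are needed later when pasting the fixed face back into the original image.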
Step 3. Image2Image pipeline
Now we need to use the cropped image as a reference for generation. Create an SD XL (img2img) node or an SD 1.5 node and connect the cropped image as the Start image. In the prompt, write down what we see in the image: black woman smiling. Do not forget the negative prompt, which helps avoid artifacts: deformed, ugly, weird. You can add any keywords you usually use when generating in Stable Diffusion.
Set the parameter Start image skip (denoising strength) to 0.3.
Choose the model Juggernaut or Reliberate v3.
Because our initial image was generated at 1024x1024 resolution in DALL-E 3, and after cropping the resolution became even smaller, we recommend upscaling the result of the Stable Diffusion node, especially if the person is in the foreground.
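For readers reproducing Step 3 outside a node editor, here is a rough sketch using the Hugging Face diffusers library; the model id and file names are assumptions (swap in a Juggernaut or Reliberate checkpoint if you have one), and we assume the platform's Start image skip parameter corresponds to diffusers' `strength`. The small helper also shows what a denoising strength of 0.3 means in practice: only the last fraction of the denoising schedule runs, so the result stays close to the start image.

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """How many denoising steps actually run at a given strength.

    With strength s, img2img skips the first (1 - s) fraction of the
    schedule, which is why 0.3 only lightly retouches the face.
    """
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return num_inference_steps - max(num_inference_steps - init_timestep, 0)


def fix_face(face_path: str, out_path: str) -> None:
    """Hypothetical helper mirroring Step 3 with diffusers (img2img)."""
    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # or a Juggernaut checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    result = pipe(
        prompt="black woman smiling",
        negative_prompt="deformed, ugly, weird",
        image=Image.open(face_path).convert("RGB"),
        strength=0.3,  # "Start image skip (denoising strength)" from the text
        num_inference_steps=30,
    ).images[0]
    result.save(out_path)


print(img2img_steps(30, 0.3))  # 9: a light touch-up of the face
print(img2img_steps(30, 1.0))  # 30: the start image is mostly ignored
```

The resulting image can then be upscaled (for example with a simple resize or a dedicated upscaler) before moving on to compositing.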
As soon as we have our image, we need to combine it with the original generation. This can easily be done in Photoshop; even if you haven't used it before, it's not too difficult.

Step 4. Composing images in Photoshop
Open Photoshop (in our case, Photoshop Beta) and load the original DALL-E 3 generation.
Now load the generation from img2img (SD + upscale); it should appear as a second layer in the lower right corner.
Choose the second layer with the single face and, in the layer's Blending options, set Opacity to 40–50%. In the left panel choose the Move tool, select the layer, and scale it so that the boundaries of the person's face in the two images line up exactly on top of each other.
Now bring Opacity back to 100% in the layer's Blending options. We see one image on top of the other, and the boundaries between them are clearly visible.
Our main goal now is to blend these boundaries. Choose the Brush tool in Clear mode (you might need to rasterize the layer first) and start blending the edges away.
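If you prefer to script the compositing instead of using Photoshop, here is a minimal Pillow sketch: paste the fixed face back over the original through a feathered mask, so the seam fades out much like erasing the hard boundary with the brush. The file names, paste box, and margin are hypothetical; the box must match the crop box from Step 2.

```python
from PIL import Image, ImageFilter

# Stand-ins for the real files; in practice:
#   original   = Image.open("dalle3_generation.png").convert("RGB")
#   fixed_face = Image.open("face_fixed_upscaled.png").convert("RGB")
original = Image.new("RGB", (1024, 1024), "skyblue")
fixed_face = Image.new("RGB", (640, 640), "peachpuff")

box = (520, 180, 840, 500)              # same box used when cropping the face
w, h = box[2] - box[0], box[3] - box[1]
fixed_face = fixed_face.resize((w, h))  # bring the upscaled face back to size

# White = take the new face, black = keep the original; blurring the
# mask's edges feathers the transition between the two images.
margin = 20
mask = Image.new("L", (w, h), 0)
mask.paste(255, (margin, margin, w - margin, h - margin))
mask = mask.filter(ImageFilter.GaussianBlur(margin // 2))

original.paste(fixed_face, box, mask)   # composite the fixed face in place
```

With real files you would finish with `original.save("final.png")`; Photoshop remains the better choice when the face needs manual scaling and alignment first.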
And here we have a new iteration for our generation with more realistic face and without artifacts :)
Repeat the same steps for all the other faces in the shot. When generating, try to get as realistic a face as possible. If the clothes or background change too much, that's okay, since you would remove them in Photoshop anyway.
If some generations turn out too sharp and the background needs to be blurred, apply a Gaussian blur of 0.2–0.3.
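The same small blur can be applied programmatically, for example with Pillow (radius values from the text, assuming pixels as units; the image here is a stand-in for your too-sharp face layer):

```python
from PIL import Image, ImageFilter

layer = Image.new("RGB", (320, 320), "peachpuff")  # stand-in for the face layer
softened = layer.filter(ImageFilter.GaussianBlur(radius=0.25))  # 0.2-0.3 range
print(softened.size)  # (320, 320)
```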
Using this method you can fix faces in any generation. If you want a closer look at the generation settings, we recommend looking at this template.
The pipeline described here was invented by Artemiy Kalinin.