DALL-E 3 follows prompts extremely well, but some generations end up looking a little creepy, especially when generating images of people. This ability to follow almost any prompt makes DALL-E 3 an incredible tool, yet the final result is often still far from production quality.
With Stable Diffusion, you can get higher-quality results without distorted faces by using custom models and various extensions and modifications such as the HiRes fix. DALL-E 3, however, is a relatively new and closed tool, so it doesn't yet have a large community building that kind of additional tooling around it.
Here we suggest a simple way to improve your generations using image2image (AI generation where the input is not only a prompt but also a starting image).
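
As a minimal sketch of the idea, here is how an image2image pass might look with the diffusers library and a Stable Diffusion checkpoint; the checkpoint name, file paths, and strength value are illustrative assumptions, not part of a prescribed workflow.

```python
# Minimal image2image sketch with diffusers; model id and paths are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion checkpoint; a custom fine-tuned model can be swapped in here.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical checkpoint choice
    torch_dtype=torch.float16,
).to("cuda")

# The start image, e.g. a DALL-E 3 generation saved locally (path is a placeholder).
init_image = Image.open("dalle3_output.png").convert("RGB").resize((768, 768))

prompt = "portrait photo of a person, natural skin, detailed face"

# Low strength keeps the composition of the start image and mostly refines details.
result = pipe(
    prompt=prompt,
    image=init_image,
    strength=0.35,
    guidance_scale=7.5,
).images[0]

result.save("refined.png")
```

The key knob here is strength: values closer to 0 preserve the original composition and only touch up details, while values closer to 1 let the model redraw most of the image.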