How to Turn Sketches Into Finished Art Pieces with ControlNet (Ep. 2)
This guide demonstrates how you can take a simple sketch and turn it into an art masterpiece with ControlNet.
Written by: Holostrawberry
Last updated: 2023-06-30
In this guide, we'll go through an example of how to turn a sketch into a finished art piece using ControlNet.
ControlNet can turbocharge anyone's creative workflow, whether you're an artist or an enthusiast. If you are still new to Stable Diffusion, check out our previous tutorial, which explains how to generate images with Stable Diffusion.
This is the second part of a series: "How to Master ControlNet". You can explore all guides in this series below:
- Introduction to Image Manipulation and Remixing with ControlNet (Ep. 1)
- How to Turn Sketches Into Finished Art Pieces with ControlNet (Ep. 2)
- Turn your Photos Into Paintings with Stable Diffusion and ControlNet (Ep. 3)
- Turn a Drawing or Statue Into a Real Person with Stable Diffusion and ControlNet (Ep. 4)
- Make an Original Logo with Stable Diffusion and ControlNet (Ep. 5)
Now, let's dive in.
ControlNet is perfect for brainstorming ideas. I will showcase this with this simple sketch:

I will load it into ControlNet by opening the dropdown near the bottom of my txt2img page. Drop in your image, don't forget to check Enable, and select the method you wish to use. First, I'll show you the Scribble method, which is designed to flesh out simple drawings. Watch this short clip:

The preprocessor and model are chosen automatically, but you can change them as needed. For example, click the Preprocessor dropdown and select invert: ControlNet's scribble model expects white lines on a black background, so a typical black-on-white sketch like this one needs to be inverted first.
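If you're curious what the invert preprocessor actually does, it's just a pixel-wise flip of the grayscale values. Here's a minimal sketch of the idea using Pillow (the function name and file paths are my own, not the webui's):

```python
from PIL import Image, ImageOps

def invert_sketch(path_in, path_out):
    """Flip a black-on-white sketch to white-on-black, as ControlNet's
    scribble model expects. This mirrors what the webui's invert
    preprocessor does internally."""
    img = Image.open(path_in).convert("L")   # force grayscale
    inverted = ImageOps.invert(img)          # 0 <-> 255 for every pixel
    inverted.save(path_out)
    return inverted
```

Nothing magic happens here: every black pixel (0) becomes white (255) and vice versa, so your pencil lines end up bright on a dark background.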

I will use the Mistoon_Anime checkpoint to generate our final images. I chose this checkpoint because it has a pleasant art style that is less generic than other anime-like models. I will make a batch of 4, and use simple terms as tags in our prompt: girl, upper body, brown hair, smile, forest
These are booru tags, which work well with checkpoints trained on anime-style images.
For the negative prompt I'll only type EasyNegative; this is an embedding you can add to Stable Diffusion, and it comes included in the Colab from the first guide. If you're not sure how to use embeddings, here's a simple negative prompt you can use instead: (low quality, worst quality:1.4)
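The `(text:1.4)` form is the webui's attention-weighting syntax: the number after the colon scales how strongly the model attends to those words (1.0 is neutral). As an illustration, here is a tiny hypothetical helper for assembling prompts in that syntax (the function name is mine, not part of any library):

```python
def tag_prompt(tags, weight=None):
    """Join booru-style tags with commas; optionally wrap the whole
    group in the webui's (tags:weight) attention syntax."""
    joined = ", ".join(tags)
    if weight is None:
        return joined
    return f"({joined}:{weight})"

prompt = tag_prompt(["girl", "upper body", "brown hair", "smile", "forest"])
negative = tag_prompt(["low quality", "worst quality"], weight=1.4)
```

This reproduces exactly the positive and negative prompts used above, and makes it easy to tweak the weight later.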

I think these look nice!

We could specify a different hairstyle, colors, background, facial expression, and more. But right now, I'd like the result to follow our original sketch more closely. In that case, I recommend switching from Scribble to the Canny method, which traces the contours of our image. Additionally, we can increase the Control Weight, which gives more importance to our input image. There are other settings, but we can leave them alone this time.
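To see why Canny follows the sketch more faithfully: its preprocessor reduces the input to a thin contour map, and the model is conditioned on those exact edges. The webui ships a real Canny edge detector; as a rough, self-contained stand-in, here's the same idea with Pillow's FIND_EDGES kernel (an approximation, not the actual preprocessor):

```python
from PIL import Image, ImageFilter

def extract_edges(img):
    """Keep only the contours of an image, roughly what the Canny
    preprocessor feeds to ControlNet. FIND_EDGES is a simple
    edge-detection kernel standing in for true Canny here."""
    return img.convert("L").filter(ImageFilter.FIND_EDGES)
```

Flat regions come out black and boundaries come out bright, so the model receives only the outlines of your sketch and has to honor them.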

After running with these settings, I got these:

Feel free to try other methods, such as Lineart. Experiment with different styles and techniques, and you may find something you really enjoy! In the future, you may even train the AI on your own art, making your AI art truly unique.
ControlNet can do other things, too. Check out our other ControlNet guides. Cheers!
- Introduction to Image Manipulation and Remixing with ControlNet (Ep. 1)
- How to Turn Sketches Into Finished Art Pieces with ControlNet (Ep. 2)
- Turn your Photos Into Paintings with Stable Diffusion and ControlNet (Ep. 3)
- Turn a Drawing or Statue Into a Real Person with Stable Diffusion and ControlNet (Ep. 4)
- Make an Original Logo with Stable Diffusion and ControlNet (Ep. 5)