Meet the New Midjourney Character Reference (--cref) – Create Consistent Characters
On March 12, 2024, Midjourney announced its latest feature, Character Reference, which many users had been eagerly expecting.
This new feature is a game-changer for creating consistent story characters for any project you can imagine. Simply add --cref followed by the URL of your reference image, and you can try this new feature out in Midjourney!
Let’s dive deeper, see how it performs, and explore advanced tips for making your characters more consistent across your images.
Understanding Character Reference
Midjourney’s Character Reference is similar to the Style Reference feature you might be familiar with, but for characters. This means you can maintain consistency in your character’s appearance across all your artworks.
How to Use Character Reference (--cref)?
Simply add --cref followed by the URL of your character’s image at the end of your prompt.
You can also control how closely Midjourney adheres to this reference with the --cw parameter.
Set it to 100 to include the character’s face, hair, and clothes in the reference, or dial it down to 0 to focus just on the face. The lower setting is perfect for experimenting with new looks for your character.
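For example, a prompt using both parameters might look like this (the image URL below is a placeholder — use the URL of your own reference image):

```
a woman reading a book in a cozy cafe --cref https://example.com/my-character.png --cw 100
```

Lowering `--cw` toward 0 in the same prompt would keep the face but let the prompt text redefine the hair and outfit.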
It’s important to note that Midjourney suggests this feature works best with characters from Midjourney images. Using photos of real people might not yield as good results.
Additionally, you can combine characters from different images by adding more than one URL after --cref, offering even more creative freedom.
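To blend traits from two reference images, list both URLs after a single --cref (the URLs below are placeholders):

```
a knight standing on a cliff at sunset --cref https://example.com/character-1.png https://example.com/character-2.png
```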
Detailed Instructions for Using the --cref Parameter
Character Reference is available both on Discord and the web app Alpha.
Getting Ready:
- Choose a Character Image: Start by selecting a Midjourney-generated image of your character to serve as your reference. It’s best to choose an image that clearly showcases your character’s traits from various angles—like front view, side view, 3/4 portrait, looking up, and looking down. This approach ensures the best results. (Feel free to include multiple character references.)
- Copy the Image URL: You’ll need the web address of your character’s image. Right-click to copy it.
Step-by-Step Guide for Using --cref in Discord and the Web App
Now that you’ve prepared your reference image, here’s how to bring your characters to life, step by step.
Discord Usage:
- Upload an Image to Discord: First, upload an image of your character to Discord. You can upload it to the Midjourney bot, your private Discord server, or the Midjourney channels. After uploading, right-click on the image to copy the URL.
- Write Your Prompt and Add --cref: Now, write the creative prompt you want Midjourney to visualize. At the end of your prompt, add --cref followed by the image URL you copied. This tells Midjourney to use your character as a reference for the artwork.
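Putting the steps together, a complete Discord prompt might look like this (the attachment URL is a placeholder for the one you copied from your uploaded image):

```
/imagine prompt: a young explorer hiking through a jungle, cinematic lighting --cref https://cdn.discordapp.com/attachments/…/character.png --cw 80
```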
Web App Usage:
If you’ve created more than 1,000 images with Midjourney, you can get access to the new Midjourney Website Alpha to create images without using Discord.
In the web app dashboard:
- Upload or Choose Your Character Image: Click the ‘+’ button to upload a new character image or select a previous work from your library.
- Select the ‘cref’ Option: Hover over the image you’ve chosen. You’ll see options appear; click on the ‘cref’ option to set this image as your Character Reference.
- Enter Your Prompt: Type your prompt and add the --cw parameter if needed.
- Choose Parameters: After writing your prompt, you can select additional parameters as needed from the available options to fine-tune your request.
Using --cw to Control the Strength of the Character Reference
The --cw (character weight) parameter is crucial for using the Character Reference feature in Midjourney.
It lets you adjust how closely Midjourney follows your character reference.
This flexibility gives you control over how much detail from your reference character is included in the final output, allowing you to fine-tune the appearance of your creations.
How Does --cw Work?
The --cw parameter is followed by a number ranging from 0 to 100.
A higher number means Midjourney will more closely match the reference character’s face, hair, and clothing.
On the other hand, a lower number, like 0, shifts the focus primarily to the face.
This allows for variations in attire or hairstyle without losing the character’s core identity.
From my experience, setting the --cw value to 100 means you can’t change the character’s clothing; Midjourney simply won’t follow the text prompt.
So, feel free to experiment with this value to achieve the desired outcome.
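To see the difference for yourself, try the same prompt at both extremes (the URL below is a placeholder):

```
a portrait of my character as a pirate --cref https://example.com/character.png --cw 100
a portrait of my character as a pirate --cref https://example.com/character.png --cw 0
```

The first prompt tends to keep the reference’s original outfit, while the second keeps only the face and lets the pirate costume from the text come through.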
Beyond the Basics: Combining with Other Features
Using Midjourney’s Character Reference (--cref) is just the starting point. When you begin to combine it with Style Reference (--sref) and other parameters, you can further enhance the consistency of your creations. This integration allows for a more cohesive look and feel across your projects.
This is perfect for projects that require a specific mood or atmosphere, from watercolor landscapes to neon-lit cyberpunk cities.
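A prompt combining both reference types might look like this (both URLs are placeholders — one for your character, one for an image in your target style):

```
a detective walking down a rainy neon-lit street --cref https://example.com/character.png --sref https://example.com/style.png --cw 100
```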
Combining with Other Parameters
Beyond --cref and --sref, Midjourney offers a wide range of parameters to further refine your creative vision.
From switching the Midjourney model from V6 to Niji 6, to adjusting stylization (--stylize), weirdness (--weird), or variety (--chaos), these parameters enable you to explore different inspirations and find the style you prefer.
Once you’ve pinpointed your desired style, you can use that new image as a character reference to create a series of consistent images.
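For instance, you might explore an anime rendition of your character with the Niji model and a higher stylization value (the URL is a placeholder):

```
a cheerful robot chef cooking in a kitchen --cref https://example.com/character.png --niji 6 --stylize 250
```

Once a result appeals to you, save it and feed it back in as the new --cref image for the rest of your series.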
The Limitations
As Midjourney mentioned in their announcement:
This feature works best when using characters made from Midjourney images. It’s not designed for real people / photos (and will likely distort them as regular image prompts do)
DavidH – Midjourney Discord Channel – Announcements
In my tests, however, photos of celebrities and other well-known figures actually worked quite well, likely because Midjourney can identify them (try running the /describe command on such a photo to confirm).
I also experimented with a few stock photos of men’s and women’s portraits, and the results were quite similar to what I achieved using realistic portraits made with Midjourney.
Additionally, anime-style images created in Niji mode tend to yield better results than those generated in V6 mode.
Wrapping Up
While Character Reference is definitely more convenient and performs better than plain image prompts, the accuracy isn’t always spot-on; much depends on how distinctive the reference character’s features are. It captures the overall impression quite accurately, but the finer nuances are often missed. You get the sense that they are the same character, yet you can still easily spot many differences. There’s hope for improvements in the future.