DALL·E 2 is a new AI system from OpenAI that can take a simple text description like “a purple mammoth riding a motorcycle” and turn it into a photorealistic image that has never existed before. DALL·E was created by training a neural network on images and their text descriptions. Through deep learning, it not only understands individual objects, like mammoths and motorcycles, but also learns how objects relate to one another and to actions. DALL·E can take what it learned from a variety of labeled images and apply it to a new image, helping people express themselves visually in ways they may not have been able to before.
DALL·E 2 began as a research project and has been available in beta since July 2022. To get started, sign up and create an account using this link, verify your email address, and follow the instructions on the website. Every DALL·E user receives 50 free credits during their first month of use and 15 free credits every subsequent month. Each credit covers one original DALL·E prompt generation, which returns four images, or one edit or variation prompt, which returns three. You can always buy additional credits, and you can learn more about how DALL·E credits work here.
Note: artists in need of financial assistance can apply for subsidized access, and can fill out an interest form to join the DALL·E artist assistance program mailing list.
Using DALL·E 2, you can create and iterate on images in three main ways:
- Generate: create images from a description in natural language, combining concepts, attributes, and styles. Detailed descriptions tend to produce better results, as long as they stay under 400 characters.
- Edit: make realistic, targeted, context-aware edits to images generated with DALL·E or images you upload, guided by a natural-language description. With the Editor, you can place and resize uploaded images, zoom in and out, and use keyboard shortcuts (press ‘?’ for the shortcut menu). DALL·E can add and remove elements while taking shadows, reflections, and textures into account! Learn more about the Editor here.
- Variations: take an image generated by DALL·E, or one you upload, and create different variations inspired by the original.
Other features include:
- Collections: save generations directly in the DALL·E platform, organize them into multiple collections, and share them publicly or keep them private. For example, check out the sea otters collection! Users can save up to 10,000 images in their collections.
- History: the history sidebar shows your 50 most recent generations.
Most of these features are straightforward to use. The recently introduced DALL·E Editor, however, deserves a closer look at two of its capabilities:
Inpainting: DALL·E’s Edit feature can realistically edit a generated or uploaded image. Based on a simple natural language description, it can fill in or replace part of an image with AI-generated imagery that blends seamlessly with the original.
Outpainting: this feature helps users extend their creativity by continuing an image beyond its original borders — adding visual elements in the same style or taking a story in new directions simply by using a natural language description. Outpainting takes into account the image’s existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image.
DALL·E is also available as an API. Users can integrate state-of-the-art image generation capabilities directly into their apps and products through the new DALL·E API.
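As a sketch, a call to the image-generation API might look like the following Python snippet. The endpoint URL, parameter names, and response shape are assumptions based on the public API documentation at the time of writing; treat them as illustrative and check the current DALL·E API reference before relying on them.

```python
import json
import os
import urllib.request

# Assumed endpoint for image generation -- verify against the
# current DALL·E API reference.
API_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt: str, n: int = 4, size: str = "1024x1024") -> dict:
    """Build the JSON payload for one image-generation request."""
    if len(prompt) >= 400:
        raise ValueError("keep DALL·E prompts under 400 characters")
    return {"prompt": prompt, "n": n, "size": size}

def generate_images(prompt: str) -> list[str]:
    """POST a prompt to the API and return the generated image URLs."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_image_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            # Assumes your API key is stored in an environment variable.
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape: {"data": [{"url": ...}, ...]}
    return [item["url"] for item in body["data"]]

# Example (requires a valid API key):
# urls = generate_images("a purple mammoth riding a motorcycle")
```

The official client libraries wrap this same request for you; the point here is only that a prompt, an image count, and a size are the core inputs.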
Since you create the idea behind your artwork, you are, by default, the credited artist for the AI-generated result. If you download an image, though, it will carry a colorful DALL·E 2 watermark in the bottom-right corner.
Users have full usage rights to commercialize the images they create with DALL·E, including the right to reprint, sell, and merchandise. Many users are already using DALL·E images for commercial projects, like illustrations for children’s books, art for newsletters, concept art and characters for games, moodboards for design consulting, and storyboards for movies.
DALL·E can be limited by gaps in its training. For example, if you type “baboon” and DALL·E has learned what a baboon is from images and accurate labels, it will generate plenty of great baboons. But if you type “howler monkey” and it hasn’t learned what a howler monkey is, DALL·E will give you its best guess at what one might be, such as a “howling monkey”.
Still, an AI-generated image can tell us a lot about whether the system understands us or is just repeating what it has been taught. Furthermore, DALL·E helps humans understand how advanced AI systems see and understand our world. This is a critical part of developing AI that’s useful and safe. OpenAI has developed and continues to improve upon safety mitigations. Most importantly, their content policy does not allow users to generate violent, hateful, adult, or political content, among other categories. DALL·E won’t generate images if its filters identify text prompts and image uploads that may violate OpenAI’s policies. Please read a detailed description of DALL·E’s risks and limitations here.
Writing a good prompt is crucial when using DALL·E, as it directly affects the quality and relevance of the generated images. A clear and specific prompt guides the model toward the kind of content you want and helps it understand the context, producing more accurate and relevant results. It is therefore worth taking the time to craft a well-written prompt to get the best out of DALL·E 2. There are many resources and guides on prompt engineering for DALL·E 2 where you can read more.
Keep in mind that DALL·E 2 is a powerful tool, and it is important to use it responsibly and ethically.
One last note: to follow along on updates and share your creations and feedback, do not forget to join OpenAI’s Discord channel!