Generative AI Technologies and Models Used by Adobe

Adobe uses a variety of generative AI technologies and models, including:

  • Adobe Sensei GenAI: This is Adobe's generative AI service, which will be integrated natively into Adobe Experience Cloud to power end-to-end marketing workflows. Its models are trained and tuned so that the content they generate is commercially viable and of professional quality.

  • Adobe Firefly: This is a family of creative generative AI models coming to Adobe products, with an initial focus on image generation and text effects. According to Adobe, the first Firefly model is trained on Adobe Stock images, openly licensed content, and public domain content where copyright has expired.

  • Adobe Substance 3D: This is Adobe's suite of 3D tools, which uses generative AI to help create realistic, high-quality 3D assets such as materials and textures. The underlying models are trained on datasets of real-world objects, materials, and textures.

  • Adobe Photoshop: Photoshop includes a number of AI-powered features, such as Neural Filters and Content-Aware Fill. These tools automate work that would otherwise be time-consuming or difficult, such as removing objects from an image or filling in missing areas with realistic texture.

The generative AI technologies Adobe uses include some that are proprietary to Adobe and some that build on techniques from open research. The Sensei GenAI service, for example, is based on a proprietary model trained on a dataset of commercially viable, professional-quality content, while the Firefly models draw on widely used generative techniques from the research community, such as diffusion models.

Adobe also uses a variety of other generative AI technologies, including:

  • Generative adversarial networks (GANs): A GAN pairs two neural networks, a generator and a discriminator, that are trained against each other: the generator learns to produce samples the discriminator cannot tell apart from real data. GANs have been used to create realistic images, video, and other media (see the GAN sketch after this list).

  • Variational autoencoders (VAEs): A VAE learns to compress data into a compact latent representation and reconstruct it; sampling new points from that latent space yields new, realistic outputs. VAEs have been used to generate images and other media (see the VAE sketch after this list).

  • Transformers: Transformers are a neural network architecture built around attention, widely used for natural language processing and machine translation. They are also used for generative tasks such as text generation and image captioning (see the transformer sketch after this list).

  • Diffusion models: Diffusion models are trained to reverse a gradual noising process: starting from pure noise, they iteratively denoise it until a realistic sample emerges. Diffusion models have been used to create realistic images, video, and audio (see the diffusion sketch after this list).
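
The sketch below is a minimal, illustrative GAN training loop in PyTorch, not Adobe's implementation. The network sizes, the random stand-in "real" batch, and names such as latent_dim and data_dim are placeholder assumptions chosen only to show the adversarial setup: the discriminator is pushed to score real data high and generated data low, while the generator is pushed to fool it.

```python
# Minimal GAN sketch (illustrative only; toy sizes and random stand-in data).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # placeholder dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw logit: higher means "looks real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)        # stand-in for a batch of real data
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```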
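
The VAE sketch below is equally illustrative: an encoder maps data to a mean and log-variance over a latent space, a sample is drawn with the reparameterization trick, and a decoder reconstructs the input. The loss balances reconstruction against a KL term that keeps the latent space close to a standard normal, so new data can be generated by decoding random latent vectors. All dimensions and the random training data are placeholders.

```python
# Minimal VAE sketch (illustrative only; toy sizes and random stand-in data).
import torch
import torch.nn as nn
import torch.nn.functional as F

data_dim, latent_dim = 64, 8  # placeholder dimensions

encoder = nn.Linear(data_dim, 2 * latent_dim)   # predicts mean and log-variance
decoder = nn.Linear(latent_dim, data_dim)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(1000):
    x = torch.randn(32, data_dim)                          # stand-in for real data
    mu, logvar = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
    recon = decoder(z)

    # Reconstruction term plus KL divergence toward the standard normal prior.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss = F.mse_loss(recon, x) + kl
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: decode latent vectors sampled from the prior.
samples = decoder(torch.randn(4, latent_dim))
```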
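
The transformer sketch shows the generative use described above: a single encoder layer with a causal mask acts as a toy language model, and tokens are produced autoregressively by sampling one at a time. The vocabulary size, layer sizes, and start token are assumptions, the model is untrained, and positional encodings are omitted for brevity.

```python
# Minimal autoregressive transformer sketch (illustrative only; untrained toy model).
import torch
import torch.nn as nn

vocab_size, d_model = 100, 64  # placeholder sizes

embed = nn.Embedding(vocab_size, d_model)
block = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
lm_head = nn.Linear(d_model, vocab_size)

def next_token_logits(tokens):
    # Causal mask: each position may attend only to itself and earlier positions.
    n = tokens.size(1)
    mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
    h = block(embed(tokens), src_mask=mask)   # positional encodings omitted for brevity
    return lm_head(h[:, -1])                  # logits for the next token

# Autoregressive generation: repeatedly sample the next token and append it.
tokens = torch.tensor([[1]])                  # arbitrary start-of-sequence token id
for _ in range(20):
    probs = next_token_logits(tokens).softmax(dim=-1)
    tokens = torch.cat([tokens, torch.multinomial(probs, 1)], dim=1)
print(tokens)
```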
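
The diffusion sketch illustrates the idea in the last bullet: during training, clean data is mixed with noise at a random timestep and a small network learns to predict that noise; at sampling time, the process runs in reverse, starting from pure noise and removing the predicted noise step by step. The schedule, network, and dimensions below are placeholder assumptions, not the architecture behind any Adobe model.

```python
# Minimal denoising-diffusion sketch (illustrative only; toy data and network).
import torch
import torch.nn as nn

data_dim, T = 64, 100
betas = torch.linspace(1e-4, 0.02, T)              # noise schedule
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

# Toy noise-prediction network; real systems use U-Nets or transformers.
model = nn.Sequential(nn.Linear(data_dim + 1, 128), nn.ReLU(), nn.Linear(128, data_dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def predict_noise(x_t, t):
    t_feat = t.float().unsqueeze(-1) / T           # crude timestep conditioning
    return model(torch.cat([x_t, t_feat], dim=-1))

# Training: corrupt data with noise at a random step and learn to predict the noise.
for step in range(1000):
    x0 = torch.randn(32, data_dim)                 # stand-in for real data
    t = torch.randint(0, T, (32,))
    noise = torch.randn_like(x0)
    a_bar = alphas_cum[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    loss = ((predict_noise(x_t, t) - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: start from pure noise and iteratively remove the predicted noise.
x = torch.randn(1, data_dim)
for t in reversed(range(T)):
    eps = predict_noise(x, torch.full((1,), t))
    alpha_t, a_bar = 1.0 - betas[t], alphas_cum[t]
    x = (x - betas[t] / (1 - a_bar).sqrt() * eps) / alpha_t.sqrt()
    if t > 0:
        x = x + betas[t].sqrt() * torch.randn_like(x)
```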

These are just a few of the generative AI technologies and models that Adobe uses. As the field of generative AI continues to develop, Adobe is committed to using this technology to help creative professionals create even more amazing content.

Here are some specific examples of how Adobe is using generative AI:

  • Adobe Sensei GenAI: This service is being used by businesses to create personalized marketing experiences, generate realistic product images, and automate customer service tasks.

  • Adobe Firefly: This technology is being used by creative professionals to generate realistic illustrations, create unique color palettes, and experiment with different design ideas.

  • Adobe Substance: This suite of tools is being used by game developers, architects, and other 3D artists to create realistic and high-quality 3D assets.

  • Adobe Photoshop: The Neural Filters and Content-Aware Fill tools in Photoshop are being used by photographers, designers, and other creative professionals to automate tasks and create more realistic and creative content.

Adobe plans to keep investing in these technologies and making them available to its customers as the field matures.
