Introduction to Generative AI
Unless you have been living in a cave, you have surely heard the term Generative AI, or GenAI for short. The best-known example is ChatGPT, which has taken the world by storm. Adobe, for its part, has also been working on this technology, and it is now available in some of its tools. I have to admit that it took me some time to start playing with it, as my main project was eating all my time. However, I have finally started digging into it. I am far from an expert, but I wanted to share what I have learned and my point of view on where I think it should go.
What is Generative AI?
You know me and you know that I always want to start with the basics, be it the definition or the why. A quick search takes me to this definition:
Generative artificial intelligence (AI) is a type of AI that generates images, text, videos, and other media in response to inputted prompts.
In other words, it is a technology that creates content almost out of thin air: from only a few words, you get much richer content.
"How?", you may be wondering. I am no subject matter expert, but I can summarize what I know. Some algorithms try to mimic how the brain works. These algorithms are adaptive and need to be "trained". This means that you need to feed them many known inputs together with the expected outputs; the more, the better. For example, if you want an algorithm to identify a house, you submit thousands or millions of photos, some with houses and some without, and you tell the algorithm which ones contain houses. Based on this input, the algorithm learns what a house is. The next time you submit a completely new photo, it should identify whether it contains a house or not.
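To make the house example concrete, here is a toy sketch of that training loop: a tiny logistic-regression classifier learning "house vs. not house" from hand-made feature vectors. This is purely my own illustration, not anything Adobe or ChatGPT uses; real image models learn from millions of pixels with deep neural networks, but the principle is the same: labeled examples go in, adjusted weights come out.

```python
# Toy supervised training: learn "house vs. not house" from labeled
# examples. Each "photo" is reduced to three hand-made features:
# [has_roof, has_windows, has_wheels].
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    """Gradient-descent training on (features, 0/1 label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # prediction error drives the weight update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """True if the model believes the photo contains a house."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

# The labeled training set: houses have roofs and windows, no wheels.
samples = [[1, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 0]]
labels  = [1,          1,         0,         0,         0,         1]

w, b = train(samples, labels)
print(predict(w, b, [1, 1, 0]))  # roof and windows, no wheels
```

After training, the learned weights encode the pattern (wheels count against "house", roofs and windows in favor), which is what "the algorithm learns what a house is" means in miniature.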
Adobe has released a set of capabilities under the GenAI umbrella, collectively called Adobe Firefly. Some of these features can be used through a web interface, but others require one of the Creative Cloud tools. Once you enter the Firefly website, you get the following capabilities:
- Text to image. Generate images from a detailed description and guide style with a reference image.
- Generative fill. Use a brush to remove objects, or paint in new ones from text descriptions.
- Text effects. Apply styles or textures to text with a text prompt.
- Generative recolor. Generate color variations of your vector artwork from a detailed text description.
- Text to template. Generate editable templates from a detailed text description. Available in Adobe Express.
- Text to vector graphic. Generate SVGs from a detailed text description. Available in Adobe Illustrator.
As an example, I have tried “text to image” with the following prompt: unicorn galloping on a beach at sunset with the moon high up in the sky. This is one of the results I got:
Pretty amazing, huh?
Adobe Experience Cloud
This blog is about the Adobe Experience Cloud. In fact, if you ask me anything about Adobe Photoshop or Adobe Illustrator, I would not be able to help you, as I am very sure that you know more than me. However, GenAI can also have a deep impact on the Experience side of the house.
What I am going to explain from here to the end of this post is highly speculative and only my point of view. This is why this post is classified under Opinion too. I do not have access to the roadmap of Adobe's products. I am in consulting, not product management, so I can only guess. My goal here is to encourage you to imagine what you could do if and when the technology allows it, so that you are prepared.
I will use only images in this subsection, but it could also be applied to text or video.
Marketing is all about content: you want to send a message that resonates with your customers so that they perform a desired action. In the case of digital marketing, the goal is to make the content as personalized as possible. So far, we have used segmentation to create audiences and send a different message to each audience. Imagine now that you could create a unique image per customer, an image that has been specifically created for this individual.
Here are some examples of what I have in mind:
- Retail. I imagine that commerce websites (either with Adobe Commerce standalone or AEM + Adobe Commerce) would be able to generate personalized images of the catalog. For example, if you are a fast fashion retailer, you could let your customers see how the clothes would fit them in a particular scenario. Ideally, your customers could upload a full-size picture of themselves, to be used by the generation software.
- Automotive. This is not my idea; I am borrowing it from a coworker. In this scenario, we are sending a newsletter from Adobe Journey Optimizer (AJO). For most customers, we know which car they own or are considering, and where they live. When preparing the delivery, AJO could request the creation of an image of that car in an environment familiar to the customer.
- FSI. Typically, banks use inspiring images or photos of happy people. One of the financial institutions I use in the US shows the image of an African-American man, a Caucasian woman, and their mixed-race daughter. This is as much as we can do today, trying to be as inclusive as possible. However, if we had enough information about each customer, we could create a photo that feels closer to that customer.
I am fully aware that the images I have shown are not realistic, as I just wanted to showcase the functionality. I have only used "text to image", but with features like "generative fill", you can upload your own image and prompt GenAI to change parts of it, typically the background.
Let's dream even more and take it to the next level. As I said above, I do not know exactly what Adobe is working on, but I would not be surprised to see something similar to the following ideas.
We all know how time-consuming it is to create an activity in Adobe Target, a dashboard in Customer Journey Analytics (CJA), or a journey in AJO. Not only that: Adobe tools are very sophisticated, and not everybody knows them well enough. Would it not be amazing if another GenAI tool could do some of this work for you?
Imagine the following prompts:
- I want to create an A/B test where the alternate experience highlights the basket icon and swaps the first and second entries of the menu.
- Please create a dashboard where I can see the purchases of the last year by region, month by month. Add summary values of the total revenue and AOV.
- I need a journey that will be triggered by a purchase, where I will send a message to my customers’ preferred channel: email, SMS, or push.
These prompts would trigger the creation of the Adobe Target activity, CJA dashboard, or AJO journey and would configure as much of it as possible. In my vision, somewhere between 50% and 80% could be automated. For example, in the case of AJO, the main text is unlikely to be created on the fly. The marketing team will want to keep control: write it (maybe using GenAI), review it, and approve it before sending it to the customer. However, we could get the journey canvas and the email canvas ready, so that we only have to add the content to the email canvas.
What other features based on GenAI would you want to see in the Adobe Experience Cloud? Let me know in the comments.
All images generated by Adobe Firefly.