December 11, 2023 Content Chat Recap: Using AI For Content Marketing in 2024


“Artificial intelligence is a mirror of humanity. It is created from reams and reams of text. And as a result, because it’s a mirror of humanity, it is a mirror of all that is good and bad about us. And the questions that we can’t answer about ourselves, the machines can’t answer either.” – Christopher Penn

AI has reached the masses, forever changing how we interact with the world. As marketers continue to explore how to use AI to reach our audiences in more engaging and personalized ways—and not just add to the content noise—we are met with new challenges and unparalleled opportunities.

In this #ContentChat recap, Erika joins Christopher S. Penn, co-founder and chief data scientist of Trust Insights, to discuss what marketers should know about using AI for content marketing in 2024.

Watch the full conversation on YouTube or read through the highlights below.

Q1: What did you predict would happen with AI in 2023 that didn’t end up being the case?

People started 2023 excited about the prospect of AI, thanks to accessible tools like ChatGPT and image generators that became wildly popular.

“2022 really was a big year for the democratization and the commoditization of generative AI. And then we really started to hear more people adopting it in January once people came back from the holidays.” – Christopher Penn

“Earlier in the year, you had the release of Stable Diffusion, which was a huge deal for the image generation community. Up until that point, a lot of the models had been closed. So DALL-E, for example, was a closed model where you could do stuff with it, but you had to pay after a certain amount of time or a certain number of generations. And suddenly Stable Diffusion came out in August 2022 and was like ‘Hey, you can just download this and run it on your laptop and you can crank out 1,600 images at a time.'” – Christopher Penn

But that excitement was met with fears of job displacement. Luckily for us writers, AI is far from ready to replace us.

“And, of course, the hysteria: ‘ChatGPT is gonna take my writer job!’ Which is always my favorite, because I can tell very, very quickly when a feature article has been generated by ChatGPT, usually by the weird adjective and adverb choice.” – Erika Heald

Q2: What companies surprised you this year regarding their AI platforms?

Microsoft and Meta helped advance AI in notable ways this year.

“Microsoft came out swinging very hard. They are, I think, the majority investor in OpenAI. They basically provide OpenAI’s compute infrastructure. The fact that Microsoft pivoted so quickly to having the GPT-series models in everything is a testament to the fact that Microsoft’s management looked at the landscape and said ‘Okay, we still have Windows, we still have Office, we lost the Browser… this is our chance to take search.’ And it is good enough now that people don’t laugh anymore when you say I used Bing to create this thing.” – Christopher Penn

“The second big winner this year was Meta. Meta released two models in the span of a year, Llama and Llama 2, that are best-in-class open source models. And what this means is that you can take a tool like ChatGPT [and] you can use it, but the underlying model, the GPT-4, you can’t touch that. You can never see it, you never get a chance to interact with it directly. With Llama 2 in particular, you can go to Hugging Face, for example, and just download a copy and run it on your laptop if you have the right interface. And that means you can modify it, you can tune it, you can specialize it. And if you are running it on your laptop, you can turn off your Wi-Fi and it still works. This is true democratization of generative AI because now no one can take that away from you.” – Christopher Penn

Google, surprisingly, underperformed.

“Google has consistently surprised us all year with how badly they’ve done for a company with that concentration of smart, capable people. They are some of the smartest people in the world. And a company that has all the data—they have Chrome, Google Search, Gmail, Android, YouTube—and yet, they consistently just screwed the pooch. The PaLM model was a disaster, PaLM 2 was okay. They just released Gemini, and the Gemini Pro version isn’t much better than GPT-3.5, and Llama outperforms that. They’ve gotten into a whole bunch of hot water by basically faking their marketing demo video, which, if you’re trying to build trust in AI, is literally the worst thing you could do.” – Christopher Penn

And Apple is staying focused on being the best at user experience.

“Apple is never first. What they are is best at user experience. They make things that are low friction. For language models in particular, there’s still a lot of friction. If you want to get a model up and running on your own hardware, it is a pain. It is not something that the average Apple user is going to want to do, and it doesn’t make a lot of sense right now for the Apple experience. What we are seeing them do is build new add-ons and software into things like Siri and add new chipsets.” – Christopher Penn

Q3: What do marketers often not consider when thinking about AI?

Skill is no longer a differentiator.

“Skill is no longer a differentiator, right? If you have access to these tools and you become fluent in these tools, your individual skills are no longer a differentiator of your quality as an employee. What differentiates you is two things. One: the quality of the data you have access to. Two: the quality and quantity of ideas that you have. Because if you’ve got an idea, you can use these tools to bring it to life.” – Christopher Penn

AI doesn’t have to be perfect, as long as it helps you perform better than if you didn’t use it.

“For the people who are rabidly anti-AI, it is not a binary situation where these machines have to be perfect or they’re not as good as the human. The reality is there are a lot of people who create a lot of crap, especially in marketing. One of the things that came out in July was a study from BCG and Wharton showing that, of the 750 consultants they tested, the bottom-half performers in terms of quality of work, augmented with AI, surpassed the top-half performers from the control group with just five hours of work within these tools. It’s not a question of it having to be perfect, it just has to be better than what you’ve got now.” – Christopher Penn

A model’s results become worse as it is censored.

“The more time goes on, the worse ChatGPT gets. Here’s why: It’s getting increasingly censored. You can’t say this, you can’t say these things. The way language models work underneath the hood, they’re just big libraries with probability. The analogy I often give is: If a piece of text was a pizza, then a language model would be your cookbook of notes on all the pizzas in the world you’ve ever eaten. And these companies make these gigantic cookbooks. When you censor things in the language model, it causes damage to the whole model. Imagine you have a big cookbook and you have all these recipes in it. But then you’re like ‘but I’m allergic to wheat.’ You can’t just go through the cookbook and cross out the word wheat, you have to tear out recipes. What’s left is a pale shadow because that one word, that one concept, is in so many of the things.” – Christopher Penn
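The cookbook analogy can be made concrete with a toy model. The sketch below is an illustration only (an invented five-word "pizza" corpus, not a real LLM): it builds the kind of next-word probability table Penn describes, then "censors" a concept the only way you can, by tearing out every sentence that mentions it, and shows how much of the rest of the model disappears along with it.

```python
from collections import Counter, defaultdict

# Toy "pizza" corpus standing in for the reams of text an LLM learns from.
corpus = (
    "cheese pizza with tomato sauce . "
    "mushroom pizza with tomato sauce . "
    "white pizza with garlic sauce ."
).split()

def build_model(words):
    """A language model reduced to its essence: for each word,
    a table of how often every other word follows it."""
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def censor(words, banned):
    """You can't just cross out one word: tear out every whole
    'recipe' (sentence) that mentions the banned concept."""
    kept, sentence = [], []
    for w in words:
        sentence.append(w)
        if w == ".":
            if banned not in sentence:
                kept.extend(sentence)
            sentence = []
    return kept

model = build_model(corpus)
censored = build_model(censor(corpus, "tomato"))

print(model["pizza"].most_common())     # 'with' follows 'pizza' 3 times
print(censored["pizza"].most_common())  # only once: two recipes are gone
print("cheese" in censored)             # False: unrelated words vanished too
```

Banning one word removed two of the three recipes, and perfectly innocent words like "cheese" and "mushroom" went with them — which is the damage Penn describes at the scale of a whole model.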

Adversarial models may overcome this quality issue with censorship.

“You’re seeing a big movement now in the language model community to create what are called adversarial model systems. Meta just released theirs called Llama Guard. The idea is you have an uncensored model that can say some really bad things. But it also is very creative as [it has] full language capabilities. So that’s your base. And there’s a second model that basically supervises and looks for ‘Is that racist? Is that sexist?’ then points back to the original system and says try again. So instead of trying to censor the model itself, you’re now saying let’s create a behavioral system in place where we define the behaviors we don’t want.” – Christopher Penn
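The adversarial pattern Penn describes can be sketched as a simple loop: an unrestricted generator drafts output, a separate guard model reviews it, and rejected drafts get sent back for another try. Both models below are hypothetical stubs invented for illustration — in practice each would be a real LLM call (e.g. Llama Guard as the guard).

```python
BLOCKLIST = {"insult"}  # stand-in for a real guard model's policy

def generate(prompt: str, attempt: int) -> str:
    """Stub generator: imagine an uncensored, fully capable base model
    that produces a different draft on each retry."""
    drafts = ["a witty insult about the reader", "a friendly greeting"]
    return drafts[min(attempt, len(drafts) - 1)]

def guard_approves(text: str) -> bool:
    """Stub guard: imagine a second model classifying the draft
    ('Is that racist? Is that sexist?')."""
    return not any(bad in text for bad in BLOCKLIST)

def supervised_generate(prompt: str, max_attempts: int = 3) -> str:
    """Keep the base model creative; police behavior at the output
    instead of censoring the model itself."""
    for attempt in range(max_attempts):
        draft = generate(prompt, attempt)
        if guard_approves(draft):
            return draft
    raise RuntimeError("guard rejected all drafts")

print(supervised_generate("say hello"))  # first draft fails, second passes
```

The design choice is the point: the base model's "cookbook" stays intact, and undesirable behavior is defined and caught at the boundary rather than cut out of the model's knowledge.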

There are thousands of models to choose from, and they often excel at different things.

“There are probably over 10,000 different models that exist now. And some of them sound really good. There’s an open source model I’ve been using called Netty, and when you read its writing—particularly its fiction writing—it’s really good.” – Christopher Penn

There are convenient solutions for getting started with AI, but the only way to get full control and security is to build your own ecosystem of multiple solutions.

“It’s kind of like the difference between hosting your website on a site like Squarespace or putting a server in your office and saying our website is going to run here, and unless the FBI shows up in our building to unplug us, no one’s taking this thing down. That’s a big part of how to think about AI, too: you have the convenient SaaS-based models (OpenAI, Microsoft, Google), and those are good if you don’t want to build infrastructure. But if you are doing something that you need either absolute control over or absolute security over, you are going to be looking at doing it in-house.” – Christopher Penn

“You can get really, really good results through prompting, but the prompting has to be very thorough and very specific and very complete. And a lot of people don’t have those skills. For enterprises, they’re better off looking at that ensemble of different tools. You have a language model, you also have databases of knowledge that the model can pull data from. You have adversarial models fact-checking what the main model is doing. And this ecosystem that is created is the better architecture and a better deployment strategy for enterprise-ready AI. If you want something that will make your lawyers happy, that’s the way you’ve got to go about it. Having a monolithic model and asking it to be the Wizard of Oz and do everything is just not going to go well.” – Christopher Penn

“The real thing that I have not been hearing in these conversations that marketers are having about AI is that everyone just sort of assumes they can purchase it like you would go and purchase Marketo, and that it’ll just be available to you. And they’re not necessarily understanding the technology infrastructure required to be able to integrate it with everything your company is doing.” – Erika Heald

Q4: How is AI changing how brands can reach and engage their communities on social media?

The use of AI to moderate social media networks is complex and nuanced. It is, in part, fueling a rise of communities on channels like Discord that allow members to create their own bubbles. This makes it harder for marketers to conduct social listening, understand buyer needs, and track content success.

“When we think about how the AI systems have to accommodate for so many diverse points of view, including a whole bunch we don’t agree with, I can understand why this is such an intractable problem to solve.” – Christopher Penn

“Discord now has a billion users a month. It is one of the largest social networks. But within Discord, obviously, there are hundreds of thousands, if not millions, of individual, small communities. We’re in this bubble, these are the rules, this is the culture of this bubble, and if you don’t like it, go make your own. For content marketers, that becomes a huge problem for a couple of reasons. There’s no town square anymore. X kind of was that to some degree, and that’s been set on fire. The private social media communities like Discord and Slack, they’re not things you can monitor. They’re not things you can do analytics on. They’re not things you can do very much attribution analysis on.” – Christopher Penn

Which means that branding is even more essential in 2024 onward.

“As a marketer, you need to double, triple, or quadruple down on brand. If you’re not building a brand, if you’re not building something that is memorable, you are hosed. Brand is the only thing that can cross the semi-permeable membranes of all these different communities, because people who like your brand will talk about your brand wherever they are. We really have to flip our thinking about content on its head and put handles on content so that people can pick it up and take it with them into these other places.” – Christopher Penn

Q5: How can marketing teams prepare to use AI effectively?

Companies need to stay agile when exploring new AI solutions and also empower their team with processes that keep them agile.

“Your technology deployment strategy has to be agile, but your people and your processes also need to be as agile and as adaptable to rapidly changing circumstances.” – Christopher Penn

“When you start talking about agility, you do need to start thinking about how can you have specialized AI editors and process improvement tools in place so you’re not having things be bogged down by a human being who may or may not be available.” – Erika Heald

Practice and experiment with the tool of your choice to learn how to make it work better.

“It’s all about practice. It’s all about time in the system. It’s about learning how to use the systems better. There is no substitute for logging into the tool of your choice and just trying to make it work. Try to figure out what makes it work better and what doesn’t. Ask what tasks you do that are language-based, repetitive, and maybe aren’t great value, and can you do that task with a machine?” – Christopher Penn

If team members misuse AI in a way that violates the company’s values, then there should be repercussions.

“If you’re a marketer or a business person wondering ‘How do we regulate the usage of this thing?’, regulate the outcomes. If people are using AI in your company to do things that are unethical or against your company’s values, guess what, you’ve already got processes in place to handle that, but you have to use them.” – Christopher Penn

Q6: What are easy ways for marketers to start using AI?

Reporting is a smart place to use AI.

“Reporting is a task that a lot of people don’t love, have varying levels of skill at, are being asked to do on a regular and frequent basis, and could do better with the help of generative AI. A tool like ChatGPT’s advanced data analysis engine is a great place to start: say ‘I’ve got this data from my Marketo instance or Salesforce instance, what do you see here? Here’s what I’m trying to figure out; I want to know what channels are working when, or what campaigns are working.’ And that would be a really good use case for someone to get better at these tools: the tasks you don’t like doing.” – Christopher Penn

“I really dislike looking at all the numbers and trying to find those weird pieces of overlap, because I don’t want to do pivot tables or try to remember how to do all the Excel formulas. And now you don’t have to; you can actually have the language model tell you what the formulas are, or give you that basic stuff that you just plug in.” – Erika Heald
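To make the reporting use case concrete, here is a minimal sketch of the kind of rollup you could hand to a generative AI tool alongside a CSV export — "which channels are working?" — written out in plain Python. The data and field names are invented for illustration; they are not from any real Marketo or Salesforce export.

```python
from collections import defaultdict

# Imagine this came from a marketing-automation CSV export (invented data).
rows = [
    {"channel": "email",  "campaign": "spring", "leads": 120, "spend": 400.0},
    {"channel": "social", "campaign": "spring", "leads": 80,  "spend": 500.0},
    {"channel": "email",  "campaign": "fall",   "leads": 60,  "spend": 200.0},
    {"channel": "social", "campaign": "fall",   "leads": 40,  "spend": 300.0},
]

# Roll leads and spend up by channel -- the pivot table you'd rather
# not build by hand.
totals = defaultdict(lambda: {"leads": 0, "spend": 0.0})
for row in rows:
    t = totals[row["channel"]]
    t["leads"] += row["leads"]
    t["spend"] += row["spend"]

# Cost per lead by channel: the "weird pieces of overlap" a language
# model can also just hand you as an Excel formula.
cpl = {ch: t["spend"] / t["leads"] for ch, t in totals.items()}
print(cpl)  # in this toy data, email is cheaper per lead than social
```

This is exactly the shape of question — aggregate, compare, explain — that tools like ChatGPT's data analysis feature can run against an uploaded export, so you never have to write the pivot table yourself.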

And AI is especially helpful for enforcing your brand identity across all content and channels.

“I like the idea of being able to train your AI in order to be that shepherd of the brand voice. In bigger companies in particular, you end up in that situation where your content team and your marketing team have invested all of this time in creating a really specific brand and having a little bit of aspirational brand voice going on. And that just gets completely blown out of the water by all of the customer service interactions people have that seemed like they came from a completely different company.” – Erika Heald

Q7: What is likely to happen with AI in 2024?

The U.S. has a presidential election in 2024, meaning ad costs will climb and ad budgets need to increase.

“This time next year, at least in the USA, we’ll have just gotten through a presidential election cycle. And whatever the results of that are, the amount of brain space that people have to devote to consuming marketing will be approximately zero. Every election cycle, the budgets for ad spend go up and up and up. If you’re thinking about 2024 right now and your ad budgets, whatever you budgeted, it’s not enough.” – Christopher Penn

AI will play a major role in spreading disinformation during the election cycle, which will likely bring calls for regulation.

“You are going to see abuse and misuse of AI within the political sphere to create disinformation, misinformation, objectively flat-out wrong stuff, fake stuff. That will find its way into political advertising, political discourse and stuff. So there will be some—and needs to be some—strong conversation. The EU just passed its first set of recommendations on regulating AI to deal with misuse. So by this time next year, we will probably be talking about all the ways AI was used to abuse and manipulate the election cycle. There’ll be some increased calls for regulation from that, because that’s just how human beings are.” – Christopher Penn

New models come out every six to nine months, so we will see several major releases next year.

“From the technology side, you are likely to see another major release of all the big models. You’re likely to see another version of ChatGPT from OpenAI, you’re likely to see one from Anthropic, you’ll probably see one from Google that we’ll all laugh at, and you will see Llama 3 from Meta. What you need to know about that is every generation of model—approximately six to nine months is how fast new models come out—is double the capacity of the previous generation. So it has twice as much working memory as the past generation, twice as much capability, and in a lot of cases twice as much data it was trained on.” – Christopher Penn

Multimodal models will continue to generate interest (but are still in their infancy).

“We’re seeing a lot of interest and a lot of development of multimodal models, models that can go from one format to another. There’s an open source one called LLaVA that is a combined word and vision model. You give it an image and it can describe it, or you give it text and it can create imagery. You’re gonna see more of those multimodal models that will have profound impacts on things like advertising and creative.” – Christopher Penn

And eventually, we could use AI to hold our governments more accountable.

“In the right hands and with the right intent, these tools could be very useful for auditing the governments we have, for auditing the representatives we elect. Do you really want to listen to every speech that your elected representative makes? Probably not. Could you use AI to download, transcribe, and then highlight any unusual things that person said? Absolutely. I think if citizens are serious about it, these tools can be used to hold the governments that we elect more accountable for their behavior, and to make it easier to spot misbehavior and call attention to it.” – Christopher Penn
