AI vs. Marketing: AI Critically Thinks

If you’re looking for a conversation starter, just ask anyone how they feel about AI in today’s digital world. Everyone has a hot take: hate it, love it, think it’s revolutionary, think it’s going to steal our jobs. But there is no denying AI’s usefulness. Since its introduction decades ago, automation has become far more accessible, and “algorithm” is now an everyday word rather than computer science jargon.

Even design jobs use AI to speed up media editing and to mock up concepts before a designer builds them out. And content creators are no strangers to running a few topics through ChatGPT to test copy for a headline they can’t crack. However, the price of speedy digital assistance may be losing access to critically interpreted information online.

How does AI gather insights, and how do they compare to human-gathered ones? This was never AI’s job, yet AI is increasingly the first stop when users search for information that businesses like Google and Wikipedia have already consolidated, interpreted into insights, and made easy to distribute.


AI has numerous benefits, and they only continue to grow as it advances. Beyond simple fact-checking and the well-known “reduction of human error” point, AI has been monumental in streamlining everyday technology.

  • AI is useful for small error checks and fast fact references
  • AI can be a tool for people in marketing spaces
    • Designers can use Photoshop’s AI features to generate image data that doesn’t exist
    • Copywriters can use it to concept different copy and even test it out
    • Marketing teams can use it to skim through large amounts of data
  • AI can work 24 hours a day, so repetitive, monotonous tasks can be handled at any time
  • AI-driven robots can take on hazardous tasks instead of human workers, such as handling toxic waste or operating heavy machinery in factories


Interestingly enough, according to the SAS Institute, the goal of AI is “to provide software that can reason on input and explain on output.” However, the catch in AI providing human-like interactions and thorough decision support is that AI can only work with the knowledge it has, including whatever you provide it.

So, for instance, if you give ChatGPT a prompt that is close to a fact but not genuinely accurate, it will pull from facts around the truth to give you an answer. Take the 1969 moon landing. Americans know it involved two astronauts on the surface, but if I ask, “Who are the five people who landed on the moon in 1969?” ChatGPT will give me five names.

While it notes that one of those people is not an astronaut, ChatGPT still lists the same person, Buzz Aldrin, twice and finishes the answer with, “These astronauts made history with their achievements on July 20, 1969, during the Apollo 11 mission,” implying they are all astronauts.

What’s more, Art Directors and Designers alike have been on edge about tools like Midjourney and other AI design software, which have been touted as the industry-changing tools of creative marketing. Yet simple idiom prompts genuinely confuse these systems, and the churned-out designs are almost comical in their inaccuracy. There are even roundups of fails like this on Bored Panda, showing Midjourney and other AI art generators failing to understand ordinary human requests.

These systems also struggle with prompts that carry certain conditions, such as directional perspectives. For instance, ask any free AI art generator to create an aerial view of boba cups with lids – because, as a designer, maybe you can’t find a stock image of logo-less boba lids from that view and want to generate one for a boba brand mockup – and you’ll find that “aerial,” “top-down,” and “above perspective” fail to produce this image. In fact, boba/bubble tea is a hard product for AI to generate at all; the images that emerge look like a petri dish of colorful fungi with Thai tea as the agar. True story. (You can try it yourself in any AI art generator you can access.)

AI as a tool has three big disadvantages when it comes to critical thinking: it strips information of its context, it spreads misinformation, and it cannot replace access to the intersectionality of true insights.


Wikipedia, one of AI’s largest resources and an internet encyclopedia founded on making information accessible and free, is in constant need of funding, and even more so in the age of chatbots you can ask questions directly.

“Wikipedia is probably the most important single source of training in AI models. Without Wikipedia, Generative AI wouldn’t exist.” – Nicholas Vincent, a researcher of information businesses.

Wikipedia famously runs on donations, volunteer fact-checkers, and contributors. According to The New York Times’ “Wikipedia’s Moment of Truth,” AI chatbots draw on Wikipedia’s data to answer people’s questions. The cost of this information feast is that Wikipedia now has fewer users and donations, which means fewer fact-checkers and contributors to clean up the graffiti consistently added to its encyclopedic walls. In turn, these chatbots will eventually pull more of their data from this graffiti-overrun encyclopedia.


The biggest disadvantage of AI is the most pressing: its capacity to spread misinformation. If AI bots were fed only their own synthetic data, these systems would break down. With enough internet traffic, synthetic misinformation is pushed further and further forward until it becomes the reference for fact.

That is because AI operates like a well-run high school rumor mill – the more people ask about a rumor, the more it spreads, and the most popular rumor becomes the prevailing opinion fastest. Misinformation becomes the prevailing opinion because the popular rumor drowns out lesser-known ones as it spreads. More importantly, while rumors can hold some truth – think of how AI is good at small error checks and fast fact references – they can also lack nuance, because AI cannot capture entire contexts and intentions.
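That rumor-mill feedback loop can be sketched in a few lines of code. The toy simulation below is purely illustrative, not a model of any real AI system: the answer labels, starting counts, and the squared-popularity bias are all assumptions. It shows the core dynamic, though – answers that are already popular are disproportionately likely to be echoed again, so the leading answer crowds out the others whether or not it is true.

```python
import random

def rumor_mill_round(counts, n_queries=1000, rng=random):
    """One round of the rumor mill: each query echoes an answer drawn in
    proportion to the *square* of its popularity at the start of the round
    (a rich-get-richer bias), and every echo adds to that answer's count."""
    answers = list(counts)
    weights = [counts[a] ** 2 for a in answers]  # superlinear popularity bias
    new_counts = dict(counts)
    for _ in range(n_queries):
        echoed = rng.choices(answers, weights=weights)[0]
        new_counts[echoed] += 1
    return new_counts

random.seed(0)
# Three competing answers; the slightly-more-popular one need not be true.
counts = {"popular rumor": 40, "accurate answer": 35, "fringe rumor": 25}
for _ in range(5):
    counts = rumor_mill_round(counts)

total = sum(counts.values())
shares = {a: round(c / total, 2) for a, c in counts.items()}
print(shares)  # the "popular rumor" share grows well past its initial 40%
```

Under these assumed numbers, the initially most popular answer ends up with a clear majority of all echoes after five rounds, while the fringe answer nearly vanishes – the same dynamic by which a widely repeated piece of misinformation becomes the reference for fact.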


More relevant to me, a Designer in ad spaces, AI cannot replace true insight gathering and accessing the intersectionality of those insights. Intersectionality is a concept people themselves are still grasping. For example, one of Online Optimism’s clients, Stratus Firm, a premier production company based in DC, wanted a new voice, tagline, mission, vision, values, and website copy to reflect Stratus Firm’s new branding, capabilities, and goals. Online Optimism provided Stratus with these services and set new style guidelines, brand vocabulary, and visual brand positioning for the firm.

To do this, Online Optimism staff interviewed Stratus’ team members candidly to get a sense of how they perceived the company and their work. They cross-referenced those perceptions against available customer data and found common themes to build the updated brand on, while keeping the more intricate nuances in mind and finding ways to incorporate those contexts. AI doesn’t have a handle on the nuance it takes to identify the specific ways people exist like this and how those very niche things can unite.

People are necessary for marketing because we live nuance day to day. It would be convenient if a chatbot could write a headline, tagline, and mission from common experiences Stratus employees typed into a machine. However, the nitty-gritty is found in real conversations that later translate to data points, not the other way around. The sort of information AI would need to pull from to find that nuance is many years away.


AI critically thinks through lots of failed attempts. It has facts, and it knows the point it’s trying to make, but intricate contexts aren’t its forte. We cannot trade context for AI’s critical thinking just yet, because nuance is where humans live.

That being said, AI is not something we should discard. It is a tool made for our convenience, when it makes sense to use a tool. Online Optimism has a knack for marrying that tool into the services it provides; its entire mission is keeping humans at the forefront of communication and optimization. So while we may use AI to make our jobs easier, it is still up to the humans behind the screen to bring value and real-world application to the data AI has condensed and automated for us.