8 Ways to Tell if Generative AI Can Help You Produce Content

By Simon Sternklar

Generative AI like ChatGPT can do some impressive things, but “impressive” isn’t always the same as “useful.” While it can produce clean copy with remarkable speed, the quality is mediocre, and it has a well-known (and possibly unfixable) tendency to invent false information.


GenAI struggles with a lot of areas of content creation, such as:

  • Novelty
  • Organization, brevity, and flow
  • Accuracy
  • Understanding of user needs
  • Expertise or unique perspectives
  • An interesting voice

The things GenAI struggles with are all key marks of good content.

As a result, many content creators have concluded that it’s far less useful for writing than it first appears. That’s not to say GenAI is useless – it’s just not the Swiss Army knife AI evangelists like to claim it is. You’ll do a lot better treating it like a hammer: fantastic for the right job, but clumsy and destructive otherwise. Let’s talk about what it is and isn’t good for.

Rule #1: Don’t Publish Raw AI-Generated Content

Editors follow the same rule as doctors: First, do no harm. Their jobs just have much lower stakes.

If there’s one thing to learn from the long string of major brands embarrassing themselves, it’s this: There are few surer ways to harm your brand than publishing unedited AI content. If you’re lucky, the result will be mediocre and generic. If not, you might get something nonsensical, inflammatory, or dangerously misleading. 

Never publish AI-written content without thoroughly editing and fact-checking it.

8 Questions to Ask Yourself Before Using AI to Produce Content

What makes good content? The answer depends on a lot of things: its purpose, the audience, the topic, and many more. However, it’s not hard to tell whether GenAI is suited to a given piece.

Think of a piece you’re planning to publish, and ask yourself the following questions. If you answer “yes” to any of them, consider sticking with a human writer.

1. Is it hard to find authoritative information on this topic?

GenAI doesn’t truly create – all it can ultimately do is produce new text based on existing examples. It needs a sizable body of examples to base its guesses on, and it struggles when it can’t find enough. This can lead it to “hallucinate,” making up false information to fill in gaps. 

That makes it unsuitable for subjects that aren’t covered by many authoritative sources, such as:

  • Relatively unexplored topics where there’s no significant body of knowledge, like new scientific breakthroughs.
  • Topics where there’s little substantiated information, like rumors and conspiracy theories.
  • Topics that aren’t often written about online, like niche interests and secrets. 

If you can’t get reliable information with a cursory Google search, GenAI probably can’t either.


2. Is there significant debate or controversy about this topic?

GenAI can’t think critically. It has no way to determine which information is and isn’t valid, so it may base its responses on sources that aren’t trustworthy, balanced, or accurate. That means disagreement, controversy, and nuance tend to confuse it, especially when major sources disagree with each other. One common result is a disjointed response full of contradictions and clashing viewpoints.

This can also cause problems with subjects prone to heated debate, even if everyone generally agrees about the facts. Because AI cannot filter out unsuitable information, it often replicates inflammatory rhetoric, racial biases, and more.


If there’s no general consensus about a subject, AI will struggle to write a coherent piece.


3. Does this topic have high stakes?

Or, to put it more bluntly: Who gets hurt if I get this wrong?

In high-stakes fields, misinformation can cause real harm, so human writers often go to great lengths to ensure they get it right. Because GenAI can’t question its information and tends to hallucinate when confused, it can be highly prone to spreading falsehoods. A 2023 study found that GPT-3 “agreed with incorrect statements between 4.8 percent and 26 percent of the time, depending on the statement category.”

The most obvious high-stakes subjects are health and finance (known as “your money or your life” content), as well as politics. However, spreading bad information can have major consequences even in lower-stakes fields.

  • Inaccurate celebrity or business journalism can lead to damaged reputations, financial losses, and lawsuits.
  • Bad culinary advice can lead to inedible meals like a pizza made with glue. 
  • Bad home improvement tips can lead to injuries or property damage. 

If publishing bad information could get someone hurt, leave it to a human writer you can trust.


4. Does this topic overlap significantly with other subjects?

GenAI’s inability to discount bad information also makes it unable to see what is and isn’t relevant. This can cause it to meander and insert non sequiturs. More problematically, it tends to become disjointed when the lines between topics are blurred.

When GenAI can’t tell where one topic ends and another begins, it tends to drift from one to the other. I’ve personally edited one piece in which three sequential paragraphs seemed related on the surface but were actually about completely different subjects. This problem is deeper and more common than you might expect, for two main reasons:

Topical Overlap

Every topic in existence relates somehow to other topics, and even human writers struggle to tell when they should and shouldn’t touch on related points. The best writers can struggle with this even more, because they intuitively see connections that others miss. Imagine you’re writing a piece about economic policy. Should you discuss its effects on macroeconomics and microeconomics? If so, how often, and how deeply? There’s no true right answer – it comes down to a judgment call.

Linguistic Overlap

Words have different meanings depending on the context. For example, the word “ball” means different things at a football field, a golf course, and the Met Museum. When two distinct topics share many words, GenAI can become convinced that they are actually the same. This is particularly common in business, sports, and science, where many subtopics use similar jargon for very different purposes.

If your topic overlaps significantly with another or draws on a similar pool of words, using GenAI to write it may produce an unusable mess.

5. Does this topic relate to recent developments?

Until the fall of 2023, ChatGPT was blind to any information published after 2021. Recent updates have brought it closer to the present, but it still struggles to reliably provide recent information. Its awareness of the world tends to lag by about six to eight months. While this is likely to be improved at some point, it’s best to avoid using AI for content related to developments less than about a year old. It’s generally safer to keep it to evergreen or retrospective topics.

6. Does this piece need to be unique in any way? 

In other words: Is it important to not sound like everyone else? GenAI can’t truly create anything—it can only iterate from what already exists. It also tends to avoid taking specific positions on anything. That makes it mostly useless for producing anything novel, unique, or persuasive—and those qualities are what makes content compelling. It struggles to create:

  • Subjective or opinionated content – especially if that opinion is unusual.
  • Content that relies on a unique or personal perspective.
  • Content written in an interesting voice.

AI content usually sounds like everyone else because it’s based on everyone else. If you want to sound like everyone else – which is occasionally alright – it might be worth a shot. Otherwise, don’t bother.


7. Does it require analysis, critical thinking, specificity, or expertise?

As we mentioned above, GenAI struggles with novelty, uniqueness, critical thinking, and complexity—in other words, the exact sorts of things that experts tend to excel at. 

GenAI also struggles with specificity. Given a specific question, it tends to respond with broad generalizations rather than detailed answers. That’s because specifics are the domain of experts, who only create a small fraction of the content available for most topics. The bulk is written by lay writers, who tend to give general, surface-level information. GenAI can’t favor the opinions of experts over those of lay writers, so it mimics those generalities.



There’s nothing wrong with a good overview. Overviews give users an easy entry point into a topic, which makes them popular. They’re also relatively easy to produce – and GenAI may be able to help you build them. But if you need a deeper look, consider hiring an expert writer.

8. Will using AI actually reduce your workload?

People tend to assume that using GenAI will save them time and money on content production. But that’s only true some of the time.

Getting AI content to a publishable state requires a lot of effort from editors and fact-checkers – much more than even mediocre content written by humans. It’s easy for short, simple pieces, but exponentially harder for long and complex ones. In many cases, it may be easier to just write the piece manually.

Skillful prompt engineering (writing highly specific prompts in hopes of producing more desired outputs from GenAI) can sometimes result in decent pieces. But it has a high learning curve, and the intricate prompts it requires are themselves hard to write.


In most cases, GenAI doesn’t actually reduce workload; it just shunts labor from writers to editors and fact-checkers. Before using it, ask yourself if that’s a worthwhile trade.


What is generative AI good for?

GenAI might not be as useful for content production as people think, but it’s not useless. One area where it shines is mass-producing small, low-stakes content. That’s a niche case, but it’s a very profitable niche.

For example, e-commerce platforms can have thousands of product pages, and each one needs a product description. That text doesn’t need to be fantastic: It mostly just needs to contain relevant keywords and not do any harm. GenAI can provide simple descriptions for each item that are quick to edit and publish.

In most cases, however, utilizing GenAI to write text is a gamble. It’s far safer to use it for indirect support work like:

  • Brainstorming
  • Outlining
  • Transcription
  • Summarizing an existing text, and flagging quotable sections

This way, you can benefit from GenAI’s ability to gather information rapidly without the risk of its many shortcomings.

Would you trust an intern to do this?

In November 2023, Cigna Chief Digital & Analytics Officer Katya Andresen described GenAI as being similar to an infinite number of interns.

The description is apt. GenAI is quick, enthusiastic, and happy to please, but it lacks judgment, initiative, and real skill. You can’t just turn it loose on a problem. Getting useful results from it requires constant, engaged management and solid guardrails.


So next time you wonder if GenAI might help you solve a problem, ask yourself, “Would I trust an intern with this?”


Want more insights on AI? Read 5 Things Adobe Photoshop AI Can Do (and 2 That It Can’t) from Ridge Marketing art director Mike McDonald.