ChatGPT Turns One: What a Writer Has Learned About Her Biggest Competition

By Hannah Burns

When ChatGPT debuted last November 30, my initial reaction was one of concern. And anxiety. Some stress. Certainly disbelief. And maybe a little bit of excitement. But mostly, I felt threatened. Generative AI could do it all, the headlines said—or, at least, enough to render my role obsolete in just a few years.

So, I set out to learn all I could about the powerful new tool poised to replace me. I played with GPT-3, Google Bard, and every other platform I could get my hands on. I glued myself to the news, squirreling away evidence that the AI hype was overblown. After all, I needed ammo on hand so I could respond with confidence when clients, my parents, and my dentist (yes, my dentist!) hit me with what’s becoming an all-too-familiar question: “Can’t ChatGPT just do that?”

The answer to that query, it turns out, is… complicated.

What it’s good at

OK. We’ll start with the good—and, I must admit, there is plenty of good if you play to the technology’s strengths.

GenAI can cut tasks like formatting interview transcripts, summarizing notes, and organizing lists from minutes to seconds. You still have to check its work, but it almost always gets you closer to the desired result. The right software can also be an effective editor when a second set of human eyes isn’t available (our coworkers sleep, after all, and GenAI doesn’t). So long as you ask for suggestions rather than corrections, these platforms can help you tighten your prose and get to the point when you’re struggling to find the path.

That brings us to, perhaps, the most valuable use of AI I’ve found in my work: brainstorming. Coming up with new ideas can be difficult, especially for PR and content marketing pros in fast-paced agencies. I mean, how many times can you write from the same talking points before you hit a wall? My team and I have found that AI is incredibly good at getting writers over this hump, even if it doesn’t provide exactly what they were looking for.

Believe it or not, it’s usually more helpful when it gets things wrong. I don’t know that I’ve found a more potent salve for writer’s block than asking AI to write an article’s conclusion just so that I could gut it. Even better, it’s confirmed a long-held suspicion of mine: that seeing what you don’t want is the fastest way to figure out what you do.

What it’s not so good at

Now, I could go on at length about what GenAI can’t do yet, but instead, I’ll keep it simple. The problem with today’s AI is that large language models (LLMs) are not people. They don’t reason, nor do they think. While I know this might make me sound old-school, that really is their fatal flaw.

Today’s GenAI platforms create work based on probabilities, not ideas: think iMessage suggestions, but with incredible computing power and trillions of reference points. ChatGPT, Bard, and other tools cannot make decisions; they can only predict what’s likely to be right. It’s the reason we’ve gotten so many laughs from ChatGPT’s attempts to unravel logic puzzles, understand riddles, or solve seemingly simple math problems.
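For the technically curious, here is a minimal sketch of what “predicting the likely next word” means. Every word and probability below is invented for illustration; a real LLM learns billions of such patterns from training data rather than storing a hand-written lookup table.

import random

# Toy next-word probabilities, made up purely for illustration.
NEXT_WORD_PROBS = {
    "the court": {"ruled": 0.6, "held": 0.3, "danced": 0.1},
    "court ruled": {"that": 0.7, "in": 0.2, "against": 0.1},
}

def predict_next(context: str) -> str:
    """Pick the next word according to how often each option
    followed this context in the (made-up) training data."""
    probs = NEXT_WORD_PROBS[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the court"))  # usually "ruled": likely, not reasoned

Notice that nothing in that process checks whether “the court ruled” is true; the model only knows it is probable.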

It’s also the reason writers must still guide the process. In writing, predicting isn’t always enough. ChatGPT’s penchant for citing made-up legal precedents, invented statistics or studies, or non-existent headlines is a compelling example of this issue. Because the algorithm sees that a legal brief is statistically likely to cite case law, it might pull last names and dates from thin air. The prediction is plausible, but the execution is wrong. AI researchers call the phenomenon “hallucination,” but all I hear as a writer is libel. Maybe slander. Possibly fraud, depending on the context.

Perhaps the most glaring issue with this approach is the question of originality (or lack thereof) in AI-generated materials. The simple fact is that all GenAI content is derivative. Sure, we can debate all day about whether writing itself is derivative, and many people far smarter than I have done so. But the level of iteration we’re talking about here feels different. The technology’s design makes defining a voice incredibly difficult and saying something new or innovative nearly impossible.

A cybersecurity blog written by Bard is an amalgamation of whatever material fed the model, from competitors’ websites to outdated articles, conspiracy theories, and works of fiction, and its format is dictated by what’s been done before. Essentially, the time we save comes at a cost: work that, while technically sound, has nothing new to say. That polished blog is just another version of someone else’s arguments, syntax, grammar, and flow. It may be new, but it is not original.

The bottom line

Despite my seemingly harsh assessment, my outlook on GenAI in content development now sits somewhere between optimistic and accepting. Over the past year, I have come to appreciate the support it offers to my team and me. When things get busy, writer’s block rears its ugly head, and inspiration is in short supply, GenAI can free up space in the day or give you something to respond to.

Still, I feel confident that AI won’t be coming for my job anytime soon, nor will it be taking yours. The undeniable fact is that it cannot do the work for you, despite what many headlines and developers initially claimed, and that’s true across disciplines. One year in, the biggest lesson I’ve learned is that AI is only as capable as the person using it. That may change with time, but it’s where we stand today.

So, I suppose the answer to my dentist’s question is that, yes, ChatGPT can do that—to a point. And only if an experienced writer is helping it along.

If your content is feeling robotic lately, our team is always eager to show why the human touch isn’t going away any time soon. Drop us a note below!