AI Writing Smells (And How to Fix Them)
8 patterns that flatten your voice, and how to bring it back.

AI is a starting point. The draft it gives you is raw material, not a finished product. When you skip the step that makes it yours, patterns show up in the output that signal "this person didn't think about what they were saying." Your colleagues have seen enough AI output by now to recognize those patterns instantly.
In software, Kent Beck coined the term code smell for patterns that technically work but signal deeper problems. Martin Fowler defined it as "a surface indication that usually corresponds to a deeper problem in the system." That same concept applies to workplace writing. AI writing smells are the defaults that flatten your voice and disconnect you from the people reading your work.
Here are 8 of them. Each one is something the model does, not something you did wrong. And each one has a specific fix.
Structural smells
1. Em-dash overuse
Dashes used frequently to break up sentences, often in places where a comma or a period would do. LLMs reach for em-dashes the way some people reach for filler words. If you see more than one per paragraph, that's the model's habit showing.
Fix: Replace dash emphasis with stronger sentence structure. A period and a new sentence is almost always clearer.
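The one-per-paragraph rule of thumb is easy to check mechanically. Here is a minimal Python sketch; the blank-line paragraph split and the threshold of one are this guide's heuristic, not a standard:

```python
import re

def emdash_flags(text, max_per_paragraph=1):
    """Return (paragraph_index, count) for paragraphs that exceed the
    em-dash threshold. Paragraphs are assumed to be separated by blank
    lines; the default threshold of one follows this guide's heuristic.
    """
    flagged = []
    for i, para in enumerate(re.split(r"\n\s*\n", text)):
        count = para.count("\u2014")  # U+2014, the em-dash character
        if count > max_per_paragraph:
            flagged.append((i, count))
    return flagged
```

Running it over a draft tells you where to look; the rewrite itself still needs a human ear.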
2. Mixed metaphors
Multiple unrelated figurative images in the same passage. "Moving the needle" and "planting seeds" in the same paragraph. Models pull from enormous corpora and combine figurative language without checking whether the images are compatible.
Fix: Pick one metaphor and carry it across paragraphs, or switch to literal language entirely.
3. Emoji as a crutch
Overuse of emoji to supply tone, warmth, or emphasis where the language itself should carry the feeling. When the words are flat, the model compensates with symbols.
Fix: Remove emoji and carry warmth through wording and rhythm. If you need a rocket emoji to make something feel exciting, the sentence needs rewriting.
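Stripping emoji before the rewrite pass can be automated. This sketch matches a rough subset of the Unicode emoji blocks, not full emoji coverage (skin-tone modifiers and flag sequences, for instance, are not handled):

```python
import re

# Rough emoji ranges: emoticons, misc symbols & pictographs, transport,
# supplemental symbols, and the older dingbat/symbol block. A heuristic
# subset, not complete Unicode emoji coverage.
EMOJI = re.compile(
    "[\U0001F300-\U0001F5FF"
    "\U0001F600-\U0001F64F"
    "\U0001F680-\U0001F6FF"
    "\U0001F900-\U0001F9FF"
    "\u2600-\u27BF]"
)

def strip_emoji(text):
    """Remove emoji so the words have to carry the tone themselves."""
    return EMOJI.sub("", text)
```

If the sentence reads flat after this pass, that is the signal: rewrite the sentence rather than putting the emoji back.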
Tone smells
4. Via negativa parallels
Defining a point by stating what it is not before stating what it is. "This is not about cost reduction. This is about strategic alignment." Models use this structure constantly because it sounds authoritative. It reads as hedging.
Fix: Lead with the direct assertion and remove the negation scaffolding. Just say what it is.
5. Verbose, structured rambling
Prose that restates the same point in multiple ways instead of saying it once and moving on. The model generates until it hits a stop condition, and that often means three versions of the same idea wearing different clothes.
Fix: Say it once. Remove repeated points and triads. If removing a sentence doesn't reduce understanding, cut it.
6. LinkedIn staccato
Short, line-broken sentences with dramatic spacing. It reads like a feed-style social post optimized for scrolling rather than a document written for a specific audience. Models produce this because their training data is saturated with it.
Fix: Collapse dramatic fragments into natural flow. Short sentences are fine. Short sentences that exist only for theatrical pause are not.
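One rough way to spot staccato is the share of short, line-broken fragments in a passage. A Python sketch, with an illustrative 40-character cutoff rather than any standard:

```python
def staccato_score(text, max_len=40):
    """Fraction of non-blank lines at or under `max_len` characters.

    A high score suggests feed-style fragments rather than flowing
    prose. The 40-character cutoff is an assumption for illustration.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    short = sum(1 for ln in lines if len(ln) <= max_len)
    return short / len(lines)
```

A score near 1.0 on a multi-line passage is worth a second look; a single short sentence in normal prose will not trip it.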
Meta smells
7. Prompt-echoing
Output that mirrors the instruction it received instead of executing on it. A section heading that says "(brief)" because the prompt asked for brevity. An opening line that says "Here is a concise summary" because the prompt asked for one. The model describes what it was told to do instead of doing it.
Fix: Delete instruction narration and start with content. If the prompt asked for a summary, the output should just be the summary.
8. Meta-explanation
Passages that explain the structure or purpose of the document they appear in. A slide explaining why the deck exists. A paragraph justifying why a section was included. This is the model orienting itself in its own output.
Fix: Remove structure narration and use direct transitions. Readers don't need a table of contents narrated in prose.
Everyone can prompt a model. That is not a skill anymore. The people who are good at AI-assisted communication have learned to recognize these patterns, build prompts that avoid them, and edit what comes back. That's the gap, and it compounds.
Recognizing AI writing smells is the first dimension of authentic AI-assisted communication. The second is how your message looks on screen. Read Formatting Smells (And How to Fix Them) next, or explore the full series at The AI Smells Guide.
This is what Rally does automatically: communications that sound like they came from a person who cares, optimized for employee engagement.
Sources
- Martin Fowler, "Code Smell," martinfowler.com, 2006. Source of the definition quoted above; credits Kent Beck with coining the term.
- Martin Fowler, Refactoring: Improving the Design of Existing Code, 1999. The book where code smells were first cataloged.
- Northeastern University / Khoury College, "AI slop is a common online nuisance. But what makes a piece of text 'slop'?" Research on AI text pattern frequency.
- Pham et al., "Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models," 2025. Found certain LLM writing patterns appear over 1,000x more frequently than in human text.
- Yenny Cheung, "Technical Lessons Learned from Pythonic Refactoring," PyCon.de / Talk Python to Me #150. Discussion of code smells applied to Python.
- "Slop Detector," slopdetector.org. Taxonomy of AI-generated content patterns.
