Perplexity
Perplexity measures how surprising each word in a text is to a language model, given the words that came before it. Lower perplexity means the text was more predictable; higher means more unexpected.
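Concretely, perplexity is the exponential of the average negative log-probability the model assigned to each token it saw. A minimal sketch, using made-up per-token probabilities for illustration:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each observed token."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Illustrative probabilities, not real model output:
predictable = [0.9, 0.8, 0.9, 0.85]   # the model was confident about every word
surprising = [0.9, 0.8, 0.05, 0.85]   # one word the model barely expected

print(perplexity(predictable))  # low
print(perplexity(surprising))   # noticeably higher
```

A single low-probability token drags the whole score up, which is why one odd word choice per sentence moves the needle.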
Why AI text has low perplexity
Language models pick the most likely next token most of the time. The output sounds fluent because each word is a confident choice, and detectors see that confidence as a fingerprint. Human writers occasionally pick a low-probability word, change direction mid-sentence, or leave something slightly awkward. That irregularity raises perplexity and is one of the primary signals detectors use to call text "human-written."
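The gap between confident and irregular choices can be sketched with a toy next-token distribution (the vocabulary and probabilities below are invented for illustration). Always picking the argmax token is the lowest surprise the model allows; sampling occasionally lands on an unlikely token and the score climbs:

```python
import math
import random

# Toy next-token distribution our imaginary "model" predicts at every step.
dist = {"the": 0.6, "a": 0.25, "Postgres": 0.1, "serendipity": 0.05}

def seq_perplexity(tokens):
    """Perplexity of a token sequence under the fixed toy distribution."""
    avg_nll = -sum(math.log(dist[t]) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

random.seed(0)
greedy = ["the"] * 20  # always the single most likely token
sampled = random.choices(list(dist), weights=dist.values(), k=20)

print(seq_perplexity(greedy))   # 1/0.6, the floor for this model
print(seq_perplexity(sampled))  # at least as high, usually higher
```

Greedy decoding pins the score to the floor; that floor is the "fingerprint" detectors look for.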
How to raise perplexity in your draft
- Replace inflated AI vocabulary ("leverage", "delve", "robust") with plainer, less predicted words.
- Use a specific noun where the AI used a category ("Postgres" instead of "the database").
- Let one sentence break the pattern: a parenthetical, an admission of uncertainty, a contraction.
- Avoid the safest synonym; reach for the second-best one.
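The first tip is mechanical enough to automate. A small sketch that flags inflated vocabulary in a draft, using a hypothetical word list and suggested swaps (extend it with your own pet peeves):

```python
import re

# Hypothetical list of over-predicted "AI vocabulary" and plainer swaps.
SWAPS = {"leverage": "use", "delve into": "dig into", "robust": "sturdy"}

def flag_inflated(text):
    """Return (word, suggestion) pairs for inflated words found in the draft."""
    hits = []
    for word, plain in SWAPS.items():
        if re.search(r"\b" + re.escape(word) + r"\b", text, re.IGNORECASE):
            hits.append((word, plain))
    return hits

draft = "We leverage a robust pipeline to delve into the data."
print(flag_inflated(draft))
```

The remaining tips resist automation by design: a detector can score predictability, but only a writer can decide which sentence should be the one that breaks it.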