Beware the Pitfalls of AI Writing: Preserving Independent Thinking and Original Expression

 

This article examines the potential harm that AI writing tools (such as ChatGPT) pose to original thinking and linguistic expression. Drawing on George Orwell, the author argues that both vague clichés and reliance on AI-generated content lead to lazy thinking, because writing is itself the process of thinking. The article stresses that letting AI intervene erodes one's ability to think independently and refine language, and that even editing out the telltale signs of AI cannot fully prevent its style from seeping in. The author recommends cultivating a feel for language and critical thinking through extensive self-directed writing rather than relying on AI, in order to preserve originality of thought and clarity of expression.

✍️ AI writing is, at bottom, intellectual laziness: citing George Orwell, the article argues that both vague clichés and reliance on AI-generated content reflect lazy thinking. Writing is not merely the presentation of language but the process of thinking itself; depending on AI weakens our ability to think deeply and express ourselves precisely, leaving our thought shallower.

🧠 "ChatGPTisms" are hard to root out completely: the author argues that AI-generated language patterns ("ChatGPTisms") seep into a writer's work unnoticed and cannot be fully removed by editing. These patterns exist at many levels, well beyond what most people can recognize, ultimately stripping a piece of its individuality and natural flow and exposing the AI's fingerprints to more discerning readers.

💡 Original thinking requires writing for yourself: the article stresses that genuine independent thought and original expression come from the process of self-directed writing. Only by exploring ideas firsthand, revising repeatedly, and refining language can you develop a distinctive feel for words and real insight. Relying on AI tools, especially asking them to generate content outright, blocks this process and leaves you stuck at shallow imitation, never reaching real depth of thought.

🚫 Beware the "style imitation" trap: the author notes that even when AI mimics a particular style, it may retain its own ingrained language patterns. More importantly, deliberately adopting someone else's "style" is itself a form of stagnation. Serious professionals such as Warren Buffett write their own words, because a clear style comes from clear thinking, and clear thinking comes from writing for yourself.

🚀 The key to better writing is practice: the article concludes that improving as a writer requires extensive self-directed practice, even when there is no immediate need to write. Only by weighing countless small choices can you develop an intuition for language and a precise command of expression, something AI cannot replace.

Published on September 21, 2025 3:52 PM GMT

But if thought corrupts language, language can also corrupt thought. A bad usage can spread by tradition and imitation, even among people who should and do know better. The debased language that I have been discussing is in some ways very convenient. Phrases like a not unjustifiable assumption, leaves much to be desired, would serve no good purpose, a consideration which we should do well to bear in mind, are a continuous temptation, a packet of aspirins always at one’s elbow.

Before ChatGPTisms, sloppy writing was signposted by Orwellisms. There’s a political kind (to euphemize “firebombing a village” vaguely as “neutralize unreliable elements”), but I’m interested in the mundane kind: lazy phrases you default to when you can’t articulate exactly what you mean.

For years, I visualized this as: your idea is shaped like a nuanced path, but without finding the exact words to express it, you settle for tracing the worn groove of a common phrase that is roughly, but not exactly, the same shape you intended. One time doesn’t hurt, but enough worn grooves stacked end-to-end and you will land nowhere near where your original thoughts may have taken you.

I still don’t have an elegant rendition of this idea, but now I must also resist ChatGPT’s beckoning finger: “please write an aphoristic version of this concept: your idea shaped like a nuanced path…”

But you have to resist.

I don’t let AI compose ANY original writing for me.

Orwell’s essay was published just after World War II, but it applies equally to AI writing on the World Wide Web. Lazy writing in his time was vague cliches; in ours, it is outsourcing the work to a robot. Either way, it leads to lazy thinking, because writing is thinking:

Paper and pencil and “expressing ideas” is merely what writing looks like. But words on a page are just the visible artifacts of an invisible process. Writing words and choosing good ones is a process we found that propels hard thinking, and it’s that underlying thinking that is what writing really is.

In my previous essay, I defended AI writing as it pertains to reading it. You should develop your taste and judgment for good writing without resorting to lazy proxies like ‘Yuck, this is AI.’

However, as it pertains to the writing process, the same advice to “develop your taste and judgment” demands the opposite outlook: do not use AI. When it feels hard to edit your writing, that’s what it feels like to exercise your mind and to excavate good ideas.[1] While this belief is common, it’s rare for people to believe it as absolutely as I do: I allow zero AI prose or even prose suggestions AT ALL.

To be clear, I’m not referring to functional writing like work emails or language translation. I mean creative writing, when you’re wrestling with ideas you’re trying to express. I’m also not referring to using it like a tool: I use it like a magic thesaurus and sometimes ask for its feedback in conversation. I’m just opposed to AI composing your words: “write a hook for a post about x” or “edit y to be shorter.”

The problem with both vague cliches and outsourcing your work is that it feels like you’re merely changing the expression of your thoughts, rewriting them. But in reality, you’ve surrendered the content of your thoughts, overwriting them. Cliched and AI prose both entail slipping into the orbit of some common diction which enough other people also gravitated towards. You end up professing socialized beliefs, while identifying as an independent thinker. ChatGPTisms in your work signpost the exact spots where AI hijacked your mind.

So, maybe use AI, but revise all the common signs?

Nope, doesn’t work. People think they’re very subtle when they rephrase all the ChatGPTisms. Except they never catch all of them, because most are too abstract to explicitly grasp. Signs like em dashes, “delve”, and “not-but” get mentioned most because they’re the most legible: word-level artifacts.

But AI patterns happen at all levels of granularity; across word, phrase, sentence, paragraph, and more. Its cadence of speech is just stilted. So a “search and destroy” approach to ChatGPTisms will lead you on a hunt for ghosts. To “edit away” the signs, you’d have to change everything, which is equivalent to writing it yourself in the first place. That’s as true for eviscerating signs of AI writing as it is for fixing any incompetent writing:

Marking a good essay? That's really easy. Check. A. You did everything right. Marking a bad essay? Oh my God—the words are wrong, the phrases are wrong, the sentences are wrong, they're not ordered right in the paragraphs, the paragraphs aren't coherent, and the whole thing makes no sense. So, trying to tell the person what they did wrong? It's like well, you did everything wrong. Everything about this essay is wrong.

By letting ChatGPT compose even a short phrase for you, you’re inviting generic AI patterns that you can’t detect, which slowly infect your writing until it sounds unnatural and you can’t explain why.

Well, depending on the pattern, someone else could maybe detect it. But you’ll let through all the patterns beyond your power level to discern. In effect, people who are smarter or have a better intuition for AI writing than you will know you’re a fraud, but you won’t know which telltale sign gave it away.

So, maybe use AI, but keep obvious signs in defiance?

It’s tempting to think that your usage of “not-but” is backed by clear substance that readers will recognize, or that you’ve always used em dashes, so you won’t let AI witch hunts wrest your speech.

I’ve seen real human experts apparently using AI to aid their copywriting, but not generate their ideas.[2] And while in theory you can parse an expert perspective from its bed of em dashes, in practice most people won’t extend you that leniency.

It’s not that it weakens your point, but it’s that starting a sentence with “It’s not that …” triggers audiences to narrate your writing in “AI voice”. It disintegrates a reader’s brain because just the smell of AI is a noxious fume. Too often it’s a sign of other lurking deficiencies; there’s never just one cockroach in the kitchen.

With writing, the medium is the message: polished language reflects polished thoughts, slovenly language reflects slovenly thoughts, and AI language reflects AI “thoughts”.

So while you should not invite AI into your writing, that doesn’t go far enough. You should evict any ChatGPTisms you find, even if “that’s just how I write.” Em dashes and “not-but” are extremely versatile and effective linguistic technologies. But if you want an audience to listen, you have to adapt your style.

So, maybe use AI, but ask for outputs in a non-AI style?

One problem is, although AI is good at mimicking genres, it still retains its other language patterns at higher levels of abstraction. The chameleon isn’t literally invisible after changing color. But the bigger problem is, why would you want to inherit any premade “style” in the first place?[3] This strikes me as pitifully stagnant.

Suppose you want to write in a “LinkedIn”-style tone to make your post sound professional. Well, Warren Buffett writes his own shareholder letters. Jeff Bezos writes his own letters. Howard Marks writes his own memos. Peter Thiel and Paul Graham and all these serious people write their own words.

The most “businessy” thing you can do, evidently, is write clear language in your own style. But the clear style is a consequence of clear thinking; knowing exactly what you want to say. And the clear thinking is a consequence of writing for yourself! Because you explore each dead end until only the critical path is left.

Alternatively, to sound “businessy” there’s a sequence of buttons in Google Docs that will convert your writing into a “formal” style:

Here’s the plain text. Feel free to skip it once you’re sufficiently nauseated.

A fundamental challenge arises from artificial intelligence's proficiency in mimicking genres while simultaneously retaining its inherent linguistic patterns at higher levels of abstraction. The metaphor of a chameleon, though illustrative, falls short; the creature is not literally invisible after altering its coloration. However, a more significant concern emerges: why would one willingly adopt a pre-existing "style" in the first instance? This inclination strikes one as lamentably unoriginal.

Consider the desire to adopt a "LinkedIn"-esque tone to convey professionalism in a written post. Yet, prominent figures such as Warren Buffett, Jeff Bezos, Howard Marks, Peter Thiel, and Paul Graham—indeed, all individuals of serious intellectual pursuit—personally compose their own communications.

Evidently, the most "business-oriented" approach is to articulate ideas with clarity and in one's distinctive voice. This clarity of style is a direct consequence of lucid thought, stemming from a precise understanding of one's intended message. Furthermore, clear thinking itself is fostered through the act of writing for oneself, as it necessitates the exploration of every tangential path until only the essential argument remains.

That text is a grotesque offence against the English language. And soon it’ll be on tap inside your word processor! The feature frankly borders on evil. Obviously, it exists because people want to use AI this way. Their writing is bad and needs AI’s help, but using AI prevents them from getting better. So they’re stuck.

So what if I’m stuck! I’m not a luddite, I’ll just use AI.

You can get unstuck; you just need to practice writing outside of the times when you strictly need to. Write essays for fun.

But even if you still intend to splice and curate AI’s outputs, you need good taste and judgment. That only gets developed through writing: a crucible of wading through endless micro-possibilities and choosing one over another, over and over.[4] It’s how you earn a tacit, visceral instinct of “this idea feels like it should take 4 lines, not 5,” “if I restructure this part here it’ll open a natural spot for a joke over there,” “instead of a general description, this would work better as a vivid sketch of one example.”

Eventually, that intuition doesn’t just help you choose between options as you would with AI outputs, but you become drawn towards the beauty of the next words which haven’t even materialized yet; you become possessed by the prophetic spirit of prosody, your feel for how language flows when it sounds right.

But that never happens when you write with AI from the outset.

Like feeling the rain on your skin, no one else can feel it for you. Least of all an AI which can’t be pulled forward by the feeling of anticipation, because it can’t feel anything at all.

Outsourcing to AI jams any seed of insight you had into the predetermined shape of the training data, just like the course of a river is predetermined by its topography. Sometimes, there’s a nuanced path you need to follow, and AI will simply never thread that path.

  1. ^

    For this reason, you also can’t feed a bullet point outline of “your ideas” to ChatGPT and ask it to “convert” it into full sentences. If you processed those ideas yourself, you’d almost always end up with different, better ideas than you started with. The abbreviated form of bullet points seems to imply condensed work representing more substance, but really it obscures shallow work, yet to be filtered and developed through contact with reality.

  2. ^

    For example, see this article about the Law of Armed Conflict by John Spencer, a man credentialled as chair of urban warfare studies at the Modern War Institute (MWI) at West Point (the most prestigious military academy in the US). He does cite relevant historical substantiation, and his points sound like those of someone in his position. But his writing is a museum of AI bloopers, such as “The law of war is not a loophole in morality—it is morality under fire.” and “These aren’t just bad arguments. They’re dangerously shallow—and they should be rejected outright.”

  3. ^

    Copying styles mindlessly is how you pick up bad habits. In her article, The Average Fourth Grader Is a Better Poet Than You (and Me Too), Hannah Gamble mentions how 3rd-6th grade poets were far better than middle schoolers and high schoolers, probably because the younger students had no inbuilt expectations for how poetry “should” sound.

  4. ^

    Wouldn't this "don’t outsource your thinking to any crutch" reasoning apply to any technology? Like the calculator? Yes actually, if you want to think deeply about math, you should avoid calculators when you can. At advanced levels, you won’t even have the intuition of when and how to use the calculator if you don’t work through enough problems by yourself to train that instinct.


