LessWrong, October 3
Reflections on Prompting: An LLM's Answer Depends on How We Think About the Question

 

The author shares a hard-won insight from working with large language models (LLMs). At first he believed the key to better LLM responses lay in optimizing the prompt itself, but after extensive practice he found the root cause was his own failure to define the problem clearly and specifically. Unlike programming or mathematics problems, he notes, LLM responses lack clear success criteria. Comparing the LLM to a mirror, he argues that the crucial part is not the wording of the prompt but how clearly one frames and thinks through the problem: a vague prompt like "How do I get rich quick?" naturally yields only a vague answer. He ultimately concludes that the key is to "prompt" oneself, that is, to think about the problem more deeply and methodically, which matters more than any "trick".

💡 **The central role of problem framing**: Contrary to the common focus on prompt-optimization tricks, the author stresses that the quality of LLM output hinges on how clearly the asker defines and thinks through the problem. Vague or generic questions yield only vague or generic answers, like staring into a mirror and expecting it to reflect more than what stands before it.

🤔 **The mirror analogy and its limits**: The article likens LLMs to a "mirror", pointing to the formulaic, predictable character of their replies. They do not truly "understand" and are not conscious; they pattern-match on their input. Expecting deep insight from an LLM therefore presupposes that the asker has already done the deep thinking and problem decomposition.

🔧 **From "prompting the LLM" to "prompting yourself"**: The author reflects that he used to pour his effort into how to "prompt the LLM" while neglecting the more fundamental question: how to "prompt himself". Recognizing this means adopting a more rigorous, methodical approach to analyzing and defining problems, which is the real path to useful information and solutions.

📈 **The value of the reflection as a metric**: Though he admits it took him a long time to see this, the author finds the reflection worthwhile. It gives him a gauge of his own depth of thought: an unsatisfying LLM response can be read as a signal that he needs to examine and structure the problem more deeply and systematically.

Published on October 3, 2025 2:28 AM GMT

“All this time I thought I needed to learn how to prompt the LLM, but it turns out I really just needed to learn how to prompt… myself”


Don't worry, I trust you won't need a barf bag for the rest of this post, just for the above.

I’ve been making unreasonable requests of LLMs, asking for feedback on topics like Career Strategy. That was never gonna work. Lousy replies are the result of lousy prompts, which are the result of (my) lousy problem framing.

No amount of “5 simple tricks to improve your prompts” or magic words was going to improve the responses. Unlike programming or mathematics problems, which engender highly specific methods of finding answers with clear criteria of success, I have not been prompting LLMs with similarly specific methods or criteria.

It took brute force repetition for me to learn this[1].

The more and more I use LLMs, like Claude and ChatGPT, the more I notice how formulaic their responses are. They remind me now of common cads or flirtatious floozies, cooing tried and tested lines to every Tom, Dick, and Harriett (“this is significant, you’re picking up on a pattern few notice.”) Nah, they notice. They notice it, plenty.

The more and more I use LLMs, the less and less magical or conscious they appear, and the more and more useful the heuristic of “think of LLMs as a mirror” becomes. I now understand that the least important part of the process is the way I frame and word the prompt; rather, the most important part is how I frame the problem myself. Hence the trite analogy: how I “prompt” myself.

Embarrassingly, my prompts have been the equivalent of “Tell me how to make a lot of money” or “what does everybody vibe that I’m not seeing?” but, if the LLM is only a mirror[2], how can I reasonably expect it to give me anything more than a vague, broad, and totally unactionable answer? Alas, that is exactly what I was doing.
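The contrast can be made concrete. Below is a minimal sketch of the difference between a vague prompt and one that encodes prior problem framing; the `frame_prompt` helper and its fields (goal, context, constraints, success criteria) are hypothetical illustrations, not anything from the post itself.

```python
def frame_prompt(goal: str, context: str, constraints: list[str],
                 success_criteria: list[str]) -> str:
    """Assemble a prompt from an explicitly framed problem.

    The structure forces the asker to state what success looks like
    before the LLM ever sees the question.
    """
    lines = [f"Goal: {goal}", f"Context: {context}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append("Success criteria:")
    lines += [f"- {s}" for s in success_criteria]
    return "\n".join(lines)

# The mirror can only reflect what it is shown:
vague = "Tell me how to make a lot of money"

framed = frame_prompt(
    goal="Increase my income by 20% within 12 months",
    context="Mid-level backend engineer, 5 years of experience",
    constraints=["No relocation", "Max 5 extra hours/week"],
    success_criteria=["A ranked list of 3 options",
                      "Each with concrete first steps and tradeoffs"],
)
```

The point is not the helper itself but the discipline it imposes: most of the work happens before the prompt is sent.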

Could I have learned this without LLMs? Probably. I suspect that's what sage mentors or therapists are supposed to help illuminate.

I’m not sure what point I’m trying to make here other than the interesting observation that the annoying self-reflective platitude is true. The good news is that at least I have some metric to tell me when I’m not thinking about a problem in sufficient detail or with sufficient method, which is some cause for hope.

Wait, you mean to say you needed to prompt an LLM like 150 times just to learn YOU were the problem? Shee-EEE-eesh. I just updated my P(Doom).

 

  1. ^

    Chancing it with another platitude: better late than never.

  2. ^

    MUST... RESIST MAKING... DECADE OLD JADEN SMITH REFERENCE!



