The GitHub Blog · September 5
Building smarter interactions with MCP elicitation: From clunky tool calls to seamless user experiences

This article looks at how the "elicitation" feature can improve the user experience when building software, particularly with MCP (Model Context Protocol) servers. Elicitation lets the AI proactively ask the user for information it is missing instead of relying on preset defaults, enabling more natural, seamless interactions. The author shares how, while building an MCP server for turn-based games, iterative development turned early stumbles, such as confusing tool names and mishandled partial information, into improvements like a single consolidated tool call and prompts for only the missing details, noticeably smoothing the interaction and turning it from "clunky tool calls" into "a seamless user experience."

💡 **User experience is key**: Even when an MCP server has its core functionality in place, the user experience remains an important consideration. The elicitation mechanism lets the AI proactively ask when information is missing, avoiding the hard-coded interaction paths that come with default assumptions. The result is a more intuitive experience that better matches user intent, turning AI interaction from stiff tool calls into a fluid, conversational exchange.

🛠️ **Tool naming and consolidation matter**: Clear, mutually exclusive tool names are critical in an MCP server. In practice, the author found that similarly named tools confuse the AI and lead to wrong calls. Consolidating tools with overlapping functionality, for example merging several game-creation tools into a single generic `create_game` tool with an improved description, noticeably reduces misjudgments and improves the accuracy and efficiency of tool calls.

🔄 **Iterative development and handling incomplete information**: Implementing elicitation is not a one-shot effort; it takes iterative refinement. An initial implementation may have problems, such as asking for every piece of information on every call. Later refactoring can parse what the user has already provided and ask only for what is missing, for example prompting only for a player name once the user has specified the game and difficulty. This finer-grained information gathering greatly smooths the user experience.

⚙️ **How elicitation works**: On the MCP server side, elicitation works by checking for required parameters, passing along the optional arguments that were identified, pausing tool execution to gather missing information, and presenting the user with formatted, schema-driven prompts. Once all the necessary information has been collected, the tool completes the original request according to the user's preferences, making game creation more personalized and precise.

When we build software, we’re not just shipping features. We’re shipping experiences that surprise and delight our users — and making sure that we’re providing natural and seamless experiences is a core part of what we do.

In my last post, I wrote about an MCP server that we started building for a turn-based game (like tic-tac-toe, or rock, paper, scissors). While it had the core capabilities, like tool calls, resources, and prompts, the experience could still be improved. For example, the player always took the first move, the player could only change the difficulty by specifying it in their initial message, and there was a slew of other papercuts.

So on my most recent Rubber Duck Thursdays stream, I covered a feature that's helped improve the user experience: elicitation. See the full stream below 👇

Elicitation is kind of like saying, “if we don’t have all the information we need, let’s go and get it.” But it’s more than that. It’s about creating intuitive interactions where the AI (via the MCP server) can pause, ask for what it needs, and then continue with the task. No more default assumptions that provide hard-coded paths of interaction. 

👀 Be aware: Elicitation is not supported by all AI application hosts. GitHub Copilot in Visual Studio Code supports it, but you’ll want to check the latest state from the MCP docs for other AI apps. Elicitation is a relative newcomer to the MCP spec, having been added in the June 2025 revision, and so the design may continue to evolve.
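To make that concrete, here's a minimal sketch of a tool handler pausing to ask for a single missing value. It assumes a recent version of the TypeScript MCP SDK, which exposes elicitation via an elicitInput helper; the tool shape and property names here are illustrative, not the exact ones from my server:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "turn-based-games", version: "1.0.0" });

server.tool(
  "create_game",
  "Create a new turn-based game",
  { gameType: z.enum(["tic-tac-toe", "rock-paper-scissors"]) },
  async ({ gameType }) => {
    // Pause the tool call and ask the client to collect a missing value.
    // elicitInput sends an elicitation/create request; the client renders a
    // schema-driven prompt and returns the user's answer.
    const result = await server.server.elicitInput({
      message: `Setting up ${gameType}. What's your player name?`,
      requestedSchema: {
        type: "object",
        properties: {
          playerName: { type: "string", description: "Name shown on the board" },
        },
        required: ["playerName"],
      },
    });

    // The user can accept, decline, or cancel the prompt.
    if (result.action !== "accept") {
      return { content: [{ type: "text", text: "Game creation cancelled." }] };
    }

    const playerName = String(result.content?.playerName ?? "Player");
    return { content: [{ type: "text", text: `Created ${gameType} for ${playerName}.` }] };
  }
);
```

The host (VS Code, in this case) renders the prompt and sends the structured response back, so the tool resumes with everything it needs.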

Let me walk you through how I implemented elicitation in my turn-based game MCP server and the challenges I encountered along the way.

Enter elicitation: Making AI interactions feel natural

Before we reached the livestream, I had put together a basic implementation of elicitation, which asked for required information when creating a new game, like difficulty and player name. For tic-tac-toe, it asked which player goes first. For rock, paper, scissors, it asked how many rounds to play.

But rather than completely replacing our existing tools, I implemented these as new tools, so we could clearly compare the behavior of the two approaches until we tested and standardized on one. As a result, we began to see sprawl in the server, with some duplicative tools.

The problem? When you give AI agents like Copilot tools with similar names and descriptions, they don't know which one to pick. On several occasions, Copilot chose the wrong tool because I had created this confusing landscape of overlapping functionality. This was an unexpected learning experience, but an important one to pick up along the way.

The next logical step was to consolidate our tool calls, and make sure we’re using DRY (don’t repeat yourself) principles throughout the codebase instead of redefining constants and having nearly identical implementations for different game types.
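For illustration, here's roughly what that consolidation could look like: one create_game registration with a gameType enum in place of per-game tools, and constants defined once. The names and the createGame stub are my own sketch, not the repository's exact code:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Shared constants, defined once instead of being redefined per game type.
const GAME_TYPES = ["tic-tac-toe", "rock-paper-scissors"] as const;
const DIFFICULTIES = ["easy", "medium", "hard"] as const;

// Hypothetical shared implementation behind all game types.
declare function createGame(args: {
  gameType: (typeof GAME_TYPES)[number];
  difficulty?: (typeof DIFFICULTIES)[number];
  playerName?: string;
  playerGoesFirst?: boolean;
}): Promise<{ id: string }>;

const server = new McpServer({ name: "turn-based-games", version: "1.0.0" });

// One consolidated tool with a clear, distinct description, replacing several
// near-duplicate create-* tools whose similar names confused the model.
server.tool(
  "create_game",
  "Create a new turn-based game. Supports tic-tac-toe and rock, paper, scissors.",
  {
    gameType: z.enum(GAME_TYPES),
    difficulty: z.enum(DIFFICULTIES).optional(),
    playerName: z.string().optional(),
    playerGoesFirst: z.boolean().optional(),
  },
  async (args) => {
    // Preferences left undefined are gathered via elicitation (sketched later
    // in this post) rather than silently filled in with defaults.
    const game = await createGame(args);
    return { content: [{ type: "text", text: `Created game ${game.id}.` }] };
  }
);
```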

After a lot of refactoring and consolidation, when someone prompts “let’s play a game of tic-tac-toe,” the tool call identifies that more information is needed to ensure the user has made an explicit choice, rather than creating a game with a pre-determined set of defaults.

The user provides their preferences, and the server creates the game based upon those, improving that overall user experience. 

It’s worth adding that my code (like many of ours, I’m sure) is far from perfect, and I noticed a bug live on the stream: the elicitation step triggered on every invocation of the tool, regardless of whether the user had already provided the needed information.

As part of my rework after the livestream, I added some checks after the tool was invoked to determine what information had already been provided. I also aligned the property names between the tool and elicitation schemas, bringing a bit more clarity. So if you said “Let’s play a game of tic-tac-toe, I’ll go first,” you would be asked to confirm the game difficulty and to provide your name.

How my elicitation implementation now works under the hood

The magic happens in the MCP server implementation. In my current implementation, when the create_game tool is invoked, the server:

1. Checks for required parameters: Do we know which game the user wants to play, or did they specify an ID?
2. Passes the optional identified arguments to a separate method: Are we missing difficulty, player name, or turn order?
3. Initiates elicitation: If information is missing, it pauses the tool execution and gathers only the missing information from the user. This was an addition that I made after the stream to further improve the user experience.
4. Presents schema-driven prompts: The user sees formatted questions for each missing parameter.
5. Collects responses: The MCP client (VS Code in this case) handles the UI interaction.
6. Completes the original request: Once the server collects all the information, the tool executes the createGame method with the user’s preferences.
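Here's a rough sketch of the middle steps: diffing the arguments the user already supplied against the preference schema and eliciting only what's missing. The elicitMissingPreferences helper and its fields are illustrative, again assuming the TypeScript SDK's elicitInput helper:

```typescript
import type { Server } from "@modelcontextprotocol/sdk/server/index.js";

// Preference fields the tool needs, with property names aligned between the
// tool schema and the elicitation schema.
const PREFERENCE_SCHEMA: Record<string, Record<string, unknown>> = {
  difficulty: { type: "string", enum: ["easy", "medium", "hard"] },
  playerName: { type: "string", description: "Name shown on the scoreboard" },
  playerGoesFirst: { type: "boolean" },
};

type Preferences = {
  difficulty?: string;
  playerName?: string;
  playerGoesFirst?: boolean;
};

// Ask the user only for the preferences they did not already provide.
async function elicitMissingPreferences(
  server: Server,
  provided: Preferences
): Promise<Preferences | null> {
  const missing = Object.keys(PREFERENCE_SCHEMA).filter(
    (key) => provided[key as keyof Preferences] === undefined
  );
  if (missing.length === 0) return provided; // nothing to ask; skip elicitation

  const result = await server.elicitInput({
    message: "A few preferences before we start:",
    requestedSchema: {
      type: "object",
      // Schema-driven prompts for the missing parameters only.
      properties: Object.fromEntries(
        missing.map((key) => [key, PREFERENCE_SCHEMA[key]])
      ) as any, // loosely typed to keep the sketch short
      required: missing,
    },
  });

  if (result.action !== "accept") return null; // user declined or cancelled

  // Merge the elicited answers with what the user said upfront.
  return { ...provided, ...((result.content ?? {}) as Preferences) };
}
```

In the tool handler, a null result maps to a polite cancellation message, while a complete set of preferences flows straight into createGame.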

Here’s what you see in the VS Code interface when elicitation kicks in and you’re asked to provide your preferences:

The result? Instead of “Player vs AI (Medium)”, I get “Chris vs AI (Hard)” with the AI making the opening move because I chose to go second.

What I learned while implementing elicitation

Challenge 1: Tool naming confusion

Problem: Tools with similar names and descriptions confuse the AI about which one to use.

Solution: Where it’s appropriate, merge tools and use clear, distinct names and descriptions. I went from eight tools down to four.

Challenge 2: Handling partial information

Problem: What if the user provides some information upfront? (“Let’s play tic-tac-toe on hard mode”)

Observation: During the livestream, we saw that my elicitation implementation asked for all of the preferences every time it was invoked, which is not an ideal user experience.

Solution: Parse the initial request and only elicit the missing information. This was fixed after the livestream, and is now in the latest version of the sample.

Key lessons from this development session

1. User experience is still a consideration with MCP servers

How often do you provide all the needed information up front? Elicitation fills that gap, but you need to consider how it fits into your tool calling and the overall MCP experience. It can add complexity, but it’s better to ask users for their preferences than to force them to work around poor defaults.

2. Tool naming matters more than you think

When building (and even using) tools in MCP servers, naming and descriptions are critical. Ambiguous tool names and similar descriptions can lead to unpredictable behavior, where the “wrong” tool is called.

3. Iterative development wins

Rather than trying to build the perfect implementation upfront, I iterated: first a basic elicitation flow alongside the existing tools, then consolidated, clearly named tools, and finally prompts that ask only for the information the user hasn’t already provided.

Try it yourself

Want to see how elicitation works in an MCP server? Or seeking inspiration to build your own MCP server?

1. Fork the repository: gh.io/rdt-blog/game-mcp
2. Set up your dev environment by creating a GitHub Codespace
3. Run the sample by building the code, starting the MCP server, and running the web app / API server.

Take this with you

Building better AI tools isn’t all about the underlying models — it’s about creating experiences that can interpret context, ask good questions, and deliver exactly what users need. Elicitation is a step in that direction, and I’m excited to see how the MCP ecosystem continues to evolve and support even richer interactions.

Join us for the next Rubber Duck Thursdays stream where we’ll continue exploring the intersection of AI tools and developer experience.

Get the guide to build your first MCP server >

