Check it out, n8n has an update! Build and update workflows with n8n's MCP server

Source: https://blog.n8n.io/n8n-mcp-server/
Summary:
n8n launches a new MCP server capability: generate automation workflows from a single prompt, with no manual coding
The automation platform n8n recently announced a major upgrade to its built-in MCP (Model Context Protocol) server. The server now lets users create and update workflows directly from natural-language prompts, doing away with the tedious routine of manually copy-pasting or hand-writing JSON files.
From prompt to runnable workflow in minutes
Users simply describe what they want in an AI client (such as Claude or ChatGPT), and n8n's MCP server builds, validates, and executes the workflow. If execution hits an error, the system can fix it on its own. For example, a user can type "send me the New York weather forecast via Gmail every day at 7am," and the system generates a complete workflow that only needs an email address filled in before it runs.
Natively integrated, no extra tools required
The feature is built into every edition of n8n, including Cloud, Enterprise, and the free self-hosted Community Edition. There is no third-party service to install; configuring an MCP connection in an existing AI client is all it takes. The n8n team already uses the feature in its daily work.
A hands-on case: iterating in natural language
In the official walkthrough using Claude Desktop, the first workflow the system generated leaned too heavily on Code nodes. The user then asked in conversation to "use less code and switch to standard templates," and the system adjusted the workflow to use a Set node and a Gmail template, producing a cleaner, more maintainable version.
How it works, plus usage tips
The MCP server generates type-safe TypeScript code rather than JSON, which improves reliability. The default flow is: generate the workflow → validate → fix → execute → auto-debug on failure. The user never needs to intervene.
n8n's official recommendations:
- Prefer coding-oriented AI assistants (such as Claude Code); they outperform plain chat models
- Explicitly name the nodes and services you want (e.g., "use Gmail rather than generic SMTP")
- Iterate within the same conversation; starting over loses context
- If you make manual changes in the UI, tell the AI to check the latest version
Known limitations and roadmap
The system still has room to improve on complex branching logic, node selection (when multiple nodes overlap in functionality), and over-complicated first designs. The n8n team is actively fixing these issues and invites users to share feedback on the community forum.
Public preview open now
Anyone with an n8n instance can use it immediately. A full setup guide with client configuration instructions is available, covering Claude Desktop, Claude Code, Codex CLI, Google ADK, and other popular tools.
English source:
Describe what you want from Claude, ChatGPT, or your IDE, and get a ready-to-run workflow in a few minutes, built directly in n8n. No more copy-paste, no more back-and-forth.
n8n's MCP server can now build workflows from a prompt (and not just run them)!
The MCP server has been around for a few months, but previously you could only execute existing workflows. Now you can build new ones from scratch and update existing ones, directly in your n8n instance.
- Go from prompt to a ready-to-run workflow.
  Tell your AI client what you want. It builds the workflow, validates it, runs it, and fixes itself if something breaks. No messing with JSON files or copy-pasting errors.
- Works in whatever AI client you already use.
  Claude, ChatGPT, Cursor, Windsurf - if it "speaks" MCP, you can point it at n8n. No new tool to learn, no context-switching.
- First-party, native, and available to all.
  Built into every edition of n8n: Cloud, Enterprise, and the free self-hosted Community Edition. Maintained by n8n, no third-party service to run alongside your instance.
It's been in public preview for the past few weeks and the n8n team already uses it daily. We can't wait for you to try it out.
Note: This article is about the MCP server built into n8n, not the MCP Server Trigger node. The former lets external AI clients connect to your entire instance. The latter exposes a single workflow as an MCP server.
Using it: a real example
Here’s what it looks like end-to-end, with a simple workflow I built to test it out, using Claude Desktop (Chat) and Opus 4.6.
Just tell it what you want it to build:
"I want you to create an n8n workflow that once a day at 7am sends me an email with today's forecast. Use my gmail account to send it. I live in New York city. Put the workflow in the MCP Server testing project."
After a few minutes, this is what I got back:
The only thing missing, which was causing an error, was my actual email address in the Gmail node.
Once I added my email address it worked, and I got this result in my email:
On a side note, I later tested the same prompt with my email address included, and it worked perfectly!
Once it was done, I noticed all of the email formatting is in a code node. I prefer using built-in n8n nodes and using an email template in the Gmail node. So I continued the conversation and asked for this change:
"I noticed the workflow is very code heavy for all of the formatting. Update the workflow so as much as possible it does not use code but instead has a standard template that we fill with data from the weather API. I'm fine if it it's less detailed."
This is where using the MCP really shines. You're iterating on the workflow using a natural back-and-forth conversation.
This is the response I got back from Claude:
And indeed it updated the workflow to use the set node instead of the code node and put the template in the Gmail node 🙂
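For readers who haven't used this pattern before, here is a rough sketch of what that change amounts to. This is an illustration, not the exact node schema the MCP server produced: the field names, weather-API values, and parameter layout are hypothetical, but the idea is that formatting moves out of a Code node and into declarative fields, with n8n's `{{ ... }}` expression syntax pulling values from the previous node:

```json
{
  "name": "Format Email",
  "type": "n8n-nodes-base.set",
  "parameters": {
    "assignments": [
      { "name": "subject", "value": "Today's forecast for New York City" },
      { "name": "body", "value": "High: {{ $json.temperature_max }}°F, low: {{ $json.temperature_min }}°F" }
    ]
  }
}
```

The Gmail node can then reference these fields in its message template, so no JavaScript is needed for formatting at all.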
The final email looks like this:
How it works
The MCP server adds more than just tools to create or update workflows. It also has tools for validating workflows, running test executions, and generating test data.
So when your MCP client creates a workflow, the flow usually looks like this:
- Generate the workflow
- Validate it (catches most errors before execution)
- If validation fails, fix it and re-validate
- Execute it, generating test data if needed
- If execution fails, read the error, fix the workflow, run it again
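The loop above is driven entirely by the client. As a minimal sketch of the control flow — with invented function names standing in for whatever tools the client actually calls, so treat every identifier here as a placeholder — it might look like this:

```typescript
// Hypothetical sketch of the build loop an MCP client runs.
// generate/validate/execute/fix are placeholders, not real n8n MCP tool names.

type Result = { ok: boolean; error?: string };

function buildWithRetries(
  generate: (prompt: string) => string,
  validate: (wf: string) => Result,
  execute: (wf: string) => Result,
  fix: (wf: string, error: string) => string,
  prompt: string,
  maxRetries = 3,
): string {
  let wf = generate(prompt);

  // Validate before running; on failure, fix and re-validate.
  for (let i = 0; i < maxRetries; i++) {
    const v = validate(wf);
    if (v.ok) break;
    wf = fix(wf, v.error ?? "validation failed");
  }

  // Execute; on failure, read the error, fix the workflow, run it again.
  for (let i = 0; i < maxRetries; i++) {
    const e = execute(wf);
    if (e.ok) return wf;
    wf = fix(wf, e.error ?? "execution failed");
  }
  throw new Error("workflow still failing after retries");
}
```

The important point is that the retry loop lives in the client, not in n8n: the MCP server only exposes the individual tools, and the model decides when to validate, fix, and re-run.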
All of that happens inside your MCP client while you watch.
You don't need to do anything.
No copy-paste, no pasting errors back, no debugging handoffs. If your prompt was specific enough, you end up with a working workflow, and if the first attempt had issues, the model iterates on its own to fix it.
Please note: The process described above is not part of any instructions given to the client. The model is not told to test the workflow after the build process, it’s just something most clients and LLMs naturally do.
The n8n team already uses it daily for real work, but complex workflows often need a second (or third) pass, and we're still smoothing rough edges.
Connecting your MCP client
Connecting to the n8n MCP server works the same way as connecting to any other MCP server, though you need to first enable it for your n8n instance.
- Enable the MCP server
- Copy the connection details
- Add the n8n MCP server to your MCP client of choice
- Restart or reload the client
The setup guide also includes per-client instructions for Claude Desktop, Claude Code, Codex CLI, and Google ADK.
→ Check out the Full Setup Guide for details.
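As a rough illustration of steps 2–3, a remote MCP server entry in a client's configuration file typically looks something like the following. Everything here is a placeholder — the server name, URL path, and auth header are not n8n's actual connection details; use the values your n8n instance displays and follow the setup guide for your specific client:

```json
{
  "mcpServers": {
    "n8n": {
      "url": "https://your-n8n-instance.example.com/mcp",
      "headers": {
        "Authorization": "Bearer <token-from-your-n8n-instance>"
      }
    }
  }
}
```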
We recommend using n8n version 2.18.4 or higher for the best experience creating workflows with the MCP server.
Alternatively, check out our video which covers the MCP setup and some additional tips.
Getting the most out of it
Here are a few things we've learned so far. Some of these are tips, some are known rough edges that we’re working on.
Tips
- Use a coding agent
  Our internal testing showed Claude Code gets better results than Claude Chat using the same prompt and the same models.
- Tell it how to build, not just what to build
  - If you're just getting started with n8n, you can skip this tip and just describe the problem you're trying to solve in plain English.
  - On the other hand, if you're a veteran n8n builder, be explicit about which nodes to use. For example, whether it uses Code nodes or native ones, whether formatting lives in a template or a script, or whether the timezone is hardcoded.
- Name the services you want
  When two services could plausibly do the job, like HTTP Request vs. a dedicated integration, or Gmail vs. a generic SMTP node, tell it which one to use. Otherwise the model will guess, and it doesn't always guess well.
- Iterate, don't restart
  If it got 80% right, refine in the same conversation. Starting over usually loses context and makes things worse.
- If you make any changes in the UI, tell it to check the latest version
  This might be obvious, but a few times I made changes mid-session via the UI and forgot to tell Claude to check for my changes before continuing. Those changes were overwritten.
- Ask it what it got wrong
  After it builds, ask "what would have been better if I'd mentioned it upfront?" The answer is often one sentence that'll save you a rebuild next time.
What it can get wrong
- Silent design choices on the first pass
  The model makes small decisions it doesn't always surface. If something feels off, it usually is. Ask what it decided and don't assume nothing was decided.
- Over-engineering before you push back
  First attempts are sometimes more complex than needed, like a 50-line Code node where a template and a Set node would do. A quick "this feels too code-heavy" usually produces a simpler second version.
- Complex branching
  Workflows with a lot of conditional paths or nested logic often need manual cleanup. We're actively working on this.
- Node selection when options overlap
  If multiple nodes could do the job, it sometimes picks the wrong one. This is the single most common thing worth steering explicitly.
- Platform quirks discovered mid-build
  The model learns some of the SDK's constraints the hard way, by hitting a blocked method, a credential it can't bind, a node that can't self-reference. A retry cycle or two is normal on non-trivial workflows. That's the guardrails working, not the model failing.
We're fixing rough edges quickly. Expect things to get noticeably better over the next few releases.
A quick word on how it's built
The MCP server generates a TypeScript representation of the workflow rather than raw JSON. In practice, that means the model has to produce something that type-checks and compiles before anything touches your instance, leading to a more reliable solution.
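To see why a typed representation helps, consider a toy example. This is not n8n's actual SDK — the type and field names below are invented purely to illustrate the principle: when the workflow is expressed as typed code, a misspelled node type or a missing required field fails compilation before anything touches the instance, whereas raw JSON would only fail at runtime (or silently).

```typescript
// Toy illustration (not n8n's real SDK): a typed workflow description.
// The compiler rejects unknown node types and malformed structures up front.

type NodeType = "scheduleTrigger" | "httpRequest" | "set" | "gmail";

interface WorkflowNode {
  name: string;
  type: NodeType; // a typo like "gmial" is a compile-time error
  parameters: Record<string, unknown>;
}

interface Workflow {
  name: string;
  nodes: WorkflowNode[];
  connections: Array<[from: string, to: string]>;
}

const dailyForecast: Workflow = {
  name: "Daily forecast email",
  nodes: [
    { name: "Every day at 7am", type: "scheduleTrigger", parameters: { hour: 7 } },
    { name: "Fetch forecast", type: "httpRequest", parameters: { url: "https://api.example.com/forecast" } },
    { name: "Format email", type: "set", parameters: {} },
    { name: "Send via Gmail", type: "gmail", parameters: {} },
  ],
  connections: [
    ["Every day at 7am", "Fetch forecast"],
    ["Fetch forecast", "Format email"],
    ["Format email", "Send via Gmail"],
  ],
};
```

In this sketch, the whole class of "invalid node type" and "connection to a non-existent node" errors can be caught by the type checker or a simple validation pass, which is exactly the reliability gain the TypeScript approach is after.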
A few other things worth knowing about the native MCP server: - No extra service to run or maintain
It's part of n8n. If your instance is up, the MCP server is up. - Deeper integration
Because it's built into n8n, it can use capabilities that aren't exposed through the public API. - Generates code, not JSON
The TypeScript approach (above) produces much better results than direct JSON generation in our testing.
👉 A full technical deep-dive on the architecture is coming soon!
We also want to acknowledge the community members that built their own open-source MCP servers to create workflows. The people behind them are often among our most engaged users and we appreciate the hard work that went into those solutions.
Start building today!
So go build something. If you’ve got an n8n instance, you're ready to go!
→ Check out the Full Setup Guide for details.
And since this is a public preview, we really want your feedback. We've opened a thread on the community forum specifically for this:
→ Leave your feedback here
Tips, bugs, weird edge cases, workflows you're proud of - all welcome. The team is watching it closely and a lot of what we ship next will come from what you tell us.
Article title: Check it out, n8n has an update! Build and update workflows with n8n's MCP server
Article link: https://news.qimuai.cn/?post=3955