openfoam
LLM-powered OpenFOAM MCP server for intelligent CFD education with Socratic questioning and expert error resolution
claude mcp add --transport stdio webworn-openfoam-mcp-server node server.js \
  --env CFD_ENV="production or development (optional)" \
  --env OPENFOAM_HOME="Path to OpenFOAM installation (e.g., /opt/openfoam-v2106)"
How to use
OpenFOAM MCP Server provides an educational AI-assisted interface for CFD problem solving. It combines a conversational agent with OpenFOAM execution capabilities, enabling context-aware guidance, Socratic questioning, and automatic extraction of CFD parameters from natural language. Users can start intelligent CFD conversations, run OpenFOAM operations (mesh generation, solving, and post-processing), and analyze results with built-in educational explanations. Tools cover mesh quality assessment, STL geometry preparation, RDE analysis, and advanced pipe flow and turbulent flow analyses, all designed to adapt to the user’s learning progress and to provide structured, error-resolving guidance.
To use the server, interact with the 12 available tools via the MCP API. For example, begin with start_cfd_assistance to establish context, then use execute_openfoam_operation to create a mesh or run a solver, followed by analyze_cfd_results to interpret outputs. The Socratic questioning engine will pose clarifying questions and progressively introduce concepts (context engineering, parameter extraction, and 5 Whys-based error resolution) as the conversation advances. If you provide a description of a CFD problem in natural language, the system will translate it into OpenFOAM parameters, validate them for physical consistency, and guide you through a learning path tailored to your current understanding.
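The workflow above can be sketched as raw JSON-RPC messages sent to the server over stdio. This is a minimal illustration: the tool names (start_cfd_assistance, execute_openfoam_operation, analyze_cfd_results) come from the server's tool list, but the argument fields shown are assumptions, not the server's actual input schema.

```javascript
// Helper: frame a JSON-RPC 2.0 "tools/call" request for an MCP server.
function toolCall(id, name, args) {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  });
}

// 1. Establish conversational context for the CFD problem.
//    (The "problem" field is an illustrative assumption.)
const msg1 = toolCall(1, "start_cfd_assistance", {
  problem: "Turbulent water flow through a 50 mm pipe at 2 m/s",
});

// 2. Run an OpenFOAM operation, e.g. mesh generation.
const msg2 = toolCall(2, "execute_openfoam_operation", {
  operation: "blockMesh",
  caseDir: "/tmp/pipeFlowCase",
});

// 3. Interpret solver output with educational explanations.
const msg3 = toolCall(3, "analyze_cfd_results", {
  caseDir: "/tmp/pipeFlowCase",
});

console.log(msg1);
```

Each string would be written to the server process's stdin (one message per line, per the MCP stdio transport), with responses read back from stdout.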
How to install
Prerequisites
- Node.js and npm installed on your system
- OpenFOAM installed and accessible via environment (OPENFOAM_HOME or system PATH)
- Sufficient system resources for OpenFOAM jobs (CPU, RAM, storage)
Installation steps
- Clone the MCP server repository or download the release package.
- Navigate to the project directory.
- Install dependencies:
  - npm install
- Configure the environment (optional):
  - Create or update environment variables as needed, e.g. OPENFOAM_HOME=/path/to/openfoam
- Start the server:
  - npm run start or node server.js
- Verify API access:
  - Send a JSON-RPC request to the server endpoint to confirm that all 12 tools are registered and responsive.
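The verification step can be sketched as follows: build the standard MCP tools/list request and check that the response registers 12 tools. The request/response shapes follow the MCP specification; the tool count comes from this server's documentation, and the stubbed response below is purely illustrative.

```javascript
// JSON-RPC request asking an MCP server for its registered tools.
const listRequest = JSON.stringify({
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {},
});

// Check that a parsed tools/list response contains the expected
// number of tools (12 for this server, per its documentation).
function verifyToolCount(response, expected = 12) {
  const tools = (response && response.result && response.result.tools) || [];
  return tools.length === expected;
}

// Illustrative stubbed response (tool names here are placeholders).
const stubbedResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: Array.from({ length: 12 }, (_, i) => ({ name: `tool_${i}` })),
  },
};

console.log(verifyToolCount(stubbedResponse)); // → true
```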
Notes
- Ensure OpenFOAM is correctly installed and the OPENFOAM_HOME path is valid.
- If using a different entry file, update the mcp_config accordingly (node <your_entry>.js).
- For production deployments, consider running behind a process manager (PM2) and configuring proper logging and security settings.
Additional notes
Tips and common issues:
- If OpenFOAM commands fail, verify that the OpenFOAM environment is sourced in the shell that runs the Node process (e.g., source $OPENFOAM_HOME/bashrc).
- Ensure permissions allow mesh generation and solver execution; run with appropriate user privileges.
- The 5 Whys error resolution and academic references are provided to help you diagnose issues and locate reputable sources for solutions.
- Use analyze_turbulent_flow for complex turbulence modeling guidance, as this tool selects suitable models and justifies choices.
- When parameter extraction is uncertain, the Interactive Clarification flow will ask follow-up questions to disambiguate inputs before proceeding.
Related MCP Servers
context7
Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
mem0
✨ mem0 MCP Server: A memory system using mem0 for AI applications with Model Context Protocol (MCP) integration. Enables long-term memory for AI agents as a drop-in MCP server.
time
⏰ Time MCP Server: Giving LLMs Time Awareness Capabilities
workflowy
Powerful CLI and MCP server for WorkFlowy: reports, search/replace, backup support, and AI integration (Claude, LLMs)
rdkit
MCP server that enables language models to interact with RDKit through natural language
gtm
An MCP server for Google Tag Manager. Connect it to your LLM, authenticate once, and start managing GTM through natural language.