
kortx

Kortx: MCP server for AI-powered consultation. GPT-5 strategic planning, Perplexity real-time search, GPT Image visual creation, with intelligent context gathering.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio effatico-kortx-mcp npx -y @effatico/kortx-mcp@latest \
  --env OPENAI_API_KEY="${OPENAI_API_KEY}" \
  --env PERPLEXITY_API_KEY="${PERPLEXITY_API_KEY}"

How to use

Kortx is a lightweight MCP server that gives copilots access to a suite of planning, research, and enhancement tools built around the Model Context Protocol. It ships with seven consultation tools (think-about-plan, suggest-alternative, improve-copy, solve-problem, consult, search-content, create-visual) plus a batch runner for parallel execution.

It also supports file-based context enrichment through a default gatherer, with optional connectors to the Serena, MCP Knowledge Graph, and CCLSP MCP servers when those services are running. The server is designed for stdio transport and includes structured logging, rate limiting, and request caching, along with a hardened Docker build that runs as a non-root user.

To use Kortx, add it to your MCP client configuration with your credentials and desired model preferences, then invoke the available tools through the MCP interface. The Quick Start demonstrates how to wire Kortx into an MCP client and pass credentials to enable OpenAI and Perplexity access for generation and citation-backed results.
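For clients configured through a JSON file, a stdio entry for Kortx might look roughly like the following. This is a sketch of the common mcpServers shape used by Claude Desktop and similar clients; the server name "kortx" and the placeholder key values are illustrative, so adapt them to your client's configuration format:

```json
{
  "mcpServers": {
    "kortx": {
      "command": "npx",
      "args": ["-y", "@effatico/kortx-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "PERPLEXITY_API_KEY": "pplx-..."
      }
    }
  }
}
```

The env block is where the OpenAI and Perplexity credentials mentioned above are passed through to the server process.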

How to install

Prerequisites:

  • Node.js >= 22.12.0
  • npm >= 9
  • Optional: Docker for containerized runs

Recommended steps:

  1. Clone the repository (or install the npm package):
     git clone https://github.com/effatico/kortx-mcp.git
     cd kortx-mcp

  2. Install dependencies: npm install

  3. Build the project (if applicable for your setup): npm run build

  4. Run in development mode (for local testing): npm run dev

  5. If you prefer Docker, build and run the image as described in the Docker section of the README:
     docker build -t kortx-mcp .
     docker run -i --rm \
       -e OPENAI_API_KEY=$OPENAI_API_KEY \
       -e PERPLEXITY_API_KEY=$PERPLEXITY_API_KEY \
       kortx-mcp

  6. Optionally configure via the Quick Start example to add Kortx to your MCP client configuration (see below).
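After installing, a quick sanity check is to start the server directly and confirm it stays running on stdio. The command shape is assumed from the npx-based Quick Start, and the key values below are placeholders:

```shell
# Placeholder credentials; substitute your real keys
export OPENAI_API_KEY="sk-..."
export PERPLEXITY_API_KEY="pplx-..."

# Start the stdio server; it should stay running and read MCP
# messages from stdin (press Ctrl-C to stop)
npx -y @effatico/kortx-mcp@latest
```

If the process exits immediately with an error about missing credentials, check that both environment variables are set in the shell that launches it.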

Additional notes

  • The server uses stdio for transport; HTTP is not implemented yet (as per the README).
  • Set OPENAI_API_KEY and PERPLEXITY_API_KEY in your environment to enable model access and Perplexity-powered search.
  • You can customize model choices, reasoning effort, verbosity, and retry behavior via configuration (OPENAI_MODEL, OPENAI_REASONING_EFFORT, OPENAI_VERBOSITY, etc.).
  • If you run Docker, the image runs as UID/GID 1001 (nodejs) and performs npm audits during build.
  • The Quick Start shows how to wire Kortx into an MCP client using an npx-based installation; you can replace it with a package-manager install or a local build as needed.
  • For longer-lived deployments, consider using docker-compose with volume mounts as suggested in the Docker section of the README.
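For the docker-compose suggestion above, a minimal sketch might look like the following. The service name, volume name, and cache mount path are hypothetical; check the README's Docker section for the actual layout:

```yaml
# Hypothetical docker-compose.yml sketch for a longer-lived deployment
services:
  kortx:
    build: .
    stdin_open: true            # stdio transport needs an open stdin
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - PERPLEXITY_API_KEY=${PERPLEXITY_API_KEY}
    volumes:
      - kortx-cache:/app/.cache  # hypothetical path for the request cache
volumes:
  kortx-cache:
```

Keeping the cache on a named volume means cached responses survive container restarts.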
