
x-act

x-act is a library for composing AI assistants.

Installation
Run this command in your terminal to add the MCP server to Claude Code.
claude mcp add --transport stdio ox-ai-x-act npx -y ox-ai-x-act \
  --env PORT="Port to run the local server (default 3000)" \
  --env OX_AI_CACHE_DIR="Directory for local caches and models (optional)"
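The same registration can also be expressed declaratively in a project-scoped `.mcp.json` file, which Claude Code reads for MCP server definitions. A sketch of an entry equivalent to the command above (the port and cache path shown are placeholder values, not defaults shipped by the package):

```json
{
  "mcpServers": {
    "ox-ai-x-act": {
      "command": "npx",
      "args": ["-y", "ox-ai-x-act"],
      "env": {
        "PORT": "3000",
        "OX_AI_CACHE_DIR": "/path/to/cache"
      }
    }
  }
}
```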

How to use

x-act is a complete Gen-AI library designed to run fully locally, providing tools to build, orchestrate, and serve AI-powered capabilities. It can be used as a CLI via npx or imported as a library in your own applications, enabling local hosting of AI models, pipelines, and components without relying on external services. The package aims to simplify composing Gen-AI workflows, managing components, and exposing APIs for integration with other local services.

To get started, install or run via npx and explore the included tooling and APIs. Once running, you can leverage the built-in modules to create prompts, manage model selection, compose multi-step reasoning chains, and serve endpoints for development or testing. The library is designed to work offline with locally hosted models while providing hooks to connect to remote services if needed.

How to install

Prerequisites:

  • Node.js (14.x or newer) and npm/yarn installed on your system
  • Basic familiarity with the command line
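The Node.js requirement can be verified from the shell. This sketch parses the major version out of `node --version` (which prints a string like v18.19.0) and compares it against the 14.x minimum:

```shell
# Extract the major version from `node --version` and check it meets 14.x.
ver="$(node --version)"
major="${ver#v}"        # drop the leading "v"
major="${major%%.*}"    # drop everything after the first dot
if [ "$major" -ge 14 ]; then
  echo "Node.js $ver meets the minimum (14.x)"
else
  echo "Node.js $ver is too old; please upgrade to 14.x or newer"
fi
```

The parameter expansions used here are plain POSIX shell, so the check works in sh, bash, or zsh alike.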

Installation steps:

  1. Install or run the package with npx (no global install needed):
npx -y ox-ai-x-act
  2. Alternatively, if you prefer to install locally for ongoing development, you can initialize a project and install the package:
mkdir my-genai-app
cd my-genai-app
npm init -y
npm install ox-ai-x-act
  3. Start the local server (if the package provides a server entry point) or import the library in your code as documented by the package:
node path/to/server.js
  4. Check the running service at http://localhost:3000 (or the port you configured).
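The final check can be scripted. This sketch probes the configured port with curl and reports whether anything answered (it assumes only that the server speaks HTTP on localhost):

```shell
# Probe the local x-act server; PORT defaults to 3000 as documented above.
PORT="${PORT:-3000}"
if curl -sf "http://localhost:${PORT}/" >/dev/null; then
  echo "Server responded on port ${PORT}"
else
  echo "No response on port ${PORT} - is the server running?"
fi
```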

Prerequisites note: Ensure your environment has sufficient memory and, if running large models locally, allocate the CPU/GPU resources those models require.

Additional notes

Tips and common considerations:

  • Environment variables: PORT sets the listening port (default 3000); OX_AI_CACHE_DIR sets where models and caches are stored locally, which can speed up repeated runs.
  • If you encounter network-related issues with dependencies during first run, ensure your npm/yarn registry access is available and your network allows fetching packages.
  • For production usage, consider configuring a reverse proxy and setting up proper TLS termination.
  • Review the library's documentation for supported local models, supported runtimes, and any optional plugins or modules that extend capabilities.
  • If you run into compatibility issues with Node.js versions, consult the project's compatibility matrix and consider using nvm to manage Node versions.
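As an example of the first tip above, both variables can be set inline for a single run. The port and cache path are placeholder values, assuming the package reads these variables as described:

```shell
# Assumed usage: run once with a custom port and a persistent cache directory.
PORT=8080 OX_AI_CACHE_DIR="$HOME/.cache/ox-ai-x-act" npx -y ox-ai-x-act
```

Inline assignments like these apply only to that single command; export the variables instead if you want them to persist for the shell session.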
