
MCP_sentiment_analysis_server

MCP Sentiment Analysis Server is a sentiment analysis service built on the Model Context Protocol (MCP). It provides real-time sentiment analysis and integrates into AI workflows and applications.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio adilzhanb-mcp_sentiment_analysis_server python app.py \
  --env CACHE_SIZE="1000" \
  --env ENABLE_GPU="true" \
  --env NUM_WORKERS="4" \
  --env API_KEY_REQUIRED="true" \
  --env MCP_SENTIMENT_HOST="localhost" \
  --env MCP_SENTIMENT_PORT="8080" \
  --env MCP_SENTIMENT_DEBUG="false" \
  --env SENTIMENT_BATCH_SIZE="32" \
  --env SENTIMENT_MAX_LENGTH="512" \
  --env SENTIMENT_MODEL_PATH="./models/sentiment" \
  --env RATE_LIMIT_PER_MINUTE="100"

How to use

The MCP Sentiment Analysis Server provides real-time sentiment analysis via an MCP-compatible interface. It exposes a RESTful API for single and batch analysis, and includes a Gradio-based web UI for interactive exploration. You can also integrate with the MCP client SDK to request sentiment results programmatically. Typical usage includes initializing the Python client or sending HTTP requests to the /analyze and /batch-analyze endpoints to obtain sentiment, confidence scores, emotions, and other metrics. The server supports multi-language input, batch processing, live streaming capabilities, and detailed performance metrics for monitoring in production.
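A minimal client sketch for the /analyze endpoint described above. The JSON field name (`text`), the Bearer-token header, and the base URL are assumptions inferred from this page's configuration, not a documented API contract:

```python
import json
import urllib.request

# Assumption: port matches MCP_SENTIMENT_PORT=8080 from the setup above.
BASE_URL = "http://localhost:8080"

def build_analyze_request(text, api_key=None):
    """Construct (but do not send) a POST request for single-text analysis.

    The payload shape {"text": ...} is an assumption; check the server's
    actual schema before relying on it.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:  # needed when API_KEY_REQUIRED=true
        headers["Authorization"] = "Bearer " + api_key
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + "/analyze", data=body, headers=headers, method="POST"
    )

req = build_analyze_request("The new release is fantastic!", api_key="demo-key")
print(req.full_url, req.get_method())
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) should return sentiment, confidence, and emotion fields per the description above.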

How to install

Prerequisites:
- Python 3.8 or newer
- Git
- Optional: conda or virtualenv tooling

Steps:
1) Clone the repository:
```bash
git clone https://github.com/AdilzhanB/MCP_sentiment_analysis_server.git
cd MCP_sentiment_analysis_server
```

2) Create and activate a virtual environment:
```bash
# Using venv (Python standard library)
python -m venv venv
source venv/bin/activate  # On Windows use: venv\Scripts\activate

# Or using conda (recommended if available)
conda env create -f environment.yml
conda activate mcp-sentiment
```

3) Install dependencies:
```bash
pip install -r requirements.txt
```

4) Configure environment variables (see below) and start the server:
```bash
# Run the server locally (default port 8080)
export MCP_SENTIMENT_HOST=localhost
export MCP_SENTIMENT_PORT=8080
export MCP_SENTIMENT_DEBUG=false
export SENTIMENT_MODEL_PATH=./models/sentiment
export SENTIMENT_BATCH_SIZE=32
export SENTIMENT_MAX_LENGTH=512
export ENABLE_GPU=true
export NUM_WORKERS=4
export CACHE_SIZE=1000
export API_KEY_REQUIRED=true
export RATE_LIMIT_PER_MINUTE=100

python app.py
```

5) Optional: start with a custom config file and port:
```bash
python app.py --config custom_config.yaml --port 8080
```
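Step 5 references a custom_config.yaml. The keys below are an illustrative sketch that mirrors the environment variables above; the actual schema is not documented here, so adjust to what the server expects:

```yaml
# custom_config.yaml -- illustrative sketch only; key names mirror the
# environment variables above and are not a documented schema.
server:
  host: localhost
  port: 8080
  debug: false
model:
  path: ./models/sentiment
  batch_size: 32
  max_length: 512
runtime:
  enable_gpu: true
  num_workers: 4
  cache_size: 1000
security:
  api_key_required: true
  rate_limit_per_minute: 100
```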

Additional notes

Notes and tips:
- The server relies on environment variables for configuration; ensure they are set in your deployment environment.
- For GPU acceleration, ensure your hardware and drivers are compatible and ENABLE_GPU is set to true.
- The API endpoints include /analyze for single texts and /batch-analyze for multiple texts.
- If API keys or rate limiting are enabled, supply the required credentials in your client requests.
- When using conda, the environment name in environment.yml is mcp-sentiment (adjust as needed).
- Monitor health using the provided /health, /status/detailed, and /metrics endpoints.
- Check logs for model loading and inference errors, especially when upgrading model weights or dependencies.
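The /health endpoint mentioned in the notes can be polled with a small sketch like the one below; the base URL is assumed to match the default port, and the check only verifies an HTTP 200 response:

```python
import urllib.error
import urllib.request

def check_health(base_url="http://localhost:8080", timeout=5):
    """Return True if the server answers /health with HTTP 200, else False.

    Assumes /health needs no authentication; add headers if your
    deployment requires an API key.
    """
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

The same pattern applies to /status/detailed and /metrics for richer monitoring data.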
