
ExamPrepAgent

An AI agent that assists students preparing for any exam or test. The agent serves as an intelligent study companion leveraging Large Language Models (LLMs) and MCP servers to provide an engaging and interactive learning experience.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio cardea-mcp-examprepagent \
  --env TABLE_NAME="qa_table_name" \
  --env DATABASE_URL="TiDB connection URL" \
  --env ENV_FILE_PATH=".env" \
  --env OPENAI_API_KEY="Your OpenAI-compatible LLM API key" \
  --env ASR_API_ENDPOINT="ASR service endpoint (optional)" \
  --env TTS_API_ENDPOINT="TTS service endpoint (optional)" \
  -- python main.py

Note that the --env flags belong to claude mcp add and must appear before the -- separator; the server launch command (python main.py) comes after it.

How to use

ExamPrepAgent exposes two MCP tools to assist learners:

  • get_random_question() — returns a random Q&A pair from the knowledge base for practice and reinforcement.
  • get_question_and_answer() — performs a semantic/keyword search to retrieve the most relevant Q&A pairs for a user-posed query, enabling targeted study and clarification.

You can interact with the MCP server through any MCP-compatible LLM client, and the agent architecture is designed to integrate with a chatbot UI for guided study sessions. When get_question_and_answer() is used, the LLM can cite the retrieved Q&A context to explain concepts, while get_random_question() drives a practice-question-and-explanation flow to reinforce understanding. By default the server relies on a TiDB-backed dataset of Kubernetes-focused Q&A, with the flexibility to load additional datasets as needed.
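The retrieval logic behind the two tools can be sketched roughly as follows. This is a stdlib-only illustration, not the project's actual implementation: the real server backs these lookups with TiDB, and the tiny in-memory dataset and word-overlap scoring here are assumptions for demonstration only.

```python
import random

# Tiny in-memory stand-in for the TiDB-backed Q&A table (hypothetical rows).
QA_PAIRS = [
    {"question": "What is a Kubernetes Pod?",
     "answer": "The smallest deployable unit, grouping one or more containers."},
    {"question": "What does a Deployment do?",
     "answer": "It manages ReplicaSets to keep a desired number of Pods running."},
    {"question": "What is a Service?",
     "answer": "A stable network endpoint that load-balances traffic to Pods."},
]

def get_random_question():
    """Return one random Q&A pair, as the MCP tool of the same name would."""
    return random.choice(QA_PAIRS)

def get_question_and_answer(query, top_k=2):
    """Naive keyword search: rank pairs by word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        QA_PAIRS,
        key=lambda p: len(words & set(p["question"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

print(get_question_and_answer("What is a Service?")[0]["question"])
# → What is a Service?
```

In the real server these functions would query TiDB and be registered as MCP tools so that any MCP-compatible client can call them by name.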

How to install

Prerequisites

  • Python 3.8+ and pip
  • Git
  • Access to an LLM API (OpenAI-compatible) and, optionally, ASR/TTS services if you enable voice features
  • A running TiDB-compatible database (or the ability to run the included tester) and a dataset loaded at dataset/qa.csv
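A quick way to confirm the interpreter meets the 3.8+ requirement before installing anything:

```python
import sys

# Fail fast if the Python version is below the documented minimum (3.8).
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])
```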

Step-by-step installation

  1. Clone the repository
git clone https://github.com/cardea-mcp/ExamPrepAgent.git
cd ExamPrepAgent
  2. Create and activate a Python virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # on macOS/Linux
venv\Scripts\activate     # on Windows
  3. Install dependencies (note: ffmpeg is a system tool, not a pip package — install it with your OS package manager, e.g. apt or brew)
pip install fastmcp fastapi requests mysql-connector-python
  4. Create a .env file from the example
cp .env.example .env
  5. Prepare the dataset (as described in the README)
cd dataset
curl -L -o qa.csv https://huggingface.co/datasets/ItshMoh/k8_qa_pairs/resolve/main/kubernetes_qa_output.csv
  6. Load the CSV into the database
python csv_loader.py
  7. Start the MCP server
python3 main.py
  8. (Optional) Start the chatbot UI
python3 app.py
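Before starting the server, it can help to sanity-check the DATABASE_URL value from your .env. A minimal sketch, assuming a mysql://user:pass@host:port/db-style URL (the exact format your TiDB deployment expects may differ):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Split a mysql://user:pass@host:port/db URL into connection kwargs."""
    u = urlparse(url)
    return {
        "user": u.username,
        "password": u.password,
        "host": u.hostname,
        "port": u.port or 4000,  # TiDB's default MySQL-protocol port
        "database": u.path.lstrip("/"),
    }

cfg = parse_database_url("mysql://root:secret@127.0.0.1:4000/exam_prep")
print(cfg["host"], cfg["database"])
# → 127.0.0.1 exam_prep
# With mysql-connector-python installed, a real connection attempt would be:
# import mysql.connector
# conn = mysql.connector.connect(**cfg)
```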

Prerequisites recap

  • Ensure OpenAI (or equivalent) LLM API credentials are available
  • Ensure the environment (.env) is configured with DATABASE_URL and API keys
  • Ensure the dataset is properly loaded into TiDB and the MCP server can access it
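For reference, a populated .env might look like the following. All values are placeholders — substitute your own, and check .env.example for the exact variable names your checkout expects:

```
TABLE_NAME=qa_table_name
DATABASE_URL=mysql://user:password@host:4000/database
OPENAI_API_KEY=your-openai-compatible-api-key
ASR_API_ENDPOINT=https://example.com/asr   # optional, for voice input
TTS_API_ENDPOINT=https://example.com/tts   # optional, for voice output
```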

Additional notes

Tips and common issues:

  • If the MCP server fails to start due to database connection errors, verify the DATABASE_URL value in .env and that TiDB is running with the expected schema.
  • When using voice features, provide functional ASR and TTS endpoints in the environment variables (ASR_API_ENDPOINT and TTS_API_ENDPOINT).
  • The default dataset focuses on Kubernetes Q&A; you can add other datasets by placing QA pairs into dataset/ and updating the loader script as needed.
  • If you modify main.py or its MCP tools, ensure you expose get_random_question() and get_question_and_answer() via the MCP server so LLM clients can call them.
  • For local testing, you can run the LLM API and MCP server separately and connect a simple client to validate responses before wiring up the full chatbot UI.
