Taskx-nano
A Python library for executing tasks from natural language prompts.

Add TaskX-Nano to Claude as an MCP server:
claude mcp add --transport stdio rodyrahi-taskx-nano python main.py
How to use
TaskX-Nano is a lightweight, open-source tool that maps natural language prompts to programming functions without relying on large language models. It interprets a prompt, matches it to a function from a predefined or user-defined library, then extracts and optimizes the required parameters. This lets developers automate simple tasks or integrate NLP-driven prompts into their workflows with minimal resource usage. The server is designed to be fast and extensible: you can grow the function library and tailor parameters to specific use cases.
To use TaskX-Nano, run the Python entry script as described in the installation guide. Once it is running, provide natural language prompts describing the desired operation (for example, computing a value or transforming data). The system identifies the best-matching function (e.g., a square function or a data transformation) and returns the selected function along with optimized parameters derived from the prompt. This makes it suitable for quick prototyping, edge deployments, or lightweight automation tasks where a full LLM is unnecessary or impractical.
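To make the prompt-to-function idea concrete, here is a toy sketch of keyword-based matching with numeric parameter extraction. The function names, trigger keywords, and matching logic below are illustrative assumptions, not TaskX-Nano's actual internals:

```python
import re

# Toy function library: name -> (trigger keywords, callable).
# Purely illustrative; TaskX-Nano's real library format may differ.
LIBRARY = {
    "square": ({"square", "squared"}, lambda x: x * x),
    "double": ({"double", "twice"}, lambda x: 2 * x),
}

def match_function(prompt: str):
    """Match a prompt to the function with the best keyword overlap
    and extract the first number in the prompt as its parameter."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    best_name, (keywords, func) = max(
        LIBRARY.items(), key=lambda item: len(item[1][0] & words)
    )
    if not keywords & words:
        return None, None  # nothing in the library matched the prompt
    nums = re.findall(r"-?\d+", prompt)
    if not nums:
        return best_name, None  # matched a function but found no parameter
    return best_name, func(int(nums[0]))

print(match_function("please square the number 7"))  # ('square', 49)
```

TaskX-Nano's own matcher uses NLP tooling (spaCy, scikit-learn) rather than raw keyword overlap, but the overall flow is the same: score candidate functions against the prompt, pick the best, then pull out the parameters.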
How to install
Prerequisites:
- Python 3.8+ installed
- Git installed
- Access to the internet to install dependencies from requirements.txt
Installation steps:
- Clone the repository:
  git clone https://github.com/rodyrahi/Taskx-nano.git
  cd Taskx-nano
- Create and (if desired) activate a virtual environment:
  python -m venv venv
  # On Windows
  venv\Scripts\activate
  # On macOS/Linux
  source venv/bin/activate
- Install dependencies:
  pip install -r requirements.txt
- Prepare configuration (if needed):
  Review config.yaml or related config files to add or adjust the function library, model weights, or other settings as applicable.
- Run the server:
  python main.py
Notes:
- Dependencies such as spaCy and scikit-learn are installed from requirements.txt. No large NLP models are required for TaskX-Nano.
- If your environment requires different Python paths or environment setup, adjust accordingly.
Additional notes
Tips and common issues:
- Ensure Python 3.8+ is installed and accessible in your PATH.
- If you encounter dependency issues, consider upgrading pip or creating a clean virtual environment.
- If the function library or config.yaml is not set up yet, define a minimal function library with simple test functions to validate prompt-to-function mapping.
- Check for any project-specific environment variables or configuration options in config.yaml and set placeholders if not yet configured.
- Logs can help diagnose mapping failures; enable verbose logging if available in main.py or configuration.
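For the minimal-function-library tip above, a smoke-test library can be as simple as a dictionary of plain Python callables. The registry shape below is a guess for illustration; check the repository's config.yaml and main.py for the real format:

```python
# Hypothetical minimal function library for validating
# prompt-to-function mapping. TaskX-Nano's actual registry
# format may differ from this sketch.

def square(x: float) -> float:
    """Return x squared."""
    return x * x

def reverse_text(text: str) -> str:
    """Return the input string reversed."""
    return text[::-1]

FUNCTION_LIBRARY = {
    "square": square,
    "reverse_text": reverse_text,
}

# Quick sanity checks before wiring the library into the server
assert FUNCTION_LIBRARY["square"](3) == 9
assert FUNCTION_LIBRARY["reverse_text"]("abc") == "cba"
```

Running checks like these first makes it easy to tell whether a later failure comes from the mapping layer or from the functions themselves.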