nexus-prime
Nexus Prime fuses multimodal AI (text, vision, audio, real-time data) for zero-shot learning, ethical reasoning, and simulations in climate and medicine. Quantum-inspired algorithms deliver exponential speedups, solving otherwise intractable problems in seconds.
claude mcp add --transport stdio kosasih-nexus-prime python -m nexus_prime.inference.api \
  --env NEXUS_API_KEY="your_api_key_here" \
  --env NEXUS_LOG_LEVEL="INFO"

Both --env variables are optional.
How to use
Nexus Prime is a comprehensive multimodal AI framework that fuses text, vision, audio, and real-time data to support zero-shot learning, ethical decision-making, and scenario simulations such as climate modeling and personalized medicine. The server exposes an API (via FastAPI) for real-time inference, along with tools for model simulation, training, and edge deployment. You can run the server with Python by starting the API module, then interact with the /infer endpoint to submit multimodal inputs and receive predictions. The project also includes utilities for ONNX export, distributed training, and VR/AR-enabled simulations, making it suitable for research and production deployments that require scalable, ethically audited AI capabilities.
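As a minimal client sketch, a request to the /infer endpoint might look like the following. The payload field names (`text`, `modalities`) are illustrative assumptions, not a documented schema; check the server's interactive docs for the actual request format.

```python
import json
import urllib.request

# Hypothetical payload shape -- the real /infer schema may differ.
payload = {
    "text": "Summarize projected sea-level rise for 2050.",
    "modalities": ["text"],
}

def infer(payload, url="http://localhost:8000/infer"):
    """POST a JSON payload to the inference endpoint and return the parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the server running locally, `infer(payload)` returns the parsed JSON prediction.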
How to install
Prerequisites:
- Python 3.8+
- CUDA-enabled GPU recommended for performance (optional for CPU)
- Docker (optional for containerized deployment)
- Git
- (Optional) Qiskit account for quantum features
Step-by-step installation:

- Clone the repository:
  git clone https://github.com/KOSASIH/nexus-prime.git
  cd nexus-prime

- Create and activate a virtual environment (recommended):
  python -m venv venv
  source venv/bin/activate   # Linux/macOS
  venv\Scripts\activate      # Windows

- Install dependencies:
  pip install -r requirements.txt
  Key packages include: torch, transformers, qiskit, ray, fastapi, onnx, pytorch-lightning, pyvista, and kubernetes.

- Optional: set up quantum prerequisites:
  - Install Qiskit: pip install qiskit
  - If you have a Qiskit account, load your credentials in your config as needed.

- Run the API server:
  python -m nexus_prime.inference.api
  The server typically starts on http://localhost:8000.

- Optional Docker-based deployment:
  docker build -t nexusprime:latest .
  docker run -it -p 8000:8000 nexusprime:latest

- Optional: download pre-trained weights (if applicable):
  python src/nexus_prime/utils/download_weights.py --api-key YOUR_KEY
Notes:
- Ensure CUDA drivers and compatible PyTorch version are installed for GPU acceleration.
- If you plan to run distributed training or edge deployments, follow the project’s distributed/edge deployment guidelines in the docs.
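To confirm whether GPU acceleration is actually available before launching the server, a quick check like this can help (it assumes torch was installed via requirements.txt and degrades gracefully if not):

```python
def cuda_status():
    """Report whether PyTorch sees a usable CUDA device.

    Returns "pytorch-missing", "cuda", or "cpu".
    """
    try:
        import torch  # installed via requirements.txt
    except ImportError:
        return "pytorch-missing"
    return "cuda" if torch.cuda.is_available() else "cpu"

print(cuda_status())
```

If this prints "cpu" on a GPU machine, the installed PyTorch build likely does not match your CUDA driver version.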
Additional notes
Tips and common issues:
- If the API does not start, check that port 8000 is free or configure the server to use a different port.
- For quantum features, you may need a Qiskit account; otherwise a classical fallback is used.
- When using Docker, ensure Docker daemon is running and you have enough memory for large models.
- Set environment variables like NEXUS_API_KEY only if you need authenticated access to weight files or external services.
- For inference performance, consider exporting to ONNX and deploying on edge hardware with the provided edge tools.
- If you encounter dependency conflicts, consider using a clean virtual environment and installing pinned versions from requirements.txt.
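To diagnose the "port 8000 already in use" case mentioned above, a small standard-library helper can check the port before you start the server (this helper is illustrative, not part of the project):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is currently listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when a listener accepts the connection,
        # i.e. the port is occupied.
        return s.connect_ex((host, port)) != 0

if not port_is_free(8000):
    print("Port 8000 is in use; configure the server to use another port.")
```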
Related MCP Servers
mysql_mcp_server
A Model Context Protocol (MCP) server that enables secure interaction with MySQL databases
Gitingest
mcp server for gitingest
mcp-weibo
A Weibo data API server based on the Model Context Protocol - fetches Weibo user profiles, posts, trending searches, and follower/following data in real time. Supports user search, content search, and topic analysis, providing a complete Weibo data integration solution for AI applications.
skill-to
Convert AI Skills (Claude Skills format) to MCP server resources - Part of BioContextAI
scraper
Context-optimized MCP server for web scraping. Reduces LLM token usage by 70-90% through server-side CSS filtering and HTML-to-markdown conversion.
ros2_medkit_mcp
MCP server for ros2_medkit. Bridge LLM agents to the SOVD REST API for ROS 2 diagnostics and remote operations.