
quarkus-workshop-langchain4j

Quarkus LangChain4j Workshop

Installation
Run the following command in your terminal to register the MCP server with Claude Code (the `--` separates Claude Code's own flags from the command it should launch):

claude mcp add --transport stdio quarkusio-quarkus-workshop-langchain4j -- docker run -i quarkus-workshop-langchain4j:latest
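After registering the server, you can confirm that Claude Code picked it up. This is a sketch assuming the Claude Code CLI (`claude`) is on your PATH; the server name matches the one used in the `add` command above.

```shell
# List configured MCP servers; the new entry should appear under the
# name it was registered with (quarkusio-quarkus-workshop-langchain4j)
claude mcp list
```

If the entry is missing, re-run the `add` command and check that Docker is available, since the server is launched via `docker run`.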

How to use

This MCP server hosts the Quarkus LangChain4j workshop. It provides a Java-based application that showcases building AI-infused applications using Quarkus and LangChain4j. The project consists of multiple steps, each presenting a concrete scenario or integration (such as connecting LangChain4j to a reasoning or embedding service) and offering runnable code in the step directories. The application runs locally on port 8080, and you can follow the step-by-step instructions by exploring the step-XX folders. To run the workshop locally, start the server and then interact with the exposed endpoints to view the workshop content and the built-in tooling.

How to install

Prerequisites:

  • Docker installed and running
  • Basic familiarity with the command line
  • Optional: Git to clone the repository

Installation steps:

  1. Clone the repository containing the workshop and enter it:

     git clone https://github.com/quarkusio/quarkus-workshop-langchain4j.git
     cd quarkus-workshop-langchain4j

  2. Build and run the workshop using Docker (preferred if a prebuilt image is available):

    • Ensure Docker is running
    • Start the workshop container: docker run -i quarkus-workshop-langchain4j:latest
  3. Alternative: If you have the project locally and want to run with Maven (not via Docker):

    • Ensure JDK 17+ is installed
    • Use the Maven wrapper included in the project: ./mvnw quarkus:dev
    • The application will be accessible at http://localhost:8080

Note: If you choose to run locally, you may need to configure environment variables for LangChain4j integration or external services as described in the step documentation.

Additional notes

Tips and considerations:

  • The workshop is organized into step-XX directories. Each step contains its own runnable state and instructions.
  • The default port is 8080; to change it, adjust the Quarkus server configuration (for example via system properties or environment variables) as described in the step docs.
  • If you encounter missing dependencies or connection issues with external AI services, verify network access and proxy settings.
  • For Docker runs, ensure you have permission to pull images from the registry and that there’s sufficient disk space for the image.
  • The LangChain4j integration relies on the appropriate AI model endpoints; check the step docs for required API keys or local mock endpoints.
  • When updating the workshop, you may want to rebuild the Docker image or re-run via Maven to pick up changes in the step directories.
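As a sketch of the port-change tip above: Quarkus reads `quarkus.http.port` as a standard configuration property, and Docker can remap the container port without touching the application. Both mechanisms are standard, not workshop-specific:

```shell
# Maven dev mode on a different port (standard Quarkus property)
./mvnw quarkus:dev -Dquarkus.http.port=8081

# Or leave the container on 8080 internally and remap it on the host
docker run -i -p 8081:8080 quarkus-workshop-langchain4j:latest
```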
