line-bot
An MCP server that integrates the LINE Messaging API to connect an AI agent to a LINE Official Account.
claude mcp add --transport stdio line-line-bot-mcp-server npx @line/line-bot-mcp-server \
  --env DESTINATION_USER_ID="FILL_HERE" \
  --env CHANNEL_ACCESS_TOKEN="FILL_HERE"
How to use
This MCP server integrates the LINE Messaging API with an AI agent via the MCP framework. It exposes actions to push text or Flex Messages to individual LINE users, broadcast messages to all followers, retrieve user profiles, and manage rich menus (creation, deletion, default selection, and listing).

The available tools are:
- push_text_message, push_flex_message
- broadcast_text_message, broadcast_flex_message
- get_profile, get_message_quota
- get_rich_menu_list, create_rich_menu, delete_rich_menu, set_rich_menu_default, cancel_rich_menu_default

To use it, configure your LINE Channel Access Token and, if your tool inputs do not provide a userId, a default destination user ID (DESTINATION_USER_ID). Then run the MCP server using the configuration shown under "How to install". Your AI agent can then invoke these tools to message LINE users or manage rich menus as part of conversations.
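As an illustration, a push_text_message call might carry an input shaped roughly like the following. The field names here are an assumption based on the tool descriptions above; consult the server's actual tool schemas for the exact shape:

```json
{
  "user_id": "FILL_HERE",
  "message": {
    "type": "text",
    "text": "Hello from your AI agent!"
  }
}
```

If user_id is omitted, the server can fall back to the DESTINATION_USER_ID environment variable described below.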
How to install
Prerequisites:
- Node.js v20 or later
- npm (comes with Node.js)
Installation (Using npx):
- Ensure you have a LINE Official Account with Messaging API enabled and obtain a Channel Access Token.
- Set DESTINATION_USER_ID if your inputs do not include userId:
- DESTINATION_USER_ID: the default recipient user ID
- Run the MCP server via npx using the provided configuration:
{
  "mcpServers": {
    "line-bot": {
      "command": "npx",
      "args": [
        "@line/line-bot-mcp-server"
      ],
      "env": {
        "CHANNEL_ACCESS_TOKEN": "FILL_HERE",
        "DESTINATION_USER_ID": "FILL_HERE"
      }
    }
  }
}
Installation (Using Docker):
- Build the Docker image as described in the installation guide to host the MCP server:
docker build -t line/line-bot-mcp-server .
- Run the container with required environment variables (CHANNEL_ACCESS_TOKEN and DESTINATION_USER_ID).
- Point your MCP config to use the docker-based command as shown in the README example.
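For example, the container could be started with the required variables passed as -e flags (the image name is taken from the build step above; exact flags for your MCP client may differ):

```shell
docker run --rm -i \
  -e CHANNEL_ACCESS_TOKEN="FILL_HERE" \
  -e DESTINATION_USER_ID="FILL_HERE" \
  line/line-bot-mcp-server
```

Your MCP configuration would then use "command": "docker" with these run arguments in "args" instead of the npx invocation.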
Local development and testing follow the inspector workflow described in the repository: clone, install dependencies, build, and run the inspector to test MCP interactions.
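That workflow might look like the following, assuming a standard npm setup (the repository URL and script names are assumptions; check the repository's README for the exact commands):

```shell
git clone https://github.com/line/line-bot-mcp-server.git
cd line-bot-mcp-server
npm install
npm run build
npx @modelcontextprotocol/inspector node dist/index.js
```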
Additional notes
- Ensure CHANNEL_ACCESS_TOKEN is kept secret and not committed to source control.
- If your tool input sometimes omits userId, define DESTINATION_USER_ID to avoid errors when sending messages.
- The push_flex_message and broadcast_flex_message inputs expect a valid LINE Flex Message structure in message.contents.
- Rich menu operations require proper LINE account configuration and image assets; create and upload menus carefully and remember to set the default menu if needed.
- When testing locally with the inspector, use the provided npm/yarn scripts to build and run dist/index.js and connect the inspector to the MCP server.
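For reference, message.contents for push_flex_message and broadcast_flex_message expects a LINE Flex Message container. A minimal bubble looks like this (the text content is illustrative):

```json
{
  "type": "bubble",
  "body": {
    "type": "box",
    "layout": "vertical",
    "contents": [
      {
        "type": "text",
        "text": "Hello from the MCP server"
      }
    ]
  }
}
```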