
Custom

MCP server for scraping LinkedIn, Facebook, and Instagram profiles and for performing Google searches.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio sharan-kumar-r-custom-mcp-server uvx custom-mcp-server

How to use

This MCP server provides social media scraping capabilities through a Python-based MCP setup managed with UV. It exposes tools for collecting public data from LinkedIn, Facebook, and Instagram profiles, as well as performing Google searches via the Google Serper API. The server is designed to be used within an MCP-enabled environment, enabling an AI assistant to request profile information or search results and receive structured JSON responses. To use it, install the server in a UV-managed project, configure API keys in a .env file, and start the runtime so your MCP client can invoke the exposed tools. Typical workflows involve querying for LinkedIn or company profile data, scraping public profile information, or performing targeted Google searches for web results.
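To make the "structured JSON responses" concrete, here is a hedged sketch of the shape a profile-scraping tool's result might take, using the fields mentioned below (name, title, location, bio). The function name and exact field names are illustrative assumptions, not the actual implementation in main.py:

```python
import json

def linkedin_profile_stub(profile_url: str) -> dict:
    """Illustrative stand-in for a profile-scraping tool's structured
    response; real field names in main.py may differ."""
    return {
        "source": "linkedin",
        "profile_url": profile_url,
        "name": "",      # placeholder fields an MCP client
        "title": "",     # could expect in the JSON payload
        "location": "",
        "bio": "",
    }

# An MCP client would receive this as serialized JSON:
print(json.dumps(linkedin_profile_stub("https://www.linkedin.com/in/example/")))
```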

How to install

Prerequisites:

  • Python 3.10+ installed on your system (required by the mcp package)
  • UV (uv) installed and available in PATH
  • RapidAPI account with access to LinkedIn/Facebook/Instagram scraping APIs
  • Serper API key for Google Serper

Installation steps:

  1. Initialize a UV project for the MCP server:

     uv init custom-mcp-server
     cd custom-mcp-server

  2. Add the MCP CLI dependency:

     uv add "mcp[cli]"

     (uv init in step 1 creates the project scaffolding; the .env file for API keys is created manually in step 5.)

  3. Add the Custom MCP server code: place main.py (the MCP server implementation) at the project root, as described in the repository.

  4. Install the additional dependencies:

     uv add httpx python-dotenv fastmcp

  5. Create and configure environment variables: create a .env file at the project root with your API keys:

     RAPIDAPI_KEY=your_rapidapi_key_here
     SERPER_API_KEY=your_serper_api_key_here

  6. Run and test the server via MCP Inspector or Claude Desktop:

     uv run mcp install main.py    # install the MCP server into Claude Desktop
     uv run mcp dev main.py        # run in development mode for testing
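In the server itself, loading the .env file amounts to one python-dotenv call (load_dotenv()). As a stdlib-only sketch of what that call does under the hood, assuming the simple KEY=value format shown in step 5:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv(): parse
    KEY=value lines and export them into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over .env values
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env_file()  # afterwards, os.environ["RAPIDAPI_KEY"] is available
```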

Notes:

  • Ensure the environment variables are loaded by the application (dotenv support is included in the dependencies).
  • The exact file names (main.py) and project structure should match what your repository provides.

Additional notes

Tips and troubleshooting:

  • If API keys are invalid or missing, the tools will fail to fetch data; always verify your .env is loaded and keys are correct.
  • For Claude Desktop usage, ensure the server is installed and started through the provided UV commands, then instruct the AI to call the appropriate tool names (LinkedIn Profile Scraping, Facebook Profile Scraping, Instagram Profile Scraping, Google Search).
  • If tools do not appear in the UI, restart Claude Desktop, re-install the MCP server, or test with MCP Inspector using uv run mcp dev main.py.
  • Keep dependencies up to date: run uv lock --upgrade followed by uv sync to refresh dependency versions after updating Python or the packages.
  • The server exposes four primary capabilities: LinkedIn profile scraping, Facebook profile scraping, Instagram profile scraping, and Google search via Serper API. Use precise prompts to retrieve structured data (e.g., name, title, location, bio for profiles; search results for Google queries).
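For reference, the Google Search capability ultimately boils down to a single POST against Google Serper's /search endpoint with the key from .env. A hedged stdlib-only sketch (the endpoint and X-API-KEY header follow Serper's public API; the exact parameters main.py sends may differ):

```python
import json
import os
import urllib.request

def build_serper_request(query: str, api_key: str) -> urllib.request.Request:
    """Build the POST request for Google Serper's /search endpoint."""
    return urllib.request.Request(
        "https://google.serper.dev/search",
        data=json.dumps({"q": query}).encode(),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Only fire the request when a key is actually configured.
if os.environ.get("SERPER_API_KEY"):
    req = build_serper_request("model context protocol", os.environ["SERPER_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)
        print(results.get("organic", [])[:3])  # top organic search results
```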
