Guide · March 2026 · 7 min read

LinkedIn Profile API for AI Agents: No MCP Server Required

Your agent needs LinkedIn data. Here's the simplest way to wire it up.

You're building an AI agent. It researches prospects, enriches leads, or qualifies signups. At some point it needs to answer a simple question: who is this person on LinkedIn?

Their current role, past companies, education, headline. The kind of context that turns a generic outreach into a relevant conversation, or helps your agent decide whether a lead is worth pursuing.

You have two paths to get this data into your agent. One involves running a persistent server. The other is a single HTTP request. The right choice depends on what you're actually building.

The MCP Server Approach (and Why It's Overkill)

The Model Context Protocol has become the standard for giving AI agents access to external tools. It's well-designed, and for complex integrations it makes sense. Databases, filesystems, multi-step workflows where the agent needs to discover capabilities at runtime.

For LinkedIn profile data, an MCP server means:

- a config file plus a persistent server process to keep running
- tool descriptions loaded into the context window even when the agent isn't using them
- the MCP SDK as an extra dependency
- one more process that can crash and take your data access with it

Running a persistent server for a single endpoint is like renting a warehouse to store a shoebox.

MCP is great for stateful, multi-operation integrations. A database where you need to list tables, run queries, and manage connections. A filesystem where you browse, read, write, and search. LinkedIn profile lookup is none of those things. It's a stateless data fetch: URL in, structured profile out.

The REST API Pattern (the Better Way)

Your agent already knows HTTP. Every LLM framework supports defining tools that make HTTP requests. OpenAI function calling, Anthropic tool use, LangChain, CrewAI. They all have the same pattern.

Define a tool that calls an API. Parse the JSON response. Done. No server process, no config file, no context window bloat when the tool isn't being used.

Here's what this looks like in practice with two common setups.

Claude Code: The Skill File Approach

If you're using Claude Code, skill files are the native way to extend agent capabilities. A skill file is a markdown document in ~/.claude/skills/ that teaches Claude how to use a tool. Zero overhead when the skill isn't active. Zero infrastructure to manage.

# ~/.claude/skills/linkedin-lookup/skill.md

## LinkedIn Profile Lookup

Look up LinkedIn profiles using the ScrapeLinkedIn API.
API key is stored in the environment variable `SCRAPELINKEDIN_API_KEY`.

### Single profile by URL

```bash
curl -s -X POST "https://api.scrapelinkedin.com/api/v1/scrape" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY" \
  -d '{"linkedin_url": "https://linkedin.com/in/USERNAME"}'
```

### Search by name and company

```bash
curl -s -X POST "https://api.scrapelinkedin.com/api/v1/scrape" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY" \
  -d '{"first_name": "Jane", "last_name": "Smith", "company_name": "Acme Inc"}'
```

### Check cache or poll result

```bash
curl -s "https://api.scrapelinkedin.com/api/v1/scrape/{id}" \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY"
```

Response includes: headline, current role, company, location,
experience history, education, skills, and profile summary.

That's it. When Claude needs LinkedIn data, it reads the skill and makes the call. When it doesn't, the skill adds zero tokens to your context window.
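The "check cache or poll result" endpoint above implies that a lookup can be asynchronous: the POST kicks off a scrape, and a GET on the returned id fetches the result. In a Python agent, a minimal poll helper might look like the sketch below. Note the assumptions: the `status` field and its `"completed"`/`"failed"` values are illustrative, not a documented schema, so check the API's actual response shape before relying on them.

```python
import time

def poll_scrape(scrape_id, fetch, interval=2.0, timeout=60.0):
    """Poll GET /scrape/{id} until the job reaches a terminal state.

    `fetch` is any callable that takes the scrape id and returns the
    parsed JSON for one poll. The "status" field and the
    "completed"/"failed" values are assumptions about the response
    shape, not a documented contract.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(scrape_id)
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"scrape {scrape_id} did not finish in {timeout}s")
```

Injecting `fetch` as a parameter keeps the helper testable without hitting the network; in production it would wrap the `curl`-equivalent GET shown in the skill file.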

Python Agent: OpenAI Function Calling

For Python-based agents using OpenAI's API, the integration is about 20 lines of code plus a tool schema:

```python
import os
import requests

def scrape_linkedin(linkedin_url=None, first_name=None,
                    last_name=None, company_name=None):
    """Fetch a LinkedIn profile by URL or by name + company."""
    if linkedin_url:
        payload = {"linkedin_url": linkedin_url}
    else:
        payload = {"first_name": first_name,
                   "last_name": last_name,
                   "company_name": company_name}

    resp = requests.post(
        "https://api.scrapelinkedin.com/api/v1/scrape",
        headers={"X-API-Key": os.environ["SCRAPELINKEDIN_API_KEY"]},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# OpenAI tool definition
tools = [{
    "type": "function",
    "function": {
        "name": "scrape_linkedin",
        "description": "Look up a LinkedIn profile by URL or name+company",
        "parameters": {
            "type": "object",
            "properties": {
                "linkedin_url": {"type": "string"},
                "first_name": {"type": "string"},
                "last_name": {"type": "string"},
                "company_name": {"type": "string"}
            }
        }
    }
}]
```

No SDK. No server. The agent calls the function when it needs LinkedIn data, and the 20 lines above handle the rest.
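The one remaining piece of glue is routing the model's tool call back to the Python function. A minimal dispatcher is sketched below; the `tool_call` dict mirrors the shape of an entry in the OpenAI response's `tool_calls` (function name plus JSON-encoded arguments), so treat the exact layout as an assumption to verify against the SDK you're using.

```python
import json

def dispatch_tool_call(tool_call, registry):
    """Route one OpenAI-style tool call to a local Python function.

    `tool_call` is assumed to carry {"function": {"name": ...,
    "arguments": "<JSON string>"}}; `registry` maps tool names to
    callables, e.g. {"scrape_linkedin": scrape_linkedin}.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return registry[name](**args)
```

Keeping a name-to-callable registry means adding a second tool later is one dict entry, with no server or config change.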

Real-World Agent Workflow: Pre-Call Research

Here's how this plays out in practice. You tell your agent:

"I have a call with Sarah Chen from Stripe in 30 minutes. What should I know?"

The agent's chain of thought:

  1. I need background on Sarah Chen at Stripe
  2. Call scrape_linkedin with first_name="Sarah", last_name="Chen", company_name="Stripe"
  3. Parse the profile JSON
  4. Synthesize a briefing from the structured data
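Steps 3 and 4 amount to pulling the fields you care about out of the profile JSON and handing the model a compact prompt to synthesize from. A sketch, with the caveat that the field names (`headline`, `experience`, `education`) are assumptions based on the response fields listed earlier, not a documented schema:

```python
def briefing_prompt(profile: dict) -> str:
    """Turn structured profile JSON into a pre-call briefing prompt.

    Field names here (headline, experience, education) are assumed
    from the response description above; adjust to the real schema.
    """
    lines = [f"Prepare a pre-call briefing for: {profile.get('headline', 'this contact')}."]
    lines.append("Career history:")
    for job in profile.get("experience", []):
        lines.append(f"- {job.get('title')} at {job.get('company')}")
    lines.append("Education:")
    for school in profile.get("education", []):
        lines.append(f"- {school.get('school')}")
    lines.append("Suggest talking points grounded in this history.")
    return "\n".join(lines)
```

The model does the actual synthesis; this function just keeps the prompt deterministic and cheap to build.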

The agent returns something like:

Sarah Chen - Head of Enterprise Partnerships, Stripe

Current role: 2 years at Stripe, promoted from Sr. Partnership Manager.
Previously at Square (3 years) and JPMorgan Chase (2 years).

Education: MBA from Wharton, BS Computer Science from MIT.

Talking points:
- She moved from traditional finance (JPMorgan) to fintech.
  Ask about that transition and what she sees differently now.
- Her Square experience means she understands the seller side.
  Frame your pitch from that perspective.
- Wharton MBA suggests she'll respond well to data-driven
  arguments over narrative pitches.

Total cost: $0.01 for the profile lookup. Total time: roughly 20 seconds end to end. No server was harmed in the making of this briefing.

MCP Server vs REST API: Side by Side

|  | MCP Server | REST API |
|---|---|---|
| Setup | Config file + server process | One environment variable |
| Runtime | Always-on daemon | On-demand HTTP call |
| Context overhead | Tool descriptions always loaded | Zero until called |
| Dependencies | MCP SDK + server code | curl or requests |
| Portability | MCP-compatible agents only | Any agent, any framework |
| Failure mode | Server crashes = no data | Stateless; a retry works |

This is not an argument against MCP. MCP is excellent for complex, multi-tool systems where discovery and orchestration matter. The point is narrower: for a single stateless data fetch, the simplest integration wins.

When Should You Use an MCP Server?

There are legitimate cases where wrapping LinkedIn lookups in an MCP server makes sense:

- you're already running an MCP server that bundles many tools, and profile lookup is just one more capability alongside them
- your agent needs to discover capabilities at runtime and orchestrate multi-step workflows across several data sources
- the lookup is one operation inside a stateful integration, next to databases or filesystems that genuinely need MCP

Outside of those cases, the REST call is simpler, faster to set up, and easier to debug when something goes wrong.

Get your API key. Give your agent LinkedIn data in 5 minutes.

Structured profile data via a single HTTP call. No MCP server, no SDK, no infrastructure to manage.

Get Your API Key

$0.01 per profile. 5 free lookups included.