This repository is a hands-on set of Model Context Protocol (MCP) examples built to show how MCP clients and MCP servers work together in real workflows.
It focuses on three practical scenarios:
- Sampling: server requests model generation through the client
- Logging and progress notifications: server streams status updates while a tool runs
- Roots-based file access: server is restricted to approved file-system paths
This project demonstrates how to build MCP-based systems with Python and `mcp[cli]`, using `stdio` transport for local development.
It was built to make core MCP concepts tangible:
- how to expose tools from an MCP server
- how an MCP client invokes those tools
- how model calls can be delegated via MCP sampling
- how to enforce safe file access with roots
- how to improve UX with logging/progress notifications
In short: it is a learning and reference repo for MCP patterns, not a single production app.
At a high level, each example follows this flow:
User/CLI -> MCP Client -> MCP Server -> Tool logic
For sampling, the server asks the client to call a model:
MCP Server -> ctx.session.create_message(...) -> MCP Client sampling callback -> LLM provider (Gemini/Claude)
- `sampling_example/`
  - `server.py`: exposes a `rephrase` tool and triggers sampling via MCP context
  - `client.py` / `client_anthropic.py`: start an MCP session and implement the sampling callback to the model SDK
- `notifications_example/`
  - `server.py`: exposes an `add` tool with `ctx.info(...)` and `ctx.report_progress(...)`
  - `client.py`: receives logging/progress callbacks and prints updates
- `roots_example/`
  - `mcp_server.py`: tools `list_roots`, `read_dir`, `convert_video` with path checks
  - `main.py`: starts CLI chat + MCP client with provided root directories
  - `core/`: chat loop, model adapters, utilities, video conversion helpers
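The path checks in `roots_example/mcp_server.py` boil down to resolving a candidate path and verifying it stays under an approved root. A stdlib-only sketch (the helper name is mine, not necessarily the repo's):

```python
from pathlib import Path

def is_within_roots(candidate: str, roots: list[str]) -> bool:
    """True if `candidate` resolves inside one of the approved roots."""
    target = Path(candidate).resolve()  # collapses ../ before checking
    return any(target.is_relative_to(Path(r).resolve()) for r in roots)
```

Resolving before comparing is what defeats `../` escape attempts; `Path.is_relative_to` requires Python 3.9+, which the project's 3.10+ prerequisite covers.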
- Python 3.10+
- `uv` installed (`pip install uv`)
- API key(s), depending on the example:
  - `GOOGLE_API_KEY` for Gemini
  - `ANTHROPIC_API_KEY` for Claude
- FFmpeg (only required for `roots_example` video conversion)
```
cd sampling_example
uv sync
uv run client.py
```

Notes:
- `client.py` uses Gemini (`GOOGLE_API_KEY`).
- If using Anthropic, switch to `client_anthropic.py` as documented in that folder.
```
cd notifications_example
uv sync
uv run client.py
```

```
cd roots_example
cp .env.example .env
# set GOOGLE_API_KEY or ANTHROPIC_API_KEY in .env
uv sync
uv run main.py <root1> [root2] [root3]
```

Example:

```
uv run main.py .
```

This repo is currently designed for local stdio-based execution. There is no production deployment pipeline included yet.
If you want to deploy:
- package each example as a service/container
- replace `stdio` transport with a network transport (for remote clients)
- inject secrets through environment variables in your platform
- add observability, auth, and access controls for production traffic
- Split into three focused examples to isolate MCP concepts and keep each learning path simple.
- Use Python + `mcp[cli]` to reduce boilerplate and stay close to MCP primitives.
- Use `stdio` transport because it is easiest for local development and debugging.
- Delegate model generation through client-side sampling so server tool logic can remain model-agnostic at the protocol boundary.
- Enforce roots for file operations to demonstrate safe-by-default filesystem access.
- Include progress/log callbacks to show user-friendly behavior for long-running tools.
- Add a unified root-level `Makefile`/task runner for one-command setup and execution.
- Standardize docs and naming across all examples (some README titles/descriptions currently differ).
- Add tests (unit + integration) for tool handlers and path-permission checks.
- Add optional HTTP transport examples for remote deployment scenarios.
- Add CI for linting, formatting, and automated smoke tests.
Model Context Protocol (MCP): an MCP server gives an application access to data and functionality from outside (third-party) services. Think of MCP as a standard way for an LLM to communicate with databases, files, APIs, and internal tools.
Why MCP?
Without MCP, LLM integration looks like this: user -> app -> custom tool logic -> LLM
Every tool integration is custom built:
- custom auth
- custom api wrapper
- custom schema
- custom error handling
With MCP: user -> LLM -> MCP Client -> MCP Server -> Tools
Instead of writing custom tool definitions for each integration, you expose tools once via MCP, and any MCP-compatible client can use them.