Model Context Protocol (MCP) Guide
Understanding and implementing the Model Context Protocol for connecting AI models to external data sources and tools.
What is MCP?
Model Context Protocol (MCP) is a standardized protocol that enables Large Language Models (LLMs) to securely connect to external data sources, APIs, and tools while preserving conversation context and enforcing security boundaries.
Key Benefits
- Standardized Integration - One protocol for all connectors
- Security First - Sandboxed execution environment
- Context Preservation - Maintains conversation state
- Tool Composability - Chain multiple tools together
- Provider Agnostic - Works with any LLM
Core Concepts
Connectors
Connectors are the bridge between AI models and external systems:
interface MCPConnector {
id: string;
name: string;
description: string;
category: ConnectorCategory;
// Configuration schema
configSchema: JSONSchema;
// Available actions
actions: ConnectorAction[];
// Authentication requirements
auth: AuthConfiguration;
}
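The ConnectorAction type referenced here is not defined in this guide; based on how actions are declared in the custom-connector example later on, it can be sketched roughly as follows (treat this as an assumption, not the SDK's actual definition):
// Rough sketch inferred from the action declarations later in this guide;
// the real @greenmonkey/mcp-sdk type may differ.
interface ConnectorAction {
  name: string;
  description: string;
  inputSchema: JSONSchema;
  outputSchema: JSONSchema;
}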
Connector Categories
Category | Description | Examples |
---|---|---|
Data Sources | Read from databases, files | PostgreSQL, MySQL, CSV |
APIs | External service integration | Slack, GitHub, Stripe |
File Systems | Local/cloud storage | S3, Google Drive, Local |
Tools | Specialized functionality | Calculator, Code Runner |
Communication | Messaging platforms | Email, Discord, SMS |
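These categories correspond to the category field on MCPConnector; a minimal sketch of the matching union type, using the identifiers that appear in the connector listings below (the exact SDK definition is an assumption):
// Assumed union type; the string values match the category identifiers
// used in the connector definitions below.
type ConnectorCategory =
  | 'data_sources'
  | 'apis'
  | 'file_systems'
  | 'tools'
  | 'communication';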
Security Model
graph TD
A[LLM] --> B[MCP Gateway]
B --> C{Security Layer}
C --> D[Sandboxed Connector]
D --> E[External Resource]
C --> F[Permission Check]
C --> G[Rate Limiting]
C --> H[Audit Logging]
Available Connectors
Database Connectors
PostgreSQL Connector
id: mcp-postgresql
name: PostgreSQL Database
category: data_sources
config:
host: string
port: number
database: string
username: string
password: string (encrypted)
ssl: boolean
actions:
- query: Execute SELECT queries
- schema: Get table schemas
- tables: List all tables
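Outside of a workflow, a connector's actions can also be invoked directly through the registry and execute interface shown later in this guide. A hedged sketch (the input shape for query and the config values are illustrative assumptions):
// Hypothetical direct invocation; MCPRegistry.get and execute(action, input, config)
// appear later in this guide, but the exact input shape for 'query' is assumed.
const pg = MCPRegistry.get('mcp-postgresql');
const rows = await pg.execute(
  'query',
  { query: 'SELECT id, email FROM users LIMIT 10' },
  { host: 'localhost', port: 5432, database: 'app', username: 'reader', password: '***', ssl: true },
);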
MySQL Connector
id: mcp-mysql
name: MySQL Database
category: data_sources
config:
# Similar to PostgreSQL
actions:
- query
- schema
- tables
API Connectors
GitHub Connector
id: mcp-github
name: GitHub
category: apis
config:
token: string (encrypted)
organization: string (optional)
actions:
- list_repos: Get repositories
- get_issues: Fetch issues
- create_issue: Create new issue
- get_pull_requests: List PRs
- get_commits: Fetch commits
Slack Connector
id: mcp-slack
name: Slack
category: communication
config:
bot_token: string (encrypted)
workspace_id: string
actions:
- send_message: Post to channel
- list_channels: Get channels
- search_messages: Search history
- upload_file: Share files
File System Connectors
AWS S3 Connector
id: mcp-s3
name: AWS S3
category: file_systems
config:
access_key_id: string (encrypted)
secret_access_key: string (encrypted)
region: string
bucket: string
actions:
- list_objects: Browse files
- get_object: Download file
- put_object: Upload file
- delete_object: Remove file
Tool Connectors
Web Scraper
id: mcp-web-scraper
name: Web Scraper
category: tools
config:
user_agent: string
timeout: number
actions:
- scrape: Extract page content
- screenshot: Capture page
- extract_data: Structured extraction
Using MCP in Workflows
Basic Workflow Example
// Workflow definition with MCP connectors
const workflow = {
name: 'Customer Feedback Analyzer',
nodes: [
{
id: 'fetch-data',
type: 'mcp-connector',
connector: 'mcp-postgresql',
action: 'query',
config: {
query: "SELECT * FROM feedback WHERE created_at > NOW() - INTERVAL '7 days'",
},
},
{
id: 'analyze-sentiment',
type: 'ai-processor',
model: 'gpt-4',
prompt: 'Analyze sentiment of each feedback: {{feedback}}',
},
{
id: 'post-results',
type: 'mcp-connector',
connector: 'mcp-slack',
action: 'send_message',
config: {
channel: '#customer-insights',
message: 'Weekly sentiment analysis: {{results}}',
},
},
],
};
Advanced Multi-Connector Workflow
graph LR
A[GitHub Issues] -->|MCP| B[Filter New Issues]
B --> C[AI Analysis]
C --> D{Priority?}
D -->|High| E[Create Jira Ticket]
D -->|Low| F[Add to Backlog]
E -->|MCP| G[Notify Slack]
F -->|MCP| H[Update Spreadsheet]
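Expressed in the same workflow-definition style as the basic example above, this pipeline could look roughly like the sketch below. The condition node type and the mcp-jira / mcp-google-sheets connector ids are assumptions for illustration, not documented connectors.
// Hedged sketch only: the branching syntax and the Jira/Sheets connector ids are assumed.
const triageWorkflow = {
  name: 'GitHub Issue Triage',
  nodes: [
    { id: 'fetch-issues', type: 'mcp-connector', connector: 'mcp-github', action: 'get_issues' },
    { id: 'classify', type: 'ai-processor', model: 'gpt-4', prompt: 'Assign a priority to each issue: {{issues}}' },
    {
      id: 'route',
      type: 'condition', // assumed node type for branching on the AI output
      branches: {
        high: ['create-ticket', 'notify-slack'],
        low: ['update-backlog'],
      },
    },
    { id: 'create-ticket', type: 'mcp-connector', connector: 'mcp-jira', action: 'create_issue' },
    { id: 'notify-slack', type: 'mcp-connector', connector: 'mcp-slack', action: 'send_message' },
    { id: 'update-backlog', type: 'mcp-connector', connector: 'mcp-google-sheets', action: 'append_row' },
  ],
};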
Implementing Custom Connectors
Connector Structure
// custom-connector.ts
import { MCPConnector, ConnectorAction } from '@greenmonkey/mcp-sdk';
export class CustomAPIConnector implements MCPConnector {
id = 'custom-api';
name = 'My Custom API';
category = 'apis';
// description and auth are omitted here for brevity
configSchema = {
type: 'object',
properties: {
apiKey: { type: 'string', encrypted: true },
baseUrl: { type: 'string' },
},
required: ['apiKey', 'baseUrl'],
};
actions: ConnectorAction[] = [
{
name: 'fetch_data',
description: 'Fetch data from API',
inputSchema: {
type: 'object',
properties: {
endpoint: { type: 'string' },
params: { type: 'object' },
},
},
outputSchema: {
type: 'object',
},
},
];
async execute(action: string, input: any, config: any) {
switch (action) {
case 'fetch_data':
return this.fetchData(input, config);
default:
throw new Error(`Unknown action: ${action}`);
}
}
private async fetchData(input: any, config: any) {
const response = await fetch(`${config.baseUrl}${input.endpoint}`, {
headers: {
Authorization: `Bearer ${config.apiKey}`,
},
});
if (!response.ok) {
throw new Error(`Request to ${input.endpoint} failed with status ${response.status}`);
}
return response.json();
}
}
Connector Registration
// Register your connector
import { MCPRegistry } from '@greenmonkey/mcp-sdk';
import { CustomAPIConnector } from './custom-connector';
MCPRegistry.register(new CustomAPIConnector());
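Once registered, the connector can be referenced from workflow nodes by its id, using the same node shape as the workflow examples above (the endpoint and params values are purely illustrative):
// Example node referencing the custom connector registered above.
const customNode = {
  id: 'fetch-custom-data',
  type: 'mcp-connector',
  connector: 'custom-api', // matches CustomAPIConnector.id
  action: 'fetch_data',
  config: { endpoint: '/v1/items', params: { limit: 50 } },
};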
Security Considerations
// Implement security checks (the helper methods called below are assumed to exist on the connector)
class SecureConnector implements MCPConnector {
async execute(action: string, input: any, config: any) {
// Rate limiting
await this.checkRateLimit(action);
// Input validation
this.validateInput(action, input);
// Permission check
await this.checkPermissions(action);
// Audit logging
await this.logAction(action, input);
// Execute in sandbox
return this.sandboxExecute(action, input, config);
}
}
MCP Best Practices
Configuration Management
# mcp-config.yaml
connectors:
- id: prod-database
type: mcp-postgresql
config:
host: ${DB_HOST}
port: ${DB_PORT}
database: ${DB_NAME}
username: ${DB_USER}
password: ${DB_PASSWORD}
permissions:
- read_only: true
- allowed_tables: ['users', 'orders']
- id: slack-notifications
type: mcp-slack
config:
bot_token: ${SLACK_BOT_TOKEN}
permissions:
- allowed_channels: ['#alerts', '#reports']
- max_messages_per_hour: 10
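The ${VAR} placeholders above are resolved from environment variables when the configuration is loaded. A minimal sketch of that substitution step, assuming the YAML file has already been read into a string (the helper name is hypothetical):
// Hypothetical helper: replaces ${VAR} placeholders with values from process.env
// before the YAML is parsed. Unset variables are left untouched here; a real
// loader might throw instead.
function substituteEnv(rawConfig: string): string {
  return rawConfig.replace(/\$\{(\w+)\}/g, (placeholder, name) =>
    process.env[name] !== undefined ? process.env[name]! : placeholder,
  );
}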
Error Handling
// Robust error handling
async function executeMCPAction(connector, action, input, config) {
try {
const result = await connector.execute(action, input, config);
return { success: true, data: result };
} catch (error) {
// Log error details
console.error(`MCP Error: ${connector.id}/${action}`, error);
// Return user-friendly error
return {
success: false,
error: {
code: error.code || 'UNKNOWN_ERROR',
message: sanitizeErrorMessage(error.message),
connector: connector.id,
action: action,
},
};
}
}
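The sanitizeErrorMessage helper used above is not defined in this guide; one possible sketch (an assumption, shown only to illustrate hiding credentials and connection details from end users):
// Hypothetical sanitizer: redacts anything that looks like credentials or a
// connection string before the message reaches a user.
function sanitizeErrorMessage(message: string): string {
  return message
    .replace(/(password|token|api[_-]?key)=\S+/gi, '$1=***')
    .replace(/\w+:\/\/[^@\s]+@\S+/g, '[connection string redacted]')
    .slice(0, 500); // keep messages short and free of stack traces
}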
Performance Optimization
// Connection pooling (ConnectionPool stands in for a driver-specific pool, e.g. pg.Pool)
class PooledDatabaseConnector {
constructor() {
this.pool = new ConnectionPool({
max: 10,
idleTimeoutMillis: 30000,
});
}
async query(sql, params) {
const client = await this.pool.acquire();
try {
return await client.query(sql, params);
} finally {
this.pool.release(client);
}
}
}
// Caching frequently accessed data (LRUCache as in the lru-cache package; fetchFromSource is connector-specific)
class CachedConnector {
constructor() {
this.cache = new LRUCache({
max: 100,
ttl: 1000 * 60 * 5, // 5 minutes
});
}
async getData(key) {
const cached = this.cache.get(key);
if (cached) return cached;
const data = await this.fetchFromSource(key);
this.cache.set(key, data);
return data;
}
}
Common Use Cases
Data Analysis Pipeline
// Analyze sales data from multiple sources
const salesAnalysis = {
name: 'Multi-Source Sales Analysis',
schedule: '0 9 * * MON', // Every Monday at 9 AM
nodes: [
{
id: 'fetch-shopify',
connector: 'mcp-shopify',
action: 'get_orders',
config: { date_range: 'last_week' },
},
{
id: 'fetch-stripe',
connector: 'mcp-stripe',
action: 'list_payments',
config: { date_range: 'last_week' },
},
{
id: 'combine-data',
type: 'processor',
operation: 'merge',
key: 'customer_email',
},
{
id: 'analyze',
type: 'ai',
prompt: 'Analyze sales trends and identify top customers',
},
{
id: 'generate-report',
connector: 'mcp-google-docs',
action: 'create_document',
template: 'weekly_sales_report',
},
],
};
Customer Support Automation
// Auto-respond to support tickets
const supportAutomation = {
name: 'Support Ticket Handler',
trigger: 'webhook',
nodes: [
{
id: 'receive-ticket',
connector: 'mcp-zendesk',
action: 'get_ticket',
},
{
id: 'search-knowledge',
connector: 'mcp-elasticsearch',
action: 'search',
config: {
index: 'knowledge_base',
query: '{{ticket.subject}}',
},
},
{
id: 'generate-response',
type: 'ai',
model: 'gpt-4',
prompt: 'Draft a helpful response based on the knowledge base articles',
},
{
id: 'send-response',
connector: 'mcp-zendesk',
action: 'reply_ticket',
config: {
status: 'pending',
tags: ['auto-replied'],
},
},
],
};
Troubleshooting MCP
Common Issues
Issue | Cause | Solution |
---|---|---|
Connection timeout | Network issues | Check firewall, increase timeout |
Permission denied | Invalid credentials | Verify API keys/tokens |
Rate limit exceeded | Too many requests | Implement backoff, use caching |
Invalid configuration | Schema mismatch | Validate against configSchema |
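For the rate-limit row in the table above, a simple retry wrapper with exponential backoff is usually enough. A minimal sketch (the retry count, delays, and error-code detection are assumptions, not values prescribed by MCP):
// Hedged sketch: retries a connector call with exponential backoff when it
// fails with what looks like a rate-limit error.
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      const isRateLimit = error?.code === 'RATE_LIMIT_EXCEEDED' || error?.status === 429;
      if (!isRateLimit || attempt >= maxRetries) throw error;
      const delayMs = 2 ** attempt * 1000; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
Any connector call shown earlier can be wrapped the same way, e.g. withBackoff(() => connector.execute('query', input, config)).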
Debugging MCP Workflows
// Enable debug logging
const workflow = new MCPWorkflow({
debug: true,
logLevel: 'verbose',
onNodeExecute: (node, input, output) => {
console.log(`Node ${node.id}:`, { input, output });
},
onError: (node, error) => {
console.error(`Error in ${node.id}:`, error);
},
});
// Test individual connectors
async function testConnector(connectorId) {
const connector = MCPRegistry.get(connectorId);
// Validate configuration
const configValid = await connector.validateConfig();
console.log('Config valid:', configValid);
// Test connection
const connected = await connector.testConnection();
console.log('Connection test:', connected);
// Test each action (empty input/config is only a smoke test; most actions need real input)
for (const action of connector.actions) {
try {
const result = await connector.execute(action.name, {}, {});
console.log(`Action ${action.name}:`, result);
} catch (error) {
console.error(`Action ${action.name} failed:`, error);
}
}
}
Security Guidelines
Credential Management
// Never hardcode credentials
// ❌ Bad:
const config = {
apiKey: 'sk-1234567890abcdef'
};
// ✅ Good:
const config = {
apiKey: process.env.API_KEY
};
// Use encryption for stored credentials
import { encrypt, decrypt } from '@greenmonkey/crypto';
const encryptedKey = encrypt(apiKey, masterKey);
// Store encryptedKey in database
const decryptedKey = decrypt(encryptedKey, masterKey);
// Use decryptedKey for API calls
Access Control
# Define granular permissions
permissions:
database:
- action: query
allowed_tables: ['public_data']
denied_columns: ['ssn', 'credit_card']
api:
- action: read
rate_limit: 100/hour
- action: write
requires_approval: true
filesystem:
- action: read
allowed_paths: ['/data/public/**']
- action: write
denied: true
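Enforcement of these rules belongs in the security layer described earlier. A minimal sketch of a table-level check for the database permissions above; the permission shape mirrors the YAML, but the naive substring matching is for illustration only (a real implementation should parse the SQL):
// Hypothetical guard mirroring the YAML above: only allow-listed tables may be
// queried, and denied columns are rejected outright. Substring matching is a
// simplification for illustration; parse the SQL in real enforcement code.
interface DatabasePermission {
  action: string;
  allowed_tables: string[];
  denied_columns: string[];
}

function assertQueryAllowed(sql: string, permission: DatabasePermission): void {
  const lowered = sql.toLowerCase();
  if (!permission.allowed_tables.some((table) => lowered.includes(table))) {
    throw new Error('Query touches tables outside the allow list');
  }
  for (const column of permission.denied_columns) {
    if (lowered.includes(column)) {
      throw new Error(`Query references denied column: ${column}`);
    }
  }
}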