Danielle Jenkins 2025-03-06 22:49:18 -08:00
commit dcf78edfca
12 changed files with 1158 additions and 0 deletions

docs/development.md
# Development Guide
This guide provides information for developers who want to contribute to or modify the Rust Documentation MCP Server.
## Architecture Overview
The server consists of several key components:
1. **DocRouter** (`src/docs.rs`):
- Core implementation of the MCP Router trait
- Handles tool calls for documentation lookup
- Implements caching to avoid redundant API requests
2. **Transport Implementations**:
- STDIN/STDOUT server (`src/bin/doc_server.rs`)
- HTTP/SSE server (`src/bin/axum_docs.rs`)
3. **Utilities**:
- JSON-RPC frame codec for byte stream handling
## Adding New Features
### Adding a New Tool
To add a new tool to the documentation server:
1. Add the implementation function to the `DocRouter` struct
2. Add the tool definition to the `list_tools()` method
3. Add the tool handler in the `call_tool()` match statement
Example:
```rust
// 1. Add the implementation function
async fn get_crate_examples(&self, crate_name: String, limit: Option<u32>) -> Result<String, ToolError> {
    // Implementation details...
}

// 2. In list_tools() add:
Tool::new(
    "get_crate_examples".to_string(),
    "Get usage examples for a Rust crate".to_string(),
    json!({
        "type": "object",
        "properties": {
            "crate_name": {
                "type": "string",
                "description": "The name of the crate"
            },
            "limit": {
                "type": "integer",
                "description": "Maximum number of examples to return"
            }
        },
        "required": ["crate_name"]
    }),
),

// 3. In call_tool() match statement:
"get_crate_examples" => {
    let crate_name = arguments
        .get("crate_name")
        .and_then(|v| v.as_str())
        .ok_or_else(|| ToolError::InvalidParameters("crate_name is required".to_string()))?
        .to_string();
    let limit = arguments
        .get("limit")
        .and_then(|v| v.as_u64())
        .map(|v| v as u32);
    let examples = this.get_crate_examples(crate_name, limit).await?;
    Ok(vec![Content::text(examples)])
}
```
### Enhancing the Cache
The current cache implementation is basic. To enhance it:
1. Add TTL (Time-To-Live) for cache entries
2. Add cache size limits to prevent memory issues
3. Consider using a more sophisticated caching library
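As a rough sketch of the first two points, a TTL-aware, size-bounded cache could wrap a plain `HashMap`. The names here (`TtlCache` and its methods) are illustrative and not part of the current code:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// A minimal TTL cache: entries expire after `ttl`, and inserts
/// evict the oldest entry once `max_entries` is reached.
struct TtlCache {
    entries: HashMap<String, (Instant, String)>,
    ttl: Duration,
    max_entries: usize,
}

impl TtlCache {
    fn new(ttl: Duration, max_entries: usize) -> Self {
        Self { entries: HashMap::new(), ttl, max_entries }
    }

    fn get(&mut self, key: &str) -> Option<&String> {
        // Drop the entry if it has outlived its TTL.
        let expired = match self.entries.get(key) {
            Some((stored_at, _)) => stored_at.elapsed() > self.ttl,
            None => return None,
        };
        if expired {
            self.entries.remove(key);
            return None;
        }
        self.entries.get(key).map(|(_, v)| v)
    }

    fn insert(&mut self, key: String, value: String) {
        // Crude size bound: evict the oldest entry when full.
        if self.entries.len() >= self.max_entries {
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, v)| v.0)
                .map(|(k, _)| k.clone())
            {
                self.entries.remove(&oldest);
            }
        }
        self.entries.insert(key, (Instant::now(), value));
    }
}
```

A dedicated crate such as `moka` or `cached` handles eviction policies and concurrency more robustly than this hand-rolled version.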
## Testing
Add a test module with basic tests for the server:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_search_crates() {
        let router = DocRouter::new();
        let result = router.search_crates("tokio".to_string(), Some(2)).await;
        assert!(result.is_ok());
        let data = result.unwrap();
        assert!(data.contains("crates"));
    }

    #[tokio::test]
    async fn test_lookup_crate() {
        let router = DocRouter::new();
        let result = router.lookup_crate("serde".to_string(), None).await;
        assert!(result.is_ok());
        let data = result.unwrap();
        assert!(data.contains("serde"));
    }
}
```
## Deployment
For production deployment, consider:
1. Rate limiting to prevent abuse
2. Authentication for sensitive documentation
3. HTTPS for secure communication
4. Docker containerization for easier deployment
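As an illustration of the first point, a per-client token bucket is one simple rate-limiting strategy. This sketch is standalone and not wired into the server; in a real Axum deployment, middleware such as `tower`'s rate-limit layer would be the more idiomatic choice:

```rust
use std::time::Instant;

/// A token bucket: `capacity` requests allowed in a burst,
/// refilled at `refill_per_sec` tokens per second.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill: Instant::now() }
    }

    /// Returns true if the request is allowed, consuming one token.
    fn allow(&mut self) -> bool {
        // Refill tokens proportionally to the time since the last check.
        let elapsed = self.last_refill.elapsed().as_secs_f64();
        self.last_refill = Instant::now();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```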
Example Dockerfile:
```dockerfile
FROM rust:1.74-slim as builder
WORKDIR /usr/src/app
COPY . .
RUN cargo build --release
FROM debian:stable-slim
RUN apt-get update && apt-get install -y libssl-dev ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/src/app/target/release/axum-docs /usr/local/bin/
EXPOSE 8080
CMD ["axum-docs"]
```

docs/usage.md
# Rust Documentation Server Usage Guide
This guide explains how to use the Rust Documentation MCP Server with different types of clients.
## Client Integration
### Using with MCP-compatible LLMs
Any LLM client that follows the Model Context Protocol (MCP) can connect to this documentation server. The LLM will gain the ability to:
1. Look up documentation for any Rust crate
2. Search the crates.io registry for libraries
3. Get documentation for specific items within crates
### Command-Line Client
For testing purposes, you can use a simple command-line client like this one:
```rust
use mcp_client::{Client, transport::StdioTransport};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create a client using stdio transport
    let transport = StdioTransport::new();
    let mut client = Client::new(transport);

    // Connect to the server
    client.connect().await?;

    // Example: Looking up the 'tokio' crate
    let response = client.call_tool(
        "lookup_crate",
        serde_json::json!({
            "crate_name": "tokio"
        })
    ).await?;

    println!("Documentation response: {}", response[0].text());
    Ok(())
}
```
### Web Client
When using the Axum SSE mode, you can connect to the server using a simple web client:
```javascript
// Connect to the SSE endpoint
const eventSource = new EventSource('http://127.0.0.1:8080/sse');

// Get the session ID from the initial connection
let sessionId;
eventSource.addEventListener('endpoint', (event) => {
  sessionId = event.data.split('=')[1];
  console.log(`Connected with session ID: ${sessionId}`);
});

// Handle messages from the server
eventSource.addEventListener('message', (event) => {
  const data = JSON.parse(event.data);
  console.log('Received response:', data);
});

// Function to send a tool request
async function callTool(toolName, args) {
  const response = await fetch(`http://127.0.0.1:8080/sse?sessionId=${sessionId}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      method: 'call_tool',
      params: {
        name: toolName,
        arguments: args
      },
      id: 1,
    }),
  });
  return response.ok;
}

// Example: Search for async crates
callTool('search_crates', { query: 'async runtime', limit: 5 });
```
## Example Workflows
### Helping an LLM Understand a New Crate
1. LLM client connects to the documentation server
2. User asks a question involving an unfamiliar crate
3. LLM uses `lookup_crate` to get general documentation
4. LLM uses `lookup_item` to get specific details on functions/types
5. LLM can now provide an accurate response about the crate
### Helping Find the Right Library
1. User asks "What's a good crate for async HTTP requests?"
2. LLM uses `search_crates` with relevant keywords
3. LLM reviews the top results and their descriptions
4. LLM uses `lookup_crate` to get more details on promising options
5. LLM provides a recommendation with supporting information