Building MCP Servers from Protobuf (Part 3) - Enhance AI Interactions with Proto Comments

September 30, 2025

Introduction

In this blog series, we’ll show you how to build an MCP (Model Context Protocol) server packed with useful tools. Rather than starting from scratch, we’ll take advantage of our existing Protocol Buffers and Google’s gRPC transcoding. By creating a custom protoc (Protocol Buffer compiler) plugin, we can automatically generate the MCP server. This unified approach lets us produce gRPC services, OpenAPI specifications, REST APIs, and the MCP server all from the same source.

This blog series contains four articles; this is Part 3.

What You'll Build

By the end of this tutorial, you'll have:

  • AI Agents that accurately understand your MCP tools through proto comments
  • Rich tool descriptions automatically generated from proto file comments
  • A testing framework to validate AI agent behavior improvements
  • Best practices for writing AI-friendly proto comments

All the code mentioned in this article can be found in the GitHub repository: zhangcz828/proto-to-mcp-tutorial

Prerequisites

Before we start, make sure you have completed Parts 1 and 2 and have:

  • The bookstore tutorial project from Parts 1 and 2
  • An OpenAI API key for testing AI Agent interactions

The Problem We're Solving

In Parts 1 and 2, we built a unified generation pipeline where one proto file produced gRPC services, REST endpoints, OpenAPI specs, and an MCP server. Everything worked technically; the tools existed and could be called. But when we began testing with AI Agents, we discovered a critical bottleneck: the AI frequently failed to choose the right tool or collect the required parameters correctly.

The issue wasn't our code; it was missing contextual documentation. In this article, we'll show how to transform the comments you already write in proto files into rich, structured MCP tool descriptions. This approach keeps descriptions in sync automatically and dramatically improves AI tool selection, prompting, and request construction.

How Proto Comments Become Tool Descriptions

Our protoc-gen-mcp plugin already generates MCP tools from proto files. Here is how it generates the tool descriptions:

  1. RPC method comments → Tool descriptions that guide AI agents
  2. Request message and field comments → Parameter documentation
  3. HTTP annotations (already used for REST/OpenAPI) → Method, path, and body mapping in the tool help

This means every time you regenerate your code, tool descriptions stay perfectly aligned with your API definitions.
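Under the hood, the plugin walks the descriptors that protoc hands it and reads the leading comments attached to each RPC method and request field. The Go snippet below is a minimal sketch of that comment-to-description mapping (items 1 and 2 above) using the protogen package; the helper name and output format are illustrative and not the actual protoc-gen-mcp code, which also folds in the HTTP annotations.

package main

import (
  "fmt"
  "os"
  "strings"

  "google.golang.org/protobuf/compiler/protogen"
)

// toolDescription assembles a tool description from the leading comment on an
// RPC method plus the leading comments on its request message fields.
func toolDescription(m *protogen.Method) string {
  var b strings.Builder

  // 1. RPC method comment -> tool description.
  b.WriteString(strings.TrimSpace(string(m.Comments.Leading)))

  // 2. Request message field comments -> parameter documentation.
  for _, f := range m.Input.Fields {
    comment := strings.TrimSpace(string(f.Comments.Leading))
    if comment == "" {
      continue
    }
    fmt.Fprintf(&b, "\n- %s: %s", f.Desc.JSONName(), comment)
  }
  return b.String()
}

func main() {
  // Standard protoc plugin entry point; this sketch only logs each
  // description to stderr so the mapping is easy to inspect.
  protogen.Options{}.Run(func(gen *protogen.Plugin) error {
    for _, file := range gen.Files {
      if !file.Generate {
        continue
      }
      for _, svc := range file.Services {
        for _, m := range svc.Methods {
          fmt.Fprintf(os.Stderr, "tool %s:\n%s\n\n", m.GoName, toolDescription(m))
        }
      }
    }
    return nil
  })
}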

Set Up an AI Agent for Testing

We'll use a minimal LangGraph-based agent to illustrate before/after behavior. 

Place your OpenAI (or compatible) key in a .env file:

OPENAI_API_KEY=sk-xxx

Install Dependencies

uv pip install langchain_mcp_adapters
uv pip install langgraph
uv pip install langchain_openai
uv pip install langchain

Or consolidate in requirements.txt and run:

uv pip install -r requirements.txt
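For reference, a requirements.txt covering those packages (plus python-dotenv, which the agent script below needs for load_dotenv) could look like this:

langchain
langchain_mcp_adapters
langchain_openai
langgraph
python-dotenv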

Create agent/langGraph.py:

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import os, asyncio

load_dotenv()

model = ChatOpenAI(
 model="gpt-4o",
 api_key=os.getenv("OPENAI_API_KEY"),
)

client = MultiServerMCPClient(
 {
   "bookstore": {
     "command": "uv",
     "args": ["--directory", "./generated/mcp", "run", "mcp_server.py"],
     "transport": "stdio",
   }
 }
)

async def main():
 tools = await client.get_tools()
 
 agent = create_react_agent(model=model, tools=tools)
 
 print("Agent is ready. Type 'exit' or 'quit' to end the session.")
 
 while True:
   try:
     user_input = input("You: ")
     
     if user_input.lower() in ["exit", "quit"]:
       break
       
     response = await agent.ainvoke({"messages": [{"role": "user", "content": user_input}]})
     
     print("Output: ", response_stream["messages"][-1].content)
   except Exception as e:
     print(f"An error occurred: {e}")
     
 print("\nExiting agent.")
 
if __name__ == "__main__":
 asyncio.run(main())

Run:

python3 agent/langGraph.py


Then ask: create a new book.

Baseline: Simple Proto Comments

Here is the proto file with minimal comments:

syntax = "proto3";
package bookstore.v1;
import "google/api/annotations.proto";
import "mcp/protobuf/annotations.proto";
option go_package = "generated/go/bookstore/v1";

service BookstoreService {
 // Get a book by ID
 rpc GetBook(GetBookRequest) returns (Book) {
   option (google.api.http) = {
     get: "/v1/books/{book_id}"
   };
   option (mcp.v1.tool) = {
     enabled: true
   };
 }
 
 // Create a new book in the system.
 rpc CreateBook(CreateBookRequest) returns (Book) {
   option (google.api.http) = {
     post: "/v1/books"
     body: "*"
   };
   option (mcp.v1.tool) = {
     enabled: true
   };
 }
}
message Book {
 string book_id = 1;
 string title = 2;
 string author = 3;
 int32 pages = 4;
}
message GetBookRequest {
 // The ID of the book to retrieve
 string book_id = 1;
}
message CreateBookRequest {
 // The book object to create.
 Book book = 1;
}

Run the agent and observe:

✗ python agent/langGraph.py
[09/30/25 14:40:34] INFO     Processing request of type ListToolsRequest                                                                                                                     server.py:623
Agent is ready. Type 'exit' or 'quit' to end the session.
You: create a new book
[09/30/25 14:40:45] INFO     Processing request of type CallToolRequest                                                                                                                      server.py:623
Output: It looks like some mandatory details for creating a new book are missing. Could you please provide more information about the book, such as the title, author, genre, or any other relevant details?

From the code we know that title, author, and pages are required, but the LLM lacks that guidance and produces generic prompts. If we provide only the title and author:

You: the author is "Charlie" and the title is "Test book"
[09/30/25 14:45:34] INFO     Processing request of type CallToolRequest                                                                                                                      server.py:623
Output: It seems that I made an error when attempting to create a new book. However, I don't have the necessary details to proceed. Could you please provide a description for the book you want to create, so I can assist you properly?

The agent detects a failure but cannot say which field is missing, because the tool definition didn't expose those constraints.

Improved: Rich Proto Comments

Update the RPC comment to provide explicit instructions:

 // Create a new book in the system.
 //
 // INSTRUCTIONS:
 //   1. For each required field:
 //      - If the user has not provided a value, prompt the user to supply it (otherwise the request will fail).
 //   2. For optional fields:
 //      - If not set by the user, do not set the field in the request; omit it entirely.
 //
 // Example payload for creating a book:
 // {
 //   "book": {
 //     "bookId": "string", // optional
 //     "title": "string", // required
 //     "author": "string", // required
 //     "pages": int // required
 //   }
 // }
 rpc CreateBook(CreateBookRequest) returns (Book) {
   option (google.api.http) = {
     post: "/v1/books"
     body: "*"
   };
   option (mcp.v1.tool) = {
     enabled: true
   };
 }

Regenerate:

go build -o protoc-gen-mcp plugins/protoc-gen-mcp/main.go
./generate.sh
python3 agent/langGraph.py

Query again with create a new book:

✗ python3 agent/langGraph.py
[09/30/25 14:50:07] INFO     Processing request of type ListToolsRequest                                                                                                                     server.py:623
Agent is ready. Type 'exit' or 'quit' to end the session.
You: create a new book
Output: Before we create a new book, I'll need some details from you. Please provide the following information:
1. Title of the book (required)
2. Author of the book (required)
3. Number of pages in the book (required)
Feel free to provide any optional details as well if you have them.

Now the agent proactively asks for the required fields up front, demonstrating how descriptive proto comments become actionable tool guidance.

If we give partial input:

You: the author is "Charlie" and the title is "Test book"
Output: Could you please provide the number of pages for the book "Test book" by Charlie? This information is required to create the book.

The agent knows that one required field is still missing and prompts for it.

Best Practices for Writing Proto Comments

When writing proto comments that will become AI tool descriptions:

  • Lead with an imperative verb: "Create a new book", "Delete an invoice", "Update user profile"
  • Separate summary from instructions: Use a blank line between the one-line summary and detailed guidance
  • Mark required vs optional fields: Explicitly state which parameters are required and which are optional
  • Provide example payloads: When nested objects are present, include a compact JSON example
  • Include constraints: Document field formats, ranges, or validation rules
  • Describe outputs: Explain what the tool returns and common failure modes  

Conclusion

You now have a complete pipeline where proto comments become intelligent AI tool descriptions. The same proto file now contains everything needed to generate gRPC services, REST endpoints, OpenAPI specs, and MCP tools with descriptions that make AI Agents significantly more effective.

Next Steps

Now that our AI agents can accurately interpret and use MCP tools via proto comments, Part 4 will share operational lessons learned from running proto-generated MCP tools in practice.
 
