Pedro Rodrigues, AI Tools Engineer at Supabase, recently spoke at AI Engineer Europe on the topic “Bridging the Context Gap by Combining Skills and MCP.” His talk focused on the important role of providing AI agents with the right guidance and context to perform their tasks effectively.
Supabase’s Pedro Rodrigues on AI agents and context — from an AI engineer
Visual TL;DR: the AI agent guidance gap is addressed by Supabase Skills, which work in conjunction with the MCP framework to enable effective AI agents. The "Teach AI Supabase" skill examples and the experimental Docs over SSH approach both feed into making agents effective; effective agents lead to "get started," and skill performance is measured through evaluation.
AI agent guidance gap: Agents are smart but need proper guidance to perform tasks
Supabase Skills: A folder containing instructions, scripts, and resources for the agent to load
MCP Framework: A framework that improves the context and performance of AI agents
Teach AI Supabase: Create a Supabase skill to teach agents about your product
Documentation over SSH: An experimental approach to providing context to AI agents
Effective AI agents: Close the context gap to improve agent performance
Assess skill performance: Measure how well an AI agent performs in a skill
Introduction: Principles for building and implementing AI agent skills
Understanding the “skills issue”
Rodrigues began by highlighting a common challenge in developing AI agents: agents are smart, but they need proper "guidance." This guidance is delivered in the form of "skills": essentially, a folder containing instructions, scripts, and resources that the agent can discover and dynamically load to complete its tasks. These skills are designed to address specific problems such as security pitfalls, outdated knowledge, and missing workflows.
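The "folder the agent can discover and dynamically load" idea can be sketched as follows. Note that the `skills/` layout, the `SKILL.md` filename, and the two-line name/description header are assumptions for illustration, not Supabase's exact on-disk format:

```python
from pathlib import Path

def discover_skills(root: Path) -> dict[str, str]:
    """Scan a skills directory and return {skill name: description}.

    Assumes each skill is a folder containing a SKILL.md whose first
    two lines are 'name:' and 'description:' headers (hypothetical format).
    Only the lightweight metadata is read at discovery time.
    """
    skills = {}
    for skill_file in root.glob("*/SKILL.md"):
        lines = skill_file.read_text().splitlines()
        meta = dict(line.split(":", 1) for line in lines[:2])
        skills[meta["name"].strip()] = meta["description"].strip()
    return skills

def load_skill(root: Path, name: str) -> str:
    """Load the full instructions for one skill only when it is needed."""
    return (root / name / "SKILL.md").read_text()
```

The two-phase shape is the point: discovery stays cheap (names and descriptions only), and the full instructions, scripts, and reference files enter the agent's context only when the task actually calls for that skill.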
He explained that Supabase is actively building these skills for its products. This involves creating a Supabase skill, in other words, teaching the AI agent about Supabase products. Rodrigues noted that this effort turned out to be more involved than writing his master's thesis, underscoring how challenging it is to create comprehensive, effective documentation for AI agents.
Principles for building effective AI agent skills
Rodrigues outlined three key principles for building successful product skills for AI agents.
Explain intent, not implementation: instead of telling the agent how to do something, it is more effective to tell it what it needs to accomplish and where it can get the information to do so. This means providing clear instructions and pointing to the appropriate resources. For example, instead of spelling out a specific search query, guide the agent to search the Supabase documentation.
Inline what can't be skipped: include critical information directly (inline) in the skill file when it is essential and unlikely to change. Reference files are fine for optional material, and agents should be able to skip them when they are not needed. Security rules, by contrast, are not optional and must be included inline.
State your opinion: Rodrigues emphasized that product teams know their products best and should encode the workflows that are appropriate for specific use cases. This means deliberately deciding how the agent interacts with the data, such as using direct SQL queries for specific tasks or giving advice before making changes.
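Put together, the three principles might look like this inside a skill file. The contents below are illustrative, not Supabase's actual skill text:

```markdown
# Skill: Supabase row-level security

<!-- Inline what can't be skipped: security rules are critical and stable -->
Always enable row-level security (RLS) on tables exposed to clients.
Never disable RLS to "fix" a permissions error.

<!-- Explain intent, not implementation: state the goal and where to look -->
When the user asks about access policies, look up the current syntax in the
Supabase RLS documentation rather than relying on memorized examples.

<!-- State your opinion: an opinionated workflow the product team stands behind -->
Before altering a policy, show the user the SQL you intend to run and
wait for confirmation.

Optional background reading (skip unless needed): references/policies.md
```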
Supabase documentation over SSH: an experimental approach
The presentation also touched on the experimental Supabase Docs over SSH feature. This gives agents a more intuitive way to query the docs through a familiar bash interface. The idea is to let the agent work with the documentation as if it were a local file system, improving its ability to retrieve and process Supabase-related information. Rodrigues said the feature is still experimental and that the team is actively seeking feedback.
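The appeal of the approach is that agents already know how to navigate file trees and grep through them. A rough local stand-in for the idea (no real SSH endpoint is assumed here; a local directory substitutes for the remote docs tree):

```python
from pathlib import Path

def grep_docs(root: Path, needle: str) -> list[str]:
    """Search every markdown doc under root, like `grep -ril` over SSH.

    Mimics the file-system-style access the experimental feature gives
    agents, using a local directory as a stand-in for the remote docs.
    Returns paths relative to the docs root, sorted for stable output.
    """
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*.md")
        if needle.lower() in p.read_text().lower()
    )
```

An agent that can compose operations like this one (list, search, read a single file) can narrow down to the relevant page before loading it into context, instead of ingesting the whole documentation set.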
Evaluation of skill performance
To measure the effectiveness of the skills, Supabase ran evaluations across a range of setups: six Supabase-specific scenarios, four agent-only scenarios, and three test conditions. The evaluation compared baseline performance, an "MCP only" condition, and an "MCP + skills" condition. The results showed that skills significantly improved agent performance across a variety of models and tasks, particularly in areas such as code generation and understanding complex data schemas.
Rodrigues emphasized that the data clearly shows that skills, when implemented properly, are key to bridging context gaps and enabling AI agents to perform tasks more accurately and efficiently. He encouraged the audience to think about how they could apply these principles to build their own product skills.
Getting started with Supabase Skills
For those interested in learning more about Supabase agent skills or getting started, Rodrigues pointed to the Supabase blog and GitHub repository. He also mentioned an ongoing giveaway in which attendees could win a Mac Mini by scanning a QR code and signing up for Supabase. The presentation ended with a Q&A session in which Rodrigues answered questions about the practical implementation and distribution of these skills within organizations.