On May 11, Solution Architects Benoit Kohler and Phillip Simonds held a livestream to walk through Infrahub Skills, an open-source AI skills package covering the full Infrahub development lifecycle. In this livestream, Ben and Phillip provided an overview and demonstration of Infrahub Skills. Watch the livestream here, and access the full transcript below.
Introductions
Hi everyone. I’m here with Phillip to speak about the skills and what they do with Infrahub. For those who don’t know us: I’m Benoit, and I’ve been working for OpsMill since 2020. I’ve been doing automation since 2013, I contribute to the Infrahub schema, and I am the Infrahub MCP author, based in Amsterdam.
I’m Phillip, and I’m a solutions architect with OpsMill. I’ve been working in computer networks and building them since 2009, automating them since around 2017, and doing it full time since around 2019. I helped build this demo and I’m based in Denver, Colorado.
For those who didn’t follow the recent news, we announced a couple of days ago that we finished our Series A. If you didn’t see it, you can check the posts on LinkedIn.
What Are Skills?
The plan today, before doing the live demo, is to go through a couple of slides to explain what skills currently are in the context of LLMs, what they are used for, and how they work with everything else. Then Phillip will do the demo, and we’ll open it up for Q&A.
Anthropic added skills around October of last year. Before that, you were mainly doing prompt engineering every time. With skills, you can keep any conventions and rules you want in dedicated Markdown files instead of carrying them directly as prompts. You can also bundle extra files, such as Python validation scripts, which provide additional context.
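To make the format concrete: a skill is a directory containing a SKILL.md file with a short YAML frontmatter (the `name` and `description` fields follow Anthropic’s published skill convention; the body content below is a hypothetical illustration, not one of the actual Infrahub skills):

```markdown
---
name: example-schema-conventions
description: Conventions for writing Infrahub schema files. Use when creating or editing schema YAML.
---

# Schema conventions

- Namespace every node (e.g. `Network`) and give every attribute an explicit kind.
- Run the bundled validation script against schema files before applying them.

For the full checklist, see `reference.md` in this directory.
```

Because only the frontmatter is loaded up front, the agent can scan many skills cheaply and pull in the full body only when a prompt actually calls for it.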
Why does that matter? Before skills, if you did a lot of prompt engineering, you’d end up with a massive context window and a lot of copy-pasting, which was really painful if you wanted to reuse the same setup. Additionally, if you wanted to share prompts across a team, you’d have to pass them around manually. With skills, they are loaded on demand rather than staying in your context window all the time. They’re testable and shareable with team members, making them more broadly accessible.
Infrahub Skills Overview
We have built eight skills for Infrahub so far, with one or two more incoming. The idea behind the Infrahub skills is to manage everything present in Infrahub, from schema to objects, including menus, checks, and generators. We also have a few extra ones: one to help analyze data present in Infrahub by connecting via MCP (or the CLI if you don’t have MCP), and another for auditing your repository. If you work a lot with Infrahub, you know you need certain files and certain ways to declare transforms and generator definitions. That skill helps you confirm that everything in your repository is exactly what you want. Phillip, the floor is yours for the demo.
Demo
Awesome, onto the demo. Let me share my screen. Alright, can you see my screen? Great.
The first thing I’m going to do today is initiate a new Infrahub repository using the copier template. This is available at the GitHub OpsMill Infrahub template repo, which allows you to instantiate a new Infrahub repo pretty easily. I’ll call this “infrahub-skills-demo.” This uses the copier tool to initialize a new repo and walks us through a few prompts. It’s asking if I want starter configuration with objects enabled. I’m going to skip that because I want to build those objects from scratch using the Infrahub skills. I will, however, enable support for Infrahub object files, which will create the repo structure for us, including generators, transforms, scripts directory, and so on.
Now you can see we have the infrahub-skills-demo folder with Claude settings JSON, and the rest is all set up for Infrahub. The first thing I’m going to do is install the skills using NPX as the skills package manager. There are a few ways to install it as described in the repo, but this is the simplest. When I run this command, it pops up and asks what skills I want to select. I’ll grab all of them and hit Enter. Then you specify the agent you want to work with. I use Claude Code, so I’ll keep that specified. I’ll set the installation scope to project level, choose symlink over copying, and proceed with installation.
You can see the skills are now installed in my repo. Under Claude, there’s a skills repository with all the different skills, and they’re symlinked from the agents file. This is nice because if you have a few different agents running, you can have the skills defined in a common place and then symlink out to them.
Generating Schema
The first thing I’m going to do with the skills is build some schema. I’ll invoke Claude with the `--dangerously-skip-permissions` flag just so we can iterate quickly without having to confirm every action. Obviously, don’t do that on a production instance, but we’re working on a test instance here.
I’ll ask Claude what skills we have. You can see it’s locally finding all the skills we just installed for Infrahub, plus some others I’ve installed globally. Now I’m going to ask it to model a small network. I need a device with a hostname and role, and interfaces that connect to each other via circuits. I’m asking it to stand up Infrahub locally, generate the schema in the schemas folder, and then use `infrahubctl` to apply that. It will invoke the skill on the backend to understand how to generate the schema correctly and how to apply it.
It has generated a task list: starting Infrahub via Docker Compose, generating schema files, and validating them with schema check. You can see it’s successfully loaded the Infrahub managing schemas skill into its context window. In the schemas folder, we now have device, interface, circuit, and provider, totaling around 160 lines of schema. Infrahub is running and the schema is written. Let me load it and verify in the UI. After logging in, we can navigate to object management and schemas, where we now have a network tab with the four schemas defined.
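A minimal sketch of what one of those generated schema files might look like, using Infrahub’s schema YAML format (node names, attribute kinds, and values here are illustrative, not the demo’s exact output):

```yaml
# Illustrative fragment of an Infrahub schema file (not the demo's exact output)
version: "1.0"
nodes:
  - name: Device
    namespace: Network
    label: Device
    attributes:
      - name: hostname
        kind: Text
        unique: true
      - name: role
        kind: Text
    relationships:
      - name: interfaces
        peer: NetworkInterface
        cardinality: many
        kind: Component
  - name: Interface
    namespace: Network
    label: Interface
    attributes:
      - name: name
        kind: Text
      - name: mtu
        kind: Number
        default_value: 1500
```

The skill encodes conventions like this (namespacing, attribute kinds, relationship cardinalities) so the agent emits schema that passes validation on the first try far more often.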
Generating Menus and Objects
I’m also going to have it generate some menu files so we can navigate in the left-hand sidebar. I’ll ask it to generate a network tab and nest all those schemas under it. You can see the Infrahub managing menu skill has been loaded as well. Claude is smart enough to understand which skill to invoke based on your prompt, and it automatically loads and uses it. It’s also going into the schema and setting the `include_in_menu` flag to false so that it can load the menu it’s defining, since menus are a first-class primitive in Infrahub. After reloading the page, we now have a network tab on the side with all our schemas, and we can verify them under object management.
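For reference, a menu definition file looks roughly like the following. This is a hedged sketch: the `apiVersion`/`kind`/`spec` layout follows the Infrahub file convention, but the exact field names may differ between Infrahub versions, so check the documentation for yours:

```yaml
# Illustrative Infrahub menu file (field names approximate)
apiVersion: infrahub.app/v1
kind: Menu
spec:
  data:
    - namespace: Network
      name: NetworkMenu
      label: Network
      children:
        data:
          - namespace: Network
            name: DeviceMenu
            label: Devices
            kind: NetworkDevice
```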
Next, I’ll load some objects. I’ll have it create two routers, one in Denver and one in Amsterdam, each with one interface, connected back-to-back via a circuit on each end in a specific subnet, with MTU set to 9000 on both sides. I’ll tell Claude to use object files and then use `infrahubctl` to load them. You can see it has loaded the “Infrahub managing objects” skill, which it didn’t need before. It only loads a skill when the prompt actually calls for it. After generating and loading the objects, we can verify in the UI: Denver router and Amsterdam router both have their interfaces, correct MTU of 9000, and correct IP addresses. The circuit has two endpoints, one on each router.
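An object file in this style might look like the sketch below (hostnames and values are invented for illustration; the `apiVersion`/`kind`/`spec` layout follows Infrahub’s object-file convention, but verify field names against your version’s docs):

```yaml
# Illustrative Infrahub object file (values invented for this sketch)
---
apiVersion: infrahub.app/v1
kind: Object
spec:
  kind: NetworkDevice
  data:
    - hostname: denver-rtr-01
      role: edge
      interfaces:
        - name: Ethernet1
          mtu: 9000
```

Loading data this way keeps the seed objects in the Git repo alongside the schema, so the whole environment can be rebuilt from scratch.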
Jinja2 Transform and Artifacts
The next thing I want to do is have Claude build a Jinja2 transform. In Infrahub, we have the concept of artifacts. We can use data in Infrahub to dynamically render a configuration for a device, and then an artifacts tab will appear showing that configuration. I’ll prompt it to write a Jinja2 transform that generates per-interface config for a router, including hostname, interface name, description, MTU, and IP address for each interface.
From an Infrahub perspective, all of this is stitched together in the Infrahub YAML file. Claude has updated that file with a GraphQL query to grab data from Infrahub, and a template path pointing to the Jinja2 template. It uses that query as input to the template and then renders it per device. You can see the generated GraphQL query for router config and the Jinja2 transform itself. The output for Denver router shows the hostname, interface, MTU, and IP address, with a for loop to iterate over multiple interfaces.
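As a sketch of how those pieces connect (all names here are illustrative; the `jinja2_transforms` section follows the .infrahub.yml convention, but check the documentation for your Infrahub version), the repo-level file ties a query to a template:

```yaml
# .infrahub.yml fragment (illustrative names)
jinja2_transforms:
  - name: router_config
    query: router_config_query      # GraphQL query stored in the repo
    template_path: templates/router_config.j2
```

And the template itself iterates over the query result; the access paths below assume Infrahub’s edges/node GraphQL response shape with `.value` on attributes:

```jinja
hostname {{ data.NetworkDevice.edges[0].node.hostname.value }}
{% for intf in data.NetworkDevice.edges[0].node.interfaces.edges %}
interface {{ intf.node.name.value }}
  mtu {{ intf.node.mtu.value }}
{% endfor %}
```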
Now, the transform needs to be in a Git repository integrated with Infrahub for it to show up as an artifact in the UI. So I’m going to have Claude spin up a Gitea instance, deploy the repo to it, and configure the integration with Infrahub. When I ask Claude why artifacts aren’t showing yet, it explains exactly what’s going on using the skills: the repo isn’t connected to the server, network device doesn’t inherit from core artifact target, and so on. I’ll give it a prompt to spin up a Git integration and fix those things.
It’s spinning up the Gitea instance locally, creating the artifact definition, reloading the schema, initializing a Git repo, committing, pushing to Gitea, and syncing with Infrahub using `infrahubctl`. In the UI, we now have a Git repository integration syncing with the Gitea instance on port 3000, and a transformation registered as “router config.” Coming back to network devices, we now have an artifacts tab showing the rendered configuration for the Amsterdam router and the Denver router. The pipeline is producing artifacts end to end.
MCP Integration and Analysis
The next thing I’m going to do is introduce an error. I’ll go into the Amsterdam router’s Ethernet interface and change the IP address to 10.0.0.5 instead of 10.0.0.1, so that the two ends of the circuit are no longer in the same subnet. Once the MCP server is set up, I’ll use the analyst skill to query the MCP server and analyze the data inside Infrahub to find circuits with misconfigured IP addresses.
After setting up the MCP JSON configuration and relaunching Claude, I pass in the prompt: “Can you find any back-to-back interface pairs where the two ends are not in the same subnet?” It uses the MCP server and the Infrahub analyst skill to do this. Interestingly, it issued a GraphQL query through MCP to find the answer. What’s interesting here is that we didn’t tell it what an interface or a device is called in our schema. It fetches the schema from Infrahub, understands which model is closest to the question, and then fires the query itself. That’s the power of the skills in combination with MCP.
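The underlying check the agent performed can be expressed in a few lines of Python. This is a sketch of the logic, not the actual GraphQL query the agent ran, and the /31 prefix length is an assumption for illustration:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str) -> bool:
    """Return True when two interface addresses (CIDR notation) share a network."""
    return ipaddress.ip_interface(ip_a).network == ipaddress.ip_interface(ip_b).network

# Matching circuit ends (prefix lengths here are illustrative)
print(same_subnet("10.0.0.0/31", "10.0.0.1/31"))  # True
# After the deliberate change to 10.0.0.5, the two ends no longer line up
print(same_subnet("10.0.0.0/31", "10.0.0.5/31"))  # False
```

The agent effectively did the same thing one layer up: fetch circuit endpoints via GraphQL, then compare the networks the two interface addresses belong to.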
Q&A
What’s the difference between a skill and an MCP server? The MCP is a connection to your service, but it doesn’t tell the LLM what to do. The skill provides that guidance. The MCP provides the data connection, and the skill provides the methodology. They work side by side and aren’t in competition.
How do I write my own skill? You can dog-food it. I go and talk to Claude and ask it to write a skill for me, then test and iterate. You can write your own local skills that are private or Infrahub-related. You can also contribute to our GitHub repo at github.com/OpsMill/infrahub-skills. Please feel free to open PRs or issues for skills you’d like to see. An easy way to write and improve skills is to use the Skill Creator skill provided by Anthropic. It’s always up to date on best practices. We also copied Vercel’s pattern for how skills are loaded, where metadata is loaded initially and more specific content is pulled in on demand as needed.
Does it work with other LLMs? Yes, but with some caveats. Skills in general can work with effectively any LLM. What can be iffy is the invocation, meaning how the LLM decides whether to invoke a specific skill. With Claude, that works quite well with the skills we’ve developed. With other LLMs like Codex or GitHub Copilot, there isn’t quite an industry-standard behavior yet, and you may need to explicitly call the skill to ensure it’s invoked. The NPX install method also installs a “find skills” skill that helps corral LLMs to use the skills more reliably.
What about prompt injection and safety? All the different skills we have are written locally, and you still interact through the Infrahub API using your own token, so there’s no way to inject anything without authentication. The MCP server also has its own authentication. There’s no concern about a third party injecting data into your skill usage.
What’s the branching story with MCP? One of the beauties of Infrahub is its branching capability. When you make changes, whether manually, through skill development, or through the MCP server, you’re always making changes in branches that can be reviewed. So you don’t have to worry about affecting your production environment.
Wrap-Up
To find the skills and get started, visit the documentation at docs.infrahub.app/skills. You can also find the GitHub repository and install via the NPX command as Phillip showed, or find other installation methods in our documentation. The next step for you is to install it, try it, and please send us feedback. Thanks everyone for joining the livestream. You know where to find us: on our Discord server, via OpsMill.com. See you on the next one.