
A Quick Introduction to Infrahub (FRNOG 42 Talk by Benoit Kohler, English Translation)


At FRNOG 42, I delivered a talk about the open source edition of Infrahub. The video below is a recording of my talk in its original French. Following the video is the English translation.


Hello everyone, I’m going to talk to you about Infrahub, the open source version created by OpsMill. First, for those who don’t know me, I’ve worked at Cloud Temple, Vente Privée, and now at OpsMill.

And why OpsMill? Essentially to build a tool I had needed for years when I was working with NetBox—mainly for the source of truth and more broadly infrastructure management.

OpsMill, by way of background, was founded two years ago by Raphael Maunier and Damien Garros. We raised a round two years ago with major backers such as Serena, Partech, and OVNI Capital. We already have customers and mostly work with large accounts, whether with the open source version or the Enterprise version. I won’t talk about that today because this isn’t a commercial tool pitch.

The problem with source of truth today

The first problem today with the source of truth in infrastructure—and in fact the Nokia presentation just before was an example—is that we don’t have the entire source of truth in one place. You may end up with data in a CMDB, in Git, in prebuilt tools like NetBox and Nautobot, but you don’t have all the truth in one place, which makes automation brittle or at least complex, since it’s not easy to use everything directly.

If you try to use a single source of truth, whatever the system, you get its associated downsides. If I use a CMDB, it’s not designed for automation; I may not even have an API. If I use Git, great, I can do more or less what I want, it’s flexible. The drawback is that there’s no API and no schema, so you can do a bit of anything, and that quickly becomes complex.

And finally, dedicated tools like NetBox and Nautobot: perfect, they’re made for infrastructure. The drawback is that they’re inflexible. You can’t easily extend them, or at least only through plugins. Those who have used NetBox or Nautobot in recent years know that every time you upgrade, plugins often stop working with the new version. They’re improving, and it’s not really their fault; it’s tied to the fixed data models.

Introducing Infrahub

Where does Infrahub fit in? The idea, of course, is to do all of this together. We’re going to unify the data, have custom models that you can extend as much as you want, and also keep the entire configuration part. Instead of “I have my source of truth here, and my configuration goes into Git,” we’ll also store it in Infrahub. It’s not mandatory, but you can store it there, which lets you version both the data and the outputs.

To throw in a buzzword, we do knowledge graphs, since we use a graph database underneath. We’re also already ready for AI, since that graph system enables a more relevant lifecycle for the data.

All the features present today in the Enterprise version are also present in the open source version. We don’t do a watered-down open source edition: we didn’t hold features back behind a paywall, and we didn’t strip anything out. If you use the Community version, all the features are there: pipelines where you can write custom checks, RBAC, and full idempotency.

The big differences between the two versions are essentially performance and some more advanced governance rules, much like GitLab’s model. If you want mandatory approvals in your CI, you need the Enterprise version and you’ll have to pay, because you normally don’t need that when you’re a three-person team.

Creating a model in Infrahub

To make this more concrete, rather than dumping features one by one and explaining what they are, the idea is that we’ll create a model. It’s a paper demo, because I don’t have a video or a live demo on Philippe’s laptop. We’ll extend a model directly at the schema level, import or edit data, create a branch, and I’ll show you a bit of the system. For the schema, I took the example of a service: an L3 VPN.

Typically, for my L3 VPN, I’ll need VRFs and I’ll need prefixes. The example you see on the left is a small YAML file that describes your schema. You can load that YAML directly into a running Infrahub without restarting anything, and it’s taken into account immediately: the UI changes, the APIs become available, and you can start consuming the data.
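To give a sense of what that schema file contains, here is a rough sketch, written in Python and dumped to YAML so it stays self-contained. The node, attribute, and relationship names are an approximation for this L3 VPN example, not the exact Infrahub schema reference.

```python
# Rough sketch of the kind of schema file the slide shows, built as a Python
# dict and dumped to YAML. Field names (nodes, attributes, relationships,
# kind, cardinality) approximate Infrahub's schema format; the peer kinds
# "InfraVRF" and "IpamPrefix" are illustrative, not canonical.
import yaml  # pip install pyyaml

l3vpn_schema = {
    "nodes": [
        {
            "name": "L3VPN",
            "namespace": "Service",  # hypothetical namespace for this example
            "attributes": [
                {"name": "name", "kind": "Text", "unique": True},
                {"name": "vpn_id", "kind": "Number"},
            ],
            "relationships": [
                # Each L3 VPN is built from one or more VRFs and prefixes.
                {"name": "vrfs", "peer": "InfraVRF", "cardinality": "many"},
                {"name": "prefixes", "peer": "IpamPrefix", "cardinality": "many"},
            ],
        }
    ]
}

# Write the YAML file you would then load into a running Infrahub instance
# (for example with the infrahubctl CLI); no server restart is needed.
with open("l3vpn_schema.yml", "w") as fh:
    yaml.safe_dump(l3vpn_schema, fh, sort_keys=False)
```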

Infrahub Generators and Resource Managers

Once that’s in place, you can use other built-in tools, specifically the Generator and the Resource Manager, which is on the next slide (maybe I should have swapped them). The Generator is ultimately a Python script that runs inside Infrahub. It lets you define your logic and your design, which can themselves be defined in other models.

For example: my VRF must have this type of ID, I want to use this IP source with that role—again based on my design—and you can store that in your source of truth in the same place, versioned all together.
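As a conceptual sketch of that kind of design logic, the function below derives a VRF name and route distinguisher from a VPN ID, the way a generator might. The function shape and the naming rules are invented for illustration; a real Infrahub generator is a Python class executed by the server against your own schema.

```python
# Conceptual sketch of the design logic a generator might encode for the
# L3 VPN service above. Standalone Python, not the actual SDK interface.
from dataclasses import dataclass


@dataclass
class VRFPlan:
    name: str
    route_distinguisher: str
    loopback_role: str


def plan_vrf(customer: str, vpn_id: int, base_asn: int = 65000) -> VRFPlan:
    """Apply the design rules: VRF name and RD are derived from the VPN ID,
    and the loopback must come from the IP pool with the 'loopback' role."""
    return VRFPlan(
        name=f"VRF-{customer.upper()}-{vpn_id:04d}",
        route_distinguisher=f"{base_asn}:{vpn_id}",
        loopback_role="loopback",
    )


if __name__ == "__main__":
    # A real generator would then create or update these objects in Infrahub.
    print(plan_vrf("acme", 42))
```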

For the resource management I just mentioned: today, if you take NetBox, you can ask it for the next available VLAN or the next available prefix in a range, but it stops there. If you ask, “Give me the next available AS in a range,” it can’t, because that isn’t one of its predefined tables. You don’t get automatic allocation via the UI, only via Python.

The idea here is to have a more extensive resource management system—interfaces, interface pools, rack pools, etc.—so you can ask “Give me the next available one,” the next rack, and so on.
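To make “give me the next available AS in a range” concrete, here is what that allocation amounts to, as a standalone Python illustration. In Infrahub the allocation is done server-side by a resource pool; this snippet only mimics the behavior.

```python
# Conceptual illustration of "give me the next available AS in a range".
# A resource pool in Infrahub handles this server-side; this function only
# shows what the request means.
def next_available_asn(pool_start: int, pool_end: int, allocated: set[int]) -> int:
    for asn in range(pool_start, pool_end + 1):
        if asn not in allocated:
            return asn
    raise RuntimeError("ASN pool exhausted")


if __name__ == "__main__":
    in_use = {65001, 65002, 65004}
    print(next_available_asn(65001, 65099, in_use))  # -> 65003
```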

Transforming data into configurations

Once you have all this and you’ve created your model, you’ll want to transform that data into a configuration. Our configurations are what we call artifacts.

Ultimately, an artifact, in the Git sense, is a file. It can be just about anything. For the transformation—whether Jinja or Python—you can customize it for your specific vendor. To change it, you don’t need to change your models. You just change your Jinja, the same way you do with Ansible, to get the new configuration.

For example here, I have an L3 VPN with the associated VRF, and depending on my transformation I can choose: this Jinja template is for Cisco, that one is for Arista. If you just switch the transformation, you get the vendor-specific configuration for your device.
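A minimal, self-contained illustration of that step, assuming invented template snippets and a local data dict (in Infrahub the data would come from a GraphQL query and the rendering runs server-side):

```python
# Same service data, two vendor templates: the data model never changes,
# only the Jinja transformation does.
from jinja2 import Template  # pip install jinja2

service = {"vrf": "VRF-ACME-0042", "rd": "65000:42", "interface": "Ethernet1", "vlan": 100}

cisco_tpl = Template(
    "vrf definition {{ vrf }}\n"
    " rd {{ rd }}\n"
    "interface {{ interface }}.{{ vlan }}\n"
    " vrf forwarding {{ vrf }}\n"
)

arista_tpl = Template(
    "vrf instance {{ vrf }}\n"
    "interface {{ interface }}.{{ vlan }}\n"
    " vrf {{ vrf }}\n"
)

# Pick the template based on the device's platform.
print(cisco_tpl.render(**service))
print(arista_tpl.render(**service))
```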

Branching, peer review, and checks

Of course, all of this is versioned. We assume you’ve made all these changes in a branch because you’re a good person and don’t do everything in main. You’ll create a branch, make these changes, and propose the change. It’s the equivalent of a Git merge request.

A colleague can review. You’ll see exactly what changed in the data as well as in the configurations, via the diffs. For example, if you add MTUs and so on, you’ll see it in both the data and the rendered configuration.

You can also add user-defined checks, and a check can be almost anything. For example: if you shut down this device, confirm that another device with the same role exists at the site, so you know things will go fine before the change is approved. Today, if the checks fail, you can’t merge that branch. And since a check can be anything, it can cover transits, devices, or descriptions, because you want to ensure descriptions follow a given pattern and avoid ending up with 20 different variants in production.
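As a sketch of that description check, the snippet below validates interface descriptions against an invented naming convention. In Infrahub a user-defined check like this would run in the proposed-change pipeline and block the merge on failure; here it is just plain Python.

```python
# Sketch of the "descriptions must follow a pattern" check. The convention
# is invented for the example: <CUSTOMER>:<SERVICE>:<CIRCUIT-ID>,
# e.g. ACME:L3VPN:CKT-0042.
import re

DESCRIPTION_RE = re.compile(r"^[A-Z0-9]+:[A-Z0-9]+:CKT-\d{4}$")


def check_descriptions(interfaces: list[dict]) -> list[str]:
    """Return one error message per interface whose description breaks the pattern."""
    errors = []
    for intf in interfaces:
        desc = intf.get("description", "")
        if not DESCRIPTION_RE.match(desc):
            errors.append(f"{intf['name']}: non-conforming description {desc!r}")
    return errors


if __name__ == "__main__":
    data = [
        {"name": "Ethernet1", "description": "ACME:L3VPN:CKT-0042"},
        {"name": "Ethernet2", "description": "customer acme vpn"},
    ]
    for err in check_descriptions(data):
        print(err)  # only Ethernet2 is reported
```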

The artifact I mentioned earlier can be anything; it’s not necessarily tied to a device. If you want to attach it to a rack and generate ASCII art showing how the rack is laid out, you can. You can generate CSVs for cabling, images, even emails for maintenance notices. Multiple formats are supported: the classics like CSV, JSON, and Markdown, among others. And you can retrieve all of this directly from third-party tools.
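For instance, a cabling CSV artifact is easy to picture. The columns below are invented for the example; in Infrahub this would be the output of a Python transform, stored and versioned as an artifact attached to a rack or a site.

```python
# Tiny sketch of a non-device artifact: a cabling CSV with invented columns.
import csv
import io

cables = [
    {"a_device": "leaf1", "a_port": "Ethernet49", "z_device": "spine1", "z_port": "Ethernet1"},
    {"a_device": "leaf2", "a_port": "Ethernet49", "z_device": "spine1", "z_port": "Ethernet2"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["a_device", "a_port", "z_device", "z_port"])
writer.writeheader()
writer.writerows(cables)

print(buffer.getvalue())
```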

Deployment through existing tools like Ansible

Unlike Nautobot, we chose not to fold everything into Infrahub, and in particular not deployment. Other tools do that well. You can use Ansible, you can use Nornir, or others; there’s no reason for us to redevelop that. You keep your existing tools.

The only difference is, if you use Ansible, instead of retrieving data, transforming it in the playbook, and then pushing it, you’ll simply tell Ansible “Fetch the configuration and push it.” Ansible itself no longer contains the logic. It just passes it along.

That means that if tomorrow you stop using Ansible and move to another system based on webhooks and orchestration, you can do that fairly easily, because the logic no longer lives in the deployment tool.
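As a hedged sketch of that thin deployment step: fetch the already-rendered artifact over HTTP and push it unchanged. The artifact URL and authentication header are placeholders rather than documented Infrahub endpoints, and netmiko stands in for Ansible or Nornir; the point is simply that no transformation logic lives on the deployment side.

```python
# Hedged sketch of a "thin" deployment step once the rendered configuration
# lives in Infrahub: fetch the artifact over HTTP and push it as-is.
import requests  # pip install requests
from netmiko import ConnectHandler  # pip install netmiko

ARTIFACT_URL = "https://infrahub.example.com/placeholder/artifact/leaf1-config"  # placeholder URL
API_TOKEN = "changeme"  # placeholder token; header name below is an assumption

resp = requests.get(ARTIFACT_URL, headers={"X-INFRAHUB-KEY": API_TOKEN}, timeout=10)
resp.raise_for_status()
rendered_config = resp.text

with ConnectHandler(
    device_type="arista_eos",
    host="leaf1.example.com",
    username="admin",
    password="admin",
) as conn:
    # Push the configuration exactly as rendered by Infrahub; no logic here.
    output = conn.send_config_set(rendered_config.splitlines())
    print(output)
```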

And if I remember correctly, that’s the end. You have the link here to the open source Git repo and a QR code to join our Discord server if you have questions.

Benoit Kohler

November 3, 2025
