Eliminating Duplicates: Infrahub’s Resource Management Revolution

Why Is Resource Management Important to Teams and How Does Infrahub Help?

One of the often-overlooked elements in advancing automation is selecting which resources are safe to use next. When teams rely on manual designation of resources, they can end up with subnet overlaps, unavailable ASN choices, or even unroutable IP addresses. This is where resource management within Infrahub can help teams intelligently, and automatically, assign the next valid resource.

Infrahub gives the ability to intelligently assign:

  • IP Addresses: Individual IP addresses that need to be assigned to interfaces or routing processes.
  • IP Prefixes: Multiple subnets that adhere to a higher-level supernet routing design.
  • Number Groups: Ranges of VLANs, ASNs, routing processes, and more can be safely set aside in advance for easier consumption.

You can read more about Infrahub’s resource management system, and how it can help teams more effectively deliver IPAM services, in our official documentation. Meanwhile, check out this short video that we created for Autocon2, where we give a brief walk-through of the IPAM feature.
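As a quick illustration, here is a minimal sketch of asking Infrahub for the next free IP address from a resource pool using the Python SDK. The pool name, identifier, and server address are placeholder examples (the pool must already exist), so verify the details against the SDK documentation:

import asyncio

from infrahub_sdk import InfrahubClient


async def main() -> None:
    client = InfrahubClient(address="http://localhost:8000")  # example address

    # Look up an existing resource pool by name ("Management Pool" is an example).
    pool = await client.get(kind="CoreIPAddressPool", name__value="Management Pool")

    # Ask Infrahub for the next valid address. Reusing the same identifier
    # returns the same allocation, so the call is safe to re-run.
    ip = await client.allocate_next_ip_address(
        resource_pool=pool,
        identifier="rtr-01-loopback0",
        data={"description": "Loopback0 for rtr-01"},
    )
    print(ip.address.value)


asyncio.run(main())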

Preventing Errors at Scale: Meet Infrahub’s Version Management

How Do Automators Guard Against Data Errors at Scale?

One of the biggest operational challenges automation teams face is rapidly changing data. Teams need to understand the how and why behind every change. Whether this is for deploying deliverables, troubleshooting issues, or decommissioning environments, having a system that helps them safely manage their evolving information becomes paramount. That’s why the built-in version management and branching in Infrahub can be a lifesaver for teams.

With integrated version management, groups get:

  • Immutable Data History: Information is never truly gone. Being able to view, diff, and restore previous versions of data ensures that erroneous changes are never permanent.
  • Safe Environment for Changes: Branches give teams a safe approach to making changes, minor or major, without jeopardizing their main production data.
  • Conflict Resolution: Working with branches ensures engineers aren’t overwriting data or making a change that conflicts with others.
  • Audit Trails: The ability to track specific changes to specific versions or branches enables auditing of schema changes, artifact generations, data modifications, and more.

To learn more, read about version management and branches in the official documentation. Meanwhile, check out this short demo video that we created for Autocon2, where we walk through the version management and branching controls.
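To make that concrete, here is a hedged Python SDK sketch that creates a branch and edits a device on it, leaving main untouched. The branch name, device name, and status value are illustrative assumptions:

import asyncio

from infrahub_sdk import InfrahubClient


async def main() -> None:
    client = InfrahubClient(address="http://localhost:8000")  # example address

    # Create an isolated branch so production data stays untouched.
    await client.branch.create(branch_name="change-vlan-100", sync_with_git=False)

    # Changes made against the branch stay isolated until reviewed and merged.
    device = await client.get(
        kind="InfraDevice", name__value="chi-router-01", branch="change-vlan-100"
    )
    device.status.value = "maintenance"  # assuming status is a simple attribute here
    await device.save()


asyncio.run(main())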

A Brief Introduction to IPAM and Infrahub

IPAM Remains a Key Feature for Infrastructure Automation

Even as infrastructure automation continues to advance with the help of modern tools like Infrahub, certain functions remain an ever-present necessity for success. IP Address Management (IPAM) is one of those cornerstones. While Infrahub’s ability to support a truly custom schema and data model is essential to many organizations, we also know how fundamental IPAM is. That’s why, instead of requiring you to create an IPAM schema from scratch, Infrahub includes the following constructs out of the box:

  • IP Addresses: used to model a single IP address.
  • IP Prefixes: used to model a network, sometimes referred to as a supernet or subnet.
  • IPAM Namespaces: used to model distinct, isolated contexts for managing IP resources.
IP addresses and prefixes are well understood and implemented in a fairly straightforward fashion. However, IPAM Namespaces introduce a novel capability to IPAM: the ability to create contained environments for many different scenarios, as sketched in the example after this list. Some examples are:

  • Generic VRFs: Isolate and manage IP resources for each VRF, preventing address overlap.
  • Security Zones: Align IP address management with security zones to ensure clear segmentation and compliance.
  • Hybrid Designs: Differentiate and track IP resources across on-premises and cloud environments.
  • Multi-Cloud: Isolate address spaces for multiple cloud providers to avoid conflicts and simplify management.
  • Regional Compliance: Create zones for IPs tied to specific data sovereignty regulations.
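To make the namespace idea concrete, here is a minimal Python SDK sketch that creates the same 10.0.0.0/8 prefix in two isolated namespaces. The kind names IpamNamespace and IpamIPPrefix, the ip_namespace relationship, and the namespace names are assumptions for illustration; verify them against your schema and the Infrahub documentation.

import asyncio

from infrahub_sdk import InfrahubClient


async def main() -> None:
    client = InfrahubClient(address="http://localhost:8000")  # example address

    # Two isolated contexts: the same 10.0.0.0/8 space can exist in both
    # without overlapping, because each namespace is a separate context.
    for ns_name in ["customer-a", "customer-b"]:
        namespace = await client.create(kind="IpamNamespace", data={"name": ns_name})
        await namespace.save(allow_upsert=True)

        prefix = await client.create(
            kind="IpamIPPrefix",  # assumed kind name; adjust to your schema
            data={"prefix": "10.0.0.0/8", "ip_namespace": namespace},
        )
        await prefix.save(allow_upsert=True)


asyncio.run(main())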

Add a Flexible Schema, Transformations, and Change Management

Moving past the three out-of-the-box IPAM constructs, the power of the flexible schema becomes apparent. For example, you can add attributes such as customer tenant, associated service, or related applications to customize how IPAM works for your organization.

Combining other Infrahub features (such as service and configuration generation with Transformations and Artifacts, or data versioning through built-in change management) with IPAM gives infrastructure automators a further leg up on their delivery practices.

You can read more about IPAM in the Infrahub documentation. Meanwhile, check out this short demo video that we created for Autocon2, where we look at the basic capabilities of IPAM in Infrahub.

Generators and Services in Infrahub – Part 3

In this part of our series on generators and services, we’re diving into the practical aspects of setting up a generator. In our last video, we covered the planning phase, including identifying use cases, defining deliverables, and mapping workflows. This time, we’ll focus on the actual coding and configuration needed in Infrahub to get our generator up and running.

Setting the Generator Up

To start, we need to define our generator within the .infrahub.yml file. This is crucial, as it outlines how the different components come together. Our definition file will specify the locations of the Python code and the GraphQL query required to pull the data we need for our generator to function.

Walking through our generator definition:

  • name: We set a name for our generator, like site_generator.
  • file_path: This points to where our generator code resides.
  • targets: We define our targets, which will be set up in a future video.
  • query: We specify the query name, which is used further in the file, and the path to the query file.
  • class_name: We define a name for the class to use in the generator.

And then a similar setup for the query:

  • name: The name we use to point at the query.
  • file_path: This points to where our query file resides.
---
generator_definitions:
  - name: site_generator
    file_path: "generators/site_generator.py"
    targets: "generator_sites"
    query: "site_details"
    class_name: "SiteGenerator"

queries:
  - name: site_details
    file_path: "queries/site_details.gql"

GraphQL Query

The GraphQL query pulls just the information our generator needs. We will focus on returning the following values for our site:

  • edges > node > name: This will return the name of the site we are targeting.
  • edges > node > site_prefix: This returns the site prefix used for device and interface names.
  • edges > node > homing: Whether the site is single or dual-homed.

query site_details($site_name: String!) {
  LocationSite(name__value: $site_name) {
    edges {
      node {
        name {
          value
        }
        site_prefix {
          value
        }
        homing {
          value
        }
      }
    }
  }
}
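Before wiring the query into the generator, it can be handy to test it on its own. Here is a minimal sketch that executes the query directly through the Python SDK; the server address and the Chicago value are example inputs:

import asyncio
from pathlib import Path

from infrahub_sdk import InfrahubClient


async def main() -> None:
    client = InfrahubClient(address="http://localhost:8000")  # example address
    query = Path("queries/site_details.gql").read_text()

    # Execute the query with the same variable the generator will receive.
    result = await client.execute_graphql(query=query, variables={"site_name": "Chicago"})
    print(result["LocationSite"]["edges"])


asyncio.run(main())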

Generator File

This is where the magic happens. The generator is coded in Python and uses the Infrahub SDK to bring the logic to life. I’ll use comments within the Python code to walk through the flow:

import logging

from infrahub_sdk.generator import InfrahubGenerator

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Suppress httpx logs for ease of viewing on the console
httpx_logger = logging.getLogger("httpx")
httpx_logger.setLevel(logging.WARNING)


# Our SiteGenerator class as defined in our .infrahub.yml file
class SiteGenerator(InfrahubGenerator):

    # Receive the output of the GraphQL query
    async def generate(self, data: dict) -> None:
        logger.info("Received data for processing.")
        if not data["LocationSite"]["edges"]:
            logger.warning("No sites found in query result.")
            return

        # Extract site details we need to build our devices
        site = data["LocationSite"]["edges"][0]["node"]
        site_name = site["name"]["value"]
        site_prefix = site["site_prefix"]["value"]
        homing = site["homing"]["value"]
        logger.info(f"Processing site: {site_name} with prefix: {site_prefix}, Homing: {homing}")

        # Fetch the Management IP Pool
        management_pool = await self.client.get(
            kind="CoreIPAddressPool",
            any__value="Management Pool",
            raise_when_missing=False,
        )
        if not management_pool:
            logger.error("Management Pool not found. Ensure it is created in the GUI.")
            return
        logger.info(f"Using IPAM Pool: {management_pool}")

        # Define device configuration based on homing type
        device_config = {
            "single_homed": [
                {"name": f"{site_prefix}-router-01", "role": "core", "type": "router"},
                {"name": f"{site_prefix}-firewall-01", "role": "firewall", "type": "firewall"},
                {"name": f"{site_prefix}-switch-01", "role": "end_user_switch", "type": "switch"},
            ],
            "dual_homed": [
                {"name": f"{site_prefix}-router-01", "role": "core", "type": "router"},
                {"name": f"{site_prefix}-router-02", "role": "core", "type": "router"},
                {"name": f"{site_prefix}-firewall-01", "role": "firewall", "type": "firewall"},
                {"name": f"{site_prefix}-firewall-02", "role": "firewall", "type": "firewall"},
                {"name": f"{site_prefix}-switch-01", "role": "end_user_switch", "type": "switch"},
            ],
        }
        devices = device_config.get(homing, [])

        # Begin creating the objects for our service of a new site
        for device in devices:
            device_name = device["name"]
            device_role = device["role"]
            device_type = device["type"]

            # Create the device
            logger.info(f"Creating device: {device_name} ({device_role})")
            device_obj = await self.client.create(
                kind="InfraDevice",
                data={
                    "name": device_name,
                    "role": device_role,
                    "type": device_type,
                    "status": "active",
                    "site": site_name,
                },
            )
            await device_obj.save(allow_upsert=True)

            # Create interfaces for the device
            for iface_num in range(1, 3):
                interface_name = f"{device_name}-iface{iface_num}"
                logger.info(f"Adding interface: {interface_name}")
                interface = await self.client.create(
                    kind="InfraInterfaceL3",
                    data={
                        "name": interface_name,
                        "device": device_obj,
                        "speed": 1000,
                        "status": "active",
                        "role": "management" if iface_num == 1 else "uplink",
                    },
                )
                await interface.save(allow_upsert=True)

                # Obtain and associate an IP address for each interface
                ip_address = await self.client.allocate_next_ip_address(
                    resource_pool=management_pool,
                    identifier=f"{interface_name}-ip",
                    data={"description": f"IP for {interface_name}"},
                )
                ip_address.interface = interface
                await ip_address.save(allow_upsert=True)
                logger.info(f"Allocated IP: {ip_address.address.value} and linked to {interface_name}")

        logger.info("Device and interface creation completed.")

Let’s Run It!

Here we can see the console output from running the generator for the Chicago site:

infrahubctl generator --branch "generator-test" site_generator site_name=Chicago
Infrahub Generator output example

And now we can see our devices from inside the Infrahub UI.
Infrahub screenshot: devices from Generator

With the generator functioning as expected, we now look towards operationalizing it. This means integrating the generator into the application for easier consumption by users.

Conclusion

With our next installment in the series, we will implement a way to control when generators run and against which sites. This will allow for more flexibility and prevent unnecessary executions against all sites.

We will also bring the generator, and all of the required files, into a git repository so we can operationalize its usage, and we’ll look at how to leverage our proposed changes feature to help teams even more.

To learn more about Generators today, check out the following links to our documentation:

Automate Collaboratively with Infrahub Proposed Changes

Once teams start working to integrate automation into their day-to-day operations, being able to review and participate in change management becomes essential to success. Proposed changes inside of Infrahub allow teams to fully control when data is merged, as well as check it against necessary requirements. Oftentimes, peer review of changes becomes a vital safeguard. This is a default feature of Infrahub.

Infrahub proposed changes allow teams to:

  • Verify Changes: Enables a human operator to sign off on a change before it’s ingested into production data repositories.
  • Run Business Intelligence Checks: Programmatically check that designs align with configurations catered to business needs.
  • Enforce Design Validity: Does every leaf need two connections to a spine? Do routing processes need secure configurations? Ensure the right configuration every time.
  • Compare Before and After: Analyze impact to artifacts, configurations, files, and more before a change is accepted.

You can read more about how proposed changes in Infrahub can bring higher levels of control and safety to teams and their automation-led efforts in our official documentation. Meanwhile, check out this short video that we created for Autocon2 where we propose and accept a change to a demo system.
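Proposed changes can also be driven programmatically. Below is a hedged sketch of opening one with the Python SDK; the CoreProposedChange kind, its attribute names, and the branch names are assumptions to confirm against the documentation.

import asyncio

from infrahub_sdk import InfrahubClient


async def main() -> None:
    client = InfrahubClient(address="http://localhost:8000")  # example address

    # Open a proposed change from a working branch into main for peer review.
    pc = await client.create(
        kind="CoreProposedChange",  # assumed kind name
        data={
            "name": "Add VLAN 100 to access switches",
            "source_branch": "change-vlan-100",
            "destination_branch": "main",
        },
    )
    await pc.save()


asyncio.run(main())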

Flex Your Data with Infrahub’s Customizable Schema

A Customizable Schema Ensures Your Automation Delivers Technical and Business Value

As groups continue to evolve their automation practice, they inevitably encounter business data that doesn’t fit into predefined schemas. This is usually a symptom of an outside developer’s opinionated design that doesn’t capture the nuances of a specific industry or of a business they aren’t familiar with. Unfortunately, the only options left to these groups are to modify the application or to relegate their important business data to rigid custom fields. Once this happens, business data is no longer treated as a first-class citizen, often losing flexibility and suffering from reduced usefulness. That’s where the power of controlling your own schema gives teams new-found capabilities.

Here are a few reasons you want full control over your intent data schema:

  • Industry-Specific: Teams that need to track proprietary data can easily model it directly in their own schema.
  • Multi-Vendor: When multiple vendors are involved, conflicting or incomplete schemas can result in missing data.
  • Cross-Domain: Platform-specific applications can yield data gaps, such as when dealing with cloud and on-prem deployments.
  • Disparate Systems: Too often disparate systems result in data silos. Flexible schemas allow for data from multiple systems to exist simultaneously within a single platform.

You can read more about Infrahub’s flexible schema, and how to supercharge your data, in the Infrahub documentation. Meanwhile, check out this short demo video that we created for Autocon2, where we take a look at how easy it is to update a schema to include new attributes.

Generators and Services in Infrahub – Part 2

In this second part of our series on generators and services, we delve deeper into the practical aspects of creating a generator that streamlines the delivery of new sites. This blog aims to provide a comprehensive overview of the process, focusing on the use case, design requirements, workflow, and a live demonstration of the generator in action. If you missed our first article and video in this series, be sure to check those out.

Recap of Generators

Generators are generic plugins that query data and create new nodes and relationships based on the results. They can play a crucial role in automating the creation of new objects, services, and other elements within infrastructure management and deployment frameworks. The logic behind these generators determines how effectively they can automate processes.

Defining the Generator Use Case

For our generator, the primary use case is to automate the delivery of new sites. This involves creating a consistent and repeatable process for setting up new infrastructure. Our deliverables will include the creation of devices within Infrahub and the artifacts necessary for launching new sites. We’ll also record our design requirements and map out the general generator workflow.

Generator workflow diagram

Design Requirements

Before diving into the workflow, we’ll clarify our design requirements. This includes decisions about hardware configurations, naming conventions, and the relationships between devices.

Hardware Configuration

We will implement two configurations:

  • Single-Homed Configuration: This setup utilizes a single router, firewall, and switch, suitable for less critical applications and sites.
  • Dual-Homed Configuration: For more critical infrastructures, we will set up two routers, two firewalls, and a single switch to ensure redundancy and fault tolerance.

Naming Conventions

  • (site)-(device)-(incrementor)
    • (site) – will be a shortname for any given site
    • (device) – will be the type of device
    • (incrementor) – will be incremented for each device at the site (so the first router at a site with shortname chi becomes chi-router-01)

Relationships

  • We will keep this simple and just map our devices to the sites

Map the Workflow

In our next article we’ll dive into the actual code that runs our generator. But for now we’ve outlined, in pseudocode, what this workflow should look like.

function generate(site):
    validate site data
    if homing == "single_homed":
        create 1 router, 1 firewall, 1 switch
    else if homing == "dual_homed":
        create 2 routers, 2 firewalls, 1 switch
    link devices to sites
    return success

Running the Generator

Generators can be run in one of two ways: from the command line or from the UI. When developing a generator, it’s best to run it from the command line. The UI works better once a generator has been completed and is ready to be used in production. While we only show one example below, we actually run this twice for the demo in the video.

Below we can see the generator running and creating our devices for the Chicago site. To help us track the generator running in the development stage, additional code has been added to output information to the console.
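For reference, a development run from the command line looks like the invocation below (we cover it in detail in the next article); the branch name, generator name, and site parameter are from this demo:

infrahubctl generator --branch "generator-test" site_generator site_name=Chicago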

generator console output

Once the generator has run successfully, we will see that the 2 routers, 2 firewalls, and a switch have been generated for the Chicago site.

successful generator run

Conclusion

In this article and video, we’ve continued to expand our understanding of how a generator can help us deliver our services in a repeatable and reliable fashion. The remaining articles in the series will further enhance the delivery of our service, a fully functional site, through our generator.

Stay tuned as we continue on this path in crafting our generator tailored to specific needs, enhancing our capabilities through Infrahub.

To learn more about Generators today, check out the following links to our documentation:

Generators and Services in Infrahub – Part 1

This article kicks off a multipart series where we’re diving into the concept of generators and services, exploring their significance, structure, and how they can streamline processes for teams. Whether you’re a developer or just curious about automation in IT, this guide will provide you with a comprehensive understanding of generators and their applications.

What is a Generator?

A generator, in the context of Infrahub, is defined as a generic plugin that queries data and creates new nodes and relationships based on the results. This definition encompasses several key components that work together seamlessly to facilitate the automation of repeatable tasks. See the Generator topic in our docs for more.

To visualize a generator, think of it as a combination of various elements: targeting the intended group of objects, the logic that powers it, and the GraphQL query that bridges everything. This combination of components allows teams to automate the generation of service deliverable objects efficiently and effectively.

Generator overview diagram

Why Use Generators?

Generators are particularly beneficial for teams that have repeatable tasks that require consistency and efficiency. These tasks can range from deploying new VLANs to creating infrastructures for new locations, such as remote offices of a healthcare organization or new construction of franchise buildings.

In scenarios where organizations are scaling operations—like a hyperscaler creating new data centers or a managed service provider deploying numerous applications—generators offer a solution to streamline these processes. By inputting new variables, teams can generate a fresh version of the task each time, saving time and reducing errors.

While these scenarios are hardly an exhaustive list, they do serve as a point to kickstart the thought process and to ask the question, “What services does my team deliver?”

Example Generators in Action

While this is best illustrated in the videos, we can briefly discuss the two provided examples.

Data Center Fabric Example

In our first example, we created a branch called generator demo to work on establishing a new data center fabric. The process began with adding fabric for a new location, in this case, Chicago. We set up the spines and leaf groups, specifying the number of Layer Two and Layer Three leafs required.

Setting up the fabric parameters

Infrahub Generators: Setting up fabric parameters

Next, we designated our target group within Infrahub, ensuring that our generator was aligned with the right site. After initiating the generator, we observed the tasks in progress, with successful outputs confirming the creation of our leaves and spines as per the defined parameters.

The generated objects

Infrahub Generators: generated objects

New Site Example

In our second example, we focused on creating a new site, specifically for Grand Rapids. This time, we selected a design type and specified the hardware, a Cisco 24-port switch. The site was categorized under the automated sites group to ensure proper management and organization.

Setting up the new site parameters

Infrahub Generators: setting new site parameters

Similar to the previous generator, once started, the logic was processed, which led to the successful creation of the new design with the requested equipment. As we refreshed the data, we noted the interfaces, IP address consumption, and configurations that were automatically generated, showcasing the efficiency of the generator in preparing a new site to come online.

The generated objects

Infrahub Generators: generated objects

Artifacts and Their Importance

As we explored the generators, we also encountered the concept of artifacts. Artifacts are essential as they represent the actual configurations needed for devices, such as boot-up configurations. They play a critical role in implementing changes in existing environments or introducing new services.

By utilizing artifacts, teams can ensure that the generated infrastructure aligns with operational requirements, making the automation process not only efficient but also reliable.

Looking Ahead: Building Our Own Generator

Generators are a powerful tool in the Infrahub ecosystem, enabling teams to automate repetitive tasks effectively. By understanding their components, applications, and the importance of artifacts, you can leverage generators to enhance productivity and streamline operations.

In the upcoming segments of this series, we will take a step further by creating our own generator. This endeavor will involve a design phase where we outline the generator’s functionality before diving into the coding aspect. Our goal is to simplify the process while ensuring clarity and understanding.

Stay tuned as we embark on this journey to craft a generator tailored to specific needs, enhancing our capabilities through Infrahub.

To learn more about Generators today, check out the following links to our documentation:

A Quick Look at Infrahub Artifacts

What are Transformations and Artifacts and Why are They Useful?

One of the key characteristics of Infrahub is that it is a comprehensive data management system, and that doesn’t stop at design and intent, but extends to rendered data. As such, Infrahub offers a capability called Transformations. A transformation is a generic plugin that transforms data into a different format to simplify ingestion by a third-party system. The output of a transformation is an artifact. While you can run a transformation on demand, artifacts are persistent, bringing a number of benefits:

  • Caching: Generated artifacts are stored in the internal object storage, which improves performance for resource-intensive transformations since you don’t have to regenerate them each time you use them.
  • Traceability: Past values of an artifact remain available.
  • Peer Review: Artifacts are automatically part of the Proposed Change review process.
  • Database: Artifact nodes are stored in the database, and other nodes can optionally have a relationship with them, which makes it possible to perform certain artifact-related queries.

Here are some examples of artifacts:

  • Startup configurations for single devices
  • Startup configurations for complex service catalogue delivery
  • Multi-device configuration snippets for configuring services
  • Containerlab *.clab files for digital twin labs

Note that transforms and artifacts aren’t restricted to config files; you’ll see in the video (and image above) that the artifact is a JSON blob. Artifacts can be plain text or JSON format.
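To make the idea more tangible, here is a minimal sketch of a Python transformation built on the SDK. The query name device_details and the field layout are illustrative assumptions; a real transform should match a query defined in your .infrahub.yml.

from infrahub_sdk.transforms import InfrahubTransform


class DeviceConfig(InfrahubTransform):
    # Name of the GraphQL query (defined in .infrahub.yml) that feeds this transform.
    query = "device_details"  # assumed query name

    async def transform(self, data: dict) -> str:
        # Reshape the query result into the rendered artifact, here a
        # minimal plain-text configuration snippet.
        device = data["InfraDevice"]["edges"][0]["node"]
        return f"hostname {device['name']['value']}\n"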

To learn more about these topics in depth, you can read about Transformations and Artifacts in the Infrahub documentation. Meanwhile, check out this short demo video that we created for AutoCon 2, where we take a look at an artifact, change an interface attribute, and automatically regenerate the artifact to reflect that change.

Introduction to Schema Design

What is a Schema?

One way to define a schema is as a blueprint for your environment’s data structure. Within Infrahub, it can outline the structure, objects, and relationships of your data. To clarify, this doesn’t refer to individual devices, like a specific router or firewall. Instead, we’re talking about broader categories, such as network devices (which could include routers, switches, etc.) or organizational sites.

The schema comprises the larger building blocks used throughout your environment, along with the attributes associated with them and how they relate to each other.

This article walks through the major concepts contained in the video below.

The Importance of Flexibility in Schema Design

One key aspect of effective schema design is flexibility. As your business evolves, so do your data structure and tooling needs. For instance, if you acquire another business that utilizes a different application, you’ll need to incorporate their data into your existing environment. A flexible schema allows you to merge these data points without requiring substantial changes to your existing data design.

Flexibility also comes into play during tool migrations. If you’re transitioning from an outdated tool, a flexible schema can help you adapt your old data to fit into the new system seamlessly.

Moreover, as you push further into automation, you may discover additional data points that currently exist as custom fields in other applications. By having a flexible schema, you can treat this business data as first-class citizens within your data model, eliminating the need for custom fields.

In Infrahub, flexibility means you can customize and evolve your schema over time to meet your changing needs.

Understanding a Basic Schema

Here’s a very basic schema setup featuring two nodes: Network Device and Organization Site.

---
version: "1.0"
nodes:
  - name: Device
    namespace: Network
    attributes:
      - name: hostname
        kind: Text
        unique: true
      - name: model
        kind: Text
  - name: Site
    namespace: Organization
    attributes:
      - name: address
        kind: Text
      - name: contact
        kind: Text

Each node type has attributes associated with it. For instance, every Network Device will have a hostname and a model number.

The Organization Site, which we can think of as a physical building, has attributes like address and contact information. The types of these attributes can vary; in our example, we have standard text fields for free-form text entry.

Depending on your company, it may be important to enforce uniqueness for certain attributes, such as the hostname for network devices. This is fully controllable within the schema design.

Loading the Schema in Infrahub

In the terminal, using the infrahubctl schema load command, it’s fairly simple to select our schema and load it into our Infrahub environment.

Infrahub schema load at the CLI

More information on using infrahubctl can be found in the Infrahub documentation.
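For reference, the command takes the path to one or more schema files; the file name below is just an example:

infrahubctl schema load schemas/base_schema.yml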

Expanding the Schema

Next, let’s look at how to expand our schema. We’ll add a firmware attribute to Network Device and an email attribute for our Organization Site contact person.

---
version: "1.0"
nodes:
  - name: Device
    namespace: Network
    attributes:
      - name: hostname
        kind: Text
        unique: true
      - name: model
        kind: Text
      - name: firmware
        kind: Number
  - name: Site
    namespace: Organization
    attributes:
      - name: address
        kind: Text
      - name: contact
        kind: Text
      - name: email
        kind: Email

In the documentation we’ll find a list of attribute kinds, including Text, which we already used in our basic schema. There are many other attribute kind options available that can help customize your schema according to your specific environment’s needs.

Understanding Namespaces

Finally, let’s talk about namespaces. In our working schema, we defined the Network namespace and the Organization namespace. This helps prevent node name collisions. For example, you might have a node named Device in multiple namespaces. Here is an abbreviated example of that.

---
version: "1.0"
nodes:
  - name: Device
    namespace: Network
  - name: Device
    namespace: Security
  - name: Device
    namespace: Server

Obviously we would include attributes in a production schema, but this should help visualize the concept of Device existing in the Network, Security, and Server namespaces.

Conclusion

Understanding schema design is a key element in taking infrastructure automation further and effectively improving it. It’s a topic worth spending time to understand and consider. Here are a few key pages from our documentation to help on that journey.

Let’s get automating!
