How to Turn Your Source of Truth into a Service Factory

Introduction

A few months ago, I gave a presentation at AutoCon1 on “How a Network Source of Truth Transformed Customer Provisioning and Team Dynamics.” The talk covered how a clear service definition and some automation enabled us to deliver consistent customer services. I also highlighted the added value it provided by optimizing resource utilization and enforcing the service lifecycle.

At the time, the implementation was functioning well, and the project was considered a success. However, we already had concerns about its long-term viability. The form provided to end users was too tightly coupled to the underlying implementation and lacked flexibility. Additionally, the rigid data model required substantial effort to support complex use cases and evolving business needs.

Since then, many things have changed. Today, I’d like to revisit this challenge, but with a completely different technical stack. In this blog, I’ll explore the theoretical aspects of the approach and provide a link to the repository containing the corresponding implementation. In addition, you can watch the video walkthrough below.

https://youtu.be/FbM9XA38wDk

Problem statement

Organizations aim to deliver services effectively, as this is where the money flows.

However, implementing a service catalog is a complex operation that many organizations struggle with. It demands a deep understanding of the product lifecycle, the interplay of various components, and coordination among numerous stakeholders. Beyond that, it requires a robust technical implementation to automate all the associated rules and processes properly.

The stakes are high because the structure of a service dictates everything downstream—from invoicing and lifecycle management to resource allocation and capacity planning. A poorly designed service layer can result in inefficiencies and challenges at every stage.

Use case


Let’s consider a fictional ISP, Otter-net, which provides standard internet connectivity. To maintain clarity, I’ll deliberately skip low-level technical details and focus on generic aspects that might resonate with other use cases.


Otter-net operates multiple points of presence across Europe. Currently, it offers a single service: dedicated internet access. This service provides customers with a physical port and a set of public IP addresses for hosting services. Additional services are planned for the future!


The operational team at Otter-net is divided into two groups:

  • Network Architects: Experts with extensive networking experience, responsible for operating and maintaining the backbone network.
  • Service Delivery Team: Customer-facing professionals responsible for provisioning and connecting services to the backbone.

It all starts with data

To structure and store the data, we’ll use Infrahub, which offers “flexible schemas”—a feature that lets you design a data model tailored to your specific needs. By default, Infrahub doesn’t include any prebuilt schemas; it’s up to the user to create and load them.

Fortunately, there’s a schema library that provides various examples and schema shards to help you quickly scaffold a usable schema. The “base” schema is mandatory, as it provides foundational generics required for all extensions. From there, you can pick additional modules. In our case, we selected:

  • Location Minimal: Defines a hierarchical tree for country, metro, and site.
  • VLAN: Includes nodes for VLANs and L2 domains.

In the future, we may need modules like circuits, cross-connects, or routing. These can be copied into the local repository and customized to perfectly suit our needs.

Now comes the core challenge: crafting a fully custom schema for our service object. As mentioned earlier, Otter-net is planning to offer multiple services in the future. To enable these future expansions, I created a generic service object. This object holds all common attributes shared across the products. Additionally, we will leverage this generic structure to simplify relationships, which we will revisit later.

Next, I developed a DedicatedInternet schema node that inherits from the generic service object and includes a few additional attributes. These attributes are relatively high-level (e.g., an ip_package with T-shirt size values) and are primarily intended as inputs for users.

By default, Infrahub creates data within branches (parallel realities), but it also supports branch-agnostic objects. A branch-agnostic object is propagated to all branches, regardless of where it was created. Here, branch-agnostic behavior is applied in the schema to the service object and key attributes, such as service_identifier. This ensures consistent tracking of a service across all ongoing implementations and branches.

To capture all the building blocks of my service (such as prefixes, interfaces, etc.), I implemented various relationships. These relationships include some advanced behaviors. For instance, consider the relationship between sites and services. From a site’s perspective, I only need a list of services and do not want multiple relationships for each type of service. However, for a specific type of service, I want to enforce rules within the relationships. For example, a distributed service could link to multiple sites, whereas a DedicatedInternet service is tied to a single site.

While these requirements might seem contradictory, the Infrahub schema supports such advanced use cases. The relationship is initiated on the service node, pointing toward the site, while the site side only declares a relationship toward the generic service. Because both sides use the same identifier, Infrahub recognizes them as a single, unified relationship: the site sees one list of services of any type, while each service type can enforce its own cardinality rules.
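To make this pattern concrete, here is a minimal sketch of what such a schema could look like. The kind names, identifier, and attribute values are illustrative placeholders, simplified compared to the actual schema linked just below.

```yaml
# Illustrative sketch only: kind names, identifiers, and choices are
# placeholders, not the exact schema from the linked repository.
version: "1.0"

generics:
  - name: Generic
    namespace: Service
    label: Service
    # Branch-agnostic so a service is tracked consistently across all branches.
    branch: agnostic
    attributes:
      - name: service_identifier
        kind: Text
        unique: true

nodes:
  - name: DedicatedInternet
    namespace: Service
    inherit_from:
      - ServiceGeneric
    attributes:
      - name: ip_package
        kind: Dropdown
        choices:
          - name: small
          - name: medium
          - name: large
    relationships:
      # Defined on the service node, pointing toward the site,
      # and constrained to exactly one site for this service type.
      - name: site
        peer: LocationSite
        identifier: service__site
        cardinality: one
        direction: outbound

extensions:
  nodes:
    - kind: LocationSite
      relationships:
        # From the site's perspective: a single list of services of any type.
        # The shared identifier makes Infrahub treat both sides as one relationship.
        - name: services
          peer: ServiceGeneric
          identifier: service__site
          cardinality: many
          direction: inbound
```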

Full schema definition: https://github.com/opsmill/poc-service-catalog/blob/main/schemas/service/service.yml

During the schema development process, it’s helpful to use infrahubctl to check and load the schema in a branch, making adjustments as needed. In a production context, we will leverage Infrahub’s Git integration to load the schema in a controlled manner.


We now have the data model and data to support our business. It captures everything from services to the backbone, with some abstraction for flexibility. This setup is a strong foundation for automation.

… then capture the business logic …

Let’s now look at how to codify the business rules for service provisioning using Infrahub’s generator feature. To put it simply, a generator is a Python script that interacts with data to transform a high-level service request into a technical implementation. The process starts with defining inputs and mapping them to the final output.


Generators are built on the concept of idempotency. If you are familiar with Ansible, this concept should sound familiar. The goal is to make the generator repeatable: it assigns resources the first time it runs, and if run again, it changes nothing if the desired state is already achieved. This approach ensures the code is robust and predictable.

Another convenient feature is Infrahub’s Resource Manager, which lets users create pools and allocate resources from them, such as prefixes, IP addresses, or even plain numbers. We will use this feature to allocate our prefixes in a branch-agnostic and idempotent way, along with a number pool for VLAN IDs. Both integrate seamlessly with the generator.
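Here is a simplified sketch of what such a generator could look like. The kind names, pool names, and query shape are illustrative placeholders rather than the exact code from the repository (linked below), but they show the idempotent pattern and the pool allocation described above.

```python
# Illustrative sketch only: kind names, pool names, and the query shape are
# placeholders, not the exact code from the linked repository.
from infrahub_sdk.generator import InfrahubGenerator


class ImplementDedicatedInternet(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        # "data" is the result of the GraphQL query bound to this generator.
        service = data["ServiceDedicatedInternet"]["edges"][0]["node"]
        service_id = service["service_identifier"]["value"]

        # Allocate a customer prefix from a resource pool. Reusing the same
        # allocation identifier makes this idempotent: running the generator
        # again returns the prefix already assigned to this service.
        pool = await self.client.get(
            kind="CoreIPPrefixPool", name__value="customer-prefixes"
        )
        prefix = await self.client.allocate_next_ip_prefix(
            resource_pool=pool,
            identifier=f"{service_id}-prefix",
        )

        # Upsert the customer VLAN: created on the first run, left untouched
        # on subsequent runs once the desired state exists.
        vlan = await self.client.create(
            kind="InfraVLAN",
            data={
                "name": f"{service_id}-vlan",
                "vlan_id": 100,  # in practice allocated from a number pool
            },
        )
        await vlan.save(allow_upsert=True)
```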


Full generator code: https://github.com/opsmill/poc-service-catalog/blob/main/generators/implement_dedicated_internet.py

This generator is closely tied to a GraphQL query that Infrahub executes and provides as input to the generate method. This connection is configured in a special file at the root of our repository called .infrahub.yml. Additionally, we need to specify a target, which is a group containing all the objects we want to automate—in this case, customer services. Once set up, we can test our generator using infrahubctl and verify that the output meets our expectations.
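As an illustration, the relevant part of .infrahub.yml could look roughly like this; the names are placeholders, and the actual file is linked just below.

```yaml
# Illustrative sketch: names are placeholders, see the actual file linked below.
generator_definitions:
  - name: implement_dedicated_internet
    file_path: "generators/implement_dedicated_internet.py"
    class_name: ImplementDedicatedInternet
    # GraphQL query whose result is passed to the generate() method.
    query: dedicated_internet_details
    # Group containing the customer services we want to automate.
    targets: dedicated_internet_services
    parameters:
      service_id: "service_identifier__value"
```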

Repository configuration: https://github.com/opsmill/poc-service-catalog/blob/main/.infrahub.yml

At this stage, we have a generator that transforms a high-level service request into a complete service, sourcing resources from predefined pools with consistency and in just seconds.

… and give it to users

The generator we developed earlier is powerful but requires technical knowledge of low-level implementation, multiple operations, and access to Infrahub. To simplify the process for users, we will create a form on top of the generator. This form will expose only the necessary inputs, create objects in a branch, and generate a proposed change for review by a network architect. We will implement this using Streamlit, a Python library that enables the creation of frontend applications without prior frontend experience.
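As a rough sketch of that flow (the address, kind names, and form fields below are placeholders, not the exact code from the repository):

```python
# Illustrative sketch: address, kind names, and form fields are placeholders,
# not the exact code from the linked repository.
import streamlit as st
from infrahub_sdk import InfrahubClientSync

client = InfrahubClientSync(address="http://localhost:8000")

st.title("Request a dedicated internet service")

with st.form("new_service"):
    customer = st.text_input("Customer name")
    ip_package = st.selectbox("IP package", ["small", "medium", "large"])
    submitted = st.form_submit_button("Request service")

if submitted:
    # Work in a dedicated branch so the architect can review the exact changes.
    branch_name = f"implement-{customer.lower().replace(' ', '-')}"
    client.branch.create(branch_name=branch_name)

    # Create the high-level service object in that branch; the generator takes
    # care of the low-level implementation.
    service = client.create(
        kind="ServiceDedicatedInternet",
        branch=branch_name,
        data={"name": f"{customer} dedicated internet", "ip_package": ip_package},
    )
    service.save()

    # Open a proposed change so a network architect can review and approve it.
    proposed_change = client.create(
        kind="CoreProposedChange",
        data={
            "name": f"New dedicated internet for {customer}",
            "source_branch": branch_name,
            "destination_branch": "main",
        },
    )
    proposed_change.save()

    st.success(f"Request submitted in branch {branch_name}")
```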


The application will have two main pages: the form and the list of requests. The form provides a streamlined interface with only the relevant inputs, allowing users to request a new service in a controlled manner. It creates a “request” (a proposed change in Infrahub), which can be monitored on the second page. A network architect will review the request in Infrahub and approve it. Once approved, service delivery teams can view the allocated resources on the page and communicate them to the customer.


This form separates design from implementation. Users can request and interact with resources without dealing with low-level complexity. Additionally, we leverage Infrahub’s branching capabilities to push all changes to a dedicated branch, enabling architects to review exact modifications. Finally, the proposed change provides a runtime for the generator and offers future possibilities, such as user-defined checks and artifacts for device configuration.


Conclusion

End result (demo video): https://drive.google.com/file/d/12LVQQ5g403G3NFUMQ9gpr_LuFIpICKXU/view?usp=drive_link

Repository: https://github.com/opsmill/poc-service-catalog

The flexible schema feature of Infrahub is an ideal solution for capturing the service layer as closely to the data as possible. It enables you to represent every building block of your current services while also scaling to accommodate the growth of your business, such as the addition of new products. Additionally, the generator feature provides a robust mechanism to codify your business rules, transforming service requests into proper implementations in an efficient manner. Finally, encapsulating this logic into a form allows users to interact with data and resources more effectively, ensuring a clear separation of concerns.

Implementing this example as a production-ready product would require careful consideration of the overall BSS process. For instance, service objects likely exist prior to implementation (in a CRM system, for example) and would need to be synchronized with Infrahub. While creating a form is one option, it’s also possible that a system already present in your IT landscape (such as ticketing or CRM software) could fulfill this interface role.

Flexibility in the data model is crucial when it comes to the service layer. Since every organization operates slightly differently, there is no one-size-fits-all solution. Your source of truth must adapt to your business, capturing existing and future products as well as all their various building blocks. The quality of the data model, combined with the ease of interacting with the data it contains, will have a profound impact on downstream processes like service automation and resource management. Aligning a tailored data model, robust automation, and an efficient user interface will transform your source of truth into a powerful service factory.
