Oct 7, 2024
Product

Beyond the UI: Sifflet's Expansion into Technical Workflows

Post by
Mahdi Karabiben

Since day one, Sifflet's mission has been to bring data calm to both business and technical users. Over the past three years, we've shaped the Sifflet UI into a powerful control panel for business users while simultaneously developing our Data-Quality-as-Code (DQaC) framework and Sifflet CLI to empower technical users in building and scaling high-quality data products.

As we keep improving the Sifflet UI to cater to business teams, we're equally focused on enhancing the experience for technical users by integrating Sifflet's capabilities into their existing tools—an essential part of our Sifflet Everywhere vision. Today, we’re thrilled to unveil our plans for technical users across three key areas:

  1. Making Sifflet a Continuous Integration (CI) companion
  2. Enabling an Everything-as-Code approach
  3. Integrating Sifflet with local development workflows

Let's delve into each of these areas and the exciting new features they bring.

1. Making Sifflet a Continuous Integration (CI) Companion

Proactive issue detection is key. Instead of only monitoring your data assets once they're in production, we want to alert you about potential problems before code changes are merged or dbt models are deployed. To do so, we're bringing Sifflet's monitors and metadata-based insights directly to your CI setup via multiple ready-to-use workflows. These CI workflows, compatible with both GitHub and GitLab (and soon with Bitbucket), will help you merge and approve changes with confidence while minimizing production disruptions.

Impact Analysis for dbt Changes (Available Now)

With this first workflow, which we just released, Sifflet can now analyze the impact of your code changes on downstream assets, proactively identifying potential issues and helping you avert production incidents.

When you raise a pull/merge request containing dbt model changes, the workflow will add an automated comment listing potentially impacted assets like dbt models, dashboards, and more. To ensure that you’re only notified about assets that matter to you, we added multiple filtering capabilities based on tags and asset types.

Sample GitHub comment via the impact analysis action

This feature is now available for both GitHub and GitLab users and can be activated through a simple setup detailed in our documentation. We’re also planning to bring this capability to non-dbt data pipelines, so stay tuned!
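For a rough sense of the setup, a workflow of this kind could be wired into your pull requests as in the sketch below. The action reference, input names, and secrets are illustrative placeholders; our documentation covers the exact configuration.

```yaml
# Illustrative sketch only: the action reference, inputs, and secret names are placeholders.
name: Sifflet impact analysis
on:
  pull_request:
    paths:
      - "models/**"                    # only trigger when dbt models change

jobs:
  impact-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Analyze downstream impact
        uses: your-org/sifflet-impact-analysis@v1   # placeholder action reference
        with:
          sifflet-host: ${{ secrets.SIFFLET_HOST }}
          sifflet-token: ${{ secrets.SIFFLET_API_TOKEN }}
          asset-types: "dashboard,model"   # filter which impacted assets are reported
          tags: "critical"                 # only surface assets carrying these tags
```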

Running Monitors in Testing Environments (Coming Soon)

The next workflow, which we're currently working on, will automatically run Sifflet monitors in your testing (or staging) environment, allowing you to proactively detect data quality rule violations before changes are approved. Once activated, this workflow will use Sifflet monitors to test whether your proposed code changes generate valid data, so you no longer have to perform those checks manually.

The CI workflow to run Sifflet monitors in your testing environment.

Simply provide a YAML configuration for your testing environment, and Sifflet will run the relevant monitors on your pull/merge requests, preventing any unwanted surprises in production.
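As a rough illustration, such a configuration could look like the sketch below; the keys shown are placeholders rather than the final schema, which we'll document when the workflow ships.

```yaml
# Illustrative sketch only: key names are placeholders, not the final configuration schema.
environment:
  name: staging
  datasource: snowflake_staging       # the connection Sifflet should run monitors against
monitors:
  include_tags: ["ci", "critical"]    # only run monitors carrying these tags
  fail_on: critical                   # fail the pipeline only on critical breaches
```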

Robust Public APIs for Custom Workflows (Available Now)

In addition to the above workflows, we're continuously enhancing and expanding our public APIs to provide even more flexibility. The Sifflet APIs expose Sifflet metadata and operations that you can already leverage beyond our ready-to-use CI workflows, enabling you to build your own CI integrations and automation. To get started with the Sifflet APIs, check out the official documentation.
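As a sketch of what a custom integration might look like, the CI step below fetches metadata from a placeholder endpoint; the actual routes, parameters, and payloads are covered in the API documentation.

```yaml
# Illustrative sketch: the endpoint path below is a placeholder, not an actual API route.
- name: Fetch impacted assets from the Sifflet API
  run: |
    curl --fail --silent \
      --header "Authorization: Bearer ${SIFFLET_API_TOKEN}" \
      "https://${SIFFLET_HOST}/api/<metadata-endpoint>?asset=analytics.orders" \
      > impacted_assets.json
  env:
    SIFFLET_HOST: ${{ secrets.SIFFLET_HOST }}
    SIFFLET_API_TOKEN: ${{ secrets.SIFFLET_API_TOKEN }}
```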

2. Enabling an 'Everything-as-Code' Approach

We believe in empowering you to define and manage all sorts of Sifflet assets (from setting up new monitors to integrating new sources) without leaving your IDE. While we've prioritized a frictionless Sifflet UI experience, we also recognize the value of defining assets as code. That's why, over the past few months, we've worked on multiple features that greatly expand what you can do in code.

The Universal Connector (Available Now)

Our recently released Universal Connector enables you to integrate Sifflet with any tool in your stack using YAML and a dedicated API endpoint. This gives you the power to define custom sources as code, further enhancing your control and flexibility while also bringing Sifflet’s full-stack data observability capabilities to life no matter which tools you’re using.

Example declared asset lineage view (assets from tools that Sifflet doesn't integrate with yet: Metabase and MongoDB)
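For illustration, declaring a custom asset could look roughly like the sketch below; the field names are simplified placeholders, and the exact schema is described in the Universal Connector documentation.

```yaml
# Illustrative sketch: field names are placeholders, not the exact Universal Connector schema.
assets:
  - name: customer_churn_dashboard
    type: dashboard
    source: metabase                       # a tool without a native Sifflet integration yet
    upstreams:
      - warehouse.analytics.churn_scores   # links the dashboard into existing lineage
```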

Data Quality as Code v2 (Available Now)

Data-Quality-as-Code (DQaC) has been a core focus since day one, and the early development of Sifflet's DQaC framework is the strongest proof of that commitment. This summer, we introduced a complete overhaul of the framework: DQaC v2 is more robust, flexible, and user-friendly, allowing you to create advanced monitors at scale without leaving your IDE.

With the release of DQaC v2, you can now define and manage thousands of monitors directly in YAML, matching the power of the Sifflet UI.
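To give a flavor of the as-code experience, a monitor definition could look roughly like the sketch below; the fields shown are simplified, and the exact syntax is covered in the DQaC v2 documentation.

```yaml
# Illustrative sketch: field names are simplified; see the DQaC v2 docs for the exact syntax.
monitors:
  - name: orders_freshness
    description: The orders table should be refreshed at least daily
    dataset: analytics.orders
    type: freshness                  # placeholder monitor type identifier
    threshold: 1d
    notifications:
      - slack: "#data-alerts"
```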

For more information, check out our recent blog post and explore the DQaC v2 documentation.

Officially Supported Terraform Provider (Available Now)

We're excited to announce that we've just released an officially supported Terraform provider, which allows you to manage a wide range of Sifflet objects (coming soon: monitors!) using Terraform.

With this new Terraform provider, built on top of an initial contribution by the Servier data team, you'll be able to configure your entire Sifflet setup as code, enhancing reproducibility, collaboration, and governance.

3. Integrating Sifflet with the Local Development Workflow

As part of our Sifflet Everywhere vision, we're committed to bringing Sifflet's capabilities to your everyday tasks, making data observability a seamless part of your development workflow. 

Whether you’re building new dbt models or testing pipeline changes, Sifflet will be right there with you, ready to support you with end-to-end testing and extensive metadata.

dbt with a Sifflet Flavor (Coming Soon)

We're working on integrating Sifflet’s DQaC framework into dbt YAML, allowing you to define Sifflet monitors and configure other features directly within your dbt projects. This upcoming feature will make data observability a natural extension of your dbt development workflow.
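As a sketch of what this could look like, Sifflet configuration might sit alongside a model's existing dbt properties; since the final syntax is still being finalized, treat the keys below as illustrative only.

```yaml
# models/schema.yml -- illustrative sketch; the final Sifflet keys may differ.
version: 2
models:
  - name: orders
    description: Cleaned orders fact table
    meta:
      sifflet:                        # hypothetical namespace for Sifflet configuration
        monitors:
          - type: row_count_anomaly   # placeholder monitor type
            severity: high
```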

Supercharging the Sifflet CLI (Coming Soon)

The Sifflet CLI is getting a major upgrade, transforming it into an essential tool for your development workflow. You'll be able to run Sifflet monitors in development environments, analyze downstream dependencies for dbt models, and more—seamlessly complementing the dbt CLI as you switch between the two.

Summing up: Our Commitment to Technical Users

These features across our three focus areas highlight our dedication to empowering technical users. By embedding Sifflet into your CI pipelines, local development environments, and infrastructure-as-code practices, we're eliminating friction and enabling you to maintain control over your data from development to production.

A key tenet of our Sifflet Everywhere promise is bringing Sifflet's full power to you right where and when you need it – in your IDE, your code repository, and throughout your development lifecycle.

Moving forward, we'll continue building upon these features, fine-tuning the technical experience, and ensuring Sifflet seamlessly integrates into your journey towards data calm.

Not a Sifflet customer yet? Intrigued by these features? Reach out to us for a technical demo!
