Data transformation is a critical part of modern data workflows, and tools like DBT (Data Build Tool) and Microsoft Fabric can help streamline this process. In this blog, we’ll explore how to use DBT with Microsoft Fabric, walk through the setup process, and discuss the benefits of this integration.
What is DBT?
DBT (Data Build Tool) is an open-source tool designed to help analytics engineers transform data in their data warehouses. It operates as a declarative ELT (Extract, Load, Transform) framework, allowing transformations to be expressed through SQL SELECT statements and YAML configurations. DBT handles the transformation layer, making it easier to build scalable and maintainable data pipelines.
Microsoft Fabric offers a dedicated DBT adapter, `dbt-fabric`, which enables seamless integration with Fabric’s data warehousing capabilities. Let’s take a closer look at how to set this up.
Data Warehouse and T-SQL Functionality
Microsoft Fabric’s Data Warehouse is a fully managed, lake-centric SaaS experience built around OneLake. It supports cross-database queries and is accessible through tools like the Visual Query Editor, SQL Query Editor, and SQL Server Management Studio (SSMS). Its intuitive design makes it suitable for users of all experience levels.
Unlike the SQL Analytics endpoint of a Fabric Lakehouse, the Fabric Warehouse supports both read and write operations, enabling a broader range of T-SQL functionalities, including:
- COPY INTO
- CREATE TABLE AS SELECT (CTAS)
- INSERT…SELECT
- SELECT INTO
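For instance, a CTAS statement in a Fabric Warehouse is plain T-SQL (the table and column names below are illustrative):

```sql
-- Create a new table from the result of a query (CTAS).
CREATE TABLE dbo.sales_summary
AS
SELECT customer_id,
       SUM(amount) AS total_amount
FROM   dbo.sales
GROUP  BY customer_id;
```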
Connecting DBT to a Fabric Data Warehouse
Setting up DBT to work with a Fabric data warehouse involves a few straightforward steps.
Step 1: Install the Required Tools
Before you begin, ensure you have the following installed:
- Python 3.7 or higher: DBT runs on Python, so this is a prerequisite.
- Microsoft ODBC Driver for SQL Server: This driver is necessary for connecting to Fabric.
- dbt-fabric adapter: Install it from the PyPI repository using the `pip install dbt-fabric` command.
Step 2: Create a DBT Project
Once the tools are installed, create a new DBT project using the `dbt init` command. For this example, we’ll name the project `dbt_fabric_demo`. (This assumes you already have a data warehouse set up in Fabric.)
Step 3: Configure the profiles.yml File
Next, configure the `profiles.yml` file to connect DBT to your Fabric warehouse. Here’s an example configuration:
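The values below are placeholders to be replaced with your own workspace details; the field names follow the `dbt-fabric` adapter’s profile format:

```yaml
# profiles.yml -- illustrative sketch; replace the placeholder values
dbt_fabric_demo:
  target: dev
  outputs:
    dev:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: "<your-sql-connection-string>"   # from 'Copy SQL connection string'
      database: "your_warehouse_name"
      schema: dbo
      authentication: CLI                      # Azure CLI authentication
      threads: 4
```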

We are using “Azure CLI” authentication here, as shown in the `profiles.yml`. To use Azure CLI, you will need to install it from the Microsoft site. Other alternatives to Azure CLI authentication include:
- Microsoft Entra ID authentication (this is the default)
- Service Principal authentication
- Environment-based authentication
- VS Code authentication
- Automatic authentication
The host value (the `server` field in `profiles.yml`) is the connection string of the target warehouse. To obtain it, go to your Fabric workspace, click the ellipsis (…) next to the warehouse, and select ‘Copy SQL connection string’.

Step 4: Test the Connection
Once the `profiles.yml` file is configured, you can test the connection using the `dbt debug` command. This command will verify the connection and authentication. If everything is set up correctly, you should see a success message indicating that the connection has been established.

DBT Architecture with Fabric
When using DBT with Fabric, it’s recommended to implement the Medallion architecture, which organizes data into three layers: Bronze, Silver, and Gold. Here’s how this architecture can be applied:
- Bronze Layer (Lakehouse): This layer is used for ingesting raw data from source systems. A Lakehouse is ideal for this purpose because it can handle both structured and unstructured data. If the data isn’t already in Delta format, it can be converted to Delta tables at this stage.
- Silver and Gold Layers (Warehouse): Once the data is in Delta format, it can be transformed and refined in the Silver and Gold layers. The Warehouse is recommended for these layers, as it allows you to leverage T-SQL for writing and transforming data.
This architecture ensures a clean, scalable, and efficient data transformation process, enabling you to handle large volumes of data with ease.
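As a sketch of how a DBT model fits this layering, a Silver-layer model might clean raw Bronze data like so (the source, table, and column names are hypothetical):

```sql
-- models/silver/stg_orders.sql
-- Silver layer: clean and standardize raw data landed in the Bronze Lakehouse.
{{ config(materialized='table') }}

SELECT
    CAST(order_id AS INT)    AS order_id,
    TRIM(customer_name)      AS customer_name,
    CAST(order_date AS DATE) AS order_date
FROM {{ source('bronze', 'raw_orders') }}
WHERE order_id IS NOT NULL
```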

DBT Macros with Fabric
DBT macros are reusable pieces of code that can be shared across projects to perform specific tasks, and they are particularly useful for automating repetitive work. Below, we show an example of a macro that updates statistics after every load.
Example: Statistics Macro
In Fabric, statistics are created on tables to enhance query performance. The following example shows a macro that updates statistics after every data load; run as a post-hook, it ensures that statistics are refreshed automatically after each load.
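A minimal sketch of such a macro follows; the macro name is our own, and `{{ this }}` is DBT’s reference to the current model’s table:

```sql
-- macros/update_statistics.sql
-- Refresh all statistics on the current model's table after it is built.
-- UPDATE STATISTICS <table> is standard T-SQL supported by Fabric Warehouse.
{% macro update_statistics() %}
    UPDATE STATISTICS {{ this }};
{% endmacro %}
```

It can then be wired up as a post-hook, for example with `+post-hook: "{{ update_statistics() }}"` under your project’s models in `dbt_project.yml`, or via a `config(post_hook=...)` in an individual model.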


Benefits of Combining DBT with Fabric
By integrating DBT with Microsoft Fabric, you can streamline your ELT processes and enjoy several benefits, including:
- Data Lineage: Easily track the flow of data through your transformations.
- Dependency Management: Manage dependencies between datasets and transformations seamlessly.
- Native Test Support: Built-in support for data quality tests ensures that your data is accurate and reliable.
- Reusable Macros: Create reusable code snippets to automate repetitive tasks.
- Project-Specific Documentation: Generate documentation for your DBT projects, making it easier to understand and maintain your data pipelines.
Together, DBT and Fabric provide a robust platform for performing transformations at scale, enabling you to handle large datasets efficiently and effectively.
Conclusion
Combining Microsoft Fabric with DBT offers a powerful solution for managing and transforming data at scale. By following the steps outlined in this blog, you can set up a DBT project, connect it to a Fabric data warehouse, and leverage the Medallion architecture to organize your data effectively. Additionally, using DBT macros can help automate tasks like updating statistics, further enhancing performance and scalability.
Whether you’re dealing with structured or unstructured data, the combination of DBT and Fabric provides the tools you need to streamline your ELT processes and derive meaningful insights from your data.