🏗️ FabricForge
Create and configure your Microsoft Fabric Workspaces, Lakehouses and Warehouses
🔐 Prerequisites
🏢 Workspace Configuration
📊 Lakehouses
No lakehouses configured yet. Click "Add Lakehouse" to get started.
🏢 Warehouses
No warehouses configured yet. Click "Add Warehouse" to get started.
📁 Import Configuration
📋 Configuration Preview
❓ Frequently Asked Questions
FabricForge is a client-side web application that runs entirely in your browser. Here's what this means for your security:
- No server storage: We don't have any backend servers storing your credentials
- Direct transmission: Your data is sent directly from your browser to your specified N8N webhook URL
- No intermediaries: No intermediate servers intercept or log your information
- Browser-only: Once you close or refresh the page, all entered data is completely gone
- Open workflow: The N8N workflow JSON is transparent, so you can inspect exactly what happens with your data
Your sensitive credentials (Client ID, Client Secret, Tenant ID, OpenAI API Key) are only used by your own N8N instance to authenticate with Microsoft Fabric APIs and generate code.
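For illustration, the request that reaches your webhook might look like the sketch below. The field names and the `/webhook/fabricforge` path are hypothetical placeholders rather than the app's actual payload contract; FabricForge performs this request from your browser, and Python is used here only to make the shape explicit.

```python
import requests

# Hypothetical payload shape; the real field names depend on the workflow you import.
config = {
    "tenantId": "<your-tenant-id>",
    "clientId": "<your-client-id>",
    "clientSecret": "<your-client-secret>",
    "openAiApiKey": "<your-openai-api-key>",
    "workspaceName": "Analytics",
    "lakehouses": [{"name": "Bronze", "tables": []}],
    "warehouses": [{"name": "Reporting"}],
}

# FabricForge sends this directly from your browser; the webhook path is a placeholder.
resp = requests.post("https://<your-n8n-host>/webhook/fabricforge", json=config, timeout=30)
resp.raise_for_status()
print("Webhook accepted the configuration:", resp.status_code)
```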
The N8N workflow follows a systematic approach to provisioning your Fabric resources: it creates (or reuses) the workspace, adds any lakehouses and warehouses that don't already exist, then generates and runs notebooks to create your tables.
To use FabricForge, you need:
- Microsoft Fabric Capacity: An active Fabric capacity (F2 or higher recommended)
- Azure AD App Registration: Create an app registration in Azure Portal with Fabric API permissions
- Client Credentials: Client ID and Client Secret from your app registration (these drive the token request sketched after this list)
- Tenant Information: Your Azure AD Tenant ID
- User ID: Unique identifier of a user with Fabric workspace creation permissions
- OpenAI API Key: Required for generating PySpark code to create tables in lakehouses
- N8N Instance: A running N8N instance with the workflow imported
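As a reference for the Client ID, Client Secret, and Tenant ID entries above, authenticating against the Fabric API boils down to a standard Azure AD client-credentials token request. This is a minimal sketch of that flow, not the exact N8N node configuration:

```python
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-client-id>"
CLIENT_SECRET = "<your-client-secret>"

# Standard OAuth2 client-credentials flow; the Fabric API uses the
# https://api.fabric.microsoft.com/.default scope.
resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://api.fabric.microsoft.com/.default",
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # attached as a Bearer token on Fabric API calls
```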
The workflow intelligently handles existing resources:
- Workspaces: If a workspace with the same name exists, it uses the existing one
- Lakehouses: Skips creation if a lakehouse with the same name already exists
- Warehouses: The same check applies; existing warehouses are never duplicated
- Notebooks: Only creates notebooks that don't already exist
This prevents errors and ensures idempotent operations: you can safely run the workflow multiple times.
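A minimal sketch of this check-then-create pattern, using the public Fabric REST API endpoints (the workflow's own HTTP nodes may be configured differently, and pagination is ignored for brevity):

```python
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
HEADERS = {"Authorization": "Bearer <access-token-from-the-client-credentials-flow>"}

def ensure_workspace(name: str) -> str:
    """Reuse a workspace with this display name if it exists; create it only if missing."""
    listing = requests.get(f"{FABRIC_API}/workspaces", headers=HEADERS, timeout=30).json()
    for ws in listing.get("value", []):            # pagination ignored for brevity
        if ws["displayName"] == name:
            return ws["id"]                        # existing workspace: nothing to create
    created = requests.post(f"{FABRIC_API}/workspaces", headers=HEADERS,
                            json={"displayName": name}, timeout=30)
    created.raise_for_status()
    return created.json()["id"]

def ensure_lakehouse(workspace_id: str, name: str) -> str:
    """Same check-then-create pattern for lakehouses (warehouses work analogously)."""
    listing = requests.get(f"{FABRIC_API}/workspaces/{workspace_id}/lakehouses",
                           headers=HEADERS, timeout=30).json()
    for lh in listing.get("value", []):
        if lh["displayName"] == name:
            return lh["id"]                        # skip creation: it already exists
    created = requests.post(f"{FABRIC_API}/workspaces/{workspace_id}/lakehouses",
                            headers=HEADERS, json={"displayName": name}, timeout=30)
    created.raise_for_status()
    return created.json()["id"]
```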
Table creation is automated through AI-powered code generation:
- Your table schema (columns and data types) is sent to OpenAI using your API key
- AI generates PySpark code to create Delta tables (see the example after this list)
- Code is embedded into Fabric notebooks
- Notebooks are automatically executed to create the tables
- Tables are saved in Delta format in your lakehouse
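The code that OpenAI generates varies with your schema and the workflow's prompt, but it typically amounts to PySpark along these lines, run inside a Fabric notebook where `spark` is already available. The table and column names here are hypothetical:

```python
# Runs inside a Fabric notebook, where `spark` is a preconfigured SparkSession.
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DateType

# Hypothetical schema; the real one comes from the columns you define in FabricForge.
schema = StructType([
    StructField("customer_id", IntegerType(), nullable=False),
    StructField("customer_name", StringType(), nullable=True),
    StructField("signup_date", DateType(), nullable=True),
])

# Create an empty Delta table in the lakehouse with that schema.
(spark.createDataFrame([], schema)
      .write.format("delta")
      .mode("ignore")          # leave the table untouched if it already exists
      .saveAsTable("customers"))
```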
The workflow maps SQL data types to appropriate Spark types automatically. The OpenAI API key is required for this step.
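The exact mapping lives inside the workflow's prompt and logic; a simplified illustration of the idea might look like this:

```python
# Illustrative only; the workflow's actual mapping is defined in its prompt and code nodes.
SQL_TO_SPARK = {
    "int": "IntegerType()",
    "bigint": "LongType()",
    "bit": "BooleanType()",
    "varchar": "StringType()",
    "nvarchar": "StringType()",
    "decimal": "DecimalType(18, 2)",
    "float": "DoubleType()",
    "date": "DateType()",
    "datetime2": "TimestampType()",
}

def to_spark_type(sql_type: str) -> str:
    base = sql_type.lower().split("(")[0].strip()   # varchar(255) -> varchar
    return SQL_TO_SPARK.get(base, "StringType()")   # fall back to string for unknown types

print(to_spark_type("VARCHAR(255)"))  # StringType()
```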
If you encounter issues:
- Check N8N logs: Open your N8N instance and check the workflow execution logs
- Verify credentials: Ensure all prerequisites are correctly entered
- API permissions: Confirm your Azure AD app has necessary Fabric permissions
- OpenAI API key: Verify your OpenAI API key is valid and has sufficient credits
- Rate limits: The workflow includes delays, but you might hit API limits with large configurations
- Partial completion: Check the Fabric portal; some resources may already have been created
The workflow continues on errors, so partial provisioning is possible. You can safely re-run to complete remaining resources.
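Before re-running, a quick sanity check of the credentials you entered can save time. The standalone script below is only a sketch, separate from the workflow: it confirms that a token can be issued, that the service principal can call the Fabric API, and that the OpenAI key is accepted.

```python
import requests

TENANT_ID, CLIENT_ID, CLIENT_SECRET = "<tenant-id>", "<client-id>", "<client-secret>"
OPENAI_API_KEY = "<openai-api-key>"

# 1. Token request: failures here usually mean a wrong Tenant ID, Client ID, or secret.
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={"grant_type": "client_credentials", "client_id": CLIENT_ID,
          "client_secret": CLIENT_SECRET,
          "scope": "https://api.fabric.microsoft.com/.default"},
    timeout=30,
)
print("Token request:", token.status_code)

# 2. Fabric API call: a 401/403 here points at missing API permissions for the app.
if token.ok:
    ws = requests.get("https://api.fabric.microsoft.com/v1/workspaces",
                      headers={"Authorization": f"Bearer {token.json()['access_token']}"},
                      timeout=30)
    print("Workspace listing:", ws.status_code)

# 3. OpenAI key check: listing models is a cheap way to confirm the key is accepted.
models = requests.get("https://api.openai.com/v1/models",
                      headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
                      timeout=30)
print("OpenAI key:", models.status_code)
```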
Yes! The workflow is completely customizable:
- Import the JSON into your N8N instance
- Modify nodes to fit your specific requirements
- Add additional steps or integrations
- Adjust rate limiting delays if needed
- Change the AI model or prompts for table generation
Just ensure your webhook URL in FabricForge points to your modified workflow.
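For example, if you want to adjust the rate-limiting delays without opening the N8N editor, you can patch the exported JSON directly. The file name and the Wait-node parameter keys below are assumptions; verify them against your own export:

```python
import json

# Assumed file name; use whatever you exported from N8N.
with open("fabricforge-workflow.json", encoding="utf-8") as f:
    workflow = json.load(f)

# Wait nodes carry the rate-limiting delays. The node type string follows N8N's
# standard export format, but the parameter keys can differ between N8N versions,
# so check them in your own export before relying on this.
for node in workflow.get("nodes", []):
    if node.get("type") == "n8n-nodes-base.wait":
        node.setdefault("parameters", {})["amount"] = 10
        print("Updated delay on node:", node.get("name"))

with open("fabricforge-workflow-custom.json", "w", encoding="utf-8") as f:
    json.dump(workflow, f, indent=2)
```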