What is Microsoft Fabric?
Article • 02/07/2025
Note
Are you a new developer working with Fabric? Are you interested in sharing your getting started experience and helping us make improvements? We'd like to talk with you! Sign up here if interested.
Capabilities of Fabric
Microsoft Fabric enhances productivity, data management, and AI integration. Here are
some of its key capabilities:
Fabric integrates workloads like Data Engineering, Data Factory, Data Science, Data Warehouse, Real-Time Intelligence, Industry solutions, Databases, and Power BI into a SaaS platform. Each of these workloads is tailored for a distinct user role, such as data engineers, data scientists, or warehousing professionals, and each serves a specific task.
Advantages of Fabric include:
Data Engineering - Fabric Data Engineering provides a Spark platform with great
authoring experiences. It enables you to create, manage, and optimize
infrastructures for collecting, storing, processing, and analyzing vast data volumes.
Fabric Spark's integration with Data Factory allows you to schedule and orchestrate
notebooks and Spark jobs. For more information, see What is Data engineering in
Microsoft Fabric?
Fabric Data Science - Fabric Data Science enables you to build, deploy, and operationalize machine learning models from Fabric. It integrates with Azure Machine Learning to provide built-in experiment tracking and model registry. Data scientists can enrich organizational data with predictions, and business analysts can integrate those predictions into their BI reports, allowing a shift from descriptive to predictive insights (a minimal notebook sketch follows this list). For more information, see What is Data science in Microsoft Fabric?
Fabric Data Warehouse - Fabric Data Warehouse provides industry-leading SQL performance and scale. It separates compute from storage, enabling independent scaling of both components. Additionally, it natively stores data in the open Delta Lake format. For more information, see What is data warehousing in Microsoft Fabric?
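To make the Data Engineering and Data Science descriptions above concrete, here is a minimal, hypothetical sketch of a Fabric notebook cell: it reads a lakehouse Delta table with Spark, trains a small model, and logs the run with MLflow (the experiment-tracking library used in the Fabric Data Science experience). The table name, column names, and experiment name are placeholders, not part of any real workspace.

```python
# Hypothetical Fabric notebook cell: "sales" and its columns are placeholder names.
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LinearRegression

# Read a Delta table from the default lakehouse attached to the notebook.
# The `spark` session is pre-created in Fabric notebooks.
df = spark.read.table("sales")

# Hand a small sample to pandas for a quick scikit-learn model.
pdf = (df.select("units", "unit_price", "revenue")
         .dropna()
         .limit(10_000)
         .toPandas())

# Track the experiment with MLflow, which backs Fabric's experiment items.
mlflow.set_experiment("revenue-forecast-demo")
with mlflow.start_run():
    features = pdf[["units", "unit_price"]]
    model = LinearRegression().fit(features, pdf["revenue"])
    mlflow.log_metric("r2", model.score(features, pdf["revenue"]))
    mlflow.sklearn.log_model(model, "model")
```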
Microsoft Fabric enables organizations and individuals to turn large and complex data
repositories into actionable workloads and analytics, and is an implementation of data
mesh architecture. For more information, see What is a data mesh?
OneLake
A data lake is the foundation for all Fabric workloads. In Microsoft Fabric, this lake is called OneLake. It's built into the platform and serves as a single store for all organizational data.
OneLake is built on ADLS (Azure Data Lake Storage) Gen2. It provides a single SaaS
experience and a tenant-wide store for data that serves both professional and citizen
developers. It simplifies the user experience by removing the need to understand
complex infrastructure details like resource groups, RBAC, Azure Resource Manager,
redundancy, or regions. You don't need an Azure account to use Fabric.
OneLake prevents data silos by offering one unified storage system that makes data
discovery, sharing, and consistent policy enforcement easy. For more information, see
What is OneLake?
Fabric stores data in OneLake as follows: you can have several workspaces per tenant and multiple lakehouses within each workspace. A lakehouse is a collection of files, folders, and tables that acts as a database over a data lake. To learn more, see What is a lakehouse?
Every developer and business unit in the tenant can create their own workspaces in
OneLake. They can ingest data into lakehouses and start processing, analyzing, and
collaborating on that data—similar to using OneDrive in Microsoft Office.
OneLake lets you instantly mount your existing PaaS storage accounts using the
Shortcut feature. You don't have to migrate your existing data. Shortcuts provide direct
access to data in Azure Data Lake Storage. They also enable easy data sharing between
users and applications without duplicating files. Additionally, you can create shortcuts to
other storage systems, allowing you to analyze cross-cloud data with intelligent caching
that reduces egress costs and brings data closer to compute.
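Because OneLake exposes the same ADLS Gen2 (ABFS) surface, data behind a shortcut can be read with ordinary Spark paths. The sketch below is illustrative only: the workspace, lakehouse, and shortcut names are placeholders, and the onelake.dfs.fabric.microsoft.com path pattern should be confirmed against the OneLake documentation for your environment.

```python
# Hypothetical names: "MyWorkspace", "MyLakehouse", and the "ExternalSales" shortcut are placeholders.
onelake_path = (
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
    "MyLakehouse.Lakehouse/Files/ExternalSales"  # a shortcut under Files/ resolves to the remote store
)

# In a Fabric notebook, `spark` is already available; a shortcut reads like any other folder.
df = spark.read.parquet(onelake_path)
df.show(5)
```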
The Real-Time hub makes it easy to discover, ingest, manage, and consume data-in-motion from a wide variety of sources, so you can collaborate and develop streaming applications in one place. For more information, see What is the Real-Time hub?
Fabric also offers several ways to integrate your own solutions with the platform:
Interop - Integrate your solution with the OneLake Foundation and establish basic connections and interoperability with Fabric.
Develop on Fabric - Build your solution on top of the Fabric platform or seamlessly embed Fabric's functionalities into your existing applications. You can easily use Fabric capabilities with this option.
Build a Fabric workload - Create customized workloads and experiences in Fabric, tailoring your offerings to maximize their impact within the Fabric ecosystem.
Related content
Microsoft Fabric terminology
Create a workspace
Navigate to your items from Microsoft Fabric Home page
End-to-end tutorials in Microsoft Fabric
Microsoft Fabric trial capacity
Microsoft Fabric is provided free of charge when you sign up for a Microsoft Fabric trial
capacity. Your use of the Microsoft Fabric trial capacity includes access to the Fabric
product workloads and the resources to create and host Fabric items. The Fabric trial
lasts for 60 days unless canceled sooner.
Note
With one trial of a Fabric capacity, you get the following:
Full access to all of the Fabric workloads and features, except for a few key features that aren't available on trial capacities: Copilot, Trusted workspace access, and Managed private endpoints.
OneLake storage up to 1 TB.
A license similar to Premium Per User (PPU).
One capacity per trial. Other Fabric capacity trials can be started until a maximum, set by Microsoft, is met.
The ability for users to create Fabric items and collaborate with others in the Fabric trial capacity.
You may already have a license and not realize it. For example, some versions of
Microsoft 365 include a Fabric (Free) or Power BI Pro license. Open Fabric
(app.fabric.microsoft.com) and select your Account manager to see if you already have a
license, and which license it is. Read on to see how to open your Account manager.
Start the Fabric capacity trial
You can start a trial several different ways. The first two methods make you the Capacity
administrator of the trial capacity.
Sign up for a trial capacity. You manage who else can use your trial by giving
coworkers permission to create workspaces in your trial capacity. Or, by assigning
workspaces to the trial capacity, which automatically adds coworkers (with roles in
those workspaces) to the trial capacity.
Attempt to use a Fabric feature. If your organization enabled self-service,
attempting to use a Fabric feature launches a Fabric trial.
Join a trial started by a coworker by adding your workspace to that existing trial capacity. This action is only possible if the owner gives you, or the entire organization, Contributor permissions to the trial.
Follow these steps to start your Fabric capacity trial and become the Capacity administrator of that trial.
1. Open Fabric (app.fabric.microsoft.com) and select your Account manager.
2. In the Account manager, select Free trial. If you don't see Free trial, Start trial, or a Trial status, trials might be disabled for your tenant.
Note
If the Account manager already displays Trial status, you may already have a
Power BI trial or a Fabric (Free) trial in progress. To test this out, attempt to
use a Fabric feature. For more information, see Start using Fabric.
4. Once your trial capacity is ready, you receive a confirmation message. Select Got it
to begin working in Fabric. You're now the Capacity administrator for that trial
capacity. To learn how to share your trial capacity using workspaces, see Share trial
capacities
5. Open your Account manager again. Notice the heading for Trial status. Your
Account manager keeps track of the number of days remaining in your trial. You
also see the countdown in your Fabric menu bar when you work in a product
workload.
Congratulations. You now have a Fabric trial capacity that includes a Power BI individual
trial (if you didn't already have a Power BI paid license) and a Fabric trial capacity. To
share your capacity, see Share trial capacities.
1. From the top right section of the Fabric menu bar, select the cog icon to open Settings.
2. Select Admin portal > Trial. Enabled for the entire organization is set by default.
Enabling Contributor permissions means that any user with an Admin role in a
workspace can assign that workspace to the trial capacity and access Fabric features.
Apply these permissions to the entire organization or apply them to only specific users
or groups.
2. Select the ellipsis (...) and choose Workspace settings > Premium > Trial.
For more information, see Use Workspace settings.
If you're the capacity or Fabric administrator, from the upper right corner of Fabric,
select the gear icon. Select Admin portal. For a Fabric trial, select Capacity settings and
then choose the Trial tab.
End a Fabric trial
End a Fabric capacity trial by canceling it, letting it expire, or purchasing the full Fabric experience. Only capacity and Fabric admins can cancel the trial of a Fabric capacity. Individual users don't have this ability.
One reason to cancel a trial capacity is when the capacity administrator of a trial
capacity leaves the company. Since Microsoft limits the number of trial capacities
available per tenant, you might want to remove the unmanaged trial to make room to
sign up for a new trial.
When you cancel a free Fabric capacity trial, and don't move the workspaces and their
contents to a new capacity that supports Fabric:
Microsoft can't extend the Fabric capacity trial, and you might not be able to start
a new trial using your same user ID. Other users can still start their own Fabric trial
capacity.
All licenses return to their original versions. You no longer have the equivalent of a
PPU license. The license mode of any workspaces assigned to that trial capacity
changes to Power BI Pro.
All Fabric items in the workspaces become unusable and are eventually deleted.
Your Power BI items are unaffected and still available when the workspace license
mode returns to Power BI Pro.
You can't create workspaces that support Fabric capabilities.
You can't share Fabric items, such as machine learning models, warehouses, and
notebooks, and collaborate on them with other Fabric users.
You can't create any other analytics solutions using these Fabric items.
If you want to retain your data and continue to use Microsoft Fabric, purchase a capacity
and migrate your workspaces to that capacity. Or, migrate your workspaces to a
capacity that you already own that supports Fabric items.
For more information, see Canceling, expiring, and closing.
To retain your Fabric items, purchase Fabric before your trial ends.
Select Settings > Admin portal > Capacity settings. Then choose the Trials tab. Select
the cog icon for the trial capacity that you want to delete.
If you don't see the Start trial button in your Account manager:
Your Fabric administrator might have disabled access, in which case you can't start a Fabric trial. To request access, contact your Fabric administrator. You can also start a trial using your own tenant. For more information, see Sign up for Power BI with a new Microsoft 365 account.
You're an existing Power BI trial user, and you don't see Start trial in your Account
manager. You can start a Fabric trial by attempting to create a Fabric item. When
you attempt to create a Fabric item, you receive a prompt to start a Fabric trial. If
you don't see this prompt, it's possible that this action is deactivated by your
Fabric administrator.
If you don't have a work or school account and want to sign up for a free trial, see Sign up for Power BI with a new Microsoft 365 account.
You might not be able to start a trial if your tenant exhausted its limit of trial
capacities. If that is the case, you have the following options:
Request another trial capacity user to share their trial capacity workspace with
you. Give users access to workspaces.
Purchase a Fabric capacity from Azure by performing a search for Microsoft
Fabric.
To increase tenant trial capacity limits, reach out to your Fabric administrator to
create a Microsoft support ticket.
This bug occurs when the Fabric administrator turns off trials after you start a trial. To add your workspace to the trial capacity, open the Admin portal by selecting it from the gear icon in the top menu bar. Then, select Capacity settings > Trial and choose the name of the capacity. If you don't see your workspace assigned, add it here.
What is the region for my Fabric trial capacity?
If you start the trial using the Account manager, your trial capacity is located in the
home region for your tenant. See Find your Fabric home region for information about
how to find your home region, where your data is stored.
Not all regions are available for the Fabric trial. Start by looking up your home region
and then check to see if your region is supported for the Fabric trial. If your home region
doesn't have Fabric enabled, don't use the Account manager to start a trial. To start a
trial in a region that isn't your home region, follow the steps in Other ways to start a
Fabric trial. If you already started a trial from Account manager, cancel that trial and
follow the steps in Other ways to start a Fabric trial instead.
You can't move your organization's tenant between regions by yourself. If you need to
change your organization's default data location from the current region to another
region, you must contact support to manage the migration for you. For more
information, see Move between regions.
To learn more about regional availability for Fabric trials, see Fabric trial capacities are
available in all regions.
How is the Fabric trial different from an individual trial of Power BI paid?
A per-user trial of Power BI paid allows access to the Fabric landing page. Once you sign
up for the Fabric trial, you can use the trial capacity for storing Fabric workspaces and
items and for running Fabric workloads. All rules guiding Power BI licenses and what you
can do in the Power BI workload remain the same. The key difference is that a Fabric
capacity is required to access non-Power BI workloads and items.
Autoscale
The Fabric trial capacity doesn't support autoscale. If you need more compute capacity,
you can purchase a Fabric capacity in Azure.
The Fabric trial is different from a Proof of Concept (POC). A Proof of Concept
(POC) is standard enterprise vetting that requires financial investment and months'
worth of work customizing the platform and using fed data. The Fabric trial is free
for users and doesn't require customization. Users can sign up for a free trial and
start running product workloads immediately, within the confines of available
capacity units.
You don't need an Azure subscription to start a Fabric trial. If you have an existing
Azure subscription, you can purchase a (paid) Fabric capacity.
Trial Capacity administrators can migrate existing workspaces into a trial capacity using
workspace settings and choosing Trial as the license mode. To learn how to migrate
workspaces, see create workspaces.
Related content
Learn about licenses
Preview in Microsoft Fabric
This article describes the meaning of preview in Microsoft Fabric, and explains how
preview experiences and features can be used.
Preview experiences and features are released with limited capabilities, but are made
available on a preview basis so customers can get early access and provide feedback.
Preview experiences and features aren't subject to SLAs. However, Microsoft Support is eager to get your feedback on the preview functionality, and might provide best-effort support in certain cases.
Microsoft Fabric terminology
Learn the definitions of terms used in Microsoft Fabric, including terms specific to Fabric
Data Warehouse, Fabric Data Engineering, Fabric Data Science, Real-Time Intelligence,
Data Factory, and Power BI.
General terms
Capacity: Capacity is a dedicated set of resources that is available at a given time to be used. Capacity defines the ability of a resource to perform an activity or to produce output. Different items consume different amounts of capacity at a given time. Fabric offers capacity through the Fabric SKU and trials. For more information, see What is capacity?
Item: An item is a set of capabilities within an experience. Users can create, edit, and delete items. Each item type provides different capabilities. For example, the Data Engineering experience includes the lakehouse, notebook, and Spark job definition items.
Apache Spark job: A Spark job is part of a Spark application that is run in parallel
with other jobs in the application. A job consists of multiple tasks. For more
information, see Spark job monitoring.
Apache Spark job definition: A Spark job definition is a set of parameters, set by
the user, indicating how a Spark application should be run. It allows you to submit
batch or streaming jobs to the Spark cluster. For more information, see What is an
Apache Spark job definition?
V-order: A write optimization to the parquet file format that enables fast reads and
provides cost efficiency and better performance. All the Fabric engines write v-
ordered parquet files by default.
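V-Order behavior can be toggled per Spark session when writing from Fabric Spark. A minimal sketch follows; the session property name shown is the one commonly documented for Fabric Spark, but treat it as an assumption and confirm it against the current Delta Lake table optimization documentation. The table names are placeholders.

```python
# Assumed Fabric Spark session property controlling V-Order writes; verify the name in the Fabric docs.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")    # enable V-Ordered parquet writes
# spark.conf.set("spark.sql.parquet.vorder.enabled", "false") # or disable for write-heavy staging tables

df = spark.read.table("staging_events")             # placeholder source table
df.write.mode("overwrite").saveAsTable("events")    # written as V-Ordered parquet while enabled
```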
Data Factory
Connector: Data Factory offers a rich set of connectors that allow you to connect
to different types of data stores. Once connected, you can transform the data. For
more information, see connectors.
Data pipeline: In Data Factory, a data pipeline is used for orchestrating data
movement and transformation. These pipelines are different from the deployment
pipelines in Fabric. For more information, see Pipelines in the Data Factory
overview.
Dataflow Gen2: Dataflows provide a low-code interface for ingesting data from
hundreds of data sources and transforming your data. Dataflows in Fabric are
referred to as Dataflow Gen2. Dataflow Gen1 exists in Power BI. Dataflow Gen2
offers extra capabilities compared to Dataflows in Azure Data Factory or Power BI.
You can't upgrade from Gen1 to Gen2. For more information, see Dataflows in the
Data Factory overview.
Fabric Data Warehouse: The Fabric Data Warehouse functions as a traditional data
warehouse and supports the full transactional T-SQL capabilities you would expect
from an enterprise data warehouse. For more information, see Fabric Data
Warehouse.
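Because the warehouse speaks standard T-SQL over its SQL endpoint, any SQL client can query it. Below is a minimal, assumption-laden sketch using pyodbc with Microsoft Entra interactive authentication; the server name is a placeholder for the SQL connection string copied from the Fabric portal, the database and table names are hypothetical, and the ODBC Driver 18 for SQL Server must be installed locally.

```python
# Hypothetical connection: replace the server with your warehouse's SQL connection string
# (from the Fabric portal) and the database with your warehouse name.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com,1433;"
    "Database=MyWarehouse;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

cur = conn.cursor()
cur.execute("SELECT TOP 10 * FROM dbo.DimCustomer;")  # placeholder table
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```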
Real-Time Intelligence
Activator: Activator is a no-code, low-code tool that allows you to create alerts,
triggers, and actions on your data. Activator is used to create alerts on your data
streams. For more information, see Activator.
KQL Database: The KQL Database holds data in a format that you can execute KQL
queries against. KQL databases are items under an Eventhouse. For more
information, see KQL database.
KQL Queryset: The KQL Queryset is the item used to run queries, view results, and
manipulate query results on data from your Data Explorer database. The queryset
includes the databases and tables, the queries, and the results. The KQL Queryset
allows you to save queries for future use, or export and share queries with others.
For more information, see Query data in the KQL Queryset
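Outside the Queryset UI, the same KQL can be run programmatically. The sketch below uses the azure-kusto-data Python package against a KQL database's query URI; the cluster URI, database, and table names are placeholders, and Azure CLI authentication is only one of several supported modes.

```python
# pip install azure-kusto-data
# The cluster URI is a placeholder for the "Query URI" shown on the KQL database in Fabric.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://<your-eventhouse>.kusto.fabric.microsoft.com"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# Hypothetical table name; any KQL that works in a Queryset works here too.
query = "MyEvents | summarize count() by bin(Timestamp, 1h) | top 10 by count_"
response = client.execute("MyKqlDatabase", query)

for row in response.primary_results[0]:
    print(row.to_dict())
```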
Real-Time hub
Real-Time hub: Real-Time hub is the single place for all data-in-motion across
your entire organization. Every Microsoft Fabric tenant is automatically provisioned
with the hub. For more information, see Real-Time hub overview.
OneLake
Shortcut: Shortcuts are embedded references within OneLake that point to other
file store locations. They provide a way to connect to existing data without having
to directly copy it. For more information, see OneLake shortcuts.
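Shortcuts can also be managed programmatically through the OneLake shortcuts REST API mentioned in the What's New section below. A hedged sketch of listing a lakehouse's shortcuts follows; the endpoint shape and the way the bearer token is obtained are assumptions to verify against the Microsoft Fabric REST API reference, and the workspace and item IDs are placeholders.

```python
# Assumed endpoint shape for the OneLake shortcuts REST API; verify against the
# Microsoft Fabric REST API reference before relying on it.
import requests

WORKSPACE_ID = "<workspace-guid>"                                   # placeholder
LAKEHOUSE_ID = "<lakehouse-item-guid>"                              # placeholder
TOKEN = "<bearer token scoped to https://api.fabric.microsoft.com>" # e.g. obtained via azure-identity

url = (
    "https://api.fabric.microsoft.com/v1/"
    f"workspaces/{WORKSPACE_ID}/items/{LAKEHOUSE_ID}/shortcuts"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for shortcut in resp.json().get("value", []):
    print(shortcut.get("name"), "->", shortcut.get("target"))
```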
Related content
Navigate to your items from Microsoft Fabric Home page
End-to-end tutorials in Microsoft Fabric
What's new in Microsoft Fabric?
This page is continuously updated with a recent review of what's new in Microsoft
Fabric.
To follow the latest in Fabric news and features, see the Microsoft Fabric Updates
Blog .
For community, marketing, case studies, and industry news, see the Microsoft
Fabric Blog .
Follow the latest in Power BI at What's new in Power BI?
For older updates, review the Microsoft Fabric What's New archive.
AutoML code-first preview: In Fabric Data Science, the new AutoML feature enables automation of your machine learning workflow. AutoML, or Automated Machine Learning, is a set of techniques and tools that automate parts of that workflow.

AutoML low code user experience in Fabric (preview): AutoML, or Automated Machine Learning, is a process that automates the time-consuming and complex tasks of developing machine learning models. The new low code AutoML experience supports a variety of tasks, including regression, forecasting, classification, and multi-class classification. To get started, see Create models with Automated ML (preview).

Azure Data Factory item: You can now bring your existing Azure Data Factory (ADF) to your Fabric workspace. This new preview capability allows you to connect to your existing Azure Data Factory from your Fabric workspace. Select "Create Azure Data Factory" inside of your Fabric Data Factory workspace, and you can manage your Azure data factories directly from the Fabric workspace.

Capacity pools preview: Capacity administrators can now create custom pools (preview) based on their workload requirements, providing granular control over compute resources. Custom pools for Data Engineering and Data Science can be set as Spark Pool options within Workspace Spark Settings and environment items.

Copy job: The Copy job (preview) in Data Factory has advantages over the Copy activity. For more information, see Announcing Preview: Copy Job in Microsoft Fabric. For a tutorial, see Learn how to create a Copy job (preview) in Data Factory for Microsoft Fabric.

Dataflow Gen2 CI/CD support: CI/CD and Git integration are now supported for Dataflow Gen2. For more information, see Dataflow Gen2 CI/CD support.

Data Factory Apache Airflow jobs preview: Apache Airflow job (preview) in Data Factory, powered by Apache Airflow, offers a seamless authoring, scheduling, and monitoring experience for Python-based data processes defined as Directed Acyclic Graphs (DAGs). For more information, see Quickstart: Create an Apache Airflow Job.

Data pipeline capabilities in Copilot for Data Factory: The new Data pipeline capabilities in Copilot for Data Factory are now available in preview. These features function as an AI expert to help users build, troubleshoot, and maintain data pipelines.

Data Wrangler for Spark DataFrames preview: Data Wrangler on Spark DataFrames is in preview. Users can now edit Spark DataFrames in addition to pandas DataFrames with Data Wrangler.

Data Science AI skill (preview): You can now build your own generative AI experiences over your data in Fabric with the AI skill (preview). You can build question and answering AI systems over your Lakehouses and Warehouses. For more information, see Introducing AI Skills in Microsoft Fabric: Now in Preview. To get started, try the AI skill example with the AdventureWorks dataset (preview).

Delta column mapping in the SQL analytics endpoint: The SQL analytics endpoint now supports Delta tables with column mapping enabled. For more information, see Delta column mapping and Limitations of the SQL analytics endpoint. This feature is currently in preview.

Fabric gateway enables OneLake shortcuts to on-premises data: Connect to on-premises data sources with a Fabric on-premises data gateway on a machine in your environment, with networking visibility of your S3 compatible or Google Cloud Storage data source. Then, you create your shortcut and select that gateway. For more information, see Create shortcuts to on-premises data.

Fabric Spark connector for Fabric Data Warehouse in Spark runtime (preview): The Spark connector for Data Warehouse enables a Spark developer or a data scientist to access and work on data from a warehouse or SQL analytics endpoint of the lakehouse (either from within the same workspace or from across workspaces) with a simplified Spark API.

Fabric Spark Diagnostic Emitter (preview): The Fabric Apache Spark Diagnostic Emitter (preview) allows Apache Spark users to collect logs, event logs, and metrics from their Spark applications and send them to various destinations, including Azure Event Hubs, Azure Storage, and Azure Log Analytics.

High concurrency mode for Notebooks in Pipelines (preview): High concurrency mode for Notebooks in Pipelines enables users to share Spark sessions across multiple notebooks within a pipeline. With high concurrency mode, users can trigger pipeline jobs, and these jobs are automatically packed into existing high concurrency sessions.

Iceberg data in OneLake using Snowflake and shortcuts (preview): You can now consume Iceberg-formatted data across Microsoft Fabric with no data movement or duplication, plus Snowflake has added the ability to write Iceberg tables directly to OneLake. For more information, see Use Iceberg tables with OneLake.

Invoke remote pipeline (preview) in Data pipeline: You can now use the Invoke Pipeline (preview) activity to call pipelines from Azure Data Factory or Synapse Analytics pipelines. This feature allows you to utilize your existing ADF or Synapse pipelines inside of a Fabric pipeline by calling them inline through this new Invoke Pipeline activity.

JSON Aggregate support (preview): Fabric warehouses now support JSON aggregate functions in preview: JSON_ARRAYAGG and JSON_OBJECTAGG.

Lakehouse support for git integration and deployment pipelines (preview): The Lakehouse now integrates with the lifecycle management capabilities in Microsoft Fabric, providing standardized collaboration between all development team members throughout the product's life. Lifecycle management facilitates an effective product versioning and release process by continuously delivering features and bug fixes into multiple environments.

Livy REST API (preview): The Fabric Livy endpoint lets users submit and execute their Spark code on the Spark compute within a designated Fabric workspace, eliminating the need to create a Notebook or Spark Job Definition item. The Livy API offers the ability to customize the execution environment through its integration with the Environment.

Managed virtual networks (preview): Managed virtual networks are virtual networks that are created and managed by Microsoft Fabric for each Fabric workspace.

Microsoft 365 connector now supports ingesting data into Lakehouse (preview): The Microsoft 365 connector now supports ingesting data into Lakehouse tables.

Microsoft Fabric Admin APIs: Fabric Admin APIs are designed to streamline administrative tasks. The initial set of Fabric Admin APIs is tailored to simplify the discovery of workspaces, Fabric items, and user access details.

Mirroring in Microsoft Fabric preview: With database mirroring in Fabric, you can easily bring your databases into OneLake in Microsoft Fabric, enabling seamless zero-ETL, near real-time insights on your data and unlocking warehousing, BI, AI, and more. For more information, see What is Mirroring in Fabric?

Mirroring CI/CD (preview): Mirroring now supports CI/CD as a preview feature. You can integrate Git for source control and utilize ALM Deployment Pipelines, streamlining the deployment process and ensuring seamless updates to mirrored databases.

Nested common table expressions (CTEs) (preview): Fabric Warehouse and SQL analytics endpoint both support standard, sequential, and nested CTEs. While CTEs are generally available in Microsoft Fabric, nested common table expressions (CTEs) in Fabric data warehouse are currently a preview feature.

Notebook debug within vscode.dev (preview): You can now place breakpoints and debug your Notebook code with the Synapse VS Code - Remote extension in vscode.dev. This update first starts with the Fabric Runtime 1.3 (GA).

Notebook version history (preview): Fabric notebook version history provides robust built-in version control capabilities, including automatic and manual checkpoints, tracked changes, version comparisons, and previous version restore. For more information, see Notebook version history.

OneLake data access roles: OneLake data access roles for lakehouse are in preview. Role permissions and user/group assignments can be easily updated through a new folder security user interface.

OneLake SAS (preview): Support for short-lived, user-delegated OneLake SAS is now in preview. This functionality allows applications to request a User Delegation Key backed by Microsoft Entra ID, and then use this key to construct a OneLake SAS token. This token can be handed off to provide delegated access to another tool, node, or user, ensuring secure and controlled access.

Open mirroring (preview): Open mirroring enables any application to write change data directly into a mirrored database in Fabric, based on the open mirroring public APIs and approach. Open mirroring is designed to be extensible, customizable, and open. It's a powerful feature that extends mirroring in Fabric based on the open Delta Lake table format. To get started, see Tutorial: Configure Microsoft Fabric open mirrored databases.

Prebuilt Azure AI services in Fabric preview: The preview of prebuilt AI services in Fabric is an integration with Azure AI services, formerly known as Azure Cognitive Services. Prebuilt Azure AI services allow for easy enhancement of data with prebuilt AI models without any prerequisites. Currently, prebuilt AI services are in preview and include support for the Microsoft Azure OpenAI Service, Azure AI Language, and Azure AI Translator.

Purview Data Loss Prevention policies have been extended to Fabric lakehouses: Extending Microsoft Purview's Data Loss Prevention (DLP) policies into Fabric lakehouses is now in preview.

Purview Data Loss Prevention policies now support the restrict access action for semantic models: Restricting access based on sensitive content for semantic models, now in preview, helps you to automatically detect sensitive information as it is uploaded into Fabric lakehouses and semantic models.

Python Notebook (preview): Python Notebooks are for BI Developers and Data Scientists working with smaller datasets using Python as their primary language. To get started, see Use Python experience on Notebook.

Real-Time Dashboards and underlying KQL databases access separation (preview): With separate permissions for dashboards and underlying data, administrators now have the flexibility to allow users to view dashboards without giving access to the raw data.

Reserve maximum cores for jobs (preview): A new workspace-level setting allows you to reserve maximum cores for your active jobs for Spark workloads. For more information, see High concurrency mode in Apache Spark for Fabric.

REST APIs for connections and gateways (preview): REST APIs for connections and gateways are now in preview. These new APIs allow developers to programmatically manage and interact with connections and gateways within Fabric.

REST APIs for Fabric Data Factory pipelines (preview): The REST APIs for Fabric Data Factory pipelines are now in preview. The Fabric data pipeline public REST APIs enable you to extend the built-in capability in Fabric to create, read, update, delete, and list pipelines.

Secure Data Streaming with Managed Private Endpoints in Eventstream (preview): By creating a Fabric Managed Private Endpoint, you can now securely connect Eventstream to your Azure services, such as Azure Event Hubs or IoT Hub, within a private network or behind a firewall. For more information, see Secure Data Streaming with Managed Private Endpoints in Eventstream (Preview).

Semantic model refresh activity (preview): Use the Semantic model refresh activity to refresh a Power BI Dataset (Preview), the most effective way to refresh your Fabric semantic models. For more information, see New Features for Fabric Data Factory Pipelines Announced at Ignite.

Share the Fabric AI skill (preview): The Share capability for the Fabric AI skill (preview) allows you to share the AI Skill with others using a variety of permission models.

Spark Run Series Analysis preview: The Spark Monitoring Run Series Analysis features allow you to analyze the run duration trend and performance comparison for Pipeline Spark activity recurring run instances and repetitive Spark run activities, from the same Notebook or Spark Job Definition.

Splunk add-on preview: The Microsoft Fabric add-on for Splunk allows users to ingest logs from the Splunk platform into a Fabric KQL DB using the Kusto Python SDK.

SQL database support for tenant level private links (preview): You can use tenant level private links to provide secure access for data traffic in Microsoft Fabric, including SQL database (in preview). For more information, see Set up and use private links and Blog: Tenant Level Private Link (Preview).

Task flows in Microsoft Fabric (preview): The preview of task flows in Microsoft Fabric is enabled for all Microsoft Fabric users. With Task flows (preview), when designing a data project, you no longer need to use a whiteboard to sketch out the different parts of the project and their interrelationships. Instead, you can use a task flow to build and bring this key information into the project itself.

varchar(max) and varbinary(max) support in preview: Support for the varchar(max) and varbinary(max) data types in Warehouse is now in preview. For more information, see Announcing public preview of VARCHAR(MAX) and VARBINARY(MAX) types in Fabric Data Warehouse.

Terraform Provider for Fabric (preview): The Terraform Provider for Microsoft Fabric is now in preview. The Terraform Provider for Microsoft Fabric supports the creation and management of many Fabric resources. For more information, see Announcing the new Terraform Provider for Microsoft Fabric.

T-SQL support in Fabric notebooks (preview): The T-SQL notebook feature in Microsoft Fabric (preview) lets you write and run T-SQL code within a notebook. You can use T-SQL notebooks to manage complex queries and write better markdown documentation. They also allow direct execution of T-SQL on a connected warehouse or SQL analytics endpoint. To learn more, see T-SQL support in Microsoft Fabric notebooks.

Warehouse source control (preview): Using Source control with Warehouse (preview), you can manage development and deployment of versioned warehouse objects. You can use the SQL Database Projects extension available inside Azure Data Studio and Visual Studio Code. For more information on warehouse source control, see CI/CD with Warehouses in Microsoft Fabric.
January 2025 - Real-Time Intelligence ALM and REST API GA: Application Lifecycle Management (ALM) and Fabric REST APIs are now generally available for all Real-Time Intelligence items: Eventstream, Eventhouse, KQL Database, Real-Time Dashboard, Query set, and Data Activator. ALM includes both deployment pipelines and Git integration. REST APIs allow you to programmatically create, read, update, and delete items.

January 2025 - Warehouse restore points and restore in place: You can now create restore points and perform an in-place restore of a warehouse to a past point in time. Restore in-place is an essential part of data warehouse recovery, which allows you to restore the data warehouse to a prior known reliable state by replacing or overwriting the existing data warehouse from which the restore point was created.

November 2024 - OneLake external data sharing (GA): OneLake external data sharing makes it possible for Fabric users to share data from within their Fabric tenant with users in another Fabric tenant.

November 2024 - GraphQL API in Microsoft Fabric GA: The API for GraphQL, now generally available, is a data access layer that allows us to query multiple data sources quickly and efficiently in Fabric. For more information, see What is Microsoft Fabric API for GraphQL?

November 2024 - Fabric workload dev kit (GA): The Microsoft Fabric workload development kit is now generally available. This robust developer toolkit is for designing, developing, and interoperating with Microsoft Fabric using frontend SDKs and backend REST APIs.

November 2024 - Mirroring for Azure SQL Database GA: With Azure SQL Database mirroring in Fabric, you can easily replicate data from Azure SQL Database into OneLake in Microsoft Fabric.

November 2024 - Real-Time hub: Real-Time hub is now generally available. For more information, see Introduction to Fabric Real-Time hub.

October 2024 - Notebook Git integration: Notebook Git integration now supports persisting the mapping relationship of the attached Environment when syncing to a new workspace. For more information, see Notebook source control and deployment.

October 2024 - Notebook in Deployment Pipeline: Now you can also use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they're deployed, such as changing the default Lakehouse of a Notebook. Get started with deployment pipelines, and Notebook shows up in the deployment content automatically.

September 2024 - Mirroring for Snowflake: With Mirroring for Snowflake in Fabric, you can easily bring your Snowflake data into OneLake. For more information, see Mirroring Snowflake.

September 2024 - Copilot for Data Factory: Copilot for Data Factory is now generally available and included in the Dataflow Gen2 experience. For more information, see Copilot for Data Factory overview.

September 2024 - Fast Copy in Dataflow Gen2: The Fast copy feature in Dataflows Gen2 is now generally available. For more information, read Announcing the General Availability of Fast Copy in Dataflows Gen2.

September 2024 - Fabric Pipeline Integration in On-premises Data Gateway GA: On-premises connectivity for Data pipelines in Microsoft Fabric is now generally available. Learn How to access on-premises data sources in Data Factory for Microsoft Fabric.

September 2024 - Fabric Runtime 1.3: Fabric Runtime 1.3 (GA) includes Apache Spark 3.5, Delta Lake 3.1, R 4.4.1, Python 3.11, support for Starter Pools, integration with Environment, and library management capabilities.

September 2024 - OneLake Shortcuts API: REST APIs for OneLake Shortcuts, which allow programmatic creation and management of shortcuts, are now generally available. You can now programmatically create, read, and delete OneLake shortcuts. For example, see Use OneLake shortcuts REST APIs.
For older general availability (GA) announcements, review the Microsoft Fabric What's
New archive.
Community
This section summarizes new Microsoft Fabric community opportunities for prospective
and current influencers and MVPs.
Important
Join us for FabCon 2025 in Las Vegas from March 31 to April 2 for the biggest-
ever FabCon. Register and use code MSCUST for a $150 discount!
December 2024 - Announcing the winners of the Microsoft Fabric and AI Learning Hackathon! See the winners of the Microsoft Fabric Focused Hackathon event, where we partnered with DevPost to challenge the world to build the next wave of innovative AI powered data analytics applications with Microsoft Fabric!

October 2024 - Fabric Influencers Spotlight October 2024: Check out Microsoft MVPs & Fabric Super Users doing amazing work in October 2024 on all aspects of Microsoft Fabric.

October 2024 - Microsoft Fabric and AI Learning Hackathon: Copilot in Fabric: As part of the Microsoft Fabric and AI Learning Hackathon, read this guide to the various capabilities that Copilot offers in Microsoft Fabric, empowering you to enhance productivity and streamline your workflows.

October 2024 - Get certified in Microsoft Fabric, for free! For a limited time, the Microsoft Fabric Community team is offering 5,000 free DP-600 exam vouchers to eligible Fabric Community members. Complete your exam by the end of the year and join the ranks of certified experts.

October 2024 - FabCon Europe 2024: Read a recap of Europe's first Fabric Community Conference and a recap of Data Factory announcements.

October 2024 - Fabric Influencers Spotlight September 2024: The Fabric Influencers Spotlight September 2024 shines a bright light on the places on the internet where Microsoft MVPs & Fabric Super Users are doing some amazing work on all aspects of Microsoft Fabric.

September 2024 - Announcing: The Microsoft Fabric & AI Learning Hackathon: Get ready for the Microsoft Fabric & AI Learning Hackathon! We're calling all Data/AI Enthusiasts.
For older updates, review the Microsoft Fabric What's New archive.
Power BI
Important
If you're accessing Power BI on a web browser version older than Chrome 94, Microsoft Edge 94, Safari 16.4, Firefox 93, or equivalent, you need to upgrade your web browser to a newer version by August 31, 2024. Using an outdated browser version after this date can prevent you from accessing features in Power BI.
Updates to Power BI Desktop and the Power BI service are summarized at What's new in
Power BI?
October 2024 - Microsoft Fabric and AI Learning Hackathon: Copilot in Fabric: As part of the Microsoft Fabric and AI Learning Hackathon, read this guide to the various capabilities that Copilot offers in Microsoft Fabric, empowering you to enhance productivity and streamline your workflows.

October 2024 - Use Azure OpenAI to turn whiteboard sketches into data pipelines: Read this blog to learn how to turn whiteboard sketches into data pipelines, using the GPT-4o model through Azure OpenAI Service.

September 2024 - Creating a real time dashboard by Copilot: Copilot can review a table and automatically create a dashboard with insights and a profile of the data with a sample.

September 2024 - Copilot in Dataflow Gen2 GA: Copilot for Data Factory is now generally available and included in the Dataflow Gen2 experience. For more information, see Copilot for Data Factory overview.

September 2024 - Copilot for Data Warehouse: Copilot for Data Warehouse is now available, offering the Copilot chat pane, quick actions, and code completions. For more information and sample scenarios, see Announcing the Preview of Copilot for Data Warehouse in Microsoft Fabric.
For older updates, review the Microsoft Fabric What's New archive.
January 2025 - Dataflow Gen2 CI/CD support (preview): CI/CD and Git integration are now supported for Dataflow Gen2, as a preview feature. For more information, see Dataflow Gen2 CI/CD support.

December 2024 - Data Factory Announcements at Ignite 2024 Recap: A couple of weeks ago we had such an exciting week for Fabric during the Ignite Conference, filled with several product announcements and sneak previews of upcoming new features for Data Factory in Fabric.

November 2024 - REST APIs for connections and gateways (preview): REST APIs for connections and gateways are now in preview. These new APIs allow developers to programmatically manage and interact with connections and gateways within Fabric.

November 2024 - Iceberg format via Azure Data Lake Storage Gen2 Connector in Data pipeline: Fabric Data Factory now supports writing data in Iceberg format via the Azure Data Lake Storage Gen2 connector in Data pipeline. For more information, see Iceberg format for Data Factory in Microsoft Fabric.

November 2024 - Data Factory Copy Job - CI/CD now available: CI/CD for Copy job (preview) in Data Factory in Microsoft Fabric is now available. Copy Job now supports Git Integration and Deployment Pipeline.

November 2024 - Semantic model refresh activity (preview): Use the Semantic model refresh activity to refresh a Power BI Dataset (Preview), the most effective way to refresh your Fabric semantic models. For more information, see New Features for Fabric Data Factory Pipelines Announced at Ignite.

November 2024 - New connectors for Fabric SQL database: In Data Factory, both data pipeline and Dataflow Gen2 now natively support the SQL database in Fabric (Preview) connector as source and destination. More connector updates for MariaDB, Snowflake, Dataverse, and PostgreSQL were also announced.

November 2024 - OneLake catalog: The OneLake data hub has been rebranded as the OneLake catalog in the modern Get Data experience. When you use Get data inside Pipeline, Copy job, Mirroring, and Dataflow Gen2, you'll find the OneLake data hub has been renamed to OneLake catalog.

November 2024 - Data pipeline capabilities in Copilot for Data Factory (preview): The new Data pipeline capabilities in Copilot for Data Factory are now available in preview. These features function as an AI expert to help users build, troubleshoot, and maintain data pipelines.

November 2024 - Legacy Timestamp Support in Native Execution Engine for Fabric Runtime 1.3: The recent update to the Native Execution Engine on Fabric Runtime 1.3 brings support for legacy timestamp handling, allowing seamless processing of timestamp data created by different Spark versions. Read to learn why legacy timestamp support matters.

November 2024 - Dataflow Gen2 CI/CD, Git source control integration and Public APIs support are now in preview: With this new set of features, you can now seamlessly integrate your dataflow with your existing CI/CD pipelines and version control of your workspace in Fabric. This integration allows for better collaboration, versioning, and automation of your deployment process across dev, test, and production environments. For more information, see Dataflow Gen2 with CI/CD and Git integration support (preview).

October 2024 - New Features and Enhancements for Virtual Network Data Gateway: We're excited to announce several powerful updates to the Virtual Network (VNET) Data Gateway, designed to further enhance performance and improve the overall user experience.

October 2024 - Recap of Data Factory Announcements at Fabric Community Conference Europe: Read a recap of Data Factory announcements from Fabric Community Conference Europe 2024.

September 2024 - Copilot in Dataflow Gen2 GA: Copilot for Data Factory is now generally available and included in the Dataflow Gen2 experience. For more information, see Copilot for Data Factory overview.

September 2024 - Fast Copy in Dataflow Gen2 GA: Fast copy in Dataflows Gen2 is now generally available. For more information, read Announcing the General Availability of Fast Copy in Dataflows Gen2.

September 2024 - Invoke remote pipeline (preview) in Data pipeline: You can now use the Invoke Pipeline (preview) activity to call pipelines from Azure Data Factory or Synapse Analytics pipelines. This feature allows you to utilize your existing ADF or Synapse pipelines inside of a Fabric pipeline by calling them inline through this new Invoke Pipeline activity.

September 2024 - Spark Job environment parameters: You can now reuse existing Spark sessions with Session tags. In the Fabric Spark Notebook activity, tag your Spark session, then reuse the existing session using that same tag.

September 2024 - Azure Data Factory item in Fabric (preview): You can now bring your existing Azure Data Factory (ADF) to your Fabric workspace. This new preview capability allows you to connect to your existing Azure Data Factory from your Fabric workspace.

September 2024 - Copy job (preview): The Copy job (preview) has advantages over the legacy Copy activity. For more information, see Announcing Preview: Copy Job in Microsoft Fabric. For a tutorial, see Learn how to create a Copy job (preview) in Data Factory for Microsoft Fabric.

September 2024 - Storage Integration Support in Snowflake Connector for Fabric Data Factory: You can now connect Snowflake with external storage solutions (such as Azure Blob Storage) using a secure and centralized approach. For more information, see Snowflake SQL storage integration.

September 2024 - New Data Factory Connectors Released in Q3 2024: New Data Factory connectors include Salesforce, Azure MySQL Database, and Azure Cosmos DB for MongoDB.
For older updates, review the Microsoft Fabric What's New archive.
January 2025 - Enhancing data quality with Copilot for Data Factory: Read this blog for a guide on using Copilot for Data Factory to clean and transform data.

November 2024 - Boosting Data Ingestion in Data Factory: Continuous Innovations in Performance Optimization: Here's a closer look at how recent advancements are transforming data ingestion in Data Factory.

November 2024 - Copy Job upsert to SQL & overwrite to Fabric Lakehouse: The Copy Job simplifies your data ingestion with a non-compromising experience from any source to any destination. By default, Copy Job appends data to your destination.

September 2024 - Integrate your SAP data into Microsoft Fabric: Read an overview of SAP data options in Microsoft Fabric, along with some guidance on the respective use cases.
ノ Expand table
January Notebook and Spark You can now run a Notebook/Spark Job Definition
2025 Job definition execution under the credentials of a service principal .
execution with service Use the Fabric Job Scheduler API with a service principal's
principal access token, to run the Spark Job within the security
context of that service principal.
January Building Apps with Microsoft Fabric has an API for GraphQL to build your
2025 Microsoft Fabric API data applications, enabling you to pull data from sources
for GraphQL such as Data Warehouses, Lakehouse, Mirrored
Databases, and DataMart in Microsoft Fabric.
January Efficient log For a tutorial and walkthrough of efficient log files
2025 management with collection processing and analysis with Real-Time
Microsoft Fabric Intelligence, read this new blog post on Efficient log
management with Microsoft Fabric .
January Folder security within a Now you can define security on any subfolder within the
2025 shortcut in OneLake shortcut root. For more information and an example, see
Define security on folders within a shortcut using
OneLake data access roles .
Month Feature Learn more
December REST API for Livy The Fabric Livy endpoint lets users submit and execute
2024 (preview) their Spark code on the Spark compute within a
designated Fabric workspace, eliminating the need to
create a Notebook or Spark Job Definition items. The Livy
API offers the ability to customize the execution
environment through its integration with the
Environment .
December Notebook version Fabric notebook version history provides robust built-
2024 history (preview) in version control capabilities, including automatic and
manual checkpoints, tracked changes, version
comparisons, and previous version restore. For more
information, see Notebook version history.
December Python Notebook Python Notebooks are for BI Developers and Data
2024 (preview) Scientists working with smaller datasets using Python as
their primary language. To get started, see Use Python
experience on Notebook.
November The new OneLake The OneLake catalog is the next evolution of the
2024 catalog OneLake data hub . For more information about the
new catalog, Discover and explore Fabric items in the
OneLake catalog.
November OneLake external data OneLake external data sharing, now generally available,
2024 sharing (GA) makes it possible for Fabric users to share data from
within their Fabric tenant with users in another Fabric
tenant.
November Purview Data Loss Restricting access based on sensitive content for
2024 Prevention policies semantic models, now in preview, helps you to
now support the automatically detect sensitive information as it is
restrict access action uploaded into Fabric lakehouses and semantic models .
for semantic models
November Iceberg data in You can now consume Iceberg-formatted data across
2024 OneLake using Microsoft Fabric with no data movement or
Snowflake and duplication , plus Snowflake has added the ability to
shortcuts (preview) write Iceberg tables directly to OneLake. For more
information, see Use Iceberg tables with OneLake.
Month Feature Learn more
November Notebook display The new and improved chart view brings multiple new
2024 chart upgrade capabilities to the notebook display. To access the new
chart view just open your Fabric notebook and run the
display(df) statement.
November Jar libraries Java Archive (JAR) files are a popular packaging format
2024 used in the Java ecosystem, and are now supported in
Fabric Environments.
November Legacy Timestamp The recent update to Native Execution Engine on Fabric
2024 Support in Native Runtime 1.3 brings support for legacy timestamp
Execution Engine for handling, allowing seamless processing of timestamp
Fabric Runtime 1.3 data created by different Spark versions. Read to learn
why legacy timestamp support matters .
October Use OneLake shortcuts Learn how OneLake capacity consumption works when
2024 to access data across accessing data through a shortcut, particularly across
capacities: Even when capacities .
the producing capacity
is paused
October Purview Data Loss Extending Microsoft Purview's Data Loss Prevention (DLP)
2024 Prevention policies policies into Fabric lakehouses is now in preview.
have been extended to
Fabric lakehouses
October API for GraphQL Service Principal Names (SPN) support for API for
2024 support for Service GraphQL offers organizations looking to integrate their
Principal Names apps with API for GraphQL in Microsoft Fabric tie
(SPNs) seamlessly with their enterprise identity and access
management systems. For more information, see Service
Principal Names (SPNs) in Fabric API for GraphQL .
October Automatic code Fabric API for GraphQL now adds the ability to
2024 generation in API for automatically generate Python and Node.js code
Month Feature Learn more
October Notebook Git Notebook Git integration now supports persisting the
2024 integration GA mapping relationship of the attached Environment when
syncing to new workspace. For more information, see
Notebook source control and deployment
October Notebook in Now you can also use notebooks to deploy your code
2024 deployment pipeline across different environments , such as development,
GA test, and production. You can also use deployment rules
to customize the behavior of your notebooks when
they're deployed, such as changing the default
Lakehouse of a Notebook. Get started with deployment
pipelines, and Notebook shows up in the deployment
content automatically.
October Notebook in Org APP The Notebook feature is now supported in Org APP. You
2024 can easily embed Notebook code and markdown cells,
visuals, tables, charts, and widgets in OrgAPP, as a
practical storytelling tool.
October Notebook onboarding The new Fabric Notebook Onboarding Tour is now
2024 tour available. This guided tour is designed to help you get
started with the essential Notebook features and learn
the new capabilities.
October Notebook mode The Notebook mode switcher provides flexible access
2024 switcher modes (Develop, Run Only, Edit, View) for your
notebooks, which can help you easily manage the
permissions to the notebook and the corresponding
view.
October Free selection support The free selection function on the rich dataframe preview
2024 on display() table view in the notebook can improve the data analysis
experience. To see the new features, read Free selection
support on display() table view .
October Filter, sort and search Sorting, Filtering, and Searching capabilities make data
2024 your Lakehouse exploration and analysis more efficient by allowing you
objects to quickly retrieve the information you need based on
specific criteria, right within the Lakehouse environment.
September Fabric Runtime 1.3 GA Fabric Runtime 1.3 (GA), now generally available, includes
2024 Apache Spark 3.5, Delta Lake 3.1, R 4.4.1, Python 3.11,
support for Starter Pools, integration with Environment,
and library management capabilities. For more
information, see Fabric Runtime 1.3 is Generally
Available! .
September 2024: Native Execution Engine on Runtime 1.3 (preview). The native execution engine for Fabric Spark on Fabric Runtime 1.3 is now available in preview, offering superior query performance across data processing, ETL, data science, and interactive queries. No code changes are required to speed up the execution of your Apache Spark jobs when using the Native Execution Engine.
September 2024: Fabric Spark Diagnostic Emitter (preview). The Fabric Apache Spark Diagnostic Emitter (preview) allows Apache Spark users to collect logs, event logs, and metrics from their Spark applications and send them to various destinations, including Azure Event Hubs, Azure Storage, and Azure Log Analytics.
September 2024: Environment integration with Synapse VS Code extension. You can now create, configure, and use an environment in Fabric in VS Code with the Synapse VS Code extension.
September 2024: Notebook debug within vscode.dev (preview). You can now place breakpoints and debug your notebook code with the Synapse VS Code - Remote extension in vscode.dev. This update first starts with Fabric Runtime 1.3.
September 2024: Invoke Fabric User Data Functions in Notebook. You can now invoke User Defined Functions (UDFs) in your PySpark code directly from Microsoft Fabric notebooks or Spark jobs. With NotebookUtils integration, invoking UDFs is as simple as writing a few lines of code.
September 2024: Functions Hub. The new Functions Hub provides a single location to view, access, and manage your User Data Functions.
September 2024: Support for spaces in Lakehouse Delta table names. You can now create and query Delta tables with spaces in their names, such as "Sales by Region" or "Customer Feedback". All Fabric Runtimes and Spark authoring experiences support table names with spaces (see the example after this list).
September 2024: Public REST API of Livy endpoint. The Fabric Livy endpoint lets users submit and execute their Spark code on the Spark compute within a designated Fabric workspace, eliminating the need to create any Notebook or Spark Job Definition.
September 2024: OneLake SAS (preview). Support for OneLake SAS is now in preview. This functionality allows applications to request a User Delegation Key backed by Microsoft Entra ID, and then use this key to construct a short-lived, user-delegated OneLake SAS token. This token can be handed off to provide delegated access to another tool, node, or user, ensuring secure and controlled access.
September 2024: T-SQL support in Fabric notebooks. The T-SQL notebook feature in Microsoft Fabric lets you write and run T-SQL code within a notebook. You can use T-SQL notebooks to manage complex queries and write better markdown documentation. They also allow direct execution of T-SQL on a connected warehouse or SQL analytics endpoint. To learn more, see T-SQL support in Microsoft Fabric notebooks.
September 2024: OneLake shortcuts to Google Cloud Storage. Now generally available: create a Google Cloud Storage (GCS) shortcut to connect to your existing data through a single unified namespace without having to copy or move data.
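The table-names entry above notes that all Fabric runtimes accept spaces in Delta table names. The following is a minimal sketch of that pattern from a Fabric notebook, assuming a default lakehouse is attached and that `spark` is the notebook's preexisting Spark session; the table and column names are illustrative only.

```python
# Minimal sketch: Delta table whose name contains spaces, created from a Fabric
# notebook with a default lakehouse attached. `spark` is the preexisting session.
# Back-tick quoting lets Spark SQL address identifiers that contain spaces.
spark.sql(
    "CREATE TABLE IF NOT EXISTS `Sales by Region` (region STRING, amount DOUBLE) USING DELTA"
)

spark.sql("INSERT INTO `Sales by Region` VALUES ('West', 125.50), ('East', 98.75)")

spark.sql(
    "SELECT region, SUM(amount) AS total FROM `Sales by Region` GROUP BY region"
).show()
```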
For older updates, review the Microsoft Fabric What's New archive.
January 2025: Create a shortcut to a VPC-protected Google Cloud Storage bucket. Follow this guide to create a OneLake shortcut to a VPC-protected Google Cloud Storage (GCS) bucket.
January 2025: Best practices for Fabric API for GraphQL. The Microsoft Fabric API for GraphQL is a handy service that quickly allows you to set up a GraphQL API to pull data from places like warehouses, the lakehouse, and mirrored databases. Learn best practices when building applications using Fabric API for GraphQL.
October 2024: Optimizing Spark Compute for Medallion Architectures in Microsoft Fabric. Learn how to optimize Spark compute for Medallion architecture, a popular data engineering approach that emphasizes modularity. It organizes the data platform into three distinct layers: Bronze, Silver, and Gold.
January 2025: Building Apps with Microsoft Fabric API for GraphQL. Microsoft Fabric has an API for GraphQL to build your data applications, enabling you to pull data from sources such as Data Warehouses, Lakehouse, Mirrored Databases, and DataMart in Microsoft Fabric.
December 2024: Notebook version history. Fabric notebook version history provides robust built-in version control capabilities, including automatic and manual checkpoints, tracked changes, version comparisons, and previous version restore. For more information, see Notebook version history.
December 2024: Python Notebook (preview). Python notebooks are for BI developers and data scientists working with smaller datasets using Python as their primary language. To get started, see Use Python experience on Notebook.
November 2024: Low code AutoML user experience in Fabric (preview). AutoML, or Automated Machine Learning, is a process that automates the time-consuming and complex tasks of developing machine learning models. The new low code AutoML experience supports a variety of tasks, including regression, forecasting, classification, and multi-class classification. To get started, see Create models with Automated ML (preview).
September 2024: Data Wrangler for Spark DataFrames GA. Data Wrangler is now generally available. A notebook-based tool for exploratory data analysis, Data Wrangler works for both pandas DataFrames and Spark DataFrames and arrives at general availability with new usability improvements.
September 2024: Share feature for Fabric AI skill (preview). The "Share" capability for the Fabric AI skill (preview) allows you to share the AI skill with others using a variety of permission models.
September 2024: File editor in Notebook. The file editor feature in Fabric Notebook allows users to view and edit files directly within the notebook's resource folder and environment resource folder. Supported file types include CSV, TXT, HTML, YML, PY, SQL, and more.
For older updates, review the Microsoft Fabric What's New archive.
September 2024: Using Microsoft Fabric for Generative AI: A Guide to Building and Improving RAG Systems. This tutorial includes three main notebooks, each covering a crucial aspect of building and optimizing RAG systems in Microsoft Fabric.
September 2024: Harness Microsoft Fabric AI Skill to Unlock Context-Rich Insights from Your Data. This post demonstrates how you can extend the capabilities of the Fabric AI skill in Microsoft Fabric notebooks to deliver richer and more comprehensive responses using additional Large Language Model (LLM) queries.
Fabric Databases
This section summarizes recent improvements and features for Microsoft Fabric
Databases.
January 2025: SQL database support for tenant level private links (preview). You can use tenant-level private links to provide secure access for data traffic in Microsoft Fabric, including SQL database (in preview). For more information, see Set up and use private links and Blog: Tenant Level Private Link (Preview).
January 2025: SQL databases billing begins. After February 1, 2025, compute and data storage for SQL database are charged to your Fabric capacity. Additionally, backup billing will start after April 1, 2025.
January 2025: Ask the Experts – Fabric Databases – Livestream January 29! Join us for a live Q&A session on the new Fabric Databases experience! Our product engineering team will answer your top questions in real time.
December 2024: Copilot for SQL database. Learn more about the Copilot integration for the Query Editor. Copilot for SQL database in Fabric is an AI-powered assistant designed to support you regardless of your SQL expertise or role.
November 2024: New connectors for Fabric SQL database. In Data Factory, both data pipelines and Dataflow Gen2 now natively support Fabric SQL database as a source and destination with the SQL database connector (preview). For more information, see Fabric SQL Database Connector.
February 2025: Govern your data in SQL database in Microsoft Fabric with protection policies in Microsoft Purview. Microsoft Purview's protection policies help you safeguard sensitive data in Microsoft Fabric items, including SQL databases. Learn how Purview policies override Microsoft Fabric item permissions for users, apps, and groups, limiting their actions within the database.
February 2025: ICYMI: Ask the Expert – Fabric Databases. Here are a few great questions and answers about SQL database in Fabric from a recent Ask the Expert session on Microsoft Reactor.
January 2025: Manage access in SQL database with SQL native authorization controls. As a follow-up to Learn how to manage Microsoft Fabric access controls in SQL database, learn how to manage access for your SQL database with SQL native access controls. SQL database in Microsoft Fabric supports two different sets of access controls.
January 2025: Monitor SQL database usage and consumption by using the capacity metrics app. Learn how the capacity metrics app can be used for monitoring usage and consumption of SQL databases in Fabric.
January 2025: Manage access for SQL databases in Microsoft Fabric with workspace roles and item permissions. SQL database (preview) supports two different sets of controls that allow you to manage access for your databases: Microsoft Fabric access controls and SQL native access controls. Learn how to manage Microsoft Fabric access controls in SQL database.
December 2024: Source control integration for SQL database. SQL database in Fabric has a tightly integrated and fully extensible DevOps feature set, including source control integration for GitHub and Azure DevOps. Learn how to use the Fabric web-based development environment with the git repository directly through a streamlined source control panel.
December 2024: Tour the Query Editor in SQL database in Microsoft Fabric. Whether you're a seasoned data professional or a developer new to SQL, the query editor offers features that cater to all skill levels. For more information, see Query with the SQL query editor.
November 2024: Building a Smart Chatbot with SQL Database in Microsoft Fabric, LangChain and Chainlit. Imagine you're the founder of Contoso, a rapidly growing e-commerce startup. As your online store grows, you realize that many customer inquiries are about basic product information: price, availability, and specific features. To automate these routine questions, you decide to build a chatbot with SQL Database in Microsoft Fabric, LangChain, and Chainlit.
November 2024: Learning pathways for SQL database. For those curious about where to learn more and how to try out this new offering, read more about the upcoming episodes of SQL database in Microsoft Fabric: Learn Together.
November 2024: Data Exposed: Announcing SQL database in Microsoft Fabric preview. Watch a Data Exposed video introducing the SQL database in Microsoft Fabric public preview.
November 2024: Get started with Fabric SQL database. Guided how-to documents on how to do basic tasks in SQL database in Fabric start with Enable SQL database in Fabric using Admin Portal tenant settings.
February 2025: Open Mirroring for SAP sources – dab and Simplement. Open mirroring makes data replication an extensible platform that partners and customers can use to plug in their own data integration capabilities. Once data is brought in through open mirroring, it can be used in all Fabric workloads. Two partners have now taken the next step in integrating with open mirroring, and are ready to onboard customers.
January 2025: Source schema support in Mirroring in Fabric. Mirroring in Fabric now supports replicating the source schema hierarchy. For more information, see Mirroring now supports replicating source schemas.
January 2025: Delta column mapping support for Mirroring. Mirroring in Fabric now supports Delta column mapping. Column mapping is a feature of Delta tables that allows users to include spaces and special characters such as , ; { } ( ) \n \t = . in column names. For more information, see Delta column mapping support. (A short notebook example follows this list.)
January 2025: Mirroring CI/CD (preview). Mirroring now supports CI/CD as a preview feature. You can integrate Git for source control and utilize ALM Deployment Pipelines, streamlining the deployment process and ensuring seamless updates to mirrored databases.
January 2025: COPY INTO operations with row count check. COPY INTO now allows you to control the behavior of your data ingestion jobs by checking whether the count of columns in the source data matches the count of columns on your target table, with the MATCH_COLUMN_COUNT argument.
January 2025: Default schema in a warehouse. You can now change the default schema for users in Fabric Data Warehouse using the ALTER USER statement, ensuring that every user has a predefined schema context when they connect to the database.
January 2025: Service principal support. Service principal (SPN) support for Fabric warehouse items allows developers and administrators to automate processes, streamline operations, and increase security for their data workflows. For more information, see Service principal support for Fabric Data Warehouse.
January 2025: Warehouse restore points and restore in place. You can now create restore points and perform an in-place restore of a warehouse to a past point in time. Restore in place is an essential part of data warehouse recovery, which allows you to restore the data warehouse to a prior known reliable state by replacing or overwriting the existing data warehouse from which the restore point was created.
January 2025: COPY INTO operations with granular permissions. Enhancements to the COPY INTO T-SQL command in Fabric Data Warehouse introduce granular SQL controls. For more information, see Enhancing COPY INTO operations with Granular Permissions.
December 2024: What's new in the Fabric SQL analytics endpoint? There are several updates to improve both functionality and user experience with the SQL analytics endpoint, including metadata sync, last successful update, improved error propagation, and more.
November 2024: Open mirroring (preview). Open mirroring enables any application to write change data directly into a mirrored database in Fabric, based on the open mirroring public APIs and approach. Open mirroring is designed to be extensible, customizable, and open. It's a powerful feature that extends mirroring in Fabric based on the open Delta Lake table format. To get started, see Tutorial: Configure Microsoft Fabric open mirrored databases.
November 2024: Data Warehouse: Copilot & AI Skill. Learn how the Copilot tools for Fabric Data Warehouse differ, when to use each, and how they can work together to maximize productivity and deliver insights with Fabric Warehouse.
November 2024: Fabric Mirroring for Azure SQL Managed Instance (preview). Fabric Database mirroring can now mirror Azure SQL Managed Instance databases.
November 2024: Mirroring for Azure SQL Database GA. With Azure SQL Database mirroring in Fabric, you can easily bring your database into OneLake in Microsoft Fabric.
October 2024: Case-insensitive collation support. By default, the collation of a warehouse is case sensitive (CS) with 'Latin1_General_100_BIN2_UTF8'. You can now create a warehouse with case-insensitive (CI) collation.
October 2024: varchar(max) and varbinary(max) support in preview. Support for the varchar(max) and varbinary(max) data types in Warehouse is now in preview. For more information, see Announcing public preview of VARCHAR(MAX) and VARBINARY(MAX) types in Fabric Data Warehouse.
October 2024: Nested Common Table Expressions. Fabric Warehouse and SQL analytics endpoint both support standard, sequential, and nested CTEs. While CTEs are generally available in Microsoft Fabric, nested CTEs in warehouse are currently a preview feature.
September 2024: Mirroring for Snowflake GA. With Mirroring for Snowflake in Fabric, you can easily bring your Snowflake data into OneLake in Microsoft Fabric. For more information, see Mirroring Snowflake.
September 2024: Copilot for Data Warehouse. Copilot for Data Warehouse (preview) is now updated and available as a preview feature, offering the Copilot chat pane, quick actions, and code completions.
September 2024: Delta column mapping in the SQL analytics endpoint. The SQL analytics endpoint now supports Delta tables with column mapping enabled. For more information, see Delta column mapping and Limitations of the SQL analytics endpoint. This feature is currently in preview.
September 2024: Fabric Spark connector for Fabric Data Warehouse new features (preview). The Fabric Spark connector for Fabric Data Warehouse (preview) now supports custom or pass-through queries, PySpark, and Fabric Runtime 1.3 (Spark 3.5).
September 2024: New editor improvements. Editor improvements for Warehouse and SQL analytics endpoint items improve consistency and efficiency. For more information, see New editor improvements.
September 2024: T-SQL support in Fabric notebooks (preview). The T-SQL notebook feature in Microsoft Fabric (preview) lets you write and run T-SQL code within a notebook. You can use T-SQL notebooks to manage complex queries and write better markdown documentation. They also allow direct execution of T-SQL on a connected warehouse or SQL analytics endpoint. To learn more, see T-SQL support in Microsoft Fabric notebooks.
September 2024: Nested Common Table Expressions (CTEs) (preview). Fabric Warehouse and SQL analytics endpoint both support standard, sequential, and nested CTEs. While CTEs are generally available in Microsoft Fabric, nested common table expressions (CTEs) in warehouse are currently a preview feature.
September 2024: Mirrored Azure Databricks (preview). A mirrored Azure Databricks Unity Catalog in Fabric allows you to read data managed by Unity Catalog from Fabric workloads from the Lakehouse.
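The January 2025 entry above describes Delta column mapping in the context of mirrored databases. The same Delta table feature can be exercised from a Fabric notebook; the sketch below assumes a notebook with a default lakehouse attached and the preexisting `spark` session, and uses illustrative table and column names. It enables column mapping so column names can contain spaces and special characters.

```python
# Minimal sketch, not part of the mirroring feature itself: a Delta table with
# column mapping enabled, created from a Fabric notebook. `spark` is the
# preexisting SparkSession; table and column names are illustrative only.
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders_mapped (
        `Order ID` INT,
        `Unit Price (USD)` DOUBLE
    )
    USING DELTA
    TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',  -- enables column mapping
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5'
    )
""")

spark.sql("INSERT INTO orders_mapped VALUES (1, 19.99), (2, 5.25)")

# Columns with spaces and parentheses in their names are addressed with back-ticks.
spark.sql("SELECT `Order ID`, `Unit Price (USD)` FROM orders_mapped").show()
```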
For older updates, review the Microsoft Fabric What's New archive.
January 2025: Best practices for Fabric API for GraphQL. The Microsoft Fabric API for GraphQL is a handy service that quickly allows you to set up a GraphQL API to pull data from places like warehouses, the lakehouse, and mirrored databases. Learn best practices when building applications using Fabric API for GraphQL.
December 2024: Microsoft Fabric API for GraphQL™ for Azure Cosmos DB Mirroring. Learn how to integrate Azure Cosmos DB and the Microsoft Fabric API for GraphQL™ to build near real-time analytical applications. For more information on how to leverage Fabric API for GraphQL in your applications, see Connect applications to Fabric API for GraphQL.
November 2024: SQL to Microsoft Fabric Migration: Beginner-Friendly Strategies for a Smooth Transition. Learn more about migrating your SQL database to Microsoft Fabric, a unified platform that brings your data and analytics together effortlessly.
October 2024: Ensuring Data Continuity in Fabric Warehouse: Best Practices for Every Scenario. Dive deep into the common recovery scenarios and features that help enable seamless end-to-end data recovery, and discuss best practices to ensure data resilience.
November 2024: New event categories in Fabric Real-Time hub. New event categories in Real-Time hub include OneLake events, Job events, and Capacity utilization events. These new event categories are currently in preview. For more information, see Unlocking the power of Real-Time Data with OneLake Events.
November 2024: REST APIs for Fabric Eventstream. With the Eventstream REST API, you can now programmatically create, manage, and update Eventstream items. For more information, see Fabric REST APIs for Eventstream.
November 2024: Real-Time hub GA. Real-Time hub is now generally available. For more information, see Introduction to Fabric Real-Time hub.
October 2024: Secure Data Streaming with Managed Private Endpoints in Eventstream (preview). By creating a Fabric Managed Private Endpoint, you can securely connect Eventstream to your Azure services, such as Azure Event Hubs or IoT Hub, within a private network or behind a firewall. For more information, see Secure Data Streaming with Managed Private Endpoints in Eventstream (Preview).
October 2024: Usage reporting for Activator is now live. The Activator team has rolled out usage reporting to help you better understand your capacity consumption and future charges. When you look at the Capacity metrics app compute page, you'll now see operations for the reflex items included.
October 2024: Quickly visualize query results in KQL Queryset. You can now graphically visualize KQL Queryset results instantly and effortlessly and control the formatting without the need to re-run queries, all using a familiar UI.
October 2024: Pin query to dashboard. You can now save the outcome of any query written in KQL Queryset directly to a new or existing Real-Time Dashboard.
September 2024: Creating a real-time dashboard by Copilot. Copilot can review a table and automatically create a dashboard with insights and a profile of the data with a sample.
September 2024: New Real-Time hub and KQL Database user experiences. The new user experience features new Real-Time hub navigation, a My Streams page, an enhanced database page experience, and more.
September 2024: Eventhouse as a new destination in Eventstream. Eventhouses, equipped with KQL databases, can handle and analyze large volumes of data. With the Eventhouse destination in Eventstream, you can efficiently process and route data streams into an Eventhouse and analyze the data in near real-time using KQL.
September 2024: Managed private endpoints for Eventstream. With managed private endpoints for Fabric, you can now establish a private connection between your Azure services, such as Azure Event Hubs, and Fabric Eventstream. For more information, see Eventstream integration with managed private endpoint.
September 2024: Activator alerts on KQL Querysets. Now you can set up Activator (preview) alerts directly on your KQL queries in KQL querysets. For more information and samples, see Create Activator alerts from a KQL Queryset.
September 2024: Real-Time Intelligence Copilot conversational mode. The Copilot assistant, which translates natural language into KQL, now supports conversational mode, allowing you to ask follow-up questions that build on previous queries within the chat.
September 2024: New connectors and UI in Real-Time hub. Four new connectors were released on September 24, 2024: Apache Kafka, Amazon Managed Streaming for Apache Kafka, Azure SQL Managed Instance CDC, and SQL Server on VM DB CDC. The tabs in the main page of Real-Time hub are replaced with menu items on the left navigation menu. For more information, see Get started with Fabric Real-Time hub.
September 2024: Announcement: Eventhouse Standard Storage billing. Starting the week of September 16, you will start seeing billable consumption of the OneLake Storage Data Stored meter from the Eventhouse and KQL Database items.
For older updates, review the Microsoft Fabric What's New archive.
January 2025: Efficient log management with Microsoft Fabric. For a tutorial and walkthrough of efficient log file collection, processing, and analysis with Real-Time Intelligence, read this new blog post on Efficient log management with Microsoft Fabric.
December 2024: Automate Real-Time Intelligence Eventstream deployment using PowerShell. Learn how to build a PowerShell script to automate the deployment of Eventstream, with the definition of source, processing, and destination, into a workspace in Microsoft Fabric.
December 2024: Monitor Fabric Spark applications using Fabric Real-Time Intelligence. Learn how to build a centralized Spark monitoring solution leveraging Fabric Real-Time Intelligence by integrating a Fabric Spark data emitter directly with Fabric Eventstream and Eventhouse.
December 2024: Easily recreate your ADX dashboards as Real-Time Dashboards. You can bring ADX dashboards into Microsoft Fabric without relocating your data. Learn how to create Real-Time Dashboards as copies of your ADX dashboards in Fabric.
December 2024: Enhance fraud detection with Activator. Activator allows you to monitor events, detect certain conditions in your data, and act on them by sending alerts. Learn how to implement a system that sends Teams or email alerts when a transaction is flagged as potentially fraudulent.
December 2024: Manual migration needed for Activator preview items. If you created an item while Activator was in preview, you'll need to manually migrate these items to GA to get access to all the new features.
February 2025: Understanding GraphQL API error handling. Learn more about error handling in GraphQL and some best practices for managing errors effectively.
January 2025: Building Apps with Microsoft Fabric API for GraphQL. Microsoft Fabric has an API for GraphQL to build your data applications, enabling you to pull data from sources such as Data Warehouses, Lakehouse, Mirrored Databases, and DataMart in Microsoft Fabric.
January 2025: Power BI Embedded with Direct Lake Mode (preview). Power BI Embedded with Direct Lake Mode is designed to enhance how developers and Independent Software Vendors (ISVs) provide embedded analytics in their applications. For more information, see Introducing Power BI Embedded with Direct Lake Mode (Preview).
January 2025: Fabric Copilot capacity: Democratizing AI usage in Microsoft Fabric. Fabric Copilot capacity is a new billing feature designed to enhance your experience with Microsoft Fabric. With Fabric Copilot capacities, capacity admins can give Copilot access to end users directly, rather than requiring creators to move their content into a specific workspace or link a specific capacity.
January 2025: Efficient log management with Microsoft Fabric. For a tutorial and walkthrough of efficient log file collection, processing, and analysis with Real-Time Intelligence, read this new blog post on Efficient log management with Microsoft Fabric.
January 2025: Surge protection (preview). With surge protection (preview), capacity admins can set limits on background usage within a capacity. Learn more about surge protection to help protect capacities from excess usage by background workloads.
December 2024: Microsoft Fabric approved as a Service within the FedRAMP High Authorization for Azure Commercial. Microsoft Fabric is now included within the US Federal Risk and Authorization Management Program (FedRAMP) High Authorization for Azure Commercial. This Provisional Authorization to Operate (P-ATO) within the existing FedRAMP High Azure Commercial environment was approved by the FedRAMP Joint Authorization Board (JAB).
November 2024: OneLake external data sharing (GA). External data sharing in Microsoft Fabric, now generally available, makes it possible for Fabric users to share data from within their Fabric tenant with users in another Fabric tenant.
November 2024: GraphQL API in Microsoft Fabric GA. The API for GraphQL, now generally available, is a data access layer that allows us to query multiple data sources quickly and efficiently in Fabric. For more information, see What is Microsoft Fabric API for GraphQL?
November 2024: The new OneLake catalog. The OneLake catalog is the next evolution of the OneLake data hub. For more information about the new catalog, see Discover and explore Fabric items in the OneLake catalog.
November 2024: Fabric workload dev kit (GA). The Microsoft Fabric workload development kit is now generally available. This robust developer toolkit is for designing, developing, and interoperating with Microsoft Fabric.
November 2024: Domains in Fabric – new enhancements. Review new features and use cases for Domains in Fabric, including Best practices for planning and creating domains in Microsoft Fabric.
October 2024: New Item panel in Workspace. Previously, selecting +New in the workspace opened a dropdown menu with some predefined item types to get started. Now, the +New item button shows item types listed in a panel, categorized by tasks.
October 2024: APIs for Managed Private Endpoint are now available. REST APIs for managed private endpoints are available. You can now create, delete, get, and list managed private endpoints via APIs.
October 2024: Important billing updates coming to Copilot and AI in Fabric. Upcoming pricing and billing updates will make Copilot and AI features in Fabric more accessible and cost-effective.
September 2024: Terraform Provider for Fabric (preview). The Terraform Provider for Microsoft Fabric is now in preview. The Terraform Provider for Microsoft Fabric supports the creation and management of many Fabric resources. For more information, see Announcing the new Terraform Provider for Microsoft Fabric.
September 2024: Announcing Service Principal support for Fabric APIs. You can now use a service principal to access Fabric APIs. A service principal is a security identity that you can create in Microsoft Entra and assign permissions to in Microsoft Entra and other Microsoft services, such as Microsoft Fabric.
September 2024: Tag your data to enrich item curation and discovery. Tags (preview) help admins categorize and organize data, enhancing the searchability of your data and boosting success rates and efficiency for end users.
September 2024: Trusted workspace access and Managed private endpoints in any Fabric capacity. Trusted workspace access and managed private endpoints are available in any Fabric capacity. Previously, they were available only in F64 or higher capacities. Managed private endpoints are now available in trial capacities.
September 2024: Microsoft Fabric Achieves HITRUST CSF Certification. Microsoft Fabric is now certified for the HITRUST Common Security Framework (CSF) v11.0.1.
For older updates, review the Microsoft Fabric What's New archive.
November 2024: Microsoft Fabric REST APIs Integration with GitHub. These APIs enable you to automate Git integration tasks, such as connecting to GitHub, retrieving connection details, committing changes to your connected GitHub repository, updating from the repository, and more. For more information, see Automate Git integration by using APIs and code samples.
November 2024: Data Factory Copy Job – CI/CD now available. CI/CD for Copy job (preview) in Data Factory in Microsoft Fabric is now available. Copy job now supports Git integration and deployment pipelines.
September 2024: GitHub integration for source control. Now generally available, Fabric developers can choose GitHub or GitHub Enterprise as their source control tool and version their Fabric items there. For more information, see What is Microsoft Fabric Git integration?
September 2024: New Deployment Pipelines design. A new and improved design for the deployment pipeline introduces a range of changes, additions, and improvements designed to elevate your deployment process. Read more about What's changed in deployment pipelines.
For older updates, review the Microsoft Fabric What's New archive.
Related content
Microsoft Fabric What's New archive
Modernization Best Practices and Reusable Assets Blog
Azure Data Explorer Blog
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?
Microsoft Fabric videos on YouTube
Microsoft Fabric community
Multi-experience tutorials
The following table lists tutorials that span multiple Fabric experiences.
Lakehouse: In this tutorial, you ingest, transform, and load the data of a fictional retail company, Wide World Importers, into the lakehouse and analyze sales data across various dimensions.
Data Science: In this tutorial, you explore, clean, and transform a taxicab trip semantic model, and build a machine learning model to predict trip duration at scale on a large semantic model.
Real-Time Intelligence: In this tutorial, you use the streaming and query capabilities of Real-Time Intelligence to analyze London bike share data. You learn how to stream and transform the data, run KQL queries, and build a Real-Time Dashboard and a Power BI report to gain insights and respond to this real-time data.
Data warehouse: In this tutorial, you build an end-to-end data warehouse for the fictional Wide World Importers company. You ingest data into the data warehouse, transform it using T-SQL and pipelines, run queries, and build reports.
Fabric SQL database: The tutorial provides a comprehensive guide to utilizing the SQL database in Fabric. This tutorial is tailored to help you navigate through the process of database creation, setting up database objects, exploring autonomous features, and combining and visualizing data. Additionally, you learn how to create a GraphQL endpoint, which serves as a modern approach to connecting and querying your data efficiently.
Fabric Activator: The tutorial is designed for customers who are new to Fabric Activator. Using a sample eventstream, you learn your way around Activator. Once you're familiar with the terminology and interface, you create your own object, rule, and activator.
Experience-specific tutorials
The following tutorials walk you through scenarios within specific Fabric experiences.
Power BI: In this tutorial, you build a dataflow and pipeline to bring data into a lakehouse, create a dimensional model, and generate a compelling report.
Data Factory: In this tutorial, you ingest data with data pipelines and transform data with dataflows, then use automation and notifications to create a complete data integration scenario.
Data Science end-to-end AI samples: In this set of tutorials, learn about the different Data Science experience capabilities and examples of how ML models can address your common business problems.
Data Science - Price prediction with R: In this tutorial, you build a machine learning model to analyze and visualize the avocado prices in the US and predict future prices.
Application lifecycle management: In this tutorial, you learn how to use deployment pipelines together with git integration to collaborate with others in the development, testing, and publication of your data and reports.
Related content
Create a workspace
Discover data items in the OneLake data hub
This article describes the task flows feature in Microsoft Fabric. Its target audience is
data analytics solution architects who want to use a task flow to build a visual
representation of their project, engineers who are working on the project and want to
use the task flow to facilitate their work, and others who want to use the task flow to
filter the item list to help navigate and understand the workspace.
Overview
Fabric task flow is a workspace feature that enables you to build a visualization of the
flow of work in the workspace. The task flow helps you understand how items are
related and work together in your workspace, and makes it easier for you to navigate
your workspace, even as it becomes more complex over time. Moreover, the task flow
can help you standardize your team's work and keep your design and development
work in sync to boost the team's collaboration and efficiency.
Fabric provides a range of predefined, end-to-end task flows based on industry best
practices that are intended to make it easier to get started with your project. In addition,
you can customize the task flows to suit your specific needs and requirements. This
enables you to create a tailored solution that meets your unique business needs and
goals.
Each workspace has one task flow. The task flow occupies the upper part of the workspace
list view. It consists of a canvas where you can build the visualization of your data
analytics project, and a side pane where you can see and edit details about the task
flow, tasks, and connectors.
Note
You can resize or hide the task flow using the controls on the horizontal separator
bar.
Key concepts
Key concepts to know when working with a task flow are described in the following
sections.
Task flow
A task flow is a collection of connected tasks that represent relationships in a process or
collection of processes that complete an end-to-end data solution. A workspace has one
task flow. You can either build it from scratch or use one of Fabric's predefined task
flows, which you can customize as desired.
Task
A task is a unit of process in the task flow. A task has recommended item types to help
you select the appropriate items when building your solution. Tasks also help you
navigate the items in the workspace.
Task type
Each task has a task type that classifies the task based on its key capabilities in the data
process flow. The predefined task types are:
General: Create a customized task for your project needs that you can assign available item types to.
Get data: Ingest batch and real-time data into a single location within your Fabric workspace.
Mirror data: Replicate your data from any location to OneLake in near real-time.
Store data: Organize, query, and store your ingested data in an easily retrievable format.
Prepare data: Clean, transform, extract, and load your data for analysis and modeling tasks.
Analyze and train data: Propose hypotheses, train models, and explore your data to make decisions and predictions.
Track data: Monitor your streaming or near real-time operational data, and make decisions based on gained insights.
Visualize data: Present your data as rich visualizations and insights that can be shared with others.
Distribute data: Package your items for distribution as customized, easy-to-use apps.
Develop data: Create and build your software, applications, and data solutions.
Connector
Connectors are arrows that represent logical connections between the tasks in the task
flow. They don't represent the flow of data, nor do they create any actual data
connections.
This article describes how to start building a task flow, starting either from scratch or
with one of Fabric's predefined task flows. It targets data analytics solution architects
and others who want to create a visualization of a data project.
Prerequisites
To create a task flow in a workspace, you must be a workspace admin, member, or
contributor.
To get started, choose either Select a predesigned task flow or Add a task to start
building one from scratch.
If the workspace contains Power BI items only, the task flow canvas will display a basic
task flow designed to meet the basic needs of a solution based on Power BI items only.
Select Create if you want to start with this task flow, or choose either of the previously
mentioned options, Select a predesigned task flow or Add a task.
The side pane lists the predesigned task flows provided by Microsoft. Each predefined
task flow has a brief description of its use case. When you select one of the flows, you'll
see a more detailed description of the flow and how to use it, and also the workloads
and item types that the flow requires.
Select the task flow that best fits your project needs and then choose Select. The
selected task flow will be applied to the task flow canvas.
The task flow canvas provides a graphic view of the tasks and how they're connected
logically.
The side pane now shows detailed information about the task flow you selected.
It's recommended that you change the task flow name and description to something
meaningful that enables others to better understand what the task flow is all about. To
change the name and description, select Edit in the task flow side pane. For more
information, see Edit task flow details.
The items list shows all the items and folders in the workspace, including those items
that are assigned to tasks in the task flow. When you select a task in the task flow, the
items list is filtered to show just the items that are assigned to the selected task.
Note
Selecting a predefined task flow just places the tasks involved in the task flow on
the canvas and indicates the connections between them. It's just a graphical
representation - no actual items or data connections are created at this point, and
no existing items are assigned to tasks in the flow.
After you've added the predefined task flow to the canvas, you can start modifying it to
suit your needs - arranging the tasks on the canvas, updating task names and
descriptions, assigning items to tasks, etc. For more information, see Working with task
flows.
1. In the empty task flow area, select Add a task and choose a task type.
2. A task card appears on the canvas and the task details pane opens to the side.
It's recommended to provide a meaningful name and description of the task to
help other members of the workspace understand what the task is for. In the task
details side pane, select Edit to provide a meaningful name and description.
3. Deselect the task by clicking on a blank area of the task flow canvas. The side pane
will display the task flow details with a default name (Get started with a task flow)
and description. Note that the task you just created is listed under the Tasks
section.
Select Edit and provide a meaningful name and description for your new task flow to
help other members of the workspace understand your project and the task flow you're
creating. For more information, see Edit task flow details.
You can continue to add more tasks to the canvas. You'll also have to perform other
actions, such as arranging the tasks on the canvas, connecting the tasks, assigning items
to the tasks, etc. For more information, see Working with task flows.
Related concepts
Task flow overview
Work with task flows
This article describes how to work with tasks. The target audience is data analytics
solution architects who are designing a data analytics solution, engineers who need to
know how to use task flows to facilitate their work, and others who want to use the task
flow to filter the item list to help navigate and understand the workspace.
Prerequisites
To create or edit the task flow, and to create items in the workspace via the task flow,
you need to be an Admin, Member, or Contributor in the workspace.
Admins, Members, Contributors, and Viewers can use the task flow to filter the items list.
Task controls
Much of the work with tasks is accomplished either in the task details pane or via
controls on the task card or on the task flow canvas.
Select a task to display the task details pane. The following image shows the main
controls for working with tasks.
1. Add task or connector
2. Edit task name and description
3. Change task type
4. Create new item for task
5. Assign existing items to task
6. Delete task
To resize the task flow, drag the resize bar on the horizontal separator up or down.
To show/hide the task flow, select the show/hide control at the right side of the
separator.
Add a task
To add a new task to the task flow canvas, open the Add dropdown menu and select the
desired task type.
The task of the selected task type is added onto the canvas. The name and description
of the new task are the default name and description of the task type. Consider
changing the name and description of the new task to better describe its purpose in the
workflow. A good task name should identify the task and provide a clear indication of
its intended use.
1. Select the task on the canvas to open the task details pane.
2. Select Edit and change the name and description fields as desired. When done,
select Save.
1. Select the task on the canvas to open the task details pane.
2. Open the Task type dropdown menu and choose the new desired task type.
Note
Changing the task type doesn't change the task name or description. Consider
changing these fields to suit the new task type.
Arrange tasks on the canvas
Part of building a task flow is arranging the tasks in the proper order. To arrange the
tasks, select and drag each task to the desired position in the task flow.
Tip
When you move tasks around on the canvas, they stay in the place where you put
them. However, due to a known issue, when you add a new task to the canvas, any
unconnected tasks will move back to their default positions. Therefore, to
safeguard your arrangement of tasks, it's highly recommended to connect them all
with connectors before adding any new tasks to the canvas.
Connect tasks
Connectors show the logical flow of work. They don't make or indicate any actual data
connections - they are graphic representations of the flow of tasks only.
Add a connector
To connect two tasks, select the edge of the starting task and drag to an edge of the
next task.
Alternatively, you can select Add > Connector from the Add dropdown on the canvas.
Then, in the Add connector dialog, select the start and end tasks, then select Add.
Delete a connector
To delete a connector, select it and press Delete.
Alternatively, select the connector to open the connector details pane, then select the
trash can icon.
Change connector start and end points or direction
To change a connector's start and end values, or switch its direction:
1. Select the connector to open the connector details pane.
2. In the details pane, change the start and end values as desired, or select Swap to change the connector's direction.
Note
An item can only be assigned to a single task. It can't be assigned to multiple tasks.
Create a new item for a task
To create a new item for a specific task:
2. On the Create an item pane that opens, the recommended item types for the task
are displayed by default. Choose one of the recommended types.
If you don't see the item type you want, change the Display selector from
Recommended items to All items, and then choose the item type you want.
Note
When you first save a new report, you'll be given the opportunity to assign it to any
task that exists in the workspace.
The items you selected are assigned to the task. In the item list, task assignments are shown in the Task column.
Note
Unassigning items from tasks does not remove the items from the workspace.
Unassign items from a task
To unassign items from a task:
1. Select the task you want to unassign the items from. This filters the item list to
show just the items that are assigned to the task.
2. In the item list, hover over the items you want to unassign and mark the
checkboxes that appear.
3. On the workspace toolbar, choose Unassign from task (or Unassign from all tasks,
if you've selected multiple items).
1. Select Clear all at the top of the items list to clear all filters so that you can see all
the items in the workspace. Note that items that are assigned to tasks list the task
name in the Task column.
2. Hover over the items you want to unassign and mark the checkboxes.
3. When you've finished making your selections, select Unassign from all tasks in the
workspace toolbar.
Delete a task
To delete a task:
Alternatively,
1. Select the task flow canvas to open the task flow details pane.
2. In the task flow details pane, hover over the task you want to delete in the Tasks
list and select the trash can icon.
Note
Deleting a task doesn't delete the items assigned to it. They remain in the
workspace.
For each item that you see in the items list, you can see the item type and what
task it's assigned to, if any.
When you select a task, the items list is filtered to show only the items that are
assigned to that task.
1. Open the Add dropdown on the canvas and choose Select task flow. The
predefined task flows pane will open.
2. Choose one of the predefined task flows and then select Select. If there already is a
task flow on the canvas, you'll be asked whether to overwrite the current task flow
or to append the predefined task flow to the current task flow.
1. Open the task flow details pane by selecting the task flow canvas.
2. Select Edit and change the name and description fields as desired. When done,
select Save.
Note
A good task flow name and description should help others understand the
intended purpose and use of the task flow.
1. Select a blank area of the canvas to display the task flow details pane.
Deleting a task flow removes all tasks, the task list, and any item assignments, and resets
the task flow to its original default empty state.
Note
Items that were assigned to tasks in the deleted task flow remain in the workspace.
When you create a new task flow, you need to assign them to the tasks in the new
flow.
Related concepts
Task flow overview
Set up a task flow
Use this reference guide and the example scenarios to help you in deciding whether you
need a copy activity, a dataflow, or Spark for your Microsoft Fabric workloads.
Use case: Copy activity is suited to data lake and data warehouse migration, data ingestion, and lightweight transformation. Dataflows are suited to data ingestion, data transformation, data wrangling, and data profiling. Spark is suited to data ingestion, data transformation, data processing, and data profiling.
Review the following three scenarios for help with choosing how to work with your data
in Fabric.
Scenario 1
Leo, a data engineer, needs to ingest a large volume of data from external systems, both on-premises and cloud. These external systems include databases, file systems, and APIs. Leo doesn't want to write and maintain code for each connector or data movement operation. He wants to follow the medallion layers best practices, with bronze, silver, and gold. Leo doesn't have any experience with Spark, so he prefers the drag and drop UI as much as possible, with minimal coding. He also wants to process the data on a schedule.

The first step is to get the raw data into the bronze layer lakehouse from Azure data resources and various third-party sources (like Snowflake Web, REST, AWS S3, GCS, etc.). He wants a consolidated lakehouse, so that all the data from various LOB, on-premises, and cloud sources resides in a single place. Leo reviews the options and selects pipeline copy activity as the appropriate choice for his raw binary copy. This pattern applies to both historical and incremental data refresh. With copy activity, Leo can load Gold data to a data warehouse with no code if the need arises, and pipelines provide high-scale data ingestion that can move petabyte-scale data. Copy activity is the best low-code and no-code choice to move petabytes of data to lakehouses and warehouses from a variety of sources, either ad hoc or via a schedule.
Scenario 2
Mary is a data engineer with deep knowledge of the multiple LOB analytic reporting requirements. An upstream team has successfully implemented a solution to migrate multiple LOBs' historical and incremental data into a common lakehouse. Mary has been tasked with cleaning the data, applying business logic, and loading it into multiple destinations (such as Azure SQL DB, ADX, and a lakehouse) in preparation for their respective reporting teams.

Mary is an experienced Power Query user, and the data volume is in the low to medium range required to achieve the desired performance. Dataflows provide no-code or low-code interfaces for ingesting data from hundreds of data sources. With dataflows, you can transform data using 300+ data transformation options, and write the results into multiple destinations with an easy to use, highly visual user interface. Mary reviews the options and decides that it makes sense to use Dataflow Gen 2 as her preferred transformation option.
Scenario 3
Adam is a data engineer working for a large retail company that uses a lakehouse to
store and analyze its customer data. As part of his job, Adam is responsible for building
and maintaining the data pipelines that extract, transform, and load data into the
lakehouse. One of the company's business requirements is to perform customer review
analytics to gain insights into their customers' experiences and improve their services.
Adam decides the best option is to use Spark to build the extract and transformation
logic. Spark provides a distributed computing platform that can process large amounts
of data in parallel. He writes a Spark application using Python or Scala, which reads
structured, semi-structured, and unstructured data from OneLake for customer reviews
and feedback. The application cleanses, transforms, and writes data to Delta tables in
the lakehouse. The data is then ready to be used for downstream analytics.
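As a rough illustration of the kind of Spark application Adam might write, the sketch below assumes a Fabric notebook or Spark job definition with a default lakehouse attached, where `spark` is the preexisting Spark session. The file path, column names, and cleansing rules are hypothetical placeholders, not part of the scenario itself.

```python
# Hypothetical sketch of an extract-transform-load flow in PySpark.
# Assumes a Fabric notebook / Spark job with a default lakehouse attached;
# `spark` is the preexisting SparkSession. Paths and columns are placeholders.
from pyspark.sql import functions as F

# Extract: read semi-structured review files landed in the lakehouse Files area.
raw_reviews = spark.read.json("Files/customer_reviews/")

# Transform: drop malformed rows, normalize text, and derive a simple feature.
cleansed = (
    raw_reviews
    .dropna(subset=["review_id", "review_text"])
    .withColumn("review_text", F.lower(F.trim(F.col("review_text"))))
    .withColumn("review_length", F.length("review_text"))
    .dropDuplicates(["review_id"])
)

# Load: write the result as a Delta table in the lakehouse for downstream analytics.
cleansed.write.format("delta").mode("overwrite").saveAsTable("customer_reviews_clean")
```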
Related content
How to copy data using copy activity
Quickstart: Create your first dataflow to get and transform data
How to create an Apache Spark job definition in Fabric
Use this reference guide and the example scenarios to help you choose a data store for
your Microsoft Fabric workloads.
Security: RLS, CLS**, table level (T-SQL), none for Spark (Lakehouse); object level, RLS, CLS, DDL/DML, dynamic data masking (Warehouse); RLS (Eventhouse).
Advanced analytics: Interface for large-scale data processing, built-in data parallelism, and fault tolerance (Lakehouse and Warehouse); time series native elements, full geo-spatial and query capabilities (Eventhouse).
Advanced formatting support: Tables defined using PARQUET, CSV, AVRO, JSON, and any Apache Hive compatible file format (Lakehouse and Warehouse); full indexing for free text and semi-structured data like JSON (Eventhouse).
* Spark supports reading from tables using shortcuts, but doesn't yet support accessing views, stored procedures, functions, etc.
Advanced analytics: T-SQL analytical capabilities, data replicated to delta parquet in OneLake for analytics; interface for data processing with automated performance tuning.
Advanced formatting support: Table support for OLTP, JSON, vector, graph, XML, spatial, key-value; tables defined using PARQUET, CSV, AVRO, JSON, and any Apache Hive compatible file format.
Ingestion latency: Available instantly for querying in both cases.
Scenarios
Review these scenarios for help with choosing a data store in Fabric.
Scenario 1
Susan, a professional developer, is new to Microsoft Fabric. They're ready to get started
cleaning, modeling, and analyzing data but need to decide to build a data warehouse or
a lakehouse. After review of the details in the previous table, the primary decision points
are the available skill set and the need for multi-table transactions.
Susan has spent many years building data warehouses on relational database engines,
and is familiar with SQL syntax and functionality. Thinking about the larger team, the
primary consumers of this data are also skilled with SQL and SQL analytical tools. Susan
decides to use a Fabric warehouse, which allows the team to interact primarily with T-
SQL, while also allowing any Spark users in the organization to access the data.
Susan creates a new data warehouse and interacts with it using T-SQL just like her other SQL Server databases. Most of the existing T-SQL code she has written to build her warehouse on SQL Server will work on the Fabric data warehouse, making the transition easy. If she chooses to, she can even use the same tools that work with her other databases, like SQL Server Management Studio. Using the SQL editor in the Fabric portal, Susan and other team members write analytic queries that reference other data warehouses and Delta tables in lakehouses simply by using three-part names to perform cross-database queries.
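As a rough illustration of that cross-database pattern, the sketch below runs a three-part-name query against the warehouse's SQL connection string from Python with pyodbc. The server name, item names, table names, and the interactive Microsoft Entra authentication mode are assumptions for the example; in practice Susan would just as often run the same T-SQL directly in the Fabric SQL editor or SQL Server Management Studio.

```python
# Hedged sketch: querying a Fabric warehouse over its SQL connection string and
# joining to a lakehouse in the same workspace with a three-part name.
# Server, database, and table names are placeholders; the authentication mode
# (interactive Microsoft Entra sign-in) is an assumption for the example.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=your-workspace.datawarehouse.fabric.microsoft.com;"  # placeholder endpoint
    "Database=SalesWarehouse;"                                   # placeholder warehouse
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

query = """
    SELECT TOP (10) o.OrderID, o.Amount, c.Segment
    FROM dbo.Orders AS o
    JOIN BronzeLakehouse.dbo.Customers AS c   -- three-part name: item.schema.table
        ON o.CustomerID = c.CustomerID;
"""

for row in conn.cursor().execute(query):
    print(row.OrderID, row.Amount, row.Segment)

conn.close()
```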
Scenario 2
Rob, a data engineer, needs to store and model several terabytes of data in Fabric. The
team has a mix of PySpark and T-SQL skills. Most of the team running T-SQL queries are
consumers, and therefore don't need to write INSERT, UPDATE, or DELETE statements.
The remaining developers are comfortable working in notebooks, and because the data
is stored in Delta, they're able to interact with a similar SQL syntax.
Rob decides to use a lakehouse, which allows the data engineering team to use their
diverse skills against the data, while allowing the team members who are highly skilled
in T-SQL to consume the data.
Scenario 3
Ash, a citizen developer, is a Power BI developer. They're familiar with Excel, Power BI,
and Office. They need to build a data product for a business unit. They know they don't
quite have the skills to build a data warehouse or a lakehouse, and those seem like too
much for their needs and data volumes. They review the details in the previous table
and see that the primary decision points are their own skills and their need for a self
service, no code capability, and data volume under 100 GB.
Ash works with business analysts familiar with Power BI and Microsoft Office, and knows
that they already have a Premium capacity subscription. As they think about their larger
team, they realize the primary consumers of this data are analysts, familiar with no-code
and SQL analytical tools. Ash decides to use a Power BI datamart, which allows the team
to build the capability quickly, using a no-code experience. Queries can be executed via
Power BI and T-SQL, while also allowing any Spark users in the organization to access
the data.
Scenario 4
Daisy is a business analyst experienced with using Power BI to analyze supply chain
bottlenecks for a large global retail chain. They need to build a scalable data solution
that can handle billions of rows of data and can be used to build dashboards and
reports that can be used to make business decisions. The data comes from plants,
suppliers, shippers, and other sources in various structured, semi-structured, and
unstructured formats.
Daisy decides to use an Eventhouse because of its scalability, quick response times,
advanced analytics capabilities including time series analysis, geospatial functions, and
fast direct query mode in Power BI. Queries can be executed using Power BI and KQL to
compare between current and previous periods, quickly identify emerging problems, or
provide geo-spatial analytics of land and maritime routes.
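As a hedged illustration of this kind of analysis (not from the article), the following Python sketch runs a KQL query against an Eventhouse KQL database using the azure-kusto-data package; the cluster URI, database, table, and column names are hypothetical.

```python
# Hypothetical sketch: run a KQL query against an Eventhouse KQL database from Python.
# The query URI, database name, table, and columns are placeholders, not from the article.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<eventhouse-query-uri>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

kql = """
Shipments
| where Timestamp > ago(30d)
| summarize Delayed = countif(Status == 'Delayed') by bin(Timestamp, 1d), Route
| order by Timestamp asc
"""

response = client.execute("SupplyChainDB", kql)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["Route"], row["Delayed"])
```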
Scenario 5
Kirby is an application architect experienced in developing .NET applications for
operational data. They need a high concurrency database with full ACID transaction
compliance and strongly enforced foreign keys for relational integrity. Kirby wants the
benefit of automatic performance tuning to simplify day-to-day database management.
Kirby decides on a SQL database in Fabric, with the same SQL Database Engine as Azure
SQL Database. SQL databases in Fabric automatically scale to meet demand throughout
the business day. They have the full capability of transactional tables and the flexibility
of transaction isolation levels from serializable to read committed snapshot. SQL
database in Fabric automatically creates and drops nonclustered indexes based on
strong signals from execution plans observed over time.
In Kirby's scenario, data from the operational application must be joined with other data
in Fabric: in Spark, in a warehouse, and from real-time events in an Eventhouse. Every
Fabric database includes a SQL analytics endpoint, so the data can be accessed in real time
from Spark or with Power BI queries using Direct Lake mode. These reporting solutions
spare the primary operational database from the overhead of analytical workloads, and
avoid denormalization. Kirby also has existing operational data in other SQL databases,
and needs to import that data without transformation. To import existing operational
data without any data type conversion, Kirby designs data pipelines with Fabric Data
Factory to import data into the Fabric SQL database.
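A minimal sketch of the operational pattern Kirby needs, assuming hypothetical server and table names: an enforced foreign key plus an explicit transaction, run from Python with pyodbc.

```python
# Hypothetical sketch: enforced relational integrity and a transaction against a SQL database
# in Fabric. Server and object names are placeholders; dbo.Orders is assumed to already exist.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<sql-database-server>.database.fabric.microsoft.com;"
    "DATABASE=OrdersDB;"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;",
    autocommit=False,
)
cur = conn.cursor()

cur.execute("""
IF OBJECT_ID('dbo.OrderLines') IS NULL
    CREATE TABLE dbo.OrderLines (
        OrderLineID int IDENTITY PRIMARY KEY,
        OrderID     int NOT NULL
            REFERENCES dbo.Orders(OrderID),   -- strongly enforced foreign key
        Quantity    int NOT NULL
    );
""")

try:
    cur.execute("INSERT INTO dbo.Orders (OrderID, Amount) VALUES (?, ?)", 1001, 250.00)
    cur.execute("INSERT INTO dbo.OrderLines (OrderID, Quantity) VALUES (?, ?)", 1001, 3)
    conn.commit()          # both inserts succeed together or not at all
except pyodbc.Error:
    conn.rollback()
```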
Related content
Create a lakehouse in Microsoft Fabric
Create a warehouse in Microsoft Fabric
Create an eventhouse
Create a SQL database in the Fabric portal
Power BI datamart
Microsoft Fabric offers two enterprise-scale, open standard format workloads for data
storage: Warehouse and Lakehouse. This article compares the two platforms and the
decision points for each.
Criterion
Spark: Use Lakehouse
T-SQL: Use Warehouse
Yes: Use Warehouse
No: Use Lakehouse
Don't know: Use Lakehouse
Unstructured and structured data: Use Lakehouse
Structured data only: Use Warehouse
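Purely as a reading aid (not an official rule set), the following Python sketch encodes the decision points above; interpreting the Yes/No/Don't know criterion as the multi-table transactions question mentioned in Scenario 1 is an assumption.

```python
# Illustrative only: each criterion from the list above "votes" for a store.
# The Yes/No/Don't know criterion is assumed to be the need for multi-table transactions.
def choose_data_store(skillset: str, multi_table_transactions: str, data_types: str) -> str:
    votes = {"Warehouse": 0, "Lakehouse": 0}
    votes["Warehouse" if skillset.lower() == "t-sql" else "Lakehouse"] += 1            # Spark -> Lakehouse
    votes["Warehouse" if multi_table_transactions.lower() == "yes" else "Lakehouse"] += 1  # No / Don't know -> Lakehouse
    votes["Warehouse" if data_types.lower() == "structured only" else "Lakehouse"] += 1    # mixed data -> Lakehouse
    return max(votes, key=votes.get)

print(choose_data_store("T-SQL", "yes", "structured only"))                      # Warehouse
print(choose_data_store("Spark", "don't know", "unstructured and structured"))   # Lakehouse
```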
Choose a candidate service
Perform a detailed evaluation of the service to confirm that it meets your needs.
The Warehouse item in Fabric Data Warehouse is an enterprise scale data warehouse
with open standard format.
The Lakehouse item in Fabric Data Engineering is a data architecture platform for
storing, managing, and analyzing structured and unstructured data in a single location.
Store, manage, and analyze structured and unstructured data in a single location
to gain insights and make decisions faster and efficiently.
Flexible and scalable solution that allows organizations to handle large volumes of
data of all types and sizes.
Easily ingest data from many different sources; the data is converted into a unified
Delta format.
Automatic table discovery and registration for a fully managed file-to-table
experience for data engineers and data scientists.
Automatic SQL analytics endpoint and default dataset that allows T-SQL querying
of delta tables in the lake
Warehouse compared with the SQL analytics endpoint of the Lakehouse
Primary capabilities
SQL analytics endpoint: Read-only, system-generated SQL analytics endpoint for the Lakehouse for T-SQL querying and serving. Supports analytics on the Lakehouse Delta tables and on the Delta Lake folders referenced via shortcuts.
Development experience
Warehouse: Warehouse Editor with full support for T-SQL data ingestion, modeling, development, and querying; UI experiences for data ingestion, modeling, and querying; read/write support for 1st and 3rd party tooling.
SQL analytics endpoint: Limited T-SQL support for views, table-valued functions, and SQL queries; UI experiences for modeling and querying; limited T-SQL support for 1st and 3rd party tooling.
T-SQL capabilities
Warehouse: Full DQL, DML, and DDL T-SQL support; full transaction support.
SQL analytics endpoint: Full DQL, no DML, limited DDL T-SQL support, such as SQL views and TVFs.
Developer profile, data loading, and storage layer are additional comparison criteria.
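To make the DQL/DML contrast concrete, here is a hedged Python sketch (server names and tables are placeholders): the same UPDATE statement succeeds against a Warehouse but is rejected by a lakehouse SQL analytics endpoint, which is read-only.

```python
# Hypothetical sketch of the contrast above, using pyodbc. All names are placeholders.
import pyodbc

def run(server: str, database: str, sql: str) -> None:
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={server};DATABASE={database};"
        "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
    )
    conn.cursor().execute(sql)
    conn.commit()

# Succeeds on a Warehouse (full DQL, DML, and DDL support).
run("<warehouse-endpoint>", "SalesWarehouse",
    "UPDATE dbo.Orders SET Amount = 0 WHERE OrderID = 1;")

# Fails on a lakehouse SQL analytics endpoint (DQL only, no DML).
try:
    run("<lakehouse-sql-endpoint>", "SalesLakehouse",
        "UPDATE dbo.Customers SET Region = 'EU';")
except pyodbc.Error as err:
    print("Rejected by the read-only endpoint:", err)
```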
Related content
Microsoft Fabric decision guide: choose a data store
Navigate to your items from Microsoft Fabric Home
Article • 01/26/2025
This article gives a high level view of navigating to your items and actions from
Microsoft Fabric Home.
When you create a new item, it saves in your My workspace unless you selected a
workspace from Workspaces. To learn more about creating items in workspaces, see
create workspaces.
7 Note
Power BI Home is different from the other product workloads. To learn more, visit
Power BI Home.
Overview of Home
On Home, you see items that you create and that you have permission to use. These
items are from all the workspaces that you access. That means that the items available
on everyone's Home are different. At first, you might not have much content, but that
changes as you start to create and share Microsoft Fabric items.
7 Note
Home isn't workspace-specific. For example, the Recent workspaces area on Home
might include items from many different workspaces.
In Microsoft Fabric, the term item refers to: apps, lakehouses, activators, warehouses,
reports, and more. Your items are accessible and viewable in Microsoft Fabric, and often
the best place to start working in Microsoft Fabric is from Home. However, once you
create at least one new workspace, are granted access to a workspace, or add an
item to My workspace, you might find it more convenient to start working directly in a
workspace. One way to navigate to a workspace is by using the nav pane and workspace
selector.
To open Home, select it from the top of your navigation pane (nav pane).
Most important content at your fingertips
The items that you can create and access appear on Home. If your Home canvas gets
crowded, use global search to find what you need quickly. The layout and content on
Fabric Home is different for every user.
1. The left navigation pane (nav pane) links you to different views of your items and
to creator resources. You can remove buttons from the nav pane to suit your
workflow.
2. The selector for switching between Fabric and Power BI.
3. Options for creating new items.
4. The top menu bar for orienting yourself in Fabric, finding items, help, and sending
feedback to Microsoft. The Account manager control is a critical button for looking
up your account information and managing your Fabric trial.
5. Learning resources to get you started learning about Fabric and creating items.
6. Your items organized by recent workspaces, recent items, and favorites.
) Important
Only the content that you can access appears on your Home. For example, if you
don't have permissions to a report, that report doesn't appear on Home. The
exception to this restriction is if your subscription or license changes to one with
less access, then you receive a prompt letting you know that the item is no longer
available and asking you to start a trial or upgrade your license.
The nav pane is there when you open Home and remains there as you open other areas
of Microsoft Fabric.
You can remove buttons from the nav pane for products and actions you don't think you
need. You can always add them back later.
To add a button back to the nav pane, start by selecting the ellipsis (...). Then right-click
the button and select Pin. If you don't have space on the nav pane, the pinned button
might displace a current button.
There are different ways to find and open your workspaces. If you know the name or
owner, you can search. Or you can select the Workspaces button in the nav pane and
choose which workspace to open.
The workspace opens on your canvas, and the name of the workspace is listed on your
nav pane. When you open a workspace, you can view its content. It includes items such
as notebooks, pipelines, reports, and lakehouses.
Create items
The Workload hub is a central location where you can view all the workloads available to
you. Navigate to your Workload hub by selecting Workloads from the nav pane.
Microsoft Fabric displays a list and description of the available workloads. Select a
workload to open it and learn more.
If your organization gives you access to additional workloads, your Workload hub
displays additional tabs.
When you select a workload, the landing page for that workload displays. Each workload
in Fabric has its own item types associated with it. The landing page has information
about these item types and details about the workload, learning resources, and samples
that you can use to test-run the workload.
Microsoft Fabric provides context sensitive help in the right rail of your browser. In this
example, we selected Browse from the nav pane and the Help pane automatically
updates to show us articles about the features of the Browse screen. For example, the
Help pane displays articles on View your favorites and See content that others shared
with you. If there are community posts related to the current view, they display under
Forum topics.
Leave the Help pane open as you work, and use the suggested topics to learn how to
use Microsoft Fabric features and terminology. Or, select the X to close the Help pane
and save screen space.
The Help pane is also a great place to search for answers to your questions. Type your
question or keywords in the Search field.
To return to the default Help pane, select the left arrow.
For more information about the Help pane, see Get in-product help.
Related content
Power BI Home
Start a Fabric trial
This article explains how to use the Fabric Help pane. The Help pane is feature-aware
and displays articles about the actions and features available on the current Fabric
screen. The Help pane is also a search engine that quickly finds answers to questions in
the Fabric documentation and Fabric community forums.
Other resources: This section has links for feedback and Support.
1. From the upper-right corner of Fabric, select the Help icon (?) icon to open the
Help pane.
2. Open Browse and select the Recent feature. The Fabric Help pane displays
documents about the Recent feature. To learn more, select a document. The
document opens in a separate browser tab.
3. Forum posts often provide interesting context. Select one that looks helpful or
interesting.
4. Search the Microsoft documentation and community forums by entering a
keyword in the search pane.
5. Return to the default display of the Help pane by selecting the arrow that appears
to the left of the entry field.
6. Close the Help pane by selecting the X icon in the upper-right corner of the pane.
When you're new to Microsoft Fabric, you have only a few items (workspaces, reports,
apps, lakehouses). But as you begin creating and sharing items, you can end up with
long lists of content. That's when searching, filtering, and sorting become helpful.
7 Note
Search is available from Home and also from most other areas of Microsoft Fabric. Just look
for the Search field.
In the Search field, type all or part of the name of an item, creator, keyword, or
workspace. You can even enter your colleague's name to search for content that they
shared with you. The search finds matches in all the items that you own or have access
to.
In addition to the Search field, most experiences on the Microsoft Fabric canvas also
include a Filter by keyword field. Similar to search, use Filter by keyword to narrow
down the content on your canvas to find what you need. The keywords you enter in the
Filter by keyword pane apply to the current view only. For example, if you open Browse
and enter a keyword in the Filter by keyword pane, Microsoft Fabric searches only the
content that appears on the Browse canvas.
Sorting is also available in other areas of Microsoft Fabric. In this example, the
workspaces are sorted by the Refreshed date. To set sorting criteria for workspaces,
select a column header, and then select again to change the sorting direction.
Not all columns can be sorted. Hover over the column headings to discover which can
be sorted.
The Fabric settings pane provides links to various kinds of settings you can configure.
This article shows how to open the Fabric settings pane and describes the kinds of
settings you can access from there.
Preferences
In the preferences section, individual users can set their user preferences, specify the
language of the Fabric user interface, manage their account and notifications, and
configure settings for their personal use throughout the system.
General: Opens the general settings page, where you can set the display language for the Fabric interface and parts of visuals.
Notifications: Opens the notifications settings page, where you can view your subscriptions and alerts.
Item settings: Opens the item settings page, where you can configure per-item-type settings.
Developer settings: Opens the developer settings page, where you can configure developer mode settings.
Resources and extensions
The resources and extensions section provides links to pages where users can use the
following capabilities.
Manage personal/group storage: Opens the personal/group storage management page, where you can see and manage data items that you own or that have been shared with you.
Power BI settings: Opens the Power BI settings page, where you can get to the settings pages for the Power BI items (dashboards, semantic models, workbooks, reports, datamarts, and dataflows) that are in the current workspace.
Manage connections and gateways: Opens a page where you can manage connections, on-premises data gateways, and virtual network data gateways.
Manage embed codes: Opens a page where you can manage embed codes you have created.
Azure Analysis Services migrations: Opens a page where you can migrate your Azure Analysis Services datasets to Power BI Premium.
Admin portal: Opens the Fabric admin portal, where admins perform various management tasks and configure Fabric tenant settings. For more information about the admin portal, see What is the admin portal?. To learn how to open the admin portal, see How to get to the admin portal.
Microsoft Purview hub (preview): Currently available to Fabric admins only. Opens the Microsoft Purview hub, where you can view Purview insights about your organization's sensitive data. The Microsoft Purview hub also provides links to Purview governance and compliance capabilities and has links to documentation to help you get started with Microsoft Purview governance and compliance in Fabric.
Related content
What is Fabric
What is Microsoft Fabric admin?
Workspaces are places to collaborate with colleagues to create collections of items such
as lakehouses, warehouses, and reports, and to create task flows. This article describes
workspaces, how to manage access to them, and what settings are available.
Set up a task flow for the workspace to organize your data project and to help
others understand and work on your project. Read more about task flows.
Pin workspaces to the top of the workspace flyout list to quickly access your
favorite workspaces. Read more about pinning workspaces.
Navigate to the current workspace from anywhere by selecting its icon on the left nav
pane. Read more about the current workspace in this article.
Workspace settings: As workspace admin, you can update and manage your
workspace configurations in workspace settings.
Contact list: Specify who receives notification about workspace activity. Read more
about workspace contact lists in this article.
Current workspace
After you select and open a workspace, this workspace becomes your current
workspace. You can quickly navigate to it from anywhere by selecting the workspace
icon from left nav pane.
Workspace layout
A workspace consists of a header, a toolbar, and a view area. There are two views that
can appear in the view area: list view and lineage view. You select the view you want to
see with controls on the toolbar. The following image shows these main workspace
components, with list view selected.
1. Header: The header contains the name and brief description of the workspace, and
also links to other functionality.
2. Toolbar: The toolbar contains controls for adding items to the workspace and
uploading files. It also contains a search box, filter, and the list view and lineage
view selectors.
3. List view and lineage view selectors: The list view and lineage view selectors
enable you to choose which view you want to see in the view area.
4. View area: The view area displays either list view or lineage view.
List view
List view is divided into the task flow and the items list.
1. Task flow: The task flow is where you can create or view a graphical representation
of your data project. The task flow shows the logical flow of the project - it doesn't
show the flow of data. Read more about task flows.
2. Items list: The items list is where you see the items and folders in the workspace. If
you have tasks in the task flow, you can filter the items list by selecting the tasks.
3. Resize bar: You can resize the task flow and items list by dragging the resize bar up
or down.
4. Show/Hide task flow: If you don't want to see the task flow, you can hide it using
the hide/show arrows at the side of the separator bar.
Lineage view
Lineage view shows the flow of data between the items in the workspace. Read more
about lineage view.
Workspace settings
Workspace admins can use workspace settings to manage and update the workspace.
The settings include general settings of the workspace, like the basic information of the
workspace, contact list, SharePoint, license, Azure connections, storage, and other
experiences' specific settings.
To open the workspace settings, you can select the workspace in the nav pane, then
select More options (...) > Workspace settings next to the workspace name.
You can also open it from the workspace page.
7 Note
Creating Microsoft 365 Groups may be restricted in your environment, or the ability
to create them from your SharePoint site may be disabled. If this is the case, speak
with your IT department.
License mode
By default, workspaces are created in your organization's shared capacity. When your
organization has other capacities, workspaces including My Workspaces can be assigned
to any capacity in your organization. You can configure it while creating a workspace or
in Workspace settings -> Premium. Read more about licenses.
Azure connections configuration
Workspace admins can configure dataflow storage to use Azure Data Lake Gen 2
storage and Azure Log Analytics (LA) connection to collect usage and performance logs
for the workspace in workspace settings.
With the integration of Azure Data Lake Gen 2 storage, you can bring your own storage
to dataflows, and establish a connection at the workspace level. Read Configure
dataflow storage to use Azure Data Lake Gen 2 for more detail.
After the connection with Azure Log Analytics (LA), activity log data is sent continuously
and is available in Log Analytics in approximately 5 minutes. Read Using Azure Log
Analytics for more detail.
System storage
System storage is the place to manage your semantic model storage in your individual
or workspace account so you can keep publishing reports and semantic models. Your
own semantic models, Excel reports, and those items that someone has shared with you,
are included in your system storage.
In the system storage, you can view how much storage you have used and free up the
storage by deleting the items in it.
Keep in mind that you or someone else may have reports and dashboards based on a
semantic model. If you delete the semantic model, those reports and dashboards don't
work anymore.
In the Workspace settings pane, select Other > Remove this workspace.
2 Warning
If the workspace you're deleting has a workspace identity, that workspace identity
will be irretrievably lost. In some scenarios this could cause Fabric items relying on
the workspace identity for trusted workspace access or authentication to break. For
more information, see Delete a workspace identity.
Admins can also see the state of all the workspaces in their organization. They can
manage, recover, and even delete workspaces. Read about managing the workspaces
themselves in the "Admin portal" article.
Auditing
Microsoft Fabric audits the following activities for workspaces.
This article explains how to create workspaces in Microsoft Fabric. In workspaces, you
create collections of items such as lakehouses, warehouses, and reports. For more
background, see the Workspaces article.
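The steps below use the Fabric portal. As a hedged aside (not part of the original article), workspaces can also be created programmatically through the Fabric REST API; the endpoint, payload, and IDs in this sketch are assumptions based on the public API reference and should be verified before use.

```python
# Hedged sketch only: create a workspace with the Fabric REST API.
# Assumes a pre-acquired Microsoft Entra access token; the capacity ID and name are placeholders.
import requests

token = "<entra-access-token>"
payload = {
    "displayName": "Sales Analytics",
    "capacityId": "<capacity-guid>",   # optional: assign the new workspace to a capacity
}

resp = requests.post(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])   # the new workspace ID
```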
To create a workspace:
1. Select Workspaces > New workspace. The Create a workspace pane opens.
If you are a domain contributor for the workspace, you can associate the
workspace to a domain, or you can change an existing association. For
information about domains, see Domains in Fabric.
Advanced settings
Expand Advanced and you see advanced setting options:
Contact list
Contact list is a place where you can put the names of people as contacts for
information about the workspace. Accordingly, people in this contact list receive system
email notifications for workspace level changes.
By default, the workspace admin who created the workspace is the contact. You can
add other users or groups according to your needs. Enter a name directly in the input
box; it automatically searches for and matches users or groups in your org.
License mode
Different license modes provide different sets of features for your workspace. After
creation, you can still change the workspace license type in workspace settings, but
some migration effort is needed.
7 Note
Currently, if you want to downgrade the workspace license type from Premium
capacity to Pro (Shared capacity), you must first remove any non-Power BI Fabric
items that the workspace contains. Only after you remove such items will you be
allowed to downgrade the capacity. For more information, see Moving data
around.
Template apps
Power BI template apps are developed for sharing outside your organization. If you
check this option, a special type of workspace (template app workspace) is created. It's
not possible to revert it back to a normal workspace after creation.
Dataflow storage (preview)
Data used with Power BI is stored in internal storage provided by Power BI by default.
With the integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you
can store your dataflows in your organization's Azure Data Lake Storage Gen2 account.
Learn more about dataflows in Azure Data Lake Storage Gen2 accounts.
Pin workspaces
Quickly access your favorite workspaces by pinning them to the top of the workspace
flyout list.
1. Open the workspace flyout from the nav pane and hover over the workspace you
want to pin. Select the Pin to top icon.
2. The workspace is added in the Pinned list.
3. To unpin a workspace, select the unpin button. The workspace is unpinned.
Related content
Read about workspaces
Workspace roles let you manage who can do what in a Microsoft Fabric workspace.
Microsoft Fabric workspaces sit on top of OneLake and divide the data lake into
separate containers that can be secured independently. Workspace roles in Microsoft
Fabric extend the Power BI workspace roles by associating new Microsoft Fabric
capabilities such as data integration and data exploration with existing workspace roles.
For more information on Power BI roles, see Roles in workspaces in Power BI.
You can assign roles either to individuals or to security groups, Microsoft 365 groups,
and distribution lists. To grant access to a workspace, assign those user groups or
individuals to one of the workspace roles: Admin, Member, Contributor, or Viewer.
Here's how to give users access to workspaces.
Everyone in a user group gets the role that you assign. If someone is in several user
groups, they get the highest level of permission that's provided by the roles that they're
assigned. If you nest user groups and assign a role to a group, all the contained users
have the permissions of that role.
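As an illustration of the "highest role wins" behavior described above (not an official algorithm), here is a small Python sketch:

```python
# Illustrative only: resolve a user's effective workspace role when several group
# memberships grant different roles; the highest role in the hierarchy wins.
ROLE_RANK = {"Viewer": 1, "Contributor": 2, "Member": 3, "Admin": 4}

def effective_role(roles_from_group_memberships: list[str]) -> str:
    """Return the highest workspace role granted across all group assignments."""
    return max(roles_from_group_memberships, key=ROLE_RANK.__getitem__)

print(effective_role(["Viewer", "Contributor"]))   # Contributor
print(effective_role(["Member", "Viewer"]))        # Member
```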
Users in workspace roles have the following Microsoft Fabric capabilities, in addition to
the existing Power BI capabilities associated with these roles.
1 Contributors and Viewers can also share items in a workspace, if they have Reshare
permissions.
2 Other permissions are needed to read data from shortcut destination. Learn more
about shortcut security model.
Related content
Roles in workspaces in Power BI
Create workspaces
Give users access to workspaces
Fabric and OneLake security
OneLake shortcuts
Data warehouse security
Data engineering security
Data science roles and permissions
Role-based access control in Eventhouse
This article explains how to create items in workspaces in Microsoft Fabric. For more
information about items and workspaces, see the Microsoft Fabric terminology and
Workspaces article.
2. You can see that all items are categorized by task. Each task represents a daily job
to be done when you build a data solution: get data, store data, prepare data,
analyze and train data, track data, visualize data, and develop data. Inside each
category, item types are sorted alphabetically. You can scroll to browse all the item
types that are available for you to create.
3. Select the card of the item type you want to create to start the creation process.
3. The next time you select the New item button, Favorites is shown by default so
that you can quickly access the item types you create most frequently.
4. By selecting the star button again, you can unfavorite the item types.
Import items
You can also import files from outside Fabric to create Fabric items in a workspace.
1. Select Import in a workspace to see all the item types you can create by
importing files from elsewhere.
2. Select the item type you want to import, and then select the location where your
files are stored.
3. Select the file you want to import and confirm.
4. Check that the new items are created in the workspace and that the import process
completed successfully.
Related content
Create workspaces
This article explains what folders in workspaces are and how to use them in workspaces
in Microsoft Fabric. Folders are organizational units inside a workspace that enable users
to efficiently organize and manage artifacts in the workspace. For more information
about workspaces, see the Workspaces article.
2. Enter a name for the folder in the New folder dialog box. See Folder name
requirements for naming restrictions.
7 Note
2. Select the destination folder where you want to move this item.
4. By selecting Open folder in the notification, or by navigating to the folder directly,
you can go to the destination folder and check whether the item moved successfully.
2. Select a destination where you want to move these items. You can also create a
new folder if you need it.
7 Note
Dataflows gen2
Streaming semantic models
Streaming dataflows
If you create items from the home page or the Create hub, items are created
at the root level of the workspace.
When you publish a report, you can choose the specific workspace and folder for your
report, as illustrated in the following image.
To publish reports to specific folders in the service, make sure that in Power BI Desktop,
the Publish dialogs support folder selection setting is enabled in the Preview features
tab in the options menu.
Rename a folder
1. Select the context (...) menu, then select Rename.
2. Give the folder a new name and select the Rename button.
7 Note
When renaming a folder, follow the same naming convention as when you're
creating a folder. See Folder name requirements for naming restrictions.
Delete a folder
1. Make sure the folder is empty.
Permission model
Workspace admins, members, and contributors can create, modify, and delete folders in
the workspace. Viewers can only view folder hierarchy and navigate in the workspace.
Currently, folders inherit the permissions of the workspace where they're located.
Create folder: Admin ✅, Member ✅, Contributor ✅, Viewer ❌
Delete folder: Admin ✅, Member ✅, Contributor ✅, Viewer ❌
Rename folder: Admin ✅, Member ✅, Contributor ✅, Viewer ❌
Related content
Folders in deployment pipelines
Create workspaces
Give users access to workspaces
After you create a workspace in Microsoft Fabric, or if you have an admin or member
role in a workspace, you can give others access to it by adding them to the different
roles. Workspace creators are automatically admins. For an explanation of the different
roles, see Roles in workspaces.
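Beyond the portal flow described here, role assignments can also be granted programmatically. The following Python sketch is a hedged illustration only; the REST endpoint, request body, and IDs are assumptions based on the public Fabric REST API reference, not values from this article.

```python
# Hedged sketch only: grant a user the Viewer role on a workspace through the Fabric REST API.
# The workspace ID, principal object ID, and token are placeholders.
import requests

workspace_id = "<workspace-guid>"
token = "<entra-access-token>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/roleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "principal": {"id": "<user-object-id>", "type": "User"},
        "role": "Viewer",
    },
    timeout=30,
)
resp.raise_for_status()
```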
7 Note
To enforce row-level security (RLS) on Power BI items for Microsoft Fabric Pro users
who browse content in a workspace, assign them the Viewer Role.
After you add or remove workspace access for a user or a group, the permission
change only takes effect the next time the user logs into Microsoft Fabric.
This article walks you through the following basic tasks in Microsoft Fabric’s Git
integration tool:
It’s recommended to read the overview of Git integration before you begin.
Prerequisites
To integrate Git with your Microsoft Fabric workspace, you need to set up the following
prerequisites for both Fabric and Git.
Fabric prerequisites
To access the Git integration feature, you need a Fabric capacity. A Fabric capacity is
required to use all supported Fabric items. If you don't have one yet, sign up for a free
trial. Customers that already have a Power BI Premium capacity can use that capacity,
but keep in mind that certain Power BI SKUs only support Power BI items.
In addition, the following tenant switches must be enabled from the Admin portal:
These switches can be enabled by the tenant admin, capacity admin, or workspace
admin, depending on your organization's settings.
Git prerequisites
Git integration is currently supported for Azure DevOps and GitHub. To use Git
integration with your Fabric workspace, you need the following in either Azure DevOps
or GitHub:
Azure DevOps
An active Azure account registered to the same user that is using the Fabric
workspace. Create a free account .
Access to an existing repository.
1. Sign into Fabric and navigate to the workspace you want to connect with.
2. Go to Workspace settings
4. Select your Git provider. Currently, Azure DevOps and GitHub are supported.
If you select Azure DevOps, select Connect to automatically sign into the Azure
Repos account registered to the Microsoft Entra user signed into Fabric.
Connect to a workspace
If the workspace is already connected to GitHub, follow the instructions for Connecting
to a shared workspace.
1. From the dropdown menu, specify the following details about the branch you
want to connect to:
7 Note
You can only connect a workspace to one branch and one folder at a
time.
Organization
Project
Git repository.
Branch (Select an existing branch using the drop-down menu, or select +
New Branch to create a new branch. You can only connect to one branch
at a time.)
Folder (Type in the name of an existing folder or enter a name to create a
new folder. If you leave the folder name blank, content will be created in
the root folder. You can only connect to one folder at a time.)
During the initial sync, if either the workspace or Git branch is empty, content is copied
from the nonempty location to the empty one. If both the workspace and Git branch
have content, you’re asked which direction the sync should go. For more information on
this initial sync, see Connect and sync.
After you connect, the Workspace displays information about source control that allows
the user to view the connected branch, the status of each item in the branch and the
time of the last sync.
To keep your workspace synced with the Git branch, commit any changes you make in
the workspace to the Git branch, and update your workspace whenever anyone creates
new commits to the Git branch.
Commit to Git
1. Go to the workspace.
2. Select the Source control icon. This icon shows the number of uncommitted
changes.
3. Select Changes in the Source control panel. A list appears with all the
items you changed, and an icon indicating whether the item is new, modified,
in conflict, or deleted.
4. Select the items you want to commit. To select all items, check the top box.
5. Add a comment in the box. If you don't add a comment, a default message is
added automatically.
6. Select Commit.
After the changes are committed, the items that were committed are removed from
the list, and the workspace will point to the new commit that it synced to.
After the commit is completed successfully, the status of the selected items changes
from Uncommitted to Synced.
1. Go to the workspace.
2. Select the Source control icon.
3. Select Updates from the Source control panel. A list appears with all the items that
were changed in the branch since the last update.
4. Select Update all.
After it updates successfully, the list of items is removed, and the workspace will point to
the new commit that it's synced to.
After the update is completed successfully, the status of the items changes to Synced.
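The portal flow above has REST equivalents. As a hedged sketch only: the Git endpoints, payload shapes, and IDs below are assumptions based on the public Fabric REST API reference and should be verified before use.

```python
# Hedged sketch only: check Git status and commit workspace changes via the Fabric Git REST APIs.
# Endpoint paths and payloads are assumptions; the workspace ID and token are placeholders.
import requests

base = "https://api.fabric.microsoft.com/v1/workspaces/<workspace-guid>/git"
headers = {"Authorization": "Bearer <entra-access-token>"}

# See which items are uncommitted or behind the connected branch.
status = requests.get(f"{base}/status", headers=headers, timeout=30).json()
print(status)

# Commit all workspace changes to the connected Git branch with a message.
requests.post(
    f"{base}/commitToGit",
    headers=headers,
    json={"mode": "All", "comment": "Sync workspace changes"},
    timeout=30,
).raise_for_status()
```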
1. Go to Workspace settings
2. Select Git integration
3. Select Disconnect workspace
4. Select Disconnect again to confirm.
Permissions
The actions you can take on a workspace depend on the permissions you have in both
the workspace and the Git repo. For a more detailed discussion of permissions, see
Permissions.
Considerations and limitations
The Azure DevOps account must be registered to the same user that is using
the Fabric workspace.
The tenant admin must enable cross-geo exports if the workspace and Git
repo are in two different geographical regions.
If your organization set up conditional access, make sure the Power BI Service
has the same conditions set for authentication to function as expected.
The commit size is limited to 125 MB.
IP allowlist
Private networking
Custom domains
Workspace limitations
Only the workspace admin can manage the connections to the Git Repo such as
connecting, disconnecting, or adding a branch.
Once connected, anyone with permission can work in the workspace.
The workspace folder structure isn't reflected in the Git repository. Workspace
items in folders are exported to the root directory.
\ * ? |
The item folder (the folder that contains the item files) can't contain any of the
following characters: " : < > \ * ? | . If you rename the folder to
something that includes one of these characters, Git can't connect or sync with the
workspace and an error occurs.
Related content
Understand the Git integration process
Manage Git branches
Git integration best practices
When a user leaves the organization, or if they don't sign in for more than 90 days, it's
possible that any Fabric items they own will stop working correctly. In such cases,
anyone with read and write permissions on such an item (such as workspace admins,
members, and contributors) can take ownership of the item, using the procedure
described in this article.
When a user takes over ownership of an item using this procedure, they also become
the owner of any child items the item might have. You can't take over ownership of child
items directly - only through the parent item.
7 Note
Items such as semantic models, reports, datamarts, dataflows gen1 and dataflows
gen2 have existing functionality for changing item ownership that remains the
same. This article describes the procedure for taking ownership of other Fabric
items.
Prerequisites
To take over ownership of a Fabric item, you must have read and write permissions on
the item.
1. Navigate to the item's settings. Remember, the item can't be a child item.
If the take over fails for any reason, select Take over again.
Partial failure: The error message is "Can't take over child items. Try again." Next step: retry the take over of the parent item.
Complete failure: The error message is "Can't take over <item_name>. Try again." Next step: retry the take over of the parent item.
7 Note
Data Pipeline items require the additional step of ensuring that the Last Modified
By user is also updated after taking item ownership. You can do this by making a
small edit to the item and saving it. For example, you could make a small change to
the activity name.
) Important
The take over feature doesn't cover ownership change of related items. For
instance, if a data pipeline has notebook activity, changing ownership of the data
pipeline doesn't change the ownership of the notebook. Ownership of related
items needs to be changed separately.
In this scenario, the new item owner can fix connections by going into the item and
replacing the connection with a new or existing connection. The following sections
describe the steps for doing this procedure for several common item types. For other
item types that have connections, refer to the item's connection management
documentation.
Pipelines
1. Open the pipeline.
2. Select the activity created.
3. Replace the connection in the source and/or destination with the appropriate
connection.
KQL Queryset
1. Open the KQL queryset.
Real-Time Dashboard
1. Open the real-time dashboard in edit mode.
2. Select Add data connection to add a new connection and use that in the data
function.
Mirrored SQL DB
Mirrored Snowflake
Mirrored database
If a mirrored database stops working because the item owner has left the
organization or their credentials are disabled, create a new mirrored database.
The option to take over an item isn't available if the item is a system-generated
item not visible or accessible to users in a workspace. For instance, a parent item
might have system-generated child items - this can happen when items such as
Eventstream items and Data Activator items are created through the Real-Time
hub. In such cases, the take over option is not available for the parent item.
Currently, there's no API support for changing ownership of Fabric items. This
doesn't impact existing functionality for changing ownership of items such as
semantic models, reports, dataflows gen1 and gen2, and datamarts, which
continues to be available. For information about taking ownership of warehouses,
see Change ownership of Fabric Warehouse.
Workspace identities can be created in the workspace settings of workspaces that are
associated with a Fabric capacity. A workspace identity is automatically assigned the
workspace contributor role and has access to workspace items.
When you create a workspace identity, Fabric creates a service principal in Microsoft
Entra ID to represent the identity. An accompanying app registration is also created.
Fabric automatically manages the credentials associated with workspace identities,
thereby preventing credential leaks and downtime due to improper credential handling.
7 Note
While Fabric workspace identities share some similarities with Azure managed identities,
their lifecycle, administration, and governance are different. A workspace identity has an
independent lifecycle that is managed entirely in Fabric. A Fabric workspace can
optionally be associated with an identity. When the workspace is deleted, the identity
gets deleted. The name of the workspace identity is always the same as the name of the
workspace it's associated with.
The workspace identity configuration is described in the following sections.
Identity details
Name: The workspace identity name. The workspace identity name is the same as the workspace name.
ID: The workspace identity GUID. This is a unique identifier for the identity.
Role: The workspace role assigned to the identity. Workspace identities are automatically assigned the contributor role upon creation.
State: The state of the workspace identity. Possible values: Active, Inactive, Deleting, Unusable, Failed, DeleteFailed.
Authorized users
For information, see Access control.
7 Note
Access control
Workspace identity can be created and deleted by workspace admins. The workspace
identity has the workspace contributor role on the workspace.
Application Administrators or users with higher roles can view, modify, and delete the
service principal and app registration associated with the workspace identity in Azure.
2 Warning
Modifying or deleting the service principal or app registration in Azure is not
recommended, as it will cause Fabric items relying on workspace identity to stop
working.
7 Note
Enterprise applications
The application associated with the workspace identity can be seen in Enterprise
Applications in the Azure portal. Fabric Identity Management app is its configuration
owner.
2 Warning
Modifications to the application made here will cause the workspace identity to
stop working.
To view the audit logs and sign-in logs for this identity:
App registrations
The application associated with the workspace identity can be seen under App
registrations in the Azure portal. No modifications should be made there, as this will
cause the workspace identity to stop working.
Advanced scenarios
The following sections describe scenarios involving workspace identities that might
occur.
When a workspace is deleted, its workspace identity is deleted as well. If the workspace
is restored after deletion, the workspace identity is not restored. If you want the
restored workspace to have a workspace identity, you must create a new one.
Renaming the workspace
When a workspace gets renamed, the workspace identity is also renamed to match the
workspace name. However, its Entra application and service principal remain the same.
Note that there can be multiple application and app registration objects with the same
name in a tenant.
If you run into issues the first time you create a workspace identity in your tenant,
try the following steps:
1. If the workspace identity state is failed, wait for an hour and then delete the
identity.
2. After the identity has been deleted, wait 5 minutes and then create the
identity again.
Related content
Trusted workspace access
Fabric identities
Workspace monitoring is a Microsoft Fabric database that collects and organizes logs
and metrics from a range of Fabric items in your workspace. Workspace monitoring lets
workspace users access and analyze logs and metrics related to Fabric items in the
workspace. You can query the database to gain insights into the usage and performance
of your workspace.
Monitoring
Workspace monitoring creates an Eventhouse database in your workspace that collects
and organizes logs and metrics from the Fabric items in the workspace. Workspace
contributors can query the database to learn more about the performance of their
Fabric items.
Data collection - The monitoring Eventhouse collects diagnostic logs and metrics
from Fabric items in the workspace. The data is aggregated and stored in the
monitoring database, where it can be queried using KQL or SQL. The database
supports both historical log analysis and real-time data streaming.
Access - Access the monitoring database from the workspace. You can build and
save query sets and dashboards to simplify data exploration.
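As a hedged illustration (not from the article), the following Python sketch queries the monitoring Eventhouse with KQL using the azure-kusto-data package; the query URI, database name, table, and column names are assumptions about your monitoring database, so adjust them to what you see in your own workspace.

```python
# Hedged sketch only: query the workspace monitoring Eventhouse with KQL from Python.
# The query URI and database are placeholders; the table and column names are assumptions.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<monitoring-eventhouse-query-uri>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

kql = """
SemanticModelLogs
| where Timestamp > ago(1d)
| summarize Operations = count(), AvgDurationMs = avg(DurationMs) by OperationName
| top 10 by Operations
"""

for row in client.execute("<monitoring-database>", kql).primary_results[0]:
    print(row["OperationName"], row["Operations"], row["AvgDurationMs"])
```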
Operation logs
After you install workspace monitoring, you can query the following logs:
Mirrored database
Mirrored database logs
Power BI
Semantic models
Sample queries
Workload monitoring sample queries are available from workspace-monitoring in the
Fabric samples GitHub repository.
You can't configure ingestion to filter for a specific log type or category, such as error
or workload type.
User data operation logs aren't available even though the table is available in the
monitoring database.
Related content
Enable monitoring in your workspace
Prerequisites
A Power BI Premium or a Fabric capacity.
The tenant setting Workspace admins can turn on monitoring for their workspaces
must be enabled. To enable the setting, you need to be a Fabric administrator. If you're
not a Fabric administrator, ask the Fabric administrator in your organization to
enable the setting.
Enable monitoring
Follow these steps to enable monitoring in your workspace:
1. Go to the workspace you want to enable monitoring for, and select Workspace
settings (⚙).
Related content
What is workspace monitoring?
OneLake catalog is a centralized place that helps you find, explore, and use the Fabric
items you need, and govern the data you own. It features two tabs:
Explore tab: The explore tab has an items list with an in-context item details view
that makes it possible to browse through and explore items without losing your list
context. It also provides selectors and filters to narrow down and focus the list,
making it easier to find what you need. By default, the OneLake catalog opens on
the Explore tab.
Govern tab: The govern tab provides insights that help you understand the
governance posture of all the data you own in Fabric, and presents recommended
actions you can take to improve the governance status of your data.
Related content
Discover and explore Fabric items in the OneLake catalog
Govern your data in Fabric
Endorsement
Fabric domains
Lineage in Fabric
Monitor hub
Copilot and other generative AI features in preview bring new ways to transform and
analyze data, generate insights, and create visualizations and reports in Microsoft Fabric
and Power BI.
Enable Copilot
Before your business can start using Copilot capabilities in Microsoft Fabric, you need to
enable Copilot.
Read on for answers to your questions about how it works in the different workloads,
how it keeps your business data secure and adheres to privacy requirements, and how
to use generative AI responsibly.
7 Note
Copilot is not yet supported for sovereign clouds due to GPU availability.
For more information on the features and how to use Copilot for Power BI, see Overview
of Copilot for Power BI.
The article Privacy, security, and responsible use for Copilot (preview) offers guidance on
responsible use.
Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.
Before you use Copilot, your admin needs to enable Copilot in Fabric. See the article
Overview of Copilot in Fabric for details. Also, keep in mind the limitations of Copilot:
Available regions
Available regions for Azure OpenAI service
To access the prebuilt Azure OpenAI Service, including Copilot in Fabric, you must
have an F64 or higher SKU or a P SKU in the following Fabric regions. The Azure OpenAI
Service isn't available on trial SKUs.
Azure OpenAI Service is powered by large language models that are currently only
deployed to US datacenters (East US, East US2, South Central US, and West US) and EU
datacenter (France Central). If your data is outside the US or EU, the feature is disabled
by default unless your tenant admin enables Data sent to Azure OpenAI can be
processed outside your capacity's geographic region, compliance boundary, or
national cloud instance tenant setting. To learn how to get to the tenant settings, see
About tenant settings.
7 Note
The data processed for Copilot interactions can include user prompts, meta
prompts, the structure of data (schema), and conversation history. No data, such as
content in tables, is sent to Azure OpenAI for processing unless it is included in the
user prompts.
Related content
What is Microsoft Fabric?
Copilot in Fabric: FAQ
AI services in Fabric (preview)
Copilot tenant settings
Copilot and other generative AI features in preview bring new ways to transform and
analyze data, generate insights, and create visualizations in Microsoft Fabric and Power
BI.
The F64 capacity must be in a supported region listed in Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by default
unless your admin enables the Data sent to Azure OpenAI can be processed outside
your tenant's geographic region, compliance boundary, or national cloud instance
tenant setting in the Fabric Admin portal.
Copilot isn't supported for Fabric trial SKUs. Only paid SKUs (F64 or higher) are
eligible.
The following screenshot shows the tenant setting where Copilot can be enabled or
disabled:
Copilot in Microsoft Fabric is rolling out gradually, ensuring all customers with paid
Fabric capacities (F64 or higher) gain access. It automatically appears as a new setting in
the Fabric admin portal when available for your tenant. Once billing starts for the
Copilot in Fabric experiences, Copilot usage will count against your existing Fabric
capacity.
See the article Overview of Copilot in Fabric for details on its functionality across
workloads, data security, privacy compliance, and responsible AI use.
) Important
When scaling from a smaller capacity to F64 or above, allow up to 24 hours for
Copilot for Power BI to activate.
Related content
What is Microsoft Fabric?
Copilot in Fabric: FAQ
AI services in Fabric (preview)
Copilot tenant settings
Copilot in Power BI
This article answers frequently asked questions about Copilot for Microsoft Fabric and
Power BI.
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
For more information, see the article Overview of Copilot in Fabric.
Power BI
Can Copilot be enabled for specific workspaces
within a tenant?
Copilot is enabled at the tenant level and access can be restricted by security groups. If
the workspace is tied to an F64 or P1 capacity, the Copilot experience is enabled.
If your data doesn't meet that criteria, we recommend spending the time to bring it into
compliance.
Try restarting Copilot by closing the pane and selecting the Copilot button again.
Real-Time Intelligence
Does Copilot respond to multiple questions in a
conversation?
No, Copilot doesn't answer follow-up questions. You need to ask one question at a time.
Related content
What is Microsoft Fabric?
Privacy, security, and responsible use of Copilot in Fabric
Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.
This article provides answers to common questions related to business data security and
privacy to help your organization get started with Copilot in Fabric. The article Privacy,
security, and responsible use for Copilot in Power BI (preview) provides an overview of
Copilot in Power BI. Read on for details about Copilot for Fabric.
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
) Important
Review the supplemental preview terms for Fabric , which includes terms of use
for Microsoft Generative AI Service Previews.
In general, these features are designed to generate natural language, code, or other
content based on:
For example, Power BI, Data Factory, and data science offer Copilot chats where you can
ask questions and get responses that are contextualized on your data. Copilot for Power
BI can also create reports and other visualizations. Copilot for Data Factory can
transform your data and explain what steps it has applied. Data science offers Copilot
features outside of the chat pane, such as custom IPython magic commands in
notebooks. Copilot chats may be added to other experiences in Fabric, along with other
features that are powered by Azure OpenAI under the hood.
This information is sent to Azure OpenAI Service, where it's processed and an output is
generated. Therefore, data processed by Azure OpenAI can include:
Grounding data may include a combination of dataset schema, specific data points, and
other information relevant to the user's current task. Review each experience section for
details on what data is accessible to Copilot features in that scenario.
Interactions with Copilot are specific to each user. This means that Copilot can only
access data that the current user has permission to access, and its outputs are only
visible to that user unless that user shares the output with others, such as sharing a
generated Power BI report or generated code. Copilot doesn't use data from other users
in the same tenant or other tenants.
Copilot uses Azure OpenAI—not the publicly available OpenAI services—to process all
data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses
a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in
the Microsoft Azure environment, and the Service doesn't interact with any services by
OpenAI, such as ChatGPT or the OpenAI API. Your data isn't used to train models and
isn't available to other customers. Learn more about Azure OpenAI.
1. Copilot receives a prompt from a user. This prompt could be in the form of a
question that a user types into a chat pane, or in the form of an action such as
selecting a button that says "Create a report."
2. Copilot preprocesses the prompt through an approach called grounding.
Depending on the scenario, this might include retrieving relevant data such as
dataset schema or chat history from the user's current session with Copilot.
Grounding improves the specificity of the prompt, so the user gets responses that
are relevant and actionable to their specific task. Data retrieval is scoped to data
that is accessible to the authenticated user based on their permissions. See the
section What data does Copilot use and how is it processed? in this article for
more information.
3. Copilot takes the response from Azure OpenAI and postprocesses it. Depending
on the scenario, this postprocessing might include responsible AI checks, filtering
with Azure content moderation, or additional business-specific constraints.
4. Copilot returns a response to the user in the form of natural language, code, or
other content. For example, a response might be in the form of a chat message or
generated code, or it might be a contextually appropriate form such as a Power BI
report or a Synapse notebook cell.
5. The user reviews the response before using it. Copilot responses can include
inaccurate or low-quality content, so it's important for subject matter experts to
check outputs before using or sharing them.
Just as each experience in Fabric is built for certain scenarios and personas—from data
engineers to data analysts—each Copilot feature in Fabric has also been built with
unique scenarios and users in mind. For capabilities, intended uses, and limitations of
each feature, review the section for the experience you're working in.
Definitions
Prompt or input
The text or action submitted to Copilot by a user. This could be in the form of a question
that a user types into a chat pane, or in the form of an action such as selecting a button
that says "Create a report."
Grounding
A preprocessing technique where Copilot retrieves additional data that's contextual to
the user's prompt, and then sends that data along with the user's prompt to Azure
OpenAI in order to generate a more relevant and actionable response.
Response or output
The content that Copilot returns to a user. For example, a response might be in the form
of a chat message or generated code, or it might be contextually appropriate content
such as a Power BI report or a Synapse notebook cell.
This information is sent to Azure OpenAI Service, where it's processed and an output is
generated. Data processed by Azure OpenAI can therefore include the user's prompt or
input, grounding data, and the AI's response or output.
Grounding data may include a combination of dataset schema, specific data points, and
other information relevant to the user's current task. Review each experience section for
details on what data is accessible to Copilot features in that scenario.
Interactions with Copilot are specific to each user. This means that Copilot can only
access data that the current user has permission to access, and its outputs are only
visible to that user unless that user shares the output with others, such as sharing a
generated Power BI report or generated code. Copilot doesn't use data from other users
in the same tenant or other tenants.
Copilot uses Azure OpenAI—not OpenAI's publicly available services—to process all
data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses
a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in
Microsoft's Azure environment and the Service doesn't interact with any services by
OpenAI (for example, ChatGPT or the OpenAI API). Your data isn't used to train models
and isn't available to other customers. Learn more about Azure OpenAI.
To allow data to be processed elsewhere, your admin can turn on the setting Data sent
to Azure OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance. Learn more about admin settings for
Copilot.
What should I know to use Copilot responsibly?
Microsoft is committed to ensuring that our AI systems are guided by our AI
principles and Responsible AI Standard . These principles include empowering our
customers to use these systems effectively and in line with their intended uses. Our
approach to responsible AI is continually evolving to proactively address emerging
issues.
Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.
Related content
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ
In this article, learn how Microsoft Copilot for SQL databases works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For more information on Copilot in Fabric, see Privacy, security, and
responsible use for Copilot in Microsoft Fabric (preview).
With Copilot for SQL databases in Microsoft Fabric and other generative AI features,
Microsoft Fabric brings a new way to transform and analyze data, generate insights, and
create visualizations and reports in your database and other workloads.
To generate responses, Copilot uses the following data:
Previous messages sent to and replies from Copilot for that user in that session.
Contents of the SQL queries that the user has executed.
Error messages of a SQL query that the user has executed (if applicable).
Schemas of the database.
Be explicit about the data you want Copilot to examine. If you describe the data
asset with descriptive table and column names, Copilot is more likely to retrieve
relevant data and generate useful outputs.
Evaluation of Copilot for SQL databases
The product team tested Copilot to see how well the system performs within the context
of databases, and whether AI responses are insightful and useful.
Related content
Privacy, security, and responsible use for Copilot in Microsoft Fabric (preview)
Copilot for SQL database in Fabric (preview)
In this article, learn how Copilot for Data Factory works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For an overview of these topics for Copilot in Fabric, see Privacy, security,
and responsible use for Copilot (preview).
With Copilot for Data Factory in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Factory and the other workloads.
For considerations and limitations, see Limitations of Copilot for Data Factory.
Related content
Copilot for Data Factory overview
Copilot in Fabric: FAQ
In this article, learn how Microsoft Copilot for Data Science works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For an overview of these topics for Copilot in Fabric, see Privacy, security,
and responsible use for Copilot (preview).
With Copilot for Data Science in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Science and the other workloads.
What is AI Skill?
AI Skill is a new tool in Fabric that provides a way to get answers from your tabular data
in natural language.
Non-technical users can then type questions and receive the results from the execution
of an AI-generated SQL query.
AI Skill isn't intended for use in cases where deterministic and 100% accurate
results are required, which reflects current LLM limitations.
The AI Skill isn't intended for use cases that require deep analytics or causal
analytics. For example, asking "why did our sales numbers drop last month?" is out of scope.
How was AI Skill evaluated? What metrics are used to
measure performance?
The product team has tested the AI skill on a variety of public and private benchmarks
for SQL tasks to ascertain the quality of SQL queries.
Make use of the Notes for the model in the configuration panel in the UI. If the
SQL queries that the AI Skill generates are incorrect, you can provide instructions
to the model in plain English to improve upon future queries. The system will make
use of these instructions with every query. Short and direct instructions are best.
Provide examples in the model configuration panel in the UI. The system will
leverage the most relevant examples when providing its answers.
The AI skill only has access to data that the questioner has access to. If you use the
AI skill, your credentials are used to access the underlying database. If you don't
have access to the underlying data, the AI skill doesn't either. This holds true when
you publish the AI skill to other destinations, such as Copilot for Microsoft 365 or
Microsoft Copilot Studio, where the AI skill can be used by other questioners.
Related content
Privacy, security, and responsible use of Copilot for Data Factory (preview)
Overview of Copilot for Data Science and Data Engineering (preview)
Copilot for Data Factory overview
Copilot in Fabric: FAQ
In this article, learn how Microsoft Copilot for Fabric Data Warehouse works, how it
keeps your business data secure and adheres to privacy requirements, and how to use
generative AI responsibly. For more information on Copilot in Fabric, see Privacy,
security, and responsible use for Copilot in Microsoft Fabric (preview).
With Copilot for Data Warehouse in Microsoft Fabric and other generative AI features,
Microsoft Fabric brings a new way to transform and analyze data, generate insights, and
create visualizations and reports in your warehouse and other workloads.
To generate responses, Copilot uses the following data:
Previous messages sent to and replies from Copilot for that user in that session.
Contents of the SQL queries that the user has executed.
Error messages of a SQL query that the user has executed (if applicable).
Schemas of the warehouse.
Schemas from attached warehouses or SQL analytics endpoints when cross-DB
querying.
Related content
Privacy, security, and responsible use for Copilot in Microsoft Fabric (preview)
Microsoft Copilot for Fabric Data Warehouse
In this article, learn how Microsoft Copilot for Power BI works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. With Copilot and other generative AI features in preview, Power BI brings a
new way to transform and analyze data, generate insights, and create visualizations and
reports in Power BI and the other workloads.
For more information about privacy and data security in Copilot, see Privacy, security, and
responsible use for Copilot in Microsoft Fabric (preview).
For considerations and limitations with Copilot for Power BI, see Considerations and
Limitations.
Related content
Microsoft Copilot for Power BI
Enable Fabric Copilot for Power BI
Copilot in Fabric and Power BI: FAQ
In this article, learn how Copilot for Real-Time Intelligence works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For an overview of these topics for Copilot in Fabric, see Privacy, security,
and responsible use for Copilot.
This feature leverages the power of OpenAI to seamlessly translate natural language
queries into Kusto Query Language (KQL), a specialized language for querying large
datasets. In essence, it acts as a bridge between users' everyday language and the
technical intricacies of KQL, removing adoption barriers for users unfamiliar with the
language. By harnessing OpenAI's advanced language understanding, this feature
empowers users to submit business questions in a familiar, natural language format,
which are then converted into KQL queries.
Copilot not only accelerates productivity by simplifying the query creation process, but
also provides a user-friendly and efficient approach to data analysis.
Related content
What is Microsoft Fabric?
Copilot in Fabric: FAQ
) Important
Copilot for Data Factory is generally available now, but its new Data pipeline
capabilities are still in preview.
Copilot in Fabric enhances productivity, unlocks profound insights, and facilitates the
creation of custom AI experiences tailored to your data. As a component of the Copilot
in Fabric experience, Copilot in Data Factory empowers customers to use natural
language to articulate their requirements for creating data integration solutions using
Dataflow Gen2. Essentially, Copilot in Data Factory operates like a subject-matter expert
(SME) collaborating with you to design your dataflows.
Copilot for Data Factory is an AI-enhanced toolset that supports both citizen and
professional data wranglers in streamlining their workflow. It provides intelligent
Mashup code generation to transform data using natural language input and generates
code explanations to help you better understand earlier generated complex queries and
tasks.
Before your business can start using Copilot capabilities in Fabric, your administrator
needs to enable Copilot in Microsoft Fabric.
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Supported capabilities
With Data pipelines, you can:
Pipeline Generation: Using natural language, you can describe your desired
pipeline, and Copilot will understand the intent and generate the necessary Data
pipeline activities.
Error message assistant: troubleshoot Data pipeline issues with clear error
explanation capability and actionable troubleshooting guidance.
Summarize Pipeline: Explain your complex pipeline with the summary of content
and relations of activities within the Pipeline.
Get started
Data Factory Copilot is available in both Dataflow Gen2 and Data pipelines.
4. In the Get data window, search for OData and select the OData connector.
5. In the Connect to data source for the OData connector, input the following text
into the URL field:
https://services.odata.org/V4/Northwind/Northwind.svc/
6. From the navigator, select the Orders table and then Select related tables. Then
select Create to bring multiple tables into the Power Query editor.
7. Select the Customers query, and in the Copilot pane type this text: Only keep
European customers , then press Enter or select the Send message icon.
Your input is now visible in the Copilot pane along with a returned response card.
You can validate the step with the corresponding step title in the Applied steps list
and review the formula bar or the data preview window for accuracy of your
results.
8. Select the Employees query, and in the Copilot pane type this text: Count the
total number of employees by City , then press Enter or select the Send message
icon. Your input is now visible in the Copilot pane along with a returned response
card and an Undo button.
9. Select the column header for the Total Employees column and choose the option
Sort descending. The Undo button disappears because you modified the query.
10. Select the Order_Details query, and in the Copilot pane type this text: Only keep
orders whose quantities are above the median value , then press Enter or select
the Send message icon. Your input is now visible in the Copilot pane along with a
returned response card.
11. Either select the Undo button or type the text Undo (any text case) and press Enter
in the Copilot pane to remove the step.
12. To leverage the power of Azure OpenAI when creating or transforming your data,
ask Copilot to create sample data by typing this text:
Create a new query with sample data that lists all the Microsoft OS versions
and the year they were released
Copilot adds a new query to the Queries pane list, containing the results of your
input. At this point, you can either transform data in the user interface, continue to
edit with Copilot text input, or delete the query with an input such as Delete my
current query .
2. On the Home tab of the Data pipeline editor, select the Copilot button.
3. Then you can get started with Copilot to build your pipeline with the Ingest data
option.
4. Copilot generates a Copy activity, and you can interact with Copilot to complete
the whole flow. You can type / to select the source and destination connections, and
then add all the required content according to the prefilled starter prompt
context.
5. After everything is set up, select Run this pipeline to execute the new
pipeline and ingest the data.
6. If you are already familiar with Data pipelines, you can complete everything with
one prompt command, too.
Use these steps to summarize a pipeline with Copilot for Data Factory:
2. On the Home tab of the pipeline editor window, select the Copilot button.
3. Then you can get started with Copilot to summarize the content of the pipeline.
Copilot empowers you to troubleshoot any pipeline that has error messages. You can use
the Copilot error message assistant either on the Fabric Monitor page or on the pipeline
authoring page. The following steps show you how to access the pipeline Copilot to
troubleshoot your pipeline from the Fabric Monitor page, but you can use the same
steps from the pipeline authoring page.
1. Go to the Fabric Monitor page and select filters to show pipelines with failures.
Related content
Privacy, security, and responsible use of Copilot for Data Factory (preview)
) Important
Copilot for Data Science and Data Engineering is an AI assistant that helps analyze and
visualize data. It works with Lakehouse tables and files, Power BI Datasets, and
pandas/spark/fabric dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to add your data as a dataframe.
You can ask your questions in the chat panel, and the AI provides responses or code to
copy into your notebook. It understands your data's schema and metadata, and if data
is loaded into a dataframe, it has awareness of the data inside of the dataframe as well.
You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Introduction to Copilot for Data Science and
Data Engineering for Fabric Data Science
With Copilot for Data Science and Data Engineering, you can chat with an AI assistant
that can help you handle your data analysis and visualization tasks. You can ask the
Copilot questions about lakehouse tables, Power BI Datasets, or Pandas/Spark
dataframes inside notebooks. Copilot answers in natural language or code snippets.
Copilot can also generate data-specific code for you, depending on the task. For
example, Copilot for Data Science and Data Engineering can generate code for:
Chart creation
Filtering data
Applying transformations
Machine learning models
First, select the Copilot icon in the notebook ribbon. The Copilot chat panel opens, and
a new cell appears at the top of your notebook. This cell must run each time a Spark
session loads in a Fabric notebook; otherwise, the Copilot experience won't operate
properly. We are evaluating other mechanisms for handling this required initialization
in future releases.
Run the cell at the top of the notebook.
After the cell successfully executes, you can use Copilot. You must rerun the cell at the
top of the notebook each time your session in the notebook closes.
Copilot responds with the answer or the code, which you can copy and paste
into your notebook. Copilot for Data Science and Data Engineering is a convenient,
interactive way to explore and analyze your data.
As you use Copilot, you can also invoke magic commands inside a notebook cell
to obtain output directly in the notebook. For example, for natural language
responses, you can ask questions using the "%%chat" command, such as:
%%chat
What are some machine learning models that may fit this dataset?
or
%%code
Can you generate code for a logistic regression that fits this data?
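For illustration only, here's a minimal sketch of the kind of code a %%code prompt like the one
above might return. The sample dataframe and the "target" column are assumptions made for this
sketch, not output from the product.
Python
# Hypothetical sketch of code Copilot might return for the %%code prompt above.
# The dataframe contents and the "target" column are assumptions for this example.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for a dataframe already loaded in your notebook (an assumption for this sketch).
df = pd.DataFrame({
    "feature_a": [0.1, 0.4, 0.8, 0.3, 0.9, 0.2, 0.7, 0.6],
    "feature_b": [1, 0, 1, 0, 1, 0, 1, 1],
    "target": [0, 0, 1, 0, 1, 0, 1, 1],
})

X = df.drop(columns=["target"])
y = df["target"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")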
Copilot for Data Science and Data Engineering also has schema and metadata
awareness of tables in the lakehouse. Copilot can provide relevant information in
context of your data in an attached lakehouse. For example, you can ask about the tables in
that lakehouse, and Copilot responds with the relevant information if you added the lakehouse to the
notebook. Copilot also has awareness of the names of files added to any lakehouse
attached to the notebook. You can refer to those files by name in your chat. For
example, if you have a file named sales.csv in your lakehouse, you can ask "Create a
dataframe from sales.csv". Copilot generates the code and displays it in the chat panel.
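As a purely illustrative sketch, the generated code for the sales.csv request above might
resemble the following. The path assumes the default lakehouse mount point used by Fabric
notebooks; your actual file location may differ.
Python
# Hypothetical sketch of code Copilot might generate for "Create a dataframe from sales.csv".
# The /lakehouse/default/Files/ path is an assumption based on the default lakehouse mount.
import pandas as pd

df_sales = pd.read_csv("/lakehouse/default/Files/sales.csv")
df_sales.head()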
With Copilot for notebooks, you can easily access and query your data from different
sources. You don't need the exact command syntax to do it.
Tips
"Clear" your conversation in the Copilot chat panel with the broom located at the
top of the chat panel. Copilot retains knowledge of any inputs or outputs during
the session, but this helps if you find the current content distracting.
Use the chat magics library to configure settings about Copilot, including privacy
settings. The default sharing mode is designed to maximize the context sharing
Copilot has access to, so limiting the information provided to Copilot can directly
and significantly impact the relevance of its responses.
When Copilot first launches, it offers a set of starter prompts to help kickstart your
conversation. To refer to these prompts later, use the sparkle button at the bottom of
the chat panel.
You can "drag" the sidebar of the copilot chat to expand the chat panel, to view
code more clearly or for readability of the outputs on your screen.
Limitations
Copilot features in the Data Science experience are currently scoped to notebooks.
These features include the Copilot chat pane, IPython magic commands that can be
used within a code cell, and automatic code suggestions as you type in a code cell.
Copilot can also read Power BI semantic models using an integration of semantic link.
You can use Copilot in notebooks in two main ways:
One, you can ask Copilot to examine and analyze data in your notebook (for
example, by first loading a DataFrame and then asking Copilot about data inside
the DataFrame).
Two, you can ask Copilot to generate a range of suggestions about your data
analysis process, such as what predictive models might be relevant, code to
perform different types of data analysis, and documentation for a completed
notebook.
Keep in mind that code generation with fast-moving or recently released libraries might
include inaccuracies or fabrications.
Related content
How to use Chat-magics
How to use the Copilot Chat Pane
) Important
The Chat-magics Python library enhances your data science and engineering workflow
in Microsoft Fabric notebooks. It seamlessly integrates with the Fabric environment, and
allows execution of specialized IPython magic commands in a notebook cell, to provide
real-time outputs. IPython magic commands and more background on usage can be
found here: https://ipython.readthedocs.io/en/stable/interactive/magics.html.
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Capabilities of Chat-magics
Dataframe descriptions
The %describe command provides summaries and descriptions of loaded dataframes.
This simplifies the data exploration phase.
Privacy controls
Chat-magics also offers granular privacy settings, which allows you to control what data
is shared with the Azure OpenAI Service. The %set_sharing_level and
%configure_privacy_settings commands, for example, provide this functionality.
Chat-magics commands can also direct their output to the current cell, to a new cell, to
the cell output, or into a variable.
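As a rough, hypothetical sketch of how a Chat-magics command might be used in a notebook
cell (the argument form shown for %describe is an assumption; check the Chat-magics
reference for the exact syntax):
Python
# Hypothetical usage sketch; the argument form for %describe is an assumption,
# not documented syntax.
import pandas as pd

df_sales = pd.DataFrame({"region": ["EU", "US", "EU"], "amount": [120, 80, 95]})

# Ask Chat-magics for a summary of the loaded dataframe.
%describe df_sales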
Related content
How to use Copilot Pane
) Important
Copilot for Data Science and Data Engineering notebooks is an AI assistant that helps
you analyze and visualize data. It works with lakehouse tables, Power BI Datasets, and
pandas/spark dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to load your data as a dataframe.
You can use the chat panel to ask your questions, and the AI provides responses or code
to copy into your notebook. It understands your data's schema and metadata, and if
data is loaded into a dataframe, it has awareness of the data inside of the dataframe as
well. You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Azure OpenAI enablement
Azure OpenAI must be enabled within Fabric at the tenant level.
7 Note
If your workspace is provisioned in a region without GPU capacity, and your data is
not enabled to flow cross-geo, Copilot will not function properly and you will see
errors.
) Important
If your Spark session terminates, the context for Chat-magics also terminates,
which wipes the context for the Copilot pane.
2. Verify that all these conditions are met before proceeding with the Copilot chat
pane.
Open Copilot chat panel inside the notebook
1. Select the Copilot button on the notebook ribbon.
2. To open Copilot, select the Copilot button at the top of the notebook.
3. The Copilot chat panel opens on the right side of your notebook.
Key capabilities
AI assistance: Generate code, query data, and get suggestions to accelerate your
workflow.
Data insights: Quick data analysis and visualization capabilities.
Explanations: Copilot can provide natural language explanations of notebook cells,
and can provide an overview for notebook activity as it runs.
Fixing errors: Copilot can also fix notebook run errors as they arise. Copilot shares
context with the notebook cells (executed output) and can provide helpful
suggestions.
Important notices
Inaccuracies: Potential for inaccuracies exists. Review AI-generated content
carefully.
Data storage: Customer data is temporarily stored, to identify harmful use of AI.
2. Each of these selections outputs chat text in the text panel. As the user, you must
fill out the specific details of the data you'd like to use.
3. You can then input any type of request you have in the chat box.
Related content
How to use Chat-magics
Natural Language to SQL: Ask Copilot to generate SQL queries using simple
natural language questions.
Code completion: Enhance your coding efficiency with AI-powered code
completions.
Quick actions: Quickly fix and explain SQL queries with readily available actions.
Intelligent Insights: Receive smart suggestions and insights based on your
warehouse schema and metadata.
There are three ways to interact with Copilot in the Fabric Warehouse editor.
Chat Pane: Use the chat pane to ask questions to Copilot through natural
language. Copilot will respond with a generated SQL query or natural language
based on the question asked.
How to: Use the Copilot chat pane for Synapse Data Warehouse
Code completions: Start writing T-SQL in the SQL query editor, and Copilot
automatically generates a code suggestion to help complete your query. Press the Tab
key to accept the suggestion, or keep typing to ignore it.
How to: Use Copilot code completion for Synapse Data Warehouse
Quick Actions: In the ribbon of the SQL query editor, the Fix and Explain options
are quick actions. Highlight a SQL query of your choice and select one of the quick
action buttons to perform the selected action on your query.
Explain: Copilot can provide natural language explanations of your SQL query
and warehouse schema in comments format.
Fix: Copilot can fix errors in your code as error messages arise. Error scenarios
can include incorrect/unsupported T-SQL code, wrong spellings, and more.
Copilot will also provide comments that explain the changes and suggest SQL
best practices.
How to: Use Copilot quick actions for Synapse Data Warehouse
When crafting prompts, be sure to start with a clear and concise description of the
specific information you're looking for.
Natural language to SQL depends on expressive table and column names. If your
table and columns aren't expressive and descriptive, Copilot might not be able to
construct a meaningful query.
Use natural language that is applicable to your table and view names, column
names, primary keys, and foreign keys of your warehouse. This context helps
Copilot generate accurate queries. Specify what columns you wish to see,
aggregations, and any filtering criteria as explicitly as possible. Copilot should be
able to correct typos or understand your intent, given your schema context.
Create relationships in the model view of the warehouse to increase the accuracy
of JOIN statements in your generated SQL queries.
When using code completions, leave a comment at the top of the query with -- to
help guide the Copilot with context about the query you are trying to write.
Avoid ambiguous or overly complex language in your prompts. Simplify the
question while maintaining its clarity. This editing ensures Copilot can effectively
translate it into a meaningful T-SQL query that retrieves the desired data from the
associated tables and views.
Currently, natural language to SQL supports English language to T-SQL.
The following example prompts are clear, specific, and tailored to the properties of
your schema and data warehouse, making it easier for Copilot to generate
accurate T-SQL queries:
Show me all properties that sold last year
Count all the products, group by each category
Show agents who have listed more than two properties for sale
Show the rank of each agent by property sales and show name, total sales,
and rank
Enable Copilot
Your administrator needs to enable the tenant switch before you start using
Copilot. For more information, see Copilot tenant settings.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by default
unless your Fabric tenant admin enables the Data sent to Azure OpenAI can be
processed outside your tenant's geographic region, compliance boundary, or
national cloud instance tenant setting in the Fabric Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64 or
higher, or P1 or higher) are supported.
For more information, see Overview of Copilot in Fabric and Power BI.
Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.
For more information, see Privacy, security, and responsible use of Copilot for Data
Warehouse (preview).
Copilot doesn't understand previous inputs and can't undo changes after a user
commits a change when authoring, either via user interface or the chat pane. For
example, you can't ask Copilot to "Undo my last 5 inputs." However, users can still
use the existing user interface options to delete unwanted changes or queries.
Copilot can't make changes to existing SQL queries. For example, if you ask Copilot
to edit a specific part of an existing query, it doesn't work.
Copilot might produce inaccurate results when the intent is to evaluate data.
Copilot only has access to the warehouse schema, none of the data inside.
Copilot responses can include inaccurate or low-quality content, so make sure to
review outputs before using them in your work.
People who are able to meaningfully evaluate the content's accuracy and
appropriateness should review the outputs.
Related content
Copilot tenant settings (preview)
How to: Use the Copilot chat pane for Synapse Data Warehouse
How to: Use Copilot quick actions for Synapse Data Warehouse
How to: Use Copilot code completion for Synapse Data Warehouse
Privacy, security, and responsible use of Copilot for Data Warehouse (preview)
Copilot for Real-Time Intelligence is an advanced AI tool designed to help you explore
your data and extract valuable insights. You can input questions about your data, which
are then automatically translated into Kusto Query Language (KQL) queries. Copilot
streamlines the process of analyzing data for both experienced KQL users and citizen
data scientists.
For billing information about Copilot, see Announcing Copilot in Fabric pricing .
Prerequisites
A workspace with a Microsoft Fabric-enabled capacity
Read or write access to a KQL queryset
7 Note
Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Copilot supports conversational interactions, which allow you to clarify, adapt, and
extend your queries dynamically, all while maintaining the context of your previous
inputs. You can refine queries and ask follow-up questions without starting over:
Dynamic query refinement: You can refine the initial KQL generated by Copilot by
refining your prompt to remove ambiguity, specify tables or columns, or provide
more context.
Seamless follow-up questions: If the generated KQL is correct but you want to
explore the data more deeply, you can ask follow-up questions related to the same
task. You can expand the scope of your query, add filters, or explore related data
points by building on previous dialogue.
7 Note
You can continue to ask follow-up questions or further refine your query. To start a new
chat, select the speech bubble on the top right of the Copilot pane (1).
Hover over a previous question (2) and select the pencil icon to copy it to the question
box to edit it, or copy it to your clipboard.
Start with simple natural language prompts, to learn the current capabilities and
limitations. Then, gradually proceed to more complex prompts.
State the task precisely, and avoid ambiguity. Imagine you shared the natural
language prompt with a few KQL experts from your team without adding verbal
instructions - would they be able to generate the correct query?
To generate the most accurate query, supply any relevant information that can
help the model. If you can, specify tables, operators, or functions that are critical to
the query.
Prepare your database: Add docstring properties to describe common tables and
columns. This might be redundant for descriptive names (for example, timestamp)
but is critical to describe tables or columns with meaningless names. You don't
have to add docstrings to tables or columns that are rarely used. For more
information, see .alter table column-docstrings command.
To improve Copilot results, select either the like or dislike icon to submit your
comments in the Submit feedback form.
7 Note
The Submit feedback form submits the name of the database, its URL, the KQL
query generated by Copilot, and any free-text response you include in the feedback
submission. Results of the executed KQL query aren't sent.
Limitations
Copilot might suggest potentially inaccurate or misleading suggested KQL queries
due to:
Complex and long user input.
User input that refers to database entities that aren't KQL Database tables or
materialized views (for example, KQL functions).
More than 10,000 concurrent users within an organization can result in failure or a
major performance hit.
Related content
Privacy, security, and responsible use of Copilot for Real-Time Intelligence
(preview)
Copilot for Microsoft Fabric: FAQ
Overview of Copilot in Fabric (preview)
Query data in a KQL queryset
This page contains information on how Fabric Copilot usage is billed and reported.
Copilot usage is measured by the number of tokens processed. Tokens can be thought
of as pieces of words. Approximately 1,000 tokens are about 750 words. Prices are
calculated per 1,000 tokens, and input and output tokens are consumed at different
rates.
7 Note
The Copilot for Fabric billing will become effective on March 1st, 2024, as part of
your existing Power BI Premium or Fabric Capacity.
Consumption rate
Requests to Copilot consume Fabric Capacity Units. The following table defines how many
capacity units (CU) are consumed when Copilot is used, for example, when a user uses
Copilot for Power BI, Copilot for Data Factory, or Copilot for Data Science and Data
Engineering.
Operation | Description | Unit of measure | Consumption rate
Copilot in Fabric | The input prompt | Per 1,000 tokens | 100 CU seconds
Copilot in Fabric | The output completion | Per 1,000 tokens | 400 CU seconds
For example, assume each Copilot request has 2,000 input tokens and 500 output
tokens. The price for one Copilot request is calculated as follows: (2,000 * 100 + 500 *
400) / 1,000 = 700 CU seconds = 11.66 CU minutes.
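The same arithmetic, restated as a short script (the rates come from the consumption table
above):
Python
# Restates the worked example above: 2,000 input tokens and 500 output tokens,
# at 100 CU seconds per 1,000 input tokens and 400 CU seconds per 1,000 output tokens.
input_tokens = 2000
output_tokens = 500

cu_seconds = (input_tokens * 100 + output_tokens * 400) / 1000
cu_minutes = cu_seconds / 60

print(f"{cu_seconds:.0f} CU seconds = {cu_minutes:.2f} CU minutes")  # 700 CU seconds, about 11.7 CU minutes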
Because Copilot runs as a background job, its consumption is smoothed over 24 hours, so a
Copilot request of about 24 CU minutes consumes only one CU minute from each hour of the
capacity. For a customer on F64, which provides 64 * 24 = 1,536 CU hours per day, each such
Copilot job consumes (24 CU minutes / 60 minutes) = 0.4 CU hours, so the customer can run
over 3,800 requests before exhausting the capacity. Once the capacity is exhausted, however,
all operations shut down.
Region mapping
Fabric Copilot is powered by Azure OpenAI large language models that are currently
deployed to limited data centers. However, customers can enable the cross-geo processing
tenant setting to use Copilot by processing their data in another region where Azure
OpenAI Service is available. This region could be outside of the user's geographic
region, compliance boundary, or national cloud instance. While performing region
mapping, we prioritize data residency as the foremost consideration and attempt to
map to a region within the same geographic area whenever feasible.
The cost of Fabric Capacity Units can vary depending on the region. Regardless of the
consumption region where GPU capacity is utilized, customers are billed based on the
Fabric Capacity Units pricing in their billing region. For example, if a customer's requests
are mapped from region 1 to region 2 , with region 1 being the billing region and
region 2 being the consumption region, the customer is charged based on the pricing
in region 1 .
Related content
Overview of Copilot in Fabric
Copilot in Fabric: FAQ
AI services in Fabric (preview)
Direct Lake is a storage mode option for tables in a Power BI semantic model that's
stored in a Microsoft Fabric workspace. It's optimized for large volumes of data that can
be quickly loaded into memory from Delta tables, which store their data in Parquet files
in OneLake—the single store for all analytics data. Once loaded into memory, the
semantic model enables high performance queries. Direct Lake eliminates the slow and
costly need to import data into the model.
You can use Direct Lake storage mode to connect to the tables or views of a single
Fabric lakehouse or Fabric warehouse. Both of these Fabric items and Direct Lake
semantic models require a Fabric capacity license.
In some ways, a Direct Lake semantic model is similar to an Import semantic model.
That's because model data is loaded into memory by the VertiPaq engine for fast query
performance (except in the case of DirectQuery fallback, which is explained later in this
article).
However, a Direct Lake semantic model differs from an Import semantic model in an
important way. That's because a refresh operation for a Direct Lake semantic model is
conceptually different to a refresh operation for an Import semantic model. For a Direct
Lake semantic model, a refresh involves a framing operation (described later in this
article), which can take a few seconds to complete. It's a low-cost operation where the
semantic model analyzes the metadata of the latest version of the Delta tables and is
updated to reference the latest files in OneLake. In contrast, for an Import semantic
model, a refresh produces a copy of the data, which can take considerable time and
consume significant data source and capacity resources (memory and CPU).
7 Note
Incremental refresh for an Import semantic model can help to reduce refresh time
and use of capacity resources.
7 Note
Import and DirectQuery semantic models are still relevant in Fabric, and they're the
right choice of semantic model for some scenarios. For example, Import storage
mode often works well for a self-service analyst who needs the freedom and agility
to act quickly, and without dependency on IT to add new data elements.
Also, OneLake integration automatically writes data for tables in Import storage
mode to Delta tables in OneLake without involving any migration effort. By using
this option, you can realize many of the benefits of Fabric that are made available
to Import semantic model users, such as integration with lakehouses through
shortcuts, SQL queries, notebooks, and more. We recommend that you consider
this option as a quick way to reap the benefits of Fabric without necessarily or
immediately re-designing your existing data warehouse and/or analytics system.
Direct Lake storage mode is also suitable for minimizing data latency to quickly make
data available to business users. If your Delta tables are modified intermittently (and
assuming you already did data preparation in the data lake), you can depend on
automatic updates to reframe in response to those modifications. In this case, queries
sent to the semantic model return the latest data. This capability works well in
partnership with the automatic page refresh feature of Power BI reports.
Keep in mind that Direct Lake depends on data preparation being done in the data lake.
Data preparation can be done by using various tools, such as Spark jobs for Fabric
lakehouses, T-SQL DML statements for Fabric warehouses, dataflows, pipelines, and
others. This approach helps ensure data preparation logic is performed as low as
possible in the architecture to maximize reusability. However, if the semantic model
author doesn't have the ability to modify the source item, for example, if a self-service
analyst doesn't have write permissions on a lakehouse that is managed by IT, then
Import storage mode might be a better choice. That's because it supports data
preparation by using Power Query, which is defined as part of the semantic model.
Be sure to factor in your current Fabric capacity license and the Fabric capacity
guardrails when you consider Direct Lake storage mode. Also, factor in the
considerations and limitations, which are described later in this article.
Tip
A Direct Lake semantic model might also use DirectQuery fallback, which involves
seamlessly switching to DirectQuery mode. DirectQuery fallback retrieves data directly
from the SQL analytics endpoint of the lakehouse or the warehouse. For example,
fallback might occur when a Delta table contains more rows of data than supported by
your Fabric capacity (described later in this article). In this case, a DirectQuery operation
sends a query to the SQL analytics endpoint. Fallback operations might result in slower
query performance.
The following diagram shows how Direct Lake works by using the scenario of a user who
opens a Power BI report.
The diagram depicts the following user actions, processes, and features.
OneLake is a data lake that stores analytics data in Parquet format. This file format is
optimized for storing data for Direct Lake semantic models.
A Fabric lakehouse or Fabric warehouse exists in a workspace that's on Fabric capacity. The
lakehouse has a SQL analytics endpoint, which provides a SQL-based experience for
querying. Tables (or views) provide a means to query the Delta tables in OneLake by using
Transact-SQL (T-SQL).
A Direct Lake semantic model exists in a Fabric workspace. It connects to tables or views in
either the lakehouse or warehouse.
The Power BI report sends Data Analysis Expressions (DAX) queries to the Direct Lake
semantic model.
When possible (and necessary), the semantic model loads columns into memory directly
from the Parquet files stored in OneLake. Queries achieve in-memory performance, which
is very fast.
In certain circumstances, such as when the semantic model exceeds the guardrails of the
capacity, semantic model queries automatically fall back to DirectQuery mode. In this
mode, queries are sent to the SQL analytics endpoint of the lakehouse or warehouse.
DirectQuery queries sent to the SQL analytics endpoint in turn query the Delta tables in
OneLake. For this reason, query performance might be slower than in-memory queries.
The following sections describe Direct Lake concepts and features, including column
loading, framing, automatic updates, and DirectQuery fallback.
Once it understands which columns are needed, the semantic model determines which
columns are already in memory. If any columns needed for the query aren't in memory,
the semantic model loads all data for those columns from OneLake. Loading column
data is typically a fast operation, however it can depend on factors such as the
cardinality of data stored in the columns.
Columns loaded into memory are then resident in memory. Future queries that involve
only resident columns don't need to load any more columns into memory.
A column remains resident until there's reason for it to be removed (evicted) from
memory. Reasons that columns might get removed include:
The model or table was refreshed after a Delta table update at the source (see
Framing in the next section).
No query used the column for some time.
Other memory management reasons, including memory pressure in the capacity
due to other, concurrent operations.
Your choice of Fabric SKU determines the maximum available memory for each Direct
Lake semantic model on the capacity. For more information about resource guardrails
and maximum memory limits, see Fabric capacity guardrails and limitations later in this
article.
Framing
Framing provides model owners with point-in-time control over what data is loaded into
the semantic model. Framing is a Direct Lake operation triggered by a refresh of a
semantic model, and in most cases takes only a few seconds to complete. That's
because it's a low-cost operation where the semantic model analyzes the metadata of
the latest version of the Delta Lake tables and is updated to reference the latest Parquet
files in OneLake.
When framing occurs, resident table column segments and dictionaries might be evicted
from memory if the underlying data has changed, and the point in time of the refresh
becomes the new baseline for all future transcoding events. From this point, Direct Lake
queries only consider data in the Delta tables as of the time of the most recent framing
operation. For that reason, Direct Lake tables are queried to return data based on the
state of the Delta table at the point of the most recent framing operation. That time isn't
necessarily the latest state of the Delta tables.
Note that the semantic model analyzes the Delta log of each Delta table during framing
to drop only the affected column segments and to reload newly added data during
transcoding. An important optimization is that dictionaries will usually not be dropped
when incremental framing takes effect, and new values are added to the existing
dictionaries. This incremental framing approach helps to reduce the reload burden and
benefits query performance. In the ideal case, when a Delta table received no updates,
no reload is necessary for columns already resident in memory, and queries show far less
performance impact after framing. This is because incremental framing essentially enables
the semantic model to update substantial portions of the existing in-memory data in place.
The following diagram shows how Direct Lake framing operations work.
The diagram depicts the following processes and features.
Framing operations take place periodically, and they set the baseline for all future
transcoding events. Framing operations can happen automatically, manually, on schedule,
or programmatically.
OneLake stores metadata and Parquet files, which are represented as Delta tables.
The last framing operation includes Parquet files related to the Delta tables, and
specifically the Parquet files that were added before the last framing operation.
A later framing operation includes Parquet files added after the last framing operation.
Resident columns in the Direct Lake semantic model might be evicted from memory, and
the point in time of the refresh becomes the new baseline for all future transcoding
events.
Subsequent data modifications, represented by new Parquet files, aren't visible until the
next framing operation occurs.
It's not always desirable to have data representing the latest state of any Delta table
when a transcoding operation takes place. Consider that framing can help you provide
consistent query results in environments where data in Delta tables is transient. Data can
be transient for several reasons, such as when long-running extract, transform, and load
(ETL) processes occur.
Refresh for a Direct Lake semantic model can be done manually, automatically, or
programmatically. For more information, see Refresh Direct Lake semantic models.
For more information about Delta table versioning and framing, see Understand storage
for Direct Lake semantic models.
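As one illustration of a programmatic refresh, the following sketch calls the Power BI REST
API refresh endpoint for a semantic model. The workspace ID, semantic model ID, access token,
and request body are placeholders and assumptions; see Refresh Direct Lake semantic models
for the supported approaches.
Python
# Hypothetical sketch: trigger a refresh (framing) of a Direct Lake semantic model
# through the Power BI REST API. All IDs and the token below are placeholders.
import requests

workspace_id = "<workspace-guid>"
semantic_model_id = "<semantic-model-guid>"
access_token = "<microsoft-entra-access-token>"  # obtain via MSAL or another auth flow

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
    f"/datasets/{semantic_model_id}/refreshes"
)
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {access_token}"},
    json={"type": "full"},  # assumption: a standard enhanced-refresh request body
)
response.raise_for_status()
print("Refresh request accepted:", response.status_code)  # expect 202 Accepted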
Automatic updates
There's a semantic model-level setting to automatically update Direct Lake tables. It's
enabled by default. It ensures that data changes in OneLake are automatically reflected
in the Direct Lake semantic model. You should disable automatic updates when you
want to control data changes by framing, which was explained in the previous section.
For more information, see Manage Direct Lake semantic models.
Tip
You can set up automatic page refresh in your Power BI reports. It's a feature that
automatically refreshes a specific report page, provided that the report connects to
a Direct Lake semantic model (or other types of semantic model).
DirectQuery fallback
A query sent to a Direct Lake semantic model can fall back to DirectQuery mode. In this
case, it retrieves data directly from the SQL analytics endpoint of the lakehouse or
warehouse. Such queries always return the latest data because they're not constrained
to the point in time of the last framing operation.
A query always falls back when the semantic model queries a view in the SQL analytics
endpoint, or a table in the SQL analytics endpoint that enforces row-level security (RLS).
Also, a query might fall back when the semantic model exceeds the guardrails of the
capacity.
) Important
If possible, you should always design your solution—or size your capacity—to
avoid DirectQuery fallback. That's because it might result in slower query
performance.
You can control fallback of your Direct Lake semantic models by setting its
DirectLakeBehavior property. For more information, see Set the Direct Lake behavior
property.
) Important
The first column in the following table also includes Power BI Premium capacity
subscriptions (P SKUs). Be aware that Microsoft is consolidating purchase options
and retiring the Power BI Premium per capacity SKUs. New and existing customers
should consider purchasing Fabric capacity subscriptions (F SKUs) instead.
The guardrails table lists, for each Fabric SKU, the maximum Parquet files per table, row
groups per table, rows per table (in millions), max model size on disk/OneLake (GB), and max
memory (GB).¹
¹ For Direct Lake semantic models, Max Memory represents the upper memory resource
limit for how much data can be paged in. For this reason, it's not a guardrail because
exceeding it doesn't result in a fallback to DirectQuery mode; however, it can have a
performance impact if the amount of data is large enough to cause excessive paging in
and out of the model data from the OneLake data.
If exceeded, the Max model size on disk/OneLake causes all queries to the semantic
model to fall back to DirectQuery mode. All other guardrails presented in the table are
evaluated per query. It's therefore important that you optimize your Delta tables and
Direct Lake semantic model to avoid having to unnecessarily scale up to a higher Fabric
SKU (resulting in increased cost).
Additionally, Capacity unit and Max memory per query limits apply to Direct Lake
semantic models. For more information, see Capacities and SKUs.
7 Note
The capabilities and features of Direct Lake semantic models are evolving. Be sure
to check back periodically to review the latest list of considerations and limitations.
When a Direct Lake semantic model table connects to a table in the SQL analytics
endpoint that enforces row-level security (RLS), queries that involve that model
table will always fall back to DirectQuery mode. Query performance might be
slower.
When a Direct Lake semantic model table connects to a view in the SQL analytics
endpoint, queries that involve that model table will always fall back to DirectQuery
mode. Query performance might be slower.
Composite modeling isn't supported. That means Direct Lake semantic model
tables can't be mixed with tables in other storage modes, such as Import,
DirectQuery, or Dual (except for special cases, including calculation groups, what-if
parameters, and field parameters).
Calculated columns and calculated tables that reference columns or tables in Direct
Lake storage mode aren't supported. Calculation groups, what-if parameters, and
field parameters, which implicitly create calculated tables, and calculated tables
that don't reference Direct Lake columns or tables are supported.
Direct Lake storage mode tables don't support complex Delta table column types.
Binary and GUID semantic types are also unsupported. You must convert these
data types into strings or other supported data types.
Table relationships require the data types of related columns to match.
One-side columns of relationships must contain unique values. Queries fail if
duplicate values are detected in a one-side column.
Auto date/time intelligence in Power BI Desktop isn't supported. Marking your own
date table as a date table is supported.
The length of string column values is limited to 32,764 Unicode characters.
The floating point value NaN (not a number) isn't supported.
Publish to web from Power BI using a service principal is only supported when
using a fixed identity for the Direct Lake semantic model.
In the web modeling experience, validation is limited for Direct Lake semantic
models. User selections are assumed to be correct, and no queries are issued to
validate cardinality or cross filter selections for relationships, or for the selected
date column in a marked date table.
In the Fabric portal, the Direct Lake tab in the refresh history lists only Direct Lake-
related refresh failures. Successful refresh (framing) operations aren't listed.
Your Fabric SKU determines the maximum available memory per Direct Lake
semantic model for the capacity. When the limit is exceeded, queries to the
semantic model might be slower due to excessive paging in and out of the model
data.
Creating a Direct Lake semantic model in a workspace that is in a different region
from the data source workspace isn't supported. For example, if the Lakehouse is in
West Central US, then you can only create semantic models from this Lakehouse in
the same region. A workaround is to create a Lakehouse in the other region's
workspace and shortcut to the tables before creating the semantic model. To find
what region you are in, see find your Fabric home region.
You can create and view a custom Direct Lake semantic model using a Service
Principal identity, but the default Direct Lake semantic model does not support
Service Principals. Make sure service principal authentication is enabled for Fabric
REST APIs in your tenant and grant the service principal Contributor or higher
permissions to the workspace of your Direct Lake semantic model.
Embedding reports requires a V2 embed token.
Direct Lake does not support service principal profiles for authentication.
Creating customized Direct Lake semantic models with a service principal, and
viewing them with a service principal, are supported, but default Direct Lake
semantic models aren't supported.
SQL analytics endpoint object-level security or column-level security | Yes – but queries will fall back to DirectQuery mode and might produce errors when permission is denied | Yes – but must duplicate permissions with semantic model object-level security | Yes – but queries might produce errors when permission is denied
SQL analytics endpoint row-level security (RLS) | Yes – but queries will fall back to DirectQuery mode | Yes – but must duplicate permissions with semantic model RLS | Yes
1 You can't combine Direct Lake storage mode tables with DirectQuery or Dual storage
mode tables in the same semantic model. However, you can use Power BI Desktop to
create a composite model on a Direct Lake semantic model and then extend it with new
tables (by using Import, DirectQuery, or Dual storage mode) or calculations. For more
information, see Build a composite model on a semantic model.
2 Requires a V2 embed token. If you're using a service principal, you must use a fixed
identity cloud connection.
Related content
Develop Direct Lake semantic models
Manage Direct Lake semantic models
Understand storage for Direct Lake semantic models
Create a lakehouse for Direct Lake
Analyze query processing for Direct Lake semantic models
This article describes design topics relevant to developing Direct Lake semantic models.
You can then use the web modeling experience to further develop the semantic model.
This experience allows you to create relationships between tables, create measures and
calculation groups, mark date tables, and set properties for model and its objects (like
column formats). You can also set up model row-level security (RLS) by defining roles
and rules, and by adding members (Microsoft Entra user accounts or security groups) to
those roles.
Alternatively, you can continue the development of your model by using an XMLA-
compliant tool, like SQL Server Management Studio (SSMS) (version 19.1 or later) or
open-source, community tools. For more information, see Model write support with the
XMLA endpoint later in this article.
Tip
You can learn how to create a lakehouse, a Delta table, and a basic Direct Lake
semantic model by completing this tutorial.
Model tables
Model tables are based on either a table or a view of the SQL analytics endpoint.
However, avoid using views whenever possible. That's because queries to a model table
based on a view will always fall back to DirectQuery mode, which might result in slower
query performance.
Tables should include columns for filtering, grouping, sorting, and summarizing, in
addition to columns that support model relationships. While unnecessary columns don't
affect semantic model query performance (because they won't be loaded into memory),
they result in a larger storage size in OneLake and require more compute resources to
load and maintain.
2 Warning
Using columns that apply dynamic data masking (DDM) in Direct Lake semantic
models is not supported.
To learn how to select which tables to include in your Direct Lake semantic model, see
Edit tables for Direct Lake semantic models.
For more information about columns to include in your semantic model tables, see
Understand storage for Direct Lake semantic models.
7 Note
For a SQL analytics endpoint, you can set up OLS to control access to the endpoint
objects, such as tables or views, and column-level security (CLS) to control access to
endpoint table columns.
For a semantic model, you can set up OLS to control access to model tables or columns.
You need to use open-source, community tools like Tabular Editor to set up OLS.
For a SQL analytics endpoint, you can set up RLS to control access to rows in an
endpoint table.
) Important
When a query uses any table that has RLS in the SQL analytics endpoint, it will fall
back to DirectQuery mode. Query performance might be slower.
For a semantic model, you can set up RLS to control access to rows in model tables. RLS
can be set up in the web modeling experience or by using a third-party tool.
The following steps approximate how queries are evaluated (and whether they fail). The
benefits of Direct Lake storage mode are only possible when the fifth step is achieved.
1. If the query contains any table or column that's restricted by semantic model OLS,
an error result is returned (report visual will fail to render).
2. If the query contains any column that's restricted by SQL analytics endpoint CLS (or
the table is denied), an error result is returned (report visual will fail to render).
a. If the cloud connection uses SSO (default), CLS is determined by the access level
of the report consumer.
b. If the cloud connection uses a fixed identity, CLS is determined by the access
level of the fixed identity.
3. If the query contains any table in the SQL analytics endpoint that enforces RLS or a
view is used, the query falls back to DirectQuery mode.
a. If the cloud connection uses SSO (default), RLS is determined by the access level
of the report consumer.
b. If the cloud connection uses a fixed identity, RLS is determined by the access
level of the fixed identity.
4. If the query exceeds the guardrails of the capacity, it falls back to DirectQuery
mode.
5. Otherwise, the query is satisfied from the in-memory cache. Column data is loaded
into memory as and when it's required.
The account must at least have Read and ReadData permissions on the source item
(lakehouse or warehouse). Item permissions can be inherited from workspace roles or
assigned explicitly for the item as described in this article.
Assuming this requirement is met, Fabric grants the necessary access to the semantic
model to read the Delta tables and associated Parquet files (to load column data into
memory) and data-access rules can be applied.
It's also a suitable approach when report consumers aren't granted permission to query
the lakehouse or warehouse.
In either case, it's strongly recommended that the cloud connection uses a fixed identity
instead of SSO. SSO would imply that end users can access the SQL analytics endpoint
directly and might therefore bypass security rules in the semantic model.
) Important
Semantic model item permissions can be set explicitly via Power BI apps, or
acquired implicitly via workspace roles.
Notably, semantic model data-access rules are not enforced for users who have
Write permission on the semantic model. Conversely, data-access rules do apply to
users who are assigned to the Viewer workspace role. However, users assigned to
the Admin, Member, or Contributor workspace role implicitly have Write permission
on the semantic model and so data-access rules are not enforced. For more
information, see Roles in workspaces.
Notably, however, a semantic model query will fall back to DirectQuery mode when it
includes any table that enforces RLS in the SQL analytics endpoint. Consequently, the
semantic model might never cache data into memory to achieve high performance
queries.
Apply data-access rules to | Comment
Semantic model only | Use this option when users aren't granted item permissions to query the lakehouse or warehouse. Set up the cloud connection to use a fixed identity. High query performance can be achieved from the in-memory cache.
SQL analytics endpoint only | Use this option when users need to access data from either the warehouse or the semantic model, and with consistent data-access rules. Ensure SSO is enabled for the cloud connection. Query performance might be slow.
Lakehouse or warehouse and semantic model | This option involves extra management overhead. Set up the cloud connection to use a fixed identity.
Tip
Before you can perform write operations, the XMLA read-write option must be enabled
for the capacity. For more information, see Enable XMLA read-write.
When changing a semantic model using XMLA, you must update the ChangedProperties
and PBI_RemovedChildren collection for the changed object to include any modified or
removed properties. If you don't perform that update, Power BI modeling tools might
overwrite any changes the next time the schema is synchronized with the Lakehouse.
Learn more about semantic model object lineage tags in the lineage tags for Power BI
semantic models article.
) Important
For more information, see Semantic model connectivity with the XMLA endpoint.
Post-publication tasks
After you publish a Direct Lake semantic model, you should complete some setup tasks.
For more information, see Manage Direct Lake semantic models.
Unsupported features
The following model features aren't supported by Direct Lake semantic models:
Related content
Direct Lake overview
Manage Direct Lake semantic models
Understand storage for Direct Lake semantic models
Create a lakehouse for Direct Lake
Edit tables for Direct Lake semantic models
OneLake integration for semantic models
This article describes design topics relevant to managing Direct Lake semantic models.
Post-publication tasks
After you first publish a Direct Lake semantic model ready for reporting, you should
immediately complete some post-publication tasks. These tasks can also be adjusted at
any time during the lifecycle of the semantic model.
Optionally, you can also set up data discovery to allow report creators to read metadata,
helping them to discover data in the OneLake data hub and request access to it. You can
also endorse (certify or promote) the semantic model to communicate that it
represents quality data fit for use.
To set up a fixed identity, see Specify a fixed identity for a Direct Lake semantic model.
Authentication
The fixed identity can authenticate either by using OAuth 2.0 or Service principal.
7 Note
OAuth 2.0
When you use OAuth 2.0, you can authenticate with a Microsoft Entra user account. The
user account must have permission to query the SQL analytics endpoint tables and
views, and schema metadata.
Using a specific user account isn't a recommended practice. That's because semantic
model queries will fail should the password change or the user account be deleted (like
when an employee leaves the organization).
Service principal
Authenticating with a service principal is the recommended practice because it's not
dependent on a specific user account. The security principal must have permission to
query the SQL analytics endpoint tables and views, and schema metadata.
7 Note
The Fabric tenant settings must allow service principals, and the service principal
must belong to a declared security group.
Single sign-on
When you create a sharable cloud connection, the Single Sign-On checkbox is
unchecked by default. That's the correct setup when using a fixed identity.
You can enable SSO when you want the identity that queries the semantic model to also
query the SQL analytics endpoint. In this configuration, the Direct Lake semantic model
will use the fixed identity to refresh the model and the user identity to query data.
When using a fixed identity, it's common practice to disable SSO so that the fixed
identity is used for both refreshes and queries, but there's no technical requirement to
do so.
When all users can access the data (and have permission to do so), there's no need
to create a shared cloud connection. Instead, the default cloud connection settings
can be used. In this case, the identity of the user who queries the model will be
used should queries fall back to DirectQuery mode.
Create a shared cloud connection when you want to use a fixed identity to query
source data. That could be because the users who query the semantic model aren't
granted permission to read the lakehouse or warehouse. This approach is
especially relevant when the semantic model enforces RLS.
If you use a fixed identity, use the Service principal option because it's more secure
and reliable. That's because it doesn't rely on a single user account or their
permissions, and it won't require maintenance (and disruption) should they change
their password or leave the organization.
If different users must be restricted to access only subsets of data, if viable, enforce
RLS at the semantic model layer only. That way, users will benefit from high
performance in-memory queries.
If possible, avoid OLS and CLS because they result in errors in report visuals. Errors
can create confusion or concern for users. For summarizable columns, consider
creating measures that return BLANK in certain conditions instead of CLS (if
possible).
You must grant permissions to users so that they can use or manage the Direct Lake
semantic model. In short, report consumers need Read permission, and report creators
need Build permission. Semantic model permissions can be assigned directly or acquired
implicitly via workspace roles. To manage the semantic model settings (for refresh and
other configurations), you must be the semantic model owner.
Depending on the cloud connection set up, and whether users need to query the
lakehouse or the warehouse SQL analytics endpoint, you might need to grant other
permissions (described in the table in this section).
7 Note
Notably, users don't ever require permission to read data in OneLake. That's
because Fabric grants the necessary permissions to the semantic model to read the
Delta tables and associated Parquet files (to load column data into memory). The
semantic model also has the necessary permissions to periodically read the SQL
analytics endpoint to perform permission checks to determine what data the
querying user (or fixed identity) can access.
Scenario | Required permissions | Comments
Users can view reports | • Grant Read permission for the reports and Read permission for the semantic model. • If the cloud connection uses SSO, grant at least Read permission for the lakehouse or warehouse. | Reports don't need to belong to the same workspace as the semantic model. For more information, see Strategy for read-only consumers.
Users can create reports | • Grant Build permission for the semantic model. • If the cloud connection uses SSO, grant at least Read permission for the lakehouse or warehouse. | For more information, see Strategy for content creators.
Users can query the semantic model but are denied querying the lakehouse or SQL analytics endpoint | • Don't grant any permission for the lakehouse or warehouse. | Only suitable when the cloud connection uses a fixed identity.
Users can query the semantic model and the SQL analytics endpoint but are denied querying the lakehouse | • Grant Read and ReadData permissions for the lakehouse or warehouse. | Important: Queries sent to the SQL analytics endpoint will bypass data access permissions enforced by the semantic model.
Manage the semantic model, including refresh settings | • Requires semantic model ownership. | For more information, see Semantic model ownership.
) Important
You should always thoroughly test permissions before releasing your semantic
model and reports into production.
When the setting is enabled, the semantic model performs a framing operation
whenever data modifications in the underlying Delta tables are detected. The framing
operation is always specific to only those tables where data modifications are detected.
We recommend that you leave the setting on, especially when you have a small or
medium-sized semantic model. It's especially useful when you have low-latency
reporting requirements and Delta tables are modified regularly.
In some situations, you might want to disable automatic updates. For example, you
might need to allow completion of data preparation jobs or the ETL process before
exposing any new data to consumers of the semantic model. When disabled, you can
trigger a refresh by using a programmatic method (described earlier).
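For example, you can trigger a refresh from a Fabric notebook. The following is a minimal
sketch that assumes the semantic link (sempy) library, which is preinstalled in Fabric
notebooks; the semantic model and workspace names are placeholders.
Python
import sempy.fabric as fabric

# Trigger a refresh (framing) of a Direct Lake semantic model, for example after an ETL run
fabric.refresh_dataset(
    dataset="Sales Model",        # placeholder semantic model name
    workspace="Sales Workspace",  # placeholder workspace name
    refresh_type="full",
)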
7 Note
A refresh (framing) operation evicts column data from the in-memory cache, so the first
queries after a refresh might experience delays while the required column data is reloaded.
To avoid such delays, consider warming the cache by programmatically sending a query
to the semantic model. A convenient way to send a query is to use semantic link. This
operation should be done immediately after the refresh operation finishes.
) Important
Warming the cache might only make sense when delays are unacceptable. Take
care not to unnecessarily load data into memory that could place pressure on other
capacity workloads, causing them to throttle or become deprioritized.
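The following is a minimal sketch of warming the cache from a notebook with semantic
link (sempy). The model name, the 'Date'[Year] column, and the [Total Sales] measure are
placeholders for objects that reports commonly use.
Python
import sempy.fabric as fabric

# Send a lightweight DAX query immediately after the refresh finishes so that
# commonly used columns are paged back into memory
fabric.evaluate_dax(
    dataset="Sales Model",  # placeholder semantic model name
    dax_string="""
        EVALUATE
        SUMMARIZECOLUMNS ( 'Date'[Year], "Sales", [Total Sales] )
        """,
)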
You can set the DirectLakeBehavior property to one of the following values:
Automatic: (Default) Queries fall back to DirectQuery mode if the required data
can't be efficiently loaded into memory.
DirectLakeOnly: All queries use Direct Lake storage mode only. Fall back to
DirectQuery mode is disabled. If data can't be loaded into memory, an error is
returned.
DirectQueryOnly: All queries use DirectQuery mode only. Use this setting to test
fallback performance, where, for instance, you can observe the query performance
in connected reports.
You can set the property in the web modeling experience, or by using Tabular Object
Model (TOM) or Tabular Model Scripting Language (TMSL).
Tip
Consider disabling DirectQuery fallback when you want queries to be processed in
Direct Lake storage mode only. Disabling fallback can also be helpful when you want
to analyze query processing for a Direct Lake semantic model to identify if and how
often fallback would otherwise occur.
You can use Performance Analyzer, SQL Server Profiler, Azure Log Analytics, or an open-
source, community tool, like DAX Studio.
Performance Analyzer
You can use Performance Analyzer in Power BI Desktop to record the processing time
required to update report elements initiated as a result of any user interaction that
results in running a query. If the monitoring results show a Direct query metric, it means
the DAX queries were processed in DirectQuery mode. In the absence of that metric, the
DAX queries were processed in Direct Lake mode.
) Important
In general, Direct Lake storage mode provides fast query performance unless a
fallback to DirectQuery mode is necessary. Because fallback to DirectQuery mode
can impact query performance, it's important to analyze query processing for a
Direct Lake semantic model to identify if, how often, and why fallbacks occur.
For more information, see Using Azure Log Analytics in Power BI.
Related content
Direct Lake overview
Develop Direct Lake semantic models
Understand storage for Direct Lake semantic models
Create a lakehouse for Direct Lake
Analyze query processing for Direct Lake semantic models
Specify a fixed identity for a Direct Lake semantic model
This article introduces Direct Lake storage concepts. It describes Delta tables and
Parquet files. It also describes how you can optimize Delta tables for Direct Lake
semantic models, and how you can maintain them to help deliver reliable, fast query
performance.
Delta tables
Delta tables exist in OneLake. They organize file-based data into rows and columns and
are available to Microsoft Fabric compute engines such as notebooks, Kusto, and the
lakehouse and warehouse. You can query Delta tables by using Data Analysis
Expressions (DAX), Multidimensional Expressions (MDX), T-SQL (Transact-SQL), Spark
SQL, and even Python.
7 Note
Delta—or Delta Lake—is an open-source storage format. That means Fabric can
also query Delta tables created by other tools and vendors.
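For example, the following minimal sketch queries a lakehouse Delta table from a Fabric
notebook by using Spark SQL; the table name Inventory is a placeholder.
Python
# Query a Delta table in the notebook's default lakehouse
df = spark.sql("SELECT ProductID, StockOnHand FROM Inventory")
df.show()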
Delta tables store their data in Parquet files, which are typically stored in a lakehouse
that a Direct Lake semantic model uses to load data. However, Parquet files can also be
stored externally. External Parquet files can be referenced by using a OneLake shortcut,
which points to a specific storage location, such as Azure Data Lake Storage (ADLS)
Gen2, Amazon S3 storage accounts, or Dataverse. In almost all cases, compute engines
access the Parquet files by querying Delta tables. However, Direct Lake semantic
models typically load column data directly from optimized Parquet files in OneLake by
using a process known as transcoding.
Data versioning
Delta tables comprise one or more Parquet files. These files are accompanied by a set of
JSON-based link files, which track the order and nature of each Parquet file that's
associated with a Delta table.
It's important to understand that the underlying Parquet files are incremental in nature.
Hence the name Delta as a reference to incremental data modification. Every time a
write operation to a Delta table takes place—such as when data is inserted, updated, or
deleted—new Parquet files are created that represent the data modifications as a
version. Parquet files are therefore immutable, meaning they're never modified. It's
therefore possible for data to be duplicated many times across a set of Parquet files for
a Delta table. The Delta framework relies on link files to determine which physical
Parquet files are required to produce the correct query result.
Consider a simple example of a Delta table that this article uses to explain different data
modification operations. The table has two columns and stores three rows.
ProductID StockOnHand
A 1
B 2
C 3
The Delta table data is stored in a single Parquet file that contains all data, and there's a
single link file that contains metadata about when the data was inserted (appended).
Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Insert operations
Consider what happens when an insert operation occurs: A new row for product D with
a stock on hand value of 4 is inserted. This operation results in the creation of a new
Parquet file and link file, so there are now two Parquet files and two link files.
Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Parquet file 2:
ProductID: D
StockOnHand: 4
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Link file 2:
Contains the timestamp when Parquet file 2 was created, and records that
data was appended.
At this point, a query of the Delta table returns the following result. It doesn't matter
that the result is sourced from multiple Parquet files.
ProductID StockOnHand
A 1
B 2
C 3
D 4
Every subsequent insert operation creates new Parquet files and link files. That means
the number of Parquet files and link files grows with every insert operation.
Update operations
Now consider what happens when an update operation occurs: The row for product C
has its stock on hand value changed to 10. This operation results in the creation of a
new Parquet file and link file, so there are now three Parquet files and three link files.
Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Parquet file 2:
ProductID: D
StockOnHand: 4
Parquet file 3:
ProductID: C
StockOnHand: 10
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Link file 2:
Contains the timestamp when Parquet file 2 was created, and records that
data was appended.
Link file 3:
Contains the timestamp when Parquet file 3 was created, and records that
data was updated.
At this point, a query of the Delta table returns the following result.
ProductID StockOnHand
A 1
B 2
C 10
D 4
Data for product C now exists in multiple Parquet files. However, queries to the Delta
table combine the link files to determine what data should be used to provide the
correct result.
Delete operations
Now consider what happens when a delete operation occurs: The row for product B is
deleted. This operation results in a new Parquet file and link file, so there are now four
Parquet files and four link files.
Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Parquet file 2:
ProductID: D
StockOnHand: 4
Parquet file 3:
ProductID: C
StockOnHand: 10
Parquet file 4:
ProductID: A, C, D
StockOnHand: 1, 10, 4
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Link file 2:
Contains the timestamp when Parquet file 2 was created, and records that
data was appended.
Link file 3:
Contains the timestamp when Parquet file 3 was created, and records that
data was updated.
Link file 4:
Contains the timestamp when Parquet file 4 was created, and records that
data was deleted.
Notice that Parquet file 4 no longer contains data for product B , but it does contain
data for all other rows in the table.
At this point, a query of the Delta table returns the following result.
ProductID StockOnHand
A 1
C 10
D 4
7 Note
This example is simple because it involves a small table, just a few operations, and
only minor modifications. Large tables that experience many write operations and
that contain many rows of data will generate more than one Parquet file per
version.
) Important
Depending on how you define your Delta tables and the frequency of data
modification operations, it might result in many Parquet files. Be aware that each
Fabric capacity license has guardrails. If the number of Parquet files for a Delta
table exceeds the limit for your SKU, queries will fall back to DirectQuery, which
might result in slower query performance.
To manage the number of Parquet files, see Delta table maintenance later in this
article.
Link files enable querying data as of an earlier point in time. This capability is known as
Delta time travel. The earlier point in time could be a timestamp or version.
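For example, the following minimal sketch runs time travel queries from a Fabric notebook
by using Spark SQL, assuming the example Delta table is named Inventory.
Python
# Query the table as it was at a specific point in time
spark.sql("SELECT * FROM Inventory TIMESTAMP AS OF '2024-09-17 00:00:00'").show()

# Query a specific version of the table (version 2 in this example)
spark.sql("SELECT * FROM Inventory VERSION AS OF 2").show()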
Tip
You can also query a table by using the @ shorthand syntax to specify the
timestamp or version as part of the table name. The timestamp must be in
yyyyMMddHHmmssSSS format. You can specify a version after @ by prepending a v to
the version.
Here are the previous query examples rewritten with shorthand syntax.
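The following sketch continues the previous example by using the @ shorthand with the
same assumed table name.
Python
# Timestamp shorthand uses the yyyyMMddHHmmssSSS format
spark.sql("SELECT * FROM Inventory@20240917000000000").show()

# Version shorthand prepends a v to the version number
spark.sql("SELECT * FROM Inventory@v2").show()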
) Important
Table versions accessible with time travel are determined by a combination of the
retention threshold for transaction log files and the frequency and specified
retention for VACUUM operations (described later in the Delta table maintenance
section). If you run VACUUM daily with the default values, seven days of data will be
available for time travel.
Framing
Framing is a Direct Lake operation that sets the version of a Delta table that should be
used to load data into a semantic model column. Equally important, the version also
determines what should be excluded when data is loaded.
A framing operation stamps the timestamp/version of each Delta table into the
semantic model tables. From this point, when the semantic model needs to load data
from a Delta table, the timestamp/version associated with the most recent framing
operation is used to determine what data to load. Any subsequent data modifications
that occur for the Delta table since the latest framing operation are ignored (until the
next framing operation).
) Important
Because a framed semantic model references a particular Delta table version, the
source must ensure it keeps that Delta table version until framing of a new version
is completed. Otherwise, users will encounter errors when the Delta table files need
to be accessed by the model and have been vacuumed or otherwise deleted by the
producer workload.
Table partitioning
Delta tables can be partitioned so that a subset of rows are stored together in a single
set of Parquet files. Partitions can speed up queries as well as write operations.
Consider a Delta table that has a billion rows of sales data for a two-year period. While
it's possible to store all the data in a single set of Parquet files, for this data volume it's
not optimal for read and write operations. Instead, performance can be improved by
spreading the billion rows of data across multiple series of Parquet files.
A partition key must be defined when setting up table partitioning. The partition key
determines which rows to store in which series. For Delta tables, the partition key can be
defined based on the distinct values of a specified column (or columns), such as a
month/year column of a date table. In this case, two years of data would be distributed
across 24 partitions (2 years x 12 months).
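For example, the following minimal PySpark sketch writes a partitioned Delta table. The
DataFrame, table, and column names are placeholders.
Python
# Write a Delta table partitioned by Year and Month columns
(
    df_sales.write.format("delta")
    .partitionBy("Year", "Month")
    .mode("overwrite")
    .saveAsTable("Sales")
)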
Fabric compute engines are unaware of table partitions. As they insert new partition key
values, new partitions are created automatically. In OneLake, you'll find one subfolder
for each unique partition key value, and each subfolder stores its own set of Parquet
files and link files. At least one Parquet file and one link file must exist, but the actual
number of files in each subfolder can vary. As data modification operations take place,
each partition maintains its own set of Parquet files and link files to keep track of what
to return for a given timestamp or version.
If a query of a partitioned Delta table is filtered to only the most recent three months of
sales data, the subset of Parquet files and link files that need to be accessed can be
quickly identified. That then allows skipping many Parquet files altogether, resulting in
better read performance.
However, queries that don't filter on the partition key might not always perform better.
That can be the case when a Delta table stores all data in a single large set of Parquet
files and there's file or row group fragmentation. While it's possible to parallelize the
data retrieval from multiple Parquet files across multiple cluster nodes, many small
Parquet files can adversely affect file I/O and therefore query performance. For this
reason, it's best to avoid partitioning Delta tables in most cases—unless write operations
or extract, transform, and load (ETL) processes would clearly benefit from it.
Partitioning benefits insert, update, and delete operations too, because file activity only
takes place in subfolders matching the partition key of the modified or deleted rows. For
example, if a batch of data is inserted into a partitioned Delta table, the data is assessed
to determine what partition key values exist in the batch. Data is then directed only to
the relevant folders for the partitions.
Understanding how Delta tables use partitions can help you design optimal ETL
scenarios that reduce the write operations that need to take place when updating large
Delta tables. Write performance improves by reducing the number and size of any new
Parquet files that must be created. For a large Delta table partitioned by month/year, as
described in the previous example, new data only adds new Parquet files to the latest
partition. Subfolders of previous calendar months remain untouched. If any data of
previous calendar months must be modified, only the relevant partition folders receive
new Parquet and link files.
) Important
If the main purpose of a Delta table is to serve as a data source for semantic
models (and secondarily, other query workloads), it's usually better to avoid
partitioning in preference for optimizing the load of columns into memory.
For Direct Lake semantic models or the SQL analytics endpoint, the best way to optimize
Delta table partitions is to let Fabric automatically manage the Parquet files for each
version of a Delta table. Leaving the management to Fabric should result in high query
performance through parallelization; however, it might not necessarily provide the best
write performance.
If you must optimize for write operations, consider partitioning Delta tables based on
an appropriate partition key. However, be aware that over-partitioning a Delta table can
negatively impact read performance. For this reason,
we recommend that you test the read and write performance carefully, perhaps by
creating multiple copies of the same Delta table with different configurations to
compare timings.
2 Warning
Parquet files
The underlying storage for a Delta table is one or more Parquet files. Parquet file format
is generally used for write-once, read-many applications. New Parquet files are created
every time data in a Delta table is modified, whether by an insert, update, or delete
operation.
7 Note
You can access Parquet files that are associated with Delta tables by using a tool,
like OneLake file explorer. Files can be downloaded, copied, or moved to other
destinations as easily as moving any other files. However, it's the combination of
Parquet files and the JSON-based link files that allow compute engines to issue
queries against the files as a Delta table.
The internal format of a Parquet file differs from other common data storage formats,
such as CSV, TSV, XML, and JSON. These formats organize data by rows, while Parquet
organizes data by columns. Also, Parquet file format differs from these formats because
it organizes rows of data into one or more row groups.
The internal data structure of a Power BI semantic model is column-based, which means
Parquet files share a lot in common with Power BI. This similarity means that a Direct
Lake semantic model can efficiently load data from the Parquet files directly into
memory. In fact, very large volumes of data can be loaded in seconds. Contrast this
capability with the refresh of an Import semantic model, which must retrieve blocks of
source data, then process, encode, store, and then load it into memory. An Import
semantic model refresh operation can also consume significant amounts of compute
(memory and CPU) and take considerable time to complete. However, with Delta tables,
most of the effort to prepare the data suitable for direct loading into a semantic model
takes place when the Parquet file is generated.
Consider the following example of a daily inventory table:
Date | ProductID | StockOnHand
2024-09-16 | A | 10
2024-09-16 | B | 11
2024-09-17 | A | 13
When stored in Parquet file format, conceptually, this set of data might look like the
following text.
HTML
Header:
RowGroup1:
Date: 2024-09-16, 2024-09-16, 2024-09-17…
ProductID: A, B, A…
StockOnHand: 10, 11, 13…
RowGroup2:
…
Footer:
Data is compressed by substituting dictionary keys for common values, and by applying
run-length encoding (RLE). RLE strives to compress a series of same values into a smaller
representation. In the following example, a dictionary mapping of numeric keys to
values is created in the header, and the smaller key values are used in place of the data
values.
HTML
Header:
Dictionary: [
(1, 2024-09-16), (2, 2024-09-17),
(3, A), (4, B),
(5, 10), (6, 11), (7, 13)
…
]
RowGroup1:
Date: 1, 1, 2…
ProductID: 3, 4, 3…
StockOnHand: 5, 6, 7…
Footer:
When the Direct Lake semantic model needs data to compute the sum of the
StockOnHand column grouped by ProductID , only the dictionary and data associated
with the two columns is required. In large files that contain many columns, substantial
portions of the Parquet file can be skipped to help speed up the read process.
7 Note
The contents of a Parquet file aren't human readable and so it isn't suited to
opening in a text editor. However, there are many open-source tools available that
can open and reveal the contents of a Parquet file. These tools can also let you
inspect metadata, such as the number of rows and row groups contained in a file.
V-Order
Delta tables created and loaded by Fabric items such as data pipelines, dataflows, and
notebooks automatically apply V-Order. However, Parquet files uploaded to a Fabric
lakehouse, or that are referenced by a shortcut, might not have this optimization
applied. While non-optimized Parquet files can still be read, the read performance likely
won't be as fast as an equivalent Parquet file that's had V-Order applied.
7 Note
Parquet files that have V-Order applied still conform to the open-source Parquet
file format. Therefore, they can be read by non-Fabric tools.
For more information, see Delta Lake table optimization and V-Order.
Data volume
While Delta tables can grow to store extremely large volumes of data, Fabric capacity
guardrails impose limits on semantic models that query them. When those limits are
exceeded, queries will fall back to DirectQuery, which might result in slower query
performance.
Therefore, consider limiting the row count of a large fact table by raising its granularity
(store summarized data), reducing dimensionality, or storing less history.
Also, ensure that V-Order is applied because it results in a smaller and therefore faster
file to read.
When you use approximate numeric data types (like float and real), consider rounding
values and using a lower precision.
Unnecessary columns
As with any data table, Delta tables should only store columns that are required. In the
context of this article, that means required by the semantic model, though there could
be other analytic workloads that query the Delta tables.
Delta tables should include columns required by the semantic model for filtering,
grouping, sorting, and summarizing, in addition to columns that support model
relationships. While unnecessary columns don't affect semantic model query
performance (because they won't be loaded into memory), they result in a larger
storage size and so require more compute resources to load and maintain.
Because Direct Lake semantic models don't support calculated columns, you should
materialize such columns in the Delta tables. Note that this design approach is an anti-
pattern for Import and DirectQuery semantic models. For example, if you have
FirstName and LastName columns, and you need a FullName column, materialize the
values for this column when inserting rows into the Delta table.
Consider that some semantic model summarizations might depend on more than one
column. For example, to calculate sales, the measure in the model sums the product of
two columns: Quantity and Price . If neither of these columns is used independently, it
would be more efficient to materialize the sales calculation as a single column than store
its component values in separate columns.
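The following minimal PySpark sketch materializes such columns when loading the Delta
table. The DataFrame, table, and column names are placeholders.
Python
from pyspark.sql.functions import col, concat_ws

df_sales = (
    df_sales
    # Materialize FullName because Direct Lake doesn't support calculated columns
    .withColumn("FullName", concat_ws(" ", col("FirstName"), col("LastName")))
    # Materialize the sales amount if Quantity and Price aren't needed independently
    .withColumn("Sales", col("Quantity") * col("Price"))
)

df_sales.write.format("delta").mode("overwrite").saveAsTable("Sales")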
The number of rows in a row group influences how quickly Direct Lake can read the
data. A higher number of row groups with fewer rows is likely to negatively impact
loading column data into a semantic model due to excessive I/O.
Generally, we don't recommend that you change the default row group size. However,
you might consider changing the row group size for large Delta tables. Be sure to test
the read and write performance carefully, perhaps by creating multiple copies of the
same Delta tables with different configurations to compare timings.
) Important
Be aware that every Fabric capacity license has guardrails. If the number of row
groups for a Delta table exceeds the limit for your SKU, queries will fall back to
DirectQuery, which might result in slower query performance.
OPTIMIZE
You can use OPTIMIZE to optimize a Delta table to coalesce smaller files into larger
ones. You can also set the WHERE clause to target only a filtered subset of rows that
match a given partition predicate. Only filters involving partition keys are supported. The
OPTIMIZE command can also apply V-Order to compact and rewrite the Parquet files.
We recommend that you run this command on large, frequently updated Delta tables
on a regular basis, perhaps every day when your ETL process completes. Balance the
trade-off between better query performance and the cost of resource usage required to
optimize the table.
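For example, the following sketch runs OPTIMIZE with V-Order from a Fabric notebook;
the table name and partition predicate are placeholders.
Python
# Coalesce small Parquet files and apply V-Order to the whole table
spark.sql("OPTIMIZE Sales VORDER")

# Or target only a subset of partitions (only partition-key filters are supported)
spark.sql("OPTIMIZE Sales WHERE Year = 2024 VORDER")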
VACUUM
You can use VACUUM to remove files that are no longer referenced and/or that are
older than a set retention threshold. Take care to set an appropriate retention period,
otherwise you might lose the ability to time travel back to a version older than the frame
stamped into semantic model tables.
) Important
Because a framed semantic model references a particular Delta table version, the
source must ensure it keeps that Delta table version until framing of a new version
is completed. Otherwise, users will encounter errors when the Delta table files need
to be accessed by the model and have been vacuumed or otherwise deleted by the
producer workload.
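For example, the following sketch runs VACUUM from a Fabric notebook with the default
seven-day retention expressed in hours; the table name is a placeholder.
Python
# Remove unreferenced files older than the retention threshold (168 hours = 7 days)
spark.sql("VACUUM Sales RETAIN 168 HOURS")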
REORG TABLE
You can use REORG TABLE to reorganize a Delta table by rewriting files to purge soft-
deleted data, such as when you drop a column by using ALTER TABLE DROP COLUMN.
Tip
You can also use the lakehouse Table maintenance feature in the Fabric portal to
simplify management of your Delta tables.
Related content
Direct Lake overview
Develop Direct Lake semantic models
Manage Direct Lake semantic models
Delta Lake table optimization and V-Order
Semantic models using Direct Lake mode access OneLake data directly, which requires
running the Power BI Analysis Services engine in a workspace with a Fabric capacity.
Semantic models using import or DirectQuery mode can have the Power BI Analysis
Services engine running locally on your computer by using Power BI Desktop for
creating and editing the semantic model. Once published, such models operate using
Power BI Analysis Services in the workspace.
To facilitate editing Direct Lake semantic models in Power BI Desktop, you can now
perform a live edit of a semantic model in Direct Lake mode, enabling Power BI Desktop
to make changes to the model by using the Power BI Analysis Services engine in the
Fabric workspace.
You can also open the OneLake data hub from a blank report, as shown in the following
image:
2. Search for a semantic model in Direct Lake mode, expand the Connect button and
select Edit.
7 Note
Selecting a semantic model that is not in Direct Lake mode will result in an error.
3. The selected semantic model opens for editing at which point you are in live edit
mode, as demonstrated in the following screenshot.
4. You can edit your semantic model using Power BI Desktop, enabling you to make
changes directly to the selected semantic model. Changes include all modeling
tasks, such as renaming tables/columns, creating measures, and creating
calculation groups. DAX query view is available to run DAX queries to preview data
and test measures before saving them to the model.
7 Note
Notice that the Save option is disabled, because you don’t need to save. Every
change you make is immediately applied to the selected semantic model in the
workspace.
In the title bar, you can see the workspace and semantic model name with links to open
these items in the Fabric portal.
When you connect to and live edit a semantic model during the preview, it's not possible
to select an existing report to edit, and the Report view is hidden. You can open an
existing report or create a new one by live connecting to this semantic model in another
instance of Power BI Desktop or in the workspace. You can write DAX queries in the
workspace with DAX query view in the web. And you can visually explore the data with
the new explore your data feature in the workspace.
If two or more users are live editing the same semantic model and a conflict occurs,
Power BI Desktop alerts one of the users, shown in the following image, and refreshes
the model to the latest version. Any changes you were trying to make will need to be
performed again after the refresh.
Edit tables
Changes to the tables and columns in the OneLake data source, typically a Lakehouse or
Warehouse, like import or DirectQuery data sources, aren't automatically reflected in the
semantic model. To update the semantic model with the latest schema, such as getting
column changes in existing tables or to add or remove tables, go to Transform data >
Data source settings > Edit Tables.
Learn more about Edit tables for Direct Lake semantic models.
Use refresh
Semantic models in Direct Lake mode automatically reflect the latest data changes in
the Delta tables when Keep your Direct Lake data up to date is enabled. When disabled,
you can manually refresh your semantic model by using the Power BI Desktop Refresh
button to ensure it targets the latest version of your data. This is also sometimes called
reframing.
Navigate to File > Export > Power BI Project and export it as a Power BI Project file
(PBIP).
Selecting Export opens the folder containing the PBIP files of the exported semantic
model along with an empty report.
After exporting you should open a new instance of Power BI Desktop and open the
exported PBIP file to continue editing with a Power BI Project. When you open the PBIP
file, Power BI Desktop prompts you to either create a new semantic model in a Fabric
workspace, or select an existing semantic model for remote modeling.
7 Note
Semantic models in Direct Lake mode, when exported to a Git repository using
Fabric Git Integration, can be edited using Power BI Desktop. To do so, make sure
at least one report is connected to the semantic model, then open the report's
exported definition.pbir file to edit both the report and the semantic model.
If you select an existing semantic model and the definition differs, Power BI Desktop
warns you before overwriting, as shown in the following image.
7 Note
You can select the same semantic model you exported the PBIP from. However, the
best practice when working with a PBIP that requires a remote semantic model is
for each developer to work on their own private remote semantic model to avoid
conflicts with changes from other developers.
Selecting the title bar displays both the PBIP file location and the remote semantic
model living in a Fabric workspace, shown in the following image.
A local setting is saved in the Power BI Project files with the configured semantic
model. The next time you open the PBIP, you won't see the prompt, and the Fabric
semantic model will be overwritten with the metadata from the semantic model in the
Power BI Project files.
7 Note
The configuration described in this section is intended solely for local development
and should not be used for deployment across different environments.
Solution: Review all the requirements and permissions. If you meet all the requirements,
check whether you can edit the semantic model by using web modeling.
Scenario: I lost the connection to the remote semantic model and can't recover it. Have I
lost my changes?
Solution: All your changes are immediately applied to the remote semantic model. You
can always close Power BI Desktop and restart the editing session with the semantic
model you were working on.
Scenario: I exported to Power BI Project (PBIP). Can I select the same semantic model I
was live editing?
Solution: You can, but you should be careful. If each developer is working on their local
PBIP and all select the same semantic model as a remote model, they'll overwrite each
other's changes. The best practice when working with a PBIP is for each developer to
have their own isolated copy of the Direct Lake semantic model.
Scenario: I’m live editing the Direct Lake semantic model and can't create field
parameters.
Solution: When live editing a semantic model, Report View isn't available, which is
required for the field parameters UI. You can export to a Power BI Project (PBIP) and
open it to access Report View and the field parameters UI.
Scenario: I made changes to the semantic model using an external tool, but I don't see
those changes reflected in Power BI Desktop.
Solution: Changes made by external tools are applied to the remote semantic model,
but these changes will only become visible in Power BI Desktop after either the next
modeling change is made within Power BI Desktop, or the semantic model is refreshed.
Additionally, please consider the current known issues and limitations of Direct Lake.
Related content
Direct Lake overview
Power BI Project files
The tables of a semantic model in Direct Lake mode come from Microsoft Fabric and
OneLake data. Instead of the transform data experience of Power BI import and
DirectQuery, Direct Lake mode uses the Edit tables experience, which lets you decide
which tables you want the semantic model in Direct Lake mode to use.
In the semantic model, tables and columns can be renamed to support reporting
expectations. Edit tables still shows the data source table names, and schema sync
doesn't impact the semantic model renames.
In the Lakehouse, tables and views can also be renamed. If the upstream data source
renames a table or column after the table was added to the semantic model, the
semantic model schema sync will still be looking for the table using the previous name,
so the table will be removed from the model on schema sync. The table with the new
name will show in the Edit tables dialog as unchecked, and must be explicitly checked
again and added again to the semantic model. Measures can be moved to the new
table, but relationships and column property updates need to be reapplied to the table.
Entry points
The following sections describe the multiple ways you can edit semantic models in
Direct Lake.
Selecting the ribbon button launches the Edit tables dialog, as shown in the following
image.
You can perform many actions that impact the tables in the semantic model:
Selecting the Confirm button with no changes initiates a schema sync. Any table
changes in the data source, such as an added or removed column, are applied to
the semantic model.
Selecting the Cancel button returns to editing the model without applying any
updates.
Selecting tables or views previously unselected adds the selected items to the
semantic model.
Unselecting tables or views previously selected removes them from the semantic
model.
Tables that have measures can be unselected, but they still show in the model with their
columns removed and only the measures remaining. The measures can be either deleted
or moved to a different table. When all measures have been moved or deleted, go back to
Edit tables and select Confirm to no longer show the empty table in the model.
Direct Lake semantic model: The name of the semantic model in the workspace,
which can be changed later. If the semantic model with the same name already
exists in the workspace, a number is automatically appended to the end of the
model name.
Workspace: The workspace where the semantic model is saved. By default the
workspace you're currently working in is selected, but you can change it to another
Fabric workspace.
Related content
Direct Lake overview
Create a lakehouse for Direct Lake
Analyze query processing for Direct Lake semantic models
This article describes how to create a lakehouse, create a Delta table in the lakehouse,
and then create a basic semantic model for the lakehouse in a Microsoft Fabric
workspace.
Before getting started creating a lakehouse for Direct Lake, be sure to read Direct Lake
overview.
Create a lakehouse
1. In your Microsoft Fabric workspace, select New > More options, and then in Data
Engineering, select the Lakehouse tile.
2. In the New lakehouse dialog box, enter a name, and then select Create. The name
can only contain alphanumeric characters and underscores.
3. Verify the new lakehouse is created and opens successfully.
There are multiple options to load data into a lakehouse, including data pipelines and
scripts. The following steps use PySpark to add a Delta table to a lakehouse based on an
Azure Open Dataset:
1. In the newly created lakehouse, select Open notebook, and then select New
notebook.
2. Copy and paste the following code snippet into the first code cell to let SPARK
access the open dataset, and then press Shift + Enter to run the code.
Python
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "holidaydatacontainer"
blob_relative_path = "Processed"
blob_sas_token = r""
4. Copy and paste the following code into the next cell, and then press Shift + Enter.
This code and the code in the cells that follow read the open dataset into a Spark
DataFrame and save it to the lakehouse as a Delta table (a sketch of these cells is
shown after this procedure).
7. Verify all SPARK jobs complete successfully. Expand the SPARK jobs list to view
more details.
8. To verify a table has been created successfully, in the upper left area, next to
Tables, select the ellipsis (…), then select Refresh, and then expand the Tables
node.
9. Using either the same method as above or other supported methods, add more
Delta tables for the data you want to analyze.
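A minimal sketch of the code cells from steps 4 through 6, assuming the variables defined
in the first cell; the table name holidays is an assumption.
Python
# Build the remote path to the Azure Open Dataset and allow Spark to read it
wasbs_path = f"wasbs://{blob_container_name}@{blob_account_name}.blob.core.windows.net/{blob_relative_path}"
spark.conf.set(
    f"fs.azure.sas.{blob_container_name}.{blob_account_name}.blob.core.windows.net",
    blob_sas_token,
)

# Read the public holidays Parquet data into a Spark DataFrame
df = spark.read.parquet(wasbs_path)

# Save the DataFrame to the lakehouse as a Delta table (assumed name: holidays)
df.write.format("delta").mode("overwrite").saveAsTable("holidays")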
3. Select Open data model to open the Web modeling experience where you can
add table relationships and DAX measures.
When you're finished adding relationships and DAX measures, you can then create
reports, build a composite model, and query the model through XMLA endpoints in
much the same way as any other model.
Related content
Specify a fixed identity for a Direct Lake model
Direct Lake overview
Analyze query processing for Direct Lake semantic models
Follow these steps to specify a fixed identity connection for a Direct Lake semantic
model.
1. In your Direct Lake model's settings, expand Gateway and cloud connections.
Note that your Direct Lake model has a SQL Server data source pointing to a
lakehouse or data warehouse in Fabric.
3. In Authentication method, select OAuth 2.0 or Service Principal, and then specify
credentials for the fixed identity you want to use.
4. In Single sign-on, ensure SSO via Microsoft Entra ID for DirectQuery queries is
not selected.
Related content
Direct Lake overview
Analyze query processing for Direct Lake semantic models
Power BI semantic models in Direct Lake mode read Delta tables directly from OneLake
— unless they have to fall back to DirectQuery mode. Typical fallback reasons include
memory pressures that can prevent loading the columns required to process a DAX
query, and the use of certain data source features that don't support Direct Lake mode,
such as SQL views in a warehouse or lakehouse. In general, Direct Lake mode provides the
best DAX query performance unless a fallback to DirectQuery mode is necessary.
Because fallback to DirectQuery mode can impact DAX query performance, it's
important to analyze query processing for a Direct Lake semantic model to identify if
and how often fallbacks occur.
1. Start Power BI Desktop. On the startup screen, select New > Report.
2. Select Get Data from the ribbon, then select Power BI semantic models.
3. In the OneLake data hub page, select the Direct Lake semantic model you want to
connect to, and then select Connect.
4. Place a card visual on the report canvas, select a data column to create a basic
report, and then on the View menu, select Performance analyzer.
5. In the Performance analyzer pane, select Start recording.
6. In the Performance analyzer pane, select Refresh visuals, and then expand the
Card visual. The card visual doesn't cause any DirectQuery processing, which
indicates the semantic model was able to process the visual’s DAX queries in Direct
Lake mode.
If the semantic model falls back to DirectQuery mode to process the visual’s DAX
query, you see a Direct query performance metric, as shown in the following
image:
Analyze by using SQL Server Profiler
SQL Server Profiler can provide more details about query performance by tracing query
events. It's installed with SQL Server Management Studio (SSMS). Before starting, make
sure you have the latest version of SSMS installed.
3. In Connect to Server > Server type, select Analysis Services, then in Server name,
enter the URL to your workspace, then select an authentication method, and then
enter a username to sign in to the workspace.
4. Select Options. In Connect to database, enter the name of your semantic model
and then select Connect. Sign in to Microsoft Entra ID.
5. In Trace Properties > Events Selection, select the Show all events checkbox.
6. Scroll to Query Processing, and then select checkboxes for the following events:
The following image shows an example of query processing events for a DAX
query. In this trace, the VertiPaq storage engine (SE) events indicate that the query
was processed in Direct Lake mode.
Related content
Create a lakehouse for Direct Lake
Direct Lake overview
In Microsoft Fabric, when the user creates a lakehouse, the system also provisions the
associated SQL analytics endpoint and default semantic model in Direct Lake mode. You
can add tables from the lakehouse into the default semantic model by going to the SQL
analytics endpoint and clicking the Manage default semantic model button in the
Reporting ribbon. You can also create a non-default Power BI semantic model in Direct
Lake mode by clicking New semantic model in the lakehouse or SQL analytics endpoint.
The non-default semantic model is created in Direct Lake mode and allows Power BI to
consume data by creating Power BI reports, explores, and running user-created DAX
queries in Power BI Desktop or the workspace itself. The default semantic model created
in the SQL analytics endpoint can be used to create Power BI reports but has some other
limitations.
When a Power BI report shows data in visuals, it requests it from the semantic model.
Next, the semantic model accesses a lakehouse to consume data and return it to the
Power BI report. For efficiency, the semantic model can keep some data in the cache and
refresh it when needed. Direct Lake overview has more details.
Lakehouse also applies V-order optimization to delta tables. This optimization gives
unprecedented performance and the ability to quickly consume large amounts of data
for Power BI reporting.
Setting permissions for report consumption
A semantic model in Direct Lake mode consumes data from a lakehouse on demand. To
make sure that the data is accessible to the user viewing the Power BI report, the
necessary permissions must be set on the underlying lakehouse.
One option is to give the user the Viewer role in the workspace, which lets them consume
all items in the workspace, including the lakehouse (if it's in the same workspace),
semantic models, and reports. Alternatively, the user can be given the Admin, Member, or
Contributor role to have full access to the data and be able to create and edit items such
as lakehouses, semantic models, and reports.
In addition, non-default semantic models can use a fixed identity to read data from
the lakehouse without giving report users any access to the lakehouse, and users can be
given permission to access the report through an app. Also, with a fixed identity, non-
default semantic models in Direct Lake mode can have row-level security defined in the
semantic model to limit the data the report user sees while maintaining Direct Lake
mode. SQL-based security at the SQL analytics endpoint can also be used, but Direct
Lake mode will fall back to DirectQuery, so this should be avoided to maintain the
performance of Direct Lake.
Related content
Default Power BI semantic models in Microsoft Fabric
Fabric provides three ways to endorse valuable, high-quality items to increase their
visibility: promotion, certification, and designating them as master data.
Promotion: Promotion is a way to highlight items you think are valuable and
worthwhile for others to use. It encourages the collaborative use and spread of
content within an organization.
Any item owner, as well as anyone with write permissions on the item, can
promote the item when they think it's good enough for sharing.
Certification: Certification means that the item meets the organization's quality
standards and can be regarded as reliable, authoritative, and ready for use across
the organization.
Only authorized reviewers (defined by the Fabric administrator) can certify items.
Item owners who wish to see their item certified and aren't authorized to certify it
themselves need to follow their organization's guidelines about getting items
certified.
Master data: Being labeled as master data means that the data item is regarded by
the organization as being core, single-source-of-truth data, such as customer lists
or product codes.
Only authorized reviewers (defined by the Fabric administrator) can label data
items as master data. Item owners who wish to see their item endorsed as master
data and aren't authorized to apply the Master data badge themselves need to
follow their organization's guidelines about getting items labeled as master data.
Currently it's possible to promote or certify all Fabric and Power BI items except Power
BI dashboards.
Master data badges can only be applied to items that contain data, such as lakehouses
and semantic models.
Promote items
To promote an item, you must have write permissions on the item you want to promote.
3. Select Apply.
Certify items
Item certification is a significant responsibility, and you should only certify an item if you
feel qualified to do so and have reviewed the item.
To certify an item:
7 Note
If you aren't authorized to certify an item yourself, you can request item
certification.
You must have write permissions on the item you want to apply the Certified
badge to.
1. Carefully review the item and determine whether it meets your organization's
certification standards.
2. If you decide to certify the item, go to the workspace where it resides, and open
the settings of the item you want to certify.
4. Select Apply.
7 Note
If you aren't authorized to designate a data item as master data yourself, you
can request the master data designation.
You must have write permissions on the item you want to apply the Master data
badge to.
1. Carefully review the data item and determine whether it is truly core, single-
source-of-truth data that your organization wants users to find and use for the
kind of data it contains.
2. If you decide to label the item as master data, go to the workspace where it's
located, and open the item's settings.
4. Select Apply.
1. Go to the workspace where the item you want endorsed as certified or master data
is located, and then open the settings of that item.
2. Expand the endorsement section. The Certified or Master data button is greyed
out if you're not authorized to endorse items as certified or as master data.
3. Select the relevant link, How do I get content certified or How do I get content
endorsed as Master data, to find out how to get your item endorsed the way you
want:
7 Note
If you clicked one of the links but got redirected back to this note, it means
that your Fabric admin has not made any information available. In this case,
contact the Fabric admin directly.
Related content
Read more about endorsement
Enable item certification (Fabric admins)
Enable master data endorsement (Fabric admins)
Read more about semantic model discoverability
Workspaces are the central places where you collaborate with your colleagues in
Microsoft Fabric. Besides assigning workspace roles, you can also use item sharing to
grant and manage item-level permissions in scenarios where:
You want to collaborate with colleagues who don't have a role in the workspace.
You want to grant additional item level-permissions for colleagues who already
have a role in the workspace.
This document describes how to share an item and manage its permissions.
2. The Create and send link dialog opens. Select People in your organization can
view.
3. The Select permissions dialog opens. Choose the audience for the link you're
going to share.
You have the following options:
People with existing access This type of link generates a URL to the item, but
it doesn't grant any access to the item. Use this link type if you just want to
send a link to somebody who already has access.
Specific people This type of link allows specific people or groups to access
the report. If you select this option, enter the names or email addresses of the
people you wish to share with. This link type also lets you share to guest
users in your organization's Microsoft Entra ID. You can't share to external
users who aren't guests in your organization.
7 Note
If your admin has disabled shareable links to People in your organization, you
can only copy and share links using the People with existing access and
Specific people options.
Links that give access to People in your organization or Specific people always
include at least read access. However, you can also specify whether you want the link to
include additional permissions.
7 Note
The Additional permissions settings vary for different items. Learn more
about the item permission model.
Links for People with existing access don't have additional permission
settings because these links don't give access to the item.
4. Select Apply.
5. In the Create and send link dialog, you have the option to copy the sharing link,
generate an email with the link, or share it via Teams.
Copy link: This option automatically generates a shareable link. Select Copy
in the Copy link dialog that appears to copy the link to your clipboard.
by Email: This option opens the default email client app on your computer
and creates an email draft with the link in it.
by Teams: This option opens Teams and creates a new Teams draft message
with the link in it.
6. You can also choose to send the link directly to Specific people or groups
(distribution groups or security groups). Enter their name or email address,
optionally type a message, and select Send. An email with the link is sent to your
specified recipients.
When your recipients receive the email, they can access the report through the
shareable link.
This image shows the Edit link pane when the selected audience is People in your
organization can view and share.
This image shows the Edit link pane when the selected audience is Specific people
can view and share. Note that the pane enables you to modify who can use the
link.
4. For more access management capabilities, select the Advanced option in the
footer of the Manage permissions pane. On the management page that opens,
you can:
Grant and manage access directly
In some cases, you need to grant permission directly instead of sharing a link, such as
when granting permission to a service account.
4. Enter the names of people or accounts that you need to grant access to directly.
Select the permissions that you want to grant. You can also optionally notify
recipients by email.
5. Select Grant.
6. You can see all the people, groups, and accounts with access in the list on the
permission management page. You can also see their workspace roles,
permissions, and so on. By selecting the context menu, you can modify or remove
the permissions.
7 Note
You can't modify or remove permissions that are inherited from a workspace
role in the permission management page. Learn more about workspace roles
and the item permission model.
| Permission granted while sharing | Effect |
| --- | --- |
| Read | Recipient can discover the item in the data hub and open it. Connect to the Warehouse or SQL analytics endpoint of the Lakehouse. |
| Share | Recipient can share the item and grant permissions up to the permissions that they have. For example, if the original recipient has Share, Edit, and Read permissions, they can at most grant Share, Edit, and Read permissions to the next recipient. |
| Read All with SQL analytics endpoint | Read data from the SQL analytics endpoint of the Lakehouse or Warehouse through TDS endpoints. |
| Read all with Apache Spark | Read Lakehouse or Data warehouse data through OneLake APIs and Spark. Read Lakehouse data through Lakehouse explorer. |
The Shared with me option in the Browse pane currently only displays Power BI
items that have been shared with you. It doesn't show you non-Power BI Fabric
items that have been shared with you.
Related content
Workspace roles
Sensitivity labels from Microsoft Purview Information Protection on items can guard
your sensitive content against unauthorized data access and leakage. They're a key
component in helping your organization meet its governance and compliance
requirements. Labeling your data correctly with sensitivity labels ensures that only
authorized people can access your data. This article shows you how to apply sensitivity
labels to your Microsoft Fabric items.
7 Note
For information about applying sensitivity labels in Power BI Desktop, see Apply
sensitivity labels in Power BI Desktop.
Prerequisites
Requirements needed to apply sensitivity labels to Fabric items:
7 Note
If you can't apply a sensitivity label, or if the sensitivity label is greyed out in the
sensitivity label menu, you may not have permissions to use the label. Contact your
organization's tech support.
Apply a label
There are two common ways of applying a sensitivity label to an item: from the flyout
menu in the item header, and in the item settings.
From the flyout menu - select the sensitivity indication in the header to display the
flyout menu:
In the item's settings - open the item's settings, find the sensitivity section, and then
choose the desired label:
Related content
Sensitivity label overview
In Microsoft Fabric, the Delta Lake table format is the standard for analytics. Delta Lake is an open-source storage layer
that brings ACID (Atomicity, Consistency, Isolation, Durability) transactions to big data and analytics workloads.
All Fabric experiences generate and consume Delta Lake tables, driving interoperability and a unified product experience.
Delta Lake tables produced by one compute engine, such as Fabric Data Warehouse or Synapse Spark, can be consumed by
any other engine, such as Power BI. When you ingest data into Fabric, Fabric stores it as Delta tables by default. You can
easily integrate external data containing Delta Lake tables by using OneLake shortcuts.
Writers: Data warehouses, eventstreams, and exported Power BI semantic models into OneLake
Readers: SQL analytics endpoint and Power BI direct lake semantic models
Writers and readers: Fabric Spark runtime, dataflows, data pipelines, and Kusto Query Language (KQL) databases
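As a minimal illustration of this interoperability (not an example from this article), the following sketch writes a Delta table from a Fabric notebook. The sales_orders table name and sample rows are assumptions, and a default lakehouse attached to the notebook is assumed; any other engine, such as the SQL analytics endpoint or a Direct Lake semantic model, can then read the same table from OneLake.
Python
# Illustrative sketch: create a small DataFrame and save it as a Delta table in the lakehouse.
df = spark.createDataFrame(
    [(1, "2024-01-15", 250.0), (2, "2024-01-16", 125.5)],
    ["order_id", "order_date", "amount"],
)
df.write.format("delta").mode("overwrite").saveAsTable("sales_orders")

# Read the Delta table back with Spark; other engines read the same files from OneLake.
spark.read.table("sales_orders").show()
From the SQL analytics endpoint, the same table can typically be queried with T-SQL (for example, as dbo.sales_orders) without any copy or import step.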
The following matrix shows key Delta Lake features and their support on each Fabric capability.
| Fabric capability | Name-based column mappings | Deletion vectors | V-order writing | Table optimization and maintenance | Write partitions | Read partitions | Liquid Clustering | TIMESTAMP_NTZ | Delta reader/writer version and default table features |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SQL analytics endpoint | Yes | Yes | N/A (not applicable) | N/A (not applicable) | N/A (not applicable) | Yes | Yes | No | N/A (not applicable) |
| Fabric Spark Runtime 1.3 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Reader: 1, Writer: 2 |
| Fabric Spark Runtime 1.2 | Yes | Yes | Yes | Yes | Yes | Yes | Yes, read only | Yes | Reader: 1, Writer: 2 |
| Fabric Spark Runtime 1.1 | Yes | No | Yes | Yes | Yes | Yes | Yes, read only | No | Reader: 1, Writer: 2 |
| Power BI direct lake semantic models | Yes | Yes | N/A (not applicable) | N/A (not applicable) | N/A (not applicable) | Yes | Yes | No | N/A (not applicable) |
| Export Power BI semantic models into OneLake | Yes | N/A (not applicable) | Yes | No | Yes | N/A (not applicable) | No | No | Reader: 2, Writer: 5 |
* KQL databases provide certain table maintenance capabilities such as retention. Data is removed at the end of the retention
period from OneLake. For more information, see One Logical copy.
7 Note
Fabric doesn't write name-based column mappings by default. The default Fabric experience generates tables that
are compatible across the service. Delta Lake tables produced by third-party services might have incompatible
table features.
Some Fabric experiences don't include built-in table optimization and maintenance capabilities, such as bin-
compaction, V-order, and cleanup of old unreferenced files. To keep Delta Lake tables optimal for analytics, follow
the techniques in Use table maintenance feature to manage delta tables in Fabric for tables ingested using those
experiences.
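As a hedged illustration of that kind of maintenance (not an example from this article), the following commands compact small files and remove old unreferenced files for a hypothetical table; in Fabric you can also run table maintenance from the Lakehouse user interface.
Python
# Illustrative sketch; "sales_orders" is a hypothetical table name.
spark.sql("OPTIMIZE sales_orders")                 # bin-compaction of small files
# Fabric Spark also supports applying V-Order during compaction, for example: OPTIMIZE sales_orders VORDER
spark.sql("VACUUM sales_orders RETAIN 168 HOURS")  # remove unreferenced files older than 7 days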
Current limitations
Currently, Fabric doesn't support these Delta Lake features:
Related content
What is Delta Lake?
Learn more about Delta Lake tables in Fabric Lakehouse and Synapse Spark.
Learn about Direct Lake in Power BI and Microsoft Fabric.
Learn more about querying tables from the Warehouse through its published Delta Lake Logs.
Your feedback is important to us. We want to hear about your experiences with
Microsoft Fabric. Your feedback is used to improve the product and shape the way it
evolves. This article describes how you can give feedback about Microsoft Fabric, how
the feedback is collected, and how we handle this information.
Feedback types
There are three ways to give feedback about Microsoft Fabric: in-product feedback, in-
product surveys, and community feedback.
In-product feedback
Give in-product feedback by selecting the Feedback button next to your profile picture
in the Microsoft Fabric portal.
In-product surveys
From time to time, Microsoft Fabric initiates in-product surveys to collect feedback from
users. When you see a prompt, you can choose to give feedback or dismiss the prompt.
If you dismiss the prompt, you won't see it again for some time.
Community feedback
There are a few ways you can give feedback while engaging with the Microsoft Fabric
community:
Community Feedback - Give feedback about Microsoft Fabric and vote for
publicly submitted feedback. Top known feedback items remain available in the
new portal.
What kind of feedback is best?
Try to give detailed and actionable feedback. If you have issues, or suggestions for how
we can improve, we’d like to hear it.
Descriptive title - Descriptive and specific titles help us understand the issue being
reported.
One issue - Providing feedback for one issue ensures the correct logs and data are
received with each submission and can be assigned for follow-up. If you have more
than one issue, submit a separate feedback request for each one. Giving feedback for
separate issues helps us identify the volume of feedback we're receiving for a
particular issue.
Give details - Give details about your issue in the description box. Information
about your device, operating system, and apps is automatically included in each
reported feedback submission. Add any additional information you think is important,
and include detailed steps to reproduce the issue.
What do we collect?
Here are the most common items collected or calculated.
Survey questions - Questions that we asked the user during the survey.
Survey responses - User responses to survey questions.
App language - The language of the Microsoft product that was captured on
submission.
To learn more about how we protect the privacy and confidentiality of your data, and
how we ensure that it will be used only in a way that is consistent with your
expectations, review our privacy principles at the Microsoft Trust Center .
The goal of this series of articles is to provide a roadmap. The roadmap presents a series
of strategic and tactical considerations and action items that lead to the successful
adoption of Microsoft Fabric, and help build a data culture in your organization.
Advancing adoption and cultivating a data culture is about more than implementing
technology features. Technology can assist an organization in making the greatest
impact, but a healthy data culture involves many considerations across the spectrum of
people, processes, and technology.
7 Note
While reading this series of articles, we recommend that you also take into
consideration Power BI implementation planning guidance. After you're familiar
with the concepts in the Microsoft Fabric adoption roadmap, consider reviewing
the usage scenarios. Understanding the diverse ways Power BI is used can
influence your implementation strategies and decisions for all of Microsoft Fabric.
The diagram depicts the following areas of the Microsoft Fabric adoption roadmap.
The areas in the above diagram include:
| Area | Description |
| --- | --- |
| Data culture | Data culture refers to a set of behaviors and norms in the organization that encourages a data-driven culture. Building a data culture is closely related to adopting Fabric, and it's often a key aspect of an organization's digital transformation. |
| Business alignment | How well the data culture and data strategy enable business users to achieve business objectives. An effective BI data strategy aligns with the business strategy. |
| Content ownership and management | There are three primary strategies for how business intelligence (BI) and analytics content is owned and managed: business-led self-service BI, managed self-service BI, and enterprise BI. These strategies have a significant influence on adoption, governance, and the Center of Excellence (COE) operating model. |
| Content delivery scope | There are four primary strategies for content and data delivery: personal, team, departmental, and enterprise. These strategies have a significant influence on adoption, governance, and the COE operating model. |
| Center of Excellence | A Fabric COE is an internal team of technical and business experts. These experts actively assist others who are working with data within the organization. The COE forms the nucleus of the broader community to advance adoption goals that are aligned with the data culture vision. |
| Governance | Data governance is a set of policies and procedures that define the ways in which an organization wants data to be used. When adopting Fabric, the goal of governance is to empower the internal user community to the greatest extent possible, while adhering to industry, governmental, and contractual requirements and regulations. |
| Mentoring and user enablement | A critical objective for adoption efforts is to enable users to accomplish as much as they can within the guardrails established by governance guidelines and policies. The act of mentoring users is one of the most important responsibilities of the COE. It has a direct influence on adoption efforts. |
| User support | User support includes both informally organized and formally organized methods of resolving issues and answering questions. Both formal and informal support methods are critical for adoption. |
Your organizational data culture vision will strongly influence the strategies that
you follow for self-service and enterprise content ownership and management
and content delivery scope.
These strategies will, in turn, have a big impact on the operating model for your
Center of Excellence and governance decisions.
The established governance guidelines, policies, and processes affect the
implementation methods used for mentoring and enablement, the community of
practice, and user support.
Governance decisions will dictate the day-to-day system oversight (administration)
activities.
Adoption and governance decisions are implemented alongside change
management to mitigate the impact and disruption of change on existing business
processes.
All data culture and adoption-related decisions and actions are accomplished more
easily with guidance and leadership from an executive sponsor, who facilitates
business alignment between the business strategy and data strategy. This
alignment in turn informs data culture and governance decisions.
Each individual article in this series discusses key topics associated with the items in the
diagram. Considerations and potential action items are provided. Each article concludes
with a set of maturity levels to help you assess your current state so you can decide
what action to take next.
) Important
Whenever possible, adoption efforts should be aligned across analytics platforms and BI
services.
7 Note
The remaining articles in this Microsoft Fabric adoption series discuss the following aspects of
adoption.
) Important
You might be wondering how this Fabric adoption roadmap is different from the
Power BI adoption framework . The adoption framework was created primarily to
support Microsoft partners. It's a lightweight set of resources to help partners
deploy Power BI solutions for their customers.
This adoption series is more current. It's intended to guide any person or
organization that is using, or considering using, Fabric. If you're seeking to
improve your existing Power BI or Fabric implementation, or planning a new Power
BI or Fabric implementation, this adoption roadmap is a great place to start.
Target audience
This series of articles is intended for people who are interested in one or more of the
following outcomes.
Improving their organization's ability to effectively use analytics.
Increasing their organization's maturity level related to the delivery of analytics.
Understanding and overcoming adoption-related challenges faced when scaling
and growing.
Increasing their organization's return on investment (ROI) in data and analytics.
This series of articles will be most helpful to those who work in an organization with one
or more of the following characteristics.
To fully benefit from the information provided in these articles, you should have an
understanding of Power BI foundational concepts and Fabric foundational concepts.
Related content
In the next article in this series, learn about the Fabric adoption maturity levels. The
maturity levels are referenced throughout the entire series of articles. Also, see the
conclusion article for additional adoption-related resources.
Experienced partners are available to help your organization succeed with adoption
initiatives. To engage with a partner, visit the Power BI partner portal .
Acknowledgments
The Microsoft Fabric adoption roadmap articles are written by Melissa Coates , Kurt
Buhler , and Peter Myers . Matthew Roche , from the Fabric Customer Advisory
Team, provides strategic guidance and feedback to the subject matter experts.
Reviewers include Cory Moore , James Ward, Timothy Bindas , Greg Moir , and
Chuy Varela .
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
| Type | Description |
| --- | --- |
| User adoption | User adoption is the extent to which consumers and creators continually increase their knowledge. It's concerned with whether they're actively using analytics tools, and whether they're using them in the most effective way. |
| Solution adoption | Solution adoption refers to the impact and business value achieved for individual requirements and analytical solutions. |
As the four arrows in the previous diagram indicate, the three types of adoption are all
strongly inter-related:
The remainder of this article introduces the three types of adoption in more detail.
It's helpful to think about organizational adoption from the perspective of a maturity
model. For consistency with the Power CAT adoption maturity model and the maturity
model for Microsoft 365, this Microsoft Fabric adoption roadmap aligns with the five
levels from the Capability Maturity Model , which were later enhanced by the Data
Management Maturity (DMM) model from ISACA (note that the DMM was a paid
resource that has since been retired).
Every organization has limited time, funding, and people, so it must be selective about
where it prioritizes its efforts. To get the most from your investment
in analytics, seek to attain at least maturity level 300 or 400, as discussed below. It's
common that different business units in the organization evolve and mature at different
rates, so be conscious of the organizational state as well as progress for key business
units.
Pockets of success and experimentation with Fabric exist in one or more areas of
the organization.
Achieving quick wins has been a priority, and solutions have been delivered with
some success.
Organic growth has led to the lack of a coordinated strategy or governance
approach.
Practices are undocumented, with significant reliance on tribal knowledge.
There are few formal processes in place for effective data management.
Risk exists due to a lack of awareness of how data is used throughout the
organization.
The potential for a strategic investment with analytics is acknowledged. However,
there's no clear path forward for purposeful, organization-wide execution.
7 Note
The characteristics above are generalized. When considering maturity levels and
designing a plan, you'll want to consider each topic or goal independently. In
reality, it's probably not possible to reach maturity level 500 for every aspect
of Fabric adoption for the entire organization. So, assess maturity levels
independently per goal. That way, you can prioritize your efforts where they will
deliver the most value. The remainder of the articles in this Fabric adoption series
present maturity levels on a per-topic basis.
Individuals—and the organization itself—continually learn, change, and improve. That
means there's no formal end to adoption-related efforts. However, it's common that
effort is reduced as higher maturity levels are reached.
The remainder of this article introduces the second and third types of adoption: user
adoption and solution adoption.
User adoption encompasses how consumers view content, as well as how self-service
creators generate content for others to consume.
User adoption occurs on an individual user basis, but it's measured and analyzed in the
aggregate. Individual users progress through the four stages of user adoption at their
own pace. An individual who adopts a new technology will take some time to achieve
proficiency. Some users will be eager; others will be reluctant to learn yet another tool,
regardless of the promised productivity improvements. Advancing through the user
adoption stages involves time and effort, and it involves behavioral changes to become
aligned with organizational adoption objectives. The extent to which the organization
supports users advancing through the user adoption stages has a direct correlation to
the organizational-level adoption maturity.
An individual has heard of, or been initially exposed to, analytics in some way.
An individual might have access to a tool, such as Fabric, but isn't yet actively using
it.
It's easy to underestimate the effort it takes to progress from stage 2 (understanding) to
stage 4 (proficiency). Typically, it takes the longest time to progress from stage 3
(momentum) to stage 4 (proficiency).
) Important
By the time a user reaches the momentum and proficiency stages, the organization
needs to be ready to support them in their efforts. You can consider some proactive
efforts to encourage users to progress through stages. For more information, see
the community of practice and the user support articles.
Solution adoption phases
Solution adoption is concerned with measuring the impact of content that's been
deployed. It's also concerned with the level of value solutions provide. The scope for
evaluating solution adoption is for one set of requirements, like a set of reports, a
lakehouse, or a single Power BI app.
Tip
Exploration and experimentation are the main approaches to testing out new
ideas. Exploration of new ideas can occur through informal self-service efforts, or
through a formal proof of concept (POC), which is purposely narrow in scope. The
goal is to confirm requirements, validate assumptions, address unknowns, and
mitigate risks.
A small group of users test the proof of concept solution and provide useful
feedback.
For simplicity, all exploration—and initial feedback—could occur within local user
tools (such as Power BI Desktop or Excel) or within a single Fabric workspace.
Target users find the solution to be valuable and experience tangible benefits.
The solution is promoted to a production workspace that's managed, secured, and
audited.
Validations and testing occur to ensure data quality, accurate presentation,
accessibility, and acceptable performance.
Content is endorsed, when appropriate.
Usage metrics for the solution are actively monitored.
User feedback loops are in place to facilitate suggestions and improvements that
can contribute to future releases.
Solution documentation is generated to support the needs of information
consumers (such as data sources used or how metrics are calculated). The
documentation helps future content creators (for example, for documenting any
future maintenance or planned enhancements).
Ownership and subject matter experts for the content are clear.
Report branding and theming are in place and in line with governance guidelines.
Target users actively and routinely use the solution, and it's considered essential
for decision-making purposes.
The solution resides in a production workspace well separated from development
and test content. Change management and release management are carefully
controlled due to the impact of changes.
A subset of users regularly provides feedback to ensure the solution continues to
meet evolving requirements.
Expectations for the success of the solution are clear and are measured.
Expectations for support of the solution are clear, especially if there are service
level agreements.
The solution aligns with organizational governance guidelines and practices.
Most content is certified due to its critical nature.
Formal user acceptance testing for new changes might occur, particularly for IT-
managed content.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
organizational data culture and its impact on adoption efforts.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Building a data culture is closely related to adopting analytics, and it's often a key aspect
of an organization's digital transformation. The term data culture can be defined in
different ways by different organizations. In this series of articles, data culture means a
set of behaviors and norms in an organization. It encourages a culture that regularly
employs informed data decision-making:
) Important
Think of data culture as what you do, not what you say. Your data culture is not a
set of rules (that's governance). So, data culture is a somewhat abstract concept. It's
the behaviors and norms that are allowed, rewarded, and encouraged—or those
that are disallowed and discouraged. Bear in mind that a healthy data culture
motivates employees at all levels of the organization to generate and distribute
actionable knowledge.
Within an organization, certain business units or teams are likely to have their own
behaviors and norms for getting things done. The specific ways to achieve data culture
objectives can vary across organizational boundaries. What's important is that they
should all align with the organizational data culture objectives. You can think of this
structure as aligned autonomy.
The following circular diagram conveys the interrelated aspects that influence your data
culture:
The diagram depicts the somewhat ambiguous relationships among the following items:
Data culture is the outer circle. All topics within it contribute to the state of the
data culture.
Organizational adoption (including the implementation aspects of mentoring and
user enablement, user support, community of practice, governance, and system
oversight) is the inner circle. All topics are major contributors to the data culture.
Executive support and the Center of Excellence are drivers for the success of
organizational adoption.
Data literacy, data democratization, and data discovery are data culture aspects
that are heavily influenced by organizational adoption.
Content ownership and management, and content delivery scope, are closely
related to data democratization.
The elements of the diagram are discussed throughout this series of articles.
Data culture vision
The concept of data culture can be difficult to define and measure. Even though it's
challenging to articulate data culture in a way that's meaningful, actionable, and
measurable, you need to have a well-understood definition of what a healthy data
culture means to your organization. This vision of a healthy data culture should:
Data culture outcomes aren't specifically mandated. Rather, the state of the data culture
is the result of following the governance rules as they're enforced (or the lack of
governance rules). Leaders at all levels need to actively demonstrate through their
actions what's important to them, including how they praise, recognize, and reward staff
members who take initiative.
Tip
If you can take for granted that your efforts to develop a data solution (such as a
semantic model, a lakehouse, or a report) will be valued and appreciated, that's an
excellent indicator of a healthy data culture. Sometimes, however, it depends on
what your immediate manager values most.
The initial motivation for establishing a data culture often comes from a specific
strategic business problem or initiative. It might be:
In each of these situations, there's often a specific area where the data culture takes
root. The specific area could be a scope of effort that's smaller than the entire
organization, even if it's still significant. After necessary changes are made at this smaller
scope, they can be incrementally replicated and adapted for the rest of the organization.
Although technology can help advance the goals of a data culture, implementing
specific tools or features isn't the objective. This series of articles covers a lot of topics
that contribute to adoption of a healthy data culture. The remainder of this article
addresses three essential aspects of data culture: data discovery, data democratization,
and data literacy.
Data discovery
A successful data culture depends on users working with the right data in their day-to-
day activities. To achieve this goal, users need to find and access data sources, reports,
and other items.
Data discovery is the ability to effectively locate relevant data assets across the
organization. Primarily, data discovery is concerned with improving awareness that data
exists, which can be particularly challenging when data is siloed in departmental
systems.
Data discovery allows users to see metadata for an item, like the name of a
semantic model, even if they don't currently have access to it. After a user is aware
of its existence, that user can go through the standard process to request access to
the item.
Search allows users to locate an existing item when they already have security
access to the item.
Tip
It's important to have a clear and simple process so users can request access to
data. Knowing that data exists—but being unable to access it within the guidelines
and processes that the domain owner has established—can be a source of
frustration for users. It can force them to use inefficient workarounds instead of
requesting access through the proper channels.
The OneLake catalog and the use of endorsements are key ways to promote data
discovery in your organization.
Furthermore, data catalog solutions are extremely valuable tools for data discovery.
They can record metadata tags and descriptions to provide deeper context and
meaning. For example, Microsoft Purview can scan and catalog items from a Fabric
tenant (as well as many other sources).
Is there a data catalog where business users can search for data?
Is there a metadata catalog that describes definitions and data locations?
Are high-quality data sources endorsed by certifying or promoting them?
To what extent do redundant data sources exist because people can't find the data
they need? What roles are expected to create data items? What roles are expected
to create reports or perform ad hoc analysis?
Can end users find and use existing reports, or do they insist on data exports to
create their own?
Do end users know which reports to use to address specific business questions or
find specific data?
Are people using the appropriate data sources and tools, or resisting them in favor
of legacy ones?
Do analysts understand how to enrich existing certified semantic models with new
data—for example, by using a Power BI composite model?
How consistent are data items in their quality, completeness, and naming
conventions?
Can data item owners follow data lineage to perform impact analysis of data
items?
Maturity levels of data discovery
The following maturity levels can help you assess your current state of data discovery.
| Level | State of data discovery |
| --- | --- |
| 100: Initial | • Data is fragmented and disorganized, with no clear structures or processes to find it. • Users struggle to find and use data they need for their tasks. |
| 200: Repeatable | • Scattered or organic efforts to organize and document data are underway, but only in certain teams or departments. |
| 300: Defined | • A central repository, like the OneLake catalog, is used to make data easier to find for people who need it. |
| 400: Capable | • Structured, consistent processes guide users how to endorse, document, and find data from a central hub. Data silos are the exception instead of the rule. |
| 500: Efficient | • Data and metadata is systematically organized and documented with a full view of the data lineage. • Cataloging tools, like Microsoft Purview, are used to make data discoverable for both use and governance. |
Data democratization
Data democratization refers to putting data into the hands of more users who are
responsible for solving business problems. It's about enabling more users to make
better data-driven decisions.
The following maturity levels can help you assess your current state of data
democratization.
| Level | State of data democratization |
| --- | --- |
| 100: Initial | • Data and analytics are limited to a small number of roles, who gatekeep access to others. • Business users must request access to data or tools to complete tasks. They struggle with delays or bottlenecks. • Self-service initiatives are taking place with some success in various areas of the organization. These activities are occurring in a somewhat chaotic manner, with few formal processes and no strategic plan. There's a lack of oversight and visibility into these self-service activities. The success or failure of each solution isn't well understood. • The enterprise data team can't keep up with the needs of the business. A significant backlog of requests exists for this team. |
| 200: Repeatable | • There are limited efforts underway to expand access to data and tools. • Multiple teams have had measurable success with self-service solutions. People in the organization are starting to pay attention. • Investments are being made to identify the ideal balance of enterprise and self-service solutions. |
| 300: Defined | • Many people have access to the data and tools they need, although not all users are equally enabled or held accountable for the content they create. |
| 400: Capable | • Healthy partnerships exist among enterprise and self-service solution creators. Clear, realistic user accountability and policies mitigate risk of self-service analytics and BI. • Clear and consistent processes are in place for users to request access to data and tools. • Individuals who take initiative in building valuable solutions are recognized and rewarded. |
| 500: Efficient | • User accountability and effective governance give central teams confidence in what users do with data. |
Data literacy
Data literacy refers to the ability to interpret, create, and communicate with data and
analytics accurately and effectively.
Training efforts, as described in the mentoring and user enablement article, often focus
on how to use the technology itself. Technology skills are important to producing high-
quality solutions, but it's also important to consider how to purposely advance data
literacy throughout the organization. Put another way, successful adoption takes a lot
more than merely providing software and licenses to users.
How you go about improving data literacy in your organization depends on many
factors, such as current user skillsets, complexity of the data, and the types of analytics
that are required. You might choose to focus on these types of activities related to data
literacy:
Tip
Getting the right stakeholders to agree on the problem is usually the first step.
Then, it's a matter of getting the stakeholders to agree on the strategic approach to
a solution, along with the solution details.
Does a common analytical vocabulary exist in the organization to talk about data
and BI solutions? Alternatively, are definitions fragmented and different across
silos?
How comfortable are people with making decisions based on data and evidence
compared to intuition and subjective experience?
When people who hold an opinion are confronted with conflicting evidence, how
do they react? Do they critically appraise the data, or do they dismiss it? Can they
alter their opinion, or do they become entrenched and resistant?
Do training programs exist to support people in learning about data and analytical
tools?
Is there significant resistance to visual analytics and interactive reporting in favor of
static spreadsheets?
Are people open to new analytical methods and tools to potentially address their
business questions more effectively? Alternatively, do they prefer to keep using
existing methods and tools to save time and energy?
Are there methods or programs to assess or improve data literacy in the
organization? Does leadership have an accurate understanding of the data literacy
levels?
Are there roles, teams, or departments where data literacy is particularly strong or
weak?
The following maturity levels can help you assess your current state of data literacy.
| Level | State of data literacy |
| --- | --- |
| 100: Initial | • Decisions are frequently made based on intuition and subjective experience. When confronted with data that challenges existing opinions, data is often dismissed. • Report consumers have a strong preference for static tables. These consumers dismiss interactive visualizations or sophisticated analytical methods as "fancy" or unnecessary. |
| 200: Repeatable | • Some teams and individuals inconsistently incorporate data into their decision making. There are clear cases where misinterpretation of data has led to flawed decisions or wrong conclusions. |
| 300: Defined | • The majority of teams and individuals understand data relevant to their business area and use it implicitly to inform decisions. • Visualizations and advanced analytics are more widely accepted, though not always used effectively. |
| 400: Capable | • Data literacy is recognized explicitly as a necessary skill in the organization. Some training programs address data literacy. Specific efforts are taken to help departments, teams, or individuals that have particularly weak data literacy. • Most individuals can effectively use and apply data to make objectively better decisions and take actions. • Visual and analytical best practices are documented and followed in strategically important data solutions. |
| 500: Efficient | • Data literacy, critical thinking, and continuous learning are strategic skills and values in the organization. Effective programs monitor progress to improve data literacy in the organization. • Visual and analytical best practices are seen as essential to generate business value with data. |
Checklist - Here are some considerations and key actions that you can take to
strengthen your data culture.
" Align your data culture goals and strategy: Give serious consideration to the type
of data culture that you want to cultivate. Ideally, it's more from a position of user
empowerment than a position of command and control.
" Understand your current state: Talk to stakeholders in different business units to
understand which analytics practices are currently working well and which practices
aren't working well for data-driven decision-making. Conduct a series of workshops
to understand the current state and to formulate the desired future state.
" Speak with stakeholders: Talk to stakeholders in IT, BI, and the COE to understand
which governance constraints need consideration. These conversations can present
an opportunity to educate teams on topics like security and infrastructure. You can
also use the opportunity to educate stakeholders on the features and capabilities
included in Fabric.
" Verify executive sponsorship: Verify the level of executive sponsorship and support
that you have in place to advance data culture goals.
" Make purposeful decisions about your data strategy: Decide what the ideal
balance of business-led self-service, managed self-service, and enterprise data,
analytics and BI use cases should be for the key business units in the organization
(covered in the content ownership and management article). Also consider how the
data strategy relates to the extent of published content for personal, team,
departmental, and enterprise analytics and BI (described in the content delivery
scope article). Define your high-level goals and priorities for this strategic planning.
Determine how these decisions affect your tactical planning.
" Create a tactical plan: Begin creating a tactical plan for immediate, short-term, and
long-term action items. Identify business groups and problems that represent
"quick wins" and can make a visible difference.
" Create goals and metrics: Determine how you'll measure effectiveness for your
data culture initiatives. Create key performance indicators (KPIs) or objectives and
key results (OKRs) to validate the results of your efforts.
The following maturity levels will help you assess the current state of your data culture.
| Level | State of data culture |
| --- | --- |
| 100: Initial | • Enterprise data teams can't keep up with the needs of the business. A significant backlog of requests exists. • Self-service data and BI initiatives are taking place with some success in various areas of the organization. These activities occur in a somewhat chaotic manner, with few formal processes and no strategic plan. |
| 200: Repeatable | • Multiple teams have had measurable successes with self-service solutions. People in the organization are starting to pay attention. • Investments are being made to identify the ideal balance of enterprise and self-service data, analytics, and BI. |
| 300: Defined | • Specific goals are established for advancing the data culture. These goals are implemented incrementally. |
| 400: Capable | • The data culture goals to employ informed decision-making are aligned with organizational objectives. They're actively supported by the executive sponsor and the COE, and they have a direct impact on adoption strategies. • A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared goals. • Individuals who take initiative in building valuable data solutions are recognized and rewarded. |
| 500: Efficient | • The business value of data, analytics, and BI solutions is regularly evaluated and measured. KPIs or OKRs are used to track data culture goals and the results of these efforts. • Feedback loops are in place, and they encourage ongoing data culture improvements. |
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of an executive sponsor.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
When planning to advance the data culture and the state of organizational adoption for
data and analytics, it's crucial to have executive support. An executive sponsor is
imperative because analytics adoption is far more than just a technology project.
Formulating a strategic vision, goals, and priorities for data, analytics, and business
intelligence (BI).
Providing top-down guidance and reinforcement for the data strategy by regularly
promoting, motivating, and investing in strategic and tactical planning.
Leading by example by actively using data and analytics in a way that's consistent
with data culture and adoption goals.
Allocating staffing and prioritizing resources.
Approving funding (for example, Fabric licenses).
Removing barriers to enable action.
Communicating announcements that are of critical importance, to help them gain
traction.
Decision-making, particularly for strategic-level governance decisions.
Dispute resolution (for escalated issues that can't be resolved by operational or
tactical personnel).
Supporting organizational change initiatives (for example, creating or expanding
the Center of Excellence).
) Important
The ideal executive sponsor has sufficient credibility, influence, and authority
throughout the organization. They also have an invested stake in data efforts and
the data strategy. When the BI strategy is successful, the ideal executive sponsor
also experiences success in their role.
Top-down pattern
An executive sponsor might be selected by a more senior executive. For example, the
Chief Executive Officer (CEO) could hire a Chief Data Officer (CDO) or Chief Analytics
Officer (CAO) to explicitly advance the organization's data culture objectives or lead
digital transformation efforts. The CDO or CAO then becomes the ideal candidate to
serve as the executive sponsor for Fabric (or for data and analytics in general).
Here's another example: The CEO might empower an existing executive, such as the
Chief Financial Officer (CFO), because they have a good track record leading data and
analytics in their organization. As the new executive sponsor, the CFO could then lead
efforts to replicate the finance team's success to other areas of the organization.
Bottom-up pattern
Alternatively, a candidate for the executive sponsor role could emerge due to the
success they've experienced with creating data solutions. For example, a business unit
within the organization, such as Finance, has organically achieved great success with
their use of data and analytics. Essentially, they've successfully formed their own data
culture on a smaller scale. A junior-level leader who hasn't reached the executive level
(such as a director) might then grow into the executive sponsor role by sharing
successes with other business units across the organization.
With a bottom-up approach, the sponsor might be able to make some progress, but
they won't have formal authority over other business units. Without clear authority, it's
only a matter of time until challenges occur that are beyond their level of authority. For
this reason, the top-down approach has a higher probability of success. However, initial
successes with a bottom-up approach can convince leadership to increase their level of
sponsorship, which might start a healthy competition across other business units in the
adoption of data and BI.
Checklist - Here's a list of considerations and key actions you can take to establish or
strengthen executive support for analytics.
Questions to ask
Use questions like those found below to assess executive support.
Maturity levels
The following maturity levels will help you assess your current state of executive
support.
100: Initial
• There might be awareness from at least one executive about the strategic importance of how analytics can advance the organization's data culture goals. However, neither a sponsor nor an executive-level decision-maker is identified.
200: Repeatable
• Informal executive support exists for analytics through informal channels and relationships.
300: Defined
• An executive sponsor is identified. Expectations are clear for the role.
400: Capable
• An executive sponsor is well established with someone with sufficient authority across organizational boundaries.
• A healthy and productive partnership exists between the executive sponsor, COE, business units, and IT. The teams are working towards shared data culture goals.
500: Efficient
• The executive sponsor is highly engaged. They're a key driver for advancing the organization's data culture vision.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of business alignment with organizational goals.
Business alignment
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Business intelligence (BI) activities and solutions have the best potential to deliver value
when they're well aligned to organizational business goals. In general, effective business
alignment helps to improve adoption. With effective business alignment, the data
culture and data strategy enable business users to achieve their business objectives.
Effective business alignment of data activities and solutions results in:
Improved adoption, because content consumers are more likely to use solutions
that enable them to achieve their objectives.
Increased business return on investment (ROI) for analytics initiatives and
solutions, because these initiatives and solutions will be more likely to directly
advance progress toward business goals.
Less effort and fewer resources spent on change management and changing
business requirements, due to an improved understanding of business data needs.
Communication alignment
Effective and consistent communication is critical to aligning processes. Consider the
following actions and activities when you want to improve communication for successful
business alignment.
Make a communication plan for central teams and the user community to follow.
Plan regular alignment meetings between different teams and groups. For
example, central teams can hold regular planning and priority alignment sessions with
business units. Another example is when central teams schedule regular meetings
to mentor and enable self-service users.
Set up a centralized portal to consolidate communication and documentation for
user communities. For strategic solutions and initiatives, consider using a
communication hub.
Limit complex business and technical terminology in cross-functional
communications.
Strive for concise communication and documentation that's formatted and well
organized. That way, people can easily find the information that they need.
Consider maintaining a visible roadmap that shows the planned solutions and
activities relevant to the user community in the next quarter.
Be transparent when communicating policies, decisions, and changes.
Create a process for people to provide feedback, and review that feedback
regularly as part of regular planning activities.
Important
To achieve effective business alignment, you should make it a priority to identify
and dismantle any communication barriers between business teams and technical
teams.
Strategic alignment
Your business strategy should be well aligned with your data and BI strategy. To
incrementally achieve this alignment, we recommend that you commit to following
structured, iterative planning processes.
Strategic planning: Define data, analytics, and BI goals and priorities based on the
business strategy and current state of adoption and implementation. Typically,
strategic planning occurs every 12-18 months to iteratively define high-level
desired outcomes. You should synchronize strategic planning with key business
planning processes.
Tactical planning: Define objectives, action plans, and a backlog of solutions that
help you to achieve your data and BI goals. Typically, tactical planning occurs
quarterly to iteratively re-evaluate and align the data strategy and activities to the
business strategy. This alignment is informed by business feedback and changes to
business objectives or technology. You should synchronize tactical planning with
key project planning processes.
Solution planning: Design, develop, test, and deploy solutions that support
content creators and consumers in achieving their business objectives. Both
centralized content creators and self-service content creators conduct solution
planning to ensure that the solutions they create are well aligned with business
objectives. You should synchronize solution planning with key adoption and
governance planning processes.
Caution
A governance strategy that's poorly aligned with business objectives can result in
more conflicts and compliance risk, because users will often pursue workarounds to
complete their tasks.
Executive alignment
Executive leadership plays a key role in defining the business strategy and business
goals. To this end, executive engagement is an important part of achieving top-down
business alignment.
To achieve executive alignment, consider the following activities.
Work with your executive sponsor to organize short, quarterly executive feedback
sessions about the use of data in the organization. Use this feedback to identify
changes in business objectives, re-assess the data strategy, and inform future
actions to improve business alignment.
Schedule regular alignment meetings with the executive sponsor to promptly
identify any potential changes in the business strategy or data needs.
Deliver monthly executive summaries that highlight relevant information,
including:
Key performance indicators (KPIs) that measure progress toward data, analytics,
and BI goals.
Fabric adoption and implementation milestones.
Technology changes that might impact organizational business goals.
Important
Don't underestimate the importance of the role your executive sponsor has in
achieving and maintaining effective business alignment.
To maintain business alignment over time, consider the following actions.
Assign a responsible team: A working team reviews feedback and organizes re-
alignment sessions. This team is responsible for the alignment of planning and
priorities between the business and data strategy.
Create and support a feedback process: Your user community requires the means
to provide feedback. Examples of feedback can include requests to change existing
solutions, or to create new solutions and initiatives. This feedback is essential for
bottom-up business user alignment, and it drives iterative and continuous
improvement cycles.
Measure the success of business alignment: Consider using surveys, sentiment
analysis, and usage metrics to assess the success of business alignment. When
combined with other concise feedback mechanisms, this can provide valuable
input to help define future actions and activities to improve business alignment
and Fabric adoption.
Schedule regular re-alignment sessions: Ensure that data strategic planning and
tactical planning occur alongside relevant business strategy planning (when
business leadership review business goals and objectives).
Questions to ask
Use questions like those found below to assess business alignment.
Can people articulate the goals of the organization and the business objectives of
their team?
To what extent do descriptions of organizational goals align across the
organization? How do they align between the business user community and
leadership community? How do they align between business teams and technical
teams?
Does executive leadership understand the strategic importance of data in
achieving business objectives? Does the user community understand the strategic
importance of data in helping them succeed in their jobs?
Are changes in the business strategy reflected promptly in changes to the data
strategy?
Are changes in business user data needs addressed promptly in data and BI
solutions?
To what extent do data policies support or conflict with existing business processes
and the way that users work?
Do solution requirements focus more on technical features than addressing
business questions? Is there a structured requirements gathering process? Do
content owners and creators interact effectively with stakeholders and content
consumers during requirements gathering?
How are decisions about data or BI investments made? Who makes these
decisions?
How well do people trust existing data and BI solutions? Is there a single version of
truth, or are there regular debates about who has the correct version?
How are data and BI initiatives and strategy communicated across the
organization?
Maturity levels
A business alignment assessment evaluates integration between the business strategy
and data strategy. Specifically, this assessment focuses on whether or not data and BI
initiatives and solutions support business users to achieve business strategic objectives.
The following maturity levels will help you assess your current state of business
alignment.
100: Initial
• Business and data strategies lack formal alignment, which leads to reactive implementation and misalignment between data teams and business users.
200: Repeatable
• There are efforts to align data and BI initiatives with specific data needs without a consistent approach or understanding of their success.
300: Defined
• Data and BI initiatives are prioritized based on their alignment with strategic business objectives. However, alignment is siloed and typically focuses on local needs.
• Strategic initiatives and changes have a clear, structured involvement of both the business and data strategic decision makers. Business teams and technical teams can have productive discussions to meet business and governance needs.
400: Capable
• There's a consistent, organization-wide view of how data initiatives and solutions support business objectives.
• Regular and iterative strategic alignments occur between the business and technical teams. Changes to the business strategy result in clear actions that are reflected by changes to the data strategy to better support business needs.
500: Efficient
• The data strategy and the business strategy are fully integrated. Continuous improvement processes drive consistent alignment, and they are themselves data driven.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn more about
content ownership and management, and its effect on business-led self-service BI,
managed self-service BI, and enterprise BI.
Content ownership and management
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
There are three primary strategies for how data, analytics, and business intelligence (BI)
content is owned and managed: business-led self-service, managed self-service, and
enterprise. For the purposes of this series of articles, the term content refers to any type
of data item (like a notebook, semantic model, report, or dashboard).
The organization's data culture is the driver for why, how, and by whom each of these
three content ownership strategies is implemented.
The three content ownership and management strategies are:
Business-led self-service: All content is owned and managed by the creators and subject
matter experts within a business unit. This ownership strategy is also known as a
decentralized or bottom-up strategy.
Managed self-service: The data is owned and managed by a centralized team, whereas
business users take responsibility for reports and dashboards. This ownership strategy is
also known as discipline at the core and flexibility at the edge.
Enterprise: All content is owned and managed by a centralized team such as IT, enterprise
BI, or the Center of Excellence (COE).
It's unlikely that an organization operates exclusively with one content ownership and
management strategy. Depending on your data culture, one strategy might be far more
dominant than the others. The choice of strategy could differ from solution to solution,
or from team to team. In fact, a single team can actively use multiple strategies if it's
both a consumer of enterprise content and a producer of its own self-service content.
The strategy to pursue depends on several factors.
How content is owned and managed has a significant effect on governance, the extent
of mentoring and user enablement, needs for user support, and the COE operating
model.
As discussed in the governance article, the level of governance and oversight depends
on:
Who owns and manages the content.
The scope of content delivery.
The data subject area and sensitivity level.
The importance of the data, and whether it's used for critical decision making.
As stated in the adoption maturity levels article, organizational adoption measures the
state of data management processes and governance. The choices made for content
ownership and management significantly affect how organizational adoption is
achieved.
Data steward: Responsible for defining and/or managing acceptable data quality levels as well as master data management (MDM).
Subject matter expert (SME): Responsible for defining what the data means, what it's used for, who might access it, and how the data is presented to others. Collaborates with the domain owner as needed and supports colleagues in their use of data.
Technical owner: Responsible for creating, maintaining, publishing, and securing access to data and reporting items.
Note
Be clear about who is responsible for managing data items. It's crucial to ensure a
good experience for content consumers. Specifically, clarity on ownership helps
people know who to contact with questions, feedback, enhancement requests, or
support requests.
In the Fabric portal, content owners can set the contact list property for many
types of items. The contact list is also used in security workflows. For example,
when a user is sent a URL to open a Power BI app but they don't have permission,
they will be presented with an option to make a request for access.
The remainder of this article covers considerations related to the three content
ownership and management strategies.
Business-led self-service
With a business-led self-service approach to data and BI, all content is owned and
managed by creators and subject matter experts. Because responsibility is retained
within a business unit, this strategy is often described as the bottom-up, or decentralized,
approach. Business-led self-service is often a good strategy for personal BI and team BI
solutions.
Important
The concept of business-led self-service isn't the same as shadow IT. In both
scenarios, data and BI content is created, owned, and managed by business users.
However, shadow IT implies that the business unit is circumventing IT and so the
solution is not sanctioned. With business-led self-service BI solutions, the business
unit has full authority to create and manage content. Resources and support from
the COE are available to self-service content creators. It's also expected that the
business unit will comply with all established data governance guidelines and
policies.
Business-led self-service is typically a good fit when:
Decentralized data management aligns with the organization's data culture, and
the organization is prepared to support these efforts.
Data exploration and freedom to innovate is a high priority.
The business unit wants to have the most involvement and retain the highest level
of control.
The business unit has skilled users capable of—and fully committed to—
supporting solutions through the entire lifecycle. It covers all types of items,
including the data (such as a lakehouse, data warehouse, data pipeline, dataflow,
or semantic model), the visuals (such as reports and dashboards), and Power BI
apps.
The flexibility to respond to changing business conditions and react quickly
outweighs the need for stricter governance and oversight.
Here are some guidelines to help become successful with business-led self-service data
and BI.
Teach your creators to use the same techniques that IT would use, like shared
semantic models and dataflows. Make use of a well-organized OneLake. Centralize
data to reduce maintenance, improve consistency, and reduce risk.
Focus on providing mentoring, training, resources, and documentation (described
in the Mentoring and user enablement article). The importance of these efforts
can't be overstated. Be prepared for skill levels of self-service content creators to
vary significantly. It's also common for a solution to deliver excellent business value
yet be built in such a way that it won't scale or perform well over time (as historic
data volumes increase). Having the COE available to help when these situations
arise is very valuable.
Provide guidance on the best way to use endorsements. The promoted
endorsement is for content produced by self-service creators. Consider reserving
use of the certified endorsement for enterprise BI content and managed self-
service BI content (described next).
Analyze the activity log to discover situations where the COE could proactively
contact self-service owners to offer helpful information. It's especially useful when
a suboptimal usage pattern is detected. For example, log activity could reveal
overuse of individual item sharing when Power BI app audiences or workspace
roles might be a better choice. The data from the activity log allows the COE to
offer support and advice to the business units. In turn, this information can help
increase the quality of solutions, while allowing the business to retain full
ownership and control of their content. For more information, see Auditing and
monitoring.
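The activity log analysis described above can be automated with the Power BI admin REST API. The following Python sketch is illustrative only and isn't part of the roadmap guidance: it pages through one UTC day of activity events and counts per-item report sharing by user, which a COE could use to identify candidates for coaching on apps or workspace roles. The get_access_token helper is a hypothetical placeholder, and the event field and activity names should be verified against the data in your tenant.

```python
import requests
from collections import Counter

ACTIVITY_API = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"

def get_access_token() -> str:
    # Hypothetical placeholder: acquire a token with the Power BI admin API scope,
    # for example with MSAL and a service principal. Not implemented here.
    raise NotImplementedError

def iter_activity_events(day: str, token: str):
    """Yield all activity events for one UTC day, following continuation links."""
    headers = {"Authorization": f"Bearer {token}"}
    params = {
        "startDateTime": f"'{day}T00:00:00'",
        "endDateTime": f"'{day}T23:59:59'",
    }
    url, first_page = ACTIVITY_API, True
    while url:
        response = requests.get(url, headers=headers, params=params if first_page else None)
        response.raise_for_status()
        payload = response.json()
        yield from payload.get("activityEventEntities", [])
        url, first_page = payload.get("continuationUri"), False

def count_item_sharing_by_user(day: str, token: str) -> Counter:
    """Count per-item report sharing events by user for the given day."""
    counts = Counter()
    for event in iter_activity_events(day, token):
        # "ShareReport" indicates per-item sharing; verify activity names in your tenant.
        if event.get("Activity") == "ShareReport":
            counts[event.get("UserId", "unknown")] += 1
    return counts

# Example usage:
# token = get_access_token()
# print(count_item_sharing_by_user("2025-01-15", token).most_common(10))
```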
Managed self-service
Managed self-service BI is a blended approach to data and BI. The data is owned and
managed by a centralized team (such as IT, enterprise BI, or the COE), while
responsibility for reports and dashboards belongs to creators and subject matter experts
within the business units. Managed self-service BI is frequently a good strategy for team
BI and departmental BI solutions.
This approach is often called discipline at the core and flexibility at the edge. It's
because the data architecture is maintained by a single team with an appropriate level
of discipline and rigor. Business units have the flexibility to create reports and
dashboards based on centralized data. This approach allows report creators to be far
more efficient because they can remain focused on delivering value from their data
analysis and visuals.
Here are some guidelines to help you become successful with managed self-service BI.
Teach users to separate model and report development. They can use live
connections to create reports based on existing semantic models. When the
semantic model is decoupled from the report, it promotes data reuse by many
reports and many authors. It also facilitates the separation of duties.
Use dataflows to centralize data preparation logic and to share commonly used
data tables—like date, customer, product, or sales—with many semantic model
creators. Refine the dataflow as much as possible, using friendly column names
and correct data types to reduce the downstream effort required by semantic
model authors, who consume the dataflow as a source. Dataflows are an effective
way to reduce the time involved with data preparation and improve data
consistency across semantic models. The use of dataflows also reduces the number
of data refreshes on source systems and reduces the number of users who require
direct access to source systems.
When self-service creators need to augment an existing semantic model with
departmental data, educate them to create composite models. This feature allows
for an ideal balance of self-service enablement while taking advantage of the
investment in data assets that are centrally managed.
Use the certified endorsement for semantic models and dataflows to help content
creators identify trustworthy sources of data.
Include consistent branding on all reports to indicate who produced the content
and who to contact for help. Branding is particularly helpful to distinguish content
that is produced by self-service creators. A small image or text label in the report
footer is valuable when the report is exported from the Fabric portal.
Consider implementing separate workspaces for storing data and reports. This
approach allows for better clarity on who is responsible for content. It also allows
for more restrictive workspace roles assignments. That way, report creators can
only publish content to their reporting workspace; and, read and build semantic
model permissions allow creators to create new reports with row-level security
(RLS) in effect, when applicable. For more information, see Workspace-level
planning. For more information about RLS, see Content creator security planning.
Use the Power BI REST APIs to compile an inventory of Power BI items. Analyze the
ratio of semantic models to reports to evaluate the extent of semantic model
reuse.
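As an illustration of that kind of inventory, the following Python sketch uses the Power BI admin Groups API to list workspaces with their reports and semantic models (datasets) expanded, and then computes a simple reports-per-model ratio. It's a minimal sketch under stated assumptions, not the roadmap's prescribed method; the token value is a hypothetical placeholder, and how you interpret the ratio depends on your organization.

```python
import requests

ADMIN_GROUPS_API = "https://api.powerbi.com/v1.0/myorg/admin/groups"
TOKEN = "<access token with Power BI admin scope>"  # hypothetical placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_workspaces(page_size: int = 100) -> list[dict]:
    """Return all workspaces with their reports and semantic models (datasets) expanded."""
    workspaces, skip = [], 0
    while True:
        params = {"$top": page_size, "$skip": skip, "$expand": "reports,datasets"}
        response = requests.get(ADMIN_GROUPS_API, headers=HEADERS, params=params)
        response.raise_for_status()
        page = response.json().get("value", [])
        workspaces.extend(page)
        if len(page) < page_size:
            return workspaces
        skip += page_size

def reports_per_model(workspaces: list[dict]) -> float:
    """A low value can indicate many single-report models and limited model reuse."""
    models = sum(len(ws.get("datasets", [])) for ws in workspaces)
    reports = sum(len(ws.get("reports", [])) for ws in workspaces)
    return reports / models if models else 0.0

# Example usage:
# print(f"{reports_per_model(list_workspaces()):.2f} reports per semantic model")
```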
Enterprise
Enterprise is a centralized approach to delivering data and BI solutions in which all
solution content is owned and managed by a centralized team. This team is usually IT,
enterprise BI, or the COE.
An enterprise approach is typically a good fit when:
Centralizing content management with a single team aligns with the organization's
data culture.
The organization has data and BI expertise to manage all items end-to-end.
The content needs of consumers are well-defined, and there's little need to
customize or explore data beyond the reporting solution that's delivered.
Content ownership and direct access to data needs to be limited to a small
number of experts and owners.
The data is highly sensitive or subject to regulatory requirements.
Here are some guidelines to help you become successful with enterprise data and BI.
Implement a rigorous process for use of the certified endorsement for content.
Not all enterprise content needs to be certified, but much of it probably should be.
Certified content should indicate that data quality has been validated. Certified
content should also follow change management rules, have formal support, and be
fully documented. Because certified content has passed rigorous standards, the
expectations for trustworthiness are higher.
Include consistent branding on enterprise BI reports to indicate who produced the
content, and who to contact for help. A small image or text label in the report
footer is valuable when the report is exported by a user.
If you use specific report branding to indicate enterprise BI content, be careful with
the save a copy functionality that would allow a user to download a copy of a
report and personalize it. Although this functionality is an excellent way to bridge
enterprise BI with managed self-service BI, it dilutes the value of the branding. A
more seamless solution is to provide a separate Power BI Desktop template file for
self-service authors. The template defines a starting point for report creation with a
live connection to an existing semantic model, and it doesn't include branding. The
template file can be shared as a link within a Power BI app, or from the community
portal.
Ownership transfers
Occasionally, the ownership of a particular solution might need to be transferred to
another team. For example, an ownership transfer from a business unit to a centralized
team can happen when a solution grows beyond what its original creators can support.
The COE should have well-documented procedures for identifying when a solution is a
candidate for ownership transfer. It's very helpful if help desk personnel know what to
look for as well. Having a customary pattern for self-service creators to build and grow a
solution, and hand it off in certain circumstances, is an indicator of a productive and
healthy data culture. A simple ownership transfer could be addressed during COE office
hours; a more complex transfer could warrant a small project managed by the COE.
Note
There's potential that the new owner will need to do some refactoring and data
validations before they're willing to take full ownership. Refactoring is most likely to
occur with the less visible aspects of data preparation, data modeling, and
calculations. If there are any manual steps or flat file sources, now is an ideal time
to apply those enhancements. The branding of reports and dashboards might also
need to change (for example, if there's a footer indicating report contact or a text
label indicating that the content is certified).
It's also possible for a centralized team to transfer ownership to a business unit. It could
happen when:
The team with domain knowledge is better equipped to own and manage the
content going forward.
The centralized team has created the solution for a business unit that doesn't have
the skills to create it from scratch, but it can maintain and extend the solution
going forward.
Tip
Don't forget to recognize and reward the work of the original creator, particularly if
ownership transfers are a common occurrence.
Checklist - Here's a list of considerations and key actions you can take to strengthen
your approach to content ownership and management.
Questions to ask
Use questions like those found below to assess content ownership and management.
Do central teams that are responsible for Fabric have a clear understanding of who
owns what BI content? Is there a distinction between report and data items, or
different item types (like Power BI semantic models, data science notebooks, or
lakehouses)?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization, and how do they
differ between key business units?
What activities do business analytical teams perform (for example, data
integration, data modeling, or reporting)?
What kinds of roles in the organization are expected to create and own content?
Is it limited to central teams and analysts, or does it also include functional roles, like sales?
Where does the organization sit on the spectrum of business-led self-service,
managed self-service, or enterprise? Does it differ between key business units?
Do strategic data and BI solutions have ownership roles and stewardship roles that
are clearly defined? Which are missing?
Are content creators and owners also responsible for supporting and updating
content once it's released? How effective is the ownership of content support and
updates?
Is a clear process in place to transfer ownership of solutions (where necessary)? An
example is when an external consultant creates or updates a solution.
Do data sources have data stewards or subject matter experts (SMEs) who serve as
a special point of contact?
If your organization is already using Fabric or Power BI, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Maturity levels
The following maturity levels will help you assess the current state of your content
ownership and management.
100: Initial
• Self-service content creators own and manage content in an uncontrolled way, without a specific strategy.
• A high ratio of semantic models to reports exists. When many semantic models each support only one report, it indicates opportunities to improve data reusability, improve trustworthiness, reduce maintenance, and reduce the number of duplicate semantic models.
200: Repeatable
• A plan is in place for which content ownership and management strategy to use and in which circumstances.
• Initial steps are taken to improve the consistency and trustworthiness levels for self-service efforts.
300: Defined
• Guidance for the user community is available that includes expectations for self-service versus enterprise content.
• Roles and responsibilities are clear and well understood by everyone involved.
400: Capable
• Criteria are defined to align governance requirements for self-service versus enterprise content.
• There's a plan in place for how to request and handle ownership transfers.
500: Efficient
• Proactive steps to communicate with users occur when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
scope of content delivery.
Content delivery scope
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
The four delivery scopes described in this article are personal, team, departmental,
and enterprise. The scope of a delivered data and business intelligence (BI) solution
refers to the number of people who might view the solution, though its impact goes
well beyond audience size. The scope strongly influences best
practices for not only content distribution, but also content management, security, and
information protection. The scope has a direct correlation to the level of governance
(such as requirements for change management, support, or documentation), the extent
of mentoring and user enablement, and needs for user support. It also influences user
licensing decisions.
The related content ownership and management article makes similar points. Whereas
the focus of that article was on the content creator, the focus of this article is on the
target content usage. Both inter-related aspects need to be considered to arrive at
governance decisions and the Center of Excellence (COE) operating model.
Important
Not all data and solutions are equal. Be prepared to apply different levels of data
management and governance to different teams and various types of content.
Standardized rules are easier to maintain. However, flexibility or customization is
often necessary to apply the appropriate level of oversight for particular
circumstances. Your executive sponsor can prove invaluable by reaching consensus
across stakeholder groups when difficult situations arise.
Personal: Personal solutions are, as the name implies, intended for use by the
creator. Sharing content with others isn't an objective. Therefore, a personal data
and BI solution has the fewest number of target consumers.
Team: Delivers content to a relatively small number of colleagues who collaborate
and work closely together.
Departmental: Delivers content to a large number of consumers, who can belong
to a department or business unit.
Enterprise: Delivers content broadly across organizational boundaries to the
largest number of target consumers. Enterprise content is most often managed by
a centralized team and is subject to additional governance requirements.
The number of content creators has an inverse relationship with the scope of content
delivery: the broader the delivery scope, the fewer the content creators. The four
scopes, viewed from the perspective of content creators, include:
Personal: Represents the largest number of creators because the data culture
encourages any user to work with data using business-led self-service data and BI
methods. Although managed self-service BI methods can be used, it's less
common with personal data and BI efforts.
Team: Colleagues within a team collaborate and share with each other by using
business-led self-service patterns. It has the next largest number of creators in the
organization. Managed self-service patterns could also begin to emerge as skill
levels advance.
Departmental: Involves a smaller population of creators. They're likely to be
considered power users who are using sophisticated tools to create sophisticated
solutions. Managed self-service practices are very common and highly encouraged.
Enterprise: Involves the smallest number of content creators because it typically
includes only professional data and BI developers who work in the BI team, the
COE, or in IT.
The content ownership and management article introduced the concepts of business-
led self-service, managed self-service, and enterprise. Each of these ownership
strategies most commonly aligns with a particular range of delivery scopes.
Some organizations also equate self-service content with community-based support. That's
the case when self-service content creators and owners are responsible for supporting
the content they publish. The user support article describes multiple informal and formal
levels for support.
Note
The term sharing can be interpreted two ways: It's often used in a general way
related to sharing content with colleagues, which could be implemented multiple
ways. It can also refer to a specific feature in Fabric that grants a user or group
access to a single item. In this
article, the term sharing is meant in a general way to describe sharing content with
colleagues. When the per-item permissions are intended, this article will make a
clear reference to that feature. For more information, see Report consumer
security planning.
Personal
The Personal delivery scope is about enabling an individual to gain analytical value. It's
also about allowing them to more efficiently perform business tasks through the
effective personal use of data, information, and analytics. It could apply to any type of
information worker in the organization, not just data analysts and developers.
Sharing content with others isn't the objective. Personal content can reside in Power BI
Desktop or in a personal workspace in the Fabric portal.
Here are the characteristics of creating content for a personal delivery scope.
The creator's primary intention is data exploration and analysis, rather than report
delivery.
The content is intended to be analyzed and consumed by one person: the creator.
The content might be an exploratory proof of concept that may, or may not, evolve
into a project.
Here are a few guidelines to help you become successful with content developed for
personal use.
Consider personal data and BI solutions to be like an analytical sandbox that has
little formal governance and oversight from the governance team or COE.
However, it's still appropriate to educate content creators that some general
governance guidelines could still apply to personal content. Valid questions to ask
include: Can the creator export the personal report and email it to others? Can the
creator store a personal report on a non-organizational laptop or device? What
limitations or requirements exist for content that contains sensitive data?
See the techniques described for business-led self-service, and managed self-
service in the content ownership and management article. They're highly relevant
techniques that help content creators create efficient and personal data and BI
solutions.
Analyze data from the activity log to discover situations where personal solutions
appear to have expanded beyond the original intended usage. It's usually
discovered by detecting a significant amount of content sharing from a personal
workspace.
Tip
For information about how users progress through the stages of user adoption, see
the Microsoft Fabric adoption roadmap maturity levels. For more information
about using the activity log, see Tenant-level auditing.
Team
The Team delivery scope is focused on a team of people who work closely together, and
who are tasked with solving closely related problems using the same data. Collaborating
and sharing content with each other in a workspace is usually the primary objective.
Content is often shared among the team more informally as compared to departmental
or enterprise content. For instance, the workspace is often sufficient for consuming
content within a small team. It doesn't require the formality of publishing the workspace
to distribute it as an app. There isn't a specific number of users above which team-based
delivery becomes too informal; each team can find the right number that works for
them.
Here are the characteristics of creating content for a team delivery scope.
Content is created, managed, and viewed among a group of colleagues who work
closely together.
Collaboration and co-management of content is the highest priority.
Formal delivery of content might occur for report viewers (especially for managers
of the team), but it's usually a secondary priority.
Reports aren't always highly sophisticated or attractive; functionality and accessing
the information is what matters most.
Here are some guidelines to help you become successful with content developed for
team use.
Departmental
Content is delivered to members of a department or business unit. Content distribution
to a larger number of consumers is a priority for departmental delivery scopes.
Here are a few guidelines to help you become successful with departmental BI delivery.
Ensure that the COE is prepared to support the efforts of self-service creators.
Creators who publish content used throughout their department or business unit
might emerge as candidates to become champions. Or, they might become
candidates to join the COE as a satellite member.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. Several workspaces will likely be required to meet all the
needs of a large department or business unit.
Plan how Power BI apps will distribute content to the enterprise. An app can
provide a significantly better user experience for consuming content. In many
cases, content consumers can be granted permissions to view content via the app
only, reserving workspace permissions management for content creators and
reviewers only. The use of app audience groups allows you to mix and match
content and target audience in a flexible way.
Be clear about what data quality validations have occurred. As the importance and
criticality level grows, expectations for trustworthiness grow too.
Ensure that adequate training, mentoring, and documentation is available to
support content creators. Best practices for data preparation, data modeling, and
data presentation will result in better quality solutions.
Provide guidance on the best way to use the promoted endorsement, and when
the certified endorsement could be permitted for departmental solutions.
Ensure that the owner is identified for all departmental content. Clarity on
ownership is helpful, including who to contact with questions, feedback,
enhancement requests, or support requests. In the Fabric portal, content owners
can set the contact list property for many types of items (like reports and
dashboards). The contact list is also used in security workflows. For example, when
a user is sent a URL to open an app but they don't have permission, they'll be
presented with an option to make a request for access.
Consider using deployment pipelines in conjunction with separate workspaces.
Deployment pipelines can support development, test, and production
environments, which provide more stability for consumers.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Include consistent branding on reports by:
Using departmental colors and styling to indicate who produced the content.
For more information, see Content ownership and management.
Adding a small image or text label to the report footer, which is valuable when
the report is exported from the Fabric portal.
Using a standard Power BI Desktop template file. For more information, see
Mentoring and user enablement.
Apply the techniques described for business-led self-service and managed self-
service content delivery in the Content ownership and management article. They're
highly relevant techniques that can help content creators to create efficient and
effective departmental solutions.
Enterprise
Enterprise content is typically managed by a centralized team and is subject to
additional governance requirements. Content is delivered broadly across organizational
boundaries.
Here are the characteristics of creating content for an enterprise delivery scope.
A centralized team of experts manages the content end-to-end and publishes it for
others to consume.
Formal delivery of data solutions like reports, lakehouses, and Power BI apps is a
high priority to ensure consumers have the best experience.
The content is highly sensitive, subject to regulatory requirements, or is considered
extremely critical.
Published enterprise-level semantic models and dataflows might be used as a
source for self-service creators, thus creating a chain of dependencies to the
source data.
Stability and a consistent experience for consumers are highly important.
Application lifecycle management, such as deployment pipelines and DevOps
techniques, is commonly used. Change management processes to review and
approve changes before they're deployed, for example by a change review board
or similar group, are also commonly used for enterprise content.
Processes exist to gather requirements, prioritize efforts, and plan for new projects
or enhancements to existing content.
Integration with other enterprise-level data architecture and management services
could exist, possibly with other Azure services and Power Platform products.
Here are some guidelines to help you become successful with enterprise content
delivery.
Checklist - Considerations and key actions you can take to strengthen your approach to
content delivery.
" Align goals for content delivery: Ensure that guidelines, documentation, and other
resources align with the strategic goals defined for Fabric adoption.
" Clarify the scopes for content delivery in your organization: Determine who each
scope applies to, and how each scope aligns with governance decisions. Ensure that
decisions and guidelines are consistent with how content ownership and
management is handled.
" Consider exceptions: Be prepared for how to handle situations when a smaller
team wants to publish content for an enterprise-wide audience.
Will it require the content be owned and managed by a centralized team? For
more information, see the Content ownership and management article, which
describes an inter-related concept with content delivery scope.
Will there be an approval process? Governance can become more complicated
when the content delivery scope is broader than the owner of the content. For
example, when an app that's owned by a divisional sales team is distributed to
the entire organization.
" Create helpful documentation: Ensure that you have sufficient training
documentation and support so that your content creators understand when it's
appropriate to use workspaces, apps, or per-item sharing (direct access or link).
" Create a licensing strategy: Ensure that you have a specific strategy in place to
handle Fabric licensing considerations. Create a process for how workspaces could
be assigned each license type, and the prerequisites required for the type of
content that could be assigned to Premium.
Questions to ask
Use questions like those found below to assess content delivery scope.
Do central teams that are responsible for Fabric have a clear understanding of who
creates and delivers content? Does it differ by business area, or for different
content item types?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization? Are there advanced
scenarios, like advanced data preparation or advanced data model management,
or niche scenarios, like self-service real-time analytics?
For the identified content delivery scopes in place, to what extent are guidelines
being followed?
Are there trajectories for helpful self-service content to be "promoted" from
personal to team content delivery scopes and beyond? What systems and
processes enable sustainable, bottom-up scaling and distribution of useful self-
service content?
What are the guidelines for publishing content to, and using, personal
workspaces?
Are personal workspaces assigned to dedicated Fabric capacity? In what
circumstances are personal workspaces intended to be used?
On average, how many reports does someone have access to? How many reports
does an executive have access to? How many reports does the CEO have access
to?
If your organization is using Fabric or Power BI today, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Is there a clear licensing strategy? How many licenses are used today? How many
tenants and capacities exist, who uses them, and why?
How do central teams decide what gets published to Premium (or Fabric)
dedicated capacity, and what uses shared capacity? Do development workloads
use separate Premium Per User (PPU) licensing to avoid affecting production
workloads?
Maturity levels
The following maturity levels will help you assess the current state of your content
delivery.
200: Repeatable
• Pockets of good practices exist. However, good practices are overly dependent on the knowledge, skills, and habits of the content creator.
300: Defined
• Clear guidelines are defined and communicated to describe what can and can't occur within each delivery scope. These guidelines are followed by some—but not all—groups across the organization.
400: Capable
• Criteria are defined to align governance requirements for self-service versus enterprise content.
• Guidelines for content delivery scope are followed by most, or all, groups across the organization.
• Changes are announced and follow a communication plan. Content creators are aware of the downstream effects on their content. Consumers are aware of when reports and apps are changed.
500: Efficient
• Proactive steps to communicate with users occur when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk.
• The business value that's achieved for deployed solutions is regularly evaluated.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
Center of Excellence (COE).
Center of Excellence
Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Important
One of the most powerful aspects of a COE is the cross-departmental insight into
how analytics tools like Fabric are used by the organization. This insight can reveal
which practices work well and which don't, which can facilitate a bottom-up
approach to governance. A primary goal of the COE is to learn which practices work
well, share that knowledge more broadly, and replicate best practices across the
organization.
Staffing a COE
People who are good candidates as COE members tend to combine analytical, technical, and business skills.
Tip
If you have self-service content creators in your organization who constantly push
the boundaries of what can be done, they might be a great candidate to become a
recognized champion, or perhaps even a satellite member of the COE.
When recruiting for the COE, it's important to have a mix of complementary analytical
skills, technical skills, and business skills.
COE leader: Manages the day-to-day operations of the COE. Interacts with the executive sponsor and other organizational teams, such as the data governance board, as necessary. For an overview of additional roles and responsibilities, see the Governance article.
Coach: Coaches and educates others on data and BI skills via office hours (community engagement), best practices reviews, or co-development projects. Oversees and participates in the discussion channel of the internal community. Interacts with, and supports, the champions network.
Trainer: Develops, curates, and delivers internal training materials, documentation, and resources.
Data analyst: Domain-specific subject matter expert. Acts as a liaison between the COE and the business unit. Content creator for the business unit. Assists with content certification. Works on co-development projects and proofs of concept.
Data modeler: Creates and manages data assets (such as shared semantic models and dataflows) to support other self-service content creators.
Data engineer: Plans for deployment and architecture, including integration with other services and data platforms. Publishes data assets that are utilized broadly across the organization (such as a lakehouse, data warehouse, data pipeline, dataflow, or semantic model).
User support: Assists with the resolution of data discrepancies and escalated help desk support issues.
As mentioned previously, the scope of responsibilities for a COE can vary significantly
between organizations. Therefore, the roles found for COE members can vary too.
Structuring a COE
The selected COE structure can vary among organizations. It's also possible for multiple
structures to exist inside of a single large organization. That's particularly true when
there are subsidiaries or when acquisitions have occurred.
Note
The following terms might differ from those defined for your organization, particularly
the meaning of federated, which tends to have many different IT-related meanings.
Centralized COE
A centralized COE comprises a single shared services team.
Pros:
There's a single point of accountability for a single team that manages standards,
best practices, and delivery end-to-end.
The COE is one group from an organizational chart perspective.
It's easy to start with this approach and then evolve to the unified or federated
model over time.
Cons:
Unified COE
A unified COE is a single, centralized, shared services team that has been expanded to
include embedded team members. The embedded team members are dedicated to
supporting a specific functional area or business unit.
Pros:
There's a single point of accountability for a single team that includes cross-
functional involvement from the embedded COE team members. The embedded
COE team members are assigned to various areas of the business.
The COE is one group from an organizational chart perspective.
The COE understands the needs of business units more deeply due to dedicated
members with domain expertise.
Cons:
The embedded COE team members, who are dedicated to a specific business unit,
have a different organizational chart responsibility than the people they serve
directly within the business unit. The organizational structure could potentially lead
to complications or differences in priorities, or it could necessitate the involvement
of the executive sponsor. Preferably, the executive sponsor has a scope of authority that
includes the COE and all involved business units to help resolve conflicts.
Federated COE
A federated COE comprises a shared services team (the core COE members) plus
satellite members from each functional area or major business unit. A federated team
works in coordination, even though its members reside in different business units.
Typically, satellite members are primarily focused on development activities to support
their business unit while the shared services personnel support the entire community.
Pros:
Cons:
Since core and satellite members span organizational boundaries, the federated
COE approach requires strong leadership, excellent communication, robust project
management, and ultra-clear expectations.
There's a higher risk of encountering competing priorities due to the federated
structure.
This approach typically involves part-time people and/or dotted line organizational
chart accountability that can introduce competing time pressures.
Decentralized COE
Decentralized COEs are independently managed by business units.
Pros:
A specialized data culture exists that's focused on the business unit, making it
easier to learn quickly and adapt.
Policies and practices are tailored to each business unit.
Agility, flexibility, and priorities are focused on the individual business unit.
Cons:
There's a risk that decentralized COEs operate in isolation. As a result, they might
not share best practices and lessons learned outside of their business unit.
Collaboration with a centralized team might be informal and/or inconsistent.
Inconsistent policies are created and applied across business units.
It's difficult to scale a decentralized model.
There's potential rework to bring one or more decentralized COEs in alignment
with organizational-wide policies.
Larger business units with significant funding might have more resources available
to them, which might not serve cost optimization goals from an organizational-
wide perspective.
Funding a COE
A COE can be funded in one of the following ways:
Cost center.
Profit center with project budget(s).
A combination of cost center and profit center.
When the COE operates as a cost center, it absorbs the operating costs. Generally, it
involves an approved annual budget. Sometimes this is called a push engagement
model.
When the COE operates as a profit center (for at least part of its budget), it could accept
projects throughout the year based on funding from other business units. Sometimes
this is called a pull engagement model.
Funding is important because it impacts the way the COE communicates and engages
with the internal community. As the COE experiences more and more successes, they
might receive more requests from business units for help. It's especially the case as
awareness grows throughout the organization.
Tip
The choice of funding model can determine how the COE actively grows its
influence and ability to help. The funding model can also have a big impact on
where authority resides and how decision-making works. Further, it impacts the
types of services a COE can offer, such as co-development projects and/or best
practices reviews. For more information, see the Mentoring and user enablement
article.
Some organizations cover the COE operating costs with chargebacks to business units
based on the usage goals of Fabric. For a shared capacity, this could be based on
number of active users. For Premium capacity, chargebacks could be allocated based on
which business units are using the capacity. Ideally, chargebacks are directly correlated
to the business value gained.
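As a simple illustration of usage-based chargebacks, the following Python snippet allocates a monthly operating cost across business units in proportion to their usage. The unit names, usage figures, and cost below are hypothetical placeholders, and the right usage measure (active users, capacity consumption, and so on) depends on your organization.

```python
# Hypothetical usage figures per business unit (for example, active users or
# capacity consumption taken from the activity log or capacity metrics).
usage_by_unit = {"Finance": 1200, "Sales": 800, "Operations": 400}
monthly_operating_cost = 30_000  # total cost to recover through chargebacks

total_usage = sum(usage_by_unit.values())
chargebacks = {
    unit: round(monthly_operating_cost * usage / total_usage, 2)
    for unit, usage in usage_by_unit.items()
}
print(chargebacks)  # {'Finance': 15000.0, 'Sales': 10000.0, 'Operations': 5000.0}
```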
Checklist - Considerations and key actions you can take to establish or improve your
COE.
" Define the scope of responsibilities for the COE: Ensure that you're clear on what
activities the COE can support. Once the scope of responsibilities is known, identify
the skills and competencies required to fulfill those responsibilities.
" Identify gaps in the ability to execute: Analyze whether the COE has the required
systems and infrastructure in place to meet its goals and scope of responsibilities.
" Determine the best COE structure: Identify which COE structure is most
appropriate (centralized, unified, federated, or decentralized). Verify that staffing,
roles and responsibilities, and appropriate organizational chart relationships (HR
reporting) are in place.
" Plan for future growth: If you're starting out with a centralized or decentralized
COE, consider how you will scale the COE over time by using the unified or
federated approach. Plan for any actions that you can take now that'll facilitate
future growth.
" Identify customers: Identify the internal community members, and any external
customers, to be served by the COE. Decide how the COE will generally engage with
those customers, whether it's a push model, pull model, or both models.
" Verify the funding model for the COE: Decide whether the COE is purely a cost
center with an operating budget, whether it will operate partially as a profit center,
and/or whether chargebacks to other business units will be required.
" Create a communication plan: Create you communications strategy to educate the
internal community of users about the services the COE offers, and how to engage
with the COE.
" Create goals and metrics: Determine how you'll measure effectiveness for the COE.
Create KPIs (key performance indicators) or OKRs (objectives and key results) to
validate that the COE consistently provides value to the user community.
Questions to ask
Use questions like those found below to assess the effectiveness of a COE.
Is there a COE? If so, who is in the COE and what's the structure?
If there isn't a COE, is there a central team that performs a similar function? Do
data decision makers in the organization understand what a COE does?
If there isn't a COE, does the organization aspire to create one? Why or why not?
Are there opportunities for federated or decentralized COE models due to a mix of
enterprise and departmental solutions?
Are there any missing roles and responsibilities from the COE?
To what extent does the COE engage with the user community? Do they mentor
users? Do they curate a centralized portal? Do they maintain centralized resources?
Is the COE recognized in the organization? Does the user community consider
them to be credible and helpful?
Do business users see central teams as enabling or restricting their work with data?
What's the COE funding model? Do COE customers financially contribute in some
way to the COE?
How consistent and transparent is the COE with their communication?
Maturity levels
The following maturity levels will help you assess the current state of your COE.
100: Initial
• One or more COEs exist, or the activities are performed within the data team, BI team, or IT. There's no clarity on the specific goals nor expectations for responsibilities.
• Requests for assistance from the COE are handled in an unplanned manner.
200: Repeatable
• The COE is in place with a specific charter to mentor, guide, and educate self-service users. The COE seeks to maximize benefits of self-service approaches to data and BI while reducing the risks.
• The goals, scope of responsibilities, staffing, structure, and funding model are established for the COE.
300: Defined
• The COE operates with active involvement from all business units in a unified or federated mode.
400: Capable
• The goals of the COE align with organizational goals, and they are reassessed regularly.
• The COE is well-known throughout the organization, and consistently proves its value to the internal user community.
500: Efficient
• Regular reviews of KPIs or OKRs evaluate COE effectiveness in a measurable way.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about
implementing governance guidelines, policies, and processes.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
Data governance is a broad and complex topic. This article introduces key concepts and
considerations. It identifies important actions to take when adopting Microsoft Fabric,
but it's not a comprehensive reference for data governance.
As defined by the Data Governance Institute , data governance is "a system of decision
rights and accountabilities for information-related processes, executed according to
agreed-upon models which describe who can take what actions, with what information,
and when, under what circumstances, using what methods."
The term data governance is a misnomer. The primary focus for governance isn't on the
data itself. The focus is on governing what users do with the data. Put another way: the
true focus is on governing users' behavior to ensure organizational data is well
managed.
When focused on self-service data and business intelligence (BI), the primary goals of
governance are to achieve the proper balance of empowerment and control.
The optimal balance between control and empowerment will differ between
organizations. It's also likely to differ among different business units within an
organization. You'll be most successful with a platform like Fabric when you put as much
emphasis on user empowerment as on clarifying its practical usage within established
guardrails.
Tip
Think of governance as a set of established guidelines and formalized policies. All
governance guidelines and policies should align with your organizational data
culture and adoption objectives. Governance is enacted on a day-to-day basis by
your system oversight (administration) activities.
Governance strategy
When considering data governance in any organization, the best place to start is by
defining a governance strategy. By focusing first on the strategic goals for data
governance, all detailed decisions when implementing governance policies and
processes can be informed by the strategy. In turn, the governance strategy will be
defined by the organization's data culture.
Empowering users throughout the organization to use data and make decisions,
within the defined boundaries.
Improving the user experience by providing clear and transparent guidance (with
minimal friction) on what actions are permitted, why, and how.
Ensuring that the data usage is appropriate for the needs of the business.
Ensuring that content ownership and stewardship responsibilities are clear. For
more information, see the Content ownership and management article.
Enhancing the consistency and standardization of working with data across
organizational boundaries.
Reducing risk of data leakage and misuse of data. For more information, see the
information protection and data loss prevention series of articles.
Meeting regulatory, industry, and internal requirements for the proper use of data.
Tip
A well-executed data governance strategy makes it easier for more users to work
with data. When governance is approached from the perspective of user
empowerment, users are more likely to follow the documented processes.
Accordingly, the users become a trusted partner too.
Roll out Fabric first, then introduce governance: Fabric is made widely available to
users in the organization as a new self-service data and BI tool. Then, at some time in
the future, a governance effort begins. This method prioritizes agility.
Full governance planning first, then roll out Fabric: Extensive governance planning
occurs prior to permitting users to begin using Fabric. This method prioritizes control
and stability.
Choose method 1 when Fabric is already used for self-service scenarios, and you're
ready to start working in a more efficient manner.
Choose method 3 when you want to have a balance of control and agility. This balanced
approach is the best choice for most organizations and most scenarios.
Drawbacks of the second method (full governance planning first) include that it:
Favors enterprise content development more than self-service.
Is slower to allow the user population to begin to get value and improve decision-making.
Encourages poor habits and workarounds when there's a significant delay in allowing the use of data for decision-making.
For more information about up-front planning, see the Preparing to migrate to Power BI
article.
Governance challenges
If your organization has implemented Fabric without a governance approach or strategic
direction (as described above by method 1), there could be numerous challenges
requiring attention. Depending on the approach that you've taken and your current
state, some of the following challenges could be applicable to your organization.
Strategy challenges
Lack of a cohesive data governance strategy that aligns with the business strategy
Lack of executive support for governing data as a strategic asset
Insufficient adoption planning for advancing adoption and the maturity level of BI
and analytics
People challenges
Lack of aligned priorities between centralized teams and business units
Lack of identified champions with sufficient expertise and enthusiasm throughout
the business units to advance organizational adoption objectives
Lack of awareness of self-service best practices
Resistance to following newly introduced governance guidelines and policies
Duplicate effort spent across business units
Lack of clear accountability, roles, and responsibilities
Process challenges
Lack of clearly defined processes resulting in chaos and inconsistencies
Lack of standardization or repeatability
Insufficient ability to communicate and share lessons learned
Lack of documentation and over-reliance on tribal knowledge
Inability to comply with security and privacy requirements
Governance planning
Some organizations have implemented Fabric without a governance approach or clear
strategic direction (as described above by method 1). In this case, the effort to begin
governance planning can be daunting.
If a formal governance body doesn't currently exist in your organization, then the focus
of your governance planning and implementation efforts will be broader. If, however,
there's an existing data governance board in the organization, then your focus is
primarily to integrate with existing practices and customize them to accommodate the
objectives for self-service and enterprise data and BI scenarios.
Some potential governance planning activities and outputs that you might find valuable
are described next.
Strategy
Key activities:
Conduct a series of workshops to gather information and assess the current state
of data culture, adoption, and data and BI practices. For guidance about how to
gather information and define the current state of BI adoption, including
governance, see BI strategic planning.
Use the current state assessment and information gathered to define the desired
future state, including governance objectives. For guidance about how to use this
current state definition to decide on your desired future state, see BI tactical
planning.
Validate the focus and scope of the governance program.
Identify existing bottom-up initiatives in progress.
Identify immediate pain points, issues, and risks.
Educate senior leadership about governance, and ensure executive sponsorship is
sufficient to sustain and grow the program.
Clarify where Power BI fits into the overall BI and analytics strategy for the
organization.
Assess internal factors such as organizational readiness, maturity levels, and key
challenges.
Assess external factors such as risk, exposure, regulatory, and legal requirements—
including regional differences.
Key output:
People
Key activities:
Key output:
Policies and processes
Key activities:
Analyze immediate pain points, issues, risks, and areas to improve the user
experience.
Prioritize data policies to be addressed by order of importance.
Identify existing processes in place that work well and can be formalized.
Determine how new data policies will be socialized.
Decide to what extent data policies might differ or be customized for different
groups.
Key output:
Process for how data policies and documentation will be defined, approved,
communicated, and maintained
Plan for requesting valid exceptions and departures from documented policies
Project management
The implementation of the governance program should be planned and managed as a
series of projects.
Key activities:
) Important
The scope of activities listed above that will be useful to take on will vary
considerably between organizations. If your organization doesn't have existing
processes and workflows for creating these types of outputs, refer to the guidance
found in the adoption roadmap conclusion for some helpful resources, as well as
the implementation planning BI strategy articles.
Governance policies
Decision criteria
All governance decisions should be in alignment with the established goals for
organizational adoption. Once the strategy is clear, more tactical governance decisions
will need to be made which affect the day-to-day activities of the self-service user
community. These types of tactical decisions correlate directly to the data policies that
get created.
Who owns and manages the data and BI content? The Content ownership and
management article introduced three types of strategies: business-led self-service,
managed self-service, and enterprise. Who owns and manages the content has a
significant impact on governance requirements.
What is the scope for delivery of the data and BI content? The Content delivery
scope article introduced four scopes for delivery of content: personal, team,
departmental, and enterprise. The scope of delivery has a considerable impact on
governance requirements.
What is the data subject area? The data itself, including its sensitivity level, is an
important factor. Some data domains inherently require tighter controls. For
instance, personally identifiable information (PII), or data subject to regulations,
should be subject to stricter governance requirements than less sensitive data.
Is the data, and/or the BI solution, considered critical? If you can't make an
informed decision easily without this data, you're dealing with critical data
elements. Certain reports and apps could be deemed critical because they meet a
set of predefined criteria. For instance, the content is delivered to executives.
Predefined criteria for what's considered critical help everyone have clear
expectations. Critical data is usually subject to stricter governance requirements.
Tip
Different combinations of the above four criteria will result in different governance
requirements for Fabric content.
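As a purely illustrative sketch of how those combinations might map to governance requirements, the following Python example derives a governance tier from the four criteria. The tier names, input values, and rules are hypothetical and would be replaced by your organization's own data policies.

```python
# Hypothetical sketch: derive a governance tier from the four decision criteria.
# The rules below are illustrative only; real policies are defined by your organization.

def governance_tier(ownership: str, delivery_scope: str, sensitivity: str, is_critical: bool) -> str:
    """Return an illustrative governance tier for a piece of Fabric content.

    ownership: "business-led self-service", "managed self-service", or "enterprise"
    delivery_scope: "personal", "team", "departmental", or "enterprise"
    sensitivity: for example "general", "confidential", or "highly confidential"
    is_critical: whether the content meets predefined criticality criteria
    """
    if is_critical or sensitivity == "highly confidential" or delivery_scope == "enterprise":
        return "strict"    # certification, formal review, and tighter controls
    if ownership == "managed self-service" or delivery_scope == "departmental":
        return "standard"  # documented policies apply, with some review steps
    return "light"         # personal or team content within baseline guardrails


print(governance_tier("business-led self-service", "team", "general", False))          # light
print(governance_tier("managed self-service", "departmental", "confidential", False))  # standard
print(governance_tier("enterprise", "enterprise", "confidential", True))               # strict
```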
The following list includes items that you might choose to prioritize when introducing
governance for Fabric.
If you don't make governance decisions and communicate them well, users will use their
own judgment for how things should work—and that often results in inconsistent
approaches to common tasks.
Although not every governance decision needs to be made upfront, it's important that
you identify the areas of greatest risk in your organization. Then, incrementally
implement governance policies and processes that will deliver the most impact.
Data policies
A data policy is a document that defines what users can and can't do. You might call it
something different, but the goal remains the same: when decisions—such as those
discussed in the previous section—are made, they're documented for use and reference
by the community of users.
A data policy should be as short as possible. That way, it's easy for people to understand
what is being asked of them.
7 Note
Here are three common data policy examples you might choose to prioritize.
Data ownership policy: Specifies when an owner is required for a data asset, and what the data owner's responsibilities include, such as: supporting colleagues who view the content, maintaining appropriate confidentiality and security, and ensuring compliance.
Data certification (endorsement) policy: Specifies the process that is followed to certify content. Requirements might include activities such as: data accuracy validation, data source and lineage review, technical review of the data model, security review, and documentation review.
Data classification and protection policy: Specifies activities that are allowed and not allowed per classification (sensitivity level). It should specify activities such as: allowed sharing with external users, with or without a non-disclosure agreement (NDA), encryption requirements, and ability to download the data. Sometimes, it's also called a data handling policy or a data usage policy. For more information, see the Information protection for Power BI article.
U Caution
Having a lot of documentation can lead to a false sense that everything is under
control, which can lead to complacency. The level of engagement that the COE has
with the user community is one way to improve the chances that governance
guidelines and policies are consistently followed. Auditing and monitoring activities
are also important.
Scope of policies
Governance decisions will rarely be one-size-fits-all across the entire organization. When
practical, it's wise to start with standardized policies, and then implement exceptions as
needed. Having a clearly defined strategy for how policies will be handled for
centralized and decentralized teams will make it much easier to determine how to
handle exceptions.
However, highly standardized organization-wide policies can be inflexible and allow less
autonomy and empowerment for individual business units.
Tip
Finding the right balance of standardization and customization for supporting self-
service data and BI across the organization can be challenging. However, by
starting with organizational policies and mindfully watching for exceptions, you can
make meaningful progress quickly.
) Important
Regardless of how the governance body is structured, it's important that there's a
person or group with sufficient influence over data governance decisions. This
person should have authority to enforce those decisions across organizational
boundaries.
Starting with the first level, the levels of checks and balances in the above diagram
include:
Tactical - Supporting teams: Level 2 includes several groups that support the efforts of
the users in the business units. Supporting teams include the COE, enterprise data and BI,
the data governance office, as well as other ancillary teams. Ancillary teams can include IT,
security, HR, and legal. A change control board is included here as well.
Tactical - Audit and compliance: Level 3 includes internal audit, risk management, and
compliance teams. These teams provide guidance to levels 1 and 2. They also provide
enforcement when necessary.
Strategic - Executive sponsor and steering committee: The highest level includes the
executive-level oversight of strategy and priorities. This level handles any escalated issues
that couldn't be solved at lower levels. Therefore, it's important to have a leadership team
with sufficient authority to be able to make decisions when necessary.
Common data governance roles and responsibilities include:
Chief Data Officer or Chief Analytics Officer: Defines the strategy for use of data as an enterprise asset. Oversees enterprise-wide governance guidelines and policies.
Data governance board: Steering committee with members from each business unit who, as domain owners, are empowered to make enterprise governance decisions. They make decisions on behalf of the business unit and in the best interest of the organization. Provides approvals, decisions, priorities, and direction to the enterprise data governance team and working committees.
Data governance team: Creates governance policies, standards, and processes. Provides enterprise-wide oversight and optimization of data integrity, trustworthiness, privacy, and usability. Collaborates with the COE to provide governance education, support, and mentoring to data owners and content creators.
Data governance working committees: Temporary or permanent teams that focus on individual governance topics, such as security or data quality.
Project management office: Manages individual governance projects and the ongoing data governance program.
Fabric executive sponsor: Promotes adoption and the successful use of Fabric. Actively ensures that Fabric decisions are consistently aligned with business objectives, guiding principles, and policies across organizational boundaries. For more information, see the Executive sponsorship article.
Center of Excellence: Mentors the community of creators and consumers to promote the effective use of Fabric for decision-making. Provides cross-departmental coordination of Fabric activities to improve practices, increase consistency, and reduce inefficiencies. For more information, see the Center of Excellence article.
Fabric champions: A subset of content creators found within the business units who help advance the adoption of Fabric. They contribute to data culture growth by advocating the use of best practices and actively assisting colleagues. For more information, see the Community of practice article.
Risk management: Reviews and assesses data sharing and security risks. Defines ethical data policies and standards. Communicates regulatory and legal requirements.
Data steward: Collaborates with governance committee and/or COE to ensure that organizational data has acceptable data quality levels.
All BI creators and consumers: Adheres to policies for ensuring that data is secure, protected, and well-managed as an organizational asset.
Tip
Name a backup for each person in key roles, for example, members of the data
governance board. In their absence, the backup person can attend meetings and
make time-sensitive decisions when necessary.
" Align goals and guiding principles: Confirm that the high-level goals and guiding
principles of the data culture goals are clearly documented and communicated.
Ensure that alignment exists for any new governance guidelines or policies.
" Understand what's currently happening: Ensure that you have a deep
understanding of how Fabric is currently used for self-service and enterprise data
and BI scenarios. Document opportunities for improvement. Also, document
strengths and good practices that would be helpful to scale out more broadly.
" Prioritize new governance guidelines and policies: For prioritizing which new
guidelines or policies to create, select an important pain point, high priority need,
or known risk for a data domain. It should have significant benefit and can be
achieved with a feasible level of effort. When you implement your first governance
guidelines, choose something users are likely to support because the change is low
impact, or because they are sufficiently motivated to make a change.
" Create a schedule to review policies: Determine the cadence for how often data
policies are reevaluated. Reassess and adjust when needs change.
" Decide how to handle exceptions: Determine how conflicts, issues, and requests for
exceptions to documented policies will be handled.
" Understand existing data assets: Confirm that you understand what critical data
assets exist. Create an inventory of ownership and lineage, if necessary. Keep in
mind that you can't govern what you don't know about.
" Verify executive sponsorship: Confirm that you have support and sufficient
attention from your executive sponsor, as well as from business unit leaders.
" Prepare an action plan: Include the following key items:
Initial priorities: Select one data domain or business unit at a time.
Timeline: Work in iterations long enough to accomplish meaningful progress, yet
short enough to periodically adjust.
Quick wins: Focus on tangible, tactical, and incremental progress.
Success metrics: Create measurable metrics to evaluate progress.
Questions to ask
Use questions like those found below to assess governance.
At a high level, what's the current governance strategy? To what extent is the
purpose and importance of this governance strategy clear to both end users and
the central data and BI teams?
In general, is the current governance strategy effective?
What are the key regulatory and compliance criteria that the organization (or
specific business units) must adhere to? Where are these criteria documented? Is this
information readily available to people who work with data and share data items as
a part of their role?
How well does the current governance strategy align with users' way of working?
Is a specific role or team responsible for governance in the organization?
Who has the authority to create and change governance policies?
Do governance teams use Microsoft Purview or another tool to support
governance activities?
What are the prioritized governance risks, such as risks to security, information
protection, and data loss prevention?
What's the potential business impact of the identified governance risks?
How frequently is the governance strategy re-evaluated? What metrics are used to
evaluate it, and what mechanisms exist for business users to provide feedback?
What types of user behaviors create risk when users work with data? How are
those risks mitigated?
What sensitivity labels are in place, if any? Are data and BI decision makers aware
of sensitivity labels and the benefits to the business?
What data loss prevention policies are in place, if any?
How is "Export to Excel" handled? What steps are taken to prevent data loss
prevention? What's the prevalence of "Export to Excel"? What do people do with
data once they have it in Excel?
Are there practices or solutions that are out of regulatory compliance that must be
urgently addressed? Are these examples justified with an explanation of the
potential business impact, should they not be addressed?
Tip
"Export to Excel" is typically a controversial topic. Often, business users focus on the
requirement to have "Export to Excel" possible in BI solutions. Enabling "Export to
Excel" can be counter-productive because a business objective isn't to get data into
Excel. Instead, define why end users need the data in Excel. Ask what they do with
the data once it's in Excel, which business questions they try to answer, what
decisions they make, and what actions they take with the data.
Focusing on business decisions and actions helps steer focus away from tools and
features and toward helping people achieve their business objectives.
Maturity levels
The following maturity levels will help you assess the current state of your governance
initiatives.
100: Initial
• Due to a lack of governance planning, the good data management and informal governance practices that are occurring are overly reliant on judgment and experience level of individuals.
200: Repeatable
• Some areas of the organization have made a purposeful effort to standardize, improve, and document their data management and governance practices.
300: Defined
• A complete governance strategy with focus, objectives, and priorities is enacted and broadly communicated.
• Specific governance guidelines and policies are implemented for the top few priorities (pain points or opportunities). They're actively and consistently followed by users.
400: Capable
• All Fabric governance priorities align with organizational goals and business objectives. Goals are reassessed regularly.
• It's clear where Fabric fits into the overall data and BI strategy for the organization.
• Fabric activity log and API data is actively analyzed to monitor and audit Fabric activities. Proactive action is taken based on the data.
500: Efficient
• Regular reviews of KPIs or OKRs evaluate measurable governance goals. Iterative, continual progress is a priority.
• Fabric activity log and API data is actively used to inform and improve adoption and governance efforts.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about
mentoring and user enablement.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
A critical objective for adoption efforts is to enable users to accomplish as much as they
can within the requisite guardrails established by governance guidelines and policies.
For this reason, the act of mentoring users is one of the most important responsibilities
of the Center of Excellence (COE), and it has a direct influence on how user adoption
occurs. For more information about user adoption, see Microsoft Fabric adoption
maturity levels.
Skills mentoring
Mentoring and helping users in the Fabric community become more effective can take
on various forms, such as:
Office hours
Co-development projects
Best practices reviews
Extended support
Office hours
Office hours are a form of ongoing community engagement managed by the COE. As
the name implies, office hours are times of regularly scheduled availability where
members of the community can engage with experts from the COE to receive assistance
with minimal process overhead. Office hours are usually group-based, so Fabric
champions and other members of the community can also help solve an issue if a topic
is in their area of expertise.
Office hours are a very popular and productive activity in many organizations. Some
organizations call them drop-in hours or even a fun name such as Power Hour or Fabric
Fridays. The primary goal is usually to get questions answered, solve problems, and
remove blockers. Office hours can also be used as a platform for the user community to
share ideas, suggestions, and even complaints.
The COE publishes the times for regular office hours when one or more COE members
are available. Ideally, office hours are held on a regular and frequent basis. For instance,
it could be every Tuesday and Thursday. Consider offering different time slots or
rotating times if you have a global workforce.
Tip
One option is to set specific office hours each week. However, users might not
show up, so that can end up being inefficient. Alternatively, consider leveraging
Microsoft Bookings to schedule office hours. It shows the blocks of time when
each COE expert is available, with Outlook integration ensuring availability is up to
date.
Office hours provide many benefits:
Content creators and the COE actively collaborate to answer questions and solve
problems together.
Real work is accomplished while learning and problem solving.
Others might observe, learn, and participate.
Individual groups can head to a breakout room to solve a specific problem.
They're a great way for the COE to identify champions or users with specific skills
that the COE didn't previously know about.
The COE can learn what users throughout the organization are struggling with. It
helps inform whether additional resources, documentation, or training might be
required.
Tip
It's common for some tough issues to come up during office hours that cannot be
solved quickly, such as getting a complex DAX calculation to work, or addressing
performance challenges in a complex solution. Set clear expectations for what's in
scope for office hours, and whether there's any commitment for follow-up.
Co-development projects
One way the COE can provide mentoring services is during a co-development project. A
co-development project is a form of assistance offered by the COE where a user or
business unit takes advantage of the technical expertise of the COE to solve business
problems with data. Co-development involves stakeholders from the business unit and
the COE working in partnership to build a high-quality self-service analytics or business
intelligence (BI) solution that the business stakeholders couldn't deliver independently.
The goal of co-development is to help the business unit develop expertise over time
while also delivering value. For example, the sales team has a pressing need to develop
a new set of commission reports, but the sales team doesn't yet have the knowledge to
complete it on their own.
A co-development project forms a partnership between the business unit and the COE.
In this arrangement, the business unit is fully invested, deeply involved, and assumes
ownership of the project.
Time involvement from the COE reduces over time until the business unit gains expertise
and becomes self-reliant.
The active involvement shown in the above diagram changes over time, as follows:
Business unit: 50% initially, up to 75%, finally at 98%-100%.
COE: 50% initially, down to 25%, finally at 0%-2%.
Ideally, the period for the gradual reduction in involvement is identified up-front in the
project. This way, both the business unit and the COE can sufficiently plan the timeline
and staffing.
Co-development projects can deliver significant short- and long-term benefits. In the
short term, the involvement from the COE can often result in a better-designed and
better-performing solution that follows best practices and aligns with organizational
standards. In the long term, co-development helps increase the knowledge and
capabilities of the business stakeholder, making them more self-sufficient, and more
confident to deliver quality self-service data and BI solutions in the future.
) Important
Essentially, a co-development project helps less experienced users learn the right
way to do things. It reduces the risk that refactoring might be needed later, and it
increases the ability for a solution to scale and grow over time.
Best practices reviews
During a review, an expert from the COE evaluates self-service Fabric content developed
by a member of the community and identifies areas of risk or opportunities for
improvement.
Here are some examples of when a best practices review could be beneficial.
The sales team has a Power BI app that they intend to distribute to thousands of
users throughout the organization. Since the app represents high priority content
distributed to a large audience, they'd like to have it certified. The standard
process to certify content includes a best practices review.
The finance team would like to assign a workspace to a capacity. A review of the
workspace content is required to ensure sound development practices are
followed. This type of review is common when the capacity is shared among
multiple business units. (A review might not be required when the capacity is
assigned to only one business unit.)
The operations team is creating a new Fabric solution they expect to be widely
used. They would like to request a best practices review before it goes into user
acceptance testing (UAT), or before a request is submitted to the change
management board.
A best practices review is most often focused on the semantic model design, though the
review can encompass all types of data items (such as a lakehouse, data warehouse,
data pipeline, dataflow, or semantic model). The review can also encompass reporting
items (such as reports, dashboards, or metrics).
Before content is deployed, a best practices review can be used to verify other design
decisions, like:
Once the content has been deployed, the best practices review isn't necessarily
complete yet. Completing the remainder of the review could also include items such as:
The target workspace is suitable for the content.
Workspace security roles are appropriate for the content.
Other permissions (such as app audience permissions, Build permission, or use of
the individual item sharing feature) are correctly and appropriately configured.
Contacts are identified, and correctly correlate to the owners of the content.
Sensitivity labels are correctly assigned.
Fabric item endorsement (certified or promoted) is appropriate.
Data refresh is configured correctly, failure notifications include the proper users,
and the appropriate data gateway in standard mode is used (if applicable).
All appropriate semantic model best practices rules are followed and, preferably,
are automated via a community tool called Best Practices Analyzer for maximum
efficiency and productivity.
Extended support
From time to time, the COE might get involved with complex issues escalated from the
help desk. For more information, see the User support article.
7 Note
Offering mentoring services might be a culture shift for your organization. Your
reaction might be that users don't usually ask for help with a tool like Excel, so why
would they with Power BI? The answer lies in the fact that Power BI and Fabric are
extraordinarily powerful tools. They provide data preparation and data modeling
capabilities in addition to data visualization. Having the ability to aid and enable
users can significantly improve their skills and increase the quality of their solutions
—it reduces risks too.
Centralized portal
A single centralized portal, or hub, is where the user community can find:
Tip
In general, only 10%-20% of your community will go out of their way to actively
seek out training and educational information. These types of users might naturally
evolve to become your champions. Everyone else is usually just trying to get the
job done as quickly as possible, because their time, focus, and energy are needed
elsewhere. Therefore, it's crucial to make information easy for your community
users to find.
The goal is to consistently direct users in the community to the centralized portal to find
information. The corresponding obligation for the COE is to ensure that the information
users need is available in the centralized portal. Keeping the portal updated requires
discipline when everyone is busy.
) Important
It takes time for community users to think of the centralized portal as their natural first
stop for finding information. It takes consistent redirection to the portal to change
habits. Sending someone a link to an original document location in the portal builds
better habits than, for instance, including the answer in an email response. It's the same
challenge described in the User support article.
Training
A key factor for successfully enabling self-service users in a Fabric community is training.
It's important that the right training resources are readily available and easily
discoverable. While some users are so enthusiastic about analytics that they'll find
information and figure things out on their own, it isn't true for most of the user
community.
Making sure your self-service users (particularly content creators and owners) have
access to the training resources they need to be successful doesn't mean that you need
to develop your own training content. Developing training content is often
counterproductive due to the rapidly evolving nature of the product. Fortunately, an
abundance of training resources is available in the worldwide community. A curated set
of links goes a long way to help users organize and focus their training efforts, especially
for tool training, which focuses on the technology. All external links should be validated
by the COE for accuracy and credibility. It's a key opportunity for the COE to add value
because COE stakeholders are in an ideal position to understand the learning needs of
the community, and to identify and locate trusted sources of quality learning materials.
You'll find the greatest return on investment by creating custom training materials for
organization-specific processes, while relying on content produced by others for
everything else. It's also useful to have a short training class that focuses primarily on
topics like how to find documentation, getting help, and interacting with the
community.
Tip
One of the goals of training is to help users learn new skills while helping them
avoid bad habits. It can be a balancing act. For instance, you don't want to
overwhelm new users by adding in a lot of complexity and friction to a beginner-
level class for report creators. However, it's a great investment to make newer
content creators aware of things that could otherwise take them a while to figure
out. An ideal example is teaching the ability to use a live connection to report from
an existing semantic model. By teaching this concept at the earliest logical time,
you can save a less experienced creator from thinking they always need one semantic
model for every report (and encourage the good habit of reusing existing semantic
models across reports).
Some larger organizations experience continual employee transfers and turnover. Such
frequent change results in an increased need for a repeatable set of training resources.
Some training might be delivered more formally, such as classroom training with hands-
on labs. Other types of training are less formal, such as:
) Important
Each type of user represents a different audience that has different training needs.
The COE will need to identify how best to meet the needs of each audience. For
instance, one audience might find a standard introductory Power BI Desktop class
overwhelming, whereas another will want more challenging information with depth
and detail for end-to-end solutions that include multiple Fabric workloads. If you
have a diverse population of Fabric content creators, consider creating personas
and tailoring the experience to an extent that's practical.
The completion of training can be a leading indicator for success with user adoption.
Some organizations add an element of fun by granting badges, like blue belt or black
belt, as users progress through the training programs.
Give some consideration to how you want to handle users at various stages of user
adoption. Training needs are very different for:
How the COE invests its time in creating and curating training materials will change over
time as adoption and maturity grows. You might also find over time that some
community champions want to run their own tailored set of training classes within their
functional business unit.
Consider using Microsoft Viva Learning , which is integrated into Microsoft Teams. It
includes content from sources such as Microsoft Learn and LinkedIn Learning . Custom
content produced by your organization can be included as well.
If you do make the investment to create custom in-house training, consider creating
short, targeted content that focuses on solving one specific problem. It makes the
training easier to find and consume. It's also easier to maintain and update over time.
Tip
The Help and Support menu in the Fabric portal is customizable. When your
centralized location for training documentation is operational, update the tenant
setting in the Admin portal with the link. The link can then be accessed from the menu
when users select the Get Help option. Also, be sure to teach users about the Help
ribbon tab in Power BI Desktop. It includes links to guided learning, training videos,
documentation, and more.
Documentation
Concise, well-written documentation can be a significant help for users trying to get
things done. Your needs for documentation, and how it's delivered, will depend on how
Fabric is managed in your organization. For more information, see the Content
ownership and management article.
Certain aspects of Fabric tend to be managed by a centralized team, such as the COE.
The following types of documentation are helpful in these situations:
How to request a Power BI license (and whether there are requirements for
manager approval)
How to request a new capacity
How to request a new workspace
How to request a workspace be added to an existing capacity
How to request access to a gateway data source
How to request software installation
Tip
For certain activities that are repeated over and over, consider automating them
using Power Apps and Power Automate. In this case, your documentation will also
include how to access and use the Power Platform functionality.
Tip
When planning for a centralized portal, as described earlier in this article, plan how
to handle situations when guidance or governance policies need to be customized
for one or more business units.
There are also going to be some governance decisions that have been made and should
be documented, such as:
) Important
One of the most helpful pieces of documentation you can publish for the
community is a description of the tenant settings, and the group memberships
required for each tenant setting. Users read about features and functionality online,
and sometimes find that it doesn't work for them. When they are able to quickly
look up your organization's tenant settings, it can save them from becoming
frustrated and attempting workarounds. Effective documentation can reduce the
number of help desk tickets that are submitted. It can also reduce the number of
people who need to be assigned the Fabric administrator role (who might have this
role solely for the purpose of viewing settings).
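If you want to automate part of this documentation, a script along the following lines could pull tenant settings from the Fabric admin REST API and emit a draft reference list. This is a hedged sketch: the List Tenant Settings endpoint URL, the response field names, and the token acquisition step are assumptions that you should verify against the current Fabric REST API documentation and your own authentication setup.

```python
# Hedged sketch: generate draft tenant-setting documentation from the Fabric admin API.
# Assumes the "List Tenant Settings" admin endpoint and an already-acquired access
# token with Fabric administrator permissions; verify the URL and the response field
# names against the current API reference before relying on this.
import requests

FABRIC_ADMIN_SETTINGS_URL = "https://api.fabric.microsoft.com/v1/admin/tenantsettings"
access_token = "<access token acquired via your preferred auth flow>"

response = requests.get(
    FABRIC_ADMIN_SETTINGS_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
response.raise_for_status()

for setting in response.json().get("tenantSettings", []):
    title = setting.get("title", setting.get("settingName", "unknown setting"))
    enabled = setting.get("enabled", False)
    groups = [g.get("name", "") for g in setting.get("enabledSecurityGroups", []) or []]
    scope = ", ".join(groups) if groups else ("entire organization" if enabled else "disabled")
    print(f"{title}: {'Enabled' if enabled else 'Disabled'} ({scope})")
```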
Over time, you might choose to allow certain types of documentation to be maintained
by the community if you have willing volunteers. In this case, you might want to
introduce an approval process for changes.
When you see questions repeatedly arise in the Q&A forum (as described in the User
support article), during office hours, or during lunch and learns, it's a great indicator that
creating new documentation might be appropriate. When the documentation exists, it
allows colleagues to reference it when needed. Documentation contributes to user
enablement and a self-sustaining community.
Tip
Providing Power BI template files for your community is a great way to:
Promote consistency.
Reduce learning curve.
Show good examples and best practices.
Increase efficiency.
Power BI template files can improve efficiency and help people learn during the normal
course of their work. A few ways that template files are helpful include:
7 Note
Providing templates not only saves your content creators time, it also helps them
move quickly beyond a blank page in an empty solution.
You can use Power BI project files with Power BI Desktop developer mode for:
Advanced editing and authoring (for example, in a code editor such as Visual
Studio Code).
Purposeful separation of semantic model and report items (unlike the .pbix or .pbit
files).
Enabling multiple content creators and developers to work on the same project
concurrently.
Integrating with source control (such as by using Fabric Git integration).
Using continuous integration and continuous delivery (CI/CD) techniques to
automate integration, testing and deployment of changes, or versions of content.
7 Note
Power BI includes capabilities such as .pbit template files and .pbip project files that
make it simple to share starter resources with authors. Other Fabric workloads
provide different approaches to content development and sharing. Having a set of
starter resources is important regardless of the items being shared. For example,
your portal might include a set of SQL scripts or notebooks that present tested
approaches to solve common problems.
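As one deliberately minimal example of such a starter resource, a notebook cell like the following could show content creators a tested pattern for profiling and shaping a lakehouse table with Spark. It assumes it runs inside a Fabric notebook where a Spark session and a default lakehouse are already attached, and the table name sales_orders is a placeholder.

```python
# Minimal starter-notebook sketch, assuming a Microsoft Fabric notebook where a
# Spark session ("spark") and a default lakehouse are already attached, and where
# a table named "sales_orders" (a placeholder name) exists in that lakehouse.
df = spark.read.table("sales_orders")

# Quick checks that new content creators commonly need: row count and schema.
print(f"Row count: {df.count()}")
df.printSchema()

# A simple, reusable aggregation pattern to shape data before reporting.
summary = df.groupBy("order_date").count().orderBy("order_date")
summary.show(10)  # in the notebook UI, display(summary) gives a richer output
```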
Checklist - Considerations and key actions you can take to establish, or improve,
mentoring and user enablement.
" Consider what mentoring services the COE can support: Decide what types of
mentoring services the COE is capable of offering. Types can include office hours,
co-development projects, and best practices reviews.
" Communicate regularly about mentoring services: Decide how you will
communicate and advertise mentoring services, such as office hours, to the user
community.
" Establish a regular schedule for office hours: Ideally, hold office hours at least once
per week (depending on demand from users as well as staffing and scheduling
constraints).
" Decide what the expectations will be for office hours: Determine what the scope
of allowed topics or types of issues users can bring to office hours. Also, determine
how the queue of office hours requests will work, whether any information should
be submitted ahead of time, and whether any follow up afterwards can be
expected.
" Create a centralized portal: Ensure that you have a well-supported centralized hub
where users can easily find training materials, documentation, and resources. The
centralized portal should also provide links to other community resources such as
the Q&A forum and how to find help.
" Create documentation and resources: In the centralized portal, create, compile,
and publish useful documentation. Identify and promote the top 3-5 resources that
will be most useful to the user community.
" Update documentation and resources regularly: Ensure that content is reviewed
and updated on a regular basis. The objective is to ensure that the information
available in the portal is current and reliable.
" Compile a curated list of reputable training resources: Identify training resources
that target the training needs and interests of your user community. Post the list in
the centralized portal and create a schedule to review and validate the list.
" Consider whether custom in-house training will be useful: Identify whether
custom training courses, developed in-house, will be useful and worth the time
investment. Invest in creating content that's specific to the organization.
" Provide templates and projects: Determine how you'll use templates including
Power BI template files and Power BI project files. Include the resources in your
centralized portal, and in training materials.
" Create goals and metrics: Determine how you'll measure effectiveness of the
mentoring program. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate that the COE's mentoring efforts strengthen the
community and its ability to provide self-service BI.
Questions to ask
Use questions like those found below to assess mentoring and user enablement.
Maturity levels
The following maturity levels will help you assess the current state of your mentoring
and user enablement.
100: Initial
• Some documentation and resources exist. However, they're siloed and inconsistent.
• Few users are aware of, or take advantage of, available resources.
200: Repeatable
• A centralized portal exists with a library of helpful documentation and resources.
• A curated list of training links and resources is available in the centralized portal.
• Office hours are available so the user community can get assistance from the COE.
300: Defined
• The centralized portal is the primary hub for community members to locate training, documentation, and resources. The resources are commonly referenced by champions and community members when supporting and learning from each other.
• The COE's skills mentoring program is in place to assist users in the community in various ways.
400: Capable
• Office hours have regular and active participation from all business units in the organization.
• Best practices reviews from the COE are regularly requested by business units.
• Co-development projects are repeatedly executed with success by the COE and members of business units.
500: Efficient
• Training, documentation, and resources are continually updated and improved by the COE to ensure the community has current and reliable information.
• Measurable and tangible business value is gained from the mentoring program by using KPIs or OKRs.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
community of practice.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
A community of practice is a group of people with a common interest who interact with,
and help, each other on a voluntary basis. Using a tool such as Microsoft Fabric to
produce effective analytics is a common interest that can bring people together across
an organization.
Champions are the smallest group among creators and SMEs. Self-service content
creators and SMEs represent a larger number of people. Content consumers represent
the largest number of people in most organizations.
7 Note
All references to the Fabric community in this adoption series of articles refer to
internal users, unless explicitly stated otherwise. There's an active and vibrant
worldwide community of bloggers and presenters who produce a wealth of
knowledge about Fabric. However, internal users are the focus of this article.
For information about related topics including resources, documentation, and training
provided for the Fabric community, see the Mentoring and user enablement article.
Champions network
One important part of a community of practice is its champions. A champion is a self-
service content creator who works in a business unit that engages with the COE. A
champion is recognized by their peers as the go-to expert. A champion continually
builds and shares their knowledge even if it's not an official part of their job role.
Champions influence and help their colleagues in many ways including solution
development, learning, skills improvement, troubleshooting, and keeping up to date.
Champions typically:
Have a deep interest in analytics being used effectively and adopted successfully
throughout the organization.
Possess strong technical skills as well as domain knowledge for their functional
business unit.
Have an inherent interest in getting involved and helping others.
Are early adopters who are enthusiastic about experimenting and learning.
Can effectively translate business needs into solutions.
Communicate well with colleagues.
Tip
Different approaches will be more effective for different organizations, and each
organization will find what works best for them as their maturity level increases.
) Important
Someone very well might be acting in the role of a champion without even
knowing it, and without any formal recognition. The COE should always be on the
lookout for champions. COE members should actively monitor the discussion
channel to see who is particularly helpful. The COE should deliberately encourage
and support potential champions, and when appropriate, invite them into a
champions network to make the recognition formal.
Knowledge sharing
The overriding objective of a community of practice is to facilitate knowledge sharing
among colleagues and across organizational boundaries. There are many ways
knowledge sharing occurs. It could be during the normal course of work. Or, it could be
during a more structured activity, such as:
Discussion channel: A Q&A forum where anyone in the community can post and view messages. Often used for help and announcements. For more information, see the User support article.
Lunch and learn sessions: Regularly scheduled sessions where someone presents a short session about something they've learned or a solution they've created. The goal is to get a variety of presenters involved, because it's a powerful message to hear firsthand what colleagues have achieved.
Office hours with the COE: Regularly scheduled times when COE experts are available so the community can engage with them. Community users can receive assistance with minimal process overhead. For more information, see the Mentoring and user enablement article.
Internal blog posts or wiki posts: Short blog posts, usually covering technical how-to topics.
Internal analytics user group: A subset of the community that chooses to meet as a group on a regularly scheduled basis. User group members often take turns presenting to each other to share knowledge and improve their presentation skills.
Book club: A subset of the community selects a book to read on a schedule. They discuss what they've learned and share their thoughts with each other.
Tip
Inviting an external presenter can reduce the effort level and bring a fresh
viewpoint for learning and knowledge sharing.
Incentives
A lot of effort goes into forming and sustaining a successful community. It's
advantageous to everyone to empower and reward users who work for the benefit of
the community.
Contests with a small gift card or time off: For example, you might hold a
performance tuning event with the winner being the person who successfully
reduced the size of their data model the most.
Ranking based on help points: The more frequently someone participates in Q&A,
the higher they rank on a leaderboard. This type of gamification
promotes healthy competition and excitement. By getting involved in more
conversations, the participant learns and grows personally in addition to helping
their colleagues.
Leadership communication: Reach out to a manager when someone goes above
and beyond so that their leader, who might not be active in the community, sees
the value that their staff member provides.
Rewarding champions
Different types of incentives will appeal to different types of people. Some community
members will be highly motivated by praise and feedback. Some will be inspired by
gamification and a bit of fun. Others will highly value the opportunity to improve their
level of knowledge.
More direct access to the COE: The ability to have connections in the COE is
valuable. It's depicted in the diagram shown earlier in this article.
Champion of the month: Publicly thank one of your champions for something
outstanding they did recently. It could be a fun tradition at the beginning of a
monthly lunch and learn.
A private experts discussion area: A private area for the champions to share ideas
and learn from each other is usually highly valued.
Specialized or deep dive information and training: Access to additional
information to help champions grow their skillsets (as well as help their colleagues)
will be appreciated. It could include attending advanced training classes or
conferences.
Communication plan
Communication with the community occurs through various types of communication
channels. Common communication channels include:
The most critical communication objectives include ensuring your community members
know that:
Tip
Consider requiring a simple quiz before a user is granted a Power BI or Fabric
license. Calling it a quiz is a bit of a misnomer because it doesn't focus on any technical skills.
Rather, it's a short series of questions to verify that the user knows where to find
help and resources. It sets them up for success. It's also a great opportunity to have
users acknowledge any governance policies or data privacy and protection
agreements you need them to be aware of. For more information, see the System
oversight article.
Types of communication
There are generally four types of communication to plan for:
Tip
One-way communication to the user community is important. Don't forget to also
include bidirectional communication options to ensure the user community has an
opportunity to provide feedback.
Community resources
Resources for the internal community, such as documentation, templates, and training,
are critical for adoption success. For more information about resources, see the
Mentoring and user enablement article.
Checklist - Considerations and key actions you can take for the community of practice
follow.
" Clarify goals: Clarify what your specific goals are for cultivating a champions
network. Make sure these goals align with your overall data and BI strategy, and
that your executive sponsor is on board.
" Create a plan for the champions network: Although some aspects of a champions
network will always be informally led, determine to what extent the COE will
purposefully cultivate and support champion efforts throughout individual business
units. Consider how many champions are ideal for each functional business area. Usually, one or two champions per area work well, but the number can vary based on the size of the team, the needs of the self-service community, and how the COE is structured.
" Decide on commitment level for champions: Decide what level of commitment
and expected time investment will be required of champions. Be aware that the
time investment will vary from person to person, and team to team due to different
responsibilities. Plan to clearly communicate expectations to people who are
interested in getting involved. Obtain manager approval when appropriate.
" Decide how to identify champions: Determine how you will respond to requests to
become a champion, and how the COE will seek out champions. Decide if you will
openly encourage interested employees to self-identify as a champion and ask to
learn more (less common). Or, whether the COE will observe efforts and extend a
private invitation (more common).
" Determine how members of the champions network will be managed: One
excellent option for managing who the champions are is with a security group.
Consider:
How you will communicate with the champions network (for example, in a Teams
channel, a Yammer group, and/or an email distribution list).
How the champions network will communicate and collaborate with each other
directly (across organizational boundaries).
Whether a private and exclusive discussion forum for champions and COE
members is appropriate.
" Plan resources for champions: Ensure members of the champions network have
the resources they need, including:
Direct access to COE members.
Influence on data policies being implemented (for example, requirements for a
semantic model certification policy).
Influence on the creation of best practices and guidance (for example,
recommendations for accessing a specific source system).
" Involve champions: Actively involve certain champions as satellite members of the
COE. For more information about ways to structure the COE, see the Center of
Excellence article.
" Create a feedback loop for champions: Ensure that members of the champions
network can easily provide information or submit suggestions to the COE.
" Routinely provide recognition and incentives for champions: Not only is praise an
effective motivator, but the act of sharing examples of successful efforts can
motivate and inspire others.
Introduce incentives:
" Identify incentives for champions: Consider what type of incentives you could offer
to members of your champions network.
" Identify incentives for community members: Consider what type of incentives you
could offer to your broader internal community.
Improve communications:
" Establish communication methods: Evaluate which methods of communication fit
well in your data culture. Set up different ways to communicate, including history
retention and search.
" Identify responsibility: Determine who will be responsible for different types of
communication, how, and when.
Questions to ask
Use questions like those found below to assess the community of practice.
Maturity levels
The following maturity levels will help you assess the current state of your community of
practice.
100: Initial • Some self-service content creators are doing great work throughout the
organization. However, their efforts aren't recognized.
• Goals for transparent communication with the user community are defined.
400: Capable • Champions are identified for all business units. They actively support colleagues
in their self-service efforts.
500: Efficient • Bidirectional feedback loops exist between the champions network and the COE.
• Automation is in place when it adds direct value to the user experience (for
example, automatic access to a group that provides community resources).
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about user
support.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
This article addresses user support. It focuses primarily on the resolution of issues.
The first sections of this article focus on user support aspects you have control over
internally within your organization. The final topics focus on external resources that are
available.
The following diagram shows some common types of user support that organizations
employ successfully:
The six types of user support shown in the above diagram include:
Intra-team support (internal) is very informal. Support occurs when team members learn
from each other during the natural course of their job.
Help desk support (internal) handles formal support issues and requests.
Extended support (internal) involves handling complex issues escalated by the help desk.
Microsoft support (external) includes support for licensed users and Fabric
administrators. It also includes comprehensive documentation.
In some organizations, intra-team and internal community support are most relevant for
self-service data and business intelligence (BI)—content is owned and managed by
creators and owners in decentralized business units. Conversely, the help desk and
extended support are reserved for technical issues and enterprise data and BI (content is
owned and managed by a centralized BI team or Center of Excellence). In some organizations, all four of these internal types of support could be relevant for any type of content.
Tip
Each of the six types of user support introduced above is described in further detail in this article.
Intra-team support
Intra-team support refers to when team members learn from and help each other during
their daily work. Self-service content creators who emerge as your champions tend to
take on this type of informal support role voluntarily because they have an intrinsic
desire to help. Although it's an informal support mode, it shouldn't be undervalued.
Some estimates indicate that a large percentage of learning at work is peer learning,
which is particularly helpful for analysts who are creating domain-specific analytics
solutions.
7 Note
Intra-team support does not work well for individuals who are the only data analyst
within a department. It's also not effective for those who don't have very many
connections yet in their organization. When there aren't any close colleagues to
depend on, other types of support, as described in this article, become more
important.
Tip
Be sure to cultivate multiple experts in the more difficult topics like T-SQL,
Python, Data Analysis eXpressions (DAX) and the Power Query M formula
language. When a community member becomes a recognized expert, they
could become overburdened with too many requests for help.
A greater number of community members might readily answer certain types
of questions (for example, report visualizations), whereas a smaller number of
members will answer others (for example, complex T-SQL or DAX). It's
important for the COE to allow the community a chance to respond yet also
be willing to promptly handle unanswered questions. If users repeatedly ask
questions and don't receive an answer, it will significantly hinder growth of
the community. In this case, a user is likely to leave and never return if they
don't receive any responses to their questions.
One benefit of an internal discussion channel is that responses can come from people
that the original requester has never met before. In larger organizations, a community of
practice brings people together based on a common interest. It can offer diverse
perspectives for getting help and learning in general.
Use of an internal community discussion channel allows the Center of Excellence (COE)
to monitor the kind of questions people are asking. It's one way the COE can understand
the issues users are experiencing (commonly related to content creation, but it could
also be related to consuming content).
Monitoring the discussion channel can also reveal additional analytics experts and
potential champions who were previously unknown to the COE.
) Important
It's a best practice to continually identify emerging champions, and to engage with
them to make sure they're equipped to support their colleagues. As described in
the Community of practice article, the COE should actively monitor the discussion
channel to see who is being helpful. The COE should deliberately encourage and
support community members. When appropriate, invite them into the champions
network.
Another key benefit of a discussion channel is that it's searchable, which allows other
people to discover the information. It is, however, a change of habit for people to ask
questions in an open forum rather than private messages or email. Be sensitive to the
fact that some individuals aren't comfortable asking questions in such a public way. It
openly acknowledges what they don't know, which might be embarrassing. This
reluctance might reduce over time by promoting a friendly, encouraging, and helpful
discussion channel.
Tip
You might be tempted to create a bot to handle some of the most common,
straightforward questions from the community. A bot can work for uncomplicated
questions such as "How do I request a license?" or "How do I request a
workspace?" Before taking this approach, consider if there are enough routine and
predictable questions that would make the user experience better rather than
worse. Often, a well-created FAQ (frequently asked questions) works better, and it's
faster to develop and easier to maintain.
There are also certain technical issues that can't be fully resolved without IT involvement,
like software installation and upgrade requests when machines are IT-managed.
Busy help desk personnel are usually dedicated to supporting multiple technologies. For
this reason, the easiest types of issues to support are those which have a clear resolution
and can be documented in a knowledgebase. For instance, software installation
prerequisites or requirements to get a license.
Some organizations ask the help desk to handle only very simple break-fix issues. Other
organizations have the help desk get involved with anything that is repeatable, like new
workspace requests, managing gateway data sources, or requesting a new capacity.
) Important
Your Fabric governance decisions will directly impact the volume of help desk
requests. For example, if you choose to limit workspace creation permissions in
the tenant settings, it will result in users submitting help desk tickets. While it's a
legitimate decision to make, you must be prepared to satisfy the request very
quickly. Respond to this type of request within 1-4 hours, if possible. If you delay
too long, users will use what they already have or find a way to work around your
requirements. That might not be the ideal scenario. Promptness is critical for certain
help desk requests. Consider that automation by using Power Apps and Power
Automate can help make some processes more efficient. For more information, see
Tenant-level workspace planning.
Over time, troubleshooting and problem resolution skills become more effective as help
desk personnel expand their knowledgebase and experience with supporting Fabric. The
best help desk personnel are those who have a good grasp of what users need to
accomplish.
Tip
Purely technical issues, for example data refresh failure or the need to add a new
user to a gateway data source, usually involve straightforward responses
associated with a service-level agreement (SLA). For instance, there could be an SLA
to respond to blocking issues within one hour and resolve them within eight hours.
It's generally more difficult to define SLAs for troubleshooting issues, like data
discrepancies.
Extended support
Since the COE has deep insight into how Fabric is used throughout the organization,
they're a great option for extended support should a complex issue arise. Involving the COE in the support process should happen through an escalation path.
It can be difficult to manage requests purely as an escalation path from the help desk because COE members are often well known to business users. To encourage the habit of going through the proper channels, COE members should redirect users to submit a help desk ticket. Doing so also improves the data quality for analyzing help desk requests.
Microsoft support
In addition to the internal user support approaches discussed in this article, there are
valuable external support options directly available to users and Fabric administrators
that shouldn't be overlooked.
Microsoft documentation
Check the Fabric support website for high-priority issues that broadly affect all
customers. Global Microsoft 365 administrators have access to additional support issue
details within the Microsoft 365 portal.
Refer to the comprehensive Fabric documentation. It's an authoritative resource that can aid in troubleshooting and finding information. You can prioritize results from the documentation site by entering a site-targeted search request into your web search engine, such as power bi gateway site:learn.microsoft.com.
Tip
Make it clear to your internal user community whether you prefer technical issues
to be reported to the internal help desk. If your help desk is equipped to handle the
workload, having a centralized internal area collect user issues can provide a
superior user experience versus every user trying to resolve issues on their own.
Having visibility and analyzing support issues is also helpful for the COE.
Administrator support
There are several support options available for Fabric administrators.
For customers who have a Microsoft Unified Support contract, consider granting help
desk and COE members access to the Microsoft Services Hub . One advantage of the
Microsoft Services Hub is that your help desk and COE members can be set up to
submit and view support requests.
Community documentation
The Fabric global community is vibrant. Every day, there are a great number of Fabric
blog posts, articles, webinars, and videos published. When relying on community
information for troubleshooting, watch out for:
How recent the information is. Try to verify when it was published or last updated.
Whether the situation and context of the solution found online truly fits your
circumstance.
The credibility of the information being presented. Rely on reputable blogs and
sites.
Checklist - Considerations and key actions you can take for user support follow.
Improve your help desk support:
" Determine help desk responsibilities: Decide the initial scope of Fabric support topics that the help desk will handle.
" Assess the readiness level: Determine whether your help desk is prepared to handle
Fabric support. Identify whether there are readiness gaps to be addressed.
" Arrange for additional training: Conduct knowledge transfer sessions or training
sessions to prepare the help desk staff.
" Update the help desk knowledgebase: Include known questions and answers in a
searchable knowledgebase. Ensure someone is responsible for regular updates to
the knowledgebase to reflect new and enhanced features over time.
" Set up a ticket tracking system: Ensure a good system is in place to track requests
submitted to the help desk.
" Decide whether anyone will be on-call for any issues related to Fabric: If
appropriate, ensure the expectations for 24/7 support are clear.
" Determine what SLAs will exist: When a specific service level agreement (SLA)
exists, ensure that expectations for response and resolution are clearly documented
and communicated.
" Be prepared to act quickly: Be prepared to address specific common issues
extremely quickly. Slow support response will result in users finding workarounds.
" Determine how escalated support will work: Decide what the escalation path will
be for requests the help desk cannot directly handle. Ensure that the COE (or
equivalent personnel) is prepared to step in when needed. Clearly define where
help desk responsibilities end, and where COE extended support responsibilities
begin.
" Encourage collaboration between COE and system administrators: Ensure that
COE members and Fabric administrators have a direct escalation path to reach
global administrators for Microsoft 365 and Azure. It's critical to have a
communication channel when a widespread issue arises that's beyond the scope of
Fabric.
" Create a feedback loop from the COE back to the help desk: When the COE learns
of new information, the IT knowledgebase should be updated. The goal is for the
primary help desk personnel to continually become better equipped at handling
more issues in the future.
" Create a feedback loop from the help desk to the COE: When support personnel
observe redundancies or inefficiencies, they can communicate that information to
the COE, who might choose to improve the knowledgebase or get involved
(particularly if it relates to governance or security).
Questions to ask
U Caution
When assessing user support and describing risks or issues, be careful to use
neutral language that doesn't place blame on individuals or teams. Ensure
everyone's perspective is fairly represented in an assessment. Focus on objective
facts to accurately understand and describe the context.
Maturity levels
The following maturity levels will help you assess the current state of your Power BI user
support.
100: Initial
• Individual business units find effective ways of supporting each other. However, the tactics and practices are siloed and not consistently applied.

200: Repeatable
• The COE actively encourages intra-team support and growth of the champions network.
• The internal discussion channel gains traction. It's become known as the default place for questions and discussions.
• The help desk handles a small number of the most common technical support issues.

300: Defined
• The internal discussion channel is popular and largely self-sustaining. The COE actively monitors and manages the discussion channel to ensure that all questions are answered quickly and correctly.

400: Capable
• The help desk is fully trained and prepared to handle a broader number of known and expected technical support issues.
• SLAs are in place to define help desk support expectations, including extended support. The expectations are documented and communicated so they're clear to everyone involved.

500: Efficient
• Bidirectional feedback loops exist between the help desk and the COE.
• Automation is in place to allow the help desk to react faster and reduce errors (for example, use of APIs and scripts).
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about system
oversight and administration activities.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
) Important
Your organizational data culture objectives provide direction for your governance
decisions, which in turn dictate how Fabric administration activities take place and
by whom.
System oversight is a broad and deep topic. The goal of this article is to introduce some
of the most important considerations and actions to help you become successful with
your organizational adoption objectives.
Fabric administrators
The Fabric administrator role is a defined role in Microsoft 365, which delegates a subset
of management activities. Global Microsoft 365 administrators are implicitly Fabric
administrators. Power Platform administrators are also implicitly Fabric administrators.
) Important
Tip
The best type of person to serve as a Fabric administrator is one who has enough
knowledge about the tools and workloads to understand what self-service users
need to accomplish. With this understanding, the administrator can balance user
empowerment and governance.
In addition to the Fabric administrator, there are other roles which use the term
administrator. The following table describes the roles that are commonly and regularly
used.
Fabric administrator (scope: tenant): Manages tenant settings and other settings in the Fabric portal. All general references to administrator in this article refer to this type of administrator.
Capacity administrator (scope: one capacity): Manages workspaces and workloads, and monitors the health of a Fabric capacity.
Data gateway administrator (scope: one gateway): Manages gateway data source configuration, credentials, and user assignments. Might also handle gateway software updates (or collaborate with the infrastructure team on updates).
The Fabric ecosystem of workloads is broad and deep. There are many ways that Fabric
integrates with other systems and platforms. From time to time, it'll be necessary to
work with other administrators and IT professionals. For more information, see
Collaborate with other administrators.
The remainder of this article provides an overview of the most common activities that a
Fabric administrator does. It focuses on activities that are important to carry out
effectively when taking a strategic approach to organizational adoption.
Service management
Overseeing the tenant is crucial to ensuring that all users have a good experience with Power BI. A few of the key governance responsibilities of a Fabric administrator include:
Tenant settings: Control which Power BI features and capabilities are enabled, and
for which users in your organization.
Domains: Group together two or more workspaces that have similar
characteristics.
Workspaces: Review and manage workspaces in the tenant.
Embed codes: Govern which reports have been published publicly on the internet.
Organizational visuals: Register and manage organizational visuals.
Azure connections: Integrate with Azure services to provide additional
functionality.
How will users request access to new tools? Will access to licenses, data, and
training be available to help users use tools effectively?
How will content consumers view content that's been published by others?
How will content creators develop, manage, and publish content? What's your
criteria for deciding which tools and applications are appropriate for which use
cases?
How will you install and set up tools? Does that include related prerequisites and
data connectivity components?
How will you manage ongoing updates for tools and applications?
Architecture
In the context of Fabric, architecture relates to data architecture, capacity management,
and data gateway architecture and management.
Data architecture
Data architecture refers to the principles, practices, and methodologies that govern and
define what data is collected, and how it's ingested, stored, managed, integrated,
modeled, and used.
There are many data architecture decisions to make. Frequently the COE engages in
data architecture design and planning. It's common for administrators to get involved as
well, especially when they manage databases or Azure infrastructure.
) Important
Where does Fabric fit into the organization's entire data architecture? Are there
other existing components such as an enterprise data warehouse (EDW) or a data
lake that will be important to factor into plans?
Is Fabric used end-to-end for data preparation, data modeling, and data
presentation or is Fabric used for only some of those capabilities?
Are managed self-service patterns followed to find the best balance between data
reusability and report creator flexibility?
Where will users consume the content? Generally, the three main ways to deliver
content are: the Fabric portal, Power BI Report Server, and embedded in custom
applications. Additionally, Microsoft Teams is a convenient alternative for users
who spend a lot of time in Teams.
Who is responsible for managing and maintaining the data architecture? Is it a
centralized team, or a decentralized team? How is the COE represented in this
team? Are certain skillsets required?
What data sources are the most important? What types of data will we be
acquiring?
What semantic model connectivity mode and storage mode choices (for example,
Direct Lake, import, live connection, DirectQuery, or composite model frameworks)
are the best fit for the use cases?
To what extent is data reusability encouraged using lakehouses, warehouses, and
shared semantic models?
To what extent is the reusability of data preparation logic and advanced data
preparation encouraged by using data pipelines, notebooks, and dataflows?
It's important for administrators to become fully aware of Fabric's technical capabilities
—as well as the needs and goals of their stakeholders—before they make architectural
decisions.
Tip
Get into the good habit of completing a technical proof of concept (POC) to test
out assumptions and ideas. Some organizations also call them micro-projects when
the goal is to deliver a small unit of work. The goal of a POC is to address
unknowns and reduce risk as early as possible. A POC doesn't have to be
throwaway work, but it should be narrow in scope. Best practices reviews, as
described in the Mentoring and user enablement article, are another useful way to
help content creators with important architectural decisions.
Capacity management
Capacity includes features and capabilities to deliver analytics solutions at scale. There
are two types of Fabric organizational licenses: Premium per User (PPU) and capacity.
There are several types of capacity licenses. The type of capacity license determines
which Fabric workloads are supported.
) Important
The use of capacity can play a significant role in your strategy for creating, managing,
publishing, and distributing content. A few of the top reasons to invest in capacity
include:
The above list isn't all-inclusive. For a complete list, see Power BI Premium features.
U Caution
Define who is responsible for managing the capacity. Confirm the roles and
responsibilities so that it's clear what action will be taken, why, when, and by
whom.
Create a specific set of criteria for content that will be published to capacity. It's
especially relevant when a single capacity is used by multiple business units
because the potential exists to disrupt other users if the capacity isn't well-
managed. Consider requiring a best practices review (such as reasonable semantic
model size and efficient calculations) before publishing new content to a
production capacity.
Regularly use the Fabric capacity metrics app to understand resource utilization
and patterns for the capacity. Most importantly, look for consistent patterns of
overutilization, which will contribute to user disruptions. An analysis of usage
patterns should also make you aware if the capacity is underutilized, indicating
more value could be gained from the investment.
Set the tenant setting so Fabric notifies you if the capacity becomes overloaded ,
or if an outage or incident occurs.
Autoscale
Autoscale is intended to handle occasional or unexpected bursts in capacity usage
levels. Autoscale can respond to these bursts by automatically increasing CPU resources
to support the increased workload.
Automated scaling up reduces the risk of performance and user experience challenges
in exchange for a financial impact. If the capacity isn't well-managed, autoscale might
trigger more often than expected. In this case, the metrics app can help you to
determine underlying issues and do capacity planning.
Decentralized capacity management
Capacity administrators are responsible for assigning workspaces to a specific capacity.
Be aware that workspace administrators can also assign a workspace to PPU if they have a PPU license. However, all other workspace users would also need a PPU license to collaborate on, or view, Power BI content in the workspace. Other Fabric workloads can't be included in a workspace assigned to PPU.
Here's an example that describes one way you could manage your capacity.
The limits per capacity are lower. The maximum memory size allowed for semantic
models isn't the entire P3 capacity node size that was purchased. Rather, it's the
assigned capacity size where the semantic model is hosted.
It's more likely one of the smaller capacities will need to be scaled up at some
point in time.
There are more capacities to manage in the tenant.
7 Note
Resources for Power BI Premium per Capacity are referred to as v-cores. However, a
Fabric capacity refers to them as capacity units (CUs). The scale for CUs and v-cores
is different for each SKU. For more information, see the Fabric licensing
documentation.
Tip
The decision of who can install gateway software is a governance decision. For
most organizations, use of the data gateway in standard mode, or a virtual network
data gateway, should be strongly encouraged. They're far more scalable,
manageable, and auditable than data gateways in personal mode.
Decentralized gateway management works best when it's a joint effort as follows.
Managed by centralized data owners (includes data sources that are used broadly across
the organization; management is centralized to avoid duplicated data sources):
Managed by IT:
Tip
User licenses
Every user needs a commercial license, which is integrated with a Microsoft Entra
identity. The user license could be Free, Pro, or Premium Per User (PPU).
7 Note
Although each user requires a license, a Pro or PPU license is only required to share
Power BI content. Users with a free license can create and share Fabric content
other than Power BI items.
Self-service purchasing
An important governance decision relates to what extent self-service purchasing will be
allowed or encouraged.
There are serious cost concerns that would make it unlikely to grant full licenses at
the end of the trial period.
Prerequisites are required for obtaining a license (such as approval, justification, or
a training requirement). It's not sufficient to meet this requirement during the trial
period.
There's a valid need, such as a regulatory requirement, to control access to the
Fabric service closely.
Tip
Don't introduce too many barriers to obtaining a Fabric license. Users who need to
get work done will find a way, and that way might involve workarounds that aren't
ideal. For instance, without a license to use Fabric, people might rely far too much
on sharing files on a file system or via email when significantly better approaches
are available.
Cost management
Managing and optimizing the cost of cloud services, like Fabric, is an important activity.
Here are several activities you can consider.
Analyze who is using—and, more to the point, not using—their allocated Fabric licenses, and make necessary adjustments. Fabric usage can be analyzed by using the activity log (see the sketch after this list).
Analyze the cost effectiveness of capacity versus Premium Per User. Beyond the additional features, perform a cost/benefit analysis to determine whether capacity licensing is more cost-effective when there's a large number of consumers.
Carefully monitor and manage Fabric capacity. Understanding usage patterns over
time will allow you to predict when to purchase more capacity. For example, you
might choose to scale up a single capacity from a P1 to P2, or scale out from one
P1 capacity to two P1 capacities.
If there are occasional spikes in the level of usage, use of autoscale with Fabric is
recommended to ensure the user experience isn't interrupted. Autoscale will scale
up capacity resources for 24 hours, then scale them back down to normal levels (if
sustained activity isn't present). Manage autoscale cost by constraining the
maximum number of v-cores, and/or with spending limits set in Azure. Due to the
pricing model, autoscale is best suited to handle occasional unplanned increases in
usage.
For Azure data sources, co-locate them in the same region as your Fabric tenant whenever possible to avoid incurring Azure egress charges. Data egress charges are minimal, but at scale they can add up to considerable unplanned costs.
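To illustrate the first activity in the list above, the following Python sketch retrieves one day of user activity from the admin activity log by using the Power BI Get Activity Events REST API. It's a minimal example rather than a production auditing solution: it assumes you've already acquired a Microsoft Entra access token with Fabric administrator permissions (read here from a hypothetical FABRIC_TOKEN environment variable), and it writes the raw events to a local file that you'd normally land in more durable storage.

```python
import json
import os

import requests

# Assumption: an access token with Fabric administrator permissions has already
# been acquired (for example, with MSAL) and stored in this environment variable.
token = os.environ["FABRIC_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}

# The Get Activity Events API returns events for a single UTC day per request,
# and the date-time values are passed enclosed in single quotes.
start = "'2024-06-01T00:00:00.000Z'"
end = "'2024-06-01T23:59:59.999Z'"
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    f"?startDateTime={start}&endDateTime={end}"
)

events = []
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    # Results are paged; follow continuationUri until the last result set is returned.
    url = None if payload.get("lastResultSet") else payload.get("continuationUri")

# Store the raw events so they can be curated later, for example to find
# licensed users who show little or no activity over a period of time.
with open("activity-events-2024-06-01.json", "w") as file:
    json.dump(events, file)

print(f"Retrieved {len(events)} activity events.")
```

Comparing the distinct users in these events against your license assignments is one straightforward way to spot allocated licenses that aren't being used.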
The Power BI security whitepaper is an excellent resource for understanding the breadth
of considerations, including aspects that Microsoft manages. This section will introduce
several topics that customers are responsible for managing.
User responsibilities
Some organizations ask Fabric users to accept a self-service user acknowledgment. It's a
document that explains the user's responsibilities and expectations for safeguarding
organizational data.
One way to automate its implementation is with a Microsoft Entra terms of use policy.
The user is required to view and agree to the policy before they're permitted to visit the
Fabric portal for the first time. You can also require it to be acknowledged on a recurring
basis, like an annual renewal.
Data security
In a cloud shared responsibility model, securing the data is always the responsibility of the customer. With a self-service data platform, self-service content creators have responsibility for properly securing the content that they share with colleagues.
The COE should provide documentation and training where relevant to assist content
creators with best practices (particularly for situations that involve ultra-sensitive data). Administrators can help by following best practices themselves. Administrators can also raise concerns when they discover issues while managing workspaces, auditing user activities, or managing gateway credentials and users. There
are also several tenant settings that are usually restricted except for a few users (for
instance, the ability to publish to web or the ability to publish apps to the entire
organization).
External user access is controlled by tenant settings and certain Microsoft Entra ID
settings. For details of external user considerations, review the Distribute Power BI
content to external guest users using Microsoft Entra B2B whitepaper.
Data residency
For organizations with requirements to store data within a geographic region, Fabric
capacity can be set for a specific region that's different from the home region of the
Fabric tenant.
Encryption keys
Microsoft handles encryption of data at rest in Microsoft data centers with transparent
server-side encryption and auto-rotation of certificates. For customers with regulatory
requirements to manage the Premium encryption key themselves, Premium capacity can
be configured to use Azure Key Vault. Using customer-managed keys—also known as
bring-your-own-key or BYOK—is a precaution to ensure that, in the event of a human
error by a service operator, customer data can't be exposed.
Be aware that Premium Per User (PPU) only supports BYOK when it's enabled for the
entire Fabric tenant.
There are different ways to approach auditing and monitoring depending on your role
and your objectives. The following articles describe various considerations and planning
activities.
Report-level auditing: Techniques that report creators can use to understand
which users are using the reports that they create, publish, and share.
Data-level auditing: Methods that data creators can use to track the performance
and usage patterns of data assets that they create, publish, and share.
Tenant-level auditing: Key decisions and actions administrators can take to create
an end-to-end auditing solution.
Tenant-level monitoring: Tactical actions administrators can take to monitor the
Power BI service, including updates and announcements.
REST APIs
The Power BI REST APIs and the Fabric REST APIs provide a wealth of information about
your Fabric tenant. Retrieving data by using the REST APIs should play an important role
in managing and governing a Fabric implementation. For more information about
planning for the use of REST APIs for auditing, see Tenant-level auditing.
You can retrieve auditing data to build an auditing solution, manage content
programmatically, or increase the efficiency of routine actions. The following table
presents some actions you can perform with the REST APIs.
Audit content shared to the entire organization: REST API to check use of widely shared links.
Manage gateway data sources: REST API to update credentials for a gateway data source.
Programmatically retrieve a query result from a semantic model: REST API to run a DAX query against a semantic model.
Tip
There are many other Power BI REST APIs. For a complete list, see Using the Power
BI REST APIs.
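As a concrete illustration of the last action in the table, here's a minimal sketch that runs a DAX query against a semantic model by using the Power BI Execute Queries REST API. The semantic model ID, the table name in the DAX query, and the POWERBI_TOKEN environment variable are placeholders for this example; adapt them to your environment, and confirm that the caller has permission to query the model.

```python
import os

import requests

# Assumption: an access token with permission to query the semantic model
# (for example, the Dataset.Read.All scope) is available in this variable.
token = os.environ["POWERBI_TOKEN"]

dataset_id = "00000000-0000-0000-0000-000000000000"  # placeholder semantic model ID
url = f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries"

body = {
    # A simple DAX query; replace 'Sales' with a table in your semantic model.
    "queries": [{"query": "EVALUATE TOPN(10, 'Sales')"}],
    "serializerSettings": {"includeNulls": True},
}

response = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
response.raise_for_status()

# Each query returns one or more tables of rows.
rows = response.json()["results"][0]["tables"][0]["rows"]
for row in rows:
    print(row)
```

The same pattern works for many routine administrative actions: call the REST API, check the response, and feed the result into your auditing or automation solution.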
) Important
Don't underestimate the importance of staying current. If you get a few months
behind on announcements, it can become difficult to properly manage Fabric and
support your users.
Checklist - Considerations and key actions you can take for system oversight follow.
Improve system oversight:
" Review tenant settings: Conduct a review of all tenant settings to ensure they're
aligned with data culture objectives and governance guidelines and policies. Verify
which groups are assigned for each setting.
" Document the tenant settings: Create documentation of your tenant settings for
the internal Fabric community and post it in the centralized portal. Include which
groups a user would need to request to be able to use a feature. Use the Get Tenant Settings REST API to make the process more efficient, and to create snapshots of the settings on a regular basis (a sketch of this approach follows this checklist).
" Customize the Get Help links: When user resources are established, as described in
the Mentoring and user enablement article, update the tenant setting to customize
the links under the Get Help menu option. It will direct users to your
documentation, community, and help.
" Create a consistent onboarding process: Review your process for how onboarding
of new content creators is handled. Determine if new requests for software, such as
Power BI Desktop, and user licenses (Free, Pro, or PPU) can be handled together. It
can simplify onboarding since new content creators won't always know what to ask
for.
" Handle user machine updates: Ensure an automated process is in place to install
and update software, drivers, and settings to ensure all users have the same version.
" Assess what your end-to-end data architecture looks like: Make sure you're clear
on:
How Fabric is currently used by the different business units in your organization
versus how you want Fabric to be used. Determine if there's a gap.
If there are any risks that should be addressed.
If there are any high-maintenance situations to be addressed.
What data sources are important for Fabric users, and how they're documented
and discovered.
" Review existing data gateways: Find out what gateways are being used throughout
your organization. Verify that gateway administrators and users are set correctly.
Verify who is supporting each gateway, and that there's a reliable process in place
to keep the gateway servers up to date.
" Verify use of personal gateways: Check the number of personal gateways that are
in use, and by whom. If there's significant usage, take steps to move towards use of
the standard mode gateway.
" Review the process to request a user license: Clarify what the process is, including
any prerequisites, for users to obtain a license. Determine whether there are
improvements to be made to the process.
" Determine how to handle self-service license purchasing: Clarify whether self-
service licensing purchasing is enabled. Update the settings if they don't match
your intentions for how licenses can be purchased.
" Confirm how user trials are handled: Verify user license trials are enabled or
disabled. Be aware that all user trials are Premium Per User. They apply to Free
licensed users signing up for a trial, and Pro users signing up for a Premium Per
User trial.
" Clarify exactly what the expectations are for data protection: Ensure the
expectations for data protection, such as how to use sensitivity labels, are
documented and communicated to users.
" Determine how to handle external users: Understand and document the
organizational policies around sharing Fabric content with external users. Ensure
that settings in Fabric support your policies for external users.
" Set up monitoring: Investigate the use of Microsoft Defender for Cloud Apps to
monitor user behavior and activities in Fabric.
" Plan for auditing needs: Collect and document the key business requirements for
an auditing solution. Consider your priorities for auditing and monitoring. Make key
decisions related to the type of auditing solution, permissions, technologies to be
used, and data needs. Consult with IT to clarify what auditing processes currently exist, and what preferences or requirements exist for building a new solution.
" Consider roles and responsibilities: Identify which teams will be involved in
building an auditing solution, as well as the ongoing analysis of the auditing data.
" Extract and store user activity data: If you aren't currently extracting and storing
the raw data, begin retrieving user activity data.
" Extract and store snapshots of tenant inventory data: Begin retrieving metadata to
build a tenant inventory, which describes all workspaces and items.
" Extract and store snapshots of users and groups data: Begin retrieving metadata
about users, groups, and service principals.
" Create a curated data model: Perform data cleansing and transformations of the
raw data to create a curated data model that'll support analytical reporting for your
auditing solution.
" Analyze auditing data and act on the results: Create analytic reports to analyze the
curated auditing data. Clarify what actions are expected to be taken, by whom, and
when.
" Include additional auditing data: Over time, determine whether other auditing data
would be helpful to complement the activity log data, such as security data.
Tip
" Plan for your use of the REST APIs: Consider what data would be most useful to
retrieve from the Power BI REST APIs and the Fabric REST APIs.
" Conduct a proof of concept: Do a small proof of concept to validate data needs,
technology choices, and permissions.
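As referenced in the checklist item about documenting tenant settings, the following sketch captures a dated snapshot of tenant settings that you can compare over time. Treat it as a starting point under stated assumptions: the endpoint shown reflects the Get Tenant Settings admin API in the Fabric REST API documentation (verify it against the current reference before relying on it), and the FABRIC_TOKEN environment variable is assumed to hold a Fabric administrator's access token.

```python
import datetime
import json
import os

import requests

# Assumption: a Fabric administrator's access token has already been acquired
# (for example, with MSAL) and stored in this environment variable.
token = os.environ["FABRIC_TOKEN"]

# Endpoint per the Fabric REST admin API reference; confirm against the
# current Get Tenant Settings documentation.
url = "https://api.fabric.microsoft.com/v1/admin/tenantsettings"

response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
settings = response.json().get("tenantSettings", [])

# Write a dated snapshot so that changes to tenant settings can be tracked
# over time and documented for the internal Fabric community.
snapshot_name = f"tenant-settings-{datetime.date.today().isoformat()}.json"
with open(snapshot_name, "w") as file:
    json.dump(settings, file, indent=2)

print(f"Captured {len(settings)} tenant settings in {snapshot_name}.")
```

Comparing consecutive snapshots makes it easy to spot which settings changed, when, and whether the documentation in your centralized portal needs an update.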
Questions to ask
Use questions like those found below to assess system oversight.
Are there atypical administration settings enabled or disabled? For example, is the entire organization allowed to publish to the web? (We strongly advise restricting this feature.)
Do administration settings and policies align with, or do they inhibit, the way business users work?
Is there a process in place to critically appraise new settings and decide how to set
them? Alternatively, are only the most restrictive settings set as a precaution?
Are Microsoft Entra security groups used to manage who can do what?
Do central teams have visibility of effective auditing and monitoring tools?
Do monitoring solutions depict information about the data assets, user activities,
or both?
Are auditing and monitoring tools actionable? Are there clear thresholds and
actions set, or do monitoring reports simply describe what's in the data estate?
Is Azure Log Analytics used (or planned to be used) for detailed monitoring of
Fabric capacities? Are the potential benefits and cost of Azure Log Analytics clear
to decision makers?
Are sensitivity labels and data loss prevention policies used? Are the potential
benefits and cost of these clear to decision makers?
Do administrators know the current number of licenses and licensing cost? What
proportion of the total BI spend goes to Fabric capacity, and to Pro and PPU
licenses? If the organization is only using Pro licenses for Power BI content, could
the number of users and usage patterns warrant a cost-effective switch to Power BI
Premium or Fabric capacity?
Maturity levels
The following maturity levels will help you assess the current state of your Power BI
system oversight.
100: Initial
• Tenant settings are configured independently by one or more administrators based on their best judgment.
• Fabric activity logs are unused, or selectively used for tactical purposes.

200: Repeatable
• The tenant settings purposefully align with established governance guidelines and policies. All tenant settings are reviewed regularly.
• A well-defined process exists for users to request licenses and software. Request forms are easy for users to find. Self-service purchasing settings are specified.
• Sensitivity labels are configured in Microsoft 365. However, use of labels remains inconsistent. The advantages of data protection aren't well understood by users.

300: Defined
• The tenant settings are fully documented in the centralized portal for users to reference, including how to request access to the correct groups.
• An automated process is in place to export Fabric activity log and API data to a secure location for reporting and auditing.

400: Capable
• Administrators work closely with the COE and governance teams to provide oversight of Fabric. A balance of user empowerment and governance is successfully achieved.
• Automated policies are set up and actively monitored in Microsoft Defender for Cloud Apps for data loss prevention.
• Activity log and API data is actively analyzed to monitor and audit Fabric activities. Proactive action is taken based on the data.

500: Efficient
• The Fabric administrators work closely with the COE and actively stay current. Blog posts and release plans from the Fabric product team are reviewed frequently to plan for upcoming changes.
• Regular cost management analysis is done to ensure user needs are met in a cost-effective way.
• The Fabric REST API is used to retrieve tenant setting values on a regular basis.
• Activity log and API data is actively used to inform and improve adoption and governance efforts.
Related content
For more information about system oversight and Fabric administration, see the
following resources.
In the next article in the Microsoft Fabric adoption roadmap series, learn about effective
change management.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
When working toward improved data and business intelligence (BI) adoption, you
should plan for effective change management. In the context of data and BI, change
management includes procedures that address the impact of change for people in an
organization. These procedures safeguard against disruption and productivity loss due
to changes in solutions or processes.
7 Note
Helps content creators and consumers use analytics more effectively and sooner.
Limits redundancy in data, analytical tools, and solutions.
Reduces the likelihood of risk-creating behaviors that affect shared resources (like
Fabric capacity) or organizational compliance (like data security and privacy).
Mitigates resistance to change that obstructs planning and inhibits user adoption.
Mitigates the impact of change and improves user wellbeing by reducing the potential for disruption, stress, and conflict.
) Important
Tip
Consider the following types of change to manage when you plan for Fabric adoption.
Process-level changes
Process-level changes are changes that affect a broader user community or the entire
organization. These changes typically have a larger impact, and so they require more
effort to manage. Specifically, this change management effort includes specific plans
and activities.
7 Note
Solution-level changes
Solution-level changes are changes that affect a single solution or set of solutions.
These changes limit their impact to the user community of those solutions and their
dependent processes. Although solution-level changes typically have a lower impact,
they also tend to occur more frequently.
7 Note
In the context of this article, a solution is built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a lakehouse, a
semantic model, or a report. The considerations for change management described
in this article are relevant for all types of solutions, and not only reporting projects.
How you prepare change management plans and activities will depend on the types of
change. To successfully and sustainably manage change, we recommend that you
implement incremental changes.
The following steps outline how you can incrementally address change.
1. Define what's changing: Describe the change by outlining the before and after
states. Clarify the specific parts of the process or situation that you'll change,
remove, or introduce. Justify why this change is necessary, and when it should
occur.
2. Describe the impact of the change: For each of these changes, estimate the
business impact. Identify which processes, teams, or individuals the change affects,
and how disruptive it will be for them. Also consider any downstream effects the
change has on other dependent solutions or processes. Downstream effects might
result in other changes. Additionally, consider how long the situation remained the
same before it was changed. Changes to longer-standing processes tend to have a
higher impact, as preferences and dependencies arise over time.
3. Identify priorities: Focus on the changes with the highest potential impact. For each change, outline a more detailed description of the change and how it will affect people.
4. Plan how to incrementally implement the change: Identify whether any high-
impact changes can be broken into stages or parts. For each part, describe how it
might be incrementally implemented in phases to limit its impact. Determine
whether there are any constraints or dependencies (such as when changes can be
made, or by whom).
5. Create an action plan for each phase: Plan the actions you will take to implement
and support each phase of the change. Also, plan for how you can mitigate
disruption in high-impact phases. Be sure to include a rollback plan in your action
plan, whenever possible.
Tip
Iteratively plan how you'll implement each phase of these incremental changes as
part of your quarterly tactical planning.
When you plan to mitigate the impact of changes on Power BI adoption, consider the
activities described in the following sections.
What's changing: What the situation is now and what it will be after the change.
Why it's changing: The benefit and value of the change for the audience.
When it's changing: An estimation of when the change will take effect.
Further context: Where people can go for more information.
Contact information: Who people should contact to provide feedback, ask questions, or raise concerns.
) Important
You should communicate change with sufficient advance notice so that people are prepared. The higher the potential impact of the change, the earlier you should
communicate it. If unexpected circumstances prevent advance notice, be sure to
explain why in your communication.
Here are some actions you can take to plan for training and support.
Centralize training and support by using a centralized portal. The portal can help
organize discussions, collect feedback, and distribute training materials or
documentation by topic.
Consider incentives to encourage self-sustaining support within a community.
Schedule recurring office hours to answer questions and provide mentorship.
Create and demonstrate end-to-end scenarios for people to practice a new
process.
For high-impact changes, prepare training and support plans that realistically
assess the effort and actions needed to prevent the change from causing
disruption.
7 Note
These training and support actions will differ depending on the scale and scope of
the change. For high-impact, large-scale changes (like transitioning from enterprise
to managed self-service approaches to data and BI), you'll likely need to plan
iterative, multi-phase plans that span multiple planning periods. In this case,
carefully consider the effort and resources needed to deliver success.
U Caution
Resistance to change from the executive leadership is often a warning sign that
stronger business alignment is needed between the business and BI strategies. In
this scenario, consider specific alignment sessions and change management actions
with executive leadership.
Involve stakeholders
To effectively manage change, you can also take a bottom-up approach by engaging the
stakeholders, who are the people the change affects. When you create an action plan to
address the changes, identify and engage key stakeholders in focused, limited sessions.
In this way you can understand the impact of the change on the people whose work will
be affected by the change. Take note of their concerns and their ideas for how you
might lessen the impact of this change. Ensure that you identify any potentially
unexpected effects of the change on other people and processes.
Involve your executive sponsor: The authority, credibility, and influence of the
executive sponsor is essential to support change management and resolve
disputes.
Identify blocking issues: When change disrupts the way people work, it can prevent people from effectively completing tasks in their regular activities. For such blocking issues, identify potential workarounds that take the changes into account.
Focus on data and facts instead of opinions: Resistance to change is sometimes
due to opinions and preferences, because people are familiar with the situation
prior to the change. Understand why people have these opinions and preferences.
Perhaps it's due to convenience, because people don't want to invest time and
effort in learning new tools or processes.
Focus on business questions and processes instead of requirements: Changes
often introduce new processes to address problems and complete tasks. New
processes can lead to a resistance to change because people focus on what they
miss instead of fully understanding what's new and why.
To effectively manage change, you should identify and engage promoters early in the
process. You should involve them and inform them about the change to better utilize
and amplify their advocacy.
Tip
The promoters you identify might also be great candidates for your champions
network.
To effectively manage change, you should identify and engage detractors early in the
process. That way, you can mitigate the potential negative impact they have.
Furthermore, if you address their concerns, you might convert these detractors into
promoters, helping your adoption efforts.
Tip
A common source of detractors is content owners for solutions that are going to be
modified or replaced. The change can sometimes threaten these content owners,
who are incentivized to resist the change in the hope that their solution will remain
in use. In this case, identify these content owners early and involve them in the
change. Giving these individuals a sense of ownership of the implementation will help them embrace, and even advocate in favor of, the change.
Questions to ask
Maturity levels
The following maturity levels will help you assess your current state of change
management, as it relates to data and BI initiatives.
100: Initial
• Change is usually reactive, and it's also poorly communicated.
• No clear teams or roles are responsible for managing change for data initiatives.

200: Repeatable
• Executive leadership and decision makers recognize the need for change management in data and BI projects and initiatives.
• Some efforts are taken to plan or communicate change, but they're inconsistent and often reactive. Resistance to change is still common. Change often disrupts existing processes and tools.

300: Defined
• Formal change management plans or roles are in place. These plans include communication tactics and training, but they're not consistently or reliably followed. Change occasionally disrupts existing processes and tools.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, in conclusion, learn
about adoption-related resources that you might find valuable.
7 Note
This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.
This article concludes the series on Microsoft Fabric adoption. The strategic and tactical
considerations and action items presented in this series will assist you in your analytics
adoption efforts, and with creating a productive data culture in your organization.
Adoption introduction
Adoption maturity levels
Data culture
Executive sponsorship
Business alignment
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Change management
The rest of this article includes suggested next actions to take. It also includes other
adoption-related resources that you might find valuable.
A few key points are implied within the previous suggestions.
Focus on the near term: Although it's important to have an eye on the big picture,
we recommend that you focus primarily on the next quarter, next semester, and
next year. It's easier to assess, plan, and act when you focus on the near term.
Progress will be incremental: Changes that happen every day, every week, and
every month add up over time. It's easy to become discouraged and sense a lack
of progress when you're working on a large adoption initiative that takes time. If
you keep track of your incremental progress, you'll be surprised at how much you
can accomplish over the course of a year.
Changes will continually happen: Be prepared to reconsider decisions that you
make, perhaps every quarter. It's easier to cope with continual change when you
expect the plan to change.
Everything correlates together: As you progress through each of the steps listed
above, it's important that everything's correlated from the high-level strategic
organizational objectives, all the way down to more detailed action items. That
way, you'll know that you're working on the right things.
7 Note
Microsoft's BI transformation
Consider reading about Microsoft's journey and experience with driving a data culture.
This article describes the importance of two terms: discipline at the core and flexibility at
the edge. It also shares Microsoft's views and experience about the importance of
establishing a COE.
A wide variety of other adoption guides for individual technologies can be found online. A few examples include:
The Power CAT Adoption Maturity Model, published by the Power CAT team, describes repeatable patterns for successful Power Platform adoption.
The Power Platform Center of Excellence Starter Kit is a collection of components and tools to help you develop a strategy for adopting and supporting Microsoft Power Platform.
The Power Platform adoption best practices include a helpful set of documentation and best practices to help you align business and technical strategies.
The Maturity Model for Microsoft 365 provides information and resources to use capabilities more fully and efficiently.
Microsoft Learn has a learning path for using the Microsoft service adoption framework to drive adoption in your enterprise.
The Microsoft Cloud Adoption Framework for Azure is a collection of documentation, implementation guidance, best practices, and tools to accelerate your cloud adoption journey.
Industry guidance
The Data Management Book of Knowledge (DMBOK2) is a book available for
purchase from DAMA International. It contains a wealth of information about maturing
your data management practices.
7 Note
The additional resources provided in this article aren't required to take advantage
of the guidance provided in this Fabric adoption series. They're reputable resources
should you wish to continue your journey.
Partner community
Experienced partners are available to help your organization succeed with adoption
initiatives. To engage a partner, visit the Power BI partner portal .
This page lists known issues for Fabric and Power BI features. Before submitting a
Support request, review this list to see if the issue that you're experiencing is already
known and being addressed. Known issues are also available as an interactive
embedded Power BI report .
Issue ID | Product experience | Title | Issue publish date
1024 | Data Factory | CopyJob item deletion fails with error | February 14, 2025
1023 | Data Factory | Preview destination data on a pipeline's copy activity fails | February 14, 2025
1017 | Data Engineering | Unsupported error for legacy timestamp in Fabric Runtime 1.3 | February 5, 2025
1011 | Power BI | Models with specific gateway configuration might experience refresh issues | January 29, 2025
1004 | Data Engineering | Notebook and SJD job statuses are in progress in monitor hub | January 29, 2025
1003 | Databases | Copilot sidecar chat fails with certain private link settings | January 28, 2025
1002 | Power BI | Reports that use functions with RLS don't work | January 28, 2025
996 | Databases | Some SQL query syntax fails in a graph database query | January 28, 2025
990 | Real-Time Intelligence | KQL database loads continuously without an error | January 28, 2025
991 | Data Factory | Apache Airflow job creation shows Fabric upgrade message | January 13, 2025
989 | Data Factory | Local data access isn't allowed for pipeline using on-premises data gateway | January 13, 2025
988 | Real-Time Intelligence | Data activator events aren't ingested for Reflex events | January 13, 2025
986 | Power BI | Direct Lake query cancellation might cancel other queries | January 7, 2025
985 | Power BI | Direct Lake query cancellation causes model to fall back to DirectQuery | January 7, 2025
979 | Databases | SQL databases not available with private link through January 2025 | January 6, 2025
974 | Real-Time Intelligence | Show table command in KQL Queryset editor fails | January 6, 2025
976 | Power BI | Export-to-data disabled for a visual with visual calculation | December 17, 2024
966 | Power BI | Sync content from Git in workspace fails | December 11, 2024
968 | Power BI | Export data option is disabled for Q&A visual in the service | December 10, 2024
967 | Data Factory | Pipeline activities don't save if their data warehouse connection is changed | December 10, 2024
965 | Databases | SQL database creation fails to create child items when item with same name exists | December 10, 2024
957 | Data Factory | Creation failure for Copy job item in empty workspace | December 5, 2024
940 | Data Factory | Pipeline copy data to Kusto using an on-premises data gateway doesn't work | November 22, 2024
938 | Power BI | Line chart value-axis zoom sliders don't work with markers enabled | November 20, 2024
922 | Data Engineering | The default environment's resources folder doesn't work in notebooks | November 12, 2024
910 | Data Warehouse | SQL analytics endpoint tables lose statistics | October 31, 2024
909 | Data Warehouse | SQL analytics endpoint tables lose permissions | October 31, 2024
903 | Data Warehouse | Data warehouse data preview might fail if multiple data warehouse items | October 28, 2024
897 | OneLake | OneLake Shared Access Signature (SAS) can't read cross-region shortcuts | October 25, 2024
894 | Data Engineering | Pipeline fails when getting a token to connect to Kusto | October 25, 2024
895 | OneLake | Dataverse shortcut creation and read fails when organization is moved | October 23, 2024
893 | Power BI | Can't connect to semantic model from Excel or use Analyze in Excel | October 23, 2024
891 | Data Warehouse | Data warehouse tables aren't accessible or updatable | October 17, 2024
883 | Data Engineering | Spark jobs might fail due to Runtime 1.3 updates for GA | October 17, 2024
878 | Power BI | Premium capacity doesn't add excess usage into carry forward | October 10, 2024
819 | Power BI | Subscriptions and exports with maps might produce wrong results | October 10, 2024
877 | Data Factory | Data pipeline connection fails after connection creator role is removed | October 9, 2024
872 | Data Warehouse | Data warehouses don't show button friendly names | October 3, 2024
856 | Data Factory | Pipeline fails when copying data to data warehouse with staging | September 25, 2024
842 | Data Warehouse | Data warehouse exports using deployment pipelines or git fail | September 23, 2024
837 | Data Engineering | Monitoring hub displays incorrect queued duration | September 17, 2024
835 | Data Engineering | Managed private endpoint connection could fail | September 13, 2024
817 | Data Factory | Pipelines don't support Role property for Snowflake connector | August 23, 2024
816 | Data Factory | Pipeline deployment fails when parent contains deactivated activity | August 23, 2024
810 | Data Warehouse | Inserting nulls into Data Warehouse tables fail with incorrect error message | August 16, 2024
795 | Data Factory | Multiple installations of on-premises data gateway causes pipelines to fail | July 31, 2024
789 | Data Engineering | SQL analytics endpoint table queries fail due to RLE | July 24, 2024
774 | Data Factory | Data warehouse deployment using deployment pipelines fails | July 5, 2024
767 | Data Warehouse | SQL analytics endpoint table sync fails when table contains linked functions | July 2, 2024
757 | Data Factory | Copy activity from Oracle to lakehouse fails for Number data type | June 20, 2024
726 | Data Factory | Pipeline using XML format copy gets stuck | May 24, 2024
717 | Data Factory | West India region doesn't support on-premises data gateway for data pipelines | May 16, 2024
718 | OneLake | OneLake under-reports transactions in the Other category | May 13, 2024
643 | Data Engineering | Tables not available to add in Power BI semantic model | February 27, 2024
508 | Data Warehouse | User column incorrectly shows as System in Fabric capacity metrics app | October 5, 2023
506 | Data Warehouse | InProgress status shows in Fabric capacity metrics app for completed queries | October 5, 2023
454 | Data Warehouse | Warehouse's object explorer doesn't support case-sensitive object names | July 10, 2023
Issue ID | Product experience | Title | Issue publish date | Issue fixed date
1020 | Data Factory | Dataflow connector doesn't show dataflows with view only permissions | February 10, 2025 | Fixed: February 14, 2025
769 | Data Factory | Dataflows Gen2 staging lakehouse doesn't work in deployment pipelines | July 2, 2024 | Fixed: February 14, 2025
765 | Data Factory | Dataflows Gen2 staging warehouse doesn't work in deployment pipelines | July 2, 2024 | Fixed: February 14, 2025
591 | Data Factory | Type mismatch when writing decimals and dates to lakehouse using a dataflow | February 16, 2024 | Fixed: February 14, 2025
955 | Data Factory | Create Gateway public API doesn't work for service principals | December 5, 2024 | Fixed: February 5, 2025
898 | OneLake | External data sharing OneLake shortcuts don't show in SQL analytics endpoint | October 25, 2024 | Fixed: January 28, 2025
933 | Data Factory | New tile for Dataflow Gen2 (CI/CD, preview) isn't yet supported | November 22, 2024 | Fixed: January 13, 2025
809 | Data Factory | Dataflow Gen2 refresh fails due to missing SQL analytics endpoint | August 14, 2024 | Fixed: January 13, 2025
821 | Data Warehouse | Schema refresh for a data warehouse's semantic model fails | August 28, 2024 | Fixed: January 6, 2025
447 | Data Warehouse | Temp tables in Data Warehouse and SQL analytics endpoint | July 5, 2023 | Fixed: January 6, 2025
Related content
Go to the embedded interactive report version of this page
Service level outages
Get your questions answered by the Fabric community
When you attempt to delete a CopyJob item, the deletion doesn't work.
Status: Open
Symptoms
When you attempt to delete a CopyJob item, you receive an error. The error message
tells you that the deletion failed. Additionally, the CopyJob item isn't deleted.
Next steps
About known issues
In a pipeline, you can set up a copy activity. In the destination of the copy activity, you can preview the data. When you select the preview button, it fails with an error.
Status: Open
Symptoms
In a pipeline, you have a copy activity. In the copy activity, you select the Destination
tab > Preview data. The preview doesn't show and you receive an error.
Next steps
About known issues
You can't see dataflow data using the dataflow connector. The issue happens when
connecting to either a Dataflow Gen2 or Dataflow Gen2 (CI/CD preview) dataflow. You
only have view access to the workspace that contains the dataflow.
Symptoms
You have Viewer permission on the workspace that contains a Dataflow Gen2 dataflow
or Dataflow Gen2 (CI/CD preview) dataflow. In a different workspace or Power BI
Desktop, you use the dataflow connector to query the original dataflow data. You can't
see the original dataflow.
Next steps
About known issues
When using the native execution engine in Fabric Runtime 1.3, you might encounter an
error if your data contains legacy timestamps. This issue arises due to compatibility
challenges introduced when Spark 3.0 transitioned to the Java 8 date/time API, which
uses the Proleptic Gregorian calendar (SQL ISO standard). Earlier Spark versions utilized
a hybrid Julian-Gregorian calendar, resulting in potential discrepancies when processing
timestamp data created by different Spark versions.
Status: Open
Symptoms
When using legacy timestamp support in native execution engine for Fabric Runtime 1.3,
you receive an error. The error message is similar to: Error Source: USER. Error Code:
UNSUPPORTED. Reason: Reading legacy timestamp is not supported.
Next steps
About known issues
If you're a Power BI Premium customer, you can process models using a gateway. If the gateway configuration StreamBeforeRequestCompletes is set to true, you might experience refresh issues, such as delays or failures.
Status: Open
Symptoms
Refresh operations might take longer than expected.
Refresh failures due to out-of-memory exceptions.
Next steps
About known issues
You can trigger a notebook or Spark job definition (SJD) job's execution using the Fabric
public API with a service principal token. You can use the monitor hub to track the status
of the job. In this known issue, the job status is In-progress even after the execution of
the job completes.
Status: Open
Symptoms
In the monitor hub, you see a stuck job status of In-progress for a notebook or SJD job
that was submitted by a service principal.
Next steps
About known issues
Copilot sidecar chat fails when you enable private link on your Fabric tenant and disable public network access.
Status: Open
Symptoms
If you enable private link on your Fabric tenant and disable public network access, the
Copilot sidecar chat fails with an error. The error message is similar to: "I'm sorry, but
I encountered an error while answering your question. Please try again. " when you
submit any prompts. However, Copilot inline code completion and quick actions still
work as expected.
Next steps
About known issues
You can define row-level security (RLS) for a table that contains measures. The USERELATIONSHIP() and CROSSFILTER() functions can't be used in those measures.
Status: Open
Symptoms
When viewing a report, you see an error message. The error message is similar to:
" Error fetching data for this Visual. The UseRelationship() and Crossfilter()
functions may not be used when querying <dataset> because it is constrained by row
level security " or " The USERELATIONSHIP() and CROSSFILTER() functions may not be
used when querying 'T' because it is constrained by row-level security ."
Next steps
About known issues
When you try to run a query against a graph database in the Fabric SQL editor, some
graph database syntax doesn't work.
Status: Open
Symptoms
You can run a query against a graph database in the Fabric SQL editor. When the query
contains some graph database syntax, such as "->," the query fails.
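For illustration only, here's a minimal sketch of the kind of SQL graph query that's affected. The Person node table, friendOf edge table, and name column are hypothetical; the "->" arrow inside MATCH is the syntax that can fail:

```sql
-- Hypothetical node and edge tables; the "->" arrow inside MATCH is the part that can fail.
SELECT p1.name, p2.name
FROM Person AS p1, friendOf, Person AS p2
WHERE MATCH(p1-(friendOf)->p2);
```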
Next steps
About known issues
You can open an Eventhouse and select a database with tables. The main tile opens on
the Tables tab by default. All charts in the tiles are stuck in an infinite loading loop.
Status: Open
Symptoms
When you open a KQL database, it loads continuously without showing an error.
Next steps
About known issues
You can't use Git operations or deployment pipelines that require lakehouse items.
Symptoms
You can't sync your workspace to Git, commit to Git, or update from Git. Also, you can't
perform deployments using a deployment pipeline for lakehouse items.
Next steps
About known issues
In Data Factory, you must have a workspace tied to a valid Fabric capacity or Fabric trial
to create a new Apache Airflow job. You have the correct license and try to create an
Apache Airflow job. The creation fails and you receive a message asking you to upgrade
to Fabric.
Status: Open
Symptoms
When trying to create an Apache Airflow job, you receive an upgrade message and can't
create the job. The upgrade message is similar to: Upgrade to a free Microsoft Fabric
Trial .
Next steps
About known issues
For security considerations, local machine access is no longer allowed for a pipeline using an on-premises data gateway. To segregate storage and compute, you can't host a data store on the same machine where the on-premises data gateway is running.
Status: Open
Symptoms
You can try to access the local data source, such as REST, on the same server as the on-
premises data gateway. When you try to connect, you receive an error message and the
connection fails. The error message is similar to:
ErrorCode=RestResourceReadFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridD
eliveryException,Message=Fail to read from REST
resource.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.DataTransfer
.SecurityValidation.Exceptions.HostValidationException,Message=Access to <local Ip
address> is denied, resolved IP address is <local Ip address>, network type is
OnPremise,Source=Microsoft.DataTransfer.SecurityValidation,'
Next steps
About known issues
You can use Data Activator to get Fabric events from Reflex using an API call. You see failures in Data Activator and the events aren't ingested.
Status: Open
Symptoms
You see failures when you try to ingest Fabric events from Reflex.
Next steps
About known issues
You can use Direct Lake as a storage mode for your semantic model. If you cancel a
query on a Direct Lake semantic model table, the model might occasionally also cause
the cancellation of other queries which read the same table.
Status: Open
Symptoms
Queries might fail with a user cancellation error, despite the user not canceling the query. If a visual uses the query that was canceled, you might receive an error. The error message is similar to: Error fetching data for this visual. The operation was cancelled by the user.
Next steps
About known issues
You can use Direct Lake as a storage mode for your semantic model. If you cancel a
query on a Direct Lake semantic model table, the query falls back to DirectQuery mode.
At the same time, Direct Lake storage mode is disabled temporarily on the semantic
model.
Status: Open
Symptoms
On "Direct Lake Only" semantic models, queries/visuals might fail with transient error.
On "Automatic" mode semantic models, query performance might be temporarily
impacted. If a visual uses the query that was canceled, you might receive an error. The
error message is similar to: Error fetching data for this visual. The operation was
cancelled by the user.
Next steps
About known issues
You can't create or use SQL databases in tenants with private link enabled.
Status: Open
Symptoms
If you enabled private link on your Fabric tenant on or before November 19, 2024, you
don't see the option to create a new SQL Database. If you enabled private link after
November 19, 2024, you can't create the database and receive an error. The error
message is similar to Something went wrong .
Next steps
About known issues
You can set up Eventhouse monitoring, which includes a KQL database. In the KQL
database, you can select Create Power BI report to create a report. You receive an error,
and no report is created. The issue occurs because the report creation requires an active
query in the query pane.
Status: Open
Symptoms
When you select Create Power BI report, you receive an error. The error message is
similar to: Something went wrong. Try opening the report again. If the problem
continues, contact support and provide the details below.
Next steps
About known issues
You can try to query a table in the KQL Queryset editor. If you execute the .show table
<tableName> command, you receive an error.
Status: Open
Symptoms
When you try to execute the .show table <tableName> command in the KQL Queryset
editor, you receive an error. The error message is similar to Something went wrong. The
incident has been reported .
Next steps
About known issues
After renaming an eventstream item, you can try to open it. You receive a pop-up
notification indicating that the eventstream failed to open. Then, if you try to open
another eventstream in the same workspace, the opening also fails, displaying the same
error message. You can refresh the browser to allow the other eventstreams to open
successfully, but the renamed eventstream remains inaccessible.
Status: Open
Symptoms
You receive an error when you try to open a renamed eventstream. You also receive an
error when trying to open other eventstreams in the same workspace where a renamed
eventstream resides.
Next steps
About known issues
You can have a visual that has one or more grouping columns and also has Show items
with no data enabled. If you try to export to Excel using a live connection, the export
fails.
Symptoms
When you try to export to Excel using a live connection, the export fails with a generic
error message.
Next steps
About known issues
Status: Open
Symptoms
The export-to-data command is disabled for a visual because it has a visual calculation
or hidden field.
Next steps
About known issues
You can connect your workspace to Git and perform a sync from Git into the workspace.
When you choose the Sync content from Git into this workspace and select the Sync
button, you receive an error and the sync fails.
Status: Open
Symptoms
The error typically happens when you try to sync from a new workspace that wasn't
previously synced. It also might happen due to an object with an invalid format. You
receive a message similar to: Theirs artifact must have the same logical id as Yours
artifact at this point , and can't perform any operations using Git.
Next steps
About known issues
The export data option is disabled for the Q&A visual in the Power BI service.
Status: Open
Symptoms
When using the Q&A visual in the Power BI service, you see the export data option is
disabled.
Alternatively, you can download the report from the service and use Power BI Desktop to export the data.
Next steps
About known issues
In a pipeline, you can add a stored procedure or script activity that uses a data
warehouse connection. If you change the data warehouse connection to point to a new
data warehouse connection in the activity, you can't save the connection in the activity.
Status: Open
Symptoms
In the pipeline, changes to the stored procedure or script activity don't persist after their data warehouse connection is updated.
Next steps
About known issues
When you create a Fabric SQL Database, it automatically creates a child SQL analytics
endpoint and a child semantic model with the same name as the SQL database. If the
workspace already contains a SQL analytics endpoint or a semantic model with the same
name, the creation of the child items fails.
Status: Open
Symptoms
You created an SQL database with the same name as a SQL analytics endpoint or
semantic model in that workspace. The child items for that SQL database weren't
created. You can't query the mirrored data for this database.
Next steps
About known issues
You can create an eventstream that has columns of data and a transformation operator
to process the data. The data contains a column with an empty array. If you try to
publish the eventstream, it shows an error and doesn't publish.
Status: Open
Symptoms
You can't publish an event stream when both of the following conditions are met: the
data contains a column with an empty array and an operator is added to process the
data. You receive an error message similar to Failed to publish topology changes .
Next steps
About known issues
You can create a Copy job item in a workspace. If no items are present in the workspace, meaning the Copy job would be the first item in the workspace, the Copy job item creation fails.
Status: Open
Symptoms
When you try to create a Copy job item in an empty workspace, the creation fails.
Next steps
About known issues
You can use the Fabric public API to create a gateway. If you attempt to use the API to
create a gateway using a service principal, you might experience errors.
Symptoms
You might experience issues when you create a gateway using a service principal with
the Create Gateway public API.
Next steps
About known issues
Status: Open
Symptoms
The creation, configuration, or deletion of a mirror fails with an error. The error message is similar to: UI error: Unexpected error occurred. Failed after 10 retries.
Next steps
About known issues
You might experience incorrect or random column names after changing the column
format or aggregation.
Status: Open
Symptoms
You might experience incorrect or random column names after changing the column
format or aggregation. One example where the incorrect or random column names
could appear is when querying through SQL Server Management Studio (SSMS).
Next steps
About known issues
In the Fabric Capacity Metrics app, you can view the timepoint details for your capacity.
If you have a new P2 capacity, you see that the timepoint detail is missing.
Symptoms
When you try to retrieve timepoint details in the metrics app, you receive a blank screen.
The missing data is for a new P2 capacity.
Next steps
About known issues
If you have a Fabric capacity hosted in the Southeast Asia or South Brazil region, you
might receive intermittent failures when you attempt to deploy the Sustainability
solution.
Status: Open
Symptoms
When you try to deploy the Sustainability solution, you receive an error. The error
message is similar to: Failed to create Sustainability solution, please retry after
some time .
• Retry the creation of the Sustainability solution in the same or a different workspace.
• Use a Fabric capacity in any region excluding Southeast Asia or South Brazil and retry the creation of the Sustainability solution.
Next steps
About known issues
You can use an on-premises data gateway for a source in a pipeline. If the pipeline's
copy activity uses the on-premises source and a Kusto destination, the pipeline fails with
an error.
Status: Open
Symptoms
If you run a pipeline using the on-premises data gateway, you receive an error. The error
is similar to An error occurred for source: 'DataReader'. Error: 'Could not load file
or assembly 'Microsoft.IO.RecyclableMemoryStream, Version=$$2.2.0.0$$,
Next steps
About known issues
You might see a new tile for creating a Dataflow Gen2 (CI/CD, preview) Fabric item. If
you select the tile, you get an upgrade dialog box and you can't use the feature.
Symptoms
If you select the tile to create a Dataflow Gen2 (CI/CD, preview) Fabric item, you receive
an upgrade dialog box. The message in the upgrade dialog box is similar to: Upgrade to
a paid Microsoft Fabric Capacity .
Next steps
About known issues
The vertical (value-axis) zoom controls might not work correctly for line charts or line chart varieties, such as area charts or stacked area charts. The earlier form of this issue, which occurred when markers, stacked totals, or anomaly markers were enabled, is fixed. However, there's an ongoing issue if the minimum or maximum values are set.
Status: Open
Symptoms
You see that the vertical zoom controls don't work correctly for line charts or line chart
varieties, such as area chart or stacked area chart.
Next steps
About known issues
When you accept an external data share invitation, you can select the lakehouse where
the external share to the shared data is created. If you select a lakehouse within a
capacity that resides in a different region than your home tenant region, the operation
fails.
Status: Open
Symptoms
After selecting the lakehouse and the path where the external share to the external data should be created, the operation fails.
Next steps
About known issues
Each Fabric environment item provides a resources folder. When a notebook attaches to
an environment, you can read and write files from and to this folder. When you select an
environment as workspace default and the notebook uses the workspace default, the
resources folder of the default environment doesn't work.
Status: Open
Symptoms
You see the environment's resources folder in the notebook's file explorer. However,
when you try to read or write files from or to this folder, you receive an error. The error
message is similar to ModuleNotFoundError .
Next steps
About known issues
Cross-region tenant migrations are paused through February 28, 2025. New and existing
requests aren't processed during this time period.
Status: Open
Symptoms
New and existing cross-region tenant migration requests aren't processed through
February 28, 2025.
Next steps
About known issues
When you're focused on a Power BI visual, you can select the More options (...) button
to open the menu. When you select More options, the menu doesn't open if the report
is unsaved.
Symptoms
The More options menu doesn't open when you select the button.
Next steps
About known issues
After you successfully sync your tables in your SQL analytics endpoint, the statistics get
dropped.
Status: Open
Symptoms
Statistics created on the SQL analytics endpoint tables aren't available after a successful
sync between the lakehouse and the SQL analytics endpoint.
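For illustration only, here's a minimal sketch of the kind of user-created statistics that can be lost after a sync. The dbo.Sales table and Amount column are hypothetical:

```sql
-- Hypothetical example: user-created statistics on a SQL analytics endpoint table.
-- Under this known issue, statistics like this are no longer available after a sync.
CREATE STATISTICS stats_sales_amount ON dbo.Sales (Amount);
```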
Next steps
About known issues
After you successfully sync your tables in your SQL analytics endpoint, the permissions
get dropped.
Status: Open
Symptoms
Permissions applied to the SQL analytics endpoint tables aren't available after a
successful sync between the lakehouse and the SQL analytics endpoint.
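Similarly, a minimal sketch of an object-level permission that can be lost after a sync; the dbo.Sales table and the user principal are hypothetical:

```sql
-- Hypothetical example: object-level permission granted on a SQL analytics endpoint table.
-- Under this known issue, a grant like this is no longer in effect after a sync.
GRANT SELECT ON OBJECT::dbo.Sales TO [analyst@contoso.com];
```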
Next steps
About known issues
You can add the Data Analysis Expressions (DAX) function INFO.VIEW.MEASURES() to a
calculated table in a semantic model. In some cases, an error happens when you create
the calculated table. Other times, after the table is in the model, you might receive an
error when you remove other tables. The issue is more likely to happen on semantic
models that have a calculation group that includes a dynamic format string in one or
more calculation items.
Symptoms
You either try to create a calculated table that contains INFO.VIEW.MEASURES() or you
try to delete a table where another calculated table in the semantic model contains
INFO.VIEW.MEASURES(). You receive an error message similar to: An unexpected
exception occurred .
Next steps
About known issues
The data warehouse data preview in the user experience might fail if there's more than
one data warehouse item in the Object Explorer.
Status: Open
Symptoms
The data preview fails with error: Unable to execute the SQL request .
Next steps
About known issues
External data sharing OneLake shortcuts don't support blob specific APIs
You can set up external data sharing using OneLake shortcuts. The shortcut tables show
in the shared tenant in the lakehouse, but don't show in the SQL analytics endpoint.
Additionally, if you try to use a blob-specific API to access the OneLake shortcut
involved in the external data share, the API call fails.
Symptoms
If you're using external data sharing, table discovery in the SQL analytics endpoint doesn't work due to an underlying dependency on blob APIs. Additionally, blob APIs on a path containing the shared OneLake shortcut return a partial response or an error.
Next steps
About known issues
You can't read a cross-region shortcut with a OneLake shared access signature (SAS).
Status: Open
Symptoms
You receive a 401 Unauthorized error, even if the delegated SAS has the correct
permissions to access the shortcut.
Next steps
About known issues
Status: Open
Symptoms
You receive a pipeline failure when you try to get the token for Azure Data Explorer.
Next steps
About known issues
You can use a shortcut to see data from your Dataverse in a lakehouse. However, when
the Dataverse organization is moved to a new storage location, the shortcut stops
working.
Status: Open
Symptoms
Dataverse shortcut creation/read fails if the underlying Dataverse organization is moved.
Next steps
About known issues
You can consume Power BI semantic models in Excel by connecting to the semantic
model in Excel or choosing the Analyze in Excel option from the Power BI service. Either
way, when you try to make the connection, you receive an error message and can't
properly connect.
Status: Open
Symptoms
When you try to connect to a Power BI dataset from Excel or use Analyze in Excel, you
receive an error. The error message is similar to Forbidden Activity or AAD error . It
most likely happens if you have Excel versions 2409 or 2410.
Next steps
About known issues
You can access data warehouse tables through the SQL analytics endpoint. Due to this
known issue, you can't apply changes to the tables. You also see an error marker next to
the table and receive an error if you try to access the table. The table sync also doesn't
complete as expected.
Status: Open
Symptoms
You see a red circle with a white 'X' next to the unavailable tables. When you try to access a table, you receive an error. The error message is similar to: An internal error has occurred while applying table changes to SQL.
Next steps
About known issues
The Microsoft Fabric Runtime 1.3 based on Apache Spark 3.5 went into general
availability (GA) on September 23, 2024. Fabric Runtime 1.3 can now be used for
production workloads. As part of transitioning from public preview to the general
availability stage, we released major built-in library updates to improve functionality,
security, reliability, and performance. These updates can affect your Microsoft Fabric environments if you installed libraries or overrode the built-in library version with Runtime 1.3.
Status: Open
Symptoms
If you installed libraries in environments that use Runtime 1.3, Spark jobs start to fail with an error similar to Post Personalization failed. Importing installed custom libraries might fail due to the underlying built-in libraries being updated.
Next steps
About known issues
In most scenarios, carry forward logic avoids the need to trigger Autoscale for small
bursts of usage. Autoscale is only triggered for longer overages as a way to avoid
throttling. If you have Power BI Premium, you can set the maximum number of v-cores
to use for Autoscale. You don't get any throttling behavior even if your usage is above
100% for a long time.
Status: Open
Symptoms
In some cases when you set the maximum number of v-cores to use for Autoscale, you
don't see the Autoscale cores triggered as expected. If you face this known issue, you
observe the following patterns using the Capacity Metrics App:
Current usage is clearly higher than the 100% capacity units (CU) line in the
Capacity Metrics App
Little or no overages are added and accumulated during these spikes
Throttling levels are low and not growing with the overages seen
The maximum number of v-cores to use for Autoscale is set and active, but Autoscale isn't reaching it even after long periods of higher-than-average usage
Next steps
About known issues
You might face issues with a connection in a data pipeline in a certain scenario. The scenario is that you add yourself to the connection creator role in an on-premises data gateway. You then create a connection in a data pipeline successfully. Someone then removes you from the connection creator role. When you try to add and test the same connection, the connection fails, and you receive an error.
Status: Open
Symptoms
When trying to add and test a connection in a data pipeline that uses an on-premises
data gateway, you receive an error. The error message is similar to: An exception error
occurred: You do not have sufficient permission for this data gateway. Please
Next steps
About known issues
The data warehouse user interface might not show the correct button names. You can
still use button functionality as expected.
Status: Open
Symptoms
If you face this issue, your language might be set to something other than English.
When working in the data warehouse experience, you don't see the button friendly
names. For example, when you try to create a data warehouse, you see common.create
instead of Create and common.cancel instead of Cancel.
Next steps
About known issues
The data pipeline copy activity fails when copying data from Azure Blob Storage to a
Data Warehouse with staging enabled. Since staging is enabled, the copy activity uses
parquet as the staging format; however, the parquet string type can't be copied into a
decimal type in the data warehouse.
Status: Open
Symptoms
The pipeline copy activity fails with an error similar to:
ErrorCode=DWCopyCommandOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.H
type 'Parquet physical type: BYTE_ARRAY, logical type: UTF8', please try with
'VARCHAR(8000)' .
Next steps
About known issues
Known issue - Intermittent refresh failure through on-premises data gateway
Article • 02/03/2025
You might experience intermittent refresh failures for semantic models and dataflows
through the on-premises data gateway. Failures happen regardless of how the refresh
was triggered, whether scheduled, manually, or over the REST API.
Status: Open
Symptoms
You see a gateway-bound refresh fail intermittently with the error
AdoNetProviderOpenConnectionTimeoutError . Impacted hosts include Power BI semantic
models and dataflows. The error occurs whether the refresh is scheduled, manual, or triggered via the API.
Next steps
About known issues
You might have a data warehouse that you use in a deployment pipeline or store in a Git
repository. When you run the deployment pipelines or update the Git repository, you
might receive an error.
Status: Open
Symptoms
During the pipeline run or Git update, you might see an error. The error message is
similar to: Index was outside the bounds of the array .
Next steps
About known issues
You can enable Business Continuity and Disaster Recovery (BCDR) for a specific capacity
in Fabric. The write transactions that OneLake reports, which go through our client, are categorized and billed as non-BCDR.
Symptoms
You see under-billing of write transactions since you're billed at the non-BCDR rate.
Next steps
About known issues
Spark Jobs get queued when the capacity usage reaches its maximum compute limit on
Spark. Once the limit is reached, jobs are added to the queue. The jobs are then
processed when the cores become available in the capacity. This queueing capability is
enabled for all background jobs on Spark, including Spark notebooks triggered from the
job scheduler, pipelines, and spark job definitions. The time duration that the job is
waiting in the queue isn't correctly represented in the Monitoring hub as queued
duration.
Status: Open
Symptoms
The total duration of the job shown in the Monitoring hub currently includes only the
job execution time. The total duration doesn't correctly reflect the duration in which the
job waited in the queue.
Next steps
About known issues
Known issue - Managed private endpoint connection could fail
Article • 02/03/2025
A managed private endpoint connection for a private link service could fail. The failure
occurs due to the inability to allow a list of Fully Qualified Domain Names (FQDNs) as
part of the managed private endpoint creation.
Status: Open
Symptoms
You see a managed private endpoint creation error when trying to create a managed
private endpoint from the network security menu in the workspace settings.
Next steps
About known issues
You can execute the same stored procedure in parallel in a data warehouse. When the
stored procedure is run concurrently, it causes blocking because each stored procedure
takes an exclusive lock during plan generation.
Symptoms
You might experience slowness when the same procedure is executed in parallel as
opposed to by itself.
Next steps
About known issues
You can have a semantic model built on a data warehouse. When you try to refresh the
schema for the semantic model, you receive an error message, and the schema isn't
refreshed.
Symptoms
When refreshing the schema for a semantic model built on a data warehouse, you
receive an error message similar to: The datamart data is invalid .
Next steps
About known issues
You can set up a subscription or export on a report or dashboard. If the item contains an
Azure or Bing map visual, the map data might show incorrect results.
Symptoms
There are two main symptoms:
Next steps
About known issues
Status: Open
Symptoms
When trying to test the Snowflake connection, you receive an error message similar to:
Test connection operation failed. Failed to open the database connection.
[Snowflake] 390201 (08004): The requested warehouse does not exist or not
authorized
Next steps
About known issues
When creating pipelines, you can have a parent pipeline that contains an Invoke
pipeline activity that was deactivated. When you try to deploy the pipeline to a new
workspace, the deployment fails.
Status: Open
Symptoms
When you try to deploy a pipeline that has a deactivated Invoke pipeline activity, you
get an error similar to: Something went wrong. Deployment couldn't be completed. or
Git_InvalidResponseFromWorkload .
Next steps
About known issues
When you insert NULL values into NOT NULL columns in SQL tables, the SQL query fails
as expected. However, the error message returned references the incorrect column.
Status: Open
Symptoms
You might see a failure when executing a SQL query to insert into a Data Warehouse
table. The error message is similar to: Cannot insert the value NULL into column
<columnname>, table <tablename>. When the query fails, the column referenced isn't the column that actually caused the error.
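A minimal sketch of the scenario, using a hypothetical table; the INSERT fails as expected, but under this known issue the error can name a different column than the one that is NULL:

```sql
-- Hypothetical table with two NOT NULL columns.
CREATE TABLE dbo.Orders
(
    OrderId INT NOT NULL,
    CustomerName VARCHAR(100) NOT NULL
);

-- Fails as expected because CustomerName is NULL; under this known issue,
-- the error message might reference a column other than CustomerName.
INSERT INTO dbo.Orders (OrderId, CustomerName)
VALUES (1, NULL);
```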
Next steps
About known issues
When a Dataflow Gen2 creates its staging lakehouse, sometimes the associated SQL
analytics endpoint isn't created. When there's no SQL analytics endpoint, the dataflow
fails to refresh with an error.
Symptoms
If you face this known issue, you see the dataflow refresh fail with an error. The error
message is similar to: Refresh failed. The staging lakehouse is not configured
correctly. Please create a support ticket with this error report.
Next steps
About known issues
You might face an issue with Data Factory pipelines when performing multiple
installations on the on-premises data gateway. The issue occurs when you install the on-
premises data gateway that supports pipelines, and then downgrade the on-premises
data gateway version to a version that doesn't support pipelines. Finally, you upgrade
the on-premises data gateway version to support pipelines. You then receive an error
when you run a Data Factory pipeline using the on-premises data gateway.
Status: Open
Symptoms
You receive an error during a pipeline run. The error message is similar to: Please check
your network connectivity to ensure your on-premises data gateway can access
xx.frontend.clouddatahub.net .
Next steps
About known issues
When creating a delta table, you can use run length encoding (RLE) . If the delta writer
uses RLE on the table you try to query in the SQL analytics endpoint, you receive an
error.
Status: Open
Symptoms
When you query a table in the SQL analytics endpoint, you receive an error. The error
message is similar to: Error handing external file: 'Unknown encoding type.'
Next steps
About known issues
You can use Fabric Data Factory deployment pipelines to deploy data warehouses. When
you deploy data warehouse related items from one workspace to another, the data
warehouse connection breaks.
Status: Open
Symptoms
Once the deployment pipeline completes in the destination workspace, you see the data
warehouse connection is broken. You see an error message similar to: Failed to load
connection, please make sure it exists, and you have the permission to access it .
Next steps
About known issues
You can use Git integration for your Dataflow Gen2 dataflows. When you begin to
commit the workspace to the Git repo, you see the dataflow's staging lakehouse, named
DataflowsStagingLakehouse, available to commit. While you can select the staging
lakehouse to be exported, the integration doesn't work properly. If using a deployment
pipeline, you can't deploy DataflowsStagingLakehouse to the next stage.
Status: Open
Symptoms
You see the DataflowsStagingLakehouse visible in Git integration and can't deploy
DataflowsStagingLakehouse to the next stage using a deployment pipeline.
Next steps
About known issues
The Fabric SQL analytics endpoint uses a backend service to sync delta tables created in
a lakehouse. The backend service recreates the tables in the SQL analytics endpoint
based on the changes in lakehouse delta tables. When there are functions linked to the
SQL table, such as Row Level Security (RLS) functions, the creation operation fails and
the table sync fails.
Status: Open
Symptoms
In the scenario where there are functions linked to the SQL table, some or all of the
tables on the SQL analytics endpoint aren't synced.
1. Run the SQL statement ALTER SECURITY POLICY DROP FILTER PREDICATE ON <Table> on the table where the sync failed (a sketch of these statements follows this list)
2. Update the table on OneLake
3. Force the sync using the lakehouse or wait for the sync to complete automatically
4. Run the SQL statement ALTER SECURITY POLICY ADD FILTER PREDICATE ON <Table>
on the table where the sync failed
5. Confirm the table is successfully synced by checking the data
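The security policy statements in steps 1 and 4 are sketched below. The policy name, predicate function, column, and table names are hypothetical; substitute the names used in your own RLS setup:

```sql
-- Step 1 (hypothetical names): temporarily remove the filter predicate so the table can sync.
ALTER SECURITY POLICY dbo.SalesFilterPolicy
    DROP FILTER PREDICATE ON dbo.Sales;

-- Step 4 (hypothetical names): re-add the filter predicate after the sync completes.
ALTER SECURITY POLICY dbo.SalesFilterPolicy
    ADD FILTER PREDICATE dbo.fn_SecurityPredicate(SalesRep) ON dbo.Sales;
```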
Next steps
About known issues
You can use Git integration for your Dataflow Gen2 dataflows. When you begin to
commit the workspace to the Git repo, you see the dataflow's staging warehouse,
named DataflowsStagingWarehouse, available to commit. While you can select the
staging warehouse to be exported, the integration doesn't work properly. If using a
deployment pipeline, you can't deploy DataflowsStagingWarehouse to the next stage.
Status: Open
Symptoms
You see the DataflowsStagingWarehouse visible in Git integration and can't deploy
DataflowsStagingWarehouse to the next stage using a deployment pipeline.
Next steps
About known issues
The copy activity from Oracle to a lakehouse fails when one of the columns from Oracle
has a Number data type. In Oracle, scale can be greater than precision for
decimal/numeric types. Parquet files in Lakehouse require the scale to be less than or equal to precision, so the copy activity fails.
Status: Open
Symptoms
When trying to copy data from Oracle to a lakehouse, you receive an error similar to:
ParquetInvalidDecimalPrecisionScale. Invalid Decimal Precision or Scale. Precision:
38 Scale:127 .
As a workaround, convert the column in the Oracle source query to NUMBER(p,s), where p >= s and s >= 0. The range defined by NUMBER(p,s) should also cover the range of the values stored in the column. If not, you receive an error similar to ORA-01438: value larger than specified precision allowed for this column. Here's a sample query:
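The original sample query isn't included in this extract; the following is an illustrative sketch of an Oracle-side source query that casts the column, using hypothetical table and column names and a precision/scale chosen to fit the data:

```sql
-- Hypothetical Oracle source query: cast the unbounded NUMBER column to NUMBER(p,s)
-- so the copy activity can map it to a valid Parquet decimal (p >= s, s >= 0).
SELECT
    order_id,
    CAST(amount AS NUMBER(38, 10)) AS amount
FROM sales_orders;
```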
Next steps
About known issues
When using a pipeline to copy XML formatted data to a tabular data source, the
pipeline gets stuck. The issue most often appears when XML single records contain
many different array type properties.
Status: Open
Symptoms
The copy activity doesn't fail; it runs endlessly until it hits a timeout or is canceled. Some XML files copy without any issue while others cause the issue.
Related content
About known issues
West India region currently doesn't support on-premises Data gateway for Data Factory
pipelines.
Status: Open
Symptoms
If you are in the West India region, you don't see the option to select the on-premises
data gateway during the creation of a Data Factory pipeline connection.
Related content
About known issues
Status: Open
Symptoms
You currently don't see all OneLake transactions in the Other category being reported.
Related content
About known issues
When you're working in a lakehouse, you can create and add tables to a new Power BI
semantic model. You also can adjust the tables shown in the default semantic model
associated with a lakehouse. In either case, you might run into a scenario where you
don't see all available tables and can't add them to your semantic model.
Status: Open
Symptoms
When trying to select the tables to include in a semantic model, you don't see all
expected tables.
Related content
About known issues
You can create a Dataflow Gen2 dataflow that writes data to a lakehouse as an output
destination. If the source data has a Decimal or Date data type, you might see a
different data type appear in the lakehouse after running the dataflow. For example,
when the data type is Date, the resulting data type is sometimes converted to Datetime,
and when the data type is Decimal, the resulting data type is sometimes converted to Float.
Status: Open
Symptoms
You see an unexpected data type in the lakehouse after running a dataflow.
Related content
About known issues
If you use a SQL analytics endpoint that hasn't been active for a while, the SQL analytics
endpoint scans the underlying delta tables. It's possible for you to query one of the
tables before the refresh is completed with the latest data. If so, you might see old data
being returned or even errors being raised if the parquet files were vacuumed.
Symptoms
When querying a table through the SQL analytics endpoint, you see old data or get an
error, similar to: "Failed to complete the command because the underlying location does
not exist. Underlying data description: %1."
Related content
About known issues
A data warehouse or SQL analytics endpoint that has more than 20,000 tables fails to
load in the portal. If connecting through any other client tools, you can load the tables.
The issue is only observed while accessing the data warehouse through the portal.
Symptoms
Your data warehouse or SQL analytics endpoint fails to load in the portal with the error
message "Batch was canceled," but the same connection strings are reachable using
other client tools.
Related content
About known issues
In a limited number of cases, when you make a user-initiated request to the data
warehouse, the user identity isn't correctly reported to the Fabric capacity metrics app.
In the capacity metrics app, the User column shows as System.
Status: Open
Symptoms
In the interactive operations table on the timepoint page, you incorrectly see the value
System under the User column.
Related content
About known issues
In the Fabric capacity metrics app, completed queries in the Data Warehouse SQL
analytics endpoint appear with the status as "InProgress" in the interactive operations
table on the timepoint page.
Status: Open
Symptoms
In the interactive operations table on the timepoint page, completed queries in the Data
Warehouse SQL analytics endpoint appear with the status InProgress
Related content
About known issues
The object explorer fails to display Fabric Data Warehouse objects (for example, tables and views) when they share the same case-insensitive name (for example, table1 and Table1). If there are two objects with the same name, one displays in the object explorer; if there are three or more objects, nothing gets displayed. The objects still show in, and can be used from, system views (for example, sys.tables), but they aren't available in the object explorer.
Status: Open
Symptoms
If an object shares the same case-insensitive name as another object, is listed in a system view, and works as intended, but isn't listed in the object explorer, you've encountered this known issue.
Related content
About known issues
Users can create Temp tables in the Data Warehouse and in SQL analytics endpoint but
data from user tables can't be inserted into Temp tables. Temp tables can't be joined to
user tables.
Symptoms
Users may notice that data from their user tables can't be inserted into a Temp table.
Temp tables can't be joined to user tables.
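A minimal sketch of the two operations affected by this issue, using hypothetical table names:

```sql
-- Hypothetical temp table and user table.
CREATE TABLE #OrdersTemp (OrderId INT);

-- Affected: inserting data from a user table into a temp table.
INSERT INTO #OrdersTemp (OrderId)
SELECT OrderId FROM dbo.Orders;

-- Affected: joining a temp table to a user table.
SELECT o.OrderId
FROM dbo.Orders AS o
INNER JOIN #OrdersTemp AS t ON o.OrderId = t.OrderId;
```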
Related content
About known issues
This article provides information about the official collection of icons for Microsoft
Fabric that you can use in architectural diagrams, training materials, slide decks or
documentation.
Do's
Use the icons to illustrate how products can work together.
In diagrams, we recommend including a label that contains the product,
experience, or item name somewhere close to the icon.
Use the icons as they appear within the product.
Don'ts
Don't crop, flip, or rotate icons.
Don't distort or change icon shape in any way.
Don't use Microsoft product icons to represent your product or service.
Terms
Microsoft permits the use of these icons in architectural diagrams, training materials, or
documentation. You can copy, distribute, and display the icons only for the permitted
use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.
Fabric icons are also available as an npm package for use in Microsoft Fabric platform extension development. To use these icons, import the package into your project, then use individual SVG files as an image source or as an SVG. You can also directly download the icons from the following GitHub repository. Select the following button to open the repo, select ... from the right-hand corner, and select Download:
Related content
Microsoft Power Platform icons
Azure icons
Dynamics 365 icons
This archive page is periodically updated with archived content from What's new in Microsoft Fabric?
To follow the latest in Fabric news and features, see the Microsoft Fabric Blog . Also
follow the latest in Power BI at What's new in Power BI?
Month | Feature | Learn more
March 2024 | Microsoft Fabric is now HIPAA compliant | We're excited to announce that Microsoft Fabric, our all-in-one analytics solution for enterprises, has achieved new certifications for HIPAA and ISO 27017, ISO 27018, ISO 27001, ISO 27701.
March 2024 | Exam DP-600 is now available | Exam DP-600 is now available, leading to the Microsoft Certified: Fabric Analytics Engineer Associate certification. The Fabric Career Hub can help you learn quickly and get certified.
March 2024 | Fabric Copilot Pricing: An End-to-End example | Copilot in Fabric begins billing on March 1, 2024 as part of your existing Power BI Premium or Fabric Capacity. Learn how Fabric Copilot usage is calculated.
January 2024 | Microsoft Fabric Copilot for Data Science and Data Engineering | Copilot for Data Science and Data Engineering is now available worldwide. What can Copilot for Data Science and Data Engineering do for you?
December 2023 | Fabric platform Security Fundamentals | Learn more about the big-picture perspective of the Microsoft Fabric security architecture by describing how the main security flows in the system work.
November 2023 | Microsoft Fabric, explained for existing Synapse users | A focus on what customers using the current Platform-as-a-Service (PaaS) version of Synapse can expect. We explain what the general availability of Fabric means for your current investments (spoiler: we fully support them), but also how to think about the future.
November 2023 | Microsoft Fabric is now generally available | Microsoft Fabric is now generally available for purchase. Microsoft Fabric can reshape how your teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. This includes: Power BI, Data Factory, Data Engineering, Data Science, Real-Time Analytics, Data Warehouse, and the overall Fabric platform.
November 2023 | Fabric workloads are now generally available! | Microsoft Fabric is now generally available! Microsoft Fabric Data Warehouse, Data Engineering & Data Science, Real-Time Analytics, Data Factory, OneLake, and the overall Fabric platform are now generally available.
October 2023 | Announcing the Fabric roadmap | Announcing the Fabric Roadmap. One place you can see what we are working on and when you can expect it to be available.
October 2023 | Get started with semantic link | Explore how semantic link seamlessly connects Power BI semantic models with Fabric Data Science within Microsoft Fabric. Learn more at Semantic link in Microsoft Fabric: Bridging BI and Data Science.
September 2023 | Fabric Capacities – Everything you need to know about what's new and what's coming | Read more about the improvements we're making to the Fabric capacity management platform for Fabric and Power BI users.
August 2023 | Strong, useful, beautiful: Designing a new way of getting data | From the Data Integration Design Team, learn about the strong, creative, and functional design of Microsoft Fabric, as Microsoft designs for the future of data integration.
August 2023 | Learn Live: Get started with Microsoft Fabric | Calling all professionals, enthusiasts, and learners! On August 29, we'll be kicking off the "Learn Live: Get started with Microsoft Fabric" series in partnership with Microsoft's Data Advocacy teams and Microsoft WorldWide Learning teams to deliver 9x live-streamed lessons covering topics related to Microsoft Fabric!
July 2023 | Step-by-Step Tutorial: Building ETLs with Microsoft Fabric | In this comprehensive guide, we walk you through the process of creating Extract, Transform, Load (ETL) pipelines using Microsoft Fabric.
June 2023 | Get skilled on Microsoft Fabric - the AI-powered analytics platform | Who is Fabric for? How can I get skilled? This blog post answers these questions about Microsoft Fabric, a comprehensive data analytics solution by unifying many experiences on a single platform.
June 2023 | Introducing the end-to-end scenarios in Microsoft Fabric | In this blog, we explore four end-to-end scenarios that are typical paths our customers take to extract value and insights from their data using Microsoft Fabric.
May 2023 | Get Started with Microsoft Fabric - All in-one place for all your Analytical needs | A technical overview and introduction to everything from data movement to data science, real-time analytics, and business intelligence in Microsoft Fabric.
May 2023 | Microsoft OneLake in Fabric, the OneDrive for data | Microsoft OneLake brings the first multicloud SaaS data lake for the entire organization.
Month | Feature | Learn more
July 2024 | Update records in a KQL Database preview | The .update command is now generally available. Learn more about how to Update records in a Kusto database.
July 2024 | Warehouse queries with time travel (GA) | Warehouse in Microsoft Fabric offers the capability to query the historical data as it existed in the past at the statement level, now generally available. The ability to query data from a specific timestamp is known in the data warehousing industry as time travel.
June 2024 | OneLake availability of Eventhouse in Delta Lake format | As part of the One logical copy promise, we're excited to announce that OneLake availability of Eventhouse in Delta Lake format is Generally Available.
May 2024 | Microsoft Fabric Private Links | Azure Private Link for Microsoft Fabric secures access to your sensitive data in Microsoft Fabric by providing network isolation and applying required controls on your inbound network traffic. For more information, see Announcing General Availability of Fabric Private Links.
May 2024 | Trusted workspace access | Trusted workspace access in OneLake shortcuts is now generally available. You can now create data pipelines to access your firewall-enabled Azure Data Lake Storage Gen2 (ADLS Gen2) accounts using Trusted workspace access (preview) in your Fabric Data Pipelines. Use the workspace identity to establish a secure and seamless connection between Fabric and your storage accounts. Trusted workspace access also enables secure and seamless access to ADLS Gen2 storage accounts from OneLake shortcuts in Fabric.
May 2024 | Managed private endpoints | Managed private endpoints for Microsoft Fabric allow secure connections over managed virtual networks to data sources that are behind a firewall or not accessible from the public internet. For more information, see Announcing General Availability of Fabric Private Links, Trusted Workspace Access, and Managed Private Endpoints.
May 2024 | Eventhouse | Eventhouse is a new, dynamic workspace hosting multiple KQL databases, generally available as part of Fabric Real-Time Intelligence. An Eventhouse offers a robust solution for managing and analyzing substantial volumes of real-time data. Get started with a guide to Create and manage an Eventhouse.
May 2024 | Data Engineering: Environment | The Environment in Fabric is now generally available. The Environment is a centralized item that allows you to configure all the required settings for running a Spark job in one place. At GA, we added support for Git, deployment pipelines, REST APIs, resource folders, and sharing.
May 2024 | Microsoft Fabric Core REST APIs | Microsoft Fabric Core APIs are now generally available. The Fabric user APIs are a major enabler for both enterprises and partners to use Microsoft Fabric as they enable end-to-end fully automated interaction with the service, enable integration of Microsoft Fabric into external web applications, and generally enable customers and partners to scale their solutions more easily. (A minimal call sketch follows this table.)
May 2024 | Power Query Dataflow Gen2 SDK for VS Code | The Power Query SDK is now generally available in Visual Studio Code! To get started with the Power Query SDK in Visual Studio Code, install it from the Visual Studio Code Marketplace.
April 2024 | Semantic Link | Semantic links are now generally available! The package comes with our default VHD, and you can now use Semantic link in Fabric right away without any pip installation.
March 2024 | VNet Gateways in Dataflow Gen2 | VNet Data Gateway support for Dataflows Gen2 in Fabric is now generally available. The VNet data gateway helps to connect from Fabric Dataflows Gen2 to Azure data services within a VNet, without the need of an on-premises data gateway.
November 2023 | Microsoft Fabric is now generally available | Microsoft Fabric is now generally available for purchase. Microsoft Fabric can reshape how your teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. This includes: Power BI, Data Factory, Data Engineering, Data Science, Real-Time Analytics, Data Warehouse, and the overall Fabric platform.
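As a taste of the Core REST APIs mentioned above, the sketch below lists the workspaces the caller can access. It's a minimal, unofficial example: the base URL reflects the public Fabric REST API, but the token acquisition is simplified and assumes the azure-identity and requests packages are installed and your account has access to at least one Fabric workspace.
Python
import requests
from azure.identity import InteractiveBrowserCredential

# Acquire a user token scoped to the Fabric service.
credential = InteractiveBrowserCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

# Call the workspaces endpoint of the Fabric REST API.
response = requests.get(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()

# Print the id and display name of each workspace returned.
for workspace in response.json().get("value", []):
    print(workspace["id"], workspace["displayName"])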
Community
This section summarizes previous Microsoft Fabric community opportunities for
prospective and current influencers and MVPs. To learn about the Microsoft MVP Award
and to find MVPs, see mvp.microsoft.com .
Month | Feature | Learn more
August 2024 | Fabric Influencers Spotlight August 2024 | The Fabric Influencers Spotlight August 2024 highlights and amplifies blog posts, videos, presentations, and other content related to Microsoft Fabric from members of Microsoft MVPs & Fabric Super Users from the Fabric community.
August 2024 | Winners of the Fabric Community Sticker Challenge | Congratulations to the winners of the Fabric Community Sticker Challenge!
July 2024 | Fabric Influencers Spotlight | Introducing the new Fabric Influencers Spotlight series of articles to highlight and amplify blog posts, videos, presentations, and other content related to Microsoft Fabric. Read blogs from Microsoft MVPs and Fabric Super Users from the Fabric community.
June 2024 | Solved Fabric Community posts are now available in the Fabric Help Pane | You can now find solved posts from Fabric Community discussions in the Fabric Help Pane.
May 2024 | Announcing Microsoft Fabric Community Conference Europe | Announcing the Microsoft Fabric Community Conference Europe on September 24, 2024. Register today!
May 2024 | Register for the Microsoft Build: Microsoft Fabric Cloud Skills Challenge | Starting May 21, 2024, sign up for the Microsoft Build: Microsoft Fabric Cloud Skills Challenge and prepare for Exam DP-600 and upskill to the Fabric Analytics Engineer Associate certification.
March 2024 | Exam DP-600 is now available | Exam DP-600 is now available, leading to the Microsoft Certified: Fabric Analytics Engineer Associate certification. The Fabric Career Hub can help you learn quickly and get certified.
March 2024 | Microsoft Fabric Community Conference | Join us in Las Vegas March 26-28, 2024 for the first annual Microsoft Fabric Community Conference. See firsthand how Microsoft Fabric and the rest of the data and AI products at Microsoft can help your organization prepare for the era of AI. Register today using code MSCUST for an exclusive discount!
January 2024 | Announcing Fabric Career Hub | The new Fabric Career Hub is your one-stop-shop for professional growth! We've created a comprehensive learning journey with the best free on-demand and live training, plus exam discounts.
January 2024 | Hack Together: The Microsoft Fabric | Hack Together is a global online hackathon that runs from February 15 to March 4, 2024. Join us for Hack
December 2023 | Microsoft Fabric Community Conference | Join us in Las Vegas March 26-28, 2024 for the first annual Microsoft Fabric Community Conference. See firsthand how Microsoft Fabric and the rest of the data and AI products at Microsoft can help your organization prepare for the era of AI. Register today to immerse yourself in the future of data and AI and connect with thousands of data innovators like yourself eager to share their insights.
November 2023 | Microsoft Fabric MVP Corner – Special Edition (Ignite) | A special edition of the "Microsoft Fabric MVP Corner" blog series highlights selected content related to Fabric and created by MVPs around the Microsoft Ignite 2023 conference, when we announced Microsoft Fabric generally available.
October 2023 | Microsoft Fabric MVP Corner – October 2023 | Highlights of selected content related to Fabric and created by MVPs from October 2023.
September 2023 | Microsoft Fabric MVP Corner – September 2023 | Highlights of selected content related to Fabric and created by MVPs from September 2023.
August 2023 | Microsoft Fabric MVP Corner – August 2023 | Highlights of selected content related to Fabric and created by MVPs from August 2023.
July 2023 | Microsoft Fabric MVP Corner – July 2023 | Highlights of selected content related to Fabric and created by MVPs in July 2023.
June 2023 | Microsoft Fabric MVP Corner – June 2023 | The Fabric MVP Corner blog series to highlight selected content related to Fabric and created by MVPs in June 2023.
May 2023 | Fabric User Groups | Power BI User Groups are now Fabric User Groups!
May 2023 | Learn about Microsoft Fabric from MVPs | Prior to our official announcement of Microsoft Fabric at Build 2023, MVPs had the opportunity to familiarize themselves with the product. For several months, they have been actively testing Fabric and gaining valuable insights. Now, their enthusiasm for the product is evident as they eagerly share their knowledge and thoughts about Microsoft Fabric with the community.
Fabric samples and guidance
This section summarizes archived guidance and sample project resources for Microsoft
Fabric.
Month | Feature | Learn more
March 2024 | Protect PII information in your Microsoft Fabric Lakehouse with Responsible AI | One possible way to use Azure AI to identify and extract personally identifiable information (PII) in Microsoft Fabric is to use Azure AI Language to detect and categorize PII entities in text data, such as names, addresses, emails, phone numbers, social security numbers, etc.
February 2024 | Building Common Data Architectures with OneLake in Microsoft Fabric | Read more about common data architecture patterns and how they can be secured with Microsoft Fabric, and the basic building blocks of security for OneLake.
December 2023 | Working with OneLake using Azure Storage Explorer | If you want to use an application that directly integrates with Windows File Explorer, check out OneLake file explorer. However, if you're accustomed to using Azure Storage Explorer for your data management tasks, you can continue to harness its functionalities with OneLake and some of its key benefits.
November 2023 | Semantic Link: OneLake integrated Semantic Models | Semantic Link adds support for the recently released OneLake integrated semantic models. You can now directly access data using your semantic model's name via OneLake using the read_table function and the new mode parameter set to onelake. (A usage sketch follows this table.)
November 2023 | Integrate your SAP data into Microsoft Fabric | Using the built-in connectivity of Microsoft Fabric is the easiest and least-effort way of adding SAP data to your Fabric data estate.
November 2023 | Fabric Changing the game: Validate dependencies with Semantic Link – Data Quality | Follow this step-by-step example of how to explore the functional dependencies between columns in a table using the semantic link. The semantic link is a feature that allows you to establish a connection between Power BI datasets and Fabric Data Science in Microsoft Fabric.
October 2023 | Fabric Change the Game: Exploring the data | Follow this realistic example of reading data from Azure Data Lake Storage using shortcuts, organizing raw data into structured tables, and basic data exploration. Our data exploration uses as a source the diverse and captivating city of London with information extracted from data.london.gov.uk/.
September 2023 | Announcing an end-to-end workshop: Analyzing Wildlife Data with Microsoft Fabric | A new workshop guides you in building a hands-on, end-to-end data analytics solution for the Snapshot Serengeti dataset using Microsoft Fabric. The dataset consists of approximately 1.68M wildlife images and image annotations provided in .json files.
September 2023 | New learning path: Implement a Lakehouse with Microsoft Fabric | The new Implement a Lakehouse with Microsoft Fabric learning path introduces the foundational components of implementing a data lakehouse with Microsoft Fabric with seven in-depth modules.
July 2023 | Connecting to OneLake | How do I connect to OneLake? This blog covers how to connect and interact with OneLake, including how OneLake achieves its compatibility with any tool used over ADLS Gen2!
June 2023 | Using Azure Databricks with Microsoft Fabric and OneLake | How does Azure Databricks work with Microsoft Fabric? This blog post answers that question and more details on how the two systems can work together.
July 2023 | Free preview usage of Microsoft Fabric experiences extended to October 1, 2023 | We're extending the free preview usage of Fabric experiences (other than Power BI). These experiences won't count against purchased capacity until October 1, 2023.
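The Semantic Link entry above mentions reading a semantic model's tables through OneLake with the read_table function. A minimal sketch of that pattern with the sempy package might look like the following; the model and table names are placeholders, and the exact behavior of the mode parameter depends on the sempy version installed in your environment.
Python
import sempy.fabric as fabric

# Placeholder names: substitute a semantic model and table from your workspace.
df = fabric.read_table(
    "Sales Model",        # semantic model (dataset) name
    "FactInternetSales",  # table name within the model
    mode="onelake",       # read through the OneLake integration described above
)
print(df.head())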
Month | Feature | Learn more
June 2024 | Copilot privacy and security | For more information on the privacy and security of Copilot in Microsoft Fabric, and for detailed information on each workload, see Privacy, security, and responsible use for Copilot in Microsoft Fabric (preview).
May 2024 | The AI and Copilot setting automatically delegated to capacity admins | In the tenant admin portal, you can delegate the enablement of AI and Copilot features to Capacity administrators. This AI and Copilot setting is automatically delegated to capacity administrators, and tenant administrators won't be able to turn off the delegation.
February 2024 | Fabric Change the Game: How easy is it to use Copilot in Microsoft Fabric | This blog post shows how simple it is to enable Copilot, a generative AI that brings new ways to transform and analyze data, generate insights, and create visualizations and reports in Microsoft Fabric.
February 2024 | Copilot for Data Factory in Microsoft Fabric | Copilot for Data Factory in Microsoft Fabric is now available in preview and included in the Dataflow Gen2 experience. For more information, see Copilot for Data Factory.
January 2024 | Microsoft Fabric Copilot for Data Science and Data Engineering | Copilot for Data Science and Data Engineering is now available worldwide. What can Copilot for Data Science and Data Engineering do for you?
January 2024 | How to enable Copilot in Fabric for Everyone | Follow this guide to get Copilot in Fabric enabled for everyone in your organization. For more information, see Overview of Copilot for Microsoft Fabric (preview).
November 2023 | Copilot for Power BI in Microsoft Fabric preview | We're thrilled to announce the preview of Copilot in Microsoft Fabric, including the experience for Power BI, which helps users quickly get started by helping them create reports in the Power BI web experience. For more information, see Copilot for Power BI.
October 2023 | Chat your data in Microsoft Fabric with Semantic Kernel | Learn how to construct Copilot tools based on business data in Microsoft Fabric.
Month | Feature | Learn more
August 2024 | Data Warehouse Connector Supports TLS 1.3 | The Data Warehouse connector now supports TLS 1.3, the latest version of the Transport Layer Security protocol.
August 2024 | Connect to your Azure Resources by Modern Get Data Experience in Data pipeline | You can easily browse and connect to your Azure resources automatically with the modern data experience of Data Pipeline.
July 2024 | Use existing connections from the OneLake Data hub integration | You can now select any existing connections from OneLake Datahub, not just your recent and favorite ones. This makes it easier to access your data sources from the homepage of modern get data in data pipeline. For more information, see Modern Get Data experience.
July 2024 | Edit JSON code for Data pipelines | You can now edit the JSON behind your Data Factory pipelines in Fabric. When you design low-code pipeline workflows, directly editing the JSON code
July 2024 | Dataflow Gen2 certified connector updates | New and updated Dataflow Gen2 connectors have been released, including two new connectors in Fabric Data Factory data pipeline: Azure MySQL Database Connector and Azure Cosmos DB for MongoDB Connector. For more information, see the July 2024 Certified connector updates.
July 2024 | Support for editing Navigation steps | Introducing a new experience to edit navigation steps within Dataflow, to connect to a different object, inside of the Applied steps section of the Query settings pane. For more information, see Editing Navigation steps.
July 2024 | Global view in Manage connections | The new Global view in Manage connections allows you to see all the available connections in your Fabric environment so you can modify them or delete them without ever having to leave the Dataflow experience. For more information, see Global view in Manage connections.
July 2024 | Fast Copy with On-premises Data Gateway Support in Dataflow Gen2 | Fast Copy (preview) in Dataflow Gen2 now supports on-premises data stores, using a gateway to access on-premises stores like SQL Server with Fast Copy in Dataflow Gen2.
July 2024 | Fabric API for GraphQL (preview) pricing | API for GraphQL in Fabric starts billing on July 12, 2024, as part of your existing Power BI Premium or Fabric Capacity. Use the Fabric Capacity Metrics app to track capacity usage for API for GraphQL operations, under the name "Query".
June 2024 | Dataflow Gen2 certified connector updates | New and updated Dataflow Gen2 connectors have been released. For more information, see the June 2024 Certified connector updates.
June 2024 | New data pipeline connector updates | More connectors are now available for data pipeline. For more information, see the June 2024 Fabric update.
June 2024 | Move Data Across Workspace via Data pipeline Modern Get Data Experience | You can now move data among Lakehouses, warehouses, etc. across different workspaces. In Pipeline Modern Get Data, select a Fabric item from another workspace under Explorer on the left side of the OneLake data hub.
June 2024 | Create a new Warehouse as destination in Data pipeline | You can now create a new Warehouse as a destination in Data Pipeline, instead of only selecting an existing one.
May 2024 | Data Factory Announcements at Microsoft Build Recap | Don't miss any of the Data Factory in Fabric announcements; here's a recap of all new features in Data Factory in Fabric from Build 2024.
May 2024 | New certified connectors | The Power Query SDK and Power Query Connector Certification process has introduced several new Power Query connectors, including connectors for Oracle database, MySQL, Oracle Cloud Storage, Azure AI, Azure Files, Dynamics AX, Google Bigquery, Snowflake ADBC, and more coming soon.
May 2024 | API for GraphQL in Microsoft Fabric (preview) | The new API for GraphQL is a data access layer that allows us to query multiple data sources quickly and efficiently in Fabric. For more information, see What is Microsoft Fabric API for GraphQL?
May 2024 | Power Query Dataflow Gen2 SDK for VS Code GA | The Power Query SDK is now generally available in Visual Studio Code! To get started with the Power Query SDK in Visual Studio Code, install it from the Visual Studio Code Marketplace.
May 2024 | Refresh the Refresh History Dialog | The Refresh History details popup window now has a Refresh button.
May 2024 | New and updated certified connectors | The Power Query SDK and Power Query Connector Certification process has introduced four new and updated Power Query connectors.
May 2024 | Data workflows in Data Factory preview | Data workflows (preview) in Data Factory, powered by Apache Airflow, offer a seamless authoring, scheduling, and monitoring experience for Python-based data processes defined as Directed Acyclic Graphs (DAGs). For more information, see Quickstart: Create a Data workflow.
May 2024 | Trusted Workspace Access in Fabric Data Pipelines preview | Use the workspace identity to establish a secure and seamless connection between Fabric and your storage accounts. You can now create data pipelines to access your firewall-enabled Azure Data Lake Storage
May 2024 | Blob storage Event Triggers for Data Pipelines preview | Azure Blob storage event triggers (preview) in Fabric Data Factory Data Pipelines use Fabric Reflex alerts and eventstreams to create event subscriptions to your Azure storage accounts.
May 2024 | Azure HDInsight activity for data pipelines | The Azure HDInsight activity allows you to execute Hive queries, invoke a MapReduce program, execute Pig queries, execute a Spark program, or a Hadoop Stream program.
May 2024 | Copy data assistant | Start using the Modern Get Data experience by selecting Copy data assistant in the Pipeline landing page or Use copy assistant in the Copy data drop down. You can easily connect to recently used Fabric items, and it provides an intuitive way to read sources from sample data and new connections.
May 2024 | Edit the Destination Table Column Type when Copying Data | You can edit destination table column types when copying data for a new or autocreated destination table for many data stores. For more information, see Configure Lakehouse in a copy activity.
April 2024 | Spark job definition activity | With the new Spark job definition activity, you'll be able to run a Spark job definition in your pipeline.
April 2024 | Fabric Warehouse in ADF copy activity | You can now connect to your Fabric Warehouse from an Azure Data Factory/Fabric Warehouse pipeline. You can find this new connector when creating a new source or sink destination in your copy activity, in the Lookup activity, Stored Procedure activity, Script activity, and Get Metadata activity.
April 2024 | Edit column type to destination table support added to Fabric Warehouse and other SQL data stores | When moving data from any supported data sources into Fabric Warehouse or other SQL data stores (SQL Server, Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse Analytics) via data pipelines, users can now specify the data type for each column.
April 2024 | Performance improvements when writing data to SFTP | The SFTP connector has been improved to offer better performance when writing to SFTP as destination.
April 2024 | Service Principal Name authentication kind support for On-Premises and virtual network data gateways | Azure Service Principals (SPN) are now supported for on-premises data gateways and virtual network data gateways. Learn how to use the service principal authentication kind in Azure Data Lake Storage, Dataverse, Azure SQL Database, Web connector, and more.
April 2024 | New and updated Certified connectors | The Power Query SDK and Power Query Connector Certification process has introduced 11 new and updated custom Power Query connectors.
April 2024 | New Expression Builder Experience | A new experience in the Script activity in Fabric Data Factory pipelines to make it even easier to build expressions using the pipeline expression language.
April 2024 | Data Factory Increases Maximum Activities Per Pipeline to 80 | We have doubled the limit on the number of activities you can define in a pipeline from 40 to 80.
April 2024 | REST APIs for Fabric Data Factory pipelines preview | The REST APIs for Fabric Data Factory Pipelines are now in preview. REST APIs for Data Factory pipelines enable you to extend the built-in capability in Fabric to create, read, update, delete, and list pipelines.
March 2024 | Fast copy in Dataflows Gen2 | With Fast copy, you can ingest terabytes of data with the easy experience of dataflows, but with the scalable backend of Pipeline's Copy activity.
March 2024 | CI/CD for Fabric Data Pipelines preview | Git Integration and integration with built-in Deployment Pipelines to Data Factory data pipelines is now in preview. For more information, see Data Factory Adds CI/CD to Fabric Data Pipelines.
March 2024 | Browse Azure resources with Get Data | Learn how to browse and connect to all your Azure resources with the 'browse Azure' functionality in Get Data. You can browse Azure resources then connect to Synapse, blob storage, or ADLS Gen2 resources easily.
March 2024 | Dataflow Gen2 Support for VNet Gateways now generally available | VNet Data Gateway support for Dataflows Gen2 in Fabric is now generally available. The VNet data gateway helps to connect from Fabric Dataflows Gen2 to Azure data services within a VNet, without the need of an on-premises data gateway.
March 2024 | Privacy levels support in Dataflows | You can now set privacy levels for your connections in your Dataflow Gen2. Privacy levels are critical to configure correctly so that sensitive data is only viewed by authorized users.
February 2024 | Dataflows Gen2 data destinations and managed settings | New features for Dataflows Gen2 include destinations, managed settings, and advanced topics.
February 2024 | Copilot for Data Factory in Microsoft Fabric | Copilot for Data Factory in Microsoft Fabric is now available in preview and included in the Dataflow Gen2 experience. For more information, see Copilot for Data Factory.
February 2024 | Certified Connector updates | The Power Query SDK enables you to create new connectors for both Power BI and Dataflow. New certified Power Query connectors are available in the list of Certified Connectors in Power Query.
February 2024 | Data pipeline connector updates | New connectors are available in your Data Factory data pipelines, including S3 compatible and Google Cloud Storage data sources. For more information, see Data pipeline connectors in Microsoft Fabric.
January 2024 | Automate Fabric Data Warehouse Queries and Commands with Data Factory | In Fabric Data Factory, there are many ways to query data, retrieve data, and execute commands from your warehouse using pipeline activities that can then be easily automated.
January 2024 | Use Fabric Data Factory Data Pipelines to Orchestrate Notebook-based Workflows | Guidance and good practices when building Fabric Spark Notebook workflows using Data Factory in Fabric with data pipelines.
December 2023 | Read and Write to the Fabric Lakehouse using Azure Data Factory (ADF) | You can now read and write data in the Microsoft Fabric Lakehouse from ADF (Azure Data Factory). Using either Copy Activity or Mapping Data Flows, you can read, write, transform, and process data using ADF or Synapse Analytics, currently in preview.
December 2023 | Set activity state for easy pipeline debugging | In Fabric Data Factory data pipelines, you can now set an activity's state to inactive so that you can save your pipeline even with incomplete, invalid configurations.
December 2023 | Connection editing in pipeline editor | You can now edit your existing data connections while you're designing your pipeline without leaving the pipeline editor! When setting your connection, select Edit and a pop-up appears.
December 2023 | Azure Databricks Notebook executions in Fabric Data Factory | You can now create powerful data pipeline workflows that include Notebook executions from your Azure Databricks clusters using Fabric Data Factory. Add a Databricks activity to your pipeline, point to your existing cluster, or request a new cluster, and Data Factory will execute your Notebook code for you.
November 2023 | Dataflow Gen2 General availability of Fabric connectors | The connectors for Lakehouse, Warehouse, and KQL Database are now generally available. We encourage you to use these connectors when trying to connect to data from any of these Fabric workloads.
November 2023 | Dataflow Gen2 Support for column binding for SAP HANA connector | Column binding support is enabled for SAP HANA. This optional parameter results in significantly improved performance. For more information, see Support for column binding for SAP HANA connector.
November 2023 | Dataflow Gen2 staging artifacts hidden | When using a Dataflow Gen2 in Fabric, the system will automatically create a set of staging artifacts. Now, these staging artifacts will be abstracted from the Dataflow Gen2 experience and will be hidden from the workspace list. No action is required by the user and this change has no impact on existing Dataflows.
November 2023 | Dataflow Gen2 Support for VNet Gateways preview | VNet Data Gateway support for Dataflows Gen2 in Fabric is now in preview. The VNet data gateway helps to connect from Fabric Dataflows Gen2 to Azure data services within a VNet, without the need of an on-premises data gateway.
November 2023 | Cross workspace "Save as" | You can now clone your data pipelines across workspaces by using the "Save as" button.
November 2023 | Dynamic content flyout integration with Email and Teams activity | In the Email and Teams activities, you can now add dynamic content with ease. With this new pipeline expression integration, you'll now see a flyout menu to help you select and build your message content quickly without needing to learn the pipeline expression language.
November 2023 | Copy activity now supports fault tolerance for Fabric Data Warehouse connector | The Copy activity in data pipelines now supports fault tolerance for Fabric Warehouse. Fault tolerance allows you to handle certain errors without interrupting data movement. By enabling fault tolerance, you can continue to copy data while skipping incompatible data like duplicated rows.
November 2023 | MongoDB and MongoDB Atlas connectors | MongoDB and MongoDB Atlas connectors are now available to use in your Data Factory data pipelines as sources and destinations.
November 2023 | Microsoft 365 connector now supports ingesting data into Lakehouse (preview) | The Microsoft 365 connector now supports ingesting data into Lakehouse tables.
November 2023 | Multi-task support for editing pipelines in the designer | You can now open and edit data pipelines from different workspaces and navigate between them using the multi-tasking capabilities in Fabric.
November 2023 | String interpolation added to pipeline return value | You can now edit your data connections within your data pipelines. Previously, a new tab would open when connections needed editing. Now, you can remain within your pipeline and seamlessly update your connections.
October 2023 | Category redesign of activities | We've redesigned the way activities are categorized to make it easier for you to find the activities you're looking for with new categories like Control flow, Notifications, and more.
October 2023 | Integer data type available for variables | We now support variables as integers! When creating a new variable, you can now choose to set the variable type to Integer, making it easier to use arithmetic functions with your variables.
October 2023 | Pipeline name now supported in System variables | We've added a new system variable called Pipeline Name so that you can inspect and pass the name of your pipeline inside of the pipeline expression editor, enabling a more powerful workflow in Fabric Data Factory.
October 2023 | Support for Type editing in Copy activity Mappings | You can now edit column types when you land data into your Lakehouse tables. This makes it easier to customize the schema of your data in your destination. Simply navigate to the Mapping tab, import your schemas if you don't see any mappings, and use the dropdown list to make changes.
October 2023 | New certified connector: Emplifi Metrics | Announcing the release of the new Emplifi Metrics connector. The Power BI Connector is a layer between Emplifi Public API and Power BI itself. For more information, see Emplifi Public API documentation.
October 2023 | SAP HANA (Connector Update) | The update enhances the SAP HANA connector with the capability to consume HANA Calculation Views deployed in SAP Datasphere by taking into account SAP Datasphere's additional security concepts.
October 2023 | Set Activity State to "Comment Out" Part of Pipeline | Activity State is now available in Fabric Data Factory data pipelines, giving you the ability to comment out part of your pipeline without deleting the definition.
August 2023 | Secure input/output for logs | We've added advanced settings for the Set Variable activity called Secure input and Secure output. When you enable secure input or output, you can hide sensitive information from being captured in logs.
August 2023 | Pipeline run status added to Output panel | We've recently added Pipeline status so that developers can easily see the status of the pipeline run. You can now view your Pipeline run status from the Output panel.
August 2023 | Data pipelines FTP connector | The FTP connector is now available to use in your Data Factory data pipelines in Microsoft Fabric. Look for it in the New connection menu.
August 2023 | Maximum number of entities in a Dataflow | The new maximum number of entities that can be part of a Dataflow has been raised to 50.
August 2023 | Manage connections feature | The Manage Connections option now allows you to view the linked connections to your dataflow, unlink a connection, or edit connection credentials and gateway.
July 2023 | New modern data connectivity and discovery experience in Dataflows | An improved experience aims to expedite the process of discovering data in Dataflow, Dataflow Gen2, and Datamart.
May 2023 | Introducing Data Factory in Microsoft Fabric | Data Factory enables you to develop enterprise-scale data integration solutions with next-generation dataflows and data pipelines.
Month | Feature | Learn more
July 2024 | Connect to your Azure Resources from Fabric with the Data Pipeline Modern Get Data Experience | Learn how to connect to your Azure resources automatically with the modern get data experience of Data Pipelines.
July 2024 | Fabric Data Pipelines – Advanced Scheduling Techniques (Part 2: Run a Pipeline on a Specific Day) | This blog provides a tutorial on the ability to schedule a Pipeline on a specific day of the month, including both the start of the month along with the last day of the month.
June 2024 | A Data Factory Pipeline Navigator mind map | The ultimate Data Factory Pipeline Mind Map helps you navigate Data Factory pipelines on your Data Factory journey to build a successful Data Integration project.
May 2024 | Semantic model refresh activity | Learn how to use the much-requested Semantic model refresh activity in Data pipelines and how you can now create a complete end-to-end solution that spans the entire pipeline lifecycle.
February 2024 | Fabric Data Pipelines – Advanced Scheduling Techniques | This blog series covers Advanced Scheduling techniques in Microsoft Fabric Data Pipelines.
December 2023 | Read data from Delta Lake tables with the DeltaLake.Table M function | The DeltaLake.Table is a new function in Power Query's M language for reading data from Delta Lake tables. This function is now available in Power Query in Power BI Desktop and in Dataflows Gen1 and Gen2, and replaces the need to use community-developed solutions.
October 2023 | Microsoft Fabric Data Factory Webinar Series – October 2023 | You're invited to join our October webinar series, where we'll show you how to use Data Factory to transform and orchestrate your data in various scenarios.
September 2023 | Notify Outlook and Teams channel/group from a Microsoft Fabric pipeline | Learn how to send notifications to both Teams channels/groups and Outlook emails.
September 2023 | Microsoft Fabric Data Factory Webinar Series – September 2023 | Join our Data Factory webinar series where we'll show you how to use Data Factory to transform and orchestrate your data in various scenarios.
August 2023 | Incrementally amass data | With Dataflows Gen2 that comes with support for data destinations, you can set up your own pattern to load new data incrementally, replace some old data, and keep your reports up to date with your source data.
August 2023 | Data Pipeline Performance Improvement Part 3: Gaining more than 50% improvement for Historical Loads | Learn how to account for pagination given the current state of Fabric Data Pipelines in preview. This pipeline is performant when the number of paginated pages isn't too large. Read more at Gaining more than 50% improvement for Historical Loads.
August 2023 | Data Pipeline Performance Improvements Part 2: Creating an Array of JSONs | Examples from this blog series include how to merge two arrays into an array of JSON objects, and how to take a date range and create multiple subranges then store these as an array of JSONs. Read more at Creating an Array of JSONs.
July 2023 | Data Pipeline Performance Improvements Part 1: How to convert a time interval (dd.hh:mm:ss) into seconds | Part one of a series of blogs on moving data with multiple Copy Activities moving smaller volumes in parallel: How to convert a time interval (dd.hh:mm:ss) into seconds.
July 2023 | Construct a data analytics workflow with a Fabric Data Factory data pipeline | A blog covering data pipelines in Data Factory and the advantages you find by using pipelines to orchestrate your Fabric data analytics projects and activities.
July 2023 | Data Pipelines Tutorial: Ingest files into a Lakehouse from a REST API with pagination ft. AVEVA Data Hub | In this blog, we will act in the persona of an AVEVA customer who needs to retrieve operations data from AVEVA Data Hub into a Microsoft Fabric Lakehouse.
July 2023 | Data Factory Spotlight: Dataflow Gen2 | This blog spotlight covers the two primary high-level features Data Factory implements: dataflows and pipelines.
ノ Expand table
August Import Notebook UX The Import Notebook feature user interface has been
2024 improvement enhanced - you can now effortlessly import
notebooks, reports, or paginated reports using the
unified entry in the workspace toolbar.
July 2024 Environment Resources The new Environment Resources Folder is a shared
folder repository designed to streamline collaboration across
multiple notebooks.
June 2024 Fabric Spark connector The Fabric Spark connector for Synapse Data Warehouse
for Fabric Synapse Data (preview) enables a Spark developer or a data scientist
Warehouse in Spark to access and work on data from a warehouse or SQL
runtime (preview) analytics endpoint of the lakehouse (either from within
the same workspace or from across workspaces) with a
simplified Spark API.
June 2024 External data sharing REST APIs for OneLake external data sharing are now
public API preview available in preview. Users can now scale their data
sharing use cases by automating the creation of shares
with the public API.
June 2024 Capacity pools preview Capacity administrators can now create custom pools
(preview) based on their workload requirements,
providing granular control over compute resources.
Custom pools for Data Engineering and Data Science
can be set as Spark Pool options within Workspace
Spark Settings and environment items.
June 2024 Native Execution Engine The Native Execution Engine for Apache Spark on Fabric
for Apache Spark Data Engineering and Data Science for Fabric Runtime
1.2 is now in preview. For more information, see Native
execution engine for Fabric Spark.
Month Feature Learn more
June 2024 OneLake data access Following the release of OneLake data access roles in
roles API preview, new APIs are available for managing data
access roles . These APIs can be used to
programmatically manage granular data access for your
lakehouses.
May 2024 Runtime 1.3 (Apache The enhancements in Fabric Runtime 1.3 include the
Spark 3.5, Delta Lake incorporation of Delta Lake 3.1, compatibility with
3.1, R 4.3.3, Python 3.11) Python 3.11, support for Starter Pools, integration with
(preview) Environment, and library management capabilities.
Additionally, Fabric Runtime now enriches the data
science experience by supporting the R language and
integrating Copilot.
May 2024 Spark Run Series The Spark Monitoring Run Series Analysis features
Analysis and Autotune allow you to analyze the run duration trend and
feature preview performance comparison for Pipeline Spark activity
recurring run instances and repetitive Spark run
activities, from the same Notebook or Spark Job
Definition.
May 2024 OneLake shortcuts to Connect to on-premises data sources with a Fabric on-
on-premises and premises data gateway on a machine in your
network-restricted data environment, with networking visibility of your S3
sources (preview) compatible, Amazon S3, or Google Cloud Storage data
source. Then, you create your shortcut and select that
gateway. For more information, see Create shortcuts to
on-premises data.
May 2024 Comment @tagging in Notebook now supports the ability to tag others in
Notebook comments , just like the familiar functionality of using
Office products.
May 2024 Notebook ribbon New features in the Fabric notebook ribbon including
upgrades the Session connect control and Data Wrangler button
on the Home tab, High concurrency sessions, new View
session information control including the session
timeout.
May 2024 Data Engineering: The Environment in Fabric is now generally available. The
Environment GA Environment is a centralized item that allows you to
configure all the required settings for running a Spark
job in one place. At GA, we added support for Git,
deployment pipelines, REST APIs, resource folders, and
sharing.
Month Feature Learn more
May 2024 Public API for REST API support for Fabric Data Engineering/Science
Workspace Data workspace settings allows users to create/manage
Engineering/Science their Spark compute, select the default runtime/default
environment, enable or disable high concurrency mode,
or ML autologging.
April 2024 Fabric Spark Optimistic Fabric Spark Optimistic Job Admission reduces the
Job Admission frequency of throttling errors (HTTP 430: Spark Capacity
Limit Exceeded Response) and improves the job
admission experience for our customers, especially
during peak usage hours.
April 2024 Single Node support for The Single Node support for starter pools feature lets
starter pools you set your starter pool to max one node and get
super-fast session start times for your Spark sessions.
April 2024 Container Image for To simplify the development process, we have released a
Synapse VS Code container image for Synapse VS Code that contains all
the necessary dependencies for the extension.
April 2024 Git integration with Git integration with Spark Job definitions allows you
Spark Job definition to check in the changes of your Spark Job Definitions
into a Git repository, which will include the source code
of the Spark jobs and other item properties.
April 2024 New Revamped Object The new Object Explorer experience improves
Explorer experience in flexibility and discoverability of data sources in the
the notebook explorer and improve the discoverability of Resource
folders.
April 2024 %Run your scripts in Now you can use %run magic command to run your
Notebook Python scripts and SQL scripts in Notebook resources
folder , just like Jupyter notebook %run command.
April 2024 OneLake shortcuts to OneLake shortcuts to S3-compatible data sources are
S3-compatible data now in preview . Create an Amazon S3 compatible
sources preview shortcut to connect to your existing data through a
single unified name space without having to copy or
move data.
April 2024 OneLake shortcuts to OneLake shortcuts to Google Cloud Storage are now in
Google Cloud Storage preview . Create a Google Cloud Storage shortcut to
preview connect to your existing data through a single unified
name space without having to copy or move data.
April 2024 OneLake data access OneLake data access roles for lakehouse are in
roles preview . Role permissions and user/group
assignments can be easily updated through a new folder
security user interface.
Month Feature Learn more
March New validation The new validation enhancement to the "Load to table"
2024 enhancement for "Load feature help mitigate any validation issues and make
to table" your data loading experience smoother and faster.
March Queuing for Notebook Now with Job Queueing for Notebook Jobs , jobs that
2024 Jobs are triggered by pipelines or job scheduler will be added
to a queue and will be retried automatically when the
capacity frees up. For more information, see Job
queueing in Microsoft Fabric Spark.
March Autotune Query Tuning The Autotune Query Tuning feature for Apache Spark
2024 feature for Apache is now available. Autotune leverages historical data from
Spark your Spark SQL queries and machine learning algorithms
to automatically fine-tune your configurations, ensuring
faster execution times and enhanced efficiency.
March OneLake File Explorer: With our latest release v1.0.11.0 of file explorer , we're
2024 Editing via Excel excited to announce that you can now update your files
directly using Excel , mirroring the user-friendly
experience available in OneDrive.
February Reduce egress costs Learn how OneLake shortcuts to S3 now support
2024 with S3 shortcuts in caching , which can greatly reduce egress costs. Use
OneLake the new Enable Cache for S3 Shortcuts setting with an
S3 shortcut.
February OneLake Shortcuts API New REST APIs for OneLake Shortcuts allow
2024 programmatic creation and management of shortcuts,
currently in preview. You can now programmatically
create, read, and delete OneLake shortcuts. For example,
see Use OneLake shortcuts REST APIs.
February Browse code snippet The new Browse code snippet notebook feature
2024 allows you to easily access and insert code snippets for
commonly used code snippets with multiple supported
languages.
February Fabric notebook status The new Fabric Notebook status bar has three
2024 bar upgrade persisted info buttons: session status, save status, and
cell selection status. Plus, context features include info
on the git connection state, a shortcut to extend session
timeout, and a failed cell navigator.
January Microsoft Fabric Copilot Copilot for Data Science and Data Engineering is now
2024 for Data Science and available worldwide. What can Copilot for Data Science
Data Engineering and Data Engineering do for you?
January Newest version of With the newest version of OneLake file explorer
2024 OneLake File Explorer (v1.0.11.0) we bring a few updates to enhance your
includes Excel experience with OneLake, including Excel Integration .
Integration
December %%configure – Now you can personalize your Spark session with the
2023 personalize your Spark magic command %%configure, in both interactive
session in Notebook notebook and pipeline notebook activities.
December Rich dataframe preview The display() function has been updated on Fabric
2023 in Notebook Notebook , now named the Rich dataframe preview.
Now when you use display() to preview your
dataframe, you can easily specify the range, view the
dataframe summary and column statistics, check invalid
values or missing values, and preview the long cell.
December Working with OneLake If you want to use an application that directly integrates
2023 using Azure Storage with Windows File Explorer, check out OneLake file
Explorer explorer . However, if you're accustomed to using
Azure Storage Explorer for your data management
tasks , you can continue to harness its functionalities
with OneLake and some of its key benefits.
November Accessibility support for To provide a more inclusive and user-friendly interaction,
2023 Lakehouse we have implemented improvements so far to support
accessibility in the Lakehouse , including screen reader
compatibility, responsive design text reflow, keyboard
navigation, alternative text for images, and form fields
and labels.
November Enhanced multitasking We've introduced new capabilities to enhance the multi-
2023 experience in tasking experience in Lakehouse , including
Lakehouse multitasking during running operations, nonblocking
reloading, and clearer notifications.
November SQL analytics endpoint You can now retry the SQL analytics endpoint
2023 re-provisioning provisioning directly within the Lakehouse . This means
that if your initial provisioning attempt fails, you have
the option to try again without the need to create an
entirely new Lakehouse.
November Multiple Runtimes With the introduction of Runtime 1.2, Fabric supports
2023 Support multiple runtimes , offering users the flexibility to
seamlessly switch between them, minimizing the risk of
incompatibilities or disruptions. When changing
runtimes, all system-created items within the workspace,
including Lakehouses, SJDs, and Notebooks, will operate
using the newly selected workspace-level runtime
version starting from the next Spark Session.
November Monitoring Hub for The latest enhancements in the monitoring hub are
2023 Spark enhancements designed to provide a comprehensive and detailed view
of Spark and Lakehouse activities , including executor
allocations, runtime version for a Spark application, a
related items link in the detail page.
November Monitoring for Users can now view the progress and status of
2023 Lakehouse operations Lakehouse maintenance jobs and table load activities.
Month Feature Learn more
November REST API support for REST Public APIs for Spark Job Definition are now
2023 Spark Job Definition available, making it easy for users to manage and
preview manipulate SJD items .
November REST API support for As a key requirement for workload integration, REST
2023 Lakehouse, Load to Public APIs for Lakehouse are now available. The
tables and table Lakehouse REST Public APIs makes it easy for users to
maintenance manage and manipulate Lakehouse items
programmatically.
November Lakehouse support for The Lakehouse now integrates with the lifecycle
2023 git integration and management capabilities in Microsoft Fabric ,
deployment pipelines providing a standardized collaboration between all
(preview) development team members throughout the product's
life. Lifecycle management facilitates an effective
product versioning and release process by continuously
delivering features and bug fixes into multiple
environments.
November Notebook resources We now support uploading the .jar files in the Notebook
2023 .JAR file support Resources explorer . You can add your own compiled
libs, use drag & drop to generate a code snippet to
install them in the session, and load the libraries in code
conveniently.
Month Feature Learn more
November Notebook Git Fabric notebooks now offer Git integration for source
2023 integration preview control using Azure DevOps . It allows users to easily
control the notebook code versions and manage the git
branches by leveraging the Fabric Git functions and
Azure DevOps.
November Notebook in Now you can also use notebooks to deploy your code
2023 Deployment Pipeline across different environments , such as development,
Preview test, and production. You can also use deployment rules
to customize the behavior of your notebooks when
they're deployed, such as changing the default
Lakehouse of a Notebook. Get started with deployment
pipelines, and Notebook shows up in the deployment
content automatically.
November Notebook REST APIs With REST Public APIs for the Notebook items, data
2023 Preview engineers/data scientists can automate their pipelines
and establish CI/CD conveniently and efficiently. The
notebook Restful Public API can make it easy for users
to manage and manipulate Fabric notebook items and
integrate notebook with other tools and systems.
November Synapse VS Code With support for the Synapse VS Code extension on
2023 extension in vscode.dev vsocde.dev, users can now seamlessly edit and execute
preview Fabric Notebooks without ever leaving their browser
window . Additionally, all the native pro-developer
features of VS Code are now accessible to end-users in
this environment.
October Create multiple Creating multiple OneLake shortcuts just got easier.
2023 OneLake shortcuts at Rather than creating shortcuts one at a time, you can
once now browse to your desired location and select multiple
targets at once. All your selected targets then get
created as new shortcuts in a single operation .
October Delta-RS introduces The OneLake team worked with the Delta-RS community
2023 native support for to help introduce support for recognizing OneLake URLs
OneLake in both Delta-RS and the Rust Object Store .
September Import notebook to The new "Import Notebook" entry on the Workspace ->
2023 your Workspace New menu lets you easily import new Fabric
Notebook items in the target workspace. You can upload
Month Feature Learn more
September Notebook file system The Synapse VS Code extension now supports notebook
2023 support in Synapse VS File System for Data Engineering and Data Science in
Code extension Microsoft Fabric. The Synapse VS Code extension
empowers users to develop their notebook items
directly within the Visual Studio Code environment.
September Notebook save conflict We now support viewing and comparing the differences
2023 resolution between two versions of the same notebook when
there are saving conflicts.
September Mssparkutils new API We now support a new method in mssparkutils,
2023 for fast data copy mssparkutils.fs.fastcp(), that makes moving or copying
large volumes of data much faster. You can use
mssparkutils.fs.help("fastcp") to check the detailed
usage.
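For example, inside a Fabric notebook session (the paths below are placeholders, and the exact fastcp signature should be confirmed with the built-in help):

```python
from notebookutils import mssparkutils  # available by default in Fabric notebooks

# Print the built-in documentation for the new method.
mssparkutils.fs.help("fastcp")

# Copy a folder into the attached lakehouse's Files area.
# Source and destination are placeholders; the recursive flag is an assumed option.
src = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Files/raw/"
dst = "Files/staging/"
mssparkutils.fs.fastcp(src, dst, True)
```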
September Notebook resources We now support uploading .whl files in the Notebook
2023 .whl file support Resources explorer .
August Introducing High High concurrency mode allows you to run notebooks
2023 Concurrency Mode in simultaneously on the same cluster without
Notebooks for Data compromising performance or security when paying for
Engineering and Data a single session. High concurrency mode offers several
Science workloads in benefits for Fabric Spark users.
Microsoft Fabric
July 2023 Lakehouse Sharing and Share a lakehouse and manage permissions so that
Access Permission users can access lakehouse data through the Data Hub,
Management the SQL analytics endpoint, and the default semantic
model.
June 2023 Virtualize your existing Connect data silos without moving or copying data with
data into OneLake with OneLake, which allows you to create special folders
shortcuts called shortcuts that point to other storage locations .
May 2023 Introducing Data With Fabric Data Engineering, one of the core
Engineering in experiences of Microsoft Fabric, data engineers feel right
Microsoft Fabric at home, able to leverage the power of Apache Spark to
transform their data at scale and build out a robust
lakehouse architecture .
August Build a custom Sparklens JAR In this blog, learn how to build the sparklens
2024 JAR for Spark 3.X , which can be used in
Microsoft Fabric.
July 2024 Create a shortcut to a VPC- Learn how to create a shortcut to a VPC-
protected S3 bucket protected S3 bucket , using the on-
premises data gateway and AWS Virtual
Private Cloud (VPC).
July 2024 Move Your Data Across The new modern get data experience of data
Workspaces Using Modern Get pipeline now supports copying to Lakehouse
Data of Fabric Data Pipeline and warehouse across different workspaces
with an intuitive experience.
June 2024 Demystifying Data Ingestion in Learn about a batch data Ingestion
Fabric: Fundamental Components framework based on experience working
for Ingesting Data into a Fabric with different customers while building a
Lakehouse using Fabric Data lakehouse in Fabric.
Pipelines
June 2024 Boost performance and save costs Learn how the Fast Copy feature helps to
with Fast Copy in Dataflows Gen2 enhance the performance and cost-efficiency
of your Dataflows Gen2 .
May 2024 Copy Data from Lakehouse in Learn how to copy data between Lakehouses
another Workspace using Data across different workspaces via Data
pipeline pipeline.
May 2024 Profiling Microsoft Fabric Spark In this blog, you will learn how to leverage
Notebooks with Sparklens Sparklens, an open-source Spark profiling
tool, to profile Microsoft Fabric Spark Notebooks.
March Bridging Fabric Lakehouses: Delta Learn how to use the Delta Change Data
2024 Change Data Feed for Seamless Feed to facilitate seamless data
ETL synchronization across different lakehouses
in your medallion architecture .
January Use Fabric Data Factory Data Guidance and good practices when building
2024 Pipelines to Orchestrate Fabric Spark Notebook workflows using Data
Notebook-based Workflows Factory in Fabric with data pipelines.
November Fabric Changing the game: Using A step-by-step guide to use your own Python
2023 your own library with Microsoft library in the Lakehouse . It's quite simple to
Fabric create your own library with Python and even
simpler to reuse it on Fabric.
August Fabric changing the game: Learn more about logging your workload into
2023 Logging your workload using OneLake using notebooks , using the
Notebooks OneLake API Path inside the notebook.
August Apply MLFlow tags on ML You can now apply MLflow tags directly on ML
2024 experiment runs and model experiment runs and ML model versions from the
versions user interface .
August Track related ML You can now use an enhancement to the Monitoring
2024 Experiment runs in your Hub to track related ML experiment runs within
Spark Application Spark applications. You can also integrate
Experiment items into the Monitoring Hub .
August Use PREDICT with Fabric You can now move from training with AutoML to
2024 AutoML models making predictions by using the built-in Fabric
PREDICT UI and code-first APIs for batch
predictions . For more information, see Machine
learning model scoring with PREDICT in Microsoft
Fabric.
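As a sketch of the code-first path (the MLFlowTransformer usage below mirrors the documented PREDICT pattern, but the model name, version, and feature columns are placeholders to adapt):

```python
from pyspark.sql import SparkSession
from synapse.ml.predict import MLFlowTransformer

spark = SparkSession.builder.getOrCreate()

# A tiny placeholder DataFrame with the feature columns used at training time.
df = spark.createDataFrame(
    [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
    ["feature_1", "feature_2", "feature_3"],
)

# Wrap a registered MLflow model (for example, one produced by AutoML) for batch scoring.
model = MLFlowTransformer(
    inputCols=["feature_1", "feature_2", "feature_3"],  # placeholder feature columns
    outputCol="prediction",
    modelName="my-automl-model",   # placeholder registered model name
    modelVersion=1,
)

scored = model.transform(df)
scored.select("prediction").show()
```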
August Data Science AI skill You can now build your own generative AI
2024 (preview) experiences over your data in Fabric with the AI skill
(preview)! You can build question and answering AI
systems over your Lakehouses and Warehouses. For
more information, see Introducing AI Skills in
Microsoft Fabric: Now in Preview . To get started,
try AI skill example with the AdventureWorks dataset
(preview).
July 2024 Semantic link preinstalled Semantic Link is now included in the default
runtime. If you use Fabric with Spark 3.4 or later,
semantic link is already in the default runtime, and
you don't need to install it.
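For instance, in a Fabric notebook you can start using it immediately; the function names below follow the sempy.fabric API as documented for semantic link, and the model and table names are placeholders:

```python
import sempy.fabric as fabric

# List the semantic models (Power BI datasets) you can access.
print(fabric.list_datasets())

# Read one table of a semantic model into a pandas-like FabricDataFrame.
df = fabric.read_table("Sales Model", "FactSales")  # placeholder model and table names
print(df.head())
```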
July 2024 Semantic Link Labs Semantic Link Labs is a library of helpful python
solutions for use in Microsoft Fabric notebooks.
Semantic Link Labs helps Power BI developers and
admins easily automate previously complicated
tasks, as well as make semantic model optimization
tooling more easily accessible within the Fabric
ecosystem. For Semantic Link Labs documentation,
see semantic-link-labs documentation . For more
information and to see it in action, read the
Semantic Link Labs announcement blog .
June 2024 Capacity pools preview Capacity administrators can now create custom
pools (preview) based on their workload
requirements, providing granular control over
compute resources. Custom pools for Data
Engineering and Data Science can be set as Spark
Pool options within Workspace Spark Settings and
environment items.
June 2024 Native Execution Engine for The Native Execution Engine for Apache Spark on
Apache Spark Fabric Data Engineering and Data Science for
Fabric Runtime 1.2 is now in preview. For more
information, see Native execution engine for Fabric
Spark.
June 2024 Demystifying Data Learn about a batch data Ingestion framework
Ingestion in Fabric: based on experience working with different
Fundamental Components customers while building a lakehouse in Fabric.
for Ingesting Data into a
Fabric Lakehouse using
Fabric Data Pipelines
June 2024 Boost performance and Learn how the Fast Copy feature helps to enhance
save costs with Fast Copy in the performance and cost-efficiency of your
Dataflows Gen2 Dataflows Gen2.
May 2024 Public API for Workspace REST API support for Fabric Data
Data Engineering/Science Engineering/Science workspace settings allows
users to create/manage their Spark compute, select
the default runtime/default environment, enable or
disable high concurrency mode, or ML autologging.
April 2024 Semantic Link GA Semantic links are now generally available! The
package comes with our default VHD. You can now
use Semantic link in Fabric right away without any
pip installation.
April 2024 Capacity level delegation Tenant admins can now enable AI and Copilot in
for AI and Copilot Fabric for the entire organization, certain security
groups, or for a specific Capacity.
March EU customers can use AI Since mid-March EU customers can use AI and
2024 and Copilot without cross- Copilot without turning on the cross-geo setting ,
geo setting and their AI and Copilot requests will be processed
within EUDB.
March Code-First AutoML preview With the new AutoML feature , you can automate
2024 your machine learning workflow and get the best
results with less effort. AutoML, or Automated
Machine Learning, is a set of techniques and tools
that can automatically train and optimize machine
learning models for any given data and task type.
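Fabric's code-first AutoML builds on the open-source FLAML library (worth confirming against the linked feature docs); a minimal, generic FLAML sketch on scikit-learn sample data looks like this:

```python
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

automl = AutoML()
automl.fit(
    X_train,
    y_train,
    task="classification",  # the task type drives the model and search space
    time_budget=60,         # seconds to spend searching models and hyperparameters
    metric="roc_auc",
)

print("best estimator:", automl.best_estimator)
print("best config:", automl.best_config)
print("holdout accuracy:", accuracy_score(y_test, automl.predict(X_test)))
```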
March Compare Nested Runs Parent and child runs in the Run List View for ML
2024 Experiments introduces a hierarchical structure,
allowing users to effortlessly view various parent and
child runs within a single view and seamlessly
interact with them to visually compare results.
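The hierarchy shown in the Run List View comes from MLflow's nested runs; a minimal sketch of producing a parent run with children (the experiment name, parameter values, and dummy metric are illustrative):

```python
import mlflow

mlflow.set_experiment("nested-runs-demo")  # placeholder experiment name

with mlflow.start_run(run_name="hyperparameter-sweep"):
    for lr in (0.01, 0.1, 1.0):
        # Each child run is attached to the parent via nested=True.
        with mlflow.start_run(run_name=f"lr={lr}", nested=True):
            mlflow.log_param("learning_rate", lr)
            mlflow.log_metric("val_loss", 1.0 / (1.0 + lr))  # dummy metric for illustration
```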
March Support for Mandatory MIP ML Model and Experiment items in Fabric now offer
2024 Label Enforcement enhanced support for Microsoft Information
Protection (MIP) labels .
January Microsoft Fabric Copilot for Copilot for Data Science and Data Engineering is
2024 Data Science and Data now available worldwide. What can Copilot for Data
Engineering Science and Data Engineering do for you?
December Semantic Link update We're excited to announce the latest update of
2023 Semantic Link ! Apart from many improvements,
we also added many new features for our Power BI
engineering community that you can use from
Fabric notebooks to satisfy all your automation
needs.
November Copilot in notebooks The Copilot in Fabric Data Science and Data
2023 preview Engineering notebooks is designed to accelerate
productivity, provide helpful answers and guidance,
and generate code for common tasks like data
exploration, data preparation, and machine learning.
You can interact and engage with the AI from either
the chat panel or even from within notebooks cells
using magic commands to get insights from data
faster. For more information, see Copilot in
notebooks .
November Data Wrangler for Spark Data Wrangler now supports Spark DataFrames in
2023 DataFrames preview preview. Until now, users have been able to explore
and transform pandas DataFrames using common
operations that can be converted to Python code in
real time. The new release allows users to edit Spark
DataFrames in addition to pandas DataFrames with
Data Wrangler .
November MLFlow Notebook Widget The MLflow inline authoring widget enables users to
2023 effortlessly track their experiment runs along with
metrics and parameters, all directly from within their
notebook .
November New Model & Experiment New enhancements to our model and experiment
2023 Item Usability tracking features are based on valuable user
Improvements feedback. The new tree-control in the run details
view makes tracking easier by showing which run is
selected. We've enhanced the comparison feature,
allowing you to easily adjust the comparison pane
for a more user-friendly experience. Now you can
select the run name to see the Run Details view.
November Recent Experiment Runs It's now simpler for users to check out recent runs
2023 for an experiment directly from the workspace list
view . This update makes it easier to keep track of
recent activity, quickly jump to the related Spark
application, and apply filters based on the run status.
November Prebuilt AI models in We're excited to announce the preview for prebuilt
2023 Microsoft Fabric preview AI models in Fabric . Azure OpenAI Service , Text
Analytics , and Azure AI Translator are prebuilt
models available in Fabric, with support for both
RESTful API and SynapseML. You can also use the
OpenAI Python Library to access Azure OpenAI
service in Fabric.
November Reusing existing Spark We have added support for a new connection
2023 Session in sparklyr method called "synapse" in sparklyr , which
enables users to connect to an existing Spark
session. Additionally, we have contributed this
connection method to the OSS sparklyr project.
Users can now use both sparklyr and SparkR in the
same session and easily share data between them.
November REST API Support for ML REST APIs for ML Experiment and ML Model are
2023 Experiments and ML Models now available. These REST APIs for ML Experiments
and ML Models begin to empower users to create
and manage machine learning items
programmatically, a key requirement for pipeline
automation and workload integration.
October Semantic link in Microsoft We're pleased to introduce the preview of semantic
2023 Fabric: Bridging BI and Data link , an innovative feature that seamlessly
Science connects Power BI semantic models with Fabric Data
Science.
October Get started with semantic Explore how semantic link seamlessly connects
2023 link (preview) Power BI semantic models with Fabric Data Science.
Learn more at Semantic link in Microsoft Fabric:
Bridging BI and Data Science .
August Harness the Power of Harness the potential of Microsoft Fabric and
2023 LangChain in Microsoft SynapseML LLM capabilities to effectively
Fabric for Advanced summarize and organize your own documents.
Document Summarization
July 2023 Unleashing the Power of In this blog post, we delve into the exciting
SynapseML and Microsoft functionalities and features of Microsoft Fabric and
Fabric: A Guide to Q&A on SynapseML to demonstrate how to leverage
PDF Documents Generative AI models or Large Language Models
(LLMs) to perform question and answer (Q&A) tasks
on any PDF document .
May 2023 Introducing Fabric Data With data science in Microsoft Fabric, you can utilize
Science the power of machine learning features to
seamlessly enrich data as part of your data and
analytics workflows .
June 2024 Building Custom AI This guide walks you through implementing a RAG
Applications with (Retrieval Augmented Generation) system in
Microsoft Fabric: Microsoft Fabric using Azure OpenAI and Azure AI
Implementing Retrieval Search .
Augmented Generation
for Enhanced Language
Models
March New AI Samples New AutoML sample, Model Tuning, and Semantic
2024 Link samples appear in the Quick Tutorial category
of the Data Science samples on Microsoft Fabric.
December Using Microsoft Fabric's A step-by-step RAG application through prompt flow
2023 Lakehouse Data and in Azure Machine Learning Service combined with
prompt flow in Azure Microsoft Fabric's Lakehouse data.
Machine Learning Service
to create RAG applications
November New data science happy We've updated the Data Science Happy Path tutorial
2023 path tutorial in Microsoft for Microsoft Fabric . This new comprehensive
Fabric tutorial demonstrates the entire data science
workflow , using a bank customer churn problem as
the context.
November New data science samples We've expanded our collection of data science
2023 samples to include new end-to-end R samples and
new quick tutorial samples for "Explaining Model
Outputs" and "Visualizing Model Behavior."
November New data science The new Data Science sample on sales forecasting
2023 forecasting sample was developed in collaboration with Sonata
Software . This new sample encompasses the entire
data science workflow, spanning from data cleaning
to Power BI visualization. The notebook covers the
steps to develop, evaluate, and score a forecasting
model for superstore sales, harnessing the power of
the SARIMAX algorithm.
August New Machine failure and More samples have been added to the Fabric Data
2023 Customer churn samples Science Use a sample menu. To check these Data
Science samples, select Fabric Data Science, then Use
a sample.
August Use Semantic Kernel with Learn how Fabric allows data scientists to use
2023 Lakehouse in Microsoft Semantic Kernel with Lakehouse in Microsoft Fabric .
Fabric
Fabric Data Warehouse
This section summarizes archived improvements and features for Data Warehouse in
Microsoft Fabric.
August Mirroring integration You can now use the Modern Get Data experience to
2024 with modern get data choose from all the available mirrored databases in
experience OneLake.
August T-SQL DDL support in You can now run DDL operations on an Azure SQL
2024 Azure SQL Database Database mirrored database such as Drop Table,
mirrored database Rename Table, and Rename Column.
August Delta Lake log You can now pause and resume the publishing of Delta
2024 publishing pause and Lake Logs for Warehouses . For more information, see
resume Delta Lake logs in Warehouse in Microsoft Fabric.
August Managing V-Order You can now manage V-Order behavior at the
2024 behavior of Fabric warehouse level . For more information, see
Warehouses Understand V-Order for Microsoft Fabric Warehouse.
July 2024 ALTER TABLE and We've added T-SQL ALTER TABLE support for some
nullable column support operations, as well as nullable column support to
tables in the warehouse. For more information, see
ALTER TABLE (Transact-SQL).
July 2024 Warehouse queries with Warehouse in Microsoft Fabric offers the capability to
time travel (GA) query the historical data as it existed in the past at the
statement level, now generally available. The ability to
query data from a specific timestamp is known in the
data warehousing industry as time travel.
July 2024 Restore warehouse You can now create restore points and perform a
experience in the Fabric restore in-place of a warehouse item. For more
portal information, see Seamless Data Recovery through
Warehouse restoration .
July 2024 Warehouse source Using Git integration and/or deployment pipelines with
control (preview) your warehouse, you can manage development and
deployment of versioned warehouse objects. You can
use SQL Database Projects extension available inside of
Azure Data Studio and Visual Studio Code.
July 2024 Time travel and clone The retention period for time travel queries and clone
table retention window table is now 30 days.
expanded
June 2024 Restore in place portal You can now create user-created restore points in your
experience warehouse via the Fabric portal. For more information,
see Restore in-place of a warehouse in Microsoft
Fabric.
June 2024 Fabric Spark connector The Fabric Spark connector for Fabric Data Warehouse
for Fabric Data (preview) enables a Spark developer or a data scientist
Warehouse in Spark to access and work on data from Fabric DW and SQL
runtime (preview) analytics endpoint of the lakehouse (either from within
the same workspace or from across workspaces) with a
simplified Spark API.
May 2024 Monitor Warehouse You can monitor Fabric Data Warehouse activity with a
tools variety of tools, including: Billing and utilization
reporting in Fabric Data Warehouse, monitor
connections, sessions, and requests using DMVs, Query
insights, and now Query activity. For more information,
read Query activity: A one-stop view to monitor your
running and completed T-SQL queries .
May 2024 Copilot for Data Copilot for Data Warehouse (preview) is now available
Warehouse in limited preview, offering the Copilot chat pane, quick
actions, and code completions.
May 2024 Warehouse queries with Warehouse in Microsoft Fabric offers the capability to
time travel (preview) query the historical data as it existed in the past at the
statement level, currently in preview. The ability to
query data from a specific timestamp is known in the
data warehousing industry as time travel.
May 2024 COPY INTO COPY INTO now supports Microsoft Entra ID
enhancements authentication and access to firewall protected storage
via the trusted workspace functionality. For more
information, see COPY INTO enhancements and
COPY INTO (Transact-SQL).
April 2024 Fabric Warehouse in ADF You can now connect to your Fabric Warehouse from
copy activity an Azure Data Factory/Synapse pipeline . You can find
this new connector when creating a new source or sink
destination in your copy activity, in the Lookup activity,
Stored Procedure activity, Script activity, and Get
Metadata activity.
April 2024 Git integration Git integration for the Warehouse allows you to
check in the changes of your Warehouse to an Azure
DevOps Git repository as a SQL database project.
March Mirroring in Microsoft With Mirroring in Fabric, you can easily bring your
2024 Fabric preview databases into OneLake in Microsoft Fabric , enabling
seamless zero-ETL, near real-time insights on your data
– and unlocking warehousing, BI, AI, and more. For
more information, see What is Mirroring in Fabric?.
March Cold cache performance Fabric stores data in Delta tables and when the data is
2024 improvements not cached, it needs to transcode data from parquet
file format structures to in-memory structures for query
processing. Recent cold cache performance
improvements further optimize transcoding and we
observed up to 9% faster queries in our tests when
data is not previously cached.
March Extract and publish a The SQL Database Projects extension creates a SQL
2024 SQL database project project ( .sqlproj ) file, a local representation of SQL
directly through the DW objects that comprise the schema for a single
editor database, such as tables, stored procedures, or
functions. You can now extract and publish a SQL
database project directly through the DW editor .
March Change owner of The new Takeover API allows you to change the
2024 Warehouse item warehouse owner from the current owner to a new
owner, which can be an SPN or an Organizational
Account.
March Clone table RLS and CLS A cloned table now inherits the row-level security (RLS)
2024 and dynamic data masking from the source of the
clone table.
December Automatic Log Automatic Log Checkpointing is one of the ways that
2023 Checkpointing for Fabric we help your Data Warehouse to provide you with
Warehouse great performance and best of all, it involves no
additional work from you!
December Restore points and You can now create restore points and perform an in-
2023 restore in place place restore of a warehouse to a past point in time.
The restore points and restore in place features are
currently in preview. Restore in-place is an essential
part of data warehouse recovery, which allows you to
restore the data warehouse to a prior known reliable
state by replacing or over-writing the existing data
warehouse from which the restore point was created.
November TRIM T-SQL support You can now use the TRIM command to remove spaces
2023 or specific characters from strings by using the
keywords LEADING, TRAILING or BOTH in TRIM
(Transact-SQL).
November SSD metadata caching File and rowgroup metadata are now also cached with
2023 in-memory and SSD cache, further improving
performance.
November PARSER 2.0 CSV file parser version 2.0 for COPY INTO builds on an
2023 improvements for CSV innovation from Microsoft Research's Data Platform
ingestion and Analytics group to make CSV file ingestion blazing
fast on Fabric Warehouse. For more information, see
COPY INTO (Transact-SQL).
November Fast compute resource All query executions in Fabric Warehouse are now
2023 assignment enabled powered by the new technology recently deployed as
part of the Global Resource Governance component
that assigns compute resources in milliseconds.
November REST API support for With the Warehouse public APIs, SQL developers can
2023 Warehouse now automate their pipelines and establish CI/CD
conveniently and efficiently. The Warehouse REST
Public APIs makes it easy for users to manage and
manipulate Fabric Warehouse items.
November Power BI semantic Microsoft has renamed the Power BI dataset content
2023 models type to semantic model. This applies to Microsoft Fabric
semantic models as well. For more information, see
New name for Power BI datasets.
November SQL analytics endpoint Microsoft has renamed the SQL endpoint of a
2023 Lakehouse to the SQL analytics endpoint of a
Lakehouse.
November Dynamic data masking Dynamic Data Masking (DDM) is now available for Fabric
2023 Warehouse and the SQL analytics endpoint in the Lakehouse. For
more information and samples, see Dynamic data
masking in Fabric data warehousing and How to
implement dynamic data masking in Fabric Data
Warehouse.
November Clone tables with time You can now use table clones to create a clone of a
2023 travel table based on data up to seven calendar days in the
past .
November User experience updates Several user experiences in Warehouse have landed.
2023 For more information, see Fabric Warehouse user
experience updates .
October Support for sp_rename Support for the T-SQL sp_rename syntax is now
2023 available for both Warehouse and SQL analytics
endpoint. For more information, see Fabric Warehouse
support for sp_rename .
October Full DML to Delta Lake Fabric Warehouse now publishes all Inserts, Updates,
2023 Logs and Deletes for each table to their Delta Lake Log in
OneLake.
October Throttling and A new article details the throttling and smoothing
2023 smoothing in Fabric Data behavior in Fabric Data Warehouse, where almost all
Warehouse activity is classified as background to take advantage of
the 24-hr smoothing window before throttling takes
effect. Learn more about how to observe utilization in
Fabric Data Warehouse.
September Default semantic model The default semantic model no longer automatically
2023 improvements adds new objects. Automatic addition of objects can be
re-enabled in the Warehouse item settings.
September SQL Projects support for Microsoft Fabric Data Warehouse is now supported in
2023 Warehouse in Microsoft the SQL Database Projects extension available inside of
Fabric Azure Data Studio and Visual Studio Code .
September Usage reporting Utilization and billing reporting is available for Fabric
2023 data warehousing in the Microsoft Fabric Capacity
Metrics app.
August SSD Caching enabled Local SSD caching stores frequently accessed data on
2023 local disks in highly optimized format, significantly
reducing I/O latency. This benefits you immediately,
with no action required or configuration necessary.
July 2023 Sharing Any Admin or Member within a workspace can share a
Warehouse with another recipient within your
organization. You can also grant these permissions
using the "Manage permissions" experience.
July 2023 Table clone A zero-copy clone creates a replica of the table by
copying the metadata, while referencing the same data
files in OneLake. This avoids the need to store multiple
copies of data, thereby saving on storage costs when
you clone a table in Microsoft Fabric. For more
information, see tutorials to Clone a table with T-SQL
or Clone tables in the Fabric portal.
May 2023 Introducing Fabric Data Fabric Data Warehouse is the next generation of data
Warehouse in Microsoft warehousing in Microsoft Fabric that is the first
Fabric transactional data warehouse to natively support an
open data format, Delta-Parquet.
August Mirroring SQL Server While SQL Server isn't currently supported for Fabric
2024 database to Fabric mirrored databases, learn how to extend Fabric
mirroring to an on-premises SQL Server database as a
source, using a combination of SQL Server
Transactional replication and Fabric Mirroring .
July 2024 Microsoft Entra For sample connection strings and more information
authentication for Fabric on using Microsoft Entra as an alternative to SQL
Data Warehouse Authentication, see Microsoft Entra authentication as
an alternative to SQL authentication.
April 2024 Fabric Change the Game: A step-by-step guide to mirror your Azure SQL
Azure SQL Database Database into Microsoft Fabric.
mirror into Microsoft
Fabric
February Mapping Azure Synapse Read guidance on mapping Data Warehouse Units
2024 dedicated SQL pools to (DWU) from Azure Synapse Analytics dedicated SQL
Fabric data warehouse pool to an approximate equivalent number of Fabric
compute Capacity Units (CU) .
January Automate Fabric Data In Fabric Data Factory, there are many ways to query
2024 Warehouse Queries and data, retrieve data, and execute commands from your
Commands with Data warehouse using pipeline activities that can then be
Factory easily automated .
November Migrate from Azure A detailed guide with a migration runbook is available
2023 Synapse dedicated SQL for migrations from Azure Synapse Data Warehouse
pools dedicated SQL pools into Microsoft Fabric.
August Efficient Data Partitioning A proposed method for data partitioning using Fabric
2023 with Microsoft Fabric: notebooks . Data partitioning is a data management
Best Practices and technique used to divide a large dataset into smaller,
Implementation Guide more manageable subsets called partitions or shards.
May 2023 Microsoft Fabric - How This blog reviews how to connect to a SQL analytics
can a SQL user or DBA endpoint of the Lakehouse or the Warehouse through
connect the Tabular Data Stream, or TDS endpoint , familiar
to all modern web applications that interact with a
SQL Server endpoint.
August Fabric Real-Time New teaching bubbles provide a step-by-step guide through
2024 hub Teaching its major functionalities. These interactive guides allow you
Bubbles to seamlessly navigate each tab of the Real-Time hub.
August KQL Queryset REST The new Fabric Queryset REST APIs allow you to
2024 API support create/update/delete KQL Querysets in Fabric, and
programmatically manage them without manual
intervention. For more information, see KQL Queryset REST
API support .
July 2024 Update records in a The .update command is now generally available. Learn
KQL Database more about how to Update records in a Kusto database .
(GA)
July 2024 Real-Time Real-time Dashboards now support ultra-low refresh rates
Dashboards 1s and of just 1 or 10 seconds. For more information, see Create a
10s refresh rate Real-Time Dashboard (preview).
June 2024 Graph Semantics in Graph Semantics in Eventhouse allows users to model their
Eventhouse data as graphs and perform advanced graph queries and
analytics using the Kusto Query Language (KQL).
June 2024 Set alerts on Real- Real-Time Dashboard visuals now support alerts , to
time Dashboards extend monitoring support with Activator. With integration
with Fabric with Activator, you'll receive timely alerts as your key metrics
Activator triggers change in real-time.
June 2024 OneLake availability As part of the One logical copy promise, we're excited to
of Eventhouse in announce that OneLake availability of Eventhouse in Delta
Delta Lake format Lake format is Generally Available .
GA
June 2024 Real-Time Real-Time Dashboards interact with data dynamically and in
Dashboards real time. Real-Time Dashboards natively visualize data
stored in Eventhouses. Real-time Dashboards support ultra-
low refresh rates of just 1 or 10 seconds. For more
information, see Visualize and Explore Data with Real-Time
Dashboards .
May 2024 Copilot for Real- Copilot for Real-Time Intelligence is now in preview ! For
Time Intelligence those who are already fans of KQL or newcomers exploring
its potential, Copilot can help you get started, and navigate
data with ease.
May 2024 Automating Fabric Learn how to interact with data pipelines, notebooks, spark
items with Real- jobs in a more event-driven way .
Time Intelligence
May 2024 Real-Time At Build 2024, a dozen new features and capabilities were
Intelligence new announced for Real-Time Intelligence, organized into
preview features categories of Ingest & Process , Analyze & Transform ,
and Visualize & Act .
May 2024 Real-Time hub Real-Time hub is a single, tenant-wide, unified, logical place
preview for streaming data-in-motion. It enables you to easily
discover, ingest, manage, and consume data-in-motion from
a wide variety of sources. It lists all the streams and Kusto
Query Language (KQL) tables that you can directly act on. It
also gives you an easy way to ingest streaming data from
Microsoft products and Fabric events. For more information,
see Real-Time hub overview.
May 2024 Get Events preview The Get Events experience allows users to connect to a
wide range of sources directly from Real-Time hub,
Eventstreams, Eventhouse, and Activator. Using Get Events,
bring streaming data from Microsoft sources directly into
Fabric with a first-class experience.
May 2024 Enhanced With enhanced Eventstream capabilities , you can now
Eventstream stream data not only from Microsoft sources but also from
capabilities preview other platforms like Google Cloud, Amazon Kinesis,
Database change data capture streams, and more, using our
new messaging connectors.
May 2024 Eventstreams - The preview of enhanced capabilities supports many new
enhanced sources - Google Cloud Pub/Sub, Amazon Kinesis Data
capabilities preview Streams, Confluent Cloud Kafka, Azure SQL Database
Change Data Capture (CDC), PostgreSQL Database CDC,
MySQL Database CDC, Azure Cosmos DB CDC, Azure Blob
Storage events, and Fabric workspace item events, and a
new Stream destination. It supports two distinct modes, Edit
mode and Live view, in the visual designer. It also supports
routing based on content in data streams. For more
information, see What is Fabric eventstreams.
April 2024 Kusto Cache The preview of Kusto Cache consumption means that you
consumption will start seeing billable consumption of the OneLake Cache
preview Data Stored meter from the KQL Database and Eventhouse.
April 2024 Pause and Resume The Pause and Resume feature enables you to pause data
in Eventstream streaming from various sources and destinations within
preview Eventstream. You can then resume data streaming
seamlessly from the paused time or a customized time,
ensuring no data loss.
March Fabric Real-Time Users of Azure SQL can use the Database Watcher
2024 Intelligence monitoring solution with Microsoft Fabric . Database
Integrates with Watcher for Azure SQL (preview) provides advanced
Newly Announced monitoring capabilities, and can integrate with Eventhouse
Database Watcher KQL database.
for Azure SQL
March Update records in a The .update command is now available, as a preview feature.
2024 KQL Database Learn more about how to Update records in a Kusto
preview database .
March Query Azure Data Connecting to and using data in an Azure Data Explorer cluster
2024 Explorer data from from Fabric's KQL Queryset is now available.
Queryset
February KQL DB shortcut to KQL DB now supports reading Delta tables with column
2024 Delta Lake tables name mappings. The column mapping feature allows Delta
support name- table columns and the underlying Parquet file columns to
based column use different names. This enables Delta schema evolution
mapping operations on a Delta table without the need to rewrite
the underlying Parquet files and allows users to name Delta
table columns by using characters that aren't allowed by
Parquet.
February KQL DB shortcut to Delta KQL DB can now read Delta tables with deletion vectors,
2024 Lake tables support resolving the current table state by applying the deletions
deletion vectors noted by deletion vectors to the most recent table version.
February Get Data in KQL DB The Process event before ingestion in Eventstream option
2024 now supports enables you to process the data before it's ingested into the
processing events destination table. By selecting this option, the get data
before ingestion via process seamlessly continues in Eventstream, with the
Eventstream destination table and data source details automatically
populated.
February KQL DB now Using the open-source Flink connector, you can send data
2024 supports data from Flink to your table. Using Azure Data Explorer and
ingestion using Apache Flink, you can build fast and scalable applications
Apache Flink targeting data driven scenarios.
February Route data from You can now use the Kusto Splunk Universal Connector to
2024 Splunk Universal send data from Splunk Universal Forwarder to a table in your
Forwarder to KQL KQL DB.
DB using Kusto
Splunk Universal
Connector
December Calculating distinct New Fabric KQL database dcount and dcountif functions use
2023 counts in Power BI a special algorithm to return an estimate of distinct counts ,
running reports on even in extremely large datasets. The new functions
KQL Databases count_distinct and count_distinctif calculate exact distinct
counts.
December Create a Notebook You can now just create a new Notebook from KQL DB
2023 with pre-configured editor with a preconfigured connection to your KQL DB
connection to your and explore the data using PySpark. This option creates a
KQL DB PySpark Notebook with a ready-to execute code cell to read
data from the selected KQL DB.
December KQL Database The new Kusto command .show database schema violations
2023 schema validation was designed to validate the current state of your database
schema and find inconsistencies. You can use .show
database schema violations for a spot check on your
database or in CI/CD automation .
December Enabling Data Data availability of KQL Database in OneLake means you
2023 Availability of KQL can enjoy the best of both worlds. You can query the data
Database in with high performance and low latency in their KQL
OneLake database, and you can query the same data in Delta Parquet
via Power BI Direct Lake mode, Warehouse, Lakehouse,
Notebooks, and more.
November Announcing Delta You can now enable availability of KQL Database in Delta
2023 Lake support in Lake format . Delta Lake is the unified data lake table
Real-Time Analytics format chosen to achieve seamless data access across all
KQL Database compute engines in Microsoft Fabric.
November Delta Parquet As part of the one logical copy promise, we're excited to
2023 support in KQL announce that data in KQL Database can now be made
Database available in OneLake in delta parquet format . You can now
access this Delta table by creating a OneLake shortcut from
Lakehouse, Warehouse, or directly via Power BI Direct Lake
mode.
November Open Source Several open-source connectors for real-time analytics are
2023 Connectors for KQL now supported to enable users to ingest data from
Database various sources and process it using KQL DB.
November REST API Support We're excited to announce the launch of REST Public APIs for
2023 for KQL Database KQL DB. The Public REST APIs of KQL DB enable users to
manage and automate their flows programmatically.
November Eventstream Data Now, you can transform your data streams in real time
2023 Transformation for within Eventstream before they're sent to your KQL
KQL Database Database . When you create a KQL Database destination in
the eventstream, you can set the ingestion mode to "Event
processing before ingestion" and add event processing
logics such as filtering and aggregation to transform your
data streams.
November Splunk add-on Microsoft Fabric add-on for Splunk allows users to ingest
2023 preview logs from the Splunk platform into a Fabric KQL DB using the
Kusto Python SDK.
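Under the hood this is the standard Kusto Python SDK ingestion flow. A generic sketch follows; the cluster URI, database, table, and file are placeholders, and the add-on itself handles this wiring for you:

```python
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import IngestionProperties, QueuedIngestClient

# Placeholder ingest URI of the Eventhouse/KQL database cluster.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://ingest-<cluster>.kusto.fabric.microsoft.com"
)

client = QueuedIngestClient(kcsb)
props = IngestionProperties(
    database="MyKqlDatabase",   # placeholder database name
    table="SplunkLogs",         # placeholder destination table
    data_format=DataFormat.JSON,
)

# Queue a local file of exported events for ingestion into the table.
client.ingest_from_file("events.json", ingestion_properties=props)
```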
November Get Data from If you're working on other Fabric items and are looking to
2023 Eventstream ingest data from Eventstream, our new "Get Data from
anywhere in Fabric Eventstream" feature simplifies the process: you can get
data from Eventstream while you're working with a KQL
database or Lakehouse.
November Two ingestion We've introduced two distinct ingestion modes for your
2023 modes for Lakehouse Destination: Rows per file and Duration .
Lakehouse
Destination
November Optimize Tables The table optimization shortcut is now available inside
2023 Before Ingesting Eventstream Lakehouse destination to compact numerous
Data to Lakehouse small streaming files generated on a Lakehouse table. Table
optimization shortcut works by opening a Notebook with
a Spark job, which compacts small streaming files in the
destination Lakehouse table.
November Get Data in Real- A new Get Data experience simplifies the data ingestion
2023 Time Analytics: A process in your KQL database.
New and Improved
Experience
October Expanded Custom New custom app connections provide more flexibility
2023 App Connections when it comes to bringing your data streams into
Eventstream.
October Eventstream Kafka The Custom App feature has new endpoints in sources and
2023 Endpoints and destinations , including sample Java code for your
Sample Code convenience. Simply add it to your application, and you're all
set to stream your real-time event to Eventstream.
October KQL Database Auto Users do not need to worry about how many resources are
2023 scale algorithm needed to support their workloads in a KQL database. KQL
improvements Database has a sophisticated in-built, multi-dimensional,
auto scaling algorithm. We recently implemented some
optimizations that make some time series analysis more
efficient .
October Understanding Read more about how a KQL database is billed in the SaaS
2023 Fabric KQL DB world of Microsoft Fabric.
Capacity
September OneLake shortcut Now you can create a shortcut from KQL DB to delta tables
2023 to delta tables from in OneLake, allowing in-place data queries. You can now query
KQL DB delta tables in your Lakehouse or Warehouse directly from
KQL DB.
September Model and Query Kusto Query Language (KQL) now allows you to model and
2023 data as graphs query data as graphs. This feature is currently in preview.
using KQL Learn more at Introduction to graph semantics in KQL and
Graph operators and functions .
September Easily connect to Power BI desktop released two new ways to easily connect
2023 KQL Database from to a KQL database, in the Get Data dialog and in the
Power BI desktop OneLake data hub menus.
September Eventstream now AMQP stands for Advanced Message Queuing Protocol, a
2023 supports AMQP protocol that supports a wide range of messaging patterns.
format connection In Eventstream, you can now create a Custom App source or
string for data destination and select AMQP format connection string for
ingestion ingesting data into Fabric or consuming data from Fabric.
August Provisioning The KQL Database provisioning process has been optimized.
2023 optimization Now you can create a KQL Database within a few seconds.
August KQL Database Fabric KQL Database supports running Python code
2023 support for inline embedded in Kusto Query Language (KQL) using the
Python python() plugin.
July 2023 Microsoft Fabric Microsoft Fabric eventstreams are a high-throughput, low-
eventstreams: latency data ingestion and transformation service.
Generating Real-
time Insights with
Python, KQL, and
Power BI
June 2023 Unveiling the Epic As part of the Kusto Detective Agency Season 2 , we're
Opportunity: A Fun excited to introduce an epic opportunity for all investigators
Game to Explore and data enthusiasts to learn about the new portfolio in a
the Real-Time fun and engaging way. Recruiting now at
Intelligence https://detective.kusto.io/ !
May 2023 What's New in Announcing the Fabric Real Time Analytics !
Kusto – Build 2023!
August Acting on Real-Time Learn how to monitor and act on data using
2024 data using custom Activator, a no-code experience in Microsoft
actions with Activator Fabric for taking action automatically when a condition,
such as a package temperature threshold, is detected in the data.
July 2024 Build real-time order Read about a real-life example of how an online store
notifications with Eventstream's used Eventstream's CDC connector from Azure SQL
CDC connector Database.
July 2024 Automating Real-Time Let's build a PowerShell script to automate the
Intelligence deployment of Eventhouse, KQL Database, Tables,
Eventhouse Functions, and Materialized Views into a workspace in
deployment using Microsoft Fabric.
PowerShell
June 2024 Power BI Admin portal Effective July 2024, the Power BI Admin portal Usage
Usage metrics metrics dashboard is removed . Comparable insights
dashboard retirement are now supported out-of-the-box through the Admin
monitoring workspace (preview). The Admin monitoring
workspace provides several Power BI reports and
semantic models, including the Feature Usage and
Adoption report which focuses on Fabric tenant
inventory and audit activity monitoring.
May 2024 Alerting and acting on Microsoft Fabric's new Real-Time hub and Activator
data from the Real- provide a no-code experience for automatically taking
Time hub actions when patterns or conditions are detected in
changing data. Alerting is embedded throughout the Real-Time
hub to make creating alerts always accessible.
May 2024 Using APIs with Fabric Learn how to create/update/delete items in Fabric with
Real-Time Intelligence: the KQL APIs , accessing the data plane of a resource.
Eventhouse and KQL
DB
May 2024 Connect and stream The Get events experience streamlines the process of
events with the Get browsing and searching for sources and streams .
events experience
May 2024 Acquiring Real-Time Learn how to connect to new sources in Eventstream.
Data from New Start by creating an eventstream and choosing
Sources with Enhanced "Enhanced Capabilities (preview)" .
Eventstream
March Browse Azure Learn how to browse and connect to all your Azure
2024 resources with Get resources with the 'browse Azure' functionality in Get
Data Data . You can browse Azure resources then connect to
Synapse, blob storage, or ADLS Gen2 resources easily.
November Semantic Link: Data Great Expectations Open Source (GX OSS) is a popular
2023 validation using Great Python library that provides a framework for describing
Expectations and validating the acceptable state of data. With the
recent integration of Microsoft Fabric semantic link, GX
can now access semantic models.
November Explore Data Dive into a practical scenario using real-world bike-
2023 Transformation in sharing data and learn to compute the number of bikes
Eventstream for KQL rented every minute on each street, using Eventstream's
Database Integration powerful event processor, mastering real-time data
transformations, and effortlessly directing the processed
data to your KQL Database.
October Stream Azure IoT Hub A demo of using Fabric Eventstream to seamlessly ingest
2023 Data into Fabric and transform real-time data streams before they
Eventstream for Email reach various Fabric destinations such as Lakehouse, KQL
Alerting Database, and Reflex. Then, configure email alerts in
Reflex with Activator triggers.
September Quick start: Sending Learn how to send data from Kafka to Real-Time
2023 data to Real-Time Intelligence in Fabric .
Intelligence in Fabric
from Apache Kafka
Ecosystems using Java
June 2023 From raw data to Learn about the integration between Azure Event Hubs
insights: How to ingest and your KQL database .
data from Azure Event
Hubs into a KQL
database
June 2023 From raw data to Learn about the integration between eventstreams and a
insights: How to ingest KQL database , both of which are a part of the Real-
data from Time Intelligence experience.
eventstreams into a
KQL database
June 2023 Discovering the best This blog covers different options for bringing data into a
ways to get data into a KQL database .
KQL database
June 2023 Get started with In this blog, we focus on the different ways of querying
exploring your data data in Real-Time Intelligence .
with KQL – a purpose-
built tool for petabyte
scale data analytics
May 2023 Ingest, transform, and You can now ingest, capture, transform and route real-
route real-time events time events to various destinations in Microsoft Fabric
with Microsoft Fabric with a no-code experience using Microsoft Fabric
eventstreams eventstreams.
August OneLake data access Based on key feedback, we've updated data access
2024 role improvements roles with a user interface redesign. For more
information, see Get started with OneLake data access
roles (preview).
August Announcing the Use Trusted workspace access and Managed Private
2024 availability of Trusted endpoints in Fabric with any F capacity and enjoy the
workspace access and benefits of secure and optimized data access and
Managed private connectivity.
endpoints in any Fabric
capacity
July 2024 SOC certification We are excited to announce that Microsoft Fabric, our
compliance all-in-one analytics solution for enterprises, is now
System and Organization Controls (SOC) 1 Type II, SOC 2
Type II, and SOC 3 compliant .
July 2024 Microsoft Fabric .NET We are excited to announce the very first release of the
SDK Microsoft Fabric .NET SDK ! For more information on
the REST API documentation, see Microsoft Fabric REST
API documentation.
May 2024 Microsoft Fabric Private Azure Private Link for Microsoft Fabric secures access to
Links GA your sensitive data in Microsoft Fabric by providing
network isolation and applying required controls on
your inbound network traffic. For more information, see
Announcing General Availability of Fabric Private Links .
May 2024 Trusted workspace Trusted workspace access in OneLake shortcuts is now
access GA generally available . You can now create data pipelines
to access your firewall-enabled Azure Data Lake Storage
Gen2 (ADLS Gen2) accounts using Trusted workspace
access (preview) in your Fabric Data Pipelines. Use the
workspace identity to establish a secure and seamless
connection between Fabric and your storage accounts .
Trusted workspace access also enables secure and
seamless access to ADLS Gen2 storage accounts from
OneLake shortcuts in Fabric .
May 2024 Fabric APIs Learn about using REST APIs in Fabric , including
walkthrough creating workspaces, adding permission, dropping,
creating, executing data pipelines, and how to
pause/resume Fabric activities using the management
API.
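For example, creating a workspace from Python might look like the sketch below; the endpoint and payload follow the Core Workspaces API pattern, but the display name and token scope are placeholders and assumptions to double-check against the REST API reference:

```python
import requests
from azure.identity import InteractiveBrowserCredential

# Acquire a Microsoft Entra token for the Fabric REST API (scope assumed).
token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

# Create a new workspace with a placeholder display name.
resp = requests.post(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    json={"displayName": "fabric-api-demo"},
)
resp.raise_for_status()
print(resp.json())  # returns the new workspace's ID and properties
```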
May 2024 Managed private Managed private endpoints for Microsoft Fabric allow
endpoints GA secure connections over managed virtual networks to
data sources that are behind a firewall or not accessible
from the public internet. For more information, see
Announcing General Availability of Fabric Private Links,
Trusted Workspace Access, and Managed Private
Endpoints .
May 2024 Fabric UX System The Fabric UX System represents a leap forward in
design consistency and extensibility for Microsoft Fabric.
May 2024 Microsoft Fabric Core Microsoft Fabric Core APIs are now generally available.
REST APIs The Fabric user APIs are a major enabler for both
enterprises and partners to use Microsoft Fabric as they
enable end-to-end fully automated interaction with the
service, enable integration of Microsoft Fabric into
external web applications, and generally enable
customers and partners to scale their solutions more
easily.
May 2024 Microsoft Fabric Admin Fabric Admin APIs are designed to streamline
APIs preview administrative tasks. Now, you can manage both Power
BI and the new Fabric items (previously referred to as
artifacts) using the same set of APIs. Before this
enhancement, you had to navigate using two different sets of APIs.
May 2024 Fabric workload dev kit The Microsoft Fabric workload development kit
(preview) extends to additional workloads and offers a robust
developer toolkit for designing, developing, and
interoperating with Microsoft Fabric using frontend SDKs
and backend REST APIs .
May 2024 Introducing external External Data Sharing (preview) is a new feature that
data sharing (preview) makes it possible for Fabric users to share data from
within their Fabric tenant with users in another Fabric
tenant.
May 2024 Task flows in Microsoft The preview of task flows in Microsoft Fabric is
Fabric (preview) enabled for all Microsoft Fabric users. With Fabric task
flows, when designing a data project, you no longer
need to use a whiteboard to sketch out the different
parts of the project and their interrelationships. Instead,
you can use a task flow to build and bring this key
information into the project itself.
May 2024 Power BI: Subscriptions, Information on Power BI implementation planning and
licenses, and trials key considerations for planning subscriptions, licenses,
and trials for Power BI and Fabric.
May 2024 Register for the Starting May 21, 2024, sign up for the Microsoft Build:
Microsoft Build: Microsoft Fabric Cloud Skills Challenge and prepare for
Microsoft Fabric Cloud Exam DP-600 and upskill to the Fabric Analytics Engineer
Skills Challenge Associate certification.
March Microsoft Fabric is now We are excited to announce that Microsoft Fabric, our
2024 HIPAA compliant all-in-one analytics solution for enterprises, has achieved
new certifications for HIPAA and ISO 27017, ISO 27018,
ISO 27001, ISO 27701 .
March Fabric Copilot Pricing: Copilot in Fabric begins billing on March 1, 2024 as
2024 An End-to-End example part of your existing Power BI Premium or Fabric
Capacity. Learn how Fabric Copilot usage is calculated .
March Capacity Platform The Fabric Capacity Platform now supports usage
2024 Updates for reporting for Pause/Resume, virtualized items and
Pause/Resume, workspaces supporting Copilot, Capacity Metrics, and more.
February Azure Private Link Azure Private Link for Microsoft Fabric secures access to
2024 Support for Microsoft your sensitive data in Microsoft Fabric by providing
Fabric (Preview) network isolation and applying required controls on
your inbound network traffic. For more information, see
Announcing Azure Private Link Support for Microsoft
Fabric in Preview .
February Domains in OneLake Domains in OneLake help you organize your data into
2024 (preview) a logical data mesh, allowing federated governance and
optimizing for business needs. You can now create sub
domains, default domains for users, and move
workspaces between domains. For more information, see
Fabric domains.
February Customizable Fabric You can now customize your preferred entry points in
2024 navigation bar the navigation bar , including pinning common entry
points and unpinning rarely used options.
February Persistent filters in You can now save selected filters in workspace list
2024 workspace view , and they'll be automatically applied the next
time you open the workspace.
December Microsoft Fabric Admin Fabric Admin APIs are designed to streamline
2023 APIs preview administrative tasks. The initial set of Fabric Admin APIs
is tailored to simplify the discovery of workspaces, Fabric
items, and user access details.
November Fabric workloads are Microsoft Fabric is now generally available! Microsoft
2023 now generally Fabric Data Warehouse, Data Engineering & Data
available! Science, and the other Fabric workloads are now
generally available.
November Microsoft Fabric User We're happy to announce the preview of Microsoft
2023 APIs preview Fabric User APIs. The Fabric user APIs are a major
enabler for both enterprises and partners to use
Microsoft Fabric as they enable end-to-end fully
automated interaction with the service, enable
integration of Microsoft Fabric into external web
applications, and generally enable customers and
partners to scale their solutions more easily.
October Item type icons Our design team has completed a rework of the item
2023 type icons across the platform to improve visual
parsing.
September Monitoring hub – Column options inside the monitoring hub give users
2023 column options a better customization experience and more room to
operate.
September OneLake File Explorer The OneLake file explorer automatically syncs all
2023 v1.0.10 Microsoft OneLake items that you have access to in
Windows File Explorer. With the latest version, you can
seamlessly transition between using the OneLake file
explorer app and the Fabric web portal. You can also
right-click on the OneLake icon in the Windows
notification area, and select Diagnostic Operations to
view client-site logs. Learn more about easy access to
open workspaces and items online .
August Multitasking navigation Now, all Fabric items are opened in a single browser tab
2023 improvement on the navigation pane, even in the event of a page
refresh. This ensures you can refresh the page without
the concern of losing context.
July 2023 New OneLake file With OneLake file explorer v1.0.9.0, it's simple to
explorer update with choose and switch between different Microsoft Entra ID
support for switching (formerly Azure Active Directory) accounts .
organizational accounts
July 2023 Help pane The Help pane is feature-aware and displays articles
about the actions and features available on the current
Fabric screen. For more information, see Help pane in
the monthly Fabric update.
July 2024 - GitHub integration for source control (preview): Fabric developers can now choose GitHub or GitHub Enterprise as their source control tool and version their Fabric items there. For more information, see Get started with Git integration (preview).
July 2024 - Microsoft Fabric .NET SDK: We're excited to announce the very first release of the Microsoft Fabric .NET SDK! For more information, see the Microsoft Fabric REST API documentation.
June 2024 - Introducing New Branching Capabilities in Fabric Git Integration: New branching capabilities in Fabric Git integration include a redesigned Source Control pane, the ability to quickly create a new connected workspace and branch, and contextual related branches to find content related to the current workspace.
May 2024 - Deployment pipelines APIs for CI/CD: Fabric deployment pipelines APIs have been introduced, starting with the 'Deploy' API, which lets you deploy the entire workspace or only selected items. For a minimal example of calling the Deploy API, see the sketch that follows this list.
May 2024 - New items in Fabric CI/CD: Data pipelines, Warehouse, Spark, and Spark jobs are now available for CI/CD in Git integration and deployment pipelines.
April 2024 - Introducing Trusted Workspace Access in Fabric Data Pipelines: Create data pipelines in Fabric to access your firewall-enabled ADLS Gen2 storage accounts with ease and security. This feature leverages the workspace identity to establish a secure and seamless connection between Fabric and your storage accounts.
March 2024 - CI/CD for Fabric Data Pipelines preview: Git integration and built-in deployment pipelines support for Data Factory data pipelines are now in preview. For more information, see Data Factory Adds CI/CD to Fabric Data Pipelines.
February 2024 - REST APIs for Fabric Git integration: REST APIs for Fabric Git integration let you incorporate Fabric Git integration into your team's end-to-end CI/CD pipeline, eliminating the need to manually trigger actions from Fabric.
February 2024 - Delegation for Git integration settings: To enable more control over Git-related settings, a tenant admin can now delegate these settings to both capacity admins and workspace admins via the admin portal. For more information, see What is the admin portal?
November 2023 - Microsoft Fabric User APIs: Microsoft Fabric User APIs are now available. The Fabric user APIs are a major enabler for both enterprises and partners to use Microsoft Fabric, as they enable end-to-end, fully automated interaction with the service, enable integration of Microsoft Fabric into external web applications, and generally enable customers and partners to scale their solutions more easily.
November 2023 - Notebook in Deployment Pipeline preview: You can now use notebooks to deploy your code across different environments, such as development, test, and production. You can also use deployment rules to customize the behavior of your notebooks when they're deployed, such as changing the default lakehouse of a notebook. Get started with deployment pipelines, and notebooks show up in the deployment content automatically.
November 2023 - Notebook Git integration preview: Fabric notebooks now offer Git integration for source control using Azure DevOps. Users can easily version notebook code and manage Git branches by using the Fabric Git functions and Azure DevOps.
November 2023 - Notebook REST APIs preview: With public REST APIs for notebook items, data engineers and data scientists can automate their pipelines and establish CI/CD conveniently and efficiently. The notebook REST API makes it easy to manage and manipulate Fabric notebook items and to integrate notebooks with other tools and systems.
November 2023 - Lakehouse support for Git integration and deployment pipelines (preview): The Lakehouse item now integrates with the lifecycle management capabilities in Microsoft Fabric, providing standardized collaboration among all development team members throughout the product's life. Lifecycle management facilitates an effective product versioning and release process by continuously delivering features and bug fixes into multiple environments.
September 2023 - SQL Projects support for Warehouse in Microsoft Fabric: Microsoft Fabric Data Warehouse is now supported in the SQL Database Projects extension available inside Azure Data Studio and Visual Studio Code.
September 2023 - Notebook file system support in Synapse VS Code extension: The Synapse VS Code extension now supports the notebook file system for Data Engineering and Data Science in Microsoft Fabric, empowering users to develop their notebook items directly within the Visual Studio Code environment.
September 2023 - Git integration with paginated reports in Power BI: You can now publish a Power BI paginated report and keep it in sync with your Git workspace. Developers can apply their development processes, tools, and best practices.
August 2023 - Introducing the dbt adapter for Fabric Data Warehouse: The dbt adapter allows you to connect to and transform data in Fabric Data Warehouse. The data build tool (dbt) is an open-source framework that simplifies data transformation and analytics engineering.
May 2023 - Introducing Git integration in Microsoft Fabric for seamless source control management: While developing in Fabric, developers can back up and version their work, roll back as needed, collaborate, or work in isolation using Git branches. Read more about connecting the workspace to an Azure repo.
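The 'Deploy' API mentioned in the May 2024 entry is a REST call, so a CI job can trigger a deployment directly. The sketch below is an illustration under assumptions, not verified sample code: the endpoint path, the request body fields (sourceStageId, targetStageId, note), and the long-running-operation polling pattern are taken from the public Fabric REST API reference as best understood, and all IDs are placeholders.

```python
# Minimal sketch: trigger a deployment with the Fabric deployment pipelines
# 'Deploy' API. Endpoint path, request body fields, and the long-running
# operation pattern are assumptions based on the public REST reference;
# verify against the current Microsoft Fabric REST API documentation.
import time
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

pipeline_id = "<deployment-pipeline-id>"      # placeholder
body = {
    "sourceStageId": "<dev-stage-id>",        # placeholder stage IDs
    "targetStageId": "<test-stage-id>",
    "note": "Automated deployment from CI",
    # Omit "items" to deploy the entire workspace; list specific items to deploy a subset.
}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/deploymentPipelines/{pipeline_id}/deploy",
    headers=headers,
    json=body,
)
resp.raise_for_status()

# Deployments run as a long-running operation; poll the Location header until done.
operation_url = resp.headers.get("Location")
while operation_url:
    status = requests.get(operation_url, headers=headers).json().get("status")
    print("deployment status:", status)
    if status not in ("NotStarted", "Running"):
        break
    time.sleep(10)
```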
Continuous Integration/Continuous Delivery (CI/CD) samples
August 2024 - Exploration of Microsoft Fabric's CI/CD Features: A guided tour of Microsoft Fabric's CI/CD features for data pipelines, lakehouse, notebooks, reports, and semantic models.
June 2024 - Getting started with development in isolation using a Private Workspace: In this walkthrough, we'll talk about how to set up Git for a private workspace from a main branch, which is connected to a shared dev team workspace, and then how to commit changes from the private workspace into the main branch of the shared workspace.
Activator
This section summarizes archived new features and capabilities of Activator in Microsoft
Fabric.
October 2023 - Announcing the Activator preview: We're thrilled to announce that Activator is now in preview and is enabled for all existing Microsoft Fabric users.
August 2023 - Updated preview experience for trigger design: We have been working on a new experience for designing triggers, and it's now available in our preview! You now see three cards in every trigger: Select, Detect, and Act.
May 2023 - Driving actions from your data with Activator: Activator is a new no-code Microsoft Fabric experience that empowers the business analyst to drive actions automatically from your data. To learn more, sign up for the Activator limited preview.
Microsoft 365
March 2024 - Analyze Dataverse tables from Microsoft Fabric: When creating a shortcut within Fabric, you now see an option for Dataverse. When you choose this shortcut type and specify your Dataverse environment details, you can quickly see and work with the tables from that environment. For a minimal example of querying a table exposed through such a shortcut, see the sketch that follows this list.
November 2023 - Fabric + Microsoft 365 Data: Better Together: Microsoft Graph is the gateway to data and intelligence in Microsoft 365. Microsoft 365 Data Integration for Microsoft Fabric enables you to manage your Microsoft 365 data alongside your other data sources in one place with a suite of analytical experiences.
November 2023 - Microsoft 365 connector now supports ingesting data into Lakehouse (preview): The Microsoft 365 connector now supports ingesting data into Lakehouse tables.
October 2023 - Microsoft OneLake adds shortcut support to Power Platform and Dynamics 365: You can now create shortcuts directly to your Dynamics 365 and Power Platform data in Dataverse and analyze it with Microsoft Fabric alongside the rest of your OneLake data. There's no need to export data, build ETL pipelines, or use partner integration tools.
May 2023 - Step-by-Step Guide to Enable Microsoft Fabric for Microsoft 365 Developer Account: This blog reviews how to enable Microsoft Fabric with a Microsoft 365 Developer Account and the Fabric free trial.
May 2023 - Microsoft 365 Data + Microsoft Fabric better together: Microsoft 365 Data Integration for Microsoft Fabric enables you to manage your Microsoft 365 data alongside your other data sources in one place with a suite of analytical experiences.
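Once a Dataverse shortcut exists in a lakehouse (see the March 2024 entry above), its tables can be queried like any other lakehouse table. The sketch below is a hypothetical illustration: it assumes it runs in a Microsoft Fabric notebook attached to that lakehouse, where a `spark` session is predefined, and that the shortcut exposes a Dataverse table named `account` with a `statecode` column (example names only).

```python
# Minimal sketch: query a Dataverse table exposed through a OneLake shortcut.
# Assumes a Microsoft Fabric notebook attached to a lakehouse that contains the
# shortcut; the `spark` session is predefined in Fabric notebooks. Table and
# column names ("account", "statecode") are hypothetical examples.

# The shortcut surfaces as a regular lakehouse table, so read it like any other table.
accounts = spark.read.table("account")

# Preview a few rows, then count accounts per state code.
accounts.limit(5).show()
accounts.groupBy("statecode").count().show()
```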
Migration
This section includes guidance and documentation updates on migration to Microsoft
Fabric.
February 2024 - Mapping Azure Synapse dedicated SQL pools to Fabric data warehouse compute: Read this guidance on mapping Data Warehouse Units (DWU) from an Azure Synapse Analytics dedicated SQL pool to an approximate equivalent number of Fabric capacity units (CU).
July 2023 - Fabric changing the game – OneLake integration: This blog post covers OneLake integration and multiple scenarios to ingest data into Fabric OneLake, including ADLS, ADF, OneLake file explorer, and Databricks.
June 2023 - Microsoft Fabric changing the game: Exporting data and building the Lakehouse: This blog post covers the scenario of exporting data from Azure SQL Database into OneLake.
June 2023 - Copy data to Azure SQL at scale with Microsoft Fabric: Did you know that you can use Microsoft Fabric to copy data at scale from supported data sources to Azure SQL Database or Azure SQL Managed Instance within minutes?
June 2023 - Bring your Mainframe DB2 z/OS data to Microsoft Fabric: In this blog, we review the convenience and ease of opening DB2 for z/OS data in Microsoft Fabric.
Monitor
This section includes guidance and documentation updates on monitoring your
Microsoft Fabric capacity and utilization, including the Monitoring hub.
March 2024 - Capacity Metrics support for Pause and Resume: Fabric Capacity Metrics has been updated with new system events and reconciliation logic to simplify analysis of paused capacities. Fabric Pause and Resume is a capacity
October 2023 - Throttling and smoothing in Fabric Data Warehouse: A new article helps you understand Fabric capacity throttling. Throttling occurs when a tenant's capacity consumes more capacity resources than it has purchased over a period of time.
September 2023 - Monitoring hub – column options: Users can select and reorder columns in the Monitoring hub according to their needs.
September 2023 - Fabric Capacities – Everything you need to know about what's new and what's coming: Read more about the improvements we're making to the Fabric capacity management platform for Fabric and Power BI users.
September 2023 - Microsoft Fabric Capacity Metrics app: The Microsoft Fabric Capacity Metrics app is available in AppSource for a variety of billing and utilization reporting.
August 2023 - Monitoring hub support for personalized column options: The Monitoring hub now lets users personalize activity-specific columns, giving you the flexibility to display the columns that are relevant to the activities you're focused on.
May 2023 - Capacity metrics in Microsoft Fabric: Learn more about the universal compute capacities and Fabric's capacity metrics governance features that admins can use to monitor usage and make data-driven scale-up decisions.
Microsoft Purview
This section summarizes archived announcements about governance and compliance
capabilities with Microsoft Purview in Microsoft Fabric. Learn more about Information
protection in Microsoft Fabric.
May 2023 - Administration, Security and Governance in Microsoft Fabric: Microsoft Fabric provides built-in, enterprise-grade governance and compliance capabilities, powered by Microsoft Purview.
Related content
Modernization Best Practices and Reusable Assets Blog
Azure Data Explorer Blog
Get started with Microsoft Fabric
Microsoft Training Learning Paths for Fabric
End-to-end tutorials in Microsoft Fabric
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?
What's new in Microsoft Fabric?