Fabric Fundamentals

Microsoft Fabric is a comprehensive analytics platform that integrates various data services, enabling organizations to manage data movement, processing, and reporting seamlessly. It features a unified data lake called OneLake, supports AI capabilities, and offers role-specific workloads tailored for different users. The platform is available for trial, allowing users to explore its functionalities and collaborate on data projects for a limited time.

Microsoft Fabric fundamentals documentation
Microsoft Fabric is a unified platform that can meet your organization's data and
analytics needs. Discover the Fabric shared and platform documentation from this page.

About Microsoft Fabric

OVERVIEW

What is Fabric?

Fabric terminology

What's New

GET STARTED

Start a Fabric trial

Fabric home navigation

End-to-end tutorials

Context-sensitive Help pane

Get started with Fabric items

CONCEPT

Get started with a task flow

Find items in OneLake data hub

Promote and certify items

HOW-TO GUIDE

Apply sensitivity labels

Copilot
CONCEPT

Copilot overview

Copilot in Fabric FAQ

Privacy, security, and responsible use for Copilot

HOW-TO GUIDE

Enable Copilot

Workspaces

CONCEPT

Fabric workspace

Workspace roles

GET STARTED

Create a workspace

HOW-TO GUIDE

Workspace access control

Get Help

HOW-TO GUIDE

Use the integrated Help pane

Check for known issues

Contact Support
What is Microsoft Fabric?
Article • 02/07/2025

Microsoft Fabric is an enterprise-ready, end-to-end analytics platform. It unifies data
movement, data processing, ingestion, transformation, real-time event routing, and
report building. It supports these capabilities with integrated services like Data
Engineering, Data Factory, Data Science, Real-Time Analytics, Data Warehouse, and
Databases.

Fabric provides a seamless, user-friendly SaaS experience. It integrates separate
components into a cohesive stack. It centralizes data storage with OneLake and embeds
AI capabilities, eliminating the need for manual integration. With Fabric, you can
efficiently transform raw data into actionable insights.

Note

Are you a new developer working with Fabric? Are you interested in sharing your
getting started experience and helping us make improvements? We’d like to talk
with you! Sign up here if interested .

Capabilities of Fabric
Microsoft Fabric enhances productivity, data management, and AI integration. Here are
some of its key capabilities:

Role-specific workloads: Customized solutions for various roles within an
organization, providing each user with the necessary tools.
OneLake: A unified data lake that simplifies data management and access.
Copilot support: AI-driven features that assist users by providing intelligent
suggestions and automating tasks.
Integration with Microsoft 365: Seamless integration with Microsoft 365 tools,
enhancing collaboration and productivity across the organization.
Azure AI Foundry: Utilizes Azure AI Foundry for advanced AI and machine learning
capabilities, enabling users to build and deploy AI models efficiently.
Unified data management: Centralized data discovery that simplifies governance,
sharing, and access.

Unification with SaaS foundation


Microsoft Fabric is built on a Software as a Service (SaaS) platform. It unifies new and
existing components from Power BI, Azure Synapse Analytics, Azure Data Factory, and
more into a single environment.

Fabric integrates workloads like Data Engineering, Data Factory, Data Science, Data
Warehouse, Real-Time Intelligence, Industry solutions, Databases, and Power BI into a
SaaS platform. Each of these workloads is tailored for distinct user roles like data
engineers, scientists, or warehousing professionals, and they serve a specific task.
Advantages of Fabric include:

End-to-end integrated analytics
Consistent, user-friendly experiences
Easy access and reuse of all assets
Unified data lake storage preserving data in its original location
AI-enhanced stack to accelerate the data journey
Centralized administration and governance

Fabric centralizes data discovery, administration, and governance by automatically
applying permissions and inheriting data sensitivity labels across all the items in the
suite. Governance is powered by Purview, which is built into Fabric. This seamless
integration lets creators focus on producing their best work without managing the
underlying infrastructure.

Components of Microsoft Fabric


Fabric offers the following workloads, each customized for a specific role and task:
Power BI - Power BI lets you easily connect to your data sources, visualize, and
discover what's important, and share that with anyone or everyone you want. This
integrated experience allows business owners to access all data in Fabric quickly
and intuitively and to make better decisions with data. For more information, see
What is Power BI?

Databases - Databases in Microsoft Fabric are a developer-friendly transactional
database such as Azure SQL Database, which allows you to easily create your
operational database in Fabric. Using the mirroring capability, you can bring data
from various systems together into OneLake. You can continuously replicate your
existing data estate directly into Fabric's OneLake, including data from Azure SQL
Database, Azure Cosmos DB, Azure Databricks, Snowflake, and Fabric SQL
database. For more information, see SQL database in Microsoft Fabric and What is
Mirroring in Fabric?

Data Factory - Data Factory provides a modern data integration experience to
ingest, prepare, and transform data from a rich set of data sources. It incorporates
the simplicity of Power Query, and you can use more than 200 native connectors to
connect to data sources on-premises and in the cloud. For more information, see
What is Data Factory in Microsoft Fabric?

Industry Solutions - Fabric provides industry-specific data solutions that address
unique industry needs and challenges, and include data management, analytics,
and decision-making. For more information, see Industry Solutions in Microsoft
Fabric.

Real-Time Intelligence - Real-Time Intelligence is an end-to-end solution for
event-driven scenarios, streaming data, and data logs. It enables the extraction of
insights, visualization, and action on data in motion by handling data ingestion,
transformation, storage, analytics, visualization, tracking, AI, and real-time actions.
The Real-Time hub in Real-Time Intelligence provides a wide variety of no-code
connectors, converging into a catalog of organizational data that is protected,
governed, and integrated across Fabric. For more information, see What is Real-
Time Intelligence in Fabric?.

Data Engineering - Fabric Data Engineering provides a Spark platform with great
authoring experiences. It enables you to create, manage, and optimize
infrastructures for collecting, storing, processing, and analyzing vast data volumes.
Fabric Spark's integration with Data Factory allows you to schedule and orchestrate
notebooks and Spark jobs. For more information, see What is Data engineering in
Microsoft Fabric?
Fabric Data Science - Fabric Data Science enables you to build, deploy, and
operationalize machine learning models from Fabric. It integrates with Azure
Machine Learning to provide built-in experiment tracking and model registry. Data
scientists can enrich organizational data with predictions and business analysts can
integrate those predictions into their BI reports, allowing a shift from descriptive to
predictive insights. For more information, see What is Data science in Microsoft
Fabric?

Fabric Data Warehouse - Fabric Data Warehouse provides industry leading SQL
performance and scale. It separates compute from storage, enabling independent
scaling of both components. Additionally, it natively stores data in the open Delta
Lake format. For more information, see What is data warehousing in Microsoft
Fabric?

Microsoft Fabric enables organizations and individuals to turn large and complex data
repositories into actionable workloads and analytics, and is an implementation of data
mesh architecture. For more information, see What is a data mesh?

OneLake: The unification of lakehouses


The Microsoft Fabric platform unifies the OneLake and lakehouse architecture across an
enterprise.

OneLake
A data lake is the foundation for all Fabric workloads. In Microsoft Fabric, this lake is
called OneLake. It's built into the platform and serves as a single store for all
organizational data.

OneLake is built on ADLS (Azure Data Lake Storage) Gen2. It provides a single SaaS
experience and a tenant-wide store for data that serves both professional and citizen
developers. It simplifies the user experience by removing the need to understand
complex infrastructure details like resource groups, RBAC, Azure Resource Manager,
redundancy, or regions. You don't need an Azure account to use Fabric.

OneLake prevents data silos by offering one unified storage system that makes data
discovery, sharing, and consistent policy enforcement easy. For more information, see
What is OneLake?
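
Because OneLake exposes an ADLS Gen2-compatible endpoint, existing Spark and ADLS tooling can read lakehouse data directly. The short Python sketch below assumes a Fabric Spark notebook (where a spark session is preprovisioned) and uses hypothetical workspace, lakehouse, and table names; the path follows OneLake's documented ADLS-style addressing.

# Minimal sketch, assuming a Fabric Spark notebook; all names are placeholders.
table_path = (
    "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/"
    "SalesLakehouse.Lakehouse/Tables/orders"
)
df = spark.read.format("delta").load(table_path)  # spark is preprovisioned in Fabric notebooks
df.show(5)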

OneLake and lakehouse data hierarchy


OneLake’s hierarchical design simplifies organization-wide management. Fabric includes
OneLake by default, so no upfront provisioning is needed. Each tenant gets one unified
OneLake with single file-system namespace that spans users, regions, and clouds.
OneLake organizes data into containers for easy handling. The tenant maps to the root
of OneLake and is at the top level of the hierarchy. You can create multiple workspaces
(which are like folders) within a tenant.

The following image shows how Fabric stores data in OneLake. You can have several
workspaces per tenant and multiple lakehouses within each workspace. A lakehouse is a
collection of files, folders, and tables that acts as a database over a data lake. To learn
more, see What is a lakehouse?.

Every developer and business unit in the tenant can create their own workspaces in
OneLake. They can ingest data into lakehouses and start processing, analyzing, and
collaborating on that data—similar to using OneDrive in Microsoft Office.

Fabric compute engines


All Microsoft Fabric compute experiences come preconfigured with OneLake, much like
Office apps automatically use organizational OneDrive. Experiences such as Data Engineering, Data Warehouse, Data Factory, Power BI, and Real-Time Intelligence use OneLake as their native store without extra setup.

OneLake lets you instantly mount your existing PaaS storage accounts using the
Shortcut feature. You don't have to migrate your existing data. Shortcuts provide direct
access to data in Azure Data Lake Storage. They also enable easy data sharing between
users and applications without duplicating files. Additionally, you can create shortcuts to
other storage systems, allowing you to analyze cross-cloud data with intelligent caching
that reduces egress costs and brings data closer to compute.

Real-Time hub: the unification of data streams


The Real-Time hub is a foundational location for data in motion. It provides a unified
SaaS experience and tenant-wide logical place for streaming data. It lists data from
every source, allowing users to discover, ingest, manage, and react to it. It contains both
streams and KQL database tables. Streams include Data streams, Microsoft sources
(like Azure Event Hubs, Azure IoT Hub, Azure SQL DB Change Data Capture (CDC),
Azure Cosmos DB CDC, and PostgreSQL DB CDC), and Fabric events (Fabric events and
external events from Azure, Microsoft 365, or other clouds).

The Real-Time hub makes it easy to discover, ingest, manage, and consume data-in-
motion from a wide variety of sources to collaborate and develop streaming
applications in one place. For more information, see What is the Real-Time hub?

Fabric solutions for ISVs


If you're an Independent Software Vendor (ISV) looking to integrate your solutions
with Microsoft Fabric, you can use one of the following paths based on your desired
level of integration:

Interop - Integrate your solution with the OneLake Foundation and establish basic
connections and interoperability with Fabric.
Develop on Fabric - Build your solution on top of the Fabric platform or seamlessly
embed Fabric's functionalities into your existing applications. You can easily use
Fabric capabilities with this option.
Build a Fabric workload - Create customized workloads and experiences in Fabric,
tailoring your offerings to maximize their impact within the Fabric ecosystem.

For more information, see the Fabric ISV partner ecosystem.

Related content
Microsoft Fabric terminology
Create a workspace
Navigate to your items from Microsoft Fabric Home page
End-to-end tutorials in Microsoft Fabric



Microsoft Fabric trial capacity
Article • 01/29/2025

Note

Are you a new developer working with Fabric? Are you interested in sharing your
getting started experience and helping us make improvements? We’d like to talk
with you! Sign up here if interested .

Microsoft Fabric is provided free of charge when you sign up for a Microsoft Fabric trial
capacity. Your use of the Microsoft Fabric trial capacity includes access to the Fabric
product workloads and the resources to create and host Fabric items. The Fabric trial
lasts for 60 days unless canceled sooner.

Note

If you're ready to purchase Fabric, visit the Purchase Fabric page.

With one trial of a Fabric capacity, you get the following features:

Full access to all of the Fabric workloads and features. There are a few key Fabric
features that aren't available on trial capacities.
OneLake storage up to 1 TB.
A license similar to Premium Per User (PPU)
One capacity per trial. Other Fabric capacity trials can be started until a maximum,
set by Microsoft, is met.
The ability for users to create Fabric items and collaborate with others in the Fabric
trial capacity.

Creating and collaborating in Fabric includes:

Creating Workspaces (folders) for projects that support Fabric capabilities.
Sharing Fabric items, such as semantic models, warehouses, and notebooks, and
collaborating on them with other Fabric users.
Creating analytics solutions using Fabric items.

About the trial capacity


When you start a trial of a Fabric capacity, your trial capacity has 64 capacity units (CU).
You get the equivalent of an F64 capacity but there are a few key features that aren't
available on trial capacities. These features include:

Copilot
Trusted workspace access
Managed private endpoints

About the trial license


If you do not already have an assigned Power BI Premium Per User (PPU) license, you'll
receive a Power BI Individual Trial when initiating a Fabric trial capacity. This individual
trial enables you to perform the actions and use the features that a PPU license enables.
Your Account manager still displays the nontrial licenses assigned to you. But in order to
make full use of Fabric, your Fabric trial includes the Power BI Individual trial.

Use your trial


To begin using your trial of a Fabric capacity, add items to My workspace or create a
new workspace. Assign that workspace to your trial capacity using the Trial license
mode, and then all the items in that workspace are saved and executed in that capacity.
Invite colleagues to those workspaces so they can share the trial experience with you. If
you, as the capacity administrator, enable Contributor permissions, then others can also
assign their workspaces to your trial capacity. For more information about sharing, see
Share trial capacities.

Existing Power BI users


If you're an existing Power BI user, you can skip to Start the Fabric trial. If you're already
enrolled in a Power BI trial, you don't see the option to Start trial or Free trial in your
Account manager.

Users who are new to Power BI


The Fabric trial requires a per-user Power BI license. Navigate to
https://app.fabric.microsoft.com to sign up for a Fabric (Free) license. Once you have
the free license, you can begin participating in the Fabric capacity trial.

You may already have a license and not realize it. For example, some versions of
Microsoft 365 include a Fabric (Free) or Power BI Pro license. Open Fabric
(app.fabric.microsoft.com) and select your Account manager to see if you already have a
license, and which license it is. Read on to see how to open your Account manager.
Start the Fabric capacity trial
You can start a trial several different ways. The first two methods make you the Capacity
administrator of the trial capacity.

Sign up for a trial capacity. You manage who else can use your trial by giving
coworkers permission to create workspaces in your trial capacity. Or, by assigning
workspaces to the trial capacity, which automatically adds coworkers (with roles in
those workspaces) to the trial capacity.
Attempt to use a Fabric feature. If your organization enabled self-service,
attempting to use a Fabric feature launches a Fabric trial.
Join a trial started by a coworker by adding your workspace to that existing trial
capacity. This action only is possible if the owner gives you, or gives the entire
organization, Contributor permissions to the trial.

For more information, see Sharing trial capacities.

Follow these steps to start your Fabric capacity trial and become the Capacity
administrator of that trial.

1. Open the Fabric homepage and select the Account manager.

2. In the Account manager, select Free trial. If you don't see Free trial or Start trial or
a Trial status, trials might be disabled for your tenant.

Note

If the Account manager already displays Trial status, you may already have a
Power BI trial or a Fabric (Free) trial in progress. To test this out, attempt to
use a Fabric feature. For more information, see Start using Fabric.

3. If prompted, agree to the terms and then select Start trial.

4. Once your trial capacity is ready, you receive a confirmation message. Select Got it
to begin working in Fabric. You're now the Capacity administrator for that trial
capacity. To learn how to share your trial capacity using workspaces, see Share trial
capacities

5. Open your Account manager again. Notice the heading for Trial status. Your
Account manager keeps track of the number of days remaining in your trial. You
also see the countdown in your Fabric menu bar when you work in a product
workload.
Congratulations. You now have a Fabric trial capacity that includes a Power BI individual
trial (if you didn't already have a Power BI paid license) and a Fabric trial capacity. To
share your capacity, see Share trial capacities.

Other ways to start a Microsoft Fabric trial


In some situations, your Fabric administrator enables Microsoft Fabric for the tenant but
you don't have access to a capacity that has Fabric enabled. You have another option for
enabling a Fabric capacity trial. When you try to create a Fabric item in a workspace that
you own (such as My Workspace) and that workspace doesn't support Fabric items, you
receive a prompt to start a trial of a Fabric capacity. If you agree, your trial starts and
your My workspace is upgraded to a trial capacity workspace. You're the Capacity
administrator and can add workspaces to the trial capacity.

Share trial capacities


Each standard trial of a Fabric capacity includes 64 capacity units. The person who starts
the trial becomes the Capacity administrator for that trial capacity. Other users on the
same tenant can also start a Fabric trial and become the Capacity administrator for their
own trial capacity. Hundreds of customers can use each trial capacity. But, Microsoft sets
a limit on the number of trial capacities that can be created on a single tenant. To help
others in your organization try out Fabric, share your trial capacity. There are several
ways to share.

Share using Contributor permissions


Enabling the Contributor permissions setting allows other users to assign their
workspaces to your trial capacity. If you're the Capacity or Fabric administrator, enable
this setting from the Admin portal.

1. From the top right section of the Fabric menubar, select the cog icon to open
Settings.
2. Select Admin portal > Trial. Enabled for the entire organization is set by default.

Enabling Contributor permissions means that any user with an Admin role in a
workspace can assign that workspace to the trial capacity and access Fabric features.
Apply these permissions to the entire organization or apply them to only specific users
or groups.

Share by assigning workspaces


If you're the Capacity administrator, assign the trial capacity to multiple workspaces.
Anyone with access to one of those workspaces is now also participating in the Fabric
capacity trial.

1. Open Workspaces and select the name of a Premium workspace.

2. Select the ellipsis (...) and choose Workspace settings > Premium > Trial.
For more information, see Use Workspace settings.

Look up the trial Capacity administrator


Contact your Capacity administrator to request access to a trial capacity or to check
whether your organization has the Fabric tenant setting enabled. Ask your Fabric
administrator to use the Admin portal to look up your Capacity administrator.

If you're the capacity or Fabric administrator, from the upper right corner of Fabric,
select the gear icon. Select Admin portal. For a Fabric trial, select Capacity settings and
then choose the Trial tab.
End a Fabric trial
End a Fabric capacity trial by canceling, letting it expire, or purchasing the full Fabric
experience. Only capacity and Fabric admins can cancel the trial of a Fabric capacity.
Individual users don't have this ability.

One reason to cancel a trial capacity is when the capacity administrator of a trial
capacity leaves the company. Since Microsoft limits the number of trial capacities
available per tenant, you might want to remove the unmanaged trial to make room to
sign up for a new trial.

When you cancel a free Fabric capacity trial, and don't move the workspaces and their
contents to a new capacity that supports Fabric:

Microsoft can't extend the Fabric capacity trial, and you might not be able to start
a new trial using your same user ID. Other users can still start their own Fabric trial
capacity.
All licenses return to their original versions. You no longer have the equivalent of a
PPU license. The license mode of any workspaces assigned to that trial capacity
changes to Power BI Pro.
All Fabric items in the workspaces become unusable and are eventually deleted.
Your Power BI items are unaffected and still available when the workspace license
mode returns to Power BI Pro.
You can't create workspaces that support Fabric capabilities.
You can't share Fabric items, such as machine learning models, warehouses, and
notebooks, and collaborate on them with other Fabric users.
You can't create any other analytics solutions using these Fabric items.

If you want to retain your data and continue to use Microsoft Fabric, purchase a capacity
and migrate your workspaces to that capacity. Or, migrate your workspaces to a
capacity that you already own that supports Fabric items.
For more information, see Canceling, expiring, and closing.

The trial expires


A standard Fabric capacity trial lasts 60 days. If you don't upgrade to a paid Fabric
capacity before the end of the trial period, non-Power BI Fabric items are removed
according to the retention policy upon removal. You have seven days after the
expiration date to save your non-Power BI Fabric items by assigning the workspaces to
a capacity that supports Fabric.

To retain your Fabric items, before your trial ends, purchase Fabric .

Cancel your Fabric capacity trial - non admins


Only the capacity or Fabric administrator can cancel the Fabric capacity trial.

Cancel the Fabric trial - Capacity and Fabric admins


Capacity admins and Fabric admins can cancel a trial capacity. The user who starts a trial
automatically becomes the capacity administrator. The Fabric administrator has full
access to all Fabric management tasks. All Fabric items (non-Power BI items) in those
workspaces become unusable and are eventually deleted.

Cancel a trial using your Account manager


As a Capacity admin, you can cancel your free Fabric trial capacity from your Account
manager. Canceling the trial this way ends the trial for yourself and anyone else you
invited to the trial.

Open your Account Manager and select Cancel trial.


Cancel the Fabric trial using the Admin portal
As a Capacity or Fabric administrator, you can use the Admin portal to cancel a trial of a
Fabric capacity.

Select Settings > Admin portal > Capacity settings. Then choose the Trials tab. Select
the cog icon for the trial capacity that you want to delete.

Considerations and limitations


I am unable to start a trial

If you don't see the Start trial button in your Account manager:
Your Fabric administrator might have disabled access, in which case you can't start a Fabric trial. To
request access, contact your Fabric administrator. You can also start a trial using
your own tenant. For more information, see Sign up for Power BI with a new
Microsoft 365 account.

You're an existing Power BI trial user, and you don't see Start trial in your Account
manager. You can start a Fabric trial by attempting to create a Fabric item. When
you attempt to create a Fabric item, you receive a prompt to start a Fabric trial. If
you don't see this prompt, it's possible that this action is deactivated by your
Fabric administrator.

If you don't have a work or school account and want to sign up for a free trial, see Sign up for Power BI with a new Microsoft 365 account.

If you do see the Start trial button in your Account manager:

You might not be able to start a trial if your tenant exhausted its limit of trial
capacities. If that is the case, you have the following options:
Request another trial capacity user to share their trial capacity workspace with
you. Give users access to workspaces.
Purchase a Fabric capacity from Azure by performing a search for Microsoft
Fabric.

To increase tenant trial capacity limits, reach out to your Fabric administrator to
create a Microsoft support ticket.

In Workspace settings, I can't assign a workspace to the trial capacity

This bug occurs when the Fabric administrator turns off trials after you start a trial. To
add your workspace to the trial capacity, open the Admin portal by selecting it from the
gear icon in the top menu bar. Then, select Trial > Capacity settings and choose the
name of the capacity. If you don't see your workspace assigned, add it here.
What is the region for my Fabric trial capacity?

If you start the trial using the Account manager, your trial capacity is located in the
home region for your tenant. See Find your Fabric home region for information about
how to find your home region, where your data is stored.

What impact does region have on my Fabric trial?

Not all regions are available for the Fabric trial. Start by looking up your home region
and then check to see if your region is supported for the Fabric trial. If your home region
doesn't have Fabric enabled, don't use the Account manager to start a trial. To start a
trial in a region that isn't your home region, follow the steps in Other ways to start a
Fabric trial. If you already started a trial from Account manager, cancel that trial and
follow the steps in Other ways to start a Fabric trial instead.

Can I move my tenant to another region?

You can't move your organization's tenant between regions by yourself. If you need to
change your organization's default data location from the current region to another
region, you must contact support to manage the migration for you. For more
information, see Move between regions.

Fabric trial capacity availability by Azure region

To learn more about regional availability for Fabric trials, see Fabric trial capacities are
available in all regions.

How is the Fabric trial different from an individual trial of Power BI paid?

A per-user trial of Power BI paid allows access to the Fabric landing page. Once you sign
up for the Fabric trial, you can use the trial capacity for storing Fabric workspaces and
items and for running Fabric workloads. All rules guiding Power BI licenses and what you
can do in the Power BI workload remain the same. The key difference is that a Fabric
capacity is required to access non-Power BI workloads and items.

Autoscale

The Fabric trial capacity doesn't support autoscale. If you need more compute capacity,
you can purchase a Fabric capacity in Azure.

For existing Synapse users

The Fabric trial is different from a Proof of Concept (POC). A Proof of Concept
(POC) is standard enterprise vetting that requires financial investment and months'
worth of work customizing the platform and using fed data. The Fabric trial is free
for users and doesn't require customization. Users can sign up for a free trial and
start running product workloads immediately, within the confines of available
capacity units.

You don't need an Azure subscription to start a Fabric trial. If you have an existing
Azure subscription, you can purchase a (paid) Fabric capacity.

For existing Power BI users

Trial Capacity administrators can migrate existing workspaces into a trial capacity using
workspace settings and choosing Trial as the license mode. To learn how to migrate
workspaces, see create workspaces.
Related content
Learn about licenses

Review Fabric terminology



Microsoft Fabric preview information
Article • 01/26/2025

This article describes the meaning of preview in Microsoft Fabric, and explains how
preview experiences and features can be used.

Preview experiences and features are released with limited capabilities, but are made
available on a preview basis so customers can get early access and provide feedback.

Preview experiences and features:

Are subject to separate supplemental preview terms .

Aren't meant for production use.

Aren't subject to SLAs. However, Microsoft Support is eager to get your feedback on the preview functionality, and might provide best-effort support in certain cases.

May have limited or restricted functionality.

May be available only in selected geographic areas.

Who can enable preview experiences and features

To enable a preview experience or feature, you need to have the Fabric administrator role.

Note

When a preview feature is delegated, it can be enabled by a capacity admin for that capacity.

How do I enable a preview experience or feature
To enable a preview experience or feature, follow these steps:

1. Navigate to the admin portal.

2. Select the tenant settings tab.

3. Select the preview experience or feature you want to enable.

4. Enable the experience using the tenant setting.



Microsoft Fabric terminology
Article • 01/26/2025

Learn the definitions of terms used in Microsoft Fabric, including terms specific to Fabric
Data Warehouse, Fabric Data Engineering, Fabric Data Science, Real-Time Intelligence,
Data Factory, and Power BI.

General terms
Capacity: Capacity is a dedicated set of resources that is available at a given time
to be used. Capacity defines the ability of a resource to perform an activity or to
produce output. Different items consume different capacity at a certain time. Fabric
offers capacity through the Fabric SKU and Trials. For more information, see What
is capacity?

Experience: A collection of capabilities targeted to a specific functionality. The
Fabric experiences include Fabric Data Warehouse, Fabric Data Engineering, Fabric
Data Science, Real-Time Intelligence, Data Factory, and Power BI.

Item: An item is a set of capabilities within an experience. Users can create, edit, and
delete them. Each item type provides different capabilities. For example, the Data
Engineering experience includes the lakehouse, notebook, and Spark job definition
items.

Tenant: A tenant is a single instance of Fabric for an organization and is aligned
with a Microsoft Entra ID.

Workspace: A workspace is a collection of items that brings together different
functionality in a single environment designed for collaboration. It acts as a
container that uses capacity for the work that is executed, and provides controls
for who can access the items in it. For example, in a workspace, users create
reports, notebooks, semantic models, etc. For more information, see Workspaces
article.

Fabric Data Engineering


Lakehouse: A lakehouse is a collection of files, folders, and tables that represent a
database over a data lake used by the Apache Spark engine and SQL engine for
big data processing. A lakehouse includes enhanced capabilities for ACID
transactions when using the open-source Delta formatted tables. The lakehouse
item is hosted within a unique workspace folder in Microsoft OneLake. It contains
files in various formats (structured and unstructured) organized in folders and
subfolders. For more information, see What is a lakehouse?

Notebook: A Fabric notebook is a multi-language interactive programming tool with rich functions, which include authoring code and markdown, running and monitoring a Spark job, viewing and visualizing results, and collaborating with the team. It helps data engineers and data scientists explore and process data, and build machine learning experiments with both code and low-code experiences. It can be easily transformed into a pipeline activity for orchestration.

Spark application: An Apache Spark application is a program written by a user
using one of Spark's API languages (Scala, Python, Spark SQL, or Java) or
Microsoft-added languages (.NET with C# or F#). When an application runs, it's
divided into one or more Spark jobs that run in parallel to process the data faster.
For more information, see Spark application monitoring.
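
To make these terms concrete, here is a minimal PySpark application sketch of the kind a Spark job definition might run; the table names are hypothetical placeholders, and this is only an illustrative example rather than a prescribed pattern.

# Illustrative PySpark application; assumes the source table exists in the attached lakehouse.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-rollup").getOrCreate()

orders = spark.read.table("orders")                        # read a lakehouse table
daily = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("total_amount"))      # aggregate per day
)
daily.write.mode("overwrite").saveAsTable("orders_daily")  # each action becomes one or more Spark jobs

spark.stop()

When this application runs, Spark splits each action (the read and the write) into jobs and tasks that execute in parallel, which is what the Spark job and task terms above refer to.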

Apache Spark job: A Spark job is part of a Spark application that is run in parallel
with other jobs in the application. A job consists of multiple tasks. For more
information, see Spark job monitoring.

Apache Spark job definition: A Spark job definition is a set of parameters, set by
the user, indicating how a Spark application should be run. It allows you to submit
batch or streaming jobs to the Spark cluster. For more information, see What is an
Apache Spark job definition?

V-order: A write optimization to the parquet file format that enables fast reads and
provides cost efficiency and better performance. All the Fabric engines write v-
ordered parquet files by default.

Data Factory
Connector: Data Factory offers a rich set of connectors that allow you to connect
to different types of data stores. Once connected, you can transform the data. For
more information, see connectors.

Data pipeline: In Data Factory, a data pipeline is used for orchestrating data
movement and transformation. These pipelines are different from the deployment
pipelines in Fabric. For more information, see Pipelines in the Data Factory
overview.

Dataflow Gen2: Dataflows provide a low-code interface for ingesting data from
hundreds of data sources and transforming your data. Dataflows in Fabric are
referred to as Dataflow Gen2. Dataflow Gen1 exists in Power BI. Dataflow Gen2
offers extra capabilities compared to Dataflows in Azure Data Factory or Power BI.
You can't upgrade from Gen1 to Gen2. For more information, see Dataflows in the
Data Factory overview.

Trigger: An automation capability in Data Factory that initiates pipelines based on
specific conditions, such as schedules or data availability.

Fabric Data Science


Data Wrangler: Data Wrangler is a notebook-based tool that provides users with
an immersive experience to conduct exploratory data analysis. The feature
combines a grid-like data display with dynamic summary statistics and a set of
common data-cleansing operations, all available with a few selected icons. Each
operation generates code that can be saved back to the notebook as a reusable
script.
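
As an illustration of the kind of reusable script such operations produce, the sketch below shows pandas code similar to what Data Wrangler might emit back to a notebook; the column names and file path are hypothetical, and the actual generated code reflects whichever operations you select in the tool.

# Hypothetical example of Data Wrangler-style generated cleaning code (pandas).
import pandas as pd

def clean_data(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["customer_id"])                  # drop rows missing an identifier
    df = df.drop_duplicates()                               # remove exact duplicate rows
    df["country"] = df["country"].str.strip().str.upper()   # normalize a text column
    return df

df_clean = clean_data(pd.read_csv("/lakehouse/default/Files/customers.csv"))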

Experiment: A machine learning experiment is the primary unit of organization and
control for all related machine learning runs. For more information, see Machine
learning experiments in Microsoft Fabric.

Model: A machine learning model is a file trained to recognize certain types of
patterns. You train a model over a set of data, and you provide it with an algorithm
that it uses to reason over and learn from that data set. For more information, see
Machine learning model.

Run: A run corresponds to a single execution of model code. In MLflow, tracking
is based on experiments and runs.

Fabric Data Warehouse


SQL analytics endpoint: Each Lakehouse has a SQL analytics endpoint that allows a
user to query delta table data with TSQL over TDS. For more information, see SQL
analytics endpoint.
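
Since the endpoint speaks TDS, any T-SQL client can query it. The sketch below is a minimal example assuming the pyodbc package and the Microsoft ODBC Driver 18 for SQL Server are installed locally; the server address, database, and table names are hypothetical placeholders (copy the real connection details from the SQL analytics endpoint's settings in Fabric).

# Minimal sketch of querying a SQL analytics endpoint with T-SQL over TDS.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=contoso.datawarehouse.fabric.microsoft.com;"   # placeholder endpoint address
    "DATABASE=SalesLakehouse;"                             # placeholder item name
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)
for row in conn.execute("SELECT TOP 5 * FROM dbo.orders;"):
    print(row)
conn.close()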

Fabric Data Warehouse: The Fabric Data Warehouse functions as a traditional data
warehouse and supports the full transactional T-SQL capabilities you would expect
from an enterprise data warehouse. For more information, see Fabric Data
Warehouse.

Real-Time Intelligence
Activator: Activator is a no-code, low-code tool that allows you to create alerts,
triggers, and actions on your data. Activator is used to create alerts on your data
streams. For more information, see Activator.

Eventhouse: Eventhouses provide a solution for handling and analyzing large
volumes of data, particularly in scenarios requiring real-time analytics and
exploration. They're designed to handle real-time data streams efficiently, which
lets organizations ingest, process, and analyze data in near real-time. A single
workspace can hold multiple Eventhouses, an eventhouse can hold multiple KQL
databases, and each database can hold multiple tables. For more information, see
Eventhouse overview.

Eventstream: The Microsoft Fabric eventstreams feature provides a centralized
place in the Fabric platform to capture, transform, and route real-time events to
destinations with a no-code experience. An eventstream consists of various
streaming data sources, ingestion destinations, and an event processor when the
transformation is needed. For more information, see Microsoft Fabric
eventstreams.

KQL Database: The KQL Database holds data in a format that you can execute KQL
queries against. KQL databases are items under an Eventhouse. For more
information, see KQL database.
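
For a sense of how a KQL database is queried programmatically, the sketch below uses the azure-kusto-data Python package with Azure CLI sign-in; the query URI, database, and table names are hypothetical placeholders (the real URI appears on the KQL database's details page).

# Minimal sketch of running a KQL query from Python; names and URI are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

uri = "https://trd-contoso.z1.kusto.fabric.microsoft.com"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(uri)
client = KustoClient(kcsb)

result = client.execute("TelemetryDB", "Events | where Level == 'Error' | take 10")
for row in result.primary_results[0]:
    print(row.to_dict())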

KQL Queryset: The KQL Queryset is the item used to run queries, view results, and
manipulate query results on data from your Data Explorer database. The queryset
includes the databases and tables, the queries, and the results. The KQL Queryset
allows you to save queries for future use, or export and share queries with others.
For more information, see Query data in the KQL Queryset

Real-Time hub
Real-Time hub: Real-Time hub is the single place for all data-in-motion across
your entire organization. Every Microsoft Fabric tenant is automatically provisioned
with the hub. For more information, see Real-Time hub overview.

OneLake
Shortcut: Shortcuts are embedded references within OneLake that point to other
file store locations. They provide a way to connect to existing data without having
to directly copy it. For more information, see OneLake shortcuts.
Related content
Navigate to your items from Microsoft Fabric Home page
End-to-end tutorials in Microsoft Fabric



What's new in Microsoft Fabric?
Article • 02/13/2025

This page is continuously updated with a recent review of what's new in Microsoft
Fabric.

To follow the latest in Fabric news and features, see the Microsoft Fabric Updates
Blog .
For community, marketing, case studies, and industry news, see the Microsoft
Fabric Blog .
Follow the latest in Power BI at What's new in Power BI?
For older updates, review the Microsoft Fabric What's New archive.

New to Microsoft Fabric?


Learning Paths for Fabric
Get started with Microsoft Fabric
End-to-end tutorials in Microsoft Fabric
Microsoft Fabric terminology

Features currently in preview


The following list describes the features of Microsoft Fabric that are currently in preview. Preview features are sorted alphabetically.

Note

Features currently in preview are available under supplemental terms of use. Review them for legal terms that apply to Azure features that are in beta, preview, or
otherwise not yet released into general availability. Microsoft Fabric provides
previews to give you a chance to evaluate and share feedback with the product
group on preview features before they become generally available (GA).


Feature Learn more

AutoML code-first preview In Fabric Data Science, the new AutoML feature enables
automation of your machine learning workflow. AutoML, or
Automated Machine Learning, is a set of techniques and tools that
Feature Learn more

can automatically train and optimize machine learning models for


any given data and task type.

AutoML low code user AutoML, or Automated Machine Learning, is a process that
experience in Fabric automates the time-consuming and complex tasks of developing
(preview) machine learning models. The new low code AutoML experience
supports a variety of tasks, including regression, forecasting,
classification, and multi-class classification. To get started, Create
models with Automated ML (preview).

Azure Data Factory item You can now bring your existing Azure Data Factory (ADF) to your
Fabric workspace . This new preview capability allows you to
connect to your existing Azure Data Factory from your Fabric
workspace. Select "Create Azure Data Factory" inside of your Fabric
Data Factory workspace, and you can manage your Azure data
factories directly from the Fabric workspace.

Capacity pools preview Capacity administrators can now create custom pools (preview)
based on their workload requirements, providing granular control
over compute resources. Custom pools for Data Engineering and
Data Science can be set as Spark Pool options within Workspace
Spark Settings and environment items.

Code-First In Fabric Data Science, FLAML is now integrated for


Hyperparameter Tuning hyperparameter tuning, currently a preview feature. Fabric's
preview flaml.tune feature streamlines this process, offering a cost-
effective and efficient approach to hyperparameter tuning .

Copilot in Fabric is Copilot in Fabric is now available to all customers, including


available worldwide Copilot for Power BI, Copilot for Data Factory, Copilot for Data
Science & Data Engineering, and Copilot for Real-Time Intelligence.
Read more in our Overview of Copilot in Fabric.

Copy job The Copy job (preview) in Data Factory has advantages over the
Copy activity. For more information, see Announcing Preview: Copy
Job in Microsoft Fabric . For a tutorial, see Learn how to create a
Copy job (preview) in Data Factory for Microsoft Fabric.

Dataflow Gen2 CI/CD CI/CD and Git integration are now supported for Dataflow Gen2.
support For more information, see Dataflow Gen2 CI/CD support .

Data Factory Apache Apache Airflow job (preview) in Data Factory , powered by
Airflow jobs preview Apache Airflow, offer seamless authoring, scheduling, and
monitoring experience for Python-based data processes defined as
Directed Acyclic Graphs (DAGs). For more information, see
Quickstart: Create an Apache Airflow Job.

Data pipeline capabilities The new Data pipeline capabilities in Copilot for Data Factory are
in Copilot for Data Factory now available in preview. These features function as an AI expert to
Feature Learn more

(preview) help users build, troubleshoot, and maintain data pipelines.

Data Wrangler for Spark Data Wrangler on Spark DataFrames in preview. Users can now edit
DataFrames preview Spark DataFrames in addition to pandas DataFrames with Data
Wrangler .

Data Science AI skill You can now build your own generative AI experiences over your
(preview) data in Fabric with the AI skill (preview)! You can build question and
answering AI systems over your Lakehouses and Warehouses. For
more information, see Introducing AI Skills in Microsoft Fabric: Now
in Preview . To get started, try AI skill example with the
AdventureWorks dataset (preview).

Delta column mapping in SQL analytics endpoint now supports Delta tables with column
the SQL analytics endpoint mapping enabled . For more information, see Delta column
mapping and Limitations of the SQL analytics endpoint. This
feature is currently in preview.

Enhanced conversation We are introducing improvements to AI functionalities in Microsoft


with Microsoft Fabric Fabric , including a new way to store chat prompts and history,
Copilot (Preview) improved accuracy of responses, and better context knowledge
retention.

Eventhouse Query Query Acceleration for OneLake Shortcuts in Eventhouse speeds


Acceleration for OneLake up ad hoc queries over data in OneLake. OneLake shortcuts are
Shortcuts (Preview) references from an Eventhouse that point to internal Fabric or
external sources. Previously, queries run over OneLake shortcuts
were less performant than on data that is ingested directly to
Eventhouses due to various factors.

Eventhouse Monitoring Eventhouse monitoring , currently in preview, offers multiple


(preview) events and metrics that are automatically routed and stored in
Workspace Monitoring. For more information, see Manage and
monitor an eventhouse.

Eventstream processing Now, Eventstream supports processing and transforming events


and routing events to with business requirements before routing the events to the
Activator (preview) destination: Activator. When these transformed events reach
Activator, you can establish rules or conditions for your alerts to
monitor the events.

Fabric gateway enables Connect to on-premises data sources with a Fabric on-premises
OneLake shortcuts to on- data gateway on a machine in your environment, with
premises data networking visibility of your S3 compatible or Google Cloud
Storage data source. Then, you create your shortcut and select that
gateway. For more information, see Create shortcuts to on-
premises data.
Feature Learn more

Fabric Spark connector for The Spark connector for Data Warehouse enables a Spark
Fabric Data Warehouse in developer or a data scientist to access and work on data from a
Spark runtime (preview) warehouse or SQL analytics endpoint of the lakehouse (either from
within the same workspace or from across workspaces) with a
simplified Spark API.

Fabric Spark Diagnostic The Fabric Apache Spark Diagnostic Emitter (preview) allows
Emitter (preview) Apache Spark users to collect logs, event logs, and metrics from
their Spark applications and send them to various destinations,
including Azure Event Hubs, Azure storage, and Azure log analytics.

Fabric SQL database SQL database in Microsoft Fabric (Preview) is a developer-friendly


(Preview) transactional database, based on Azure SQL Database, that allow
you to easily create your operational database in Fabric. SQL
database in Fabric uses the SQL Database Engine as Azure SQL
Database. Review a Decision guide: choose a SQL database.

High concurrency mode High concurrency mode for Notebooks in Pipelines enables users
for Notebooks in Pipelines to share Spark sessions across multiple notebooks within a
(preview) pipeline. With high concurrency mode, users can trigger pipeline
jobs, and these jobs are automatically packed into existing high
concurrency sessions.

Iceberg data in OneLake You can now consume Iceberg-formatted data across Microsoft
using Snowflake and Fabric with no data movement or duplication , plus Snowflake has
shortcuts (preview) added the ability to write Iceberg tables directly to OneLake. For
more information, see Use Iceberg tables with OneLake.

Incremental refresh for Incremental refresh in Dataflow Gen2 (Preview) is designed to


Dataflow Gen2 (preview) optimize data ingestion and transformation, particularly as your
data continues to expand. For more information, see Announcing
Preview: Incremental Refresh in Dataflow Gen2 .

Invoke remote pipeline You can now use the Invoke Pipeline (preview) activity to call
(preview) in Data pipeline pipelines from Azure Data Factory or Synapse Analytics pipelines .
This feature allows you to utilize your existing ADF or Synapse
pipelines inside of a Fabric pipeline by calling it inline through this
new Invoke Pipeline activity.

JSON Aggregate support Fabric warehouses now support JSON aggregate functions in
(preview) preview, JSON_ARRAYAGG and JSON_OBJECTAGG.

Lakehouse schemas The Lakehouse schemas feature (preview) introduces data


feature pipeline support for reading the schema info from Lakehouse
tables and supports writing data into tables under specified
schemas. Lakehouse schemas allow you to group your tables
together for better data discovery, access control, and more.
Feature Learn more

Lakehouse support for git The Lakehouse now integrates with the lifecycle management
integration and capabilities in Microsoft Fabric , providing a standardized
deployment pipelines collaboration between all development team members throughout
(preview) the product's life. Lifecycle management facilitates an effective
product versioning and release process by continuously delivering
features and bug fixes into multiple environments.

Livy REST API (preview) The Fabric Livy endpoint lets users submit and execute their Spark
code on the Spark compute within a designated Fabric workspace,
eliminating the need to create a Notebook or Spark Job Definition
item. The Livy API offers the ability to customize the execution
environment through its integration with the Environment .

Managed virtual networks Managed virtual networks are virtual networks that are created and
(preview) managed by Microsoft Fabric for each Fabric workspace.

Microsoft 365 connector The Microsoft 365 connector now supports ingesting data into
now supports ingesting Lakehouse tables .
data into Lakehouse
(preview)

Microsoft Fabric Admin Fabric Admin APIs are designed to streamline administrative
APIs tasks. The initial set of Fabric Admin APIs is tailored to simplify the
discovery of workspaces, Fabric items, and user access details.

Mirroring in Microsoft With database mirroring in Fabric, you can easily bring your
Fabric preview databases into OneLake in Microsoft Fabric , enabling seamless
zero-ETL, near real-time insights on your data – and unlocking
warehousing, BI, AI, and more. For more information, see What is
Mirroring in Fabric?

Mirroring CI/CD (preview) Mirroring now supports CI/CD as a preview feature. You can
integrate Git for source control and utilize ALM Deployment
Pipelines, streamlining the deployment process and ensuring
seamless updates to mirrored databases.

Nested common table Fabric Warehouse and SQL analytics endpoint both support
expressions (CTEs) standard, sequential, and nested CTEs . While CTEs are generally
(preview) available in Microsoft Fabric, nested common table expressions
(CTE) in Fabric data warehouse are currently a preview feature.

Notebook debug within You can now place breakpoints and debug your Notebook code
vscode.dev (preview) with the Synapse VS Code - Remote extension in vscode.dev .
This update first starts with the Fabric Runtime 1.3 (GA).

Notebook version history Fabric notebook version history provides robust built-in version
(preview) control capabilities, including automatic and manual checkpoints,
tracked changes, version comparisons, and previous version
restore. For more information, see Notebook version history.
Feature Learn more

OneLake data access roles OneLake data access roles for lakehouse are in preview . Role
permissions and user/group assignments can be easily updated
through a new folder security user interface.

OneLake SAS (preview) Support for short-lived, user-delegated OneLake SAS is now in
preview . This functionality allows applications to request a User
Delegation Key backed by Microsoft Entra ID, and then use this key
to construct a OneLake SAS token. This token can be handed off to
provide delegated access to another tool, node, or user, ensuring
secure and controlled access.

Open mirroring (Preview) Open mirroring enables any application to write change data
directly into a mirrored database in Fabric, based on the open
mirroring public APIs and approach. Open mirroring is designed
to be extensible, customizable, and open. It's a powerful feature
that extends mirroring in Fabric based on open Delta Lake table
format. To get started, see Tutorial: Configure Microsoft Fabric
open mirrored databases.

OPENROWSET support The T-SQL OPENROWSET(BULK) function is now available in Fabric


(preview) warehouse as a preview feature. For more information and
examples, see Browse file content using OPENROWSET function
(Preview).

Prebuilt Azure AI services The preview of prebuilt AI services in Fabric is an integration with
in Fabric preview Azure AI services , formerly known as Azure Cognitive Services.
Prebuilt Azure AI services allow for easy enhancement of data with
prebuilt AI models without any prerequisites. Currently, prebuilt AI
services are in preview and include support for the Microsoft Azure
OpenAI Service , Azure AI Language , and Azure AI Translator .

Purview Data Loss Extending Microsoft Purview's Data Loss Prevention (DLP) policies
Prevention policies have into Fabric lakehouses is now in preview.
been extended to Fabric
lakehouses

Purview Data Loss Restricting access based on sensitive content for semantic models,
Prevention policies now now in preview, helps you to automatically detect sensitive
support the restrict access information as it is uploaded into Fabric lakehouses and semantic
action for semantic models models .

Python Notebook Python Notebooks are for BI Developers and Data Scientists
(preview) working with smaller datasets using Python as their primary
language. To get started, see Use Python experience on Notebook.

Real-Time Dashboards and With separate permissions for dashboards and underlying data,
underlying KQL databases administrators now have the flexibility to allow users to view
dashboards without giving access to the raw data .
Feature Learn more

access separation
(preview)

Reserve maximum cores A new workspace-level setting allows you to reserve maximum
for jobs (preview) cores for your active jobs for Spark workloads . For more
information, see High concurrency mode in Apache Spark for
Fabric.

REST APIs for connections REST APIs for connections and gateways are now in preview .
and gateways (preview) These new APIs allow developers to programmatically manage and
interact with connections and gateways within Fabric.

REST APIs for Fabric Data The REST APIs for Fabric Data Factory Pipelines are now in
Factory pipelines preview preview. Fabric data pipeline public REST API enable you to extend
the built-in capability in Fabric to create, read, update, delete, and
list pipelines.

Secure Data Streaming By creating a Fabric Managed Private Endpoint, you can now
with Managed Private securely connect Eventstream to your Azure services, such as Azure
Endpoints in Eventstream Event Hubs or IoT Hub, within a private network or behind a
(Preview) firewall. For more information, see Secure Data Streaming with
Managed Private Endpoints in Eventstream (Preview) .

Semantic model refresh Use the Semantic model refresh activity to refresh a Power BI
activity (preview) Dataset (Preview), the most effective way to refresh your Fabric
semantic models. For more information, see New Features for
Fabric Data Factory Pipelines Announced at Ignite .

SET SHOWPLAN_XML support The SET SHOWPLAN_XML T-SQL syntax is now supported as a preview feature in Fabric Data Warehouse and SQL analytics endpoint.

Session Expiry Control in Workspace Settings for Notebook Interactive Runs (preview) A new session expiry control in Data Engineering/Science workspace settings allows you to set the maximum expiration time limit for notebook interactive sessions. By default, sessions expire after 20 minutes, but you can now customize the maximum expiration duration.

Share the Fabric AI skill (preview) The Share capability for the Fabric AI skill (preview) allows you to share the AI Skill with others using a variety of permission models.

Spark Run Series Analysis The Spark Monitoring Run Series Analysis features allow you to
preview analyze the run duration trend and performance comparison for
Pipeline Spark activity recurring run instances and repetitive Spark
run activities, from the same Notebook or Spark Job Definition.

Splunk add-on preview Microsoft Fabric add-on for Splunk allows users to ingest logs
from Splunk platform into a Fabric KQL DB using the Kusto python
SDK.

SQL database support for You can use tenant level private links to provide secure access for
Tenant level private links data traffic in Microsoft Fabric, including SQL database (in preview).
(preview) For more information, see Set up and use private links and Blog:
Tenant Level Private Link (Preview) .

Tags Tags (preview) help admins categorize and organize data, enhancing the searchability of your data and boosting success rates and efficiency for end users.

Task flows in Microsoft The preview of task flows in Microsoft Fabric is enabled for all
Fabric (preview) Microsoft Fabric users. With Task flows (preview), when designing a
data project, you no longer need to use a whiteboard to sketch out
the different parts of the project and their interrelationships.
Instead, you can use a task flow to build and bring this key
information into the project itself.

varchar(max) and Support for the varchar(max) and varbinary(max) data types in
varbinary(max) support in Warehouse is now in preview. For more information, see
preview Announcing public preview of VARCHAR(MAX) and
VARBINARY(MAX) types in Fabric Data Warehouse .

Terraform Provider for The Terraform Provider for Microsoft Fabric is now in preview. The
Fabric (preview) Terraform Provider for Microsoft Fabric supports the creation and
management of many Fabric resources. For more information, see
Announcing the new Terraform Provider for Microsoft Fabric .

T-SQL support in Fabric The T-SQL notebook feature in Microsoft Fabric (preview) lets
notebooks (preview) you write and run T-SQL code within a notebook. You can use them
to manage complex queries and write better markdown
documentation. It also allows direct execution of T-SQL on
connected warehouse or SQL analytics endpoint. To learn more, see
T-SQL support in Microsoft Fabric notebooks.

Warehouse source control Using Source control with Warehouse (preview), you can manage
(preview) development and deployment of versioned warehouse objects. You
can use SQL Database Projects extension available inside of Azure
Data Studio and Visual Studio Code . For more information on
warehouse source control, see CI/CD with Warehouses in Microsoft
Fabric .

Workspace monitoring (preview) Workspace monitoring is a Microsoft Fabric database that collects data from a range of Fabric items in your workspace, and lets users access and analyze logs and metrics. For more about this feature, see Announcing preview of workspace monitoring.
Generally available features
The following table lists the features of Microsoft Fabric that have recently transitioned
from preview to general availability (GA).


Month Feature Learn more

January Real-time Application Lifecycle Management (ALM) and Fabric REST APIs
2025 intelligence ALM are now generally available for all RTI items: Eventstream,
and REST API GA Eventhouse, KQL Database, Realtime dashboard, Query set and
Data Activator. ALM includes both deployment pipelines and
Git integration. REST APIs allow you to programmatically
create / read / update / delete items.
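As a rough illustration of the item REST APIs, the snippet below creates an Eventstream item in a workspace through the generic create-item endpoint; the token, workspace ID, and item type string are placeholders to verify against the Fabric REST API reference.

```python
# Minimal sketch: create an Eventstream item with the Fabric REST API.
# `token` and `workspace_id` are placeholders; long-running creates may return 202 Accepted.
import requests

token = "<access-token>"
workspace_id = "<workspace-guid>"

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items",
    headers={"Authorization": f"Bearer {token}"},
    json={"displayName": "clickstream-events", "type": "Eventstream"},
)
response.raise_for_status()
print(response.status_code, response.json() if response.content else "accepted")
```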

January Warehouse You can now create restore points and perform an in-place
2025 restore points restore of a warehouse to a past point in time. Restore in-
and restore in place is an essential part of data warehouse recovery , which
place allows you to restore the data warehouse to a prior known reliable
state by replacing or overwriting the existing data warehouse
from which the restore point was created.

December 2024 Folder in Workspace As an organizational unit, the workspace folder provides a hierarchical structure for organizing and managing your items. This feature is now generally available, and includes new filter features. For more information, see Create folders in workspaces.

November 2024 Workspace monitoring Workspace monitoring is a Microsoft Fabric database that collects data from a range of Fabric items in your workspace, and lets users access and analyze logs and metrics. For more about this feature, see Announcing preview of workspace monitoring.

November OneLake external OneLake external data sharing makes it possible for Fabric
2024 data sharing (GA) users to share data from within their Fabric tenant with users
in another Fabric tenant.

November GraphQL API in The API for GraphQL , now generally available, is a data
2024 Microsoft Fabric access layer that allows you to query multiple data sources
GA quickly and efficiently in Fabric. For more information, see
What is Microsoft Fabric API for GraphQL?

November 2024 Real-Time Intelligence: now Generally Available We're excited to announce that Real-Time Intelligence is now generally available (GA). This includes the Real-Time hub, enhanced Eventstream, Eventhouse, Real-Time Dashboards, and Activator. For more information, see What is Real-Time Intelligence?

November Fabric workload The Microsoft Fabric workload development kit is now
2024 dev kit (GA) generally available . This robust developer toolkit is for
designing, developing, and interoperating with Microsoft
Fabric using frontend SDKs and backend REST APIs .

November Mirroring for With Azure SQL Database mirroring in Fabric, you can easily
2024 Azure SQL replicate data from Azure SQL Database into OneLake in
Database GA Microsoft Fabric.

November Real-Time hub Real-Time hub is now generally available . For more
2024 information, see Introduction to Fabric Real-Time hub.

October Notebook Git Notebook Git integration now supports persisting the
2024 integration mapping relationship of the attached Environment when
syncing to new workspace. For more information, see
Notebook source control and deployment

October Notebook in Now you can also use notebooks to deploy your code across
2024 Deployment different environments , such as development, test, and
Pipeline production. You can also use deployment rules to customize
the behavior of your notebooks when they're deployed, such
as changing the default Lakehouse of a Notebook. Get started
with deployment pipelines, and Notebook shows up in the
deployment content automatically.

September Mirroring for With Mirroring for Snowflake in Fabric, you can easily bring
2024 Snowflake your Snowflake data into OneLake . For more information,
see Mirroring Snowflake.

September Copilot for Data Copilot for Data Factory is now generally available and
2024 Factory included in the Dataflow Gen2 experience. For more
information, see Copilot for Data Factory overview.

September Fast Copy in The Fast copy feature in Dataflows Gen2 is now generally
2024 Dataflow Gen2 available. For more information, read Announcing the General
Availability of Fast Copy in Dataflows Gen2 .

September Fabric Pipeline On-premises connectivity for Data pipelines in Microsoft Fabric
2024 Integration in is now generally available. Learn How to access on-premises
On-premises data sources in Data Factory for Microsoft Fabric.
Data Gateway GA

September 2024 Data Wrangler for Spark DataFrames Data Wrangler on Spark DataFrames: a notebook-based tool for exploratory data analysis, Data Wrangler works for both pandas DataFrames and Spark DataFrames and arrives at general availability with new usability improvements.

September 2024 Fabric Runtime 1.3 Fabric Runtime 1.3 (GA) includes Apache Spark 3.5, Delta Lake 3.1, R 4.4.1, Python 3.11, support for Starter Pools, integration with Environment, and library management capabilities. For more information, see Fabric Runtime 1.3 is Generally Available!

September OneLake REST APIs for OneLake Shortcuts allow programmatic creation
2024 Shortcuts API and management of shortcuts, now generally available. You
can now programmatically create, read, and delete OneLake
shortcuts. For example, see Use OneLake shortcuts REST APIs.
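For illustration, a shortcut can be created programmatically roughly as follows; the IDs and the ADLS Gen2 target shown here are placeholders, and the exact request body shape should be taken from the OneLake shortcuts REST API reference.

```python
# Minimal sketch: create a OneLake shortcut on a lakehouse via the shortcuts REST API.
# All IDs and the ADLS Gen2 target below are placeholders.
import requests

token = "<access-token>"
workspace_id = "<workspace-guid>"
lakehouse_id = "<lakehouse-item-guid>"

body = {
    "path": "Files",            # folder inside the lakehouse where the shortcut is created
    "name": "raw-landing",      # shortcut name
    "target": {
        "adlsGen2": {
            "connectionId": "<connection-guid>",
            "location": "https://<account>.dfs.core.windows.net",
            "subpath": "/landing",
        }
    },
}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json())
```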

September 2024 GitHub integration for source control Fabric developers can now choose GitHub or GitHub Enterprise as their source control tool, and version their Fabric items there. For more information, see What is Microsoft Fabric Git integration?

September 2024 OneLake shortcuts to Google Cloud Storage Create a Google Cloud Storage (GCS) shortcut to connect to your existing data through a single unified namespace without having to copy or move data. For more information, see Google Cloud Storage shortcuts generally available.

September 2024 OneLake shortcuts to S3-compatible data sources Create an S3-compatible shortcut to connect to your existing data through a single unified namespace without having to copy or move data. For more information, see S3 compatible shortcuts generally available.

For older general availability (GA) announcements, review the Microsoft Fabric What's
New archive.

Community
This section summarizes new Microsoft Fabric community opportunities for prospective
and current influencers and MVPs.

) Important

Join us for FabCon 2025 in Las Vegas from March 31 to April 2 for the biggest-
ever FabCon. Register and use code MSCUST for a $150 discount!

Sign up for the Fabric Community Newsletter .


Join a local Fabric User Group or join a local event .
Vote for your favorite new product feature ideas at Microsoft Fabric Ideas .
To learn about the Microsoft MVP Award and to find MVPs, see
mvp.microsoft.com .
Are you a student? Learn more about the Microsoft Learn Student Ambassadors
program .
Visit the Microsoft Fabric Career Hub for everything you need on your
certification journey, including a 50% discount on exams.
Watch and subscribe to Microsoft Fabric videos on YouTube .
Ask and answer questions in the Microsoft Fabric community .


Month Feature Learn more

December Announcing the See the winners of the Microsoft Fabric Focused
2024 winners of the Hackathon event , where we partnered with DevPost to
Microsoft Fabric and challenge the world to build the next wave of innovative
AI Learning AI powered data analytics applications with Microsoft
Hackathon! Fabric!

October Fabric Influencers Check out Microsoft MVPs & Fabric Super Users doing
2024 Spotlight October amazing work in October 2024 on all aspects of
2024 Microsoft Fabric.

October Microsoft Fabric and Part of the Microsoft Fabric and AI Learning Hackathon ,
2024 AI Learning read this guide of various capabilities that Copilot offers in
Hackathon: Copilot in Microsoft Fabric , empowering you to enhance
Fabric productivity and streamline your workflows.

October Get certified in For a limited time, the Microsoft Fabric Community team is
2024 Microsoft Fabric—for offering 5,000 free DP-600 exam vouchers to eligible
free! Fabric Community members . Complete your exam by
the end of the year and join the ranks of certified experts.

October 2024 DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric (beta) The new Microsoft Certified: Fabric Data Engineer Associate certification helps demonstrate your skills with data ingestion, transformation, administration, monitoring, and performance optimization in Fabric. To learn more, see DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric (beta).

October FabCon Europe 2024 Read a recap of Europe's first Fabric Community
2024 Conference and a Recap of Data Factory
announcements .

October Fabric Influencers The Fabric Influencers Spotlight September 2024 shines
2024 Spotlight September a bright light on the places on the internet where
2024 Microsoft MVPs & Fabric Super Users are doing some
amazing work on all aspects of Microsoft Fabric.

September 2024 Announcing: The Microsoft Fabric & AI Learning Hackathon Get ready for the Microsoft Fabric & AI Learning Hackathon! We're calling all Data/AI Enthusiasts and Data/AI practitioners to join us for another exciting opportunity to upskill and build the next generation of Data + AI solutions with Microsoft Fabric! The Hackathon is open for a seven-week submission period and offers a total of $10,000 in prizes!

For older updates, review the Microsoft Fabric What's New archive.

Power BI

) Important

If you're accessing Power BI on a web browser version older than Chrome 94,
Microsoft Edge 94, Safari 16.4, Firefox 93, or equivalent, you need to upgrade your
web browser to a newer version by August 31, 2024. Using an outdated browser
version after this date can prevent you from accessing features in Power BI.

Updates to Power BI Desktop and the Power BI service are summarized at What's new in
Power BI?

Microsoft Copilot in Microsoft Fabric


With Copilot and other generative AI features in preview, Microsoft Fabric brings a new
way to transform and analyze data, generate insights, and create visualizations and
reports. For more information, see Overview of Copilot in Fabric.


Month Feature Learn more

February 2025 Enhanced conversation with Microsoft Fabric Copilot (Preview) We are introducing improvements to AI functionalities in Microsoft Fabric, including a new way to store chat prompts and history, improved accuracy of responses, and better context knowledge retention.

October Microsoft Fabric and Part of the Microsoft Fabric and AI Learning
2024 AI Learning Hackathon , read this guide of various capabilities that
Hackathon: Copilot in Copilot offers in Microsoft Fabric , empowering you to
Fabric enhance productivity and streamline your workflows.

October Use Azure OpenAI to Read this blog to learn how to turn whiteboard sketches
2024 turn whiteboard into data pipelines , using the GPT-4o model through
sketches into data Azure OpenAI Service.
pipelines

September Creating a real time Copilot can review a table and automatically create a
2024 dashboard by Copilot dashboard with insights and a profile of the data with
a sample.

September Copilot in Dataflow Copilot for Data Factory is now generally available and
2024 Gen2 GA included in the Dataflow Gen2 experience. For more
information, see Copilot for Data Factory overview.

September Copilot for Data Copilot for Data Warehouse is now available, offering the
2024 Warehouse Copilot chat pane, quick actions, and code completions.
For more information and sample scenarios, see
Announcing the Preview of Copilot for Data Warehouse
in Microsoft Fabric .

For older updates, review the Microsoft Fabric What's New archive.

Data Factory in Microsoft Fabric


This section summarizes recent new features and capabilities of Data Factory in
Microsoft Fabric. Follow issues and feedback through the Data Factory Community
Forum .


Month Feature Learn more

January Dataflow Gen2 CI/CD CI/CD and Git integration are now supported for
2025 support (preview) Dataflow Gen2, as a preview feature. For more
information, see Dataflow Gen2 CI/CD support .

December Data Factory A couple of weeks ago we had such an exciting week for
2024 Announcements at Fabric during the Ignite Conference, filled with several
Ignite 2024 Recap product announcements and sneak previews of
upcoming new features for Data Factory in Fabric .

November REST APIs for REST APIs for connections and gateways are now in
2024 connections and preview . These new APIs allow developers to
gateways (preview) programmatically manage and interact with connections
and gateways within Fabric.

November Iceberg format via Fabric Data Factory now supports writing data in Iceberg
2024 Azure Data Lake format via Azure Data Lake Storage Gen2 Connector
Storage Gen2 in Data pipeline. For more information, see Iceberg
Connector in Data format for Data Factory in Microsoft Fabric.
pipeline

November Data Factory Copy Job – CI/CD for Copy job (preview) in Data Factory in
2024 CI/CD now available Microsoft Fabric is now available. Copy Job now
supports Git Integration and Deployment Pipeline .

November Semantic model refresh Use the Semantic model refresh activity to refresh a
2024 activity (preview) Power BI Dataset (Preview), the most effective way to
refresh your Fabric semantic models. For more
information, see New Features for Fabric Data Factory
Pipelines Announced at Ignite .

November New connectors for In the Data Factory, both data pipeline and Dataflow
2024 Fabric SQL database Gen2 now natively support the SQL database in Fabric
(Preview) connector as source and destination. More
connector updates for MariaDB, Snowflake, Dataverse,
and PostgreSQL also announced.

November OneLake catalog OneLake data hub has been rebranded as the OneLake
2024 catalog in Modern Get Data. When you use Get data
inside Pipeline, Copy job, Mirroring and Dataflow Gen2,
you'll find the OneLake data hub has been renamed to
OneLake catalog.

November Data pipeline The new Data pipeline capabilities in Copilot for Data
2024 capabilities in Copilot Factory are now available in preview. These features
for Data Factory function as an AI expert to help users build,
(preview) troubleshoot, and maintain data pipelines.

November Legacy Timestamp The recent update to Native Execution Engine on Fabric
2024 Support in Native Runtime 1.3 brings support for legacy timestamp
Execution Engine for handling, allowing seamless processing of timestamp
Fabric Runtime 1.3 data created by different Spark versions. Read to learn
why legacy timestamp support matters .

November Dataflow Gen2 CI/CD, With this new set of features , you can now seamlessly
2024 GIT source control integrate your dataflow with your existing CI/CD
integration and Public pipelines and version control of your workspace in
APIs support are now in Fabric. This integration allows for better collaboration,
preview versioning, and automation of your deployment process
across dev, test, and production environments. For more
information, see Dataflow Gen2 with CI/CD and Git
integration support (preview).

October New Features and We're excited to announce several powerful updates to
2024 Enhancements for the Virtual Network (VNET) Data Gateway , designed
Virtual Network Data to further enhance performance and improve the overall
Gateway user experience.

October Recap of Data Factory Read a recap of Data Factory announcements from
2024 Announcements at Fabric Community Conference Europe 2024.
Fabric Community
Conference Europe

September Copilot in Dataflow Copilot for Data Factory is now generally available
2024 Gen2 GA and included in the Dataflow Gen2 experience. For more
information, see Copilot for Data Factory overview.

September Fast Copy in Dataflow The Fast copy feature in Dataflows Gen2 is now generally
2024 Gen2 GA available. For more information, read Announcing the
General Availability of Fast Copy in Dataflows Gen2 .

September 2024 Incremental refresh for Dataflow Gen2 (preview) Incremental refresh in Dataflow Gen2 (Preview) is designed to optimize data ingestion and transformation, particularly as your data continues to expand. For more information, see Announcing Preview: Incremental Refresh in Dataflow Gen2.

September 2024 Certified connector updates Updated Dataflow Gen2 connectors in Microsoft Fabric have been released, as well as updated Data pipeline connectors for Salesforce and Vertica. For more information, see the Certified connector updates.

September 2024 Fabric Pipeline Integration in On-premises Data Gateway GA On-premises connectivity for Data pipelines in Microsoft Fabric is now generally available. Learn How to access on-premises data sources in Data Factory for Microsoft Fabric.

September Invoke remote pipeline You can now use the Invoke Pipeline (preview) activity
2024 (preview) in Data to call pipelines from Azure Data Factory or Synapse
pipeline Analytics pipelines . This feature allows you to utilize
your existing ADF or Synapse pipelines inside of a Fabric
pipeline by calling it inline through this new Invoke
Pipeline activity.

September Spark Job environment You can now reuse existing Spark sessions with Session
2024 parameters tags . In the Fabric Spark Notebook activity, tag your
Spark session, then reuse the existing session using that
same tag.

September 2024 Azure Data Factory item in Fabric (preview) You can now bring your existing Azure Data Factory (ADF) to your Fabric workspace. This new preview capability allows you to connect to your existing Azure Data Factory from your Fabric workspace. Select "Create Azure Data Factory" inside of your Fabric Data Factory workspace, and you can manage your Azure data factories directly from the Fabric workspace.

September Copy job (preview) The Copy job (preview) has advantages over the legacy
2024 Copy activity. For more information, see Announcing
Preview: Copy Job in Microsoft Fabric . For a tutorial,
see Learn how to create a Copy job (preview) in Data
Factory for Microsoft Fabric.

September 2024 Lakehouse Connector in Fabric Data Factory introduces Schema Support Fabric Lakehouse supports the creation of custom schemas. When reading from a Lakehouse table with the Lakehouse Connector in Fabric Data Factory, custom schema information is now automatically included.

September Storage Integration You can now connect Snowflake with external storage
2024 Support in Snowflake solutions (such as Azure Blob Storage) using a secure
Connector for Fabric and centralized approach. For more information, see
Data Factory Snowflake SQL storage integration .

September New Data Factory New Data Factory Connectors include Salesforce, Azure
2024 Connectors Released in MySQL Database, and Azure Cosmos DB for
Q3 2024 MongoDB .

For older updates, review the Microsoft Fabric What's New archive.

Data Factory in Microsoft Fabric samples and guidance


Month Feature Learn more

January Enhancing data Read this blog for a guide on using Copilot for Data
2025 quality with Copilot Factory to clean and transform data .
for Data Factory

November Boosting Data Here's a closer look at how recent advancements are
2024 Ingestion in Data transforming data ingestion in Data Factory .
Factory: Continuous
Innovations in
Performance
Optimization

November 2024 Copy Job upsert to SQL & overwrite to Fabric Lakehouse The Copy Job simplifies your data ingestion with an uncompromising experience from any source to any destination. By default, Copy Job appends data to your destination so that you never miss any change history. However, you can also customize the write behavior to upsert data on Azure SQL Database or SQL Server and overwrite data on Fabric Lakehouse tables, giving you full flexibility to match your needs.

September Integrate your SAP Learn more about an overview of SAP data options in
2024 data into Microsoft Microsoft Fabric , along with some guidance on the
Fabric respective use cases.

Fabric Data Engineering


This section summarizes recent new features and capabilities of the Data Engineering
workload in Microsoft Fabric.


Month Feature Learn more

January 2025 Simplified enablement and transition to Runtime 1.3 from Runtime 1.2 We recommend upgrading to Runtime 1.3 to maintain support, as native acceleration will soon be unavailable on Runtime 1.2. Now, activating the Native Execution Engine on Runtime 1.3 is as easy as a switch. You'll find the new toggle button in the Acceleration tab within your environment settings.

January Notebook and Spark You can now run a Notebook/Spark Job Definition
2025 Job definition execution under the credentials of a service principal .
execution with service Use the Fabric Job Scheduler API with a service principal's
principal access token, to run the Spark Job within the security
context of that service principal.
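A rough sketch of that pattern in Python: acquire a token for the service principal with azure-identity, then call the job scheduler endpoint to start the notebook job. The tenant, client, workspace, and item IDs, and the jobType value, are placeholders to confirm against the Job Scheduler API documentation.

```python
# Minimal sketch: run a notebook job under a service principal's identity.
# All IDs and secrets are placeholders; the jobType string should match the Job Scheduler API docs.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-guid>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

workspace_id = "<workspace-guid>"
notebook_id = "<notebook-item-guid>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/items/{notebook_id}/jobs/instances?jobType=RunNotebook",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
# The job instance URL for polling status is typically returned in the Location header.
print(resp.status_code, resp.headers.get("Location"))
```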

January Building Apps with Microsoft Fabric has an API for GraphQL to build your
2025 Microsoft Fabric API data applications, enabling you to pull data from sources
for GraphQL such as Data Warehouses, Lakehouse, Mirrored
Databases, and DataMart in Microsoft Fabric.
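For instance, once an API for GraphQL item exists, any Python client can post queries to its endpoint; the endpoint URL (copied from the GraphQL item), the token, and the schema fields below are placeholders.

```python
# Minimal sketch: query a Fabric API for GraphQL endpoint from Python.
# The endpoint, token, and query fields are placeholders for your own GraphQL item and schema.
import requests

endpoint = "<graphql-endpoint-copied-from-the-API-for-GraphQL-item>"
token = "<access-token>"

query = """
query {
  customers(first: 5) {
    items { customerId name }
  }
}
"""

resp = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {token}"},
    json={"query": query},
)
resp.raise_for_status()
print(resp.json()["data"])
```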

January Efficient log For a tutorial and walkthrough of efficient log files
2025 management with collection processing and analysis with Real-Time
Microsoft Fabric Intelligence, read this new blog post on Efficient log
management with Microsoft Fabric .

January Folder security within a Now you can define security on any subfolder within the
2025 shortcut in OneLake shortcut root. For more information and an example, see
Define security on folders within a shortcut using
OneLake data access roles .

December REST API for Livy The Fabric Livy endpoint lets users submit and execute
2024 (preview) their Spark code on the Spark compute within a
designated Fabric workspace, eliminating the need to
create Notebook or Spark Job Definition items. The Livy
API offers the ability to customize the execution
environment through its integration with the
Environment .
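A rough sketch of the session flow from Python follows; the Livy endpoint URL is copied from the Lakehouse settings and is a placeholder here, while the request payloads follow the standard Livy protocol.

```python
# Minimal sketch: submit PySpark code through the Fabric Livy endpoint (preview).
# LIVY_ENDPOINT is a placeholder copied from the Lakehouse settings; the token is a Microsoft Entra token.
import time
import requests

LIVY_ENDPOINT = "<livy-endpoint-copied-from-lakehouse-settings>"
headers = {"Authorization": "Bearer <access-token>"}

# 1. Create a Spark session.
session = requests.post(f"{LIVY_ENDPOINT}/sessions", headers=headers, json={"kind": "pyspark"}).json()
session_id = session["id"]

# 2. Wait for the session to become idle, then run a statement.
while requests.get(f"{LIVY_ENDPOINT}/sessions/{session_id}", headers=headers).json()["state"] != "idle":
    time.sleep(10)

statement = requests.post(
    f"{LIVY_ENDPOINT}/sessions/{session_id}/statements",
    headers=headers,
    json={"code": "print(spark.range(10).count())"},
).json()
print(statement)
```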

December Notebook version Fabric notebook version history provides robust built-
2024 history (preview) in version control capabilities, including automatic and
manual checkpoints, tracked changes, version
comparisons, and previous version restore. For more
information, see Notebook version history.

December Python Notebook Python Notebooks are for BI Developers and Data
2024 (preview) Scientists working with smaller datasets using Python as
their primary language. To get started, see Use Python
experience on Notebook.

November 2024 Workspace monitoring (preview) Workspace monitoring is a Microsoft Fabric database that collects data from a range of Fabric items in your workspace, and lets users access and analyze logs and metrics. For more about this feature, see Announcing preview of workspace monitoring.

November The new OneLake The OneLake catalog is the next evolution of the
2024 catalog OneLake data hub . For more information about the
new catalog, see Discover and explore Fabric items in the
OneLake catalog.

November OneLake external data OneLake external data sharing, now generally available,
2024 sharing (GA) makes it possible for Fabric users to share data from
within their Fabric tenant with users in another Fabric
tenant.

November Purview Data Loss Restricting access based on sensitive content for
2024 Prevention policies semantic models, now in preview, helps you to
now support the automatically detect sensitive information as it is
restrict access action uploaded into Fabric lakehouses and semantic models .
for semantic models

November Iceberg data in You can now consume Iceberg-formatted data across
2024 OneLake using Microsoft Fabric with no data movement or
Snowflake and duplication , plus Snowflake has added the ability to
shortcuts (preview) write Iceberg tables directly to OneLake. For more
information, see Use Iceberg tables with OneLake.

November Notebook display The new and improved chart view brings multiple new
2024 chart upgrade capabilities to the notebook display. To access the new
chart view just open your Fabric notebook and run the
display(df) statement.
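For example, a notebook cell along these lines (with made-up sample data) opens the upgraded chart view:

```python
# Minimal sketch: render the upgraded chart view for a Spark DataFrame in a Fabric notebook.
# `spark` and `display` are provided by the notebook session; the sample data is made up.
df = spark.createDataFrame(
    [("North", 120), ("South", 95), ("West", 210)],
    ["region", "sales"],
)

display(df)  # opens the table/chart view, where the new chart options appear
```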

November 2024 Mirrored databases in Spark Notebooks Mirrored databases in Spark Notebooks allow you to seamlessly explore and run read-only queries on your open-format tables just like Lakehouses, all while taking full advantage of our advanced analytics engines, without the need to migrate any of your data into Fabric.

November Jar libraries Java Archive (JAR) files are a popular packaging format
2024 used in the Java ecosystem, and are now supported in
Fabric Environments.

November Legacy Timestamp The recent update to Native Execution Engine on Fabric
2024 Support in Native Runtime 1.3 brings support for legacy timestamp
Execution Engine for handling, allowing seamless processing of timestamp
Fabric Runtime 1.3 data created by different Spark versions. Read to learn
why legacy timestamp support matters .

October 2024 Native Execution Engine available at no additional cost The Native Execution Engine is now available at no additional cost. The Native Execution Engine now supports Fabric Runtime 1.3, which includes Apache Spark 3.5 and Delta Lake 3.2. This upgrade enhances Microsoft Fabric's Data Engineering and Data Science workflows, offering boosts in performance and flexibility.

October Use OneLake shortcuts Learn how OneLake capacity consumption works when
2024 to access data across accessing data through a shortcut, particularly across
capacities: Even when capacities .
the producing capacity
is paused

October Purview Data Loss Extending Microsoft Purview's Data Loss Prevention (DLP)
2024 Prevention policies policies into Fabric lakehouses is now in preview.
have been extended to
Fabric lakehouses

October API for GraphQL Service Principal Names (SPN) support for API for
2024 support for Service GraphQL lets organizations looking to integrate their
Principal Names apps with API for GraphQL in Microsoft Fabric tie in
(SPNs) seamlessly with their enterprise identity and access
management systems. For more information, see Service
Principal Names (SPNs) in Fabric API for GraphQL .

October 2024 Automatic code generation in API for GraphQL Fabric API for GraphQL now adds the ability to automatically generate Python and Node.js code based on GraphQL queries tested in the API Explorer.

October Notebook Git Notebook Git integration now supports persisting the
2024 integration GA mapping relationship of the attached Environment when
syncing to new workspace. For more information, see
Notebook source control and deployment

October Notebook in Now you can also use notebooks to deploy your code
2024 deployment pipeline across different environments , such as development,
GA test, and production. You can also use deployment rules
to customize the behavior of your notebooks when
they're deployed, such as changing the default
Lakehouse of a Notebook. Get started with deployment
pipelines, and Notebook shows up in the deployment
content automatically.

October 2024 Notebook in Org App The Notebook feature is now supported in Org App. You can easily embed Notebook code and markdown cells, visuals, tables, charts, and widgets in an Org App, as a practical storytelling tool.

October Notebook onboarding The new Fabric Notebook Onboarding Tour is now
2024 tour available. This guided tour is designed to help you get
started with the essential Notebook features and learn
the new capabilities.

October Notebook mode The Notebook mode switcher provides flexible access
2024 switcher modes (Develop, Run Only, Edit, View) for your
notebooks, which can help you easily manage the
permissions to the notebook and the corresponding
view.

October Free selection support The free selection function on the rich dataframe preview
2024 on display() table view in the notebook can improve the data analysis
experience. To see the new features, read Free selection
support on display() table view .

October Filter, sort and search Sorting, Filtering, and Searching capabilities make data
2024 your Lakehouse exploration and analysis more efficient by allowing you
objects to quickly retrieve the information you need based on
specific criteria, right within the Lakehouse environment.

September Fabric Runtime 1.3 GA Fabric Runtime 1.3 (GA), now generally available, includes
2024 Apache Spark 3.5, Delta Lake 3.1, R 4.4.1, Python 3.11,
support for Starter Pools, integration with Environment,
and library management capabilities. For more
information, see Fabric Runtime 1.3 is Generally
Available! .

September Native Execution Native execution engine for Fabric Spark for Fabric
2024 Engine on Runtime 1.3 Runtime 1.3 is now available in preview, offering superior
(preview) query performance across data processing, ETL, data
science, and interactive queries. No code changes are
required to speed up the execution of your Apache Spark
jobs when using the Native Execution Engine .

September 2024 High concurrency mode for Notebooks in Pipelines (preview) High concurrency mode for Notebooks in Pipelines enables users to share Spark sessions across multiple notebooks within a pipeline. With high concurrency mode, users can trigger pipeline jobs, and these jobs are automatically packed into existing high concurrency sessions.

September 2024 Reserve maximum cores for jobs (preview) A new workspace-level setting allows you to reserve maximum cores for your active jobs for Spark workloads. For more information, see High concurrency mode in Apache Spark for Fabric.

September Session Expiry Control A new session expiry control in Data


2024 in Workspace Settings Engineering/Science workspace settings allows you to set
for Notebook the maximum expiration time limit for notebook
Interactive Runs interactive sessions. By default, sessions expire after 20
(preview) minutes, but you can now customize the maximum
expiration duration.

September Fabric Spark The Fabric Apache Spark Diagnostic Emitter (preview)
2024 Diagnostic Emitter allows Apache Spark users to collect logs, event logs, and
(preview) metrics from their Spark applications and send them to
various destinations, including Azure Event Hubs, Azure
Storage, and Azure Log Analytics.

September Environment You can now create, configure, and use an environment
2024 integration with in Fabric in VS Code with the Synapse VS Code extension.
Synapse VS Code
extension

September Notebook debug You can now place breakpoints and debug your
2024 within vscode.dev Notebook code with the Synapse VS Code - Remote
(preview) extension in vscode.dev . This update first starts with
the Fabric Runtime 1.3.

September Invoke Fabric User You can now invoke User Defined Functions (UDFs) in
2024 Data Functions in your PySpark code directly from Microsoft Fabric
Notebook Notebooks or Spark jobs. With NotebookUtils
integration, invoking UDFs is as simple as writing a few
lines of code .

September Functions Hub The new Functions Hub provides a single location to
2024 view, access, and manage your User Data Functions .

September Support for spaces in You can now create and query Delta tables with spaces in
2024 Lakehouse Delta table their names , such as "Sales by Region" or "Customer
names Feedback". All Fabric Runtimes and Spark authoring
experiences support table names with spaces.
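For example, from a Fabric Spark notebook, such names only need backtick quoting in Spark SQL:

```python
# Minimal sketch: create and query a Lakehouse Delta table whose name contains spaces.
# Assumes a Fabric Spark notebook with a default lakehouse attached; the data is made up.
spark.sql("CREATE TABLE IF NOT EXISTS `Sales by Region` (region STRING, amount DOUBLE)")
spark.sql("INSERT INTO `Sales by Region` VALUES ('North', 120.0), ('South', 95.5)")

spark.sql("SELECT region, SUM(amount) AS total FROM `Sales by Region` GROUP BY region").show()
```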

September 2024 Enable/Disable Functionality in API for GraphQL The Enable/Disable feature for queries and mutations in GraphQL API provides administrators and developers with granular control over API access and usage.

September Public REST API of Livy The Fabric Livy endpoint lets users submit and execute
2024 endpoint their Spark code on the Spark compute within a
designated Fabric workspace, eliminating the need to
create any Notebook or Spark Job Definition.

September OneLake SAS (preview) Support for OneLake SAS is now in preview . This
2024 functionality allows applications to request a User
Delegation Key backed by Microsoft Entra ID, and then
use this key to construct a short-lived, user-delegated
OneLake SAS token. This token can be handed off to
provide delegated access to another tool, node, or user,
ensuring secure and controlled access.
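As a rough illustration of that flow with the Azure Storage DataLake SDK (whose clients OneLake's DFS endpoint is designed to work with), the sketch below requests a user delegation key and builds a short-lived SAS for a folder. The workspace and lakehouse names, the permission, and the exact options OneLake SAS supports are placeholders and assumptions to verify against the OneLake SAS documentation.

```python
# Minimal sketch: build a short-lived, user-delegated OneLake SAS (preview).
# Workspace and item names are placeholders; OneLake SAS options may differ from ADLS Gen2.
from datetime import datetime, timedelta, timezone
from azure.identity import InteractiveBrowserCredential
from azure.storage.filedatalake import DataLakeServiceClient, generate_directory_sas

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=InteractiveBrowserCredential(),
)

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)
delegation_key = service.get_user_delegation_key(start, expiry)

sas_token = generate_directory_sas(
    account_name="onelake",
    file_system_name="<workspace-name>",
    directory_name="<lakehouse-name>.Lakehouse/Files/shared",
    credential=delegation_key,
    permission="r",
    expiry=expiry,
)
print(sas_token)  # hand this token to another tool or user for delegated, time-limited access
```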

September 2024 Access Databricks Unity Catalog tables from Fabric (preview) A mirrored Azure Databricks Unity Catalog in Fabric allows you to read data managed by Unity Catalog from Fabric workloads from the Lakehouse. In Fabric, you can now create a new data item called "Mirrored Azure Databricks Catalog". For more information, see Databricks Unity Catalog tables available in Microsoft Fabric.

September T-SQL support in The T-SQL notebook feature in Microsoft Fabric lets
2024 Fabric notebooks you write and run T-SQL code within a notebook. You can
use them to manage complex queries and write better
markdown documentation. It also allows direct execution
of T-SQL on connected warehouse or SQL analytics
endpoint. To learn more, see T-SQL support in Microsoft
Fabric notebooks.

September OneLake shortcuts to Now a generally available feature, Create a Google Cloud
2024 Google Cloud Storage Storage (GCS) shortcut to connect to your existing data
through a single unified name space without having to
copy or move data.

September 2024 OneLake shortcuts to S3-compatible data sources Now a generally available feature, create an S3-compatible shortcut to connect to your existing data through a single unified namespace without having to copy or move data.

For older updates, review the Microsoft Fabric What's New archive.

Fabric Data Engineering samples and guidance


Month Feature Learn more

January Create a shortcut to a Follow this guide to create a OneLake shortcut to a VPC-
2025 VPC-protected Google protected Google Cloud Storage (GCS) bucket .
Cloud Storage bucket

January Best practices for The Microsoft Fabric API for GraphQL is a handy service
2025 Fabric API for GraphQL that quickly allows you to set up a GraphQL API to pull
data from places like warehouses, the lakehouse, and
mirrored databases. Learn best practices when building
applications using Fabric API for GraphQL .

December 2024 Troubleshooting Fabric Spark application without production workspace access You have Fabric Spark Notebooks deployed in a production workspace, but you don't have direct access to it. The production support team reports that a Fabric Spark job has failed in the production workspace, and you need to analyze the logs to troubleshoot the issue. To troubleshoot Spark applications, Spark engineers typically use the Spark UI, which provides details of Jobs, Stages, Storage, Environment, Executors, and SQL.

October Optimizing Spark Learn how to optimize Spark Compute for Medallion
2024 Compute for architecture : a popular data engineering approach that
Medallion emphasizes modularity. It organizes the data platform into
Architectures in three distinct layers: Bronze, Silver, and Gold.
Microsoft Fabric

Fabric Data Science


This section summarizes recent improvements and features for Data Science in
Microsoft Fabric.

Month Feature Learn more

January Building Apps with Microsoft Fabric has an API for GraphQL to build your
2025 Microsoft Fabric API data applications, enabling you to pull data from sources
for GraphQL such as Data Warehouses, Lakehouse, Mirrored Databases,
and DataMart in Microsoft Fabric.

December Notebook version Fabric notebook version history provides robust built-in
2024 history version control capabilities, including automatic and
manual checkpoints, tracked changes, version
comparisons, and previous version restore. For more
information, see Notebook version history.

December Python Notebook Python Notebooks are for BI Developers and Data
2024 (preview) Scientists working with smaller datasets using Python as
their primary language. To get started, see Use Python
experience on Notebook.

November Low code AutoML AutoML, or Automated Machine Learning, is a process that
2024 user experience in automates the time-consuming and complex tasks of
Fabric (preview) developing machine learning models. The new low code
AutoML experience supports a variety of tasks, including
regression, forecasting, classification, and multi-class
classification. To get started, Create models with
Automated ML (preview).

October 2024 Enhancing Open Source: Fabric's Contributions to FLAML for Scalable AutoML We have focused on enhancing FLAML's capabilities for Spark workloads. We've contributed several new Spark and non-Spark estimators to the FLAML project. Try these out with AutoML in Fabric (preview).

September Data Wrangler for Data Wrangler is now generally available. A notebook-
2024 Spark DataFrames GA based tool for exploratory data analysis, Data Wrangler
works for both pandas DataFrames and Spark
DataFrames and arrives at general availability with new
usability improvements .

September Share Feature for "Share" capability for the Fabric AI skill (preview) allows
2024 Fabric AI skill you to share the AI Skill with others using a variety of
(preview) permission models.

September 2024 Session Expiry Control in Workspace Settings for Notebook Interactive Runs (preview) A new session expiry control in Data Engineering/Science workspace settings allows you to set the maximum expiration time limit for notebook interactive sessions. By default, sessions expire after 20 minutes, but you can now customize the maximum expiration duration.

September File editor in The file editor feature in Fabric Notebook allows users
2024 Notebook to view and edit files directly within the notebook's
resource folder and environment resource folder in
notebook. Supported file types include CSV, TXT, HTML,
YML, PY, SQL, and more.

For older updates, review the Microsoft Fabric What's New archive.

Fabric Data Science samples and guidance


Month Feature Learn more

September Using Microsoft Fabric for This tutorial includes three main notebooks, each
2024 Generative AI: A Guide to covering a crucial aspect of building and optimizing
Building and Improving RAG systems in Microsoft Fabric .
RAG Systems

September Harness Microsoft Fabric This post demonstrates how you can extend the
2024 AI Skill to Unlock capabilities of Fabric AI Skill in Microsoft Fabric
Context-Rich Insights notebooks to deliver richer and more
from Your Data comprehensive responses using additional Large
Language Model (LLM) queries.

Fabric Databases
This section summarizes recent improvements and features for Microsoft Fabric
Databases.


Month Feature Learn more

January SQL database You can use tenant level private links to provide secure access
2025 support for for data traffic in Microsoft Fabric, including SQL database (in
tenant level preview). For more information, see Set up and use private links
private links and Blog: Tenant Level Private Link (Preview) .
(preview)

January 2025 SQL databases billing begins After February 1, 2025, compute and data storage for SQL database are charged to your Fabric capacity. Additionally, backup billing will start after April 1, 2025. For more information, see Activation of billing for SQL database in Fabric.

January Ask the Experts – Join us for a live Q&A session on the new Fabric Databases
2025 Fabric Databases experience ! Our product engineering team will answer your
– Livestream top questions in real time.
January 29!

December Copilot for SQL Learn more about the Copilot integration for Query Editor.
2024 database Copilot for SQL database in Fabric is an AI-powered assistant
designed to support you regardless of your SQL expertise or
role.

November 2024 New connectors for Fabric SQL database In the Data Factory, both data pipeline and Dataflow Gen2 now natively support the Fabric SQL database connector (Preview) as source and destination. For more information, see Fabric SQL Database Connector.

November 2024 Fabric SQL database (Preview) SQL database in Microsoft Fabric is a developer-friendly transactional database, based on Azure SQL Database, that allows you to easily create your operational database in Fabric. A SQL database in Fabric uses the same SQL Database Engine as Azure SQL Database. Review a Decision Guide for SQL databases. For more on this announcement, read the SQL database in Fabric announcement blog.

Fabric Database samples and guidance


Month Feature Learn more

February Govern your data in Microsoft Purview's protection policies help you safeguard
2025 SQL database in sensitive data in Microsoft Fabric items, including SQL
Microsoft Fabric with databases. Learn how Purview policies override Microsoft
protection policies in Fabric item permissions for users, apps, and groups,
Microsoft Purview limiting their actions within the database .

February ICYMI: Ask the Expert Here's a few great questions and answers about SQL
2025 – Fabric Databases database in Fabric from a recent Ask the Expert session
on Microsoft Reactor.

January 2025 Manage access in SQL database with SQL native authorization controls A follow-up to Learn how to manage Microsoft Fabric access controls in SQL database, learn how to manage access for your SQL database with SQL native access controls. SQL database in Microsoft Fabric supports two different sets of authorization controls: Microsoft Fabric access controls and SQL native access controls.

January Monitor SQL Database Learn how the capacity metrics app can be used for
2025 usage and monitoring usage and consumption of SQL databases in
consumption by using Fabric .
capacity metrics app

January 2025 Performance Dashboard tutorial For a complete walkthrough of performance troubleshooting in SQL database in Microsoft Fabric with the Performance Dashboard, see Speed up your SQL databases with the Performance Dashboard.

January Manage access for SQL database (preview) supports two different sets of
2025 SQL databases in controls that allow you to manage access for your
Microsoft Fabric with databases: Microsoft Fabric access controls and SQL native
workspace roles and access controls. Learn how to manage Microsoft Fabric
item permissions access controls in SQL database .

December Source control SQL database in Fabric has a tightly integrated and fully
2024 integration for SQL extensible DevOps feature set, including a source control
database integration for GitHub and Azure DevOps. Learn how to
use the Fabric web-based development environment with
the git repository directly through a streamlined source
control panel.

December Tour the Query Editor Whether you're a seasoned data professional or a
2024 in SQL database in developer new to SQL, the query editor offers features
Microsoft Fabric that cater to all skill levels . For more information, see
Query with the SQL query editor.

November Building a Smart Imagine you're the founder of Contoso, a rapidly growing
2024 Chatbot with SQL e-commerce startup. As your online store grows, you
Database in Microsoft realize that many customer inquiries are about basic
Fabric, LangChain and product information: price, availability, and specific
Chainlit features. To automate these routine questions, you decide
to build a chatbot with SQL Database in Microsoft Fabric,
LangChain, and Chainlit .

November Learning pathways for For those curious about where to learn more and how to
2024 SQL database try out this new offering, read more about the upcoming
episodes of SQL database in Microsoft Fabric: Learn
Together .

November Data Exposed: Watch a Data Exposed video introducing the SQL
2024 Announcing SQL database in Microsoft Fabric public preview.
database in Microsoft
Fabric preview

November 2024 Guided application tutorial in Fabric SQL database The tutorial provides a comprehensive guide to utilizing the SQL database in Fabric. This tutorial is tailored to help you navigate through the process of database creation, setting up database objects, exploring autonomous features, and combining and visualizing data. Additionally, learn how to create a GraphQL endpoint, which serves as a modern approach to connecting and querying your data efficiently.

November Get started with Guided how-to documents on how to do basic tasks in
2024 Fabric SQL database SQL database in Fabric start with Enable SQL database in
Fabric using Admin Portal tenant settings.

Fabric Data Warehouse


This section summarizes recent improvements and features for Fabric Data Warehouse.


Month Feature Learn more

February 2025 OPENROWSET and BULK INSERT support (preview) The T-SQL OPENROWSET(BULK) function is now available in Fabric warehouse as a preview feature. For more information and examples, see Browse file content using OPENROWSET function (Preview). For more information, see BULK INSERT statement in Fabric Data Warehouse and Fabric OPENROWSET preview.

February Open Mirroring for With open mirroring, data replication is an extensible
2025 SAP sources – dab platform that partners and customers can use to plug in
and Simplement their own data integration capabilities. Once data is brought
in through open mirroring, it can be used in all Fabric
workloads. Two partners have now taken the next step in
integrating with open mirroring , and are ready to
onboard customers.

January Source schema Mirroring in Fabric now supports replicating the source
2025 support in Mirroring schema hierarchy. For more information, see Mirroring now
in Fabric supports replicating source schemas .

January 2025 Delta column mapping support for Mirroring Mirroring in Fabric now supports Delta column mapping. Column mapping is a feature of Delta tables that allows users to include spaces and special characters such as ,;{}()\n\t=. in column names. For more information, see Delta column mapping support.

January Mirroring CI/CD Mirroring now supports CI/CD as a preview feature. You can
2025 (preview) integrate Git for source control and utilize ALM Deployment
Pipelines, streamlining the deployment process and
ensuring seamless updates to mirrored databases.

January COPY INTO The COPY INTO command now allows you to control the behavior of
2025 operations with row your data ingestion jobs by checking if the count of
count check columns in the source data matches the count of columns
on your target table, with the MATCH_COLUMN_COUNT
argument.

January Default schema in a You can now change the default schema for users in Fabric
2025 warehouse Data Warehouse , using the ALTER USER statement,
ensuring that every user has a predefined schema context
when they connect to the database.

January 2025 Data Insights now includes Data Scanned and CPU time analytics New columns are available in the queryinsights.exec_requests_history system view to determine if large data scans are contributing to slower query execution. For more information, see Query insights in Fabric data warehousing.

January 2025 JSON Aggregate support (preview) Fabric warehouses now support JSON aggregate functions in preview, JSON_ARRAYAGG and JSON_OBJECTAGG.

January 2025 SET SHOWPLAN_XML support The SET SHOWPLAN_XML T-SQL syntax is now supported as a preview feature in Fabric Data Warehouse and SQL analytics endpoint.

January Service principal Service principal (SPN) support for Fabric warehouse items
2025 support allows developers and administrators to automate
processes, streamline operations, and increase security for
their data workflows. For more information, see Service
principal support for Fabric Data Warehouse .

January Warehouse restore You can now create restore points and perform an in-place
2025 points and restore in restore of a warehouse to a past point in time. Restore in-
place place is an essential part of data warehouse recovery ,
which allows you to restore the data warehouse to a prior
known reliable state by replacing or overwriting the
existing data warehouse from which the restore point was
created.

January COPY INTO Enhancements to the COPY INTO T-SQL command in Fabric
2025 operations with Data Warehouse introduce granular SQL controls. For more
granular information, see Enhancing COPY INTO operations with
permissions Granular Permissions .

December What's new in the There are several updates to improve both functionality and
2024 Fabric SQL analytics user experience with the SQL analytics endpoint ,
endpoint? including metadata sync, last successful update, improved
error propagation, and more.

November Open mirroring Open mirroring enables any application to write change
2024 (Preview) data directly into a mirrored database in Fabric, based on
the open mirroring public APIs and approach. Open
mirroring is designed to be extensible, customizable, and
open. It's a powerful feature that extends mirroring in Fabric
based on open Delta Lake table format. To get started, see
Tutorial: Configure Microsoft Fabric open mirrored
databases.

November Data Warehouse: Learn how the Copilot tools for Fabric Data Warehouse
2024 Copilot & AI Skill differ , when to use each, and how they can work together
to maximize productivity and deliver insights with Fabric
Warehouse.

November Fabric Mirroring for Fabric Database mirroring is now able to mirror Azure SQL
2024 Azure SQL Managed Managed Instance databases.
Instance (Preview)

November Mirroring for Azure With Azure SQL Database mirroring in Fabric, you can easily
2024 SQL Database GA bring your database into OneLake in Microsoft Fabric.

October Case insensitive By default, the collation of a warehouse is case sensitive (CS)
2024 collation support with 'Latin1_General_100_BIN2_UTF8'. You can now Create a
warehouse with case-insensitive (CI) collation.

October varchar(max) and Support for the varchar(max) and varbinary(max) data
2024 varbinary(max) types in Warehouse is now in preview. For more
support in preview information, see Announcing public preview of
VARCHAR(MAX) and VARBINARY(MAX) types in Fabric Data
Warehouse .

October 2024 Concurrency performance improvements We have recently optimized our task scheduling algorithm in our distributed query processing engine (DQP) to reduce contention when the workspace is under moderate to heavy concurrency. In testing, we have observed that this optimization makes significant performance improvements in querying workloads.

October 2024 JSON support enhancements JSON functionalities in warehouse and SQL analytics endpoints for Lakehouse and mirrored databases have been improved. For details, see JSON support enhancements.

October 2024 Nested Common Table Expressions (CTEs) (preview) Fabric Warehouse and SQL analytics endpoint both support standard, sequential, and nested CTEs. While CTEs are generally available in Microsoft Fabric, nested common table expressions (CTE) in Fabric data warehousing (Transact-SQL) are currently a preview feature.

September Mirroring for With Mirroring for Snowflake in Fabric, you can easily bring
2024 Snowflake GA your Snowflake data into OneLake in Microsoft Fabric . For
more information, see Mirroring Snowflake.

September Copilot for Data Copilot for Data Warehouse (preview) is now updated and
2024 Warehouse available as a preview feature , offering the Copilot chat
pane, quick actions, and code completions.

September Delta column SQL analytics endpoint now supports Delta tables with
2024 mapping in the SQL column mapping enabled . For more information, see
analytics endpoint Delta column mapping and Limitations of the SQL
analytics endpoint. This feature is currently in preview.

September 2024 Lakehouse schemas in SQL analytics endpoint Lakehouse schemas allow delta tables in schemas to be queried in the SQL analytics endpoint. For more information, see Lakehouse schemas feature (preview).

September Fabric Spark The Fabric Spark connector for Fabric Data Warehouse
2024 connector for Fabric (preview) now supports custom or pass-through queries,
Data Warehouse PySpark, and Fabric Runtime 1.3 (Spark 3.5) .
new features
(preview)

September New editor Editor improvements for Warehouse and SQL analytics
2024 improvements endpoint items improve the consistency and efficiency. For
more information, see New editor improvements .

September T-SQL support in The T-SQL notebook feature in Microsoft Fabric (preview)
2024 Fabric notebooks lets you write and run T-SQL code within a notebook. You
(preview) can use them to manage complex queries and write better
markdown documentation. It also allows direct execution of
T-SQL on connected warehouse or SQL analytics endpoint.
To learn more, see T-SQL support in Microsoft Fabric
notebooks.

September Nested Common Fabric Warehouse and SQL analytics endpoint both support
2024 Table Expressions standard, sequential, and nested CTEs . While CTEs are
(CTEs) (preview) generally available in Microsoft Fabric, nested common
table expressions (CTE) in warehouse are currently a preview
feature.

September Mirrored Azure A mirrored Azure Databricks Unity Catalog in Fabric allows
2024 Databricks (Preview) you to read data managed by Unity Catalog from Fabric
workloads from the Lakehouse. For more information, see
Month Feature Learn more

Databricks Unity Catalog tables available in Microsoft


Fabric .

For older updates, review the Microsoft Fabric What's New archive.

Fabric Data Warehouse samples and guidance


Month | Feature | Learn more
January 2025 | Best practices for Fabric API for GraphQL | The Microsoft Fabric API for GraphQL is a handy service that quickly allows you to set up a GraphQL API to pull data from places like warehouses, the lakehouse, and mirrored databases. Learn best practices when building applications using Fabric API for GraphQL.
December 2024 | Microsoft Fabric API for GraphQL™ for Azure Cosmos DB Mirroring | Learn how to integrate Azure Cosmos DB and the Microsoft Fabric API for GraphQL™ to build near real-time analytical applications. For more information on how to leverage Fabric API for GraphQL in your applications, see Connect applications to Fabric API for GraphQL.
November 2024 | SQL to Microsoft Fabric Migration: Beginner-Friendly Strategies for a Smooth Transition | Learn more about migrating your SQL database to Microsoft Fabric, a unified platform that brings your data and analytics together effortlessly.
October 2024 | Ensuring Data Continuity in Fabric Warehouse: Best Practices for Every Scenario | Dive deep into the common recovery scenarios and features that help enable seamless end-to-end data recovery and discuss best practices to ensure data resilience.

Real-Time Intelligence in Microsoft Fabric


This section summarizes recent improvements and features for Real-Time Intelligence in
Microsoft Fabric.

Month | Feature | Learn more
January 2025 | Real-Time Intelligence ALM and REST API GA | Application Lifecycle Management (ALM) and Fabric REST APIs are now generally available for all RTI items: Eventstream, Eventhouse, KQL Database, Real-Time Dashboard, Query set, and Data Activator. ALM includes both deployment pipelines and Git integration. REST APIs allow you to programmatically create / read / update / delete items.
December 2024 | Eventhouse Monitoring (preview) | Eventhouse monitoring in preview offers multiple events and metrics that are automatically routed and stored in Workspace Monitoring. For more information, see Manage and monitor an eventhouse.
December 2024 | All Real-Time Intelligence items supported for ALM and REST API | Fabric Real-Time Intelligence items (Eventstream, Eventhouse, KQL Database, Real-Time Dashboard, Query set, and Activator) support Application Lifecycle Management (ALM) and REST API capabilities.
December 2024 | Eventhouse Query Acceleration for OneLake Shortcuts (preview) | Query Acceleration for OneLake Shortcuts in Eventhouse speeds up ad hoc queries over data in OneLake. OneLake shortcuts are references from an Eventhouse that point to internal Fabric or external sources. Previously, queries run over OneLake shortcuts were less performant than on data that is ingested directly to Eventhouses due to various factors.
November 2024 | New event categories in Fabric Real-Time hub | New event categories in Real-Time hub include OneLake events, Job events, and Capacity utilization events. These new event categories are currently in preview. For more information, see Unlocking the power of Real-Time Data with OneLake Events.
November 2024 | Eventstream processing and routing events to Activator (preview) | Eventstream now supports processing and transforming events with business requirements before routing the events to the destination: Activator. When these transformed events reach Activator, you can establish rules or conditions for your alerts to monitor the events.
November 2024 | REST APIs for Eventstream | With the Eventstream REST API, you can now programmatically create, manage, and update Eventstream items. For more information, see Fabric REST APIs for Eventstream.
November 2024 | Real-Time Intelligence: now generally available | We're excited to announce that Real-Time Intelligence is now generally available (GA). This includes the Real-Time hub, enhanced Eventstream, Eventhouse, Real-Time Dashboards, and Activator. For more information, see What is Real-Time Intelligence?
November 2024 | Real-Time hub | Real-Time hub is now generally available. For more information, see Introduction to Fabric Real-Time hub.
November 2024 | Eventstream support for Azure Service Bus and Activator | Eventstreams now support an Azure Service Bus source (preview) and a Fabric Activator destination (preview). The following connectors are generally available now: PostgreSQL Database (DB) Change Data Capture (CDC), MySQL DB CDC, Cosmos DB CDC, Azure SQL DB CDC, Azure SQL Managed Instance DB CDC, SQL Server on virtual machine DB CDC, Google Pub/Sub, Amazon Kinesis Data Streams, Apache Kafka, Confluent Cloud Kafka, and Amazon Managed Streaming for Apache Kafka. Eventstreams also support Git integration and deployment pipelines.
October 2024 | Secure Data Streaming with Managed Private Endpoints in Eventstream (preview) | By creating a Fabric Managed Private Endpoint, you can securely connect Eventstream to your Azure services, such as Azure Event Hubs or IoT Hub, within a private network or behind a firewall. For more information, see Secure Data Streaming with Managed Private Endpoints in Eventstream (Preview).
October 2024 | Usage reporting for Activator is now live | The Activator team has rolled out usage reporting to help you better understand your capacity consumption and future charges. When you look at the Capacity Metrics app compute page, you'll now see operations for the reflex items included.
October 2024 | Real-Time Dashboards and underlying KQL databases access separation (preview) | With separate permissions for dashboards and underlying data, administrators now have the flexibility to allow users to view dashboards without giving access to the raw data.
October 2024 | Real-Time Dashboards integration with GitHub | Fabric's Git integration is now available for Real-Time Dashboards. For more information, see What is Microsoft Fabric Git integration?
October 2024 | Quickly visualize query results in KQL Queryset | You can now graphically visualize KQL Queryset results instantly and effortlessly and control the formatting without the need to re-run queries, all using a familiar UI.
October 2024 | Pin query to dashboard | You can now save the outcome of any query written in KQL Queryset directly to a new or existing Real-Time Dashboard.
September 2024 | Creating a real-time dashboard by Copilot | Copilot can review a table and automatically create a dashboard with insights and a profile of the data with a sample.
September 2024 | New Real-Time hub and KQL Database user experiences | The new user experience features new Real-Time hub navigation, a My Streams page, an enhanced database page experience, and more.
September 2024 | Eventhouse as a new destination in Eventstream | Eventhouses, equipped with KQL Databases, can handle and analyze large volumes of data. With the Eventhouse destination in Eventstream, you can efficiently process and route data streams into an Eventhouse and analyze the data in near real-time using KQL.
September 2024 | Managed private endpoints for Eventstream | With managed private endpoints for Fabric, you can now establish a private connection between your Azure services, such as Azure Event Hubs, and Fabric Eventstream. For more information, see Eventstream integration with managed private endpoint.
September 2024 | Activator alerts on KQL Querysets | Now you can set up Activator (preview) alerts directly on your KQL queries in KQL querysets. For more information and samples, see Create Activator alerts from a KQL Queryset.
September 2024 | Real-Time Dashboards continuous or 10s refresh rate | The dashboard auto refresh feature now supports continuous and 10-second refresh rates, in addition to the existing options. This upgrade, addressing a popular customer request, allows both editors and viewers to set near real-time and real-time data updates.
September 2024 | Multivariate anomaly detection | A new workflow for multivariate anomaly detection of time series data is based on the algorithm that is used in the AI Anomaly Detector service (which is being retired as a standalone service). For a tutorial, see Multivariate Anomaly Detection.
September 2024 | Real-Time Intelligence Copilot conversational mode | The Copilot assistant, which translates natural language into KQL, now supports conversational mode, allowing you to ask follow-up questions that build on previous queries within the chat.
September 2024 | New connectors and UI in Real-Time hub | Four new connectors released on September 24, 2024: Apache Kafka, Amazon Managed Streaming for Apache Kafka, Azure SQL Managed Instance CDC, and SQL Server on VM DB CDC. The tabs in the main page of Real-Time hub are replaced with menu items on the left navigation menu. For more information, see Get started with Fabric Real-Time hub. You can now connect to Azure streaming sources using private endpoints.
September 2024 | Announcement: Eventhouse Standard Storage billing | Starting the week of September 16, you will start seeing billable consumption of the OneLake Storage Data Stored meter from the Eventhouse and KQL Database items.

For older updates, review the Microsoft Fabric What's New archive.

Real-Time Intelligence samples and guidance


Month | Feature | Learn more
February 2025 | Template dashboards for Workspace Monitoring | Ready-to-use Power BI and real-time dashboard template reports are available for workspace monitoring in Microsoft Fabric.
January 2025 | Efficient log management with Microsoft Fabric | For a tutorial and walkthrough of efficient log files collection, processing, and analysis with Real-Time Intelligence, read this new blog post on Efficient log management with Microsoft Fabric.
December 2024 | Automate Real-Time Intelligence Eventstream deployment using PowerShell | Learn how to build a PowerShell script to automate the deployment of Eventstream with the definition of source, processing, and destination into a workspace in Microsoft Fabric.
December 2024 | Monitor Fabric Spark applications using Fabric Real-Time Intelligence | Learn how to build a centralized Spark monitoring solution, leveraging Fabric Real-Time Intelligence. To do this, integrate a Fabric Spark data emitter directly with Fabric Eventstream and Eventhouse to build a centralized Spark monitoring solution.
December 2024 | Easily recreate your ADX dashboards as Real-Time Dashboards | You can bring ADX dashboards into Microsoft Fabric without relocating your data. Learn how to create Real-Time Dashboards as copies of your ADX dashboards in Fabric.
December 2024 | Enhance fraud detection with Activator | Activator allows you to monitor events, detect certain conditions on your data, and act on them by sending alerts. Learn how to implement a system that sends Teams or email alerts when a transaction is flagged as potentially fraudulent.
December 2024 | Manual migration needed for Activator preview items | If you created an item while Activator was in preview, you'll need to manually migrate these items to GA to get access to all the new features.
December 2024 | Understanding Real-Time Intelligence usage reporting and billing | Learn about Real-Time Intelligence Eventstream, Eventhouse, storage, Fabric Events and Activator consumption utilization, capacity meters, and costs.

Microsoft Fabric platform features


News and feature announcements about the Microsoft Fabric platform experience.


Month | Feature | Learn more
February 2025 | Understanding GraphQL API error handling | Learn more about error handling in GraphQL and some best practices for managing errors effectively.
February 2025 | Billing for Workspace monitoring | Workspace Monitoring (preview) is an observability feature within Fabric that enables monitoring capabilities. For more information, see Announcing preview of workspace monitoring. Billing for this feature starts March 10, 2025.
January 2025 | Building Apps with Microsoft Fabric API for GraphQL | Microsoft Fabric has an API for GraphQL to build your data applications, enabling you to pull data from sources such as Data Warehouses, Lakehouse, Mirrored Databases, and DataMart in Microsoft Fabric.
January 2025 | Power BI Embedded with Direct Lake Mode (preview) | Power BI Embedded with Direct Lake Mode is designed to enhance how developers and Independent Software Vendors (ISVs) provide embedded analytics in their applications. For more information, see Introducing Power BI Embedded with Direct Lake Mode (Preview).
January 2025 | Fabric Copilot capacity: Democratizing AI usage in Microsoft Fabric | Fabric Copilot capacity is a new billing feature designed to enhance your experience with Microsoft Fabric. With Fabric Copilot capacities, capacity admins can give Copilot access to end users directly, rather than requiring creators to move their content into a specific workspace or link a specific capacity.
January 2025 | Efficient log management with Microsoft Fabric | For a tutorial and walkthrough of efficient log files collection, processing, and analysis with Real-Time Intelligence, read this new blog post on Efficient log management with Microsoft Fabric.
January 2025 | Surge protection (preview) | With surge protection (preview), capacity admins can set limits on background usage within a capacity. Learn more about surge protection to help protect capacities from excess usage by background workloads.
December 2024 | Microsoft Fabric approved as a Service within the FedRAMP High Authorization for Azure Commercial | Microsoft Fabric is now included within the US Federal Risk and Authorization Management Program (FedRAMP) High Authorization for Azure Commercial. This Provisional Authorization to Operate (P-ATO) within the existing FedRAMP High Azure Commercial environment was approved by the FedRAMP Joint Authorization Board (JAB).
December 2024 | Folder in Workspace | As an organizational unit, the workspace folder provides a hierarchical structure for organizing and managing your items. This feature is now generally available, and includes new filter features. For more information, see Create folders in workspaces.
November 2024 | Workspace monitoring (preview) | Workspace monitoring (preview) is a Microsoft Fabric database that collects data from a range of Fabric items in your workspace, and lets users access and analyze logs and metrics. For more about this feature, see Announcing preview of workspace monitoring.
November 2024 | OneLake external data sharing (GA) | External data sharing in Microsoft Fabric, now generally available, makes it possible for Fabric users to share data from within their Fabric tenant with users in another Fabric tenant.
November 2024 | GraphQL API in Microsoft Fabric GA | The API for GraphQL, now generally available, is a data access layer that allows us to query multiple data sources quickly and efficiently in Fabric. For more information, see What is Microsoft Fabric API for GraphQL?
November 2024 | The new OneLake catalog | The OneLake catalog is the next evolution of the OneLake data hub. For more information about the new catalog, see Discover and explore Fabric items in the OneLake catalog.
November 2024 | Fabric workload dev kit (GA) | The Microsoft Fabric workload development kit is now generally available. This robust developer toolkit is for designing, developing, and interoperating with Microsoft Fabric using frontend SDKs and backend REST APIs.
November 2024 | Domains in Fabric – new enhancements | Review new features and use cases for Domains in Fabric, including best practices for planning and creating domains in Microsoft Fabric.
October 2024 | New Item panel in Workspace | Previously, selecting +New in the workspace opened a dropdown menu with some pre-defined item types to get started. Now, the +New item button shows item types listed in a panel, categorized by tasks.
October 2024 | Enhanced Tenant Setting Delegation for Export Controls | Delegation of export settings is now available to workspaces via domain. This new capability provides more granular control over data export permissions, addressing the specific needs of tenant, domain, and workspace administrators.
October 2024 | APIs for Managed Private Endpoint are now available | REST APIs for managed private endpoints are available. You can now create, delete, get, and list managed private endpoints via APIs.
October 2024 | Important billing updates coming to Copilot and AI in Fabric | Upcoming pricing and billing updates will make Copilot and AI features in Fabric more accessible and cost-effective.
September 2024 | Terraform Provider for Fabric (preview) | The Terraform Provider for Microsoft Fabric is now in preview. The Terraform Provider for Microsoft Fabric supports the creation and management of many Fabric resources. For more information, see Announcing the new Terraform Provider for Microsoft Fabric.
September 2024 | Announcing Service Principal support for Fabric APIs | You can now use a service principal to access Fabric APIs. A service principal is a security identity that you can create in Microsoft Entra and assign permissions to in Microsoft Entra and other Microsoft services, such as Microsoft Fabric.
September 2024 | Tag your data to enrich item curation and discovery | Tags (preview) help admins categorize and organize data, enhancing the searchability of your data and boosting success rates and efficiency for end users.
September 2024 | Trusted workspace access and Managed private endpoints in any Fabric capacity | Trusted workspace access and managed private endpoints are available in any Fabric capacity. Previously, trusted workspace access and managed private endpoints were available only in F64 or higher capacities. Managed private endpoints are now available in Trial capacities.
September 2024 | Multitenant organization (MTO) (preview) | Fabric now supports Microsoft Entra ID Multitenant Organizations (MTO). The multitenant organizations capability in Microsoft Entra ID synchronizes users across multiple tenants, adding them as users of type external member. For more information, see Distribute Power BI content to external guest users with Microsoft Entra B2B.
September 2024 | Microsoft Fabric Achieves HITRUST CSF Certification | Microsoft Fabric is now certified for the HITRUST Common Security Framework (CSF) v11.0.1.

For older updates, review the Microsoft Fabric What's New archive.

Continuous Integration/Continuous Delivery (CI/CD) in Microsoft Fabric
This section includes guidance and documentation updates on development process,
tools, source control, and versioning in the Microsoft Fabric workspace.


Month | Feature | Learn more
November 2024 | Microsoft Fabric REST APIs Integration with GitHub | These APIs enable you to automate Git integration tasks, such as connecting to GitHub, retrieving connection details, committing changes to your connected GitHub repository, updating from the repository, and more. For more information, see Automate Git integration by using APIs and code samples.
November 2024 | Data Factory Copy Job – CI/CD now available | CI/CD for Copy job (preview) in Data Factory in Microsoft Fabric is now available. Copy job now supports Git integration and deployment pipelines.
September 2024 | GitHub integration for source control | Now generally available, Fabric developers can choose GitHub or GitHub Enterprise as their source control tool, and version their Fabric items there. For more information, see What is Microsoft Fabric Git integration?
September 2024 | New Deployment Pipelines design | A new and improved design for the deployment pipeline introduces a range of changes, additions, and improvements designed to elevate your deployment process. Read more about What's changed in deployment pipelines.
For older updates, review the Microsoft Fabric What's New archive.

Related content
Microsoft Fabric What's New archive
Modernization Best Practices and Reusable Assets Blog
Azure Data Explorer Blog
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?
Microsoft Fabric videos on YouTube
Microsoft Fabric community



End-to-end tutorials in Microsoft Fabric
Article • 01/26/2025

7 Note

Are you a new developer working with Fabric? Are you interested in sharing your
getting started experience and helping us make improvements? We’d like to talk
with you! Sign up here if interested .

In this article, you find a comprehensive list of end-to-end tutorials available in Microsoft Fabric. These tutorials guide you through a scenario that covers the entire process, from data acquisition to data consumption. They're designed to help you develop a foundational understanding of the Fabric UI, the various experiences supported by Fabric and their integration points, and the professional and citizen developer experiences that are available.

Multi-experience tutorials
The following table lists tutorials that span multiple Fabric experiences.


Tutorial name | Scenario
Lakehouse | In this tutorial, you ingest, transform, and load the data of a fictional retail company, Wide World Importers, into the lakehouse and analyze sales data across various dimensions.
Data Science | In this tutorial, you explore, clean, and transform a taxicab trip semantic model, and build a machine learning model to predict trip duration at scale on a large semantic model.
Real-Time Intelligence | In this tutorial, you use the streaming and query capabilities of Real-Time Intelligence to analyze London bike share data. You learn how to stream and transform the data, run KQL queries, and build a Real-Time Dashboard and a Power BI report to gain insights and respond to this real-time data.
Data warehouse | In this tutorial, you build an end-to-end data warehouse for the fictional Wide World Importers company. You ingest data into the data warehouse, transform it using T-SQL and pipelines, run queries, and build reports.
Fabric SQL database | The tutorial provides a comprehensive guide to utilizing the SQL database in Fabric. This tutorial is tailored to help you navigate through the process of database creation, setting up database objects, exploring autonomous features, and combining and visualizing data. Additionally, you learn how to create a GraphQL endpoint, which serves as a modern approach to connecting and querying your data efficiently.
Fabric Activator | The tutorial is designed for customers who are new to Fabric Activator. Using a sample eventstream, you learn your way around Activator. Once you're familiar with the terminology and interface, you create your own object, rule, and activator.

Experience-specific tutorials
The following tutorials walk you through scenarios within specific Fabric experiences.


Tutorial name | Scenario
Power BI | In this tutorial, you build a dataflow and pipeline to bring data into a lakehouse, create a dimensional model, and generate a compelling report.
Data Factory | In this tutorial, you ingest data with data pipelines and transform data with dataflows, then use the automation and notification capabilities to create a complete data integration scenario.
Data Science end-to-end AI samples | In this set of tutorials, learn about the different Data Science experience capabilities and examples of how ML models can address your common business problems.
Data Science - Price prediction with R | In this tutorial, you build a machine learning model to analyze and visualize the avocado prices in the US and predict future prices.
Application lifecycle management | In this tutorial, you learn how to use deployment pipelines together with Git integration to collaborate with others in the development, testing, and publication of your data and reports.

Related content
Create a workspace
Discover data items in the OneLake data hub


Task flows in Microsoft Fabric (preview)
Article • 01/26/2025

This article describes the task flows feature in Microsoft Fabric. Its target audience is
data analytics solution architects who want to use a task flow to build a visual
representation of their project, engineers who are working on the project and want to
use the task flow to facilitate their work, and others who want to use the task flow to
filter the item list to help navigate and understand the workspace.

Overview
Fabric task flow is a workspace feature that enables you to build a visualization of the
flow of work in the workspace. The task flow helps you understand how items are
related and work together in your workspace, and makes it easier for you to navigate
your workspace, even as it becomes more complex over time. Moreover, the task flow
can help you standardize your team's work and keep your design and development
work in sync to boost the team's collaboration and efficiency.

Fabric provides a range of predefined, end-to-end task flows based on industry best
practices that are intended to make it easier to get started with your project. In addition,
you can customize the task flows to suit your specific needs and requirements. This
enables you to create a tailored solution that meets your unique business needs and
goals.

Each workspace has one task flow. The task flow occupies the upper part of workspace
list view. It consists of a canvas where you can build the visualization of your data
analytics project, and a side pane where you can see and edit details about the task
flow, tasks, and connectors.

7 Note

You can resize or hide the task flow using the controls on the horizontal separator
bar.

Key concepts
Key concepts to know when working with a task flow are described in the following
sections.

Task flow
A task flow is a collection of connected tasks that represent relationships in a process or
collection of processes that complete an end-to-end data solution. A workspace has one
task flow. You can either build it from scratch or use one of Fabric's predefined task
flows, which you can customize as desired.

Task
A task is a unit of process in the task flow. A task has recommended item types to help
you select the appropriate items when building your solution. Tasks also help you
navigate the items in the workspace.

Task type
Each task has a task type that classifies the task based on its key capabilities in the data
process flow. The predefined task types are:


Task type | What you want to do with the task
General | Create a customized task for your project needs that you can assign available item types to.
Get data | Ingest batch and real-time data into a single location within your Fabric workspace.
Mirror data | Replicate your data from any location to OneLake in near real-time.
Store data | Organize, query, and store your ingested data in an easily retrievable format.
Prepare data | Clean, transform, extract, and load your data for analysis and modeling tasks.
Analyze and train data | Propose hypotheses, train models, and explore your data to make decisions and predictions.
Track data | Monitor your streaming or near real-time operational data, and make decisions based on gained insights.
Visualize data | Present your data as rich visualizations and insights that can be shared with others.
Distribute data | Package your items for distribution as customized, easy-to-use apps.
Develop data | Create and build your software, applications, and data solutions.
Connector
Connectors are arrows that represent logical connections between the tasks in the task
flow. They don't represent the flow of data, nor do they create any actual data
connections.

Considerations and limitations


Creating paginated reports, dataflows Gen1, and semantic models from a task isn't
supported.
Creating reports from a task is supported only if a published semantic model is
picked.
Related content
Set up a task flow
Work with task flows



Set up a task flow (preview)
Article • 01/26/2025

This article describes how to start building a task flow, starting either from scratch or
with one of Fabric's predefined task flows. It targets data analytics solution architects
and others who want to create a visualization of a data project.

Prerequisites
To create a task flow in a workspace, you must be a workspace admin, member, or
contributor.

Open the workspace


Navigate to the workspace where you want to create your task flow. It should open in
list view, but if not, select the List view icon. You'll see that the workspace view is split
between the task flow, where you'll build your task flow, and the items list, which shows
you the items in the workspace. A moveable separator bar allows you to adjust the size
of the views. You can also hide the task flow if you want to get it out of the way.

1. List view selector
2. Task flow canvas
3. Resize bar
4. Show/hide task flow
5. Items list
When no task flow has been configured, the task flow area prompts you to choose
between starting with a predesigned task flow or adding a task to start building your
own custom task flow.

To build your task flow, you need to:

Add tasks to the task flow canvas.
Arrange the tasks on the task flow canvas in a way that illustrates the logic of the project.
Connect the tasks to show the logical structure of the project.
Assign items to the tasks in the task flow.

To get started, choose either Select a predesigned task flow or Add a task to start
building one from scratch.

If the workspace contains only Power BI items, the task flow canvas displays a basic task flow designed to meet the needs of a solution based solely on Power BI items.

Select Create if you want to start with this task flow, or choose either of the previously
mentioned options, Select a predesigned task flow or Add a task.

Start with a predesigned task flow


In the empty task flow area, choose Select a predesigned task flow.

The side pane lists the predesigned task flows provided by Microsoft. Each predefined
task flow has a brief description of its use case. When you select one of the flows, you'll
see a more detailed description of the flow and how to use it, and also the workloads
and item types that the flow requires.

1. List of predesigned task flows.
2. Layout of selected predesigned task flow.
3. Name of selected predesigned task flow.
4. Number of tasks in the task flow.
5. Detailed description of the task flow and how it's used.
6. The workloads that the task flow typically requires.
7. The item types that are typically used in the task flow.

Select the task flow that best fits your project needs and then choose Select. The
selected task flow will be applied to the task flow canvas.

The task flow canvas provides a graphic view of the tasks and how they're connected
logically.
The side pane now shows detailed information about the task flow you selected,
including:

Task flow name.
Task flow description.
Total number of tasks in the task flow.
A list of the tasks in the task flow.

It's recommended that you change the task flow name and description to something
meaningful that enables others to better understand what the task flow is all about. To
change the name and description, select Edit in the task flow side pane. For more
information, see Edit task flow details.

The items list shows all the items and folders in the workspace, including those items
that are assigned to tasks in the task flow. When you select a task in the task flow, the
items list is filtered to show just the items that are assigned to the selected task.

7 Note

Selecting a predefined task flow just places the tasks involved in the task flow on
the canvas and indicates the connections between them. It's just a graphical
representation - no actual items or data connections are created at this point, and
no existing items are assigned to tasks in the flow.

After you've added the predefined task flow to the canvas, you can start modifying it to
suit your needs - arranging the tasks on the canvas, updating task names and
descriptions, assigning items to tasks, etc. For more information, see Working with task
flows.

Start with a custom task flow


If you already have a clear idea of what the structure of your task flow needs to be, or if
none of the predesigned task flows fit your needs, you can build a custom task flow
from scratch task by task.

1. In the empty task flow area, select Add a task and choose a task type.
2. A task card appears on the canvas and the task details pane opens to the side. It's recommended to provide a meaningful name and description for the task to help other members of the workspace understand what it's for. To do so, select Edit in the task details side pane.

3. Deselect the task by clicking on a blank area of the task flow canvas. The side pane
will display the task flow details with a default name (Get started with a task flow)
and description. Note that the task you just created is listed under the Tasks
section.
Select Edit and provide a meaningful name and description for your new task flow to
help other members of the workspace understand your project and the task flow you're
creating. For more information, see Edit task flow details.

You can continue to add more tasks to the canvas. You'll also have to perform other
actions, such as arranging the tasks on the canvas, connecting the tasks, assigning items
to the tasks, etc. For more information, see Working with task flows.

Related concepts
Task flow overview
Work with task flows



Work with task flows (preview)
Article • 01/26/2025

This article describes how to work with tasks. The target audience is data analytics
solution architects who are designing a data analytics solution, engineers who need to
know how to use task flows to facilitate their work, and others who want to use the task
flow to filter the item list to help navigate and understand the workspace.

Prerequisites
To create or edit the task flow, and to create items in the workspace via the task flow,
you need to be an Admin, Member, or Contributor in the workspace.

Admins, Members, Contributors, and Viewers can use the task flow to filter the items list.

Task controls
Much of the work with tasks is accomplished either in the task details pane or via
controls on the task card or on the task flow canvas.

Select a task to display the task details pane. The following image shows the main
controls for working with tasks.
1. Add task or connector
2. Edit task name and description
3. Change task type
4. Create new item for task
5. Assign existing items to task
6. Delete task

Resize or hide the task flow


You can resize the task flow, or even hide it, according to your personal needs and
preferences. Fabric remembers task flow resize and show/hide choices per user and per
workspace, so each time you return to a workspace, the task flow size and show/hide
status will be the same as it was the last time you left the workspace.

To resize the task flow, drag the resize bar on the horizontal separator up or down.

To show/hide the task flow, select the show/hide control at the right side of the
separator.

Add a task
To add a new task to the task flow canvas, open the Add dropdown menu and select the
desired task type.

The task of the selected task type is added onto the canvas. The name and description
of the new task are the default name and description of the task type. Consider
changing the name and description of the new task to better describe its purpose in the
work flow. A good task name should identify the task and provide a clear indication of
its intended use.

Edit task name and description


To edit a task's name or description:

1. Select the task on the canvas to open the task details pane.

2. Select Edit and change the name and description fields as desired. When done,
select Save.

Change task type


To change a task to a different type:

1. Select the task on the canvas to open the task details pane.

2. Open the Task type dropdown menu and choose the new desired task type.

7 Note

Changing the task type doesn't change the task name or description. Consider
changing these fields to suit the new task type.
Arrange tasks on the canvas
Part of building a task flow is arranging the tasks in the proper order. To arrange the
tasks, select and drag each task to the desired position in the task flow.

 Tip

When you move tasks around on the canvas, they stay in the place where you put
them. However, due to a known issue, when you add a new task to the canvas, any
unconnected tasks will move back to their default positions. Therefore, to
safeguard your arrangement of tasks, it's highly recommended to connect them all
with connectors before adding any new tasks to the canvas.

Connect tasks
Connectors show the logical flow of work. They don't make or indicate any actual data
connections - they are graphic representations of the flow of tasks only.

Add a connector
To connect two tasks, select the edge of the starting task and drag to an edge of the
next task.

Alternatively, you can select Add > Connector from the Add dropdown on the canvas.
Then, in the Add connector dialog, select the start and end tasks, then select Add.

Delete a connector
To delete a connector, select it and press Enter.

Alternatively, select the connector to open the connector details pane, then select the
trash can icon.
Change connector start and end points or direction
To change a connector's start and end values, or switch its direction:

1. Select the connector to open the connector details pane.

2. In the details pane, change the start and end values as desired, or select Swap to
change connector direction.

Assign items to a task


Once a task has been placed on the canvas, you can assign items to it to help structure
and organize your work. You can create new items to be assigned to the task, or you can
assign items that already exist in the workspace.

7 Note

An item can only be assigned to a single task. It can't be assigned to multiple tasks.
Create a new item for a task
To create a new item for a specific task:

1. Select + New item on the task.

2. On the Create an item pane that opens, the recommended item types for the task
are displayed by default. Choose one of the recommended types.

If you don't see the item type you want, change the Display selector from
Recommended items to All items, and then choose the item type you want.

7 Note

When you first save a new report, you'll be given the opportunity to assign it to any
task that exists in the workspace.

Assign existing items to a task


To assign existing items to a task:

1. Select the clip icon on the task.


2. In the Assign item dialog box that opens, hover over the item you want to assign to
the task and mark the checkbox. You can assign more than one item. When you're
done choosing the items you want to assign to the task, choose Select to assign
the selected items to the task.

The items you selected are assigned to the task. In the item list, task assignments
are shown in the Task column.

Unassign items from tasks


You can unassign items from a selected task or from multiple tasks.

7 Note

Unassigning items from tasks does not remove the items from the workspace.
Unassign items from a task
To unassign items from a task:

1. Select the task you want to unassign the items from. This filters the item list to
show just the items that are assigned to the task.

2. In the item list, hover over the items you want to unassign and mark the
checkboxes that appear.

3. On the workspace toolbar, choose Unassign from task (or Unassign from all tasks,
if you've selected multiple items).

Unassign items from multiple tasks


To unassign items from multiple tasks:

1. Select Clear all at the top of the items list to clear all filters so that you can see all
the items in the workspace. Note that items that are assigned to tasks list the task
name in the Task column.

2. Hover over the items you want to unassign and mark the checkboxes.
3. When you've finished making your selections, select Unassign from all tasks in the
workspace toolbar.

Delete a task
To delete a task:

1. Select the task to open the task details pane.

2. Select the trash can icon.

Alternatively,

1. Select the task flow canvas to open the task flow details pane.

2. In the task flow details pane, hover over the task you want to delete in the Tasks
list and select the trash can icon.
7 Note

Deleting a task doesn't delete the items assigned to it. They remain in the
workspace.

Navigate items with the task flow


With items assigned to tasks in a task flow, you can use the task flow to quickly
understand how the items in the workspace work together, and get a clear picture of
your work in the workspace.

For each item that you see in the items list, you can see the item type and what
task it's assigned to, if any.
When you select a task, the items list is filtered to show only the items that are
assigned to that task.

Select a new predefined task flow


At any point, you can choose to apply one of the predefined task flows to the canvas.

To select one of the predefined task flows:

1. Open the Add dropdown on the canvas and choose Select task flow. The
predefined task flows pane will open.

2. Choose one of the predefined task flows and then select Select. If there already is a
task flow on the canvas, you'll be asked whether to overwrite the current task flow
or to append the predefined task flow to the current task flow.

Edit task flow details


To edit the task flow name or description:

1. Open the task flow details pane by selecting the task flow canvas.
2. Select Edit and change the name and description fields as desired. When done,
select Save.

7 Note

A good task flow name and description should help others understand the
intended purpose and use of the task flow.

Delete a task flow


To delete a task flow:

1. Select a blank area of the canvas to display the task flow details pane.

2. Select the trash icon to delete the task flow.

Deleting a task flow removes all tasks, the task list, and any item assignments, and resets
the task flow to its original default empty state.

7 Note
Items that were assigned to tasks in the deleted task flow remain in the workspace.
When you create a new task flow, you need to assign them to the tasks in the new
flow.

Related concepts
Task flow overview
Set up a task flow



Microsoft Fabric decision guide: copy activity, dataflow, or Spark
Article • 01/26/2025

Use this reference guide and the example scenarios to help you in deciding whether you
need a copy activity, a dataflow, or Spark for your Microsoft Fabric workloads.

Copy activity, dataflow, and Spark properties



Property | Pipeline copy activity | Dataflow Gen 2 | Spark
Use case | Data lake and data warehouse migration, data ingestion, lightweight transformation | Data ingestion, data transformation, data wrangling, data profiling | Data ingestion, data transformation, data processing, data profiling
Primary developer persona | Data engineer, data integrator | Data engineer, data integrator, business analyst | Data engineer, data scientist, data developer
Primary developer skill set | ETL, SQL, JSON | ETL, M, SQL | Spark (Scala, Python, Spark SQL, R)
Code written | No code, low code | No code, low code | Code
Data volume | Low to high | Low to high | Low to high
Development interface | Wizard, canvas | Power Query | Notebook, Spark job definition
Sources | 30+ connectors | 150+ connectors | Hundreds of Spark libraries
Destinations | 18+ connectors | Lakehouse, Azure SQL database, Azure Data Explorer, Azure Synapse Analytics | Hundreds of Spark libraries
Transformation complexity | Low: lightweight - type conversion, column mapping, merge/split files, flatten hierarchy | Low to high: 300+ transformation functions | Low to high: support for native Spark and open-source libraries

Review the following three scenarios for help with choosing how to work with your data
in Fabric.

Scenario 1
Leo, a data engineer, needs to ingest a large volume of data from external systems, both
on-premises and cloud. These external systems include databases, file systems, and APIs.
Leo doesn’t want to write and maintain code for each connector or data movement
operation. He wants to follow the medallion layers best practices, with bronze, silver,
and gold. Leo doesn't have any experience with Spark, so he prefers the drag and drop
UI as much as possible, with minimal coding. And he also wants to process the data on a
schedule.

The first step is to get the raw data into the bronze layer lakehouse from Azure data
resources and various third party sources (like Snowflake Web, REST, AWS S3, GCS, etc.).
He wants a consolidated lakehouse, so that all the data from various LOB, on-premises,
and cloud sources reside in a single place. Leo reviews the options and selects pipeline
copy activity as the appropriate choice for his raw binary copy. This pattern applies to
both historical and incremental data refresh. With copy activity, Leo can load Gold data
to a data warehouse with no code if the need arises, and pipelines provide high-scale
data ingestion that can move petabyte-scale data. Copy activity is the best low-code
and no-code choice to move petabytes of data to lakehouses and warehouses from
a variety of sources, either ad hoc or on a schedule.

Scenario 2
Mary is a data engineer with a deep knowledge of the multiple LOB analytic reporting
requirements. An upstream team has successfully implemented a solution to migrate
multiple LOB's historical and incremental data into a common lakehouse. Mary has been
tasked with cleaning the data, applying business logic, and loading it into multiple
destinations (such as Azure SQL DB, ADX, and a lakehouse) in preparation for their
respective reporting teams.
Mary is an experienced Power Query user, and the data volume is in the low to medium
range to achieve desired performance. Dataflows provide no-code or low-code
interfaces for ingesting data from hundreds of data sources. With dataflows, you can
transform data using 300+ data transformation options, and write the results into
multiple destinations with an easy to use, highly visual user interface. Mary reviews the
options and decides that it makes sense to use Dataflow Gen 2 as her preferred
transformation option.

Scenario 3
Adam is a data engineer working for a large retail company that uses a lakehouse to
store and analyze its customer data. As part of his job, Adam is responsible for building
and maintaining the data pipelines that extract, transform, and load data into the
lakehouse. One of the company's business requirements is to perform customer review
analytics to gain insights into their customers' experiences and improve their services.

Adam decides the best option is to use Spark to build the extract and transformation
logic. Spark provides a distributed computing platform that can process large amounts
of data in parallel. He writes a Spark application using Python or Scala, which reads
structured, semi-structured, and unstructured data from OneLake for customer reviews
and feedback. The application cleanses, transforms, and writes data to Delta tables in
the lakehouse. The data is then ready to be used for downstream analytics.
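
To make this pattern concrete, the following PySpark sketch shows the general shape of such an application in a Fabric notebook. It is illustrative only: the file path, column names, and Delta table name are hypothetical placeholders, not assets defined in this scenario.

```python
# Hypothetical sketch of the kind of Spark application described above.
# Paths, column names, and the Delta table name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lower, trim

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Fabric notebook

# Read raw customer-review files that were landed in the lakehouse Files area.
raw_reviews = spark.read.json("Files/raw/customer_reviews/")

# Light cleansing: drop incomplete rows, normalize text, cast the rating.
clean_reviews = (
    raw_reviews
    .dropna(subset=["review_id", "review_text"])
    .withColumn("review_text", trim(lower(col("review_text"))))
    .withColumn("rating", col("rating").cast("int"))
    .dropDuplicates(["review_id"])
)

# Write the result as a Delta table in the lakehouse for downstream analytics.
clean_reviews.write.format("delta").mode("overwrite").saveAsTable("customer_reviews_clean")
```

From there, the cleansed Delta table can be queried from notebooks, the lakehouse's SQL analytics endpoint, or Power BI.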

Related content
How to copy data using copy activity
Quickstart: Create your first dataflow to get and transform data
How to create an Apache Spark job definition in Fabric



Microsoft Fabric decision guide: choose a data store
Article • 01/26/2025

Use this reference guide and the example scenarios to help you choose a data store for
your Microsoft Fabric workloads.

Data store properties


Use this information to compare Fabric data stores such as warehouse, lakehouse,
Eventhouse, SQL database, and Power BI datamart, based on data volume, type,
developer persona, skill set, operations, and other capabilities. These comparisons are
organized into the following two tables:


Table 1 of 2 | Lakehouse | Warehouse | Eventhouse
Data volume | Unlimited | Unlimited | Unlimited
Type of data | Unstructured, semi-structured, structured | Structured, semi-structured (JSON) | Unstructured, semi-structured, structured
Primary developer persona | Data engineer, data scientist | Data warehouse developer, data architect, data engineer, database developer | App developer, data scientist, data engineer
Primary dev skill | Spark (Scala, PySpark, Spark SQL, R) | SQL | No code, KQL, SQL
Data organized by | Folders and files, databases, and tables | Databases, schemas, and tables | Databases, schemas, and tables
Read operations | Spark, T-SQL | T-SQL, Spark* | KQL, T-SQL, Spark
Write operations | Spark (Scala, PySpark, Spark SQL, R) | T-SQL | KQL, Spark, connector ecosystem
Multi-table transactions | No | Yes | Yes, for multi-table ingestion
Primary development interface | Spark notebooks, Spark job definitions | SQL scripts | KQL Queryset, KQL Database
Security | RLS, CLS**, table level (T-SQL), none for Spark | Object level, RLS, CLS, DDL/DML, dynamic data masking | RLS
Access data via shortcuts | Yes | Yes, via SQL analytics endpoint | Yes
Can be a source for shortcuts | Yes (files and tables) | Yes (tables) | Yes
Query across items | Yes | Yes | Yes
Advanced analytics | Interface for large-scale data processing, built-in data parallelism, and fault tolerance | Interface for large-scale data processing, built-in data parallelism, and fault tolerance | Time Series native elements, full geo-spatial and query capabilities
Advanced formatting support | Tables defined using PARQUET, CSV, AVRO, JSON, and any Apache Hive compatible file format | Tables defined using PARQUET, CSV, AVRO, JSON, and any Apache Hive compatible file format | Full indexing for free text and semi-structured data like JSON
Ingestion latency | Available instantly for querying | Available instantly for querying | Queued ingestion; streaming ingestion has a couple of seconds latency

* Spark supports reading from tables using shortcuts; it doesn't yet support accessing views, stored procedures, functions, etc.


Table 2 of 2 | Fabric SQL database | Power BI Datamart
Data volume | 4 TB | Up to 100 GB
Type of data | Structured, semi-structured, unstructured | Structured
Primary developer persona | AI developer, app developer, database developer, DB admin | Data scientist, data analyst
Primary dev skill | SQL | No code, SQL
Data organized by | Databases, schemas, tables | Database, tables, queries
Read operations | T-SQL | Spark, T-SQL
Write operations | T-SQL | Dataflows, T-SQL
Multi-table transactions | Yes, full ACID compliance | No
Primary development interface | SQL scripts | Power BI
Security | Object level, RLS, CLS, DDL/DML, dynamic data masking | Built-in RLS editor
Access data via shortcuts | Yes | No
Can be a source for shortcuts | Yes (tables) | No
Query across items | Yes | No
Advanced analytics | T-SQL analytical capabilities, data replicated to delta parquet in OneLake for analytics | Interface for data processing with automated performance tuning
Advanced formatting support | Table support for OLTP, JSON, vector, graph, XML, spatial, key-value | Tables defined using PARQUET, CSV, AVRO, JSON, and any Apache Hive compatible file format
Ingestion latency | Available instantly for querying | Available instantly for querying

** Column-level security is available on the Lakehouse through a SQL analytics endpoint, using T-SQL.

Scenarios
Review these scenarios for help with choosing a data store in Fabric.

Scenario 1
Susan, a professional developer, is new to Microsoft Fabric. They're ready to get started
cleaning, modeling, and analyzing data but need to decide whether to build a data warehouse or
a lakehouse. After review of the details in the previous table, the primary decision points
are the available skill set and the need for multi-table transactions.

Susan has spent many years building data warehouses on relational database engines,
and is familiar with SQL syntax and functionality. Thinking about the larger team, the
primary consumers of this data are also skilled with SQL and SQL analytical tools. Susan
decides to use a Fabric warehouse, which allows the team to interact primarily with T-
SQL, while also allowing any Spark users in the organization to access the data.

Susan creates a new data warehouse and interacts with it using T-SQL just like her other
SQL server databases. Most of the existing T-SQL code she has written to build her
warehouse on SQL Server will work on the Fabric data warehouse making the transition
easy. If she chooses to, she can even use the same tools that work with her other
databases, like SQL Server Management Studio. Using the SQL editor in the Fabric
portal, Susan and other team members write analytic queries that reference other data
warehouses and Delta tables in lakehouses simply by using three-part names to perform
cross-database queries.
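
As a rough sketch of what such a cross-database query could look like from application code, the snippet below uses pyodbc with three-part names to join a warehouse table to a lakehouse Delta table exposed through its SQL analytics endpoint. The server, database, table, and column names are hypothetical placeholders.

```python
# Minimal sketch of a cross-database query using three-part names.
# The connection string, database names, and tables are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-workspace-sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=SalesWarehouse;"
    "Authentication=ActiveDirectoryInteractive;"
)

query = """
SELECT w.OrderID, w.OrderTotal, l.CustomerSegment
FROM   SalesWarehouse.dbo.Orders AS w
JOIN   MarketingLakehouse.dbo.Customers AS l  -- Delta table exposed by the lakehouse SQL analytics endpoint
       ON w.CustomerID = l.CustomerID;
"""

for row in conn.cursor().execute(query):
    print(row.OrderID, row.OrderTotal, row.CustomerSegment)
```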

Scenario 2
Rob, a data engineer, needs to store and model several terabytes of data in Fabric. The
team has a mix of PySpark and T-SQL skills. Most of the team running T-SQL queries are
consumers, and therefore don't need to write INSERT, UPDATE, or DELETE statements.
The remaining developers are comfortable working in notebooks, and because the data
is stored in Delta, they're able to interact with a similar SQL syntax.

Rob decides to use a lakehouse, which allows the data engineering team to use their
diverse skills against the data, while allowing the team members who are highly skilled
in T-SQL to consume the data.
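
As a small hedged illustration of that "similar SQL syntax", a notebook user on Rob's team might query a lakehouse Delta table with Spark SQL as sketched below; the table and column names are hypothetical placeholders.

```python
# Hypothetical Spark SQL query over a lakehouse Delta table in a Fabric notebook.
# Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-configured in Fabric notebooks

# The same Delta table is also visible to T-SQL consumers through the
# lakehouse's SQL analytics endpoint, so both groups work from one copy of the data.
monthly_totals = spark.sql("""
    SELECT CustomerRegion,
           date_trunc('month', OrderDate) AS OrderMonth,
           SUM(OrderTotal)                AS TotalSales
    FROM   sales_orders
    GROUP  BY CustomerRegion, date_trunc('month', OrderDate)
""")

monthly_totals.show(10, truncate=False)
```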

Scenario 3
Ash, a citizen developer, is a Power BI developer. They're familiar with Excel, Power BI,
and Office. They need to build a data product for a business unit. They know they don't
quite have the skills to build a data warehouse or a lakehouse, and those seem like too
much for their needs and data volumes. They review the details in the previous table
and see that the primary decision points are their own skills and their need for a self
service, no code capability, and data volume under 100 GB.
Ash works with business analysts familiar with Power BI and Microsoft Office, and knows
that they already have a Premium capacity subscription. As they think about their larger
team, they realize the primary consumers of this data are analysts, familiar with no-code
and SQL analytical tools. Ash decides to use a Power BI datamart, which allows the team
to build the capability fast, using a no-code experience. Queries can be
executed via Power BI and T-SQL, while also allowing any Spark users in the organization
to access the data as well.

Scenario 4
Daisy is a business analyst experienced in using Power BI to analyze supply chain
bottlenecks for a large global retail chain. They need to build a scalable data solution
that can handle billions of rows of data and can be used to build dashboards and
reports that can be used to make business decisions. The data comes from plants,
suppliers, shippers, and other sources in various structured, semi-structured, and
unstructured formats.

Daisy decides to use an Eventhouse because of its scalability, quick response times,
advanced analytics capabilities (including time series analysis and geospatial functions),
and fast direct query mode in Power BI. Queries can be executed using Power BI and KQL to
compare current and previous periods, quickly identify emerging problems, or
provide geospatial analytics of land and maritime routes.
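A period-over-period comparison in KQL might look like the following sketch. The table, columns, and time windows are hypothetical and only illustrate the kind of query Daisy could write.

```kusto
// Hypothetical KQL query: compare average shipment delay for the current week
// with the previous week, per route.
ShipmentEvents
| where Timestamp > ago(14d)
| extend Period = iff(Timestamp > ago(7d), "Current week", "Previous week")
| summarize AvgDelayMinutes = avg(DelayMinutes) by Route, Period
| order by Route asc, Period asc
```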

Scenario 5
Kirby is an application architect experienced in developing .NET applications for
operational data. They need a high concurrency database with full ACID transaction
compliance and strongly enforced foreign keys for relational integrity. Kirby wants the
benefit of automatic performance tuning to simplify day-to-day database management.

Kirby decides on a SQL database in Fabric, with the same SQL Database Engine as Azure
SQL Database. SQL databases in Fabric automatically scale to meet demand throughout
the business day. They have the full capability of transactional tables and the flexibility
of transaction isolation levels from serializable to read committed snapshot. SQL
database in Fabric automatically creates and drops nonclustered indexes based on
strong signals from execution plans observed over time.
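For instance, an operational schema with a strongly enforced foreign key could be defined as in the following sketch. The table and column names are hypothetical, not part of Kirby's actual application.

```sql
-- Hypothetical operational tables with an enforced foreign key for relational integrity.
CREATE TABLE dbo.Customers
(
    CustomerId  INT           NOT NULL PRIMARY KEY,
    DisplayName NVARCHAR(200) NOT NULL
);

CREATE TABLE dbo.Orders
(
    OrderId    INT       NOT NULL PRIMARY KEY,
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2  NOT NULL,
    CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerId)
        REFERENCES dbo.Customers (CustomerId)
);
```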

In Kirby's scenario, data from the operational application must be joined with other data
in Fabric: in Spark, in a warehouse, and from real-time events in an Eventhouse. Every
Fabric SQL database includes a SQL analytics endpoint, so data can be accessed in real time
from Spark or with Power BI queries using Direct Lake mode. These reporting solutions
spare the primary operational database from the overhead of analytical workloads, and
avoid denormalization. Kirby also has existing operational data in other SQL databases,
and needs to import that data without transformation. To import existing operational
data without any data type conversion, Kirby designs data pipelines with Fabric Data
Factory to import data into the Fabric SQL database.

Related content
Create a lakehouse in Microsoft Fabric
Create a warehouse in Microsoft Fabric
Create an eventhouse
Create a SQL database in the Fabric portal
Power BI datamart



Microsoft Fabric decision guide: Choose
between Warehouse and Lakehouse
Article • 01/26/2025

Microsoft Fabric offers two enterprise-scale, open standard format workloads for data
storage: Warehouse and Lakehouse. This article compares the two platforms and the
decision points for each.

Criterion

No Code or Pro Code solutions: How do you want to develop?
    Spark: Use Lakehouse
    T-SQL: Use Warehouse

Warehousing needs: Do you need multi-table transactions?
    Yes: Use Warehouse
    No: Use Lakehouse

Data complexity: What type of data are you analyzing?
    Don't know: Use Lakehouse
    Unstructured and structured data: Use Lakehouse
    Structured data only: Use Warehouse
Choose a candidate service
Perform a detailed evaluation of the service to confirm that it meets your needs.

The Warehouse item in Fabric Data Warehouse is an enterprise scale data warehouse
with open standard format.​

No knobs performance with minimal set-up and deployment, no configuration of
compute or storage needed.
Simple and intuitive warehouse experiences for both beginner and experienced
data professionals (no/pro code).
Lake-centric warehouse stores data in OneLake in open Delta format with easy
data recovery and management.
Fully integrated with all Fabric workloads.
Data loading and transforms at scale, with full multi-table transactional guarantees
provided by the SQL engine.
Virtual warehouses with cross-database querying and a fully integrated semantic
layer.
Enterprise-ready platform with end-to-end performance and usage visibility, with
built-in governance and security.
Flexibility to build a data warehouse or data mesh based on organizational needs
and choice of no-code, low-code, or T-SQL for transformations.

The Lakehouse item in Fabric Data Engineering is a data architecture platform for
storing, managing, and analyzing structured and unstructured data in a single location.

Store, manage, and analyze structured and unstructured data in a single location
to gain insights and make decisions faster and more efficiently.
Flexible and scalable solution that allows organizations to handle large volumes of
data of all types and sizes.
Easily ingest data from many different sources, which are converted into a unified
Delta format.
Automatic table discovery and registration for a fully managed file-to-table
experience for data engineers and data scientists.
Automatic SQL analytics endpoint and default dataset that allows T-SQL querying
of Delta tables in the lake.

Both are included in Power BI Premium or Fabric capacities​.

Compare different warehousing capabilities


This table compares the Warehouse to the SQL analytics endpoint of the Lakehouse.

Primary capabilities
    Warehouse: ACID compliant, full data warehousing with transactions support in T-SQL.
    SQL analytics endpoint of the Lakehouse: Read only, system generated SQL analytics endpoint for the Lakehouse for T-SQL querying and serving. Supports analytics on the Lakehouse Delta tables and the Delta Lake folders referenced via shortcuts.

Developer profile
    Warehouse: SQL developers or citizen developers.
    SQL analytics endpoint of the Lakehouse: Data engineers or SQL developers.

Data loading
    Warehouse: SQL, pipelines, dataflows.
    SQL analytics endpoint of the Lakehouse: Spark, pipelines, dataflows, shortcuts.

Delta table support
    Warehouse: Reads and writes Delta tables.
    SQL analytics endpoint of the Lakehouse: Reads Delta tables.

Storage layer
    Warehouse: Open Data Format - Delta.
    SQL analytics endpoint of the Lakehouse: Open Data Format - Delta.

Recommended use case
    Warehouse: Data warehousing for enterprise use; structured data analysis in T-SQL with tables, views, procedures, and functions; advanced SQL support for BI.
    SQL analytics endpoint of the Lakehouse: Data warehousing supporting departmental, business unit, or self-service use; exploring and querying Delta tables from the lakehouse; staging data and archival zone for analysis; medallion lakehouse architecture with zones for bronze, silver, and gold analysis; pairing with the Warehouse for enterprise analytics use cases.

Development experience
    Warehouse: Warehouse editor with full support for T-SQL data ingestion, modeling, development, and querying; UI experiences for data ingestion, modeling, and querying; read/write support for first- and third-party tooling.
    SQL analytics endpoint of the Lakehouse: Lakehouse SQL analytics endpoint with limited T-SQL support for views, table-valued functions, and SQL queries; UI experiences for modeling and querying; limited T-SQL support for first- and third-party tooling.

T-SQL capabilities
    Warehouse: Full DQL, DML, and DDL T-SQL support; full transaction support.
    SQL analytics endpoint of the Lakehouse: Full DQL, no DML, limited DDL T-SQL support such as SQL views and TVFs.
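To make the difference concrete, the following hedged sketch contrasts the two. The table and view names are hypothetical; the DML and transaction statements are only valid against a Warehouse, while the SQL analytics endpoint is limited to read-style queries and a subset of DDL such as views.

```sql
-- Warehouse: DML inside a multi-table transaction is supported.
BEGIN TRANSACTION;
    INSERT INTO dbo.FactSales (SaleId, SaleAmount) VALUES (1001, 250.00);
    UPDATE dbo.Inventory SET UnitsInStock = UnitsInStock - 1 WHERE ProductId = 42;
COMMIT TRANSACTION;

-- SQL analytics endpoint of the Lakehouse: read-only querying and limited DDL,
-- for example creating a view over a Delta table.
CREATE VIEW dbo.vw_TopSales AS
SELECT TOP 100 SaleId, SaleAmount
FROM dbo.FactSales
ORDER BY SaleAmount DESC;
```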

Related content
Microsoft Fabric decision guide: choose a data store

Navigate to your items from Microsoft
Fabric Home
Article • 01/26/2025

7 Note

Are you a new developer working with Fabric? Are you interested in sharing your
getting started experience and helping us make improvements? We’d like to talk
with you! Sign up here if interested .

This article gives a high level view of navigating to your items and actions from
Microsoft Fabric Home.

Find what you need on your Home canvas


The final section of Home is the center area, called the canvas. The content of your
canvas updates as you select different items. By default, the Home canvas displays
options for creating new items, recents, and getting started resources. To collapse a
section on your canvas, select the Show less view.

When you create a new item, it saves in your My workspace unless you selected a
workspace from Workspaces. To learn more about creating items in workspaces, see
create workspaces.

7 Note

Power BI Home is different from the other product workloads. To learn more, visit
Power BI Home.


Overview of Home
On Home, you see items that you create and that you have permission to use. These
items are from all the workspaces that you access. That means that the items available
on everyone's Home are different. At first, you might not have much content, but that
changes as you start to create and share Microsoft Fabric items.

7 Note

Home isn't workspace-specific. For example, the Recent workspaces area on Home
might include items from many different workspaces.

In Microsoft Fabric, the term item refers to: apps, lakehouses, activators, warehouses,
reports, and more. Your items are accessible and viewable in Microsoft Fabric, and often
the best place to start working in Microsoft Fabric is from Home. However, once you
create at least one new workspace, are granted access to a workspace, or add an
item to My workspace, you might find it more convenient to start working directly in a
workspace. One way to navigate to a workspace is by using the nav pane and workspace
selector.

To open Home, select it from the top of your navigation pane (nav pane).
Most important content at your fingertips
The items that you can create and access appear on Home. If your Home canvas gets
crowded, use global search to find what you need, quickly. The layout and content on
Fabric Home is different for every user.

1. The left navigation pane (nav pane) links you to different views of your items and
to creator resources. You can remove buttons from the nav pane to suit your
workflow.
2. The selector for switching between Fabric and Power BI.
3. Options for creating new items.
4. The top menu bar for orienting yourself in Fabric, finding items, help, and sending
feedback to Microsoft. The Account manager control is a critical button for looking
up your account information and managing your Fabric trial.
5. Learning resources to get you started learning about Fabric and creating items.
6. Your items organized by recent workspaces, recent items, and favorites.

) Important

Only the content that you can access appears on your Home. For example, if you
don't have permissions to a report, that report doesn't appear on Home. The
exception to this restriction is if your subscription or license changes to one with
less access, then you receive a prompt letting you know that the item is no longer
available and asking you to start a trial or upgrade your license.

Locate items from Home


Microsoft Fabric offers many ways of locating and viewing your items and ways of
creating new items. All approaches access the same pool of items, just in different ways.
Searching is sometimes the easiest and quickest way to find something. At other times,
using the nav pane to open a workspace or OneLake, or selecting a card on the Home
canvas, is your best option.

Use the navigation pane


Along the left side is a narrow vertical bar, referred to as the nav pane. The nav pane
organizes actions you can take with your items in ways that help you get to where you
want to be quickly. Occasionally, using the nav pane is the quickest way to get to your
items.
In the bottom section of the nav pane is where your active workspaces and items are
listed. In this example, our active items are: an Activator, the Retail sales workspace, and
a KQL database. Select any of these items to display them on your canvas. To open other
workspaces, use the workspace selector to view a list of your workspaces and select one
to open on your canvas. To open other items, select them from the nav pane buttons.

The nav pane is there when you open Home and remains there as you open other areas
of Microsoft Fabric.

Add and remove buttons from the nav pane

You can remove buttons from the nav pane for products and actions you don't think you
need. You can always add them back later.

To remove a button, right-click the button and select Unpin.

To add a button back to the nav pane, start by selecting the ellipses (...). Then right-click
the button and select Pin. If you don't have space on the nav pane, the pinned button
might displace a current button.

Find and open workspaces


Workspaces are places to collaborate with colleagues to create collections of items such
as lakehouses, warehouses, and reports and to create task flows.

There are different ways to find and open your workspaces. If you know the name or
owner, you can search. Or you can select the Workspaces button in the nav pane and
choose which workspace to open.
The workspace opens on your canvas, and the name of the workspace is listed on your
nav pane. When you open a workspace, you can view its content. It includes items such
as notebooks, pipelines, reports, and lakehouses.

If no workspaces are active, by default you see My workspace.


When you open a workspace, its name replaces My workspace.
Whenever you create a new item, it's added to the open workspace.

For more information, see Workspaces.

Create items

Create workspaces using a task flow


The first row on your Home canvas is a selection of task flow templates. Fabric task flow
is a workspace feature that enables you to build a visualization of the flow of work in the
workspace. Fabric provides a range of predefined, end-to-end task flows based on
industry best practices that are intended to make it easier to get started with your
project.

To learn more, see Task flows in Microsoft Fabric

Create items using workloads


Workloads refer to the different capabilities available in Microsoft Fabric. Microsoft
Fabric includes preinstalled workloads that can't be removed, including Data Factory,
Data Engineering, Real-Time Intelligence, and more. You might also have preinstalled
workloads that Microsoft or your organization added.

The Workload hub is a central location where you can view all the workloads available to
you. Navigate to your Workload hub by selecting Workloads from the nav pane.
Microsoft Fabric displays a list and description of the available workloads. Select a
workload to open it and learn more.
If your organization gives you access to additional workloads, your Workload hub
displays additional tabs.


When you select a workload, the landing page for that workload displays. Each workload
in Fabric has its own item types associated with it. The landing page has information
about these item types and details about the workload, learning resources, and samples
that you can use to test run the workload.

For more information about workloads, see Workloads in Fabric

Find your content using search, sort, and filter


To learn about the many ways to search from Microsoft Fabric, see Searching and
sorting. Global searching is available by item, name, keyword, workspace, and more.

Find answers in the context sensitive Help pane


Select the Help icon (?) to open and use the contextual Help pane and to search for
answers to questions.

Microsoft Fabric provides context sensitive help in the right rail of your browser. In this
example, we selected Browse from the nav pane and the Help pane automatically
updates to show us articles about the features of the Browse screen. For example, the
Help pane displays articles on View your favorites and See content that others shared
with you. If there are community posts related to the current view, they display under
Forum topics.

Leave the Help pane open as you work, and use the suggested topics to learn how to
use Microsoft Fabric features and terminology. Or, select the X to close the Help pane
and save screen space.

The Help pane is also a great place to search for answers to your questions. Type your
question or keywords in the Search field.
To return to the default Help pane, select the left arrow.

For more information about searching, see Searching and sorting.

For more information about the Help pane, see Get in-product help.

Find help and support


If the self-help answers don't resolve your issue, scroll to the bottom of the Help pane
for more resources. Use the links to ask the community for help or to connect with
Microsoft Fabric Support. For more information about contacting Support, see Support
options.

Find your account and license information


Information about your account and license is available from the Account manager. To
open your Account manager, select the tiny photo from the upper-right corner of
Microsoft Fabric.
For more information about licenses and trials, see Licenses.

Find notifications, settings, and feedback


In the upper-right corner of Home are several helpful icons. Take time to explore your
Notifications center, Settings, and Feedback options. The Help (?) icon displays your
Help and search options and the Account manager icon displays information about
your account and license. Both of these features are described in detail earlier in this
article.

Related content
Power BI Home
Start a Fabric trial



Self-help with the Fabric contextual
Help pane
Article • 01/26/2025

This article explains how to use the Fabric Help pane. The Help pane is feature-aware
and displays articles about the actions and features available on the current Fabric
screen. The Help pane is also a search engine that quickly finds answers to questions in
the Fabric documentation and Fabric community forums.

The Help pane is feature-aware


The feature-aware state is the default view of the Help pane when you open it without
entering any search terms. The Help pane shows a list of recommended topics,
resources that are relevant to your current context and location in Fabric, and a list of
links for other resources. It has three sections:

Feature-aware documents: This section groups the documents by the features
that are available on the current screen. Select a feature in the Fabric screen and
the Help pane updates with documents related to that feature. Select a document
to open it in a separate browser tab.
Forum topics: This section shows topics from the community forums that are
related to the features on the current screen. Select a topic to open it in a separate
browser tab.

Other resources: This section has links for feedback and Support.

The Help pane is a search engine


The Help pane is also a search engine. Enter a keyword to find relevant information and
resources from Microsoft documentation and community forum topics. Use the
dropdown to filter the results.
The Help pane is perfect for learning and
getting started
As you explore Fabric, the feature-aware documents update based on what you select
and where you are in Fabric. This awareness is a great way to learn how to use Fabric.
Give yourself a guided tour by making selections in Fabric and reading the feature-
aware documents. For example, in the Data Science experience, select OneLake data
hub. The Help pane updates with articles that you can use to learn about the data hub.

Open the Help pane


Follow the instructions to practice using the Help pane.

1. From the upper-right corner of Fabric, select the Help icon (?) icon to open the
Help pane.

2. Open Browse and select the Recent feature. The Fabric Help pane displays
documents about the Recent feature. To learn more, select a document. The
document opens in a separate browser tab.

3. Forum posts often provide interesting context. Select one that looks helpful or
interesting.
4. Search the Microsoft documentation and community forums by entering a
keyword in the search pane.
5. Return to the default display of the Help pane by selecting the arrow that appears
to the left of the entry field.

6. Close the Help pane by selecting the X icon in the upper-right corner of the pane.

Still need help?


If you still need help, select Ask the community and submit a question. If you have an
idea for a new feature, let us know by selecting Submit an idea. To open the Support
site, select Get help in Other Resources.



Global search
Article • 01/26/2025

When you're new to Microsoft Fabric, you have only a few items (workspaces, reports,
apps, lakehouses). But as you begin creating and sharing items, you can end up with
long lists of content. That's when searching, filtering, and sorting become helpful.

Search for content


At the top of Home, the global search box finds items by title, name, or keyword.
Sometimes, the fastest way to find an item is to search for it. For example, a dashboard
you haven't used in a while might not show up on your Home canvas, or a colleague might
have shared something with you, but you don't remember what it's named or what type of
content they shared. Sometimes, you might have so much content that it's easier to
search for it than to scroll or sort.

7 Note

Global search is currently unavailable in sovereign clouds.

Search is available from Home and also most other areas of Microsoft Fabric. Just look

for the search box or search icon.

In the Search field, type all or part of the name of an item, creator, keyword, or
workspace. You can even enter your colleague's name to search for content that they
shared with you. The search finds matches in all the items that you own or have access
to.
In addition to the Search field, most experiences on the Microsoft Fabric canvas also
include a Filter by keyword field. Similar to search, use Filter by keyword to narrow
down the content on your canvas to find what you need. The keywords you enter in the
Filter by keyword pane apply to the current view only. For example, if you open Browse
and enter a keyword in the Filter by keyword pane, Microsoft Fabric searches only the
content that appears on the Browse canvas.

Sort content lists


If you have only a few items, sorting isn't necessary. But when you have long lists of
items, sorting helps you find what you need. For example, this Shared with me content
list has many items.
Right now, this content list is sorted alphabetically by name, from Z to A. To change the
sort criteria, select the arrow to the right of Name.

Sorting is also available in other areas of Microsoft Fabric. In this example, the
workspaces are sorted by the Refreshed date. To set sorting criteria for workspaces,
select a column header, and then select again to change the sorting direction.
Not all columns can be sorted. Hover over the column headings to discover which can
be sorted.

Filter content lists


Another way to locate content quickly is to use the content list Filter. Display the filters
by selecting Filter from the upper right corner. The filters available depend on your
location in Microsoft Fabric. This example is from a Recent content list. It allows you to
filter the list by content Type, Time, or Owner.
Related content
Find Fabric items from Home
Start a Fabric trial



Fabric settings
Article • 01/26/2025

The Fabric settings pane provides links to various kinds of settings you can configure.
This article shows how to open the Fabric settings pane and describes the kinds of
settings you can access from there.

Open the Fabric settings pane


To open the Fabric settings pane, select the gear icon in the Fabric portal header.

Preferences
In the preferences section, individual users can set their user preferences, specify the
language of the Fabric user interface, manage their account and notifications, and
configure settings for their personal use throughout the system.

General: Opens the general settings page, where you can set the display language for the Fabric interface and parts of visuals.

Notifications: Opens the notifications settings page, where you can view your subscriptions and alerts.

Item settings: Opens the item settings page, where you can configure per-item-type settings.

Developer settings: Opens the developer settings page, where you can configure developer mode settings.
Resources and extensions
The resources and extensions section provides links to pages where users can use the
following capabilities.

Manage personal/group storage: Opens the personal/group storage management page, where you can see and manage data items that you own or that have been shared with you.

Power BI settings: Opens the Power BI settings page, where you can get to the settings pages for the Power BI items (dashboards, semantic models, workbooks, reports, datamarts, and dataflows) that are in the current workspace.

Manage connections and gateways: Opens a page where you can manage connections, on-premises data gateways, and virtual network data gateways.

Manage embed codes: Opens a page where you can manage embed codes you have created.

Azure Analysis Services migrations: Opens a page where you can migrate your Azure Analysis Services datasets to Power BI Premium.

Governance and insights settings


The governance and insights section provides links to help admins and users with their
admin, governance, and compliance tasks.

Admin portal: Opens the Fabric admin portal, where admins perform various management tasks and configure Fabric tenant settings. For more information about the admin portal, see What is the admin portal?. To learn how to open the admin portal, see How to get to the admin portal.

Microsoft Purview hub (preview): Currently available to Fabric admins only. Opens the Microsoft Purview hub, where you can view Purview insights about your organization's sensitive data. The Microsoft Purview hub also provides links to Purview governance and compliance capabilities and has links to documentation to help you get started with Microsoft Purview governance and compliance in Fabric.
Related content
What is Fabric
What is Microsoft Fabric admin?



Workspaces in Microsoft Fabric and
Power BI
Article • 01/26/2025

Workspaces are places to collaborate with colleagues to create collections of items such
as lakehouses, warehouses, and reports, and to create task flows. This article describes
workspaces, how to manage access to them, and what settings are available.

Ready to get started? Read Create a workspace.

Work with workspaces


Here are some useful tips about working with workspaces.

Set up a task flow for the workspace to organize your data project and to help
others understand and work on your project. Read more about task flows.

Pin workspaces to the top of the workspace flyout list to quickly access your
favorite workspaces. Read more about pinning workspaces.

Use granular workspace roles for flexible permissions management in the


workspaces: Admin, Member, Contributor, and Viewer. Read more about
workspace roles.
Create folders in the workspace: Organize and manage artifacts in the workspace.
Read more about creating folders in workspaces.

Navigate to the current workspace from anywhere by selecting its icon on the left nav
pane. Read more about the current workspace in this article.

Workspace settings: As workspace admin, you can update and manage your
workspace configurations in workspace settings.

Manage a workspace in Git: Git integration in Microsoft Fabric enables Pro


developers to integrate their development processes, tools, and best practices
straight into the Fabric platform. Learn how to manage a workspace with Git.

Contact list: Specify who receives notification about workspace activity. Read more
about workspace contact lists in this article.

Current workspace
After you select and open a workspace, this workspace becomes your current
workspace. You can quickly navigate to it from anywhere by selecting the workspace
icon from the left nav pane.
Workspace layout
A workspace consists of a header, a toolbar, and a view area. There are two views that
can appear in the view area: list view and lineage view. You select the view you want to
see with controls on the toolbar. The following image shows these main workspace
components, with list view selected.

1. Header: The header contains the name and brief description of the workspace, and
also links to other functionality.
2. Toolbar: The toolbar contains controls for adding items to the workspace and
uploading files. It also contains a search box, filter, and the list view and lineage
view selectors.
3. List view and lineage view selectors: The list view and lineage view selectors
enable you to choose which view you want to see in the view area.
4. View area: The view area displays either list view or lineage view.

List view
List view is divided into the task flow and the items list.

1. Task flow: The task flow is where you can create or view a graphical representation
of your data project. The task flow shows the logical flow of the project - it doesn't
show the flow of data. Read more about task flows.
2. Items list: The items list is where you see the items and folders in the workspace. If
you have tasks in the task flow, you can filter the items list by selecting the tasks.
3. Resize bar: You can resize the task flow and items list by dragging the resize bar up
or down.
4. Show/Hide task flow: If you don't want to see the task flow, you can hide it using
the hide/show arrows at the side of the separator bar.

Lineage view
Lineage view shows the flow of data between the items in the workspace. Read more
about lineage view.

Workspace settings
Workspace admins can use workspace settings to manage and update the workspace.
The settings include general settings of the workspace, like the basic information of the
workspace, contact list, SharePoint, license, Azure connections, storage, and other
experiences' specific settings.

To open the workspace settings, you can select the workspace in the nav pane, then
select More options (...) > Workspace settings next to the workspace name.
You can also open it from the workspace page.

Workspace contact list


The Contact list feature allows you to specify which users receive notification about
issues occurring in the workspace. By default, the one who created the workspace is in
the contact list. You can add others to that list while creating workspace or in workspace
settings after creation. Users or groups in the contact list are also listed in the user
interface (UI) of the workspace settings, so workspace users know whom to contact.
Microsoft 365 and SharePoint
The Workspace SharePoint feature allows you to configure a Microsoft 365 Group whose
SharePoint document library is available to workspace users. You create the Group
outside of Microsoft Fabric first, with one available method being from SharePoint. Read
about creating a SharePoint shared library .

7 Note

Creating Microsoft 365 Groups may be restricted in your environment, or the ability
to create them from your SharePoint site may be disabled. If this is the case, speak
with your IT department.

Microsoft Fabric doesn't synchronize permissions between users or groups with


workspace access, and users or groups with Microsoft 365 Group membership. A best
practice is to give access to the workspace to the same Microsoft 365 Group whose file
storage you configured. Then manage workspace access by managing membership of
the Microsoft 365 Group.
You can configure SharePoint in workspace settings by typing in the name of the
Microsoft 365 group that you created earlier. Type just the name, not the URL. Microsoft
Fabric automatically picks up the SharePoint for the group.

License mode
By default, workspaces are created in your organization's shared capacity. When your
organization has other capacities, workspaces including My Workspaces can be assigned
to any capacity in your organization. You can configure it while creating a workspace or
in Workspace settings -> Premium. Read more about licenses.
Azure connections configuration
Workspace admins can configure dataflow storage to use Azure Data Lake Gen 2
storage and Azure Log Analytics (LA) connection to collect usage and performance logs
for the workspace in workspace settings.

With the integration of Azure Data Lake Gen 2 storage, you can bring your own storage
to dataflows, and establish a connection at the workspace level. Read Configure
dataflow storage to use Azure Data Lake Gen 2 for more detail.

After the connection with Azure Log Analytics (LA), activity log data is sent continuously
and is available in Log Analytics in approximately 5 minutes. Read Using Azure Log
Analytics for more detail.

System storage
System storage is the place to manage your semantic model storage in your individual
or workspace account so you can keep publishing reports and semantic models. Your
own semantic models, Excel reports, and those items that someone has shared with you,
are included in your system storage.

In the system storage, you can view how much storage you have used and free up the
storage by deleting the items in it.

Keep in mind that you or someone else may have reports and dashboards based on a
semantic model. If you delete the semantic model, those reports and dashboards don't
work anymore.

Remove the workspace


As an admin for a workspace, you can delete it. When you delete the workspace,
everything contained within the workspace is deleted for all group members, and the
associated app is also removed from AppSource.

In the Workspace settings pane, select Other > Remove this workspace.

2 Warning
If the workspace you're deleting has a workspace identity, that workspace identity
will be irretrievably lost. In some scenarios this could cause Fabric items relying on
the workspace identity for trusted workspace access or authentication to break. For
more information, see Delete a workspace identity.

Administering and auditing workspaces


Administration for workspaces is in the Microsoft Fabric admin portal. Microsoft Fabric
admins decide who in an organization can create workspaces and distribute apps. Read
about managing users' ability to create workspaces in the "Workspace settings" article.

Admins can also see the state of all the workspaces in their organization. They can
manage, recover, and even delete workspaces. Read about managing the workspaces
themselves in the "Admin portal" article.

Auditing
Microsoft Fabric audits the following activities for workspaces.

Friendly name: Operation name

Created Microsoft Fabric folder: CreateFolder
Deleted Microsoft Fabric folder: DeleteFolder
Updated Microsoft Fabric folder: UpdateFolder
Updated Microsoft Fabric folder access: UpdateFolderAccess

Read more about Microsoft Fabric auditing.

Considerations and limitations


Limitations to be aware of:

Workspaces can contain a maximum of 1,000 Fabric and Power BI items.


Certain special characters aren't supported in workspace names when using an
XMLA endpoint. As a workaround, use URL encoding of special characters, for
example, for a forward slash /, use %2F.
A user or a service principal can be a member of up to 1,000 workspaces.
Related content
Create workspaces
Give users access to workspaces



Create a workspace
Article • 01/26/2025

This article explains how to create workspaces in Microsoft Fabric. In workspaces, you
create collections of items such as lakehouses, warehouses, and reports. For more
background, see the Workspaces article.

To create a workspace:

1. Select Workspaces > New workspace.

2. The Create a workspace pane opens.


Give the workspace a unique name (mandatory).

Provide a description of the workspace (optional).

Assign the workspace to a domain (optional).

If you are a domain contributor for the workspace, you can associate the
workspace to a domain, or you can change an existing association. For
information about domains, see Domains in Fabric.

3. When done, either continue to the advanced settings, or select Apply.

Advanced settings
Expand Advanced and you see advanced setting options:

Contact list
Contact list is a place where you can put the names of people as contacts for
information about the workspace. Accordingly, people in this contact list receive system
email notifications for workspace level changes.

By default, the first workspace admin who created the workspace is the contact. You can
add other users or groups according to your needs. Enter a name directly in the input
box; it automatically searches for and matches users or groups in your org.

License mode
Different license modes provide different sets of features for your workspace. After
creation, you can still change the workspace license type in workspace settings, but
some migration effort is needed.

7 Note

Currently, if you want to downgrade the workspace license type from Premium
capacity to Pro (Shared capacity), you must first remove any non-Power BI Fabric
items that the workspace contains. Only after you remove such items will you be
allowed to downgrade the capacity. For more information, see Moving data
around.

Default storage format


Power BI semantic models can store data in a highly compressed in-memory cache for
optimized query performance, enabling fast user interactivity. With Premium capacities,
large semantic models beyond the default limit can be enabled with the Large semantic
model storage format setting. When enabled, semantic model size is limited by the
Premium capacity size or the maximum size set by the administrator. Learn more about
large semantic model storage format.

Template apps
Power BI template apps are developed for sharing outside your organization. If you
check this option, a special type of workspace (template app workspace) is created. It's
not possible to revert it to a normal workspace after creation.
Dataflow storage (preview)
Data used with Power BI is stored in internal storage provided by Power BI by default.
With the integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you
can store your dataflows in your organization's Azure Data Lake Storage Gen2 account.
Learn more about dataflows in Azure Data Lake Storage Gen2 accounts.

Give users access to your workspace


Now that you've created the workspace, you'll want to add other users to roles in the
workspace, so you can collaborate with them. See these articles for more information:

Give users access to a workspace


Roles in workspaces

Pin workspaces
Quickly access your favorite workspaces by pinning them to the top of the workspace
flyout list.

1. Open the workspace flyout from the nav pane and hover over the workspace you
want to pin. Select the Pin to top icon.
2. The workspace is added in the Pinned list.
3. To unpin a workspace, select the unpin button. The workspace is unpinned.

Related content
Read about workspaces



Roles in workspaces in Microsoft Fabric
Article • 01/26/2025

Workspace roles let you manage who can do what in a Microsoft Fabric workspace.
Microsoft Fabric workspaces sit on top of OneLake and divide the data lake into
separate containers that can be secured independently. Workspace roles in Microsoft
Fabric extend the Power BI workspace roles by associating new Microsoft Fabric
capabilities such as data integration and data exploration with existing workspace roles.
For more information on Power BI roles, see Roles in workspaces in Power BI.

You can either assign roles to individuals or to security groups, Microsoft 365 groups,
and distribution lists. To grant access to a workspace, assign those user groups or
individuals to one of the workspace roles: Admin, Member, Contributor, or Viewer.
Here's how to give users access to workspaces.

To create a new workspace, see Create a workspace.

Everyone in a user group gets the role that you assign. If someone is in several user
groups, they get the highest level of permission that's provided by the roles that they're
assigned. If you nest user groups and assign a role to a group, all the contained users
have permissions.

Users in workspace roles have the following Microsoft Fabric capabilities, in addition to
the existing Power BI capabilities associated with these roles.

Microsoft Fabric workspace roles


Each capability below is followed by the roles that have it (Admin, Member, Contributor, Viewer):

Update and delete the workspace: Admin
Add or remove people, including other admins: Admin
Add members or others with lower permissions: Admin, Member
Allow others to reshare items (1): Admin, Member
Create or modify database mirroring items: Admin, Member
Create or modify warehouse items: Admin, Member
Create or modify SQL database items: Admin, Member
View and read content of data pipelines, notebooks, Spark job definitions, ML models and experiments, and eventstreams: Admin, Member, Contributor, Viewer
View and read content of KQL databases, KQL querysets, and real-time dashboards: Admin, Member, Contributor, Viewer
Connect to the SQL analytics endpoint of the Lakehouse or the Warehouse: Admin, Member, Contributor, Viewer
Read Lakehouse and Data warehouse data and shortcuts (2) with T-SQL through the TDS endpoint: Admin, Member, Contributor, Viewer
Read Lakehouse and Data warehouse data and shortcuts (2) through OneLake APIs and Spark: Admin, Member, Contributor
Read Lakehouse data through the Lakehouse explorer: Admin, Member, Contributor
Write or delete data pipelines, notebooks, Spark job definitions, ML models and experiments, and eventstreams: Admin, Member, Contributor
Write or delete Eventhouses (3), KQL querysets, Real-Time Dashboards, and schema and data of KQL databases, Lakehouses, data warehouses, and shortcuts: Admin, Member, Contributor
Execute or cancel execution of notebooks, Spark job definitions, ML models, and experiments: Admin, Member, Contributor
Execute or cancel execution of data pipelines: Admin, Member, Contributor
View execution output of data pipelines, notebooks, ML models and experiments: Admin, Member, Contributor, Viewer
Schedule data refreshes via the on-premises gateway (4): Admin, Member, Contributor
Modify gateway connection settings (4): Admin, Member, Contributor

1 Contributors and Viewers can also share items in a workspace, if they have Reshare
permissions.

2 Other permissions are needed to read data from shortcut destination. Learn more
about shortcut security model.

3 Other permissions are needed to perform certain operations on data in an Eventhouse.


Learn more about the hybrid role-based access control model.
4 Keep in mind that you also need permissions on the gateway. Those permissions are
managed elsewhere, independent of workspace roles and permissions.

Related content
Roles in workspaces in Power BI
Create workspaces
Give users access to workspaces
Fabric and OneLake security
OneLake shortcuts
Data warehouse security
Data engineering security
Data science roles and permissions
Role-based access control in Eventhouse



Create items in workspaces
Article • 02/10/2025

This article explains how to create items in workspaces in Microsoft Fabric. For more
information about items and workspaces, see the Microsoft Fabric terminology and
Workspaces article.

Create an item in a workspace


1. In a workspace, select New item

2. All items are categorized by tasks. Each task represents a daily job to be done
when you build a data solution: get data, store data, prepare data, analyze and
train data, track data, visualize data, and develop data. Inside each category,
item types are sorted alphabetically. You can scroll up and down to browse all
the item types that are available for you to create.
3. Select the card of the item type you want to create to start the creation process.

Search by item type


1. To find the item type you need, enter a keyword for that item type to search in
this panel.

Add items to Favorites


1. Select the star button on an item type's card to add that item type to your
'Favorites'.
2. Select 'Favorites' to see all the item types you added to 'Favorites'.

3. The next time you select the 'New item' button, 'Favorites' is shown by default so
that you can quickly access the item types you create most frequently.

4. Select the star button again to remove an item type from 'Favorites'.

Import items
You can also import files from outside Fabric to create Fabric items in a workspace.
1. Select 'Import' in a workspace to see all the item types you can create by
importing files from somewhere else.

2. Select the item type you want to import, and then select the location where your
files are stored.
3. Select the file you want to import and confirm.

4. Check that the new items are created in the workspace and that the import process
completed successfully.

Related content
Create workspaces



Create folders in workspaces
Article • 01/26/2025

This article explains what folders in workspaces are and how to use them in workspaces
in Microsoft Fabric. Folders are organizational units inside a workspace that enable users
to efficiently organize and manage artifacts in the workspace. For more information
about workspaces, see the Workspaces article.

Create a folder in a workspace


1. In a workspace, select New folder.

2. Enter a name for the folder in the New folder dialog box. See Folder name
requirements for naming restrictions.

3. The folder is created successfully.


4. You can create nested subfolders in a folder in the same way. A maximum of 10
levels of nested subfolders can be created.

7 Note

You can nest up to 10 folders in the root folder.

Folder name requirements


Folder names must follow certain naming conventions:

The name can't include C0 and C1 control codes.


The name can't contain leading or trailing spaces.
The name can't contain these characters: ~"#.&*:<>?/{|}.
The name can't contain system-reserved names, including: $recycle.bin, recycled,
recycler.
The name length can't exceed 255 characters.
You can't have more than one folder with the same name in a folder or at the root
level of the workspace.

Move items into a folder

Move a single item


1. Select the context menu (...) of the item you want to move, then select Move to.

2. Select the destination folder where you want to move this item.

3. Select Move here.


4. By selecting Open folder in the notification or navigating to the folder directly, you
can go to the destination folder to check if the item moved successfully.

Move multiple items


1. Select multiple items, then select Move from the command bar.

2. Select a destination where you want to move these items. You can also create a
new folder if you need it.

Create an item in a folder


1. Go to a folder, select New, then select the item you want to create. The item is
created in this folder.

7 Note

Currently, you can't create certain items in a folder:

Dataflows gen2
Streaming semantic models
Streaming dataflows

If you create items from the home page or the Create hub, items are created
at the root level of the workspace.

Publish to folder (preview)


You can now publish your Power BI reports to specific folders in your workspace.

When you publish a report, you can choose the specific workspace and folder for your
report, as illustrated in the following image.

To publish reports to specific folders in the service, make sure that in Power BI Desktop,
the Publish dialogs support folder selection setting is enabled in the Preview features
tab in the options menu.

Rename a folder
1. Select the context (...) menu, then select Rename.
2. Give the folder a new name and select the Rename button.

7 Note

When renaming a folder, follow the same naming convention as when you're
creating a folder. See Folder name requirements for naming restrictions.

Delete a folder
1. Make sure the folder is empty.

2. Select the context menu (...) and select Delete.


7 Note

Currently you can only delete empty folders.

Permission model
Workspace admins, members, and contributors can create, modify, and delete folders in
the workspace. Viewers can only view folder hierarchy and navigate in the workspace.

Currently, folders inherit the permissions of the workspace where they're located.

Create folder: Admin, Member, Contributor (not Viewer)

Delete folder: Admin, Member, Contributor (not Viewer)

Rename folder: Admin, Member, Contributor (not Viewer)

Move folder and items: Admin, Member, Contributor (not Viewer)

View folder in workspace list: Admin, Member, Contributor, Viewer


Considerations and limitations
Currently dataflows gen2, streaming semantic models, and streaming dataflows
can't be created in folders.
If you trigger item creation from the home page, create hub, and industry solution,
items are created at the root level of workspaces.
Git doesn't currently support workspace folders.
If folders are enabled in the Power BI service but not in Power BI Desktop,
republishing a report that is in a nested folder replaces the report in the nested
folder.
If folders are enabled in Power BI Desktop but not in the service, and you publish
to a nested folder, the report is published to the general workspace.
When publishing reports to folders, report names must be unique throughout an
entire workspace, regardless of their location. Therefore, when publishing a report
to a workspace that has another report with the same name in a different folder,
the report will publish to the location of the already existing report. If you want to
move the report to a new folder location in the workspace, you need to make this
change in the Power BI service.
Folders are not supported in Template App workspaces.

Related content
Folders in deployment pipelines
Create workspaces
Give users access to workspaces



Give users access to workspaces
Article • 01/26/2025

After you create a workspace in Microsoft Fabric, or if you have an admin or member
role in a workspace, you can give others access to it by adding them to the different
roles. Workspace creators are automatically admins. For an explanation of the different
roles, see Roles in workspaces.

7 Note

To enforce row-level security (RLS) on Power BI items for Microsoft Fabric Pro users
who browse content in a workspace, assign them the Viewer Role.

After you add or remove workspace access for a user or a group, the permission
change only takes effect the next time the user logs into Microsoft Fabric.

Give access to your workspace


1. Because you have the Admin or Member role in the workspace, you see Manage
access on the command bar of the workspace page. Sometimes this entry is on the
More options (...) menu.

2. Select Add people or groups.


3. Enter name or email, select a role, and select Add. You can add security groups,
distribution lists, Microsoft 365 groups, or individuals to these workspaces as
admins, members, contributors, or viewers. If you have the member role, you can
only add others to the member, contributor, or viewer roles.
4. You can view and modify access later if needed. Use the Search box to search for
people or groups who already have access to this workspace. To modify access,
select the drop-down arrow and select a role.
Related content
Read about the workspace experience.
Create workspaces.
Roles in workspaces



Get started with Git integration
Article • 09/22/2024

This article walks you through the following basic tasks in Microsoft Fabric’s Git
integration tool:

Connect to a Git repo


Commit changes
Update from Git
Disconnect from Git

It’s recommended to read the overview of Git integration before you begin.

Prerequisites
To integrate Git with your Microsoft Fabric workspace, you need to set up the following
prerequisites for both Fabric and Git.

Fabric prerequisites
To access the Git integration feature, you need a Fabric capacity. A Fabric capacity is
required to use all supported Fabric items. If you don't have one yet, sign up for a free
trial. Customers that already have a Power BI Premium capacity, can use that capacity,
but keep in mind that certain Power BI SKUs only support Power BI items.

In addition, the following tenant switches must be enabled from the Admin portal:

Users can create Fabric items


Users can synchronize workspace items with their Git repositories
For GitHub users only: Users can synchronize workspace items with GitHub
repositories

These switches can be enabled by the tenant admin, capacity admin, or workspace
admin, depending on your organization's settings.

Git prerequisites
Git integration is currently supported for Azure DevOps and GitHub. To use Git
integration with your Fabric workspace, you need the following in either Azure DevOps
or GitHub:
Azure DevOps

An active Azure account registered to the same user that is using the Fabric
workspace. Create a free account .
Access to an existing repository.

Connect a workspace to a Git repo

Connect to a Git repo


Only a workspace admin can connect a workspace to a repository, but once connected,
anyone with permission can work in the workspace. If you're not an admin, ask your
admin for help with connecting. To connect a workspace to an Azure or GitHub Repo,
follow these steps:

1. Sign into Fabric and navigate to the workspace you want to connect with.

2. Go to Workspace settings

3. Select Git integration.

4. Select your Git provider. Currently, Azure DevOps and GitHub are supported.

Azure DevOps Connect

If you select Azure DevOps, select Connect to automatically sign into the Azure
Repos account registered to the Microsoft Entra user signed into Fabric.
Connect to a workspace
If the workspace is already connected to GitHub, follow the instructions for Connecting
to a shared workspace.

Azure DevOps branch connect

1. From the dropdown menu, specify the following details about the branch you
want to connect to:

7 Note

You can only connect a workspace to one branch and one folder at a
time.

Organization
Project
Git repository.
Branch (Select an existing branch using the drop-down menu, or select +
New Branch to create a new branch. You can only connect to one branch
at a time.)
Folder (Type in the name of an existing folder or enter a name to create a
new folder. If you leave the folder name blank, content will be created in
the root folder. You can only connect to one folder at a time.)

Select Connect and sync.

During the initial sync, if either the workspace or Git branch is empty, content is copied
from the nonempty location to the empty one. If both the workspace and Git branch
have content, you’re asked which direction the sync should go. For more information on
this initial sync, see Connect and sync.

After you connect, the Workspace displays information about source control that allows
the user to view the connected branch, the status of each item in the branch and the
time of the last sync.
To keep your workspace synced with the Git branch, commit any changes you make in
the workspace to the Git branch, and update your workspace whenever anyone creates
new commits to the Git branch.

Commit changes to git


Once you successfully connect to a Git folder, edit your workspace as usual. Any
changes you save are saved in the workspace only. When you’re ready, you can commit
your changes to the Git branch, or you can undo the changes and revert to the previous
status. Read more about commits.

Commit to Git

To commit your changes to the Git branch, follow these steps:

1. Go to the workspace.

2. Select the Source control icon. This icon shows the number of uncommitted
changes.

3. Select Changes from the Source control panel. A list appears with all the
items you changed, and an icon indicating whether each item is new, modified,
in conflict, or deleted.

4. Select the items you want to commit. To select all items, check the top box.

5. Add a comment in the box. If you don't add a comment, a default message is
added automatically.

6. Select Commit.
After the changes are committed, the items that were committed are removed from
the list, and the workspace will point to the new commit that it synced to.
After the commit is completed successfully, the status of the selected items changes
from Uncommitted to Synced.

Update workspace from Git


Whenever anyone commits a new change to the connected Git branch, a notification
appears in the relevant workspace. Use the Source control panel to pull the latest
changes, merges, or reverts into the workspace and update live items. Read more about
updating.

To update a workspace, follow these steps:

1. Go to the workspace.
2. Select the Source control icon.
3. Select Updates from the Source control panel. A list appears with all the items that
were changed in the branch since the last update.
4. Select Update all.
After it updates successfully, the list of items is removed, and the workspace will point to
the new commit that it's synced to.
After the update is completed successfully, the status of the items changes to Synced.

Disconnect a workspace from Git


Only a workspace admin can disconnect a workspace from a Git Repo. If you’re not an
admin, ask your admin for help with disconnecting. If you’re an admin and want to
disconnect your repo, follow these steps:

1. Go to Workspace settings
2. Select Git integration
3. Select Disconnect workspace
4. Select Disconnect again to confirm.

Permissions
The actions you can take on a workspace depend on the permissions you have in both
the workspace and the Git repo. For a more detailed discussion of permissions, see
Permissions.
Considerations and limitations

General Git integration limitations


The authentication method in Fabric must be at least as strong as the
authentication method for Git. For example, if Git requires multifactor
authentication, Fabric needs to require multifactor authentication as well.
Power BI Datasets connected to Analysis Services aren't supported at this time.
Workspaces with template apps installed can't be connected to Git.
Submodules aren't supported.
Sovereign clouds aren't supported.

Azure DevOps limitations

The Azure DevOps account must be registered to the same user that is using
the Fabric workspace.
The tenant admin must enable cross-geo exports if the workspace and Git
repo are in two different geographical regions.
If your organization set up conditional access, make sure the Power BI Service
has the same conditions set for authentication to function as expected.
The commit size is limited to 125 MB.

GitHub Enterprise limitations


Some GitHub Enterprise settings aren't supported. For example:

IP allowlist
Private networking
Custom domains

Workspace limitations
Only the workspace admin can manage connections to the Git repo, such as
connecting, disconnecting, or adding a branch.
Once connected, anyone with permission can work in the workspace.
The workspace folder structure isn't reflected in the Git repository. Workspace
items in folders are exported to the root directory.

Branch and folder limitations


Maximum length of branch name is 244 characters.
Maximum length of full path for file names is 250 characters. Longer names fail.
Maximum file size is 25 MB.
You can’t download a report/dataset as .pbix from the service after deploying them
with Git integration.
If the item's display name has any of these characteristics, the Git folder is
renamed to the logical ID (GUID) and type:
Is longer than 256 characters
Ends with a . or a space
Contains any of the forbidden characters described in directory name limitations

Directory name limitations


The name of the directory that connects to the Git repository has the following
naming restrictions:

The directory name can't begin or end with a space or tab.
The directory name can't contain any of the following characters: " / : < > \ * ? |
The item folder (the folder that contains the item files) can't contain any of the
following characters: " : < > \ * ? | . If you rename the folder to
something that includes one of these characters, Git can't connect or sync with the
workspace and an error occurs.
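Because a name that violates these rules blocks connecting or syncing, it can help to check item and folder names before you commit. The following sketch is only an approximation of the restrictions above (it merges the directory-name and display-name rules into one check); adjust the character set and limits for your scenario.

```python
# Approximation of the naming restrictions listed above; the forbidden set below
# mirrors the directory-name list (item folders additionally allow "/").
FORBIDDEN_CHARS = set('"/:<>\\*?|')
MAX_NAME_LENGTH = 256

def name_problems(name: str) -> list[str]:
    """Return the rule violations that would make Git fall back to the logical ID."""
    problems = []
    if len(name) > MAX_NAME_LENGTH:
        problems.append("longer than 256 characters")
    if name != name.strip(" \t"):
        problems.append("begins or ends with a space or tab")
    if name.endswith(".") or name.endswith(" "):
        problems.append("ends with a '.' or a space")
    bad = sorted(FORBIDDEN_CHARS & set(name))
    if bad:
        problems.append(f"contains forbidden characters: {bad}")
    return problems

print(name_problems("Sales report?"))  # flags the '?'
print(name_problems("Sales report"))   # [] - nothing to fix
```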

Branching out limitations


Branch out requires the permissions listed in the permissions table.
There must be an available capacity for this action.
All workspace and branch naming limitations apply when branching out to a new
workspace.
When branching out, a new workspace is created and the settings from the
original workspace aren't copied. Adjust any settings or definitions to ensure that
the new workspace meets your organization's policies.
Only Git supported items are available in the new workspace.
The related branches list only shows branches and workspaces you have
permission to view.
Git integration must be enabled.

Sync and commit limitations


You can only sync in one direction at a time. You can’t commit and update at the
same time.
Sensitivity labels aren't supported and exporting items with sensitivity labels might
be disabled. To commit items that have sensitivity labels without the sensitivity
label, ask your administrator for help.
Sync works only with supported item types. Unsupported items in the folder are ignored.
Duplicating names isn't allowed. Even if Power BI allows name duplication, the
update, commit, or undo action fails.
B2B isn’t supported.
Conflict resolution is partially done in Git.
During the Commit to Git process, the Fabric service deletes files inside the item
folder that aren't part of the item definition. Unrelated files not in an item folder
aren't deleted.
After you commit changes, you might notice some unexpected changes to the
item that you didn't make. These changes are semantically insignificant and can
happen for several reasons. For example:
Manually changing the item definition file. These changes are valid, but might
be different than if done through the editors. For example, if you rename a
semantic model column in Git and import this change to the workspace, the
next time you commit changes to the semantic model, the .bim file will register
as changed and the modified column is pushed to the back of the columns array.
This is because the AS engine that generates the .bim files pushes renamed
columns to the end of the array. This change doesn't affect the way the item
operates.
Committing a file that uses CRLF line breaks. The service uses LF (line feed) line
breaks. If you have item files in the Git repo with CRLF line breaks, when you
commit from the service these files are changed to LF. For example, if you open
a report in Power BI Desktop and save the project file (.pbip), it's uploaded to Git
with CRLF line breaks, and the next commit from the service converts those files
to LF, so they show up as changed.
Refreshing a semantic model using the Enhanced refresh API causes a Git diff after
each refresh.

Related content
Understand the Git integration process
Manage Git branches
Git integration best practices



Take ownership of Fabric items
Article • 01/31/2025

When a user leaves the organization, or if they don't sign in for more than 90 days, it's
possible that any Fabric items they own will stop working correctly. In such cases,
anyone with read and write permissions on such an item (such as workspace admins,
members, and contributors) can take ownership of the item, using the procedure
described in this article.

When a user takes over ownership of an item using this procedure, they also become
the owner of any child items the item might have. You can't take over ownership of child
items directly - only through the parent item.

Note

Items such as semantic models, reports, datamarts, dataflows gen1 and dataflows
gen2 have existing functionality for changing item ownership that remains the
same. This article describes the procedure for taking ownership of other Fabric
items.

Prerequisites
To take over ownership of a Fabric item, you must have read and write permissions on
the item.

Take ownership of a Fabric item


To take ownership of a Fabric item:

1. Navigate to the item's settings. Remember, the item can't be a child item.

2. In the About tab, select Take over.

3. A message bar indicates whether the take over was successful.

If the take over fails for any reason, select Take over again.

| Operation status | Error message | Next step |
| --- | --- | --- |
| Success | Successfully took over the item. | None. |
| Partial failure | Can't take over child items. Try again. | Retry take over of the parent item. |
| Complete failure | Can't take over <item_name>. Try again. | Retry take over of the parent item. |

Note

Data Pipeline items require the additional step of ensuring that the Last Modified
By user is also updated after taking item ownership. You can do this by making a
small edit to the item and saving it. For example, you could make a small change to
the activity name.

Important

The take over feature doesn't cover ownership change of related items. For
instance, if a data pipeline has notebook activity, changing ownership of the data
pipeline doesn't change the ownership of the notebook. Ownership of related
items needs to be changed separately.

Repair connections after Fabric item ownership change

Some connections that use the previous item owner's credentials might stop working if
the new item owner doesn't have access to the connection. In such cases, you might see
a warning message.

In this scenario, the new item owner can fix connections by going into the item and
replacing the connection with a new or existing connection. The following sections
describe the steps for doing this procedure for several common item types. For other
item types that have connections, refer to the item's connection management
documentation.

Pipelines
1. Open the pipeline.
2. Select the activity created.

3. Replace the connection in the source and/or destination with the appropriate
connection.

KQL Queryset
1. Open the KQL queryset.

2. In the Explorer pane, add another connection or select an existing one.

Real-Time Dashboard
1. Open the real-time dashboard in edit mode.

2. Choose New data source on the toolbar.

3. Select Add+ to add new data sources.

4. In the new or existing tile, select the appropriate data source.

User data functions


1. Open the item and go to Manage Connections.

2. Select Add data connection to add a new connection and use that in the data
function.

Considerations and limitations


The following Fabric items don't support ownership change:
Mirrored Cosmos DB

Mirrored SQL DB

Mirrored SQL Managed Instance

Mirrored Snowflake

Mirrored database

If a mirrored database stops working because the item owner has left the
organization or their credentials are disabled, create a new mirrored database.

The option to take over an item isn't available if the item is a system-generated
item not visible or accessible to users in a workspace. For instance, a parent item
might have system-generated child items - this can happen when items such as
Eventstream items and Data Activator items are created through the Real-Time
hub. In such cases, the take over option is not available for the parent item.

Currently, there's no API support for changing ownership of Fabric items. This
doesn't impact existing functionality for changing ownership of items such as
semantic models, reports, dataflows gen1 and gen2, and datamarts, which
continues to be available. For information about taking ownership of warehouses,
see Change ownership of Fabric Warehouse.

The Fabric item takeover feature doesn't support taking over ownership as a service
principal.



Workspace identity
Article • 09/05/2024

A Fabric workspace identity is an automatically managed service principal that can be


associated with a Fabric workspace. Fabric workspaces with a workspace identity can
securely read or write to firewall-enabled Azure Data Lake Storage Gen2 accounts
through trusted workspace access for OneLake shortcuts. In the future, Fabric items will
be able to use the identity when connecting to resources that support Microsoft Entra
authentication. Fabric will use workspace identities to obtain Microsoft Entra tokens
without the customer having to manage any credentials.

Workspace identities can be created in the workspace settings of workspaces that are
associated with a Fabric capacity. A workspace identity is automatically assigned the
workspace contributor role and has access to workspace items.

When you create a workspace identity, Fabric creates a service principal in Microsoft
Entra ID to represent the identity. An accompanying app registration is also created.
Fabric automatically manages the credentials associated with workspace identities,
thereby preventing credential leaks and downtime due to improper credential handling.

Note

Fabric workspace identity is generally available. You can create a workspace
identity in any workspace except My workspace.

While Fabric workspace identities share some similarities with Azure managed identities,
their lifecycle, administration, and governance are different. A workspace identity has an
independent lifecycle that is managed entirely in Fabric. A Fabric workspace can
optionally be associated with an identity. When the workspace is deleted, the identity
gets deleted. The name of the workspace identity is always the same as the name of the
workspace it's associated with.

Create and manage a workspace identity


You must be a workspace admin to be able to create and manage a workspace identity.
The workspace you're creating the identity for can't be a My Workspace.

1. Navigate to the workspace and open the workspace settings.


2. Select the Workspace identity tab.
3. Select the + Workspace identity button.
When the workspace identity has been created, the tab displays the workspace identity
details and the list of authorized users.
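Workspace identities can also be provisioned from scripts. The sketch below assumes a provisionIdentity operation on the workspace resource in the Fabric REST API and a placeholder access token; confirm the endpoint name and the permissions it requires in the API reference before using it.

```python
import requests

# Assumption: a provisionIdentity endpoint on the workspace resource; the caller
# must be a workspace admin and the workspace must be on a Fabric capacity.
FABRIC_API = "https://api.fabric.microsoft.com/v1"
headers = {"Authorization": "Bearer <access-token>"}
workspace_id = "<workspace-guid>"

resp = requests.post(f"{FABRIC_API}/workspaces/{workspace_id}/provisionIdentity",
                     headers=headers)
resp.raise_for_status()
print(resp.status_code)  # the new identity also appears in the workspace settings
```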

The sections of the workspace identity configuration are described in the following
sections.

Identity details

| Detail | Description |
| --- | --- |
| Name | Workspace identity name. The workspace identity name is the same as the workspace name. |
| ID | The workspace identity GUID. This is a unique identifier for the identity. |
| Role | The workspace role assigned to the identity. Workspace identities are automatically assigned the contributor role upon creation. |
| State | The state of the workspace identity. Possible values: Active, Inactive, Deleting, Unusable, Failed, DeleteFailed |

Authorized users
For information, see Access control.

Delete a workspace identity


When an identity is deleted, Fabric items relying on the workspace identity for trusted
workspace access or authentication will break. Deleted workspace identities cannot be
restored.

Note

When a workspace is deleted, its workspace identity is deleted as well. If the
workspace is restored after deletion, the workspace identity is not restored. If you
want the restored workspace to have a workspace identity, you must create a new
one.

How to use workspace identity


Shortcuts in a workspace that has a workspace identity can be used for trusted service
access. For more information, see trusted workspace access.

Security, administration, and governance of the workspace identity
The following sections describe who can use the workspace identity, and how you can
monitor it in Microsoft Purview and Azure.

Access control
Workspace identity can be created and deleted by workspace admins. The workspace
identity has the workspace contributor role on the workspace.

Currently, workspace identity isn't supported for authentication to target resources in
connections. This support is planned for the future, at which point admins, members,
and contributors will be able to use the workspace identity for authentication in
connections.

Application Administrators or users with higher roles can view, modify, and delete the
service principal and app registration associated with the workspace identity in Azure.

Warning
Modifying or deleting the service principal or app registration in Azure is not
recommended, as it will cause Fabric items relying on workspace identity to stop
working.

Administer the workspace identity in Fabric


Fabric administrators can administer the workspace identities created in their tenant on
the Fabric identities tab in the admin portal.

1. Navigate to the Fabric identities tab in the Admin portal.


2. Select a workspace identity, and then select Details.
3. In the Details tab, you can view additional information related to the workspace
identity.
4. You can also delete a workspace identity.

Note

Workspace identities cannot be restored after deletion. Be sure to review the
consequences of deleting a workspace identity described in Delete a
workspace identity.

Administer the workspace identity in Purview


You can view the audit events generated upon the creation and deletion of a workspace
identity in the Purview audit log. To access the log:

1. Navigate to the Microsoft Purview hub.


2. Select the Audit tile.
3. In the audit search form that appears, use the Activities - friendly names field to
search for fabric identity to find the activities related to workspace identities.
Currently, the following activities related to workspace identities are:

Created Fabric Identity for Workspace


Retrieved Fabric Identity for Workspace
Deleted Fabric Identity for Workspace
Retrieved Fabric Identity Token for Workspace

Administer the workspace identity in Azure


The application associated with the workspace identity can be viewed under both
Enterprise applications and App registrations in the Azure portal.

Enterprise applications

The application associated with the workspace identity can be seen in Enterprise
Applications in the Azure portal. Fabric Identity Management app is its configuration
owner.

Warning

Modifications to the application made here will cause the workspace identity to
stop working.

To view the audit logs and sign-in logs for this identity:

1. Sign in to the Azure portal.


2. Navigate to Microsoft Entra ID > Enterprise Applications.
3. Select either Audit logs or Sign in logs, as desired.

App registrations
The application associated with the workspace identity can be seen under App
registrations in the Azure portal. No modifications should be made there, as this will
cause the workspace identity to stop working.

Advanced scenarios
The following sections describe scenarios involving workspace identities that might
occur.

Deleting the identity


The workspace identity can be deleted in the workspace settings. When an identity is
deleted, Fabric items relying on the workspace identity for trusted workspace access or
authentication will break. Deleted workspace identities can't be restored.

When a workspace is deleted, its workspace identity is deleted as well. If the workspace
is restored after deletion, the workspace identity is not restored. If you want the
restored workspace to have a workspace identity, you must create a new one.
Renaming the workspace
When a workspace gets renamed, the workspace identity is also renamed to match the
workspace name. However, its Entra application and service principal remain the same.
Note that there can be multiple application and app registration objects with the same
name in a tenant.

Considerations and limitations


A workspace identity can be created in any workspace except a My Workspace.
You can only use trusted access in F SKUs.
If a workspace with a workspace identity is migrated to a non-Fabric capacity or to
a non-F SKU Fabric capacity, the identity won't be disabled or deleted, but Fabric
items relying on the workspace identity will stop working.
A maximum of 1,000 workspace identities can be created in a tenant. Once this
limit is reached, workspace identities must be deleted to enable newer ones to be
created.
Azure Data Lake Storage Gen2 shortcuts in a workspace that has a workspace
identity will be capable of trusted service access.

Troubleshooting issues with creating a workspace identity
If you can't create a workspace identity because the creation button is disabled,
make sure you have the workspace administrator role, and that the workspace is
associated with a Fabric F SKU capacity.

If you run into issues the first time you create a workspace identity in your tenant,
try the following steps:

1. If the workspace identity state is failed, wait for an hour and then delete the
identity.
2. After the identity has been deleted, wait 5 minutes and then create the
identity again.

Related content
Trusted workspace access
Fabric identities


What is workspace monitoring
(preview)?
Article • 01/28/2025

Workspace monitoring is a Microsoft Fabric database that collects and organizes logs
and metrics from a range of Fabric items in your workspace. Workspace monitoring lets
workspace users access and analyze logs and metrics related to Fabric items in the
workspace. You can query the database to gain insights into the usage and performance
of your workspace.

Monitoring
Workspace monitoring creates an Eventhouse database in your workspace that collects
and organizes logs and metrics from the Fabric items in the workspace. Workspace
contributors can query the database to learn more about the performance of their
Fabric items.

Security - Workspace monitoring is a secure read-only database that is accessible


only to workspace users with at least a contributor role.

Data collection - The monitoring Eventhouse collects diagnostic logs and metrics
from Fabric items in the workspace. The data is aggregated and stored in the
monitoring database, where it can be queried using KQL or SQL. The database
supports both historical log analysis and real-time data streaming.

Access - Access the monitoring database from the workspace. You can build and
save query sets and dashboards to simplify data exploration.

Operation logs
After you install workspace monitoring, you can query the following logs:

Data engineering (GraphQL)


GraphQL operations

Eventhouse monitoring in Real-Time Intelligence


Command logs
Data operation logs
Ingestion results logs
Metrics
Query logs

Mirrored database
Mirrored database logs

Power BI
Semantic models

Sample queries
Workload monitoring sample queries are available from workspace-monitoring in the
Fabric samples GitHub repository.

Considerations and limitations


You can enable either workspace monitoring or log analytics in a workspace, but
not both at the same time. To enable workspace monitoring in a workspace that
already has log analytics enabled, delete the log analytics configuration and wait
a few hours before enabling workspace monitoring.

The workspace monitoring Eventhouse is a read-only item.


To delete the database, use the workspace settings. Before recreating a deleted
database, wait about 15 minutes.
To share the database, grant users a workspace member or admin role.

The retention period for monitoring data is 30 days.

You can't configure ingestion to filter for a specific log type or category, such as
error or workload type.

User data operation logs aren't available even though the table is available in the
monitoring database.

Related content
Enable monitoring in your workspace



Enable monitoring in your workspace
Article • 01/26/2025

This article explains how to enable monitoring in a Microsoft Fabric workspace.

Prerequisites
A Power BI Premium or a Fabric capacity.

The Workspace admins can turn on monitoring for their workspaces tenant setting
is enabled. To enable the setting, you need to be a Fabric administrator. If you're
not a Fabric administrator, ask the Fabric administrator in your organization to
enable the setting.

You have the admin role in the workspace.

Enable monitoring
Follow these steps to enable monitoring in your workspace:

1. Go to the workspace you want to enable monitoring for, and select Workspace
settings (⚙).

2. In Workspace settings, select Monitoring.

3. Select +Eventhouse and wait for the database to be created.

Related content
What is workspace monitoring?



OneLake catalog overview
Article • 01/21/2025

OneLake catalog is a centralized place that helps you find, explore, and use the Fabric
items you need, and govern the data you own. It features two tabs:

Explore tab: The explore tab has an items list with an in-context item details view
that makes it possible to browse through and explore items without losing your list
context. It also provides selectors and filters to narrow down and focus the list,
making it easier to find what you need. By default, the OneLake catalog opens on
the Explore tab.

Govern tab: The govern tab provides insights that help you understand the
governance posture of all the data you own in Fabric, and presents recommended
actions you can take to improve the governance status of your data.

Open the OneLake catalog


To open the OneLake catalog, select the OneLake icon in the Fabric navigation pane.
Select the tab you're interested in if it isn't displayed by default.

Related content
Discover and explore Fabric items in the OneLake catalog
Govern your data in Fabric
Endorsement
Fabric domains
Lineage in Fabric
Monitor hub



Overview of Copilot in Fabric
Article • 01/26/2025

Copilot and other generative AI features in preview bring new ways to transform and
analyze data, generate insights, and create visualizations and reports in Microsoft Fabric
and Power BI.

Enable Copilot
Before your business can start using Copilot capabilities in Microsoft Fabric, you need to
enable Copilot.

Read on for answers to your questions about how it works in the different workloads,
how it keeps your business data secure and adheres to privacy requirements, and how
to use generative AI responsibly.

Note

Copilot is not yet supported for sovereign clouds due to GPU availability.

Copilot for Data Science and Data Engineering


Copilot for Data Engineering and Data Science is an AI-enhanced toolset tailored to
support data professionals in their workflow. It provides intelligent code completion,
automates routine tasks, and supplies industry-standard code templates to facilitate
building robust data pipelines and crafting complex analytical models. Utilizing
advanced machine learning algorithms, Copilot offers contextual code suggestions that
adapt to the specific task at hand, helping you code more effectively and with greater
ease. From data preparation to insight generation, Microsoft Fabric Copilot acts as an
interactive aide, lightening the load on engineers and scientists and expediting the
journey from raw data to meaningful conclusions.

Copilot for Data Factory


Copilot for Data Factory is an AI-enhanced toolset that supports both citizen and
professional data wranglers in streamlining their workflow. It provides intelligent code
generation to transform data with ease and generates code explanations to help you
better understand complex tasks. For more information, see Copilot for Data Factory.

Copilot for Data Warehouse
Microsoft Copilot for Fabric Data Warehouse is an AI assistant designed to streamline
your data warehousing tasks. Key features of Copilot for Warehouse include Natural
Language to SQL, code completion, quick actions, and intelligent insights. For more
information, see Copilot for Data Warehouse.

Copilot for Power BI


Power BI has introduced generative AI that allows you to create reports automatically by
selecting the topic for a report or by prompting Copilot for Power BI on a particular
topic. You can use Copilot for Power BI to generate a summary for the report page that
you just created, and generate synonyms for better Q&A capabilities.

For more information on the features and how to use Copilot for Power BI, see Overview
of Copilot for Power BI.

Copilot for Real-Time Intelligence


Copilot for Real-Time Intelligence is an advanced AI tool designed to help you explore
your data and extract valuable insights. You can input questions about your data, which
are then automatically translated into Kusto Query Language (KQL) queries. Copilot
streamlines the process of analyzing data for both experienced KQL users and citizen
data scientists.

For more information, see Copilot for Real-Time Intelligence overview.

Copilot for SQL database


Copilot for SQL database in Microsoft Fabric is an AI assistant designed to streamline
your OLTP database tasks. Key features of Copilot for SQL database include Natural
Language to SQL, code completion, quick actions, and document-based Q&A. For more
information, see Copilot for SQL database.

Create your own AI solution accelerators

Build your own copilots


Using the client advisor AI accelerator tool, you can build a custom copilot with your
enterprise data. The client advisor AI accelerator uses Azure OpenAI Service, Azure AI
Search, and Microsoft Fabric to create custom Copilot solutions. This all-in-one custom
copilot empowers client advisors to use generative AI across structured and
unstructured data optimizing daily tasks and fostering better interactions with clients. To
learn more, see the GitHub repo.

Conversational knowledge mining solution accelerator


The conversational knowledge mining solution accelerator is built on top of Microsoft
Fabric, Azure OpenAI Service, and Azure AI Speech. It enables customers with large
amounts of conversational data to use generative AI to find key phrases alongside the
operational metrics. This way, you can discover valuable insights with business impact.
To learn more, see the GitHub repo.

How do I use Copilot responsibly?


Microsoft is committed to ensuring that our AI systems are guided by our AI
principles and Responsible AI Standard . These principles include empowering our
customers to use these systems effectively and in line with their intended uses. Our
approach to responsible AI is continually evolving to proactively address emerging
issues.

The article Privacy, security, and responsible use for Copilot (preview) offers guidance on
responsible use.

Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.

Before you use Copilot, your admin needs to enable Copilot in Fabric. See the article
Overview of Copilot in Fabric for details. Also, keep in mind the limitations of Copilot:

Copilot responses can include inaccurate or low-quality content, so make sure to


review outputs before using them in your work.
Reviews of outputs should be done by people who are able to meaningfully
evaluate the content's accuracy and appropriateness.
Today, Copilot features work best in the English language. Other languages may
not perform as well.

Available regions
Available regions for Azure OpenAI service
To access the prebuilt Azure OpenAI Service, including Copilot in Fabric, you must
have an F64 or higher SKU or a P SKU in the following Fabric regions. The Azure OpenAI
Service isn't available on trial SKUs.

Azure OpenAI Service is powered by large language models that are currently only
deployed to US datacenters (East US, East US2, South Central US, and West US) and EU
datacenter (France Central). If your data is outside the US or EU, the feature is disabled
by default unless your tenant admin enables Data sent to Azure OpenAI can be
processed outside your capacity's geographic region, compliance boundary, or
national cloud instance tenant setting. To learn how to get to the tenant settings, see
About tenant settings.

Data processing across geographic areas


The prebuilt Azure OpenAI Service and Copilot in Fabric may process your prompts
and results (input and output when using Copilot) outside your capacity's geographic
region, depending on where the Azure OpenAI service is hosted. The table below shows
the mapping of where data is processed across geographic areas for Copilot in Fabric
and Azure OpenAI features.

Note

The data processed for Copilot interactions can include user prompts, meta
prompts, structure of data (schema), and conversation history. No data, such as
content in tables, is sent to Azure OpenAI for processing unless it's included in the
user prompts.

| Geographic area where your Fabric Capacity is located | Geographic area where Azure OpenAI Service is hosted | Data processing outside your capacity's geographic region? | Actions required to use Fabric Copilot |
| --- | --- | --- | --- |
| US | US | No | Turn on Copilot |
| EU Data Boundary | EU Data Boundary | No | Turn on Copilot |
| UK | EU Data Boundary | Yes | Turn on Copilot. Enable cross-geo data processing |
| Australia, Brazil, Canada, India, Asia, Japan, Korea, South Africa, Southeast Asia, United Arab Emirates | US | Yes | Turn on Copilot. Enable cross-geo data processing |

Related content
What is Microsoft Fabric?
Copilot in Fabric: FAQ
AI services in Fabric (preview)
Copilot tenant settings



Enable Copilot in Fabric
Article • 01/26/2025

Copilot and other generative AI features in preview bring new ways to transform and
analyze data, generate insights, and create visualizations in Microsoft Fabric and Power
BI.

Copilot in Microsoft Fabric is enabled by default. Administrators can disable it from
the admin portal if your organization isn't ready to adopt it. Administrators can refer to
the Copilot tenant settings (preview) article for details. The following requirements must
be met to use Copilot:

The F64 capacity must be in a supported region listed in Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by default
unless your admin enables the Data sent to Azure OpenAI can be processed outside
your tenant's geographic region, compliance boundary, or national cloud instance
tenant setting in the Fabric Admin portal.
Copilot isn't supported for Fabric trial SKUs. Only paid SKUs (F64 or higher) are
eligible.

The following screenshot shows the tenant setting where Copilot can be enabled or
disabled:

Copilot in Microsoft Fabric is rolling out gradually, ensuring all customers with paid
Fabric capacities (F64 or higher) gain access. It automatically appears as a new setting in
the Fabric admin portal when available for your tenant. Once billing starts for the
Copilot in Fabric experiences, Copilot usage will count against your existing Fabric
capacity.

See the article Overview of Copilot in Fabric for details on its functionality across
workloads, data security, privacy compliance, and responsible AI use.

Important

When scaling from a smaller capacity to F64 or above, allow up to 24 hours for
Copilot for Power BI to activate.

Related content
What is Microsoft Fabric?
Copilot in Fabric: FAQ
AI services in Fabric (preview)
Copilot tenant settings
Copilot in Power BI



Copilot for Microsoft Fabric and
Power BI: FAQ
FAQ

This article answers frequently asked questions about Copilot for Microsoft Fabric and
Power BI.

Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
For more information, see the article Overview of Copilot in Fabric.

Power BI
Can Copilot be enabled for specific workspaces
within a tenant?
Copilot is enabled at the tenant level and access can be restricted by security groups. If
the workspace is tied to an F64 or P1 capacity, Copilot experience will be enabled.

When you're using Copilot, who has access to what data?
The data that Copilot can access depends on your role-level security and user-based
permission on Fabric.
If you don't have permission to access specific data, then prompting Copilot for it won't
retrieve the information.

Can Copilot prompts be saved for future reference?
Copilot prompts can't be saved for future reference. The only experience where it's
possible to view your prompts is by using the chat-magic function in notebooks.

Does enabling Copilot and agreeing to the
setting of "Data sent to Azure OpenAI can be
processed outside your tenant's geographic
region, compliance boundary, or national cloud
instance" mean all my data is sent or processed
outside my country?
Not exactly. While the prompt itself is sent to Azure OpenAI, it doesn't mean your data
is sent or processed outside your country.

The prompt isn't used to train any models.

I loaded my semantic model, but it doesn't meet
all the criteria listed in the data evaluation. What
should I do?
The criteria listed in Update your data model to work well with Copilot for Power BI are
important because they help you get a better quality report. As long as you meet seven of
the eight points, including Consistency, the quality of the reports generated should be
good.

If your data doesn't meet that criteria, we recommend spending the time to bring it into
compliance.

I was given a Copilot URL, but I can't see the Copilot button. Why is that?
First, check with your admin to see if they have enabled Copilot.
Next, when you select a Copilot-enabled URL, you have to initially load the semantic
model. When you've completed loading the semantic model, then you see the Copilot
button.

I selected the Copilot button, and it's stuck on Analyzing your semantic model.
Depending upon the size of the semantic model, Copilot might take a while to analyze
it. If you've waited longer than 15 minutes and you haven't received any errors, chances
are that there's an internal server error.

Try restarting Copilot by closing the pane and selecting the Copilot button again.

I loaded the semantic model and Copilot
generated a summary, but I don't think that it's
accurate.
This inaccuracy could be because your semantic model has missing values. Because AI is
generating the summary, it can try to fill the holes and fabricate data. If you can remove
the rows with missing values, this situation could be avoided.

I generated the report visuals, but the quality of the visuals concerns me. I wouldn't
choose them myself.
We're continuously looking to improve the quality of the Copilot-generated visuals. For
now, we recommend that you make the change by using the Power BI visualization tool.

The accuracy of the narrative visual concerns me.


We're continuously working to improve the accuracy of the narrative visual results. We
recommend using the custom prompts as an extra tool to try to tweak the summary to
meet your needs.

I want to disable Copilot immediately as I'm concerned with the data storage you
mentioned.
Contact your help desk to get support from your IT admin.
I want to suggest new features. How can I do
that?
You can submit and vote on ideas for Microsoft Fabric on the Ideas page of the Fabric
Community . Read more about giving feedback in the Learn about Microsoft Fabric
feedback article.

Real-Time Intelligence
Does Copilot respond to multiple questions in a
conversation?
No, Copilot doesn't answer follow-up questions. You need to ask one question at a time.

How can I improve the quality of the Copilot answer?
Provide any tips or relevant information in your question. For example, if you're asking
about a specific column, provide the column name and the type of data it contains. If
you want to use specific operators or functions, this will also help. The more information
you provide, the better the Copilot answer will be.

What access level do I need on a KQL queryset to use Copilot?
You need read access to the KQL queryset to use Copilot. In order to insert and execute
the Copilot-generated query in the KQL queryset, you need to have write access to that
KQL queryset.

What database does the Copilot-generated query run against?
The Copilot-generated query runs against the database that the KQL queryset is
connected to. If you want to change the database, you can do so in the KQL queryset.

Related content
What is Microsoft Fabric?
Privacy, security, and responsible use of Copilot in Fabric



Privacy, security, and responsible use of
Copilot in Fabric
Article • 01/26/2025

Before your business starts using Copilot in Fabric, you may have questions about how it
works, how it keeps your business data secure and adheres to privacy requirements, and
how to use generative AI responsibly.

This article provides answers to common questions related to business data security and
privacy to help your organization get started with Copilot in Fabric. The article Privacy,
security, and responsible use for Copilot in Power BI (preview) provides an overview of
Copilot in Power BI. Read on for details about Copilot for Fabric.

Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.

Your business data is secure


Copilot features use Azure OpenAI Service, which is fully controlled by Microsoft.
Your data isn't used to train models and isn't available to other customers.
You retain control over where your data is processed. Data processed by Copilot in
Fabric stays within your tenant's geographic region, unless you explicitly allow data
to be processed outside your region—for example, to let your users use Copilot
when Azure OpenAI isn't available in your region or availability is limited due to
high demand. Learn more about admin settings for Copilot.
Copilot does not store your data for abuse monitoring. To enhance privacy and
trust, we've updated our approach to abuse monitoring: previously, we retained
data from Copilot in Fabric, containing prompt inputs and outputs, for up to 30
days to check for abuse or misuse. Following customer feedback, we've eliminated
this 30-day retention. Now, we no longer store prompt related data,
demonstrating our unwavering commitment to your privacy and security.

Check Copilot outputs before you use them


Copilot responses can include inaccurate or low-quality content, so make sure to
review outputs before you use them in your work.
People who can meaningfully evaluate the content's accuracy and appropriateness
should review the outputs.
Today, Copilot features work best in the English language. Other languages may
not perform as well.

Important

Review the supplemental preview terms for Fabric , which includes terms of use
for Microsoft Generative AI Service Previews.

How Copilot works


In this article, Copilot refers to a range of generative AI features and capabilities in Fabric
that are powered by Azure OpenAI Service.

In general, these features are designed to generate natural language, code, or other
content based on:

(a) inputs you provide, and,

(b) grounding data that the feature has access to.

For example, Power BI, Data Factory, and data science offer Copilot chats where you can
ask questions and get responses that are contextualized on your data. Copilot for Power
BI can also create reports and other visualizations. Copilot for Data Factory can
transform your data and explain what steps it has applied. Data science offers Copilot
features outside of the chat pane, such as custom IPython magic commands in
notebooks. Copilot chats may be added to other experiences in Fabric, along with other
features that are powered by Azure OpenAI under the hood.

This information is sent to Azure OpenAI Service, where it's processed and an output is
generated. Therefore, data processed by Azure OpenAI can include:

The user's prompt or input.


Grounding data.
The AI response or output.

Grounding data may include a combination of dataset schema, specific data points, and
other information relevant to the user's current task. Review each experience section for
details on what data is accessible to Copilot features in that scenario.

Interactions with Copilot are specific to each user. This means that Copilot can only
access data that the current user has permission to access, and its outputs are only
visible to that user unless that user shares the output with others, such as sharing a
generated Power BI report or generated code. Copilot doesn't use data from other users
in the same tenant or other tenants.

Copilot uses Azure OpenAI—not the publicly available OpenAI services—to process all
data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses
a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in
the Microsoft Azure environment, and the Service doesn't interact with any services by
OpenAI, such as ChatGPT or the OpenAI API. Your data isn't used to train models and
isn't available to other customers. Learn more about Azure OpenAI.

The Copilot process


These features follow the same general process:

1. Copilot receives a prompt from a user. This prompt could be in the form of a
question that a user types into a chat pane, or in the form of an action such as
selecting a button that says "Create a report."
2. Copilot preprocesses the prompt through an approach called grounding.
Depending on the scenario, this might include retrieving relevant data such as
dataset schema or chat history from the user's current session with Copilot.
Grounding improves the specificity of the prompt, so the user gets responses that
are relevant and actionable to their specific task. Data retrieval is scoped to data
that is accessible to the authenticated user based on their permissions. See the
section What data does Copilot use and how is it processed? in this article for
more information.
3. Copilot takes the response from Azure OpenAI and postprocesses it. Depending
on the scenario, this postprocessing might include responsible AI checks, filtering
with Azure content moderation, or additional business-specific constraints.
4. Copilot returns a response to the user in the form of natural language, code, or
other content. For example, a response might be in the form of a chat message or
generated code, or it might be a contextually appropriate form such as a Power BI
report or a Synapse notebook cell.
5. The user reviews the response before using it. Copilot responses can include
inaccurate or low-quality content, so it's important for subject matter experts to
check outputs before using or sharing them.

Just as each experience in Fabric is built for certain scenarios and personas—from data
engineers to data analysts—each Copilot feature in Fabric has also been built with
unique scenarios and users in mind. For capabilities, intended uses, and limitations of
each feature, review the section for the experience you're working in.

Definitions

Prompt or input
The text or action submitted to Copilot by a user. This could be in the form of a question
that a user types into a chat pane, or in the form of an action such as selecting a button
that says "Create a report."

Grounding
A preprocessing technique where Copilot retrieves additional data that's contextual to
the user's prompt, and then sends that data along with the user's prompt to Azure
OpenAI in order to generate a more relevant and actionable response.

Response or output
The content that Copilot returns to a user. For example, a response might be in the form
of a chat message or generated code, or it might be contextually appropriate content
such as a Power BI report or a Synapse notebook cell.

What data does Copilot use and how is it processed?
To generate a response, Copilot uses:

The user's prompt or input and, when appropriate,


Additional data that is retrieved through the grounding process.

This information is sent to Azure OpenAI Service, where it's processed and an output is
generated. Therefore, data processed by Azure OpenAI can include:

The user's prompt or input.


Grounding data.
The AI response or output.

Grounding data may include a combination of dataset schema, specific data points, and
other information relevant to the user's current task. Review each experience section for
details on what data is accessible to Copilot features in that scenario.

Interactions with Copilot are specific to each user. This means that Copilot can only
access data that the current user has permission to access, and its outputs are only
visible to that user unless that user shares the output with others, such as sharing a
generated Power BI report or generated code. Copilot doesn't use data from other users
in the same tenant or other tenants.

Copilot uses Azure OpenAI—not OpenAI's publicly available services—to process all
data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses
a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in
Microsoft's Azure environment and the Service doesn't interact with any services by
OpenAI (for example, ChatGPT or the OpenAI API). Your data isn't used to train models
and isn't available to other customers. Learn more about Azure OpenAI.

Data residency and compliance


You retain control over where your data is processed. Data processed by Copilot in Fabric
stays within your tenant's geographic region, unless you explicitly allow data to be
processed outside your region—for example, to let your users use Copilot when Azure
OpenAI isn't available in your region or availability is limited due to high demand. (See
where Azure OpenAI is currently available.)

To allow data to be processed elsewhere, your admin can turn on the setting Data sent
to Azure OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance. Learn more about admin settings for
Copilot.
What should I know to use Copilot responsibly?
Microsoft is committed to ensuring that our AI systems are guided by our AI
principles and Responsible AI Standard . These principles include empowering our
customers to use these systems effectively and in line with their intended uses. Our
approach to responsible AI is continually evolving to proactively address emerging
issues.

Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.

Before you use Copilot, keep in mind the limitations of Copilot:

Copilot responses can include inaccurate or low-quality content, so make sure to


review outputs before using them in your work.
People who are able to meaningfully evaluate the content's accuracy and
appropriateness should review the outputs.
Currently, Copilot features work best in the English language. Other languages may
not perform as well.

Copilot for Fabric workloads


Privacy, security, and responsible use for:

Copilot for Data Factory (preview)


Copilot for Data Science (preview)
Copilot for Data Warehouse (preview)
Copilot for SQL Databases (preview)
Copilot for Power BI
Copilot for Real-Time Intelligence (preview)

Related content
What is Microsoft Fabric?
Copilot in Fabric and Power BI: FAQ



Privacy, security, and responsible use of
Copilot for SQL database in Microsoft
Fabric (preview)
Article • 01/26/2025

Applies to: ✅ SQL database in Microsoft Fabric

In this article, learn how Microsoft Copilot for SQL databases works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For more information on Copilot in Fabric, see Privacy, security, and
responsible use for Copilot in Microsoft Fabric (preview).

With Copilot for SQL databases in Microsoft Fabric and other generative AI features,
Microsoft Fabric brings a new way to transform and analyze data, generate insights, and
create visualizations and reports in your database and other workloads.

For limitations, see Limitations of Copilot for SQL database.

Data use of Copilot for SQL databases


In a database, Copilot can only access the database schema that's accessible in the user's
database.

By default, Copilot has access to the following data types:

Previous messages sent to and replies from Copilot for that user in that session.
Contents of SQL query that the user has executed.
Error messages of a SQL query that the user has executed (if applicable).
Schemas of the database.

Tips for working with Copilot for SQL databases
Copilot is best equipped to handle SQL database topics, so limit your questions to
this area.

Be explicit about the data you want Copilot to examine. If you describe the data
asset, with descriptive table and column names, Copilot is more likely to retrieve
relevant data and generate useful outputs.
Evaluation of Copilot for SQL databases
The product team tested Copilot to see how well the system performs within the context
of databases, and whether AI responses are insightful and useful.

The team also invested in additional harm mitigation, including technological


approaches to focusing Copilot's output on topics related to SQL databases.

Related content
Privacy, security, and responsible use for Copilot in Microsoft Fabric (preview)
Copilot for SQL database in Fabric (preview)



Privacy, security, and responsible use of
Copilot for Data Factory (preview)
Article • 01/26/2025

In this article, learn how Copilot for Data Factory overview works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For an overview of these topics for Copilot in Fabric, see Privacy, security,
and responsible use for Copilot (preview).

With Copilot for Data Factory in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Science and the other workloads.

For considerations and limitations, see Limitations of Copilot for Data Factory.

Data use of Copilot for Data Factory


Copilot can only access data that is accessible to the user's current Gen2 dataflow
session, and that is configured and imported into the data preview grid. Learn
more about getting data in Power Query.

Evaluation of Copilot for Data Factory


The product team has tested Copilot to see how well the system performs within
the context of Gen2 dataflows, and whether AI responses are insightful and useful.
The team also invested in other harms mitigations, including technological
approaches to focusing Copilot's output on topics related to data integration.

Tips for working with Copilot for Data Factory


Copilot is best equipped to handle data integration topics, so it's best to limit your
questions to this area.
If you include descriptions such as query names, column names, and values in the
input, Copilot is more likely to generate useful outputs.
Try breaking complex inputs into more granular tasks. This helps Copilot better
understand your requirements and generate a more accurate output.

Related content
Copilot for Data Factory overview
Copilot in Fabric: FAQ



Privacy, security, and responsible use of
Copilot for Data Science
Article • 01/26/2025

In this article, learn how Microsoft Copilot for Data Science works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For an overview of these topics for Copilot in Fabric, see Privacy, security,
and responsible use for Copilot (preview).

With Copilot for Data Science in Microsoft Fabric and other generative AI features in
preview, Microsoft Fabric brings a new way to transform and analyze data, generate
insights, and create visualizations and reports in Data Science and the other workloads.

For considerations and limitations, see Limitations.

Data use of Copilot for Data Science


In notebooks, Copilot can only access data that is accessible to the user's current
notebook, either in an attached lakehouse or directly loaded or imported into that
notebook by the user. In notebooks, Copilot can't access any data that's not
accessible to the notebook.

By default, Copilot has access to the following data types:


Previous messages sent to and replies from Copilot for that user in that session.
Contents of cells that the user has executed.
Outputs of cells that the user has executed.
Schemas of data sources in the notebook.
Sample data from data sources in the notebook.
Schemas from external data sources in an attached lakehouse.

Evaluation of Copilot for Data Science


The product team has tested Copilot to see how well the system performs within
the context of notebooks, and whether AI responses are insightful and useful.
The team also invested in additional harm mitigations, including technological
approaches to focusing Copilot's output on topics related to data science.

Tips for working with Copilot for Data Science


Copilot is best equipped to handle data science topics, so limit your questions to
this area.
Be explicit about the data you want Copilot to examine. If you describe the data
asset, such as naming files, tables, or columns, Copilot is more likely to retrieve
relevant data and generate useful outputs.
If you want more granular responses, try loading data into the notebook as
DataFrames or pinning the data in your lakehouse. This gives Copilot more context
with which to perform analysis. If an asset is too large to load, pinning it is a helpful alternative.

AI Skill: Responsible AI FAQ

What is AI Skill?
AI Skill is a new tool in Fabric that provides a way to get answers from your tabular data in natural language.

What can AI Skill do?


A data analyst or engineer can prepare AI Skill for use by non-technical business users.
They need to configure a Fabric data source and can optionally provide additional context
information that isn't obvious from the schema.

Non-technical users can then type questions and receive the results from the execution
of an AI generated SQL query.

What is/are AI Skill’s intended use(s)?


Business users who aren't familiar with how the data is structured are able to ask
descriptive questions such as “what are the 10 top products by sales volume last
month?" on top of tabular data stored in Fabric Lakehouses and Fabric
Warehouses.

AI Skill isn't intended for use in cases where deterministic and 100% accurate
results are required, which reflects the current LLM limitations.

The AI Skill isn't intended for use cases that require deep analytics or causal analytics. For example, asking "why did our sales numbers drop last month?" is out of scope.
How was AI Skill evaluated? What metrics are used to
measure performance?
The product team has tested the AI skill on a variety of public and private benchmarks
for SQL tasks to ascertain the quality of SQL queries.

The team also invested in additional harm mitigations, including technological approaches to focusing the AI skill's output on the context of the chosen data sources.

What are the limitations of AI Skill? How can users minimize the impact of AI Skill's limitations when using the system?
Make sure your column names are descriptive. Instead of using column names like
“C1” or “ActCu,” use “ActiveCustomer” or “IsCustomerActive.” This is the most
effective way to get more reliable queries out of the AI.

Make use of the Notes for the model in the configuration panel in the UI. If the
SQL queries that the AI Skill generates are incorrect, you can provide instructions
to the model in plain English to improve upon future queries. The system will make
use of these instructions with every query. Short and direct instructions are best.

Provide examples in the model configuration panel in the UI. The system will
leverage the most relevant examples when providing its answers.

What operational factors and settings allow for effective and responsible use of AI Skill?
The AI skill only has access to the data that you provide. It makes use of the
schema (table name and column name), as well as the Notes for the model and
Examples that you provide in the UI.

The AI skill only has access to data that the questioner has access to. If you use the
AI skill, your credentials are used to access the underlying database. If you don't
have access to the underlying data, the AI skill doesn't either. This holds true when
you publish the AI skill to other destinations, such as Copilot for Microsoft 365 or
Microsoft Copilot Studio, where the AI skill can be used by other questioners.

Related content
Privacy, security, and responsible use of Copilot for Data Factory (preview)
Overview of Copilot for Data Science and Data Engineering (preview)
Copilot for Data Factory overview
Copilot in Fabric: FAQ



Privacy, security, and responsible use of
Copilot for Data Warehouse (preview)
Article • 01/26/2025

Applies to: ✅ Warehouse in Microsoft Fabric

In this article, learn how Microsoft Copilot for Fabric Data Warehouse works, how it
keeps your business data secure and adheres to privacy requirements, and how to use
generative AI responsibly. For more information on Copilot in Fabric, see Privacy,
security, and responsible use for Copilot in Microsoft Fabric (preview).

With Copilot for Data Warehouse in Microsoft Fabric and other generative AI features,
Microsoft Fabric brings a new way to transform and analyze data, generate insights, and
create visualizations and reports in your warehouse and other workloads.

For considerations and limitations, see Limitations.

Data use of Copilot for Data Warehouse


In warehouse, Copilot can only access the database schema that is accessible in the
user's warehouse.

By default, Copilot has access to the following data types:

Previous messages sent to and replies from Copilot for that user in that session.
Contents of SQL query that the user has executed.
Error messages of a SQL query that the user has executed (if applicable).
Schemas of the warehouse.
Schemas from attached warehouses or SQL analytics endpoints when cross-DB
querying.

Tips for working with Copilot for Data Warehouse
Copilot is best equipped to handle data warehousing topics, so limit your
questions to this area.
Be explicit about the data you want Copilot to examine. If you describe the data
asset, with descriptive table and column names, Copilot is more likely to retrieve
relevant data and generate useful outputs.
Evaluation of Copilot for Data Warehouse
The product team tested Copilot to see how well the system performs within the context
of warehouses, and whether AI responses are insightful and useful.

The team also invested in additional harm mitigation, including technological approaches to focusing Copilot's output on topics related to data warehousing.

Related content
Privacy, security, and responsible use for Copilot in Microsoft Fabric (preview)
Microsoft Copilot for Fabric Data Warehouse



Privacy, security, and responsible use for
Copilot in Power BI
Article • 01/26/2025

In this article, learn how Microsoft Copilot for Power BI works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. With Copilot and other generative AI features in preview, Power BI brings a
new way to transform and analyze data, generate insights, and create visualizations and
reports in Power BI and the other workloads.

For more information about privacy and data security in Copilot, see Privacy, security, and
responsible use for Copilot in Microsoft Fabric (preview).

For considerations and limitations with Copilot for Power BI, see Considerations and
Limitations.

Data use in Copilot for Power BI


Copilot uses the data in a semantic model that you provide, combined with the
prompts you enter, to create visuals. Learn more about semantic models.
To answer data questions from the semantic model, Copilot requires that Q&A be
enabled in the semantic model's dataset settings. For more information, see
Update your data model to work well with Copilot for Power BI.
To create measure descriptions in a semantic model, Copilot uses the DAX formula
and table name of the selected measure. DAX comments and text in double-
quotes of the DAX formula are not used. For more information, see Use Copilot to
create measure descriptions.
To create DAX queries, explain DAX queries, or explain DAX topics, Copilot uses the
semantic model metadata, such as table and column names and properties, with
any DAX query selected in the DAX query editor combined with the request you
enter, to respond. For more information, see Use Copilot to create DAX queries.
When you add a Copilot summary to an email subscription, the summary generated is the same as that generated when you add a narrative visual to a report. For more information, see Copilot summaries in email subscriptions.

Tips for working with Copilot for Power BI


Review FAQ for Copilot for Power BI for tips and suggestions to help you work with
Copilot in this experience.
Evaluation of Copilot for Power BI
The product team invested in harm mitigation, including technological approaches to
focusing Copilot's output on topics related to reporting and data warehousing.

Related content
Microsoft Copilot for Power BI
Enable Fabric Copilot for Power BI
Copilot in Fabric and Power BI: FAQ



Privacy, security, and responsible use of
Copilot for Real-Time Intelligence
Article • 01/26/2025

In this article, learn how Copilot for Real-Time Intelligence works, how it keeps your
business data secure and adheres to privacy requirements, and how to use generative AI
responsibly. For an overview of these topics for Copilot in Fabric, see Privacy, security,
and responsible use for Copilot.

This feature leverages the power of OpenAI to seamlessly translate natural language
queries into Kusto Query Language (KQL), a specialized language for querying large
datasets. In essence, it acts as a bridge between users' everyday language and the
technical intricacies of KQL, removing adoption barriers for users unfamiliar with the
language. By harnessing OpenAI's advanced language understanding, this feature
empowers users to submit business questions in a familiar, natural language format,
which are then converted into KQL queries.

Copilot not only accelerates productivity by simplifying the query creation process but also provides a user-friendly and efficient approach to data analysis.

Copilot for Real-Time Intelligence intended use


Kusto Copilot accelerates data scientists’ and analysts’ data exploration process, by
translating natural language business questions into KQL queries, based on the
underlying dataset column names / schema.

What can Copilot for Real-Time Intelligence do?
Kusto Copilot is powered by generative AI models developed by OpenAI and Microsoft.
Specifically, it uses OpenAI’s Embedding and Completion APIs to build the natural
language prompt and to generate KQL queries.

Data use of Copilot for Real-Time Intelligence


Copilot for Real-Time Intelligence has access to data that is accessible to the Copilot
user, for example the database schema, user-defined functions, and data sampling of
the connected database. The Copilot refers to whichever database is currently
connected to the KQL queryset. The Copilot doesn't store any data.

Evaluation of Copilot for Real-Time Intelligence


Following a thorough research period in which several configurations and methods were tested, the OpenAI integration method proved to generate the most accurate KQL queries. Copilot doesn't automatically run the generated KQL query, and users are advised to run the queries at their own discretion.

Limitations of Copilot for Real-Time Intelligence
Complex and long user input might be misunderstood by Copilot, resulting in
potentially inaccurate or misleading suggested KQL queries.
User input that directs to database entities that aren't KQL tables or materialized views (for example, a KQL function) may result in potentially inaccurate or misleading suggested KQL queries.
More than 10,000 concurrent users within an organization will most likely cause failures or a major performance hit.
Users should validate the KQL query before executing it to prevent insecure KQL query execution.

Tips for working with Copilot for Real-Time Intelligence
We recommend you provide detailed and relevant natural language queries. Furthermore, you should provide concise and simple requests to the copilot to avoid inaccurate or misleading suggested KQL queries. You should also restrict questions to database entities that are KQL tables or materialized views.
For example, if you're asking about a specific column, provide the column name
and the type of data it contains. If you want to use specific operators or functions,
this will also help. The more information you provide, the better the Copilot answer
will be.

Related content
What is Microsoft Fabric?
Copilot in Fabric: FAQ



Copilot for Data Factory overview
Article • 01/26/2025

) Important

Copilot for Data Factory is generally available now, but its new Data pipeline
capabilities are still in preview.

Copilot in Fabric enhances productivity, unlocks profound insights, and facilitates the
creation of custom AI experiences tailored to your data. As a component of the Copilot
in Fabric experience, Copilot in Data Factory empowers customers to use natural
language to articulate their requirements for creating data integration solutions using
Dataflow Gen2. Essentially, Copilot in Data Factory operates like a subject-matter expert
(SME) collaborating with you to design your dataflows.

Copilot for Data Factory is an AI-enhanced toolset that supports both citizen and
professional data wranglers in streamlining their workflow. It provides intelligent
Mashup code generation to transform data using natural language input and generates
code explanations to help you better understand earlier generated complex queries and
tasks.

Before your business can start using Copilot capabilities in Fabric, your administrator
needs to enable Copilot in Microsoft Fabric.

7 Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.

Supported capabilities
With Dataflow Gen2, you can:

Generate new transformation steps for an existing query.


Provide a summary of the query and the applied steps.
Generate a new query that may include sample data or a reference to an existing
query.

With Data pipelines, you can:

Pipeline Generation: Using natural language, you can describe your desired
pipeline, and Copilot will understand the intent and generate the necessary Data
pipeline activities.
Error message assistant: troubleshoot Data pipeline issues with clear error
explanation capability and actionable troubleshooting guidance.
Summarize Pipeline: Explain your complex pipeline with the summary of content
and relations of activities within the Pipeline.

Get started
Data Factory Copilot is available in both Dataflow Gen2, and Data pipelines.

Get started with Copilot for Dataflow Gen2


Use the following steps to get started with Copilot for Dataflow Gen2:

1. Create a new Dataflow Gen2.

2. On the Home tab in Dataflows Gen2, select the Copilot button.


3. In the bottom left of the Copilot pane, select the starter prompt icon, then the Get
data from option.

4. In the Get data window, search for OData and select the OData connector.
5. In the Connect to data source for the OData connector, input the following text
into the URL field:

https://services.odata.org/V4/Northwind/Northwind.svc/

6. From the navigator, select the Orders table and then Select related tables. Then
select Create to bring multiple tables into the Power Query editor.

7. Select the Customers query, and in the Copilot pane type this text: Only keep
European customers , then press Enter or select the Send message icon.

Your input is now visible in the Copilot pane along with a returned response card.
You can validate the step with the corresponding step title in the Applied steps list
and review the formula bar or the data preview window for accuracy of your
results.

8. Select the Employees query, and in the Copilot pane type this text: Count the
total number of employees by City , then press Enter or select the Send message
icon. Your input is now visible in the Copilot pane along with a returned response
card and an Undo button.

9. Select the column header for the Total Employees column and choose the option
Sort descending. The Undo button disappears because you modified the query.

10. Select the Order_Details query, and in the Copilot pane type this text: Only keep orders whose quantities are above the median value , then press Enter or select the Send message icon. Your input is now visible in the Copilot pane along with a returned response card.

11. Either select the Undo button or type the text Undo (any text case) and press Enter
in the Copilot pane to remove the step.

12. To leverage the power of Azure OpenAI when creating or transforming your data,
ask Copilot to create sample data by typing this text:

Create a new query with sample data that lists all the Microsoft OS versions
and the year they were released

Copilot adds a new query to the Queries pane list, containing the results of your
input. At this point, you can either transform data in the user interface, continue to
edit with Copilot text input, or delete the query with an input such as Delete my
current query .
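
For readers who prefer to see the effect in code, the following is a minimal pandas sketch, not the Mashup (M) steps that Copilot actually generates in Dataflow Gen2, of what the three natural-language requests above amount to. The dataframe names (customers, employees, order_details) and column names ("Country", "City", "Quantity") are assumptions based on the Northwind sample.

Python

# Illustration only: a pandas approximation of the three Copilot requests above.
# The dataframes and column names are assumptions based on the Northwind sample,
# not the M code Copilot generates.
european_countries = {"Germany", "France", "UK", "Spain", "Sweden", "Italy"}  # partial list

# "Only keep European customers"
european_customers = customers[customers["Country"].isin(european_countries)]

# "Count the total number of employees by City"
total_employees_by_city = employees.groupby("City").size().reset_index(name="Total Employees")

# "Only keep orders whose quantities are above the median value"
above_median_orders = order_details[order_details["Quantity"] > order_details["Quantity"].median()]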

Get started with Copilot for Data pipelines


You can use Copilot to generate, summarize, or even troubleshoot your Data pipelines.

Generate a Data pipeline with Copilot


Use these steps to generate a new pipeline with Copilot for Data Factory:

1. Create a new Data pipeline.

2. On the Home tab of the Data pipeline editor, select the Copilot button.

3. Then you can get started with Copilot to build your pipeline with the Ingest data
option.
4. Copilot generates a Copy activity, and you can interact with Copilot to complete the whole flow. You can type / to select the source and destination connection, and then add all the required content according to the prefilled starter prompt context.
5. After everything is set up, select Run this pipeline to execute the new pipeline and ingest the data.
6. If you are already familiar with Data pipelines, you can complete everything with
one prompt command, too.

Summarize a Data pipeline with Copilot

Use these steps to summarize a pipeline with Copilot for Data Factory:

1. Open an existing Data pipeline.

2. On the Home tab of the pipeline editor window, select the Copilot button.
3. Then you can get started with Copilot to summarize the content of the pipeline.

4. Select Summarize this pipeline and Copilot generates a summary.


Troubleshoot pipeline errors with Copilot

Copilot empowers you to troubleshoot any pipeline with error messages. You can use the Copilot error message assistant either from the Fabric Monitor page or from the pipeline authoring page. The steps below show you how to access the pipeline Copilot to troubleshoot your pipeline from the Fabric Monitor page, but you can use the same steps from the pipeline authoring page.

1. Go to Fabric Monitor page and select filters to show pipelines with failures, as
shown below:

2. Select the Copilot icon beside the failed pipeline.


3. Copilot provides a clear error message summary and actionable recommendations
to fix it. In the recommendations, troubleshooting links are provided for you to
efficiently investigate further.
Limitations of Copilot for Data Factory
Here are the current limitations of Copilot for Data Factory:

Copilot can't perform transformations or explanations across multiple queries in a single input. For instance, you can't ask Copilot to "Capitalize all the column headers for each query in my dataflow."
Copilot doesn't understand previous inputs and can't undo changes after a user
commits a change when authoring, either via user interface or the chat pane. For
example, you can't ask Copilot to "Undo my last 5 inputs." However, users can still
use the existing user interface options to delete unwanted steps or queries.
Copilot can't make layout changes to queries in your session. For example, if you
tell Copilot to create a new group for queries in the editor, it doesn't work.
Copilot may produce inaccurate results when the intent is to evaluate data that
isn't present within the sampled results imported into the session's data preview.
Copilot doesn't produce a message for the skills that it doesn't support. For
example, if you ask Copilot to "Perform statistical analysis and write a summary
over the contents of this query", it doesn't complete the instruction successfully as
mentioned previously. Unfortunately, it doesn't give an error message either.

Related content
Privacy, security, and responsible use of Copilot for Data Factory (preview)



Overview of Copilot for Data Science
and Data Engineering (preview)
Article • 01/09/2025

) Important

This feature is in preview.

Copilot for Data Science and Data Engineering is an AI assistant that helps analyze and
visualize data. It works with Lakehouse tables and files, Power BI Datasets, and
pandas/spark/fabric dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to add your data as a dataframe.
You can ask your questions in the chat panel, and the AI provides responses or code to
copy into your notebook. It understands your data's schema and metadata, and if data
is loaded into a dataframe, it has awareness of the data inside of the data frame as well.
You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.

7 Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Introduction to Copilot for Data Science and
Data Engineering for Fabric Data Science
With Copilot for Data Science and Data Engineering, you can chat with an AI assistant
that can help you handle your data analysis and visualization tasks. You can ask the
Copilot questions about lakehouse tables, Power BI Datasets, or Pandas/Spark
dataframes inside notebooks. Copilot answers in natural language or code snippets.
Copilot can also generate data-specific code for you, depending on the task. For
example, Copilot for Data Science and Data Engineering can generate code for:

Chart creation
Filtering data
Applying transformations
Machine learning models

First select the Copilot icon in the notebooks ribbon. The Copilot chat panel opens, and
a new cell appears at the top of your notebook. This cell must run each time a Spark
session loads in a Fabric notebook. Otherwise, the Copilot experience won't properly
operate. We are in the process of evaluating other mechanisms for handling this
required initialization in future releases.

Run the cell at the top of the notebook, with this code:

Python

#Run this cell to install the required packages for Copilot


%load_ext dscopilot_installer
%activate_dscopilot

After the cell successfully executes, you can use Copilot. You must rerun the cell at the
top of the notebook each time your session in the notebook closes.

To maximize Copilot effectiveness, load a table or dataset as a dataframe in your


notebook. This way, the AI can access the data and understand its structure and content.
Then, start chatting with the AI. Select the chat icon in the notebook toolbar, and type
your question or request in the chat panel. For example, you can ask:

"What is the average age of customers in this dataset?"


"Show me a bar chart of sales by region"

And more. Copilot responds with the answer or the code, which you can copy and paste into your notebook. Copilot for Data Science and Data Engineering is a convenient, interactive way to explore and analyze your data.
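
As a hedged illustration, a request like "Show me a bar chart of sales by region" might come back with code along these lines; the dataframe name (df) and the column names ("Region", "Sales") are assumptions for this sketch, not part of the product output.

Python

# Illustration only: the kind of code Copilot might return for
# "Show me a bar chart of sales by region". The dataframe name (df) and the
# column names ("Region", "Sales") are assumed.
import matplotlib.pyplot as plt

sales_by_region = df.groupby("Region")["Sales"].sum().sort_values(ascending=False)
sales_by_region.plot(kind="bar", title="Sales by region")
plt.xlabel("Region")
plt.ylabel("Total sales")
plt.show()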

As you use Copilot, you can also invoke the magic commands inside a notebook cell to obtain output directly in the notebook. For example, for natural language answers, you can ask questions using the "%%chat" command, such as:

%%chat
What are some machine learning models that may fit this dataset?

or

%%code
Can you generate code for a logistic regression that fits this data?
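
A %%code request like the one above might return something along these lines. This is a hedged sketch: the dataframe name (df) and the target column ("label") are assumptions, and the actual generated code will vary.

Python

# Illustration only: a typical shape of code for "a logistic regression that fits this data".
# Assumes a dataframe df with numeric feature columns and a binary "label" column.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = df.drop(columns=["label"])
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")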

Copilot for Data Science and Data Engineering also has schema and metadata
awareness of tables in the lakehouse. Copilot can provide relevant information in
context of your data in an attached lakehouse. For example, you can ask:

"How many tables are in the lakehouse?"


"What are the columns of the table customers?"

Copilot responds with the relevant information if you added the lakehouse to the
notebook. Copilot also has awareness of the names of files added to any lakehouse
attached to the notebook. You can refer to those files by name in your chat. For
example, if you have a file named sales.csv in your lakehouse, you can ask "Create a
dataframe from sales.csv". Copilot generates the code and displays it in the chat panel.
With Copilot for notebooks, you can easily access and query your data from different
sources. You don't need the exact command syntax to do it.
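
For example, for the sales.csv prompt mentioned above, Copilot might generate code similar to the following sketch. The Files/ path is an assumption about where the file was uploaded in the attached lakehouse; the spark session and display function are provided by the Fabric notebook runtime.

Python

# Illustration only: the kind of code Copilot might generate for
# "Create a dataframe from sales.csv". The Files/ path is assumed.
df_sales = spark.read.csv("Files/sales.csv", header=True, inferSchema=True)
display(df_sales)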

Tips
"Clear" your conversation in the Copilot chat panel with the broom located at the
top of the chat panel. Copilot retains knowledge of any inputs or outputs during
the session, but this helps if you find the current content distracting.
Use the chat magics library to configure settings about Copilot, including privacy
settings. The default sharing mode is designed to maximize the context sharing
Copilot has access to, so limiting the information provided to copilot can directly
and significantly impact the relevance of its responses.
When Copilot first launches, it offers a set of helpful prompts to kickstart your conversation with Copilot. To refer to prompts later, you can use the sparkle button at the bottom of the chat panel.
You can "drag" the sidebar of the copilot chat to expand the chat panel, to view
code more clearly or for readability of the outputs on your screen.

Limitations
Copilot features in the Data Science experience are currently scoped to notebooks.
These features include the Copilot chat pane, IPython magic commands that can be
used within a code cell, and automatic code suggestions as you type in a code cell.
Copilot can also read Power BI semantic models using an integration of semantic link.

Copilot has two key intended uses:

One, you can ask Copilot to examine and analyze data in your notebook (for
example, by first loading a DataFrame and then asking Copilot about data inside
the DataFrame).
Two, you can ask Copilot to generate a range of suggestions about your data
analysis process, such as what predictive models might be relevant, code to
perform different types of data analysis, and documentation for a completed
notebook.
Keep in mind that code generation with fast-moving or recently released libraries might
include inaccuracies or fabrications.

Related content
How to use Chat-magics
How to use the Copilot Chat Pane



Overview of chat-magics in Microsoft
Fabric notebooks (preview)
Article • 12/04/2024

) Important

This feature is in preview.

The Chat-magics Python library enhances your data science and engineering workflow
in Microsoft Fabric notebooks. It integrates seamlessly with the Fabric environment and allows execution of specialized IPython magic commands in a notebook cell to provide real-time outputs. For more background on IPython magic commands and their usage, see https://ipython.readthedocs.io/en/stable/interactive/magics.html# .

7 Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.

Capabilities of Chat-magics

Instant query and code generation


The %%chat command allows you to ask questions about the state of your notebook.
The %%code command enables code generation for data manipulation or visualization.

Dataframe descriptions
The %describe command provides summaries and descriptions of loaded dataframes.
This simplifies the data exploration phase.

Commenting and debugging


The %%add_comments and %%fix_errors commands help add comments to your code and
fix errors respectively. This helps make your notebook more readable and error-free.

Privacy controls
Chat-magics also offers granular privacy settings, which allows you to control what data
is shared with the Azure OpenAI Service. The %set_sharing_level and
%configure_privacy_settings commands, for example, provide this functionality.

How can Chat-magics help you?


Chat-magics enhances your productivity and workflow in Microsoft Fabric notebooks. It accelerates data exploration, simplifies notebook navigation, and improves code quality. It adapts to multilingual code environments, and it prioritizes data privacy and security. By reducing cognitive load, it allows you to focus more closely on problem-solving. Whether you're a data scientist, data engineer, or business analyst, Chat-magics
seamlessly integrates robust, enterprise-level Azure OpenAI capabilities directly into
your notebooks. This makes it an indispensable tool for efficient and streamlined data
science and engineering tasks.

Get started with Chat-magics


1. Open a new or existing Microsoft Fabric notebook.
2. Select the Copilot button on the notebook ribbon to output the Chat-magics
initialization code into a new notebook cell.
3. Run the cell when it is added at the top of your notebook.

Verify the Chat-magics installation


1. Create a new cell in the notebook, and run the %chat_magics command to display
the help message. This step verifies proper Chat-magics installation.

Introduction to basic commands: %%chat and %%code

Using %%chat (Cell Magic)


1. Create a new cell in your notebook.
2. Type %%chat at the top of the cell.
3. Enter your question or instruction below the %%chat command - for example,
What variables are currently defined?
4. Execute the cell to see the Chat-magics response.

Using %%code (Cell Magic)


1. Create a new cell in your notebook.
2. Type %%code at the top of the cell.
3. Below this, specify the code action you'd like - for example, Load my_data.csv into
a pandas dataframe.
4. Execute the cell, and review the generated code snippet.
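
As a concrete, hypothetical illustration, a %%code request such as the one in step 3 might generate pandas code along these lines; the actual output depends on your data and prompt.

Python

# Illustration only: the kind of pandas code a %%code request like
# "Load my_data.csv into a pandas dataframe" might generate.
import pandas as pd

df = pd.read_csv("my_data.csv")
df.head()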

Customizing output and language settings


1. Use the %set_output command to change the default for how magic commands
provide output. The options can be viewed by running %set_output?
2. Choose where to place the generated code, from options like

current cell
new cell
cell output
into a variable

Advanced commands for data operations

%describe, %%add_comments, and %%fix_errors


1. Use %describe DataFrameName in a new cell to obtain an overview of a specific
dataframe.
2. To add comments to a code cell for better readability, type %%add_comments at the top of the cell you want to annotate and then execute it. Be sure to validate that the resulting code is correct.
3. For code error fixing, type %%fix_errors at the top of the cell that contained an
error and execute it.

Privacy and security settings


1. By default, your privacy configuration shares previous messages sent to and from the large language model (LLM). However, it doesn't share cell contents,
outputs, or any schemas or sample data from data sources.
2. Use %set_sharing_level in a new cell to adjust the data shared with the AI
processor.
3. For more detailed privacy settings, use %configure_privacy_settings .

Context and focus commands

Using %pin, %new_task, and other context commands


1. Use %pin DataFrameName to help the AI focus on specific dataframes.
2. To have the AI focus on a new task in your notebook, type %new_task followed by the task that you're about to undertake. This clears the execution history that Copilot knows about up to this point and can make future responses more relevant.
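
A short, hypothetical cell combining these context commands might look like the following; the dataframe name (sales_df) is an assumption.

Python

# Illustration only: focus Copilot on one dataframe, then reset its task context.
%pin sales_df
%new_task Summarize monthly revenue trends in sales_df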

Related content
How to use Copilot Pane



Use the Copilot for Data Science and
Data Engineering chat panel (preview)
Article • 12/04/2024

) Important

This feature is in preview.

Copilot for Data Science and Data Engineering notebooks is an AI assistant that helps
you analyze and visualize data. It works with lakehouse tables, Power BI Datasets, and
pandas/spark dataframes, providing answers and code snippets directly in the
notebook. The most effective way of using Copilot is to load your data as a dataframe.
You can use the chat panel to ask your questions, and the AI provides responses or code
to copy into your notebook. It understands your data's schema and metadata, and if
data is loaded into a dataframe, it has awareness of the data inside of the data frame as
well. You can ask Copilot to provide insights on data, create code for visualizations, or
provide code for data transformations, and it recognizes file names for easy reference.
Copilot streamlines data analysis by eliminating complex coding.

7 Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.
Azure OpenAI enablement
Azure OpenAI must be enabled within Fabric at the tenant level.

7 Note

If your workspace is provisioned in a region without GPU capacity, and your data is
not enabled to flow cross-geo, Copilot will not function properly and you will see
errors.

Successful execution of the Chat-magics installation cell
1. To use the Copilot pane, the installation cell for Chat-magics must successfully
execute within your Spark session.

) Important

If your Spark session terminates, the context for Chat-magics also terminates, wiping the context for the Copilot pane.

2. Verify that all these conditions are met before proceeding with the Copilot chat
pane.
Open Copilot chat panel inside the notebook
1. Select the Copilot button on the notebook ribbon.

2. To open Copilot, select the Copilot button at the top of the notebook.

3. The Copilot chat panel opens on the right side of your notebook.

4. A panel opens to provide overview information and helpful links.

Key capabilities
AI assistance: Generate code, query data, and get suggestions to accelerate your
workflow.
Data insights: Quick data analysis and visualization capabilities.
Explanations: Copilot can provide natural language explanations of notebook cells,
and can provide an overview for notebook activity as it runs.
Fixing errors: Copilot can also fix notebook run errors as they arise. Copilot shares
context with the notebook cells (executed output) and can provide helpful
suggestions.

Important notices
Inaccuracies: Potential for inaccuracies exists. Review AI-generated content
carefully.
Data storage: Customer data is temporarily stored, to identify harmful use of AI.

Getting started with Copilot chat in notebooks


1. Copilot for Data Science and Data Engineering offers helpful starter prompts to get
started. For example, "Load data from my lakehouse into a dataframe", or
"Generate insights from data".

2. Each of these selections outputs chat text in the text panel. As the user, you must
fill out the specific details of the data you'd like to use.

3. You can then input any type of request you have in the chat box.
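
As a hedged example, the starter prompt "Load data from my lakehouse into a dataframe" might produce code similar to the following; the table name ("sales") is an assumption, and the spark session and display function come from the Fabric notebook runtime.

Python

# Illustration only: replace "sales" with a table from your attached lakehouse.
df = spark.read.table("sales")
display(df.limit(10))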

Regular usage of the Copilot chat panel


The more specifically you describe your goals in your chat panel entries, the more
accurate the Copilot responses.
You can "copy" or "insert" code from the chat panel. At the top of each code block,
two buttons allow input of items directly into the notebook.
To clear your conversation, select the broom icon at the top of the pane. Clearing removes any input or output from the pane, but the context remains in the session until it ends.
Configure the Copilot privacy settings with the %configure_privacy_settings
command, or the %set_sharing_level command in the Chat-magics library.
Transparency: Read our Transparency Note for details on data and algorithm use.

Related content
How to use Chat-magics



Overview of Copilot for Data Warehouse
Article • 09/25/2024

Applies to: ✅ Warehouse in Microsoft Fabric

Microsoft Copilot for Synapse Data Warehouse is an AI assistant designed to streamline your data warehousing tasks. Copilot integrates seamlessly with your Fabric warehouse, providing intelligent insights to help you along each step of the way in your T-SQL explorations.

Introduction to Copilot for Data Warehouse


Copilot for Data Warehouse utilizes table and view names, column names, primary key,
and foreign key metadata to generate T-SQL code. Copilot for Data Warehouse does
not use data in tables to generate T-SQL suggestions.

Key features of Copilot for Warehouse include:

Natural Language to SQL: Ask Copilot to generate SQL queries using simple
natural language questions.
Code completion: Enhance your coding efficiency with AI-powered code
completions.
Quick actions: Quickly fix and explain SQL queries with readily available actions.
Intelligent Insights: Receive smart suggestions and insights based on your
warehouse schema and metadata.

There are three ways to interact with Copilot in the Fabric Warehouse editor.

Chat Pane: Use the chat pane to ask questions to Copilot through natural
language. Copilot will respond with a generated SQL query or natural language
based on the question asked.
How to: Use the Copilot chat pane for Synapse Data Warehouse
Code completions: Start writing T-SQL in the SQL query editor and Copilot will
automatically generate a code suggestion to help complete your query. The Tab
key accepts the code suggestion, or keep typing to ignore the suggestion.
How to: Use Copilot code completion for Synapse Data Warehouse
Quick Actions: In the ribbon of the SQL query editor, the Fix and Explain options
are quick actions. Highlight a SQL query of your choice and select one of the quick
action buttons to perform the selected action on your query.
Explain: Copilot can provide natural language explanations of your SQL query
and warehouse schema in comments format.
Fix: Copilot can fix errors in your code as error messages arise. Error scenarios
can include incorrect/unsupported T-SQL code, wrong spellings, and more.
Copilot will also provide comments that explain the changes and suggest SQL
best practices.
How to: Use Copilot quick actions for Synapse Data Warehouse

Use Copilot effectively


Here are some tips for maximizing productivity with Copilot.

When crafting prompts, be sure to start with a clear and concise description of the
specific information you're looking for.
Natural language to SQL depends on expressive table and column names. If your
table and columns aren't expressive and descriptive, Copilot might not be able to
construct a meaningful query.
Use natural language that is applicable to your table and view names, column
names, primary keys, and foreign keys of your warehouse. This context helps
Copilot generate accurate queries. Specify what columns you wish to see,
aggregations, and any filtering criteria as explicitly as possible. Copilot should be
able to correct typos or understand context given your schema context.
Create relationships in the model view of the warehouse to increase the accuracy
of JOIN statements in your generated SQL queries.
When using code completions, leave a comment at the top of the query with -- to
help guide the Copilot with context about the query you are trying to write.
Avoid ambiguous or overly complex language in your prompts. Simplify the
question while maintaining its clarity. This editing ensures Copilot can effectively
translate it into a meaningful T-SQL query that retrieves the desired data from the
associated tables and views.
Currently, natural language to SQL supports English language to T-SQL.
The following example prompts are clear, specific, and tailored to the properties of
your schema and data warehouse, making it easier for Copilot to generate
accurate T-SQL queries:
Show me all properties that sold last year
Count all the products, group by each category

Show all agents who sell properties in California

Show agents who have listed more than two properties for sale
Show the rank of each agent by property sales and show name, total sales, and rank
Enable Copilot
Your administrator needs to enable the tenant switch before you start using
Copilot. For more information, see Copilot tenant settings.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by default
unless your Fabric tenant admin enables the Data sent to Azure OpenAI can be
processed outside your tenant's geographic region, compliance boundary, or
national cloud instance tenant setting in the Fabric Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64 or
higher, or P1 or higher) are supported.
For more information, see Overview of Copilot in Fabric and Power BI.

What should I know to use Copilot responsibly?


Microsoft is committed to ensuring that our AI systems are guided by our AI
principles and Responsible AI Standard . These principles include empowering our
customers to use these systems effectively and in line with their intended uses. Our
approach to responsible AI is continually evolving to proactively address emerging
issues.

Copilot features in Fabric are built to meet the Responsible AI Standard, which means
that they're reviewed by multidisciplinary teams for potential harms, and then refined to
include mitigations for those harms.

For more information, see Privacy, security, and responsible use of Copilot for Data
Warehouse (preview).

Limitations of Copilot for Data Warehouse


Here are the current limitations of Copilot for Data Warehouse:

Copilot doesn't understand previous inputs and can't undo changes after a user
commits a change when authoring, either via user interface or the chat pane. For
example, you can't ask Copilot to "Undo my last 5 inputs." However, users can still
use the existing user interface options to delete unwanted changes or queries.
Copilot can't make changes to existing SQL queries. For example, if you ask Copilot
to edit a specific part of an existing query, it doesn't work.
Copilot might produce inaccurate results when the intent is to evaluate data.
Copilot only has access to the warehouse schema, none of the data inside.
Copilot responses can include inaccurate or low-quality content, so make sure to
review outputs before using them in your work.
People who are able to meaningfully evaluate the content's accuracy and
appropriateness should review the outputs.

Related content
Copilot tenant settings (preview)
How to: Use the Copilot chat pane for Synapse Data Warehouse
How to: Use Copilot quick actions for Synapse Data Warehouse
How to: Use Copilot code completion for Synapse Data Warehouse
Privacy, security, and responsible use of Copilot for Data Warehouse (preview)



Copilot for Real-Time Intelligence
Article • 01/26/2025

Copilot for Real-Time Intelligence is an advanced AI tool designed to help you explore
your data and extract valuable insights. You can input questions about your data, which
are then automatically translated into Kusto Query Language (KQL) queries. Copilot
streamlines the process of analyzing data for both experienced KQL users and citizen
data scientists.

For billing information about Copilot, see Announcing Copilot in Fabric pricing .

Prerequisites
A workspace with a Microsoft Fabric-enabled capacity
Read or write access to a KQL queryset

7 Note

Your administrator needs to enable the tenant switch before you start using
Copilot. See the article Copilot tenant settings for details.
Your F64 or P1 capacity needs to be in one of the regions listed in this article,
Fabric region availability.
If your tenant or capacity is outside the US or France, Copilot is disabled by
default unless your Fabric tenant admin enables the Data sent to Azure
OpenAI can be processed outside your tenant's geographic region,
compliance boundary, or national cloud instance tenant setting in the Fabric
Admin portal.
Copilot in Microsoft Fabric isn't supported on trial SKUs. Only paid SKUs (F64
or higher, or P1 or higher) are supported.
Copilot in Fabric is currently rolling out in public preview and is expected to
be available for all customers by end of March 2024.
See the article Overview of Copilot in Fabric and Power BI for more
information.

Capabilities of Copilot for Real-Time Intelligence
Copilot for Real-Time Intelligence lets you effortlessly translate natural language queries
into Kusto Query Language (KQL). The copilot acts as a bridge between everyday
language and KQL's technical intricacies, and in doing so removes adoption barriers for
data analysts and citizen data scientists. By harnessing OpenAI's advanced language
understanding, this feature allows you to submit business questions in a familiar, natural
language format, which are then converted into KQL queries. Copilot accelerates
productivity by simplifying the query creation process with a user-friendly and efficient
approach to data analysis.

Copilot supports conversational interactions, which allow you to clarify, adapt, and
extend your queries dynamically, all while maintaining the context of your previous
inputs. You can refine queries and ask follow-up questions without starting over:

Dynamic query refinement: You can refine the initial KQL generated by Copilot by
refining your prompt to remove ambiguity, specify tables or columns, or provide
more context.

Seamless follow-up questions: If the generated KQL is correct but you want to
explore the data more deeply, you can ask follow-up questions related to the same
task. You can expand the scope of your query, add filters, or explore related data
points by building on previous dialogue.

Access the Real-Time Intelligence Copilot


1. To access Copilot for Real-Time Intelligence, navigate to a new or existing KQL
queryset.
2. Connect to a database. For more information, see Select a database
3. Select the Copilot button.
4. In the Copilot pane, enter your business question in natural language.
5. Press Enter. After a few seconds, Copilot will generate a KQL query based on your
input. You can copy the query to the clipboard, or Insert it directly in the KQL
query editor. To run the query in the query editor, you must have write access to
the KQL queryset.
6. Select the Run button to execute the query.

7 Note

Copilot doesn't generate control commands.


Copilot doesn't automatically run the generated KQL query. Users are advised
to run the queries at their own discretion.

You can continue to ask follow-up questions or further refine your query. To start a new
chat, select the speech bubble on the top right of the Copilot pane (1).

Hover over a previous question (2) and select the pencil icon to copy it to the question
box to edit it, or copy it to your clipboard.

Improve the accuracy of Copilot for Real-Time Intelligence
Here are some tips that can help improve the accuracy of the KQL queries generated by
Copilot:

Start with simple natural language prompts, to learn the current capabilities and
limitations. Then, gradually proceed to more complex prompts.
State the task precisely, and avoid ambiguity. Imagine you shared the natural language prompt with a few KQL experts from your team without adding verbal instructions - would they be able to generate the correct query?
To generate the most accurate query, supply any relevant information that can
help the model. If you can, specify tables, operators, or functions that are critical to
the query.
Prepare your database: Add docstring properties to describe common tables and
columns. This might be redundant for descriptive names (for example, timestamp)
but is critical to describe tables or columns with meaningless names. You don't
have to add docstring to tables or columns that are rarely used. For more
information, see .alter table column-docstrings command.
To improve Copilot results, select either the like or dislike icon to submit your
comments in the Submit feedback form.

7 Note

The Submit feedback form submits the name of the database, its url, the KQL
query generated by copilot, and any free text response you include in the feedback
submission. Results of the executed KQL query aren't sent.

Limitations
Copilot might suggest potentially inaccurate or misleading suggested KQL queries
due to:
Complex and long user input.
User input that directs to database entities that aren't KQL Database tables or materialized views (for example, a KQL function).
More than 10,000 concurrent users within an organization can result in failure or a
major performance hit.

Related content
Privacy, security, and responsible use of Copilot for Real-Time Intelligence
(preview)
Copilot for Microsoft Fabric: FAQ
Overview of Copilot in Fabric (preview)
Query data in a KQL queryset



Copilot in Fabric consumption
Article • 01/26/2025

This page contains information on how Fabric Copilot usage is billed and reported.
Copilot usage is measured by the number of tokens processed. Tokens can be thought
of as pieces of words. Approximately 1,000 tokens are about 750 words. Prices are
calculated per 1,000 tokens, and input and output tokens are consumed at different
rates.

7 Note

The Copilot for Fabric billing will become effective on March 1st, 2024, as part of
your existing Power BI Premium or Fabric Capacity.

Consumption rate
Requests to Copilot consume Fabric Capacity Units. This table defines how many
capacity units (CU) are consumed when Copilot is used, for example, when a user uses Copilot for Power BI, Copilot for Data Factory, or Copilot for Data Science and Data Engineering.

Operation in Metrics App | Description           | Operation Unit of Measure | Consumption rate
Copilot in Fabric        | The input prompt      | Per 1,000 Tokens          | 100 CU seconds
Copilot in Fabric        | The output completion | Per 1,000 Tokens          | 400 CU seconds

Monitor the usage


The Fabric Capacity Metrics app displays the total capacity usage for Copilot operations
under the name "Copilot in Fabric." Additionally, Copilot users are able to view a
summary of their billing charges for Copilot usage under the invoicing item "Copilot in
Fabric."
Capacity utilization type
Fabric Copilots are classified as "background jobs" to handle a higher volume of Copilot
requests during peak hours.

Fabric is designed to provide lightning-fast performance by allowing operations to access more CU (Capacity Units) resources than are allocated to capacity. Fabric smooths or averages the CU usage of an "interactive job" over a minimum of 5 minutes and a "background job" over a 24-hour period. According to the Fabric throttling policy, the first phase of throttling begins when a capacity has consumed all its available CU resources for the next 10 minutes.

For example, assume each Copilot request has 2,000 input tokens and 500 output
tokens. The price for one Copilot request is calculated as follows: (2,000 * 100 + 500 *
400) / 1,000 = 700 CU seconds = 11.66 CU minutes.

Since Copilot is a background job, each Copilot request (~24 CU minute job) consumes only one CU minute of each hour of a capacity. A customer on F64 has 64 * 24 = 1,536 CU hours in a day, and each Copilot job consumes 24 CU minutes / 60 = 0.4 CU hours, so customers can run over 3,800 requests before they exhaust the capacity. However, once the capacity is exhausted, all operations will shut down.
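
The per-request arithmetic in the example above can be reproduced with a small sketch like the following, using the published rates from the consumption table (100 CU seconds per 1,000 input tokens and 400 CU seconds per 1,000 output tokens).

Python

# Reproduces the example above: 2,000 input tokens and 500 output tokens.
def copilot_cu_seconds(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * 100 + output_tokens * 400) / 1000

cu_seconds = copilot_cu_seconds(2000, 500)
print(f"{cu_seconds:.0f} CU seconds = {cu_seconds / 60:.2f} CU minutes")  # 700 CU seconds ≈ 11.67 CU minutes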

Region mapping
Fabric Copilot is powered by Azure OpenAI large language models that are currently
deployed to limited data centers. However, customers can enable the cross-geo processing tenant setting to use Copilot by processing their data in another region where Azure
OpenAI Service is available. This region could be outside of the user's geographic
region, compliance boundary, or national cloud instance. While performing region
mapping, we prioritize data residency as the foremost consideration and attempt to
map to a region within the same geographic area whenever feasible.

The cost of Fabric Capacity Units can vary depending on the region. Regardless of the
consumption region where GPU capacity is utilized, customers are billed based on the
Fabric Capacity Units pricing in their billing region. For example, if a customer's requests
are mapped from region 1 to region 2 , with region 1 being the billing region and
region 2 being the consumption region, the customer is charged based on the pricing

in region 1 .

Changes to Copilot in Fabric consumption rate


Consumption rates are subject to change at any time. Microsoft uses reasonable efforts
to provide notice via email or through in-product notification. Changes shall be effective
on the date stated in Microsoft’s Release Notes or Microsoft Fabric Blog. If any change
to a Copilot in Fabric Consumption Rate materially increases the Capacity Units (CU)
required to use Copilot in Fabric, customers can use the cancellation options available
for the chosen payment method.

Related content
Overview of Copilot in Fabric
Copilot in Fabric: FAQ
AI services in Fabric (preview)



Direct Lake overview
Article • 01/26/2025

Direct Lake is a storage mode option for tables in a Power BI semantic model that's
stored in a Microsoft Fabric workspace. It's optimized for large volumes of data that can
be quickly loaded into memory from Delta tables, which store their data in Parquet files
in OneLake—the single store for all analytics data. Once loaded into memory, the
semantic model enables high performance queries. Direct Lake eliminates the slow and
costly need to import data into the model.

You can use Direct Lake storage mode to connect to the tables or views of a single
Fabric lakehouse or Fabric warehouse. Both of these Fabric items and Direct Lake
semantic models require a Fabric capacity license.
In some ways, a Direct Lake semantic model is similar to an Import semantic model.
That's because model data is loaded into memory by the VertiPaq engine for fast query
performance (except in the case of DirectQuery fallback, which is explained later in this
article).

However, a Direct Lake semantic model differs from an Import semantic model in an
important way. That's because a refresh operation for a Direct Lake semantic model is
conceptually different from a refresh operation for an Import semantic model. For a Direct
Lake semantic model, a refresh involves a framing operation (described later in this
article), which can take a few seconds to complete. It's a low-cost operation where the
semantic model analyzes the metadata of the latest version of the Delta tables and is
updated to reference the latest files in OneLake. In contrast, for an Import semantic
model, a refresh produces a copy of the data, which can take considerable time and
consume significant data source and capacity resources (memory and CPU).

7 Note

Incremental refresh for an Import semantic model can help to reduce refresh time
and use of capacity resources.

When should you use Direct Lake storage mode?
The primary use case for Direct Lake storage mode is typically IT-driven analytics
projects that use lake-centric architectures. In this scenario, you have—or expect to
accumulate—large volumes of data in OneLake. The fast loading of that data into
memory, frequent and fast refresh operations, efficient use of capacity resources, and
fast query performance are all important for this use case.

7 Note

Import and DirectQuery semantic models are still relevant in Fabric, and they're the
right choice of semantic model for some scenarios. For example, Import storage
mode often works well for a self-service analyst who needs the freedom and agility
to act quickly, and without dependency on IT to add new data elements.

Also, OneLake integration automatically writes data for tables in Import storage
mode to Delta tables in OneLake without involving any migration effort. By using
this option, you can realize many of the benefits of Fabric that are made available
to Import semantic model users, such as integration with lakehouses through
shortcuts, SQL queries, notebooks, and more. We recommend that you consider
this option as a quick way to reap the benefits of Fabric without necessarily or
immediately re-designing your existing data warehouse and/or analytics system.

Direct Lake storage mode is also suitable for minimizing data latency to quickly make
data available to business users. If your Delta tables are modified intermittently (and
assuming you already did data preparation in the data lake), you can depend on
automatic updates to reframe in response to those modifications. In this case, queries
sent to the semantic model return the latest data. This capability works well in
partnership with the automatic page refresh feature of Power BI reports.
Keep in mind that Direct Lake depends on data preparation being done in the data lake.
Data preparation can be done by using various tools, such as Spark jobs for Fabric
lakehouses, T-SQL DML statements for Fabric warehouses, dataflows, pipelines, and
others. This approach helps ensure data preparation logic is performed as low as
possible in the architecture to maximize reusability. However, if the semantic model
author doesn't have the ability to modify the source item, for example, if a self-service
analyst doesn't have write permissions on a lakehouse that is managed by IT, then
Import storage mode might be a better choice. That's because it supports data
preparation by using Power Query, which is defined as part of semantic model.

Be sure to factor in your current Fabric capacity license and the Fabric capacity
guardrails when you consider Direct Lake storage mode. Also, factor in the
considerations and limitations, which are described later in this article.

 Tip

We recommend that you produce a prototype—or proof of concept (POC)—to determine whether a Direct Lake semantic model is the right solution, and to mitigate risk.

How Direct Lake works


Typically, queries sent to a Direct Lake semantic model are handled from an in-memory
cache of the columns sourced from Delta tables. The underlying storage for a Delta
table is one or more Parquet files in OneLake. Parquet files organize data by columns
rather than rows. Semantic models load entire columns from Delta tables into memory
as they're required by queries.

A Direct Lake semantic model might also use DirectQuery fallback, which involves
seamlessly switching to DirectQuery mode. DirectQuery fallback retrieves data directly
from the SQL analytics endpoint of the lakehouse or the warehouse. For example,
fallback might occur when a Delta table contains more rows of data than supported by
your Fabric capacity (described later in this article). In this case, a DirectQuery operation
sends a query to the SQL analytics endpoint. Fallback operations might result in slower
query performance.

The following diagram shows how Direct Lake works by using the scenario of a user who
opens a Power BI report.
The diagram depicts the following user actions, processes, and features.


OneLake is a data lake that stores analytics data in Parquet format. This file format is
optimized for storing data for Direct Lake semantic models.

A Fabric lakehouse or Fabric warehouse exists in a workspace that's on Fabric capacity. The
lakehouse has a SQL analytics endpoint, which provides a SQL-based experience for
querying. Tables (or views) provide a means to query the Delta tables in OneLake by using
Transact-SQL (T-SQL).

A Direct Lake semantic model exists in a Fabric workspace. It connects to tables or views in
either the lakehouse or warehouse.
A user opens a Power BI report.

The Power BI report sends Data Analysis Expressions (DAX) queries to the Direct Lake
semantic model.

When possible (and necessary), the semantic model loads columns into memory directly
from the Parquet files stored in OneLake. Queries achieve in-memory performance, which
is very fast.

The semantic model returns query results.

The Power BI report renders the visuals.

In certain circumstances, such as when the semantic model exceeds the guardrails of the
capacity, semantic model queries automatically fall back to DirectQuery mode. In this
mode, queries are sent to the SQL analytics endpoint of the lakehouse or warehouse.

DirectQuery queries sent to the SQL analytics endpoint in turn query the Delta tables in
OneLake. For this reason, query performance might be slower than in-memory queries.

The following sections describe Direct Lake concepts and features, including column
loading, framing, automatic updates, and DirectQuery fallback.

Column loading (transcoding)


Direct Lake semantic models only load data from OneLake as and when columns are
queried for the first time. The process of loading data on-demand from OneLake is
known as transcoding.

When the semantic model receives a DAX (or Multidimensional Expressions—MDX) query, it first determines what columns are needed to produce a query result. Any
column directly used by the query is needed, and also columns required by relationships
and measures. Typically, the number of columns needed to produce a query result is
significantly smaller than the number of columns defined in the semantic model.

Once it understands which columns are needed, the semantic model determines which
columns are already in memory. If any columns needed for the query aren't in memory,
the semantic model loads all data for those columns from OneLake. Loading column
data is typically a fast operation, however it can depend on factors such as the
cardinality of data stored in the columns.

Columns loaded into memory are then resident in memory. Future queries that involve
only resident columns don't need to load any more columns into memory.
A column remains resident until there's reason for it to be removed (evicted) from
memory. Reasons that columns might get removed include:

The model or table was refreshed after a Delta table update at the source (see
Framing in the next section).
No query used the column for some time.
Other memory management reasons, including memory pressure in the capacity
due to other, concurrent operations.

Your choice of Fabric SKU determines the maximum available memory for each Direct
Lake semantic model on the capacity. For more information about resource guardrails
and maximum memory limits, see Fabric capacity guardrails and limitations later in this
article.

Framing
Framing provides model owners with point-in-time control over what data is loaded into
the semantic model. Framing is a Direct Lake operation triggered by a refresh of a
semantic model, and in most cases takes only a few seconds to complete. That's
because it's a low-cost operation where the semantic model analyzes the metadata of
the latest version of the Delta Lake tables and is updated to reference the latest Parquet
files in OneLake.

When framing occurs, resident table column segments and dictionaries might be evicted
from memory if the underlying data has changed and the point in time of the refresh
becomes the new baseline for all future transcoding events. From this point, Direct Lake
queries only consider data in the Delta tables as of the time of the most recent framing
operation. For that reason, Direct Lake tables are queried to return data based on the
state of the Delta table at the point of the most recent framing operation. That time isn't
necessarily the latest state of the Delta tables.

Note that the semantic model analyzes the Delta log of each Delta table during framing
to drop only the affected column segments and to reload newly added data during
transcoding. An important optimization is that dictionaries will usually not be dropped
when incremental framing takes effect, and new values are added to the existing
dictionaries. This incremental framing approach helps to reduce the reload burden and
benefits query performance. In the ideal case, when a Delta table received no updates,
no reload is necessary for columns already resident in memory and queries show far less
performance impact after framing because incremental framing essentially enables the
semantic model to update substantial portions of the existing in-memory data in place.

The following diagram shows how Direct Lake framing operations work.
The diagram depicts the following processes and features.


A semantic model exists in a Fabric workspace.

Framing operations take place periodically, and they set the baseline for all future
transcoding events. Framing operations can happen automatically, manually, on schedule,
or programmatically.

OneLake stores metadata and Parquet files, which are represented as Delta tables.

The last framing operation includes Parquet files related to the Delta tables, and
specifically the Parquet files that were added before the last framing operation.
A later framing operation includes Parquet files added after the last framing operation.

Resident columns in the Direct Lake semantic model might be evicted from memory, and
the point in time of the refresh becomes the new baseline for all future transcoding
events.

Subsequent data modifications, represented by new Parquet files, aren't visible until the
next framing operation occurs.

It's not always desirable to have data representing the latest state of any Delta table
when a transcoding operation takes place. Consider that framing can help you provide
consistent query results in environments where data in Delta tables is transient. Data can
be transient for several reasons, such as when long-running extract, transform, and load
(ETL) processes occur.

Refresh for a Direct Lake semantic model can be done manually, automatically, or
programmatically. For more information, see Refresh Direct Lake semantic models.

For more information about Delta table versioning and framing, see Understand storage
for Direct Lake semantic models.

Automatic updates
There's a semantic model-level setting to automatically update Direct Lake tables. It's
enabled by default. It ensures that data changes in OneLake are automatically reflected
in the Direct Lake semantic model. You should disable automatic updates when you
want to control data changes by framing, which was explained in the previous section.
For more information, see Manage Direct Lake semantic models.

 Tip

You can set up automatic page refresh in your Power BI reports. It's a feature that automatically refreshes a specific report page, provided that the report connects to a Direct Lake semantic model (or other types of semantic model).

DirectQuery fallback
A query sent to a Direct Lake semantic model can fall back to DirectQuery mode. In this
case, it retrieves data directly from the SQL analytics endpoint of the lakehouse or
warehouse. Such queries always return the latest data because they're not constrained
to the point in time of the last framing operation.

A query always falls back when the semantic model queries a view in the SQL analytics
endpoint, or a table in the SQL analytics endpoint that enforces row-level security (RLS).

Also, a query might fall back when the semantic model exceeds the guardrails of the
capacity.

) Important

If possible, you should always design your solution—or size your capacity—to
avoid DirectQuery fallback. That's because it might result in slower query
performance.

You can control fallback of your Direct Lake semantic models by setting its
DirectLakeBehavior property. For more information, see Set the Direct Lake behavior
property.

Fabric capacity guardrails and limitations


Direct Lake semantic models require a Fabric capacity license. Also, there are capacity
guardrails and limitations that apply to your Fabric capacity subscription (SKU), as
presented in the following table.

) Important

The first column in the following table also includes Power BI Premium capacity
subscriptions (P SKUs). Be aware that Microsoft is consolidating purchase options
and retiring the Power BI Premium per capacity SKUs. New and existing customers
should consider purchasing Fabric capacity subscriptions (F SKUs) instead.

For more information, see Important update coming to Power BI Premium licensing and Power BI Premium.

Fabric SKU | Parquet files per table | Row groups per table | Rows per table (millions) | Max model size on disk/OneLake (GB) | Max memory (GB) 1
F2 | 1,000 | 1,000 | 300 | 10 | 3
F4 | 1,000 | 1,000 | 300 | 10 | 3
F8 | 1,000 | 1,000 | 300 | 10 | 3
F16 | 1,000 | 1,000 | 300 | 20 | 5
F32 | 1,000 | 1,000 | 300 | 40 | 10
F64/FT1/P1 | 5,000 | 5,000 | 1,500 | Unlimited | 25
F128/P2 | 5,000 | 5,000 | 3,000 | Unlimited | 50
F256/P3 | 5,000 | 5,000 | 6,000 | Unlimited | 100
F512/P4 | 10,000 | 10,000 | 12,000 | Unlimited | 200
F1024/P5 | 10,000 | 10,000 | 24,000 | Unlimited | 400
F2048 | 10,000 | 10,000 | 24,000 | Unlimited | 400

1 For Direct Lake semantic models, Max memory represents the upper memory resource
limit for how much data can be paged in. For this reason, it's not a guardrail because
exceeding it doesn't result in a fallback to DirectQuery mode; however, it can have a
performance impact if the amount of data is large enough to cause excessive paging in
and out of the model data from the OneLake data.

If exceeded, the Max model size on disk/OneLake causes all queries to the semantic
model to fall back to DirectQuery mode. All other guardrails presented in the table are
evaluated per query. It's therefore important that you optimize your Delta tables and
Direct Lake semantic model to avoid having to unnecessarily scale up to a higher Fabric
SKU (resulting in increased cost).

Additionally, Capacity unit and Max memory per query limits apply to Direct Lake
semantic models. For more information, see Capacities and SKUs.
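As a rough planning aid, the guardrails in the preceding table can be encoded and compared against the statistics of a Delta table. The following Python sketch is not part of any Fabric tooling; the SKU values mirror the table above, and the table statistics passed in are hypothetical numbers you would gather yourself (for example, from the Delta log).

```python
# Hedged sketch: check hypothetical Delta table statistics against the
# per-query Direct Lake guardrails listed in the table above.

GUARDRAILS = {
    # SKU: (Parquet files per table, row groups per table, rows per table in millions)
    "F8": (1_000, 1_000, 300),
    "F64": (5_000, 5_000, 1_500),
    "F128": (5_000, 5_000, 3_000),
}

def exceeds_guardrails(sku: str, parquet_files: int, row_groups: int,
                       rows_millions: float) -> bool:
    """Return True if any guardrail for the given SKU is exceeded."""
    max_files, max_row_groups, max_rows = GUARDRAILS[sku]
    return (parquet_files > max_files
            or row_groups > max_row_groups
            or rows_millions > max_rows)

# Hypothetical statistics for a single Delta table.
if exceeds_guardrails("F64", parquet_files=6_200, row_groups=4_100, rows_millions=900):
    print("Queries on this table would fall back to DirectQuery mode on an F64 capacity.")
```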

Considerations and limitations


Direct Lake semantic models present some considerations and limitations.

7 Note
The capabilities and features of Direct Lake semantic models are evolving. Be sure
to check back periodically to review the latest list of considerations and limitations.

When a Direct Lake semantic model table connects to a table in the SQL analytics
endpoint that enforces row-level security (RLS), queries that involve that model
table will always fall back to DirectQuery mode. Query performance might be
slower.
When a Direct Lake semantic model table connects to a view in the SQL analytics
endpoint, queries that involve that model table will always fall back to DirectQuery
mode. Query performance might be slower.
Composite modeling isn't supported. That means Direct Lake semantic model
tables can't be mixed with tables in other storage modes, such as Import,
DirectQuery, or Dual (except for special cases, including calculation groups, what-if
parameters, and field parameters).
Calculated columns and calculated tables that reference columns or tables in Direct
Lake storage mode aren't supported. Calculation groups, what-if parameters, and
field parameters, which implicitly create calculated tables, and calculated tables
that don't reference Direct Lake columns or tables are supported.
Direct Lake storage mode tables don't support complex Delta table column types.
Binary and GUID semantic types are also unsupported. You must convert these
data types into strings or other supported data types.
Table relationships require the data types of related columns to match.
One-side columns of relationships must contain unique values. Queries fail if
duplicate values are detected in a one-side column.
Auto date/time intelligence in Power BI Desktop isn't supported. Marking your own date table as a date table is supported.
The length of string column values is limited to 32,764 Unicode characters.
The floating point value NaN (not a number) isn't supported.
Publish to web from Power BI using a service principal is only supported when
using a fixed identity for the Direct Lake semantic model.
In the web modeling experience, validation is limited for Direct Lake semantic
models. User selections are assumed to be correct, and no queries are issued to
validate cardinality or cross filter selections for relationships, or for the selected
date column in a marked date table.
In the Fabric portal, the Direct Lake tab in the refresh history lists only Direct Lake-
related refresh failures. Successful refresh (framing) operations aren't listed.
Your Fabric SKU determines the maximum available memory per Direct Lake
semantic model for the capacity. When the limit is exceeded, queries to the
semantic model might be slower due to excessive paging in and out of the model
data.
Creating a Direct Lake semantic model in a workspace that is in a different region from the data source workspace isn't supported. For example, if the Lakehouse is in
West Central US, then you can only create semantic models from this Lakehouse in
the same region. A workaround is to create a Lakehouse in the other region's
workspace and shortcut to the tables before creating the semantic model. To find
what region you are in, see find your Fabric home region.
You can create and view a custom Direct Lake semantic model using a Service
Principal identity, but the default Direct Lake semantic model does not support
Service Principals. Make sure service principal authentication is enabled for Fabric
REST APIs in your tenant and grant the service principal Contributor or higher
permissions to the workspace of your Direct Lake semantic model.
Embedding reports requires a V2 embed token.
Direct Lake does not support service principal profiles for authentication.
Customized Direct Lake semantic models created by Service Principal and viewer
with Service Principal are supported, but default Direct Lake semantic models are
not supported.

Comparison to other storage modes


The following table compares Direct Lake storage mode to Import and DirectQuery
storage modes.

Capability | Direct Lake | Import | DirectQuery
Licensing | Fabric capacity subscription (SKUs) only | Any Fabric or Power BI license (including Microsoft Fabric Free licenses) | Any Fabric or Power BI license (including Microsoft Fabric Free licenses)
Data source | Only lakehouse or warehouse tables (or views) | Any connector | Any connector that supports DirectQuery mode
Connect to SQL analytics endpoint views | Yes – but will automatically fall back to DirectQuery mode | Yes | Yes
Composite models | No 1 | Yes – can combine with DirectQuery or Dual storage mode tables | Yes – can combine with Import or Dual storage mode tables
Single sign-on (SSO) | Yes | Not applicable | Yes
Calculated tables | No – except calculation groups, what-if parameters, and field parameters, which implicitly create calculated tables | Yes | No – calculated tables use Import storage mode even when they refer to other tables in DirectQuery mode
Calculated columns | No | Yes | Yes
Hybrid tables | No | Yes | Yes
Model table partitions | No – however partitioning can be done at the Delta table level | Yes – either automatically created by incremental refresh, or manually created by using the XMLA endpoint | No
User-defined aggregations | No | Yes | Yes
SQL analytics endpoint object-level security or column-level security | Yes – but queries will fall back to DirectQuery mode and might produce errors when permission is denied | Yes – but must duplicate permissions with semantic model object-level security | Yes – but queries might produce errors when permission is denied
SQL analytics endpoint row-level security (RLS) | Yes – but queries will fall back to DirectQuery mode | Yes – but must duplicate permissions with semantic model RLS | Yes
Semantic model row-level security (RLS) | Yes – but it's strongly recommended to use a fixed identity cloud connection | Yes | Yes
Semantic model object-level security (OLS) | Yes | Yes | Yes
Large data volumes without refresh requirement | Yes | Less suited – a larger capacity size might be required for querying and refreshing | Yes
Reduce data latency | Yes – when automatic updates is enabled, or programmatic reframing; however, data preparation must be done upstream first | No | Yes
Power BI Embedded | Yes 2 | Yes | Yes

1 You can't combine Direct Lake storage mode tables with DirectQuery or Dual storage
mode tables in the same semantic model. However, you can use Power BI Desktop to
create a composite model on a Direct Lake semantic model and then extend it with new
tables (by using Import, DirectQuery, or Dual storage mode) or calculations. For more
information, see Build a composite model on a semantic model.

2 Requires a V2 embed token. If you're using a service principal, you must use a fixed
identity cloud connection.

Related content
Develop Direct Lake semantic models
Manage Direct Lake semantic models
Understand storage for Direct Lake semantic models
Create a lakehouse for Direct Lake
Analyze query processing for Direct Lake semantic models



Develop Direct Lake semantic models
Article • 01/26/2025

This article describes design topics relevant to developing Direct Lake semantic models.

Create the model


You use the Fabric portal to create a Direct Lake semantic model in a workspace. It's a
simple process that involves selecting which tables from a single lakehouse or
warehouse to add to the semantic model.

You can then use the web modeling experience to further develop the semantic model.
This experience allows you to create relationships between tables, create measures and
calculation groups, mark date tables, and set properties for the model and its objects (like
column formats). You can also set up model row-level security (RLS) by defining roles
and rules, and by adding members (Microsoft Entra user accounts or security groups) to
those roles.

Alternatively, you can continue the development of your model by using an XMLA-
compliant tool, like SQL Server Management Studio (SSMS) (version 19.1 or later) or
open-source, community tools. For more information, see Model write support with the
XMLA endpoint later in this article.

 Tip

You can learn how to create a lakehouse, a Delta table, and a basic Direct Lake
semantic model by completing this tutorial.

Model tables
Model tables are based on either a table or a view of the SQL analytics endpoint.
However, avoid using views whenever possible. That's because queries to a model table
based on a view will always fall back to DirectQuery mode, which might result in slower
query performance.

Tables should include columns for filtering, grouping, sorting, and summarizing, in
addition to columns that support model relationships. While unnecessary columns don't
affect semantic model query performance (because they won't be loaded into memory),
they result in a larger storage size in OneLake and require more compute resources to
load and maintain.
2 Warning

Using columns that apply dynamic data masking (DDM) in Direct Lake semantic
models is not supported.

To learn how to select which tables to include in your Direct Lake semantic model, see
Edit tables for Direct Lake semantic models.

For more information about columns to include in your semantic model tables, see
Understand storage for Direct Lake semantic models.

Enforce data-access rules


When you have requirements to deliver subsets of model data to different users, you
can enforce data-access rules. You enforce rules by setting up object-level security (OLS)
and/or row-level security (RLS) in the SQL analytics endpoint or in the semantic model.

7 Note

The topic of enforcing data-access rules is different, yet related, to setting permissions for content consumers, creators, and users who will manage the
semantic model (and related Fabric items). For more information about setting
permissions, see Manage Direct Lake semantic models.

Object-level security (OLS)


OLS involves restricting access to discover and query objects or columns. For example,
you might use OLS to limit the users who can access the Salary column from the
Employee table.

For a SQL analytics endpoint, you can set up OLS to control access to the endpoint
objects, such as tables or views, and column-level security (CLS) to control access to
endpoint table columns.

For a semantic model, you can set up OLS to control access to model tables or columns.
You need to use open-source, community tools like Tabular Editor to set up OLS.

Row-level security (RLS)


RLS involves restricting access to subsets of data in tables. For example, you might use
RLS to ensure that salespeople can only access sales data for customers in their sales
region.

For a SQL analytics endpoint, you can set up RLS to control access to rows in an
endpoint table.

) Important

When a query uses any table that has RLS in the SQL analytics endpoint, it will fall
back to DirectQuery mode. Query performance might be slower.

For a semantic model, you can set up RLS to control access to rows in model tables. RLS
can be set up in the web modeling experience or by using a third-party tool.

How queries are evaluated


The reason to develop Direct Lake semantic models is to achieve high performance
queries over large volumes of data in OneLake. Therefore, you should strive to design a
solution that maximizes the chances of in-memory querying.

The following steps approximate how queries are evaluated (and whether they fail). The
benefits of Direct Lake storage mode are only possible when the fifth step is achieved.

1. If the query contains any table or column that's restricted by semantic model OLS,
an error result is returned (report visual will fail to render).
2. If the query contains any column that's restricted by SQL analytics endpoint CLS (or
the table is denied), an error result is returned (report visual will fail to render).
a. If the cloud connection uses SSO (default), CLS is determined by the access level
of the report consumer.
b. If the cloud connection uses a fixed identity, CLS is determined by the access
level of the fixed identity.
3. If the query contains any table in the SQL analytics endpoint that enforces RLS or a
view is used, the query falls back to DirectQuery mode.
a. If the cloud connection uses SSO (default), RLS is determined by the access level
of the report consumer.
b. If the cloud connection uses a fixed identity, RLS is determined by the access
level of the fixed identity.
4. If the query exceeds the guardrails of the capacity, it falls back to DirectQuery
mode.
5. Otherwise, the query is satisfied from the in-memory cache. Column data is loaded
into memory as and when it's required.
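To make the order of these checks concrete, here's a small Python restatement of the five steps. It isn't the engine's implementation; the flag names are invented for illustration only.

```python
# Illustrative restatement of the query evaluation steps above (not engine code).

def evaluate_query(blocked_by_model_ols: bool,
                   blocked_by_endpoint_cls: bool,
                   uses_endpoint_rls_or_view: bool,
                   exceeds_capacity_guardrails: bool) -> str:
    """Return the outcome of a query, following the five steps described above."""
    if blocked_by_model_ols:
        return "Error: restricted by semantic model OLS (visual fails to render)."
    if blocked_by_endpoint_cls:
        return "Error: restricted by SQL analytics endpoint CLS (visual fails to render)."
    if uses_endpoint_rls_or_view:
        return "Fallback: query runs in DirectQuery mode."
    if exceeds_capacity_guardrails:
        return "Fallback: query runs in DirectQuery mode."
    return "In-memory: query is satisfied from the Direct Lake cache."

print(evaluate_query(False, False, False, False))   # the desired in-memory path
```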

Source item permissions


The account used to access data is one of the following.

If the cloud connection uses SSO (default), it is the report consumer.


If the cloud connection uses a fixed identity, it is the fixed identity.

The account must at least have Read and ReadData permissions on the source item
(lakehouse or warehouse). Item permissions can be inherited from workspace roles or
assigned explicitly for the item as described in this article.

Assuming this requirement is met, Fabric grants the necessary access to the semantic
model to read the Delta tables and associated Parquet files (to load column data into
memory) and data-access rules can be applied.

Data-access rule options


You can set up data-access rules in:

The semantic model only.


The SQL analytics endpoint only.
In both the semantic model and the SQL analytics endpoint.

Rules in the semantic model


If you must enforce data-access rules, you should do so in the semantic model
whenever viable. That's because RLS enforced by the semantic model is achieved by
filtering the in-memory cache of data to achieve high performance queries.

It's also a suitable approach when report consumers aren't granted permission to query
the lakehouse or warehouse.

In either case, it's strongly recommended that the cloud connection uses a fixed identity
instead of SSO. SSO would imply that end users can access the SQL analytics endpoint
directly and might therefore bypass security rules in the semantic model.

) Important

Semantic model item permissions can be set explicitly via Power BI apps, or
acquired implicitly via workspace roles.
Notably, semantic model data-access rules are not enforced for users who have
Write permission on the semantic model. Conversely, data-access rules do apply to
users who are assigned to the Viewer workspace role. However, users assigned to
the Admin, Member, or Contributor workspace role implicitly have Write permission
on the semantic model and so data-access rules are not enforced. For more
information, see Roles in workspaces.

Rules in the SQL analytics endpoint


It's appropriate to enforce data-access rules in the SQL analytics endpoint when the
semantic model cloud connection uses single sign-on (SSO). That's because the identity
of the user is delegated to query the SQL analytics endpoint, ensuring that queries
return only the data the user is allowed to access. It's also appropriate to enforce data-
access rules at this level when users will query the SQL analytics endpoint directly for
other workloads (for example, to create a Power BI paginated report, or export data).

Notably, however, a semantic model query will fall back to DirectQuery mode when it
includes any table that enforces RLS in the SQL analytics endpoint. Consequently, the
semantic model might never cache data into memory to achieve high performance
queries.

Rules at both layers


Data-access rules can be enforced at both layers. However, this approach involves extra
complexity and management overhead. In this case, it's strongly recommended that the
cloud connection uses a fixed identity instead of SSO.

Comparison of data-access rule options


The following table compares the data-access setup options.

Apply data-access rules to | Comment
Semantic model only | Use this option when users aren't granted item permissions to query the lakehouse or warehouse. Set up the cloud connection to use a fixed identity. High query performance can be achieved from the in-memory cache.
SQL analytics endpoint only | Use this option when users need to access data from either the warehouse or the semantic model, and with consistent data-access rules. Ensure SSO is enabled for the cloud connection. Query performance might be slow.
Lakehouse or warehouse and semantic model | This option involves extra management overhead. Set up the cloud connection to use a fixed identity.

Recommended practices for enforcing data-access rules


Here are recommended practices related to enforcing data-access rules:

If different users must be restricted to subsets of data, whenever viable, enforce RLS only at the semantic model layer. That way, users will benefit from high
performance in-memory queries. In this case, it's strongly recommended that the
cloud connection uses a fixed identity instead of SSO.
If possible, avoid enforcing OLS and CLS at either layer because it results in errors
in report visuals. Errors can lead to confusion or concern for users. For
summarizable columns, consider creating measures that return BLANK in certain
conditions instead of CLS (if possible).

Model write support with the XMLA endpoint


Direct Lake semantic models support write operations with the XMLA endpoint by using
tools such as SSMS (19.1 or later), and open-source, community tools.

 Tip

For more information about using third-party tools to develop, manage, or optimize semantic models, see the advanced data model management usage
scenario.

Before you can perform write operations, the XMLA read-write option must be enabled
for the capacity. For more information, see Enable XMLA read-write.

Model write operations with the XMLA endpoint support:

Customizing, merging, scripting, debugging, and testing Direct Lake model metadata.
Source and version control, continuous integration and continuous deployment
(CI/CD) with Azure DevOps and GitHub. For more information, see Content
lifecycle management.
Automation tasks like semantic model refresh, and applying changes to Direct Lake
semantic models by using PowerShell and the REST APIs.

When changing a semantic model using XMLA, you must update the ChangedProperties and PBI_RemovedChildren collections for the changed object to include any modified or
removed properties. If you don't perform that update, Power BI modeling tools might
overwrite any changes the next time the schema is synchronized with the Lakehouse.

Learn more about semantic model object lineage tags in the lineage tags for Power BI
semantic models article.

) Important

Direct Lake tables created by using XMLA applications will initially be in an unprocessed state until the application sends a refresh command. Queries that
involve unprocessed tables will always fall back to DirectQuery mode. So, when you
create a new semantic model, be sure to refresh the model to process its tables.

For more information, see Semantic model connectivity with the XMLA endpoint.

Direct Lake model metadata


When you connect to a Direct Lake semantic model with the XMLA endpoint, the
metadata looks like that of any other model. However, Direct Lake models show the
following differences:

The compatibilityLevel property of the database object is 1604 (or higher).


The mode property of Direct Lake partitions is set to directLake .
Direct Lake partitions use shared expressions to define data sources. The
expression points to the SQL analytics endpoint of the lakehouse or warehouse.
Direct Lake uses the SQL analytics endpoint to discover schema and security
information, but it loads the data directly from OneLake (unless it falls back to
DirectQuery mode for any reason).

Post-publication tasks
After you publish a Direct Lake semantic model, you should complete some setup tasks.
For more information, see Manage Direct Lake semantic models.
Unsupported features
The following model features aren't supported by Direct Lake semantic models:

Calculated tables referencing tables or columns in Direct Lake storage mode


Calculated columns referencing tables or columns in Direct Lake storage mode
Hybrid tables
User-defined aggregations
Composite models, in that you can't combine Direct Lake storage mode tables with
DirectQuery or Dual storage mode tables in the same model. However, you can use
Power BI Desktop to create a live connection to a Direct Lake semantic model and
then extend it with new measures, and from there you can click the option to make
changes to this model to add new tables (using Import, DirectQuery, or Dual
storage mode). This action creates a DirectQuery connection to the semantic
model in Direct Lake mode, so the tables show as DirectQuery storage mode, but
this storage mode is not indicating fallback to DirectQuery. Only the connection
between this new model and the Direct Lake model is DirectQuery and queries still
utilize Direct Lake to get data from OneLake. For more information, see Build a
composite model on a semantic model.
Columns based on SQL analytics endpoint columns that apply dynamic data
masking.

Related content
Direct Lake overview
Manage Direct Lake semantic models
Understand storage for Direct Lake semantic models
Create a lakehouse for Direct Lake
Edit tables for Direct Lake semantic models
OneLake integration for semantic models



Manage Direct Lake semantic models
Article • 01/26/2025

This article describes design topics relevant to managing Direct Lake semantic models.

Post-publication tasks
After you first publish a Direct Lake semantic model ready for reporting, you should
immediately complete some post-publication tasks. These tasks can also be adjusted at
any time during the lifecycle of the semantic model.

Set up the cloud connection


Manage security role membership
Set Fabric item permissions
Set up scheduled refresh

Optionally, you can also set up data discovery to allow report creators to read metadata,
helping them to discover data in the OneLake data hub and request access to it. You can
also endorse (certified or promoted) the semantic model to communicate that it
represents quality data fit for use.

Set up the cloud connection


A Direct Lake semantic model uses a cloud connection to connect to the SQL analytics
endpoint. It enables access to source data, which is either the Parquet files in OneLake
(Direct Lake storage mode, which involves loading column data into memory) or the
SQL analytics endpoint (when queries fall back to DirectQuery mode).

Default cloud connection


When you create a Direct Lake semantic model, the default cloud connection is used. It
leverages single sign-on (SSO), which means that the identity that queries the semantic
model (often a report user) is used to query the SQL analytics endpoint data.

Sharable cloud connection


Optionally, you can create a sharable cloud connection (SCC) so that connections to the
data source can be made with a fixed identity. It can help enterprise customers protect
their organizational data stores. The IT department can manage credentials, create SCCs,
and share them with the intended creators for centralized access management.

To set up a fixed identity, see Specify a fixed identity for a Direct Lake semantic model.

Authentication
The fixed identity can authenticate either by using OAuth 2.0 or Service principal.

7 Note

Only Microsoft Entra authentication is supported. Therefore, Basic authentication isn't supported for Direct Lake semantic models.

OAuth 2.0

When you use OAuth 2.0, you can authenticate with a Microsoft Entra user account. The
user account must have permission to query the SQL analytics endpoint tables and
views, and schema metadata.

Using a specific user account isn't a recommended practice. That's because semantic
model queries will fail should the password change or the user account be deleted (like
when an employee leaves the organization).

Service principal

Authenticating with a service principal is the recommended practice because it's not
dependent on a specific user account. The security principal must have permission to
query the SQL analytics endpoint tables and views, and schema metadata.

For continuity, the service principal credentials can be managed by secret/certificate rotation.

7 Note

The Fabric tenant settings must allow service principals, and the service principal
must belong to a declared security group.

Single sign-on
When you create a sharable cloud connection, the Single Sign-On checkbox is
unchecked by default. That's the correct setup when using a fixed identity.

You can enable SSO when you want the identity that queries the semantic model to also
query the SQL analytics endpoint. In this configuration, the Direct Lake semantic model
will use the fixed identity to refresh the model and the user identity to query data.

When using a fixed identity, it's common practice to disable SSO so that the fixed
identity is used for both refreshes and queries, but there's no technical requirement to
do so.

Recommended practices for cloud connections


Here are recommended practices related to cloud connections:

When all users can access the data (and have permission to do so), there's no need
to create a shared cloud connection. Instead, the default cloud connection settings
can be used. In this case, the identity of the user who queries the model will be
used should queries fall back to DirectQuery mode.
Create a shared cloud connection when you want to use a fixed identity to query
source data. That could be because the users who query the semantic model aren't
granted permission to read the lakehouse or warehouse. This approach is
especially relevant when the semantic model enforces RLS.
If you use a fixed identity, use the Service principal option because it's more secure
and reliable. That's because it doesn't rely on a single user account or their
permissions, and it won't require maintenance (and disruption) should they change
their password or leave the organization.
If different users must be restricted to access only subsets of data, if viable, enforce
RLS at the semantic model layer only. That way, users will benefit from high
performance in-memory queries.
If possible, avoid OLS and CLS because it results in errors in report visuals. Errors
can create confusion or concern for users. For summarizable columns, consider
creating measures that return BLANK in certain conditions instead of CLS (if
possible).

Manage security role membership


If your Direct Lake semantic model enforces row-level security (RLS), you might need to
manage the members that are assigned to the security roles. For more information, see
Manage security on your model.
Set Fabric item permissions
Direct Lake semantic models adhere to a layered security model. They perform
permission checks via the SQL analytics endpoint to determine whether the identity
attempting to access the data has the necessary data access permissions.

You must grant permissions to users so that they can use or manage the Direct Lake
semantic model. In short, report consumers need Read permission, and report creators
need Build permission. Semantic model permissions can be assigned directly or acquired
implicitly via workspace roles. To manage the semantic model settings (for refresh and
other configurations), you must be the semantic model owner.

Depending on the cloud connection set up, and whether users need to query the
lakehouse or the warehouse SQL analytics endpoint, you might need to grant other
permissions (described in the table in this section).

7 Note

Notably, users don't ever require permission to read data in OneLake. That's
because Fabric grants the necessary permissions to the semantic model to read the
Delta tables and associated Parquet files (to load column data into memory). The
semantic model also has the necessary permissions to periodically read the SQL
analytics endpoint to perform permission checks to determine what data the
querying user (or fixed identity) can access.

Consider the following scenarios and permission requirements.

Scenario | Required permissions | Comments
Users can view reports | • Grant Read permission for the reports and Read permission for the semantic model. • If the cloud connection uses SSO, grant at least Read permission for the lakehouse or warehouse. | Reports don't need to belong to the same workspace as the semantic model. For more information, see Strategy for read-only consumers.
Users can create reports | • Grant Build permission for the semantic model. • If the cloud connection uses SSO, grant at least Read permission for the lakehouse or warehouse. | For more information, see Strategy for content creators.
Users can query the semantic model but are denied querying the lakehouse or SQL analytics endpoint | • Don't grant any permission for the lakehouse or warehouse. | Only suitable when the cloud connection uses a fixed identity.
Users can query the semantic model and the SQL analytics endpoint but are denied querying the lakehouse | • Grant Read and ReadData permissions for the lakehouse or warehouse. | Important: Queries sent to the SQL analytics endpoint will bypass data access permissions enforced by the semantic model.
Manage the semantic model, including refresh settings | • Requires semantic model ownership. | For more information, see Semantic model ownership.

) Important

You should always thoroughly test permissions before releasing your semantic
model and reports into production.

For more information, see Semantic model permissions.

Refresh Direct Lake semantic models


A refresh of a Direct Lake semantic model results in a framing operation. A refresh
operation can be triggered:

Manually, by doing an on-demand refresh in the Fabric portal, or by executing the


Tabular Model Scripting Language (TMSL) Refresh command from a script in SQL
Server Management Studio (SSMS), or by using a third-party tool that connects via
the XMLA endpoint.
Automatically, by setting up a refresh schedule in the Fabric portal.
Automatically, when changes are detected in the underlying Delta tables—for
more information, see Automatic updates (described next).
Programmatically, by triggering a refresh by using the Power BI REST API or TOM.
You might trigger a programmatic refresh as a final step of an extract, transform,
and load (ETL) process.
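For example, a programmatic refresh can be triggered with the Power BI REST API. The following Python sketch assumes you already have a Microsoft Entra access token with permission to refresh the semantic model; the workspace and semantic model IDs are placeholders, and TOM, TMSL, or a Fabric pipeline would be equally valid alternatives.

```python
import requests

# Hypothetical placeholders: supply your own workspace ID, semantic model (dataset) ID,
# and a valid Microsoft Entra access token.
workspace_id = "<workspace-guid>"
dataset_id = "<semantic-model-guid>"
access_token = "<entra-access-token>"

# Datasets - Refresh Dataset In Group (Power BI REST API). For a Direct Lake
# semantic model, this triggers a framing operation.
url = (f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
       f"/datasets/{dataset_id}/refreshes")

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {access_token}"},
    json={"notifyOption": "NoNotification"},
)
response.raise_for_status()
print("Refresh (framing) request accepted:", response.status_code)
```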
Automatic updates
There's a semantic model-level setting named Keep your Direct Lake data up to date that
does automatic updates of Direct Lake tables. It's enabled by default. It ensures that
data changes in OneLake are automatically reflected in the Direct Lake semantic model.
The setting is available in the Fabric portal, in the Refresh section of the semantic model
settings.

When the setting is enabled, the semantic model performs a framing operation whenever data modifications in underlying Delta tables are detected. The framing operation is always specific to only those tables where data modifications are detected.

We recommend that you leave the setting on, especially when you have a small or
medium-sized semantic model. It's especially useful when you have low-latency
reporting requirements and Delta tables are modified regularly.

In some situations, you might want to disable automatic updates. For example, you
might need to allow completion of data preparation jobs or the ETL process before
exposing any new data to consumers of the semantic model. When disabled, you can
trigger a refresh by using a programmatic method (described earlier).

7 Note

Power BI suspends automatic updates when a non-recoverable error is encountered during refresh. A non-recoverable error can occur, for example, when a refresh fails
after several attempts. So, make sure your semantic model can be refreshed
successfully. Power BI automatically resumes automatic updates when a subsequent
on-demand refresh completes without errors.

Warm the cache


A Direct Lake semantic model refresh operation might evict all resident columns from
memory. That means the first queries after a refresh of a Direct Lake semantic model
could experience some delay as columns are loaded into memory. Delays might only be
noticeable when you have extremely large volumes of data.

To avoid such delays, consider warming the cache by programmatically sending a query
to the semantic model. A convenient way to send a query is to use semantic link. This
operation should be done immediately after the refresh operation finishes.
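For instance, a notebook step like the following could issue a lightweight DAX query through semantic link right after the refresh finishes. This is only a sketch: the model name, table, and measure are hypothetical placeholders, and it assumes the semantic link (sempy) library that Fabric notebooks provide.

```python
# Hedged sketch: warm the Direct Lake cache by querying the semantic model
# with semantic link. "Sales Model", 'Date'[Year], and [Total Sales] are placeholders.
import sempy.fabric as fabric

dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Year],
    "Total Sales", [Total Sales]
)
"""

# Running the query forces the referenced columns to be transcoded into memory.
result = fabric.evaluate_dax(dataset="Sales Model", dax_string=dax_query)
print(result.head())
```

Query only the columns that your most important report visuals use, so that warming the cache doesn't itself load unnecessary data into memory.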

) Important
Warming the cache might only make sense when delays are unacceptable. Take
care not to unnecessarily load data into memory that could place pressure on other
capacity workloads, causing them to throttle or become deprioritized.

Set the Direct Lake behavior property


You can control fallback of your Direct Lake semantic models by setting its
DirectLakeBehavior property. It can be set to:

Automatic: (Default) Queries fall back to DirectQuery mode if the required data
can't be efficiently loaded into memory.
DirectLakeOnly: All queries use Direct Lake storage mode only. Fall back to
DirectQuery mode is disabled. If data can't be loaded into memory, an error is
returned.
DirectQueryOnly: All queries use DirectQuery mode only. Use this setting to test
fallback performance, where, for instance, you can observe the query performance
in connected reports.

You can set the property in the web modeling experience, or by using Tabular Object
Model (TOM) or Tabular Model Scripting Language (TMSL).

 Tip

Consider disabling DirectQuery fallback when you want to process queries in Direct Lake storage mode only. Disabling fallback can also be helpful when you want to analyze query processing for a Direct Lake semantic model to identify if and how often fallback occurs.

Monitor Direct Lake semantic models


You can monitor a Direct Lake semantic model to determine the performance of report
visual DAX queries, or to determine when it falls back to DirectQuery mode.

You can use Performance Analyzer, SQL Server Profiler, Azure Log Analytics, or an open-
source, community tool, like DAX Studio.

Performance Analyzer
You can use Performance Analyzer in Power BI Desktop to record the processing time
required to update report elements initiated as a result of any user interaction that
results in running a query. If the monitoring results show a Direct query metric, it means
the DAX queries were processed in DirectQuery mode. In the absence of that metric, the
DAX queries were processed in Direct Lake mode.

For more information, see Analyze by using Performance Analyzer.

SQL Server Profiler


You can use SQL Server Profiler to retrieve details about query performance by tracing
query events. It's installed with SQL Server Management Studio (SSMS). Before starting,
make sure you have the latest version of SSMS installed.

For more information, see Analyze by using SQL Server Profiler.

) Important

In general, Direct Lake storage mode provides fast query performance unless a
fallback to DirectQuery mode is necessary. Because fallback to DirectQuery mode
can impact query performance, it's important to analyze query processing for a
Direct Lake semantic model to identify if, how often, and why fallbacks occur.

Azure Log Analytics


You can use Azure Log Analytics to collect, analyze, and act on telemetry data associated
with a Direct Lake semantic model. It's a service within Azure Monitor , which Power BI
uses to save activity logs.

For more information, see Using Azure Log Analytics in Power BI.

Related content
Direct Lake overview
Develop Direct Lake semantic models
Understand storage for Direct Lake semantic models
Create a lakehouse for Direct Lake
Analyze query processing for Direct Lake semantic models
Specify a fixed identity for a Direct Lake semantic model


Understand storage for Direct Lake
semantic models
Article • 01/26/2025

This article introduces Direct Lake storage concepts. It describes Delta tables and
Parquet files. It also describes how you can optimize Delta tables for Direct Lake
semantic models, and how you can maintain them to help deliver reliable, fast query
performance.

Delta tables
Delta tables exist in OneLake. They organize file-based data into rows and columns and
are available to Microsoft Fabric compute engines such as notebooks, Kusto, and the
lakehouse and warehouse. You can query Delta tables by using Data Analysis
Expressions (DAX), Multidimensional Expressions (MDX), T-SQL (Transact-SQL), Spark
SQL, and even Python.

7 Note

Delta—or Delta Lake—is an open-source storage format. That means Fabric can
also query Delta tables created by other tools and vendors.

Delta tables store their data in Parquet files, which are typically stored in a lakehouse
that a Direct Lake semantic model uses to load data. However, Parquet files can also be
stored externally. External Parquet files can be referenced by using a OneLake shortcut,
which points to a specific storage location, such as Azure Data Lake Storage (ADLS)
Gen2, Amazon S3 storage accounts, or Dataverse. In almost all cases, compute engines
access the Parquet files by querying Delta tables. However, typically Direct Lake
semantic models load column data directly from optimized Parquet files in OneLake by
using a process known as transcoding.

Data versioning
Delta tables comprise one or more Parquet files. These files are accompanied by a set of
JSON-based link files, which track the order and nature of each Parquet file that's
associated with a Delta table.

It's important to understand that the underlying Parquet files are incremental in nature.
Hence the name Delta as a reference to incremental data modification. Every time a
write operation to a Delta table takes place—such as when data is inserted, updated, or
deleted—new Parquet files are created that represent the data modifications as a
version. Parquet files are therefore immutable, meaning they're never modified. It's
therefore possible for data to be duplicated many times across a set of Parquet files for
a Delta table. The Delta framework relies on link files to determine which physical
Parquet files are required to produce the correct query result.

Consider a simple example of a Delta table that this article uses to explain different data
modification operations. The table has two columns and stores three rows.


ProductID StockOnHand

A 1

B 2

C 3

The Delta table data is stored in a single Parquet file that contains all data, and there's a
single link file that contains metadata about when the data was inserted (appended).

Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
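To see these versions accumulate yourself, you could create and modify an equivalent Delta table from a Fabric notebook. The following PySpark sketch is illustrative: the table name is arbitrary, and the statements match the insert, update, and delete operations walked through in the next sections.

```python
# Hedged sketch: reproduce the example Delta table in a Fabric notebook.
# 'spark' is the SparkSession that Fabric notebooks provide by default.
# Each data modification writes new Parquet files plus a Delta log (link file) entry.

spark.sql("""
    CREATE TABLE IF NOT EXISTS Product (ProductID STRING, StockOnHand INT)
    USING DELTA
""")

spark.sql("INSERT INTO Product VALUES ('A', 1), ('B', 2), ('C', 3)")    # initial load
spark.sql("INSERT INTO Product VALUES ('D', 4)")                        # insert operation
spark.sql("UPDATE Product SET StockOnHand = 10 WHERE ProductID = 'C'")  # update operation
spark.sql("DELETE FROM Product WHERE ProductID = 'B'")                  # delete operation

# Inspect the version history recorded in the Delta log.
spark.sql("DESCRIBE HISTORY Product").show(truncate=False)
```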

Insert operations
Consider what happens when an insert operation occurs: A new row for product D with a stock on hand value of 4 is inserted. This operation results in the creation of a new Parquet file and link file, so there are now two Parquet files and two link files.

Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Parquet file 2:
ProductID: D
StockOnHand: 4
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Link file 2:
Contains the timestamp when Parquet file 2 was created, and records that
data was appended.

At this point, a query of the Delta table returns the following result. It doesn't matter
that the result is sourced from multiple Parquet files.

ProductID    StockOnHand
A            1
B            2
C            3
D            4

Every subsequent insert operation creates new Parquet files and link files. That means
the number of Parquet files and link files grows with every insert operation.

Update operations
Now consider what happens when an update operation occurs: The row for product C
has its stock on hand value changed to 10. This operation results in the creation of a
new Parquet file and link file, so there are now three Parquet files and three link files.

Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Parquet file 2:
ProductID: D
StockOnHand: 4
Parquet file 3:
ProductID: C
StockOnHand: 10
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Link file 2:
Contains the timestamp when Parquet file 2 was created, and records that
data was appended.
Link file 3:
Contains the timestamp when Parquet file 3 was created, and records that
data was updated.

At this point, a query of the Delta table returns the following result.

ProductID    StockOnHand
A            1
B            2
C            10
D            4

Data for product C now exists in multiple Parquet files. However, queries to the Delta
table combine the link files to determine what data should be used to provide the
correct result.

Delete operations
Now consider what happens when a delete operation occurs: The row for product B is
deleted. This operation results in a new Parquet file and link file, so there are now four
Parquet files and four link files.

Parquet file 1:
ProductID: A, B, C
StockOnHand: 1, 2, 3
Parquet file 2:
ProductID: D
StockOnHand: 4
Parquet file 3:
ProductID: C
StockOnHand: 10
Parquet file 4:
ProductID: A, C, D
StockOnHand: 1, 10, 4
Link file 1:
Contains the timestamp when Parquet file 1 was created, and records that
data was appended.
Link file 2:
Contains the timestamp when Parquet file 2 was created, and records that
data was appended.
Link file 3:
Contains the timestamp when Parquet file 3 was created, and records that
data was updated.
Link file 4:
Contains the timestamp when Parquet file 4 was created, and records that
data was deleted.

Notice that Parquet file 4 no longer contains data for product B , but it does contain
data for all other rows in the table.

At this point, a query of the Delta table returns the following result.

ProductID    StockOnHand
A            1
C            10
D            4

7 Note

This example is simple because it involves a small table, just a few operations, and
only minor modifications. Large tables that experience many write operations and
that contain many rows of data will generate more than one Parquet file per
version.
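
As a minimal sketch (assuming a Fabric notebook attached to a lakehouse, and reusing the table and column names from the example above), the same sequence of operations can be reproduced with Spark SQL, and DESCRIBE HISTORY lists the versions that the link files track:

Python

# Create the example table and insert the initial three rows.
spark.sql("CREATE TABLE IF NOT EXISTS Inventory (ProductID STRING, StockOnHand INT) USING DELTA")
spark.sql("INSERT INTO Inventory VALUES ('A', 1), ('B', 2), ('C', 3)")

# Insert operation: adds product D (a new Parquet file and link file).
spark.sql("INSERT INTO Inventory VALUES ('D', 4)")

# Update operation: changes product C to 10.
spark.sql("UPDATE Inventory SET StockOnHand = 10 WHERE ProductID = 'C'")

# Delete operation: removes product B.
spark.sql("DELETE FROM Inventory WHERE ProductID = 'B'")

# Each operation produced a new table version; inspect them.
spark.sql("DESCRIBE HISTORY Inventory").select("version", "timestamp", "operation").show()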

) Important

Depending on how you define your Delta tables and the frequency of data
modification operations, it might result in many Parquet files. Be aware that each
Fabric capacity license has guardrails. If the number of Parquet files for a Delta
table exceeds the limit for your SKU, queries will fall back to DirectQuery, which
might result in slower query performance.
To manage the number of Parquet files, see Delta table maintenance later in this
article.

Delta time travel

Link files enable querying data as of an earlier point in time. This capability is known as
Delta time travel. The earlier point in time could be a timestamp or version.

Consider the following query examples.

SQL

SELECT * FROM Inventory TIMESTAMP AS OF '2024-04-28T09:15:00.000Z';


SELECT * FROM Inventory AS OF VERSION 2;

 Tip

You can also query a table by using the @ shorthand syntax to specify the
timestamp or version as part of the table name. The timestamp must be in
yyyyMMddHHmmssSSS format. You can specify a version after @ by prepending a v to the version.

Here are the previous query examples rewritten with shorthand syntax.

SQL

SELECT * FROM Inventory@20240428091500000;


SELECT * FROM Inventory@v2;

) Important

Table versions accessible with time travel are determined by a combination of the
retention threshold for transaction log files and the frequency and specified
retention for VACUUM operations (described later in the Delta table maintenance
section). If you run VACUUM daily with the default values, seven days of data will be
available for time travel.

Framing
Framing is a Direct Lake operation that sets the version of a Delta table that should be
used to load data into a semantic model column. Equally important, the version also
determines what should be excluded when data is loaded.

A framing operation stamps the timestamp/version of each Delta table into the
semantic model tables. From this point, when the semantic model needs to load data
from a Delta table, the timestamp/version associated with the most recent framing
operation is used to determine what data to load. Any subsequent data modifications
that occur for the Delta table since the latest framing operation are ignored (until the
next framing operation).

) Important

Because a framed semantic model references a particular Delta table version, the
source must ensure it keeps that Delta table version until framing of a new version
is completed. Otherwise, users will encounter errors when the Delta table files need
to be accessed by the model and have been vacuumed or otherwise deleted by the
producer workload.

For more information about framing, see Direct Lake overview.
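
Framing typically happens as part of a semantic model refresh. As a hedged illustration only (it assumes the semantic link (sempy) library that's available in Fabric notebooks, and uses hypothetical model and workspace names), a refresh can be triggered programmatically so the model picks up the latest Delta table versions:

Python

import sempy.fabric as fabric

# Hypothetical names. For a Direct Lake model, a refresh performs framing
# (pointing the model at the latest Delta table versions) rather than reimporting data.
fabric.refresh_dataset("Sales Direct Lake", workspace="Sales workspace")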

Table partitioning
Delta tables can be partitioned so that a subset of rows are stored together in a single
set of Parquet files. Partitions can speed up queries as well as write operations.

Consider a Delta table that has a billion rows of sales data for a two-year period. While
it's possible to store all the data in a single set of Parquet files, for this data volume it's
not optimal for read and write operations. Instead, performance can be improved by
spreading the billion rows of data across multiple series of Parquet files.

A partition key must be defined when setting up table partitioning. The partition key
determines which rows to store in which series. For Delta tables, the partition key can be
defined based on the distinct values of a specified column (or columns), such as a
month/year column of a date table. In this case, two years of data would be distributed
across 24 partitions (2 years x 12 months).
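
As a minimal sketch (assuming a hypothetical staging table named sales_staging with an OrderDate column), a month/year-partitioned Delta table like the one described above could be written with PySpark:

Python

from pyspark.sql import functions as F

# Derive the partition key columns from the order date.
sales_df = (
    spark.read.table("sales_staging")
    .withColumn("Year", F.year("OrderDate"))
    .withColumn("Month", F.month("OrderDate"))
)

# Write a Delta table partitioned by Year and Month. OneLake stores one subfolder
# per unique partition key value, each with its own Parquet and link files.
(
    sales_df.write.format("delta")
    .partitionBy("Year", "Month")
    .saveAsTable("sales_partitioned")
)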

Fabric compute engines are unaware of table partitions. As they insert new partition key
values, new partitions are created automatically. In OneLake, you'll find one subfolder
for each unique partition key value, and each subfolder stores its own set of Parquet
files and link files. At least one Parquet file and one link file must exist, but the actual
number of files in each subfolder can vary. As data modification operations take place,
each partition maintains its own set of Parquet files and link files to keep track of what
to return for a given timestamp or version.

If a query of a partitioned Delta table is filtered to only the most recent three months of
sales data, the subset of Parquet files and link files that need to be accessed can be
quickly identified. That then allows skipping many Parquet files altogether, resulting in
better read performance.

However, queries that don't filter on the partition key might not always perform better.
That can be the case when a Delta table stores all data in a single large set of Parquet
files and there's file or row group fragmentation. While it's possible to parallelize the
data retrieval from multiple Parquet files across multiple cluster nodes, many small
Parquet files can adversely affect file I/O and therefore query performance. For this
reason, it's best to avoid partitioning Delta tables in most cases—unless write operations
or extract, transform, and load (ETL) processes would clearly benefit from it.

Partitioning benefits insert, update, and delete operations too, because file activity only
takes place in subfolders matching the partition key of the modified or deleted rows. For
example, if a batch of data is inserted into a partitioned Delta table, the data is assessed
to determine what partition key values exist in the batch. Data is then directed only to
the relevant folders for the partitions.

Understanding how Delta tables use partitions can help you design optimal ETL
scenarios that reduce the write operations that need to take place when updating large
Delta tables. Write performance improves by reducing the number and size of any new
Parquet files that must be created. For a large Delta table partitioned by month/year, as
described in the previous example, new data only adds new Parquet files to the latest
partition. Subfolders of previous calendar months remain untouched. If any data of
previous calendar months must be modified, only the relevant partition folders receive
new Parquet and link files.

) Important

If the main purpose of a Delta table is to serve as a data source for semantic
models (and secondarily, other query workloads), it's usually better to avoid
partitioning in preference for optimizing the load of columns into memory.

For Direct Lake semantic models or the SQL analytics endpoint, the best way to optimize
Delta table partitions is to let Fabric automatically manage the Parquet files for each
version of a Delta table. Leaving the management to Fabric should result in high query performance through parallelization; however, it might not necessarily provide the best write performance.

If you must optimize for write operations, consider using partitions to optimize write operations to Delta tables based on the partition key. However, be aware that over-partitioning a Delta table can negatively impact read performance. For this reason,
we recommend that you test the read and write performance carefully, perhaps by
creating multiple copies of the same Delta table with different configurations to
compare timings.

2 Warning

If you partition on a high-cardinality column, it can result in an excessive number of
Parquet files. Be aware that every Fabric capacity license has guardrails. If the
number of Parquet files for a Delta table exceeds the limit for your SKU, queries will
fall back to DirectQuery, which might result in slower query performance.

Parquet files
The underlying storage for a Delta table is one or more Parquet files. Parquet file format
is generally used for write-once, read-many applications. New Parquet files are created
every time data in a Delta table is modified, whether by an insert, update, or delete
operation.

7 Note

You can access Parquet files that are associated with Delta tables by using a tool,
like OneLake file explorer. Files can be downloaded, copied, or moved to other
destinations as easily as moving any other files. However, it's the combination of
Parquet files and the JSON-based link files that allow compute engines to issue
queries against the files as a Delta table.

Parquet file format

The internal format of a Parquet file differs from other common data storage formats,
such as CSV, TSV, XML, and JSON. These formats organize data by rows, while Parquet
organizes data by columns. Also, Parquet file format differs from these formats because
it organizes rows of data into one or more row groups.

The internal data structure of a Power BI semantic model is column-based, which means
Parquet files share a lot in common with Power BI. This similarity means that a Direct
Lake semantic model can efficiently load data from the Parquet files directly into
memory. In fact, very large volumes of data can be loaded in seconds. Contrast this
capability with the refresh of an Import semantic model, which must retrieve blocks of
source data, then process, encode, store, and then load it into memory. An Import
semantic model refresh operation can also consume significant amounts of compute
(memory and CPU) and take considerable time to complete. However, with Delta tables,
most of the effort to prepare the data suitable for direct loading into a semantic model
takes place when the Parquet file is generated.

How Parquet files store data


Consider the following example set of data.

Date          ProductID    StockOnHand
2024-09-16    A            10
2024-09-16    B            11
2024-09-17    A            13

When stored in Parquet file format, conceptually, this set of data might look like the
following text.

HTML

Header:
RowGroup1:
Date: 2024-09-16, 2024-09-16, 2024-09-17…
ProductID: A, B, A…
StockOnHand: 10, 11, 13…
RowGroup2:

Footer:

Data is compressed by substituting dictionary keys for common values, and by applying
run-length encoding (RLE). RLE strives to compress a series of same values into a smaller
representation. In the following example, a dictionary mapping of numeric keys to
values is created in the header, and the smaller key values are used in place of the data
values.

HTML
Header:
Dictionary: [
(1, 2024-09-16), (2, 2024-09-17),
(3, A), (4, B),
(5, 10), (6, 11), (7, 13)

]
RowGroup1:
Date: 1, 1, 2…
ProductID: 3, 4, 3…
StockOnHand: 5, 6, 7…
Footer:

When the Direct Lake semantic model needs data to compute the sum of the
StockOnHand column grouped by ProductID, only the dictionary and data associated
with the two columns are required. In large files that contain many columns, substantial
portions of the Parquet file can be skipped to help speed up the read process.

7 Note

The contents of a Parquet file aren't human readable and so it isn't suited to
opening in a text editor. However, there are many open-source tools available that
can open and reveal the contents of a Parquet file. These tools can also let you
inspect metadata, such as the number of rows and row groups contained in a file.
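
For example, assuming a Parquet file has been copied locally (the file name here is hypothetical) and that the pyarrow library is available, its metadata can be inspected as follows:

Python

import pyarrow.parquet as pq

# Inspect row count, row group count, and the column schema of a Parquet file.
metadata = pq.ParquetFile("part-00000.parquet").metadata
print(metadata.num_rows, metadata.num_row_groups)
print(metadata.schema)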

V-Order

Fabric supports an additional optimization called V-Order. V-Order is a write-time optimization to the Parquet file format. Once V-Order is applied, it results in a smaller and therefore faster file to read. This optimization is especially relevant for a Direct Lake semantic model because it prepares the data for fast loading into memory, and so it makes fewer demands on capacity resources. It also results in faster query performance because less memory needs to be scanned.

Delta tables created and loaded by Fabric items such as data pipelines, dataflows, and
notebooks automatically apply V-Order. However, Parquet files uploaded to a Fabric
lakehouse, or that are referenced by a shortcut, might not have this optimization
applied. While non-optimized Parquet files can still be read, the read performance likely
won't be as fast as an equivalent Parquet file that's had V-Order applied.
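
As a minimal sketch (reusing the write-time setting shown later in this documentation, with a hypothetical source table), V-Order can be enabled for the Spark session before writing a Delta table:

Python

# Enable V-Order for writes in this Spark session, then save the Delta table.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

df = spark.read.table("staging_sales")  # hypothetical source table
df.write.format("delta").mode("overwrite").saveAsTable("sales")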

7 Note
Parquet files that have V-Order applied still conform to the open-source Parquet
file format. Therefore, they can be read by non-Fabric tools.

For more information, see Delta Lake table optimization and V-Order.

Delta table optimization


This section describes various topics for optimizing Delta tables for semantic models.

Data volume
While Delta tables can grow to store extremely large volumes of data, Fabric capacity
guardrails impose limits on semantic models that query them. When those limits are
exceeded, queries will fall back to DirectQuery, which might result in slower query
performance.

Therefore, consider limiting the row count of a large fact table by raising its granularity
(store summarized data), reducing dimensionality, or storing less history.

Also, ensure that V-Order is applied because it results in a smaller and therefore faster
file to read.

Column data type


Strive to reduce cardinality (the number of unique values) in every column of each Delta
table. That's because all columns are compressed and stored by using hash encoding.
Hash encoding requires V-Order optimization to assign a numeric identifier to each
unique value contained in the column. It's the numeric identifier, then, that's stored,
requiring a hash lookup during storage and querying.

When you use approximate numeric data types (like float and real), consider rounding
values and using a lower precision.

Unnecessary columns
As with any data table, Delta tables should only store columns that are required. In the
context of this article, that means required by the semantic model, though there could
be other analytic workloads that query the Delta tables.

Delta tables should include columns required by the semantic model for filtering,
grouping, sorting, and summarizing, in addition to columns that support model
relationships. While unnecessary columns don't affect semantic model query
performance (because they won't be loaded into memory), they result in a larger
storage size and so require more compute resources to load and maintain.

Because Direct Lake semantic models don't support calculated columns, you should
materialize such columns in the Delta tables. Note that this design approach is an anti-
pattern for Import and DirectQuery semantic models. For example, if you have
FirstName and LastName columns, and you need a FullName column, materialize the
values for this column when inserting rows into the Delta table.

Consider that some semantic model summarizations might depend on more than one
column. For example, to calculate sales, the measure in the model sums the product of
two columns: Quantity and Price . If neither of these columns is used independently, it
would be more efficient to materialize the sales calculation as a single column than store
its component values in separate columns.
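
As a minimal sketch (assuming a hypothetical staging table with FirstName, LastName, Quantity, and Price columns), both of the columns discussed above can be materialized when the Delta table is written:

Python

from pyspark.sql import functions as F

orders_df = spark.read.table("orders_staging")  # hypothetical source table

enriched_df = (
    orders_df
    # Materialize FullName because Direct Lake models don't support calculated columns.
    .withColumn("FullName", F.concat_ws(" ", "FirstName", "LastName"))
    # Materialize the sales amount; only drop Quantity and Price if neither is
    # needed independently by the semantic model.
    .withColumn("SalesAmount", F.round(F.col("Quantity") * F.col("Price"), 2))
    .drop("FirstName", "LastName", "Quantity", "Price")
)

enriched_df.write.format("delta").mode("overwrite").saveAsTable("orders")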

Row group size


Internally, a Parquet file organizes rows of data into multiple row groups within each file.
For example, a Parquet file that contains 30,000 rows might chunk them into three row
groups, each having 10,000 rows.

The number of rows in a row group influences how quickly Direct Lake can read the
data. A higher number of row groups with fewer rows is likely to negatively impact
loading column data into a semantic model due to excessive I/O.

Generally, we don't recommend that you change the default row group size. However,
you might consider changing the row group size for large Delta tables. Be sure to test
the read and write performance carefully, perhaps by creating multiple copies of the
same Delta tables with different configurations to compare timings.

) Important

Be aware that every Fabric capacity license has guardrails. If the number of row
groups for a Delta table exceeds the limit for your SKU, queries will fall back to
DirectQuery, which might result in slower query performance.

Delta table maintenance


Over time, as write operations take place, Delta table versions accumulate. Eventually,
you might reach a point at which a negative impact on read performance becomes
noticeable. Worse, if the number of Parquet files per table, or row groups per table, or
rows per table exceeds the guardrails for your capacity, queries will fall back to
DirectQuery, which might result in slower query performance. It's therefore important
that you maintain Delta tables regularly.

OPTIMIZE
You can use OPTIMIZE to optimize a Delta table to coalesce smaller files into larger
ones. You can also set the WHERE clause to target only a filtered subset of rows that
match a given partition predicate. Only filters involving partition keys are supported. The
OPTIMIZE command can also apply V-Order to compact and rewrite the Parquet files.

We recommend that you run this command on large, frequently updated Delta tables
on a regular basis, perhaps every day when your ETL process completes. Balance the
trade-off between better query performance and the cost of resource usage required to
optimize the table.
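
As a minimal sketch (run from a Fabric notebook, with a hypothetical table name), OPTIMIZE can be issued with Spark SQL and combined with V-Order:

Python

# Coalesce small Parquet files into larger ones and apply V-Order while rewriting.
spark.sql("OPTIMIZE sales VORDER")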

VACUUM
You can use VACUUM to remove files that are no longer referenced and that are
older than a set retention threshold. Take care to set an appropriate retention period,
otherwise you might lose the ability to time travel back to a version older than the frame
stamped into semantic model tables.

) Important

Because a framed semantic model references a particular Delta table version, the
source must ensure it keeps that Delta table version until framing of a new version
is completed. Otherwise, users will encounter errors when the Delta table files need
to be accessed by the model and have been vacuumed or otherwise deleted by the
producer workload.
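
A minimal sketch, again with a hypothetical table name and an illustrative seven-day retention period:

Python

# Remove unreferenced files older than 7 days (168 hours). Choose a retention
# period that preserves time travel and the Delta table version currently framed
# by the semantic model.
spark.sql("VACUUM sales RETAIN 168 HOURS")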

REORG TABLE
You can use REORG TABLE to reorganize a Delta table by rewriting files to purge soft-
deleted data, such as when you drop a column by using ALTER TABLE DROP COLUMN.
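
A minimal sketch with a hypothetical table name:

Python

# Rewrite the table's files to purge soft-deleted data, for example after an
# ALTER TABLE ... DROP COLUMN on a table with column mapping enabled.
spark.sql("REORG TABLE sales APPLY (PURGE)")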

Automate table maintenance


To automate table maintenance operations, you can use the Lakehouse API. For more
information, see Manage the Lakehouse with Microsoft Fabric REST API.

 Tip

You can also use the lakehouse Table maintenance feature in the Fabric portal to
simplify management of your Delta tables.

Related content
Direct Lake overview
Develop Direct Lake semantic models
Manage Direct Lake semantic models
Delta Lake table optimization and V-Order



Direct Lake in Power BI Desktop
(preview)
Article • 01/26/2025

Semantic models using Direct Lake mode access OneLake data directly, which requires
running the Power BI Analysis Services engine in a workspace with a Fabric capacity.
Semantic models using import or DirectQuery mode can have the Power BI Analysis
Services engine running locally on your computer by using Power BI Desktop for
creating and editing the semantic model. Once published, such models operate using
Power BI Analysis Services in the workspace.

To facilitate editing Direct Lake semantic models in Power BI Desktop, you can now
perform a live edit of a semantic model in Direct Lake mode, enabling Power BI Desktop
to make changes to the model by using the Power BI Analysis Services engine in the
Fabric workspace.

Enable preview feature


Live editing semantic models in Direct Lake mode with Power BI Desktop is enabled by
default. You can disable this feature by turning off the Live edit of Power BI semantic
models in Direct Lake mode preview selection, found in Options and Settings >
Options > Preview features.
Live edit a semantic model in Direct Lake mode
To perform a live edit of a semantic model in Direct Lake mode, take the following steps.

1. Open Power BI Desktop and select OneLake data hub:

You can also open the OneLake data hub from a blank report, as shown in the following
image:

2. Search for a semantic model in Direct Lake mode, expand the Connect button and
select Edit.


7 Note

Selecting a semantic model that is not in Direct Lake mode will result in an error.

3. The selected semantic model opens for editing, at which point you're in live edit
mode, as demonstrated in the following screenshot.

4. You can edit your semantic model using Power BI Desktop, enabling you to make
changes directly to the selected semantic model. Changes include all modeling
tasks, such as renaming tables/columns, creating measures, and creating
calculation groups. DAX query view is available to run DAX queries to preview data
and test measures before saving them to the model.

7 Note

Notice that the Save option is disabled, because you don’t need to save. Every
change you make is immediately applied to the selected semantic model in the
workspace.

In the title bar, you can see the workspace and semantic model name with links to open
these items in the Fabric portal.


When you connect to and live edit a semantic model during the preview, it's not possible
to select an existing report to edit, and the Report view is hidden. You can open an
existing report or create a new one by live connecting to this semantic model in another
instance of Power BI Desktop or in the workspace. You can write DAX queries in the
workspace with DAX query view in the web, and you can visually explore the data with
the new explore your data feature in the workspace.

Automatically save your changes


As you make changes to your semantic model, your changes are automatically saved
and the Save button is disabled when in Live edit mode. Changes are permanent with
no option to undo.

If two or more users are live editing the same semantic model and a conflict occurs,
Power BI Desktop alerts one of the users, shown in the following image, and refreshes
the model to the latest version. Any changes you were trying to make will need to be
performed again after the refresh.

Edit tables
Changes to the tables and columns in the OneLake data source (typically a Lakehouse or
Warehouse) aren't automatically reflected in the semantic model, just as with import or
DirectQuery data sources. To update the semantic model with the latest schema, such as getting
column changes in existing tables or to add or remove tables, go to Transform data >
Data source settings > Edit Tables.

Learn more about Edit tables for Direct Lake semantic models.

Use refresh
Semantic models in Direct Lake mode automatically reflect the latest data changes in
the Delta tables when Keep your Direct Lake data up to date is enabled. When disabled,
you can manually refresh your semantic model using Power BI Desktop Refresh button
to ensure it targets the latest version of your data. This is also sometimes called
reframing.

Export to a Power BI Project


To support professional enterprise development workflows of semantic models in Direct
Lake mode, you can export the definition of your semantic model after opening it for
editing, which provides a local copy of the semantic model and report metadata that
you can use with Fabric deployment mechanisms such as Fabric Git Integration. The
Power BI Desktop report view becomes enabled, letting you view and edit the local
report. Publishing directly from Power BI Desktop isn't available, but you can publish using
Git integration. The Save button is also enabled so you can save the local model metadata and
report in the Power BI Project folder.

Navigate to File > Export > Power BI Project and export it as a Power BI Project file
(PBIP).

By default, the PBIP file is exported to the %USERPROFILE%\Microsoft Fabric\repos\[Workspace Name] folder. However, you can choose a different location during the export process.


Selecting Export opens the folder containing the PBIP files of the exported semantic
model along with an empty report.

After exporting you should open a new instance of Power BI Desktop and open the
exported PBIP file to continue editing with a Power BI Project. When you open the PBIP
file, Power BI Desktop prompts you to either create a new semantic model in a Fabric
workspace, or select an existing semantic model for remote modeling.

Remote modeling with a Power BI Project


When working on a Power BI Project (PBIP) with a semantic model that can't run on the
local Power BI Analysis Services engine, such as one in Direct Lake mode, Power BI Desktop
must be connected to a semantic model in a Fabric workspace (a remote semantic
model). Like live edit, all changes you make are immediately applied to the semantic
model in the workspace. However, unlike live edit, you can save your semantic model
and report definitions to local PBIP files that can later be deployed to a Fabric workspace
using a deployment mechanism such as Fabric Git Integration.

7 Note
Semantic models in Direct Lake mode, when exported to a Git repository using
Fabric Git Integration, can be edited using Power BI Desktop. To do so, make sure
at least one report is connected to the semantic model, then open the report's
exported definition.pbir file to edit both the report and the semantic model.

Open your Power BI Project


When opening a Power BI Project (PBIP) that requires a remote semantic model, Power BI
Desktop prompts you to either create a new semantic model or select an existing
semantic model in a Fabric workspace.

If you select an existing semantic model and the definition differs, Power BI Desktop
warns you before overwriting, as shown in the following image.

7 Note
You can select the same semantic model you exported the PBIP from. However, the
best practice when working with a PBIP that requires a remote semantic model is
for each developer to work on their own private remote semantic model to avoid
conflicts with changes from other developers.

Selecting the title bar displays both the PBIP file location and the remote semantic
model living in a Fabric workspace, shown in the following image.

A local setting is saved in the Power BI Project files with the configured semantic
model. The next time you open the PBIP, you won't see the prompt, and the Fabric semantic
model will be overwritten with the metadata from the semantic model in the Power BI
Project files.

Change remote semantic model


During the preview, if you wish to switch the remote semantic model in the PBIP you
must navigate to the \*.SemanticModel\.pbi\localSettings.json file. There, you can
either modify the remoteModelingObjectId property to the ID of the semantic model you
want to connect to, or remove the property altogether. Upon reopening the PBIP, Power
BI Desktop connects to the new semantic model or prompts you to create or select an
existing semantic model.

7 Note

The configuration described in this section is intended solely for local development
and should not be used for deployment across different environments.

Common uses for Direct Lake in Power BI Desktop
Scenario: I’m getting errors when opening the Direct Lake semantic model for Edit with
Power BI Desktop.

Solution: Review all the requirements and permissions. If you met all the requirements,
check whether you can edit the semantic model using web modeling.

Scenario: I lost the connection to the remote semantic model and can't recover it. Have I
lost my changes?

Solution: All your changes are immediately applied to the remote semantic model. You
can always close Power BI Desktop and restart the editing session with the semantic
model you were working on.

Scenario: I exported to Power BI Project (PBIP). Can I select the same semantic model I
was live editing?

Solution: You can, but you should be careful. If each developer is working on their local
PBIP and all select the same semantic model as a remote model, they'll overwrite each
other's changes. The best practice when working with a PBIP is for each developer to
have their own isolated copy of the Direct Lake semantic model.

Scenario: I’m live editing the Direct Lake semantic model and can't create field
parameters.
Solution: When live editing a semantic model, Report View isn't available, which is
required for the field parameters UI. You can export to a Power BI Project (PBIP) and
open it to access Report View and the field parameters UI.

Scenario: I made changes to the semantic model using an external tool, but I don't see
those changes reflected in Power BI Desktop.

Solution: Changes made by external tools are applied to the remote semantic model,
but these changes will only become visible in Power BI Desktop after either the next
modeling change is made within Power BI Desktop, or the semantic model is refreshed.

Requirements and permissions


XMLA Endpoint must be enabled on the tenant. Learn more in the XMLA endpoint
article.
XMLA Endpoint with Read Write access must be enabled at the capacity. Learn
more in the tools article.
User must have Write permission on the semantic model. Learn more in the
permissions article.
User must have Viewer permission on the lakehouse. Learn more in the lakehouse
article.
This feature is unavailable for users with a free license.

Considerations and limitations


Live edit of semantic models in Direct Lake mode in Power BI Desktop is currently in
preview. Keep the following in mind:

You can't edit default semantic models.


You can't transform data using Power Query editor. In the Lakehouse you can use a
dataflow to perform Power Query transformations.
You can’t have multiple data sources. You can shortcut to or add additional data to
Lakehouse or Warehouse data sources to use in the semantic model.
You can't publish the Power BI Project (PBIP) from Power BI Desktop. You can use
Fabric Deployment mechanisms such as Fabric Git Integration or Fabric Item APIs
to publish your local PBIP files to a Fabric workspace.
You can't validate RLS roles from Power BI Desktop. You can validate the role in the
service.
Service-created model diagram layouts aren't displayed in Power BI Desktop, and
layouts created in Power BI Desktop aren't persisted in the Power BI service.
Signing out during editing could lead to unexpected errors.
You can open external tools, but the external tool must manage authentication to
the remote semantic model.
Changing the data category to barcode won't allow reports linked to the semantic
model to be filtered by barcodes.
Externally shared semantic models aren't eligible for live edit.

Additionally, please consider the current known issues and limitations of Direct Lake.

Related content
Direct Lake overview
Power BI Project files



Edit tables for Direct Lake semantic
models
Article • 01/26/2025

The tables in a Direct Lake mode semantic model come from Microsoft Fabric and OneLake
data. Instead of the transform data experience of Power BI import and DirectQuery,
Direct Lake mode uses the Edit tables experience, allowing you to decide which tables
you want the semantic model in Direct Lake mode to use.

Use and features of Edit tables


The purpose of Edit tables is to add or remove tables in the semantic model in Direct
Lake mode. Such tables reside in a single Fabric item that writes data to the OneLake,
such as a Lakehouse or Warehouse.

The following image shows the Edit tables initial dialog:

The areas in the Edit tables dialog are the following:

Title displays whether you're editing or creating.


Information text and learn more link to the Direct Lake documentation.
Search to find the specific table or view from the data source.
Filter to limit the schema or object type (table or view) that is displayed.
Reload to sync the SQL analytics endpoint of a Lakehouse or a warehouse (requires
write permission on the Lakehouse or warehouse). Not available in all scenarios.
Tree view organizes the available tables or views:
Schema name
Object type (table or view)
Table or view name
Check boxes allow you to select or unselect tables or views to use in the semantic
model.
Confirm or Cancel button let you decide whether to make the change to the
semantic model.

In the semantic model, tables and columns can be renamed to support reporting
expectations. Edit tables still show the data source table names, and schema sync
doesn't impact the semantic model renames.

In the Lakehouse, tables and views can also be renamed. If the upstream data source
renames a table or column after the table was added to the semantic model, the
semantic model schema sync will still be looking for the table using the previous name,
so the table will be removed from the model on schema sync. The table with the new
name will show in the Edit tables dialog as unchecked, and must be explicitly checked
again and added again to the semantic model. Measures can be moved to the new
table, but relationships and column property updates need to be reapplied to the table.

Entry points
The following sections describe the multiple ways you can edit semantic models in
Direct Lake.

Editing a semantic model in Direct Lake mode in web modeling
When editing a semantic model in the browser, there's a ribbon button to launch Edit
tables, as shown in the following image.

Selecting the ribbon button launches the Edit tables dialog, as shown in the following
image.

You can perform many actions that impact the tables in the semantic model:

Selecting the Confirm button with no changes initiates a schema sync. Any table
changes in the data source, such as an added or removed column, are applied to
the semantic model.
Selecting the Cancel button returns to editing the model without applying any
updates.
Selecting tables or views previously unselected adds the selected items to the
semantic model.
Unselecting tables or views previously selected removes them from the semantic
model.

Tables that have measures can be unselected, but they still show in the model with their
columns removed, showing only the measures. The measures can either be deleted or
moved to a different table. When all measures have been moved or deleted, go back to
Edit tables and select Confirm to no longer show the empty table in the model.

Creating a new semantic model from Lakehouse and Warehouse
When creating a semantic model, you must specify two properties:

Direct Lake semantic model: The name of the semantic model in the workspace,
which can be changed later. If a semantic model with the same name already
exists in the workspace, a number is automatically appended to the end of the
model name.
Workspace: The workspace where the semantic model is saved. By default the
workspace you're currently working in is selected, but you can change it to another
Fabric workspace.

The following image shows the New semantic model dialog.


Default semantic model


There are some differences for the default Power BI semantic model in Direct Lake
mode. Refer to the default Power BI semantic models in Microsoft Fabric article for more
information about the differences.

Related content
Direct Lake overview
Create a lakehouse for Direct Lake
Analyze query processing for Direct Lake semantic models



Create a lakehouse for Direct Lake
Article • 01/26/2025

This article describes how to create a lakehouse, create a Delta table in the lakehouse,
and then create a basic semantic model for the lakehouse in a Microsoft Fabric
workspace.

Before getting started creating a lakehouse for Direct Lake, be sure to read Direct Lake
overview.

Create a lakehouse
1. In your Microsoft Fabric workspace, select New > More options, and then in Data
Engineering, select the Lakehouse tile.

2. In the New lakehouse dialog box, enter a name, and then select Create. The name
can only contain alphanumeric characters and underscores.
3. Verify the new lakehouse is created and opens successfully.

Create a Delta table in the lakehouse


After creating a new lakehouse, you must then create at least one Delta table so Direct
Lake can access some data. Direct Lake can read parquet-formatted files, but for the
best performance, it's best to compress the data by using the VORDER compression
method. VORDER compresses the data using the Power BI engine’s native compression
algorithm. This way the engine can load the data into memory as quickly as possible.

There are multiple options to load data into a lakehouse, including data pipelines and
scripts. The following steps use PySpark to add a Delta table to a lakehouse based on an
Azure Open Dataset:

1. In the newly created lakehouse, select Open notebook, and then select New
notebook.

2. Copy and paste the following code snippet into the first code cell to let Spark
access the open dataset, and then press Shift + Enter to run the code.

Python
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "holidaydatacontainer"
blob_relative_path = "Processed"
blob_sas_token = r""

# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
    'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
    blob_sas_token)
print('Remote blob path: ' + wasbs_path)

3. Verify the code successfully outputs a remote blob path.

4. Copy and paste the following code into the next cell, and then press Shift + Enter.

Python

# Read Parquet file into a DataFrame.


df = spark.read.parquet(wasbs_path)
print(df.printSchema())

5. Verify the code successfully outputs the DataFrame schema.


6. Copy and paste the following lines into the next cell, and then press Shift + Enter.
The first instruction enables the VORDER compression method, and the next
instruction saves the DataFrame as a Delta table in the lakehouse.

Python

# Save as delta table


spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
df.write.format("delta").saveAsTable("holidays")

7. Verify all SPARK jobs complete successfully. Expand the SPARK jobs list to view
more details.
8. To verify a table has been created successfully, in the upper left area, next to
Tables, select the ellipsis (…), then select Refresh, and then expand the Tables
node.

9. Using either the same method as above or other supported methods, add more
Delta tables for the data you want to analyze.

Create a basic Direct Lake model for your lakehouse
1. In your lakehouse, select New semantic model, and then in the dialog, select
tables to be included.
2. Select Confirm to generate the Direct Lake model. The model is automatically
saved in the workspace based on the name of your lakehouse, and then the model opens.

3. Select Open data model to open the Web modeling experience where you can
add table relationships and DAX measures.

When you're finished adding relationships and DAX measures, you can then create
reports, build a composite model, and query the model through XMLA endpoints in
much the same way as any other model.

Related content
Specify a fixed identity for a Direct Lake model
Direct Lake overview
Analyze query processing for Direct Lake semantic models



Specify a fixed identity for a Direct Lake
semantic model
Article • 01/26/2025

Follow these steps to specify a fixed identity connection for a Direct Lake semantic
model.

1. In your Direct Lake model's settings, expand Gateway and cloud connections.
Note that your Direct Lake model has a SQL Server data source pointing to a
lakehouse or data warehouse in Fabric.

2. In the Maps to listbox, select Create a connection. A New connection pane
appears with some data source information already entered for you. Specify a
connection name.

3. In Authentication method, select OAuth 2.0 or Service Principal, and then specify
credentials for the fixed identity you want to use.
4. In Single sign-on, ensure SSO via Microsoft Entra ID for DirectQuery queries is
not selected.

5. Configure any additional parameters if needed and then click Create.


6. In the Direct Lake model settings, verify the data source is now associated with the
non-SSO cloud connection.

Related content
Direct Lake overview
Analyze query processing for Direct Lake semantic models



Analyze query processing for Direct
Lake semantic models
Article • 01/26/2025

Power BI semantic models in Direct Lake mode read Delta tables directly from OneLake
— unless they have to fall back to DirectQuery mode. Typical fallback reasons include
memory pressures that can prevent loading of columns required to process a DAX
query, and certain features at the data source might not support Direct Lake mode, like
SQL views in a Warehouse and Lakehouse. In general, Direct Lake mode provides the
best DAX query performance unless a fallback to DirectQuery mode is necessary.
Because fallback to DirectQuery mode can impact DAX query performance, it's
important to analyze query processing for a Direct Lake semantic model to identify if
and how often fallbacks occur.

Analyze by using Performance analyzer


Performance analyzer can provide a quick and easy look into how a visual queries a data
source, and how much time it takes to render a result.

1. Start Power BI Desktop. On the startup screen, select New > Report.

2. Select Get Data from the ribbon, then select Power BI semantic models.

3. In the OneLake data hub page, select the Direct Lake semantic model you want to
connect to, and then select Connect.

4. Place a card visual on the report canvas, select a data column to create a basic
report, and then on the View menu, select Performance analyzer.
5. In the Performance analyzer pane, select Start recording.

6. In the Performance analyzer pane, select Refresh visuals, and then expand the
Card visual. The card visual doesn't cause any DirectQuery processing, which
indicates the semantic model was able to process the visual’s DAX queries in Direct
Lake mode.

If the semantic model falls back to DirectQuery mode to process the visual’s DAX
query, you see a Direct query performance metric, as shown in the following
image:
Analyze by using SQL Server Profiler
SQL Server Profiler can provide more details about query performance by tracing query
events. It's installed with SQL Server Management Studio (SSMS). Before starting, make
sure you have the latest version of SSMS installed.

1. Start SQL Server Profiler from the Windows menu.

2. In SQL Server Profiler, select File > New Trace.

3. In Connect to Server > Server type, select Analysis Services, then in Server name,
enter the URL to your workspace, then select an authentication method, and then
enter a username to sign in to the workspace.
4. Select Options. In Connect to database, enter the name of your semantic model
and then select Connect. Sign in to Microsoft Entra ID.

5. In Trace Properties > Events Selection, select the Show all events checkbox.
6. Scroll to Query Processing, and then select checkboxes for the following events:

Events: DirectQuery_Begin, DirectQuery_End
Description: If DirectQuery Begin/End events appear in the trace, the semantic model
might have fallen back to DirectQuery mode. However, note that the presence of
EngineEdition queries and possibly queries to check Object-Level Security (OLS) don't
represent a fallback, because the engine always uses DirectQuery mode for these
non-query-processing related checks.

Events: VertiPaq_SE_Query_Begin, VertiPaq_SE_Query_Cache_Match,
VertiPaq_SE_Query_Cache_Miss, VertiPaq_SE_Query_End
Description: VertiPaq storage engine (SE) events in Direct Lake mode are the same as
for import mode.

It should look like this:


7. Select Run. In Power BI Desktop, create a new report or interact with an existing
report to generate query events. Review the SQL Server Profiler trace report for
query processing events.

The following image shows an example of query processing events for a DAX
query. In this trace, the VertiPaq storage engine (SE) events indicate that the query
was processed in Direct Lake mode.

Related content
Create a lakehouse for Direct Lake
Direct Lake overview


How Direct Lake mode works with
Power BI reporting
Article • 01/26/2025

In Microsoft Fabric, when the user creates a lakehouse, the system also provisions the
associated SQL analytics endpoint and default semantic model in Direct Lake mode. You
can add tables from the lakehouse into the default semantic model by going to the SQL
analytics endpoint and clicking the Manage default semantic model button in the
Reporting ribbon. You can also create a non-default Power BI semantic model in Direct
Lake mode by clicking New semantic model in the lakehouse or SQL analytics endpoint.
The non-default semantic model is created in Direct Lake mode and allows Power BI to
consume data by creating Power BI reports, explores, and running user-created DAX
queries in Power BI Desktop or the workspace itself. The default semantic model created
in the SQL analytics endpoint can be used to create Power BI reports but has some other
limitations.

When a Power BI report shows data in visuals, it requests it from the semantic model.
Next, the semantic model accesses a lakehouse to consume data and return it to the
Power BI report. For efficiency, the semantic model can keep some data in the cache and
refresh it when needed. Direct Lake overview has more details.

Lakehouse also applies V-order optimization to delta tables. This optimization gives
unprecedented performance and the ability to quickly consume large amounts of data
for Power BI reporting.


Setting permissions for report consumption
The semantic model in Direct Lake mode consumes data from a lakehouse on
demand. To make sure that data is accessible to the user viewing the Power BI
report, the necessary permissions on the underlying lakehouse need to be set.

One option is to give the user the Viewer role in the workspace to consume all items in
the workspace, including the lakehouse (if it's in this workspace), semantic models, and
reports. Alternatively, the user can be given the Admin, Member, or Contributor role to
have full access to the data and be able to create and edit the items, such as lakehouses,
semantic models, and reports.

In addition, non-default semantic models can utilize a fixed identity to read data from
the lakehouse without giving report users any access to the lakehouse; users can instead be
given permission to access the report through an app. Also, with a fixed identity, non-
default semantic models in Direct Lake mode can have row-level security defined in the
semantic model to limit the data the report user sees while maintaining Direct Lake
mode. SQL-based security at the SQL analytics endpoint can also be used, but Direct
Lake mode will fall back to DirectQuery, so this should be avoided to maintain the
performance of Direct Lake.

Related content
Default Power BI semantic models in Microsoft Fabric



Endorse Fabric and Power BI items
Article • 01/26/2025

Fabric provides three ways to endorse valuable, high-quality items to increase their
visibility: promotion, certification, and designation as master data.

Promotion: Promotion is a way to highlight items you think are valuable and
worthwhile for others to use. It encourages the collaborative use and spread of
content within an organization.

Any item owner, as well as anyone with write permissions on the item, can
promote the item when they think it's good enough for sharing.

Certification: Certification means that the item meets the organization's quality
standards and can be regarded as reliable, authoritative, and ready for use across
the organization.

Only authorized reviewers (defined by the Fabric administrator) can certify items.
Item owners who wish to see their item certified and aren't authorized to certify it
themselves need to follow their organization's guidelines about getting items
certified.

Master data: Being labeled as master data means that the data item is regarded by
the organization as being core, single-source-of-truth data, such as customer lists
or product codes.

Only authorized reviewers (defined by the Fabric administrator) can label data
items as master data. Item owners who wish to see their item endorsed as master
data and aren't authorized to apply the Master data badge themselves need to
follow their organization's guidelines about getting items labeled as master data.

Currently it's possible to promote or certify all Fabric and Power BI items except Power
BI dashboards.

Master data badges can only be applied to items that contain data, such as lakehouses
and semantic models.

This article describes:

How to promote items.


How to certify items if you're an authorized reviewer, or request certification if
you're not.
How to apply the Master data badge to a data item if you are authorized to do so,
or request master data designation if you're not.

See the endorsement overview to learn more about endorsement.

Promote items
To promote an item, you must have write permissions on the item you want to promote.

1. Go to the settings of the item you want to promote.

2. Expand the endorsement section and select Promoted.

If you're promoting a Power BI semantic model and see a Make discoverable


checkbox, it means you can make it possible for users who don't have access to
the semantic model to find it. See semantic model discovery for more detail.

3. Select Apply.

Certify items
Item certification is a significant responsibility, and you should only certify an item if you
feel qualified to do so and have reviewed the item.

To certify an item:

You must be authorized by the Fabric administrator.

7 Note

If you aren't authorized to certify an item yourself, you can request item
certification.

You must have write permissions on the item you want to apply the Certified
badge to.

1. Carefully review the item and determine whether it meets your organization's
certification standards.

2. If you decide to certify the item, go to the workspace where it resides, and open
the settings of the item you want to certify.

3. Expand the endorsement section and select Certified.


If you're certifying a Power BI semantic model and see a Make discoverable
checkbox, it means you can make it possible for users who don't have access to
the semantic model to find it. See semantic model discovery for more detail.

4. Select Apply.

Label data items as master data


Labeling data items as master data is a significant responsibility, and you should
perform this task only if you feel you are qualified to do so.

To label a data item as master data:

You must be authorized by the Fabric administrator.

7 Note

If you aren't authorized to designate a data item as master data yourself, you
can request the master data designation.

You must have write permissions on the item you want to apply the Master data
badge to.

1. Carefully review the data item and determine whether it is truly core, single-
source-of-truth data that your organization wants users to find and use for the
kind of data it contains.

2. If you decide to label the item as master data, go to the workspace where it's
located, and open the item's settings.

3. Expand the endorsement section and select Master data.

4. Select Apply.

Request certification or master data designation
If you would like to certify your item or get it labeled as master data but aren't
authorized to do so, follow the steps below.

1. Go to the workspace where the item you want endorsed as certified or master data
is located, and then open the settings of that item.
2. Expand the endorsement section. The Certified or Master data button will be
greyed if you're not authorized to endorse items as certified or as master data.

3. Select the relevant link, How do I get content certified or How do I get content
endorsed as Master data, to find out how to get your item endorsed the way you
want it to be:

7 Note

If you clicked one of the links but got redirected back to this note, it means
that your Fabric admin has not made any information available. In this case,
contact the Fabric admin directly.

Related content
Read more about endorsement
Enable item certification (Fabric admins)
Enable master data endorsement (Fabric admins)
Read more about semantic model discoverability



Share items in Microsoft Fabric
Article • 01/26/2025

Workspaces are the central places where you collaborate with your colleagues in
Microsoft Fabric. Besides assigning workspace roles, you can also use item sharing to
grant and manage item-level permissions in scenarios where:

You want to collaborate with colleagues who don't have a role in the workspace.
You want to grant additional item level-permissions for colleagues who already
have a role in the workspace.

This document describes how to share an item and manage its permissions.

Share an item via link


1. In the list of items, or in an open item, select the Share button.

2. The Create and send link dialog opens. Select People in your organization can
view.
3. The Select permissions dialog opens. Choose the audience for the link you're
going to share.
You have the following options:

People in your organization: This type of link allows people in your
organization to access this item. It doesn't work for external users or guest
users. Use this link type when:
You want to share with someone in your organization.
You're comfortable with the link being shared with other people in your
organization.
You want to ensure that the link doesn't work for external or guest users.

People with existing access: This type of link generates a URL to the item, but
it doesn't grant any access to the item. Use this link type if you just want to
send a link to somebody who already has access.

Specific people This type of link allows specific people or groups to access
the report. If you select this option, enter the names or email addresses of the
people you wish to share with. This link type also lets you share to guest
users in your organization's Microsoft Entra ID. You can't share to external
users who aren't guests in your organization.

7 Note

If your admin has disabled shareable links to People in your organization, you
can only copy and share links using the People with existing access and
Specific people options.

4. Choose the permissions you want to grant via the link.

Links that give access to People in your organization or Specific people always
include at least read access. However, you can also specify if you want the link to
include additional permissions as well.

7 Note

The Additional permissions settings vary for different items. Learn more
about the item permission model.

Links for People with existing access don't have additional permission
settings because these links don't give access to the item.

Select Apply.

5. In the Create and send link dialog, you have the option to copy the sharing link,
generate an email with the link, or share it via Teams.
Copy link: This option automatically generates a shareable link. Select Copy
in the Copy link dialog that appears to copy the link to your clipboard.
by Email: This option opens the default email client app on your computer
and creates an email draft with the link in it.

by Teams: This option opens Teams and creates a new Teams draft message
with the link in it.

6. You can also choose to send the link directly to Specific people or groups
(distribution groups or security groups). Enter their name or email address,
optionally type a message, and select Send. An email with the link is sent to your
specified recipients.
When your recipients receive the email, they can access the report through the
shareable link.

Manage item links


1. To manage links that give access to the item, in the upper right of the sharing
dialog, select the Manage permissions icon:
2. The Manage permissions pane opens, where you can copy or modify existing links
or grant users direct access. To modify a given link, select Edit.
3. In the Edit link pane, you can modify the permissions included in the link, people
who can use this link, or delete the link. Select Apply after your modification.

This image shows the Edit link pane when the selected audience is People in your
organization can view and share.
This image shows the Edit link pane when the selected audience is Specific people
can view and share. Note that the pane enables you to modify who can use the
link.
4. For more access management capabilities, select the Advanced option in the
footer of the Manage permissions pane. On the management page that opens,
you can:

View, manage, and create links.


View and manage who has direct access and grant people direct access.
Apply filters or search for specific links or people.


Grant and manage access directly
In some cases, you need to grant permission directly instead of sharing a link, for
example when granting permission to a service account.

1. Select Manage permission from the context menu.

2. Select Direct access.

3. Select Add user.


4. Enter the names of people or accounts that you need to grant access to directly.
Select the permissions that you want to grant. You can also optionally notify
recipients by email.

5. Select Grant.

6. You can see all the people, groups, and accounts with access in the list on the
permission management page. You can also see their workspace roles,
permissions, and so on. By selecting the context menu, you can modify or remove
the permissions.

7 Note

You can't modify or remove permissions that are inherited from a workspace
role in the permission management page. Learn more about workspace roles
and the item permission model.
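
If you need to grant direct access programmatically, for example when provisioning a service account, semantic models can also be granted permissions through the Power BI REST API. The following is only a minimal sketch of that approach; it assumes you already hold a Microsoft Entra access token with the required Power BI scope, and the dataset ID and account name shown are placeholders. Other Fabric item types follow the portal experience described above.

```python
import requests

# Minimal sketch: grant a service account Read permission on a semantic model
# (Power BI dataset) via the Power BI REST API. Assumes you already hold a
# Microsoft Entra access token with the required Power BI scope. The ID and
# account below are placeholders, not real values.
ACCESS_TOKEN = "<entra-access-token>"
DATASET_ID = "<semantic-model-id>"              # placeholder item ID
SERVICE_ACCOUNT = "svc-reporting@contoso.com"   # hypothetical principal

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/users"
payload = {
    "identifier": SERVICE_ACCOUNT,
    "principalType": "User",
    "datasetUserAccessRight": "Read",
}
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(f"Granted Read on dataset {DATASET_ID} to {SERVICE_ACCOUNT}")
```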

Item permission model


Depending on the item being shared, you may find a different set of permissions that
you can grant to recipients when you share. Read permission is always granted during
sharing, so the recipient can discover the shared item in the OneLake data hub and
open it.

Permission granted while sharing | Effect
Read | Recipient can discover the item in the data hub and open it. Connect to the Warehouse or SQL analytics endpoint of the Lakehouse.
Edit | Recipient can edit the item or its content.
Share | Recipient can share the item and grant permissions up to the permissions that they have. For example, if the original recipient has Share, Edit, and Read permissions, they can at most grant Share, Edit, and Read permissions to the next recipient.
Read All with SQL analytics endpoint | Read data from the SQL analytics endpoint of the Lakehouse or Warehouse data through TDS endpoints.
Read all with Apache Spark | Read Lakehouse or Data warehouse data through OneLake APIs and Spark. Read Lakehouse data through Lakehouse explorer.
Build | Build new content on the semantic model.
Execute | Execute or cancel execution of the item.
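
As an illustration of the Read All with SQL analytics endpoint permission, a recipient who holds it can query the shared Warehouse or Lakehouse SQL analytics endpoint over its TDS endpoint with any SQL Server client. The sketch below uses pyodbc with interactive Microsoft Entra authentication; the server, database, and table names are placeholders, and it assumes the ODBC Driver 18 for SQL Server is installed.

```python
import pyodbc

# Minimal sketch: query a shared Lakehouse SQL analytics endpoint (or Warehouse)
# over its TDS endpoint. Requires the "Read All with SQL analytics endpoint"
# permission on the shared item. Server, database, and table names are placeholders.
SERVER = "<your-endpoint>.datawarehouse.fabric.microsoft.com"
DATABASE = "<your-lakehouse-or-warehouse>"

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    f"Server={SERVER};Database={DATABASE};"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # dbo.sales is a hypothetical table used only for illustration.
    cursor.execute("SELECT TOP 10 * FROM dbo.sales")
    for row in cursor.fetchall():
        print(row)
```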

Considerations and limitations


When a user's permission on an item is revoked through the manage permissions
experience, it can take up to two hours for the change to take effect if the user is
signed-in. If the user is not signed in, their permissions will be evaluated the next
time they sign in, and any changes will only take effect at that time.

The Shared with me option in the Browse pane currently only displays Power BI
items that have been shared with you. It doesn't show you non-Power BI Fabric
items that have been shared with you.

Related content
Workspace roles



Apply sensitivity labels to Fabric items
Article • 01/26/2025

Sensitivity labels from Microsoft Purview Information Protection on items can guard
your sensitive content against unauthorized data access and leakage. They're a key
component in helping your organization meet its governance and compliance
requirements. Labeling your data correctly with sensitivity labels ensures that only
authorized people can access your data. This article shows you how to apply sensitivity
labels to your Microsoft Fabric items.

7 Note

For information about applying sensitivity labels in Power BI Desktop, see Apply
sensitivity labels in Power BI Desktop.

Prerequisites
Requirements needed to apply sensitivity labels to Fabric items:

Power BI Pro or Premium Per User (PPU) license


Edit permissions on the item you wish to label.

7 Note

If you can't apply a sensitivity label, or if the sensitivity label is greyed out in the
sensitivity label menu, you may not have permissions to use the label. Contact your
organization's tech support.

Apply a label
There are two common ways of applying a sensitivity label to an item: from the flyout
menu in the item header, and in the item settings.

From the flyout menu - select the sensitivity indication in the header to display the
flyout menu:
In the item settings - open the item's settings, find the sensitivity section, and then
choose the desired label:

Related content
Sensitivity label overview


Delta Lake table format interoperability
Article • 01/26/2025

In Microsoft Fabric, the Delta Lake table format is the standard for analytics. Delta Lake is an open-source storage layer
that brings ACID (Atomicity, Consistency, Isolation, Durability) transactions to big data and analytics workloads.

All Fabric experiences generate and consume Delta Lake tables, driving interoperability and a unified product experience.
Delta Lake tables produced by one compute engine, such as Fabric Data Warehouse or Synapse Spark, can be consumed by
any other engine, such as Power BI. When you ingest data into Fabric, Fabric stores it as Delta tables by default. You can
easily integrate external data containing Delta Lake tables by using OneLake shortcuts.
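
As a quick illustration of that interoperability, the following sketch writes a small table from a Fabric notebook and reads it back. It assumes a Fabric Spark notebook, where a SparkSession named spark is already available; the table and column names are illustrative only. Because Delta Lake is the default table format, no format-specific options are needed, and the resulting table can then be queried by the SQL analytics endpoint, Power BI Direct Lake, and other engines.

```python
# Minimal sketch for a Fabric Spark notebook, where a SparkSession named
# `spark` is already available. Table and column names are illustrative.
from pyspark.sql import Row

orders = spark.createDataFrame([
    Row(order_id=1, region="West", amount=120.50),
    Row(order_id=2, region="East", amount=89.99),
])

# Saving as a managed table in the attached lakehouse produces a Delta table,
# because Delta Lake is the default table format in Fabric.
orders.write.mode("overwrite").saveAsTable("orders")

# The same table can now be read by Spark or any other Fabric engine.
spark.read.table("orders").show()
```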

Delta Lake features and Fabric experiences


To achieve interoperability, all the Fabric experiences align on the Delta Lake features and Fabric capabilities. Some
experiences can only write to Delta Lake tables, while others can only read from them.

Writers: Data warehouses, eventstreams, and exported Power BI semantic models into OneLake
Readers: SQL analytics endpoint and Power BI direct lake semantic models
Writers and readers: Fabric Spark runtime, dataflows, data pipelines, and Kusto Query Language (KQL) databases

The following matrix shows key Delta Lake features and their support on each Fabric capability.

Fabric capability | Name-based column mappings | Deletion vectors | V-order writing | Table optimization and maintenance | Write partitions | Read partitions | Liquid Clustering | TIMESTAMP_NTZ | Delta reader/writer version and default table features
Data warehouse Delta Lake export | No | Yes | Yes | Yes | No | Yes | No | No | Reader: 3; Writer: 7; Deletion Vectors
SQL analytics endpoint | Yes | Yes | N/A (not applicable) | N/A (not applicable) | N/A (not applicable) | Yes | Yes | No | N/A (not applicable)
Fabric Spark Runtime 1.3 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Reader: 1; Writer: 2
Fabric Spark Runtime 1.2 | Yes | Yes | Yes | Yes | Yes | Yes | Yes, read only | Yes | Reader: 1; Writer: 2
Fabric Spark Runtime 1.1 | Yes | No | Yes | Yes | Yes | Yes | Yes, read only | No | Reader: 1; Writer: 2
Dataflows | Yes | Yes | Yes | No | Yes | Yes | Yes, read only | No | Reader: 1; Writer: 2
Data pipelines | No | No | Yes | No | Yes, overwrite only | Yes | Yes, read only | No | Reader: 1; Writer: 2
Power BI direct lake semantic models | Yes | Yes | N/A (not applicable) | N/A (not applicable) | N/A (not applicable) | Yes | Yes | No | N/A (not applicable)
Export Power BI semantic models into OneLake | Yes | N/A (not applicable) | Yes | No | Yes | N/A (not applicable) | No | No | Reader: 2; Writer: 5
KQL databases | Yes | Yes | No | No* | Yes | Yes | No | No | Reader: 1; Writer: 1
Eventstreams | No | No | No | No | Yes | N/A (not applicable) | No | No | Reader: 1; Writer: 2

* KQL databases provide certain table maintenance capabilities such as retention. Data is removed at the end of the retention
period from OneLake. For more information, see One Logical copy.

7 Note

Fabric doesn't write name-based column mappings by default. The default Fabric experience generates tables that
are compatible across the service. Delta Lake tables produced by third-party services may have incompatible table
features.
Some Fabric experiences don't have built-in table optimization and maintenance capabilities, such as bin-compaction,
V-Order, and cleanup of old unreferenced files. To keep Delta Lake tables optimal for analytics, follow the techniques
in Use table maintenance feature to manage delta tables in Fabric for tables ingested using those experiences (see
the sketch following this note).
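
For engines without built-in maintenance, you can run the maintenance commands yourself. A minimal sketch from a Fabric Spark notebook follows; the table name sales is illustrative, and the VORDER clause is a Fabric-specific extension of the standard Delta OPTIMIZE command.

```python
# Minimal sketch of manual Delta table maintenance from a Fabric Spark notebook
# (a SparkSession named `spark` is available). The table name `sales` is illustrative.

# Compact small files and apply V-Order optimization (VORDER is Fabric-specific).
spark.sql("OPTIMIZE sales VORDER")

# Remove unreferenced files older than the default 7-day (168-hour) retention period.
spark.sql("VACUUM sales RETAIN 168 HOURS")
```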

Current limitations
Currently, Fabric doesn't support these Delta Lake features:

Delta Lake 3.x Uniform


Identity columns writing (proprietary Databricks feature)
Delta Live Tables (proprietary Databricks feature)
RLE (Run Length Encoding) enabled on the checkpoint file

Related content
What is Delta Lake?
Learn more about Delta Lake tables in Fabric Lakehouse and Synapse Spark.
Learn about Direct Lake in Power BI and Microsoft Fabric.
Learn more about querying tables from the Warehouse through its published Delta Lake Logs.



Learn about Microsoft Fabric feedback
Article • 01/26/2025

Your feedback is important to us. We want to hear about your experiences with
Microsoft Fabric. Your feedback is used to improve the product and shape the way it
evolves. This article describes how you can give feedback about Microsoft Fabric, how
the feedback is collected, and how we handle this information.

Feedback types
There are three ways to give feedback about Microsoft Fabric: in-product feedback, in-
product surveys, and community feedback.

In-product feedback
Give in-product feedback by selecting the Feedback button next to your profile picture
in the Microsoft Fabric portal.

In-product surveys
From time to time, Microsoft Fabric initiates in-product surveys to collect feedback from
users. When you see a prompt, you can choose to give feedback or dismiss the prompt.
If you dismiss the prompt, you won't see it again for some time.

Community feedback
There are a few ways you can give feedback while engaging with the Microsoft Fabric
community:

Ideas - Submit and vote on ideas for Microsoft Fabric.

Issues - Discuss issues and workarounds with the community.

Community Feedback - Give feedback about Microsoft Fabric and vote for
publicly submitted feedback. Top known feedback items remain available in the
new portal.
What kind of feedback is best?
Try to give detailed and actionable feedback. If you have issues, or suggestions for how
we can improve, we’d like to hear it.

Descriptive title - Descriptive and specific titles help us understand the issue being
reported.

One issue - Providing feedback for one issue ensures the correct logs and data are
received with each submission and can be assigned for follow-up. Giving feedback
for separate issues also helps us identify the volume of feedback we're receiving for
a particular issue. If you have more than one issue, submit a new feedback request
for each one.

Give details - Give details about your issue in the description box. Information
about your device, operating system, and apps is automatically included in each
reported feedback. Add any additional information you think is important, and
include detailed steps to reproduce the issue.

How Microsoft uses feedback


Microsoft uses feedback to improve Microsoft products. We get user feedback in the
form of questions, problems, compliments, and suggestions. We make sure this
feedback makes it back to the appropriate teams, who use it to identify, prioritize,
and make improvements to Microsoft products. Feedback is essential for our
product teams to understand our users' experiences, and it directly influences the
priority of fixes and improvements.

What do we collect?
Here are the most common items collected or calculated.

Comments - User-submitted comments in the original language.

Submission date - Date and time we got the feedback.

Language - The original language the comment was submitted in.

Feedback type - The type of feedback: Survey feedback or in-product feedback.

Survey questions - Questions that we asked the user during the survey.
Survey responses - User responses to survey questions.

App language - The language of the Microsoft product that was captured on
submission.

Tenant ID - When the feedback is submitted from a Microsoft Entra account.

User ID - Microsoft Entra ID or email address of the authenticated user submitting


the feedback.

Data handling and privacy


We understand that when you use our cloud services, you're entrusting us with one of
your most valuable assets: your data. We make sure the feedback we receive is stored
and handled under Microsoft governance rules, and that it can only be accessed for
approved uses. We don't use your email, chat, files, or other personal content to target
ads to you. When we collect data, we use it to make your experiences better.

To learn more about how we protect the privacy and confidentiality of your data, and
how we ensure that it will be used only in a way that is consistent with your
expectations, review our privacy principles at the Microsoft Trust Center .



Microsoft Fabric adoption roadmap
Article • 12/30/2024

The goal of this series of articles is to provide a roadmap. The roadmap presents a series
of strategic and tactical considerations and action items that lead to the successful
adoption of Microsoft Fabric, and help build a data culture in your organization.

Advancing adoption and cultivating a data culture is about more than implementing
technology features. Technology can assist an organization in making the greatest
impact, but a healthy data culture involves many considerations across the spectrum of
people, processes, and technology.

7 Note

While reading this series of articles, we recommend that you also take into
consideration Power BI implementation planning guidance. After you're familiar
with the concepts in the Microsoft Fabric adoption roadmap, consider reviewing
the usage scenarios. Understanding the diverse ways Power BI is used can
influence your implementation strategies and decisions for all of Microsoft Fabric.

The diagram depicts the following areas of the Microsoft Fabric adoption roadmap.
The areas in the above diagram include:


Area Description

Data culture: Data culture refers to a set of behaviors and norms in the organization that
encourages a data-driven culture. Building a data culture is closely related to adopting
Fabric, and it's often a key aspect of an organization's digital transformation.

Executive sponsor: An executive sponsor is someone with credibility, influence, and


authority throughout the organization. They advocate for building a data culture and
adopting Fabric.

Business Alignment: How well the data culture and data strategy enable business users to
achieve business objectives. An effective BI data strategy aligns with the business strategy.

Content ownership and management: There are three primary strategies for how
business intelligence (BI) and analytics content is owned and managed: business-led self-
service BI, managed self-service BI, and enterprise BI. These strategies have a significant
influence on adoption, governance, and the Center of Excellence (COE) operating model.

Content delivery scope: There are four primary strategies for content and data delivery:
personal, team, departmental, and enterprise. These strategies have a significant influence
on adoption, governance, and the COE operating model.

Center of Excellence: A Fabric COE is an internal team of technical and business experts.
These experts actively assist others who are working with data within the organization. The
COE forms the nucleus of the broader community to advance adoption goals that are
aligned with the data culture vision.

Governance: Data governance is a set of policies and procedures that define the ways in
which an organization wants data to be used. When adopting Fabric, the goal of
governance is to empower the internal user community to the greatest extent possible,
while adhering to industry, governmental, and contractual requirements and regulations.

Mentoring and user enablement: A critical objective for adoption efforts is to enable
users to accomplish as much as they can within the guardrails established by governance
guidelines and policies. The act of mentoring users is one of the most important
responsibilities of the COE. It has a direct influence on adoption efforts.

Community of practice: A community of practice comprises a group of people with a


common interest, who interact with and help each other on a voluntary basis. An active
community is an indicator of a healthy data culture. It can significantly advance adoption
efforts.

User support: User support includes both informally organized and formally organized
methods of resolving issues and answering questions. Both formal and informal support
methods are critical for adoption.

System oversight: System oversight includes the day-to-day administration responsibilities


to support the internal processes, tools, and people.

Change management: Change management involves procedures to address the impact of


change for people in an organization. These procedures safeguard against disruption and
productivity loss due to changes in solutions or processes. An effective data strategy
describes who is responsible for managing this change and the practices and resources
needed to realize it.

The relationships in the above diagram can be summarized as follows.

Your organizational data culture vision will strongly influence the strategies that
you follow for self-service and enterprise content ownership and management
and content delivery scope.
These strategies will, in turn, have a big impact on the operating model for your
Center of Excellence and governance decisions.
The established governance guidelines, policies, and processes affect the
implementation methods used for mentoring and enablement, the community of
practice, and user support.
Governance decisions will dictate the day-to-day system oversight (administration)
activities.
Adoption and governance decisions are implemented alongside change
management to mitigate the impact and disruption of change on existing business
processes.
All data culture and adoption-related decisions and actions are accomplished more
easily with guidance and leadership from an executive sponsor, who facilitates
business alignment between the business strategy and data strategy. This
alignment in turn informs data culture and governance decisions.

Each individual article in this series discusses key topics associated with the items in the
diagram. Considerations and potential action items are provided. Each article concludes
with a set of maturity levels to help you assess your current state so you can decide
what action to take next.

Microsoft Fabric adoption


Successful adoption of analytical tools like Fabric involves making effective processes,
support, tools, and data available and integrated into regular ongoing patterns of usage
for content creators, consumers, and stakeholders in the organization.

) Important

This series of adoption articles is focused on organizational adoption. See


Microsoft Fabric adoption maturity levels for an introduction to the three types of
adoption: organizational, user, and solution.

A common misconception is that adoption relates primarily to usage or the number of


users. There's no question that usage statistics are an important factor. However, usage
isn't the only factor. Adoption isn't just about using the technology regularly; it's about
using it effectively. Effectiveness is much more difficult to define and measure.

Whenever possible, adoption efforts should be aligned across analytics platforms and BI
services.
7 Note

Individuals—and the organization itself—are continually learning, changing, and


improving. That means there's no formal end to adoption-related efforts.

The remaining articles in this Power BI adoption series discuss the following aspects of
adoption.

Adoption maturity levels


Data culture
Executive sponsorship
Business alignment
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Change management
Conclusion and additional resources

) Important

You might be wondering how this Fabric adoption roadmap is different from the
Power BI adoption framework . The adoption framework was created primarily to
support Microsoft partners. It's a lightweight set of resources to help partners
deploy Power BI solutions for their customers.

This adoption series is more current. It's intended to guide any person or
organization that is using—or considering using—Fabric. If you're seeking to
improve your existing Power BI or Fabric implementation, or planning a new Power
BI or Fabric implementation, this adoption roadmap is a great place to start.

Target audience
The intended audience of this series of articles includes anyone interested in one or
more of the following outcomes.
Improving their organization's ability to effectively use analytics.
Increasing their organization's maturity level related to the delivery of analytics.
Understanding and overcoming adoption-related challenges faced when scaling
and growing.
Increasing their organization's return on investment (ROI) in data and analytics.

This series of articles will be most helpful to those who work in an organization with one
or more of the following characteristics.

Power BI or other Fabric workloads are deployed with some successes.


There are pockets of viral adoption, but analytics isn't being purposefully governed
across the entire organization.
Analytics solutions are deployed with some meaningful scale, but there remains a
need to determine:
What is effective and what should be maintained.
What should be improved.
How future deployments could be more strategic.
An expanded implementation of analytics is under consideration or is planned.

This series of articles will also be helpful for:

Organizations that are in the early stages of an analytics implementation.


Organizations that have had success with adoption and now want to evaluate their
current maturity level.

Assumptions and scope


The primary focus of this series of articles is on the Microsoft Fabric platform.

To fully benefit from the information provided in these articles, you should have an
understanding of Power BI foundational concepts and Fabric foundational concepts.

Related content
In the next article in this series, learn about the Fabric adoption maturity levels. The
maturity levels are referenced throughout the entire series of articles. Also, see the
conclusion article for additional adoption-related resources.

Other helpful resources include:

Power BI implementation planning


Questions? Try asking the Fabric Community
Suggestions? Contribute ideas to improve Fabric

Experienced partners are available to help your organization succeed with adoption
initiatives. To engage with a partner, visit the Power BI partner portal .

Acknowledgments
The Microsoft Fabric adoption roadmap articles are written by Melissa Coates , Kurt
Buhler , and Peter Myers . Matthew Roche , from the Fabric Customer Advisory
Team, provides strategic guidance and feedback to the subject matter experts.
Reviewers include Cory Moore , James Ward, Timothy Bindas , Greg Moir , and
Chuy Varela .



Microsoft Fabric adoption roadmap
maturity levels
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

There are three inter-related perspectives to consider when adopting an analytics


technology like Microsoft Fabric.

The three types of adoption depicted in the above diagram include:


Type Description

Organizational adoption refers to the effectiveness of your analytics governance


processes. It also refers to data management practices that support and enable analytics
and business intelligence (BI) efforts.

User adoption is the extent to which consumers and creators continually increase their
knowledge. It's concerned with whether they're actively using analytics tools, and whether
they're using them in the most effective way.

Solution adoption refers to the impact and business value achieved for individual
requirements and analytical solutions.

As the four arrows in the previous diagram indicate, the three types of adoption are all
strongly inter-related:

Solution adoption affects user adoption. A well-designed and well-managed


solution—which could be many things, such as a set of reports, a Power BI app, a
semantic model, or a Fabric lakehouse—impacts and guides users on how to use
analytics in an optimal way.
User adoption impacts organizational adoption. The patterns and practices used
by individual users influence organizational adoption decisions, policies, and
practices.
Organizational adoption influences user adoption. Effective organizational
practices—including mentoring, training, support, and community—encourage
users to do the right thing in their day-to-day workflow.
User adoption affects solution adoption. Stronger user adoption, because of the
effective use of analytics by educated and informed users, contributes to stronger
and more successful individual solutions.

The remainder of this article introduces the three types of adoption in more detail.

Organizational adoption maturity levels


Organizational adoption measures the state of analytics governance and data
management practices. There are several organizational adoption goals:

Effectively support the community of creators, consumers, and stakeholders


Enable and empower users
Right-sized governance of analytics, BI, and data management activities
Oversee information delivery via enterprise BI and self-service BI with continuous
improvement cycles

It's helpful to think about organizational adoption from the perspective of a maturity
model. For consistency with the Power CAT adoption maturity model and the maturity
model for Microsoft 365, this Microsoft Fabric adoption roadmap aligns with the five
levels from the Capability Maturity Model , which were later enhanced by the Data
Management Maturity (DMM) model from ISACA (note that the DMM was a paid
resource that has since been retired).
Every organization has limited time, funding, and people, so it must be selective
about where it prioritizes its efforts. To get the most from your investment
in analytics, seek to attain at least maturity level 300 or 400, as discussed below. It's
common that different business units in the organization evolve and mature at different
rates, so be conscious of the organizational state as well as progress for key business
units.

7 Note

Organizational adoption maturity is a long journey. It takes time, effort, and


planning to progress to the higher levels.

Maturity level 100 – Initial


Level 100 is referred to as initial or performed. It's the starting point for data-related
investments that are new, undocumented, and without any process discipline.

Common characteristics of maturity level 100 include:

Pockets of success and experimentation with Fabric exist in one or more areas of
the organization.
Achieving quick wins has been a priority, and solutions have been delivered with
some success.
Organic growth has led to the lack of a coordinated strategy or governance
approach.
Practices are undocumented, with significant reliance on tribal knowledge.
There are few formal processes in place for effective data management.
Risk exists due to a lack of awareness of how data is used throughout the
organization.
The potential for a strategic investment with analytics is acknowledged. However,
there's no clear path forward for purposeful, organization-wide execution.

Maturity level 200 – Repeatable


Level 200 is referred to as repeatable or managed. At this point on the maturity curve,
data management is planned and executed. Defined processes exist, though these
processes might not apply uniformly throughout the organization.

Common characteristics of maturity level 200 include:


Certain analytics content is now critical in importance and/or it's broadly used by
the organization.
There are attempts to document and define repeatable practices. These efforts are
siloed, reactive, and deliver varying levels of success.
There's an over-reliance on individuals having good judgment and adopting
healthy habits that they learned on their own.
Analytics adoption continues to grow organically and produces value. However, it
takes place in an uncontrolled way.
Resources for an internal community are established, such as a Teams channel or
Yammer group.
Initial planning for a consistent analytics governance strategy is underway.
There's recognition that a Center of Excellence (COE) can deliver value.

Maturity level 300 – Defined


Level 300 is referred to as defined. At this point on the maturity curve, a set of
standardized data management processes are established and consistently applied
across organizational boundaries.

Common characteristics of maturity level 300 include:

Measurable success is achieved for the effective use of analytics.


Progress is made on the standardization of repeatable practices. However, less-
than-optimal aspects could still exist due to early uncontrolled growth.
The COE is established. It has clear goals and scope of responsibilities.
The internal community of practice gains traction with the participation of a
growing number of users.
Champions emerge in the internal user community.
Initial investments in training, documentation, and resources (such as template
files) are made.
An initial governance model is in place.
There's an active and engaged executive sponsor.
Roles and responsibilities for all analytics stakeholders are well understood.

Maturity level 400 – Capable


Level 400 is known as capable or measured. At this point on the maturity curve, data is
well-managed across its entire lifecycle.

Common characteristics of maturity level 400 include:

Analytics and business intelligence efforts deliver significant value.


Approved tools are commonly used for delivering critical content throughout the
organization.
There's an established and accepted governance model with cooperation from all
key business units.
Training, documentation, and resources are readily available for, and actively used
by, the internal community of users.
Standardized processes are in place for the oversight and monitoring of analytics
usage and practices.
The COE includes representation from all key business units.
A champions network supports the internal community. The champions actively
work with their colleagues as well as the COE.

Maturity level 500 – Efficient


Level 500 is known as efficient or optimizing because at this point on the maturity curve,
the emphasis is now on automation and continuous improvement.

Common characteristics of maturity level 500 include:

The value of analytics solutions is prevalent in the organization. Fabric is widely


accepted throughout the organization.
Analytics skillsets are highly valued in the organization, and they're recognized by
leadership.
The internal user community is self-sustaining, with support from the COE. The
community isn't over-reliant on key individuals.
The COE reviews key performance indicators regularly to measure success of
implementation and adoption goals.
Continuous improvement is a continual priority.
Use of automation adds value, improves productivity, or reduces risk for error.

7 Note

The characteristics above are generalized. When considering maturity levels and
designing a plan, you'll want to consider each topic or goal independently. In
reality, it's probably not possible to reach maturity level 500 for every aspect
of Fabric adoption for the entire organization. So, assess maturity levels
independently per goal. That way, you can prioritize your efforts where they will
deliver the most value. The remainder of the articles in this Fabric adoption series
present maturity levels on a per-topic basis.
Individuals—and the organization itself—continually learn, change, and improve. That
means there's no formal end to adoption-related efforts. However, it's common that
effort is reduced as higher maturity levels are reached.

The remainder of this article introduces the second and third types of adoption: user
adoption and solution adoption.

7 Note

The remaining articles in this series focus primarily on organizational adoption.

User adoption stages


User adoption measures the extent to which content consumers and self-service content
creators are actively and effectively using analytics tools such as Fabric. Usage statistics
alone don't indicate successful user adoption. User adoption is also concerned with
individual user behaviors and practices. The aim is to ensure users engage with
solutions, tools, and processes in the correct way and to their fullest extent.

User adoption encompasses how consumers view content, as well as how self-service
creators generate content for others to consume.

User adoption occurs on an individual user basis, but it's measured and analyzed in the
aggregate. Individual users progress through the four stages of user adoption at their
own pace. An individual who adopts a new technology will take some time to achieve
proficiency. Some users will be eager; others will be reluctant to learn yet another tool,
regardless of the promised productivity improvements. Advancing through the user
adoption stages involves time and effort, and it involves behavioral changes to become
aligned with organizational adoption objectives. The extent to which the organization
supports users advancing through the user adoption stages has a direct correlation to
the organizational-level adoption maturity.

User adoption stage 1 – Awareness


Common characteristics of stage 1 user adoption include:

An individual has heard of, or been initially exposed to, analytics in some way.
An individual might have access to a tool, such as Fabric, but isn't yet actively using
it.

User adoption stage 2 – Understanding


Common characteristics of stage 2 user adoption include:

An individual develops understanding of the benefits of analytics and how it can


support decision-making.
An individual shows interest and starts to use analytics tools.

User adoption stage 3 – Momentum


Common characteristics of stage 3 user adoption include:

An individual actively gains analytics skills by attending formal training, self-


directed learning, or experimentation.
An individual gains basic competency by using or creating analytics relevant to
their role.

User adoption stage 4 – Proficiency


Common characteristics of stage 4 user adoption include:

An individual actively uses analytics regularly.


An individual understands how to use analytic tools in the way in which they were
intended, as relevant for their role.
An individual modifies their behavior and activities to align with organizational
governance processes.
An individual's willingness to support organizational processes and change efforts
is growing over time, and they become an advocate for analytics in the
organization.
An individual makes the effort to continually improve their skills and stay current
with new product capabilities and features.

It's easy to underestimate the effort it takes to progress from stage 2 (understanding) to
stage 4 (proficiency). Typically, it takes the longest time to progress from stage 3
(momentum) to stage 4 (proficiency).

) Important

By the time a user reaches the momentum and proficiency stages, the organization
needs to be ready to support them in their efforts. You can consider some proactive
efforts to encourage users to progress through stages. For more information, see
the community of practice and the user support articles.
Solution adoption phases
Solution adoption is concerned with measuring the impact of content that's been
deployed. It's also concerned with the level of value solutions provide. The scope for
evaluating solution adoption is for one set of requirements, like a set of reports, a
lakehouse, or a single Power BI app.

7 Note

In this series of articles, content is synonymous with solution.

As a solution progresses to phases 3 or 4, expectations to operationalize the solution


are higher.

 Tip

The importance of scope on expectations for governance is described in the


content delivery scope article. That concept is closely related to this topic, but this
article approaches it from a different angle. It considers when you already have a
solution that is operationalized and distributed to many users. That doesn't
immediately equate to phase 4 solution adoption, as the concept of solution
adoption focuses on how much value the content delivers.

Solution phase 1 – Exploration


Common characteristics of phase 1 solution adoption include:

Exploration and experimentation are the main approaches to testing out new
ideas. Exploration of new ideas can occur through informal self-service efforts, or
through a formal proof of concept (POC), which is purposely narrow in scope. The
goal is to confirm requirements, validate assumptions, address unknowns, and
mitigate risks.
A small group of users test the proof of concept solution and provide useful
feedback.
For simplicity, all exploration—and initial feedback—could occur within local user
tools (such as Power BI Desktop or Excel) or within a single Fabric workspace.

Solution phase 2 – Functional


Common characteristics of phase 2 solution adoption include:
The solution is functional and meets the basic set of user requirements. There are
likely plans to iterate on improvements and enhancements.
The solution is deployed to the Fabric portal.
All necessary supporting components are in place (for example, a gateway to
support scheduled data refresh).
Target users are aware of the solution and show interest in using it. Potentially, it
could be a limited preview release, and might not yet be ready to promote to a
production workspace.

Solution phase 3 – Valuable


Common characteristics of phase 3 solution adoption include:

Target users find the solution to be valuable and experience tangible benefits.
The solution is promoted to a production workspace that's managed, secured, and
audited.
Validations and testing occur to ensure data quality, accurate presentation,
accessibility, and acceptable performance.
Content is endorsed, when appropriate.
Usage metrics for the solution are actively monitored.
User feedback loops are in place to facilitate suggestions and improvements that
can contribute to future releases.
Solution documentation is generated to support the needs of information
consumers (such as data sources used or how metrics are calculated). The
documentation helps future content creators (for example, for documenting any
future maintenance or planned enhancements).
Ownership and subject matter experts for the content are clear.
Report branding and theming are in place and in line with governance guidelines.

Solution phase 4 – Essential


Common characteristics of phase 4 solution adoption include:

Target users actively and routinely use the solution, and it's considered essential
for decision-making purposes.
The solution resides in a production workspace well separated from development
and test content. Change management and release management are carefully
controlled due to the impact of changes.
A subset of users regularly provides feedback to ensure the solution continues to
meet evolving requirements.
Expectations for the success of the solution are clear and are measured.
Expectations for support of the solution are clear, especially if there are service
level agreements.
The solution aligns with organizational governance guidelines and practices.
Most content is certified due to its critical nature.
Formal user acceptance testing for new changes might occur, particularly for IT-
managed content.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
organizational data culture and its impact on adoption efforts.



Microsoft Fabric adoption roadmap:
Data culture
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Building a data culture is closely related to adopting analytics, and it's often a key aspect
of an organization's digital transformation. The term data culture can be defined in
different ways by different organizations. In this series of articles, data culture means a
set of behaviors and norms in an organization. It encourages a culture that regularly
employs informed data decision-making:

By more stakeholders throughout more areas of the organization.


Based on analytics, not opinion.
In an effective, efficient way that's based on best practices approved by the Center
of Excellence (COE).
Based on trusted data.
That reduces reliance on undocumented tribal knowledge.
That reduces reliance on hunches and gut decisions.

) Important

Think of data culture as what you do, not what you say. Your data culture is not a
set of rules (that's governance). So, data culture is a somewhat abstract concept. It's
the behaviors and norms that are allowed, rewarded, and encouraged—or those
that are disallowed and discouraged. Bear in mind that a healthy data culture
motivates employees at all levels of the organization to generate and distribute
actionable knowledge.

Within an organization, certain business units or teams are likely to have their own
behaviors and norms for getting things done. The specific ways to achieve data culture
objectives can vary across organizational boundaries. What's important is that they
should all align with the organizational data culture objectives. You can think of this
structure as aligned autonomy.
The following circular diagram conveys the interrelated aspects that influence your data
culture:

The diagram depicts the somewhat ambiguous relationships among the following items:

Data culture is the outer circle. All topics within it contribute to the state of the
data culture.
Organizational adoption (including the implementation aspects of mentoring and
user enablement, user support, community of practice, governance, and system
oversight) is the inner circle. All topics are major contributors to the data culture.
Executive support and the Center of Excellence are drivers for the success of
organizational adoption.
Data literacy, data democratization, and data discovery are data culture aspects
that are heavily influenced by organizational adoption.
Content ownership and management, and content delivery scope, are closely
related to data democratization.

The elements of the diagram are discussed throughout this series of articles.
Data culture vision
The concept of data culture can be difficult to define and measure. Even though it's
challenging to articulate data culture in a way that's meaningful, actionable, and
measurable, you need to have a well-understood definition of what a healthy data
culture means to your organization. This vision of a healthy data culture should:

Originate from the executive level.


Align with organizational objectives.
Directly influence your adoption strategy.
Serve as the high-level guiding principles for enacting governance policies and
guidelines.

Data culture outcomes aren't specifically mandated. Rather, the state of the data culture
is the result of following the governance rules as they're enforced (or the lack of
governance rules). Leaders at all levels need to actively demonstrate through their
actions what's important to them, including how they praise, recognize, and reward staff
members who take initiative.

 Tip

If you can take for granted that your efforts to develop a data solution (such as a
semantic model, a lakehouse, or a report) will be valued and appreciated, that's an
excellent indicator of a healthy data culture. Sometimes, however, it depends on
what your immediate manager values most.

The initial motivation for establishing a data culture often comes from a specific
strategic business problem or initiative. It might be:

A reactive change, such as responding to new agile competition.


A proactive change, such as starting a new line of business or expanding into new
markets to seize a "green field" opportunity. Being data driven from the beginning
can be relatively easier when there are fewer constraints and complications,
compared with an established organization.
Driven by external changes, such as pressure to eliminate inefficiencies and
redundancies during an economic downturn.

In each of these situations, there's often a specific area where the data culture takes
root. The specific area could be a scope of effort that's smaller than the entire
organization, even if it's still significant. After necessary changes are made at this smaller
scope, they can be incrementally replicated and adapted for the rest of the organization.
Although technology can help advance the goals of a data culture, implementing
specific tools or features isn't the objective. This series of articles covers a lot of topics
that contribute to adoption of a healthy data culture. The remainder of this article
addresses three essential aspects of data culture: data discovery, data democratization,
and data literacy.

Data discovery
A successful data culture depends on users working with the right data in their day-to-
day activities. To achieve this goal, users need to find and access data sources, reports,
and other items.

Data discovery is the ability to effectively locate relevant data assets across the
organization. Primarily, data discovery is concerned with improving awareness that data
exists, which can be particularly challenging when data is siloed in departmental
systems.

Data discovery is a slightly different concept from search, because:

Data discovery allows users to see metadata for an item, like the name of a
semantic model, even if they don't currently have access to it. After a user is aware
of its existence, that user can go through the standard process to request access to
the item.
Search allows users to locate an existing item when they already have security
access to the item.

 Tip

It's important to have a clear and simple process so users can request access to
data. Knowing that data exists—but being unable to access it within the guidelines
and processes that the domain owner has established—can be a source of
frustration for users. It can force them to use inefficient workarounds instead of
requesting access through the proper channels.

Data discovery contributes to adoption efforts and the implementation of governance


practices by:

Encouraging the use of trusted high-quality data sources.


Encouraging users to take advantage of existing investments in available data
assets.
Promoting the use and enrichment of existing data items (such as a lakehouse,
data warehouse, data pipeline, dataflow, or semantic model) or reporting items
(such as reports, dashboards, or metrics).
Helping people understand who owns and manages data assets.
Establishing connections between consumers, creators, and owners.

The OneLake catalog and the use of endorsements are key ways to promote data
discovery in your organization.

Furthermore, data catalog solutions are extremely valuable tools for data discovery.
They can record metadata tags and descriptions to provide deeper context and
meaning. For example, Microsoft Purview can scan and catalog items from a Fabric
tenant (as well as many other sources).

Questions to ask about data discovery

Use questions like those found below to assess data discovery.

Is there a data catalog where business users can search for data?
Is there a metadata catalog that describes definitions and data locations?
Are high-quality data sources endorsed by certifying or promoting them?
To what extent do redundant data sources exist because people can't find the data
they need? What roles are expected to create data items? What roles are expected
to create reports or perform ad hoc analysis?
Can end users find and use existing reports, or do they insist on data exports to
create their own?
Do end users know which reports to use to address specific business questions or
find specific data?
Are people using the appropriate data sources and tools, or resisting them in favor
of legacy ones?
Do analysts understand how to enrich existing certified semantic models with new
data—for example, by using a Power BI composite model?
How consistent are data items in their quality, completeness, and naming
conventions?
Can data item owners follow data lineage to perform impact analysis of data
items?
Maturity levels of data discovery

The following maturity levels can help you assess your current state of data discovery.


Level State of Fabric data discovery

100: Initial • Data is fragmented and disorganized, with no clear structures or processes to
find it.

• Users struggle to find and use data they need for their tasks.

200: • Scattered or organic efforts to organize and document data are underway, but
Repeatable only in certain teams or departments.

• Content is occasionally endorsed, but these endorsements aren't defined and


the process isn't managed. Data remains siloed and fragmented, and it's difficult
to access.

300: Defined • A central repository, like the OneLake catalog, is used to make data easier to find
for people who need it.

• An explicit process is in place to endorse quality data and content.

• Basic documentation includes catalog data, definitions, and calculations, as well


as where to find them.

400: Capable • Structured, consistent processes guide users how to endorse, document, and
find data from a central hub. Data silos are the exception instead of the rule.

• Quality data assets are consistently endorsed and easily identified.

• Comprehensive data dictionaries are maintained and improve data discovery.

500: Efficient • Data and metadata is systematically organized and documented with a full view
of the data lineage.

• Quality assets are endorsed and easily identified.

• Cataloging tools, like Microsoft Purview, are used to make data discoverable for
both use and governance.
Data democratization
Data democratization refers to putting data into the hands of more users who are
responsible for solving business problems. It's about enabling more users to make
better data-driven decisions.

7 Note

The concept of data democratization doesn't imply a lack of security or a lack of


justification based on job role. As part of a healthy data culture, data
democratization helps reduce shadow IT by providing semantic models that:

Are secured, governed, and well managed.


Meet business needs in cost-effective and timely ways.

Your organization's position on data democratization will have a wide-reaching impact


on adoption and governance-related efforts.

2 Warning

If access to data or the ability to perform analytics is limited to a select number of


individuals in the organization, that's typically a warning sign because the ability to
work with data is a key characteristic of a healthy data culture.

Questions to ask about data democratization

Use questions like those found below to assess data democratization.

Is data and analytics readily accessible, or restricted to limited roles and


individuals?
Is an effective process in place for people to request access to new data and tools?
Is data readily shared between teams and business units, or is it siloed and closely
guarded?
Who is permitted to have Power BI Desktop installed?
Who is permitted to have Power BI Pro or Power BI Premium Per User (PPU)
licenses?
Who is permitted to create assets in Fabric workspaces?
What's the desired level of self-service analytics and business intelligence (BI) user
enablement? How does this level vary depending on business unit or job role?
What's the desired balance between enterprise and self-service analytics, and BI?
What data sources are strongly preferred for what topics and business domains?
What's the allowed use of unsanctioned data sources?
Who can manage content? Is this decision different for data versus reports? Is the
decision different for enterprise BI users versus decentralized users? Who can own
and manage self-service BI content?
Who can consume content? Is this decision different for external partners,
customers, or suppliers?

Maturity levels of data democratization

The following maturity levels can help you assess your current state of data
democratization.


Level State of data democratization

100: Initial • Data and analytics are limited to a small number of roles, who gatekeep access to
others.

• Business users must request access to data or tools to complete tasks. They
struggle with delays or bottlenecks.

• Self-service initiatives are taking place with some success in various areas of the
organization. These activities are occurring in a somewhat chaotic manner, with few
formal processes and no strategic plan. There's a lack of oversight and visibility into
these self-service activities. The success or failure of each solution isn't well
understood.

• The enterprise data team can't keep up with the needs of the business. A
significant backlog of requests exists for this team.

200: • There are limited efforts underway to expand access to data and tools.
Repeatable
• Multiple teams have had measurable success with self-service solutions. People in
the organization are starting to pay attention.

• Investments are being made to identify the ideal balance of enterprise and self-
service solutions.

300: • Many people have access to the data and tools they need, although not all users
Defined are equally enabled or held accountable for the content they create.

• Effective self-service data practices are incrementally and purposely replicated
throughout more areas of the organization.

400: • Healthy partnerships exist among enterprise and self-service solution creators.
Capable Clear, realistic user accountability and policies mitigate risk of self-service analytics
and BI.

• Clear and consistent processes are in place for users to request access to data
and tools.

• Individuals who take initiative in building valuable solutions are recognized and
rewarded.

500: • User accountability and effective governance give central teams confidence in
Efficient what users do with data.

• Automated, monitored processes enable people to easily request access to data
and tools. Anyone with the need or interest to use data can follow these processes
to perform analytics.

Data literacy
Data literacy refers to the ability to interpret, create, and communicate with data and
analytics accurately and effectively.

Training efforts, as described in the mentoring and user enablement article, often focus
on how to use the technology itself. Technology skills are important to producing high-
quality solutions, but it's also important to consider how to purposely advance data
literacy throughout the organization. Put another way, successful adoption takes a lot
more than merely providing software and licenses to users.

How you go about improving data literacy in your organization depends on many
factors, such as current user skillsets, complexity of the data, and the types of analytics
that are required. You might choose to focus on these types of activities related to data
literacy:

Interpreting charts and graphs
Assessing the validity of data
Performing root cause analysis
Discerning correlation from causation
Understanding how context and outliers affect how results are presented
Using storytelling to help consumers quickly understand and act

 Tip

If you're struggling to get data culture or governance efforts approved, focusing on
tangible benefits that you can achieve with data discovery ("find the data"), data
democratization ("use the data"), or data literacy ("understand the data") can help.
It can also be helpful to focus on specific problems that you can solve or mitigate
through data culture advancements.

Getting the right stakeholders to agree on the problem is usually the first step.
Then, it's a matter of getting the stakeholders to agree on the strategic approach to
a solution, along with the solution details.

Questions to ask about data literacy

Use questions like those found below to assess data literacy.

Does a common analytical vocabulary exist in the organization to talk about data
and BI solutions? Alternatively, are definitions fragmented and different across
silos?
How comfortable are people with making decisions based on data and evidence
compared to intuition and subjective experience?
When people who hold an opinion are confronted with conflicting evidence, how
do they react? Do they critically appraise the data, or do they dismiss it? Can they
alter their opinion, or do they become entrenched and resistant?
Do training programs exist to support people in learning about data and analytical
tools?
Is there significant resistance to visual analytics and interactive reporting in favor of
static spreadsheets?
Are people open to new analytical methods and tools to potentially address their
business questions more effectively? Alternatively, do they prefer to keep using
existing methods and tools to save time and energy?
Are there methods or programs to assess or improve data literacy in the
organization? Does leadership have an accurate understanding of the data literacy
levels?
Are there roles, teams, or departments where data literacy is particularly strong or
weak?

Maturity levels of data literacy

The following maturity levels can help you assess your current state of data literacy.


Level State of data literacy

100: Initial • Decisions are frequently made based on intuition and subjective experience.
When confronted with data that challenges existing opinions, data is often
dismissed.

• Individuals have low confidence to use and understand data in decision-making
processes or discussions.

• Report consumers have a strong preference for static tables. These consumers
dismiss interactive visualizations or sophisticated analytical methods as "fancy" or
unnecessary.

200: • Some teams and individuals inconsistently incorporate data into their decision
Repeatable making. There are clear cases where misinterpretation of data has led to flawed
decisions or wrong conclusions.

• There's some resistance when data challenges pre-existing beliefs.

• Some people are skeptical of interactive visualizations and sophisticated
analytical methods, though their use is increasing.

300: • The majority of teams and individuals understand data relevant to their business
Defined area and use it implicitly to inform decisions.

• When data challenges pre-existing beliefs, it produces critical discussions and
sometimes motivates change.

• Visualizations and advanced analytics are more widely accepted, though not
always used effectively.

400: • Data literacy is recognized explicitly as a necessary skill in the organization. Some
Capable training programs address data literacy. Specific efforts are taken to help
departments, teams, or individuals that have particularly weak data literacy.

• Most individuals can effectively use and apply data to make objectively better
decisions and take actions.

• Visual and analytical best practices are documented and followed in strategically
important data solutions.

500: • Data literacy, critical thinking, and continuous learning are strategic skills and
Efficient values in the organization. Effective programs monitor progress to improve data
literacy in the organization.

• Decision making is driven by data across the organization. Decision intelligence
or prescriptive analytics are used to recommend key decisions and actions.

• Visual and analytical best practices are seen as essential to generate business
value with data.

Considerations and key actions

Checklist - Here are some considerations and key actions that you can take to
strengthen your data culture.

" Align your data culture goals and strategy: Give serious consideration to the type
of data culture that you want to cultivate. Ideally, it's more from a position of user
empowerment than a position of command and control.
" Understand your current state: Talk to stakeholders in different business units to
understand which analytics practices are currently working well and which practices
aren't working well for data-driven decision-making. Conduct a series of workshops
to understand the current state and to formulate the desired future state.
" Speak with stakeholders: Talk to stakeholders in IT, BI, and the COE to understand
which governance constraints need consideration. These conversations can present
an opportunity to educate teams on topics like security and infrastructure. You can
also use the opportunity to educate stakeholders on the features and capabilities
included in Fabric.
" Verify executive sponsorship: Verify the level of executive sponsorship and support
that you have in place to advance data culture goals.
" Make purposeful decisions about your data strategy: Decide what the ideal
balance of business-led self-service, managed self-service, and enterprise data,
analytics and BI use cases should be for the key business units in the organization
(covered in the content ownership and management article). Also consider how the
data strategy relates to the extent of published content for personal, team,
departmental, and enterprise analytics and BI (described in the content delivery
scope article). Define your high-level goals and priorities for this strategic planning.
Determine how these decisions affect your tactical planning.
" Create a tactical plan: Begin creating a tactical plan for immediate, short-term, and
long-term action items. Identify business groups and problems that represent
"quick wins" and can make a visible difference.
" Create goals and metrics: Determine how you'll measure effectiveness for your
data culture initiatives. Create key performance indicators (KPIs) or objectives and
key results (OKRs) to validate the results of your efforts.

Questions to ask about data culture

Use questions like those found below to assess data culture.

Is data regarded as a strategic asset in the organization?
Is there a vision of a healthy data culture that originates from executive leadership
and aligns with organizational objectives?
Does the data culture guide creation of governance policies and guidelines?
Are organizational data sources trusted by content creators and consumers?
When justifying an opinion, decision, or choice, do people use data as evidence?
Is knowledge about analytics and data use documented or is there a reliance on
undocumented tribal knowledge?
Are efforts to develop a data solution valued and appreciated by the user
community?

Maturity levels of data culture

The following maturity levels will help you assess the current state of your data culture.


Level State of data culture

100: Initial • Enterprise data teams can't keep up with the needs of the business. A significant
backlog of requests exists.

• Self-service data and BI initiatives are taking place with some success in various
areas of the organization. These activities occur in a somewhat chaotic manner,
with few formal processes and no strategic plan.

• There's a lack of oversight and visibility into self-service BI activities. The
successes or failures of data and BI solutions aren't well understood.

200: • Multiple teams have had measurable successes with self-service solutions. People
Repeatable in the organization are starting to pay attention.

• Investments are being made to identify the ideal balance of enterprise and self-
service data, analytics, and BI.

300: Defined • Specific goals are established for advancing the data culture. These goals are
implemented incrementally.

• Learnings from what works in individual business units are shared.

• Effective self-service practices are incrementally and purposely replicated
throughout more areas of the organization.

400: • The data culture goals to employ informed decision-making are aligned with
Capable organizational objectives. They're actively supported by the executive sponsor, the
COE, and they have a direct impact on adoption strategies.

• A healthy and productive partnership exists between the executive sponsor, COE,
business units, and IT. The teams are working towards shared goals.

• Individuals who take initiative in building valuable data solutions are recognized
and rewarded.

500: • The business value of data, analytics, and BI solutions is regularly evaluated and
Efficient measured. KPIs or OKRs are used to track data culture goals and the results of
these efforts.

• Feedback loops are in place, and they encourage ongoing data culture
improvements.

• Continual improvement of organizational adoption, user adoption, and solution
adoption is a top priority.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of an executive sponsor.



Microsoft Fabric adoption roadmap:
Executive sponsorship
Article • 12/30/2024

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

When planning to advance the data culture and the state of organizational adoption for
data and analytics, it's crucial to have executive support. An executive sponsor is
imperative because analytics adoption is far more than just a technology project.

Although some successes can be achieved by a few determined individual contributors,
the organization is in a significantly better position when a senior leader is engaged,
supportive, informed, and available to assist with the following activities.

Formulating a strategic vision, goals, and priorities for data, analytics, and business
intelligence (BI).
Providing top-down guidance and reinforcement for the data strategy by regularly
promoting, motivating, and investing in strategic and tactical planning.
Leading by example by actively using data and analytics in a way that's consistent
with data culture and adoption goals.
Allocating staffing and prioritizing resources.
Approving funding (for example, Fabric licenses).
Removing barriers to enable action.
Communicating announcements that are of critical importance, to help them gain
traction.
Decision-making, particularly for strategic-level governance decisions.
Dispute resolution (for escalated issues that can't be resolved by operational or
tactical personnel).
Supporting organizational change initiatives (for example, creating or expanding
the Center of Excellence).

Important

The ideal executive sponsor has sufficient credibility, influence, and authority
throughout the organization. They also have an invested stake in data efforts and
the data strategy. When the BI strategy is successful, the ideal executive sponsor
also experiences success in their role.

Identifying an executive sponsor


There are multiple ways to identify an executive sponsor.

Top-down pattern
An executive sponsor might be selected by a more senior executive. For example, the
Chief Executive Officer (CEO) could hire a Chief Data Officer (CDO) or Chief Analytics
Officer (CAO) to explicitly advance the organization's data culture objectives or lead
digital transformation efforts. The CDO or CAO then becomes the ideal candidate to
serve as the executive sponsor for Fabric (or for data and analytics in general).

Here's another example: The CEO might empower an existing executive, such as the
Chief Financial Officer (CFO), because they have a good track record leading data and
analytics in their organization. As the new executive sponsor, the CFO could then lead
efforts to replicate the finance team's success to other areas of the organization.

Note

Having an executive sponsor at the C-level is an excellent leading indicator. It
indicates that the organization recognizes the importance of data as a strategic
asset and is advancing its data culture in a positive direction.

Bottom-up pattern
Alternatively, a candidate for the executive sponsor role could emerge due to the
success they've experienced with creating data solutions. For example, a business unit
within the organization, such as Finance, has organically achieved great success with
their use of data and analytics. Essentially, they've successfully formed their own data
culture on a smaller scale. A junior-level leader who hasn't reached the executive level
(such as a director) might then grow into the executive sponsor role by sharing
successes with other business units across the organization.

The bottom-up approach is more likely to occur in smaller organizations. It might be
because the return on investment and strategic imperative of a data culture (or digital
transformation) isn't yet an urgent priority for C-level executives. The success of a
leader using the bottom-up pattern depends on being recognized by senior leadership.

With a bottom-up approach, the sponsor might be able to make some progress, but
they won't have formal authority over other business units. Without clear authority, it's
only a matter of time until challenges occur that are beyond their level of authority. For
this reason, the top-down approach has a higher probability of success. However, initial
successes with a bottom-up approach can convince leadership to increase their level of
sponsorship, which might start a healthy competition across other business units in the
adoption of data and BI.

Considerations and key actions

Checklist - Here's a list of considerations and key actions you can take to establish or
strengthen executive support for analytics.

" Identify an executive sponsor with broad authority: Find someone in a sufficient


position of influence and authority (across organizational boundaries) who
understands the value and impact of BI. It is important that the individual has a
vested interest in the success of analytics in the organization.
" Involve your executive sponsor: Consistently involve your executive sponsor in all
strategic-level governance decisions involving data management, BI, and analytics.
Also involve your sponsor in all governance data culture initiatives to ensure
alignment and consensus on goals and priorities.
" Establish responsibilities and expectation: Formalize the arrangement with
documented responsibilities for the executive sponsor role. Ensure that there's no
uncertainty about expectations and time commitments.
" Identify a backup for the sponsor: Consider naming a backup executive sponsor.
The backup can attend meetings in the sponsor's absence and make time-sensitive
decisions when necessary.
" Identify business advocates: Find influential advocates in each business unit.
Determine how their cooperation and involvement can help you to accomplish your
objectives. Consider involving advocates from various levels in the organization
chart.

Questions to ask
Use questions like those found below to assess executive support.

Has an executive sponsor of Fabric or other analytical tools been identified?
If so, who is the executive sponsor?
If not, is there an informal executive sponsor? Who is the closest to this role? Can
you define the business impact of having no executive sponsor?
To what extent is the strategic importance of Fabric and analytics understood and
endorsed by executives?
Are executives using Fabric and the results of data and BI initiatives? What's the
sentiment among executives for the effectiveness of data solutions?
Is the executive sponsor leading by example in the effective use of data and BI
tools?
Does the executive sponsor provide the appropriate resources for data initiatives?
Is the executive sponsor involved in dispute resolution and change management?
Does the executive sponsor engage with the user community?
Does the executive sponsor have sufficient credibility and healthy relationships
across organizational boundaries (particularly the business and IT)?

Maturity levels

The following maturity levels will help you assess your current state of executive
support.


Level State of executive support

100: Initial • There might be awareness from at least one executive about the strategic
importance of how analytics can advance the organization's data culture goals.
However, neither a sponsor nor an executive-level decision-maker is identified.

200: • Informal executive support exists for analytics through informal channels and
Repeatable relationships.

300: • An executive sponsor is identified. Expectations are clear for the role.
Defined

400: • An executive sponsor is well established with someone with sufficient authority
Capable across organizational boundaries.

• A healthy and productive partnership exists between the executive sponsor, COE,
business units, and IT. The teams are working towards shared data culture goals.

500: • The executive sponsor is highly engaged. They're a key driver for advancing the
Efficient organization's data culture vision.

• The executive sponsor is involved with ongoing organizational adoption
improvements. KPIs (key performance indicators) or OKRs (objectives and key
results) are used to track data culture goals and the results of data, analytics, and
BI efforts.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
importance of business alignment with organizational goals.



Microsoft Fabric adoption roadmap:
Business alignment
Article • 12/30/2024

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Business intelligence (BI) activities and solutions have the best potential to deliver value
when they're well aligned to organizational business goals. In general, effective business
alignment helps to improve adoption. With effective business alignment, the data
culture and data strategy enable business users to achieve their business objectives.

You can achieve effective business alignment with data activities and solutions by
having:

An understanding of the strategic importance of data and analytics in achieving
measurable progress toward business goals.
A shared awareness of the business strategy and key business objectives among
content owners, creators, consumers, and administrators. A common
understanding should be integral to the data culture and decision-making across
the organization.
A clear and unified understanding of the business data needs, and how meeting
these needs helps content creators and content consumers achieve their
objectives.
A governance strategy that effectively balances user enablement with risk
mitigation.
An engaged executive sponsor who provides top-down guidance to regularly
promote, motivate, and support the data strategy and related activities and
solutions.
Productive and solution-oriented discussions between business teams and
technical teams that address business data needs and problems.
Effective and flexible requirements gathering processes to design and plan
solutions.
Structured and consistent processes to validate, deploy, and support solutions.
Structured and sustainable processes to regularly update existing solutions so that
they remain relevant and valuable, despite changes in technology or business
objectives.
Effective business alignment brings significant benefits to an organization. Here are
some benefits of effective business alignment.

Improved adoption, because content consumers are more likely to use solutions
that enable them to achieve their objectives.
Increased business return on investment (ROI) for analytics initiatives and
solutions, because these initiatives and solutions will be more likely to directly
advance progress toward business goals.
Less effort and fewer resources spent on change management and changing
business requirements, due to an improved understanding of business data needs.

Achieve business alignment


There are multiple ways to achieve business alignment of data activities and initiatives.

Communication alignment
Effective and consistent communication is critical to aligning processes. Consider the
following actions and activities when you want to improve communication for successful
business alignment.

Create a plan for central teams and the user community to follow.
Plan regular alignment meetings between different teams and groups. For
example, central teams can hold regular planning and priority alignment sessions with
business units. Another example is when central teams schedule regular meetings
to mentor and enable self-service users.
Set up a centralized portal to consolidate communication and documentation for
user communities. For strategic solutions and initiatives, consider using a
communication hub.
Limit complex business and technical terminology in cross-functional
communications.
Strive for concise communication and documentation that's formatted and well
organized. That way, people can easily find the information that they need.
Consider maintaining a visible roadmap that shows the planned solutions and
activities relevant to the user community in the next quarter.
Be transparent when communicating policies, decisions, and changes.
Create a process for people to provide feedback, and review that feedback
regularly as part of regular planning activities.

Important

To achieve effective business alignment, you should make it a priority to identify
and dismantle any communication barriers between business teams and technical
teams.

Strategic alignment
Your business strategy should be well aligned with your data and BI strategy. To
incrementally achieve this alignment, we recommend that you commit to following
structured, iterative planning processes.

Strategic planning: Define data, analytics, and BI goals and priorities based on the
business strategy and current state of adoption and implementation. Typically,
strategic planning occurs every 12-18 months to iteratively define high-level
desired outcomes. You should synchronize strategic planning with key business
planning processes.
Tactical planning: Define objectives, action plans, and a backlog of solutions that
help you to achieve your data and BI goals. Typically, tactical planning occurs
quarterly to iteratively re-evaluate and align the data strategy and activities to the
business strategy. This alignment is informed by business feedback and changes to
business objectives or technology. You should synchronize tactical planning with
key project planning processes.
Solution planning: Design, develop, test, and deploy solutions that support
content creators and consumers in achieving their business objectives. Both
centralized content creators and self-service content creators conduct solution
planning to ensure that the solutions they create are well aligned with business
objectives. You should synchronize solution planning with key adoption and
governance planning processes.

Important

Effective business alignment is a key prerequisite for a successful data strategy.

Governance and compliance alignment


A key aspect of effective business alignment is balancing user enablement and risk
mitigation. This balance is an important aspect of your governance strategy, together
with other activities related to compliance, security, and privacy, which can include:

Transparently document and justify compliance criteria, key governance decisions,
and policies so that content creators and consumers know what's expected of
them.
Regularly audit and assess activities to identify risk areas or strong deviations from
the desired behaviors.
Provide mechanisms for content owners, content creators, and content consumers
to request clarification or provide feedback about existing policies.

Caution

A governance strategy that's poorly aligned with business objectives can result in
more conflicts and compliance risk, because users will often pursue workarounds to
complete their tasks.

Executive alignment
Executive leadership plays a key role in defining the business strategy and business
goals. To this end, executive engagement is an important part of achieving top-down
business alignment.

To achieve executive alignment, consider the following key activities.

Work with your executive sponsor to organize short, quarterly executive feedback
sessions about the use of data in the organization. Use this feedback to identify
changes in business objectives, re-assess the data strategy, and inform future
actions to improve business alignment.
Schedule regular alignment meetings with the executive sponsor to promptly
identify any potential changes in the business strategy or data needs.
Deliver monthly executive summaries that highlight relevant information,
including:
Key performance indicators (KPIs) that measure progress toward data, analytics,
and BI goals.
Fabric adoption and implementation milestones.
Technology changes that might impact organizational business goals.

Important

Don't underestimate the importance of the role your executive sponsor has in
achieving and maintaining effective business alignment.

Maintain business alignment


Business alignment is a continual process. To maintain business alignment, consider the
following factors.

Assign a responsible team: A working team reviews feedback and organizes re-
alignment sessions. This team is responsible for the alignment of planning and
priorities between the business and data strategy.
Create and support a feedback process: Your user community requires the means
to provide feedback. Examples of feedback can include requests to change existing
solutions, or to create new solutions and initiatives. This feedback is essential for
bottom-up business user alignment, and it drives iterative and continuous
improvement cycles.
Measure the success of business alignment: Consider using surveys, sentiment
analysis, and usage metrics to assess the success of business alignment. When
combined with other concise feedback mechanisms, this can provide valuable
input to help define future actions and activities to improve business alignment
and Fabric adoption.
Schedule regular re-alignment sessions: Ensure that data strategic planning and
tactical planning occur alongside relevant business strategy planning (when
business leadership reviews business goals and objectives).

Note

Because business objectives continually evolve, you should understand that
solutions and initiatives will change over time. Don't assume that requirements for
data and BI projects are rigid and can't be altered. If you struggle with changing
requirements, it might be an indication that your requirements-gathering process is
ineffective or inflexible, or that your development workflows don't sufficiently
incorporate regular feedback.

Important

To effectively maintain business alignment, it's essential that user feedback be
promptly and directly addressed. Regularly review and analyze feedback, and
consider how you can integrate it into iterative strategic planning, tactical planning,
and solution planning processes.

Questions to ask
Use questions like those found below to assess business alignment.

Can people articulate the goals of the organization and the business objectives of
their team?
To what extent do descriptions of organizational goals align across the
organization? How do they align between the business user community and
leadership community? How do they align between business teams and technical
teams?
Does executive leadership understand the strategic importance of data in
achieving business objectives? Does the user community understand the strategic
importance of data in helping them succeed in their jobs?
Are changes in the business strategy reflected promptly in changes to the data
strategy?
Are changes in business user data needs addressed promptly in data and BI
solutions?
To what extent do data policies support or conflict with existing business processes
and the way that users work?
Do solution requirements focus more on technical features than addressing
business questions? Is there a structured requirements gathering process? Do
content owners and creators interact effectively with stakeholders and content
consumers during requirements gathering?
How are decisions about data or BI investments made? Who makes these
decisions?
How well do people trust existing data and BI solutions? Is there a single version of
truth, or are there regular debates about who has the correct version?
How are data and BI initiatives and strategy communicated across the
organization?

Maturity levels
A business alignment assessment evaluates integration between the business strategy
and data strategy. Specifically, this assessment focuses on whether or not data and BI
initiatives and solutions support business users to achieve business strategic objectives.

The following maturity levels will help you assess your current state of business
alignment.


Level State of data and business alignment

100: Initial • Business and data strategies lack formal alignment, which leads to reactive
implementation and misalignment between data teams and business users.

• Misalignment in priorities and planning hinders productive discussions and
effectiveness.

• Executive leadership doesn't recognize data as a strategic asset.

200: • There are efforts to align data and BI initiatives with specific data needs without
Repeatable a consistent approach or understanding of their success.

• Alignment discussions focus on immediate or urgent needs, emphasizing
features, solutions, tools, or data rather than strategic alignment.

• People have a limited understanding of the strategic importance of data in
achieving business objectives.

300: • Data and BI initiatives are prioritized based on their alignment with strategic
Defined business objectives. However, alignment is siloed and typically focuses on local
needs.

• Strategic initiatives and changes have a clear, structured involvement of both the
business and data strategic decision makers. Business teams and technical teams
can have productive discussions to meet business and governance needs.

400: • There's a consistent, organization-wide view of how data initiatives and solutions
Capable support business objectives.

• Regular and iterative strategic alignments occur between the business and
technical teams. Changes to the business strategy result in clear actions that are
reflected by changes to the data strategy to better support business needs.

• Business and technical teams have healthy, productive relationships.

500: • The data strategy and the business strategy are fully integrated. Continuous
Efficient improvement processes drive consistent alignment, and they are themselves data
driven.

• Business and technical teams have healthy, productive relationships.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn more about
content ownership and management, and its effect on business-led self-service BI,
managed self-service BI, and enterprise BI.



Microsoft Fabric adoption roadmap:
Content ownership and management
Article • 12/30/2024

Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Note

The Power BI implementation planning usage scenarios explore many concepts
discussed in this article, focusing on the Power BI workload in Microsoft Fabric. The
usage scenario articles include detailed diagrams that you might find helpful to
support your planning and decision making.

There are three primary strategies for how data, analytics, and business intelligence (BI)
content is owned and managed: business-led self-service, managed self-service, and
enterprise. For the purposes of this series of articles, the term content refers to any type
of data item (like a notebook, semantic model, report, or dashboard).

The organization's data culture is the driver for why, how, and by whom each of these
three content ownership strategies is implemented.
The following table describes each of these three strategies.

Area Description

Business-led self-service: All content is owned and managed by the creators and subject
matter experts within a business unit. This ownership strategy is also known as a
decentralized or bottom-up strategy.

Managed self-service: The data is owned and managed by a centralized team, whereas
business users take responsibility for reports and dashboards. This ownership strategy is
also known as discipline at the core and flexibility at the edge.

Enterprise: All content is owned and managed by a centralized team such as IT, enterprise
BI, or the Center of Excellence (COE).

It's unlikely that an organization operates exclusively with one content ownership and
management strategy. Depending on your data culture, one strategy might be far more
dominant than the others. The choice of strategy could differ from solution to solution,
or from team to team. In fact, a single team can actively use multiple strategies if it's
both a consumer of enterprise content and a producer of its own self-service content.
The strategy to pursue depends on factors such as:

Requirements for a solution (such as a collection of reports, a Power BI app, or a
lakehouse).
User skills.
Ongoing commitment for training and skills growth.
Flexibility required.
Complexity level.
Priorities and leadership commitment level.

The organization's data culture—particularly its position on data democratization—has
considerable influence on the extent to which each of the three content ownership
strategies is used. While there are common patterns for success, there's no one-size-fits-all
approach. Each organization's governance model and approach to content ownership
and management should reflect the differences in data sources, applications, and
business context.

How content is owned and managed has a significant effect on governance, the extent
of mentoring and user enablement, needs for user support, and the COE operating
model.

As discussed in the governance article, the level of governance and oversight depends
on:
Who owns and manages the content.
The scope of content delivery.
The data subject area and sensitivity level.
The importance of the data, and whether it's used for critical decision making.

In general:

Business-led self-service content is subject to the least stringent governance and
oversight controls. It often includes personal BI and team BI solutions.
Managed self-service content is subject to moderately stringent governance and
oversight controls. It frequently includes team BI and departmental BI solutions.
Enterprise solutions are subject to more rigorous governance controls and
oversight.

As stated in the adoption maturity levels article, organizational adoption measures the
state of data management processes and governance. The choices made for content
ownership and management significantly affect how organizational adoption is
achieved.

Ownership and stewardship


There are many roles related to data management. Roles can be defined in many ways
and can be easily misunderstood. The following table presents possible ways you might
conceptually define these roles:


Role Description

Data steward Responsible for defining and/or managing acceptable data quality levels as well
as master data management (MDM).

Subject Responsible for defining what the data means, what it's used for, who might
matter expert access it, and how the data is presented to others. Collaborates with domain
(SME) owner as needed and supports colleagues in their use of data.

Technical Responsible for creating, maintaining, publishing, and securing access to data and
owner reporting items.

Domain Higher-level decision-maker who collaborates with governance teams on data
owner management policies, processes, and requirements. Decision-maker for defining
appropriate and inappropriate uses of the data. Participates on the data
governance board, as described in the governance article.
Assigning ownership for a data domain tends to be more straightforward when
managing transactional source systems. In analytics and BI solutions, data is integrated
from multiple domain areas, then transformed and enriched. For downstream analytical
solutions, the topic of ownership becomes more complex.

Note

Be clear about who is responsible for managing data items. It's crucial to ensure a
good experience for content consumers. Specifically, clarity on ownership is helpful
for:

Who to contact with questions.
Feedback.
Enhancement requests.
Support requests.

In the Fabric portal, content owners can set the contact list property for many
types of items. The contact list is also used in security workflows. For example,
when a user is sent a URL to open a Power BI app but they don't have permission,
they will be presented with an option to make a request for access.

Guidelines for being successful with ownership:

Define how ownership and stewardship terminology is used in your organization,
including expectations for these roles.
Set contacts for each workspace and for individual items to communicate
ownership and/or support responsibilities.
Specify between two and four workspace administrators and conduct an audit of
workspace admins regularly (perhaps twice a year). Workspace admins might be
directly responsible for managing workspace content, or it could be that those
tasks are assigned to colleagues who do the hands-on work. In all cases,
workspace admins should be able to easily contact owners of specific content.
Include consistent branding on reports to indicate who produced the content and
who to contact for help. A small image or text label located in the report footer is
valuable, especially when the report is exported from the Fabric portal. A standard
template file can encourage and simplify the consistent use of branding.
Make use of best practices reviews and co-development projects with the COE.

The remainder of this article covers considerations related to the three content
ownership and management strategies.

Business-led self-service
With a business-led self-service approach to data and BI, all content is owned and
managed by creators and subject matter experts. Because responsibility is retained
within a business unit, this strategy is often described as the bottom-up, or decentralized,
approach. Business-led self-service is often a good strategy for personal BI and team BI
solutions.

Important

The concept of business-led self-service isn't the same as shadow IT. In both
scenarios, data and BI content is created, owned, and managed by business users.
However, shadow IT implies that the business unit is circumventing IT and so the
solution is not sanctioned. With business-led self-service BI solutions, the business
unit has full authority to create and manage content. Resources and support from
the COE are available to self-service content creators. It's also expected that the
business unit will comply with all established data governance guidelines and
policies.

Business-led self-service is most suitable when:

Decentralized data management aligns with the organization's data culture, and
the organization is prepared to support these efforts.
Data exploration and freedom to innovate is a high priority.
The business unit wants to have the most involvement and retain the highest level
of control.
The business unit has skilled users capable of—and fully committed to—
supporting solutions through the entire lifecycle. It covers all types of items,
including the data (such as a lakehouse, data warehouse, data pipeline, dataflow,
or semantic model), the visuals (such as reports and dashboards), and Power BI
apps.
The flexibility to respond to changing business conditions and react quickly
outweighs the need for stricter governance and oversight.

Here are some guidelines to help become successful with business-led self-service data
and BI.

Teach your creators to use the same techniques that IT would use, like shared
semantic models and dataflows. Make use of a well-organized OneLake. Centralize
data to reduce maintenance, improve consistency, and reduce risk.
Focus on providing mentoring, training, resources, and documentation (described
in the Mentoring and user enablement article). The importance of these efforts
can't be overstated. Be prepared for skill levels of self-service content creators to
vary significantly. It's also common for a solution to deliver excellent business value
yet be built in such a way that it won't scale or perform well over time (as historic
data volumes increase). Having the COE available to help when these situations
arise is very valuable.
Provide guidance on the best way to use endorsements. The promoted
endorsement is for content produced by self-service creators. Consider reserving
use of the certified endorsement for enterprise BI content and managed self-
service BI content (described next).
Analyze the activity log to discover situations where the COE could proactively
contact self-service owners to offer helpful information. It's especially useful when
a suboptimal usage pattern is detected. For example, log activity could reveal
overuse of individual item sharing when Power BI app audiences or workspace
roles might be a better choice. The data from the activity log allows the COE to
offer support and advice to the business units. In turn, this information can help
increase the quality of solutions, while allowing the business to retain full
ownership and control of their content. For more information, see Auditing and
monitoring.
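
To make that last point more concrete, the following Python sketch summarizes report-sharing activity per user by calling the Power BI admin activity events REST API. It's a minimal sketch rather than an official sample: it assumes you've already acquired an access token for a Fabric or Power BI administrator identity (for example, with MSAL), the helper name sharing_activity_by_user is hypothetical, and the "ShareReport" activity name and response field names should be verified against the current API documentation.

```python
import requests
from collections import Counter

ACTIVITY_API = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"

def sharing_activity_by_user(token: str, day: str) -> Counter:
    """Count report-sharing events per user for one UTC day (for example, "2024-06-01").

    Acquiring `token` (an administrator access token) is outside the scope of this sketch.
    """
    headers = {"Authorization": f"Bearer {token}"}
    params = {
        "startDateTime": f"'{day}T00:00:00Z'",
        "endDateTime": f"'{day}T23:59:59Z'",
    }
    counts: Counter = Counter()
    url, first_call = ACTIVITY_API, True
    while url:
        response = requests.get(url, headers=headers, params=params if first_call else None)
        response.raise_for_status()
        payload = response.json()
        for event in payload.get("activityEventEntities", []):
            # Filter for the activity you care about; "ShareReport" is used here
            # as an example of individual item sharing.
            if event.get("Activity") == "ShareReport":
                counts[event.get("UserId", "unknown")] += 1
        # The API pages results; follow the continuation URI until it's exhausted.
        url, first_call = payload.get("continuationUri"), False
    return counts
```

A summary like this can highlight users who rely heavily on individual item sharing, which the COE can follow up on with guidance about workspace roles or Power BI app audiences.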

Managed self-service
Managed self-service BI is a blended approach to data and BI. The data is owned and
managed by a centralized team (such as IT, enterprise BI, or the COE), while
responsibility for reports and dashboards belongs to creators and subject matter experts
within the business units. Managed self-service BI is frequently a good strategy for team
BI and departmental BI solutions.

This approach is often called discipline at the core and flexibility at the edge, because
the data architecture is maintained by a single team with an appropriate level
of discipline and rigor. Business units have the flexibility to create reports and
dashboards based on centralized data. This approach allows report creators to be far
more efficient because they can remain focused on delivering value from their data
analysis and visuals.

Managed self-service BI is most suitable when:

Centralized data management aligns with the organization's data culture.
The organization has a team of BI experts who manage the data architecture.
There's value in the reuse of data by many self-service report creators across
organizational boundaries.
Self-service report creators need to produce analytical content at a pace faster
than the centralized team can accommodate.
Different users are responsible for handling data preparation, data modeling, and
report creation.

Here are some guidelines to help you become successful with managed self-service BI.

Teach users to separate model and report development. They can use live
connections to create reports based on existing semantic models. When the
semantic model is decoupled from the report, it promotes data reuse by many
reports and many authors. It also facilitates the separation of duties.
Use dataflows to centralize data preparation logic and to share commonly used
data tables—like date, customer, product, or sales—with many semantic model
creators. Refine the dataflow as much as possible, using friendly column names
and correct data types to reduce the downstream effort required by semantic
model authors, who consume the dataflow as a source. Dataflows are an effective
way to reduce the time involved with data preparation and improve data
consistency across semantic models. The use of dataflows also reduces the number
of data refreshes on source systems and allows fewer users who require direct
access to source systems.
When self-service creators need to augment an existing semantic model with
departmental data, educate them to create composite models. This feature allows
for an ideal balance of self-service enablement while taking advantage of the
investment in data assets that are centrally managed.
Use the certified endorsement for semantic models and dataflows to help content
creators identify trustworthy sources of data.
Include consistent branding on all reports to indicate who produced the content
and who to contact for help. Branding is particularly helpful to distinguish content
that is produced by self-service creators. A small image or text label in the report
footer is valuable when the report is exported from the Fabric portal.
Consider implementing separate workspaces for storing data and reports. This
approach allows for better clarity on who is responsible for content. It also allows
for more restrictive workspace role assignments. That way, report creators can
only publish content to their reporting workspace, and read and build semantic
model permissions allow creators to create new reports with row-level security
(RLS) in effect, when applicable.
planning. For more information about RLS, see Content creator security planning.
Use the Power BI REST APIs to compile an inventory of Power BI items. Analyze the
ratio of semantic models to reports to evaluate the extent of semantic model
reuse.
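
As an illustration of that last guideline, the following Python sketch compares the number of reports to semantic models by using the Power BI admin REST API. It's a minimal sketch under a few assumptions: you've already acquired an access token for a Power BI administrator identity, paging (which a large tenant would need) is omitted for brevity, and the helper name semantic_model_reuse is hypothetical.

```python
import requests
from collections import Counter

ADMIN_API = "https://api.powerbi.com/v1.0/myorg/admin"

def semantic_model_reuse(token: str) -> None:
    """Print a simple reuse summary: how many reports are bound to each semantic model."""
    headers = {"Authorization": f"Bearer {token}"}

    # Tenant-wide inventory of semantic models (datasets) and reports.
    datasets = requests.get(f"{ADMIN_API}/datasets", headers=headers).json().get("value", [])
    reports = requests.get(f"{ADMIN_API}/reports", headers=headers).json().get("value", [])

    # Count how many reports reference each semantic model.
    reports_per_model = Counter(r["datasetId"] for r in reports if r.get("datasetId"))
    single_use = [d for d in datasets if reports_per_model.get(d["id"], 0) <= 1]

    print(f"Semantic models: {len(datasets)}")
    print(f"Reports: {len(reports)}")
    if datasets:
        print(f"Average reports per semantic model: {len(reports) / len(datasets):.1f}")
    print(f"Semantic models used by at most one report: {len(single_use)}")
```

A low average, or a long list of single-use semantic models, suggests opportunities to consolidate models and promote reuse through endorsed content.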

Enterprise
Enterprise is a centralized approach to delivering data and BI solutions in which all
solution content is owned and managed by a centralized team. This team is usually IT,
enterprise BI, or the COE.

Enterprise is most suitable when:

Centralizing content management with a single team aligns with the organization's
data culture.
The organization has data and BI expertise to manage all items end-to-end.
The content needs of consumers are well-defined, and there's little need to
customize or explore data beyond the reporting solution that's delivered.
Content ownership and direct access to data needs to be limited to a small
number of experts and owners.
The data is highly sensitive or subject to regulatory requirements.

Here are some guidelines to help you become successful with enterprise data and BI.

Implement a rigorous process for use of the certified endorsement for content.
Not all enterprise content needs to be certified, but much of it probably should be.
Certified content should indicate that data quality has been validated. Certified
content should also follow change management rules, have formal support, and be
fully documented. Because certified content has passed rigorous standards, the
expectations for trustworthiness are higher.
Include consistent branding on enterprise BI reports to indicate who produced the
content, and who to contact for help. A small image or text label in the report
footer is valuable when the report is exported by a user.
If you use specific report branding to indicate enterprise BI content, be careful with
the save a copy functionality that would allow a user to download a copy of a
report and personalize it. Although this functionality is an excellent way to bridge
enterprise BI with managed self-service BI, it dilutes the value of the branding. A
more seamless solution is to provide a separate Power BI Desktop template file for
self-service authors. The template defines a starting point for report creation with a
live connection to an existing semantic model, and it doesn't include branding. The
template file can be shared as a link within a Power BI app, or from the community
portal.

Ownership transfers
Occasionally, the ownership of a particular solution might need to be transferred to
another team. An ownership transfer from a business unit to a centralized team can
happen when:

A business-led solution is used by a significant number of users, or it now supports
critical business decisions. In these cases, the solution should be managed by a
team with processes in place to implement higher levels of governance and
support.
A business-led solution is a candidate to be used far more broadly throughout the
organization, so it needs to be managed by a team who can set security and
deploy content widely throughout the organization.
A business unit no longer has the expertise, budget, or time available to continue
managing the content, but the business need for the content remains.
The size or complexity of a solution has grown to a point where a different data
architecture or redesign is required.
A proof of concept is ready to be operationalized.

The COE should have well-documented procedures for identifying when a solution is a
candidate for ownership transfer. It's very helpful if help desk personnel know what to
look for as well. Having a customary pattern for self-service creators to build and grow a
solution, and hand it off in certain circumstances, is an indicator of a productive and
healthy data culture. A simple ownership transfer could be addressed during COE office
hours; a more complex transfer could warrant a small project managed by the COE.

Note

There's potential that the new owner will need to do some refactoring and data
validations before they're willing to take full ownership. Refactoring is most likely to
occur with the less visible aspects of data preparation, data modeling, and
calculations. If there are any manual steps or flat file sources, now is an ideal time
to apply those enhancements. The branding of reports and dashboards might also
need to change (for example, if there's a footer indicating report contact or a text
label indicating that the content is certified).

It's also possible for a centralized team to transfer ownership to a business unit. It could
happen when:

The team with domain knowledge is better equipped to own and manage the
content going forward.
The centralized team has created the solution for a business unit that doesn't have
the skills to create it from scratch, but it can maintain and extend the solution
going forward.

 Tip

Don't forget to recognize and reward the work of the original creator, particularly if
ownership transfers are a common occurrence.

Considerations and key actions

Checklist - Here's a list of considerations and key actions you can take to strengthen
your approach to content ownership and management.

" Gain a full understanding of what's currently happening: Ensure you deeply


understand how content ownership and management is happening throughout the
organization. Recognize that there likely won't be a one-size-fits-all approach to
apply uniformly across the entire organization. Review the implementation planning
usage scenarios to understand how Power BI and Fabric can be used in diverse
ways.
" Conduct discussions: Determine what is currently working well, what isn't working
well, and what the desired balance is between the three ownership strategies. If
necessary, schedule discussions with specific people on various teams. Develop a
plan for moving from the current state to the desired state.
" Perform an assessment: If your enterprise data team currently has challenges
related to scheduling and priorities, do an assessment to determine if a managed
self-service strategy can be put in place to empower more content creators
throughout the organization. Managed self-service data and BI can be extremely
effective on a global scale.
" Clarify terminology: Clarify terms used in your organization for owner, data
steward, and subject matter expert.
" Assign clear roles and responsibilities: Make sure roles and responsibilities for
owners, stewards, and subject matter experts are documented and well understood
by everyone involved. Include backup personnel.
" Ensure community involvement: Ensure that all your content owners—from both
the business and IT—are part of your community of practice.
" Create user guidance for owners and contacts in Fabric: Determine how you will
use the contacts feature in Fabric. Communicate with content creators about how it
should be used, and why it's important.
" Create a process for handling ownership transfers: If ownership transfers occur
regularly, create a process for how it will work.
" Support your advanced content creators: Determine your strategy for using
external tools for advanced authoring capabilities and increased productivity.

Questions to ask

Use questions like those found below to assess content ownership and management.

Do central teams that are responsible for Fabric have a clear understanding of who
owns what BI content? Is there a distinction between report and data items, or
different item types (like Power BI semantic models, data science notebooks, or
lakehouses)?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization, and how do they
differ between key business units?
What activities do business analytical teams perform (for example, data
integration, data modeling, or reporting)?
What kinds of roles in the organization are expected to create and own content?
Is it limited to central teams, analysts, or also functional roles, like sales?
Where does the organization sit on the spectrum of business-led self-service,
managed self-service, or enterprise? Does it differ between key business units?
Do strategic data and BI solutions have ownership roles and stewardship roles that
are clearly defined? Which are missing?
Are content creators and owners also responsible for supporting and updating
content once it's released? How effective is the ownership of content support and
updates?
Is a clear process in place to transfer ownership of solutions (where necessary)? An
example is when an external consultant creates or updates a solution.
Do data sources have data stewards or subject matter experts (SMEs) who serve as
a special point of contact?
If your organization is already using Fabric or Power BI, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Maturity levels

The following maturity levels will help you assess the current state of your content
ownership and management.

100: Initial
• Self-service content creators own and manage content in an uncontrolled way, without a specific strategy.
• A high ratio of semantic models to reports exists. When many semantic models each support only one report, it indicates opportunities to improve data reusability, improve trustworthiness, reduce maintenance, and reduce the number of duplicate semantic models.
• Discrepancies between different reports are common, causing distrust of content produced by others.

200: Repeatable
• A plan is in place for which content ownership and management strategy to use and in which circumstances.
• Initial steps are taken to improve the consistency and trustworthiness levels for self-service efforts.
• Guidance for the user community is available that includes expectations for self-service versus enterprise content.
• Roles and responsibilities are clear and well understood by everyone involved.

300: Defined
• Managed self-service is a priority and an area of investment to further advance the data culture. The priority is to allow report creators the flexibility they need while using well-managed, secure, and trustworthy data sources.
• Report branding is consistently used to indicate who produced the content.
• A mentoring program exists to educate self-service content creators on how to apply best practices and make good decisions.

400: Capable
• Criteria are defined to align governance requirements for self-service versus enterprise content.
• There's a plan in place for how to request and handle ownership transfers.
• Managed self-service, and techniques for the reuse of data, are commonly used and well understood.

500: Efficient
• Proactive steps are taken to communicate with users when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk.
• Third-party tools are used by highly proficient content creators to improve productivity and efficiency.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
scope of content delivery.



Microsoft Fabric adoption roadmap:
Content delivery scope
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

The four delivery scopes described in this article include personal, team, departmental,
and enterprise. To be clear, the scope of a delivered data and business intelligence (BI)
solution refers to the number of people who might view or use the solution, though the
impact extends well beyond that number. The scope strongly influences best
practices for not only content distribution, but also content management, security, and
information protection. The scope has a direct correlation to the level of governance
(such as requirements for change management, support, or documentation), the extent
of mentoring and user enablement, and needs for user support. It also influences user
licensing decisions.

The related content ownership and management article makes similar points. Whereas
the focus of that article was on the content creator, the focus of this article is on the
target content usage. Both inter-related aspects need to be considered to arrive at
governance decisions and the Center of Excellence (COE) operating model.

) Important

Not all data and solutions are equal. Be prepared to apply different levels of data
management and governance to different teams and various types of content.
Standardized rules are easier to maintain. However, flexibility or customization is
often necessary to apply the appropriate level of oversight for particular
circumstances. Your executive sponsor can prove invaluable by reaching consensus
across stakeholder groups when difficult situations arise.

Scope of content delivery


The following diagram focuses on the number of target consumers who will consume
the content.
The four scopes of content delivery shown in the above diagram include:

Personal: Personal solutions are, as the name implies, intended for use by the
creator. Sharing content with others isn't an objective. Therefore, a personal data
and BI solution has the fewest number of target consumers.
Team: Collaborates and shares content with a relatively small number of colleagues
who work closely together.
Departmental: Delivers content to a large number of consumers, who can belong
to a department or business unit.
Enterprise: Delivers content broadly across organizational boundaries to the
largest number of target consumers. Enterprise content is most often managed by
a centralized team and is subject to additional governance requirements.

Contrast the above four scopes of content delivery with the following diagram, which
has an inverse relationship with respect to the number of content creators.
The four scopes of content creators shown in the above diagram include:

Personal: Represents the largest number of creators because the data culture
encourages any user to work with data using business-led self-service data and BI
methods. Although managed self-service BI methods can be used, it's less
common with personal data and BI efforts.
Team: Colleagues within a team collaborate and share with each other by using
business-led self-service patterns. It has the next largest number of creators in the
organization. Managed self-service patterns could also begin to emerge as skill
levels advance.
Departmental: Involves a smaller population of creators. They're likely to be
considered power users who are using sophisticated tools to create sophisticated
solutions. Managed self-service practices are very common and highly encouraged.
Enterprise: Involves the smallest number of content creators because it typically
includes only professional data and BI developers who work in the BI team, the
COE, or in IT.

The content ownership and management article introduced the concepts of business-
led self-service, managed self-service, and enterprise. The most common alignment
between ownership and delivery scope is:

Business-led self-service ownership: Commonly deployed as personal and team solutions.
Managed self-service ownership: Can be deployed as personal, team, or
departmental solutions.
Enterprise ownership: Typically deployed as enterprise-scoped solutions.

Some organizations also equate self-service content with community-based support. That's
the case when self-service content creators and owners are responsible for supporting
the content they publish. The user support article describes multiple informal and formal
levels of support.

7 Note

The term sharing can be interpreted two ways: It's often used in a general way
related to sharing content with colleagues, which could be implemented multiple
ways. It can also reference a specific feature in Fabric, which is a specific
implementation where a user or group is granted access to a single item. In this
article, the term sharing is meant in a general way to describe sharing content with
colleagues. When the per-item permissions are intended, this article will make a
clear reference to that feature. For more information, see Report consumer
security planning.

Personal
The Personal delivery scope is about enabling an individual to gain analytical value. It's
also about allowing them to more efficiently perform business tasks through the
effective personal use of data, information, and analytics. It could apply to any type of
information worker in the organization, not just data analysts and developers.

Sharing content with others isn't the objective. Personal content can reside in Power BI
Desktop or in a personal workspace in the Fabric portal.

Here are the characteristics of creating content for a personal delivery scope.

The creator's primary intention is data exploration and analysis, rather than report
delivery.
The content is intended to be analyzed and consumed by one person: the creator.
The content might be an exploratory proof of concept that may, or may not, evolve
into a project.

Here are a few guidelines to help you become successful with content developed for
personal use.

Consider personal data and BI solutions to be like an analytical sandbox that has
little formal governance and oversight from the governance team or COE.
However, it's still appropriate to educate content creators that some general
governance guidelines could still apply to personal content. Valid questions to ask
include: Can the creator export the personal report and email it to others? Can the
creator store a personal report on a non-organizational laptop or device? What
limitations or requirements exist for content that contains sensitive data?
See the techniques described for business-led self-service and managed self-service
in the content ownership and management article. They're highly relevant
techniques that help content creators create efficient and effective personal data
and BI solutions.
Analyze data from the activity log to discover situations where personal solutions
appear to have expanded beyond the original intended usage. It's usually
discovered by detecting a significant amount of content sharing from a personal
workspace. (A hedged example of this kind of analysis follows the tip below.)

 Tip

For information about how users progress through the stages of user adoption, see
the Microsoft Fabric adoption roadmap maturity levels. For more information
about using the activity log, see Tenant-level auditing.
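
The following sketch is one way to do that kind of review programmatically. It's a minimal illustration rather than an official sample: it assumes you already have an Azure AD access token with the admin permissions needed to call the Power BI admin activity events REST API, and the event field names used here (Activity, WorkSpaceName, UserId), as well as how personal workspaces are named in the events, are assumptions to verify against what your own tenant returns.

# Minimal sketch (not an official sample): pull one UTC day of activity events
# from the Power BI admin REST API and count sharing-related events that appear
# to originate from personal workspaces. Verify field names and the personal
# workspace naming convention against your tenant's event payloads.
import requests
from collections import Counter

API = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"

def sharing_from_personal_workspaces(token: str, day: str) -> Counter:
    """Count sharing-like activities per user for one UTC day (day format: YYYY-MM-DD)."""
    headers = {"Authorization": f"Bearer {token}"}
    params = {
        # The API expects quoted datetime literals within a single UTC day.
        "startDateTime": f"'{day}T00:00:00.000Z'",
        "endDateTime": f"'{day}T23:59:59.999Z'",
    }
    counts: Counter = Counter()
    url, first = API, True
    while url:
        response = requests.get(url, headers=headers, params=params if first else None)
        response.raise_for_status()
        payload = response.json()
        for event in payload.get("activityEventEntities", []):
            activity = event.get("Activity", "")
            workspace = event.get("WorkSpaceName") or event.get("WorkspaceName") or ""
            # Assumption: personal workspaces surface with a "My workspace"-style name;
            # adjust this check to match the naming you actually observe.
            if "share" in activity.lower() and workspace.lower().startswith("my workspace"):
                counts[event.get("UserId", "unknown")] += 1
        # Follow the continuation URI until all pages for the day are read.
        url, first = payload.get("continuationUri"), False
    return counts

Reviewing counts like these on a regular schedule can highlight personal content that has, in practice, grown into team or departmental content and is a candidate to move into a governed workspace.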

Team
The Team delivery scope is focused on a team of people who work closely together, and
who are tasked with solving closely related problems using the same data. Collaborating
and sharing content with each other in a workspace is usually the primary objective.

Content is often shared among the team more informally as compared to departmental
or enterprise content. For instance, the workspace is often sufficient for consuming
content within a small team. It doesn't require the formality of publishing the workspace
to distribute it as an app. There isn't a specific number of users when team-based
delivery is considered too informal; each team can find the right number that works for
them.

Here are the characteristics of creating content for a team delivery scope.

Content is created, managed, and viewed among a group of colleagues who work
closely together.
Collaboration and co-management of content is the highest priority.
Formal delivery of content might occur for report viewers (especially for managers
of the team), but it's usually a secondary priority.
Reports aren't always highly sophisticated or attractive; functionality and access to
the information are what matter most.

Here are some guidelines to help you become successful with content developed for
team use.

Ensure the COE is prepared to support the efforts of self-service creators publishing
content for their team.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for a Power BI app. It's tempting to start with one workspace per team,
but that might not be flexible enough to satisfy all needs.
See the techniques described for business-led self-service and managed self-
service in the content ownership and management article. They're highly relevant
techniques that help content creators create efficient and effective team data and
BI solutions.

 Tip

For more information, see Workspace-level planning.

Departmental
Content is delivered to members of a department or business unit. Content distribution
to a larger number of consumers is a priority for departmental delivery scopes.

Here are the characteristics of departmental content delivery.

A few content creators typically publish content for colleagues to consume.


Formal delivery of reports by using Power BI apps is a high priority to ensure
consumers have the best experience.
Additional effort is made to deliver more sophisticated and polished reports.
Following best practices for data preparation and higher quality data modeling is
also expected.
Needs for change management and lifecycle management begin to emerge to
ensure release stability and a consistent experience for consumers.

Here are a few guidelines to help you become successful with departmental BI delivery.

Ensure that the COE is prepared to support the efforts of self-service creators.
Creators who publish content used throughout their department or business unit
might emerge as candidates to become champions. Or, they might become
candidates to join the COE as a satellite member.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. Several workspaces will likely be required to meet all the
needs of a large department or business unit.
Plan how Power BI apps will distribute departmental content to consumers. An app can
provide a significantly better user experience for consuming content. In many
cases, content consumers can be granted permissions to view content via the app
only, reserving workspace permissions management for content creators and
reviewers only. The use of app audience groups allows you to mix and match
content and target audience in a flexible way.
Be clear about what data quality validations have occurred. As the importance and
criticality level grows, expectations for trustworthiness grow too.
Ensure that adequate training, mentoring, and documentation is available to
support content creators. Best practices for data preparation, data modeling, and
data presentation will result in better quality solutions.
Provide guidance on the best way to use the promoted endorsement, and when
the certified endorsement could be permitted for departmental solutions.
Ensure that the owner is identified for all departmental content. Clarity on
ownership is helpful, including who to contact with questions, feedback,
enhancement requests, or support requests. In the Fabric portal, content owners
can set the contact list property for many types of items (like reports and
dashboards). The contact list is also used in security workflows. For example, when
a user is sent a URL to open an app but they don't have permission, they'll be
presented with an option to make a request for access.
Consider using deployment pipelines in conjunction with separate workspaces.
Deployment pipelines can support development, test, and production
environments, which provide more stability for consumers.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Include consistent branding on reports by:
Using departmental colors and styling to indicate who produced the content.
For more information, see Content ownership and management.
Adding a small image or text label to the report footer, which is valuable when
the report is exported from the Fabric portal.
Using a standard Power BI Desktop template file. For more information, see
Mentoring and user enablement.
Apply the techniques described for business-led self-service and managed self-
service content delivery in the Content ownership and management article. They're
highly relevant techniques that can help content creators to create efficient and
effective departmental solutions.
Enterprise
Enterprise content is typically managed by a centralized team and is subject to
additional governance requirements. Content is delivered broadly across organizational
boundaries.

Here are the characteristics of enterprise content delivery.

A centralized team of experts manages the content end-to-end and publishes it for
others to consume.
Formal delivery of data solutions like reports, lakehouses, and Power BI apps is a
high priority to ensure consumers have the best experience.
The content is highly sensitive, subject to regulatory requirements, or is considered
extremely critical.
Published enterprise-level semantic models and dataflows might be used as a
source for self-service creators, thus creating a chain of dependencies to the
source data.
Stability and a consistent experience for consumers are highly important.
Application lifecycle management, such as deployment pipelines and DevOps
techniques, is commonly used. Change management processes to review and
approve changes before they're deployed are commonly used for enterprise
content, for example, by a change review board or similar group.
Processes exist to gather requirements, prioritize efforts, and plan for new projects
or enhancements to existing content.
Integration with other enterprise-level data architecture and management services
could exist, possibly with other Azure services and Power Platform products.

Here are some guidelines to help you become successful with enterprise content
delivery.

Governance and oversight techniques described in the governance article are
relevant for managing an enterprise solution. Techniques primarily include change
management and lifecycle management.
Plan for how to effectively use Premium Per User or Fabric capacity licensing per
workspace. Align your workspace management strategy, like how workspaces will
be organized and secured, to the planned licensing strategy.
Plan how Power BI apps will distribute enterprise content to consumers. An app
can provide a significantly better user experience for consuming content. Align the
app distribution strategy with your workspace management strategy.
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Implement a rigorous process for use of the certified endorsement for enterprise
reports and apps. Data assets can be certified, too, when there's the expectation
that self-service creators will build solutions based on them. Not all enterprise
content needs to be certified, but much of it probably will be.
Make it a common practice to announce when changes will occur. For more
information, see the community of practice article for a description of
communication types.
Include consistent branding on reports, by:
Using specific colors and styling, which can also indicate who produced the
content. For more information, see Content ownership and management.
Adding a small image or text label to the report footer, which can be valuable
when the report is exported from the Fabric portal.
Using a standard Power BI Desktop template file. For more information, see
Mentoring and user enablement.
Actively use the lineage view to understand dependencies, perform impact
analysis, and communicate to downstream content owners when changes will
occur. (A hedged sketch of scripted impact analysis follows this list.)
See the techniques described for enterprise content delivery in the content
ownership and management article. They're highly relevant techniques that help
content creators create efficient and effective enterprise solutions.
See the techniques described in the system oversight article for auditing,
governing, and the oversight of enterprise content.
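
As a complement to the lineage view in the Fabric portal, impact analysis can also be scripted. The following is a hedged sketch, not an official sample: it assumes the admin metadata scanning (scanner) APIs are enabled for your tenant, that the caller's token has the necessary admin permissions, and that the result field names used here (workspaces, reports, datasetId) match the scanner output you actually receive. Verify all of these before relying on it.

# Minimal sketch (not an official sample): use the admin metadata scanning APIs
# to list which reports depend on a given semantic model, as a scripted
# complement to the lineage view. Field names are assumptions to verify.
import time
import requests

BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"

def reports_using_dataset(token: str, workspace_ids: list[str], dataset_id: str) -> list[str]:
    headers = {"Authorization": f"Bearer {token}"}
    # 1. Request a scan of the workspaces, asking for lineage information.
    scan = requests.post(
        f"{BASE}/getInfo?lineage=True&datasourceDetails=True",
        headers=headers,
        json={"workspaces": workspace_ids},
    )
    scan.raise_for_status()
    scan_id = scan.json()["id"]

    # 2. Poll until the scan finishes.
    while True:
        status = requests.get(f"{BASE}/scanStatus/{scan_id}", headers=headers)
        status.raise_for_status()
        if status.json().get("status") == "Succeeded":
            break
        time.sleep(5)

    # 3. Read the scan result and collect report names that reference the model.
    result = requests.get(f"{BASE}/scanResult/{scan_id}", headers=headers)
    result.raise_for_status()
    dependent_reports = []
    for workspace in result.json().get("workspaces", []):
        for report in workspace.get("reports", []):
            if report.get("datasetId") == dataset_id:
                dependent_reports.append(report.get("name", report.get("id", "unknown")))
    return dependent_reports

A list like this makes it easier to notify downstream report owners before a shared semantic model changes.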

Considerations and key actions

Checklist - Considerations and key actions you can take to strengthen your approach to
content delivery.

" Align goals for content delivery: Ensure that guidelines, documentation, and other
resources align with the strategic goals defined for Fabric adoption.
" Clarify the scopes for content delivery in your organization: Determine who each
scope applies to, and how each scope aligns with governance decisions. Ensure that
decisions and guidelines are consistent with how content ownership and
management is handled.
" Consider exceptions: Be prepared for how to handle situations when a smaller
team wants to publish content for an enterprise-wide audience.
Will it require the content be owned and managed by a centralized team? For
more information, see the Content ownership and management article, which
describes an inter-related concept with content delivery scope.
Will there be an approval process? Governance can become more complicated
when the content delivery scope is broader than the owner of the content. For
example, when an app that's owned by a divisional sales team is distributed to
the entire organization.
" Create helpful documentation: Ensure that you have sufficient training
documentation and support so that your content creators understand when it's
appropriate to use workspaces, apps, or per-item sharing (direct access or link).
" Create a licensing strategy: Ensure that you have a specific strategy in place to
handle Fabric licensing considerations. Create a process for how workspaces could
be assigned to each license type, and the prerequisites required for the type of
content that could be assigned to Premium.

) Important

At times this article refers to Power BI Premium or its capacity subscriptions (P
SKUs). Be aware that Microsoft is currently consolidating purchase options and
retiring the Power BI Premium per capacity SKUs. New and existing customers
should consider purchasing Fabric capacity subscriptions (F SKUs) instead.

For more information, see Important update coming to Power BI Premium
licensing and Power BI Premium FAQ.

Questions to ask

Use questions like those found below to assess content delivery scope.

Do central teams that are responsible for Fabric have a clear understanding of who
creates and delivers content? Does it differ by business area, or for different
content item types?
Which usage scenarios are in place, such as personal BI, team BI, departmental BI,
or enterprise BI? How prevalent are they in the organization? Are there advanced
scenarios, like advanced data preparation or advanced data model management,
or niche scenarios, like self-service real-time analytics?
For the identified content delivery scopes in place, to what extent are guidelines
being followed?
Are there trajectories for helpful self-service content to be "promoted" from
personal to team content delivery scopes and beyond? What systems and
processes enable sustainable, bottom-up scaling and distribution of useful self-
service content?
What are the guidelines for publishing content to, and using, personal
workspaces?
Are personal workspaces assigned to dedicated Fabric capacity? In what
circumstances are personal workspaces intended to be used?
On average, how many reports does someone have access to? How many reports
does an executive have access to? How many reports does the CEO have access
to?
If your organization is using Fabric or Power BI today, does the current workspace
setup comply with the content ownership and delivery strategies that are in place?
Is there a clear licensing strategy? How many licenses are used today? How many
tenants and capacities exist, who uses them, and why?
How do central teams decide what gets published to Premium (or Fabric)
dedicated capacity, and what uses shared capacity? Do development workloads
use separate Premium Per User (PPU) licensing to avoid affecting production
workloads?

Maturity levels

The following maturity levels will help you assess the current state of your content
delivery.

100: Initial
• Content is published for consumers by self-service creators in an uncontrolled way, without a specific strategy.

200: Repeatable
• Pockets of good practices exist. However, good practices are overly dependent on the knowledge, skills, and habits of the content creator.

300: Defined
• Clear guidelines are defined and communicated to describe what can and can't occur within each delivery scope. These guidelines are followed by some, but not all, groups across the organization.

400: Capable
• Criteria are defined to align governance requirements for self-service versus enterprise content.
• Guidelines for content delivery scope are followed by most, or all, groups across the organization.
• Change management requirements are in place to approve critical changes for content that's distributed to a larger-sized audience.
• Changes are announced and follow a communication plan. Content creators are aware of the downstream effects on their content. Consumers are aware of when reports and apps are changed.

500: Efficient
• Proactive steps are taken to communicate with users when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk.
• The business value that's achieved for deployed solutions is regularly evaluated.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
Center of Excellence (COE).



Microsoft Fabric adoption roadmap:
Center of Excellence
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

A data or analytics Center of Excellence (COE) is an internal team of technical and
business experts. The team actively assists others within the organization who are
working with data. The COE forms the nucleus of the broader community to advance
adoption goals, which align with the data culture vision.

A COE might also be known as a competency center, a capability center, or a center of
expertise. Some organizations use the term squad. Many organizations perform the COE
responsibilities within their data, analytics, or business intelligence (BI) team.

7 Note

Having a COE team formally recognized in your organizational chart is
recommended, but not required. What's most important is that the COE roles and
responsibilities are identified, prioritized, and assigned. It's common for a
centralized data or analytics team to take on many of the COE responsibilities;
some responsibilities might also reside within IT. For simplicity, in this series of
articles, COE means a specific group of people, although you might implement it
differently. It's also very common to implement the COE with a scope broader than
Fabric or Power BI alone: for instance, a Power Platform COE, a data COE, or an
analytics COE.

Goals for a COE


Goals for a COE include:

Evangelizing a data-driven culture.


Promoting the adoption of analytics.
Nurturing, mentoring, guiding, and educating internal users to increase their skills
and level of self-reliance.
Coordinating efforts and disseminating knowledge across organizational
boundaries.
Creating consistency and transparency for the user community, which reduces
friction and pain points related to finding relevant data and analytics content.
Maximizing the benefits of self-service BI, while reducing the risks.
Reducing technical debt by helping users make good decisions that increase
consistency and result in fewer inefficiencies.

) Important

One of the most powerful aspects of a COE is the cross-departmental insight into
how analytics tools like Fabric are used by the organization. This insight can reveal
which practices work well and which don't, which can facilitate a bottom-up
approach to governance. A primary goal of the COE is to learn which practices work
well, share that knowledge more broadly, and replicate best practices across the
organization.

Scope of COE responsibilities


The scope of COE responsibilities can vary significantly between organizations. In a way,
a COE can be thought of as a consultancy service because its members routinely provide
expert advice to the internal community of users. To varying degrees, most COEs handle
hands-on work too.

Common COE responsibilities include:

Mentoring and facilitating knowledge sharing within the internal Fabric community.
Holding office hours to engage with the internal Fabric community.
Conducting co-development projects and best practices reviews in order to
actively help business units deliver solutions.
Managing the centralized portal.
Producing, curating, and promoting training materials.
Creating documentation and other resources, such as template files, to encourage
consistent use of standards and best practices.
Applying, communicating, and assisting with governance guidelines.
Handling and assisting with system oversight and Fabric administration.
Responding to user support issues escalated from the help desk.
Developing solutions and/or proofs of concept.
Establishing and maintaining the BI platform and data architecture.
Communicating regularly with the internal community of users.

Staffing a COE
People who are good candidates as COE members tend to be those who:

Understand the analytics vision for the organization.


Have a desire to continually improve analytics practices for the organization.
Have a deep interest in, and expertise with, analytics tools such as Fabric.
Are interested in seeing Fabric used effectively and adopted successfully
throughout the organization.
Take the initiative to continually learn, adapt, and grow.
Readily share their knowledge with others.
Are interested in repeatable processes, standardization, and governance with a
focus on user enablement.
Are hyper-focused on collaboration with others.
Are comfortable working in an agile fashion.
Have an inherent interest in being involved and helping others.
Can effectively translate business needs into solutions.
Communicate well with both technical and business colleagues.

 Tip

If you have self-service content creators in your organization who constantly push
the boundaries of what can be done, they might be a great candidate to become a
recognized champion, or perhaps even a satellite member of the COE.

When recruiting for the COE, it's important to have a mix of complementary analytical
skills, technical skills, and business skills.

Roles and responsibilities


Very generalized roles within a COE are listed below. It's common for multiple people to
overlap roles, which is useful from a backup and cross-training perspective. It's also
common for the same person to serve multiple roles. For instance, most COE members
also serve as a coach or mentor.

COE leader: Manages the day-to-day operations of the COE. Interacts with the executive sponsor and other organizational teams, such as the data governance board, as necessary. For an overview of additional roles and responsibilities, see the Governance article.

Coach: Coaches and educates others on data and BI skills via office hours (community engagement), best practices reviews, or co-development projects. Oversees and participates in the discussion channel of the internal community. Interacts with, and supports, the champions network.

Trainer: Develops, curates, and delivers internal training materials, documentation, and resources.

Data analyst: Domain-specific subject matter expert. Acts as a liaison between the COE and the business unit. Content creator for the business unit. Assists with content certification. Works on co-development projects and proofs of concept.

Data modeler: Creates and manages data assets (such as shared semantic models and dataflows) to support other self-service content creators.

Report creator: Creates and publishes reports, dashboards, and metrics.

Data engineer: Plans for deployment and architecture, including integration with other services and data platforms. Publishes data assets that are utilized broadly across the organization (such as a lakehouse, data warehouse, data pipeline, dataflow, or semantic model).

User support: Assists with the resolution of data discrepancies and escalated help desk support issues.


As mentioned previously, the scope of responsibilities for a COE can vary significantly
between organizations. Therefore, the roles found for COE members can vary too.

Structuring a COE
The selected COE structure can vary among organizations. It's also possible for multiple
structures to exist inside of a single large organization. That's particularly true when
there are subsidiaries or when acquisitions have occurred.

7 Note

The following terms might differ from those defined for your organization, particularly
the meaning of federated, which tends to have many different IT-related meanings.
Centralized COE
A centralized COE comprises a single shared services team.

Pros:

There's a single point of accountability for a single team that manages standards,
best practices, and delivery end-to-end.
The COE is one group from an organizational chart perspective.
It's easy to start with this approach and then evolve to the unified or federated
model over time.

Cons:

A centralized team might have an authoritarian tendency to favor one-size-fits-all
decisions that don't always work well for all business units.
There can be a tendency to prefer IT skills over business skills.
Due to the centralized nature, it might be more difficult for the COE members to
sufficiently understand the needs of all business units.

Unified COE
A unified COE is a single, centralized, shared services team that has been expanded to
include embedded team members. The embedded team members are dedicated to
supporting a specific functional area or business unit.

Pros:

There's a single point of accountability for a single team that includes cross-
functional involvement from the embedded COE team members. The embedded
COE team members are assigned to various areas of the business.
The COE is one group from an organizational chart perspective.
The COE understands the needs of business units more deeply due to dedicated
members with domain expertise.

Cons:

The embedded COE team members, who are dedicated to a specific business unit,
have a different organizational chart responsibility than the people they serve
directly within the business unit. The organizational structure could potentially lead
to complications, differences in priorities, or necessitate the involvement of the
executive sponsor. Preferably, the executive sponsor has a scope of authority that
includes the COE and all involved business units to help resolve conflicts.
Federated COE
A federated COE comprises a shared services team (the core COE members) plus
satellite members from each functional area or major business unit. A federated team
works in coordination, even though its members reside in different business units.
Typically, satellite members are primarily focused on development activities to support
their business unit while the shared services personnel support the entire community.

Pros:

There's cross-functional involvement from satellite COE members who represent
their specific functional area and have domain expertise.
There's a balance of centralized and decentralized representation across the core
and satellite COE members.
When distributed data ownership situations exist—as could be the case when
business units take direct responsibility for data management activities—this
model is effective.

Cons:

Since core and satellite members span organizational boundaries, the federated
COE approach requires strong leadership, excellent communication, robust project
management, and ultra-clear expectations.
There's a higher risk of encountering competing priorities due to the federated
structure.
This approach typically involves part-time people and/or dotted line organizational
chart accountability that can introduce competing time pressures.

 Tip

Some organizations have success by using a rotational program. It involves
federated members joining the core COE for a period of time, such as six months.
This type of program allows federated members to learn best practices and
understand more deeply how and why things are done. Although each federated
member remains focused on their specific business unit, they gain a deeper
understanding of the organization's challenges. This deeper understanding leads to
a more productive partnership over time.

Decentralized COE
Decentralized COEs are independently managed by business units.
Pros:

A specialized data culture exists that's focused on the business unit, making it
easier to learn quickly and adapt.
Policies and practices are tailored to each business unit.
Agility, flexibility, and priorities are focused on the individual business unit.

Cons:

There's a risk that decentralized COEs operate in isolation. As a result, they might
not share best practices and lessons learned outside of their business unit.
Collaboration with a centralized team might be informal and/or inconsistent.
Inconsistent policies are created and applied across business units.
It's difficult to scale a decentralized model.
There's potential rework to bring one or more decentralized COEs in alignment
with organizational-wide policies.
Larger business units with significant funding might have more resources available
to them, which might not serve cost optimization goals from an organizational-
wide perspective.

) Important

A highly centralized COE tends to be more authoritarian, while highly decentralized
COEs tend to be more siloed. Each organization will need to weigh the pros and
cons that apply to them to determine the best choice. For most organizations, the
most effective approach tends to be the unified or federated structure, which
bridges organizational boundaries.

Funding the COE


The COE might obtain its operating budget in multiple ways:

Cost center.
Profit center with project budget(s).
A combination of cost center and profit center.

When the COE operates as a cost center, it absorbs the operating costs. Generally, it
involves an approved annual budget. Sometimes this is called a push engagement
model.

When the COE operates as a profit center (for at least part of its budget), it could accept
projects throughout the year based on funding from other business units. Sometimes
this is called a pull engagement model.

Funding is important because it impacts the way the COE communicates and engages
with the internal community. As the COE experiences more and more successes, they
might receive more requests from business units for help. It's especially the case as
awareness grows throughout the organization.

 Tip

The choice of funding model can determine how the COE actively grows its
influence and ability to help. The funding model can also have a big impact on
where authority resides and how decision-making works. Further, it impacts the
types of services a COE can offer, such as co-development projects and/or best
practices reviews. For more information, see the Mentoring and user enablement
article.

Some organizations cover the COE operating costs with chargebacks to business units
based on their usage of Fabric. For a shared capacity, this could be based on the
number of active users. For Premium capacity, chargebacks could be allocated based on
which business units are using the capacity. Ideally, chargebacks are directly correlated
to the business value gained.
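
As a simple illustration of the active-user approach, the following sketch allocates a monthly capacity cost across business units in proportion to their active users. All figures are made-up placeholders, and the proportional rule itself is only one possible allocation model to adapt, not a prescribed one.

# Illustrative calculation only: split a monthly capacity cost across business
# units in proportion to active users. Values below are placeholder examples.
def allocate_chargebacks(monthly_cost: float, active_users: dict[str, int]) -> dict[str, float]:
    """Return each business unit's share of monthly_cost, proportional to active users."""
    total_users = sum(active_users.values())
    return {
        unit: round(monthly_cost * users / total_users, 2)
        for unit, users in active_users.items()
    }

# Example: a 5,000 (in your currency) monthly cost split across three units.
print(allocate_chargebacks(5000.0, {"Sales": 120, "Finance": 45, "Operations": 35}))
# -> {'Sales': 3000.0, 'Finance': 1125.0, 'Operations': 875.0}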

) Important

At times this article refers to Power BI Premium or its capacity subscriptions (P
SKUs). Be aware that Microsoft is currently consolidating purchase options and
retiring the Power BI Premium per capacity SKUs. New and existing customers
should consider purchasing Fabric capacity subscriptions (F SKUs) instead.

For more information, see Important update coming to Power BI Premium
licensing and Power BI Premium FAQ.

Considerations and key actions

Checklist - Considerations and key actions you can take to establish or improve your
COE.
" Define the scope of responsibilities for the COE: Ensure that you're clear on what
activities the COE can support. Once the scope of responsibilities is known, identify
the skills and competencies required to fulfill those responsibilities.
" Identify gaps in the ability to execute: Analyze whether the COE has the required
systems and infrastructure in place to meet its goals and scope of responsibilities.
" Determine the best COE structure: Identify which COE structure is most
appropriate (centralized, unified, federated, or decentralized). Verify that staffing,
roles and responsibilities, and appropriate organizational chart relationships (HR
reporting) are in place.
" Plan for future growth: If you're starting out with a centralized or decentralized
COE, consider how you will scale the COE over time by using the unified or
federated approach. Plan for any actions that you can take now that'll facilitate
future growth.
" Identify customers: Identify the internal community members, and any external
customers, to be served by the COE. Decide how the COE will generally engage with
those customers, whether it's a push model, pull model, or both models.
" Verify the funding model for the COE: Decide whether the COE is purely a cost
center with an operating budget, whether it will operate partially as a profit center,
and/or whether chargebacks to other business units will be required.
" Create a communication plan: Create you communications strategy to educate the
internal community of users about the services the COE offers, and how to engage
with the COE.
" Create goals and metrics: Determine how you'll measure effectiveness for the COE.
Create KPIs (key performance indicators) or OKRs (objectives and key results) to
validate that the COE consistently provides value to the user community.

Questions to ask

Use questions like those found below to assess the effectiveness of a COE.

Is there a COE? If so, who is in the COE and what's the structure?
If there isn't a COE, is there a central team that performs a similar function? Do
data decision makers in the organization understand what a COE does?
If there isn't a COE, does the organization aspire to create one? Why or why not?
Are there opportunities for federated or decentralized COE models due to a mix of
enterprise and departmental solutions?
Are there any missing roles and responsibilities from the COE?
To what extent does the COE engage with the user community? Do they mentor
users? Do they curate a centralized portal? Do they maintain centralized resources?
Is the COE recognized in the organization? Does the user community consider
them to be credible and helpful?
Do business users see central teams as enabling or restricting their work with data?
What's the COE funding model? Do COE customers financially contribute in some
way to the COE?
How consistent and transparent is the COE with their communication?

Maturity levels

The following maturity levels will help you assess the current state of your COE.

100: Initial
• One or more COEs exist, or the activities are performed within the data team, BI team, or IT. There's no clarity on the specific goals or expectations for responsibilities.
• Requests for assistance from the COE are handled in an unplanned manner.

200: Repeatable
• The COE is in place with a specific charter to mentor, guide, and educate self-service users. The COE seeks to maximize benefits of self-service approaches to data and BI while reducing the risks.
• The goals, scope of responsibilities, staffing, structure, and funding model are established for the COE.

300: Defined
• The COE operates with active involvement from all business units in a unified or federated mode.

400: Capable
• The goals of the COE align with organizational goals, and they're reassessed regularly.
• The COE is well known throughout the organization, and consistently proves its value to the internal user community.

500: Efficient
• Regular reviews of KPIs or OKRs evaluate COE effectiveness in a measurable way.
• Agility and implementing continual improvements from lessons learned (including scaling out methods that work) are top priorities for the COE.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about
implementing governance guidelines, policies, and processes.



Microsoft Fabric adoption roadmap:
Governance
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

Data governance is a broad and complex topic. This article introduces key concepts and
considerations. It identifies important actions to take when adopting Microsoft Fabric,
but it's not a comprehensive reference for data governance.

As defined by the Data Governance Institute , data governance is "a system of decision
rights and accountabilities for information-related processes, executed according to
agreed-upon models which describe who can take what actions, with what information,
and when, under what circumstances, using what methods."

The term data governance is a misnomer. The primary focus for governance isn't on the
data itself. The focus is on governing what users do with the data. Put another way: the
true focus is on governing users' behavior to ensure organizational data is well
managed.

When focused on self-service data and business intelligence (BI), the primary goals of
governance are to achieve the proper balance of:

User empowerment: Empower the internal user community to be productive and
efficient, within requisite guardrails.
Regulatory compliance: Comply with the organization's industry, governmental,
and contractual regulations.
Internal requirements: Adhere to the organization's internal requirements.

The optimal balance between control and empowerment will differ between
organizations. It's also likely to differ among different business units within an
organization. You'll be most successful with a platform like Fabric when you put as much
emphasis on user empowerment as on clarifying its practical usage within established
guardrails.

 Tip
Think of governance as a set of established guidelines and formalized policies. All
governance guidelines and policies should align with your organizational data
culture and adoption objectives. Governance is enacted on a day-to-day basis by
your system oversight (administration) activities.

Governance strategy
When considering data governance in any organization, the best place to start is by
defining a governance strategy. By focusing first on the strategic goals for data
governance, all detailed decisions when implementing governance policies and
processes can be informed by the strategy. In turn, the governance strategy will be
defined by the organization's data culture.

Governance decisions are implemented with documented guidance, policies, and
processes. Objectives for governance of a self-service data and BI platform, such as
Fabric, include:

Empowering users throughout the organization to use data and make decisions,
within the defined boundaries.
Improving the user experience by providing clear and transparent guidance (with
minimal friction) on what actions are permitted, why, and how.
Ensuring that the data usage is appropriate for the needs of the business.
Ensuring that content ownership and stewardship responsibilities are clear. For
more information, see the Content ownership and management article.
Enhancing the consistency and standardization of working with data across
organizational boundaries.
Reducing risk of data leakage and misuse of data. For more information, see the
information protection and data loss prevention series of articles.
Meeting regulatory, industry, and internal requirements for the proper use of data.

 Tip

A well-executed data governance strategy makes it easier for more users to work
with data. When governance is approached from the perspective of user
empowerment, users are more likely to follow the documented processes.
Accordingly, the users become a trusted partner too.

Governance success factors


Governance isn't well-received when it's enacted with top-down mandates that are
focused more on control than empowerment. Governing Fabric is most successful when:

The most lightweight governance model that accomplishes required objectives is used.
Governance is approached on an iterative basis and doesn't significantly impede
productivity.
A bottom-up approach to formulating governance guidelines is used whenever
practical. The Center of Excellence (COE) and/or the data governance team
observes successful behaviors that are occurring within a business unit. The COE
then takes action to scale out to other areas of the organization.
Governance decisions are co-defined with input from different business units
before they're enacted. Although there are times when a specific directive is
necessary (particularly in heavily regulated industries), mandates should be the
exception rather than the rule.
Governance needs are balanced with flexibility and the ability to be productive.
Governance requirements can be satisfied as part of users' regular workflow,
making it easier for users to do the right thing in the right way with little friction.
The answer to new requests for data isn't "no" by default, but rather "yes and" with
clear, simple, transparent rules for what governance requirements are for data
access, usage, and sharing.
Users that need access to data have incentive to do so through normal channels,
complying with governance requirements, rather than circumventing them.
Governance decisions, policies, and requirements for users to follow are in
alignment with organizational data culture goals as well as other existing data
governance initiatives.
Decisions that affect what users can—and can't—do aren't made solely by a
system administrator.

Introduce governance to your organization


There are three primary timing methods that organizations take when introducing
governance for Fabric.
The methods in the above diagram include:

Method 1: Roll out Fabric first, then introduce governance. Fabric is made widely available to
users in the organization as a new self-service data and BI tool. Then, at some time in
the future, a governance effort begins. This method prioritizes agility.

Method 2: Full governance planning first, then roll out Fabric. Extensive governance planning
occurs prior to permitting users to begin using Fabric. This method prioritizes control
and stability.

Method 3: Iterative governance planning with rollouts of Fabric in stages. Just enough
governance planning occurs initially. Then Fabric is iteratively rolled out in stages to
individual teams while iterative governance enhancements occur. This method equally
prioritizes agility and governance.

Choose method 1 when Fabric is already used for self-service scenarios, and you're
ready to start working in a more efficient manner.

Choose method 2 when your organization already has a well-established approach to


governance that can be readily expanded to include Fabric.

Choose method 3 when you want to have a balance of control and agility. This balanced
approach is the best choice for most organizations and most scenarios.

Each method is described in the following sections.


Method 1: Roll out Fabric first
Method 1 prioritizes agility and speed. It allows users to quickly get started creating
solutions. This method occurs when Fabric has been made widely available to users in
the organization as a new self-service data and BI tool. Quick wins and some successes
are achieved. At some point in the future, a governance effort begins, usually to bring
order to an unacceptable level of chaos since the self-service user population didn't
receive sufficient guidance.

Pros:

Fastest to get started


Highly capable users can get things done quickly
Quick wins are achieved

Cons:

Higher effort to establish governance once Fabric is used prevalently throughout
the organization
Resistance from self-service users who are asked to change what they've been
doing
Self-service users need to figure out things on their own, which is inefficient and
results in inconsistencies
Self-service users need to use their best judgment, which produces technical debt
to be resolved

See other possible cons in the Governance challenges section below.

Method 2: In-depth governance planning first


Method 2 prioritizes control and stability. It lies at the opposite end of the spectrum
from method 1. Method 2 involves doing extensive governance planning before rolling
out Fabric. This situation is most likely to occur when the implementation of Fabric is led
by IT. It's also likely to occur when the organization operates in a highly regulated
industry, or when an existing data governance board imposes significant prerequisites
and up-front requirements.

Pros:

More fully prepared to meet regulatory requirements


More fully prepared to support the user community

Cons:
Favors enterprise content development more than self-service
Slower to allow the user population to begin to get value and improve decision-
making
Encourages poor habits and workarounds when there's a significant delay in
allowing the use of data for decision-making

Method 3: Iterative governance with rollouts


Method 3 seeks a balance between agility and governance. It's an ideal scenario that
does just enough governance planning upfront. Frequent and continual governance
improvements iteratively occur over time alongside Fabric development projects that
deliver value.

Pros:

Puts equal priority on governance and user productivity


Emphasizes a learning as you go mentality
Encourages iterative releases to groups of users in stages

Cons:

Requires a high level of communication to be successful with agile governance
practices
Requires additional discipline to keep documentation and training current
Introducing new governance guidelines and policies too often causes a certain
level of user disruption

For more information about up-front planning, see the Preparing to migrate to Power BI
article.

Governance challenges
If your organization has implemented Fabric without a governance approach or strategic
direction (as described above by method 1), there could be numerous challenges
requiring attention. Depending on the approach that you've taken and your current
state, some of the following challenges could be applicable to your organization.

Strategy challenges
Lack of a cohesive data governance strategy that aligns with the business strategy
Lack of executive support for governing data as a strategic asset
Insufficient adoption planning for advancing adoption and the maturity level of BI
and analytics

People challenges
Lack of aligned priorities between centralized teams and business units
Lack of identified champions with sufficient expertise and enthusiasm throughout
the business units to advance organizational adoption objectives
Lack of awareness of self-service best practices
Resistance to following newly introduced governance guidelines and policies
Duplicate effort spent across business units
Lack of clear accountability, roles, and responsibilities

Process challenges
Lack of clearly defined processes resulting in chaos and inconsistencies
Lack of standardization or repeatability
Insufficient ability to communicate and share lessons learned
Lack of documentation and over-reliance on tribal knowledge
Inability to comply with security and privacy requirements

Data quality and data management challenges


Sprawl of data and reports
Inaccurate, incomplete, or outdated data
Lack of trust in the data, especially for content produced by self-service content
creators
Inconsistent reports produced without sufficient data validation
Valuable data not used or difficult to access
Fragmented, siloed, and duplicated data
Lack of data catalog, inventory, glossary, or lineage
Unclear data ownership and stewardship

Skills and data literacy challenges


Varying levels of ability to interpret, create, and communicate with data effectively
Varying levels of technical skillsets and skill gaps
Lack of ability to confidently manage data diversity and volume
Underestimating the level of complexity for BI solution development and
management throughout its entire lifecycle
Short tenure with continual staff transfers and turnover
Coping with the speed of change for cloud services

 Tip

Identifying your current challenges, as well as your strengths, is essential for
proper governance planning. There's no single straightforward solution to the
challenges listed above. Each organization needs to find the right balance and
approach that solves the challenges that are most important to them. Reviewing the
challenges presented above will help you identify how they might affect your
organization, so you can start thinking about what the right solution is for your
circumstances.

Governance planning
Some organizations have implemented Fabric without a governance approach or clear
strategic direction (as described above by method 1). In this case, the effort to begin
governance planning can be daunting.

If a formal governance body doesn't currently exist in your organization, then the focus
of your governance planning and implementation efforts will be broader. If, however,
there's an existing data governance board in the organization, then your focus is
primarily to integrate with existing practices and customize them to accommodate the
objectives for self-service and enterprise data and BI scenarios.

) Important

Governance is a big undertaking, and it's never completely done. Relentlessly
prioritizing and iterating on improvements will make the scope more manageable.
If you track your progress and accomplishments each week and each month, you'll
be amazed at the impact over time. The maturity levels at the end of each article in
this series can help you to assess where you are currently.

Some potential governance planning activities and outputs that you might find valuable
are described next.

Strategy
Key activities:
Conduct a series of workshops to gather information and assess the current state
of data culture, adoption, and data and BI practices. For guidance about how to
gather information and define the current state of BI adoption, including
governance, see BI strategic planning.
Use the current state assessment and information gathered to define the desired
future state, including governance objectives. For guidance about how to use this
current state definition to decide on your desired future state, see BI tactical
planning.
Validate the focus and scope of the governance program.
Identify existing bottom-up initiatives in progress.
Identify immediate pain points, issues, and risks.
Educate senior leadership about governance, and ensure executive sponsorship is
sufficient to sustain and grow the program.
Clarify where Power BI fits into the overall BI and analytics strategy for the
organization.
Assess internal factors such as organizational readiness, maturity levels, and key
challenges.
Assess external factors such as risk, exposure, regulatory, and legal requirements—
including regional differences.

Key output:

Business case with cost/benefit analysis


Approved governance objectives, focus, and priorities that are in alignment with
high-level business objectives
Plan for short-term goals and priorities (quick wins)
Plan for long-term and deferred goals and priorities
Success criteria and measurable key performance indicators (KPIs)
Known risks documented with a mitigation plan
Plan for meeting industry, governmental, contractual, and regulatory requirements
that impact BI and analytics in the organization
Funding plan

People
Key activities:

Establish a governance board and identify key stakeholders.


Determine focus, scope, and a set of responsibilities for the governance board.
Establish a COE.
Determine focus, scope, and a set of responsibilities for the COE.
Define roles and responsibilities.
Confirm who has decision-making, approval, and veto authority.

Key output:

Charter for the governance board


Charter and priorities for the COE
Staffing plan
Roles and responsibilities
Accountability and decision-making matrix
Communication plan
Issue management plan

Policies and processes


Key activities:

Analyze immediate pain points, issues, risks, and areas to improve the user
experience.
Prioritize data policies to be addressed by order of importance.
Identify existing processes in place that work well and can be formalized.
Determine how new data policies will be socialized.
Decide to what extent data policies might differ or be customized for different
groups.

Key output:

Process for how data policies and documentation will be defined, approved,
communicated, and maintained
Plan for requesting valid exceptions and departures from documented policies

Project management
The implementation of the governance program should be planned and managed as a
series of projects.

Key activities:

Establish a timeline with priorities and milestones.


Identify related initiatives and dependencies.
Identify and coordinate with existing bottom-up initiatives.
Create an iterative project plan that's aligned with high-level prioritization.
Obtain budget approval and funding.
Establish a tangible way to track progress.
Key output:

Project plan with iterations, dependencies, and sequencing


Cadence for retrospectives with a focus on continual improvements

) Important

The scope of the activities listed above that are useful to take on varies
considerably between organizations. If your organization doesn't have existing
processes and workflows for creating these types of outputs, refer to the guidance
found in the adoption roadmap conclusion for some helpful resources, as well as
the implementation planning BI strategy articles.

Governance policies

Decision criteria
All governance decisions should be in alignment with the established goals for
organizational adoption. Once the strategy is clear, more tactical governance decisions
will need to be made which affect the day-to-day activities of the self-service user
community. These types of tactical decisions correlate directly to the data policies that
get created.

How you go about making governance decisions depends on:

Who owns and manages the data and BI content? The Content ownership and
management article introduced three types of strategies: business-led self-service,
managed self-service, and enterprise. Who owns and manages the content has a
significant impact on governance requirements.
What is the scope for delivery of the data and BI content? The Content delivery
scope article introduced four scopes for delivery of content: personal, team,
departmental, and enterprise. The scope of delivery has a considerable impact on
governance requirements.
What is the data subject area? The data itself, including its sensitivity level, is an
important factor. Some data domains inherently require tighter controls. For
instance, personally identifiable information (PII), or data subject to regulations,
should be subject to stricter governance requirements than less sensitive data.
Is the data, and/or the BI solution, considered critical? If you can't make an
informed decision easily without this data, you're dealing with critical data
elements. Certain reports and apps could be deemed critical because they meet a
set of predefined criteria. For instance, the content is delivered to executives.
Predefined criteria for what's considered critical helps everyone have clear
expectations. Critical data is usually subject to stricter governance requirements.

 Tip

Different combinations of the above four criteria will result in different governance
requirements for Fabric content.

Key Fabric governance decisions


As you explore your goals and objectives and pursue more tactical data governance
decisions as described above, it will be important to determine what the highest
priorities are. Deciding where to focus your efforts can be challenging.

The following list includes items that you might choose to prioritize when introducing
governance for Fabric.

Recommendations and requirements for content ownership and management


Recommendations and requirements for content delivery scope
Recommendations and requirements for content distribution and sharing with
colleagues, as well as for external users, such as customers, partners, or vendors
How users are permitted to work with regulated data and highly sensitive data
Allowed use of unverified data sources that are unknown to IT
When manually maintained data sources, such as Excel or flat files, are permitted
Who is permitted to create a workspace
How to manage workspaces effectively
How personal workspaces are effectively used
Which workspaces are assigned to Fabric capacity
Who is allowed to be a Fabric administrator
Security, privacy, and data protection requirements, and allowed actions for
content assigned to each sensitivity label
Allowed or encouraged use of personal gateways
Allowed or encouraged use of self-service purchasing of user licenses
Requirements for who can certify content, as well as requirements that must be
met
Application lifecycle management for managing content through its entire
lifecycle, including development, test, and production stages
Additional requirements applicable to critical content, such as data quality
verifications and documentation
Requirements to use standardized master data and common data definitions to
improve consistency across data assets
Recommendations and requirements for use of external tools by advanced content
creators

If you don't make governance decisions and communicate them well, users will use their
own judgment for how things should work—and that often results in inconsistent
approaches to common tasks.

Although not every governance decision needs to be made upfront, it's important that
you identify the areas of greatest risk in your organization. Then, incrementally
implement governance policies and processes that will deliver the most impact.

Data policies
A data policy is a document that defines what users can and can't do. You might call it
something different, but the goal remains the same: when decisions—such as those
discussed in the previous section—are made, they're documented for use and reference
by the community of users.

A data policy should be as short as possible. That way, it's easy for people to understand
what is being asked of them.

A data policy should include:

Policy name, purpose, description, and details


Specific responsibilities
Scope of the policy (organization-wide versus departmental-specific)
Audience for the policy
Policy owner, approver, and contact
How to request an exception
How the policy will be audited and enforced
Regulatory or legal requirements met by the policy
Reference to terminology definitions
Reference to any related guidelines or policies
Effective date, last revision date, and change log

7 Note

Locate, or link to, data policies from your centralized portal.

Here are three common data policy examples you might choose to prioritize.

Data ownership policy: Specifies when an owner is required for a data asset, and what the
data owner's responsibilities include, such as: supporting colleagues who view the
content, maintaining appropriate confidentiality and security, and ensuring compliance.

Data certification (endorsement) policy: Specifies the process that is followed to certify
content. Requirements might include activities such as: data accuracy validation, data
source and lineage review, technical review of the data model, security review, and
documentation review.

Data classification and protection policy: Specifies activities that are allowed and not
allowed per classification (sensitivity level). It should specify activities such as: allowed
sharing with external users, with or without a non-disclosure agreement (NDA),
encryption requirements, and ability to download the data. Sometimes, it's also called a
data handling policy or a data usage policy. For more information, see the Information
protection for Power BI article.

U Caution

Having a lot of documentation can lead to a false sense that everything is under
control, which can lead to complacency. The level of engagement that the COE has
with the user community is one way to improve the chances that governance
guidelines and policies are consistently followed. Auditing and monitoring activities
are also important.

Scope of policies
Governance decisions will rarely be one-size-fits-all across the entire organization. When
practical, it's wise to start with standardized policies, and then implement exceptions as
needed. Having a clearly defined strategy for how policies will be handled for
centralized and decentralized teams will make it much easier to determine how to
handle exceptions.

Pros of organization-wide policies:

Much easier to manage and maintain


Greater consistency
Encompasses more use cases
Fewer policies overall
Cons of organization-wide policies:

Inflexible
Less autonomy and empowerment

Pros of departmental-scope policies:

Expectations are clearer when tailored to a specific group


Customizable and flexible

Cons of departmental-scope policies:

More work to manage


More policies that are siloed
Potential for conflicting information
Difficult to scale more broadly throughout the organization

 Tip

Finding the right balance of standardization and customization for supporting self-
service data and BI across the organization can be challenging. However, by
starting with organizational policies and mindfully watching for exceptions, you can
make meaningful progress quickly.

Staffing and accountability


The organizational structure for data governance varies substantially between
organizations. In larger organizations there might be a data governance office with
dedicated staff. Some organizations have a data governance board, council, or steering
committee with assigned members coming from different business units. Depending on
the extent of the data governance body within the organization, there could be an
executive team separate from a functional team of people.

) Important

Regardless of how the governance body is structured, it's important that there's a
person or group with sufficient influence over data governance decisions. This
person should have authority to enforce those decisions across organizational
boundaries.

Checks and balances


Governance accountability is about checks and balances.

Starting with the first level, the levels of checks and balances in the above diagram
include:

Operational - Business units: Level 1 is the foundation of a well-governed system, which
includes users within the business units performing their work. Self-service data and BI
creators have a lot of responsibilities related to authoring, publishing, sharing, security,
and data quality. Self-service data and BI consumers also have responsibilities for the
proper use of data.

Tactical - Supporting teams: Level 2 includes several groups that support the efforts of
the users in the business units. Supporting teams include the COE, enterprise data and BI,
the data governance office, as well as other ancillary teams. Ancillary teams can include IT,
security, HR, and legal. A change control board is included here as well.

Tactical - Audit and compliance: Level 3 includes internal audit, risk management, and
compliance teams. These teams provide guidance to levels 1 and 2. They also provide
enforcement when necessary.

Strategic - Executive sponsor and steering committee: The highest level includes the
executive-level oversight of strategy and priorities. This level handles any escalated issues
that couldn't be solved at lower levels. Therefore, it's important to have a leadership team
with sufficient authority to be able to make decisions when necessary.
) Important

Everyone has a responsibility to adhere to policies for ensuring that organizational
data is secure, protected, and well-managed as an organizational asset. Sometimes
this is cited as everyone is a data steward. To make this a reality, start with the users
in the business units (level 1 described above) as the foundation.

Roles and responsibilities


Once you have a sense for your governance strategy, roles and responsibilities should
be defined to establish clear expectations.

Governance team structure, roles (including terminology), and responsibilities vary
widely among organizations. Very generalized roles are described in the table below. In
some cases, the same person could serve multiple roles. For instance, the Chief Data
Officer (CDO) could also be the executive sponsor.

Chief Data Officer or Chief Analytics Officer: Defines the strategy for use of data as an
enterprise asset. Oversees enterprise-wide governance guidelines and policies.

Data governance board: Steering committee with members from each business unit who, as
domain owners, are empowered to make enterprise governance decisions. They make
decisions on behalf of the business unit and in the best interest of the organization.
Provides approvals, decisions, priorities, and direction to the enterprise data governance
team and working committees.

Data governance team: Creates governance policies, standards, and processes. Provides
enterprise-wide oversight and optimization of data integrity, trustworthiness, privacy, and
usability. Collaborates with the COE to provide governance education, support, and
mentoring to data owners and content creators.

Data governance working committees: Temporary or permanent teams that focus on
individual governance topics, such as security or data quality.

Change management board: Coordinates the requirements, processes, approvals, and
scheduling for release management processes with the objective of reducing risk and
minimizing the impact of changes to critical applications.

Project management office: Manages individual governance projects and the ongoing data
governance program.

Fabric executive sponsor: Promotes adoption and the successful use of Fabric. Actively
ensures that Fabric decisions are consistently aligned with business objectives, guiding
principles, and policies across organizational boundaries. For more information, see the
Executive sponsorship article.

Center of Excellence: Mentors the community of creators and consumers to promote the
effective use of Fabric for decision-making. Provides cross-departmental coordination of
Fabric activities to improve practices, increase consistency, and reduce inefficiencies. For
more information, see the Center of Excellence article.

Fabric champions: A subset of content creators found within the business units who help
advance the adoption of Fabric. They contribute to data culture growth by advocating the
use of best practices and actively assisting colleagues. For more information, see the
Community of practice article.

Fabric administrators: Day-to-day system oversight responsibilities to support the internal
processes, tools, and people. Handles monitoring, auditing, and management. For more
information, see the System oversight article.

Information technology: Provides occasional assistance to Fabric administrators for
services related to Fabric, such as Microsoft Entra ID, Microsoft 365, Teams, SharePoint, or
OneDrive.

Risk management: Reviews and assesses data sharing and security risks. Defines ethical
data policies and standards. Communicates regulatory and legal requirements.

Internal audit: Auditing of compliance with regulatory and internal requirements.

Data steward: Collaborates with governance committee and/or COE to ensure that
organizational data has acceptable data quality levels.

All BI creators and consumers: Adheres to policies for ensuring that data is secure,
protected, and well-managed as an organizational asset.

 Tip

Name a backup for each person in key roles, for example, members of the data
governance board. In their absence, the backup person can attend meetings and
make time-sensitive decisions when necessary.

Considerations and key actions


Checklist - Considerations and key actions you can take to establish or strengthen your
governance initiatives.

" Align goals and guiding principles: Confirm that the high-level goals and guiding
principles of the data culture goals are clearly documented and communicated.
Ensure that alignment exists for any new governance guidelines or policies.
" Understand what's currently happening: Ensure that you have a deep
understanding of how Fabric is currently used for self-service and enterprise data
and BI scenarios. Document opportunities for improvement. Also, document
strengths and good practices that would be helpful to scale out more broadly.
" Prioritize new governance guidelines and policies: For prioritizing which new
guidelines or policies to create, select an important pain point, high priority need,
or known risk for a data domain. It should have significant benefit and can be
achieved with a feasible level of effort. When you implement your first governance
guidelines, choose something users are likely to support because the change is low
impact, or because they are sufficiently motivated to make a change.
" Create a schedule to review policies: Determine the cadence for how often data
policies are reevaluated. Reassess and adjust when needs change.
" Decide how to handle exceptions: Determine how conflicts, issues, and requests for
exceptions to documented policies will be handled.
" Understand existing data assets: Confirm that you understand what critical data
assets exist. Create an inventory of ownership and lineage, if necessary. Keep in
mind that you can't govern what you don't know about.
" Verify executive sponsorship: Confirm that you have support and sufficient
attention from your executive sponsor, as well as from business unit leaders.
" Prepare an action plan: Include the following key items:
Initial priorities: Select one data domain or business unit at a time.
Timeline: Work in iterations long enough to accomplish meaningful progress, yet
short enough to periodically adjust.
Quick wins: Focus on tangible, tactical, and incremental progress.
Success metrics: Create measurable metrics to evaluate progress.

Questions to ask
Use questions like those found below to assess governance.

At a high level, what's the current governance strategy? To what extent is the
purpose and importance of this governance strategy clear to both end users and
the central data and BI teams?
In general, is the current governance strategy effective?
What are the key regulatory and compliance criteria that the organization (or
specific business units) must adhere to? Where are these criteria documented? Is this
information readily available to people who work with data and share data items as
a part of their role?
How well does the current governance strategy align to the user's way of working?
Is a specific role or team responsible for governance in the organization?
Who has the authority to create and change governance policies?
Do governance teams use Microsoft Purview or another tool to support
governance activities?
What are the prioritized governance risks, such as risks to security, information
protection, and data loss prevention?
What's the potential business impact of the identified governance risks?
How frequently is the governance strategy re-evaluated? What metrics are used to
evaluate it, and what mechanisms exist for business users to provide feedback?
What types of user behaviors create risk when users work with data? How are
those risks mitigated?
What sensitivity labels are in place, if any? Are data and BI decision makers aware
of sensitivity labels and the benefits to the business?
What data loss prevention policies are in place, if any?
How is "Export to Excel" handled? What steps are taken to prevent data loss
prevention? What's the prevalence of "Export to Excel"? What do people do with
data once they have it in Excel?
Are there practices or solutions that are out of regulatory compliance that must be
urgently addressed? Are these examples justified with an explanation of the
potential business impact, should they not be addressed?

 Tip

"Export to Excel" is typically a controversial topic. Often, business users focus on the
requirement to have "Export to Excel" possible in BI solutions. Enabling "Export to
Excel" can be counter-productive because a business objective isn't to get data into
Excel. Instead, define why end users need the data in Excel. Ask what they do with
the data once it's in Excel, which business questions they try to answer, what
decisions they make, and what actions they take with the data.

Focusing on business decisions and actions helps steer focus away from tools and
features and toward helping people achieve their business objectives.

Maturity levels

The following maturity levels will help you assess the current state of your governance
initiatives.

100: Initial
• Due to a lack of governance planning, the good data management and informal
governance practices that are occurring are overly reliant on judgment and
experience level of individuals.
• There's a significant reliance on undocumented tribal knowledge.

200: Repeatable
• Some areas of the organization have made a purposeful effort to standardize,
improve, and document their data management and governance practices.
• An initial governance approach exists. Incremental progress is being made.

300: Defined
• A complete governance strategy with focus, objectives, and priorities is enacted
and broadly communicated.
• Specific governance guidelines and policies are implemented for the top few
priorities (pain points or opportunities). They're actively and consistently followed
by users.
• Roles and responsibilities are clearly defined and documented.

400: Capable
• All Fabric governance priorities align with organizational goals and business
objectives. Goals are reassessed regularly.
• Processes exist to customize policies for decentralized business units, or to
handle valid exceptions to standard governance policies.
• It's clear where Fabric fits into the overall data and BI strategy for the
organization.
• Fabric activity log and API data is actively analyzed to monitor and audit Fabric
activities. Proactive action is taken based on the data. (A minimal sketch of pulling
this data follows the table.)

500: Efficient
• Regular reviews of KPIs or OKRs evaluate measurable governance goals. Iterative,
continual progress is a priority.
• Agility and implementing continual improvements from lessons learned
(including scaling out methods that work) are top priorities for the COE.
• Fabric activity log and API data is actively used to inform and improve adoption
and governance efforts.
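
The 400 and 500 levels above refer to analyzing Fabric activity log and API data. As a
minimal sketch of what that can look like, the following Python snippet pulls one UTC day
of activity events from the Power BI admin REST API. It assumes you already have an
access token for an identity with tenant-level read (Fabric administrator) permissions;
token acquisition, where you land the data, and the exact event fields you report on are
assumptions you would adapt, so treat this as a starting point rather than a definitive
implementation.

```python
# Minimal sketch: pull one UTC day of activity events for auditing.
# Assumptions: ACCESS_TOKEN is acquired elsewhere (for example, with MSAL) for an
# identity with tenant-level read permissions; event field names vary by event
# type, so verify them against the current API reference.
from collections import Counter
import requests

ACCESS_TOKEN = "<access-token>"   # placeholder; obtain via your preferred auth flow
DAY = "2024-06-01"                # the API is queried one UTC day at a time

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    f"?startDateTime='{DAY}T00:00:00.000Z'&endDateTime='{DAY}T23:59:59.999Z'"
)
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

events = []
while url:
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    url = payload.get("continuationUri")  # follow pagination until exhausted

# Example: summarize the most common activities to spot unusual patterns.
print(Counter(event.get("Activity") for event in events).most_common(10))
```

A scheduled version of this kind of extract, landed in a lakehouse or warehouse, is what
makes the proactive monitoring described in the 400 level practical.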

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about
mentoring and user enablement.



Microsoft Fabric adoption roadmap:
Mentoring and user enablement
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

A critical objective for adoption efforts is to enable users to accomplish as much as they
can within the requisite guardrails established by governance guidelines and policies.
For this reason, the act of mentoring users is one of the most important responsibilities
of the Center of Excellence (COE), and it has a direct influence on how user adoption
occurs. For more information about user adoption, see Microsoft Fabric adoption
maturity levels.

Skills mentoring
Mentoring and helping users in the Fabric community become more effective can take
on various forms, such as:

Office hours
Co-development projects
Best practices reviews
Extended support

Office hours
Office hours are a form of ongoing community engagement managed by the COE. As
the name implies, office hours are times of regularly scheduled availability where
members of the community can engage with experts from the COE to receive assistance
with minimal process overhead. Office hours are usually group-based, so Fabric
champions and other members of the community can also help solve an issue if a topic
is in their area of expertise.

Office hours are a very popular and productive activity in many organizations. Some
organizations call them drop-in hours or even a fun name such as Power Hour or Fabric
Fridays. The primary goal is usually to get questions answered, solve problems, and
remove blockers. Office hours can also be used as a platform for the user community to
share ideas, suggestions, and even complaints.

The COE publishes the times for regular office hours when one or more COE members
are available. Ideally, office hours are held on a regular and frequent basis. For instance,
it could be every Tuesday and Thursday. Consider offering different time slots or
rotating times if you have a global workforce.

 Tip

One option is to set specific office hours each week. However, users might not
show up, so that can end up being inefficient. Alternatively, consider leveraging
Microsoft Bookings to schedule office hours. It shows the blocks of time when
each COE expert is available, with Outlook integration ensuring availability is up to
date.

Office hours are an excellent user enablement approach because:

Content creators and the COE actively collaborate to answer questions and solve
problems together.
Real work is accomplished while learning and problem solving.
Others might observe, learn, and participate.
Individual groups can head to a breakout room to solve a specific problem.

Office hours benefit the COE as well because:

They're a great way for the COE to identify champions or users with specific skills
that the COE didn't previously know about.
The COE can learn what users throughout the organization are struggling with. It
helps inform whether additional resources, documentation, or training might be
required.

 Tip

It's common for some tough issues to come up during office hours that cannot be
solved quickly, such as getting a complex DAX calculation to work, or addressing
performance challenges in a complex solution. Set clear expectations for what's in
scope for office hours, and if there's any commitment for follow up.

Co-development projects
One way the COE can provide mentoring services is during a co-development project. A
co-development project is a form of assistance offered by the COE where a user or
business unit takes advantage of the technical expertise of the COE to solve business
problems with data. Co-development involves stakeholders from the business unit and
the COE working in partnership to build a high-quality self-service analytics or business
intelligence (BI) solution that the business stakeholders couldn't deliver independently.

The goal of co-development is to help the business unit develop expertise over time
while also delivering value. For example, the sales team has a pressing need to develop
a new set of commission reports, but the sales team doesn't yet have the knowledge to
complete it on their own.

A co-development project forms a partnership between the business unit and the COE.
In this arrangement, the business unit is fully invested, deeply involved, and assumes
ownership of the project.

Time involvement from the COE reduces over time until the business unit gains expertise
and becomes self-reliant.

The active involvement shown in the above diagram changes over time, as follows:
Business unit: 50% initially, up to 75%, finally at 98%-100%.
COE: 50% initially, down to 25%, finally at 0%-2%.

Ideally, the period for the gradual reduction in involvement is identified up-front in the
project. This way, both the business unit and the COE can sufficiently plan the timeline
and staffing.

Co-development projects can deliver significant short- and long-term benefits. In the
short term, the involvement from the COE can often result in a better-designed and
better-performing solution that follows best practices and aligns with organizational
standards. In the long term, co-development helps increase the knowledge and
capabilities of the business stakeholder, making them more self-sufficient, and more
confident to deliver quality self-service data and BI solutions in the future.

) Important

Essentially, a co-development project helps less experienced users learn the right
way to do things. It reduces the risk that refactoring might be needed later, and it
increases the ability for a solution to scale and grow over time.

Best practices reviews


The COE could also offer best practices reviews. A best practices review can be extremely
helpful for content creators who would like to validate their work. They might also be
known as advisory services, internal consulting time, or technical reviews. Unlike a co-
development project (described previously), a best practices review occurs after the
solution has been developed.

During a review, an expert from the COE evaluates self-service Fabric content developed
by a member of the community and identifies areas of risk or opportunities for
improvement.

Here are some examples of when a best practices review could be beneficial.

The sales team has a Power BI app that they intend to distribute to thousands of
users throughout the organization. Since the app represents high priority content
distributed to a large audience, they'd like to have it certified. The standard
process to certify content includes a best practices review.
The finance team would like to assign a workspace to a capacity. A review of the
workspace content is required to ensure sound development practices are
followed. This type of review is common when the capacity is shared among
multiple business units. (A review might not be required when the capacity is
assigned to only one business unit.)
The operations team is creating a new Fabric solution they expect to be widely
used. They would like to request a best practices review before it goes into user
acceptance testing (UAT), or before a request is submitted to the change
management board.

A best practices review is most often focused on the semantic model design, though the
review can encompass all types of data items (such as a lakehouse, data warehouse,
data pipeline, dataflow, or semantic model). The review can also encompass reporting
items (such as reports, dashboards, or metrics).

Before content is deployed, a best practices review can be used to verify other design
decisions, like:

Code in notebooks follows organizational standards and best practices.
The appropriate data preparation approaches (dataflows, pipelines, notebooks, and
others) are used where needed.
Data sources used are appropriate and query folding is invoked whenever possible
where Power Query and dataflows are used.
Data preparation steps are clean, orderly, and efficient.
Connectivity mode and storage mode choices (for example, Direct Lake, import,
live connection, DirectQuery, and composite model frameworks) are appropriate.
Location for data sources, like flat files, and original Power BI Desktop files are
suitable (preferably stored in a backed-up location with versioning and appropriate
security, such as Teams files or a SharePoint shared library).
Semantic models are well-designed, clean, and understandable, and use a star
schema design.
Model relationships are configured correctly.
DAX calculations use efficient coding practices (particularly if the data model is
large).
The semantic model size is within a reasonable limit and data reduction techniques
are applied.
Row-level security (RLS) appropriately enforces data permissions.
Data is accurate and has been validated against the authoritative source(s).
Approved common definitions and terminology are used.
Good data visualization practices are followed, including designing for
accessibility.

Once the content has been deployed, the best practices review isn't necessarily
complete yet. Completing the remainder of the review could also include items such as:
The target workspace is suitable for the content.
Workspace security roles are appropriate for the content.
Other permissions (such as app audience permissions, Build permission, or use of
the individual item sharing feature) are correctly and appropriately configured.
Contacts are identified, and correctly correlate to the owners of the content.
Sensitivity labels are correctly assigned.
Fabric item endorsement (certified or promoted) is appropriate.
Data refresh is configured correctly, failure notifications include the proper users,
and it uses the appropriate data gateway in standard mode (if applicable).
All appropriate semantic model best practices rules are followed and, preferably,
are automated via a community tool called Best Practices Analyzer for maximum
efficiency and productivity.

Extended support
From time to time, the COE might get involved with complex issues escalated from the
help desk. For more information, see the User support article.

7 Note

Offering mentoring services might be a culture shift for your organization. Your
reaction might be that users don't usually ask for help with a tool like Excel, so why
would they with Power BI? The answer lies in the fact that Power BI and Fabric are
extraordinarily powerful tools. They provide data preparation and data modeling
capabilities in addition to data visualization. Having the ability to aid and enable
users can significantly improve their skills and increase the quality of their solutions
—it reduces risks too.

Centralized portal
A single centralized portal, or hub, is where the user community can find:

Access to the community Q&A forum.


Announcements of interest to the community, such as new features and release
plan updates.
Schedules and registration links for office hours, lunch and learns, training
sessions, and user group meetings.
Announcements of key changes to content and change log (if appropriate).
How to request help or support.
Training materials.
Documentation, onboarding materials, and frequently asked questions (FAQ).
Governance guidance and approaches recommended by the COE.
Report templates.
Examples of best practices solutions.
Recordings of knowledge sharing sessions.
Entry points for accessing managed processes, such as license acquisition, access
requests, and gateway configuration.

 Tip

In general, only 10%-20% of your community will go out of their way to actively
seek out training and educational information. These types of users might naturally
evolve to become your champions. Everyone else is usually just trying to get the
job done as quickly as possible, because their time, focus, and energy are needed
elsewhere. Therefore, it's crucial to make information easy for your community
users to find.

The goal is to consistently direct users in the community to the centralized portal to find
information. The corresponding obligation for the COE is to ensure that the information
users need is available in the centralized portal. Keeping the portal updated requires
discipline when everyone is busy.

In larger organizations, it can be difficult to implement one single centralized portal.


When it's not practical to consolidate into a single portal, a centralized hub can serve as
an aggregator, which contains links to the other locations.

) Important

Although saving time finding information is important, the goal of a centralized
portal is more than that. It's about making information readily available to help
your user community do the right thing. They should be able to find information
during their normal course of work, with as little friction as possible. Until it's easier
to complete a task within the guardrails established by the COE and data
governance team, some users will continue to complete their tasks by
circumventing policies that are put in place. The recommended path must become
the path of least resistance. Having a centralized portal can help achieve this goal.

It takes time for community users to think of the centralized portal as their natural first
stop for finding information. It takes consistent redirection to the portal to change
habits. Sending someone a link to an original document location in the portal builds
better habits than, for instance, including the answer in an email response. It's the same
challenge described in the User support article.

Training
A key factor for successfully enabling self-service users in a Fabric community is training.
It's important that the right training resources are readily available and easily
discoverable. While some users are so enthusiastic about analytics that they'll find
information and figure things out on their own, it isn't true for most of the user
community.

Making sure your self-service users (particularly content creators and owners) have
access to the training resources they need to be successful doesn't mean that you need
to develop your own training content. Developing training content is often
counterproductive due to the rapidly evolving nature of the product. Fortunately, an
abundance of training resources is available in the worldwide community. A curated set
of links goes a long way to help users organize and focus their training efforts, especially
for tool training, which focuses on the technology. All external links should be validated
by the COE for accuracy and credibility. It's a key opportunity for the COE to add value
because COE stakeholders are in an ideal position to understand the learning needs of
the community, and to identify and locate trusted sources of quality learning materials.

You'll find the greatest return on investment with creating custom training materials for
organizational-specific processes, while relying on content produced by others for
everything else. It's also useful to have a short training class that focuses primarily on
topics like how to find documentation, getting help, and interacting with the
community.

 Tip

One of the goals of training is to help users learn new skills while helping them
avoid bad habits. It can be a balancing act. For instance, you don't want to
overwhelm new users by adding in a lot of complexity and friction to a beginner-
level class for report creators. However, it's a great investment to make newer
content creators aware of things that could otherwise take them a while to figure
out. An ideal example is teaching the ability to use a live connection to report from
an existing semantic model. By teaching this concept at the earliest logical time,
you can save a less experienced creator from thinking they always need one semantic
model for every report (and encourage the good habit of reusing existing semantic
models across reports).

Some larger organizations experience continual employee transfers and turnover. Such
frequent change results in an increased need for a repeatable set of training resources.

Training resources and approaches


There are many training approaches because people learn in different ways. If you can
monitor and measure usage of your training materials, you'll learn over time what works
best.

Some training might be delivered more formally, such as classroom training with hands-
on labs. Other types of training are less formal, such as:

Lunch and learn presentations


Short how-to videos targeted to a specific goal
Curated set of online resources
Internal user group presentations
One-hour, one-week, or one-month challenges
Hackathon-style events

The advantages of encouraging knowledge sharing among colleagues are described in
the Community of practice article.

 Tip

Whenever practical, learning should be correlated with building something
meaningful and realistic. However, simple demo data does have value during a
training course. It allows a learner to focus on how to use the technology rather
than the data itself. After completion of introductory session(s), consider offering a
bring your own data type of session. These types of sessions encourage the learner
to apply their new technical skills to an actual business problem. Try to include
multiple facilitators from the COE during this type of follow-up session so questions
can be answered quickly.

The types of users you might target for training include:

Content owners, subject matter experts (SMEs), and workspace administrators


Data creators (for example, users who create semantic models for report creators
to use, or who create dataflows, lakehouses, or warehouses for other semantic
model creators to use)
Report creators
Content consumers and viewers
Satellite COE members and the champions network
Fabric administrators

) Important

Each type of user represents a different audience that has different training needs.
The COE will need to identify how best to meet the needs of each audience. For
instance, one audience might find a standard introductory Power BI Desktop class
overwhelming, whereas another will want more challenging information with depth
and detail for end-to-end solutions that include multiple Fabric workloads. If you
have a diverse population of Fabric content creators, consider creating personas
and tailoring the experience to an extent that's practical.

The completion of training can be a leading indicator for success with user adoption.
Some organizations add an element of fun by granting badges, like blue belt or black
belt, as users progress through the training programs.

Give some consideration to how you want to handle users at various stages of user
adoption. Training needs are very different for:

Onboarding new users (sometimes referred to as training day zero).


Users with minimal experience.
More experienced users.

How the COE invests its time in creating and curating training materials will change over
time as adoption and maturity grows. You might also find over time that some
community champions want to run their own tailored set of training classes within their
functional business unit.

Sources for trusted Fabric training content


A curated set of online resources is valuable to help community members focus and
direct their efforts on what's important. Some publicly available training resources you
might find helpful include:

Microsoft Learn training for Power BI


Microsoft Learn training for Fabric
Power BI courses and "in a day" training materials
LinkedIn Learning for Power BI
LinkedIn Learning for Fabric

Consider using Microsoft Viva Learning, which is integrated into Microsoft Teams. It
includes content from sources such as Microsoft Learn and LinkedIn Learning. Custom
content produced by your organization can be included as well.

In addition to Microsoft content and custom content produced by your organization,
you might choose to provide your user community with a curated set of recommended
links to trusted online sources. There's a wide array of videos, blogs, and articles
produced by the worldwide community. The community comprises Fabric and Power BI
experts, Microsoft Most Valuable Professionals (MVPs), and enthusiasts. Providing a
curated learning path that contains specific, reputable, current, and high-quality
resources will provide the most value to your user community.

If you do make the investment to create custom in-house training, consider creating
short, targeted content that focuses on solving one specific problem. It makes the
training easier to find and consume. It's also easier to maintain and update over time.

 Tip

The Help and Support menu in the Fabric portal is customizable. When your
centralized location for training documentation is operational, update the tenant
setting in the Admin portal with the link. The link can then be accessed from the menu
when users select the Get Help option. Also, be sure to teach users about the Help
ribbon tab in Power BI Desktop. It includes links to guided learning, training videos,
documentation, and more.

Documentation
Concise, well-written documentation can be a significant help for users trying to get
things done. Your needs for documentation, and how it's delivered, will depend on how
Fabric is managed in your organization. For more information, see the Content
ownership and management article.

Certain aspects of Fabric tend to be managed by a centralized team, such as the COE.
The following types of documentation are helpful in these situations:

How to request a Power BI license (and whether there are requirements for
manager approval)
How to request a new capacity
How to request a new workspace
How to request a workspace be added to an existing capacity
How to request access to a gateway data source
How to request software installation
 Tip

For certain activities that are repeated over and over, consider automating them
using Power Apps and Power Automate. In this case, your documentation will also
include how to access and use the Power Platform functionality.

Different aspects of your documentation can be managed by self-service users,
decentralized teams, or by a centralized team. The following types of documentation
might differ based on who owns and manages the content:

How to request a new report


How to request a report enhancement
How to request access to data
How to request new data be prepared and made available for use
How to request an enhancement to existing data or visualizations

 Tip

When planning for a centralized portal, as described earlier in this article, plan how
to handle situations when guidance or governance policies need to be customized
for one or more business units.

There are also going to be some governance decisions that have been made and should
be documented, such as:

How to request content be certified


What are the approved file storage locations
What are the data retention and purge requirements
What are the requirements for handling sensitive data and personally identifiable
information (PII)

Documentation should be located in your centralized portal, which is a searchable
location where, preferably, users already work. Either Teams or SharePoint work very
well. Creating documentation in either wiki pages or in documents can work equally
well, provided that the content is organized well and is easy to find. Shorter documents
that focus on one topic are usually easier to consume than long, comprehensive
documents.

) Important
One of the most helpful pieces of documentation you can publish for the
community is a description of the tenant settings, and the group memberships
required for each tenant setting. Users read about features and functionality online,
and sometimes find that it doesn't work for them. When they are able to quickly
look up your organization's tenant settings, it can save them from becoming
frustrated and attempting workarounds. Effective documentation can reduce the
number of help desk tickets that are submitted. It can also reduce the number of
people who need to be assigned the Fabric administrator role (who might have this
role solely for the purpose of viewing settings).
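
To keep that tenant settings documentation current, the COE can periodically export the
settings rather than maintaining them by hand. The following is a minimal sketch, assuming
a Fabric administrator access token and the Fabric admin REST API's tenant settings
endpoint; verify the endpoint and the response field names against the current API
reference before relying on them.

```python
# Minimal sketch: export current tenant settings so the COE can refresh its
# published documentation. Assumptions: ACCESS_TOKEN belongs to a Fabric
# administrator identity, and the response field names (settingName, title,
# enabled, enabledSecurityGroups) match the current Fabric admin REST API.
import requests

ACCESS_TOKEN = "<access-token>"   # placeholder; obtain via your preferred auth flow

response = requests.get(
    "https://api.fabric.microsoft.com/v1/admin/tenantsettings",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

for setting in response.json().get("tenantSettings", []):
    groups = ", ".join(g.get("name", "") for g in setting.get("enabledSecurityGroups", []))
    print(f"{setting.get('title', setting.get('settingName'))}: "
          f"enabled={setting.get('enabled')} groups=[{groups}]")
```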

Over time, you might choose to allow certain types of documentation to be maintained
by the community if you have willing volunteers. In this case, you might want to
introduce an approval process for changes.

When you see questions repeatedly arise in the Q&A forum (as described in the User
support article), during office hours, or during lunch and learns, it's a great indicator that
creating new documentation might be appropriate. When the documentation exists, it
allows colleagues to reference it when needed. Documentation contributes to user
enablement and a self-sustaining community.

 Tip

When creating custom documentation or training materials, reference existing
Microsoft sites using links whenever possible. Most community bloggers don't
keep blog posts or videos up to date.

Power BI template files


A Power BI template is a .pbit file. It can be provided as a starting point for content
creators. It's the same as a .pbix file, which can contain queries, a data model, and a
report, but with one exception: the template file doesn't contain any data. Therefore, it's
a smaller file that can be shared with content creators and owners, and it doesn't
present a risk of inappropriately sharing data.

Providing Power BI template files for your community is a great way to:

Promote consistency.
Reduce learning curve.
Show good examples and best practices.
Increase efficiency.
Power BI template files can improve efficiency and help people learn during the normal
course of their work. A few ways that template files are helpful include:

Reports can use examples of good visualization practices


Reports can incorporate organizational branding and design standards
Semantic models can include the structure for commonly used tables, like a date
table
Helpful DAX calculations can be included, like a year-over-year (YoY) calculation
Common parameters can be included, like a data source connection string
An example of report and/or semantic model documentation can be included

7 Note

Providing templates not only saves your content creators time, it also helps them
move quickly beyond a blank page in an empty solution.

Power BI project files


A Power BI project is a .pbip file. Like a template file (previously described), a project file
doesn't contain any data. It's a file format that advanced content creators can use for
advanced data model and report management scenarios. For example, you can use
project files to save time in development by sharing common model patterns, like date
tables, DAX measure expressions, or calculation groups.

You can use Power BI project files with Power BI Desktop developer mode for:

Advanced editing and authoring (for example, in a code editor such as Visual
Studio Code).
Purposeful separation of semantic model and report items (unlike the .pbix or .pbit
files).
Enabling multiple content creators and developers to work on the same project
concurrently.
Integrating with source control (such as by using Fabric Git integration).
Using continuous integration and continuous delivery (CI/CD) techniques to
automate integration, testing and deployment of changes, or versions of content.

7 Note

Power BI includes capabilities such as .pbit template files and .pbip project files that
make it simple to share starter resources with authors. Other Fabric workloads
provide different approaches to content development and sharing. Having a set of
starter resources is important regardless of the items being shared. For example,
your portal might include a set of SQL scripts or notebooks that present tested
approaches to solve common problems.
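
As an illustration of such a starter resource, the snippet below shows the kind of vetted
pattern a COE might share in a notebook: read a raw lakehouse table, apply a few standard
cleanup steps, and save a curated Delta table. The table and column names (raw_sales,
OrderDate, Amount, clean_sales) are hypothetical, and the sketch assumes it runs where a
Spark session with access to the lakehouse is available, such as a Fabric notebook.

```python
# A hypothetical starter notebook pattern: read a raw lakehouse table, apply a
# few vetted cleanup steps, and save the result as a Delta table. Table and
# column names (raw_sales, OrderDate, Amount, clean_sales) are placeholders.
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook a Spark session is already provided; getOrCreate() makes
# the sketch runnable elsewhere too.
spark = SparkSession.builder.getOrCreate()

raw = spark.read.table("raw_sales")                       # table in the attached lakehouse

clean = (
    raw.dropDuplicates()                                   # remove exact duplicate rows
       .filter(F.col("Amount").isNotNull())                # drop rows missing a required value
       .withColumn("OrderDate", F.to_date("OrderDate"))    # standardize the date type
)

# Overwrite the curated Delta table that downstream semantic models read from.
clean.write.mode("overwrite").format("delta").saveAsTable("clean_sales")
```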

Considerations and key actions

Checklist - Considerations and key actions you can take to establish, or improve,
mentoring and user enablement.

" Consider what mentoring services the COE can support: Decide what types of
mentoring services the COE is capable of offering. Types can include office hours,
co-development projects, and best practices reviews.
" Communicate regularly about mentoring services: Decide how you will
communicate and advertise mentoring services, such as office hours, to the user
community.
" Establish a regular schedule for office hours: Ideally, hold office hours at least once
per week (depending on demand from users as well as staffing and scheduling
constraints).
" Decide what the expectations will be for office hours: Determine what the scope
of allowed topics or types of issues users can bring to office hours. Also, determine
how the queue of office hours requests will work, whether any information should
be submitted ahead of time, and whether any follow up afterwards can be
expected.
" Create a centralized portal: Ensure that you have a well-supported centralized hub
where users can easily find training materials, documentation, and resources. The
centralized portal should also provide links to other community resources such as
the Q&A forum and how to find help.
" Create documentation and resources: In the centralized portal, create, compile,
and publish useful documentation. Identify and promote the top 3-5 resources that
will be most useful to the user community.
" Update documentation and resources regularly: Ensure that content is reviewed
and updated on a regular basis. The objective is to ensure that the information
available in the portal is current and reliable.
" Compile a curated list of reputable training resources: Identify training resources
that target the training needs and interests of your user community. Post the list in
the centralized portal and create a schedule to review and validate the list.
" Consider whether custom in-house training will be useful: Identify whether
custom training courses, developed in-house, will be useful and worth the time
investment. Invest in creating content that's specific to the organization.
" Provide templates and projects: Determine how you'll use templates including
Power BI template files and Power BI project files. Include the resources in your
centralized portal, and in training materials.
" Create goals and metrics: Determine how you'll measure effectiveness of the
mentoring program. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate that the COE's mentoring efforts strengthen the
community and its ability to provide self-service BI.

Questions to ask

Use questions like those found below to assess mentoring and user enablement.

Is there an effective process in place for users to request training?


Is there a process in place to evaluate user skill levels (such as beginner,
intermediate, or advanced)? Can users study for and achieve Microsoft
certifications by using company resources?
What's the onboarding process to introduce new people in the user community to
data and BI solutions, tools, and processes?
Have all users followed the appropriate Microsoft Learn learning paths for their
roles during onboarding?
What kinds of challenges do users experience due to lack of training or
mentorship?
What impact does lack of enablement have on the business?
When users exhibit behavior that creates governance risks, are they punished or do
they undergo education and mentorship?
What training materials are in place to educate people about governance
processes and policies?
Where's the central documentation maintained? Who maintains it?
Do central resources exist, like organizational design guidelines, themes, or
template files?

Maturity levels
The following maturity levels will help you assess the current state of your mentoring
and user enablement.

Level | State of mentoring and user enablement

100: Initial
• Some documentation and resources exist. However, they're siloed and inconsistent.
• Few users are aware of, or take advantage of, available resources.

200: Repeatable
• A centralized portal exists with a library of helpful documentation and resources.
• A curated list of training links and resources is available in the centralized portal.
• Office hours are available so the user community can get assistance from the COE.

300: Defined
• The centralized portal is the primary hub for community members to locate training, documentation, and resources. The resources are commonly referenced by champions and community members when supporting and learning from each other.
• The COE's skills mentoring program is in place to assist users in the community in various ways.

400: Capable
• Office hours have regular and active participation from all business units in the organization.
• Best practices reviews from the COE are regularly requested by business units.
• Co-development projects are repeatedly executed with success by the COE and members of business units.

500: Efficient
• Training, documentation, and resources are continually updated and improved by the COE to ensure the community has current and reliable information.
• Measurable and tangible business value is gained from the mentoring program by using KPIs or OKRs.
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about the
community of practice.



Microsoft Fabric adoption roadmap:
Community of practice
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

A community of practice is a group of people with a common interest that interacts with,
and helps, each other on a voluntary basis. Using a tool such as Microsoft Fabric to
produce effective analytics is a common interest that can bring people together across
an organization.

The following diagram provides an overview of an internal community.

The above diagram shows the following:

The community of practice includes everyone with an interest in Fabric.


The Center of Excellence (COE) forms the nucleus of the community. The COE
oversees the entire community and interacts most closely with its champions.
Self-service content creators and subject matter experts (SMEs) produce, publish,
and support content that's used by their colleagues, who are consumers.
Content consumers view content produced by both self-service creators and
enterprise business intelligence (BI) developers.
Champions are a subset of the self-service content creators. Champions are in an
excellent position to support their fellow content creators to generate effective
analytics solutions.

Champions are the smallest group among creators and SMEs. Self-service content
creators and SMEs represent a larger number of people. Content consumers represent
the largest number of people in most organizations.

7 Note

All references to the Fabric community in this adoption series of articles refer to
internal users, unless explicitly stated otherwise. There's an active and vibrant
worldwide community of bloggers and presenters who produce a wealth of
knowledge about Fabric. However, internal users are the focus of this article.

For information about related topics including resources, documentation, and training
provided for the Fabric community, see the Mentoring and user enablement article.

Champions network
One important part of a community of practice is its champions. A champion is a self-
service content creator who works in a business unit that engages with the COE. A
champion is recognized by their peers as the go-to expert. A champion continually
builds and shares their knowledge even if it's not an official part of their job role.
Champions influence and help their colleagues in many ways including solution
development, learning, skills improvement, troubleshooting, and keeping up to date.

Champions emerge as leaders of the community of practice who:

Have a deep interest in analytics being used effectively and adopted successfully
throughout the organization.
Possess strong technical skills as well as domain knowledge for their functional
business unit.
Have an inherent interest in getting involved and helping others.
Are early adopters who are enthusiastic about experimenting and learning.
Can effectively translate business needs into solutions.
Communicate well with colleagues.

 Tip

To add an element of fun, some organizations refer to their champions network as
ambassadors, Jedis, ninjas, or rangers. Microsoft has an internal community called BI
Champs.
Often, people aren't directly asked to become champions. Commonly, champions are
identified by the COE and recognized for the activities they're already doing, such as
frequently answering questions on an internal discussion channel or participating in
lunch and learn sessions.

Different approaches will be more effective for different organizations, and each
organization will find what works best for them as their maturity level increases.

) Important

Someone very well might be acting in the role of a champion without even
knowing it, and without any formal recognition. The COE should always be on the
lookout for champions. COE members should actively monitor the discussion
channel to see who is particularly helpful. The COE should deliberately encourage
and support potential champions, and when appropriate, invite them into a
champions network to make the recognition formal.

Knowledge sharing
The overriding objective of a community of practice is to facilitate knowledge sharing
among colleagues and across organizational boundaries. There are many ways
knowledge sharing occurs. It could be during the normal course of work. Or, it could be
during a more structured activity, such as:

Activity | Description

Discussion channel | A Q&A forum where anyone in the community can post and view messages. Often used for help and announcements. For more information, see the User support article.

Lunch and learn sessions | Regularly scheduled sessions where someone presents a short session about something they've learned or a solution they've created. The goal is to get a variety of presenters involved, because it's a powerful message to hear firsthand what colleagues have achieved.

Office hours with the COE | Regularly scheduled times when COE experts are available so the community can engage with them. Community users can receive assistance with minimal process overhead. For more information, see the Mentoring and user enablement article.

Internal blog posts or wiki posts | Short blog posts, usually covering technical how-to topics.

Internal analytics user group | A subset of the community that chooses to meet as a group on a regularly scheduled basis. User group members often take turns presenting to each other to share knowledge and improve their presentation skills.

Book club | A subset of the community selects a book to read on a schedule. They discuss what they've learned and share their thoughts with each other.

Internal analytics conferences or events | An annual or semi-annual internal conference that delivers a series of sessions focused on the needs of self-service content creators, subject matter experts, and stakeholders.

 Tip

Inviting an external presenter can reduce the effort level and bring a fresh
viewpoint for learning and knowledge sharing.

Incentives
A lot of effort goes into forming and sustaining a successful community. It's
advantageous to everyone to empower and reward users who work for the benefit of
the community.

Rewarding community members


Incentives that the entire community (including champions) find particularly rewarding
can include:

Contests with a small gift card or time off: For example, you might hold a
performance tuning event with the winner being the person who successfully
reduced the size of their data model the most.
Ranking based on help points: The more frequently someone participates in Q&A,
the higher their status rises on a leaderboard. This type of gamification
promotes healthy competition and excitement. By getting involved in more
conversations, the participant learns and grows personally in addition to helping
their colleagues.
Leadership communication: Reach out to a manager when someone goes above
and beyond so that their leader, who might not be active in the community, sees
the value that their staff member provides.
Rewarding champions
Different types of incentives will appeal to different types of people. Some community
members will be highly motivated by praise and feedback. Some will be inspired by
gamification and a bit of fun. Others will highly value the opportunity to improve their
level of knowledge.

Incentives that champions find particularly rewarding can include:

More direct access to the COE: The ability to have connections in the COE is
valuable. It's depicted in the diagram shown earlier in this article.
Champion of the month: Publicly thank one of your champions for something
outstanding they did recently. It could be a fun tradition at the beginning of a
monthly lunch and learn.
A private experts discussion area: A private area for the champions to share ideas
and learn from each other is usually highly valued.
Specialized or deep dive information and training: Access to additional
information to help champions grow their skillsets (as well as help their colleagues)
will be appreciated. It could include attending advanced training classes or
conferences.

Communication plan
Communication with the community occurs through various types of communication
channels. Common communication channels include:

Internal discussion channel or forum.


Announcements channel.
Organizational newsletter.

The most critical communication objectives include ensuring your community members
know that:

The COE exists.


How to get help and support.
Where to find resources and documentation.
Where to find governance guidelines.
How to share suggestions and ideas.

 Tip
Consider requiring a simple quiz before a user is granted a Power BI or Fabric
license. Calling it a quiz is a misnomer because it doesn't focus on any technical skills.
Rather, it's a short series of questions to verify that the user knows where to find
help and resources. It sets them up for success. It's also a great opportunity to have
users acknowledge any governance policies or data privacy and protection
agreements you need them to be aware of. For more information, see the System
oversight article.

Types of communication
There are generally four types of communication to plan for:

New employee communications can be directed to new employees (and
contractors). It's an excellent opportunity to provide onboarding materials for how
to get started. It can include articles on topics like how to get Power BI Desktop
installed, how to request a license, and where to find introductory training
materials. It can also include general data governance guidelines that all users
should be aware of.
Onboarding communications can be directed to employees who are just acquiring
a license or are getting involved with the community of practice. It presents an
excellent opportunity to provide the same materials as given to new employee
communications (as mentioned above).
Ongoing communications can include regular announcements and updates
directed to all users, or subsets of users, like:
Announcements about changes that are planned to key organizational content.
For example, changes are to be published for a critical shared semantic model
that's used heavily throughout the organization. It can also include the
announcement of new features. For more information about planning for
change, see the Tenant-level monitoring article.
Feature announcements, which are more likely to receive attention from the
reader if the message includes meaningful context about why it's important.
(Although an RSS feed can be a helpful technique, with the frequent pace of
change, it can become noisy and might be ignored.)
Situational communications can be directed to specific users or groups based on
a specific occurrence discovered while monitoring the platform. For example,
perhaps you notice a significant amount of sharing from the personal workspace of a
particular user, so you choose to send them some information about the benefits
of workspaces and Power BI apps.

 Tip
One-way communication to the user community is important. Don't forget to also
include bidirectional communication options to ensure the user community has an
opportunity to provide feedback.

Community resources
Resources for the internal community, such as documentation, templates, and training,
are critical for adoption success. For more information about resources, see the
Mentoring and user enablement article.

Considerations and key actions

Checklist - Considerations and key actions you can take for the community of practice
follow.

Initiate, grow, and sustain your champions network:

" Clarify goals: Clarify what your specific goals are for cultivating a champions
network. Make sure these goals align with your overall data and BI strategy, and
that your executive sponsor is on board.
" Create a plan for the champions network: Although some aspects of a champions
network will always be informally led, determine to what extent the COE will
purposefully cultivate and support champion efforts throughout individual business
units. Consider how many champions are ideal for each functional business area.
Usually, 1-2 champions per area works well, but it can vary based on the size of the
team, the needs of the self-service community, and how the COE is structured.
" Decide on commitment level for champions: Decide what level of commitment
and expected time investment will be required of champions. Be aware that the
time investment will vary from person to person, and team to team due to different
responsibilities. Plan to clearly communicate expectations to people who are
interested in getting involved. Obtain manager approval when appropriate.
" Decide how to identify champions: Determine how you will respond to requests to
become a champion, and how the COE will seek out champions. Decide if you will
openly encourage interested employees to self-identify as a champion and ask to
learn more (less common). Or, whether the COE will observe efforts and extend a
private invitation (more common).
" Determine how members of the champions network will be managed: One
excellent option for managing who the champions are is with a security group.
Consider:
How you will communicate with the champions network (for example, in a Teams
channel, a Yammer group, and/or an email distribution list).
How the champions network will communicate and collaborate with each other
directly (across organizational boundaries).
Whether a private and exclusive discussion forum for champions and COE
members is appropriate.
" Plan resources for champions: Ensure members of the champions network have
the resources they need, including:
Direct access to COE members.
Influence on data policies being implemented (for example, requirements for a
semantic model certification policy).
Influence on the creation of best practices and guidance (for example,
recommendations for accessing a specific source system).
" Involve champions: Actively involve certain champions as satellite members of the
COE. For more information about ways to structure the COE, see the Center of
Excellence article.
" Create a feedback loop for champions: Ensure that members of the champions
network can easily provide information or submit suggestions to the COE.
" Routinely provide recognition and incentives for champions: Not only is praise an
effective motivator, but the act of sharing examples of successful efforts can
motivate and inspire others.

Improve knowledge sharing:

" Identify knowledge sharing activities: Determine what kind of activities for


knowledge sharing fit well into the organizational data culture. Ensure that all
planned knowledge sharing activities are supportable and sustainable.
" Confirm roles and responsibilities: Verify who will take responsibility for
coordinating all knowledge sharing activities.

Introduce incentives:

" Identify incentives for champions: Consider what type of incentives you could offer
to members of your champions network.
" Identify incentives for community members: Consider what type of incentives you
could offer to your broader internal community.

Improve communications:
" Establish communication methods: Evaluate which methods of communication fit
well in your data culture. Set up different ways to communicate, including history
retention and search.
" Identify responsibility: Determine who will be responsible for different types of
communication, how, and when.

Questions to ask

Use questions like those found below to assess the community of practice.

Is there a centralized portal for a community of practice to engage in knowledge
sharing?
Do technical questions and requests for support always go through central teams
like the COE or support? Alternatively, to what extent is the community of practice
engaging in knowledge sharing?
Do any incentives exist for people to engage in knowledge sharing or improve
their skills with data and BI tools?
Is there a system of recognition to acknowledge significant self-service efforts in
teams?
Are champions recognized among the user community? If so, what explicit
recognition do they get for their expertise? How are they identified?
If no champions are recognized, are there any potential candidates?
What role do central teams envision that champions play in the community of
practice?
How often do central data and BI teams engage with the user community? What
medium do these interactions take? Are they bidirectional discussions or
unidirectional communications?
How are changes and announcements communicated within the community of
practice?
Among the user community, who is the most enthusiastic about analytics and BI
tools? Who is the least enthusiastic, or the most negative, and why?

Maturity levels
The following maturity levels will help you assess the current state of your community of
practice.

Level | State of the community of practice

100: Initial
• Some self-service content creators are doing great work throughout the organization. However, their efforts aren't recognized.
• Efforts to purposefully share knowledge across the organizational boundaries are rare and unstructured.
• Communication is inconsistent, without a purposeful plan.

200: Repeatable
• The first set of champions are identified.
• The goals for a champions network are identified.
• Knowledge sharing practices are gaining traction.

300: Defined
• Knowledge sharing in multiple forms is a normal occurrence. Information sharing happens frequently and purposefully.
• Goals for transparent communication with the user community are defined.

400: Capable
• Champions are identified for all business units. They actively support colleagues in their self-service efforts.
• Incentives to recognize and reward knowledge sharing efforts are a common occurrence.
• Regular and frequent communication occurs based on a predefined communication plan.

500: Efficient
• Bidirectional feedback loops exist between the champions network and the COE.
• Key performance indicators measure community engagement and satisfaction.
• Automation is in place when it adds direct value to the user experience (for example, automatic access to a group that provides community resources).
Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about user
support.



Microsoft Fabric adoption roadmap:
User support
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

This article addresses user support. It focuses primarily on the resolution of issues.

The first sections of this article focus on user support aspects you have control over
internally within your organization. The final topics focus on external resources that are
available.

For a description of related topics, including skills mentoring, training, documentation,
and co-development assistance provided to the internal Fabric user community, see the
Mentoring and user enablement article. The effectiveness of those activities can
significantly reduce the volume of formal user support requests and increase user
experience overall.

Types of user support


If a user has an issue, do they know what their options are to resolve it?

The following diagram shows some common types of user support that organizations
employ successfully:
The six types of user support shown in the above diagram include:

Type | Description

Intra-team support (internal) | Very informal. Support occurs when team members learn from each other during the natural course of their job.

Internal community support (internal) | Can be organized informally, formally, or both. It occurs when colleagues interact with each other via internal community channels.

Help desk support (internal) | Handles formal support issues and requests.

Extended support (internal) | Involves handling complex issues escalated by the help desk.

Microsoft support (external) | Includes support for licensed users and Fabric administrators. It also includes comprehensive documentation.

Community support (external) | Includes the worldwide community of experts, Microsoft Most Valuable Professionals (MVPs), and enthusiasts who participate in forums and publish content.

In some organizations, intra-team and internal community support are most relevant for
self-service data and business intelligence (BI)—content is owned and managed by
creators and owners in decentralized business units. Conversely, the help desk and
extended support are reserved for technical issues and enterprise data and BI (content is
owned and managed by a centralized BI team or Center of Excellence). In some
organizations, all four types of support could be relevant for any type of content.
 Tip

For more information about business-led self-service, managed self-service, and
enterprise data and BI concepts, see the Content ownership and management article.

Each of the six types of user support introduced above are described in further detail in
this article.

Intra-team support
Intra-team support refers to when team members learn from and help each other during
their daily work. Self-service content creators who emerge as your champions tend to
take on this type of informal support role voluntarily because they have an intrinsic
desire to help. Although it's an informal support mode, it shouldn't be undervalued.
Some estimates indicate that a large percentage of learning at work is peer learning,
which is particularly helpful for analysts who are creating domain-specific analytics
solutions.

7 Note

Intra-team support does not work well for individuals who are the only data analyst
within a department. It's also not effective for those who don't have very many
connections yet in their organization. When there aren't any close colleagues to
depend on, other types of support, as described in this article, become more
important.

Internal community support


Assistance from your fellow community members often takes the form of messages in a
discussion channel, or a forum set up specifically for the community of practice. For
example, someone posts a message that they're having problems getting a DAX
calculation to work or are looking for the right Python module to import. They then
receive a response from someone in the organization with suggestions or links.

 Tip

The goal of an internal Fabric community is to be self-sustaining, which can lead to
reduced formal support demands and costs. It can also facilitate managed self-
service content creation occurring on a broader scale versus a purely centralized
approach. However, there will always be a need to monitor, manage, and nurture
the internal community. Here are two specific tips:

Be sure to cultivate multiple experts in the more difficult topics like T-SQL,
Python, Data Analysis eXpressions (DAX) and the Power Query M formula
language. When a community member becomes a recognized expert, they
could become overburdened with too many requests for help.
A greater number of community members might readily answer certain types
of questions (for example, report visualizations), whereas a smaller number of
members will answer others (for example, complex T-SQL or DAX). It's
important for the COE to allow the community a chance to respond yet also
be willing to promptly handle unanswered questions. If users repeatedly ask
questions and don't receive an answer, it will significantly hinder growth of
the community. In this case, a user is likely to leave and never return if they
don't receive any responses to their questions.

An internal community discussion channel is commonly set up as a Teams channel or a
Yammer group. The technology chosen should reflect where users already work, so that
the activities occur within their natural workflow.

One benefit of an internal discussion channel is that responses can come from people
that the original requester has never met before. In larger organizations, a community of
practice brings people together based on a common interest. It can offer diverse
perspectives for getting help and learning in general.

Use of an internal community discussion channel allows the Center of Excellence (COE)
to monitor the kind of questions people are asking. It's one way the COE can understand
the issues users are experiencing (commonly related to content creation, but it could
also be related to consuming content).

Monitoring the discussion channel can also reveal additional analytics experts and
potential champions who were previously unknown to the COE.

) Important

It's a best practice to continually identify emerging champions, and to engage with
them to make sure they're equipped to support their colleagues. As described in
the Community of practice article, the COE should actively monitor the discussion
channel to see who is being helpful. The COE should deliberately encourage and
support community members. When appropriate, invite them into the champions
network.

Another key benefit of a discussion channel is that it's searchable, which allows other
people to discover the information. It is, however, a change of habit for people to ask
questions in an open forum rather than private messages or email. Be sensitive to the
fact that some individuals aren't comfortable asking questions in such a public way. It
openly acknowledges what they don't know, which might be embarrassing. This
reluctance might reduce over time by promoting a friendly, encouraging, and helpful
discussion channel.

 Tip

You might be tempted to create a bot to handle some of the most common,
straightforward questions from the community. A bot can work for uncomplicated
questions such as "How do I request a license?" or "How do I request a
workspace?" Before taking this approach, consider if there are enough routine and
predictable questions that would make the user experience better rather than
worse. Often, a well-created FAQ (frequently asked questions) works better, and it's
faster to develop and easier to maintain.

Help desk support


The help desk is usually operated as a shared service, staffed by the IT department.
Users who will likely rely on a more formal support channel include those who are:

Less experienced users.


Newer to the organization.
Reluctant to post a message to the internal discussion community.
Lacking connections and colleagues within the organization.

There are also certain technical issues that can't be fully resolved without IT involvement,
like software installation and upgrade requests when machines are IT-managed.

Busy help desk personnel are usually dedicated to supporting multiple technologies. For
this reason, the easiest types of issues to support are those which have a clear resolution
and can be documented in a knowledgebase. For instance, software installation
prerequisites or requirements to get a license.

Some organizations ask the help desk to handle only very simple break-fix issues. Other
organizations have the help desk get involved with anything that is repeatable, like new
workspace requests, managing gateway data sources, or requesting a new capacity.

) Important

Your Fabric governance decisions will directly impact the volume of help desk
requests. For example, if you choose to limit workspace creation permissions in
the tenant settings, it will result in users submitting help desk tickets. While it's a
legitimate decision to make, you must be prepared to satisfy the request very
quickly. Respond to this type of request within 1-4 hours, if possible. If you delay
too long, users will use what they already have or find a way to work around your
requirements. That might not be the ideal scenario. Promptness is critical for certain
help desk requests. Consider that automation by using Power Apps and Power
Automate can help make some processes more efficient. For more information, see
Tenant-level workspace planning.
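
The callout above mentions Power Apps and Power Automate; a scripted approach against the Power BI REST API is another way to shorten turnaround for approved workspace requests. The following is a minimal sketch under stated assumptions, not a complete solution: it assumes you've already acquired a Microsoft Entra access token with the appropriate Power BI scope (for example, with MSAL), and it leaves out the approval workflow, error handling, and assigning the workspace to a capacity.

# Minimal sketch: create a workspace through the Power BI REST API once a
# help desk request is approved. ACCESS_TOKEN is a placeholder; acquiring a
# Microsoft Entra token (for example, with MSAL) is out of scope here.
import requests

ACCESS_TOKEN = "<acquired-with-msal-or-similar>"   # placeholder, not a real token
API = "https://api.powerbi.com/v1.0/myorg"

def create_workspace(name: str) -> dict:
    """Create a new (V2) workspace and return the service response."""
    response = requests.post(
        f"{API}/groups",
        params={"workspaceV2": "True"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"name": name},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    workspace = create_workspace("Sales Analytics [Dev]")   # hypothetical workspace name
    print(workspace.get("id"), workspace.get("name"))

Pairing a script like this with your ticketing system keeps response times short while preserving the review step that the restricted tenant setting is meant to enforce.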

Over time, troubleshooting and problem resolution skills become more effective as help
desk personnel expand their knowledgebase and experience with supporting Fabric. The
best help desk personnel are those who have a good grasp of what users need to
accomplish.

 Tip

Purely technical issues, for example data refresh failure or the need to add a new
user to a gateway data source, usually involve straightforward responses
associated with a service-level agreement (SLA). For instance, there could be an SLA
to respond to blocking issues within one hour and resolve them within eight hours.
It's generally more difficult to define SLAs for troubleshooting issues, like data
discrepancies.

Extended support
Since the COE has deep insight into how Fabric is used throughout the organization,
they're a great option for extended support should a complex issue arise. Involving the
COE in the support process should happen through an escalation path.

Managing requests as purely an escalation path from the help desk gets difficult to
enforce since COE members are often well-known to business users. To encourage the
habit of going through the proper channels, COE members should redirect users to
submit a help desk ticket. It will also improve the data quality for analyzing help desk
requests.
Microsoft support
In addition to the internal user support approaches discussed in this article, there are
valuable external support options directly available to users and Fabric administrators
that shouldn't be overlooked.

Microsoft documentation
Check the Fabric support website for high-priority issues that broadly affect all
customers. Global Microsoft 365 administrators have access to additional support issue
details within the Microsoft 365 portal.

Refer to the comprehensive Fabric documentation. It's an authoritative resource that can
aid in troubleshooting and searching for information. You can prioritize results from the
documentation site. For example, enter a site-targeted search request into your web
search engine, like power bi gateway site:learn.microsoft.com.

Power BI Pro and Premium Per User end-user support


Licensed users are eligible to log a support ticket with Microsoft.

 Tip

Make it clear to your internal user community whether you prefer technical issues
to be reported to the internal help desk. If your help desk is equipped to handle the
workload, having a centralized internal area collect user issues can provide a
superior user experience versus every user trying to resolve issues on their own.
Having visibility and analyzing support issues is also helpful for the COE.

Administrator support
There are several support options available for Fabric administrators.

For customers who have a Microsoft Unified Support contract, consider granting help
desk and COE members access to the Microsoft Services Hub . One advantage of the
Microsoft Services Hub is that your help desk and COE members can be set up to
submit and view support requests.

Worldwide community support


In addition to the internal user support approaches described in this article, and
Microsoft support options described previously, you can leverage the worldwide Fabric
community.

The worldwide community is useful when a question can be easily understood by
someone without domain knowledge, and when it doesn't involve confidential data or
sensitive internal processes.

Publicly available community forums


There are several public community forums where users can post issues and receive
responses from any user in the world. Getting answers from anyone, anywhere, can be
very powerful and exceedingly helpful. However, as is the case with any public forum, it's
important to validate the advice and information posted on the forum. The advice
posted on the internet might not be suitable for your situation.

Publicly available discussion areas


It's very common to see people posting Fabric technical questions on social media
platforms. You might find discussions, announcements, and users helping each other.

Community documentation
The Fabric global community is vibrant. Every day, there are a great number of Fabric
blog posts, articles, webinars, and videos published. When relying on community
information for troubleshooting, watch out for:

How recent the information is. Try to verify when it was published or last updated.
Whether the situation and context of the solution found online truly fits your
circumstance.
The credibility of the information being presented. Rely on reputable blogs and
sites.

Considerations and key actions

Checklist - Considerations and key actions you can take for user support follow.
Improve your intra-team support:

" Provide recognition and encouragement: Provide incentives to your champions as


described in the Community of practice article.
" Reward efforts: Recognize and praise meaningful grassroots efforts when you see
them happening.
" Create formal roles: If informal intra-team efforts aren't adequate, consider
formalizing the roles you want to enact in this area. Include the expected
contributions and responsibilities in the HR job description, when appropriate.

Improve your internal community support:

" Continually encourage questions: Encourage users to ask questions in the


designated community discussion channel. As the habit builds over time, it will
become normalized to use that as the first option. Over time, it will evolve to
become more self-supporting.
" Actively monitor the discussion area: Ensure that the appropriate COE members
actively monitor this discussion channel. They can step in if a question remains
unanswered, improve upon answers, or make corrections when appropriate. They
can also post links to additional information to raise awareness of existing
resources. Although the goal of the community is to become self-supporting, it still
requires dedicated resources to monitor and nurture it.
" Communicate options available: Make sure your user population knows the
internal community support area exists. It could include the prominent display of
links. You can include a link in regular communications to your user community. You
can also customize the help menu links in the Fabric portal to direct users to your
internal resources.
" Set up automation: Ensure that all licensed users automatically have access to the
community discussion channel. It's possible to automate license setup by using
group-based licensing.
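
As one hedged illustration of that automation, the sketch below adds a user to a Microsoft Entra security group by calling Microsoft Graph. If group-based licensing and the discussion channel's membership are both driven by that group, adding the member grants the license and the channel access together. It assumes an access token with the Group.ReadWrite.All permission; the group and user IDs are placeholders.

# Hedged sketch: add a user to the Entra security group that drives group-based
# licensing and community channel membership. ACCESS_TOKEN must be a Microsoft
# Graph token with Group.ReadWrite.All; the IDs below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<graph-token>"                      # placeholder
COMMUNITY_GROUP_ID = "<entra-group-object-id>"      # placeholder

def add_member(user_object_id: str) -> None:
    """Add a directory object (a user) to the community security group."""
    response = requests.post(
        f"{GRAPH}/groups/{COMMUNITY_GROUP_ID}/members/$ref",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"@odata.id": f"{GRAPH}/directoryObjects/{user_object_id}"},
        timeout=30,
    )
    response.raise_for_status()                     # Graph returns 204 No Content on success

if __name__ == "__main__":
    add_member("<new-user-object-id>")              # placeholder

Your onboarding workflow or identity lifecycle tooling would normally call something like this automatically rather than by hand.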

Improve your internal help desk support:

" Determine help desk responsibilities: Decide what the initial scope of Fabric
support topics that the help desk will handle.
" Assess the readiness level: Determine whether your help desk is prepared to handle
Fabric support. Identify whether there are readiness gaps to be addressed.
" Arrange for additional training: Conduct knowledge transfer sessions or training
sessions to prepare the help desk staff.
" Update the help desk knowledgebase: Include known questions and answers in a
searchable knowledgebase. Ensure someone is responsible for regular updates to
the knowledgebase to reflect new and enhanced features over time.
" Set up a ticket tracking system: Ensure a good system is in place to track requests
submitted to the help desk.
" Decide whether anyone will be on-call for any issues related to Fabric: If
appropriate, ensure the expectations for 24/7 support are clear.
" Determine what SLAs will exist: When a specific service level agreement (SLA)
exists, ensure that expectations for response and resolution are clearly documented
and communicated.
" Be prepared to act quickly: Be prepared to address specific common issues
extremely quickly. Slow support response will result in users finding workarounds.

Improve your internal COE extended support:

" Determine how escalated support will work: Decide what the escalation path will
be for requests the help desk cannot directly handle. Ensure that the COE (or
equivalent personnel) is prepared to step in when needed. Clearly define where
help desk responsibilities end, and where COE extended support responsibilities
begin.
" Encourage collaboration between COE and system administrators: Ensure that
COE members and Fabric administrators have a direct escalation path to reach
global administrators for Microsoft 365 and Azure. It's critical to have a
communication channel when a widespread issue arises that's beyond the scope of
Fabric.
" Create a feedback loop from the COE back to the help desk: When the COE learns
of new information, the IT knowledgebase should be updated. The goal is for the
primary help desk personnel to continually become better equipped at handling
more issues in the future.
" Create a feedback loop from the help desk to the COE: When support personnel
observe redundancies or inefficiencies, they can communicate that information to
the COE, who might choose to improve the knowledgebase or get involved
(particularly if it relates to governance or security).

Questions to ask

Use questions like those found below to assess user support.


Who is responsible for supporting enterprise data and BI solutions? What about
self-service solutions?
How are the business impact and urgency of issues identified to effectively detect
and prioritize critical issues?
Is there a clear process for business users to report issues with data and BI
solutions? How does this differ between enterprise and self-service solutions?
What are the escalation paths?
What types of issues do content creators and consumers typically experience? For
example, do they experience data quality issues, performance issues, access issues,
and others?
Are any issues closed without being resolved? Are there "known issues" in
data items or reports today?
Is a process in place for data asset owners to escalate issues with self-service BI
solutions to central teams like the COE?
How frequent are issues in the data and existing solutions? What proportion of
these issues are found before they impact business end users?
How long does it typically take to resolve issues? Is this timing sufficient for
business users?
What are examples of recent issues and the concrete impact on the business?
Do enterprise teams and content creators know how to report Fabric issues to
Microsoft? Can enterprise teams effectively leverage community resources to
unblock critical issues?

U Caution

When assessing user support and describing risks or issues, be careful to use
neutral language that doesn't place blame on individuals or teams. Ensure
everyone's perspective is fairly represented in an assessment. Focus on objective
facts to accurately understand and describe the context.

Maturity levels

The following maturity levels will help you assess the current state of your Power BI user
support.
Level | State of user support

100: Initial
• Individual business units find effective ways of supporting each other. However, the tactics and practices are siloed and not consistently applied.
• An internal discussion channel is available. However, it's not monitored closely. Therefore, the user experience is inconsistent.

200: Repeatable
• The COE actively encourages intra-team support and growth of the champions network.
• The internal discussion channel gains traction. It's become known as the default place for questions and discussions.
• The help desk handles a small number of the most common technical support issues.

300: Defined
• The internal discussion channel is popular and largely self-sustaining. The COE actively monitors and manages the discussion channel to ensure that all questions are answered quickly and correctly.
• A help desk tracking system is in place to monitor support frequency, response topics, and priorities.
• The COE provides appropriate extended support when required.

400: Capable
• The help desk is fully trained and prepared to handle a broader number of known and expected technical support issues.
• SLAs are in place to define help desk support expectations, including extended support. The expectations are documented and communicated so they're clear to everyone involved.

500: Efficient
• Bidirectional feedback loops exist between the help desk and the COE.
• Key performance indicators measure satisfaction and support methods.
• Automation is in place to allow the help desk to react faster and reduce errors (for example, use of APIs and scripts).

Related content
In the next article in the Microsoft Fabric adoption roadmap series, learn about system
oversight and administration activities.


Microsoft Fabric adoption roadmap:
System oversight
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

System oversight—also known as Fabric administration—comprises the ongoing,
day-to-day administrative activities. It's specifically concerned with:

Governance: Enact governance guidelines and policies to support self-service and
enterprise data and business intelligence (BI) scenarios.
User empowerment: Facilitate and support the internal processes and systems that
empower the internal user community to the extent possible, while adhering to the
organization's regulations and requirements.
Adoption: Allow for broader organizational adoption of Fabric with effective
governance and data management practices.

) Important

Your organizational data culture objectives provide direction for your governance
decisions, which in turn dictate how Fabric administration activities take place and
by whom.

System oversight is a broad and deep topic. The goal of this article is to introduce some
of the most important considerations and actions to help you become successful with
your organizational adoption objectives.

Fabric administrators
The Fabric administrator role is a defined role in Microsoft 365, which delegates a subset
of management activities. Global Microsoft 365 administrators are implicitly Fabric
administrators. Power Platform administrators are also implicitly Fabric administrators.

A key governance decision is who to assign as a Fabric administrator. It's a centralized
role that affects your entire tenant. Ideally, there are two to four people in the
organization who are capable of managing Fabric. Your administrators should operate in
close coordination with the Center of Excellence (COE).

High privilege role


The Fabric administrator role is a high privilege role because:

User experience: Settings that are managed by a Fabric administrator have a
significant effect on user capabilities and user experience. For more information,
see Govern tenant settings.
Full security access: Fabric administrators can update access permissions for
workspaces in the tenant. The result is that an administrator can allow permission
to view or download data and reports as they see fit. For more information, see
Govern tenant settings.
Personal workspace access: Fabric administrators can access contents and govern
the personal workspace of any user.
Metadata: Fabric administrators can view all tenant metadata, including all user
activities that occur in the Fabric portal (described in the Auditing and monitoring
section below).

) Important

Having too many Fabric administrators is a risk. It increases the probability of
unapproved, unintended, or inconsistent management of the tenant.

Roles and responsibilities


The types of activities that an administrator will do on a day-to-day basis will differ
between organizations. What's important, and given priority in your data culture, will
heavily influence what an administrator does to support business-led self-service,
managed self-service, and enterprise data and BI scenarios. For more information, see
the Content ownership and management article.

 Tip

The best type of person to serve as a Fabric administrator is one who has enough
knowledge about the tools and workloads to understand what self-service users
need to accomplish. With this understanding, the administrator can balance user
empowerment and governance.
In addition to the Fabric administrator, there are other roles which use the term
administrator. The following table describes the roles that are commonly and regularly
used.

Role | Scope | Description

Fabric administrator | Tenant | Manages tenant settings and other settings in the Fabric portal. All general references to administrator in this article refer to this type of administrator.

Capacity administrator | One capacity | Manages workspaces and workloads, and monitors the health of a Fabric capacity.

Data gateway administrator | One gateway | Manages gateway data source configuration, credentials, and user assignments. Might also handle gateway software updates (or collaborate with the infrastructure team on updates).

Workspace administrator | One workspace | Manages workspace settings and access.

The Fabric ecosystem of workloads is broad and deep. There are many ways that Fabric
integrates with other systems and platforms. From time to time, it'll be necessary to
work with other administrators and IT professionals. For more information, see
Collaborate with other administrators.

The remainder of this article provides an overview of the most common activities that a
Fabric administrator does. It focuses on activities that are important to carry out
effectively when taking a strategic approach to organizational adoption.

Service management
Overseeing the tenant is crucial to ensure that all users have a good experience
with Power BI. A few of the key governance responsibilities of a Fabric administrator
include:

Tenant settings: Control which Power BI features and capabilities are enabled, and
for which users in your organization.
Domains: Group together two or more workspaces that have similar
characteristics.
Workspaces: Review and manage workspaces in the tenant.
Embed codes: Govern which reports have been published publicly on the internet.
Organizational visuals: Register and manage organizational visuals.
Azure connections: Integrate with Azure services to provide additional
functionality.

For more information, see Tenant administration.

User machines and devices


The adoption of Fabric depends directly on content creators and consumers having the
tools and applications they need. Here are some important questions to consider.

How will users request access to new tools? Will access to licenses, data, and
training be available to help users use tools effectively?
How will content consumers view content that's been published by others?
How will content creators develop, manage, and publish content? What's your
criteria for deciding which tools and applications are appropriate for which use
cases?
How will you install and set up tools? Does that include related prerequisites and
data connectivity components?
How will you manage ongoing updates for tools and applications?

For more information, see User tools and devices.

Architecture
In the context of Fabric, architecture relates to data architecture, capacity management,
and data gateway architecture and management.

Data architecture
Data architecture refers to the principles, practices, and methodologies that govern and
define what data is collected, and how it's ingested, stored, managed, integrated,
modeled, and used.

There are many data architecture decisions to make. Frequently the COE engages in
data architecture design and planning. It's common for administrators to get involved as
well, especially when they manage databases or Azure infrastructure.

) Important

Data architecture decisions have a significant impact on Fabric adoption, user
satisfaction, and individual project success rates.
A few data architecture considerations that affect adoption include:

Where does Fabric fit into the organization's entire data architecture? Are there
other existing components such as an enterprise data warehouse (EDW) or a data
lake that will be important to factor into plans?
Is Fabric used end-to-end for data preparation, data modeling, and data
presentation or is Fabric used for only some of those capabilities?
Are managed self-service patterns followed to find the best balance between data
reusability and report creator flexibility?
Where will users consume the content? Generally, the three main ways to deliver
content are: the Fabric portal, Power BI Report Server, and embedded in custom
applications. Additionally, Microsoft Teams is a convenient alternative for users
who spend a lot of time in Teams.
Who is responsible for managing and maintaining the data architecture? Is it a
centralized team, or a decentralized team? How is the COE represented in this
team? Are certain skillsets required?
What data sources are the most important? What types of data will we be
acquiring?
What semantic model connectivity mode and storage mode choices (for example,
Direct Lake, import, live connection, DirectQuery, or composite model frameworks)
are the best fit for the use cases?
To what extent is data reusability encouraged using lakehouses, warehouses, and
shared semantic models?
To what extent is the reusability of data preparation logic and advanced data
preparation encouraged by using data pipelines, notebooks, and dataflows?

It's important for administrators to become fully aware of Fabric's technical capabilities
—as well as the needs and goals of their stakeholders—before they make architectural
decisions.

 Tip

Get into the good habit of completing a technical proof of concept (POC) to test
out assumptions and ideas. Some organizations also call them micro-projects when
the goal is to deliver a small unit of work. The goal of a POC is to address
unknowns and reduce risk as early as possible. A POC doesn't have to be
throwaway work, but it should be narrow in scope. Best practices reviews, as
described in the Mentoring and user enablement article, are another useful way to
help content creators with important architectural decisions.
Capacity management
Capacity includes features and capabilities to deliver analytics solutions at scale. There
are two types of Fabric organizational licenses: Premium per User (PPU) and capacity.
There are several types of capacity licenses. The type of capacity license determines
which Fabric workloads are supported.

) Important

At times this article refers to Power BI Premium or its capacity subscriptions (P
SKUs). Be aware that Microsoft is currently consolidating purchase options and
retiring the Power BI Premium per capacity SKUs. New and existing customers
should consider purchasing Fabric capacity subscriptions (F SKUs) instead.

For more information, see Important update coming to Power BI Premium
licensing and Power BI Premium FAQ.

The use of capacity can play a significant role in your strategy for creating, managing,
publishing, and distributing content. A few of the top reasons to invest in capacity
include:

Unlimited Power BI content distribution to large numbers of read-only users.
Content consumption by users with a free Power BI license is available in Premium
capacity only, not PPU. Content consumption by free users is also available with an
F64 Fabric capacity license or higher.
Access to Fabric experiences for producing end-to-end analytics.
Deployment pipelines to manage the publication of content to development, test,
and production workspaces. They're highly recommended for critical content to
improve release stability.
XMLA endpoint, which is an industry standard protocol for managing and
publishing a semantic model, or querying the semantic model from any XMLA-
compliant tool.
Increased model size limits, including large semantic model support.
More frequent data refreshes.
Storage of data in a specific geographic area that's different from the home region.

The above list isn't all-inclusive. For a complete list, see Power BI Premium features.

Manage Fabric capacity


Overseeing the health of Fabric capacity is an essential ongoing activity for
administrators. Each capacity SKU includes a set of resources. Capacity units (CUs) are
used to measure compute resources for each SKU.

U Caution

Lack of management, and consistently exceeding the limits of your capacity
resources, can often result in performance challenges and user experience
challenges. Both challenges, if not managed correctly, can have a negative
impact on adoption efforts.

Suggestions for managing Fabric capacity:

Define who is responsible for managing the capacity. Confirm the roles and
responsibilities so that it's clear what action will be taken, why, when, and by
whom.
Create a specific set of criteria for content that will be published to capacity. It's
especially relevant when a single capacity is used by multiple business units
because the potential exists to disrupt other users if the capacity isn't well-
managed. Consider requiring a best practices review (such as reasonable semantic
model size and efficient calculations) before publishing new content to a
production capacity.
Regularly use the Fabric capacity metrics app to understand resource utilization
and patterns for the capacity (a scripted capacity inventory sketch follows this
list). Most importantly, look for consistent patterns of overutilization, which will
contribute to user disruptions. An analysis of usage patterns should also make you
aware if the capacity is underutilized, indicating more value could be gained from
the investment.
Set the tenant setting so that Fabric notifies you if the capacity becomes overloaded, or if an outage or incident occurs.

Autoscale
Autoscale is intended to handle occasional or unexpected bursts in capacity usage
levels. Autoscale can respond to these bursts by automatically increasing CPU resources
to support the increased workload.

Automated scaling up reduces the risk of performance and user experience challenges
in exchange for a financial impact. If the capacity isn't well-managed, autoscale might
trigger more often than expected. In this case, the metrics app can help you to
determine underlying issues and do capacity planning.
Decentralized capacity management
Capacity administrators are responsible for assigning workspaces to a specific capacity.

Be aware that a workspace administrator can also assign a workspace to PPU if they have a PPU license. However, all other workspace users then also need a PPU license to collaborate on, or view, Power BI content in the workspace. Other Fabric workloads can't be included in a workspace assigned to PPU.

It's possible to set up multiple capacities to facilitate decentralized management by different business units. Decentralizing management of certain aspects of Fabric is a great way to balance agility and control.

Here's an example that describes one way you could manage your capacity.

Purchase a P3 capacity node in Microsoft 365. It includes 32 virtual cores (v-cores).
Use 16 v-cores to create the first capacity. It will be used by the Sales team.
Use 8 v-cores to create the second capacity. It will be used by the Operations team.
Use the remaining 8 v-cores to create the third capacity. It will support general use.

The previous example has several advantages.

Separate capacity administrators can be set up for each capacity. Therefore, it facilitates decentralized management situations.
If a capacity isn't well-managed, the effect is confined to that capacity only. The other capacities aren't impacted.
Billing and chargebacks to other business units are straightforward.
Different workspaces can be easily assigned to the separate capacities.

However, the previous example has disadvantages, too.

The limits per capacity are lower. The maximum memory size allowed for semantic
models isn't the entire P3 capacity node size that was purchased. Rather, it's the
assigned capacity size where the semantic model is hosted.
It's more likely one of the smaller capacities will need to be scaled up at some
point in time.
There are more capacities to manage in the tenant.

7 Note

Resources for Power BI Premium per Capacity are referred to as v-cores. However, a
Fabric capacity refers to them as capacity units (CUs). The scale for CUs and v-cores
is different for each SKU. For more information, see the Fabric licensing
documentation.
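
For example, based on the published SKU equivalences at the time of writing, a P1 capacity (8 v-cores) corresponds to an F64 capacity (64 CUs), and a P2 (16 v-cores) corresponds to an F128 (128 CUs). Verify the current mapping in the Fabric licensing documentation before planning, because SKU offerings change over time.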

Data gateway architecture and management


A data gateway facilitates the secure and efficient transfer of data between
organizational data sources and the Fabric service. A data gateway is needed for data
connectivity to on-premises or cloud services when a data source is:

Located within the enterprise data center.
Configured behind a firewall.
Within a virtual network.
Within a virtual machine.

There are three types of gateways.

On-premises data gateway (standard mode) is a gateway service that supports connections to registered data sources for many users to use. The gateway software, and its updates, are installed on a machine that's managed by the customer.
On-premises data gateway (personal mode) is a gateway service that supports
data refresh only. This gateway mode is typically installed on the PC of a content
creator. It supports use by one user only. It doesn't support live connection or
DirectQuery connections.
Virtual network data gateway is a Microsoft managed service that supports
connectivity for many users. Specifically, it supports connectivity for semantic
models and dataflows stored in workspaces assigned to Premium capacity or
Premium Per User.

 Tip

The decision of who can install gateway software is a governance decision. For
most organizations, use of the data gateway in standard mode, or a virtual network
data gateway, should be strongly encouraged. They're far more scalable,
manageable, and auditable than data gateways in personal mode.

Decentralized gateway management


The On-premises data gateway (standard mode) and Virtual network data gateway
support specific data source types that can be registered, together with connection
details and how credentials are stored. Users can be granted permission to use the
gateway data source so that they can schedule a refresh or run DirectQuery queries.
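
Granting this permission can also be automated. The following is a minimal sketch that uses the Power BI Gateways REST API (Add Datasource User). It assumes a Microsoft Entra access token with the required Power BI scopes is already available; the token variable, IDs, and email address are placeholders.

```python
import requests

# Minimal sketch: grant a user permission to use a gateway data source by calling the
# Power BI "Gateways - Add Datasource User" REST API. The IDs and email address below
# are placeholders, and `token` is assumed to hold a valid access token.
token = "<access token acquired via MSAL or another supported flow>"
gateway_id = "<gateway id>"
datasource_id = "<datasource id>"

url = (
    "https://api.powerbi.com/v1.0/myorg/gateways/"
    f"{gateway_id}/datasources/{datasource_id}/users"
)
payload = {
    "emailAddress": "user@contoso.com",   # user who needs to schedule refresh or run queries
    "datasourceAccessRight": "Read",      # read access to the data source
}

response = requests.post(url, json=payload, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print("Data source permission granted.")
```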

Certain aspects of gateway management can be done effectively on a decentralized basis to balance agility and control. For example, the Operations group might have a gateway dedicated to its team of self-service content creators and data owners.

Decentralized gateway management works best when it's a joint effort as follows.

Managed by the decentralized data owners:

Departmental data source connectivity information and privacy levels.


Departmental data source stored credentials (including responsibility for updating
routine password changes).
Departmental data source users who are permitted to use each data source.

Managed by centralized data owners (includes data sources that are used broadly across
the organization; management is centralized to avoid duplicated data sources):

Centralized data source connectivity information and privacy levels.


Centralized data source stored credentials (including responsibility for updating
routine password changes).
Centralized data source users who are permitted to use each data source.

Managed by IT:

Gateway software updates (gateway updates are usually released monthly).


Installation of drivers and custom connectors (the same ones that are installed on
user machines).
Gateway cluster management (number of machines in the gateway cluster for high
availability, disaster recovery, and to eliminate a single point of failure, which can
cause significant user disruptions).
Server management (for example, operating system, RAM, CPU, or networking
connectivity).
Management and backup of gateway encryption keys.
Monitoring of gateway logs to assess when scale-up or scale-out is necessary.
Alerting of downtime or persistent low resources on the gateway machine.

 Tip

Allowing a decentralized team to manage certain aspects of the gateway means they can move faster. The tradeoff of decentralized gateway management is running more gateway servers so that each can be dedicated to a specific area of the organization. If gateway management is handled entirely by IT, it's imperative to have a good process in place to quickly handle requests to add data sources and apply user updates.

User licenses
Every user needs a commercial license, which is integrated with a Microsoft Entra
identity. The user license could be Free, Pro, or Premium Per User (PPU).

A user license is obtained via a subscription, which authorizes a certain number of licenses with a start and end date.

7 Note

Although each user requires a license, a Pro or PPU license is only required to share
Power BI content. Users with a free license can create and share Fabric content
other than Power BI items.

There are two approaches to procuring subscriptions.

Centralized: A Microsoft 365 billing administrator purchases a subscription for Pro or PPU. It's the most common way to manage subscriptions and assign licenses.
Decentralized: Individual departments purchase a subscription via self-service purchasing.

Self-service purchasing
An important governance decision relates to what extent self-service purchasing will be
allowed or encouraged.

Self-service purchasing is useful for:

Larger organizations with decentralized business units that have purchasing authority and want to handle payment directly with a credit card.
Organizations that intend to make it as easy as possible to purchase subscriptions on a monthly commitment.

Consider disabling self-service purchasing when:

Centralized procurement processes are in place to meet regulatory, security, and governance requirements.
Discounted pricing is obtained through an Enterprise Agreement (EA).
Existing processes are in place to handle intercompany chargebacks.
Existing processes are in place to handle group-based licensing assignments.
Prerequisites are required for obtaining a license, such as approval, justification,
training, or a governance policy requirement.
There's a valid need, such as a regulatory requirement, to control access closely.

User license trials


Another important governance decision is whether user license trials are allowed. By
default, trials are enabled. That means when content is shared with a colleague, if the
recipient doesn't have a Pro or PPU license, they'll be prompted to start a trial to view
the content (if the content doesn't reside within a workspace backed by capacity). The
trial experience is intended to be a convenience that allows users to continue with their
normal workflow.

Generally, disabling trials isn't recommended. It can encourage users to seek workarounds, perhaps by exporting data or working outside of supported tools and processes.

Consider disabling trials only when:

There are serious cost concerns that would make it unlikely to grant full licenses at
the end of the trial period.
Prerequisites are required for obtaining a license (such as approval, justification, or
a training requirement). It's not sufficient to meet this requirement during the trial
period.
There's a valid need, such as a regulatory requirement, to control access to the
Fabric service closely.

 Tip

Don't introduce too many barriers to obtaining a Fabric license. Users who need to
get work done will find a way, and that way might involve workarounds that aren't
ideal. For instance, without a license to use Fabric, people might rely far too much
on sharing files on a file system or via email when significantly better approaches
are available.

Cost management
Managing and optimizing the cost of cloud services, like Fabric, is an important activity.
Here are several activities you can consider.
Analyze who is using—and, more to the point, not using—their allocated Fabric
licenses and make necessary adjustments. Fabric usage is analyzed using the
activity log.
Analyze the cost effectiveness of capacity or Premium Per User. Beyond the extra features, perform a cost/benefit analysis to determine whether capacity licensing is more cost-effective when there's a large number of consumers.
Carefully monitor and manage Fabric capacity. Understanding usage patterns over
time will allow you to predict when to purchase more capacity. For example, you
might choose to scale up a single capacity from a P1 to P2, or scale out from one
P1 capacity to two P1 capacities.
If there are occasional spikes in the level of usage, use of autoscale with Fabric is
recommended to ensure the user experience isn't interrupted. Autoscale will scale
up capacity resources for 24 hours, then scale them back down to normal levels (if
sustained activity isn't present). Manage autoscale cost by constraining the
maximum number of v-cores, and/or with spending limits set in Azure. Due to the
pricing model, autoscale is best suited to handle occasional unplanned increases in
usage.
For Azure data sources, co-locate them in the same region as your Fabric tenant whenever possible. It will avoid incurring Azure egress charges. Data egress charges are minimal, but at scale they can add up to considerable unplanned costs.

Security, information protection, and data loss prevention
Security, information protection, and data loss prevention (DLP) are joint responsibilities
among all content creators, consumers, and administrators. That's no small task because
there's sensitive information everywhere: personal data, customer data, or customer-
authored data, protected health information, intellectual property, proprietary
organizational information, just to name a few. Governmental, industry, and contractual
regulations could have a significant impact on the governance guidelines and policies
that you create related to security.

The Power BI security whitepaper is an excellent resource for understanding the breadth
of considerations, including aspects that Microsoft manages. This section will introduce
several topics that customers are responsible for managing.

User responsibilities
Some organizations ask Fabric users to accept a self-service user acknowledgment. It's a
document that explains the user's responsibilities and expectations for safeguarding
organizational data.

One way to automate its implementation is with a Microsoft Entra terms of use policy.
The user is required to view and agree to the policy before they're permitted to visit the
Fabric portal for the first time. You can also require it to be acknowledged on a recurring
basis, like an annual renewal.

Data security
In a cloud shared responsibility model, securing the data is always the responsibility of the customer. With a self-service data platform, self-service content creators have responsibility for properly securing the content that they share with colleagues.

The COE should provide documentation and training where relevant to assist content
creators with best practices (particularly situations for dealing with ultra-sensitive data).

Administrators can help by following best practices themselves. Administrators can also raise concerns about issues that they discover when managing workspaces, auditing user activities, or managing gateway credentials and users. There are also several tenant settings that are usually restricted to a few users (for instance, the ability to publish to web or the ability to publish apps to the entire organization).

External guest users


External users—such as partners, customers, vendors, and consultants—are a common
occurrence for some organizations, and rare for others. How you handle external users is
a governance decision.

External user access is controlled by tenant settings and certain Microsoft Entra ID
settings. For details of external user considerations, review the Distribute Power BI
content to external guest users using Microsoft Entra B2B whitepaper.

Information protection and data loss prevention


Fabric supports capabilities for information protection and data loss prevention (DLP) in
the following ways.

Information protection: Microsoft Purview Information Protection (formerly known as Microsoft Information Protection) includes capabilities for discovering, classifying, and protecting data. A key principle is that data can be better protected once it's been classified. The key building block for classifying data is sensitivity labels. For more information, see Information protection for Power BI planning.
Data loss prevention for Power BI: Microsoft Purview Data Loss Prevention
(formerly known as Office 365 Data Loss Prevention) supports DLP policies for
Power BI. By using sensitivity labels or sensitive information types, DLP policies for
Power BI help an organization locate sensitive semantic models. For more
information, see Data loss prevention for Power BI planning.
Microsoft Defender for Cloud Apps: Microsoft Defender for Cloud Apps (formerly
known as Microsoft Cloud App Security) supports policies that help protect data,
including real-time controls when users interact with the Power BI service. For
more information, see Defender for Cloud Apps for Power BI planning.

Data residency
For organizations with requirements to store data within a geographic region, Fabric
capacity can be set for a specific region that's different from the home region of the
Fabric tenant.

Encryption keys
Microsoft handles encryption of data at rest in Microsoft data centers with transparent
server-side encryption and auto-rotation of certificates. For customers with regulatory
requirements to manage the Premium encryption key themselves, Premium capacity can
be configured to use Azure Key Vault. Using customer-managed keys—also known as
bring-your-own-key or BYOK—is a precaution to ensure that, in the event of a human
error by a service operator, customer data can't be exposed.

Be aware that Premium Per User (PPU) only supports BYOK when it's enabled for the
entire Fabric tenant.

Auditing and monitoring


It's critical that you make use of auditing data to analyze adoption efforts, understand
usage patterns, educate users, support users, mitigate risk, improve compliance, manage
license costs, and monitor performance. For more information about why auditing your
data is valuable, see Auditing and monitoring overview.

There are different ways to approach auditing and monitoring depending on your role
and your objectives. The following articles describe various considerations and planning
activities.
Report-level auditing: Techniques that report creators can use to understand
which users are using the reports that they create, publish, and share.
Data-level auditing: Methods that data creators can use to track the performance
and usage patterns of data assets that they create, publish, and share.
Tenant-level auditing: Key decisions and actions administrators can take to create
an end-to-end auditing solution.
Tenant-level monitoring: Tactical actions administrators can take to monitor the
Power BI service, including updates and announcements.

REST APIs
The Power BI REST APIs and the Fabric REST APIs provide a wealth of information about
your Fabric tenant. Retrieving data by using the REST APIs should play an important role
in managing and governing a Fabric implementation. For more information about
planning for the use of REST APIs for auditing, see Tenant-level auditing.

You can retrieve auditing data to build an auditing solution, manage content
programmatically, or increase the efficiency of routine actions. The following table
presents some actions you can perform with the REST APIs.

Action | Documentation resource(s)
Audit user activities | REST API to get activity events
Audit workspaces, items, and permissions | Collection of asynchronous metadata scanning REST APIs to obtain a tenant inventory
Audit content shared to entire organization | REST API to check use of widely shared links
Audit tenant settings | REST API to check tenant settings
Publish content | REST API to deploy items from a deployment pipeline or clone a report to another workspace
Manage content | REST API to refresh a semantic model or take over ownership of a semantic model
Manage gateway data sources | REST API to update credentials for a gateway data source
Export content | REST API to export a report
Create workspaces | REST API to create a new workspace
Manage workspace permissions | REST API to assign user permissions to a workspace
Update workspace name or description | REST API to update workspace attributes
Restore a workspace | REST API to restore a deleted workspace
Programmatically retrieve a query result from a semantic model | REST API to run a DAX query against a semantic model
Assign workspaces to capacity | REST API to assign workspaces to capacity
Programmatically change a data model | Tabular Object Model (TOM) API
Embed Power BI content in custom applications | Power BI embedded analytics client APIs

 Tip

There are many other Power BI REST APIs. For a complete list, see Using the Power
BI REST APIs.
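
To illustrate the first row of the table above, the following is a minimal sketch of retrieving one day of user activity events with the Power BI admin REST API (Get Activity Events). It assumes a Microsoft Entra access token with the required admin permissions is already available (for example, acquired with the MSAL library); the token variable, dates, and file handling are placeholders.

```python
import requests

# Minimal sketch: pull one UTC day of activity events from the Power BI admin REST API
# (GET /v1.0/myorg/admin/activityevents). The API requires start and end datetimes that
# fall on the same UTC day, wrapped in single quotes. `token` is assumed to hold a valid
# access token with admin permissions; the dates below are placeholders.
token = "<access token acquired via MSAL or another supported flow>"
headers = {"Authorization": f"Bearer {token}"}

url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2025-01-15T00:00:00Z'&endDateTime='2025-01-15T23:59:59Z'"
)

events = []
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    # Follow the continuation URI until the API reports no more pages.
    url = payload.get("continuationUri") if payload.get("continuationToken") else None

print(f"Retrieved {len(events)} activity events.")
```

In practice, a scheduled job would run a sketch like this daily and store the raw results in a secure location for later transformation and reporting.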

Planning for change


Every month, Microsoft releases new Fabric features and capabilities. To be effective, it's
crucial that everyone involved with system oversight stays current. For more
information, see Tenant-level monitoring.

) Important

Don't underestimate the importance of staying current. If you get a few months
behind on announcements, it can become difficult to properly manage Fabric and
support your users.

Considerations and key actions

Checklist - Considerations and key actions you can take for system oversight follow.
Improve system oversight:

" Verify who is permitted to be a Fabric administrator: If possible, reduce the number of people granted the Fabric administrator role if it's more than a few people.
" Use PIM for occasional administrators: If you have people who occasionally need
Fabric administrator rights, consider implementing Privileged Identity Management
(PIM) in Microsoft Entra ID. It's designed to assign just-in-time role permissions that
expire after a few hours.
" Train administrators: Check the status of cross-training and documentation in place for handling Fabric administration responsibilities. Ensure that a backup person is trained so that needs can be met in a timely, consistent way.

Improve management of the Fabric service:

" Review tenant settings: Conduct a review of all tenant settings to ensure they're
aligned with data culture objectives and governance guidelines and policies. Verify
which groups are assigned for each setting.
" Document the tenant settings: Create documentation of your tenant settings for the internal Fabric community and post it in the centralized portal. Include which groups a user would need to request to be able to use a feature. Use the Get Tenant Settings REST API to make the process more efficient, and to create snapshots of the settings on a regular basis (see the sketch after this list).
" Customize the Get Help links: When user resources are established, as described in
the Mentoring and user enablement article, update the tenant setting to customize
the links under the Get Help menu option. It will direct users to your
documentation, community, and help.
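
The following is a minimal sketch of how a tenant settings snapshot could be automated with the Fabric admin REST API (List Tenant Settings). It assumes an access token with Fabric administrator permissions is already available; the token variable and output path are placeholders.

```python
import datetime
import json
import requests

# Minimal sketch: snapshot the current tenant settings with the Fabric admin REST API
# (GET https://api.fabric.microsoft.com/v1/admin/tenantsettings) and save the raw JSON
# to a dated file. `token` is assumed to hold a valid Fabric administrator access token.
token = "<access token acquired via MSAL or another supported flow>"

response = requests.get(
    "https://api.fabric.microsoft.com/v1/admin/tenantsettings",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

snapshot_name = f"tenant-settings-{datetime.date.today().isoformat()}.json"
with open(snapshot_name, "w", encoding="utf-8") as snapshot_file:
    json.dump(response.json(), snapshot_file, indent=2)

print(f"Tenant settings saved to {snapshot_name}")
```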

Improve management of user machines and devices:

" Create a consistent onboarding process: Review your process for how onboarding
of new content creators is handled. Determine if new requests for software, such as
Power BI Desktop, and user licenses (Free, Pro, or PPU) can be handled together. It
can simplify onboarding since new content creators won't always know what to ask
for.
" Handle user machine updates: Ensure an automated process is in place to install
and update software, drivers, and settings to ensure all users have the same version.

Data architecture planning:

" Assess what your end-to-end data architecture looks like: Make sure you're clear
on:
How Fabric is currently used by the different business units in your organization
versus how you want Fabric to be used. Determine if there's a gap.
If there are any risks that should be addressed.
If there are any high-maintenance situations to be addressed.
What data sources are important for Fabric users, and how they're documented
and discovered.
" Review existing data gateways: Find out what gateways are being used throughout
your organization. Verify that gateway administrators and users are set correctly.
Verify who is supporting each gateway, and that there's a reliable process in place
to keep the gateway servers up to date.
" Verify use of personal gateways: Check the number of personal gateways that are
in use, and by whom. If there's significant usage, take steps to move towards use of
the standard mode gateway.

Improve management of user licenses:

" Review the process to request a user license: Clarify what the process is, including
any prerequisites, for users to obtain a license. Determine whether there are
improvements to be made to the process.
" Determine how to handle self-service license purchasing: Clarify whether self-service license purchasing is enabled. Update the settings if they don't match your intentions for how licenses can be purchased.
" Confirm how user trials are handled: Verify user license trials are enabled or
disabled. Be aware that all user trials are Premium Per User. They apply to Free
licensed users signing up for a trial, and Pro users signing up for a Premium Per
User trial.

Improve cost management:

" Determine your cost management objectives: Consider how to balance cost, features, usage patterns, and effective utilization of resources. Schedule a routine process to evaluate costs, at least annually.
" Obtain activity log data: Ensure you have access to the activity log data to assist
with cost analysis. It can be used to understand who is—or isn't—using the license
assigned to them.

Improve security and data protection:

" Clarify exactly what the expectations are for data protection: Ensure the
expectations for data protection, such as how to use sensitivity labels, are
documented and communicated to users.
" Determine how to handle external users: Understand and document the
organizational policies around sharing Fabric content with external users. Ensure
that settings in Fabric support your policies for external users.
" Set up monitoring: Investigate the use of Microsoft Defender for Cloud Apps to
monitor user behavior and activities in Fabric.

Improve auditing and monitoring:

" Plan for auditing needs: Collect and document the key business requirements for an auditing solution. Consider your priorities for auditing and monitoring. Make key decisions related to the type of auditing solution, permissions, technologies to be used, and data needs. Consult with IT to clarify what auditing processes currently exist, and what preferences or requirements exist for building a new solution.
" Consider roles and responsibilities: Identify which teams will be involved in
building an auditing solution, as well as the ongoing analysis of the auditing data.
" Extract and store user activity data: If you aren't currently extracting and storing
the raw data, begin retrieving user activity data.
" Extract and store snapshots of tenant inventory data: Begin retrieving metadata to build a tenant inventory, which describes all workspaces and items (see the sketch after this list).
" Extract and store snapshots of users and groups data: Begin retrieving metadata
about users, groups, and service principals.
" Create a curated data model: Perform data cleansing and transformations of the
raw data to create a curated data model that'll support analytical reporting for your
auditing solution.
" Analyze auditing data and act on the results: Create analytic reports to analyze the
curated auditing data. Clarify what actions are expected to be taken, by whom, and
when.
" Include additional auditing data: Over time, determine whether other auditing data
would be helpful to complement the activity log data, such as security data.
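
The following is a minimal sketch of how a tenant inventory snapshot could be retrieved with the Power BI metadata scanning (scanner) admin APIs. It assumes an access token with admin permissions is already available and that the tenant is small enough for a single scan request; the token variable, polling approach, and query options are illustrative only, and production use would batch workspace IDs and respect scan limits.

```python
import time
import requests

# Minimal sketch: build a tenant inventory snapshot with the Power BI metadata scanning
# (scanner) admin APIs: GetModifiedWorkspaces, WorkspaceGetInfo, GetScanStatus, and
# GetScanResult. `token` is assumed to hold a valid access token with admin permissions.
token = "<access token acquired via MSAL or another supported flow>"
headers = {"Authorization": f"Bearer {token}"}
base = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"

# 1. Get the IDs of the workspaces to scan.
workspace_ids = [w["id"] for w in requests.get(f"{base}/modified", headers=headers).json()]

# 2. Start an asynchronous scan for those workspaces, including lineage and data source details.
scan = requests.post(
    f"{base}/getInfo?lineage=true&datasourceDetails=true",
    headers=headers,
    json={"workspaces": workspace_ids},
).json()

# 3. Poll until the scan succeeds, then download the result.
while requests.get(f"{base}/scanStatus/{scan['id']}", headers=headers).json()["status"] != "Succeeded":
    time.sleep(5)

inventory = requests.get(f"{base}/scanResult/{scan['id']}", headers=headers).json()
print(f"Scanned {len(inventory.get('workspaces', []))} workspaces.")
```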

 Tip

For more information, see Tenant-level auditing.

Use the REST APIs:

" Plan for your use of the REST APIs: Consider what data would be most useful to
retrieve from the Power BI REST APIs and the Fabric REST APIs.
" Conduct a proof of concept: Do a small proof of concept to validate data needs,
technology choices, and permissions.

Questions to ask
Use questions like those found below to assess system oversight.

Are there atypical administration settings enabled or disabled? For example, is the
entire organization allowed to publish to the web? (We strongly advise restricting this
feature.)
Do administration settings and policies align with, or inhibit, the way users work?
Is there a process in place to critically appraise new settings and decide how to set
them? Alternatively, are only the most restrictive settings set as a precaution?
Are Microsoft Entra security groups used to manage who can do what?
Do central teams have visibility of effective auditing and monitoring tools?
Do monitoring solutions depict information about the data assets, user activities,
or both?
Are auditing and monitoring tools actionable? Are there clear thresholds and
actions set, or do monitoring reports simply describe what's in the data estate?
Is Azure Log Analytics used (or planned to be used) for detailed monitoring of
Fabric capacities? Are the potential benefits and cost of Azure Log Analytics clear
to decision makers?
Are sensitivity labels and data loss prevention policies used? Are the potential
benefits and cost of these clear to decision makers?
Do administrators know the current number of licenses and licensing cost? What
proportion of the total BI spend goes to Fabric capacity, and to Pro and PPU
licenses? If the organization is only using Pro licenses for Power BI content, could
the number of users and usage patterns warrant a cost-effective switch to Power BI
Premium or Fabric capacity?

Maturity levels

The following maturity levels will help you assess the current state of your Power BI
system oversight.
100: Initial
• Tenant settings are configured independently by one or more administrators based on their best judgment.
• Architecture needs, such as gateways and capacities, are satisfied on an as-needed basis. However, there isn't a strategic plan.
• Fabric activity logs are unused, or selectively used for tactical purposes.

200: Repeatable
• The tenant settings purposefully align with established governance guidelines and policies. All tenant settings are reviewed regularly.
• A small number of specific administrators are selected. All administrators have a good understanding of what users are trying to accomplish in Fabric, so they're in a good position to support users.
• A well-defined process exists for users to request licenses and software. Request forms are easy for users to find. Self-service purchasing settings are specified.
• Sensitivity labels are configured in Microsoft 365. However, use of labels remains inconsistent. The advantages of data protection aren't well understood by users.

300: Defined
• The tenant settings are fully documented in the centralized portal for users to reference, including how to request access to the correct groups.
• Cross-training and documentation exist for administrators to ensure continuity, stability, and consistency.
• Sensitivity labels are assigned to content consistently. The advantages of using sensitivity labels for data protection are understood by users.
• An automated process is in place to export Fabric activity log and API data to a secure location for reporting and auditing.

400: Capable
• Administrators work closely with the COE and governance teams to provide oversight of Fabric. A balance of user empowerment and governance is successfully achieved.
• Decentralized management of data architecture (such as gateways or capacity management) is effectively handled to balance agility and control.
• Automated policies are set up and actively monitored in Microsoft Defender for Cloud Apps for data loss prevention.
• Activity log and API data is actively analyzed to monitor and audit Fabric activities. Proactive action is taken based on the data.

500: Efficient
• The Fabric administrators work closely with the COE to actively stay current. Blog posts and release plans from the Fabric product team are reviewed frequently to plan for upcoming changes.
• Regular cost management analysis is done to ensure user needs are met in a cost-effective way.
• The Fabric REST API is used to retrieve tenant setting values on a regular basis.
• Activity log and API data is actively used to inform and improve adoption and governance efforts.

Related content
For more information about system oversight and Fabric administration, see the
following resources.

Administer Microsoft Fabric


Administer Power BI - Part 1
Administer Power BI - Part 2
Administrator in a Day Training – Day 1
Administrator in a Day Training – Day 2
Power BI security whitepaper
External guest users whitepaper
Power BI implementation planning

In the next article in the Microsoft Fabric adoption roadmap series, learn about effective
change management.



Microsoft Fabric adoption roadmap:
Change management
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

When working toward improved data and business intelligence (BI) adoption, you
should plan for effective change management. In the context of data and BI, change
management includes procedures that address the impact of change for people in an
organization. These procedures safeguard against disruption and productivity loss due
to changes in solutions or processes.

7 Note

Effective change management is particularly important when you migrate to Power BI.

Effective change management improves adoption and productivity because it:

Helps content creators and consumers use analytics more effectively and sooner.
Limits redundancy in data, analytical tools, and solutions.
Reduces the likelihood of risk-creating behaviors that affect shared resources (like
Fabric capacity) or organizational compliance (like data security and privacy).
Mitigates resistance to change that obstructs planning and inhibits user adoption.
Mitigates the impact of change and improves user wellbeing by reducing the potential for disruption, stress, and conflict.

Effective change management is critical for successful adoption at all levels. To successfully manage change, consider the key actions and activities described in the following sections.

) Important

Change management is a fundamental obstacle to success in many organizations. Effective change management requires that you understand that it's about people, not tools or processes.

Successful change management involves empathy and communication. Ensure that change isn't forced and that resistance to change isn't ignored, because either can widen organizational divides and further inhibit effectiveness.

 Tip

Whenever possible, we recommend that you describe and promote change as improvement, because it's much less threatening. For many people, change implies a cost in terms of effort, focus, and time. Alternatively, improvement means a benefit because it's about making something better.

Types of change to manage


When implementing data and BI solutions, you should manage different types of
change. Also, depending on the scale and scope of your implementation, you should
address different aspects of change.

Consider the following types of change to manage when you plan for Fabric adoption.

Process-level changes
Process-level changes are changes that affect a broader user community or the entire
organization. These changes typically have a larger impact, and so they require more
effort to manage. Specifically, this change management effort includes specific plans
and activities.

Here are some examples of process-level changes.

Change from centralized to decentralized approaches to ownership (change in content ownership and management).
Change from enterprise to departmental, or from team to personal content delivery (change in content delivery scope).
Change of central team structure (for example, forming a Center of Excellence).
Changes in governance policies.
Migration from other analytics products to Fabric, and the changes this migration
involves, like:
The separation of semantic models and reports, and a model-based approach
to analytics.
Transitioning from exports or static reports to interactive analytical reports,
which can involve filtering and cross-filtering.
Moving from distributing reports as PowerPoint files or flat files to accessing
reports directly from the Fabric portal.
Shifting from information in tables, paginated reports, and spreadsheets to
interactive visualizations and charts.
Changing from an on-premises or platform as a service (PaaS) platform to a
software as a service (SaaS) tool.

7 Note

Typically, giving up export-based processes or Excel reporting is a significant challenge. That's because these methods are usually deeply engrained in the organization and are tied to the autonomy and data skills of your users.

Solution-level changes
Solution-level changes are changes that affect a single solution or set of solutions.
These changes limit their impact to the user community of those solutions and their
dependent processes. Although solution-level changes typically have a lower impact,
they also tend to occur more frequently.

7 Note

In the context of this article, a solution is built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a lakehouse, a
semantic model, or a report. The considerations for change management described
in this article are relevant for all types of solutions, and not only reporting projects.

Here are some examples of solution-level changes.

Changes in calculation logic for KPIs or measures.


Changes in how master data or hierarchies for business attributes are mapped,
grouped, or described.
Changes in data freshness, detail, format, or complexity.
Introduction of advanced analytics concepts, like predictive analytics or
prescriptive analytics, or general statistics (if the user community isn't familiar with
these concepts, already).
Changes in the presentation of data, like:
Styling, colors, and other formatting choices for visuals.
The type of visualization.
How data is grouped or summarized (such as changing from different measures
of central tendency, like average, median, or geometric mean).
Changes in how content consumers interact with data (like connecting to a shared
semantic model instead of exporting information for personal usage scenarios).

How you prepare change management plans and activities will depend on the types of
change. To successfully and sustainably manage change, we recommend that you
implement incremental changes.

Address change incrementally


Change management can be a significant undertaking. Taking an incremental approach
can help you facilitate change in a way that's sustainable. To adopt an incremental
approach, you identify the highest priority changes and break them into manageable
parts, implementing each part with iterative phases and action plans.

The following steps outline how you can incrementally address change.

1. Define what's changing: Describe the change by outlining the before and after
states. Clarify the specific parts of the process or situation that you'll change,
remove, or introduce. Justify why this change is necessary, and when it should
occur.
2. Describe the impact of the change: For each of these changes, estimate the
business impact. Identify which processes, teams, or individuals the change affects,
and how disruptive it will be for them. Also consider any downstream effects the
change has on other dependent solutions or processes. Downstream effects might
result in other changes. Additionally, consider how long the situation remained the
same before it was changed. Changes to longer-standing processes tend to have a
higher impact, as preferences and dependencies arise over time.
3. Identify priorities: Focus on the changes with the highest potential impact. For
each change, outline a more detailed description of the changes and how it will
affect people.
4. Plan how to incrementally implement the change: Identify whether any high-
impact changes can be broken into stages or parts. For each part, describe how it
might be incrementally implemented in phases to limit its impact. Determine
whether there are any constraints or dependencies (such as when changes can be
made, or by whom).
5. Create an action plan for each phase: Plan the actions you will take to implement
and support each phase of the change. Also, plan for how you can mitigate
disruption in high-impact phases. Be sure to include a rollback plan in your action
plan, whenever possible.
 Tip

Iteratively plan how you'll implement each phase of these incremental changes as
part of your quarterly tactical planning.

When you plan to mitigate the impact of changes on Power BI adoption, consider the
activities described in the following sections.

Effectively communicate change


Ensure that you clearly and concisely describe planned changes for the user community.
Important communication should originate from the executive sponsor, or another
leader with relevant authority. Be sure to communicate the following details.

What's changing: What the situation is now and what it will be after the change.
Why it's changing: The benefit and value of the change for the audience.
When it's changing: An estimation of when the change will take effect.
Further context: Where people can go for more information.
Contact information: Who people should contact to provide feedback, ask questions, or raise concerns.

Consider maintaining a history of communications in your centralized portal. That way, it's easy to find communications, timings, and details of changes after they've occurred.

) Important

You should communicate change with sufficient advanced notice so that people are
prepared. The higher the potential impact of the change, the earlier you should
communicate it. If unexpected circumstances prevent advance notice, be sure to
explain why in your communication.

Plan training and support


Changes to tools, processes, and solutions typically require training to use them
effectively. Additionally, extra support might be required to address questions or
respond to support requests.

Here are some actions you can take to plan for training and support.

Centralize training and support by using a centralized portal. The portal can help
organize discussions, collect feedback, and distribute training materials or
documentation by topic.
Consider incentives to encourage self-sustaining support within a community.
Schedule recurring office hours to answer questions and provide mentorship.
Create and demonstrate end-to-end scenarios for people to practice a new
process.
For high-impact changes, prepare training and support plans that realistically
assess the effort and actions needed to prevent the change from causing
disruption.

7 Note

These training and support actions will differ depending on the scale and scope of
the change. For high-impact, large-scale changes (like transitioning from enterprise
to managed self-service approaches to data and BI), you'll likely need to plan
iterative, multi-phase plans that span multiple planning periods. In this case,
carefully consider the effort and resources needed to deliver success.

Involve executive leadership


Executive support is critical to effective change management. When an executive
supports a change, it demonstrates its strategic importance or benefit to the rest of the
organization. This top-down endorsement and reinforcement is particularly important
for high-impact, large-scale changes, which have a higher potential for disruption. For
these scenarios, ensure that you actively engage and involve your executive sponsor to
endorse and reinforce the change.

U Caution

Resistance to change from the executive leadership is often a warning sign that
stronger business alignment is needed between the business and BI strategies. In
this scenario, consider specific alignment sessions and change management actions
with executive leadership.

Involve stakeholders
To effectively manage change, you can also take a bottom-up approach by engaging the
stakeholders, who are the people the change affects. When you create an action plan to
address the changes, identify and engage key stakeholders in focused, limited sessions.
In this way you can understand the impact of the change on the people whose work will
be affected by the change. Take note of their concerns and their ideas for how you
might lessen the impact of this change. Ensure that you identify any potentially
unexpected effects of the change on other people and processes.

Handle resistance to change


It's important to address resistance to change, as it can have substantial negative
impacts on adoption and productivity. When you address resistance to change, consider
the following actions and activities.

Involve your executive sponsor: The authority, credibility, and influence of the
executive sponsor is essential to support change management and resolve
disputes.
Identify blocking issues: When change disrupts the way people work, this change
can prevent people from effectively completing tasks in their regular activities. For
such blocking issues, identify potential workarounds when you take into account
the changes.
Focus on data and facts instead of opinions: Resistance to change is sometimes
due to opinions and preferences, because people are familiar with the situation
prior to the change. Understand why people have these opinions and preferences.
Perhaps it's due to convenience, because people don't want to invest time and
effort in learning new tools or processes.
Focus on business questions and processes instead of requirements: Changes
often introduce new processes to address problems and complete tasks. New
processes can lead to a resistance to change because people focus on what they
miss instead of fully understanding what's new and why.

Additionally, you can have a significant impact on change resistance by engaging promoters and detractors.

Identify and engage promoters


Promoters are vocal, credible individuals in a user community who advocate in favor of a
tool, solution, or initiative. Promoters can have a positive impact on adoption because
they can influence peers to understand and accept change.

To effectively manage change, you should identify and engage promoters early in the
process. You should involve them and inform them about the change to better utilize
and amplify their advocacy.

 Tip
The promoters you identify might also be great candidates for your champions
network.

Identify and engage detractors


Detractors are the opposite of promoters. They are vocal, credible individuals in a user community who advocate against a tool, solution, or initiative. Detractors can have a significant negative influence on adoption because they can convince peers that the change isn't beneficial. Additionally, detractors can advocate for alternatives, or for solutions marked for retirement, making it more difficult to decommission old tools, solutions, or processes.

To effectively manage change, you should identify and engage detractors early in the
process. That way, you can mitigate the potential negative impact they have.
Furthermore, if you address their concerns, you might convert these detractors into
promoters, helping your adoption efforts.

 Tip

A common source of detractors is content owners for solutions that are going to be
modified or replaced. The change can sometimes threaten these content owners,
who are incentivized to resist the change in the hope that their solution will remain
in use. In this case, identify these content owners early and involve them in the
change. Giving these individuals a sense of ownership of the implementation will help them embrace, and even advocate in favor of, the change.

Questions to ask

Use questions like those found below to assess change management.

Is there a role or team responsible for change management in the organization? If


so, how are they involved in data and BI initiatives?
Is change seen as an obstacle to achieving strategic success among people in the
organization? Is the importance of change management acknowledged in the
organization?
Are there any significant promoters for data and BI solutions and processes in the
user community? Conversely, are there any significant detractors?
What communication and training efforts are performed to launch new data tools
and solutions? How long do they last?
How is change in the user community handled (for example, with new hires or
promoted individuals)? What onboarding activities introduce these new individuals
to existing solutions, processes, and policies?
Do people who create Excel reports feel threatened or frustrated by initiatives to
automate reporting with BI tools?
To what extent do people associate their identities with the tools they use and the
solutions they have created and own?
How are changes to existing solutions planned and managed? Are changes
planned, with a visible roadmap, or are they reactive? Do people get sufficient
notification about upcoming changes?
How frequently do changes disrupt existing processes and tools?
How long does it take to decommission legacy systems or solutions when new
ones become available? How long does it take to implement changes to existing
solutions?
To what extent do people agree with the statement I am overwhelmed with the
amount of information I am required to process? To what extent do people agree
with the sentiment things are changing too much, too quickly?

Maturity levels

An assessment of change management evaluates how effectively the organization can


enact and respond to change.

The following maturity levels will help you assess your current state of change
management, as it relates to data and BI initiatives.

100: Initial
• Change is usually reactive, and it's also poorly communicated.
• The purpose or benefits of change aren't well understood, and resistance to change causes conflict and disruption.
• No clear teams or roles are responsible for managing change for data initiatives.

200: Repeatable
• Executive leadership and decision makers recognize the need for change management in data and BI projects and initiatives.
• Some efforts are taken to plan or communicate change, but they're inconsistent and often reactive. Resistance to change is still common. Change often disrupts existing processes and tools.

300: Defined
• Formal change management plans or roles are in place. These plans include communication tactics and training, but they're not consistently or reliably followed. Change occasionally disrupts existing processes and tools.
• Successful change management is championed by key individuals that bridge organizational boundaries.

400: Capable
• Empathy and effective communication are integral to change management strategies.
• Change management efforts are owned by particular roles or teams, and effective communication results in a clear understanding of the purpose and benefits of change. Change rarely interrupts existing processes and tools.

500: Efficient
• Change is an integral part of the organization. People in the organization understand the inevitability of change, and see it as a source for momentum instead of disruption. Change almost never unnecessarily interrupts existing processes or tools.
• Systematic processes address change as a challenge of people and not processes.

Related content
In the next article in the Microsoft Fabric adoption roadmap series, in conclusion, learn
about adoption-related resources that you might find valuable.



Microsoft Fabric adoption roadmap
conclusion
Article • 12/30/2024

7 Note

This article forms part of the Microsoft Fabric adoption roadmap series of articles.
For an overview of the series, see Microsoft Fabric adoption roadmap.

This article concludes the series on Microsoft Fabric adoption. The strategic and tactical
considerations and action items presented in this series will assist you in your analytics
adoption efforts, and with creating a productive data culture in your organization.

This series covered the following aspects of Fabric adoption.

Adoption introduction
Adoption maturity levels
Data culture
Executive sponsorship
Business alignment
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
Change management

The rest of this article includes suggested next actions to take. It also includes other
adoption-related resources that you might find valuable.

Next actions to take


It can be overwhelming to decide where to start. The following series of steps provides a
process to help you approach your next actions.
1. Learn: First, read this series of articles end-to-end. Become familiar with the
strategic and tactical considerations and action items that directly lead to
successful analytics adoption. They'll help you to build a data culture in your
organization. Discuss the concepts with your colleagues.
2. Assess current state: For each area of the adoption roadmap, assess your current
state. Document your findings. Your goal is to have full clarity on where you're now
so that you can make informed decisions about what to do next.
3. Clarify your strategic goals: Ensure that you're clear on what your organization's
goals are for adopting Fabric. Confirm that your adoption and data culture goals
align with your organization's broader strategic goals for the use of data, analytics,
and business intelligence (BI) in general. Focus on what your immediate strategy is
for the next 3-12 months. For more information about defining your goals, see the
strategic planning article.
4. Prioritize: Clarify what's most important to achieve in the next 12-18 months. For
instance, you might identify specific user enablement or risk reduction areas that
are a higher priority than other areas. Determine which advancements in maturity
levels you should prioritize first. For more information about defining your
priorities, see the strategic planning article.
5. Identify future state: For each area of the roadmap, identify the gaps between
what you want to happen (your future state) and what's happening (your current
state). Focus on the next 12-18 months for identifying your desired future state.
6. Customize maturity levels: Using the information you have on your strategy and
future state, customize the maturity levels for each area of the roadmap. Update or
delete the description for each maturity level so that they're realistic, based on
your goals and strategy. Your current state, priorities, staffing, and funding will
influence the time and effort it will take to advance to higher maturity levels.
7. Define measurable objectives: Create KPIs (key performance indicators) or OKRs
(objectives and key results) to define specific goals for the next quarter. Ensure that
the objectives have clear owners, are measurable, time-bound, and achievable.
Confirm that each objective aligns with your strategic BI goals and priorities.
8. Create tactical plans: Add specific action items to your project plan. Action items
will identify who will do what, and when. Include short, medium, and longer-term
(backlog) items in your project plan to make it easy to track and reprioritize.
9. Track action items: Use your preferred project planning software to track
continual, incremental progress of your action items. Summarize progress and
status every quarter for your executive sponsor.
10. Adjust: As new information becomes available—and as priorities change—
reevaluate and adjust your focus. Reexamine your strategic goals, objectives, and
action items once a quarter so you're certain that you're focusing on the right
actions.
11. Celebrate: Pause regularly to appreciate your progress. Celebrate your wins.
Reward and recognize people who take the initiative and help achieve your goals.
Encourage healthy partnerships between IT and the different areas of the business.
12. Repeat: Continue learning, experimenting, and adjusting as you progress with your
implementation. Use feedback loops to continually learn from everyone in the
organization. Ensure that continual, gradual, improvement is a priority.

A few important key points are implied within the previous suggestions.

Focus on the near term: Although it's important to have an eye on the big picture,
we recommend that you focus primarily on the next quarter, next semester, and
next year. It's easier to assess, plan, and act when you focus on the near term.
Progress will be incremental: Changes that happen every day, every week, and
every month add up over time. It's easy to become discouraged and sense a lack
of progress when you're working on a large adoption initiative that takes time. If
you keep track of your incremental progress, you'll be surprised at how much you
can accomplish over the course of a year.
Changes will continually happen: Be prepared to reconsider decisions that you
make, perhaps every quarter. It's easier to cope with continual change when you
expect the plan to change.
Everything correlates together: As you progress through each of the steps listed
above, it's important that everything's correlated from the high-level strategic
organizational objectives, all the way down to more detailed action items. That
way, you'll know that you're working on the right things.

Power BI implementation planning


Successfully implementing analytics throughout the organization requires deliberate
thought and planning. The Power BI implementation planning series of articles, which is
a work in progress, is intended to complement the Microsoft Fabric adoption roadmap.
It includes key considerations, actions, decision-making criteria, recommendations, and
it describes implementation patterns for important common usage scenarios.

Power BI adoption framework


The Power BI adoption framework describes additional aspects of how to adopt Power
BI in more detail. The original intent of the framework was to support Microsoft partners
with a lightweight set of resources for use when helping their customers deploy and
adopt Power BI.
The framework can augment this Microsoft Fabric adoption roadmap series. The
roadmap series focuses on the why and what of adopting Fabric, more so than the how.

Note

When completed, the Power BI implementation planning series (described in the previous section) will replace the Power BI adoption framework.

Microsoft's BI transformation
Consider reading about Microsoft's journey and experience with driving a data culture.
This article describes the importance of two terms: discipline at the core and flexibility at
the edge. It also shares Microsoft's views and experience about the importance of
establishing a COE.

Power Platform adoption


The Power Platform team has an excellent set of adoption-related content. Its primary
focus is on Power Apps, Power Automate, and Power Virtual Agents. Many of the ideas
presented in this content can be applied to Power BI also.

The Power CAT Adoption Maturity Model , published by the Power CAT team,
describes repeatable patterns for successful Power Platform adoption.

The Power Platform Center of Excellence Starter Kit is a collection of components and
tools to help you develop a strategy for adopting and supporting Microsoft Power
Platform.

The Power Platform adoption best practices includes a helpful set of documentation and
best practices to help you align business and technical strategies.

The Power Platform adoption framework is a community-driven project with excellent resources on adoption of Power Platform services at scale.

Microsoft 365 and Azure adoption


You might also find useful adoption-related guidance published by other Microsoft
technology teams.

The Maturity Model for Microsoft 365 provides information and resources to use
capabilities more fully and efficiently.
Microsoft Learn has a learning path for using the Microsoft service adoption
framework to drive adoption in your enterprise.
The Microsoft Cloud Adoption Framework for Azure is a collection of
documentation, implementation guidance, best practices, and tools to accelerate
your cloud adoption journey.

A wide variety of other adoption guides for individual technologies can be found online.
A few examples include:

Microsoft Teams adoption guide.
Microsoft Security and Compliance adoption guide.
SharePoint Adoption Resources.

Industry guidance
The Data Management Book of Knowledge (DMBOK2) is a book available for
purchase from DAMA International. It contains a wealth of information about maturing
your data management practices.

Note

The additional resources provided in this article aren't required to take advantage
of the guidance provided in this Fabric adoption series. They're reputable resources
should you wish to continue your journey.

Partner community
Experienced partners are available to help your organization succeed with adoption
initiatives. To engage a partner, visit the Power BI partner portal .



Fabric known issues
Article • 02/16/2025

This page lists known issues for Fabric and Power BI features. Before submitting a
Support request, review this list to see if the issue that you're experiencing is already
known and being addressed. Known issues are also available as an interactive
embedded Power BI report .

For service level outages or degradation notifications, check https://support.fabric.microsoft.com/.

Currently active known issues


Select the Title to view more information about that specific known issue.

Issue ID | Product experience | Title | Issue publish date
1024 | Data Factory | CopyJob item deletion fails with error | February 14, 2025
1023 | Data Factory | Preview destination data on a pipeline's copy activity fails | February 14, 2025
1017 | Data Engineering | Unsupported error for legacy timestamp in Fabric Runtime 1.3 | February 5, 2025
1011 | Power BI | Models with specific gateway configuration might experience refresh issues | January 29, 2025
1004 | Data Engineering | Notebook and SJD job statuses are in progress in monitor hub | January 29, 2025
1003 | Databases | Copilot sidecar chat fails with certain private link settings | January 28, 2025
1002 | Power BI | Reports that use functions with RLS don't work | January 28, 2025
996 | Databases | Some SQL query syntax fails in a graph database query | January 28, 2025
990 | Real-Time Intelligence | KQL database loads continuously without an error | January 28, 2025
991 | Data Factory | Apache Airflow job creation shows Fabric upgrade message | January 13, 2025
989 | Data Factory | Local data access isn't allowed for pipeline using on-premises data gateway | January 13, 2025
988 | Real-Time Intelligence | Data activator events aren't ingested for Reflex events | January 13, 2025
986 | Power BI | Direct Lake query cancellation might cancel other queries | January 7, 2025
985 | Power BI | Direct Lake query cancellation causes model to fall back to DirectQuery | January 7, 2025
979 | Databases | SQL databases not available with private link through January 2025 | January 6, 2025
975 | Power BI | Create report doesn't work on Eventhouse monitoring KQL database | January 6, 2025
974 | Real-Time Intelligence | Show table command in KQL Queryset editor fails | January 6, 2025
978 | Real-Time Intelligence | Renamed eventstream fails to open | December 17, 2024
976 | Power BI | Export-to-data disabled for a visual with visual calculation | December 17, 2024
966 | Power BI | Sync content from Git in workspace fails | December 11, 2024
968 | Power BI | Export data option is disabled for Q&A visual in the service | December 10, 2024
967 | Data Factory | Pipeline activities don't save if their data warehouse connection is changed | December 10, 2024
965 | Databases | SQL database creation fails to create child items when item with same name exists | December 10, 2024
962 | Real-Time Intelligence | Eventstream publish fails when column contains empty array and operator is added | December 9, 2024
957 | Data Factory | Creation failure for Copy job item in empty workspace | December 5, 2024
954 | Data Factory | Create, configure, or delete a mirror fails | December 2, 2024
950 | Power BI | Incorrect column names after column format or aggregation change | December 2, 2024
945 | Industry Solutions | Intermittent failures on deployment of Sustainability solution | November 22, 2024
940 | Data Factory | Pipeline copy data to Kusto using an on-premises data gateway doesn't work | November 22, 2024
938 | Power BI | Line chart value-axis zoom sliders don't work with markers enabled | November 20, 2024
922 | Data Engineering | The default environment's resources folder doesn't work in notebooks | November 12, 2024
923 | Power BI | Tenant migrations paused through February 2025 | November 8, 2024
910 | Data Warehouse | SQL analytics endpoint tables lose statistics | October 31, 2024
909 | Data Warehouse | SQL analytics endpoint tables lose permissions | October 31, 2024
903 | Data Warehouse | Data warehouse data preview might fail if multiple data warehouse items | October 28, 2024
897 | OneLake | OneLake Shared Access Signature (SAS) can't read cross-region shortcuts | October 25, 2024
894 | Data Engineering | Pipeline fails when getting a token to connect to Kusto | October 25, 2024
895 | OneLake | Dataverse shortcut creation and read fails when organization is moved | October 23, 2024
893 | Power BI | Can't connect to semantic model from Excel or use Analyze in Excel | October 23, 2024
891 | Data Warehouse | Data warehouse tables aren't accessible or updatable | October 17, 2024
883 | Data Engineering | Spark jobs might fail due to Runtime 1.3 updates for GA | October 17, 2024
878 | Power BI | Premium capacity doesn't add excess usage into carry forward | October 10, 2024
819 | Power BI | Subscriptions and exports with maps might produce wrong results | October 10, 2024
877 | Data Factory | Data pipeline connection fails after connection creator role is removed | October 9, 2024
872 | Data Warehouse | Data warehouses don't show button friendly names | October 3, 2024
856 | Data Factory | Pipeline fails when copying data to data warehouse with staging | September 25, 2024
844 | Power BI | Intermittent refresh failure through on-premises data gateway | September 25, 2024
842 | Data Warehouse | Data warehouse exports using deployment pipelines or git fail | September 23, 2024
837 | Data Engineering | Monitoring hub displays incorrect queued duration | September 17, 2024
835 | Data Engineering | Managed private endpoint connection could fail | September 13, 2024
817 | Data Factory | Pipelines don't support Role property for Snowflake connector | August 23, 2024
816 | Data Factory | Pipeline deployment fails when parent contains deactivated activity | August 23, 2024
810 | Data Warehouse | Inserting nulls into Data Warehouse tables fail with incorrect error message | August 16, 2024
795 | Data Factory | Multiple installations of on-premises data gateway causes pipelines to fail | July 31, 2024
789 | Data Engineering | SQL analytics endpoint table queries fail due to RLE | July 24, 2024
774 | Data Factory | Data warehouse deployment using deployment pipelines fails | July 5, 2024
767 | Data Warehouse | SQL analytics endpoint table sync fails when table contains linked functions | July 2, 2024
757 | Data Factory | Copy activity from Oracle to lakehouse fails for Number data type | June 20, 2024
726 | Data Factory | Pipeline using XML format copy gets stuck | May 24, 2024
717 | Data Factory | West India region doesn't support on-premises data gateway for data pipelines | May 16, 2024
718 | OneLake | OneLake under-reports transactions in the Other category | May 13, 2024
643 | Data Engineering | Tables not available to add in Power BI semantic model | February 27, 2024
508 | Data Warehouse | User column incorrectly shows as System in Fabric capacity metrics app | October 5, 2023
506 | Data Warehouse | InProgress status shows in Fabric capacity metrics app for completed queries | October 5, 2023
454 | Data Warehouse | Warehouse's object explorer doesn't support case-sensitive object names | July 10, 2023
Recently closed known issues


Select the Title to view more information about that specific known issue. Known issues
are organized in descending order by fixed date. Fixed issues are retained for at least 46
days.

Issue ID | Product experience | Title | Issue publish date | Issue fixed date
1020 | Data Factory | Dataflow connector doesn't show dataflows with view only permissions | February 10, 2025 | February 14, 2025
934 | Power BI | External data sharing doesn't work in a different region capacity lakehouse | November 19, 2024 | February 14, 2025
769 | Data Factory | Dataflows Gen2 staging lakehouse doesn't work in deployment pipelines | July 2, 2024 | February 14, 2025
765 | Data Factory | Dataflows Gen2 staging warehouse doesn't work in deployment pipelines | July 2, 2024 | February 14, 2025
591 | Data Factory | Type mismatch when writing decimals and dates to lakehouse using a dataflow | February 16, 2024 | February 14, 2025
902 | Power BI | INFO.VIEW.MEASURES() in calculated table might cause errors | October 31, 2024 | February 10, 2025
955 | Data Factory | Create Gateway public API doesn't work for service principals | December 5, 2024 | February 5, 2025
1005 | Data Engineering | Git operations and deployment pipelines don't work with lakehouses | January 22, 2025 | February 4, 2025
898 | OneLake | External data sharing OneLake shortcuts don't show in SQL analytics endpoint | October 25, 2024 | January 28, 2025
846 | OneLake | OneLake BCDR write transactions aren't categorized correctly for billing | September 17, 2024 | January 28, 2025
823 | Data Warehouse | Concurrent stored procedures block each other in data warehouse | September 4, 2024 | January 28, 2025
948 | Power BI | Metrics app timepoint details missing for new P2 capacities | November 27, 2024 | January 15, 2025
933 | Data Factory | New tile for Dataflow Gen2 (CI/CD, preview) isn't yet supported | November 22, 2024 | January 13, 2025
918 | Power BI | More options menu on a visual doesn't open in unsaved reports | November 7, 2024 | January 13, 2025
809 | Data Factory | Dataflow Gen2 refresh fails due to missing SQL analytics endpoint | August 14, 2024 | January 13, 2025
977 | Power BI | Export to Excel using live connection with show items with no data turned on fails | December 17, 2024 | January 6, 2025
821 | Data Warehouse | Schema refresh for a data warehouse's semantic model fails | August 28, 2024 | January 6, 2025
618 | Data Warehouse | Using an inactive SQL analytics endpoint can show old data | February 14, 2024 | January 6, 2025
447 | Data Warehouse | Temp tables in Data Warehouse and SQL analytics endpoint | July 5, 2023 | January 6, 2025
Related content
Go to the embedded interactive report version of this page
Service level outages
Get your questions answered by the Fabric community



Known issue - CopyJob item deletion
fails with error
Article • 02/16/2025

When you attempt to delete a CopyJob item, the deletion doesn't work.

Status: Open

Product Experience: Data Factory

Symptoms
When you attempt to delete a CopyJob item, you receive an error. The error message
tells you that the deletion failed. Additionally, the CopyJob item isn't deleted.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Preview destination data
on a pipeline's copy activity fails
Article • 02/16/2025

In a pipeline, you can set up a copy activity. In the destination of the copy activity, you can preview the data. When you select the preview button, the preview fails with an error.

Status: Open

Product Experience: Data Factory

Symptoms
In a pipeline, you have a copy activity. In the copy activity, you select the Destination
tab > Preview data. The preview doesn't show and you receive an error.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Dataflow connector
doesn't show dataflows with view only
permissions
Article • 02/16/2025

You can't see dataflow data using the dataflow connector. The issue happens when
connecting to either a Dataflow Gen2 or Dataflow Gen2 (CI/CD preview) dataflow. You
only have view access to the workspace that contains the dataflow.

Status: Fixed: February 14, 2025

Product Experience: Data Factory

Symptoms
You have Viewer permission on the workspace that contains a Dataflow Gen2 dataflow
or Dataflow Gen2 (CI/CD preview) dataflow. In a different workspace or Power BI
Desktop, you use the dataflow connector to query the original dataflow data. You can't
see the original dataflow.

Solutions and workarounds


To access the dataflow, assign a higher level of permission to the user in the workspace.

Next steps
About known issues



Known issue - Unsupported error for
legacy timestamp in Fabric Runtime 1.3
Article • 02/06/2025

When using the native execution engine in Fabric Runtime 1.3, you might encounter an
error if your data contains legacy timestamps. This issue arises due to compatibility
challenges introduced when Spark 3.0 transitioned to the Java 8 date/time API, which
uses the Proleptic Gregorian calendar (SQL ISO standard). Earlier Spark versions utilized
a hybrid Julian-Gregorian calendar, resulting in potential discrepancies when processing
timestamp data created by different Spark versions.

Status: Open

Product Experience: Data Engineering

Symptoms
When using legacy timestamp support in native execution engine for Fabric Runtime 1.3,
you receive an error. The error message is similar to: Error Source: USER. Error Code:
UNSUPPORTED. Reason: Reading legacy timestamp is not supported.

Solutions and workarounds


For more information about the feature that addresses this known issue, see the blog
post on legacy timestamp support . To activate the feature, add the following to your
Spark session: SET spark.gluten.legacy.timestamp.rebase.enabled = true . Dates that
are post-1970 are unaffected, ensuring consistency without extra steps.
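
As an illustration only, the following minimal sketch shows how the setting from this article might be applied in a Fabric notebook before reading the affected data; the spark object is the session built into Fabric notebooks, and the table name is a placeholder rather than anything from this article:

    # Enable legacy timestamp rebase for this Spark session (configuration key from this article).
    spark.sql("SET spark.gluten.legacy.timestamp.rebase.enabled = true")

    # Placeholder table assumed to contain pre-1970 (legacy) timestamp values.
    df = spark.read.table("my_lakehouse_table")
    df.show(5)

Because dates after 1970 are unaffected, the setting only needs to be applied in sessions that read legacy timestamp data.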

Next steps
About known issues



Known issue - Models with specific
gateway configuration might experience
refresh issues
Article • 02/03/2025

If you're a Power BI Premium customer, you can process models using a gateway. If the gateway configuration StreamBeforeRequestCompletes is set to true, you might experience refresh issues, such as delays or failures.

Status: Open

Product Experience: Power BI

Symptoms
Refresh operations might take longer than expected.
Refresh failures due to out-of-memory exceptions.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Notebook and SJD job
statuses are in progress in monitor hub
Article • 02/03/2025

You can trigger a notebook or Spark job definition (SJD) job's execution using the Fabric
public API with a service principal token. You can use the monitor hub to track the status
of the job. In this known issue, the job status is In-progress even after the execution of
the job completes.

Status: Open

Product Experience: Data Engineering

Symptoms
In the monitor hub, you see a stuck job status of In-progress for a notebook or SJD job
that was submitted by a service principal.

Solutions and workarounds


As a temporary workaround, use the Recent-Run job history inside the notebook or SJD
to query the correct job status.

Next steps
About known issues



Known issue - Copilot sidecar chat fails
with certain private link settings
Article • 02/03/2025

Copilot sidecar chat fails when you enable private link on your Fabric tenant and disable public network access.

Status: Open

Product Experience: Databases

Symptoms
If you enable private link on your Fabric tenant and disable public network access, the Copilot sidecar chat fails with an error when you submit any prompts. The error message is similar to: "I'm sorry, but I encountered an error while answering your question. Please try again." However, Copilot inline code completion and quick actions still work as expected.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Reports that use
functions with RLS don't work
Article • 02/03/2025

You can define row-level security (RLS) for a table that contains measures.
USERELATIONSHIP() and CROSSFILTER() functions can't be used in the measures. A

change was recently made to enforce this requirement.

Status: Open

Product Experience: Power BI

Symptoms
When viewing a report, you see an error message. The error message is similar to:
" Error fetching data for this Visual. The UseRelationship() and Crossfilter()
functions may not be used when querying <dataset> because it is constrained by row

level security " or " The USERELATIONSHIP() and CROSSFILTER() functions may not be
used when querying 'T' because it is constrained by row-level security ."

Solutions and workarounds


The change is to enforce a security requirement. To prevent your reports from failing,
you can remove USERELATIONSHIP() and CROSSFILTER() from your measures.
Alternatively, you can modify the relationships using recommendations for RLS models.

Next steps
About known issues



Known issue - Some SQL query syntax
fails in a graph database query
Article • 02/03/2025

When you try to run a query against a graph database in the Fabric SQL editor, some
graph database syntax doesn't work.

Status: Open

Product Experience: Databases

Symptoms
You can run a query against a graph database in the Fabric SQL editor. When the query
contains some graph database syntax, such as "->," the query fails.

Solutions and workarounds


Use a different client tool, such as Visual Studio Code, Azure Data Studio, or SQL Server
Management Studio, to run your query.

Next steps
About known issues



Known issue - KQL database loads
continuously without an error
Article • 02/03/2025

You can open an Eventhouse and select a database with tables. The main tile opens on
the Tables tab by default. All charts in the tiles are stuck in an infinite loading loop.

Status: Open

Product Experience: Real-Time Intelligence

Symptoms
When you open a KQL database, it loads continuously without showing an error.

Solutions and workarounds


To access the data in the KQL database, manually query the data.

Next steps
About known issues



Known issue - Git operations and
deployment pipelines don't work with
lakehouses
Article • 02/04/2025

You can't use Git operations or deployment pipelines that require lakehouse items.

Status: Fixed: February 4, 2025

Product Experience: Data Engineering

Symptoms
You can't sync your workspace to Git, commit to Git, or update from Git. Also, you can't
perform deployments using a deployment pipeline for lakehouse items.

Solutions and workarounds


As a temporary workaround for deployment pipelines, you can skip the lakehouse by
selecting other items. The deployment proceeds for all items except the lakehouse.
There's no workaround if you're using a lakehouse in Git operations. This article will be
updated when the fix is released.

Next steps
About known issues



Known issue - Apache Airflow job
creation shows Fabric upgrade message
Article • 02/03/2025

In Data Factory, you must have a workspace tied to a valid Fabric capacity or Fabric trial
to create a new Apache Airflow job. You have the correct license and try to create an
Apache Airflow job. The creation fails and you receive a message asking you to upgrade
to Fabric.

Status: Open

Product Experience: Data Factory

Symptoms
When trying to create an Apache Airflow job, you receive an upgrade message and can't
create the job. The upgrade message is similar to: Upgrade to a free Microsoft Fabric
Trial .

Solutions and workarounds


You see the upgrade message because Apache Airflow jobs aren't supported in all
regions. Currently, Apache Airflow jobs are only supported in the regions listed in the
documentation. To work around the limitation, change the capacity of your region to a
supported region, and retry the operation. When the issue is resolved, the error
message will be improved to include a clear, actionable item.

Next steps
About known issues



Known issue - Local data access isn't
allowed for pipeline using on-premises
data gateway
Article • 02/03/2025

For security considerations, local machine access is no longer allowed for a pipeline
using an on-premises data gateway. To segregate storage and compute, you can't host
data store on the compute where the on-premises data gateway is running.

Status: Open

Product Experience: Data Factory

Symptoms
You can try to access the local data source, such as REST, on the same server as the on-
premises data gateway. When you try to connect, you receive an error message and the
connection fails. The error message is similar to:
ErrorCode=RestResourceReadFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridD
eliveryException,Message=Fail to read from REST

resource.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.DataTransfer

.SecurityValidation.Exceptions.HostValidationException,Message=Access to <local Ip
address> is denied, resolved IP address is <local Ip address>, network type is

OnPremise,Source=Microsoft.DataTransfer.SecurityValidation,'

Solutions and workarounds


Ensuring the security of your data is a top priority. The change is by security design, and
there aren't any plans to revert. To use your pipeline securely, set up your data store on
another server, which can be connected from on-premises data gateway.

Next steps
About known issues



Known issue - Data activator events
aren’t ingested for Reflex events
Article • 02/03/2025

You can use Data Activator to get data using an API call to Fabric events from Reflex.
You see failures in the Data Activator and the events aren't ingested.

Status: Open

Product Experience: Real-Time Intelligence

Symptoms
You see failures when you try to ingest Fabric events from Reflex.

Solutions and workarounds


No workarounds at this time.

Next steps
About known issues



Known issue - Direct Lake query
cancellation might cancel other queries
Article • 02/03/2025

You can use Direct Lake as a storage mode for your semantic model. If you cancel a
query on a Direct Lake semantic model table, the model might occasionally also cause
the cancellation of other queries which read the same table.

Status: Open

Product Experience: Power BI

Symptoms
Queries might fail with a user cancellation error, despite the user not canceling the query. If a visual uses the query that was canceled, you might receive an error. The error
message is similar to: Error fetching data for this visual. The operation was
cancelled by the user.

Solutions and workarounds


To work around this issue, refresh the failed visuals.

Next steps
About known issues



Known issue - Direct Lake query
cancellation causes model to fall back to
DirectQuery
Article • 02/03/2025

You can use Direct Lake as a storage mode for your semantic model. If you cancel a
query on a Direct Lake semantic model table, the query falls back to DirectQuery mode.
At the same time, Direct Lake storage mode is disabled temporarily on the semantic
model.

Status: Open

Product Experience: Power BI

Symptoms
On "Direct Lake Only" semantic models, queries/visuals might fail with transient error.
On "Automatic" mode semantic models, query performance might be temporarily
impacted. If a visual uses the query that was canceled, you might receive an error. The
error message is similar to: Error fetching data for this visual. The operation was
cancelled by the user.

Solutions and workarounds


To avoid query failures, enable "Automatic" Direct Lake behavior.

Next steps
About known issues



Known issue - SQL databases not
available with private link through
January 2025
Article • 02/03/2025

You can't create or use SQL databases in tenants with private link enabled.

Status: Open

Product Experience: Databases

Symptoms
If you enabled private link on your Fabric tenant on or before November 19, 2024, you
don't see the option to create a new SQL Database. If you enabled private link after
November 19, 2024, you can't create the database and receive an error. The error
message is similar to Something went wrong .

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Create report doesn't
work on Eventhouse monitoring KQL
database
Article • 02/03/2025

You can set up Eventhouse monitoring, which includes a KQL database. In the KQL
database, you can select Create Power BI report to create a report. You receive an error,
and no report is created. The issue occurs because the report creation requires an active
query in the query pane.

Status: Open

Product Experience: Power BI

Symptoms
When you select Create Power BI report, you receive an error. The error message is
similar to: Something went wrong. Try opening the report again. If the problem
continues, contact support and provide the details below.

Solutions and workarounds


To work around the issue, create a query in the query pane. Then retry the report
creation using the Create Power BI report button.

Next steps
About known issues



Known issue - Show table command in
KQL Queryset editor fails
Article • 02/03/2025

You can try to query a table in the KQL Queryset editor. If you execute the .show table
<tableName> command, you receive an error.

Status: Open

Product Experience: Real-Time Intelligence

Symptoms
When you try to execute the .show table <tableName> command in the KQL Queryset
editor, you receive an error. The error message is similar to Something went wrong. The
incident has been reported .

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Eventstream fails to open
after renaming
Article • 02/03/2025

After renaming an eventstream item, you can try to open it. You receive a pop-up
notification indicating that the eventstream failed to open. Then, if you try to open
another eventstream in the same workspace, the opening also fails, displaying the same
error message. You can refresh the browser to allow the other eventstreams to open
successfully, but the renamed eventstream remains inaccessible.

Status: Open

Product Experience: Real-Time Intelligence

Symptoms
You receive an error when you try to open a renamed eventstream. You also receive an
error when trying to open other eventstreams in the same workspace where a renamed
eventstream resides.

Solutions and workarounds


To work around the issue, rename the eventstream back to its original name. Refresh the
page, and then you can open eventstreams in that workspace.

Next steps
About known issues



Known issue - Export to Excel using live
connection with show items with no
data turned on fails
Article • 02/03/2025

You can have a visual that has one or more grouping columns and also has Show items
with no data enabled. If you try to export to Excel using a live connection, the export
fails.

Status: Fixed: January 6, 2025

Product Experience: Power BI

Symptoms
When you try to export to Excel using a live connection, the export fails with a generic
error message.

Solutions and workarounds


To work around the issue, on the visual, turn off Show items with no data. You can then
export to Excel and then turn the setting back on. Changing the setting doesn't change
what is exported.

Next steps
About known issues



Known issue - Export-to-data disabled
for a visual with visual calculation
Article • 02/03/2025

When a visual contains a visual calculation or a hidden field, the export-to-data functionality is disabled on the service.

Status: Open

Product Experience: Power BI

Symptoms
The export-to-data command is disabled for a visual because it has a visual calculation
or hidden field.

Solutions and workarounds


Visual calculation is currently in public preview. The export-to-data command is disabled
temporarily. This article will be updated once the command is enabled.

Next steps
About known issues



Known issue - Sync content from Git in
workspace fails
Article • 02/03/2025

You can connect your workspace to Git and perform a sync from Git into the workspace.
When you choose the Sync content from Git into this workspace and select the Sync
button, you receive an error and the sync fails.

Status: Open

Product Experience: Power BI

Symptoms
The error typically happens when you try to sync from a new workspace that wasn't
previously synced. It also might happen due to an object with an invalid format. You
receive a message similar to: Theirs artifact must have the same logical id as Yours
artifact at this point , and can't perform any operations using Git.

Solutions and workarounds


As a workaround for a small workspace, you can fix the problem directly in Git or
rename the items.

Next steps
About known issues



Known issue - Export data option is
disabled for Q&A visual in the service
Article • 02/03/2025

The export data option is disabled for the Q&A visual in the Power BI service.

Status: Open

Product Experience: Power BI

Symptoms
When using the Q&A visual in the Power BI service, you see the export data option is
disabled.

Solutions and workarounds


As a workaround to get your data, follow these steps:

1. Enter Edit mode.
2. Use the Turn this Q&A result into a standard visual button to turn the Q&A visual into a table visual.
3. Select the export data option.

Alternatively, you can download the report from the service, and use the Power BI
Desktop to export the data.

Next steps
About known issues



Known issue - Pipeline activities don't
save if their data warehouse connection
is changed
Article • 02/03/2025

In a pipeline, you can add a stored procedure or script activity that uses a data
warehouse connection. If you change the data warehouse connection to point to a new
data warehouse connection in the activity, you can't save the connection in the activity.

Status: Open

Product Experience: Data Factory

Symptoms
In the pipeline, the stored procedure or script activity changes don't persist after the data warehouse connection is updated.

Solutions and workarounds


Delete and recreate the stored procedure or script activity using the new data
warehouse connection.

Next steps
About known issues



Known issue - Database creation fails to
create child items when item with same
name exists
Article • 02/03/2025

When you create a Fabric SQL Database, it automatically creates a child SQL analytics
endpoint and a child semantic model with the same name as the SQL database. If the
workspace already contains a SQL analytics endpoint or a semantic model with the same
name, the creation of the child items fails.

Status: Open

Product Experience: Databases

Symptoms
You created a SQL database with the same name as a SQL analytics endpoint or semantic model in that workspace. The child items for that SQL database weren't created, and you can't query the mirrored data for this database.

Solutions and workarounds


Before creating the SQL database, check if the target workspace already contains a SQL
analytics endpoint or semantic model with the same name. Choose a different name for
your new SQL database.

Next steps
About known issues



Known issue - Eventstream publish fails
when column contains empty array and
operator is added
Article • 02/03/2025

You can create an eventstream that has columns of data and a transformation operator
to process the data. The data contains a column with an empty array. If you try to
publish the eventstream, it shows an error and doesn't publish.

Status: Open

Product Experience: Real-Time Intelligence

Symptoms
You can't publish an event stream when both of the following conditions are met: the
data contains a column with an empty array and an operator is added to process the
data. You receive an error message similar to Failed to publish topology changes .

Solutions and workarounds


To work around this issue, avoid including empty arrays in the events.

Next steps
About known issues



Known issue - Creation failure for Copy
job item in empty workspace
Article • 02/03/2025

You can create a Copy job item in a workspace. If no items are present in the workspace, meaning the Copy job would be the first item in the workspace, the Copy job item creation fails.

Status: Open

Product Experience: Data Factory

Symptoms
When you try to create a Copy job item in an empty workspace, the creation fails.

Solutions and workarounds


Create a new artifact like a lakehouse, data warehouse, or pipeline before creating the
Copy job.

Next steps
About known issues



Known issue - Create Gateway public
API doesn't work for service principals
Article • 02/06/2025

You can use the Fabric public API to create a gateway. If you attempt to use the API to
create a gateway using a service principal, you might experience errors.

Status: Fixed: February 5, 2025

Product Experience: Data Factory

Symptoms
You might experience issues when you create a gateway using a service principal with
the Create Gateway public API.

Solutions and workarounds


As a workaround, create the gateway as a user and then share the gateway with your
service principal.

Next steps
About known issues



Known issue - Create, configure, or
delete a mirror fails
Article • 02/03/2025

When you try to create, configure, or delete a mirror, you receive a SchemaSupportNotEnabled error.

Status: Open

Product Experience: Data Factory

Symptoms
The creation, configuration, or deletion of a mirror fails with an error. The error message is similar to: UI error: Unexpected error occurred. Failed after 10 retries.

Solutions and workarounds


To work around this issue, add the parameter switch
REPEnableSchemaHierarchyInMountedRelationalDatabaseSink=0 at the end of the browser

URL. Then try the action again.

Next steps
About known issues



Known issue - Incorrect column names
after column format or aggregation
change
Article • 02/03/2025

You might experience incorrect or random column names after changing the column
format or aggregation.

Status: Open

Product Experience: Power BI

Symptoms
You might experience incorrect or random column names after changing the column
format or aggregation. One example where the incorrect or random column names
could appear is when querying through SQL Server Management Studio (SSMS).

Solutions and workarounds


You can try to run an XML for Analysis (XMLA) command through SSMS to the XMLA
endpoint to clear the cache to address the incorrect column names issue. However, you
might encounter the same issue if you redo the same operation to change column
format or aggregation.

Next steps
About known issues



Known issue - Metrics app timepoint
details missing for new P2 capacities
Article • 02/03/2025

In the Fabric Capacity Metrics app, you can view the timepoint details for your capacity.
If you have a new P2 capacity, you see that the timepoint detail is missing.

Status: Fixed: January 15, 2025

Product Experience: Power BI

Symptoms
When you try to retrieve timepoint details in the metrics app, you receive a blank screen.
The missing data is for a new P2 capacity.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Intermittent failures on
deployment of Sustainability solution
Article • 02/03/2025

If you have a Fabric capacity hosted in the Southeast Asia or South Brazil region, you
might receive intermittent failures when you attempt to deploy the Sustainability
solution.

Status: Open

Product Experience: Industry Solutions

Symptoms
When you try to deploy the Sustainability solution, you receive an error. The error
message is similar to: Failed to create Sustainability solution, please retry after
some time .

Solutions and workarounds


You can try one of the following workarounds:

Retry the creation of the Sustainability solution in the same or different workspace
Use a Fabric capacity in any region excluding Southeast Asia or South Brazil and
retry the creation of the Sustainability solution

Next steps
About known issues



Known issue - Pipeline copy data to
Kusto using an on-premises data
gateway doesn't work
Article • 02/03/2025

You can use an on-premises data gateway for a source in a pipeline. If the pipeline's
copy activity uses the on-premises source and a Kusto destination, the pipeline fails with
an error.

Status: Open

Product Experience: Data Factory

Symptoms
If you run a pipeline using the on-premises data gateway, you receive an error. The error
is similar to An error occurred for source: 'DataReader'. Error: 'Could not load file
or assembly 'Microsoft.IO.RecyclableMemoryStream, Version=$$2.2.0.0$$,

Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The


system cannot find the file specified. or KustoWriteFailed .

Solutions and workarounds


The issue is fixed in the November and later versions of the on-premises data gateway.
Install the latest version of the on-premises data gateway , and try again.

Next steps
About known issues



Known issue - New tile for Dataflow
Gen2 (CI/CD, preview) isn't yet
supported
Article • 02/03/2025

You might see a new tile for creating a Dataflow Gen2 (CI/CD, preview) Fabric item. If
you select the tile, you get an upgrade dialog box and you can't use the feature.

Status: Fixed: January 13, 2025

Product Experience: Data Factory

Symptoms
If you select the tile to create a Dataflow Gen2 (CI/CD, preview) Fabric item, you receive
an upgrade dialog box. The message in the upgrade dialog box is similar to: Upgrade to
a paid Microsoft Fabric Capacity .

Solutions and workarounds


The Dataflow Gen2 (CI/CD, preview) feature isn't available for public usage. Disregard
the tile for now.

Next steps
About known issues



Known issue - Line chart value-axis
zoom sliders don't work with markers
enabled
Article • 02/03/2025

The vertical/Y-axis value-axis zoom controls might not work correctly for line charts or line chart varieties, such as area chart or stacked area chart. The earlier problem, where the issue occurred if markers, stacked totals, or anomaly markers were enabled, is fixed. However, there's an ongoing issue if the minimum or maximum axis values are set.

Status: Open

Product Experience: Power BI

Symptoms
You see that the vertical zoom controls don't work correctly for line charts or line chart
varieties, such as area chart or stacked area chart.

Solutions and workarounds


To work around this issue, remove the configurations that cause the issue, such as disabling the line markers, stacked totals, or anomaly markers. Alternatively, you can enable the markers and disable the minimum or maximum axis settings.

Next steps
About known issues



Known issue - External data sharing
doesn't work in a different region
capacity lakehouse
Article • 02/03/2025

When you accept an external data share invitation, you can select the lakehouse where
the external share to the shared data is created. If you select a lakehouse within a
capacity that resides in a different region than your home tenant region, the operation
fails.

Status: Open

Product Experience: Power BI

Symptoms
After selecting the lakehouse and the path where to create the external share to the
external data, the operation fails.

Solutions and workarounds


As a workaround, accept the share invitation in a workspace within your home tenant
capacity.

Next steps
About known issues



Known issue - The default
environment's resources folder doesn't
work in notebooks
Article • 02/03/2025

Each Fabric environment item provides a resources folder. When a notebook attaches to
an environment, you can read and write files from and to this folder. When you select an
environment as workspace default and the notebook uses the workspace default, the
resources folder of the default environment doesn't work.

Status: Open

Product Experience: Data Engineering

Symptoms
You see the environment's resources folder in the notebook's file explorer. However,
when you try to read or write files from or to this folder, you receive an error. The error
message is similar to ModuleNotFoundError .

Solutions and workarounds


To work around this issue, you can attach a different environment in the notebook or
remove the environment from workspace default.

Next steps
About known issues



Known issue - Tenant migrations paused
through February 2025
Article • 02/04/2025

Cross-region tenant migrations are paused through February 28, 2025. New and existing
requests aren't processed during this time period.

Status: Open

Product Experience: Power BI

Symptoms
New and existing cross-region tenant migration requests aren't processed through
February 28, 2025.

Solutions and workarounds


This article will be updated once tenant migrations are resumed.

Next steps
About known issues



Known issue - More options menu on a
visual doesn't open in unsaved reports
Article • 02/03/2025

When you're focused on a Power BI visual, you can select the More options (...) button
to open the menu. When you select More options, the menu doesn't open if the report
is unsaved.

Status: Fixed: January 13, 2025

Product Experience: Power BI

Symptoms
The More options menu doesn't open when you select the button.

Solutions and workarounds


To get access to the More options menu on your visuals, save the report and try again.

Next steps
About known issues



Known issue - SQL analytics endpoint
tables lose statistics
Article • 02/03/2025

After you successfully sync your tables in your SQL analytics endpoint, the statistics get
dropped.

Status: Open

Product Experience: Data Warehouse

Symptoms
Statistics created on the SQL analytics endpoint tables aren't available after a successful
sync between the lakehouse and the SQL analytics endpoint.

Solutions and workarounds


The behavior is currently expected for the tables after a schema change. You need to
recreate the statistics or allow the auto statistics to run when necessary.

Next steps
About known issues



Known issue - SQL analytics endpoint
tables lose permissions
Article • 02/03/2025

After you successfully sync your tables in your SQL analytics endpoint, the permissions
get dropped.

Status: Open

Product Experience: Data Warehouse

Symptoms
Permissions applied to the SQL analytics endpoint tables aren't available after a
successful sync between the lakehouse and the SQL analytics endpoint.

Solutions and workarounds


The behavior is currently expected for the tables after a schema change. You need to
reapply the permissions after a successful sync to the SQL analytics endpoint.

Next steps
About known issues



Known issue - INFO.VIEW.MEASURES()
in calculated table might cause errors
Article • 02/11/2025

You can add the Data Analysis Expressions (DAX) function INFO.VIEW.MEASURES() to a
calculated table in a semantic model. In some cases, an error happens when you create
the calculated table. Other times, after the table is in the model, you might receive an
error when you remove other tables. The issue is more likely to happen on semantic
models that have a calculation group that includes a dynamic format string in one or
more calculation items.

Status: Fixed: February 10, 2025

Product Experience: Power BI

Symptoms
You either try to create a calculated table that contains INFO.VIEW.MEASURES() or you
try to delete a table where another calculated table in the semantic model contains
INFO.VIEW.MEASURES(). You receive an error message similar to: An unexpected
exception occurred .

Solutions and workarounds


To delete the table, remove the calculated table that contains INFO.VIEW.MEASURES().

Next steps
About known issues



Known issue - Data warehouse data
preview might fail if multiple data
warehouse items
Article • 02/03/2025

The data warehouse data preview in the user experience might fail if there's more than
one data warehouse item in the Object Explorer.

Status: Open

Product Experience: Data Warehouse

Symptoms
The data preview fails with error: Unable to execute the SQL request .

Solutions and workarounds


As a workaround, use T-SQL queries instead of the data preview in the data warehouse
user experience.

Next steps
About known issues



Known issue - External data sharing
OneLake shortcuts don't support blob
specific APIs
Article • 02/03/2025


You can set up external data sharing using OneLake shortcuts. The shortcut tables show
in the shared tenant in the lakehouse, but don't show in the SQL analytics endpoint.
Additionally, if you try to use a blob-specific API to access the OneLake shortcut
involved in the external data share, the API call fails.

Status: Fixed: January 28, 2025

Product Experience: OneLake

Symptoms
If you're using external data sharing, table discovery in the SQL analytics endpoint
doesn't work due to an underlying dependency on blob APIs. Additionally, blob APIs on
a path containing the shared OneLake shortcut returns a partial response or error.

Solutions and workarounds


There's no workaround for the SQL analytics endpoint table discovery not working. As a
workaround for the blob API failures, use a DFS alternative for the same activity.

Next steps
About known issues



Known issue - OneLake Shared Access
Signature (SAS) can't read cross-region
shortcuts
Article • 02/03/2025

You can't read a cross-region shortcut with a OneLake shared access signature (SAS).

Status: Open

Product Experience: OneLake

Symptoms
You receive a 401 Unauthorized error, even if the delegated SAS has the correct
permissions to access the shortcut.

Solutions and workarounds


As a workaround, you can read the shortcut from its home region, or authenticate using
a Microsoft Entra ID instead of a OneLake SAS.
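
As a sketch of the Microsoft Entra ID alternative, the following Python example reads a file through OneLake's ADLS Gen2-compatible DFS endpoint using the azure-identity and azure-storage-file-datalake packages instead of a OneLake SAS; the workspace, lakehouse, and shortcut path are placeholders, not values from this article:

    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    # Authenticate with a Microsoft Entra ID identity instead of a OneLake SAS.
    credential = DefaultAzureCredential()
    service = DataLakeServiceClient("https://onelake.dfs.fabric.microsoft.com", credential=credential)

    # OneLake paths follow <workspace>/<item>/<Files or Tables>/<path>; adjust to your shortcut.
    file_system = service.get_file_system_client("MyWorkspace")
    file_client = file_system.get_file_client("MyLakehouse.Lakehouse/Files/MyShortcut/data.csv")
    content = file_client.download_file().readall()
    print(f"Read {len(content)} bytes")

Reading the shortcut from its home region with the existing SAS remains the other option described above.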

Next steps
About known issues



Known issue - Pipeline fails when
getting a token to connect to Kusto
Article • 02/03/2025

You might experience issues while trying to get a token using mssparkutils.credentials.getToken() with your cluster URL as the audience when connecting to Kusto using a pipeline.

Status: Open

Product Experience: Data Engineering

Symptoms
You receive a pipeline failure when you try to get the token for Azure Data Explorer.

Solutions and workarounds


Use mssparkutils.credentials.getToken("kusto") instead of
mssparkutils.credentials.getToken(cluster_url) . The code kusto is the supported

short code for the Kusto audience in getToken().
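
For example, in a Fabric notebook the workaround might look like the following sketch; the Eventhouse query URI is a placeholder, and only the getToken() audience values come from this article:

    # mssparkutils is available by default in Fabric notebooks; the import is shown for clarity.
    from notebookutils import mssparkutils

    cluster_url = "https://<your-eventhouse-query-uri>"  # placeholder

    # token = mssparkutils.credentials.getToken(cluster_url)  # audience = cluster URL (hits this known issue)
    token = mssparkutils.credentials.getToken("kusto")         # supported short code for the Kusto audience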

Next steps
About known issues



Known issue - Dataverse shortcut
creation and read fails when
organization is moved
Article • 02/03/2025

You can use a shortcut to see data from your Dataverse in a lakehouse. However, when
the Dataverse organization is moved to a new storage location, the shortcut stops
working.

Status: Open

Product Experience: OneLake

Symptoms
Dataverse shortcut creation/read fails if the underlying Dataverse organization is moved.

Solutions and workarounds


You can work around the issue by deleting and recreating the shortcuts.

Next steps
About known issues



Known issue - Can't connect to semantic
model from Excel or use Analyze in
Excel
Article • 02/03/2025

You can consume Power BI semantic models in Excel by connecting to the semantic
model in Excel or choosing the Analyze in Excel option from the Power BI service. Either
way, when you try to make the connection, you receive an error message and can't
properly connect.

Status: Open

Product Experience: Power BI

Symptoms
When you try to connect to a Power BI dataset from Excel or use Analyze in Excel, you
receive an error. The error message is similar to Forbidden Activity or AAD error . It
most likely happens if you have Excel versions 2409 or 2410.

Solutions and workarounds


To fix this issue, sign out from all accounts in Excel, then sign in to Excel and try again. Alternatively, you can download a version of Excel earlier than 2409 and try again.

Next steps
About known issues



Known issue - Data warehouse tables
aren't accessible or updatable
Article • 02/03/2025

You can access data warehouse tables through the SQL analytics endpoint. Due to this
known issue, you can't apply changes to the tables. You also see an error marker next to
the table and receive an error if you try to access the table. The table sync also doesn't
complete as expected.

Status: Open

Product Experience: Data Warehouse

Symptoms
You see a red circle with a white 'X' next to the unavailable tables. When you try to access a table, you receive an error. The error message is similar to: An internal error has occurred while applying table changes to SQL.

Solutions and workarounds


Update the on-premises data gateway to the October or latest version.

Next steps
About known issues



Known issue - Spark jobs might fail due
to Runtime 1.3 updates for GA
Article • 02/03/2025

The Microsoft Fabric Runtime 1.3 based on Apache Spark 3.5 went into general
availability (GA) on September 23, 2024. Fabric Runtime 1.3 can now be used for
production workloads. As part of transitioning from public preview to the general
availability stage, we released major built-in library updates to improve functionality,
security, reliability, and performance. These updates can affect your Microsoft Fabric environments if you installed custom libraries or overrode a built-in library version in an environment that uses Runtime 1.3.

Status: Open

Product Experience: Data Engineering

Symptoms
If you installed libraries in an environment that uses Runtime 1.3, Spark jobs might start to fail with an error similar to Post Personalization failed. Importing installed custom libraries might also fail because the underlying built-in libraries were updated.

Solutions and workarounds


Reinstall the libraries for Fabric Runtime 1.3 in your Microsoft Fabric environments. The reinstallation rebuilds the dependency tree based on the latest updates. If your custom libraries have dependencies that are incompatible with the built-in Python packages, the installation fails and a debug log is generated with the list of required versions. You can update the dependencies to make your custom libraries compatible. Existing notebooks or pipelines might also hit incompatibilities or breaking changes because the underlying Python packages changed, so revisit the code to mitigate.
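
As a quick compatibility check, you can also test a pinned library version with inline installation in a notebook session before (or after) updating the environment. This is only a sketch; the package name, version, and module are hypothetical placeholders, and inline installation affects the current session only, not the environment itself.

# Inline installation applies to the current notebook session only (hypothetical package).
%pip install my-custom-package==2.1.0

# If the import succeeds against the updated Runtime 1.3 built-ins, the pinned version is compatible.
import my_custom_package
print(my_custom_package.__version__)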

Next steps
About known issues



Known issue - Premium capacity doesn't
add excess usage into carry forward
Article • 02/03/2025

In most scenarios, carry forward logic avoids the need to trigger Autoscale for small
bursts of usage. Autoscale is only triggered for longer overages as a way to avoid
throttling. If you have Power BI Premium, you can set the maximum number of v-cores
to use for Autoscale. You don't get any throttling behavior even if your usage is above
100% for a long time.

Status: Open

Product Experience: Power BI

Symptoms
In some cases when you set the maximum number of v-cores to use for Autoscale, you
don't see the Autoscale cores triggered as expected. If you face this known issue, you
observe the following patterns using the Capacity Metrics App:

Current usage is clearly higher than the 100% capacity units (CU) line in the Capacity Metrics App
Little or no overages are added and accumulated during these spikes
Throttling levels are low and aren't growing with the overages seen
The maximum number of v-cores to use for Autoscale is set, but active Autoscale isn't reaching it even after long periods of above-average usage

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Data pipeline connection
fails after connection creator role is
removed
Article • 02/03/2025

You might face issues with a connection in a data pipeline in a certain scenario. The
scenario is that you add yourself to the connection creator role in an on-premises data
gateway. You then create a connection in a data pipeline successfully. Someone removes you from the connection creator role. When you try to add and test the same connection, the connection fails, and you receive an error.

Status: Open

Product Experience: Data Factory

Symptoms
When trying to add and test a connection in a data pipeline that uses an on-premises data gateway, you receive an error. The error message is similar to: An exception error occurred: You do not have sufficient permission for this data gateway. Please request permissions from the owner of the gateway.

Solutions and workarounds


To avoid this issue, don't revoke the connection creator role.

Next steps
About known issues



Known issue - Data warehouses don't
show button friendly names
Article • 02/03/2025

The data warehouse user interface might not show the correct button names. You can
still use button functionality as expected.

Status: Open

Product Experience: Data Warehouse

Symptoms
If you face this issue, your language might be set to something other than English.
When working in the data warehouse experience, you don't see the button friendly
names. For example, when you try to create a data warehouse, you see common.create
instead of Create and common.cancel instead of Cancel.

Solutions and workarounds


The incorrect names don't affect the button functionality. However, if you want to see
the button friendly names, you can append &language=en to the end of your browser
URL.

Next steps
About known issues



Known issue - Pipeline fails when
copying data to data warehouse with
staging
Article • 02/03/2025

The data pipeline copy activity fails when copying data from Azure Blob Storage to a
Data Warehouse with staging enabled. Since staging is enabled, the copy activity uses
parquet as the staging format; however, the parquet string type can't be copied into a
decimal type in the data warehouse.

Status: Open

Product Experience: Data Factory

Symptoms
The pipeline copy activity fails with an error similar to: ErrorCode=DWCopyCommandOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message='DataWarehouse' Copy Command operation failed with error ''Column '' of type 'DECIMAL(32, 6)' is not compatible with external data type 'Parquet physical type: BYTE_ARRAY, logical type: UTF8', please try with 'VARCHAR(8000)'.

Solutions and workarounds


To work around this issue: First, copy the data into a lakehouse table with the decimal type. Then, copy the data from the lakehouse table into the data warehouse.

Next steps
About known issues

Known issue - Intermittent refresh
failure through on-premises data
gateway
Article • 02/03/2025

You might experience intermittent refresh failures for semantic models and dataflows
through the on-premises data gateway. Failures happen regardless of how the refresh
was triggered, whether scheduled, manually, or over the REST API.

Status: Open

Product Experience: Power BI

Symptoms
You see a gateway-bound refresh fail intermittently with the error AdoNetProviderOpenConnectionTimeoutError. Impacted hosts include Power BI semantic models and dataflows. The error occurs whether the refresh is scheduled, manual, or triggered via the API.

Solutions and workarounds


As a workaround, you can try to reboot your on-premises data gateway server or
upgrade the server to the latest version.

Next steps
About known issues



Known issue - Data warehouse exports
using deployment pipelines or git fail
Article • 02/03/2025

You might have a data warehouse that you use in a deployment pipeline or store in a Git
repository. When you run the deployment pipelines or update the Git repository, you
might receive an error.

Status: Open

Product Experience: Data Warehouse

Symptoms
During the pipeline run or Git update, you might see an error. The error message is
similar to: Index was outside the bounds of the array .

Solutions and workarounds


Try running the pipeline again, as the issue appears intermittently.

Next steps
About known issues



Known issue - OneLake BCDR write
transactions aren't categorized correctly
for billing
Article • 02/03/2025

You can enable Business Continuity and Disaster Recovery (BCDR) for a specific capacity
in Fabric. Due to this issue, write transactions that OneLake reports via Redirect are categorized and billed as non-BCDR.

Status: Fixed: January 28, 2025

Product Experience: OneLake

Symptoms
You see under-billing of write transactions since you're billed at the non-BCDR rate.

Solutions and workarounds


We fixed the issue, and all BCDR operations via Redirect are now correctly labeled as
BCDR. Because BCDR Write operations consume more compute units (CUs) compared to
non-BCDR Writes, you see BCDR Write operations marked as nonbillable in the
Microsoft Fabric Capacity Metrics app until January 2025. In January 2025, OneLake
BCDR Write operations via Redirect become billable and start consuming the CUs.

Next steps
About known issues



Known issue - Monitoring hub displays
incorrect queued duration
Article • 02/03/2025

Spark Jobs get queued when the capacity usage reaches its maximum compute limit on
Spark. Once the limit is reached, jobs are added to the queue. The jobs are then
processed when the cores become available in the capacity. This queueing capability is
enabled for all background jobs on Spark, including Spark notebooks triggered from the job scheduler, pipelines, and Spark job definitions. The time duration that the job is
waiting in the queue isn't correctly represented in the Monitoring hub as queued
duration.

Status: Open

Product Experience: Data Engineering

Symptoms
The total duration of the job shown in the Monitoring hub currently includes only the
job execution time. The total duration doesn't correctly reflect the duration in which the
job waited in the queue.

Solutions and workarounds


When the job is in queue, the status is shown as Not Started in the monitoring view.
Once the job starts execution, the status updates to In Progress in the monitoring view.
Use the job status indicator to know when the job is queued and when its execution is in
progress.

Next steps
About known issues

Known issue - Managed private
endpoint connection could fail
Article • 02/03/2025

A managed private endpoint connection for a private link service could fail. The failure occurs because a list of fully qualified domain names (FQDNs) can't be allow-listed as part of the managed private endpoint creation.

Status: Open

Product Experience: Data Engineering

Symptoms
You see a managed private endpoint creation error when trying to create a managed
private endpoint from the network security menu in the workspace settings.

Solutions and workarounds


You can use an alternate method to securely connect using the existing data sources
supported currently in Fabric.

Next steps
About known issues



Known issue - Concurrent stored
procedures block each other in data
warehouse
Article • 02/03/2025

You can execute the same stored procedure in parallel in a data warehouse. When the
stored procedure is run concurrently, it causes blocking because each stored procedure
takes an exclusive lock during plan generation.

Status: Fixed: January 28, 2025

Product Experience: Data Warehouse

Symptoms
You might experience slowness when the same procedure is executed in parallel as
opposed to by itself.

Solutions and workarounds


To relieve the slowdown, you can execute the stored procedures serially. Alternatively,
you can let the procedures block each other and execute your new query once the block
is released.

Next steps
About known issues



Known issue - Schema refresh for a data
warehouse's semantic model fails
Article • 02/03/2025

You can have a semantic model built on a data warehouse. When you try to refresh the
schema for the semantic model, you receive an error message, and the schema isn't
refreshed.

Status: Fixed: January 6, 2025

Product Experience: Data Warehouse

Symptoms
When refreshing the schema for a semantic model built on a data warehouse, you
receive an error message similar to: The datamart data is invalid .

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Subscriptions and exports
with maps might produce wrong results
Article • 02/03/2025

You can set up a subscription or export on a report or dashboard. If the item contains an
Azure or Bing map visual, the map data might show incorrect results.

Status: Reopened: October 10, 2024

Product Experience: Power BI

Symptoms
There are two main symptoms:

1. Azure maps don't have the bubble layer


2. Maps are zoomed out to the whole globe instead of showing the designed areas

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Pipelines don't support
Role property for Snowflake connector
Article • 02/03/2025

Pipelines don't support the Role property for the Snowflake connector.

Status: Open

Product Experience: Data Factory

Symptoms
When trying to test the Snowflake connection, you receive an error message similar to: Test connection operation failed. Failed to open the database connection. [Snowflake] 390201 (08004): The requested warehouse does not exist or not authorized

Solutions and workarounds


As a solution, you need to allocate the role to the specific warehouse for the connector
to use by default.

Next steps
About known issues



Known issue - Pipeline deployment fails
when parent contains deactivated
activity
Article • 02/03/2025

When creating pipelines, you can have a parent pipeline that contains an Invoke
pipeline activity that was deactivated. When you try to deploy the pipeline to a new
workspace, the deployment fails.

Status: Open

Product Experience: Data Factory

Symptoms
When you try to deploy a pipeline that has a deactivated Invoke pipeline activity, you
get an error similar to: Something went wrong. Deployment couldn't be completed. or
Git_InvalidResponseFromWorkload .

Solutions and workarounds


To work around the issue, mark the Invoke pipeline activity as Activated. You can then
redeploy the pipeline.

Next steps
About known issues



Known issue - Inserting nulls into Data
Warehouse tables fail with incorrect
error message
Article • 02/03/2025

When you insert NULL values into NOT NULL columns in SQL tables, the SQL query fails
as expected. However, the error message returned references the incorrect column.

Status: Open

Product Experience: Data Warehouse

Symptoms
You might see a failure when executing a SQL query to insert into a Data Warehouse table. The error message is similar to: Cannot insert the value NULL into column <columnname>, table <tablename>. When the query fails, the column referenced isn't the column that caused the error.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Next steps
About known issues



Known issue - Dataflow Gen2 refresh
fails due to missing staging SQL
analytics endpoint
Article • 02/03/2025

When a Dataflow Gen2 creates its staging lakehouse, sometimes the associated SQL
analytics endpoint isn't created. When there's no SQL analytics endpoint, the dataflow
fails to refresh with an error.

Status: Fixed: January 13, 2025

Product Experience: Data Factory

Symptoms
If you face this known issue, you see the dataflow refresh fail with an error. The error
message is similar to: Refresh failed. The staging lakehouse is not configured
correctly. Please create a support ticket with this error report.

Solutions and workarounds


As a workaround, you can create a support ticket and include the specific error message.
Since the staging lakehouse is an internal artifact, we can recreate the lakehouse
internally. The issue only affects existing staging lakehouses. If you create a new
dataflow, it won't have this issue.

Next steps
About known issues



Known issue - Multiple installations of
on-premises data gateway causes
pipelines to fail
Article • 02/03/2025

You might face an issue with Data Factory pipelines after performing multiple installations of the on-premises data gateway. The issue occurs when you install an on-premises data gateway version that supports pipelines, then downgrade to a version that doesn't support pipelines, and finally upgrade again to a version that supports pipelines. You then receive an error when you run a Data Factory pipeline using the on-premises data gateway.

Status: Open

Product Experience: Data Factory

Symptoms
You receive an error during a pipeline run. The error message is similar to: Please check your network connectivity to ensure your on-premises data gateway can access xx.frontend.clouddatahub.net.

Solutions and workarounds


To solve the issue, uninstall and reinstall the on-premises data gateway.

Next steps
About known issues



Known issue - SQL analytics endpoint
table queries fail due to RLE
Article • 02/03/2025

When creating a delta table, you can use run length encoding (RLE). If the delta writer uses RLE on the table you try to query in the SQL analytics endpoint, you receive an error.

Status: Open

Product Experience: Data Engineering

Symptoms
When you query a table in the SQL analytics endpoint, you receive an error. The error
message is similar to: Error handing external file: 'Unknown encoding type.'

Solutions and workarounds


To resolve the issue, you can disable RLE in the delta writer and recreate the delta table.
You can then query the table in the SQL analytics endpoint.

Next steps
About known issues



Known issue - Data warehouse
deployment using deployment pipelines
fails
Article • 02/03/2025

You can use Fabric Data Factory deployment pipelines to deploy data warehouses. When
you deploy data warehouse related items from one workspace to another, the data
warehouse connection breaks.

Status: Open

Product Experience: Data Factory

Symptoms
Once the deployment pipeline completes in the destination workspace, you see the data
warehouse connection is broken. You see an error message similar to: Failed to load
connection, please make sure it exists, and you have the permission to access it .

Solutions and workarounds


As a workaround, you can manually update the destination workspace data warehouse
connection to point to the destination workspace data warehouse.

Next steps
About known issues



Known issue - Dataflows Gen2 staging
lakehouse doesn't work in deployment
pipelines
Article • 02/03/2025

You can use Git integration for your Dataflow Gen2 dataflows. When you begin to
commit the workspace to the Git repo, you see the dataflow's staging lakehouse, named
DataflowsStagingLakehouse, available to commit. While you can select the staging
lakehouse to be exported, the integration doesn't work properly. If using a deployment
pipeline, you can't deploy DataflowsStagingLakehouse to the next stage.

Status: Open

Product Experience: Data Factory

Symptoms
You see the DataflowsStagingLakehouse visible in Git integration and can't deploy
DataflowsStagingLakehouse to the next stage using a deployment pipeline.

Solutions and workarounds


To deploy your files to the next stage in a deployment pipeline, manually ignore
DataflowsStagingLakehouse from the Git integration.

Next steps
About known issues



Known issue - SQL analytics endpoint
table sync fails when table contains
linked functions
Article • 02/03/2025

The Fabric SQL analytics endpoint uses a backend service to sync delta tables created in
a lakehouse. The backend service recreates the tables in the SQL analytics endpoint
based on the changes in lakehouse delta tables. When there are functions linked to the
SQL table, such as Row Level Security (RLS) functions, the creation operation fails and
the table sync fails.

Status: Open

Product Experience: Data Warehouse

Symptoms
In the scenario where there are functions linked to the SQL table, some or all of the
tables on the SQL analytics endpoint aren't synced.

Solutions and workarounds


To mitigate the issue, perform the following steps:

1. Run the SQL statement ALTER SECURITY POLICY DROP FILTER PREDICATE ON <Table>
on the table where the sync failed
2. Update the table on OneLake
3. Force the sync using the lakehouse or wait for the sync to complete automatically
4. Run the SQL statement ALTER SECURITY POLICY ADD FILTER PREDICATE ON <Table>
on the table where the sync failed
5. Confirm the table is successfully synced by checking the data

Next steps
About known issues



Known issue - Dataflows Gen2 staging
warehouse doesn't work in deployment
pipelines
Article • 02/03/2025

You can use Git integration for your Dataflow Gen2 dataflows. When you begin to
commit the workspace to the Git repo, you see the dataflow's staging warehouse,
named DataflowsStagingWarehouse, available to commit. While you can select the
staging warehouse to be exported, the integration doesn't work properly. If using a
deployment pipeline, you can't deploy DataflowsStagingWarehouse to the next stage.

Status: Open

Product Experience: Data Factory

Symptoms
You see the DataflowsStagingWarehouse visible in Git integration and can't deploy
DataflowsStagingWarehouse to the next stage using a deployment pipeline.

Solutions and workarounds


To deploy your files to the next stage in a deployment pipeline, manually ignore
DataflowsStagingWarehouse from the Git integration.

Next steps
About known issues



Known issue - Copy activity from Oracle
to lakehouse fails for Number data type
Article • 02/03/2025

The copy activity from Oracle to a lakehouse fails when one of the columns from Oracle
has a Number data type. In Oracle, scale can be greater than precision for
decimal/numeric types. Parquet files in a lakehouse require the scale to be less than or equal to the precision, so the copy activity fails.

Status: Open

Product Experience: Data Factory

Symptoms
When trying to copy data from Oracle to a lakehouse, you receive an error similar to: ParquetInvalidDecimalPrecisionScale. Invalid Decimal Precision or Scale. Precision: 38 Scale:127.

Solutions and workarounds


You can work around this issue by using a query to explicitly cast the column to NUMBER(p,s) or another type like BINARY_DOUBLE. When using NUMBER(p,s), ensure p >= s and s >= 0. The range defined by NUMBER(p,s) must also cover the range of the values stored in the column; if not, you receive an error similar to ORA-01438: value larger than specified precision allowed for this column. Here's a sample query:

SELECT CAST(ColA AS BINARY_DOUBLE) AS ColB FROM TableA

Next steps
About known issues



Known issue - Pipeline using XML
format copy gets stuck
Article • 02/03/2025

When using a pipeline to copy XML-formatted data to a tabular data source, the pipeline gets stuck. The issue most often appears when single XML records contain many different array-type properties.

Status: Open

Product Experience: Data Factory

Symptoms
The copy activity doesn't fail; it runs endlessly until it hits a timeout or is canceled. Some XML files copy without any issue, while others cause the issue.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Related content
About known issues



Known issue - West India region doesn't
support on-premises data gateway for
pipelines
Article • 02/03/2025

The West India region currently doesn't support the on-premises data gateway for Data Factory pipelines.

Status: Open

Product Experience: Data Factory

Symptoms
If you are in the West India region, you don't see the option to select the on-premises
data gateway during the creation of a Data Factory pipeline connection.

Solutions and workarounds


Use a tenant in a region other than West India.

Related content
About known issues



Known issue - OneLake under-reports
transactions in the Other category
Article • 02/03/2025

OneLake is currently under-reporting OneLake Other Operations Via Redirect transactions that occur when a lakehouse automatically detects Delta tables. HTTP 400 errors other than 401 and 403 errors aren't billed. When we fix the issue, your usage for the OneLake Other Operations Via Redirect transactions might go up. If your usage exceeds your capacity limits, your capacity might be throttled.

Status: Open

Product Experience: OneLake

Symptoms
You currently don't see all OneLake transactions in the Other category being reported.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Related content
About known issues



Known issue - Tables not available to
add in Power BI semantic model
Article • 02/03/2025

When you're working in a lakehouse, you can create and add tables to a new Power BI
semantic model. You also can adjust the tables shown in the default semantic model
associated with a lakehouse. In either case, you might run into a scenario where you
don't see all available tables and can't add them to your semantic model.

Status: Open

Product Experience: Data Engineering

Symptoms
When trying to select the tables to include in a semantic model, you don't see all
expected tables.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Related content
About known issues



Known issue - Type mismatch when
writing decimals and dates to lakehouse
using a dataflow
Article • 02/03/2025

You can create a Dataflow Gen2 dataflow that writes data to a lakehouse as an output
destination. If the source data has a Decimal or Date data type, you might see a
different data type appear in the lakehouse after running the dataflow. For example,
when the data type is Date, the resulting data type is sometimes converted to Datetime,
and when the data type is Decimal, the resulting data type is sometimes converted to Float.

Status: Open

Product Experience: Data Factory

Symptoms
You see an unexpected data type in the lakehouse after running a dataflow.

Solutions and workarounds


No workarounds at this time. This article will be updated when the fix is released.

Related content
About known issues



Known issue - Using an inactive SQL
analytics endpoint can show old data
Article • 02/03/2025

If you use a SQL analytics endpoint that hasn't been active for a while, the SQL analytics
endpoint scans the underlying delta tables. It's possible for you to query one of the
tables before the refresh is completed with the latest data. If so, you might see old data
being returned or even errors being raised if the parquet files were vacuumed.

Status: Fixed: January 6, 2025

Product Experience: Data Warehouse

Symptoms
When querying a table through the SQL analytics endpoint, you see old data or get an
error, similar to: "Failed to complete the command because the underlying location does
not exist. Underlying data description: %1."

Solutions and workarounds


You can retry after allowing the SQL analytics endpoint to complete its refresh process.

Related content
About known issues



Known issue - Data warehouse with
more than 20,000 tables fails to load
Article • 02/03/2025

A data warehouse or SQL analytics endpoint that has more than 20,000 tables fails to
load in the portal. If connecting through any other client tools, you can load the tables.
The issue is only observed while accessing the data warehouse through the portal.

Status: Fixed: November 11, 2024

Product Experience: Data Warehouse

Symptoms
Your data warehouse or SQL analytics endpoint fails to load in the portal with the error
message "Batch was canceled," but the same connection strings are reachable using
other client tools.

Solutions and workarounds


If you're impacted, use a client tool such as SQL Server Management Studio or Azure Data Studio to query the data warehouse.

Related content
About known issues



Known issue - User column incorrectly
shows as System in Fabric capacity
metrics app
Article • 02/03/2025

In a limited number of cases, when you make a user-initiated request to the data
warehouse, the user identity isn't correctly reported to the Fabric capacity metrics app.
In the capacity metrics app, the User column shows as System.

Status: Open

Product Experience: Data Warehouse

Symptoms
In the interactive operations table on the timepoint page, you incorrectly see the value
System under the User column.

Solutions and workarounds


No workarounds at this time. When the fix is released, we'll update this article.

Related content
About known issues



Known issue - InProgress status shows
in Fabric capacity metrics app for
completed queries
Article • 02/03/2025

In the Fabric capacity metrics app, completed queries in the Data Warehouse SQL analytics endpoint appear with the status "InProgress" in the interactive operations table on the timepoint page.

Status: Open

Product Experience: Data Warehouse

Symptoms
In the interactive operations table on the timepoint page, completed queries in the Data
Warehouse SQL analytics endpoint appear with the status InProgress

Solutions and workarounds


No workarounds at this time. When the fix is released, we'll update this article.

Related content
About known issues



Known issue - The Data Warehouse
Object Explorer doesn't support case-
sensitive object names
Article • 02/03/2025

The object explorer fails to display Fabric Data Warehouse objects (for example, tables and views) when they share the same case-insensitive name (for example, table1 and Table1). If there are two objects with the same name, one displays in the object explorer; if there are three or more such objects, none of them display. The objects still appear in, and can be used from, system views (for example, sys.tables), but they aren't available in the object explorer.

Status: Open

Product Experience: Data Warehouse

Symptoms
If you notice that an object shares the same case-insensitive name as another object, is listed in a system view and works as intended, but isn't listed in the object explorer, you have encountered this known issue.

Solutions and workarounds


We recommend giving objects distinct names and not relying on case sensitivity. Doing so avoids the inconsistency of an object being listed in system views but not in the object explorer.

Related content
About known issues



Known issue - Temp table usage in Data
Warehouse and SQL analytics endpoint
Article • 02/03/2025

Users can create temp tables in the Data Warehouse and in the SQL analytics endpoint, but data from user tables can't be inserted into temp tables. Temp tables also can't be joined to user tables.

Status: Fixed: January 6, 2025

Product Experience: Data Warehouse

Symptoms
Users may notice that data from their user tables can't be inserted into a Temp table.
Temp tables can't be joined to user tables.

Solutions and workarounds


Use regular user tables instead of Temp tables.

Related content
About known issues



Microsoft Fabric product, workload, and
item icons
Article • 01/26/2025

This article provides information about the official collection of icons for Microsoft
Fabric that you can use in architectural diagrams, training materials, slide decks or
documentation.

Do's
Use the icons to illustrate how products can work together.
In diagrams, we recommend including a label that contains the product,
experience, or item name somewhere close to the icon.
Use the icons as they appear within the product.

Don'ts
Don't crop, flip, or rotate icons.
Don't distort or change icon shape in any way.
Don't use Microsoft product icons to represent your product or service.

Terms
Microsoft permits the use of these icons in architectural diagrams, training materials, or
documentation. You can copy, distribute, and display the icons only for the permitted
use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.

Fabric icons are also available as an npm package for use in Microsoft Fabric platform extension development. To use these icons, import the package into your project, then use individual SVG files as an image source or as an SVG. You can also directly download the icons from the following GitHub repository. Select the following button to open the repo, select ... from the right-hand corner, and select Download:

Download icons from GitHub

Related content
Microsoft Power Platform icons
Azure icons
Dynamics 365 icons



What's new in Microsoft Fabric? archive
Article • 01/26/2025

This archive page is periodically updated with archived content from What's new in Microsoft Fabric?

To follow the latest in Fabric news and features, see the Microsoft Fabric Blog . Also
follow the latest in Power BI at What's new in Power BI?

New to Microsoft Fabric?


This section includes past articles and announcements that are useful to users new to
Microsoft Fabric.

Learning Paths for Fabric


Get started with Microsoft Fabric
End-to-end tutorials in Microsoft Fabric
Definitions of terms used in Microsoft Fabric


Month Feature Learn more

March Microsoft Fabric is now We're excited to announce that Microsoft Fabric, our all-
2024 HIPAA compliant in-one analytics solution for enterprises, has achieved
new certifications for HIPAA and ISO 27017, ISO 27018,
ISO 27001, ISO 27701 .

March Exam DP-600 is now Exam DP-600 is now available, leading to the Microsoft
2024 available Certified: Fabric Analytics Engineer Associate certification.
The Fabric Career Hub can help you learn quickly and
get certified.

March Fabric Copilot Pricing: Copilot in Fabric begins billing on March 1, 2024 as
2024 An End-to-End part of your existing Power BI Premium or Fabric
example Capacity. Learn how Fabric Copilot usage is calculated .

January Microsoft Fabric Copilot for Data Science and Data Engineering is now
2024 Copilot for Data available worldwide. What can Copilot for Data Science
Science and Data and Data Engineering do for you?
Engineering

December Fabric platform Learn more about the big-picture perspective of the
2023 Security Fundamentals Microsoft Fabric security architecture by describing how
the main security flows in the system work.

November Microsoft Fabric, A focus on what customers using the current Platform-
2023 explained for existing as-a-Service (PaaS) version of Synapse can expect . We
Synapse users explain what the general availability of Fabric means for
your current investments (spoiler: we fully support them),
but also how to think about the future.

November Microsoft Fabric is now Microsoft Fabric is now generally available for
2023 generally available purchase . Microsoft Fabric can reshape how your
teams work with data by bringing everyone together on
a single, AI-powered platform built for the era of AI. This
includes: Power BI, Data Factory, Data Engineering, Data
Science, Real-Time Analytics, Data Warehouse, and the
overall Fabric platform.

November Fabric workloads are Microsoft Fabric is now generally available! Microsoft
2023 now generally Fabric Data Warehouse, Data Engineering & Data
available! Science, Real-Time Analytics, Data Factory, OneLake, and
the overall Fabric platform are now generally available.

November Implement medallion An introduction to medallion lake architecture and how


2023 lakehouse architecture you can implement a lakehouse in Microsoft Fabric.
in Microsoft Fabric

October Announcing the Fabric Announcing the Fabric Roadmap . One place you can
2023 roadmap see what we are working on and when you can expect it
to be available.

October Get started with Explore how semantic link seamlessly connects Power BI
2023 semantic link semantic models with Fabric Data Science within
Microsoft Fabric. Learn more at Semantic link in
Microsoft Fabric: Bridging BI and Data Science .

You can also check out the semantic link sample


notebooks that are now available in the fabric-samples
GitHub repository. These notebooks showcase the use of
semantic link's Python library, SemPy, in Microsoft Fabric.

September Fabric Capacities – Read more about the improvements we're making to the
2023 Everything you need Fabric capacity management platform for Fabric and
to know about what's Power BI users .
new and what's
coming

August Accessing Microsoft Learn how to enable Microsoft Fabric as a developer, as a


2023 Fabric for developers, startup or as an enterprise has different steps. Learn
startups and more at Enabling Microsoft Fabric for developers,
enterprises! startups, and enterprises .

August Strong, useful, From the Data Integration Design Team, learn about the
2023 beautiful: Designing a strong, creative, and function design of Microsoft
new way of getting Fabric, as Microsoft designs for the future of data
data integration.

August Learn Live: Get started Calling all professionals, enthusiasts, and learners! On
2023 with Microsoft Fabric August 29, we'll be kicking off the "Learn Live: Get started
with Microsoft Fabric" series in partnership with
Microsoft's Data Advocacy teams and Microsoft
WorldWide Learning teams to deliver 9x live-streamed
lessons covering topics related to Microsoft Fabric!

July 2023 Step-by-Step Tutorial: In this comprehensive guide, we walk you through the
Building ETLs with process of creating Extract, Transform, Load (ETL)
Microsoft Fabric pipelines using Microsoft Fabric .

June 2023 Get skilled on Who is Fabric for? How can I get skilled? This blog post
Microsoft Fabric - the answers these questions about Microsoft Fabric, a
AI-powered analytics comprehensive data analytics solution by unifying many
platform experiences on a single platform.

June 2023 Introducing the end- In this blog, we explore four end-to-end scenarios that
to-end scenarios in are typical paths our customers take to extract value and
Microsoft Fabric insights from their data using Microsoft Fabric .

May 2023 Get Started with A technical overview and introduction to everything from
Microsoft Fabric - All data movement to data science, real-time analytics, and
in-one place for all business intelligence in Microsoft Fabric .
your Analytical needs

May 2023 Microsoft OneLake in Microsoft OneLake brings the first multicloud SaaS data
Fabric, the OneDrive lake for the entire organization .
for data

Generally available features


The following table lists the features of Microsoft Fabric that have transitioned from
preview to general availability (GA).


Month Feature Learn more

July 2024 Update records in a KQL Database preview The .update command is now generally available. Learn more about how to Update records in a Kusto database.

July 2024 Warehouse Warehouse in Microsoft Fabric offers the capability to query the
queries with historical data as it existed in the past at the statement level,
time travel (GA) now generally available. The ability to query data from a specific
timestamp is known in the data warehousing industry as time
travel.

June 2024 OneLake As part of the One logical copy promise, we're excited to
availability of announce that OneLake availability of Eventhouse in Delta Lake
Eventhouse in format is Generally Available .
Delta Lake
format

May 2024 Microsoft Fabric Azure Private Link for Microsoft Fabric secures access to your
Private Links sensitive data in Microsoft Fabric by providing network isolation
and applying required controls on your inbound network traffic.
For more information, see Announcing General Availability of
Fabric Private Links .

May 2024 Trusted Trusted workspace access in OneLake shortcuts is now generally
workspace available . You can now create data pipelines to access your
access firewall-enabled Azure Data Lake Storage Gen2 (ADLS Gen2)
accounts using Trusted workspace access (preview) in your
Fabric Data Pipelines. Use the workspace identity to establish a
secure and seamless connection between Fabric and your
storage accounts . Trusted workspace access also enables
secure and seamless access to ADLS Gen2 storage accounts
from OneLake shortcuts in Fabric .

May 2024 Managed Managed private endpoints for Microsoft Fabric allow secure
private connections over managed virtual networks to data sources that
endpoints are behind a firewall or not accessible from the public internet.
For more information, see Announcing General Availability of
Fabric Private Links, Trusted Workspace Access, and Managed
Private Endpoints .

May 2024 Eventhouse Eventhouse is a new, dynamic workspace hosting multiple KQL
databases , generally available as part of Fabric Real-Time
Intelligence. An Eventhouse offers a robust solution for
managing and analyzing substantial volumes of real-time data.
Get started with a guide to Create and manage an Eventhouse.

May 2024 Data The Environment in Fabric is now generally available. The
Engineering: Environment is a centralized item that allows you to configure
Environment all the required settings for running a Spark job in one place. At
GA, we added support for Git, deployment pipelines, REST APIs,
resource folders, and sharing.

May 2024 Microsoft Fabric Microsoft Fabric Core APIs are now generally available. The
Core REST APIs Fabric user APIs are a major enabler for both enterprises and
partners to use Microsoft Fabric as they enable end-to-end fully
automated interaction with the service, enable integration of
Microsoft Fabric into external web applications, and generally
enable customers and partners to scale their solutions more
easily.

May 2024 Power Query The Power Query SDK is now generally available in Visual
Dataflow Gen2 Studio Code! To get started with the Power Query SDK in Visual
SDK for VS Code Studio Code, install it from the Visual Studio Code
Marketplace .

April 2024 Semantic Link Semantic links are now generally available! The package comes
with our default VHD, and you can now use Semantic link in
Fabric right away without any pip installation.

March VNet Gateways VNet Data Gateway support for Dataflows Gen2 in Fabric is now
2024 in Dataflow generally available. The VNet data gateway helps to connect
Gen2 from Fabric Dataflows Gen2 to Azure data services within a
VNet, without the need of an on-premises data gateway.

November Microsoft Fabric Microsoft Fabric is now generally available for purchase .
2023 is now generally Microsoft Fabric can reshape how your teams work with data by
available bringing everyone together on a single, AI-powered platform
built for the era of AI. This includes: Power BI, Data Factory, Data
Engineering, Data Science, Real-Time Analytics, Data Warehouse,
and the overall Fabric platform .

Community
This section summarizes previous Microsoft Fabric community opportunities for
prospective and current influencers and MVPs. To learn about the Microsoft MVP Award
and to find MVPs, see mvp.microsoft.com .


Month Feature Learn more

August Fabric Influencers The Fabric Influencers Spotlight August 2024 highlights
2024 Spotlight August 2024 and amplifies blog posts, videos, presentations, and other
content related to Microsoft Fabric from members of
Microsoft MVPs & Fabric Super Users from the Fabric
community.

August Winners of the Fabric Congratulations to the winners of the Fabric Community
2024 Community Sticker Sticker Challenge !
Challenge

July 2024 Fabric Influencers Introducing the new Fabric Influencers Spotlight series of
Spotlight articles to highlight and amplify blog posts, videos,
presentations, and other content related to Microsoft
Fabric. Read blogs from Microsoft MVPs and Fabric Super
Users from the Fabric community .

June 2024 Solved Fabric You can now find solved posts from Fabric Community
Community posts are discussions in the Fabric Help Pane .
now available in the
Fabric Help Pane

May 2024 Announcing Microsoft Announcing the Microsoft Fabric Community Conference
Fabric Community Europe on September 24, 2024. Register today !
Conference Europe

May 2024 Register for the Starting May 21, 2024, sign up for the Microsoft Build:
Microsoft Build: Microsoft Fabric Cloud Skills Challenge and prepare for
Microsoft Fabric Cloud Exam DP-600 and upskill to the Fabric Analytics Engineer
Skills Challenge Associate certification.

March Exam DP-600 is now Exam DP-600 is now available, leading to the Microsoft
2024 available Certified: Fabric Analytics Engineer Associate certification.
The Fabric Career Hub can help you learn quickly and
get certified.

March Microsoft Fabric Join us in Las Vegas March 26-28, 2024 for the first annual
2024 Community Microsoft Fabric Community Conference. See firsthand
Conference how Microsoft Fabric and the rest of the data and AI
products at Microsoft can help your organization prepare
for the era of AI. Register today using code MSCUST for
an exclusive discount!

March Announcing the We received 50 Hackathon project submissions from over


2024 winners of 100 registrants, participating from every corner of the
"HackTogether: The world. Our judges were blown away by the breadth,
Microsoft Fabric depth, and overall quality of submissions. Meet the
Global AI Hack" winners of the Fabric Global AI Hack!

January Announcing Fabric The new Fabric Career Hub is your one-stop-shop for
2024 Career Hub professional growth! We've created a comprehensive
learning journey with the best free on-demand and live
training, plus exam discounts.

January Hack Together: The Hack Together is a global online hackathon that runs
2024 Microsoft Fabric from February 15 to March 4, 2024. Join us for Hack
Global AI Hack Together: The Microsoft Fabric Global AI Hack, a virtual


event where you can learn, experiment, and hack together
with the new Copilot and AI features in Microsoft Fabric!
For more information, see Microsoft Fabric Global AI
Hack .

December Microsoft Fabric Join us in Las Vegas March 26-28, 2024 for the first annual
2023 Community Microsoft Fabric Community Conference. See firsthand
Conference how Microsoft Fabric and the rest of the data and AI
products at Microsoft can help your organization prepare
for the era of AI. Register today to immerse yourself in
the future of data and AI and connect with thousands of
data innovators like yourself eager to share their insights.

November Microsoft Fabric MVP A special edition of the "Microsoft Fabric MVP Corner"
2023 Corner – Special blog series highlights selected content related to Fabric
Edition (Ignite) and created by MVPs around the Microsoft Ignite 2023
conference , when we announced Microsoft Fabric
generally available.

October Microsoft Fabric MVP Highlights of selected content related to Fabric and
2023 Corner – October created by MVPs from October 2023 .
2023

September Microsoft Fabric MVP Highlights of selected content related to Fabric and
2023 Corner – September created by MVPs from September 2023 .
2023

August Microsoft Fabric MVP Highlights of selected content related to Fabric and
2023 Corner – August 2023 created by MVPs from August 2023 .

July 2023 Microsoft Fabric MVP Highlights of selected content related to Fabric and
Corner – July 2023 created by MVPs in July 2023 .

June 2023 Microsoft Fabric MVP The Fabric MVP Corner blog series to highlight selected
Corner – June 2023 content related to Fabric and created by MVPs in June
2023 .

May 2023 Fabric User Groups Power BI User Groups are now Fabric User Groups !

May 2023 Learn about Microsoft Prior to our official announcement of Microsoft Fabric at
Fabric from MVPs Build 2023, MVPs had the opportunity to familiarize
themselves with the product. For several months, they
have been actively testing Fabric and gaining valuable
insights. Now, their enthusiasm for the product is evident
as they eagerly share their knowledge and thoughts
about Microsoft Fabric with the community .
Fabric samples and guidance
This section summarizes archived guidance and sample project resources for Microsoft
Fabric.


Month Feature Learn more

March Protect PII One possible way to use Azure AI to identify and extract
2024 information in your personally identifiable information (PII) in Microsoft
Microsoft Fabric Fabric is to use Azure AI Language to detect and
Lakehouse with categorize PII entities in text data, such as names,
Responsible AI addresses, emails, phone numbers, social security
numbers, etc.

February Building Common Read more about common data architecture patterns and
2024 Data Architectures how they can be secured with Microsoft Fabric , and the
with OneLake in basic building blocks of security for OneLake.
Microsoft Fabric

January New Fabric Beta availability of Microsoft Certification Exam DP-600:


2024 certification and Implementing Analytics Solutions with Microsoft Fabric
Fabric Career Hub is available for a limited time. Passing this exam earns the
Microsoft Certified: Fabric Analytics Engineer Associate
certification.

December Working with If you want to use an application that directly integrates
2023 OneLake using Azure with Windows File Explorer, check out OneLake file
Storage Explorer explorer . However, if you're accustomed to using Azure
Storage Explorer for your data management tasks , you
can continue to harness its functionalities with OneLake
and some of its key benefits.

November Semantic Link: Semantic Link adds support for the recently released
2023 OneLake integrated OneLake integrated semantic models. You can now
Semantic Models directly access data using your semantic model's name via
OneLake using the read_table function and the new
mode parameter set to onelake .

November Integrate your SAP Using the built-in connectivity of Microsoft Fabric is the
2023 data into Microsoft easiest and least-effort way of adding SAP data to your
Fabric Fabric data estate .

November Fabric Changing the Follow this step-by-step example of how to explore the
2023 game: Validate functional dependencies between columns in a table
dependencies with using the semantic link . The semantic link is a feature
Semantic Link – Data that allows you to establish a connection between Power
Quality BI datasets and Fabric Data Science in Microsoft Fabric.

November Implement medallion An introduction to medallion lake architecture and how


2023 lakehouse you can implement a lakehouse in Microsoft Fabric.
architecture in
Microsoft Fabric

October Fabric Change the Follow this realistic example of reading data from Azure
2023 Game: Exploring the Data Lake Storage using shortcuts, organizing raw data
data into structured tables, and basic data exploration. Our
data exploration uses as a source the diverse and
captivating city of London with information extracted
from data.london.gov.uk/ .

September Announcing an end- A new workshop guides you in building a hands-on, end-
2023 to-end workshop: to-end data analytics solution for the Snapshot
Analyzing Wildlife Serengeti dataset using Microsoft Fabric. The dataset
Data with Microsoft consists of approximately 1.68M wildlife images and
Fabric image annotations provided in .json files.

September New learning path: The new Implement a Lakehouse with Microsoft Fabric
2023 Implement a learning path introduces the foundational components of
Lakehouse with implementing a data lakehouse with Microsoft Fabric with
Microsoft Fabric seven in-depth modules.

September Fabric Readiness The Fabric Readiness repository is a treasure trove of


2023 repository resources for anyone interested in exploring the exciting
world of Microsoft Fabric.

July 2023 Connecting to How do I connect to OneLake? This blog covers how to
OneLake connect and interact with OneLake, including how
OneLake achieves its compatibility with any tool used
over ADLS Gen2!

June 2023 Using Azure How does Azure Databricks work with Microsoft Fabric?
Databricks with This blog post answers that question and more details on
Microsoft Fabric and how the two systems can work together.
OneLake

July 2023 Free preview usage of We're extending the free preview usage of Fabric
Microsoft Fabric experiences (other than Power BI). These experiences
experiences extended won't count against purchased capacity until October 1,
to October 1, 2023 2023 .

Microsoft Copilot in Microsoft Fabric


This section summarizes archived announcements about Copilot in Fabric.

Month Feature Learn more

June 2024 Copilot privacy and For more information on the privacy and security of
security Copilot in Microsoft Fabric, and for detail information on
each workload, see Privacy, security, and responsible use
for Copilot in Microsoft Fabric (preview).

May 2024 The AI and Copilot In the tenant admin portal, you can delegate the
setting automatically enablement of AI and Copilot features to Capacity
delegated to capacity administrators . This AI and Copilot setting is
admins automatically delegated to capacity administrators and
tenant administrators won't be able to turn off the
delegation.

February Fabric Change the This blog post shows how simple is to enable Copilot ,
2024 Game: How easy is it a generative AI that brings new ways to transform and
to use Copilot in analyze data, generate insights, and create visualizations
Microsoft Fabric and reports in Microsoft Fabric.

February Copilot for Data Copilot for Data Factory in Microsoft Fabric is now
2024 Factory in Microsoft available in preview and included in the Dataflow Gen2
Fabric experience. For more information, see Copilot for Data
Factory.

January Microsoft Fabric Copilot for Data Science and Data Engineering is now
2024 Copilot for Data available worldwide. What can Copilot for Data Science
Science and Data and Data Engineering do for you?
Engineering

January How to enable Copilot Follow this guide to get Copilot in Fabric enabled for
2024 in Fabric for Everyone everyone in your organization. For more information, see
Overview of Copilot for Microsoft Fabric (preview).

January Copilot in Fabric is Copilot in Fabric is now available to all customers,


2024 available worldwide including Copilot for Power BI, Data Factory, and Data
Science & Data Engineering. Read more in our Overview
on Copilot in Fabric.

November Empower Power BI We're thrilled to announce the general availability of


2023 users with Microsoft Microsoft Fabric and the preview of Copilot in Microsoft
Fabric and Copilot Fabric, including the experience for Power BI. .

November 2023 - Copilot for Power BI in Microsoft Fabric preview. We're thrilled to announce the preview of Copilot in Microsoft Fabric, including the experience for Power BI, which helps users quickly get started by helping them create reports in the Power BI web experience. For more information, see Copilot for Power BI.

October 2023 - Chat your data in Microsoft Fabric with Semantic Kernel. Learn how to construct Copilot tools based on business data in Microsoft Fabric.

Data Factory in Microsoft Fabric


This section summarizes archived new features and capabilities of Data Factory in
Microsoft Fabric. Follow issues and feedback through the Data Factory Community
Forum .


Month Feature Learn more

August Certified connector Updated Dataflow Gen2 connectors have been


2024 updates released, as well as two new Data pipeline connectors
for Salesforce and Vertica. For more information, see
the August 2024 Certified connector updates .

August Data Warehouse The Data Warehouse connector now supports TLS
2024 Connector Supports TLS 1.3 , the latest version of the Transport Layer
1.3 Security protocol.

August Connect to your Azure You can easily browse and connect to your Azure
2024 Resources by Modern Get resources automatically with the modern data
Data Experience in Data experience of Data Pipeline .
pipeline

July 2024 Use existing connections You can now select any existing connections from
from the OneLake Data OneLake Datahub , not just your recent and favorite
hub integration ones. This makes it easier to access your data sources
from the homepage of modern get data in data
pipeline. For more information, see Modern Get Data
experience.

July 2024 Snowflake storage Connect and integrate Snowflake's storage


integration integration to streamline data workflows and
optimize performance across all staging scenarios,
without the need to bring external storage to stage
your dataset. For more information, see Snowflake
connector.

July 2024 - Edit JSON code for Data pipelines. You can now edit the JSON behind your Data Factory pipelines in Fabric. When you design low-code pipeline workflows, directly editing the JSON code behind your visual pipeline canvas can increase your flexibility and improve your time to market.

July 2024 Dataflow Gen2 certified New and updated Dataflow Gen2 connectors have
connector updates been released, including two new connectors in Fabric
Data Factory data pipeline: Azure MySQL Database
Connector and Azure Cosmos DB for MongoDB
Connector. For more information, see the July 2024
Certified connector updates .

July 2024 Support for editing Introducing a new experience to edit navigation steps
Navigation steps within Dataflow, to connect to a different object,
inside of the Applied steps section of the Query
settings pane. For more information, see Editing
Navigation steps .

July 2024 Global view in Manage The new Global view in Manage connections allows
connections you to see all the available connections in your Fabric
environment so you can modify them or delete them
without ever having to leave the Dataflow experience.
For more information, see Global view in Manage
connections .

July 2024 Fast Copy with On- Fast Copy (preview) in Dataflow Gen2 now supports
premises Data Gateway on-premises data stores using a gateway to access
Support in Dataflow Gen2 on-premises stores like SQL Server with Fast Copy in
Dataflow Gen2.

July 2024 Fabric API for GraphQL API for GraphQL in Fabric starts billing on July 12,
(preview) pricing 2024, as part of your existing Power BI Premium or
Fabric Capacity. Use the Fabric Capacity Metrics app
to track capacity usage for API for GraphQL
operations, under the name "Query".

June 2024 Dataflow Gen2 certified New and updated Dataflow Gen2 connectors have
connector updates been released. For more information, see the June
2024 Certified connector updates .

June 2024 New data pipeline More connectors are now available for data pipeline.
connector updates For more information, see the June 2024 Fabric
update .

June 2024 Lakehouse schemas The Lakehouse schemas feature (preview)


feature introduces data pipeline support for reading the
schema info from Lakehouse tables and supports
writing data into tables under specified schemas.
Lakehouse schemas allow you to group your tables
together for better data discovery, access control, and
more.

June 2024 Move Data Across You can now move data among Lakehouses,
Workspace via Data warehouses, etc. across different workspaces . In
pipeline Modern Get Data Pipeline Modern Get Data, select a Fabric item from
Experience another workspace under Explorer on the left side of
the OneLake data hub.

June 2024 Create a new Warehouse You can now create a new Warehouse as a destination
as destination in Data in Data Pipeline , instead of only selecting an
pipeline existing one.

May 2024 Data Factory Don't miss any of the Data Factory in Fabric
Announcements at announcements, here's a recap of all new features in
Microsoft Build Recap Data Factory in Fabric from Build 2024 .

May 2024 New certified connectors The Power Query SDK and Power Query Connector
Certification process has introduced several new
Power Query connectors , including connectors for
Oracle database, MySQL, Oracle Cloud Storage, Azure
AI, Azure Files, Dynamics AX, Google Bigquery,
Snowflake ADBC, and more coming soon.

May 2024 API for GraphQL in The new API for GraphQL is a data access layer that
Microsoft Fabric (preview) allows us to query multiple data sources quickly and
efficiently in Fabric. For more information, see What is
Microsoft Fabric API for GraphQL?

May 2024 Power Query Dataflow The Power Query SDK is now generally available in
Gen2 SDK for VS Code GA Visual Studio Code! To get started with the Power
Query SDK in Visual Studio Code, install it from the
Visual Studio Code Marketplace .

May 2024 Refresh the Refresh The Refresh History details popup window now has a
History Dialog Refresh button .

May 2024 New and updated certified The Power Query SDK and Power Query Connector
connectors Certification process has introduced four new and
updated Power Query connectors .

May 2024 Data workflows in Data Data workflows (preview) in Data Factory , powered
Factory preview by Apache Airflow, offer seamless authoring,
scheduling, and monitoring experience for Python-
based data processes defined as Directed Acyclic
Graphs (DAGs). For more information, see Quickstart:
Create a Data workflow.

May 2024 - Trusted Workspace Access in Fabric Data Pipelines preview. Use the workspace identity to establish a secure and seamless connection between Fabric and your storage accounts. You can now create data pipelines to access your firewall-enabled Azure Data Lake Storage Gen2 (ADLS Gen2) accounts using Trusted workspace access (preview) in your Fabric Data Pipelines.

May 2024 Blob storage Event Azure Blob storage event triggers (preview) in Fabric
Triggers for Data Pipelines Data Factory Data Pipelines use Fabric Reflex alerts
preview and eventstreams to create event subscriptions to
your Azure storage accounts.

May 2024 Azure HDInsight activity The Azure HDInsight activity allows you to execute
for data pipelines Hive queries, invoke a MapReduce program, execute
Pig queries, execute a Spark program, or a Hadoop
Stream program.

May 2024 - Copy data assistant. Start using the Modern Get Data experience by selecting Copy data assistant in the Pipeline landing page or Use copy assistant in the Copy data drop down. You can easily connect to recently used Fabric items, and the assistant provides an intuitive way to read sources from sample data and new connections.

May 2024 Edit the Destination Table You can edit destination table column types when
Column Type when copying data for a new or autocreated destination
Copying Data table for many data stores. For more information, see
Configure Lakehouse in a copy activity.

April 2024 Spark job definition With the new Spark job definition activity , you'll be
activity able to run a Spark job definition in your pipeline.

April 2024 Fabric Warehouse in ADF You can now connect to your Fabric Warehouse from
copy activity an Azure Data Factory/Fabric Warehouse pipeline .
You can find this new connector when creating a new
source or sink destination in your copy activity, in the
Lookup activity, Stored Procedure activity, Script
activity, and Get Metadata activity.

April 2024 Edit column type to When moving data from any supported data sources
destination table support into Fabric Warehouse or other SQL data stores (SQL
added to Fabric Server, Azure SQL Database, Azure SQL Managed
Warehouse and other SQL Instance, or Azure Synapse Analytics) via data
data stores pipelines, users can now specify the data type for
each column .

April 2024 Performance The SFTP connector has been improved to offer
improvements when better performance when writing to SFTP as
writing data to SFTP destination.

April 2024 - Service Principal Name authentication kind support for On-Premises and virtual network data gateways. Azure Service Principals (SPN) are now supported for on-premises data gateways and virtual network data gateways. Learn how to use the service principal authentication kind in Azure Data Lake Storage, Dataverse, Azure SQL Database, the Web connector, and more.

April 2024 New and updated The Power Query SDK and Power Query Connector
Certified connectors Certification process has introduced 11 new and
updated custom Power Query connectors .

April 2024 New Expression Builder A new experience in the Script activity in Fabric
Experience Data Factory pipelines to make it even easier to build
expressions using the pipeline expression language.

April 2024 - Data Factory Increases Maximum Activities Per Pipeline to 80. We have doubled the limit on the number of activities you can define in a pipeline, from 40 to 80.

April 2024 REST APIs for Fabric Data The REST APIs for Fabric Data Factory Pipelines are
Factory pipelines preview now in preview. REST APIs for Data Factory pipelines
enable you to extend the built-in capability in Fabric
to create, read, update, delete, and list pipelines.

March Fast copy in Dataflows With Fast copy , you can ingest terabytes of data
2024 Gen2 with the easy experience of dataflows, but with the
scalable backend of Pipeline's Copy activity.

March Integrating On-Premises With the on-premises Data Gateway (preview),


2024 Data into Microsoft Fabric customers can connect to on-premises data sources
Using Data Pipelines in using dataflows and data pipelines with Data
Data Factory preview Factory . For more information, see How to access
on-premises data sources in Data Factory for
Microsoft Fabric.

March CI/CD for Fabric Data Git Integration and integration with built-in
2024 Pipelines preview Deployment Pipelines to Data Factory data pipelines
is now in preview. For more information, see Data
Factory Adds CI/CD to Fabric Data Pipelines .

March Browse Azure resources Learn how to browse and connect to all your Azure
2024 with Get Data resources with the 'browse Azure' functionality in Get
Data . You can browse Azure resources then connect
to Synapse, blob storage, or ADLS Gen2 resources
easily.

March Dataflow Gen2 Support VNet Data Gateway support for Dataflows Gen2 in
2024 for VNet Gateways now Fabric is now generally available. The VNet data
generally available gateway helps to connect from Fabric Dataflows
Gen2 to Azure data services within a VNet, without
the need of an on-premises data gateway.

March Privacy levels support in You can now set privacy levels for your connections in
2024 Dataflows your Dataflow Gen2. Privacy levels are critical to
configure correctly so that sensitive data is only
viewed by authorized users.

March Copy data to S3 Copying data to S3 Compatible is now available in


2024 Compatible via Fabric Data Data pipeline of Fabric Data Factory! You can use
Factory Data Pipeline Copy assistant and Copy activity in your Data pipeline
to finish this data movement.

February Dataflows Gen2 data New features for Dataflows Gen2 include destinations,
2024 destinations and managed managed settings, and advanced topics .
settings

February Copilot for Data Factory in Copilot for Data Factory in Microsoft Fabric is now
2024 Microsoft Fabric available in preview and included in the Dataflow
Gen2 experience. For more information, see Copilot
for Data Factory.

February Certified Connector The Power Query SDK enables you to create new
2024 updates connectors for both Power BI and Dataflow. New
certified Power Query connectors are available to
the list of Certified Connectors in Power Query.

February Data pipeline connector New connectors are available in your Data Factory
2024 updates data pipelines , including S3 compatible and Google
Cloud Storage data sources. For more information,
see Data pipeline connectors in Microsoft Fabric.

January Automate Fabric Data In Fabric Data Factory, there are many ways to query
2024 Warehouse Queries and data, retrieve data, and execute commands from your
Commands with Data warehouse using pipeline activities that can then be
Factory easily automated .

January Use Fabric Data Factory Guidance and good practices when building Fabric
2024 Data Pipelines to Spark Notebook workflows using Data Factory in
Orchestrate Notebook- Fabric with data pipelines.
based Workflows

December Read and Write to the You can now read and write data in the Microsoft
2023 Fabric Lakehouse using Fabric Lakehouse from ADF (Azure Data Factory).
Azure Data Factory (ADF) Using either Copy Activity or Mapping Data Flows,
you can read, write, transform, and process data using
ADF or Synapse Analytics, currently in preview.

December 2023 - Set activity state for easy pipeline debugging. In Fabric Data Factory data pipelines, you can now set an activity's state to inactive so that you can save your pipeline even with incomplete, invalid configurations. Think of it as "commenting out" part of your pipeline code.

December Connection editing in You can now edit your existing data connections while
2023 pipeline editor you're designing your pipeline without leaving the
pipeline editor! When setting your connection, select
Edit and a pop-up appears.

December Azure Databricks You can now create powerful data pipeline workflows
2023 Notebook executions in that include Notebook executions from your Azure
Fabric Data Factory Databricks clusters using Fabric Data Factory . Add a
Databricks activity to your pipeline, point to your
existing cluster, or request a new cluster, and Data
Factory will execute your Notebook code for you.

November Implement medallion An introduction to medallion lake architecture and


2023 lakehouse architecture in how you can implement a lakehouse in Microsoft
Microsoft Fabric Fabric.

November Dataflow Gen2 General The connectors for Lakehouse, Warehouse, and KQL
2023 availability of Fabric Database are now generally available . We
connectors encourage you to use these connectors when trying
to connect to data from any of these Fabric
workloads.

November Dataflow Gen2 Automatic To prevent unnecessary resources from being


2023 refresh cancellation consumed, there's a new mechanism that stops the
refresh of a Dataflow as soon as the results of the
refresh are known to have no impact . This is to
reduce consumption more proactively.

November 2023 - Dataflow Gen2 Error message propagation through gateway. We made diagnostics improvements to provide meaningful error messages when Dataflow refresh fails for those Dataflows running through the Enterprise Data Gateway.

November Dataflow Gen2 Support Column binding support is enabled for SAP HANA.
2023 for column binding for This optional parameter results in significantly
SAP HANA connector improved performance. For more information, see
Support for column binding for SAP HANA
connector .

November Dataflow Gen2 staging When using a Dataflow Gen2 in Fabric, the system will
2023 artifacts hidden automatically create a set of staging artifacts. Now,
these staging artifacts will be abstracted from the
Dataflow Gen2 experience and will be hidden from
the workspace list. No action is required by the user
and this change has no impact on existing Dataflows.

November Dataflow Gen2 Support VNet Data Gateway support for Dataflows Gen2 in
2023 for VNet Gateways Fabric is now in preview. The VNet data gateway
preview helps to connect from Fabric Dataflows Gen2 to Azure
data services within a VNet, without the need of an
on-premises data gateway.

November Cross workspace "Save as" You can now clone your data pipelines across
2023 workspaces by using the "Save as" button .

November Dynamic content flyout In the Email and Teams activities, you can now add
2023 integration with Email and dynamic content with ease. With this new pipeline
Teams activity expression integration, you'll now see a flyout menu
to help you select and build your message content
quickly without needing to learn the pipeline
expression language.

November Copy activity now The Copy activity in data pipelines now supports fault
2023 supports fault tolerance tolerance for Fabric Warehouse . Fault tolerance
for Fabric Data Warehouse allows you to handle certain errors without
connector interrupting data movement. By enabling fault
tolerance, you can continue to copy data while
skipping incompatible data like duplicated rows.

November MongoDB and MongoDB MongoDB and MongoDB Atlas connectors are now
2023 Atlas connectors available to use in your Data Factory data pipelines
as sources and destinations.

November Microsoft 365 connector The Microsoft 365 connector now supports ingesting
2023 now supports ingesting data into Lakehouse tables .
data into Lakehouse
(preview)

November Multi-task support for You can now open and edit data pipelines from
2023 editing pipelines in the different workspaces and navigate between them
designer using the multi-tasking capabilities in Fabric.

November String interpolation added You can now edit your data connections within your
2023 to pipeline return value data pipelines . Previously, a new tab would open
when connections needed editing. Now, you can
remain within your pipeline and seamlessly update
your connections.

October Category redesign of We've redesigned the way activities are categorized
2023 activities to make it easier for you to find the activities you're
looking for with new categories like Control flow,
Notifications, and more.

October 2023 - Copy runtime performance improvement. We've made improvements to the Copy runtime performance. According to our test results, with the improvements users can expect the duration of copying from parquet/csv files into a Lakehouse table to improve by ~25%-35%.

October Integer data type available We now support variables as integers! When creating
2023 for variables a new variable, you can now choose to set the
variable type to Integer, making it easier to use
arithmetic functions with your variables.

October Pipeline name now We've added a new system variable called Pipeline
2023 supported in System Name so that you can inspect and pass the name of
variables. your pipeline inside of the pipeline expression editor,
enabling a more powerful workflow in Fabric Data
Factory.

October 2023 - Support for Type editing in Copy activity Mappings. You can now edit column types when you land data into your Lakehouse tables. This makes it easier to customize the schema of your data in your destination. Simply navigate to the Mapping tab, import your schemas if you don't see any mappings, and use the dropdown list to make changes.

October New certified connector: Announcing the release of the new Emplifi Metrics
2023 Emplifi Metrics connector. The Power BI Connector is a layer between
Emplifi Public API and Power BI itself. For more
information, see Emplifi Public API documentation .

October SAP HANA (Connector The update enhances the SAP HANA connector with
2023 Update) the capability to consume HANA Calculation Views
deployed in SAP Datasphere by taking into account
SAP Datasphere's additional security concepts.

October Set Activity State to Activity State is now available in Fabric Data Factory
2023 "Comment Out" Part of data pipelines , giving you the ability to comment
Pipeline out part of your pipeline without deleting the
definition.

August Staging labels The concept of staging data was introduced in


2023 Dataflows Gen2 for Microsoft Fabric and now you
have the ability to define what queries within your
Dataflow should use the staging mechanisms or not.

August Secure input/output for We've added advanced settings for the Set Variable
2023 logs activity called Secure input and Secure output. When
you enable secure input or output, you can hide
sensitive information from being captured in logs.

August 2023 - Pipeline run status added to Output panel. We've recently added Pipeline status so that developers can easily see the status of the pipeline run. You can now view your Pipeline run status from the Output panel.

August Data pipelines FTP The FTP connector is now available to use in your
2023 connector Data Factory data pipelines in Microsoft Fabric. Look
for it in the New connection menu.

August Maximum number of The new maximum number of entities that can be
2023 entities in a Dataflow part of a Dataflow has been raised to 50.

August Manage connections The Manage Connections option now allows you to
2023 feature view the linked connections to your dataflow, unlink a
connection, or edit connection credentials and
gateway.

August Power BI Lakehouses An update to the Lakehouses connector in the August


2023 connector version of the Power BI Desktop and Gateway
includes significant performance improvements.

July 2023 New modern data An improved experience aims to expedite the process
connectivity and discovery of discovering data in Dataflow, Dataflow Gen2, and
experience in Dataflows Datamart .

May 2023 Introducing Data Factory Data Factory enables you to develop enterprise-scale
in Microsoft Fabric data integration solutions with next-generation
dataflows and data pipelines .

Data Factory in Microsoft Fabric samples and guidance


Month Feature Learn more

July 2024 Connect to your Azure Learn how to connect to your Azure resources
Resources from Fabric with automatically with the modern get data experience
the Data Pipeline Modern of Data Pipelines .
Get Data Experience

July 2024 Fabric Data Pipelines – This blog provides a tutorial on the ability to
Advanced Scheduling schedule a Pipeline on a specific day of the
Techniques (Part 2: Run a month , including both the start of the month
Pipeline on a Specific Day) along with the last day of the month.

June 2024 A Data Factory Pipeline The ultimate Data Factory Pipeline Mind Map
Navigator mind map helps you navigate Data Factory pipelines on your
Data Factory journey to build a successful Data
Integration project.

May 2024 Semantic model refresh Learn how to use the much-requested Semantic
activity model refresh activity in Data pipelines and how
you can now create a complete end-to-end
solution that spans the entire pipeline lifecycle.

February Fabric Data Pipelines – This blog series covers Advanced Scheduling
2024 Advanced Scheduling techniques in Microsoft Fabric Data Pipelines .
Techniques

December 2023 - Read data from Delta Lake tables with the DeltaLake.Table M function. The DeltaLake.Table is a new function in Power Query's M language for reading data from Delta Lake tables. This function is now available in Power Query in Power BI Desktop and in Dataflows Gen1 and Gen2, and replaces the need to use a community-developed solution.

October Microsoft Fabric Data You're invited to join our October webinar series ,
2023 Factory Webinar Series – where we'll show you how to use Data Factory to
October 2023 transform and orchestrate your data in various
scenarios.

September Notify Outlook and Teams Learn how to send notifications to both Teams
2023 channel/group from a channels/groups and Outlook emails .
Microsoft Fabric pipeline

September Microsoft Fabric Data Join our Data Factory webinar series where we'll
2023 Factory Webinar Series – show you how to use Data Factory to transform and
September 2023 orchestrate your data in various scenarios.

August Metadata Driven Pipelines An overview of a metadata-driven pipeline in


2023 for Microsoft Fabric – Part 2, Microsoft Fabric that follows the medallion
Data Warehouse Style architecture with Data Warehouse serving as the
Gold layer .

August Metadata Driven Pipelines An overview of a Metadata driven pipeline in


2023 for Microsoft Fabric Microsoft Fabric that follows the medallion
architecture (Bronze, Silver, Gold).

August 2023 - Using Data pipelines for copying data to/from KQL Databases and crafting workflows with the Lookup activity. Real-Time Intelligence's KQL DB is supported as both a destination and a source with data pipelines, allowing you to build and manage workflows with various extract, transform, and load (ETL) activities, leveraging the power and capabilities of KQL DBs.

August 2023 - Incrementally amass data. With Dataflows Gen2, which comes with support for data destinations, you can set up your own pattern to load new data incrementally, replace some old data, and keep your reports up to date with your source data.

August Data Pipeline Performance Learn how to account for pagination given the
2023 Improvement Part 3: current state of Fabric Data Pipelines in preview.
Gaining more than 50% This pipeline is performant when the number of
improvement for Historical paginated pages isn't too large. Read more at
Loads Gaining more than 50% improvement for Historical
Loads .

August Data Pipeline Performance Examples from this blog series include how to
2023 Improvements Part 2: merge two arrays into an array of JSON objects,
Creating an Array of JSONs and how to take a date range and create multiple
subranges then store these as an array of JSONs.
Read more at Creating an Array of JSONs .

July 2023 Data Pipeline Performance Part one of a series of blogs on moving data with
Improvements Part 1: How multiple Copy Activities moving smaller volumes in
to convert a time interval parallel: How to convert a time interval
(dd.hh:mm:ss) into seconds (dd.hh:mm:ss) into seconds .

July 2023 Construct a data analytics A blog covering data pipelines in Data Factory and
workflow with a Fabric Data the advantages you find by using pipelines to
Factory data pipeline orchestrate your Fabric data analytics projects and
activities .

July 2023 Data Pipelines Tutorial: In this blog, we will act in the persona of an AVEVA
Ingest files into a Lakehouse customer who needs to retrieve operations data
from a REST API with from AVEVA Data Hub into a Microsoft Fabric
pagination ft. AVEVA Data Lakehouse .
Hub

July 2023 Data Factory Spotlight: This blog spotlight covers the two primary high-
Dataflow Gen2 level features Data Factory implements: dataflows
and pipelines .

Fabric Data Engineering


This section summarizes archived new features and capabilities of data engineering,
including Data Factory in Microsoft Fabric.


Month Feature Learn more

August 2024 - MsSparkUtils upgrade to NotebookUtils. The library MsSparkUtils has been rebranded as NotebookUtils. While NotebookUtils is backward compatible with MsSparkUtils, new features will only be added to the NotebookUtils namespace. For more information, see NotebookUtils (former MSSparkUtils) for Fabric.
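For example, a minimal sketch that calls the new namespace directly; the path assumes a default lakehouse is attached to the notebook:

```python
import notebookutils  # preinstalled in Fabric notebooks

# Same surface as mssparkutils, under the new namespace. "Files/" resolves
# against the attached default lakehouse; adjust the path as needed.
for item in notebookutils.fs.ls("Files/"):
    print(item.name, item.size)
```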

August Import Notebook UX The Import Notebook feature user interface has been
2024 improvement enhanced - you can now effortlessly import
notebooks, reports, or paginated reports using the
unified entry in the workspace toolbar.

August Lifecycle of Apache The Lifecycle of Apache Spark runtimes in Fabric


2024 Spark runtimes in Fabric document details the release cadence and versioning for
the Azure-integrated platform based on Azure Spark.
For more information, see the Fabric runtime lifecycle
blog post .

July 2024 - MSSparkUtils API. The mssparkutils.runtime.context is a new API that provides context information for the current live session, including the notebook name, default lakehouse, workspace info, whether it's a pipeline run, and more. For more information, see Microsoft Spark Utilities (MSSparkUtils) for Fabric.
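As a quick illustration, a minimal sketch of reading the session context from a Fabric notebook; the exact fields exposed can vary by runtime version, so printing the whole object is the safest starting point:

```python
from notebookutils import mssparkutils  # preinstalled in Fabric notebooks

# Session metadata such as the notebook name, default lakehouse, workspace
# details, and whether the run was triggered by a pipeline. Printing the
# whole object shows which fields your runtime version exposes.
ctx = mssparkutils.runtime.context
print(ctx)
```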

July 2024 Environment Resources The new Environment Resources Folder is a shared
folder repository designed to streamline collaboration across
multiple notebooks.

June 2024 Fabric Spark connector The Fabric Spark connector for Synapse Data Warehouse
for Fabric Synapse Data (preview) enables a Spark developer or a data scientist
Warehouse in Spark to access and work on data from a warehouse or SQL
runtime (preview) analytics endpoint of the lakehouse (either from within
the same workspace or from across workspaces) with a
simplified Spark API.

June 2024 External data sharing REST APIs for OneLake external data sharing are now
public API preview available in preview. Users can now scale their data
sharing use cases by automating the creation of shares
with the public API.

June 2024 Capacity pools preview Capacity administrators can now create custom pools
(preview) based on their workload requirements,
providing granular control over compute resources.
Custom pools for Data Engineering and Data Science
can be set as Spark Pool options within Workspace
Spark Settings and environment items.

June 2024 Native Execution Engine The Native Execution Engine for Apache Spark on Fabric
for Apache Spark Data Engineering and Data Science for Fabric Runtime
1.2 is now in preview. For more information, see Native
execution engine for Fabric Spark.

June 2024 OneLake data access Following the release of OneLake data access roles in
roles API preview, new APIs are available for managing data
access roles . These APIs can be used to
programmatically manage granular data access for your
lakehouses.

May 2024 Runtime 1.3 (Apache The enhancements in Fabric Runtime 1.3 include the
Spark 3.5, Delta Lake incorporation of Delta Lake 3.1, compatibility with
3.1, R 4.3.3, Python 3.11) Python 3.11, support for Starter Pools, integration with
(preview) Environment, and library management capabilities.
Additionally, Fabric Runtime now enriches the data
science experience by supporting the R language and
integrating Copilot.

May 2024 Spark Run Series The Spark Monitoring Run Series Analysis features
Analysis and Autotune allow you to analyze the run duration trend and
feature preview performance comparison for Pipeline Spark activity
recurring run instances and repetitive Spark run
activities, from the same Notebook or Spark Job
Definition.

May 2024 OneLake shortcuts to Connect to on-premises data sources with a Fabric on-
on-premises and premises data gateway on a machine in your
network-restricted data environment, with networking visibility of your S3
sources (preview) compatible, Amazon S3, or Google Cloud Storage data
source. Then, you create your shortcut and select that
gateway. For more information, see Create shortcuts to
on-premises data.

May 2024 Comment @tagging in Notebook now supports the ability to tag others in
Notebook comments , just like the familiar functionality of using
Office products.

May 2024 Notebook ribbon New features in the Fabric notebook ribbon including
upgrades the Session connect control and Data Wrangler button
on the Home tab, High concurrency sessions, new View
session information control including the session
timeout.

May 2024 Data Engineering: The Environment in Fabric is now generally available. The
Environment GA Environment is a centralized item that allows you to
configure all the required settings for running a Spark
job in one place. At GA, we added support for Git,
deployment pipelines, REST APIs, resource folders, and
sharing.

May 2024 Public API for REST API support for Fabric Data Engineering/Science
Workspace Data workspace settings allows users to create/manage
Engineering/Science their Spark compute, select the default runtime/default
environment, enable or disable high concurrency mode,
or ML autologging.

April 2024 Fabric Spark Optimistic Fabric Spark Optimistic Job Admission reduces the
Job Admission frequency of throttling errors (HTTP 430: Spark Capacity
Limit Exceeded Response) and improves the job
admission experience for our customers, especially
during peak usage hours.

April 2024 Single Node support for The Single Node support for starter pools feature lets
starter pools you set your starter pool to max one node and get
super-fast session start times for your Spark sessions.

April 2024 Container Image for To simplify the development process, we have released a
Synapse VS Code container image for Synapse VS Code that contains all
the necessary dependencies for the extension.

April 2024 Git integration with Git integration with Spark Job definitions allows you
Spark Job definition to check in the changes of your Spark Job Definitions
into a Git repository, which will include the source code
of the Spark jobs and other item properties.

April 2024 - New Revamped Object Explorer experience in the notebook. The new Object Explorer experience improves the flexibility and discoverability of data sources in the explorer, and improves the discoverability of Resource folders.

April 2024 - %Run your scripts in Notebook. Now you can use the %run magic command to run your Python scripts and SQL scripts in the Notebook resources folder, just like the Jupyter notebook %run command.

April 2024 OneLake shortcuts to OneLake shortcuts to S3-compatible data sources are
S3-compatible data now in preview . Create an Amazon S3 compatible
sources preview shortcut to connect to your existing data through a
single unified name space without having to copy or
move data.

April 2024 OneLake shortcuts to OneLake shortcuts to Google Cloud Storage are now in
Google Cloud Storage preview . Create a Google Cloud Storage shortcut to
preview connect to your existing data through a single unified
name space without having to copy or move data.

April 2024 OneLake data access OneLake data access roles for lakehouse are in
roles preview . Role permissions and user/group
assignments can be easily updated through a new folder
security user interface.

March 2024 - New validation enhancement for "Load to table". The new validation enhancement to the "Load to table" feature helps mitigate validation issues and makes your data loading experience smoother and faster.

March Queuing for Notebook Now with Job Queueing for Notebook Jobs , jobs that
2024 Jobs are triggered by pipelines or job scheduler will be added
to a queue and will be retried automatically when the
capacity frees up. For more information, see Job
queueing in Microsoft Fabric Spark.

March Autotune Query Tuning The Autotune Query Tuning feature for Apache Spark
2024 feature for Apache is now available. Autotune leverages historical data from
Spark your Spark SQL queries and machine learning algorithms
to automatically fine-tune your configurations, ensuring
faster execution times and enhanced efficiency.

March OneLake File Explorer: With our latest release v1.0.11.0 of file explorer , we're
2024 Editing via Excel excited to announce that you can now update your files
directly using Excel , mirroring the user-friendly
experience available in OneDrive.

February Trusted workspace Trusted workspace access (preview) enables secure


2024 access (preview) for and seamless access to ADLS Gen2 storage accounts
OneLake Shortcuts from OneLake shortcuts in Fabric . For more
information, see Trusted workspace access (preview).

February Reduce egress costs Learn how OneLake shortcuts to S3 now support
2024 with S3 shortcuts in caching , which can greatly reduce egress costs. Use
OneLake the new Enable Cache for S3 Shortcuts setting with an
S3 shortcut.

February OneLake Shortcuts API New REST APIs for OneLake Shortcuts allow
2024 programmatic creation and management of shortcuts,
currently in preview. You can now programmatically
create, read, and delete OneLake shortcuts. For example,
see Use OneLake shortcuts REST APIs.

February 2024 - Browse code snippet. The new Browse code snippet notebook feature allows you to easily access and insert commonly used code snippets, with multiple supported languages.

February Configure session Notebooks now support configuring session timeout


2024 timeout for the current live session. It can help you avoid wasting
resources or losing context due to timeout. You can
specify the maximum duration of your spark sessions,
from minutes to hours, and also get alerts before the
session expires and extend it.

February Fabric notebook status The new Fabric Notebook status bar has three
2024 bar upgrade persisted info buttons: session status, save status, and
cell selection status. Plus, context features include info
on the git connection state, a shortcut to extend session
timeout, and a failed cell navigator.

January Microsoft Fabric Copilot Copilot for Data Science and Data Engineering is now
2024 for Data Science and available worldwide. What can Copilot for Data Science
Data Engineering and Data Engineering do for you?

January Newest version of With the newest version of OneLake file explorer
2024 OneLake File Explorer (v1.0.11.0) we bring a few updates to enhance your
includes Excel experience with OneLake, including Excel Integration .
Integration

December 2023 - %%configure: personalize your Spark session in Notebook. Now you can personalize your Spark session with the magic command %%configure, in both interactive notebook and pipeline notebook activities.
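A minimal sketch of such a cell, typically run as the first cell of a notebook; the session properties and the default lakehouse name below are illustrative assumptions, not required values:

```python
%%configure
{
    "driverMemory": "28g",
    "driverCores": 4,
    "executorMemory": "28g",
    "executorCores": 4,
    "conf": {
        "spark.sql.shuffle.partitions": "200"
    },
    "defaultLakehouse": {
        "name": "MyLakehouse"
    }
}
```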

December 2023 - Rich dataframe preview in Notebook. The display() function has been updated on Fabric Notebook, now named the Rich dataframe preview. Now when you use display() to preview your dataframe, you can easily specify the range, view the dataframe summary and column statistics, check invalid values or missing values, and preview the long cell.
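For example (the tiny DataFrame below is illustrative; display() and the spark session are built into Fabric notebooks):

```python
# spark is the prebuilt SparkSession in a Fabric notebook.
df = spark.createDataFrame(
    [("Contoso", 1200.0), ("Fabrikam", None)],
    ["customer", "revenue"],
)

# Opens the rich preview: choose a row range, view the summary and column
# statistics, and check invalid or missing values.
display(df)
```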

December Working with OneLake If you want to use an application that directly integrates
2023 using Azure Storage with Windows File Explorer, check out OneLake file
Explorer explorer . However, if you're accustomed to using
Azure Storage Explorer for your data management
tasks , you can continue to harness its functionalities
with OneLake and some of its key benefits.

November Accessibility support for To provide a more inclusive and user-friendly interaction,
2023 Lakehouse we have implemented improvements so far to support
accessibility in the Lakehouse , including screen reader
compatibility, responsive design text reflow, keyboard
navigation, alternative text for images, and form fields
and labels.

November Enhanced multitasking We've introduced new capabilities to enhance the multi-
2023 experience in tasking experience in Lakehouse , including
Lakehouse multitasking during running operations, nonblocking
reloading, and clearer notifications.

November Upgraded DataGrid An upgraded DataGrid for the Lakehouse table


2023 capabilities in preview now features sorting, filtering, and resizing of
Lakehouse columns.

November SQL analytics endpoint You can now retry the SQL analytics endpoint
2023 re-provisioning provisioning directly within the Lakehouse . This means
that if your initial provisioning attempt fails, you have
the option to try again without the need to create an
entirely new Lakehouse.

November Microsoft Fabric The Microsoft Fabric Runtime 1.2 is a significant


2023 Runtime 1.2 advancement in our data processing capabilities.
Microsoft Fabric Runtime 1.2 includes Apache Spark
3.4.1, Mariner 2.0 as the operating system, Java 11, Scala
2.12.17, Python 3.10, Delta Lake 2.4, and R 4.2.2,
ensuring you have the most cutting-edge tools at your
disposal. In addition, this release comes bundled with
default packages, encompassing a complete Anaconda
installation and essential libraries for Java/Scala, Python,
and R, simplifying your workflow.

November Multiple Runtimes With the introduction of Runtime 1.2, Fabric supports
2023 Support multiple runtimes , offering users the flexibility to
seamlessly switch between them, minimizing the risk of
incompatibilities or disruptions. When changing
runtimes, all system-created items within the workspace,
including Lakehouses, SJDs, and Notebooks, will operate
using the newly selected workspace-level runtime
version starting from the next Spark Session.

November 2023 - Delta as the default table format in the new Runtime 1.2. The default Spark session parameter spark.sql.sources.default is now delta. All tables created using Spark SQL, PySpark, Scala Spark, and Spark R will be created as Delta tables by default whenever the table type is omitted.
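For example, in a Runtime 1.2 notebook a plain CREATE TABLE statement now produces a Delta table; the table and column names below are illustrative:

```python
# spark is the prebuilt SparkSession in a Fabric notebook.
print(spark.conf.get("spark.sql.sources.default"))  # expected: delta

# No USING DELTA clause is needed; the session default table format applies.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_summary (
        region STRING,
        total  DOUBLE
    )
""")
```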

November Intelligent Cache By default, the newly revamped and optimized


2023 Intelligent Cache feature is enabled in Fabric Spark. The
intelligent cache works seamlessly behind the scenes
and caches data to help speed-up the execution of
Spark jobs in Microsoft Fabric as it reads from your
OneLake or ADLS Gen2 storage via shortcuts.

November Monitoring Hub for The latest enhancements in the monitoring hub are
2023 Spark enhancements designed to provide a comprehensive and detailed view
of Spark and Lakehouse activities , including executor
allocations, runtime version for a Spark application, a
related items link in the detail page.

November Monitoring for Users can now view the progress and status of
2023 Lakehouse operations Lakehouse maintenance jobs and table load activities.

November Spark application Responding to customers' requests for monitoring Spark


2023 resource Usage Analysis resource usage metrics for performance tuning and
optimization, we're excited to introduce the Spark
resource usage analysis feature , now available in
preview. This newly released feature enables users to
monitor allocated executors, running executors, and idle
executors, alongside Spark executions.

November REST API support for REST Public APIs for Spark Job Definition are now
2023 Spark Job Definition available, making it easy for users to manage and
preview manipulate SJD items .

November REST API support for As a key requirement for workload integration, REST
2023 Lakehouse, Load to Public APIs for Lakehouse are now available. The
tables and table Lakehouse REST Public APIs makes it easy for users to
maintenance manage and manipulate Lakehouse items
programmatically.

November Lakehouse support for The Lakehouse now integrates with the lifecycle
2023 git integration and management capabilities in Microsoft Fabric ,
deployment pipelines providing a standardized collaboration between all
(preview) development team members throughout the product's
life. Lifecycle management facilitates an effective
product versioning and release process by continuously
delivering features and bug fixes into multiple
environments.

November 2023 - Embed a Power BI report in Notebook. We're thrilled to announce that the powerbiclient Python package is now natively supported in Fabric notebooks. This means you can easily embed and interact with Power BI reports in your notebooks with just a few lines of code. See the documentation to learn more about how to use the powerbiclient package to embed a Power BI component.
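A minimal sketch, assuming you already have the workspace (group) ID and report ID of an existing report; inside a Fabric notebook authentication is handled for you, while outside Fabric you would pass an auth object:

```python
from powerbiclient import Report

# Placeholder IDs; copy them from the report's URL or settings.
workspace_id = "00000000-0000-0000-0000-000000000000"
report_id = "11111111-1111-1111-1111-111111111111"

# The report renders as an interactive widget when it is the last expression in a cell.
report = Report(group_id=workspace_id, report_id=report_id)
report
```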

November 2023 - Mssparkutils new API: reference run multiple notebooks in parallel. A new runMultiple API in mssparkutils called mssparkutils.notebook.runMultiple() allows you to run multiple notebooks in parallel, or with a predefined topological structure. For more information, see Notebook utilities.
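For example, a minimal sketch that runs two notebooks in parallel; NotebookA and NotebookB are placeholder names for notebooks in the same workspace:

```python
from notebookutils import mssparkutils  # preinstalled in Fabric notebooks

# Run both notebooks concurrently and collect their exit values.
results = mssparkutils.notebook.runMultiple(["NotebookA", "NotebookB"])
print(results)

# The built-in help also describes the DAG form for expressing dependencies:
mssparkutils.notebook.help()
```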

November Notebook resources We now support uploading the .jar files in the Notebook
2023 .JAR file support Resources explorer . You can add your own compiled
libs, use drag & drop to generate a code snippet to
install them in the session, and load the libraries in code
conveniently.

November Notebook Git Fabric notebooks now offer Git integration for source
2023 integration preview control using Azure DevOps . It allows users to easily
control the notebook code versions and manage the git
branches by leveraging the Fabric Git functions and
Azure DevOps.

November Notebook in Now you can also use notebooks to deploy your code
2023 Deployment Pipeline across different environments , such as development,
Preview test, and production. You can also use deployment rules
to customize the behavior of your notebooks when
they're deployed, such as changing the default
Lakehouse of a Notebook. Get started with deployment
pipelines, and Notebook shows up in the deployment
content automatically.

November Notebook REST APIs With REST Public APIs for the Notebook items, data
2023 Preview engineers/data scientists can automate their pipelines
and establish CI/CD conveniently and efficiently. The
notebook Restful Public API can make it easy for users
to manage and manipulate Fabric notebook items and
integrate notebook with other tools and systems.

November Environment preview We're thrilled to announce preview of the Environment


2023 in Fabric. The Environment is a centralized item that
allows you to configure all the required settings for
running a Spark job in one place.

November 2023 - Synapse VS Code extension in vscode.dev preview. With support for the Synapse VS Code extension on vscode.dev, users can now seamlessly edit and execute Fabric Notebooks without ever leaving their browser window. Additionally, all the native pro-developer features of VS Code are now accessible to end-users in this environment.

October Create multiple Creating multiple OneLake shortcuts just got easier.
2023 OneLake shortcuts at Rather than creating shortcuts one at a time, you can
once now browse to your desired location and select multiple
targets at once. All your selected targets then get
created as new shortcuts in a single operation .

October Delta-RS introduces The OneLake team worked with the Delta-RS community
2023 native support for to help introduce support for recognizing OneLake URLs
OneLake in both Delta-RS and the Rust Object Store .

September 2023 - Import notebook to your Workspace. The new "Import Notebook" entry on the Workspace -> New menu lets you easily import new Fabric Notebook items in the target workspace. You can upload one or more files, including .ipynb, .py, .sql, .scala, and .r file formats.

September Notebook file system The Synapse VS Code extension now supports notebook
2023 support in Synapse VS File System for Data Engineering and Data Science in
Code extension Microsoft Fabric. The Synapse VS Code extension
empowers users to develop their notebook items
directly within the Visual Studio Code environment.

September 2023 - Notebook sharing execute-only mode. We now support checking the "Run" operation separately when sharing a notebook; if you select only the "Run" operation, the recipient sees an "Execution-only" notebook.

September Notebook save conflict We now support viewing and comparing the differences
2023 resolution between two versions of the same notebook when
there are saving conflicts.

September 2023 - Mssparkutils new API for fast data copy. We now support a new method in mssparkutils, Mssparkutils.fs.fastcp(), that makes moving or copying large volumes of data much faster. You can use mssparkutils.fs.help("fastcp") to check the detailed usage.
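A minimal sketch with placeholder OneLake paths; replace <workspace> and <lakehouse> with your own names:

```python
from notebookutils import mssparkutils  # preinstalled in Fabric notebooks

# Copy a folder recursively between two OneLake locations (placeholder paths).
src = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Files/raw"
dst = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Files/staged"
mssparkutils.fs.fastcp(src, dst, True)  # third argument requests a recursive copy

# Full signature and options:
mssparkutils.fs.help("fastcp")
```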

September Notebook resources We now support uploading .whl files in the Notebook
2023 .whl file support Resources explorer .

August Introducing High High concurrency mode allows you to run notebooks
2023 Concurrency Mode in simultaneously on the same cluster without
Notebooks for Data compromising performance or security when paying for
Engineering and Data a single session. High concurrency mode offers several
Science workloads in benefits for Fabric Spark users.
Microsoft Fabric

August Service principal Azure service principal has been added as an


2023 support to connect to authentication type for a set of data sources that can
data in Dataflow, be used in Dataset, Dataflow, Dataflow Gen2 and
Datamart, Dataset and Datamart.
Dataflow Gen 2

August Announcing XMLA Direct Lake datasets now support XMLA-Write


2023 Write support for Direct operations. Now you can use your favorite BI Pro tools
Lake datasets and scripts to create and manage Direct Lake datasets
using XMLA endpoints .

July 2023 Lakehouse Sharing and Share a lakehouse and manage permissions so that
Access Permission users can access lakehouse data through the Data Hub,
Management the SQL analytics endpoint, and the default semantic
model.

June 2023 Virtualize your existing Connect data silos without moving or copying data with
data into OneLake with OneLake, which allows you to create special folders
shortcuts called shortcuts that point to other storage locations .

May 2023 Introducing Data With Fabric Data Engineering, one of the core
Engineering in experiences of Microsoft Fabric, data engineers feel right
Microsoft Fabric at home, able to leverage the power of Apache Spark to
transform their data at scale and build out a robust
lakehouse architecture .

Fabric Data Engineering samples and guidance


Month Feature Learn more

August Build a custom Sparklens JAR In this blog, learn how to build the sparklens
2024 JAR for Spark 3.X , which can be used in
Microsoft Fabric.

July 2024 Create a shortcut to a VPC- Learn how to create a shortcut to a VPC-
protected S3 bucket protected S3 bucket , using the on-
premises data gateway and AWS Virtual
Private Cloud (VPC).

July 2024 Move Your Data Across The new modern get data experience of data
Workspaces Using Modern Get pipeline now supports copying to Lakehouse
Data of Fabric Data Pipeline and warehouse across different workspaces
with an intuitive experience.

June 2024 Demystifying Data Ingestion in Learn about a batch data Ingestion
Fabric: Fundamental Components framework based on experience working
for Ingesting Data into a Fabric with different customers while building a
Lakehouse using Fabric Data lakehouse in Fabric.
Pipelines

June 2024 Boost performance and save costs Learn how the Fast Copy feature helps to
with Fast Copy in Dataflows Gen2 enhance the performance and cost-efficiency
of your Dataflows Gen2 .

May 2024 - Copy Data from Lakehouse in another Workspace using Data pipeline. Learn how to copy data between Lakehouses across different workspaces via Data pipeline.

May 2024 - Profiling Microsoft Fabric Spark Notebooks with Sparklens. In this blog, you will learn how to leverage Sparklens, an open-source Spark profiling tool, to profile Microsoft Fabric Spark Notebooks and improve the performance of your Spark code.

March Bridging Fabric Lakehouses: Delta Learn how to use the Delta Change Data
2024 Change Data Feed for Seamless Feed to facilitate seamless data
ETL synchronization across different lakehouses
in your medallion architecture .

January Use Fabric Data Factory Data Guidance and good practices when building
2024 Pipelines to Orchestrate Fabric Spark Notebook workflows using Data
Notebook-based Workflows Factory in Fabric with data pipelines.

November Fabric Changing the game: Using A step-by-step guide to use your own Python
2023 your own library with Microsoft library in the Lakehouse . It's quite simple to
Fabric create your own library with Python and even
simpler to reuse it on Fabric.

August Fabric changing the game: Learn more about logging your workload into
2023 Logging your workload using OneLake using notebooks , using the
Notebooks OneLake API Path inside the notebook.

Fabric Data Science


This section summarizes archived improvements and features for the Data Science
experience in Microsoft Fabric.


Month Feature Learn more

August 2024 - Apply MLflow tags on ML experiment runs and model versions. You can now apply MLflow tags directly on ML experiment runs and ML model versions from the user interface.
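Tags can be set from the UI as described; for reference, roughly the same thing done code-first with the standard MLflow API (the experiment name, tags, and metric are illustrative):

```python
import mlflow

mlflow.set_experiment("sales-forecast")  # hypothetical experiment name

with mlflow.start_run():
    # Tag the run so it can be filtered in the experiment item later.
    mlflow.set_tag("team", "data-science")
    mlflow.set_tag("stage", "baseline")
    mlflow.log_metric("rmse", 42.0)
```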

August Track related ML You can now use an enhancement to the Monitoring
2024 Experiment runs in your Hub to track related ML experiment runs within
Spark Application Spark applications. You can also integrate
Experiment items into the Monitoring Hub .

August 2024 - Use PREDICT with Fabric AutoML models. You can now move from training with AutoML to making predictions by using the built-in Fabric PREDICT UI and code-first APIs for batch predictions. For more information, see Machine learning model scoring with PREDICT in Microsoft Fabric.
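A minimal sketch of the code-first path, assuming a registered model named sales-forecast and an existing Spark DataFrame df with the listed feature columns (all names are placeholders):

```python
from synapse.ml.predict import MLFlowTransformer

# Wrap a registered ML model for batch scoring; model and column names are placeholders.
model = MLFlowTransformer(
    inputCols=["feature_1", "feature_2"],
    outputCol="prediction",
    modelName="sales-forecast",
    modelVersion=1,
)

scored_df = model.transform(df)  # df is an existing Spark DataFrame of features
display(scored_df)
```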

August Data Science AI skill You can now build your own generative AI
2024 (preview) experiences over your data in Fabric with the AI skill
(preview)! You can build question and answering AI
systems over your Lakehouses and Warehouses. For
more information, see Introducing AI Skills in
Microsoft Fabric: Now in Preview . To get started,
try AI skill example with the AdventureWorks dataset
(preview).

July 2024 - Semantic link preinstalled. Semantic Link is now included in the default runtime. If you use Fabric with Spark 3.4 or later, semantic link is already in the default runtime, and you don't need to install it.
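For instance, a small sketch using the sempy package that ships with the runtime; the semantic model and table names are placeholders:

```python
import sempy.fabric as fabric

# List the semantic models (datasets) visible to you in the current workspace.
print(fabric.list_datasets())

# Read one table from a semantic model into a FabricDataFrame (placeholder names).
customers = fabric.read_table("Sales Model", "Customers")
print(customers.head())
```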

July 2024 Semantic Link Labs Semantic Link Labs is a library of helpful python
solutions for use in Microsoft Fabric notebooks.
Semantic Link Labs helps Power BI developers and
admins easily automate previously complicated
tasks, as well as make semantic model optimization
tooling more easily accessible within the Fabric
ecosystem. For Semantic Link Labs documentation,
see semantic-link-labs documentation . For more
information and to see it in action, read the
Semantic Link Labs announcement blog .

June 2024 Capacity pools preview Capacity administrators can now create custom
pools (preview) based on their workload
requirements, providing granular control over
compute resources. Custom pools for Data
Engineering and Data Science can be set as Spark
Pool options within Workspace Spark Settings and
environment items.

June 2024 Native Execution Engine for The Native Execution Engine for Apache Spark on
Apache Spark Fabric Data Engineering and Data Science for
Fabric Runtime 1.2 is now in preview. For more
information, see Native execution engine for Fabric
Spark.

June 2024 Demystifying Data Learn about a batch data Ingestion framework
Ingestion in Fabric: based on experience working with different
Fundamental Components customers while building a lakehouse in Fabric.
for Ingesting Data into a
Fabric Lakehouse using
Fabric Data Pipelines

June 2024 - Boost performance and save costs with Fast Copy in Dataflows Gen2. Learn how the Fast Copy feature helps to enhance the performance and cost-efficiency of your Dataflows Gen2.

May 2024 Public API for Workspace REST API support for Fabric Data
Data Engineering/Science Engineering/Science workspace settings allows
users to create/manage their Spark compute, select
the default runtime/default environment, enable or
disable high concurrency mode, or ML autologging.

April 2024 - Semantic Link GA. Semantic Link is now generally available! The package comes with our default VHD, and you can now use Semantic Link in Fabric right away without any pip installation.

April 2024 Capacity level delegation Tenant admins can now enable AI and Copilot in
for AI and Copilot Fabric for the entire organization, certain security
groups, or for a specific Capacity.

March EU customers can use AI Since mid-March EU customers can use AI and
2024 and Copilot without cross- Copilot without turning on the cross-geo setting ,
geo setting and their AI and Copilot requests will be processed
within EUDB.

March 2024 - Code-First Hyperparameter Tuning preview. FLAML is now integrated for hyperparameter tuning, currently a preview feature. Fabric's flaml.tune feature streamlines this process, offering a cost-effective and efficient approach to hyperparameter tuning.
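A minimal sketch of flaml.tune on a toy objective; the search space and evaluation function are illustrative and not Fabric-specific:

```python
from flaml import tune

def evaluate(config):
    # Toy objective: minimize a simple function of two hyperparameters.
    score = (config["x"] - 3) ** 2 + config["y"]
    return {"score": score}

analysis = tune.run(
    evaluate,
    config={
        "x": tune.uniform(0, 10),
        "y": tune.uniform(0, 1),
    },
    metric="score",
    mode="min",
    num_samples=20,
)
print(analysis.best_config)
```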

March Code-First AutoML preview With the new AutoML feature , you can automate
2024 your machine learning workflow and get the best
results with less effort. AutoML, or Automated
Machine Learning, is a set of techniques and tools
that can automatically train and optimize machine
learning models for any given data and task type.

March Compare Nested Runs Parent and child runs in the Run List View for ML
2024 Experiments introduces a hierarchical structure,
allowing users to effortlessly view various parent and
child runs within a single view and seamlessly
interact with them to visually compare results.

March Support for Mandatory MIP ML Model and Experiment items in Fabric now offer
2024 Label Enforcement enhanced support for Microsoft Information
Protection (MIP) labels .

January Microsoft Fabric Copilot for Copilot for Data Science and Data Engineering is
2024 Data Science and Data now available worldwide. What can Copilot for Data
Engineering Science and Data Engineering do for you?

December Semantic Link update We're excited to announce the latest update of
2023 Semantic Link ! Apart from many improvements,
we also added many new features for our Power BI
engineering community that you can use from
Fabric notebooks to satisfy all your automation
needs.

December 2023 Prebuilt Azure AI services in Fabric preview The preview of prebuilt AI services in Fabric is an integration with Azure AI services, formerly known as Azure Cognitive Services. Prebuilt Azure AI services allow for easy enhancement of data with prebuilt AI models without any prerequisites. Currently, prebuilt AI services are in preview and include support for the Microsoft Azure OpenAI Service, Azure AI Language, and Azure AI Translator.

November Copilot in notebooks The Copilot in Fabric Data Science and Data
2023 preview Engineering notebooks is designed to accelerate
productivity, provide helpful answers and guidance,
and generate code for common tasks like data
exploration, data preparation, and machine learning.
You can interact and engage with the AI from either
the chat panel or even from within notebooks cells
using magic commands to get insights from data
faster. For more information, see Copilot in
notebooks .

November 2023 Custom Python Operations in Data Wrangler Data Wrangler, a notebook-based tool for exploratory data analysis, has always allowed users to browse and apply common data-cleaning operations, generating the corresponding code in real time. Now, in addition to generating code from the UI, users can also write their own code with custom operations in Data Wrangler.

November Data Wrangler for Spark Data Wrangler now supports Spark DataFrames in
2023 DataFrames preview preview. Until now, users have been able to explore
and transform pandas DataFrames using common
operations that can be converted to Python code in
real time. The new release allows users to edit Spark
DataFrames in addition to pandas DataFrames with
Data Wrangler .

November MLFlow Notebook Widget The MLflow inline authoring widget enables users to
2023 effortlessly track their experiment runs along with
metrics and parameters, all directly from within their
notebook .
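
Fabric experiment tracking is MLflow-based, so a run logged as in the minimal sketch below surfaces in the inline widget and the experiment views; the experiment name, parameter, and metric values are placeholders.

```python
# Minimal sketch of logging an experiment run with MLflow from a Fabric notebook,
# assuming mlflow is preinstalled; names and values below are placeholders.
import mlflow

mlflow.set_experiment("churn-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_depth", 5)      # parameters appear in the run details
    mlflow.log_metric("accuracy", 0.87)   # metrics appear in the run list and comparisons
```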

November New Model & Experiment New enhancements to our model and experiment
2023 Item Usability tracking features are based on valuable user
Improvements feedback. The new tree-control in the run details
view makes tracking easier by showing which run is
selected. We've enhanced the comparison feature,
allowing you to easily adjust the comparison pane
for a more user-friendly experience. Now you can
select the run name to see the Run Details view.

November Recent Experiment Runs It's now simpler for users to check out recent runs
2023 for an experiment directly from the workspace list
view . This update makes it easier to keep track of
recent activity, quickly jump to the related Spark
application, and apply filters based on the run status.

November 2023 Models renamed to ML Models Microsoft has renamed "Models" to "ML Models" to ensure clarity and avoid any confusion with other Fabric elements. For more information, see Machine learning experiments in Microsoft Fabric.

November 2023 SynapseML v1.0 SynapseML v1.0 is now released. SynapseML v1.0 makes it easy to build production-ready machine learning systems on Fabric and has been in use at Microsoft for over six years.

November 2023 Train Interpretable Explainable Boosting Machines with SynapseML We've introduced a scalable implementation of Explainable Boosting Machines (EBM) powered by Apache Spark in SynapseML. EBMs are a powerful machine learning technique that combines the accuracy of gradient boosting with a strong focus on model interpretability.

November Prebuilt AI models in We're excited to announce the preview for prebuilt
2023 Microsoft Fabric preview AI models in Fabric . Azure OpenAI Service , Text
Analytics , and Azure AI Translator are prebuilt
models available in Fabric, with support for both
RESTful API and SynapseML. You can also use the
OpenAI Python Library to access Azure OpenAI
service in Fabric.

November Reusing existing Spark We have added support for a new connection
2023 Session in sparklyr method called "synapse" in sparklyr , which
enables users to connect to an existing Spark
session. Additionally, we have contributed this
connection method to the OSS sparklyr project.
Users can now use both sparklyr and SparkR in the
same session and easily share data between them.

November REST API Support for ML REST APIs for ML Experiment and ML Model are
2023 Experiments and ML Models now available. These REST APIs for ML Experiments
and ML Models begin to empower users to create
and manage machine learning items
programmatically, a key requirement for pipeline
automation and workload integration.

October 2023 Semantic link (preview) Semantic Link is an innovative feature that seamlessly connects Power BI semantic models with Fabric Data Science. As the gold layer in a medallion architecture, Power BI semantic models contain the most refined and valuable data in your organization.

October Semantic link in Microsoft We're pleased to introduce the preview of semantic
2023 Fabric: Bridging BI and Data link , an innovative feature that seamlessly
Science connects Power BI semantic models with Fabric Data
Science.

October Get started with semantic Explore how semantic link seamlessly connects
2023 link (preview) Power BI semantic models with Fabric Data Science.
Learn more at Semantic link in Microsoft Fabric:
Bridging BI and Data Science .

You can also check out the semantic link sample notebooks that are now available in the fabric-samples GitHub repository. These notebooks showcase the use of semantic link's Python library, SemPy, in Microsoft Fabric.

August Harness the Power of Harness the potential of Microsoft Fabric and
2023 LangChain in Microsoft SynapseML LLM capabilities to effectively
Fabric for Advanced summarize and organize your own documents.
Document Summarization

July 2023 Unleashing the Power of In this blog post, we delve into the exciting
SynapseML and Microsoft functionalities and features of Microsoft Fabric and
Fabric: A Guide to Q&A on SynapseML to demonstrate how to leverage
PDF Documents Generative AI models or Large Language Models
(LLMs) to perform question and answer (Q&A) tasks
on any PDF document .

May 2023 Introducing Fabric Data With data science in Microsoft Fabric, you can utilize
Science the power of machine learning features to
seamlessly enrich data as part of your data and
analytics workflows .

Fabric Data Science samples and guidance



Month Feature Learn more

June 2024 Building Custom AI This guide walks you through implementing a RAG
Applications with (Retrieval Augmented Generation) system in
Microsoft Fabric: Microsoft Fabric using Azure OpenAI and Azure AI
Implementing Retrieval Search .
Augmented Generation
for Enhanced Language
Models

March New AI Samples New AutoML sample, Model Tuning, and Semantic
2024 Link samples appear in the Quick Tutorial category
of the Data Science samples on Microsoft Fabric.

December Using Microsoft Fabric's A step-by-step RAG application through prompt flow
2023 Lakehouse Data and in Azure Machine Learning Service combined with
prompt flow in Azure Microsoft Fabric's Lakehouse data.
Machine Learning Service
to create RAG applications

November New data science happy We've updated the Data Science Happy Path tutorial
2023 path tutorial in Microsoft for Microsoft Fabric . This new comprehensive
Fabric tutorial demonstrates the entire data science
workflow , using a bank customer churn problem as
the context.

November New data science samples We've expanded our collection of data science
2023 samples to include new end-to-end R samples and
new quick tutorial samples for "Explaining Model
Outputs" and "Visualizing Model Behavior." .

November New data science The new Data Science sample on sales forecasting
2023 forecasting sample was developed in collaboration with Sonata
Software . This new sample encompasses the entire
data science workflow, spanning from data cleaning
to Power BI visualization. The notebook covers the
steps to develop, evaluate, and score a forecasting
model for superstore sales, harnessing the power of
the SARIMAX algorithm.

August New Machine failure and More samples have been added to the Fabric Data
2023 Customer churn samples Science Use a sample menu. To check these Data
Science samples, select Fabric Data Science, then Use
a sample.

August Use Semantic Kernel with Learn how Fabric allows data scientists to use
2023 Lakehouse in Microsoft Semantic Kernel with Lakehouse in Microsoft Fabric .
Fabric
Fabric Data Warehouse
This section summarizes archived improvements and features for Data Warehouse in
Microsoft Fabric.


Month Feature Learn more

August Mirroring integration You can now use the Modern Get Data experience to
2024 with modern get data choose from all the available mirrored databases in
experience OneLake.

August T-SQL DDL support in You can now run DDL operations on an Azure SQL
2024 Azure SQL Database Database mirrored database such as Drop Table,
mirrored database Rename Table, and Rename Column.

August Delta Lake log You can now pause and resume the publishing of Delta
2024 publishing pause and Lake Logs for Warehouses . For more information, see
resume Delta Lake logs in Warehouse in Microsoft Fabric.

August Managing V-Order You can now manage V-Order behavior at the
2024 behavior of Fabric warehouse level . For more information, see
Warehouses Understand V-Order for Microsoft Fabric Warehouse.

August 2024 TRUNCATE T-SQL support The TRUNCATE T-SQL command is now supported in Warehouse tables.

July 2024 ALTER TABLE and We've added T-SQL ALTER TABLE support for some
nullable column support operations, as well as nullable column support to
tables in the warehouse. For more information, see
ALTER TABLE (Transact-SQL).

July 2024 Warehouse queries with Warehouse in Microsoft Fabric offers the capability to
time travel (GA) query the historical data as it existed in the past at the
statement level, now generally available. The ability to
query data from a specific timestamp is known in the
data warehousing industry as time travel.

July 2024 Restore warehouse You can now create restore points and perform a
experience in the Fabric restore in-place of a warehouse item. For more
portal information, see Seamless Data Recovery through
Warehouse restoration .

July 2024 Warehouse source control (preview) Using Git integration and/or deployment pipelines with your warehouse, you can manage development and deployment of versioned warehouse objects. You can use the SQL Database Projects extension available inside of Azure Data Studio and Visual Studio Code. For more information on warehouse source control, see CI/CD with Warehouses in Microsoft Fabric.

July 2024 Time travel and clone The retention period for time travel queries and clone
table retention window table is now 30 days.
expanded

June 2024 Restore in place portal You can now create user-created restore points in your
experience warehouse via the Fabric portal. For more information,
see Restore in-place of a warehouse in Microsoft
Fabric.

June 2024 Fabric Spark connector The Fabric Spark connector for Fabric Data Warehouse
for Fabric Data (preview) enables a Spark developer or a data scientist
Warehouse in Spark to access and work on data from Fabric DW and SQL
runtime (preview) analytics endpoint of the lakehouse (either from within
the same workspace or from across workspaces) with a
simplified Spark API.
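
As a hedged sketch of that simplified Spark API: reads go through a `synapsesql` method on the Spark reader, taking a three-part name. The warehouse, schema, and table names below are placeholders; check the connector documentation for any import or configuration your runtime version requires.

```python
# Minimal sketch, assuming a Fabric notebook where the Spark connector for Data
# Warehouse is available and `spark` is the notebook-provided SparkSession.
# "MyWarehouse.dbo.Sales" is a placeholder three-part name.
df = spark.read.synapsesql("MyWarehouse.dbo.Sales")
df.show(10)
```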

May 2024 Monitor Warehouse You can Monitor Fabric Data Warehouse activity with a
tools variety of tools, including: Billing and utilization
reporting in Fabric Data Warehouse, monitor
connections, sessions, and requests using DMVs, Query
insights, and now Query activity. For more information,
read Query activity: A one-stop view to monitor your
running and completed T-SQL queries .

May 2024 Copilot for Data Copilot for Data Warehouse (preview) is now available
Warehouse in limited preview, offering the Copilot chat pane, quick
actions, and code completions.

May 2024 Warehouse queries with Warehouse in Microsoft Fabric offers the capability to
time travel (preview) query the historical data as it existed in the past at the
statement level, currently in preview. The ability to
query data from a specific timestamp is known in the
data warehousing industry as time travel.

May 2024 COPY INTO COPY INTO now supports Microsoft Entra ID
enhancements authentication and access to firewall protected storage
via the trusted workspace functionality. For more
information, see COPY INTO enhancements and
COPY INTO (Transact-SQL).

April 2024 Fabric Warehouse in ADF You can now connect to your Fabric Warehouse from
copy activity an Azure Data Factory/Synapse pipeline . You can find
this new connector when creating a new source or sink
destination in your copy activity, in the Lookup activity,
Stored Procedure activity, Script activity, and Get
Metadata activity.

April 2024 Git integration Git integration for the Warehouse allows you to
check in the changes of your Warehouse to an Azure
DevOps Git repository as a SQL database project.

April 2024 Partition elimination Partition elimination is a performance improvement for tables with a large number of files. The SQL
analytics endpoint of a Lakehouse uses partition
elimination to read data from only those partitions that
are relevant to the query. Recent improvements
boosted performance even more when queries are
aimed at a few partitions in a table that has many files.

March Mirroring in Microsoft With Mirroring in Fabric, you can easily bring your
2024 Fabric preview databases into OneLake in Microsoft Fabric , enabling
seamless zero-ETL, near real-time insights on your data
– and unlocking warehousing, BI, AI, and more. For
more information, see What is Mirroring in Fabric?.

March Cold cache performance Fabric stores data in Delta tables and when the data is
2024 improvements not cached, it needs to transcode data from parquet
file format structures to in-memory structures for query
processing. Recent cold cache performance
improvements further optimize transcoding and we
observed up to 9% faster queries in our tests when
data is not previously cached.

March Extract and publish a The SQL Database Projects extension creates a SQL
2024 SQL database project project ( .sqlproj ) file, a local representation of SQL
directly through the DW objects that comprise the schema for a single
editor database, such as tables, stored procedures, or
functions. You can now extract and publish a SQL
database project directly through the DW editor .

March Change owner of The new Takeover API allows you to change the
2024 Warehouse item warehouse owner from the current owner to a new
owner, which can be an SPN or an Organizational
Account.

March Clone table RLS and CLS A cloned table now inherits the row-level security (RLS)
2024 and dynamic data masking from the source of the
clone table.

February 2024 Experience performance improvements Recent connectivity and performance enhancements include an improved experience for creating warehouses, T-SQL execution, automatic metadata discovery, and error messaging.

December Automatic Log Automatic Log Checkpointing is one of the ways that
2023 Checkpointing for Fabric we help your Data Warehouse to provide you with
Warehouse great performance and best of all, it involves no
additional work from you!

December Restore points and You can now create restore points and perform an in-
2023 restore in place place restore of a warehouse to a past point in time.
The restore points and restore in place features are
currently in preview. Restore in-place is an essential
part of data warehouse recovery, which allows you to restore the data warehouse to a prior known reliable
state by replacing or over-writing the existing data
warehouse from which the restore point was created.

November TRIM T-SQL support You can now use the TRIM command to remove spaces
2023 or specific characters from strings by using the
keywords LEADING, TRAILING or BOTH in TRIM
(Transact-SQL).

November 2023 GENERATE_SERIES T-SQL support Generates a series of numbers within a given interval with GENERATE_SERIES (Transact-SQL). The interval and the step between series values are defined by the user.

November SSD metadata caching File and rowgroup metadata are now also cached with
2023 in-memory and SSD cache, further improving
performance.

November PARSER 2.0 CSV file parser version 2.0 for COPY INTO builds an
2023 improvements for CSV innovation from Microsoft Research's Data Platform
ingestion and Analytics group to make CSV file ingestion blazing
fast on Fabric Warehouse. For more information, see
COPY INTO (Transact-SQL).

November Fast compute resource All query executions in Fabric Warehouse are now
2023 assignment enabled powered by the new technology recently deployed as
part of the Global Resource Governance component
that assigns compute resources in milliseconds.

November REST API support for With the Warehouse public APIs, SQL developers can
2023 Warehouse now automate their pipelines and establish CI/CD
conveniently and efficiently. The Warehouse REST
Public APIs makes it easy for users to manage and
manipulate Fabric Warehouse items.

November 2023 SQLPackage support for Fabric Warehouse SqlPackage now supports Fabric Warehouse. SqlPackage is a command-line utility that automates several database development tasks by exposing some of the public Data-Tier Application Framework (DacFx) APIs. The SqlPackage command line tool allows you to specify these actions along with action-specific parameters and properties.

November Power BI semantic Microsoft has renamed the Power BI dataset content
2023 models type to semantic model. This applies to Microsoft Fabric
semantic models as well. For more information, see
New name for Power BI datasets.

November SQL analytics endpoint Microsoft has renamed the SQL endpoint of a
2023 Lakehouse to the SQL analytics endpoint of a
Lakehouse.

November Dynamic data masking Dynamic Data Masking (DDM) for Fabric Warehouse
2023 and the SQL analytics endpoint in the Lakehouse. For
more information and samples, see Dynamic data
masking in Fabric data warehousing and How to
implement dynamic data masking in Fabric Data
Warehouse.

November Clone tables with time You can now use table clones to create a clone of a
2023 travel table based on data up to seven calendar days in the
past .

November User experience updates Several user experiences in Warehouse have landed.
2023 For more information, see Fabric Warehouse user
experience updates .

November 2023 Automatic data compaction Automatic data compaction rewrites many smaller parquet files into a few larger parquet files, which improves the performance of reading the table. Data compaction is one of the ways we help your Warehouse provide you with great performance, with no effort on your part.

October Support for sp_rename Support for the T-SQL sp_rename syntax is now
2023 available for both Warehouse and SQL analytics
endpoint. For more information, see Fabric Warehouse
support for sp_rename .

October 2023 Query insights The query insights feature is a scalable, sustainable, and extendable solution to enhance the SQL analytics experience. With historic query data, aggregated insights, and access to actual query text, you can analyze and tune your query performance.

October Full DML to Delta Lake Fabric Warehouse now publishes all Inserts, Updates,
2023 Logs and Deletes for each table to their Delta Lake Log in
OneLake.

October 2023 V-Order write optimization V-Order optimizes parquet files to enable lightning-fast reads under the Microsoft Fabric compute engines such as Power BI, SQL, Spark, and others. Warehouse queries in general benefit from faster read times with this optimization, while still ensuring the parquet files remain 100% compliant with the open-source specification. Starting this month, all data ingested into Fabric Warehouses uses V-Order optimization.

October 2023 Burstable capacity Burstable capacity allows workloads to use more resources to achieve better performance. Burstable capacity is finite, with a limit applied to the backend compute resources to greatly reduce the risk of throttling. For more information, see Warehouse SKU Guardrails for Burstable Capacity.

October Throttling and A new article details the throttling and smoothing
2023 smoothing in Fabric Data behavior in Fabric Data Warehouse, where almost all
Warehouse activity is classified as background to take advantage of
the 24-hr smoothing window before throttling takes
effect. Learn more about how to observe utilization in
Fabric Data Warehouse.

September Default semantic model The default semantic model no longer automatically
2023 improvements adds new objects . This can be enabled in the
Warehouse item settings.

September 2023 Deployment pipelines now support warehouses Deployment pipelines enable creators to develop and test content in the service before it reaches the users. Supported content types include reports, paginated reports, dashboards, semantic models, dataflows, and now warehouses. Learn how to deploy content programmatically using REST APIs and DevOps.

September SQL Projects support for Microsoft Fabric Data Warehouse is now supported in
2023 Warehouse in Microsoft the SQL Database Projects extension available inside of
Fabric Azure Data Studio and Visual Studio Code .

September 2023 Announcing: Column-level & Row-level security for Fabric Warehouse & SQL analytics endpoint Column-level and row-level security in Fabric Warehouse and SQL analytics endpoint are now in preview, behaving similarly to the same features in SQL Server.

September Usage reporting Utilization and billing reporting is available for Fabric
2023 data warehousing in the Microsoft Fabric Capacity
Metrics app. For more information, read about Utilization and billing reporting in Fabric data warehousing.

August SSD Caching enabled Local SSD caching stores frequently accessed data on
2023 local disks in highly optimized format, significantly
reducing I/O latency. This benefits you immediately,
with no action required or configuration necessary.

July 2023 Sharing Any Admin or Member within a workspace can share a
Warehouse with another recipient within your
organization. You can also grant these permissions
using the "Manage permissions" experience.

July 2023 Table clone A zero-copy clone creates a replica of the table by
copying the metadata, while referencing the same data
files in OneLake. This avoids the need to store multiple
copies of data, thereby saving on storage costs when
you clone a table in Microsoft Fabric. For more
information, see tutorials to Clone a table with T-SQL
or Clone tables in the Fabric portal.

May 2023 Introducing Fabric Data Fabric Data Warehouse is the next generation of data
Warehouse in Microsoft warehousing in Microsoft Fabric that is the first
Fabric transactional data warehouse to natively support an
open data format, Delta-Parquet.

Fabric Data Warehouse samples and guidance


Month Feature Learn more

August Mirroring SQL Server While SQL Server isn't currently supported for Fabric
2024 database to Fabric mirrored databases, learn how to extend Fabric
mirroring to an on-premises SQL Server database as a
source, using a combination of SQL Server
Transactional replication and Fabric Mirroring .

July 2024 Microsoft Entra For sample connection strings and more information
authentication for Fabric on using Microsoft Entra as an alternative to SQL
Data Warehouse Authentication, see Microsoft Entra authentication as
an alternative to SQL authentication.

June 2024 Mastering Enterprise T-SQL ETL/ELT: A Guide with Data Warehouse and Fabric Pipelines Learn about foundational elements of an enterprise-scale ETL/ELT framework using Fabric Pipelines and a Data Warehouse for performing our transformations in T-SQL. Additionally, we will examine a dynamic SQL script designed to incrementally process tables throughout your enterprise.

April 2024 Fabric Change the Game: A step-by-step guide to mirror your Azure SQL
Azure SQL Database Database into Microsoft Fabric.
mirror into Microsoft
Fabric

February Mapping Azure Synapse Read for guidance on mapping Data Warehouse Units
2024 dedicated SQL pools to (DWU) from Azure Synapse Analytics dedicated SQL
Fabric data warehouse pool to an approximate equivalent number of Fabric
compute Capacity Units (CU) .

January Automate Fabric Data In Fabric Data Factory, there are many ways to query
2024 Warehouse Queries and data, retrieve data, and execute commands from your
Commands with Data warehouse using pipeline activities that can then be
Factory easily automated .

November Migrate from Azure A detailed guide with a migration runbook is available
2023 Synapse dedicated SQL for migrations from Azure Synapse Data Warehouse
pools dedicated SQL pools into Microsoft Fabric.

August Efficient Data Partitioning A proposed method for data partitioning using Fabric
2023 with Microsoft Fabric: notebooks . Data partitioning is a data management
Best Practices and technique used to divide a large dataset into smaller,
Implementation Guide more manageable subsets called partitions or shards.

May 2023 Microsoft Fabric - How can a SQL user or DBA connect This blog reviews how to connect to a SQL analytics endpoint of the Lakehouse or the Warehouse through the Tabular Data Stream, or TDS, endpoint, familiar to all modern web applications that interact with a SQL Server endpoint.
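
As a hedged example of such a TDS connection from Python, the sketch below uses pyodbc with Microsoft Entra interactive sign-in; the server name is a placeholder for the SQL connection string shown on the warehouse or SQL analytics endpoint item, and the table is illustrative.

```python
# Minimal sketch, assuming ODBC Driver 18 for SQL Server is installed and the
# signed-in user has access; server, database, and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<sql-connection-string-from-the-item>;"
    "Database=MyWarehouse;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 * FROM dbo.Sales;")
for row in cursor.fetchall():
    print(row)
```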

Real-Time Intelligence in Microsoft Fabric


This section summarizes archived improvements and features for Real-Time Intelligence
in Microsoft Fabric.


Month Feature Learn more

August 2024 Fabric Real-Time hub Teaching Bubbles New teaching bubbles provide a step-by-step guide through its major functionalities. These interactive guides allow you to seamlessly navigate each tab of the Real-Time hub. For more information, see Fabric Real-Time hub Teaching Bubble.

August KQL Queryset REST The new Fabric Queryset REST APIs allow you to
2024 API support create/update/delete KQL Querysets in Fabric, and
programmatically manage them without manual
intervention. For more information, see KQL Queryset REST
API support .

July 2024 Update records in a The .update command is now generally available. Learn
KQL Database more about how to Update records in a Kusto database .
preview

July 2024 Real-Time Real-time Dashboards now support ultra-low refresh rates
Dashboards 1s and of just 1 or 10 seconds. For more information, see Create a
10s refresh rate Real-Time Dashboard (preview).

June 2024 Graph Semantics in Graph Semantics in Eventhouse allows users to model their
Eventhouse data as graphs and perform advanced graph queries and
analytics using the Kusto Query Language (KQL).

June 2024 Set alerts on Real-time Dashboards with Fabric Activator triggers Real-Time Dashboard visuals now support alerts, to extend monitoring support with Activator. With integration with Activator, you'll receive timely alerts as your key metrics change in real-time.

June 2024 OneLake availability As part of the One logical copy promise, we're excited to
of Eventhouse in announce that OneLake availability of Eventhouse in Delta
Delta Lake format Lake format is Generally Available .
GA

June 2024 Real-Time Real-Time Dashboards interact with data dynamically and in
Dashboards real time. Real-Time Dashboards natively visualize data
stored in Eventhouses. Real-time Dashboards support ultra-
low refresh rates of just 1 or 10 seconds. For more
information, see Visualize and Explore Data with Real-Time
Dashboards .

May 2024 Eventhouse GA Eventhouse is a new, dynamic workspace hosting multiple KQL databases, generally available as part of Fabric Real-
Time Intelligence. An Eventhouse offers a robust solution for
managing and analyzing substantial volumes of real-time
data. Get started with a guide to Create and manage an
Eventhouse.

May 2024 Copilot for Real- Copilot for Real-Time Intelligence is now in preview ! For
Time Intelligence those who are already fans of KQL or newcomers exploring
its potential, Copilot can help you get started, and navigate
data with ease.

May 2024 Automating Fabric Learn how to interact with data pipelines, notebooks, spark
items with Real- jobs in a more event-driven way .
Time Intelligence

May 2024 Real-Time Intelligence This month includes the announcement of Real-Time Intelligence, the next evolution of Real-Time Analytics and Activator.

May 2024 Real-Time At Build 2024, a dozen new features and capabilities were
Intelligence new announced for Real-Time Intelligence, organized into
preview features categories of Ingest & Process , Analyze & Transform ,
and Visualize & Act .

May 2024 Real-Time hub Real-Time hub is single, tenant-wide, unified, logical place
preview for streaming data-in-motion. It enables you to easily
discover, ingest, manage, and consume data-in-motion from
a wide variety of sources. It lists all the streams and Kusto
Query Language (KQL) tables that you can directly act on. It
also gives you an easy way to ingest streaming data from
Microsoft products and Fabric events. For more information,
see Real-Time hub overview.

May 2024 Get Events preview The Get Events experience allows users to connect to a
wide range of sources directly from Real-Time hub,
Eventstreams, Eventhouse, and Activator. Using Get Events,
bring streaming data from Microsoft sources directly into
Fabric with a first-class experience.

May 2024 Enhanced With enhanced Eventstream capabilities , you can now
Eventstream stream data not only from Microsoft sources but also from
capabilities preview other platforms like Google Cloud, Amazon Kinesis,
Database change data capture streams, and more, using our
new messaging connectors.

May 2024 Eventstreams - The preview of enhanced capabilities supports many new
enhanced sources - Google Cloud Pub/Sub, Amazon Kinesis Data
capabilities preview Streams, Confluent Cloud Kafka, Azure SQL Database
Change Data Capture (CDC), PostgreSQL Database CDC,
MySQL Database CDC, Azure Cosmos DB CDC, Azure Blob
Storage events, and Fabric workspace item events, and a
new Stream destination. It supports two distinct modes, Edit
mode and Live view, in the visual designer. It also supports
routing based on content in data streams. For more
information, see What is Fabric eventstreams.

April 2024 Kusto Cache The preview of Kusto Cache consumption means that you
consumption will start seeing billable consumption of the OneLake Cache
preview Data Stored meter from the KQL Database and Eventhouse
items. For more information, see KQL Database consumption.

April 2024 Pause and Resume The Pause and Resume feature enables you to pause data
in Eventstream streaming from various sources and destinations within
preview Eventstream. You can then resume data streaming
seamlessly from the paused time or a customized time,
ensuring no data loss.

March 2024 New Expressions "Changes by", "Increases by", and "Decreases by" In Activator, when setting conditions on a trigger, we've added syntax that allows you to detect when there's been a change in your data by absolute number or percentage. See New Expressions "Changes by", "Increases by", and "Decreases by".

March Fabric Real-Time Users of Azure SQL can use the Database Watcher
2024 Intelligence monitoring solution with Microsoft Fabric . Database
Integrates with Watcher for Azure SQL (preview) provides advanced
Newly Announced monitoring capabilities, and can integrate with Eventhouse
Database Watcher KQL database.
for Azure SQL

March Update records in a The .update command is now available, as a preview feature.
2024 KQL Database Learn more about how to Update records in a Kusto
preview database .

March Query Azure Data Connecting to and using data in Azure Data explorer cluster
2024 Explorer data from from Fabric's KQL Queryset is now available.
Queryset

February 2024 Eventhouse Overview: Handling Real-Time Data with Microsoft Fabric Eventhouse (preview) is a dynamic workspace hosting multiple KQL databases, part of Fabric Real-Time Intelligence. An Eventhouse offers a robust solution for managing and analyzing substantial volumes of real-time data. Get started with a guide to Create and manage an Eventhouse.

February KQL DB shortcut to KQL DB now supports reading Delta tables with column
2024 Delta Lake tables name mappings. The column mapping feature allows Delta
support name- table columns and the underlying Parquet file columns to
based column use different names. This enables Delta schema evolution
mapping operations such on a Delta table without the need to rewrite
the underlying Parquet files and allows users to name Delta
table columns by using characters that aren't allowed by
Parquet.

February 2024 KQL DB shortcut to Delta Lake tables support deletion vectors KQL DB can now read delta tables with deletion vectors, resolving the current table state by applying the deletions noted by deletion vectors to the most recent table version.

February Get Data in KQL DB The Process event before ingestion in Eventstream option
2024 now supports enables you to process the data before it's ingested into the
processing events destination table. By selecting this option, the get data
before ingestion via process seamlessly continues in Eventstream, with the
Eventstream destination table and data source details automatically
populated.

February KQL DB now Using the open-source Flink connector, you can send data
2024 supports data from Flink to your table. Using Azure Data Explorer and
ingestion using Apache Flink, you can build fast and scalable applications
Apache Flink targeting data driven scenarios.

February Route data from You can now use the Kusto Splunk Universal Connector to
2024 Splunk Universal send data from Splunk Universal Forwarder to a table in your
Forwarder to KQL KQL DB.
DB using Kusto
Splunk Universal
Connector

December Calculating distinct New Fabric KQL database dcount and dcountif functions use
2023 counts in Power BI a special algorithm to return an estimate of distinct counts ,
running reports on even in extremely large datasets. The new functions
KQL Databases count_distinct and count_distinctif calculate exact distinct
counts.

December Create a Notebook You can now just create a new Notebook from KQL DB
2023 with pre-configured editor with a preconfigured connection to your KQL DB
connection to your and explore the data using PySpark. This option creates a
KQL DB PySpark Notebook with a ready-to execute code cell to read
data from the selected KQL DB.

December KQL Database The new Kusto command .show database schema violations
2023 schema validation was designed to validate the current state of your database
schema and find inconsistencies. You can use .show
database schema violations for a spot check on your
database or in CI/CD automation .

December Enabling Data Data availability of KQL Database in OneLake means you
2023 Availability of KQL can enjoy the best of both worlds. You can query the data
Database in with high performance and low latency in their KQL
OneLake database, and you can query the same data in Delta Parquet
via Power BI Direct Lake mode, Warehouse, Lakehouse,
Notebooks, and more.

December 2023 Fabric Change the Game: Real-time Intelligence Real-Time Intelligence is a formidable tool, diminishing complexity and streamlining data integration processes. Microsoft Fabric allows you to build Real-Time streaming analytics with eventstream or Spark Stream.

November Announcing Delta You can now enable availability of KQL Database in Delta
2023 Lake support in Lake format . Delta Lake is the unified data lake table
Real-Time Analytics format chosen to achieve seamless data access across all
KQL Database compute engines in Microsoft Fabric.

November 2023 Real-Time Analytics in Microsoft Fabric general availability (GA) Announcing the general availability of Real-Time Analytics in Microsoft Fabric! Real-Time Analytics offers countless features all aimed at making your data analysis more efficient and effective.

November Delta Parquet As part of the one logical copy promise, we're excited to
2023 support in KQL announce that data in KQL Database can now be made
Database available in OneLake in delta parquet format . You can now
access this Delta table by creating a OneLake shortcut from
Lakehouse, Warehouse, or directly via Power BI Direct Lake
mode.

November Open Source Several open-source connectors for real-time analytics are
2023 Connectors for KQL now supported to enable users to ingest data from
Database various sources and process it using KQL DB.

November REST API Support We're excited to announce the launch of REST Public APIs for
2023 for KQL Database KQL DB. The Public REST APIs of KQL DB enables users to
manage and automate their flows programmatically.

November 2023 Eventstream now Generally Available Eventstream is now generally available, adding enhancements aimed at taking your data processing experience to the next level.

November Eventstream Data Now, you can transform your data streams into real time
2023 Transformation for within Eventstream before they're sent to your KQL
KQL Database Database . When you create a KQL Database destination in
the eventstream, you can set the ingestion mode to "Event
processing before ingestion" and add event processing
logics such as filtering and aggregation to transform your
data streams.

November Splunk add-on Microsoft Fabric add-on for Splunk allows users to ingest
2023 preview logs from Splunk platform into a Fabric KQL DB using the
Kusto python SDK.
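
For a rough idea of what ingestion through the Kusto Python SDK looks like (independent of the Splunk add-on itself), here is a minimal sketch; the ingest URI, database, table, and file are placeholders taken from your KQL database details page.

```python
# Minimal sketch using the azure-kusto-ingest package, assuming it is installed and
# the placeholders below are replaced with real values for your KQL database.
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import QueuedIngestClient, IngestionProperties

kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://<your-ingest-uri>"   # placeholder: the ingestion URI of the KQL database
)
client = QueuedIngestClient(kcsb)

props = IngestionProperties(
    database="MyKqlDatabase",
    table="Logs",
    data_format=DataFormat.CSV,   # match the format of the file being queued
)
client.ingest_from_file("logs.csv", ingestion_properties=props)
```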

November Get Data from If you're working on other Fabric items and are looking to
2023 Eventstream ingest data from Eventstream, our new "Get Data from
anywhere in Fabric Eventstream" feature simplifies the process, you can Get
data from Eventstream while you're working with a KQL
database and Lakehouse.

November Two ingestion We've introduced two distinct ingestion modes for your
2023 modes for Lakehouse Destination: Rows per file and Duration .
Lakehouse
Destination

November Optimize Tables The table optimization shortcut is now available inside
2023 Before Ingesting Eventstream Lakehouse destination to compact numerous
Data to Lakehouse small streaming files generated on a Lakehouse table. Table
optimization shortcut works by opening a Notebook with
Spark job, which would compact small streaming files in the
destination Lakehouse table.

November 2023 Create a Cloud Connection within Eventstream We've simplified the process of establishing a cloud connection to your Azure services within Eventstream. When adding an Azure resource, such as Azure IoT Hub and Azure Event Hubs, to your Eventstream, you can now create the cloud connection and enter your Azure resource credentials right within Eventstream. This enhancement significantly improves the process of adding new data sources to your Eventstream, saving time and effort.

November Get Data in Real- A new Get Data experience simplifies the data ingestion
2023 Time Analytics: A process in your KQL database.
New and Improved
Experience

October 2023 Expanded Custom App Connections New custom app connections provide more flexibility when it comes to bringing your data streams into Eventstream.

October 2023 Enhanced UX on Event Processor New UX improvements on the no-code Event Processor provide an intuitive experience, allowing you to effortlessly add or delete operations on the canvas.

October Eventstream Kafka The Custom App feature has new endpoints in sources and
2023 Endpoints and destinations , including sample Java code for your
Sample Code convenience. Simply add it to your application, and you're all
set to stream your real-time event to Eventstream.
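
The sample code referenced above is Java; for readers working in Python, a comparable sketch using the azure-eventhub package is shown below, assuming you use the Event Hubs-compatible endpoint of the custom app. The connection string, entity name, and event payload are placeholders.

```python
# Minimal sketch, assuming the azure-eventhub package is installed and the
# connection string / entity name are copied from the Eventstream custom app endpoint.
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<custom-app-connection-string>",
    eventhub_name="<custom-app-entity-name>",
)
batch = producer.create_batch()
batch.add(EventData('{"deviceId": "sensor-01", "temperature": 21.5}'))  # one sample event
producer.send_batch(batch)
producer.close()
```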

October 2023 Event processing editor UX improvements Recent UX improvements introduce a full-screen mode, providing a more spacious workspace for designing your data processing workflows. The insertion and deletion of data stream operations have been made more intuitive, making it easier to drag and drop and connect your data transformations.

October KQL Database Auto Users do not need to worry about how many resources are
2023 scale algorithm needed to support their workloads in a KQL database. KQL
improvements Database has a sophisticated in-built, multi-dimensional,
auto scaling algorithm. We recently implemented some
optimizations that make some time series analysis more
efficient .

October Understanding Read more about how a KQL database is billed in the SaaS
2023 Fabric KQL DB world of Microsoft Fabric.
Capacity

September OneLake shortcut Now you can create a shortcut from KQL DB to delta tables
2023 to delta tables from in OneLake, allowing in-place data queries. Now you query
KQL DB delta tables in your Lakehouse or Warehouse directly from
KQL DB.

September Model and Query Kusto Query Language (KQL) now allows you to model and
2023 data as graphs query data as graphs. This feature is currently in preview.
using KQL Learn more at Introduction to graph semantics in KQL and
Graph operators and functions .

September Easily connect to Power BI desktop released two new ways to easily connect
2023 KQL Database from to a KQL database, in the Get Data dialogue and in the
Power BI desktop OneLake data hub menus.

September Eventstream now AMQP stands for Advanced Message Queuing Protocol, a
2023 supports AMQP protocol that supports a wide range of messaging patterns.
format connection In Eventstream, you can now create a Custom App source or
string for data destination and select AMQP format connection string for
ingestion ingesting data into Fabric or consuming data from Fabric.

September 2023 Eventstream supports data ingestion from Azure IoT Hub Azure IoT Hub is a cloud-hosted solution that provides secure communication channels for sending and receiving data from IoT devices. In Eventstream, you can now stream your Azure IoT Hub data into Fabric and perform real-time processing before storing it in a Kusto Database or Lakehouse.

September 2023 Real-Time Data Sharing in Microsoft Fabric A database shortcut in Real-Time Intelligence is an embedded reference within a KQL database to a source database in Azure Data Explorer (ADX), allowing in-place data sharing. The behavior exhibited by the database shortcut is similar to that of an Azure Data Explorer follower database.

August Provisioning The KQL Database provisioning process has been optimized.
2023 optimization Now you can create a KQL Database within a few seconds.

August 2023 KQL Database support for inline Python Fabric KQL Database supports running Python code embedded in Kusto Query Language (KQL) using the python() plugin. The plugin is disabled by default. Before you start, enable the Python plugin in your KQL database.

July 2023 Microsoft Fabric Microsoft Fabric eventstreams are a high-throughput, low-
eventstreams: latency data ingestion and transformation service.
Generating Real-
time Insights with
Python, KQL, and
Power BI

July 2023 Stream Real-time Events to Microsoft Fabric with eventstreams from a custom application Eventstreams under Real-Time Intelligence are a centralized platform within Fabric, allowing you to capture, transform, and route real-time events to multiple destinations effortlessly, all through a user-friendly, no-code experience.

June 2023 Unveiling the Epic As part of the Kusto Detective Agency Season 2 , we're
Opportunity: A Fun excited to introduce an epic opportunity for all investigators
Game to Explore and data enthusiasts to learn about the new portfolio in a
the Real-Time fun and engaging way. Recruiting now at
Intelligence https://detective.kusto.io/ !

May 2023 What's New in Announcing the Fabric Real Time Analytics !
Kusto – Build 2023!

Real-Time Intelligence samples and guidance


Month Feature Learn more

August 2024 Advanced Time Series Anomaly Detector in Fabric Read an example using the time-series-anomaly-detector in Fabric to upload a stocks change table to Fabric, train the multivariate anomaly detection model in a Python notebook using the Spark engine, and predict anomalies by applying the trained model to new data using the Eventhouse (Kusto) engine.

August 2024 Acting on Real-Time data using custom actions with Activator Learn how to monitor and act on data using Activator, a no-code experience in Microsoft Fabric for taking action automatically when a condition, such as the package temperature, is detected in the data.

July 2024 Build real-time order notifications with Eventstream's CDC connector Read about a real-life example of how an online store used Eventstream's CDC connector from Azure SQL Database.

July 2024 Automating Real-Time Let's build a PowerShell script to automate the
Intelligence deployment of Eventhouse, KQL Database, Tables,
Eventhouse Functions, and Materialized Views into a workspace in
deployment using Microsoft Fabric.
PowerShell

June 2024 Power BI Admin portal Effective July 2024, the Power BI Admin portal Usage
Usage metrics metrics dashboard is removed . Comparable insights
dashboard retirement are now supported out-of-the-box through the Admin
monitoring workspace (preview). The Admin monitoring
workspace provides several Power BI reports and
semantic models, including the Feature Usage and
Adoption report which focuses on Fabric tenant
inventory and audit activity monitoring.

May 2024 Alerting and acting on Microsoft Fabric's new Real-Time hub and Activator
data from the Real- provide a no-code experience for automatically taking
Time hub actions when patterns or conditions are detected in
changing data and is embedded around the Real-Time
hub to make creating alerts always accessible.

May 2024 Using APIs with Fabric Learn how to create/update/delete items in Fabric with
Real-Time Intelligence: the KQL APIs , accessing the data plane of a resource.
Eventhouse and KQL
DB

May 2024 Connect and stream The Get events experience streamlines the process of
events with the Get browsing and searching for sources and streams .
events experience

May 2024 Acquiring Real-Time Learn how to connect to new sources in Eventstream.
Data from New Start by creating an eventstream and choosing
Sources with Enhanced "Enhanced Capabilities (preview)" .
Eventstream

March Browse Azure Learn how to browse and connect to all your Azure
2024 resources with Get resources with the 'browse Azure' functionality in Get
Data Data . You can browse Azure resources then connect to
Synapse, blob storage, or ADLS Gen2 resources easily.

November Semantic Link: Data Great Expectations Open Source (GX OSS) is a popular
2023 validation using Great Python library that provides a framework for describing
Expectations and validating the acceptable state of data. With the
recent integration of Microsoft Fabric semantic link, GX
can now access semantic models, further enabling seamless collaboration between data scientists and business analysts.

November Explore Data Dive into a practical scenario using real-world bike-
2023 Transformation in sharing data and learn to compute the number of bikes
Eventstream for KQL rented every minute on each street, using Eventstream's
Database Integration powerful event processor, mastering real-time data
transformations, and effortlessly directing the processed
data to your KQL Database.

October 2023 From RabbitMQ to PowerBI reports with Microsoft Fabric Real-Time Analytics A walkthrough of an end-to-end scenario sending data from RabbitMQ to a KQL Database in Microsoft Fabric.

October Stream Azure IoT Hub A demo of using Fabric Eventstream to seamlessly ingest
2023 Data into Fabric and transform real-time data streams before they
Eventstream for Email reach various Fabric destinations such as Lakehouse, KQL
Alerting Database, and Reflex. Then, configure email alerts in
Reflex with Activator triggers.

September 2023 Real-Time Intelligence sample gallery Real-Time Intelligence now offers a comprehensive sample gallery with multiple datasets, allowing you to explore, learn, and get started quickly. Access the samples by selecting Use a sample from the Real-Time Intelligence experience home.

September Quick start: Sending Learn how to send data from Kafka to Real-Time
2023 data to Real-Time Intelligence in Fabric .
Intelligence in Fabric
from Apache Kafka
Ecosystems using Java

June 2023 From raw data to Learn about the integration between Azure Event Hubs
insights: How to ingest and your KQL database .
data from Azure Event
Hubs into a KQL
database

June 2023 From raw data to Learn about the integration between eventstreams and a
insights: How to ingest KQL database , both of which are a part of the Real-
data from Time Intelligence experience.
eventstreams into a
KQL database

June 2023 Discovering the best This blog covers different options for bringing data into a
ways to get data into a KQL database .
KQL database

June 2023 Get started with In this blog, we focus on the different ways of querying
exploring your data data in Real-Time Intelligence .
with KQL – a purpose-
built tool for petabyte
scale data analytics

May 2023 Ingest, transform, and You can now ingest, capture, transform and route real-
route real-time events time events to various destinations in Microsoft Fabric
with Microsoft Fabric with a no-code experience using Microsoft Fabric
eventstreams eventstreams.

Microsoft Fabric platform features


Archived news and feature announcements about the Microsoft Fabric platform
experience.


Month Feature Learn more

August OneLake data access Based on key feedback, we've updated data access
2024 role improvements roles with a user interface redesign. For more
information, see Get started with OneLake data access
roles (preview).

August 2024 Workspace filter improvement to support nested folders We have upgraded the filter experience to support filtering through the entire workspace or through a specific folder with all its nested folders.

August Announcing the Use Trusted workspace access and Managed Private
2024 availability of Trusted endpoints in Fabric with any F capacity and enjoy the
workspace access and benefits of secure and optimized data access and
Managed private connectivity.
endpoints in any Fabric
capacity

July 2024 SOC certification We are excited to announce that Microsoft Fabric, our
compliance all-in-one analytics solution for enterprises, is now
System and Organization Controls (SOC) 1 Type II, SOC 2
Type II, and SOC 3 compliant .

July 2024 Microsoft Fabric .NET We are excited to announce the very first release of the
SDK Microsoft Fabric .NET SDK ! For more information on
the REST API documentation, see Microsoft Fabric REST
API documentation.

May 2024 Microsoft Fabric Private Azure Private Link for Microsoft Fabric secures access to
Links GA your sensitive data in Microsoft Fabric by providing
network isolation and applying required controls on
your inbound network traffic. For more information, see
Announcing General Availability of Fabric Private Links .

May 2024 Trusted workspace Trusted workspace access in OneLake shortcuts is now
access GA generally available . You can now create data pipelines
to access your firewall-enabled Azure Data Lake Storage
Gen2 (ADLS Gen2) accounts using Trusted workspace
access (preview) in your Fabric Data Pipelines. Use the
workspace identity to establish a secure and seamless
connection between Fabric and your storage accounts .
Trusted workspace access also enables secure and
seamless access to ADLS Gen2 storage accounts from
OneLake shortcuts in Fabric .

May 2024 Fabric APIs Learn about using REST APIs in Fabric , including
walkthrough creating workspaces, adding permission, dropping,
creating, executing data pipelines, and how to
pause/resume Fabric activities using the management
API.

May 2024 Managed private Managed private endpoints for Microsoft Fabric allow
endpoints GA secure connections over managed virtual networks to
data sources that are behind a firewall or not accessible
from the public internet. For more information, see
Announcing General Availability of Fabric Private Links,
Trusted Workspace Access, and Managed Private
Endpoints .

May 2024 Fabric UX System The Fabric UX System represents a leap forward in
design consistency and extensibility for Microsoft Fabric.

May 2024 Microsoft Fabric Core Microsoft Fabric Core APIs are now generally available.
REST APIs The Fabric user APIs are a major enabler for both
enterprises and partners to use Microsoft Fabric as they
enable end-to-end fully automated interaction with the
service, enable integration of Microsoft Fabric into
external web applications, and generally enable
customers and partners to scale their solutions more
easily.
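
As a hedged illustration of calling these APIs from Python, the sketch below lists workspaces; it assumes the requests and azure-identity packages are available and that the signed-in identity is allowed to authenticate interactively.

```python
# Minimal sketch of calling the Fabric REST API to list workspaces; token acquisition
# and error handling are kept deliberately simple.
import requests
from azure.identity import InteractiveBrowserCredential

token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

resp = requests.get(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for ws in resp.json().get("value", []):
    print(ws.get("id"), ws.get("displayName"))
```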

May 2024 Microsoft Fabric Admin Fabric Admin APIs are designed to streamline
APIs preview administrative tasks. Now, you can manage both Power
BI and the new Fabric items (previously referred to as
artifacts) using the same set of APIs. Before this
enhancement, you had to navigate using two different APIs: one for Power BI items and another for new Fabric items.

May 2024 Fabric workload dev kit The Microsoft Fabric workload development kit
(preview) extends to additional workloads and offers a robust
developer toolkit for designing, developing, and
interoperating with Microsoft Fabric using frontend SDKs
and backend REST APIs .

May 2024 Introducing external External Data Sharing (preview) is a new feature that
data sharing (preview) makes it possible for Fabric users to share data from
within their Fabric tenant with users in another Fabric
tenant.

May 2024 Task flows in Microsoft The preview of task flows in Microsoft Fabric is
Fabric (preview) enabled for all Microsoft Fabric users. With Fabric task
flows, when designing a data project, you no longer
need to use a whiteboard to sketch out the different
parts of the project and their interrelationships. Instead,
you can use a task flow to build and bring this key
information into the project itself.

May 2024 Power BI: Subscriptions, Information on Power BI implementation planning and
licenses, and trials key considerations for planning subscriptions, licenses,
and trials for Power BI and Fabric.

May 2024 Register for the Starting May 21, 2024, sign up for the Microsoft Build:
Microsoft Build: Microsoft Fabric Cloud Skills Challenge and prepare for
Microsoft Fabric Cloud Exam DP-600 and upskill to the Fabric Analytics Engineer
Skills Challenge Associate certification.

March Microsoft Fabric is now We are excited to announce that Microsoft Fabric, our
2024 HIPAA compliant all-in-one analytics solution for enterprises, has achieved
new certifications for HIPAA and ISO 27017, ISO 27018,
ISO 27001, ISO 27701 .

March 2024 Folder in Workspace preview As an organizational unit in the workspace, folders provide a hierarchical structure for organizing and managing your items. For more information, see Create folders in workspaces (preview).

March Fabric Copilot Pricing: Copilot in Fabric begins billing on March 1, 2024 as
2024 An End-to-End example part of your existing Power BI Premium or Fabric
Capacity. Learn how Fabric Copilot usage is calculated .

March 2024 Capacity Platform Updates for Pause/Resume, Capacity Metrics, virtualized items and workspaces for Copilot, and VNET Gateways The Fabric Capacity Platform now supports usage reporting for Pause/Resume, virtualized items and workspaces supporting Copilot, Capacity Metrics, and VNET Gateway. For more information, read Capacity Platform Updates for Pause Resume and Capacity Metrics for Copilot and VNET Gateways.

February 2024 Managed private endpoints for Microsoft Fabric (Preview): Managed private endpoints for Microsoft Fabric (preview) allow secure connections to data sources that are behind a firewall or not accessible from the public internet. Workspaces with managed private endpoints have network isolation through a managed virtual network created by Microsoft Fabric. For more information, see Introducing Managed private endpoints for Microsoft Fabric preview.

February Azure Private Link Azure Private Link for Microsoft Fabric secures access to
2024 Support for Microsoft your sensitive data in Microsoft Fabric by providing
Fabric (Preview) network isolation and applying required controls on
your inbound network traffic. For more information, see
Announcing Azure Private Link Support for Microsoft
Fabric in Preview .

February Domains in OneLake Domains in OneLake help you organize your data into
2024 (preview) a logical data mesh, allowing federated governance and
optimizing for business needs. You can now create sub
domains, default domains for users, and move
workspaces between domains. For more information, see
Fabric domains.

February Customizable Fabric You can now customize your preferred entry points in
2024 navigation bar the navigation bar , including pinning common entry
points and unpinning rarely used options.

February Persistent filters in You can now save selected filters in workspace list
2024 workspace view , and they'll be automatically applied the next
time you open the workspace.

December Microsoft Fabric Admin Fabric Admin APIs are designed to streamline
2023 APIs preview administrative tasks. The initial set of Fabric Admin APIs
is tailored to simplify the discovery of workspaces, Fabric
items, and user access details.

December 2023 Workspace retention changes in Fabric and Power BI: The retention period for collaborative workspaces is configurable from 7 to 90 days. The workspace retention setting is enabled by default, and the default retention period is seven days.

November 2023 Fabric workloads are now generally available: Microsoft Fabric is now generally available! Microsoft Fabric Data Warehouse, Data Engineering & Data Science, Real-Time Analytics, Data Factory, OneLake, and the overall Fabric platform are now generally available.

November Microsoft Fabric User We're happy to announce the preview of Microsoft
2023 APIs preview Fabric User APIs. The Fabric user APIs are a major
enabler for both enterprises and partners to use
Microsoft Fabric as they enable end-to-end fully
automated interaction with the service, enable
integration of Microsoft Fabric into external web
applications, and generally enable customers and
partners to scale their solutions more easily.

October Item type icons Our design team has completed a rework of the item
2023 type icons across the platform to improve visual
parsing.

October 2023 Keyword-Based Filtering of Tenant Settings: Microsoft Fabric has recently introduced keyword-based filtering for the tenant settings page in the admin portal.

September Monitoring hub – Column options inside the monitoring hub give users
2023 column options a better customization experience and more room to
operate.

September OneLake File Explorer The OneLake file explorer automatically syncs all
2023 v1.0.10 Microsoft OneLake items that you have access to in
Windows File Explorer. With the latest version, you can
seamlessly transition between using the OneLake file
explorer app and the Fabric web portal. You can also
right-click on the OneLake icon in the Windows
notification area, and select Diagnostic Operations to
view client-side logs. Learn more about easy access to
open workspaces and items online .

August Multitasking navigation Now, all Fabric items are opened in a single browser tab
2023 improvement on the navigation pane, even in the event of a page
refresh. This ensures you can refresh the page without
the concern of losing context.

August 2023 Monitoring Hub support for personalized column options: We have updated Monitoring Hub to allow users to personalize activity-specific columns. You now have the flexibility to display columns that are relevant to the activities you're focused on.

July 2023 New OneLake file With OneLake file explorer v1.0.9.0, it's simple to
explorer update with choose and switch between different Microsoft Entra ID
support for switching (formerly Azure Active Directory) accounts .
organizational accounts

July 2023 Help pane The Help pane is feature-aware and displays articles
about the actions and features available on the current
Fabric screen. For more information, see Help pane in
the monthly Fabric update.

Continuous Integration/Continuous Delivery (CI/CD) in Microsoft Fabric
This section includes guidance and documentation updates on development process,
tools, and versioning in the Microsoft Fabric workspace.


Month Feature Learn more

July 2024 GitHub integration Fabric developers can now choose GitHub or GitHub
for source control Enterprise as their source control tool , and version their
(preview) Fabric items there. For more information, see Get started
with Git integration (preview).

July 2024 Microsoft Fabric We are excited to announce the very first release of the
.NET SDK Microsoft Fabric .NET SDK ! For more information on the
REST API documentation, see Microsoft Fabric REST API
documentation.

June 2024 Introducing New New branching capabilities in Fabric Git integration
Branching include a redesigned Source Control pane, the ability to
Capabilities in Fabric quickly create a new connected workspace and branch, and
Git Integration contextual related branches to find content related to the
current workspace.

May 2024 Deployment Fabric deployment pipelines APIs have been introduced,
pipelines APIs for starting with the 'Deploy' API, which will allow you to
CI/CD deploy the entire workspace, or only selected items.

May 2024 New items in Fabric Data pipelines, Warehouse, Spark, and Spark jobs are now
CI/CD available for CI/CD in git integration and deployment
pipelines.

April 2024 Introducing Trusted Create data pipelines in Fabric to access your firewall-
Workspace Access in enabled ADLS Gen2 storage accounts with ease and
Fabric Data Pipelines security. This feature leverages the workspace identity to
establish a secure and seamless connection between Fabric
and your storage accounts.

March CI/CD for Fabric Git Integration and integration with built-in Deployment
2024 Data Pipelines Pipelines to Data Factory data pipelines is now in preview.
preview For more information, see Data Factory Adds CI/CD to
Fabric Data Pipelines .

March 2024 System file updates for Git integration: The automatically generated system files item.metadata.json and item.config.json have been consolidated into a single system file, .platform. For more information, see Automatically generated system files.

February REST APIs for Fabric REST APIs for Fabric Git integration enable seamless
2024 Git integration incorporation of Fabric Git integration into your team's
end-to-end CI/CD pipeline, eliminating the need for
manual triggering of actions from Fabric.
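
For instance, a CI job might read a workspace's Git status and then pull the latest committed changes into it. The endpoint paths and payload fields below are assumptions based on this announcement; confirm them in the Git integration API reference.

```python
# Sketch of wiring Fabric Git integration into a CI job: read the Git status of a
# workspace, then pull remote changes into it. Endpoint paths and payload fields
# are assumptions based on the announcement; confirm them in the Git APIs reference.
import requests

TOKEN = "<fabric-token>"        # placeholder
WORKSPACE_ID = "<workspace-id>" # placeholder
BASE = f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/git"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

status = requests.get(f"{BASE}/status", headers=HEADERS)
status.raise_for_status()
info = status.json()

# Apply whatever is new in the connected branch to the workspace.
update = requests.post(
    f"{BASE}/updateFromGit",
    headers=HEADERS,
    json={
        "remoteCommitHash": info.get("remoteCommitHash"),
        "workspaceHead": info.get("workspaceHead"),
        "conflictResolution": {
            "conflictResolutionType": "Workspace",
            "conflictResolutionPolicy": "PreferWorkspace",
        },
    },
)
update.raise_for_status()
```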

February 2024 Delegation for Git integration settings: To enable more control over Git-related settings, a tenant admin can now delegate these settings to both capacity admins and workspace admins via the admin portal. For more information, see What is the admin portal?

November Microsoft Fabric Microsoft Fabric User APIs are now available. The Fabric
2023 User APIs user APIs are a major enabler for both enterprises and
partners to use Microsoft Fabric as they enable end-to-end
fully automated interaction with the service, enable
integration of Microsoft Fabric into external web
applications, and generally enable customers and partners
to scale their solutions more easily.

November Notebook in Now you can also use notebooks to deploy your code
2023 Deployment Pipeline across different environments, such as development, test,
Preview and production. You can also use deployment rules to
customize the behavior of your notebooks when they're
deployed, such as changing the default Lakehouse of a
Notebook. Get started with deployment pipelines, and
Notebook shows up in the deployment content
automatically.

November Notebook Git Fabric notebooks now offer Git integration for source
2023 integration preview control using Azure DevOps. It allows users to easily control
the notebook code versions and manage the Git branches
by leveraging the Fabric Git functions and Azure DevOps.

November 2023 Notebook REST APIs Preview: With public REST APIs for the Notebook items, data engineers and data scientists can automate their pipelines and establish CI/CD conveniently and efficiently. The notebook public REST API makes it easy to manage and manipulate Fabric notebook items and to integrate notebooks with other tools and systems.
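
As one possible use, a pipeline step could trigger a notebook run on demand. The jobs endpoint and RunNotebook job type shown below are assumptions to verify against the notebook API documentation.

```python
# Sketch: triggering a notebook run from a pipeline with the (preview) job APIs.
# The jobs/instances path and the RunNotebook job type are assumptions based on
# the announcement; verify against the notebook API documentation.
import requests

TOKEN = "<fabric-token>"          # placeholder
WORKSPACE_ID = "<workspace-id>"   # placeholder
NOTEBOOK_ID = "<notebook-item-id>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{NOTEBOOK_ID}/jobs/instances",
    params={"jobType": "RunNotebook"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"executionData": {}},  # optional run parameters
)
resp.raise_for_status()
# 202 Accepted: the Location header points at the job instance to poll.
print(resp.headers.get("Location"))
```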

November Lakehouse support The Lakehouse item now integrates with the lifecycle
2023 for git integration management capabilities in Microsoft Fabric, providing a
and deployment standardized collaboration between all development team
pipelines (preview) members throughout the product's life. Lifecycle
management facilitates an effective product versioning and
release process by continuously delivering features and
bug fixes into multiple environments.

November 2023 SQLPackage support for Fabric Warehouse: SqlPackage now supports Fabric Warehouse. SqlPackage is a command-line utility that automates common database development tasks by exposing some of the public Data-Tier Application Framework (DacFx) APIs. The SqlPackage command line tool lets you specify an action along with action-specific parameters and properties.
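
For example, a build step could publish a warehouse schema from a .dacpac with SqlPackage. The sketch below shells out from Python; the switches are standard SqlPackage options, while the connection string is a placeholder for your warehouse's SQL endpoint.

```python
# Sketch: automating a warehouse schema deployment with SqlPackage from a CI step.
# /Action, /SourceFile, and /TargetConnectionString are standard SqlPackage
# switches; the connection string shown for a Fabric warehouse is a placeholder.
import subprocess

cmd = [
    "sqlpackage",
    "/Action:Publish",
    "/SourceFile:warehouse.dacpac",
    "/TargetConnectionString:"
    "Server=<warehouse-sql-endpoint>;"
    "Database=<warehouse-name>;"
    "Authentication=Active Directory Interactive;",
]
subprocess.run(cmd, check=True)
```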

September SQL Projects support Microsoft Fabric Data Warehouse is now supported in the
2023 for Warehouse in SQL Database Projects extension available inside of Azure
Microsoft Fabric Data Studio and Visual Studio Code .

September Notebook file The Synapse VS Code extension now supports notebook
2023 system support in File System for Data Engineering and Data Science in
Synapse VS Code Microsoft Fabric. The Synapse VS Code extension
extension empowers users to develop their notebook items directly
within the Visual Studio Code environment.

September 2023 Deployment pipelines now support warehouses: Deployment pipelines enable creators to develop and test content in the service before it reaches the users. Supported content types include reports, paginated reports, dashboards, semantic models, dataflows, and now warehouses. Learn how to deploy content programmatically using REST APIs and DevOps.

September Git integration with You can now publish a Power BI paginated report and keep
2023 paginated reports in it in sync with your git workspace. Developers can apply
Power BI their development processes, tools, and best practices.

August Introducing the dbt The dbt adapter allows you to connect and transform data
2023 adapter for Fabric into Fabric Data Warehouse . The Data Build Tool (dbt) is
Data Warehouse an open-source framework that simplifies data
transformation and analytics engineering.

May 2023 Introducing git While developing in Fabric, developers can back up and
integration in version their work, roll back as needed, collaborate, or work
Microsoft Fabric for in isolation using git branches . Read more about
seamless source connecting the workspace to an Azure repo.
control management

Continuous Integration/Continuous Delivery (CI/CD) samples


Month Feature Learn more

August Exploration of Microsoft A guided tour of Microsoft Fabric's CI/CD features for
2024 Fabric's CI/CD Features data pipelines, lakehouse, notebooks, reports, and
semantic models.

June 2024 Getting started with development in isolation using a Private Workspace: In this walkthrough, we'll talk about how to set up git isolation for a private workspace from a main branch, which is connected to a shared dev team workspace, and then how to commit changes from the private workspace into the main branch of the shared workspace.

March 2024 Microsoft Fabric Lifecycle Management – Getting started with Git Integration and Deployment Pipelines: Learn the essentials of Lifecycle Management through a demo scenario, and explore what Lifecycle Management is and what it means in Fabric.

Activator
This section summarizes archived new features and capabilities of Activator in Microsoft
Fabric.


Month Feature Learn more

October Announcing the We're thrilled to announce that Activator is now in preview
2023 Activator preview and is enabled for all existing Microsoft Fabric users.

August Updated preview We have been working on a new experience for designing
2023 experience for triggers and it's now available in our preview! You now see
trigger design three cards in every trigger: Select, Detect, and Act.

May Driving actions from Activator is a new no-code Microsoft Fabric experience
2023 your data with that empowers the business analyst to drive actions
Activator automatically from your data. To learn more, sign up for the
Activator limited preview.

Fabric and Microsoft 365
This section includes articles and announcements about Microsoft Fabric integration
with Microsoft Graph and Microsoft 365.

Month Feature Learn more

March Analyze Dataverse When creating a shortcut within Fabric, you will now see
2024 tables from Microsoft an option for Dataverse . When you choose this
Fabric shortcut type and specify your Dataverse environment
details, you can quickly see and work with the tables
from that environment.

November Fabric + Microsoft 365 Microsoft Graph is the gateway to data and intelligence
2023 Data: Better Together in Microsoft 365. Microsoft 365 Data Integration for
Microsoft Fabric enables you to manage your
Microsoft 365 alongside your other data sources in one
place with a suite of analytical experiences.

November Microsoft 365 The Microsoft 365 connector now supports ingesting
2023 connector now data into Lakehouse tables .
supports ingesting
data into Lakehouse
(preview)

October Microsoft OneLake You can now create shortcuts directly to your Dynamics
2023 adds shortcut support 365 and Power Platform data in Dataverse and analyze
to Power Platform and it with Microsoft Fabric alongside the rest of your
Dynamics 365 OneLake data. There's no need to export data, build ETL
pipelines, or use partner integration tools.

May 2023 Step-by-Step Guide to This blog reviews how to enable Microsoft Fabric with a
Enable Microsoft Fabric Microsoft 365 Developer Account and the Fabric Free
for Microsoft 365 Trial .
Developer Account

May 2023 Microsoft 365 Data + Microsoft 365 Data Integration for Microsoft Fabric
Microsoft Fabric better enables you to manage your Microsoft 365 alongside
together your other data sources in one place with a suite of
analytical experiences.

Migration
This section includes guidance and documentation updates on migration to Microsoft
Fabric.

Month Feature Learn more

February 2024 Mapping Azure Synapse dedicated SQL pools to Fabric data warehouse compute: Guidance on mapping Data Warehouse Units (DWU) from an Azure Synapse Analytics dedicated SQL pool to an approximate equivalent number of Fabric Capacity Units (CU).

November 2023 Migrate from Azure Synapse dedicated SQL pools: A detailed guide with a migration runbook is available for migrations from Azure Synapse Data Warehouse dedicated SQL pools into Microsoft Fabric.

November 2023 Migrating from Azure Synapse Spark to Fabric: A detailed set of articles on migration of Azure Synapse Spark to Microsoft Fabric, including a migration process that can involve multiple scenarios and phases.

July 2023 Fabric Changing the game This blog post covers OneLake integrations and
– OneLake integration multiple scenarios to ingest the data inside of Fabric
OneLake , including ADLS, ADF, OneLake Explorer,
Databricks.

June 2023 Microsoft Fabric changing This blog post covers the scenario to export data
the game: Exporting data from Azure SQL Database into OneLake .
and building the
Lakehouse

June 2023 Copy data to Azure SQL at Did you know that you can use Microsoft Fabric to
scale with Microsoft Fabric copy data at scale from supported data sources to
Azure SQL Database or Azure SQL Managed
Instance within minutes?

June 2023 Bring your Mainframe DB2 In this blog, we review the convenience and ease of
z/OS data to Microsoft opening DB2 for z/OS data in Microsoft Fabric .
Fabric

Monitor
This section includes guidance and documentation updates on monitoring your
Microsoft Fabric capacity and utilization, including the Monitoring hub.


Month Feature Learn more

March 2024 Capacity Metrics support for Pause and Resume: Fabric Capacity Metrics has been updated with new system events and reconciliation logic to simplify analysis of paused capacities. Fabric Pause and Resume is a capacity management feature that lets you pause F SKU capacities to manage costs. When your capacity isn't operational, you can pause it to enable cost savings; later, when you want to resume work on your capacity, you can reactivate it.

October Throttling and A new article helps you understand Fabric capacity
2023 smoothing in Fabric throttling. Throttling occurs when a tenant's capacity
Data Warehouse consumes more capacity resources than it has purchased
over a period of time.

September Monitoring hub - Users can select and reorder the columns according to their
2023 column options customized needs in the Monitoring hub .

September Fabric Capacities – Read more about the improvements we're making to the
2023 Everything you need Fabric capacity management platform for Fabric and Power
to know about BI users .
what's new and
what's coming

September Microsoft Fabric The Microsoft Fabric Capacity Metrics app is available in
2023 Capacity Metrics App Source for a variety of billing and utilization reporting.

August 2023 Monitoring Hub support for personalized column options: We updated the Monitoring Hub to allow users to personalize activity-specific columns. You now have the flexibility to display columns that are relevant to the activities you're focused on.

May 2023 Capacity metrics in Learn more about the universal compute capacities and
Microsoft Fabric Fabric's capacity metrics governance features that
admins can use to monitor usage and make data-driven
scale-up decisions.

Microsoft Purview
This section summarizes archived announcements about governance and compliance
capabilities with Microsoft Purview in Microsoft Fabric. Learn more about Information
protection in Microsoft Fabric.


Month Feature Learn more

May Administration, Security and Microsoft Fabric provides built-in enterprise grade
2023 Governance in Microsoft Fabric governance and compliance capabilities , powered
by Microsoft Purview.

Related content
Modernization Best Practices and Reusable Assets Blog
Azure Data Explorer Blog
Get started with Microsoft Fabric
Microsoft Training Learning Paths for Fabric
End-to-end tutorials in Microsoft Fabric
Fabric Known Issues
Microsoft Fabric Blog
Microsoft Fabric terminology
What's new in Power BI?
What's new in Microsoft Fabric?
