An introduction to using R in Power BI via its various touch points: R script data sources, R transformations, custom R visuals, and the community gallery of R visualizations.
Modern ETL: Azure Data Factory, Data Lake, and SQL Database – Eric Bragas
This document discusses modern Extract, Transform, Load (ETL) tools in Azure, including Azure Data Factory, Azure Data Lake, and Azure SQL Database. It provides an overview of each tool and how they can be used together in a data warehouse architecture with Azure Data Lake acting as the data hub and Azure SQL Database being used for analytics and reporting through the creation of data marts. It also includes two demonstrations, one on Azure Data Factory and another showing Azure Data Lake Store and Analytics.
The document discusses Azure Data Factory and its capabilities for cloud-first data integration and transformation. ADF allows orchestrating data movement and transforming data at scale across hybrid and multi-cloud environments using a visual, code-free interface. It provides serverless scalability without infrastructure to manage along with capabilities for lifting and running SQL Server Integration Services packages in Azure.
Moving to the cloud; PaaS, IaaS or Managed Instance – Thomas Sykes
In this session we'll look at the cloud choices available in Azure for SQL Server. Whether it's PaaS, IaaS, or Managed Instance, we'll look into the features provided, the major differences, the pros and cons of each solution, and how to choose the best option available.
Azure Data Factory for Redmond SQL PASS UG Sept 2018 – Mark Kromer
Azure Data Factory is a fully managed data integration service in the cloud. It provides a graphical user interface for building data pipelines without coding. Pipelines can orchestrate data movement and transformations across hybrid and multi-cloud environments. Azure Data Factory supports incremental loading, on-demand Spark, and lifting SQL Server Integration Services packages to the cloud.
Analyzing StackExchange data with Azure Data Lake – BizTalk360
Big data is the new big thing, where storing the data is the easy part; gaining insights from your pile of data is something else entirely. Based on a data dump of the well-known StackExchange websites, we will store & analyse 150+ GB of data with Azure Data Lake Store & Analytics to gain some insights about their users. After that we will use Power BI to give an at-a-glance overview of our learnings.
If you are a developer who is interested in big data, this is your time to shine! We will use our existing SQL & C# skills to analyse everything without having to worry about running clusters.
- Azure Data Lake makes big data easy to manage, debug, and optimize through services like Azure Data Lake Store and Azure Data Lake Analytics.
- Azure Data Lake Store provides a hyper-scale data lake that allows storing any data in its native format at unlimited scale. Azure Data Lake Analytics allows running distributed queries and analytics jobs on data stored in Data Lake Store.
- Azure Data Lake is based on open source technologies like Apache Hadoop, YARN, and provides a managed service with auto-scaling and a pay-per-use model through the Azure portal and tools like Visual Studio.
Microsoft Azure BI Solutions in the Cloud – Mark Kromer
This document provides an overview of several Microsoft Azure cloud data and analytics services:
- Azure Data Factory is a data integration service that can move and transform data between cloud and on-premises data stores as part of scheduled or event-driven workflows.
- Azure SQL Data Warehouse is a cloud data warehouse that provides elastic scaling for large BI and analytics workloads. It can scale compute resources on demand.
- Azure Machine Learning enables building, training, and deploying machine learning models and creating APIs for predictive analytics.
- Power BI provides interactive reports, visualizations, and dashboards that can combine multiple datasets and be embedded in applications.
DBP-010_Using Azure Data Services for Modern Data Applications – decode2016
This document discusses using Azure data services for modern data applications based on the Lambda architecture. It covers ingestion of streaming and batch data using services like Event Hubs, IoT Hubs, and Kafka. It describes processing streaming data in real-time using Stream Analytics, Storm, and Spark Streaming, and processing batch data using HDInsight, ADLA, and Spark. It also covers staging data in data lakes, SQL databases, NoSQL databases and data warehouses. Finally, it discusses serving and exploring data using Power BI and enriching data using Azure Data Factory and Machine Learning.
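At query time, the Lambda architecture described above merges a precomputed batch view with a speed-layer view covering recent stream events. A minimal toy sketch in Python (the counts and the merge rule are illustrative assumptions, not taken from the deck):

```python
# Toy illustration of a Lambda-architecture serving query:
# a batch view (recomputed periodically) is combined with a
# real-time speed-layer view covering events since the last batch run.

batch_view = {"page_a": 100, "page_b": 40}   # precomputed counts (hypothetical)
speed_view = {"page_a": 3, "page_c": 1}      # counts from recent stream events

def serve(key: str) -> int:
    """Answer a query by merging the batch and real-time views."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(serve("page_a"))  # 103
print(serve("page_c"))  # 1
```

In the services named above, the batch view would come from HDInsight/ADLA/Spark jobs and the speed view from Stream Analytics, Storm, or Spark Streaming; the merge-at-serve step is what keeps query results fresh between batch recomputations.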
Microsoft Azure Data Factory Hands-On Lab Overview Slides – Mark Kromer
This document outlines modules for a lab on moving data to Azure using Azure Data Factory. The modules will deploy necessary Azure resources, lift and shift an existing SSIS package to Azure, rebuild ETL processes in ADF, enhance data with cloud services, transform and merge data with ADF and HDInsight, load data into a data warehouse with ADF, schedule ADF pipelines, monitor ADF, and verify loaded data. Technologies used include PowerShell, Azure SQL, Blob Storage, Data Factory, SQL DW, Logic Apps, HDInsight, and Office 365.
PaaSport to Paradise: Lifting & Shifting with Azure SQL Database/Managed Inst... – Sandy Winarko
This session focuses on the all PaaS solution of Azure SQL DB/Managed Instance (MI) + SSIS in Azure Data Factory (ADF) to lift & shift, modernize, and extend ETL workflows. We will first show you how to provision Azure-SSIS Integration Runtime (IR) – dedicated ADF servers for running SSIS – with SSIS catalog (SSISDB) hosted by Azure SQL DB/MI, configure it to access data on premises using Windows authentication and Virtual Network injection/Self-Hosted IR as a proxy, and extend it with custom/Open Source/3rd party components. We will next show you how to use the familiar SSDT/SSMS tools to design/test/deploy/execute your SSIS packages in the cloud just like you do on premises. We will finally show you how to modernize your ETL workflows by invoking/scheduling SSIS package executions as first-class activities in ADF pipelines and combining/chaining them with other activities, allowing you to trigger your pipeline runs by events, automatically (de)provision SSIS IR just in time, etc.
Integration Monday - Analysing StackExchange data with Azure Data Lake – Tom Kerkhove
Big data is the new big thing, where storing the data is the easy part; gaining insights from your pile of data is something else entirely.
Based on a data dump of the well-known StackExchange websites, we will store & analyse 150+ GB of data with Azure Data Lake Store & Analytics to gain some insights about their users. After that we will use Power BI to give an at-a-glance overview of our learnings.
If you are a developer who is interested in big data, this is your time to shine! We will use our existing SQL & C# skills to analyse everything without having to worry about running clusters.
Vitalii Bondarenko "Machine Learning on Fast Data" – DataConf
This document discusses machine learning on fast data. It presents an agenda covering ML on production systems, TensorFlow, Kafka, Docker and Kubernetes. It then describes the machine learning process and shows how an enterprise analytics platform can integrate data sources, a machine learning cluster using Kafka, and data destinations. Details are provided on using TensorFlow for linear regression and neural networks. Apache Kafka is explained as a distributed streaming platform using topics, brokers, and consumer groups. The Confluent platform, KStream and KTable APIs are also summarized. Docker and Kubernetes are mentioned for containerization.
This document provides an overview of Azure SQL Data Warehouse. It discusses what Azure SQL Data Warehouse is, how it is provisioned and scaled, best practices for designing tables in Azure SQL DW including distribution keys and data types, and methods for loading and querying data including PolyBase and labeling queries for monitoring. The presentation also covers tuning aspects like statistics, indexing, and resource classes.
Part 3 - Modern Data Warehouse with Azure Synapse – Nilesh Gule
Slide deck of the third part of building a Modern Data Warehouse using Azure. This session covered Azure Synapse, formerly SQL Data Warehouse. We look at the Azure Synapse architecture, external files, and integration with Azure Data Factory.
The recording of the session is available on YouTube
https://www.youtube.com/watch?v=LZlu6_rFzm8&WT.mc_id=DP-MVP-5003170
Develop scalable analytical solutions with Azure Data Factory & Azure SQL Dat... – Microsoft Tech Community
In this session you will learn how to develop data pipelines in Azure Data Factory and build a cloud-based analytical solution, adopting modern data warehouse approaches with Azure SQL Data Warehouse and implementing incremental ETL orchestration at scale. With the multiple sources and types of data available in an enterprise today, Azure Data Factory enables full integration of data and direct storage in Azure SQL Data Warehouse for the powerful, high-performance query workloads that drive the majority of enterprise and business intelligence applications.
10 Things Learned Releasing Databricks Enterprise Wide – Databricks
Implementing tools, let alone an entire Unified Data Platform, like Databricks, can be quite the undertaking. Implementing a tool which you have not yet learned all the ins and outs of can be even more frustrating. Have you ever wished that you could take some of that uncertainty away? Four years ago, Western Governors University (WGU) took on the task of rewriting all of our ETL pipelines in Scala/Python, as well as migrating our Enterprise Data Warehouse into Delta, all on the Databricks platform. Starting with 4 users and rapidly growing to over 120 users across 8 business units, our Databricks environment turned into an entire unified platform, being used by individuals of all skill levels, data requirements, and internal security requirements.
Through this process, our team has had the chance and opportunity to learn while making a lot of mistakes. Taking a look back at those mistakes, there are a lot of things we wish we had known before opening the platform to our enterprise.
We would like to share with you 10 things we wish we had known before WGU started operating in our Databricks environment, covering user management from both an AWS and a Databricks perspective, understanding and managing costs, creating custom pipelines for efficient code management, Apache Spark snippets that saved us a fortune, and more. We will also offer recommendations on how to overcome these pitfalls, to help new, current, and prospective users make their environments easier, safer, and more reliable to work in.
Microsoft Data Integration Pipelines: Azure Data Factory and SSIS – Mark Kromer
The document discusses tools for building ETL pipelines to consume hybrid data sources and load data into analytics systems at scale. It describes how Azure Data Factory and SQL Server Integration Services can be used to automate pipelines that extract, transform, and load data from both on-premises and cloud data stores into data warehouses and data lakes for analytics. Specific patterns shown include analyzing blog comments, sentiment analysis with machine learning, and loading a modern data warehouse.
This document provides an overview of configuration options in Azure, including application settings, App Configuration, Key Vault, and Managed Identities for Azure Resources. It begins with an introduction to configuration and then discusses each option in more detail, providing demos of application settings, App Configuration, and Key Vault. The document emphasizes that these tools can help centralize and secure configuration across environments while simplifying administration.
Pipelines and Packages: Introduction to Azure Data Factory (Techorama NL 2019) – Cathrine Wilhelmsen
This document discusses Azure Data Factory (ADF) and how it can be used to build and orchestrate data pipelines without code. It describes how ADF is a hybrid data integration service that improves on its previous version. It also explains how existing SSIS packages can be "lifted and shifted" to ADF to modernize solutions while retaining investments. The document demonstrates creating pipelines and data flows in ADF, handling schema drift, and best practices for development.
A short introduction to the different options for ETL & ELT in the cloud with Microsoft Azure. This is a small accompanying set of slides for my presentations and blogs on this topic.
Northwestern Mutual Journey – Transform BI Space to Cloud – Databricks
The volume of available data is growing by the second (to an estimated 175 zettabytes by 2025), and it is becoming increasingly granular in its information. With that change, every organization is moving towards building a data-driven culture. We at Northwestern Mutual share a similar story of driving towards data-driven decisions to improve both efficiency and effectiveness. Legacy system analysis revealed bottlenecks, excesses, duplications, and more. Based on the ever-growing need to analyze more data, our BI team decided to move to a more modern, scalable, cost-effective data platform. As a financial company, data security is as important to us as ingestion of data; in addition to fast ingestion and compute, we needed a solution that could support column-level encryption and role-based access for different teams to our data lake.
In this talk we describe our journey of moving hundreds of ELT jobs from our current MSBI stack to Databricks and building a data lake (using Lakehouse). We explain how we reduced our daily data load time from 7 hours to 2 hours while gaining the capability to ingest more data, and share our experience, challenges, learnings, architecture, and the design patterns used while undertaking this huge migration effort, along with the tools and frameworks built by our engineers to ease the learning curve our non-Apache Spark engineers would face during the migration. You will leave this session with a better understanding of what migrating to Apache Spark/Databricks would mean for you and your organization.
Personalization Journey: From Single Node to Cloud Streaming – Databricks
In the online gaming industry we receive a vast amount of transactions that need to be handled in real time. Our customers get to choose from hundreds or even thousands of options, and providing a seamless experience is crucial in our industry. Recommendation systems can be the answer in such cases, but they require handling loads of data and large amounts of processing power. Towards this goal, over the last two years we have gone down the road of machine learning and AI in order to transform our customers' daily experience and upgrade our internal services.
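Recommendation systems of the kind mentioned above often start from item-to-item similarity over a user–item rating matrix. A minimal cosine-similarity sketch in Python with invented ratings (purely illustrative; not the speakers' actual system):

```python
# Toy item-similarity recommender: cosine similarity between item
# columns of a small user-item rating matrix. All ratings are
# invented for illustration only.
import numpy as np

# rows = users, columns = items A, B, C
ratings = np.array([
    [5.0, 4.0, 0.0],
    [4.0, 5.0, 1.0],
    [0.0, 1.0, 5.0],
])

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two rating vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_ab = cosine(ratings[:, 0], ratings[:, 1])  # items rated alike -> high
sim_ac = cosine(ratings[:, 0], ratings[:, 2])  # items rated unlike -> low
print(f"A~B: {sim_ab:.2f}, A~C: {sim_ac:.2f}")
```

At production scale this pairwise computation is what drives the need for the distributed processing the abstract alludes to, since the matrix has millions of users and thousands of items.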
Next Generation Data Integration with Azure Data Factory – Tom Kerkhove
Azure Data Factory is a managed data integration service that allows users to create data pipelines to move and transform data. It provides triggers to initiate pipelines, activities to perform tasks like data movement and transformation, and integration runtimes to execute pipelines across cloud and on-premises environments. The presentation demonstrated how to use Azure serverless services like Data Factory and Logic Apps to build a pipeline for fulfilling GDPR data requests.
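The trigger/activity/integration-runtime model described above shows up directly in an ADF pipeline definition. Below is a minimal sketch of a Copy-activity pipeline in the JSON shape the ADF service expects, expressed as a Python dict; the pipeline, dataset, and activity names are hypothetical placeholders:

```python
# Sketch of an Azure Data Factory pipeline definition. The structure
# follows the ADF pipeline JSON schema (name/properties/activities);
# the pipeline name and dataset references are hypothetical.
import json

pipeline = {
    "name": "CopyFromBlobToSql",  # hypothetical pipeline name
    "properties": {
        "activities": [
            {
                "name": "CopyBlobToSqlActivity",
                "type": "Copy",  # built-in data-movement activity
                "inputs": [
                    {"referenceName": "BlobInputDataset", "type": "DatasetReference"}
                ],
                "outputs": [
                    {"referenceName": "SqlOutputDataset", "type": "DatasetReference"}
                ],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ]
    },
}

print(json.dumps(pipeline, indent=2))
```

A trigger (schedule, tumbling window, or event) would reference this pipeline by name, and the integration runtime determines where the Copy activity's data movement actually executes.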
This document provides an overview of Microsoft R and its capabilities for advanced analytics. It discusses how Microsoft R can enable businesses to analyze large volumes of data across multiple environments including locally, on Azure, and with SQL Server and HDInsight. The presentation includes a demonstration of R used with SQL Server, HDInsight, Azure Machine Learning, and Power BI. It highlights how Microsoft R provides a unified platform for data science and analytics that allows users to write code once and deploy models anywhere.
- R is a popular open-source statistical programming language that is gaining wider adoption in business. Microsoft has incorporated R into Power BI to enhance business intelligence solutions.
- There are three main ways to use R in Power BI - R-powered custom visuals, R visuals, and R scripts. R-powered visuals utilize pre-built R visualizations without needing R knowledge. R visuals give full control over custom R code and visualizations. R scripts can be used for data preparation.
- R-powered visuals are easy to use but have limited customization. R visuals provide more flexibility but require R coding skills. Both display static images within Power BI reports.
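The script-based data-prep path described above has a direct analogue in Power BI's Python script source, which the product supports alongside R: the script simply defines data frames, and each one becomes a loadable table. A minimal sketch (shown in Python/pandas rather than R; the column names and aggregation are illustrative):

```python
# Sketch of a Power BI script data source: Power BI runs the script
# and exposes each pandas DataFrame it defines as a table.
# (Python shown for illustration; the decks above use R. Column
# names are invented.)
import pandas as pd

sales = pd.DataFrame({
    "region": ["East", "West", "East"],
    "amount": [120.0, 90.5, 30.0],
})

# Typical prep step before loading: aggregate to one row per region.
sales_by_region = sales.groupby("region", as_index=False)["amount"].sum()
print(sales_by_region)
```

Inside Power BI, both `sales` and `sales_by_region` would appear as tables to import; the same pattern in R would build the frames with `data.frame()` and `aggregate()`.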
Predictive Analysis using Microsoft SQL Server R Services – Fisnik Doko
R is rapidly becoming the leading language in Data Science and statistics.
This session will show how Microsoft SQL Server can help meet an increasingly “predictive” world by supporting the R language inside the database.
A demonstration using R and SQL Server R Services in the rental industry.
Learn the basics to get started using R with Power BI. Discover how to set up the software and what libraries are needed. See how to use R scripts to create data, connect to a data source, build a visual and transform data. Using R, you can leverage data sources, functions and visualizations not directly built into Power BI. See the demos and download this deck: https://senturus.com/resources/using-r-with-power-bi/
Senturus offers a full spectrum of services for business analytics. Our resource library has hundreds of free live and recorded webinars, blog posts, demos and unbiased product reviews available on our website at: https://senturus.com/resources/
This document provides an overview of using R with Power BI. It discusses integrating R via the R Connector and Power Query, creating R visuals in Power BI reports, and some limitations. Key points include using R for data transformation, visualization, and advanced analytics. The R Connector allows loading R data frames into Power BI. R packages can be used in Power Query for tasks like web scraping, data wrangling, and missing value imputation. R visuals bring R plotting capabilities to Power BI reports.
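The missing-value imputation mentioned as a Power Query task can be sketched in a few lines. Shown here in Python/pandas rather than R, with invented data, using a mean-fill as one common simple strategy:

```python
# Simple missing-value imputation of the kind described for data
# wrangling steps: fill gaps in a numeric column with the column mean.
# (Python/pandas used for illustration; the deck itself uses R
# packages inside Power Query. Data is invented.)
import pandas as pd

df = pd.DataFrame({"score": [10.0, None, 30.0, None]})
df["score"] = df["score"].fillna(df["score"].mean())
print(df["score"].tolist())  # [10.0, 20.0, 30.0, 20.0]
```

The equivalent R step inside Power Query would replace the `NA` values the same way; more careful pipelines use grouped means or model-based imputation instead of a single column mean.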
This document summarizes a session on AI in Power BI. The session covered how AI capabilities like automated machine learning, cognitive services, and AI visualizations can help business analysts explore insights from data and build basic machine learning models without coding. It also discussed how these AI tools in Power BI are designed for business users, living fully within Power BI, while extension capabilities allow data scientists to integrate external Python/R scripts or Azure ML models. Demo portions showed how to use automated machine learning, cognitive services and AI visualizations in Power BI reports.
Machine learning services with SQL Server 2017 – Mark Tabladillo
SQL Server 2017 introduces Machine Learning Services with two independent technologies: R and Python. The purpose of this presentation is 1) to describe major features of this technology for technology managers; 2) to outline use cases for architects; and 3) to provide demos for developers and data scientists.
General Presentation - DIAD and AIAD, Dashboard and AppsVishal Pawar
General presentation by Vishal Pawar for DIAD and AIAD
Green House Data invites you and your team to a 3 day online Power BI and Power Apps Training with Vishal Pawar, Microsoft MVP who has 10+ years in Microsoft BI and the data stack.
Day 1: Power BI Dashboard in a Day
Day 2: Power Apps and Power Automate in a Day
How R Developers Can Build and Share Data and AI Applications that Scale with...Databricks
This document discusses how R developers can build and share scalable data and AI applications using RStudio and Databricks. It outlines how RStudio and Databricks can be used together to overcome challenges of processing large amounts of data in R, including limited server memory and performance issues. Developers can use hosted RStudio servers on Databricks clusters, connect to Spark from RStudio using Databricks Connect, and share scalable Shiny apps deployed with RStudio Connect. The ODBC toolchain provides a performant way to connect R to Spark without issues encountered when using sparklyr directly.
In Project Server, Power BI adds value in creating interactive reports and dashboards as it allows data connectivity with OData and other data sources. Power BI reporting can be used with both Project Online and Project Server.
This webinar gives you a platform to learn how to connect to Data Source - OData feed from Power BI. In this session, you will also learn how to create and publish reports and dashboards using the visual tools set available in Power BI desktop, and explore more on its features.
Gain a 360° overview of how to use Power BI to create effective reports. In this Power BI course, our trainer helps you to explore how to use this business intelligence platform through hands-on practice. The Power BI training begins by covering the web-based Power BI service, considering how to import data, create visualizations and reports. Next, you’ll acknowledge Power BI Mobile and how to use data modeling capabilities in Power BI Desktop. Each analysis is designed to drive your Power BI skills to the next level. With live Power BI training classes and multiple real-time projects, you’ll acquire the skills needed to pass the Microsoft Power BI Certification Exam.
Power BI Report Server allows users to publish Power BI reports, paginated reports, and Excel workbooks on-premises. It can be acquired through Power BI Premium or SQL Server licensing. It provides self-service BI capabilities while also supporting enterprise reporting needs. IT pros can deploy Power BI Report Server on-premises, integrate it with Active Directory for authentication, and migrate existing SQL Server Reporting Services reports.
This document summarizes how businesses can transform through business intelligence (BI) and advanced analytics using Microsoft's modern BI platform. It outlines the Power BI and Azure Analysis Services tools for visualization, data modeling, and analytics. It also discusses how Collective Intelligence and Microsoft can help customers accelerate their move to a data-driven culture and realize benefits like increased productivity and cost savings by implementing BI and advanced analytics solutions in the cloud. The presentation includes demonstrations of Power BI and Azure Analysis Services.
Power BI Report Server Enterprise Architecture, Tools to Publish reports and ...Vishal Pawar
To improve the performance, sustainability, security and scalability of enterprise-grade Power BI implementations with constant velocity, we need to adhere best practices with sloid architecture.
In this session Vishal will go over Power BI Ecosystem with quick Example, Power BI report Server evolution from its inception till date with Architecture for Enterprise PBI RS and usage through various tool available to publish -SSDT SSRS, Power BI Desktop(Optimized Version), Report Builder and mobile report builder and various Best Practices for PBI Report Server.
Simplifying AI integration on Apache SparkDatabricks
Spark is an ETL and Data Processing engine especially suited for big data. Most of the time an organization has different teams working on different languages, frameworks and libraries, which needs to be integrated in the ETL Pipelines or for general data processing. For example, a Spark ETL job may be written in Scala by data engineering team, but there is a need to integrate a machine learning solution written in python/R developed by Data Science team. These kinds of solutions are not very straightforward to integrate with spark engine, and it required great amount of collaboration between different teams, hence increasing overall project time and cost. Furthermore, these solutions will keep on changing/upgrading with time using latest versions of the technologies and with improved design and implementation, especially in Machine Learning domain where ML models/algorithms keep on improving with new data and new approaches. And so there is significant downtime involved in integrating the these upgraded version.
1. Power BI Embedded is an Azure cloud service that allows developers to embed Power BI reports in their custom applications.
2. Developers can connect to data sources, create reports and datasets, embed reports, and manage access using Power BI APIs.
3. The demonstration shows how to provision a workspace collection, embed an interactive report in an application, and handle authentication and permissions for end users.
During the bake-off (NYC Enterprise Collaboration Meetup in the NYC Microsoft Office) between Power BI and QlikView, Gal Vekselman presented the Power BI tool and those are his slides.
Portable Scalable Data Visualization Techniques for Apache Spark and Python N...Databricks
Python Notebooks are great for communicating data analysis & research but how do you port these data visualizations between the many available platforms (Jupyter, Databricks, Zeppelin, Colab,…). Also learn about how to scale up your visualizations using Spark
The document discusses a Cloud OnAir event about database management and databases. It includes an agenda that covers overviews of Cloud SQL, Cloud Memorystore, Cloud Spanner, and Cloud Firestore updates. Several speakers are also listed that will provide presentations on choosing the right database, database migrations, database modernization, and specific database services like Cloud Spanner and Cloud Bigtable.
Power BI is a business analytics tool used to analyze business data and derive insights, while Tableau is a business intelligence and data visualization tool used to generate flexible reports and visualizations. Tableau typically handles larger data volumes faster than Power BI and has a larger community/support network, but Power BI is easier for less technical users and integrates better with other Microsoft products. Both tools have cloud-based and on-premise deployment options, with Tableau supporting more platforms, while Power BI is limited to Azure in the cloud.
2. Eric Bragas, Jr.
DesignMind BI Consultant
Trainer for Power BI, Master Data Management, and other BI topics
◦ SQL Saturday
◦ Power BI Dashboard In A Day
◦ In-depth MDM
Passionate about learning, design, and data
3. Agenda
What is R in Power BI?
When to use R in Power BI?
How to use R in Power BI?
◦ Setup
◦ Data Source
◦ Script Visual
◦ Transformation
R-powered Custom Visuals
Other
◦ External IDEs
◦ Supported Packages in PBI Service
4. What is R?
“R is a language and environment for statistical computing and graphics”
Capabilities include
◦ Statistical Analysis
◦ Modeling
◦ Data Visualization
Wiki: https://en.wikipedia.org/wiki/R_(programming_language)
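A quick flavor of those capabilities in a few lines of base R, using the built-in `mtcars` dataset:

```r
# Statistical analysis: summarize fuel economy in the built-in mtcars data.
summary(mtcars$mpg)

# Modeling: fit a simple linear regression of mpg on vehicle weight.
fit <- lm(mpg ~ wt, data = mtcars)
coef(fit)  # intercept and slope

# Data visualization: scatter plot with the fitted line overlaid.
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon")
abline(fit)
```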
5. What is Power BI?
Microsoft’s Self-Service BI application
Capabilities include
◦ Connecting to a variety of data sources
◦ Data transformation through Power Query
◦ Data modeling
◦ Data Analysis via DAX
Power BI: https://powerbi.microsoft.com/en-us/
6. What is R in Power BI?
An intersection of two tools
Power BI provides multiple touch points for R
◦ Allows R scripts as data sources
◦ Data transformations as part of Power Query steps
◦ R visualizations
◦ R-powered custom visualizations
Unlimited R script options when working locally
Limited package support for reports published to the Power BI Service
Does not include/require Microsoft R Server
7. When to use R in Power BI
No dedicated modeling capability (read: small R implementations)
Richer visualizations beyond what Power BI provides natively
◦ Augment Power BI, don’t try to replace it
Data manipulation beyond DAX capabilities
Integrating R modeling capabilities into Power Query
9. Setup
Install R - https://cran.r-project.org/
(optional) Install R IDE
◦ R Studio
◦ R Tools for Visual Studio (RTVS)
Install R Packages
Power BI settings
◦ Options > R Scripting
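The package installation step above might look like this; the package names are just examples, and the install should target the same R home that Power BI points at under Options > R scripting:

```r
# Run once in the R installation that Power BI is configured to use
# (File > Options and settings > Options > R scripting).
# Package names below are examples; install whatever your reports need.
install.packages(c("ggplot2", "dplyr"))
```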
10. R Script Data Source
Reference all dependent packages
Only data frames are imported (not vectors)
◦ All data frames available in navigator
NA is translated to NULL
30-minute timeout
What other use cases do you have?
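A minimal sketch of what such a script might look like (the column names and values are invented for illustration; in practice the data frames would usually come from `read.csv()`, a database driver, or a web API):

```r
# R script data source: when Power BI runs this script, every data frame
# it creates is offered for import in the Navigator.
# Reference all dependent packages at the top (none needed here).

# Hypothetical source data; NA values become NULL once loaded.
sales <- data.frame(
  region = c("West", "East", "West", "South"),
  amount = c(100, 250, NA, 75)
)

# A second data frame; both appear in the Navigator. Note that plain
# vectors such as `regions` are NOT importable, only data frames are.
regions <- unique(sales$region)
sales_summary <- aggregate(amount ~ region, data = sales, FUN = sum)
```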
12. R Script Visualization
Augment Power BI visualizations using R
◦ base
◦ ggplot2
◦ ggmap
◦ etc.
Rendered to the default R graphics device
◦ Interactive and animated visuals not supported
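Inside an R script visual, the fields placed on the visual arrive as a data frame named `dataset`, and whatever is drawn on the default graphics device is what the report renders. A sketch with ggplot2 follows; the `month`/`sales` columns are hypothetical stand-ins for fields you would drag in:

```r
library(ggplot2)

# Stand-in for the `dataset` data frame Power BI supplies to the visual.
dataset <- data.frame(
  month = 1:6,
  sales = c(12, 15, 9, 20, 18, 25)
)

p <- ggplot(dataset, aes(x = month, y = sales)) +
  geom_line() +
  geom_point() +
  labs(title = "Monthly sales", x = "Month", y = "Sales")

# Printing the plot draws it on the default device, i.e. the visual canvas.
print(p)
```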
14. R Script Transformation
Allows you to manipulate data via Query Editor steps
Can be used to
◦ clean data
◦ generate additional values, e.g. model results
◦ run any R script, e.g. write.csv()
Executes as part of the query, so it runs only when the dataset is refreshed
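A sketch of a "Run R script" transformation step: the incoming table is handed to the script as a data frame named `dataset`, and a resulting data frame becomes the step's output (the `x`/`y` columns here are invented for illustration):

```r
# Stand-in for the incoming `dataset` handed to the script by Power Query.
dataset <- data.frame(
  x = c(1, 2, 3, 4, 5),
  y = c(2.1, 3.9, 6.2, 7.8, NA)
)

# Clean data: impute the missing y value with the column mean.
dataset$y[is.na(dataset$y)] <- mean(dataset$y, na.rm = TRUE)

# Generate additional values: attach fitted values from a simple model.
fit <- lm(y ~ x, data = dataset)
output <- transform(dataset, y_fitted = fitted(fit))

# `output` can then be selected as the result of this query step.
```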
16. Other Resources
External IDEs
◦ R Studio
◦ R Tools for Visual Studio
Supported Packages in the Power BI Service
◦ List of supported packages
◦ Request additional packages
Additional Resources
◦ Download R from CRAN (Comprehensive R Archive Network) or MRAN (Microsoft R Application Network)
◦ Power BI visuals in R
◦ Additional visualizations