Short introduction to different options for ETL & ELT in the Cloud with Microsoft Azure. This is a small accompanying set of slides for my presentations and blogs on this topic.
Microsoft Azure Data Factory Hands-On Lab Overview Slides - Mark Kromer
This document outlines modules for a lab on moving data to Azure using Azure Data Factory. The modules will deploy necessary Azure resources, lift and shift an existing SSIS package to Azure, rebuild ETL processes in ADF, enhance data with cloud services, transform and merge data with ADF and HDInsight, load data into a data warehouse with ADF, schedule ADF pipelines, monitor ADF, and verify loaded data. Technologies used include PowerShell, Azure SQL, Blob Storage, Data Factory, SQL DW, Logic Apps, HDInsight, and Office 365.
Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. In this session we will learn how to create data integration solutions using the Data Factory service: ingest data from various data stores, transform and process the data, and publish the resulting data to data stores.
This document provides an overview of using Azure Data Factory (ADF) for ETL workflows. It discusses the components of modern data engineering, how to design ETL processes in Azure, and gives an overview of ADF and its components. It also previews a demo on creating an ADF pipeline to copy data into Azure Synapse Analytics. The agenda covers data ingestion techniques in ADF and ADF components such as linked services, datasets, pipelines, and triggers. It concludes with references, a Q&A section, and a request for feedback.
1- Introduction of Azure data factory.pptx - BRIJESH KUMAR
Azure Data Factory is a cloud-based data integration service that allows users to easily construct extract, transform, load (ETL) and extract, load, transform (ELT) processes without code. It offers job scheduling, security for data in transit, integration with source control for continuous delivery, and scalability for large data volumes. The document demonstrates how to create an Azure Data Factory from the Azure portal.
Azure DataBricks for Data Engineering by Eugene Polonichko - Dimko Zhluktenko
This document provides an overview of Azure Databricks, an Apache Spark-based analytics platform optimized for Microsoft Azure cloud services. It discusses key components of Azure Databricks including clusters, workspaces, notebooks, visualizations, jobs, alerts, and the Databricks File System. It also outlines how data engineers can leverage Azure Databricks for scenarios like running ETL pipelines, streaming analytics, and connecting business intelligence tools to query data.
The document discusses Azure Data Factory v2. It provides an agenda that includes topics like triggers, control flow, and executing SSIS packages in ADFv2. It then introduces the speaker, Stefan Kirner, who has over 15 years of experience with Microsoft BI tools. The rest of the document consists of slides on ADFv2 topics like the pipeline model, triggers, activities, integration runtimes, scaling SSIS packages, and notes from the field on using SSIS packages in ADFv2.
Big data architectures and the data lake - James Serra
The document provides an overview of big data architectures and the data lake concept. It discusses why organizations are adopting data lakes to handle increasing data volumes and varieties. The key aspects covered include:
- Defining top-down and bottom-up approaches to data management
- Explaining what a data lake is and how Hadoop can function as the data lake
- Describing how a modern data warehouse combines features of a traditional data warehouse and data lake
- Discussing how federated querying allows data to be accessed across multiple sources
- Highlighting benefits of implementing big data solutions in the cloud
- Comparing shared-nothing, massively parallel processing (MPP) architectures to symmetric multi-processing (SMP) architectures
Modernizing to a Cloud Data Architecture - Databricks
Organizations with on-premises Hadoop infrastructure are bogged down by system complexity, unscalable infrastructure, and the increasing burden on DevOps to manage legacy architectures. Costs and resource utilization continue to go up while innovation has flatlined. In this session, you will learn why, now more than ever, enterprises are looking for cloud alternatives to Hadoop and are migrating off of the architecture in large numbers. You will also learn how the benefits of elastic compute models helped one customer scale their analytics and AI workloads, along with best practices from their successful migration of data and workloads to the cloud.
The document discusses the challenges of modern data, analytics, and AI workloads. Most enterprises struggle with siloed data systems that make integration and productivity difficult. The future of data lies with a data lakehouse platform that can unify data engineering, analytics, data warehousing, and machine learning workloads on a single open platform. The Databricks Lakehouse platform aims to address these challenges with its open data lake approach and capabilities for data engineering, SQL analytics, governance, and machine learning.
Building Lakehouses on Delta Lake with SQL Analytics Primer - Databricks
You’ve heard the marketing buzz, maybe you have been to a workshop and worked with some Spark, Delta, SQL, Python, or R, but you still need some help putting all the pieces together? Join us as we review some common techniques to build a lakehouse using Delta Lake, use SQL Analytics to perform exploratory analysis, and build connectivity for BI applications.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
Databricks is a Software-as-a-Service-like experience (or Spark-as-a-service) for curating and processing massive amounts of data, developing, training and deploying models on that data, and managing the whole workflow process throughout the project. It is for those who are comfortable with Apache Spark, as it is 100% based on Spark and is extensible with support for Scala, Java, R, and Python alongside Spark SQL, GraphX, Streaming and the Machine Learning Library (MLlib). It has built-in integration with many data sources, has a workflow scheduler, allows for real-time workspace collaboration, and has performance improvements over traditional Apache Spark.
Azure Data Factory Mapping Data Flow allows users to stage and transform data in Azure during a limited preview period beginning in February 2019. Data can be staged from Azure Data Lake Storage, Blob Storage, or SQL databases/data warehouses, then transformed using visual data flows before being landed to staging areas in Azure like ADLS, Blob Storage, or SQL databases. For information, contact [email protected] or visit http://aka.ms/dataflowpreview.
Azure Data Factory is a data integration service that allows for data movement and transformation between both on-premises and cloud data stores. It uses datasets to represent data structures, activities to define actions on data with pipelines grouping related activities, and linked services to connect to external resources. Key concepts include datasets representing input/output data, activities performing actions like copy, and pipelines logically grouping activities.
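To make those concepts concrete, here is a minimal sketch of how the pieces relate, written as Python dicts that mirror the shape of ADF's JSON authoring format. It is an illustration only: the linked service, dataset, and pipeline names are hypothetical, the connection string is a placeholder, and the output dataset referenced by the Copy activity is omitted.

```python
# Linked service: the connection to an external store (here, Azure Blob Storage).
linked_service = {
    "name": "MyBlobStorageLS",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {"connectionString": "<storage-connection-string>"},
    },
}

# Dataset: a named view over data reachable through that linked service.
input_dataset = {
    "name": "RawCsvFiles",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {"referenceName": "MyBlobStorageLS",
                              "type": "LinkedServiceReference"},
        "typeProperties": {
            "location": {"type": "AzureBlobStorageLocation",
                         "container": "raw", "folderPath": "sales"}
        },
    },
}

# Pipeline: a logical grouping of activities; this one holds a single Copy
# activity that reads the dataset above and writes to an output dataset
# (not shown) defined the same way.
pipeline = {
    "name": "CopyRawSales",
    "properties": {
        "activities": [
            {
                "name": "CopySalesToLake",
                "type": "Copy",
                "inputs": [{"referenceName": "RawCsvFiles", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "CuratedSalesFolder", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "ParquetSink"},
                },
            }
        ]
    },
}
```

The same three-layer structure (linked service, dataset, pipeline of activities) underlies every ADF solution, whichever authoring surface you use.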
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
This document is a training presentation on Databricks fundamentals and the data lakehouse concept by Dalibor Wijas from November 2022. It introduces Wijas and his experience. It then discusses what Databricks is, why it is needed, what a data lakehouse is, how Databricks enables the data lakehouse concept using Apache Spark and Delta Lake. It also covers how Databricks supports data engineering, data warehousing, and offers tools for data ingestion, transformation, pipelines and more.
This document provides an introduction and overview of Azure Data Lake. It describes Azure Data Lake as a single store of all data ranging from raw to processed that can be used for reporting, analytics and machine learning. It discusses key Azure Data Lake components like Data Lake Store, Data Lake Analytics, HDInsight and the U-SQL language. It compares Data Lakes to data warehouses and explains how Azure Data Lake Store, Analytics and U-SQL process and transform data at scale.
This document provides an overview and summary of the author's background and expertise. It states that the author has over 30 years of experience in IT working on many BI and data warehouse projects. It also lists that the author has experience as a developer, DBA, architect, and consultant. It provides certifications held and publications authored as well as noting previous recognition as an SQL Server MVP.
Unified Big Data Processing with Apache Spark (QCON 2014) - Databricks
This document discusses Apache Spark, a fast and general engine for big data processing. It describes how Spark generalizes the MapReduce model through its Resilient Distributed Datasets (RDDs) abstraction, which allows efficient sharing of data across parallel operations. This unified approach allows Spark to support multiple types of processing, like SQL queries, streaming, and machine learning, within a single framework. The document also outlines ongoing developments like Spark SQL and improved machine learning capabilities.
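As a quick illustration of that unified model, the PySpark sketch below runs a classic word count through the RDD API and then queries the same result with Spark SQL from a single SparkSession. The input path is a placeholder, not something taken from the slides.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("unified-spark-demo").getOrCreate()
sc = spark.sparkContext

# RDD API: word count expressed as parallel transformations over a text file.
lines = sc.textFile("data/sample.txt")  # placeholder path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# The same data is available to Spark SQL by converting the RDD to a DataFrame.
df = counts.toDF(["word", "cnt"])
df.createOrReplaceTempView("word_counts")
spark.sql("SELECT word, cnt FROM word_counts ORDER BY cnt DESC LIMIT 10").show()

spark.stop()
```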
Organizations are struggling to manually classify and inventory distributed, heterogeneous data assets in order to deliver value. However, the new Azure service for enterprises, Azure Synapse Analytics, is poised to help organizations fill the gap between data warehouses and data lakes.
Slides for the Data Syndrome one-hour course on PySpark. Introduces basic operations, Spark SQL, Spark MLlib, and exploratory data analysis with PySpark. Shows how to use pylab with Spark to create histograms.
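In the same spirit as the histogram example mentioned above, here is a small sketch of sampling a numeric column from a Spark DataFrame and plotting it with matplotlib (the pylab-style API). The CSV path and the "amount" column name are placeholders, not the course's actual dataset.

```python
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-eda").getOrCreate()

# Placeholder dataset: any CSV with a numeric column named "amount".
df = spark.read.csv("data/transactions.csv", header=True, inferSchema=True)

# Sample on the cluster, then collect a small pandas frame to the driver for plotting.
sample = df.select("amount").sample(fraction=0.1, seed=42).toPandas()

plt.hist(sample["amount"], bins=30)
plt.xlabel("amount")
plt.ylabel("frequency")
plt.title("Distribution of sampled amounts")
plt.show()

spark.stop()
```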
The data lake has become extremely popular, but there is still confusion on how it should be used. In this presentation I will cover common big data architectures that use the data lake, the characteristics and benefits of a data lake, and how it works in conjunction with a relational data warehouse. Then I’ll go into details on using Azure Data Lake Store Gen2 as your data lake, and various typical use cases of the data lake. As a bonus I’ll talk about how to organize a data lake and discuss the various products that can be used in a modern data warehouse.
Data Lakehouse, Data Mesh, and Data Fabric (r1) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Modern DW Architecture
The document discusses modern data warehouse architectures using Azure cloud services like Azure Data Lake, Azure Databricks, and Azure Synapse. It covers storage options like ADLS Gen 1 and Gen 2 and data processing tools like Databricks and Synapse. It highlights how to optimize architectures for cost and performance using features like auto-scaling, shutdown, and lifecycle management policies. Finally, it provides a demo of a sample end-to-end data pipeline.
Building End-to-End Delta Pipelines on GCP - Databricks
Delta has been powering many production pipelines at scale in the Data and AI space since it was introduced a few years ago.
Built on open standards, Delta provides data reliability, improves storage and query performance for big data use cases (both batch and streaming), enables fast interactive queries for BI, and supports machine learning. Delta has matured over the past couple of years on both AWS and Azure and has become the de facto standard for organizations building their Data and AI pipelines.
In today’s talk, we will explore building end-to-end pipelines on the Google Cloud Platform (GCP). Through presentation, code examples, and notebooks, we will build a Delta pipeline from ingest to consumption using our Delta Bronze-Silver-Gold architecture pattern and show examples of consuming the Delta files using the BigQuery connector.
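A highly simplified sketch of that Bronze-Silver-Gold pattern in PySpark with the Delta Lake format is shown below. The GCS bucket paths and column names (event_id, event_type) are hypothetical, the cluster is assumed to have the Delta Lake and GCS connectors available, and reading the gold layer back through the BigQuery connector is left as a final comment since its configuration varies by environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder.appName("delta-medallion-sketch")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Bronze: land raw events exactly as ingested (placeholder GCS paths).
raw = spark.read.json("gs://my-bucket/raw/events/")
raw.write.format("delta").mode("append").save("gs://my-bucket/bronze/events")

# Silver: cleanse and conform the bronze data.
bronze = spark.read.format("delta").load("gs://my-bucket/bronze/events")
silver = (bronze.dropDuplicates(["event_id"])
                .filter(F.col("event_type").isNotNull()))
silver.write.format("delta").mode("overwrite").save("gs://my-bucket/silver/events")

# Gold: business-level aggregates, ready for BI consumption
# (for example, exposed to BigQuery via the spark-bigquery connector).
gold = silver.groupBy("event_type").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("gs://my-bucket/gold/event_counts")
```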
The document discusses migrating a data warehouse to the Databricks Lakehouse Platform. It outlines why legacy data warehouses are struggling, how the Databricks Platform addresses these issues, and key considerations for modern analytics and data warehousing. The document then provides an overview of the migration methodology, approach, strategies, and key takeaways for moving to a lakehouse on Databricks.
With all the outstanding education technologies available these days, it's now possible to turn an online course into a full ecosystem of best-in-breed technologies and content providers. Come to this session to learn what that ecosystem can look like! We'll discuss how to use open educational resources (OERs) to replace expensive textbooks, and tips for finding, reviewing, and implementing the best tools right inside your LMS/VLE. We'll also look at best practices for building and adopting an open-centric strategy in your organization's teaching and learning environment.
Big Data in the Cloud with Azure Marketplace Images - Mark Kromer
The document discusses strategies for modern data warehousing and analytics on Azure including using Hadoop for ETL/ELT, integrating streaming data engines, and using lambda and hybrid architectures. It also describes using data lakes on Azure to collect and analyze large amounts of data from various sources. Additionally, it covers performing real-time stream analytics, machine learning, and statistical analysis on the data and discusses how Azure provides scalability, speed of deployment, and support for polyglot environments that incorporate many data processing and storage options.
Pentaho Big Data Analytics with Vertica and Hadoop - Mark Kromer
Overview of the Pentaho Big Data Analytics Suite from the Pentaho + Vertica presentation at Big Data Techcon 2014 in Boston for the session called "The Ultimate Selfie | Picture Yourself with the Fastest Analytics on Hadoop with HP Vertica and Pentaho"
Here I talk about examples and use cases for Big Data & Big Data Analytics and how we accomplished massive-scale sentiment, campaign, and marketing analytics for Razorfish using a collection of database, Big Data, and analytics technologies.
Big Data Analytics Projects - Real World with Pentaho - Mark Kromer
This document discusses big data analytics projects and technologies. It provides an overview of Hadoop, MapReduce, YARN, Spark, SQL Server, and Pentaho tools for big data analytics. Specific scenarios discussed include digital marketing analytics using Hadoop, sentiment analysis using MongoDB and SQL Server, and data refinery using Hadoop, MPP databases, and Pentaho. The document also addresses myths and challenges around big data and provides code examples of MapReduce jobs.
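Since the deck mentions MapReduce code examples, here is a minimal Hadoop Streaming style word-count pair in Python as an illustration of the model. It assumes the standard streaming contract of tab-separated key/value lines on stdin/stdout and is not code taken from the slides.

```python
#!/usr/bin/env python
# wordcount.py -- run as the mapper or the reducer in a Hadoop Streaming job.
import sys

def mapper():
    # Emit (word, 1) for every word on stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Sum counts per word; Hadoop Streaming delivers keys already sorted.
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

if __name__ == "__main__":
    # Choose the role via an argument: `python wordcount.py map` or `python wordcount.py reduce`.
    mapper() if sys.argv[1] == "map" else reducer()
```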
Big Data Analytics with Hadoop, MongoDB and SQL Server - Mark Kromer
This document discusses SQL Server and big data analytics projects in the real world. It covers the big data technology landscape, big data analytics, and three big data analytics scenarios using different technologies like Hadoop, MongoDB, and SQL Server. It also discusses SQL Server's role in the big data world and how to get data into Hadoop for analysis.
Big Data Analytics in the Cloud with Microsoft Azure - Mark Kromer
Big Data Analytics in the Cloud using Microsoft Azure services was discussed. Key points included:
1) Azure provides tools for collecting, processing, analyzing and visualizing big data including Azure Data Lake, HDInsight, Data Factory, Machine Learning, and Power BI. These services can be used to build solutions for common big data use cases and architectures.
2) U-SQL is a language for preparing, transforming and analyzing data that allows users to focus on the what rather than the how of problems. It uses SQL and C# and can operate on structured and unstructured data.
3) Visual Studio provides an integrated environment for authoring, debugging, and monitoring U-SQL scripts and jobs, allowing developers to build and troubleshoot big data jobs without leaving a familiar tooling environment.
This is a run-through at a 200 level of the Microsoft Azure Big Data Analytics for the Cloud data platform based on the Cortana Intelligence Suite offerings.
This document summarizes Rob Gillen's presentation on Amazon Web Services (AWS) and Windows Azure. The presentation covers:
1. An overview of the cloud computing market and discussion of core cloud primitives.
2. Examples of fault tolerance techniques used by AWS and Azure like regions, availability zones, load balancing, and replication.
3. Lessons learned on vendor-specific implementations including an example of an incorrect network change at AWS that caused a "replication storm".
4. Examples of application architectures that leverage cloud services for dynamic scaling, fault tolerance, and background processing using independent components, load balancing, and queues.
Data Visualization with Microsoft Reporting Services - Chris Price
This document discusses data visualization techniques using Microsoft Reporting Services. It begins with an introduction to data visualization as both an art and science. The document then covers principles of visual design, different types of charts like line charts and bar charts, and how to encode data visually. It provides examples of effective and ineffective data visualization. Finally, it demonstrates capabilities of Reporting Services for creating visualizations and data charts.
Microsoft Enterprise Cube is a business performance management solution that helps telecommunications service providers integrate their disparate subscriber data sources to gain insights. It provides a single view of subscriber usage across systems to identify high-value subscribers, underutilized services, and opportunities to improve loyalty. The solution uses familiar Microsoft technologies like SQL Server, SharePoint and Office to deliver customizable reports and analytics at a low total cost of ownership. It supports compliance needs and scales to accommodate growing data storage requirements of service providers.
Microsoft Cloud BI Update 2012 for SQL Saturday Philly - Mark Kromer
This document provides an overview and update of Microsoft's Cloud Business Intelligence (BI) solutions in version 3.0 from June 2012. It discusses the objectives of Cloud BI including providing data access and answers to business questions anytime from mobile devices. An overview of the session covers Windows Azure, SQL Azure, SQL Azure Reporting Services, mobile BI delivery, cloud data integration, data mining in the cloud, and hybrid scenarios. Key features of SQL Azure like import/export, data-tier applications, data sync, and federations for database scale-out are also summarized.
Philly Code Camp 2013 Mark Kromer Big Data with SQL Server - Mark Kromer
These are my slides from the May 2013 Philly Code Camp at Penn State Abington. I will post the samples, code, and scripts on my blog here following the event this Saturday: http://www.kromerbigdata.com
Microsoft Event Registration System Hosted on Windows Azure - Mark Kromer
This document describes a Windows Azure event registration app built in 2 weeks by 1 developer. It allows interactive check-in for live events on Windows 8 slates and mobile devices. It uses SQL Azure databases to store registration data and Windows Azure storage for photo sharing. The app provides check-in, photo viewing, and social media integration across Windows 8, Windows 7, and Windows Phone 7 platforms.
What's new in SQL Server 2012 for philly code camp 2012.1 - Mark Kromer
A high-level run through the SQL Server roadmap focused on the new technologies and features of SQL Server 2012. Mark Kromer presented this deck to the Philly .NET Code Camp at Penn State Abington on May 12, 2012.
This document discusses big data and SQL Server. It covers what big data is, the Hadoop environment, big data analytics, and how SQL Server fits into the big data world. It describes using Sqoop to load data between Hadoop and SQL Server, and SQL Server features for big data analytics like columnstore and PolyBase. The document concludes that a big data analytics approach is needed for massive, variable data, and that SQL Server 2012 supports this with features like columnstore and tabular SSAS.
Microsoft Azure BI Solutions in the Cloud - Mark Kromer
This document provides an overview of several Microsoft Azure cloud data and analytics services:
- Azure Data Factory is a data integration service that can move and transform data between cloud and on-premises data stores as part of scheduled or event-driven workflows.
- Azure SQL Data Warehouse is a cloud data warehouse that provides elastic scaling for large BI and analytics workloads. It can scale compute resources on demand.
- Azure Machine Learning enables building, training, and deploying machine learning models and creating APIs for predictive analytics.
- Power BI provides interactive reports, visualizations, and dashboards that can combine multiple datasets and be embedded in applications.
Azure Data Factory ETL Patterns in the Cloud - Mark Kromer
This document discusses ETL patterns in the cloud using Azure Data Factory. It covers topics like ETL vs ELT, the importance of scale and flexible schemas in cloud ETL, and how Azure Data Factory supports workflows, templates, and integration with on-premises and cloud data. It also provides examples of nightly ETL data flows, handling schema drift, loading dimensional models, and data science scenarios using Azure data services.
SQL Saturday Redmond 2019 ETL Patterns in the Cloud - Mark Kromer
This document discusses ETL patterns in the cloud using Azure Data Factory. It covers topics like ETL vs ELT, scaling ETL in the cloud, handling flexible schemas, and using ADF for orchestration. Key points include staging data in low-cost storage before processing, using ADF's integration runtime to process data both on-premises and in the cloud, and building resilient data flows that can handle schema drift.
The document discusses Azure Data Factory and its capabilities for cloud-first data integration and transformation. ADF allows orchestrating data movement and transforming data at scale across hybrid and multi-cloud environments using a visual, code-free interface. It provides serverless scalability without infrastructure to manage along with capabilities for lifting and running SQL Server Integration Services packages in Azure.
Triple C - Centralize, Cloudify and Consolidate Dozens of Oracle Databases (O... - Lucas Jellema
Dozens of Oracle Databases - each health center location has one on its local server with the same data model and the same set of applications. These databases have to be centralized and cloudified, and also consolidated into one or as few databases as possible, to lower costs, ease operations, and enable innovation. Each location can access only its own data, applications do not have to be changed, and different locations can run different versions of applications and database objects. This is the story of a critical migration: the cloud-readiness analysis, the Proofs of Concept with the Oracle Database features VPD and Edition Based Redefinition, the scalability investigation, the redesign of change management, rollout, and operational management processes, and the careful modernization of a 25-year-old platform on the latest database release and a shiny new, fully automated cloud platform. This is the story of an organization that had state-of-the-art systems in the mid-90s. And they have these same systems today - no longer state of the art. They can keep the systems alive, but barely, and at increasing cost. In the Fall of 2020, we started an investigation into the feasibility of bringing the hundreds of databases from the locations together, in a central location, in the cloud and finally: consolidated into one or at least as few database instances as possible. Using Oracle Database Virtual Private Database and Edition Based Redefinition, a smart database connection configuration in each site, and a limited reimplementation of non-cloud/non-consolidated mechanisms (interaction with the local file system, for example), we have designed and proven a working new design and migration approach.
Microsoft Data Integration Pipelines: Azure Data Factory and SSIS - Mark Kromer
The document discusses tools for building ETL pipelines to consume hybrid data sources and load data into analytics systems at scale. It describes how Azure Data Factory and SQL Server Integration Services can be used to automate pipelines that extract, transform, and load data from both on-premises and cloud data stores into data warehouses and data lakes for analytics. Specific patterns shown include analyzing blog comments, sentiment analysis with machine learning, and loading a modern data warehouse.
J1 T1 3 - Azure Data Lake store & analytics 101 - Kenneth M. Nielsen - MS Cloud Summit
This document provides an overview and demonstration of Azure Data Lake Store and Azure Data Lake Analytics. The presenter discusses how Azure Data Lake can store and analyze large amounts of data in its native format. Key capabilities of Azure Data Lake Store like unlimited storage, security features, and support for any data type are highlighted. Azure Data Lake Analytics is presented as an elastic analytics service built on Apache YARN that can process large amounts of data. The U-SQL language for big data analytics is demonstrated, along with using Visual Studio and PowerShell for interacting with Azure Data Lake. The presentation concludes with a question and answer section.
The document discusses Azure Data Factory V2 data flows. It will provide an introduction to Azure Data Factory, discuss data flows, and have attendees build a simple data flow to demonstrate how they work. The speaker will introduce Azure Data Factory and data flows, explain concepts like pipelines, linked services, and data flows, and guide a hands-on demo where attendees build a data flow to join customer data to postal district data to add matching postal towns.
The document discusses building an end-to-end analytic solution in the cloud using Microsoft Azure tools, including ingesting data from various sources into Azure Data Factory, storing it in Azure Data Lake, transforming the data using U-SQL scripts in Azure Data Lake Analytics, developing predictive models with Azure Machine Learning Studio, and visualizing insights with Power BI. It provides examples of how each tool in the analytic lifecycle can be leveraged as part of an overall cloud-based analytics solution handling large volumes of data.
Microsoft released SQL Azure more than two years ago - that's enough time for testing (I hope!). So, are you ready to move your data to the Cloud? If you’re considering a business (i.e. a production environment) in the Cloud, you need to think about methods for backing up your data, a backup plan for your data and, eventually, restoring with Red Gate Cloud Services (among other tools). In this session, you’ll see the differences, functionality, restrictions, and opportunities in SQL Azure and on-premise SQL Server 2008/2008 R2/2012. We’ll consider topics such as how to be prepared for backup and restore, and which parts of a cloud environment are most important: keys, triggers, indexes, prices, security, service level agreements, etc.
The document introduces Oracle Data Integrator and Oracle GoldenGate as solutions for enterprise data integration. It discusses challenges with fragmented data silos and the need to improve data accessibility, reliability, and quality across systems. Oracle Data Integrator is presented as a solution for real-time enterprise data integration using an ELT approach. It can integrate data across various systems faster and with lower total cost of ownership compared to traditional ETL. Oracle GoldenGate enables real-time data replication and change data capture. Together, Oracle Data Integrator and Oracle GoldenGate provide a full suite for batch, incremental, and real-time data integration.
The document introduces Oracle Data Integrator and Oracle GoldenGate as solutions for enterprise data integration. It discusses challenges with fragmented data silos and the need to improve data accessibility, reliability, and quality across systems. It describes how Oracle Data Integrator uses an ELT approach to load and transform data, leveraging database technologies. It also explains how Oracle GoldenGate enables real-time data integration. The document highlights benefits of Oracle Data Integrator such as faster performance, simpler setup and management, and lower costs compared to traditional ETL approaches.
MicroStrategy integrates with Microsoft SQL Server in several ways to optimize analytical queries:
1) MicroStrategy generates SQL Server-specific syntax and pushes over 120 functions to take advantage of SQL Server's analytics capabilities.
2) MicroStrategy uses multi-pass SQL and intermediate tables to help answer complex analytical questions, with options like global temporary tables and parallel query execution.
3) MicroStrategy supports key SQL Server features like parallel queries, indexed views, compression, and partitioning to improve performance.
This document provides an overview of Microsoft SQL Server 2005 database editions. It describes the main features and limitations of the Enterprise, Standard, Workgroup, and Express editions. These editions are designed for different organization sizes and needs, with Enterprise having the most advanced features and no limitations, and Express being lightweight with a small database size limit. The document also discusses how SQL Server supports both online transaction processing and online analytical processing workloads through its database engine and Analysis Services.
Migrating on premises workload to azure sql database - PARIKSHIT SAVJANI
This document provides an overview of migrating databases from on-premises SQL Server to Azure SQL Database Managed Instance. It discusses why companies are moving to the cloud, challenges with migration, and the tools and services available to help with assessment and migration including Data Migration Service. Key steps in the migration workflow include assessing the database and application, addressing compatibility issues, and deploying the converted schema to Managed Instance which provides high compatibility with on-premises SQL Server in a fully managed platform as a service model.
VMworld 2013: Vapp6124 automating VMware cloud and virtualization deployment... - gpadmanabh
- Active System Manager from Dell automates VMware cloud and virtualization deployments through centralized management and template-based provisioning.
- It addresses common challenges around fragmented management, siloed infrastructure, and slow/manual processes by providing rapid deployment of virtual workloads across integrated compute, storage, and networking.
- The system uses templates to define workload-specific infrastructure configurations and orchestration to automate the full lifecycle from provisioning to decommissioning.
The Oracle Corporation is an American global computer technology corporation founded in 1977. It primarily develops and markets database management systems and enterprise software. In 2013, Oracle released Oracle Database 12c, which provided cloud services capabilities. In 2014, Oracle acquired digital marketing company Datalogix for an undisclosed amount.
Co 4, session 2, aws analytics services - m vaishnavi
AWS offers several analytics services to help process and provide insights from data. These include Amazon Athena for interactive querying of data stored in S3 using SQL, Amazon EMR for processing large amounts of data using Hadoop and other open source tools, Amazon CloudSearch for setting up a search solution easily, and Amazon Kinesis for collecting, processing, and analyzing real-time data. Other services are Amazon Redshift for data warehousing, Amazon QuickSight for interactive dashboards, AWS Glue for ETL jobs, and AWS Lake Formation for securing data lakes.
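As one hedged example of how these services are typically driven programmatically, the snippet below submits a SQL query to Amazon Athena with boto3. The region, database, table, and S3 results bucket are placeholders, not values from the original session.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder database/table names and S3 location for query results.
response = athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page LIMIT 10",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

print("Query started:", response["QueryExecutionId"])
```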
Fabric Data Factory Pipeline Copy Perf Tips.pptx - Mark Kromer
This document provides performance tips for pipelines and copy activities in Azure Data Factory (ADF). It discusses:
- Using pipelines for data orchestration with conditional execution and parallel activities.
- The Copy activity provides massive-scale data movement within pipelines. Using Copy for ELT can land data quickly into a data lake.
- Gaining more throughput by using multiple parallel Copy activities but this can overload the source.
- Optimizing copy performance by using binary format, file lists/folders instead of individual files, and SQL source partitioning.
- Metrics showing copying Parquet files to a lakehouse at 5.1 GB/s while CSV and SQL loads were slower due to transformation.
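To illustrate a couple of the copy-performance knobs mentioned above, this sketch shows the shape of a Copy activity definition (expressed as a Python dict mirroring the pipeline JSON) with explicit parallelism and dynamic-range partitioning on a SQL source. The dataset names and the partition column are hypothetical placeholders.

```python
# Shape of a Copy activity with two of the tuning options discussed above:
# a cap on parallel copies and dynamic-range partitioning of the SQL source.
copy_activity = {
    "name": "CopySalesToLake",
    "type": "Copy",
    "inputs": [{"referenceName": "SqlSalesTable", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "LakeParquetFolder", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {
            "type": "AzureSqlSource",
            # Split the source read into parallel range scans on a numeric column.
            "partitionOption": "DynamicRange",
            "partitionSettings": {"partitionColumnName": "sale_id"},
        },
        "sink": {"type": "ParquetSink"},
        # Cap how many parallel copy threads the service may use for this activity.
        "parallelCopies": 8,
    },
}
```

Writing to a binary format such as Parquet, as in this sketch, is one of the reasons the Parquet-to-lakehouse metric above is so much faster than the CSV and SQL loads.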
Build data quality rules and data cleansing into your data pipelines - Mark Kromer
This document provides guidance on building data quality rules and data cleansing into data pipelines. It discusses considerations for data quality in data warehouse and data science scenarios, including verifying data types and lengths, handling null values, domain value constraints, and reference data lookups. It also provides examples of techniques for replacing values, splitting data based on values, data profiling, pattern matching, enumerations/lookups, de-duplicating data, fuzzy joins, validating metadata rules, and using assertions.
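A small PySpark sketch of a few of the checks listed above (null handling, a domain-value constraint, a reference-data lookup, and de-duplication) follows. The table paths, column names, and allowed status values are hypothetical and stand in for whatever rules your pipeline actually enforces.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-rules-sketch").getOrCreate()

orders = spark.read.parquet("lake/raw/orders")        # placeholder source path
countries = spark.read.parquet("lake/ref/countries")  # reference data for lookups

cleaned = (
    orders
    # Null handling: default missing quantities, drop rows without a business key.
    .fillna({"quantity": 0})
    .dropna(subset=["order_id"])
    # Domain value constraint: only accept known order statuses.
    .filter(F.col("status").isin("OPEN", "SHIPPED", "CANCELLED"))
    # De-duplicate on the business key.
    .dropDuplicates(["order_id"])
)

# Reference lookup: keep only orders whose country code exists in the reference table.
validated = cleaned.join(countries.select("country_code"), on="country_code", how="inner")

validated.write.mode("overwrite").parquet("lake/curated/orders")
```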
Mapping Data Flows Training deck Q1 CY22 - Mark Kromer
Mapping data flows allow for code-free data transformation at scale using an Apache Spark engine within Azure Data Factory. Key points:
- Mapping data flows can handle structured and unstructured data using an intuitive visual interface without needing to know Spark, Scala, Python, etc.
- The data flow designer builds a transformation script that is executed on a JIT Spark cluster within ADF. This allows for scaled-out, serverless data transformation.
- Common uses of mapping data flows include ETL scenarios like slowly changing dimensions, analytics tasks like data profiling, cleansing, and aggregations.
Data cleansing and prep with synapse data flows - Mark Kromer
This document provides resources for data cleansing and preparation using Azure Synapse Analytics Data Flows. It includes links to videos, documentation, and a slide deck that explain how to use Data Flows for tasks like deduplicating null values, saving data profiler summary statistics, and using metadata functions. A GitHub link shares a tutorial document for a hands-on learning experience with Synapse Data Flows.
Data cleansing and data prep with synapse data flows - Mark Kromer
This document contains links to resources about using Azure Synapse Analytics for data cleansing and preparation with Data Flows. It includes links to videos and documentation about removing null values, saving data profiler summary statistics, and using metadata functions in Azure Data Factory data flows.
Mapping Data Flows Perf Tuning April 2021 - Mark Kromer
This document discusses optimizing performance for data flows in Azure Data Factory. It provides sample timing results for various scenarios and recommends settings to improve performance. Some best practices include using memory optimized Azure integration runtimes, maintaining current partitioning, scaling virtual cores, and optimizing transformations and sources/sinks. The document also covers monitoring flows to identify bottlenecks and global settings that affect performance.
This document discusses using Azure Data Factory (ADF) for data lake ETL processes in the cloud. It describes how ADF can ingest data from on-premises, cloud, and SaaS sources into a data lake for preparation, transformation, enrichment, and serving to downstream analytics or machine learning processes. The document also provides several links to YouTube videos and articles about using ADF for these tasks.
Azure Data Factory Data Wrangling with Power Query - Mark Kromer
Azure Data Factory now allows users to perform data wrangling tasks through Power Query activities, translating M scripts into ADF data flow scripts executed on Apache Spark. This enables code-free data exploration, preparation, and operationalization of Power Query workflows within ADF pipelines. Examples of use cases include data engineers building ETL processes or analysts operationalizing existing queries to prepare data for modeling, with the goal of providing a data-first approach to building data flows and pipelines in ADF.
Azure Data Factory Data Flow Performance Tuning 101 - Mark Kromer
The document provides performance timing results and recommendations for optimizing Azure Data Factory data flows. Sample 1 processed a 421MB file with 887k rows in 4 minutes using default partitioning on an 80-core Azure IR. Sample 2 processed a table with the same size and transforms in 3 minutes using source and derived column partitioning. Sample 3 processed the same size file in 2 minutes with default partitioning. The document recommends partitioning strategies, using memory optimized clusters, and scaling cores to improve performance.
Azure Data Factory Data Flows Training (Sept 2020 Update) - Mark Kromer
Mapping data flows allow for code-free data transformation using an intuitive visual interface. They provide resilient data flows that can handle structured and unstructured data using an Apache Spark engine. Mapping data flows can be used for common tasks like data cleansing, validation, aggregation, and fact loading into a data warehouse. They allow transforming data at scale through an expressive language without needing to know Spark, Scala, Python, or manage clusters.
Data quality patterns in the cloud with ADF - Mark Kromer
Azure Data Factory can be used to build modern data warehouse patterns with Azure SQL Data Warehouse. It allows extracting and transforming relational data from databases and loading it into Azure SQL Data Warehouse tables optimized for analytics. Data flows in Azure Data Factory can also clean and join disparate data from Azure Storage, Data Lake Store, and other data sources for loading into the data warehouse. This provides simple and productive ETL capabilities in the cloud at any scale.
Azure Data Factory Data Flows Training v005 - Mark Kromer
Mapping Data Flow is a new feature of Azure Data Factory that allows building data transformations in a visual interface without code. It provides a serverless, scale-out transformation engine for processing big data with unstructured requirements. Mapping Data Flows can be authored and designed visually, with transformations, expressions, and results previews, and then operationalized with Data Factory scheduling, monitoring, and control flow.
Data Quality Patterns in the Cloud with Azure Data Factory - Mark Kromer
This document discusses data quality patterns when using Azure Data Factory (ADF). It presents two modern data warehouse patterns that use ADF for orchestration: one using traditional ADF activities and another leveraging ADF mapping data flows. It also provides links to additional resources on ADF data flows, data quality patterns, expressions, performance, and connectors.
Azure Data Factory can now use Mapping Data Flows to orchestrate ETL workloads. Mapping Data Flows allow users to visually design transformations on data from disparate sources and load the results into Azure SQL Data Warehouse for analytics. The key benefits of Mapping Data Flows are that they provide a visual interface for building expressions to cleanse and join data with auto-complete assistance and live previews of expression results.
Mapping Data Flow is a new feature of Azure Data Factory that allows users to build data transformations in a visual interface without code. It provides a serverless, scale-out transformation engine for processing big data with unstructured requirements. Mapping Data Flows can be operationalized with Data Factory's scheduling, control flow, and monitoring capabilities.
ADF Mapping Data Flows Training Slides V1 - Mark Kromer
Mapping Data Flow is a new feature of Azure Data Factory that allows users to build data transformations in a visual interface without code. It provides a serverless, scale-out transformation engine to transform data at scale in the cloud in a resilient manner for big data scenarios involving unstructured data. Mapping Data Flows can be operationalized with Azure Data Factory's scheduling, control flow, and monitoring capabilities.
Azure Data Factory Data Flow Limited Preview for January 2019 - Mark Kromer
Azure Data Factory introduces Visual Data Flow, a limited preview feature that allows users to visually design data flows without writing code. It provides a drag-and-drop interface for users to select data sources, place transformations on imported data, and choose destinations for transformed data. The flows are run on Azure and default to using Azure Data Lake Storage for staging transformed data, though users can optionally configure other staging options. The feature supports common data formats and transformations like sorting, merging, joining, and lookups.
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager API - UiPathCommunity
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
Big Data Analytics Quick Research Guide by Arthur Morgan - Arthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, presentation slides, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
What is Model Context Protocol(MCP) - The new technology for communication bw... - Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
How Can I use the AI Hype in my Business Context? - Daniel Lehner
Is AI just hype? Or is it the game changer your business needs?
Everyone’s talking about AI but is anyone really using it to create real value?
Most companies want to leverage AI. Few know how.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker... - TrustArc
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
Designing Low-Latency Systems with Rust and ScyllaDB: An Architectural Deep DiveScyllaDB
Want to learn practical tips for designing systems that can scale efficiently without compromising speed?
Join us for a workshop where we’ll address these challenges head-on and explore how to architect low-latency systems using Rust. During this free interactive workshop oriented for developers, engineers, and architects, we’ll cover how Rust’s unique language features and the Tokio async runtime enable high-performance application development.
As you explore key principles of designing low-latency systems with Rust, you will learn how to:
- Create and compile a real-world app with Rust
- Connect the application to ScyllaDB (NoSQL data store)
- Negotiate tradeoffs related to data modeling and querying
- Manage and monitor the database for consistently low latencies
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web wird als die nächste Generation des HCL Notes-Clients gefeiert und bietet zahlreiche Vorteile, wie die Beseitigung des Bedarfs an Paketierung, Verteilung und Installation. Nomad Web-Client-Updates werden “automatisch” im Hintergrund installiert, was den administrativen Aufwand im Vergleich zu traditionellen HCL Notes-Clients erheblich reduziert. Allerdings stellt die Fehlerbehebung in Nomad Web im Vergleich zum Notes-Client einzigartige Herausforderungen dar.
Begleiten Sie Christoph und Marc, während sie demonstrieren, wie der Fehlerbehebungsprozess in HCL Nomad Web vereinfacht werden kann, um eine reibungslose und effiziente Benutzererfahrung zu gewährleisten.
In diesem Webinar werden wir effektive Strategien zur Diagnose und Lösung häufiger Probleme in HCL Nomad Web untersuchen, einschließlich
- Zugriff auf die Konsole
- Auffinden und Interpretieren von Protokolldateien
- Zugriff auf den Datenordner im Cache des Browsers (unter Verwendung von OPFS)
- Verständnis der Unterschiede zwischen Einzel- und Mehrbenutzerszenarien
- Nutzung der Client Clocking-Funktion
Quantum Computing Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
AI Changes Everything – Talk at Cardiff Metropolitan University, 29th April 2...Alan Dix
Talk at the final event of Data Fusion Dynamics: A Collaborative UK-Saudi Initiative in Cybersecurity and Artificial Intelligence funded by the British Council UK-Saudi Challenge Fund 2024, Cardiff Metropolitan University, 29th April 2025
https://alandix.com/academic/talks/CMet2025-AI-Changes-Everything/
Is AI just another technology, or does it fundamentally change the way we live and think?
Every technology has a direct impact with micro-ethical consequences, some good, some bad. However more profound are the ways in which some technologies reshape the very fabric of society with macro-ethical impacts. The invention of the stirrup revolutionised mounted combat, but as a side effect gave rise to the feudal system, which still shapes politics today. The internal combustion engine offers personal freedom and creates pollution, but has also transformed the nature of urban planning and international trade. When we look at AI the micro-ethical issues, such as bias, are most obvious, but the macro-ethical challenges may be greater.
At a micro-ethical level AI has the potential to deepen social, ethnic and gender bias, issues I have warned about since the early 1990s! It is also being used increasingly on the battlefield. However, it also offers amazing opportunities in health and educations, as the recent Nobel prizes for the developers of AlphaFold illustrate. More radically, the need to encode ethics acts as a mirror to surface essential ethical problems and conflicts.
At the macro-ethical level, by the early 2000s digital technology had already begun to undermine sovereignty (e.g. gambling), market economics (through network effects and emergent monopolies), and the very meaning of money. Modern AI is the child of big data, big computation and ultimately big business, intensifying the inherent tendency of digital technology to concentrate power. AI is already unravelling the fundamentals of the social, political and economic world around us, but this is a world that needs radical reimagining to overcome the global environmental and human challenges that confront us. Our challenge is whether to let the threads fall as they may, or to use them to weave a better future.
AI Changes Everything – Talk at Cardiff Metropolitan University, 29th April 2...Alan Dix
ETL in the Cloud With Microsoft Azure
1. Microsoft ETL in the Cloud
Microsoft Azure
Cloud Data Platform
Mark Kromer
Microsoft Azure Cloud Data Architect
@kromerbigdata
@mssqldude
2. What is ETL?
• Acronym for “Extract, Transform and Load”
• Classic form of data movement, aggregation, summarization, cleansing, and loading into a Data Warehouse
• More loosely defined as data management processes that clean, move, and aggregate data
• Formal ETL processes are typically scheduled (e.g. hourly, nightly, or monthly)
• Not real-time, although micro-batch ETL systems are quite common (see the illustrative sketch below)
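To make the batch nature of a classic ETL step concrete, here is a minimal, illustrative Python sketch (not part of the original deck). It extracts rows from a hypothetical CSV extract, applies a simple cleansing and aggregation transform, and loads a summary into a staging file that a warehouse bulk-load step could pick up; the file and column names are assumptions chosen purely for illustration.

# Minimal batch ETL sketch (illustrative only; file and column names are assumed)
import csv
from collections import defaultdict

def run_etl(extract_path="sales_extract.csv", load_path="sales_summary_staging.csv"):
    # Extract: read raw rows from the source extract
    with open(extract_path, newline="") as src:
        rows = list(csv.DictReader(src))

    # Transform: cleanse (drop rows without a region) and aggregate revenue per region
    totals = defaultdict(float)
    for row in rows:
        region = (row.get("region") or "").strip()
        if not region:
            continue  # cleansing step: skip incomplete records
        totals[region] += float(row.get("revenue") or 0)

    # Load: write the summarized result to a staging file for a warehouse bulk load
    with open(load_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["region", "total_revenue"])
        for region, total in sorted(totals.items()):
            writer.writerow([region, round(total, 2)])

if __name__ == "__main__":
    run_etl()  # in practice a scheduler (e.g. SQL Agent or an ADF trigger) runs this hourly or nightly

In a production pipeline the same pattern is simply scheduled and scaled out by the orchestration layer, such as SQL Agent for SSIS on-premises or ADF triggers in the cloud.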
3. Classic Enterprise ETL in the Cloud with Azure
Microsoft and ISV Marketplace common offerings (Examples)
• SQL Server + SSIS: spin up a SQL Server VM image from the Azure Portal to run SSIS in the cloud via Azure IaaS
• Informatica: an enterprise-grade ETL product suite that offers an Azure VM available in the ISV Marketplace
• Attunity: a Microsoft partner with Azure ISV Marketplace offerings including CDC; Attunity Compose can provide additional ETL/ELT capabilities
4. ELT in the Cloud with Azure Data Factory
ADF provides Extract, Transform and Load in the Cloud
• ADF relies on external execution engines like SQL Server, Hadoop and AzureML
• Provides very easy Copy Activities to get started quickly (a minimal SDK sketch follows below)
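As a rough illustration of how little is needed to get a Copy Activity running, below is a sketch following the publicly documented azure-mgmt-datafactory Python SDK pattern. The subscription, resource group, factory, and dataset names are placeholders, and it assumes the referenced blob datasets and their linked services already exist in the factory; treat it as an outline under those assumptions, not a complete, verified implementation.

# Sketch: define and publish a blob-to-blob Copy Activity pipeline (placeholder names throughout)
from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

# Authenticate with a service principal (IDs and secret are placeholders)
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# A Copy Activity that reads from an existing input dataset and writes to an output dataset
copy_activity = CopyActivity(
    name="CopyFromBlobToBlob",
    inputs=[DatasetReference(type="DatasetReference", reference_name="InputBlobDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="OutputBlobDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

# Wrap the activity in a pipeline and publish it to the factory
pipeline = PipelineResource(activities=[copy_activity])
adf_client.pipelines.create_or_update(
    "<resource-group>", "<data-factory-name>", "CopyBlobPipeline", pipeline
)

The same definition can also be authored as pipeline JSON in the ADF UI; the SDK call above simply publishes an equivalent PipelineResource.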
5. Azure ML as an ETL Tool
Transforming Data is a common task for Data Scientists and Data Engineers
• AML has a fully cloud/web-based UI with basic SQL transformations
• AML’s core capability is training and scoring data via ML models, but you don’t need to include those advanced analytics in your “data flow”.
• Schedule ETL activities via ADF
Diagram: Data Transformations