In this session I explained to the SQL community what the Common Data Service is: a new database, or only a service that allows power users to create applications?
Common Data Model - A Business Database! by Pedro Azevedo
In this session I presented how Common Data Service will be the future of Business Application Platform and how this platform will help the Dynamics 365 to grow.
Common Service and Common Data Model by Henry McCallum, KTL Solutions
These are two of the most interesting topics, yet many people don’t know about them. The Common Data Service (CDS) is confusing for many, and honestly, a more technical approach that Microsoft was reluctant to publicize at first. It’s a hidden gem. The CDS allows you to securely store and manage data within a set of standard and custom entities. After your data is stored, you can then do much more with it, such as customizing entities, leveraging productivity tools, and securing your data. It’s the middle layer between foundation, customer service, sales, purchasing, and people. Flow is Microsoft’s long-promised cross-platform workflow engine. Join us as Henry dives into how these two connector tools showcase Microsoft’s solutions and can help synchronize your day-to-day activities.
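As a rough illustration of the entity idea described above, here is a minimal Python sketch — not the real CDS SDK; the class, entity, and field names are all invented — of standard and custom entities that validate their own fields:

```python
# Illustrative model of "standard" vs "custom" entities (hypothetical names,
# not the CDS API): each entity knows its fields and rejects unknown ones.

class Entity:
    def __init__(self, name, fields, is_custom=False):
        self.name = name
        self.fields = set(fields)
        self.is_custom = is_custom
        self.rows = []

    def insert(self, row):
        # Enforce the entity's schema on every stored row.
        unknown = set(row) - self.fields
        if unknown:
            raise ValueError(f"unknown fields: {unknown}")
        self.rows.append(row)

# A standard entity shipped with the platform...
account = Entity("account", {"name", "revenue"})
# ...and a custom entity a power user might add.
invoice = Entity("new_invoice", {"number", "total"}, is_custom=True)

account.insert({"name": "Contoso", "revenue": 1_000_000})
print(len(account.rows))  # 1
```

The real service adds security roles, relationships, and integration on top of this basic store-and-validate behavior.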
Why you should use Common Data Service by Joel Lindstrom
The document discusses the benefits of using Common Data Service (CDS), including its security features, data relationships, calculations, Exchange/Outlook integration, real-time workflows, model-driven apps and portals, administration features, and development tools. CDS provides a centralized cloud data service with common business entities, security at the platform level, multi-language and currency support, and integration with other Microsoft products and technologies like Azure, Power Apps, and Dynamics 365.
SPSNYC2019 - What is Common Data Model and how to use it? by Nicolas Georgeault
The document discusses the Common Data Model and Common Data Service. It provides an overview of what the Common Data Model is, how it standardizes business entities and concepts. It also discusses how the Common Data Service provides a service level agreement and business-centric approach. Examples are given of how the Common Data Model and Service can be used to build applications and integrate data across different systems using a single data model.
Data virtualization allows applications to access and manipulate data without knowledge of physical data structures or locations. Teiid is a data virtualization system comprised of tools, components and services for creating and executing bidirectional data services across distributed, heterogeneous data sources in real-time without moving data. Teiid includes a query engine, embedded driver, server, connectors and tools for creating virtual databases (VDBs) containing models that define data structures and views. Models represent data sources or abstractions and must be validated and configured with translators and resource adapters to access physical data when a VDB is deployed.
Presentation on Data Mesh: a paradigm shift toward a new type of ecosystem architecture, a modern distributed architecture that gives each domain ownership of its domain-specific data, views “data as a product,” and lets each domain handle its own data pipelines.
This document discusses building applications with SQL Data Services and Windows Azure. It provides an agenda that introduces SQL Data Services architecture, describes SDS application architectures, and how to scale out with SQL Data Services. It also discusses the SQL Data Services network topology and performance considerations for accessing SDS from applications.
Data virtualization, Data Federation & IaaS with JBoss Teiid by Anil Allewar
Enterprises have always grappled with the problem of information silos that needed to be merged using multiple data warehouses (DWs) and business intelligence (BI) tools so that enterprises could mine this disparate data for business decisions and strategy. Traditionally this data integration was done with ETL by consolidating multiple DBMSs into a single data storage facility.
Data virtualization enables abstraction, transformation, federation, and delivery of data taken from a variety of heterogeneous data sources as if it were a single virtual data source, without the need to physically copy the data for integration. It allows consuming applications or users to access data from these various sources via a request to a single access point and delivers information-as-a-service (IaaS).
In this presentation, we will explore what data virtualization is and how it differs from the traditional data integration architecture. We’ll also validate the data virtualization and federation concepts by working through an example (see videos at the GitHub repo) that federates data across two heterogeneous data sources, MySQL and MongoDB, using the JBoss Teiid data virtualization platform.
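As a stand-in for the Teiid demo described above, the following Python sketch shows what federation accomplishes, using sqlite3 in place of MySQL and a list of dicts in place of MongoDB documents; all table, field, and customer names are illustrative:

```python
import sqlite3

# Hedged sketch of what federation achieves: a single logical query spanning
# a relational source and a document source. Teiid does this inside a virtual
# database; here we do it by hand to show the idea.
sql = sqlite3.connect(":memory:")  # stands in for MySQL
sql.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
sql.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])

orders = [  # stands in for a MongoDB collection
    {"customer_id": 1, "total": 250.0},
    {"customer_id": 1, "total": 100.0},
]

def customer_orders(name):
    # The "federated" join: resolve the key in one source,
    # then filter documents in the other.
    row = sql.execute("SELECT id FROM customers WHERE name = ?",
                      (name,)).fetchone()
    return [o for o in orders if o["customer_id"] == row[0]]

print(sum(o["total"] for o in customer_orders("Acme")))  # 350.0
```

A virtualization platform moves this join logic out of application code and behind a single JDBC/SQL endpoint.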
This document provides an introduction to data virtualization and JBoss Data Virtualization. It discusses key concepts like data services, connectors, models, and virtual databases. It describes the architecture and components of JBoss Data Virtualization including tools for creating data views, the runtime environment, and repository. Use cases like business intelligence, 360-degree views, cloud data integration, and big data integration are also covered.
Information Virtualization: Query Federation on Data Lakes by DataWorks Summit
This document discusses information virtualization and query federation on data lakes. It provides examples of how information virtualization hides the complexity of integrating data from different sources and allows queries to span multiple data repositories. It also discusses best practices for query federation, including avoiding complex joins across many systems and keeping statistics up to date on all tables in a federated system.
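One of the best practices above, keeping statistics up to date, can be illustrated with a toy cost decision: given current row counts per table, ship the smaller side of a federated join across the network. A hedged Python sketch, with invented table names and counts:

```python
# Toy illustration of statistics-driven federation planning: with fresh row
# counts, the planner ships the smaller table to the larger table's system.
# Table names and row counts are invented for the example.

stats = {"warehouse.sales": 50_000_000, "crm.customers": 80_000}

def ship_side(left, right, stats):
    """Return the table whose rows should be moved across systems."""
    return left if stats[left] <= stats[right] else right

print(ship_side("warehouse.sales", "crm.customers", stats))  # crm.customers
```

Stale statistics would flip this decision the wrong way, which is exactly why the best practice exists.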
JBoss Teiid is a data virtualization and federation system that provides a uniform API for accessing data. It allows for data from different sources like SQL and NoSQL databases, unstructured data, and web services to be virtually integrated. Teiid extracts metadata from multiple data sources through virtual databases (VDBs), enabling federation. The consumer API is simple JDBC usage. Teiid is fully integrated with JBoss and customizable for performance and extensible through custom binders.
This document provides an overview of Teiid, an open source data integration platform. It begins with an agenda that covers what Teiid is, basic usage, tooling, demonstrations, internals, heterogeneous data sources, enterprise data services, and how attendees can get involved. The document then delves into explanations of Teiid's history, architecture, modeling tooling, dynamic and static virtual databases, tooling like the console and admin shell, and how it works internally with parsing, optimization, and federation. It concludes by discussing how developers can contribute to Teiid and opportunities for students.
A Crash Course in SQL Server Administration for Reluctant Database Administra... by Chad Petrovay
Reluctant DBAs are those of us who aren’t formally trained in database administration, but manage through a combination of our wits, technical manuals, and online forums. This practical session will explore best practices for installing, configuring, and maintaining Microsoft SQL Server, and highlight some SQL Server features (and Easter eggs) that can improve your user experience and institutional ROI.
xRM is the natural evolution of CRM. Businesses are expanding their use of new-generation CRM solutions to manage a wider range of scenarios, including asset management, prospect management, citizen management, and many more. Microsoft CRM sits on the .NET platform and because of that, it is much more than a traditional CRM product. Instead, think of Microsoft CRM as a rapid application development platform with out-of-the-box CRM functionality. The purpose of this session is to understand Microsoft's CRM strategy and how you can get to market first with world-class business solutions.
The Microsoft Common Data Model (CDM) is the Azure-based storage mechanism for the Microsoft business application platform. The most intuitive use case for the CDM is to provide an easy-to-use relational database for PowerApps. App creators can use the CDM to model their data and easily share apps created on that data. CDM aims to provide much more than storage: it also provides definitions for common business entities as well as integration capabilities for importing data from multiple sources such as SharePoint Online and on-premises systems. With data sourced across the enterprise, businesses can drive insights and actions using PowerApps, Power BI, and Microsoft Flow based solutions.
Data Integration through Data Virtualization (SQL Server Konferenz 2019) by Cathrine Wilhelmsen
Data Integration through Data Virtualization - PolyBase and new SQL Server 2019 Features (Presented at SQL Server Konferenz 2019 on February 21st, 2019)
This document discusses the limitations of traditional ETL processes and how data virtualization can help address these issues. Specifically:
- ETL processes are complex, involve costly data movement between systems, and can result in data inconsistencies due to latency.
- Data virtualization eliminates data movement by providing unified access to data across different systems. It enables queries to retrieve integrated data in real-time without physical replication.
- Rocket Data Virtualization is a mainframe-resident solution that reduces TCO by offloading up to 99% of data integration processing to specialty engines, simplifying access to mainframe data via SQL.
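The contrast the bullets draw between ETL and virtualization can be sketched in a few lines of Python: an ETL copy goes stale when the source changes, while a query-time view stays current. This is purely illustrative, not any vendor's implementation:

```python
# ETL vs. virtualization in miniature: a snapshot copy versus a read-through
# view over the same (invented) source data.

source = {"orders": [{"id": 1, "status": "open"}]}

etl_copy = list(source["orders"])   # ETL: data is copied once, up front

def virtual_view():                 # virtualization: delegate at query time
    return source["orders"]

# The source changes after the ETL load...
source["orders"].append({"id": 2, "status": "open"})

# ...so the copy is stale (latency-induced inconsistency), the view is not.
print(len(etl_copy), len(virtual_view()))  # 1 2
```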
Enabling Data as a Service with the JBoss Enterprise Data Services Platform by prajods
This presentation was given at JUDCon 2013 (January 17–18, Bangalore) by Prajod Vettiyattil and Gnanaguru Sattanathan. The presentation deals with the why, what, and how of data services and data services platforms. It also explains the features of the JBoss Enterprise Data Services Platform.
The need for Data Services is explained with 3 Business use cases:
1. Post purchase customer experience improvement for an Auto manufacturer
2. Enterprise Data Access Layer
3. Data Services for regulatory reporting requirements like Dodd-Frank
JBoss Enterprise Data Services (Data Virtualization) by plarsen67
The document discusses how JBoss Enterprise Data Services and the JBoss Enterprise SOA Platform can be used to address common business challenges related to data, decision making, inflexible systems, and manual processes. It presents five solution patterns - data foundation, information delivery, externalizing knowledge, automating decision making, and codifying business processes - and how the technologies in the JBoss platforms map to implementing these patterns to generate new business value from existing assets.
Power BI Overview, Deployment and Governance by James Serra
This document provides an overview of external sharing in Power BI using Azure Active Directory Business-to-Business (Azure B2B) collaboration. Azure B2B allows Power BI content to be securely distributed to guest users outside the organization while maintaining control over internal data. There are three main approaches for sharing - assigning Pro licenses manually, using guest's own licenses, or sharing to guests via Power BI Premium capacity. Azure B2B handles invitations, authentication, and governance policies to control external sharing. All guest actions are audited. Conditional access policies can also be enforced for guests.
DMsuite is proprietary data masking software that can profile, mask, audit, provision and manage data to replace sensitive information with fictitious data. It allows testing and data sharing while protecting sensitive information. The document discusses how DMsuite works, its features, benefits, ROI and support options.
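As a generic illustration of the masking idea — not DMsuite's actual algorithm; the key and the SSN format choice are invented for the example — here is a deterministic, format-preserving masker in Python:

```python
import hashlib

# Generic data-masking sketch: replace a sensitive value with a fictitious
# one that keeps the same shape, deterministically, so masked data stays
# consistent across tables. (Illustrative only; not DMsuite's algorithm.)

def mask_ssn(ssn, secret="demo-key"):
    digest = hashlib.sha256((secret + ssn).encode()).hexdigest()
    # Keep only digits from the hash and pad to nine, preserving the format.
    digits = "".join(c for c in digest if c.isdigit())[:9].ljust(9, "0")
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

masked = mask_ssn("123-45-6789")
print(masked)                               # fictitious but SSN-shaped
assert mask_ssn("123-45-6789") == masked    # deterministic across runs
```

Determinism is what makes masked test databases referentially consistent: the same input always masks to the same output.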
Organize and manage master and metadata centrally, built upon Kong, Cassandra, Neo4j, and Elasticsearch. Managing master and metadata is a very common problem with no good open-source alternative as far as I know, so I am initiating this project: MasterMetaData.
oracle data integrator training | oracle data integrator training videos | or... by Nancy Thomas
Website : http://www.todaycourses.com
Oracle Data Integrator 11g Course Content :
1. Introduction to Oracle Data Integrator
What is Oracle Data Integrator?
Why Oracle Data Integrator?
Overview of ODI 11g Architecture
Overview of ODI 11g Components
About Graphical Modules
Types of ODI Agents
Overview of Oracle Data Integrator Repositories
2. Administrating ODI Repositories and Agents
Administrating the ODI Repositories
Creating Repository Storage Spaces
Creating and Connecting to the Master Repository
Creating and Connecting to the Work Repository
Managing ODI Agents
Creating a Physical Agent
Launching a Listener, Scheduler and Web Agent
Example of Load Balancing
In the healthcare sector, data security, governance, and quality are crucial for maintaining patient privacy and ensuring the highest standards of care. At Florida Blue, the leading health insurer of Florida serving over five million members, there is a multifaceted network of care providers, business users, sales agents, and other divisions relying on the same datasets to derive critical information for multiple applications across the enterprise. However, maintaining consistent data governance and security for protected health information and other extended data attributes has always been a complex challenge that did not easily accommodate the wide range of needs for Florida Blue’s many business units. Using Apache Ranger, we developed a federated Identity & Access Management (IAM) approach that allows each tenant to have their own IAM mechanism. All user groups and roles are propagated across the federation in order to determine users’ data entitlement and access authorization; this applies to all stages of the system, from the broadest tenant levels down to specific data rows and columns. We also enabled audit attributes to ensure data quality by documenting data sources, reasons for data collection, date and time of data collection, and more. In this discussion, we will outline our implementation approach, review the results, and highlight our “lessons learned.”
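The row- and column-level entitlement idea can be sketched as follows. This is purely illustrative Python, not the Apache Ranger API; the role, column, and region names are invented:

```python
# Sketch of role-based row/column entitlement (illustrative, not Ranger's
# actual policy model): a role grants a set of visible columns and a row
# filter, and authorization applies both to every result.

POLICIES = {
    "claims_analyst": {
        "columns": {"member_id", "claim_total"},
        "row_filter": lambda r: r["region"] == "FL",
    },
}

def authorize(role, rows):
    policy = POLICIES[role]
    return [{k: v for k, v in r.items() if k in policy["columns"]}
            for r in rows if policy["row_filter"](r)]

rows = [
    {"member_id": 1, "claim_total": 900.0, "ssn": "x", "region": "FL"},
    {"member_id": 2, "claim_total": 120.0, "ssn": "y", "region": "GA"},
]

# The SSN column and the out-of-region row are both stripped.
print(authorize("claims_analyst", rows))
# [{'member_id': 1, 'claim_total': 900.0}]
```

In the federated setup described above, these policies would be evaluated per tenant, with groups and roles propagated across the federation.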
This document introduces data mining with SQL Server. It discusses how data mining can provide business insights by analyzing large amounts of data. It describes how SQL Server integrates data acquisition, transformation, discovery, and presentation capabilities for data mining. It also provides an example of how a customer used SQL Server to analyze transaction and stock data to increase sales and profits.
Enabling digital transformation: API ecosystems and data virtualization by Denodo
Watch the full webinar here: https://buff.ly/2KBKzLJ
Digital transformation, as cliché as it sounds, is on top of every decision maker’s strategic initiative list. And at the heart of any digital transformation, no matter the industry or the size of the company, there is an application programming interface (API) strategy. While API platforms enable companies to manage large numbers of APIs working in tandem, monitor their usage, and establish security between them, they are not optimized for data integration, so they cannot easily or quickly integrate large volumes of data between different systems. Data virtualization, however, can greatly enhance the capabilities of an API platform, increasing the benefits of an API-based architecture. With data virtualization as part of an API strategy, companies can streamline digital transformations of any size and scope.
Join us for this webinar to see these technologies in action in a demo and to get the answers to the following questions:
*How can data virtualization enhance the deployment and exposure of APIs?
*How does data virtualization work as a service container, as a source for microservices and as an API gateway?
*How can data virtualization create managed data services ecosystems in a thriving API economy?
*How are GetSmarter and others leveraging data virtualization to facilitate API-based initiatives?
Azure Digital Twins is a platform as a service (PaaS) offering that enables the creation of knowledge graphs based on digital models of entire environments. These environments could be buildings, factories, farms, energy networks, railways, stadiums, and more—even entire cities. These digital models can be used to gain insights that drive better products, optimized operations, reduced costs, and breakthrough customer experiences.
After nearly two years, Azure Digital Twins has been rewritten and it's off to a great start. In this session, we will see what it is for, see where it has changed, and see how to use it in our IoT strategy.
CRM-UG Summit Phoenix 2018 - What is Common Data Model and how to use it? by Nicolas Georgeault
My slide deck about the Common Data Service and Model from CRMUG Summit in Phoenix, October 2018. This technology is under development, so content is subject to change and reflects the service as of 10/18/2018.
The document discusses the Common Data Model (CDM) and how to use it. It describes CDM as an open-sourced definition of standard business entities that provides a common data model that can be shared across applications. It outlines how CDM allows building applications faster by composing analytics, user experiences, and automation using integrated Microsoft services. It also discusses moving data into CDM using the Data Integrator and building applications with CDM using PowerApps, the CDS SDK, Microsoft Flow, and Power BI.
IBM InfoSphere Information Server 8.1 is a unified platform for understanding, cleansing, transforming and delivering trustworthy information. It combines the technologies of components like the Information Server Console, Metadata Workbench, Business Glossary, DataStage & QualityStage, Information Analyzer and Information Services Director. The platform provides shared services for administration and reporting. Metadata services allow accessing and integrating data. Key components include the Metadata Server, Metadata Workbench and Business Glossary for managing metadata. DataStage & QualityStage is used for designing jobs to transform and cleanse data, while Information Analyzer helps understand data quality.
Want to know more about the Common Data Model and Service? Do you need to understand the difference between CDS for Apps and CDS for Analytics? Feel free to use these slides and send me your feedback.
The document provides an agenda for a 5-day admin workshop covering topics like organization setup, user interface configuration, standard and custom objects, and data management. Day 1 covers organization setup, global user interface, and standard/custom objects. Day 2 covers user setup, security, and workflow automation. Later days cover additional topics like reports, mobile configuration, and the AppExchange. The document also includes introductory information and instructions for various setup and configuration exercises to be completed during the workshop.
Applying Auto-Data Classification Techniques for Large Data SetsPriyanka Aash
In the current data security landscape, large volumes of data are being created across the enterprise. Manual techniques to inventory and classify data make this a tedious and expensive activity. To create a time- and cost-effective implementation of security and access controls, it becomes key to automate the data classification process.
(Source: RSA USA 2016-San Francisco)
This document provides an overview of key concepts related to database management and business intelligence. It discusses the database approach to data management, including entities, attributes, relationships, keys, normalization, and entity-relationship diagrams. It also covers relational database management systems, their operations, capabilities and querying languages. Additional topics include big data, business intelligence tools for capturing, organizing and analyzing data, and ensuring data quality. The agenda outlines a review of chapters from the textbook and an in-class ERD exercise in preparation for the first exam.
This document discusses different types of digital data including structured, unstructured, and semi-structured data. It provides examples and characteristics of each type of data. Structured data is organized in rows and columns like in databases and can be easily processed by computers. Unstructured data lacks a predefined structure or organization and makes up about 80% of organizational data. Semi-structured data has some structure but does not conform fully to predefined data models. The document also discusses big data in terms of its volume, velocity and variety characteristics as well as challenges in capturing, storing, managing and analyzing big data.
This document discusses different types of digital data including structured, unstructured, and semi-structured data. It provides examples and characteristics of each type of data. Structured data is organized in rows and columns, like in a database. Unstructured data lacks a predefined structure or organization, like text documents, images, and videos. Semi-structured data has some structure but not a rigid schema, like XML files. The majority of organizational data is unstructured. Big data is also discussed, which is high-volume, high-velocity, and high-variety data that requires new technologies to capture, store, manage and analyze.
Data development involves analyzing, designing, implementing, deploying, and maintaining data solutions to maximize the value of enterprise data. It includes defining data requirements, designing data components like databases and reports, and implementing these components. Effective data development requires collaboration between business experts, data architects, analysts, developers and other roles. The activities of data development follow the system development lifecycle and include data modeling, analysis, design, implementation, and maintenance.
The document discusses data development and data modeling concepts. It describes data development as defining data requirements, designing data solutions, and implementing components like databases, reports, and interfaces. Effective data development requires collaboration between business experts, data architects, analysts and developers. It also outlines the key activities in data modeling including analyzing information needs, developing conceptual, logical and physical data models, designing databases and information products, and implementing and testing the data solution.
The document discusses StreamCentral, a real-time business intelligence and data analytics platform. It allows users to build high-impact business solutions quickly using streaming and static data without extensive technical skills. StreamCentral automatically builds and manages an information warehouse with real-time updated data marts. It incorporates streaming data, events, entities, and environmental data to provide context. The platform also handles security, metadata management, and supports various data storage technologies.
This document provides an overview and recommendations for an Enterprise 365 project. It discusses common business objectives like process enhancement and security. Solution areas covered include security, integration, reporting, data cleaning and migration. General user needs are also outlined such as quick searching, filtering, and customizable views. The document recommends holding a kick-off meeting, getting documentation, launching carefully in production, and providing trainings.
This document provides an overview of Force.com databases including standard and custom objects, fields, relationships, and querying. Key points include: standard objects contain core CRM data while custom objects are defined by developers; fields include standard, custom, and field types like auto-number, formula, and encrypted fields; relationships include lookups, master-details, and hierarchical relationships; and querying is done through SOQL for structured queries and SOSL for full-text search.
A database management system (DBMS) stores and manages data and provides efficient ways to store, retrieve, and manipulate that data. The primary goals of a DBMS are to provide convenient and efficient ways to store and retrieve database information. It uses tables to represent entities, their relationships, and the data, with each table having multiple columns and rows. Some common DBMSs are Microsoft Access, which is designed for small home or business databases, and SQL Server, which is intended for larger server-based databases accessed remotely.
This document compares Microsoft SharePoint Online to an on-premises SharePoint implementation. Some key differences include SharePoint Online having higher security but more limited customization options, compared to on-premises, which has more robust features but requires managing security yourself. Migrating to SharePoint Online can provide cost savings on licensing and infrastructure but requires planning to address limitations in areas like search and administration interfaces. The document provides considerations for law firms evaluating a move to SharePoint Online.
Why an AI-Powered Data Catalog Tool is Critical to Business SuccessInformatica
Imagine a fast, more efficient business thriving on trusted data-driven decisions. An intelligent data catalog can help your organization discover, organize, and inventory all data assets across the org and democratize data with the right balance of governance and flexibility. Informatica's data catalog tools are powered by AI and can automate tedious data management tasks and offer immediate recommendations based on derived business intelligence. We offer data catalog workshops globally. Visit Informatica.com to attend one near you.
A service oriented architecture (SOA) organizes software into business services that are network accessible and executable. Key characteristics include quality of service specifications, discoverable services and data catalogs, and use of industry standards. A SOA breaks up monolithic systems into reusable components called services that can be more easily maintained and replaced. Implementing a SOA requires organizing infrastructure, data, security, computing, communication, and application services to maximize reuse across the enterprise.
Architecture of Dynamics CRM with Office 365 and AzurePedro Azevedo
This document provides an overview of Microsoft Dynamics 365 and how it integrates with other Microsoft technologies like Office 365 and Azure. It discusses the CRM market share and Dynamics 365's growth. It describes how Dynamics 365 can be deployed on Azure and leverages various Azure services for capabilities like offline syncing, machine learning, voice of customer, and more. It also mentions related technologies like PowerApps, Logic Apps, and pricing considerations.
Office 365 Portugal - Dynamics CRM with Office 365 — Pedro Azevedo
The document presents an introduction to Microsoft Dynamics CRM, comparing it with Salesforce and highlighting its main features, such as sales, marketing, and service. It also describes how Dynamics CRM can be integrated with Office 365 to increase productivity, with tools such as Outlook, SharePoint, Yammer, and Power BI. Finally, it provides details on pricing and contacts.
This document discusses a Microsoft MVP Showcase event that took place on April 22, 2015. The event focused on real world answers and independent experts. Pedro Azevedo presented on the topic "Where's the code?" regarding Microsoft Dynamics CRM. The document also mentions client applications, CRM platforms, xRM utilities, and social responsibility initiatives like the clown doctors who bring joy to hospitalized children in Portugal.
Dynamics CRM - More than a CRM platform — Pedro Azevedo
The document discusses CRM systems and Microsoft Dynamics CRM, including its architecture, integrations, costs, and evolution. It provides details on the company's experience with Microsoft and CRM technologies, and summarizes the main features and advantages of Dynamics CRM.
This document presents an introduction to the Microsoft Dynamics CRM platform. It summarizes the main points about what a CRM is, its core features, the CRM market, and the advantages of a packaged versus a custom-built system, and demonstrates some basic Dynamics CRM features such as entities, attributes, and processes.
CRM? How to choose? Built from scratch or an existing system? — Pedro Azevedo
One of the systems most often custom-built (from scratch) is the CRM, even if only part of it, and often we are not concerned with whether it is a CRM or not. When we ask a developer which way to go, especially the more junior ones, the answer is to build from scratch. The goal here is to clarify what a CRM is, its various characteristics, and the options available across programming languages.
x(C)RM as a rapid development platform — Pedro Azevedo
Pedro Azevedo presented on x(C)RM as a rapid development platform. He discussed how Microsoft Dynamics CRM 2011 can be used as a flexible platform to build custom applications for any type of relationship, not just for CRM. He also covered user interface customizations, processes, reports, and integrations.
Generative Artificial Intelligence (GenAI) in BusinessDr. Tathagat Varma
My talk for the Indian School of Business (ISB) Emerging Leaders Program Cohort 9. In this talk, I discussed key issues around adoption of GenAI in business - benefits, opportunities and limitations. I also discussed how my research on Theory of Cognitive Chasms helps address some of these issues
How Can I use the AI Hype in my Business Context?Daniel Lehner
Is AI just hype? Or is it the game changer your business needs?
Everyone's talking about AI, but is anyone really using it to create real value?
Most companies want to leverage AI. Few know how.
✅ What exactly should you ask to find real AI opportunities?
✅ Which AI techniques actually fit your business?
✅ Is your data even ready for AI?
If you’re not sure, you’re not alone. This is a condensed version of the slides I presented at a Linkedin webinar for Tecnovy on 28.04.2025.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
Linux Support for SMARC: How Toradex Empowers Embedded DevelopersToradex
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP's i.MX 8M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with a free compatibility check and a quick time-to-market.
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptxAnoop Ashok
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien...Noah Loul
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
Enhancing ICU Intelligence: How Our Functional Testing Enabled a Healthcare I...Impelsys Inc.
Impelsys provided a robust testing solution, leveraging a risk-based and requirement-mapped approach to validate ICU Connect and CritiXpert. A well-defined test suite was developed to assess data communication, clinical data collection, transformation, and visualization across integrated devices.
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
Designing Low-Latency Systems with Rust and ScyllaDB: An Architectural Deep DiveScyllaDB
Want to learn practical tips for designing systems that can scale efficiently without compromising speed?
Join us for a workshop where we’ll address these challenges head-on and explore how to architect low-latency systems using Rust. During this free interactive workshop oriented for developers, engineers, and architects, we’ll cover how Rust’s unique language features and the Tokio async runtime enable high-performance application development.
As you explore key principles of designing low-latency systems with Rust, you will learn how to:
- Create and compile a real-world app with Rust
- Connect the application to ScyllaDB (NoSQL data store)
- Negotiate tradeoffs related to data modeling and querying
- Manage and monitor the database for consistently low latencies
Semantic Cultivators : The Critical Future Role to Enable AIartmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
3. About Me
•14 years in Microsoft technologies
• 6 years in Web, Desktop and Mobile
• 8 years working with CRMs
•Head of Business Applications at Findmore (Nearshore Portugal)
•Microsoft Partner company focused on providing CRM, SharePoint,
Office 365 and Azure solutions. Focus on Nearshore.
•Business Solutions MVP 4.0 (Dynamics CRM)
4. Agenda
•Business Application Platform
•Dynamics 365
•What is?
•Why the Common Data Service?
•The Common Data Model
•Data Types; Properties
•Security
•Integration
•SDK
•Future
Common Data Service
7. What is?
•Wouldn’t life be easier if everything just worked together?
•Platform that enables customers to easily build the business apps and
processes they need.
•Brings together your business data in one place so you can focus on the
things that matter: building apps, finding insights and automating your
business processes
•Going to be the backbone of the future for business data.
•Business application model and storage mechanism
8. What is?
•Common Data Service (CDS) contains a Common Data Model (CDM)
•Technologies
•Azure infrastructure, an easy to provision, yet scalable data store (Service Fabric;
Elastic SQL)
•Integrations use the M engine (which sits under Power Query), DIXF & OData under the hood!
•M is the language used to move and transform data to and from CDS
9. Why the Common Data Service?
•A common data model with standard entity schema and behavior
•Set of standard entities deployed within every database
•Integration with Microsoft Office for Excel and Outlook
•SDK for professional development scenarios
•Roles
•Power Users – provides extensibility & enables point solutions via PowerApps, Flow,
Power BI
•IT Admin – manage your company’s data and processes centrally. Security, roles,
etc. applied consistently across all apps & services.
•ISV App Developers – build your apps once with CDS and we’ll do the rest of the
heavy lifting.
•LOB App Developers – build your apps on top of all your company’s data in one place
12. The Common Data Model
•Provides a shared representation of the data that matters to your apps
•Consists of standard, extensible, commonly used entities across business
and productivity applications.
•Applications can work against data without needing to explicitly know
where that data is coming from
•You can…
•Create custom entities
•Perform bulk data import/export through the PowerApps portal
•Leverage field groups to drive default PowerApps behavior
•Use Standard Picklists & create your own
•Analyse data in Power BI Perspectives for standard entities
13. The Common Data Model
• Structured metadata
• Entities are structured with data definition, behavior modeling and defaulting.
• Rich data types
• Address, Email, Currency, Auto-numbering. Modern types such as images,
geographic location, Phone, Website
• Data constructs
• Support for modeling relationships, lookups, aggregates, containment etc.
• System attributes (for concurrency management, security, and audit trails)
• RecordVersion, RecordId, DataPartition, CreatedByUser, CreatedByDateTime,
ModifiedByUser, ModifiedByDateTime.
• Entity and field level security can be configured per entity
14. The Common Data Model
• Data validation for mandatory and unique field data and checking for
invalid foreign key references
• Data encryption at rest
15. Why use Standard entities?
•Translations for standard entity names and fields into local languages
•Field Groups to identify key fields for create, details and reporting scenarios
•Predefined sample data
•Security permission sets
•Relationships to each other to support common business processes
•Can be extended with custom fields
•ISVs and other developers can all work against a common set of data
16. Entity field data types I
Type | Primitive type | Description
Address | Compound | Separate fields for first line, second line, city, state/province, ...
AutoNumber | String | A string with a prefix and an incrementing number. For example, “EXP001.”
BigInteger | BigInteger | Used for RecordId, included as a system field in every entity; cannot be created by users.
Boolean | Boolean | True and False.
Currency | Compound | Two fields (a decimal value and an enumeration for the currency code).
Date | DateTime | Only the date portion of the DateTime type.
DateTime | DateTime | A date combined with a time of day with fractional seconds.
Email | String | Stored as a string but understood as a separate type.
Guid | Guid | A GUID.
Integer | Integer | An integer between -2,147,483,648 and 2,147,483,647.
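Compound types such as Currency bundle more than one primitive field into a single logical value. A minimal sketch of that idea (the dictionary shape is illustrative only, not the actual CDS storage format):

```python
# Illustrative sketch: a compound Currency value stored as two fields,
# a decimal amount plus an enumeration-style currency code.
from decimal import Decimal

def make_currency(amount: str, code: str) -> dict:
    """Build a compound currency value (hypothetical shape)."""
    return {"value": Decimal(amount), "currencyCode": code}

price = make_currency("19.99", "USD")
print(price["currencyCode"], price["value"] + Decimal("0.01"))  # USD 20.00
```

Using Decimal rather than float mirrors why a dedicated Currency type exists at all: monetary values need exact decimal arithmetic.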
17. Entity field data types II
Type | Primitive type | Description
Lookup | [Foreign key] | The value matches the primary key in another table.
Multiline Text | String | Multiple lines of text.
Number | Decimal | A decimal value; up to 32 digits can be stored.
PersonName | Compound | Given name (first name), middle name, and surname (last name).
Phone | String | Stored as a string but understood as a separate type.
Picklist | Integer | The integer serves as a reference into one of the standard picklists.
Quantity | Quantity | A decimal value.
Text | String | One line of text.
Url | String | Stored as a string but understood as a separate type.
18. Entity field properties
Property | Applies to | Description
Default value | Text | The default value of the text field.
Max length | Text | The maximum number of characters in a text field.
Prefix | Number Sequence | The prefix that is used for the number sequence.
Picklist | Picklist | The option set type of the field.
Required | All | A value is required for the field.
Searchable | All | The data can be searched.
Unique | All | Values for the field must be unique across the entity.
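The Required, Max length, and Unique properties act as validation rules applied when data is written. A minimal sketch of how such rules could be enforced (the `validate` helper and the field-definition dictionary are illustrative, not part of the CDS SDK):

```python
# Illustrative sketch: enforcing Required, Max length, and Unique
# field properties, as described above. Not the actual CDS engine.
def validate(field_def, value, existing_values):
    """Return a list of validation errors for one field value."""
    errors = []
    if field_def.get("required") and value in (None, ""):
        errors.append("A value is required for the field.")
    max_len = field_def.get("max_length")
    if max_len is not None and value is not None and len(value) > max_len:
        errors.append(f"Value exceeds the maximum length of {max_len}.")
    if field_def.get("unique") and value in existing_values:
        errors.append("Values for the field must be unique across the entity.")
    return errors

name_field = {"required": True, "max_length": 10, "unique": True}
print(validate(name_field, "Contoso", {"Fabrikam"}))   # no errors
print(validate(name_field, "Fabrikam", {"Fabrikam"}))  # uniqueness violation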
19. Naming Conventions
•Entity names are singular
•Examples: Tenant, Family, SalesOrder.
•Entity ID names are created by appending “Id” to the entity name
•Examples: WorkerId, CaseId, FamilyId.
•Lookup fields are named after the entity that they are related to
20. Entity relationships and lookup fields
•Referential integrity
•Cascading delete – associated rows in the referencing entity are deleted.
•Restricted delete – a row in the referenced entity cannot be deleted if it has associated rows in the referencing entity.
• Self => Supported
• One-to-one => Not Supported
• One-to-many => Supported
• Many-to-many => Not Supported
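The two delete behaviors can be illustrated with a small in-memory sketch (the entity and field names here are hypothetical, chosen to match the naming conventions on the previous slide):

```python
# Illustrative sketch of the two relationship delete behaviors:
# "Cascade" deletes referencing rows; "Restrict" blocks the delete.
def delete_referenced(rows, referencing, fk_field, key, behavior):
    """Delete the row with primary key `key` from `rows`, applying the
    relationship's delete behavior to the `referencing` rows."""
    children = [r for r in referencing if r[fk_field] == key]
    if behavior == "Restrict" and children:
        raise ValueError("Cannot delete: referencing rows exist.")
    if behavior == "Cascade":
        referencing[:] = [r for r in referencing if r[fk_field] != key]
    rows[:] = [r for r in rows if r["Id"] != key]

families = [{"Id": 1}]
workers = [{"Id": 10, "FamilyId": 1}, {"Id": 11, "FamilyId": 2}]
delete_referenced(families, workers, "FamilyId", 1, "Cascade")
print(workers)  # only the worker referencing FamilyId 2 remains
```

With "Restrict", the same call would raise instead of deleting, leaving both tables untouched.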
21. Database Security
•There are two modes in which the
Common Data Service can run:
•Open mode – the data stored in the
common data service is open to all
users. Everyone will always have the
needed permissions to use any app.
•Restricted mode – Grant specific data permissions to users by using the admin center. When running in this mode, you will need to configure role-based security.
22. Security Model
•Data in the Common Data Service can be secured at several levels:
•Database level – Admins can define which users can perform all administrative
operations in the Common Data Service.
•Entity level – Admins can define which users have access to entities, and what actions those users can take on those entities.
•Record level – Admins can use policies to define which records a user has access to
in a given entity.
23. User Roles
•User role are assigned to users or user groups within your organization to
provide them access to a collection of entities.
•The entities that a role provides access to are determined by the permission
sets that the role includes.
•There are two special roles that are provided by the Common Data Service
for your convenience.
•Database Owner – provides access to all entities in your database, even as new custom entities are added. Database owners can also assign users to roles and define the permissions for those roles.
•Organization User – assigned to all users in your organization automatically. In
Restricted mode, everyone will need to be provided access to the entities that the
PowerApp is using.
•A user can be assigned multiple roles to allow access to different sets of entities
24. Permission Sets
•The standard entities have been grouped, and each group has two permission sets:
•View – allows read-only access to the data within the entity
•Maintain – allows read, create, update and delete operations within the entity.
•A permission set is composed of a list of entities and the level of access granted for each entity.
•Create, read, update, and delete permissions can be granted to any entity
•To grant access to a custom entity you must provide an access level under
a permission set.
25. Policies - Record Level Security (Preview)
•Determine the records that a user has access to within an entity
•The policy allows you to limit the data returned to the user. A policy
restricts access based on the value of a field within the record.
•Policies can be defined to only return values:
•With a given picklist value
•Where the current user of the application matches the user stored in the record
•Separate policies can be defined for each data operation: Create, Read,
Update, and Delete
•Security configuration can also be done in code via the Common Data
Service SDK
26. Data Import Export
•Standard and Custom entities
•Data import feature that allows hundreds of thousands of records to be
imported and exported efficiently.
•Export template files (Excel spreadsheets or CSV-delimited files) for entities that match the schema of the target entity; templates can include a subset of the entity fields
•The template designer lets you pick the fields that you care about and quickly add required fields.
•Quickly and easily import data from your existing systems.
•Rapidly establishes trusted connections for IT-managed tenants.
•The trusted connections continually synchronize the data between your existing
systems and your platform solutions.
27. Integration
•Productivity add-ins to access your data from Microsoft Excel and Outlook.
•Your solutions can connect information from productivity platforms with
data from business applications.
•Connects through standard interfaces, such as the Microsoft Graph
•Maps entities to productivity-platform objects to enable join relationships with business data.
28. Excel Add-in
•All standard and custom entities can be interactively viewed and edited in
Excel
•Excel Add-in that provides data entry with data-type specific assistance for
picklists, dates and lookups
29. Outlook Add-in
•Data from the Common Data
Service related to the people in
emails and meetings
•In this first release, the Outlook
Add-in only looks for related data in
a few entities such as Cases
•End-users can manage the relevant
data in the CDS without your LOB
app or IT department needing to lift
a finger
30. Microsoft Dynamics 365 data integration
•The Prospect to Cash data
integration feature enables a
basic flow of account data and
other entity data to enable a
prospect-to-cash scenario.
•The data integration feature is
available to customers who
have at least one Dynamics
365 product
31. How to build and manage apps?
• For the low-code/novice app creator
• PowerApps - Drag and drop to create your apps on data from the CDS
• Power up your PowerApps by building complex logic with the C# SDK and hosting it in an Azure Function
• For the professional developer
• Rich C# SDK – enables you to build complex web apps or rich client applications.
• Environments group features together in CDS including:
• A collection of tables and table relationships
• Publishing Power Apps
• Integration and mapping tools connected to Dynamics 365
32. Example of Business Applications
•Dynamics 365 for Retail
•Dynamics 365 for Talent
33. SDK I
•Allows create, read, update, delete (CRUD) and even query your business
data residing in the Common Data Service
•The client and the server tiers communicate through JSON documents that
describe the operations required
•It is viable to create applications by building the JSON documents with string operations and using standard HTTP to transmit them over the wire to the Common Data Service endpoints
•The advantage of the SDK's API is a higher abstraction level, offering a strong type system to help you at design time rather than running the risk of failing at runtime
•It is useful to think of these SDKs as domain-specific languages (DSLs)
implemented in their host languages.
•In the terminology of the SDK, tables are called entitysets, as opposed to
entities. Entities are in turn the records in the entitysets.
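Since the wire format is JSON over HTTP, an operation document built "by hand" might look roughly like the following. The field names here are hypothetical, for illustration only; the slides do not specify the actual CDS wire format, which is exactly why the typed SDK is the recommended path:

```python
import json

# Illustrative only: a JSON document describing a create operation,
# of the kind the SDK builds and transmits for you. The keys are
# hypothetical, not the real Common Data Service wire format.
operation = {
    "operation": "Create",
    "entitySet": "ProductCategory",
    "record": {"Name": "Surface"},
}
payload = json.dumps(operation)
print(payload)
```

Hand-rolling strings like this is where runtime failures creep in; the SDK's strong types move those errors to design time.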
34. SDK II
•Authentication of the user is against Azure Active Directory
•Almost all the methods provided by the SDK are available as asynchronous
methods
•The types representing entitysets are merely C# classes without much
fanfare (POCOs)
•In the SDK the concept of transactions does not exist:
•We have to add all the entities to an executor. The executor is then responsible for
managing the transaction in the most effective manner
•The server layer will deserialize the entities, start a transaction, insert the records,
and commit the transaction
35. SDK – Query Data I
•Almost all the methods provided by the SDK are available as asynchronous
methods
•The types representing entitysets are merely C# classes without much fanfare (POCOs)
•No option for specifying "all fields", since misuse caused performance
issues in other systems
// Query product categories named "Surface" or "Phone",
// projecting only the CategoryId and Name fields.
var query = client.GetRelationalEntitySet<ProductCategory>()
    .CreateQueryBuilder()
    .Where(pc => pc.Name == "Surface" || pc.Name == "Phone")
    .Project(pc => pc.SelectField(f => f.CategoryId)
                     .SelectField(f => f.Name));
36. SDK – Query Data II
•Joining data from multiple entitysets
•Joins – entitysets carry a lot of metadata that describes the relationships
among them, so you typically don't have to specify the fields used to perform
the join in the query.
•Zips
•If you haven't modeled any relationships, you can use the Zip clause, where
you specify both the joined entityset and the relationship that defines the
join.
•Grouping data
•Using aggregates
•This approach is preferable to manual aggregation, because the data isn't
transported over the wire, and the aggregations are done very quickly.
•Paging
•You can fetch a certain number of records after several records have been
skipped by adding Take and Skip clauses to the query.
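Extending the query shown on slide 35, paging might look like the sketch below. Take and Skip are named in the text; everything else reuses the builder pattern from the earlier example, and the exact clause order is an assumption on my part:

```csharp
// Fetch "page 3" of size 10: skip the first 20 matching records,
// then take the next 10. Builds on client/ProductCategory from slide 35.
var page = client.GetRelationalEntitySet<ProductCategory>()
                 .CreateQueryBuilder()
                 .Project(pc => pc.SelectField(f => f.CategoryId)
                                  .SelectField(f => f.Name))
                 .Skip(20)
                 .Take(10);
```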
37. Generally Available
• Improved app-from-data generation on standard and custom entities with
field groups
• Multi-field lookups
• Editable data import/export entity field mappings
• Ability to export data import/export templates
• Multi-sheet Excel import
• Simplified address type, complex types for Quantity, Person name, GUID,
Date
• Central place to view entity relationships
38. Generally Available
• Simplified primary key definition
• Searchable fields allow for indexed searches
• Entity data explorer in creator portal
• Null support
• Default value support for simple data types
• Manage Custom Picklists
39. PowerBI (Preview)
•Power Apps Common Data Service (CDS) connector for Power BI Desktop
•This means your data model and all the data in it are natively accessible in Power BI
Desktop
•Secured with the roles and policies IT Pros have defined in CDS
•Reports reflect real-time data
•There’s no need to schedule a refresh in Power BI. When the data is updated in CDS,
the changes are reflected in reports
•Power BI is aware of the rich data types and relationships defined in CDS
•These types are recognized by Power BI as first-class data types.
•For example, when you report using an address field, Power BI shows a map as the
default visualization.
•Entities are presented by subject areas (perspectives)
•While CDS contains a rich set of entities representing many business areas, the
entities are organized into a set of ready-made subject areas called perspectives. A
perspective offers a “view into data” from a reporting point of view.
40. Future I
•Common Data Model will grow from 70+ entities today to 300+ entities in
the next quarter
•Security
•Column level security
•Add the ability for IT Pros to secure data based on even more advanced business
artifacts and concepts such as hierarchies, regions and business units.
•Multiple Dynamics 365 applications and offerings are built on the
Common Data Service. Because these apps build on CDS, the data they use
is available for you to build your own apps against.
•Make PowerApps + CDS even simpler by introducing more powerful out-of-
the-box forms that automatically configure themselves based on entity
metadata and relationships
•Improved import/export capabilities so you can choose which entities and
41. Future II
•With Dynamics 365 for Sales or Operations, or even Azure Active Directory
and Office 365, we’re making it so that your data just shows up in the
Common Data Service.
•Working with the Office 365 team so that business processes and apps can
natively use productivity artifacts such as calendar events and tasks
•Working very closely with the Microsoft Graph team so that the data we
bring together in CDS is exposed via the Graph for apps that are already built
using the Graph REST APIs or SDKs
•Working with partners like Microsoft StaffHub to automatically integrate
data from your Dynamics services to enhance users’ StaffHub experiences