DP-200 - Practice Tests 2: Answers and Explanations
Question 1
A. ActivityRuns
B. AllMetrics
C. PipelineRuns
D. TriggerRuns
Explanation:
Answer – C
Since you need to measure the pipeline execution, consider storing the data on pipeline runs.
The Microsoft documentation gives the schema of the log attributes for pipeline runs; it includes properties for the start and end times of the activities that run within the pipeline.
Option A is incorrect since this will store the log for each activity execution within the pipeline
itself.
Option B is incorrect since this will store all the metrics for the Azure Data Factory resource.
Option D is incorrect since this will store each trigger run for the Azure Data Factory resource.
For more information on monitoring Azure Data Factory, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor
Question 2
A. Azure Event Hub
B. Azure Storage Account
C. Azure Cosmos DB
D. Azure Log Analytics
Explanation:
Answer – D
Since we have to query the logs via Log Analytics, we need to choose Azure Log Analytics as the storage option.
Since this is clearly mentioned as a requirement in the question, all other options are incorrect.
For more information on monitoring Azure Data Factory, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor
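As an illustration only, the sketch below shows how pipeline run logs routed to a Log Analytics workspace could be queried from Python with the azure-monitor-query SDK. The workspace ID is a placeholder, and the table and column names (ADFPipelineRun, Start, End, Status) assume the resource-specific log schema.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Kusto query: duration of each successful pipeline run over the last day.
query = """
ADFPipelineRun
| where Status == 'Succeeded'
| project PipelineName, RunId, Start, End,
          DurationInMs = datetime_diff('millisecond', End, Start)
| order by DurationInMs desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)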
Question 3
B. Azure Event Hubs
C. Azure Blob storage
D. Azure IoT Hub
Explanation:
Answer – C
You can use Azure Blob storage as an input type for the reference data.
For more information on using reference data, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-use-reference-data
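For illustration, a Stream Analytics query that joins a streaming input to reference data backed by Blob storage could look like the string below (held in Python only so it can be kept alongside other examples). The input and reference aliases, "telemetry" and "devices", are made-up names.
REFERENCE_JOIN_QUERY = """
SELECT
    t.DeviceId,
    t.Temperature,
    d.DeviceName
INTO
    [output]
FROM
    [telemetry] t
JOIN
    [devices] d   -- reference data input backed by Azure Blob storage
ON
    t.DeviceId = d.DeviceId
"""
print(REFERENCE_JOIN_QUERY)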
Question 4
B. A three-minute Sliding window
C. A three-minute Tumbling window
D. A three-minute Hopping window
Explanation:
Answer – C
The Tumbling window guarantees that data gets segmented into distinct time segments that do not repeat or overlap.
For more information on stream analytics window functions, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions
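A hedged sketch of such a query is shown below: a Stream Analytics statement (stored as a Python string) that aggregates events over a three-minute tumbling window, so each event falls into exactly one non-overlapping segment. The input and output aliases and the timestamp column are assumptions.
TUMBLING_WINDOW_QUERY = """
SELECT
    DeviceId,
    COUNT(*) AS EventCount
INTO
    [output]
FROM
    [input] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY
    DeviceId,
    TumblingWindow(minute, 3)
"""
print(TUMBLING_WINDOW_QUERY)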
Question 5
A. Export to a BACPAC file by using Azure Cloud Shell and save the file to a storage account.
B. Export to a BACPAC file by using SQL Server Management Studio. Save the file to a storage account.
C. Export to a BACPAC file by using the Azure portal.
D. Export to a BACPAC file by using Azure PowerShell and save the file locally.
E. Export to a BACPAC file by using the SqlPackage utility.
Explanation:
Answer – B, D and E
The Microsoft documentation mentions the different ways in which you can export a BACPAC
file of a SQL database.
Option A is incorrect because there is no mention in the Microsoft documentation of being able
to create a backup from Azure Cloud Shell.
Option C is incorrect because even though you can create a backup using the Azure Portal, the
backup won’t be available locally.
For more information on SQL database export, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-export
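As a hedged sketch of the SqlPackage route, the snippet below shells out to the SqlPackage utility from Python using its standard /Action, /SourceConnectionString and /TargetFile switches. The server, database, credentials and file path are placeholders.
import subprocess

connection_string = (
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=<database>;User ID=<user>;Password=<password>;Encrypt=True;"
)

# Export the database to a local BACPAC file.
subprocess.run(
    [
        "sqlpackage",
        "/Action:Export",
        f"/SourceConnectionString:{connection_string}",
        "/TargetFile:./mydatabase.bacpac",
    ],
    check=True,
)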
Question 6
A. Yes
B. No
Explanation:
Answer - B
Here each Data scientist must be assigned a standard cluster. This is configured to terminate
automatically after 120 minutes.
For more information on configuring clusters, please refer to the following link-
https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
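A minimal sketch of the intended configuration, assuming a workspace URL and personal access token: creating a standard Databricks cluster that terminates automatically after 120 minutes of inactivity via the Clusters REST API. The runtime version and VM size are example values only.
import requests

workspace_url = "https://<databricks-instance>.azuredatabricks.net"  # placeholder
token = "<personal-access-token>"  # placeholder

cluster_spec = {
    "cluster_name": "data-scientist-standard",
    "spark_version": "7.3.x-scala2.12",   # example runtime version
    "node_type_id": "Standard_DS3_v2",    # example VM size
    "num_workers": 2,
    "autotermination_minutes": 120,       # terminate after 120 minutes of inactivity
}

response = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=cluster_spec,
)
response.raise_for_status()
print(response.json())  # returns the new cluster_id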
Question 7
A. Yes
B. No
Explanation:
Answer - B
Here each Data scientist must be assigned a standard cluster. This is configured to terminate
automatically after 120 minutes.
For more information on configuring clusters, please refer to the following link-
https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
Question 8
A. Yes
B. No
Explanation:
Answer - A
Here each Data scientist must be assigned a standard cluster. This is configured to terminate
automatically after 120 minutes.
For more information on configuring clusters, please refer to the following link-
https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
Question 9
A. Yes
B. No
Explanation:
Answer - B
Here each Data scientist must be assigned a standard cluster. This is configured to terminate
automatically after 120 minutes.
For more information on configuring clusters, please refer to the following link-
https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
Question 10
A. Make use of an Event Grid Topic.
B. Make use of Azure Stream Analytics to query Twitter data from an Event Hub.
C. Make use of Azure Stream Analytics to query Twitter data from an Event Grid.
D. Have a Logic App in place that would send Twitter data to Azure.
E. Create an Event Grid subscription.
F. Create an Event Hub Instance.
Explanation:
Answer – B, D and F
There is an example in the Microsoft documentation, which showcases how to use Azure
Stream Analytics to process Twitter data.
Option A is incorrect because this is more of a messaging-based system.
Options C and E are incorrect because the Event Grid service is used for event-based
processing.
For more information on the implementation, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-twitter-sentiment-analysis-trends
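Illustrative only: once the Event Hub instance exists, tweet payloads could be pushed into it with the azure-eventhub SDK so that a Stream Analytics job can read them. The connection string, hub name and payload shape below are assumptions, not values from the question.
import json

from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hub-namespace-connection-string>",  # placeholder
    eventhub_name="tweets",                              # placeholder
)

tweets = [{"text": "Loving the new release!", "sentiment": 4}]

with producer:
    batch = producer.create_batch()
    for tweet in tweets:
        batch.add(EventData(json.dumps(tweet)))
    producer.send_batch(batch)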
Question 11
B. Create a Virtual Private Network Connection from the on-premise network to Azure.
C. Create a self-hosted integration runtime.
D. Create a database master key.
E. Backup the database.
F. Configure the on-premise server to use an integration runtime.
Explanation:
Answer – A, B and C
First, you have to create a Virtual Private Network connection from the on-premises network to Azure. This ensures that you have connectivity between your on-premises data center and Azure.
Next, create a new Azure Data Factory resource and then create a self-hosted integration runtime in Azure Data Factory.
Option D is incorrect because we don’t need a database master key for this process.
Option F is incorrect because we need to configure the integration runtime in Azure Data
Factory.
For more information on how to copy data using Azure Data Factory, one can visit the below
URL-
https://docs.microsoft.com/en-us/azure/data-factory/tutorial-hybrid-copy-portal
Question 12
A. Make use of the AzCopy tool with Blob storage as the linked service in the source.
B. Make use of Azure PowerShell with SQL Server as the linked service in the source.
C. Make use of Azure Data Factory UI with Blob storage as the linked service in the source.
D. Make use of .Net Data Factory API with Blob storage as the linked service in the source.
Explanation:
Answer – C
You can use the Azure Data Factory UI with Azure Blob storage as the linked service in the source. An example of this is also given in the Microsoft documentation.
All other options are incorrect since you need to use the Azure Data Factory UI tool to develop
a pipeline.
For more information on how to copy data using Azure Data Factory for an on-premise SQL
Server, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-sql-azure-adf
Question 13
A. Input Deserialization Errors
B. Early Input Events
C. Late Input Events
D. Watermark delay
Explanation:
Answer – D
You should monitor the Watermark delay. This would indicate if there are not enough
processing resources for the input events.
Options B and C are incorrect since these relate to the arrival time of input events.
For more information on monitoring stream analytics, please refer to the following link-
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-time-handling
Question 14
A. Azure SQL Database single database
B. Azure SQL data warehouse
C. Azure Cosmos DB
D. Azure SQL Database managed instance
Explanation:
Answer – D
For easy migration of on-premises databases, consider migrating to an Azure SQL Database managed instance.
Option B is incorrect since this is a data warehousing solution available on the Azure platform.
For more information on Azure SQL Database managed instance, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance
Question 15
A. DataNode
B. NameNode
C. PrimaryNode
D. SecondaryNode
Explanation:
Answer – A
If you look at the architecture of the Hadoop Distributed File System, you will see that clients
connect to the Data Nodes.
Since this is clear from the documentation, all other options are incorrect.
For more information on HDFS design, one can visit the below URL-
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes
Question 16
A. DataNode
B. NameNode
C. PrimaryNode
D. SecondaryNode
Explanation:
Answer – B
Since this is clear from the documentation, all other options are incorrect.
For more information on HDFS design, one can visit the below URL-
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes
Question 17
A. DataNode
B. NameNode
C. PrimaryNode
D. SecondaryNode
Explanation:
Answer – A
Since this is clear from the documentation, all other options are incorrect.
For more information on HDFS design, one can visit the below URL-
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes
Question 18
A. Number of transactions only
B. eDTUs per database only
C. Number of databases only
D. CPU usage only
E. eDTUs and maximum data size
Explanation:
Answer – E
When you implement elastic pools using the DTU-based purchasing model, you have to consider both the eDTUs and the maximum data size for the databases.
For more information on SQL database elastic pools, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool-scale
Question 19
A. Data Migration Assistant
B. Backup and restore
C. SQL Server Agent Job
D. Azure SQL Data Sync
Explanation:
Answer – D
Azure SQL Data Sync can be used to synchronize data between the on-premises SQL Server and the Azure SQL Database.
Option A is incorrect since this is just used to assess databases for the migration process.
For more information on SQL database Sync, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-sync-data
Question 20
A. Install a standalone on-premise Azure data gateway at each company location.
B. Install an on-premise data gateway in personal mode at each company location.
C. Install an Azure on-premise data gateway at the primary company location.
D. Install an Azure on-premise data gateway as a cluster at each location.
Explanation:
Answer – D
If you need a high availability solution, you can install the on-premises data gateway as a cluster.
For more information on highly available clusters for the gateway, one can visit the below URL-
https://docs.microsoft.com/en-us/data-integration/gateway/service-gateway-high-availability-clusters
Question 21
A. Provision an Azure SQL Data Warehouse instance
B. Connect to the Blob storage container via SQL Server Management Studio
C. Create an Azure Blob storage container
D. Run the T-SQL statements to load the data
E. Connect to the Azure SQL Data warehouse via SQL Server Management Studio
F. Build external tables by using Azure portal
G. Build external tables by using SQL Server Management Studio
Explanation:
Answer – A, D, E and G
First, provision an Azure SQL Data Warehouse instance and connect to it via SQL Server Management Studio. You can then build the external tables and run the T-SQL statements to load the data.
This is also given as an example in GitHub as part of the Microsoft documentation on loading data from Azure Blob storage into an Azure SQL data warehouse.
Option B is incorrect because you can’t connect to Blob storage from SQL Server Management
Studio.
Option C is incorrect because you already have the blob data in place.
Option F is incorrect because you need to build the external tables in SQL Server Management
Studio.
For more information on the example, one can visit the below URL-
https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/sql-data-warehouse/load-data-from-azure-blob-storage-using-polybase.md
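A hedged sketch of the kind of T-SQL that would be run from SQL Server Management Studio to build the external tables and load them with PolyBase, submitted here through pyodbc. All object names, the storage path and the connection string are placeholders; a database scoped credential would also be needed for a non-public container.
import pyodbc

STATEMENTS = [
    # External data source over the Blob storage container (placeholder location).
    """CREATE EXTERNAL DATA SOURCE AzureBlobStore
       WITH (TYPE = HADOOP,
             LOCATION = 'wasbs://<container>@<account>.blob.core.windows.net')""",
    # File format describing the delimited source files.
    """CREATE EXTERNAL FILE FORMAT CsvFormat
       WITH (FORMAT_TYPE = DELIMITEDTEXT,
             FORMAT_OPTIONS (FIELD_TERMINATOR = ','))""",
    # External table over the files in the container.
    """CREATE EXTERNAL TABLE dbo.SalesExternal (SaleId INT, Amount DECIMAL(18, 2))
       WITH (LOCATION = '/sales/',
             DATA_SOURCE = AzureBlobStore,
             FILE_FORMAT = CsvFormat)""",
    # Load the data into the warehouse with CTAS.
    """CREATE TABLE dbo.Sales
       WITH (DISTRIBUTION = ROUND_ROBIN)
       AS SELECT * FROM dbo.SalesExternal""",
]

with pyodbc.connect("<sql-data-warehouse-connection-string>", autocommit=True) as conn:
    for statement in STATEMENTS:
        conn.execute(statement)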
Question 22
A. 7 days
B. 365 days
C. Indefinitely
D. 90 days
Explanation:
Answer – C
Since the Delete data option has not been specified, the data will be stored indefinitely. If you choose the Delete data option, you can then specify a retention period.
Since this is clear from the implementation, all other options are incorrect.
For more information on monitoring storage accounts, please refer to the following link-
https://docs.microsoft.com/en-us/azure/storage/common/storage-monitor-storage-account
Question 23
A. Ensure to assign Azure AD security groups to Azure Data Lake Storage.
B. Make sure to configure end-user authentication to the Azure Data Lake Storage account.
C. Make sure to configure service-to-service authentication to the Azure Data Lake Storage account.
D. Create security groups in Azure AD and then add the project members.
E. Configure Access control lists for the Azure Data Lake Storage account.
Explanation:
Answer – A, D and E
You can assign permissions to individual users and service principals, but the Microsoft documentation recommends assigning permissions to Azure AD security groups for the Azure Data Lake Storage account. For the storage account itself, you can manage the permissions via access control lists.
For more information on Azure Data Lake storage access control, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control
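As a rough sketch under stated assumptions, the snippet below uses the azure-storage-file-datalake SDK to apply an access control list that grants an Azure AD security group read/execute on a directory. The account, file system, directory and group object ID are placeholders, and the ACL string follows the POSIX-style format described in the documentation.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)
file_system = service.get_file_system_client("projects")      # placeholder
directory = file_system.get_directory_client("project-x")     # placeholder

# Grant read/execute to an Azure AD security group via its object ID (placeholder).
directory.set_access_control(
    acl="user::rwx,group::r-x,other::---,group:<aad-group-object-id>:r-x"
)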
Question 24
A. Always Encrypted
B. Cell-level encryption
C. Row-level encryption
D. Transparent data encryption
Explanation:
Answer – D
Transparent Data Encryption is used to encrypt data at rest for Azure SQL Server databases.
All other options are incorrect as they do not encrypt the data at rest for the entire database.
For more information on Transparent Data Encryption, one can visit the below URL-
https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver15
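For reference, a hedged sketch of the T-SQL involved (note that new Azure SQL databases typically have TDE enabled by default); the database name and connection string are placeholders.
import pyodbc

ENABLE_TDE = "ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;"
CHECK_TDE = """
SELECT name, is_encrypted
FROM sys.databases
WHERE name = 'MyDatabase';
"""

with pyodbc.connect("<azure-sql-connection-string>", autocommit=True) as conn:
    conn.execute(ENABLE_TDE)
    for name, is_encrypted in conn.execute(CHECK_TDE):
        print(name, is_encrypted)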
Question 25
A. Azure Disk Encryption
B. Secure Transport Layer Security
C. Storage Account Keys
D. Default Storage Service Encryption
Explanation:
Answer – D
You can manage the encryption of data at rest for Azure storage accounts using the default
storage service encryption.
Option A is incorrect since this is used for encrypting data at rest for Azure Virtual machines.
For more information on Storage Service Encryption, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption
Question 26
A. Create a new action group that sends email alerts to the IT Security team.
B. Make sure to use all security operations as the condition.
C. Ensure to query audit log entries as the condition.
D. Use all the Azure SQL Database servers as the resource.
Explanation:
Answer – A, B and D
You can set up alerts based on all the security conditions in Azure Monitor. When any security
operation is performed, an alert can be sent to the IT Security team.
For more information on alerts for Azure SQL Databases, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-insights-alerts-portal
Question 27
A. Ensure to configure routes.
B. Create an Azure Blob storage container.
C. Configure Streaming Units.
D. Create an IoT Hub and add the Azure Stream Analytics modules to the IoT Hub namespace.
E. Create an Azure Stream Analytics edge job and configure job definition save location.
F. Create an Azure Stream Analytics cloud job and configure job definition save location.
Explanation:
Answer – A, B, D and E
You also need to add the Stream Analytics modules to your IoT Edge device and configure the routes.
Since this is clear from the Microsoft documentation, all other options are incorrect.
For more information on Stream Analytics on edge devices, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge
Question 28
A. Create a firewall rule on the database compdb1 that has a start IP address of 5.78.99.6 and end IP address of 5.78.99.10
B. Create a firewall rule on the database compdb1 that has a start and end IP address of 0.0.0.0
C. Create a firewall rule on the server hosting both of the databases that has a start IP address of 5.78.99.6 and end IP address of 5.78.99.10
D. Create a firewall rule on the database compdb1 that has a start and end IP address of 5.78.99.4
E. Create a firewall rule on the server hosting both of the databases that has a start and end IP address of 5.78.99.4
Explanation:
Answer – A, B and E
Creating a firewall rule on the database compdb1 with a start IP address of 5.78.99.6 and an end IP address of 5.78.99.10 fulfills the requirement that the specified IP address range must be allowed to connect to the database compdb1 and not compdb2.
Creating a firewall rule on the database compdb1 with a start and end IP address of 0.0.0.0 fulfills the requirement that the web services in Azure must be able to connect to the database compdb1 and not compdb2.
Creating a firewall rule on the server hosting both of the databases with a start and end IP address of 5.78.99.4 fulfills the remaining requirement.
Option D is incorrect since we have to configure a server firewall rule to allow traffic from the workstation to both databases.
For more information on working with the database firewall, please refer to the following link
https://docs.microsoft.com/en-us/azure/azure-sql/database/firewall-configure
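A hedged sketch of the T-SQL equivalents of the chosen rules, using the IP addresses from the question. The rule names and connection strings are placeholders; sp_set_firewall_rule runs in the master database (server level) while sp_set_database_firewall_rule runs inside the target database.
import pyodbc

SERVER_RULE = """
EXECUTE sp_set_firewall_rule
    @name = N'workstation-rule',
    @start_ip_address = '5.78.99.4',
    @end_ip_address = '5.78.99.4';
"""

DATABASE_RULE_RANGE = """
EXECUTE sp_set_database_firewall_rule
    @name = N'compdb1-range',
    @start_ip_address = '5.78.99.6',
    @end_ip_address = '5.78.99.10';
"""

DATABASE_RULE_AZURE = """
EXECUTE sp_set_database_firewall_rule
    @name = N'compdb1-azure-services',
    @start_ip_address = '0.0.0.0',
    @end_ip_address = '0.0.0.0';
"""

# Server-level rule is created in the master database.
with pyodbc.connect("<master-db-connection-string>", autocommit=True) as conn:
    conn.execute(SERVER_RULE)

# Database-level rules are created inside compdb1 only.
with pyodbc.connect("<compdb1-connection-string>", autocommit=True) as conn:
    conn.execute(DATABASE_RULE_RANGE)
    conn.execute(DATABASE_RULE_AZURE)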
Question 29
A. Cache used percentage
B. Memory percentage
C. CPU percentage
D. Failed connections
Explanation:
Answer - A
To check for issues on frequently used queries, you can look at the cache percentage used.
Since this is clear from the Microsoft documentation, all other options are incorrect.
For more information on monitoring Gen2 cache, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache
Question 30
A. Scale up the Streaming Units of the job.
B. Make use of event ordering.
C. Make use of Azure Stream Analytics user-defined functions.
D. Implement query parallelization by partitioning the data input.
Explanation:
Answer – A and D
You can scale up the streaming units and also implement parallelization.
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-scale-jobs
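Illustrative only: a Stream Analytics query string (kept in Python for consistency with the other examples) that aligns input and output partitions with PARTITION BY so the job can process partitions in parallel; streaming units can then be scaled up for additional throughput. The input and output aliases are assumptions.
PARTITIONED_QUERY = """
SELECT
    PartitionId,
    COUNT(*) AS EventCount
INTO
    [output]
FROM
    [input]
PARTITION BY PartitionId
GROUP BY
    PartitionId,
    TumblingWindow(minute, 1)
"""
print(PARTITIONED_QUERY)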
Question 31
A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service
products to create a new data pipeline process. They have the following requirements
Data Ingestion
This layer must provide access to multiple sources
This layer must provide the ability to orchestrate a workflow
It must also provide the capability to run SQL Server Integration Service packages
Storage
The storage layer must be optimized for Big Data workloads
It must provide encryption of data at rest
There must be no size constraints
Prepare and Train
This layer must provide a fully managed interactive workspace for exploration and
visualization
Here you should be able to program in R, SQL or Scala
It must provide seamless user authentication with Azure Active Directory
Model and Service
This layer must provide support for SQL language
It must implement native columnar storage
Which of the following should be used as a technology for the “Data Ingestion” layer?
A. Azure Logic Apps
B. Azure Data Factory
C. Azure Automation
D. Azure Functions
Explanation:
Answer – B
Since you are looking at a data pipeline process, you must consider using Azure Data Factory.
This can connect to multiple sources. You can define a workflow or pipeline and it can also run
SQL Server Integration Service packages.
Since this is the perfect fit for the requirement, all other options are incorrect.
For more information on Azure Data Factory, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/data-factory/introduction
Question 32
A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service
products to create a new data pipeline process. They have the following requirements
Data Ingestion
This layer must provide access to multiple sources
This layer must provide the ability to orchestrate a workflow
It must also provide the capability to run SQL Server Integration Service packages
Storage
The storage layer must be optimized for Big Data workloads
It must provide encryption of data at rest
There must be no size constraints
Prepare and Train
This layer must provide a fully managed interactive workspace for exploration and
visualization
Here you should be able to program in R, SQL or Scala
It must provide seamless user authentication with Azure Active Directory
Model and Service
This layer must provide support for SQL language
It must implement native columnar storage
Which of the following should be used as a technology for the “Storage” layer?
A. Azure Data Lake Storage
B. Azure Blob Storage
C. Azure Files
D. Azure SQL Data warehouse
Explanation:
Answer – A
Azure Data Lake Storage fits the requirements since it is optimized for big data analytics workloads, provides encryption of data at rest, and can scale without size constraints.
For more information on Azure Data Lake Storage, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction
Question 33
A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service
products to create a new data pipeline process. They have the following requirements
Data Ingestion
This layer must provide access to multiple sources
This layer must provide the ability to orchestrate a workflow
It must also provide the capability to run SQL Server Integration Service packages
Storage
The storage layer must be optimized for Big Data workloads
It must provide encryption of data at rest
There must be no size constraints
Prepare and Train
This layer must provide a fully managed interactive workspace for exploration and
visualization
Here you should be able to program in R, SQL or Scala
It must provide seamless user authentication with Azure Active Directory
Model and Service
This layer must provide support for SQL language
It must implement native columnar storage
Which of the following should be used as a technology for the “Prepare and Train” layer?
A. HDInsight Apache Spark Cluster
B. Azure Databricks
C. HDInsight Apache Storm Cluster
D. Azure SQL Data warehouse
Explanation:
Answer – B
Azure Databricks is perfect for the Prepare and Train layer. Here you can perform interactive
analysis using different programming languages.
For more information on Azure Databricks, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/azure-databricks/what-is-azure-databricks
Question 34
A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service
products to create a new data pipeline process. They have the following requirements
Data Ingestion
This layer must provide access to multiple sources
This layer must provide the ability to orchestrate a workflow
It must also provide the capability to run SQL Server Integration Service packages
Storage
The storage layer must be optimized for Big Data workloads
It must provide encryption of data at rest
There must be no size constraints
Prepare and Train
This layer must provide a fully managed interactive workspace for exploration and
visualization
Here you should be able to program in R, SQL or Scala
It must provide seamless user authentication with Azure Active Directory
Model and Service
This layer must provide support for SQL language
It must implement native columnar storage
Which of the following should be used as a technology for the “Model and Service” layer?
A. HDInsight Apache Kafka cluster
B. Azure SQL Data warehouse
C. Azure Data Lake Storage
D. Azure Blob Storage
Explanation:
Answer – B
For columnar storage, you can make use of Azure SQL data warehouse.
For more information on Azure SQL data warehouse, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-overview-what-is
Question 35
A. Soft delete
B. Schema Read
C. Time to Live
D. CORS
Explanation:
Answer – C
You can set a time to live for the items in a Cosmos DB database.
For more information on the time to live feature, please refer to the following link-
https://docs.microsoft.com/en-us/azure/cosmos-db/time-to-live
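A hedged sketch using the azure-cosmos SDK: creating a container with a default time-to-live so items expire automatically. The endpoint, key, database, container and partition key are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",  # placeholder
    credential="<account-key>",                        # placeholder
)
database = client.create_database_if_not_exists("appdb")

container = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/deviceId"),
    default_ttl=3600,  # items expire one hour after their last write
)

# An individual item can override the container default with its own "ttl" field.
container.upsert_item({"id": "1", "deviceId": "d1", "ttl": 600})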
Question 36
A. Yes
B. No
Explanation:
Answer – B
You need to make use of Azure Data Lake storage diagnostics for this purpose.
For more information on Azure Data Lake Gen 1 storage diagnostics, one can visit the below
URL-
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-diagnostic-logs
Question 37
A. Yes
B. No
Explanation:
Answer – B
You need to make use of Azure Data Lake storage diagnostics for this purpose.
For more information on Azure Data Lake Gen 1 storage diagnostics, one can visit the below
URL-
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-diagnostic-logs
Question 38
A. Yes
B. No
Explanation:
Answer – A
For more information on Azure Data Lake Gen 1 storage diagnostics, one can visit the below
URL-
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-diagnostic-logs
Question 39
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
Which of the following can be used to process and query the ingested data for the Tier 9 data?
A. Azure Notification Hubs
B. Azure Cache for Redis
C. Azure Functions
D. Azure Stream Analytics
Explanation:
Answer – D
One way to process and query the ingested data is to use Azure Stream Analytics, as described in the Microsoft documentation.
For more information on Azure Stream Analytics, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/event-hubs/process-data-azure-stream-analytics
Question 40
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
The Azure Data Factory instance must meet the requirements to move the data from the on-premises SQL Servers to Azure. Which of the following would you use as the integration runtime?
A. Self-hosted integration runtime
B. Azure-SSIS Integration runtime
C. .Net Common Language Runtime
D. Azure Integration runtime
Explanation:
Answer – A
The self-hosted integration runtime can be used to move data between on-premises data stores and Azure cloud data stores.
Since this is clearly mentioned in the Microsoft documentation, all other options are incorrect.
For more information on self-hosted runtime environments, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime
Question 41
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
The data for the external applications needs to be encrypted at rest. You decide to implement
the following steps.
Use the Always Encrypted Wizard in SQL Server Management Studio.
Select the column that needs to be encrypted.
Set the encryption type to Randomized.
Configure the master key to be used from the Windows Certificate Store.
Confirm the configuration and deploy the solution.
Would these steps fulfill the requirement?
]A.Yes
]B.No
Explanation:
Answer – B
As per the documentation, the encryption type needs to be set to Deterministic when enabling Always Encrypted.
For more information on implementing Always Encrypted, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted
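For reference, a hedged sketch of the kind of column definition the Always Encrypted wizard generates when the encryption type is Deterministic. The table, column and column encryption key names are placeholders, and the snippet assumes the column master key and column encryption key (CEK_External) already exist.
import pyodbc

ALWAYS_ENCRYPTED_COLUMN = """
CREATE TABLE dbo.ExternalCustomers (
    CustomerId INT PRIMARY KEY,
    NationalId CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_External,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        )
);
"""

with pyodbc.connect("<azure-sql-connection-string>", autocommit=True) as conn:
    conn.execute(ALWAYS_ENCRYPTED_COLUMN)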
Question 42
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
A. Yes
B. No
Explanation:
Answer – A
For more information on implementing Always Encrypted, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted
Question 43
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
External Partner    7,8    No            This is a Partner managed database
The data for the external applications needs to be encrypted at rest. You decide to implement
the following steps.
Use the Always Encrypted Wizard in SQL Server Management Studio.
Select the column that needs to be encrypted.
Set the encryption type to Deterministic.
Configure the master key to be used from the Azure Key Vault.
Confirm the configuration and deploy the solution.
Would these steps fulfill the requirement?
A. Yes
B. No
Explanation:
Answer – B
As per the case study, all keys and certificates need to be managed in on-premise data stores.
For more information on implementing Always Encrypted, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted
Question 44
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
Which of the following should you use as the masking function for Data type compA?
A. Custom Text
B. Default
C. Email
D. Random number
Explanation:
Answer – B
As per the case study, the requirement for Data type compA is to mask 4 or fewer string data type characters.
You can use the “Default” masking function for this requirement.
Since this is clear from the Microsoft documentation, all other options are incorrect.
For more information on dynamic data masking, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-dynamic-data-masking-get-started
Question 45
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
Which of the following should you use as the masking function for Data type compB?
A. Custom Text
B. Default
C. Email
D. Random number
Explanation:
Answer – C
As per the case study, the requirement for Data type compB is to expose the first letter and mask the domain.
You can use the “Email” masking function for this requirement.
Since this is clear from the Microsoft documentation, all other options are incorrect.
For more information on dynamic data masking, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-dynamic-data-masking-get-started
Question 46
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
A. Custom Text
B. Default
C. Email
D. Random number
Explanation:
Answer - A
As per the case study, the requirement for Data type compC is to mask everything except the characters at the beginning and the end.
You can use the “Custom Text” masking function for this requirement.
Since this is clear from the Microsoft documentation, all other options are incorrect.
For more information on dynamic data masking, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-dynamic-data-masking-get-started
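As a hedged sketch covering the three data types discussed above, the snippet below applies the default(), email() and partial() masking functions with T-SQL submitted through pyodbc. The table and column names and the partial() arguments are placeholders chosen for illustration.
import pyodbc

MASKING_RULES = [
    # compA: default masking for string data.
    "ALTER TABLE dbo.Contacts ALTER COLUMN CompAValue "
    "ADD MASKED WITH (FUNCTION = 'default()');",
    # compB: expose the first letter and mask the domain.
    "ALTER TABLE dbo.Contacts ALTER COLUMN CompBEmail "
    "ADD MASKED WITH (FUNCTION = 'email()');",
    # compC: keep the first and last characters, mask everything in between.
    "ALTER TABLE dbo.Contacts ALTER COLUMN CompCValue "
    "ADD MASKED WITH (FUNCTION = 'partial(1, \"XXXXX\", 1)');",
]

with pyodbc.connect("<azure-sql-connection-string>", autocommit=True) as conn:
    for statement in MASKING_RULES:
        conn.execute(statement)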
Question 47
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
A. Use Azure PowerShell to create a database firewall rule.
B. Configure the setting of “Allow Azure Services to Access Server” to Disabled.
C. Configure the setting of “Allow Azure Services to Access Server” to Enabled.
D. Create a database firewall rule from the Azure portal.
E. Create a server firewall rule from the Azure portal.
F. Use Transact-SQL to create a database firewall rule.
Explanation:
Answer – B, E and F
You can set server-level and database-level firewall rules to restrict access to the server and the database.
Options A and D are incorrect since you can only create a database firewall rule via Transact-
SQL.
Option C is incorrect since the setting “Allow Azure Services to Access Server” should be
Disabled.
For more information on server and database rules for Azure SQL databases, one can visit the
below URL-
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure
Question 48
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
A. RequestSteps
B. DmsWorkers
C. SQLRequests
D. ExecRequests
Explanation:
Answer – C
Since the SQL requests would affect the cache, these requests need to be monitored.
For more information on monitoring the cache, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache
Question 49
Overview
Comps is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers.
Some of the databases are used by Internal users, some by external partners and external distributions.
Below is the list of applications, tiers and their individual requirements
Applications        Tier   Replication   Comments
Internal Comp       1      Yes
Internal Partner    3      Yes           Data is replicated to the Partner
External Partner    7,8    No            This is a Partner managed database
A. An alert rule that would be used to monitor CPU percentage for the database and then alert the IT Engineers
B. An alert rule that would be used to monitor CPU percentage for the elastic pool and then alert the IT Engineers
C. An alert rule that would be used to monitor storage percentage for the database and then alert the IT Engineers
D. An alert rule that would be used to monitor storage percentage for the elastic pool and then alert the IT Engineers
Explanation:
Answer – D
Since the requirement is to monitor storage, and the databases are going to be part of an elastic pool, the alert rule needs to monitor the storage percentage for the entire elastic pool.
For more information on working with alerts, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric
Question 50
A. Yes
B. No
Explanation:
Answer – B
You need to add the secret to Azure Key Vault and add the secret scope to the Databricks
workspace.
For more information on accessing Azure Blob storage from Azure Databricks using Azure Key
Vault, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/azure-databricks/store-secrets-azure-key-vault
Question 51
A. Yes
B. No
Explanation:
Answer – A
Yes, this would fulfill the requirement, as shown in the Microsoft documentation.
For more information on accessing Azure Blob storage from Azure Databricks using Azure Key
Vault, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/azure-databricks/store-secrets-azure-key-vault
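Illustrative only: inside a Databricks notebook (where spark, dbutils and display are provided by the runtime), the storage account key can be read from a Key Vault-backed secret scope and used to access Blob storage. The scope, secret, account and container names are placeholders.
storage_account = "<storage-account>"   # placeholder
container = "<container>"               # placeholder

# Read the account key from the Key Vault-backed secret scope.
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.blob.core.windows.net",
    dbutils.secrets.get(scope="keyvault-backed-scope", key="storage-account-key"),
)

# Read CSV files from the container using the wasbs driver.
df = spark.read.csv(
    f"wasbs://{container}@{storage_account}.blob.core.windows.net/data/*.csv",
    header=True,
)
display(df)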
Question 52
A. Yes
B. No
Explanation:
Answer – B
You are supposed to add a secret scope to the Databricks workspace and not the access keys.
For more information on accessing Azure Blob storage from Azure Databricks using Azure Key
Vault, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/azure-databricks/store-secrets-azure-key-vault
Question 53
A. Azure Data Factory
B. AzCopy Tool
C. Azure Event Hubs
D. Azure Event Grid
Explanation:
Answer – A
For more information on data lake storage scenarios, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-data-scenarios
Question 54
A. Azure Data Factory
B. AzCopy Tool
C. Azure Event Hubs
D. Azure Event Grid
Explanation:
Answer – B
Since this is clearly mentioned in the Microsoft documentation, all other options are incorrect.
For more information on data lake storage scenarios, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-data-scenarios
Question 55
A. Azure Data Factory
B. AzCopy Tool
C. Azure Event Hubs
D. Azure Event Grid
Explanation:
Answer – A
Since this is clearly mentioned in the Microsoft documentation, all other options are incorrect.
For more information on data lake storage scenarios, one can visit the below URL-
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-data-scenarios