Databricks Associate Data Engineer
A data organization leader is upset about the data analysis team’s reports being different from the
data engineering team’s reports. The leader believes the siloed nature of their organization’s data
engineering and data analysis architectures is to blame.
Which of the following describes how a data lakehouse could alleviate this issue?
2
Which of the following describes a scenario in which a data team will want to utilize cluster pools?
3
Which of the following is hosted completely in the control plane of the classic Databricks architecture?
A. Worker node
B. JDBC data source
C. Databricks web application
D. Databricks Filesystem
E. Driver node
4
Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta Lake?
5
Which of the following describes the storage organization of a Delta table?
A. Delta tables are stored in a single file that contains data, history, metadata, and other attributes.
B. Delta tables store their data in a single file and all metadata in a collection of files in a separate
location.
C. Delta tables are stored in a collection of files that contain data, history, metadata, and other
attributes.
D. Delta tables are stored in a collection of files that contain only the data stored within the table.
E. Delta tables are stored in a single file that contains only the data stored within the table.
6
Which of the following code blocks will remove the rows where the value in column age is greater
than 25 from the existing Delta table my_table and save the updated table?
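Note: the answer options for this question are not reproduced in this dump. As a point of reference, a minimal PySpark sketch of the kind of statement being tested (the table name my_table comes from the question; the exact option wording may differ) is:

# `spark` is the SparkSession that Databricks notebooks provide automatically.
# Delta Lake supports DELETE as a SQL DML statement, so the matching rows can
# be removed in place and the table is saved with the change.
spark.sql("DELETE FROM my_table WHERE age > 25")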
A data engineer has realized that they made a mistake when making a daily update to a table. They
need to use Delta time travel to restore the table to a version that is 3 days old. However, when the
data engineer attempts to time travel to the older version, they are unable to restore the data
because the data files have been deleted.
Which of the following explains why the data files are no longer present?
Which of the following Git operations must be performed outside of Databricks Repos?
A. Commit
B. Pull
C. Push
D. Clone
E. Merge
Which of the following data lakehouse features results in improved data quality over a traditional
data lake?
A. A data lakehouse provides storage solutions for structured and unstructured data.
11
A data engineer has left the organization. The data team needs to transfer ownership of the data
engineer’s Delta tables to a new data engineer. The new data engineer is the lead engineer on the
data team.
Assuming the original data engineer no longer has access, which of the following individuals must be
the one to transfer ownership of the Delta tables in Data Explorer?
C. Workspace administrator
A data analyst has created a Delta table sales that is used by the entire data analysis team. They want
help from the data engineering team to implement a series of tests to ensure the data is clean.
However, the data engineering team uses Python for its tests rather than SQL.
Which of the following commands could the data engineering team use to access sales in PySpark?
C. spark.sql("sales")
D. spark.delta.table("sales")
E. spark.table("sales")
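Note: as a rough illustration of the option being tested, reading a SQL-registered table into a PySpark DataFrame (table name sales from the question) could look like this:

# `spark` is the SparkSession provided automatically in Databricks notebooks.
# spark.table returns the registered table as a DataFrame, so Python-based
# tests can run against the same data the analysts query with SQL.
sales_df = spark.table("sales")
sales_df.printSchema()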
13
Which of the following commands will return the location of database customer360?
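Note: one command that returns a database's metadata, including its storage location, is DESCRIBE DATABASE. A minimal sketch (run from Python for consistency with the other examples in this dump) is:

# `spark` is the SparkSession provided automatically in Databricks notebooks.
# The result includes a "Location" row with the database's storage path.
spark.sql("DESCRIBE DATABASE EXTENDED customer360").show(truncate=False)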
A data engineer wants to create a new table containing the names of customers that live in France.
They have written the following command:
A senior data engineer mentions that it is organization policy to include a table property indicating
that the new table includes personally identifiable information (PII).
Which of the following lines of code fills in the above blank to successfully complete the task?
B. "COMMENT PII"
C. TBLPROPERTIES PII
E. PII
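Note: only a few of this question's options are reproduced above. As a hedged illustration of a TBLPROPERTIES clause, the sketch below uses an assumed property name (contains_pii) and assumed source table and column names (customers, country), none of which come from the question itself:

# Hypothetical sketch only; the exact property key/value in the exam's answer
# options may differ.
spark.sql("""
    CREATE TABLE IF NOT EXISTS customers_fr
    COMMENT 'Customers located in France'
    TBLPROPERTIES ('contains_pii' = 'true')
    AS SELECT * FROM customers WHERE country = 'France'
""")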
15
Which of the following benefits is provided by the array functions from Spark SQL?
D. An ability to work with complex, nested data ingested from JSON files
Which of the following commands can be used to write data into a Delta table while avoiding the
writing of duplicate records?
A. DROP
B. IGNORE
C. MERGE
D. APPEND
E. INSERT
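Note: as a hedged illustration of the MERGE pattern (the table and column names target, updates, and id are placeholders, not details from the question):

# MERGE INTO matches incoming rows against the target on a key and inserts
# only the rows that do not already exist, which avoids writing duplicates.
spark.sql("""
    MERGE INTO target t
    USING updates u
    ON t.id = u.id
    WHEN NOT MATCHED THEN INSERT *
""")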
17
A data engineer needs to apply custom logic to string column city in table stores for a specific use case.
In order to apply this custom logic at scale, the data engineer wants to create a SQL user-defined
function (UDF).
Which of the following code blocks creates this SQL UDF?
A.
B.
C.
D.
E.
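Note: the code blocks for options A-E are not reproduced in this dump. A hypothetical sketch of a SQL UDF over the stores.city column follows; the function body (upper-casing the value) stands in for whatever custom logic the real options contain:

# `spark` is the SparkSession provided automatically in Databricks notebooks.
spark.sql("""
    CREATE OR REPLACE FUNCTION clean_city(city STRING)
    RETURNS STRING
    RETURN UPPER(city)
""")
spark.sql("SELECT city, clean_city(city) AS cleaned_city FROM stores").show()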
18
A data analyst has a series of queries in a SQL program. The data analyst wants this program to run
every day. They only want the final query in the program to run on Sundays. They ask for help from
the data engineering team to complete this task.
Which of the following approaches could be used by the data engineering team to complete this task?
A. They could submit a feature request with Databricks to add this functionality.
B. They could wrap the queries using PySpark and use Python’s control flow system to
determine when to run the final query.
D. They could automatically restrict access to the source table in the final query so that it is
only accessible on Sundays.
E. They could redesign the data model to separate the data used in the final query into a new
table.
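Note: a minimal sketch of the approach described in option B (the query text is a placeholder, not the analyst's actual SQL):

from datetime import date

# `spark` is the SparkSession provided automatically in Databricks notebooks.
# Wrapping the SQL program in PySpark lets ordinary Python control flow decide
# whether the final query runs.
spark.sql("SELECT 1")                  # earlier queries in the program run daily

if date.today().isoweekday() == 7:     # isoweekday(): Monday = 1 ... Sunday = 7
    spark.sql("SELECT 2")              # the final query runs only on Sundays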
19
A data engineer runs a statement every day to copy the previous day’s sales into the table
transactions. Each day’s sales are in their own file in the location "/transactions/raw".
Today, the data engineer runs the following command to complete this task:
After running the command today, the data engineer notices that the number of records in table
transactions has not changed.
Which of the following describes why the statement might not have copied any new records into the
table?
A. The format of the files to be copied were not included with the FORMAT_OPTIONS
keyword.
B. The names of the files to be copied were not included with the FILES keyword.
C. The previous day’s file has already been copied into the table.
E. The COPY INTO statement requires the table to be refreshed to view the copied rows.
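Note: the COPY INTO statement referenced in this question is not reproduced in the dump. A hedged reconstruction of what such a daily load typically looks like (the Parquet file format is an assumption) is:

# COPY INTO is idempotent: files that were already loaded from the source
# directory are skipped, so re-running it over the same files adds no rows.
spark.sql("""
    COPY INTO transactions
    FROM '/transactions/raw'
    FILEFORMAT = PARQUET
""")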
20
A data engineer needs to create a table in Databricks using data from their organization’s existing
SQLite database.
They run the following command:
Which of the following lines of code fills in the above blank to successfully complete the task?
A. org.apache.spark.sql.jdbc
B. autoloader
C. DELTA
D. sqlite
E. org.apache.spark.sql.sqlite
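Note: a hypothetical sketch of the completed statement follows; the JDBC URL and dbtable values are placeholders, not details from the question:

spark.sql("""
    CREATE TABLE customers_from_sqlite
    USING org.apache.spark.sql.jdbc
    OPTIONS (
      url 'jdbc:sqlite:/dbfs/tmp/company.db',
      dbtable 'customers'
    )
""")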
21
A data engineering team has two tables. The first table march_transactions is a collection of all retail
transactions in the month of March. The second table april_transactions is a collection of all retail
transactions in the month of April. There are no duplicate records between the tables.
Which of the following commands should be run to create a new table all_transactions that contains
all records from march_transactions and april_transactions without duplicate records?
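Note: a minimal sketch of the pattern being tested (both monthly tables share a schema, so they can be combined directly) is:

# UNION removes duplicate rows; since the question states there are none
# between the tables, UNION and UNION ALL both yield the same result here.
spark.sql("""
    CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    UNION
    SELECT * FROM april_transactions
""")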
22
A data engineer only wants to execute the final block of a Python program if the Python variable
day_of_week is equal to 1 and the Python variable review_period is True.
Which of the following control flow statements should the data engineer use to begin this
conditionally executed code block?
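Note: a minimal sketch of the control flow statement being tested (the variable values shown are examples only):

day_of_week = 1
review_period = True

# Both conditions must hold, so a single `if` combines them with `and`.
if day_of_week == 1 and review_period:
    print("running the final block")   # placeholder for the final code block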
A data engineer is attempting to drop a Spark SQL table my_table. The data engineer wants to delete
all table metadata and data.
They run the following command:
24
A data engineer wants to create a data entity from a couple of tables. The data entity must be used by
other data engineers in other sessions. It also must be saved to a physical location.
Which of the following data entities should the data engineer create?
A. Database
B. Function
C. View
D. Temporary view
E. Table
25
A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices that
the source data is starting to have a lower level of quality. The data engineer would like to automate
the process of monitoring the quality level.
Which of the following tools can the data engineer use to solve this problem?
A. Unity Catalog
B. Data Explorer
C. Delta Lake
E. Auto Loader
26
A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three
datasets are defined against Delta Lake table sources using LIVE TABLE.
The pipeline is configured to run in Production mode using the Continuous Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid, what is the expected
outcome after clicking Start to update the pipeline?
A. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist to allow for additional testing.
B. All datasets will be updated once and the pipeline will persist without any processing. The
compute resources will persist but go unused.
C. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will be deployed for the update and terminated when the pipeline is stopped.
D. All datasets will be updated once and the pipeline will shut down. The compute resources
will be terminated.
E. All datasets will be updated once and the pipeline will shut down. The compute resources
will persist to allow for additional testing.
27
In order for Structured Streaming to reliably track the exact progress of the processing so that it can
handle any kind of failure by restarting and/or reprocessing, which of the following two approaches is
used by Spark to record the offset range of the data being processed in each trigger?
B. Structured Streaming cannot record the offset range of the data being processed in each trigger.
28
Which of the following describes the relationship between Gold tables and Silver tables?
A. Gold tables are more likely to contain aggregations than Silver tables.
B. Gold tables are more likely to contain valuable data than Silver tables.
C. Gold tables are more likely to contain a less refined view of data than Silver tables.
D. Gold tables are more likely to contain more data than Silver tables.
E. Gold tables are more likely to contain truthful data than Silver tables.
29
Which of the following describes the relationship between Bronze tables and raw data?
D. Bronze tables contain a less refined view of data than raw data.
Which of the following tools is used by Auto Loader to process data incrementally?
A. Checkpointing
C. Data Explorer
D. Unity Catalog
E. Databricks SQL
31
A data engineer has configured a Structured Streaming job to read from a table, manipulate the data,
and then perform a streaming write into a new table.
The code block used by the data engineer is below:
If the data engineer only wants the query to execute a micro-batch to process data every 5 seconds,
which of the following lines of code should the data engineer use to fill in the blank?
A. trigger("5 seconds")
B. trigger()
C. trigger(once="5 seconds")
D. trigger(processingTime="5 seconds")
E. trigger(continuous="5 seconds")
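Note: the streaming code block for this question is not reproduced in the dump. A hedged reconstruction of the shape of such a job (source and target table names and the checkpoint path are placeholders) is:

# Requires a recent Databricks runtime; `spark` is the notebook's SparkSession.
query = (
    spark.readStream.table("source_table")
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/demo")
    .trigger(processingTime="5 seconds")   # run a micro-batch every 5 seconds
    .toTable("target_table")
)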
32
A dataset has been defined using Delta Live Tables and includes an expectations clause:
CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION DROP ROW
What is the expected behavior when a batch of data containing data that violates these constraints is
processed?
A. Records that violate the expectation are dropped from the target dataset and loaded into
a quarantine table.
B. Records that violate the expectation are added to the target dataset and flagged as invalid
in a field added to the target dataset.
C. Records that violate the expectation are dropped from the target dataset and recorded as
invalid in the event log.
D. Records that violate the expectation are added to the target dataset and recorded as
invalid in the event log.
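Note: the Python equivalent of this expectation, shown as a hedged sketch (it runs only inside a Delta Live Tables pipeline, and the source table name raw_events is a placeholder):

import dlt

# expect_or_drop is the Python counterpart of ON VIOLATION DROP ROW: rows that
# violate the condition are dropped from the target dataset and the violation
# counts are recorded in the pipeline event log.
@dlt.table
@dlt.expect_or_drop("valid_timestamp", "timestamp > '2020-01-01'")
def cleaned_events():
    return dlt.read("raw_events")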
33
Which of the following describes when to use the CREATE STREAMING LIVE TABLE (formerly CREATE
INCREMENTAL LIVE TABLE) syntax over the CREATE LIVE TABLE syntax when creating Delta Live Tables
(DLT) tables using SQL?
A. CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT
pipeline is static.
B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed
incrementally.
C. CREATE STREAMING LIVE TABLE is redundant for DLT and it does not need to be used.
D. CREATE STREAMING LIVE TABLE should be used when data needs to be processed through
complicated aggregations.
E. CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline
is static.
34
A data engineer is designing a data pipeline. The source system generates files in a shared directory
that is also used by other processes. As a result, the files should be kept as is and will accumulate in
the directory. The data engineer needs to identify which files are new since the previous run in the
pipeline, and set up the pipeline to only ingest those new files with each run.
Which of the following tools can the data engineer use to solve this problem?
A. Unity Catalog
B. Delta Lake
C. Databricks SQL
D. Data Explorer
E. Auto Loader
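Note: a hedged sketch of the Auto Loader pattern this question points at (paths, file format, and table name are placeholders):

# The cloudFiles source tracks which files in the directory have already been
# ingested, so each run picks up only files that are new since the last run,
# and the source files are left in place.
query = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/shared_dir")
    .load("/mnt/shared_dir/")
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/shared_dir")
    .toTable("ingested_files")
)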
35
Which of the following Structured Streaming queries is performing a hop from a Silver table to a Gold
table?
A.
B.
C.
D.
E.
36
A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the
pipeline to drop invalid records at each table. They notice that some data is being dropped due to
quality concerns at some point in the DLT pipeline. They would like to determine at which table in
their pipeline the data is being dropped.
Which of the following approaches can the data engineer take to identify the table that is dropping
the records?
A. They can set up separate expectations for each table when developing their DLT pipeline.
C. They can set up DLT to notify them via email when records are dropped.
D. They can navigate to the DLT pipeline page, click on each table, and view the data quality
statistics.
E. They can navigate to the DLT pipeline page, click on the “Error” button, and review the
present errors.
37
A data engineer has a single-task Job that runs each morning before they begin working. After
identifying an upstream data issue, they need to set up another task to run a new notebook prior to
the original task.
Which of the following approaches can the data engineer use to set up the new task?
A. They can clone the existing task in the existing Job and update it to run the new notebook.
B. They can create a new task in the existing Job and then add it as a dependency of the
original task.
C. They can create a new task in the existing Job and then add the original task as a
dependency of the new task.
D. They can create a new job from scratch and add both tasks to run concurrently.
E. They can clone the existing task to a new Job and then edit it to run the new notebook.
38
An engineering manager wants to monitor the performance of a recent project using a Databricks SQL
query. For the first week following the project’s release, the manager wants the query results to be
updated every minute. However, the manager is concerned that the compute resources used for the
query will be left running and cost the organization a lot of money beyond the first week of the
project’s release.
Which of the following approaches can the engineering team use to ensure the query does not cost
the organization any money beyond the first week of the project’s release?
A. They can set a limit to the number of DBUs that are consumed by the SQL Endpoint.
B. They can set the query’s refresh schedule to end after a certain number of refreshes.
C. They cannot ensure the query does not cost the organization money beyond the first week
of the project’s release.
D. They can set a limit to the number of individuals that are able to manage the query’s
refresh schedule.
E. They can set the query’s refresh schedule to end on a certain date in the query scheduler.
39
A data analysis team has noticed that their Databricks SQL queries are running too slowly when
connected to their always-on SQL endpoint. They claim that this issue is present when many members
of the team are running small queries simultaneously. They ask the data engineering team for help.
The data engineering team notices that each of the team’s queries uses the same SQL endpoint.
Which of the following approaches can the data engineering team use to improve the latency of the
team’s queries?
B. They can increase the maximum bound of the SQL endpoint’s scaling range.
C. They can turn on the Auto Stop feature for the SQL endpoint.
D. They can turn on the Serverless feature for the SQL endpoint.
E. They can turn on the Serverless feature for the SQL endpoint and change the Spot Instance
Policy to “Reliability Optimized.”
40
A data engineer wants to schedule their Databricks SQL dashboard to refresh once per day, but they
only want the associated SQL endpoint to be running when it is necessary.
Which of the following approaches can the data engineer use to minimize the total running time of
the SQL endpoint used in the refresh schedule of their dashboard?
A. They can ensure the dashboard’s SQL endpoint matches each of the queries’ SQL
endpoints.
C. They can turn on the Auto Stop feature for the SQL endpoint.
E. They can ensure the dashboard’s SQL endpoint is not one of the included query’s SQL
endpoint.
41
A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the input
data to an ELT job. The ELT job has its own Databricks SQL query that returns the number of input records
containing unexpected NULL values. The data engineer wants their entire team to be notified via a
messaging webhook whenever this value reaches 100.
Which of the following approaches can the data engineer use to notify their entire team via a
messaging webhook whenever the number of NULL values reaches 100?
A single Job runs two notebooks as two separate tasks. A data engineer has noticed that one of the
notebooks is running slowly in the Job’s current run. The data engineer asks a tech lead for help in
identifying why this might be the case.
Which of the following approaches can the tech lead use to identify why the notebook is running
slowly as part of the Job?
A. They can navigate to the Runs tab in the Jobs UI to immediately review the processing
notebook.
B. They can navigate to the Tasks tab in the Jobs UI and click on the active run to review the
processing notebook.
C. They can navigate to the Runs tab in the Jobs UI and click on the active run to review the
processing notebook.
E. They can navigate to the Tasks tab in the Jobs UI to immediately review the processing
notebook.
43
A data engineer has a Job with multiple tasks that runs nightly. Each of the tasks runs slowly because
the clusters take a long time to start.
Which of the following actions can the data engineer perform to improve the start up time for the
clusters used for the Job?
E. They can configure the clusters to autoscale for larger data sizes
44
A new data engineering team has been assigned to an ELT project. The team has its own group, team. The new data engineering
team will need full privileges on the database customers to fully manage the project.
Which of the following commands can be used to grant full permissions on the database to the new
data engineering team?
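Note: a minimal sketch of a statement granting full privileges on the database; the group name team is borrowed from the next question as a placeholder:

spark.sql("GRANT ALL PRIVILEGES ON DATABASE customers TO `team`")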
45
A new data engineering team has been assigned to work on a project. The team will need access to
database customers in order to see what tables already exist. The team has its own group, team.
Which of the following commands can be used to grant the necessary permission on the entire
database to the new team?
A data engineer is running code in a Databricks Repo that is cloned from a central Git repository. A
colleague of the data engineer informs them that changes have been made and synced to the central
Git repository. The data engineer now needs to sync their Databricks Repo to get the changes from
the central Git repository.
Which of the following Git operations does the data engineer need to run to accomplish this task?
A. Merge
B. Push
C. Pull
D. Commit
E. Clone
47
Which of the following is a benefit of the Databricks Lakehouse Platform embracing open source
technologies?
A. Cloud-specific integrations
B. Simplified governance
A data engineer needs to use a Delta table as part of a data pipeline, but they do not know if they
have the appropriate permissions.
In which of the following locations can the data engineer review their permissions on the table?
A. Databricks Filesystem
B. Jobs
C. Dashboards
D. Repos
E. Data Explorer
49
Which of the following describes a scenario in which a data engineer will want to use a single-node
cluster?
D. When they are concerned about the ability to automatically scale with larger data
E. When they are manually running reports with a large amount of data
50
A new record has the following fields and values:
id STRING = 'a1'
rank INTEGER = 6
rating FLOAT = 9.4
Which of the following SQL commands can be used to append the new record to an existing Delta
table my_table?
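Note: the answer options are not reproduced in this dump. A minimal sketch of appending the record with the values listed above is:

# A single-row append to an existing Delta table.
spark.sql("INSERT INTO my_table VALUES ('a1', 6, 9.4)")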
51
Where in the Spark UI can one diagnose a performance problem induced by not leveraging predicate
push-down?
B. In the Stage’s Detail screen, in the Completed Stages table, by noting the size of data read
from the Input column
C. In the Storage Detail screen, by noting which RDDs are not stored on disk
In which of the following file formats is data from Delta Lake tables primarily stored?
A. Delta
B. CSV
C. Parquet
D. JSON
53
C. Repos
D. Data
E. Notebooks
54
Which of the following can be used to simplify and unify siloed data architectures that are specialized
for specific use cases?
A. None of these
B. Data lake
C. Data warehouse
D. All of these
E. Data lakehouse
55
A data architect has determined that a table of the following format is necessary:
Which of the following code blocks uses SQL DDL commands to create an empty Delta table in the
above format regardless of whether a table already exists with this name?
A.
B.
C.
D.
E.
56
A data engineer has a Python notebook in Databricks, but they need to use SQL to accomplish a
specific task within a cell. They still want all of the other cells to use Python without making any
changes to those cells.
Which of the following describes how the data engineer can use SQL within a cell of their Python
notebook?
B. They can attach the cell to a SQL endpoint rather than a Databricks cluster
57
Which of the following SQL keywords can be used to convert a table from a long format to a wide
format?
A. TRANSFORM
B. PIVOT
C. SUM
D. CONVERT
E. WHERE
58
Which of the following describes a benefit of creating an external table from Parquet rather than CSV
when using a CREATE TABLE AS SELECT statement?
59
A data engineer wants to create a relational object by pulling data from two tables. The relational
object does not need to be used by other data engineers in other sessions. In order to save on storage
costs, the data engineer wants to avoid copying and storing physical data.
Which of the following relational objects should the data engineer create?
B. View
C. Database
D. Temporary view
E. Delta Table
60
A data analyst has developed a query that runs against a Delta table. They want help from the data
engineering team to implement a series of tests to ensure the data returned by the query is clean.
However, the data engineering team uses Python for its tests rather than SQL.
Which of the following operations could the data engineering team use to run the query and operate
with the results in PySpark?
B. spark.delta.table
C. spark.sql
E. spark.table
61
Which of the following commands will return the number of null values in the member_id column?
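Note: one way to count NULL values in a single column, shown as a sketch with an assumed table name (members):

spark.sql("""
    SELECT count_if(member_id IS NULL) AS null_member_ids
    FROM members
""").show()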
A data engineer needs to apply custom logic to identify employees with more than 5 years of
experience in array column employees in table stores. The custom logic should create a new column
exp_employees that is an array of all of the employees with more than 5 years of experience for each
row. In order to apply this custom logic at scale, the data engineer wants to use the FILTER higher-
order function.
Which of the following code blocks successfully completes this task?
A.
B.
C.
D.
E.
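Note: the code blocks for options A-E are not reproduced in this dump. A hypothetical sketch of the FILTER higher-order function follows; the field holding years of experience inside each employee struct (years_exp) is an assumption:

spark.sql("""
    SELECT *,
           FILTER(employees, e -> e.years_exp > 5) AS exp_employees
    FROM stores
""")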
63
A data engineer has a Python variable table_name that they would like to use in a SQL query. They
want to construct a Python code block that will run the query using table_name.
Which of the following can be used to fill in the blank to successfully complete the task?
A. spark.delta.sql
B. spark.delta.table
C. spark.table
D. dbutils.sql
E. spark.sql
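Note: a minimal sketch of option E with a Python f-string (the query text and variable value are examples only):

table_name = "sales"

# spark.sql accepts any SQL string, so an f-string interpolates the Python
# variable into the query text.
df = spark.sql(f"SELECT * FROM {table_name}")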
64
A data engineer has created a new database using the following command:
In which of the following locations will the customer360 database be located?
A. dbfs:/user/hive/database/customer360
B. dbfs:/user/hive/warehouse
C. dbfs:/user/hive/customer360
E. dbfs:/user/hive/database
65
A data engineer is attempting to drop a Spark SQL table my_table and runs the following command:
After running this command, the engineer notices that the data files and metadata files have been
deleted from the file system.
Which of the following describes why all of these files were deleted?
A data engineer that is new to using Python needs to create a Python function to add two integers
together and return the sum.
Which of the following code blocks can the data engineer use to complete this task?
A.
B.
C.
D.
E.
67
In which of the following scenarios should a data engineer use the MERGE INTO command instead of
the INSERT INTO command?
A data engineer is working with two tables. Each of these tables is displayed below in its entirety.
The data engineer runs the following query to join these tables together:
A.
B.
C.
D.
E.
69
A data engineer needs to create a table in Databricks using data from a CSV file at location
/path/to/csv.
Which of the following lines of code fills in the above blank to successfully complete the task?
A. None of these lines of code are needed to successfully complete the task
B. USING CSV
C. FROM CSV
D. USING DELTA
E. FROM "path/to/csv"
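Note: a minimal sketch of the completed statement; the header option is an assumption rather than a detail from the question:

spark.sql("""
    CREATE TABLE csv_backed_table
    USING CSV
    OPTIONS (path '/path/to/csv', header 'true')
""")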
70
A data engineer has configured a Structured Streaming job to read from a table, manipulate the data,
and then perform a streaming write into a new table.
If the data engineer only wants the query to process all of the available data in as many batches as
required, which of the following lines of code should the data engineer use to fill in the blank?
A. processingTime(1)
B. trigger(availableNow=True)
C. trigger(parallelBatch=True)
D. trigger(processingTime="once")
E. trigger(continuous="once")
71
A data engineer has developed a data pipeline to ingest data from a JSON source using Auto Loader,
but the engineer has not provided any type inference or schema hints in their pipeline. Upon
reviewing the data, the data engineer has noticed that all of the columns in the target table are of the
string type despite some of the fields only including float or boolean values.
Which of the following describes why Auto Loader inferred all of the columns to be of the string type?
A. There was a type mismatch between the specific schema and the inferred schema
A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three
datasets are defined against Delta Lake table sources using LIVE TABLE.
The pipeline is configured to run in Development mode using the Continuous Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid, what is the expected
outcome after clicking Start to update the pipeline?
A. All datasets will be updated once and the pipeline will shut down. The compute resources
will be terminated.
B. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist until the pipeline is shut down.
C. All datasets will be updated once and the pipeline will persist without any processing. The
compute resources will persist but go unused.
D. All datasets will be updated once and the pipeline will shut down. The compute resources
will persist to allow for additional testing.
E. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist to allow for additional testing.
73
Which of the following data workloads will utilize a Gold table as its source?
A. A job that enriches data by parsing its timestamps into a human-readable format
E. A job that ingests raw data from a streaming source into the Lakehouse
74
Which of the following must be specified when creating a new Delta Live Tables pipeline?
75
A data engineer has joined an existing project and they see the following query in the project
repository:
SELECT customer_id
FROM STREAM(LIVE.customers)
WHERE loyalty_level = 'high';
Which of the following describes why the STREAM function is included in the query?
E. The data in the customers table has been updated since its last run.
76
Which of the following describes the type of workloads that are always compatible with Auto Loader?
A. Streaming workloads
C. Serverless workloads
D. Batch workloads
E. Dashboard workloads
77
A data engineer and data analyst are working together on a data pipeline. The data engineer is
working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is
working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming
input. They now want to migrate their pipeline to use Delta Live Tables.
Which of the following changes will need to be made to the pipeline when migrating to Delta Live
Tables?
B. The pipeline will need to stop using the medallion-based multi-hop architecture
D. The pipeline will need to use a batch source in place of a streaming source
A data engineer is using the following code block as part of a batch ingestion pipeline to read from a
composable table:
Which of the following changes needs to be made so this code block will work when the transactions
table is a stream source?
C. Replace "transactions" with the path to the location of the Delta table
Which of the following queries is performing a streaming hop from raw data to a Bronze table?
A.
B.
C.
D.
E.
80
A dataset has been defined using Delta Live Tables and includes an expectations clause:
What is the expected behavior when a batch of data containing data that violates these constraints is
processed?
A. Records that violate the expectation are dropped from the target dataset and recorded as
invalid in the event log.
C. Records that violate the expectation are dropped from the target dataset and loaded into
a quarantine table.
D. Records that violate the expectation are added to the target dataset and recorded as
invalid in the event log.
E. Records that violate the expectation are added to the target dataset and flagged as invalid
in a field added to the target dataset.
81
Which of the following statements regarding the relationship between Silver tables and Bronze tables
is always true?
A. Silver tables contain a less refined, less clean view of data than Bronze data.
D. Silver tables contain a more refined and cleaner view of data than Bronze tables.
A data engineering team has noticed that their Databricks SQL queries are running too slowly when
they are submitted to a non-running SQL endpoint. The data engineering team wants this issue to be
resolved.
Which of the following approaches can the team use to reduce the time it takes to return results in
this scenario?
A. They can turn on the Serverless feature for the SQL endpoint and change the Spot
Instance Policy to "Reliability Optimized."
B. They can turn on the Auto Stop feature for the SQL endpoint.
D. They can turn on the Serverless feature for the SQL endpoint.
E. They can increase the maximum bound of the SQL endpoint's scaling range
83
A data engineer has a Job that has a complex run schedule, and they want to transfer that schedule to
other Jobs.
Rather than manually selecting each value in the scheduling form in Databricks, which of the following
tools can the data engineer use to represent and submit the schedule programmatically?
A. pyspark.sql.types.DateType
B. datetime
C. pyspark.sql.types.TimestampType
D. Cron syntax
Which of the following approaches should be used to send the Databricks Job owner an email in the
case that the Job fails?
D. There is no way to notify the Job owner in the case of Job failure
85
An engineering manager uses a Databricks SQL query to monitor ingestion latency for each data
source. The manager checks the results of the query every day, but they are manually rerunning the
query each day and waiting for the results.
Which of the following approaches can the manager use to ensure the results of the query are
updated each day?
A. They can schedule the query to refresh every 1 day from the SQL endpoint's page in
Databricks SQL.
B. They can schedule the query to refresh every 12 hours from the SQL endpoint's page in
Databricks SQL.
C. They can schedule the query to refresh every 1 day from the query's page in Databricks
SQL.
D. They can schedule the query to run every 1 day from the Jobs UI.
E. They can schedule the query to run every 12 hours from the Jobs UI.
86
In which of the following scenarios should a data engineer select a Task in the Depends On field of a
new Databricks Job Task?
B. When another task needs to fail before the new task begins
C. When another task has the same dependency libraries as the new task
E. When another task needs to successfully complete before the new task begins
87
A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the input
data to a data analytics dashboard for a retail use case. The job has a Databricks SQL query that
returns the number of store-level records where sales is equal to zero. The data engineer wants their
entire team to be notified via a messaging webhook whenever this value is greater than 0.
Which of the following approaches can the data engineer use to notify their entire team via a
messaging webhook whenever the number of stores with $0 in sales is greater than zero?
A data engineer wants to schedule their Databricks SQL dashboard to refresh every hour, but they
only want the associated SQL endpoint to be running when it is necessary. The dashboard has
multiple queries on multiple datasets associated with it. The data that feeds the dashboard is
automatically processed using a Databricks Job.
Which of the following approaches can the data engineer use to minimize the total running time of
the SQL endpoint used in the refresh schedule of their dashboard?
A. They can turn on the Auto Stop feature for the SQL endpoint.
B. They can ensure the dashboard's SQL endpoint is not one of the included query's SQL
endpoint.
D. They can ensure the dashboard's SQL endpoint matches each of the queries' SQL
endpoints.
89
A data engineer needs access to a table new_table, but they do not have the correct permissions.
They can ask the table owner for permission, but they do not know who the table owner is.
Which of the following approaches can be used to identify the owner of new_table?
B. All of these options can be used to identify the owner of the table
D. Review the Owner field in the table's page in the cloud storage solution
A new data engineering team has been assigned to an ELT project. The team has its own group, team. The new data engineering
team will need full privileges on the table sales to fully manage the project.
Which of the following commands can be used to grant full permissions on the table to the new
data engineering team?
91
A developer has successfully configured their credentials for Databricks Repos and cloned a remote
Git repository. They do not have privileges to make changes to the main branch, which is the only
branch currently visible in their workspace.
Which approach allows this user to share their code updates without the risk of overwriting the work
of their teammates?
A. Use Repos to checkout all changes and send the git diff log to the team.
B. Use Repos to create a fork of the remote repository, commit all changes, and make a pull
request on the source repository.
C. Use Repos to pull changes from the remote Git repository; commit and push changes to a
branch that appeared as changes were pulled.
D. Use Repos to merge all differences and make a pull request back to the remote repository.
E. Use Repos to create a new branch, commit all changes, and push changes to the remote
Git repository.
92
In order to prevent accidental commits to production data, a senior data engineer has instituted a
policy that all development work will reference clones of Delta Lake tables. After testing both DEEP
and SHALLOW CLONE, development tables are created using SHALLOW CLONE.
A few weeks after initial table creation, the cloned versions of several tables implemented as Type 1
Slowly Changing Dimension (SCD) stop working. The transaction logs for the source tables show that
VACUUM was run the day before.
Which statement describes why the cloned tables are no longer working?
A. Because Type 1 changes overwrite existing records, Delta Lake cannot guarantee data
consistency for cloned tables.
B. Running VACUUM automatically invalidates any shallow clones of a table; DEEP CLONE
should always be used when a cloned table will be repeatedly queried.
C. Tables created with SHALLOW CLONE are automatically deleted after their default
retention threshold of 7 days.
D. The metadata created by the CLONE operation is referencing data files that were purged
as invalid by the VACUUM command.
E. The data files compacted by VACUUM are not tracked by the cloned metadata; running
REFRESH on the cloned table will pull in recent changes.
93
You are performing a join operation to combine values from a static userLookup table with a
streaming DataFrame streamingDF.
Spill occurs as a result of executing various wide transformations. However, diagnosing spill requires
one to proactively look for key indicators.
Where in the Spark UI are two of the primary indicators that a partition is spilling to disk?
A task orchestrator has been configured to run two hourly tasks. First, an outside system writes
Parquet data to a directory mounted at /mnt/raw_orders/. After this data is written, a Databricks job
containing the following code is executed:
Assume that the fields customer_id and order_id serve as a composite key to uniquely identify each
order, and that the time field indicates when the record was queued in the source system.
If the upstream system is known to occasionally enqueue duplicate entries for a single order hours
apart, which statement is correct?
A. Duplicate records enqueued more than 2 hours apart may be retained and the orders
table may contain duplicate records with the same customer_id and order_id.
B. All records will be held in the state store for 2 hours before being deduplicated and
committed to the orders table.
C. The orders table will contain only the most recent 2 hours of records and no duplicates will
be present.
D. Duplicate records arriving more than 2 hours apart will be dropped, but duplicates that
arrive in the same batch may both be written to the orders table.
E. The orders table will not contain duplicates, but records arriving more than 2 hours late
will be ignored and missing from the table.
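Note: the job's code block is not reproduced in this dump. A hedged reconstruction of the pattern the answer options imply (a 2-hour watermark on the time field plus deduplication on the composite key; the schema, checkpoint path, trigger, and target table name are assumptions) is:

from pyspark.sql.types import StructType, StructField, StringType, TimestampType

orders_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("order_id", StringType()),
    StructField("time", TimestampType()),
])

# State for dropDuplicates is only kept as long as the watermark allows, so a
# duplicate enqueued more than 2 hours after the original may not be caught.
query = (
    spark.readStream
    .schema(orders_schema)
    .format("parquet")
    .load("/mnt/raw_orders/")
    .withWatermark("time", "2 hours")
    .dropDuplicates(["customer_id", "order_id"])
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .trigger(availableNow=True)
    .toTable("orders")
)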
96
A junior data engineer is migrating a workload from a relational database system to the Databricks
Lakehouse. The source system uses a star schema, leveraging foreign key constraints and multi-table
inserts to validate records on write.
Which consideration will impact the decisions made by the engineer while migrating this workload?
A. Databricks only allows foreign key constraints on hashed identifiers, which avoid collisions
in highly-parallel writes.
B. Databricks supports Spark SQL and JDBC; all logic can be directly migrated from the source
system without refactoring.
C. Committing to multiple tables simultaneously requires taking out multiple table locks and
can lead to a state of deadlock.
D. All Delta Lake transactions are ACID compliant against a single table, and Databricks does
not enforce foreign key constraints.
E. Foreign keys must reference a primary key field; multi-table inserts must leverage Delta
Lake’s upsert functionality.
97
A data engineer is running code in a Databricks Repo that is cloned from a central Git repository. A
colleague of the data engineer informs them that changes have been made and synced to the central
Git repository. The data engineer now needs to sync their Databricks Repo to get the changes from
the central Git repository.
Which Git operation does the data engineer need to run to accomplish this task?
A. Clone
B. Pull
C. Merge
D. Push
98
A table named user_ltv is being used to create a view that will be used by data analysts on various
teams. Users in the workspace are configured into groups, which are used for setting up data access
using ACLs.
An analyst who is not a member of the auditing group executes the following query:
A. All columns will be displayed normally for those records that have an age greater than 17;
records not meeting this condition will be omitted.
B. All age values less than 18 will be returned as null values, all other columns will be
returned with the values in user_ltv.
C. All values for the age column will be returned as null values, all other columns will be
returned with the values in user_ltv.
D. All records from all columns will be displayed with the values in user_ltv.
E. All columns will be displayed normally for those records that have an age greater than 18;
records not meeting this condition will be omitted.
99
The data governance team is reviewing code used for deleting records for compliance with GDPR. The
following logic has been implemented to propagate delete requests from the user_lookup table to the
user_aggregates table.
Assuming that user_id is a unique identifying key and that all users that have requested deletion have
been removed from the user_lookup table, which statement describes whether successfully executing
the above logic guarantees that the records to be deleted from the user_aggregates table are no
longer accessible and why?
A. No; the Delta Lake DELETE command only provides ACID guarantees when combined with
the MERGE INTO command.
B. No; files containing deleted records may still be accessible with time travel until a
VACUUM command is used to remove invalidated data files.
C. Yes; the change data feed uses foreign keys to ensure delete consistency throughout the
Lakehouse.
D. Yes; Delta Lake ACID guarantees provide assurance that the DELETE command succeeded
fully and permanently purged these records.
E. No; the change data feed only tracks inserts and updates, not deleted records.
100
The data engineering team has been tasked with configuring connections to an external database that
does not have a supported native connector with Databricks. The external database already has data
security configured by group membership. These groups map directly to user groups already created
in Databricks that represent various teams within the company.
A new login credential has been created for each group in the external database. The Databricks
Utilities Secrets module will be used to make these credentials available to Databricks users.
Assuming that all the credentials are configured correctly on the external database and group
membership is properly configured on Databricks, which statement describes how teams can be
granted the minimum necessary access to using these credentials?
A. "Manage" permissions should be set on a secret key mapped to those credentials that will
be used by a given team.
B. "Read" permissions should be set on a secret key mapped to those credentials that will be
used by a given team.
C. "Read" permissions should be set on a secret scope containing only those credentials that
will be used by a given team.
D. "Manage" permissions should be set on a secret scope containing only those credentials
that will be used by a given team.
E. No additional configuration is necessary as long as all users are configured as administrators in the workspace where secrets have been added.
101
A data engineer has realized that the data files associated with a Delta table are incredibly small. They
want to compact the small files to form larger files to improve performance.
Which of the following commands can be used to compact the small files?
A. OPTIMIZE
B. VACUUM
C. COMPACTION
D. REPARTITION
102
A data engineer wants to create a data entity from a couple of tables. The data entity must be used by
other data engineers in other sessions. It also must be saved to a physical location.
Which of the following data entities should the data engineer create?
A. Table
B. Function
C. View
D. Temporary view
103
The Databricks CLI is used to trigger a run of an existing job by passing the job_id parameter. The
response that the job run request has been submitted successfully includes a field run_id.
Which statement describes what the number alongside this field represents?
A. The job_id and number of times the job has been run are concatenated and returned.
B. The total number of jobs that have been run in the workspace.
C. The number of times the job definition has been run in this workspace.
105
The data science team has created and logged a production model using MLflow. The model accepts a
list of column names and returns a new column of type DOUBLE.
The following code correctly imports the production model, loads the customers table containing the
customer_id key column into a DataFrame, and defines the feature columns needed for the model.
Which code block will output a DataFrame with the schema "customer_id LONG, predictions DOUBLE"?
B. df.select("customer_id", model(*columns).alias("predictions"))
C. model.predict(df, columns)
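Note: a hypothetical sketch of the setup this question describes; the model URI and feature column names are placeholders, not details from the question:

import mlflow

# mlflow.pyfunc.spark_udf wraps the logged model as a Spark UDF, so option B's
# df.select(...) applies it column-wise and returns the prediction column.
model = mlflow.pyfunc.spark_udf(spark, model_uri="models:/prod_model/Production")
columns = ["feature_1", "feature_2", "feature_3"]

df = spark.table("customers")
predictions_df = df.select("customer_id", model(*columns).alias("predictions"))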
A data engineer has created a new database using the following command:
In which of the following locations will the customer360 database be located?
A. dbfs:/user/hive/database/customer360
B. dbfs:/user/hive/warehouse
C. dbfs:/user/hive/customer360
D. dbfs:/user/hive/database
107
A data engineer is attempting to drop a Spark SQL table my_table and runs the following command:
After running this command, the engineer notices that the data files and metadata files have been
deleted from the file system.
Which of the following describes why all of these files were deleted?
Which statement describes the default execution mode for Databricks Auto Loader?
A. Cloud vendor-specific queue storage and notification services are configured to track
newly arriving files; the target table is materialized by directly querying all valid files in the source
directory.
B. New files are identified by listing the input directory; the target table is materialized by
directly querying all valid files in the source directory.
C. Webhooks trigger a Databricks job to run anytime new data arrives in a source directory;
new data are automatically merged into target tables using rules inferred from the data.
D. New files are identified by listing the input directory; new files are incrementally and
idempotently loaded into the target Delta Lake table.
E. Cloud vendor-specific queue storage and notification services are configured to track
newly arriving files; new files are incrementally and idempotently loaded into the target Delta Lake
table.
109
What is a benefit of creating an external table from Parquet rather than CSV when using a CREATE
TABLE AS SELECT statement?
A large company seeks to implement a near real-time solution involving hundreds of pipelines with
parallel updates of many tables with extremely high volume and high velocity data.
Which of the following solutions would you implement to achieve this requirement?
A. Use Databricks High Concurrency clusters, which leverage optimized cloud storage
connections to maximize data throughput.
B. Partition ingestion tables by a small time duration to allow for many data files to be
written in parallel.
C. Configure Databricks to save all data to attached SSD volumes instead of object storage,
increasing file I/O significantly.
D. Isolate Delta Lake tables in their own storage containers to avoid API limits imposed by
cloud vendors.
E. Store all tables in a single database to ensure that the Databricks Catalyst Metastore can
load balance overall throughput.
111
Which describes a method of installing a Python package scoped at the notebook level to all nodes in
the currently active cluster?
A data engineer is working with two tables. Each of these tables is displayed below in its entirety.
The data engineer runs the following query to join these tables together:
A.
B.
C.
D.
113
You are a retailer that wants to integrate your online sales capabilities with different in-home
assistants, such as Google Home. You need to interpret customer voice commands and issue an order
to the backend systems. Which solution should you choose?
A. Speech-to-Text API
114
A data engineer that is new to using Python needs to create a Python function to add two integers
together and return the sum.
Which code block can the data engineer use to complete this task?
A.
B.
C.
D.
EXTRA
A data engineer needs to determine whether to use the built-in Databricks Notebooks versioning or
version their project using Databricks Repos. Which of the following is an advantage of using
Databricks Repos over the Databricks Notebooks versioning?