SlideShare a Scribd company logo
Databricks Data
Engineer Associate
Certification Questions
A data engineer wants to create a data entity from a couple of tables. The data entity must be
used by other data engineers in other sessions. It also must be saved to a physical location.
Which of the following data entities should the data engineer create?
•A. Table
•B. Function
•C. View
•D. Temporary view
A dataset has been defined using Delta Live Tables and includes an expectations clause:
CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION DROP ROW
What is the expected behavior when a batch of data containing data that violates these
constraints is processed?
•A. Records that violate the expectation cause the job to fail.
•B. Records that violate the expectation are added to the target dataset and flagged as invalid in a
field added to the target dataset.
•C. Records that violate the expectation are dropped from the target dataset and recorded as
invalid in the event log.
•D. Records that violate the expectation are added to the target dataset and recorded as invalid in
the event log.
A data organization leader is upset about the data analysis team’s reports being different from
the data engineering team’s reports. The leader believes the siloed nature of their organization’s
data engineering and data analysis architectures is to blame.
Which of the following describes how a data lakehouse could alleviate this issue?
•A. Both teams would autoscale their work as data size evolves
•B. Both teams would use the same source of truth for their work
•C. Both teams would reorganize to report to the same department
•D. Both teams would be able to collaborate on projects in real-time
•E. Both teams would respond more quickly to ad-hoc requests
Which of the following describes a scenario in which a data team will want to utilize cluster pools?
•A. An automated report needs to be refreshed as quickly as possible.
•B. An automated report needs to be made reproducible.
•C. An automated report needs to be tested to identify errors.
•D. An automated report needs to be version-controlled across multiple collaborators.
•E. An automated report needs to be runnable by all stakeholders.
Which of the following is hosted completely in the control plane of the classic Databricks
architecture?
•A. Worker node
•B. JDBC data source
•C. Databricks web application
•D. Databricks Filesystem
•E. Driver node
Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta
Lake?
•A. The ability to manipulate the same data using a variety of languages
•B. The ability to collaborate in real time on a single notebook
•C. The ability to set up alerts for query failures
•D. The ability to support batch and streaming workloads
•E. The ability to distribute complex data operations
Which of the following describes the storage organization of a Delta table?
•A. Delta tables are stored in a single file that contains data, history, metadata, and other
attributes.
•B. Delta tables store their data in a single file and all metadata in a collection of files in a
separate location.
•C. Delta tables are stored in a collection of files that contain data, history, metadata, and
other attributes.
•D. Delta tables are stored in a collection of files that contain only the data stored within the table.
•E. Delta tables are stored in a single file that contains only the data stored within the table.
Which of the following code blocks will remove the rows where the value in column age is greater
than 25 from the existing Delta table my_table and save the updated table?
•A. SELECT * FROM my_table WHERE age > 25;
•B. UPDATE my_table WHERE age > 25;
•C. DELETE FROM my_table WHERE age > 25;
•D. UPDATE my_table WHERE age <= 25;
•E. DELETE FROM my_table WHERE age <= 25;
A data engineer has realized that they made a mistake when making a daily update to a table.
They need to use Delta time travel to restore the table to a version that is 3 days old. However,
when the data engineer attempts to time travel to the older version, they are unable to restore
the data because the data files have been deleted.
Which of the following explains why the data files are no longer present?
•A. The VACUUM command was run on the table
•B. The TIME TRAVEL command was run on the table
•C. The DELETE HISTORY command was run on the table
•D. The OPTIMIZE command was nun on the table
•E. The HISTORY command was run on the table
Which of the following Git operations must be performed outside of Databricks Repos?
•A. Commit
•B. Pull
•C. Push
•D. Clone
•E. Merge
Which of the following data lakehouse features results in improved data quality over a traditional
data lake?
•A. A data lakehouse provides storage solutions for structured and unstructured data.
•B. A data lakehouse supports ACID-compliant transactions. Most Voted
•C. A data lakehouse allows the use of SQL queries to examine data.
•D. A data lakehouse stores data in open formats.
•E. A data lakehouse enables machine learning and artificial Intelligence workloads.
A data engineer needs to determine whether to use the built-in Databricks Notebooks versioning
or version their project using Databricks Repos.
Which of the following is an advantage of using Databricks Repos over the Databricks Notebooks
versioning?
•A. Databricks Repos automatically saves development progress
•B. Databricks Repos supports the use of multiple branches
•C. Databricks Repos allows users to revert to previous versions of a notebook
•D. Databricks Repos provides the ability to comment on specific changes
•E. Databricks Repos is wholly housed within the Databricks Lakehouse Platform
A data engineer has left the organization. The data team needs to transfer ownership of the data
engineer’s Delta tables to a new data engineer. The new data engineer is the lead engineer on the
data team.
Assuming the original data engineer no longer has access, which of the following individuals
must be the one to transfer ownership of the Delta tables in Data Explorer?
•A. Databricks account representative
•B. This transfer is not possible
•C. Workspace administrator
•D. New lead data engineer
•E. Original data engineer
A data analyst has created a Delta table sales that is used by the entire data analysis team. They
want help from the data engineering team to implement a series of tests to ensure the data is
clean. However, the data engineering team uses Python for its tests rather than SQL.
Which of the following commands could the data engineering team use to access sales in
PySpark?
•A. SELECT * FROM sales
•B. There is no way to share data between PySpark and SQL.
•C. spark.sql("sales")
•D. spark.delta.table("sales")
•E. spark.table("sales")
Which of the following commands will return the location of database customer360?
•A. DESCRIBE LOCATION customer360;
•B. DROP DATABASE customer360;
•C. DESCRIBE DATABASE customer360;
•D. ALTER DATABASE customer360 SET DBPROPERTIES ('location' = '/user'};
•E. USE DATABASE customer360;
A data engineer wants to create a new table containing the names of customers that live in
France.
They have written the following command:
A senior data engineer mentions that it is organization policy to include a table property
indicating that the new table includes personally identifiable information (PII).
Which of the following lines of code fills in the above blank to successfully complete the task?
•A. There is no way to indicate whether a table contains PII.
•B. "COMMENT PII"
•C. TBLPROPERTIES PII
•D. COMMENT "Contains PII"
•E. PII
Which of the following benefits is provided by the array functions from Spark SQL?
•A. An ability to work with data in a variety of types at once
•B. An ability to work with data within certain partitions and windows
•C. An ability to work with time-related data in specified intervals
•D. An ability to work with complex, nested data ingested from JSON files
•E. An ability to work with an array of tables for procedural automation
Which of the following commands can be used to write data into a Delta table while avoiding the
writing of duplicate records?
•A. DROP
•B. IGNORE
•C. MERGE
•D. APPEND
•E. INSERT
A data engineer needs to apply custom logic to string column city in table stores for a specific use
case. In order to apply this custom logic at scale, the data engineer wants to create a SQL user-
defined function (UDF). Which of the following code blocks creates this SQL UDF?
A.
•
B.
•
C.
•
D.
•
E.
A data analyst has a series of queries in a SQL program. The data analyst wants this program to
run every day. They only want the final query in the program to run on Sundays. They ask for help
from the data engineering team to complete this task.
Which of the following approaches could be used by the data engineering team to complete this
task?
•A. They could submit a feature request with Databricks to add this functionality.
•B. They could wrap the queries using PySpark and use Python’s control flow system to
determine when to run the final query.
•C. They could only run the entire program on Sundays.
•D. They could automatically restrict access to the source table in the final query so that it is only
accessible on Sundays.
•E. They could redesign the data model to separate the data used in the final query into a new
table.
A data engineer runs a statement every day to copy the previous day’s sales into the table
transactions. Each day’s sales are in their own file in the location "/transactions/raw".
Today, the data engineer runs the following command to complete this task:
After running the command today, the data engineer notices that the number of records in table
transactions has not changed.
Which of the following describes why the statement might not have copied any new records into
the table?
•A. The format of the files to be copied were not included with the FORMAT_OPTIONS keyword.
•B. The names of the files to be copied were not included with the FILES keyword.
•C. The previous day’s file has already been copied into the table.
•D. The PARQUET file format does not support COPY INTO.
•E. The COPY INTO statement requires the table to be refreshed to view the copied rows.
A data engineer needs to create a table in Databricks using data from their organization’s existing
SQLite database.
They run the following command:
Which of the following lines of code fills in the above blank to successfully complete the task?
•A. org.apache.spark.sql.jdbc
•B. autoloader
•C. DELTA
•D. sqlite
•E. org.apache.spark.sql.sqlite
A data engineering team has two tables. The first table march_transactions is a collection of all retail
transactions in the month of March. The second table april_transactions is a collection of all retail
transactions in the month of April. There are no duplicate records between the tables.
Which of the following commands should be run to create a new table all_transactions that contains all
records from march_transactions and april_transactions without duplicate records?
•A. CREATE TABLE all_transactions AS
SELECT * FROM march_transactions
INNER JOIN SELECT * FROM april_transactions;
•B. CREATE TABLE all_transactions AS
SELECT * FROM march_transactions
UNION SELECT * FROM april_transactions;
•C. CREATE TABLE all_transactions AS
SELECT * FROM march_transactions
OUTER JOIN SELECT * FROM april_transactions;
•D. CREATE TABLE all_transactions AS
SELECT * FROM march_transactions
INTERSECT SELECT * from april_transactions;
•E. CREATE TABLE all_transactions AS
SELECT * FROM march_transactions
MERGE SELECT * FROM april_transactions;
A data engineer only wants to execute the final block of a Python program if the Python variable
day_of_week is equal to 1 and the Python variable review_period is True.
Which of the following control flow statements should the data engineer use to begin this
conditionally executed code block?
•A. if day_of_week = 1 and review_period:
•B. if day_of_week = 1 and review_period = "True":
•C. if day_of_week == 1 and review_period == "True":
•D. if day_of_week == 1 and review_period:
•E. if day_of_week = 1 & review_period: = "True":
A data engineer is attempting to drop a Spark SQL table my_table. The data engineer wants to
delete all table metadata and data.
They run the following command:
DROP TABLE IF EXISTS my_table -
While the object no longer appears when they run SHOW TABLES, the data files still exist.
Which of the following describes why the data files still exist and the metadata files were deleted?
•A. The table’s data was larger than 10 GB
•B. The table’s data was smaller than 10 GB
•C. The table was external
•D. The table did not have a location
•E. The table was managed
A data engineer wants to create a data entity from a couple of tables. The data entity must be
used by other data engineers in other sessions. It also must be saved to a physical location.
Which of the following data entities should the data engineer create?
•A. Database
•B. Function
•C. View
•D. Temporary view
•E. Table
A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices
that the source data is starting to have a lower level of quality. The data engineer would like to
automate the process of monitoring the quality level.
Which of the following tools can the data engineer use to solve this problem?
•A. Unity Catalog
•B. Data Explorer
•C. Delta Lake
•D. Delta Live Tables
•E. Auto Loader
A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three
datasets are defined against Delta Lake table sources using LIVE TABLE.
The table is configured to run in Production mode using the Continuous Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid, what is the expected
outcome after clicking Start to update the pipeline?
•A. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist to allow for additional testing.
•B. All datasets will be updated once and the pipeline will persist without any processing. The
compute resources will persist but go unused.
•C. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will be deployed for the update and terminated when the pipeline is stopped.
•D. All datasets will be updated once and the pipeline will shut down. The compute resources will
be terminated.
•E. All datasets will be updated once and the pipeline will shut down. The compute resources will
persist to allow for additional testing.
In order for Structured Streaming to reliably track the exact progress of the processing so that it
can handle any kind of failure by restarting and/or reprocessing, which of the following two
approaches is used by Spark to record the offset range of the data being processed in each
trigger?
•A. Checkpointing and Write-ahead Logs
•B. Structured Streaming cannot record the offset range of the data being processed in each
trigger.
•C. Replayable Sources and Idempotent Sinks
•D. Write-ahead Logs and Idempotent Sinks
•E. Checkpointing and Idempotent Sinks
Which of the following describes the relationship between Gold tables and Silver tables?
•A. Gold tables are more likely to contain aggregations than Silver tables.
•B. Gold tables are more likely to contain valuable data than Silver tables.
•C. Gold tables are more likely to contain a less refined view of data than Silver tables.
•D. Gold tables are more likely to contain more data than Silver tables.
•E. Gold tables are more likely to contain truthful data than Silver tables.
Which of the following describes the relationship between Bronze tables and raw data?
•A. Bronze tables contain less data than raw data files.
•B. Bronze tables contain more truthful data than raw data.
•C. Bronze tables contain aggregates while raw data is unaggregated.
•D. Bronze tables contain a less refined view of data than raw data.
•E. Bronze tables contain raw data with a schema applied.
Which of the following tools is used by Auto Loader process data incrementally?
•A. Checkpointing
•B. Spark Structured Streaming
•C. Data Explorer
•D. Unity Catalog
•E. Databricks SQL
A data engineer has configured a Structured Streaming job to read from a table, manipulate the
data, and then perform a streaming write into a new table.
The cade block used by the data engineer is below:
If the data engineer only wants the query to execute a micro-batch to process data every 5
seconds, which of the following lines of code should the data engineer use to fill in the blank?
•A. trigger("5 seconds")
•B. trigger()
•C. trigger(once="5 seconds")
•D. trigger(processingTime="5 seconds")
•E. trigger(continuous="5 seconds")
A dataset has been defined using Delta Live Tables and includes an expectations clause:
CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION DROP ROW
What is the expected behavior when a batch of data containing data that violates these
constraints is processed?
•A. Records that violate the expectation are dropped from the target dataset and loaded into a
quarantine table.
•B. Records that violate the expectation are added to the target dataset and flagged as invalid in a
field added to the target dataset.
•C. Records that violate the expectation are dropped from the target dataset and recorded
as invalid in the event log.
•D. Records that violate the expectation are added to the target dataset and recorded as invalid in
the event log.
•E. Records that violate the expectation cause the job to fail.
A data engineer is working with two tables. Each of these tables is displayed below in its entirety.
The data engineer runs the following
query to join these tables together:
Which of the following will be returned
by the above query?
A
B
C
D
E
A data engineer and data analyst are working together on a data pipeline. The data engineer is
working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is
working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming
input. They now want to migrate their pipeline to use Delta Live Tables.
Which of the following changes will need to be made to the pipeline when migrating to Delta Live
Tables?
•A. None of these changes will need to be made
•B. The pipeline will need to stop using the medallion-based multi-hop architecture
•C. The pipeline will need to be written entirely in SQL
•D. The pipeline will need to use a batch source in place of a streaming source
•E. The pipeline will need to be written entirely in Python
Which of the following must be specified when creating a new Delta Live Tables pipeline?
•A. A key-value pair configuration
•B. The preferred DBU/hour cost
•C. A path to cloud storage location for the written data
•D. A location of a target database for the written data
•E. At least one notebook library to be executed
Which of the following code blocks will remove the rows where the value in column age is greater
than 25 from the existing Delta table my_table and save the updated table?
•A. SELECT * FROM my_table WHERE age > 25;
•B. UPDATE my_table WHERE age > 25;
•C. DELETE FROM my_table WHERE age > 25;
•D. UPDATE my_table WHERE age <= 25;
•E. DELETE FROM my_table WHERE age <= 25;
Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta
Lake?
•A. The ability to manipulate the same data using a variety of languages
•B. The ability to collaborate in real time on a single notebook
•C. The ability to set up alerts for query failures
•D. The ability to support batch and streaming workloads
•E. The ability to distribute complex data operations
A data engineer has a single-task Job that runs each morning before they begin working. After
identifying an upstream data issue, they need to set up another task to run a new notebook prior
to the original task.
Which approach can the data engineer use to set up the new task?
•A. They can clone the existing task in the existing Job and update it to run the new notebook.
•B. They can create a new task in the existing Job and then add it as a dependency of the
original task.
•C. They can create a new task in the existing Job and then add the original task as a dependency
of the new task.
•D. They can create a new job from scratch and add both tasks to run concurrently.
A data engineer that is new to using Python needs to create a Python function to add two
integers together and return the sum?
Which code block can the data engineer use to complete this task?
•
A.
•
B.
•
C.
•
D.
A new data engineering team team has been assigned to an ELT project. The new data
engineering team will need full privileges on the table sales to fully manage the project.
Which of the following commands can be used to grant full permissions on the database to the
new data engineering team?
•A. GRANT ALL PRIVILEGES ON TABLE sales TO team;
•B. GRANT SELECT CREATE MODIFY ON TABLE sales TO team;
•C. GRANT SELECT ON TABLE sales TO team;
•D. GRANT USAGE ON TABLE sales TO team;
•E. GRANT ALL PRIVILEGES ON TABLE team TO sales;
A data engineer wants to schedule their Databricks SQL dashboard to refresh every hour, but they
only want the associated SQL endpoint to be running when it is necessary. The dashboard has
multiple queries on multiple datasets associated with it. The data that feeds the dashboard is
automatically processed using a Databricks Job.
Which of the following approaches can the data engineer use to minimize the total running time
of the SQL endpoint used in the refresh schedule of their dashboard?
•A. They can turn on the Auto Stop feature for the SQL endpoint.
•B. They can ensure the dashboard's SQL endpoint is not one of the included query's SQL
endpoint.
•C. They can reduce the cluster size of the SQL endpoint.
•D. They can ensure the dashboard's SQL endpoint matches each of the queries' SQL endpoints.
•E. They can set up the dashboard's SQL endpoint to be serverless.
A data analyst has a series of queries in a SQL program. The data analyst wants this program to
run every day. They only want the final query in the program to run on Sundays. They ask for help
from the data engineering team to complete this task.
Which of the following approaches could be used by the data engineering team to complete this
task?
•A. They could submit a feature request with Databricks to add this functionality.
•B. They could wrap the queries using PySpark and use Python’s control flow system to
determine when to run the final query.
•C. They could only run the entire program on Sundays.
•D. They could automatically restrict access to the source table in the final query so that it is only
accessible on Sundays.
•E. They could redesign the data model to separate the data used in the final query into a new
table.
Which of the following describes a benefit of creating an external table from Parquet rather than
CSV when using a CREATE TABLE AS SELECT statement?
•A. Parquet files can be partitioned
•B. CREATE TABLE AS SELECT statements cannot be used on files
•C. Parquet files have a well-defined schema
•D. Parquet files have the ability to be optimized
•E. Parquet files will become Delta tables
What is a benefit of creating an external table from Parquet rather than CSV when using a CREATE
TABLE AS SELECT statement?
•A. Parquet files can be partitioned
•B. Parquet files will become Delta tables
•C. Parquet files have a well-defined schema
•D. Parquet files have the ability to be optimized
A data engineer has created a new database using the following command:
CREATE DATABASE IF NOT EXISTS customer360;
In which location will the customer360 database be located?
•A. dbfs:/user/hive/database/customer360
•B. dbfs:/user/hive/warehouse
•C. dbfs:/user/hive/customer360
•D. dbfs:/user/hive/database
A data analyst has created a Delta table sales that is used by the entire data analysis team. They
want help from the data engineering team to implement a series of tests to ensure the data is
clean. However, the data engineering team uses Python for its tests rather than SQL.
Which command could the data engineering team use to access sales in PySpark?
•A. SELECT * FROM sales
•B. spark.table("sales")
•C. spark.sql("sales")
•D. spark.delta.table("sales")
A data engineer has been given a new record of data:
id STRING = 'a1'
rank INTEGER = 6
rating FLOAT = 9.4
Which SQL commands can be used to append the new record to an existing Delta table my_table?
•A. INSERT INTO my_table VALUES ('a1', 6, 9.4)
•B. INSERT VALUES ('a1', 6, 9.4) INTO my_table
•C. UPDATE my_table VALUES ('a1', 6, 9.4)
•D. UPDATE VALUES ('a1', 6, 9.4) my_table
A data architect has determined that a table of the following format is necessary:
Which code block is used by SQL DDL command to create an empty Delta table in the above
format regardless of whether a table already exists with this name?
•A. CREATE OR REPLACE TABLE table_name ( employeeId STRING, startDate DATE, avgRating
FLOAT )
•B. CREATE OR REPLACE TABLE table_name WITH COLUMNS ( employeeId STRING, startDate DATE,
avgRating FLOAT ) USING DELTA
•C. CREATE TABLE IF NOT EXISTS table_name ( employeeId STRING, startDate DATE, avgRating
FLOAT )
•D. CREATE TABLE table_name AS SELECT employeeId STRING, startDate DATE, avgRating FLOAT
A data engineer is running code in a Databricks Repo that is cloned from a central Git repository.
A colleague of the data engineer informs them that changes have been made and synced to the
central Git repository. The data engineer now needs to sync their Databricks Repo to get the
changes from the central Git repository.
Which Git operation does the data engineer need to run to accomplish this task?
•A. Clone
•B. Pull
•C. Merge
•D. Push
Which file format is used for storing Delta Lake Table?
•A. CSV
•B. Parquet
•C. JSON
•D. Delta
A data engineer has joined an existing project and they see the following query in the project
repository:
CREATE STREAMING LIVE TABLE loyal_customers AS
SELECT customer_id -
FROM STREAM(LIVE.customers)
WHERE loyalty_level = 'high';
Which of the following describes why the STREAM function is included in the query?
•A. The STREAM function is not needed and will cause an error.
•B. The table being created is a live table.
•C. The customers table is a streaming live table.
•D. The customers table is a reference to a Structured Streaming query on a PySpark DataFrame.
•E. The data in the customers table has been updated since its last run.
A data engineer has a single-task Job that runs each morning before they begin working. After
identifying an upstream data issue, they need to set up another task to run a new notebook prior
to the original task.
Which of the following approaches can the data engineer use to set up the new task?
•A. They can clone the existing task in the existing Job and update it to run the new notebook.
•B. They can create a new task in the existing Job and then add it as a dependency of the
original task.
•C. They can create a new task in the existing Job and then add the original task as a dependency
of the new task.
•D. They can create a new job from scratch and add both tasks to run concurrently.
•E. They can clone the existing task to a new Job and then edit it to run the new notebook.
An engineering manager wants to monitor the performance of a recent project using a
Databricks SQL query. For the first week following the project’s release, the manager wants the
query results to be updated every minute. However, the manager is concerned that the compute
resources used for the query will be left running and cost the organization a lot of money beyond
the first week of the project’s release.
Which of the following approaches can the engineering team use to ensure the query does not
cost the organization any money beyond the first week of the project’s release?
•A. They can set a limit to the number of DBUs that are consumed by the SQL Endpoint.
•B. They can set the query’s refresh schedule to end after a certain number of refreshes.
•C. They cannot ensure the query does not cost the organization money beyond the first week of
the project’s release.
•D. They can set a limit to the number of individuals that are able to manage the query’s refresh
schedule.
•E. They can set the query’s refresh schedule to end on a certain date in the query
scheduler.
Which of the following benefits is provided by the array functions from Spark SQL?
•A. An ability to work with data in a variety of types at once
•B. An ability to work with data within certain partitions and windows
•C. An ability to work with time-related data in specified intervals
•D. An ability to work with complex, nested data ingested from JSON files
•E. An ability to work with an array of tables for procedural automation
In order for Structured Streaming to reliably track the exact progress of the processing so that it
can handle any kind of failure by restarting and/or reprocessing, which of the following two
approaches is used by Spark to record the offset range of the data being processed in each
trigger?
•A. Checkpointing and Write-ahead Logs
•B. Structured Streaming cannot record the offset range of the data being processed in each
trigger.
•C. Replayable Sources and Idempotent Sinks
•D. Write-ahead Logs and Idempotent Sinks
•E. Checkpointing and Idempotent Sinks
Which of the following statements regarding the relationship between Silver tables and Bronze
tables is always true?
•A. Silver tables contain a less refined, less clean view of data than Bronze data.
•B. Silver tables contain aggregates while Bronze data is unaggregated.
•C. Silver tables contain more data than Bronze tables.
•D. Silver tables contain a more refined and cleaner view of data than Bronze tables.
•E. Silver tables contain less data than Bronze tables.
A dataset has been defined using Delta Live Tables and includes an expectations clause:
CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION FAIL UPDATE
What is the expected behavior when a batch of data containing data that violates these
constraints is processed?
•A. Records that violate the expectation are dropped from the target dataset and recorded as
invalid in the event log.
•B. Records that violate the expectation cause the job to fail.
•C. Records that violate the expectation are dropped from the target dataset and loaded into a
quarantine table.
•D. Records that violate the expectation are added to the target dataset and recorded as invalid in
the event log.
•E. Records that violate the expectation are added to the target dataset and flagged as invalid in a
field added to the target dataset.
Which of the following queries is performing a streaming hop from raw data to a Bronze table?
A data engineer is using the following code block as part of a batch ingestion pipeline to read
from a composable table:
Which of the following changes needs to be made so this code block will work when the
transactions table is a stream source?
•A. Replace predict with a stream-friendly prediction function
•B. Replace schema(schema) with option ("maxFilesPerTrigger", 1)
•C. Replace "transactions" with the path to the location of the Delta table
•D. Replace format("delta") with format("stream")
•E. Replace spark.read with spark.readStream
Which of the following describes the type of workloads that are always compatible with Auto
Loader?
•A. Streaming workloads
•B. Machine learning workloads
•C. Serverless workloads
•D. Batch workloads
•E. Dashboard workloads
A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three
datasets are defined against Delta Lake table sources using LIVE TABLE.
The table is configured to run in Development mode using the Continuous Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid, what is the expected
outcome after clicking Start to update the pipeline?
•A. All datasets will be updated once and the pipeline will shut down. The compute resources will
be terminated.
•B. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist until the pipeline is shut down.
•C. All datasets will be updated once and the pipeline will persist without any processing. The
compute resources will persist but go unused.
•D. All datasets will be updated once and the pipeline will shut down. The compute resources will
persist to allow for additional testing.
•E. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist to allow for additional testing.
A data engineer has configured a Structured Streaming job to read from a table, manipulate the
data, and then perform a streaming write into a new table.
The code block used by the data engineer is below:
If the data engineer only wants the query to process all of the available data in as many batches
as required, which of the following lines of code should the data engineer use to fill in the blank?
•A. processingTime(1)
•B. trigger(availableNow=True)
•C. trigger(parallelBatch=True)
•D. trigger(processingTime="once")
•E. trigger(continuous="once")
A data engineer has a Python variable table_name that they would like to use in a SQL query.
They want to construct a Python code block that will run the query using table_name.
They have the following incomplete code block:
____(f"SELECT customer_id, spend FROM {table_name}")
Which of the following can be used to fill in the blank to successfully complete the task?
•A. spark.delta.sql
•B. spark.delta.table
•C. spark.table
•D. dbutils.sql
•E. spark.sql
A data engineer needs to apply custom logic to identify employees with more than 5 years of
experience in array column employees in table stores. The custom logic should create a new
column exp_employees that is an array of all of the employees with more than 5 years of
experience for each row. In order to apply this custom logic at scale, the data engineer wants to
use the FILTER higher-order function.
Which of the following code blocks successfully completes this task?
A data analyst has developed a query that runs against Delta table. They want help from the data
engineering team to implement a series of tests to ensure the data returned by the query is
clean. However, the data engineering team uses Python for its tests rather than SQL.
the following operations could the data engineering team use to run the query and operate with
the results in PySpark?
Which of
•A. SELECT * FROM sales
•B. spark.delta.table
•C. spark.sql
•D. There is no way to share data between PySpark and SQL.
•E. spark.table
Which of the following commands will return the number of null values in the member_id
column?
•A. SELECT count(member_id) FROM my_table;
•B. SELECT count(member_id) - count_null(member_id) FROM my_table;
•C. SELECT count_if(member_id IS NULL) FROM my_table;
•D. SELECT null(member_id) FROM my_table;
•E. SELECT count_null(member_id) FROM my_table;
Which of the following SQL keywords can be used to convert a table from a long format to a wide
format?
•A. TRANSFORM
•B. PIVOT
•C. SUM
•D. CONVERT
•E. WHERE
A data architect has determined that a table of the following format is necessary:
Which of the following code blocks uses SQL DDL commands to create an empty Delta table in
the above format regardless of whether a table already exists with this name?
Which of the following can be used to simplify and unify siloed data architectures that are
specialized for specific use cases?
•A. None of these
•B. Data lake
•C. Data warehouse
•D. All of these
•E. Data lakehouse
Which of the following is stored in the Databricks customer's cloud account?
•A. Databricks web application
•B. Cluster management metadata
•C. Repos
•D. Data
•E. Notebooks
In which of the following file formats is data from Delta Lake tables primarily stored?
•A. Delta
•B. CSV
•C. Parquet
•D. JSON
•E. A proprietary, optimized format specific to Databricks
A data engineer has realized that the data files associated with a Delta table are incredibly small.
They want to compact the small files to form larger files to improve performance.
Which of the following keywords can be used to compact the small files?
•A. REDUCE
•B. OPTIMIZE
•C. COMPACTION
•D. REPARTITION
•E. VACUUM
Which of the following describes a scenario in which a data engineer will want to use a single-
node cluster?
•A. When they are working interactively with a small amount of data
•B. When they are running automated reports to be refreshed as quickly as possible
•C. When they are working with SQL within Databricks SQL
•D. When they are concerned about the ability to automatically scale with larger data
•E. When they are manually running reports with a large amount of data
A data engineer needs to use a Delta table as part of a data pipeline, but they do not know if they
have the appropriate permissions.
In which of the following locations can the data engineer review their permissions on the table?
•A. Databricks Filesystem
•B. Jobs
•C. Dashboards
•D. Repos
•E. Data Explorer
Which of the following is a benefit of the Databricks Lakehouse Platform embracing open source
technologies?
•A. Cloud-specific integrations
•B. Simplified governance
•C. Ability to scale storage
•D. Ability to scale workloads
•E. Avoiding vendor lock-in
A new data engineering team team. has been assigned to an ELT project. The new data
engineering team will need full privileges on the database customers to fully manage the project.
Which of the following commands can be used to grant full permissions on the database to the
new data engineering team?
•A. GRANT USAGE ON DATABASE customers TO team;
•B. GRANT ALL PRIVILEGES ON DATABASE team TO customers;
•C. GRANT SELECT PRIVILEGES ON DATABASE customers TO teams;
•D. GRANT SELECT CREATE MODIFY USAGE PRIVILEGES ON DATABASE customers TO team;
•E. GRANT ALL PRIVILEGES ON DATABASE customers TO team;
A data engineer has a Job with multiple tasks that runs nightly. Each of the tasks runs slowly
because the clusters take a long time to start.
Which of the following actions can the data engineer perform to improve the start up time for the
clusters used for the Job?
•A. They can use endpoints available in Databricks SQL
•B. They can use jobs clusters instead of all-purpose clusters
•C. They can configure the clusters to be single-node
•D. They can use clusters that are from a cluster pool
•E. They can configure the clusters to autoscale for larger data sizes
A single Job runs two notebooks as two separate tasks. A data engineer has noticed that one of
the notebooks is running slowly in the Job’s current run. The data engineer asks a tech lead for
help in identifying why this might be the case.
Which of the following approaches can the tech lead use to identify why the notebook is running
slowly as part of the Job?
•A. They can navigate to the Runs tab in the Jobs UI to immediately review the processing
notebook.
•B. They can navigate to the Tasks tab in the Jobs UI and click on the active run to review the
processing notebook.
•C. They can navigate to the Runs tab in the Jobs UI and click on the active run to review the
processing notebook.
•D. There is no way to determine why a Job task is running slowly.
•E. They can navigate to the Tasks tab in the Jobs UI to immediately review the processing
notebook.
A data analysis team has noticed that their Databricks SQL queries are running too slowly when
connected to their always-on SQL endpoint. They claim that this issue is present when many
members of the team are running small queries simultaneously. They ask the data engineering
team for help. The data engineering team notices that each of the team’s queries uses the same
SQL endpoint.
Which of the following approaches can the data engineering team use to improve the latency of
the team’s queries?
•A. They can increase the cluster size of the SQL endpoint.
•B. They can increase the maximum bound of the SQL endpoint’s scaling range.
•C. They can turn on the Auto Stop feature for the SQL endpoint.
•D. They can turn on the Serverless feature for the SQL endpoint.
•E. They can turn on the Serverless feature for the SQL endpoint and change the Spot Instance
Policy to “Reliability Optimized.”
A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the
pipeline to drop invalid records at each table. They notice that some data is being dropped due to
quality concerns at some point in the DLT pipeline. They would like to determine at which table in
their pipeline the data is being dropped.
Which of the following approaches can the data engineer take to identify the table that is
dropping the records?
•A. They can set up separate expectations for each table when developing their DLT pipeline.
•B. They cannot determine which table is dropping the records.
•C. They can set up DLT to notify them via email when records are dropped.
•D. They can navigate to the DLT pipeline page, click on each table, and view the data
quality statistics.
•E. They can navigate to the DLT pipeline page, click on the “Error” button, and review the present
errors.
Which of the following Structured Streaming queries is performing a hop from a Silver table to a
Gold table?
A data engineer is designing a data pipeline. The source system generates files in a shared
directory that is also used by other processes. As a result, the files should be kept as is and will
accumulate in the directory. The data engineer needs to identify which files are new since the
previous run in the pipeline, and set up the pipeline to only ingest those new files with each run.
Which of the following tools can the data engineer use to solve this problem?
•A. Unity Catalog
•B. Delta Lake
•C. Databricks SQL
•D. Data Explorer
•E. Auto Loader
A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three
datasets are defined against Delta Lake table sources using LIVE TABLE.
The table is configured to run in Production mode using the Continuous Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid, what is the expected
outcome after clicking Start to update the pipeline?
•A. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will persist to allow for additional testing.
•B. All datasets will be updated once and the pipeline will persist without any processing. The
compute resources will persist but go unused.
•C. All datasets will be updated at set intervals until the pipeline is shut down. The compute
resources will be deployed for the update and terminated when the pipeline is stopped.
•D. All datasets will be updated once and the pipeline will shut down. The compute resources will
be terminated.
•E. All datasets will be updated once and the pipeline will shut down. The compute resources will
persist to allow for additional testing.
A data analyst has been asked to use the below table sales_table to get the percentage rank of
products within region by the sales:
The result of the query should look like this:
A data engineer needs to create a table in Databricks using data from a CSV file at location
/path/to/csv.
They run the following command:
Which of the following lines of code fills in the above blank to successfully complete the task?
•A. None of these lines of code are needed to successfully complete the task
•B. USING CSV
•C. FROM CSV
•D. USING DELTA
•E. FROM "path/to/csv"
In which of the following scenarios should a data engineer use the MERGE INTO command
instead of the INSERT INTO command?
•A. When the location of the data needs to be changed
•B. When the target table is an external table
•C. When the source table can be deleted
•D. When the target table cannot contain duplicate records
•E. When the source is not a Delta table
A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the
input data to a data analytics dashboard for a retail use case. The job has a Databricks SQL query
that returns the number of store-level records where sales is equal to zero. The data engineer
wants their entire team to be notified via a messaging webhook whenever this value is greater
than 0.
Which of the following approaches can the data engineer use to notify their entire team via a
messaging webhook whenever the number of stores with $0 in sales is greater than zero?
•A. They can set up an Alert with a custom template.
•B. They can set up an Alert with a new email alert destination.
•C. They can set up an Alert with one-time notifications.
•D. They can set up an Alert with a new webhook alert destination.
•E. They can set up an Alert without notifications.
A data engineer has a Python notebook in Databricks, but they need to use SQL to accomplish a
specific task within a cell. They still want all of the other cells to use Python without making any
changes to those cells.
Which of the following describes how the data engineer can use SQL within a cell of their Python
notebook?
•A. It is not possible to use SQL in a Python notebook
•B. They can attach the cell to a SQL endpoint rather than a Databricks cluster
•C. They can simply write SQL syntax in the cell
•D. They can add %sql to the first line of the cell
•E. They can change the default language of the notebook to SQL
An engineering manager uses a Databricks SQL query to monitor ingestion latency for each data
source. The manager checks the results of the query every day, but they are manually rerunning
the query each day and waiting for the results.
Which of the following approaches can the manager use to ensure the results of the query are
updated each day?
•A. They can schedule the query to refresh every 1 day from the SQL endpoint's page in
Databricks SQL.
•B. They can schedule the query to refresh every 12 hours from the SQL endpoint's page in
Databricks SQL.
•C. They can schedule the query to refresh every 1 day from the query's page in Databricks
SQL.
•D. They can schedule the query to run every 1 day from the Jobs UI.
•E. They can schedule the query to run every 12 hours from the Jobs UI.
Which of the following describes when to use the CREATE STREAMING LIVE TABLE (formerly
CREATE INCREMENTAL LIVE TABLE) syntax over the CREATE LIVE TABLE syntax when creating Delta
Live Tables (DLT) tables using SQL?
•A. CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT pipeline
is static.
•B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed
incrementally.
•C. CREATE STREAMING LIVE TABLE is redundant for DLT and it does not need to be used.
•D. CREATE STREAMING LIVE TABLE should be used when data needs to be processed through
complicated aggregations.
•E. CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline is
static.
A data engineer wants to schedule their Databricks SQL dashboard to refresh once per day, but
they only want the associated SQL endpoint to be running when it is necessary.
Which of the following approaches can the data engineer use to minimize the total running time
of the SQL endpoint used in the refresh schedule of their dashboard?
•A. They can ensure the dashboard’s SQL endpoint matches each of the queries’ SQL endpoints.
•B. They can set up the dashboard’s SQL endpoint to be serverless.
•C. They can turn on the Auto Stop feature for the SQL endpoint.
•D. They can reduce the cluster size of the SQL endpoint.
•E. They can ensure the dashboard’s SQL endpoint is not one of the included query’s SQL
endpoint.
A data engineer wants to create a relational object by pulling data from two tables. The relational
object does not need to be used by other data engineers in other sessions. In order to save on
storage costs, the data engineer wants to avoid copying and storing physical data.
Which of the following relational objects should the data engineer create?
•A. Spark SQL Table
•B. View
•C. Database
•D. Temporary view
•E. Delta Table
A data engineer has developed a data pipeline to ingest data from a JSON source using Auto
Loader, but the engineer has not provided any type inference or schema hints in their pipeline.
Upon reviewing the data, the data engineer has noticed that all of the columns in the target table
are of the string type despite some of the fields only including float or boolean values.
Which of the following describes why Auto Loader inferred all of the columns to be of the string
type?
•A. There was a type mismatch between the specific schema and the inferred schema
•B. JSON data is a text-based format
•C. Auto Loader only works with string data
•D. All of the fields had at least one null value
•E. Auto Loader cannot infer the schema of ingested data
A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the
input data to an ELT job. The ELT job has its Databricks SQL query that returns the number of
input records containing unexpected NULL values. The data engineer wants their entire team to
be notified via a messaging webhook whenever this value reaches 100.
Which of the following approaches can the data engineer use to notify their entire team via a
messaging webhook whenever the number of NULL values reaches 100?
•A. They can set up an Alert with a custom template.
•B. They can set up an Alert with a new email alert destination.
•C. They can set up an Alert with a new webhook alert destination.
•D. They can set up an Alert with one-time notifications.
•E. They can set up an Alert without notifications.
Which of the following approaches should be used to send the Databricks Job owner an email in
the case that the Job fails?
•A. Manually programming in an alert system in each cell of the Notebook
•B. Setting up an Alert in the Job page
•C. Setting up an Alert in the Notebook
•D. There is no way to notify the Job owner in the case of Job failure
•E. MLflow Model Registry Webhooks
In which of the following scenarios should a data engineer select a Task in the Depends On field
of a new Databricks Job Task?
•A. When another task needs to be replaced by the new task
•B. When another task needs to fail before the new task begins
•C. When another task has the same dependency libraries as the new task
•D. When another task needs to use as little compute resources as possible
•E. When another task needs to successfully complete before the new task begins
A data engineer needs access to a table new_table, but they do not have the correct permissions.
They can ask the table owner for permission, but they do not know who the table owner is.
Which of the following approaches can be used to identify the owner of new_table?
•A. Review the Permissions tab in the table's page in Data Explorer
•B. All of these options can be used to identify the owner of the table
•C. Review the Owner field in the table's page in Data Explorer
•D. Review the Owner field in the table's page in the cloud storage solution
•E. There is no way to identify the owner of the table
A data engineer has a Job that has a complex run schedule, and they want to transfer that
schedule to other Jobs.
Rather than manually selecting each value in the scheduling form in Databricks, which of the
following tools can the data engineer use to represent and submit the schedule
programmatically?
•A. pyspark.sql.types.DateType
•B. datetime
•C. pyspark.sql.types.TimestampType
•D. Cron syntax
•E. There is no way to represent and submit this information programmatically
Which of the following data workloads will utilize a Gold table as its source?
•A. A job that enriches data by parsing its timestamps into a human-readable format
•B. A job that aggregates uncleaned data to create standard summary statistics
•C. A job that cleans data by removing malformatted records
•D. A job that queries aggregated data designed to feed into a dashboard
•E. A job that ingests raw data from a streaming source into the Lakehouse

More Related Content

PPT
Sql portfolio admin_practicals
Shelli Ciaschini
 
PDF
717629748-Databricks-Certified-Data-Engineer-Professional-Dumps-by-Ball-21-03...
pedelli41
 
PDF
Databricks Data Analyst Associate Exam Dumps 2024.pdf
SkillCertProExams
 
PPTX
Evolutionary database design
Salehein Syed
 
PPTX
Move a successful onpremise oltp application to the cloud
Ike Ellis
 
DOC
Sandeep Grandhi (1)
SANDEEP GRANDHI
 
DOC
Abinitio Experienced resume-Anilkumar
anilkumar kagitha
 
PDF
Essentials of Database Management 1st Edition Hoffer Test Bank
sayfealween98
 
Sql portfolio admin_practicals
Shelli Ciaschini
 
717629748-Databricks-Certified-Data-Engineer-Professional-Dumps-by-Ball-21-03...
pedelli41
 
Databricks Data Analyst Associate Exam Dumps 2024.pdf
SkillCertProExams
 
Evolutionary database design
Salehein Syed
 
Move a successful onpremise oltp application to the cloud
Ike Ellis
 
Sandeep Grandhi (1)
SANDEEP GRANDHI
 
Abinitio Experienced resume-Anilkumar
anilkumar kagitha
 
Essentials of Database Management 1st Edition Hoffer Test Bank
sayfealween98
 

Similar to Databricks-DE-Associate Certification Questions-june-2024.pptx (20)

PDF
Moving to Databricks & Delta
Databricks
 
PDF
Designing Database Solutions for Microsoft SQL Server 2012 2012 Microsoft 70-...
ChristopherBow2
 
PDF
Using Databricks as an Analysis Platform
Databricks
 
PDF
DumpsBoss Your Key to Passing the DP 203 Exam
jackjohnson9842
 
PDF
Top DP-203 Exam Dumps PDF Free Download for Easy Preparation
jackjohnson9842
 
PDF
Ozri 2013 Brisbane, Australia - Geodatabase Efficiencies
Walter Simonazzi
 
PDF
Essentials of Database Management 1st Edition Hoffer Test Bank
kromahmacfie
 
PPTX
AOUG_11Nov2016_Challenges_with_EBS12_2
Sean Braymen
 
PPTX
Industry Keynote at Large Scale Testing Workshop 2015
Wolfgang Gottesheim
 
PPTX
DQ Product Usage Methodology Highlights_v6_ltd
Digendra Vir Singh (DV)
 
PPTX
Webinar: Successful Data Migration to Microsoft Dynamics 365 CRM | InSync
APPSeCONNECT
 
DOC
Alejandro Chico Resume
Alex Chico
 
PPTX
Understanding DB2 Optimizer
terraborealis
 
PDF
DMann-SQLDeveloper4Reporting
David Mann
 
DOC
Tah 03302015 withendclient
Terry Hendrickson
 
PPTX
Relational data modeling trends for transactional applications
Ike Ellis
 
PDF
Essentials of Database Management 1st Edition Hoffer Test Bank
saxlinsitou55
 
PPTX
T sql performance guidelines for better db stress powers
Shehap Elnagar
 
PDF
CCI2017 - Considerations for Migrating Databases to Azure - Gianluca Sartori
walk2talk srl
 
Moving to Databricks & Delta
Databricks
 
Designing Database Solutions for Microsoft SQL Server 2012 2012 Microsoft 70-...
ChristopherBow2
 
Using Databricks as an Analysis Platform
Databricks
 
DumpsBoss Your Key to Passing the DP 203 Exam
jackjohnson9842
 
Top DP-203 Exam Dumps PDF Free Download for Easy Preparation
jackjohnson9842
 
Ozri 2013 Brisbane, Australia - Geodatabase Efficiencies
Walter Simonazzi
 
Essentials of Database Management 1st Edition Hoffer Test Bank
kromahmacfie
 
AOUG_11Nov2016_Challenges_with_EBS12_2
Sean Braymen
 
Industry Keynote at Large Scale Testing Workshop 2015
Wolfgang Gottesheim
 
DQ Product Usage Methodology Highlights_v6_ltd
Digendra Vir Singh (DV)
 
Webinar: Successful Data Migration to Microsoft Dynamics 365 CRM | InSync
APPSeCONNECT
 
Alejandro Chico Resume
Alex Chico
 
Understanding DB2 Optimizer
terraborealis
 
DMann-SQLDeveloper4Reporting
David Mann
 
Tah 03302015 withendclient
Terry Hendrickson
 
Relational data modeling trends for transactional applications
Ike Ellis
 
Essentials of Database Management 1st Edition Hoffer Test Bank
saxlinsitou55
 
T sql performance guidelines for better db stress powers
Shehap Elnagar
 
CCI2017 - Considerations for Migrating Databases to Azure - Gianluca Sartori
walk2talk srl
 
Ad

Recently uploaded (20)

PDF
Company Presentation pada Perusahaan ADB.pdf
didikfahmi
 
PPTX
Extract Transformation Load (3) (1).pptx
revathi148366
 
PPTX
Trading Procedures (1).pptxcffcdddxxddsss
garv794
 
PPTX
Employee Salary Presentation.l based on data science collection of data
barridevakumari2004
 
PDF
Data_Cleaning_Infographic_Series_by_CA_Suvidha_Chaplot.pdf
CA Suvidha Chaplot
 
PPTX
Presentation (1) (1).pptx k8hhfftuiiigff
karthikjagath2005
 
PDF
Taxes Foundatisdcsdcsdon Certificate.pdf
PratyushPrem2
 
PPTX
Pipeline Automatic Leak Detection for Water Distribution Systems
Sione Palu
 
PPTX
Economic Sector Performance Recovery.pptx
yulisbaso2020
 
PDF
A Systems Thinking Approach to Algorithmic Fairness.pdf
Epistamai
 
PPTX
Complete_STATA_Introduction_Beginner.pptx
mbayekebe
 
PPTX
Lecture 1 Intro in Inferential Statistics.pptx
MiraLamuton
 
PPTX
GR3-PPTFINAL (1).pptx 0.91 MbHIHUHUGG,HJGH
DarylArellaga1
 
PDF
The_Future_of_Data_Analytics_by_CA_Suvidha_Chaplot_UPDATED.pdf
CA Suvidha Chaplot
 
PDF
1 Simple and Compound Interest_953c061c981ff8640f0b8e733b245589.pdf
JaexczJol060205
 
PPTX
Presentation1.pptxvhhh. H ycycyyccycycvvv
ItratBatool16
 
PPTX
Data Security Breach: Immediate Action Plan
varmabhuvan266
 
PPTX
Machine Learning Solution for Power Grid Cybersecurity with GraphWavelets
Sione Palu
 
PPTX
Introduction to Biostatistics Presentation.pptx
AtemJoshua
 
PDF
TCP_IP for Programmers ------ slides.pdf
Souhailsouhail5
 
Company Presentation pada Perusahaan ADB.pdf
didikfahmi
 
Extract Transformation Load (3) (1).pptx
revathi148366
 
Trading Procedures (1).pptxcffcdddxxddsss
garv794
 
Employee Salary Presentation.l based on data science collection of data
barridevakumari2004
 
Data_Cleaning_Infographic_Series_by_CA_Suvidha_Chaplot.pdf
CA Suvidha Chaplot
 
Presentation (1) (1).pptx k8hhfftuiiigff
karthikjagath2005
 
Taxes Foundatisdcsdcsdon Certificate.pdf
PratyushPrem2
 
Pipeline Automatic Leak Detection for Water Distribution Systems
Sione Palu
 
Economic Sector Performance Recovery.pptx
yulisbaso2020
 
A Systems Thinking Approach to Algorithmic Fairness.pdf
Epistamai
 
Complete_STATA_Introduction_Beginner.pptx
mbayekebe
 
Lecture 1 Intro in Inferential Statistics.pptx
MiraLamuton
 
GR3-PPTFINAL (1).pptx 0.91 MbHIHUHUGG,HJGH
DarylArellaga1
 
The_Future_of_Data_Analytics_by_CA_Suvidha_Chaplot_UPDATED.pdf
CA Suvidha Chaplot
 
1 Simple and Compound Interest_953c061c981ff8640f0b8e733b245589.pdf
JaexczJol060205
 
Presentation1.pptxvhhh. H ycycyyccycycvvv
ItratBatool16
 
Data Security Breach: Immediate Action Plan
varmabhuvan266
 
Machine Learning Solution for Power Grid Cybersecurity with GraphWavelets
Sione Palu
 
Introduction to Biostatistics Presentation.pptx
AtemJoshua
 
TCP_IP for Programmers ------ slides.pdf
Souhailsouhail5
 
Ad

Databricks-DE-Associate Certification Questions-june-2024.pptx

  • 2. A data engineer wants to create a data entity from a couple of tables. The data entity must be used by other data engineers in other sessions. It also must be saved to a physical location. Which of the following data entities should the data engineer create? •A. Table •B. Function •C. View •D. Temporary view
  • 3. A dataset has been defined using Delta Live Tables and includes an expectations clause: CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION DROP ROW What is the expected behavior when a batch of data containing data that violates these constraints is processed? •A. Records that violate the expectation cause the job to fail. •B. Records that violate the expectation are added to the target dataset and flagged as invalid in a field added to the target dataset. •C. Records that violate the expectation are dropped from the target dataset and recorded as invalid in the event log. •D. Records that violate the expectation are added to the target dataset and recorded as invalid in the event log.
  • 4. A data organization leader is upset about the data analysis team’s reports being different from the data engineering team’s reports. The leader believes the siloed nature of their organization’s data engineering and data analysis architectures is to blame. Which of the following describes how a data lakehouse could alleviate this issue? •A. Both teams would autoscale their work as data size evolves •B. Both teams would use the same source of truth for their work •C. Both teams would reorganize to report to the same department •D. Both teams would be able to collaborate on projects in real-time •E. Both teams would respond more quickly to ad-hoc requests
  • 5. Which of the following describes a scenario in which a data team will want to utilize cluster pools? •A. An automated report needs to be refreshed as quickly as possible. •B. An automated report needs to be made reproducible. •C. An automated report needs to be tested to identify errors. •D. An automated report needs to be version-controlled across multiple collaborators. •E. An automated report needs to be runnable by all stakeholders.
  • 6. Which of the following is hosted completely in the control plane of the classic Databricks architecture? •A. Worker node •B. JDBC data source •C. Databricks web application •D. Databricks Filesystem •E. Driver node
  • 7. Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta Lake? •A. The ability to manipulate the same data using a variety of languages •B. The ability to collaborate in real time on a single notebook •C. The ability to set up alerts for query failures •D. The ability to support batch and streaming workloads •E. The ability to distribute complex data operations
  • 8. Which of the following describes the storage organization of a Delta table? •A. Delta tables are stored in a single file that contains data, history, metadata, and other attributes. •B. Delta tables store their data in a single file and all metadata in a collection of files in a separate location. •C. Delta tables are stored in a collection of files that contain data, history, metadata, and other attributes. •D. Delta tables are stored in a collection of files that contain only the data stored within the table. •E. Delta tables are stored in a single file that contains only the data stored within the table.
  • 9. Which of the following code blocks will remove the rows where the value in column age is greater than 25 from the existing Delta table my_table and save the updated table? •A. SELECT * FROM my_table WHERE age > 25; •B. UPDATE my_table WHERE age > 25; •C. DELETE FROM my_table WHERE age > 25; •D. UPDATE my_table WHERE age <= 25; •E. DELETE FROM my_table WHERE age <= 25;
  • 10. A data engineer has realized that they made a mistake when making a daily update to a table. They need to use Delta time travel to restore the table to a version that is 3 days old. However, when the data engineer attempts to time travel to the older version, they are unable to restore the data because the data files have been deleted. Which of the following explains why the data files are no longer present? •A. The VACUUM command was run on the table •B. The TIME TRAVEL command was run on the table •C. The DELETE HISTORY command was run on the table •D. The OPTIMIZE command was nun on the table •E. The HISTORY command was run on the table
  • 11. Which of the following Git operations must be performed outside of Databricks Repos? •A. Commit •B. Pull •C. Push •D. Clone •E. Merge
  • 12. Which of the following data lakehouse features results in improved data quality over a traditional data lake? •A. A data lakehouse provides storage solutions for structured and unstructured data. •B. A data lakehouse supports ACID-compliant transactions. Most Voted •C. A data lakehouse allows the use of SQL queries to examine data. •D. A data lakehouse stores data in open formats. •E. A data lakehouse enables machine learning and artificial Intelligence workloads.
  • 13. A data engineer needs to determine whether to use the built-in Databricks Notebooks versioning or version their project using Databricks Repos. Which of the following is an advantage of using Databricks Repos over the Databricks Notebooks versioning? •A. Databricks Repos automatically saves development progress •B. Databricks Repos supports the use of multiple branches •C. Databricks Repos allows users to revert to previous versions of a notebook •D. Databricks Repos provides the ability to comment on specific changes •E. Databricks Repos is wholly housed within the Databricks Lakehouse Platform
  • 14. A data engineer has left the organization. The data team needs to transfer ownership of the data engineer’s Delta tables to a new data engineer. The new data engineer is the lead engineer on the data team. Assuming the original data engineer no longer has access, which of the following individuals must be the one to transfer ownership of the Delta tables in Data Explorer? •A. Databricks account representative •B. This transfer is not possible •C. Workspace administrator •D. New lead data engineer •E. Original data engineer
  • 15. A data analyst has created a Delta table sales that is used by the entire data analysis team. They want help from the data engineering team to implement a series of tests to ensure the data is clean. However, the data engineering team uses Python for its tests rather than SQL. Which of the following commands could the data engineering team use to access sales in PySpark? •A. SELECT * FROM sales •B. There is no way to share data between PySpark and SQL. •C. spark.sql("sales") •D. spark.delta.table("sales") •E. spark.table("sales")
  • 16. Which of the following commands will return the location of database customer360? •A. DESCRIBE LOCATION customer360; •B. DROP DATABASE customer360; •C. DESCRIBE DATABASE customer360; •D. ALTER DATABASE customer360 SET DBPROPERTIES ('location' = '/user'}; •E. USE DATABASE customer360;
  • 17. A data engineer wants to create a new table containing the names of customers that live in France. They have written the following command: A senior data engineer mentions that it is organization policy to include a table property indicating that the new table includes personally identifiable information (PII). Which of the following lines of code fills in the above blank to successfully complete the task? •A. There is no way to indicate whether a table contains PII. •B. "COMMENT PII" •C. TBLPROPERTIES PII •D. COMMENT "Contains PII" •E. PII
  • 18. Which of the following benefits is provided by the array functions from Spark SQL? •A. An ability to work with data in a variety of types at once •B. An ability to work with data within certain partitions and windows •C. An ability to work with time-related data in specified intervals •D. An ability to work with complex, nested data ingested from JSON files •E. An ability to work with an array of tables for procedural automation
  • 19. Which of the following commands can be used to write data into a Delta table while avoiding the writing of duplicate records? •A. DROP •B. IGNORE •C. MERGE •D. APPEND •E. INSERT
  • 20. A data engineer needs to apply custom logic to string column city in table stores for a specific use case. In order to apply this custom logic at scale, the data engineer wants to create a SQL user- defined function (UDF). Which of the following code blocks creates this SQL UDF? A. • B. • C. • D. • E.
  • 21. A data analyst has a series of queries in a SQL program. The data analyst wants this program to run every day. They only want the final query in the program to run on Sundays. They ask for help from the data engineering team to complete this task. Which of the following approaches could be used by the data engineering team to complete this task? •A. They could submit a feature request with Databricks to add this functionality. •B. They could wrap the queries using PySpark and use Python’s control flow system to determine when to run the final query. •C. They could only run the entire program on Sundays. •D. They could automatically restrict access to the source table in the final query so that it is only accessible on Sundays. •E. They could redesign the data model to separate the data used in the final query into a new table.
  • 22. A data engineer runs a statement every day to copy the previous day’s sales into the table transactions. Each day’s sales are in their own file in the location "/transactions/raw". Today, the data engineer runs the following command to complete this task: After running the command today, the data engineer notices that the number of records in table transactions has not changed. Which of the following describes why the statement might not have copied any new records into the table? •A. The format of the files to be copied were not included with the FORMAT_OPTIONS keyword. •B. The names of the files to be copied were not included with the FILES keyword. •C. The previous day’s file has already been copied into the table. •D. The PARQUET file format does not support COPY INTO. •E. The COPY INTO statement requires the table to be refreshed to view the copied rows.
  • 23. A data engineer needs to create a table in Databricks using data from their organization’s existing SQLite database. They run the following command: Which of the following lines of code fills in the above blank to successfully complete the task? •A. org.apache.spark.sql.jdbc •B. autoloader •C. DELTA •D. sqlite •E. org.apache.spark.sql.sqlite
  • 24. A data engineering team has two tables. The first table march_transactions is a collection of all retail transactions in the month of March. The second table april_transactions is a collection of all retail transactions in the month of April. There are no duplicate records between the tables. Which of the following commands should be run to create a new table all_transactions that contains all records from march_transactions and april_transactions without duplicate records? •A. CREATE TABLE all_transactions AS SELECT * FROM march_transactions INNER JOIN SELECT * FROM april_transactions; •B. CREATE TABLE all_transactions AS SELECT * FROM march_transactions UNION SELECT * FROM april_transactions; •C. CREATE TABLE all_transactions AS SELECT * FROM march_transactions OUTER JOIN SELECT * FROM april_transactions; •D. CREATE TABLE all_transactions AS SELECT * FROM march_transactions INTERSECT SELECT * from april_transactions; •E. CREATE TABLE all_transactions AS SELECT * FROM march_transactions MERGE SELECT * FROM april_transactions;
  • 25. A data engineer only wants to execute the final block of a Python program if the Python variable day_of_week is equal to 1 and the Python variable review_period is True. Which of the following control flow statements should the data engineer use to begin this conditionally executed code block? •A. if day_of_week = 1 and review_period: •B. if day_of_week = 1 and review_period = "True": •C. if day_of_week == 1 and review_period == "True": •D. if day_of_week == 1 and review_period: •E. if day_of_week = 1 & review_period: = "True":
  • 26. A data engineer is attempting to drop a Spark SQL table my_table. The data engineer wants to delete all table metadata and data. They run the following command: DROP TABLE IF EXISTS my_table - While the object no longer appears when they run SHOW TABLES, the data files still exist. Which of the following describes why the data files still exist and the metadata files were deleted? •A. The table’s data was larger than 10 GB •B. The table’s data was smaller than 10 GB •C. The table was external •D. The table did not have a location •E. The table was managed
  • 27. A data engineer wants to create a data entity from a couple of tables. The data entity must be used by other data engineers in other sessions. It also must be saved to a physical location. Which of the following data entities should the data engineer create? •A. Database •B. Function •C. View •D. Temporary view •E. Table
  • 28. A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices that the source data is starting to have a lower level of quality. The data engineer would like to automate the process of monitoring the quality level. Which of the following tools can the data engineer use to solve this problem? •A. Unity Catalog •B. Data Explorer •C. Delta Lake •D. Delta Live Tables •E. Auto Loader
  • 29. A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE. The table is configured to run in Production mode using the Continuous Pipeline Mode. Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline? •A. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing. •B. All datasets will be updated once and the pipeline will persist without any processing. The compute resources will persist but go unused. •C. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped. •D. All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated. •E. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.
  • 30. In order for Structured Streaming to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing, which of the following two approaches is used by Spark to record the offset range of the data being processed in each trigger? •A. Checkpointing and Write-ahead Logs •B. Structured Streaming cannot record the offset range of the data being processed in each trigger. •C. Replayable Sources and Idempotent Sinks •D. Write-ahead Logs and Idempotent Sinks •E. Checkpointing and Idempotent Sinks
  • 31. Which of the following describes the relationship between Gold tables and Silver tables? •A. Gold tables are more likely to contain aggregations than Silver tables. •B. Gold tables are more likely to contain valuable data than Silver tables. •C. Gold tables are more likely to contain a less refined view of data than Silver tables. •D. Gold tables are more likely to contain more data than Silver tables. •E. Gold tables are more likely to contain truthful data than Silver tables.
  • 32. Which of the following describes the relationship between Bronze tables and raw data? •A. Bronze tables contain less data than raw data files. •B. Bronze tables contain more truthful data than raw data. •C. Bronze tables contain aggregates while raw data is unaggregated. •D. Bronze tables contain a less refined view of data than raw data. •E. Bronze tables contain raw data with a schema applied.
  • 33. Which of the following tools is used by Auto Loader process data incrementally? •A. Checkpointing •B. Spark Structured Streaming •C. Data Explorer •D. Unity Catalog •E. Databricks SQL
  • 34. A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table. The cade block used by the data engineer is below: If the data engineer only wants the query to execute a micro-batch to process data every 5 seconds, which of the following lines of code should the data engineer use to fill in the blank? •A. trigger("5 seconds") •B. trigger() •C. trigger(once="5 seconds") •D. trigger(processingTime="5 seconds") •E. trigger(continuous="5 seconds")
  • 35. A dataset has been defined using Delta Live Tables and includes an expectations clause: CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION DROP ROW What is the expected behavior when a batch of data containing data that violates these constraints is processed? •A. Records that violate the expectation are dropped from the target dataset and loaded into a quarantine table. •B. Records that violate the expectation are added to the target dataset and flagged as invalid in a field added to the target dataset. •C. Records that violate the expectation are dropped from the target dataset and recorded as invalid in the event log. •D. Records that violate the expectation are added to the target dataset and recorded as invalid in the event log. •E. Records that violate the expectation cause the job to fail.
  • 36. A data engineer is working with two tables. Each of these tables is displayed below in its entirety. The data engineer runs the following query to join these tables together: Which of the following will be returned by the above query? A B C D E
  • 37. A data engineer and data analyst are working together on a data pipeline. The data engineer is working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming input. They now want to migrate their pipeline to use Delta Live Tables. Which of the following changes will need to be made to the pipeline when migrating to Delta Live Tables? •A. None of these changes will need to be made •B. The pipeline will need to stop using the medallion-based multi-hop architecture •C. The pipeline will need to be written entirely in SQL •D. The pipeline will need to use a batch source in place of a streaming source •E. The pipeline will need to be written entirely in Python
  • 38. Which of the following must be specified when creating a new Delta Live Tables pipeline? •A. A key-value pair configuration •B. The preferred DBU/hour cost •C. A path to cloud storage location for the written data •D. A location of a target database for the written data •E. At least one notebook library to be executed
  • 39. Which of the following code blocks will remove the rows where the value in column age is greater than 25 from the existing Delta table my_table and save the updated table? •A. SELECT * FROM my_table WHERE age > 25; •B. UPDATE my_table WHERE age > 25; •C. DELETE FROM my_table WHERE age > 25; •D. UPDATE my_table WHERE age <= 25; •E. DELETE FROM my_table WHERE age <= 25;
  • 40. Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta Lake? •A. The ability to manipulate the same data using a variety of languages •B. The ability to collaborate in real time on a single notebook •C. The ability to set up alerts for query failures •D. The ability to support batch and streaming workloads •E. The ability to distribute complex data operations
  • 41. A data engineer has a single-task Job that runs each morning before they begin working. After identifying an upstream data issue, they need to set up another task to run a new notebook prior to the original task. Which approach can the data engineer use to set up the new task? •A. They can clone the existing task in the existing Job and update it to run the new notebook. •B. They can create a new task in the existing Job and then add it as a dependency of the original task. •C. They can create a new task in the existing Job and then add the original task as a dependency of the new task. •D. They can create a new job from scratch and add both tasks to run concurrently.
  • 42. A data engineer that is new to using Python needs to create a Python function to add two integers together and return the sum? Which code block can the data engineer use to complete this task? • A. • B. • C. • D.
  • 43. A new data engineering team team has been assigned to an ELT project. The new data engineering team will need full privileges on the table sales to fully manage the project. Which of the following commands can be used to grant full permissions on the database to the new data engineering team? •A. GRANT ALL PRIVILEGES ON TABLE sales TO team; •B. GRANT SELECT CREATE MODIFY ON TABLE sales TO team; •C. GRANT SELECT ON TABLE sales TO team; •D. GRANT USAGE ON TABLE sales TO team; •E. GRANT ALL PRIVILEGES ON TABLE team TO sales;
  • 44. A data engineer wants to schedule their Databricks SQL dashboard to refresh every hour, but they only want the associated SQL endpoint to be running when it is necessary. The dashboard has multiple queries on multiple datasets associated with it. The data that feeds the dashboard is automatically processed using a Databricks Job. Which of the following approaches can the data engineer use to minimize the total running time of the SQL endpoint used in the refresh schedule of their dashboard? •A. They can turn on the Auto Stop feature for the SQL endpoint. •B. They can ensure the dashboard's SQL endpoint is not one of the included query's SQL endpoint. •C. They can reduce the cluster size of the SQL endpoint. •D. They can ensure the dashboard's SQL endpoint matches each of the queries' SQL endpoints. •E. They can set up the dashboard's SQL endpoint to be serverless.
  • 45. A data analyst has a series of queries in a SQL program. The data analyst wants this program to run every day. They only want the final query in the program to run on Sundays. They ask for help from the data engineering team to complete this task. Which of the following approaches could be used by the data engineering team to complete this task? •A. They could submit a feature request with Databricks to add this functionality. •B. They could wrap the queries using PySpark and use Python’s control flow system to determine when to run the final query. •C. They could only run the entire program on Sundays. •D. They could automatically restrict access to the source table in the final query so that it is only accessible on Sundays. •E. They could redesign the data model to separate the data used in the final query into a new table.
  • 46. Which of the following describes a benefit of creating an external table from Parquet rather than CSV when using a CREATE TABLE AS SELECT statement? •A. Parquet files can be partitioned •B. CREATE TABLE AS SELECT statements cannot be used on files •C. Parquet files have a well-defined schema •D. Parquet files have the ability to be optimized •E. Parquet files will become Delta tables
  • 47. What is a benefit of creating an external table from Parquet rather than CSV when using a CREATE TABLE AS SELECT statement? •A. Parquet files can be partitioned •B. Parquet files will become Delta tables •C. Parquet files have a well-defined schema •D. Parquet files have the ability to be optimized
  • 48. A data engineer has created a new database using the following command: CREATE DATABASE IF NOT EXISTS customer360; In which location will the customer360 database be located? •A. dbfs:/user/hive/database/customer360 •B. dbfs:/user/hive/warehouse •C. dbfs:/user/hive/customer360 •D. dbfs:/user/hive/database
  • 49. A data analyst has created a Delta table sales that is used by the entire data analysis team. They want help from the data engineering team to implement a series of tests to ensure the data is clean. However, the data engineering team uses Python for its tests rather than SQL. Which command could the data engineering team use to access sales in PySpark? •A. SELECT * FROM sales •B. spark.table("sales") •C. spark.sql("sales") •D. spark.delta.table("sales")
  • 50. A data engineer has been given a new record of data: id STRING = 'a1' rank INTEGER = 6 rating FLOAT = 9.4 Which SQL commands can be used to append the new record to an existing Delta table my_table? •A. INSERT INTO my_table VALUES ('a1', 6, 9.4) •B. INSERT VALUES ('a1', 6, 9.4) INTO my_table •C. UPDATE my_table VALUES ('a1', 6, 9.4) •D. UPDATE VALUES ('a1', 6, 9.4) my_table
  • 51. A data architect has determined that a table of the following format is necessary: Which code block is used by SQL DDL command to create an empty Delta table in the above format regardless of whether a table already exists with this name? •A. CREATE OR REPLACE TABLE table_name ( employeeId STRING, startDate DATE, avgRating FLOAT ) •B. CREATE OR REPLACE TABLE table_name WITH COLUMNS ( employeeId STRING, startDate DATE, avgRating FLOAT ) USING DELTA •C. CREATE TABLE IF NOT EXISTS table_name ( employeeId STRING, startDate DATE, avgRating FLOAT ) •D. CREATE TABLE table_name AS SELECT employeeId STRING, startDate DATE, avgRating FLOAT
  • 52. A data engineer is running code in a Databricks Repo that is cloned from a central Git repository. A colleague of the data engineer informs them that changes have been made and synced to the central Git repository. The data engineer now needs to sync their Databricks Repo to get the changes from the central Git repository. Which Git operation does the data engineer need to run to accomplish this task? •A. Clone •B. Pull •C. Merge •D. Push
  • 53. Which file format is used for storing Delta Lake Table? •A. CSV •B. Parquet •C. JSON •D. Delta
  • 54. A data engineer has joined an existing project and they see the following query in the project repository: CREATE STREAMING LIVE TABLE loyal_customers AS SELECT customer_id - FROM STREAM(LIVE.customers) WHERE loyalty_level = 'high'; Which of the following describes why the STREAM function is included in the query? •A. The STREAM function is not needed and will cause an error. •B. The table being created is a live table. •C. The customers table is a streaming live table. •D. The customers table is a reference to a Structured Streaming query on a PySpark DataFrame. •E. The data in the customers table has been updated since its last run.
  • 55. A data engineer has a single-task Job that runs each morning before they begin working. After identifying an upstream data issue, they need to set up another task to run a new notebook prior to the original task. Which of the following approaches can the data engineer use to set up the new task? •A. They can clone the existing task in the existing Job and update it to run the new notebook. •B. They can create a new task in the existing Job and then add it as a dependency of the original task. •C. They can create a new task in the existing Job and then add the original task as a dependency of the new task. •D. They can create a new job from scratch and add both tasks to run concurrently. •E. They can clone the existing task to a new Job and then edit it to run the new notebook.
  • 56. An engineering manager wants to monitor the performance of a recent project using a Databricks SQL query. For the first week following the project’s release, the manager wants the query results to be updated every minute. However, the manager is concerned that the compute resources used for the query will be left running and cost the organization a lot of money beyond the first week of the project’s release. Which of the following approaches can the engineering team use to ensure the query does not cost the organization any money beyond the first week of the project’s release? •A. They can set a limit to the number of DBUs that are consumed by the SQL Endpoint. •B. They can set the query’s refresh schedule to end after a certain number of refreshes. •C. They cannot ensure the query does not cost the organization money beyond the first week of the project’s release. •D. They can set a limit to the number of individuals that are able to manage the query’s refresh schedule. •E. They can set the query’s refresh schedule to end on a certain date in the query scheduler.
  • 57. Which of the following benefits is provided by the array functions from Spark SQL? •A. An ability to work with data in a variety of types at once •B. An ability to work with data within certain partitions and windows •C. An ability to work with time-related data in specified intervals •D. An ability to work with complex, nested data ingested from JSON files •E. An ability to work with an array of tables for procedural automation
  • 58. In order for Structured Streaming to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing, which of the following two approaches is used by Spark to record the offset range of the data being processed in each trigger? •A. Checkpointing and Write-ahead Logs •B. Structured Streaming cannot record the offset range of the data being processed in each trigger. •C. Replayable Sources and Idempotent Sinks •D. Write-ahead Logs and Idempotent Sinks •E. Checkpointing and Idempotent Sinks
  • 59. Which of the following statements regarding the relationship between Silver tables and Bronze tables is always true? •A. Silver tables contain a less refined, less clean view of data than Bronze data. •B. Silver tables contain aggregates while Bronze data is unaggregated. •C. Silver tables contain more data than Bronze tables. •D. Silver tables contain a more refined and cleaner view of data than Bronze tables. •E. Silver tables contain less data than Bronze tables.
  • 60. A dataset has been defined using Delta Live Tables and includes an expectations clause: CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON VIOLATION FAIL UPDATE What is the expected behavior when a batch of data containing data that violates these constraints is processed? •A. Records that violate the expectation are dropped from the target dataset and recorded as invalid in the event log. •B. Records that violate the expectation cause the job to fail. •C. Records that violate the expectation are dropped from the target dataset and loaded into a quarantine table. •D. Records that violate the expectation are added to the target dataset and recorded as invalid in the event log. •E. Records that violate the expectation are added to the target dataset and flagged as invalid in a field added to the target dataset.
  • 61. Which of the following queries is performing a streaming hop from raw data to a Bronze table?
  • 62. A data engineer is using the following code block as part of a batch ingestion pipeline to read from a composable table: Which of the following changes needs to be made so this code block will work when the transactions table is a stream source? •A. Replace predict with a stream-friendly prediction function •B. Replace schema(schema) with option ("maxFilesPerTrigger", 1) •C. Replace "transactions" with the path to the location of the Delta table •D. Replace format("delta") with format("stream") •E. Replace spark.read with spark.readStream
  • 63. Which of the following describes the type of workloads that are always compatible with Auto Loader? •A. Streaming workloads •B. Machine learning workloads •C. Serverless workloads •D. Batch workloads •E. Dashboard workloads
  • 64. A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE. The table is configured to run in Development mode using the Continuous Pipeline Mode. Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline? •A. All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated. •B. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist until the pipeline is shut down. •C. All datasets will be updated once and the pipeline will persist without any processing. The compute resources will persist but go unused. •D. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing. •E. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing.
  • 65. A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table. The code block used by the data engineer is below: If the data engineer only wants the query to process all of the available data in as many batches as required, which of the following lines of code should the data engineer use to fill in the blank? •A. processingTime(1) •B. trigger(availableNow=True) •C. trigger(parallelBatch=True) •D. trigger(processingTime="once") •E. trigger(continuous="once")
  • 66. A data engineer has a Python variable table_name that they would like to use in a SQL query. They want to construct a Python code block that will run the query using table_name. They have the following incomplete code block: ____(f"SELECT customer_id, spend FROM {table_name}") Which of the following can be used to fill in the blank to successfully complete the task? •A. spark.delta.sql •B. spark.delta.table •C. spark.table •D. dbutils.sql •E. spark.sql
  • 67. A data engineer needs to apply custom logic to identify employees with more than 5 years of experience in array column employees in table stores. The custom logic should create a new column exp_employees that is an array of all of the employees with more than 5 years of experience for each row. In order to apply this custom logic at scale, the data engineer wants to use the FILTER higher-order function. Which of the following code blocks successfully completes this task?
  • 68. A data analyst has developed a query that runs against Delta table. They want help from the data engineering team to implement a series of tests to ensure the data returned by the query is clean. However, the data engineering team uses Python for its tests rather than SQL. the following operations could the data engineering team use to run the query and operate with the results in PySpark? Which of •A. SELECT * FROM sales •B. spark.delta.table •C. spark.sql •D. There is no way to share data between PySpark and SQL. •E. spark.table
  • 69. Which of the following commands will return the number of null values in the member_id column? •A. SELECT count(member_id) FROM my_table; •B. SELECT count(member_id) - count_null(member_id) FROM my_table; •C. SELECT count_if(member_id IS NULL) FROM my_table; •D. SELECT null(member_id) FROM my_table; •E. SELECT count_null(member_id) FROM my_table;
  • 70. Which of the following SQL keywords can be used to convert a table from a long format to a wide format? •A. TRANSFORM •B. PIVOT •C. SUM •D. CONVERT •E. WHERE
  • 71. A data architect has determined that a table of the following format is necessary: Which of the following code blocks uses SQL DDL commands to create an empty Delta table in the above format regardless of whether a table already exists with this name?
  • 72. Which of the following can be used to simplify and unify siloed data architectures that are specialized for specific use cases? •A. None of these •B. Data lake •C. Data warehouse •D. All of these •E. Data lakehouse
  • 73. Which of the following is stored in the Databricks customer's cloud account? •A. Databricks web application •B. Cluster management metadata •C. Repos •D. Data •E. Notebooks
  • 74. In which of the following file formats is data from Delta Lake tables primarily stored? •A. Delta •B. CSV •C. Parquet •D. JSON •E. A proprietary, optimized format specific to Databricks
  • 75. A data engineer has realized that the data files associated with a Delta table are incredibly small. They want to compact the small files to form larger files to improve performance. Which of the following keywords can be used to compact the small files? •A. REDUCE •B. OPTIMIZE •C. COMPACTION •D. REPARTITION •E. VACUUM
  • 76. Which of the following describes a scenario in which a data engineer will want to use a single- node cluster? •A. When they are working interactively with a small amount of data •B. When they are running automated reports to be refreshed as quickly as possible •C. When they are working with SQL within Databricks SQL •D. When they are concerned about the ability to automatically scale with larger data •E. When they are manually running reports with a large amount of data
  • 77. A data engineer needs to use a Delta table as part of a data pipeline, but they do not know if they have the appropriate permissions. In which of the following locations can the data engineer review their permissions on the table? •A. Databricks Filesystem •B. Jobs •C. Dashboards •D. Repos •E. Data Explorer
  • 78. Which of the following is a benefit of the Databricks Lakehouse Platform embracing open source technologies? •A. Cloud-specific integrations •B. Simplified governance •C. Ability to scale storage •D. Ability to scale workloads •E. Avoiding vendor lock-in
  • 79. A new data engineering team team. has been assigned to an ELT project. The new data engineering team will need full privileges on the database customers to fully manage the project. Which of the following commands can be used to grant full permissions on the database to the new data engineering team? •A. GRANT USAGE ON DATABASE customers TO team; •B. GRANT ALL PRIVILEGES ON DATABASE team TO customers; •C. GRANT SELECT PRIVILEGES ON DATABASE customers TO teams; •D. GRANT SELECT CREATE MODIFY USAGE PRIVILEGES ON DATABASE customers TO team; •E. GRANT ALL PRIVILEGES ON DATABASE customers TO team;
  • 80. A data engineer has a Job with multiple tasks that runs nightly. Each of the tasks runs slowly because the clusters take a long time to start. Which of the following actions can the data engineer perform to improve the start up time for the clusters used for the Job? •A. They can use endpoints available in Databricks SQL •B. They can use jobs clusters instead of all-purpose clusters •C. They can configure the clusters to be single-node •D. They can use clusters that are from a cluster pool •E. They can configure the clusters to autoscale for larger data sizes
  • 81. A single Job runs two notebooks as two separate tasks. A data engineer has noticed that one of the notebooks is running slowly in the Job’s current run. The data engineer asks a tech lead for help in identifying why this might be the case. Which of the following approaches can the tech lead use to identify why the notebook is running slowly as part of the Job? •A. They can navigate to the Runs tab in the Jobs UI to immediately review the processing notebook. •B. They can navigate to the Tasks tab in the Jobs UI and click on the active run to review the processing notebook. •C. They can navigate to the Runs tab in the Jobs UI and click on the active run to review the processing notebook. •D. There is no way to determine why a Job task is running slowly. •E. They can navigate to the Tasks tab in the Jobs UI to immediately review the processing notebook.
  • 82. A data analysis team has noticed that their Databricks SQL queries are running too slowly when connected to their always-on SQL endpoint. They claim that this issue is present when many members of the team are running small queries simultaneously. They ask the data engineering team for help. The data engineering team notices that each of the team’s queries uses the same SQL endpoint. Which of the following approaches can the data engineering team use to improve the latency of the team’s queries? •A. They can increase the cluster size of the SQL endpoint. •B. They can increase the maximum bound of the SQL endpoint’s scaling range. •C. They can turn on the Auto Stop feature for the SQL endpoint. •D. They can turn on the Serverless feature for the SQL endpoint. •E. They can turn on the Serverless feature for the SQL endpoint and change the Spot Instance Policy to “Reliability Optimized.”
  • 83. A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the pipeline to drop invalid records at each table. They notice that some data is being dropped due to quality concerns at some point in the DLT pipeline. They would like to determine at which table in their pipeline the data is being dropped. Which of the following approaches can the data engineer take to identify the table that is dropping the records? •A. They can set up separate expectations for each table when developing their DLT pipeline. •B. They cannot determine which table is dropping the records. •C. They can set up DLT to notify them via email when records are dropped. •D. They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics. •E. They can navigate to the DLT pipeline page, click on the “Error” button, and review the present errors.
  • 84. Which of the following Structured Streaming queries is performing a hop from a Silver table to a Gold table?
  • 85. A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files should be kept as is and will accumulate in the directory. The data engineer needs to identify which files are new since the previous run in the pipeline, and set up the pipeline to only ingest those new files with each run. Which of the following tools can the data engineer use to solve this problem? •A. Unity Catalog •B. Delta Lake •C. Databricks SQL •D. Data Explorer •E. Auto Loader
  • 86. A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE. The table is configured to run in Production mode using the Continuous Pipeline Mode. Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline? •A. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing. •B. All datasets will be updated once and the pipeline will persist without any processing. The compute resources will persist but go unused. •C. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped. •D. All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated. •E. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.
  • 87. A data analyst has been asked to use the below table sales_table to get the percentage rank of products within region by the sales: The result of the query should look like this:
  • 88. A data engineer needs to create a table in Databricks using data from a CSV file at location /path/to/csv. They run the following command: Which of the following lines of code fills in the above blank to successfully complete the task? •A. None of these lines of code are needed to successfully complete the task •B. USING CSV •C. FROM CSV •D. USING DELTA •E. FROM "path/to/csv"
  • 89. In which of the following scenarios should a data engineer use the MERGE INTO command instead of the INSERT INTO command? •A. When the location of the data needs to be changed •B. When the target table is an external table •C. When the source table can be deleted •D. When the target table cannot contain duplicate records •E. When the source is not a Delta table
  • 90. A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the input data to a data analytics dashboard for a retail use case. The job has a Databricks SQL query that returns the number of store-level records where sales is equal to zero. The data engineer wants their entire team to be notified via a messaging webhook whenever this value is greater than 0. Which of the following approaches can the data engineer use to notify their entire team via a messaging webhook whenever the number of stores with $0 in sales is greater than zero? •A. They can set up an Alert with a custom template. •B. They can set up an Alert with a new email alert destination. •C. They can set up an Alert with one-time notifications. •D. They can set up an Alert with a new webhook alert destination. •E. They can set up an Alert without notifications.
  • 91. A data engineer has a Python notebook in Databricks, but they need to use SQL to accomplish a specific task within a cell. They still want all of the other cells to use Python without making any changes to those cells. Which of the following describes how the data engineer can use SQL within a cell of their Python notebook? •A. It is not possible to use SQL in a Python notebook •B. They can attach the cell to a SQL endpoint rather than a Databricks cluster •C. They can simply write SQL syntax in the cell •D. They can add %sql to the first line of the cell •E. They can change the default language of the notebook to SQL
  • 92. An engineering manager uses a Databricks SQL query to monitor ingestion latency for each data source. The manager checks the results of the query every day, but they are manually rerunning the query each day and waiting for the results. Which of the following approaches can the manager use to ensure the results of the query are updated each day? •A. They can schedule the query to refresh every 1 day from the SQL endpoint's page in Databricks SQL. •B. They can schedule the query to refresh every 12 hours from the SQL endpoint's page in Databricks SQL. •C. They can schedule the query to refresh every 1 day from the query's page in Databricks SQL. •D. They can schedule the query to run every 1 day from the Jobs UI. •E. They can schedule the query to run every 12 hours from the Jobs UI.
  • 93. Which of the following describes when to use the CREATE STREAMING LIVE TABLE (formerly CREATE INCREMENTAL LIVE TABLE) syntax over the CREATE LIVE TABLE syntax when creating Delta Live Tables (DLT) tables using SQL? •A. CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT pipeline is static. •B. CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally. •C. CREATE STREAMING LIVE TABLE is redundant for DLT and it does not need to be used. •D. CREATE STREAMING LIVE TABLE should be used when data needs to be processed through complicated aggregations. •E. CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline is static.
  • 94. A data engineer wants to schedule their Databricks SQL dashboard to refresh once per day, but they only want the associated SQL endpoint to be running when it is necessary. Which of the following approaches can the data engineer use to minimize the total running time of the SQL endpoint used in the refresh schedule of their dashboard? •A. They can ensure the dashboard’s SQL endpoint matches each of the queries’ SQL endpoints. •B. They can set up the dashboard’s SQL endpoint to be serverless. •C. They can turn on the Auto Stop feature for the SQL endpoint. •D. They can reduce the cluster size of the SQL endpoint. •E. They can ensure the dashboard’s SQL endpoint is not one of the included query’s SQL endpoint.
  • 95. A data engineer wants to create a relational object by pulling data from two tables. The relational object does not need to be used by other data engineers in other sessions. In order to save on storage costs, the data engineer wants to avoid copying and storing physical data. Which of the following relational objects should the data engineer create? •A. Spark SQL Table •B. View •C. Database •D. Temporary view •E. Delta Table
  • 96. A data engineer has developed a data pipeline to ingest data from a JSON source using Auto Loader, but the engineer has not provided any type inference or schema hints in their pipeline. Upon reviewing the data, the data engineer has noticed that all of the columns in the target table are of the string type despite some of the fields only including float or boolean values. Which of the following describes why Auto Loader inferred all of the columns to be of the string type? •A. There was a type mismatch between the specific schema and the inferred schema •B. JSON data is a text-based format •C. Auto Loader only works with string data •D. All of the fields had at least one null value •E. Auto Loader cannot infer the schema of ingested data
  • 97. A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the input data to an ELT job. The ELT job has its Databricks SQL query that returns the number of input records containing unexpected NULL values. The data engineer wants their entire team to be notified via a messaging webhook whenever this value reaches 100. Which of the following approaches can the data engineer use to notify their entire team via a messaging webhook whenever the number of NULL values reaches 100? •A. They can set up an Alert with a custom template. •B. They can set up an Alert with a new email alert destination. •C. They can set up an Alert with a new webhook alert destination. •D. They can set up an Alert with one-time notifications. •E. They can set up an Alert without notifications.
  • 98. Which of the following approaches should be used to send the Databricks Job owner an email in the case that the Job fails? •A. Manually programming in an alert system in each cell of the Notebook •B. Setting up an Alert in the Job page •C. Setting up an Alert in the Notebook •D. There is no way to notify the Job owner in the case of Job failure •E. MLflow Model Registry Webhooks
  • 99. In which of the following scenarios should a data engineer select a Task in the Depends On field of a new Databricks Job Task? •A. When another task needs to be replaced by the new task •B. When another task needs to fail before the new task begins •C. When another task has the same dependency libraries as the new task •D. When another task needs to use as little compute resources as possible •E. When another task needs to successfully complete before the new task begins
  • 100. A data engineer needs access to a table new_table, but they do not have the correct permissions. They can ask the table owner for permission, but they do not know who the table owner is. Which of the following approaches can be used to identify the owner of new_table? •A. Review the Permissions tab in the table's page in Data Explorer •B. All of these options can be used to identify the owner of the table •C. Review the Owner field in the table's page in Data Explorer •D. Review the Owner field in the table's page in the cloud storage solution •E. There is no way to identify the owner of the table
  • 101. A data engineer has a Job that has a complex run schedule, and they want to transfer that schedule to other Jobs. Rather than manually selecting each value in the scheduling form in Databricks, which of the following tools can the data engineer use to represent and submit the schedule programmatically? •A. pyspark.sql.types.DateType •B. datetime •C. pyspark.sql.types.TimestampType •D. Cron syntax •E. There is no way to represent and submit this information programmatically
  • 102. Which of the following data workloads will utilize a Gold table as its source? •A. A job that enriches data by parsing its timestamps into a human-readable format •B. A job that aggregates uncleaned data to create standard summary statistics •C. A job that cleans data by removing malformatted records •D. A job that queries aggregated data designed to feed into a dashboard •E. A job that ingests raw data from a streaming source into the Lakehouse

Editor's Notes

  • #2: Suggested Answer: A 
  • #3: Suggested Answer: C 🗳️ Community vote distributionC (75%) A (25%)
  • #4: Suggested Answer: B
  • #5: Suggested Answer: A [most voted]
  • #6: Suggested Answer: C [most voted]
  • #7: Suggested Answer: D
  • #8: Suggested Answer: C
  • #9: Suggested Answer: C 
  • #10: Suggested Answer: A [most voted]
  • #11: Suggested Answer: E [most voted]
  • #12: Suggested Answer: B [most voted]
  • #13: Suggested Answer: B
  • #14: Suggested Answer: C [most voted]
  • #15: Suggested Answer: E [most voted]
  • #16: Suggested Answer: C
  • #17: Suggested Answer: D [most voted]
  • #18: Suggested Answer: D [most voted]
  • #19: Suggested Answer: C
  • #20: Suggested Answer: A [most voted]
  • #21: Suggested Answer: B
  • #22: Suggested Answer: C [ most voted]
  • #23: Suggested Answer: A [most voted]
  • #24: Suggested Answer: B
  • #25: Suggested Answer: D [most voted]
  • #26: Suggested Answer: C
  • #27: Suggested Answer: E [ most voted]
  • #28: Suggested Answer: D [most voted]
  • #29: Suggested Answer: C [ most voted]
  • #30: Suggested Answer: A [most voted]
  • #31: Suggested Answer: A [most voted]
  • #32: Suggested Answer: E [most voted] 
  • #33: Suggested Answer: B
  • #34: Suggested Answer: D
  • #35: Suggested Answer: C [most voted] 
  • #36: Suggested Answer: C [most voted]
  • #37: Suggested Answer: A [most voted]
  • #38: Suggested Answer: E [most voted]
  • #39: Suggested Answer: C [most voted] 
  • #40: Suggested Answer: D 
  • #41: Suggested Answer: B [most voted]
  • #42: Suggested Answer: D 
  • #43: Suggested Answer: A 
  • #44: Suggested Answer: A [most voted]
  • #45: Suggested Answer: B
  • #46: Suggested Answer: C [most voted] 
  • #47: Suggested Answer: C
  • #48: Suggested Answer: B 
  • #49: Suggested Answer: B 
  • #50: Suggested Answer: A 
  • #51: Suggested Answer: A 
  • #52: Suggested Answer: B
  • #53: Suggested Answer: B
  • #54: Suggested Answer: C
  • #55: Suggested Answer: B [most voted] 
  • #56: Suggested Answer: E [most voted]
  • #57: Suggested Answer: D [most voted] 
  • #58: Suggested Answer: A [most voted]
  • #59: Suggested Answer: D
  • #60: Suggested Answer: B 
  • #61: Suggested Answer: E 
  • #62: Suggested Answer: E
  • #63: Suggested Answer: A 
  • #64: Suggested Answer: E [most voted] 
  • #65: Suggested Answer: B
  • #66: Suggested Answer: E 
  • #67: Suggested Answer: A 
  • #68: Suggested Answer: C [most voted]
  • #69: Suggested Answer: C
  • #70: Suggested Answer: B 
  • #71: Suggested Answer: E
  • #72: Suggested Answer: E 
  • #73: Suggested Answer: D
  • #74: Suggested Answer: C [most voted] 
  • #75: Suggested Answer: B 
  • #76: Suggested Answer: A 
  • #77: Suggested Answer: E
  • #78: Suggested Answer: E
  • #79: Suggested Answer: E [most voted] 
  • #80: Suggested Answer: D [most voted]
  • #81: Suggested Answer: C
  • #82: Suggested Answer: B [most voted] 
  • #83: Suggested Answer: D [most voted] 
  • #84: Suggested Answer: E 
  • #85: Suggested Answer: E 
  • #86: Suggested Answer: C [most voted] 
  • #87: Suggested Answer: B
  • #88: Suggested Answer: B
  • #89: Suggested Answer: D 
  • #90: Suggested Answer: D 
  • #91: Suggested Answer: D
  • #92: Suggested Answer: C
  • #93: Suggested Answer: B
  • #94: Suggested Answer: C
  • #95: Suggested Answer: D 
  • #96: Suggested Answer: B 
  • #97: Suggested Answer: C 
  • #98: Suggested Answer: B 
  • #99: Suggested Answer: E
  • #100: Suggested Answer: C 
  • #101: Suggested Answer: D 
  • #102: Suggested Answer: D