E - Snowflake-Snowpro-Core-1
Q1
Is re-clustering in Snowflake only triggered if the table would benefit from the
operation?
A) True.
B) False.
Solution:
A) True.
Explanation:
DML operations (INSERT, UPDATE, DELETE, MERGE, COPY) can make the data in the table
become less clustered. To solve that, Snowflake provides periodic & automatic re-clustering
to maintain optimal clustering. It only reclusters a clustered table if it benefits from the
operation.
Q2
Solution:
Explanation:
A source system (like a web application) loads the data into a Snowflake Stage (for example,
an external stage like Amazon S3). Then we can copy the data into a Snowflake table. You
can see this behavior in the following image:
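A minimal SQL sketch of that flow (the stage, table, and file-format details here are placeholders, not from the question):
COPY INTO my_table
  FROM @my_s3_stage/data/
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);   -- load the staged files into the table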
Q3
Solution:
Explanation:
Semi-structured data is saved as the VARIANT type in Snowflake tables, with a maximum size limit of 16MB, and it can be queried using JSON notation. You can store arrays, objects, etc.
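As a rough illustration (hypothetical table and column names), this is what querying a VARIANT column with JSON notation looks like:
SELECT v:customer.name::string  AS customer_name,   -- traverse the JSON and cast to STRING
       v:items[0].price::number AS first_item_price -- array elements are accessed by index
FROM raw_json_table;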
Q4
A) True.
B) False.
Solution:
A) True.
Explanation:
Fail-safe ensures historical data is protected in the event of a system failure or other
catastrophic event, providing a (NON-CONFIGURABLE) 7-day period during which
Snowflake support may recover historical data. It requires additional storage (as do other functionalities like Time Travel), which is why it incurs additional storage costs. You can see
an example of how Fail-Safe works in the following image:
Q5
Solution:
Explanation:
Snowpipe is a serverless service that enables loading data when the files are available in
any (internal/external) stage. You use it when you have a small volume of frequent data,
and you load it continuously (micro-batches).
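A sketch of a pipe definition, assuming a hypothetical external stage with event notifications already configured:
CREATE PIPE my_pipe
  AUTO_INGEST = TRUE   -- load automatically when new files land in the stage
  AS
  COPY INTO my_table
  FROM @my_external_stage
  FILE_FORMAT = (TYPE = 'JSON');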
Q6
A) Pruning.
B) Clustering.
C) Indexing.
D) Computing.
Solution:
A) Pruning.
Explanation:
Q7
Which command will we use to download the files from the stage/location
loaded through the COPY INTO <LOCATION> command?
A) GET.
B) PUT.
C) UNLOAD.
D) INSERT INTO.
Solution:
A) GET.
Explanation:
We will use the GET command to DOWNLOAD files from a Snowflake internal stage (named
internal stage, user stage, or table stage) into a directory/folder on a client machine. You
need to use SnowSQL to use this command.
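For instance, from SnowSQL (the table stage and local folder below are placeholders):
GET @%my_table file://C:\data\downloads\;   -- download the staged files to a local directory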
Q8
A) True.
B) False.
Solution:
A) True.
Explanation:
A warehouse can be resized up or down (through the web interface or using SQL) at any
time, including while it is running and processing statements. In the following image, you
can see how to resize them using the web interface:
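The SQL form is a one-liner; the warehouse name and size here are just examples:
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';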
Q9
For which activities does Snowflake have administration settings to help with
resource consumption?
Solution:
Explanation:
Snowflake provides resource monitors to help control costs and avoid unexpected credit
usage caused by running warehouses. You can impose limits on the number of credits that
warehouses consume.
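A sketch of a resource monitor, with illustrative names and thresholds:
CREATE RESOURCE MONITOR monthly_limit
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 90 PERCENT DO NOTIFY      -- alert as the quota is approached
           ON 100 PERCENT DO SUSPEND;   -- stop assigned warehouses at the quota
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_limit;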
Q10
A) True.
B) False.
Solution:
B) False.
Explanation:
If you want to download data from external stages, you need to use the cloud provider; for
example, download them directly from AWS S3.
Q11
A query executed a couple of hours ago, which spent more than 5 minutes to
run, is executed again, and it returned the results in less than a second. What
might have happened?
A) Snowflake used the persisted query results from the metadata cache.
B) Snowflake used the persisted query results from the query result cache.
C) Snowflake used the persisted query results from the warehouse cache.
D) A new Snowflake version has been released in the last two hours, improving the speed of
the service.
Solution:
B) Snowflake used the persisted query results from the query result cache.
Explanation:
The query result cache stores the results of our queries for 24 hours, so as long as we repeat the same query and the underlying data hasn't changed, Snowflake returns the same result without using the warehouse and without consuming credits.
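Result reuse is controlled by the USE_CACHED_RESULT session parameter, which you can disable if you want the warehouse to recompute (for example, when benchmarking):
ALTER SESSION SET USE_CACHED_RESULT = FALSE;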
Q12
A) FLATTEN.
B) CHECK_JSON.
C) PARSE_JSON.
Solution:
A) FLATTEN.
Explanation:
Q13
A) User.
B) Table.
C) Named internal.
D) Named external.
E) Account.
Solution:
A) User.
B) Table.
C) Named internal.
D) Named external.
Explanation:
Snowflake stages are a big topic in this exam. External stages reference data files stored outside Snowflake, whereas internal ones store data files within Snowflake. The user stage is the personal stage every user has by default, whereas the table stage is the one that each table has by default. You can see the different stages in the following image:
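You can reference each kind of stage with its own prefix; for example (the table and stage names are placeholders):
LIST @~;               -- user stage
LIST @%my_table;       -- table stage
LIST @my_named_stage;  -- named internal or external stage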
Q14
Solution:
Explanation:
The cost does not depend on how many queries you run in the warehouse. It depends on the warehouse size and how long the warehouse runs.
Q15
Can you resize the warehouse once you have selected the size?
A) True.
B) False.
Solution:
A) True.
Explanation:
You can always change the warehouse size depending on your needs, even when it's
running.
Q16
A) True.
B) False.
Solution:
A) True.
Explanation:
User-defined functions (UDFs) let you extend the system to perform operations that are not available through the built-in, system-defined functions. They support SQL, JavaScript, Java, and Python (these last two are new features).
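A minimal SQL UDF sketch (illustrative name and formula):
CREATE OR REPLACE FUNCTION area_of_circle(radius FLOAT)
  RETURNS FLOAT
  AS 'pi() * radius * radius';

SELECT area_of_circle(2.0);   -- call it like any built-in function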
Q17
You have two virtual warehouses in your Snowflake account. If one of them
updates the data in the storage layer, when will the other one see it?
A) Immediately.
Solution:
A) Immediately.
Explanation:
All the warehouses of your account share the storage layer, so if the data is updated, all the
warehouses will be able to see it. You can see this behavior in the following image:
Q18
A) Query optimization.
B) Query planning.
C) Query processing.
Solution:
C) Query processing.
Explanation:
You can find the name of the layers in different ways, like Query Processing for the Compute
Layer.
Q19
What property from the Resource Monitors lets you specify whether you want
to control the credit usage of your entire account or a specific set of
warehouses?
A) Credit Quota.
B) Monitor Level.
C) Schedule.
D) Notification.
Solution:
B) Monitor Level.
Explanation:
The monitor level is a property that specifies whether the resource monitor is used to
monitor the credit usage for your entire account or individual warehouses.
Q20
You have two types of named stages, one is an external stage, and the other
one is an internal stage. Will external stages always require a cloud storage
provider?
A) True.
B) False.
Solution:
A) True.
Explanation:
External stages reference data files stored in a location outside of Snowflake. Amazon S3
buckets, Google Cloud Storage buckets, and Microsoft Azure containers are the currently
supported cloud storage services. You can see an example of it in the following diagram:
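As a sketch (bucket and integration names are hypothetical), an external stage always points at a cloud storage location:
CREATE STAGE my_s3_stage
  URL = 's3://my-bucket/raw/'
  STORAGE_INTEGRATION = my_s3_integration;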
Q21
A) True.
B) False.
Solution:
A) True.
Explanation:
Clustering is a feature that allows you to organize data in a table based on the values of one
or more columns. This can improve query performance by minimizing the amount of data
that needs to be scanned when querying the table. The more frequently a table changes, the
more expensive it will be to keep it clustered.
Q22
A) AWS.
B) Azure.
C) IBM.
Solution:
C) IBM.
Explanation:
A Snowflake account can only be hosted on Amazon Web Services, Google Cloud Platform, and Microsoft Azure for now.
Q23
A) True.
B) False.
Solution:
A) True.
Explanation:
New objects added to a share become immediately available to all consumers, providing real-time access to shared data, which is always up-to-date. Consumers don't pay for the shared data storage, as the producer account already pays for it.
Q24
Which function returns the name of the warehouse of the current session?
A) ACTIVE_WAREHOUSE()
B) RUNNING_WAREHOUSE()
C) CURRENT_WAREHOUSE()
D) WAREHOUSE()
Solution:
C) CURRENT_WAREHOUSE()
Explanation:
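CURRENT_WAREHOUSE() is one of Snowflake's context functions, so you can call it without a FROM clause:
SELECT CURRENT_WAREHOUSE(), CURRENT_DATABASE(), CURRENT_ROLE();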
Q25
C) Virtual Warehouses.
Solution:
C) Virtual Warehouses.
Explanation:
Q26
Can two different virtual warehouses from the same account access the same
data simultaneously without any contention issue?
A) True.
B) False.
Solution:
A) True.
Explanation:
All the warehouses of your account share the storage layer, so they can access the same data
simultaneously.
Q27
Solution:
Explanation:
Q28
A) Metadata cache.
B) Results cache.
C) Warehouse cache.
Solution:
B) Results cache.
Explanation:
Query Result cache is also known as Results Cache, which holds the results of every query
executed in the past 24 hours. You can read about all the different types of caches at the
following link.
Q29
Which Snowflake edition (and above) allows up to 90 days of Time Travel?
A) Standard.
B) Enterprise.
C) Business Critical.
Solution:
B) Enterprise.
Explanation:
By default, Time travel is enabled with a 1-day retention period. However, we can increase
it to 90 days if we have (at least) the Snowflake Enterprise Edition. It requires additional
storage, which will be reflected in your monthly storage charges.
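For example, on a permanent table in an Enterprise (or higher) account, with a placeholder table name:
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 90;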
Q30
A) SnowCLI.
B) SnowSQL.
C) SnowTerminal.
D) SnowCMD.
Solution:
B) SnowSQL.
Explanation:
SnowSQL is the command line client for connecting to Snowflake to execute SQL queries
and perform all DDL and DML operations, including loading data into and unloading data
out of database tables.
Q31
cost?
A) True.
B) False.
Solution:
B) False.
Explanation:
With temporary tables, you can optimize storage costs, as when the Snowflake session ends,
data stored in the table is entirely purged from the system. But they also require storage
costs while the session is active. A temporary table is purged once the session ends, so its retention period is 24 hours or the remainder of the session, whichever is shorter.
Q32
A) Time-Travel.
B) Fail-Safe.
C) Zero-Copy Cloning.
Solution:
A) Time-Travel.
Explanation:
Time-Travel enables accessing historical data (i.e., data that has been changed or deleted) at
any point within a defined period. If we drop a table, we can restore it with time travel. You
can use it with Databases, Schemas & Tables. The following diagram explains how Time-
Travel works:
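Two common Time-Travel operations, sketched with a hypothetical table name:
SELECT * FROM my_table AT (OFFSET => -60*5);   -- the table as it was 5 minutes ago
UNDROP TABLE my_table;                         -- restore the table after a DROP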
Q33
A) True.
B) False.
Solution:
B) False.
Explanation:
A transaction is a sequence of SQL statements that are committed or rolled back as a unit. All statements in the transaction are either applied (i.e., committed) or undone (i.e., rolled back) together.
Q34
A) True.
B) False.
Solution:
B) False.
Explanation:
Q35
A) insertFiles.
B) insertReport.
C) insertHistoryScan.
D) loadFiles.
E) loadHistoryScan.
Solution:
A) insertFiles.
B) insertReport.
E) loadHistoryScan.
Explanation:
You can make calls to REST endpoints to get information. For example, by calling the
following insertReport endpoint, you can get a report of files submitted via insertFiles:
GET https://<account_id>.snowflakecomputing.com/v1/data/pipes/<pipeName>/insertReport
Q36
A) True.
B) False.
Solution:
B) False.
Explanation:
Q37
What are the two types of data consumer accounts available in Snowflake?
A) Shared Account.
B) Reader Account.
C) Public Account.
D) Full Account.
Solution:
B) Reader Account.
D) Full Account.
Explanation:
There are two types of data consumers. The first one is the Full Accounts, the consumers
with existing Snowflake accounts. In this case, the consumer account pays for the queries
they make. We also have the Reader Accounts, the consumers without Snowflake accounts.
In this last case, the producer account pays all the compute credits that their warehouses
use. You can see this behavior in the following diagram:
Q38
A) True.
B) False.
Solution:
A) True.
Explanation:
Q39
A) Regular.
B) Secure View.
C) Table View.
D) Materialized View.
E) External View.
Solution:
A) Regular.
B) Secure View.
D) Materialized View.
Explanation:
You can see the differences between them in the following image:
Q40
A) True.
B) False.
Solution:
A) True.
Explanation:
If you have already-compressed files, Snowflake can automatically detect any of these
compression methods (gzip, bzip2, deflate, and raw_deflate) or you can explicitly specify the
method that was used to compress the files.
Q41
Solution:
Q42
Is Snowpipe Serverless?
A) True.
B) False.
Solution:
A) True.
Explanation:
Snowpipe enables loading data when the files are available in any (internal/external) stage. You use it when you have a small volume of frequent data, and you load it continuously (micro-batches). It is serverless, so it doesn't need Virtual Warehouses. You can see how Snowpipe works in the following diagram:
Q43
A) True.
B) False.
Solution:
A) True.
Explanation:
If the data in the Storage Layer changes, the caches are automatically invalidated.
Q44
A) ACCOUNTADMIN.
B) SECURITYADMIN.
C) VIEWER.
D) USERADMIN.
E) SYSADMIN.
Solution:
A) ACCOUNTADMIN.
B) SECURITYADMIN.
D) USERADMIN.
E) SYSADMIN.
Explanation:
The PUBLIC role is also a System-Defined role. You can see the differences between them in
the following table:
Q45
A) Metadata cache.
C) Index cache.
D) Table cache.
E) Warehouse cache.
Solution:
A) Metadata cache.
E) Warehouse cache.
Explanation:
The Metadata cache is maintained in the Global Services Layer and contains object information & statistics. For example, when you execute COUNT(*) on a table, the result comes from this cache; that's why it returns the information quickly.
The Query Result cache holds the results of every query executed in the past 24 hours. If
you repeat a statement and the underlying data hasn't changed, it will use this cache.
The last one is the warehouse cache, attached to the SSD of each warehouse. In this case, the
information is lost when the warehouse is suspended. You can read more information about
the different caching mechanisms at the following link.
Q46
A) Storage Schema.
B) Storage Integration.
C) User Stage.
Solution:
B) Storage Integration.
Explanation:
A storage integration is a Snowflake object that stores a generated identity and access
management (IAM) entity for your external cloud storage. This option will enable users to
avoid supplying credentials when creating stages or when loading or unloading data.
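A rough sketch for AWS (the role ARN and bucket are placeholders):
CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/raw/');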
Q47
D) All of them.
Solution:
D) All of them.
Explanation:
Streams are Snowflake objects that record data manipulation language (DML) changes
made to tables and views, including INSERTS, UPDATES, and DELETES, as well as metadata
about each change. All of the previous options are correct about them.
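A minimal sketch (hypothetical table and stream names):
CREATE STREAM my_stream ON TABLE my_table;
SELECT * FROM my_stream;   -- source rows plus METADATA$ACTION, METADATA$ISUPDATE, METADATA$ROW_ID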
Q48
Can you load data using the PUT command through worksheets in the
Snowflake UI?
A) True.
B) False.
Solution:
B) False.
Explanation:
We can use the PUT command to UPLOAD files from a local directory/folder on a client
machine into INTERNAL STAGES. It does NOT work with external stages, and we cannot use
it from the Snowflake Web UI.
Q49
A) Permanent.
B) Temporary.
C) Transient.
D) External.
E) Internal.
Solution:
A) Permanent.
B) Temporary.
C) Transient.
D) External.
Explanation:
You can see the differences between these tables in the following image:
Q50
A) Storage.
B) Compute.
C) Cloud Services.
Solution:
C) Cloud Services.
Explanation:
The Cloud Services layer is a collection of services coordinating activities across Snowflake.
It's in charge of Authentication, Infrastructure management, Metadata management, Query
parsing and optimization, and Access control.
Q51
A) True.
B) False.
Solution:
B) False.
Explanation:
Q52
Solution:
Explanation:
You can see the different types of views in the following image:
Q53
Solution:
Explanation:
Q54
A) HIPAA.
B) PCI-DSS.
C) FedRAMP.
Solution:
A) HIPAA.
B) PCI-DSS.
C) FedRAMP.
Explanation:
They won't ask you in-depth questions about this topic in the exam, but it's important to
remember some of the most important ones. You can see other certifications at the
following link.
Q55
Solution:
Explanation:
Snowflake tables are automatically divided into micro-partitions, which are contiguous units of storage between 50 MB and 500 MB of uncompressed data.
Q56
What happens to the incoming queries when a warehouse does not have
enough resources to process them?
B) Queries are queued and executed when the warehouse has resources.
Solution:
B) Queries are queued and executed when the warehouse has resources.
Explanation:
If the warehouse does not have enough remaining resources to process a query, the query is
queued, pending resources that become available as other running queries complete.
Q57
A) ACCOUNTADMIN.
B) SECURITYADMIN.
C) SYSADMIN.
D) USERADMIN.
Solution:
A) ACCOUNTADMIN.
Explanation:
ACCOUNTADMIN is the only role that is able to create Shares and Resource Monitors by
default. However, account administrators can choose to enable users with other roles to
view and modify resource monitors using SQL.
Q58
What actions can the resource monitor associated with a Warehouse take
when it reaches (or is about to) hit the limit?
Solution:
Explanation:
A resource monitor can Notify, Notify & Suspend, and Notify & Suspend Immediately. You
can see these three actions in the following image:
Q59
A) METADATA$ACTION.
B) METADATA$ISREAD.
C) METADATA$ISUPDATE.
D) METADATA$ROW_ID.
E) METADATA$COLUMN_ID.
Solution:
A) METADATA$ACTION.
C) METADATA$ISUPDATE.
D) METADATA$ROW_ID.
Explanation:
Q60
Solution:
Explanation:
The following example shows how we can upload files from our local disk to a Table stage. As you can see, you need to use the @% prefix when indicating the table stage:
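A sketch of that PUT, run from SnowSQL with a placeholder file path and table name:
PUT file://C:\data\myfile.csv @%my_table;   -- @% points at the table's own stage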
Q61
Which of the following services are provided by the Cloud Services Layer?
A) Metadata Management.
B) Authentication.
C) Storage.
D) Infrastructure Management.
E) Query Execution.
Solution:
A) Metadata Management.
B) Authentication.
D) Infrastructure Management.
Explanation:
The Cloud Services layer is a collection of services coordinating activities across Snowflake.
It's in charge of Authentication, Infrastructure management, Metadata management, Query
parsing and optimization, and Access control.
Q62
What can you easily check to see if a large table will benefit from explicitly
defining a clustering key?
A) Clustering depth.
B) Clustering ratio.
C) Values in a table.
Solution:
A) Clustering depth.
Explanation:
The clustering depth measures the average depth of the overlapping micro-partitions for
specified columns in a table (1 or greater). The smaller the cluster depth is, the better
clustered the table is. You can get the clustering depth of a Snowflake table using this
command:
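For instance (the table and column names here are placeholders):
SELECT SYSTEM$CLUSTERING_DEPTH('my_table', '(my_column)');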
You can also see a real example of how it works at the following link.
Q63
B) The data retention period for a permanent table with 30 days of Time-Travel is 37 days.
E) Fail-Safe ensures that historical data is protected in the event of a system failure or other
catastrophic events.
Solution:
B) The data retention period for a permanent table with 30 days of Time-Travel is 37 days.
E) Fail-Safe ensures that historical data is protected in the event of a system failure or other
catastrophic events.
Explanation:
Regarding the second option, we have 30 days of Time-Travel plus 7 days of Fail-Safe: 37 days in total. You can see how Fail-Safe works in the following diagram:
Q64
A) Standard.
B) Append-only.
C) Update-only.
D) Insert-only.
Solution:
A) Standard.
B) Append-only.
D) Insert-only.
Explanation:
Standard and Append-only streams are supported on tables, directory tables, and views.
The Standard one tracks all DML changes to the source table, including inserts, updates, and
deletes, whereas the Append-only one tracks row inserts only.
The Insert-only stream also tracks row inserts only. The difference with the previous one is
that this one is only supported on EXTERNAL TABLES.
Q65
B) Snowpipe.
C) Both.
Solution:
Explanation:
Snowpipe is serverless, meaning it doesn't need a running warehouse to load data into
Snowflake.
Q66
C) Cache results.
Solution:
Explanation:
Storage fees are incurred for maintaining historical data during both the Time Travel and Fail-safe periods. To help manage storage costs, Snowflake provides temporary and transient tables, which do not incur the same fees as permanent tables but still incur storage charges. Storage costs also include data stored in Snowflake internal locations (i.e., user and table stages or internal named stages).
Q67
A) True.
B) False.
Solution:
A) True.
Explanation:
Q68
What will happen to the child task if you remove its predecessor?
Solution:
Explanation:
Also, if the owner role of a task is deleted, the Task Ownership is reassigned to the role that
dropped this role. This is also a typical exam question.
Q69
A) Credit Quota.
B) Monitor Level.
C) Schedule.
D) Actions.
Solution:
A) Credit Quota.
B) Monitor Level.
C) Schedule.
D) Actions.
Explanation:
The Credit Quota specifies the number of Snowflake credits allocated to the monitor for the
specified frequency interval.
The Monitor Level specifies whether the resource monitor is used to monitor the credit
usage for your entire account or individual warehouses.
The Schedule indicates when the monitor will start monitoring and when the credits will
reset to 0.
Each action specifies a threshold and the action to perform when the threshold is reached
within the specified interval.
Q70
Queries in Snowflake are getting queued on the warehouses and delaying the
ETL processes of the company. What are the possible solution options you can
think of, considering we have the Snowflake Enterprise edition?
Solution:
Explanation:
By resizing the warehouse, your company will scale up, reducing the time to execute big
queries. Using multi-cluster warehouses, you will have more queries running
simultaneously and a high concurrency when they execute, and this is the definition of
scaling out. You can see the differences between the different ways to scale in the following
picture:
Q71
Which factors influence the unit cost of Snowflake credits and data storage?
A) Snowflake Edition.
D) Users on Snowflake.
Solution:
A) Snowflake Edition.
Explanation:
You can create as many users as you want without additional cost; that's why that option is
incorrect. You can see a guide about pricing at the following link.
Q72
A) METADATA$FILENAME.
B) METADATA$FILEFORMAT.
C) METADATA$FILE_ROW_NUMBER.
Solution:
A) METADATA$FILENAME.
C) METADATA$FILE_ROW_NUMBER.
Explanation:
The METADATA$FILENAME column is the name of the staged data file that the current row belongs to. METADATA$FILE_ROW_NUMBER is the row number for each record in the staged data file. This is a way of querying the stage metadata:
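A sketch, with hypothetical stage and file-format names:
SELECT METADATA$FILENAME, METADATA$FILE_ROW_NUMBER, t.$1, t.$2
FROM @my_stage (FILE_FORMAT => 'my_csv_format') t;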
You can see another example (via docs.snowflake.com) in the following image:
Q73
A) Standard.
B) Enterprise.
C) Business Critical.
Solution:
B) Enterprise.
Explanation:
You can see some differences between the Snowflake editions in the following image:
Q74
Which Snowflake object returns a set of rows instead of a single, scalar value,
and can be accessed in the FROM clause of a query?
A) UDF.
B) UDTF.
C) Stored procedure.
Solution:
B) UDTF.
Explanation:
User-defined functions (UDFs) let you extend the system to perform operations that are not available through the built-in, system-defined functions. UDTFs can return multiple rows, which is the only difference from scalar UDFs.
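A minimal SQL UDTF sketch (table, columns, and function name are illustrative):
CREATE OR REPLACE FUNCTION orders_for_customer(id NUMBER)
  RETURNS TABLE (order_id NUMBER, amount NUMBER)
  AS
  $$
    SELECT order_id, amount
    FROM orders
    WHERE customer_id = id
  $$;

SELECT * FROM TABLE(orders_for_customer(42));   -- accessed in the FROM clause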
Q75
A) Standard.
B) Enterprise.
C) Business Critical.
Solution:
A) Standard.
Explanation:
In Snowflake, all the Virtual Warehouses are dedicated to the users. If you create a virtual warehouse, you will be the only one using it.
Q76
A) True.
B) False.
Solution:
A) True.
Explanation:
Imagine you have data on the cloud providers, but the data cannot be copied or moved to
any other location due to compliance regulations. This is a use case of External tables, which
allow us to query data stored in a Cloud Location, for example, AWS S3.
Q77
A) SYSTEM$CLUSTERING_DEPTH
B) SYSTEM$CLUSTERING_INFORMATION
C) SYSTEM$CLUSTERING_METADATA
Solution:
A) SYSTEM$CLUSTERING_DEPTH
B) SYSTEM$CLUSTERING_INFORMATION
Explanation:
The clustering depth measures the average depth of the overlapping micro-partitions for
specified columns in a table (1 or greater). The smaller the cluster depth is, the better
clustered the table is. You can use any of the previous commands to get the clustering depth of a table.
Q78
After how many days does the load history of Snowpipe expire?
A) 1 day.
B) 14 days.
C) 90 days.
D) 180 days.
Solution:
B) 14 days.
Explanation:
The load history is stored in the metadata of the pipe for 14 days. It must be requested from Snowflake via a REST endpoint, SQL table function, or ACCOUNT_USAGE view.
Q79
Solution:
Explanation:
Q80
Solution:
Explanation:
Only one SQL statement is allowed to be executed through a task. If you need to execute
multiple statements, build a procedure. You can read more information about Snowflake
tasks at the following link.
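A sketch of a task that wraps its logic in a stored procedure (all names are placeholders):
CREATE TASK my_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
  AS
  CALL my_procedure();      -- the single SQL statement the task runs
ALTER TASK my_task RESUME;  -- tasks are created suspended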
Q81
Which database objects can be shared using Snowflake Secure Data Sharing?
A) Tables.
B) External tables.
C) Secure views.
E) Secure UDFs.
Solution:
A) Tables.
B) External tables.
C) Secure views.
E) Secure UDFs.
Explanation:
Secure Data Sharing lets you share selected objects in a database in your account with other
Snowflake accounts. You can share all the previous database objects.
Q82
After how many days does the COPY INTO load metadata expire?
A) 1 day.
B) 14 days.
C) 64 days.
D) 180 days.
Solution:
C) 64 days.
Explanation:
The information about the loaded files is stored in Snowflake metadata. It means that you
cannot COPY the same file again in the next 64 days unless you specify it (with the
"FORCE=True" option in the COPY command). You can see this behavior in the following
image:
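If you really do need to reload already-loaded files, the option looks like this (placeholder names):
COPY INTO my_table FROM @my_stage FORCE = TRUE;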
Q83
Does Snowpipe guarantee that files are loaded in the same order they are
staged?
A) True.
B) False.
Solution:
B) False.
Explanation:
Snowpipe generally loads older files first, but there is no guarantee that files are loaded in
the same order they are staged.
Q84
Which of the following commands cannot be executed from the Snowflake UI?
A) SHOW.
B) LIST <stages>
C) GET.
D) COPY INTO.
E) PUT.
Solution:
C) GET.
E) PUT.
Explanation:
These two commands cannot be executed from the Snowflake web interface; instead, you
should use the SnowSQL client to GET or PUT data files.
Q85
Solution:
Explanation:
Each user has a Snowflake personal stage allocated to them by default for storing files, and no one can access it except the user it belongs to. It's referenced with the "@~" characters. In the following example, we are uploading the file "myfile.csv" to the current user's stage:
PUT file://C:\data\myfile.csv @~
Q86
A) Cluster Keys.
B) Multi-Warehouses.
E) Dedicated Warehouses.
Solution:
A) Cluster Keys.
B) Multi-Warehouses.
E) Dedicated Warehouses.
Q87
While loading data through the COPY command, you can transform the data.
Which of the below transformations are allowed?
A) Truncate columns.
B) Omit columns.
C) Filters.
D) Reorder columns.
E) Cast.
Solution:
A) Truncate columns.
B) Omit columns.
D) Reorder columns.
E) Cast.
Explanation:
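COPY INTO <table> supports simple transformations expressed as a SELECT over the staged file (reordering, casting, and omitting columns); filtering with WHERE is not supported. A sketch with placeholder names:
COPY INTO my_table (id, amount)
  FROM (SELECT $2::NUMBER, $1::NUMBER FROM @my_stage/data.csv.gz);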
Q88
A) Permanent.
B) Temporary.
C) Transient.
D) External.
Solution:
B) Temporary.
Explanation:
With temporary tables, you can optimize storage costs, as when the Snowflake session ends,
data stored in the table is entirely purged from the system. But they also require storage
costs while the session is active. A temporary table is purged once the session ends, so its retention period is 24 hours or the remainder of the session, whichever is shorter.
Q89
B) Cache.
Solution:
Explanation:
Zero-Copy cloning does NOT duplicate data; it duplicates the metadata of the micro-partitions. For this reason, Zero-Copy cloning doesn't consume storage at first. When you modify some cloned data, it will consume storage because Snowflake has to recreate the micro-partitions. You can see this behavior in the following image:
Q90
What actions can a Resource Monitor perform when it hits the limit?
C) Notify.
Solution:
C) Notify.
Explanation:
- Notify --> It performs no action but sends an alert notification (email/web UI).
- Notify & Suspend --> It sends a notification and suspends all assigned warehouses after all statements being executed by the warehouse(s) have been completed.
- Notify & Suspend Immediately --> It sends a notification and suspends all assigned
warehouses immediately.
Q91
A) Snowflake users have a limit on the number of roles that they can assume.
Solution:
Explanation:
Each user can be assigned multiple roles (and vice versa), but a user can assume only one role at a time. Privileges are assigned to roles; that's why the last option is false.
Q92
What option will you specify to delete the stage files after a successful load
into a Snowflake table with the COPY INTO command?
A) DELETE = TRUE
B) REMOVE = TRUE
C) PURGE = TRUE
D) TRUNCATE = TRUE
Solution:
C) PURGE = TRUE
Explanation:
If the PURGE option is set to TRUE, Snowflake will try its best to remove successfully loaded
data files from stages. If the purge operation fails for any reason, it won't return any error
for now.
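For example (stage and table names are placeholders):
COPY INTO my_table FROM @my_stage PURGE = TRUE;   -- delete staged files after a successful load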
Q93
Solution:
Explanation:
Clustering keys are a subset of columns or expressions on a table designated to co-locate the
data in the same micro-partitions. Data might become less clustered when we perform
many DML operations on a table. To solve that, Snowflake also provides periodic &
automatic re-clustering to maintain optimal clustering. These techniques improve the performance of long-running queries, as Snowflake will analyze fewer micro-partitions.
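Defining a clustering key is a single statement; the table and columns here are illustrative:
ALTER TABLE sales CLUSTER BY (sale_date, region);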
Q94
C) Based on the amount of uncompressed data stored on the last day of the month.
D) Based on the amount of compressed data stored on the last day of the month.
Solution:
Explanation:
Storage costs benefit from the automatic compression of all data stored, and the total
compressed file size is used to calculate the storage bill for an account.
Q95
A) You have data on the cloud providers, but the data cannot be copied or moved to any
other location due to compliance regulations.
B) You have a high volume of data on the cloud providers, but we only need a part of the
data in Snowflake.
C) You have data on the cloud providers that need to be updated by Snowflake.
Solution:
A) You have data on the cloud providers, but the data cannot be copied or moved to any
other location due to compliance regulations.
B) You have a high volume of data on the cloud providers, but we only need a part of the
data in Snowflake.
Explanation:
The third answer is incorrect, as external tables can only read data. The fourth option is also incorrect.
Q96
Which command will you run to list all users and roles to which a role has
been granted?
Solution:
Explanation:
SHOW GRANTS OF ROLE <role_name> lists all users and roles to which the role has been granted, whereas SHOW GRANTS TO ROLE <role_name> lists the privileges to which this role has access. Here you can see an example of running the command in my Snowflake account:
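As a sketch, with a placeholder role name:
SHOW GRANTS OF ROLE analyst;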
Q97
A warehouse ran for 62 seconds, and it was suspended. After some time, it ran
for another 20 seconds. For how many seconds will you be billed?
A) 20 seconds.
B) 62 seconds.
C) 92 seconds.
D) 122 seconds.
Solution:
D) 122 seconds.
Explanation:
You will be billed for 122 seconds (62 + 60 seconds) because warehouses are billed for a
minimum of one minute. The price would be different if the warehouse wasn't suspended
before executing the second query.
For example, if we had only run a query, and it had only run for 62 seconds, you would be
billed for these 62 seconds. If it had only run for 20 seconds, you would've been billed for 60
seconds.
Q98
B) The number of micro-partitions containing values that overlap with each other.
Solution:
B) The number of micro-partitions containing values that overlap with each other.
Q99
A) 15 minutes.
B) 60 minutes.
C) 4 hours.
D) 12 hours.
Solution:
C) 4 hours.
Explanation:
If the transaction is left open or not aborted by the user, Snowflake automatically rolls back
the transaction after being idle for four hours. You can still abort a running transaction with
the system function: SYSTEM$ABORT_TRANSACTION
Q100
A) Standard.
B) Enterprise.
C) Business Critical.
Solution:
C) Business Critical.
Explanation:
AWS PrivateLink is an AWS service for creating private VPC endpoints that allow direct,
secure connectivity between your AWS VPCs and the Snowflake VPC without traversing the
public Internet. This feature requires the Business Critical edition or higher. You can see the
differences between the Snowflake editions in the following image: