Practice Test 5
Question 1: Incorrect
You want to monitor resource utilization (RAM, Disk, Network, CPU, etc.) for all
applications in development, test and production GCP projects in a single
dashboard. What should you do?
Make use of the default Cloud Monitoring dashboards in all the projects.
(Correct)
(Incorrect)
In Cloud Monitoring, share charts from development, test and production GCP
projects.
Explanation
In Cloud Monitoring, share charts from development, test and production GCP
projects. is not right.
This option involves a lot of work. You can share charts from the development, test and
production projects
Ref: https://cloud.google.com/monitoring/charts/sharing-charts
but assembling them into a single consolidated dashboard would require an external tool,
for example by enabling Cloud Monitoring as a data source for Grafana and following the
instructions
at https://grafana.com/docs/grafana/latest/features/datasources/cloudmonitoring/
to build Grafana dashboards.
Make use of the default Cloud Monitoring dashboards in all the projects. is
not right.
Possibly, but this doesn't satisfy the requirement of a single dashboard ("single pane of
glass").
Question 2: Incorrect
Your company has a massive quantity of unstructured data in text, Apache AVRO
and PARQUET files in the on-premise data centre and wants to transform this data
using a Dataflow job and migrate cleansed/enriched data to BigQuery. How
should you make the on-premise files accessible to Cloud Dataflow?
Migrate the data from the on-premises data centre to BigQuery by using a custom
script with bq commands.
(Incorrect)
Migrate the data from the on-premises data centre to Cloud Spanner by using the
upload files function.
Migrate the data from the on-premises data centre to Cloud SQL for MySQL by
using the upload files function.
Migrate the data from the on-premises data centre to Cloud Storage by using a
custom script with gsutil commands.
(Correct)
Explanation
The key to answering this question is "unstructured data".
Migrate the data from the on-premises data centre to BigQuery by using a
custom script with bq commands. is not right.
The bq load command is used to load data into BigQuery from a local data source, i.e. a
local file, but the data has to be in a structured format.
bq --location=LOCATION load \
--source_format=FORMAT \
PROJECT_ID:DATASET.TABLE \
PATH_TO_SOURCE \
SCHEMA
where SCHEMA is a valid schema. The schema can be a local JSON file, or it can be typed
inline as part of the command. You can also use the --autodetect flag instead of
supplying a schema definition.
Ref: https://cloud.google.com/bigquery/docs/loading-data-local#bq
Migrate the data from the on-premises data centre to Cloud SQL for MySQL by
using the upload files function. is not right.
Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL, and SQL
Server. As this is a relational database, it is intended for structured data and is not a
fit for unstructured data.
Ref: https://cloud.google.com/sql
Migrate the data from the on-premises data centre to Cloud Spanner by using
the upload files function. is not right.
Cloud Spanner is the first scalable, enterprise-grade, globally-distributed, and strongly
consistent database service built for the cloud, specifically designed to combine the
benefits of relational database structure with non-relational horizontal scale. As Google
puts it, "With Cloud Spanner, you get the best of relational database structure and non-
relational database scale and performance with external strong consistency across rows,
regions, and continents." Either way, Cloud Spanner is for structured data and is not a
fit for unstructured data.
Ref: https://cloud.google.com/spanner
Migrate the data from the on-premises data centre to Cloud Storage by using
a custom script with gsutil commands. is the right answer.
Cloud storage imposes no such restrictions; you can store large quantities of
unstructured data in different file formats. Cloud Storage provides globally unified,
scalable, and highly durable object storage for developers and enterprises. Also,
Dataflow can query Cloud Storage filesets, as described here:
Ref: https://cloud.google.com/dataflow/docs/guides/sql/data-sources-destinations#querying-gcs-filesets
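As an illustrative sketch of the staging step, the files can be copied with gsutil; the
bucket name and local path below are assumptions, and -m parallelizes the copy of many files.
gsutil -m cp -r /data/onprem-exports gs://onprem-landing-zone/raw/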
Question 3: Correct
You are deploying an application on the Google Compute Engine, and you want to
minimize network egress costs. The organization has a policy that requires you to
block all but essential egress traffic. What should you do?
Enable a firewall rule at priority 100 to allow ingress and essential egress traffic.
Enable a firewall rule at priority 100 to block all egress traffic, and another firewall
rule at priority 65534 to allow essential egress traffic.
Enable a firewall rule at priority 65534 to block all egress traffic, and another
firewall rule at priority 100 to allow essential egress traffic.
(Correct)
Enable a firewall rule at priority 100 to allow essential egress traffic.
Explanation
Enable a firewall rule at priority 100 to allow essential egress traffic. is
not right.
This option on its own leaves all other egress traffic allowed. Every VPC network has two
implied firewall rules, one of which is the implied allow egress rule. This egress rule,
whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible
(65535), lets any instance send traffic to any destination, except for traffic blocked by
Google Cloud. Since we want to block all but essential egress traffic, an allow rule alone
is not enough; we also need a rule that denies the remaining egress traffic.
Ref: https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
Enable a firewall rule at priority 100 to allow ingress and essential egress
traffic. is not right.
Ingress and egress rules are evaluated independently, so allowing ingress does nothing to
restrict egress. Like above, this option leaves all other egress traffic allowed. Every
VPC network has two implied firewall rules, one of which is the implied allow egress rule.
This egress rule, whose action is allow, destination is 0.0.0.0/0, and priority is the
lowest possible (65535), lets any instance send traffic to any destination, except for
traffic blocked by Google Cloud. Since we want to block all but essential egress traffic,
an allow rule alone is not enough; we also need a rule that denies the remaining egress traffic.
Ref: https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
Enable a firewall rule at priority 100 to block all egress traffic, and
another firewall rule at priority 65534 to allow essential egress traffic. is
not right.
The firewall rule priority is an integer from 0 to 65535, inclusive. Lower integers indicate
higher priorities. The highest priority rule applicable for a given protocol and port
definition takes precedence over others. In this scenario, the deny-all rule at priority
100 takes precedence over the allow rule at the lower priority of 65534, resulting in all
outgoing traffic, including essential traffic, being blocked.
Ref: https://cloud.google.com/vpc/docs/firewalls#priority_order_for_firewall_rules
Enable a firewall rule at priority 65534 to block all egress traffic, and
another firewall rule at priority 100 to allow essential egress traffic. is
the right answer.
The firewall rule priority is an integer from 0 to 65535, inclusive. Lower integers indicate
higher priorities. The highest priority rule applicable for a given protocol and port
definition takes precedence over others. The relative priority of a firewall rule determines
whether it is applicable when evaluated against others. In this scenario, the allow rule at
priority 100 is evaluated first and allows the essential egress traffic. The deny rule at
priority 65534 is evaluated last and denies all other traffic that is not allowed by the
higher-priority allow rules.
Ref: https://cloud.google.com/vpc/docs/firewalls#priority_order_for_firewall_rules
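As a minimal sketch of this rule pair with gcloud, where the rule names, the network, and
the choice of tcp:443 as the essential egress traffic are assumptions:
gcloud compute firewall-rules create allow-essential-egress \
--network=prod-vpc --direction=EGRESS --action=ALLOW \
--rules=tcp:443 --destination-ranges=0.0.0.0/0 --priority=100
gcloud compute firewall-rules create deny-all-egress \
--network=prod-vpc --direction=EGRESS --action=DENY \
--rules=all --destination-ranges=0.0.0.0/0 --priority=65534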
Question 4: Correct
You manage an overnight batch job that uses 20 VMs to transfer customer
information from a CRM system to a BigQuery dataset. The job can tolerate some
VMs going down. The current high cost of the VMs makes the overnight job not
viable, and you want to reduce the costs. What should you do?
Use a fleet of f1-micro instances behind a Managed Instances Group (MIG) with
autoscaling and minimum nodes set to 1.
Use a fleet of f1-micro instances behind a Managed Instances Group (MIG) with
autoscaling. Set minimum and maximum nodes to 20.
Use preemptible compute engine instances to reduce cost.
(Correct)
Explanation
Use preemptible compute engine instances to reduce cost. is the right answer.
Since the batch workload is fault-tolerant, i.e. can tolerate some of the VMs being
terminated, you should use preemptible VMs. A preemptible VM is an instance that you
can create and run at a much lower price than normal instances. However, Compute
Engine might stop (preempt) these instances if it requires access to those resources for
other tasks. Preemptible instances are excess Compute Engine capacity, so their
availability varies with usage. If your apps are fault-tolerant and can withstand possible
instance preemptions, then preemptible instances can reduce your Compute Engine
costs significantly. For example, batch processing jobs can run on preemptible instances.
If some of those instances stop during processing, the job slows but does not entirely
stop. Preemptible instances complete your batch processing tasks without placing
additional workload on your existing instances and without requiring you to pay full
price for additional regular instances.
Ref: https://cloud.google.com/compute/docs/instances/preemptible#what_is_a_preempt
ible_instance
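A minimal sketch of creating one such preemptible batch VM with gcloud; the instance name,
zone, and machine type are assumptions:
gcloud compute instances create batch-worker-01 \
--zone=us-central1-a --machine-type=n1-standard-4 --preemptible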
Question 5: Incorrect
You want to optimize the storage costs for long term archival of logs. Logs are
accessed frequently in the first 30 days and only retrieved after that if there is any
special requirement in the annual audit. The auditors may need to look into log
entries of the previous three years. What should you do?
Store the logs in Standard Storage Class and set up a lifecycle policy to transition
the files older than 30 days to Archive Storage Class.
(Correct)
Store the logs in Standard Storage Class and set up lifecycle policies to transition
the files older than 30 days to Coldline Storage Class, and files older than 1 year to
Archive Storage Class.
(Incorrect)
Store the logs in Nearline Storage Class and set up a lifecycle policy to transition
the files older than 30 days to Archive Storage Class.
Store the logs in Nearline Storage Class and set up lifecycle policies to transition
the files older than 30 days to Coldline Storage Class, and files older than 1 year to
Archive Storage Class.
Explanation
Store the logs in Nearline Storage Class and set up a lifecycle policy to
transition the files older than 30 days to Archive Storage Class. is not right.
Nearline Storage is ideal for data you plan to read or modify on average once per
month or less, and there are costs associated with data retrieval. Since we need to
access the data frequently during the first 30 days, we should avoid Nearline and prefer
Standard Storage, which is suitable for frequently accessed ("hot") data.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Store the logs in Nearline Storage Class and set up lifecycle policies to
transition the files older than 30 days to Coldline Storage Class, and files
older than 1 year to Archive Storage Class. is not right.
Nearline Storage is ideal for data you plan to read or modify on average once per
month or less, and there are costs associated with data retrieval. Since we need to
access the data frequently during the first 30 days, we should avoid Nearline and prefer
Standard Storage, which is suitable for frequently accessed ("hot") data.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Store the logs in Standard Storage Class and set up lifecycle policies to
transition the files older than 30 days to Coldline Storage Class, and files
older than 1 year to Archive Storage Class. is not right.
Since we need to access the data frequently during the first 30 days, we should use
Standard Storage, which is suitable for frequently accessed ("hot") data.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
However, the intermediate Coldline transition is unnecessary: there is no requirement to
access the data after 30 days other than the occasional audit, so we might as well
transition it straight to Archive storage, which offers the lowest-cost option for
archiving data.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Ref: https://cloud.google.com/storage/docs/storage-classes#archive
Store the logs in Standard Storage Class and set up a lifecycle policy to
transition the files older than 30 days to Archive Storage Class. is the right
answer.
Since we need to access the data frequently during the first 30 days, we should use
Standard Storage, which is suitable for frequently accessed ("hot") data.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
And since there is no requirement to access the data after that (other than the occasional
audit), we can transition it to Archive storage, which offers the lowest-cost option for
archiving data.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Ref: https://cloud.google.com/storage/docs/storage-classes#archive
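A minimal sketch of such a lifecycle rule, assuming a hypothetical bucket name and a
lifecycle.json file:
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}
    }
  ]
}
gsutil lifecycle set lifecycle.json gs://audit-log-archive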
Question 6: Correct
You want to run an application in Google Compute Engine in the app-tier GCP
project and have it export data from Cloud Bigtable to daily-us-customer-export
Cloud Storage bucket in the data-warehousing project. You plan to run a Cloud
Dataflow job in the data-warehousing project to pick up data from this bucket for
further processing. How should you design the IAM access to enable the compute
engine instance to push objects to the daily-us-customer-export Cloud Storage bucket in
the data-warehousing project?
Ensure both the projects are in the same GCP folder in the resource hierarchy.
Grant the service account used by the compute engine in app-tier GCP project
roles/storage.objectCreator IAM role on app-tier GCP project.
Grant the service account used by the compute engine in app-tier GCP project
roles/storage.objectCreator IAM role on the daily-us-customer-export Cloud
Storage bucket.
(Correct)
Explanation
Ensure both the projects are in the same GCP folder in the resource
hierarchy. is not right.
Folder resources provide an additional grouping mechanism and isolation boundaries
between projects. They can be seen as sub-organizations within the Organization.
Folders can be used to model different legal entities, departments, and teams within a
company. For example, the first level of folders could be used to represent the main
departments in your organization. Since folders can contain projects and other folders,
each folder could then include other sub-folders, to represent different teams. Each
team folder could contain additional sub-folders to represent different applications.
Ref: https://cloud.google.com/resource-manager/docs/cloud-platform-resource-
hierarchy
Although it is possible to move both projects under the same folder, unless the relevant
permissions are assigned to the VM service account, it can't push the exports to the
cloud storage bucket in a different project.
Grant the service account used by the compute engine in app-tier GCP project
roles/storage.objectCreator IAM role on app-tier GCP project. is not right.
The bucket daily-us-customer-export is in the data-warehousing project, so the VM's
service account must be assigned the role on the data-warehousing project (or the bucket
itself), not on the app-tier project.
Grant the service account used by the compute engine in app-tier GCP project
roles/storage.objectCreator IAM role on the daily-us-customer-export Cloud
Storage bucket. is the right answer.
Since the VM needs to push objects to the bucket daily-us-customer-export, which is in the
data-warehousing project, its service account needs to be granted the required role
(Storage Object Creator) on that bucket.
Question 7: Incorrect
Your company’s compute workloads are split between the on-premises data centre
and Google Cloud Platform. The on-premises data centre is connected to Google
Cloud network by Cloud VPN. You have a requirement to provision a new non-
publicly-reachable compute engine instance on a c2-standard-8 machine type in
australia-southeast1-b zone. What should you do?
Configure a route to route all traffic to the public IP of compute engine instance
through the VPN tunnel.
(Incorrect)
Provision the instance in a subnet that has Google Private Access enabled.
Provision the instance in a subnetwork that has all egress traffic disabled.
(Correct)
Explanation
Provision the instance in a subnet that has Google Private Access enabled. is
not right.
VM instances that only have internal IP addresses (no external IP addresses) can use
Private Google Access to reach the external IP addresses of Google APIs and services.
Private Google Access has no effect on whether an instance is publicly reachable;
instances with public IPs remain publicly reachable irrespective of the Private Google
Access setting.
Ref: https://cloud.google.com/vpc/docs/private-access-options#pga
Question 8: Incorrect
Your compliance team has asked you to set up an external auditor access to logs
from all GCP projects for the last 60 days. The auditor wants to combine, explore
and analyze the contents of the logs from all projects quickly and efficiently. You
want to follow Google Recommended practices. What should you do?
Set up a BigQuery sink destination to export logs from all the projects to a
dataset. Configure the table expiration on the dataset to 60 days. Ask the auditor
to query logs from the dataset.
(Correct)
Set up a Cloud Scheduler job to trigger a Cloud Function that reads and exports
logs from all the projects to a BigQuery dataset. Configure the table expiration on
the dataset to 60 days. Ask the auditor to query logs from the dataset.
Set up a Cloud Storage sink destination to export logs from all the projects to a
bucket. Configure a lifecycle rule to delete objects older than 60 days. Ask the
auditor to query logs from the bucket.
(Incorrect)
Explanation
Ask the auditor to query logs from Cloud Logging. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention
period - which is 30 days (default configuration). After that, the entries are deleted. To
keep log entries longer, you need to export them outside of Stackdriver Logging by
configuring log sinks. Also, it’s not easy to combine logs from all projects using this
option.
https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-
cloud-audit-logging
Set up a Cloud Scheduler job to trigger a Cloud Function that reads and
exports logs from all the projects to a BigQuery dataset. Configure the table
expiration on the dataset to 60 days. Ask the auditor to query logs from the
dataset. is not right.
While this works, it makes no sense to use a Cloud Scheduler job to read from Stackdriver
and store the logs in BigQuery when Google provides a feature (export sinks) that does
the same thing and works out of the box.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Set up a Cloud Storage sink destination to export logs from all the projects
to a bucket. Configure a lifecycle rule to delete objects older than 60
days. Ask the auditor to query logs from the bucket. is not right.
You can export logs by creating one or more sinks that include a logs query and an
export destination. Supported destinations for exported log entries are Cloud Storage,
BigQuery, and Pub/Sub.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was
created: a Google Cloud project, organization, folder, or billing account. If it makes it
easier to export logs from all projects of an organization, you can create an aggregated
sink that can export log entries from all the projects, folders, and billing accounts of a
Google Cloud organization.
https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in Cloud Storage, but querying log information in
Cloud Storage is harder than querying it in a BigQuery dataset. For this
reason, we should prefer BigQuery over Cloud Storage.
Set up a BigQuery sink destination to export logs from all the projects to a
dataset. Configure the table expiration on the dataset to 60 days. Ask the
auditor to query logs from the dataset. is the right answer.
You can export logs by creating one or more sinks that include a logs query and an
export destination. Supported destinations for exported log entries are Cloud Storage,
BigQuery, and Pub/Sub.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was
created: a Google Cloud project, organization, folder, or billing account. If it makes it
easier to export logs from all projects of an organization, you can create an aggregated
sink that can export log entries from all the projects, folders, and billing accounts of a
Google Cloud organization.
https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in a BigQuery dataset. Querying information in a
BigQuery dataset is easier and quicker than analyzing contents in a Cloud Storage bucket.
As the requirement is to combine, explore and analyze the log contents quickly, we should
prefer BigQuery over Cloud Storage.
Also, you can control storage costs and optimize storage usage by setting the default
table expiration for newly created tables in a dataset. If you set the property when the
dataset is created, any table created in the dataset is deleted after the expiration period.
If you set the property after the dataset is created, only new tables are deleted after the
expiration period.
For example, if you set the default table expiration to 7 days, older data is automatically
deleted after 1 week.
Ref: https://cloud.google.com/bigquery/docs/best-practices-storage
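A minimal sketch of creating such an aggregated sink at the organization level; the sink
name is an assumption and the uppercase values are placeholders. The sink's writer identity
must also be granted the BigQuery Data Editor role on the destination dataset.
gcloud logging sinks create auditor-logs-sink \
bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID \
--organization=ORGANIZATION_ID --include-children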
Question 9: Correct
A finance analyst at your company is suspended pending an investigation into
alleged financial misconduct. However, their G Suite account was not disabled
immediately. Your compliance team has asked you to find out if the suspended
employee has accessed any audit logs or BigQuery datasets after their suspension.
What should you do?
Search for users’ Cloud Identity username (email address) as the principal in
system event logs in Cloud Logging.
Search for users’ Cloud Identity username (email address) as the principal in data
access logs in Cloud Logging.
(Correct)
Search for users’ service account as the principal in data access logs in Cloud
Logging.
Search for users’ service account as the principal in admin activity logs in Cloud
Logging.
Explanation
Search for users’ service account as the principal in admin activity logs in
Cloud Logging. is not right.
Admin Activity logs do not contain log entries for reading resource data. Admin Activity
audit logs contain log entries for API calls or other administrative actions that modify
the configuration or metadata of resources.
Ref: https://cloud.google.com/logging/docs/audit#admin-activity
Search for users’ Cloud Identity username (email address) as the principal
in system event logs in Cloud Logging. is not right.
System Event audit logs do not contain log entries for reading resource data. System
Event audit logs contain log entries for Google Cloud administrative actions that modify
the configuration of resources. Google systems generate system Event audit logs; they
are not driven by direct user action.
Ref: https://cloud.google.com/logging/docs/audit#system-event
Search for users’ service account as the principal in data access logs in
Cloud Logging. is not right.
Data Access audit logs are the right place to look, but the suspended employee would
have accessed the audit logs and BigQuery datasets using their own identity (their Cloud
Identity username), not a service account. Searching for a service account as the
principal would therefore not surface the employee's activity.
Ref: https://cloud.google.com/logging/docs/audit#data-access
Search for users’ Cloud Identity username (email address) as the principal
in data access logs in Cloud Logging. is the right answer.
Data Access audit logs contain API calls that read the configuration or metadata of
resources, as well as user-driven API calls that create, modify, or read user-provided
resource data.
Ref: https://cloud.google.com/logging/docs/audit#data-access
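As an illustrative sketch, the Data Access entries for a specific user can be filtered on
the principal email; the email address and project ID below are placeholders:
gcloud logging read \
'logName:"cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="suspended.analyst@example.com"' \
--project=PROJECT_ID --freshness=30d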
(Incorrect)
Grant the default app-engine service account in the app-tier GCP project
roles/bigquery.dataViewer role on the same project.
Grant the default app-engine service account in the app-tier GCP project
roles/bigquery.dataViewer role on the data-warehousing project.
(Correct)
Grant the default app-engine service account in the app-tier GCP project
roles/bigquery.jobUser role on data-warehousing project.
Explanation
Grant the default app-engine service account in the app-tier GCP project
roles/bigquery.jobUser role on the data-warehousing project. is not right.
Granting the jobUser IAM role lets your App Engine service account create and run jobs,
including query jobs, but doesn't give access to read data from the datasets. The role
needed for reading data from datasets is dataViewer.
Ref: https://cloud.google.com/bigquery/docs/access-control#bigquery
Grant the default app-engine service account in the app-tier GCP project
roles/bigquery.dataViewer role on the same project. is not right.
If you grant the role on your own (app-tier) project, you are granting permissions only on
BigQuery resources in that project. Since the requirement is for the App Engine service
account to read data from a BigQuery dataset in a different project, these permissions are
insufficient.
Grant the default app-engine service account in the app-tier GCP project
roles/bigquery.dataViewer role on the data-warehousing project. is the right
answer.
Since the data resides in the other project, the role must be granted in the other project
to the App Engine service account. And since you want to read the data from BigQuery
datasets, you need dataViewer role.
Ref: https://cloud.google.com/bigquery/docs/access-control#bigquery
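A minimal sketch of the grant with gcloud; the App Engine default service account email
depends on the actual app-tier project ID, so the value below is an assumption:
gcloud projects add-iam-policy-binding data-warehousing \
--member='serviceAccount:app-tier@appspot.gserviceaccount.com' \
--role='roles/bigquery.dataViewer'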
Use Cloud Identity Aware Proxy (IAP) to enable SSH tunnels to the VMs and add
the third-party team as a tunnel user.
(Correct)
Set up a Cloud VPN tunnel between the third-party network and your production
GCP project.
(Incorrect)
Set up a firewall rule to open SSH port (TCP:22) to the IP range of the third-party
team.
Add all the third party teams’ SSH keys to the production compute engine
instances.
Explanation
Set up a firewall rule to open SSH port (TCP:22) to the IP range of the
third-party team. is not right.
This option is a poor way to enable access: it exposes the SSH port of the production
instances to an external IP range over the public internet, and you still can't SSH to the
instances without adding an SSH public key.
Set up a Cloud VPN tunnel between the third-party network and your
production GCP project. is not right.
A step forward, but you still can't SSH without adding SSH public keys to the instances
and opening the firewall ports to allow traffic from the third-party team's IP range.
Add all the third party teams’ SSH keys to the production compute engine
instances. is not right.
Like above, you haven't opened the firewall to allow traffic from the third-party team's
IP range, and the SSH connections would still arrive directly over the public internet.
Use Cloud Identity Aware Proxy (IAP) to enable SSH tunnels to the VMs and
add the third-party team as a tunnel user. is the right answer.
This option is the preferred approach, given that the operations partner does not use
Google accounts. IAP lets you
- Control access to your cloud-based and on-premises applications and VMs running on
Google Cloud
- Verify user identity and use context to determine if a user should be granted access
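A minimal sketch of this setup, assuming the third-party users can be represented as IAM
members; the member, project, instance, and zone values are placeholders:
gcloud projects add-iam-policy-binding PROJECT_ID \
--member='user:partner-operator@example.com' \
--role='roles/iap.tunnelResourceAccessor'
gcloud compute ssh INSTANCE_NAME --zone=ZONE --tunnel-through-iap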
(Correct)
Uncheck “Delete boot disk when instance is deleted” option when provisioning the
compute engine instance.
Explanation
Deploy the application on a preemptible compute engine instance. is not right.
A preemptible VM is an instance that you can create and run at a much lower price than
normal instances. However, Compute Engine might terminate (preempt) these instances
if it requires access to those resources for other tasks. Preemptible instances are excess
Compute Engine capacity, so their availability varies with usage. This option wouldn't
help with our requirement - to prevent anyone from accidentally destroying the
instance.
Ref: https://cloud.google.com/compute/docs/instances/preemptible
Uncheck “Delete boot disk when instance is deleted” option when provisioning
the compute engine instance. is not right.
This setting controls whether the read/write persistent boot disk is automatically deleted
when the associated VM instance is deleted. Enabling or disabling the flag affects only
disk deletion; it does nothing to prevent the instance itself from being deleted.
Ref: https://cloud.google.com/compute/docs/disks/add-persistent-
disk#updateautodelete
2. Ensure the instance template, instance and the persistent disk names do not conflict.
1. Ensure you don’t have any persistent disks with the same name as the VM instance.
(Correct)
1. Ensure you don’t have any persistent disks with the same name as the VM instance.
(Incorrect)
1. Ensure the instance template, instance and the persistent disk names do not conflict.
Explanation
1. Ensure you don’t have any persistent disks with the same name as the VM
instance.
2. Ensure the disk autodelete property is turned on (disks.autoDelete set to
true).
3. Ensure instance template syntax is valid. is the right answer.
As described in this article, "My managed instance group keeps failing to create a VM.
What's going on?"
https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-
managed-instances#troubleshooting
The likely causes are:
- A persistent disk already exists with the same name as the VM instance.
- The disk autodelete property (disks.autoDelete) is not set to true.
- The instance template syntax is invalid.
(Correct)
Explanation
Create a gcloud configuration for each production project. To manage
resources of a particular project, activate the relevant gcloud
configuration. is the right answer.
gcloud configurations let you manage multiple projects in the gcloud CLI with the fewest
possible steps.
Ref: https://cloud.google.com/sdk/gcloud/reference/config
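A minimal sketch of creating and switching between per-project configurations; the
configuration and project names are placeholders. Creating a configuration also activates
it, so the project set afterwards applies to it; the last command shows switching to
another configuration created the same way.
gcloud config configurations create prod-project-a
gcloud config set project prod-project-a-id
gcloud config configurations activate prod-project-b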
Preview the updates using Deployment Manager and store the results in a Google
Cloud Storage bucket.
(Correct)
Clone the production environment to create a staging environment and deploy the
proposed changes to the staging environment. Execute gcloud compute instances
list to view the changes and store the results in a Google Cloud Storage bucket.
Clone the production environment to create a staging environment and deploy the
proposed changes to the staging environment. Execute gcloud compute instances
list to view the changes and store the results in a Google Cloud Source Repository.
(Incorrect)
Preview the updates using Deployment Manager and store the results in a Google
Cloud Source Repository.
Explanation
Clone the production environment to create a staging environment and deploy
the proposed changes to the staging environment. Execute gcloud compute
instances list to view the changes and store the results in a Google Cloud
Storage bucket. is not right.
gcloud compute instances list - lists Google Compute Engine instances. The
infrastructure changes may include much more than compute engine instances, e.g.
firewall rules, VPC, subnets, databases etc. The output of this command is not sufficient
to describe the proposed changes. Moreover, you want to share the proposed changes,
not the changes after applying them.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Preview the updates using Deployment Manager and store the results in a
Google Cloud Source Repository. is not right.
With deployment manager, you can preview the update you want to make before
committing any changes, with the gcloud command-line tool or the API. The
Deployment Manager service previews the configuration by expanding the full
configuration and creating "shell" resources. Deployment Manager does not instantiate
any actual resources when you preview a configuration, allowing you to see the
deployment before committing to it.
Ref: https://cloud.google.com/deployment-manager
However, saving the proposed changes to Cloud Source Repositories is not a great idea.
Cloud Source Repositories is a private Git repository service on GCP and is not a suitable
place for sharing such output with the team.
Ref: https://cloud.google.com/source-repositories
Preview the updates using Deployment Manager and store the results in a
Google Cloud Storage bucket. is the right answer.
With deployment manager, you can preview the update you want to make before
committing any changes, with the gcloud command-line tool or the API. The
Deployment Manager service previews the configuration by expanding the full
configuration and creating "shell" resources. Deployment Manager does not instantiate
any actual resources when you preview a configuration, allowing you to see the
deployment before committing to it.
Ref: https://cloud.google.com/deployment-manager
Cloud Storage bucket is an ideal place to upload the information and share it with the
rest of the team.
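A minimal sketch of previewing an update and sharing the result; the deployment name,
config file, and bucket name are placeholders:
gcloud deployment-manager deployments update my-deployment \
--config updated-config.yaml --preview
gcloud deployment-manager deployments describe my-deployment > proposed-changes.txt
gsutil cp proposed-changes.txt gs://release-review-bucket/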
Create a new GKE cluster with node pool instances of type c2-standard-16. Deploy
the application on the new GKE cluster and delete the old GKE Cluster.
(Incorrect)
Update the existing cluster to add a new node pool with c2-standard-16 machine
types and deploy the pods.
(Correct)
Create a new GKE cluster with two node pools – one with e2-standard-2 machine
types and other with c2-standard-16 machine types. Deploy the application on the
new GKE cluster and delete the old GKE Cluster.
Explanation
Create a new GKE cluster with node pool instances of type c2-standard-16.
Deploy the application on the new GKE cluster and delete the old GKE
Cluster. is not right.
This option results in the extra cost of running two clusters in parallel until the cutover
happens. Also, creating a single node pool with just c2-standard-16 nodes might result
in inefficient use of resources and, subsequently, extra costs.
Create a new GKE cluster with two node pools – one with e2-standard-2
machine types and other with c2-standard-16 machine types. Deploy the
application on the new GKE cluster and delete the old GKE Cluster. is not
right.
Having two node pools - one based on e2-standard-2 and the other based on
c2-standard-16 - is the right idea, and the relevant pods can be deployed to the
respective node pools. However, you are still incurring the extra cost of running two
clusters in parallel while the cutover happens.
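For the correct option - adding a c2-standard-16 node pool to the existing cluster - a
minimal sketch with gcloud follows; the cluster name, zone, and node count are assumptions:
gcloud container node-pools create high-cpu-pool \
--cluster=existing-cluster --zone=us-central1-a \
--machine-type=c2-standard-16 --num-nodes=2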
Enable Billing Export from all GCP projects to BigQuery and ask the finance team
to use Google Data Studio to visualize the data.
(Correct)
Use Cloud Scheduler to trigger a Cloud Function every hour. Have the Cloud
Function download the CSV from the Cost Table page and upload the data to
BigQuery. Ask the finance team to use Google Data Studio to visualize the data.
Use Google pricing calculator for all the services used in all GCP projects and pass
the estimated cost to finance team every month.
(Incorrect)
Ask the finance team to check reports view section in Cloud Billing Console.
Explanation
Use Google pricing calculator for all the services used in all GCP projects
and pass the estimated cost to finance team every month. is not right.
We are interested in the costs incurred, not estimates.
Use Cloud Scheduler to trigger a Cloud Function every hour. Have the Cloud
Function download the CSV from the Cost Table page and upload the data to
BigQuery. Ask the finance team to use Google Data Studio to visualize the
data. is not right.
The question states "You want to include new costs as soon as they become available"
but exporting CSV is a manual process, i.e. not automated, so you don't get new cost
data as soon as it becomes available.
Ask the finance team to check reports view section in Cloud Billing
Console. is not right.
If all projects are linked to the same billing account, then the billing report would have
provided this information in a single screen with a visual representation that can be
customized based on different parameters. However, in this scenario, projects are linked
to different billing accounts and viewing the billing information of all these projects in a
single visual representation is not possible in the Reports View section in Cloud Billing
Console.
Ref: https://cloud.google.com/billing/docs/how-to/reports
Enable Billing Export from all GCP projects to BigQuery and ask the finance
team to use Google Data Studio to visualize the data. is the right answer.
Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing
data (such as usage, cost estimates, and pricing data) automatically throughout the day
to a BigQuery dataset that you specify. Then you can access your Cloud Billing data from
BigQuery for detailed analysis, or use a tool like Google Data Studio to visualize your
data and provide cost visibility to the finance department. All projects can be configured
to export their data to the same billing dataset. As the export happens automatically
throughout the day, this satisfies our "as soon as possible" requirement.
Ref: https://cloud.google.com/billing/docs/how-to/export-data-bigquery
Add a metadata entry in the Compute Engine Settings page with key:
compute/zone and value: europe-west1-b.
(Incorrect)
In GCP Console set europe-west1-b zone in the default location in Compute Engine
Settings.
(Correct)
Explanation
Update the gcloud configuration file ~/config.ini to set europe-west1-b as
the default zone. is not right.
gcloud does not read its configuration from a ~/config.ini file.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations
Ref: https://cloud.google.com/sdk/docs/configurations
Run gcloud config to set europe-west1-b as the default zone. is not right.
Using gcloud config set, you can set the zone in your active configuration only. This
setting does not apply to other gcloud configurations and does not become the default
for the project.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set
gcloud config set compute/zone europe-west1-b
Add a metadata entry in the Compute Engine Settings page with key:
compute/zone and value: europe-west1-b. is not right.
The default zone is not set through a compute/zone metadata key. You could achieve this
behaviour by setting the google-compute-default-region and google-compute-default-zone
project metadata keys, for example by running the following in gcloud.
https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region#gcloud
gcloud compute project-info add-metadata \
--metadata google-compute-default-region=europe-west1,google-compute-default-zone=europe-west1-b
After you update the default metadata by using any method, run the gcloud init
command to reinitialize your default configuration. The gcloud tool refreshes the default
region and zone settings only after you run the gcloud init command.
Question 19: Incorrect
You deployed an application on a general-purpose Google Cloud Compute Engine
instance that uses a persistent zonal SSD of 300 GB. The application downloads
large Apache AVRO files from Cloud Storage, retrieves customer details from them
and saves a text file on local disk for each customer before pushing all the text
files to a Google Storage Bucket. These operations require high disk I/O, but you
find that the read and write operations on the disk are always throttled. What
should you do to improve the throughput while keeping costs to a minimum?
(Correct)
(Incorrect)
Explanation
Replace Zonal Persistent SSD with a Regional Persistent SSD. is not right.
Migrating to a regional SSD would actually make it worse. At the time of writing, the
Read IOPS for a Zonal standard persistent disk is 7,500, and the Read IOPS reduces to
3000 for a Regional standard persistent disk, which reduces the throughput.
Ref: https://cloud.google.com/compute/docs/disks/performance
Bump up the size of its SSD persistent disk to 1 TB. is not right.
The performance of SSD persistent disks scales with the size of the disk.
Ref: https://cloud.google.com/compute/docs/disks/performance#cpu_count_size
However, there is no guarantee that increasing the disk to 1 TB will increase the
throughput in this scenario, as disk performance also depends on the number of vCPUs on
the VM instance.
Ref: https://cloud.google.com/compute/docs/disks/performance#ssd_persistent_disk_pe
rformance_by_disk_size
Ref: https://cloud.google.com/compute/docs/disks/performance#machine-type-disk-
limits
For example, consider a 1,000 GB SSD persistent disk attached to an instance with an N2
machine type and 4 vCPUs. The read limit based solely on the size of the disk is 30,000
IOPS. However, because the instance has 4 vCPUs, the read limit is restricted to 15,000
IOPS.
Replace Zonal Persistent SSD with a Local SSD. is the right answer.
Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs
have higher throughput and lower latency than standard persistent disks or SSD
persistent disks. The performance gains from local SSDs require trade-offs in availability,
durability, and flexibility. Because of these trade-offs, Local SSD storage isn't
automatically replicated, and all data on the local SSD might be lost if the instance
terminates for any reason.
Ref: https://cloud.google.com/compute/docs/disks#localssds
Ref: https://cloud.google.com/compute/docs/disks/performance#type_comparison
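A minimal sketch of attaching a local SSD at instance creation (each local SSD partition
is 375 GB, which covers the 300 GB requirement); the instance name, zone, and machine type
are assumptions:
gcloud compute instances create avro-processing-vm \
--zone=us-central1-a --machine-type=n2-standard-8 \
--local-ssd=interface=NVME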
Write a cron script that checks for objects older than 50 days and deletes them.
Enable lifecycle policy on the bucket to delete objects older than 50 days.
(Correct)
Use Cloud Scheduler to trigger a Cloud Function to check for objects older than 50
days and delete them.
(Incorrect)
Have the mobile application use signed URLs to enable time-limited upload to
Cloud Storage.
(Correct)
Explanation
Have the mobile application send the images to an SFTP server. is not right.
It is possible to set up an SFTP server so that your suppliers can upload files but building
an SFTP solution is not something you would do when the development cycle is short. You
should instead look for off-the-shelf capabilities that work with minimal configuration.
Moreover, this option doesn't specify where the uploaded files are stored, how they are
secured, or how the expiration is handled.
Use Cloud Scheduler to trigger a Cloud Function to check for objects older
than 50 days and delete them. is not right.
This can be done, but it is unnecessary when GCP already provides lifecycle
management for this purpose. You are adding cost and complexity by doing it with
Cloud Functions.
Write a cron script that checks for objects older than 50 days and deletes
them. is not right.
Like above, this can be done, but it is unnecessary when GCP already provides
lifecycle management for this purpose. You are adding cost and complexity by doing it
this way.
Have the mobile application use signed URLs to enable time-limited upload
to Cloud Storage. is the right answer.
When we generate a signed URL, we can specify an expiry of 30 minutes, so users can only
upload for that limited time. Also, only users with the signed URL can upload or download
the objects, so sharing individual signed URLs ensures that suppliers can access only
their own data. Finally, all objects in Google Cloud Storage are encrypted at rest, which
addresses the primary goal of data security.
Ref: https://cloud.google.com/storage/docs/access-control/signed-urls
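A minimal sketch of generating a 30-minute upload URL with gsutil; the service account key
file, bucket, and object names are placeholders:
gsutil signurl -m PUT -d 30m service-account-key.json \
gs://supplier-uploads/supplier-a/images/photo-001.jpg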
Export logs to Cloud Storage and grant the compliance team a custom IAM role
that has logging.privateLogEntries.list permission.
Explanation
Google Cloud provides Cloud Audit Logs, which is an integral part of Cloud Logging. It
consists of two log streams for each project: Admin Activity and Data Access, which are
generated by Google Cloud services to help you answer the question of "who did what,
where, and when?" within your Google Cloud projects.
Ref: https://cloud.google.com/iam/docs/job-
functions/auditing#scenario_external_auditors
To view Admin Activity audit logs, you must have one of the following Cloud IAM roles
in the project that contains your audit logs:
- Project Owner, Project Viewer, or Logs Viewer (roles/logging.viewer), or
- A custom Cloud IAM role with the logging.logEntries.list Cloud IAM permission.
https://cloud.google.com/iam/docs/audit-logging#audit_log_permissions
To view Data Access audit logs, you must have one of the following roles in the project
that contains your audit logs:
- Project Owner, or
- Private Logs Viewer (roles/logging.privateLogViewer), or a custom role with the
logging.privateLogEntries.list permission.
Export logs to Cloud Storage and grant the compliance team a custom IAM role
that has logging.privateLogEntries.list permission. is not right.
logging.privateLogEntries.list provides permissions to view Data Access audit logs but
this does not grant permissions to view Admin activity logs.
Ref: https://cloud.google.com/logging/docs/access-control#console_permissions
Provision another compute engine instance in us-west1-b and balance the traffic
across both zones.
(Correct)
Ensure you have hourly snapshots of the disk in Google Cloud Storage. In the
unlikely event of a zonal outage, use the snapshots to provision a new Compute
Engine Instance in a different zone.
Direct the traffic through a Global HTTP(s) Load Balancer to shield your
application from GCP zone failures.
Replace the single instance with a Managed Instance Group (MIG) and autoscaling
enabled. Configure a health check to detect failures rapidly.
(Incorrect)
Explanation
Ensure you have hourly snapshots of the disk in Google Cloud Storage. In the
unlikely event of a zonal outage, use the snapshots to provision a new
Compute Engine Instance in a different zone. is not right.
This option wouldn't eliminate downtime: restoring from snapshots involves manual
intervention, which adds to the recovery time and the overall cost, so the solution
doesn't adequately handle the failure of a single Compute Engine zone.
Direct the traffic through a Global HTTP(s) Load Balancer to shield your
application from GCP zone failures. is not right.
The VMs are still in a single zone, so this solution doesn't support the failure of a single
Compute Engine zone.
Replace the single instance with a Managed Instance Group (MIG) and
autoscaling enabled. Configure a health check to detect failures rapidly. is
not right.
The VMs are still in a single zone, so this solution doesn't support the failure of a single
Compute Engine zone.
Add a metadata tag on the Google Compute Engine instance to enable snapshot
creation. Add a second metadata tag to specify the snapshot schedule, and a third
metadata tag to specify the retention period.
Navigate to the Compute Engine Disk section of your VM instance in the GCP
console and enable a snapshot schedule for automated creation of daily snapshots.
Set Auto-Delete snapshots after to 50 days.
(Correct)
Use AppEngine Cron service to trigger a custom script that creates snapshots of
the disk daily. Use AppEngine Cron service to trigger another custom script that
iterates over the snapshots and deletes snapshots older than 50 days.
Use Cloud Scheduler to trigger a Cloud Function that creates snapshots of the disk
daily. Use Cloud Scheduler to trigger another Cloud Function that iterates over the
snapshots and deletes snapshots older than 50 days.
(Incorrect)
Explanation
Add a metadata tag on the Google Compute Engine instance to enable snapshot
creation. Add a second metadata tag to specify the snapshot schedule, and a
third metadata tag to specify the retention period. is not right.
Adding these metadata tags on the instance does not affect snapshot
creation/automation.
Use AppEngine Cron service to trigger a custom script that creates snapshots
of the disk daily. Use AppEngine Cron service to trigger another custom
script that iterates over the snapshots and deletes snapshots older than 50
days. is not right.
Bash scripts and crontabs add a lot of operational overhead. You want to fulfil this
requirement with the least management overhead so you should avoid this.
Navigate to the Compute Engine Disk section of your VM instance in the GCP
console and enable a snapshot schedule for automated creation of daily
snapshots. Set Auto-Delete snapshots after to 50 days. is the right answer.
Google recommends snapshot schedules as a best practice for backing up your
Compute Engine workloads.
Ref: https://cloud.google.com/compute/docs/disks/scheduled-snapshots
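The equivalent can also be set up from the command line; a minimal sketch, where the
policy name, region, disk name, start time and zone are assumptions:
gcloud compute resource-policies create snapshot-schedule daily-snapshots \
--region=us-central1 --max-retention-days=50 \
--daily-schedule --start-time=04:00
gcloud compute disks add-resource-policies my-app-disk \
--resource-policies=daily-snapshots --zone=us-central1-a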
Question 24: Incorrect
Your company plans to migrate all applications from the on-premise data centre
to Google Cloud Platform and requires a monthly estimate of the cost of running
these applications in GCP. How can you provide this estimate?
For all GCP services/APIs you are planning to use, use the GCP pricing calculator to
estimate the monthly costs.
(Correct)
Migrate all applications to GCP and run them for a week. Use the costs from the
Billing Report page for this week to extrapolate the monthly cost of running all
applications in GCP.
Migrate all applications to GCP and run them for a week. Use Cloud Monitoring to
identify the costs for this week and use it to derive the monthly cost of running all
applications in GCP.
(Incorrect)
For all GCP services/APIs you are planning to use, capture the pricing from the
products pricing page and use an excel sheet to estimate the monthly costs.
Explanation
Migrate all applications to GCP and run them for a week. Use the costs from
the Billing Report page for this week to extrapolate the monthly cost of
running all applications in GCP. is not right.
By provisioning the solution on GCP, you are going to incur costs. We are required to
estimate the costs, and this can be done by using Google Cloud Pricing Calculator.
Ref: https://cloud.google.com/products/calculator
Migrate all applications to GCP and run them for a week. Use Cloud
Monitoring to identify the costs for this week and use it to derive the
monthly cost of running all applications in GCP. is not right.
By provisioning the solution on GCP, you are going to incur costs. We are required to
estimate the costs, and this can be done by using Google Cloud Pricing Calculator.
Ref: https://cloud.google.com/products/calculator
For all GCP services/APIs you are planning to use, capture the pricing from
the products pricing page and use an excel sheet to estimate the monthly
costs. is not right.
This option would certainly work but is a manual task. Why do this manually when you can
use the Google Cloud Pricing Calculator to achieve the same result?
Ref: https://cloud.google.com/products/calculator
For all GCP services/APIs you are planning to use, use the GCP pricing
calculator to estimate the monthly costs. is the right answer.
You can use the Google Cloud Pricing Calculator to total the estimated monthly costs
for each GCP product. You don't incur any charges for doing so.
Ref: https://cloud.google.com/products/calculator
Configure a health check on the instance to identify the issue and email you the
logs when the application experiences the issue.
(Incorrect)
Install the Cloud Logging Agent on the VM and configure it to send logs to Cloud
Logging. Check logs in Cloud Logging.
(Correct)
Explanation
Check logs in Cloud Logging. is not right.
The application writes logs to disk, but we don't know if these logs are forwarded to
Cloud Logging. Unless you install the Cloud Logging agent (which this option doesn't
mention) and configure it to stream the application logs, the logs never reach Cloud
Logging.
Ref: https://cloud.google.com/logging/docs/agent
Configure a health check on the instance to identify the issue and email you
the logs when the application experiences the issue. is not right.
We don't yet know what the issue is; we want to look at the logs to identify it, so we
can't configure a meaningful health check for a problem that hasn't been identified.
Install the Cloud Logging Agent on the VM and configure it to send logs to
Cloud Logging. Check logs in Cloud Logging. is the right answer.
It is a best practice to run the Logging agent on all your VM instances. In its default
configuration, the Logging agent streams logs from common third-party applications
and system software to Logging; review the list of default logs. You can configure the
agent to stream additional logs; go to Configuring the Logging agent for details on
agent configuration and operation. As logs are now streamed to Cloud Logging, you can
view your logs in Cloud logging and diagnose the problem.
Ref: https://cloud.google.com/logging/docs/agent
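On a Debian or Ubuntu VM, the documented installation flow for the legacy (google-fluentd)
Logging agent looks roughly like the sketch below; treat the exact script name and flags
as subject to the current documentation:
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
sudo service google-fluentd restart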
Add a new VPC and set up VPC sharing between the new and existing VPC.
(Correct)
Explanation
Add a new subnet to the same region. is not right.
When you create a regional (private) GKE cluster, it automatically creates a private
cluster subnet, and you can't change this/add a second subnet.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/private-
clusters#view_subnet
Add a new VPC and set up VPC sharing between the new and existing VPC. is not
right.
You can't split a GKE cluster across two VPCs. You can't use shared VPC either as Google
Kubernetes Engine does not support converting existing clusters to the Shared VPC
model.
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc#overview
(Correct)
(Incorrect)
Explanation
Internal TCP/UDP Load Balancer. is not right.
Internal TCP Load Balancing is a regional load balancer that enables you to run and
scale your services behind an internal load balancing IP address that is accessible only to
your internal virtual machine (VM) instances. Since we need to serve public traffic, this
load balancer is not suitable for us.
Ref: https://cloud.google.com/load-balancing/docs/internal
1. Set the custom IAM role lifecycle stage to BETA while you test the role in the test GCP
project.
2. Restrict the custom IAM role to use permissions with SUPPORTED support level.
1. Set the custom IAM role lifecycle stage to ALPHA while you test the role in the test
GCP project.
2. Restrict the custom IAM role to use permissions with SUPPORTED support level.
(Correct)
1. Set the custom IAM role lifecycle stage to BETA while you test the role in the test GCP
project.
2. Restrict the custom IAM role to use permissions with TESTING support level.
(Incorrect)
1. Set the custom IAM role lifecycle stage to ALPHA while you test the role in the test
GCP project.
2. Restrict the custom IAM role to use permissions with TESTING support level.
Explanation
When setting support levels for permissions in custom roles, you can set each permission to one of SUPPORTED, TESTING or NOT_SUPPORTED. SUPPORTED - the permission is fully supported in custom roles. TESTING - the permission is being tested to check its compatibility with custom roles; you can include the permission in custom roles, but you might see unexpected behaviour, so such permissions are not recommended for production use.
Ref: https://cloud.google.com/iam/docs/custom-roles-permissions-support
Since we want the role to be suitable for production use, we need "SUPPORTED" and not "TESTING".
In terms of role stage, the stage transitions from ALPHA --> BETA --> GA
Ref: https://cloud.google.com/iam/docs/understanding-custom-roles#testing_and_deploying
Since this is the first version of the custom role, we start with "ALPHA".
The only option that combines the "ALPHA" stage with the "SUPPORTED" support level is
1. Set the custom IAM role lifecycle stage to ALPHA while you test the role
in the test GCP project.
2. Restrict the custom IAM role to use permissions with SUPPORTED support
level. is the right answer.
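For illustration, a minimal sketch of how this could be done with gcloud; the role ID, project and permissions below are placeholders, not values from the question:

# Create the first version of the custom role at stage ALPHA in the test project
gcloud iam roles create appViewer --project=test-project \
    --title="App Viewer" \
    --permissions=compute.instances.get,compute.instances.list \
    --stage=ALPHA

# After testing, promote the role for production use
gcloud iam roles update appViewer --project=test-project --stage=GA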
Use gcloud commands first to create a new project, then to enable the Compute
Engine API and finally, to launch a new compute engine instance in the project.
(Correct)
In the GCP Console, enable the Compute Engine API. When creating a new
instance in the console, select the checkbox to create the instance in a new GCP
project and provide the project name and ID.
Run gcloud compute instances create with --project flag to automatically create
the new project and a compute engine instance. When prompted to enable the
Compute Engine API, select Yes.
In the GCP Console, enable the Compute Engine API. Run gcloud compute
instances create with --project flag to automatically create the new project and a
compute engine instance.
Explanation
In the GCP Console, enable the Compute Engine API. Run gcloud compute
instances create with --project flag to automatically create the new project
and a compute engine instance. is not right.
You can't create the instance without first creating the project. The --project flag of the gcloud compute instances create command is used to specify an existing project.
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
--project=PROJECT_ID
The Google Cloud Platform project ID to use for this invocation. If omitted, then the
current project is assumed; the current project can be listed using gcloud config list --
format='text(core.project)' and can be set using gcloud config set project PROJECTID.
Ref: https://cloud.google.com/sdk/gcloud/reference#--project
In the GCP Console, enable the Compute Engine API. When creating a new
instance in the console, select the checkbox to create the instance in a new
GCP project and provide the project name and ID. is not right.
In Cloud Console, when you create a new instance, you don't get an option to select the
project. The compute engine instance is always created in the currently active project.
Ref: https://cloud.google.com/compute/docs/instances/create-start-instance
Use gcloud commands first to create a new project, then to enable the
Compute Engine API and finally, to launch a new compute engine instance in
the project. is the right answer.
This option does it all in the correct order. You first create a project using gcloud
projects create, then enable the compute engine API and finally create the VM instance
in this project by using the --project flag or by creating an instance in this project in
Cloud console.
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
--project=PROJECT_ID
The Google Cloud Platform project ID to use for this invocation. If omitted, then the
current project is assumed; the current project can be listed using gcloud config list --
format='text(core.project)' and can be set using gcloud config set project PROJECTID.
Ref: https://cloud.google.com/sdk/gcloud/reference#--project
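For illustration, the gcloud sequence could look like this (project ID, zone and instance name are placeholders; a billing account must also be linked to the new project before Compute Engine can be used):

# 1. Create the new project
gcloud projects create my-new-project

# 2. Enable the Compute Engine API in that project
gcloud services enable compute.googleapis.com --project=my-new-project

# 3. Launch the instance in the new project
gcloud compute instances create vm-1 --project=my-new-project --zone=us-central1-a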
Question 30: Incorrect
Your production Compute workloads are running in a subnet with a range
192.168.20.128/25. A recent surge in traffic has seen the production VMs struggle,
and you want to add more VMs, but all IP addresses in the subnet are in use. All
new and old VMs need to communicate with each other. How can you do this with
the fewest steps?
(Correct)
Create a new VPC and a new subnet with IP range 192.168.21.0/24. Enable VPC
Peering between the old VPC and new VPC. Configure a custom Route exchange.
Create a new non-overlapping Alias range in the existing VPC and Configure the
VMs to use the alias range.
(Incorrect)
Create a new VPC network and a new subnet with IP range 192.168.21.0/24.
Enable VPC Peering between the old VPC and new VPC.
Explanation
Create a new non-overlapping Alias range in the existing VPC and Configure
the VMs to use the alias range. is not right.
Since there are no more primary IP addresses available in the subnet, it is not possible to provision new VMs. You cannot create a VM with just a secondary (alias) IP. All subnets
have a primary CIDR range, which is the range of internal IP addresses that define the
subnet. Each VM instance gets its primary internal IP address from this range. You can
also allocate alias IP ranges from that primary range, or you can add a secondary range
to the subnet and allocate alias IP ranges from the secondary range.
Ref: https://cloud.google.com/vpc/docs/alias-
ip#subnet_primary_and_secondary_cidr_ranges
Create a new VPC and a new subnet with IP range 192.168.21.0/24. Enable VPC
Peering between the old VPC and new VPC. Configure a custom Route
exchange. is not right.
Subnet routes that don't use privately reused public IP addresses are always exchanged
between peered networks. You can also exchange custom routes, which include static
and dynamic routes, and routes for subnets that use privately reused public IP addresses
if network administrators in both networks have the appropriate peering configurations.
But in this case, there is no requirement to exchange custom routes.
Ref: https://cloud.google.com/vpc/docs/vpc-peering?&_ga=2.257174475.-
1345429276.1592757751#importing-exporting-routes
Create a new VPC network and a new subnet with IP range 192.168.21.0/24.
Enable VPC Peering between the old VPC and new VPC. is not right.
This approach works but is more complicated than expanding the subnet range.
Ref: https://cloud.google.com/vpc/docs/vpc-peering
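For reference, the subnet expansion that the last explanation alludes to can be done in place with a single command (subnet name and region are placeholders); expanding 192.168.20.128/25 to a /24 keeps all existing addresses valid:

# Expand the existing subnet's primary range from /25 to /24; existing VMs keep
# their addresses and new VMs land in the same, now larger, subnet
gcloud compute networks subnets expand-ip-range prod-subnet \
    --region=us-central1 --prefix-length=24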
Configure alerts in Cloud Monitoring to trigger a Cloud Function via webhook, and
have the Cloud Function scale up or scale down the spanner instance as necessary.
(Correct)
Configure a Cloud Scheduler job to invoke a Cloud Function that reviews the
relevant Cloud Monitoring metrics and resizes the Spanner instance as necessary.
Configure alerts in Cloud Monitoring to alert your operations team and have them
manually scale up or scale down the spanner instance as necessary.
Configure alerts in Cloud Monitoring to alert Google Operations Support team and
have them use their scripts to scale up or scale down the spanner instance as
necessary.
Explanation
Configure a Cloud Scheduler job to invoke a Cloud Function that reviews the
relevant Cloud Monitoring metrics and resizes the Spanner instance as
necessary. is not right.
While this works and does it automatically, it does not follow Google's recommended practice, which is to let Cloud Monitoring alerting trigger the Cloud Function when CPU or storage utilization crosses a threshold, rather than polling the metrics on a fixed schedule.
Ref: https://cloud.google.com/spanner/docs/instances
"Note: You can scale the number of nodes in your instance based on the Cloud
Monitoring metrics on CPU or storage utilization in conjunction with Cloud Functions."
Configure alerts in Cloud Monitoring to alert your operations team and have
them manually scale up or scale down the spanner instance as necessary. is
not right.
This option does not follow Google's recommended practices.
Ref: https://cloud.google.com/spanner/docs/instances
"Note: You can scale the number of nodes in your instance based on the Cloud
Monitoring metrics on CPU or storage utilization in conjunction with Cloud Functions."
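For reference, the resize such a Cloud Function would perform boils down to a single Spanner Admin API call; the gcloud equivalent (instance name and node count are placeholders) is:

# Scale the instance to 5 nodes; a Cloud Function triggered by the alert would
# make the same change through the Spanner Admin API client library
gcloud spanner instances update my-spanner-instance --nodes=5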
Ask an engineer with Project Owner IAM role to identify all resources in the
project and delete them.
Ask an engineer with Project Owner IAM role to locate the project and shut down.
(Correct)
Ask an engineer with Organization Administrator IAM role to locate the project
and shut down.
Ask an engineer with Organization Administrator IAM role to identify all resources
in the project and delete them.
Explanation
Ask an engineer with Organization Administrator IAM role to locate the
project and shut down. is not right.
The Organization Administrator role provides permissions to get and list projects but not to shut down projects.
Ref: https://cloud.google.com/iam/docs/understanding-roles#resource-manager-roles
Ask an engineer with Project Owner IAM role to identify all resources in the
project and delete them. is not right.
The primitive Project Owner role provides permissions to delete a project.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
But locating all the resources and deleting them individually is a manual, time-consuming and error-prone task. Our goal is to accomplish the same with the fewest possible steps.
Ask an engineer with Project Owner IAM role to locate the project and shut
down. is the right answer.
The primitive Project Owner role provides permissions to delete a project.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
You can shut down projects using the Cloud Console. When you shut down a project, the following happens immediately: all billing and traffic serving stops, you lose access to the project, and the owners of the project are notified and can stop the deletion within 30 days. The project is scheduled to be deleted after 30 days, although some resources may be deleted much earlier.
Ref: https://cloud.google.com/resource-manager/docs/creating-managing-
projects#shutting_down_projects
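For reference, the gcloud equivalent of shutting the project down from the console (project ID is a placeholder):

# Shut down (schedule deletion of) the project; owners can restore it within 30 days
gcloud projects delete unused-project-id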
(Correct)
Check the permissions assigned in all Identity Aware Proxy (IAP) tunnels.
Turn on IAM Audit logging and build a Cloud Monitoring dashboard to display
this information.
Explanation
Extract all project-wide SSH keys. is not right.
Project-wide SSH keys provide the ability to connect to most instances in your project. They can't be used to identify who has been granted the Project Editor role.
Ref: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-
keys#edit-ssh-metadata
Check the permissions assigned in all Identity Aware Proxy (IAP) tunnels. is
not right.
Identity Aware Proxy is for controlling access to your cloud-based and on-premises
applications and VMs running on Google Cloud. It can't be used to gather who has been
granted the project editor role.
Ref: https://cloud.google.com/iap
Turn on IAM Audit logging and build a Cloud Monitoring dashboard to display
this information. is not right.
Once enabled, sign-ins by users holding the Project Editor role are recorded in the audit logs and you can query this information, but these logs don't give you a complete list of all users who currently hold the Project Editor role, because users who never sign in leave no log entries.
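For contrast, the current IAM bindings can be read directly from the project's IAM policy, which does give the complete list of members holding the Editor role (project ID is a placeholder):

# List every member currently granted the Editor role on the project
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/editor" \
    --format="value(bindings.members)"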
(Correct)
Download the installation guide for Cassandra on GCP and follow the instructions
to install the database.
Explanation
Download the installation guide for Cassandra on GCP and follow the
instructions to install the database. is not right.
There is a simple and straightforward way to deploy Cassandra as a Service, called Astra,
on the Google Cloud Marketplace. You don't need to follow the installation guide to
install it.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-
managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-
images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-
b554-fe32e87f04be&pli=1
Install an instance of Cassandra DB on Google Cloud Compute Engine, take a
snapshot of this instance and use the snapshot to spin up additional
instances of Cassandra DB. is not right.
Like above, there is a simple and straightforward way to deploy Cassandra as a Service,
called Astra, on the Google Cloud Marketplace. You don't need to do this in a
convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-
managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-
images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-
b554-fe32e87f04be&pli=1
Grant the Project Editor role on the production GCP project to all members of the
operations team.
Create a custom role with the necessary permissions and grant the role on the
production GCP project to all members of the operations team.
(Correct)
Create a custom role with the necessary permissions and grant the role at the
organization level to all members of the operations team.
Grant the Project Editor role at the organization level to all members of the
operations team.
(Incorrect)
Explanation
Grant the Project Editor role on the production GCP project to all members
of the operations team. is not right.
You want to prevent Google Cloud product changes from broadening their permissions
in the future. So you shouldn't use predefined roles, e.g. Project Editor. Predefined roles
are created and maintained by Google. Their permissions are automatically updated as
necessary, such as when new features or services are added to Google Cloud.
Ref: https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
Grant the Project Editor role at the organization level to all members of
the operations team. is not right.
You want to prevent Google Cloud product changes from broadening their permissions
in the future. So you shouldn't use predefined roles, e.g. Project Editor. Predefined roles
are created and maintained by Google. Their permissions are automatically updated as
necessary, such as when new features or services are added to Google Cloud.
Ref: https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
Create a custom role with the necessary permissions and grant the role at
the organization level to all members of the operations team. is not right.
Unlike predefined roles, the permissions in custom roles are not automatically updated
when Google adds new features or services. So the custom role is the right choice.
Ref: https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
However, granting the custom role at the organization level grants the operations team access to not just the production project but also the testing and development projects, which goes against the principle of least privilege and should be avoided.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Create a custom role with the necessary permissions and grant the role on
the production GCP project to all members of the operations team. is the right
answer.
Unlike predefined roles, the permissions in custom roles are not automatically updated
when Google adds new features or services. So the custom role is the right choice.
Ref: https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
Granting the custom role at the production project level grants the operations team access to just the production project and not the testing and development projects, which aligns with the principle of least privilege and should be preferred.
Ref: https://cloud.google.com/iam/docs/understanding-roles
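For illustration, a sketch of creating such a custom role and granting it on the production project only; the role ID, permissions, project and group below are placeholders:

# Create the custom role in the production project
gcloud iam roles create opsMonitoring --project=prod-project \
    --title="Ops Monitoring" \
    --permissions=monitoring.timeSeries.list,logging.logEntries.list \
    --stage=GA

# Grant it on the production project only, not at the organization level
gcloud projects add-iam-policy-binding prod-project \
    --member="group:ops-team@example.com" \
    --role="projects/prod-project/roles/opsMonitoring"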
Store the data in Cloud Storage. Have a file per IoT device and append new data to
the file.
Store the data in Cloud Filestore. Have a file per IoT device and append new data
to the file.
Store the data in Cloud BigTable. Have a row key based on the ingestion
timestamp.
(Correct)
Store the data in Cloud Datastore. Have an entity group per device.
Explanation
Store the data in Cloud Storage. Have a file per IoT device and append new
data to the file. is not right.
Terrible idea!! Cloud Storage objects are immutable, which means that an uploaded object cannot change throughout its storage lifetime. In practice, this means that you cannot make incremental changes to objects, such as append operations. However, it is possible to overwrite objects that are stored in Cloud Storage, and doing so happens atomically: until the new upload completes, the old version of the object is served to readers, and after the upload completes the new version is served. So for each update, the clients (construction equipment) would have to read the full object, append a single row and upload the full object again. With the high frequency of IoT data here, different clients may read different data while the updates happen, and this can mess things up big time.
Ref: https://cloud.google.com/storage/docs/key-terms#immutability
Store the data in Cloud Filestore. Have a file per IoT device and append new
data to the file. is not right.
Cloud Filestore is an NFS file share, so appending to a file is technically possible, but a client has to lock the file before updating it, which prevents other clients from modifying the file at the same time. With the high frequency of IoT data here, and many devices writing concurrently, this design is impractical.
Ref: https://cloud.google.com/filestore/docs/limits#file_locks
Store the data in Cloud Datastore. Have an entity group per device. is not
right.
Cloud Datastore isn't suitable for ingesting IoT data. It is more suitable for Gaming
leaderboard/player profile data, or where direct client access and real-time sync to
clients is required.
Ref: https://cloud.google.com/products/databases
Also, storing data in an entity group per device means that a time-based query has to iterate through all entities and inspect the timestamp value to arrive at the result, which isn't the best design.
Store the data in Cloud BigTable. Have a row key based on the ingestion
timestamp. is the right answer.
Cloud Bigtable provides a scalable NoSQL database service with consistent low latency
and high throughput, making it an ideal choice for storing and processing time-series
vehicle data.
Ref: https://cloud.google.com/solutions/designing-connected-vehicle-
platform#data_ingestion
By creating a row key based on the event timestamp, you can easily fetch data based on the time of the event, which is our requirement.
Ref: https://cloud.google.com/bigtable/docs/schema-design-time-series
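For illustration, a sketch using the cbt CLI; the instance, table, column family and row-key format are placeholders, and the schema-design guide referenced above typically combines a device identifier with the timestamp in the row key:

# Assumes the project/instance are set via -project/-instance flags or ~/.cbtrc
cbt -instance=iot-instance createtable vehicle_telemetry
cbt -instance=iot-instance createfamily vehicle_telemetry metrics

# Write a reading; the row key combines device ID and event timestamp so rows
# for a device can be range-scanned by time
cbt -instance=iot-instance set vehicle_telemetry "excavator-042#20240601T120000Z" \
    metrics:engine_temp=92.5

# Fetch all readings for a device within a time window using a row-key prefix
cbt -instance=iot-instance read vehicle_telemetry prefix="excavator-042#20240601"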
Create a snapshot of the compute engine instance disk, create a custom image
from the snapshot, create instances from this image to handle the burst in traffic.
(Correct)
Create a snapshot of the compute engine instance disk and create instances from
this snapshot to handle the burst in traffic.
Create a snapshot of the compute engine instance disk, create custom images
from the snapshot to handle the burst in traffic.
Create a snapshot of the compute engine instance disk and create images from
this snapshot to handle the burst in traffic.
Explanation
Create a snapshot of the compute engine instance disk and create images from
this snapshot to handle the burst in traffic. is not right.
You can't serve additional traffic with images alone; you need to spin up new Compute Engine VM instances from them.
Ref: https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots
Create a snapshot of the compute engine instance disk, create custom images
from the snapshot to handle the burst in traffic. is not right.
You can't serve additional traffic with images alone; you need to spin up new Compute Engine VM instances from them.
Ref: https://cloud.google.com/compute/docs/images
Create a snapshot of the compute engine instance disk and create instances
from this snapshot to handle the burst in traffic. is not right.
It is possible to create a new instance with a boot disk restored from the snapshot.
Ref: https://cloud.google.com/compute/docs/instances/create-start-instance#restore_boot_snapshot
However, Google recommends that if you plan to create many instances from the same boot disk snapshot, you create a custom image and create the instances from that image instead. Custom images can create the boot disks for your instances more quickly and efficiently than snapshots, which matters when you need to add many instances quickly to handle a burst in traffic.
Ref: https://cloud.google.com/compute/docs/instances/create-start-instance#creating_a_vm_from_a_custom_image
Create a snapshot of the compute engine instance disk, create a custom image
from the snapshot, create instances from this image to handle the burst in
traffic. is the right answer.
To create an instance with a custom image, you must first have a custom image. You can
create custom images from source disks, images, snapshots, or images stored in Cloud
Storage. You can then use the custom image to create one or more instances as needed.
Ref: https://cloud.google.com/compute/docs/instances/create-start-
instance#creating_a_vm_from_a_custom_image
Ref: https://cloud.google.com/compute/docs/images
These additional instances can be used to handle the burst in traffic.
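For illustration, the gcloud steps could look like this (disk, image, instance names and zone are placeholders):

# 1. Snapshot the running instance's boot disk
gcloud compute disks snapshot web-disk --zone=us-central1-a \
    --snapshot-names=web-disk-snap

# 2. Build a reusable custom image from the snapshot
gcloud compute images create web-image --source-snapshot=web-disk-snap

# 3. Create additional instances from the image to absorb the burst in traffic
gcloud compute instances create web-2 web-3 --image=web-image --zone=us-central1-a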
(Correct)
Explanation
Deploy Jenkins on a Google Compute Engine VM. is not right.
While this can be done, this involves a lot more work than installing the Jenkins server
through GCP Marketplace.
Deploy Jenkins on a GKE Cluster. is not right.
While this can be done, this involves a lot more work than installing the Jenkins server
through GCP Marketplace.
Set up a cron job with a custom script that uses gcloud APIs to create a new disk
from existing instance disk for all instances daily.
(Incorrect)
(Correct)
Deploy a Cloud Function to initiate the creation of instance templates for all
instances daily.
Configure a Cloud Task to initiate the creation of images for all instances daily and
upload them to Cloud Storage.
Explanation
Deploy a Cloud Function to initiate the creation of instance templates for
all instances daily. is not right.
This option does not fulfil our requirement of allowing quick restore and automatically
deleting old backups.
Set up a cron job with a custom script that uses gcloud APIs to create a new
disk from existing instance disk for all instances daily. is not right.
This option does not fulfil our requirement of allowing quick restore and automatically
deleting old backups.
Configure a Cloud Task to initiate the creation of images for all instances
daily and upload them to Cloud Storage. is not right.
This option does not fulfil our requirement of allowing quick restore and automatically
deleting old backups.
Export the logs from Apache server to Cloud Logging and deploy a Cloud Function
to parse the logs, extract and sum up the size of response payload for all requests
during the current month; and send an email notification when spending exceeds
$250.
Configure a budget with the scope set to the project, the amount set to $250,
threshold rule set to 100% of actual cost & trigger email notifications when
spending exceeds the threshold.
Configure a budget with the scope set to the billing account, the amount set to
$250, threshold rule set to 100% of actual cost & trigger email notifications when
spending exceeds the threshold.
(Incorrect)
Export the project billing data to a BigQuery dataset and deploy a Cloud Function
to extract and sum up the network egress costs from the BigQuery dataset for the
Apache server for the current month, and send an email notification when
spending exceeds $250.
(Correct)
Explanation
Configure a budget with the scope set to the project, the amount set to
$250, threshold rule set to 100% of actual cost & trigger email
notifications when spending exceeds the threshold. is not right.
This budget alert is defined for the project, which means it includes all costs and not just
the egress network costs - which goes against our requirements. It also contains costs
across all applications and not just the Compute Engine instance containing the Apache
webserver. While it is possible to set the budget scope to include the Product (i.e.
Google Compute Engine) and a label that uniquely identifies the specific compute
engine instance, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Configure a budget with the scope set to the billing account, the amount set
to $250, threshold rule set to 100% of actual cost & trigger email
notifications when spending exceeds the threshold. is not right.
Like above, but worse as this budget alert includes costs from all projects linked to the
billing account. And like above, while it is possible to scope an alert down to
Project/Product/Labels, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Export the logs from Apache server to Cloud Logging and deploy a Cloud
Function to parse the logs, extract and sum up the size of response payload
for all requests during the current month; and send an email notification
when spending exceeds $250. is not right.
You can't arrive at the exact egress costs with this approach. You can configure apache
logs to include the response object size.
Ref: https://httpd.apache.org/docs/1.3/logs.html#common
And you can then do what this option says to arrive at the combined size of all the
responses, but this is not 100% accurate as it does not include header sizes. Even if we
assume the header size is insignificant compared to the large files published on the Apache web server, the question asks us to do this the Google way "as measured by
Google Cloud Platform (GCP)". GCP does not look at the response sizes in the Apache
log files to determine the egress costs. The GCP egress calculator takes into
consideration the source and destination (source = the region that hosts the Compute
Engine instance running Apache Web Server; and the destination is the destination
region of the packet). The egress cost is different for different destinations, as shown in
this pricing reference.
Ref: https://cloud.google.com/vpc/network-pricing#internet_egress
The Apache logs do not give you the destination information, and without this
information, you can't accurately calculate the egress costs.
Export the project billing data to a BigQuery dataset and deploy a Cloud
Function to extract and sum up the network egress costs from the BigQuery
dataset for the Apache server for the current month, and send an email
notification when spending exceeds $250. is the right answer.
This option is the only one that satisfies our requirements. We do it the Google way by (re)using the billing data that GCP itself produces, and we scope the costs down to just the egress network costs for the Apache web server. The Cloud Function then sends an email notification when the spend exceeds $250.
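For illustration, the kind of query such a Cloud Function could run against the standard billing export; the dataset/table name, project ID and SKU filter below are placeholders and would need adjusting to the actual export and egress SKUs:

# Sum this month's network egress cost for the Apache server's project
bq query --use_legacy_sql=false '
SELECT SUM(cost) AS egress_cost_usd
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE project.id = "apache-project"
  AND LOWER(sku.description) LIKE "%egress%"
  AND usage_start_time >= TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MONTH)'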
Grant the necessary IAM roles to a service account and configure the application
running on Google Compute Engine instance to use this service account.
Grant the necessary IAM roles to a service account, download the JSON key file
and package it with your application.
Grant the necessary IAM roles to the service account used by Google Compute
Engine instance.
(Correct)
Grant the necessary IAM roles to a service account, store its credentials in a config
file and package it with your application.
Explanation
Grant the necessary IAM roles to a service account, download the JSON key
file and package it with your application. is not right.
To use a service account outside of Google Cloud, such as on other platforms or on-
premises, you must first establish the identity of the service account. Public/private key
pairs provide a secure way of accomplishing this goal. Since our application is running
inside Google Cloud, Google's recommendation is to assign the required permissions to
the service account and not use the service account keys.
Ref: https://cloud.google.com/iam/docs/creating-managing-service-account-keys
Grant the necessary IAM roles to a service account, store its credentials in
a config file and package it with your application. is not right.
For application to application interaction, Google recommends the use of service
accounts. A service account is an account for an application instead of an individual
end-user. When you run code that's hosted on Google Cloud, the code runs as the
account you specify. You can create as many service accounts as needed to represent
the different logical components of your application.
Ref: https://cloud.google.com/iam/docs/overview#service_account
Grant the necessary IAM roles to a service account and configure the
application running on Google Compute Engine instance to use this service
account. is not right.
Using Application Default Credentials ensures that the service account works seamlessly.
When testing on your local machine, it uses a locally-stored service account key, but
when running on Compute Engine, it uses the project's default Compute Engine service
account. So we have to provide access to the service account used by the compute
engine VM and not the service account used by the application.
Ref: https://cloud.google.com/iam/docs/service-
accounts#application_default_credentials
Grant the necessary IAM roles to the service account used by Google Compute
Engine instance. is the right answer.
Using Application Default Credentials ensures that the service account works seamlessly.
When testing on your local machine, it uses a locally-stored service account key, but
when running on Compute Engine, it uses the project's default Compute Engine service
account. So we have to provide access to the service account used by the compute
engine VM.
Ref: https://cloud.google.com/iam/docs/service-
accounts#application_default_credentials
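For illustration, granting a role to the service account the VM runs as could look like this; the project, project number and role are placeholders (the default Compute Engine service account has the form PROJECT_NUMBER-compute@developer.gserviceaccount.com):

# Grant the required role to the VM's service account; the application then
# picks up credentials automatically via Application Default Credentials
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:123456789012-compute@developer.gserviceaccount.com" \
    --role="roles/storage.objectViewer"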
Use the Data Loss Prevention API to record this information.
Turn on data access audit logging in Cloud Storage to record this information.
(Correct)
Enable the default Cloud Storage Service account exclusive access to read all
operations and record them.
Explanation
Use the Identity Aware Proxy API to record this information. is not right.
Identity Aware Proxy is for controlling access to your cloud-based and on-premises
applications and VMs running on Google Cloud. It can't be used to record/monitor data
access in Cloud Storage bucket.
Ref: https://cloud.google.com/iap
Use the Data Loss Prevention API to record this information. is not right.
Cloud Data Loss Prevention is a fully managed service designed to help you discover,
classify, and protect your most sensitive data. It can't be used to record/monitor data
access in Cloud Storage bucket.
Ref: https://cloud.google.com/dlp
Enable the default Cloud Storage Service account exclusive access to read
all operations and record them. is not right.
You need a record of data access, and service account permissions have no bearing on that; there is no Cloud Storage feature that records reads by giving a service account "exclusive access" to read operations.
Ref: https://cloud.google.com/storage/docs/access-logs
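For reference, Data Access audit logs for Cloud Storage are enabled through the project's IAM policy auditConfigs (or via the console); a sketch of the gcloud flow, with the project ID as a placeholder:

# Export the current IAM policy
gcloud projects get-iam-policy my-project --format=yaml > policy.yaml

# Merge a stanza like this into policy.yaml to log reads and writes:
#   auditConfigs:
#   - service: storage.googleapis.com
#     auditLogConfigs:
#     - logType: ADMIN_READ
#     - logType: DATA_READ
#     - logType: DATA_WRITE

# Write the updated policy back
gcloud projects set-iam-policy my-project policy.yaml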
Deploy the batch job in a GKE Cluster with preemptible VM node pool.
(Correct)
Deploy the batch job on a Google Cloud Compute Engine non-preemptible VM.
Restart instances as required.
Deploy the batch job on a fleet of Google Cloud Compute Engine preemptible VM
in a Managed Instances Group (MIG) with autoscaling.
(Incorrect)
Deploy the batch job on a Google Cloud Compute Engine Preemptible VM.
Explanation
Deploy the batch job on a Google Cloud Compute Engine Preemptible VM. is not
right.
A preemptible VM is an instance that you can create and run at a much lower price than
regular instances. However, Compute Engine might terminate (preempt) these instances
if it requires access to those resources for other tasks. Preemptible instances are excess
Compute Engine capacity, so their availability varies with usage. Since our batch process
must be restarted if interrupted, a preemptible VM by itself is not sufficient.
https://cloud.google.com/compute/docs/instances/preemptible#what_is_a_preemptible
_instance
Deploy the batch job on a Google Cloud Compute Engine non-preemptible VM.
Restart instances as required. is not right.
Restarting instances as needed is a manual activity that adds operational overhead, and non-preemptible VMs cost more than preemptible ones. Since we need to minimize cost, we shouldn't do this.
Deploy the batch job on a fleet of Google Cloud Compute Engine preemptible
VM in a Managed Instances Group (MIG) with autoscaling. is not right.
The requirement is to run the batch job at minimal cost and restart it if interrupted, not to scale instances in and out based on a target CPU utilization, so autoscaling a MIG does not address the requirement.
Deploy the batch job in a GKE Cluster with preemptible VM node pool. is the
right answer.
Preemptible VMs are Compute Engine VM instances that last a maximum of 24 hours
and provide no availability guarantees. Preemptible VMs are priced lower than standard
Compute Engine VMs and offer the same machine types and options. You can use
preemptible VMs in your GKE clusters or node pools to run batch or fault-tolerant jobs
that are less sensitive to the ephemeral, non-guaranteed nature of preemptible VMs.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms
GKE's cluster autoscaler always tries to scale up the node pool with cheaper VMs first; in this case, it scales up the preemptible node pool, and it scales up the default node pool only if no preemptible VMs are available.
Ref: https://cloud.google.com/blog/products/containers-kubernetes/cutting-costs-with-
google-kubernetes-engine-using-the-cluster-autoscaler-and-preemptible-vms
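For illustration, adding a preemptible node pool to an existing cluster could look like this (cluster, pool names and zone are placeholders):

# Create an autoscaling preemptible node pool for the batch workload
gcloud container node-pools create batch-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --preemptible \
    --enable-autoscaling --min-nodes=0 --max-nodes=10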
Trigger a Cloud Function whenever files in Cloud Storage are created or updated.
(Correct)
Trigger a Cloud Dataflow job whenever files in Cloud Storage are created or
updated.
Deploy the API on a GKE cluster and use Cloud Scheduler to trigger the API to look for files in Cloud Storage that were created or updated since the last run.
Explanation
Configure Cloud Pub/Sub to capture details of files created/modified in
Cloud Storage. Deploy the API in App Engine Standard and use Cloud Scheduler
to trigger the API to fetch information from Cloud Pub/Sub. is not right.
Cloud Scheduler is a fully managed cron job service that runs jobs on a recurring schedule. Because it is schedule-driven rather than event-driven, you can't use it to execute a code snippet the moment a new file is uploaded to a Cloud Storage bucket.
Ref: https://cloud.google.com/scheduler
Deploy the API on a GKE cluster and use Cloud Scheduler to trigger the API to look for files in Cloud Storage that were created or updated since the last run. is not right.
You can use CronJobs to run tasks at a specific time or interval. Since it doesn't work
real-time, you can't execute a code snippet whenever a new file is uploaded to a Cloud
Storage bucket.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs
Trigger a Cloud Dataflow job whenever files in Cloud Storage are created or
updated. is not right.
Dataflow is a unified stream and batch data processing service that's serverless, fast, and cost-effective. However, there is no native way to trigger a Dataflow job directly from a Cloud Storage object-change event; you would still need something like a Cloud Function to launch it, so this option by itself can't execute a code snippet whenever a new file is uploaded to a Cloud Storage bucket.
Ref: https://cloud.google.com/dataflow
Trigger a Cloud Function whenever files in Cloud Storage are created or
updated. is the right answer.
Cloud Functions can respond to change notifications emerging from Google Cloud
Storage. These notifications can be configured to trigger in response to various events
inside a bucket: object creation, deletion, archiving and metadata updates.
Ref: https://cloud.google.com/functions/docs/calling/storage
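For illustration, deploying such a function (1st gen) with a Cloud Storage trigger could look like this; the function name, runtime and bucket are placeholders:

# Run the function whenever an object is created or overwritten in the bucket
gcloud functions deploy process_new_file \
    --runtime=python39 \
    --trigger-resource=my-upload-bucket \
    --trigger-event=google.storage.object.finalize \
    --entry-point=process_new_file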
Export Billing data from each development GCP project to a separate BigQuery dataset. On each dataset, use a Data Studio dashboard to plot the spending.
Set up a single budget for all development GCP projects. Trigger an email
notification when the spending exceeds $750 in the budget.
(Incorrect)
Set up a budget for each development GCP project. For each budget, trigger an email notification when the spending exceeds $750.
(Correct)
Export Billing data from all development GCP projects to a single BigQuery
dataset. Use a Data Studio dashboard to plot the spend.
Explanation
Set up a single budget for all development GCP projects. Trigger an email
notification when the spending exceeds $750 in the budget. is not right.
A budget enables you to track your actual Google Cloud spend against your planned
spend. After you've set a budget amount, you set budget alert threshold rules that are
used to trigger email notifications. Budget alert emails help you stay informed about
how your spend is tracking against your budget. But since a single budget is created for
all projects, it is not possible to identify if a developer has spent more than $750 per
month on their development project.
Ref: https://cloud.google.com/billing/docs/how-to/budgets
Export Billing data from all development GCP projects to a single BigQuery
dataset. Use a Data Studio dashboard to plot the spend. is not right.
This option does not alert the finance team if any of the developers have spent above
$750.
Set up a budget for each development GCP project. For each budget, trigger an email notification when the spending exceeds $750. is the right answer.
A budget enables you to track your actual Google Cloud spend against your planned
spend. After you've set a budget amount, you set budget alert threshold rules that are
used to trigger email notifications. Budget alert emails help you stay informed about
how your spend is tracking against your budget. As the budget is created per project,
the alert triggers whenever spent in the project is more than $750 per month.
Ref: https://cloud.google.com/billing/docs/how-to/budgets
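For illustration, one budget per development project could be created like this (billing account and project IDs are placeholders; older SDK versions may need the beta command group):

# Budget of $750/month scoped to a single development project
gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="dev-project-1 monthly budget" \
    --budget-amount=750USD \
    --threshold-rule=percent=1.0 \
    --filter-projects=projects/dev-project-1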
Grant the compute engine service account roles/bigquery.dataViewer role on the
data-warehousing GCP project.
(Correct)
(Incorrect)
Explanation
Grant the BigQuery service account roles/owner on app-tier GCP project. is
not right.
The requirement is to identify the access needed for the service account in the app-tier
project, not the service account in the data-warehousing project.