ACE Exam 8

The document outlines a series of questions and answers related to Google Cloud Platform (GCP) services, focusing on configurations for Compute Engine instances, Deployment Manager, App Engine, databases, IAM roles, and Kubernetes. Each question includes a correct answer with explanations based on GCP documentation, emphasizing best practices and principles such as least privilege and minimizing risk during deployments. The content serves as a guide for users to understand how to effectively utilize GCP features and manage resources.


Question 1

Your company hosts multiple applications on Compute Engine instances. They want the
instances to be resilient to any instance crashes or system termination. How would you
configure the instances?
A. Set automaticRestart availability policy to true
B. Set automaticRestart availability policy to false
C. Set onHostMaintenance availability policy to migrate instances
D. Set onHostMaintenance availability policy to terminate instances

Correct Answer
A. Set automaticRestart availability policy to true

Explanation
Correct answer is A as the automaticRestart availability policy determines how the instance reacts to crashes and system terminations, and it should be set to true so the instance is restarted automatically.
Refer GCP documentation - Instance Scheduling Options
A VM instance's availability policy determines how it behaves when an event occurs that
requires Google to move your VM to a different host machine. For example, you can choose to
keep your VM instances running while Compute Engine live migrates them to another host or
you can choose to terminate your instances instead. You can update an instance's availability
policy at any time to control how you want your VM instances to behave.
You can change an instance's availability policy by configuring the following two settings:
● The VM instance's maintenance behavior, which determines whether the instance is live
migrated or terminated when there is a maintenance event.
● The instance's restart behavior, which determines whether the instance automatically
restarts if it crashes or gets terminated.
The default maintenance behavior for instances is to live migrate, but you can change the
behavior to terminate your instance during maintenance events instead.
Configure an instance's maintenance behavior and automatic restart setting using the
onHostMaintenance and automaticRestart properties. All instances are configured with default
values unless you explicitly specify otherwise.
● onHostMaintenance: Determines the behavior when a maintenance event occurs that
might cause your instance to reboot.
○ [Default] migrate, which causes Compute Engine to live migrate an instance
when there is a maintenance event.
○ terminate, which terminates an instance instead of migrating it.
● automaticRestart: Determines the behavior when an instance crashes or is terminated
by the system.
○ [Default] true, so Compute Engine restarts an instance if the instance crashes or
is terminated.
○ false, so Compute Engine does not restart an instance if the instance crashes or
is terminated.
Option B is wrong as automaticRestart availability policy should be set to true.
Options C & D are wrong as the onHostMaintenance does not apply to crashes or system
termination.
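As an illustration, the availability policy of an existing instance can be updated with gcloud; the instance name and zone below are hypothetical:

# Keep the default live-migration behavior and enable automatic restart
gcloud compute instances set-scheduling my-instance --zone us-central1-a --maintenance-policy MIGRATE --restart-on-failure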

Question 2
A company wants to deploy their application using Deployment Manager. However, they want to understand how the changes will affect the deployment before implementing the update. How can the company achieve this?
A. Use Deployment Manager Validate Deployment feature
B. Use Deployment Manager Dry Run feature
C. Use Deployment Manager Preview feature
D. Use Deployment Manager Snapshot feature

Correct Answer
C. Use Deployment Manager Preview feature

Explanation
Correct answer is C as Deployment Manager provides the preview feature to check on what
resources would be created.
Refer GCP documentation - Deployment Manager Preview
After you have written a configuration file, you can preview the configuration before you create a
deployment. Previewing a configuration lets you see the resources that Deployment Manager would create, but it does not actually instantiate any resources. The Deployment Manager service previews the configuration by:
1. Expanding the full configuration, including any templates.
2. Creating a deployment and "shell" resources.
You can preview your configuration by using the preview query parameter when making an
insert() request.
gcloud deployment-manager deployments create example-deployment --config configuration-file.yaml --preview
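If the previewed resources look correct, the preview can then be committed (or discarded); a short sketch using the same deployment name:

# Execute the previewed changes
gcloud deployment-manager deployments update example-deployment
# Or discard the preview without creating any resources
gcloud deployment-manager deployments cancel-preview example-deployment
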
Question 3
You need to deploy an update to an application in Google App Engine. The update is risky, but
it can only be tested in a live environment. What is the best way to introduce the update to
minimize risk?
A. Deploy the application temporarily and be prepared to pull it back if needed.
B. Warn users that a new app version may have issues and provide a way to contact you if
there are problems.
C. Deploy a new version of the application but use traffic splitting to only direct a small
number of users to the new version.
D. Create a new project with the new app version, and then redirect users to the new
version.

Correct Answer
C. Deploy a new version of the application but use traffic splitting to only direct a small number
of users to the new version.

Explanation
Correct answer is C as deploying a new version without assigning it as the default version will
not create downtime for the application. Using traffic splitting allows for easily redirecting a small
amount of traffic to the new version and can also be quickly reverted without application
downtime.
Refer GCP documentation - App Engine Splitting Traffic
Traffic migration smoothly switches request routing, gradually moving traffic from the versions
currently receiving traffic to one or more versions that you specify.
Traffic splitting distributes a percentage of traffic to versions of your application. You can split
traffic to move 100% of traffic to a single version or to route percentages of traffic to multiple
versions. Splitting traffic to two or more versions allows you to conduct A/B testing between your
versions and provides control over the pace when rolling out features.
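For example, the new version can be deployed without promoting it and then given a small share of traffic with gcloud; the version IDs below are hypothetical:

# Deploy the new version but keep all traffic on the current one
gcloud app deploy --version v2 --no-promote
# Send 5% of traffic to v2 and keep 95% on v1
gcloud app services set-traffic default --splits v1=0.95,v2=0.05
# Roll back by returning all traffic to v1
gcloud app services set-traffic default --splits v1=1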
Option A is wrong as deploying the application version as the default requires moving all traffic to the new version. This could impact all users and disable the service.
Option B is wrong as this is not a recommended practice and it impacts user experience.
Option D is wrong as creating a new project adds unnecessary overhead and requires manually redirecting users to the new version; it does not allow a gradual rollout or a quick rollback the way traffic splitting does.
Question 4
Your company wants to track whether someone is present in a meeting room reserved for a
scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room
is equipped with a motion sensor that reports its status every second. The data from the motion
detector includes only a sensor ID and several different discrete items of information. Analysts
will use this data, together with information about account owners and office locations. Which
database type should you use?
A. Flat file
B. NoSQL
C. Relational
D. Blobstore

Correct Answer
B. NoSQL

Explanation
Correct answer is B as a NoSQL database such as Bigtable or Datastore is an ideal solution to store a sensor ID and several different discrete items of information. The data can later be joined with the account owner and office location information for analysis. Datastore can also be configured to store data in multi-region locations.
Refer GCP documentation - Storage Options
Option A is wrong as a flat file is not an ideal storage option; it is not scalable.
Option C is wrong as a relational database like Cloud SQL is not an ideal solution to store schemaless data.
Option D is wrong as blob storage like Cloud Storage is not an ideal solution to store and analyze schemaless data or to join it with other sources.

Question 5
You have created an App Engine application in the development environment. The testing of the application has been successful. You want to move the application to the production environment. How can you deploy the application with minimal steps?
A. Activate the production config, perform app engine deploy
B. Perform app engine deploy using the --project parameter
C. Clone the app engine application to the production environment
D. Change the project parameter in app.yaml and redeploy

Correct Answer
B. Perform app engine deploy using the --project parameter

Explanation
Correct answer is B as the gcloud app deploy allows the --project parameter to be passed to
override the project that the app engine application needs to be deployed to.
Refer GCP documentation - Cloud SDK
--project=PROJECT_ID
The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --format='text(core.project)' and can be set using gcloud config set project PROJECT_ID. Overrides the default core/project property value for this command invocation.
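A minimal sketch of the single-step deployment, assuming a hypothetical production project ID:

gcloud app deploy app.yaml --project=my-prod-project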

Option A is wrong as it is a two-step process, although it is a valid solution.
Option C is wrong as there is no feature to clone an App Engine application to another project.
Option D is wrong as app.yaml does not control the project the application is deployed to.

Question 6
You've been asked to add a new IAM member and grant her access to run some queries on
BigQuery. Considering the principle of least privilege, which role should you assign?
A. roles/bigquery.dataEditor and roles/bigquery.jobUser
B. roles/bigquery.dataViewer and roles/bigquery.user
C. roles/bigquery.dataViewer and roles/bigquery.jobUser
D. roles/bigquery.dataOwner and roles/bigquery.jobUser

Correct Answer
C. roles/bigquery.dataViewer and roles/bigquery.jobUser

Explanation
Correct answer is C as the user only needs to query the data; they should have access to view the dataset and run query jobs, which is provided by roles/bigquery.dataViewer and roles/bigquery.jobUser, in line with the least-privilege principle.
Refer GCP documentation - BigQuery Access Control
Option A is wrong as roles/bigquery.dataEditor provides more than required privileges
Option B is wrong as roles/bigquery.user provides more than required privileges
Option D is wrong as roles/bigquery.dataOwner provides more than required privileges
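For illustration, the two roles could be granted at the project level with gcloud; the project ID and user email are hypothetical (dataViewer can also be granted on individual datasets for tighter scoping):

gcloud projects add-iam-policy-binding my-project --member=user:analyst@example.com --role=roles/bigquery.dataViewer
gcloud projects add-iam-policy-binding my-project --member=user:analyst@example.com --role=roles/bigquery.jobUser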
Question 7
Your team has been working on building a web application. The plan is to deploy to Kubernetes.
You currently have a Dockerfile that works locally. How can you get the application deployed to
Kubernetes?
A. Use kubectl to convert the Dockerfile into a deployment.
B. Use docker to create a container image, save the image to Cloud Storage, deploy the
uploaded image to Kubernetes with kubectl.
C. Use kubectl apply to push the Dockerfile to Kubernetes.
D. Use docker to create a container image, push it to the Google Container Registry,
deploy the uploaded image to Kubernetes with kubectl.

Correct Answer
D. Use docker to create a container image, push it to the Google Container Registry, deploy the
uploaded image to Kubernetes with kubectl.

Explanation
Correct answer is D as the correct steps are to create the container image, push it to Google Container Registry, and deploy the image to Kubernetes with kubectl.
Refer GCP documentation - Kubernetes Engine Deploy
To package and deploy your application on GKE, you must:
1. Package your app into a Docker image
2. Run the container locally on your machine (optional)
3. Upload the image to a registry
4. Create a container cluster
5. Deploy your app to the cluster
6. Expose your app to the Internet
7. Scale up your deployment
8. Deploy a new version of your app
Option A is wrong as kubectl cannot convert the Dockerfile to deployment.
Option B is wrong as Cloud Storage is not Docker image repository.
Option C is wrong as kubectl cannot push a Dockerfile to Kubernetes, and doing so would not result in a deployment.
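A rough sketch of these steps from a workstation, assuming a hypothetical project ID, image name, and application port:

# Build the image from the local Dockerfile and tag it for Container Registry
docker build -t gcr.io/my-project/my-app:v1 .
# Allow docker to authenticate to gcr.io, then push the image
gcloud auth configure-docker
docker push gcr.io/my-project/my-app:v1
# Create a Kubernetes deployment from the uploaded image and expose it
kubectl create deployment my-app --image=gcr.io/my-project/my-app:v1
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080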
Question 8
You have a managed instance group composed of preemptible VMs. All of the VMs keep getting deleted and recreated every minute. What is a possible cause of this behavior?
A. Your zonal capacity is limited, causing all preemptible VMs to be shut down to recover capacity. Try deploying your group to another zone.
B. You have hit your instance quota for the region.
C. Your managed instance group's VMs are toggled to only last 1 minute in preemptible settings.
D. Your managed instance group's health check is repeatedly failing, either due to a misconfigured health check or to misconfigured firewall rules not allowing the health check to access the instances.

Correct Answer
D. Your managed instance group's health check is repeatedly failing, either due to a misconfigured health check or to misconfigured firewall rules not allowing the health check to access the instances.

Explanation
Correct answer is D as the instances (normal or preemptible) would be terminated and relaunched if the health check fails, either because the application is not configured properly or because the instance firewall rules do not allow the health check probes to reach the instances.
Refer GCP documentation - Health Check concepts
GCP provides health check systems that connect to virtual machine (VM) instances on a
configurable, periodic basis. Each connection attempt is called a probe. GCP records the
success or failure of each probe.
Health checks and load balancers work together. Based on a configurable number of sequential
successful or failed probes, GCP computes an overall health state for each VM in the load
balancer. VMs that respond successfully for the configured number of times are considered
healthy. VMs that fail to respond successfully for a separate number of times are unhealthy.
GCP uses the overall health state of each VM to determine its eligibility for receiving new
requests. In addition to being able to configure probe frequency and health state thresholds, you
can configure the criteria that define a successful probe.
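As an illustration, a firewall rule allowing the health check probes to reach the backends might look like the following; the rule name, target tag, and port are hypothetical, and the probe source ranges should be verified against the current documentation for the health check type in use:

gcloud compute firewall-rules create allow-health-checks --network default --allow tcp:80 --source-ranges 35.191.0.0/16,130.211.0.0/22 --target-tags lb-backend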
Question 9
You've created the code for a Cloud Function that will respond to HTTP triggers and return
some data in JSON format. You have the code locally; it's tested and working. Which command
can you use to create the function inside Google Cloud?
A. gcloud functions deploy
B. gcloud function create
C. gcloud functions create
D. gcloud function deploy

Correct Answer
A. gcloud functions deploy

Explanation
Correct answer is A as the code can be deployed using gcloud functions deploy command.
Refer GCP documentation - Cloud Functions Deploy
Deployments work by uploading an archive containing your function's source code to a Google
Cloud Storage bucket. You can deploy Cloud Functions from your local machine or from your
GitHub or Bitbucket source repository (via Cloud Source Repositories).
Using the gcloud command-line tool, deploy your function from the directory containing your function code with the gcloud functions deploy command:
gcloud functions deploy NAME --runtime RUNTIME TRIGGER [FLAGS...]
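For example, an HTTP-triggered function could be deployed as follows; the function name is hypothetical and the runtime value is only illustrative:

gcloud functions deploy my-json-api --runtime python39 --trigger-http --allow-unauthenticated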

Question 10
You have data stored in a Cloud Storage dataset and also in a BigQuery dataset. You need to
secure the data and provide 3 different types of access levels for your Google Cloud Platform
users: administrator, read/write, and read-only. You want to follow Google-recommended
practices. What should you do?
A. Create 3 custom IAM roles with appropriate policies for the access levels needed for
Cloud Storage and BigQuery. Add your users to the appropriate roles.
B. At the Organization level, add your administrator user accounts to the Owner role, add
your read/write user accounts to the Editor role, and add your read-only user accounts to
the Viewer role.
C. At the Project level, add your administrator user accounts to the Owner role, add your
read/write user accounts to the Editor role, and add your read-only user accounts to the
Viewer role.
D. Use the appropriate pre-defined IAM roles for each of the access levels needed for
Cloud Storage and BigQuery. Add your users to those roles for each of the services.

Correct Answer
D. Use the appropriate pre-defined IAM roles for each of the access levels needed for Cloud
Storage and BigQuery. Add your users to those roles for each of the services.
Explanation
Correct answer is D as Google best practice is to use predefined roles over legacy primitive roles and custom roles. Predefined roles help grant fine-grained control per service.
Refer GCP documentation - IAM Overview
● Primitive roles: The roles historically available in the Google Cloud Platform Console will
continue to work. These are the Owner, Editor, and Viewer roles.
● Predefined roles: Predefined roles are the Cloud IAM roles that give finer-grained access
control than the primitive roles. For example, the predefined role Pub/Sub Publisher
(roles/pubsub.publisher) provides access to only publish messages to a Cloud Pub/Sub
topic.
● Custom roles: Roles that you create to tailor permissions to the needs of your
organization when predefined roles don't meet your needs.
What is the difference between primitive roles and predefined roles?
Primitive roles are the legacy Owner, Editor, and Viewer roles. IAM provides predefined roles,
which enable more granular access than the primitive roles. Grant predefined roles to identities
when possible, so you only give the least amount of access necessary to access your
resources.
When would I use primitive roles?
Use primitive roles in the following scenarios:
● When the GCP service does not provide a predefined role. See the predefined roles
table for a list of all available predefined roles.
● When you want to grant broader permissions for a project. This often happens when
you’re granting permissions in development or test environments.
● When you need to allow a member to modify permissions for a project, you'll want to grant them the Owner role because only owners have the permission to grant access to other users for projects.
● When you work in a small team where the team members don’t need granular
permissions.
Option A is wrong as you should use custom roles only if predefined roles are not available.
Options B & C are wrong as Google does not recommend using primitive roles, which do not allow fine-grained access control. Also, primitive roles are applied at the project or service resource level.
Question : 11
You want to enable your running Google Container Engine cluster to scale as demand for your
application changes. What should you do?
A. Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags --enable-autoscaling max-nodes-10
C. Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Container Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application

Correct Answer
C. Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10

Explanation
Correct answer is C as you need to update the cluster to enable auto scaling with min and max
nodes to scale as per the demand.
Refer GCP documentation - Cluster Autoscaling
Option A is wrong as it would only increase the nodes; it does not enable autoscaling.
Option B is wrong as the cluster needs to be updated, not the instances.
Option D is wrong as you do not need to create a new cluster; the existing cluster can be updated to enable autoscaling.

Question : 12
Your development team has asked you to set up a load balancer with SSL termination. The website would be using the HTTPS protocol. Which load balancer should you use?
A. SSL proxy
B. HTTP load balancer
C. TCP proxy
D. HTTPS load balancer

Correct Answer
D. HTTPS load balancer

Explanation
Correct answer is D as HTTPS load balancer supports the HTTPS traffic with the SSL
termination ability.
Refer GCP documentation - Choosing Load Balancer
An HTTPS load balancer has the same basic structure as an HTTP load balancer (described
above), but differs in the following ways:
● An HTTPS load balancer uses a target HTTPS proxy instead of a target HTTP proxy.
● An HTTPS load balancer requires at least one signed SSL certificate installed on the
target HTTPS proxy for the load balancer. You can use Google-managed or self-
managed SSL certificates.
● The client SSL session terminates at the load balancer.
● HTTPS load balancers support the QUIC transport layer protocol.
Option A is wrong as SSL proxy is not recommended for HTTPS traffic.
Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load
balancing layer, then balances the connections across your instances using the SSL or TCP
protocols. Cloud SSL proxy is intended for non-HTTP(S) traffic. For HTTP(S) traffic, HTTP(S)
load balancing is recommended instead.
SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6
requests are terminated at the load balancing layer, then proxied over IPv4 to your backends.
Option B is wrong as HTTP load balancer does not support SSL termination.
Option C is wrong as TCP proxy does not support SSL offload and is not recommended for HTTP/S traffic.
Question : 13
Your finance team is working with the engineering team to try and determine your spending for
each service by day and month across all projects used by the billing account. What is the
easiest and most flexible way to aggregate and analyze the data?
A. Export the data for the billing account(s) involved to a JSON File; use a Cloud Function
to listen for a new file in the Storage bucket; code the function to analyze the service
data for the desired projects, by day and month.
B. Export the data for the billing account(s) involved to BigQuery; then use BigQuery to
analyze the service data for the desired projects, by day and month.
C. Export the data for the billing account(s) to File, import the files into a SQL database;
and then use BigQuery to analyze the service data for the desired projects, by day and
month.
D. Use the built-in reports, which already show this data.

Correct Answer
B. Export the data for the billing account(s) involved to BigQuery; then use BigQuery to analyze
the service data for the desired projects, by day and month.

Explanation
Correct answer is B as the billing data can be exported to BigQuery for running daily and
monthly to calculate spending across services.
Refer GCP documentation - Cloud Billing Export to BigQuery
Tools for monitoring, analyzing and optimizing cost have become an important part of managing
development. Billing export to BigQuery enables you to export your daily usage and cost
estimates automatically throughout the day to a BigQuery dataset you specify. You can then
access your billing data from BigQuery. You can also use this export method to export data to a
JSON file.
Options A & C are wrong as they are neither easy nor flexible.
Option D is wrong as there are no built-in reports.
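Once the export is enabled, per-service spend by day can be queried directly; a rough sketch assuming the standard billing export table and schema (the dataset and table names are placeholders, and field names should be verified against your own export):

bq query --use_legacy_sql=false '
SELECT
  service.description AS service,
  DATE(usage_start_time) AS usage_day,
  SUM(cost) AS total_cost
FROM `my-project.billing_dataset.gcp_billing_export_v1_XXXXXX`
GROUP BY service, usage_day
ORDER BY usage_day, total_cost DESC'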

Question : 14
Your company has deployed their application on managed instance groups, which is served
through a network load balancer. They want to enable health checks for the instances. How do
you configure the health checks?
A. Perform the health check using HTTPS by hosting a basic web server
B. Perform the health check using HTTP by hosting a basic web server
C. Perform the health check using TCP
D. Update Managed Instance groups to send a periodic ping to the network load balancer

Correct Answer
B. Perform the health check using HTTP by hosting a basic web server
Explanation
Correct answer is B as Network Load Balancer does not support TCP health checks and hence
HTTP health checks need to be performed. You can run a basic web server on each instance
for health checks.
Refer GCP documentation - Network Load Balancer Health Checks
Health checks ensure that Compute Engine forwards new connections only to instances that are
up and ready to receive them. Compute Engine sends health check requests to each instance
at the specified frequency; once an instance exceeds its allowed number of health check
failures, it is no longer considered an eligible instance for receiving new traffic. Existing
connections will not be actively terminated which allows instances to shut down gracefully and
to close TCP connections.
The health checker continues to query unhealthy instances, and returns an instance to the pool
when the specified number of successful checks is met. If all instances are marked as
UNHEALTHY, the load balancer directs new traffic to all existing instances.
Network Load Balancing relies on legacy HTTP Health checks for determining instance health.
Even if your service does not use HTTP, you'll need to at least run a basic web server on each
instance that the health check system can query.
Option A is wrong as the traffic is not secured, HTTPS health checks are not needed.
Option C is wrong as Network Load Balancer does not support TCP health checks.
Option D is wrong as instances do not need to send any traffic to Network Load Balancer.
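For illustration, a legacy HTTP health check could be created and attached to the target pool behind the network load balancer; the names, port, and request path below are hypothetical:

gcloud compute http-health-checks create basic-http-check --port 80 --request-path /healthz
gcloud compute target-pools add-health-checks my-target-pool --http-health-check basic-http-check --region us-central1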

Question : 15
Your security team has asked you to present them some numbers based on the logs that are
exported to BigQuery. Due to the team structure, your manager has asked you to determine
how much the query will cost. What's the best way to determine the cost?
A. It's not possible to estimate the cost of a query.
B. Create the query and execute the query in "cost estimation mode"
C. Create the query and use the --dry_run option to determine the amount of data read, and
then use the price calculator to determine the cost.
D. Use the BigQuery index viewer to determine how many records you'll be reading.

Correct Answer
C. Create the query and use the --dry_run option to determine the amount of data read, and
then use the price calculator to determine the cost.

Explanation
Correct answer is C as the --dry_run option can be used to price your queries before they are actually run. The query returns the bytes that would be read, which can then be used with the Pricing Calculator to estimate the query cost.
Refer GCP documentation - BigQuery Best Practices
Price your queries before running them
Best practice: Before running queries, preview them to estimate costs.
Queries are billed according to the number of bytes read. To estimate costs before running a
query use:
● The query validator in the GCP Console or the classic web UI
● The --dry_run flag in the CLI
● The dryRun parameter when submitting a query job using the API
● The Google Cloud Platform Pricing Calculator
Options A, B & D are wrong as they are not valid options.
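A quick sketch of a dry run from the command line; the query and table name are hypothetical:

# Reports the bytes that would be processed without running the query or incurring cost
bq query --use_legacy_sql=false --dry_run 'SELECT user_id, COUNT(*) FROM `my-project.security_logs.access_log` GROUP BY user_id'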

Question : 16
You've created a bucket to store some data archives for compliance. The data isn't likely to
need to be viewed. However, you need to store it for at least 7 years. What is the best default
storage class?
A. Multi-regional
B. Coldline
C. Regional
D. Nearline

Correct Answer
B. Coldline

Explanation
Correct answer is B as Coldline storage is an ideal solution for archival of infrequently accessed
data at low cost.
Refer GCP documentation - Cloud Storage Classes
Google Cloud Storage Coldline is a very-low-cost, highly durable storage service for data
archiving, online backup, and disaster recovery. Unlike other "cold" storage services, your data
is available within milliseconds, not hours or days.
Coldline Storage is the best choice for data that you plan to access at most once a year, due to
its slightly lower availability, 90-day minimum storage duration, costs for data access, and
higher per-operation costs. For example:
● Cold Data Storage - Infrequently accessed data, such as data stored for legal or
regulatory reasons, can be stored at low cost as Coldline Storage, and be available
when you need it.
● Disaster recovery - In the event of a disaster recovery event, recovery time is key. Cloud
Storage provides low latency access to data stored as Coldline Storage.
The geo-redundancy of Coldline Storage data is determined by the type of location in which it is
stored: Coldline Storage data stored in multi-regional locations is redundant across multiple
regions, providing higher availability than Coldline Storage data stored in regional locations.
Options A, C & D are wrong as they are not suited for archival data.

Question : 17
You've been tasked with getting only the operations team's public SSH keys onto a specific bastion host instance of a particular project. Currently, project-wide access has already been granted to all the instances within the project. With the fewest steps possible, how do you block or override the project-level access on the bastion host?
A. Use the gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=TRUE command to block the access
B. Use the gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=FALSE command to block the access.
C. Use the gcloud compute project-info add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=FALSE command to block the access.
D. Project wide SSH access cannot be overridden or blocked and needs to be removed.

Correct Answer
A. Use the gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=TRUE command to block the access

Explanation
Correct answer is A as the project wide SSH access can be blocked by using the --metadata
block-project-ssh-keys=TRUE
Refer GCP documentation - Compute Block Project Keys
If you need your instance to ignore project-wide public SSH keys and use only the instance-
level keys, you can block project-wide public SSH keys from the instance. This will only allow
users whose public SSH key is stored in instance-level metadata to access the instance. If you
want your instance to use both project-wide and instance-level public SSH keys, set the
instance metadata to allow project-wide SSH keys. This will allow any user whose public SSH
key is stored in project-wide or instance-level metadata to access the instance.
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=TRUE

Option B is wrong as the --metadata block-project-ssh-keys parameter needs to be set to TRUE


Option C is wrong as the command needs to be executed at the instance level.
Option D is wrong as project wide SSH key access can be blocked.
Question : 18
An application that relies on Cloud SQL to read infrequently changing data is predicted to grow
dramatically. How can you increase capacity for more read-only clients?
A. Configure high availability on the master node
B. Establish an external replica in the customer's data center
C. Use backups so you can restore if there's an outage
D. Configure read replicas.

Correct Answer
D. Configure read replicas.

Explanation
Correct answer is D as read replicas can help handle the read traffic, reducing the load on the primary database.
Refer GCP documentation - Cloud SQL Replication Options
Cloud SQL provides the ability to replicate a master instance to one or more read replicas. A
read replica is a copy of the master that reflects changes to the master instance in almost real
time.
Option A is wrong as high availability is for failover and not for performance.
Option B is wrong as external replica is not recommended for scaling as it needs to be
maintained and the network established for replication.
Option C is wrong as backups are more to restore the database in case of any outage.
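For illustration, a read replica can be added to an existing Cloud SQL instance with gcloud; the instance names are hypothetical:

gcloud sql instances create my-db-replica-1 --master-instance-name=my-db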

Question : 19
You need to have a backup/rollback plan in place for your application that is distributed across a
large managed instance group. What is the preferred method for doing so?
A. Use the Rolling Update feature to deploy/roll back versions with different managed
instance group templates.
B. Use the managed instance group snapshot function that is included in Compute Engine.
C. Have each instance write critical application data to a Cloud Storage bucket.
D. Schedule a cron job to take snapshots of each instance in the group.

Correct Answer
A. Use the Rolling Update feature to deploy/roll back versions with different managed instance
group templates.

Explanation
Correct answer is A as a rolling update helps apply the update to a controlled number of instances, maintaining high availability and providing the ability to roll back in case of any issues.
Refer GCP documentation - Updating Managed Instance Groups
A managed instance group contains one or more virtual machine instances that are controlled
using an instance template. To update instances in a managed instance group, you can make
update requests to the group as a whole, using the Managed Instance Group Updater feature.
The Managed Instance Group Updater allows you to easily deploy new versions of software to
instances in your managed instance groups, while controlling the speed of deployment, the level
of disruption to your service, and the scope of the update. The Updater offers two primary
advantages:
● The rollout of an update happens automatically to your specifications, without the need
for additional user input after the initial request.
● You can perform partial rollouts which allows for canary testing.
By allowing new software to be deployed inside an existing managed instance group, there is no
need for you to reconfigure the instance group or reconnect load balancing, autoscaling, or
autohealing each time new version of software is rolled out. Without the Updater, new software
versions must be deployed either by creating a new managed instance group with a new
software version, requiring additional set up each time, or through a manual, user-initiated,
instance-by-instance recreate. Both of these approaches require significant manual steps
throughout the process.
A rolling update is an update that is gradually applied to all instances in an instance group until
all instances have been updated. You can control various aspects of a rolling update, such as
how many instances can be taken offline for the update, how long to wait between updating
instances, whether the update affects all or just a portion of instances, and so on.
Options B, C & D are wrong as the key to scaling is to create stateless, disposable VMs so you are able to scale and have seamless deployments.
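A rolling update (and a rollback) can then be driven by switching instance templates; a sketch with hypothetical group, template, and zone names:

# Roll the group forward to the new template
gcloud compute instance-groups managed rolling-action start-update my-mig --version=template=my-template-v2 --zone us-central1-a
# Roll back by starting another rolling update that targets the previous template
gcloud compute instance-groups managed rolling-action start-update my-mig --version=template=my-template-v1 --zone us-central1-a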

Question : 20
Your company plans to archive data to Cloud Storage, which would be needed only in case of any compliance issues or audits. What is the command for creating a storage bucket, named 'archive_bucket', for this rarely accessed data?
A. gsutil rm -coldline gs://archive_bucket
B. gsutil mb -c coldline gs://archive_bucket
C. gsutil mb -c nearline gs://archive_bucket
D. gsutil mb gs://archive_bucket

Correct Answer
B. gsutil mb -c coldline gs://archive_bucket

Explanation
Correct answer is B as the data would be rarely accessed, so Coldline is an ideal storage class. Also, gsutil needs the -c parameter to specify the storage class.
Refer GCP documentation - Storage Classes
Coldline - Data you expect to access infrequently (i.e., no more than once per year). Typically
this is for disaster recovery, or data that is archived and may or may not be needed at some
future time
Option A is wrong as rm is the wrong command and removes data.
Option C is wrong as Nearline is not suited for data that needs rare access.
Option D is wrong as by default, gsutil would create a regional bucket.
Question : 21
Your application has a large international audience and runs stateless virtual machines within a
managed instance group across multiple locations. One feature of the application lets users
upload files and share them with other users. Files must be available for 30 days; after that, they
are removed from the system entirely. Which storage solution should you choose?
A. A Cloud Datastore database.
B. A multi-regional Cloud Storage bucket.
C. Persistent SSD on virtual machine instances.
D. A managed instance group of Filestore servers.

Correct Answer
B. A multi-regional Cloud Storage bucket.

Explanation
Correct answer is B as the key storage requirements is it being global, allow lifecycle
management and sharing capability. Cloud Storage is an ideal choice as it can be configured to
be multi-regional, have lifecycle management rules to auto delete the files after 30 days and
share them with others.
Option A is wrong as Datastore is a NoSQL solution and not ideal for unstructured data.
Option C is wrong as persistent disks are attached to individual virtual machine instances and are not a suitable option for sharing files across a globally distributed, stateless fleet.
Option D is wrong as Filestore instances are regional and not an ideal storage option for content that needs to be shared globally.

Question : 22
You're attempting to deploy a new instance that uses the centos 7 family. You can't recall the
exact name of the family. Which command could you use to determine the family names?
A. gcloud compute instances list
B. gcloud compute images show-families
C. gcloud compute instances show-families
D. gcloud compute images list

Correct Answer
D. gcloud compute images list

Explanation
Correct answer is D as family names are image attributes.
Refer GCP documentation - Cloud SDK Compute Images List & Image Families
Image families simplify the process of managing images in your project by grouping related
images together and making it easy to roll forward and roll back between specific image
versions. An image family always points to the latest version of an image that is not deprecated.
Most public images are grouped into image families. For example, the debian-9 image family in the debian-cloud project always points to the most recent Debian 9 image.
You can add your own images to an image family when you create a custom image. The image
family points to the most recent image that you added to that family. Because the image family
never points to a deprecated image, rolling the image family back to a previous image version is
as simple as deprecating the most recent image in that family.

Options A, B & C are wrong as they do not help retrieve the image family.
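For example, the public image list can be narrowed to CentOS families; the filter expression below is just one way to do so:

# The FAMILY column shows values such as centos-7
gcloud compute images list --filter="family~centos"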

Question : 23
You need to help a developer install the App Engine Go extensions. However, you've forgotten
the exact name of the component. Which command could you run to show all of the available
options?
A. gcloud config list
B. gcloud component list
C. gcloud config components list
D. gcloud components list

Correct Answer
D. gcloud components list

Explanation
Correct answer is D as gcloud components list provides the list of components with the
installation status.
Refer GCP documentation - Cloud SDK Components List
gcloud components list - list the status of all Cloud SDK components
This command lists all the available components in the Cloud SDK. For each component, the
command lists the following information:
● Status on your local workstation: not installed, installed (and up to date), and update
available (installed, but not up to date)
● Name of the component (a description)
● ID of the component (used to refer to the component in other [gcloud components]
commands)
● Size of the component
In addition, if the --show-versions flag is specified, the command lists the currently installed
version (if any) and the latest available version of each individual component.
Options A & C are wrong as config helps view and edit Cloud SDK properties. It does not
provide components detail.
Option B is wrong as it is not a valid command.
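Having located the component ID in the list, it can then be installed; app-engine-go is the ID under which the Go extensions are typically listed:

gcloud components list
gcloud components install app-engine-go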

Question : 24
You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual
machine. The script is printing errors that it cannot connect to BigQuery. What should you do to
fix the script?
A. Install the latest BigQuery API client library for Python
B. Run your script on a new virtual machine with the BigQuery access scope enabled
C. Create a new service account with BigQuery access and execute your script with that
user
D. Install the bq component for gcloud with the command gcloud components install bq.

Correct Answer
B. Run your script on a new virtual machine with the BigQuery access scope enabled

Explanation
Correct answer is B as the virtual machine's access scopes limit which APIs its service account can call; running the script on a new VM with the BigQuery access scope enabled allows it to connect to BigQuery.
Refer GCP documentation - Service Account
A service account is an identity that an instance or an application can use to run API requests
on your behalf. This identity is used to identify applications running on your virtual machine
instances to other Google Cloud Platform services. For example, if you write an application that
reads and writes files on Google Cloud Storage, it must first authenticate to the Google Cloud
Storage API. You can create a service account and grant the service account access to the
Cloud Storage API. Then, you would update your application code to pass the service account
credentials to the Cloud Storage API. Your application authenticates seamlessly to the API
without embedding any secret keys or user credentials in your instance, image, or application
code.
If your service accounts have the necessary IAM permissions, those service accounts can
create and manage instances and other resources. Service accounts can modify or delete
resources only if you grant the necessary IAM permissions to the service account at the project
or resource level. You can also change what service account is associated with an instance.
Option A is wrong as it is an issue with connectivity to BigQuery and not a client version
mismatch issue.
Option C is wrong as creating a new service account and executing the script with its key is recommended when running from on-premises or other cloud platforms; on Compute Engine the instance's service account with the proper scope should be used.
Option D is wrong as bq command is installed by default and not needed with the python client.
It is for direct command line interaction with BigQuery.
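As a sketch, a replacement VM with the BigQuery access scope could be created like this; the instance name and zone are hypothetical:

# Create the VM with the BigQuery access scope enabled for its attached service account
gcloud compute instances create bq-script-vm --zone us-central1-a --scopes bigquery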
Question : 25
Your company needs to create a new Kubernetes Cluster on Google Cloud Platform. They want
the nodes to be configured for resiliency and high availability with no manual intervention. How
should the Kubernetes cluster be configured?
A. Enable auto-healing for the managed instance groups
B. Enable auto-upgrades for the nodes
C. Enable auto-repairing for the nodes
D. Enable auto-healing for the nodes

Correct Answer
C. Enable auto-repairing for the nodes

Explanation
Correct answer is C as the resiliency and high availability can be increased using the node auto-
repair feature, which would allow Kubernetes engine to replace unhealthy nodes.
Refer GCP documentation - Kubernetes Auto-Repairing
GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running
state. When enabled, GKE makes periodic checks on the health state of each node in your
cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a
repair process for that node.
Option A is wrong as this cannot be implemented for the Kubernetes cluster.
Option B is wrong as auto-upgrades are to upgrade the node version to the latest stable
Kubernetes version.
Option D is wrong as there is no auto-healing feature.
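For instance, node auto-repair can be enabled when creating the cluster or turned on for an existing node pool; the names below are hypothetical:

gcloud container clusters create my-cluster --zone us-central1-a --enable-autorepair
gcloud container node-pools update default-pool --cluster my-cluster --zone us-central1-a --enable-autorepair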

Question : 26
You've been trying to deploy a container to Kubernetes; however, kubectl doesn't seem to be
able to connect to the cluster. Of the following, what is the most likely cause and how can you
fix it?
A. The firewall rules are preventing the connection. Open up the firewall rules to allow
traffic to port 1337.
B. The kubeconfig is missing the credentials. Run the gcloud container clusters get-credentials command.
C. The kubeconfig is missing the credentials. Run the gcloud container clusters auth login command.
D. The firewall rules are preventing the connection. Open up the firewall rules to allow
traffic to port 3682.

Correct Answer
B. The kubeconfig is missing the credentials. Run the gcloud container clusters get-credentials
command.
Explanation
Correct answer is B as when the connection is refused, the cluster context needs to be set using the gcloud container clusters get-credentials command.
Refer GCP documentation - Kubernetes Engine Troubleshooting
kubectl commands return "connection refused" error
Set the cluster context with the following command:
gcloud container clusters get-credentials [CLUSTER_NAME]

If you are unsure of what to enter for CLUSTER_NAME, use the following command to list your
clusters:
gcloud container clusters list

Options A & D are wrong as only SSH access is required and it is automatically added.
Option C is wrong as auth login would be needed if the Resource was not found.

Question : 27
You want to create a new role for your colleagues that will apply to all current and future
projects created in your organization. The role should have the permissions of the BigQuery Job
User and Cloud Bigtable User roles. You want to follow Google’s recommended practices. How
should you create the new role?
A. Use gcloud iam combine-roles --global to combine the 2 roles into a new custom role.
B. For one of your projects, in the Google Cloud Platform Console under Roles, select both
roles and combine them into a new custom role. Use gcloud iam promote-role to
promote the role from a project role to an organization role.
C. For all projects, in the Google Cloud Platform Console under Roles, select both roles
and combine them into a new custom role.
D. For your organization, in the Google Cloud Platform Console under Roles, select both
roles and combine them into a new custom role.

Correct Answer
D. For your organization, in the Google Cloud Platform Console under Roles, select both roles
and combine them into a new custom role.

Explanation
Correct answer is D as this creates a new role with the combined permissions on the
organization level.
Option A is wrong as this does not create a new role.
Option B is wrong as gcloud cannot promote a role to org level.
Option C is wrong as it’s recommended to define the role on the organization level. Also, the
role will not be applied on new projects.
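A sketch of creating such a custom role at the organization level with gcloud; the organization ID, role ID, and permission list are illustrative, and the actual permissions should be copied from the definitions of the two predefined roles:

gcloud iam roles create bqBigtableUser --organization=123456789012 --title="BigQuery Job User + Bigtable User" --permissions=bigquery.jobs.create,bigtable.tables.get,bigtable.tables.readRows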
Question : 28
Your company has reserved a monthly budget for your project. You want to be informed
automatically of your project spend so that you can take action when you approach the limit.
What should you do?
A. Link a credit card with a monthly limit equal to your budget.
B. Create a budget alert for desired percentages such as 50%, 90%, and 100% of your
total monthly budget.
C. In App Engine Settings, set a daily budget at the rate of 1/30 of your monthly budget.
D. In the GCP Console, configure billing export to BigQuery. Create a saved view that
queries your total spend.

Correct Answer
B. Create a budget alert for desired percentages such as 50%, 90%, and 100% of your total
monthly budget.

Explanation
Correct answer is B as budget alerts allow you to configure thresholds, and if they are crossed, alerts are automatically triggered.
Refer GCP documentation - Billing Budgets Alerts
To help you with project planning and controlling costs, you can set a budget alert. Setting a
budget alert lets you track how your spend is growing toward a particular amount.
You can apply budget alerts to either a billing account or a project, and you can set the budget
alert at a specific amount or match it to the previous month's spend. The alerts will be sent to
billing administrators and billing account users when spending exceeds a percentage of your
budget.
Option A is wrong as a linked card does not alert; the charges would still increase as per the usage.
Option C is wrong as App Engine does not have budget settings.
Option D is wrong as the solution would not trigger automatic alerts, and the checks would not be immediate either.
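If scripting is preferred over the console, a budget with alert thresholds can also be created with gcloud; this assumes the billing budgets commands are available in your SDK version, and the billing account ID and amounts are hypothetical:

gcloud billing budgets create --billing-account=0X0X0X-0X0X0X-0X0X0X --display-name=monthly-project-budget --budget-amount=1000USD --threshold-rule=percent=0.5 --threshold-rule=percent=0.9 --threshold-rule=percent=1.0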

Question : 29
A recent software update to a static e-commerce website running on Google Cloud has caused the website to crash for several hours. The CTO decides that all critical changes must now have a back-out/roll-back plan. The website is deployed on Cloud Storage and critical changes are frequent. Which action should you take to implement the back-out/roll-back plan?
A. Create a Nearline copy for the website static data files stored in Google Cloud Storage.
B. Enable object versioning on the website's static data files stored in Google Cloud
Storage.
C. Enable Google Cloud Deployment Manager (CDM) on the project, and define each
change with a new CDM template.
D. Create a snapshot of each VM prior to an update, and recover the VM from the snapshot
in case of a new version failure.
Correct Answer
B. Enable object versioning on the website's static data files stored in Google Cloud Storage.

Explanation
Correct answer is B as this is a seamless way to ensure the last known good version of the static content is always available.
Option A is wrong as this copy process is unreliable and makes it tricky to keep things in sync; it also doesn't provide a way to roll back once a bad version of the data has been written to the copy.
Option C is wrong as this would add a great deal of overhead to the process and would cause
conflicts in association between different Deployment Manager deployments which could lead to
unexpected behavior if an old version is changed.
Option D is wrong as this approach doesn't scale well; there is a lot of management work involved.
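For example, versioning can be turned on for the bucket, and an older generation restored if a release goes bad; the bucket, object, and generation number below are hypothetical:

# Enable object versioning on the website bucket
gsutil versioning set on gs://my-static-site
# List archived generations and copy a known-good one back as the live object
gsutil ls -a gs://my-static-site/index.html
gsutil cp gs://my-static-site/index.html#1556835845511610 gs://my-static-site/index.html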

Question : 30
You have a project using BigQuery. You want to list all BigQuery jobs for that project. You want
to set this project as the default for the bq command-line tool. What should you do?
A. Use "gcloud config set project" to set the default project
B. Use "bq config set project" to set the default project.
C. Use "gcloud generate config-url" to generate a URL to the Google Cloud Platform
Console to set the default project.
D. Use "bq generate config-url" to generate a URL to the Google Cloud Platform Console to
set the default project.

Correct Answer
A. Use "gcloud config set project" to set the default project

Explanation
Correct answer is A as you need to use gcloud to manage the config/defaults.
Refer GCP documentation - Cloud SDK Config Set
--project=PROJECT_ID
The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --format='text(core.project)' and can be set using gcloud config set project PROJECT_ID. Overrides the default core/project property value for this command invocation.

Option B is wrong as the bq command-line tool assumes the gcloud configuration settings and
can’t be set through BigQuery.
Option C is wrong as entering this command will not achieve the desired result and will generate
an error.
Option D is wrong as entering this command will not achieve the desired result and will generate
an error.
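A short sketch, with a hypothetical project ID:

# Set the default project used by gcloud and the bq tool
gcloud config set project my-analytics-project
# List the BigQuery jobs in the (now default) project
bq ls -j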

Question : 31
You're deploying an application to a Compute Engine instance, and it's going to need to make
calls to read from Cloud Storage and Bigtable. You want to make sure you're following the
principle of least privilege. What's the easiest way to ensure the code can authenticate to the
required Google Cloud APIs?
A. Create a new user account with the required roles. Store the credentials in Cloud Key
Management Service and download them to the instance in code.
B. Use the default Compute Engine service account and set its scopes. Let the code find
the default service account using "Application Default Credentials".
C. Create a new service account and key with the required limited permissions. Set the
instance to use the new service account. Edit the code to use the service account key.
D. Register the application with the Binary Registration Service and apply the required
roles.

Correct Answer
C. Create a new service account and key with the required limited permissions. Set the instance
to use the new service account. Edit the code to use the service account key.

Explanation
Correct answer is C as the best practice is to use a Service Account to grant the application the
required access.
Refer GCP documentation - Service Accounts
A service account is a special type of Google account that belongs to your application or a
virtual machine (VM), instead of to an individual end user. Your application assumes the identity
of the service account to call Google APIs, so that the users aren't directly involved.
A service account is a special type of Google account that represents a Google Cloud service
identity or app rather than an individual user. Like users and groups, service accounts can be
assigned IAM roles to grant access to specific resources. Service accounts authenticate with a
key rather than a password. Google manages and rotates the service account keys for code
running on GCP. We recommend that you use service accounts for server-to-server
interactions.
Option A is wrong as it is not the recommended approach
Option B is wrong as the default Service Account does not have the required permissions.
Option D is wrong as there is no Binary Registration service.
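One way this could be set up, with hypothetical service account, project, and resource names (the roles should be narrowed to exactly what the code needs):

# Create a dedicated service account with only the read permissions required
gcloud iam service-accounts create app-reader --display-name="App read-only access"
gcloud projects add-iam-policy-binding my-project --member=serviceAccount:app-reader@my-project.iam.gserviceaccount.com --role=roles/storage.objectViewer
gcloud projects add-iam-policy-binding my-project --member=serviceAccount:app-reader@my-project.iam.gserviceaccount.com --role=roles/bigtable.reader
# Attach the service account to the instance so the code can authenticate via Application Default Credentials
gcloud compute instances create app-vm --zone us-central1-a --service-account=app-reader@my-project.iam.gserviceaccount.com --scopes=cloud-platform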
Question : 32
Your team uses a third-party monitoring solution. They've asked you to deploy it to all nodes in
your Kubernetes Engine Cluster. What's the best way to do that?
A. Connect to each node via SSH and install the monitoring solution.
B. Deploy the monitoring pod as a StatefulSet.
C. Deploy the monitoring pod as a DaemonSet.
D. Use Deployment Manager to deploy the monitoring solution.

Correct Answer
C. Deploy the monitoring pod as a DaemonSet.

Explanation
Correct answer is C as Daemon set helps deploy applications or tools that you need to run on
all the nodes.
Refer GCP documentation - Kubernetes Engine Daemon Set
Like other workload objects, DaemonSets manage groups of replicated Pods. However,
DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or
a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to
the new nodes as needed.
DaemonSets use a Pod template, which contains a specification for its Pods. The Pod specification determines how each Pod should look: what applications should run inside its containers, which volumes it should mount, its labels and selectors, and more.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or
certain nodes, and which do not require user intervention. Examples of such tasks include
storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons
like collectd.
For example, you could have DaemonSets for each type of daemon run on all of your nodes.
Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them
use different configurations for different hardware types and resource needs.
Option A is wrong as it is not a viable option.
Option B is wrong as StatefulSet is useful for maintaining state. StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that GKE maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod is maintained in persistent disk storage associated with the StatefulSet.
Option D is wrong as Deployment manager does not control Pods.
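A minimal DaemonSet manifest for the monitoring agent might look like the following; the image name is hypothetical:

cat > monitoring-agent.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: gcr.io/my-project/monitoring-agent:1.0
EOF
kubectl apply -f monitoring-agent.yaml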
Question : 33
Your data team is working on some new machine learning models. They're generating several output files per day that they want to store in a regional bucket. They focus on the output files from the last month. The output files older than a month need to be cleaned up. With the fewest steps possible, what's the best way to implement the solution?
A. Create a lifecycle policy to switch the objects older than a month to Coldline storage.
B. Create a lifecycle policy to delete the objects older than a month.
C. Create a Cloud Function triggered when objects are added to a bucket. Look at the date
on all the files and delete it, if it's older than a month.
D. Create a Cloud Function triggered when objects are added to a bucket. Look at the date
on all the files and move it to Coldline storage if it's older than a month.
Correct Answer
B. Create a lifecycle policy to delete the objects older than a month.
Explanation
Correct answer is B as the files are not needed anymore, so they can be deleted and need not be stored. The deletion can be handled easily using Object Lifecycle Management.
Refer GCP documentation - Cloud Storage Lifecycle Management
You can assign a lifecycle management configuration to a bucket. The configuration contains a
set of rules which apply to current and future objects in the bucket. When an object meets the
criteria of one of the rules, Cloud Storage automatically performs a specified action on the
object. Here are some example use cases:
● Downgrade the storage class of objects older than 365 days to Coldline Storage.
● Delete objects created before January 1, 2013.
● Keep only the 3 most recent versions of each object in a bucket with versioning enabled.
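As a sketch of the fewest-steps approach (the bucket name and local file name below are hypothetical), such a rule can be defined in a JSON file and applied with gsutil:
cat > lifecycle.json <<EOF
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    }
  ]
}
EOF
# Apply the lifecycle configuration to the bucket
gsutil lifecycle set lifecycle.json gs://ml-output-bucket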
Option A is wrong as the files are not needed anymore, so they should be deleted rather than moved to Coldline.
Options C & D are wrong as the cleanup can be handled more easily using Object Lifecycle Management than with a Cloud Function.
Question : 34
What is the command for creating a storage bucket that has once per month access and is
named 'archive_bucket'?
A. gsutil rm -coldline gs://archive_bucket
B. gsutil mb -c coldline gs://archive_bucket
C. gsutil mb -c nearline gs://archive_bucket
D. gsutil mb gs://archive_bucket
Correct Answer
C. gsutil mb -c nearline gs://archive_bucket
Explanation
Correct answer is C as the data needs to be accessed on a monthly basis, so Nearline is the ideal storage class. Also, gsutil mb needs the -c parameter to specify the storage class.
Refer GCP documentation - Storage Classes
Nearline - Data you do not expect to access frequently (i.e., no more than once per month).
Ideal for back-up and serving long-tail multimedia content.
Option A is wrong as rm is the wrong command; it removes objects instead of creating a bucket.
Option B is wrong as coldline is not suited for data that needs monthly access.
Option D is wrong as, without the -c flag, gsutil creates the bucket with the default Standard storage class rather than Nearline.
Question : 35
You have created a Kubernetes deployment, called Deployment-A, with 3 replicas on your
cluster. Another deployment, called Deployment-B, needs access to Deployment-A. You cannot
expose Deployment-A outside of the cluster. What should you do?
A. Create a Service of type NodePort for Deployment A and an Ingress Resource for that
Service. Have Deployment B use the Ingress IP address.
B. Create a Service of type LoadBalancer for Deployment A. Have Deployment B use the
Service IP address.
C. Create a Service of type LoadBalancer for Deployment A and an Ingress Resource for
that Service. Have Deployment B use the Ingress IP address.
D. Create a Service of type ClusterIP for Deployment A. Have Deployment B use the
Service IP address.
Correct Answer
D. Create a Service of type ClusterIP for Deployment A. Have Deployment B use the Service IP address.
Explanation
Correct answer is D as this exposes the service on a cluster-internal IP address. Choosing this
method makes the service reachable only from within the cluster.
Refer GCP documentation - Kubernetes Networking
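As a minimal sketch (the deployment name, Service name, and ports are assumptions), Deployment A could be exposed inside the cluster with kubectl; the default Service type for kubectl expose is ClusterIP:
# Create a cluster-internal Service in front of Deployment A
kubectl expose deployment deployment-a --name=deployment-a-svc --port=80 --target-port=8080
# Deployment B can then reach it via the Service's cluster IP or its internal DNS name, e.g. http://deployment-a-svc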
Option A is wrong as this exposes Deployment A over the public internet.
Option B is wrong as LoadBalancer will expose the service publicly.
Option C is wrong as this exposes the service externally using a cloud provider's load balancer, which violates the requirement to keep Deployment A internal to the cluster.
Question : 36
You need to take streaming data from thousands of Internet of Things (IoT) devices, ingest it,
run it through a processing pipeline, and store it for analysis. You want to run SQL queries
against your data for analysis. What services in which order should you use for this task?
A. Cloud Dataflow, Cloud Pub/Sub, BigQuery
B. Cloud Pub/Sub, Cloud Dataflow, Cloud Dataproc
C. Cloud Pub/Sub, Cloud Dataflow, BigQuery
D. App Engine, Cloud Dataflow, BigQuery
Correct Answer
C. Cloud Pub/Sub, Cloud Dataflow, BigQuery
Explanation
Correct answer is C as the data needs to be ingested, transformed, and stored; Cloud Pub/Sub, Cloud Dataflow, and BigQuery, in that order, are the ideal stack to handle the IoT data.
Refer GCP documentation - IoT
Google Cloud Pub/Sub provides a globally durable message ingestion service. By creating
topics for streams or channels, you can enable different components of your application to
subscribe to specific streams of data without needing to construct subscriber-specific channels
on each device. Cloud Pub/Sub also natively connects to other Cloud Platform services, helping
you to connect ingestion, data pipelines, and storage systems.
Google Cloud Dataflow provides the open Apache Beam programming model as a managed
service for processing data in multiple ways, including batch operations, extract-transform-load
(ETL) patterns, and continuous, streaming computation. Cloud Dataflow can be particularly
useful for managing the high-volume data processing pipelines required for IoT scenarios.
Cloud Dataflow is also designed to integrate seamlessly with the other Cloud Platform services
you choose for your pipeline.
Google BigQuery provides a fully managed data warehouse with a familiar SQL interface, so
you can store your IoT data alongside any of your other enterprise analytics and logs. The
performance and cost of BigQuery means you might keep your valuable data longer, instead of
deleting it just to save disk space.
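As a rough sketch of wiring this together (the project, topic, dataset, and table names are hypothetical, and the Google-provided streaming template path and parameters are assumptions), you might do something like:
# Ingestion topic for device telemetry
gcloud pubsub topics create iot-telemetry
# Destination dataset for analysis
bq mk --dataset my-project:iot
# Streaming pipeline from Pub/Sub into BigQuery using a Google-provided Dataflow template
gcloud dataflow jobs run iot-ingest \
    --gcs-location gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region us-central1 \
    --parameters inputTopic=projects/my-project/topics/iot-telemetry,outputTableSpec=my-project:iot.telemetry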
Sample architecture: Mobile Gaming Analysis Telemetry
Option A is wrong as the services are correct, but the order is not.
Option B is wrong as Dataproc is not an ideal tool for analysis. Cloud Dataproc is a fast, easy-
to-use, fully-managed cloud service for running Apache Spark and Apache Hadoop clusters in a
simpler, more cost-efficient way.
Option D is wrong as App Engine is not an ideal ingestion tool to handle IoT data.
Question : 37
You've been asked to add a new IAM member and grant them access to run some queries on
BigQuery. Considering Google recommended best practices and the principle of least privilege,
how would you assign the access?
A. Create a custom role with roles/bigquery.dataViewer and roles/bigquery.jobUser roles;
assign custom role to the users
B. Create a custom role with roles/bigquery.dataViewer and roles/bigquery.jobUser roles;
assign custom role to the group; add users to groups
C. Assign roles/bigquery.dataViewer and roles/bigquery.jobUser roles to the users
D. Assign roles/bigquery.dataViewer and roles/bigquery.jobUser roles to a group; add users
to groups
Correct Answer
D. Assign roles/bigquery.dataViewer and roles/bigquery.jobUser roles to a group; add users to groups
Explanation
Correct answer is D as the user would need the roles/bigquery.dataViewer and roles/bigquery.jobUser roles to access and query the BigQuery tables, in line with the principle of least privilege. As per Google best practices, it is recommended to use predefined roles and to create groups to control access for multiple users with the same responsibility.
Refer GCP documentation - IAM Best Practices
Use Cloud IAM to apply the security principle of least privilege, so you grant only the necessary
access to your resources.
We recommend collecting users with the same responsibilities into groups and assigning Cloud
IAM roles to the groups rather than to individual users. For example, you can create a "data
scientist" group and assign appropriate roles to enable interaction with BigQuery and Cloud
Storage. When a new data scientist joins your team, you can simply add them to the group and
they will inherit the defined permissions.
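A minimal sketch of this pattern, assuming a hypothetical project ID and Google Group address:
gcloud projects add-iam-policy-binding my-project \
    --member="group:bq-analysts@example.com" \
    --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding my-project \
    --member="group:bq-analysts@example.com" \
    --role="roles/bigquery.jobUser"
# New analysts are then simply added to the bq-analysts@example.com group.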
Options A & B are wrong as the predefined roles can be assigned directly and there is no need to create custom roles.
Option C is wrong as it is recommended to create groups instead of using individual users.
Question : 38
While looking at your application's source code in your private Github repo, you've noticed that a
service account key has been committed to git. What steps should you take next?
A. Delete the project and create a new one.
B. Do nothing. Git is fine for keys if the repo is private.
C. Revoke the key, remove the key from Git, purge the Git history to remove all traces of
the file, ensure the key is added to the .gitignore file.
D. Contact Google Cloud Support
Correct Answer
C. Revoke the key, remove the key from Git, purge the Git history to remove all traces of the file, ensure the key is added to the .gitignore file.
Explanation
Correct answer is C as the key must be revoked, all traces of it removed from the repository history, and the key file added to the .gitignore file so it cannot be committed again.
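A sketch of the remediation, assuming a hypothetical key ID, key file name, service account, and branch; git filter-repo is one common tool for purging history (git filter-branch or BFG are alternatives):
# Revoke (delete) the compromised key
gcloud iam service-accounts keys delete KEY_ID \
    --iam-account=my-sa@my-project.iam.gserviceaccount.com
# Purge every trace of the file from the repository history
git filter-repo --invert-paths --path sa-key.json
# Keep key files out of future commits
echo "sa-key.json" >> .gitignore
git add .gitignore && git commit -m "Ignore service account key files"
git push --force origin main   # hypothetical branch name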
Option A is wrong as deleting the project does not remove the key from the Git history.
Option B is wrong as it is bad practice to store keys in Git, even in a private repo.
Option D is wrong as Google Cloud Support cannot remove the key from your repository; you must remediate it yourself.
Question : 39
Your company wants to reduce cost on infrequently accessed data by moving it to the cloud.
The data will still be accessed approximately once a month to refresh historical charts. In
addition, data older than 5 years needs to be archived for 5 years for compliance reasons. How
should you store and manage the data?
A. In Google Cloud Storage and stored in a Multi-Regional bucket. Set an Object Lifecycle
Management policy to delete data older than 5 years.
B. In Google Cloud Storage and stored in a Multi-Regional bucket. Set an Object Lifecycle
Management policy to change the storage class to Coldline for data older than 5 years.
C. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle
Management policy to delete data older than 5 years.
D. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle
Management policy to change the storage class to Coldline for data older than 5 years.
Correct Answer
D. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle Management policy to change the storage class to Coldline for data older than 5 years.
Explanation
Correct answer is D as the access pattern fits Nearline storage class requirements and Nearline
is a more cost-effective storage approach than Multi-Regional. The object lifecycle management
policy to move data to Coldline is ideal for archival.
Refer GCP documentation - Cloud Storage - Storage Classes
Options A & B are wrong as Multi-Regional storage class is not an ideal storage option with
infrequent access.
Option C is wrong as the data is required for compliance, so it cannot be deleted and needs to be moved to Coldline storage instead.
Question : 40
A SysOps admin has configured a lifecycle rule on a multi-regional bucket with object versioning enabled. Which of the following statements reflects the effect of the following lifecycle config?
{
"rule":[
{
"action":{
"type":"Delete"
},
"condition":{
"age":30,
"isLive":false
}
},
{
"action":{
"type":"SetStorageClass",
"storageClass":"COLDLINE"
},
"condition":{
"age":365,
"matchesStorageClass":"MULTI_REGIONAL"
}
}
]
}
A. Archive objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional
B. Delete objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional.
C. Delete archived objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional.
D. Move objects to Coldline Storage after 365 days if the storage class is Multi-regional. First rule has no effect on the bucket.
Correct Answer
C. Delete archived objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional.
Explanation
Correct answer is C.
First rule will delete any object if it has an age over 30 days and is not live (not the latest version). Second rule will change the storage class from Multi-Regional to Coldline for objects with an age over 365 days.
Refer GCP documentation - Object Lifecycle
The following conditions are supported for a lifecycle rule:
● Age: This condition is satisfied when an object reaches the specified age (in days). Age
is measured from the object's creation time. For example, if an object's creation time is
2019/01/10 10:00 UTC and the Age condition is 10 days, then the condition is satisfied
for the object on and after 2019/01/20 10:00 UTC. This is true even if the object
becomes archived through object versioning sometime after its creation.
● CreatedBefore: This condition is satisfied when an object is created before midnight of
the specified date in UTC.
● IsLive: If the value is true, this lifecycle condition matches only live objects; if the value is
false, it matches only archived objects. For the purposes of this condition, objects in non-
versioned buckets are considered live.
● MatchesStorageClass: This condition is satisfied when an object in the bucket is stored
as the specified storage class. Generally, if you intend to use this condition on Multi-
Regional Storage or Regional Storage objects, you should also include STANDARD and
DURABLE_REDUCED_AVAILABILITY in the condition to ensure all objects of similar
storage class are covered.
Option A is wrong as the first rule does not archive objects; it deletes already archived (non-live) objects.
Option B is wrong as the first rule does not delete live objects; it deletes only archived (non-live) objects.
Option D is wrong as the first rule does apply to archived (non-live) objects, so it does have an effect on a versioning-enabled bucket.
Question : 41
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be
available 24hrs a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?
A. Load data into Google BigQuery.
B. Insert data into Google Cloud SQL.
C. Put flat files into Google Cloud Storage.
D. Stream data into Google Cloud Datastore.
Correct Answer
A. Load data into Google BigQuery.
Explanation
Correct answer is A as BigQuery is the only one of these Google products that supports a SQL interface and has a high enough SLA (99.9%) to keep the data readily available.
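For context (the dataset, table, and bucket names below are hypothetical), data can be loaded into BigQuery from Cloud Storage and queried with standard SQL using the bq tool:
bq mk --dataset my-project:analytics
# Load CSV exports from Cloud Storage, letting BigQuery detect the schema
bq load --source_format=CSV --autodetect analytics.events "gs://my-bucket/exports/*.csv"
# Analysts can then query with familiar SQL
bq query --use_legacy_sql=false 'SELECT COUNT(*) AS row_count FROM `my-project.analytics.events`'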
Option B is wrong as Cloud SQL cannot support multi-petabyte data. Storage limit for Cloud
SQL is 10TB
Option C is wrong as Cloud Storage does not provide SQL interface.
Option D is wrong as Datastore does not provide a SQL interface and is a NoSQL solution.
Question : 42
A SysOps admin has configured a lifecycle rule on a multi-regional bucket with object versioning disabled. Which of the following statements reflects the effect of the following lifecycle config?
{
"rule": [
{
"action": {
"type": "Delete"
},
"condition": {
"age": 30,
"isLive": false
}
},
{
"action": {
"type": "SetStorageClass",
"storageClass": "COLDLINE"
},
"condition": {
"age": 365,
"matchesStorageClass": "MULTI_REGIONAL"
}
}
]
}
A. Archive objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional
B. Delete objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional.
C. Delete archived objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional.
D. Move objects to Coldline Storage after 365 days if the storage class is Multi-regional. First rule has no effect on the bucket.
Correct Answer
D. Move objects to Coldline Storage after 365 days if the storage class is Multi-regional. First rule has no effect on the bucket.
Explanation
Correct answer is D.
First rule will delete any object if it has an age over 30 days and is not live (not the latest version). However, as the bucket does not have versioning enabled, there are no archived objects and the rule has no effect. Second rule will change the storage class from Multi-Regional to Coldline for objects with an age over 365 days.
Refer GCP documentation - Object Lifecycle
The following conditions are supported for a lifecycle rule:
● Age: This condition is satisfied when an object reaches the specified age (in days). Age
is measured from the object's creation time. For example, if an object's creation time is
2019/01/10 10:00 UTC and the Age condition is 10 days, then the condition is satisfied
for the object on and after 2019/01/20 10:00 UTC. This is true even if the object
becomes archived through object versioning sometime after its creation.
● CreatedBefore: This condition is satisfied when an object is created before midnight of
the specified date in UTC.
● IsLive: If the value is true, this lifecycle condition matches only live objects; if the value is
false, it matches only archived objects. For the purposes of this condition, objects in non-
versioned buckets are considered live.
● MatchesStorageClass: This condition is satisfied when an object in the bucket is stored
as the specified storage class. Generally, if you intend to use this condition on Multi-
Regional Storage or Regional Storage objects, you should also include STANDARD and
DURABLE_REDUCED_AVAILABILITY in the condition to ensure all objects of similar
storage class are covered.
Option A is wrong as the first rule does not archive objects; it deletes archived objects, and it has no impact on a versioning-disabled bucket.
Option B is wrong as the first rule does not delete live objects; it deletes only archived (non-live) objects, and a versioning-disabled bucket has none.
Option C is wrong as the first rule has no impact on a versioning-disabled bucket, so no objects are deleted.
Question : 43
You have a Kubernetes cluster with 1 node-pool. The cluster receives a lot of traffic and needs
to grow. You decide to add a node. What should you do?
A. Use "gcloud container clusters resize" with the desired number of nodes.
B. Use "kubectl container clusters resize" with the desired number of nodes.
C. Edit the managed instance group of the cluster and increase the number of VMs by 1.
D. Edit the managed instance group of the cluster and enable autoscaling.
Correct Answer
A. Use "gcloud container clusters resize" with the desired number of nodes.
Explanation
Correct answer is A as the Kubernetes cluster can be resized using the gcloud command.
Refer GCP documentation - Resizing Kubernetes Cluster
gcloud container clusters resize [CLUSTER_NAME] --node-pool [POOL_NAME] --size [SIZE]
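For example, assuming a cluster named my-cluster with a node pool named default-pool, growing to 4 nodes:
gcloud container clusters resize my-cluster --node-pool default-pool --size 4
Note that newer gcloud releases accept --num-nodes in place of the older --size flag.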
Option B is wrong as a Kubernetes cluster cannot be resized using kubectl; there is no such kubectl command.
Options C & D are wrong as the managed instance group backing a GKE node pool should not be modified directly; resize the cluster through GKE instead.
Question : 44
Your company has hosted their critical application on Compute Engine managed instance
groups. They want the instances to be configured for resiliency and high availability with no
manual intervention. How should the managed instance group be configured?
A. Enable auto-repairing for the managed instance groups
B. Enable auto-updating for the managed instance groups
C. Enable auto-restarts for the managed instance groups
D. Enable auto-healing for the managed instance groups
Correct Answer
D. Enable auto-healing for the managed instance groups
Explanation
Correct answer is D as Managed Instance Groups provide the autohealing feature, which performs a health check, and if the application is not responding, the instance is automatically recreated.
Refer GCP documentation - Managed Instance Groups
Autohealing — You can also set up an autohealing policy that relies on an application-based
health check, which periodically verifies that your application is responding as expected on each
of the MIG's instances. If an application is not responding on an instance, that instance is
automatically recreated. Checking that an application responds is more precise than simply
verifying that an instance is up and running.
Managed instance groups maintain high availability of your applications by proactively keeping
your instances available, which means in RUNNING state. A managed instance group will
automatically recreate an instance that is not RUNNING. However, relying only on instance
state may not be sufficient. You may want to recreate instances when an application freezes,
crashes, or runs out of memory.
Application-based autohealing improves application availability by relying on a health checking
signal that detects application-specific issues such as freezing, crashing, or overloading. If a
health check determines that an application has failed on an instance, the group automatically
recreates that instance.
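A rough sketch of enabling autohealing on an existing MIG (the health check name, request path, group name, zone, and delay are assumptions):
# Create an application-level HTTP health check
gcloud compute health-checks create http app-health-check \
    --port 80 --request-path /healthz
# Attach it to the managed instance group as an autohealing policy
gcloud compute instance-groups managed update my-mig \
    --zone us-central1-a \
    --health-check app-health-check --initial-delay 300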
Options A & C are wrong as these features are not available.
Option B is wrong as auto-updating helps deploy new versions of software to instances in a
managed instance group. The rollout of an update happens automatically based on your
specifications: you can control the speed and scope of the update rollout in order to minimize
disruptions to your application. You can optionally perform partial rollouts which allows for
canary testing.
Question : 45
You have installed SQL Server on a Windows instance. You want to connect to the instance. How should you connect to the instance with the fewest steps?
A. Generate Windows user and password. Check security group for 3389 firewall rule. Use
RDP option from GCP Console to connect
B. Generate Windows password. Check security group for 22 firewall rule. Use RDP option
from GCP Console to connect
C. Generate Windows user and password. Check security group for 22 firewall rule. Install
RDP Client to connect
D. Generate Windows password. Check security group for 3389 firewall rule. Install RDP
Client to connect
Correct Answer
D. Generate Windows password. Check security group for 3389 firewall rule. Install RDP Client to connect
Explanation
Correct answer is D as connecting to a Windows instance requires an RDP client; GCP does not provide one, so it needs to be installed. Generate the Windows instance password to connect to the instance, and note that the RDP port is 3389.
Refer GCP documentation - Windows Connecting to Instance
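For reference (instance, zone, and rule names are hypothetical, and the open source range is for illustration only; restrict it in practice), the password generation and firewall rule can be handled with gcloud:
# Generate or reset the Windows password for the instance
gcloud compute reset-windows-password my-windows-instance --zone us-central1-a
# Ensure RDP (TCP 3389) is allowed by a firewall rule
gcloud compute firewall-rules create allow-rdp \
    --allow tcp:3389 --source-ranges 0.0.0.0/0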
Options A & B are wrong as you need an external RDP client and cannot connect directly from the GCP console.
Options B & C are wrong as port 22 is for SSH, not RDP.
Question : 46
You have created a Kubernetes Engine cluster named 'project-1'. You've realized that you need to change the machine type for the cluster from n1-standard-1 to n1-standard-4. How can you make this change?
A. Create a new node pool in the same cluster, and migrate the workload to the new pool.
B. gcloud container clusters resize project-1 --machine-type n1-standard-4
C. gcloud container clusters update project-1 --machine-type n1-standard-4
D. gcloud container clusters migrate project-1 --machine-type n1-standard-4
Correct Answer
A. Create a new node pool in the same cluster, and migrate the workload to the new pool.
Explanation
Correct answer is A as the machine type of an existing node pool cannot be changed in place with a command. A new node pool with the updated machine type needs to be created and the workload migrated to the new node pool.
Refer GCP documentation - Kubernetes Engine - Migrating Node Pools
A node pool is a subset of machines that all have the same configuration, including machine
type (CPU and memory) and authorization scopes. Node pools represent a subset of nodes within a
cluster; a container cluster can contain one or more node pools.
When you need to change the machine profile of your Compute Engine cluster, you can create
a new node pool and then migrate your workloads over to the new node pool.
To migrate your workloads without incurring downtime, you need to:
● Mark the existing node pool as unschedulable.
● Drain the workloads running on the existing node pool.
● Delete the existing node pool.
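A sketch of this migration, assuming the cluster is named project-1, the existing pool is default-pool, and the new pool is larger-pool:
# Create the new node pool with the desired machine type
gcloud container node-pools create larger-pool --cluster project-1 --machine-type n1-standard-4
# Cordon, then drain, the nodes of the old pool so workloads reschedule onto the new pool
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl cordon "$node"
done
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl drain --ignore-daemonsets --force "$node"
done
# Remove the old node pool
gcloud container node-pools delete default-pool --cluster project-1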
Question 47
Your company wants to try out the cloud with low risk. They want to archive approximately 100
TB of their log data to the cloud and test the analytics features available to them there, while
also retaining that data as a long-term disaster recovery backup. Which two steps should they
take? (Choose two answers)
A. Load logs into Google BigQuery.
B. Load logs into Google Cloud SQL.
C. Import logs into Google Stackdriver.
D. Insert logs into Google Cloud Bigtable.
E. Upload log files into Google Cloud Storage.
Correct Answer
A. Load logs into Google BigQuery.
E. Upload log files into Google Cloud Storage.
Explanation
Correct answers are A & E as Google Cloud Storage can provide long term archival option and
BigQuery provides analytics capabilities.
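As a sketch (the bucket, dataset, and file layout below are hypothetical), the logs could be uploaded to Cloud Storage for archival and then loaded into BigQuery for analysis:
# Upload the log files in parallel to a Cloud Storage bucket
gsutil -m cp -r ./logs gs://log-archive-bucket/
# Load them into BigQuery for SQL-based analytics
bq mk --dataset my-project:log_analytics
bq load --source_format=CSV --autodetect log_analytics.app_logs "gs://log-archive-bucket/logs/*.csv"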
Option B is wrong as Cloud SQL is a relational database that does not support the required capacity and is not suitable for long-term archival storage.
Option C is wrong as Stackdriver is a monitoring, logging, alerting and debugging tool. It is not ideal for long-term retention of data and does not provide analytics capabilities.
Option D is wrong as Bigtable is a NoSQL solution and can be used for analytics. However, it is designed for low-latency access and is expensive for long-term archival.
Question : 48
A user wants to install a tool on the Cloud Shell. The tool should be available across sessions.
Where should the user install the tool?
A. /bin
B. /usr/local/bin
C. /google/scripts
D. ~/bin
Correct Answer
D. ~/bin
Explanation
Correct answer is D as only the $HOME directory is persisted across sessions, and ~/bin is within $HOME.
Refer GCP documentation - Cloud Shell
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory
on the virtual machine instance. This storage is on a per-user basis and is available across
projects. Unlike the instance itself, this storage does not time out on inactivity. All files you store
in your home directory, including installed software, scripts and user configuration files
like .bashrc and .vimrc, persist between sessions. Your $HOME directory is private to you and
cannot be accessed by other users.
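For example (the tool name is hypothetical), a downloaded binary can be kept under the home directory and added to PATH so it survives across Cloud Shell sessions:
mkdir -p ~/bin
cp ./mytool ~/bin/
chmod +x ~/bin/mytool
# ~/.bashrc also lives in $HOME, so this PATH change persists as well
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc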
Question : 49
Your organization requires that metrics from all applications be retained for 5 years for future
analysis in possible legal proceedings. Which approach should you use?
A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
Correct Answer
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
Explanation
Correct answer is B as Stackdriver monitoring metrics can be exported to BigQuery or Google Cloud Storage. However, as the need is for future analysis, BigQuery is the better option.
Refer GCP documentation - Stackdriver
Stackdriver Logging provides you with the ability to filter, search, and view logs from your cloud
and open source application services. Allows you to define metrics based on log contents that
are incorporated into dashboards and alerts. Enables you to export logs to BigQuery, Google
Cloud Storage, and Pub/Sub.
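As a sketch of the export path described above (the project, dataset, and sink names are hypothetical), an export sink to BigQuery can be created with gcloud:
bq mk --dataset my-project:metrics_archive
gcloud logging sinks create metrics-archive-sink \
    bigquery.googleapis.com/projects/my-project/datasets/metrics_archive
# Grant the sink's writer identity (shown in the command output) write access on the dataset.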
Option A is wrong as project logs are maintained in Stackdriver and it has limited data retention
capability.
Option C is wrong as Stackdriver cannot retain data for 5 years. Refer Stackdriver data retention
Option D is wrong as Google Cloud Storage does not provide analytics capability.
Question : 50
You're migrating an on-premises application to Google Cloud. The application uses a
component that requires a licensing server. The license server has the IP address 10.28.0.10.
You want to deploy the application without making any changes to the code or configuration.
How should you go about deploying the application?
A. Create a subnet with a CIDR range of 10.28.0.0/31. Reserve a static internal IP address
of 10.28.0.10. Assign the static address to the license server instance.
B. Create a subnet with a CIDR range of 10.28.0.0/30. Reserve a static internal IP address
of 10.28.0.10. Assign the static address to the license server instance.
C. Create a subnet with a CIDR range of 10.28.0.0/29. Reserve a static internal IP address
of 10.28.0.10. Assign the static address to the license server instance.
D. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve a static internal IP address
of 10.28.0.10. Assign the static address to the license server instance.
Correct Answer
D. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve a static internal IP address of
10.28.0.10. Assign the static address to the license server instance.
Explanation
Correct answer is D as only the CIDR range 10.28.0.0/28 would include the 10.28.0.10 address. It provides 16 IP addresses, i.e., 10.28.0.0 to 10.28.0.15.
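A sketch with gcloud (the network, region, zone, and resource names are assumptions):
# Subnet whose range contains 10.28.0.10
gcloud compute networks subnets create licensing-subnet \
    --network my-vpc --region us-central1 --range 10.28.0.0/28
# Reserve the specific internal address for the license server
gcloud compute addresses create license-server-ip \
    --region us-central1 --subnet licensing-subnet --addresses 10.28.0.10
# Create the license server instance with that address
gcloud compute instances create license-server \
    --zone us-central1-a --subnet licensing-subnet --private-network-ip 10.28.0.10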
Option A is wrong as the 10.28.0.0/31 CIDR range provides 2 IP addresses, i.e., 10.28.0.0 to 10.28.0.1.
Option B is wrong as the 10.28.0.0/30 CIDR range provides 4 IP addresses, i.e., 10.28.0.0 to 10.28.0.3.
Option C is wrong as the 10.28.0.0/29 CIDR range provides 8 IP addresses, i.e., 10.28.0.0 to 10.28.0.7.