IT Certification Guaranteed, The Easy Way!

Exam : Professional-Cloud-DevOps-Engineer

Title : Google Cloud Certified - Professional Cloud DevOps Engineer Exam

Vendor : Google

Version : V12.95


NO.1 You are writing a postmortem for an incident that severely affected users. You want to prevent
similar incidents in the future. Which two of the following sections should you include in the
postmortem? (Choose two.)
A. An explanation of the root cause of the incident
B. A list of employees responsible for causing the incident
C. A list of action items to prevent a recurrence of the incident
D. Your opinion of the incident's severity compared to past incidents
E. Copies of the design documents for all the services impacted by the incident
Answer: A,C

NO.2 You encountered a major service outage that affected all users of the service for multiple
hours. After several hours of incident management, the service returned to normal, and user access
was restored. You need to provide an incident summary to relevant stakeholders following the Site
Reliability Engineering recommended practices. What should you do first?
A. Call individual stakeholders to explain what happened.
B. Develop a post-mortem to be distributed to stakeholders.
C. Send the Incident State Document to all the stakeholders.
D. Require the engineer responsible to write an apology email to all stakeholders.
Answer: B

NO.3 Your team has recently deployed an NGINX-based application into Google Kubernetes Engine
(GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You
want to scale the deployment of the application's frontend using an appropriate Service Level
Indicator (SLI). What should you do?
A. Configure the horizontal pod autoscaler to use the average response time from the Liveness and
Readiness probes.
B. Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster
as pods expand.
C. Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the
number of requests provided by the GCLB.
D. Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request
metrics exposed by the NGINX deployment.
Answer: C
Explanation:
https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics

NO.4 You created a Stackdriver chart for CPU utilization in a dashboard within your workspace
project. You want to share the chart with your Site Reliability Engineering (SRE) team only. You want
to ensure you follow the principle of least privilege. What should you do?
A. Share the workspace Project ID with the SRE team. Assign the SRE team the Monitoring Viewer
IAM role in the workspace project.
B. Share the workspace Project ID with the SRE team. Assign the SRE team the Dashboard Viewer
IAM role in the workspace project.
C. Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the

2
IT Certification Guaranteed, The Easy Way!

Monitoring Viewer IAM role in the workspace project.


D. Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the
Dashboard Viewer IAM role in the workspace project.
Answer: C
Explanation:
https://cloud.google.com/monitoring/access-control

NO.5 Your company experiences bugs, outages, and slowness in its production systems. Developers
use the production environment for new feature development and bug fixes. Configuration and
experiments are done in the production environment, causing outages for users. Testers use the
production environment for load testing, which often slows the production systems. You need to
redesign the environment to reduce the number of bugs and outages in production and to enable
testers to load test new features. What should you do?
A. Create an automated testing script in production to detect failures as soon as they occur.
B. Create a development environment with smaller server capacity and give access only to
developers and testers.
C. Secure the production environment to ensure that developers can't change it and set up one
controlled update per year.
D. Create a development environment for writing code and a test environment for configurations,
experiments, and load testing.
Answer: D

NO.6 You support a large service with a well-defined Service Level Objective (SLO). The development
team deploys new releases of the service multiple times a week. If a major incident causes the
service to miss its SLO, you want the development team to shift its focus from working on features to
improving service reliability. What should you do before a major incident occurs?
A. Develop an appropriate error budget policy in cooperation with all service stakeholders.
B. Negotiate with the product team to always prioritize service reliability over releasing new
features.
C. Negotiate with the development team to reduce the release frequency to no more than once a
week.
D. Add a plugin to your Jenkins pipeline that prevents new releases whenever your service is out of
SLO.
Answer: A

NO.7 You are part of an organization that follows SRE practices and principles. You are taking over
the management of a new service from the Development Team, and you conduct a Production
Readiness Review (PRR). After the PRR analysis phase, you determine that the service cannot
currently meet its Service Level Objectives (SLOs). You want to ensure that the service can meet its
SLOs in production. What should you do next?
A. Adjust the SLO targets to be achievable by the service so you can bring it into production.
B. Notify the development team that they will have to provide production support for the service.
C. Identify recommended reliability improvements to the service to be completed before handover.
D. Bring the service into production with no SLOs and build them when you have collected
operational data.
Answer: C

NO.8 Your application images are built using Cloud Build and pushed to Google Container Registry
(GCR). You want to be able to specify a particular version of your application for deployment based
on the release version tagged in source control. What should you do when you push the image?
A. Reference the image digest in the source control tag.
B. Supply the source control tag as a parameter within the image name.
C. Use Cloud Build to include the release version tag in the application image.
D. Use GCR digest versioning to match the image to the tag in source control.
Answer: B
Explanation:
https://cloud.google.com/container-registry/docs/pushing-and-pulling

NO.9 Your application services run in Google Kubernetes Engine (GKE). You want to make sure that
only images from your centrally-managed Google Container Registry (GCR) image registry in the
altostrat-images project can be deployed to the cluster while minimizing development time. What
should you do?
A. Create a custom builder for Cloud Build that will only push images to gcr.io/altostrat-images.
B. Use a Binary Authorization policy that includes the whitelist name pattern gcr.io/altostrat-
images/.
C. Add logic to the deployment pipeline to check that all manifests contain only images from
gcr.io/altostrat-images.
D. Add a tag to each image in gcr.io/altostrat-images and check that this tag is present when the
image is deployed.
Answer: B

NO.10 Your team is designing a new application for deployment into Google Kubernetes Engine
(GKE). You need to set up monitoring to collect and aggregate various application-level metrics in a
centralized location. You want to use Google Cloud Platform services while minimizing the amount of
work required to set up monitoring. What should you do?
A. Publish various metrics from the application directly to the Stackdriver Monitoring API, and then
observe these custom metrics in Stackdriver.
B. Install the Cloud Pub/Sub client libraries, push various metrics from the application to various
topics, and then observe the aggregated metrics in Stackdriver.
C. Install the OpenTelemetry client libraries in the application, configure Stackdriver as the export
destination for the metrics, and then observe the application's metrics in Stackdriver.
D. Emit all metrics in the form of application-specific log messages, pass these messages from the
containers to the Stackdriver logging collector, and then observe metrics in Stackdriver.
Answer: A
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-
metrics#custom_metrics
https://github.com/GoogleCloudPlatform/k8s-stackdriver/blob/master/custom-metrics-stackdriver-
adapter/README.md
Your application can report a custom metric to Cloud Monitoring. You can
configure Kubernetes to respond to these metrics and scale your workload automatically. For
example, you can scale your application based on metrics such as queries per second, writes per
second, network performance, latency when communicating with a different application, or other
metrics that make sense for your workload. https://cloud.google.com/kubernetes-
engine/docs/concepts/custom-and-external-metrics
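For illustration only (not part of the original question set), the following is a minimal Python sketch of writing one custom metric point through the Monitoring API, assuming the google-cloud-monitoring client library; the project ID and metric name are placeholders.
```python
# Hedged sketch: report one custom metric data point to Cloud Monitoring.
import time
from google.cloud import monitoring_v3

project_id = "my-project"  # placeholder
client = monitoring_v3.MetricServiceClient()
project_name = f"projects/{project_id}"

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/app/queue_depth"  # example custom metric
series.resource.type = "global"

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 42}})
series.points = [point]

# A real service would call this periodically as the metric changes.
client.create_time_series(name=project_name, time_series=[series])
```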

NO.11 You are developing a strategy for monitoring your Google Cloud Platform (GCP) projects in
production using Stackdriver Workspaces. One of the requirements is to be able to quickly identify
and react to production environment issues without false alerts from development and staging
projects. You want to ensure that you adhere to the principle of least privilege when providing
relevant team members with access to Stackdriver Workspaces. What should you do?
A. Grant relevant team members read access to all GCP production projects. Create Stackdriver
workspaces inside each project.
B. Grant relevant team members the Project Viewer IAM role on all GCP production projects. Create
Stackdriver workspaces inside each project.
C. Choose an existing GCP production project to host the monitoring workspace. Attach the
production projects to this workspace. Grant relevant team members read access to the Stackdriver
Workspace.
D. Create a new GCP monitoring project, and create a Stackdriver Workspace inside it. Attach the
production projects to this workspace. Grant relevant team members read access to the Stackdriver
Workspace.
Answer: D
Explanation:
"A Project can host many Projects and appear in many Projects, but it can only be used as the scoping
project once. We recommend that you create a new Project for the purpose of having multiple
Projects in the same scope."

NO.12 You support a user-facing web application. When analyzing the application's error budget
over the previous six months, you notice that the application has never consumed more than 5% of
its error budget in any given time window. You hold a Service Level Objective (SLO) review with
business stakeholders and confirm that the SLO is set appropriately. You want your application's SLO
to more closely reflect its observed reliability. What steps can you take to further that goal while
balancing velocity, reliability, and business needs? (Choose two.)
A. Add more serving capacity to all of your application's zones.
B. Have more frequent or potentially risky application releases.
C. Tighten the SLO to match the application's observed reliability.
D. Implement and measure additional Service Level Indicators (SLIs) for the application.
E. Announce planned downtime to consume more error budget, and ensure that users are not
depending on a tighter SLO.
Answer: A,D
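To make the "5% of its error budget" figure concrete, here is a small illustrative Python calculation; the SLO target and request counts are invented for the example, not taken from the question.
```python
# Illustrative error-budget arithmetic for a request-based SLO.
slo_target = 0.999                 # assumed 99.9% availability SLO
total_requests = 10_000_000        # assumed traffic over the SLO window

error_budget = (1 - slo_target) * total_requests   # 10,000 "bad" requests allowed
observed_bad_requests = 450                        # assumed observed failures

budget_consumed = observed_bad_requests / error_budget
print(f"Error budget consumed: {budget_consumed:.1%}")   # 4.5% -- well under budget
```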

NO.13 You need to reduce the cost of virtual machines (VMs) for your organization. After reviewing
different options, you decide to leverage preemptible VM instances. Which application is suitable for
preemptible VMs?
A. A scalable in-memory caching system


B. The organization's public-facing website
C. A distributed, eventually consistent NoSQL database cluster with sufficient quorum
D. A GPU-accelerated video rendering platform that retrieves and stores videos in a storage bucket
Answer: D
Explanation:
https://cloud.google.com/compute/docs/instances/preemptible

NO.14 You are running an application on Compute Engine and collecting logs through Stackdriver.
You discover that some personally identifiable information (PII) is leaking into certain log entry fields.
All PII entries begin with the text userinfo. You want to capture these log entries in a secure location
for later review and prevent them from leaking to Stackdriver Logging. What should you do?
A. Create a basic log filter matching userinfo, and then configure a log export in the Stackdriver
console with Cloud Storage as a sink.
B. Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo,
and then copy the entries to a Cloud Storage bucket.
C. Create an advanced log filter matching userinfo, configure a log export in the Stackdriver console
with Cloud Storage as a sink, and then configure a log exclusion with userinfo as a filter.
D. Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo,
create an advanced log filter matching userinfo, and then configure a log export in the Stackdriver
console with Cloud Storage as a sink.
Answer: B
Explanation:
https://medium.com/google-cloud/fluentd-filter-plugin-for-google-cloud-data-loss-prevention-api-
42bbb1308e76

NO.15 You support a service that recently had an outage. The outage was caused by a new release
that exhausted the service memory resources. You rolled back the release successfully to mitigate
the impact on users. You are now in charge of the post-mortem for the outage. You want to follow
Site Reliability Engineering practices when developing the post-mortem. What should you do?
A. Focus on developing new features rather than avoiding the outages from recurring.
B. Focus on identifying the contributing causes of the incident rather than the individual responsible
for the cause.
C. Plan individual meetings with all the engineers involved. Determine who approved and pushed the
new release to production.
D. Use the Git history to find the related code commit. Prevent the engineer who made that commit
from working on production services.
Answer: B

NO.16 You are performing a semiannual capacity planning exercise for your flagship service. You
expect a service user growth rate of 10% month-over-month over the next six months. Your service is
fully containerized and runs on Google Cloud Platform (GCP), using a Google Kubernetes Engine (GKE)
Standard regional cluster on three zones with cluster autoscaler enabled. You currently consume
about 30% of your total deployed CPU capacity, and you require resilience against the failure of a
zone. You want to ensure that your users experience minimal negative impact as a result of this
growth or as a result of zone failure, while avoiding unnecessary costs. How should you prepare to
handle the predicted growth?
A. Verify the maximum node pool size, enable a horizontal pod autoscaler, and then perform a load
test to verify your expected resource needs.
B. Because you are deployed on GKE and are using a cluster autoscaler, your GKE cluster will scale
automatically, regardless of growth rate.
C. Because you are at only 30% utilization, you have significant headroom and you won't need to add
any additional capacity for this rate of growth.
D. Proactively add 60% more node capacity to account for six months of 10% growth rate, and then
perform a load test to make sure you have enough capacity.
Answer: A
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler The Horizontal
Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or
decreasing the number of Pods in response to the workload's CPU or memory consumption
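As a rough illustration of the capacity math behind this scenario (the growth rate and utilization come from the question; the calculation itself is only a sketch):
```python
# Projected utilization after six months of 10% month-over-month growth,
# and during the loss of one of three zones.
current_utilization = 0.30     # 30% of deployed CPU capacity used today
monthly_growth = 0.10
months = 6

projected = current_utilization * (1 + monthly_growth) ** months
print(f"Projected utilization of current capacity: {projected:.2f}")             # ~0.53

# With one of three zones down, only 2/3 of the capacity remains available.
during_zone_failure = projected / (2 / 3)
print(f"Utilization if one zone fails after growth: {during_zone_failure:.2f}")  # ~0.80
```
Because the projected load still fits, the remaining work is to confirm node pool limits, enable horizontal pod autoscaling, and load test, as option A describes.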

NO.17 You are deploying an application that needs to access sensitive information. You need to
ensure that this information is encrypted and the risk of exposure is minimal if a breach occurs. What
should you do?
A. Store the encryption keys in Cloud Key Management Service (KMS) and rotate the keys frequently
B. Inject the secret at the time of instance creation via an encrypted configuration management
system.
C. Integrate the application with a Single sign-on (SSO) system and do not expose secrets to the
application
D. Leverage a continuous build pipeline that produces multiple versions of the secret for each
instance of the application.
Answer: A

NO.18 You use Cloud Build to build and deploy your application. You want to securely incorporate
database credentials and other application secrets into the build pipeline. You also want to minimize
the development effort. What should you do?
A. Create a Cloud Storage bucket and use the built-in encryption at rest. Store the secrets in the
bucket and grant Cloud Build access to the bucket.
B. Encrypt the secrets and store them in the application repository. Store a decryption key in a
separate repository and grant Cloud Build access to the repository.
C. Use client-side encryption to encrypt the secrets and store them in a Cloud Storage bucket. Store a
decryption key in the bucket and grant Cloud Build access to the bucket.
D. Use Cloud Key Management Service (Cloud KMS) to encrypt the secrets and include them in your
Cloud Build deployment configuration. Grant Cloud Build access to the KeyRing.
Answer: D
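A hedged Python sketch of the encryption step follows, assuming the google-cloud-kms client library; the project, key ring, and key names are placeholders.
```python
# Encrypt a build secret with Cloud KMS so only the ciphertext is stored in the
# Cloud Build configuration; Cloud Build decrypts it at build time.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "global", "my-keyring", "build-secrets")

response = client.encrypt(request={"name": key_name, "plaintext": b"db-password-placeholder"})
print(response.ciphertext)  # check this ciphertext into the build config
```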

NO.19 You are managing the production deployment to a set of Google Kubernetes Engine (GKE)
clusters. You want to make sure only images which are successfully built by your trusted CI/CD
pipeline are deployed to production. What should you do?


A. Enable Cloud Security Scanner on the clusters.
B. Enable Vulnerability Analysis on the Container Registry.
C. Set up the Kubernetes Engine clusters as private clusters.
D. Set up the Kubernetes Engine clusters with Binary Authorization.
Answer: D
Explanation:
https://cloud.google.com/binary-authorization/docs/overview

NO.20 You manage several production systems that run on Compute Engine in the same Google
Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances.
You want to know how much it costs to run each of the systems. What should you do?
A. In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per
system.
B. Assign all instances a label specific to the system they run. Configure BigQuery billing export and
query costs per label.
C. Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging
to export to BigQuery, and query costs based on the metadata.
D. Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud
Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.
Answer: D

NO.21 You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you
are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application
required by your company into production. This application is written by a third party and cannot be
modified or reconfigured. The application writes its log information to
/var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What
should you do?
A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
B. Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to
tail the log file in the application's pods and write to Stackdriver Logging.
C. Install Kubernetes on Google Compute Engine (GCE) and redeploy your applications. Then
customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods
and write to Stackdriver Logging.


D. Write a script to tail the log file within the pod and write entries to standard output. Run the script
as a sidecar container with the application's pod. Configure a shared volume between the containers
to allow the script to have read access to /var/log in the application container.
Answer: B
Explanation:
https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd Besides the list of
default logs that the Logging agent streams by default, you can customize the Logging agent to send
additional logs to Logging or to adjust agent settings by adding input configurations. The
configuration definitions in these sections apply to the fluent-plugin-google-cloud output plugin only
and specify how logs are transformed and ingested into Cloud Logging.
https://cloud.google.com/logging/docs/agent/logging/configuration#configure

NO.22 Your application images are built and pushed to Google Container Registry (GCR). You want to
build an automated pipeline that deploys the application when the image is updated while
minimizing the development effort. What should you do?
A. Use Cloud Build to trigger a Spinnaker pipeline.
B. Use Cloud Pub/Sub to trigger a Spinnaker pipeline.
C. Use a custom builder in Cloud Build to trigger a Jenkins pipeline.
D. Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine
(GKE).
Answer: B
Explanation:
https://cloud.google.com/architecture/continuous-delivery-toolchain-spinnaker-cloud
https://spinnaker.io/guides/user/pipeline/triggers/pubsub/

NO.23 You support a Node.js application running on Google Kubernetes Engine (GKE) in production.
The application makes several HTTP requests to dependent applications. You want to anticipate
which dependent applications might cause performance issues. What should you do?
A. Instrument all applications with Stackdriver Profiler.
B. Instrument all applications with Stackdriver Trace and review inter-service HTTP requests.
C. Use Stackdriver Debugger to review the execution of logic within each application to instrument all
applications.
D. Modify the Node.js application to log HTTP request and response times to dependent applications.
Use Stackdriver Logging to find dependent applications that are performing poorly.
Answer: B

NO.24 You support a high-traffic web application and want to ensure that the home page loads in a
timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent
home page request latency with an acceptable page load time set to 100 ms. What is the Google-
recommended way of calculating this SLI?
A. Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
B. Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
C. Count the number of home page requests that load in under 100 ms, and then divide by the total
number of home page requests.
D. Count the number of home page requests that load in under 100 ms, and then divide by the total
number of all web application requests.
Answer: C
Explanation:
https://sre.google/workbook/implementing-slos/
The SRE books recommend treating the SLI as the ratio of two numbers: the
number of good events divided by the total number of events. For example: number of successful
HTTP requests / total HTTP requests (success rate).
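A minimal Python sketch of that good-events/total-events ratio, using invented latency samples:
```python
# SLI = good events / total events, with "good" meaning faster than 100 ms.
LATENCY_THRESHOLD_MS = 100

def home_page_latency_sli(latencies_ms):
    """Fraction of home page requests that loaded under the threshold."""
    if not latencies_ms:
        return 1.0
    good = sum(1 for latency in latencies_ms if latency < LATENCY_THRESHOLD_MS)
    return good / len(latencies_ms)

print(home_page_latency_sli([42, 87, 150, 95, 230]))  # 0.6 -> 60% of requests were good
```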

NO.25 You need to deploy a new service to production. The service needs to automatically scale
using a Managed Instance Group (MIG) and should be deployed over multiple regions. The service
needs a large number of resources for each instance and you need to plan for capacity. What should
you do?
A. Use the n1-highcpu-96 machine type in the configuration of the MIG.
B. Monitor results of Stackdriver Trace to determine the required amount of resources.
C. Validate that the resource requirements are within the available quota limits of each region.
D. Deploy the service in one region and use a global load balancer to route traffic to this region.
Answer: C
Explanation:
https://cloud.google.com/compute/quotas#understanding_quotas
https://cloud.google.com/compute/quotas

NO.26 Your organization recently adopted a container-based workflow for application development.
Your team develops numerous applications that are deployed continuously through an automated
build pipeline to the production environment. A recent security audit alerted your team that the code
pushed to production could contain vulnerabilities and that the existing tooling around virtual
machine (VM) vulnerabilities no longer applies to the containerized environment. You need to ensure
the security and patch level of all code running through the pipeline. What should you do?
A. Set up Container Analysis to scan and report Common Vulnerabilities and Exposures.
B. Configure the containers in the build pipeline to always update themselves before release.
C. Reconfigure the existing operating system vulnerability software to exist inside the container.
D. Implement static code analysis tooling against the Dockerfiles used to create the containers.
Answer: A

NO.27 You support a high-traffic web application with a microservice architecture. The home page
of the application displays multiple widgets containing content such as the current weather, stock
prices, and news headlines. The main serving thread makes a call to a dedicated microservice for
each widget and then lays out the homepage for the user. The microservices occasionally fail; when
that happens, the serving thread serves the homepage with some missing content. Users of the
application are unhappy if this degraded mode occurs too frequently, but they would rather have
some content served instead of no content at all. You want to set a Service Level Objective (SLO) to
ensure that the user experience does not degrade too much. What Service Level Indicator (SLI)
should you use to measure this?
A. A quality SLI: the ratio of non-degraded responses to total responses
B. An availability SLI: the ratio of healthy microservices to the total number of microservices
C. A freshness SLI: the proportion of widgets that have been updated within the last 10 minutes
D. A latency SLI: the ratio of microservice calls that complete in under 100 ms to the total number of
microservice calls
Answer: B
Explanation:
https://cloud.google.com/blog/products/gcp/available-or-not-that-is-the-question-cre-life-lessons

NO.28 You deploy a new release of an internal application during a weekend maintenance window
when there is minimal user traffic. After the window ends, you learn that one of the new features
isn't working as expected in the production environment. After an extended outage, you roll back the
new release and deploy a fix. You want to modify your release process to reduce the mean time to
recovery so you can avoid extended outages in the future. What should you do?
Choose 2 answers
A. Before merging new code, require 2 different peers to review the code changes.
B. Adopt the blue/green deployment strategy when releasing new code via a CD server.
C. Integrate a code linting tool to validate coding standards before any code is accepted into the
repository.
D. Require developers to run automated integration tests on their local development environments
before release.
E. Configure a CI server. Add a suite of unit tests to your code and have your CI server run them on
commit and verify any changes.
Answer: B,E

NO.29 You support an application running on GCP and want to configure SMS notifications to your
team for the most critical alerts in Stackdriver Monitoring. You have already identified the alerting
policies you want to configure this for. What should you do?
A. Download and configure a third-party integration between Stackdriver Monitoring and an SMS
gateway. Ensure that your team members add their SMS/phone numbers to the external tool.
B. Select the Webhook notifications option for each alerting policy, and configure it to use a third-
party integration tool. Ensure that your team members add their SMS/phone numbers to the
external tool.
C. Ensure that your team members set their SMS/phone numbers in their Stackdriver Profile. Select
the SMS notification option for each alerting policy and then select the appropriate SMS/phone
numbers from the list.
D. Configure a Slack notification for each alerting policy. Set up a Slack-to-SMS integration to send
SMS messages when Slack messages are received. Ensure that your team members add their
SMS/phone numbers to the external integration.
Answer: C
Explanation:
https://cloud.google.com/monitoring/support/notification-options#creating_channels
To configure SMS notifications, do the following:
In the SMS section, click Add new and follow the instructions. Click Save. When you set up your
alerting policy, select the SMS notification type and choose a verified phone number from the list.

NO.30 You support a trading application written in Python and hosted on App Engine flexible
environment. You want to customize the error information being sent to Stackdriver Error Reporting.
What should you do?
A. Install the Stackdriver Error Reporting library for Python, and then run your code on a Compute
Engine VM.
B. Install the Stackdriver Error Reporting library for Python, and then run your code on Google
Kubernetes Engine.
C. Install the Stackdriver Error Reporting library for Python, and then run your code on App Engine
flexible environment.
D. Use the Stackdriver Error Reporting API to write errors from your application to
ReportedErrorEvent, and then generate log entries with properly formatted error messages in
Stackdriver Logging.
Answer: C
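For illustration, a short Python sketch of the library-based approach, assuming the google-cloud-error-reporting package; the service name, version, and function are placeholders.
```python
# Report handled exceptions (with stack traces) and custom messages to
# Error Reporting from an App Engine flexible service.
from google.cloud import error_reporting

client = error_reporting.Client(service="trading-api", version="1.2.3")  # placeholders

def place_order(order):
    try:
        raise ValueError("illustrative failure")   # stand-in for real business logic
    except Exception:
        client.report_exception()                  # sends the active exception

# Messages can also be reported without an active exception:
client.report("Order book snapshot was stale")
```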

NO.31 You have an application running in Google Kubernetes Engine. The application invokes
multiple services per request but responds too slowly. You need to identify which downstream
service or services are causing the delay. What should you do?
A. Analyze VPC flow logs along the path of the request.
B. Investigate the Liveness and Readiness probes for each service.
C. Create a Dataflow pipeline to analyze service metrics in real time.
D. Use a distributed tracing framework such as OpenTelemetry or Stackdriver Trace.
Answer: D
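A hedged sketch of such instrumentation in Python, assuming the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages; the span name and downstream call are placeholders.
```python
# Export OpenTelemetry spans to Cloud Trace so slow downstream calls show up
# in the trace waterfall.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def call_downstream_service():
    # Wrap each outbound call in a span; the span duration reveals which
    # dependency is adding latency.
    with tracer.start_as_current_span("call-inventory-service"):
        pass  # e.g. an HTTP request to the dependent service would go here
```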

NO.32 You support a web application that is hosted on Compute Engine. The application provides a
booking service for thousands of users. Shortly after the release of a new feature, your monitoring
dashboard shows that all users are experiencing latency at login. You want to mitigate the impact of
the incident on the users of your service. What should you do first?
A. Roll back the recent release.
B. Review the Stackdriver monitoring.
C. Upsize the virtual machines running the login services.
D. Deploy a new release to see whether it fixes the problem.
Answer: C

NO.33 You have a CI/CD pipeline that uses Cloud Build to build new Docker images and push them
to Docker Hub. You use Git for code versioning. After making a change in the Cloud Build YAML
configuration, you notice that no new artifacts are being built by the pipeline. You need to resolve
the issue following Site Reliability Engineering practices. What should you do?
A. Disable the CI pipeline and revert to manually building and pushing the artifacts.
B. Change the CI pipeline to push the artifacts to Container Registry instead of Docker Hub.
C. Upload the configuration YAML file to Cloud Storage and use Error Reporting to identify and fix the
issue.
D. Run a Git compare between the previous and current Cloud Build Configuration files to find and fix
the bug.
Answer: D
Explanation:
"After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are
being built by the pipeline"- means something wrong on the recent change not with the image
registry.

NO.34 Your application artifacts are being built and deployed via a CI/CD pipeline. You want the
CI/CD pipeline to securely access application secrets. You also want to more easily rotate secrets in
case of a security breach. What should you do?
A. Prompt developers for secrets at build time. Instruct developers to not store secrets at rest.
B. Store secrets in a separate configuration file on Git. Provide select developers with access to the
configuration file.
C. Store secrets in Cloud Storage encrypted with a key from Cloud KMS. Provide the CI/CD pipeline
with access to Cloud KMS via IAM.
D. Encrypt the secrets and store them in the source code repository. Store a decryption key in a
separate repository and grant your pipeline access to it
Answer: C
Explanation:
The correct answer has to involve the use of Cloud KMS.

NO.35 You are responsible for the reliability of a high-volume enterprise application. A large number
of users report that an important subset of the application's functionality - a data intensive reporting
feature - is consistently failing with an HTTP 500 error. When you investigate your application's
dashboards, you notice a strong correlation between the failures and a metric that represents the
size of an internal queue used for generating reports. You trace the failures to a reporting backend
that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend's persistent
disk (PD). Now you need to create an availability Service Level Indicator (SLI) for the report
generation feature. How would you define it?
A. As the proportion of report generation requests that result in a successful response
B. As the application's report generation queue size compared to a known-good threshold
C. As the I/O wait times aggregated across all report generation backends
D. As the reporting backend PD throughput capacity compared to a known-good threshold
Answer: B
Explanation:
The question states that the correlation is between the queue size and the failures, so the SLI should
be based on that known correlation rather than on an imaginary one.

NO.36 You use Spinnaker to deploy your application and have created a canary deployment stage in
the pipeline. Your application has an in-memory cache that loads objects at start time. You want to
automate the comparison of the canary version against the production version. How should you
configure the canary analysis?
A. Compare the canary with a new deployment of the current production version.
B. Compare the canary with a new deployment of the previous production version.
C. Compare the canary with the existing deployment of the current production version.
D. Compare the canary with the average performance of a sliding window of previous production
versions.
Answer: A
Explanation:
https://cloud.google.com/architecture/automated-canary-analysis-kubernetes-engine-spinnaker
https://spinnaker.io/guides/user/canary/best-practices/#compare-canary-against-baseline-not-
against-production

NO.37 You are running an application on Compute Engine and collecting logs through Stackdriver.
You discover that some personally identifiable information (PII) is leaking into certain log entry fields.
You want to prevent these fields from being written in new log entries as quickly as possible. What
should you do?
A. Use the filter-record-transformer Fluentd filter plugin to remove the fields from the log entries in
flight.
B. Use the fluent-plugin-record-reformer Fluentd output plugin to remove the fields from the log
entries in flight.
C. Wait for the application developers to patch the application, and then verify that the log entries
are no longer exposing PII.
D. Stage log entries to Cloud Storage, and then trigger a Cloud Function to remove the fields and
write the entries to Stackdriver via the Stackdriver Logging API.
Answer: B

NO.38 You are responsible for creating and modifying the Terraform templates that define your
Infrastructure. Because two new engineers will also be working on the same code, you need to define
a process and adopt a tool that will prevent you from overwriting each other's code. You also want to
ensure that you capture all updates in the latest version. What should you do?
A. * Store your code in a Git-based version control system.
* Establish a process that allows developers to merge their own changes at the end of each day.
* Package and upload code to a versioned Cloud Storage bucket as the latest master version.
B. * Store your code in a Git-based version control system.
* Establish a process that includes code reviews by peers and unit testing to ensure integrity and
functionality before integration of code.
* Establish a process where the fully integrated code in the repository becomes the latest master
version.
C. * Store your code as text files in Google Drive in a defined folder structure that organizes the files.
* At the end of each day, confirm that all changes have been captured in the files within the folder
structure.
* Rename the folder structure with a predefined naming convention that increments the version.
D. * Store your code as text files in Google Drive in a defined folder structure that organizes the files.
* At the end of each day, confirm that all changes have been captured in the files within the folder
structure and create a new .zip archive with a predefined naming convention.
* Upload the .zip archive to a versioned Cloud Storage bucket and accept it as the latest version.
Answer: B

NO.39 You are running an application in a virtual machine (VM) using a custom Debian image. The
image has the Stackdriver Logging agent installed. The VM has the cloud-platform scope. The
application is logging information via syslog. You want to use Stackdriver Logging in the Google Cloud
Platform Console to visualize the logs. You notice that syslog is not showing up in the "All logs"
dropdown list of the Logs Viewer. What is the first thing you should do?
A. Look for the agent's test log entry in the Logs Viewer.
B. Install the most recent version of the Stackdriver agent.
C. Verify the VM service account access scope includes the monitoring.write scope.
D. SSH to the VM and execute the following commands on your VM: ps ax | grep fluentd
Answer: D
Explanation:
Checking access scopes is not the right first step here, because the VM already has the cloud-platform
scope, so there is no permission issue. Running ps ax | grep fluentd first confirms that the required
log-collecting agent is actually running on the machine.
https://cloud.google.com/compute/docs/access/service-
accounts#associating_a_service_account_to_an_instance

NO.40 You are working with a government agency that requires you to archive application logs for
seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of
storage. What should you do?


A. Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.
B. Develop an App Engine application that pulls the logs from Stackdriver and saves them in
BigQuery.
C. Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage
for seven years.
D. Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs,
and then select the bucket as the log export destination.
Answer: D
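A hedged Python sketch of creating such a sink programmatically, assuming the google-cloud-logging client library; the sink name, filter, and bucket are placeholders.
```python
# Create a Cloud Logging sink that exports matching entries to a Cloud Storage
# bucket (a Coldline or Archive storage class keeps long-term costs low).
from google.cloud import logging as gcloud_logging

client = gcloud_logging.Client()
sink = client.sink(
    "archive-app-logs",
    filter_='resource.type="gae_app"',
    destination="storage.googleapis.com/my-log-archive-bucket",
)
sink.create()
# The bucket's IAM policy must also grant the sink's writer identity permission
# to create objects before exported logs will appear.
```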

NO.41 Your team of Infrastructure DevOps Engineers is growing, and you are starting to use
Terraform to manage infrastructure. You need a way to implement code versioning and to share code
with other team members. What should you do?
A. Store the Terraform code in a version-control system. Establish procedures for pushing new
versions and merging with the master.
B. Store the Terraform code in a network shared folder with child folders for each
version release. Ensure that everyone works on different files.
C. Store the Terraform code in a Cloud Storage bucket using object versioning. Give access to the
bucket to every team member so they can download the files.
D. Store the Terraform code in a shared Google Drive folder so it syncs automatically to every team
member's computer. Organize files with a naming convention that identifies each new version.
Answer: A

NO.42 You are managing an application that exposes an HTTP endpoint without using a load
balancer. The latency of the HTTP responses is important for the user experience. You want to
understand what HTTP latencies all of your users are experiencing. You use Stackdriver Monitoring.
What should you do?
A. * In your application, create a metric with a metricKind set to DELTA and a valueType set to
DOUBLE.
* In Stackdriver's Metrics Explorer, use a Stacked Bar graph to visualize the metric.
B. * In your application, create a metric with a metricKind set to CUMULATIVE and a valueType set to
DOUBLE.
* In Stackdriver's Metrics Explorer, use a Line graph to visualize the metric.
C. * In your application, create a metric with a metricKind set to gauge and a valueType set to
distribution.
* In Stackdriver's Metrics Explorer, use a Heatmap graph to visualize the metric.
D. * In your application, create a metric with a metricKind set to METRIC_KIND_UNSPECIFIED and a
valueType set to INT64.
* In Stackdriver's Metrics Explorer, use a Stacked Area graph to visualize the metric.
Answer: C
Explanation:
https://sre.google/workbook/implementing-slos/
https://cloud.google.com/architecture/adopting-slos/
Latency is commonly measured as a distribution. Given a distribution, you can measure various
percentiles. For example, you might measure the number of requests that are slower than the
historical 99th percentile.
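A small Python illustration of reading percentiles from a latency distribution (the sample values are invented):
```python
# Latency is recorded as a distribution; arbitrary percentiles can then be read
# from it, which is what a DISTRIBUTION-valued metric and a heatmap expose.
import statistics

latencies_ms = [12, 18, 22, 25, 31, 44, 58, 73, 120, 950]

percentiles = statistics.quantiles(latencies_ms, n=100)   # 1st..99th percentile cut points
p50, p95, p99 = percentiles[49], percentiles[94], percentiles[98]
print(f"p50={p50:.0f} ms  p95={p95:.0f} ms  p99={p99:.0f} ms")
```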

NO.43 You support a web application that runs on App Engine and uses CloudSQL and Cloud Storage
for data storage. After a short spike in website traffic, you notice a big increase in latency for all user
requests, an increase in CPU use, and an increase in the number of processes running the application. Initial
troubleshooting reveals:
After the initial spike in traffic, load levels returned to normal but users still experience high latency.
Requests for content from the CloudSQL database and images from Cloud Storage show the same
high latency.
No changes were made to the website around the time the latency increased.
There is no increase in the number of errors to the users.
You expect another spike in website traffic in the coming days and want to make sure users don't
experience latency. What should you do?
A. Upgrade the GCS buckets to Multi-Regional.
B. Enable high availability on the CloudSQL instances.
C. Move the application from App Engine to Compute Engine.
D. Modify the App Engine configuration to have additional idle instances.
Answer: B

NO.44 You are on-call for an infrastructure service that has a large number of dependent systems.
You receive an alert indicating that the service is failing to serve most of its requests and all of its
dependent systems with hundreds of thousands of users are affected. As part of your Site Reliability
Engineering (SRE) incident management protocol, you declare yourself Incident Commander (IC) and
pull in two experienced people from your team as Operations Lead (OL) and Communications Lead
(CL). What should you do next?
A. Look for ways to mitigate user impact and deploy the mitigations to production.
B. Contact the affected service owners and update them on the status of the incident.
C. Establish a communication channel where incident responders and leads can communicate with
each other.
D. Start a postmortem, add incident information, circulate the draft internally, and ask internal
stakeholders for input.
Answer: A
Explanation:
https://sre.google/sre-book/managing-incidents/

NO.45 Your team is designing a new application for deployment both inside and outside Google
Cloud Platform (GCP). You need to collect detailed metrics such as system resource utilization. You
want to use centralized GCP services while minimizing the amount of work required to set up this
collection system. What should you do?
A. Import the Stackdriver Profiler package, and configure it to relay function timing data to
Stackdriver for further analysis.
B. Import the Stackdriver Debugger package, and configure the application to emit debug messages
with timing information.
C. Instrument the code using a timing library, and publish the metrics via a health check endpoint
that is scraped by Stackdriver.
D. Install an Application Performance Monitoring (APM) tool in both locations, and configure an
export to a central data storage location for analysis.
Answer: A

NO.46 Your company follows Site Reliability Engineering practices. You are the person in charge of
Communications for a large, ongoing incident affecting your customer-facing applications. There is
still no estimated time for a resolution of the outage. You are receiving emails from internal
stakeholders who want updates on the outage, as well as emails from customers who want to know
what is happening. You want to efficiently provide updates to everyone affected by the outage. What
should you do?
A. Focus on responding to internal stakeholders at least every 30 minutes. Commit to "next update"
times.
B. Provide periodic updates to all stakeholders in a timely manner. Commit to a "next update" time in
all communications.
C. Delegate the responding to internal stakeholder emails to another member of the Incident
Response Team. Focus on providing responses directly to customers.
D. Provide all internal stakeholder emails to the Incident Commander, and allow them to manage
internal communications. Focus on providing responses directly to customers.
Answer: B
Explanation:
When disaster strikes, the person who declares the incident typically steps into the IC role and directs
the high-level state of the incident. The IC concentrates on the 3Cs and does the following:
Commands and coordinates the incident response, delegating roles as needed. By default, the IC
assumes all roles that have not been delegated yet. Communicates effectively. Stays in control of the
incident response. Works with other responders to resolve the incident.
https://sre.google/workbook/incident-response/

NO.47 Your application runs on Google Cloud Platform (GCP). You need to implement Jenkins for
deploying application releases to GCP. You want to streamline the release process, lower operational
toil, and keep user data secure. What should you do?
A. Implement Jenkins on local workstations.
B. Implement Jenkins on Kubernetes on-premises
C. Implement Jenkins on Google Cloud Functions.
D. Implement Jenkins on Compute Engine virtual machines.
Answer: D

NO.48 You support an e-commerce application that runs on a large Google Kubernetes Engine (GKE)
cluster deployed on-premises and on Google Cloud Platform. The application consists of
microservices that run in containers. You want to identify containers that are using the most CPU and
memory. What should you do?
A. Use Stackdriver Kubernetes Engine Monitoring.
B. Use Prometheus to collect and aggregate logs per container, and then analyze the results in
Grafana.
C. Use the Stackdriver Monitoring API to create custom metrics, and then organize your containers
using groups.
D. Use Stackdriver Logging to export application logs to BigQuery, aggregate logs per container, and
then analyze CPU and memory consumption.
Answer: A
Explanation:
https://cloud.google.com/anthos/clusters/docs/on-prem/1.7/concepts/logging-and-monitoring

NO.49 You are running a real-time gaming application on Compute Engine that has a production and
testing environment. Each environment has its own Virtual Private Cloud (VPC) network. The
application frontend and backend servers are located on different subnets in the environment's VPC.
You suspect there is a malicious process communicating intermittently in your production frontend
servers. You want to ensure that network traffic is captured for analysis. What should you do?
A. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a
sample volume scale of 0.5.
B. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a
sample volume scale of 1.0.
C. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets
with a volume scale of 0.5. Apply changes in testing before production.
D. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets
with a volume scale of 1.0. Apply changes in testing before production.
Answer: D

NO.50 You are using Stackdriver to monitor applications hosted on Google Cloud Platform (GCP).
You recently deployed a new application, but its logs are not appearing on the Stackdriver dashboard.
You need to troubleshoot the issue. What should you do?
A. Confirm that the Stackdriver agent has been installed in the hosting virtual machine.
B. Confirm that your account has the proper permissions to use the Stackdriver dashboard.
C. Confirm that port 25 has been opened in the firewall to allow messages through to Stackdriver.
D. Confirm that the application is using the required client library and the service account key has
proper permissions.
Answer: B

NO.51 You support an application running on App Engine. The application is used globally and
accessed from various device types. You want to know the number of connections. You are using
Stackdriver Monitoring for App Engine. What metric should you use?
A. flex/connections/current
B. tcp_ssl_proxy/new_connections
C. tcp_ssl_proxy/open_connections
D. flex/instance/connections/current
Answer: A
Explanation:
https://cloud.google.com/monitoring/api/metrics_gcp#gcp-appengine
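For illustration, a hedged Python sketch of reading this metric through the Monitoring API, assuming the google-cloud-monitoring client library; the project ID is a placeholder and the value type is assumed to be an INT64 gauge.
```python
# List recent values of the App Engine flexible "current connections" metric.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"   # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 600}}
)
results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "appengine.googleapis.com/flex/connections/current"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    for point in series.points:
        print(point.value.int64_value)   # assumes an INT64-valued gauge
```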

NO.52 You use Spinnaker to deploy your application and have created a canary deployment stage in
the pipeline. Your application has an in-memory cache that loads objects at start time. You want to
automate the comparison of the canary version against the production version. How should you
configure the canary analysis?
A. Compare the canary with the average performance of a sliding window of previous production
versions.
B. Compare the canary with a new deployment of the previous production version.
C. Compare the canary with a new deployment of the current production version.
D. Compare the canary with the existing deployment of the current production version.
Answer: C

NO.53 Your organization wants to implement Site Reliability Engineering (SRE) culture and
principles. Recently, a service that you support had a limited outage. A manager on another team
asks you to provide a formal explanation of what happened so they can action remediations. What
should you do?
A. Develop a postmortem that includes the root causes, resolution, lessons learned, and a prioritized
list of action items. Share it with the manager only.
B. Develop a postmortem that includes the root causes, resolution, lessons learned, and a prioritized
list of action items. Share it on the engineering organization's document portal.
C. Develop a postmortem that includes the root causes, resolution, lessons learned, the list of people
responsible, and a list of action items for each person. Share it with the manager only.
D. Develop a postmortem that includes the root causes, resolution, lessons learned, the list of people
responsible, and a list of action items for each person. Share it on the engineering organization's
document portal.
Answer: B

NO.54 You support a high-traffic web application that runs on Google Cloud Platform (GCP). You
need to measure application reliability from a user perspective without making any engineering
changes to it. What should you do?
Choose 2 answers
A. Review current application metrics and add new ones as needed.
B. Modify the code to capture additional information for user interaction.
C. Analyze the web proxy logs only and capture response time of each request.
D. Create new synthetic clients to simulate a user journey using the application.
E. Use current and historic Request Logs to trace customer interaction with the application.
Answer: C,E
Explanation:
https://cloud.google.com/architecture/adopting-slos?hl=en

NO.55 You support an application that stores product information in cached memory. For every
cache miss, an entry is logged in Stackdriver Logging. You want to visualize how often a cache miss
happens over time. What should you do?
A. Link Stackdriver Logging as a source in Google Data Studio. Filter the logs on the cache misses.
B. Configure Stackdriver Profiler to identify and visualize when the cache misses occur based on the
logs.
C. Create a logs-based metric in Stackdriver Logging and a dashboard for that metric in Stackdriver
Monitoring.
D. Configure BigQuery as a sink for Stackdriver Logging. Create a scheduled query to filter the cache
miss logs and write them to a separate table
Answer: C
Explanation:
https://cloud.google.com/logging/docs/logs-based-metrics#counter-metric
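A hedged Python sketch of creating such a logs-based counter metric, assuming the google-cloud-logging client library; the metric name and filter are placeholders.
```python
# Create a logs-based counter metric for cache-miss log entries; it can then be
# charted on a Monitoring dashboard as a logging/user metric.
from google.cloud import logging as gcloud_logging

client = gcloud_logging.Client()
metric = client.metric(
    "cache_miss_count",
    filter_='textPayload:"cache miss"',
    description="Counts log entries that record a cache miss",
)
metric.create()
```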

NO.56 You support a production service that runs on a single Compute Engine instance. You
regularly need to spend time on recreating the service by deleting the crashing instance and creating
a new instance based on the relevant image. You want to reduce the time spent performing manual
operations while following Site Reliability Engineering principles. What should you do?
A. File a bug with the development team so they can find the root cause of the crashing instance.
B. Create a Managed Instance Group with a single instance and use health checks to determine the
system status.
C. Add a Load Balancer in front of the Compute Engine instance and use health checks to determine
the system status.
D. Create a Stackdriver Monitoring dashboard with SMS alerts to be able to start recreating the
crashed instance promptly after it has crashed.
Answer: B
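A minimal sketch of option B, assuming a hypothetical instance template, health check name, and zone; with autohealing configured, the MIG recreates the instance automatically when it fails the health check.
# Hypothetical sketch: a single-instance MIG with autohealing based on an HTTP health check.
gcloud compute health-checks create http svc-health-check --port=80 --request-path=/healthz
gcloud compute instance-groups managed create svc-mig \
  --zone=us-central1-a --template=svc-template --size=1
gcloud compute instance-groups managed update svc-mig \
  --zone=us-central1-a \
  --health-check=svc-health-check --initial-delay=300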

NO.57 You use a multi-step Cloud Build pipeline to build and deploy your application to Google
Kubernetes Engine (GKE). You want to integrate with a third-party monitoring platform by performing
an HTTP POST of the build information to a webhook. You want to minimize the development effort.
What should you do?
A. Add logic to each Cloud Build step to HTTP POST the build information to a webhook.
B. Add a new step at the end of the pipeline in Cloud Build to HTTP POST the build information to a
webhook.
C. Use Stackdriver Logging to create a logs-based metric from the Cloud Build logs. Create an alert
with a webhook notification type.
D. Create a Cloud Pub/Sub push subscription to the Cloud Build cloud-builds Pub/Sub topic to HTTP
POST the build information to a webhook.
Answer: D
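A minimal sketch of option D; the subscription name and webhook URL are placeholders. Cloud Build publishes build status messages to the cloud-builds Pub/Sub topic, so a push subscription can forward them without changing the pipeline itself.
# Hypothetical sketch: push Cloud Build status messages to a third-party webhook.
gcloud pubsub topics create cloud-builds   # only needed if the topic does not already exist
gcloud pubsub subscriptions create build-webhook-sub \
  --topic=cloud-builds \
  --push-endpoint=https://monitoring.example.com/webhook   # placeholder endpoint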

NO.58 Your team uses Cloud Build for all CI/CD pipelines. You want to use the kubectl builder for
Cloud Build to deploy new images to Google Kubernetes Engine (GKE). You need to authenticate to
GKE while minimizing development effort. What should you do?
A. Assign the Container Developer role to the Cloud Build service account.
B. Specify the Container Developer role for Cloud Build in the cloudbuild.yaml file.
C. Create a new service account with the Container Developer role and use it to run Cloud Build.
D. Create a separate step in Cloud Build to retrieve service account credentials and pass these to
kubectl.
Answer: A
Explanation:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke
https://cloud.google.com/build/docs/securing-builds/configure-user-specified-service-accounts
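A minimal sketch of option A, assuming hypothetical cluster, zone, deployment, and image names: it grants the default Cloud Build service account the Container Developer role, then calls the kubectl builder from cloudbuild.yaml.
# Hypothetical sketch: authorize Cloud Build, then deploy with the kubectl builder.
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/container.developer"

cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
EOF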

NO.59 Your company is developing applications that are deployed on Google Kubernetes Engine
(GKE). Each team manages a different application. You need to create the development and
production environments for each team, while minimizing costs. Different teams should not be able
to access other teams' environments. What should you do?
A. Create one GCP Project per team. In each project, create a cluster for Development and one for
Production. Grant the teams IAM access to their respective clusters.
B. Create one GCP Project per team. In each project, create a cluster with a Kubernetes namespace
for Development and one for Production. Grant the teams IAM access to their respective clusters.
C. Create a Development and a Production GKE cluster in separate projects. In each cluster, create a
Kubernetes namespace per team, and then configure Identity Aware Proxy so that each team can
only access its own namespace.
D. Create a Development and a Production GKE cluster in separate projects. In each cluster, create a
Kubernetes namespace per team, and then configure Kubernetes Role-based access control (RBAC)
so that each team can only access its own namespace.
Answer: D
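A minimal sketch of option D for one hypothetical team; the namespace, role names, and group address are placeholders.
# Hypothetical sketch: restrict one team to its own namespace with RBAC.
kubectl create namespace team-a
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-edit
  namespace: team-a
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit-binding
  namespace: team-a
subjects:
- kind: Group
  name: team-a@example.com   # placeholder group identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-edit
  apiGroup: rbac.authorization.k8s.io
EOF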

NO.60 You are creating and assigning action items in a postmortem for an outage. The outage is
over, but you need to address the root causes. You want to ensure that your team handles the action
items quickly and efficiently. How should you assign owners and collaborators to action items?
A. Assign one owner for each action item and any necessary collaborators.
B. Assign multiple owners for each item to guarantee that the team addresses items quickly
C. Assign collaborators but no individual owners to the items to keep the postmortem blameless.
D. Assign the team lead as the owner for all action items because they are in charge of the SRE team.
Answer: A

NO.61 You use Cloud Build to build your application. You want to reduce the build time while
minimizing cost and development effort. What should you do?
A. Use Cloud Storage to cache intermediate artifacts.
B. Run multiple Jenkins agents to parallelize the build.
C. Use multiple smaller build steps to minimize execution time.
D. Use larger Cloud Build virtual machines (VMs) by using the machine-type option.
Answer: C
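One way to keep individual build steps small and let independent steps run concurrently is Cloud Build's waitFor field. This is a hedged sketch only; the step names, builder images, and commands are placeholders, not part of the question.
# Hypothetical cloudbuild.yaml sketch: small steps, with independent steps running in parallel.
cat > cloudbuild.yaml <<'EOF'
steps:
- id: 'unit-tests'
  name: 'gcr.io/cloud-builders/npm'
  args: ['test']
  waitFor: ['-']          # start immediately
- id: 'lint'
  name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'lint']
  waitFor: ['-']          # runs in parallel with unit-tests
- id: 'build-image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  waitFor: ['unit-tests', 'lint']
EOF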

NO.62 You support an application deployed on Compute Engine. The application connects to a Cloud
SQL instance to store and retrieve data. After an update to the application, users report errors
showing database timeout messages. The number of concurrent active users remained stable. You
need to find the most probable cause of the database timeout. What should you do?
A. Check the serial port logs of the Compute Engine instance.

B. Use Stackdriver Profiler to visualize the resources utilization throughout the application.
C. Determine whether there is an increased number of connections to the Cloud SQL instance.
D. Use Cloud Security Scanner to see whether your Cloud SQL is under a Distributed Denial of Service
(DDoS) attack.
Answer: B

NO.63 You support a multi-region web service running on Google Kubernetes Engine (GKE) behind a
Global HTTPS Cloud Load Balancer (CLB). For legacy reasons, user requests first go through a third-
party Content Delivery Network (CDN), which then routes traffic to the CLB. You have already
implemented an availability Service Level Indicator (SLI) at the CLB level. However, you want to
increase coverage in case of a potential load balancer misconfiguration, CDN failure, or other global
networking catastrophe. Where should you measure this new SLI?
Choose 2 answers
A. Your application servers' logs
B. Instrumentation coded directly in the client
C. Metrics exported from the application servers
D. GKE health checks for your application servers
E. A synthetic client that periodically sends simulated user requests
Answer: B,E

NO.64 You encounter a large number of outages in the production systems you support. You receive
alerts for all the outages that wake you up at night. The alerts are due to unhealthy systems that are
automatically restarted within a minute. You want to set up a process that would prevent staff
burnout while following Site Reliability Engineering practices. What should you do?
A. Eliminate unactionable alerts.
B. Create an incident report for each of the alerts.
C. Distribute the alerts to engineers in different time zones.
D. Redefine the related Service Level Objective so that the error budget is not exhausted.
Answer: A

NO.65 You are responsible for the reliability of a high-volume enterprise application. A large number
of users report that an important subset of the application's functionality - a data intensive reporting
feature - is consistently failing with an HTTP 500 error. When you investigate your application's
dashboards, you notice a strong correlation between the failures and a metric that represents the
size of an internal queue used for generating reports. You trace the failures to a reporting backend
that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend's persistent
disk (PD). Now you need to create an availability Service Level Indicator (SLI) for the report
generation feature. How would you define it?
A. As the I/O wait times aggregated across all report generation backends
B. As the proportion of report generation requests that result in a successful response
C. As the application's report generation queue size compared to a known-good threshold
D. As the reporting backend PD throughput capacity compared to a known-good threshold
Answer: C

NO.66 You currently store the virtual machine (VM) utilization logs in Stackdriver. You need to
provide an easy-to-share interactive VM utilization dashboard that is updated in real time and
contains information aggregated on a quarterly basis. You want to use Google Cloud Platform
solutions. What should you do?
A. 1. Export VM utilization logs from Stackdriver to BigQuery.
2. Create a dashboard in Data Studio.
3. Share the dashboard with your stakeholders.
B. 1. Export VM utilization logs from Stackdriver to Cloud Pub/Sub.
2. From Cloud Pub/Sub, send the logs to a Security Information and Event Management (SIEM)
system.
3. Build the dashboards in the SIEM system and share with your stakeholders.
C. 1. Export VM utilization logs from Stackdriver to BigQuery.
2. From BigQuery, export the logs to a CSV file.
3. Import the CSV file into Google Sheets.
4. Build a dashboard in Google Sheets and share it with your stakeholders.
D. 1. Export VM utilization logs from Stackdriver to a Cloud Storage bucket.
2. Enable the Cloud Storage API to pull the logs programmatically.
3. Build a custom data visualization application.
4. Display the pulled logs in a custom dashboard.
Answer: A
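A minimal sketch of option A; the project, dataset, and sink names are placeholders.
# Hypothetical sketch: route VM utilization logs to a BigQuery dataset that Data Studio can query.
bq mk --dataset my-project:vm_utilization
gcloud logging sinks create vm-utilization-sink \
  bigquery.googleapis.com/projects/my-project/datasets/vm_utilization \
  --log-filter='resource.type="gce_instance"'
# Grant the sink's writer identity (shown in the command output) BigQuery Data Editor on the
# dataset, then connect Data Studio to the dataset as a BigQuery data source and share the report.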

NO.67 You need to run a business-critical workload on a fixed set of Compute Engine instances for
several months. The workload is stable with the exact amount of resources allocated to it. You want
to lower the costs for this workload without any performance implications. What should you do?
A. Purchase Committed Use Discounts.
B. Migrate the instances to a Managed Instance Group.
C. Convert the instances to preemptible virtual machines.
D. Create an Unmanaged Instance Group for the instances used to run the workload.
Answer: A
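A minimal sketch of option A; the commitment name, region, and resource amounts are placeholders chosen for illustration.
# Hypothetical sketch: purchase a one-year committed use discount for a stable footprint.
gcloud compute commitments create stable-workload-commitment \
  --region=us-central1 \
  --resources=vcpu=16,memory=64GB \
  --plan=12-month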

NO.68 You are ready to deploy a new feature of a web-based application to production. You want to
use Google Kubernetes Engine (GKE) to perform a phased rollout to half of the web server pods.
What should you do?
A. Use a partitioned rolling update.
B. Use Node taints with NoExecute.
C. Use a replica set in the deployment specification.
D. Use a stateful set with parallel pod management policy.
Answer: A
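Kubernetes supports partitioned rolling updates on StatefulSets. A minimal sketch, assuming the web servers run as a hypothetical StatefulSet named web with 10 replicas:
# Hypothetical sketch: update only pods with ordinal >= 5, i.e. half of the 10 replicas.
kubectl patch statefulset web -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":5}}}}'
# Change the pod template to roll the new feature out to the non-partitioned half only.
kubectl set image statefulset/web web=gcr.io/my-project/web:v2   # placeholder image
# Setting the partition back to 0 later completes the rollout to all pods.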

NO.69 You support a service with a well-defined Service Level Objective (SLO). Over the previous 6
months, your service has consistently met its SLO and customer satisfaction has been consistently
high. Most of your service's operations tasks are automated and few repetitive tasks occur
frequently. You want to optimize the balance between reliability and deployment velocity while
following site reliability engineering best practices. What should you do? (Choose two.)

A. Make the service's SLO more strict.


B. Increase the service's deployment velocity and/or risk.
C. Shift engineering time to other services that need more reliability.
D. Get the product team to prioritize reliability work over new features.
E. Change the implementation of your Service Level Indicators (SLIs) to increase coverage.
Answer: D,E

NO.70 Your company follows Site Reliability Engineering practices. You are the Incident Commander
for a new, customer-impacting incident. You need to immediately assign two incident management
roles to assist you in an effective incident response. What roles should you assign?
Choose 2 answers
A. Operations Lead
B. Engineering Lead
C. Communications Lead
D. Customer Impact Assessor
E. External Customer Communications Lead
Answer: A,C
Explanation:
https://sre.google/workbook/incident-response/
"The main roles in incident response are the Incident Commander (IC), Communications Lead (CL),
and Operations or Ops Lead (OL)."

NO.71 Your company follows Site Reliability Engineering principles. You are writing a postmortem
for an incident, triggered by a software change, that severely affected users. You want to prevent
severe incidents from happening in the future. What should you do?
A. Identify engineers responsible for the incident and escalate to their senior management.
B. Ensure that test cases that catch errors of this type are run successfully before new software
releases.
C. Follow up with the employees who reviewed the changes and prescribe practices they should
follow in the future.
D. Design a policy that will require on-call teams to immediately call engineers and management to
discuss a plan of action if an incident occurs.
Answer: B

NO.72 You manage an application that is writing logs to Stackdriver Logging. You need to give some
team members the ability to export logs. What should you do?
A. Grant the team members the IAM role of logging.configWriter on Cloud IAM.
B. Configure Access Context Manager to allow only these members to export logs.
C. Create and grant a custom IAM role with the permissions logging.sinks.list and logging.sinks.get.
D. Create an Organizational Policy in Cloud IAM to allow only these members to create log exports.
Answer: A
Explanation:
https://cloud.google.com/logging/docs/access-control
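A minimal sketch of option A; the project ID and user email are placeholders.
# Hypothetical sketch: grant a team member the Logs Configuration Writer role so they can
# create log sinks (exports).
gcloud projects add-iam-policy-binding my-project \
  --member="user:teammate@example.com" \
  --role="roles/logging.configWriter"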

NO.73 You have migrated an e-commerce application to Google Cloud Platform (GCP). You want to
prepare the application for the upcoming busy season. What should you do first to prepare for the
busy season?
A. Load test the application to profile its performance for scaling.
B. Enable AutoScaling on the production clusters, in case there is growth.
C. Pre-provision double the compute power used last season, expecting growth.
D. Create a runbook on inflating the disaster recovery (DR) environment if there is growth.
Answer: B

NO.74 You have a pool of application servers running on Compute Engine. You need to provide a
secure solution that requires the least amount of configuration and allows developers to easily access
application logs for troubleshooting. How would you implement the solution on GCP?
A. * Deploy the Stackdriver logging agent to the application servers.
* Give the developers the IAM Logs Viewer role to access Stackdriver and view logs.
B. * Deploy the Stackdriver logging agent to the application servers.
* Give the developers the IAM Logs Private Logs Viewer role to access Stackdriver and view logs.
C. * Deploy the Stackdriver monitoring agent to the application servers.
* Give the developers the IAM Monitoring Viewer role to access Stackdriver and view metrics.
D. * Install the gsutil command line tool on your application servers.
* Write a script using gsutil to upload your application log to a Cloud Storage bucket, and then
schedule it to run via cron every 5 minutes.
* Give the developers IAM Object Viewer access to view the logs in the specified bucket.
Answer: A
Explanation:
https://cloud.google.com/logging/docs/audit#access-control
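A minimal sketch of option A on a Debian or Ubuntu application server, using the documented installation script for the legacy Logging agent; the project ID and developer email are placeholders.
# Hypothetical sketch: install the Logging agent and grant developers read access to logs.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
gcloud projects add-iam-policy-binding my-project \
  --member="user:developer@example.com" \
  --role="roles/logging.viewer"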

NO.75 You support a stateless web-based API that is deployed on a single Compute Engine instance
in the europe-west2-a zone. The Service Level Indicator (SLI) for service availability is below the
specified Service Level Objective (SLO). A postmortem has revealed that requests to the API regularly
time out. The timeouts are due to the API receiving a high number of requests and running out of
memory. You want to improve service availability. What should you do?
A. Change the specified SLO to match the measured SLI.
B. Move the service to higher-specification compute instances with more memory.
C. Set up additional service instances in other zones and load balance the traffic between all
instances.
D. Set up additional service instances in other zones and use them as a failover in case the primary
instance is unavailable.
Answer: C
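A minimal sketch of option C, assuming an existing instance template and an existing global backend service for the load balancer; all names are placeholders.
# Hypothetical sketch: run the stateless API as a regional MIG spread across zones,
# then attach it to the load balancer's backend service.
gcloud compute instance-groups managed create api-mig \
  --region=europe-west2 \
  --template=api-template \
  --size=3
gcloud compute backend-services add-backend api-backend \
  --global \
  --instance-group=api-mig \
  --instance-group-region=europe-west2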

NO.76 Some of your production services are running in Google Kubernetes Engine (GKE) in the eu-
west-1 region. Your build system runs in the us-west-1 region. You want to push the container images
from your build system to a scalable registry to maximize the bandwidth for transferring the images
to the cluster. What should you do?
A. Push the images to Google Container Registry (GCR) using the gcr.io hostname.

B. Push the images to Google Container Registry (GCR) using the us.gcr.io hostname.
C. Push the images to Google Container Registry (GCR) using the eu.gcr.io hostname.
D. Push the images to a private image registry running on a Compute Engine instance in the eu-west-
1 region.
Answer: B
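A minimal sketch of option B; the project ID and image name are placeholders.
# Hypothetical sketch: tag and push from the us-west-1 build system to the US regional registry.
gcloud auth configure-docker
docker tag my-app:latest us.gcr.io/my-project/my-app:latest
docker push us.gcr.io/my-project/my-app:latest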

NO.77 Your development team has created a new version of their service's API. You need to deploy
the new versions of the API with the least disruption to third-party developers and end users of third-
party installed applications. What should you do?
A. Introduce the new version of the API.
Announce deprecation of the old version of the API.
Deprecate the old version of the API.
Contact remaining users of the old API.
Provide best effort support to users of the old API.
Turn down the old version of the API.
B. Announce deprecation of the old version of the API.
Introduce the new version of the API.
Contact remaining users on the old API.
Deprecate the old version of the API.
Turn down the old version of the API.
Provide best effort support to users of the old API.
C. Announce deprecation of the old version of the API.
Contact remaining users on the old API.
Introduce the new version of the API.
Deprecate the old version of the API.
Provide best effort support to users of the old API.
Turn down the old version of the API.
D. Introduce the new version of the API.
Contact remaining users of the old API.
Announce deprecation of the old version of the API.
Deprecate the old version of the API.
Turn down the old version of the API.
Provide best effort support to users of the old API.
Answer: B

NO.78 You support a popular mobile game application deployed on Google Kubernetes Engine (GKE)
across several Google Cloud regions. Each region has multiple Kubernetes clusters. You receive a
report that none of the users in a specific region can connect to the application. You want to resolve
the incident while following Site Reliability Engineering practices. What should you do first?
A. Reroute the user traffic from the affected region to other regions that don't report issues.
B. Use Stackdriver Monitoring to check for a spike in CPU or memory usage for the affected region.
C. Add an extra node pool that consists of high memory and high CPU machine type instances to the
cluster.
D. Use Stackdriver Logging to filter on the clusters in the affected region, and inspect error messages
in the logs.

Answer: D

NO.79 Your product is currently deployed in three Google Cloud Platform (GCP) zones with your
users divided between the zones. You can fail over from one zone to another, but it causes a 10-
minute service disruption for the affected users. You typically experience a database failure once per
quarter and can detect it within five minutes. You are cataloging the reliability risks of a new real-
time chat feature for your product. You catalog the following information for each risk:
* Mean Time to Detect (MTTD) in minutes
* Mean Time to Repair (MTTR) in minutes
* Mean Time Between Failure (MTBF) in days
* User Impact Percentage
The chat feature requires a new database system that takes twice as long to successfully fail over
between zones. You want to account for the risk of the new database failing in one zone. What would
be the values for the risk of database failover with the new system?
A. MTTD: 5
MTTR: 10
MTBF: 90
Impact: 33%
B. MTTD: 5
MTTR: 20
MTBF: 90
Impact: 33%
C. MTTD: 5
MTTR: 10
MTBF: 90
Impact: 50%
D. MTTD: 5
MTTR: 20
MTBF: 90
Impact: 50%
Answer: B
Explanation:
https://www.atlassian.com/incident-management/kpis/common-metrics
https://linkedin.github.io/school-of-sre/
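To check the arithmetic behind answer B: detection time is unchanged, so MTTD = 5 minutes; the old zone failover took 10 minutes and the new database doubles it, so MTTR = 2 x 10 = 20 minutes; one database failure per quarter is roughly every 90 days, so MTBF = 90; and with users split evenly across three zones, a single-zone failure affects about one third of users, so Impact = 33%.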

NO.80 Your organization recently adopted a container-based workflow for application development.
Your team develops numerous applications that are deployed continuously through an automated
build pipeline to a Kubernetes cluster in the production environment. The security auditor is
concerned that developers or operators could circumvent automated testing and push code changes
to production without approval. What should you do to enforce approvals?
A. Configure the build system with protected branches that require pull request approval.
B. Use an Admission Controller to verify that incoming requests originate from approved sources.
C. Leverage Kubernetes Role-Based Access Control (RBAC) to restrict access to only approved users.
D. Enable binary authorization inside the Kubernetes cluster and configure the build pipeline as an
attestor.

Answer: D
Explanation:
The key phrase here is "developers or operators". With option A, operators could still push images to
production without approval, because they can interact with the cluster directly and protected
branches in the build system cannot stop them. Binary Authorization enforces the approval check at
deploy time inside the cluster itself.
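A minimal sketch of option D; the cluster name and zone are placeholders, and the attestor itself still has to be created and referenced in the project's Binary Authorization policy.
# Hypothetical sketch: enforce Binary Authorization on the production cluster so only
# attested images can be deployed.
gcloud container clusters update prod-cluster \
  --zone=us-central1-a \
  --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
# The build pipeline then signs each approved image, for example with
# "gcloud container binauthz attestations sign-and-create", and the policy requires
# attestations from that attestor.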

NO.81 You are running an experiment to see whether your users like a new feature of a web
application. Shortly after deploying the feature as a canary release, you receive a spike in the number
of 500 errors sent to users, and your monitoring reports show increased latency. You want to quickly
minimize the negative impact on users. What should you do first?
A. Roll back the experimental canary release.
B. Start monitoring latency, traffic, errors, and saturation.
C. Record data for the postmortem document of the incident.
D. Trace the origin of 500 errors and the root cause of increased latency.
Answer: A

NO.82 You need to define Service Level Objectives (SLOs) for a high-traffic multi-region web
application. Customers expect the application to always be available and have fast response times.
Customers are currently happy with the application performance and availability. Based on current
measurement, you observe that the 90th percentile of latency is 120ms and the 95th percentile of
latency is 275ms over a 28-day window. What latency SLO would you recommend to the team to
publish?
A. 90th percentile - 100ms
95th percentile - 250ms
B. 90th percentile - 120ms
95th percentile - 275ms
C. 90th percentile - 150ms
95th percentile - 300ms
D. 90th percentile - 250ms
95th percentile - 400ms
Answer: B

NO.83 You support the backend of a mobile phone game that runs on a Google Kubernetes Engine
(GKE) cluster. The application is serving HTTP requests from users. You need to implement a solution
that will reduce the network cost. What should you do?
A. Configure the VPC as a Shared VPC Host project.
B. Configure your network services on the Standard Tier.
C. Configure your Kubernetes cluster as a Private Cluster.
D. Configure a Google Cloud HTTP Load Balancer as Ingress.
Answer: D
Explanation:
Costs associated with a load balancer are charged to the project containing the load balancer
components. Because of these benefits, container-native load balancing is the recommended
solution for load balancing through Ingress. When NEGs are used with GKE Ingress, the Ingress
controller facilitates the creation of all aspects of the L7 load balancer. This includes creating the
virtual IP address, forwarding rules, health checks, firewall rules, and more.
https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-
applications-on-gke
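A minimal sketch of the recommended setup; the service name, app label, and port values are placeholders.
# Hypothetical sketch: expose the game backend through a GKE Ingress with container-native
# load balancing (NEG annotation on the Service).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: game-backend
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: game-backend
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-backend-ingress
spec:
  defaultBackend:
    service:
      name: game-backend
      port:
        number: 80
EOF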
