PCA Set3
Professional Cloud Architect on Google Cloud Platform v1.0 (Professional Cloud Architect) - Full Access
Question 201 ( Question Set 1 )
You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no
logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should
you do?
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to
investigate logs from affected Pods.
C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error.
D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to
trigger whenever the application returns an error.
Answer : C
Reference:
https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
Answer : D
Reference:
https://cloud.google.com/storage/docs/gcs-fuse
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the
configurations of the filtered workloads.
C. Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the
namespace. Inspect the configurations of the filtered workloads.
D. Reinstall istio using the default istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.
Answer : A
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
B. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.
C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
D. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
Answer : A
Reference:
https://cloud.google.com/storage/docs/using-bucket-lock
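For illustration, a minimal sketch of the retention policy and lock from option A, assuming a placeholder bucket name my-audit-bucket:
# Set a 5-year retention period on the bucket
gsutil retention set 5y gs://my-audit-bucket
# Lock the policy; once locked it can no longer be removed or shortened
gsutil retention lock gs://my-audit-bucket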
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy
the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly
built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the
new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of
the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
Answer : A
Answer : A
Reference:
https://cloud.google.com/blog/products/sap-google-cloud/best-practices-for-sap-app-server-autoscaling-on-google-cloud
Answer : D
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster.
Answer : A
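For illustration, a minimal sketch of option A, assuming a hypothetical microservice image gcr.io/my-project/orders:v1 in the default namespace:
# Deploy the microservice as a Deployment
kubectl create deployment orders --image=gcr.io/my-project/orders:v1
# Expose it inside the cluster with a ClusterIP Service
kubectl expose deployment orders --port=80 --target-port=8080
# Other microservices can now reach it at its Service DNS name,
# for example http://orders.default.svc.cluster.local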
A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development
team. 3. Use Cloud VPN to join the two VPCs.
B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute
Admin role to the development team.
D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development
team. 3. Use VPC Peering to join the two VPCs.
Answer : C
Reference:
https://cloud.google.com/vpc/docs/shared-vpc
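For illustration, a minimal sketch of the Shared VPC setup in option C, assuming placeholder project IDs net-host-project and dev-service-project:
# Make the networking team's project a Shared VPC host project
gcloud compute shared-vpc enable net-host-project
# Attach the development team's project as a service project
gcloud compute shared-vpc associated-projects add dev-service-project --host-project net-host-project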
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL.
B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.
C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
Answer : B
A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events.
B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.
C. Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events.
D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
Answer : C
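For illustration, a minimal sketch of the export in option C, assuming a hypothetical topic alert-events, project my-project, and an example severity filter:
# Topic that will receive the matching log entries
gcloud pubsub topics create alert-events
# Route matching log entries to the topic; a Cloud Function subscribed to the
# topic is then triggered for each relevant event
gcloud logging sinks create alert-sink \
  pubsub.googleapis.com/projects/my-project/topics/alert-events \
  --log-filter='severity>=ERROR'
# (the sink's writer identity must be granted the Pub/Sub Publisher role on the topic)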
A. Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance.
B. Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP.
C. Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance.
D. Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance.
Answer : D
Reference:
https://cloud.google.com/solutions/connecting-securely
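For reference, the IAP tunnel connection described in option C, with hypothetical instance and zone names:
# SSH to an instance that has no external IP by tunneling through Identity-Aware Proxy
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap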
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
Answer : C
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-folders
A. Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value.
B. Use Istio's fault injection on the particular microservice whose faulty behavior you want to simulate.
C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior.
D. Configure Istio's traffic management features to steer the traffic away from a crashing microservice.
Answer : C
A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine
Answer : A
Reference:
https://cloud.google.com/terms/services
A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API.
B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix "DEPRECATED" to the current API version number on every backward-incompatible change. Use the current version number for the new API.
Answer : A
A. The new approach will be significantly less costly, make it easier to manage the underlying infrastructure, and automatically manage the CI/CD pipelines.
B. The monolithic solution can be converted to a container with Docker. The generated container can then be deployed into a Kubernetes cluster.
C. The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and
scale the solution if necessary.
D. The process can be automated with Migrate for Compute Engine.
Answer : C
A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the auto scaling works.
C. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
D. Use Cloud Debugger in the development environment to understand the latency between the different microservices.
Answer : B
A. Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent 50 to configure Kubernetes autoscaling deployment.
B. Configure a Kubernetes autoscaling deployment based on the subscription/push_request_latencies metric.
C. Use the --enable-autoscaling flag when you create the Kubernetes cluster.
D. Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric.
Answer : C
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler
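For reference, the kubectl form of the CPU-based autoscaler in option A, with a placeholder deployment name; the metric-based variants in options B and D are configured through a HorizontalPodAutoscaler manifest that references the external Pub/Sub metric instead:
# CPU-based Horizontal Pod Autoscaler for an existing Deployment
kubectl autoscale deployment my-app --min=2 --max=6 --cpu-percent=50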
A. Make sure a developer is tagging the code commit with the date and time of commit.
B. Make sure a developer is adding a comment to the commit that links to the deployment.
C. Make the container tag match the source code commit hash.
D. Make sure the developer is tagging the commits with latest.
Answer : A
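For illustration, one way to make the container tag match the source commit hash as described in option C, with placeholder project and image names:
# Tag the image with the commit hash of the code it was built from
COMMIT_SHA=$(git rev-parse --short HEAD)
gcloud builds submit --tag gcr.io/my-project/my-app:${COMMIT_SHA}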
A. Develop the application with containers, and deploy to Google Kubernetes Engine.
B. Develop the application for App Engine standard environment.
C. Use a Managed Instance Group when deploying to Compute Engine.
D. Develop the application for App Engine flexible environment, using a custom runtime.
Answer : C
A. Send the data through the processing pipeline, and then store the processed data in a BigQuery table for reprocessing.
B. Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table.
C. Send the data through the processing pipeline, and then store the processed data in a Cloud Storage bucket for reprocessing.
D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.
Answer : D
A. Grant all department members the required IAM permissions for their respective projects.
B. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level.
Add the projects under the respective folders.
C. Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level. Structure all projects for each department under the respective folders.
D. Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.
Answer : B
A. Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment.
B. Implement a corporate policy to prevent teams from deploying Docker images to an environment unless the Docker image was tested in an earlier environment.
C. Configure binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.
D. Create a Kubernetes admissions controller to prevent the container from starting if it is not approved for usage in the given environment.
Answer : C
A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
B. Use the Data Transfer appliance to perform an offline migration.
C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.
D. Compress the data and upload it with gsutil -m to enable multi-threaded copy.
Answer : A
A. 1. Attach a persistent SSD disk to the first instance. 2. Create a snapshot every hour. 3. In case of a zone outage, recreate a persistent SSD disk in the second instance where data is coming from the created snapshot.
B. 1. Create a Cloud Storage bucket. 2. Mount the bucket into the first instance with gcs-fuse. 3. In case of a zone outage, mount the Cloud Storage bucket to the second instance with gcs-fuse.
C. 1. Attach a regional SSD persistent disk to the first instance. 2. In case of a zone outage, force-attach the disk to the other instance.
D. 1. Attach a local SSD disk to the first instance. 2. Execute an rsync command every hour where the target is a persistent SSD disk attached to the second instance. 3. In case of a zone outage, use the second instance.
Answer : B
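For reference, a minimal sketch of the regional persistent disk commands described in option C, with hypothetical names and zones:
# Create a regional SSD persistent disk replicated across two zones
gcloud compute disks create shared-data --region=us-central1 \
  --replica-zones=us-central1-a,us-central1-b --type=pd-ssd --size=200GB
# Attach it to the primary instance
gcloud compute instances attach-disk instance-a --zone=us-central1-a \
  --disk=shared-data --disk-scope=regional
# After a zone outage, force-attach the disk to the standby instance
gcloud compute instances attach-disk instance-b --zone=us-central1-b \
  --disk=shared-data --disk-scope=regional --force-attach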
A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the
data and to store it in a new BigQuery dataset.
B. Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key.
C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery
dataset.
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
Answer : D
A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS.
B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.
C. Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket.
D. Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.
Answer : D
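For illustration, a minimal sketch of the customer-supplied encryption key upload in option D, assuming a placeholder bucket name my-secure-bucket:
# Generate a 256-bit AES key, base64-encoded
CSEK=$(openssl rand -base64 32)
# Upload with the customer-supplied key; Cloud Storage encrypts the object with
# this key but does not store the key itself
gsutil -o "GSUtil:encryption_key=${CSEK}" cp backup.tar gs://my-secure-bucket/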
A. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet.
B. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
D. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the Internet.
Answer : B
Reference:
https://cloud.google.com/architecture/prep-kubernetes-engine-for-prod
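For reference, minimal sketches of the networking pieces in options A and B, with hypothetical network, subnet, and region names:
# Option A: Cloud NAT so private nodes can reach the public Internet
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
# Option B: Private Google Access on the subnet (internal routes to Google APIs only)
gcloud compute networks subnets update my-subnet --region=us-central1 \
  --enable-private-ip-google-access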
Answer : B
A. 1. Use Linux shasum to compute a digest of files you want to upload. 2. Use gsutil -m to upload all the files to Cloud Storage. 3. Use gsutil cp to download the uploaded files. 4. Use Linux shasum to compute a digest
of the downloaded files. 5. Compare the hashes.
B. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Develop a custom Java application that computes CRC32C hashes. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the
uploaded files. 4. Compare the hashes.
C. 1. Use gsutil -m to upload all the files to Cloud Storage. 2. Use gsutil cp to download the uploaded files. 3. Use Linux diff to compare the content of the files.
D. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C
hashes of the uploaded files. 4. Compare the hashes.
Answer : C
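For reference, the CRC32C comparison described in option D, with placeholder file and bucket names:
# Compute the CRC32C hash of a local file before upload
gsutil hash -c local-file.dat
# Upload in parallel
gsutil -m cp local-file.dat gs://my-bucket/
# Show the CRC32C hash Cloud Storage recorded for the uploaded object
gsutil ls -L gs://my-bucket/local-file.dat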
A. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO), and create an alerting policy based on this SLO.
B. Enable the Cloud Trace API on your project, and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics.
C. Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an Alerting policy in case this metric exceeds the threshold.
D. Configure Anthos Config Management on your cluster, and create a yaml file that defines the SLO and alerting policy you want to deploy in your cluster.
Answer : A
Reference:
https://cloud.google.com/anthos/docs/tutorials/manage-slos
A. Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type LoadBalancer. Add the public IPs to the Cloud DNS zone.
B. Use a global HTTP(s) load balancer with Cloud CDN enabled.
C. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(s) load balancer.
D. Increase the memory and CPU allocated to the application in the cluster.
Answer : B
A. Create an instance template with the smallest available machine type, and use an image of the third-party application taken from a current on-premises virtual machine. Create a managed instance group that uses
average CPU utilization to autoscale the number of instances in the group. Modify the average CPU utilization threshold to optimize the number of instances running.
B. Create an App Engine flexible environment, and deploy the third-party application using a Dockerfile and a custom runtime. Set CPU and memory options similar to your application's current on-premises virtual
machine in the app.yaml file.
C. Create multiple Compute Engine instances with varying CPU and memory options. Install the Cloud Monitoring agent, and deploy the third-party application on each of them. Run a load test with high traffic levels
on the application, and use the results to determine the optimal settings.
D. Create a Compute Engine instance with CPU and memory options similar to your application's current on-premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run
a load test with normal traffic levels on the application, and follow the Rightsizing Recommendations in the Cloud Console.
Answer : A
Reference:
https://avinetworks.com/docs/18.2/server-autoscaling-in-gcp/
Answer : A
Reference:
https://cloud.google.com/vpc-service-controls/docs/overview
A. Add the node group name as a network tag when creating Compute Engine instances in order to host each workload on the correct node group.
B. Add the node name as a network tag when creating Compute Engine instances in order to host each workload on the correct node.
C. Use node affinity labels based on the node group name when creating Compute Engine instances in order to host each workload on the correct node group.
D. Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node.
Answer : C
Reference:
https://cloud.google.com/compute/docs/nodes/provisioning-sole-tenant-vms
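For illustration, a minimal sketch of option C, assuming a sole-tenant node group named workload-a-nodes already exists in the zone; the --node-group flag is the simplest form of node affinity toward that group:
# Schedule the VM onto the sole-tenant node group
gcloud compute instances create workload-a-vm --zone=us-central1-a --node-group=workload-a-nodes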
A. Google Compute Engine unmanaged instance groups and Network Load Balancer
B. Google Compute Engine managed instance groups with auto-scaling
C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test
D. Google App Engine with Google StackDriver for logging
Answer : B
Explanation:
Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be launched from the standard images or custom images created by users.
Managed instance groups offer autoscaling capabilities that allow you to automatically add or remove instances from a managed instance group based on increases or decreases in load. Autoscaling helps your
applications gracefully handle increases in traffic and reduces cost when the need for resources is lower.
Incorrect Answers:
A: There is no mention of incoming IP data traffic for the custom C++ applications.
C: Apache Hadoop is not fit for testing C++ applications. Apache Hadoop is an open-source software framework used for distributed storage and processing of datasets of big data using the MapReduce programming
model.
D: Google App Engine is intended to be used for web applications.
Google App Engine (often referred to as GAE or simply App Engine) is a web framework and cloud computing platform for developing and hosting web applications in Google-managed data centers.
Reference:
https://cloud.google.com/compute/docs/autoscaler/
A. Help the engineer to convert his websocket code to use HTTP streaming
B. Review the encryption requirements for websocket connections with the security team
C. Meet with the cloud operations team and the engineer to discuss load balancer options
D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
Answer : C
Explanation:
Google Cloud Platform (GCP) HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for your instances.
The HTTP(S) load balancer has native support for the WebSocket protocol.
Incorrect Answers:
A: HTTP server push, also known as HTTP streaming, is a client-server communication pattern that sends information from an HTTP server to a client asynchronously, without a client request. A server push
architecture is especially effective for highly interactive web or mobile applications, where one or more clients need to receive continuous information from the server.
Reference:
https://cloud.google.com/compute/docs/load-balancing/http/
A. • Append metadata to file body • Compress individual files • Name files with serverName-Timestamp • Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.
B. • Batch every 10,000 events with a single manifest file for metadata • Compress event files and manifest file into a single archive file • Name files using serverName-EventSequence • Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
C. • Compress individual files • Name files with serverName-EventSequence • Save files to one bucket • Set custom metadata headers for each object after saving
D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
Answer : D
A. Search for Create VM entry in the Stackdriver alerting console
B. Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry
C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry
D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list
Answer : C
Explanation:
Incorrect Answers:
A: To use the Stackdriver alerting console we must first set up alerting policies.
B: Data access logs only contain read-only operations.
Audit logs help you determine who did what, where, and when.
Cloud Audit Logging returns two types of logs:
✑ Admin activity logs
✑ Data access logs: Contain log entries for operations that perform read-only operations and do not modify any data, such as get, list, and aggregated list methods.
A. Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
C. Create an image file from the root disk with Linux dd command, create a new virtual machine instance in the US-East region
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
Answer : D
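For illustration, a minimal sketch of the flow in answer D, with hypothetical resource names and zones:
# Snapshot the root disk of the existing VM
gcloud compute disks snapshot web-root-disk --zone=us-central1-a --snapshot-names=web-root-snap
# Create an image from the snapshot
gcloud compute images create web-root-image --source-snapshot=web-root-snap
# Create the new VM in the US-East region from that image
gcloud compute instances create web-east --zone=us-east1-b --image=web-root-image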
A. Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots.
B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.
C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.
D. Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use LVM to create snapshots to send to Cloud Storage
Answer : B
A. Ensure that the load tests validate the performance of Cloud Bigtable
B. Create a separate Google Cloud project to use for the load-testing environment
C. Schedule the load-testing tool to regularly run against the production environment
D. Ensure all third-party systems your services use is capable of handling high load
E. Instrument the production services to record every transaction for replay by the load-testing tool
F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
Answer : ABF
Answer : B
Answer : BE
A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
Answer : C
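For reference, the current non-alpha form of the command in answer C, assuming the default node pool and a zonal cluster:
gcloud container clusters update mycluster --zone=us-central1-a \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 --node-pool=default-pool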
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
D. Use a single Compute Engine virtual machine (VM) to host a web server, backed by Google Cloud SQL.
Answer : AC
Explanation:
Reference:
https://cloud.google.com/storage-options/
Answer : BC
Explanation:
B: With Container Engine, Google automatically deploys your cluster for you and updates, patches, and secures the nodes.
Kubernetes Engine's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run.
C: Solutions like Datastore, BigQuery, AppEngine, etc are truly NoOps.
App Engine by default scales the number of instances running up and down to match the load, thus providing consistent performance for your app at all times while minimizing idle instances and thus reducing cost.
Note: At a high level, NoOps means that there is no infrastructure to build out and manage during usage of the platform. Typically, the compromise you make with
NoOps is that you lose control of the underlying infrastructure.
Reference:
https://www.quora.com/How-well-does-Google-Container-Engine-support-Google-Cloud-Platform%E2%80%99s-NoOps-claim
Answer : C
A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the
policies for all projects on both Organizations. 5. Additionally, set the production policies on the original Organization.
B. 1. Create a folder under the Organization resource named "Production". 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
C. 1. Create folders under the Organization resource named "Development" and "Production". 2. Grant all developers the Project Creator IAM role on the "Development" folder. 3. Move the developer projects into the "Development" folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project Creator IAM role on the Organization. 3. Create development projects outside of the Organization using
the developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the individual production projects.
Answer : D
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-organization
A. 1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances. 2. Serve music files directly from the backend Compute Engine instance.
B. 1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances. 2. Download popular songs in Cloud Filestore. 3. Serve music files directly from the backend Compute Engine
instance.
C. 1. Copy popular songs into CloudSQL as a blob. 2. Update application code to retrieve data from CloudSQL when Cloud Storage is overloaded.
D. 1. Create a managed instance group with Compute Engine instances. 2. Create a global load balancer and configure it with two backends: the managed instance group and the Cloud Storage bucket. 3. Enable Cloud CDN on the bucket backend.
Answer : A
Reference:
https://cloud.google.com/compute/docs/logging/usage-export
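For illustration, a minimal sketch of the CDN-enabled bucket backend described in option D, with placeholder names; the backend bucket is then referenced from the global load balancer's URL map:
# Backend bucket with Cloud CDN enabled for the music files
gcloud compute backend-buckets create music-assets-backend \
  --gcs-bucket-name=my-music-bucket --enable-cdn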
A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.
B. Enable the Compute Engine API, and then enable logging on the firewall rules that match the traffic you want to save.
C. Set up a Cloud Logging Dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN metrics over a one-year time period.
D. Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs.
Answer : A
Reference:
https://cloud.google.com/network-connectivity/docs/vpn/how-to/viewing-logs-metrics
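For illustration, a minimal sketch of answer A, assuming a bucket named vpn-log-archive and an example filter on the VPN gateway resource type:
# Route Cloud VPN log entries to a Cloud Storage bucket for long-term retention
gcloud logging sinks create vpn-logs-sink storage.googleapis.com/vpn-log-archive \
  --log-filter='resource.type="vpn_gateway"'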
A. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
B. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, store all non-PII data in BigQuery and store all PII data in a Cloud Storage bucket that has a retention policy set.
C. Ask the external partners to upload all data on Cloud Storage. Configure Bucket Lock for the bucket. Create a Dataflow pipeline to read the data from the bucket. As part of the pipeline, use the Cloud Data Loss
Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
D. Ask the external partners to import all data in your BigQuery dataset. Create a Dataflow pipeline to copy the data into a new table. As part of the Dataflow pipeline, skip all data in columns that have PII data.
Answer : A
A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project.
B. Create an aggregated export on the Organization resource. Set the log sink to be a Cloud Storage bucket in an operations project.
C. Create log exports in the production projects. Set the log sinks to be a Cloud Storage bucket in an operations project.
D. Create log exports in the production projects. Set the log sinks to be BigQuery datasets in the production projects, and grant IAM access to the operations team to run queries on the datasets.
Answer : B
Reference:
https://cloud.google.com/logging/docs/audit
A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one
month. 4. Configure a retention policy at the bucket level using bucket lock.
B. 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files
into a Coldline Cloud Storage bucket after one month.
C. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
D. 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set a time_partitioning_expiration of 30 days.
Answer : C
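For illustration, a minimal sketch of answer C, assuming a dataset named instance_logs in project my-project; the expiration is given in seconds (30 days = 2592000), and syslog is a hypothetical table created by the sink:
# Route instance logs to a partitioned BigQuery dataset
gcloud logging sinks create instance-logs-sink \
  bigquery.googleapis.com/projects/my-project/datasets/instance_logs \
  --log-filter='resource.type="gce_instance"' --use-partitioned-tables
# Expire partitions after 30 days
bq update --time_partitioning_expiration 2592000 my-project:instance_logs.syslog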
Answer : D
Reference:
https://sysdig.com/blog/gcp-security-best-practices/
Answer : C
A. 1. From the dataset where you have the source data, create views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign
access controls to the dataset that contains the view.
B. 1. From the dataset where you have the source data, create materialized views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science
team. 3. Assign access controls to the dataset that contains the view.
C. 1. Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access
controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
D. 1. Create a dataset for the data science team. 2. Create materialized views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4.
Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
Answer : C
Reference:
https://cloud.google.com/blog/topics/developers-practitioners/bigquery-admin-reference-guide-data-governance?skip_cache=true
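For illustration, a minimal sketch of the authorized-view flow in answer C, with hypothetical dataset and table names (source_data holds the raw table, shared_views is the data science team's dataset):
# Dataset for the data science team, plus a view that excludes PII columns
bq mk --dataset my-project:shared_views
bq mk --use_legacy_sql=false \
  --view 'SELECT user_id, purchase_amount FROM `my-project.source_data.orders`' \
  shared_views.orders_no_pii
# Authorize the view against the source dataset by adding it to the dataset's access list
bq show --format=prettyjson my-project:source_data > source_access.json
# ...edit source_access.json to add an "access" entry for shared_views.orders_no_pii...
bq update --source source_access.json my-project:source_data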
Answer : B
Reference:
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets