
Access private Google Kubernetes Engine clusters with Cloud Build private pools
This tutorial describes how to access a private Google Kubernetes Engine (GKE) cluster
(/kubernetes-engine/docs/how-to/private-clusters#private_cp) using Cloud Build private pools (/build/docs/private-pools/private-pools-overview). This
access lets you use Cloud Build to deploy your application on a private GKE cluster. This tutorial is intended for network administrators
and is applicable to all situations where Cloud Build private pools need to communicate with services running in a peered Virtual Private
Cloud (VPC) (/vpc/docs) network. For example, the private pool workers could communicate with the following services:

Private GKE cluster

Cloud SQL database

Memorystore instance

Compute Engine instance running in a different VPC network than the one peered with the Cloud Build private pool

Cloud Build private pools and GKE cluster control planes (/kubernetes-engine/docs/concepts/cluster-architecture#control_plane) both run in
Google-owned VPC networks (/vpc/docs). These VPC networks are peered to your own VPC network on Google Cloud. However, VPC
Network Peering doesn't support transitive peering (/vpc/docs/vpc-peering#restrictions), which can be a restriction when using Cloud Build
private pools. This tutorial presents a solution that uses Cloud VPN to allow workers in a Cloud Build private pool to access the control
plane of a private GKE cluster.

This tutorial assumes that you're familiar with Google Kubernetes Engine, Cloud Build, the gcloud command, VPC Network Peering, and
Cloud VPN.

Architecture overview
When you create a private GKE cluster with no client access to the public endpoint, clients can only access the GKE cluster control plane
(/kubernetes-engine/docs/concepts/cluster-architecture#control_plane) using its private IP address
(/kubernetes-engine/docs/concepts/private-cluster-concept#endpoints_in_private_clusters). Clients like kubectl can communicate with the control
plane only if they run on an instance that has access to the VPC network and is in an authorized network
(/kubernetes-engine/docs/how-to/authorized-networks#benefits_with_private_clusters).

If you want to use Cloud Build to deploy (/build/docs/deploying-builds/deploy-gke) your application on this private GKE cluster, then you need to
use Cloud Build private pools to access the GKE cluster. Private pools are a set of worker instances that run in a Google Cloud project
owned by Google, and they are peered to your VPC network using a VPC Network Peering connection. In this setup, the worker instances are
allowed to communicate with the private IP address of the GKE cluster control plane.

However, the GKE cluster control plane also runs in a Google-owned project and is peered to your VPC network using a peering
connection. VPC Network Peering doesn't support transitive peering (/vpc/docs/vpc-peering#restrictions), so packets can't be routed directly
between the Cloud Build private pool and the GKE cluster control plane.

To enable Cloud Build worker instances to access the GKE cluster control plane, you can peer the private pool and the GKE cluster control
plane with two VPC networks that you own, and then connect these two VPC networks using Cloud VPN. This peering and connection
allows each side of the VPN tunnel to advertise the private pool and GKE cluster control plane networks, thus completing the route.

The following architectural diagram shows the resources that are used in this tutorial:
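In text form, the routing path that the diagram illustrates can be sketched as follows (a simplified view; the two middle networks are the ones you create in this tutorial):

Cloud Build private pool (Google-owned VPC network)
        |  VPC Network Peering
Private pool peering VPC network (your project)
        |  HA VPN tunnels
GKE peering VPC network (your project)
        |  VPC Network Peering
GKE cluster control plane (Google-owned VPC network)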

We recommend creating all resources used in this tutorial in the same Google Cloud region (/build/docs/locations) for low latency. The VPN
tunnel can traverse two different regions if this inter-region communication is needed for your own implementation. The two VPC
networks that you own can also belong to different projects.

Objectives
Create a private GKE cluster.

Set up a Cloud Build private pool.

Create an HA VPN (/network-connectivity/docs/vpn/concepts/overview#ha-vpn) connection between two VPC networks.

Enable routing of packets across two VPC Network Peerings and a Cloud VPN connection.

Costs
In this document, you use the following billable components of Google Cloud:

Google Kubernetes Engine (/kubernetes-engine/pricing)

Cloud Build (/build/pricing)

Cloud VPN (/network-connectivity/docs/vpn/pricing)

To generate a cost estimate based on your projected usage, use the pricing calculator (/products/calculator). New Google Cloud users might
be eligible for a free trial (/free-trial).

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created.
For more information, see Clean up (#clean-up).

Before you begin


1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project
(/resource-manager/docs/creating-managing-projects).

Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish
these steps, you can delete the project, removing all resources associated with the project.

Go to project selector (https://console.cloud.google.com/projectselector2/home/dashboard)

2. Make sure that billing is enabled for your Google Cloud project (/billing/docs/how-to/verify-billing-enabled#console).

3. Enable the Cloud Build, Google Kubernetes Engine, and Service Networking APIs.

Enable the APIs (https://console.cloud.google.com/flows/enableapi?apiid=cloudbuild.googleapis.com,container.googleapis.com,servicenetworking.googleapis.com)

4. In the Google Cloud console, activate Cloud Shell.

Activate Cloud Shell (https://console.cloud.google.com/?cloudshell=true)

At the bottom of the Google Cloud console, a Cloud Shell (/shell/docs/how-cloud-shell-works) session starts and displays a command-
line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
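The commands in the following sections use uppercase placeholders such as REGION and PRIVATE_CLUSTER_NAME. If you want to keep the tutorial's suggested values in one place, you can optionally record them as shell variables in Cloud Shell before you start. The resource names below are arbitrary examples, not names the tutorial requires:

# Values suggested elsewhere in this tutorial:
export REGION=us-central1
export GKE_SUBNET_RANGE=10.244.252.0/22
export CLUSTER_CONTROL_PLANE_CIDR=172.16.0.32/28
export PRIVATE_POOL_NETWORK=192.168.0.0
export PRIVATE_POOL_PREFIX=20

# Example names (choose your own):
export PRIVATE_POOL_PEERING_VPC_NAME=private-pool-vpc
export GKE_PEERING_VPC_NAME=gke-peering-vpc
export GKE_SUBNET_NAME=gke-subnet
export PRIVATE_CLUSTER_NAME=private-cluster
export RESERVED_RANGE_NAME=private-pool-range
export PRIVATE_POOL_NAME=private-pool

If you do this, substitute each placeholder in the following commands with the corresponding $VARIABLE reference.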

Creating two VPC networks in your own project


In this section, you create two VPC networks and a subnet for the GKE cluster nodes.

1. In Cloud Shell, create the first VPC network (called "Private pool peering VPC network" in the preceding diagram). You don't need to
create subnets in this network.


gcloud compute networks create PRIVATE_POOL_PEERING_VPC_NAME \
    --subnet-mode=CUSTOM

Replace PRIVATE_POOL_PEERING_VPC_NAME with the name of your VPC network to be peered with the Cloud Build private pool
network.

2. Create the second VPC network (called "GKE peering VPC network" in the preceding diagram):

gcloud compute networks create GKE_PEERING_VPC_NAME \
    --subnet-mode=CUSTOM

Replace GKE_PEERING_VPC_NAME with the name of your VPC network to peer with the GKE cluster control plane.

3. Create a subnet for the GKE cluster nodes:

gcloud compute networks subnets create GKE_SUBNET_NAME \
    --network=GKE_PEERING_VPC_NAME \
    --range=GKE_SUBNET_RANGE \
    --region=REGION

Replace the following:

GKE_SUBNET_NAME: the name of the subnetwork that is intended to host the GKE cluster nodes.

GKE_PEERING_VPC_NAME: the name of your VPC network to peer with the GKE cluster control plane.

GKE_SUBNET_RANGE: the IP address range of GKE_SUBNET_NAME. For this tutorial, you can use 10.244.252.0/22.

REGION: the Google Cloud region hosting the GKE cluster. For this tutorial, you can use us-central1.

You've now set up two VPC networks in your own project, and they're ready to peer with other services.
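As an optional check before you continue, you can confirm that both networks and the subnet were created. These are plain list commands; the filter expression is just one way to narrow the output:

gcloud compute networks list \
    --filter="name=(PRIVATE_POOL_PEERING_VPC_NAME, GKE_PEERING_VPC_NAME)"

gcloud compute networks subnets list \
    --network=GKE_PEERING_VPC_NAME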

Creating a private GKE cluster


In this section, you create the private GKE cluster.

1. In Cloud Shell, create a GKE cluster with no client access to the public endpoint of the control plane.

gcloud container clusters create PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --enable-master-authorized-networks \
    --network=GKE_PEERING_VPC_NAME \
    --subnetwork=GKE_SUBNET_NAME \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-ip-alias \
    --master-ipv4-cidr=CLUSTER_CONTROL_PLANE_CIDR

Replace the following:

PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.

REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.

GKE_PEERING_VPC_NAME: the name of your VPC network to peer with the GKE cluster control plane.

GKE_SUBNET_NAME: the name of the subnetwork that hosts the GKE cluster nodes. Use the subnet that you created in the previous section.

CLUSTER_CONTROL_PLANE_CIDR: the IP address range of the GKE cluster control plane. It must have a /28 prefix. For this
tutorial, use 172.16.0.32/28.


Note: The flag --enable-private-endpoint enables Cloud Build to connect internally to the GKE cluster. If you are creating the GKE cluster in
the Google Cloud console, make sure to untick Access control plane using its external IP address.

You have now created a private GKE cluster that is peered with the VPC network in your own project.

2. Retrieve the name of the GKE cluster's VPC Network Peering. This VPC Network Peering was automatically created when you
created the GKE cluster.

export GKE_PEERING_NAME=$(gcloud container clusters describe PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --format='value(privateClusterConfig.peeringName)')

Replace the following:

PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.

REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.

3. Enable the export of custom routes in order to advertise the private pool network to the GKE cluster control plane:

gcloud compute networks peerings update $GKE_PEERING_NAME \
    --network=GKE_PEERING_VPC_NAME \
    --export-custom-routes \
    --no-export-subnet-routes-with-public-ip

Replace GKE_PEERING_VPC_NAME with the name of your VPC network to peer with the GKE cluster control plane.

For more information about custom routes, you can read Importing and exporting custom routes
(/vpc/docs/vpc-peering#importing-exporting-routes).
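As an optional sanity check, you can confirm that the control plane has a private endpoint and that the peering exists on your network. The format string below assumes the privateClusterConfig.privateEndpoint field name used by the GKE API:

# Print the private IP address of the cluster control plane:
gcloud container clusters describe PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --format='value(privateClusterConfig.privateEndpoint)'

# List the peerings on the network, including their route-exchange settings:
gcloud compute networks peerings list \
    --network=GKE_PEERING_VPC_NAME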

Creating a Cloud Build private pool


In this section, you create the Cloud Build private pool.

1. In Cloud Shell, allocate a named IP address range in the PRIVATE_POOL_PEERING_VPC_NAME VPC network for the Cloud Build
private pool:

gcloud compute addresses create RESERVED_RANGE_NAME \
    --global \
    --purpose=VPC_PEERING \
    --addresses=PRIVATE_POOL_NETWORK \
    --prefix-length=PRIVATE_POOL_PREFIX \
    --network=PRIVATE_POOL_PEERING_VPC_NAME

Replace the following:

RESERVED_RANGE_NAME: the name of the private IP address range that hosts the Cloud Build private pool.

PRIVATE_POOL_NETWORK: the first IP address of RESERVED_RANGE_NAME. For this tutorial, you can use 192.168.0.0.

PRIVATE_POOL_PREFIX: the prefix of RESERVED_RANGE_NAME. Each private pool created will use /24 from this range. For this
tutorial, you can use 20; this allows you to create up to sixteen pools.

PRIVATE_POOL_PEERING_VPC_NAME: the name of your VPC network to be peered with the Cloud Build private pool network.

The IP address range is global because when --purpose is VPC_PEERING, the named IP address range must be global.

2. Create a private connection between the VPC network that contains the Cloud Build private pool and
PRIVATE_POOL_PEERING_VPC_NAME:


gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=RESERVED_RANGE_NAME \
    --network=PRIVATE_POOL_PEERING_VPC_NAME

Replace the following:

RESERVED_RANGE_NAME: the name of the private IP address range that hosts the Cloud Build private pool.

PRIVATE_POOL_PEERING_VPC_NAME: the name of your VPC network to be peered with the Cloud Build private pool network.

3. Enable the export of custom routes in order to advertise the GKE cluster control plane network to the private pool:

gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=PRIVATE_POOL_PEERING_VPC_NAME \
    --export-custom-routes \
    --no-export-subnet-routes-with-public-ip

Replace PRIVATE_POOL_PEERING_VPC_NAME with the name of your VPC network to be peered with the Cloud Build private pool
network.

4. Create a Cloud Build private pool that is peered with PRIVATE_POOL_PEERING_VPC_NAME:

gcloud builds worker-pools create PRIVATE_POOL_NAME \
    --region=REGION \
    --peered-network=projects/$GOOGLE_CLOUD_PROJECT/global/networks/PRIVATE_POOL_PEERING_VPC_NAME

Replace the following:

PRIVATE_POOL_NAME: the name of the Cloud Build private pool.

REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.

You have now created a Cloud Build private pool and peered it with the VPC network in your own project.
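To confirm the pool's configuration, including the network it is peered with, you can describe it (optional):

gcloud builds worker-pools describe PRIVATE_POOL_NAME \
    --region=REGION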

Creating a Cloud VPN connection between your two VPC networks


In your own project, you now have a VPC network peered with the Cloud Build private pool and a second VPC network peered with the
private GKE cluster.

In this section, you create a Cloud VPN connection between the two VPC networks in your project. This connection completes the route
and allows the Cloud Build private pools to access the GKE cluster.

1. In Cloud Shell, create two HA VPN gateways that connect to each other. To create these gateways, follow the instructions in Creating
two fully configured HA VPN gateways that connect to each other
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#creating-ha-gw-2-gw-and-tunnel). The setup is complete after you have created the
BGP sessions. While following these instructions, use the following values:

PRIVATE_POOL_PEERING_VPC_NAME for NETWORK_1

GKE_PEERING_VPC_NAME for NETWORK_2

REGION for REGION_1 and REGION_2

2. Configure each of the four BGP sessions you created to advertise the routes to the private pool VPC network and the GKE cluster
control plane VPC network:

gcloud compute routers update-bgp-peer ROUTER_NAME_1 \
    --peer-name=PEER_NAME_GW1_IF0 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX

gcloud compute routers update-bgp-peer ROUTER_NAME_1 \
    --peer-name=PEER_NAME_GW1_IF1 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX

gcloud compute routers update-bgp-peer ROUTER_NAME_2 \
    --peer-name=PEER_NAME_GW2_IF0 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=CLUSTER_CONTROL_PLANE_CIDR

gcloud compute routers update-bgp-peer ROUTER_NAME_2 \
    --peer-name=PEER_NAME_GW2_IF1 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=CLUSTER_CONTROL_PLANE_CIDR

Where the following values are the same names that you used when you created the two HA VPN gateways:

ROUTER_NAME_1

PEER_NAME_GW1_IF0

PEER_NAME_GW1_IF1

ROUTER_NAME_2

PEER_NAME_GW2_IF0

PEER_NAME_GW2_IF1
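To verify that all four BGP sessions are established before moving on, you can query each Cloud Router's status. The format string assumes the result.bgpPeerStatus field returned by the Compute Engine API; each session should report a state of Established:

gcloud compute routers get-status ROUTER_NAME_1 \
    --region=REGION \
    --format='value(result.bgpPeerStatus[].state)'

gcloud compute routers get-status ROUTER_NAME_2 \
    --region=REGION \
    --format='value(result.bgpPeerStatus[].state)'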

Enabling Cloud Build access to the GKE cluster control plane


Now that you have a VPN connection between the two VPC networks in your project, enable Cloud Build access to the GKE cluster control
plane.

1. In Cloud Shell, add the private pool network range to the control plane authorized networks in GKE:

gcloud container clusters update PRIVATE_CLUSTER_NAME \
    --enable-master-authorized-networks \
    --region=REGION \
    --master-authorized-networks=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX

Replace the following:

PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.

REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.

PRIVATE_POOL_NETWORK: the first IP address of RESERVED_RANGE_NAME. For this tutorial, you can use 192.168.0.0.

PRIVATE_POOL_PREFIX: the prefix of RESERVED_RANGE_NAME. Each private pool created will use /24 from this range. For this
tutorial, you can use 20; this allows you to create up to sixteen pools.

2. Allow the Cloud Build service account to access the GKE cluster control plane:

export PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_CLOUD_PROJECT --format 'value(projectNumber)')

gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
    --member=serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
    --role=roles/container.developer


The Cloud Build private pool workers can now access the GKE cluster control plane.
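If you want to double-check both settings, the following optional commands show one way to inspect them. The format and filter expressions assume the field names used by the GKE and Resource Manager APIs:

# List the CIDR ranges authorized to reach the control plane:
gcloud container clusters describe PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --format='value(masterAuthorizedNetworksConfig.cidrBlocks[].cidrBlock)'

# Confirm that the Cloud Build service account holds roles/container.developer:
gcloud projects get-iam-policy $GOOGLE_CLOUD_PROJECT \
    --flatten='bindings[].members' \
    --filter='bindings.role:roles/container.developer' \
    --format='value(bindings.members)'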

Verifying the solution


In this section, you verify that the solution works by running the command kubectl get nodes in a build step that runs in the
private pool.

1. In Cloud Shell, create a temporary folder with a Cloud Build configuration file that runs the command kubectl get nodes:

mkdir private-pool-test && cd private-pool-test

cat > cloudbuild.yaml <<EOF


steps:
- name: "gcr.io/cloud-builders/kubectl"
args: ['get', 'nodes']
env:
- 'CLOUDSDK_COMPUTE_REGION=asia-south1 edit'
- 'CLOUDSDK_CONTAINER_CLUSTER=blinktradeweb-uat-k8s-cluster edit'
options:
workerPool:
'projects/$GOOGLE_CLOUD_PROJECT/locations/asia-south1 edit/workerPools/private-pool-pvpc edit'
EOF

Replace the following:

REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.

PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.

PRIVATE_POOL_NAME: the name of the Cloud Build private pool.

2. Start the build job:

gcloud builds submit --config=cloudbuild.yaml

3. Verify that the output is the list of nodes in the GKE cluster. The build log shown in the console includes a table similar to this:

NAME                                      STATUS   ROLES    AGE   VERSION
gke-private-default-pool-3ec34262-7lq9    Ready    <none>   9d    v1.19.9-gke.1900
gke-private-default-pool-4c517758-zfqt    Ready    <none>   9d    v1.19.9-gke.1900
gke-private-default-pool-d1a885ae-4s9c    Ready    <none>   9d    v1.19.9-gke.1900

You have now verified that the workers from the private pool can access the GKE cluster. This access lets you use Cloud Build to deploy
(/build/docs/deploying-builds/deploy-gke) your application on this private GKE cluster.

Troubleshooting
If you encounter problems with this tutorial, see the following documents:

Cloud VPN troubleshooting (/network-connectivity/docs/vpn/support/troubleshooting)

VPC Network Peering troubleshooting (/vpc/docs/using-vpc-peering#troubleshooting)

Private GKE cluster troubleshooting (https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#troubleshooting)

Clean up


To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the
resources, or keep the project and delete the individual resources.

Delete the project

Caution: Deleting a project has the following effects:

Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other
work you've done in the project.

Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To
preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole
project.

If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

1. In the Google Cloud console, go to the Manage resources page.

Go to Manage resources (https://console.cloud.google.com/iam-admin/projects)

2. In the project list, select the project that you want to delete, and then click Delete.

3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the individual resources


1. In Cloud Shell, delete the GKE cluster:

gcloud container clusters delete PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --async

When you run this command, the VPC Network Peering is automatically deleted.

2. Delete the Cloud Build private pool:

gcloud builds worker-pools delete PRIVATE_POOL_NAME \
    --region=REGION

3. Delete the private connection between the service producer VPC network and PRIVATE_POOL_PEERING_VPC_NAME:

gcloud services vpc-peerings delete \
    --network=PRIVATE_POOL_PEERING_VPC_NAME \
    --async

4. Delete the named IP address range used for the private pool:

gcloud compute addresses delete RESERVED_RANGE_NAME \
    --global

5. Delete the four VPN tunnels. Use the same names that you specified at Create VPN tunnels
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#create_vpn_tunnels).

gcloud compute vpn-tunnels delete \
    TUNNEL_NAME_GW1_IF0 \
    TUNNEL_NAME_GW1_IF1 \
    TUNNEL_NAME_GW2_IF0 \
    TUNNEL_NAME_GW2_IF1 \
    --region=REGION

6. Delete the two Cloud Routers. Use the same names that you specified at Create Cloud Routers
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#creates).

gcloud compute routers delete \
    ROUTER_NAME_1 \
    ROUTER_NAME_2 \
    --region=REGION

7. Delete the two VPN Gateways. Use the same names that you specified at Create the HA VPN gateways
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#create-ha-gateways).

gcloud compute vpn-gateways delete \
    GW_NAME_1 \
    GW_NAME_2 \
    --region=REGION

8. Delete GKE_SUBNET_NAME, which is the subnetwork that hosts the GKE cluster nodes:

gcloud compute networks subnets delete GKE_SUBNET_NAME \
    --region=REGION

9. Delete the two VPC networks PRIVATE_POOL_PEERING_VPC_NAME and GKE_PEERING_VPC_NAME:

gcloud compute networks delete \
    PRIVATE_POOL_PEERING_VPC_NAME \
    GKE_PEERING_VPC_NAME

What's next
Learn how to run builds in a private pool (/build/docs/private-pools/run-builds-in-private-pool).

Run a proxy within the private GKE cluster (/architecture/creating-kubernetes-engine-private-clusters-with-net-proxies) that has access to the
control plane.

Learn how to deploy to GKE from Cloud Build (/build/docs/deploying-builds/deploy-gke).

Try out other Google Cloud features for yourself. Have a look at our tutorials (/architecture).

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the Apache 2.0 License (https://www.apache.org/licenses/LICENSE-2.0). For details, see the Google Developers Site Policies
(https://developers.google.com/site-policies). Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2023-10-30 UTC.
