Access private Google Kubernetes Engine clusters with Cloud Build private pools | Cloud Build Documentation | Google Cloud
The approach in this tutorial also applies to other resources that are reachable only through VPC Network Peering, for example:
A Memorystore instance
A Compute Engine instance running in a different VPC network than the one peered with the Cloud Build private pool
Cloud Build private pools and GKE cluster control planes (/kubernetes-engine/docs/concepts/cluster-architecture#control_plane) both run in
Google-owned VPC networks (/vpc/docs). These VPC networks are peered to your own VPC network on Google Cloud. However, VPC
Network Peering doesn't support transitive peering (/vpc/docs/vpc-peering#restrictions), which can be a restriction when using Cloud Build
private pools. This tutorial presents a solution that uses Cloud VPN to allow workers in a Cloud Build private pool to access the control
plane of a private GKE cluster.
This tutorial assumes that you're familiar with Google Kubernetes Engine, Cloud Build, the gcloud command, VPC Network Peering, and
Cloud VPN.
Architecture overview
When you create a private GKE cluster with no client access to the public endpoint, clients can only access the GKE cluster control plane
(/kubernetes-engine/docs/concepts/cluster-architecture#control_plane) using its private IP address
(/kubernetes-engine/docs/concepts/private-cluster-concept#endpoints_in_private_clusters). Clients like kubectl can communicate with the control
plane only if they run on an instance that has access to the VPC network and is in an authorized network
(/kubernetes-engine/docs/how-to/authorized-networks#benefits_with_private_clusters).
If you want to use Cloud Build to deploy (/build/docs/deploying-builds/deploy-gke) your application on this private GKE cluster, then you need to
use Cloud Build private pools to access the GKE clusters. Private pools are a set of worker instances that run in a Google Cloud project
owned by Google, and are peered to your VPC network using a VPC Network Peering connection. In this setup, the worker instances are
allowed to communicate with the private IP address of the GKE cluster control plane.
However, the GKE cluster control plane also runs in a Google-owned project and is peered to your VPC network using a peering
connection. VPC Network Peering doesn't support transitive peering (/vpc/docs/vpc-peering#restrictions), so packets can't be routed directly
between the Cloud Build private pool and the GKE cluster control plane.
To enable Cloud Build worker instances to access the GKE cluster control plane, you can peer the private pool and the GKE cluster control
plane with two VPC networks that you own and then connect these two VPC networks using Cloud VPN. This peering and connection
allows each side of the Cloud VPN tunnel to advertise the private pool and GKE cluster control plane networks, thus completing the route.
The following architectural diagram shows the resources that are used in this tutorial:
We recommend creating all resources used in this tutorial in the same Google Cloud region (/build/docs/locations) for low latency. The VPN
tunnel can traverse two different regions if this inter-region communication is needed for your own implementation.
Objectives
Create a private GKE cluster.
Enable routing of packets across two VPC Network Peerings and a Cloud VPN connection.
Costs
In this document, you use the following billable components of Google Cloud:
To generate a cost estimate based on your projected usage, use the pricing calculator (/products/calculator). New Google Cloud users might
be eligible for a free trial (/free-trial).
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created.
For more information, see Clean up (#clean-up).
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish
these steps, you can delete the project, removing all resources associated with the project.
2. Make sure that billing is enabled for your Google Cloud project (/billing/docs/how-to/verify-billing-enabled#console).
3. Enable the Cloud Build, Google Kubernetes Engine, and Service Networking APIs.
At the bottom of the Google Cloud console, a Cloud Shell (/shell/docs/how-cloud-shell-works) session starts and displays a command-
line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
1. In Cloud Shell, create the first VPC network (called "Private pool peering VPC network" in the preceding diagram). You don't need to
create subnets in this network.
Replace PRIVATE_POOL_PEERING_VPC_NAME with the name of your VPC network to be peered with the Cloud Build private pool
network.
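The command for this step is missing from this extract; a minimal sketch of what it likely looks like, using the placeholder defined above:

```shell
# Create the VPC network to be peered with the Cloud Build private pool.
# Custom subnet mode: no subnets are created automatically (none are needed here).
gcloud compute networks create PRIVATE_POOL_PEERING_VPC_NAME \
    --subnet-mode=CUSTOM
```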
2. Create the second VPC network (called "GKE peering VPC network" in the preceding diagram):
Replace GKE_PEERING_VPC_NAME with the name of your VPC network to peer with the GKE cluster control plane.
GKE_SUBNET_NAME: the name of the subnetwork that is intended to host the GKE cluster nodes.
GKE_PEERING_VPC_NAME: the name of your VPC network to peer with the GKE cluster control plane.
GKE_SUBNET_RANGE: the IP address range of GKE_SUBNET_NAME. For this tutorial, you can use 10.244.252.0/22.
REGION: the Google Cloud region hosting the GKE cluster. For this tutorial, you can use us-central1.
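The commands for this step were lost in extraction; a sketch, using the placeholders described above:

```shell
# Create the VPC network to be peered with the GKE cluster control plane.
gcloud compute networks create GKE_PEERING_VPC_NAME \
    --subnet-mode=CUSTOM

# Create the subnetwork that hosts the GKE cluster nodes.
gcloud compute networks subnets create GKE_SUBNET_NAME \
    --network=GKE_PEERING_VPC_NAME \
    --range=GKE_SUBNET_RANGE \
    --region=REGION
```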
You've now set up two VPC networks in your own project, and they're ready to peer with other services.
1. In Cloud Shell, create a GKE cluster with no client access to the public endpoint of the control plane.
REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.
GKE_PEERING_VPC_NAME: the name of your VPC network to peer with the GKE cluster control plane.
GKE_SUBNET_RANGE: the IP address range of GKE_SUBNET_NAME. For this tutorial, you can use 10.244.252.0/22.
CLUSTER_CONTROL_PLANE_CIDR: the IP address range of the GKE cluster control plane. It must have a /28 prefix. For this
tutorial, use 172.16.0.32/28.
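The cluster-creation command itself is missing from this extract; the following sketch shows the flags implied by the surrounding text (CLUSTER_NAME is an illustrative placeholder, and the exact flag set in the original tutorial may differ):

```shell
# Create a private GKE cluster whose control plane has no public endpoint.
gcloud container clusters create CLUSTER_NAME \
    --region=REGION \
    --network=GKE_PEERING_VPC_NAME \
    --subnetwork=GKE_SUBNET_NAME \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-ipv4-cidr=CLUSTER_CONTROL_PLANE_CIDR
```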
Note: The --enable-private-endpoint flag disables client access to the cluster's public endpoint, so Cloud Build connects to the GKE cluster internally. If you create the GKE cluster in the Google Cloud console, make sure to clear the Access control plane using its external IP address checkbox.
You have now created a private GKE cluster that is peered with the VPC network in your own project.
2. Retrieve the name of the GKE cluster's VPC Network Peering. This VPC Network Peering was automatically created when you
created the GKE cluster.
REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.
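The command was stripped from this extract; one way to retrieve the peering name (CLUSTER_NAME is the placeholder cluster name used earlier):

```shell
# Print the name of the automatically created VPC Network Peering.
gcloud container clusters describe CLUSTER_NAME \
    --region=REGION \
    --format='value(privateClusterConfig.peeringName)'
```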
3. Enable the export of custom routes in order to advertise the private pool network to the GKE cluster control plane:
Replace GKE_PEERING_VPC_NAME with the name of your VPC network to peer with the GKE cluster control plane.
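A sketch of the missing command, assuming GKE_PEERING_NAME stands for the peering name retrieved in the previous step:

```shell
# Advertise custom routes (the private pool range) to the GKE control plane.
gcloud compute networks peerings update GKE_PEERING_NAME \
    --network=GKE_PEERING_VPC_NAME \
    --export-custom-routes
```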
For more information about custom routes, you can read Importing and exporting custom routes
(/vpc/docs/vpc-peering#importing-exporting-routes).
1. In Cloud Shell, allocate a named IP address range in the PRIVATE_POOL_PEERING_VPC_NAME VPC network for the Cloud Build
private pool:
RESERVED_RANGE_NAME: the name of the private IP address range that hosts the Cloud Build private pool.
PRIVATE_POOL_NETWORK: the first IP address of RESERVED_RANGE_NAME. For this tutorial, you can use 192.168.0.0.
PRIVATE_POOL_PREFIX: the prefix of RESERVED_RANGE_NAME. Each private pool created will use /24 from this range. For this
tutorial, you can use 20; this allows you to create up to sixteen pools.
PRIVATE_POOL_PEERING_VPC_NAME: the name of your VPC network to be peered with the Cloud Build private pool network.
The IP address range is created as global because a named IP address range with --purpose=VPC_PEERING must be global.
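The allocation command is missing from this extract; a sketch built from the placeholders above:

```shell
# Reserve a named, global IP address range for the Cloud Build private pool.
gcloud compute addresses create RESERVED_RANGE_NAME \
    --global \
    --purpose=VPC_PEERING \
    --addresses=PRIVATE_POOL_NETWORK \
    --prefix-length=PRIVATE_POOL_PREFIX \
    --network=PRIVATE_POOL_PEERING_VPC_NAME
```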
2. Create a private connection between the VPC network that contains the Cloud Build private pool and
PRIVATE_POOL_PEERING_VPC_NAME:
RESERVED_RANGE_NAME: the name of the private IP address range that hosts the Cloud Build private pool.
PRIVATE_POOL_PEERING_VPC_NAME: the name of your VPC network to be peered with the Cloud Build private pool network.
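A sketch of the missing command for this step:

```shell
# Peer the Service Networking producer network (which hosts the private pool)
# with your VPC network, using the reserved range.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=RESERVED_RANGE_NAME \
    --network=PRIVATE_POOL_PEERING_VPC_NAME
```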
3. Enable the export of custom routes in order to advertise the GKE cluster control plane network to the private pool:
Replace PRIVATE_POOL_PEERING_VPC_NAME with the name of your VPC network to be peered with the Cloud Build private pool
network.
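A sketch of the missing command; servicenetworking-googleapis-com is the peering name that the previous step typically creates (verify it with gcloud compute networks peerings list):

```shell
# Advertise custom routes (the GKE control plane range) to the private pool.
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=PRIVATE_POOL_PEERING_VPC_NAME \
    --export-custom-routes
```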
REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.
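The step that creates the private pool itself is missing from this extract; given the surviving REGION placeholder and the concluding sentence below, it presumably resembles the following sketch (PRIVATE_POOL_NAME and PROJECT_ID are illustrative placeholders):

```shell
# Create a Cloud Build private pool attached to the peered VPC network.
gcloud builds worker-pools create PRIVATE_POOL_NAME \
    --region=REGION \
    --peered-network=projects/PROJECT_ID/global/networks/PRIVATE_POOL_PEERING_VPC_NAME
```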
You have now created a Cloud Build private pool and peered it with the VPC network in your own project.
In this section, you create a Cloud VPN connection between the two VPC networks in your project. This connection completes the route
and allows the Cloud Build private pools to access the GKE cluster.
1. In Cloud Shell, create two HA VPN gateways that connect to each other. To create these gateways, follow the instructions in Creating
two fully configured HA VPN gateways that connect to each other
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#creating-ha-gw-2-gw-and-tunnel). The setup is complete after you have created the
BGP sessions. While following these instructions, use the following values:
2. Configure each of the four BGP sessions you created to advertise the routes to the private pool VPC network and the GKE cluster
control plane VPC network:
Where the following values are the same names that you used when you created the two HA VPN gateways:
ROUTER_NAME_1
PEER_NAME_GW1_IF0
PEER_NAME_GW1_IF1
ROUTER_NAME_2
PEER_NAME_GW2_IF0
PEER_NAME_GW2_IF1
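The four commands themselves are missing from this extract; each BGP session update presumably follows this pattern (shown here for the first session on each router, with advertised ranges inferred from the placeholders above):

```shell
# Router 1 (private pool side): advertise the private pool range.
gcloud compute routers update-bgp-peer ROUTER_NAME_1 \
    --peer-name=PEER_NAME_GW1_IF0 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX

# Router 2 (GKE side): advertise the control plane range.
gcloud compute routers update-bgp-peer ROUTER_NAME_2 \
    --peer-name=PEER_NAME_GW2_IF0 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=CLUSTER_CONTROL_PLANE_CIDR
```

Repeat the same update for the IF1 peer on each router.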
1. In Cloud Shell, add the private pool network range to the control plane authorized networks in GKE:
REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.
PRIVATE_POOL_NETWORK: the first IP address of RESERVED_RANGE_NAME. For this tutorial, you can use 192.168.0.0.
PRIVATE_POOL_PREFIX: the prefix of RESERVED_RANGE_NAME. Each private pool created will use /24 from this range. For this
tutorial, you can use 20; this allows you to create up to sixteen pools.
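A sketch of the missing command (CLUSTER_NAME is the placeholder cluster name used earlier):

```shell
# Add the private pool range to the control plane's authorized networks.
gcloud container clusters update CLUSTER_NAME \
    --region=REGION \
    --enable-master-authorized-networks \
    --master-authorized-networks=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX
```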
2. Allow the Cloud Build service account to access the GKE cluster control plane:
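The command is missing from this extract; granting the legacy Cloud Build service account the Kubernetes Engine Developer role would look roughly like this (PROJECT_ID and PROJECT_NUMBER are placeholders for your project):

```shell
# Let the Cloud Build service account deploy to the GKE cluster.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
    --role=roles/container.developer
```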
The Cloud Build private pool can now access the GKE cluster control plane.
1. In Cloud Shell, create a temporary folder with a Cloud Build configuration file that runs the command kubectl get nodes:
REGION: the region for the GKE cluster. In this tutorial, use us-central1 for the region, the same region as the one you used for
the VPC networks.
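The build configuration and the submit step are missing from this extract; a sketch of what they plausibly look like (CLUSTER_NAME, PROJECT_ID, and PRIVATE_POOL_NAME are placeholders, and the exact builder image used by the original tutorial may differ):

```shell
mkdir private-pool-test && cd private-pool-test

# A build config that runs "kubectl get nodes" against the private cluster.
cat > cloudbuild.yaml <<EOF
steps:
- name: "gcr.io/cloud-builders/kubectl"
  args: ["get", "nodes"]
  env:
  - "CLOUDSDK_COMPUTE_REGION=REGION"
  - "CLOUDSDK_CONTAINER_CLUSTER=CLUSTER_NAME"
options:
  workerPool: "projects/PROJECT_ID/locations/REGION/workerPools/PRIVATE_POOL_NAME"
EOF

# Submit the build; it runs in the private pool.
gcloud builds submit --config=cloudbuild.yaml --region=REGION
```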
3. Verify that the output is the list of nodes in the GKE cluster. The build log shown in the console includes a table similar to this:
You have now verified that the workers from the private pool can access the GKE cluster. This access lets you use Cloud Build to deploy
(/build/docs/deploying-builds/deploy-gke) your application on this private GKE cluster.
Troubleshooting
If you encounter problems with this tutorial, see the following documents:
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the
resources, or keep the project and delete the individual resources.
Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other
work you've done in the project.
Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To
preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole
project.
If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.
2. In the project list, select the project that you want to delete, and then click Delete.
3. In the dialog, type the project ID, and then click Shut down to delete the project.
When you run this command, the VPC Network Peering is automatically deleted.
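The command that this sentence refers to is missing from the extract; it is presumably the cluster deletion, for example:

```shell
# Delete the private GKE cluster (this also removes its VPC Network Peering).
gcloud container clusters delete CLUSTER_NAME \
    --region=REGION \
    --async
```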
3. Delete the private connection between the service producer VPC network and PRIVATE_POOL_PEERING_VPC_NAME:
4. Delete the named IP address range used for the private pool:
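Sketches of the missing commands for steps 3 and 4:

```shell
# Step 3: remove the Service Networking private connection.
gcloud services vpc-peerings delete \
    --service=servicenetworking.googleapis.com \
    --network=PRIVATE_POOL_PEERING_VPC_NAME

# Step 4: release the reserved global IP address range.
gcloud compute addresses delete RESERVED_RANGE_NAME --global
```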
5. Delete the four VPN tunnels. Use the same names that you specified at Create VPN tunnels
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#create_vpn_tunnels).
6. Delete the two Cloud Routers. Use the same names that you specified at Create Cloud Routers
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#creates).
7. Delete the two VPN Gateways. Use the same names that you specified at Create the HA VPN gateways
(/network-connectivity/docs/vpn/how-to/creating-ha-vpn2#create-ha-gateways).
8. Delete GKE_SUBNET_NAME, which is the subnetwork that hosts the GKE cluster nodes:
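A sketch of the missing command:

```shell
# Delete the subnetwork that hosted the GKE cluster nodes.
gcloud compute networks subnets delete GKE_SUBNET_NAME \
    --region=REGION
```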
What's next
Learn how to run builds in a private pool (/build/docs/private-pools/run-builds-in-private-pool).
Run a proxy within the private GKE cluster (/architecture/creating-kubernetes-engine-private-clusters-with-net-proxies) that has access to the
control plane.
Try out other Google Cloud features for yourself. Have a look at our tutorials (/architecture).
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the Apache 2.0 License (https://www.apache.org/licenses/LICENSE-2.0). For details, see the Google Developers Site Policies
(https://developers.google.com/site-policies). Java is a registered trademark of Oracle and/or its affiliates.