Commit cbb153d

[zh] Resync node-overprovisioning.md
1 parent a9a82f3 commit cbb153d

3 files changed: +70 −31 lines


content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md

Lines changed: 67 additions & 29 deletions
````diff
@@ -4,53 +4,57 @@ content_type: task
 weight: 10
 ---
 <!--
-title: Overprovision Node Capacity For A Cluster
+title: Overprovision Node Capacity For A Cluster
 content_type: task
 weight: 10
 -->
 
 <!-- overview -->
 
 <!--
-This page guides you through configuring {{< glossary_tooltip text="Node" term_id="node" >}} overprovisioning in your Kubernetes cluster. Node overprovisioning is a strategy that proactively reserves a portion of your cluster's compute resources. This reservation helps reduce the time required to schedule new pods during scaling events, enhancing your cluster's responsiveness to sudden spikes in traffic or workload demands.
-
-By maintaining some unused capacity, you ensure that resources are immediately available when new pods are created, preventing them from entering a pending state while the cluster scales up.
+This page guides you through configuring {{< glossary_tooltip text="Node" term_id="node" >}}
+overprovisioning in your Kubernetes cluster. Node overprovisioning is a strategy that proactively
+reserves a portion of your cluster's compute resources. This reservation helps reduce the time
+required to schedule new pods during scaling events, enhancing your cluster's responsiveness
+to sudden spikes in traffic or workload demands.
+
+By maintaining some unused capacity, you ensure that resources are immediately available when
+new pods are created, preventing them from entering a pending state while the cluster scales up.
 -->
 本页指导你在 Kubernetes 集群中配置{{< glossary_tooltip text="节点" term_id="node" >}}超配。
 节点超配是一种主动预留部分集群计算资源的策略。这种预留有助于减少在扩缩容事件期间调度新 Pod 所需的时间,
 从而增强集群对突发流量或突发工作负载需求的响应能力。
 
-通过保持一些未使用的容量,你确保在新 Pod 被创建时资源可以立即可用,防止 Pod 在集群扩缩容时进入 Pending 状态。
+通过保持一些未使用的容量,确保在新 Pod 被创建时资源可以立即可用,防止 Pod 在集群扩缩容时进入 Pending 状态。
 
 ## {{% heading "prerequisites" %}}
 
 <!--
-- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with
   your cluster.
 - You should already have a basic understanding of
   [Deployments](/docs/concepts/workloads/controllers/deployment/),
-  Pod {{<glossary_tooltip text="priority" term_id="pod-priority">}},
-  and [PriorityClasses](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).
+  Pod {{< glossary_tooltip text="priority" term_id="pod-priority" >}},
+  and {{< glossary_tooltip text="PriorityClasses" term_id="priority-class" >}}.
 - Your cluster must be set up with an [autoscaler](/docs/concepts/cluster-administration/cluster-autoscaling/)
   that manages nodes based on demand.
 -->
 - 你需要有一个 Kubernetes 集群,并且 kubectl 命令行工具必须被配置为与你的集群通信。
 - 你应该已经基本了解了 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)、Pod
   {{<glossary_tooltip text="优先级" term_id="pod-priority">}}和
-  [PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
-- 你的集群必须设置一个基于需求管理节点的
-  [Autoscaler](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)
+  {{< glossary_tooltip text="PriorityClass" term_id="priority-class" >}}。
+- 你的集群必须设置一个基于需求管理节点的[自动扩缩程序](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)
 
 <!-- steps -->
 
 <!--
-## Create a placeholder Deployment
+## Create a PriorityClass
 
 Begin by defining a PriorityClass for the placeholder Pods. First, create a PriorityClass with a
 negative priority value, that you will shortly assign to the placeholder pods.
 Later, you will set up a Deployment that uses this PriorityClass
 -->
-## 创建占位 Deployment {#create-a-placeholder-deployment}
+## 创建 PriorityClass {#create-a-priorityclass}
 
 首先为占位 Pod 定义一个 PriorityClass。
 先创建一个优先级值为负数的 PriorityClass,稍后将其分配给占位 Pod。
````
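For context while reading this hunk: the PriorityClass it tells readers to create is the `placeholder` class touched by the third file in this commit. Assembled from that file's diff, the manifest reads:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: placeholder # 这些 Pod 表示占位容量
value: -1000
globalDefault: false
description: "Negative priority for placeholder pods to enable overprovisioning."
```

The negative `value` is the point of the technique: when capacity runs short, the scheduler preempts these pods before any regular workload.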
````diff
@@ -72,25 +76,50 @@ You will next define a Deployment that uses the negative-priority PriorityClass
 When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capacity. Any time there
 is a capacity shortage, the control plane will pick one of these placeholder pods as the first candidate to
 {{< glossary_tooltip text="preempt" term_id="preemption" >}}.
-
-Review the sample manifest:
 -->
-接下来,你将定义一个 Deployment,使用优先级值为负数的 PriorityClass 并运行最小容器
+接下来,你将定义一个 Deployment,使用优先级值为负数的 PriorityClass 并运行最小的容器
 当你将此 Deployment 添加到集群中时,Kubernetes 会运行这些占位 Pod 以预留容量。
 每当出现容量短缺时,控制面将选择这些占位 Pod
 中的一个作为第一个候选者进行{{< glossary_tooltip text="抢占" term_id="preemption" >}}。
 
+<!--
+## Run Pods that request node capacity
+
+Review the sample manifest:
+-->
+## 运行请求节点容量的 Pod {#run-pods-that-request-node-capacity}
+
 查看样例清单:
 
 {{% code_sample language="yaml" file="deployments/deployment-with-capacity-reservation.yaml" %}}
 
 <!--
+### Pick a namespace for the placeholder pods
+
+You should select, or create, a {{< glossary_tooltip term_id="namespace" text="namespace">}}
+that the placeholder Pods will go into.
+-->
+### 为占位 Pod 挑选一个命名空间 {#pick-a-namespace-for-the-placeholder-pods}
+
+你应选择或创建占位 Pod 要进入的{{< glossary_tooltip term_id="namespace" text="命名空间">}}。
+
+<!--
+### Create the placeholder deployment
+
 Create a Deployment based on that manifest:
+
+```shell
+# Change the namespace name "example"
+kubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
+```
 -->
+### 创建占位 Deployment {#create-the-placeholder-deployment}
+
 基于该清单创建 Deployment:
 
 ```shell
-kubectl apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
+# 你要更改命名空间名称 "example"
+kubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
+```
 ```
 
 <!--
````
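The manifest this hunk applies (`deployments/deployment-with-capacity-reservation.yaml`) appears only partially in this commit (the second diff below shows its first few lines). As a rough sketch of what such a placeholder Deployment looks like, where the labels and the pause-image tag are assumptions rather than the file's actual contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-reservation
  # 你应决定要将此 Deployment 部署到哪个命名空间
spec:
  replicas: 1
  selector:
    matchLabels:
      app: capacity-placeholder      # assumed label, not from the actual file
  template:
    metadata:
      labels:
        app: capacity-placeholder    # assumed label, not from the actual file
    spec:
      priorityClassName: placeholder # the negative-priority class from this commit
      containers:
      - name: pause
        # A minimal do-nothing container; the exact image tag is an assumption
        image: registry.k8s.io/pause:3.6
        resources:
          requests:
            cpu: "100m"      # per-pod share of the example's 500m total
            memory: "200Mi"  # per-pod share of the example's ~1Gi total
```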
````diff
@@ -108,13 +137,22 @@ To edit the Deployment, modify the `resources` section in the Deployment manifest
 to set appropriate requests and limits. You can download that file locally and then edit it
 with whichever text editor you prefer.
 
-For example, to reserve 500m CPU and 1Gi memory across 5 placeholder pods,
-define the resource requests and limits for a single placeholder pod as follows:
+You can also edit the Deployment using kubectl:
 -->
 要编辑 Deployment,可以修改 Deployment 清单文件中的 `resources` 一节,
 设置合适的 `requests` 和 `limits`。
 你可以将该文件下载到本地,然后用自己喜欢的文本编辑器进行编辑。
 
+你也可以使用 kubectl 来编辑 Deployment:
+
+```shell
+kubectl edit deployment capacity-reservation
+```
+
+<!--
+For example, to reserve 500m CPU and 1Gi memory across 5 placeholder pods,
+define the resource requests and limits for a single placeholder pod as follows:
+-->
 例如,要为 5 个占位 Pod 预留 500m CPU 和 1Gi 内存,请为单个占位 Pod 定义以下资源请求和限制:
 
 ```yaml
````
````diff
@@ -130,23 +168,23 @@ define the resource requests and limits for a single placeholder pod as follows:
 ## Set the desired replica count
 
 ### Calculate the total reserved resources
-
-For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:
-
-Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity `500m`)
-Total Memory reserved: 5 × 200MiB = 1GiB (in the Pod specification, you'll write `1 Gi`)
-
-To scale the Deployment, adjust the number of replicas based on your cluster's size and expected workload:
 -->
 ## 设置所需的副本数量 {#set-the-desired-replica-count}
 
 ### 计算总预留资源 {#calculate-the-total-reserved-resources}
 
-例如,有 5 个副本,每个预留 0.1 CPU 和 200MiB 内存:
+<!-- trailing whitespace in next paragraph is significant -->
 
-CPU 预留总量:5 × 0.1 = 0.5(在 Pod 规约中,你将写入数量 `500m`)
+<!--
+For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:
+Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity `500m`)
+Total memory reserved: 5 × 200MiB = 1GiB (in the Pod specification, you'll write `1 Gi`)
 
-内存预留总量:5 × 200MiB = 1GiB(在 Pod 规约中,你将写入 `1 Gi`)
+To scale the Deployment, adjust the number of replicas based on your cluster's size and expected workload:
+-->
+例如,有 5 个副本,每个预留 0.1 CPU 和 200MiB 内存:
+CPU 预留总量:5 × 0.1 = 0.5(在 Pod 规约中,你将写入数量 `500m`)
+内存预留总量:5 × 200MiB = 1GiB(在 Pod 规约中,你将写入 `1 Gi`)
 
 要扩缩容 Deployment,请基于集群的大小和预期的工作负载调整副本数:
 
````
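The replica arithmetic in the hunk above can be sanity-checked mechanically; a small shell sketch using the example's own numbers (5 replicas, 0.1 CPU and 200MiB each):

```shell
# Values from the example in the hunk above
replicas=5
cpu_per_pod_millicores=100   # 0.1 CPU = 100m
mem_per_pod_mib=200          # 200MiB per pod

# Total capacity the placeholder Deployment reserves
total_cpu_millicores=$((replicas * cpu_per_pod_millicores))
total_mem_mib=$((replicas * mem_per_pod_mib))

echo "Total CPU reserved: ${total_cpu_millicores}m"   # 500m, i.e. 0.5 CPU
echo "Total memory reserved: ${total_mem_mib}MiB"     # 1000MiB, which the page rounds to 1Gi
```

Note that 5 × 200MiB is 1000MiB, slightly under 1Gi (1024MiB); the page's `1 Gi` is a round-up.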
content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml

Lines changed: 1 addition & 0 deletions

````diff
@@ -2,6 +2,7 @@ apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: capacity-reservation
+  # 你应决定要将此 Deployment 部署到哪个命名空间
 spec:
   replicas: 1
   selector:
````
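The comment added in this hunk asks the reader to decide on a namespace. If you create one named `example` (the hypothetical name used by the `kubectl --namespace example` commands elsewhere in this commit), the manifest is simply:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example # hypothetical name; match whatever you pass to kubectl --namespace
```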
Lines changed: 2 additions & 2 deletions

````diff
@@ -1,7 +1,7 @@
 apiVersion: scheduling.k8s.io/v1
 kind: PriorityClass
 metadata:
-  name: placeholder
+  name: placeholder # 这些 Pod 表示占位容量
 value: -1000
 globalDefault: false
-description: "Negative priority for placeholder pods to enable overprovisioning."
+description: "Negative priority for placeholder pods to enable overprovisioning."
````

0 commit comments
