This tutorial installs the cluster with KubeKey (KubeSphere's installer, which drives kubeadm under the hood).
Before you start, install Docker on every node; you can follow this tutorial:
Docker basic installation walkthrough (CentOS 8+) - CSDN blog
Cluster layout
Four virtual machines:

| Hostname | IP |
|---|---|
| k8s-master | 192.168.0.107 |
| node1 | 192.168.0.108 |
| node2 | 192.168.0.109 |
| node3 | 192.168.0.110 |
Three requirements. If you are using cloud servers, make sure all three are satisfied:
- the machines can reach each other over the private network
- each machine has its own hostname
- ports 30000~32767 are open in the firewall (on cloud servers, change this in the security group; for local VMs, see the firewalld commands below)
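For local VMs the NodePort range has to be reachable as well. A minimal sketch assuming firewalld (the CentOS default); many throwaway lab setups simply disable the firewall instead:
# Open the NodePort range plus the API server port on every node
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --reload
# or, for a disposable lab only: systemctl disable firewalld --now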
Environment preparation
Set the hostnames
Recommended: give each host a name that reflects its role, so the machines are easy to tell apart.
# Set the hostname on each machine (run only the line that matches the node)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
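Optionally, make the hostnames resolvable from every node. A small sketch that appends the four machines from the table above to /etc/hosts (run on all nodes; adjust the IPs if yours differ):
cat >> /etc/hosts <<EOF
192.168.0.107 k8s-master
192.168.0.108 node1
192.168.0.109 node2
192.168.0.110 node3
EOF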
All nodes: additional settings
# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable swap (kubelet will not start while swap is enabled)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
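# (Optional check) the Swap line in the output below should now show 0
free -m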
# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
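To confirm the settings took effect, check that the module is loaded and that both sysctls report 1:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables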
Time synchronization
yum install -y chrony
# Point chrony at a CN NTP pool: either edit /etc/chrony.conf by hand, or let sed do the replacement
sed -i 's/^pool pool.*/pool cn.pool.ntp.org iburst/g' /etc/chrony.conf
# Set the timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
systemctl enable chronyd --now
# Check the time sources
chronyc sourcestats -v
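Besides the source statistics, chronyc tracking and timedatectl give a quick sanity check that the clock is really synchronized and the timezone change stuck:
chronyc tracking   # "Leap status : Normal" indicates a synced clock
timedatectl        # should show Asia/Shanghai and "System clock synchronized: yes"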
Install dependencies
# Install the system packages Kubernetes needs; they are used later during cluster installation, so install them up front
yum install -y curl socat conntrack ebtables ipset ipvsadm vim wget
# Install tar
yum install tar -y
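ipvsadm is installed above, but kube-proxy falls back to iptables unless the IPVS kernel modules are loaded. An optional sketch (not required for this tutorial, since the default iptables mode works fine) that loads them now and on every boot, in the same style as the br_netfilter file:
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do sudo modprobe $m; done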
Install and deploy Kubernetes
Master node: download KubeKey
mkdir ~/kubekey
cd ~/kubekey/
# Download from the CN mirror (use this when access to GitHub is restricted)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
# Make the downloaded binary executable
chmod +x kk
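A quick check that the download is intact and executable:
./kk version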
Check the supported versions
# List the Kubernetes versions this KubeKey release supports
./kk version --show-supported-k8s
Create the cluster deployment configuration file
Adjust the versions to your own needs; you can also upgrade Kubernetes later, after the cluster is installed (a sketch follows the command below).
./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.4.1
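(For later reference: upgrading reuses the same config file via kk upgrade. A sketch only; the target version here is hypothetical, so pick one from ./kk version --show-supported-k8s:)
./kk upgrade --with-kubernetes v1.23.17 --with-kubesphere v3.4.1 -f config-sample.yaml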
Back to the install: the create config command generates a file named config-sample.yaml, the deployment configuration for the cluster. Open it with vim and edit it; an example follows. Note that the host name fields below are aligned with the hostnames we set earlier (k8s-master, node1, ...).
The pluggable components I enable are metrics_server (for autoscaling) and devops (for CI/CD).
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master, address: 192.168.0.107, internalAddress: 192.168.0.107, user: root, password: "1234"}
  - {name: node1, address: 192.168.0.108, internalAddress: 192.168.0.108, user: root, password: "1234"}
  - {name: node2, address: 192.168.0.109, internalAddress: 192.168.0.109, user: root, password: "1234"}
  - {name: node3, address: 192.168.0.110, internalAddress: 192.168.0.110, user: root, password: "1234"}
  roleGroups:
    etcd:
    - k8s-master
    control-plane:
    - k8s-master
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    # 256*256 = 65536 pod IPs are allocatable in the cluster; pick ranges that suit your needs and avoid conflicts with existing networks
    kubePodsCIDR: 10.244.0.0/16
    # likewise 65536 allocatable service IPs
    kubeServiceCIDR: 10.96.0.0/16
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    # operator:
    #   resources: {}
    # proxy:
    #   resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600
Create the cluster
This step takes quite a while (typically around 30 minutes), so be patient; go have a meal or a coffee in the meantime.
./kk create cluster -f config-sample.yaml -y
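Once kk reports success, you can follow the KubeSphere installer log (this is the same command the installer prints when it finishes) and confirm that all four nodes are Ready:
# Follow the ks-installer log until "Welcome to KubeSphere!" appears
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
kubectl get nodes
kubectl get pods -A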
On local VMs, if you keep forgetting the console URL, account, and password, the defaults are copied below:
URL (replace with your own console address)
http://192.168.0.107:30880
Account
admin
Password
P@88w0rd
The installer prints the KubeSphere account and password when it finishes; follow those instructions and open the KubeSphere console in a browser.