<h1><center>Kubernetes Storage Class: StorageClass</center></h1>

Author: 行癫 <unauthorized copying will be prosecuted>

------

## I: StorageClass

A StorageClass gives administrators a way to describe "classes" of storage. Different classes might map to different quality-of-service levels or backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what the classes represent; in other storage systems this concept is sometimes called a "profile".

#### 1. The StorageClass resource

Each StorageClass contains the `provisioner`, `parameters`, and `reclaimPolicy` fields, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.

The name of a StorageClass object is significant: it is how users request a particular class. Administrators set the name and other parameters when first creating a StorageClass object, and the object cannot be updated once created.

#### 2. Creating a StorageClass
```shell
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs    # the name used to request this class
provisioner: example.com/external-nfs
parameters:
  server: nfs-server.example.com
  path: /share
  readOnly: "false"

server: hostname or IP address of the NFS server
path: the path exported by the NFS server
readOnly: whether the storage is mounted read-only (defaults to false)
```
Note:

Common `provisioner` parameter values:

```yaml
NFS         example.com/external-nfs
Glusterfs   kubernetes.io/glusterfs
AWS EBS     kubernetes.io/aws-ebs
......
```

AWS EBS:
```shell
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"    # must be a string, i.e. "10", not 10
  fsType: ext4

type: io1, gp2, sc1, or st1. See the AWS documentation for details. Default: gp2
iopsPerGB: only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the IOPS capacity, capping it at 20,000 IOPS
fsType: a file system type supported by Kubernetes. Default: "ext4"
```

Glusterfs:
```shell
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"

resturl: URL of the Gluster REST service/Heketi service that provisions gluster volumes; the general format is IPaddress:Port
restauthenabled: boolean that enables authentication to the Gluster REST server
restuser: Gluster REST service/Heketi user with permission to create volumes in the Gluster trusted pool
restuserkey: password for authenticating to the REST server; deprecated in favor of secretNamespace + secretName
secretNamespace, secretName: identify a Secret containing the password to use when talking to the Gluster REST service;
these parameters are optional, and an empty password is used when both are omitted. Such a Secret is created with:
  kubectl create secret generic heketi-secret \
    --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' \
    --namespace=default
clusterid: 630372ccdc720a92c681fb928f27b53f is the ID of the cluster; Heketi uses it when provisioning volumes
gidMin, gidMax: minimum and maximum GID of the range for this StorageClass (the values shown are the defaults)
volumetype: the volume type and its parameters can be configured with this optional value:
  'Replica volume': volumetype: replicate:3 where '3' is the replica count
  'Disperse/EC volume': volumetype: disperse:4:2 where '4' is data and '2' is redundancy
  'Distribute volume': volumetype: none
```
#### 3. Usage

Create the StorageClass manifest:
```shell
[root@master class]# cat storageclass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs
parameters:
  server: 10.0.0.230
  path: /kubernetes-3
  readOnly: "false"
```
Create it:

```shell
[root@master class]# kubectl create -f storageclass
storageclass.storage.k8s.io/example-nfs created
```

Inspect it:
```shell
[root@master class]# kubectl get storageclass
NAME          PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
example-nfs   example.com/external-nfs   Delete          Immediate           false                  9s

RECLAIMPOLICY: the reclaim policy, Delete in this case
VOLUMEBINDINGMODE: Immediate, the default, means volume binding and dynamic provisioning happen as soon as the PersistentVolumeClaim is created
```
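The `reclaimPolicy` field described in section 1 never appears in the manifest above because it relies on the `Delete` default. A minimal sketch of a class that overrides both defaults; the class name is hypothetical, while `Retain` and `WaitForFirstConsumer` are standard `storage.k8s.io/v1` values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs-retain            # hypothetical name, for illustration only
provisioner: example.com/external-nfs
parameters:
  server: nfs-server.example.com
  path: /share
reclaimPolicy: Retain                 # keep released PersistentVolumes instead of deleting them
volumeBindingMode: WaitForFirstConsumer   # delay binding until a Pod actually uses the claim
```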
Create a PV manifest:
```shell
[root@master class]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xingdian-1
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: example-nfs
  nfs:
    path: /kubernetes-1
    server: 10.0.0.230
```
Create it:

```shell
[root@master class]# kubectl create -f pv.yaml
```

Inspect the PV:
```shell
[root@master class]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
xingdian-1   10Gi       RWO            Retain           Available           example-nfs             3s
```
Create an application that uses it:
```shell
[root@master class]# cat nginx.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: 10.0.0.230/xingdian/nginx:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "example-nfs"
        resources:
          requests:
            storage: 10Gi
```
Create it:

```shell
[root@master class]# kubectl create -f nginx.yaml
statefulset.apps/web created
```

Inspect it:
```shell
[root@master class]# kubectl get statefulset
NAME   READY   AGE
web    1/1     9s
[root@master class]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          13s
```
Verify the PV:
```shell
[root@master class]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
xingdian-1   10Gi       RWO            Retain           Bound    default/www-web-0   example-nfs             52s
```
![image-20220526224804444](Kubernetes%E5%AD%98%E5%82%A8%E7%B1%BBStorageClass.assets/image-20220526224804444-16535764908601.png)
<h1><center>Kubernetes Workload Resources</center></h1>

Author: 行癫 <unauthorized copying will be prosecuted>

------

## I: Deployments

A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment controller changes the actual state toward the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt their resources with new Deployments.

#### 1. Use cases

The following are typical use cases for Deployments:

1) Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background. Check the rollout status to see whether it succeeded.

2) Declare the new state of the Pods by updating the Deployment's PodTemplateSpec. A new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the Deployment's revision.

3) Roll back to an earlier Deployment revision if the current state is unstable. Each rollback updates the Deployment's revision.

4) Scale up the Deployment to handle more load.

5) Pause the Deployment to apply multiple changes to its PodTemplateSpec, then resume it to start a new rollout.

6) Use the Deployment status as an indicator that a rollout has stalled.

7) Clean up older ReplicaSets that are no longer needed.

#### 2. Creating a Deployment

The following is an example Deployment. It creates a ReplicaSet that brings up three nginx Pods:
```shell
[root@master xingdian]# vim Deployment-xingdian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.20.1
          ports:
            - containerPort: 80
```
A Deployment named `nginx-deployment` (indicated by the `.metadata.name` field) is created.

The Deployment creates three replicated Pods, indicated by the `replicas` field.

The `selector` field defines how the Deployment finds which Pods to manage; here, you select the label defined in the Pod template (`app: nginx`).

The `template` field contains the following sub-fields:

The Pods are labeled `app: nginx` via the `.metadata.labels` field.

The Pod template spec (the `.template.spec` field) indicates that the Pods run one `nginx` container.

One container is created and named `nginx` via the `.spec.template.spec.containers[0].name` field.
Note:

The `spec.selector.matchLabels` field is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions` whose `key` field is "key", whose `operator` is "In", and whose `values` array contains only "value". All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
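As a hypothetical illustration of that equivalence (not part of the original manifest), the following two selector fragments match exactly the same Pods:

```yaml
# Form 1: the matchLabels shorthand
selector:
  matchLabels:
    app: nginx
---
# Form 2: the equivalent matchExpressions form
selector:
  matchExpressions:
    - key: app
      operator: In
      values: ["nginx"]
```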
Create the Deployment by running the following command:

```shell
[root@master xingdian]# kubectl create -f Deployment-xingdian.yaml
```

Run `kubectl get deployments` to check whether the Deployment was created; the output is similar to:
```shell
[root@master xingdian]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           5m27s
```
When you inspect the Deployments in your cluster, the following fields are displayed:

`NAME` lists the names of the Deployments in the cluster

`READY` shows how many replicas of the application are available, in the pattern ready/desired

`UP-TO-DATE` shows the number of replicas that have been updated to achieve the desired state

`AVAILABLE` shows how many replicas of the application are available to users

`AGE` shows how long the application has been running

Note that the desired replica count is 3, as set by the `.spec.replicas` field.

To see the rollout status of the Deployment, run `kubectl rollout status deployment/nginx-deployment`:
```shell
[root@master xingdian]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
```
To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`:
```shell
[root@master xingdian]# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
nginx-deployment-f8f4bdccc   3         3         3       6m55s
```
The ReplicaSet output shows the following fields:

`NAME` lists the names of the ReplicaSets in the namespace

`DESIRED` shows the desired number of replicas of the application, which you defined when creating the Deployment; this is the desired state

`CURRENT` shows how many replicas are currently running

`READY` shows how many replicas of the application are available to its users

`AGE` shows how long the application has been running

Note:

Notice that the name of a ReplicaSet is always formatted as `[Deployment name]-[random string]`, where the random string is generated using the `pod-template-hash` as a seed.

To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`:
```
[root@master xingdian]# kubectl get pods --show-labels
NAME                               READY   STATUS    RESTARTS   AGE     LABELS
nginx-deployment-f8f4bdccc-72bk8   1/1     Running   0          8m39s   app=nginx,pod-template-hash=f8f4bdccc
nginx-deployment-f8f4bdccc-7dsbx   1/1     Running   0          8m39s   app=nginx,pod-template-hash=f8f4bdccc
nginx-deployment-f8f4bdccc-j9zps   1/1     Running   0          8m39s   app=nginx,pod-template-hash=f8f4bdccc
```
#### 3. The pod-template-hash label

The Deployment controller adds the `pod-template-hash` label to every ReplicaSet that a Deployment creates or adopts; this label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the ReplicaSet's `PodTemplate`, and the resulting hash is added to the ReplicaSet selector, the Pod template labels, and any existing Pods the ReplicaSet may own.
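You can read the hash straight off a ReplicaSet. A sketch against the ReplicaSet name from the output above; the jsonpath expression is an assumption for illustration, not taken from the original:

```shell
kubectl get rs nginx-deployment-f8f4bdccc \
  -o jsonpath='{.metadata.labels.pod-template-hash}'
# expected output: f8f4bdccc
```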
#### 4. Updating a Deployment

A Deployment rollout is triggered only if the Deployment's Pod template (`.spec.template`) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

Follow the steps below to update the Deployment (a version upgrade):

1) First, update the nginx Pods to use the `nginx:1.20.2` image instead of the `nginx:1.20.1` image:
```shell
[root@master xingdian]# kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.20.2
deployment.apps/nginx-deployment image updated
```
or use the following command:
```shell
[root@master xingdian]# kubectl set image deployment/nginx-deployment nginx=nginx:1.20.2
```
Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.20.1` to `nginx:1.20.2`:
```shell
[root@master xingdian]# kubectl edit deployment/nginx-deployment
```
2) To see the rollout status, run:
```shell
[root@master xingdian]# kubectl rollout status deployment/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
```
3) After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`:
```shell
[root@master xingdian]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           17m
```
4) Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, and by scaling the old ReplicaSet down to 0 replicas:
```shell
[root@master xingdian]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5b9bb9f548   3         3         3       6m47s
nginx-deployment-f8f4bdccc    0         0         0       17m
```
5) Running `get pods` should now show only the new Pods:
```shell
[root@master xingdian]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5b9bb9f548-7m79h   1/1     Running   0          8m52s
nginx-deployment-5b9bb9f548-df5vc   1/1     Running   0          10m
nginx-deployment-5b9bb9f548-w5cwc   1/1     Running   0          7m32s
```
Note:

The next time you want to update these Pods, you only need to update the Deployment's Pod template again.

A Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up.

A Deployment also ensures that only a limited number of Pods are created above the desired number. By default, it ensures that at most 25% more Pods than desired are started.

For example, if you look closely at the Deployment above, you will see that it first created a new Pod, then deleted old Pods and created new ones. It does not kill old Pods until a sufficient number of new Pods have come up, and it does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at most 4 Pods in total are available; with a Deployment of 4 replicas, the number of Pods would be between 3 and 5.
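These percentages are controlled by the `maxUnavailable` and `maxSurge` fields of the rolling-update strategy. A sketch with the defaults written out explicitly; this stanza is not part of the original manifest:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of the desired Pods may be unavailable
      maxSurge: 25%         # at most 25% more Pods than desired may exist
```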
Get more details on your Deployment:
```shell
[root@master xingdian]# kubectl describe deployments nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 01 May 2022 14:44:45 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.20.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-5b9bb9f548 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  23m    deployment-controller  Scaled up replica set nginx-deployment-f8f4bdccc to 3
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 1
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 2
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 2
  Normal  ScalingReplicaSet  9m33s  deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 1
  Normal  ScalingReplicaSet  9m33s  deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 3
  Normal  ScalingReplicaSet  7m21s  deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 0
```
#### 5. Rollover (updating while another rollout is in progress)

Each time the Deployment controller observes a new Deployment, it creates a ReplicaSet to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels match `.spec.selector` but whose template does not match `.spec.template` is scaled down. Eventually, the new ReplicaSet is scaled to `.spec.replicas` replicas and all old ReplicaSets are scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet for the update and starts scaling it up; the ReplicaSet that was previously being scaled up is rolled over, added to the list of old ReplicaSets, and starts being scaled down.

For example, suppose you create a Deployment to bring up 5 replicas of `nginx:1.14.2`, but then update it to create 5 replicas of `nginx:1.16.1` when only 3 replicas of `nginx:1.14.2` have been created. In that case, the Deployment immediately starts killing the 3 `nginx:1.14.2` Pods it has created and starts creating `nginx:1.16.1` Pods. It does not wait for the 5 replicas of `nginx:1.14.2` to be created before changing course.
#### 6. Rolling back a Deployment

Sometimes you may want to roll back a Deployment, for example when it is not stable (such as crash looping). By default, all of the Deployment's rollout history is kept in the system so that you can roll back at any time (you can change this by modifying the revision history limit).

Note:

A Deployment revision is created when the Deployment's rollout is triggered. This means a new revision is created if and only if the Deployment's Pod template (`.spec.template`) is changed, for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so as to facilitate simultaneous manual or automatic scaling. In other words, when you roll back to an earlier revision, only the Deployment's Pod template part is rolled back.

Suppose you made a typo while updating the Deployment, setting the image name to `nginx:1.161` instead of `nginx:1.16.1`:
```shell
[root@master xingdian]# kubectl set image deployment/nginx-deployment nginx=nginx:1.161
```
The rollout gets stuck. You can verify it by checking the rollout status:
```shell
[root@master xingdian]# kubectl rollout status deployment/nginx-deployment
```
Looking at the ReplicaSets, the old ReplicaSet still reports 3 replicas while the new one reports 1:
```shell
[root@master xingdian]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5b4685b9bd   1         1         0       63s
nginx-deployment-5b9bb9f548   3         3         3       19m
nginx-deployment-f8f4bdccc    0         0         0       30m
```
Note:

The Deployment controller stops the bad rollout automatically and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters you specified (`maxUnavailable`, specifically); Kubernetes sets that value to 25% by default.

Get the description of the Deployment:
```shell
[root@master xingdian]# kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 01 May 2022 14:44:45 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
                        kubernetes.io/change-cause: kubectl set image deployment/nginx-deployment nginx=nginx:1.161 --record=true
Selector:               app=nginx
Replicas:               3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.161
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  nginx-deployment-5b9bb9f548 (3/3 replicas created)
NewReplicaSet:   nginx-deployment-5b4685b9bd (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  31m   deployment-controller  Scaled up replica set nginx-deployment-f8f4bdccc to 3
  Normal  ScalingReplicaSet  20m   deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 1
  Normal  ScalingReplicaSet  18m   deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 2
  Normal  ScalingReplicaSet  18m   deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 2
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 1
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 3
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 0
  Normal  ScalingReplicaSet  2m4s  deployment-controller  Scaled up replica set nginx-deployment-5b4685b9bd to 1
```
To fix this, you need to roll back to a previous, stable revision of the Deployment.

Checking the rollout history of a Deployment:

1) First, check the revisions of this Deployment:
```shell
[root@master xingdian]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         kubectl set image deployment/nginx-deployment nginx=nginx:1.161
```
The content of `CHANGE-CAUSE` is copied from the Deployment's `kubernetes.io/change-cause` annotation.

The copying happens when the revision is created. You can set the `CHANGE-CAUSE` message by:

Annotating the Deployment with `kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"`

Manually editing the manifest of the resource

2) To see the details of each revision, run:
```shell
[root@master xingdian]# kubectl rollout history deployment/nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:       app=nginx
                pod-template-hash=5b9bb9f548
  Containers:
   nginx:
    Image:        nginx:1.20.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
```
#### 7. Rolling back to a previous revision

1) Follow the steps below to roll back the Deployment from the current version to the previous version (revision 2):
```shell
[root@master xingdian]# kubectl rollout undo deployment/nginx-deployment
```
The output is similar to:
```shell
deployment.apps/nginx-deployment
```
Alternatively, you can roll back to a specific revision using `--to-revision`:
```shell
[root@master xingdian]# kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment.apps/nginx-deployment rolled back
```
2) To check whether the rollback succeeded and the Deployment is running as expected, run:
```shell
[root@master xingdian]# kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           39m
```
3) Get the description of the Deployment:
```shell
[root@master xingdian]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 01 May 2022 14:44:45 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 4
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.20.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-5b9bb9f548 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  40m   deployment-controller  Scaled up replica set nginx-deployment-f8f4bdccc to 3
  Normal  ScalingReplicaSet  29m   deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 1
  Normal  ScalingReplicaSet  27m   deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 2
  Normal  ScalingReplicaSet  27m   deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 2
  Normal  ScalingReplicaSet  25m   deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 1
  Normal  ScalingReplicaSet  25m   deployment-controller  Scaled up replica set nginx-deployment-5b9bb9f548 to 3
  Normal  ScalingReplicaSet  23m   deployment-controller  Scaled down replica set nginx-deployment-f8f4bdccc to 0
  Normal  ScalingReplicaSet  10m   deployment-controller  Scaled up replica set nginx-deployment-5b4685b9bd to 1
  Normal  ScalingReplicaSet  88s   deployment-controller  Scaled down replica set nginx-deployment-5b4685b9bd to 0
```
#### 8. Scaling a Deployment

1) You can scale a Deployment with the following command:
```shell
[root@master xingdian]# kubectl scale deployment/nginx-deployment --replicas=10
deployment.apps/nginx-deployment scaled
```
Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for the Deployment and choose the minimum and maximum number of Pods to run, based on the CPU utilization of the existing Pods:
```shell
[root@master xingdian]# kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80
```
2) Proportional scaling:

RollingUpdate Deployments support running multiple versions of an application at the same time. When you (or an autoscaler) scale a RollingUpdate Deployment that is in the middle of a rollout (either ongoing or paused), the Deployment controller balances the additional replicas across the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, suppose you are running a Deployment with 10 replicas, where maxSurge=3 and maxUnavailable=2.

Note:

<img src="Kubernetes%E5%B7%A5%E4%BD%9C%E8%B4%9F%E8%BD%BD%E8%B5%84%E6%BA%90.assets/image-20220501154649042.png" alt="image-20220501154649042" style="zoom:50%;" />

Max surge:

`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). It cannot be 0 if `maxUnavailable` is 0. The percentage is rounded up to compute the absolute number. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet is scaled up immediately when the rolling update starts, while ensuring that the total number of old and new Pods does not exceed 130% of the desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of the desired Pods.

Max unavailable:

`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number of Pods that can be unavailable during the update. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The percentage is rounded down to compute the absolute number. It cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of the desired Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at any time during the update is at least 70% of the desired Pods.

Progress deadline seconds:

`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports that it has [failed progressing](https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/#failed-deployment), surfaced in the resource status as `type: Progressing`, `status: False`, `reason: ProgressDeadlineExceeded`. The Deployment controller will keep retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field value needs to be greater than `.spec.minReadySeconds`.

Min ready seconds:

`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds a newly created Pod should be ready, without any of its containers crashing, before it is considered available. The default is 0 (the Pod is considered available as soon as it is ready).

Paused:

`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused is that changes to the PodTemplateSpec of a paused Deployment will not trigger new rollouts as long as it remains paused. A Deployment is not paused by default when it is created.
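To show where these optional fields sit, here is a sketch of the relevant portion of a Deployment spec; the values are illustrative, not taken from the example above:

```yaml
spec:
  minReadySeconds: 5            # a new Pod must stay ready 5s before it counts as available
  progressDeadlineSeconds: 600  # report Progressing=False/ProgressDeadlineExceeded after 600s without progress
  paused: false                 # when true, PodTemplateSpec changes do not trigger rollouts
```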
Make sure all 10 replicas of the Deployment are running:
```shell
[root@master xingdian]# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   10/10   10           10          53
```
You update the Deployment to a new image that happens not to be resolvable from inside the cluster:
```shell
[root@master xingdian]# kubectl set image deployment/nginx-deployment nginx=nginx:sometag
deployment.apps/nginx-deployment image updated
```
The image update starts a new rollout with a new ReplicaSet, but it is blocked because of the `maxUnavailable` requirement mentioned above. Check the rollout status:
```shell
[root@master xingdian]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5b9bb9f548   8         8         8       44m
nginx-deployment-745c49b799   5         5         0       76s
```
Then a new scaling request for the Deployment comes along: the autoscaler increases the Deployment replicas to 13. The Deployment controller needs to decide where to add these 3 new replicas. Without proportional scaling, all 3 would be added to the new ReplicaSet. With proportional scaling, the additional replicas are spread across all ReplicaSets: bigger proportions go to the ReplicaSets with the most replicas, lower proportions go to ReplicaSets with fewer replicas, and any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

Check the Pod counts (they depend on `maxUnavailable`):
```shell
[root@master xingdian]# kubectl get pod
NAME                                READY   STATUS             RESTARTS   AGE
nginx-deployment-745c49b799-4dmbw   0/1     ImagePullBackOff   0          119s
nginx-deployment-745c49b799-94qtp   0/1     ImagePullBackOff   0          119s
nginx-deployment-745c49b799-rmpzt   0/1     ImagePullBackOff   0          119s
nginx-deployment-745c49b799-tfw7g   0/1     ImagePullBackOff   0          119s
nginx-deployment-745c49b799-tlxfz   0/1     ImagePullBackOff   0          119s
nginx-deployment-f8f4bdccc-4ckd7    1/1     Running            0          3m22s
nginx-deployment-f8f4bdccc-7tnrn    1/1     Running            0          2m46s
nginx-deployment-f8f4bdccc-9ndhj    1/1     Running            0          2m46s
nginx-deployment-f8f4bdccc-b5xzc    1/1     Running            0          3m22s
nginx-deployment-f8f4bdccc-l226t    1/1     Running            0          2m46s
nginx-deployment-f8f4bdccc-lqqjw    1/1     Running            0          2m46s
nginx-deployment-f8f4bdccc-s6rzl    1/1     Running            0          2m46s
nginx-deployment-f8f4bdccc-zfrcv    1/1     Running            0          3m22s
```
#### 9. Pausing and resuming a rollout

Pause the rollout with the following command:
```shell
[root@master xingdian]# kubectl rollout pause deployment/nginx-deployment
deployment.apps/nginx-deployment paused
```
The state the Deployment had before the pause continues to function, but new updates to the Deployment have no effect as long as the rollout is paused.

Then update the image of the Deployment:
```
[root@master xingdian]# kubectl set image deployment/nginx-deployment nginx=nginx:1.20.1
deployment.apps/nginx-deployment image updated
```
Notice that no new rollout started:
```shell
[root@master xingdian]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
[root@master xingdian]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-745c49b799   5         5         0       7m48s
nginx-deployment-f8f4bdccc    8         8         8       9m11s
[root@master xingdian]# kubectl get pod
NAME                                READY   STATUS             RESTARTS   AGE
nginx-deployment-745c49b799-4dmbw   0/1     ImagePullBackOff   0          7m55s
nginx-deployment-745c49b799-94qtp   0/1     ImagePullBackOff   0          7m55s
nginx-deployment-745c49b799-rmpzt   0/1     ImagePullBackOff   0          7m55s
nginx-deployment-745c49b799-tfw7g   0/1     ImagePullBackOff   0          7m55s
nginx-deployment-745c49b799-tlxfz   0/1     ImagePullBackOff   0          7m55s
nginx-deployment-f8f4bdccc-4ckd7    1/1     Running            0          9m18s
nginx-deployment-f8f4bdccc-7tnrn    1/1     Running            0          8m42s
nginx-deployment-f8f4bdccc-9ndhj    1/1     Running            0          8m42s
nginx-deployment-f8f4bdccc-b5xzc    1/1     Running            0          9m18s
nginx-deployment-f8f4bdccc-l226t    1/1     Running            0          8m42s
nginx-deployment-f8f4bdccc-lqqjw    1/1     Running            0          8m42s
nginx-deployment-f8f4bdccc-s6rzl    1/1     Running            0          8m42s
nginx-deployment-f8f4bdccc-zfrcv    1/1     Running            0          9m18s
```
You can make as many updates as you wish; for example, update the resources that will be used:
```shell
[root@master xingdian]# kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
deployment.apps/nginx-deployment resource requirements updated
```
Finally, resume the Deployment rollout and observe a new ReplicaSet coming up with all the updates you applied:
```shell
[root@master xingdian]# kubectl rollout resume deployment/nginx-deployment
deployment.apps/nginx-deployment resumed
```

```
[root@master xingdian]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-578d8bf985   10        10        9       32s
```
#### 10. Deployment status

A Deployment enters various states during its lifecycle: it can be Progressing while rolling out a new ReplicaSet, it can be Complete, or it can be Failed and unable to progress.

1) Progressing Deployment

Kubernetes marks a Deployment as Progressing while one of the following tasks is being performed:

The Deployment creates a new ReplicaSet

The Deployment is scaling up its newest ReplicaSet

The Deployment is scaling down its older ReplicaSet(s)

New Pods become ready or available (ready for at least [MinReadySeconds](https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/#min-ready-seconds))

When the rollout becomes "Progressing", the Deployment controller adds a condition with the following attributes to the Deployment's `.status.conditions`:

`type: Progressing`

`status: "True"`

`reason: NewReplicaSetCreated` | `reason: FoundNewReplicaSet` | `reason: ReplicaSetUpdated`
2) Complete Deployment

Kubernetes marks a Deployment as Complete when it has the following characteristics:

All of the replicas associated with the Deployment have been updated to the latest version you specified, meaning any updates you requested have completed

All of the replicas associated with the Deployment are available

No old replicas for the Deployment are running

When the rollout becomes "Complete", the Deployment controller adds a condition with the following attributes to the Deployment's `.status.conditions`:

`type: Progressing`

`status: "True"`

`reason: NewReplicaSetAvailable`

This `Progressing` condition keeps a status value of `"True"` until a new rollout is initiated; it holds even when the availability of replicas changes (which does affect the `Available` condition).

You can check whether a Deployment has completed by using `kubectl rollout status`. If the rollout completed successfully, `kubectl rollout status` returns exit code 0.
```shell
[root@master xingdian]# kubectl rollout status deployment/nginx-deployment
[root@master xingdian]# echo $?
0
```
3) Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. Some possible causes:

Insufficient quota

Readiness probe failures

Image pull errors

Insufficient permissions

Limit Ranges issues

Application runtime misconfiguration

One way to detect this condition is to specify a deadline parameter in the Deployment spec:

([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `.spec.progressDeadlineSeconds` is the number of seconds the Deployment controller waits before indicating, through the Deployment status, that the Deployment's progress has stalled.

The following `kubectl` command sets `progressDeadlineSeconds` in the spec, telling the controller to report lack of progress for the Deployment after 10 minutes:
```shell
[root@master xingdian]# kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
deployment.apps/nginx-deployment patched
```
Once the deadline is exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment's `.status.conditions`:

Type=Progressing

Status=False

Reason=ProgressDeadlineExceeded

This condition can also fail early, in which case its status is set to `"False"` with a reason such as `ReplicaSetCreateError`. The deadline is no longer taken into account once the Deployment rollout completes.

Note:

Apart from reporting the `Reason=ProgressDeadlineExceeded` condition, Kubernetes takes no action on a stalled Deployment. Higher-level orchestrators can take advantage of this design and act accordingly; for example, roll back the Deployment to its previous version.

If you pause a Deployment rollout, Kubernetes no longer checks the Deployment's progress against the specified deadline. You can safely pause a Deployment in the middle of a rollout and resume it without triggering the deadline condition.

You may experience transient errors with your Deployments, either because of a short timeout you have set or because of any other kind of error that can be treated as transient. For example, suppose you run out of quota. If you describe the Deployment, you will notice the following section:
```shell
[root@master xingdian]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 01 May 2022 16:17:05 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               10 desired | 5 updated | 13 total | 8 available | 5 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:xingdian
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    False   ProgressDeadlineExceeded
```
If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status is similar to this:
```shell
[root@master xingdian]# kubectl get deployment nginx-deployment -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2022-05-01T08:17:05Z"
  generation: 5
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "27025"
  uid: abfb4f28-eee6-41b7-a26d-801de265f01d
spec:
  progressDeadlineSeconds: 60
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:xingdian
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 8
  conditions:
  - lastTransitionTime: "2022-05-01T08:17:24Z"
    lastUpdateTime: "2022-05-01T08:17:24Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-05-01T08:26:14Z"
    lastUpdateTime: "2022-05-01T08:26:14Z"
    message: ReplicaSet "nginx-deployment-7646c57c" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 5
  readyReplicas: 8
  replicas: 13
  unavailableReplicas: 5
  updatedReplicas: 5
```
Actions to take on a failed Deployment:

1. Roll back to a previous working revision

2. Pause the rollout, fix the problem, then resume it
<h1><center>Kubernetes Workload Resource: CronJob</center></h1>

Author: 行癫 <unauthorized copying will be prosecuted>

------

## I: CronJob

A CronJob creates Jobs on a repeating, time-based schedule.

A CronJob object is like one line of a *crontab* file: it is written in Cron format and periodically runs a Job at the given schedule.

Note:

All CronJob `schedule:` times are based on the timezone of the kube-controller-manager.

If your control plane runs kube-controller-manager in a Pod or a bare container, the timezone set for that container determines the timezone used by the CronJob controller.

The Kubernetes project does not officially support setting variables such as `CRON_TZ` or `TZ`; they are an implementation detail of the internal library used to parse and calculate the next Job creation time, and their use is not recommended in a production cluster.

#### 1. Creating a CronJob
```shell
[root@master xingdian]# cat CronJob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: 10.0.0.230/xingdian/nginx:v2
              imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
```
#### 2. Running the CronJob
```shell
[root@master xingdian]# kubectl create -f CronJob.yaml
cronjob.batch/hello created
```
#### 3. Checking its status
```shell
[root@master xingdian]# kubectl get cronjob
NAME    SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   False     3        18s             2m34s
```
Right after creation, the CronJob has not scheduled or run any task yet; it takes about a minute for a Job to be created:
```shell
[root@master xingdian]# kubectl get jobs --watch
NAME             COMPLETIONS   DURATION   AGE
hello-27526413   0/1           3m32s      3m32s
hello-27526414   0/1           2m32s      2m32s
hello-27526415   0/1           92s        92s
hello-27526416   0/1           32s        32s
```
You should see that the `hello` CronJob successfully scheduled a Job at the time noted in `LAST SCHEDULE`. Zero active Jobs means the Job has either completed or failed.
```shell
[root@master xingdian]# kubectl get cronjob
NAME    SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   False     3        18s             2m34s
```
#### 4. Cron schedule syntax
```shell
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │               on some systems, 7 is also Sunday)
# │ │ │ │ │ or sun, mon, tue, wed, thu, fri, sat
# │ │ │ │ │
# * * * * *
```

```shell
Entry                    Description                                     Equivalent to
@yearly (or @annually)   Run once a year at midnight on 1 January        0 0 1 1 *
@monthly                 Run once a month at midnight on the first day   0 0 1 * *
@weekly                  Run once a week at midnight on Sunday           0 0 * * 0
@daily (or @midnight)    Run once a day at midnight                      0 0 * * *
@hourly                  Run once an hour at the beginning of the hour   0 * * * *
```
For example, the line below states that the task must start at midnight every Friday, as well as at midnight on the 13th of each month: 0 0 13 * 5

To generate cron schedule expressions: https://crontab.guru/

#### 5. Field reference

Setting the `spec.concurrencyPolicy` field on a CronJob object lets you control how concurrent executions are handled. It has three possible values:

Allow: concurrent Jobs are allowed

Forbid: the CronJob does not allow concurrent runs; if it is time for a new Job and the previous Job has not finished yet, the CronJob skips the new run

Replace: if it is time for a new Job and the previous run has not finished, the CronJob replaces the currently running Job with the new one

`spec.successfulJobsHistoryLimit` and `spec.failedJobsHistoryLimit` control how many successful and failed Jobs are kept in history. They default to 3 and 1 respectively; setting a limit to `0` means Jobs of the corresponding kind are not kept after they finish.

`.spec.startingDeadlineSeconds` is optional. It is the deadline, in seconds, for starting a Job if that Job misses its scheduled time for any reason; past the deadline, the CronJob does not start the Job, and Jobs that miss their deadline count as failed. If the field is unset, the Jobs have no deadline. When `.spec.startingDeadlineSeconds` is set (non-null), the CronJob controller measures the time between when a Job is expected to be created and the current time; if the difference exceeds the limit, that execution is skipped. For example, with the field set to `200`, the Job controller allows a Job to be created up to 200 seconds after its actual schedule. Note that if `startingDeadlineSeconds` is set below 10 seconds, the CronJob may never be scheduled, because the CronJob controller only performs its checks every 10 seconds.

For example: suppose a CronJob is set to create a new Job every minute beginning at `08:30:00`, with `startingDeadlineSeconds` set to 200 seconds. If the CronJob controller happens to be down during the same window as in the previous example (`08:29:00` to `10:21:00`), the Job will still start at `10:22:00`, because the controller now checks how many schedules were missed in the last 200 seconds (that is, 3 missed schedules) rather than counting from the last scheduled time until now.
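Putting the fields from this section together, a sketch based on the earlier `hello` manifest; the added values are illustrative, not from the original example:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active
  startingDeadlineSeconds: 200     # a missed run may still start up to 200s late
  successfulJobsHistoryLimit: 3    # keep the last 3 completed Jobs (the default)
  failedJobsHistoryLimit: 1        # keep the last failed Job (the default)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: 10.0.0.230/xingdian/nginx:v2
          restartPolicy: OnFailure
```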
<h1><center>Kubernetes Workload Resource: DaemonSet</center></h1>

Author: 行癫 <unauthorized copying will be prosecuted>

------

## I: DaemonSet

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added for them; as nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet cleans up all the Pods it created.

#### 1. DaemonSet use cases

Running a cluster daemon on every node

Running a log collection daemon on every node

Running a monitoring daemon on every node

In the simple case, one DaemonSet covering all nodes is used for each type of daemon. A slightly more complex setup uses multiple DaemonSets for the same kind of daemon, each with different flags and with different memory and CPU requirements for different hardware types.

#### 2. Creating a DaemonSet
```shell
[root@master xingdian]# cat Daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      containers:
        - name: fluentd-elasticsearch
          image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```
#### 3. Running the DaemonSet
```shell
[root@master xingdian]# kubectl create -f Daemonset.yaml
```
#### 4. Verifying that it runs
```shell
[root@master xingdian]# kubectl get Daemonset -A
NAMESPACE     NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   fluentd-elasticsearch   4         4         4       4            4           <none>          13m
```

```shell
[root@master xingdian]# kubectl get pod -A
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   fluentd-elasticsearch-6bnkw   1/1     Running   0          14m
kube-system   fluentd-elasticsearch-hsqq2   1/1     Running   0          14m
kube-system   fluentd-elasticsearch-ncmnl   1/1     Running   0          14m
kube-system   fluentd-elasticsearch-x2mqr   1/1     Running   0          14m
```
#### 5. Required fields

Like all other Kubernetes config, a DaemonSet needs the `apiVersion`, `kind`, and `metadata` fields.

The name of a DaemonSet object must be a valid DNS subdomain name.

A DaemonSet also needs a [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) section.

#### 6. Pod template

The only required field in `.spec` is `.spec.template`.

`.spec.template` is a Pod template. It has exactly the same schema as a Pod, except that it is nested and does not have an `apiVersion` or `kind` field.

In addition to the required Pod fields, a Pod template in a DaemonSet must specify appropriate labels.

A Pod template in a DaemonSet must have a `RestartPolicy` equal to `Always`; when the value is unspecified, it defaults to `Always`.
Note:

A Pod's `spec` contains a `restartPolicy` field whose possible values are Always, OnFailure, and Never. The default is Always.

Always: always restart the container after it terminates

OnFailure: restart the container only when it exits abnormally (non-zero exit code)

Never: never restart the container after it terminates
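In the fluentd manifest above the field is simply omitted, which is equivalent to the following fragment (placement illustrative):

```yaml
template:
  spec:
    restartPolicy: Always   # the only value allowed in a DaemonSet Pod template
```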
#### 7. Pod selector

The `.spec.selector` field is a Pod selector. It works the same as the `.spec.selector` of a [Job](https://kubernetes.io/zh/docs/concepts/workloads/controllers/job/).

You must specify a Pod selector that matches the labels of `.spec.template`.

Once the DaemonSet is created, its `.spec.selector` cannot be modified.

`spec.selector` is an object made of the following two fields (a combined sketch follows the list):

`matchLabels`: works the same as the `.spec.selector` of a ReplicationController

`matchExpressions`: allows building more sophisticated selectors by specifying a key, a list of values, and an operator relating the key to the values
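A hypothetical selector combining both fields, for illustration only; the fluentd example above uses only `matchLabels`, and every requirement listed here must also be satisfied by the labels in `.spec.template`:

```yaml
selector:
  matchLabels:
    name: fluentd-elasticsearch
  matchExpressions:
    - key: name                    # requirements are ANDed with matchLabels
      operator: In
      values: ["fluentd-elasticsearch"]
```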
Note:

The Kubernetes Operator pattern lets you extend the cluster's capabilities without modifying Kubernetes' own code, by associating controllers with one or more custom resources. Operators are clients of the Kubernetes API that act as controllers for a custom resource.

#### 8. Communicating with Daemon Pods

Some possible patterns for communicating with Pods in a DaemonSet are:

**Push**: Pods in the DaemonSet are configured to send updates to another service, such as a stats database. They have no clients.

**NodeIP and known port**: Pods in the DaemonSet can use a `hostPort`, so that the Pods are reachable via the node IPs. Clients can obtain the list of node IPs by some means, and from that the corresponding ports.

**DNS**: Create a [headless service](https://kubernetes.io/zh/docs/concepts/services-networking/service/#headless-services) with the same Pod selector, then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from DNS.

**Service**: Create a service with the same Pod selector, and use the service to reach a daemon on a random node (there is no way to reach a specific node).
<h1><center>Kubernetes Scheduling Affinity</center></h1>

Author: 行癫 <unauthorized copying will be prosecuted>

------

## I: Scheduling affinity

#### 1. Three affinity mechanisms

NodeSelector (direct node assignment)

NodeAffinity (node affinity)

PodAffinity (pod affinity)

Normally, Pods are placed by the default Kubernetes scheduler. In some cases, however, a Pod must run on nodes carrying particular labels; the default scheduling policy cannot express this, so you need to specify a scheduling policy that tells Kubernetes which nodes the Pod may be scheduled onto.

#### 2. nodeSelector

In the common case, you use the nodeSelector scheduling policy directly. Labels are the standard way to mark resources in Kubernetes: you attach a special label to a node, and nodeSelector schedules the Pod onto nodes carrying the specified labels. It provides a simple deployment constraint: the Pod selects nodes by one or more node labels.

Add a label to a node:
```shell
kubectl label nodes <node-name> <label-key>=<label-value>
```
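For example, to attach the `disktype=ssd` label expected by the manifest below; the node name `node-1` is an assumption consistent with the scheduling output further down:

```shell
kubectl label nodes node-1 disktype=ssd
```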
Add a nodeSelector to the Pod:
```shell
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
```
Deploy the Pod:
```shell
[root@master ~]# kubectl create -f test.yaml
pod/nginx created
```
Check the result:
```shell
[root@master ~]# kubectl get pod -A -o wide
NAMESPACE   NAME    READY   STATUS              RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
default     nginx   0/1     ContainerCreating   0          37s   <none>   node-1   <none>
```
From the output above you can see that the default-scheduler placed the Pod on node-1. Note, however, that this mechanism is mandatory: if the labeled node does not have enough resources, the Pod stays in the Pending state.

#### 3. Affinity and anti-affinity scheduling

The default Kubernetes scheduling flow actually runs in two phases: predicates (filtering) and priorities (scoring). With the default flow, Kubernetes schedules Pods onto nodes with spare resources; with nodeSelector, Pods go to nodes carrying specific labels. In real production environments, you often need to schedule Pods onto a group of nodes with certain labels; that is where nodeAffinity, podAffinity, and podAntiAffinity (pod anti-affinity) come in.

Affinity can be divided into two kinds, hard and soft:

Soft affinity: if the rule cannot be satisfied at scheduling time, scheduling still proceeds; satisfied if possible, tolerated if not

Hard affinity: the rule must be satisfied at scheduling time; otherwise the Pod is not scheduled onto the node

requiredDuringSchedulingIgnoredDuringExecution    # hard requirement

preferredDuringSchedulingIgnoredDuringExecution   # soft preference

#### 4. nodeAffinity (node affinity)

Node affinity controls which nodes a Pod can and cannot be deployed to. It supports simple logical combinations, not just exact-match equality (`preferredDuringSchedulingIgnoredDuringExecution`).

The preferred form emphasizes satisfying the given rules first: the scheduler tries to place the Pod on a matching node but does not insist, which makes it a soft constraint. Multiple preference rules can also carry weights that define their order of precedence.

Controlling Pod scheduling with nodeAffinity:
```shell
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                  - amd64
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disk-type
                operator: In
                values:
                  - ssd
  containers:
    - name: with-node-affinity
      image: nginx
```
Set the label:
```shell
[root@master ~]# kubectl label nodes node-2 disk-type=ssd
node/node-2 labeled
```
Create the Pod and check the result:
```shell
[root@master yaml]# kubectl get pod -o wide
NAME                 READY   STATUS              RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
with-node-affinity   0/1     ContainerCreating   0          4m    <none>   node-2   <none>           <none>
```
Notes on configuring NodeAffinity rules:

If both nodeSelector and nodeAffinity are defined, both conditions must be satisfied before the Pod can run on a node

If nodeAffinity specifies multiple nodeSelectorTerms, it is enough for one of them to match

If a nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of the matchExpressions to run the Pod

matchExpressions: a match expression; for example, a Pod defining key zone, operator In (contained in), and values foo and bar means: schedule onto nodes whose zone label is foo or bar

Kubernetes provides the following operators (a usage sketch follows the list):

In: the label's value is in the given list

NotIn: the label's value is not in the given list

Gt: the label's value is greater than the given value

Lt: the label's value is less than the given value

Exists: the label exists

DoesNotExist: the label does not exist
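A hypothetical `matchExpressions` sketch using two of these operators; the `disk-type` and `cpu-count` labels are illustrative, not from the example above:

```yaml
nodeSelectorTerms:
  - matchExpressions:
      - key: disk-type        # node must carry the label, any value
        operator: Exists
      - key: cpu-count        # label value, parsed as an integer, must exceed 8
        operator: Gt
        values: ["8"]
```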
#### 5. podAffinity (pod affinity)

Pod affinity determines which Pods may be deployed together in the same topology domain (a group of nodes); pod anti-affinity determines which Pods must not be deployed together. Both address Pod-to-Pod placement. Note that inter-pod affinity and anti-affinity require substantial processing and can significantly slow down scheduling in large clusters; they are not recommended in clusters larger than several hundred nodes. Pod anti-affinity also requires nodes to be consistently labeled: every node in the cluster must carry an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, unintended behavior may result.

A pod-affinity scenario: the cluster's nodes are spread across different zones or server rooms, and service A and service B must be deployed in the same zone or room; that is when affinity scheduling is needed.

labelSelector: selects the group of Pods to be affine with

namespaces: selects the namespace to match in

topologyKey: specifies the node label key to use

Pod affinity requires the related Pods to run in the "same location", while anti-affinity requires that they do not run in the "same location".

The "same location" here is defined by topologyKey, whose value is the name of a node label. For example, if some nodes carry zone=A and others zone=B, and the pod-affinity topologyKey is zone, Pods are scheduled around the A and B topologies, and nodes under the same topology count as the "same location". If the per-node kubernetes.io/hostname label is the criterion instead, "same location" simply means the same node, and different nodes are different locations.

Pod affinity:
```shell
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
    - name: myapp
      image: daocloud.io/library/nginx:latest
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
    - name: busybox
      image: daocloud.io/library/busybox
      imagePullPolicy: IfNotPresent
      command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - {key: app, operator: In, values: ["myapp"]}
          topologyKey: kubernetes.io/hostname
```
Check the result:
```shell
[root@master yaml]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          10m   10.244.1.6   node-1   <none>           <none>
pod-second   1/1     Running   0          10m   10.244.1.7   node-1   <none>           <none>
```
Pod anti-affinity:

A pod anti-affinity scenario: application service A and database service B should preferably not run on the same node.
```shell
apiVersion: v1
kind: Pod
metadata:
  name: pod-first-1
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
    - name: myapp
      image: daocloud.io/library/nginx:latest
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second-2
  labels:
    app: backend
    tier: db
spec:
  containers:
    - name: busybox
      image: daocloud.io/library/busybox:latest
      imagePullPolicy: IfNotPresent
      command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - {key: app, operator: In, values: ["myapp"]}
          topologyKey: kubernetes.io/hostname
```
Check the result:
```shell
[root@master yaml]# kubectl get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
pod-first-1    1/1     Running   0          7m28s   10.244.1.8   node-1   <none>           <none>
pod-second-2   1/1     Running   0          7m28s   10.244.2.6   node-2   <none>           <none>
```