<h1><center>Deploying a Kubernetes Cluster with kubeadm</center></h1>
Author: 行癫 <All rights reserved>
------
## I: Environment Preparation
Four servers: one master and three nodes. The master node must have at least 2 CPU cores.
| Node name | IP address |
| :------: | :--------: |
| master | 10.0.0.220 |
| node-1 | 10.0.0.221 |
| node-2 | 10.0.0.222 |
| node-3 | 10.0.0.223 |
#### 1. Disable the firewall and SELinux on all servers
```shell
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
[root@localhost ~]# swapoff -a                           # disable swap temporarily
[root@localhost ~]# sed -i 's/.*swap.*/#&/' /etc/fstab   # disable swap permanently
Note:
    Disable the swap partition on every server.
    Run on all nodes.
```
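A quick check that the changes took effect (a minimal sketch; `getenforce` still reports `Permissive` until the next reboot because `setenforce 0` only switches the running mode):
```shell
[root@localhost ~]# getenforce                               # Permissive now, Disabled after reboot
[root@localhost ~]# firewall-cmd --state 2>/dev/null || echo "firewalld not running"
[root@localhost ~]# free -h | grep -i swap                   # the Swap line should show 0B
```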
#### 2. Make sure the yum repository is available
```shell
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache fast
Note:
    Use a domestic (China mainland) yum mirror.
    Run on all nodes.
```
#### 3. Set the hostnames
```shell
[root@localhost ~]# hostnamectl set-hostname master    # on the master
[root@localhost ~]# hostnamectl set-hostname node-1    # on node-1
[root@localhost ~]# hostnamectl set-hostname node-2    # on node-2
[root@localhost ~]# hostnamectl set-hostname node-3    # on node-3
Note:
    Run the matching command on each node.
```
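If you prefer to drive everything from one machine, the same can be scripted over SSH (a sketch, assuming root SSH access to the IPs in the table above is already set up):
```shell
# set each node's hostname remotely
for pair in "10.0.0.220 master" "10.0.0.221 node-1" "10.0.0.222 node-2" "10.0.0.223 node-3"; do
    set -- $pair                                 # $1 = IP, $2 = hostname
    ssh root@$1 "hostnamectl set-hostname $2"
done
```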
#### 4. Add local name resolution
```shell
[root@master ~]# cat >> /etc/hosts <<eof
10.0.0.220 master
10.0.0.221 node-1
10.0.0.222 node-2
10.0.0.223 node-3
eof
Note:
    Run on all nodes.
```
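A quick sanity check that every name resolves once /etc/hosts is in place on a node:
```shell
[root@master ~]# for h in master node-1 node-2 node-3; do ping -c1 -W1 $h >/dev/null && echo "$h OK" || echo "$h FAILED"; done
```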
#### 5. Install the container runtime
```shell
Step 1: Install containerd
Reference: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
[root@master ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.8/containerd-1.6.8-linux-amd64.tar.gz
[root@master ~]# tar xf containerd-1.6.8-linux-amd64.tar.gz
[root@master ~]# cp bin/* /usr/local/bin
Create the systemd unit file so containerd can be managed with systemctl
(the second path is what systemctl enable containerd creates):
/usr/local/lib/systemd/system/containerd.service
/etc/systemd/system/multi-user.target.wants/containerd.service
[root@master ~]# cat containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start containerd
Step 2: Install runc
Reference: https://github.com/opencontainers/runc/releases
[root@master ~]# wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
[root@master ~]# install -m 755 runc.amd64 /usr/local/sbin/runc
Step 3: Install the CNI plugins
Reference: https://github.com/containernetworking/plugins/releases
[root@master ~]# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
[root@master ~]# mkdir -p /opt/cni/bin
[root@master ~]# tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
Note:
    Run on all nodes.
```
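Before moving on, it is usually worth generating containerd's default configuration and switching it to the systemd cgroup driver, which is what kubelet expects on systemd-based distributions. A sketch (the `sandbox_image` override matches the pause image used later and is optional):
```shell
[root@master ~]# mkdir -p /etc/containerd
[root@master ~]# containerd config default > /etc/containerd/config.toml
# use the systemd cgroup driver for the runc runtime
[root@master ~]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# optional: point the sandbox (pause) image at a reachable mirror
[root@master ~]# sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
[root@master ~]# systemctl enable containerd
[root@master ~]# systemctl restart containerd
```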
#### 6. Install kubeadm and kubelet
```shell
Upstream (Google) repository:
[root@master ~]# cat >> /etc/yum.repos.d/kubernetes.repo <<eof
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
eof
[root@master ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Domestic (Aliyun) mirror repository:
[root@master ~]# cat >> /etc/yum.repos.d/kubernetes.repo <<eof
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
eof
[root@master ~]# yum -y install kubeadm kubelet kubectl ipvsadm
Note:
    Run on all nodes.
    This installs the latest version; a specific version can be pinned instead, e.g. kubeadm-1.25.0.
Adjust the kubelet configuration:
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
[root@master ~]# cat /etc/systemd/system/multi-user.target.wants/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet \
--container-runtime=remote \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
```
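After installation, enable kubelet so it starts on boot. It will restart-loop until `kubeadm init`/`join` hands it a configuration, which is expected at this stage:
```shell
[root@master ~]# systemctl enable --now kubelet    # crash-looping is normal before kubeadm runs
[root@master ~]# kubeadm version -o short          # confirm the installed version, e.g. v1.25.0
```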
#### 7. Load kernel modules
```shell
[root@master ~]# modprobe br_netfilter
Note:
    Run on all nodes.
```
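`modprobe` does not survive a reboot; to make the module load persistent, drop it into `/etc/modules-load.d/` (run on all nodes):
```shell
[root@master ~]# cat > /etc/modules-load.d/k8s.conf <<eof
br_netfilter
eof
[root@master ~]# lsmod | grep br_netfilter   # confirm the module is loaded
```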
#### 8. Adjust kernel parameters
```shell
[root@master ~]# cat >> /etc/sysctl.conf <<eof
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
net.ipv4.ip_forward=1
eof
[root@master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
net.ipv4.ip_forward = 1
Note:
    Run on all nodes.
```
## II: Deploying Kubernetes
#### 1. Pull the required images
```shell
[root@master ~]# cat image.sh
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io i pull -k registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
# flannel images
ctr -n k8s.io i pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
ctr -n k8s.io i pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 registry.k8s.io/etcd:3.5.4-0
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3 registry.k8s.io/coredns/coredns:v1.9.3
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0 registry.k8s.io/kube-proxy:v1.25.0
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0 registry.k8s.io/kube-scheduler:v1.25.0
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0 registry.k8s.io/kube-controller-manager:v1.25.0
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0 registry.k8s.io/kube-apiserver:v1.25.0
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8
[root@master ~]# bash image.sh
Note:
    Run on all nodes.
    The full list of images kubeadm needs can be printed with:
[root@master ~]# kubeadm config images list    # list the required images
registry.k8s.io/kube-apiserver:v1.25.0
registry.k8s.io/kube-controller-manager:v1.25.0
registry.k8s.io/kube-scheduler:v1.25.0
registry.k8s.io/kube-proxy:v1.25.0
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
```
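The same pull-and-retag work can be expressed as a loop driven by `kubeadm config images list`, which stays in sync when versions change. A sketch, assuming every required image exists under the Aliyun `google_containers` namespace with an identical name and tag:
```shell
#!/usr/bin/env bash
# pull each required image from the Aliyun mirror, then retag it
# under the registry.k8s.io name that kubeadm expects
for img in $(kubeadm config images list 2>/dev/null); do
    name=${img##*/}    # strip the registry/repo prefix, e.g. kube-apiserver:v1.25.0
    mirror="registry.cn-hangzhou.aliyuncs.com/google_containers/${name}"
    ctr -n k8s.io i pull -k "$mirror"
    ctr -n k8s.io i tag "$mirror" "$img"
done
```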
#### 2. Initialize the master node
```shell
[root@master ~]# kubeadm init --kubernetes-version=1.25.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.220
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.220:6443 --token tlohed.bwgh4tt5erbc2zhg \
--discovery-token-ca-cert-hash sha256:9b28238c55bf5b2e5e0d68c6303e50bf6f12e7e07126897c846a4f5328157e16
```
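Follow the output on the master to configure kubectl (these are the commands from the init message, run here as root):
```shell
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# kubectl get nodes   # the master shows NotReady until a pod network is installed
```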
#### 3. Install the pod network add-on (flannel)
```shell
[root@master ~]# vi kube-flannel.yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
[root@master ~]# kubectl create -f kube-flannel.yaml
```
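Wait for the flannel DaemonSet to come up before joining nodes; its pods live in the kube-flannel namespace defined by the manifest:
```shell
[root@master ~]# kubectl get pods -n kube-flannel -o wide
[root@master ~]# kubectl get nodes    # the master should flip to Ready once flannel is Running
```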
#### 4. Join the worker nodes
```shell
[root@node-1 ~]# kubeadm join 10.0.0.220:6443 --token mzrm3c.u9mpt80rddmjvd3g --discovery-token-ca-cert-hash sha256:fec53dfeacc5187d3f0e3998d65bd3e303fa64acd5156192240728567659bf4a
Note:
    This uses the token generated during master initialization.
    The token expires after a while (24 hours by default), so it may need to be regenerated.
    If you did not record the cluster join command, you can regenerate it with:
    kubeadm token create --print-join-command --ttl=0
```
#### 5. Check the cluster status on the master
```shell
[root@master ~]# kubectl get node
NAME     STATUS   ROLES           AGE    VERSION
master   Ready    control-plane   7m     v1.25.0
node-1   Ready    <none>          2m7s   v1.25.0
node-2   Ready    <none>          50s    v1.25.0
node-3   Ready    <none>          110s   v1.25.0
```
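If a node stays NotReady, the usual first checks are the system pods and the kubelet log on the affected node (illustrative commands):
```shell
[root@master ~]# kubectl get pods -A -o wide   # kube-system and kube-flannel pods should all be Running
[root@node-1 ~]# journalctl -u kubelet -f      # on a problem node, watch kubelet's log
```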
## III: Deploying the Dashboard
#### 1. Enable IPVS mode for kube-proxy
```shell
[root@master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@master ~]# kubectl apply -f kube-proxy-configmap.yaml
[root@master ~]# rm -f kube-proxy-configmap.yaml
[root@master ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
```
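IPVS mode only works if the ip_vs kernel modules are loaded (kube-proxy falls back to iptables otherwise). A sketch for loading and verifying them, run on all nodes:
```shell
# on kernels older than 4.19, use nf_conntrack_ipv4 instead of nf_conntrack
[root@master ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack
# after the kube-proxy pods restart, the virtual server table should be populated
[root@master ~]# ipvsadm -Ln
```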
#### 2. Dashboard installation manifest
```shell
[root@master ~]# cat dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.6.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```
#### 3. Create certificates
```shell
[root@master ~]# mkdir dashboard-certs
[root@master ~]# cd dashboard-certs/
# create the namespace
[root@master dashboard-certs]# kubectl create namespace kubernetes-dashboard
# generate the private key
[root@master dashboard-certs]# openssl genrsa -out dashboard.key 2048
# create the certificate signing request
[root@master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# self-sign the certificate
[root@master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# create the kubernetes-dashboard-certs secret
[root@master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
```
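You can inspect the self-signed certificate and confirm the secret landed in the right namespace:
```shell
[root@master dashboard-certs]# openssl x509 -in dashboard.crt -noout -subject -dates
[root@master dashboard-certs]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard
```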
#### 4. Create an administrator account
```shell
Create the account:
[root@master ~]# vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
# save and exit, then apply:
[root@master ~]# kubectl create -f dashboard-admin.yaml
Grant the account cluster permissions:
[root@master ~]# vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
# save and exit, then apply:
[root@master ~]# kubectl create -f dashboard-admin-bind-cluster-role.yaml
```
#### 5. Install the Dashboard
```shell
[root@master dashboard-certs]# kubectl create -f dashboard.yaml
```
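The Service is of type NodePort, so once the pods are Running you can find the assigned port and open the Dashboard in a browser (HTTPS is required):
```shell
[root@master dashboard-certs]# kubectl get pods,svc -n kubernetes-dashboard
# note the NodePort mapped to 443 (e.g. 443:3xxxx/TCP), then browse to
# https://10.0.0.220:<nodeport> and log in with the token from the next step
```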
#### 6. Get a login token
```shell
[root@master dashboard-certs]# kubectl -n kubernetes-dashboard create token dashboard-admin
eyJhbGciOiJSUzI1NiIsImtpZCI6InlBck13aTFMR2daR3htTmxSdG5XbGJjOVFLWmdMZlgzRU10TmJWRFNEMk0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjYyMjE0MTA3LCJpYXQiOjE2NjIyMTA1MDcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiIwOTRhYWI2NC05NTkyLTRjYTctOWI3MS0yNDEwMmI5ODA1YjcifX0sIm5iZiI6MTY2MjIxMDUwNywic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.MQfc83l08PsAqmBHRzwgE_TsZGct0Sul1lM7Ks0ssf29DtXt22u9hHisvaLmQ64sNsvb_D7r47kDDTxZPbIJP_A2mBuHFqy_dnryUmqlTj7KFJm4PdObwiMTlnBch-v7HqxJKLuA6XXLxtpNrbLWqqG47Bc2kvvcF4BzSkiDhe-s5L0PS-WY753QjV0C9v63G8KJDxkQEGVC4PjqfXSclLi_jvIe4n3UqhUNHPl85JWgBhJHTTAei3Ztp7IMweztR_P30p6BiXEF0Kmcv8Nb7Xsk2dx5avYyiTRZTpq4pBkvAMKlCbXyKufh78mil_oNdaA8Q_AeFWFwgDx9UrGoFA
```
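Tokens produced by `kubectl create token` are short-lived (one hour by default); request a longer validity with `--duration` if needed:
```shell
[root@master ~]# kubectl -n kubernetes-dashboard create token dashboard-admin --duration=24h
```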
#### 7. Create a kubeconfig
```shell
Get the certificate-authority-data:
[root@master ~]# CURRENT_CONTEXT=$(kubectl config current-context)
[root@master ~]# CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
[root@master ~]# kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}'
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1Ea3dNekEzTVRBeE0xb1hEVE15TURnek1UQTNNVEF4TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTVNZCnRxdjVwdlVaQVN3VHl0ODkrWGphM1ZuZFZnaWVCbFc1bEZVc2dzSklxa0tSNlV2cjVYcXEvWjNOaUVpUlBqT28KWkh4a1V5SWpqdUFTUXZuYzhrTXhvNjNQY3d2UUNEYzd3V1pQeVBxMDVobFZhUlhYK0hHNjlaRXozYkQrUmlObgpyTU5uSVZqeEI0ck56SWs0cGFUNjBZMU5hdWx0V01NbEFyMFM3ZC9YQ3ZMeVhBK0NCNVFmZ2xSQTFJZnJ3ZjNJCno3YS9iQ0M4Qk9Fak94QmllRCtra3JYWGJtdXlMUHpTZkdKUGNUajI1eGdjK2RvNDJZKzZ4UUVCK0ZTSnN6VWIKVzhyMkx5TkI2YjNaZlBZcjNIMXQ4RkxkeUxtTU9nR1M2RkpPMmpQWVVWR0RObURLUHlPZWJMVit4UXlvMW4rMQpYK0F5NzJ1b2JlbklESE54czhzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMVXJSUnZXN2VrRlJhajRFRDFuWmVONXJzNVRNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2lucC9OakRCOGhLbFhuMzJCTQorV0hLcTRTOHNCTFBFeFhJSzloUXdWNmVodWgyOEQzSEltOFlFUCtJazEybDUwNi90QlNpYllOYjV1dHYyVnBmCmltUEh2aFAvUzZmTFE4MXVIL2JaQytaMlJ3b3VvLzU4TkJoZVRhR2ozK2VXTzNnMDllaEhaaHVFajE4WWVtaDYKU0xhUU9SZWE2dEpHVjNlVURWUk44Tnc1aXp2T3AxZ2poVHdsNzJTL3JycmxoREo2dGM4VDlPaUFhOWNkeVlPWApLNVZNdlEwRWw3aDlLT3lmN3FsSWgxL0dhSWdqVUl4Z3FNQ1lIallKc0Jvd2g2eDB5d0tSYllJQkl4d1M0NDNECnRmYU5wOTBpUFVGS0Q3c2IvTWxGeDZpK3l3UjVnQUd3NWJWSEdIZTMrY0szNzlRd1R2NS8zdWlYQTlBUnhiVloKS0hNPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
[root@master ~]# cat config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXhNakV3TXpReU5Gb1hEVE14TURVeE1ERXdNelF5TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFd5CmFZLy85Nlc5R2hxWWVBVER5cXhQV2tmVE1mSUxNMFkydy9RSm9SRDJWZmw1cjFNR25mRWNteG81bXc4Z1NXMmEKdVNmeml6dU41eDRMblBBYlVvbHdubkxMeWx3cENmTkRKMFRUVFFyOThuTmphWWQ0d2RmK09uZmtZQ1VaVG1NTwpYWDZBMEZJblFHTEpWQUdOb0xUUnVIR3F6dU0yNUd1Rkp5aXBoNlhRN0tHcmJFVFo0RXQyVWg3azV2UGxPVDQ3CkNEQjlDVkMra1c5MkdIRmNrTkViUU9kTUpPTkZXalh1K1lsSjlZdzNybzhJYU05QVR5SDFwNmNzaXNoejhybVQKUGZEUkl4cXpsNTVzenJNUHV5Y0JqZkVEZXFhcjQ2OXJyMFFGcWJ5NTdaaEtBcGMybTA0eTZ6ZllUQlB4cDhndwpXZ01ONktkcjU4bEFpOERwN3kwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZERTBCTVRreE5BYlpmMGZ6S2JwRm5ZUU94aXpNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCdE1yU281K2JlcEF2UGpwOUJ4TDlncnFtTnpOVEczRWFHZnZKUndoUE5qVXZ1bDZFZQpNenlIM3o3SERzYW5kSHJMU2xYMGZXZ2Y3TFRBSVRLV3duTDI3NjVzaTJ4L0prb2pCV0VySytGTEs1dGZXbmdmCnZ4cE13eE83RVNYd0FwVDdKWk9iVVA0eStPc3k2VzhQQXRjeHFGUHdyaVNhM29KYnZwZEFvcHloZXdoNUxNcUwKWkpobU4wK1RhTnlOaVJXaEkwcnZOSGYrNEdtcjhMd2lzMXZPdGZ4b2FtcGhPWE8wS0NheEJ0MWoxcDQxK2pNTwo0UUhlRWtFTkJndERzMUtuMTRhbFR6NWI1cFg0amwrU2tOOFk0ZlppUk9SRXJ4cWVmVkJEZ1Z0aWtQbEp5VFkyCmRQMkJOSE82R1FjalVvZmZBTmwwK1ZkblJTWGEvNVRsZU1PUwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.0.0.220:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6InlBck13aTFMR2daR3htTmxSdG5XbGJjOVFLWmdMZlgzRU10TmJWRFNEMk0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjYyMjE0MjcwLCJpYXQiOjE2NjIyMTA2NzAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiIwOTRhYWI2NC05NTkyLTRjYTctOWI3MS0yNDEwMmI5ODA1YjcifX0sIm5iZiI6MTY2MjIxMDY3MCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.bHK9jgpCAQIAxwurh05zzndo22hWEzoDnfFRS3VDWAfoD0YOsTF6RbHFSshn0Vm-Xv1sEIgmVkjgftP2Pq_saMs-WdgHfTLz2CjxWpkYV4WQcMs4WJq9Lx5SQeNxw9mEh8c085nnx368GWkENHSsldKP-O6YliWQAP8qpOiUWrJqhteVQi0GD7EYmOPlnKFZF2YKaROYFvn9P8JiCL8rRTZ5GUYIty9LRLkh3daFXj67krk4v3pNLqdHcKKwkv8vFN4hl6RbgA3nY
```
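Instead of hand-writing the file, the same kubeconfig can be assembled with `kubectl config` (a sketch; it embeds the cluster CA from the master's pki directory and the token obtained in step 6, assumed here to be in `$TOKEN`):
```shell
[root@master ~]# TOKEN=$(kubectl -n kubernetes-dashboard create token dashboard-admin)
[root@master ~]# kubectl config set-cluster kubernetes \
    --server=https://10.0.0.220:6443 \
    --certificate-authority=/etc/kubernetes/pki/ca.crt \
    --embed-certs=true --kubeconfig=config
[root@master ~]# kubectl config set-credentials dashboard-admin \
    --token="$TOKEN" --kubeconfig=config
[root@master ~]# kubectl config set-context dashboard-admin@kubernetes \
    --cluster=kubernetes --user=dashboard-admin --kubeconfig=config
[root@master ~]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=config
```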