A Detailed Walkthrough of Deleting and Reinstalling a Kubernetes (k8s) Cluster

Table of Contents
  • 1. System Environment
  • 2. Preface
  • 3. Reinstalling the Kubernetes Cluster
    • 3.1 Environment Overview
    • 3.2 Deleting all k8s nodes
    • 3.3 kubeadm initialization
    • 3.4 Adding worker nodes to the k8s cluster
    • 3.5 Installing calico

1. System Environment

Server OS version                      Docker version            CPU architecture
CentOS Linux release 7.4.1708 (Core)   Docker version 20.10.12   x86_64

2. Preface

After a Kubernetes cluster has been installed, deployed, and used for a while, you may eventually need to reinstall it. To address that need, this article simulates a full reinstall of a Kubernetes cluster.

Reinstalling a Kubernetes cluster presupposes that a working cluster is already in place. For the initial installation and deployment of a Kubernetes (k8s) cluster, see the post "Centos7 安装部署Kubernetes(k8s)集群" (Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7).

3. Reinstalling the Kubernetes Cluster

3.1 Environment Overview

Kubernetes cluster architecture: k8scloude1 serves as the master node, while k8scloude2 and k8scloude3 serve as worker nodes.

Server/IP                    OS version                             CPU architecture   Processes                                                                                              Role
k8scloude1/192.168.110.130   CentOS Linux release 7.4.1708 (Core)   x86_64             docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico   k8s master node
k8scloude2/192.168.110.129   CentOS Linux release 7.4.1708 (Core)   x86_64             docker,kubelet,kube-proxy,calico                                                                       k8s worker node
k8scloude3/192.168.110.128   CentOS Linux release 7.4.1708 (Core)   x86_64             docker,kubelet,kube-proxy,calico                                                                       k8s worker node

3.2 Deleting all k8s nodes

kubectl drain safely evicts all of the pods running on a node. The --ignore-daemonsets flag is usually required: drain first cordons the node (automatically marking it unschedulable, shown as SchedulingDisabled), but DaemonSet-managed pods ignore that marking, so if they were evicted the DaemonSet controller would immediately recreate them on the same node and the drain would loop forever. DaemonSet pods are therefore ignored here.
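
Before draining, it can be useful to see which pods (including the DaemonSet-managed ones that will be skipped) are currently running on the node. A field selector does the trick; this check is a convenience added here and is not part of the original walkthrough:

# List every pod scheduled on k8scloude3, across all namespaces
kubectl get pods -A -o wide --field-selector spec.nodeName=k8scloude3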

[root@k8scloude1 ~]# kubectl drain k8scloude3 --ignore-daemonsets
node/k8scloude3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wmz4r, kube-system/kube-proxy-84gcx
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-rl2mh
pod/calico-kube-controllers-6b9fbfff44-rl2mh evicted
node/k8scloude3 evicted

k8scloude3 is now marked SchedulingDisabled:

[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready                      control-plane,master   64m   v1.21.0
k8scloude2   Ready                      <none>                 56m   v1.21.0
k8scloude3   Ready,SchedulingDisabled   <none>                 56m   v1.21.0
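
As an aside (not shown in the original procedure): if a node has only been cordoned and you decide to keep it rather than delete it, the cordon can be reversed with kubectl uncordon, which makes the node schedulable again:

# Make a cordoned node schedulable again
kubectl uncordon k8scloude3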

Delete the node k8scloude3:

[root@k8scloude1 ~]# kubectl delete nodes k8scloude3
node "k8scloude3" deleted
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   65m   v1.21.0
k8scloude2   Ready    <none>                 57m   v1.21.0

Perform similar operations on the remaining nodes:

[root@k8scloude1 ~]# kubectl drain k8scloude2 --ignore-daemonsets
node/k8scloude2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-bbst4, kube-system/kube-proxy-8wf8t
evicting pod kube-system/coredns-545d6fc579-kgmfl
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-nq79f
evicting pod kube-system/coredns-545d6fc579-dln6p
pod/coredns-545d6fc579-dln6p evicted
pod/coredns-545d6fc579-kgmfl evicted
pod/calico-kube-controllers-6b9fbfff44-nq79f evicted
node/k8scloude2 evicted
[root@k8scloude1 ~]# kubectl drain k8scloude1 --ignore-daemonsets
node/k8scloude1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-r57vx, kube-system/kube-proxy-zblkg
evicting pod kube-system/coredns-545d6fc579-tgcl4
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-t9k45
evicting pod kube-system/coredns-545d6fc579-l9g7b
pod/calico-kube-controllers-6b9fbfff44-t9k45 evicted
pod/coredns-545d6fc579-tgcl4 evicted
pod/coredns-545d6fc579-l9g7b evicted
node/k8scloude1 evicted
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready,SchedulingDisabled   control-plane,master   66m   v1.21.0
k8scloude2   Ready,SchedulingDisabled   <none>                 58m   v1.21.0
[root@k8scloude1 ~]# kubectl delete nodes k8scloude2
node "k8scloude2" deleted
[root@k8scloude1 ~]# kubectl delete nodes k8scloude1
node "k8scloude1" deleted

At this point, all the nodes of the k8s cluster have been deleted:

[root@k8scloude1 ~]# kubectl get nodes
No resources found

3.3 kubeadm initialization

Now run kubeadm init again. It fails, and the error messages show that the required ports are already in use and the configuration files already exist:

[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
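
These failures are expected: the control-plane static pods from the previous installation are still running and their files are still on disk. If you want to confirm what is holding the ports before resetting, a standard socket listing can be used (a diagnostic aside, not part of the original walkthrough):

# Show which processes are listening on the control-plane ports
ss -tlnp | grep -E ':6443|:10250|:10257|:10259|:2379|:2380'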

When re-initializing the k8s cluster, the previous settings need to be cleared first:

[root@k8scloude1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0109 16:17:15.936292   53177 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "k8scloude1" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0109 16:17:17.651795   53177 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
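
As the reset output itself notes, kubeadm reset does not clean up the CNI configuration, the iptables/IPVS rules, or the kubeconfig file. If a completely clean slate is wanted, the leftovers can be removed manually along these lines (a sketch based on the hints printed above; flushing iptables removes all rules, including any unrelated to Kubernetes, so adjust to your environment). The same applies to the worker nodes when they are reset later:

# Remove leftover CNI configuration
rm -rf /etc/cni/net.d
# Flush iptables rules left behind by kube-proxy (clears ALL rules)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Clear IPVS tables if kube-proxy ran in IPVS mode
ipvsadm --clear

In this walkthrough the old $HOME/.kube/config is not deleted; it is simply overwritten later with cp -i, which is why the overwrite prompt appears below.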

Run kubeadm init again:

[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.004984 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 45wtx2.gfb3j9obk0fz663z
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z \
        --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d

Create the directory and kubeconfig file as instructed:

[root@k8scloude1 ~]# mkdir -p $HOME/.kube
[root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite "/root/.kube/config"? y
[root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
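
The kubeadm join command printed by kubeadm init embeds a bootstrap token, which by default expires after 24 hours. If the workers are joined later and the token has expired, a fresh join command can be generated on the control-plane node with a standard kubeadm subcommand (mentioned here for convenience; it is not part of the original output):

# Print a new, ready-to-use join command (includes a new token and the CA cert hash)
kubeadm token create --print-join-command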

3.4 Adding worker nodes to the k8s cluster

Next, join the other two worker nodes to the k8s cluster as well.

Join the k8scloude2 node to the k8s cluster:

# Run the join command on the other two nodes
[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

A worker node that rejoins the k8s cluster also needs its previous settings cleared first:

[root@k8scloude2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0109 16:22:12.705575   59352 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Join k8scloude2 to the k8s cluster again; this time it joins successfully:

[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Perform the same operations on the k8scloude3 node:

[root@k8scloude3 ~]# kubeadm reset
[root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z \
        --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d

Check the status of the k8s cluster nodes:

# All nodes now show Ready status
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   5m    v1.21.0
k8scloude2   Ready    <none>                 63s   v1.21.0
k8scloude3   Ready    <none>                 33s   v1.21.0

3.5 Installing calico

We had already installed this k8s cluster once before, and the calico plugin was installed along with it. After the reinstall, calico has not actually been installed yet, and yet kubectl get nodes shows every node as Ready. That Ready status is stale state left over in the etcd database from the previous install rather than a reflection of the current situation, so calico needs to be installed again.
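
This walkthrough applies a calico.yaml manifest that was kept on the master from the first installation. If the manifest is not already present, it can usually be downloaded from the Project Calico documentation site; the exact URL depends on the Calico release, and the one below is only an example of the form that was common at the time (verify it against the Calico docs before using it):

# Download the calico manifest (URL/version is an assumption; check the Calico docs)
curl -O https://docs.projectcalico.org/manifests/calico.yaml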

[root@k8scloude1 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Only now is the cluster fully functional:

[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8scloude1   Ready    control-plane,master   9m11s   v1.21.0
k8scloude2   Ready    <none>                 5m14s   v1.21.0
k8scloude3   Ready    <none>                 4m44s   v1.21.0

Note: if kubeadm reset was never run on the k8s master node and only the worker nodes were reset, there is no need to reinstall calico.

[root@k8scloude1 ~]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-6b9fbfff44-4jzkj   1/1     Running   0          3m16s   10.244.251.193    k8scloude3   <none>           <none>
calico-node-bdlgm                          1/1     Running   0          3m16s   192.168.110.130   k8scloude1   <none>           <none>
calico-node-hx8bk                          1/1     Running   0          3m16s   192.168.110.128   k8scloude3   <none>           <none>
calico-node-nsbfs                          1/1     Running   0          3m16s   192.168.110.129   k8scloude2   <none>           <none>
coredns-545d6fc579-7wm95                   1/1     Running   0          11m     10.244.158.65     k8scloude1   <none>           <none>
coredns-545d6fc579-87q8j                   1/1     Running   0          11m     10.244.158.66     k8scloude1   <none>           <none>
etcd-k8scloude1                            1/1     Running   0          12m     192.168.110.130   k8scloude1   <none>           <none>
kube-apiserver-k8scloude1                  1/1     Running   0          12m     192.168.110.130   k8scloude1   <none>           <none>
kube-controller-manager-k8scloude1         1/1     Running   0          12m     192.168.110.130   k8scloude1   <none>           <none>
kube-proxy-599xh                           1/1     Running   0          7m48s   192.168.110.128   k8scloude3   <none>           <none>
kube-proxy-lpj8z                           1/1     Running   0          8m18s   192.168.110.129   k8scloude2   <none>           <none>
kube-proxy-zxlk9                           1/1     Running   0          11m     192.168.110.130   k8scloude1   <none>           <none>
kube-scheduler-k8scloude1                  1/1     Running   0          12m     192.168.110.130   k8scloude1   <none>           <none>

With that, the k8s cluster has been reinstalled!
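
As an optional smoke test (not part of the original walkthrough), a throwaway deployment can confirm that new pods schedule onto the rejoined workers and receive calico-assigned pod IPs:

# Create a small test deployment and see where its pods land
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx-test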

