Kubernetes集群Node管理
创始人
2024-05-20 13:30:16


  • 1. View cluster information
  • 2. View node information
    • 2.1 View cluster node information
    • 2.2 View detailed cluster node information
    • 2.3 View a node's full description
  • 3. Managing the cluster from a worker node
  • 4. Node labels
    • 4.1 View node label information
    • 4.2 Set node labels
      • 4.2.1 Set a node label
      • 4.2.2 View the region label on all nodes
    • 4.3 Multi-dimensional labels
      • 4.3.1 Set multi-dimensional labels
      • 4.3.2 Show selected labels for nodes
      • 4.3.3 Find nodes with `region=huanai`
      • 4.3.4 Modifying labels
      • 4.3.5 Deleting labels
      • 4.3.6 Label selectors

1. View cluster information

[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
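When deeper troubleshooting is needed, the `dump` subcommand mentioned above can also write the full cluster state to a directory instead of stdout; the target path below is only an example:

```shell
# Dump cluster state (node info, events, logs of kube-system pods) into a
# directory rather than printing it; /tmp/cluster-state is an arbitrary path.
kubectl cluster-info dump --output-directory=/tmp/cluster-state
```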

2. View node information

2.1 View cluster node information

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   36d   v1.21.10
k8s-master02   Ready    control-plane,master   36d   v1.21.10
k8s-master03   Ready    control-plane,master   36d   v1.21.10
k8s-worker02   Ready    <none>                 36d   v1.21.10

2.2 View detailed cluster node information

[root@k8s-master01 ~]# kubectl get nodes -owide
NAME           STATUS   ROLES                  AGE   VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master01   Ready    control-plane,master   36d   v1.21.10   192.168.10.101   <none>        CentOS Linux 7 (Core)   6.1.0-1.el7.elrepo.x86_64   docker://20.10.22
k8s-master02   Ready    control-plane,master   36d   v1.21.10   192.168.10.102   <none>        CentOS Linux 7 (Core)   6.1.0-1.el7.elrepo.x86_64   docker://20.10.22
k8s-master03   Ready    control-plane,master   36d   v1.21.10   192.168.10.103   <none>        CentOS Linux 7 (Core)   6.1.0-1.el7.elrepo.x86_64   docker://20.10.22
k8s-worker02   Ready    <none>                 36d   v1.21.10   192.168.10.104   <none>        CentOS Linux 7 (Core)   6.1.1-1.el7.elrepo.x86_64   docker://20.10.22
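When only a couple of fields are needed, a jsonpath query is often handier than `-owide`; this sketch prints each node's name and InternalIP:

```shell
# One line per node: name, a tab, then the address whose type is InternalIP
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```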

2.3 View a node's full description

[root@k8s-master01 ~]# kubectl describe nodes k8s-master01
Name:               k8s-master01
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master01
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.10.101/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.32.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 24 Dec 2022 23:45:43 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master01
  AcquireTime:     <unset>
  RenewTime:       Mon, 30 Jan 2023 11:03:00 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 29 Jan 2023 10:12:40 +0800   Sun, 29 Jan 2023 10:12:40 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 30 Jan 2023 11:00:34 +0800   Sat, 24 Dec 2022 23:45:42 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 30 Jan 2023 11:00:34 +0800   Sat, 24 Dec 2022 23:45:42 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 30 Jan 2023 11:00:34 +0800   Sat, 24 Dec 2022 23:45:42 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 30 Jan 2023 11:00:34 +0800   Sun, 25 Dec 2022 00:06:35 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.10.101
  Hostname:    k8s-master01
Capacity:
  cpu:                2
  ephemeral-storage:  19466Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3995080Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  18370422344
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3892680Ki
  pods:               110
System Info:
  Machine ID:                 0e0a3ea7d11c4165b5eb28435792ad47
  System UUID:                d3794d56-6573-8633-b1d0-456a80d8ee9a
  Boot ID:                    09607e08-716a-4834-847b-534c12d3e5de
  Kernel Version:             6.1.0-1.el7.elrepo.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.22
  Kubelet Version:            v1.21.10
  Kube-Proxy Version:         v1.21.10
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-d5qw7                            250m (12%)    0 (0%)      0 (0%)           0 (0%)         36d
  kubernetes-dashboard        dashboard-metrics-scraper-c45b7869d-9c8jj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                250m (12%)  0 (0%)
  memory             0 (0%)      0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
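The describe output is verbose; to check just the Ready condition across all nodes, a jsonpath query along these lines can be used (a sketch, assuming the same cluster):

```shell
# One line per node: name and the status of its Ready condition (True/False/Unknown)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
```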

3. Managing the cluster from a worker node

  • With a kubeasz installation, every node (masters and workers) can already manage the cluster.

  • With a kubeadm installation, running kubectl on a worker node fails with the following error:

[root@k8s-worker1 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Copying the admin kubeconfig /etc/kubernetes/admin.conf from a master node to $HOME/.kube/config on a worker node lets that worker manage the cluster with kubectl as well.

1. On the worker node, create the .kube directory in the user's home directory:

[root@k8s-worker02 ~]# mkdir /root/.kube

2. On the master node, copy the file over:

[root@k8s-master01 ~]# scp /etc/kubernetes/admin.conf k8s-worker02:/root/.kube/config

3. Verify on the worker node:

[root@k8s-worker02 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   2d20h   v1.21.10
k8s-master02   Ready    control-plane,master   2d20h   v1.21.10
k8s-master03   Ready    control-plane,master   2d20h   v1.21.10
k8s-worker02   Ready    <none>                 2d20h   v1.21.10
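The steps above can be sketched as one short script run from the master; the hostnames are those of this cluster, and passwordless SSH from the master to the worker is assumed:

```shell
# Run on k8s-master01: give k8s-worker02 kubectl access to the cluster.
WORKER=k8s-worker02                                           # target worker (assumes SSH access)
ssh "$WORKER" 'mkdir -p /root/.kube'                          # step 1: create the .kube directory
scp /etc/kubernetes/admin.conf "$WORKER":/root/.kube/config   # step 2: copy the admin kubeconfig
ssh "$WORKER" 'kubectl get nodes'                             # step 3: verify kubectl on the worker
```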

4. Node labels

  • When a cluster consists of many nodes, you can tag each node with labels and then filter and inspect nodes by label, which makes selecting and matching resource objects much easier.

4.1 View node label information

[root@k8s-master01 ~]# kubectl get nodes --show-labels 
NAME           STATUS   ROLES                  AGE   VERSION    LABELS
k8s-master01   Ready    control-plane,master   36d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux
k8s-master02   Ready    control-plane,master   36d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=test1,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux
k8s-master03   Ready    control-plane,master   36d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,bussiness=ad,env=test2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,zone=A
k8s-worker02   Ready    <none>                 36d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux

4.2 Set node labels

4.2.1 Set a node label

Add a region=huanai label to node k8s-worker02:

[root@k8s-master01 ~]# kubectl label node k8s-worker02 region=huanai
node/k8s-worker02 labeled

4.2.2 View the region label on all nodes

[root@k8s-master01 ~]# kubectl get nodes -L region
NAME           STATUS   ROLES                  AGE     VERSION    REGION
k8s-master01   Ready    control-plane,master   2d21h   v1.21.10
k8s-master02   Ready    control-plane,master   2d21h   v1.21.10
k8s-master03   Ready    control-plane,master   2d21h   v1.21.10
k8s-worker02   Ready    <none>                 2d21h   v1.21.10   huanai

4.3 Multi-dimensional labels

4.3.1 Set multi-dimensional labels

You can also add labels along other dimensions to distinguish nodes for different scenarios.

For example, tag k8s-master03 as machine room A (zone=A), the test environment (env=test), and the game business (bussiness=game):

[root@k8s-master01 ~]# kubectl label node k8s-master03 zone=A env=test bussiness=game
node/k8s-master03 labeled
[root@k8s-master01 ~]# kubectl get nodes k8s-master03 --show-labels
NAME           STATUS   ROLES                  AGE     VERSION    LABELS
k8s-master03   Ready    control-plane,master   2d21h   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,bussiness=game,env=test,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,zone=A
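`kubectl label` also accepts several node names in one invocation, so the same label can be applied to a group of nodes at once; a sketch using the masters of this cluster:

```shell
# Apply zone=A to two nodes in a single command
kubectl label node k8s-master02 k8s-master03 zone=A
```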

4.3.2 Show selected labels for nodes

[root@k8s-master01 ~]# kubectl get nodes -L region,zone
NAME           STATUS   ROLES                  AGE     VERSION    REGION   ZONE
k8s-master01   Ready    control-plane,master   2d21h   v1.21.10
k8s-master02   Ready    control-plane,master   2d21h   v1.21.10
k8s-master03   Ready    control-plane,master   2d21h   v1.21.10            A
k8s-worker02   Ready    <none>                 2d21h   v1.21.10   huanai

4.3.3 Find nodes with region=huanai

[root@k8s-master01 ~]# kubectl get nodes -l region=huanai
NAME           STATUS   ROLES    AGE     VERSION
k8s-worker02   Ready    <none>   2d21h   v1.21.10

4.3.4 Modifying labels

Adding --overwrite=true replaces the label's existing value:

[root@k8s-master01 ~]# kubectl label node k8s-master03 bussiness=ad --overwrite=true
node/k8s-master03 labeled
[root@k8s-master01 ~]# kubectl get nodes -L bussiness
NAME           STATUS   ROLES                  AGE     VERSION    BUSSINESS
k8s-master01   Ready    control-plane,master   2d21h   v1.21.10
k8s-master02   Ready    control-plane,master   2d21h   v1.21.10
k8s-master03   Ready    control-plane,master   2d21h   v1.21.10   ad
k8s-worker02   Ready    <none>                 2d21h   v1.21.10

4.3.5 Deleting labels

Append a minus sign to the label key to remove the label:

[root@k8s-master02 ~]# kubectl label node k8s-worker02 region-
node/k8s-worker02 labeled

4.3.6 Label selectors

There are two main classes of label selectors:

  • Equality-based: =, ==, !=
  • Set-based: KEY in (VALUE1, VALUE2, …), KEY notin (VALUE1, VALUE2, …)

[root@k8s-master01 ~]# kubectl label node k8s-master02 env=test1
node/k8s-master02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master03 env=test2
node/k8s-master03 labeled
[root@k8s-master01 ~]# kubectl get node -l "env in(test1,test2)"
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master02   Ready    control-plane,master   2d21h   v1.21.10
k8s-master03   Ready    control-plane,master   2d21h   v1.21.10
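The main consumer of node labels is the scheduler: a Pod can pin itself to labeled nodes via spec.nodeSelector. A minimal sketch that schedules an nginx Pod only onto nodes labeled region=huanai (the Pod name and image are arbitrary examples, not from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-region-demo      # example name
spec:
  nodeSelector:
    region: huanai             # only nodes carrying this label are eligible
  containers:
  - name: nginx
    image: nginx:1.21          # example image
```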
