- Review of yesterday's content:
	- Harbor HTTPS with a self-built certificate
		Server side:
			- Create a self-built CA certificate
			- Generate a certificate signing request on the harbor server
			- Sign the request with the self-built CA certificate
			- Point harbor's configuration file at the custom domain certificate
			- Start harbor
		Client side:
			- Generate the client certificates on the server
			- Copy the certificates into docker's certificate directory on the client; a subdirectory named after the domain must be created there
	- K8S architecture:
		master:
			etcd: data store.
			api-server: the access entry point of the K8S cluster.
			scheduler: the scheduler, responsible for Pod scheduling.
			controller manager: the controller manager, maintains cluster state.
		slave:
			kubelet: manages the Pod lifecycle, monitors containers and node information and reports them to the api-server.
			kube-proxy: provides service discovery and load balancing for Pods.
			CNI: provides the network environment for cross-node container communication.
				flannel
				calico
				...
	- Quickly building a single-master cluster with kubeadm

Preview of today's content:
	- How to study K8S
	- Overview of commonly used K8S resources
	- CRUD management of K8S resources
	- Declarative API vs. imperative (responsive) API
	- Label management
	- The rc controller
	- The svc resource
	- The Pod creation workflow

- How to study K8S ---> Required reading: the right mindset for learning K8S
	1. Set the right mindset
		1.1 Reject: rushing for results and overreaching:
			People with some work experience are not beginners; they can stand up a K8S cluster from any random article online,
			but have no way to migrate business workloads onto it systematically.
		1.2 Reject: over-reliance on helper tools and staying in the comfort zone:
			Using SaaS products and graphical tools to manage a K8S cluster while knowing nothing about K8S internals.
			If every deployment is done through a K8S GUI, you will be stuck when the cluster fails. Over-reliance on tools here means
			cloud platforms such as Alibaba Cloud, JD Cloud, or AWS, and tools such as kubesphere, kuboard, and rancher.
		1.3 Reject: heading for a different career direction
			Wanting to go into traditional operations, databases, etc., and not wanting to invest energy in K8S.
			Don't fix the roof while it is raining; fix it while the sun is shining.
			Learn technology ahead of time and prepare for the raise that comes with your next job. One alumnus joined a container role at 14K and jumped to 21K after 3 months.
		1.4 An empty-cup mindset: steady and diligent
			Beginners usually know what they don't know and study systematically; if you don't get it the first time, try 2-3 more times.
			Very few people with work experience manage to keep an "empty-cup mindset".
	2. Study methods
		2.1 What you learn on paper is shallow; true understanding comes from doing
			Finish all the in-class exercises every day and organize them into a mind map.
			Finish the daily homework; if time and energy allow, also do the optional extended homework.
		2.2 Never be ashamed to ask, whether peers or instructors
			When you hit a problem, seek help promptly: search online or ask classmates.
			If a technical problem blocks you for more than 30 minutes without a solution, you must ask the instructor.
		2.3 Accept some things without fully digesting them, for now
			When you hit a hard spot, don't get stuck on it; keep up with the course and grasp the overall architecture first. Some problems resolve themselves as you learn the later material.
			Don't expect to master any technology in one pass. If you can, the technology is probably too simple, or what you are learning isn't worth much: if the learning cost is that low, imagine how saturated the market is.
			The first pass will always leave questions; write them down, and they may suddenly click on the second or third pass.
		2.4 Higher-level study methods
			Listen: pay attention in class and don't sleep.
				Sleeping 10 minutes in class can cost you at least 100 minutes to catch up afterwards; it isn't worth it.
			Do: the in-class experiments, all of them, at least once.
			Write: organize your class notes and turn them into your own knowledge.
			Draw: learn to draw mind maps to lay out the structure of what you learned.
			Teach: help classmates solve problems and treat the classmate asking you as an interviewer.
				If you can explain it clearly to a classmate, you will have no problem in an interview either.

- Overview of commonly used K8S resources
	- 1.Pod
		A group of containers; a Pod can run multiple containers and runs at least one.
	- 2.rc ---> replicationcontrollers
		Replica controller; keeps the specified number of Pod replicas alive.
	- 3.svc ---> services
		Provides service proxying for Pods.
	- 4.ep ---> endpoints
		Every svc is associated with an ep resource; the ep holds the backend IPs, which can be Pod addresses or addresses outside the K8S cluster.
	- 5.rs ---> replicasets
		Replica controller; keeps the specified number of Pod replicas alive. Lighter weight and more feature-rich than rc.
	- 6.deploy ---> deployments
		Used to deploy stateless services such as nginx. deploy controls Pod replicas through rs under the hood.
	- 7.ds ---> daemonsets
		Ensures each node runs exactly one Pod.
	- 8.sts ---> statefulsets
		Used to deploy stateful services; each Pod has independent storage and can be reached by name.
	- 9.jobs
		Controller for one-off tasks, e.g. a full database backup.
	- 10.cj ---> cronjobs
		Used to deploy periodic tasks, e.g. incremental database backups.
	- 11.pv ---> persistentvolumes
		Maps to the backend storage device type; a storage capacity can be specified.
	- 12.pvc ---> persistentvolumeclaims
		Automatically binds to a pv; Pods write their data to the pvc device.
	- 13.sc ---> storageclasses
		Dynamic storage class; can create pv dynamically.
	- 14.ns ---> namespaces
		Implements resource isolation within the K8S cluster.
	- 15.quota ---> resourcequotas
		Implements namespace-level resource limits.
	- 16.limits ---> limitranges
		Implements Pod-level resource limits.
	- 17.secrets
		Stores sensitive data.
	- 18.cm ---> configmaps
		Stores application configuration files.
	- 19.cs ---> componentstatuses
		Shows master component information. Deprecated.
	- 20.no ---> nodes
		Shows cluster node information.
	- 21.hpa ---> horizontalpodautoscalers
		Horizontal Pod autoscaling.
	- 22.ing ---> ingresses
		Implements layer-7 proxying, built on top of the layer-4 proxying provided by svc.
	- 23.sa ---> serviceaccounts
		Service accounts, used by the applications running in Pods.
	- 24.roles
		Defines a role.
	- 25.rolebindings
		Binds a role to a subject.
	- 26.clusterroles
		Cluster-wide roles.
	- 27.clusterrolebindings
		Binds a cluster role to a subject.
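	The full list of resource types can also be printed by the cluster itself (shown next); in addition, the field schema of any single resource can be explored with kubectl explain. A minimal sketch using standard kubectl commands (any resource name from the list above works in place of pods):

[root@master231 ~]# kubectl explain pods                        # top-level fields of the Pod resource
[root@master231 ~]# kubectl explain pods.spec.containers        # drill into nested fields
[root@master231 ~]# kubectl api-resources --namespaced=true -o name   # only the namespaced resource types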
	View the resource types of the K8S cluster:
[root@master231 ~]# kubectl api-resources
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta2   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta2   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
podsecuritypolicies               psp          policy/v1beta1                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
[root@master231 ~]#

- Daily basic K8S inspection when taking over a K8S cluster at a new job
	1. Check the cluster's worker node list
[root@master231 ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   17h   v1.23.17
worker232   Ready    <none>                 17h   v1.23.17
worker233   Ready    <none>                 17h   v1.23.17
[root@master231 ~]#

	2. Check the master components
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
[root@master231 ~]#

	3. Check that the Flannel Pods are running normally
[root@master231 ~]# kubectl get pods -o wide -n kube-flannel
NAME                    READY   STATUS    RESTARTS      AGE   IP           NODE        NOMINATED NODE   READINESS GATES
kube-flannel-ds-ckkbk   1/1     Running   1 (16h ago)   16h   10.0.0.233   worker233   <none>           <none>
kube-flannel-ds-kst7g   1/1     Running   1 (16h ago)   16h   10.0.0.232   worker232   <none>           <none>
kube-flannel-ds-ljktm   1/1     Running   1 (16h ago)   16h   10.0.0.231   master231   <none>           <none>
[root@master231 ~]#

	4. Check that each node's network is normal and the expected NICs exist
[root@master231 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.0.1  netmask 255.255.255.0  broadcast 10.100.0.255
        inet6 fe80::e48a:3ff:fe16:626  prefixlen 64  scopeid 0x20<link>
        ether e6:8a:03:16:06:26  txqueuelen 1000  (Ethernet)
        RX packets 118967  bytes 9855656 (9.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 137513  bytes 12374863 (12.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::cd6:8dff:fe03:f8a6  prefixlen 64  scopeid 0x20<link>
        ether 0e:d6:8d:03:f8:a6  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 1724 (1.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 892 (892.0 B)
        TX errors 0  dropped 30  overruns 0  carrier 0  collisions 0

[root@worker232 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.1.1  netmask 255.255.255.0  broadcast 10.100.1.255
        inet6 fe80::2c66:d4ff:fea4:e11e  prefixlen 64  scopeid 0x20<link>
        ether 2e:66:d4:a4:e1:1e  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 919 (919.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1208 (1.2 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::486d:7aff:fe84:46c2  prefixlen 64  scopeid 0x20<link>
        ether 4a:6d:7a:84:46:c2  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 446 (446.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5  bytes 863 (863.0 B)
        TX errors 0  dropped 31  overruns 0  carrier 0  collisions 0

[root@worker233 ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.2.1  netmask 255.255.255.0  broadcast 10.100.2.255
        inet6 fe80::f4fa:dbff:fef1:6332  prefixlen 64  scopeid 0x20<link>
        ether f6:fa:db:f1:63:32  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 917 (917.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1208 (1.2 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.2.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::5c21:13ff:fee3:8224  prefixlen 64  scopeid 0x20<link>
        ether 5e:21:13:e3:82:24  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 446 (446.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5  bytes 861 (861.0 B)
        TX errors 0  dropped 31  overruns 0  carrier 0  collisions 0
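	The kubectl-based checks above lend themselves to a small daily script; a minimal sketch (the script path and the extra non-Running-Pod check are our own additions, not part of the class checklist):

[root@master231 ~]# cat /usr/local/sbin/k8s-daily-check.sh
#!/bin/bash
# Minimal daily inspection sketch: run the routine checks in one go.
set -u
echo "==== 1. node list ===="
kubectl get nodes
echo "==== 2. master components ===="
kubectl get cs
echo "==== 3. flannel pods ===="
kubectl get pods -o wide -n kube-flannel
# Extra: flag any Pod in the cluster that is not in the Running phase.
echo "==== 4. non-Running pods (all namespaces) ===="
kubectl get pods -A --field-selector=status.phase!=Running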
	Problems you may run into:
		- 1. cni0 and flannel.1 are on different subnets
			Pods on different nodes cannot reach each other.
		- 2. The cni0 device does not exist
			Pods may be unable to reach the outside network, or Pods on the same node may be unable to reach each other.

	If the flannel Pods restart too often or do not run normally:
		1. Delete the resources defined in the manifest
[root@master231 ~]# kubectl delete -f kube-flannel.yml
		2. Remove the virtual NIC devices
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
\rm -rf /var/lib/cni/flannel/* /var/lib/cni/networks/cbr0/* /var/lib/cni/cache/* /etc/cni/net.d/*
systemctl restart kubelet docker
chmod a+w /var/run/docker.sock
systemctl status kubelet docker
		3. Re-create the flannel NICs
[root@master231 ~]# kubectl apply -f kube-flannel.yml

	- If a node is missing the cni0 NIC, it is recommended to create the bridge device by hand; make sure its subnet matches flannel.1:
		1. Assume flannel.1 on master231 is on the 10.100.0.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.0.1/24 dev cni0
		2. Assume flannel.1 on worker232 is on the 10.100.1.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0
		3. Assume flannel.1 on worker233 is on the 10.100.2.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.2.1/24 dev cni0

	Tip:
		Put these commands in a script that runs at boot.
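	A minimal sketch of such a boot-time script for worker232 (the script path, unit name, and subnet are assumptions; adjust the subnet per node as listed above):

[root@worker232 ~]# cat /usr/local/sbin/create-cni0.sh
#!/bin/bash
# Re-create cni0 at boot if it is missing; the subnet must match this node's flannel.1.
ip link show cni0 &>/dev/null && exit 0
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0
[root@worker232 ~]# chmod +x /usr/local/sbin/create-cni0.sh
[root@worker232 ~]# cat /etc/systemd/system/create-cni0.service
[Unit]
Description=Re-create the cni0 bridge if missing
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/create-cni0.sh

[Install]
WantedBy=multi-user.target
[root@worker232 ~]# systemctl enable create-cni0.service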
- K8S resource management: create, view, delete
	1. Create a directory for resource manifests
[root@master231 ~]# mkdir -pv /oldboyedu/manifests/pods
[root@master231 ~]#
[root@master231 ~]# cd /oldboyedu/manifests/pods
[root@master231 pods]#

	2. Structure of a resource manifest
		apiVersion: declares the API version; every resource has its own version.
		kind: specifies the resource type.
		metadata: the resource's metadata, including but not limited to its name, labels, namespace, and annotations.
		spec: the desired running state of the resource.
		status: the actual running state of the resource, maintained by the K8S components.

	3. What is a Pod
		A Pod is simply a group of containers; "pod" literally means a pea pod.
		What is the relationship between a Pod and a container? Put plainly, it is the pea pod and the peas: one Pod can contain multiple containers.
		K8S has no standalone "container" concept; the smallest schedulable unit is the Pod.

	4. Write the resource manifest
[root@master231 pods]# cat 01-pods-xiuxian-single.yaml
# Specify the API version
apiVersion: v1
# Specify the resource type
kind: Pod
# Specify the metadata
metadata:
  # Declare the resource name
  name: xiuxian-v1
# Define the desired state of the resource
spec:
  # Specify which node the Pod is scheduled to
  nodeName: worker232
  # Specify the containers in the Pod
  containers:
    # Specify the container image
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    # Specify the container name
    name: xiuxian
[root@master231 pods]#

	5. Create the resource
[root@master231 pods]# kubectl create -f 01-pods-xiuxian-single.yaml
pod/xiuxian-v1 created
[root@master231 pods]#
[root@master231 pods]# kubectl create -f 01-pods-xiuxian-single.yaml   # create errors out if the resource already exists.
Error from server (AlreadyExists): error when creating "01-pods-xiuxian-single.yaml": pods "xiuxian-v1" already exists
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 01-pods-xiuxian-single.yaml    # apply can be run any number of times without erroring.
pod/xiuxian-v1 unchanged
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 01-pods-xiuxian-single.yaml
pod/xiuxian-v1 unchanged
[root@master231 pods]#

	The difference between the create and apply commands:
		In common:
			When the resource does not exist yet, both can create it.
		Difference:
			When the resource exists, create fails with an "already exists" error, while apply only reports the change (e.g. "unchanged").
		In short: in production we use apply to update resources, and to avoid errors we use apply to create them as well.

	6. View the resource list
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          3s    10.100.1.5   worker232   <none>           <none>
[root@master231 pods]#
	Field descriptions:
		NAME: the Pod name.
		READY: whether the Pod's containers are ready. In "1/1", the right-hand 1 is the number of containers in the Pod and the left-hand 1 is the number of ready containers.
		STATUS: the Pod's running state.
		RESTARTS: the Pod's restart count. Here a "restart" means the old container is deleted and a new container is created, which is not the same as docker's restart policy.
		AGE: how long the resource has been running.
		IP: the Pod's IP address.
		NODE: the node the Pod is running on.

	7. Delete the resource
[root@master231 pods]# kubectl delete -f 01-pods-xiuxian-single.yaml
pod "xiuxian-v1" deleted
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide
No resources found in default namespace.
[root@master231 pods]#

- Running multiple containers in one Pod
	1. Prepare the environment
[root@worker233 ~]# wget http://192.168.16.253/Image/Docker/Docker/Linux/alpine-v3.20.2.tar.gz
[root@worker233 ~]# docker load -i alpine-v3.20.2.tar.gz
Loaded image: alpine:3.20.2
[root@worker233 ~]#
[root@worker233 ~]# docker tag alpine:3.20.2 harbor.oldboyedu.com/oldboyedu-linux/alpine:3.20.2
[root@worker233 ~]#
[root@worker233 ~]# docker push harbor.oldboyedu.com/oldboyedu-linux/alpine:3.20.2
The push refers to repository [harbor.oldboyedu.com/oldboyedu-linux/alpine]
78561cef0761: Pushed
3.20.2: digest: sha256:eddacbc7e24bf8799a4ed3cdcfa50d4b88a323695ad80f317b6629883b2c2a78 size: 528
[root@worker233 ~]#
[root@worker233 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2 harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
[root@worker233 ~]#
[root@worker233 ~]# docker push harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
The push refers to repository [harbor.oldboyedu.com/oldboyedu-web/xiuxian]
4beb452b2498: Pushed
9d5b000ce7c7: Pushed
b8dbe22b95f7: Pushed
c39c1c35e3e8: Pushed
5f66747c8a72: Pushed
15d7cdc64789: Pushed
7fcb75871b21: Pushed
v2: digest: sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e size: 1778
[root@worker233 ~]#

	2. Deploy two containers in one Pod
[root@master231 pods]# cat 02-pods-xiuxian-alpine-multiple.yaml
# Specify the API version
apiVersion: v1
# Specify the resource type
kind: Pod
# Specify the metadata
metadata:
  # Declare the resource name
  name: xiuxian-v2
# Define the desired state of the resource
spec:
  # Specify which node the Pod is scheduled to
  nodeName: worker232
  # Specify the containers in the Pod
  containers:
    # Specify the image of container 0 (computers count from 0)
  - image: harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
    # Specify the name of container 0
    name: xiuxian
    # Specify container 1
  - image: harbor.oldboyedu.com/oldboyedu-linux/alpine:3.20.2
    name: alpine
    # Allocate stdin so the alpine container blocks instead of exiting immediately
    stdin: true
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 02-pods-xiuxian-alpine-multiple.yaml
pod/xiuxian-v2 created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v2   2/2     Running   0          21s   10.100.1.8   worker232   <none>           <none>
[root@master231 pods]#
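	stdin: true is one way to keep a shell-only container like alpine alive; an equivalent, common alternative is to give it a long-running command. A minimal sketch (our own variant for illustration, not one of the class manifests; the Pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2-sleep
spec:
  nodeName: worker232
  containers:
  - image: harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
    name: xiuxian
  - image: harbor.oldboyedu.com/oldboyedu-linux/alpine:3.20.2
    name: alpine
    # Instead of stdin: true, block the container with a long-running process.
    # sleep 3600 keeps it alive for an hour; the kubelet then restarts it
    # (default restartPolicy is Always), so RESTARTS will tick up over time.
    command: ["sleep", "3600"]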
- Running commands inside containers with kubectl exec
	1. View the Pod list
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          40s    10.100.1.9   worker232   <none>           <none>
xiuxian-v2   2/2     Running   0          112s   10.100.1.8   worker232   <none>           <none>
[root@master231 pods]#

	2. View the IP address inside the xiuxian-v1 Pod
[root@master231 pods]# kubectl exec xiuxian-v1 -- ifconfig
eth0      Link encap:Ethernet  HWaddr 0E:DD:73:C9:60:A7
          inet addr:10.100.1.9  Bcast:10.100.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:446 (446.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
[root@master231 pods]#

	3. View the IP address inside the xiuxian-v2 Pod
[root@master231 pods]# kubectl exec xiuxian-v2 -- ifconfig   # if no container is specified, container 0 is used by default.
Defaulted container "xiuxian" out of: xiuxian, alpine
eth0      Link encap:Ethernet  HWaddr 2A:A5:6C:D7:B4:E2
          inet addr:10.100.1.8  Bcast:10.100.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:666 (666.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
[root@master231 pods]#
[root@master231 pods]# kubectl exec xiuxian-v2 -c xiuxian -- ifconfig   # specify the container named xiuxian
eth0      Link encap:Ethernet  HWaddr 2A:A5:6C:D7:B4:E2
          inet addr:10.100.1.8  Bcast:10.100.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:666 (666.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
[root@master231 pods]#
[root@master231 pods]# kubectl exec xiuxian-v2 -c alpine -- ifconfig    # specify the container named alpine
eth0      Link encap:Ethernet  HWaddr 2A:A5:6C:D7:B4:E2
          inet addr:10.100.1.8  Bcast:10.100.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:666 (666.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
[root@master231 pods]#
	Conclusion: multiple containers in the same Pod share the network namespace. Put plainly, they have the same IP.

	4. Attach to a specific container and run commands interactively
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          6m10s   10.100.1.9   worker232   <none>           <none>
xiuxian-v2   2/2     Running   0          7m22s   10.100.1.8   worker232   <none>           <none>
[root@master231 pods]#
[root@master231 pods]# kubectl exec xiuxian-v2 -it -c xiuxian -- sh
/ #
/ # hostname
xiuxian-v2
/ #
/ # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 nginx: master process nginx -g daemon off;
   31 nginx     0:00 nginx: worker process
   32 nginx     0:00 nginx: worker process
   45 root      0:00 sh
   52 root      0:00 ps -ef
/ #
/ # hostname -i
10.100.1.8
/ #
/ # nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
/ #
[root@master231 pods]#
[root@master231 pods]# kubectl exec xiuxian-v2 -it -c alpine -- sh
/ #
/ # hostname
xiuxian-v2
/ #
/ # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 /bin/sh
   13 root      0:00 sh
   20 root      0:00 ps -ef
/ #
/ # hostname -i
10.100.1.8
/ #
/ # nginx -t
sh: nginx: not found
/ #
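	Another way to see the shared network namespace in action: from the alpine container (which has no nginx of its own, as the "nginx: not found" above shows), request 127.0.0.1 and the nginx in the sibling xiuxian container answers. A minimal sketch (assuming the xiuxian image serves HTTP on port 80 and that wget is available via alpine's busybox):

[root@master231 pods]# kubectl exec xiuxian-v2 -c alpine -- wget -qO- 127.0.0.1
# Prints the HTML served by the xiuxian (nginx) container, even though the
# request came from the alpine container: one Pod, one network namespace.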
- Declarative API vs. imperative (responsive) API
	In common:
		Both can create, delete, update, and view resources.
	Difference:
		The declarative API requires writing resource manifests (yaml configuration files); the imperative API needs no manifests and is driven entirely from the command line.

- Case 1: viewing resources
	1. Declarative view
[root@master231 pods]# kubectl get -f 02-pods-xiuxian-alpine-multiple.yaml
NAME         READY   STATUS    RESTARTS   AGE
xiuxian-v2   2/2     Running   0          27m
[root@master231 pods]#
[root@master231 pods]# kubectl get -f 01-pods-xiuxian-single.yaml
NAME         READY   STATUS    RESTARTS   AGE
xiuxian-v1   1/1     Running   0          26m
[root@master231 pods]#
[root@master231 pods]# kubectl get -f 01-pods-xiuxian-single.yaml -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          26m   10.100.1.9   worker232   <none>           <none>
[root@master231 pods]#

	2. Imperative view
[root@master231 pods]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
xiuxian-v1   1/1     Running   0          25m
xiuxian-v2   2/2     Running   0          27m
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          27m   10.100.1.9   worker232   <none>           <none>
xiuxian-v2   2/2     Running   0          28m   10.100.1.8   worker232   <none>           <none>
[root@master231 pods]#

- Case 2: creating resources
	1. Declarative Pod creation
[root@master231 pods]# cat 01-pods-xiuxian-single.yaml
# Specify the API version
apiVersion: v1
# Specify the resource type
kind: Pod
# Specify the metadata
metadata:
  # Declare the resource name
  name: xiuxian-v1
# Define the desired state of the resource
spec:
  # Specify which node the Pod is scheduled to
  nodeName: worker232
  # Specify the containers in the Pod
  containers:
    # Specify the container image
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    # Specify the container name
    name: xiuxian
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 01-pods-xiuxian-single.yaml
pod/xiuxian-v1 created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          5s    10.100.1.10   worker232   <none>           <none>
[root@master231 pods]#

	2. Imperative Pod creation
[root@master231 pods]# kubectl run xixi --image=harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
pod/xixi created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          103s   10.100.1.10   worker232   <none>           <none>
xixi         1/1     Running   0          10s    10.100.2.5    worker233   <none>           <none>
[root@master231 pods]#

	Summary:
		Declarative: after editing the resource manifest, it must be applied before the change takes effect.
		Imperative: operate directly on the command line; changes take effect immediately.
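	The two styles can also be combined: kubectl can render an imperative command into a manifest, which is a handy way to bootstrap yaml files. A minimal sketch (standard kubectl flags; the output file name is our own):

[root@master231 pods]# kubectl run xixi --image=harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2 \
    --dry-run=client -o yaml > 99-pods-xixi-generated.yaml
# Nothing is created in the cluster (--dry-run=client); the generated manifest
# can then be edited and applied declaratively:
[root@master231 pods]# kubectl apply -f 99-pods-xixi-generated.yaml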
- Label management
	1. What is a label
		In K8S everything is a resource; when we want to manage certain resources later, we can filter them by label.
		A label's job is to identify a resource, and labels can be attached to resources.

	2. View resource labels
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          6m46s   10.100.1.10   worker232   <none>           <none>            <none>
xixi         1/1     Running   0          5m13s   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#

	3. Add labels imperatively
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          7m42s   10.100.1.10   worker232   <none>           <none>            <none>
xixi         1/1     Running   0          6m9s    10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#
[root@master231 pods]# kubectl label -f 01-pods-xiuxian-single.yaml school=oldboyedu class=linux92
pod/xiuxian-v1 labeled
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          8m46s   10.100.1.10   worker232   <none>           <none>            class=linux92,school=oldboyedu
xixi         1/1     Running   0          7m13s   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#
[root@master231 pods]# kubectl label pods xiuxian-v1 address=ShaHe
pod/xiuxian-v1 labeled
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          9m13s   10.100.1.10   worker232   <none>           <none>            address=ShaHe,class=linux92,school=oldboyedu
xixi         1/1     Running   0          7m40s   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#

	4. Delete labels imperatively
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          9m31s   10.100.1.10   worker232   <none>           <none>            address=ShaHe,class=linux92,school=oldboyedu
xixi         1/1     Running   0          7m58s   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#
[root@master231 pods]# kubectl label pods xiuxian-v1 address-
pod/xiuxian-v1 unlabeled
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          10m     10.100.1.10   worker232   <none>           <none>            class=linux92,school=oldboyedu
xixi         1/1     Running   0          8m31s   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#
[root@master231 pods]# kubectl label -f 01-pods-xiuxian-single.yaml school-
pod/xiuxian-v1 unlabeled
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          10m     10.100.1.10   worker232   <none>           <none>            class=linux92
xixi         1/1     Running   0          8m58s   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#

	5. Modify labels imperatively
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          12m   10.100.1.10   worker232   <none>           <none>            class=linux92
xixi         1/1     Running   0          10m   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#
[root@master231 pods]# kubectl label -f 01-pods-xiuxian-single.yaml class=Linux92 --overwrite
pod/xiuxian-v1 labeled
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          12m   10.100.1.10   worker232   <none>           <none>            class=Linux92
xixi         1/1     Running   0          10m   10.100.2.5    worker233   <none>           <none>            run=xixi
[root@master231 pods]#
[root@master231 pods]# kubectl label pods xixi run=haha --overwrite
pod/xixi labeled
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          12m   10.100.1.10   worker232   <none>           <none>            class=Linux92
xixi         1/1     Running   0          11m   10.100.2.5    worker233   <none>           <none>            run=haha
[root@master231 pods]#

	6. Imperative label management is not persistent
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          13m   10.100.1.10   worker232   <none>           <none>            class=Linux92
xixi         1/1     Running   0          11m   10.100.2.5    worker233   <none>           <none>            run=haha
[root@master231 pods]#
[root@master231 pods]# kubectl delete -f 01-pods-xiuxian-single.yaml
pod "xiuxian-v1" deleted
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 01-pods-xiuxian-single.yaml
pod/xiuxian-v1 created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          2s    10.100.1.11   worker232   <none>           <none>            <none>
xixi         1/1     Running   0          12m   10.100.2.5    worker233   <none>           <none>            run=haha
[root@master231 pods]#

	7. Declarative label management
[root@master231 pods]# cat 03-pods-labels.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-labels
  # Attach labels to the resource
  labels:
    # Custom key-value pairs for this resource
    school: oldboyedu
    class: linux92
spec:
  nodeName: worker232
  containers:
  - image: harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
    name: xiuxian
  - image: harbor.oldboyedu.com/oldboyedu-linux/alpine:3.20.2
    name: alpine
    stdin: true
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 03-pods-labels.yaml
pod/xiuxian-labels created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-labels   2/2     Running   0          9s    10.100.1.12   worker232   <none>           <none>            class=linux92,school=oldboyedu
...
[root@master231 pods]#

	Tip:
		The label key-value pairs in the manifest can be modified or deleted, but the manifest must be re-applied for the change to take effect.
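	Besides --show-labels, individual label keys can be shown as dedicated columns with -L; a minimal sketch (standard kubectl flag; the chosen keys are just the ones used above):

[root@master231 pods]# kubectl get pods -L class,run
# Adds CLASS and RUN columns showing each Pod's value for those label keys,
# without dumping the whole label set the way --show-labels does.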
- Filtering resources by label in practice
	1. Filter Pod resources by label
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME             READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-labels   2/2     Running   0          4m24s   10.100.1.12   worker232   <none>           <none>            class=linux92,school=oldboyedu
xiuxian-v1       1/1     Running   0          6m51s   10.100.1.11   worker232   <none>           <none>            class=Linux92
xixi             1/1     Running   0          18m     10.100.2.5    worker233   <none>           <none>            run=haha
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels -l class
NAME             READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-labels   2/2     Running   0          4m43s   10.100.1.12   worker232   <none>           <none>            class=linux92,school=oldboyedu
xiuxian-v1       1/1     Running   0          7m10s   10.100.1.11   worker232   <none>           <none>            class=Linux92
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels -l class=Linux92
NAME         READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          9m1s   10.100.1.11   worker232   <none>           <none>            class=Linux92
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -o wide --show-labels -l class!=Linux92
NAME             READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-labels   2/2     Running   0          7m7s   10.100.1.12   worker232   <none>           <none>            class=linux92,school=oldboyedu
xixi             1/1     Running   0          21m    10.100.2.5    worker233   <none>           <none>            run=haha
[root@master231 pods]#

	2. Filter node resources by label
[root@master231 pods]# kubectl get nodes --show-labels
NAME        STATUS   ROLES                  AGE   VERSION    LABELS
master231   Ready    control-plane,master   19h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
worker232   Ready    <none>                 19h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux
worker233   Ready    <none>                 19h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux
[root@master231 pods]#
[root@master231 pods]# kubectl get nodes -l kubernetes.io/hostname=worker232 --show-labels
NAME        STATUS   ROLES    AGE   VERSION    LABELS
worker232   Ready    <none>   19h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux
[root@master231 pods]#

	3. Delete Pod resources by label
[root@master231 pods]# kubectl get pods --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
xiuxian-labels   2/2     Running   0          9m12s   class=linux92,school=oldboyedu
xiuxian-v1       1/1     Running   0          11m     class=Linux92
xixi             1/1     Running   0          23m     run=haha
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -l class
NAME             READY   STATUS    RESTARTS   AGE
xiuxian-labels   2/2     Running   0          9m27s
xiuxian-v1       1/1     Running   0          11m
[root@master231 pods]#
[root@master231 pods]# kubectl delete pods -l class
pod "xiuxian-labels" deleted
pod "xiuxian-v1" deleted
[root@master231 pods]#
[root@master231 pods]# kubectl get pods --show-labels
NAME   READY   STATUS    RESTARTS   AGE   LABELS
xixi   1/1     Running   0          25m   run=haha
[root@master231 pods]#
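	Label selectors also support a set-based syntax, which is useful when a key has several accepted values (note the two spellings linux92/Linux92 above). A minimal sketch (standard selector grammar; quote it so the shell leaves the parentheses alone):

[root@master231 pods]# kubectl get pods -l 'class in (linux92,Linux92)' --show-labels
[root@master231 pods]# kubectl get pods -l 'class notin (linux92)' --show-labels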
- Viewing detailed resource information, similar to the "docker inspect" command
	1. View detailed information of a Pod resource
[root@master231 pods]# kubectl describe pods xixi
Name:         xixi
Namespace:    default
Priority:     0
Node:         worker233/10.0.0.233
Start Time:   Tue, 30 Jul 2024 11:41:58 +0800
Labels:       run=haha
Annotations:  <none>
Status:       Running
IP:           10.100.2.5
IPs:
  IP:  10.100.2.5
Containers:
  xixi:
    Container ID:   docker://488b484e4eedae22e105c7396acd3344889c4ab149dd47413398b06c8f8341ff
    Image:          harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
    Image ID:       docker-pullable://harbor.oldboyedu.com/oldboyedu-web/xiuxian@sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 30 Jul 2024 11:41:58 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4d77 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-v4d77:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27m   default-scheduler  Successfully assigned default/xixi to worker233
  Normal  Pulled     27m   kubelet            Container image "harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2" already present on machine
  Normal  Created    27m   kubelet            Created container xixi
  Normal  Started    27m   kubelet            Started container xixi
[root@master231 pods]#
	2. View detailed information of a node
[root@master231 pods]# kubectl describe nodes master231
Name:               master231
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master231
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"0e:d6:8d:03:f8:a6"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.231
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 29 Jul 2024 16:38:14 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master231
  AcquireTime:     <unset>
  RenewTime:       Tue, 30 Jul 2024 12:11:54 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 29 Jul 2024 17:30:22 +0800   Mon, 29 Jul 2024 17:30:22 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 30 Jul 2024 12:08:08 +0800   Mon, 29 Jul 2024 16:38:10 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 30 Jul 2024 12:08:08 +0800   Mon, 29 Jul 2024 16:38:10 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 30 Jul 2024 12:08:08 +0800   Mon, 29 Jul 2024 16:38:10 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 30 Jul 2024 12:08:08 +0800   Mon, 29 Jul 2024 17:12:38 +0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.231
  Hostname:    master231
Capacity:
  cpu:                2
  ephemeral-storage:  101590008Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3969504Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  93625351218
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3867104Ki
  pods:               110
System Info:
  Machine ID:                 c258f9d8eb0c49488b9fd461aa25739c
  System UUID:                8dd84d56-5c48-da60-df1c-9b854e5d1db4
  Boot ID:                    abedd681-a844-4be9-8cd3-b2d0ba9073d2
  Kernel Version:             5.15.0-91-generic
  OS Image:                   Ubuntu 22.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.24
  Kubelet Version:            v1.23.17
  Kube-Proxy Version:         v1.23.17
PodCIDR:                      10.100.0.0/24
PodCIDRs:                     10.100.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace     Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                                ------------  ----------  ---------------  -------------  ---
  kube-flannel  kube-flannel-ds-ljktm               100m (5%)     0 (0%)      50Mi (1%)        0 (0%)         18h
  kube-system   coredns-6d8c4cb4d-2l5jw             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19h
  kube-system   coredns-6d8c4cb4d-qs4pd             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19h
  kube-system   etcd-master231                      100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19h
  kube-system   kube-apiserver-master231            250m (12%)    0 (0%)      0 (0%)           0 (0%)         19h
  kube-system   kube-controller-manager-master231   200m (10%)    0 (0%)      0 (0%)           0 (0%)         19h
  kube-system   kube-proxy-2vxh9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
  kube-system   kube-scheduler-master231            100m (5%)     0 (0%)      0 (0%)           0 (0%)         19h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  0 (0%)
  memory             290Mi (7%)  340Mi (9%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
[root@master231 pods]#

	3. View information for all nodes
[root@master231 pods]# kubectl describe nodes
Name:               master231
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master231
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"0e:d6:8d:03:f8:a6"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.231
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 29 Jul 2024 16:38:14 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master231
  AcquireTime:     <unset>
  RenewTime:       Tue, 30 Jul 2024 12:13:46 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 29 Jul 2024 17:30:22 +0800   Mon, 29 Jul 2024 17:30:22 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 30 Jul 2024 12:13:15 +0800   Mon, 29 Jul 2024 16:38:10 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 30 Jul 2024 12:13:15 +0800   Mon, 29 Jul 2024 16:38:10 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 30 Jul 2024 12:13:15 +0800   Mon, 29 Jul 2024 16:38:10 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 30 Jul 2024 12:13:15 +0800   Mon, 29 Jul 2024 17:12:38 +0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.231
  Hostname:    master231
Capacity:
  cpu:                2
  ephemeral-storage:  101590008Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3969504Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  93625351218
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3867104Ki
  pods:               110
System Info:
  Machine ID:                 c258f9d8eb0c49488b9fd461aa25739c
  System UUID:                8dd84d56-5c48-da60-df1c-9b854e5d1db4
  Boot ID:                    abedd681-a844-4be9-8cd3-b2d0ba9073d2
  Kernel Version:             5.15.0-91-generic
  OS Image:                   Ubuntu 22.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.24
  Kubelet Version:            v1.23.17
  Kube-Proxy Version:         v1.23.17
PodCIDR:                      10.100.0.0/24
PodCIDRs:                     10.100.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace     Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                                ------------  ----------  ---------------  -------------  ---
  kube-flannel  kube-flannel-ds-ljktm               100m (5%)     0 (0%)      50Mi (1%)        0 (0%)         19h
  kube-system   coredns-6d8c4cb4d-2l5jw             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19h
  kube-system   coredns-6d8c4cb4d-qs4pd             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19h
  kube-system   etcd-master231                      100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19h
  kube-system   kube-apiserver-master231            250m (12%)    0 (0%)      0 (0%)           0 (0%)         19h
  kube-system   kube-controller-manager-master231   200m (10%)    0 (0%)      0 (0%)           0 (0%)         19h
  kube-system   kube-proxy-2vxh9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
  kube-system   kube-scheduler-master231            100m (5%)     0 (0%)      0 (0%)           0 (0%)         19h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  0 (0%)
  memory             290Mi (7%)  340Mi (9%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>

Name:               worker232
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker232
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"4a:6d:7a:84:46:c2"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.232
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 29 Jul 2024 16:48:04 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  worker232
  AcquireTime:     <unset>
  RenewTime:       Tue, 30 Jul 2024 12:13:45 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 29 Jul 2024 17:30:21 +0800   Mon, 29 Jul 2024 17:30:21 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 30 Jul 2024 12:12:52 +0800   Mon, 29 Jul 2024 16:48:04 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 30 Jul 2024 12:12:52 +0800   Mon, 29 Jul 2024 16:48:04 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 30 Jul 2024 12:12:52 +0800   Mon, 29 Jul 2024 16:48:04 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 30 Jul 2024 12:12:52 +0800   Mon, 29 Jul 2024 17:12:43 +0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.232
  Hostname:    worker232
Capacity:
  cpu:                2
  ephemeral-storage:  101590008Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3969460Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  93625351218
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3867060Ki
  pods:               110
System Info:
  Machine ID:                 c258f9d8eb0c49488b9fd461aa25739c
  System UUID:                9bf84d56-fc02-7e3b-3a0d-65c8710f2803
  Boot ID:                    407b7817-b949-4f96-a6ba-cd869077f638
  Kernel Version:             5.15.0-91-generic
  OS Image:                   Ubuntu 22.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.24
  Kubelet Version:            v1.23.17
  Kube-Proxy Version:         v1.23.17
PodCIDR:                      10.100.1.0/24
PodCIDRs:                     10.100.1.0/24
Non-terminated Pods:          (2 in total)
  Namespace     Name                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                   ------------  ----------  ---------------  -------------  ---
  kube-flannel  kube-flannel-ds-kst7g  100m (5%)     0 (0%)      50Mi (1%)        0 (0%)         19h
  kube-system   kube-proxy-65z9n       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  0 (0%)
  memory             50Mi (1%)  0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>

Name:               worker233
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker233
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"5e:21:13:e3:82:24"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.233
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 29 Jul 2024 16:49:20 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  worker233
  AcquireTime:     <unset>
  RenewTime:       Tue, 30 Jul 2024 12:13:43 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 29 Jul 2024 17:30:54 +0800   Mon, 29 Jul 2024 17:30:54 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 30 Jul 2024 12:13:29 +0800   Mon, 29 Jul 2024 16:49:20 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 30 Jul 2024 12:13:29 +0800   Mon, 29 Jul 2024 16:49:20 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 30 Jul 2024 12:13:29 +0800   Mon, 29 Jul 2024 16:49:20 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 30 Jul 2024 12:13:29 +0800   Mon, 29 Jul 2024 17:30:58 +0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.233
  Hostname:    worker233
Capacity:
  cpu:                2
  ephemeral-storage:  101590008Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3969524Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  93625351218
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3867124Ki
  pods:               110
System Info:
  Machine ID:                 c258f9d8eb0c49488b9fd461aa25739c
  System UUID:                b9514d56-aa5b-e934-917c-8959b868dd14
  Boot ID:                    06ba4271-e99f-43f4-8052-00175e55cd1c
  Kernel Version:             5.15.0-91-generic
  OS Image:                   Ubuntu 22.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.24
  Kubelet Version:            v1.23.17
  Kube-Proxy Version:         v1.23.17
PodCIDR:                      10.100.2.0/24
PodCIDRs:                     10.100.2.0/24
Non-terminated Pods:          (3 in total)
  Namespace     Name                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                   ------------  ----------  ---------------  -------------  ---
  default       xixi                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
  kube-flannel  kube-flannel-ds-ckkbk  100m (5%)     0 (0%)      50Mi (1%)        0 (0%)         19h
  kube-system   kube-proxy-rmn84       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  0 (0%)
  memory             50Mi (1%)  0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:              <none>
[root@master231 pods]#

	4. View detailed information of all Pods
[root@master231 pods]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          5s    10.100.1.13   worker232   <none>           <none>
xixi         1/1     Running   0          32m   10.100.2.5    worker233   <none>           <none>
[root@master231 pods]#
[root@master231 pods]# kubectl describe pods
Name:         xiuxian-v1
Namespace:    default
Priority:     0
Node:         worker232/10.0.0.232
Start Time:   Tue, 30 Jul 2024 12:14:35 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.100.1.13
IPs:
  IP:  10.100.1.13
Containers:
  xiuxian:
    Container ID:   docker://de6b3cc3c759b7df123ae1119880a474ed6f52b59ff95580d3eee92d549393b6
    Image:          registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    Image ID:       docker-pullable://registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps@sha256:3bee216f250cfd2dbda1744d6849e27118845b8f4d55dda3ca3c6c1227cc2e5c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 30 Jul 2024 12:14:35 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpchz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-lpchz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulled   9s    kubelet  Container image "registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1" already present on machine
  Normal  Created  9s    kubelet  Created container xiuxian
  Normal  Started  9s    kubelet  Started container xiuxian

Name:         xixi
Namespace:    default
Priority:     0
Node:         worker233/10.0.0.233
Start Time:   Tue, 30 Jul 2024 11:41:58 +0800
Labels:       run=haha
Annotations:  <none>
Status:       Running
IP:           10.100.2.5
IPs:
  IP:  10.100.2.5
Containers:
  xixi:
    Container ID:   docker://488b484e4eedae22e105c7396acd3344889c4ab149dd47413398b06c8f8341ff
    Image:          harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
    Image ID:       docker-pullable://harbor.oldboyedu.com/oldboyedu-web/xiuxian@sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 30 Jul 2024 11:41:58 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4d77 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-v4d77:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  32m   default-scheduler  Successfully assigned default/xixi to worker233
  Normal  Pulled     32m   kubelet            Container image "harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2" already present on machine
  Normal  Created    32m   kubelet            Created container xixi
  Normal  Started    32m   kubelet            Started container xixi
[root@master231 pods]#

	5. Filter the resources to describe by label
[root@master231 pods]# kubectl get pods -o wide --show-labels
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
xiuxian-v1   1/1     Running   0          59s   10.100.1.13   worker232   <none>           <none>            <none>
xixi         1/1     Running   0          33m   10.100.2.5    worker233   <none>           <none>            run=haha
[root@master231 pods]#
[root@master231 pods]# kubectl describe pods -l run=haha
Name:         xixi
Namespace:    default
Priority:     0
Node:         worker233/10.0.0.233
Start Time:   Tue, 30 Jul 2024 11:41:58 +0800
Labels:       run=haha
Annotations:  <none>
Status:       Running
IP:           10.100.2.5
IPs:
  IP:  10.100.2.5
Containers:
  xixi:
    Container ID:   docker://488b484e4eedae22e105c7396acd3344889c4ab149dd47413398b06c8f8341ff
    Image:          harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2
    Image ID:       docker-pullable://harbor.oldboyedu.com/oldboyedu-web/xiuxian@sha256:3ac38ee6161e11f2341eda32be95dcc6746f587880f923d2d24a54c3a525227e
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 30 Jul 2024 11:41:58 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4d77 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-v4d77:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  33m   default-scheduler  Successfully assigned default/xixi to worker233
  Normal  Pulled     33m   kubelet            Container image "harbor.oldboyedu.com/oldboyedu-web/xiuxian:v2" already present on machine
  Normal  Created    33m   kubelet            Created container xixi
  Normal  Started    33m   kubelet            Started container xixi
[root@master231 pods]#
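	describe output is meant for humans; when a script needs a single field, -o jsonpath is handier. A minimal sketch (standard kubectl syntax; the chosen fields are just examples, values taken from the xixi Pod above):

[root@master231 pods]# kubectl get pods xixi -o jsonpath='{.status.podIP}{"\n"}'
10.100.2.5
[root@master231 pods]# kubectl get pods xixi -o jsonpath='{.spec.nodeName}{"\n"}'
worker233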
- Switching a Pod's network namespace to host mode
[root@master231 pods]# cat 04-pods-hostNetwork.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-hostnetwork
  labels:
    apps: xiuxian
    school: oldboyedu
    class: linux92
spec:
  # Use the host's network
  hostNetwork: true
  nodeName: worker232
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    name: xiuxian
[root@master231 pods]#
[root@master231 pods]# kubectl apply -f 04-pods-hostNetwork.yaml
pod/xiuxian-hostnetwork created
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -l apps=xiuxian --show-labels
NAME                  READY   STATUS    RESTARTS   AGE   LABELS
xiuxian-hostnetwork   1/1     Running   0          16s   apps=xiuxian,class=linux92,school=oldboyedu
[root@master231 pods]#
[root@master231 pods]# kubectl get pods -l apps=xiuxian -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
xiuxian-hostnetwork   1/1     Running   0          24s   10.0.0.232   worker232   <none>           <none>
[root@master231 pods]#
[root@master231 pods]# curl 10.0.0.232
...(HTML response from the xiuxian app; page title "yinzhengjie apps v1", heading "凡人修仙传 v1")...
[root@master231 pods]#
	Note that because the Pod uses the host network, its IP is the node IP 10.0.0.232 rather than a Pod CIDR address.
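	Since hostNetwork: true puts the Pod in the node's network namespace, the nginx inside it binds directly to port 80 on worker232. A minimal verification sketch on the node (assuming ss from iproute2 is installed, as it is by default on Ubuntu 22.04):

[root@worker232 ~]# ss -lnt | grep ':80 '
# A LISTEN entry on *:80 here belongs to the nginx of the xiuxian-hostnetwork Pod;
# consequently only one hostNetwork Pod per node can bind any given port.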
- Review of today's content:
	- Study methods *****
	- Commonly used K8S resources
	- K8S cluster inspection *****
	- Structure of a resource manifest
		apiVersion
		kind
		metadata
		spec
		status
	- Running a single container in one Pod
	- Running multiple containers in one Pod
	- The difference between the declarative and imperative styles
	- Label management
	- Resource CRUD:
		- kubectl get *****
		- kubectl apply *****
		- kubectl create *****
		- kubectl delete *****
	Other commands:
		- kubectl label *****
		- kubectl exec *****
		- kubectl run **
		- kubectl describe *****

Today's homework:
	- Finish all in-class exercises and organize them into a mind map;
	- Upload the "oldboyedu-games-v0.6.tar.gz" game image to harbor and deploy it with K8S;
Extended homework:
	- Pick any 10 game images from "oldboyedu-games-v0.6.tar.gz" and push them to harbor in one step with docker-compose, numbering the ports from 81-90; then deploy them to the K8S cluster so that they are reachable from Windows.