Having studied Docker earlier, I wanted to follow it up with a systematic look at Kubernetes.
Reference: https://www.kubernetes.org.cn/k8s
Kubernetes is abbreviated k8s because the 8 letters "ubernete" sit between the "k" and the "s". It is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updating, and maintenance. Open-sourced by Google, it is a container orchestration engine supporting automated deployment, large-scale scalability, and containerized application management.
The modern way to deploy applications is via containers. Containers are isolated from one another, each has its own filesystem, processes in different containers do not interfere with each other, and compute resources can be partitioned between them. Compared with virtual machines, containers deploy quickly, and because they are decoupled from the underlying infrastructure and the host filesystem, they can be migrated across clouds and across operating-system versions.
When deploying an application in production, you usually run multiple instances of it so that requests can be load-balanced. In k8s we can create multiple containers, run one application instance in each, and then rely on the built-in load-balancing strategy to manage, discover, and access this group of instances.
Key features of k8s:
1. Automatic bin packing: automatically places application containers based on the resource requirements of the application's runtime environment.
2. Self-healing: restarts containers that fail; when a node has problems, its containers are redeployed and rescheduled elsewhere.
3. Horizontal scaling: service discovery and load balancing are provided by Kubernetes itself.
4. Rolling updates: as the application changes, updates the application running in its containers one at a time or in batches.
5. Version rollback.
6. Secret and configuration management: deploy and update secrets and application configuration without rebuilding images, similar to hot deployment.
7. Storage orchestration: automatically mounts storage systems for applications, which is especially important for persisting the data of stateful applications. Storage can come from local directories, network storage, public cloud storage services, and so on.
8. Batch execution: provides one-off and scheduled jobs, covering batch data processing and analysis scenarios.
1. Cluster architecture
(1) master (control-plane) node: schedules and manages the cluster, and receives operation requests from users outside the cluster.
apiserver: the single entry point for all resource operations; provides authentication, authorization, access control, API registration and discovery; exposes a RESTful interface and hands state off to etcd for storage.
scheduler: node scheduling; selects the node on which to place an application, assigning Pods to machines according to the configured scheduling policy.
controller manager: maintains the state of the cluster, e.g. failure detection, auto-scaling, rolling updates; each resource type has a corresponding controller.
etcd: the storage system that holds cluster data, such as state data and pod data.
(2) worker node: runs the user's application containers.
kubelet: in short, the master's agent on each node, managing container operations on that machine. It maintains the container lifecycle and also manages volumes (CVI) and networking (CNI).
kube-proxy: the network proxy, handling load balancing and related operations; provides in-cluster service discovery and load balancing for Services.
2. Core concepts
(1) pod: the smallest deployable unit in k8s. A pod can contain multiple containers, i.e. it is a group of containers; containers inside a pod share the network. A pod's lifetime is short: after redeployment it is a new pod.
Pods are the foundation of every workload type in a k8s cluster; think of them as little robots running in the cluster, with different robot types for different kinds of workload. The workloads in k8s currently fall into long-running, batch, node-daemon, and stateful-application categories, handled respectively by the Deployment, Job, DaemonSet, and PetSet (the predecessor of StatefulSet) controllers.
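To make the pod concept concrete, here is a minimal sketch of a multi-container pod manifest (the names and images are illustrative, not part of the original setup):

```yaml
# Minimal pod with two containers; both share the pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                # main container
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox              # helper container; can reach nginx on localhost:80
    command: ["sh", "-c", "sleep 3600"]
```

Because both containers share one network namespace, the sidecar reaches nginx at localhost:80, which is exactly the "containers inside a pod share the network" behavior described above.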
(2) Replication Controller (RC):
RC is the earliest API object in k8s for keeping Pods highly available. It monitors running Pods to ensure the cluster runs the specified number of replicas, whether that number is one or many: if fewer than the target are running, RC starts new replicas; if more, it kills the surplus. Even with a target of one, running a Pod through an RC is wiser than running it directly, because the RC keeps its high-availability guarantee: there will always be one Pod running. RC is an early k8s concept and applies only to long-running workloads, e.g. keeping a highly available web service up.
(3) Service: defines access rules for a group of pods. Each Service maps to a virtual IP that is valid inside the cluster, and the service is reached through that virtual IP from within the cluster.
A service defines the rules for a unified access entry point, while a controller creates the pods and handles deployment.
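To illustrate the split between controller and service, here is a hedged sketch of a Deployment plus a Service (names, image, and replica count are illustrative):

```yaml
# The Deployment (controller) keeps 3 pod replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
# The Service is the unified access entry: one virtual IP in front of all pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                     # routes traffic to pods labeled app=web
  ports:
  - port: 80
    targetPort: 80
```

The Service's selector matches the pod labels that the Deployment's template stamps onto each replica; that label match is the whole link between the two objects.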
3. Cluster setup options
There are currently two main ways to deploy k8s:
(1) kubeadm
kubeadm is a k8s deployment tool providing kubeadm init and kubeadm join for standing up a cluster quickly. Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/
(2) Binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble the cluster.
kubeadm is simpler, but it hides many details, which can make troubleshooting harder. Deploying from binary packages teaches you a lot about how the pieces work and helps with later maintenance.
We will build a simple cluster with one master and two nodes. Every machine needs outbound internet access to download dependencies. The machines and IPs are:
k8smaster1  192.168.13.103
k8snode1    192.168.13.104
k8snode2    192.168.13.105
All three machines need the steps below; I used a virtual machine and cloned it.
1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld  # check firewall status
2. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
3. Disable swap
free -g  # check partition status
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
4. Set the hostname
hostnamectl set-hostname <hostname>
5. Change the IP address to a static IP (note that a static IP also needs DNS configured; see the earlier RocketMQ cluster post)
vim /etc/sysconfig/network-scripts/ifcfg-ens33
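As a sketch, the relevant fields of ifcfg-ens33 for a static IP look like this (the gateway and DNS values are assumptions for the 192.168.13.0/24 network used here; adjust to your environment):

```
TYPE=Ethernet
BOOTPROTO=static          # static instead of dhcp
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.13.103     # the master's IP; use .104/.105 on the nodes
NETMASK=255.255.255.0
GATEWAY=192.168.13.1      # assumed gateway
DNS1=114.114.114.114      # assumed DNS server
```

After editing, restart the network service for the change to take effect.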
6. Synchronize the time
yum install ntpdate -y
ntpdate time.windows.com
7. On the master node, edit /etc/hosts so the hostnames are reachable
cat >> /etc/hosts << EOF
192.168.13.103 k8smaster1
192.168.13.104 k8snode1
192.168.13.105 k8snode2
EOF
8. Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
9. Install Docker/kubeadm/kubelet on all nodes
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
(1) Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
(2) Configure the Docker registry mirror and add the Aliyun YUM repository
Set the registry mirror:
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Add the yum repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(3) Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
After installation you can verify the versions:
[root@k8smaster1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:56:30Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster1 ~]# kubelet --version
Kubernetes v1.18.0
1. On the master node, run:
kubeadm init --apiserver-advertise-address=192.168.13.103 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
Here --apiserver-advertise-address must be set to the master node's IP; the second flag points image pulls at the Aliyun mirror registry; the third pins the Kubernetes version; and the last two set the internal service and pod CIDRs, which only need to avoid clashing with the existing network. If this fails, add --v=6 for detailed logs; the kubeadm parameters are explained in detail in the official docs.
This command pulls a series of Docker images. You can open another terminal and watch them with docker images and docker ps:
[root@k8smaster1 ~]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0   43940c34f24f   21 months ago   117MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0   a31f78c7c8ce   21 months ago   95.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0   74060cea7f70   21 months ago   173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0   d3e55153f52f   21 months ago   162MB
registry.aliyuncs.com/google_containers/pause                     3.2       80d28bedfe5d   23 months ago   683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7     67da37a9a360   23 months ago   43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   2 years ago     288MB
[root@k8smaster1 ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED         STATUS         PORTS   NAMES
3877168ddb09   43940c34f24f                                        "/usr/local/bin/kube…"   3 minutes ago   Up 3 minutes           k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago   Up 3 minutes           k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7   303ce5db0e90                                        "etcd --advertise-cl…"   3 minutes ago   Up 3 minutes           k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169   a31f78c7c8ce                                        "kube-scheduler --au…"   3 minutes ago   Up 3 minutes           k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2   d3e55153f52f                                        "kube-controller-man…"   3 minutes ago   Up 3 minutes           k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9   74060cea7f70                                        "kube-apiserver --ad…"   3 minutes ago   Up 3 minutes           k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago   Up 3 minutes           k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago   Up 3 minutes           k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago   Up 3 minutes           k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago   Up 3 minutes           k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
Once the download finishes, the main console prints the following (seeing "successfully" means it worked):
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
Set up the kubectl tool by running the commands from the success message above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check:
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8smaster1   NotReady   master   7m17s   v1.18.0
[root@k8smaster1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
On k8snode1 and k8snode2, add each node to the cluster by running the kubeadm join command printed by kubeadm init (the one with the token):
kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
Note that this must be the exact kubeadm join command printed on your own master's console, because the token differs per cluster. The default token is valid for 24 hours; once it expires it can no longer be used, and a new one must be created, e.g. with kubeadm token create --print-join-command (see the kubeadm token commands in the official docs).
A successful join prints a log like this:
[root@k8snode1 ~]# kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
>     --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
W0108 21:20:24.380628   25524 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "k8snode1" could not be reached
	[WARNING Hostname]: hostname "k8snode1": lookup k8snode1 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Finally, list the nodes from the master (the cluster now looks like this):
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8smaster1   NotReady   master   69m     v1.18.0
k8snode1     NotReady   <none>   49m     v1.18.0
k8snode2     NotReady   <none>   4m7s    v1.18.0
The status is NotReady because a network plugin still needs to be installed.
(If the flannel image cannot be pulled, you can use sed to rewrite the manifest to point at a Docker Hub mirror.) Run the following on the master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system
After the second command, wait until all the components show Running status, then check the cluster state again:
[root@k8smaster1 ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-stfqz             1/1     Running   0          106m
coredns-7ff77c879f-vhwr7             1/1     Running   0          106m
etcd-k8smaster1                      1/1     Running   0          107m
kube-apiserver-k8smaster1            1/1     Running   0          107m
kube-controller-manager-k8smaster1   1/1     Running   0          107m
kube-flannel-ds-9bx4w                1/1     Running   0          5m31s
kube-flannel-ds-qzqjq                1/1     Running   0          5m31s
kube-flannel-ds-tldt5                1/1     Running   0          5m31s
kube-proxy-6vcvj                     1/1     Running   1          86m
kube-proxy-hn4gx                     1/1     Running   0          106m
kube-proxy-qzwh6                     1/1     Running   0          41m
kube-scheduler-k8smaster1            1/1     Running   0          107m
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8smaster1   Ready    master   107m   v1.18.0
k8snode1     Ready    <none>   86m    v1.18.0
k8snode2     Ready    <none>   41m    v1.18.0
Test the cluster by deploying an nginx service:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
The resulting output:
[root@k8smaster1 ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-cnj62   1/1     Running   0          3m5s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        113m
service/nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   2m40s
Test: the service can be reached from any of the hosts on NodePort 30951:
curl http://192.168.13.103:30951/
curl http://192.168.13.104:30951/
curl http://192.168.13.105:30951/
Check the Docker containers on each of the three machines:
1. k8smaster1
[root@k8smaster1 ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED          STATUS          PORTS   NAMES
e71930a745f3   67da37a9a360                                        "/coredns -conf /etc…"   14 minutes ago   Up 14 minutes           k8s_coredns_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
5aaacb75700b   67da37a9a360                                        "/coredns -conf /etc…"   14 minutes ago   Up 14 minutes           k8s_coredns_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
756d66c75a56   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago   Up 14 minutes           k8s_POD_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
658b02e25f89   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago   Up 14 minutes           k8s_POD_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
8a6f86753098   404fc3ab6749                                        "/opt/bin/flanneld -…"   14 minutes ago   Up 14 minutes           k8s_kube-flannel_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
b047ca53a8fe   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago   Up 14 minutes           k8s_POD_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
3877168ddb09   43940c34f24f                                        "/usr/local/bin/kube…"   2 hours ago      Up 2 hours              k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago      Up 2 hours              k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7   303ce5db0e90                                        "etcd --advertise-cl…"   2 hours ago      Up 2 hours              k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169   a31f78c7c8ce                                        "kube-scheduler --au…"   2 hours ago      Up 2 hours              k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2   d3e55153f52f                                        "kube-controller-man…"   2 hours ago      Up 2 hours              k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9   74060cea7f70                                        "kube-apiserver --ad…"   2 hours ago      Up 2 hours              k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago      Up 2 hours              k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago      Up 2 hours              k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago      Up 2 hours              k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago      Up 2 hours              k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2. k8snode1
[root@k8snode1 ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED          STATUS          PORTS   NAMES
8189b507fc4a   404fc3ab6749                                        "/opt/bin/flanneld -…"   10 minutes ago   Up 10 minutes           k8s_kube-flannel_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
f8e8103639c1   43940c34f24f                                        "/usr/local/bin/kube…"   10 minutes ago   Up 10 minutes           k8s_kube-proxy_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1
6675466fcc0e   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 11 minutes ago   Up 10 minutes           k8s_POD_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
51d248df0e8c   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 11 minutes ago   Up 10 minutes           k8s_POD_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1
3. k8snode2
[root@k8snode2 ~]# docker ps
CONTAINER ID   IMAGE                                                COMMAND                  CREATED             STATUS             PORTS   NAMES
d8bbbe754ebc   nginx                                                "/docker-entrypoint.…"   4 minutes ago       Up 4 minutes               k8s_nginx_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
04fbdd617724   registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 7 minutes ago       Up 7 minutes               k8s_POD_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
e9dc459f9664   404fc3ab6749                                         "/opt/bin/flanneld -…"   15 minutes ago      Up 15 minutes              k8s_kube-flannel_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
f1d0312d2308   registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 15 minutes ago      Up 15 minutes              k8s_POD_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
d6bae886cb61   registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   About an hour ago   Up About an hour           k8s_kube-proxy_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0
324507774c8e   registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 About an hour ago   Up About an hour           k8s_POD_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0
You can see that the nginx container is running on k8snode2.
You can also check with kubectl:
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-f89759699-cnj62 1/1 Running 0 10m 10.244.2.2 k8snode2 <none> <none>
Output in YAML format:
[root@k8smaster1 ~]# kubectl get pods -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2022-01-09T03:49:49Z"
    generateName: nginx-f89759699-
    labels:
      app: nginx
...
List the pods in all namespaces with full details:
[root@k8smaster1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
default                nginx-f89759699-cnj62                        1/1     Running   0          51m    10.244.2.2       k8snode2     <none>           <none>
kube-system            coredns-7ff77c879f-stfqz                     1/1     Running   0          161m   10.244.0.3       k8smaster1   <none>           <none>
kube-system            coredns-7ff77c879f-vhwr7                     1/1     Running   0          161m   10.244.0.2       k8smaster1   <none>           <none>
kube-system            etcd-k8smaster1                              1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-apiserver-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-controller-manager-k8smaster1           1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-9bx4w                        1/1     Running   0          59m    192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-flannel-ds-qzqjq                        1/1     Running   0          59m    192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-tldt5                        1/1     Running   0          59m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-proxy-6vcvj                             1/1     Running   1          140m   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-proxy-hn4gx                             1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-proxy-qzwh6                             1/1     Running   0          95m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-scheduler-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-sfjlr   1/1     Running   0          25m    10.244.2.3       k8snode2     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-f2v5g         1/1     Running   0          25m    10.244.1.2       k8snode1     <none>           <none>
Addendum: when running kubeadm init on the master, I locally hit the following error:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Fix:
1. Create the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the following content (a systemd drop-in needs the [Service] section header):
[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
2. Restart the kubelet
systemctl daemon-reload
systemctl restart kubelet
3. Re-run kubeadm init
Addendum: running kubeadm init repeatedly on the master fails with:
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
This happens because the previous kubeadm init was not cleaned up. Fix:
kubeadm reset
Addendum: kubeadm init on the master fails with:
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
Reports online attribute this to image-download timeouts. I did not find a working fix, so in the end I deleted the VM, cloned a fresh one, and repeated the initialization and installation steps above.
Addendum: while setting up the master I ran into various errors and tried different versions of kubeadm, kubelet, and kubectl.
1. List the available versions:
yum list kubeadm --showduplicates
2. Remove the installed packages:
yum remove -y kubelet kubeadm kubectl
3. Reinstall a specific version (yum install packagename-version):
yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
Addendum: after a node joined the k8s cluster, the network service failed to start on boot with:
Failed to start LSB
Fix:
1. Stop the NetworkManager service
systemctl stop NetworkManager
systemctl disable NetworkManager
2. Restart the network
systemctl restart network