II. Code Walkthrough of the Key Flows

What exactly does kubeadm do?
First, the cobra command-line library is used to bind kubeadm's commands. For how cobra is used, refer to the implementation walkthrough of the net-tools tool. //net-tools

Code location:

Below, 11 subcommands and one additional flag are bound.
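As an illustration of what that binding looks like, here is a stdlib-only miniature of the pattern cobra provides: a root command with named subcommands, each carrying a short description and a Run function. The subcommand set and all identifiers here are illustrative, not kubeadm's actual 11 subcommands or its real types.

```go
package main

import (
	"fmt"
	"os"
)

// command mimics the cobra.Command shape kubeadm uses: a name,
// a short description, and a run function.
type command struct {
	name  string
	short string
	run   func(args []string)
}

// newRootCommand binds subcommands the way kubeadm's root command
// binds init, join, reset, etc. (subset shown for illustration).
func newRootCommand() map[string]*command {
	cmds := map[string]*command{}
	for _, name := range []string{"init", "join", "reset", "token", "version"} {
		n := name // capture per-iteration copy for the closure
		cmds[n] = &command{
			name:  n,
			short: "run kubeadm " + n,
			run:   func(args []string) { fmt.Printf("running %s with %v\n", n, args) },
		}
	}
	return cmds
}

func main() {
	cmds := newRootCommand()
	if len(os.Args) > 1 {
		if c, ok := cmds[os.Args[1]]; ok {
			c.run(os.Args[2:])
			return
		}
	}
	fmt.Println("usage: kubeadm <subcommand>")
}
```

The real code does the same thing through cobra's AddCommand, which also gives each subcommand its own flag set.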
2.1 init
Overview
I. The preflight phase

A series of pre-flight checks is run to validate the system state before any changes are made. Some checks only trigger warnings, while others are treated as errors and abort kubeadm unless the problem is resolved or the user passes --ignore-preflight-errors=<list-of-errors>.

First, note that the preflight code exists separately from the other code: it is split across several Go files, with join and reset having their own — that much concerns the code layout. Then come the checks themselves, which everyone who has deployed with the default components will recognize:

Directory checks on the default paths.
CPU check: at least 2 cores; if your node has only 1 CPU, skip the check with --ignore-preflight-errors.
Memory check: at least 2 GB. As with CPU, too little memory will not be enough to run your applications anyway.
Version check: you also need to know the kubeadm and kubelet versions and make sure they match the version of the control plane being installed; otherwise there is a risk of version skew, which can lead to unexpected errors and problems.
Firewall check: whether the firewall is enabled or active. If it is, kubeadm prints a warning telling you the firewall is active and that you must make sure the required ports are open, or your cluster may not work correctly.
Open-port checks on the required ports.
Image check: if the required images are not found on the system, they are pulled. The container-runtime image list is also examined: the CRI interface is used to talk to the container runtime, determine which runtime is in use, and obtain the list of required images.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/preflight.go#L38
II. The certs phase

Generates a self-signed CA to establish an identity for every component in the cluster. The user can provide their own CA certificate and/or key by placing them into the certificate directory configured via --cert-dir (defaulting to /etc/kubernetes/pki). The certificate gets an additional SAN entry for every value of the --apiserver-cert-extra-sans argument, lower-cased when necessary.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/certs.go#L57
III. The kubeconfig phase

Writes kubeconfig files into the /etc/kubernetes/ directory so that the kubelet, the controller-manager and the scheduler can connect to the API server, each with its own identity, and also generates a standalone kubeconfig file named admin.conf for administrative operations.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/kubeconfig.go#L64
IV. The kubelet-start phase

Writes the kubelet's environment variables to /var/lib/kubelet/kubeadm-flags.env, writes the kubelet configuration to /var/lib/kubelet/config.yaml, and then starts the kubelet.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/kubelet.go#L38
V. The control-plane phase

Creates the static Pod manifest directory for the control-plane components and generates their static Pod manifests under /etc/kubernetes/manifests.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/controlplane.go#L65
VI. The etcd phase

If no external etcd service was provided, an additional static Pod manifest is also generated for etcd.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/etcd.go#L44
VII. The wait-control-plane phase

The kubelet watches the /etc/kubernetes/manifests directory so that it can create the Pods at system startup. Once the control-plane Pods are all up and running, the kubeadm init workflow continues.
https://github.com/kubernetes/kubernetes/blob/e6b4fa381152d66ebace4dfd837add5275fd8e1e/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go#L66
Having walked through these seven phases of init, you should now have a rough framework of how kubeadm init executes. Next comes the code-walkthrough part — let's GO!
2.1.1 Binding flags to init with cobra
Now for the concrete implementation of init.

Reading on, this is where the accompanying arguments get passed in, and we can trace how each one is wired up. Two places expose the subcommands under init; alongside them sits a file full of constants, each one corresponding to a flag.
From the above we have seen essentially the whole cobra library applied in kubeadm. Now for the important part: let's get familiar with the init flow itself and follow the source code to understand how it all relates. In total it is divided into 13 phases, so below let's follow these 13 phases and savor the source code step by step.
2.1.2 The initialization workflow
kubeadm init bootstraps a control-plane node by executing the following steps.

Let's first review which operations init performs to initialize the control plane. Of course, we can also execute them phase by phase via kubeadm init phase.

Usually we run kubeadm config print init-defaults > kubeadm.yaml to generate a default configuration file, then kubeadm init --config=kubeadm.yaml --upload-certs to initialize the cluster, where --upload-certs additionally uploads the certificates that will be used to the control plane.

Below we look in detail at which actions are executed and completed. First, the output of running the compiled binary:
"[init] Using Kubernetes version: v1.21.0",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.30.81]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.30.81 127.0.0.1 ::1]",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.30.81 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Starting the kubelet",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
"[kubelet-check] Initial timeout of 40s passed.",
"[apiclient] All control plane components are healthy after 67.511154 seconds",
"[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
"[kubelet] Creating a ConfigMap \"kubelet-config-1.21\" in namespace kube-system with the configuration for the kubelets in the cluster",
"[upload-certs] Storing the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace",
"[upload-certs] Using certificate key:",
"ad459a3e1a8942faefecdeb78094c439560cce15662908f60fed8928292605bd",
"[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
"[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
"[bootstrap-token] Using token: o07ftt.1k2k5dagbgypo863",
"[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
"[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes",
"[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
"[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
"[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
"[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
"[kubelet-finalize] Updating \"/etc/kubernetes/kubelet.conf\" to point to a rotatable kubelet client certificate and key",
"[addons] Applied essential addon: CoreDNS",
"[addons] Applied essential addon: kube-proxy",
"",
"Your Kubernetes control-plane has initialized successfully!",
"",
"To start using your cluster, you need to run the following as a regular user:",
"",
"  mkdir -p $HOME/.kube",
"  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
"  sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"",
"Alternatively, if you are the root user, you can run:",
"",
"  export KUBECONFIG=/etc/kubernetes/admin.conf",
"",
"You should now deploy a pod network to the cluster.",
"Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
"  https://kubernetes.io/docs/concepts/cluster-administration/addons/",
"",
"Then you can join any number of worker nodes by running the following on each as root:",
"",
"kubeadm join 192.168.30.81:6443 --token o07ftt.1k2k5dagbgypo863 \\",
"\t--discovery-token-ca-cert-hash sha256:b67d481806cef04a60e95646597e3873aa4f3981aa8fc82bf315cf812d54005b "
]
}
Below is a summary of the core code-flow phases of kubeadm init; if you look closely at the bracketed tags at the front of the log lines above, you can see them there too:
1 preflight phase
2 certs phase
3 kubeconfig phase
4 kubelet-start phase
5 control-plane phase
6 etcd phase
7 wait-control-plane phase
8 upload-config phase
9 upload-certs phase
10 mark-control-plane phase
11 bootstrap-token phase
12 kubelet-finalize phase
13 addon phase
The following comes from the parts of the core code that kubeadm init uses.

Starting from the first log line above, we can go to the code and read along.
2.1.3 开始初始化
"[init] Using Kubernetes version: v1.21.0",
That is, when we execute kubeadm init and go back to the main code in init.go, we can clearly see that the first thing it does is obtain the cluster version. Above that sits the cobra main-function section, which configures the init subcommand and its detailed description.
2.1.3 Pre-flight checks

Next come the readiness checks. Anyone who deploys frequently will be very familiar with this part — but what exactly is checked?
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
1. A series of pre-flight checks is run to validate the system state before any changes are made. Some checks only trigger warnings; others are treated as errors and abort kubeadm unless the problem is resolved or the user passes --ignore-preflight-errors=<list-of-errors>.
2. Under the hood, init calls RunInitNodeChecks(), which in turn calls RunChecks().
3. RunInitNodeChecks builds a long list of checks.
4. After a series of decisions, it appends a pile of check objects to a slice; each object implements the Check interface, and the slice is handed over to the code that actually executes the Check method.
5. RunChecks iterates over the slice and calls Check() on each entry.

RunChecks() takes a slice of checks as input, and since each of the objects above implements the interface, they can all be passed in.
Let's illustrate with one of the checks as an example.
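The check-object pattern from steps 2-5 can be sketched as follows. The Checker interface and RunChecks names follow kubeadm's preflight package, but the CPU check here is a simplified stand-in (the real one inspects the host; here the core count is a parameter so the sketch stays deterministic).

```go
package main

import "fmt"

// Checker is the interface every preflight check implements, as in
// kubeadm's preflight package: each check returns warnings and errors.
type Checker interface {
	Name() string
	Check() (warnings, errorList []error)
}

// numCPUCheck is a simplified stand-in for kubeadm's CPU-count check
// (the real check requires at least 2 cores).
type numCPUCheck struct{ cpus, min int }

func (c numCPUCheck) Name() string { return "NumCPU" }

func (c numCPUCheck) Check() (warnings, errorList []error) {
	if c.cpus < c.min {
		errorList = append(errorList,
			fmt.Errorf("the number of available CPUs %d is less than the required %d", c.cpus, c.min))
	}
	return warnings, errorList
}

// RunChecks iterates the slice and executes each Check, aggregating
// errors; this mirrors how kubeadm runs every preflight check before
// deciding whether to abort.
func RunChecks(checks []Checker) []error {
	var all []error
	for _, c := range checks {
		_, errs := c.Check()
		for _, e := range errs {
			all = append(all, fmt.Errorf("[%s]: %v", c.Name(), e))
		}
	}
	return all
}

func main() {
	errs := RunChecks([]Checker{numCPUCheck{cpus: 1, min: 2}})
	for _, e := range errs {
		fmt.Println("[preflight] error:", e)
	}
}
```

Because every check satisfies the same interface, adding a new preflight check is just appending one more object to the slice.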
6. After all the checks pass, the image-pull operation is executed.

If the required images are not found on the system, they are pulled. In addition, the container-runtime image list is checked: the CRI interface is used to talk to the container runtime, determine which runtime environment is in use, and obtain the list of required images.
As a supplement, these are the ports Kubernetes needs open: on control-plane nodes, 6443 (API server), 2379-2380 (etcd client/peer), 10250 (kubelet API), 10257 (kube-controller-manager) and 10259 (kube-scheduler); on worker nodes, 10250 (kubelet API) and 30000-32767 (NodePort Services).
2.1.4 Generating certificates

Generates a self-signed CA to establish an identity for every component in the cluster. The user can provide their own CA certificate and/or key by placing them into the certificate directory configured via --cert-dir (defaulting to /etc/kubernetes/pki). The certificate gets an additional SAN entry for every value of the --apiserver-cert-extra-sans argument, lower-cased when necessary.
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.30.81]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.30.81 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"sa\" key and public key"
This covers the items below; let me explain what the important files are for.

The code here mainly performs the certificate-generation operations above, storing the certificates under the /etc/kubernetes/pki directory.

The specific directory they are written into is determined in the corresponding phase code.

In addition:
"[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.30.81]",
"[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.30.81 127.0.0.1 ::1]"
The first line signs the apiserver serving certificate for its DNS names, and the second signs the etcd serving certificate for its DNS names; the mechanism used here is SAN. SAN (Subject Alternative Name) is an extension defined in the x509 SSL standard. An SSL certificate that uses the SAN field can extend the set of names the certificate supports, so that a single certificate can serve several different domain names.
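To see the SAN mechanism concretely, here is a small Go program that self-signs a serving certificate carrying SAN DNS names and an IP, then verifies a hostname against it. The DNS names and IP mirror the log output above, but newServingCert itself is an illustrative helper, not kubeadm's certificate code.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServingCert self-signs a certificate whose SAN extension carries
// several DNS names and IPs — the mechanism that lets one apiserver
// cert answer for kubernetes, kubernetes.default, the service IP, etc.
func newServingCert(dnsNames []string, ips []net.IP) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "kube-apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     dnsNames, // SAN DNS entries
		IPAddresses:  ips,      // SAN IP entries
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := newServingCert(
		[]string{"kubernetes", "kubernetes.default", "kubernetes.default.svc.cluster.local"},
		[]net.IP{net.ParseIP("10.96.0.1")},
	)
	if err != nil {
		panic(err)
	}
	// VerifyHostname consults the SAN extension, so any listed name matches.
	fmt.Println("kubernetes matches:", cert.VerifyHostname("kubernetes") == nil)
}
```

This is exactly why the apiserver certificate in the log lists five DNS names and two IPs: clients connecting via any of them see a valid certificate.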
2.1.5 Generating the configuration files

How are the four files under /etc/kubernetes generated?

This is implemented by the core code below; the workflow is as follows:

Write kubeconfig files into the /etc/kubernetes/ directory so that the kubelet, the controller-manager and the scheduler can connect to the API server, each with its own identity, and also generate a standalone kubeconfig file named admin.conf for administrative operations.
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
Following the log output, we can track down the source location; once again you can see it is wired up through the cobra library.
2.1.6 Writing the env file

"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Starting the kubelet"
This part mainly does the following things:

Write the environment variables to /var/lib/kubelet/kubeadm-flags.env;
write the kubelet configuration to /var/lib/kubelet/config.yaml;
then start the kubelet.
2.1.7 Generating the three static Pod manifests
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
From the log output we can pretty much see that this creates the static Pod manifest directory for the control-plane components and generates the static Pod manifests, which are placed under /etc/kubernetes/manifests.
2.1.8 Generating the etcd static Pod manifest
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
This next part is somewhat simpler. After creating the static Pod manifests in the steps above, the following is also done: if no external etcd service was provided, an additional static Pod manifest is generated for etcd.

It first determines whether an external etcd was provided.
2.1.9 The wait-control-plane phase: waiting for the Pods to run

The kubelet watches the /etc/kubernetes/manifests directory so that it can create the Pods at system startup.

Once the control-plane Pods are all up and running, the kubeadm init workflow continues.
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s"
2.2 reset 2.3 join