A week ago I was handed a task: renew the certificates for a k8s cluster. On the one hand the task looked fairly trivial, BUT my shaky confidence with k8s made it less so: until then I had consumed kube as a service, and my involvement never went beyond looking at pods, deleting them, and writing deployments from a template. What did inspire confidence was the existence of an instruction, but it turned out to be written for v1.13, while the cluster this had to be done on was running v1.12.3. And then it began...
By day 3 I had solved the renewal problem and wanted to write the instruction down. I hear that in newer versions this is handled by practically a single command, but for those running the same vintage as mine, here is my experience.
Given a k8s cluster:
3 master nodes
3 etcd nodes
5 worker nodes
kubectl get nodes
NAME                    STATUS   ROLES    AGE    VERSION
product1-mvp-k8s-0001   Ready    master   464d   v1.12.3
product1-mvp-k8s-0002   Ready    master   464d   v1.12.3
product1-mvp-k8s-0003   Ready    master   464d   v1.12.3
product1-mvp-k8s-0007   Ready    node     464d   v1.12.3
product1-mvp-k8s-0008   Ready    node     464d   v1.12.3
product1-mvp-k8s-0009   Ready    node     464d   v1.12.3
product1-mvp-k8s-0010   Ready    node     464d   v1.12.3
product1-mvp-k8s-0011   Ready    node     464d   v1.12.3
Certificate validity period:
echo | openssl s_client -showcerts -connect product1-mvp-k8s-0001:6443 -servername api 2>/dev/null | openssl x509 -noout -enddate
notAfter=Mar 4 00:39:56 2021 GMT
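A quick way to turn that notAfter string into a days-remaining figure (a sketch: GNU date is assumed, and the "now" timestamp is pinned here so the example is reproducible; in practice use `date -u` instead):

```shell
# Days until the certificate from the check above expires.
not_after="Mar 4 00:39:56 2021 GMT"   # value from the openssl output above
now="Feb 22 00:39:56 2021 GMT"        # pinned for the example; use: now=$(date -u)
exp_epoch=$(date -u -d "$not_after" +%s)
now_epoch=$(date -u -d "$now" +%s)
days_left=$(( (exp_epoch - now_epoch) / 86400 ))
echo "$days_left days left"
```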
Here we go:
On all MASTER nodes, back up /etc/kubernetes:
sudo mkdir backup; sudo cp -R /etc/kubernetes backup/ ; sudo tar -cvzf backup/pki_backup_`hostname`-`date +%Y%m%d`.tar.gz backup/kubernetes/
Look at the structure of /etc/kubernetes; it should be something like this:
ls -l
total 80
-rw------- 1 root root 5440 Mar  3 13:21 admin.conf
drwxr-xr-x 2 root root 4096 Aug 17  2020 audit-policy
-rw-r--r-- 1 root root  368 Mar  4  2020 calico-config.yml
-rw-r--r-- 1 root root  270 Mar  4  2020 calico-crb.yml
-rw-r--r-- 1 root root  341 Mar  4  2020 calico-cr.yml
-rw-r--r-- 1 root root  147 Mar  4  2020 calico-node-sa.yml
-rw-r--r-- 1 root root 6363 Mar  4  2020 calico-node.yml
-rw------- 1 root root 5472 Mar  3 13:21 controller-manager.conf
-rw-r--r-- 1 root root 3041 Aug 14  2020 kubeadm-config.v1alpha3.yaml
-rw------- 1 root root 5548 Mar  3 13:21 kubelet.conf
-rw-r--r-- 1 root root 1751 Mar  4  2020 kubelet.env
drwxr-xr-x 2 kube root 4096 Aug 14  2020 manifests
lrwxrwxrwx 1 root root   28 Mar  4  2020 node-kubeconfig.yaml -> /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5420 Mar  3 13:21 scheduler.conf
drwxr-xr-x 3 kube root 4096 Mar  3 10:20 ssl
All my keys live in ssl rather than in pki, which is where kubeadm will look for them, so the pki directory has to exist; in my case I create it as a symlink:
ln -s /etc/kubernetes/ssl /etc/kubernetes/pki
Find the file with the cluster configuration; in my case it was
kubeadm-config.v1alpha3.yaml
kubectl get cm kubeadm-config -n kube-system -o yaml > /etc/kubernetes/kubeadm-config.yaml
kubeadm alpha phase certs apiserver --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml
[certificates] Using the existing apiserver certificate and key.
kubeadm alpha phase certs apiserver-kubelet-client
I0303 13:12:24.543254 40613 version.go:236] remote version is much newer: v1.20.4; falling back to: stable-1.12
[certificates] Using the existing apiserver-kubelet-client certificate and key.
kubeadm alpha phase certs front-proxy-client
I0303 13:12:35.660672 40989 version.go:236] remote version is much newer: v1.20.4; falling back to: stable-1.12
[certificates] Using the existing front-proxy-client certificate and key.
kubeadm alpha phase certs etcd-server --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [product1-mvp-k8s-0001 localhost] and IPs [127.0.0.1 ::1]
kubeadm alpha phase certs etcd-server --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml
[certificates] Using the existing etcd/server certificate and key.
kubeadm alpha phase certs etcd-healthcheck-client --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml
[certificates] Generated etcd/healthcheck-client certificate and key.
kubeadm alpha phase certs etcd-peer --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [product1-mvp-k8s-0001 localhost] and IPs [192.168.4.201 127.0.0.1 ::1]
find /etc/kubernetes/pki/ -name '*.crt' -exec openssl x509 -text -noout -in {} \; | grep -A2 Validity
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 2 10:29:44 2030 GMT
--
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 3 10:07:29 2022 GMT
--
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 3 10:07:52 2022 GMT
--
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 3 10:06:48 2022 GMT
--
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 2 10:29:44 2030 GMT
--
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 2 19:39:56 2022 GMT
--
Validity
Not Before: Mar 4 10:29:43 2020 GMT
Not After : Mar 2 10:29:43 2030 GMT
--
Validity
Not Before: Mar 4 10:29:43 2020 GMT
Not After : Mar 2 19:40:13 2022 GMT
--
Validity
Not Before: Mar 4 10:29:44 2020 GMT
Not After : Mar 2 19:36:38 2022 GMT
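If the grep output above is hard to map back to files, one certificate-plus-expiry line per file is easier to read. A sketch (a throwaway self-signed cert in a temp dir stands in for /etc/kubernetes/pki so the snippet is self-contained; point CERT_DIR at the real directory on a master):

```shell
# One line per certificate with its expiry date.
CERT_DIR=$(mktemp -d)   # replace with /etc/kubernetes/pki on a real master
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$CERT_DIR/demo.key" -out "$CERT_DIR/demo.crt" -days 365 2>/dev/null
for crt in "$CERT_DIR"/*.crt; do
  printf '%s  %s\n' "$crt" "$(openssl x509 -noout -enddate -in "$crt")"
done
```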
Regenerate admin.conf, controller-manager.conf, kubelet.conf and scheduler.conf, first moving the old ones into a tmp directory:
kubeadm alpha phase kubeconfig all --config /etc/kubernetes/kubeadm-config.v1alpha3.yaml
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
Restart kubelet along with all the containers:
sudo systemctl stop kubelet
sudo docker stop $(docker ps -aq)
sudo docker rm $(docker ps -aq)
sudo systemctl start kubelet

systemctl status kubelet -l
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-03 14:00:22 MSK; 10s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 52998 ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volume-plugins (code=exited, status=0/SUCCESS)
 Main PID: 53001 (kubelet)
   Memory: 51.2M
   CGroup: /system.slice/kubelet.service
On the master, check that the nodes and namespaces are visible:
kubectl get nodes
kubectl get ns
NAME                  STATUS   AGE
default               Active   464d
product1-mvp          Active   318d
infra-logging         Active   315d
infra-nginx-ingress   Active   386d
kube-public           Active   464d
kube-system           Active   464d
pg                    Active   318d
Re-run the certificate check from the beginning; it now shows the new expiry date:
notAfter=Mar 3 07:40:43 2022 GMT
Master 1 is done; repeat the same steps on masters 2 and 3.
On the worker nodes:
Here we deal with kubelet.conf and bootstrap-kubelet.conf.
cd /etc/kubernetes/
mv kubelet.conf kubelet.conf_old
Then create a new bootstrap-kubelet.conf with the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: |
      LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETX
    server: https://192.168.4.201:6443
  name: product1
contexts:
- context:
    cluster: product1
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@product1
current-context: tls-bootstrap-token-user@product1
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: fgz9qz.lujw0bwsdfhdsfjhgds
- certificate-authority-data – the PKI CA certificate; take it from /etc/kubernetes/kubelet.conf on a master node
- server: https://192.168.4.201:6443 – the IP of a master API server, or the balancer IP if you have one
- token: fgz9qz.lujw0bwsdfhdsfjhgds – a bootstrap token, generated on a master node:
kubeadm token create
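As noted above, certificate-authority-data comes from kubelet.conf on a master. A quick way to pull the value out (a sketch: a stub kubeconfig with fake data is created here so the snippet is self-contained; point it at the real /etc/kubernetes/kubelet.conf instead):

```shell
# Extract the certificate-authority-data value from a kubeconfig file.
cfg=$(mktemp)   # stands in for /etc/kubernetes/kubelet.conf on a master
cat > "$cfg" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: RkFLRUNBREFUQQ==
    server: https://192.168.4.201:6443
EOF
ca_data=$(awk '/certificate-authority-data:/ {print $2}' "$cfg")
echo "$ca_data"
```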
Restart kubelet: using the bootstrap token it will request a fresh client certificate from the master, and the node should come back Ready:
systemctl restart kubelet

systemctl status kubelet -l
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-03 14:06:33 MSK; 11s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 54615 ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volume-plugins (code=exited, status=0/SUCCESS)
 Main PID: 54621 (kubelet)
   Memory: 52.1M
   CGroup: /system.slice/kubelet.service
Then check that a new client certificate has appeared:
ls -las /var/lib/kubelet/pki/
total 24
4 -rw-------. 1 root root 1135 Mar  3 14:06 kubelet-client-2021-03-03-14-06-34.pem
0 lrwxrwxrwx. 1 root root   59 Mar  3 14:06 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-03-03-14-06-34.pem
4 -rw-r--r--. 1 root root 2267 Mar  2 10:40 kubelet.crt
4 -rw-------. 1 root root 1679 Mar  2 10:40 kubelet.key
Repeat the same procedure on all remaining worker nodes.
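To avoid doing this by hand on every worker, the steps can be scripted. A sketch as a dry run: it only prints the commands (the hostnames are examples; adjust the list and drop the echo wrappers to actually run them over ssh):

```shell
# Dry run: print the per-worker commands instead of executing them.
WORKERS="product1-mvp-k8s-0010 product1-mvp-k8s-0011"   # example hostnames
cmds=$(for host in $WORKERS; do
  echo "scp /etc/kubernetes/bootstrap-kubelet.conf $host:/etc/kubernetes/"
  echo "ssh $host 'sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf_old'"
  echo "ssh $host 'sudo systemctl restart kubelet'"
done)
printf '%s\n' "$cmds"
```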
And that's it — all certificates in the k8s v1.12.3 cluster have been renewed.