
Managing a k8s Cluster with kubeadm, Part 05: Installing the UI and Configuring High Availability

cplinux98 · 2022-09-06

00: Introduction

This post shows how to install the Dashboard web UI for a kubeadm-managed cluster, and how to use keepalived + haproxy to deploy a highly available kubeadm master cluster.

01: Installing the Dashboard

Before building the master cluster, let's install the Dashboard and take a look at the k8s UI.

GitHub: https://github.com/kubernetes/dashboard . The Dashboard is released independently of k8s, so before installing, check the releases page for which k8s versions the release supports.

We will install v2.3.1.

# Download the dashboard manifest
mkdir -p /root/mykube/init/dashboard
cd /root/mykube/init/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# Point the images at the private Harbor registry
sed -i 's#kubernetesui/dashboard:v2.3.1#harbor.linux98.com/google_containers/dashboard:v2.3.1#g' recommended.yaml
sed -i 's#kubernetesui/metrics-scraper:v1.0.6#harbor.linux98.com/google_containers/metrics-scraper:v1.0.6#g' recommended.yaml

Change the exposed port

(screenshot: the kubernetes-dashboard Service in recommended.yaml changed to type NodePort with nodePort 30443)
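
For reference, the edit in the screenshot corresponds roughly to the following change to the kubernetes-dashboard Service in recommended.yaml (a sketch; the 30443 value matches the NodePort used throughout this post):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # added: expose the dashboard outside the cluster
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443     # added: fixed node port, proxied by haproxy later
  selector:
    k8s-app: kubernetes-dashboard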

Install the dashboard

kubectl apply -f recommended.yaml

Check the status

(screenshot: dashboard pods Running in the kubernetes-dashboard namespace)
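
The state in the screenshot can be checked with (kubernetes-dashboard is the namespace created by recommended.yaml):

kubectl get pods,svc -n kubernetes-dashboard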

The NodePort opens port 30443 on every node, so visiting https://<node-ip>:30443 on any node brings up the dashboard login page.

(screenshot: dashboard login page prompting for a token)

Generate a token on the master node

# Create a dedicated service account
kubectl create serviceaccount dashboard-admin -n kube-system
# Bind it to the cluster-admin cluster role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Find the generated token secret
kubectl -n kube-system get secret | grep dashboard-admin
# Show the token; the random suffix (jfmph here) comes from the previous command's output
kubectl describe secrets -n kube-system dashboard-admin-token-jfmph
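
To avoid copying the random suffix by hand, the two lookups can be combined into one line (a sketch using standard shell substitution):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')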

(screenshot: the dashboard-admin token printed by kubectl describe)

02: Configuring haproxy + keepalived

Next, build the HA proxy for the k8s API. The cluster must be initialized against the VIP, so haproxy + keepalived have to be set up first. Keep only one master behind the VIP for now, and enable the other backends after the remaining master nodes are configured.

(diagram: keepalived VIP 172.20.200.200 on ha01/ha02, proxying to the three masters on port 6443)

Install the base packages on ha01 and ha02

apt install keepalived haproxy -y

Configure keepalived on the master (ha01)

# /etc/keepalived/keepalived.conf
global_defs {
   router_id ha01
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # without track_script, the chk_haproxy script is never executed
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        172.20.200.200 dev eth0 label eth0:1
    }
}

Configure the keepalived check script

#!/bin/bash
# /etc/keepalived/check_haproxy.sh
# If haproxy is down, try to restart it once; if it still will not start,
# kill keepalived so the VIP fails over to the backup node.
haproxy_status=$(ps -C haproxy --no-header | wc -l)
if [ "$haproxy_status" -eq 0 ]; then
    systemctl start haproxy
    sleep 3
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        killall keepalived
    fi
fi
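
keepalived executes the script directly, so it must be executable:

chmod +x /etc/keepalived/check_haproxy.sh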

Configure keepalived on the backup (ha02)

# /etc/keepalived/keepalived.conf
global_defs {
   router_id ha02
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        172.20.200.200 dev eth0 label eth0:1
    }
}

The check script is identical to the one on the master.

Configure haproxy on ha01 and ha02

Enable only one backend node for now, so that initializing the first master does not fail because the other (not yet existing) masters cannot be reached.

# vim /etc/haproxy/haproxy.cfg
...
# append the following at the end
listen status
        bind 172.20.200.200:9999
        mode http
        log global
        stats enable
        stats uri /haproxy-stats
        stats auth haadmin:123456

listen k8s-api-6443
        bind 172.20.200.200:6443
        mode tcp
        server master1 172.20.200.201:6443 check inter 3s fall 3 rise 5
#        server master2 172.20.200.202:6443 check inter 3s fall 3 rise 5
#        server master3 172.20.200.203:6443 check inter 3s fall 3 rise 5

Restart the services

haproxy on ha02 will fail to start for now: the VIP lives on ha01, and haproxy cannot bind to an address the host does not hold.

systemctl restart keepalived
systemctl restart haproxy
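
If you want haproxy running on the standby as well (so failover is immediate), a common workaround (an assumption, not from the original post) is to allow binding to non-local addresses:

# allow haproxy to bind the VIP even while it lives on the other node
echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p
systemctl restart haproxy

You can confirm which node currently holds the VIP with ip addr show eth0.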

03: Configuring kubeadm on master01

On kubeadm-master01:

  1. Reset the k8s cluster
  2. Re-run kubeadm init, this time pointing at the HA VIP

Reset the cluster; master01 and node01-03 all need to be reset.

kubeadm reset -f

rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
# check which CNI interfaces are left behind
ifconfig
# take the leftover interfaces down and delete them
ifconfig flannel.1 down
ifconfig cni0 down
ip link delete cni0
ip link delete flannel.1

Initialize master01 with kubeadm init

kubeadm init --kubernetes-version=1.22.1 \
--apiserver-advertise-address=172.20.200.201 \
--control-plane-endpoint=172.20.200.200 \
--apiserver-bind-port=6443 \
--image-repository=harbor.linux98.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=Swap

Initialization succeeded

(screenshot: kubeadm init success output, including the control-plane and worker join commands)

Configure kubectl access for root

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy flannel

cd mykube/init/flannel/
kubectl apply -f kube-flannel.yml

Adjust the scheduler port

Comment out the `- --port=0` line (line 19) in /etc/kubernetes/manifests/kube-scheduler.yaml.
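
The same edit as a one-liner, assuming the flag appears exactly as `- --port=0` in the manifest:

sed -i '/- --port=0/ s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml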

master01 is now ready.

04: Joining the other two masters

Normally, the control-plane join command from the init output cannot be used as printed, because it is missing one piece of information: the certificate key.

  kubeadm join 172.20.200.200:6443 --token 737svp.28ynn6xwmx26a9rg \
        --discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1 \
        --control-plane 

Generate the required certificate key

# upload the control-plane certificates and print the certificate key
kubeadm init phase upload-certs --upload-certs
# output:
bd8eb662009fd12c0b28b553a5ac36a8cb4bfb73d1e00f98dc119ccb04f19580

# the completed join command, with --certificate-key appended
kubeadm join 172.20.200.200:6443 --token 737svp.28ynn6xwmx26a9rg \
      --discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1 \
      --control-plane --certificate-key bd8eb662009fd12c0b28b553a5ac36a8cb4bfb73d1e00f98dc119ccb04f19580

Prepare the environment on the other masters

Network configuration, kernel parameters, package sources, and installed packages are the same as on master01.

Join them to the cluster with kubeadm

  kubeadm join 172.20.200.200:6443 --token 737svp.28ynn6xwmx26a9rg \
        --discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1 \
        --control-plane --certificate-key bd8eb662009fd12c0b28b553a5ac36a8cb4bfb73d1e00f98dc119ccb04f19580

Output after joining

(screenshot: kubeadm join success output on the new control-plane node)

As the output instructs, configure kubectl access for root

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status

(screenshot: all three masters listed as Ready)
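
The screenshot corresponds to something like:

kubectl get nodes -o wide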

The other masters need no separate scheduler or flannel configuration at this point; it is pulled from master01.


05: Uncommenting the other backends in haproxy

(screenshot: haproxy.cfg with the master2 and master3 server lines uncommented)
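
After uncommenting, the backend section (from the config added earlier) reads:

listen k8s-api-6443
        bind 172.20.200.200:6443
        mode tcp
        server master1 172.20.200.201:6443 check inter 3s fall 3 rise 5
        server master2 172.20.200.202:6443 check inter 3s fall 3 rise 5
        server master3 172.20.200.203:6443 check inter 3s fall 3 rise 5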

systemctl restart haproxy.service

Check the backend status on the haproxy stats page (http://172.20.200.200:9999/haproxy-stats, credentials haadmin:123456, as configured above)

(screenshot: haproxy stats page showing all three masters up)

06: Joining the worker nodes

# on master01: print a fresh join command for workers
kubeadm token create --print-join-command
# run the printed command on each worker node:
kubeadm join 172.20.200.200:6443 --token sarnqy.q8qj98cem1tsmaxn --discovery-token-ca-cert-hash sha256:fa0635c154aa68a2d63705a0449da695f19ea1262b1707c670db03a17abe14e1

Then check the node status on master01

(screenshot: kubectl get nodes showing masters and workers Ready)

07: Reinstalling the dashboard

cd /root/mykube/init/dashboard
kubectl apply -f recommended.yaml

Add a proxy port in haproxy

# vim /etc/haproxy/haproxy.cfg
...

listen k8s-dashboard-api-30443
        bind 172.20.200.200:30443
        mode tcp
        server master1 172.20.200.201:30443 check inter 3s fall 3 rise 5
        server master2 172.20.200.202:30443 check inter 3s fall 3 rise 5
        server master3 172.20.200.203:30443 check inter 3s fall 3 rise 5

systemctl restart haproxy

Generate an access token

# Create a dedicated service account
kubectl create serviceaccount dashboard-admin -n kube-system
# Bind it to the cluster-admin cluster role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Find the generated token secret
kubectl -n kube-system get secret | grep dashboard-admin
# Show the token; the random suffix comes from the previous command's output
kubectl describe secrets -n kube-system dashboard-admin-token-jfmph

Test in a browser: https://172.20.200.200:30443 (the VIP, proxied by haproxy) should bring up the dashboard login page; sign in with the token from above.

08: Removing a master node

This works much like removing a worker node and uses the same commands; just mind the node name. See the sketch below.
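
A minimal sketch of the usual sequence (assuming the node to remove is named master03; the name is hypothetical):

# on any remaining master:
kubectl drain master03 --ignore-daemonsets --delete-emptydir-data
kubectl delete node master03
# on master03 itself, wipe the local kubeadm state:
kubeadm reset -f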
