2025-04-10
Summary of installing each Kubernetes cluster component
This article summarizes the binary installation and configuration of every component a Kubernetes cluster needs: etcd, kubelet, kube-apiserver, kube-proxy, kube-controller-manager and kube-scheduler.

{collapse}
{collapse-item label="Detailed installation guides for each component" open}
containerd installation and configuration
etcd cluster installation
kube-apiserver deployment
Configuring kubeconfig and installing kube-controller-manager
Deploying the kube-scheduler service
HAProxy and keepalived deployment
Deploying services on the nodes
Cluster optimization
{/collapse-item}
{/collapse}

Environment setup

{tabs}
{tabs-pane label="Install packages"}
dnf -y install iptables ipvsadm ipset nfs-utils
{/tabs-pane}
{tabs-pane label="Install kernel modules"}
cat /etc/modules-load.d/calico.conf
ip_vs
ip_vs_rr
iptable_nat
iptable_filter
vxlan
ipip

cat /etc/modules-load.d/containerd.conf
overlay
br_netfilter
nf_conntrack

# Make sure the modules listed above are available on the machine.
# After creating the two files, reload them with systemctl so the modules take effect.
systemctl restart systemd-modules-load
{/tabs-pane}
{tabs-pane label="System configuration"}
# Disable SELinux
sed -i '/^SELINUX=/s//SELINUX=disabled/' /etc/selinux/config
# Disable swap
swapoff -a && sed -i '/swap/d' /etc/fstab
# Remove the firewall
dnf -y remove firewalld
# Edit the hosts file
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.88.51 master1
192.168.88.52 master2
192.168.88.53 master3
192.168.88.61 node001
192.168.88.62 node002
192.168.88.63 node003
{/tabs-pane}
{/tabs}

Create directories

# Directory for the root CA certificates
mkdir -p /etc/kubernetes/pki
# Directories for the etcd configuration file, CA certificates and etcd data
mkdir -p /etc/etcd/{pki,data}
# Default directory kubeconfig is read from
mkdir -p $HOME/.kube
# containerd CNI plugin directory
mkdir -p /opt/cni/bin/
# containerd CNI plugin configuration directory
mkdir -p /etc/cni/net.d/
# Directory holding the containerd registry-mirror configuration
mkdir -p /etc/containerd/certs.d/docker.io

Creating the CA certificates

Configuration files needed for the certificates

{tabs}
{tabs-pane label="etcd_ssl.cnf"}
vim /etc/etcd/pki/etcd_ssl.cnf
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.88.51
IP.2 = 192.168.88.52
IP.3 = 192.168.88.53
{/tabs-pane}
{tabs-pane label="master_ssl.cnf"}
vim /etc/kubernetes/pki/master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-1
DNS.6 = k8s-2
DNS.7 = k8s-3
IP.1 = 10.245.0.1
IP.2 = 192.168.88.51
IP.3 = 192.168.88.52
IP.4 = 192.168.88.53
IP.5 = 192.168.18.100
{/tabs-pane}
{/tabs}

Create the certificates

{tabs}
{tabs-pane label="CA root certificate"}
cd /etc/kubernetes/pki
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.88.51" -days 36500 -out ca.crt
{/tabs-pane}
{tabs-pane label="etcd server certificate"}
Certificate the etcd cluster members use to authenticate each other.
cd /etc/etcd/pki/
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
{/tabs-pane}
{tabs-pane label="etcd client certificate"}
Certificate kube-apiserver needs in order to use the etcd database.
cd /etc/etcd/pki/
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
{/tabs-pane}
{/tabs}
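Before wiring these certificates into etcd and kube-apiserver, it can be worth a quick sanity check that they really chain back to the CA and carry the expected SANs. A minimal sketch using the paths from above (the -ext option needs OpenSSL 1.1.1 or newer):

# Verify the etcd server and client certificates against the CA created above
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/etcd/pki/etcd_server.crt /etc/etcd/pki/etcd_client.crt
# Show subject, validity period and the IP SANs taken from etcd_ssl.cnf
openssl x509 -in /etc/etcd/pki/etcd_server.crt -noout -subject -dates -ext subjectAltName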
{tabs}
{tabs-pane label="kube-apiserver server certificate"}
cd /etc/kubernetes/pki
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.88.51" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
cat ca.crt ca.key > ca.pem
openssl x509 -in ca.crt -pubkey -noout > ca.pub
{/tabs-pane}
{tabs-pane label="kube-apiserver client certificate"}
When kube-controller-manager, kube-scheduler, kubelet and kube-proxy connect to kube-apiserver as clients, they need a client certificate signed by the CA so they can access kube-apiserver correctly.
cd /etc/kubernetes/pki
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
{/tabs-pane}
{/tabs}

Create the kubeconfig file

The kubeconfig file is the central client configuration file of Kubernetes; it manages access to clusters and lets you switch between them. Its main roles:
Cluster connection management: it stores the API server address, CA certificate and other details for multiple clusters, so clients such as kubectl can connect securely.
User authentication: it holds user credentials such as client certificates, tokens, user names/passwords or OAuth2 tokens, used to verify what the user is allowed to do in the cluster.
Context switching: a context ties together a cluster, a user and a namespace, so you can switch quickly between environments (development, test, production).
Merging multiple configurations: the KUBECONFIG environment variable can merge several files, which keeps configuration for different projects or environments flexible.

vim /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
clusters:                                                    # list of clusters
- name: default
  cluster:
    server: https://192.168.88.100:9443                      # cluster API address
    certificate-authority: /etc/kubernetes/pki/ca.crt        # CA certificate used to verify the cluster
users:                                                       # list of users
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/client.crt       # user identity certificate
    client-key: /etc/kubernetes/pki/client.key               # user private key
contexts:                                                    # list of contexts
- context:
    cluster: default                                         # cluster this context refers to
    user: admin                                              # user this context refers to
  name: default                                              # context name
current-context: default                                     # context currently in effect
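Before the file is copied to $HOME/.kube/config (done later in this series), kubectl can be pointed at it explicitly to confirm that the client certificate, the CA and the API address all fit together. A small sketch using the paths and the apiserver address defined above:

kubectl --kubeconfig=/etc/kubernetes/kubeconfig config get-contexts
kubectl --kubeconfig=/etc/kubernetes/kubeconfig cluster-info
# /healthz returns "ok" when the apiserver accepts the client certificate
kubectl --kubeconfig=/etc/kubernetes/kubeconfig get --raw='/healthz'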
Configure the service files

Create systemd service files so each component can be managed with systemctl.

{tabs}
{tabs-pane label="apiserver"}
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
{/tabs-pane}
{tabs-pane label="kubelet"}
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
{/tabs-pane}
{tabs-pane label="scheduler"}
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
{/tabs-pane}
{tabs-pane label="controller-manager"}
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
{/tabs-pane}
{tabs-pane label="kube-proxy"}
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
{/tabs-pane}
{/tabs}

Create the configuration file each component needs

{tabs}
{tabs-pane label="apiserver"}
vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
--client-ca-file=/etc/kubernetes/pki/ca.crt \
--apiserver-count=3 --endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.88.51:2379,https://192.168.88.52:2379,https://192.168.88.53:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.crt \
--etcd-certfile=/etc/etcd/pki/etcd_client.crt \
--etcd-keyfile=/etc/etcd/pki/etcd_client.key \
--service-cluster-ip-range=10.245.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--kubelet-client-certificate=/etc/kubernetes/pki/ca.crt \
--kubelet-client-key=/etc/kubernetes/pki/ca.key \
--service-account-key-file=/etc/kubernetes/pki/ca.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/ca.pem \
--service-account-issuer=api"
{/tabs-pane}
{tabs-pane label="kubelet"}
vim /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --config=/etc/kubernetes/kubelet.config \
--hostname-override=192.168.88.51"
{/tabs-pane}
{tabs-pane label="kubelet.config"}
vim /etc/kubernetes/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
cgroupDriver: systemd
clusterDNS: ["10.245.0.100"]
clusterDomain: cluster.local
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
{/tabs-pane}
{tabs-pane label="scheduler"}
vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true"
{/tabs-pane}
{tabs-pane label="controller-manager"}
vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=10.245.0.0/16 \
--cluster-cidr=10.244.0.0/16 \
--allocate-node-cidrs=true \
--service-account-private-key-file=/etc/kubernetes/pki/apiserver.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt"
{/tabs-pane}
{tabs-pane label="proxy"}
vim /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--hostname-override=192.168.88.51 \
--proxy-mode=ipvs \
--ipvs-strict-arp=true \
--cluster-cidr=10.244.0.0/16"
{/tabs-pane}
{/tabs}

Start the services

systemctl enable --now kube-apiserver
systemctl enable --now kube-proxy
systemctl enable --now kubelet
systemctl enable --now kube-scheduler
systemctl enable --now kube-controller-manager
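Once everything is enabled, it is worth confirming the components are actually healthy rather than merely started. A sketch of a quick check, assuming the kubeconfig has already been copied to $HOME/.kube/config (kubectl get componentstatuses is deprecated but still answers):

# All five services should report "active"
for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
echo -n "${svc}: "; systemctl is-active ${svc}
done
# Ask the apiserver itself how its readiness checks look
kubectl get --raw='/readyz?verbose'
kubectl get componentstatuses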
content="点击下载etcd_client.crt"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/etcd/pki/etcd_client.csr" radius="" content="点击下载etcd_client.csr"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/etcd/pki/etcd_client.key" radius="" content="点击下载etcd_client.key"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/etcd/pki/etcd_client.pem" radius="" content="点击下载etcd_client.pem"/}kubernetes组件相关各组件配置文件{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/config/apiserver" radius="" content="点击下载apiserver"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/config/controller-manager" radius="" content="点击下载controller-manager"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/config/kubeconfig" radius="" content="点击下载kubeconfig"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/config/kubelet" radius="" content="点击下载kubelet"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/config/kubelet.config" radius="" content="点击下载kubelet.config"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/config/proxy" radius="" content="点击下载proxy"/}各组件service文件{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/service/kube-apiserver.service" radius="" content="点击下载apiserver"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/service/kube-controller-manager.service" radius="" content="点击下载controller-manager"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/service/kubelet.service" radius="" content="点击下载kubelet"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/service/kube-proxy.service" radius="" content="点击下载kube-proxy"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/service/kube-scheduler.service" radius="" content="点击下载kube-scheduler"/} CA认证文件{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/ master_ssl.cnf" radius="" content="点击下载master_ssl.cnf"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/apiserver.crt" radius="" content="点击下载apiserver.crt"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/apiserver.csr" radius="" content="点击下载apiserver.csr"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/apiserver.key" radius="" content="点击下载apiserver.key"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/ca.crt" radius="" content="点击下载ca.crt"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/ca.key" radius="" content="点击下载ca.key"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/ca.pem" radius="" 
content="点击下载ca.pem"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/ca.pub" radius="" content="点击下载ca.pub"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/ca.srl" radius="" content="点击下载ca.srl"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/client.crt" radius="" content="点击下载cclient.crt"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/client.csr" radius="" content="点击下载cclient.csr"/}{abtn icon="fa-cloud-download" color="#5698c3" href="https://doc.zhangmingrui.cool/usr/uploads/2025/04/k8s/pki/client.key" radius="" content="点击下载cclient.key"/}
2025-04-09
Fixing the "Unauthorized" error from kubectl exec
The best way to set up a Kubernetes cluster is with kubeadm. Deploying manually from binaries involves many steps and is very error-prone, so problems like this one tend to surface while using the cluster.

Reproducing the problem

kubectl logs myhttp
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log myhttp))
kubectl exec myhttp -- bash
error: Internal error occurred: unable to upgrade connection: Unauthorized

The error says the connection cannot be upgraded because the request is unauthorized.

Confirming the cause

# Edit /etc/kubernetes/kubelet.config and enable anonymous access:
# set authentication.anonymous.enabled to true
# Save the change and restart every kubelet in the cluster.

After restarting the kubelets, kubectl exec and kubectl logs work again, which confirms the problem lies in the authentication configuration used between kubelet and kube-apiserver (remember to set anonymous access back to false once the test is done).

Fixing the error

Add the kubelet-related authentication settings to the kube-apiserver configuration. Two flags are involved: --kubelet-client-certificate and --kubelet-client-key. They just need to point at the same CA file the kubelet uses for client authentication.

# Check which certificate file the kubelet uses
grep client kubelet.config
    clientCAFile: /etc/kubernetes/pki/ca.crt
# Adjust the kube-apiserver configuration accordingly
vim /etc/kubernetes/apiserver
# add the following two lines
--kubelet-client-certificate=/etc/kubernetes/pki/ca.crt \
--kubelet-client-key=/etc/kubernetes/pki/ca.key \

Make this change to every kube-apiserver on the master nodes, then restart kube-apiserver.

kubectl exec -it myhttp -- bash
root@myhttp:/usr/local/apache2# ls
bin  build  cgi-bin  conf  error  htdocs  icons  include  logs  modules
root@myhttp:/usr/local/apache2# exit
exit
kubectl logs myhttp
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.61.131. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.61.131. Set the 'ServerName' directive globally to suppress this message
[Wed Apr 09 11:57:48.855928 2025] [mpm_event:notice] [pid 1:tid 1] AH00489: Apache/2.4.63 (Unix) configured -- resuming normal operations
[Wed Apr 09 11:57:49.254647 2025] [core:notice] [pid 1:tid 1] AH00094: Command line: 'httpd -D FOREGROUND'
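A way to double-check the fix independently of kubectl is to call the kubelet's authenticated API (port 10250) with the same certificate and key that kube-apiserver now presents. This is only a sketch: 192.168.88.61 stands in for any node, and the paths are the ones used above. An HTTP 401 means the credentials are still rejected; any other response means authentication itself is working:

# Print only the HTTP status code returned by the kubelet's /pods endpoint
curl -sk -o /dev/null -w '%{http_code}\n' \
  --cert /etc/kubernetes/pki/ca.crt \
  --key /etc/kubernetes/pki/ca.key \
  https://192.168.88.61:10250/pods
# Then re-run the commands that used to fail
kubectl exec myhttp -- ls
kubectl logs myhttp --tail=3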
2025-04-08
Installing a highly available k8s cluster from binaries (7): cluster optimization
Install containerd on the master hosts

To make it easier for the masters to communicate with the nodes, install containerd, kubelet and kube-proxy on them as well; this also simplifies installing the CNI network plugin. The installation steps are described in "Installing a highly available k8s cluster from binaries (6): deploying node services" and "containerd installation and configuration".

Install kubectl

On the master nodes the cluster is managed mainly through the kubectl command-line tool, so copy the kubectl binary from the release archive into /usr/bin.

cp /root/kubernetes/server/bin/kubectl /usr/bin/

Tidy up the configuration

During the installation the kubeconfig file was placed in /etc/kubernetes/, which means every kubectl invocation needs a --kubeconfig flag pointing at it. The fix is to copy the file into the default location kubectl reads.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/kubeconfig $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Enable tab completion

Load bash completion so kubectl commands can be completed with the Tab key.

source <(kubectl completion bash|tee /etc/bash_completion.d/kubectl)

Install the network plugin

The calico plugin relies on several kernel modules, so load the modules calico may need before installing the plugin.

# Modules needed by containerd (already configured when containerd was installed; shown here for reference)
cat /etc/modules-load.d/containerd.conf
overlay
br_netfilter
nf_conntrack
# Configure the modules calico needs
vim /etc/modules-load.d/calico.conf
# put the following lines in the file
ip_vs
ip_vs_rr
iptable_nat
iptable_filter
vxlan
ipip
tree /etc/modules-load.d/
/etc/modules-load.d/
├── calico.conf
└── containerd.conf
# Reload so the modules take effect
systemctl restart systemd-modules-load

What each module does
ip_vs: the IP Virtual Server module, a transport-layer (L4) load balancer supporting several scheduling algorithms.
ip_vs_rr: one of the ip_vs scheduling algorithms, round robin.
iptable_nat: network address translation (NAT) support, used for SNAT (source NAT) and DNAT (destination NAT).
iptable_filter: packet-filtering support, used for firewall rules (allowing or denying specific traffic).
vxlan: the VXLAN (Virtual Extensible LAN) protocol, which builds virtual overlay networks on top of the existing network and lifts the VLAN ID limit (up to 16 million virtual networks).
ipip: the IP-in-IP tunnelling protocol, which wraps the original IP packet inside another IP packet to communicate across networks.

How the modules are used in a container cluster
ip_vs / ip_vs_rr: load balancing for Kubernetes Services (IPVS mode).
iptable_nat / iptable_filter: Service traffic forwarding and network policy (iptables mode).
vxlan / ipip: cross-node container traffic (tunnel encapsulation used by Calico, Flannel and similar plugins).

Calico's module dependencies
IPIP mode depends on ipip; VXLAN mode depends on vxlan; network policy depends on iptable_filter and iptable_nat; kube-proxy in IPVS mode depends on ip_vs and ip_vs_rr.

Install the calico plugin

Calico website: Calico Documentation. Installation guide: Calico Open Source 3.29 (latest) documentation. Calico offers several installation methods; for simplicity the Manifest (static manifest) method is used here. In this mode calico provides two manifests depending on cluster size, with 50 nodes as the dividing line; this article uses the manifest for clusters with fewer than 50 nodes.

curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/calico.yaml -O
# The cluster's pod address range is 10.244.0.0/16, so the default pod CIDR in calico.yaml must be changed.
vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Find the two lines above (around lines 6291-6292 of calico.yaml) and remove the leading "#" and one space.
# YAML is strict about indentation: the "-" of the name line must line up with where the "#" used to be, and "value" must line up with "name".
kubectl apply -f calico.yaml
# Wait a while; once all newly created pods are Running, the plugin is installed.
kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-79949b87d-jn4mk   1/1     Running   0          3h11m
calico-node-69plr                         1/1     Running   0          3h11m
calico-node-f6b5v                         1/1     Running   0          3h11m
calico-node-jsqt2                         1/1     Running   0          3h11m
calico-node-kwb46                         1/1     Running   0          3h11m
calico-node-m8ppg                         1/1     Running   0          3h11m
calico-node-z9rkj                         1/1     Running   0          3h11m
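After calico settles, a few quick checks confirm the data path matches what was configured earlier in the series. This is a sketch of what to look for rather than required steps; tunl0 only appears when calico runs in its default IPIP mode:

# Every node should be Ready once its calico-node pod is Running
kubectl get nodes -o wide
# kube-proxy was started with --proxy-mode=ipvs, so Service VIPs should appear as IPVS virtual servers
ipvsadm -Ln | head -20
# The IPIP tunnel interface created by calico
ip -d link show tunl0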
2025-04-03
Configuring chrony, the NTP (Network Time Protocol) time synchronization service
The package you install is named chrony, the service you start with systemctl is called chronyd, and the command-line tool for checking synchronization status is chronyc.
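For reference, a minimal setup on a dnf-based host usually comes down to the following; ntp.aliyun.com is only an example upstream, substitute whatever time source your network should trust:

dnf -y install chrony
# Add an upstream server (example address) to the configuration
echo "server ntp.aliyun.com iburst" >> /etc/chrony.conf
systemctl enable --now chronyd
# chronyc is the client tool: 'sources' lists the upstreams, 'tracking' shows the current offset
chronyc sources -v
chronyc tracking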
2025-04-03
Installing a highly available k8s cluster from binaries (6): deploying node services
{callout color="#f50000"}node上需要部署的服务有kubelet,kube-proxy,containerd三台node的hostname分别为node001,node002,node003IP地址分别为88.61,88.62,88.63{/callout}containerd的安装与配置详见文章 containerd安装与配置kubelet安装与配置{callout color="#f0ad4e"}在node001,002,003上操作{/callout}cd /root #下载客户端软件包并解压 wget https://dl.k8s.io/v1.29.0/kubernetes-node-linux-amd64.tar.gz #复制软件包到/usr/bin目录 tar xf kubernetes-node-linux-amd64.tar.gz cd kubernetes/node/bin/ cp kubelet kube-proxy /usr/bin/ mkdir -p /etc/kubernetes/pki{callout color="#f0ad4e"}在node001上操作{/callout}#修改配置文件 cd /etc/kubernetes/ vim /usr/lib/systemd/system/kubelet.service [Unit] Description=Kubernetes Kubelet Server Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] EnvironmentFile=/etc/kubernetes/kubelet ExecStart=/usr/bin/kubelet $KUBELET_ARGS Restart=always [Install] WantedBy=multi-user.target vim /etc/kubernetes/kubelet KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --config=/etc/kubernetes/kubelet.config \ --hostname-override=192.168.88.61" #--hostname-override=192.168.88.61 表明 kubelet 会使用 192.168.88.61 这个 IP 地址作为该节点在 Kubernetes 集群中的名称,而非操作系统默认的主机名。这样一来,在 Kubernetes 集群里,这个节点就会以 192.168.88.61 来进行标识和管理。 #注意事项 #要保证 --hostname-override 指定的名称在集群内是唯一的,不然会引发节点注册冲突。 #若使用 IP 地址作为主机名,要确保该 IP 地址在集群内是可访问的。 vim /etc/kubernetes/kubelet.config kind: KubeletConfiguration apiVersion: kubelet.config.k8s.io/v1beta1 address: 0.0.0.0 port: 10250 cgroupDriver: systemd clusterDNS: ["10.245.0.100"] clusterDomain: cluster.local authentication: anonymous: enabled: false webhook: enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.crtsystemctl start kubelet && systemctl enable kubelet#把各种配置文件拷贝到node002,003 scp /usr/lib/systemd/system/kubelet.service node002:/usr/lib/systemd/system/kubelet.service scp /usr/lib/systemd/system/kubelet.service node003:/usr/lib/systemd/system/kubelet.service scp /etc/kubernetes/kubelet node002:/etc/kubernetes/kubelet scp /etc/kubernetes/kubelet node003:/etc/kubernetes/kubelet scp /etc/kubernetes/kubelet.config node002:/etc/kubernetes/kubelet.config scp /etc/kubernetes/kubelet.config node003:/etc/kubernetes/kubelet.config kube-proxy安装与配置{callout color="#f0ad4e"}在node001,002,003上操作{/callout}#加载需要的内核模块,安装需要的软件包 for i in overlay br_netfilter nf_conntrack;do modprobe ${i} echo "${i}" >>/etc/modules-load.d/containerd.conf done cat >/etc/sysctl.d/99-kubernetes-cri.conf<<EOF net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF dnf -y install iptables ipvsadm ipset nfs-utils{callout color="#f0ad4e"}在node001上操作{/callout}#创建所需的配置文件 vim /usr/lib/systemd/system/kube-proxy.service [Unit] Description=Kubernetes Kube-Proxy Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] EnvironmentFile=/etc/kubernetes/proxy ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS Restart=always [Install] WantedBy=multi-user.target vim /etc/kubernetes/proxy KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \ --hostname-override=192.168.88.61 \ --proxy-mode=ipvs \ --ipvs-strict-arp=true \ --cluster-cidr=169.169.0.0/16"#把配置文件拷贝到node002,003 scp /usr/lib/systemd/system/kube-proxy.service node002:/usr/lib/systemd/system/kube-proxy.service scp /usr/lib/systemd/system/kube-proxy.service node003:/usr/lib/systemd/system/kube-proxy.service scp /etc/kubernetes/proxy node002:/etc/kubernetes/proxy scp /etc/kubernetes/proxy node003:/etc/kubernetes/proxy #启动服务 systemctl start kube-proxy && systemctl enable kube-proxy{callout 
color="#f0ad4e"}在node002,003上操作{/callout}#在/etc/kubernetes/目录中的kubelet和proxy文件中,有个hostname-override配置项,把它的值改为当前主机的IP地址 #node002::--hostname-override=192.168.88.62 \ sed -i "/override=/s/61/62/" /etc/kubernetes/proxy sed -i "/override=/s/61/62/" /etc/kubernetes/kubelet #node003:--hostname-override=192.168.88.63 \ sed -i "/override=/s/61/63/" /etc/kubernetes/proxy sed -i "/override=/s/61/63/" /etc/kubernetes/kubelet #开启服务 systemctl start kube-proxy && systemctl enable kube-proxy