Managing containers with Kubernetes and containerd, using Calico as the network plugin (ARM architecture).
Task description: Use Kubernetes and containerd to manage containers. Install containerd and Kubernetes on linux5 through linux7, with linux5 as the master node and linux6 and linux7 as worker nodes; use containerd.sock as the container runtime-endpoint. The pod network is 10.244.0.0/16 and the services network is 10.96.0.0/16.
Configure Calico as the network plugin on the master node.
Import the nginx.tar image and set the homepage content to "HelloKubernetes". Use that image to create a deployment named web with 2 replicas; create a NodePort service for the deployment with port 80, targetPort 80, and nodePort 30000.
Environment:
Host FQDN | Hostname | Host IP | Node type
---|---|---|---
linux5.skills.com | linux5 | 192.168.1.15 | master |
linux6.skills.com | linux6 | 192.168.1.16 | node1 |
linux7.skills.com | linux7 | 192.168.1.17 | node2 |
k8s image and package list
[root@linux5 kube-1.27.1-linux5]# tree
.
├── calico
│ ├── calico-v3.25.0.tar
│ ├── calico.yaml
│ └── nginx.tar
├── k8sio
│ ├── coredns-v1.10.1.tar
│ ├── etcd-3.5.7-0.tar
│ ├── kube-apiserver-v1.27.1.tar
│ ├── kube-controller-manager-v1.27.1.tar
│ ├── kube-proxy-v1.27.1.tar
│ ├── kube-scheduler-v1.27.1.tar
│ └── pause-3.9.tar
└── rpm
├── containerd.io-1.6.21-3.1.el9.aarch64.rpm
├── container-selinux-2.189.0-1.el9.noarch.rpm
├── cri-tools-1.26.0-0.aarch64.rpm
├── kubeadm-1.27.1-0.aarch64.rpm
├── kubectl-1.27.1-0.aarch64.rpm
├── kubelet-1.27.1-0.aarch64.rpm
└── kubernetes-cni-1.2.0-0.aarch64.rpm
3 directories, 17 files
[root@linux5 kube-1.27.1-linux5]#
Configure passwordless SSH login between all hosts, including the local host.
[root@linux5 ~]# ssh-keygen #generate the key pair
[root@linux5 ~]# ssh-copy-id [email protected] #copy the public key
[root@linux5 ~]# ssh-copy-id [email protected]
[root@linux5 ~]# ssh-copy-id [email protected]
You will be prompted for the remote host's root password during each copy.
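The three ssh-copy-id calls above can be wrapped in a small loop. A sketch (the host list comes from the environment table; the DRY_RUN switch is a hypothetical convenience for previewing the commands first):

```shell
# Copy the local root public key to every node, including this host.
# DRY_RUN=1 prints the commands instead of running them.
copy_keys() {
  for ip in 192.168.1.15 192.168.1.16 192.168.1.17; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo ssh-copy-id "root@$ip"
    else
      ssh-copy-id "root@$ip"   # prompts for the remote root password
    fi
  done
}
```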
Disable SELinux enforcement, swap, and the firewall, then load the required kernel modules and enable kernel traffic forwarding (run on all nodes):
setenforce 0
swapoff -a
systemctl stop firewalld
modprobe br_netfilter
modprobe overlay
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.ipv4.ip_forward=1
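The sysctl -w settings above do not survive a reboot. A sketch for persisting both the module loads and the sysctl keys (the directory parameter exists only so the function can be exercised against a scratch path; in practice it defaults to /etc):

```shell
# Write modules-load.d and sysctl.d drop-ins so the settings
# are reapplied automatically at boot.
persist_kernel_settings() {
  etc="${1:-/etc}"
  cat > "$etc/modules-load.d/k8s.conf" <<'EOF'
overlay
br_netfilter
EOF
  cat > "$etc/sysctl.d/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
}
```

After writing the files, `sysctl --system` applies them immediately.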
Install the packages required by Kubernetes and containerd
[root@linux5 ~]# cd kube-1.27.1/rpm/
[root@linux5 rpm]# yum install *.rpm
Last metadata expiration check: 0:08:41 ago on Fri 26 Apr 2024 10:14:54 AM CST.
Dependencies resolved.
=================================================================================================================================================================================================================
Package Architecture Version Repository Size
=================================================================================================================================================================================================================
Installing:
container-selinux noarch 3:2.189.0-1.el9 @commandline 47 k
containerd.io aarch64 1.6.21-3.1.el9 @commandline 25 M
cri-tools aarch64 1.26.0-0 @commandline 17 M
kubeadm aarch64 1.27.1-0 @commandline 9.1 M
kubectl aarch64 1.27.1-0 @commandline 9.3 M
kubelet aarch64 1.27.1-0 @commandline 17 M
kubernetes-cni aarch64 1.2.0-0 @commandline 33 M
Installing dependencies:
checkpolicy aarch64 3.5-1.el9 AppStream 340 k
conntrack-tools aarch64 1.4.7-2.el9 AppStream 218 k
libnetfilter_cthelper aarch64 1.0.0-22.el9 AppStream 22 k
libnetfilter_cttimeout aarch64 1.0.0-19.el9 AppStream 22 k
libnetfilter_queue aarch64 1.0.5-1.el9 AppStream 27 k
policycoreutils-python-utils noarch 3.5-1.el9 AppStream 71 k
python3-audit aarch64 3.0.7-103.el9 AppStream 83 k
python3-distro noarch 1.5.0-7.el9 AppStream 36 k
python3-libsemanage aarch64 3.5-1.el9 AppStream 78 k
python3-policycoreutils noarch 3.5-1.el9 AppStream 2.0 M
python3-setools aarch64 4.4.1-1.el9 cdrom 534 k
python3-setuptools noarch 53.0.0-12.el9 cdrom 839 k
socat aarch64 1.7.4.1-5.el9 AppStream 297 k
Transaction Summary
=================================================================================================================================================================================================================
Install 20 Packages
Total size: 115 M
Installed size: 410 M
Is this ok [y/N]: y
Configure containerd
Generate containerd's default configuration file in its configuration directory:
[root@linux5 ~]# containerd config default > /etc/containerd/config.toml
Edit the configuration file /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true #changed from false to true
Restart the service
[root@linux5 ~]# systemctl restart containerd.service
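The SystemdCgroup edit can also be done non-interactively with sed. A sketch (the file-path parameter exists only so the function can be tested against a copy; it defaults to the real config):

```shell
# Flip SystemdCgroup from false to true in containerd's config.
set_systemd_cgroup() {
  f="${1:-/etc/containerd/config.toml}"
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$f"
}
```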
Import the k8s and calico images
[root@linux5 ~]# cd kube-1.27.1/k8sio/
ctr -n k8s.io images import coredns-v1.10.1.tar
ctr -n k8s.io images import kube-apiserver-v1.27.1.tar
ctr -n k8s.io images import kube-proxy-v1.27.1.tar
ctr -n k8s.io images import pause-3.9.tar
ctr -n k8s.io images import etcd-3.5.7-0.tar
ctr -n k8s.io images import kube-controller-manager-v1.27.1.tar
ctr -n k8s.io images import kube-scheduler-v1.27.1.tar
[root@linux5 k8sio]# cd ../calico/
ctr -n k8s.io images import calico-v3.25.0.tar
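Instead of importing each tarball by hand, every archive in a directory can be imported in one loop. A sketch (the DRY_RUN switch is a hypothetical convenience for previewing the commands):

```shell
# Import every *.tar image archive in a directory into containerd's
# k8s.io namespace. DRY_RUN=1 prints the commands instead of running them.
import_tars() {
  dir="$1"
  for t in "$dir"/*.tar; do
    [ -e "$t" ] || continue
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo ctr -n k8s.io images import "$t"
    else
      ctr -n k8s.io images import "$t"
    fi
  done
}
```

For example: import_tars kube-1.27.1/k8sio && import_tars kube-1.27.1/calico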
View the imported images
[root@linux5 calico]# ctr -n k8s.io images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/calico/cni:v3.25.0 application/vnd.docker.distribution.manifest.v2+json sha256:111f6eccb6cd9f67c1a8f37bb67f2c7d63b4a9c1e8b60cb2dfa7c21cc8a92528 181.6 MiB linux/arm64 io.cri-containerd.image=managed
docker.io/calico/kube-controllers:v3.25.0 application/vnd.docker.distribution.manifest.v2+json sha256:b61f91501f94cb21deea21c621eb0032a382da44a48f2637f74473f49ff72eb6 61.4 MiB linux/arm64 io.cri-containerd.image=managed
docker.io/calico/node:v3.25.0 application/vnd.docker.distribution.manifest.v2+json sha256:f69647f98b0d201629cef694d2cf5c234f3db7cb3408efb41f65502a9d6ae58f 241.5 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/coredns/coredns:v1.10.1 application/vnd.docker.distribution.manifest.v2+json sha256:fa69fab41ff1a01f89745ad24a37778d712f9be83bc6a96cf7f436d9955bfe20 49.0 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/etcd:3.5.7-0 application/vnd.docker.distribution.manifest.v2+json sha256:928c00857c0e05e0b2382fd83d37818f94e892968cbc109c59cbcecd30da2f41 173.8 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/kube-apiserver:v1.27.1 application/vnd.docker.distribution.manifest.v2+json sha256:debd6559509d1d3d385b171e45d7e81bfe9bef9e60c1bc1d818edf511c12709f 110.8 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/kube-controller-manager:v1.27.1 application/vnd.docker.distribution.manifest.v2+json sha256:f0e8b6fa713bd664cd1150a92f5531d0cc213f8befe3479e045636bf9810508c 103.6 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/kube-proxy:v1.27.1 application/vnd.docker.distribution.manifest.v2+json sha256:dc1514f41621f71af3a8d3ce6febacd851dc381768a8cc2a8ae06fef4ad31f12 64.9 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/kube-scheduler:v1.27.1 application/vnd.docker.distribution.manifest.v2+json sha256:cd92259b1aa5efdb020e6cf15939e7813a71b25aa4ff31be7b904c15eb653326 54.9 MiB linux/arm64 io.cri-containerd.image=managed
registry.k8s.io/pause:3.9
From the images listed above we can see that the pause version is 3.9 and the kube-apiserver version is v1.27.1. We now need to return to /etc/containerd/config.toml and set the sandbox image to match the actual pause version; the kube-apiserver version will be needed later.
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "registry.k8s.io/pause:3.9" #set this to the imported version
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
Restart the containerd service
[root@linux5 ~]# systemctl restart containerd.service
The following steps are performed on linux5 only.
Generate the default kubeadm initialization configuration file:
[root@linux5 ~]# kubeadm config print init-defaults > kubeadm.yaml
Edit the file:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.15   # IP address of the master host
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: linux5.skills.com          # hostname of the master host
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.27.1          # the kube-apiserver version noted earlier
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16         # pod network
  serviceSubnet: 10.96.0.0/16      # services network
scheduler: {}
Initialize the cluster
[root@linux5 ~]# kubeadm init --config=kubeadm.yaml
When initialization completes, it prints the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.15:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0b1988f4875a8374444d920154faee283668895053f29b8b248657f584b9092b
Following the prompt, run:
[root@linux5 ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@linux5 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
View pod information
[root@linux5 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d78c9869d-cfjvn 0/1 Pending 0 23m
kube-system coredns-5d78c9869d-lqqqm 0/1 Pending 0 23m
kube-system etcd-linux5.skills.com 1/1 Running 0 23m
kube-system kube-apiserver-linux5.skills.com 1/1 Running 0 23m
kube-system kube-controller-manager-linux5.skills.com 1/1 Running 0 23m
kube-system kube-proxy-zppns 1/1 Running 0 23m
kube-system kube-scheduler-linux5.skills.com 1/1 Running 0 23m
Because the network plugin has not been installed yet, the DNS pods are not ready.
Join the other nodes
On each of the other nodes, run the join command printed by the master during initialization:
[root@linux6 k8sio]# kubeadm join 192.168.1.15:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0b1988f4875a8374444d920154faee283668895053f29b8b248657f584b9092b
The join succeeded when you see the following output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Back on the master node, check the node list:
[root@linux5 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
linux5.skills.com NotReady control-plane 97m v1.27.1
linux6.skills.com NotReady <none> 48s v1.27.1
linux7.skills.com NotReady <none> 47s v1.27.1
[root@linux5 ~]#
Configure the Calico network plugin
First import the calico image on all nodes:
[root@linux5 calico]# ctr -n k8s.io images import calico-v3.25.0.tar
Apply the Calico manifest:
[root@linux5 calico]# kubectl apply -f calico.yaml
Check node status:
[root@linux5 calico]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
linux5.skills.com Ready control-plane 101m v1.27.1
linux6.skills.com Ready <none> 5m17s v1.27.1
linux7.skills.com Ready <none> 5m16s v1.27.1
[root@linux5 calico]#
All nodes are now Ready.
Change the NodePort port range
Edit the file /etc/kubernetes/manifests/kube-apiserver.yaml:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.15:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.15
    - --service-node-port-range=1-32767   # add this line
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
After saving and exiting, run:
[root@linux5 ~]# kubectl get pods -A
The connection to the server 192.168.1.15:6443 was refused - did you specify the right host or port?
The connection-refused error is expected: changing the static pod manifest causes kubelet to restart the kube-apiserver pod, and the API server recovers on its own after a few minutes.
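Rather than retrying by hand, a small polling loop can wait for the API server to come back. A sketch (`kubectl get --raw /healthz` queries the apiserver's health endpoint):

```shell
# Poll the apiserver health endpoint every 5 seconds until it responds.
wait_for_apiserver() {
  until kubectl get --raw /healthz >/dev/null 2>&1; do
    sleep 5
  done
}
```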
[root@linux5 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6c99c8747f-drg66 0/1 Running 0 54m
kube-system calico-node-klk4v 1/1 Running 0 54m
kube-system calico-node-zg62q 1/1 Running 0 54m
kube-system calico-node-zzksd 1/1 Running 0 54m
kube-system coredns-5d78c9869d-cfjvn 1/1 Running 0 155m
kube-system coredns-5d78c9869d-lqqqm 1/1 Running 0 155m
kube-system etcd-linux5.skills.com 1/1 Running 0 155m
kube-system kube-apiserver-linux5.skills.com 0/1 Pending 0 1s
kube-system kube-controller-manager-linux5.skills.com 1/1 Running 1 (39s ago) 155m
kube-system kube-proxy-d859t 1/1 Running 0 59m
kube-system kube-proxy-jkqgz 1/1 Running 0 59m
kube-system kube-proxy-zppns 1/1 Running 0 155m
kube-system kube-scheduler-linux5.skills.com 1/1 Running 1 (39s ago) 155m
[root@linux5 ~]#
Create the nginx containers
Import the nginx image on all nodes:
[root@linux5 calico]# ctr -n k8s.io images import nginx.tar
unpacking docker.io/library/nginx:latest (sha256:ea9f9bc148c5c4fdcfdc8297f9cc72da0fca20e6cdf9864cff95b07fa8ae87dd)...done
[root@linux5 calico]#
Create the nginx deployment
[root@linux5 ~]# kubectl create deployment web --image=docker.io/library/nginx:latest --replicas=2
deployment.apps/web created
[root@linux5 ~]#
Change the image pull policy
[root@linux5 ~]# kubectl edit deployment web
    spec:
      containers:
      - image: docker.io/library/nginx:latest
        imagePullPolicy: IfNotPresent   # change this line
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
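The same change can be made without an interactive editor via kubectl patch. A sketch (wrapped in a function only so it can be exercised with a stubbed kubectl):

```shell
# Set imagePullPolicy of the first container in the web deployment
# to IfNotPresent with a JSON patch.
set_pull_policy() {
  kubectl patch deployment web --type=json \
    -p '[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"IfNotPresent"}]'
}
```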
View the pods
[root@linux5 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-7977ffdfdf-lr8w2 1/1 Running 0 38s
web-7977ffdfdf-v7pxc 1/1 Running 0 40s
Configure the nginx containers
Edit the web page content in each container:
[root@linux5 ~]# kubectl exec -it web-7977ffdfdf-lr8w2 -- bash
root@web-7977ffdfdf-lr8w2:/# echo HelloKubernetes > /usr/share/nginx/html/index.html
root@web-7977ffdfdf-lr8w2:/# exit
exit
[root@linux5 ~]# kubectl exec -it web-7977ffdfdf-v7pxc -- bash
root@web-7977ffdfdf-v7pxc:/# echo HelloKubernetes > /usr/share/nginx/html/index.html
root@web-7977ffdfdf-v7pxc:/# exit
exit
[root@linux5 ~]#
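Editing each pod by hand works for two replicas, but the change is lost whenever a pod is recreated, and it does not scale. A loop over the deployment's pods is less error-prone. A sketch (the app=web label is the one `kubectl create deployment web` applies):

```shell
# Write the required homepage into every pod of the web deployment.
set_homepage() {
  msg="$1"
  for pod in $(kubectl get pods -l app=web -o name); do
    kubectl exec "$pod" -- sh -c "echo '$msg' > /usr/share/nginx/html/index.html"
  done
}
```

For example: set_homepage HelloKubernetes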
Create and edit the service
[root@linux5 ~]# kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
service/web exposed
[root@linux5 ~]# kubectl edit svc web
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-04-26T06:30:58Z"
  labels:
    app: web
  name: web
  namespace: default
  resourceVersion: "17851"
  uid: 7d5f8874-6b83-4b65-bc92-0b5aefb93574
spec:
  clusterIP: 10.96.28.14
  clusterIPs:
  - 10.96.28.14
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp-80      # any name will do
    nodePort: 30000   # the NodePort
    port: 80          # the Service port
    protocol: TCP
    targetPort: 80    # the Pod port
  - name: tcp-443
    nodePort: 443
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: web
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
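Alternatively, the nodePort can be set with kubectl patch instead of kubectl edit. A sketch (index 0 assumes the 80/TCP port is the first entry in the ports list; wrapped in a function only so it can be exercised with a stubbed kubectl):

```shell
# Point the first service port of web at nodePort 30000.
set_nodeport() {
  kubectl patch svc web --type=json \
    -p '[{"op":"replace","path":"/spec/ports/0/nodePort","value":30000}]'
}
```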
Test:
[root@linux5 ~]# curl 127.0.0.1:30000
HelloKubernetes
[root@linux5 ~]#
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.