
Kubernetes Deployment

Setting Up Kubernetes

Preparation

Before setting up Kubernetes, you need to prepare some required configuration and tools, including:

  1. At least three nodes: the hosts that will run the Kubernetes cluster. In general, at least three physical or virtual machines are recommended, to ensure high availability and fault tolerance.

  2. Operating system: Kubernetes supports several operating systems, such as Ubuntu and CentOS. Use a recent release and configure it according to the official documentation.

  3. Container runtime: Kubernetes supports several container runtimes, such as Docker, containerd, and CRI-O. This guide uses Docker.

  4. Kubernetes tooling: Kubernetes provides tools such as kubeadm, kubectl, and kubelet for setting up, managing, and monitoring a cluster.

  5. Network plugin: Kubernetes needs a network plugin to handle container networking. Common choices include Calico, Flannel, and Weave Net.

The setup steps are:

  1. Install Docker and the Kubernetes tools on every node.

  2. Initialize the control-plane node with kubeadm, creating the cluster.

  3. Add the remaining nodes to the cluster.

  4. Install and configure a network plugin so containers can communicate.

  5. Deploy applications and services to the cluster, and manage and monitor them with kubectl.
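As a sketch of step 5, a minimal Deployment manifest might look like the following (the nginx-demo name and image tag are arbitrary examples, not from this guide):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo   # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```

Apply it with kubectl apply -f nginx-demo.yaml and watch it come up with kubectl get pods.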

Minimum Configuration

Kubernetes hardware requirements are determined by the Master and Node machines. The recommended minimums are:

  1. Master node:

    • CPU: at least 2 cores
    • Memory: at least 2 GB RAM
    • Disk: at least 20 GB
  2. Node (worker):

    • CPU: at least 2 cores
    • Memory: at least 2 GB RAM
    • Disk: at least 20 GB

Note that these are only minimums; if you want better performance and reliability, consider adding more CPU, memory, and disk.
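The minimums above can be checked quickly on each Linux node; this is just a convenience sketch (the thresholds come from the list above, and the disk check looks at the root filesystem):

```shell
# Quick check against the suggested minimums: 2 CPU cores, 2 GB RAM, 20 GB disk
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')

echo "CPUs: ${cpus}, RAM: ${mem_mb} MB, free disk: ${disk_gb} GB"
[ "${cpus}" -ge 2 ]      || echo "WARN: fewer than 2 CPU cores"
[ "${mem_mb}" -ge 2000 ] || echo "WARN: less than 2 GB RAM"
[ "${disk_gb}" -ge 20 ]  || echo "WARN: less than 20 GB free disk"
```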

Pre-Setup (master/node)

Before deploying the cluster, each node needs some preparation. The following commands disable the firewall, swap, and SELinux, and change the hostname:

  1. Disable the firewall:

    # CentOS 7/8
    systemctl stop firewalld
    systemctl disable firewalld
    
    # Ubuntu 16.04/18.04/20.04
    ufw disable
    
  2. Disable the swap partition (run on all nodes):

    swapoff -a
    sed -i '/ swap / s/^/#/' /etc/fstab
    
  3. Disable SELinux:

    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
  4. Change the hostname (run on every node):

    hostnamectl set-hostname <NEW_HOSTNAME>
    
  • Query basic system information:

    uname -a # show kernel information for the current OS
    cat /proc/version # show current system version information
    cat /etc/issue # show the distribution release information
    cat /etc/redhat-release # show the distribution release information (RHEL/CentOS)
    cat /proc/cpuinfo  # or lscpu: show CPU details, including model, clock speed, and cores
    getconf LONG_BIT # 32 means the CPU is running in 32-bit mode (it may still support 64-bit)
    hostname # show the server's hostname
    
    # Network information
    cat /etc/sysconfig/network-scripts/ifcfg-ens*
    cat /etc/sysconfig/network-scripts/ifcfg-lo
    ifconfig
    
    lsblk # list all available block devices and their relationships (does not include RAM disks)
    fdisk -l # inspect physical disk usage; can also partition disks
    df -k # show available disk space per partition
    
    env # show environment variables
    iptables -L # show firewall rules
    route -n # show the routing table
    swapon -s # show all swap partitions
    w # show active users
    last # show user login history
    

Pre-Setup Preparation (master/node)

  • Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld
    systemctl status firewalld # check the status

  • Disable swap

    vim /etc/fstab
    # comment out the swap line
    swapoff -a

  • Disable SELinux

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanent

  • Change hostnames

    hostnamectl set-hostname master # permanent, takes effect after reboot
    hostnamectl set-hostname nodeOne # permanent, takes effect after reboot
    hostnamectl set-hostname nodeTwo # permanent, takes effect after reboot
    reboot

  • Edit hosts

    vim /etc/hosts
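For example, using the addresses that appear later in this guide (172.13.8.15 for the master; assuming nodeOne is .16 and nodeTwo is .17 — adjust to your own network), the entries would look like:

```
172.13.8.15 master
172.13.8.16 nodeOne
172.13.8.17 nodeTwo
```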
    

Install Docker 20.10.1 (master/node)

#!/bin/bash

yum remove runc -y
yum install container-selinux yum-utils ipvsadm -y
rpm -ivh https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.4.3-3.1.el7.x86_64.rpm
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache
yum install docker-ce docker-ce-cli -y

mkdir -p /data/docker /data/kubelet  /etc/docker /data/kubelet/plugins/nasplugin.csi.alibabacloud.com
touch  /etc/docker/daemon.json

cat > /etc/docker/daemon.json <<EOF
{
    "data-root": "/data/docker",
    "registry-mirrors": [
        "https://6mrynkvw.mirror.aliyuncs.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
            ]
    }
EOF
systemctl enable docker && systemctl start docker

Configure the Aliyun yum Repository (master/node)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

Install kubeadm, kubelet, kubectl (master/node)

 yum install -y kubelet-1.19.2 kubectl-1.19.2 kubeadm-1.19.2
 systemctl enable kubelet

kubernetes init (master)

 echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
 kubeadm init \
  --apiserver-advertise-address=172.13.8.15 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.2 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

Success output

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.13.8.15:6443 --token etvpqx.urx0nmbm4w4wwmt9 \
    --discovery-token-ca-cert-hash sha256:744f6cd77ba585411ddde4a8ed181859c17643df0c90ba4ec7eb859b69875bcd

Distribute the kubeconfig and verify (master/node)

  # on the master, run from /etc/kubernetes:
  scp admin.conf root@172.13.8.17:/etc/kubernetes
  scp admin.conf root@172.13.8.16:/etc/kubernetes
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  kubectl get nodes  # verify

Install the Network Plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # if this fails to resolve, edit hosts
# append to /etc/hosts:
# 199.232.28.133 raw.githubusercontent.com

Join Nodes to the Cluster

# the command printed after a successful kubeadm init
kubeadm join 172.13.8.15:6443 --token etvpqx.urx0nmbm4w4wwmt9 \
    --discovery-token-ca-cert-hash sha256:744f6cd77ba585411ddde4a8ed181859c17643df0c90ba4ec7eb859b69875bcd

Postscript

  • If you did not save the join command after kubeadm init

    # generating a new token is enough
    kubeadm token create --print-join-command
    # the following command lists previously issued tokens
    kubeadm token list
    
  • Re-joining a node after a failed join

    # reset the node first
    kubeadm reset -f
    # then run the join again
    kubeadm join  xx.....
    
  • Restart kubelet

    systemctl daemon-reload
    systemctl restart kubelet
    

Maintenance

  • ROLES shows <none>

    kubectl label node nodeone node-role.kubernetes.io/worker=worker
    kubectl label node nodetwo node-role.kubernetes.io/worker=worker
    

Error

  • /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables # fix
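Writing to /proc takes effect immediately but does not survive a reboot. A common way to persist the setting is a sysctl drop-in (the file name k8s.conf is a conventional choice, not mandated):

```shell
# Ensure the bridge netfilter module is loaded (the sysctl key only exists with it)
modprobe br_netfilter 2>/dev/null || true
# Persist the setting across reboots; any *.conf under /etc/sysctl.d works
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply immediately; harmless if it partially fails in a restricted environment
sysctl --system >/dev/null 2>&1 || true
```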
    

Plugin YAML

  • kube-flannel.yml

    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
      - configMap
      - secret
      - emptyDir
      - hostPath
      allowedHostPaths:
      - pathPrefix: "/etc/cni/net.d"
      - pathPrefix: "/etc/kube-flannel"
      - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    rules:
    - apiGroups: ['extensions']
      resources: ['podsecuritypolicies']
      verbs: ['use']
      resourceNames: ['psp.flannel.unprivileged']
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
            image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
            image: quay.io/coreos/flannel:v0.15.1
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.15.1
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
    
This post is licensed under CC BY 4.0 by the author.