
[Kubernetes] 4. How to Install Kubernetes (K8S)

## Development-oriented setups (Minikube, Docker for Mac/Windows) and CSP-managed offerings are out of scope here

## Assumes installing Kubernetes in an on-premise environment

## Uses tools such as kubeadm

 

Precautions

  • Do not use a Kubernetes version that is either too new or too old
  • Verify NTP synchronization on all servers
  • Verify that every server has a distinct MAC address
  • Verify that every server has at least 2 CPUs and 2 GB of memory
  • Disable memory swap on all servers with the swapoff -a command (see the check sketch below)
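
These checks can be scripted. A minimal sketch, assuming RHEL-family hosts with systemd; run it on every node:

## Quick pre-flight checks (run on every node)
# timedatectl | grep 'System clock synchronized'   # should report "yes"
# ip link show | grep link/ether                   # compare MAC addresses across nodes
# nproc                                            # must be 2 or more
# free -g                                          # total memory must be 2 GB or more
# sudo swapoff -a                                  # disable swap immediately
# sudo sed -i '/ swap / s/^/#/' /etc/fstab         # keep swap disabled across reboots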

 

Installing Kubernetes with kubeadm

  • kubeadm is one of the installation methods recommended by the Kubernetes community
  • kubeadm works on any ordinary Linux host, whether On-premise or on Cloud infrastructure

 

1. Add the Kubernetes Repository

## Run on all Nodes

# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
> enabled=1
> gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kubelet kubeadm kubectl
> EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
 
# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
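
Note: the packages.cloud.google.com repositories used above were deprecated and frozen in late 2023; newer installs should point at the community-owned pkgs.k8s.io repositories instead. A sketch of the equivalent repo file, pinned to the v1.27 minor release used in this post:

# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF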

 

2. Install kubeadm

## Run on all Nodes

  • Installs the packages Kubernetes requires
  • If no version is specified, the latest Kubernetes version is installed
  • To pin a version, use name-version form (yum syntax), for example:

# sudo yum install -y kubelet-1.27.3-0 kubeadm-1.27.3-0 kubectl-1.27.3-0 --disableexcludes=kubernetes

# sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Kubernetes                                                                         4.0 kB/s | 1.4 kB     00:00
Dependencies resolved.
===================================================================================================================
 Package                             Architecture        Version                     Repository               Size
===================================================================================================================
Installing:
 kubeadm                             x86_64              1.27.3-0                    kubernetes               11 M
 kubectl                             x86_64              1.27.3-0                    kubernetes               11 M
 kubelet                             x86_64              1.27.3-0                    kubernetes               20 M
Installing dependencies:
 conntrack-tools                     x86_64              1.4.4-11.el8                baseos                  204 k
 cri-tools                           x86_64              1.26.0-0                    kubernetes              8.6 M
 kubernetes-cni                      x86_64              1.2.0-0                     kubernetes               17 M
 libnetfilter_cthelper               x86_64              1.0.0-15.el8                baseos                   24 k
 libnetfilter_cttimeout              x86_64              1.0.0-11.el8                baseos                   24 k
 libnetfilter_queue                  x86_64              1.0.4-3.el8                 baseos                   31 k
 socat                               x86_64              1.7.4.1-1.el8               appstream               323 k
 
Transaction Summary
===================================================================================================================
Install  10 Packages
 
Total download size: 67 M
Installed size: 284 M
Downloading Packages:
(1/10): libnetfilter_cthelper-1.0.0-15.el8.x86_64.rpm                               65 kB/s |  24 kB     00:00
(2/10): libnetfilter_cttimeout-1.0.0-11.el8.x86_64.rpm                             130 kB/s |  24 kB     00:00
(3/10): conntrack-tools-1.4.4-11.el8.x86_64.rpm                                    348 kB/s | 204 kB     00:00
(4/10): libnetfilter_queue-1.0.4-3.el8.x86_64.rpm                                  215 kB/s |  31 kB     00:00
(5/10): socat-1.7.4.1-1.el8.x86_64.rpm                                             412 kB/s | 323 kB     00:00
(6/10): 693f3c83140151a953a420772ddb9e4b7510df8ae49a79cbd7af48e82e7ad48e-kubectl-1  13 MB/s |  11 MB     00:00
(7/10): 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools 7.3 MB/s | 8.6 MB     00:01
(8/10): 413f2a94a2f6981b36bf46ee01ade9638508fcace668d6a57b64e5cfc1731ce2-kubeadm-1 8.3 MB/s |  11 MB     00:01
(9/10): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernete  18 MB/s |  17 MB     00:00
(10/10): 484ddb88e9f2aaff13842f2aa730170f768e66fd4d8a30efb139d7868d224fcf-kubelet-  11 MB/s |  20 MB     00:01
-------------------------------------------------------------------------------------------------------------------
Total                                                                               16 MB/s |  67 MB     00:04
Kubernetes                                                                         3.9 kB/s | 975  B     00:00
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                           1/1
  Installing       : kubectl-1.27.3-0.x86_64                                                                  1/10
  Installing       : cri-tools-1.26.0-0.x86_64                                                                2/10
  Installing       : libnetfilter_queue-1.0.4-3.el8.x86_64                                                    3/10
  Running scriptlet: libnetfilter_queue-1.0.4-3.el8.x86_64                                                    3/10
  Installing       : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                               4/10
  Running scriptlet: libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                               4/10
  Installing       : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                5/10
  Running scriptlet: libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                5/10
  Installing       : conntrack-tools-1.4.4-11.el8.x86_64                                                      6/10
  Running scriptlet: conntrack-tools-1.4.4-11.el8.x86_64                                                      6/10
  Installing       : socat-1.7.4.1-1.el8.x86_64                                                               7/10
  Installing       : kubernetes-cni-1.2.0-0.x86_64                                                            8/10
  Installing       : kubelet-1.27.3-0.x86_64                                                                  9/10
  Installing       : kubeadm-1.27.3-0.x86_64                                                                 10/10
  Running scriptlet: kubeadm-1.27.3-0.x86_64                                                                 10/10
  Verifying        : socat-1.7.4.1-1.el8.x86_64                                                               1/10
  Verifying        : conntrack-tools-1.4.4-11.el8.x86_64                                                      2/10
  Verifying        : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                3/10
  Verifying        : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                               4/10
  Verifying        : libnetfilter_queue-1.0.4-3.el8.x86_64                                                    5/10
  Verifying        : cri-tools-1.26.0-0.x86_64                                                                6/10
  Verifying        : kubeadm-1.27.3-0.x86_64                                                                  7/10
  Verifying        : kubectl-1.27.3-0.x86_64                                                                  8/10
  Verifying        : kubelet-1.27.3-0.x86_64                                                                  9/10
  Verifying        : kubernetes-cni-1.2.0-0.x86_64                                                           10/10
 
Installed:
  conntrack-tools-1.4.4-11.el8.x86_64                     cri-tools-1.26.0-0.x86_64
  kubeadm-1.27.3-0.x86_64                                 kubectl-1.27.3-0.x86_64
  kubelet-1.27.3-0.x86_64                                 kubernetes-cni-1.2.0-0.x86_64
  libnetfilter_cthelper-1.0.0-15.el8.x86_64               libnetfilter_cttimeout-1.0.0-11.el8.x86_64
  libnetfilter_queue-1.0.4-3.el8.x86_64                   socat-1.7.4.1-1.el8.x86_64
 
Complete!
 
# sudo systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
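
At this point kubelet restarts in a crash loop every few seconds. That is expected: it is waiting for configuration from kubeadm init or kubeadm join. Its state can be checked with:

# systemctl status kubelet --no-pager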

 

3. Initialize the Kubernetes Cluster

## Run on the Master Node only

# kubeadm init --apiserver-advertise-address ###.###.###.### --pod-network-cidr=192.168.0.0/16
 
--apiserver-advertise-address
The IP address that other Nodes use when connecting to the Master Node
 
--pod-network-cidr
The network range (CIDR) that Kubernetes assigns to Containers (Pods)
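
The same initialization can also be expressed as a kubeadm configuration file, which is easier to keep under version control than flags. A sketch using the kubeadm.k8s.io/v1beta3 API that Kubernetes 1.27 accepts:

# cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "###.###.###.###"   # same as --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"           # same as --pod-network-cidr
EOF
# kubeadm init --config kubeadm-config.yaml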

 

## If kubeadm init fails with the following error, apply the Solution below

## Apply on all Nodes

https://github.com/containerd/containerd/issues/4581

# kubeadm init --apiserver-advertise-address ###.###.###.### --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.27.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2023-07-01T19:24:18-07:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


## Solution
rm /etc/containerd/config.toml
systemctl restart containerd
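
Deleting config.toml works because the stock containerd package ships a config that disables the CRI plugin, which kubeadm requires. An alternative sketch (assuming containerd 1.6 or later) regenerates a complete default config and enables the systemd cgroup driver instead of leaving the file removed:

# containerd config default | sudo tee /etc/containerd/config.toml
# sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# sudo systemctl restart containerd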


# kubeadm init --apiserver-advertise-address ###.###.###.### --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.27.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0701 19:30:06.070285   33982 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 10.131.231.163]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [###.###.###.### 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [###.###.###.### 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.001516 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: g406fy.qce8qyj6z0u2xrk3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.131.231.163:6443 --token g406fy.qce8qyj6z0u2xrk3 \
        --discovery-token-ca-cert-hash sha256:cbac5a5d7d1f7e2b7310d10c5af5622fceaf4212c6b662d5010fec72d1e2e646

 

  • Copy the three commands below from the output above and run them on the Master Node
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1EY3dNakF5TXpBeU1Wb1hEVE16TURZeU9UQXlNekF5TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0piCmF1dzN5N1RkVEFaZTljR0JVa1p4NC9CTkF5T1QycXI4Y2h2SjVRMGJpdDR4SmxNUnQvZ3dqb1duSXRUMEFmdkMKMWttMHJkUHIvbU56UTkwYllSRXlvK3RNcTdJN0NoZHBsaU12MFRUMVRJaTd1WHNUMm84MFdycHpVV0JSU3NOVwp5anVra1BjdHJWNFNvUmpmbVpKeXl5cXRyVWxOTDlPd21leDV2TXQ5QmVkOVZ2T2RKWCtEZ3dUdDRJMUtHWm8yCnNNMVJ6Z01qbjl2a05uTEVQWFdIR2pPZTF5cm5JdXZBVlVDeVpWT1RyWmE2WmhsVzBqWE8vbUhEVVZnWndITkIKVXVTaG04Ri9OaldieS9WVllwQmNhTExPNllORDFKOU9oeXpiMy94QURVMm5tekE3K05JRUx6REVGN0Zua3RjRgowQklHaG5EN1gza3Njck5PQ1JzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMTXp0MTdHNkd1Wi9JeDNsZ2RIQ1R3SGtoR2VNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQXFnTlZnbUdMNnBhQ2hmTXhVaQo2SXVjOW1tSWZyNjdqSTliMkVvUzdqTTdadHU3czBIYVRnQVE3SHNSMWx6QVZVYlRNYjVvYjJRQTd5Z2xpc2wxCmgwV25lUWp5M2xzbFY1OXRVbmhDRm9KaVphK0RBVzhQdGZnQ0o1SzFPRjM4anhzZ2FZRjY5QnFwUXVxdWFja0cKNFFhTHA5bG9KYmlmcnVLSmx2dGhnZXVvcXZhNmNtWHQrZUdRd291RkNSWjJCLzlEcWFpQnlFSDlCb2ZWRHFOZAp1YjZzQXRTQTllSXFqMW9lVzV0Wk5WK0RKNlEvSmkrM1ZNdFh4ak50MHdQdDFvY1hXUER1LzF3d3h5ZWNRbTgvCkpwR1lTRGl4OXNOUW9EODRJTmk2RFZmVEtTNDdqNm51dko0WGpzekd6Q1U5eWxaMDNVeTluTE04UVpWNlJ2MDAKRGJjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.131.231.163:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWjNvQjFMc1NRLzR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBM01ESXdNak13TWpGYUZ3MHlOREEzTURFd01qTXdNak5hTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXMxL2YvUVhxMWlBTWlkK0sKZDdwQlBTZXQxSHZjNHRsK0NCTkU2K3Q5dXdweGMzUk4vbjJVYkZ3VzhzSFlaQU8vekZXemR6cDJNcXdBRVU5Swo2N0RsWDYrcHEwVWhsV1RmS3piQ2x2dFducXljWEtvTlM5Y2VETkcvQlhYYVlFSmwrQVVMRXFvSitLamNXeHg4CmdzZGdDaERDdFpLMmZGdUtWTTQzQTRHb2tVb09RK0F2NHozOEw2Qy9oZk1rdkM3N1UzYVZKWisrYndRRk5nVVIKUitwcGJwR3hSZFhEb3praHlwSW5LQlhRMGQweDg4Ky9aaHRNNGlaMUlMa2k5b3F3NXZsVG1kWEdJanlKZjBGTgpoV2krWXA3aTdqZWVscStaQzJEY0tkRk1WSXJnZVpCYmJWZkdSRURnNHlOdzRvZlk3NWszZDk1WWorYnRIdEFnClc0Q3ZPd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTek03ZGV4dWhybWZ5TWQ1WUhSd2s4QjVJUgpuakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBd0RXS0JhT0Rndzh0OEU5dm5USENEblVwSjNlZEZPMFdqYjdUCkYzSzdpa2VKd1lMQW02SjNQVzNjUFI1dEFFS1doN1RPejhVZjRmczRaWDZCYTlMVUFlYS9EV2VrM2RpU2xicEEKWEJxZ0twcW1MQW5YbFNwa0RxNTVQWDQ0ZTBoZ3VDV1RxNXRTZjBYTFBJdHV0UU9pSVNSQnpEamsrVVJvN1ZLcApsTDVYWmgvVHZOK0NibjU0TzZwVXlRdUJMYTB6N3BFY3FHZ3V1dnduU0pjcll4M0Q1RXFXU0wraTB0UTRlL2p6Ckx4QkZzdXhGamZ1Ylk2MmtUTFFTcklSVGNkdC8zcDZkV0UwVE8wSVhJR3FtVWNHQmlGVGppTU5xM0FUMTJXOGgKSUpTSU02Uk4rR1o3U042VVpFR1JPVGUvZzJMcFZtNGtKanNERFlkZ3Jxb01rMEhLVWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBczEvZi9RWHExaUFNaWQrS2Q3cEJQU2V0MUh2YzR0bCtDQk5FNit0OXV3cHhjM1JOCi9uMlViRndXOHNIWVpBTy96Rld6ZHpwMk1xd0FFVTlLNjdEbFg2K3BxMFVobFdUZkt6YkNsdnRXbnF5Y1hLb04KUzljZURORy9CWFhhWUVKbCtBVUxFcW9KK0tqY1d4eDhnc2RnQ2hEQ3RaSzJmRnVLVk00M0E0R29rVW9PUStBdgo0ejM4TDZDL2hmTWt2Qzc3VTNhVkpaKytid1FGTmdVUlIrcHBicEd4UmRYRG96a2h5cEluS0JYUTBkMHg4OCsvClpodE00aVoxSUxraTlvcXc1dmxUbWRYR0lqeUpmMEZOaFdpK1lwN2k3amVlbHErWkMyRGNLZEZNVklyZ2VaQmIKYlZmR1JFRGc0eU53NG9mWTc1azNkOTVZaitidEh0QWdXNEN2T3dJREFRQUJBb0lCQUJMc2NITC9KdEZFUEU1bgpXUEpjb2ZsVHNGRVVhQzgraHI5UFdSd1NrZ2NqaU9pSFFwc3dvSEgySFMyckthc1RnaTZLZEE2R0NtWTZJeCt3Clg5VVJxb0UzeFF5ZWxIWndWK0wxT2Y0M3NlRzNrQjl1aVV5USswaWE4QzRoU3RLUTdyVUZ3eTlLNVJab3FpYXEKa0xBelhIeHpYRGRQclJUZGkzQjYrTzdUeFBiZXpmN3N2cjdxaFhQMnQ3cHZiN1FRblRYNFBnK1QvSzJzLzBtRgpMWHYvRzc5UlZCZkxFaDZ5UTV5Q1pOdDhXYjUyK0VKZUI5aFBnbmh0aXQzUWJRU0NCcjBnQVgzNzJiOGZ3QVZwCnFYV3RuZW4vdDVFM3NZWnZjVWgzYlh4RHdCcDB0QkZKcjVCZ3dyY3lXdGNLMkFnMmhjWGcyQjFlZTdxSUlRVEcKZFlKSFlkRUNnWUVBMkc4VHRZMnM5VGFNL1B0aHVYdml4ek85N1VLVkpsMm5YcGV0K3NoYldRbk5UcWZVWTZVMgpzR1dzb0VwWkxsbTl1cm14UjZwYjBLMUh1b3VuOVJGUFdMbmloTTltc3dLOS9qNnZRWU52OTh3ZG5ybzVrOHBnCnpMNHhFSWxMS3pZWVFyRmwyMFJja3pmdnBzTXF6alhBVHZQZGdJYzc5Nmh0MEtQSEVZSlh1WjBDZ1lFQTFDcHoKUWJKT3ZJZzI4a00yVkQwLzhBeEVCWkh6czhhbm9XcHBmV05TcFlVV213RDRmZXNKc0U5cnpzYVdjN1k2QmhsZwpnOHFWR1NCYllyU2p4ZEVmK1F6R1NpbXdud3ZqbFZxNUNsUTBWRVBOUXEvNzNmM1FMYThoelI2Vy8rK0s0dCtUClp0VHJZcjFXYU16bFY0dmhLV2RpQ1lBelFraHB6SXRZcDNHSEFMY0NnWUEwdWlyVkNpVGV3R0ZzcUZsUWRNdjAKdDdoSGV2Y1hGNjNVcjZNayszYTFwRnV1RTlqOFJaMmpMOEgwY3VoekVFM1dsYktJd1FvSk5vM0k5b2orZlM4VgpjSU9zMDFJenZqRkhKVUpROVpKcmpnQ1JVTkVDSGtXaTI1cmNhbll6bXNRaVMxR0RMNDVXRjBSOUhnTHBwZEtwCjZXTkhFcXNiVko4Y095b2VLK0R2U1FLQmdDaWltTUNVYmhBZDVxZ3Z4MUFMQ3h2bXZZQnptOEZxNHVBL2lVVEMKcVNtYWUrSGtKYk80T0hyVU8wbTNMMG1xTlNMRjZYNVVab29SY0c5UE9hN0JodVlrVkRZUUZndmdNdzUvK2NESQpyOTBUdjFSdWNFYnNQZHNDRis4NVZLSmdOckdOTUtZWlVadnZ0NFVLK1VIelFJUzFrRWxSakgrOWJzWUdTa3lICnFkdXhBb0dCQUljYVdDTnl0VHZpY09uSDd2MCtsWXQzb1FaOFNXK2lHMzZuQU84RFI4S0YrS0toNEFIeE51eFQKaG1wUHFIL1NieU4vZS9DaTB1Y05hWGpBUzg3R3V6RnQzYnVBT2hJUG1Xa3VrcUdkdlcrd1Q3Y29Jc1EyUmdsRQpUc05oYXRaZy91eFp2RlZtQlVCNnhjbmE5TGhHQUZidG92RzZYZzZnVEt4ZEtFMERKRVFYCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
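
With the kubeconfig in place, access to the API server can be verified before joining Workers:

# kubectl cluster-info
# kubectl get nodes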

 

  • Run the command printed at the very end of the output on every Worker Node (not on the Master Node)
kubeadm join ###.###.###.###:6443 --token g406fy.qce8qyj6z0u2xrk3 --discovery-token-ca-cert-hash sha256:cbac5a5d7d1f7e2b7310d10c5af5622fceaf4212c6b662d5010fec72d1e2e646
 
# kubeadm join ###.###.###.###:6443 --token g406fy.qce8qyj6z0u2xrk3 --discovery-token-ca-cert-hash sha256:cbac5a5d7d1f7e2b7310d10c5af5622fceaf4212c6b662d5010fec72d1e2e646
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
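
The bootstrap token in the join command expires after 24 hours by default. To add a Worker Node later, print a fresh join command on the Master Node:

# kubeadm token create --print-join-command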

 

4. Verify the Installation

## Check that the core Kubernetes components are running
# kubectl get pods --namespace kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-4c5fh           0/1     Pending   0          8m54s
coredns-5d78c9869d-xz79b           0/1     Pending   0          8m54s
etcd-master01                      1/1     Running   0          9m9s
kube-apiserver-master01            1/1     Running   0          9m10s
kube-controller-manager-master01   1/1     Running   0          9m9s
kube-proxy-7w8nv                   1/1     Running   0          118s
kube-proxy-g7f8h                   1/1     Running   0          8m54s
kube-scheduler-master01            1/1     Running   0          9m9s
 
## List all Nodes registered in Kubernetes
# kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
master01   NotReady   control-plane   14m     v1.27.3
worker01   NotReady   <none>          7m18s   v1.27.3
worker02   NotReady   <none>          54s     v1.27.3
worker03   NotReady   <none>          47s     v1.27.3
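
Every Node reports NotReady at this stage because no CNI plugin has been installed yet; that is expected and is resolved in step 5 below. The cause is visible in the Node conditions (exact message text varies by runtime):

# kubectl describe node master01 | grep -i networkready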

 

## If a problem occurs during the Kubernetes installation, perform the cleanup below and try again

# kubeadm reset

# rm -rf /etc/kubernetes

# rm -rf ~/.kube

 

5. Install Container Networking

Configure iptables

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#forwarding-ipv4-and-letting-iptables-see-bridged-traffic

Forwarding IPv4 and letting iptables see bridged traffic

# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
 
# sudo modprobe overlay
# sudo modprobe br_netfilter
 
## sysctl params required by setup, params persist across reboots
# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
 
## Apply sysctl params without reboot
# sudo sysctl --system
 
## Verify that the br_netfilter, overlay modules are loaded by running the following commands:
# lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                286720  1 br_netfilter
 
# lsmod | grep overlay
overlay               139264  10
 
## Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following command:
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

 

Install Cilium

https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/

## Run on the Master Node only

## Install the Cilium CLI
# CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
 
# CLI_ARCH=amd64
# if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
# curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 30.5M  100 30.5M    0     0  14.8M      0  0:00:02  0:00:02 --:--:-- 38.3M
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    92  100    92    0     0    215      0 --:--:-- --:--:-- --:--:--     0
 
# sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
cilium-linux-amd64.tar.gz: OK
 
# sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
cilium
 
# rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
rm: remove regular file 'cilium-linux-amd64.tar.gz'? y
rm: remove regular file 'cilium-linux-amd64.tar.gz.sha256sum'? y
 
## Install Cilium
# cilium install
Using Cilium version 1.13.4
Auto-detected cluster name: kubernetes
Auto-detected datapath mode: tunnel
Auto-detected kube-proxy has been installed

## Validate the Installation
# cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled
 
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
Cluster Pods:          2/2 managed by Cilium
Helm chart version:    1.13.4
Image versions         cilium             quay.io/cilium/cilium:v1.13.4@sha256:bde8800d61aaad8b8451b10e247ac7bdeb7af187bb698f83d40ad75a38c1ee6b: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.13.4@sha256:09ab77d324ef4d31f7d341f97ec5a2a4860910076046d57a2d61494d426c6301: 1
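
With the Cilium DaemonSet running, the Nodes should now move from NotReady to Ready:

# kubectl get nodes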
 
 
## validate that your cluster has proper network connectivity
# cilium connectivity test
Monitor aggregation detected, will skip some flow validation steps
[kubernetes] Waiting for deployment cilium-test/client to become ready...
[kubernetes] Waiting for deployment cilium-test/client2 to become ready...
[kubernetes] Waiting for deployment cilium-test/echo-same-node to become ready...
[kubernetes] Waiting for deployment cilium-test/echo-other-node to become ready...
[kubernetes] Waiting for CiliumEndpoint for pod cilium-test/client-6b4b857d98-n7r9b to appear...
[kubernetes] Waiting for CiliumEndpoint for pod cilium-test/client2-646b88fb9b-fpg79 to appear...
[kubernetes] Waiting for pod cilium-test/client-6b4b857d98-n7r9b to reach DNS server on cilium-test/echo-same-node-965bbc7d4-c7svc pod...
[kubernetes] Waiting for pod cilium-test/client2-646b88fb9b-fpg79 to reach DNS server on cilium-test/echo-same-node-965bbc7d4-c7svc pod...
[kubernetes] Waiting for pod cilium-test/client-6b4b857d98-n7r9b to reach DNS server on cilium-test/echo-other-node-545c9b778b-g2dr2 pod...
[kubernetes] Waiting for pod cilium-test/client2-646b88fb9b-fpg79 to reach DNS server on cilium-test/echo-other-node-545c9b778b-g2dr2 pod...
[kubernetes] Waiting for pod cilium-test/client-6b4b857d98-n7r9b to reach default/kubernetes service...
[kubernetes] Waiting for pod cilium-test/client2-646b88fb9b-fpg79 to reach default/kubernetes service...
[kubernetes] Waiting for CiliumEndpoint for pod cilium-test/echo-other-node-545c9b778b-g2dr2 to appear...
[kubernetes] Waiting for CiliumEndpoint for pod cilium-test/echo-same-node-965bbc7d4-c7svc to appear...
[kubernetes] Waiting for Service cilium-test/echo-other-node to become ready...
[kubernetes] Waiting for Service cilium-test/echo-other-node to be synchronized by Cilium pod kube-system/cilium-n2xr4
[kubernetes] Waiting for Service cilium-test/echo-same-node to become ready...
[kubernetes] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-n2xr4
[kubernetes] Waiting for NodePort 10.131.231.164:32122 (cilium-test/echo-other-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.164:32541 (cilium-test/echo-same-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.200:32541 (cilium-test/echo-same-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.200:32122 (cilium-test/echo-other-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.201:32122 (cilium-test/echo-other-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.201:32541 (cilium-test/echo-same-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.163:32122 (cilium-test/echo-other-node) to become ready...
[kubernetes] Waiting for NodePort 10.131.231.163:32541 (cilium-test/echo-same-node) to become ready...
Skipping IPCache check
Enabling Hubble telescope...
Hubble is OK, flows: 10193/16380
Cilium version: 1.13.4
Running tests...
[=] Test [no-policies]
 
<snip>
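
The connectivity test deploys its workloads into a cilium-test namespace; after the tests pass, it can be cleaned up:

# kubectl delete namespace cilium-test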

 

Install Hubble

https://docs.cilium.io/en/stable/gettingstarted/hubble_setup/#hubble-setup

## Enable Hubble in Cilium
# cilium hubble enable
 
## Run cilium status to validate that Hubble is enabled and running:
# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled
 
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 4
                       cilium-operator    Running: 1
                       hubble-relay       Running: 1
Cluster Pods:          7/7 managed by Cilium
Helm chart version:    1.13.4
Image versions         hubble-relay       quay.io/cilium/hubble-relay:v1.13.4@sha256:bac057a5130cf75adf5bc363292b1f2642c0c460ac9ff018fcae3daf64873871: 1
                       cilium             quay.io/cilium/cilium:v1.13.4@sha256:bde8800d61aaad8b8451b10e247ac7bdeb7af187bb698f83d40ad75a38c1ee6b: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.13.4@sha256:09ab77d324ef4d31f7d341f97ec5a2a4860910076046d57a2d61494d426c6301: 1
 
## Install the Hubble Client
# export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
# HUBBLE_ARCH=amd64
# if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
# curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 7516k  100 7516k    0     0  4877k      0  0:00:01  0:00:01 --:--:-- 7248k
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    92  100    92    0     0    208      0 --:--:-- --:--:-- --:--:--   208
# sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
hubble-linux-amd64.tar.gz: OK
# sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
hubble
# rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
rm: remove regular file 'hubble-linux-amd64.tar.gz'? y
rm: remove regular file 'hubble-linux-amd64.tar.gz.sha256sum'? y
 
## create a port forward to the Hubble service from your local machine
# cilium hubble port-forward&
 
## validate that you can access the Hubble API via the installed CLI
# hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 790/16,380 (4.82%)
Flows/s: 5.66
Connected Nodes: 4/4
 
## query the flow API and look for flows
# hubble observe
Jul  2 07:46:09.599: 10.0.2.186:46628 (remote-node) <> 10.0.1.205:4240 (health) to-overlay FORWARDED (TCP Flags: ACK)
Jul  2 07:46:09.691: 10.0.2.186:52490 (remote-node) <> 10.0.0.84:4240 (health) to-overlay FORWARDED (TCP Flags: ACK)
 
<snip>
 
Jul  2 07:46:34.512: kube-system/hubble-relay-7789cd958d-sblnp:56320 (ID:8063) <- 10.131.231.200:4244 (host) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
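
Hubble also ships a graphical UI, which can be enabled and opened through the cilium CLI:

# cilium hubble enable --ui
# cilium hubble ui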

 

[References]

Installing kubeadm, kubelet and kubectl

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

 

 
