
Building a Kubernetes Cluster with Kubespray (2024)

Somaz 2024. 2. 2. 14:24

Overview

In this post, I install Kubernetes using Kubespray, then add one more worker node and join it to the cluster.

Source: https://medium.com/@sankalp_saxena/provisioning-ha-k8s-cluster-using-ansible-and-kubespray-using-ec2-instances-45a7b7365d97

 


Installing Kubespray

This is current as of 2024-01-22.

 


 

System Configuration

OS: Ubuntu 20.04 LTS (Focal)

Cloud: Google Compute Engine

 

| Master Node (Control Plane) | IP | CPU | Memory |
|---|---|---|---|
| test-server | 10.77.101.62 | 2 | 8192MB |

 

| Worker Node | IP | CPU | Memory |
|---|---|---|---|
| test-server-agent | 10.77.101.57 | 2 | 8192MB |

 


 

Creating the VMs

 

Control Node

## test_server ##
resource "google_compute_address" "test_server_ip" {
  name = var.test_server_ip
}

resource "google_compute_instance" "test_server" {
  name                      = var.test_server
  machine_type              = "n2-standard-2"
  labels                    = local.default_labels
  zone                      = "${var.region}-a"
  allow_stopping_for_update = true

  tags = [var.kubernetes_server, var.kubernetes_client]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
      size  = 10
    }
  }

  metadata = {
    ssh-keys = "somaz:${file("~/.ssh/id_rsa_somaz94.pub")}"
  }

  network_interface {
    network    = var.shared_vpc
    subnetwork = "${var.subnet_share}-mgmt-a"

    access_config {
      ## Include this section to give the VM an external ip ##
      nat_ip = google_compute_address.test_server_ip.address
    }
  }

  depends_on = [google_compute_address.test_server_ip]

}

# test (variables.tf)
variable "test_server" {}
variable "test_server_ip" {}
variable "kubernetes_server" {}
variable "kubernetes_client" {}

# terraform.tfvars
test_server       = "test-server"
test_server_ip    = "test-server-ip"
kubernetes_server = "kubernetes-server"
kubernetes_client = "kubernetes-client"

 

Worker Node

## test_server_agent ##
resource "google_compute_address" "test_server_agent_ip" {
  name = var.test_server_agent_ip
}

resource "google_compute_instance" "test_server_agent" {
  name                      = var.test_server_agent
  machine_type              = "n2-standard-2"
  labels                    = local.default_labels
  zone                      = "${var.region}-a"
  allow_stopping_for_update = true

  tags = [var.kubernetes_server, var.kubernetes_client]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
      size  = 10
    }
  }

  metadata = {
    ssh-keys = "somaz:${file("~/.ssh/id_rsa_somaz94.pub")}"
  }

  network_interface {
    network    = var.shared_vpc
    subnetwork = "${var.subnet_share}-mgmt-a"

    access_config {
      ## Include this section to give the VM an external ip ##
      nat_ip = google_compute_address.test_server_agent_ip.address
    }
  }

  depends_on = [google_compute_address.test_server_agent_ip]

}

# test_agent (variables.tf)
variable "test_server_agent" {}
variable "test_server_agent_ip" {}

# terraform.tfvars
test_server_agent    = "test-server-agent"
test_server_agent_ip = "test-server-agent-ip"
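
With both instance definitions in place, the VMs are created with the standard Terraform workflow. This is a sketch: it assumes the remaining variables referenced above (region, shared_vpc, subnet_share, and the default_labels local) are defined elsewhere in the module.

# Create the VMs (review the plan before applying)
terraform init
terraform plan
terraform apply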

 

 

Requirements

 

Generating and Copying the SSH Key

Since the VMs were created on Google Compute Engine, the public key was already injected through instance metadata (the ssh-keys block in the Terraform above), so I only had to move my local SSH private key over.

ssh-keygen

ssh-copy-id somaz@10.77.101.57

# vi /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata

# Add the cluster nodes
10.77.101.62 test-server
10.77.101.57 test-server-agent

# Verify
ssh test-server-agent
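
Kubespray escalates to root on every node, and the playbooks below run with --become but without prompting for a password, so the remote user also needs passwordless sudo. A quick probe (one of several ways to check):

# Should print "sudo OK" without asking for a password
ssh test-server-agent 'sudo -n true && echo "sudo OK"'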

 

Installing Packages

This part is the most important: Python 3.10 must be installed. Ubuntu 20.04 ships with Python 3.8 by default, which is too old for the current Kubespray requirements, so install 3.10 from the deadsnakes PPA.

# Install Python 3.10
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get -y update
sudo apt install -y python3.10 python3-pip git python3.10-venv

# Check the Python version
python3.10 --version
Python 3.10.13

 

 

Deploying Kubespray

 

Clone the Git repository, then install the dependencies listed in requirements.txt

git clone https://github.com/kubernetes-sigs/kubespray.git

VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
python3.10 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR

pip install -U -r requirements.txt
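
The clone above tracks the master branch, which moves quickly. For a reproducible install, consider pinning to a release branch first; release-2.24 (the series that shipped the Kubernetes v1.28 clusters seen below) is an assumption here, so substitute whichever release you need:

# Optional: pin Kubespray to a release branch instead of master
# (run inside the kubespray directory, then re-install the requirements)
git checkout release-2.24
pip install -U -r requirements.txt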

 

Preparing the Ansible Inventory

cp -rfp inventory/sample inventory/somaz-cluster

# Update the Ansible inventory file with the inventory builder
declare -a IPS=(10.77.101.62 10.77.101.57)
CONFIG_FILE=inventory/somaz-cluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
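
The builder writes a YAML inventory with generic node1/node2 names, roughly like the following (an abridged, illustrative sketch; the exact output varies by Kubespray version, which is why group membership is reviewed and adjusted by hand below):

cat inventory/somaz-cluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 10.77.101.62
      ip: 10.77.101.62
      access_ip: 10.77.101.62
    node2:
      ansible_host: 10.77.101.57
      ip: 10.77.101.57
      access_ip: 10.77.101.57
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:

The rest of this post edits inventory/somaz-cluster/inventory.ini directly and passes it to ansible-playbook with -i, so hosts.yaml is generated only for reference.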

 

Editing the Inventory

`inventory/somaz-cluster/inventory.ini`

# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
test-server ansible_host=10.77.101.62  ip=10.77.101.62 #etcd_member_name=etcd1
test-server-agent ansible_host=10.77.101.57  ip=10.77.101.57 #etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
test-server

[etcd]
test-server
# node2
# node3

[kube_node]
test-server-agent
# node3
# node4
# node5
# node6

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

 

`inventory/somaz-cluster/group_vars/k8s_cluster/addons.yml`

Various add-ons can be installed together with the cluster.

# Helm deployment
helm_enabled: true

# Registry deployment
registry_enabled: false
# registry_namespace: kube-system
# registry_storage_class: ""
# registry_disk_size: "10Gi"

# Metrics Server deployment
metrics_server_enabled: false
# metrics_server_container_port: 10250
# metrics_server_kubelet_insecure_tls: true
# metrics_server_metric_resolution: 15s
# metrics_server_kubelet_preferred_address_types: "InternalIP,ExternalIP,Hostname"
# metrics_server_host_network: false
# metrics_server_replicas: 1

# Rancher Local Path Provisioner
local_path_provisioner_enabled: false
# local_path_provisioner_namespace: "local-path-storage"
# local_path_provisioner_storage_class: "local-path"
# local_path_provisioner_reclaim_policy: Delete
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.24"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"

# Local volume provisioner deployment
local_volume_provisioner_enabled: false
...
ingress_nginx_enabled: true
...
metallb_enabled: true
...
metallb_protocol: "layer2"
...
metallb_config:
  address_pools:
    primary:
      ip_range:
        - 192.168.56.200-192.168.56.209 # Change your ip range
      auto_assign: true
...
  layer2:
    - primary
...

 

`inventory/somaz-cluster/group_vars/k8s_cluster/k8s-cluster.yml`

...
# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB, kube-vip(ARP enabled) to work
kube_proxy_strict_arp: false
...
  • This must be set to true to use MetalLB; since metallb_enabled is true in addons.yml above, change the value here to true.

 

Verifying Ansible Connectivity

# Note: run from the ~/kubespray directory
ansible all -i inventory/somaz-cluster/inventory.ini -m ping
[WARNING]: Skipping callback plugin 'ara_default', unable to load
test-server-agent | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
test-server | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

# Update the apt cache (optional)
ansible all -i inventory/somaz-cluster/inventory.ini -m apt -a 'update_cache=yes' --become

 

Running the Playbook

# Note: run from the ~/kubespray directory
ansible-playbook -i inventory/somaz-cluster/inventory.ini cluster.yml --become

 

Retrieving the Cluster Credentials

mkdir ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $USER:$USER ~/.kube/config

 

kubectl Autocompletion and Alias

echo '# kubectl completion and alias' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc

# Reload .bashrc
source ~/.bashrc

 

Verifying the Installation

k get nodes
NAME                STATUS   ROLES           AGE     VERSION
test-server         Ready    control-plane   7m56s   v1.28.6
test-server-agent   Ready    <none>          7m18s   v1.28.6

k cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

k get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-648dffd99-dnr9q   1/1     Running   0          12m
calico-node-5knmr                         1/1     Running   0          12m
calico-node-xgpvk                         1/1     Running   0          12m
coredns-77f7cc69db-n5jsv                  1/1     Running   0          11m
coredns-77f7cc69db-qchwv                  1/1     Running   0          11m
dns-autoscaler-8576bb9f5b-vx5b4           1/1     Running   0          11m
kube-apiserver-test-server                1/1     Running   1          13m
kube-controller-manager-test-server       1/1     Running   2          13m
kube-proxy-78jkg                          1/1     Running   0          12m
kube-proxy-q5j7c                          1/1     Running   0          12m
kube-scheduler-test-server                1/1     Running   1          13m
nginx-proxy-test-server-agent             1/1     Running   0          12m
nodelocaldns-4jls5                        1/1     Running   0          11m
nodelocaldns-8kzlb                        1/1     Running   0          11m

 

 


Joining a Worker Node with Kubespray

This is current as of 2024-01-22.

 


 

System Configuration

OS: Ubuntu 20.04 LTS (Focal)

Cloud: Google Compute Engine

 

| Master Node (Control Plane) | IP | CPU | Memory |
|---|---|---|---|
| test-server | 10.77.101.62 | 2 | 8192MB |

 

| Worker Node | IP | CPU | Memory |
|---|---|---|---|
| test-server-agent | 10.77.101.57 | 2 | 8192MB |
| test-server2-agent | 10.77.101.200 | 2 | 8192MB |

 


 

Creating the VM

 

Worker Node 2

## test_server_agent2 ##
resource "google_compute_address" "test_server2_agent_ip" {
  name = var.test_server2_agent_ip
}

resource "google_compute_instance" "test_server2_agent" {
  name                      = var.test_server2_agent
  machine_type              = "n2-standard-2"
  labels                    = local.default_labels
  zone                      = "${var.region}-a"
  allow_stopping_for_update = true

  tags = [var.nfs_client]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
      size  = 50
    }
  }

  metadata = {
    ssh-keys = "somaz:${file("~/.ssh/id_rsa_somaz94.pub")}"
  }

  network_interface {
    network    = var.shared_vpc
    subnetwork = "${var.subnet_share}-mgmt-a"

    access_config {
      ## Include this section to give the VM an external ip ##
      nat_ip = google_compute_address.test_server2_agent_ip.address
    }
  }

  depends_on = [google_compute_address.test_server2_agent_ip]

}

# test_agent2 (variables.tf)
variable "test_server2_agent" {}
variable "test_server2_agent_ip" {}

# terraform.tfvars (values not shown in the original; assumed to follow the
# same naming pattern as the first agent)
test_server2_agent    = "test-server2-agent"
test_server2_agent_ip = "test-server2-agent-ip"

 

 

Generating and Copying the SSH Key

Add the new node in the same way.

ssh-copy-id somaz@10.77.101.200

# vi /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata

# Add the cluster nodes
10.77.101.62 test-server
10.77.101.57 test-server-agent
10.77.101.200 test-server2-agent

# Verify
ssh test-server2-agent

 

Preparing the Ansible Inventory

source $VENVDIR/bin/activate
cd $KUBESPRAYDIR

# Update the Ansible inventory file with the inventory builder
declare -a IPS=(10.77.101.62 10.77.101.57 10.77.101.200)
CONFIG_FILE=inventory/somaz-cluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

 

Editing the Inventory

`inventory/somaz-cluster/inventory.ini`

# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
test-server ansible_host=10.77.101.62  ip=10.77.101.62 #etcd_member_name=etcd1
test-server-agent ansible_host=10.77.101.57  ip=10.77.101.57 #etcd_member_name=etcd2
test-server2-agent ansible_host=10.77.101.200  ip=10.77.101.200 #etcd_member_name=etcd3
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
test-server

[etcd]
test-server
# node2
# node3

[kube_node]
test-server-agent
test-server2-agent
# node3
# node4
# node5
# node6

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

 

Verifying Ansible Connectivity

# Note: run from the ~/kubespray directory
ansible all -i inventory/somaz-cluster/inventory.ini -m ping
[WARNING]: Skipping callback plugin 'ara_default', unable to load
test-server2-agent | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
test-server | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
test-server-agent | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

# Update the apt cache (optional)
ansible all -i inventory/somaz-cluster/inventory.ini -m apt -a 'update_cache=yes' --become

 

Running the Playbook

# Note: run from the ~/kubespray directory
ansible-playbook -i inventory/somaz-cluster/inventory.ini scale.yml --become
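
scale.yml targets the whole inventory by default. Kubespray also supports Ansible's --limit option here, which avoids touching the existing nodes; a sketch:

# Optional: run the scale playbook against the new worker only
ansible-playbook -i inventory/somaz-cluster/inventory.ini scale.yml --become --limit=test-server2-agent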

 

Verifying the Installation

k get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
test-server          Ready    control-plane   21m   v1.28.6   10.77.101.62    <none>        Ubuntu 20.04.6 LTS   5.15.0-1048-gcp   containerd://1.7.11
test-server-agent    Ready    <none>          20m   v1.28.6   10.77.101.57    <none>        Ubuntu 20.04.6 LTS   5.15.0-1048-gcp   containerd://1.7.11
test-server2-agent   Ready    <none>          65s   v1.28.6   10.77.101.200   <none>        Ubuntu 20.04.6 LTS   5.15.0-1048-gcp   containerd://1.7.11

k cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

k get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-648dffd99-dnr9q   1/1     Running   0          20m
calico-node-5knmr                         1/1     Running   0          20m
calico-node-xgpvk                         1/1     Running   0          20m
calico-node-6usdu                         1/1     Running   0          86s
coredns-77f7cc69db-n5jsv                  1/1     Running   0          11m
coredns-77f7cc69db-qchwv                  1/1     Running   0          11m
dns-autoscaler-8576bb9f5b-vx5b4           1/1     Running   0          11m
kube-apiserver-test-server                1/1     Running   1          13m
kube-controller-manager-test-server       1/1     Running   2          13m
kube-proxy-78jkg                          1/1     Running   0          12m
kube-proxy-q5j7c                          1/1     Running   0          12m
kube-proxy-kdgn8                          1/1     Running   0          84s
kube-scheduler-test-server                1/1     Running   1          13m
nginx-proxy-test-server-agent             1/1     Running   0          12m
nginx-proxy-test-server2-agent            1/1     Running   0          78s
nodelocaldns-4jls5                        1/1     Running   0          86s
nodelocaldns-8kzlb                        1/1     Running   0          20m
nodelocaldns-dh6w5                        1/1     Running   0          20m

 

 


Reference

https://github.com/kubernetes-sigs/kubespray

https://medium.com/@sankalp_saxena/provisioning-ha-k8s-cluster-using-ansible-and-kubespray-using-ec2-instances-45a7b7365d97
