
AEWS Study Week 7 - EKS Automation

Somaz 2023. 6. 8. 18:31

Source: study member Lee Hyun-soo

Overview

Week 7 of the CloudNet@ AEWS study covers EKS Automation.

 

 


0. Deploying the Lab Environment


 

One-click deployment of Amazon EKS (myeks)

# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/eks-oneclick6.yaml

# Deploy the CloudFormation stack (example values shown)
aws cloudformation deploy --template-file eks-oneclick6.yaml --stack-name myeks --parameter-overrides KeyName=somaz-key SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32  MyIamUserAccessKeyID=AKIA5... MyIamUserSecretAccessKey='CVNa2...' ClusterBaseName=myeks --region ap-northeast-2

# After the stack finishes deploying, print the working EC2 instance's IP
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text

# SSH into the working EC2 instance
ssh -i ~/.ssh/somaz-key.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)

 

Basic setup

# Switch to the default namespace
kubectl ns default

# (Optional) rename the context
NICK=<your nickname>
NICK=somaz
kubectl ctx
somaz@myeks.ap-northeast-2.eksctl.io

kubectl config rename-context admin@myeks.ap-northeast-2.eksctl.io $NICK


# ExternalDNS
MyDomain=<your domain>
echo "export MyDomain=<your domain>" >> /etc/profile
MyDomain=somaz.link
echo "export MyDomain=somaz.link" >> /etc/profile
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text)
echo $MyDomain, $MyDnzHostedZoneId
somaz.link, /hostedzone/Z03204211VEUZG9O0RLE5

curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml
MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst < externaldns.yaml | kubectl apply -f -

# AWS LB Controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

# Look up node IPs and set private-IP variables
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2b -o jsonpath={.items[0].status.addresses[0].address})
N3=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
echo "export N1=$N1" >> /etc/profile
echo "export N2=$N2" >> /etc/profile
echo "export N3=$N3" >> /etc/profile
echo $N1, $N2, $N3
192.168.1.109, 192.168.2.243, 192.168.3.213

# Look up the node security group ID
NGSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values='*ng1*' --query "SecurityGroups[*].[GroupId]" --output text)
aws ec2 authorize-security-group-ingress --group-id $NGSGID --protocol '-1' --cidr 192.168.1.0/24

# SSH into the worker nodes
for node in $N1 $N2 $N3; do ssh ec2-user@$node hostname; done

 

Install Prometheus & Grafana (admin / prom-operator)

  • Recommended dashboards: 15757, 17900, 15172
CERT_ARN=`aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text`
echo $CERT_ARN

# Add the Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Create the values file
cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    retention: 5d
    retentionSize: "10GiB"

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - prometheus.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - grafana.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

defaultRules:
  create: false
kubeControllerManager:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
alertmanager:
  enabled: false
EOT

# Deploy
kubectl create ns monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 45.27.2 \
--set prometheus.prometheusSpec.scrapeInterval='15s' --set prometheus.prometheusSpec.evaluationInterval='15s' \
-f monitor-values.yaml --namespace monitoring

# Deploy metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

 


 

1. AWS Controller for Kubernetes (ACK)

ACK lets you define and manage AWS service resources directly from Kubernetes.

  • e.g., a user can create an AWS S3 bucket with kubectl (the Kubernetes API).
  • e.g., the Kubernetes API forwards the request to ack-s3-controller, and ack-s3-controller (via IRSA) calls the AWS S3 API to create the bucket.

Source: https://aws-controllers-k8s.github.io/community/docs/community/how-it-works/
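
A minimal sketch of that declarative flow (the same Bucket kind is used in the full S3 walkthrough below; the names here are placeholders):

apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: example-bucket   # placeholder Kubernetes resource name
spec:
  name: example-bucket   # the actual S3 bucket name to create

Applying this with kubectl asks ack-s3-controller to reconcile an S3 bucket of that name; deleting the resource deletes the bucket.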

As of May 29, 2023, 17 of the AWS services ACK supports are GA (General Availability) and 10 are in Preview.

Link

 

Permissions involve two RBAC systems, the Kubernetes API and the AWS API; each service controller pod needs AWS permissions for its target service.

← IRSA role for ACK Service Controller
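
In practice the AWS-side permission is granted via IRSA: the controller's ServiceAccount is annotated with an IAM role ARN, and the pod exchanges its projected service account token for that role's credentials. A sketch of what eksctl creates below (the role ARN is a placeholder):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ack-s3-controller
  namespace: ack-system
  annotations:
    # placeholder ARN; eksctl fills in the real IRSA role it creates
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<irsa-role-name>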


 

S3

We start the hands-on with S3.

 

Install the ACK S3 controller with Helm

# Set the service name variable
export SERVICE=s3

# Download the Helm chart
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4 | cut -c 2-)
helm pull oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION
tar xzvf $SERVICE-chart-$RELEASE_VERSION.tgz

# Inspect the Helm chart
tree ~/$SERVICE-chart
/root/s3-chart
├── Chart.yaml
├── crds
│   ├── s3.services.k8s.aws_buckets.yaml
│   ├── services.k8s.aws_adoptedresources.yaml
│   └── services.k8s.aws_fieldexports.yaml
├── templates
│   ├── cluster-role-binding.yaml
│   ├── cluster-role-controller.yaml
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── metrics-service.yaml
│   ├── NOTES.txt
│   ├── role-reader.yaml
│   ├── role-writer.yaml
│   └── service-account.yaml
├── values.schema.json
└── values.yaml

2 directories, 15 files


# Install the ACK S3 controller
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=ap-northeast-2
helm install --create-namespace -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller --set aws.region="$AWS_REGION" ~/$SERVICE-chart

# Verify the installation
helm list --namespace $ACK_SYSTEM_NAMESPACE
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
ack-s3-controller       ack-system      1               2023-06-04 20:35:35.035632319 +0900 KST deployed        s3-chart-1.0.4  1.0.4

kubectl -n ack-system get pods
NAME                                          READY   STATUS    RESTARTS   AGE
ack-s3-controller-s3-chart-7c55c6657d-m4mkd   1/1     Running   0          54s

kubectl get crd | grep $SERVICE
buckets.s3.services.k8s.aws                  2023-06-04T11:35:34Z

kubectl get-all -n ack-system
NAME                                                   NAMESPACE   AGE
configmap/kube-root-ca.crt                             ack-system  78s
pod/ack-s3-controller-s3-chart-7c55c6657d-m4mkd        ack-system  78s
secret/sh.helm.release.v1.ack-s3-controller.v1         ack-system  78s
serviceaccount/ack-s3-controller                       ack-system  78s
serviceaccount/default                                 ack-system  78s
deployment.apps/ack-s3-controller-s3-chart             ack-system  78s
replicaset.apps/ack-s3-controller-s3-chart-7c55c6657d  ack-system  78s
role.rbac.authorization.k8s.io/ack-s3-reader           ack-system  78s
role.rbac.authorization.k8s.io/ack-s3-writer           ack-system  78s

kubectl describe sa -n ack-system ack-s3-controller

 

IRSA setup: recommended policy AmazonS3FullAccess

# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name ack-$SERVICE-controller \
  --namespace ack-system \
  --cluster $CLUSTER_NAME \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3FullAccess`].Arn' --output text) \
  --override-existing-serviceaccounts --approve

# Verify >> in the web console, check the CloudFormation stack and the IAM role
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
NAMESPACE       NAME                            ROLE ARN
ack-system      ack-s3-controller               arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-ack-sys-Role1-12UMJX9C9JTHS
kube-system     aws-load-balancer-controller    arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-kube-sy-Role1-1KE1BYKTB2REV

# Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.
kubectl get sa -n ack-system
NAME                SECRETS   AGE
ack-s3-controller   0         3m9s
default             0         3m9s

kubectl describe sa ack-$SERVICE-controller -n ack-system

# Restart ACK service controller deployment using the following commands.
kubectl -n ack-system rollout restart deploy ack-$SERVICE-controller-$SERVICE-chart

# Confirm the env vars and volume added by IRSA
kubectl describe pod -n ack-system -l k8s-app=$SERVICE-chart
...
    Environment:
      ACK_SYSTEM_NAMESPACE:            ack-system (v1:metadata.namespace)
      AWS_REGION:                      ap-northeast-2
      AWS_ENDPOINT_URL:
      ACK_WATCH_NAMESPACE:
      DELETION_POLICY:                 delete
      ACK_ENABLE_DEVELOPMENT_LOGGING:  false
      ACK_LOG_LEVEL:                   info
      ACK_RESOURCE_TAGS:               services.k8s.aws/controller-version=%CONTROLLER_SERVICE%-%CONTROLLER_VERSION%,services.k8s.aws/namespace=%K8S_NAMESPACE%
      AWS_STS_REGIONAL_ENDPOINTS:      regional
      AWS_ROLE_ARN:                    arn:aws:iam::611841095956:role/eksctl-myeks-addon-iamserviceaccount-ack-sys-Role1-12UMJX9C9JTHS
      AWS_WEB_IDENTITY_TOKEN_FILE:     /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g4mdp (ro)
...

The stack is being created in CloudFormation as expected.

 

Create, update, and delete an S3 bucket

# [Terminal 1] monitoring
watch -d aws s3 ls

# Create the manifest for the S3 bucket
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export BUCKET_NAME=my-ack-s3-bucket-$AWS_ACCOUNT_ID

read -r -d '' BUCKET_MANIFEST <<EOF
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: $BUCKET_NAME
spec:
  name: $BUCKET_NAME
EOF

echo "${BUCKET_MANIFEST}" > bucket.yaml
cat bucket.yaml | yh
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-ack-s3-bucket-611841095956
spec:
  name: my-ack-s3-bucket-611841095956

# Create the S3 bucket
aws s3 ls
2023-01-08 20:53:24 somaz-k8s-s3

kubectl create -f bucket.yaml
bucket.s3.services.k8s.aws/my-ack-s3-bucket-6118xxxxxxxx created

# Verify the S3 bucket
aws s3 ls
2023-06-04 20:41:12 my-ack-s3-bucket-611841095956
2023-01-08 20:53:24 somaz-k8s-s3

kubectl get buckets
NAME                            AGE
my-ack-s3-bucket-611841095956   54s

kubectl describe bucket/$BUCKET_NAME | head -6
Name:         my-ack-s3-bucket-6118xxxxxxxx
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  s3.services.k8s.aws/v1alpha1
Kind:         Bucket

aws s3 ls | grep $BUCKET_NAME
2022-04-24 18:02:07 my-ack-s3-bucket-6118xxxxxxxx

# Update the S3 bucket: add tags
read -r -d '' BUCKET_MANIFEST <<EOF
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: $BUCKET_NAME
spec:
  name: $BUCKET_NAME
  tagging:
    tagSet:
    - key: myTagKey
      value: myTagValue
EOF

echo "${BUCKET_MANIFEST}" > bucket.yaml

# Apply the bucket update (the annotation warning is auto-managed and safe to ignore!)
kubectl apply -f bucket.yaml

# Verify the bucket update
kubectl describe bucket/$BUCKET_NAME | grep Spec: -A5
Spec:
  Name:  my-ack-s3-bucket-6118xxxxxxxxx
  Tagging:
    Tag Set:
      Key:    myTagKey
      Value:  myTagValue

# Delete the S3 bucket
kubectl delete -f bucket.yaml

# verify the bucket no longer exists
kubectl get bucket/$BUCKET_NAME
Error from server (NotFound): buckets.s3.services.k8s.aws "my-ack-s3-bucket-611841095956" not found

aws s3 ls | grep $BUCKET_NAME

Tags confirmed!

 

 

Remove the ACK S3 controller

# helm uninstall
export SERVICE=s3
helm uninstall -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller

# Delete the ACK S3 controller CRDs
kubectl delete -f ~/$SERVICE-chart/crds

# Delete the IRSA service account
eksctl delete iamserviceaccount --cluster myeks --name ack-$SERVICE-controller --namespace ack-system

# Delete the namespace >> do this after finishing all the ACK exercises
kubectl delete namespace $ACK_SYSTEM_NAMESPACE

 


 

EC2 & VPC

Next, we try the same with EC2 and VPC.

 

Install the ACK EC2 controller with Helm

# Set the service name variable and download the Helm chart
export SERVICE=ec2
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4 | cut -c 2-)
helm pull oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION
tar xzvf $SERVICE-chart-$RELEASE_VERSION.tgz

# Inspect the Helm chart
tree ~/$SERVICE-chart
/root/ec2-chart
├── Chart.yaml
├── crds
│   ├── ec2.services.k8s.aws_dhcpoptions.yaml
│   ├── ec2.services.k8s.aws_elasticipaddresses.yaml
│   ├── ec2.services.k8s.aws_instances.yaml
│   ├── ec2.services.k8s.aws_internetgateways.yaml
│   ├── ec2.services.k8s.aws_natgateways.yaml
│   ├── ec2.services.k8s.aws_routetables.yaml
│   ├── ec2.services.k8s.aws_securitygroups.yaml
│   ├── ec2.services.k8s.aws_subnets.yaml
│   ├── ec2.services.k8s.aws_transitgateways.yaml
│   ├── ec2.services.k8s.aws_vpcendpoints.yaml
│   ├── ec2.services.k8s.aws_vpcs.yaml
│   ├── services.k8s.aws_adoptedresources.yaml
│   └── services.k8s.aws_fieldexports.yaml
├── templates
│   ├── cluster-role-binding.yaml
│   ├── cluster-role-controller.yaml
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── metrics-service.yaml
│   ├── NOTES.txt
│   ├── role-reader.yaml
│   ├── role-writer.yaml
│   └── service-account.yaml
├── values.schema.json
└── values.yaml

2 directories, 25 files

# Install the ACK EC2 controller
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=ap-northeast-2
helm install -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller --set aws.region="$AWS_REGION" ~/$SERVICE-chart

# Verify the installation
helm list --namespace $ACK_SYSTEM_NAMESPACE
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART               APP VERSION
ack-ec2-controller      ack-system      1               2023-06-04 20:46:15.980949768 +0900 KST deployed        ec2-chart-1.0.3     1.0.3

kubectl -n $ACK_SYSTEM_NAMESPACE get pods -l "app.kubernetes.io/instance=ack-$SERVICE-controller"
NAME                                            READY   STATUS    RESTARTS   AGE
ack-ec2-controller-ec2-chart-777567ff4c-nh8zv   1/1     Running   0          21s

kubectl get crd | grep $SERVICE
dhcpoptions.ec2.services.k8s.aws             2023-06-04T11:46:15Z
elasticipaddresses.ec2.services.k8s.aws      2023-06-04T11:46:15Z
instances.ec2.services.k8s.aws               2023-06-04T11:46:15Z
internetgateways.ec2.services.k8s.aws        2023-06-04T11:46:15Z
natgateways.ec2.services.k8s.aws             2023-06-04T11:46:15Z
routetables.ec2.services.k8s.aws             2023-06-04T11:46:15Z
securitygroups.ec2.services.k8s.aws          2023-06-04T11:46:15Z
subnets.ec2.services.k8s.aws                 2023-06-04T11:46:15Z
transitgateways.ec2.services.k8s.aws         2023-06-04T11:46:15Z
vpcendpoints.ec2.services.k8s.aws            2023-06-04T11:46:15Z
vpcs.ec2.services.k8s.aws                    2023-06-04T11:46:15Z

 

IRSA setup: recommended policy AmazonEC2FullAccess

If you don't know the ARN, you can look it up at the linked page.

# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name ack-$SERVICE-controller \
  --namespace $ACK_SYSTEM_NAMESPACE \
  --cluster $CLUSTER_NAME \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonEC2FullAccess`].Arn' --output text) \
  --override-existing-serviceaccounts --approve

# Verify >> in the web console, check the CloudFormation stack and the IAM role
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
NAMESPACE       NAME                            ROLE ARN
ack-system      ack-ec2-controller              arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-ack-sys-Role1-YZA5VA63S3O0
kube-system     aws-load-balancer-controller    arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-kube-sy-Role1-1KE1BYKTB2REV

# Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.
kubectl get sa -n $ACK_SYSTEM_NAMESPACE
kubectl describe sa ack-$SERVICE-controller -n $ACK_SYSTEM_NAMESPACE

# Restart ACK service controller deployment using the following commands.
kubectl -n $ACK_SYSTEM_NAMESPACE rollout restart deploy ack-$SERVICE-controller-$SERVICE-chart

# Confirm the env vars and volume added by IRSA
kubectl describe pod -n $ACK_SYSTEM_NAMESPACE -l k8s-app=$SERVICE-chart
...

As with the S3 iamserviceaccount, eksctl creates it as a CloudFormation stack.

 

Create and delete a VPC and subnet

# [Terminal 1] monitoring
while true; do aws ec2 describe-vpcs --query 'Vpcs[*].{VPCId:VpcId, CidrBlock:CidrBlock}' --output text; echo "-----"; sleep 1; done

VPC monitoring

 

# Create the VPC
cat <<EOF > vpc.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: vpc-tutorial-test
spec:
  cidrBlocks: 
  - 10.0.0.0/16
  enableDNSSupport: true
  enableDNSHostnames: true
EOF
 
kubectl apply -f vpc.yaml
vpc.ec2.services.k8s.aws/vpc-tutorial-test created

# Verify the VPC
kubectl get vpcs
NAME                ID                      STATE
vpc-tutorial-test   vpc-0456251587b8cb160   available

kubectl describe vpcs
aws ec2 describe-vpcs --query 'Vpcs[*].{VPCId:VpcId, CidrBlock:CidrBlock}' --output text
192.168.0.0/16  vpc-0bb52a26af2b02291
172.31.0.0/16   vpc-0be33812ca98ab8fa
10.0.0.0/16     vpc-0456251587b8cb160

VPC creation confirmed

 

# [Terminal 1] monitoring
VPCID=$(kubectl get vpcs vpc-tutorial-test -o jsonpath={.status.vpcID})
while true; do aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPCID" --query 'Subnets[*].{SubnetId:SubnetId, CidrBlock:CidrBlock}' --output text; echo "-----"; sleep 1 ; done

# Create the subnet
VPCID=$(kubectl get vpcs vpc-tutorial-test -o jsonpath={.status.vpcID})

cat <<EOF > subnet.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: subnet-tutorial-test
spec:
  cidrBlock: 10.0.0.0/20
  vpcID: $VPCID
EOF
kubectl apply -f subnet.yaml

# Verify the subnet
kubectl get subnets
NAME                   ID                         STATE
subnet-tutorial-test   subnet-0ac1f1c894b82275e   available

kubectl describe subnets
aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPCID" --query 'Subnets[*].{SubnetId:SubnetId, CidrBlock:CidrBlock}' --output text
10.0.0.0/20     subnet-0ac1f1c894b82275e

Subnet creation monitoring
VPC confirmed in the web console
Subnet confirmed in the web console

 

# Delete the resources
kubectl delete -f subnet.yaml && kubectl delete -f vpc.yaml

 

 

Create a VPC Workflow: create a VPC, Subnet, SG, RT, EIP, IGW, NATGW, and Instance

Source: https://aws-controllers-k8s.github.io/community/docs/tutorials/ec2-example/#create-a-vpc-workflow

 

Create the `vpc-workflow.yaml` file

Copy and paste the code block below!

cat <<EOF > vpc-workflow.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: tutorial-vpc
spec:
  cidrBlocks: 
  - 10.0.0.0/16
  enableDNSSupport: true
  enableDNSHostnames: true
  tags:
    - key: name
      value: vpc-tutorial
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: InternetGateway
metadata:
  name: tutorial-igw
spec:
  vpcRef:
    from:
      name: tutorial-vpc
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: NATGateway
metadata:
  name: tutorial-natgateway1
spec:
  subnetRef:
    from:
      name: tutorial-public-subnet1
  allocationRef:
    from:
      name: tutorial-eip1
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: ElasticIPAddress
metadata:
  name: tutorial-eip1
spec:
  tags:
    - key: name
      value: eip-tutorial
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: RouteTable
metadata:
  name: tutorial-public-route-table
spec:
  vpcRef:
    from:
      name: tutorial-vpc
  routes:
  - destinationCIDRBlock: 0.0.0.0/0
    gatewayRef:
      from:
        name: tutorial-igw
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: RouteTable
metadata:
  name: tutorial-private-route-table-az1
spec:
  vpcRef:
    from:
      name: tutorial-vpc
  routes:
  - destinationCIDRBlock: 0.0.0.0/0
    natGatewayRef:
      from:
        name: tutorial-natgateway1
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: tutorial-public-subnet1
spec:
  availabilityZone: ap-northeast-2a
  cidrBlock: 10.0.0.0/20
  mapPublicIPOnLaunch: true
  vpcRef:
    from:
      name: tutorial-vpc
  routeTableRefs:
  - from:
      name: tutorial-public-route-table
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: tutorial-private-subnet1
spec:
  availabilityZone: ap-northeast-2a
  cidrBlock: 10.0.128.0/20
  vpcRef:
    from:
      name: tutorial-vpc
  routeTableRefs:
  - from:
      name: tutorial-private-route-table-az1
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: SecurityGroup
metadata:
  name: tutorial-security-group
spec:
  description: "ack security group"
  name: tutorial-sg
  vpcRef:
     from:
       name: tutorial-vpc
  ingressRules:
    - ipProtocol: tcp
      fromPort: 22
      toPort: 22
      ipRanges:
        - cidrIP: "0.0.0.0/0"
          description: "ingress"
EOF

 

Then create the VPC environment.

# Create the VPC environment
kubectl apply -f vpc-workflow.yaml

# [Terminal 1] Once the NAT gateway is created, the tutorial-private-route-table-az1 route table ID appears, followed by the tutorial-private-subnet1 subnet ID >> takes about 5 minutes
watch -d kubectl get routetables,subnet

# Verify the VPC environment
kubectl describe vpcs
kubectl describe internetgateways
kubectl describe routetables
kubectl describe natgateways
kubectl describe elasticipaddresses
kubectl describe securitygroups

# Compare the status of the two subnets while the deployment is in progress
kubectl describe subnets
...
Status:
  Conditions:
    Last Transition Time:  2023-06-04T11:55:30Z
    Message:               Reference resolution failed
    Reason:                the referenced resource is not synced yet. resource:RouteTable, namespace:default, name:tutorial-private-route-table-az1
    Status:                Unknown
    Type:                  ACK.ReferencesResolved
...
Status:
  Ack Resource Metadata:
    Arn:                       arn:aws:ec2:ap-northeast-2:611841095956:subnet/subnet-0aa9afbc1cda8866c
    Owner Account ID:          611841095956
    Region:                    ap-northeast-2
  Available IP Address Count:  4091
  Conditions:
    Last Transition Time:           2023-06-04T11:55:12Z
    Status:                         True
    Type:                           ACK.ReferencesResolved
    Last Transition Time:           2023-06-04T11:55:13Z
    Message:                        Resource synced successfully
    Reason:
    Status:                         True
    Type:                           ACK.ResourceSynced
...

Creation complete
Source: https://www.eksworkshop.com/docs/automation/controlplanes/ack/provision-resources

 

Create an instance in the public subnet.

# Look up the public subnet ID
PUBSUB1=$(kubectl get subnets tutorial-public-subnet1 -o jsonpath={.status.subnetID})
echo $PUBSUB1
subnet-0aa9afbc1cda8866c

# Look up the security group ID
TSG=$(kubectl get securitygroups tutorial-security-group -o jsonpath={.status.id})
echo $TSG
sg-0e61c525b7f32a048

# Look up the latest Amazon Linux 2 AMI ID
AL2AMI=$(aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn2-ami-hvm-2.0.*-x86_64-gp2" --query 'Images[0].ImageId' --output text)
echo $AL2AMI
ami-066d41d96fc160063

# Set your SSH key pair name
MYKEYPAIR=<your SSH key pair name>
MYKEYPAIR=somaz-key

# Check the variables > make sure the subnet ID in particular is set!
echo $PUBSUB1 , $TSG , $AL2AMI , $MYKEYPAIR
subnet-0aa9afbc1cda8866c , sg-0e61c525b7f32a048 , ami-066d41d96fc160063 , somaz-key


# [Terminal 1] monitoring
while true; do aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table; date ; sleep 1 ; done

# Create an instance in the public subnet
cat <<EOF > tutorial-bastion-host.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Instance
metadata:
  name: tutorial-bastion-host
spec:
  imageID: $AL2AMI # AL2 AMI ID - ap-northeast-2
  instanceType: t3.medium
  subnetID: $PUBSUB1
  securityGroupIDs:
  - $TSG
  keyName: $MYKEYPAIR
  tags:
    - key: producer
      value: ack
EOF
kubectl apply -f tutorial-bastion-host.yaml
instance.ec2.services.k8s.aws/tutorial-bastion-host created

# Verify the instance
kubectl get instance
NAME                    ID
tutorial-bastion-host   i-060b4da088a4361a9

kubectl describe instance
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

 

Connect to the instance in the public subnet. Outbound traffic fails!

ssh -i <your key pair file> ec2-user@<public IP of the public-subnet instance>
ssh -i ~/.ssh/somaz-key.pem ec2-user@3.35.138.151
------
# After connecting to the public-subnet instance, check outbound internet connectivity
ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1007ms
exit
------

 

Modify the security group policy: add an egress rule

cat <<EOF > modify-sg.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: SecurityGroup
metadata:
  name: tutorial-security-group
spec:
  description: "ack security group"
  name: tutorial-sg
  vpcRef:
     from:
       name: tutorial-vpc
  ingressRules:
    - ipProtocol: tcp
      fromPort: 22
      toPort: 22
      ipRanges:
        - cidrIP: "0.0.0.0/0"
          description: "ingress"
  egressRules:
    - ipProtocol: '-1'
      ipRanges:
        - cidrIP: "0.0.0.0/0"
          description: "egress"
EOF
kubectl apply -f modify-sg.yaml

# Verify the change >> check the outbound rule on the security group
kubectl logs -n $ACK_SYSTEM_NAMESPACE -l k8s-app=ec2-chart -f
...
2023-06-04T12:18:32.295Z        INFO    ackrt   desired resource state has changed      {"account": "611841095956", "role": "", "region": "ap-northeast-2", "kind": "SecurityGroup", "namespace": "default", "name": "tutorial-security-group", "is_adopted": false, "generation": 2, "diff": [{"Path":{"Parts":["Spec","EgressRules"]},"A":[{"ipProtocol":"-1","ipRanges":[{"cidrIP":"0.0.0.0/0","description":"egress"}]}],"B":null}]}
2023-06-04T12:18:32.575Z        INFO    ackrt   updated resource        {"account": "611841095956", "role": "", "region": "ap-northeast-2", "kind": "SecurityGroup", "namespace": "default", "name": "tutorial-security-group", "is_adopted": false, "generation": 2}

 

Connect to the public-subnet instance again and confirm outbound internet connectivity.

ssh -i <your key pair file> ec2-user@<public IP of the public-subnet instance>
ssh -i ~/.ssh/somaz-key.pem ec2-user@3.35.138.151
------
# After connecting to the public-subnet instance, check outbound internet connectivity
ping -c 10 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=104 time=22.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=104 time=22.1 ms
...

curl ipinfo.io/ip ; echo # which public IP is printed?
3.35.138.151

exit
------

 

Create an instance in the private subnet.

# Look up the private subnet ID >> the RT/subnet IDs only resolve after the NAT gateway is created, so this can take a while
PRISUB1=$(kubectl get subnets tutorial-private-subnet1 -o jsonpath={.status.subnetID})
echo $PRISUB1
subnet-0de4fda5dae83a07d

# Check the variables > make sure the private subnet ID in particular is set!
echo $PRISUB1 , $TSG , $AL2AMI , $MYKEYPAIR
subnet-0de4fda5dae83a07d , sg-0e61c525b7f32a048 , ami-066d41d96fc160063 , somaz-key

# Create an instance in the private subnet
cat <<EOF > tutorial-instance-private.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Instance
metadata:
  name: tutorial-instance-private
spec:
  imageID: $AL2AMI # AL2 AMI ID - ap-northeast-2
  instanceType: t3.medium
  subnetID: $PRISUB1
  securityGroupIDs:
  - $TSG
  keyName: $MYKEYPAIR
  tags:
    - key: producer
      value: ack
EOF
kubectl apply -f tutorial-instance-private.yaml

# Verify the instances
kubectl get instance
NAME                        ID
tutorial-bastion-host       i-060b4da088a4361a9
tutorial-instance-private   i-0c62cd6e404ea61bc

kubectl describe instance
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
------------------------------------------------------------------
|                        DescribeInstances                       |
+----------------+-----------------+------------------+----------+
|  InstanceName  |  PrivateIPAdd   |   PublicIPAdd    | Status   |
+----------------+-----------------+------------------+----------+
|  myeks-ng1-Node|  192.168.3.213  |  54.180.25.71    |  running |
|  myeks-ng1-Node|  192.168.2.243  |  15.164.181.124  |  running |
|  myeks-bastion |  192.168.1.100  |  3.35.206.83     |  running |
|  myeks-ng1-Node|  192.168.1.109  |  52.79.142.194   |  running |
|  None          |  10.0.6.141     |  3.35.138.151    |  running |
|  None          |  10.0.128.66    |  None            |  running |
+----------------+-----------------+------------------+----------+

 

Set up SSH tunneling via the instance in the public subnet.

ssh -i <your key pair file> -L <any local port>:<private IP of the private-subnet instance>:22 ec2-user@<public IP of the public-subnet instance> -v
ssh -i ~/.ssh/somaz-key.pem -L 9999:10.0.128.66:22 ec2-user@3.35.138.151 -v
OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/somaz/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Connecting to 3.35.138.151 [3.35.138.151] port 22.
debug1: Connection established.
debug1: identity file /home/somaz/.ssh/somaz-key.pem type -1
debug1: identity file /home/somaz/.ssh/somaz-key.pem-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.9p1 Ubuntu-3
...
debug1: channel 2: setting env LANG = "C.UTF-8"
debug1: client_global_hostkeys_private_confirm: server used untrusted RSA signature algorithm ssh-rsa for key 0, disregarding
debug1: update_known_hosts: known hosts file /home/somaz/.ssh/known_hosts2 does not exist

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
6 package(s) needed for security, out of 7 available
Run "sudo yum update" to apply all updates.
Last login: Sun Jun  4 20:33:16 KST 2023 on pts/2
---
Leave the session open after connecting
---

 

SSHing to that local port now lands on the instance in the private subnet.

ssh -i <your key pair file> -p <your chosen local port> ec2-user@localhost
ssh -i ~/.ssh/somaz-key.pem -p 9999 ec2-user@localhost
---
# Check IP and network information
ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:07:ad:a1:3f:9e brd ff:ff:ff:ff:ff:ff
    inet 10.0.128.66/20 brd 10.0.143.255 scope global dynamic eth0
       valid_lft 2983sec preferred_lft 2983sec
    inet6 fe80::7:adff:fea1:3f9e/64 scope link
       valid_lft forever preferred_lft forever
       
sudo ss -tnp
State        Recv-Q        Send-Q               Local Address:Port                 Peer Address:Port         Process
ESTAB        0             0                      10.0.128.66:22                     10.0.6.141:46802         users:(("sshd",pid=2696,fd=3),("sshd",pid=2679,fd=3))

ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=23.2 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 23.215/23.513/23.811/0.298 ms

curl ipinfo.io/ip ; echo # which public IP is printed?
# the printed IP is the NAT gateway's public IP
54.180.194.9

exit
---

The ssh -v debug output above comes from the public-subnet instance doing the tunneling.
Since the private-subnet instance has no public IP of its own, as seen earlier, its outbound traffic leaves with the NAT gateway's public IP.
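
A quick way to cross-check (assuming the default AWS CLI profile and region): look up the workflow VPC's ID the same way as before and list the NAT gateway's public IP, then compare it with the curl output above.

WFVPC=$(kubectl get vpcs tutorial-vpc -o jsonpath={.status.vpcID})   # hypothetical variable name
aws ec2 describe-nat-gateways --filter "Name=vpc-id,Values=$WFVPC" \
  --query 'NatGateways[*].NatGatewayAddresses[*].PublicIp' --output text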

 

Clean up the resources after the exercise.

kubectl delete -f tutorial-bastion-host.yaml && kubectl delete -f tutorial-instance-private.yaml

aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
------------------------------------------------------------------
|                        DescribeInstances                       |
+----------------+-----------------+------------------+----------+
|  InstanceName  |  PrivateIPAdd   |   PublicIPAdd    | Status   |
+----------------+-----------------+------------------+----------+
|  myeks-ng1-Node|  192.168.3.213  |  54.180.25.71    |  running |
|  myeks-ng1-Node|  192.168.2.243  |  15.164.181.124  |  running |
|  myeks-bastion |  192.168.1.100  |  3.35.206.83     |  running |
|  myeks-ng1-Node|  192.168.1.109  |  52.79.142.194   |  running |
+----------------+-----------------+------------------+----------+

kubectl delete -f vpc-workflow.yaml  # deleting all the VPC-related resources takes a while

 


 

RDS

This time we work through RDS.

Supported engines: Aurora (MySQL & PostgreSQL), RDS for PostgreSQL, RDS for MySQL, RDS for MariaDB, RDS for Oracle, RDS for SQL Server

Install the ACK RDS controller with Helm

# Set the service name variable and download the Helm chart
export SERVICE=rds
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4 | cut -c 2-)
helm pull oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION
tar xzvf $SERVICE-chart-$RELEASE_VERSION.tgz

# Inspect the Helm chart
tree ~/$SERVICE-chart
/root/rds-chart
├── Chart.yaml
├── crds
│   ├── rds.services.k8s.aws_dbclusterparametergroups.yaml
│   ├── rds.services.k8s.aws_dbclusters.yaml
│   ├── rds.services.k8s.aws_dbinstances.yaml
│   ├── rds.services.k8s.aws_dbparametergroups.yaml
│   ├── rds.services.k8s.aws_dbproxies.yaml
│   ├── rds.services.k8s.aws_dbsubnetgroups.yaml
│   ├── rds.services.k8s.aws_globalclusters.yaml
│   ├── services.k8s.aws_adoptedresources.yaml
│   └── services.k8s.aws_fieldexports.yaml
├── templates
│   ├── cluster-role-binding.yaml
│   ├── cluster-role-controller.yaml
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── metrics-service.yaml
│   ├── NOTES.txt
│   ├── role-reader.yaml
│   ├── role-writer.yaml
│   └── service-account.yaml
├── values.schema.json
└── values.yaml

2 directories, 21 files

# Install the ACK RDS controller
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=ap-northeast-2
helm install -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller --set aws.region="$AWS_REGION" ~/$SERVICE-chart

# Verify the installation
helm list --namespace $ACK_SYSTEM_NAMESPACE
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART               APP VERSION
ack-ec2-controller      ack-system      1               2023-06-04 20:46:15.980949768 +0900 KST deployed        ec2-chart-1.0.3     1.0.3
ack-rds-controller      ack-system      1               2023-06-04 22:02:25.502334924 +0900 KST deployed        rds-chart-1.1.4     1.1.4

kubectl -n $ACK_SYSTEM_NAMESPACE get pods -l "app.kubernetes.io/instance=ack-$SERVICE-controller"
NAME                                            READY   STATUS    RESTARTS   AGE
ack-rds-controller-rds-chart-6d59dfdfd7-vvw5n   1/1     Running   0          14s

kubectl get crd | grep $SERVICE
dbclusterparametergroups.rds.services.k8s.aws   2023-06-04T13:02:24Z
dbclusters.rds.services.k8s.aws                 2023-06-04T13:02:24Z
dbinstances.rds.services.k8s.aws                2023-06-04T13:02:24Z
dbparametergroups.rds.services.k8s.aws          2023-06-04T13:02:25Z
dbproxies.rds.services.k8s.aws                  2023-06-04T13:02:25Z
dbsubnetgroups.rds.services.k8s.aws             2023-06-04T13:02:25Z
globalclusters.rds.services.k8s.aws             2023-06-04T13:02:25Z

 

IRSA setup: recommended policy AmazonRDSFullAccess

# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name ack-$SERVICE-controller \
  --namespace $ACK_SYSTEM_NAMESPACE \
  --cluster $CLUSTER_NAME \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonRDSFullAccess`].Arn' --output text) \
  --override-existing-serviceaccounts --approve

# Verify >> in the web console, check the CloudFormation stack and the IAM role
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
NAMESPACE       NAME                            ROLE ARN
ack-system      ack-ec2-controller              arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-ack-sys-Role1-YZA5VA63S3O0
ack-system      ack-rds-controller              arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-ack-sys-Role1-TYYF2CSRH1D4
kube-system     aws-load-balancer-controller    arn:aws:iam::6118xxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-kube-sy-Role1-1KE1BYKTB2REV

# Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.
kubectl get sa -n $ACK_SYSTEM_NAMESPACE
NAME                 SECRETS   AGE
ack-ec2-controller   0         80m
ack-rds-controller   0         3m55s
default              0         90m

kubectl describe sa ack-$SERVICE-controller -n $ACK_SYSTEM_NAMESPACE
Name:                ack-rds-controller
Namespace:           ack-system
Labels:              app.kubernetes.io/instance=ack-rds-controller
                     app.kubernetes.io/managed-by=eksctl
                     app.kubernetes.io/name=rds-chart
                     app.kubernetes.io/version=1.1.4
                     helm.sh/chart=rds-chart-1.1.4
                     k8s-app=rds-chart
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::6118xxxxxxxxx:role/eksctl-myeks-addon-iamserviceaccount-ack-sys-Role1-TYYF2CSRH1D4
                     meta.helm.sh/release-name: ack-rds-controller
                     meta.helm.sh/release-namespace: ack-system
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>


# Restart ACK service controller deployment using the following commands.
kubectl -n $ACK_SYSTEM_NAMESPACE rollout restart deploy ack-$SERVICE-controller-$SERVICE-chart

# Confirm the env vars and volume added by IRSA
kubectl describe pod -n $ACK_SYSTEM_NAMESPACE -l k8s-app=$SERVICE-chart
...

CloudFormation stack confirmed
IAM role confirmed

 

Create and delete AWS RDS for MariaDB

# Create a secret for the DB password
RDS_INSTANCE_NAME="<your instance name>"
RDS_INSTANCE_PASSWORD="<your instance password>"
RDS_INSTANCE_NAME=myrds
RDS_INSTANCE_PASSWORD=qwe12345
kubectl create secret generic "${RDS_INSTANCE_NAME}-password" --from-literal=password="${RDS_INSTANCE_PASSWORD}"

# Verify
kubectl get secret $RDS_INSTANCE_NAME-password
NAME             TYPE     DATA   AGE
myrds-password   Opaque   1      8s

# [Terminal 1] monitoring
RDS_INSTANCE_NAME=myrds
watch -d "kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'"

# Create the RDS instance: takes up to 15 minutes >> try adding the options you need, such as security groups and subnets (see the sketch after this block)!
cat <<EOF > rds-mariadb.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: "${RDS_INSTANCE_NAME}"
spec:
  allocatedStorage: 20
  dbInstanceClass: db.t4g.micro
  dbInstanceIdentifier: "${RDS_INSTANCE_NAME}"
  engine: mariadb
  engineVersion: "10.6"
  masterUsername: "admin"
  masterUserPassword:
    namespace: default
    name: "${RDS_INSTANCE_NAME}-password"
    key: password
EOF
kubectl apply -f rds-mariadb.yaml
dbinstance.rds.services.k8s.aws/myrds created

# Verify creation
kubectl get dbinstances  ${RDS_INSTANCE_NAME}
kubectl describe dbinstance "${RDS_INSTANCE_NAME}"
aws rds describe-db-instances --db-instance-identifier $RDS_INSTANCE_NAME | jq

kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'
  Db Instance Status:         creating

kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'
  Db Instance Status:         backing-up

kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'
  Db Instance Status:         available

# Wait for creation to complete: exits normally once the specified condition is met
kubectl wait dbinstances ${RDS_INSTANCE_NAME} --for=condition=ACK.ResourceSynced --timeout=15m
dbinstance.rds.services.k8s.aws/myrds condition met
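
As the creation comment above suggests, the instance can also be pinned to your own networking. A hedged sketch of the extra spec fields (field names come from the ACK DBInstance CRD; the subnet group and security group ID are placeholders you would create separately):

spec:
  ...
  dbSubnetGroupName: my-db-subnet-group    # placeholder DBSubnetGroup name
  vpcSecurityGroupIDs:
  - sg-0123456789abcdef0                   # placeholder security group ID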

 

Connecting to MariaDB

Create the fieldexports (a FieldExport copies a field from an ACK resource into a ConfigMap or Secret): copy and paste the code block below!

RDS_INSTANCE_CONN_CM="${RDS_INSTANCE_NAME}-conn-cm"

cat <<EOF > rds-field-exports.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ${RDS_INSTANCE_CONN_CM}
data: {}
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: ${RDS_INSTANCE_NAME}-host
spec:
  to:
    name: ${RDS_INSTANCE_CONN_CM}
    kind: configmap
  from:
    path: ".status.endpoint.address"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: ${RDS_INSTANCE_NAME}
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: ${RDS_INSTANCE_NAME}-port
spec:
  to:
    name: ${RDS_INSTANCE_CONN_CM}
    kind: configmap
  from:
    path: ".status.endpoint.port"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: ${RDS_INSTANCE_NAME}
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: ${RDS_INSTANCE_NAME}-user
spec:
  to:
    name: ${RDS_INSTANCE_CONN_CM}
    kind: configmap
  from:
    path: ".spec.masterUsername"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: ${RDS_INSTANCE_NAME}
EOF

kubectl apply -f rds-field-exports.yaml

 

Check the status information.

# Check the status: address and port
kubectl get dbinstances myrds -o jsonpath={.status.endpoint} | jq
{
  "address": "myrds.cwg3x2hx1ico.ap-northeast-2.rds.amazonaws.com",
  "hostedZoneID": "ZLA2NUCOLGUUR",
  "port": 3306
}

# Check the status: masterUsername
kubectl get dbinstances myrds -o jsonpath={.spec.masterUsername} ; echo
admin

# Check the ConfigMap
kubectl get cm myrds-conn-cm -o yaml | kubectl neat | yh
apiVersion: v1
data:
  default.myrds-host: myrds.cwg3x2hx1ico.ap-northeast-2.rds.amazonaws.com
  default.myrds-port: "3306"
  default.myrds-user: admin
kind: ConfigMap
metadata:
  name: myrds-conn-cm
  namespace: default

# Check the fieldexport resources
kubectl get crd | grep fieldexport
fieldexports.services.k8s.aws                   2023-06-04T11:46:15Z

kubectl get fieldexport
NAME         AGE
myrds-host   57s
myrds-port   57s
myrds-user   57s

kubectl get fieldexport myrds-host -o yaml | k neat | yh
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: myrds-host
  namespace: default
spec:
  from:
    path: .status.endpoint.address
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: myrds
  to:
    kind: configmap
    name: myrds-conn-cm

 

Create a pod that uses the RDS instance.

APP_NAMESPACE=default
cat <<EOF > rds-pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: ${APP_NAMESPACE}
spec:
  containers:
   - image: busybox
     name: myapp
     command:
        - sleep
        - "3600"
     imagePullPolicy: IfNotPresent
     env:
      - name: DBHOST
        valueFrom:
         configMapKeyRef:
          name: ${RDS_INSTANCE_CONN_CM}
          key: "${APP_NAMESPACE}.${RDS_INSTANCE_NAME}-host"
      - name: DBPORT
        valueFrom:
         configMapKeyRef:
          name: ${RDS_INSTANCE_CONN_CM}
          key: "${APP_NAMESPACE}.${RDS_INSTANCE_NAME}-port"
      - name: DBUSER
        valueFrom:
         configMapKeyRef:
          name: ${RDS_INSTANCE_CONN_CM}
          key: "${APP_NAMESPACE}.${RDS_INSTANCE_NAME}-user"
      - name: DBPASSWORD
        valueFrom:
          secretKeyRef:
           name: "${RDS_INSTANCE_NAME}-password"
           key: password
EOF
kubectl apply -f rds-pods.yaml

# Verify
kubectl get pod app

# Check the pod's environment variables
kubectl exec -it app -- env | grep DB
DBPASSWORD=qwe12345
DBHOST=myrds.cwg3x2hx1ico.ap-northeast-2.rds.amazonaws.com
DBPORT=3306
DBUSER=admin

 

Change the RDS DB identifier and observe what happens.

# [Terminal] monitoring
watch -d "kubectl get dbinstance; echo; kubectl get cm myrds-conn-cm -o yaml | kubectl neat"

# If you change it directly in RDS like below, the rds controller does not track the change, so the k8s state and the AWS state drift apart
# Change the DB identifier: studyend
aws rds modify-db-instance --db-instance-identifier $RDS_INSTANCE_NAME --new-db-instance-identifier studyend --apply-immediately

# Now update the DB identifier via the k8s resource >> what happens?
kubectl patch dbinstance myrds --type=merge -p '{"spec":{"dbInstanceIdentifier":"studyend"}}'
dbinstance.rds.services.k8s.aws/myrds patched

# Verify
kubectl get dbinstance myrds
NAME    STATUS
myrds   available

kubectl describe dbinstance myrds

The rename is in progress.

 

It has been changed.

 

Finally, let's confirm the updated status information was reflected.

# Check the status: confirm the address changed!
kubectl get dbinstances myrds -o jsonpath={.status.endpoint} | jq
{
  "address": "studyend.cwg3x2hx1ico.ap-northeast-2.rds.amazonaws.com",
  "hostedZoneID": "ZLA2NUCOLGUUR",
  "port": 3306
}

# Check the pod's env vars >> the pod cannot see the change because the values were injected as env vars at startup
kubectl exec -it app -- env | grep DB
DBPASSWORD=qwe12345
DBHOST=myrds.cwg3x2hx1ico.ap-northeast-2.rds.amazonaws.com
DBPORT=3306
DBUSER=admin

# Delete and recreate the pod, then check again
kubectl delete pod app && kubectl apply -f rds-pods.yaml

# Check the pod's env vars >> the change is now visible!
# In other words, for deployments, daemonsets, and statefulsets you could apply the env change with a rollout (see the sketch below)!
kubectl exec -it app -- env | grep DB
DBPASSWORD=qwe12345
DBHOST=studyend.cwg3x2hx1ico.ap-northeast-2.rds.amazonaws.com
DBPORT=3306
DBUSER=admin
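
A minimal sketch of that rollout idea, assuming the app were managed by a Deployment named app (hypothetical name) instead of a bare pod:

# restart the pods so they re-read the updated ConfigMap values at startup
kubectl rollout restart deployment/app
kubectl rollout status deployment/app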

 

Delete RDS

The myrds instance has to be deleted manually in the AWS web console.

# Delete the pod
kubectl delete pod app

# Delete the RDS instance
kubectl delete -f rds-mariadb.yaml

Deletion is in progress.

 

 


 

2. Flux

Flux is a GitOps tool for Kubernetes.

Flux reads Kubernetes manifests stored in Git and deploys them to the cluster.

Source: 악분일상's blog

 

Source: 악분일상's blog

https://malwareanalysis.tistory.com/612

 


 

 

Differences between Flux and Argo CD (reference: 악분일상's blog)

Flux and Argo CD are both widely used tools for implementing GitOps workflows in Kubernetes. Both read a cluster's desired state from a Git repository and reconcile the actual state to match it.

First, Flux is said to be much more specialized for Kustomize than Argo CD.

Outside of Kustomize, however, Argo CD supports more features than Flux: Flux makes multi-cluster deployment harder and does not support permission management or SSO integration.

  1. Design Philosophy: Argo CD itself stays relatively simple, and extending it requires the other Argo project tools that work alongside it. Flux, in contrast, ships some of that functionality built in and is designed on the assumption that users configure their environment with it.
  2. Feature Differences: Flux focuses on managing not just application deployment but also system and network configuration, whereas Argo CD focuses on the continuous deployment and updating of applications. In addition, Flux can monitor an image repository and deploy automatically when a new image is pushed (see the sketch after this list), which Argo CD does not provide.
  3. User Interface: Argo CD offers a user-friendly web UI that makes it easy to visualize and manage cluster state. Flux does not provide as rich a web UI, but it can be used and managed through its CLI and API.
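
As a hedged illustration of that image-automation feature (it requires bootstrapping Flux with the extra image-reflector and image-automation controllers; the names below are placeholders):

# scan an image repository and pick the newest tag matching a semver range
flux create image repository podinfo \
  --image=ghcr.io/stefanprodan/podinfo --interval=1m
flux create image policy podinfo \
  --image-ref=podinfo --select-semver='>=1.0.0'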

 

 


 

Flux CLI installation and bootstrap

Create a GitHub token in advance

# Install the Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash
. <(flux completion bash)

# Check the version
flux --version
flux version 2.0.0-rc.5

# Set your GitHub token and username
export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>
export GITHUB_TOKEN=ghp_###
export GITHUB_USER=somaz94

# Bootstrap
## Creates a git repository fleet-infra on your GitHub account.
## Adds Flux component manifests to the repository.
## Deploys Flux Components to your Kubernetes Cluster.
## Configures Flux components to track the path /clusters/my-cluster/ in the repository.
flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=fleet-infra \
  --branch=main \
  --path=./clusters/my-cluster \
  --personal

# Verify the installation
kubectl get pods -n flux-system
NAME                                       READY   STATUS    RESTARTS   AGE
helm-controller-fbdd59577-przqn            1/1     Running   0          23s
kustomize-controller-6b67b54cf8-898hd      1/1     Running   0          23s
notification-controller-78f4869c94-zcf9j   1/1     Running   0          23s
source-controller-75db64d9f7-f7fb4         1/1     Running   0          23s

kubectl get-all -n flux-system
NAME                                                               NAMESPACE    AGE
configmap/kube-root-ca.crt                                         flux-system  37s
endpoints/notification-controller                                  flux-system  35s
endpoints/source-controller                                        flux-system  35s
endpoints/webhook-receiver                                         flux-system  34s
pod/helm-controller-fbdd59577-przqn                                flux-system  34s
pod/kustomize-controller-6b67b54cf8-898hd                          flux-system  34s
pod/notification-controller-78f4869c94-zcf9j                       flux-system  34s
pod/source-controller-75db64d9f7-f7fb4                             flux-system  34s
...

kubectl get crd | grep fluxc
alerts.notification.toolkit.fluxcd.io           2023-06-04T13:38:46Z
buckets.source.toolkit.fluxcd.io                2023-06-04T13:38:46Z
gitrepositories.source.toolkit.fluxcd.io        2023-06-04T13:38:46Z
helmcharts.source.toolkit.fluxcd.io             2023-06-04T13:38:46Z
helmreleases.helm.toolkit.fluxcd.io             2023-06-04T13:38:46Z
helmrepositories.source.toolkit.fluxcd.io       2023-06-04T13:38:46Z
kustomizations.kustomize.toolkit.fluxcd.io      2023-06-04T13:38:47Z
ocirepositories.source.toolkit.fluxcd.io        2023-06-04T13:38:47Z
providers.notification.toolkit.fluxcd.io        2023-06-04T13:38:47Z
receivers.notification.toolkit.fluxcd.io        2023-06-04T13:38:47Z

kubectl get gitrepository -n flux-system
NAME          URL                                        AGE   READY   STATUS
flux-system   ssh://git@github.com/somaz94/fleet-infra   56s   True    stored artifact for revision 'main@sha1:cb4bd5feb0339b2fc08ba214f85300634c23f5c9'

  • Check the private repository on your GitHub: the Flux manifests are installed under the my-cluster folder

Private repository created on GitHub, confirmed!

 


 

Install the gitops CLI

  • Link → install the Flux dashboard
# Install the gitops CLI
curl --silent --location "https://github.com/weaveworks/weave-gitops/releases/download/v0.24.0/gitops-$(uname)-$(uname -m).tar.gz" | tar xz -C /tmp
sudo mv /tmp/gitops /usr/local/bin
gitops version

# Install the Flux dashboard
PASSWORD="password"
gitops create dashboard ww-gitops --password=$PASSWORD

# Verify
flux -n flux-system get helmrelease
NAME            REVISION        SUSPENDED       READY   MESSAGE
ww-gitops       4.0.22          False           True    Release reconciliation succeeded

kubectl -n flux-system get pod,svc
NAME                                           READY   STATUS    RESTARTS   AGE
pod/helm-controller-fbdd59577-przqn            1/1     Running   0          3m34s
pod/kustomize-controller-6b67b54cf8-898hd      1/1     Running   0          3m34s
pod/notification-controller-78f4869c94-zcf9j   1/1     Running   0          3m34s
pod/source-controller-75db64d9f7-f7fb4         1/1     Running   0          3m34s
pod/ww-gitops-weave-gitops-66dc44989f-7zp5x    1/1     Running   0          32s

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/notification-controller   ClusterIP   10.100.240.250   <none>        80/TCP     3m35s
service/source-controller         ClusterIP   10.100.253.44    <none>        80/TCP     3m35s
service/webhook-receiver          ClusterIP   10.100.60.179    <none>        80/TCP     3m34s
service/ww-gitops-weave-gitops    ClusterIP   10.100.8.210     <none>        9001/TCP   32s

 

Set up the Ingress.

MyDomain=somaz.link
CERT_ARN=`aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text`
echo $CERT_ARN $MyDomain
arn:aws:acm:ap-northeast-2:6118xxxxxxxxxx:certificate/75exxxx-5xxx-4xxx-8xxx-abxxxxxxxxx somaz.link

# Configure the Ingress
cat <<EOT > gitops-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitops-ingress
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: gitops.$MyDomain
    http:
      paths:
      - backend:
          service:
            name: ww-gitops-weave-gitops
            port:
              number: 9001
        path: /
        pathType: Prefix
EOT
kubectl apply -f gitops-ingress.yaml -n flux-system

# Verify the deployment
kubectl get ingress -n flux-system
NAME             CLASS   HOSTS               ADDRESS                                                        PORTS   AGE
gitops-ingress   alb     gitops.somaz.link   myeks-ingress-alb-490566532.ap-northeast-2.elb.amazonaws.com   80      21s

# Check the GitOps access info >> open the web UI and verify
echo -e "GitOps Web https://gitops.$MyDomain"
GitOps Web https://gitops.somaz.link

LoadBalancer Active confirmed!
Logging in
Success!

 

hello world (kustomize)

 

Deploy the nginx manifests in GitHub to Kubernetes.

Kustomize is used for the deployment.

# Create a source : types - git, helm, oci, bucket
# flux create source {source type}
# Create a git source from the repo prepared by 악분 (Sungwook Choi)
GITURL="https://github.com/sungwook-practice/fluxcd-test.git"
flux create source git nginx-example1 --url=$GITURL --branch=main --interval=30s

# Check the sources
flux get sources git
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:cb4bd5fe      False           True    stored artifact for revision 'main@sha1:cb4bd5fe'
nginx-example1  main@sha1:4478b54c      False           True    stored artifact for revision 'main@sha1:4478b54c'

kubectl -n flux-system get gitrepositories
NAME             URL                                                    AGE   READY   STATUS
flux-system      ssh://git@github.com/somaz94/fleet-infra               11m   True    stored artifact for revision 'main@sha1:cb4bd5feb0339b2fc08ba214f85300634c23f5c9'
nginx-example1   https://github.com/sungwook-practice/fluxcd-test.git   16s   True    stored artifact for revision 'main@sha1:4478b54cb7a8eaf1ee2665e2b3dd5bcfd55e9da9'

 

Create the Flux application.

Type: kustomization, Git source path (--path ./nginx) → check it in the gitops web dashboard

# [Terminal] monitoring
watch -d kubectl get pod,svc nginx-example1

# Create the Flux application: nginx-example1
flux create kustomization nginx-example1 --target-namespace=default --interval=1m --source=nginx-example1 --path="./nginx" --health-check-timeout=2m

# Verify
kubectl get pod,svc nginx-example1
NAME                 READY   STATUS    RESTARTS   AGE
pod/nginx-example1   1/1     Running   0          10s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-example1   ClusterIP   10.100.28.102   <none>        80/TCP    10s

kubectl get kustomizations -n flux-system
NAME             AGE   READY   STATUS
flux-system      13m   True    Applied revision: main@sha1:cb4bd5feb0339b2fc08ba214f85300634c23f5c9
nginx-example1   21s   True    Applied revision: main@sha1:4478b54cb7a8eaf1ee2665e2b3dd5bcfd55e9da9

flux get kustomizations

 

Delete the application. Huh? The workloads are not removed.

# [Terminal] monitoring
watch -d kubectl get pod,svc nginx-example1

# Delete the Flux application >> what about the pod and service? The app was created with --prune false (the default)
flux delete kustomization nginx-example1 
? Are you sure you want to delete this kustomization? [y/N] y█
✔ kustomization deleted

flux get kustomizations
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:cb4bd5fe      False           True    Applied revision: main@sha1:cb4bd5fe

kubectl get pod,svc nginx-example1
NAME                 READY   STATUS    RESTARTS   AGE
pod/nginx-example1   1/1     Running   0          76s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-example1   ClusterIP   10.100.28.102   <none>        80/TCP    76s

 

 

Setting the --prune option to true makes the deletion propagate!

# Re-create the flux application : --prune option true
flux create kustomization nginx-example1 \
  --target-namespace=default \
  --prune=true \
  --interval=1m \
  --source=nginx-example1 \
  --path="./nginx" \
  --health-check-timeout=2m

# Check
flux get kustomizations
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:cb4bd5fe      False           True    Applied revision: main@sha1:cb4bd5fe
nginx-example1  main@sha1:4478b54c      False           True    Applied revision: main@sha1:4478b54c

kubectl get pod,svc nginx-example1
NAME                 READY   STATUS    RESTARTS   AGE
pod/nginx-example1   1/1     Running   0          2m34s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-example1   ClusterIP   10.100.28.102   <none>        80/TCP    2m34s

# Delete the flux application >> the Pod and Service? With --prune true they are deleted as well!
flux delete kustomization nginx-example1 
flux get kustomizations
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:cb4bd5fe      False           True    Applied revision: main@sha1:cb4bd5fe

kubectl get pod,svc nginx-example1
Error from server (NotFound): pods "nginx-example1" not found
Error from server (NotFound): services "nginx-example1" not found

# Delete the flux source
flux delete source git nginx-example1

# Check the sources
flux get sources git
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:cb4bd5fe      False           True    stored artifact for revision 'main@sha1:cb4bd5fe'

kubectl -n flux-system get gitrepositories
NAME          URL                                        AGE   READY   STATUS
flux-system   ssh://git@github.com/somaz94/fleet-infra   17m   True    stored artifact for revision 'main@sha1:cb4bd5feb0339b2fc08ba214f85300634c23f5c9'

 


 

Official Docs sample hands-on

 

Clone the git repository

# Clone the git repository : enter your own GitHub username and token
git clone https://github.com/$GITHUB_USER/fleet-infra
Username for 'https://github.com': somaz94
Password for 'https://somaz94@github.com': ghp_4Qxxxxxxxxxxxxxxxxxxxxxxxx
remote: Enumerating objects: 13, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 13 (delta 0), reused 13 (delta 0), pack-reused 0
Receiving objects: 100% (13/13), 34.98 KiB | 2.33 MiB/s, done.

# Move into the directory
cd fleet-infra
tree
.
└── clusters
    └── my-cluster
        └── flux-system
            ├── gotk-components.yaml
            ├── gotk-sync.yaml
            └── kustomization.yaml

3 directories, 3 files

 

Add podinfo repository to Flux

# Generate the GitRepository yaml file
flux create source git podinfo \
  --url=https://github.com/stefanprodan/podinfo \
  --branch=master \
  --interval=30s \
  --export > ./clusters/my-cluster/podinfo-source.yaml

# Check the GitRepository yaml file
cat ./clusters/my-cluster/podinfo-source.yaml | yh
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/stefanprodan/podinfo

# Commit and push the podinfo-source.yaml file to the fleet-infra repository >> check on GitHub
git config --global user.name "somaz94"
git config --global user.email "genius5711@gmail.com"
git add -A && git commit -m "Add podinfo GitRepository"
git push
Username for 'https://github.com': <your GitHub username> # somaz94
Password for 'https://somaz94@github.com': <your GitHub token>
Enumerating objects: 8, done.
Counting objects: 100% (8/8), done.
Delta compression using up to 2 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (5/5), 545 bytes | 545.00 KiB/s, done.
Total 5 (delta 0), reused 0 (delta 0), pack-reused 0
To https://github.com/somaz94/fleet-infra
   cb4bd5f..c2db5d4  main -> main

# Check the sources
flux get sources git
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:c2db5d47      False           True    stored artifact for revision 'main@sha1:c2db5d47'
podinfo         master@sha1:073f1ec5    False           True    stored artifact for revision 'master@sha1:073f1ec5'

kubectl -n flux-system get gitrepositories
NAME          URL                                        AGE   READY   STATUS
flux-system   ssh://git@github.com/somaz94/fleet-infra   21m   True    stored artifact for revision 'main@sha1:c2db5d476e5278dffc7a6561630c0700868de2b0'
podinfo       https://github.com/stefanprodan/podinfo    12s   True    stored artifact for revision 'master@sha1:073f1ec5aff930bd3411d33534e91cbe23302324'

 

Deploy podinfo application : Configure Flux to build and apply the kustomize directory located in the podinfo repository

# [Terminal]
watch -d kubectl get pod,svc

# Use the flux create command to create a Kustomization that applies the podinfo deployment.
flux create kustomization podinfo \
  --target-namespace=default \
  --source=podinfo \
  --path="./kustomize" \
  --prune=true \
  --interval=5m \
  --export > ./clusters/my-cluster/podinfo-kustomization.yaml

# Check the file
cat ./clusters/my-cluster/podinfo-kustomization.yaml | yh
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m0s
  path: ./kustomize
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo
  targetNamespace: default

# Commit and push the Kustomization manifest to the repository:
git add -A && git commit -m "Add podinfo Kustomization"
git push

# Check
kubectl get pod,svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   168m

kubectl get kustomizations -n flux-system
NAME          AGE   READY   STATUS
flux-system   24m   True    Applied revision: main@sha1:c2db5d476e5278dffc7a6561630c0700868de2b0

flux get kustomizations
NAME            REVISION                SUSPENDED       READY   MESSAGE
flux-system     main@sha1:56f8fdee      False           True    Applied revision: main@sha1:56f8fdee
podinfo         master@sha1:073f1ec5    False           True    Applied revision: master@sha1:073f1ec5

tree
.
└── clusters
    └── my-cluster
        ├── flux-system
        │   ├── gotk-components.yaml
        │   ├── gotk-sync.yaml
        │   └── kustomization.yaml
        ├── podinfo-kustomization.yaml
        └── podinfo-source.yaml

3 directories, 5 files

 

Watch Flux sync the application

With replicas set to 1, the pod is deleted and then immediately recreated! With replicas set to 3, the 3 pods are kept. (See the note after the code block below.)

# [Terminal]
watch -d kubectl get pod,svc

# Try changing the pod count >> what happens?
# With replicas=1, it is deleted and then immediately recreated!
# With replicas=3, the 3 pods are kept.
kubectl scale deployment podinfo --replicas 1
...
kubectl scale deployment podinfo --replicas 3
...
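Why the difference? A plausible explanation (an assumption, not verified in this session) is that the podinfo kustomize directory ships a HorizontalPodAutoscaler: scaling below its minimum replica count is corrected immediately, while 3 replicas fall inside the allowed range and are left alone. Quick check:

# If an HPA came with podinfo, its min/max replica counts explain the behavior above
kubectl get hpa podinfo
kubectl describe hpa podinfo | grep -i replicas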


 

Cleanup

#
flux delete kustomization podinfo
flux delete source git podinfo

#
flux uninstall --namespace=flux-system
  • Remove the fleet-infra repo from GitHub (see the gh sketch below)

Cleanup done!
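If the GitHub CLI is available, the repo can also be removed from the terminal (a convenience sketch, not part of the original walkthrough; the token needs the delete_repo scope):

# Delete the bootstrap repo (the flag may be --confirm on older gh versions)
gh repo delete somaz94/fleet-infra --yes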


 

3. ArgoCD

2023.02.06 - [Education, Community Reviews] - PKOS Kubernetes Study Week 4 - Harbor Gitlab Argocd (somaz.tistory.com)


EKS Workshop - Link

 


4. Crossplane

Crossplane is a Cloud Native Computing Foundation (CNCF) project, an open-source tool that uses the Kubernetes API to manage and provision cloud infrastructure and services.

Crossplane's goal is to provide a single, unified API over diverse cloud services so that developers can declare and manage infrastructure and services as code. This lets developers manage cloud services the same way they manage Kubernetes resources, which is the Infrastructure as Code (IaC) approach.

Crossplane supports a range of cloud providers (e.g., AWS, GCP, Azure), so developers can manage services from different providers through the same API as needed. Crossplane also lets users create Custom Resource Definitions (CRDs) to handle complex infrastructure deployments like native Kubernetes resources.
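As a taste of that model, here is a minimal sketch (assumptions: Crossplane itself is already installed, AWS credentials are registered as a ProviderConfig named default, and the community provider-aws CRDs are used; API groups and versions vary by provider release):

# 1) Install the community AWS provider package (version tag is illustrative)
cat <<EOT | kubectl apply -f -
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-aws:v0.39.0
EOT

# 2) Once the provider is healthy, declare a real S3 bucket as a Kubernetes object
cat <<EOT | kubectl apply -f -
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: somaz-crossplane-example    # bucket name is illustrative
spec:
  forProvider:
    locationConstraint: ap-northeast-2
  providerConfigRef:
    name: default                   # assumes credentials were set up under this name
EOT

# Crossplane reconciles it like any other Kubernetes resource
kubectl get bucket

Deleting the Bucket object deletes the actual bucket as well, which is exactly the declarative, Kubernetes-native loop described above.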



 

5. eksdemo

The easy button for learning, testing and demoing Amazon EKS
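To get a feel for it, a few representative commands (following the awslabs/eksdemo README; the cluster name and flags here are illustrative, so verify against the current docs):

# Install eksdemo with Homebrew
brew install aws/tap/eksdemo

# Create a demo EKS cluster (name and flags are illustrative)
eksdemo create cluster demo --instance t3.large --nodes 3

# Install an application; eksdemo wires up dependencies such as IRSA automatically
eksdemo install aws-lb-controller -c demo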

 


 

(After completing the hands-on) Delete resources

 

Delete the Helm chart

helm uninstall -n monitoring kube-prometheus-stack

 

Delete the EKS cluster

eksctl delete cluster --name $CLUSTER_NAME && aws cloudformation delete-stack --stack-name $CLUSTER_NAME
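
To make sure nothing is left behind (a quick check, not in the original steps):

# Once deletion finishes, describing the stack should fail with 'does not exist'
aws cloudformation describe-stacks --stack-name $CLUSTER_NAME 2>&1 | tail -1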

 


Reference

ACK EC2-Controller installation with Helm

Flux

Crossplane

 

 
