Overview
This post is a hands-on follow-up to the Object - Pod topics studied in the previously published post:
2022.08.29 - [교육, 커뮤니티 후기] - <인프런> 대세는 쿠버네티스 [초급] - No.7 Object - Pod
1. Pod
Practice 1) Creating a Pod
Create the Pod as shown below.
$ vi pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: container1
    image: tmkube/p8000
    ports:
    - containerPort: 8000
  - name: container2
    image: tmkube/p8080
    ports:
    - containerPort: 8080
$ kubectl apply -f pod-1.yaml
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 2/2 Running 0 28s 172.16.12.132 dh-k8s-node <none> <none>
After it is created, use the Pod IP to check each container's port.
$ kubectl describe po pod-1
Name: pod-1
Namespace: default
Priority: 0
Node: dh-k8s-node/192.168.21.113
Start Time: Tue, 30 Aug 2022 14:32:58 +0900
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 9147e1f435033ac92c3a57bad88a9796bd8547057f685034f7f94785bd8aaa73
cni.projectcalico.org/podIP: 172.16.12.132/32
cni.projectcalico.org/podIPs: 172.16.12.132/32
Status: Running
IP: 172.16.12.132
IPs:
IP: 172.16.12.132
$ curl 172.16.12.132:8000
containerPort : 8000
$ curl 172.16.12.132:8080
containerPort : 8080
Now exec into container1 and check with the curl command.
$ kubectl exec -ti pod-1 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "container1" out of: container1, container2
root@pod-1:/#
root@pod-1:/# curl localhost:8080
containerPort : 8080
root@pod-1:/# curl localhost:8000
containerPort : 8000
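As the warning above notes, the kubectl exec [POD] [COMMAND] form is deprecated. A minimal sketch of the current syntax, using -- and picking a container explicitly with -c (container names come from pod-1.yaml above):
$ kubectl exec -it pod-1 -c container2 -- /bin/bash     # interactive shell in container2
$ kubectl exec pod-1 -c container1 -- curl -s localhost:8080   # one-off command, no shell
Because both containers share the Pod's network namespace, localhost reaches either server from either container, which is exactly what the curl calls above demonstrate.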
Practice 2) Creating a Pod with a duplicate containerPort
This time, give both containers the same containerPort and create the Pod.
$ vi pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: container1
    image: tmkube/p8000
    ports:
    - containerPort: 8000
  - name: container2
    image: tmkube/p8000
    ports:
    - containerPort: 8000
$ kubectl apply -f pod-2.yaml
$ kubectl get po -o wide |grep pod-2
pod-2 1/2 Error 1 (4s ago) 10s 172.16.12.134 dh-k8s-node <none> <none>
The status shows an error. Let's look into it with describe and logs.
$ kubectl describe po pod-2
Name: pod-2
Namespace: default
Priority: 0
Node: dh-k8s-node/192.168.21.113
Start Time: Tue, 30 Aug 2022 14:48:32 +0900
Labels: <none>
Annotations: cni.projectcalico.org/containerID: d786ae9d806c67397f0352f0b8b04155091b68fe107559cbb557441453745697
cni.projectcalico.org/podIP: 172.16.12.134/32
cni.projectcalico.org/podIPs: 172.16.12.134/32
Status: Running
IP: 172.16.12.134
IPs:
IP: 172.16.12.134
Containers:
container1:
Container ID: docker://d7be96c370d325ccc7adf5a7fd1d17d23957633b8030e0bbc652499478f27687
Image: tmkube/p8000
Image ID: docker-pullable://tmkube/p8000@sha256:241f2721ac81d83f8b6513c3ec4f44a04a3173cb4629fff60da45909fc6f773d
Port: 8000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 30 Aug 2022 14:48:35 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zf2n8 (ro)
container2:
Container ID: docker://369c612c82e6aa028b8ca55bc14715531bca1dd0476cd4c14e95b1a332415111
Image: tmkube/p8000
Image ID: docker-pullable://tmkube/p8000@sha256:241f2721ac81d83f8b6513c3ec4f44a04a3173cb4629fff60da45909fc6f773d
Port: 8000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 30 Aug 2022 14:51:48 +0900
Finished: Tue, 30 Aug 2022 14:51:48 +0900
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zf2n8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-zf2n8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m34s default-scheduler Successfully assigned default/pod-2 to dh-k8s-node
Normal Pulling 3m33s kubelet Pulling image "tmkube/p8000"
Normal Pulled 3m31s kubelet Successfully pulled image "tmkube/p8000" in 2.622825113s
Normal Created 3m31s kubelet Created container container1
Normal Started 3m31s kubelet Started container container1
Normal Pulled 3m28s kubelet Successfully pulled image "tmkube/p8000" in 2.572180607s
Normal Pulled 3m24s kubelet Successfully pulled image "tmkube/p8000" in 2.572726498s
Normal Pulled 3m9s kubelet Successfully pulled image "tmkube/p8000" in 2.621793452s
Normal Pulling 2m44s (x4 over 3m31s) kubelet Pulling image "tmkube/p8000"
Normal Created 2m42s (x4 over 3m28s) kubelet Created container container2
Normal Pulled 2m42s kubelet Successfully pulled image "tmkube/p8000" in 2.53015705s
Normal Started 2m41s (x4 over 3m28s) kubelet Started container container2
Warning BackOff 2m40s (x5 over 3m24s) kubelet Back-off restarting failed container
As shown below, container2 produces an error log saying that port 8000 is already in use. Containers in the same Pod share a single network namespace, so they cannot listen on the same port.
[root@dh-k8s-master ~]# kubectl logs pod-2 -c container1
[root@dh-k8s-master ~]# kubectl logs pod-2 -c container2
events.js:177
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE: address already in use :::8000
at Server.setupListenHandle [as _listen2] (net.js:1226:14)
at listenInCluster (net.js:1274:12)
at Server.listen (net.js:1362:7)
at Object.<anonymous> (/port8000.js:7:3)
at Module._compile (internal/modules/cjs/loader.js:774:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:785:10)
at Module.load (internal/modules/cjs/loader.js:641:32)
at Function.Module._load (internal/modules/cjs/loader.js:556:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:837:10)
at internal/main/run_main_module.js:17:11
Emitted 'error' event at:
at emitErrorNT (net.js:1253:8)
at processTicksAndRejections (internal/process/task_queues.js:84:9) {
code: 'EADDRINUSE',
errno: 'EADDRINUSE',
syscall: 'listen',
address: '::',
port: 8000
}
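A minimal sketch of a fix, assuming the two containers should simply listen on different ports: delete the failing pod-2 first, then recreate it with container2 switched to the tmkube/p8080 image used in practice 1.
$ kubectl delete -f pod-2.yaml
$ vi pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: container1
    image: tmkube/p8000
    ports:
    - containerPort: 8000
  - name: container2
    image: tmkube/p8080    # listens on 8080, so the two containers no longer collide
    ports:
    - containerPort: 8080
$ kubectl apply -f pod-2.yaml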
Practice 3) Auto healing after deleting a Pod that is managed by the cluster
Let's create a ReplicationController.
$ vi replication-1.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: replication-1
spec:
  replicas: 1
  selector:
    app: rc
  template:
    metadata:
      name: pod-1
      labels:
        app: rc
    spec:
      containers:
      - name: container
        image: tmkube/init
$ kubectl apply -f replication-1.yaml
replicationcontroller/replication-1 created
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 2/2 Running 0 35m 172.16.12.132 dh-k8s-node <none> <none>
replication-1-t764t 1/1 Running 0 101s 172.16.12.135 dh-k8s-node <none> <none>
Now delete the Pod that the ReplicationController created.
$ kubectl delete po replication-1-t764t
pod "replication-1-t764t" deleted
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 2/2 Running 0 39m 172.16.12.132 dh-k8s-node <none> <none>
replication-1-g56xt 1/1 Running 0 34s 172.16.12.136 dh-k8s-node <none> <none>
The ReplicationController immediately created a replacement Pod, so both the Pod name and its IP have changed.
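A couple of commands worth running here (not captured in this session) to confirm that the controller, not the Pod, is doing the healing:
$ kubectl get rc replication-1                   # should still report 1 desired / 1 current / 1 ready replica
$ kubectl scale rc replication-1 --replicas=2    # the controller creates or deletes Pods to match the new count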
2. Label
Practice 1) Creating Pods with labels
$ vi pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    type: web
    lo: dev
spec:
  containers:
  - name: container
    image: tmkube/init
$ vi pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
  labels:
    type: db
    lo: dev
spec:
  containers:
  - name: container
    image: tmkube/init
$ vi pod-3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
  labels:
    type: server
    lo: dev
spec:
  containers:
  - name: container
    image: tmkube/init
$ vi pod-4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-4
  labels:
    type: web
    lo: prod
spec:
  containers:
  - name: container
    image: tmkube/init
$ vi pod-5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-5
  labels:
    type: db
    lo: prod
spec:
  containers:
  - name: container
    image: tmkube/init
$ vi pod-6.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-6
  labels:
    type: server
    lo: prod
spec:
  containers:
  - name: container
    image: tmkube/init
$ kubectl apply -f pod-1.yaml
pod/pod-1 created
$ kubectl apply -f pod-2.yaml
pod/pod-2 created
$ kubectl apply -f pod-3.yaml
pod/pod-3 created
$ kubectl apply -f pod-4.yaml
pod/pod-4 created
$ kubectl apply -f pod-5.yaml
pod/pod-5 created
$ kubectl apply -f pod-6.yaml
pod/pod-6 created
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 1/1 Running 0 4m9s 172.16.12.138 dh-k8s-node <none> <none>
pod-2 1/1 Running 0 4m6s 172.16.12.139 dh-k8s-node <none> <none>
pod-3 1/1 Running 0 2m55s 172.16.12.140 dh-k8s-node <none> <none>
pod-4 1/1 Running 0 36s 172.16.12.141 dh-k8s-node <none> <none>
pod-5 1/1 Running 0 34s 172.16.12.142 dh-k8s-node <none> <none>
pod-6 1/1 Running 0 31s 172.16.12.143 dh-k8s-node <none> <none>
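Applying the files one at a time works; as a shortcut, kubectl can also apply every manifest in a directory at once (a sketch, assuming the six pod YAMLs are the only manifests in the current directory):
$ kubectl apply -f .    # applies pod-1.yaml through pod-6.yaml in one command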
Now let's create a Service and use a label selector to pick the Pods we want.
$ vi svc-web.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-for-web
spec:
  selector:
    type: web
  ports:
  - port: 8080
$ kubectl apply -f svc-web.yaml
service/svc-for-web created
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h <none>
svc-for-web ClusterIP 10.111.41.178 <none> 8080/TCP 118s type=web
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 1/1 Running 0 14m 172.16.12.138 dh-k8s-node <none> <none>
pod-2 1/1 Running 0 53s 172.16.12.144 dh-k8s-node <none> <none>
pod-3 1/1 Running 0 13m 172.16.12.140 dh-k8s-node <none> <none>
pod-4 1/1 Running 0 11m 172.16.12.141 dh-k8s-node <none> <none>
pod-5 1/1 Running 0 11m 172.16.12.142 dh-k8s-node <none> <none>
pod-6 1/1 Running 0 10m 172.16.12.143 dh-k8s-node <none> <none>
$ kubectl describe svc svc-for-web
Name: svc-for-web
Namespace: default
Labels: <none>
Annotations: <none>
Selector: type=web
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.92.106
IPs: 10.107.92.106
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.16.12.138:8080,172.16.12.141:8080
Session Affinity: None
Events: <none>
Pods 1 and 4, whose label is type=web, were assigned to the Service as endpoints.
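A quick sanity check (not run in this session) is to look at the Endpoints object the Service maintains; it should list exactly those two Pod IPs:
$ kubectl get endpoints svc-for-web    # mirrors the Pods that currently match the selector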
Next, let's make a Service that selects only the Pods labeled lo=prod.
$ vi svc-prod.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-for-prod
spec:
  selector:
    lo: prod
  ports:
  - port: 8080
$ kubectl apply -f svc-prod.yaml
service/svc-for-prod created
$ kubectl describe svc svc-for-prod
Name: svc-for-prod
Namespace: default
Labels: <none>
Annotations: <none>
Selector: lo=prod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.25.208
IPs: 10.102.25.208
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.16.12.141:8080,172.16.12.142:8080,172.16.12.143:8080
Session Affinity: None
Events: <none>
# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 1/1 Running 0 83m 172.16.12.138 dh-k8s-node <none> <none>
pod-2 1/1 Running 0 70m 172.16.12.144 dh-k8s-node <none> <none>
pod-3 1/1 Running 0 82m 172.16.12.140 dh-k8s-node <none> <none>
pod-4 1/1 Running 0 80m 172.16.12.141 dh-k8s-node <none> <none>
pod-5 1/1 Running 0 80m 172.16.12.142 dh-k8s-node <none> <none>
pod-6 1/1 Running 0 80m 172.16.12.143 dh-k8s-node <none> <none>
Pods 4, 5, and 6, whose label is lo=prod, were assigned to the Service.
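The same label selectors work directly from kubectl as well; for reference (the label keys and values are the ones used in this post):
$ kubectl get po -l lo=prod             # list only the production Pods
$ kubectl get po -l lo=prod,type=web    # selectors can be combined: production web Pods only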
3. Node Schedule
Practice 1) Creating a Pod with a nodeSelector
First, delete all the Pods and Services created earlier.
$ kubectl delete -f svc-prod.yaml
$ kubectl delete -f svc-web.yaml
# delete every Pod that is currently Running
$ kubectl get po |grep Running | awk '{print $1}' | xargs kubectl delete po
pod "pod-1" deleted
pod "pod-2" deleted
pod "pod-3" deleted
pod "pod-4" deleted
pod "pod-5" deleted
pod "pod-6" deleted
$ vi pod-3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
spec:
  nodeSelector:
    kubernetes.io/hostname: dh-k8s-node
  containers:
  - name: container
    image: tmkube/init
$ kubectl apply -f pod-3.yaml
pod/pod-3 created
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-3 1/1 Running 0 5s 172.16.12.145 dh-k8s-node <none> <none>
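The nodeSelector above matches on the built-in kubernetes.io/hostname label. To see which labels each node carries, or to attach a custom one (the gpu=true key below is purely hypothetical), something like this should work:
$ kubectl get nodes --show-labels           # shows kubernetes.io/hostname and the other node labels
$ kubectl label nodes dh-k8s-node gpu=true  # hypothetical custom label, usable in a nodeSelector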