This post summarizes how to scale out and update the containers deployed in the previous post.
Kubernetes Operation
Pods can be queried using the label that was applied at deployment time.
$ kubectl get pods -l 'app=tomcat-petclinic'
NAME READY STATUS RESTARTS AGE
apache-petclinic-54bf9c58cb-5w46p 1/1 Running 0 1d
mysql-petclinic-b854f5ccc-knrmf 1/1 Running 0 1d
tomcat-petclinic-58d6f5959c-kqsw5 1/1 Running 0 1d
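If you are unsure which labels a pod actually carries, the --show-labels flag prints them next to every pod; for example:
$ kubectl get pods --show-labels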
Query the deployment status. This shows how many pods each deployment currently has.
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
apache-petclinic 1 1 1 1 1d
mysql-petclinic 1 1 1 1 1d
tomcat-petclinic 1 1 1 1 1d
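Adding -o wide should also list the container image and selector behind each deployment, which is useful to check before an update:
$ kubectl get deployments -o wide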
Query the Service information.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache-petclinic NodePort 10.152.183.19 <none> 80:30124/TCP 1d
default-http-backend ClusterIP 10.152.183.124 <none> 80/TCP 23d
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 23d
mysql-petclinic ClusterIP 10.152.183.134 <none> 3306/TCP 1d
tomcat-petclinic ClusterIP 10.152.183.100 <none> 8009/TCP 1d
Services can also be filtered by label.
$ kubectl get services -l 'app=tomcat-petclinic'
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache-petclinic NodePort 10.152.183.19 <none> 80:30124/TCP 1d
mysql-petclinic ClusterIP 10.152.183.134 <none> 3306/TCP 1d
tomcat-petclinic ClusterIP 10.152.183.100 <none> 8009/TCP 1d
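Since apache-petclinic is exposed as a NodePort service (port 30124 above), it can be reached from outside the cluster through any node; <node-ip> below is just a placeholder for a node's address:
$ curl http://<node-ip>:30124/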
Scale Out
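The replica count of a deployment is changed with kubectl scale; a minimal sketch that scales apache-petclinic out to the three replicas shown below:
$ kubectl scale deployment apache-petclinic --replicas=3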
Querying the deployments afterwards shows the current state.
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
apache-petclinic 3 3 3 1 1d
mysql-petclinic 1 1 1 1 1d
tomcat-petclinic 1 1 1 1 1d
Since the scale out has completed, query the pods again; with -o wide you can also see which node each pod landed on.
$ kubectl get pods -o wide -l 'app=tomcat-petclinic'
NAME READY STATUS RESTARTS AGE IP NODE
apache-petclinic-54bf9c58cb-5w46p 1/1 Running 0 1d 10.1.40.6 k8s-node-1
apache-petclinic-54bf9c58cb-7795t 1/1 Running 0 1m 10.1.51.11 k8s-node-2
apache-petclinic-54bf9c58cb-rp6w5 1/1 Running 0 1m 10.1.30.4 k8s-node-3
mysql-petclinic-b854f5ccc-knrmf 1/1 Running 0 1d 10.1.40.3 k8s-node-1
tomcat-petclinic-58d6f5959c-kqsw5 1/1 Running 0 1d 10.1.40.5 k8s-node-1
Rolling Update
The rolling update process works like this: modify the container image, rebuild it, push the new image to Docker Hub, and then Kubernetes replaces the running pods one by one.
The Apache container was modified, rebuilt, and pushed to Docker Hub with the tag prod.
$ kubectl set image deployment/apache-petclinic apache-petclinic=thkang0/apache-petclinic:prod
deployment "apache-petclinic" image updated
If the update goes through normally, describing the deployment shows that the current Image has been changed, as below.
$ kubectl describe deployment apache-petclinic
Name: apache-petclinic
Namespace: default
CreationTimestamp: Wed, 28 Mar 2018 04:57:43 +0000
Labels: app=tomcat-petclinic
name=apache-petclinic
tier=frontend
Annotations: deployment.kubernetes.io/revision=2
Selector: app=tomcat-petclinic,name=apache-petclinic,tier=frontend
Replicas: 3 desired | 2 updated | 4 total | 2 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=tomcat-petclinic
name=apache-petclinic
tier=frontend
Containers:
apache-petclinic:
Image: thkang0/apache-petclinic:prod
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: apache-petclinic-54bf9c58cb (2/2 replicas created)
NewReplicaSet: apache-petclinic-568d9fd7db (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m deployment-controller Scaled up replica set apache-petclinic-54bf9c58cb to 3
Normal ScalingReplicaSet 57s deployment-controller Scaled up replica set apache-petclinic-568d9fd7db to 1
Normal ScalingReplicaSet 57s deployment-controller Scaled down replica set apache-petclinic-54bf9c58cb to 2
Normal ScalingReplicaSet 57s deployment-controller Scaled up replica set apache-petclinic-568d9fd7db to 2
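The revision annotation above (deployment.kubernetes.io/revision=2) can also be inspected through the rollout history:
$ kubectl rollout history deployment/apache-petclinic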
After triggering the update, you can watch the rolling update in progress; here the two new apache-petclinic-568d9fd7db pods are stuck in ImagePullBackOff, so the remaining old pods keep running until the new ones become ready.
$ kubectl get pods -l 'tier=frontend'
NAME READY STATUS RESTARTS AGE
apache-petclinic-54bf9c58cb-5w46p 1/1 Running 0 1d
apache-petclinic-54bf9c58cb-7795t 1/1 Running 0 9m
apache-petclinic-568d9fd7db-64brr 0/1 ImagePullBackOff 0 3m
apache-petclinic-568d9fd7db-shczx 0/1 ImagePullBackOff 0 3m
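If the new image cannot be pulled, as the ImagePullBackOff status above suggests, the deployment can be rolled back to the previous revision:
$ kubectl rollout undo deployment/apache-petclinic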
Delete Applications
Delete the deployed applications with the following command.
$ kubectl delete deployments,services -l "tier in (frontend, backend)"
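Afterwards, the same label selector can be used to confirm that nothing is left:
$ kubectl get deployments,services -l "tier in (frontend, backend)"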