Kubernetes Workload Containerization in Practice

1 Benefits of Containerizing Workloads

  1. Higher resource utilization and lower IT deployment costs.
  2. Faster delivery: Kubernetes enables rapid deployment and delivery of
    microservices, batch scheduling of containers, and second-level startup.
  3. Support for horizontal scaling, canary releases, rollbacks, distributed
    tracing, service governance, and more.
  4. Automatic elastic scaling in response to business load.
  5. Containers package the code and its environment into the image,
    guaranteeing consistency between test and production environments.
  6. Keeping pace with the cloud-native community avoids leaving the company
    technical debt and lays a solid foundation for later technology upgrades.
  7. Builds a personal reserve of cutting-edge skills and raises the
    individual engineer's level.


2 Workload Planning and Layered Image Builds


  • Start from a base image, e.g. ubuntu:22.04
  • Install the required runtime on top of the base image, e.g. JDK 11
  • Using the JDK 11 image as the base, build tomcat, dubbo, and spring images
  • Package the business code on top of the tomcat/dubbo/spring image to produce the concrete application image

2.1 Building the Base Image

[root@manager centos]# cat build-command.sh  # build script
#!/bin/bash
docker build -t harbor.qintianjun.local/k8s/centos-base:7.9.2009 .

docker push harbor.qintianjun.local/k8s/centos-base:7.9.2009 # push to the local harbor
[root@manager centos]# cat Dockerfile
#Custom CentOS base image
#Use the centos image from the local harbor as the base
FROM harbor.qintianjun.local/k8s/centos:7.9.2009
MAINTAINER qintianjun "454536188@qq.com"

#Copy filebeat from the build context
ADD filebeat-7.12.1-x86_64.rpm /tmp
#Install base packages and filebeat, then set the timezone
RUN yum install -y /tmp/filebeat-7.12.1-x86_64.rpm vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop && rm -rf /etc/localtime /tmp/filebeat-7.12.1-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Run sh build-command.sh; you should see output like the following:

(screenshots: docker build and docker push output)

This shows the image was built and pushed successfully; the uploaded image is now visible in Harbor:

(screenshot: centos-base:7.9.2009 listed in Harbor)
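As an optional sanity check (these commands are assumed, not part of the original walkthrough), pull the image back from Harbor and verify the timezone and the filebeat install:

docker pull harbor.qintianjun.local/k8s/centos-base:7.9.2009
docker run --rm harbor.qintianjun.local/k8s/centos-base:7.9.2009 date  # should print Asia/Shanghai (CST) time
docker run --rm harbor.qintianjun.local/k8s/centos-base:7.9.2009 rpm -q filebeat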

2.2 Building the JDK Image on Top of the Base Image

[root@manager jdk-1.8.212]# cat build-command.sh
#!/bin/bash
docker build -t harbor.qintianjun.local/k8s/jdk-base:v8.212 .
sleep 1
docker push harbor.qintianjun.local/k8s/jdk-base:v8.212
[root@manager jdk-1.8.212]# cat Dockerfile
#JDK Base Image
FROM harbor.qintianjun.local/k8s/centos-base:7.9.2009

MAINTAINER qintianjun "454536188@qq.com"


ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile


ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
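The `profile` file added above is not listed in this section. At minimum it would carry the same variables as the ENV directives; a sketch of the lines to append to a copy of the stock /etc/profile (assumed, prepare your own):

export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib/:$JRE_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin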

Build the image:

[root@manager jdk-1.8.212]# sh build-command.sh
Sending build context to Docker daemon 195MB
Step 1/9 : FROM harbor.qintianjun.local/k8s/centos-base:7.9.2009
---> 72b13222d098
Step 2/9 : MAINTAINER qintianjun "454536188@qq.com"
---> Running in e6d098a60b47
Removing intermediate container e6d098a60b47
---> a2ae6dace6d2
Step 3/9 : ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
---> 82bda0383e23
Step 4/9 : RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
---> Running in 026500d482c9
'/usr/local/jdk' -> '/usr/local/src/jdk1.8.0_212'
Removing intermediate container 026500d482c9
---> ff0d3d834344
Step 5/9 : ADD profile /etc/profile
---> 673880b7c07f
Step 6/9 : ENV JAVA_HOME /usr/local/jdk
---> Running in dc89db687bcb
Removing intermediate container dc89db687bcb
---> 44cbb3fb9715
Step 7/9 : ENV JRE_HOME $JAVA_HOME/jre
---> Running in f14546ac8229
Removing intermediate container f14546ac8229
---> 72d797a0fcaf
Step 8/9 : ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
---> Running in 05c9258eb922
Removing intermediate container 05c9258eb922
---> 2a2f8f6bb43a
Step 9/9 : ENV PATH $PATH:$JAVA_HOME/bin
---> Running in c5fe236e089b
Removing intermediate container c5fe236e089b
---> 5f53217d7526
Successfully built 5f53217d7526
Successfully tagged harbor.qintianjun.local/k8s/jdk-base:v8.212
The push refers to repository [harbor.qintianjun.local/k8s/jdk-base]
c5b980123fc0: Pushed
707c797f8ec8: Pushed
80c1c8f91f96: Pushed
12e7ca64edac: Mounted from k8s/centos-base # "Mounted" means the layer is reused from centos-base, avoiding a redundant upload
1e3f5eb5dee1: Mounted from k8s/centos-base
174f56854903: Mounted from k8s/centos-base
v8.212: digest: sha256:2ca63a7b441f06388764be5af76ff9dd3b25bbfe59f4b7d96434b49a5fe8cace size: 1582

Check in Harbor:

(screenshot: jdk-base:v8.212 listed in Harbor)

2.3 Building the Nginx Image on Top of the Base Image

[root@manager nginx-base]# cat build-command.sh
#!/bin/bash
docker build -t harbor.qintianjun.local/k8s/nginx-base:v1.20.2 .
sleep 1
docker push harbor.qintianjun.local/k8s/nginx-base:v1.20.2
[root@manager nginx-base]# cat Dockerfile
#Nginx Base Image
FROM harbor.qintianjun.local/k8s/centos-base:7.9.2009

MAINTAINER qintianjun "454536188@qq.com"

RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.20.2.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.20.2 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx &&rm -rf /usr/local/src/nginx-1.20.2.tar.gz

Build the image:

[root@manager nginx-base]# sh build-command.sh
Sending build context to Docker daemon 1.066MB
Step 1/5 : FROM harbor.qintianjun.local/k8s/centos-base:7.9.2009
---> 72b13222d098
Step 2/5 : MAINTAINER qintianjun "454536188@qq.com"
---> Using cache
---> a2ae6dace6d2
Step 3/5 : RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
---> Running in f655d2b5c708
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: my.mirrors.thegigabit.com
* extras: my.mirrors.thegigabit.com
* updates: my.mirrors.thegigabit.com
Package 2:vim-enhanced-7.4.629-8.el7_9.x86_64 already installed and latest version
Package wget-1.14-18.el7_6.1.x86_64 already installed and latest version
Package tree-1.6.0-10.el7.x86_64 already installed and latest version
Package lrzsz-0.12.20-36.el7.x86_64 already installed and latest version
Package gcc-4.8.5-44.el7.x86_64 already installed and latest version
Package gcc-c++-4.8.5-44.el7.x86_64 already installed and latest version
Package automake-1.13.4-3.el7.noarch already installed and latest version
Package pcre-8.32-17.el7.x86_64 already installed and latest version
Package pcre-devel-8.32-17.el7.x86_64 already installed and latest version
Package zlib-1.2.7-20.el7_9.x86_64 already installed and latest version
Package zlib-devel-1.2.7-20.el7_9.x86_64 already installed and latest version
Package 1:openssl-1.0.2k-25.el7_9.x86_64 already installed and latest version
Package 1:openssl-devel-1.0.2k-25.el7_9.x86_64 already installed and latest version
Package iproute-4.11.0-30.el7.x86_64 already installed and latest version
Package net-tools-2.0-0.25.20131004git.el7.x86_64 already installed and latest version
Package iotop-0.6-4.el7.noarch already installed and latest version
Nothing to do
Removing intermediate container f655d2b5c708
---> ffc80b4becf4
Step 4/5 : ADD nginx-1.20.2.tar.gz /usr/local/src/
---> c19e1687d011
Step 5/5 : RUN cd /usr/local/src/nginx-1.20.2 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx &&rm -rf /usr/local/src/nginx-1.20.2.tar.gz
---> Running in c9627a4fd
......
---> 5b1c620f0538
Successfully built 5b1c620f0538
Successfully tagged harbor.qintianjun.local/k8s/nginx-base:v1.20.2
The push refers to repository [harbor.qintianjun.local/k8s/nginx-base]
23b057fed7cd: Pushed
6f43c91a70b0: Pushed
49f4662da975: Pushed
12e7ca64edac: Mounted from k8s/jdk-base
1e3f5eb5dee1: Mounted from k8s/jdk-base
174f56854903: Mounted from k8s/jdk-base
v1.20.2: digest: sha256:994c8c49e41ad3cf2bcdf681d58dde4391a9128062372ec1d325c6a43fe4ca55 size: 1588

Check in Harbor:

(screenshot: nginx-base:v1.20.2 listed in Harbor)

2.4 Building the Tomcat Image from the JDK Image

[root@manager tomcat-base-8.5.43]# ls
apache-tomcat-8.5.43.tar.gz build-command.sh Dockerfile
[root@manager tomcat-base-8.5.43]# cat build-command.sh
#!/bin/bash
docker build -t harbor.qintianjun.local/k8s/tomcat-base:v8.5.43 .
sleep 3
docker push harbor.qintianjun.local/k8s/tomcat-base:v8.5.43
[root@manager tomcat-base-8.5.43]# cat Dockerfile
#Tomcat 8.5.43 base image
FROM harbor.qintianjun.local/k8s/jdk-base:v8.212

MAINTAINER qintianjun "454536188@qq.com"

RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz /apps
RUN useradd tomcat -u 2050 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data

2.5 Building the Application Image from the Tomcat Image

[root@manager tomcat-app1]# cat Dockerfile
#tomcat web1
FROM harbor.qintianjun.local/k8s/tomcat-base:v8.5.43
# Prepare the corresponding config files yourself; a sketch of run_tomcat.sh follows below
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
RUN chown -R tomcat.tomcat /data/ /apps/

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
[root@manager tomcat-app1]# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.qintianjun.local/k8s/tomcat-app1:${TAG} .
sleep 3
docker push harbor.qintianjun.local/k8s/tomcat-app1:${TAG}
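The run_tomcat.sh referenced above must be prepared alongside the Dockerfile. A minimal sketch of such a startup script (assumed, not necessarily the author's exact file): start tomcat as the unprivileged user, then block so the container's PID 1 stays alive.

#!/bin/bash
# Start tomcat as the tomcat user created in the base image.
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
# Keep a foreground process alive so the container does not exit.
tail -f /etc/hosts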

Build:

[root@manager tomcat-app1]# ./build-command.sh tomcat-app1-20220731
Sending build context to Docker daemon 24.13MB
Step 1/8 : FROM harbor.qintianjun.local/k8s/tomcat-base:v8.5.43
---> b92d19e3edb3
Step 2/8 : ADD catalina.sh /apps/tomcat/bin/catalina.sh
---> 67e7db5c915f
Step 3/8 : ADD server.xml /apps/tomcat/conf/server.xml
---> 37e3d4ea6059
Step 4/8 : ADD app1.tar.gz /data/tomcat/webapps/myapp/
---> 94226df8dad7
Step 5/8 : ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
---> 583b3b1830c3
Step 6/8 : RUN chown -R tomcat.tomcat /data/ /apps/
---> Running in 6c707556ca45
Removing intermediate container 6c707556ca45
---> cf73cc40f72d
Step 7/8 : EXPOSE 8080 8443
---> Running in d69b80d89ec1
Removing intermediate container d69b80d89ec1
---> 0f4df0fda324
Step 8/8 : CMD ["/apps/tomcat/bin/run_tomcat.sh"]
---> Running in 7d8d9fa07eea
Removing intermediate container 7d8d9fa07eea
---> 3ea8b69863be
Successfully built 3ea8b69863be
Successfully tagged harbor.qintianjun.local/k8s/tomcat-app1:tomcat-app1-20220731
The push refers to repository [harbor.qintianjun.local/k8s/tomcat-app1]
d1e27926a945: Pushed
b2d8029dc6d4: Pushed
f7163ed94019: Pushed
e3a0faa1d4f9: Pushed
7431aff42bd3: Pushed
3e60ee44c08b: Mounted from k8s/tomcat-base
ef54031810c7: Mounted from k8s/tomcat-base
a0bd3f2ec818: Mounted from k8s/tomcat-base
c5b980123fc0: Mounted from k8s/tomcat-base
707c797f8ec8: Mounted from k8s/tomcat-base
80c1c8f91f96: Mounted from k8s/tomcat-base
12e7ca64edac: Mounted from k8s/tomcat-base
1e3f5eb5dee1: Mounted from k8s/tomcat-base
174f56854903: Mounted from k8s/tomcat-base
tomcat-app1-20220731: digest: sha256:4eaa85a5af9e23dce2478473cc8580e2627119e9f6ce2cfec5ee3326c94b84dd size: 3252

Next, use the images just built to implement nginx + tomcat + NFS static/dynamic separation.

Case Study 1: Run nginx and tomcat Services from Custom Images, with NAS-Based Static/Dynamic Separation

(diagram: nginx + tomcat + NFS static/dynamic separation architecture)

Configure the tomcat business image on top of tomcat-base

Build the image:

[root@manager tomcat-app1]# cat Dockerfile 
#tomcat web1
FROM 192.168.88.138/base/tomcat-base:v8.5.43
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
RUN chown -R tomcat.tomcat /data/ /apps/
EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]


[root@manager tomcat-app1]# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t 192.168.88.138/k8s/tomcat-app1:${TAG} .
sleep 3
docker push 192.168.88.138/k8s/tomcat-app1:${TAG}


[root@manager tomcat-app1]# sh build-command.sh 2022080701
Sending build context to Docker daemon 24.13MB
Step 1/8 : FROM 192.168.88.138/base/tomcat-base:v8.5.43
---> ee5755a0eeee
Step 2/8 : ADD catalina.sh /apps/tomcat/bin/catalina.sh
---> 088261a93ba2
Step 3/8 : ADD server.xml /apps/tomcat/conf/server.xml
---> 65e5a76a394c
Step 4/8 : ADD app1.tar.gz /data/tomcat/webapps/myapp/
---> 89ad99b8f780
Step 5/8 : ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
---> f7e79a435dd4
Step 6/8 : RUN chown -R tomcat.tomcat /data/ /apps/
---> Running in 76a6a9bc8698
Removing intermediate container 76a6a9bc8698
---> bff60f96c4f9
Step 7/8 : EXPOSE 8080 8443
---> Running in 0796e1bd118a
Removing intermediate container 0796e1bd118a
---> 36a9fad9a841
Step 8/8 : CMD ["/apps/tomcat/bin/run_tomcat.sh"]
---> Running in 9dd900d7bf37
Removing intermediate container 9dd900d7bf37
---> 4b20463708f4
Successfully built 4b20463708f4
Successfully tagged 192.168.88.138/k8s/tomcat-app1:2022080701
The push refers to repository [192.168.88.138/k8s/tomcat-app1]
c595e46af53b: Pushed
d6d00542d507: Pushed
9350b1765bbb: Pushed
f7c55cf260ee: Pushed
563e59e4236c: Pushed
2f5077e05c3c: Mounted from base/tomcat-base
8708aaa5d61a: Mounted from base/tomcat-base
9725149d9d5b: Mounted from base/tomcat-base
138273c46704: Mounted from base/tomcat-base
f5058d269019: Mounted from base/tomcat-base
4aa713869ceb: Mounted from base/tomcat-base
80cb57ba4a28: Mounted from base/tomcat-base
cc31b2a0ae3e: Mounted from base/tomcat-base
174f56854903: Mounted from base/tomcat-base
2022080701: digest: sha256:f499c15308c6b66760bf650b868aa9331b4ab27895cdb158e36d18829a61c64c size: 3252

Write the yaml file:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment
  namespace: myserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
        - name: magedu-tomcat-app1-container
          image: harbor.qintianjun.local/k8s/tomcat-app1:tomcat-app1-20220731
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              protocol: TCP
              name: http
          env:
            - name: "password"
              value: "123456"
            - name: "age"
              value: "18"
          resources:
            limits:
              cpu: 1
              memory: "512Mi"
            requests:
              cpu: 500m
              memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-service-label
  name: magedu-tomcat-app1-service
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 30092
  selector:
    app: magedu-tomcat-app1-selector

Apply it with kubectl apply -f tomcat-app1.yaml, then check the status:

[root@master lab]# kubectl get pod,svc,ep -n myserver -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/magedu-tomcat-app1-deployment-67b5fdcfff-d4tmn 1/1 Running 1 21h 172.20.1.153 192.168.68.150 <none> <none>
pod/magedu-tomcat-app1-deployment-67b5fdcfff-f5497 1/1 Running 1 21h 172.20.2.171 192.168.68.149 <none> <none>

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/magedu-tomcat-app1-service NodePort 10.68.107.218 <none> 80:30092/TCP 21h app=magedu-tomcat-app1-selector

NAME ENDPOINTS AGE
endpoints/magedu-tomcat-app1-service 172.20.1.153:8080,172.20.2.171:8080 21h

Browsing to port 30092 on a node now shows the test page baked into the tomcat image earlier:

(screenshot: tomcat test page served via NodePort 30092)

This confirms tomcat is working.
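The same check can be done from a shell, using one of the node IPs from the kubectl output above:

curl http://192.168.68.150:30092/myapp/index.html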

Configure the nginx business image on top of nginx-base

Write nginx.conf:

user  tomcat tomcat;  # run nginx as the tomcat user throughout
worker_processes auto;
daemon off;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream tomcat_webserver { # upstream pointing at the tomcat svc configured above
        server magedu-tomcat-app1-service.myserver.svc.magedu.local:80;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            root html;
            index index.html index.htm;
        }
        location /webapp { # requests for the /webapp URI return static resources
            root html;
            index index.html index.htm;
        }
        location /myapp { # requests for /myapp are proxied to the tomcat backend
            proxy_pass http://tomcat_webserver;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
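The upstream above uses the tomcat service's cluster-DNS name; note that this cluster's DNS domain is magedu.local rather than the default cluster.local. To verify the name resolves and answers before baking it into the image, you can call it from one of the tomcat pods (pod name taken from the kubectl output above; wget is present in the base image):

kubectl exec -it magedu-tomcat-app1-deployment-67b5fdcfff-f5497 -n myserver -- wget -qO- http://magedu-tomcat-app1-service.myserver.svc.magedu.local/myapp/index.html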

Write the Dockerfile for nginx-web1:

[root@manager nginx]# cat Dockerfile
#Nginx 1.20.2
FROM harbor.qintianjun.local/k8s/nginx-base:v1.20.2
RUN useradd tomcat -u 2050
ADD nginx.conf /usr/local/nginx/conf/nginx.conf
#ADD auto-extracts tar archives while copying
ADD app1.tar.gz /usr/local/nginx/html/webapp/
ADD index.html /usr/local/nginx/html/index.html

#Mount paths for static resources
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images && chown tomcat.tomcat -R /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images

EXPOSE 80 443

CMD ["nginx"]
[root@manager nginx]# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.qintianjun.local/k8s/nginx-web1:${TAG} .
echo "Image build complete, pushing to harbor"
sleep 1
docker push harbor.qintianjun.local/k8s/nginx-web1:${TAG}
echo "Image pushed to harbor"

Edit the nginx yaml:

[root@master lab]# cat nginx-app1.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-nginx-deployment-label
  name: magedu-nginx-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-nginx-selector
  template:
    metadata:
      labels:
        app: magedu-nginx-selector
    spec:
      containers:
        - name: magedu-nginx-container
          image: harbor.qintianjun.local/k8s/nginx-web1:20220801-test1
          #command: ["/apps/tomcat/bin/run_tomcat.sh"]
          imagePullPolicy: IfNotPresent
          #imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
              name: http
            - containerPort: 443
              protocol: TCP
              name: https
          env:
            - name: "password"
              value: "123456"
            - name: "age"
              value: "20"
          resources:
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 500Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30090
  selector:
    app: magedu-nginx-selector

Apply it with kubectl apply, then browse to port 30090 on a node:

(screenshot: nginx index page on port 30090)

Requesting /webapp returns the static resource page:

(screenshot: static page under /webapp)

Requesting /myapp is forwarded to the backend tomcat:

(screenshot: tomcat page under /myapp)

The nginx forwarding to the backend now works.

Verification

Exec into each of the two tomcat containers and set /data/tomcat/webapps/myapp/index.html to v1 and v2 respectively:

[root@master lab]# kubectl exec -it pod/magedu-tomcat-app1-deployment-67b5fdcfff-f5497 -n myserver -- bash
[root@magedu-tomcat-app1-deployment-67b5fdcfff-f5497 /]# cat /data/tomcat/webapps/myapp/index.html
<h1>tomcat app1 for linux n66 v1</h1>

[root@master lab]# kubectl exec -it pod/magedu-tomcat-app1-deployment-67b5fdcfff-d4tmn -n myserver -- bash
[root@magedu-tomcat-app1-deployment-67b5fdcfff-d4tmn /]# cat /data/tomcat/webapps/myapp/index.html
<h1>tomcat app1 for linux n66 v2</h1>

Refreshing the page repeatedly now shows v1 and v2 alternating:

(screenshots: responses alternating between v1 and v2)

This shows requests are distributed across the backend tomcat pods in round-robin fashion.
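The alternation is also easy to see from a shell (node IP assumed as in the earlier output):

for i in 1 2 3 4; do curl -s http://192.168.68.150:30090/myapp/index.html; done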

Static resource files can also be kept on NAS (NFS); for reference:

[root@master lab]# cat nginx-app1.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-nginx-deployment-label
  name: magedu-nginx-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-nginx-selector
  template:
    metadata:
      labels:
        app: magedu-nginx-selector
    spec:
      containers:
        - name: magedu-nginx-container
          image: harbor.qintianjun.local/k8s/nginx-web1:20220801-test1
          #command: ["/apps/tomcat/bin/run_tomcat.sh"]
          imagePullPolicy: IfNotPresent
          #imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
              name: http
            - containerPort: 443
              protocol: TCP
              name: https
          env:
            - name: "password"
              value: "123456"
            - name: "age"
              value: "20"
          resources:
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 500Mi
          # mount the NFS-backed volumes
          volumeMounts:
            - name: magedu-images
              mountPath: /usr/local/nginx/html/webapp/images
              readOnly: false
            - name: magedu-static
              mountPath: /usr/local/nginx/html/webapp/static
              readOnly: false
      volumes:
        - name: magedu-images
          nfs:
            server: 172.31.7.109
            path: /data/k8sdata/magedu/images
        - name: magedu-static
          nfs:
            server: 172.31.7.109
            path: /data/k8sdata/magedu/static
      #nodeSelector:
      #  group: magedu
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30090
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
      nodePort: 30091
  selector:
    app: magedu-nginx-selector

This completes the tomcat + nginx static/dynamic separation environment.

Case Study 2: PV/PVC + ZooKeeper

Build the zookeeper image from a JDK image


Prepare the Dockerfile and the build script:

[root@manager zookeeper]# cat Dockerfile
#FROM harbor-linux38.local.com/linux38/slim_java:8
FROM harbor.qintianjun.local/k8s/slim_java:8
ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates \
      gnupg \
      tar \
      wget && \
    #
    # Install dependencies
    apk add --no-cache \
      bash && \
    #
    # Verify the signature
    #
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010

[root@manager zookeeper]# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.qintianjun.local/k8s/zookeeper:${TAG} .
sleep 1
docker push harbor.qintianjun.local/k8s/zookeeper:${TAG}

Prepare the corresponding zookeeper config files:

[root@manager zookeeper]# cat conf/log4j.properties
zookeeper.root.logger=INFO, CONSOLE, ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/zookeeper/log
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.tracelog.dir=/zookeeper/log
zookeeper.tracelog.file=zookeeper_trace.log
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=5
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

[root@manager zookeeper]# cat conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
#snapCount=100000
autopurge.purgeInterval=1
clientPort=2181

Prepare the readiness script and the apk repositories file:

[root@manager zookeeper]# cat bin/zkReady.sh
#!/bin/bash
/zookeeper/bin/zkServer.sh status | egrep 'Mode: (standalone|leading|following|observing)'
[root@manager zookeeper]# cat repositories
http://mirrors.aliyun.com/alpine/v3.6/main
http://mirrors.aliyun.com/alpine/v3.6/community

Ways to provide a clustered service's configuration:

  • Define it in the image ahead of time
  • Pass variables through the yaml at deploy time
  • Generate it in the entrypoint script (see the sketch after this list)
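The entrypoint.sh copied into the image is not listed in this section. As a minimal sketch of the third approach, assuming the MYID and SERVERS variables passed in the yaml below (not necessarily the author's exact script), it would write the myid file, expand SERVERS into server.N lines in zoo.cfg, and hand off to the CMD:

#!/bin/bash
# Write this node's id for zookeeper.
echo "${MYID}" > /zookeeper/data/myid
# Expand the comma-separated SERVERS list into server.N entries.
if [ -n "${SERVERS}" ]; then
    IFS=',' read -ra HOSTS <<< "${SERVERS}"
    for i in "${!HOSTS[@]}"; do
        echo "server.$((i + 1))=${HOSTS[$i]}:2888:3888" >> /zookeeper/conf/zoo.cfg
    done
fi
# Hand off to CMD: zkServer.sh start-foreground.
exec "$@"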

Prepare the pv and pvc yaml:

[root@master1 pv]# cat zookeeper-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.88.139
    path: /data/k8sdata/magedu/zookeeper-datadir-1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.88.139
    path: /data/k8sdata/magedu/zookeeper-datadir-2

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.88.139
    path: /data/k8sdata/magedu/zookeeper-datadir-3


[root@master1 pv]# cat zookeeper-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: magedu
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: magedu
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: magedu
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi

Confirm the PVs are in Available status:

(screenshot: kubectl get pv output with STATUS Available)

Confirm the PVC status:

(screenshot: kubectl get pvc output with STATUS Bound)
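The checks shown in the screenshots correspond to these commands (pvc namespace matching the zookeeper deployments below):

kubectl apply -f zookeeper-persistentvolume.yaml -f zookeeper-persistentvolumeclaim.yaml
kubectl get pv              # STATUS should be Available, then Bound once claimed
kubectl get pvc -n magedu   # STATUS should be Bound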

Write zookeeper.yaml and apply it:

[root@master1 zookeeper]# cat zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: magedu
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: magedu
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 32181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: magedu
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 32182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: magedu
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 32183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
      containers:
        - name: server
          image: 192.168.88.138/k8s/zookeeper:2022080701
          imagePullPolicy: IfNotPresent
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx1G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: "/zookeeper/data"
              name: zookeeper-datadir-pvc-1
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
      containers:
        - name: server
          image: 192.168.88.138/k8s/zookeeper:2022080701
          imagePullPolicy: IfNotPresent
          env:
            - name: MYID
              value: "2"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx1G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: "/zookeeper/data"
              name: zookeeper-datadir-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3
      containers:
        - name: server
          image: 192.168.88.138/k8s/zookeeper:2022080701
          imagePullPolicy: IfNotPresent
          env:
            - name: MYID
              value: "3"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx1G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: "/zookeeper/data"
              name: zookeeper-datadir-pvc-3

Check the pod status:

(screenshot: the three zookeeper pods in Running status)

The subsequent failure test is omitted here for brevity. The main idea: change the image address in the yaml to an invalid one so that one of the pods fails to be created, then observe how the follower and leader roles change on the other pods.
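To observe the roles, you can query each zookeeper pod for its current mode (a sketch using the label and paths defined above):

for p in $(kubectl get pods -n magedu -l app=zookeeper -o name); do
    kubectl exec -n magedu "$p" -- /zookeeper/bin/zkServer.sh status
done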

Case Study 3: PV/PVC and Standalone Redis

Build the redis image:

[root@manager redis]# cat Dockerfile 
#Redis Image
FROM 192.168.88.138/base/centos-base:7.9.2009

MAINTAINER qintianjun "freedom1215@foxmail.com"

ADD redis-4.0.14.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data
ADD redis.conf /usr/local/redis/redis.conf
ADD run_redis.sh /usr/local/redis/run_redis.sh

EXPOSE 6379

CMD ["/usr/local/redis/run_redis.sh"]
[root@manager redis]# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t 192.168.88.138/k8s/redis:${TAG} .
sleep 3
docker push 192.168.88.138/k8s/redis:${TAG}
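The redis.conf and run_redis.sh added by the Dockerfile are not listed here. Given the authentication and AOF behavior verified below, the relevant redis.conf settings and a minimal run_redis.sh could look like this (assumed, not the author's exact files):

# redis.conf (excerpt)
bind 0.0.0.0
dir /data/redis-data
appendonly yes
requirepass 123456

# run_redis.sh
#!/bin/bash
# Start redis with the baked-in config, then keep the container's PID 1 alive.
/usr/sbin/redis-server /usr/local/redis/redis.conf &
tail -f /etc/hosts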

Create the pv and pvc:

[root@master1 pv]# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.88.139
[root@master1 pv]# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Run the redis service:

[root@master1 redis]# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: 192.168.88.138/k8s/redis:2022080701
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/data/redis-data/"
              name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
    - name: tcp
      port: 6379
      targetPort: 6379
      nodePort: 30379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

Verify redis reads and writes

Connect to port 30379 on the corresponding node:

[root@node2 ~]# telnet 192.168.88.136  30379
Trying 192.168.88.136...
Connected to 192.168.88.136.
Escape character is '^]'.
auth 123456
+OK
set key1 value1
+OK
get key1
$6
value1
quit
+OK

Now go to the directory exported by the NFS server:

(screenshot: redis data files under the NFS export)

The data files are written into the corresponding pvc; inspecting the AOF file shows the commands executed just now:

(screenshot: appendonly.aof containing the commands above)
