
How do you deploy a complete system, frontend and backend included, in K8S?

木讷大叔爱运维 2022-07-13









Requirements

In practice, a system usually has both a frontend and a backend: typically the frontend is built with Vue and the backend with Springboot. So far we have only configured the backend Springboot project in K8S; now we want to deploy the complete system to the K8S cluster. Working through this deployment shows exactly what is involved and prepares us for a future production rollout.


Frontend

If the frontend is built with Vue, the packaged dist directory must be placed in the web container's root directory. Here we use a Deployment to run the Nginx pod.
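For reference, a minimal sketch of producing the dist directory, assuming a standard Vue CLI project (the target host and path are placeholders, not from the original setup):

# Build the frontend; the output lands in ./dist by default
npm install
npm run build

# Copy the artifact to the K8S master node, e.g. with scp
scp -r dist root@k8s-master:/tmp/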


1

The root directory


The Nginx image's default configuration points root at /usr/share/nginx/html. We can keep this default or define our own root, but the directory must be accessible from every node in the cluster, so it has to be persisted and mounted as shared storage. Here we use a simple NFS share.

# On the master node
mkdir -p /App/nfs/nginx/htdocs
chmod 777 /App/nfs/nginx/htdocs
cd /App/nfs/nginx/htdocs

# Put dist under the hello.test.cn directory
mkdir hello.test.cn
mv dist hello.test.cn/

# The htdocs directory will be mounted via NFS into the Nginx pod at the
# custom site root /mnt, so the site ends up at:
# /mnt/hello.test.cn/dist
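The directory also has to be exported over NFS so that every node can mount it. A minimal sketch of the export on the NFS server (192.168.3.217 in this setup; the network range and export options are assumptions, adjust them to your environment):

# /etc/exports on the NFS server
/App/nfs/nginx/htdocs 192.168.3.0/24(rw,sync,no_root_squash)

# Reload the export table and verify
exportfs -r
showmount -e 192.168.3.217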


2

The Nginx configuration file


Nginx configuration files live in /etc/nginx/conf.d by default; the site configuration file is defined through a ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-nginx-hello-test-cn
  namespace: test
  labels:
    app: config-nginx-hello-test-cn
data:
  hello.test.cn.conf: |-
    server {
        listen       80;
        server_name  hello.test.cn;

        location / {
            root       /mnt/hello.test.cn/dist;
            index      index.html index.htm;
            try_files  $uri $uri/ /index.html;
        }
    }


3

Nginx pod



Nginx is defined as a Deployment workload; the custom site root directory and the site configuration file are mounted as an NFS volume and a ConfigMap respectively.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nginx
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web-nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: htdocs
          mountPath: /mnt
        - name: config-nginx-hello-test-cn
          mountPath: /etc/nginx/conf.d/hello.test.cn.conf
          subPath: hello.test.cn.conf
        ports:
        - containerPort: 80
      volumes:
      # Mount the NFS share
      - name: htdocs
        nfs:
          path: /App/nfs/nginx/htdocs
          server: 192.168.3.217
      # Mount the ConfigMap
      - name: config-nginx-hello-test-cn
        configMap:
          name: config-nginx-hello-test-cn
          defaultMode: 0640

Note: hello.test.cn.conf must be mounted with subPath, otherwise the mount point becomes a directory instead of a file.
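A quick way to confirm the mounts behaved as expected, assuming kubectl access to the test namespace:

# The config must show up as a regular file, not a directory
kubectl -n test exec deploy/web-nginx -- ls -l /etc/nginx/conf.d/

# The static files should be visible under the NFS-backed root
kubectl -n test exec deploy/web-nginx -- ls /mnt/hello.test.cn/dist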


4

Service


The frontend Service is straightforward: a Service of type NodePort.

apiVersion: v1
kind: Service
metadata:
  name: web-helloworld
  namespace: test
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
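Since no nodePort is set explicitly, Kubernetes assigns one from the default 30000-32767 range; it can be read back after creation (the IP and port in the sample output below are illustrative):

kubectl -n test get svc web-helloworld
# NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
# web-helloworld   NodePort   10.x.x.x     <none>        80:3xxxx/TCP   1m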


Backend

For the backend Springboot deployment, the detailed configuration is covered in the earlier article on deploying a Springboot project in K8S, so we won't explain it again here.

# 1. Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-helloworld
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      name: helloworld
      labels:
        app: helloworld
    spec:
      hostAliases:
      - ip: "10.11.10.11"
        hostnames:
        - "api1.test.cn"
        - "api2.test.cn"
      - ip: "10.11.10.12"
        hostnames:
        - "api3.test.cn"
      containers:
      - name: helloworld
        env:
        - name: JAVA_OPTS
          value: "-Xmx128m -Xms128m -Dspring.profiles.active=test"
        image: harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e50
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api
            port: 8080
          initialDelaySeconds: 200
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /api
            port: 8080
          initialDelaySeconds: 180
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            #cpu: "0.5"
            memory: "500Mi"
          requests:
            #cpu: "0.5"
            memory: "500Mi"
        volumeMounts:
        - name: logdir
          mountPath: /logs
        - name: localtime
          mountPath: /etc/localtime
        - name: timezone
          mountPath: /etc/timezone
      imagePullSecrets:
      - name: harbor
      volumes:
      - name: logdir
        emptyDir: {}
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: timezone
        hostPath:
          path: /etc/timezone
# 2. Service
apiVersion: v1
kind: Service
metadata:
  name: api-helloworld
  namespace: test
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - port: 8080
    targetPort: 8080

Note: we temporarily drop the CPU resource limits here. Springboot consumes a lot of CPU at startup, and a tight CPU limit makes the process start very slowly, so the limit can be disabled while testing.


Ingress

The Ingress is the entry point for both frontend and backend, so we cover it separately.

In the Ingress we split frontend and backend by path:

  • / serves the frontend static files;

  • /api routes to the backend API;


1

HTTP access


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-hello.test.cn
  namespace: test
spec:
  rules:
  - host: hello.test.cn
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          serviceName: api-helloworld
          servicePort: 8080
      - path: /
        pathType: Prefix
        backend:
          serviceName: web-helloworld
          servicePort: 80
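A quick check of the path routing, assuming the ingress-nginx controller is reachable on a node IP (replace <node-ip> with your controller's actual address and port):

# Frontend: should return index.html from the dist directory
curl -H "Host: hello.test.cn" http://<node-ip>/

# Backend: should hit the Springboot /api endpoint
curl -H "Host: hello.test.cn" http://<node-ip>/api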


2

HTTPS access


# 1. Create a TLS secret from the certificate
# kubectl create secret tls tls-secret-test-cn --key test.cn.key --cert test.cn.pem -n test
# 2. Ingress configuration
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-hello.test.cn
  namespace: test
spec:
  rules:
  - host: hello.test.cn
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          serviceName: api-helloworld
          servicePort: 8080
      - path: /
        pathType: Prefix
        backend:
          serviceName: web-helloworld
          servicePort: 80
  tls:
  - hosts:
    - hello.test.cn
    secretName: tls-secret-test-cn

Note: once HTTPS is configured, plain HTTP requests are redirected to HTTPS by default. We can change this by setting ssl-redirect to "false" (the default is "true") in the ingress-nginx global configuration file.

# vim global_configmap.yaml
# ingress-nginx global configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-connect-timeout: "300"
  proxy-read-timeout: "300"
  proxy-send-timeout: "300"
  proxy-body-size: "200m"
  ssl-redirect: "false"
# After applying, nginx reloads the configuration automatically
# kubectl apply -f global_configmap.yaml

Now the site can be reached over both HTTP and HTTPS.
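If you only want to disable the redirect for this one site rather than cluster-wide, the standard ingress-nginx annotation can be set on the Ingress instead; a minimal sketch (this annotation approach is an alternative, not part of the original setup above):

metadata:
  name: ingress-hello.test.cn
  namespace: test
  annotations:
    # Per-Ingress override of the HTTP-to-HTTPS redirect
    nginx.ingress.kubernetes.io/ssl-redirect: "false"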


Merging the manifests

Finally, we merge all of the above into a single manifest, hello.test.cn.yaml, for convenience.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-nginx-hello-test-cn
  namespace: test
  labels:
    app: config-nginx-hello-test-cn
data:
  hello.test.cn.conf: |-
    server {
        listen       80;
        server_name  hello.test.cn;

        location / {
            root       /mnt/hello.test.cn/dist;
            index      index.html index.htm;
            try_files  $uri $uri/ /index.html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nginx
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web-nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: htdocs
          mountPath: /mnt
        - name: config-nginx-hello-test-cn
          mountPath: /etc/nginx/conf.d/hello.test.cn.conf
          subPath: hello.test.cn.conf
        ports:
        - containerPort: 80
      volumes:
      # Mount the NFS share
      - name: htdocs
        nfs:
          path: /App/nfs/nginx/htdocs
          server: 192.168.3.217
      # Mount the ConfigMap
      - name: config-nginx-hello-test-cn
        configMap:
          name: config-nginx-hello-test-cn
          defaultMode: 0640
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-helloworld
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      name: helloworld
      labels:
        app: helloworld
    spec:
      hostAliases:
      - ip: "10.11.10.11"
        hostnames:
        - "api1.test.cn"
        - "api2.test.cn"
      - ip: "10.11.10.12"
        hostnames:
        - "api3.test.cn"
      containers:
      - name: helloworld
        env:
        - name: JAVA_OPTS
          value: "-Xmx128m -Xms128m -Dspring.profiles.active=test"
        image: harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e50
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api
            port: 8080
          initialDelaySeconds: 200
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /api
            port: 8080
          initialDelaySeconds: 180
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            #cpu: "0.5"
            memory: "500Mi"
          requests:
            #cpu: "0.5"
            memory: "500Mi"
        volumeMounts:
        - name: logdir
          mountPath: /logs
        - name: localtime
          mountPath: /etc/localtime
        - name: timezone
          mountPath: /etc/timezone
      imagePullSecrets:
      - name: harbor
      volumes:
      - name: logdir
        emptyDir: {}
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: timezone
        hostPath:
          path: /etc/timezone
---
apiVersion: v1
kind: Service
metadata:
  name: web-helloworld
  namespace: test
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: api-helloworld
  namespace: test
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-hello.test.cn
  namespace: test
spec:
  rules:
  - host: hello.test.cn
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          serviceName: api-helloworld
          servicePort: 8080
      - path: /
        pathType: Prefix
        backend:
          serviceName: web-helloworld
          servicePort: 80
  tls:
  - hosts:
    - hello.test.cn
    secretName: tls-secret-test-cn
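Applying and checking the merged manifest, assuming the file is saved on a machine with kubectl access to the cluster:

kubectl apply -f hello.test.cn.yaml
kubectl -n test get pods,svc,ingress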


Open questions

The above gives us a complete full-stack system running in K8S, but a few questions are still worth thinking through:

  1. A shared Nginx pod

    Here we run one Nginx pod per project to serve its static files. With many projects that means many Nginx pods, so could a single Nginx pod serve the static files of all projects?

  2. Nginx hot reload

    Each static site gets its own Nginx configuration file generated from a ConfigMap, such as hello.test.cn.conf. If all projects share one Nginx pod, then adding another site's configuration through a new ConfigMap will not make the Nginx pod reload automatically; that requires either a manual step or a proper hot-reload mechanism (see the sketch below).
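For the manual route, two common options are exec-ing a reload into the running pod or restarting the Deployment; both are standard kubectl commands, shown here as a sketch:

# Option 1: reload nginx inside the running pod (note that ConfigMap changes
# can take up to a minute to propagate into the mounted file before this helps)
kubectl -n test exec deploy/web-nginx -- nginx -s reload

# Option 2: restart the Deployment so new pods pick up the updated ConfigMap
kubectl -n test rollout restart deployment/web-nginx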


