kubedog: Solving the Problem of Not Being Able to Continuously Track K8S Resources in CI/CD
In a Jenkins CI/CD pipeline, whether you deploy through the Kubernetes CLI plugin or the Kubernetes Continuous Deploy plugin, there is no way to check whether the resources actually came up after the YAML is applied; you have to verify manually with kubectl.
It is the same situation as configuring resources with kubectl apply by hand: to learn how the resources are running, you still need further commands such as:
kubectl get -w
kubectl logs
kubectl describe
Only after running these commands, often several times, can we tell whether a pod started successfully or pinpoint why it failed, which is tedious; a typical manual sequence is sketched below.
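For illustration only, the manual check might look like this (the namespace and pod name are illustrative, borrowed from the examples later in this article):
kubectl get pods -n test -w
kubectl describe pod helloworld-8d958c978-wrb7m -n test
kubectl logs -f helloworld-8d958c978-wrb7m -n test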
kubedog simplifies this process to a large extent: it tracks the state of a specified resource and prints that state information directly.
Let's take a closer look at kubedog.
kubedog
Kubedog (https://github.com/werf/kubedog) is a library for watching and tracking Kubernetes resources in CI/CD deployment pipelines. It also ships a CLI, but the CLI only provides a minimal interface to the library functions and is mainly intended for exercising the library and for debugging.
Kubedog ultimately gives the user enough information about a resource that no additional kubectl calls are needed for debugging or status checks: all data related to the resource is unified into a single event stream.
Installation
# 1. Install the binary
curl -L https://dl.bintray.com/flant/kubedog/v0.3.4/kubedog-linux-amd64-v0.3.4 -o /tmp/kubedog
chmod +x /tmp/kubedog
sudo mv /tmp/kubedog /usr/local/bin/kubedog
# 2. Add the environment variable that points kubedog at the kubeconfig
vim /etc/profile
export KUBEDOG_KUBE_CONFIG=/root/.kube/config
source /etc/profile
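If you prefer not to edit /etc/profile, the variable can also be set for a single invocation only; a minimal sketch, reusing the namespace and deployment name from the examples below:
# Point kubedog at the kubeconfig just for this one run
KUBEDOG_KUBE_CONFIG=/root/.kube/config kubedog rollout track -n test deployment helloworld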
Usage
kubedog tracks resources in three main modes:
follow
rollout
multitrack
which correspond to three commands:
kubedog follow
kubedog rollout track
kubedog multitrack
1.follow
follow tracks a resource through the whole process from creation until it is Ready and serving traffic, and it also prints the pod logs.
# 1. Apply the resources
# kubectl apply -f helloworld.yaml
deployment.apps/helloworld created
service/helloworld created
ingress.extensions/helloworld created
# 2. Track the resource status
# kubedog follow -n test deployment helloworld
deploy/helloworld added
deploy/helloworld new rs/helloworld-8d958c978 added
deploy/helloworld rs/helloworld-8d958c978(new) po/helloworld-8d958c978-wrb7m added
deploy/helloworld event: po/helloworld-8d958c978-wrb7m Pulling: Pulling image "harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e50"
deploy/helloworld event: po/helloworld-8d958c978-wrb7m Pulled: Successfully pulled image "harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e50"
deploy/helloworld event: po/helloworld-8d958c978-wrb7m Created: Created container helloworld
deploy/helloworld event: po/helloworld-8d958c978-wrb7m Started: Started container helloworld
>> deploy/helloworld rs/helloworld-8d958c978(new) po/helloworld-8d958c978-wrb7m helloworld
LOGBACK: No context given for c.q.l.core.rolling.SizeAndTimeBasedRollingPolicy@1674896058
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.5.RELEASE)
2020-09-29 10:05:13.290 INFO 7 --- [ main] c.d.helloworld.HelloworldApplication : Starting HelloworldApplication v0.0.1-SNAPSHOT on helloworld-8d958c978-wrb7m with PID 7 (/helloworld.jar started by root in /)
....omitted....
2020-09-29 10:06:09.380 INFO 7 --- [ main] c.d.helloworld.HelloworldApplication : Started HelloworldApplication in 76.601 seconds (JVM running for 89.287)
2020-09-29 10:07:43.980 INFO 7 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-09-29 10:07:43.981 INFO 7 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-09-29 10:07:43.989 INFO 7 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 8 ms
deploy/helloworld become READY
2.rollout
Compared with follow, rollout does not print the pod's application logs; it reports the pod status directly.
# 1. Apply the resources
# kubectl apply -f helloworld.yaml
deployment.apps/helloworld created
service/helloworld created
ingress.extensions/helloworld created
# 2. Track the resource status
# kubedog rollout track -n test deployment helloworld
deploy/helloworld added
deploy/helloworld rs/helloworld-8d958c978 added
deploy/helloworld po/helloworld-8d958c978-wrb7m added
deploy/helloworld become READY
# 3. Check the exit code
# echo $?
0
# 4. If the rollout fails
# kubedog rollout track -n test deployment helloworld
deploy/helloworld added
deploy/helloworld rs/helloworld-fc99f6486 added
deploy/helloworld po/helloworld-fc99f6486-x27xl added
deploy/helloworld event: ScalingReplicaSet: Scaled up replica set helloworld-fc99f6486 to 1
deploy/helloworld event: po/helloworld-fc99f6486-x27xl Pulling: Pulling image "harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e5"
deploy/helloworld event: po/helloworld-fc99f6486-x27xl Failed: Error: ErrImagePull
deploy/helloworld event: po/helloworld-fc99f6486-x27xl Failed: Failed to pull image "harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e5": rpc error: code = Unknown desc = Error response from daemon: manifest for harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e5 not found: manifest unknown: manifest unknown
deploy/helloworld event: po/helloworld-fc99f6486-x27xl Failed: Error: ImagePullBackOff
deploy/helloworld event: po/helloworld-fc99f6486-x27xl BackOff: Back-off pulling image "harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e5"
deploy/helloworld po/helloworld-fc99f6486-x27xl helloworld error: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: manifest for harbor.test.cn/helloworld/helloworld:1311c4520122dfa67bb60e0103c9519fcb370e5 not found: manifest unknown: manifest unknown
# 5. Check the exit code
# echo $?
130
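Because kubedog rollout track exits with a non-zero code when the deployment never becomes READY (as shown above), the exit code can gate a CI step. A minimal sketch, reusing only the commands and names already shown in this section:
# Apply the manifests, then block until the deployment is READY;
# abort this step if kubedog reports a failure.
kubectl apply -f helloworld.yaml
if ! kubedog rollout track -n test deployment helloworld; then
    echo "helloworld rollout failed" >&2
    exit 1
fi
echo "helloworld is READY"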
3.multitrack
Upstream no longer recommends the follow and rollout modes and will not update them further; the recommended mode is multitrack, which presents the status information in a more intuitive, readable way.
To use multitrack, you pass a JSON structure to kubedog's stdin, for example:
cat << EOF | kubedog multitrack
{
  "Deployments": [
    {
      "ResourceName": "helloworld",
      "Namespace": "test"
    },
    {
      "ResourceName": "helloworld1",
      "Namespace": "test"
    }
  ]
}
EOF
or
echo '{"Deployments": [{"ResourceName": "helloworld","Namespace": "test"},{"ResourceName": "helloworld1","Namespace": "test"}]}' | kubedog multitrack
A concrete run:
# 1. Apply the resources
kubectl apply -f helloworld.yaml
kubectl apply -f helloworld1.yaml
# 2. Track the status
echo '{"Deployments": [{"ResourceName": "helloworld","Namespace": "test"},{"ResourceName": "helloworld1","Namespace": "test"}]}' | kubedog multitrack
┌ Status progress
│ DEPLOYMENT REPLICAS AVAILABLE UP-TO-DATE
│ helloworld 1/1 0 1
│ │ POD READY RESTARTS STATUS ---
│ └── 8d958c978-2rswq 0/1 0 Pending Waiting for: available 0->1
└ Status progress
┌ Status progress
│ DEPLOYMENT REPLICAS AVAILABLE UP-TO-DATE
│ helloworld 1/1 0 1
│ │ POD READY RESTARTS STATUS ---
│ └── 8d958c978-2rswq 0/1 0 Pending Waiting for: available 0->1
│ └── error: FailedScheduling: 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: },
│ that the pod didn't tolerate, 2 Insufficient cpu.
└ Status progress
Use cases
Now that we know how to use kubedog, what can we actually do with it?
As mentioned above, in a Jenkins CI/CD pipeline neither the Kubernetes CLI plugin nor the Kubernetes Continuous Deploy plugin can check whether the resources were deployed successfully after applying the YAML; that check had to be done manually with kubectl. With kubedog we can perform this final verification step inside the pipeline itself.
With the Kubernetes CLI plugin, the kubedog command can be executed directly, without first logging into a node over ssh, for example:
# Insert the following step into the pipeline
withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s-jenkins-slave', namespace: 'test', serverUrl: 'https://192.168.3.217:6443') {
sh """kubectl apply -f k8s-test.yaml && kubedog rollout track -n test deployment helloworld"""
}
# Console output
....partially omitted....
+ kubectl apply -f k8s-test.yaml
deployment.apps/helloworld created
service/helloworld unchanged
ingress.extensions/helloworld configured
+ kubedog rollout track -n test deployment helloworld
# deploy/helloworld added
# deploy/helloworld rs/helloworld-54b4dd8c4f added
# deploy/helloworld po/helloworld-54b4dd8c4f-gcck8 added
# deploy/helloworld event: po/helloworld-54b4dd8c4f-gcck8 Pulled: Container image "harbor.cityre.cn/helloworld/helloworld:1eaf39a86f11ce728829d3700d14362ccdd96867" already present on machine
# deploy/helloworld event: po/helloworld-54b4dd8c4f-gcck8 Created: Created container helloworld
# deploy/helloworld event: po/helloworld-54b4dd8c4f-gcck8 Started: Started container helloworld
>> po/helloworld-54b4dd8c4f-gcck8 helloworld
LOGBACK: No context given for c.q.l.core.rolling.SizeAndTimeBasedRollingPolicy@1911006827
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.5.RELEASE)
2020-09-30 08:52:43.854 INFO 6 --- [ main] c.d.helloworld.HelloworldApplication : Starting HelloworldApplication v0.0.1-SNAPSHOT on helloworld-54b4dd8c4f-gcck8 with PID 6 (/helloworld.jar started by root in /)
2020-09-30 08:52:43.858 INFO 6 --- [ main] c.d.helloworld.HelloworldApplication : The following profiles are active: test
2020-09-30 08:52:45.308 INFO 6 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-09-30 08:52:45.324 INFO 6 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-09-30 08:52:45.325 INFO 6 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.31]
2020-09-30 08:52:45.395 INFO 6 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-09-30 08:52:45.395 INFO 6 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1433 ms
2020-09-30 08:52:45.624 INFO 6 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2020-09-30 08:52:45.831 INFO 6 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-09-30 08:52:45.836 INFO 6 --- [ main] c.d.helloworld.HelloworldApplication : Started HelloworldApplication in 3.459 seconds (JVM running for 3.998)
2020-09-30 08:53:45.967 INFO 6 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-09-30 08:53:45.968 INFO 6 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-09-30 08:53:45.975 INFO 6 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 7 ms
# deploy/helloworld become READY
[Pipeline] }
[kubernetes-cli] kubectl configuration cleaned up
Notes:
Because kubedog has to be executed as a CLI command, we switched from the Kubernetes Continuous Deploy plugin to the Kubernetes CLI plugin.
With the Kubernetes CLI plugin, kubectl apply must be run before kubedog; running kubedog on its own reports an error.
The kubedog binary on the host must be mounted into the jenkins-slave container in advance.
Interestingly, kubedog rollout did not print the application logs when run on the server by itself, yet it does print them in Jenkins, a pleasant surprise!
Summary
kubedog solves the problem of Kubernetes resources not being continuously tracked in CI/CD: the Jenkins console output alone is enough to follow a pod's startup status and error messages in real time. In Jenkins, however, we do not recommend tracking with kubedog multitrack, because its status output gets interleaved with the application log lines and becomes hard to read; that is why we use kubedog rollout.