I decided to improve the visibility that Kubernetes has into my Node.js backend by implementing liveness probes. As we discussed previously, Kubernetes has two different types of health checks:
- Readiness probe – makes sure the container is ready to respond to user requests and participate in load balancing. If it fails, the pod is removed from the Service's endpoints so it stops receiving traffic (the container is not restarted). This is similar to making sure the VM or OS started correctly (as opposed to the application).
- Liveness probe – makes sure the application is running properly. The developers expose a health-check API that you tell Kubernetes to call periodically. If it fails, Kubernetes restarts the container. This is similar to checking that your application started up and is running correctly after the VM started.
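To make the distinction concrete, here is a minimal sketch of how the two probes can sit side by side on a container spec. The path and port here are illustrative assumptions (real apps often point the two probes at different endpoints):

```yaml
# Illustrative sketch — path and port are assumptions for this example
readinessProbe:        # gates traffic: pod is pulled from Service endpoints on failure
  httpGet:
    path: /health-check
    port: 8888
  initialDelaySeconds: 3
  periodSeconds: 3
livenessProbe:         # gates restarts: the container is restarted on failure
  httpGet:
    path: /health-check
    port: 8888
  initialDelaySeconds: 3
  periodSeconds: 3
```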
Since I have a Node.js backend, I did a search for “liveness probe for node.js” and I found this great article “Creating Liveness Probes for your Node JS application in Kubernetes” that walked me through it.
First, I had to create the HTTP route in the Node.js app. Notice the “/health-check” path:
app.get('/health-check', (req, res) => { res.send('health-check passed'); });
Second, I added the liveness probe to the deployment.yaml file, pointing it at that “/health-check” path:
livenessProbe:
  httpGet:
    path: /health-check
    port: 8888
  initialDelaySeconds: 3
  periodSeconds: 3
  failureThreshold: 3
Third, I recreated the container so it would pick up the new liveness probe. To check that the liveness probe was enabled, I ran:
kubectl describe pods {yourPodName}
And I noticed this line in the output:
State: Running
Started: Thu, 21 Mar 2019 06:01:59 -0500
Ready: True
Restart Count: 0
Liveness: http-get http://:8888/health-check delay=3s timeout=1s period=3s #success=1 #failure=3
Now, if the container or app becomes unresponsive, Kubernetes will detect the failing probe and restart the container automatically!