
Kubernetes Kubelet memory leak on kubectl port-forward
This is a post to document the progress on the kubelet memory leak issue when creating port-forwarding connections.
TL;DR
The Kubernetes kubelet creates several TCP connections on every kubectl port-forward command, but these connections are not released after the port-forward commands are killed. As explained in this comment, this can be easily verified by running ss -t on the kubelet host before and after running several kubectl port-forward commands.
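As a concrete sketch of that check (the pod name and ports below are placeholders, not taken from the original report):

```shell
#!/bin/sh
# On the kubelet host: count established TCP connections before the test.
ss -t state established | wc -l

# From a client machine, open and then kill a few forwards, e.g.:
#   kubectl port-forward some-pod 8080:80 &
#   sleep 2; kill %1

# Count again on the host; on affected kubelet versions the number keeps
# growing instead of returning to the baseline.
ss -t state established | wc -l
```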
This behavior is fixed in the Kubernetes 1.11.4 and 1.10.12 releases.
This is especially problematic on a node that's running Helm's tiller pod, as tiller constantly issues port-forward commands during normal operation.
Temporary workaround
To mitigate this issue, we moved the tiller pod to its own isolated node, and the kubelet service on that node is restarted every 10 minutes so that it releases all the open TCP connections and doesn't run out of memory. No other workloads run on that node.
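The periodic restart can be expressed as a plain cron entry (a sketch assuming a systemd-managed kubelet and the standard /etc/cron.d layout; the interval matches the 10 minutes mentioned above):

```
# /etc/cron.d/kubelet-restart -- deployed only on the isolated tiller node
*/10 * * * * root systemctl restart kubelet
```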
Resolution / fix
This bug has been fixed in the Kubernetes 1.11.4 and 1.10.12 releases. After upgrading our staging clusters to 1.11.6, we've been able to verify that the issue is resolved. Here's the explanation:
On a 1.10.10 node, here's what the open TCP connections look like after running several kubectl port-forward commands:

And here are the open TCP connections on a 1.11.6 node after running the same kubectl port-forward commands several times:

Also, for reference, this is the memory usage on the 1.11.6 node during the kubectl port-forward commands. As you can see, there's a memory increase, but it's released afterwards.

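If you'd rather sample the same thing from a shell than a dashboard, kubelet's resident memory can be read with standard procps tools (a sketch; kubelet must actually be running on the node):

```shell
# Print kubelet's PID and resident set size (RSS, in KiB).
# `ps -C` matches by process name and returns non-zero if nothing matches.
ps -o pid=,rss= -C kubelet || echo "kubelet not running on this host"
```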
Once we've upgraded all our clusters (staging and production), we'll revert the workaround for the tiller pod: each cluster will have one fewer node, and tiller will run alongside all other workloads in the cluster.
Would you like this kind of insight on a continuous basis at your own company? We can help. Get in touch with us below!