
Newrelic pod OOMKilled

kubernetes
on-host-integrations

#1

Hi, we have a 9-node cluster and New Relic is deployed as a DaemonSet to monitor all the nodes. I found that one such pod, running on a worker node, keeps restarting repeatedly with OOMKilled.

I have modified the memory limit and request for the DaemonSet to 300M (a sketch of that change is after the output below), and yet just this one node seems to have the problem.

Could you provide the recommended configuration?

Controlled By:  DaemonSet/newrelic-infra
Containers:
  newrelic-infra:
    Container ID:   docker://6d5e074b771821c0a3780b791700360f81a9049e263a4c598109d04cc16aee84
    Image:          newrelic/infrastructure-k8s:1.7.0
    Image ID:       docker-pullable://newrelic/infrastructure-k8s@sha256:e742b49e7e9305ee6f3e54ada2cd8bcfa30b590ce52a54d762aa178b0f2e6bab
    Port:
    Host Port:
    State:          Running
      Started:      Wed, 17 Apr 2019 10:33:18 -0400
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Wed, 17 Apr 2019 10:30:43 -0400
      Finished:     Wed, 17 Apr 2019 10:31:47 -0400
    Ready:          True
    Restart Count:  5
    Limits:
      memory:  300M
    Requests:
      cpu:     100m
      memory:  300M
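
In case it helps, this is roughly how that change was applied (a sketch only; the namespace and container name are assumptions for my install):

# Set requests/limits on the DaemonSet in place
kubectl set resources daemonset/newrelic-infra \
  --containers=newrelic-infra \
  --requests=cpu=100m,memory=300M \
  --limits=memory=300M \
  -n default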


#2

Hi @vipalazhi - it sounds like kube-state-metrics (KSM) may be experiencing issues unrelated to the memory limit defined on the DaemonSet itself.
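
If you want to quickly rule that out, something along these lines should show whether KSM itself is struggling (a sketch; the namespace and label selector are assumptions and depend on how kube-state-metrics was installed):

# Is the KSM pod healthy, and what is it logging?
kubectl get pods --all-namespaces -l app.kubernetes.io/name=kube-state-metrics -o wide
kubectl logs -n kube-system -l app.kubernetes.io/name=kube-state-metrics --tail=50
# Memory use of the KSM pod (requires metrics-server)
kubectl top pod -n kube-system -l app.kubernetes.io/name=kube-state-metrics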

This may need to become a support ticket, but would you be willing to upgrade from 1.7.0 to 1.8.0 and see if the same occurs?

https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration#update
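
If you'd rather test the new image before re-applying the full manifest from that page, something like this would do it (a sketch, assuming the DaemonSet is in the default namespace):

# Point the DaemonSet at the 1.8.0 image and watch the rollout
kubectl set image daemonset/newrelic-infra newrelic-infra=newrelic/infrastructure-k8s:1.8.0
kubectl rollout status daemonset/newrelic-infra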

Would be interesting to know if it continues.