Node.js Agent Memory Leak

+1. Currently experiencing these issues as well.

I can’t use New Relic with this issue. It’s crazy for a company like New Relic not to fix an issue that impacts all of their customers with a Node.js application.

Hi all - I don’t have an update at this point; however, I wanted to ensure you all knew about this thread that did have an engineer respond. We are actively investigating this issue which appears to be in Node core.

Is the leak still not a priority?

The memory leak problem is really annoying on Heroku’s free tier, which limits memory usage to ~512MB. Disabling SSL seems like a quick fix, but it’s not usable in a production environment.
Does this issue occur with older Node engines (0.11, 0.10)? It seems like fixing this on the Node side will take some time; what would the NR team suggest until the fix lands?
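For reference, the workaround being discussed here is the agent’s `ssl` toggle in newrelic.js. This is only a sketch (app name and license key are placeholders, and you should verify the setting against your agent version’s docs); as noted above, `ssl: false` sends data to the collector unencrypted, which is why it isn’t acceptable in production:

```javascript
// newrelic.js -- agent configuration sketch. Assumes an agent version
// that still supports the `ssl` toggle; verify against your agent docs.
'use strict'

exports.config = {
  app_name: ['MyApp'],             // hypothetical app name
  license_key: 'YOUR_LICENSE_KEY', // placeholder
  // Workaround discussed above: disabling SSL sidesteps the TLS-related
  // memory growth, but sends data to the collector in plain text,
  // so it is not suitable for production use.
  ssl: false
}
```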

Hey guys - Wraithan has responded again in the other thread:

I’d recommend subscribing to that thread for updates. Using an older version of Node may work, but I have not tested it; if you have an environment to test in, that may well be worth trying.

+1. Currently experiencing these issues as well.

The recent update, 1.18.x, claims to have fixed the memory leak; however, I am seeing no improvement. See the attached image. I turned on newrelic with 1.18 at midnight and then turned it off at 9am this morning.

@andrewbarba Thanks for posting that. I’m going to open a support ticket on your behalf so we can investigate this further with you to see why you’re still seeing this big increase in memory usage.

It’s also important to clarify that version 1.18 of the Node.js agent doesn’t “solve” the memory problem, as the memory problem is actually in Node.js core. Instead, we mitigated the impact of this unresolved Node.js core bug on the agent.

Hello all! We have a new post on this particular issue that may be of interest. You can find it here:


I’m also having this problem. I’m afraid I’m going to have to move on from new relic because I need to restart my application every few hours these days. Have any recent node versions fixed the issue?

You should move on from new releak…

@hampzan09 I will get someone from our support team to reach out to you and create a ticket to investigate. The only leak condition that we are currently aware of is the one described in this post, which is actually an issue in pg-pool, and we have produced a PR to that repository to address it. As in this case, we often find that leak conditions are actually the result of leaks in other libraries or modules that are simply exacerbated by the agent’s instrumentation. We’ll need to collect some additional information to investigate the cause in your particular situation.
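To illustrate the general pattern behind leaks like the pg-pool one mentioned above (this is a hypothetical toy pool, not pg-pool’s actual API or the PR’s fix): a pooled resource that is acquired but never released accumulates on every request, which looks exactly like an agent-induced memory leak even though the agent only makes it more visible.

```javascript
'use strict'

// Hypothetical miniature pool, for illustration only.
class TinyPool {
  constructor() {
    this.checkedOut = new Set() // resources currently acquired
  }
  acquire() {
    const client = {} // stand-in for a database connection
    this.checkedOut.add(client)
    return client
  }
  release(client) {
    this.checkedOut.delete(client)
  }
}

function leakyHandler(pool) {
  pool.acquire() // acquired but never released: grows on every request
}

function fixedHandler(pool) {
  const client = pool.acquire()
  try {
    // ... do work with the client ...
  } finally {
    pool.release(client) // always released, even if the work throws
  }
}
```

After 100 calls, `leakyHandler` leaves 100 clients checked out while `fixedHandler` leaves none; the same shape applies to real pools, where the fix is always to release in a `finally` (or the library’s equivalent).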

Team,
Today is 3/14/2018. Are we still having the same Node.js memory leak issues?

Here is the link from New Relic; has anyone used these settings in a PROD environment?
We are running Node.js in cluster mode.
https://docs.newrelic.com/docs/agents/nodejs-agent/troubleshooting/troubleshooting-large-memory-usage-nodejs

There are several possible causes for this memory increase and potential solutions for each.


- Increase caused by SSL/TLS
- Increase caused by TLS memory buffer allocation
- Increase caused by cluster worker slab allocations
- Increase caused by log messages stored to disk
- Increase caused by leaked MongoDB cursors
- Increase caused by agent data storage
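One of the simpler items on that list to check is the logging one: the agent’s trace-level logging writes heavily to disk, and dialing it back is a one-line change in newrelic.js. This is a sketch (app name and license key are placeholders); verify the `logging` stanza against your agent version’s documentation:

```javascript
// newrelic.js -- sketch of reining in agent log output (the
// "log messages stored to disk" cause above).
'use strict'

exports.config = {
  app_name: ['MyApp'],             // hypothetical app name
  license_key: 'YOUR_LICENSE_KEY', // placeholder
  logging: {
    level: 'info' // 'trace' logs far more detail and grows the log file quickly
  }
}
```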

Hi @ramesh.palipi !

Because we may need to discuss private information to resolve this, I am going to put you in touch with our Support Team privately.

Look out for an email and let us know when you get all sorted. Thanks!

We are also experiencing issues with memory; we have a very simple Node.js application running with cluster.
We are using New Relic for Node.js. Agent version: 6.2.0; Node version: v12.13.1.

The app logs this error: MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 disconnected listeners added to [Agent]. Use emitter.setMaxListeners() to increase limit

In the newrelic_agent.log we see this error as soon as we start the app, and the agent is not working:
{"v":0,"level":40,"name":"newrelic","hostname":"ip-172-31-3-154","pid":5372,"time":"2020-01-16T12:54:44.471Z","msg":"Could not parse response from the collector: Serialization Error\r\n","component":"new_relic_response","stack":"SyntaxError: Unexpected token S in JSON at position 0\n at JSON.parse ()\n at StreamSink.parser [as callback] (/home/ubuntu/SERVERS/node_modules/newrelic/lib/collector/parse-response.js:46:27)\n at StreamSink.end (/home/ubuntu/SERVERS/node_modules/newrelic/lib/util/stream-sink.js:43:8)\n at IncomingMessage.onend (_stream_readable.js:692:10)\n at Object.onceWrapper (events.js:299:28)\n at IncomingMessage.emit (events.js:215:7)\n at endReadableNT (_stream_readable.js:1184:12)\n at processTicksAndRejections (internal/process/task_queues.js:80:21)","message":"Unexpected token S in JSON at position 0"}

Hello @aviranh,

I see you have opened a support ticket with us about this issue, that’s great!

Once there is a resolution please come back to the community and share your results so others may benefit from the same information that you did.

Issue was resolved by removing the brackets around our process_host display_name setting.
Was:
process_host: {
  display_name: ['AppName']
},
Now:
process_host: {
  display_name: 'AppName'
},

Our old config file worked and then started breaking two weeks ago; I suggest adding documentation for this change.
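If you need a config that survives this kind of string-vs-array change across agent versions, a small defensive normalization in your own newrelic.js avoids the mismatch. The helper below is hypothetical (not part of the agent); the sketch builds a plain config object, where a real newrelic.js would assign it to `exports.config`:

```javascript
'use strict'

// Hypothetical helper: normalize display_name to a plain string so the
// config works whether it was written as 'AppName' or ['AppName'].
function normalizeDisplayName(value) {
  return Array.isArray(value) ? value[0] : value
}

// In a real newrelic.js this object would be assigned to exports.config.
const config = {
  process_host: {
    display_name: normalizeDisplayName(['AppName']) // yields 'AppName'
  }
}
```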


Hey @aviranh, I am happy to hear that the solution worked for you! Thank you for sharing it with the community in case anyone else runs into this issue! 🙂
