PHP agent 9: serious memory leak issues


#1

Hi Explorers,

Over the weekend, we started seeing very high memory usage in our PHP applications, particularly in long-running PHP non-web transactions. After some investigation, we narrowed the problem down to the New Relic PHP agent update. If we disable the agent with PHP’s -d option, the leaks vanish.
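For anyone who wants to check quickly, the workaround looks roughly like this (a minimal sketch; worker.php is a placeholder for your own script, and it assumes the agent’s standard newrelic.enabled INI setting):

```sh
# Run a long-running CLI script with the New Relic agent disabled for that invocation
# (worker.php is a placeholder for your actual script)
php -d newrelic.enabled=false worker.php
```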

We’ve tried different New Relic settings, thinking the culprit might be transaction trace data, but only disabling the agent entirely seems to mitigate the leaks. Downgrading to the previous 8.x version fixes the problem for us.

To paint a picture of the severity of these leaks, I’ve seen a process that normally uses around 100-120 MB grow to 1.5 GB of memory. Another process used up all 8 GB of RAM on the server plus 2 GB of swap space, forcing me to stop the VM and double its memory to prevent further production incidents.

Other folks have mentioned segfaults; here we’ve got memory leaks. Perhaps writing sections of the agent in Rust is something the team could look into.

This is not the first time we’ve seen issues like these in the New Relic PHP agent, and as a result we have become hesitant to deploy NR PHP agent package updates in production. I have found the NR team very responsive about fixing issues like these in the past after I submitted a support ticket, but I no longer have the means to do so.

So YMMV, but if anyone has the latest PHP agent installed and is running background tasks, my tip would be to check your servers for increased memory usage.

Toon Spin


#2

Hi @toon.spin - It could be related to this known issue.


#3

Hi @stefan_garnham, I’d seen that post and I agree the two seem likely to be related; after all, segfaults and leaks are both typical symptoms of bugs relating to memory and pointers. Perhaps a single bug is causing both. That sort of bug can be pretty subtle and difficult to spot.

That said, I haven’t seen anyone else post about memory leaks, so I felt it would be good to let the community know that this might be a thing.


#4

I haven’t seen any memory leak issues relating to the PHP 9.0 agent, only segfaults.

Agent 9.0 has been rolled back, and we recommend you use agent 8.7 for now.

With all that said, I would like the development team to review the details you shared, so I’ll open up a ticket for you and our PHP experts can look into this.


#5

Have had both leaks and segfaults.


#6

Thanks for letting us know - @jarret

Would you be willing to try out the new patched agent?


#7

We’ve officially released an update; see my post here:


#8

I am in contact with NR support about this, but would like to note that in our case, NR agent 9.0.2.245 does not fix our memory leak issues.


#10

Acknowledged. You are working with the right people, and this is getting attention.


#11

Hi,

We’re still seeing a very serious memory leak in our long-running processes with agent version 9.1.0.246 (and 9.0.2, for that matter).

Cron jobs that used to take less than 300 MB now get OOM-killed as they’re hitting over 2 GB of memory! We were seeing OOM kills on all long-running processes in the application, which caused us no end of grief.

It took us over a week to track it down to the New Relic PHP agent extension. Tracking it down via a profiler is nearly impossible, since PHP’s memory_get_usage() and memory_get_peak_usage() functions aren’t able to see this leak. We had to query the kernel directly to determine the growing usage (see https://gist.github.com/chinthakagodawita/bf3c690cdf5242cf5209fe59cd00e52c).
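The gist has the full details, but the core idea is something like this simplified sketch (not the exact gist code; Linux-only, since it relies on /proc):

```php
<?php
// Simplified sketch: read the process's resident set size from the kernel,
// since memory_get_usage() can't see the agent's native allocations.
function read_rss_kb(): ?int
{
    $status = @file_get_contents('/proc/self/status');
    if ($status === false) {
        return null;
    }

    return preg_match('/^VmRSS:\s+(\d+)\s+kB/m', $status, $m) ? (int) $m[1] : null;
}

printf("RSS: %s kB\n", read_rss_kb() ?? 'unknown');
```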

We rigged up a simple test in one of our batch processes, which loads a list of items (often thousands of them) and processes each one in turn. This particular test was run with ~600 items.

With agent 8.7.0 there’s minimal memory growth, and final memory usage is a respectable ~52 MB. With agent 9.1.0, final memory usage is more than double that, ending at ~108 MB. We also tested with agent 9.1.0 installed but disabled via the env var NEWRELIC_ENABLED=false; there, final memory usage sat at ~51 MB.
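For reference, the test harness looked roughly like this (a simplified sketch; process_item() is a placeholder for the real per-item work, and the RSS helper is the same idea as in the gist above):

```php
<?php
// Simplified sketch of the batch test: process each item and log resident
// memory every 50 items to watch for unbounded growth.
function rss_kb(): int
{
    preg_match('/^VmRSS:\s+(\d+)\s+kB/m', file_get_contents('/proc/self/status'), $m);
    return (int) $m[1];
}

// Placeholder for the real per-item work done by the batch process.
function process_item(int $id): void
{
    usleep(1000);
}

foreach (range(1, 600) as $i => $id) {
    process_item($id);
    if ($i % 50 === 0) {
        printf("item %4d: rss=%d kB, php=%d kB\n", $i, rss_kb(), memory_get_usage(true) / 1024);
    }
}
```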

Full memory usage detail is available here - https://gist.github.com/chinthakagodawita/5b7dd23840d59fc7bcd82c83f28ecd24

We’ll be rolling back to 8.7.0.242 until this is addressed.

Do you have an ETA for when this will be fixed @jvarney? This had a serious impact on our production environment till we were able to track it down.


#12

Hey @chin.lw - We’re still working with @toon.spin to troubleshoot this in their ticket.

I can open another ticket for you to see if we can track this down quicker by comparing your experience with that of other users like @toon.spin :smiley:


#13

Thanks for raising that ticket @RyanVeitch


#14

No problem, @chin.lw - Let us know back here how that goes for you.


#15

Hi all,

I wanted to update you all with a solution to the memory leak that a couple of you were experiencing. A new setting was introduced in version 9.1.0.246 of the PHP Agent. To take a few steps back, the 9.x versions of the agent changed how segments are created. To prevent the behavior you’re seeing now with 9.x versions, we’ve introduced a setting called newrelic.transaction_tracer.max_segments. With this setting, you can limit the number of segments kept (and therefore memory usage).

If anyone is experiencing this on 9.x versions of the agent, you’ll want to add newrelic.transaction_tracer.max_segments to your newrelic.ini. The default value is 0, so you’ll want to choose a limit appropriate to your use case.

As a suggested starting point, this may be helpful:

Segment size can vary based upon the length of the corresponding method’s name, the length of its class name, and the number of subsequent calls made by the method. That said, a conservative estimate is 400 bytes per segment. To limit the PHP Agent to 40 MB for segment capture, set this value to 100000. If this value is set lower than 2000, it further limits the total number of segments reported for transaction traces.
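For example, a starting point along the lines of the estimate above might look like this in newrelic.ini (the exact limit will depend on your workload):

```ini
; newrelic.ini
; Cap segment capture at roughly 40 MB, assuming ~400 bytes per segment.
newrelic.transaction_tracer.max_segments = 100000
```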

Hope this helps! Please let me know if I can answer any questions about this.


#16

We have the same problem as @chin.lw, although in our case the new max_segments setting doesn’t seem to have any effect on memory usage whatsoever. Disabling the New Relic module in the CLI SAPI is the only way we can get the memory usage under control at the moment.


#17

As @RyanVeitch mentioned, I am working with NR support as best I can, and it looks like our setup doesn’t respond to the segments setting either, although I have to say I do like it as a way of tweaking the agent even if it doesn’t solve this particular issue.

In fact disabling the transaction tracer altogether does not seem to help either, suggesting that the issue is in another part of the agent.

If anyone is having this issue, and has more time to help than I do, I suggest that they offer to do so by commenting in this topic, because I have found the support team to be as eager to fix this as we are.


#18

That’s good to know - we’re working with NR Support too but I personally haven’t had time to test the max segments change yet.

@sam.stenvall - If it’s an option and you still need CLI traces, we haven’t seen any memory issues since we rolled back to 8.x.


#19

Thanks for working together on this, and for working with us to try to solve it, @chin.lw & @toon.spin :smiley:


#20

We definitely have the same issue: the memory leak keeps consuming memory until the OOM killer comes along.

Setting newrelic.transaction_tracer.max_segments to 10000 changed the pace of memory consumption but didn’t stop it. Memory consumption grew at the same speed for the first 2-3 minutes, and after that at less than half the speed of a run without the max_segments setting.


#21

I’ll get you into a ticket so you can work more closely with our support team on the memory leak you’re seeing. :smiley: