Ruby newrelic_rpm memory leak

Same here. Can you please create a ticket for me? I can share with you charts of the memory usage with and without the gem, and it’s pretty clear that the memory grows to no end when the gem is bundled in the app.

Rails 4.2.2
Ruby 2.2.2
newrelic_rpm (

Hey @jasonad60 - I’ve split the post to focus on your issue. I did have a look at the ticket that was started in the other thread. Can you check one thing before we begin a ticket for you?

  • Do you have developer mode on? If so, it’s worth switching that off if not necessary. Developer mode itself has a high overhead due to the additional developer features.

  • Ruby 2.2 should be a big help here and I see you’re on it.

So, can you confirm if developer mode is on or off? If it’s on that could well be the culprit; if not, I’ll get you into a ticket so that we can collect some logs and dive further.

Thank you!

  1. developer_mode is set to false by default. The only environment that turns it on is the development environment. We took the YML file directly from your site.
  2. Yes, we’re on Ruby 2.2

I’ll start a ticket with you right now. Once you get the notification you can send in the logs immediately; my support team will follow up with you as well. Thank you @jasonad60!

Is there somewhere we can follow the progress on this? We have a similar project with Rails 4.2.3 and Ruby 2.2.2 and have a large memory leak issue. Running a simple GC comparison shows many of the leaks may be related to NR. Comparison below:

Leaked 101890 STRING objects of size 13937656/24407846 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 1171 HASH objects of size 0/271672 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 1169 OBJECT objects of size 0/102872 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 1154 ARRAY objects of size 0/63072 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 1072 ARRAY objects of size 0/45760 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 998 ARRAY objects of size 0/39920 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 486 STRING objects of size 14834/43386 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 481 STRING objects of size 16733/39584 at: tester.rb:1
Leaked 326 ARRAY objects of size 0/13040 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 325 HASH objects of size 0/75400 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 324 STRING objects of size 0/12960 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 321 HASH objects of size 0/223416 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 317 STRING objects of size 951/12680 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/bundler/gems/rails-38be9c54023d/activesupport/lib/active_support/core_ext/numeric/conversions.rb:131
Leaked 208 ARRAY objects of size 0/8320 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 57 OBJECT objects of size 0/2280 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 51 STRING objects of size 0/2040 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 50 STRING objects of size 0/2000 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 48 STRING objects of size 792/8112 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic-redis-2.0.2/lib/newrelic_redis/instrumentation.rb:83
Leaked 43 STRING objects of size 0/1720 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/arel-6.0.3/lib/arel/collectors/plain_string.rb:5
Leaked 42 STRING objects of size 4390/9450 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/bundler/gems/rails-38be9c54023d/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb:25
Leaked 41 STRING objects of size 0/1640 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 37 ARRAY objects of size 0/1480 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/tempfile.rb:131
Leaked 28 OBJECT objects of size 0/3360 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 28 OBJECT objects of size 0/2464 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 27 STRING objects of size 702/1809 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/rack-1.6.4/lib/rack/request.rb:329
Leaked 27 STRING objects of size 0/1080 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 27 STRING objects of size 108/1080 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 26 HASH objects of size 0/1040 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 25 HASH objects of size 0/1000 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 25 HASH objects of size 0/5800 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 25 HASH objects of size 0/5800 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 25 HASH objects of size 0/5800 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 23 STRING objects of size 598/2001 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/uri/generic.rb:1342
Leaked 23 OBJECT objects of size 0/2024 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 23 STRING objects of size 368/920 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/newrelic_rpm-
Leaked 20 STRING objects of size 0/800 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/tmpdir.rb:128
Leaked 20 ARRAY objects of size 0/800 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/tempfile.rb:129
Leaked 20 NODE objects of size 0/800 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/bundler/gems/rails-38be9c54023d/activesupport/lib/active_support/dependencies.rb:268
Leaked 19 OBJECT objects of size 0/760 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/tempfile.rb:130
Leaked 18 NODE objects of size 0/736 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:193
Leaked 18 STRING objects of size 2112/3042 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/tempfile.rb:136
Leaked 5 STRING objects of size 81/265 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:193
Leaked 4 STRING objects of size 42/160 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/bundler/gems/rails-38be9c54023d/activesupport/lib/active_support/dependencies.rb:268
Leaked 2 DATA objects of size 0/1256 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:193
Leaked 2 ARRAY objects of size 0/80 at: tester.rb:1
Leaked 1 ARRAY objects of size 0/40 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:193
Leaked 1 STRING objects of size 9/40 at: eval:1
Leaked 1 DATA objects of size 0/96 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/net/http.rb:1478
Leaked 1 DATA objects of size 0/96 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.0.48/lib/seahorse/client/net_http/connection_pool.rb:336
Leaked 1 DATA objects of size 0/1648 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/bundler/gems/rails-38be9c54023d/activesupport/lib/active_support/dependencies.rb:268
Leaked 1 STRING objects of size 27/68 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/bundler/gems/rails-38be9c54023d/activesupport/lib/active_support/core_ext/marshal.rb:6
Leaked 1 STRING objects of size 3/40 at: /Users/mbasset/.rbenv/versions/2.2.2/lib/ruby/2.2.0/net/http/response.rb:42
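For anyone wanting to reproduce this kind of per-allocation-site report, Ruby's `ObjectSpace` allocation tracing (available since 2.1) can produce similar output. The sketch below is a generic approximation, not the exact tool used above: it traces allocations while a block runs, garbage-collects, and counts what survived by allocation site.

```ruby
require 'objspace'

# Runs the given block with allocation tracing on, garbage-collects, and
# returns counts of surviving objects grouped by allocation site and class.
def surviving_allocations
  ObjectSpace.trace_object_allocations_start
  yield
  ObjectSpace.trace_object_allocations_stop
  GC.start # anything still reachable after this is a leak candidate
  survivors = Hash.new(0)
  ObjectSpace.each_object do |obj|
    file = ObjectSpace.allocation_sourcefile(obj)
    next unless file # skip objects allocated outside the traced block
    line = ObjectSpace.allocation_sourceline(obj)
    klass = (obj.class rescue nil) # BasicObject descendants lack #class
    next unless klass
    survivors[["#{file}:#{line}", klass]] += 1
  end
  survivors
end
```

Sites that keep growing across repeated runs of the same workload are the ones worth chasing.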

Hey @mbasset - this line indicates that you’re running with developer mode enabled:


Developer mode should never be run in production (it’s disabled by default there), and has very high memory overhead. Based on that fact and the paths in your backtraces, I’m guessing you were running locally when you generated these results. Unfortunately, these kinds of issues are highly sensitive to small configuration differences between your local machine and production, so this output is unlikely to be helpful.

It’s also likely that the specifics of what’s going on in your application are different enough from those of the original poster that following along with the progress there wouldn’t really be helpful. It would probably make the most sense to start a separate thread or file a support ticket if possible so that we can dig into the details of your specific case.

Ahh ok. You are correct I must have left that on when testing locally. I will try running the test again with developer mode off.


A post was split to a new topic: Memory leak/bloat

A post was split to a new topic: NR looks leaking the objects


We are also seeing odd memory usage within 24 hours of the agent and app being started on freshly built production servers with no traffic to the app.
As the buildout target is production I am quite certain developer mode is off.

Ubuntu 16.04
Ruby 2.3
Rails 4.2.6
newrelic sysmond deb but the gem is newrelic_rpm-

App 19381 stderr: /home/deploy/playmob_platform/shared/bundle/ruby/2.3.0/gems/newrelic_rpm- warning: GC.stat keys were changed from Ruby 2.1. In this case, you refer to obsolete `total_allocated_object' (new key is `total_allocated_objects'). Please check <> for more information.

Also FYI regarding that error message above.
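For context on that warning: Ruby 2.2 renamed several `GC.stat` keys (e.g. `total_allocated_object` became `total_allocated_objects`), and reading an obsolete key via `GC.stat(key)` prints that deprecation. A version-tolerant read is a plain hash lookup with a fallback; this is a general sketch, not the agent's actual code:

```ruby
# The GC.stat hash only contains the current key names, so a plain hash
# lookup on the old name just returns nil (without the warning); falling
# back lets the same code run on Rubies before and after the rename.
stats = GC.stat
allocated = stats[:total_allocated_objects] || stats[:total_allocated_object]
puts "objects allocated so far: #{allocated}"
```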

As our servers are in AWS EC2, the primary reason we installed the NR sysmond agent was to get better insight into average, typical, or unusual memory usage both when testing our app and when not, but no monitoring joy yet.

Was this issue ever resolved? Is anyone else (not running in developer mode) seeing this anomaly?


  • server where agent and app was stopped and started
  • server where the agent was running and the app was idle for some time before the agent was stopped and started for the purpose of this ticket



@zerowolfgang Thanks for posting to the forum. I noticed you’re using a fairly dated version of the Ruby agent. Would you be able to try upgrading your version of the agent and let us know if you see improved memory usage? Specifically, we did improve memory usage for idling applications in agent version; however, I recommend upgrading to a more current version of the agent (more info here).

The Ruby issue in the log message you referenced looks to be closed. It was related to a change that was made in the keys that are returned from GC.stat.

Let us know if you have any questions.


Right, I found this out yesterday actually and updated to the latest newrelic_rpm gem rev newrelic_rpm- While the GC.stat key error is gone, we are still seeing memory bloat over time in the Rails application processes.
Just for the record, we are using Passenger 5.0.30 and Apache for our web app server.
Here is a little more info on the testing I have done both before and after updating the NR gem.
As a control test I disabled the newrelic gem in the app and restarted Apache, after which there were no application processes. I then curled the application URL, a login page, which triggers the startup of 20 app processes as per the Passenger config. Without newrelic we can normally run 30 app processes in production without any memory bloat, but with 30 and newrelic we get critical memory usage warnings when the bloat tops out.
After that I let the app idle for around 3 hours while monitoring the RSS of the app processes; while idling, memory usage did not increase.

The next test was virtually the same but with the newrelic gem enabled. This time, once the app processes were spawned and the login page had rendered, I observed a gradual memory increase over time (also about 3 hours) of roughly 33% by the idling application, as if something was consuming memory; this was also observable when monitoring RSS.

So I instrumented the app with the rbtrace gem, restarted, and then used rbtrace to connect to an idling app process after curling the URL. The only activity I observed was the newrelic agent event loop, which appears to fire roughly every minute (59 seconds). Interestingly, or oddly, this event loop interval roughly corresponds with a 10MB-per-minute increase in memory usage as graphed by newrelic sysmond.
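The RSS monitoring described here takes only a few lines of Ruby. This is a rough sketch: the /proc layout is Linux-specific (with a `ps` fallback), and the pid and one-minute interval are placeholders chosen to line up with the agent's harvest cycle.

```ruby
# Returns the resident set size of a process in kB, reading /proc on Linux
# and falling back to `ps` elsewhere.
def rss_kb(pid = Process.pid)
  status = "/proc/#{pid}/status"
  if File.readable?(status)
    File.foreach(status) do |line|
      return line.split[1].to_i if line.start_with?("VmRSS:")
    end
  end
  `ps -o rss= -p #{pid}`.strip.to_i
end

# Sample roughly on the agent's one-minute cycle and print the delta, to see
# whether growth lines up with the harvest interval. Runs until interrupted.
def watch_rss(pid, interval: 60)
  prev = rss_kb(pid)
  loop do
    sleep interval
    cur = rss_kb(pid)
    printf("pid %d RSS %d kB (%+d kB)\n", pid, cur, cur - prev)
    prev = cur
  end
end
```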

Some stats about our instances that are and are not using the NR gem.
Our 2 production servers are running 30 Passenger app processes each, with no NR gem, and the fully exercised app utilizes 2.4GB of 3.7GB memory. These are m3.medium EC2 instances.
The 2 (really now 1) test servers I have been using to test this thing that looks like a leak of some sort can only safely run 20 Passenger app processes with the NR gem; a partially exercised app initially uses only 1.3GB of 3.9GB memory, then memory utilization balloons to 2.62GB while the app is idling. We could probably safely start another 5 app processes, I reckon. This is a t2.medium EC2 instance.

Is the Ruby agent (gem) doing something - like maybe gathering and storing metrics for some time period - that would explain this behaviour? Or are there some default options in the newrelic.yml file that could be adjusted to reduce this memory usage? We are using a stock YAML config file as an ERB template, and the only changes I currently make are adding the license key and app name.



NR control panel screen shot of this:

@zerowolfgang Thanks for getting back to us and providing the additional context into what you’re seeing. I’d like to open a support ticket on your behalf to continue looking into this. You should see an email from us soon with a link to the ticket. If you don’t see the email, please check your spam filter and then contact us if necessary.

Also, just to clarify, the Ruby agent works by spawning a background thread in the process it monitors. That background thread collects metrics (i.e. throughput, response time, etc) and data (i.e. slow SQL traces, transaction traces, etc) from your application and sends it to our backend servers every minute. I believe this is what you were noticing. I can’t speak to whether our harvest cycle is causing the 10MB increase you’re seeing but hopefully we can get to the bottom of this.
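That "background thread + once-a-minute harvest" pattern can be illustrated with a generic sketch. This is NOT New Relic's actual code; the class and method names below are invented for illustration only.

```ruby
# Generic sketch of an in-process agent: application threads record into a
# shared buffer, and a background thread flushes it on a fixed interval.
class HarvestLoop
  def initialize(interval: 60, &flush)
    @interval = interval # seconds between harvests
    @flush = flush       # callback that ships a batch to a backend
    @buffer = []
    @mutex = Mutex.new
  end

  # Called from application threads; must be cheap and non-blocking.
  def record(event)
    @mutex.synchronize { @buffer << event }
  end

  # Swap the buffer out under the lock so recording never waits on I/O.
  # If a harvest is skipped or its data is retained after sending, memory
  # grows between cycles - the minute-by-minute pattern described above.
  def harvest
    batch = @mutex.synchronize { b = @buffer; @buffer = []; b }
    @flush.call(batch) unless batch.empty?
  end

  def start
    @thread = Thread.new do
      loop do
        sleep @interval
        harvest
      end
    end
  end
end
```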

I’ll update this thread with our findings for the benefit of the Community.