Right, I found this out yesterday actually and updated to the latest newrelic_rpm gem, newrelic_rpm-184.108.40.2066. While the GC.stat key error is gone, we are still seeing memory bloat over time in the Rails application processes.
Just for the record, we are using Passenger 5.0.30 with Apache as our web app server.
Here is a little more info on the testing I have since done, both before and after updating the NR gem.
As a control test I disabled the newrelic gem in the app and restarted Apache, after which there were no application processes running. I then curled the application URL (a login page), which triggers the startup of 20 app processes as per the Passenger config. Without newrelic we can normally run 30 app processes in production without any memory bloat, but with 30 processes and newrelic enabled we get critical memory usage warnings once the bloat tops out.
Anyway, after that I let the app idle for around 3 hours. I was also monitoring the RSS of the app processes, and while idling, memory usage did not increase.
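For anyone wanting to reproduce the monitoring, something like the following is enough (a minimal sketch; 'Passenger RubyApp' is just how Passenger 5 labels its app processes in ps output on my box, so adjust the pattern if yours differ):

    # Minimal sketch: sample the combined RSS of the Passenger app
    # processes once a minute.
    loop do
      procs = `ps -eo rss=,args=`.lines.select { |l| l.include?('Passenger RubyApp') }
      total_kb = procs.inject(0) { |sum, line| sum + line.to_i }  # rss is the first field
      puts "#{Time.now} procs=#{procs.size} total_rss_mb=#{total_kb / 1024}"
      sleep 60
    end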
The next test was virtually the same but with the newrelic gem enabled. This time, once the app processes were spawned and the login page had rendered, I observed a gradual memory increase of roughly 33% over the same ~3-hour idle period, as if something was consuming memory; this was also visible when monitoring RSS.
So I instrumented the app with the rbtrace gem, restarted, and then used rbtrace to connect to an idling app process after curling the URL. The only activity I observed was the newrelic agent event loop.
And it appears that the event loop fires roughly every minute (59 seconds or so).
Interestingly, or oddly, this event loop interval roughly corresponds with a 10MB-per-minute increase in memory usage as graphed by newrelic sysmond.
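For reference, the Ruby agent's default harvest/report cycle is 60 seconds, which lines up with that interval. Here is roughly how the tracing was wired up, in case anyone wants to reproduce it (the pgrep pattern is the same assumption as above):

    # rbtrace has to be loaded inside the app, so it goes in the Gemfile:
    gem 'rbtrace'

    # Then, from a shell, attach to one idling app process and watch
    # every method call as it fires:
    #
    #   rbtrace -p $(pgrep -f 'Passenger RubyApp' | head -n1) --firehose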
Some stats about our instances, with and without the NR gem:
Our 2 production servers are running 30 Passenger app processes each, with no NR gem, and the fully exercised app utilizes 2.4GB of 3.7GB memory.
The 2 (really now 1) test servers I have been using to chase what looks like a leak of some sort can only safely run 20 Passenger app processes with the NR gem. With a partially exercised app they initially use only 1.3GB of 3.9GB memory, but memory utilization then balloons to 2.62GB while the app is idling. We could probably safely start another 5 app processes, I reckon. This is a t2.medium EC2 instance.
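Rough per-process math from those numbers: 1.3GB across 20 processes is about 65MB RSS per process at startup, and the extra 1.32GB of idle growth (2.62GB minus 1.3GB) works out to roughly another 65MB per process.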
Is the Ruby agent (gem) doing something - like gathering and storing metrics for some time period - that would explain this behaviour? Or are there some default options in the newrelic.yml file that could be adjusted to reduce this memory usage? We are using the stock YAML config file as an ERB template, and the only changes I currently make are adding the license key and app name.
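For what it's worth, these are the knobs in the stock newrelic.yml I'd consider experimenting with (a sketch only; option names can vary between agent versions, so check them against the template your gem ships):

    common: &default_settings
      # license_key and app_name set via the ERB template as before
      monitor_mode: true        # false disables reporting entirely (useful as an A/B test)
      transaction_tracer:
        enabled: true           # try false so the agent stops storing transaction traces
      error_collector:
        enabled: true           # try false so the agent stops storing error traces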
NR control panel screenshot of this: