
Significant time spent in ActionDispatch::Routing::RouteSet#call


We’re seeing significant time in certain endpoints in ActionDispatch::Routing::RouteSet#call

Several endpoints show more time there, and some endpoints show very little. I’ve seen reports on the web of people hitting similar issues, but nothing recent.

  • What language agent version are you using? Any other relevant versions? (Gems, etc.)

Ruby 2.5.0
newrelic_rpm (

The other odd thing is that we use Datadog and have logging where we see 99% of our requests to this endpoint spend less than 100ms as reported by Rails, but the New Relic view reports an average of 100-150ms. Is it possible that, since ActionDispatch::Routing::RouteSet#call is the last middleware to be called, it’s including transport cost to the end client? If the requester is on a slow network, for instance, is that time included in the reporting and safe to ignore? If that is true, it does seem like it’s affecting our server apdex in a negative way.


heya @ncavigliano~!

This is a good set of questions. When I looked at the transaction in question, I was sure I knew what the issue was - but I’m pretty sure I was dead wrong.

My first thought was that the average response time was being skewed by outliers, since we were seeing transaction traces several times higher than the average.

But looking at the app server histogram distribution, that didn’t really add up: something like 98.5% of the transactions were clocking in at over 100ms, so the whole distribution was slow, not just a few outliers.

I hope you’re OK with digging into this a bit more in a support ticket; we may ask you to gather some logs for us to review more closely. We can update here after we work on this. :slight_smile:

Also - if I’m understanding correctly, the apdex score shouldn’t be affected negatively, since your apdex_f is 2,000 ms.
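To make the apdex point concrete, here is a minimal sketch of the standard Apdex formula (score = (satisfied + tolerating / 2) / total, with "satisfied" under the threshold T and "tolerating" between T and 4T, where apdex_f = 4T). The `apdex` helper is illustrative, not New Relic code; with apdex_f = 2,000 ms, T = 500 ms, so 100-150 ms responses all count as satisfied:

```ruby
# Illustrative Apdex calculation (not New Relic's implementation).
# apdex_f = 4 * T; with apdex_f = 2.0s, the threshold T is 0.5s.
T = 0.5 # seconds

def apdex(response_times, t = T)
  satisfied  = response_times.count { |rt| rt <= t }          # fast enough
  tolerating = response_times.count { |rt| rt > t && rt <= 4 * t }
  # "frustrated" samples (> 4T) contribute nothing to the numerator
  (satisfied + tolerating / 2.0) / response_times.size
end
```

With T = 0.5s, a batch of 100-150 ms responses scores a perfect 1.0, which is why the reported averages should not be dragging the apdex down.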


To follow up on this in case any future engineers read this and see a similar issue: I’ve found the root cause and want to share what it was. It was a very specific issue that likely isn’t your issue, but the way I debugged it might help you debug yours.

Just like the other Google results and posts suggest, this was in fact a middleware issue outside normal NewRelic instrumentation, and so it shows up as time spent in ActionDispatch::Routing::RouteSet#call. Since AppName::Application.routes is the last middleware to be loaded, any time spent outside of normal instrumentation is attributed to ActionDispatch::Routing::RouteSet#call.
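As a minimal sketch of why this happens (the `SlowLoggingMiddleware` name and the sleep are hypothetical stand-ins, not the actual code from this thread): any work a plain Rack middleware does outside New Relic's default instrumentation is folded into the innermost instrumented frame in the transaction trace, which is ActionDispatch::Routing::RouteSet#call.

```ruby
# Hypothetical Rack middleware: the time spent in expensive_log_setup is
# invisible to default agent instrumentation, so in transaction traces it
# appears as extra time inside ActionDispatch::Routing::RouteSet#call.
class SlowLoggingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    expensive_log_setup # uninstrumented per-request work
    @app.call(env)      # hand off to the rest of the stack
  end

  private

  def expensive_log_setup
    sleep 0.05 # stand-in for CPU-heavy work done on every request
  end
end
```

Running `rake middleware` lists the stack in order, which is a quick way to enumerate candidates for this kind of hidden cost.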

The above being said, we also use lograge to normalize logging into a more digestible single-line format. A feature of lograge is the ability to configure a custom block that augments log lines with extra contextual data like user/session data. As part of this custom data block we wanted to encrypt certain values before they made it into the logs, and the way we did that was to use ActiveSupport::KeyGenerator and ActiveSupport::MessageEncryptor. Our mistake was to use the request ID as the salt. Creating the key and the message encryptor object is CPU-intensive, and since the salt was the request ID, the result couldn’t be memoized and was regenerated on every request that needed encryption.
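A minimal sketch of the anti-pattern and the fix, assuming the setup described above (the helper names and secret are hypothetical). The expensive step is modeled here with OpenSSL's PBKDF2, which is what ActiveSupport::KeyGenerator runs under the hood with 2**16 iterations by default, deliberately slow by design:

```ruby
require 'openssl'

SECRET = 'dummy-secret-base' # stand-in for the app's secret

# Models ActiveSupport::KeyGenerator#generate_key: PBKDF2 with 2**16
# iterations, which is intentionally CPU-intensive.
def derive_key(secret, salt)
  OpenSSL::PKCS5.pbkdf2_hmac_sha1(secret, salt, 2**16, 32)
end

# Buggy pattern: the salt changes on every request (it was the request
# ID), so the derived key can never be reused.
def key_per_request(request_id)
  derive_key(SECRET, request_id) # re-runs PBKDF2 on every call
end

# Fixed pattern: a constant salt lets the key be derived once and cached.
def memoized_key
  @memoized_key ||= derive_key(SECRET, 'log-encryption') # fixed salt
end
```

With the fixed salt, the ActiveSupport::MessageEncryptor built from the key can also be constructed once and reused across requests.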

The way I verified this was to use NewRelic’s great custom method instrumentation to explicitly instrument the custom block code. Once deployed, it was very telling: the majority of time was spent in a single method that generated the ActiveSupport::MessageEncryptor object.
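For anyone unfamiliar with that feature, a sketch of the approach using the agent's `add_method_tracer` API (the `LogPayloadBuilder` class and method are hypothetical; the instrumentation is guarded so the sketch also runs without the gem installed):

```ruby
# Hypothetical class wrapping the lograge custom-data block.
class LogPayloadBuilder
  def build_custom_payload(event)
    { request_id: event[:request_id] } # stand-in for the real payload logic
  end

  begin
    # With newrelic_rpm installed, this makes the method show up in
    # transaction traces under the given metric name.
    require 'new_relic/agent/method_tracer'
    include ::NewRelic::Agent::MethodTracer
    add_method_tracer :build_custom_payload,
                      'Custom/LogPayloadBuilder/build_custom_payload'
  rescue LoadError
    # newrelic_rpm not available; the method simply runs uninstrumented.
  end
end
```

Once deployed, the `Custom/...` metric appears in transaction traces, which is what surfaced the encryptor construction as the hot spot.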

I also relied on Benchmark.measure to find that instantiating 10 ActiveSupport::MessageEncryptor instances took about 10 seconds.
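A sketch of that measurement technique, with the expensive key derivation again modeled via PBKDF2 rather than the actual ActiveSupport objects (so the numbers here will differ from the 10 seconds reported above):

```ruby
require 'benchmark'
require 'openssl'

# Time 10 fresh key derivations, analogous to building 10 encryptor
# objects when each one forces a new KeyGenerator key because the salt
# changes per request.
elapsed = Benchmark.measure do
  10.times do |i|
    OpenSSL::PKCS5.pbkdf2_hmac_sha1('secret', "salt-#{i}", 2**16, 32)
  end
end

puts format('%.2fs of wall-clock time', elapsed.real)
```

`Benchmark.measure` returns a `Benchmark::Tms`, so `elapsed.real` (wall clock) and `elapsed.utime` (user CPU) can be compared to confirm the cost is CPU-bound rather than I/O.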

If anyone sees similar timings in ActionDispatch::Routing::RouteSet#call, I encourage you to use NewRelic custom method instrumentation and take a hard look at all your middleware; some small piece of code that goes unnoticed could be having a significant impact on your application.

Good luck!


@ncavigliano - This is a fantastic follow up. Thanks so much for sharing the solution that worked for you, and the detailed steps you took to get there. This is a very specific issue as you noted, but I’m sure the troubleshooting steps can help others to identify other problems they may see as well. :slight_smile: