
How is throughput calculated?

throughput
apm

#1

We have a backend service with a SOAP API. As part of its work, the service calls both a DB and an external endpoint.

During a recent incident, we saw a drop in throughput on the APM Overview and Transaction pages. As a consequence of this, there was also a drop in the throughput on the APM External Services page.

Now to our problem. The sending party didn’t seem to send fewer requests. Why did we see a drop in throughput? Is this because we were actually receiving less traffic, or was it because we were slower to process the traffic we received?

According to the New Relic glossary, throughput is measured as requests per minute. We usually speak in terms of requests as incoming traffic and responses as outgoing traffic. And throughput, as we and Merriam-Webster understand it, should be the number of completed request-response pairs.

So how does New Relic measure throughput? Is it counting incoming traffic, i.e. the requests? Or is it counting completed requests, i.e. requests that also have a response?

The two scenarios we have then, are:

  • NR only counting incoming traffic: Since sending and receiving parties show different graphs, not all requests made it to our service, and we have a networking issue.
  • NR counting request-reply pairs: Decreased throughput means our service is processing requests at an increasingly slow rate.

#2

Hello @per.junel,

That’s a great question and I’ve had a chance to research this a bit. Throughput is the number of successful requests per minute (RPM) to your web server. If a request fails in the app, it would most likely be recorded as an error instead. The throughput numbers may also vary depending on whether you’re looking at application data or browser data, because a single request from a user may generate multiple requests by the application itself.

Our agent considers a request to be a web request that starts a transaction. In the New Relic APM, a transaction is defined as beginning when a request enters your app and ends when your app sends a response. Keep in mind that a transaction can result in other web requests being sent but, since these are considered to be part of a transaction, they will not increase throughput for the app.

Metrics are collected by an agent once per minute and all of the successful requests that have occurred in that minute are included in the metrics as the throughput.
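To make that counting model concrete, here is a minimal sketch of a once-per-minute harvest cycle. This is an illustration of the behaviour described above, not the agent’s actual implementation: completed transactions are tallied as throughput only when they succeed, while failed requests land in a separate error count.

```python
from collections import Counter

def harvest(transactions):
    """Tally one app instance's completed transactions per minute.

    transactions: list of (minute, succeeded) pairs, where each entry
    represents a transaction that started with a web request and ended
    when the response was sent.
    """
    rpm = Counter()      # throughput: successful requests per minute
    errors = Counter()   # failed requests per minute (not throughput)
    for minute, succeeded in transactions:
        if succeeded:
            rpm[minute] += 1
        else:
            errors[minute] += 1
    return rpm, errors

rpm, errors = harvest([(0, True), (0, True), (0, False), (1, True)])
# minute 0: throughput 2, errors 1; minute 1: throughput 1
```

Note that, per the definition above, any extra web requests made *inside* a transaction would simply not appear in this list at all, so they never inflate the count.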

Another thing worth noting about the throughput information is that it’s averaged across all of the servers hosting the application.

You can gather more specific throughput data related to your app with our REST API:
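As a sketch of what such a REST API call could look like: the application ID below is hypothetical, and the `HttpDispatcher`/`requests_per_minute` metric name is an assumption based on New Relic’s REST API v2 metric-data endpoint, so check the API explorer for your account before relying on it.

```python
from urllib.parse import urlencode

APP_ID = 1234567  # hypothetical application ID; substitute your own

# Query the per-minute web throughput metric for one application.
params = urlencode([
    ("names[]", "HttpDispatcher"),        # assumed web-throughput metric
    ("values[]", "requests_per_minute"),
    ("summarize", "true"),
])
url = f"https://api.newrelic.com/v2/applications/{APP_ID}/metrics/data.json?{params}"

# Send this with your API key in an X-Api-Key header, e.g. via curl or
# the requests library.
print(url)
```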

Alternatively, you could also use our Insights product to query for the total number of requests:

SELECT count(*) FROM Transaction SINCE yesterday
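If you want the result to mirror the per-minute throughput chart rather than a single total, a per-minute variant of the query above should work (assuming NRQL’s `rate` function and `TIMESERIES` clause are available on your account):

```sql
SELECT rate(count(*), 1 minute) FROM Transaction SINCE yesterday TIMESERIES
```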

You can find more documentation on writing Insights queries here:

I hope that helps shed some light on how throughput is determined by the .NET agent. If you’re still seeing inconsistencies or believe that throughput data is incorrect, I would appreciate it if you could provide a permalink to the time window of when this is occurring. Only New Relic support staff will be able to view this link outside of your account.

To create a permalink to any page within the New Relic user interface, scroll to the bottom and click ‘Permalink’ all the way on the right next to ‘Kiosk Mode.’ This will show me the exact page and time period that you are observing.

—Neil


#3

Thank you, Neil!
Then throughput seems to be how we understand it. If I understand you correctly, though, the total number of requests sent to our service equals throughput + error count? Remember, this is a backend lookup service, so there’s no “extra traffic” generated by the client, and every request is one transaction.

We have since found out that it was a load balancer/firewall throttling config that, under heavy load, effectively blocked some of the traffic to and from our service.

Thanks again!
Per.