Get the max rpm in a period of time

Be able to see the max RPM in my app in a given period of time. Example: in the last week, what was the max RPM we had?
We are in the process of setting some rate limits in our API and we want to know our ceiling.

We have this query:

SELECT rate(count(*), 1 minute) AS 'Requests per minute' FROM Transaction WHERE appName = 'app name' TIMESERIES 1 minute SINCE 1 week ago EXTRAPOLATE

The problem with the query above is that the graph can only show up to 366 buckets. So we can change that query to:

SELECT rate(count(*), 1 minute) AS 'Requests per minute' FROM Transaction WHERE appName = 'app name' TIMESERIES max SINCE 1 week ago EXTRAPOLATE

But then we actually lose resolution and miss all the peaks, leaving us with just averages.

Any suggestions on how we can detect the top RPM in a week? We don’t need a chart; a number or a table would work.


Hi, @guillermo4: If this is something you only need to do once, you might try using the REST API to get the metric timeslice data for your application. The HttpDispatcher\calls_per_minute metric should tell you how many calls per minute your application has received; you can adjust the time window for each call to ensure that you receive per-minute values.

Import the results into a spreadsheet to find the maximum value.
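If you want to skip the spreadsheet step, the REST call and the max-finding can be scripted. A minimal Python sketch, assuming the v2 metric data endpoint and the documented `metric_data.metrics[].timeslices[].values` response shape; the app ID, API key, time window, and the `period` parameter are placeholders/assumptions to verify against your account:

```python
# Sketch: pull HttpDispatcher calls_per_minute via the New Relic REST API (v2)
# and report the peak value. APP_ID and API_KEY are placeholders.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.newrelic.com/v2/applications/{app_id}/metrics/data.json"

def max_calls_per_minute(metric_data: dict) -> float:
    """Return the highest calls_per_minute across all timeslices in a
    /metrics/data.json response body."""
    values = [
        ts["values"]["calls_per_minute"]
        for metric in metric_data["metric_data"]["metrics"]
        for ts in metric["timeslices"]
    ]
    return max(values)

def fetch_metric_data(app_id: int, api_key: str, start: str, end: str) -> dict:
    """One REST call; keep the window small enough to get per-minute buckets."""
    params = urllib.parse.urlencode({
        "names[]": "HttpDispatcher",
        "values[]": "calls_per_minute",
        "from": start,   # e.g. "2020-04-01T00:00:00+00:00"
        "to": end,
        "period": 60,    # request 1-minute granularity (assumed supported)
    })
    req = urllib.request.Request(
        API_URL.format(app_id=app_id) + "?" + params,
        headers={"X-Api-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Loop `fetch_metric_data` over one-hour windows for the week, feed each response to `max_calls_per_minute`, and keep the overall maximum.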


It would be nice to have nested function query support. Then the query could be:

SELECT MAX(rate(count(*), 1 minute)) AS 'Requests per minute' FROM Transaction WHERE appName = 'app name' SINCE 1 week ago

This would then output the highest requests per minute for the last week.

@guillermo4 - Add your vote to the feature request below :wink:


Thanks @philweber. This will give us the top RPM in a given time, but what we need is more aligned with what @stefan_garnham is saying, because we want to further improve the query to group by clients.

To define our API rate limit, we can’t just look at the max RPM in a given period, because that aggregates all customers. What we want is RPM grouped by customer, and from there the max(RPM), so we can then determine what is “normal” and what is not.

Hope this makes sense


Hi, Guillermo. Yes, I understand your use case. Unfortunately, as @stefan_garnham pointed out, it is not possible out of the box because NRQL does not support nested functions.

The workaround is to periodically query the data you want to analyze, such as:

SELECT rate(count(*), 1 minute ) 
FROM Transaction 
WHERE appName = 'App Name' 
FACET customer 

Take the result of that query and insert it into Insights as custom events. Then you can query those custom events to get the Max() value per customer.


Thanks @philweber. Is this something that can be automated within NR, or do we need to create a script to pull the results from the query and insert them into Insights? I am not 100% familiar with this step, so any material that can point me in the right direction would be great.

You would have to create a script to execute the queries and insert the custom events. There is a nice overview of the process in this post:
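As a rough illustration of what such a script could look like (not necessarily the linked post’s exact approach), here is a hedged Python sketch: run the faceted `rate()` query through the classic Insights query API, reshape the facet results into custom events, and POST them to the Insights insert API. The account ID, keys, the event type name `CustomerRpm`, and the response shape assumed in `facets_to_events` are all assumptions to verify against your account:

```python
# Sketch of the periodic script: query per-customer RPM, then insert one
# custom event per customer. ACCOUNT_ID and the keys are placeholders.
import json
import urllib.parse
import urllib.request

ACCOUNT_ID = "1234567"   # placeholder
NRQL = ("SELECT rate(count(*), 1 minute) FROM Transaction "
        "WHERE appName = 'App Name' FACET customer SINCE 5 minutes ago")

def facets_to_events(query_result: dict, event_type: str = "CustomerRpm") -> list:
    """Turn the query API's facet results into insertable custom events.
    Assumes the shape {"facets": [{"name": ..., "results": [{"result": ...}]}]}."""
    return [
        {"eventType": event_type,
         "customer": facet["name"],
         "rpm": facet["results"][0]["result"]}
        for facet in query_result.get("facets", [])
    ]

def run_once(query_key: str, insert_key: str) -> None:
    # 1. Execute the NRQL query via the Insights query API.
    url = ("https://insights-api.newrelic.com/v1/accounts/%s/query?nrql=%s"
           % (ACCOUNT_ID, urllib.parse.quote(NRQL)))
    req = urllib.request.Request(url, headers={"X-Query-Key": query_key})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # 2. Insert one custom event per customer via the insert API.
    events = facets_to_events(result)
    post = urllib.request.Request(
        "https://insights-collector.newrelic.com/v1/accounts/%s/events" % ACCOUNT_ID,
        data=json.dumps(events).encode(),
        headers={"X-Insert-Key": insert_key,
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(post)
```

Once events are flowing, a query like SELECT max(rpm) FROM CustomerRpm FACET customer SINCE 1 week ago would give the per-customer peak.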


Team, I tried the query below to get the highest RPM:

SELECT MAX(rate(count(*), 1 minute)) AS 'Requests per minute' FROM Transaction WHERE appId = SINCE 1 week ago

but I am getting this error message:

Not supported: Call{target=Id{name='rate', isQuoted=false}, arguments=CallArguments{positionalArgs=[Call{target=Id{name='count', isQuoted=false}, arguments=CallArguments{positionalArgs=[Wildcard{}], kwargs={}}}, DurationLiteral{delta=1, unit=MINUTES}], kwargs={}}} (at line 1 column 11)

Nested functions are not currently possible; this is Stefan’s point above: max(rate(count(*), 1 minute)) is not supported. You can instead just look at count(*) for the highest throughput.

Here I’m faceting by the date and the hour, so I can see that April 1st between 20:00 - 21:00 is the busiest time in transactions:

SELECT count(*) FROM Transaction SINCE THIS WEEK FACET hourOf(timestamp), dateOf(timestamp) EXTRAPOLATE 


Thanks @RyanVeitch.

Can we get more granular data, like in minutes, please? As you are aware, peak load generally sustains for only a few minutes, so taking an hourly average does not give the correct value.


At the moment, no; `hourOf(timestamp)` is the most granular we can facet by.

However, using the query above we know what timeframe, and what day had the highest RPM, so we can substitute those values into another query:

SELECT count(*) FROM Transaction SINCE '2020-04-01 20:00:00' UNTIL '2020-04-01 22:00:00'  TIMESERIES 1 minute EXTRAPOLATE

to chart that one- or two-hour period minute by minute.

Hi, wanted to update this topic with news that NRQL now includes nested query support that can solve this.

There are some examples in those docs on how to achieve max rpm per unit time (within limits imposed by TIMESERIES).
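For example, a nested version of the original query might look like the following (a sketch based on those docs; note the inner TIMESERIES is still subject to the 366-bucket limit, so the window here is 6 hours rather than a week):

```sql
SELECT max(rpm) AS 'Max requests per minute'
FROM (
  SELECT rate(count(*), 1 minute) AS 'rpm'
  FROM Transaction
  WHERE appName = 'app name'
  TIMESERIES 1 minute
)
SINCE 6 hours ago
```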


I was hoping that nested queries would allow 366 buckets per subquery, but it appears that this is the limit for the query as a whole.

I have a number of Terminal Services servers whose session counter information I am collecting via the nri-perfmon integration, so I can sum the sessions per host for a total per timeseries and then get the timeseries max.

Data is collected from each host every 5 minutes, so the 5-minute timeseries with the maximum session total gives the peak sessions per day.

I have the following query that gives this maximum for a 24-hour period over the 5-minute timeseries:

SELECT max(Total) FROM (SELECT sum(TotalSessions) as Total FROM TerminalServices TIMESERIES 5 minutes) SINCE 1 day ago

However what I really want is a graph for 30 days that will show this daily (24 hour) max, so a query like below:

SELECT max(Total) FROM (SELECT sum(TotalSessions) as Total FROM TerminalServices TIMESERIES 5 minutes) SINCE 30 days ago TIMESERIES 24 hours

I was hoping that the nested “inner” query would compute all the 5-minute timeseries sums (288 buckets) independently of the “outer” query within its 24-hour timeseries.

So the “inner” query would return all 5 minute timeseries session totals for each timeseries of the “outer” query.

Hence the “outer” query would then return the maximum 24-hour session total for each of the previous 30 days it is set to run for.

However, it appears that the 366-bucket total is shared across the nested queries: the maximum SINCE window I can set for the “outer” query is 47 hours, and setting 48 exceeds the 366 bucket limit.

I can’t see how what I want is possible without performing the “inner” query’s aggregation outside of New Relic and then ingesting it back into a new custom event table to run the “outer” query against.

Is there a way to use nested queries to get the required outcome?

I really don’t want to have to start running additional scripts outside of New Relic when all the data I require is already there.