Feature Idea: Increase maximum buckets for TIMESERIES

Currently, TIMESERIES queries only support 366 buckets. This prevents seeing maximums and minimums at small intervals across a wide range of time. For example, I can’t see performance down to the minute for the last two weeks. That’s important in some use cases (for example, spotting short-lived glitches).
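For concreteness, this is the kind of query the limit rules out (event type and attribute names are illustrative, not from the original post): two weeks at one-minute resolution needs 14 × 24 × 60 = 20,160 buckets, far above 366.

```sql
-- Hypothetical query: 2 weeks at 1-minute resolution would need 20,160 buckets,
-- so NRQL cannot honour this bucket size over such a wide window today
SELECT max(duration) FROM Transaction TIMESERIES 1 minute SINCE 2 weeks ago
```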

I’d like to see this increased to a couple of thousand - but any increase would help.


New Relic Edit

  • I want this too
  • I have more info to share (reply below)
  • I have a solution for this


We take feature ideas seriously, and our product managers review every one when planning their roadmaps. However, there is no guarantee this feature will be implemented. This post does ensure the idea is put on the table and discussed, though. So please vote and share your extra details with our team.

Thanks for posting @mark.oueis - and thanks for adding in your use case.

The 366-bucket limit is there mostly for query optimisation, though I can absolutely see a need for it to be greater.

One note: if you use TIMESERIES MAX over a larger time period, you should still see some sense of the spikes in short time periods; you can then use SINCE and UNTIL to narrow down to the minutely data.
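That workaround might look like the following sketch (event type, attribute, and timestamps are illustrative assumptions, not from this thread):

```sql
-- Wide view first: MAX per hourly bucket over two weeks still surfaces spikes
SELECT max(duration) FROM Transaction TIMESERIES 1 hour SINCE 2 weeks ago

-- Then narrow to a suspicious window at minutely resolution
-- (a 6-hour window is 360 one-minute buckets, just under the 366 limit)
SELECT max(duration) FROM Transaction TIMESERIES 1 minute
SINCE '2023-05-01 12:00:00' UNTIL '2023-05-01 18:00:00'
```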

With that said, I will get your feature idea filed internally for you :smiley:

(Also, my apologies, I accidentally hit send on this message too early)

Hey Ryan, thanks for the update. That workaround is roughly what we’ve been doing when we need to; it’s just not as nice as one large graph, and it takes more time fiddling with SINCE and UNTIL. If you could get into the thousands, say 2,000 buckets (maybe even as an option), that would be awesome.


Understood! Thanks for continuing with that workaround for now. The Feature Request is in and so hopefully we’ll see an update on that :smiley:

Is there a way to force a query not to use aggregated buckets? From your chart, the 1-minute data is retained for 30 days. I NEED to get the single largest CPU and memory readings over the last 30 days, and whenever you aggregate data you lose granularity. What you are telling me is that if I want a single number, I need to run roughly 120 queries and somehow keep track of the timeframes I have already covered to determine the max utilization.

New Relic needs to provide a way for me to specify the bucket size to search across, instead of automatically searching over default bucket ranges. Going from 98% CPU utilization with a 1-minute bucket search, down to 17% over a 24-hour search (10-minute bucket aggregates), and further down to 3% over a 30-day period (3-hour bucket aggregates), speaks volumes about what data aggregation can do to a query, causing you to make erroneous assumptions about the usage of an instance.
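Two hedged sketches of what is possible today, assuming Infrastructure's SystemSample data: a plain max() with no TIMESERIES clause returns a single number rather than buckets (though it is still computed from whatever resolution the retention tier stores for the window), and NRQL does accept an explicit bucket size, just capped at 366 buckets per query window:

```sql
-- One number, no bucketed result; still subject to the stored data resolution
SELECT max(cpuPercent) FROM SystemSample SINCE 30 days ago

-- Explicit 1-minute buckets: only valid up to ~6 hours (360 buckets <= 366)
SELECT max(cpuPercent) FROM SystemSample TIMESERIES 1 minute SINCE 6 hours ago
```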

Hi Michael - I just moved your post from here over to this feature idea, which I think best aligns with what you’re hoping to achieve here.

More than 366 buckets is currently not possible, but I’ve added your input on this request internally for you.