Hi there @amandeep.singh.5 and @vr00n - I did a little further digging, and it looks like this is a Feature Idea, with no obvious workaround. I added a poll so that others can weigh in and passed this to the team. If you have any other use case details you would like to add, please feel free.
The only workaround I have seen is to create a monitor in Synthetics that calls the Insights API to execute the query; then you have a few options for what to do with the results. You could either:
- Cause the monitor to fail if the results are not what you expected
- Put your query results back into a custom Insights event, using the Insights insert API, which you could then monitor with a NRQL alert condition
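A minimal sketch of that second option, assuming hypothetical account ID, API keys, and event names (the Insights query and insert endpoints are real, but every identifier and value below is illustrative, not an official implementation):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical values -- substitute your own account ID and API keys.
ACCOUNT_ID = "1234567"
QUERY_KEY = "YOUR_INSIGHTS_QUERY_KEY"
INSERT_KEY = "YOUR_INSIGHTS_INSERT_KEY"


def run_insights_query(nrql):
    """Execute a NRQL query via the Insights query API and return parsed JSON."""
    url = (f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}"
           f"/query?nrql={urllib.parse.quote(nrql)}")
    req = urllib.request.Request(url, headers={"X-Query-Key": QUERY_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def extract_count(response):
    """Pull the single aggregate value out of a response like
    {'results': [{'count': 42}]}."""
    return list(response["results"][0].values())[0]


def build_custom_event(value, event_type="DailyJobCheck"):
    """Shape the query result as a custom event ('DailyJobCheck' is a
    made-up name) that a NRQL alert condition could then monitor."""
    return {"eventType": event_type, "eventsReceived": value}


def post_custom_event(event):
    """Send the custom event to the Insights insert API."""
    url = f"https://insights-collector.newrelic.com/v1/accounts/{ACCOUNT_ID}/events"
    req = urllib.request.Request(
        url,
        data=json.dumps([event]).encode(),
        headers={"X-Insert-Key": INSERT_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Inside a scripted Synthetics monitor the same flow would be written in Node.js; the point is just query, extract, re-insert.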
I hope this helps.
Awesome - thanks for that idea @peckb1
Hi, I am stuck on the same issue, and I am not sure whether this has been addressed or will be addressed soon. Please let us know if there is a better workaround for this.
Hi @Swapnil_Sundarkar -
We don’t yet have 24-hour timescales for alerting, and I’m not sure if this is on the roadmap.
I’ll get your +1 submitted though, so the product team knows of the demand for this.
Thanks. It’s been almost 2 years that everyone has been waiting for this feature. If it’s not available out of the box, can someone help me build a custom solution for this?
I want to share that, over the course of the next month, we are releasing official support for “loss of signal detection” for NRQL conditions. Loss of Signal configuration allows you to set an expiration duration, in wall-clock time, measured from the time we received the last data point. Once that duration expires, we identify the signal as lost and allow two actions:
- close all open violations
- create a new violation (and resultant notification) for “Loss of Signal”
The maximum expiration duration we will support to start with is 48 hours, so you should be able to use this feature to be alerted when infrequent jobs do not report data within the expected time frame. If you expect a service to report every 6 hours, you can set an expiration duration of 6 hours and 15 minutes (for example) and be notified if that signal is lost.
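To make the arithmetic concrete, here is a small sketch that builds the settings described above as a dictionary. The field names mirror the `expiration` block that NerdGraph exposes on NRQL conditions, but treat them as an assumption; check the current API docs for the exact schema:

```python
# Sketch of the loss-of-signal settings described above. Field names are
# assumptions modeled on NerdGraph's NRQL-condition "expiration" block.

def expiration_settings(hours, minutes=0, close_open=True, notify=True):
    """Build loss-of-signal settings: an expiration duration in seconds
    plus the two actions described above (close open violations, open a
    new 'Loss of Signal' violation)."""
    seconds = hours * 3600 + minutes * 60
    if seconds > 48 * 3600:
        raise ValueError("maximum supported expiration duration is 48 hours")
    return {
        "expirationDuration": seconds,
        "closeViolationsOnExpiration": close_open,
        "openViolationOnExpiration": notify,
    }


# A service expected to report every 6 hours, with 15 minutes of slack:
settings = expiration_settings(6, 15)  # 6h15m = 22500 seconds
```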
After we release this, we will reassess this use case to determine if another method is still required.
We should have Loss of Signal Detection fully released by the end of July 2020, and will then begin rolling out Gap Filling capabilities.
Product Manager - AI Ops
Awesome! Looking forward to it.
So we should just see this show up in our alert conditions when it goes live? Where can we see the current release notes or subscribe to them via email?
You can follow the #productupdate tag, where we try to post new topics announcing new features.
I’ll get signed up.
Great! Thanks Tim
In New Relic Alerts, NRQL conditions are limited by their threshold duration, and the longest threshold duration New Relic supports is 120 minutes (2 hours). We wish to set alerts over a 24-hour window, or even a custom time range (say, 6:00-11:00 am), since our jobs run only once a day.
I have a metric called “events received” and I wish to check that the value for this metric is over 1000 in 24 hours. If it is less than 1000, I need to trigger an alert. However, because of the 2-hour limit, I am unable to do so.
A workaround I was considering was to check the value for a specific time period, say between 6:00 am and 11:00 am; the value should be around 100 or more, and if it is less, an alert should be triggered.
Here is the query I used, which does not return the expected results:
SELECT sum(event_received) from Metric where metricName = 'event_received' and EventType = 'WorkerChanged' and hourOf(timestamp) >= '6:00' and hourOf(timestamp) <= '11:00' SINCE 1 day ago
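One likely reason the query misbehaves: `hourOf(timestamp)` returns string labels, so the lexical comparison `>= '6:00'` does not select the intended hours (as a string, `'10:00'` sorts before `'6:00'`). A hedged sketch of building the hour filter with an `IN` list instead, for ad-hoc querying or inside the Synthetics workaround mentioned earlier in the thread (the `'6:00'`-style label format is an assumption; verify it against your data):

```python
# hourOf(timestamp) yields labels like '6:00', so enumerating the hours in
# an IN clause avoids string-ordering problems entirely.

def hour_labels(start_hour, end_hour):
    """Return hourOf-style labels for the inclusive hour range."""
    return [f"{h}:00" for h in range(start_hour, end_hour + 1)]


def build_window_query(start_hour, end_hour):
    """Compose the NRQL query with an IN-based hour filter."""
    labels = ", ".join(f"'{label}'" for label in hour_labels(start_hour, end_hour))
    return (
        "SELECT sum(event_received) FROM Metric "
        "WHERE metricName = 'event_received' "
        f"AND hourOf(timestamp) IN ({labels}) SINCE 1 day ago"
    )
```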
Moreover, as I will be using NRQL in alerts, I will not be able to use ‘SINCE’ to set a time range.
I’ve also reviewed the New Relic Help Center, and there are existing requests to extend the alert threshold beyond 2 hours. Unfortunately, I found this was requested as far back as 2015 and New Relic hasn’t addressed it yet.
Does anyone have any ideas on how to work around this New Relic limitation?
@Saumya.Dureja Definitely interested in hearing from community members. I’m wondering, though, if this recent release may offer the answer you need: Announcing: New Relic One Streaming Alerts for NRQL conditions, specifically the information around Loss of Signal Detection.
Loss Of Signal Detection
The NR One Streaming Alerts platform now provides official support for Loss of Signal Detection. While there are workarounds to achieve this in the current platform, they are inconsistent, and the shift to an event-based streaming algorithm disables them. With configurable Loss of Signal Detection on any NRQL alert condition, you simply specify how many seconds we should wait after the last data point before we consider that signal lost. Once that time expires, you can choose to be notified of the Loss of Signal, or simply close any open violations if you expect the entity or signal to go away.
@JoiConverse Thank you for sharing this. It is extremely useful for setting alerts in scenarios where one needs to check whether a job ran or whether data was received.
But is there any way I can check that a specific number/threshold was reached in the past 24 hours? E.g., in my case, whether the number was greater than 1000 in 24 hours, or greater than 100 between 6 am and 11 am.
Please see my reply in your other topic: