Feature Idea: Notifications on Alert Warnings

Presently, New Relic sends notifications only for Critical alerts, but we have seen an increasing need to send notifications for Warnings too. Notifying on Warnings helps us be proactive: we can fix issues before they become Critical, or find a solution faster once they do. Without Warning notifications we are forced to create “Critical” alerts for Warnings and risk triggering the Major Incident process for what are really Warnings.


New Relic

  • I want this, too
  • I have more info to share (reply below)
  • I have a solution for this


We take feature ideas seriously, and our product managers review every one when planning their roadmaps. However, there is no guarantee this feature will be implemented; this post ensures the idea is put on the table and discussed. So please vote and share any extra details with our team.

3 Likes

Thanks @Anup.Jishnu - I appreciate you sharing this and the need for it. If you have any additional details about how you would like to see this implemented (the same types of notifications? notifications for ALL Warnings?), please share them - that would be greatly appreciated.

2 Likes

Thanks for your response.

Most companies use some third-party mechanism for alert notification, like PagerDuty/OpsGenie/VictorOps/HipChat/Slack/email/etc. So having the same type of notification that New Relic presently provides for Critical alerts would be great.

With that in mind, notifications on all Warnings would be one way to go. Alternatively, there could be a checkbox next to the Warning condition for “Trigger Notification”.

7 Likes

I would like policies to have a separate set of notification channels for Warnings. This would enable Warnings to be sent to a lower-priority channel, e.g. a Slack channel instead of PagerDuty.

5 Likes

Can we get an update on this feature request? Thanks

You could achieve this by configuring a separate set of alert criteria and pushing it to a low-urgency service in PagerDuty. Personally, my preference would be for PagerDuty services to use the alert severity to decide how to notify, so that a single PD service could handle this and alert accordingly.

1 Like

Thanks for posting your solution here @neil.toolan :smiley:

I’ve actually managed to achieve this by using event rules in PagerDuty together with a defined naming convention for the alert condition in New Relic. When the event arrives in PD it is interpreted, and the event rule routes it to the defined severity level.
E.g. I now include “SEVERITY” in the alert condition name and have an event rule in PD that looks for it in the summary and sets the alert severity accordingly.

This reduces the need for multiple PD services.
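
For anyone wanting to replicate this, here is a minimal sketch of the matching that such a PagerDuty event rule performs. This is illustrative Python, not actual PagerDuty configuration; the `SEVERITY:<level>` token format and the function name are hypothetical examples of the naming convention, and the severity values are the ones accepted by the PagerDuty Events API v2.

```python
# Illustrative only - this mimics the matching a PagerDuty event rule performs;
# it is NOT PagerDuty configuration. The "SEVERITY:<level>" token format and
# the function name are hypothetical examples of the naming convention.
import re

# Severity values accepted by the PagerDuty Events API v2.
PD_SEVERITIES = {"critical", "error", "warning", "info"}

def severity_from_summary(summary: str, default: str = "critical") -> str:
    """Extract a severity token embedded via the New Relic condition name."""
    match = re.search(r"SEVERITY:(\w+)", summary, re.IGNORECASE)
    if match and match.group(1).lower() in PD_SEVERITIES:
        return match.group(1).lower()
    return default

# A condition named e.g. "Checkout latency high [SEVERITY:warning]" produces a
# summary containing the token, and the rule sets severity to "warning".
print(severity_from_summary("Checkout latency high [SEVERITY:warning]"))
```

In PagerDuty itself this lives in an event rule of the form “if the summary contains X, set severity to Y”, so no extra relay service is required; the sketch just makes the mapping explicit.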

1 Like

Hello, any status update on this being implemented? Sometime soon maybe?

1 Like

No update to share right now but I’ll get your plus one added here :+1:

I’ll add my vote to this.
Breached Warning levels need to be visible to be of any use in taking preventative action before escalation to Critical occurs.
We make use of alerts on Key Transactions, which a casual browse of NR doesn’t highlight. A Slack notification, potentially to a different channel than the Critical alert, would be just what we need to take preemptive action.

The suggested workaround of setting up a shadow Critical alert that you treat as a Warning won’t work for us: we are an organisation with outsourced Level 1 support, which makes it difficult to have nuance in interpretation.

1 Like

I would like to add that this would be a huge help in getting out in front of potential issues. Being able to have a single alert condition send a variety of notifications for both “warning” and “critical” would let us be proactive instead of reactive on an issue.
I would love to see something similar to AWS CloudWatch alarms: if an alert condition is in the “warning” threshold, it could send a message to Slack or an email to whoever is on call, and once it hits “critical” (or is triggered the way it is today), it could route through PagerDuty for phone calls and such.
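
For what it’s worth, here is a minimal sketch (in Python, under assumed names) of the kind of routing being asked for: a single receiver that sends Warnings to a low-urgency Slack channel and Criticals to PagerDuty. The incident field names, the Slack webhook URL and the PagerDuty routing key are placeholders, not a documented New Relic payload.

```python
# A minimal sketch of the routing described above, assuming New Relic could
# POST both Warning and Critical incidents to a single webhook receiver.
# The incident field names ("severity", "condition_name"), the Slack webhook
# URL and the PagerDuty routing key are placeholders.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
PAGERDUTY_ROUTING_KEY = "<events-api-v2-integration-key>"           # placeholder

def _post_json(url: str, body: dict) -> None:
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def route_incident(incident: dict) -> None:
    """Warning -> Slack message for the on-call channel; Critical -> PagerDuty page."""
    summary = f"{incident['condition_name']} is {incident['severity']}"
    if incident["severity"].lower() == "warning":
        _post_json(SLACK_WEBHOOK_URL, {"text": f":warning: {summary}"})
    else:
        _post_json("https://events.pagerduty.com/v2/enqueue", {
            "routing_key": PAGERDUTY_ROUTING_KEY,
            "event_action": "trigger",
            "payload": {"summary": summary, "source": "new-relic", "severity": "critical"},
        })

# Example:
# route_incident({"condition_name": "High error rate", "severity": "warning"})
```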

2 Likes

Yes, the workaround with PagerDuty would be viable, but it’s pretty cumbersome and creates different kinds of noise in New Relic. With the workaround described (e.g. a separate alert policy with the “critical” thresholds set to the desired “warning” thresholds), there’s no differentiation between critical and warning in the display of APM charts or health maps, and the workaround presupposes PagerDuty. It seems exceedingly odd that Alerts let you differentiate between Critical and Warning when you create them, yet never actually use the Warning. Whether an alert policy is Critical or Warning would, I believe, only be specified in the name of the alert policy. So, a strong upvote for this feature - New Relic’s handling of notifications is oddly incomplete without it.

2 Likes

Thanks for your input, folks!
We’ll get all of this sent over to the Alerts teams.

2 Likes

Hey team, any update on this one?
We have been forced to set up multiple Critical alerts, as Warning ones don’t get sent anywhere.

:frowning:
Please let us know if there is any progress or feature development happening on this one.

Hey @soumya.jk - I have no update for you currently - I’ll get your +1 added now though.

1 Like

Just to add my desired use case to this: I want it for a Statuspage integration. At the moment the email alert from New Relic is processed as a “Major outage” because it’s Critical, but an alert sent off the Warning threshold should be able to trigger their “Degraded” status instead.
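
A rough sketch of that mapping, assuming Warning notifications could be delivered to a small handler: Warning maps to Statuspage’s “degraded_performance” component status and Critical to “major_outage”, via the Statuspage component REST endpoint as I understand it. The page ID, component ID and API key are placeholders, and the severity-to-status mapping is just the idea described above.

```python
# Sketch of the severity-to-status mapping described above; not an official
# New Relic or Statuspage integration. Page ID, component ID and API key are
# placeholders.
import json
import urllib.request

STATUSPAGE_API_KEY = "<statuspage-api-key>"  # placeholder
PAGE_ID = "<page-id>"                        # placeholder
COMPONENT_ID = "<component-id>"              # placeholder

SEVERITY_TO_STATUS = {
    "warning": "degraded_performance",
    "critical": "major_outage",
}

def update_component(severity: str) -> None:
    """Set the Statuspage component status based on the incoming alert severity."""
    status = SEVERITY_TO_STATUS.get(severity.lower(), "operational")
    req = urllib.request.Request(
        f"https://api.statuspage.io/v1/pages/{PAGE_ID}/components/{COMPONENT_ID}",
        data=json.dumps({"component": {"status": status}}).encode(),
        headers={
            "Authorization": f"OAuth {STATUSPAGE_API_KEY}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    urllib.request.urlopen(req)

# Example: a Warning-level alert would mark the component as degraded.
# update_component("warning")
```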

Great use case @WebScale - that makes total sense. I’ll get it added here internally.

While I +1 this wholeheartedly, I also believe the ‘severity’ that is sent OUT with the notification should be customizable, without always having to create a custom webhook. So: more than one level that will notify, and the ability to define the severity for each. Everything being ‘Critical’ is like the boy who cried wolf; the Warning condition triggering a notification would help alleviate that pain point.

1 Like

Thanks for the different take on this feature idea @tim.davis - I’ll get that filed internally.