My website is successfully repelling attacks, but the errors keep triggering alerts

I run a web application that has very low traffic, except for some flurries at predictable times.

I’m getting several error alerts a day because vulnerability scanners are trying to find a way in and keep hitting various errors, such as 404s and CSRF failures. They’re not getting into my precious data, but they are triggering lots of error alerts, almost to the point where I have to ignore alerts or I get nothing else done.

I don’t want to disable checking for 404s in case it highlights a broken link, so how can I prevent my alerts from becoming useless due to security scanners? It’s not a known range of IPs or user agents, as they pretend to be all sorts of things and come from all sorts of locations.

Anyone got any ideas?

Hi, we’re happy to take a look. For more information, could you send a link to the alert condition that is opening the incidents?

There are a couple of things here.
For one, you mention that you currently alert on 404s because you are worried about bad links. There is an OOTB option to set up a synthetics monitor to test your links, which could probably take some of the worry away from those 404s.

For the errors themselves, instead of individually alerting on each one, you could switch to a baseline alert that fires when there is a significant deviation, along with a to-do item to occasionally review the previous days’ errors.
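As a rough sketch, a baseline condition could be driven by a simple NRQL count of error events (the alert UI supplies its own evaluation window, so no SINCE clause is needed; `TransactionError` is the usual APM event type, but check what your agent reports):

```sql
-- Error count per evaluation window. A baseline alert condition
-- learns this signal's normal shape and fires on a significant
-- deviation, rather than on a fixed threshold.
SELECT count(*) FROM TransactionError
```

That way a constant low-level drizzle of scanner errors becomes part of the learned baseline instead of a permanent open incident.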


Thanks for the thoughts.

I’m not alerting on individual 404s, but when errors go over 1.5% of requests. As my web application is generally low traffic except for certain times/events, it’s likely that for several hours at a time the only traffic is attempted vulnerability scans. The scans look at URLs that don’t exist, which creates 404s. As the only traffic is 404s, the error alert condition is triggered.

I’ve configured the webserver layer to reject a lot of the more common requests (e.g. WordPress admin panel URLs) so they never generate an application-level error, but the possible combinations are so numerous, and constantly changing, that keeping up with them all would be almost a full-time job.
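Something like the following would at least show which paths are being hit most, to prioritise the webserver-level blocks (assuming an APM agent reporting `Transaction` events with `request.uri` and `response.status` attributes; names vary between agent versions):

```sql
-- Top URLs returning 404 over the last day: candidates for
-- webserver-level blocking rules.
SELECT count(*) FROM Transaction
WHERE response.status = '404'
FACET request.uri
SINCE 1 day ago LIMIT 20
```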

I’ll have a look at the synthetics, but as it’s a web application I’ll have to be careful about which URLs get hit. And that still doesn’t deal with the issue of misconfigured POSTs triggering errors.

Hi there @jevans6 -

It sounds like what you need here is a NRQL query that captures when 404 errors become more than 1.5% of the requests. You could then potentially set up an alert when the value of the query is more than that threshold.

That may be possible. Writing NRQL queries is outside the scope of support but I am curious if any community members have developed queries like this.
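A sketch of that sort of query, again assuming `Transaction` events with `response.status` and `request.uri` attributes (the excluded paths here are purely illustrative examples of known scanner targets):

```sql
-- Percentage of requests that returned 404, discounting a couple
-- of example scanner paths so bot noise counts for less.
SELECT percentage(count(*), WHERE response.status = '404')
FROM Transaction
WHERE request.uri NOT LIKE '/wp-%'
SINCE 30 minutes ago
```

A NRQL alert condition on this value with a threshold of 1.5 would then only open incidents for the traffic you haven’t already written off.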

Thanks for the input.

I already have an alert that fires if errors are a high percentage of traffic; it was pre-built as a default when I installed New Relic on the application. Not sure why making a different one would help, unless I’m missing the point?

The application is very low traffic. It’s used only at certain times of day, and sometimes can go for days or weeks without a single legitimate login (e.g. summer holidays). Traffic is usually near zero, except for certain events that happen a few times a year.

That means the majority of traffic is bots causing errors, which makes the percentage of errors very high. There’s an almost permanent alert condition, so it’s become noise and useless.

I need it to ignore errors generated by bots, but as that’s an arms race it’s unlikely to be something my 2-person team can manage; it would be almost a full-time job trying to filter all the weird and wonderful combinations bots use.

I’ve disabled the alert condition and will just go back to guessing if things are running ok.

Hi @jevans6 did you manage to find a solution to this?

No, just left it with the alert disabled, and check manually every now and then. It’s an incredibly low traffic application so we’ve a rough idea when we’re seeing genuine errors vs bots/scanners. Not ideal, but I’ve other tasks to be getting on with!