tl;dr - don’t worry about this.
The API isn’t a region-centric endpoint, so you can run your “monitor of monitors” from any region you choose. It won’t make any difference to your results or to the timely delivery of alerts.
The only question you might have is about the availability of a single geography within the New Relic infrastructure: what happens when an entire region is offline? Synthetic monitoring already carries this risk, in that the loss of a region would mean you wouldn’t know that your monitors aren’t running. The same risk extends to monitoring in layers, so perhaps you’d want to run your secondary monitors in another region as an extra layer of protection: you could then write code to detect when monitors aren’t running regularly.
However, you’ve now just pushed the problem to another layer: what if your “monitor of monitors” region fails and you lose monitoring? Maybe you need a third layer in a third region.
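The detection step in that secondary monitor could be sketched like this. A minimal sketch, assuming you’ve already pulled the latest check timestamp per monitor (e.g. from a `SELECT latest(timestamp) FROM SyntheticCheck FACET monitorName` query); the function names and threshold values are illustrative, not a drop-in implementation:

```javascript
// A monitor is "stale" if its last recorded execution is older than the
// allowed gap (all times in epoch seconds here, purely for illustration).
function isMonitorStale(lastCheckEpochSec, nowEpochSec, maxGapSec) {
  return (nowEpochSec - lastCheckEpochSec) > maxGapSec;
}

// Given a map of monitorName -> last check time, return the names of
// monitors that appear to have stopped running.
function findStaleMonitors(lastChecks, nowEpochSec, maxGapSec) {
  return Object.keys(lastChecks).filter(function (name) {
    return isMonitorStale(lastChecks[name], nowEpochSec, maxGapSec);
  });
}
```

In a scripted API monitor running in your secondary region, you’d fetch the latest timestamps, run them through `findStaleMonitors`, and throw an error if the list is non-empty so the check fails and raises an alert.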
This can all get out of hand if you’re not careful, and it all depends on how you run your operations. Do you already use Insights to graph key metrics that your team(s) see regularly? If so, I’d suggest that graphing the number of executed monitors across an account, along with the aggregate time spent running all monitors, is a more valuable sniff test. If either of these drops significantly, you’ll be able to spot more quickly that things are wrong. I’ve used exactly these graphs before to help highlight when private minions go offline (far more likely than losing a New Relic region).
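Those two sniff-test graphs can be built from the `SyntheticCheck` event type with NRQL along these lines (the attribute names are the standard ones, but verify them against your own account before relying on the queries):

```sql
-- Number of monitor executions across the account
SELECT count(*) FROM SyntheticCheck TIMESERIES 1 hour SINCE 1 day ago

-- Aggregate time spent running all monitors (duration is in milliseconds)
SELECT sum(duration) FROM SyntheticCheck TIMESERIES 1 hour SINCE 1 day ago
```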
One last thing to remember with this multi-layered monitoring approach: you’re potentially doubling your cost, since you pay per execution.
You might now want to think about smarter, bulkier monitors: performing multiple service checks within a single “product” monitor, ultimately firing multiple results into custom tables, one for each of the monitored services. With this, you can possibly even reduce your New Relic costs.
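The bulk-monitor idea could look something like this. A sketch under the assumption you’re in the scripted API monitor runtime (where `$http` and `$util.insights.recordEvent` are available); the event type name `ProductServiceCheck` and the result shape are my own invention for illustration:

```javascript
// Build the per-service result we'd record as a custom event. Each event
// lands in its own queryable table, e.g.:
//   SELECT percentage(count(*), WHERE success = true)
//   FROM ProductServiceCheck FACET service
function buildServiceResult(serviceName, ok, durationMs) {
  return { service: serviceName, success: ok, durationMs: durationMs };
}

// Decide the overall outcome of the single "product" execution: it fails
// if any one of the bundled service checks failed.
function overallSuccess(results) {
  return results.every(function (r) {
    return r.success;
  });
}
```

Inside the monitor you’d loop over your services, time each `$http` call, record each result with `$util.insights.recordEvent('ProductServiceCheck', result)`, and throw at the end if `overallSuccess` is false, so one paid execution still produces one alertable outcome plus a row per service.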
Lots of options here. It all depends on your requirements and how smart you want to get with your coding.