Relic Solution: Synthetics Script Version Control with Terraform and Github Actions

CAVEAT: This is very much a first-iteration, hacky solution; there are areas for optimisation that you are welcome to build on.


Inspired by a topic I saw here in the Explorers Hub asking for a solution for version-controlled Synthetics scripts, I had a thought that GitHub Actions and Terraform might be able to help.

Of course, GitHub is the first thing I think of when someone mentions version control…

I have only recently figured out how to use Terraform to create Synthetics monitors, so that part was doable. But I have never used GitHub Actions, so there's a learning curve here, for me at least. Let's dive right in.


Set up

  1. Create a new repo in GitHub (I made mine a private repo, since there will be some API keys visible in there).
  2. Get some base Terraform files uploaded to this repo, along with a script.tpl file containing your Synthetics script. See the Terraform config below:

# Configure the New Relic provider
provider "newrelic" {
  api_key       = ""
  admin_api_key = ""
  account_id    = ""
  region        = "" # EU or US
}

resource "newrelic_synthetics_monitor" "tf_scripted" {
  name      = "My Github Actions Created Script"
  type      = "SCRIPT_BROWSER"
  frequency = 1
  status    = "ENABLED"
  locations = ["AWS_EU_WEST_1"]
}

data "template_file" "BrowserScript" {
  template = templatefile("${path.module}/script.tpl", {uri = ""})
}

resource "newrelic_synthetics_monitor_script" "BrowserScript" {
  monitor_id = newrelic_synthetics_monitor.tf_scripted.id
  text       = data.template_file.BrowserScript.rendered
}

terraform {
  required_providers {
    newrelic = {
      source = "newrelic/newrelic"
    }
  }
  required_version = ">= 0.13"
}
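
The terraform.tfstate file later in this post references a Scripted_Monitor_ID output, so you'll likely also want an output block along these lines in your config (a sketch; the output name just needs to match whatever your state file uses):

```hcl
# Expose the created monitor's ID so it is easy to copy into the
# monitor_id / output placeholders in the hand-built terraform.tfstate
output "Scripted_Monitor_ID" {
  value = newrelic_synthetics_monitor.tf_scripted.id
}
```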


  1. Now flip over to the Actions tab in the repo. On the set-up page, you'll see Terraform listed, as seen below:

Click through to set that up. NOTE: You'll need your Terraform Cloud API token here. Get that token set up as a secret attached to your repository.

You’ll see the Terraform API token referenced in the yml config file here :arrow_heading_up:

(here’s a copy of that config, you can just paste that in)

name: 'Terraform'

on:
  push:
    branches:
    - master
  pull_request:

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest

    # Use the Bash shell regardless whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
    # Checkout the repository to the GitHub Actions runner
    - name: Checkout
      uses: actions/checkout@v2

    # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v1.2.0
      with:
        cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

    # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
    - name: Terraform Init
      run: terraform init

    # Generates an execution plan for Terraform
    - name: Terraform Plan
      run: terraform plan

    # On push to master, build or change infrastructure according to Terraform configuration files
    # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information.
    - name: Terraform Apply
      if: github.ref == 'refs/heads/master' && github.event_name == 'push'
      run: terraform apply -auto-approve

The other arrow in the screenshot above shows where to commit that config to your repo.

That’s pretty much it… this will work for you to create new monitors by merging a pull request in Github.

Demo run

Here’s a quick demo of that:

NOTE: The API Keys used in this video have since been rotated. Please do not publicly share your API Keys.

The hack

But… there’s a problem: every time this runs, it expects to create a new monitor.
This is where this solution gets rather hacky!

Because the workflow has no record of previous runs, every run of the Terraform workflow tries to create the monitor again, so it will consistently return an error:

Error: 400 response returned: Invalid name specified: 'My Terraform Created Scripted Monitor'; a monitor with that name already exists.

How do we work around that? There are a couple of options. The most recommended would likely be to save the state remotely to an S3 bucket, which the Terraform config can reference on every run. That way Terraform knows the monitor already exists and will update it instead of trying to recreate it, which is what we want.

If this is the route you want to go down, here’s some docs that may help:
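
As a rough sketch, a remote S3 backend looks like this (the bucket name, key, and region below are placeholders; the bucket has to exist already, and the workflow runner needs AWS credentials to reach it):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"     # placeholder: your own, pre-existing bucket
    key    = "synthetics/terraform.tfstate"  # placeholder path within the bucket
    region = "eu-west-1"                     # placeholder region
  }
}
```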

For me, I want this solution to be as simple as possible, and having multiple parts in multiple places isn't really what I'm going for. So my solution to this is more hacky than it needs to be.

If you use Terraform locally, you'll see an auto-generated file with the extension .tfstate.
This is how local runs of Terraform reference the previous state of the resources they have interacted with.

So I tried to simply add a .tfstate file to my repo. Surprisingly, that worked!

I did need to make some changes to the tfstate file, but that’s fine, it still worked.

Here’s a copy of a terraform.tfstate file. The <INSERT X HERE> placeholders in this file mark what you should change for this to work for you.

  "version": 4,
  "terraform_version": "0.13.2",
  "serial": 26,
  "outputs": {
    "Scripted_Monitor_ID": {
      "type": "string"
  "resources": [
      "mode": "data",
      "type": "template_file",
      "name": "BrowserScript",
      "provider": "provider[\"\"]",
      "instances": [
          "schema_version": 0,
          "attributes": {
            "filename": null,
            "id": "83ef9fb07fabbda6694f0e59f6a264926d6ce11d319ed64288ef5724d5567fd9",
            "rendered": "",
            "template": "",
            "vars": null
      "mode": "managed",
      "type": "newrelic_synthetics_monitor",
      "name": "tf_scripted",
      "provider": "provider[\"\"]",
      "instances": [
          "schema_version": 0,
          "attributes": {
            "bypass_head_request": false,
            "frequency": 1,
            "locations": [
            "sla_threshold": 7,
            "status": "ENABLED",
            "treat_redirect_as_failure": false,
            "type": "SCRIPT_BROWSER",
            "uri": "",
            "validation_string": "",
            "verify_ssl": false
          "private": "bnVsbA=="
      "mode": "managed",
      "type": "newrelic_synthetics_monitor_script",
      "name": "BrowserScript",
      "provider": "provider[\"\"]",
      "instances": [
          "schema_version": 0,
          "attributes": {
            "monitor_id": "<INSERT SYNTHETICS MONITOR ID HERE>",
            "text": ""
          "private": "bnVsbA==",
          "dependencies": [

That’s it. Now every time you merge a newly updated script, you can have it auto-ship to New Relic Synthetics.

Potential enhancements

This is somewhat of a bodge - so there are a lot of things here that could be done to make this a cleaner solution.

The most obvious one is that the terraform.tfstate file should not be hacked together like this; it will fail if there is more than one Synthetics monitor in play in your file. My recommendation would be that, rather than hand-crafting a terraform.tfstate file, you either load your state into S3 as described already, or run Terraform locally to create your monitor and then upload the terraform.tfstate file that is auto-generated, since that will have the right IDs preloaded.
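
That local-run approach sketches out roughly like this (assuming your .tf files are already in your local clone of the repo and the provider keys are filled in):

```shell
# Run Terraform locally once so it generates a real state file
terraform init
terraform apply

# Commit the auto-generated state, with the correct monitor IDs preloaded,
# so the GitHub Actions workflow can reference it on future runs
git add terraform.tfstate
git commit -m "Add generated Terraform state"
git push origin master
```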

If you do take this project and run with it, let me know below if you get your state into S3, or if you can come up with any additional enhancements to this :slight_smile:

See my code here: