
Error 413 - MS SQL 2014 - plugin as a service

plugins

#1

Hi,

I have installed the plugin as a service on my MSSQL Server instance, but I cannot view the results on the Plugins page.
This is the log file:

“…
2015-07-03 11:21:40.2994|INFO|QueryContext|Gathering Component: sqlXXX; Metric: WaitState/AvgWaitSeconds/PAGEIOLATCH_EX; Value: 0,0090
2015-07-03 11:21:40.2994|INFO|QueryContext|Gathering Component: sqlXXX; Metric: WaitState/AvgResourceSeconds/PAGEIOLATCH_EX; Value: 0,0090
2015-07-03 11:21:40.2994|INFO|QueryContext|Gathering Component: sqlXXX; Metric: WaitState/AvgSignalSeconds/PAGEIOLATCH_EX; Value: 0,0000
2015-07-03 11:21:40.2994|INFO|Context|Preparing to send metrics to service
2015-07-03 11:21:45.6901|ERROR|Context|Unexpected response from the New Relic service. StatusCode: RequestEntityTooLarge (Request Entity Too Large), BodyContents:
2015-07-03 11:21:45.6901|ERROR|MetricCollector|Error sending data to the collector
2015-07-03 11:21:45.6901|INFO|MetricCollector|Recorded 27079 metrics
2015-07-03 11:22:45.7060|INFO|MetricCollector|Connecting with Data Source=sqlXXX\mssqlserver1;Initial Catalog=master;Integrated Security=True
2015-07-03 11:22:54.9091|INFO|QueryContext|Gathering Component: sqlXXX; Metric: Memory/PageLifeNuma/Node_001; Value: 7220
2015-07-03 11:22:54.9091|INFO|QueryContext|Gathering Component: sqlXXX; Metric: Memory/PageLifeNuma/Node_000; Value: 1828
2015-07-03 11:22:54.9091|INFO|QueryContext|Gathering Component: sqlXXX; Metric: FileIO/BytesRead/_MasterNL; Value: 32951818240
2015-07-03 11:22:54.9091|INFO|QueryContext|Gathering Component: sqlXXX; Metric: FileIO/BytesWritten/_MasterNL; Value: 2966702
…”

Can you help me?


#2

@domain - The "entity too large" error means the posted content is hitting the limit of what our collector will accept for a payload. This typically occurs when too many hosts/databases are being monitored by the plugin. The specific number you can monitor with the MSSQL plugin varies depending on how many named databases you have per host and how many hosts you have configured in each individual plugin instance. You will need to adjust your config file to reduce the data payload below the 1MB limit.


#3

Thank you @sdelight !
In your experience, how many databases can I monitor with the New Relic plugin?


#4

@domain - The exact number will vary widely depending on your specific configuration. For each SQL Server instance monitored by the plugin, we gather a fixed set of instance-wide metrics, plus about a dozen additional metrics per database. Generally, the total of instances plus databases will be roughly 500 per running copy of the plugin. That number can be much lower if you have lots of really long database names, or a large number of instances each running just a handful of databases, since far more metrics are gathered per instance than per database.


#5

I have the same issue.

How do I customize/configure the payload size?

Thanks!


#6

Hi @ripe-aws-west-eu-2,

Basically, you will need to restrict which databases you fetch metrics from to avoid hitting the limits.
By default we collect from all of them, which, depending on the number of DBs you have, can generate a payload over 1MB.

To restrict this, use the include/exclude sections in the plugin.json configuration file of your plugin.
Have a look at our configuration docs here:
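For illustration only, an exclude filter in plugin.json might look roughly like this. The key names, paths, and database names below are my assumptions, not taken from the thread; the configuration docs have the authoritative schema:

```json
{
  "agents": [
    {
      "name": "sqlXXX",
      "connectionString": "Data Source=sqlXXX\\mssqlserver1;Initial Catalog=master;Integrated Security=True",
      "includeSystemDatabases": false,
      "excludes": ["ScratchDb", "StagingDb"]
    }
  ]
}
```

Trimming the list of monitored databases this way directly shrinks the metric payload the plugin posts to the collector.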

Hope this helps!


#7

Thanks Carlos !

I’ve excluded all but one db, and still the same error message.

I’ve even excluded all of them, and still get the above error message.

2020-02-04 13:46:40.4950|ERROR|MetricCollector|Error sending data to the collector
2020-02-04 13:46:40.4950|INFO|MetricCollector|Recorded 86 metrics

Do you know if there’s a way to see what the data size is, and what’s actually bothering the SQL plugin?

Thanks for your response!


#8

Hi @ripe-aws-west-eu-2,

Preface: The payload size limit for APM agents is 2MB. There’s also a limit of 10k metrics. I believe those same limits apply to the collector for plugins. At this point, however, I think that may be irrelevant?

Before I get into the possible issues here, I’d like to point out that the last work done on this plugin was three years ago. More importantly, the only change implemented was a modification to the readme file. The last actual code update was almost five years ago, and that was simply to add the Azure MSSQL database as an available option (the connection strings are significantly different).

One of the requirements for the plugin is .NET 3.5 must be installed. I don’t mean .NET 3.5 or higher. I mean .NET 3.5. The plugin was developed on this version of the framework. There is legacy code in that version of the framework that will prevent the plugin from working properly if it isn’t available.

Next, are you still seeing the following in the log file?

|ERROR|Context|Unexpected response from the New Relic service. StatusCode: RequestEntityTooLarge (Request Entity Too Large), BodyContents:

If you are, the plugin is doing something it shouldn’t oughta. You didn’t give us anything past the 86 metrics, so I cannot tell for sure. 2MB worth of data is roughly 2,000,000 characters, and I can easily see the payload exceeding that with 27,079 metrics. Alternatively, the collector could be rejecting the payload because it exceeded the 10k metric limit; for APM agents that limit is enforced by the agent rather than the collector, so an over-limit payload from the plugin may have been unexpected, hence the entire payload being rejected.

Prior to getting into possible connectivity issues, I do want to address the maximum instance limit for the plugin. The main difference here is between the theoretical and the practical. “Theoretically”, with enough memory and CPU, there is no such thing as an instance limit for the plugin. After nearly six years of working with this plugin (a lot), I can tell you from experience that the practical limit is somewhere around 20, and I would not recommend more than 10 unless you intend to dedicate an application server to running just this plugin. If that is the case, you can probably get away with as many as 50, but you’ll want at least 32 cores and 128GB of RAM.

I’m going to move on to connectivity, but if you have not already done so, turn debug logging on to look at this:

You’ll need to restart the service after setting this. Debug output will sometimes reveal the underlying cause of a connection problem where the info-level logs will not. Once it’s turned on, you may find exactly what the problem is.
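As a sketch of what that setting might look like: the newrelic.json mentioned later in this post carries the license key, log level, and proxy settings, so raising the level to debug would look something like the following. The key names here are from memory and should be treated as assumptions; check your actual file:

```json
{
  "license_key": "YOUR_LICENSE_KEY",
  "log_level": "debug"
}
```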

Another issue might have to do with the protocols. Remember, this plugin was built on .NET 3.5. The newest security protocol at that time was TLS 1.0. There is no provision in that version of the framework for TLS 1.1 or 1.2. If the host is restricted on this (most servers only allow for TLS 1.2 these days), the plugin will run into an encryption algorithm mismatch, and the connection will get rejected. The following document is specific to the .NET agent, but it applies in this case as well:

https://docs.newrelic.com/docs/agents/net-agent/troubleshooting/no-data-appears-after-disabling-tls-10
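One common workaround, which is not from the thread but comes from Microsoft's TLS guidance for the .NET Framework, is to let .NET 2.0/3.5 applications use the operating system's default TLS versions via the SystemDefaultTlsVersions registry value. On Windows 7 / Server 2008 R2 this also requires Microsoft's KB3154518 update. A sketch, to be run from an elevated command prompt:

```shell
REM Let .NET 2.0/3.5 apps use the OS default TLS versions (incl. TLS 1.2 where the OS supports it).
REM Requires KB3154518 on Windows 7 / Server 2008 R2. Restart the plugin service afterwards.
reg add "HKLM\SOFTWARE\Microsoft\.NETFramework\v2.0.50727" /v SystemDefaultTlsVersions /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v2.0.50727" /v SystemDefaultTlsVersions /t REG_DWORD /d 1 /f
```

The second command covers 32-bit processes on a 64-bit host; if the plugin runs as a 64-bit process only, the first key is the one that matters.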

Keep in mind, the plugin does not have to live on the MSSQL server. As long as the instances are set to accept external TCP/IP connections and the plugin is installed on a host in the same network segment, it can be on a different server.

Should the plugin continue to report that traffic is rejected due to the size of the payload, the next step would be to uninstall the plugin, then install it again. Hold on to the plugin.json file if you do this so you don’t have to remake the configuration file. The newrelic.json is just for the license key, log levels, and proxy settings; if you have proxy settings, you can save that file too, or just copy them somewhere to be added back in. Assuming you’re using the NPI installer, run npi -h from an administrator command prompt to output the list of options, then remove the service prior to removing the plugin. If you don’t remove the service first, you’ll get an error running the remove option.

The uninstallation/installation process only takes a few minutes. If that doesn’t fix the large payload issue, it will be time to open a support ticket and do some live troubleshooting.

I would like to encourage you to check out the MSSQL On-Host Integration:

https://docs.newrelic.com/docs/integrations/host-integrations/host-integrations-list/mssql-monitoring-integration

This does require an Infrastructure Pro subscription, but there is a lot more detail and flexibility in using this option. Probably the greatest advantage is the metrics are stored in an Insights datastore and can be queried using NRQL. There is a version of the MSSQL plugin that does this, but it is not authored by New Relic, and would therefore require support by the author or through the community (I think there is a license fee as well).

There is one last issue I want to bring up with regard to this plugin. The following log file is used as a buffer to populate the agent log:

com.newrelic.platform.microsoft.sqlserver-wrap.out.log

Due to a bug in the plugin, that log will grow and grow and never stop. I’ve seen it close to a terabyte in size before. As it is likely no further development will be done on this plugin, it is very doubtful this will change. My recommendation is to create a daily/weekly/monthly job in the Windows Task Scheduler, and have it run a batch file that stops the plugin, deletes that specific file, and starts it back up again. This should take less than two seconds and no metric data loss will occur if it is done in this manner.