Instrumentation of Web vs. Console Projects

Hello, I am trying to work through custom instrumentation of two sample .Net Core 3.1 projects that represent common scenarios that we see in our service stack.

The first is a console app, with key methods tagged with the [Transaction] attribute. We are able to correctly see transaction trace information (including nested transactions) exactly how we want for this service type within our APM dashboard.
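For context, the console app's tagging looks roughly like this (a minimal sketch, not our actual code; the `[Transaction]` attribute comes from the NewRelic.Api.Agent NuGet package, and the method names are illustrative):

```csharp
using NewRelic.Api.Agent;

public static class Worker
{
    // Entry point method tagged so the agent starts a transaction here.
    [Transaction]
    public static void ProcessBatch()
    {
        LoadData(); // shows up nested under the ProcessBatch transaction
    }

    // Also tagged; since a transaction is already running, this appears
    // as a nested segment in the transaction trace.
    [Transaction]
    private static void LoadData()
    {
        // ... work ...
    }
}
```

With the agent attached to the console exe, both methods show up in the APM transaction traces as described above.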

The second, however, is a web API project: a self-hosted Kestrel service running as an exe (non-IIS). As expected, the ASP.NET Core auto-instrumentation kicks in, and we see top-level transaction data for each of our sample API requests. But none of the nested methods, which have also been tagged with [Transaction], show up. We see the generic “Middleware Pipeline” along with warnings that our custom code has not been instrumented. I’ve confirmed in the NR logs that the methods are observed (“Possibly instrumenting”) but have no corresponding “Instrumenting method” entry. For some reason, the [Transaction] attributes are not being honored.

I even went as far as to skip the web bootstrapping of the exe entirely and reduce it to a simple method call:

    public static void Main(string[] args) => DoWork();

    [Transaction]
    public static void DoWork() { /* ... */ }

In this scenario, we still don’t see the DoWork method show up in our transactions list. At this point, it is functionally equivalent to the Console project that is working.

The only difference I found was the project SDK type – the .csproj file for the console app uses “Microsoft.NET.Sdk” whereas the web project uses “Microsoft.NET.Sdk.Web”. As a test, I changed the web project to use “Microsoft.NET.Sdk” and manually added the framework reference to Microsoft.AspNetCore.App and bingo – instrumentation shows up.
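For reference, the workaround amounts to something like this in the web project’s .csproj (a sketch; target framework and property values are illustrative, but the FrameworkReference element is the standard way to pull in the shared ASP.NET Core framework when not using the Web SDK):

```xml
<!-- Was: <Project Sdk="Microsoft.NET.Sdk.Web"> -->
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <OutputType>Exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <!-- Manually reference the shared ASP.NET Core framework -->
    <FrameworkReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>

</Project>
```

With only this change, the custom [Transaction] instrumentation started appearing.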

So it seems the agent or the profiler hook is making some distinction that causes the [Transaction] attributes to be ignored ONLY when it’s a web project. I’ve scoured the documentation and run through the troubleshooting steps to no avail.

While we could take this route with our API projects to get them working, it seems like a bit of a hack. Given how close it seems to be, it feels like we’re missing something simple. I would very much appreciate any thoughts or suggestions on what we could try next or what we might be missing.

Thank you!

Hi @jchelko

I’m not a .Net expert, but wanted to check in to see if this topic on monitoring non-IIS web apps can help get you closer to your monitoring goals?

Hi @RyanVeitch, thank you for sending this along. I’d taken a look through this article previously, as it is very close to the issue we’re seeing. The puzzling part is that we are seeing some data show up correctly in APM, but only the auto-instrumentation.

So everything from the article works up until Custom Instrumentation. However, the custom attributes we’ve applied, even within the same exe/assembly, are not being used, and seemingly ONLY when the project type is an ASP.NET Core web app.

Hi @jchelko -

One thing to keep in mind is the agent doesn’t support the notion of nested transactions.

If the agent has already created a transaction, then it’s my understanding that the [Transaction] attribute should behave as if it were a [Trace] attribute, but it sounds like that is not happening? Could you try using the [Trace] attribute instead in the web context?

Just an FYI – the agent will automatically create transactions when it recognizes that the middleware pipeline is present, and it will recognize the invocation of MVC/WebApi controllers, naming the transactions after them.

In the non-web case the use of the [Transaction] attribute is needed to create transactions.
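To illustrate the suggestion (a hedged sketch; the controller and route are hypothetical, but CreateArtificialDelay mirrors the method name from the logs later in this thread): the auto-instrumented controller action would be the transaction, and a [Trace]-tagged method would appear as a segment within it:

```csharp
using Microsoft.AspNetCore.Mvc;
using NewRelic.Api.Agent;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    // The agent auto-creates and names the transaction for this action.
    [HttpGet]
    public IActionResult Get()
    {
        CreateArtificialDelay(); // expected to show as a trace segment
        return Ok();
    }

    // [Trace] marks a segment inside the already-running web transaction.
    [Trace]
    private void CreateArtificialDelay()
    {
        System.Threading.Thread.Sleep(50);
    }
}
```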

Thanks @Tjd. Yes, totally understood – we are actually expecting that the “nested” transactions will just appear as traces underneath the root. Ideally we’d see each top-level HTTP request as the root transaction, and then see our key methods that we’ve tagged show up in the callstack under the transaction details.

As long as the project type is not Web, this works perfectly. But again, as soon as we change it to a Web project, it fails. I am working on a sample project that I’ll upload to demonstrate this.

To further the investigation, I pulled down the profiler code and laced in some additional tracing. One difference I noticed (and a possible clue) is how the ReJIT flow works:

If it’s a console app, I see the following traces:
    [Trace] 2020-08-05 18:01:15 JITCompilationStarted. 140701563535872
    [Trace] 2020-08-05 18:01:15 Possibly instrumenting: (…CreateArtificialDelay()
    [Debug] 2020-08-05 18:01:15 Request reJIT: [140701563535872] (…CreateArtificialDelay()
    [Trace] 2020-08-05 18:01:15 JITCompilationStartedFinished. 140701563535872
    [Trace] 2020-08-05 18:01:15 GetReJITParameters called
    [Trace] 2020-08-05 18:01:15 ReJIT 140701563535872
    [Trace] 2020-08-05 18:01:15 Possibly instrumenting: (…CreateArtificialDelay()
    [Info ] 2020-08-05 18:01:15 Instrumenting method: (…CreateArtificialDelay()

With the simple change of flipping the project type to Web and no other code change, the flow differs:
    [Trace] 2020-08-05 18:06:21 JITCompilationStarted. 140701563535872
    [Trace] 2020-08-05 18:06:21 Possibly instrumenting: (…CreateArtificialDelay()
    [Debug] 2020-08-05 18:06:21 Request reJIT: [140701563535872] (…CreateArtificialDelay()
    [Trace] 2020-08-05 18:06:21 JITCompilationStartedFinished. 140701563535872

It’s as if the ReJIT callback never completes. Is this perhaps less of an issue with the NR profiler, and more a limitation of ReJIT in the context of an ASP.NET Core solution? I can’t imagine why that would be different, though.

An update: we’ve found the root cause. While researching the ReJIT route a bit more, I came across this GitHub issue report:

Looks like there’s a known issue with running a web API project in debug mode (in Visual Studio), which interferes with the ReJIT callback flow. This lines up precisely with the logs I shared above, and explains why the callback never fires and thus why our custom methods are never instrumented.

For my testing, I was indeed running the solutions locally under the debugger. Once I ran them in non-debug mode, telemetry started showing up exactly as we expected. Sharing this for posterity in case others encounter the same issue down the road.

Thank you all for your input. Thankfully this turned out to be a quasi-environmental concern!