Breakdown Table. Nested custom segments


I’m confused by the “Breakdown Table” we’re seeing when monitoring our non-web Ruby application (here: )
We have instrumented our code with a transaction and, within it, a few different custom segments. Our transaction block is called “Datapipeline::RunProcessor/ingest_one_record”, so I select this at the top left (the high-level selection), which reveals graphs and a breakdown table of the timings within it.

I don’t understand why “Datapipeline::RunProcessor/ingest_one_record” then appears within the breakdown (as “Other transaction” in the leftmost column). Or, if it does belong in there, how can it account for less than 100% of the time?

Then I’ve got two custom segments (trace_execution_scoped blocks in the code) called “handle_updated_record” and “handle_created_record”, which sit at a high-level decision point in the code. Each transaction processes either one or the other, so I’m expecting those timings to add up to almost 100%.

And then, for example, “Company/Aggregate_relationship_objects” is one of my lower-level custom segments. I’m interested in what percentage of time is spent in that, including time spent in database queries within it.

But I’m seeing lots of these database segments with names like “MySQL Placeholder find”. These could be useful for capturing otherwise unidentified execution timings, but if my Aggregate_relationship_objects method is querying the Placeholders table, will that time be included within “Company/Aggregate_relationship_objects” (useful), or “MySQL Placeholder find” (not useful), or both?

I guess it can’t be both, because I notice the whole breakdown table adds up to 100%. This means it’s not accounting for nested timings in the way I’m expecting. When I drill down into an individual sampled transaction, the nesting comes out correctly and all makes sense: I see “Datapipeline::RunProcessor/ingest_one_record” taking 100%; within that, “handle_updated_record” taking 99%; within that, “Company/Aggregate_relationship_objects” sometimes taking ~45%; and within that, various database calls happening.
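A table that sums to exactly 100% would be consistent with each row reporting only *exclusive* time — time spent directly in that segment, minus time spent in its child segments. A small plain-Ruby sketch (segment names mirror the ones above; all durations are made up for illustration) shows how the nested trace percentages and a flat 100% table can coexist:

```ruby
# Hypothetical inclusive durations (seconds) for one transaction,
# nested as in the sampled transaction trace described above.
inclusive = {
  'ingest_one_record'                      => 10.0, # the whole transaction
  'handle_updated_record'                  => 9.9,  # inside ingest_one_record
  'Company/Aggregate_relationship_objects' => 4.5,  # inside handle_updated_record
  'MySQL Placeholder find'                 => 1.5,  # inside Aggregate_relationship_objects
}

# Direct parent of each segment.
parent = {
  'handle_updated_record'                  => 'ingest_one_record',
  'Company/Aggregate_relationship_objects' => 'handle_updated_record',
  'MySQL Placeholder find'                 => 'Company/Aggregate_relationship_objects',
}

# Exclusive time = inclusive time minus the inclusive time of direct children.
children = Hash.new { |h, k| h[k] = [] }
parent.each { |child, par| children[par] << child }

exclusive = inclusive.map do |name, time|
  [name, time - children[name].sum { |c| inclusive[c] }]
end.to_h

exclusive.each { |name, t| printf("%-42s %4.1fs (%4.1f%%)\n", name, t, 100 * t / 10.0) }
puts format('total: %.1fs', exclusive.values.sum) # exclusive times sum to the full 10.0s
```

Under that accounting, “ingest_one_record” would contribute only its own 0.1 s to the table (perhaps as “Other transaction”), which would also explain it showing far less than 100%.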

…but I’m not sure the breakdown table is properly showing me how much time “Company/Aggregate_relationship_objects” is taking.


Hi @harry.wood!

It looks like you have opened a support ticket with us on the same issue. We are going to pick up troubleshooting on that platform going forward.

We encourage folks to circle back to their post here after the ticket closes, to share the solution and what was learned with the rest of the community.

Keep your eyes out for a message coming your way from us.