
New Relic use of Disruptor

kafka

#1

Hi,
I was just reading your very helpful blog series on event processing in Kafka:
blog.newrelic.com/engineering/apache-kafka-event-processing
One topic I found intriguing but a little confusing is the section on New Relic’s use of the LMAX Disruptor in combination with Kafka:
“We also use the disruptor handlers to update state concurrently. We blend together consumers from different topics via the disruptor to manipulate shared state in a thread-safe way”
It seems like a powerful combination; however, I was confused by the diagram. My understanding is that the preferred pattern for using the Disruptor is to follow the Single Writer Principle, but the diagram appears to show multiple streams being written to a single ring buffer.
It would be great if you could expand on the details of your usage in another blog post.


#2

@jelliott1 - Great question! I’ve sent this over to Amy, who wrote that blog post, to see if we can clarify it for you :smiley:


#3

Thanks for the great question!

As I’m sure you’ve read in the Disruptor docs, you’re correct that you can often get better performance from the single-producer configuration than from the multi-producer one. So if consuming and processing a single topic works for your service/architecture, I’d suggest that configuration. Alternatively, you can have a single consumer consume multiple topics so that they share the disruptor producer. (Be careful about consumer configuration here: if one topic is high-throughput and the other low-traffic, the consumer may wait to fetch data from the low-throughput topic when polling, which can cause lag on the high-throughput one.)
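To make the single-consumer, single-producer variant concrete, here’s a minimal sketch of what that wiring might look like. The topic names, group id, and event shape are placeholders for the example, not what we actually run:

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import com.lmax.disruptor.util.DaemonThreadFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SingleProducerExample {

    // Simple mutable event carried on the ring buffer.
    static class MessageEvent {
        String topic;
        String payload;
    }

    public static void main(String[] args) {
        // ProducerType.SINGLE is safe here because exactly one thread
        // (the polling thread below) ever publishes to the ring buffer.
        Disruptor<MessageEvent> disruptor = new Disruptor<>(
                MessageEvent::new,
                1024,                           // ring size, must be a power of 2
                DaemonThreadFactory.INSTANCE,
                ProducerType.SINGLE,
                new BlockingWaitStrategy());

        disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                System.out.printf("processed %s from %s%n", event.payload, event.topic));

        RingBuffer<MessageEvent> ringBuffer = disruptor.start();

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "disruptor-example");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // One consumer, two topics: both streams funnel through this
            // single thread, preserving the Single Writer Principle.
            consumer.subscribe(Arrays.asList("events-high", "events-low"));
            while (true) {
                for (ConsumerRecord<String, String> record :
                        consumer.poll(Duration.ofMillis(100))) {
                    ringBuffer.publishEvent((event, seq, rec) -> {
                        event.topic = rec.topic();
                        event.payload = rec.value();
                    }, record);
                }
            }
        }
    }
}
```

Because only the polling thread ever calls publishEvent, ProducerType.SINGLE is safe and avoids the extra sequencing overhead of the multi-producer configuration.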

We have still found good performance in some of our applications that use the multi-producer disruptor. Typically, where we use it, it has really simplified our concurrency control for actions that need synchronization, such as checkpointing. So you’ll see this pattern mostly in our most logically complex services, where the performance trade-off feels worthwhile.
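For illustration, here’s a rough sketch of that multi-producer shape, with plain threads standing in for per-topic Kafka consumers to keep it short. The topic names and the counting logic are made up for the example:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class MultiProducerExample {

    static class MessageEvent {
        String topic;
        String payload;
    }

    public static void main(String[] args) {
        // ProducerType.MULTI makes publication safe from several threads,
        // at the cost of coordination when claiming ring buffer slots.
        Disruptor<MessageEvent> disruptor = new Disruptor<>(
                MessageEvent::new, 1024, DaemonThreadFactory.INSTANCE,
                ProducerType.MULTI, new BlockingWaitStrategy());

        // Shared state is touched only on the handler thread: every event,
        // whichever thread published it, is processed here in sequence
        // order, so no explicit locking is needed.
        Map<String, Long> countsByTopic = new HashMap<>();
        disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                countsByTopic.merge(event.topic, 1L, Long::sum));

        RingBuffer<MessageEvent> ringBuffer = disruptor.start();

        // Stand-ins for per-topic Kafka consumer threads, each publishing
        // into the same ring buffer.
        for (String topic : Arrays.asList("events-a", "events-b")) {
            new Thread(() -> {
                for (int i = 0; i < 1_000; i++) {
                    ringBuffer.publishEvent((event, seq, t) -> {
                        event.topic = t;
                        event.payload = "message";
                    }, topic);
                }
            }).start();
        }
    }
}
```

The thread safety comes from the single handler thread seeing every event in sequence order: shared state touched only inside the handler needs no locks, even though several threads publish.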

We’ve actually been digging into more experiments lately on which workloads are a good fit for the disruptor. For example, we replaced it in a service whose workload had evolved to become heterogeneous, which caused short blockages of the disruptor ring and hindered throughput. So your suggestion for a blog post is a good one.