Issue seeing Kafka metrics

Hello,

I followed the Kafka integration documentation. Our topic and consumer group data is stored in Kafka itself, so we used bootstrap servers for autodiscovery.

Below is my configuration file:


```yaml
integration_name: com.newrelic.kafka

instances:
  # This instance gives an example of autodiscovery of brokers with a bootstrap broker
  - name: kafka-metrics-bootstrap-discovery
    command: metrics
    arguments:
      # A cluster name is required to uniquely identify this collection result in Insights
      cluster_name: "YX-KAFKA-CLUSTER-PROD"
      autodiscover_strategy: "bootstrap"
      # Bootstrap broker arguments. These configure a connection to a single broker.
      # The rest of the brokers in the cluster will be discovered using that connection.
      bootstrap_broker_host: "localhost"
      bootstrap_broker_kafka_port: 9092
      bootstrap_broker_kafka_protocol: PLAINTEXT # Currently supports PLAINTEXT and SSL
      bootstrap_broker_jmx_port: 9999
      # JMX user and password default to default_jmx_user and default_jmx_password if unset
      # bootstrap_broker_jmx_user: admin
      # bootstrap_broker_jmx_password: password
      # Only collect metrics from the configured bootstrap broker. The integration will not
      # attempt to collect metrics for any other broker, nor will it collect cluster-level
      # metrics like topic metrics. This is useful for deployments such as Kubernetes,
      # where a single integration instance is desired per broker.
      local_only_collection: false
      # See above for more information on topic collection
      collect_broker_topic_data: true
      topic_mode: "all"
      collect_topic_size: false

  # This instance gives an example of collecting inventory with the integration
  - name: kafka-inventory
    command: inventory
    arguments:
      cluster_name: "YX-KAFKA-CLUSTER-PROD"
      zookeeper_hosts: '[{zookeeper host: port}]'
      # zookeeper_auth_secret: "username:password"
      # Below are the fields used to fine-tune/toggle topic inventory collection.
      # In order to collect topics, the "topic_mode" field must be set to "all", "list", or "regex".
      topic_mode: 'all'

  # Example configuration for collecting consumer offsets for the cluster
  - name: kafka-consumer-offsets
    command: consumer_offset
    arguments:
      cluster_name: "YX-KAFKA-CLUSTER-PROD"
      autodiscover_strategy: "bootstrap"
      bootstrap_broker_host: "localhost"
      bootstrap_broker_kafka_port: 9092
      bootstrap_broker_kafka_protocol: PLAINTEXT
      # A regex pattern that matches the consumer groups to collect metrics from
      consumer_group_regex: '.*'
```

But I don't see the Kafka consumer lag or consumer groups in the Metrics section.
What might be the issue? I'm not seeing any errors in the New Relic logs either.
Please guide.

Hi @sathappan, it looks like the configuration file lost its formatting when it was copied and pasted.

Could you use a validator such as yamllint to verify the config file is correctly formatted? Also make sure it matches the example file here - nri-kafka/kafka-config.yml.sample at master · newrelic/nri-kafka · GitHub
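One gotcha worth checking alongside yamllint: copy/pasting through a browser often converts ASCII quotes into "smart" quotes, and yamllint will not flag those, because curly quotes are ordinary characters in YAML and simply become part of the value (e.g. the literal string “bootstrap”, quotes included). A minimal sketch of a check for that (the helper name and sample text are illustrative):

```python
# Scan pasted config text for non-ASCII "smart" quotes, which silently
# corrupt YAML values while still parsing as valid YAML.
SMART_QUOTES = "\u201c\u201d\u2018\u2019"  # the characters “ ” ‘ ’

def find_smart_quotes(text):
    """Return (line_number, line) pairs for lines containing smart quotes."""
    return [(i, line)
            for i, line in enumerate(text.splitlines(), 1)
            if any(q in line for q in SMART_QUOTES)]

sample = 'autodiscover_strategy: \u201cbootstrap\u201d\nbootstrap_broker_kafka_port: 9092'
for lineno, line in find_smart_quotes(sample):
    print(f"line {lineno}: {line}")  # only the first line is flagged
```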

You mentioned there are no errors in the logs. Can you confirm if you’ve looked through verbose logs? You’ll want to check the log file for lines that include "level=error".

Lastly, take a look at our troubleshooting docs here -

Hope this helps! :slight_smile:

Thanks for looking into it.
The YAML formatting is OK; the file does not have any formatting issues (screenshot attached).

I have enabled verbose logs and am attaching the file.
I see some errors but could not work out their cause. Could you please point me to the root cause? There are a lot of topics that should be shown in the UI, but aren't.

Please see the errors below:

```
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=941
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=963
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=964
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=1066
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=1121
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=1253
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=1359
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=8011
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=12698
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=15858
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=19911
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=20235
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=22334
time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=31191
time="2021-06-30T10:23:30Z" level=debug msg="Unable to retrieve NFS stats." component=NFSSampler error="no supported NFS mounts found"
```

time=“2021-06-30T10:23:43Z” level=debug msg=“Integration command wrote to stderr.” instance=kafka-metrics-bootstrap-discovery integration=com.newrelic.kafka prefix=integration/com.newrelic.kafka stderr="[INFO] Running core collection\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata’: invalid return value for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata, error: invalid character ‘E’ looking for beginning of value\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch’: reading nrjmx stdout: read |0: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets’: writing nrjmx stdin: write |1: broken pipe\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata’: writing nrjmx stdin: write |1: broken pipe\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce’: writing nrjmx stdin: write |1: broken pipe\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=ReplicaManager,name=’: writing nrjmx stdin: write |1: broken pipe\n[ERR] Unable to execute JMX query for MBean 'kafka.controller:type=ControllerStats,name=’: EOF\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=’: nrjmx error: exit status 1 [proc-state: exit status 1]\n[ERR] Unable to execute JMX query for MBean 'kafka.log:type=LogFlushStats,name=’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=DelayedFetchMetrics,name=ExpiresPerSec,fetcherType=’: writing nrjmx stdin: write |1: file already 
closed\n[ERR] Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=RequestsPerSec,request=,’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=temp_product_images’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=yx_shopping_impressions_manual’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=null_ce_image’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=new_es_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=null_size_mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=ed_manual_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=ed_ext_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=index_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=indexer_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 
‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=editorialist_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=brand_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=null_indexer_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=in_mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=connect-statuses’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=indexer_mp_image’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=yx_shopping_impressions’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=__confluent.support.metrics’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=test’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=items_ce’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=eyx-shopping-impression’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 
‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=in_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=backfill_images’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=eyx_staging_mp’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=size_mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=mp_inactive’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=items’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=dev_impressions’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=null_indexer_mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=mp_ltr_embeddings’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=stylist_yx_shopping_impressions’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=_schemas’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 
‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=reporting_impression’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=inactive_colors_mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=product_images’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=v’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=merchant_products’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=yx_auto_complete_index’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=base_products’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=image_embeddings’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=connect-configs’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=__consumer_offsets’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 
‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=indexer_mpv’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=manual_product_images’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=connect-offsets’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=reporting_gender_view_impressions’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=stg_ltr_events’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=item_ce’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=ce_image’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=_confluent-metrics’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata’: invalid return value for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata, error: invalid character ‘E’ looking for beginning of value\n[WARN] empty result for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch’: EOF\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets’: writing nrjmx 
stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata’: nrjmx error: exit status 1 [proc-state: exit status 1]\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 'kafka.server:type=ReplicaManager,name=’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.controller:type=ControllerStats,name=’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 'kafka.server:type=BrokerTopicMetrics,name=’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.log:type=LogFlushStats,name=’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 'kafka.server:type=DelayedFetchMetrics,name=ExpiresPerSec,fetcherType=’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=RequestsPerSec,request=,’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=temp_product_images’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean ‘kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=yx_shopping_impressions_manual’: writing nrjmx stdin: write |1: file already closed\n[ERR] Unable to execute JMX query for MBean 'kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=null_ce_im

Hey @sathappan , errors like these -

time="2021-06-30T10:23:30Z" level=debug msg="Skipping process." component="Metrics Process" error="process with zero rss" pid=941

indicate that those processes are being filtered out because they are not using any memory. You can disable that filter, if you like, with the disable_zero_mem_process_filter setting.
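For reference, that toggle goes in the infrastructure agent's own configuration file, not the integration's. A sketch (the path shown is the default Linux location; adjust for your install):

```yaml
# /etc/newrelic-infra/newrelic-infra.yml  (agent config, not kafka-config.yml)
disable_zero_mem_process_filter: true
```

Restart the newrelic-infra service after changing the agent config so the setting takes effect.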


The message -

time="2021-06-30T10:23:30Z" level=debug msg="Unable to retrieve NFS stats." component=NFSSampler error="no supported NFS mounts found"

is just a debug message looking for NFS mounts.


These are the Kafka-integration-related messages that you should be looking for -

time=“2021-06-30T10:23:43Z” level=debug msg=“Integration command wrote to stderr.” instance=kafka-metrics-bootstrap-discovery integration=com.newrelic.kafka prefix=integration/com.newrelic.kafka stderr="[INFO] Running core collection\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata’: invalid return value for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata, error: invalid character ‘E’ looking for beginning of value\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch’: reading nrjmx stdout: read |0: file already closed\...

As per the Compatibility and requirements docs, can you ensure JMX is enabled on all brokers, consumers, and producers?
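For context, a common way to enable remote JMX on a broker is via the environment before starting Kafka. A sketch, assuming a standard Kafka install (the port and hostname are examples, and authentication/SSL are disabled here only for testing):

```shell
# Read by Kafka's startup scripts (kafka-run-class.sh) before the broker starts.
export JMX_PORT=9999
# Hostname the JMX stub advertises; needed when clients connect from other hosts.
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=broker1.example.com"
```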

One way to test that data is being collected is to run a query manually through nrjmx to see if data comes back, for example (substitute your broker host and JMX port):

echo 'kafka.network:type=RequestMetrics,name=*,request=*' | nrjmx -host <broker-host> -port <jmx-port>


Hi @zahrasiddiqa,

I have enabled JMX on all Kafka nodes, and they are able to connect to each other over it. I don't see any errors related to that.

My question is: our consumers are Apache Storm topologies, and topologies can run on any Storm supervisor node; it is not fixed. Also, why do we need to set up consumer/producer JMX? Consumer lag and offset data should be available from Kafka itself.

I also see this in the logs:

```
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:eyx-shopping-impression:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:connect-offsets:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:base_products:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:items_ce:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:mp_ltr_embeddings:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:yx_shopping_impressions:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:stg_ltr_events:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:product_images:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:mp:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:size_mpv:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:null_size_mpv:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:v:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:new_es_mp:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:_schemas:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:merchant_products:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:connect-configs:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:ed_ext_mp:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
time="2021-07-13T13:22:14Z" level=debug msg="Sending events to metrics-ingest." component=MetricsIngestSender id="ka-topic:backfill_images:clustername=yx-kafka-cluster-prod" numEvents=1 postCount=4672 timestamps="[2021-07-13 13:22:13 +0000 UTC]"
```

But I don't see this data on the dashboard, in the Metrics or Events sections. I only see 3 consumer groups:

logstash
connect-eyx_shopping_impressions-s3-sink
connect-eyx_stylist_shopping_impressions-s3-sink

See here:
https://one.newrelic.com/launcher/infra.infra?platform[accountId]=3222072&platform[timeRange][duration]=1800000&pane=eyJuZXJkbGV0SWQiOiJpbmZyYS5zZXJ2aWNlcyIsInByb3ZpZGVyTmFtZSI6Im9uSG9zdEludGVncmF0aW9ucyJ9&overlay=eyJjb250ZXh0TmVyZGxldElkIjoibG9nZ2VyLmhvbWUiLCJuZXJkbGV0SWQiOiJkYXRhLWV4cGxvcmVyLmV4cGxvcmVyIiwid29ya3NwYWNlIjp7ImRhdGFUeXBlIjoibWV0cmljIiwiY2hhcnRUeXBlIjoiTElORSIsImFjY291bnRJZCI6MzIyMjA3MiwiYXR0cmlidXRlU2VhcmNoIjoiIiwicXVlcnkiOnsic2VsZWN0Ijp7ImF0dHJpYnV0ZSI6ImthZmthLmNvbnN1bWVyLmh3bSIsImFnZ3JlZ2F0b3IiOiJsYXRlc3QifSwiZmFjZXQiOiJrYWZrYS5jb25zdW1lckdyb3VwIiwiZmlsdGVycyI6W3siYXR0cmlidXRlIjoia2Fma2EuY29uc3VtZXJHcm91cCIsInZhbHVlIjoiY29ubmVjdC1leXhfc3R5bGlzdF9zaG9wcGluZ19pbXByZXNzaW9ucy1zMy1zaW5rIn1dLCJldmVudFR5cGUiOiJNZXRyaWMiLCJsaW1pdCI6IiJ9LCJ0aW1lUmFuZ2UiOnsiYmVnaW5UaW1lIjpudWxsLCJlbmRUaW1lIjoibm93IiwiZHVyYXRpb24iOjIxNjAwMDAwfX19&state=48254da5-ac46-4541-839e-b77e68e26831

Also only this topic is shown: stylist_yx_shopping_impressions

See here :
https://one.newrelic.com/launcher/infra.infra?platform[accountId]=3222072&platform[timeRange][duration]=1800000&pane=eyJuZXJkbGV0SWQiOiJpbmZyYS5zZXJ2aWNlcyIsInByb3ZpZGVyTmFtZSI6Im9uSG9zdEludGVncmF0aW9ucyJ9&overlay=eyJjb250ZXh0TmVyZGxldElkIjoibG9nZ2VyLmhvbWUiLCJuZXJkbGV0SWQiOiJkYXRhLWV4cGxvcmVyLmV4cGxvcmVyIiwid29ya3NwYWNlIjp7ImRhdGFUeXBlIjoibWV0cmljIiwiY2hhcnRUeXBlIjoiTElORSIsImFjY291bnRJZCI6MzIyMjA3MiwiYXR0cmlidXRlU2VhcmNoIjoia2Fma2EiLCJxdWVyeSI6eyJzZWxlY3QiOnsiYXR0cmlidXRlIjoia2Fma2EuY29uc3VtZXIubGFnIiwiYWdncmVnYXRvciI6ImxhdGVzdCJ9LCJmYWNldCI6ImthZmthLnRvcGljIiwiZmlsdGVycyI6W3siYXR0cmlidXRlIjoia2Fma2EuY29uc3VtZXJHcm91cCIsInZhbHVlIjoiY29ubmVjdC1leXhfc3R5bGlzdF9zaG9wcGluZ19pbXByZXNzaW9ucy1zMy1zaW5rIn0seyJhdHRyaWJ1dGUiOiJrYWZrYS50b3BpYyIsInZhbHVlIjoic3R5bGlzdF95eF9zaG9wcGluZ19pbXByZXNzaW9ucyJ9XSwiZXZlbnRUeXBlIjoiTWV0cmljIiwibGltaXQiOiIifSwidGltZVJhbmdlIjp7ImJlZ2luVGltZSI6bnVsbCwiZW5kVGltZSI6Im5vdyIsImR1cmF0aW9uIjoyMTYwMDAwMH19fQ==&state=33397099-8478-08ae-4d83-17fdd60abe64

Ideally we should see all lags per consumer group, topic, and partition, as below (expected data):

ConsumerGroup TOPIC partition lag
editorialistTopology eyx_staging_mp 0 29965
editorialistTopology eyx_staging_mp 1 28576
editorialistTopology eyx_staging_mp 2 28082
ImageEmbeddingsTopology product_images 0 37595
ImageEmbeddingsTopology product_images 1 40640
ImageEmbeddingsTopology product_images 2 36385
productImagesIndexerTopology_OpenDistro indexer_mp_image 0 35593
productImagesIndexerTopology_OpenDistro indexer_mp_image 1 33927
productImagesIndexerTopology_OpenDistro indexer_mp_image 2 36099
sizeNormalization in_mpv 0 115812
sizeNormalization in_mpv 1 97257
sizeNormalization in_mpv 2 116141
sizeNormalization mpv 0 368398
sizeNormalization mpv 1 382492
sizeNormalization mpv 2 335628
sizeNormalization size_mpv 0 2389876
sizeNormalization size_mpv 1 2380460

Am I looking at the dashboard correctly? Is there a different way to view the Kafka metrics for consumer group by topic and partition in the UI?
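(For reference, if the consumer_offset instance is reporting, nri-kafka documents its lag data as KafkaOffsetSample events, so an NRQL query along these lines in the query builder should show lag faceted by group, topic, and partition - a sketch, assuming the documented attribute names:)

```sql
SELECT latest(consumer.lag)
FROM KafkaOffsetSample
FACET consumerGroup, topic, partition
SINCE 30 minutes ago
```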

Also, is the error below causing any issues?

time=“2021-07-13T13:22:13Z” level=debug msg=“Integration command wrote to stderr.” instance=kafka-metrics-bootstrap-discovery integration=com.newrelic.kafka prefix=integration/com.newrelic.kafka stderr="[INFO] Running core collection\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata’: invalid return value for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata, error: invalid character ‘E’ looking for beginning of value\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch’: invalid return value for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch, error: invalid character ‘C’ looking for beginning of value\n[WARN] empty result for query: kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets’: EOF\n[ERR] Unable to execute JMX query for MBean ‘kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata’: writing nrjmx stdin: write |1: broken pipe\n[ERR] Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=TotalTimeMs,request

@zahrasiddiqa, I have enabled JMX ports on all brokers and am able to see topic metrics and consumer offsets. But for broker metrics I am seeing the above-mentioned error.

I have also verified the messages from the JMX tool and from this command (the wildcards were eaten by the forum's formatting): echo 'kafka.network:type=RequestMetrics,name=*,request=*' | nrjmx -host host -port port

```
Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Metadata': reading nrjmx stdout: read |0: file already closed
[ERR] Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Fetch': writing nrjmx stdin: write |1: broken pipe
[ERR] Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets': writing nrjmx stdin: write |1: broken pipe
[ERR] Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata': writing nrjmx stdin: write |1: broken pipe
[ERR] Unable to execute JMX query for MBean 'kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce': writing nrjmx stdin: write |1: broken pipe
```