Kafka integration

Hi, I am having some issues with my Kafka integration. Only partial data/metrics are coming through.

Problem

I am unable to see any consumer/producer metrics on the default Kafka integration dashboard under Infrastructure > Third-party services. For consumers, I got the names from Kafka using kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
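For reference, this is how I'd turn the group names from that command into the JSON list that the CONSUMERS setting expects. A small sketch only: the two group names are hard-coded here to match my config below, whereas on a live cluster they would be piped in from kafka-consumer-groups.sh:

```shell
# Group names as returned (one per line) by:
#   kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
# Hard-coded here so the sketch runs without a broker.
groups="logstash
datastream-consumer-group"

# Build the JSON array of {"name": ...} objects that CONSUMERS expects.
consumers_json=$(printf '%s\n' "$groups" \
  | awk 'BEGIN { printf "[" }
         NR > 1 { printf ", " }
         { printf "{\"name\": \"%s\"}", $0 }
         END { printf "]" }')

echo "$consumers_json"
# → [{"name": "logstash"}, {"name": "datastream-consumer-group"}]
```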

I will attach a screenshot of what I can see on the NR dashboard.

SELECT * FROM KafkaConsumerSample returns results, but something was off about them: I didn't expect any entries from hosts that were not running the consumer apps/pods. The same went for KafkaProducerSample.

SELECT * FROM KafkaOffsetSample returns 0 events.
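To narrow down which entities those unexpected samples are coming from, I've been trying NRQL along these lines (the facet attribute names like hostname/entityName are my assumption about what the samples carry):

```sql
-- Which hosts/entities report consumer samples?
SELECT count(*) FROM KafkaConsumerSample FACET hostname, entityName SINCE 1 hour ago

-- Confirm whether any offset data arrives at all
SELECT count(*) FROM KafkaOffsetSample SINCE 1 day ago
```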

Configs

Infrastructure: AKS 1.7
Kafka: Strimzi Kafka, with Kafka version 2.5
NR: newrelic/infrastructure-k8s:1.26.0

I was unsure what the producer name should be in the Kafka config: it's just a pod running on k8s, and I was unable to get a list of producers from Kafka.
nri-integration ConfigMap:

kafka-config.yml: |
    ---
    discovery:
      command:
        exec: /var/db/newrelic-infra/nri-discovery-kubernetes --port=10250 --tls
        match:
          label.app: "kafka"
    integrations:
      - name: nri-kafka
        env:
          CLUSTER_NAME: dev-1
          AUTODISCOVER_STRATEGY: bootstrap
          preferred_listener: SSL
          BOOTSTRAP_BROKER_HOST: kafka-bootstrap.kafka.svc.cluster.local
          BOOTSTRAP_BROKER_KAFKA_PORT: 9092
          BOOTSTRAP_BROKER_KAFKA_PROTOCOL: PLAINTEXT 
          BOOTSTRAP_BROKER_JMX_PORT: 9999
          BOOTSTRAP_BROKER_JMX_USER: 
          BOOTSTRAP_BROKER_JMX_PASSWORD: 
          CONSUMERS: '[{"name": "logstash"}, {"name": "datastream-consumer-group"}]'
          PRODUCERS: '[{"name": "logstash"}]'
          LOCAL_ONLY_COLLECTION: false
          COLLECT_BROKER_TOPIC_DATA: true
          TOPIC_MODE: all
          COLLECT_TOPIC_SIZE: false
          METRICS: 1
        labels:
          env: dev
          role: kafka
          cluster_name: dev-1-aks-1

Logs

I see some errors in the logs, but I'm unsure whether there is an actual issue, since the next health check passes.

time="2020-10-08T01:43:12Z" level=info msg="integration file modified or deleted. Stopping running processes, if any" component=integrations.Manager file=/etc/newrelic-infra/integrations.d/elasticsearch-config.yml
time="2020-10-08T01:43:12Z" level=info msg="integration file modified or deleted. Stopping running processes, if any" component=integrations.Manager file=/etc/newrelic-infra/integrations.d/rabbitmq-config.yml
time="2020-10-08T01:43:12Z" level=info msg="integration file modified or deleted. Stopping running processes, if any" component=integrations.Manager file=/etc/newrelic-infra/integrations.d/rabbitmq-config.yml
time="2020-10-08T01:43:12Z" level=info msg="integration file modified or deleted. Stopping running processes, if any" component=integrations.Manager file=/etc/newrelic-infra/integrations.d/kafka-config.yml
time="2020-10-08T01:43:12Z" level=info msg="integration file modified or deleted. Stopping running processes, if any" component=integrations.Manager file=/etc/newrelic-infra/integrations.d/kafka-config.yml
time="2020-10-08T01:43:12Z" level=warning msg="integration exited with error state" cluster_name=dev-1-aks-1 component=integrations.runner.Group env=dev error="context canceled" integration_name=nri-rabbitmq role=rabbitmq stderr="(no standard error output)"
time="2020-10-08T01:43:12Z" level=warning msg="integration exited with error state" cluster_name=dev-1-aks-1 component=integrations.runner.Group env=dev error="exec: not started" integration_name=nri-rabbitmq role=rabbitmq stderr="(no standard error output)"
time="2020-10-08T01:43:12Z" level=warning msg="integration exited with error state" cluster_name=dev-1-aks-1 component=integrations.runner.Group env=dev error="context canceled" integration_name=nri-kafka role=kafka stderr="(no standard error output)"
time="2020-10-08T01:43:12Z" level=warning msg="integration exited with error state" cluster_name=dev-1-aks-1 component=integrations.runner.Group env=dev error="exec: not started" integration_name=nri-kafka role=kafka stderr="(no standard error output)"
time="2020-10-08T01:43:14Z" level=info msg="Integration health check finished with success" cluster_name=dev-1-aks-1 component=integrations.runner.Group env=dev integration_name=nri-rabbitmq role=rabbitmq
time="2020-10-08T01:43:18Z" level=info msg="Integration health check finished with success" cluster_name=dev-1-aks-1 component=integrations.runner.Group env=dev integration_name=nri-kafka role=kafka

Can you please advise what I can do next to get all of the information? Or what configs am I missing?
Much appreciated!
Thanks,
Rashmi

Mainly, I am interested in getting the consumer_lag information.
I set up a separate config for the consumer:

nri-kafka-consumer.yaml: |
    discovery:
      command:
        exec: /var/db/newrelic-infra/nri-discovery-kubernetes --port=10250 --tls
        match:
          label.app.kubernetes.io/name: kafka
    integrations:
      - name: nri-kafka
        env:
          CONSUMER_OFFSET: 1
          CLUSTER_NAME: dev-1-aks-1
          CONSUMERS: '[{"name": "logstash", "host": "${discovery.ip}"}]'
          ZOOKEEPER_HOSTS: '[{"host": "localhost", "port": 2181}]'
          CONSUMER_GROUP_REGEX: '.*'
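For comparison, here is what I think the consumer-offset config would look like if it reused the bootstrap autodiscovery settings from my first ConfigMap instead of pointing at ZooKeeper. This is only a sketch — every value is copied from my configs above, and I haven't confirmed in the nri-kafka docs whether CONSUMER_OFFSET collection has to run as a separate instance from metrics collection:

```yaml
nri-kafka-consumer.yaml: |
    ---
    discovery:
      command:
        exec: /var/db/newrelic-infra/nri-discovery-kubernetes --port=10250 --tls
        match:
          label.app: "kafka"
    integrations:
      - name: nri-kafka
        env:
          CLUSTER_NAME: dev-1
          AUTODISCOVER_STRATEGY: bootstrap
          BOOTSTRAP_BROKER_HOST: kafka-bootstrap.kafka.svc.cluster.local
          BOOTSTRAP_BROKER_KAFKA_PORT: 9092
          BOOTSTRAP_BROKER_KAFKA_PROTOCOL: PLAINTEXT
          CONSUMER_OFFSET: 1
          CONSUMER_GROUP_REGEX: '.*'
```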

My consumers and producers are not written in Java (they are Go or Node.js), so I'm unsure whether the JMX settings would apply to them. How does NR get the consumer lag metrics for non-JMX apps?
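My understanding (which I'd like confirmed) is that consumer lag doesn't need JMX at all: it can be computed broker-side from the offsets the group commits to Kafka, i.e. LOG-END-OFFSET minus CURRENT-OFFSET per partition, which is what kafka-consumer-groups.sh --describe --group logstash shows. Here is the arithmetic I'd expect, run against simulated --describe output so it works without a broker:

```shell
# Simulated columns from kafka-consumer-groups.sh --describe:
#   TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET
# (topic name "events" is made up for the example)
describe_output="events 0 1500 1520
events 1 900 905"

# Lag per partition = log-end offset - committed offset; sum over partitions.
total_lag=$(printf '%s\n' "$describe_output" \
  | awk '{ lag += $4 - $3 } END { print lag }')

echo "total lag for group: $total_lag"
# → total lag for group: 25
```

If that's right, lag should be language-agnostic, since it only depends on offsets committed to the broker.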

Any help will be much appreciated!

Does anyone have any suggestions?
I would really appreciate it!