Flow.sampling_interval

Hello everyone,

I’m currently facing an issue while collecting NetFlow data from my backbone routers (Cisco ASR 9902 and ASR 9906).
The flows are successfully received in ElastiFlow, and I can visualize them in Kibana — however, the traffic values are incorrect.
I’m only seeing KBytes or MBytes, while the actual traffic should be in GBytes.

I’ve been troubleshooting this for several days and noticed that it seems to be a common issue, but I haven’t found a proper fix yet.

Here’s my configuration:
sampler-map SAMPLER_DDOS_MONITOR
 random 1 out-of 1000
!

flow exporter-map EXP_DDOS_MONITOR
 version v9
  template data timeout 60
 !
 transport udp 9995
 source Loopback0
 destination <IP_ELASTICFLOW>
!

flow monitor-map DDOS_MONITOR
 record ipv4
 exporter EXP_DDOS_MONITOR
 cache timeout active 60
 cache timeout update 60
!
And here’s a snippet of my docker-compose.yml used for ElastiFlow (Elasticsearch, Logstash, Kibana stack):

    cluster.name: elastiflow
    bootstrap.memory_lock: 'true'
    network.host: 0.0.0.0
    http.port: 9200
    discovery.type: 'single-node'
    indices.query.bool.max_clause_count: 8192
    search.max_buckets: 250000
    action.destructive_requires_name: 'true'

elastiflow-kibana:
  image: docker.elastic.co/kibana/kibana:7.8.1
  container_name: elastiflow-kibana
  restart: 'no'
  depends_on:
    - elastiflow-elasticsearch
  network_mode: host
  environment:
    SERVER_HOST: 0.0.0.0
    SERVER_PORT: 5601
    SERVER_MAXPAYLOADBYTES: 8388608
    ELASTICSEARCH_HOSTS: "http://127.0.0.1:9200"
    ELASTICSEARCH_REQUESTTIMEOUT: 132000
    ELASTICSEARCH_SHARDTIMEOUT: 120000

    KIBANA_DEFAULTAPPID: "dashboard/653cf1e0-2fd2-11e7-99ed-49759aed30f5"
    KIBANA_AUTOCOMPLETETIMEOUT: 3000
    KIBANA_AUTOCOMPLETETERMINATEAFTER: 2500000

    LOGGING_DEST: stdout
    LOGGING_QUIET: 'false'

elastiflow-logstash:
  image: robcowart/elastiflow-logstash:4.0.1
  container_name: elastiflow-logstash
  restart: 'no'
  depends_on:
    - elastiflow-elasticsearch
  network_mode: host
  environment:
    LS_JAVA_OPTS: '-Xms4g -Xmx4g'

    # ElastiFlow global configuration
    ELASTIFLOW_AGENT_ID: elastiflow
    ELASTIFLOW_GEOIP_CACHE_SIZE: 16384
    ELASTIFLOW_GEOIP_LOOKUP: 'true'
    ELASTIFLOW_ASN_LOOKUP: 'true'
    ELASTIFLOW_OUI_LOOKUP: 'false'
    ELASTIFLOW_POPULATE_LOGS: 'true'
    ELASTIFLOW_KEEP_ORIG_DATA: 'true'
    ELASTIFLOW_DEFAULT_APPID_SRCTYPE: '__UNKNOWN'

    # Name resolution options
    ELASTIFLOW_RESOLVE_IP2HOST: 'false'
    ELASTIFLOW_NAMESERVER: '127.0.0.1'
    ELASTIFLOW_DNS_HIT_CACHE_SIZE: 25000
    ELASTIFLOW_DNS_HIT_CACHE_TTL: 900
    ELASTIFLOW_DNS_FAILED_CACHE_SIZE: 75000
    ELASTIFLOW_DNS_FAILED_CACHE_TTL: 3600

    ELASTIFLOW_ES_HOST: '127.0.0.1:9200'
    #ELASTIFLOW_ES_USER: 'elastic'
    #ELASTIFLOW_ES_PASSWD: 'changeme'

    ELASTIFLOW_NETFLOW_IPV4_PORT: 9995
    ELASTIFLOW_NETFLOW_UDP_WORKERS: 2
    ELASTIFLOW_NETFLOW_UDP_QUEUE_SIZE: 4096
    ELASTIFLOW_NETFLOW_UDP_RCV_BUFF: 33554432

    ELASTIFLOW_SFLOW_IPV4_PORT: 6343
    ELASTIFLOW_SFLOW_UDP_WORKERS: 2
    ELASTIFLOW_SFLOW_UDP_QUEUE_SIZE: 4096
    ELASTIFLOW_SFLOW_UDP_RCV_BUFF: 33554432

    ELASTIFLOW_IPFIX_UDP_IPV4_PORT: 2055
    ELASTIFLOW_IPFIX_UDP_WORKERS: 2
    ELASTIFLOW_IPFIX_UDP_QUEUE_SIZE: 4096
    ELASTIFLOW_IPFIX_UDP_RCV_BUFF: 33554432

If anyone has already encountered this issue — incorrect traffic volume display (KBytes/MBytes instead of GBytes) — I’d really appreciate your help or any guidance on how to fix it.

One more detail: in the flow records indexed by ElastiFlow, I see flow.sampling_interval = 0.
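
If I understand that field correctly, a value of 0 means the collector is not scaling the sampled counters back up. With random 1 out-of 1000 on the router, each exported byte stands for roughly 1000 bytes on the wire, so for example 1.5 MBytes shown in Kibana would really be about 1.5 GBytes (1.5 MB × 1000), which matches the KBytes/MBytes-instead-of-GBytes symptom exactly.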

Thank you in advance for your time and support!

Have you seen this information from our documents site?

Let us know if this helps.

Hello @dxturner

I’ve already tried this solution, but unfortunately it didn’t change anything — I’m still facing the same issue.

Thank you for your help,
I’ll be waiting for your feedback.

Best regards,

I’ll be honest. I’ve never seen or used:

elastiflow-logstash:
  image: robcowart/elastiflow-logstash:4.0.1
  container_name: elastiflow-logstash

I would recommend installing ElastiFlow NetObserv Flow with a Basic license.

@dxturner

Thank you for your response.

However, the server is already in production, so I can’t make such a change right away. I’d prefer to identify the root cause of the issue before considering a full image replacement.

Thank you for your understanding.

Even with the image you sent me, it doesn't work. Same issue.

I've tried almost everything, and it's no better.

I think the fundamental issue is that the device is not sending the sampling rate, or is sending an invalid sample rate. The current NetObserv Flow collector defaults to 1 if the sample rate is not sent. Perhaps this is a better question for the device vendor. As noted in the linked article …
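
For reference, on IOS XR platforms like the ASR 9900s the 1:1000 rate normally only reaches the collector if the exporter also sends the sampler option records and the sampler-map is actually applied on the monitored interfaces. I don't have your full running config, so this is only a sketch of the pieces I would double-check (the interface name and timeout values are placeholders):

flow exporter-map EXP_DDOS_MONITOR
 version v9
  options sampler-table timeout 60
  options interface-table timeout 60
  template data timeout 60
 !
!
interface HundredGigE0/0/0/0
 flow ipv4 monitor DDOS_MONITOR sampler SAMPLER_DDOS_MONITOR ingress
!

Without the sampler-table option data, a v9 export carries only the raw sampled counters and the collector has no way to recover the 1:1000 factor.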

But again, I don’t know anything about the older logstash based collector. We would need to know the device flow configuration and get a packet capture of the flow data coming in to the collector to analyze further.
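
If you can, something along these lines on the ElastiFlow host would give us a capture to look at (assuming the flows still arrive on UDP/9995 as in your exporter-map):

# capture ~1000 flow packets on the collector port for offline analysis
tcpdump -i any -nn -c 1000 -w netflow-9995.pcap udp port 9995

Opening the pcap in Wireshark (the CFLOW dissector) will show whether any sampler option records, and therefore the 1:1000 rate, are present in what the routers are actually sending.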