Friday, August 17, 2018

Elasticsearch term vs query_string speed

I was curious to see which kind of query is faster in Elasticsearch: a term query or a query_string query. I ran five tests, executing a simple term query and an equivalent query_string query for the same values. My searches looked similar to the ones below, with a different field for each of the five searches.

TERM QUERY
curl -XGET -H "Content-Type: application/json" "http://localhost:9200/logstash-2018.08.16/_search?format=yaml" -d '{
  "query": {
    "term": { "src_geo.region": "California" }
  }
}'

QUERY_STRING QUERY

curl -XGET -H "Content-Type: application/json" "http://localhost:9200/logstash-2018.08.16/_search?format=yaml" -d '{
  "query": {
    "query_string" : {
      "query" : "src_geo.region: California"
    }
  }
}'

TERM RESULTS in milliseconds
1. took: 3594
2. took: 2730
3. took: 10553
4. took: 4108
5. took: 1461

QUERY_STRING RESULTS in milliseconds
1. took: 5039
2. took: 5442
3. took: 11294
4. took: 5048
5. took: 1961

WINNER = TERM QUERY
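
This result makes sense: query_string has to run its input through the Lucene query parser before it can execute, while term matches the exact value directly. If you want to reproduce the numbers, here is a minimal timing sketch (not the exact commands from the tests above); it assumes a local node, the same index, and that the jq utility is installed:

# Run the term query 5 times and print each took value (milliseconds).
# Assumptions: local node, logstash-2018.08.16 index, jq installed.
for i in 1 2 3 4 5; do
  curl -s -XGET -H "Content-Type: application/json" \
    "http://localhost:9200/logstash-2018.08.16/_search" -d '{
    "query": {
      "term": { "src_geo.region": "California" }
    }
  }' | jq .took
done

Swap in the query_string body from above to time the second variant the same way. Note that repeated runs will hit Elasticsearch's caches, so the first iteration is usually the slowest.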


Thursday, March 1, 2018

Logstash - RFC1918 and IP Version

Below is a Logstash filter I use to decorate network and system logs with new fields for additional filtering.

One field I add is "rfc1918" (boolean: true or false).
This makes it easy to separate internal private addresses from external addresses.

Another field I like to add is "ipv" (4 or 6) to differentiate between IP version 4 and IP version 6.

The Logstash filter example below assumes your IP addresses are in fields named "[src][ip]", "[dst][ip]", or "ip".

filter {
  if [dst][ip] {
    # No colon in the address means IPv4; IPv6 addresses always contain ":"
    if [dst][ip] !~ /:/ {
      mutate {
        add_field => { "[dst][ipv]" => 4 }
      }
      cidr {
        address => [ "%{[dst][ip]}" ]
        # The three RFC1918 private ranges; add_field only fires on a match
        network => [ "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
        add_field => { 
          "[dst][rfc1918]" => true
        }
      }
    } else {
      mutate {
        add_field => { "[dst][ipv]" => 6 }
      }
    }
    # If cidr did not match, the address is public
    if ![dst][rfc1918] {
      mutate {
        add_field => { "[dst][rfc1918]" => false }
      }
    }
  }
  # Same logic, applied to [src][ip]
  if [src][ip] {
    if [src][ip] !~ /:/ {
      mutate {
        add_field => { "[src][ipv]" => 4 }
      }
      cidr {
        address => [ "%{[src][ip]}" ]
        network => [ "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
        add_field => { 
          "[src][rfc1918]" => true
        }
      }
    } else {
      mutate {
        add_field => { "[src][ipv]" => 6 }
      }
    }
    if ![src][rfc1918] {
      mutate {
        add_field => { "[src][rfc1918]" => false }
      }
    }
  }
  # Same logic, applied to the top-level "ip" field
  if [ip] {
    if [ip] !~ /:/ {
      mutate {
        add_field => { "ipv" => 4 }
      }
      cidr {
        address => [ "%{ip}" ]
        network => [ "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
        add_field => { 
          "rfc1918" => true
        }
      }
    } else {
      mutate {
        add_field => { "ipv" => 6 }
      }
    }
    if ![rfc1918] {
      mutate {
        add_field => { "rfc1918" => false }
      }
    }
  }
}
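
Once these fields are indexed, filtering on them is a one-liner. Below is a minimal query sketch in the same spirit as the examples above; the index name is hypothetical, so point it at whatever index your decorated events land in (this one targets the [dst] fields):

curl -XGET -H "Content-Type: application/json" "http://localhost:9200/logstash-2018.03.01/_search?format=yaml" -d '{
  "query": {
    "query_string" : {
      "query" : "dst.rfc1918: true AND dst.ipv: 4"
    }
  }
}'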

Thursday, January 18, 2018

Bro x509 and Logstash

Everyone knows that Bro is a great tool for monitoring network traffic. Logstash, meanwhile, is a great tool for manipulating the resulting log files.

Below is a Logstash filter that will add some valuable fields to your x509 Bro log.
The field names are:

  • cert.expired
  • cert.date.not_valid_after
  • cert.date.not_valid_before
  • cert.lifespan.days
  • cert.lifespan.hours
  • cert.lifespan.seconds

# BRO x509
filter {
  if [doctype] == "x509" {
    mutate {
      remove_field => [ "host" ]
      rename => {
        "id" => "[bro][fuid]"
        "basic_constraints.ca" => "[basic_constraints][ca]"
        "certificate.curve" => "[cert][curve]"
        "certificate.exponent" => "[cert][exponent]"
        "certificate.issuer" => "[cert][issuer]"
        "certificate.key_alg" => "[cert][key][alg]"
        "certificate.key_length" => "[cert][key][length]"
        "certificate.key_type" => "[cert][key][type]"
        "certificate.not_valid_after" => "[cert][not_valid_after]"
        "certificate.not_valid_before" => "[cert][not_valid_before]"
        "certificate.serial" => "[cert][serial]"
        "certificate.sig_alg" => "[cert][sig_alg]"
        "certificate.subject" => "[cert][subject]"
        "certificate.version" => "[cert][version]"
        "san.dns" => "san_dns"
      }
    }
    date {
      match => [ "[cert][not_valid_after]", "UNIX" ]
      target => "[cert][date][not_valid_after]"
    }
    date {
      match => [ "[cert][not_valid_before]", "UNIX" ]
      target => "[cert][date][not_valid_before]"
    }
    ruby {
      code => "
        # Bro logs not_valid_before/after as UNIX epoch seconds, so the
        # difference between them is the certificate lifespan in seconds.
        vafter = event.get('[cert][not_valid_after]');
        vbefore = event.get('[cert][not_valid_before]');
        seconds = (vafter - vbefore).ceil;
        hours = (seconds / 3600.0).ceil;
        days = (seconds / 86400.0).ceil;
        # The cert is expired if not_valid_after is earlier than the event
        # timestamp; to_f converts both timestamps to epoch seconds so the
        # subtraction is safe.
        validcheck = event.get('[cert][date][not_valid_after]').to_f - event.get('@timestamp').to_f;
        if validcheck > 0
          expired = false
        else
          expired = true
        end
        event.set('[cert][expired]', expired);
        event.set('[cert][lifespan][seconds]', seconds);
        event.set('[cert][lifespan][hours]', hours);
        event.set('[cert][lifespan][days]', days);
      "
    }
  }
}
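
With these fields in place, hunting for expired certificates is a one-line search. Here is a minimal sketch with a hypothetical index name; adjust it to wherever your Bro x509 events are stored:

curl -XGET -H "Content-Type: application/json" "http://localhost:9200/logstash-2018.01.18/_search?format=yaml" -d '{
  "query": {
    "query_string" : {
      "query" : "cert.expired: true"
    }
  }
}'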


Below is what the output looks like in Kibana

Wednesday, January 17, 2018

Arpnamer

I just released a neat utility that logs ARP scans to JSON, syslog, or a plain text file, mapping MAC address, IP, hostname, and OUI data.

https://github.com/panaman/arpnamer