Friday, August 17, 2018

Elasticsearch term vs query_string speed

I was curious to see which kind of query was faster for Elasticsearch: a term query or a query_string query. I ran 5 tests, doing a simple term query and a simple query_string query for the same values. My searches looked similar to the ones below, with different fields for each of the 5 searches.

curl -XGET -H "Content-Type: application/json" "http://localhost:9200/logstash-2018.08.16/_search?format=yaml" -d '{
  "query": {
    "term": { "src_geo.region": "California" }
  }
}'


curl -XGET -H "Content-Type: application/json" "http://localhost:9200/logstash-2018.08.16/_search?format=yaml" -d '{
  "query": {
    "query_string" : {
      "query" : "src_geo.region: California"
    }
  }
}'

TERM RESULTS in milliseconds
1. took: 3594
2. took: 2730
3. took: 10553
4. took: 4108
5. took: 1461

QUERY_STRING RESULTS in milliseconds
1. took: 5039
2. took: 5442
3. took: 11294
4. took: 5048
5. took: 1961
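The term query beat the query_string query in every one of the five trials. Averaging the "took" values above (a quick sketch, using the numbers from my runs):

```python
# Average the "took" times (in ms) from the five trials above.
term_ms = [3594, 2730, 10553, 4108, 1461]
query_string_ms = [5039, 5442, 11294, 5048, 1961]

term_avg = sum(term_ms) / len(term_ms)
qs_avg = sum(query_string_ms) / len(query_string_ms)

print(f"term average:         {term_avg:.1f} ms")   # 4489.2 ms
print(f"query_string average: {qs_avg:.1f} ms")     # 5756.8 ms
```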


Thursday, March 1, 2018

Logstash - RFC1918 and IP Version

Below is a Logstash filter I use to decorate network and system logs with new fields used for additional filtering.

One field I add is "rfc1918 = true or false (boolean)".
This makes it easy to sort internal private addresses from external addresses.

Another field I like to add is "ipv = 4 or 6" to differentiate between IP version 4 and IP version 6.

The Logstash filter example below assumes your IP addresses are in fields named "[src][ip]", "[dst][ip]", or "ip".

filter {
  if [dst][ip] {
    if [dst][ip] !~ /:/ {
      mutate {
        add_field => { "[dst][ipv]" => 4 }
      }
      cidr {
        address => [ "%{[dst][ip]}" ]
        network => [ "", "", "" ]
        add_field => {
          "[dst][rfc1918]" => true
        }
      }
    } else {
      mutate {
        add_field => { "[dst][ipv]" => 6 }
      }
    }
    if ![dst][rfc1918] {
      mutate {
        add_field => { "[dst][rfc1918]" => false }
      }
    }
  }
  if [src][ip] {
    if [src][ip] !~ /:/ {
      mutate {
        add_field => { "[src][ipv]" => 4 }
      }
      cidr {
        address => [ "%{[src][ip]}" ]
        network => [ "", "", "" ]
        add_field => {
          "[src][rfc1918]" => true
        }
      }
    } else {
      mutate {
        add_field => { "[src][ipv]" => 6 }
      }
    }
    if ![src][rfc1918] {
      mutate {
        add_field => { "[src][rfc1918]" => false }
      }
    }
  }
  if [ip] {
    if [ip] !~ /:/ {
      mutate {
        add_field => { "ipv" => 4 }
      }
      cidr {
        address => [ "%{ip}" ]
        network => [ "", "", "" ]
        add_field => {
          "rfc1918" => true
        }
      }
    } else {
      mutate {
        add_field => { "ipv" => 6 }
      }
    }
    if ![rfc1918] {
      mutate {
        add_field => { "rfc1918" => false }
      }
    }
  }
}

Thursday, January 18, 2018

Bro x509 and Logstash

Everyone knows that Bro is a great tool for monitoring network traffic.  But Logstash is a great tool for manipulating log files.

Below is a Logstash filter that will add some valuable fields to your x509 Bro log.
The field names are:

  • cert.expired
  • cert.lifespan.days
  • cert.lifespan.hours
  • cert.lifespan.seconds

# BRO x509
filter {
  if [doctype] == "x509" {
    mutate {
      remove_field => [ "host" ]
      rename => {
        "id" => "[bro][fuid]"
        "basic_constraints.ca" => "[basic_constraints][ca]"
        "certificate.curve" => "[cert][curve]"
        "certificate.exponent" => "[cert][exponent]"
        "certificate.issuer" => "[cert][issuer]"
        "certificate.key_alg" => "[cert][key][alg]"
        "certificate.key_length" => "[cert][key][length]"
        "certificate.key_type" => "[cert][key][type]"
        "certificate.not_valid_after" => "[cert][not_valid_after]"
        "certificate.not_valid_before" => "[cert][not_valid_before]"
        "certificate.serial" => "[cert][serial]"
        "certificate.sig_alg" => "[cert][sig_alg]"
        "certificate.subject" => "[cert][subject]"
        "certificate.version" => "[cert][version]"
        "san.dns" => "san_dns"
      }
    }
    date {
      match => [ "[cert][not_valid_after]", "UNIX" ]
      target => "[cert][date][not_valid_after]"
    }
    date {
      match => [ "[cert][not_valid_before]", "UNIX" ]
      target => "[cert][date][not_valid_before]"
    }
    ruby {
      code => "
        vafter = event.get('[cert][not_valid_after]');
        vbefore = event.get('[cert][not_valid_before]');
        seconds = (vafter - vbefore).ceil;
        hours = (seconds / 3600).ceil;
        days = (seconds / 86400).ceil;
        validcheck = event.get('[cert][date][not_valid_after]').to_f - event.get('@timestamp').to_f;
        if validcheck > 0
          expired = false
        else
          expired = true
        end
        event.set('[cert][expired]', expired);
        event.set('[cert][lifespan][seconds]', seconds);
        event.set('[cert][lifespan][hours]', hours);
        event.set('[cert][lifespan][days]', days);
      "
    }
  }
}
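The arithmetic in the ruby block can be sanity-checked standalone. Here is a sketch in Python (the epoch timestamps are made up for illustration):

```python
import math
import time

def cert_fields(not_valid_before, not_valid_after, now=None):
    """Mirror the lifespan/expiry math from the ruby filter above.

    Inputs are UNIX epoch seconds; 'now' defaults to the current time.
    """
    now = time.time() if now is None else now
    seconds = math.ceil(not_valid_after - not_valid_before)
    hours = math.ceil(seconds / 3600)
    days = math.ceil(seconds / 86400)   # 86400 seconds in a day
    expired = not (not_valid_after - now > 0)
    return {"seconds": seconds, "hours": hours, "days": days, "expired": expired}

# A hypothetical 90-day certificate checked halfway through its life:
print(cert_fields(0, 90 * 86400, now=45 * 86400))
# {'seconds': 7776000, 'hours': 2160, 'days': 90, 'expired': False}
```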

Below is what the output looks like in Kibana

Wednesday, January 17, 2018


I just released a neat utility that logs ARP scans to JSON, syslog, or a straight text file, mapping the MAC address, IP, hostname, and OUI data.

Saturday, September 5, 2015

BroCon2015 M.I.T.

My talk about logging Bro data into Elasticsearch at BroCon 2015 - Massachusetts Institute of Technology

Slides and Configs

Saturday, January 31, 2015

The online price is always cheaper than a phone call

Sunday, December 28, 2014

Geekempire's BadIP List

Using all of my ninja skillz, I recently automated a daily list of malicious IPs that attack the Geekempire hosts.  The list is spot on, and you can easily automate the feed into your firewall and detection tools.

Thursday, December 25, 2014

No more ThunderPoop

I have decided to ditch ThunderPoop and move the site to

Thursday, June 19, 2014

Puppet Elasticsearch Demonstration

This is a demonstration of how Puppet can build an Elasticsearch node in a jiffy. It's just a demo; no animals were hurt in the making of this video.

Tuesday, November 26, 2013

Bro Puppet Dependencies

My Bro puppet module has been updated to version 1.0.1.
I had a bug in the module dependencies.
Thanks Ryan for the fix.

version 1.0.1 has been uploaded to the forge

Sunday, November 24, 2013

Webmin Puppet Module

I released my first version of my webmin puppet module.
It should work on any debian or redhat based system.

Sunday, November 17, 2013

Plex Puppet Module

Version 1.0.0 of my Plex Puppet module has been uploaded to the forge.

It is compatible with Centos, Fedora, Redhat, Scientific and Ubuntu.

Bro NSM Puppet Module

Last night I published my Bro NSM Puppet module to the forge.

Bro is a network monitoring tool; it complements existing IDS technologies.

Saturday, November 16, 2013

hostint puppet fact updated to 2.0.2

I have made some additions to the puppet hostint fact.

I have added two more facts:

hostint_ipv4_cidr = host interface network in CIDR notation
hostint_ipv4_max = maximum number of allowed hosts on the network

Monday, November 11, 2013

TPS Report

I've finally uploaded one of my simple but useful modules to the Puppet Forge today.
I call it "TPS Report". It is a Puppet module that can create multiline text files without a template. I use it all the time to create simple files when I don't feel like creating an ERB template.

tps::report { '/etc/file.txt':
  flare => [
    'line one',
    'line two',
    'line three',
    'line four',
  ],
  owner => 'Lumbergh',
  group => 'Chotchkies',
  mode  => '0755',
}
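Content-wise, the module effectively joins each flare entry into its own line of the target file. A hypothetical sketch in Python (ignoring the owner/group/mode handling):

```python
def tps_report(path, flare):
    """Write each flare entry as its own line, like the tps::report example above."""
    with open(path, "w") as f:
        f.write("\n".join(flare) + "\n")

tps_report("/tmp/file.txt", ["line one", "line two", "line three", "line four"])
print(open("/tmp/file.txt").read())
```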

Sunday, September 8, 2013

hostint v2.0.0

Custom Fact for the host interface on a machine.
It finds the interface based on the gateway in the output of netstat -rn.
Works on FreeBSD, OSX, RedHat, CentOS, Scientific, Ubuntu, and probably others.
I've found it extremely helpful for building NSM servers and configuring iptables.
You can specify the variable <%= @hostint %> in your puppet templates.

Supports Interface, DNS, Duplex, Gateway, ipv4 address, and Speed.

<%= @hostint %> Host Interface - (Supports Kernel: FreeBSD, Darwin, Linux)
<%= @hostint_dns %> Primary DNS Server (Supports Kernel: FreeBSD, Darwin, Linux) 
<%= @hostint_duplex %>  Full (Supports Kernel: Linux)
<%= @hostint_gw %> (Supports Kernel: FreeBSD, Darwin, Linux)
<%= @hostint_ipv4 %> (Supports Kernel: FreeBSD, Darwin, Linux)
<%= @hostint_speed %>  1000Mb/s (Supports Kernel: Linux)

TODO: Need to add Windows facts

Wednesday, January 2, 2013

HD HomeRun Prime

My entire house is wired with Cat5e. I am not sure how well this setup would work over wireless.

First of all, this can save you $15 to $30 a month depending on how many TVs you have, but media PCs aren't free either, so you need to determine if the long-term cost is worth it.

Below is a picture of the retail box for the HD HomeRun Prime. I think I paid about $200 for my first one, then I got the second one on sale for $130.

I have Time Warner Cable. They are a bunch of bitches and encrypt everything but the broadcast channels, so I am forced to use Windows Media Center. If it wasn't for my crappy cable company, another cool option would have been MythTV. Below is a picture of my two HD HomeRun Primes and the two tuning adapters. This setup costs less than $3 a month per HD HomeRun. I have 6 network tuners to record and watch TV with.

This is a video of my setup: two media PCs and one Xbox enjoying the power of cable TV through the HD HomeRun Prime. Sorry for the crappy video, I took it on my iPhone 4S. But enjoy anyway.

Saturday, December 15, 2012


Need to back up the VMs on your ESXi machine? Today I decided to back them up to my USB drive. I tried to use rsync, but rsync isn't on ESXi. So then I thought about LFTP. LFTP has a mirror function and is a good alternative to rsync.

I run Ubuntu on my desktop, so all I had to do was install lftp.
sudo apt-get install lftp
You will also have to enable ssh access to your ESXi Server. You can do that in the vSphere Client.

Once you have ssh enabled, you are ready to back up your VMs. You will need to power down the VM you are backing up to make this work.
Replace "password" with your ESXi root password. You will also need to change the IP and the directory you are backing up.

In the following example: = my esxi server
/vmfs/volumes/4edb0892-122a7778-ffc0-001a4b524ffe/endor = path of my vm on esxi
/mnt/backup/vm = my USB drive mounted on my Ubuntu Desktop Machine.
lftp -u 'root,password' sftp:// -e "set ftp:ssl-protect-data true; set ftp:ssl-force true; mirror /vmfs/volumes/4edb0892-122a7778-ffc0-001a4b524ffe/endor /mnt/backup/vm"

Wednesday, December 12, 2012

PF_RING: tcpdump on a slave interface

I am running snort with a PF_RING enabled DAQ on CentOS using the default TNAPI drivers. I have my interfaces bonded together and snort is sniffing the bond interface. One goofy thing I noticed is when PF_RING is loaded, I cannot tcpdump on the slave interfaces in the bond. Dumping on the bond interface works fine. For example, if eth1 and eth2 are slave interfaces in bond0 and PF_RING is enabled, the following tcpdump command returns no results.
tcpdump -i eth1
If I disable PF_RING (rmmod pfring), the same tcpdump command works; which points to PF_RING as the cause of the behavior. This has been bugging the crap out of me for months.

Now here is the strange part. Recently I stumbled across something by accident because I fat-fingered the tcpdump command. If I add a colon to the end of the interface name, tcpdump works!
tcpdump -i eth1:
Even stranger, if I add a colon and any number it also works.
tcpdump -i eth1:7
I am perfectly fine with this behavior because it solves my original issue, but I am curious why. I have searched the hell out of Google and have found nothing. I am curious if anyone else has experienced the same behavior.

Sunday, December 9, 2012

Crash Plan - Headless Ubuntu Server - Unity

CrashPlan is a "cloud" backup service, and one of the few with a Linux client. It is designed around a GUI client. I run a headless Ubuntu 12.04 server, and my desktop is also Ubuntu 12.04 with the Unity desktop.

I am going to show you how to create a custom launcher that defaults to connecting to your Ubuntu Server from your Ubuntu Desktop Computer using ssh tunneling. I recommend that you use SSH keys. You can find a tutorial here:

First things first: you need to download the client on your Ubuntu server. I did it out of my home directory with a wget; you could also download it with your browser and upload it to your server.
panaman@deathstar:~$ wget -d
Now you need to untar it and run the install (I just answered everything with defaults).
panaman@deathstar:~$ tar -zxvf CrashPlan_3.4.1_Linux.tgz; sudo CrashPlan_install/
After it's done installing, you will notice it's running as a service.
panaman@deathstar:~/CrashPlan-install$ service crashplan status
CrashPlan Engine (pid 11960) is running.
Now you need to install it on your Ubuntu Desktop PC.
You can follow the exact same steps as above.

After you have the client installed on your Ubuntu Desktop you need to edit a config file for your client to listen on.
panaman@anakin:~$ sudo vi /usr/local/crashplan/conf/
Make the file look just like the one below:
#Fri Dec 09 09:50:22 CST 2005
#pollerPeriod=1000  # 1 second
#connectRetryDelay=10000  # 10 seconds

Now you need to make a little shell script on your Ubuntu desktop PC that will open your SSH tunnel and CrashPlan. You can call it whatever you want and put it wherever you want. I called mine deathstar_crashplan and placed it in my home directory, mainly because my Ubuntu server's name is "Deathstar".
panaman@anakin:~$ vi /home/panaman/deathstar_crashplan
Paste the following in your script and replace "deathstar" with the name or IP address of your Ubuntu server.
gnome-terminal --disable-factory --sm-client-disable -x ssh -L 4200:localhost:4243 panaman@deathstar &
Make your script executable.
panaman@anakin:~$ chmod 700 ~/deathstar_crashplan
Now you need to edit your CrashPlan icon. Just right-click it and click Properties.
You will need to change the command path to point to the shell script you created.
I also changed the name of mine to "CrashPlan Deathstar".

Now you are done. You should be able to launch your CrashPlan GUI connected to your Ubuntu server and start adding folders on your server to back up.