If you are interested in networking or information security, then you are likely familiar with the port scanning tool nmap. Nmap (Network Mapper) is a free and open source utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime.

If you're unaware, be warned that using nmap to port scan IP addresses or infrastructure that you don’t own is most likely illegal in your country. To be safe, scan only your own infrastructure, or get explicit permission first. This article assumes that you already know how to use nmap.

To ingest your nmap scans, you will have to output them in a format that can be ingested into Elasticsearch. Nmap has a command-line argument, -oX, that writes the results as an XML-formatted report. We are going to scan scanme.nmap.org, a host that the Nmap project provides specifically for testing; anyone is allowed to scan it.

$ nmap -T5 -Pn -A -oX report.xml scanme.nmap.org

This writes the results to report.xml in the current directory (-T5 selects the most aggressive timing template, -Pn skips host discovery, -A enables OS detection, version detection, script scanning, and traceroute, and -oX sets the XML output file). You can check your scan results with:

$ cat report.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE nmaprun>
<?xml-stylesheet href="file:///usr/bin/../share/nmap/nmap.xsl" type="text/xsl"?>
<!-- Nmap 7.01 scan initiated Mon Jul 18 16:26:06 2016 as: nmap -T5 -Pn -A -oX report.xml scanme.nmap.org -->
...

Now we need to ingest this report. For this tutorial, we assume you have created a directory named “nmap” that will hold multiple reports, since you will probably want to parse more than one. Create the directory to store your reports and config, and work from there.

$ mkdir nmap
$ cd nmap
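
If you ran the earlier scan from your home directory, move the report into this new directory so Logstash can pick it up (adjust the path if your report lives elsewhere):

$ mv ~/report.xml ~/nmap/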

I’m modifying one of the example configs from the logstash-codec-nmap plugin’s GitHub page. To use my config, you will also need to download the Elasticsearch template from that page, since the config file references it:

$ wget https://raw.githubusercontent.com/logstash-plugins/logstash-codec-nmap/master/examples/elasticsearch/elasticsearch_nmap_template.json
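
The template is plain JSON, so you can sanity-check that the download is intact before going further (this assumes a system Python is available):

$ python -m json.tool elasticsearch_nmap_template.json > /dev/null && echo OK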

Start Elasticsearch and then Kibana. Kibana runs in the foreground, so start it in its own screen window (or background it with &, as below).

$ sudo service elasticsearch start
$ cd ~/kibana-*
$ cd bin/
$ ./kibana &
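
Before continuing, it's worth confirming that both services are actually up. Assuming the default ports, Elasticsearch should answer on 9200 and Kibana on 5601:

$ curl -s localhost:9200
$ curl -s -o /dev/null -w '%{http_code}\n' localhost:5601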

Now back to the nmap directory, where you should have only two files:

$ cd ~/nmap
$ ls
elasticsearch_nmap_template.json  report.xml

Next, add your logstash config to the directory. I am putting it in a file named nmap-logstash.conf; the contents are shown below, after we install the plugin it depends on.

To use the logstash nmap codec plugin, you will need to install it. Navigate to your logstash directory. On my server, the directory is located at /opt/logstash.

$ cd /opt/logstash
$ sudo bin/logstash-plugin install logstash-codec-nmap

The plugin depends on the ruby-nmap gem, and you might need the Ruby development headers for it to build. If the plugin install fails, make sure these are installed first:

$ sudo apt-get install ruby-dev
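
With the headers in place, re-run the plugin install if it failed earlier. You can confirm the codec is registered by listing the installed plugins; the output should include logstash-codec-nmap:

$ /opt/logstash/bin/logstash-plugin list | grep nmap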

This is what you should have in your nmap-logstash.conf file:

input {
  file {
    # The file input needs an absolute path; replace <user> with your
    # username ($HOME is not expanded inside a logstash config)
    path => "/home/<user>/nmap/*.xml"
    start_position => "beginning"
    # Don't persist read offsets, so reports are re-read on every run
    sincedb_path => "/dev/null"
    codec => nmap
    tags => ["nmap"]
  }
}
filter {
  if "nmap" in [tags] {
    # Don't emit documents for 'down' hosts
    if [status][state] == "down" {
      drop {}
    }
    mutate {
      # Drop HTTP headers and logstash server hostname
      remove_field => ["headers", "hostname"]
    }
    if "nmap_traceroute_link" == [type] {
      geoip {
        source => "[to][address]"
        target => "[to][geoip]"
      }
      geoip {
        source => "[from][address]"
        target => "[from][geoip]"
      }
    }
    if [ipv4] {
      geoip {
        source => "ipv4"
        target => "geoip"
      }
    }
  }
}
output {
  if "nmap" in [tags] {
    elasticsearch {
      document_type => "nmap-reports"
      document_id => "%{[id]}"
      # Nmap data usually isn't too bad, so monthly rotation should be fine
      index => "nmap-logstash-%{+YYYY.MM}"
      # Relative path: run logstash from the ~/nmap directory
      template => "./elasticsearch_nmap_template.json"
      template_name => "logstash_nmap"
    }
    stdout {
      codec => json_lines
    }
  }
}

Now you can run logstash against your config. Run it from the ~/nmap directory so the relative template path in the output section resolves. Make sure you have the latest version of logstash, especially if you had trouble installing the logstash-codec-nmap plugin.

$ /opt/logstash/bin/logstash -f nmap-logstash.conf
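
If everything is wired up correctly, you should see JSON documents printed to stdout (that is the json_lines output) as the report is parsed. You can also validate the config syntax without running it; depending on your logstash version the flag is --configtest (1.x/2.x) or --config.test_and_exit (5.x and later):

$ /opt/logstash/bin/logstash -f nmap-logstash.conf --configtest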

If you are making use of nmap, then you probably also use OpenVAS or Nessus. There is a script called VulntoES, available on GitHub, that can be used to index Nessus, OpenVAS, Nikto, and Nmap results into Elasticsearch. The script uses the Python client for Elasticsearch. This is how to index the nmap report into Elasticsearch using the script:

$ git clone https://github.com/ChrisRimondi/VulntoES
$ cd VulntoES/
$ sudo pip install elasticsearch

In Sense, create the index that you are going to write the data into. Alternatively, you can create it from your server’s command line using curl:

$ curl -XPUT 'localhost:9200/nmap-vuln-to-es'
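
If you prefer staying in Sense, the equivalent request is simply:

PUT /nmap-vuln-to-es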

Now, index your nmap report (the report we generated earlier lives in ~/nmap):

$ python VulntoES.py -i ~/nmap/report.xml -e 127.0.0.1 -r nmap -I nmap-vuln-to-es
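
To confirm the documents actually landed, a quick count against the index (assuming Elasticsearch on its default port):

$ curl 'localhost:9200/nmap-vuln-to-es/_count?pretty'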

Once you create an index pattern for the new indices in Kibana (e.g. nmap*), you can build visualizations of your nmap data and eventually combine them into dashboards.

Conclusion

We have just indexed our nmap report into Elasticsearch. Remember, the VulntoES script can also be used for Nessus, OpenVAS, and Nikto reports. Have fun, and remember to run nmap or vulnerability scans only against infrastructure that you own or have permission to scan. Questions/Comments? Drop us a line below.