ROCK@home — Operation (3/3)
RockNSM is an open-source network sensor platform that focuses on being reliable, scalable, and secure. Check out the full documentation for more details. This is a companion article to the third part of our ROCK@home video series.
Article #1 — Introduction
Article #2 — Install & Deploy
Article #3 — Operate & Maintain (you are here)
We’re proud to announce part 3 of the ROCK@home video series!
Welcome to the final installment of ROCK@home! This series has aimed to make getting started with Network Security Monitoring (NSM) hardware and concepts approachable by starting with looking at your own home network data.
In part 1 and part 2, we talked about what a network sensor is, the core components that make up ROCK, and how ROCK is different from other security distributions. We also covered the basic topology of network spanning and tapping, how to install the base OS, and finally deploying the sensor. In this final installment, let’s take that forward and talk about the basics of operating and maintaining your sensor.
The best way to follow up a clean install of ROCK is a functions check. This includes verifying that you’re getting a feed of data on your monitoring interfaces and checking the current status of all the primary ROCK services.
In the previous video we talked about ROCK configuration found at:
Among other options, this file defines which network interfaces will be used as monitoring (or listening) interfaces. After running the deploy script, these settings can be confirmed by running
ip a to validate that the monitor interface(s) are in promiscuous mode and do not currently have an IP address:
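If the interface is configured correctly, its flags line will include PROMISC and there will be no inet entry. A quick scripted version of that check (em1 is used as the example monitor interface here; substitute your own):

```shell
# Check that the monitor interface (em1 here) is in promiscuous mode
# and has no IP address assigned; prints a message only if both hold.
ip a show em1 | grep -q PROMISC \
  && ! ip a show em1 | grep -qw inet \
  && echo "em1 looks ready for monitoring"
```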
We can take this validation a bit further by checking for a stream of data with a temporary dump of the live packets on an interface. This is accomplished by using:
This displays all traffic on that interface on standard out. If you don’t see any noise here, you know to check the stream before it gets to ROCK.
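A typical way to take that quick look (assuming tcpdump is available on the sensor, and again using em1 as the monitor interface):

```shell
# Dump live packets on the monitor interface (em1 here) to standard out.
# -n skips DNS lookups, -c 20 stops after 20 packets so it doesn't run forever.
sudo tcpdump -ni em1 -c 20
```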
ROCK Control — (rockctl)
If you’re generally familiar with systemd, you’ll know that the command used to manage services on modern systems is
systemctl. With ROCK we’ve provided a wrapper called
rockctl to control its components. It works in bulk to display the status of services, and to start, stop, or clear the failed state of them.
Let’s run sudo rockctl status and demonstrate a key point involving stenographer (which is our solution for raw packet capture):
You’ll notice a few things in the output:
the current state of all the services
there are multiple entries for stenographer
To clarify: stenographer spawns a child process for every interface it uses to capture packets, and rockctl shows an entry for each one. Let’s start stenographer by running
sudo rockctl start:
This sensor has one interface designated for monitoring (em1).
Then follow up with a status to validate everything is running. Once we know we’ve got data coming in and all the services are up and running, we can finally get into the fantastic interface that is Kibana.
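Put together, that part of the functions check looks something like this (the exact service list and output vary with your configuration):

```shell
# Show the state of all ROCK services; stenographer lists one child
# process per capture interface (a single entry for em1 on this sensor).
sudo rockctl status

# Start any stopped services (this is what brings stenographer up).
sudo rockctl start

# Re-check to confirm everything now reports a running state.
sudo rockctl status
```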
Kibana — Initial Credentials
In the latest version of ROCK, we serve Kibana over a TLS connection secured with a username and a passphrase generated à la XKCD. The generated passphrase is saved to a Kibana credentials file in the home folder of the user created at install, e.g.
/home/admin/KIBANA_CREDS.README. All that needs to be done is:
grab the username / passphrase
point your browser to
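For example, on a sensor where the install-time user was named admin (as above), grabbing the credentials is a one-liner:

```shell
# Print the generated Kibana username and XKCD-style passphrase.
cat /home/admin/KIBANA_CREDS.README
```

Then point your browser at the sensor’s Kibana URL and log in with those values.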
Docket is a new feature in version 2.2 that provides a web interface for an analyst to request very specific slices of PCAP for more targeted analysis. These queries can be made by filtering traffic on things like:
Once you know what details you want to filter on, you can pivot to Docket to carve out a specific piece of PCAP to analyze. This is extremely valuable, as full PCAP can be overwhelming to filter through. Open another tab and navigate to:
Let’s look at the interface. The sidebar shows the “Queries” tab for making requests, and the “Jobs” tab, which keeps a history of queries that can be referenced later. There are Advanced Options available, but let’s keep things simple:
Choose the timeframe, then enter the hostname and any ports or protocols, however granular you want to get.
The “Overall State” shows that the job completed, and you can then download that pinpointed PCAP locally and dig in with your weapon of choice.
ROCK is designed to keep the user’s focus on analysis rather than wrench turning, but sometimes you have to make sure you’re keeping your house in order. For the scope of this video we’ll focus on some of the high points of maintaining ROCK, but many of these concepts are universal to NSM operations and Linux systems in general.
Let’s rewind a bit: when first booting ROCK media there are 2 install choices, “Automated” and “Custom”.
Automated — I want to point out that the automated install is intended as a starting point to get into things. The CentOS Anaconda installer makes its best guess at how to use resources.
Custom — A custom install is encouraged for a production environment in order to get more granular in choosing how disk space is allocated.
A common gotcha occurs when you want full packet capture with stenographer but do not give it its own mount point. Stenographer is great at managing its own disk space (it begins overwriting the oldest captures at 90% capacity), but that doesn’t cut it when it shares a partition with other data producers such as Bro and Suricata.
Best practice is to create a
/data/stenographer partition during a custom install, preventing scenarios like Elasticsearch (rightfully) putting indexes into a read-only state to keep the ship from crashing hard.
Another useful partition to create is
/var/log, to separate system log files. More details on partitioning can be found in the ROCK documentation at rocknsm.io.
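After a custom install you can confirm that these dedicated mount points actually exist; if either path still lives on the root filesystem, df will report / instead of a separate partition:

```shell
# Verify stenographer's capture directory and the system logs each have
# their own partition, so neither can fill up the root filesystem.
df -h /data/stenographer /var/log
```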
Let’s close the maintenance section with a quick note on Suricata rules. Much like on our personal devices, notifications and alerts can quickly go from informative to just plain unhelpful noise. This is also very true of IDS alerting. False positives, unneeded rulesets, and noisy classifications can make things overwhelming. Be sure to look at how Suricata is configured out of the box, and tune it to provide value in your environment.
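As one illustration of tuning (assuming your ruleset is managed with suricata-update; your rule-management tooling may differ), noisy signatures can be disabled by SID in a disable.conf and the ruleset rebuilt:

```shell
# /etc/suricata/disable.conf takes one entry per line, for example:
#   2013028          <- disable a single signature by SID
#   re:ET INFO       <- disable every rule whose message matches a pattern
# After editing it, rebuild the ruleset:
sudo suricata-update
# Then restart services so the new ruleset is loaded (using ROCK's
# service wrapper here; restarting the Suricata unit directly also works):
sudo rockctl restart
```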
So that wraps things up for the ROCK@home series. Hopefully we’ve provided answers to a lot of questions and broken down some barriers for you to invest the time to start learning all things NSM in the most approachable context: your own network with your data.
We’ve covered a lot of topics that are often standalone courses, and it’s expected to have follow on questions and problems that need to be solved. Here’s where to head when you need more information:
Your first stop should be the official documentation found at rocknsm.io
Please join our community support site at community.rocknsm.io
While this article series has focused on the “at home” scenario, ROCK is a proven and secure platform at scale. If your organization has problems to solve or processes to improve around NSM operations, contact Perched for more information.
ROCK is an open-source project and always will be. We couldn’t be more grateful to our community of users and contributors. We’re continually moving the project forward, so be sure to follow @rocknsm on Twitter for the latest updates. Thanks for your time, and ROCK ON! 🤘