Vulnerability Found: WEF Bypass of Winlogbeat
While Nate Guagenti, a Solutions Engineer on our team, was preparing for a talk on endpoint threat hunting on the Elastic Stack, he identified a reliable event-sending bypass for Winlogbeat (CVE-2019-7613). We notified Elastic on February 26, 2019, and thankfully, it has been fixed as of 6.6.2. We recommend upgrading immediately.
Despite the lack of technical "awe", the impact of this bypass cannot be overstated.
The technical details are, in short, that ASCII control characters cause XML parsing to fail in Winlogbeat, resulting in the log being completely dropped. Elastic has since corrected the issue, which affects every version of Winlogbeat <= 6.6.1.
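The failure mode is easy to reproduce outside of Winlogbeat: XML 1.0 forbids most ASCII control characters (everything below 0x20 except tab, line feed, and carriage return), so a standards-conforming parser rejects any event payload containing one. A minimal sketch using Python's stdlib parser (not Winlogbeat's actual Go code) illustrates the behavior:

```python
import xml.etree.ElementTree as ET

# A well-formed event parses fine...
ET.fromstring("<Event><Data>Get-Process</Data></Event>")

# ...but the same event with an embedded 0x1E (Record Separator)
# is rejected outright: XML 1.0 forbids control characters below
# 0x20 other than tab, newline, and carriage return.
try:
    ET.fromstring("<Event><Data>Get-Process \x1e</Data></Event>")
except ET.ParseError as err:
    print(f"parse failed: {err}")
```

A forwarder that treats a parse failure as "skip this record" turns that one byte into a silent drop.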
The drop can be triggered anywhere that you, or a maliciously motivated actor, can place an ASCII control character. A few examples include:
PowerShell (even in the comments);
The description in a user account you create;
Scheduled Tasks (the description or any of the other 20+ fields);
Anywhere else that a maliciously motivated entity could place control characters (pun intended)
All of these would result in the log never being forwarded! Completely dropped. Without unnecessary fear-mongering, that’s a very powerful masking capability for an adversary.
To underscore the severity, we wanted to highlight an example of how this could be leveraged. Simply by adding the 0x1E character in a comment (although any ASCII control character works), the PowerShell Script Block Logging event (Event ID 4104) is never forwarded.
This is not specifically to fault Winlogbeat. To be honest, we believe there are many Windows log forwarding products that exhibit this issue and aren't yet aware of it. To wit, Windows Event Viewer doesn't even display such a log correctly.
We strongly recommend that you audit your data. As an example, one way is to go back through all of your Winlogbeat logs (on the forwarders/devices that have Winlogbeat installed) and look for "unmarshal XML" errors. Assess each matching event to identify whether it could previously have gone unnoticed.
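A quick sketch of that audit, assuming Winlogbeat's own logs are plain-text files in a directory you choose (the file layout and exact error wording vary by version, so match loosely on the phrase rather than a full message):

```python
from pathlib import Path

def find_unmarshal_errors(log_dir: str):
    """Scan a forwarder's Winlogbeat log files for XML unmarshal failures.

    Matches loosely on the phrase "unmarshal XML"; the surrounding error
    text differs across Winlogbeat versions, so don't anchor on an exact
    message. Returns (filename, line number, line) tuples for triage.
    """
    hits = []
    for path in sorted(Path(log_dir).glob("*.log")):  # assumed layout
        lines = path.read_text(errors="replace").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if "unmarshal XML" in line:
                hits.append((path.name, lineno, line.strip()))
    return hits
```

Each hit marks an event that was dropped on that host and deserves a closer look.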
@Cyb3rWard0g with SpecterOps and @neu5ron with Perched presented at BSides Columbus 2018 on data quality alone.
We like to refer to logging & ETL as the "Grey Team" -- because it's pretty gloomy, underappreciated, and sits carefully between your aggressors and defenders.
We always recommend collecting the error and warning logs of your data shippers (the things forwarding your logs) and of the thing storing your logs (your database). Additionally, as your platform allows, we recommend including a "catch all" for any and all failed parsing in your transform/ETL pipeline. The RockNSM project does this, and the HELK project is very close to implementing it as well.
Using the “catch all” concept, we have observed:
A single value so large that even the best Network Security Monitoring (NSM) product in the world failed to escape it correctly, resulting in a faulty JSON document;
Values too large for some databases (e.g., PowerShell commands of 30,000+ characters, HTTP URIs of 150,000+ characters);
Values incorrectly emitted as an integer by the product producing the log, even though the RFC states the field could be a string, an integer, or an IP;
Fields labeled as an IP that could in fact be the empty string ""; and
Things mentioned in the BSides Presentation (linked above)
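The "catch all" idea can be sketched independently of any particular pipeline product: every record that fails parsing is routed, unmodified, into a dead-letter bucket with the error attached, so nothing is silently dropped. The function name and record shape below are illustrative assumptions, not RockNSM's or HELK's actual implementation:

```python
import json

def etl_with_catch_all(raw_records):
    """Parse each record; failures land in a dead-letter list instead of
    being dropped, preserving both the raw payload and the parse error."""
    parsed, dead_letter = [], []
    for raw in raw_records:
        try:
            parsed.append(json.loads(raw))
        except json.JSONDecodeError as err:
            dead_letter.append({"raw": raw, "error": str(err)})
    return parsed, dead_letter

good, bad = etl_with_catch_all(['{"user": "alice"}', '{"user": "bob'])
# The malformed record survives in the dead-letter bucket for triage.
```

Monitoring the size of the dead-letter bucket then doubles as an alert on both broken sources and deliberate evasion.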
If you’ve been in IT (not even security or cyber), you’ve seen all sorts of logging issues in all flavors of products, both open-source and high-priced. No matter what you use -- MONITOR YOUR MONITORING, AUDIT YOUR AUDIT, LOG YOUR LOGS.