"Community ID" has become an established flow hash for connection correlation
across different monitoring and storage systems. Other NSMs have had native,
built-in support for Community ID since late 2018. And even though the roots
of "Community ID" are very close to Zeek, Zeek itself has never provided
out-of-the-box support, instead requiring users to install an external plugin.
While we try to make that installation as easy as possible, an external plugin
always raises the bar for the initial setup and can be intimidating. It also
requires rebuilding the plugin during upgrades. Nothing overly complicated,
but somewhat unnecessary for such popular functionality.
This isn't a 1:1 import of the plugin: the options are now function parameters,
and the "verbose" functionality has been removed. Further, instead of a
`connection` record, the new BiF works with a `conn_id`, allowing computation
of the hash with little effort on the command line:
$ zeek -e 'print community_id_v1([$orig_h=1.2.3.4, $orig_p=1024/tcp, $resp_h=5.6.7.8, $resp_p=80/tcp])'
1:RcCrCS5fwYUeIzgDDx64EN3+okU
Reference: https://github.com/corelight/zeek-community-id/
* security/topic/timw/154-rdp-timeout:
RDP: Instantiate SSL analyzer instead of PIA
RDP: add some enforcement to required values based on MS-RDPBCGR docs
* security/topic/awelzel/152-smtp-validate-mail-transactions:
smtp: Validate mail transaction and disable SMTP analyzer if excessive
generic-analyzer-fuzzer: Detect disable_analyzer() from scripts
* security/topic/awelzel/148-ftp-skip-get-pending-commands-multi-line-response:
ftp/main: Special case for intermediate reply lines
ftp/main: Skip get_pending_command() for intermediate reply lines
An invalid mail transaction is detected as either
* a RCPT TO command without a preceding MAIL FROM, or
* a DATA command without a preceding RCPT TO,
and is logged as a weird.
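For illustration, an artificial exchange exhibiting the second case (the
server reply text is an assumed example, not taken from the test pcap):
  220 mail.example.org ESMTP
  MAIL FROM:<alice@example.org>
  250 OK
  DATA
  503 valid RCPT command must precede DATA
Zeek now flags such a transaction as a weird.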
The testing pcap for invalid mail transactions was produced with a Python
script against a local exim4 instance configured to accept more errors and
unknown commands than the default of 3:
# exim4.conf.template
smtp_max_synprot_errors = 100
smtp_max_unknown_commands = 100
See also: https://www.rfc-editor.org/rfc/rfc5321#section-3.3
Intermediate lines of multiline replies usually do not contain valid status
codes (though servers may opt to include them). Their content may be anything
and is likely unrelated to the original command. There's little reason for us
to try to match them with a corresponding command.
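For reference, a multiline FTP reply might look as follows (artificial,
FEAT-style example); only the first and last lines carry the status code,
while the intermediate lines are free-form text:
  211-Features supported:
   SIZE
   MDTM
  211 End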
OSS-Fuzz generated a large command reply with a very large number of
intermediate lines, which caused long processing times due to matching every
line against all currently pending commands.
This is a DoS vector against Zeek. The new ipv6-multiline-reply.trace and
ipv6-retr-samba.trace files have been extracted from the external ipv6.trace.
This was exposed by OSS-Fuzz after the HTTP/0.9 changes in zeek/zeek#2851:
We did not check the result of parsing the from and last byte positions of a
Content-Range header and would reference uninitialized values on the stack
if these were not valid.
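For illustration (hypothetical header): in "Content-Range: bytes 0-1023/4096"
the from value is 0 and the last value is 1023; with a malformed range such as
"bytes x-y/4096", both parses fail and the values stayed uninitialized.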
This doesn't seem as bad as it sounds beyond yielding nonsensical values:
if the result was negative, we raised a weird and bailed. If the result was
positive, we already had to treat it with suspicion anyway, and the
SetPlainDelivery() logic accounts for that.
OSS-Fuzz tickled an assert when sending an HTTP response before an HTTP/0.9
request. Avoid this by resetting reply_message upon seeing an HTTP/0.9 request.
The PCAP was generated artificially: a server sending a reply that provides a
Content-Length. Because HTTP/0.9 processing would remove the ContentLine
support analyzer, more data was delivered to the HTTP_Message than expected,
triggering an assert.
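For reference, an HTTP/0.9 request is just a bare request line such as
"GET /index.html", with no version token and no headers, and the server
answers with the response body alone.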
This is a follow-up for zeek/zeek#2851.
This commit introduces parsing of the CertificateRequest message in the
TLS handshake. It adds a new event, ssl_certificate_request, as well as a
new function, parse_distinguished_name, which can be used to parse part of
the ssl_certificate_request event parameters.
This commit also adds a new policy script that appends information about
the CAs a TLS server requests in the CertificateRequest message, if the
server sends one.
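A minimal sketch of how the new event might be used (the event and function
signatures here are assumptions; consult the Zeek documentation for the
authoritative ones):

event ssl_certificate_request(c: connection, is_client: bool,
                              certificate_types: index_vec,
                              supported_signature_algorithms: signature_and_hashalgorithm_vec,
                              certificate_authorities: string_vec)
    {
    # Direction semantics assumed: only look at requests sent by the server.
    if ( is_client )
        return;

    for ( i in certificate_authorities )
        {
        # parse_distinguished_name() is assumed to turn the DER-encoded
        # name into a human-readable string.
        print parse_distinguished_name(certificate_authorities[i]);
        }
    }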
* security/topic/awelzel/125-ftp-timeout-three:
testing/ftp: Add tests and pcaps with invalid reply lines
ftp: Harden reply handling a bit and don't raise bad replies to script-land
ftp: ignore invalid commands
As initial examples, this branch ports the Syslog and Finger analyzers
over. We leave the old analyzers in place for now and activate them
iff we compile without any Spicy.
Needs `zeek-spicy-infra` branches in `spicy/`, `spicy-plugin/`,
`CMake/`, and `zeek/zeek-testing-private`.
Note that the analyzer events remain associated with the Spicy plugin
for now: that's where they will show up with `-NN`, and also inside
the Zeekygen documentation.
We switch CMake over to linking the runtime library into the plugin, rather
than at the top level through object libraries.
b41a4bf06d removed a field from this record because its name duplicated that
of another field. The field does need to exist, but it needs the correct name.
Not sure this adds much more coverage than there was, but it at least covers
more recent software versions.
The instances/passwords were ephemeral, so hostname and password hashes
etc aren't useful to anyone.
We were parsing MySQL as big-endian even though the protocol is specified
with "least significant byte first" [1]. This is most problematic when
parsing length-encoded strings with 2-byte length fields...
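For illustration (hypothetical bytes): a 2-byte length field containing
0x2c 0x01 means 0x012c = 300 when read least-significant-byte first, but
0x2c01 = 11265 when misread as big-endian, so the parser would try to consume
a wildly wrong number of bytes for the string that follows.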
Further, I think the EOF_Packet parsing was broken, either due to testing
the CLIENT_DEPRECATE_EOF flag with the wrong endianness, or due to the
workaround in Resultset processing raising mysql_ok(). Introduce a new
mysql_eof() event that triggers for EOF_Packets and remove the fake
mysql_ok() Resultset invocation to fix this. Adapt the mysql script and
tests to account for the new event.
This is a fairly backwards-incompatible change at the event level, but given
how buggy the analyzer was in general, I doubt this matters to many.
I think there are more issues buried here, but this fixes the violation for a
simple "SHOW ENGINE INNODB STATUS" and the existing tests continue to
succeed...
[1] https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_basic_dt_integers.html
These have been created artificially. The tests show that for an invalid
reply line (one without a numeric code, with a numeric code < 100, or with a
numeric code not followed by a space) we now raise an analyzer violation and
disable the analyzer.
The seen/file-names script relies on f$info$filename being populated.
For HTTP and other network protocols, however, this field is only populated
during file_over_new_connection(), which runs after file_new().
Use the file_new() event only for files without connections and
file_over_new_connection() for everything else; the latter also implies that
f$conns is populated, anyway.
Special-case SMB to avoid finding files twice, because there's a custom
implementation in seen/smb-filenames.zeek.
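A minimal sketch of that split (simplified; the actual seen/file-names logic
differs and also special-cases SMB):

event file_new(f: fa_file)
    {
    # Only handle files that aren't tied to a connection here; for network
    # protocols f$info$filename is typically not yet set at this point.
    if ( f?$conns && |f$conns| > 0 )
        return;

    if ( f?$info && f$info?$filename )
        print "file without connection", f$info$filename;
    }

event file_over_new_connection(f: fa_file, c: connection, is_orig: bool)
    {
    # By the time this runs, f$conns is populated and the protocol analyzer
    # has had a chance to fill in f$info$filename.
    if ( f?$info && f$info?$filename )
        print "file over connection", f$info$filename;
    }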
Fixes #2647
* origin/topic/awelzel/2629-notice-file-info:
analyzer/files: handle non-analyzer names in describe_file()
frameworks/notice: Handle fa_file with no or more than a single connection better
* When a file is transferred over multiple connections, have
create_file_info() just pick the first one instead of none.
* Do not unconditionally assume cid and cuid are set on a
Notice::FileInfo object.
oss-fuzz produced FTP traffic with a ~550KB-long FTP command. Cap FTP command
length at 100 bytes, log a weird if a command is larger than that, and move
on to the next. Likely it's not actual FTP traffic, but raising an analyzer
violation would give clients an easy way to disable the analyzer by sending
an overly long command.
The added test PCAP was generated using a fake Python socket server/client.
oss-fuzz generated "HTTP traffic" containing 250k+ sequences of "T<space>\r\r",
which Zeek then logged as individual HTTP requests. Add a heuristic to bail
on such request lines. It's a bit specific to the test case, but should work.
There are more issues around handling HTTP/0.9, e.g. triggering
"not a http reply line" when HTTP/0.9 never had such a thing, but
I don't think that's worth fixing up.
Fixes #119
* origin/topic/robin/gh-2426-flipping:
Fixing productive connections with missing SYN still considered partial after flipping direction.
Add some missing bits when flipping endpoints.
In https://github.com/zeek/zeek/pull/2191, we added endpoint flipping
for cases where a connection starts with a SYN/ACK followed by ACK or
data. The goal was to treat the connection as productive and go ahead
and parse it. But the TCP analyzer could continue to consider it
partial after flipping, meaning that app layers would bail out. #2426
shows such a case: HTTP gets correctly activated after flipping
through content inspection, but it won't process anything because
`IsPartial()` returns true. As the is-partial state reflects whether we saw
the first packets in each direction, this patch now overrides that state for
the originally missing SYN after flipping.
We actually had the same problem at a couple of other locations already.
One of them only happened to work because of the originally inconsistent
state flipping that was fixed in the previous commit; the corresponding unit
test broke after that change. This commit updates that logic as well to
override the state.
This fix is a bit of a hack, but the best solution I could think of
without introducing larger changes.
Closes #2426.
Introduce two new events for analyzer confirmation and analyzer violation
reporting. The current analyzer_confirmation and analyzer_violation events
assume that connection objects and analyzer ids are available, which is not
always the case. We're already passing aid=0 for packet analyzers, and there's
currently no way to report violations from file analyzers using
analyzer_violation, for example.
These new events use an extensible Info record approach so that additional
(optional) information can be added later without changing the signature.
It would also allow per-analyzer extensions to the Info records to pass
analyzer-specific information to script-land. It's not clear that this would
be a good idea, however.
The previous analyzer_confirmation and analyzer_violation events
continue to exist, but are deprecated and will be removed with Zeek 6.1.
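A hypothetical handler for the violation variant might look like the
following (event and record names are assumed to match the new API; the key
point is that connection and analyzer id are optional on the Info record):

event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo)
    {
    # Only the reason is guaranteed; connection and analyzer id are optional,
    # since packet and file analyzers may not have them.
    if ( info?$c )
        print fmt("violation of %s on %s: %s", atype, info$c$id, info$reason);
    else
        print fmt("violation of %s: %s", atype, info$reason);
    }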
There's a logic error in the packet analyzer's AnalyzerConfirmation()
method that causes analyzer_confirmation() events to be raised for every
packet rather than stopping after the first confirmation, which appears to
have been the intention. This affects, for example, VXLAN and Geneve tunnels.
The optional arg_tag parameter was used for short-circuiting, but the return
value of GetAnalyzerTag() was used for setting the session state, causing the
disconnect.
In scenarios where Zeek receives purely tunneled monitoring traffic, this may
result in a non-negligible performance impact.
Somewhat related, ensure the session state is set to violated before
short-circuiting if no analyzer_violation handlers are installed.
Suggesting this as a 5.0.3 candidate.
The current_entity tracking in HTTP assumes that client and server never
send HTTP entities at the same time. The attached pcap (generated
artificially) violates this and triggers:
1663698249.307259 expression error in <...>base/protocols/http/./entities.zeek, line 89: field value missing (HTTP::c$http$current_entity)
For the http-no-crlf test, include weird.log as a baseline. Now that weird is
@load'ed from http, it is actually created, and it seems to make sense to
btest-diff it, too.