Currently, if a TLS/DTLS analyzer fails with a protocol violation, we
will still try to remove the analyzer later, which results in the
following error message:
error: connection does not have analyzer specified to disable
Now we no longer try to remove the analyzer after a violation has
occurred.
* security/topic/timw/154-rdp-timeout:
RDP: Instantiate SSL analyzer instead of PIA
RDP: add some enforcement to required values based on MS-RDPBCGR docs
* security/topic/awelzel/152-smtp-validate-mail-transactions:
smtp: Validate mail transaction and disable SMTP analyzer if excessive
generic-analyzer-fuzzer: Detect disable_analyzer() from scripts
* security/topic/awelzel/148-ftp-skip-get-pending-commands-multi-line-response:
ftp/main: Special case for intermediate reply lines
ftp/main: Skip get_pending_command() for intermediate reply lines
An invalid mail transaction is detected as
* a RCPT TO command without a preceding MAIL FROM
* a DATA command without a preceding RCPT TO
and logged as a weird (see the sketch below).
The testing pcap for invalid mail transactions was produced with a Python
script against a local exim4 configured to accept more errors and unknown
commands than the default of 3:
# exim4.conf.template
smtp_max_synprot_errors = 100
smtp_max_unknown_commands = 100
See also: https://www.rfc-editor.org/rfc/rfc5321#section-3.3
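A minimal script-level sketch of the ordering rules above, assuming the
smtp_request event and Reporter::conn_weird(); the connection fields and
weird names here are illustrative only, not what the actual change uses:

# sketch.zeek -- illustrative only; field and weird names are made up
redef record connection += {
    smtp_saw_mail_from: bool &default=F;
    smtp_saw_rcpt_to:   bool &default=F;
};

event smtp_request(c: connection, is_orig: bool, command: string, arg: string)
    {
    local cmd = to_upper(command);

    if ( cmd == "MAIL" )
        c$smtp_saw_mail_from = T;
    else if ( cmd == "RCPT" )
        {
        if ( ! c$smtp_saw_mail_from )
            Reporter::conn_weird("smtp_rcpt_without_mail", c);  # hypothetical weird name
        else
            c$smtp_saw_rcpt_to = T;
        }
    else if ( cmd == "DATA" && ! c$smtp_saw_rcpt_to )
        Reporter::conn_weird("smtp_data_without_rcpt", c);  # hypothetical weird name
    }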
Intermediate lines of multiline replies usually do not contain valid status
codes (even if servers may opt to include them). Their content may be anything
and is likely unrelated to the original command, so there is little reason for
us to try to match them with a corresponding command.
OSS-Fuzz generated a large command reply with a very large number of
intermediate lines, which caused long processing times because every line was
matched against all currently pending commands.
This is a DoS vector against Zeek. The new ipv6-multiline-reply.trace and
ipv6-retr-samba.trace files have been extracted from the external ipv6.trace.
There was a misunderstanding about whether to include the AD and CD flags
by default in dns.log, so remove them again.
There had also been a discussion about the quirk that the AD flag of a
request would always be overwritten by the reply in dns.log unless the reply
is missing. For now, let users extend dns.log themselves in whatever way
best fits their requirements, rather than adding these flags by default.
Still add a btest that prints the AD and CD flags for smoke testing.
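For reference, a sketch of how a deployment could add the flags itself,
assuming the AD/CD booleans are exposed on dns_msg (as the smoke-test btest
implies); this is not part of the change:

# site-local extension sketch, not shipped by default
redef record DNS::Info += {
    AD: bool &log &optional;
    CD: bool &log &optional;
};

event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count)
    {
    if ( ! c?$dns )
        return;

    # Note: a reply overwrites whatever the request set, which is exactly
    # the quirk discussed above.
    c$dns$AD = msg$AD;
    c$dns$CD = msg$CD;
    }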
OpenSSL 3.1 switched from outputting UNDEF to not giving a short name in
this case. Luckily this only requires a tiny test change.
We might consider pulling this into older versions, for ease of CI
testing.
Fixes GH-2869
This was exposed by OSS-Fuzz after the HTTP/0.9 changes in zeek/zeek#2851:
We did not check the result of parsing the first and last byte positions of
a Content-Range header and would reference uninitialized values on the stack
if they were not valid.
This doesn't seem as bad as it sounds beyond yielding nonsensical values:
If the result was negative, we logged a weird and bailed. If it was positive,
we already had to treat it with suspicion anyway, and the SetPlainDelivery()
logic accounts for that.
OSS-Fuzz tickled an assert when sending an HTTP response before an HTTP/0.9
request. Avoid this by resetting reply_message upon seeing an HTTP/0.9 request.
The PCAP was generated artificially: the server sends a reply providing a
Content-Length. Because HTTP/0.9 processing would remove the ContentLine
support analyzer, more data was delivered to the HTTP_Message than
expected, triggering an assert.
This is a follow-up for zeek/zeek#2851.
This commit introduces parsing of the CertificateRequest message in the
TLS handshake. It adds a new event, ssl_certificate_request, as well as a
new function, parse_distinguished_name, which can be used to parse part of
the ssl_certificate_request event parameters.
This commit also introduces a new policy script, which appends
information about the CAs a TLS server requests in the
CertificateRequest message, if the server sends one.
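A sketch of consuming the new event; the parameter names and types below are
assumptions based on the description above, not the authoritative signature:

# sketch only -- check the generated event documentation for the exact signature
event ssl_certificate_request(c: connection, is_orig: bool,
                              certificate_types: index_vec,
                              supported_signature_algorithms: signature_and_hashalgorithm_vec,
                              certificate_authorities: string_vec)
    {
    # Each entry is assumed to be a DER-encoded distinguished name that
    # parse_distinguished_name() turns into something readable.
    for ( i in certificate_authorities )
        print parse_distinguished_name(certificate_authorities[i]);
    }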
The user and password fields are replicated to each of the ftp.log
entries. Using a very large username (100s of KBs) makes it possible to
bloat the log without actually sending much traffic. Further, limit the
arg and reply_msg columns to large, but not unbounded, values.
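Independent of this change, a site could already clamp oversized columns via
the stream's log policy hook; a rough sketch (the 4096-byte cap is arbitrary):

hook FTP::log_policy(rec: FTP::Info, id: Log::ID, filter: Log::Filter)
    {
    # Truncate user-controlled columns before the record is written.
    if ( |rec$user| > 4096 )
        rec$user = sub_bytes(rec$user, 1, 4096);

    if ( rec?$password && |rec$password| > 4096 )
        rec$password = sub_bytes(rec$password, 1, 4096);
    }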
* security/topic/awelzel/125-ftp-timeout-three:
testing/ftp: Add tests and pcaps with invalid reply lines
ftp: Harden reply handling a bit and don't raise bad replies to script-land
ftp: ignore invalid commands
As initial examples, this branch ports the Syslog and Finger analyzers
over. We leave the old analyzers in place for now and activate them
iff we compile without any Spicy.
Needs `zeek-spicy-infra` branches in `spicy/`, `spicy-plugin/`,
`CMake/`, and `zeek/zeek-testing-private`.
Note that the analyzer events remain associated with the Spicy plugin
for now: that's where they will show up with `-NN`, and also inside
the Zeekygen documentation.
We switch CMake over to linking the runtime library into the plugin,
vs. at the top-level through object libraries.
b41a4bf06d removed a field from this record because its name duplicated
that of another field. The field does need to exist, but with the correct
name.
This instantiates the SSL analyzer when the client requests SSL
so that Zeek now has a bit more visibility into encrypted MySQL
connections.
The pattern used is the same as in the IMAP, POP or XMPP analyzer.
Not sure this adds much more coverage than there was, but at a minimum it
covers more recent software versions.
The instances/passwords were ephemeral, so hostname and password hashes
etc aren't useful to anyone.
We were parsing MySQL as big-endian even though the protocol is specified
as "least significant byte first" [1]. This is most problematic when
parsing length-encoded strings with 2-byte length fields...
Further, I think the EOF_Packet parsing was broken, either due to testing
the CLIENT_DEPRECATE_EOF flag with the wrong endianness, or due to the
workaround in Resultset processing raising mysql_ok(). To fix this,
introduce a new mysql_eof() event that triggers for EOF_Packets and remove
the fake mysql_ok() Resultset invocation. Adapt the mysql script and tests
to account for the new event.
This is a fairly backwards-incompatible change on the event level, but
given how buggy the analyzer was in general, I doubt this matters to many.
I think there is more buried here, but this fixes the violation for a
simple "SHOW ENGINE INNODB STATUS" query, and the existing tests continue
to succeed...
[1] https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_basic_dt_integers.html
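To illustrate the endianness point with a 2-byte length field, a small
worked example using bytestring_to_count():

event zeek_init()
    {
    # A length field of 0x1234 appears on the wire as 0x34 0x12.
    local len_bytes = "\x34\x12";

    print bytestring_to_count(len_bytes, T);  # little endian: 4660 (correct)
    print bytestring_to_count(len_bytes);     # big endian: 13330 (what the old parsing effectively saw)
    }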
These have been created artificially. The tests show that for an invalid
reply line without a numeric code, with a numeric code < 100, or with a
numeric code not followed by a space, we now raise an analyzer violation
and disable the analyzer.
When disabling_analyzer() was introduced, it was added to the GLOBAL
module. The awkward side-effect is that implementing a hook handler in
another module requires prefixing it with GLOBAL. Alternatively, one can
re-open the GLOBAL module and implement the handler in that scope. Neither
option is great, and prefixing with GLOBAL is ugly, so move the identifier
to the Analyzer module and ask users to prefix with Analyzer instead.
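After the move, a handler in an arbitrary module looks like the sketch
below; breaking out of the hook is what vetoes the disabling:

module MySite;  # any module works now; only the Analyzer:: prefix is needed

hook Analyzer::disabling_analyzer(c: connection, atype: AllAnalyzers::Tag, aid: count)
    {
    # Observe (or veto via break) analyzers about to be disabled.
    print fmt("%s: about to disable %s (analyzer id %d)", c$uid, atype, aid);
    }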
* 'topic/fox-ds/ssh-key-init-events' of github.com:fox-ds/zeek:
Added several events for detailed info on the SSH2 key init directions
* Straightened out the zeek:see lines in events.bif to be the same across all events.
OSS-Fuzz produced FTP traffic with a ~550KB long FTP command. Cap the FTP
command length at 100 bytes, log a weird if a command is larger than that,
and move on to the next one. Such traffic is likely not actual FTP, but
raising an analyzer violation would give clients an easy way to disable the
analyzer by sending an overly long command.
The added test PCAP was generated using a fake Python socket server/client.
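If 100 bytes turns out to be too strict for a deployment, the cap is meant
to be redef-able; the option name below is an assumption, verify it against
your Zeek version:

# local tuning sketch; option name is an assumption
redef FTP::max_command_length = 200;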
oss-fuzz generated "HTTP traffic" containing 250k+ sequences of "T<space>\r\r"
which Zeek then logged as individual HTTP requests. Add a heuristic to bail
on such request lines. It's a bit specific to the test case, but should work.
There are more issues around handling HTTP/0.9, e.g. triggering
"not a http reply line" when HTTP/0.9 never had such a thing, but
I don't think that's worth fixing up.
Fixes #119
This uses the v3 JSON as a source for the first time. The test needed
some updating because Google removed a couple more logs; in the future
this should hopefully not be necessary anymore because I think v3
should retain all logs.
In theory this might be neat in 5.1.
The current_entity tracking in HTTP assumes that client/server never
send HTTP entities at the same time. The attached pcap (generated
artificially) violates this and triggers:
1663698249.307259 expression error in <...>base/protocols/http/./entities.zeek, line 89: field value missing (HTTP::c$http$current_entity)
For the http-no-crlf test, include weird.log as a baseline. Now that the
weird script is @load'ed from http, the log is actually created, so it
makes sense to btest-diff it, too.
Now that it's loaded in bare mode, no need to load it explicitly.
The main thing the tests were relying on seems to be the tracking of
c$service for conn.log baselines. Very few were actually checking
for dpd.log.