The &transient attribute does not work well with $element, as that
apparently is no longer available within &until.
Found after just a few seconds of building out the fuzzer.
Don't log them; they are random and arbitrary in the normal case. Users
can do the following to log them if desired.
redef record WebSocket::Info$client_key += { &log };
redef record WebSocket::Info$server_accept += { &log };
Add a constructed PCAP where the HTTP/WebSocket server sends a WebSocket
ping message directly in the same packet as the HTTP reply. Ensure this is
interpreted the same as if the WebSocket message arrives in a separate
packet following the HTTP reply.
For the server side this should work; for the client side we'd need to
suspend parsing, as we currently cannot quite know whether what follows
is a pipelined HTTP request or upgraded protocol data, and we don't have
"suspend parsing" functionality here.
This adds a new WebSocket analyzer that is enabled with the HTTP upgrade
mechanism introduced previously. It is a first implementation in BinPac with
manual chunking of frame payload. Configuration of the analyzer is sketched
via the new websocket_handshake() event and a configuration BiF called
WebSocket::__configure_analyzer(). In short, script-land collects WebSocket-related
HTTP headers and can forward these to the analyzer to change its parsing
behavior at websocket_handshake() time. For now, however, there's no actual
logic that would change behavior based on agreed-upon extensions exchanged
via HTTP headers (e.g., frame compression). WebSocket::Configure()
simply attaches a PIA_TCP analyzer to the WebSocket analyzer for dynamic
protocol detection (or a custom analyzer if set). The added pcaps show this
in action for tunneled ssh, http and https using wstunnel. One test pcap is
Broker's WebSocket traffic from our own test suite, the other is the
Jupyter websocket traffic from the ticket/discussion.
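For illustration, a minimal sketch of such a handshake-time hook (the
exact event and BiF signatures shown here are assumptions):
event websocket_handshake(c: connection, aid: count)
    {
    # Script-land has the collected Sec-WebSocket-* headers available at
    # this point and could inspect them before forwarding the
    # configuration to the analyzer.
    WebSocket::__configure_analyzer(c, aid);
    }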
This commit further adds a basic websocket.log that aggregates the WebSocket-specific
(Sec-WebSocket-*) headers into a single log.
Closes #3424
The BDAT analyzer should support uint64_t-sized chunks reasonably well,
but the ContentLine analyzer does not. Also, I got the types for
RemainingChunkSize() and in DeliverStream() wrong, resulting in overflows
and segfaults when very large chunk sizes were used.
Tickled by OSS-Fuzz. Actually running the fuzzer locally only took a
few minutes to find the crash, too. Embarrassing.
OSS-Fuzz managed to produce a MIME multipart message construction with
thousands of nested entities (or that's what Zeek makes out of it anyhow).
Prevent such deep analysis by capping the nesting depth at 100, avoiding
unnecessary resource usage. A new weird named exceeded_mime_max_depth
is reported when this limit is reached.
This change reduces the runtime of the OSS-Fuzz reproducer from ~45 seconds
to ~2.5 seconds.
The test PCAP was produced from a Python script using the email package
and sending the rendered version via POST to an HTTP server.
Closes #208
OSS-Fuzz found that providing an invalid BDAT line would tickle an
assert in UpdateState(). The BDAT state was never initialized, but
UpdateState() expected it to be.
This also removes the AnalyzerViolation() call for bad BDAT commands
and instead raises a weird. The SMTP analyzer is very lax, and not
triggering the violation allows parsing the server's response to such an
invalid command.
PCAP files produced by a custom Python SMTP client against Postfix.
In AWS GLB environments, the max_depth of 2 is easily reached due to packets
being encapsulated with GENEVE and VXLAN [1]. Any additional encapsulation
layer causes Zeek to raise a weird and ignore the inner traffic. Bump the
default maximum depth to 4; while not common, it's not unusual either to
observe this in the wild.
[1] https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-packet-formats.html
Closes #3439
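The limit is controlled by the Tunnel::max_depth option, so deployments
that see even deeper encapsulation can raise it further via redef (the
value below is illustrative):
redef Tunnel::max_depth = 6;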
The initial (prefix) and final (suffix) strings are specified individually
with a variable number of "any" matches that can occur between these.
The previous implementation assumed a single string and rendered it
as *<string>*.
Reported and PCAP provided by @martinvanhensbergen, thanks!
Closes zeek/spicy-ldap#27
We already had these declared in dns/const.zeek, so extend the parser
as well to avoid raising weirds and add some test pcaps:
$ dig @8.8.8.8 DNSKEY ed448.no
$ dig @8.8.8.8 ed448.no +dnssec
And the same for the ed25519.no domain.
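For reference, the corresponding entries in dns/const.zeek look roughly
like this sketch (algorithm numbers per RFC 8080):
const algorithms: table[count] of string = {
    [15] = "Ed25519",
    [16] = "Ed448",
} &default=function(n: count): string { return fmt("algorithm-%d", n); };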
Closes #3453
This adds a signatures/http-body-match btest to verify how the signature
framework matches HTTP bodies in requests and responses.
It currently fails because the 'http-request-body' and 'http-reply-body'
clauses never match anything when there is a '$' in their regular
expressions.
The other pattern clauses such as the 'payload' clause do not suffer
from that restriction and it is not documented as a limitation of HTTP
body pattern clauses either, so it is probably a bug.
The "http-body-match" btest shows that without a fix any signatures
which ends with a '$' in a http-request-body or http-reply-body rule
will never raise a signature_match() event, and that signatures which do
not end with a '$' cannot distinguish an HTTP body prefixed by the
matching pattern (ex: ABCD) from an HTTP body consisting entirely of the
matching pattern (ex: AB).
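For illustration, one of the rules could look like this sketch (the
exact rules live in the btest and may differ in detail):
signature http_request_body_AB_only {
    # A request body consisting entirely of "AB"; before the fix, the
    # trailing '$' prevented this condition from ever matching.
    http-request-body /^AB$/
    event "http_request_body_AB_only"
}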
Test cases by source port:
- 13579:
- GET without body, plain res body (CD, only)
- 13578:
- GET without body, plain res body (CDEF, prefix)
- 24680:
- POST plain req body (AB, only), plain res body (CD, only)
- 24681:
- POST plain req body (ABCD, prefix), plain res body (CDEF, prefix)
- 24682:
- POST gzipped req body (AB, only), gzipped res body (CD, only)
- POST plain req body (CD, only), plain res body (EF, only)
- 33210:
- POST multipart plain req body (AB;CD;EF, prefix)
- plain res body (CD, only)
- 33211:
- POST multipart plain req body (ABCD;EF, prefix)
- plain res body (CDEF, prefix)
- 34527:
- POST chunked gzipped req body (AB, only)
- chunked gzipped res body (CD, only)
- 34528:
- POST chunked gzipped req body (ABCD, prefix)
- chunked gzipped res body (CDEF, prefix)
The tests with source ports 24680, 24682 and 34527 should
match the signature http_request_body_AB_only and the signature
http_request_body_AB_prefix, but they only match the latter.
The tests with source ports 13579, 24680, 24682, 33210 and 34527 should
match the signature http_response_body_CD_only and the signature
http_response_body_CD_prefix, but they only match the latter.
The tests with source ports 24680, 24681, 33210 and 33211 show how the
http_request_body_AB_then_CD signature with two http-request-body
conditions matches either on one or multiple requests (documented
behaviour).
The test cases with other source ports show where the
http_request_body_AB_only and http_response_body_CD_only signatures
should not match because their bodies include more than the searched
patterns.
This commit adds a multitude of new extension types that were added in
the last few years; it also adds GREASE values to extensions, curves,
and ciphersuites.
Furthermore, it adds a test that contains an encrypted-client-hello
key exchange (which uses several extension types that we do not have in
our baseline so far).
The first pcap only contained packets from the originator, not the responder.
What stands out here is that the Linux kernel doesn't seem to use a symmetric
flow hash for the tunneled connection, resulting in a total of four tunnel
connections for the two inner connections. Sigh.
Setting this option to false excludes missing bytes in files from the
extraction limits and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
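Assuming the option in question is FileExtract::default_limit_includes_missing,
disabling it looks like this:
redef FileExtract::default_limit_includes_missing = F;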
When http_reply events are received before http_request events, either
through faked traffic or possible re-ordering, it is possible to trigger
unbounded state growth because later http_request events are never
matched with responses again.
Prevent this by synchronizing request/response counters when late
requests come in.
Also, forcefully flush pending requests when http_reply events are never
observed, either because the analyzer has been disabled or because of
half-duplex traffic.
Fixes #1705
Using pcaps from https://interop.seemann.io/ as samples for QUIC protocol
data didn't produce a conn.log for the contained data. `tcpdump -r`
and Wireshark do show the contained IP/UDP packets. Teach Zeek how
to handle link type DLT_PPP 0x09 using a new PPP analyzer based on the
PPPSerial analyzer code.
Usual update to files/x509 baseline after adding new analyzer due
to enum values changing.
This is similar to GH-3206. There do not seem to be practical
consequences, but we should still fix it.
This also includes the udp-testcase that was forgotten in GH-3206.
This reflects the `spicy-plugin` code as of `d8c296b81cc2a11`.
In addition to moving the code into Zeek's source tree, this comes
with a couple small functional changes:
- `spicyz` no longer tries to infer if it's running from the build
directory. Instead `ZEEK_SPICY_LIBRARY` can be set to a custom
location. `zeek-set-path.sh` does that now.
- `ZEEK_CONFIG` can be set to change what `spicyz -z` prints out. This is
primarily for backwards compatibility.
Some further notes on specifics:
- We raise the minimum Spicy version to 1.8 (i.e., current `main`
branch).
- Renamed the `compiler/` subdirectory to `spicyz` to avoid
include-path conflicts with the Spicy headers.
- In `cmake/`, the corresponding PR brings a new/extended version of
`FindZeek`, which Spicy analyzer packages need. We also now install
some of the files that the Spicy plugin used to bring for testing,
so that existing packages keep working.
- For now, this all remains backwards compatible with the current
`zkg` analyzer templates so that they work with both external and
integrated Spicy support. Later, once we don't need to support any
external Spicy plugin versions anymore, we can clean up the
templates as well.
- All the plugin's tests have moved into the standard test suite. They
are skipped if configured with `--disable-spicy`.
This holds off on adapting the new code further to Zeek's coding
conventions, so that it remains easier to maintain in parallel to
the (now legacy) external plugin. We'll make a pass over the
formatting for (presumably) Zeek 6.1.
This commit adds support for the connection_id extension, adds a trace
that uses DTLS 1.3 connection IDs, and adds parsing for the DTLS 1.3
unified header, in case connection IDs are not used.
In case connection IDs are used, parsing of the DTLS 1.3 unified header
is skipped. This is because the header then contains a variable-length
element whose length is not given in the header. Instead, the length is
given in the client/server hello message of the opposite side of the
connection (which we might have missed).
Furthermore, parsing it is not of high importance, since we do not pass
the connection ID, or any of the other parsed values of the unified
header, into script-land.
The NTP mode provides us with the identity of the endpoints. For the
simple CLIENT / SERVER modes, flip the connection if we detect
orig/resp disagreeing with what the message says. This mainly
results in the history getting a ^ and the ntp.log / conn.log
showing the corrected endpoints.
Closes #2998.
* origin/topic/awelzel/smb2-state-handling:
NEWS: Add entry about SMB::max_pending_messages and state discarding
scripts/smb2-main: Reset script-level state upon smb2_discarded_messages_state()
smb2: Limit per-connection read/ioctl/tree state
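For illustration, a sketch of tuning the new limit and reacting to
discards (the exact event signature here is an assumption):
redef SMB::max_pending_messages = 500;

event smb2_discarded_messages_state(c: connection, state: string)
    {
    # Hypothetical handler: note when per-connection SMB2 state is dropped.
    print fmt("discarded SMB2 %s state on %s", state, c$uid);
    }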
DTLSv1.3 changes the DTLS record format, introducing a completely new
header - which is a first for DTLS.
We don't currently completely parse this header, as this requires a bit
more statekeeping. This will be added in a future revision. This also
has little practical impact.
Currently, if a TLS/DTLS analyzer fails with a protocol violation, we
will still try to remove the analyzer later, which results in the
following error message:
error: connection does not have analyzer specified to disable
Now we no longer try to remove the analyzer after a violation has
occurred.
This is similar to what the external corelight/zeek-smb-clear-state script
does, but leverages the smb2_discarded_messages_state() event instead of
regularly checking on the state of SMB connections.
The pcap was created using the dperson/samba container image and mounting
a share with Linux's CIFS filesystem, then copying the content of a
directory with 100 files. The test uses a BPF filter to imitate mostly
"half-duplex" traffic.