* origin/topic/awelzel/3424-http-upgrade-websocket-v1:
websocket: Handle breaking from WebSocket::configure_analyzer()
websocket: Address review feedback for BinPac code
fuzzers: Add WebSocket fuzzer
websocket: Fix crash for fragmented messages
websocket: Verify Sec-WebSocket-Key/Accept headers and review feedback
btest/websocket: Test for coalesced reply-ping
HTTP/CONNECT: Also weird on extra data in reply
HTTP/Upgrade: Weird when more data is available
ContentLine: Add GetDeliverStreamRemainingLength() accessor
HTTP: Drain event queue after instantiating upgrade analyzer
btest/http: Explain switching-protocols test change as comment
WebSocket: Introduce new analyzer and log
HTTP: Add mechanism to instantiate Upgrade analyzer
The &transient attribute does not work well with $element, as that apparently
won't be available within &until anymore.
Found after a few seconds building out the fuzzer.
Don't log them; they are random and arbitrary in the normal case. Users
can do the following to log them if wanted:
redef record WebSocket::Info$client_key += { &log };
redef record WebSocket::Info$server_accept += { &log };
Add a constructed PCAP where the HTTP/WebSocket server sends a WebSocket
ping message directly within the same packet as the HTTP reply. Ensure this is
interpreted the same as if the WebSocket message were in a separate packet
following the HTTP reply.
For the server side this should work; for the client side we'd need to
suspend parsing, as we currently cannot quite know whether what follows is
a pipelined HTTP request or upgraded protocol data, and we don't have
"suspend parsing" functionality here.
DPD enables HTTP based on the content of the WebSocket frames. However,
it's not HTTP: the protocol is x-kaazing-handshake and the server sends
some form of status/acknowledge to the client first, so the HTTP analyzer
receives that as the first bytes of the response and bails, oh well.
This adds a new WebSocket analyzer that is enabled with the HTTP upgrade
mechanism introduced previously. It is a first implementation in BinPac with
manual chunking of frame payload. Configuration of the analyzer is sketched
via the new websocket_handshake() event and a configuration BiF called
WebSocket::__configure_analyzer(). In short, script land collects WebSocket-related
HTTP headers and can forward these to the analyzer to change its
parsing behavior at websocket_handshake() time. For now, however, there's
no actual logic that would change behavior based on agreed upon extensions
exchanged via HTTP headers (e.g. frame compression). WebSocket::Configure()
simply attaches a PIA_TCP analyzer to the WebSocket analyzer for dynamic
protocol detection (or a custom analyzer if set). The added pcaps show this
in action for tunneled ssh, http and https using wstunnel. One test pcap is
Broker's WebSocket traffic from our own test suite, the other is the
Jupyter websocket traffic from the ticket/discussion.
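A rough sketch of the intended script-land flow (the event and BiF names are
taken from the description above; the parameters and the per-connection header
state are assumptions, not the final API):

    event websocket_handshake(c: connection, aid: count)
        {
        # Sec-WebSocket-* headers collected by the HTTP scripts could be
        # forwarded here so the analyzer can adapt its parsing, e.g. for
        # negotiated extensions. Assumed signature, for illustration only.
        WebSocket::__configure_analyzer(c, aid);
        }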
This commit further adds a basic websocket.log that aggregates the
WebSocket-specific (Sec-WebSocket-*) headers into a single log.
Closes #3424
The BDAT analyzer should be supporting uint64_t-sized chunks reasonably well,
but the ContentLine analyzer does not. Also, I got the types for
RemainingChunkSize() and in DeliverStream() wrong, resulting in overflows
and segfaults when very large chunk sizes were used.
Tickled by OSS-Fuzz. Actually running the fuzzer locally only took a
few minutes to find the crash, too. Embarrassing.
OSS-Fuzz managed to produce a MIME multipart message construction with
thousands of nested entities (or that's what Zeek makes out of it anyhow).
Prevent such deep analysis by capping the nesting depth at 100, avoiding
unnecessary resource usage. A new weird named exceeded_mime_max_depth
is reported when this limit is reached.
This change reduces the runtime of the OSS-Fuzz reproducer from ~45 seconds
to ~2.5 seconds.
The test PCAP was produced from a Python script using the email package
and sending the rendered version via POST to an HTTP server.
Closes #208
OSS-Fuzz found that providing an invalid BDAT line would tickle an
assert in UpdateState(): the BDAT state was never initialized, but
UpdateState() expected it to be.
This also removes the AnalyzerViolation() call for bad BDAT commands
and instead raises a weird. The SMTP analyzer is very lax, and not triggering
the violation allows parsing the server's response to such an invalid
command.
PCAP files produced by a custom Python SMTP client against Postfix.
The initial (prefix) and final (suffix) strings are specified individually,
with a variable number of "any" matches that can occur between them.
The previous implementation assumed a single string and rendered it
as *<string>*.
Reported and PCAP provided by @martinvanhensbergen, thanks!
Closes zeek/spicy-ldap#27
Skimming through the RFC, the previous approach of having containers for most
fields seems unfounded for normal protocol operation. The new weirds could just
as well be considered protocol violations. Outside of duplicated or missed data,
they just shouldn't happen for well-behaved clients and servers.
Additionally, with non-conformant traffic it would be trivial to cause
unbounded state growth and immense log record sizes.
Unfortunately, things have become a bit clunky now.
Closes #3504
* origin/topic/awelzel/log-write-delay-3:
logging: ref() to record_ref() renaming
logging: Fix typos from review
logging/Manager: Make LogDelayExpiredTimer an implementation detail
logging/WriteToFilters: Use range-based for loop
testing/btest: Log::delay() from JavaScript
NEWS: Entry for delayed log writes
Bump doc submodule to branch
logging: Do not keep delay state persistent
logging: delay documentation polishing
logging: Better error messages for invalid Log::delay() calls
logging/Manager: Implement DelayTokenType as an actual opaque
logging: Implement get_delay_queue_size()
logging: Introduce Log::delay() and Log::delay_finish()
logging/Manager: zeek::detail'ify
logging/Manager: Split Write()
Timer: Add LOG_DELAY_EXPIRE timer type
Ascii: Remove extra include
With a bit of tweaking in the JavaScript plugin to support opaque types, this
will allow the delay functionality to work there, too.
Making the LogDelayToken an actual opaque seems reasonable, too. It's not
supposed to be inspected by users.
This is a verbose, opinionated and fairly restrictive version of the log delay idea.
Main drivers are explicitness, foot-gun avoidance and implementation simplicity.
Calling the new Log::delay() function is only allowed within the execution
of a Log::log_stream_policy() hook for the currently active log write.
Conceptually, the delay is placed between the execution of the global stream
policy hook and the individual filter policy hooks. A post delay callback
can be registered with every Log::delay() invocation. Post delay callbacks
can (1) modify a log record as they see fit, (2) veto the forwarding of the
log record to the log filters and (3) extend the delay duration by calling
Log::delay() again. The last point allows delaying a record by an indefinite
amount of time, rather than a fixed maximum amount. This should be rare and
is therefore explicit.
Log::delay() increases an internal reference count and returns an opaque
token value to be passed to Log::delay_finish() to release a delay reference.
Once all references are released, the record is forwarded to all filters
attached to a stream when the delay completes.
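A minimal sketch of the intended usage (a hedged illustration; the exact
callback and function signatures may differ from what ultimately landed):

    hook Log::log_stream_policy(rec: any, id: Log::ID)
        {
        if ( id != Conn::LOG )
            return;

        # Delay this write. The callback runs when the delay completes; it
        # may modify the record, veto forwarding it to the filters (return F),
        # or call Log::delay() again to extend the delay.
        local token = Log::delay(id, rec, function(rec2: any, id2: Log::ID): bool
            {
            return T;
            });

        # Elsewhere, once the record has been enriched:
        # Log::delay_finish(id, rec, token);
        }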
This functionality separates Log::log_stream_policy() and individual filter
policy hooks. One consequence is that a common use-case of filter policy hooks,
removing unproductive log records, may run after a record was delayed. Users
can lift their filtering logic to the stream level (or replicate the condition
before the delay decision). The main motivation here is that deciding on a
stream-level delay in per-filter hooks is too late. Attaching multiple filters
to a stream can additionally result in hard-to-understand behavior.
On the flip side, filter policy hooks are guaranteed to run after the delay
and can be used for further mangling or filtering of a delayed record.
We already had these declared in dns/const.zeek, so extend the parser
as well to avoid raising weirds and add some test pcaps:
$ dig @8.8.8.8 DNSKEY ed448.no
$ dig @8.8.8.8 ed448.no +dnssec
And the same for the ed25519.no domain.
Closes #3453
* origin/topic/vern/zam-EH-coalesce:
BTest updates to accommodate event handler coalescence differences
BTests for testing that event handler coalescence operates as expected
coalescing of event handlers (ZAM optimization)
Minor fixups during merge as commented on the PR.
This field isn't required by a worker and it's certainly not used by a
worker to listen on that specific interface. It also isn't required to
be set consistently, and its in-tree use is limited to the old load-balancing
script.
There's a bif called packet_source() which, on a worker, provides
information about the packet source actually in use.
Relates to zeek/zeek#2877.
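For illustration, a hedged example of querying the packet source on a worker
instead of relying on the node's interface field (assuming PacketSource's
path field holds the interface name for live sources):

    event zeek_init()
        {
        local src = packet_source();
        if ( src$live )
            print fmt("worker is sniffing on %s", src$path);
        }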
This commit adds a multitude of new extension types that were added in
the last few years; it also adds grease values to extensions, curves,
and ciphersuites.
Furthermore, it adds a test that contains an encrypted-client-hello
key-exchange (which uses several extension types that we do not have in
our baseline so far).
The ssl_history field may grow unbounded (e.g., via the ssl_alert event). Prevent this
by capping it with a configurable limit (default 100) and raising a weird once the limit is reached.
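If a deployment needs a larger cap, a redef along these lines should work
(the option name is an assumption; the text above only states that the limit
is configurable):

    # Assumed option name, for illustration; the default cap is 100.
    redef SSL::max_ssl_history_length = 200;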
Unsure what it's used for today; it also results in the situation that on
some platforms we generate a reporter.log in bare mode, while on others,
where Spicy is disabled, we do not.
If we want base/frameworks/version loaded by default, we should put it into
init-bare.zeek and possibly remove the loading of the reporter framework
from it - Reporter::error() would still work and be visible on stderr,
just not create a reporter.log.
Setting this option to false does not count missing bytes in files towards the
extraction limits, and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
Setting this option to false does not count missing bytes in files towards the
extraction limits, and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
(cherry picked from commit afa6f3a0d3b8db1ec5b5e82d26225504c2891089)
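A hedged example of opting out of counting missing bytes (the option name is
an assumption; the text above does not name it):

    # Assumed option name, for illustration: do not count missing bytes towards
    # the extraction limit; gaps become sparse regions in the extracted file.
    redef FileExtract::default_limit_includes_missing = F;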