With Cluster::Node$metrics_port being optional, there is no longer
a need for the extra script. The new rule: if a metrics_port is set,
the node will attempt to listen on it.
Users can still redef Telemetry::metrics_port *after*
base/frameworks/telemetry has been loaded to change the port defined
in cluster-layout.zeek.
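For example, a site's local.zeek might override the port like this (a
minimal sketch; 9911/tcp is an arbitrary example value):

    @load base/frameworks/telemetry

    # Hypothetical override; it takes effect because it runs after the
    # framework (and thus the port from cluster-layout.zeek) was loaded.
    redef Telemetry::metrics_port = 9911/tcp;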
(cherry picked from commit bf9704f339)
This eliminates one place in which we currently need to mirror changes to the
script-land Cluster::Node record. Instead of keeping an exact in-core equivalent, the
Supervisor now treats the data structure as opaque, and stores the whole cluster
table as a JSON string.
We may replace the script-layer Supervisor::ClusterEndpoint in the future,
using Cluster::Node directly. But that's a more invasive change that will
affect how people invoke Supervisor::create() and similar functions.
Relying on JSON for serialization has the side effect of removing the
Supervisor's earlier quirk of using 0/tcp, rather than 0/unknown, to
indicate unused ports in the Supervisor::ClusterEndpoint record.
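A minimal sketch of the script-layer usage, with hypothetical node
names and ports; an unused port now surfaces as 0/unknown rather than
0/tcp:

    event zeek_init()
        {
        if ( ! Supervisor::is_supervisor() )
            return;

        # Workers don't listen here, so their port slot stays unused.
        local cluster: table[string] of Supervisor::ClusterEndpoint = table(
            ["manager"] = Supervisor::ClusterEndpoint(
                $role=Supervisor::MANAGER, $host=127.0.0.1, $p=9999/tcp),
            ["worker-1"] = Supervisor::ClusterEndpoint(
                $role=Supervisor::WORKER, $host=127.0.0.1, $p=0/unknown));

        for ( name in cluster )
            Supervisor::create(Supervisor::NodeConfig($name=name, $cluster=cluster));
        }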
This adds a new lookup_connection_analyzer_id() BiF that finds, for a given
connection, the numeric identifier of a given protocol analyzer (as assigned
by the underlying Analyzer::id_counter).
This enables users to call disable_analyzer(), which requires a numeric
analyzer ID, outside of analyzer_confirmation_info and
analyzer_violation_info event handlers.
Like traditional file analyzers, we now query Zeek's
`get_file_handle()` event for handles when a connection begins
analyzing an embedded file. That means that Spicy-side protocol
analyzers that are forwarding data into file analysis now need to call
Zeek's `Files::register_protocol()` and provide a callback for
computing file handles. If that's missing, Zeek will now issue a
warning. This aligns with the requirements of Zeek's traditional
protocol analyzers. (If the EVT file defines a protocol analyzer to `replace`
an existing one, that one's `register_protocol()` will be consulted.)
Because Zeek's `get_file_handle()` event requires a current
connection, a Spicy file analyzer that isn't directly part of a
connection context (e.g., with nested files) continues to use a
hardcoded, built-in file handle. Scriptland won't be consulted in
that case, just as before.
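A minimal sketch of such a registration, assuming a hypothetical
Spicy-based analyzer exposed to Zeek as ANALYZER_MY_PROTO:

    event zeek_init()
        {
        Files::register_protocol(Analyzer::ANALYZER_MY_PROTO,
            [$get_file_handle = function(c: connection, is_orig: bool): string
                {
                # Derive a handle that's stable and unique per file stream.
                return cat(Analyzer::ANALYZER_MY_PROTO, c$start_time,
                           c$id, is_orig);
                }]);
        }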
Closes #3440.
* topic/christian/localversion:
Parse and store localversion string
Remove commented-out code
Check ZEEK_VERSION_LOCAL for dashes
Update version string btests for localversion
Modify version parsing for localversion
Update version used by spicyz
Update build script
Support for configurable localversion
* origin/topic/awelzel/move-iso-9660-sig-to-policy:
signatures/iso-9660: Add \x01 suffix to CD001
test-all-policy: Do not load iso-9660.zeek
signatures: Move ISO 9660 signature to policy
The previous "fix" caused significant performance degradation without
the signature ever having a chance to trigger. Moving the signature to
policy seems the best compromise; the alternative would have been
removing it outright.
* origin/topic/johanna/netcontrol-updates:
Netcontrol: add rule_added_policy
Netcontrol: more logging in catch-and-release
Netcontrol: allow supplying explicit name to Debug plugin
This introduces a new hook into the Intel::seen() function that allows
users to directly interact with the result of a find() call via external
scripts.
This should solve the use case brought up by @chrisanag1985 in
discussion #3256: recording and acting on "no intel match found".
@Canon88 was recently asking on Slack about enabling HTTP logging for a
given connection only when an Intel match occurred, and found that the
Intel::match() event is raised only on the manager. The
Intel::match_remote() event might serve as a workaround, but it possibly
runs a bit too late, and it is also just an internal "detail" event that
might not be stable.
Another internal use case revolved around enabling packet recording
based on Intel matches, which necessarily needs to happen on the worker
where the match occurred. The proposed workaround is similar to the above,
using Intel::match_remote().
This hook also provides an opportunity to rate-limit heavy-hitter intel
items locally on the worker nodes, or even to replace the event approach
currently used with a customized one.
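For illustration (a minimal sketch, assuming the new hook is
Intel::seen_policy with a (Seen, bool) signature, which this entry
doesn't spell out): veto handling of a hypothetical heavy-hitter
indicator right on the node that saw it.

    hook Intel::seen_policy(s: Intel::Seen, found: bool)
        {
        # 198.51.100.1 is a made-up heavy-hitter indicator; breaking
        # from the hook stops further processing of this sighting.
        if ( found && s?$indicator && s$indicator == "198.51.100.1" )
            break;
        }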
* origin/topic/awelzel/3424-http-upgrade-websocket-v1:
websocket: Handle breaking from WebSocket::configure_analyzer()
websocket: Address review feedback for BinPac code
fuzzers: Add WebSocket fuzzer
websocket: Fix crash for fragmented messages
websocket: Verify Sec-WebSocket-Key/Accept headers and review feedback
btest/websocket: Test for coalesced reply-ping
HTTP/CONNECT: Also weird on extra data in reply
HTTP/Upgrade: Weird when more data is available
ContentLine: Add GetDeliverStreamRemainingLength() accessor
HTTP: Drain event queue after instantiating upgrade analyzer
btest/http: Explain switching-protocols test change as comment
WebSocket: Introduce new analyzer and log
HTTP: Add mechanism to instantiate Upgrade analyzer
OSS-Fuzz managed to produce a MIME multipart message construction with
thousands of nested entities (or that's what Zeek makes of it, anyhow).
Prevent such deep analysis by capping the nesting depth at 100, avoiding
unnecessary resource usage. A new weird named exceeded_mime_max_depth is
reported when this limit is reached.
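Scripts can pick up the new weird as usual, e.g. via conn_weird (a
minimal sketch, assuming the four-argument signature of recent Zeek
versions):

    event conn_weird(name: string, c: connection, addl: string, source: string)
        {
        if ( name == "exceeded_mime_max_depth" )
            print fmt("excessively nested MIME entities from %s", c$id$orig_h);
        }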
This change reduces the runtime of the OSS-Fuzz reproducer from ~45 seconds
to ~2.5 seconds.
The test PCAP was produced from a Python script using the email package
and sending the rendered version via POST to an HTTP server.
Closes #208