The Spicy-based SSL analyzer was, so far, more permissive with the
record layer versions that it would accept.
This change brings the parsing of record layer versions in line with the
binpac-based analyzer. The behavioral difference was discovered via a
test that changed with the recent dpd.log changes.
* origin/topic/awelzel/1474-cluster-telemetry:
btest/cluster/telemetry: Add smoke testing for telemetry
cluster/WebSocket: Fetch X-Application-Name header as app label
cluster/WebSocket: Pass X-Application-Name to dispatcher
broker/WebSocketShim: Add calls to Telemetry hooks
cluster/WebSocket: Configure telemetry for WebSocket backends
broker: Hook up generic cluster telemetry
cluster: Introduce telemetry component
Includes one bug fix, removing static from a variable that shouldn't be
static.
Using a label named "endpoint" is not intuitive and requires explaining
to users that it's really just the Cluster::node value. Change the label
to "node", so that no explaining is needed.
This probably breaks some existing users of the Prometheus metrics, but
after looking more at metrics recently, "endpoint" really is a thorn in
my side.
This commit fixes the parsing of the data field in the SSL analyzer. So
far, this field contained two extra bytes at the beginning, which encode
the length of the following data.
Now, the data passed to the event contains only the actual value of the
session ticket.
The Spicy analyzer already handles this field correctly and does not
need to be updated. A test that uses the event and exhibited the bug was
added.
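A minimal handler illustrating the fixed behavior, assuming the
documented ssl_session_ticket_handshake signature:

    event ssl_session_ticket_handshake(c: connection, ticket_lifetime_hint: count, ticket: string)
        {
        # After the fix, |ticket| is the ticket value itself, without
        # the two length bytes that previously leaked into it.
        print fmt("session ticket: %d bytes, lifetime hint %d",
                  |ticket|, ticket_lifetime_hint);
        }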
* origin/topic/awelzel/4586-zeromq-ipv6:
cluster/zeromq: Short-circuit DoPublishLogWrite() when not initialized
cluster/zeromq: Hook up and enable IPV6 by default
cluster/zeromq/connect: Make failures fatal
cluster/zeromq: Move log_push creation to DoInit()
ZeroMQ's IPv6 support isn't enabled by default, resulting in
"No such device" errors when attempting to listen on an IPv6
address. This change adds an ipv6 option to the ZeroMQ module
and enables it by default. It further adds a test configuring
everything to listen on IPv6 ::1, as well as one test to provoke
the original error. This also regularizes some error messages.
The addr_to_uri() calls weren't actually needed, but they apparently do
not hurt and the result is easier on the eyes, so use them :-)
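Two illustrative knobs (the ipv6 name follows this change; the endpoint
option name and address are assumptions about the ZeroMQ backend's
configuration surface):

    # IPv6 is now enabled by default; opt out like this:
    redef Cluster::Backend::ZeroMQ::ipv6 = F;

    # Or, with the new default, listen on an IPv6 address directly:
    redef Cluster::Backend::ZeroMQ::listen_xpub_endpoint = "tcp://[::1]:5556";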
* origin/topic/johanna/default-canonifier-only-first-timestamp:
Default canonifier change to only remove first timestamp in line
Align SMB timestamp calculation between operating systems
The schema of cluster WebSocket error messages deviated from the
existing one in Broker, which broke seamless migration from the Broker
WebSocket bindings.
This patch aligns the serialization in cluster with the one in Broker.
This is technically a breaking change of the cluster schema, but since
it never worked as documented and is still experimental, this is
probably fine.
Closes #4594.
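For reference, a Broker-style WebSocket error frame looks roughly like
this (the values are illustrative):

    {
      "type": "error",
      "code": "deserialization_failed",
      "context": "human-readable description of what went wrong"
    }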
In the past, we used a default canonifier that removes everything
looking like a timestamp from log files. The goal of this is to prevent
logs from changing, e.g., due to local system times ending up in log
files.
This, however, also has the side effect of removing information that is
parsed from protocols and probably should be part of our tests.
There is at least one test (1999 certificates) where the entire test
output was essentially removed by the canonifier.
GH-4521 was similarly masked by this.
This commit changes the default canonifier, so that only the first
timestamp in a line is removed. This should skip timestamps that are
likely to change while keeping timestamps that are parsed
from protocol information.
A pass has been made over the tests, with some additional adjustments
for cases which require the old canonifier.
There are some cases in which we probably could go further and not
remove timestamps at all - that, however, seems like a follow-up
project.
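Conceptually, the change amounts to dropping the global flag from the
canonifier's substitution so that only the first match per line is
rewritten; a sed sketch with an illustrative timestamp pattern:

    # old: replace every timestamp in a line
    sed 's/1[0-9]\{9\}\.[0-9]\{1,6\}/XXXXXXXXXX.XXXXXX/g'
    # new: replace only the first timestamp in a line
    sed 's/1[0-9]\{9\}\.[0-9]\{1,6\}/XXXXXXXXXX.XXXXXX/'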
Now the not_before/not_after fields output GMT-based times.
Also adds a new btest diff canonifier which only removes the first
timestamp in a line.
Fixes GH-4521
When looking up the postprocessor function from shadow files,
id::find_func() would abort() if the function wasn't available, instead
of falling back to the default postprocessor.
Fix this by using id::find() and explicitly checking that the identifier
exists and is a function, adding a strict type check while at it.
This issue was tickled by loading the json-streaming-logs package,
Zeek creating shadow files containing its custom postprocessor function,
then restarting Zeek without the package loaded.
Closes #4562
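A rough C++ sketch of the approach, not the verbatim patch (the
surrounding variable names are hypothetical):

    // Look up the identifier without aborting when it is missing.
    auto id = zeek::id::find(postprocessor_name);

    // Fall back to the default postprocessor unless the identifier
    // exists and actually names a function.
    if ( ! id || id->GetType()->Tag() != zeek::TYPE_FUNC )
        return default_postprocessor;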
After a public discussion and also chatting with Vern directly, disable
the ZAM bif tracking test to avoid an update every time new functions
are added. Usually these aren't performance critical and the default
characterization is fine. If they are performance critical, then Vern is
currently best positioned to properly integrate an optimized version.
The response to BDAT LAST was never recognized, resulting in BDAT LAST
commands not being logged in a timely fashion and receiving the wrong
status.
This likely doesn't handle complex pipelining scenarios, but it fixes
smtp_reply() mishandling responses to simple BDAT commands.
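For illustration, a synthetic exchange whose reply is now matched up
with the command:

    C: BDAT 86 LAST
    C: <86 bytes of message data>
    S: 250 2.0.0 OK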
Thanks @cccs-jsjm for the report!
Closes #4522
* origin/topic/johanna/new-style-analyzer-log:
NEWS entries for analyzer log changes
Move detect-protocol from frameworks/dpd to frameworks/analyzer
Introduce new c$failed_analyzers field
Settle on analyzer.log for the dpd.log replacement
dpd->analyzer.log change - rename files
Analyzer failure logging: tweaks and test fixes
Introduce analyzer-failed.log, as a replacement for dpd.log
Rename analyzer.log to analyzer-debug.log; move to policy
Move dpd.log to policy script
Currently, coverage/bare-mode-errors is one of the slowest tests in the
entire test suite. This is caused by the fact that it launches Zeek
once for every script that we ship, sequentially.
This commit changes the test to use xargs to spawn 20 parallel
processes.
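The idea, sketched rather than quoted from the test:

    # Launch one bare-mode Zeek per shipped script, 20 at a time.
    find scripts -name '*.zeek' -print0 | xargs -0 -n 1 -P 20 zeek -b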
detect-protocol.zeek was the last non-deprecated script left in
policy/frameworks/dpd. It was moved to policy/frameworks/analyzer. A
stub that loads the script from the new location and emits a
deprecation warning was added at the old location.
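The stub presumably looks something like this (the exact deprecation
message is an assumption):

    # policy/frameworks/dpd/detect-protocol.zeek
    @deprecated "Load policy/frameworks/analyzer/detect-protocol instead."
    @load frameworks/analyzer/detect-protocol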
This field is used internally to track which analyzers already had a
violation, mostly to prevent duplicate logging.
In the past, c$service_violation was used for a similar purpose;
however, it has slightly different semantics. Where c$failed_analyzers
tracks analyzers that were removed due to a violation,
c$service_violation tracks violations and doesn't care whether an
analyzer was actually removed because of one.
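A sketch of how such a field can be declared and consulted to suppress
duplicates (the element type and the helper name are assumptions):

    redef record connection += {
        # Analyzers removed from this connection due to a violation.
        failed_analyzers: set[string] &default=set();
    };

    # Hypothetical helper: log each failing analyzer at most once.
    function log_failed_analyzer(c: connection, analyzer_name: string)
        {
        if ( analyzer_name in c$failed_analyzers )
            return;
        add c$failed_analyzers[analyzer_name];
        # ... write the analyzer failure log entry here ...
        }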