When switching to ZeroMQ as the cluster backend and experimenting with
zeekctl and WebSocket, replies didn't arrive because Broker::publish()
was used rather than Cluster::publish(). Additionally, add the node name
to the topic on which we reply so that the receiver can figure out which
node sent the reply. It could have been a separate event parameter, but
carrying it in the topic works just fine.
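A rough sketch of the fixed pattern (the request/reply events and the
topic prefix here are made up for illustration; the relevant parts are
Cluster::publish() and Cluster::node in the topic):

global my_request: event(query: string);
global my_reply: event(result: string);

event my_request(query: string)
    {
    # Reply via the cluster backend rather than Broker directly, and put
    # this node's name into the reply topic so the receiver can tell
    # which node answered.
    Cluster::publish(cat("zeek.reply.", Cluster::node), my_reply, "ok");
    }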
While we support initializing records via coercion from an expression
list, e.g.,
local x: X = [$x1=1, $x2=2];
this can sometimes obscure the code for readers, e.g., when assigning to
a value declared and typed elsewhere. The language runtime also incurs
overhead, since instead of just constructing a known type it needs to
check at runtime that the coercion from the expression list is valid;
this can be slower than just writing the more readable code in the first
place, see #4559.
With this patch we use explicit construction, e.g.,
local x = X($x1=1, $x2=2);
ZeroMQ's IPv6 support isn't enabled by default, resulting in
"No such device" errors when attempting to listen on an IPv6
address. This change adds an ipv6 option to the ZeroMQ module
and enables it by default. Further, it adds a test configuring
everything to listen on IPv6 ::1 as well, and one test to provoke
the original error. This also regularizes some error messages.
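For an IPv4-only deployment the knob can be flipped back off; the full
option name below is an assumption, since the change only states that the
ZeroMQ module gained an ipv6 option that defaults to on:

redef Cluster::Backend::ZeroMQ::ipv6 = F;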
The addr_to_uri() calls weren't actually needed, but they apparently do
not hurt and the result is easier on the eyes, so use them :-)
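For reference, addr_to_uri() leaves IPv4 addresses as-is and wraps IPv6
addresses in brackets, which is what makes the resulting endpoint strings
easier to read:

print addr_to_uri(127.0.0.1); # prints 127.0.0.1
print addr_to_uri([::1]);     # prints [::1]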
The cluster is borked if initialization fails, so we may as well just
abort Zeek at that point with a fatal error. There's no real point in
continuing to run.
* origin/topic/johanna/new-style-analyzer-log:
NEWS entries for analyzer log changes
Move detect-protocol from frameworks/dpd to frameworks/analyzer
Introduce new c$failed_analyzers field
Settle on analyzer.log for the dpd.log replacement
dpd->analyzer.log change - rename files
Analyzer failure logging: tweaks and test fixes
Introduce analyzer-failed.log, as a replacement for dpd.log
Rename analyzer.log to analyzer-debug.log; move to policy
Move dpd.log to policy script
detect-protocol.zeek was the last non-deprecated script left in
policy/frameworks/dpd. It was moved to policy/frameworks/analyzer, and a
script that loads it from the new location while emitting a deprecation
warning was added in its place.
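The stub can look roughly like the sketch below; the exact load path and
deprecation message are assumptions rather than quotes from the change:

# policy/frameworks/dpd/detect-protocol.zeek (stub at the old location)
@deprecated "Use frameworks/analyzer/detect-protocol instead."
@load frameworks/analyzer/detect-protocol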
To address review feedback in GH-4362: rename analyzer-failed-log.zeek
to logging.zeek, analyzer-debug-log.zeek to debug-logging.zeek, and
dpd-log.zeek to deprecated-dpd-log.zeek.
Includes the respective test, NEWS, etc. updates.
The main part of this commit consists of changes to tests. A lot of the
tests that previously relied on analyzer.log or dpd.log now use the new
analyzer-failed.log.
I verified all the changes and, as far as I can tell, everything
behaves as it should. This includes the external test baselines.
This change also enables logging of file and packet analyzer failures
to analyzer_failed.log and fixes some small behavior issues.
The analyzer_failed event is no longer raised when the removal of an
analyzer is vetoed.
If an analyzer is no longer active when an analyzer violation is raised,
the analyzer_failed event is now raised nonetheless. This can, e.g.,
happen when an analyzer error occurs at the very end of the connection.
This makes the behavior more similar to what happened in the past, and
also intuitively seems to make sense.
A bug introduced in the failed service logging was fixed.
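The series also adds the per-connection c$failed_analyzers field (see the
merge list above). A minimal sketch of consuming it, assuming the field
holds a set of analyzer names:

event connection_state_remove(c: connection)
    {
    if ( c?$failed_analyzers && |c$failed_analyzers| > 0 )
        print fmt("%s: failed analyzers: %s", c$uid, c$failed_analyzers);
    }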
Analyzer-failed.log is, essentially, the replacement for dpd.log. The
name should make more sense, as it now logs analyzer failures. For
protocol analyzers specifically, these are failures that lead to the
analyzer being disabled.
The current analyzer.log is more useful for debugging than for
operational purposes. Hence it is now disabled by default, moved to a
policy script, and renamed to analyzer-debug.log.
Furthermore, logging of analyzer confirmations and of analyzer disabling
is now enabled by default.
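Opting back into the debug-oriented log then comes down to loading the
policy script; the load path below is assumed from the renames described
earlier:

@load frameworks/analyzer/debug-logging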
This is the first phase of moving from the current dpd log to a more
modern logfile, without some of the weirdnesses that the current dpd log
contains.
Tests will not pass in the current state; this is just splitting out
functionality.
The ZeroMQ heuristic for "ready to publish" is to create a unique and
ephemeral subscription using the XSUB socket and observe it arrive on the
XPUB socket. At this point, visibility into other nodes' subscriptions
is provided.
Due to prefix matching, worker-1's node_topic() also matched worker-10,
worker-11, etc. Suffix the node topic with a `.`. The original implementation
came from NATS, where subjects are separated by `.`.
Adapt nodeid_topic() for consistency.
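A sketch mirroring the fix; the topic prefix and the helper's exact
signature are assumptions, the trailing dot is the point:

function node_topic(name: string): string
    {
    # The trailing "." keeps "worker-1" from prefix-matching "worker-10".
    return cat("zeek.cluster.node.", name, ".");
    }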
It is not safe to use the same socket from different threads, but the
current code used the xsub socket directly from the main thread (to set
up subscriptions) and from the internal thread for polling and reading.
Leverage the PAIR socket that already forwards publish operations to the
internal thread for subscribe and unsubscribe operations as well.
The failure mode was a bit annoying: essentially, closing the context
would hang indefinitely in zmq_ctx_term().
Since the changes to port autoassignment in the preceding commits leverage agent
IP address information, we need to ensure that this information is available at
the time of autoassignment. The controller learns IP addresses from connecting
agents, and previously used that information at deploy time. This moves the
augmentation of the cluster config up to port autoassignment time.
This fixes instances where `zeek:see` was used incorrectly and therefore
was not rendered correctly. All of these instances were found by looking
for `zeek:see` in the generated HTML, where it should no longer be
visible.
I also removed a doc reference to `paraglob_add`, which never existed.
This is a cluster backend implementation using a central XPUB/XSUB proxy
that by default runs on the manager node. Logging is implemented leveraging
PUSH/PULL sockets between logger and other nodes, rather than going
through XPUB/XSUB.
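Enabling the backend from a site's local.zeek is then a single @load; the
exact script path below is an assumption based on where the backend's
policy scripts ship:

@load frameworks/cluster/backend/zeromq/connect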
The test-all-policy-cluster baseline changed: previously, Broker::peer()
would be called from setup-connections.zeek, keeping the IO loop alive.
With the ZeroMQ backend, the IO loop is only alive once Cluster::init()
is called, which no longer happens in this setup.