* origin/topic/johanna/sqlite-pragmas:
Options for SQLite log writer, eliminate duplicate definitions
Test synchronous/journal mode options for SQLite log writer
Added default options for synchronous and journal mode
Support for synchronous and journal_mode
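For illustration, tuning these pragmas from a site script could look roughly
like the following sketch; the option and enum names here are assumptions
based on the commit subjects, so check the SQLite writer documentation of
your Zeek version:

    # Assumed option/enum names, shown for illustration only.
    redef LogSQLite::synchronous = LogSQLite::SQLITE_SYNCHRONOUS_NORMAL;
    redef LogSQLite::journal_mode = LogSQLite::SQLITE_JOURNAL_MODE_WAL;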
* origin/topic/awelzel/pluggable-cluster-backends-part1:
btest: Test Broker::make_event() together with Cluster::publish_hrw()
btest: Add cluster dir, minimal test for enum value
broker: Add shim plugin adding a backend component
zeek-setup: Instantiate backend::manager
cluster: Add to src/CMakeLists.txt
cluster: Add Components and ComponentManager for new components
cluster/Backend: Interface for cluster backends
cluster/Serializer: Interface for event and log serializers
logging: Introduce logging/Types.h
SerialTypes/Field: Allow default construction and add move constructor
DebugLogger: Add cluster debugging stream
plugin: Add component enums for pluggable cluster backends
broker: Pass frame to MakeEvent()
@Sheco reported that in standalone mode, observations from scheduled events
can be missed when the final sumstat epoch runs before they fire. For example,
this easily happens when making sumstat observations within
connection_state_remove(). Delay final epoch processing to zeek_done() instead.
This doesn't deal with the clustered version; that would need something
more elaborate, potentially a mechanism to delay the shutdown of other
cluster nodes until sumstat processing has completed.
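A minimal sketch of the reported pattern, with an illustrative sumstat name
and key; observations made this late previously risked arriving after the
final epoch had already been processed:

    @load base/frameworks/sumstats

    event connection_state_remove(c: connection)
        {
        # Illustrative observation made at connection teardown.
        SumStats::observe("conn.seen",
                          SumStats::Key($host=c$id$orig_h),
                          SumStats::Observation($num=1));
        }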
* topic/christian/telemetry-make-bifs-primary:
Telemetry framework: move BIFs to the primary-bif stage
Minor comment tweaks for init-frameworks-and-bifs.zeek
This stops invoking Telemetry::sync() via a scheduled event and instead
only invokes it on demand. This makes metric collection independent of
network time and lazier, too.
With Prometheus scrape requests being processed on Zeek's main thread
now, we can safely invoke the script layer Telemetry::sync() hook.
Closes #3947
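As an illustrative sketch (not part of the patch), a script-level gauge
updated from the hook now only does its work when metrics are actually
collected, e.g. during a Prometheus scrape; the metric name and the tracked
set are made up:

    @load base/frameworks/telemetry

    global pending: set[addr];

    global pending_gauge = Telemetry::register_gauge_family([
        $prefix="zeek",
        $name="example_pending_items",
        $unit="1",
        $help_text="Number of pending items (illustrative)"]);

    hook Telemetry::sync()
        {
        # Runs on demand now, not on a timer.
        Telemetry::gauge_set(Telemetry::gauge_with(pending_gauge), |pending| * 1.0);
        }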
This moves the Telemetry framework's BIF-defined functionality from the
secondary-BIFs stage to the primary one. That is, this functionality is now
available from the end of init-bare.zeek, not only after the end of
init-frameworks-and-bifs.zeek.
This allows us to use script-layer telemetry in Zeek's own code that gets
pulled in during init-frameworks-and-bifs.
This change splits up the BIF features into functions, constants, and types,
because that's the granularity most workable in Func.cc and NetVar. It also
now defines the Telemetry::MetricsType enum once, rather than redundantly in
both BIFs and the script layer.
Due to subtle load ordering issues between the telemetry and cluster
frameworks, this pushes the redef stage of Telemetry::metrics_port and
address into base/frameworks/telemetry/options.zeek, which is loaded
sufficiently late in init-frameworks-and-bifs.zeek to sidestep those issues.
(Without this, the redef in telemetry/main.zeek does not yet see the
cluster-provided values, and Zeek ends up not listening on these ports.)
The need to add basic Zeek headers in script_opt/ZAM/ZBody.cc as a side-effect
of this is curious, but looks harmless.
Also includes baseline updates for the usual btests and adds a few doc strings.
This avoids the earlier problem of not tracking ports correctly in
scriptland, while still supporting `port` in EVT files and `%port` in
Spicy files.
As it turns out, we already follow the same approach for file analyzers'
MIME types, so I'm applying the same pattern: one event per port, without
further customization points. That keeps the patch pretty small while still
fixing the original issue.
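In script-land terms, the effect is roughly equivalent to a per-port
registration like the sketch below; the analyzer tag is hypothetical, and
the real registrations are derived from the EVT/Spicy port declarations:

    event zeek_init()
        {
        # Hypothetical Spicy analyzer tag; one such registration per declared port.
        Analyzer::register_for_port(Analyzer::ANALYZER_SPICY_MYPROTO, 8080/tcp);
        }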
With Cluster::Node$metrics_port being optional, there's not really
a need for the extra script. New rule: if a metrics_port is set, the
node will attempt to listen on it.
Users can still redef Telemetry::metrics_port *after*
base/frameworks/telemetry was loaded to change the port defined
in cluster-layout.zeek.
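For example (port number illustrative):

    @load base/frameworks/telemetry

    # Override whatever cluster-layout.zeek configured for this node.
    redef Telemetry::metrics_port = 9993/tcp;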
This eliminates one place in which we currently need to mirror changes to the
script-land Cluster::Node record. Instead of keeping an exact in-core equivalent, the
Supervisor now treats the data structure as opaque, and stores the whole cluster
table as a JSON string.
We may replace the script-layer Supervisor::ClusterEndpoint in the future,
using Cluster::Node directly. But that's a more invasive change that will
affect how people invoke Supervisor::create() and similar functions.
Relying on JSON for serialization has the side-effect of removing the
Supervisor's earlier quirk of using 0/tcp, not 0/unknown, to indicate unused
ports in the Supervisor::ClusterEndpoint record.
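A script-land sketch of the idea (the actual change lives in the
Supervisor's C++ code): the cluster table becomes an opaque JSON string
rather than a mirrored in-core structure.

    event zeek_init()
        {
        local cluster: table[string] of Supervisor::ClusterEndpoint = table();
        cluster["worker-1"] = Supervisor::ClusterEndpoint($role=Supervisor::WORKER,
                                                          $host=127.0.0.1,
                                                          $p=0/unknown);
        # From here on, only this plain string needs to be kept around.
        local blob = to_json(cluster);
        print blob;
        }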
If the script layer is able to access the current node's config via
Supervisor::node(), it can handle populating Cluster::nodes. That code
is much more straightforward than an equivalent in-core implementation
(especially with the upcoming change to the cluster table's implementation).
This introduces base/frameworks/cluster/supervisor.zeek and
Cluster::Supervisor::__init_cluster_nodes() for that purpose.
The @load of the Supervisor API in cluster/main.zeek isn't technically
necessary since we already load it explicitly even in init-bare.zeek,
but being explicit seems better.
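A rough sketch of what the new helper boils down to (heavily simplified;
the real logic lives in base/frameworks/cluster/supervisor.zeek and also
maps roles, manager names, and so on):

    function init_cluster_nodes_sketch(): bool
        {
        local config = Supervisor::node();

        if ( |config$cluster| == 0 )
            return F;

        for ( name, ep in config$cluster )
            {
            # Role-to-node-type mapping elided; assume workers for the sketch.
            Cluster::nodes[name] = Cluster::Node($node_type=Cluster::WORKER,
                                                 $ip=ep$host,
                                                 $p=ep$p);
            }

        return T;
        }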
The previous "fix" caused significant performance degradation without the
signature ever having a chance to trigger. Moving it to policy seems the
best compromise; the alternative would be removing it outright.
* origin/topic/johanna/netcontrol-updates:
Netcontrol: add rule_added_policy
Netcontrol: more logging in catch-and-release
Netcontrol: allow supplying explicit name to Debug plugin