This hooks into Telemetry::sync() to update Broker-level metrics tracking the
peerings' send buffer state. We do this in the cluster framework so we can label
the resulting metrics with Zeek cluster node names, not Broker's endpoint IDs.
This implements basic tracking of each peering's current fill level, the maximum
level over a recent time interval (via a new Broker::buffer_stats_reset_interval
tunable, defaulting to 1min), and the number of times a buffer overflows. For
the disconnect policy this is the number of depeerings, but for drop_newest and
drop_oldest it implies the number of messages lost.
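As a sketch of how a site might tune this from the script layer (the
reset-interval tunable is the one introduced here; the overflow-policy
option name is an assumption for illustration, not something this
change defines):

    # Track per-peering maximum buffer levels over a shorter window
    # than the 1min default.
    redef Broker::buffer_stats_reset_interval = 30sec;

    # What "overflow" means depends on the policy in use: depeerings
    # for "disconnect", lost messages for "drop_newest"/"drop_oldest".
    # The option name below is illustrative only.
    redef Broker::peer_overflow_policy = "drop_oldest";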
This doesn't use "proper" telemetry metrics for a few reasons: the
tracking is Broker-specific, so we need to track each peering via its
endpoint_id, while we want the metrics to carry Cluster node name
labels, and those names live in the script layer. Using
broker::endpoint_id directly as keys also means we rely on their
ability to hash in STL containers, which should be fast.
This does not track the buffer levels for Broker "clients" (as opposed to
"peers"), i.e. WebSockets, since we currently don't have a way to name these,
and we don't want to use ephemeral Broker IDs in their telemetry.
To make the stats accessible to the script layer, the Broker manager
(via a new helper class that lives in the event_observer) maintains a
TableVal mapping Broker IDs to a new BrokerPeeringStats record. The
table's members get updated every time the table is requested. This
minimizes new val instantiation and allows the script layer to
customize the BrokerPeeringStats record by redef'ing it, updating
fields, etc. Since we can't use Zeek vals outside the main thread, this
requires some care to ensure all table updates happen only in the
Zeek-side table updater, PeerBufferState::GetPeeringStatsTable().
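A minimal sketch of the script-layer customization this enables;
whether the BrokerPeeringStats record sits in a module, and the field
added below, are assumptions made purely for illustration:

    # Extend the peering-stats record via redef; Zeek does not populate
    # this field, it only shows that script-side customization works.
    redef record BrokerPeeringStats += {
        ## Hypothetical site-local annotation.
        note: string &optional;
    };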
Limit the number of WebSocket events queued from external clients to
dispatcher instances, producing back pressure on the clients when
Zeek's IO loop is overloaded.
Logic to detect this error already existed, but due to enum identifiers
not having a value set, it never triggered before.
Should probably backport this one.
The number of arguments passed to the put() methods was getting fairly
long, with more on the horizon. Changing to a record simplifies things
a bit.
The new Broker API allows us to provide a custom logger to Broker that
pulls previously unattainable context information out of Broker and
puts it into broker.log for Zeek users.
Since Broker log events happen asynchronously, we cache them in a queue
and use a flare to notify Zeek of activity. Furthermore, the Broker
manager now implements the `ProcessFd` function to avoid unnecessary
polling of the new log queue. As a side effect, data stores are polled
less as well.
* origin/topic/johanna/dpd-changes:
DPD: failed services logging alignment
DPD: update test baselines; change options for external tests.
DPD: change policy script for service violation logging; add NEWS
DPD changes - small script fixes and renames.
Update public and private test suite for DPD changes.
Allow to track service violations in conn.log.
Make conn.log service field ordered
DPD: change handling of pre-confirmation violations, remove max_violations
DPD: log analyzers that have confirmed
IRC analyzer - make protocol confirmation more robust.
This also includes some test baseline updates, due to recent QUIC
changes.
* origin/master: (39 commits)
Update doc submodule [nomail] [skip ci]
Bump cluster testsuite to pull in resilience to agent connection timing [skip ci]
IPv6 support for detect-external-names and testcase
Add `skip_resp_host_port_pairs` option.
util/init_random_seed: write_file implies deterministic
external/subdir-btest.cfg: Set OPENSSL_ENABLE_SHA1_SIGNATURES=1
btest/x509_verify: Drop OpenSSL 1.0 hack
testing/btest: Use OPENSSL_ENABLE_SHA1_SIGNATURES
Add ZAM baseline for new scripts.base.protocols.quic.analyzer-confirmations btest
QUIC/decrypt_crypto: Rename all_data to data
QUIC: Confirm before forwarding data to SSL
QUIC: Parse all QUIC packets in a UDP datagram
QUIC: Only slurp till packet end, not till &eod
Remove unused SupervisedNode::InitCluster declaration
Update doc submodule [nomail] [skip ci]
Bump cluster testsuite to pull in updated Prometheus tests
Make enc_part value from kerberos response available to scripts
Management framework: move up addition of agent IPs into deployable cluster configs
Support multiple instances per host addr in auto metrics generation
When auto-generating metrics ports for worker nodes, get them more uniform across instances.
...
This introduces a new option, DPD::track_removed_services_in_connection.
It adds failed services to the services column, prefixed with a
"-".
Alternatively, this commit also adds
policy/protocols/conn/failed-services.zeek, which provides the same
information in a new column in conn.log.
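A minimal sketch of enabling either variant, using the names mentioned
above (the @load path assumes the usual handling of the policy/
prefix):

    # Variant 1: fold failed services into the existing services
    # column, prefixed with "-".
    redef DPD::track_removed_services_in_connection = T;

    # Variant 2: record the same information in a separate conn.log
    # column.
    @load protocols/conn/failed-services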
This commit revamps the handling of analyzer violations that happen
before an analyzer confirms the protocol.
The current state is that an analyzer is disabled after 5 violations, if
it has not been confirmed. If it has been confirmed, it is disabled
after a single violation.
The reason for this is a historic mistake. In Zeek up to version 1.5,
analyzers were unconditionally removed when they raised their first
protocol violation.
When this script was ported to the new layout for Zeek 2.0 in
b4b990cfb5, a logic error was introduced
that caused analyzers to no longer be disabled if they were not
confirmed.
This was the state for ~8 years, until the DPD::max_violations option
was added, which introduced the current approach of disabling
unconfirmed analyzers after 5 violations. Sadly, there is not much
discussion about this change - from my hazy memory, I think this was
discovered during performance tests and the new behavior was added
without checking the history of previous changes.
This commit reinstates the originally intended behavior of DPD. When an
analyzer that has not been confirmed raises a protocol violation, it is
immediately removed from the connection. This also makes a lot of
sense: it allows the analyzer to be in a "tasting" phase at the
beginning of the connection and to error out quickly once it realizes
that it was attached to a connection not containing the desired
protocol.
This change also removes the DPD::max_violations option, as it no
longer serves any purpose. (In practice, the option remains with a
&deprecated warning, but it is no longer used for anything.)
There are relatively minimal test-baseline changes due to this; they
are mostly triggered by the removal of the data structure and by fewer
analyzer errors being thrown, since unconfirmed analyzers are now
disabled after the first error.
This switches the DPD logic to always log analyzers that raised a
protocol confirmation.
The logic is that, once a protocol has been confirmed - and thus there
probably is log output - it does not make sense to later remove it from
the log. It does make sense to somehow flag it as failed - but that
seems like a secondary step.
This fixes instances where `zeek:see` was used incorrectly and thus was
not rendered correctly. All these instances were found by looking for
`zeek:see` in the generated HTML, where it should no longer be visible.
I also removed a doc reference to `paraglob_add`, which never existed.
This caused confusion and I don't think it's very intuitive. If called
with a name that does not exist, this returns without a value, not even
an error value. Changing that seems like it could be more deprecation
work.
* origin/topic/awelzel/move-broker-to-cluster-publish:
netcontrol: Move to Cluster::publish()
openflow: Move to Cluster::publish()
netcontrol/catch-and-release: Move to Cluster::publish()
config: Move to Cluster::publish()
ssl/validate-certs: Move to Cluster::publish()
irc: Move to Cluster::publish()
ftp: Move to Cluster::publish()
dhcp: Move to cluster publish
notice: Move to Cluster::publish()
intel: Move to Cluster::publish()
sumstats: Move to Cluster::publish()
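The migration performed by the commits above is mechanical; a minimal
sketch, with a purely illustrative event and assuming the standard
worker topic:

    event my_event(n: count)
        {
        print "got", n;
        }

    event zeek_init()
        {
        # Previously: Broker::publish(Cluster::worker_topic, my_event, 42);
        # Now routed through the backend-agnostic cluster API:
        Cluster::publish(Cluster::worker_topic, my_event, 42);
        }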
* origin/topic/awelzel/fix-cluster-publish-any:
cluster/Backend: Handle unspecified table/set
cluster: Fix Cluster::publish() of Broker::Data
cluster: Be noisy when attempting to connect to an unknown node
Mostly due to spending too much time wondering why nodes didn't connect
when there was a mismatch between "manager" and "manager-1" in the
cluster layout. Remove manager from test-all-policy-cluster test to
avoid connection attempts in this test.
* topic/christian/disconnect-slow-peers:
Bump cluster testsuite to pull in Broker backpressure tests
Expand documentation of Broker events.
Add sleep() BiF.
Add backpressure disconnect notification to cluster.log and via telemetry
Remove unneeded @loads from base/misc/version.zeek
Add Cluster::nodeid_to_node() helper function
Support re-peering with Broker peers that fall behind
Add Zeek-level configurability of Broker slow-peer disconnects
Bump Broker to pull in disconnect feature and infinite-loop fix
No need to namespace Cluster:: functions in their own namespace