This only changes the script-layer API and keeps the std::string host
in the C++ layer's ServerOptions, mostly because the ixwebsocket library
takes the host as std::string. Also, we may at some point want to
support something scheme-based like unix:///var/run/zeek.sock, and
keeping the host in a string leaves room for that.
Add tests for IPv6, too.
When a node restarts or a peering between two nodes starts over for other
reasons, the internal tracking in the Broker manager resets its state (since
it's per-peering), and with it the message overflow counter. The script layer
was unaware of this and threw errors when trying to reset the corresponding
counter metric down to zero at sync time.
We now track past buffer overflows via a separate epoch table, using Broker peer
ID comparisons to identify new peerings, and set the counter to the sum of past
and current overflows.
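In sketch form, the epoch bookkeeping works like this (Zeek script; all
names hypothetical, the real code lives in the cluster framework):

    # Hypothetical sketch of the epoch bookkeeping. past_overflows
    # accumulates counts from completed peerings, keyed by node name;
    # last_peer_id detects re-peerings via Broker peer ID changes.
    global past_overflows: table[string] of count &default=0;
    global last_peer_id: table[string] of string;
    global last_current: table[string] of count &default=0;

    function overflow_total(node: string, peer_id: string, current: count): count
        {
        if ( node in last_peer_id && last_peer_id[node] != peer_id )
            # New peering: Broker's per-peering counter reset, so fold
            # the previous epoch's final count into the running total.
            past_overflows[node] += last_current[node];

        last_peer_id[node] = peer_id;
        last_current[node] = current;

        # The counter metric gets the lifetime total.
        return past_overflows[node] + current;
        }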
I considered just making this a gauge, but it seems more helpful to be able to
look at a counter to see whether any messages have ever been dropped over the
lifetime of the node process.
As an aside, this now also avoids repeatedly creating the labels vector,
re-using the same one for each metric.
Thanks to @pbcullen for identifying this one!
At every site where we've dug into backpressure disconnects, the default
send-buffer size turned out to be too small. A value of 8192, 4x the old
default, has sufficed at every such site to avoid premature disconnects.
With metrics now available for the send buffers regardless of backpressure
overflow policy, this also switches the default from "disconnect" to
"drop_oldest" (for both peers and websockets), meaning that peerings remain
untouched but the oldest queued message simply gets dropped when a new message
is enqueued. With this policy, the number of backpressure overflows is simply
the count of discarded messages, a number users can watch and tune until it
drops to zero in everyday use. Another benefit is that marginal overflows cause
less message loss than when an entire buffer's worth (plus potentially more
in-flight messages) gets thrown out with a disconnect.
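To opt back into the old behavior, something along these lines should
work (option names as in the Broker framework; a sketch, not a tested
recipe):

    # Restore hard disconnects on send-buffer overflow:
    redef Broker::peer_overflow_policy = "disconnect";
    redef Broker::web_socket_overflow_policy = "disconnect";

    # The send-buffer size remains tunable independently:
    redef Broker::peer_buffer_size = 8192;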
Since a reorg in the Broker library (commit b04195183) that revamped flow
control, which we pulled in with Zeek 5.0, this setting hasn't done
anything. Broker's endpoint::make_subscriber() and
endpoint::make_status_subscriber() take a queue size argument (with a default
value) that simply gets dropped in the eventual subscriber::make() call. See:
b041951835 (diff-5c0d2baa7981caeb6a4080708ddca6ad929746d10c73d66598e46d7c2c03c8deL34-R178)
This hooks into Telemetry::sync() to update Broker-level metrics tracking the
peerings' send buffer state. We do this in the cluster framework so we can label
the resulting metrics with Zeek cluster node names, not Broker's endpoint IDs.
This implements basic tracking of each peering's current fill level, the maximum
level over a recent time interval (via a new Broker::buffer_stats_reset_interval
tunable, defaulting to 1min), and the number of times a buffer overflows. For
the disconnect policy this is the number of depeerings, but for drop_newest and
drop_oldest it implies the number of messages lost.
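For example, to shorten the window for the recent-maximum statistic:

    redef Broker::buffer_stats_reset_interval = 30sec;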
This doesn't use "proper" telemetry metrics for a few reasons: the tracking is
Broker-specific, so we need to key each peering by its endpoint ID, while we
want the metrics labeled with Cluster node names, and those live in the script
layer. Using broker::endpoint_id directly as keys also means we rely on their
ability to hash in STL containers, which should be fast.
This does not track the buffer levels for Broker "clients" (as opposed to
"peers"), i.e. WebSockets, since we currently don't have a way to name these,
and we don't want to use ephemeral Broker IDs in their telemetry.
To make the stats accessible to the script layer, the Broker manager (via a new
helper class that lives in the event_observer) maintains a TableVal mapping
Broker IDs to a new BrokerPeeringStats record. The table's members get updated
every time the table is requested. This minimizes new val instantiation and
allows the script layer to customize the BrokerPeeringStats record by redef'ing
it, updating fields, etc. Since we can't use Zeek vals outside the main thread, this
requires some care so all table updates happen only in the Zeek-side table
updater, PeerBufferState::GetPeeringStatsTable().
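In sketch form, the script-layer side at sync time could look as follows.
The Broker::peering_stats() accessor, the num_queued field, and the
node-name mapping helper are assumptions for illustration only:

    # Sketch: turn per-peering buffer stats into node-labeled metrics.
    global fill_gf = Telemetry::register_gauge_family([
        $prefix="zeek",
        $name="broker_peer_buffer_fill",
        $help_text="Send buffer fill level per peering",
        $label_names=vector("peer")]);

    hook Telemetry::sync()
        {
        # Hypothetical accessor for the TableVal the Broker manager
        # maintains: Broker ID string -> BrokerPeeringStats.
        local stats = Broker::peering_stats();

        for ( broker_id, ps in stats )
            {
            # Hypothetical mapping from Broker ID to cluster node name.
            local node = Cluster::node_name_by_id(broker_id);
            # num_queued is an assumed BrokerPeeringStats field.
            Telemetry::gauge_family_set(fill_gf, vector(node), 1.0 * ps$num_queued);
            }
        }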
Limit the number of WebSocket events queued from external clients to
dispatcher instances, producing backpressure on the clients when
Zeek's IO loop is overloaded.
Logic to detect this error already existed, but because the enum
identifiers had no values set, it never triggered before.
Should probably backport this one.
The argument list of the put() methods was getting fairly long, with more
arguments on the horizon. Changing to a record simplifies things a bit.
The new Broker API allows us to provide a custom logger to Broker that
pulls previously unattainable context information out of Broker and puts
it into broker.log for users of Zeek.
Since Broker log events happen asynchronously, we cache them in a queue
and use a flare to notify Zeek of activity. Furthermore, the Broker
manager now implements the `ProcessFd` function to avoid unnecessary
polling of the new log queue. As a side effect, data stores are polled
less as well.
* origin/topic/johanna/dpd-changes:
DPD: failed services logging alignment
DPD: update test baselines; change options for external tests.
DPD: change policy script for service violation logging; add NEWS
DPD changes - small script fixes and renames.
Update public and private test suite for DPD changes.
Allow to track service violations in conn.log.
Make conn.log service field ordered
DPD: change handling of pre-confirmation violations, remove max_violations
DPD: log analyzers that have confirmed
IRC analyzer - make protocol confirmation more robust.
This also includes some test baseline updates, due to recent QUIC
changes.
* origin/master: (39 commits)
Update doc submodule [nomail] [skip ci]
Bump cluster testsuite to pull in resilience to agent connection timing [skip ci]
IPv6 support for detect-external-names and testcase
Add `skip_resp_host_port_pairs` option.
util/init_random_seed: write_file implies deterministic
external/subdir-btest.cfg: Set OPENSSL_ENABLE_SHA1_SIGNATURES=1
btest/x509_verify: Drop OpenSSL 1.0 hack
testing/btest: Use OPENSSL_ENABLE_SHA1_SIGNATURES
Add ZAM baseline for new scripts.base.protocols.quic.analyzer-confirmations btest
QUIC/decrypt_crypto: Rename all_data to data
QUIC: Confirm before forwarding data to SSL
QUIC: Parse all QUIC packets in a UDP datagram
QUIC: Only slurp till packet end, not till &eod
Remove unused SupervisedNode::InitCluster declaration
Update doc submodule [nomail] [skip ci]
Bump cluster testsuite to pull in updated Prometheus tests
Make enc_part value from kerberos response available to scripts
Management framework: move up addition of agent IPs into deployable cluster configs
Support multiple instances per host addr in auto metrics generation
When auto-generating metrics ports for worker nodes, get them more uniform across instances.
...
This introduces an option, DPD::track_removed_services_in_connection.
It adds failed services to the services column, prefixed with a
"-".
Alternatively, this commit also adds
policy/protocols/conn/failed-services.zeek, which provides the same
information in a new column in conn.log.
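Assuming a boolean option, enabling either variant is a one-liner:

    # Fold failed services into the existing services column:
    redef DPD::track_removed_services_in_connection = T;

    # Or record them in a separate conn.log column instead:
    @load policy/protocols/conn/failed-services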
This commit revamps the handling of analyzer violations that happen
before an analyzer confirms the protocol.
The current state is that an analyzer is disabled after 5 violations, if
it has not been confirmed. If it has been confirmed, it is disabled
after a single violation.
The reason for this is a historic mistake. In Zeek up to version 1.5,
analyzers were unconditionally removed when they raised the first
protocol violation.
When this script was ported to the new layout for Zeek 2.0 in
b4b990cfb5, a logic error was introduced
that caused analyzers to no longer be disabled if they were not
confirmed.
This was the state for ~8 years, until the DPD::max_violations option
was added, which instated the current approach of disabling unconfirmed
analyzers after 5 violations. Sadly, there is not much discussion about
this change - from my hazy memory, I think it was discovered during
performance tests and the new behavior was added without checking
the history of previous changes.
This commit reinstates the originally intended behavior of DPD. When an
analyzer that has not been confirmed raises a protocol violation, it is
immediately removed from the connection. This also makes a lot of sense:
it allows the analyzer to be in a "tasting" phase at the beginning
of the connection and to error out quickly once it realizes that it was
attached to a connection not containing the desired protocol.
This change also removes the DPD::max_violations option, as it no longer
serves any purpose. (In practice, the option remains with a &deprecated
warning, but it is no longer used for anything.)
The test-baseline changes due to this are relatively minimal; they are
mostly triggered by the removal of the data structure and by fewer
analyzer errors being thrown, as unconfirmed analyzers are now disabled
after the first error.
This switches the DPD logic to always log analyzers that raised a
protocol confirmation.
The reasoning is that once a protocol has been confirmed - and thus
there probably is log output - it does not make sense to later remove it
from the log. It does make sense to somehow flag it as failed, but that
seems like a secondary step.
This fixes instances where `zeek:see` was used incorrectly and thus not
rendered correctly. All these instances were found by searching the
generated HTML for `zeek:see` in places where it should no longer be visible.
I also removed a doc reference to `paraglob_add`, which never existed.
This caused confusion, and I don't think it's very intuitive. If called
with a name that does not exist, this returns without a value, not even
an error value. Changing that seems like it would require more
deprecation work.
* origin/topic/awelzel/move-broker-to-cluster-publish:
netcontrol: Move to Cluster::publish()
openflow: Move to Cluster::publish()
netcontrol/catch-and-release: Move to Cluster::publish()
config: Move to Cluster::publish()
ssl/validate-certs: Move to Cluster::publish()
irc: Move to Cluster::publish()
ftp: Move to Cluster::publish()
dhcp: Move to cluster publish
notice: Move to Cluster::publish()
intel: Move to Cluster::publish()
sumstats: Move to Cluster::publish()
* origin/topic/awelzel/fix-cluster-publish-any:
cluster/Backend: Handle unspecified table/set
cluster: Fix Cluster::publish() of Broker::Data
cluster: Be noisy when attempting to connect to an unknown node