* origin/topic/awelzel/4136-cluster-backend-pre-work:
cluster/zeromq: Fix Unsubscribe() bug caused by \x00 prefix
cluster: Add SubscribeCallback support
cluster/zeromq: Fix XSUB threading issues
cluster/zeromq: Use NodeId(), drop my_node_id
cluster/Backend: Pass node_id via Init()
cluster/Backend: Make backend event processing customizable
cluster/broker/Serializer: Fix adaptor to adapter
cluster/Backend: Do not use const std::string_view&
cluster/serializer/broker: Fix handler lookup
broker/Manager: Move name in PublishEvent()
btest/zeromq/test-bootstrap: Fix port parsing
EventHandler: Support operator!=
This allows callers of Subscribe() to pass in a callback that will be invoked
once the subscription is established or has failed to establish. It is the
backend's responsibility to execute the callback on the main thread, either
synchronously or, preferably, asynchronously at a later point by scheduling
a task on the IO main loop.
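As a rough illustration of the shape of such an API (the names and
signatures below are assumptions for illustration, not the actual Zeek
interface):

    #include <functional>
    #include <string>

    // Sketch only: illustrative names, not Zeek's actual cluster API.
    // Invoked on the main thread once the subscription is established
    // (success == true) or has failed to establish (success == false).
    using SubscribeCallback = std::function<void(const std::string& topic, bool success)>;

    class Backend {
    public:
        virtual ~Backend() = default;

        // The backend keeps the callback and later runs it on the main
        // thread, preferably by scheduling a task on the IO main loop.
        virtual bool Subscribe(const std::string& topic, SubscribeCallback cb = nullptr) = 0;
    };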
This turns on ZMQ_XPUB_VERBOSE for ZeroMQ so that notifications about
subscriptions are raised even if the subscription has previously been
observed.
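For reference, this is roughly what enabling the option looks like with
the plain libzmq C API (a sketch, not the backend's exact setup):

    #include <zmq.h>

    // Create an XPUB socket that reports every incoming subscription,
    // even if the same topic has been subscribed before. Without
    // ZMQ_XPUB_VERBOSE, duplicate subscriptions are filtered out and no
    // notification is raised.
    void* MakeVerboseXPub(void* ctx) {
        void* xpub = zmq_socket(ctx, ZMQ_XPUB);
        int verbose = 1;
        zmq_setsockopt(xpub, ZMQ_XPUB_VERBOSE, &verbose, sizeof(verbose));
        return xpub;
    }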
It is not safe to use the same socket from different threads, but the
current code used the xsub socket directly from the main thread (to set up
subscriptions) and from the internal thread for polling and reading.
Leverage the PAIR socket already in use for forwarding publish operations
to the internal thread also for subscribe and unsubscribe.
The failure mode was a bit annoying: essentially, closing the context
would hang indefinitely in zmq_ctx_term().
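The pattern, sketched here with illustrative names rather than the actual
code: the main thread sends (un)subscribe requests over the inproc PAIR
socket, and only the internal thread ever touches the XSUB socket.

    #include <zmq.h>

    #include <cstddef>
    #include <string>

    // Main thread: never touches xsub directly. XSUB expects a message
    // whose first byte is 0x01 for subscribe (0x00 for unsubscribe),
    // followed by the topic; build it as std::string so an embedded
    // \x00 byte cannot truncate it.
    void RequestSubscribe(void* pair_to_thread, const std::string& topic) {
        std::string msg = std::string(1, '\x01') + topic;
        zmq_send(pair_to_thread, msg.data(), msg.size(), 0);
    }

    // Internal thread: owns the xsub socket exclusively; it both polls it
    // and applies forwarded (un)subscribe requests to it.
    void ApplyRequest(void* xsub, const char* buf, size_t len) {
        zmq_send(xsub, buf, len, 0);
    }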
This allows configurability at the code level to decide what to do with
received remote events and events produced by a backend. For now, events
are only enqueued into the process's script layer, but for the WebSocket
interface, the action would be to send out the event on a WebSocket
connection instead.
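A minimal sketch of the idea, with made-up type and function names rather
than the actual interfaces:

    #include <utility>

    struct Event { /* payload omitted */ };

    // Default action: hand a remote event to the process's script layer.
    void EnqueueForScriptLayer(Event e) { /* ... */ }

    // Alternative action: forward the event on a WebSocket connection.
    void SendOverWebSocket(Event e) { /* ... */ }

    class Backend {
    public:
        virtual ~Backend() = default;

        // Hook invoked for every event received or produced by the backend.
        virtual void ProcessEvent(Event e) { EnqueueForScriptLayer(std::move(e)); }
    };

    class WebSocketBackend : public Backend {
    public:
        // The WebSocket interface overrides the hook to send the event out
        // on the connection instead of enqueuing it locally.
        void ProcessEvent(Event e) override { SendOverWebSocket(std::move(e)); }
    };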
This also includes some test baseline updates, due to recent QUIC
changes.
* origin/master: (39 commits)
Update doc submodule [nomail] [skip ci]
Bump cluster testsuite to pull in resilience to agent connection timing [skip ci]
IPv6 support for detect-external-names and testcase
Add `skip_resp_host_port_pairs` option.
util/init_random_seed: write_file implies deterministic
external/subdir-btest.cfg: Set OPENSSL_ENABLE_SHA1_SIGNATURES=1
btest/x509_verify: Drop OpenSSL 1.0 hack
testing/btest: Use OPENSSL_ENABLE_SHA1_SIGNATURES
Add ZAM baseline for new scripts.base.protocols.quic.analyzer-confirmations btest
QUIC/decrypt_crypto: Rename all_data to data
QUIC: Confirm before forwarding data to SSL
QUIC: Parse all QUIC packets in a UDP datagram
QUIC: Only slurp till packet end, not till &eod
Remove unused SupervisedNode::InitCluster declaration
Update doc submodule [nomail] [skip ci]
Bump cluster testsuite to pull in updated Prometheus tests
Make enc_part value from kerberos response available to scripts
Management framework: move up addition of agent IPs into deployable cluster configs
Support multiple instances per host addr in auto metrics generation
When auto-generating metrics ports for worker nodes, get them more uniform across instances.
...
This commit builds on top of GH-4183 and adds IPv6 support for
policy/protocols/dns/detect-external-names.
Additionally, it adds a test case for this file, exercising it with mDNS
queries.
This makes Zeek run in deterministic mode when --save-seeds is used
and reworks the extra indirections in init_random_seed() to make the
control flow easier to follow.
Fixes #4209
* origin/topic/awelzel/4035-btest-openssl-sha1-certs:
external/subdir-btest.cfg: Set OPENSSL_ENABLE_SHA1_SIGNATURES=1
btest/x509_verify: Drop OpenSSL 1.0 hack
testing/btest: Use OPENSSL_ENABLE_SHA1_SIGNATURES
This reverts the call to update-crypto-policies in the Fedora 41 image
and instead sets OPENSSL_ENABLE_SHA1_SIGNATURES in the individual tests.
This allows RHEL 10 or Fedora 41 users to run the tests in question
without needing to fiddle with system settings.
Fixes #4035
* origin/topic/timw/add-note-about-pe-pcap:
Add note to Traces/README about possible malware in pe/pe.trace
Fix formatting of Traces/README entry for modbus-eit.trace
* origin/topic/awelzel/4198-4201-quic-maintenance:
QUIC/decrypt_crypto: Rename all_data to data
QUIC: Confirm before forwarding data to SSL
QUIC: Parse all QUIC packets in a UDP datagram
QUIC: Only slurp till packet end, not till &eod
A UDP datagram may contain multiple QUIC packets, but the parser so far
handled only the very first packet, ignoring any subsequent packets.
Fixes #4198
This doesn't change behavior, but avoids slurping in more data than
needed. A UDP packet can contain multiple QUIC packets, and we'd read
all following ones instead of just the one we're interested in.
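The real parser is written in Spicy; as a language-neutral illustration of
the change (all names here are hypothetical), the loop now consumes each
packet only up to its own length and then continues with the rest of the
datagram:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct QuicPacket { const uint8_t* data; size_t len; };

    // Hypothetical helper: length of the QUIC packet at the start of buf,
    // or 0 if it cannot be determined.
    size_t QuicPacketLength(const uint8_t* buf, size_t len);

    std::vector<QuicPacket> ParseDatagram(const uint8_t* buf, size_t len) {
        std::vector<QuicPacket> packets;
        while ( len > 0 ) {
            size_t plen = QuicPacketLength(buf, len);
            if ( plen == 0 || plen > len )
                break; // malformed or truncated; stop here

            // Take only this packet, not everything up to the end of data.
            packets.push_back({buf, plen});
            buf += plen;
            len -= plen;
        }
        return packets;
    }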
* topic/christian/management-multinode-metrics-ports:
Bump cluster testsuite to pull in updated Prometheus tests
Management framework: move up addition of agent IPs into deployable cluster configs
Support multiple instances per host addr in auto metrics generation
When auto-generating metrics ports for worker nodes, get them more uniform across instances.
Since the changes to port autoassignment in the preceding commits leverage agent
IP address information, we need to ensure that this information is available at
the time of autoassignment. The controller learns IP addresses from connecting
agents, and previously used that information at deploy time. This moves the
augmentation of the cluster config up to port autoassignment time.
- Analyzer: Reduce from 208 bytes to 192 bytes, remove one cache line
- EventGroup: Reduce from 104 bytes to 96 bytes
- Packet: Reduce from 200 bytes to 184 bytes, remove one cache line
- threading::Value: Reduce from 48 bytes to 40 bytes
- ConnTuple: push hole to the end of struct
- TCP_Reassembler: Reduce from 240 bytes to 232 bytes
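Reductions like these typically come from reordering members so that
alignment padding shrinks or moves to the end of the struct; a generic
example of the technique (not one of the actual Zeek types):

    #include <cstdint>

    struct Before {
        uint8_t flag;    // 1 byte + 7 bytes padding so ptr is 8-byte aligned
        void* ptr;       // 8 bytes
        uint32_t count;  // 4 bytes + 4 bytes tail padding
    };                   // sizeof(Before) == 24 on a typical 64-bit ABI

    struct After {
        void* ptr;       // 8 bytes
        uint32_t count;  // 4 bytes
        uint8_t flag;    // 1 byte + 3 bytes tail padding
    };                   // sizeof(After) == 16: 8 bytes saved per object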
The changes are mostly quite minor. The main reasons for the changes are:
* analyzers that were confirmed and later removed now show up in
conn.log.
* a couple of removed lines in analyzer.log, because non-confirmed
analyzers get removed more quickly.
* in some cases there are additional lines in analyzer.log. These are
cases in which an analyzer gets removed due to a violation and then
re-attached because of a later signature match, which replays the
violating content. In all examples that I have so far, this is caused
by the two sides of a connection speaking different protocols. There
probably should be a better way to handle this, but it works.
* new column for failed analyzers in conn.log
This introduces a new option, DPD::track_removed_services_in_connection.
It adds failed services to the services column, prefixed with a
"-".
Alternatively, this commit also adds
policy/protocols/conn/failed-services.zeek, which provides the same
information in a new column in conn.log.