While we support initializing records via coercion from an expression
list, e.g.,
local x: X = [$x1=1, $x2=2];
this can sometimes obscure the code to readers, e.g., when assigning to a
value declared and typed elsewhere. The language runtime carries a
corresponding overhead: instead of just constructing a known type, it has to
check at runtime that the coercion from the expression list is valid. This can
be slower than just writing the more readable code in the first place; see
#4559.
With this patch we use explicit construction, e.g.,
local x = X($x1=1, $x2=2);
At every site where we've dug into backpressure disconnect findings, the
default buffer sizes turned out to be too small. A buffer size of 8192, i.e. 4x
the old default, has sufficed at every site to avoid premature disconnects.
With metrics now available for the send buffers regardless of backpressure
overflow policy, this also switches the default from "disconnect" to
"drop_oldest" (for both peers and websockets), meaning that peerings remain
untouched but the oldest queued message simply gets dropped when a new message
is enqueued. With this policy, the number of backpressure overflows is simply
the count of discarded messages, a number that users can tune toward zero in
everyday use. Another benefit is that marginal overflows cause
less message loss than when an entire buffer's worth (plus potentially more
in-flight messages) gets thrown out with a disconnect.
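For reference, this is roughly how a deployment could tune these knobs from the
script layer; the option names used below (Broker::peer_buffer_size,
Broker::peer_overflow_policy and their WebSocket counterparts) are assumptions
on my part, not taken from the text above:

# Hypothetical tunable names; adjust to the options your Zeek version provides.
redef Broker::peer_buffer_size = 16384;              # raise beyond the new 8192 default
redef Broker::peer_overflow_policy = "drop_oldest";  # keep the peering, drop the oldest queued message
redef Broker::web_socket_buffer_size = 16384;
redef Broker::web_socket_overflow_policy = "drop_oldest";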
Since a reorg in the Broker library (commit b04195183) that revamped flow
control and that we pulled in with Zeek 5.0, this setting hasn't done
anything. Broker's endpoint::make_subscriber() and
endpoint::make_status_subscriber() take a queue size argument (with a default
value) that simply gets dropped in the eventual subscriber::make() call. See:
b041951835 (diff-5c0d2baa7981caeb6a4080708ddca6ad929746d10c73d66598e46d7c2c03c8deL34-R178)
This implements basic tracking of each peering's current fill level, the maximum
level over a recent time interval (via a new Broker::buffer_stats_reset_interval
tunable, defaulting to 1min), and the number of times a buffer overflows. For
the disconnect policy this is the number of depeerings, but for drop_newest and
drop_oldest it implies the number of messages lost.
This doesn't use "proper" telemetry metrics for a few reasons: this tracking is
Broker-specific, so we need to track each peering via endpoint_ids, while we
want the metrics to use Cluster node name labels, and the latter live in the
script layer. Using broker::endpoint_id directly as keys also means we rely on
their ability to hash in STL containers, which should be fast.
This does not track the buffer levels for Broker "clients" (as opposed to
"peers"), i.e. WebSockets, since we currently don't have a way to name these,
and we don't want to use ephemeral Broker IDs in their telemetry.
To make the stats accessible to the script layer, the Broker manager (via a new
helper class that lives in the event_observer) maintains a TableVal mapping
Broker IDs to a new BrokerPeeringStats record. The table's members get updated
every time that table is requested. This minimizes new val instantiation and
allows the script layer to customize the BrokerPeeringStats record by redefining it,
updating fields, etc. Since we can't use Zeek vals outside the main thread, this
requires some care so all table updates happen only in the Zeek-side table
updater, PeerBufferState::GetPeeringStatsTable().
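As a sketch of how script-layer code might consume these stats, assuming a
table-returning accessor named Broker::peering_stats() and a num_overflows
field on BrokerPeeringStats (both names are illustrative, not confirmed by this
text):

event zeek_done()
	{
	# Hypothetical accessor and field names, for illustration only.
	local stats = Broker::peering_stats();

	for ( peer, ps in stats )
		if ( ps$num_overflows > 0 )
			print fmt("peering %s dropped %d messages", peer, ps$num_overflows);
	}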
Logic to detect this error already existed, but due to enum identifiers
not having a value set, it never triggered before.
Should probably backport this one.
The new Broker API allows us to provide a custom logger to Broker that
pulls previously unattainable context information out of Broker and puts
it into broker.log for Zeek users.
Since Broker log events happen asynchronously, we cache them in a queue
and use a flare to notify Zeek of activity. Furthermore, the Broker
manager now implements the `ProcessFd` function to avoid unnecessary
polling of the new log queue. As a side effect, data stores are polled
less as well.
This adds re-peering at the Broker level for peers that Broker decided to
unpeer. We keep this at the Broker level since this behavior is specific to
it (as opposed to other cluster backends).
Includes baseline updates for btests that pick up on the new script's @load.
Add configurability of synchronous and journal_mode for SQLite-backed
Broker data stores. Setting these to synchronous=normal and journal_mode=wal
can significantly improve throughput at the cost of some durability in
the presence of power loss or OS crash. In the context of Zeek, this is
likely more than acceptable.
Additionally, add integrity_check and failure_mode options to support deleting
and re-opening a corrupted SQLite database at store creation.
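As a sketch of what store creation with these options could look like (the
record and enum names below, such as Broker::SQLiteOptions and its fields, are
assumptions; check the actual Broker framework scripts for the real names):

event zeek_init()
	{
	# Hypothetical option/enum names for illustration only.
	local opts = Broker::BackendOptions($sqlite = Broker::SQLiteOptions(
		$path = "mystore.sqlite",
		$synchronous = Broker::SQLITE_SYNCHRONOUS_NORMAL,
		$journal_mode = Broker::SQLITE_JOURNAL_MODE_WAL,
		$integrity_check = T,
		$failure_mode = Broker::SQLITE_FAILURE_MODE_DELETE));

	local store = Broker::create_master("mystore", Broker::SQLITE, opts);
	}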
Closes #2698
This exposes Broker's new WebSocket support in Zeek. To enable it,
call `Broker::listen_websocket()`. Zeek will then start listening on
port 9997 for incoming WebSocket connections.
See the Broker documentation for a description of the message format
expected over these WebSocket connections.
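For example:

event zeek_init()
	{
	# Accept WebSocket clients on the default port (9997, as noted above).
	Broker::listen_websocket();
	}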
Broker::create_master() and Broker::create_clone() now return
a valid value even when there's a failure to open the backend database
(e.g. SQLite filesystem error). In that case, the returned value can
still be passed into other data store operations, but they'll fail
immediately with an error. Broker::is_closed() can now also be used to
determine whether the data store creation calls failed.
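A minimal sketch of checking for such a failure after creating a store:

event zeek_init()
	{
	local h = Broker::create_master("mystore", Broker::SQLITE);

	# h is always a valid handle now; Broker::is_closed() tells us whether
	# opening the SQLite backend actually succeeded.
	if ( Broker::is_closed(h) )
		Reporter::error("failed to open the backend database for 'mystore'");
	}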
This adds a "policy" hook into the logging framework's streams and
filters to replace the existing log filter predicates. The hook
signature is as follows:
hook(rec: any, id: Log::ID, filter: Log::Filter);
The logging manager invokes hooks on each log record. Hooks can veto
log records via a break, and modify them if necessary. Log filters
inherit the stream-level hook, but can override or remove the hook as
needed.
The distribution's existing log streams now come with pre-defined
hooks that users can add handlers to. Their name is standardized as
"log_policy" by convention, with additional suffixes when a module
provides multiple streams. The following adds a handler to the Conn
module's default log policy hook:
hook Conn::log_policy(rec: Conn::Info, id: Log::ID, filter: Log::Filter)
	{
	if ( some_veto_reason(rec) )
		break;
	}
By default, this handler will get invoked for any log filter
associated with the Conn::LOG stream.
The existing predicates are deprecated for removal in 4.1 but continue
to work.
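To apply a policy to a single filter rather than the whole stream, a hook can
also be attached to the filter directly. A sketch, assuming the filter's
hook-valued field is named $policy and the Log::PolicyHook type alias is
available:

global my_filter_policy: Log::PolicyHook;

hook my_filter_policy(rec: Conn::Info, id: Log::ID, filter: Log::Filter)
	{
	# Veto UDP connections for this one filter only.
	if ( rec$proto == udp )
		break;
	}

event zeek_init()
	{
	local f = Log::get_filter(Conn::LOG, "default");
	f$policy = my_filter_policy;
	Log::add_filter(Conn::LOG, f);
	}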
* origin/topic/johanna/table-changes: (26 commits)
TableSync: try to make test more robust & add debug output
Increase timeouts to see if FreeBSD will be happy with this.
Try to make FreeBSD test happy with larger timeout.
TableSync: refactor common functionality into function
TableSync: don't raise &on_change, smaller fixes
TableSync: rename auto_store -> table_store
SyncTables: address feedback part 1 - naming (broker and zeek)
BrokerStore <-> Zeek Tables: cleanup and bug workaround
Zeek Table<->Brokerstore: cleanup, documentation, small fixes
BrokerStore<->Zeek table: adapt to recent Zeek API changes
BrokerStore<->Zeek Tables Fix a few small test failures.
BrokerStore<->Zeek tables: allow setting storage location & tests
BrokerStore<->Zeek tables: &backend works for in-memory stores.
BrokerStore<->Zeek table - introduce &backend attribute
BrokerStore<->Zeek tables: test for clones synchronizing to a master
BrokerStore<->Zeek tables: load persistent tables on startup.
Brokerstore<->Tables: attribute conflicts
Zeek/Brokerstore updates: expiration
Zeek/Brokerstore updates: add test that includes updates from clones
Zeek/Brokerstore updates: first working end-to-end test
...
This commit adds script/C++ documentation and fixes a few loose ends.
It also adds tests for corner cases and massively improves error
messages.
This also introduces type-compatibility checking, along with a new attribute
that lets a user override it if they really know what they are doing. I am not
quite sure if we should really let that stay in - but it can be very convenient
to have this functionality.
One test continues to fail - the expiry test is very flaky. This is, I think,
caused by delays in the Broker store forwarding. I am unsure if we can
actually do anything about that.
The &backend attribute allows for a much more convenient way of
interacting with Broker stores. One no longer needs to create a Broker
store explicitly - instead all of this is done internally.
The current state of this works only partially: persistence should work
fine, but clones are not yet correctly attached.
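For illustration, usage looks roughly like this (a sketch; the backend enum
value passed below is an assumption):

# The table is transparently backed by a Broker store; no explicit
# Broker::create_master()/Broker::create_clone() calls are needed.
global seen_hosts: table[addr] of count &backend=Broker::MEMORY;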
Logs that got sent sparsely or burstily would get buffered for long
periods of time, since the flushing logic only ran on the next log write.
In the worst case, a subsequent log write might never happen, leaving a
log entry buffered indefinitely.
This fix introduces a recurring event/timer that simply flushes all pending
logs at the frequency of Broker::log_batch_interval.
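The flush frequency remains tunable via that interval, e.g.:

# Flush buffered log writes more frequently than the default.
redef Broker::log_batch_interval = 250msec;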
It may generally be better for our default use case: workers may save a few
percent of CPU utilization, since this policy does not have to poll the way
the work-stealing policy does.
This also helps avoid a potential issue with the implementation of
spinlocks used in the work-stealing policy in current CAF versions,
where under certain conditions lock contention causes a thread
to spin for long periods without relinquishing the CPU to others.
For backward compatibility when reading values, we first check
the ZEEK-prefixed value, and if not set, then check the corresponding
BRO-prefixed value.
* All "Broxygen" usages have been replaced in
code, documentation, filenames, etc.
* Sphinx roles/directives like ":bro:see" are now ":zeek:see"
* The "--broxygen" command-line option is now "--zeexygen"
Disabling this option allows one to read pcaps while still initiating Broker
peerings, and to exit automatically once the pcap file has been processed.
The default behavior would otherwise cause Broker::peer() to prevent the
process from shutting down even after the pcap has been fully read.