Now retrieves and processes all N available responses at once instead
of one-by-one-until-empty. The latter may be problematic for two
reasons: (1) it hits the shared queue/mailbox matching logic once per
response instead of once per Process() and (2) looping until empty is
not clearly bounded -- imagine a condition where a thread is trying
to push a large influx of responses into the mailbox while, at the
same time, we're trying to take from it until it's empty.
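A minimal sketch of the batched approach, using a hypothetical mailbox
type rather than Bro's actual classes, to show why taking a snapshot
bounds the work:

    #include <deque>
    #include <iterator>
    #include <mutex>
    #include <vector>

    // Hypothetical stand-in for the shared response mailbox.
    template <class T>
    struct Mailbox {
        std::mutex m;
        std::deque<T> q;

        // Drain the current contents in one locked operation: the
        // locking/matching cost is paid once per Process() instead of
        // once per response, and the batch size is fixed at the moment
        // of the snapshot, so a producer pushing concurrently can't
        // keep the consumer looping indefinitely.
        std::vector<T> TakeAll() {
            std::lock_guard<std::mutex> lock(m);
            std::vector<T> batch(std::make_move_iterator(q.begin()),
                                 std::make_move_iterator(q.end()));
            q.clear();
            return batch;
        }
    };

Process() can then iterate over the returned batch, which stays bounded
even under a heavy concurrent influx.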
* origin/topic/jsiwek/plist-and-event-cleanup:
Add comments to QueueEvent() and ConnectionEvent()
Add methods to queue events without handler existence check
Cleanup/improve PList usage and Event API
Previously, if there was always input available in each Process() call,
the Broker IOSource would never go idle and could completely starve
out a packet IOSource, since it would always report readiness with
a timestamp value of the last known network_time (which prevents
selecting a packet IOSource for processing, as incoming packets
likely have later timestamps).
Added ConnectionEventFast() and QueueEventFast() methods to avoid
redundant event handler existence checks.
It's common practice for a caller to check for event handler
existence before doing all the work of constructing the arguments, so
it's desirable not to have to check for existence again.
E.g. going through ConnectionEvent() means three existence checks:
one you do yourself before calling it, one in ConnectionEvent(), and
another in QueueEvent().
The existence check itself can sometimes be more than a few operations,
as it needs to inspect several flags that determine whether the handler
is enabled, has a local body, has any remote receivers in the old
communication system, or has been flagged for publishing in the new one.
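As a hedged sketch of the resulting pattern (the event, connection,
and argument names are illustrative, and exact signatures may differ):

    if ( my_event )  // the one check the caller does anyway
        {
        // Arguments are only constructed because the handler exists;
        // QueueEventFast() then skips the redundant internal checks
        // that QueueEvent() would repeat.
        val_list vl(2);
        vl.append(conn->BuildConnVal());
        vl.append(new StringVal("details"));
        mgr.QueueEventFast(my_event, std::move(vl));
        }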
The majority of PLists are now created as automatic/stack objects
rather than on the heap, and are initialized either with the known
capacity reserved upfront or directly from an initializer_list (so
there's no wasted slack in the memory allocated for lists containing
a fixed/known number of elements).
Added versions of the ConnectionEvent/QueueEvent methods that take
a val_list by value.
Added a move ctor/assign-operator to PLists to allow passing them
around without having to copy the underlying array of pointers.
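Roughly, this allows usage along these lines (values illustrative,
signatures approximate):

    // Built directly from an initializer_list: exact capacity, no
    // slack, and std::move hands off the underlying pointer array
    // without copying it.
    val_list vl{conn->BuildConnVal(), new StringVal("example")};
    mgr.QueueEventFast(my_event, std::move(vl));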
* origin/topic/jsiwek/val_mgr:
Pre-allocate and re-use Vals for bool, int, count, enum and empty string
Preallocate booleans and small counts
I added a tiny change to CompHash to make sure that nothing messes this
up in the future.
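A simplified stand-in for the preallocation idea (not the actual
ValManager code; names and sizes are illustrative):

    class ValManager {
    public:
        ValManager()
            {
            true_val = new Val(true, TYPE_BOOL);
            false_val = new Val(false, TYPE_BOOL);

            for ( auto i = 0u; i < PREALLOCED_COUNTS; ++i )
                counts[i] = new Val(i, TYPE_COUNT);
            }

        // Shared instances are handed out with an extra reference
        // instead of heap-allocating a fresh Val each time.
        Val* GetBool(bool b) const
            {
            auto rval = b ? true_val : false_val;
            ::Ref(rval);
            return rval;
            }

        Val* GetCount(uint64_t c) const
            {
            if ( c < PREALLOCED_COUNTS )
                {
                ::Ref(counts[c]);
                return counts[c];
                }

            return new Val(c, TYPE_COUNT);
            }

    private:
        static constexpr auto PREALLOCED_COUNTS = 4096u;
        Val* true_val;
        Val* false_val;
        Val* counts[PREALLOCED_COUNTS];
    };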
Disabling this option allows one to read pcaps but still initiate
Broker peerings and automatically exit when done processing the pcap
file. The default behavior would otherwise cause Broker::peer() to
prevent the process from shutting down even after the pcap has been
fully read.
This enables explicit forwarding of events matching a given topic
prefix. Even if a receiving node has an event handler, it will not
be raised if the event was sent along a topic that matches a previous
call to Broker::forward().
Namely these are now removed:
- Broker::relay
- Broker::publish_and_relay
- Cluster::relay_rr
- Cluster::relay_hrw
The idea is that Broker may eventually implement the necessary
routing (plus load-balancing) functionality. For now, code that used
these should "manually" handle and re-publish events as needed.
Now defaults to a max of 4 threads, typically independent of core
count (previously it could go up to a hard cap of 8). Also now allows
controlling this setting via the BRO_BROKER_MAX_THREADS environment
variable.
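For instance, the override might be consulted along these lines
(illustrative only, not the actual implementation):

    #include <cstdlib>

    // Prefer an explicit BRO_BROKER_MAX_THREADS setting; otherwise
    // fall back to the default cap of 4 threads.
    static int MaxThreads()
        {
        if ( auto env = std::getenv("BRO_BROKER_MAX_THREADS") )
            return std::atoi(env);

        return 4;
        }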
In the following example, the republication of "arg" would result in
literally sending it as a Broker::Data record instead of the broker data
that it was already wrapping.
Sender:

    Broker::publish("topic", my_event, "hello")

Receiver:

    event my_event(arg: any)
        {
        Broker::publish("topic", my_event, arg)
        }
The former replaces the pcap vs. live versions of the same tuning
option. If a user does not change these, Bro makes some internal
decisions that may help avoid performance problems on systems with high
core counts: the number of CAF threads is capped at 8 and the maximum
sleep duration for under-utilized threads is increased to 64ms (CAF's
default is 10ms).
These may be used to change the number of scheduler threads that the
underlying CAF library creates. In pcap mode, it's currently hardcoded
to the minimum of 4 threads due to potentially significant overhead
in CAF.
Now manually keeps track of peer count instead of querying Broker for
that information (which would result in waiting upon a blocking request
to the core actor).
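Conceptually, the bookkeeping reduces to a local counter updated from
peering status notifications (a simplified stand-in, not the actual
code):

    #include <cstddef>

    // Querying the count never blocks on Broker's core actor.
    class PeerCounter {
    public:
        void PeerAdded() { ++peers; }
        void PeerLost() { --peers; }
        std::size_t Count() const { return peers; }  // non-blocking
    private:
        std::size_t peers = 0;
    };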
Broker had changed the semantics of remote logging: it sent over the
original Bro record containing the values to be logged, which on the
receiving side would then pass through the logging framework normally,
including triggering filters and events. The old communication system,
however, special-cases logs: it sends already-processed log entries,
just as they go into the log files, and without any receiver-side
filtering etc. This is more efficient, as it short-cuts the processing
path and avoids the more expensive Val serialization. It also
lets the sender determine the specifics of what gets logged (and how).
This commit changes Broker over to now use the same semantics as the
old communication system.
TODOs:
- The new Broker code doesn't have consistent #ifdefs yet.
- Right now, when a new log receiver connects, all existing logs
are broadcast out again to all current clients. That doesn't do
any harm, but is unnecessary. Need to add a way to send the
existing logs to just the new client.