We use the SegmentProfiler in quite a few hot places, and the memset of
the rusage structure (144 bytes here) can show up significantly even if
the segment profiler itself isn't used.
Relates to #3485.
It can be visible overhead to call write() on the underlying pipe of the
EventMgr's flare whenever the first event is enqueued during an IO loop
iteration, particularly in scenarios where there's about one event per
packet for long-lived connections and script-side event processing is fast.
Given the event manager is drained anyhow at the end of the main loop, this
shouldn't be needed. In fact, the EventMgr.Process() method is basically
a stub. The one case where it is needed is when more events are enqueued
during a drain. That, however, can be dealt with by implementing
GetNextTimeout() to return 0.0 when there are more events queued. This way
the main loop's poll timeout is 0.0 and it continues immediately.
This also allows removing some extra code and dropping the recently
introduced InitPostFork() addition: without a pipe, there's no need to
recreate it.
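A minimal sketch of the resulting flow, using hypothetical class and
member names rather than the actual EventMgr/IOSource interface:

  #include <deque>
  #include <functional>
  #include <utility>

  // Hypothetical event queue illustrating the approach: instead of
  // writing to a wake-up pipe when the first event of an iteration is
  // enqueued, report a poll timeout of 0.0 while events are pending so
  // the main loop's poll() returns immediately and drains again.
  class EventQueue {
  public:
    void Enqueue(std::function<void()> ev) { events.push_back(std::move(ev)); }

    // Consulted by the main loop when computing its poll timeout;
    // -1.0 means "no timeout requested by this source".
    double GetNextTimeout() const { return events.empty() ? -1.0 : 0.0; }

    void Drain() {
      // Events enqueued while draining are picked up on the next
      // iteration because GetNextTimeout() then returns 0.0.
      while (!events.empty()) {
        auto ev = std::move(events.front());
        events.pop_front();
        ev();
      }
    }

  private:
    std::deque<std::function<void()>> events;
  };
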
These can be significant if a lot of new connections and/or events
are created for which an existing conn val needs updating, and processing
is otherwise very fast.
By not using `broker::data` directly, we gain a degree of freedom
that allows us to swap out `broker::data` for something else (e.g.,
`broker::variant`) in the future. It also helps keep Broker types
"local" to the Broker manager and gives us a nicer interface.
Also replaces uses of `broker::expected` with `std::optional`. While an
`expected` can carry additional information as to why a value is not
present, nothing in Zeek ever cared about that. Hence, using
`std::optional` removes an unnecessary dependency on a Broker detail
while also being more efficient (no extra heap allocation when no value
is present).
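As a hedged illustration of the pattern (hypothetical function and
types, not the actual Broker manager interface), a manager-level
accessor can hand callers a plain std::optional instead of exposing
broker::expected:

  #include <optional>
  #include <string>

  // Stand-in for a response whose payload would be a broker::data (or,
  // later, broker::variant) kept internal to the manager.
  struct StoreResponse {
    bool found = false;
    std::string value;
  };

  // Hypothetical accessor: callers only learn whether a value is
  // present; the "why not" that broker::expected carries was never
  // used anywhere in Zeek.
  std::optional<std::string> LookupValue(const StoreResponse& resp) {
    if (!resp.found)
      return std::nullopt; // no error payload, no extra heap allocation

    return resp.value;
  }
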
* origin/topic/awelzel/log-write-delay-3:
logging: ref() to record_ref() renaming
logging: Fix typos from review
logging/Manager: Make LogDelayExpiredTimer an implementation detail
logging/WriteToFilters: Use range-based for loop
testing/btest: Log::delay() from JavaScript
NEWS: Entry for delayed log writes
Bump doc submodule to branch
logging: Do not keep delay state persistent
logging: delay documentation polishing
logging: Better error messages for invalid Log::delay() calls
logging/Manager: Implement DelayTokenType as an actual opaque
logging: Implement get_delay_queue_size()
logging: Introduce Log::delay() and Log::delay_finish()
logging/Manager: zeek::detail'ify
logging/Manager: Split Write()
Timer: Add LOG_DELAY_EXPIRE timer type
Ascii: Remove extra include
The only reason this was a private component of Manager was to access
the Stream's function. Use a generic callback and a lambda to avoid
that exposure.
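A minimal sketch of that decoupling, with hypothetical names rather
than the actual timer and Manager classes: the timer only stores a
generic callback, and its creator installs a lambda capturing whatever
state it needs:

  #include <functional>
  #include <iostream>
  #include <utility>

  // Hypothetical expiration timer: it needs no access to the Manager
  // or the Stream, it just invokes a generic callback when dispatched.
  class ExpireTimer {
  public:
    using Callback = std::function<void(double /* expire time */)>;

    explicit ExpireTimer(Callback cb) : callback(std::move(cb)) {}

    void Dispatch(double t) { callback(t); }

  private:
    Callback callback;
  };

  int main() {
    int stream_state = 42; // stands in for per-stream state

    // The lambda captures the state it needs, so the timer class never
    // sees the Stream's internals.
    ExpireTimer timer([&stream_state](double t) {
      std::cout << "expired at " << t << " for stream " << stream_state << "\n";
    });

    timer.Dispatch(1234.5);
  }
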
With a bit of tweaking in the JavaScript plugin to support opaque types, this
will allow the delay functionality to work there, too.
Making the LogDelayToken an actual opaque seems reasonable as well; it's
not meant to be inspected by users.
This is a verbose, opinionated and fairly restrictive version of the log delay idea.
Main drivers are explicitness, foot-gun avoidance and implementation simplicity.
Calling the new Log::delay() function is only allowed within the execution
of a Log::log_stream_policy() hook for the currently active log write.
Conceptually, the delay is placed between the execution of the global stream
policy hook and the individual filter policy hooks. A post-delay callback
can be registered with every Log::delay() invocation. Post-delay callbacks
can (1) modify a log record as they see fit, (2) veto the forwarding of the
log record to the log filters, and (3) extend the delay duration by calling
Log::delay() again. The last point allows delaying a record for an indefinite
amount of time, rather than a fixed maximum amount. This should be rare and
is therefore explicit.
Log::delay() increases an internal reference count and returns an opaque
token value to be passed to Log::delay_finish() to release a delay reference.
Once all references are released, the delay completes and the record is
forwarded to all filters attached to the stream.
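A hedged sketch of that reference-count behavior (hypothetical types,
not the logging Manager's actual implementation):

  #include <cstdint>
  #include <functional>
  #include <iostream>
  #include <utility>

  // Hypothetical per-write delay state: every Delay() call bumps a
  // reference count and hands out an opaque token; once every token
  // has been released via DelayFinish(), the record is forwarded to
  // the stream's filters.
  class DelayedWrite {
  public:
    using Token = std::uint64_t;

    explicit DelayedWrite(std::function<void()> forward)
      : forward_to_filters(std::move(forward)) {}

    Token Delay() {
      ++refs;
      return next_token++; // opaque handle, not meant to be inspected
    }

    void DelayFinish(Token /* token */) {
      if (refs > 0 && --refs == 0)
        forward_to_filters(); // last delay reference released
    }

  private:
    std::function<void()> forward_to_filters;
    std::uint64_t refs = 0;
    Token next_token = 1;
  };

  int main() {
    DelayedWrite w([] { std::cout << "record forwarded to filters\n"; });

    auto t1 = w.Delay();
    auto t2 = w.Delay();

    w.DelayFinish(t1); // one reference remains, nothing happens yet
    w.DelayFinish(t2); // all references released: forward the record
  }
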
This functionality separates Log::log_stream_policy() and individual filter
policy hooks. One consequence is that a common use case of filter policy hooks,
removing unproductive log records, may run after a record has been delayed. Users
can lift their filtering logic to the stream level (or replicate the condition
before the delay decision). The main motivation here is that deciding on a
stream-level delay in per-filter hooks is too late. Attaching multiple filters
to a stream can additionally result in hard-to-understand behavior.
On the flip side, filter policy hooks are guaranteed to run after the delay
and can be used for further mangling or filtering of a delayed record.
If we delay in the stream policy hook, we'll need to resume writing
to the attached filters later on. Prepare for that by splitting out
the filter processing.
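A compact sketch of the split, using illustrative names rather than the
actual logging::Manager methods:

  #include <iostream>
  #include <string>

  struct Record { std::string msg; };

  // Forwarding to the attached filters lives in its own function ...
  bool WriteToFilters(const Record& rec) {
    std::cout << "forwarding to filters: " << rec.msg << "\n";
    return true;
  }

  // ... so the front half of a write can stop early when the stream
  // policy hook decided to delay the record.
  bool Write(const Record& rec, bool delayed) {
    // (the stream policy hook ran here and did not veto)
    if (delayed)
      return true; // WriteToFilters() runs once the delay completes

    return WriteToFilters(rec);
  }

  int main() {
    Write({"immediate write"}, false);

    Record pending{"delayed write"};
    Write(pending, true);
    WriteToFilters(pending); // invoked later, when the delay finishes
  }
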
Contains the following fixes:
2da4abe Types: Add support for opaque types
1f1093f Types: Cast internal field to v8::Value
67e225c Plugin: Avoid creating Exprs at runtime
* origin/topic/timw/copy-instead-of-move:
Add some uses of std::move in constructors and simple functions for pass-by-value arguments
Avoid creating a few temporary values to avoid copy operations
Change function return types to more concise types where possible
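For context on the by-value-plus-move pattern referenced in the first
item, a generic C++ sketch (not code from the branch):

  #include <string>
  #include <utility>
  #include <vector>

  class Tag {
  public:
    // Take the argument by value and move it into the member: callers
    // passing an lvalue pay one copy, callers passing a temporary pay
    // only moves, and no accidental extra copy happens inside.
    explicit Tag(std::string name) : name_(std::move(name)) {}

    const std::string& Name() const { return name_; }

  private:
    std::string name_;
  };

  int main() {
    std::string n = "http";
    Tag a(n);                     // copy into the parameter, then a move
    Tag b(std::string("dns"));    // the temporary is moved, no copy

    std::vector<Tag> tags;
    tags.push_back(std::move(a)); // moving avoids copying the stored string
    tags.push_back(std::move(b));
  }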