The previous "fix" caused significant performance degradation without
the signature ever having a chance to trigger. Moving it to policy
seems like the best compromise, the alternative being to remove it outright.
* origin/topic/johanna/netcontrol-updates:
Netcontrol: add rule_added_policy
Netcontrol: more logging in catch-and-release
Netcontrol: allow supplying explicit name to Debug plugin
This requires pool creation to spell out a spec explicitly, which the only code
using these types already does. There's no reason for pools to automatically
refer to proxies.
rule_added_policy allows the modification of rules just after they have
been added. This enables some more complex features - like changing
rule states depending on their insertion in other plugins.
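A rough sketch of what a handler could look like (the hook's exact
parameter list is an assumption based on the description above):

    hook NetControl::rule_added_policy(r: NetControl::Rule, p: NetControl::PluginState,
                                       exists: bool, msg: string)
        {
        if ( exists )
            # Hypothetical tweak: give rules that already existed a shorter lifetime.
            r$expire = 30sec;
        }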
Catch-and-release logs now include the plugin that is responsible for an
action. Furthermore, the catch-and-release log also includes instances
where a rule already existed, and where an error occurred during an
operation.
This change extends the arguments of NetControl::create_debug and
allows specifying an optional name argument, which is used instead of
the default-generated name.
This is helpful when one wants to attach several debug plugins to verify
behavior.
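A rough sketch of the intended usage (the name and position of the new
argument are assumptions):

    event NetControl::init()
        {
        # Attach two debug plugins with explicit, distinct names instead of
        # the default-generated ones.
        local debug_a = NetControl::create_debug(T, "debug-a");
        local debug_b = NetControl::create_debug(T, "debug-b");
        NetControl::activate(debug_a, 0);
        NetControl::activate(debug_b, 10);
        }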
This introduces a new hook into the Intel::seen() function that allows
users to directly interact with the result of a find() call via external
scripts.
This should solve the use-case brought up by @chrisanag1985 in
discussion #3256: Recording and acting on "no intel match found".
@Canon88 was recently asking on Slack about enabling HTTP logging for a
given connection only when an Intel match occurred and found that the
Intel::match() event would only occur on the manager. The
Intel::match_remote() event might be a workaround, but it possibly runs
a bit too late, and it's also just an internal "detail" event that might
not be stable.
Another internal use case revolved around enabling packet recording
based on Intel matches which necessarily needs to happen on the worker
where the match happened. The proposed workaround is similar to the above
using Intel::match_remote().
This hook also provides an opportunity to rate-limit heavy-hitter intel
items locally on the worker nodes, or even to replace the event approach
currently used with a customized one.
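A rough sketch of the "no intel match found" use-case on a worker (the
hook name Intel::seen_policy and its parameters are assumptions based on
this description):

    hook Intel::seen_policy(s: Intel::Seen, found: bool)
        {
        if ( ! found && s?$indicator )
            # Hypothetical reaction to a lookup without a match, running
            # directly on the node that performed the find() call.
            Reporter::info(fmt("no intel match for %s", s$indicator));
        }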
With a bit of tweaking in the JavaScript plugin to support opaque types, this
will allow the delay functionality to work there, too.
Making the LogDelayToken an actual opaque seems reasonable, too. It's not
supposed to be inspected by users.
This is a verbose, opinionated and fairly restrictive version of the log delay idea.
Main drivers are explicitness, foot-gun avoidance and implementation simplicity.
Calling the new Log::delay() function is only allowed within the execution
of a Log::log_stream_policy() hook for the currently active log write.
Conceptually, the delay is placed between the execution of the global stream
policy hook and the individual filter policy hooks. A post delay callback
can be registered with every Log::delay() invocation. Post delay callbacks
can (1) modify a log record as they see fit, (2) veto the forwarding of the
log record to the log filters and (3) extend the delay duration by calling
Log::delay() again. The last point allows delaying a record for an
indefinite amount of time, rather than for a fixed maximum amount. This
should be rare and is therefore explicit.
Log::delay() increases an internal reference count and returns an opaque
token value to be passed to Log::delay_finish() to release a delay reference.
Once all references are released, the delay completes and the record is
forwarded to all filters attached to the stream.
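A rough sketch of the flow described above (the exact signatures of
Log::delay(), Log::delay_finish() and the post-delay callback are
assumptions):

    hook Log::log_stream_policy(rec: any, id: Log::ID)
        {
        if ( id != Conn::LOG )
            return;

        # Delay this record; the post-delay callback may modify the record,
        # veto its forwarding, or extend the delay by calling Log::delay() again.
        local token = Log::delay(id, rec, function(rec: any, id: Log::ID): bool
            {
            return T;  # returning F would veto forwarding to the filters
            });

        # Release the reference once the data we were waiting for is available.
        # Shown inline here for brevity; in practice this would typically happen
        # later, e.g. from an event handler or a when() body.
        Log::delay_finish(id, rec, token);
        }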
This functionality separates Log::log_stream_policy() and individual filter
policy hooks. One consequence is that a common use-case of filter policy hooks,
removing unproductive log records, may run after a record was delayed. Users
can lift their filtering logic to the stream level (or replicate the condition
before the delay decision). The main motivation here is that deciding on a
stream-level delay in per-filter hooks is too late. Attaching multiple filters
to a stream can additionally result in hard-to-understand behavior.
On the flip side, filter policy hooks are guaranteed to run after the delay
and can be used for further mangling or filtering of a delayed record.
There was some confusion around which value was used subsequent to a strip(),
but sub() not respecting anchors made it appear to work. The `\(?` part
also seems redundant.
This field isn't required by a worker and it's certainly not used by a
worker to listen on that specific interface. It also isn't required to
be set consistently, and its in-tree use is limited to the old
load-balancing script.
There's a bif called packet_source() which on a worker will provide
information about the actually used packet source.
Relates to zeek/zeek#2877.
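For illustration, something along these lines reports the source a worker
actually uses (the field names of the returned record are assumptions):

    event zeek_init()
        {
        local src = packet_source();
        if ( src$live )
            print fmt("worker is reading live traffic from %s", src$path);
        }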
It's unclear what it's used for today, and it also results in the situation
that on some platforms we generate a reporter.log in bare mode, while on
others, where Spicy is disabled, we do not.
If we want base/frameworks/version loaded by default, we should put it into
init-bare.zeek and possibly remove the loading of the reporter framework
from it - Reporter::error() would still work and be visible on stderr,
it just wouldn't create a reporter.log.
When http_reply events are received before http_request events, either
through faked traffic or possible re-ordering, it is possible to trigger
unbounded state growth because later http_requests are never matched
with responses again.
Prevent this by synchronizing request/response counters when late
requests come in.
Also forcefully flush pending requests when http_replies are never
observed, either due to the analyzer having been disabled or because of
half-duplex traffic.
Fixes #1705
* origin/topic/awelzel/3145-dcerpc-state-clean:
dce-rpc: Test cases for unbounded state growth
dce-rpc: Handle smb2_close_request() in scripts
smb/dce-rpc: Cleanup DCE-RPC analyzers when fid is closed and limit them
dce-rpc: Do not repeatedly register removal hooks
Roughly 2.5 years ago, all events taking the ``icmp_conn`` parameter were
removed with 44ad614094, and the type in NetVar.cc was no longer populated.
Remove the left-overs in script land, too.
This patch does two things:
1) For SMB close requests, tear down any associated DCE-RPC
analyzer if one exists.
2) Protect fid_to_analyzer_map from growing unbounded by introducing a
   new SMB::max_dce_rpc_analyzers limit and forcefully wiping the
   analyzers if it is exceeded. This is propagated to script land as the
   event smb_discarded_dce_rpc_analyzers() for additional cleanup; see
   the sketch below.
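A rough sketch of tuning the limit and reacting to the cleanup event (the
event's parameter list and the chosen limit are assumptions):

    redef SMB::max_dce_rpc_analyzers = 512;

    event smb_discarded_dce_rpc_analyzers(c: connection)
        {
        # Perform any additional script-level cleanup of per-connection
        # DCE-RPC state here.
        print fmt("discarded DCE-RPC analyzers for %s", c$uid);
        }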
This is mostly to fix how the binpac SMB analyzer tracks individual
DCE-RPC analyzers per open fid. Connections that re-open the same or
different pipe may currently allocate an unbounded number of analyzers.
Closes #3145.
* topic/awelzel/3112-log-suffix-left-over-log-rotation:
cluster/logger: Fix leftover-log-rotation in multi-logger setups
cluster/logger: Fix global var reference
Populating log_metadata during zeek_init() is too late for the
leftover-log-rotation functionality, so do it at script parse time.
Also, prepend archiver_ to the log_metadata table and the encoding
function, since they live in the global namespace, and to align with
archiver_rotation_format_func. This hasn't been in a released version
yet, so it's still fine to rename.
Closes #3112