While this seems like interesting functionality, it hasn't been documented,
maintained, or knowingly leveraged for many years.
There are various other approaches today, too:
* We track the number of event handler invocations regardless of
profiling. It's possible to approximate a load_sample event by
comparing the results of two get_event_stats() calls (see the
sketch after this list). Alternatively, visualize the corresponding
counters in a Prometheus setup to get an idea of events/s broken
down by event name.
* HookCallFunction() allows plugins to intercept script execution,
including measuring how long execution takes.
* The global call_stack and g_frame_stack can be used from plugins
(and even external processes) to walk the Zeek script stack at certain
points to implement a sampling profiler.
* USDT probes or more plugin hooks will likely be preferred over Zeek
builtin functionality in the future.
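A rough sketch of the get_event_stats() approach from the first bullet
(the 10-second interval and the output format are arbitrary):

```zeek
global last_stats: EventStats;

event measure_event_rate()
	{
	local now_stats = get_event_stats();
	# The difference between two samples approximates events per interval.
	print fmt("dispatched %d events in the last 10 secs",
	          now_stats$dispatched - last_stats$dispatched);
	last_stats = now_stats;
	schedule 10sec { measure_event_rate() };
	}

event zeek_init()
	{
	last_stats = get_event_stats();
	schedule 10sec { measure_event_rate() };
	}
```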
Relates to #3458
The SMB::State$recent_files field is meant to have expiring entries.
However, due to the usage of &default=string_set(), the &read_expire
attribute is not respected, causing unbounded state growth. Replace
&default=string_set() with &default=set().
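The relevant field, reduced to its essence (the 3min expiry here is
illustrative):

```zeek
type State: record {
	# With &default=set(), &read_expire on the entries is respected;
	# the previous &default=string_set() prevented expiration.
	recent_files: set[string] &default=set() &read_expire=3min;
};
```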
Thanks to ya-sato on Slack for reporting!
Related: zeek/zeek-docs#179, #3513.
With a bit of tweaking in the JavaScript plugin to support opaque types, this
will allow the delay functionality to work there, too.
Making the LogDelayToken an actual opaque seems reasonable, too. It's not
supposed to be user-inspected.
This is a verbose, opinionated and fairly restrictive version of the log delay idea.
The main drivers are explicitness, foot-gun avoidance and implementation simplicity.
Calling the new Log::delay() function is only allowed within the execution
of a Log::log_stream_policy() hook for the currently active log write.
Conceptually, the delay is placed between the execution of the global stream
policy hook and the individual filter policy hooks. A post delay callback
can be registered with every Log::delay() invocation. Post delay callbacks
can (1) modify a log record as they see fit, (2) veto the forwarding of the
log record to the log filters and (3) extend the delay duration by calling
Log::delay() again. The last point allows delaying a record for an indefinite
amount of time, rather than a fixed maximum amount. This should be rare and
is therefore explicit.
Log::delay() increases an internal reference count and returns an opaque
token value to be passed to Log::delay_finish() to release a delay reference.
Once all references are released, the delay completes and the record is
forwarded to all filters attached to the stream.
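A sketch of the intended usage based on the description above (the stream
and the release point are illustrative; exact signatures may differ):

```zeek
hook Log::log_stream_policy(rec: any, id: Log::ID)
	{
	if ( id != Conn::LOG )
		return;

	# Take a delay reference for the currently active write. The post
	# delay callback may modify rec, veto forwarding by returning F,
	# or extend the delay by calling Log::delay() again.
	local token = Log::delay(id, rec, function(rec: any, id: Log::ID): bool
		{
		return T;
		});

	# Later, e.g. from an event handler, release the reference:
	# Log::delay_finish(id, rec, token);
	}
```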
This functionality separates Log::log_stream_policy() and the individual filter
policy hooks. One consequence is that a common use case of filter policy hooks,
removing unproductive log records, may only run after a record was delayed. Users
can lift their filtering logic to the stream level (or replicate the condition
before the delay decision). The main motivation here is that deciding on a
stream-level delay in per-filter hooks is too late. Attaching multiple filters
to a stream can additionally result in hard-to-understand behavior.
On the flip side, filter policy hooks are guaranteed to run after the delay
and can be used for further mangling or filtering of a delayed record.
Update cipher consts.
Furthermore, some past updates were applied to scriptland without considering
that some of them also have to be applied to the binpac code in order to
correctly parse the ServerKeyExchange message.
(As a side note: this was discovered due to a test discrepancy with the
Spicy parser.)
There was some confusion around which value was used subsequent to a strip(),
but sub() not respecting anchors made it appear to work. The `\(?` part
also seems redundant.
and check_for_unused_event_handlers: UsageAnalyzer is more thorough,
the previous ones weren't extended to work with &is_used, and they
should probably be considered superseded by the UsageAnalyzer even
though it currently does not provide a public API and just prints
out deprecation warnings.
I'm also tempted to deprecate SetUsed() and Used() of EventHandler
for the same reason.
Closes #3187.
This field isn't required by a worker, and it's certainly not used by a
worker to listen on that specific interface. It also isn't required to
be set consistently, and its in-tree use is limited to the old
load-balancing script.
There's a BIF called packet_source() which, on a worker, provides
information about the packet source actually in use.
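For example, a worker can inspect its own packet source like this (printing
a few fields of the returned PacketSource record):

```zeek
event zeek_init()
	{
	local src = packet_source();
	print fmt("live=%s path=%s link_type=%d", src$live, src$path, src$link_type);
	}
```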
Relates to zeek/zeek#2877.
This commit adds a multitude of new extension types that were added in
the last few years; it also adds grease values to extensions, curves,
and ciphersuites.
Furthermore, it adds a test that contains an encrypted-client-hello
key exchange (which uses several extension types that we did not have in
our baseline so far).
The ssl_history field may grow unbounded (e.g., via the ssl_alert event).
Prevent this by capping it at a configurable limit (default 100) and raising
a weird once the limit is reached.
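A sketch of tuning the cap; the option name is an assumption derived from
this description, not a verified identifier:

```zeek
# Allow a longer ssl_history before the weird is raised (name assumed).
redef SSL::max_ssl_history_length = 200;
```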
Limit the number of events raised from an SSL record with content_type
alert (21) to a configurable maximum number (default 10). For TLS 1.3,
the limit is set to 1 as specified in the RFC. Add a new weird for cases
where the limit is exceeded.
OSS-Fuzz managed to generate a reproducer that raised ~660k ssl_plaintext
and ssl_alert events given ~810kb of input data. This change prevents that,
hopefully without negative side effects in the real world.
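The cap should be tunable along these lines; again, the option name is an
assumption based on the description:

```zeek
# Raise the per-record alert limit from its default of 10 (name assumed).
redef SSL::max_alerts_per_record = 20;
```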
Previously, seq was computed as |pending_commands| + 1. This opened the
possibility of overwriting queued commands, as well as logging the same
pending FTP reply multiple times.
For example, when commands 1, 2, 3 are pending, command 1 may be dequeued,
but the next incoming command then receives seq 3 and overwrites the already
pending command 3. The second scenario happens when ftp_reply() selected
command 3 as pending for logging, but is then followed by many ftp_request()
events. This resulted in command 3's response being logged over and over
again for every following ftp_request().
Avoid both scenarios by tracking the command sequence as an absolute counter.
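A minimal sketch of the fixed bookkeeping (names hypothetical, not the
analyzer's actual code):

```zeek
global pending: table[count] of string;
global cmd_seq: count = 0;  # absolute counter, never reused

function add_pending(cmd: string)
	{
	# Before: seq was |pending| + 1, so dequeuing an earlier command let
	# the next one reuse a seq still occupied by a pending command.
	++cmd_seq;
	pending[cmd_seq] = cmd;
	}
```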
Unsure what it's used for today; it also results in the situation that on
some platforms we generate a reporter.log in bare mode, while on others,
where Spicy is disabled, we do not.
If we want base/frameworks/version loaded by default, we should put it into
init-bare.zeek and possibly remove the loading of the reporter framework
from it - Reporter::error() would still work and be visible on stderr,
just not create a reporter.log.
Makes testing easier and aligns better with log rotation and timer
expiration. Should not have an effect in practice. Also log detail
about whether the inode or the modification time changed.
* origin/topic/awelzel/2326-import-quic:
ci/btest: Remove spicy-quic helper, disable Spicy on CentOS 7
btest/core/ppp: Run test in bare mode
btest/quic: Update other tests
testing/quic: Fixups and simplification after Zeek integration
quic: Integrate as default analyzer
quic: Include Copyright lines to the analyzer's source code contributed by Fox-IT
quic: Squashed follow-ups: quic.log, tests, various fixes, performance
quic: Initial implementation
* origin/topic/bbannier/issue-3234:
Introduce dedicated `LDAP::Info`
Remove redundant storing of protocol in LDAP logs
Use LDAP `RemovalHook` instead of implementing `connection_state_remove`
Tidy up LDAP code by using local references
Pluralize container names in LDAP types
Move LDAP script constants to their own file
Name `LDAP::Message` and `LDAP::Search` `*Info`
Make ports for LDAP analyzers fully configurable
Require have-spicy for tests which log spicy-ldap information
Fix LDAP analyzer setup for when Spicy analyzers are disabled
Bump zeek-testing-private
Integrate spicy-ldap test suite
Move spicy-ldap into Zeek protocol analyzer tree
Explicitly use all of spicy-ldap's modules
Explicitly list `asn1.spicy` as spicy-ldap source
Remove uses of `zeek` module in spicy-ldap
Fix typos in spicy-ldap
Remove project configuration files in spicy-ldap
Integrate spicy-ldap into build
Import zeek/spicy-ldap@57b5eff988
This moves the ports the LDAP analyzers should be triggered on from the
EVT file to the Zeek module. This gives users full control over which
ports the analyzers are registered for while previously they could only
register them for additional ports (there is no Zeek script equivalent
of `Manager::UnregisterAnalyzerForPort`).
The analyzers could still be triggered via DPD, but this is intentional.
To fully disable the analyzers, users can use, e.g.:
```zeek
event zeek_init()
	{
	Analyzer::disable_analyzer(Analyzer::ANALYZER_LDAP_TCP);
	}
```
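Conversely, registering the analyzers for additional ports still works
through the analyzer framework (the extra port here is arbitrary):

```zeek
event zeek_init()
	{
	Analyzer::register_for_ports(Analyzer::ANALYZER_LDAP_TCP, set(3268/tcp));
	}
```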
On Linux with a default ext4 or tmpfs filesystem, the default buffer size for
reading a pcap is chosen as 4k (validated via strace/gdb). When reading large
pcaps containing raw data transfers, the read() syscall overhead becomes
visible in profiles. Make the buffer size configurable and default it to 128kb.
When processing a ~830M PCAP (16 UDP connections, each transferring ~50MB) in
bare mode, this change improves runtime from 1.39 sec to 1.29 sec. Increasing
the buffer further didn't provide a noticeable boost.
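For experimentation, the buffer should be tunable along these lines; the
option name is an assumption based on this description, not a verified
identifier:

```zeek
# Bump the offline pcap read buffer from 128kb to 256kb (name assumed).
redef Pcap::bufsize_offline_bytes = 256 * 1024;
```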
Setting this option to false does not count missing bytes in files towards
the extraction limits and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
In the past, we allocated a buffer with zeroes and wrote that with
fwrite. Now, instead we just fseek to the correct offset.
This changes the way in which the file extraction limit is counted a bit:
skipped bytes no longer count against the file size limit.
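A sketch of enabling the new accounting; the option name is an assumption
based on the description above:

```zeek
# Don't count missing bytes in files towards the extraction limit
# (option name assumed).
redef FileExtract::default_limit_includes_missing_bytes = F;
```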
(cherry picked from commit 5071592e9b7105090a1d9de19689c499070749d4)
Setting this option to false does not count missing bytes in files towards
the extraction limits and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
(cherry picked from commit afa6f3a0d3b8db1ec5b5e82d26225504c2891089)
OSS-Fuzz generated a CWD request and reply followed by very many EPRT
requests. This caused Zeek to re-log the CWD request and invoke `build_url_ftp()`
over and over again, resulting in long processing times.
Avoid this scenario by not logging commands that are no longer pending.
(cherry picked from commit b05dd31667ff634ec7d017f09d122f05878fdf65)
A call to `extract_filename_from_content_disposition()` is only
efficient if the string is guaranteed to contain the pattern that
is removed by `sub()`. Due to missing brackets around the `[:blank:]`
character class, an overly long string (756kb) ending in
"Type:dtanameaa=" matched the wrong pattern causing `sub()` to
exhibit quadratic runtime. Besides that, we may have potentially
extracted wrong information from a crafted header value.
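The class-bracket distinction, as a minimal, hypothetical illustration:

```zeek
# "[:blank:]" is a plain bracket expression matching any of the literal
# characters ':', 'b', 'l', 'a', 'n', 'k'; the POSIX class needs double
# brackets to match a space or tab.
print sub("a\tb", /[[:blank:]]/, " ");  # prints "a b"
```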
(cherry picked from commit 6d385b1ca724a10444865e4ad38a58b31a2e2288)