While this seems like interesting functionality, it hasn't been documented,
maintained, or knowingly leveraged for many years.
There are various other approaches today, too:
* We track the number of event handler invocations regardless of
profiling. A load_sample event can be approximated by comparing the
results of two get_event_stats() calls (see the sketch after this
list). Alternatively, visualize the corresponding counters in a
Prometheus setup to get an idea of events/s broken down by event name.
* HookCallFunction() allows intercepting script execution, including
measuring how long execution takes.
* The global call_stack and g_frame_stack can be used from plugins
(and even external processes) to walk the Zeek script stack at certain
points to implement a sampling profiler.
* USDT probes or more plugin hooks will likely be preferred over Zeek
builtin functionality in the future.
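For illustration, a minimal sketch of the get_event_stats() approximation
mentioned above; the event name, interval, and output are illustrative
only, not an existing script:

    global poll_event_stats: event();
    global last_dispatched = 0;

    event poll_event_stats()
        {
        local cur = get_event_stats();
        # The delta between two readings approximates the dispatch rate.
        print fmt("events dispatched in the last 10 secs: %d",
                  cur$dispatched - last_dispatched);
        last_dispatched = cur$dispatched;
        schedule 10sec { poll_event_stats() };
        }

    event zeek_init()
        {
        last_dispatched = get_event_stats()$dispatched;
        schedule 10sec { poll_event_stats() };
        }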
Relates to #3458
This field isn't required by a worker, and it's certainly not used by a
worker to listen on that specific interface. It also isn't required to
be set consistently, and its in-tree use is limited to the old
load-balancing script.
There's a BIF called packet_source() which, on a worker, provides
information about the packet source actually in use.
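For example (illustrative only), a worker-side script can do:

    event zeek_init()
        {
        local src = packet_source();
        if ( src$live )
            print fmt("reading live traffic from %s", src$path);
        }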
Relates to zeek/zeek#2877.
The dump-events baseline changes are pure noise and have spurred confusion
among internal and external contributors. For example, adding new
analyzers has perturbed the ordering of sets holding analyzer tags.
Running in non-bare mode, the baselines change almost any time one of the
record types attached to connections changes in the default scripts. This
causes continuous and rarely useful updates to the baselines.
This change switches the test to run in bare mode and explicitly loads
just base/protocols/conn and base/protocols/smtp. The primary intention
of the test should be testing the functionality of the misc/dump-events
script, not the events raised by all loaded default scripts (the PCAP
used is too narrow for that).
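Schematically, the bare-mode test now has roughly this shape (the trace
name and test directive are placeholders, not the literal test file):

    # @TEST-EXEC: zeek -b -r $TRACES/smtp.trace %INPUT misc/dump-events
    @load base/protocols/conn
    @load base/protocols/smtp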
Protocol-specific scripts that do want to leverage misc/dump-events to
create baselines for their own or their analyzer's events can add
dedicated tests with suitable PCAP files.
This commit fixes the compile-time warnings that OpenSSL 3.0 raises for
our source code. Where this was necessary, we now have two
implementations: one for OpenSSL 1.1 and earlier, and one for OpenSSL
3.0.
This also makes our test suite pass with OpenSSL 3.0.
Relates to GH-1379
generate_all_events causes all events to be raised internally; this
makes it possible for dump_events to really capture all events (and not
just those that were handled).
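A minimal sketch of how this can be used from script-land (assuming
generate_all_events() is invoked from an init handler; the exact call
site may differ):

    event bro_init()
        {
        # Ask the event engine to raise every event, handled or not.
        generate_all_events();
        }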
Addresses GH-169
- Use `-b` almost everywhere; it saves time.
- Start some intel tests once the input file has been fully read instead
of at an arbitrary time.
- Improve termination condition for some sumstats/cluster tests.
- Filter uninteresting output from some supervisor tests.
- Test for `notice_policy.log` is no longer needed.
- Added test case and adjusted whitespace in merge
* 'stats-logging-fix' of https://github.com/brittanydonowho/zeek:
Fixed stats.zeek to log all data before zeek terminates rather than return too soon
* Generally increase timeouts for tests that have had recent transient
failures
* Change any test that relied on `btest-bg-wait -k`, since that's never
going to play well with CI systems. Instead, we always need to have
a well-defined termination condition in the test itself (and most
already did, so they didn't really need the `-k` flag anyway).
Or otherwise convert it into a regular btest if it didn't already seem to
be covered.
There's no need for a separate memory leak test group since compiling
with LeakSanitizer now covers leak checking for the full btest suite.
For backward compatibility when reading values, we first check
the ZEEK-prefixed value, and if not set, then check the corresponding
BRO-prefixed value.
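Schematically, the fallback looks like this (ZEEKPATH/BROPATH are chosen
as an example pair, the helper name is hypothetical, and the actual
checks live in the core):

    # Hypothetical helper showing the ZEEK-then-BRO lookup order.
    function getenv_compat(zeek_name: string, bro_name: string): string
        {
        local v = getenv(zeek_name);
        return v != "" ? v : getenv(bro_name);
        }

    # e.g. getenv_compat("ZEEKPATH", "BROPATH")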
This also installs symlinks from "zeek" and "bro-config" to a wrapper
script that prints a deprecation warning.
The btests pass, but this is still WIP. broctl renaming is still
missing.
#239
The generation of weird events is now, by default, rate-limited
according to these tunable options:
- Weird::sampling_whitelist
- Weird::sampling_threshold
- Weird::sampling_rate
- Weird::sampling_duration
The new get_reporter_stats() BIF also allows one to query the
total number of weirds generated (pre-sampling), which the new
policy/misc/weird-stats.bro script uses periodically to populate
a weird_stats.log.
There are also new reporter BIFs that allow generating weirds from the
script layer such that they go through the same internal
rate-limiting/sampling mechanisms:
- Reporter::conn_weird
- Reporter::flow_weird
- Reporter::net_weird
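For illustration, tuning and script-layer usage could look like this
(the values and the weird name are arbitrary examples, not recommended
settings):

    redef Weird::sampling_threshold = 25;
    redef Weird::sampling_rate = 1000;
    redef Weird::sampling_duration = 10min;

    event connection_established(c: connection)
        {
        # Goes through the same internal rate-limiting/sampling as
        # weirds generated by the core.
        Reporter::conn_weird("example_weird", c);
        }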
Some of the code was adapted from previous work by Johanna Amann.
The problem is that, with certain compilers, the order of the file hash
events is reversed (for reasons that are currently unknown).
This fix simply removes all MD5 events from the dump-events test, leaving
only the SHA1 events, which sidesteps the ordering problem during the test.
Changes:
- Changing semantics of the new_event() meta event: it's raised
only for events that have a handler defined. There are too many
checks in Bro that prevent events without a handler from even being
prepared for raising to do it differently. (See the sketch after this
list.)
- Adding test case.
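For reference, the meta event can be handled like this (a minimal
sketch; the output is illustrative):

    event new_event(name: string, args: call_argument_vector)
        {
        print fmt("raised %s with %d arguments", name, |args|);
        }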
* topic/robin/event-dumper:
New script misc/dump-events.bro, along with core support, that dumps events Bro is raising in an easily readable form.
Prettifying Describe() for record types.
Updated README and collected coverage-related tests in a common dir.
There are still coverage failures resulting from either the following
scripts not being @load'd in the default bro mode:
base/frameworks/time-machine/notice.bro
base/protocols/http/partial-content.bro
base/protocols/rpc/main.bro
Or the following result in errors when @load'd:
policy/protocols/conn/scan.bro
policy/hot.conn.bro
If these are all scripts-in-progress, can we move them all to live
outside the main scripts/ directory until they're ready?