publish_hrw() and publish_rr() are excluded from type checking due to their
variadic nature. Passing a wrong type for the pool argument previously triggered
an abort; now it results in a runtime error. This isn't great, but it's
better than crashing Zeek.
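As a usage sketch (the event and key here are illustrative; the first
argument must be the pool):

    global ping: event(key: string);

    event zeek_init()
        {
        # Routes to one proxy via consistent hashing over the pool.
        # Passing a non-pool first argument now yields a runtime error
        # instead of aborting Zeek.
        Cluster::publish_hrw(Cluster::proxy_pool, "some-key", ping, "some-key");
        }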
Closes #2935
While working on a rotation format function, I ran into Zeek crashing
when the function did not return a value. Fix this and recover the same
way as for scripting errors.
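As a hedged sketch of a well-formed rotation format function (the
Log::RotationFmtInfo/Log::RotationPath field names reflect my reading of
the API and may differ):

    function my_rotation_format(ri: Log::RotationFmtInfo): Log::RotationPath
        {
        # Always return a value; a missing return here is what used to
        # crash Zeek before this fix.
        local ts = strftime("%Y-%m-%d-%H-%M-%S", ri$open);
        return [$file_basename=fmt("%s-%s", ri$path, ts)];
        }

    redef Log::rotation_format_func = my_rotation_format;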
This has been around since Zeek v4.1, so it was warned about in Zeek 5.0
LTS and we could've removed it with 5.1.
Also removed merge_top_scope() from the zeek::detail namespace; it's
unused now.
Updated the when-aggregates test somehow. I'm not quite sure what had
been tested there :-/
This adds one metric per log stream and one metric per log writer (path-based)
to track the number of writes at the stream level as well as at the writer level.
$ curl -sSf localhost:8181/metrics | grep Conn
zeek_log_writer_writes_total{endpoint="",filter-name="default",module="HTTP",path="http",stream="HTTP::LOG",writer="Log::WRITER_SQLITE"} 1 1677497572770
zeek_log_stream_writes_total{endpoint="",module="HTTP",stream="HTTP::LOG"} 1 1677497572770
The initial version of this change also included metrics around log
write vetoes, but since no log policies exist in the default configuration
and vetoes are mostly interesting for a few streams/writers only, this is
skipped for now. Such metrics can always be added by the script writer, too.
As a starting point, the difference between stream-level writes and the
writes of the concrete writers can be used to deduce the number of vetoes
(or errors).
Put the IntCounter into a std::optional rather than initializing
it at EventHandler construction time, as eager initialization would
currently expose a time series per event handler through the Prometheus API.
This avoids interference from other log streams in the policy hook test cases,
which could cause deviations in output vs baselines depending on build
configuration.
As initial examples, this branch ports the Syslog and Finger analyzers
over. We leave the old analyzers in place for now and activate them
only if we compile without any Spicy.
Needs `zeek-spicy-infra` branches in `spicy/`, `spicy-plugin/`,
`CMake/`, and `zeek/zeek-testing-private`.
Note that the analyzer events remain associated with the Spicy plugin
for now: that's where they will show up with `-NN`, and also inside
the Zeekygen documentation.
We switch CMake over to linking the runtime library into the plugin,
vs. at the top-level through object libraries.
* origin/topic/awelzel/analyzer-log:
  btest/net-control: Use different expiration times for rules
  analyzer: Add analyzer.log for logging violations/confirmations
By default this logs only the violations, regardless of the
confirmation state (for which there's still dpd.log). It includes
packet, protocol and file analyzers.
This uses options, change handlers and event groups for toggling
the functionality at runtime.
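A minimal sketch of that pattern; the option and group names here are
hypothetical, not the ones this commit introduces:

    option log_violations = T;

    function toggle_violations(id: string, new_value: bool): bool
        {
        # Event handlers tagged with &group="analyzer-log" are switched
        # on and off together with the option.
        if ( new_value )
            enable_event_group("analyzer-log");
        else
            disable_event_group("analyzer-log");

        return new_value;
        }

    event zeek_init()
        {
        Option::set_change_handler("log_violations", toggle_violations);
        }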
Closes #2031
In certain deployment scenarios, all analyzers are disabled by default.
However, conditionally/optionally loaded scripts may rely on analyzers
functioning and declare a request for them.
Add a global set to the Analyzer module where external scripts can record
their requirement/request for a certain analyzer. Analyzers found in this
set are enabled at zeek_init() time.
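For example, assuming the set ends up named Analyzer::requested_analyzers
(the Syslog tag is just an illustrative pick):

    redef Analyzer::requested_analyzers += { Analyzer::ANALYZER_SYSLOG };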
This commit adds an optional event_groups field to the Logging::Stream record
to associate event groups with logging streams.
This can be used to disable all event groups of a logging stream when it is
disabled. It does require making an explicit connection between the
logging stream and the involved groups, however.
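A sketch of what making that connection could look like; the stream,
module and group names are illustrative:

    module Example;

    export {
        redef enum Log::ID += { LOG };
        type Info: record {
            ts: time &log;
        };
    }

    event zeek_init()
        {
        # Handlers tagged &group="Example" can now be disabled together
        # with the stream, e.g. via Log::disable_stream(Example::LOG).
        Log::create_stream(LOG, [$columns=Info, $path="example",
                                 $event_groups=set("Example")]);
        }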
When a fa_file object is created through the use of Input::add_analysis(),
the fa_file's source is likely not a valid representation of an analyzer's
tag, and Files::describe() should not error but instead return an empty
description.
Add a new Analyzer::is_tag() helper that can be used to pre-check `f$source`.
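A sketch of the pre-check in use:

    event file_state_remove(f: fa_file)
        {
        # f$source may be an input stream name rather than an analyzer
        # tag, e.g. for files fed in via Input::add_analysis().
        if ( f?$source && Analyzer::is_tag(f$source) )
            print Files::describe(f);
        }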
* When a file is transferred over multiple connections, have
create_file_info() just pick the first one instead of none.
* Do not unconditionally assume cid and cuid are set on a
Notice::FileInfo object.
This allows enabling/disabling file analyzers through the same interfaces
as packet and protocol analyzers; specifically, Analyzer::disable_analyzer()
could be interesting.
This adds machinery to the packet_analysis manager for disabling
and enabling packet analyzers and implements two low-level bifs
to use it.
Extend Analyzer::enable_analyzer() and Analyzer::disable_analyzer()
to transparently work with packet analyzers, too. This also allows
adding packet analyzers to Analyzer::disabled_analyzers.
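For example (the VLAN packet analyzer is just an illustrative pick):

    # Statically, via the redef'able set that now accepts packet
    # analyzers as well:
    redef Analyzer::disabled_analyzers += { PacketAnalyzer::ANALYZER_VLAN };

    # Or dynamically at runtime:
    event zeek_init()
        {
        Analyzer::disable_analyzer(PacketAnalyzer::ANALYZER_VLAN);
        }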
* origin/topic/awelzel/dpd-analyzer-merger:
  analyzer/dpd: Address review comments
  Remove @load base/frameworks/dpd from tests
  frameworks/dpd: Move to frameworks/analyzer/dpd, load by default
  scripts/dce-rpc,ntlm: Do not load base/frameworks/dpd
  btest: Remove unnecessary loading of frameworks/dpd
Now that it's loaded in bare mode, there's no need to load it explicitly.
The main thing the tests were relying on seems to be the tracking of
c$service for conn.log baselines. Very few actually checked for dpd.log.
This is a script-only change that unrolls Files::Info records into
multiple files.log entries if the same file was seen over different
connections by a single worker. Consequently, the Files::Info record
gets the commonly used uid and id fields added. These fields are
optional for Files::Info - a file may be analyzed without relation
to a network connection (e.g. by using Input::add_analysis()).
The existing tx_hosts, rx_hosts and conn_uids fields of Files::Info
are no longer meaningful after this change and are removed by default.
Therefore, files.log will have them removed, too.
The tx_hosts, rx_hosts and conn_uids fields can be revived by using the
policy script frameworks/files/deprecated-txhosts-rxhosts-connuids.zeek
included in the distribution. However, with v6.1 this script will be
removed.
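To opt back in until then:

    @load frameworks/files/deprecated-txhosts-rxhosts-connuids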
Adds base/frameworks/telemetry with wrappers around telemetry.bif
and updates telemetry/Manager to support collecting metrics from
script land.
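A rough sketch of script-land collection through these wrappers; the
record fields and function names reflect my understanding of the new API
and may differ in detail:

    global conn_cf = Telemetry::register_counter_family([
        $prefix="zeek",
        $name="monitored_connections",
        $unit="1",
        $help_text="Connections removed from state, by transport protocol",
        $labels=vector("proto")]);

    event connection_state_remove(c: connection)
        {
        local proto = cat(get_port_transport_proto(c$id$resp_p));
        Telemetry::counter_family_inc(conn_cf, vector(proto));
        }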
Add policy/frameworks/telemetry/log for logging metrics data into a new
telemetry.log and telemetry_histogram.log, and add it to local.zeek by
default.
The previous way of splitting strings would break if the last string in
the line was empty: it returned one fewer field than it should have.
This was breaking the last line in the
scripts.base.framework.input.ascii.setspecialcases test once the bug
reported in GH #1628 was fixed.
While writing a test for the new "tail -F semantics", I found that
the $want_record=F case was broken (errno 25). So instead of opening
/dev/null when the input file is missing, change READER_RAW to avoid
I/O until the file can be opened.
Add two tests, one for when the event handler is called with a
record and one for when it's called with a string.
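A minimal sketch of the $want_record=F shape being exercised here; the
source path and names are illustrative:

    type OneLine: record {
        s: string;
    };

    event one_line(desc: Input::EventDescription, tpe: Input::Event, data: string)
        {
        print data;
        }

    event zeek_init()
        {
        Input::add_event([$source="input.txt", $reader=Input::READER_RAW,
                          $mode=Input::STREAM, $name="raw", $fields=OneLine,
                          $ev=one_line, $want_record=F]);
        }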
Observed .sqlite-journal files and missing reporter.sqlite files
in CI runs. Subsequently reading the ./test.sqlite file is more
reliable and should be good enough.
Also modify FormatRotationPath to keep rotated logs within
Log::default_logdir unless the rotation function explicitly
set dir, e.g. when the user redef'ed default_rotation_interval.
With the introduction of LogAscii::logdir, log filenames can now include
parent directories rather than being plain basenames. Enabling log rotation,
leftover log rotation and setting LogAscii::logdir broke due to not
handling this situation.
This change ensures that .shadow files are placed within the directory where
the respective .log file is created. Previously, the .shadow. (or .tmp.shadow.)
prefix was simply prepended, yielding nonsensical paths such as
.tmp.shadow.foo/bar/packet_filter.log for a logdir of foo/bar.
Additionally, respect LogAscii::logdir when searching for leftover log files
rather than defaulting to the current working directory.
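For reference, the setup in question is simply (path illustrative):

    redef LogAscii::logdir = "foo/bar";

With this change, .shadow files land in foo/bar/ as well, and leftover-log
recovery searches foo/bar/ instead of the current working directory.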
The following quirk exists around LogAscii::logdir, but will be addressed
in a follow-up.
* By default, logs are currently rotated into the working directory of the
process, rather than staying confined within LogAscii::logdir. One of
the added tests shows this behavior.
This simply expands this test to match the behavior of
cluster-transparency-with-proxy, since the two are so similar. This test does
not seem to need disabling the worker's initial send of the data store.
This test was unstable for two reasons:
- Nothing verified whether the two workers had checked in with the proxy,
meaning that messages between the workers and proxies could get lost. This adds
an extra node_up event that the proxy generates synthetically, with values
recognizable to the manager, once the proxy sees both workers connected. This is
a test-level workaround for what should really be a cluster-is-ready event in
the cluster framework proper.
- More subtle: the Intel framework makes the manager send its current
min_data_store to newly connected workers, which in the case of this test
introduces a race: since the data store, arriving at the worker, replaces the
existing value, it could actually remove already-established items if the
timing was right. This would lead to the count in the test reaching 3, assuming
that 3 intel items are available, when in reality there were fewer, causing the
Intel::seen() call to do nothing. We now disable the sending of the data store
upon connect, via the global added in the previous commit.
This also expands the test slightly so that both workers call Intel::seen() for
the items inserted by the other worker. This adds validation for the second
point above, because in the presence of that race one occasionally sees one log
entry make it and the other fail.