* origin/topic/awelzel/dpd-analyzer-merger:
analyzer/dpd: Address review comments
Remove @load base/frameworks/dpd from tests
frameworks/dpd: Move to frameworks/analyzer/dpd, load by default
scripts/dce-rpc,ntlm: Do not load base/frameworks/dpd
btest: Remove unnecessary loading of frameworks/dpd
Now that it's loaded in bare mode, there is no need to load it explicitly.
The main thing the tests were relying on seems to be the tracking of
c$service for conn.log baselines. Very few actually checked for dpd.log.
This is a script-only change that unrolls Files::Info records into
multiple files.log entries if the same file was seen over different
connections by a single worker. Consequently, the Files::Info record
gets the commonly used uid and id fields added. These fields are
optional for Files::Info - a file may be analyzed without relation
to a network connection (e.g. by using Input::add_analysis()).
The existing tx_hosts, rx_hosts and conn_uids fields of Files::Info
are not meaningful after this change and are removed by default.
Therefore, files.log has them removed, too.
The tx_hosts, rx_hosts and conn_uids fields can be revived by using the
policy script frameworks/files/deprecated-txhosts-rxhosts-connuids.zeek
included in the distribution. However, with v6.1 this script will be
removed.
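For illustration, restoring the removed fields amounts to loading that
policy script, e.g. from local.zeek:

    # Restores tx_hosts, rx_hosts and conn_uids in Files::Info/files.log
    # (deprecated; scheduled for removal in v6.1).
    @load frameworks/files/deprecated-txhosts-rxhosts-connuids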
Adds base/frameworks/telemetry with wrappers around telemetry.bif
and updates telemetry/Manager to support collecting metrics from
script land.
Adds policy/frameworks/telemetry/log for logging metrics data into a
new telemetry.log and telemetry_histogram.log, and adds it to
local.zeek by default.
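As a sketch of the script-land usage this enables (record fields and
function names here are approximate; consult the framework's docs):

    @load base/frameworks/telemetry

    # Assumed API: register a counter family, then bump it with a label.
    global example_cf = Telemetry::register_counter_family([
        $prefix="zeek",
        $name="example_events",
        $unit="1",
        $help_text="Number of example events observed",
        $labels=vector("kind")
    ]);

    event zeek_init()
        {
        Telemetry::counter_family_inc(example_cf, vector("demo"));
        }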
The previous way of splitting strings would break if the last string in
the line was empty, returning one fewer field than it should have. This
broke the last line in the scripts.base.framework.input.ascii.setspecialcases
test once the bug from GH #1628 was fixed.
While writing a test for the new "tail -F" semantics I found that
the $want_record=F case was broken (errno 25). Instead of opening
/dev/null when the input file is missing, change READER_RAW to avoid
I/O until the file can be opened.
Add two tests, one for when the event handler is called with a
record and one for when it's called with a string.
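For context, a sketch of the $want_record=F usage these tests cover
(source path, event and type names here are illustrative):

    type Val: record {
        s: string;
    };

    # With $want_record=F the handler gets the single field directly as a
    # string; with the default $want_record=T it would receive a Val record.
    event raw_line(description: Input::EventDescription, tpe: Input::Event, s: string)
        {
        print s;
        }

    event zeek_init()
        {
        Input::add_event([$source="input.txt", $reader=Input::READER_RAW,
                          $mode=Input::STREAM, $name="raw", $fields=Val,
                          $ev=raw_line, $want_record=F]);
        }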
Observed .sqlite-journal files and missing reporter.sqlite files in CI
runs. Reading the ./test.sqlite file afterwards is more reliable and
should be good enough.
Also modify FormatRotationPath to keep rotated logs within
Log::default_logdir unless the rotation function explicitly
set dir, e.g. when the user redef'ed default_rotation_interval.
With the introduction of LogAscii::logdir, log filenames can now include
parent directories rather than being plain basenames. Log rotation and
leftover log rotation broke when LogAscii::logdir was set, because this
situation wasn't handled.
This change ensures that .shadow files are placed within the directory where
the respective .log file is created. Previously, the .shadow. (or .tmp.shadow.)
prefix was simply prepended, yielding nonsensical paths such as
.tmp.shadow.foo/bar/packet_filter.log for a logdir of foo/bar.
Additionally, respect LogAscii::logdir when searching for leftover log files
rather than defaulting to the current working directory.
The following quirk exists around LogAscii::logdir, but will be addressed
in a follow-up.
* By default, logs are currently rotated into the working directory of the
process, rather than staying confined within LogAscii::logdir. One of
the added tests shows this behavior.
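For reference, the options involved look like this in a local.zeek (paths
are just examples; the exact rotation semantics are what the commits above
adjust):

    # Write live .log files (and their .shadow files) under a subdirectory:
    redef LogAscii::logdir = "foo/bar";

    # Directory that rotated logs should stay confined to, per the
    # FormatRotationPath change above:
    redef Log::default_logdir = "archive";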
This simply expands the test to match the behavior of
cluster-transparency-with-proxy, since the two are so similar. This test
does not seem to require disabling the worker's initial send of the data
store.
This test was unstable for two reasons:
- Nothing verified whether the two workers had checked in with the proxy,
meaning that messages between the workers and proxies could get lost. This adds
an extra node_up event that the proxy generates synthetically, with values
recognizable to the manager, once the proxy sees both workers connected. This is
a test-level workaround for what should really be a cluster-is-ready event in
the cluster framework proper.
- More subtle: the Intel framework makes the manager send its current
min_data_store to newly connected workers, which in the case of this test
introduces a race: since the data store, arriving at the worker, replaces the
existing value, it could actually remove already established items if the
timing was right. This would lead to the count in the test reaching 3,
assuming that 3 intel items are available, when in reality there were fewer,
causing the Intel::seen() call to do nothing. We now disable the sending of
the data store upon connect, via the global added in the previous commit.
This also expands the test slightly so that both workers call Intel::seen()
for the items inserted by the other worker. This provides added validation
for the second point above, because in the presence of that race one
occasionally sees one log entry make it and the other fail.
This allows us to create an EnumType that groups all of the analyzer
tag values into a single type, while still having the existing types
that split them up. We can then use this for certain events that benefit
from taking all of the tag types at once.
Due to a bug (or intentional behavior) in SQLite, we no longer enable the
shared cache in sqlite3 when running under ThreadSanitizer (see
cf1fefbe0b0a6163b389cc92b5a6878c7fc95f1f). Unfortunately, this has the
side effect of breaking the simultaneous-writes test, which needs the
shared cache. This is hopefully a temporary fix until SQLite addresses the
issue on their side.
On FreeBSD, this test showed two problems: (1) reordering caused by
writing the predicate, event, and end-of-data updates into a single
file, and (2) a race condition caused by printing the entirety of the
table description argument in update events. The description
contains the destination table, and its content at the time an update
event gets processed isn't deterministic: depending on the number
of updates the reader thread has sent, the table will contain a
varying number of entries.
The invalidset and invalidtext tests loaded an input file via table
and event reads, in parallel. On FreeBSD this triggers an occasional
reordering of messages coming out of the reader thread vs the input
managers. This commit makes the table and event reads sequential,
avoiding the race.
This addresses the need for a central hook on any log write, which
wasn't previously doable without a lot of effort. The log manager
invokes the new Log::log_stream_policy hook prior to any filter-specific
hooks. Like filter-level hooks, it may veto a log write. Even when
it does, filter-level hooks still get invoked, but cannot "un-veto".
Includes test cases.
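A minimal sketch of such a stream-level hook (the stream ID here is
chosen arbitrarily):

    # Veto every write to the conn stream before any filter-specific
    # policy hooks run; those hooks still execute but cannot un-veto.
    hook Log::log_stream_policy(rec: any, id: Log::ID)
        {
        if ( id == Conn::LOG )
            break;
        }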
Until now, the framework populated data structures with missing fields
even when those fields were defined without the &optional attribute.
When the attribute is used, such entries continue to get populated.
Update tests to reflect the focus on unset fields.
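Assuming this refers to the destination records of input reads, a
hypothetical example of the distinction:

    type Val: record {
        a: count;
        # Entries whose input data leaves this field unset only continue
        # to get populated because of the &optional attribute.
        b: string &optional;
    };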
* origin/topic/vern/ZAM-prep: (45 commits)
whoops overlooked the need to canonicalize filenames
another set of tweaks per review comments
addressed a number of code review comments
baseline updates for merge
support "any" coercions for "-O gen-C++"
better descriptions for named record constructors
test suite baseline updates for "-a opt" optimize-AST alternative
test suite baseline updates for "-a xform" alternative / AST transformation
error propagation fix for AST reduction
updates to "-a inline" test suite alternative baseline
updates for the main test suite baseline
updates to test suite tests for compatibility with upcoming ZAM functionality
"-O compile-all" option to specify compilation of inlined functions
compile inlined functions if they're also used indirectly
provide ZAM-generated code with low-level access to record fields
fix for cloning records with fields of type "any"
direct access for ZAM to VectorVal internal vector
ZVal constructors, accessors & methods in support of ZAM
switch ZVal representation of types from Type objects to TypeVal's
revised error-reporting interface for ZVal's, to accommodate ZAM inner loop
...
* 'logging/script-logdir' of https://github.com/kramse/zeek:
Copy of ascii-empty test, just changed path in the beginning
Logdir: Change requested by 0xxon, no problem
Introduce script-land variable that can be used to set logdir.
Closes GH-772