This expands the test to match the behavior of
cluster-transparency-with-proxy, since the two are so similar. This test does
not seem to need disabling the worker's initial send of the data store.
This test was unstable for two reasons:
- Nothing verified whether the two workers had checked in with the proxy,
meaning that messages between the workers and the proxy could get lost. This
adds (as sketched below)
an extra node_up event that the proxy generates synthetically, with values
recognizable to the manager, once the proxy sees both workers connected. This is
a test-level workaround for what should really be a cluster-is-ready event in
the cluster framework proper.
- More subtle: the Intel framework makes the manager send its current
min_data_store to newly connected workers, which in the case of this test
introduces a race: since the data store, arriving at the worker, replaces the
existing value, it could remove already established items if the timing was
right. The test's count could then reach 3, suggesting that 3 intel items were
available, when in reality there were fewer, causing the Intel::seen() call to
do nothing. We now disable the sending of the data store upon connect, via the
global added in the previous commit.
This also expands the test slightly so that both workers call Intel::seen() for
the items inserted by the other worker. This adds validation for the second
point above: in the presence of that race, one occasionally sees one log entry
make it while the other fails.
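
A minimal sketch of that workaround (event values hypothetical), in Zeek
script: the proxy counts worker check-ins and, once both are up, publishes a
synthetic node_up event whose values the manager recognizes:

    global workers_seen = 0;

    event Cluster::node_up(name: string, id: string)
        {
        if ( Cluster::local_node_type() != Cluster::PROXY )
            return;

        if ( ++workers_seen == 2 )
            # Values chosen so the manager can tell this synthetic
            # event apart from real node_up events.
            Broker::publish(Cluster::manager_topic, Cluster::node_up,
                            "workers-ready", "");
        }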
This allows us to create an EnumType that groups all of the analyzer
tag values into a single type, while still having the existing types
that split them up. We can then use this for certain events that benefit
from taking all of the tag types at once.
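
For illustration, assuming the grouped type is named AllAnalyzers::Tag (its
name in today's Zeek), an event can then accept protocol, packet, and file
analyzer tags through a single parameter:

    # Hypothetical handler: atype can hold a value from any of the
    # per-component tag enums, since all are members of the grouped type.
    event analyzer_confirmation(c: connection, atype: AllAnalyzers::Tag, aid: count)
        {
        print fmt("analyzer %s confirmed", atype);
        }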
Due to a bug (or intentional behavior) in SQLite, we no longer enable the shared
cache in sqlite3 when running under ThreadSanitizer (see
cf1fefbe0b0a6163b389cc92b5a6878c7fc95f1f). Unfortunately, this has the
side effect of breaking the simultaneous-writes test, which relies on the
shared cache. This is hopefully a temporary fix until SQLite fixes the
issue on their side.
On FreeBSD, this test showed two problems: (1) reordering caused by
writing the predicate, event, and end-of-data updates into a
single file, and (2) a race condition caused by printing the entirety of
the table-description argument in update events. The description
contains the destination table, and its content at the time an update
event gets processed isn't deterministic: depending on how many
updates the reader thread has sent, the table will contain a
varying number of entries.
The invalidset and invalidtext tests loaded an input file via table
and event reads in parallel. On FreeBSD this triggers an occasional
reordering of messages coming out of the reader thread vs. the input
manager. This commit makes the table and event reads sequential,
avoiding the race.
This addresses the need for a central hook on any log write, which
wasn't previously doable without a lot of effort. The log manager
invokes the new Log::log_stream_policy hook prior to any filter-specific
hooks. Like filter-level hooks, it may veto a log write. Even when
it does, filter-level hooks still get invoked, but cannot "un-veto".
Includes test cases.
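
A minimal usage sketch (picking the Conn stream as an example): breaking
from the hook vetoes the write, and filter-level policy hooks still run
afterwards but cannot reinstate it:

    hook Log::log_stream_policy(rec: any, id: Log::ID)
        {
        # Veto every write to conn.log; other streams are unaffected.
        if ( id == Conn::LOG )
            break;
        }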
The framework so far populated data structures with missing fields
even when those fields were defined without the &optional
attribute. When the attribute is used, such entries continue to get
populated, with the missing fields left unset.
Update tests to reflect focus on unset fields.
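
A sketch of the resulting usage (source and field names hypothetical):
only fields marked &optional may be missing from the input, and access to
them should be guarded with the ?$ existence test:

    type Idx: record {
        ip: addr;
    };

    type Val: record {
        severity: count;          # must be present in every line
        notes: string &optional;  # may be absent; then stays unset
    };

    global hosts: table[addr] of Val = table();

    event zeek_init()
        {
        Input::add_table([$source="hosts.data", $name="hosts",
                          $idx=Idx, $val=Val, $destination=hosts]);
        }

    event Input::end_of_data(name: string, source: string)
        {
        for ( ip, val in hosts )
            if ( val?$notes )  # guard the optional field
                print val$notes;
        }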
* origin/topic/vern/ZAM-prep: (45 commits)
whoops overlooked the need to canonicalize filenames
another set of tweaks per review comments
addressed a number of code review comments
baseline updates for merge
support "any" coercions for "-O gen-C++"
better descriptions for named record constructors
test suite baseline updates for "-a opt" optimize-AST alternative
test suite baseline updates for "-a xform" alternative / AST transformation
error propagation fix for AST reduction
updates to "-a inline" test suite alternative baseline
updates for the main test suite baseline
updates to test suite tests for compatibility with upcoming ZAM functionality
"-O compile-all" option to specify compilation of inlined functions
compile inlined functions if they're also used indirectly
provide ZAM-generated code with low-level access to record fields
fix for cloning records with fields of type "any"
direct access for ZAM to VectorVal internal vector
ZVal constructors, accessors & methods in support of ZAM
switch ZVal representation of types from Type objects to TypeVal's
revised error-reporting interface for ZVal's, to accommodate ZAM inner loop
...
* 'logging/script-logdir' of https://github.com/kramse/zeek:
Copy of ascii-empty test, just changed path in the beginning
Logdir: Change requested by 0xxon, no problem
Introduce script-land variable that can be used to set logdir.
Closes GH-772
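
Usage sketch, assuming the variable introduced here ends up as
Log::default_logdir (its name in today's Zeek):

    # Write log files to this directory instead of the working directory.
    redef Log::default_logdir = "/var/log/zeek";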
* origin/topic/timw/1487-not-valid-enum:
Move an assert() in input/Manager.cc to account for ValueToVal errors
Add test for config framework
Fix similar issues with ValueTo* methods in the input framework
GH-1487: Handle error from ValueToVal instead of ignoring it
The modp_dtoa/modp_dtoa2 functions aren't capable of handling double
values larger than INT_MAX and fall back on using sprintf() in that
situation. Previously, the format string for that sprintf() was "%e",
defaulting to a precision of 6, which is already too few digits to
represent a number known to be larger than INT_MAX. Now, an sprintf()
is still performed for values larger than INT_MAX and still uses a
scientific notation format, but in a way that uses as many decimal
digits as needed to preserve information.
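
For illustration: "%e" with the default precision of 6 prints seven
significant digits, so both 2147483647 (INT_MAX) and 2147483900 come out as
2.147484e+09, conflating distinct values; an IEEE 754 double needs up to 17
significant digits to round-trip exactly.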
When a text with an (escaped) zero byte was passed to ParseValue, only
the part of the string up to the zero byte was copied, but the length of
the full string was passed to the input framework.
This led the input manager to read past the end of the buffer.
Fixes zeek/zeek#1398
Merge adjustments:
- Removed some stale str_split() references from docs
- Renumbered TypeTag enum comments
- Simplified test-case for @unload (don't need .bro files anymore)
* origin/topic/timw/deprecation-cleanup:
Doc updates
Fix language.init-in-anon-function btest due to changes to log filter predicates
Remove deprecated log filter predicates for 4.1
Remove Plugin::HookCallFunction and fix tests related to it
Remove support for .bro script extension and BRO_ environment variables
Remove deprecated ICMP events
Remove some deprecated methods/events from bif files
Remove TYPE_COUNTER
Remove all of the random single-file deprecations
Remove all fully-deprecated files
Update bifcl submodule to remove deprecations from generated code
Script-layer counts, when provided as negative integers in an input
file, got cast to unsigned values because strtoull() does not complain
about negative values. For example, the input string "-1" would lead to
the value 18446744073709551615 (an all-ones 64-bit value) on x86_64. This
is more likely to be an error than an intent to get very large,
platform-dependent values, so such input lines are now skipped, with
corresponding messages in reporter.log/stderr.
This also affected ports: -1/tcp got cast to unsigned and was thrown out
only because PortVal rejects values > 65535, mapping them to 0. We now
skip such inputs as well.
Updates existing input framework tests to capture the new behavior.
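
A sketch of the affected field types (record and field names hypothetical):

    type Val: record {
        n: count;  # an input cell "-1" previously arrived as 18446744073709551615
        p: port;   # "-1/tcp" was previously caught only by the > 65535 range check
    };

Lines carrying such negative values now get skipped outright.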
Update the logging framework tests: since hooks operate
by name, they cannot be anonymous. I'm also dropping the &optional
attribute from the status field, since here we know that the values are
actually defined, and access to an optional status field should
normally be guarded by the existence-test operator.
Also includes baseline update for plugins.hooks, which picks up the
fact that the pred record field is now gone.
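
A sketch of the resulting pattern (hook and filter names hypothetical):
filter policies are now named hooks instead of anonymous predicate
functions:

    hook drop_local(rec: Conn::Info, id: Log::ID, filter: Log::Filter)
        {
        if ( rec$id$orig_h in Site::local_nets )
            break;  # veto this write for the filter
        }

    event zeek_init()
        {
        Log::add_filter(Conn::LOG, [$name="drop-local", $policy=drop_local]);
        }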
By default, all baselines are run through diff-remove-timestamp. With a BSD
sed implementation, this means that a newline is added at the end of the
file if none was there originally. This behavior differs from GNU
sed, which does not add a newline.
In this commit we unify the behavior by always adding a newline, even
when using GNU sed. This commit also disables the canonifier for a bunch
of binary baselines, so we do not have to change them.
This change allows users to specify an epoch length of 0, meaning that
epochs do not end on their own; the new next_epoch function lets users
end the current epoch manually.
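
A usage sketch, assuming the SumStats API and that next_epoch takes the
SumStat's name:

    event zeek_init()
        {
        local r = SumStats::Reducer($stream="conn.count", $apply=set(SumStats::SUM));

        SumStats::create([$name="manual-epochs",
                          $epoch=0secs,  # 0: the epoch ends only on request
                          $reducers=set(r),
                          $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                              {
                              print result["conn.count"]$sum;
                              }]);
        }

    # Later, once enough data has accumulated, finish the epoch by hand:
    # SumStats::next_epoch("manual-epochs");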
Addresses GH-348