Processing out-of-order commands or finishing commands based on invalid
server responses resulted in inconsistent analyzer state, potentially
triggering null pointer references for crafted traffic.
This commit reworks cf9fe91705 so that excess pending commands are
simply discarded rather than any attempt being made to process them.
Further, invalid server responses no longer result in command
completion.
Test PCAP was crafted based on traffic produced by the OSS-Fuzz reproducer.
Closes #215
That test became flaky on centosstream9 CI, probably due to #3949. You
can reproduce the behavior by increasing the sleep time when waiting
for the file, so that the test attempts to read the still-missing file
again. Since the one-second wait for the file is glacially slow here,
speeding it up should mean the file gets created sooner and the test
won't retry opening it. A retry is still technically possible, since
the test waits up to 10 seconds and the heartbeat appears to be 1
second, but if that happens it most likely indicates a bug or a massive
slowdown of some kind.
Other similar tests most likely get by because they have more "stuff"
before they call `terminate()`. To be safe, though, simply removing the
"received termination signal" line seems like the best approach.
Invalid lines in a file were the one case that would not suppress
future warnings. Make that case suppress warnings too, but clear the
suppression if a field in between does not error.
Fixes #3692
* topic/vern/script-opt-maint.Sep24B:
factoring of logic used by ZAM's low-level optimizer when adjusting control flow info
BTest baseline update for more complete function/lambda names
tweak to -O gen-C++ maintenance script to avoid treating plugins as BTests
fixed lambda hash collision bug due to function descriptions lacking full parameter information
fixes (to avoid collisions) for AST profiling's function hash computations
removed unused ZAM cast-to-any operation
fixes for ZAM tracking the return type associated with function calls
ZAM control-flow tracking now explicitly includes the ends of loops
fix for ZAM identification of common subexpressions
"-O dump-final-ZAM" option similar to "dump-ZAM" only prints final version of functions
fix for setting object locations to avoid use-after-free situation
extended "-O allow-cond" to apply to both gen-C++ and gen-standalone-C++
-O gen-C++ fix for run-time warnings for "when" lambdas
fix to -O gen-C++ for recent AST profiling changes for identifying function parameters
fix to -O gen-C++ for dealing with "hidden" parameters
tweak to prevent an incorrect warning for scripts compiled to C++
fixed overly narrow Spicy test for manipulating packet analyzers
fixed memory leak for recursive ZAM functions that exit via an exception
remove unnecessary header include
* origin/topic/awelzel/cluster-backends-pre-work-v1:
NEWS: Update
scripts/base/cluster: Move active node management into node_down()
logging/Manager: Extract another CreateWriter() helper
logging/Manager: Extract path_func invocation into helper
logging: Dedicated log flush timer
all: Change to use Func::GetName()
script_opt: Use Func::GetName()
Func: Add std::string name accessors, deprecate const char* versions
plugin/ComponentManager: Support lookup by EnumValPtr
For other cluster backends, CreateWriter() will use a logger's filter
configuration rather than receiving all configuration through CreateLog.
Extract a helper out from WriteToFilters() for reuse.
Log flushing is currently triggered based on the threading heartbeat timer
of WriterBackends and the hard-coded WRITE_BUFFER_SIZE 1000.
This change introduces a separate timer that is managed by the logger
manager instead of piggy-backing on the heartbeat timer, as well as a
const &redef for the buffer size.
This allows modifying the log flush frequency and batch size
independently of the threading heartbeat interval. Later, this will
allow reusing the buffering and flushing logic of writer frontends for
non-Broker cluster backends, too.
One change here is that even frontends that do not have a backend will
be flushed regularly. This is wanted for non-Broker backends and should
be very cheap. Possibly, Broker can piggyback on this timer down the
road, too, rather than using its own script-level timer (see
Broker::log_flush()).
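As a sketch of the resulting tuning knobs, the two values could then be
redef'd independently along these lines (identifier names assumed for
illustration; verify them against the logging framework scripts in your
Zeek version):

```zeek
# Sketch only: Log::flush_interval and Log::write_buffer_size are
# assumed names; check scripts/base/frameworks/logging for the actual
# identifiers in your Zeek version.

# Flush buffered log writes every 5 seconds instead of relying on the
# threading heartbeat interval.
redef Log::flush_interval = 5sec;

# Allow larger write batches before a forced flush (previously the
# hard-coded WRITE_BUFFER_SIZE of 1000).
redef Log::write_buffer_size = 4000;
```

Because the flush timer is owned by the logging manager rather than the
writer threads, these two settings no longer need to move in lockstep
with the heartbeat interval.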
For debugging btests, it can be convenient to enable debug streams
by setting an environment variable rather than editing zeek invocations
and adding -B selectively.
Sample use case:
$ export ZEEK_DEBUG_LOG_STREAMS=all
$ btest -d core/failing-test.zeek
$ less .tmp/core/failing-test/debug.log
This change makes Zeek's -B option and ZEEK_DEBUG_LOG_STREAMS additive.
The tests `core.sigterm-regular` and `core.sigterm-stdin` rely on `ps`
being present, which is no longer the case on OpenSuse Leap; install it
explicitly there.