The schema of cluster WebSocket error messages deviated from the
existing one in Broker, breaking seamless migration from the Broker
WebSocket bindings.
This patch aligns the serialization in cluster with the one in Broker.
This is technically a breaking change of the cluster schema, but since
it never worked as documented and is still experimental, this is
probably fine.
Closes #4594.
Now the not_before/not_after fields output GMT-based times.
Also adds a new btest diff canonifier which only removes the first
timestamp in a line.
Fixes GH-4521
When looking up the postprocessor function from shadow files,
id::find_func() would abort() if the function wasn't available, instead
of falling back to the default postprocessor.
Fix this by using id::find() and checking the result explicitly, adding
a strict type check while at it.
This issue was tickled by loading the json-streaming-logs package,
having Zeek create shadow files that reference its custom postprocessor
function, and then restarting Zeek without the package loaded.
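For illustration, a package-provided postprocessor might look like this
(hypothetical name; a sketch, not the package's actual code):

    @load base/frameworks/logging

    # A custom rotation postprocessor. Its script-level name gets
    # recorded in the ASCII writer's shadow files, so a later run
    # without this script must fall back to the default postprocessor
    # instead of abort()ing.
    function my_custom_postprocessor(info: Log::RotationInfo): bool
        {
        print fmt("rotated %s to %s", info$path, info$fname);
        return T;
        }

    redef Log::default_rotation_postprocessors += {
        [Log::WRITER_ASCII] = my_custom_postprocessor
    };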
Closes #4562
After a public discussion and also chatting with Vern directly, disable
the ZAM bif tracking test to avoid an update every time new functions
are added. Usually these aren't performance critical and the default
characterization is fine. If they are performance critical, Vern is
currently best positioned to properly integrate an optimized version.
The response to BDAT LAST was never recognized, resulting in BDAT LAST
commands not being logged in a timely fashion and receiving the wrong
status.
This likely doesn't handle complex pipelining scenarios, but it fixes
smtp_reply() not handling responses to simple BDAT commands.
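For example, a handler like the following (a sketch) now sees the reply
to BDAT LAST with the matching command and status:

    @load base/protocols/smtp

    event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
                     msg: string, cont_resp: bool)
        {
        # With the fix, the server's reply is matched to the pending
        # BDAT command instead of going unrecognized.
        if ( cmd == "BDAT" )
            print fmt("%s: BDAT reply %d %s", c$uid, code, msg);
        }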
Thanks @cccs-jsjm for the report!
Closes #4522
* origin/topic/johanna/new-style-analyzer-log:
NEWS entries for analyzer log changes
Move detect-protocol from frameworks/dpd to frameworks/analyzer
Introduce new c$failed_analyzers field
Settle on analyzer.log for the dpd.log replacement
dpd->analyzer.log change - rename files
Analyzer failure logging: tweaks and test fixes
Introduce analyzer-failed.log, as a replacement for dpd.log
Rename analyzer.log to analyzer-debug.log; move to policy
Move dpd.log to policy script
Currently, coverage/bare-mode-errors is one of the slowest tests in the
entire test suite, because it launches Zeek once for every script that
we ship, sequentially. This commit changes the test to use xargs to
spawn 20 parallel processes.
detect-protocol.zeek was the last non-deprecated script left in
policy/frameworks/dpd. It was moved to policy/frameworks/analyzer, and a
stub that loads the script from the new location with a deprecation
warning was added.
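The stub amounts to something like this (the removal version in the
message is hypothetical):

    # policy/frameworks/dpd/detect-protocol.zeek
    @deprecated "Remove in v8.1. Load policy/frameworks/analyzer/detect-protocol instead."
    @load frameworks/analyzer/detect-protocol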
This field is used internally to track which analyzers already had a
violation, mostly to prevent duplicate logging.
In the past, c$service_violation was used for a similar purpose, but
with slightly different semantics: where c$failed_analyzers tracks
analyzers that were removed due to a violation, c$service_violation
tracks violations themselves and doesn't care whether an analyzer was
actually removed due to one.
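A sketch of consuming the new field, assuming it is a set of analyzer
names on the connection record:

    event connection_state_remove(c: connection)
        {
        # c$failed_analyzers holds the analyzers that were removed due
        # to a violation (assumed set[string] here).
        if ( c?$failed_analyzers && |c$failed_analyzers| > 0 )
            print fmt("%s: analyzers removed after violations: %s",
                      c$uid, c$failed_analyzers);
        }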
To address review feedback in GH-4362: rename analyzer-failed-log.zeek
to logging.zeek, analyzer-debug-log.zeek to debug-logging.zeek, and
dpd-log.zeek to deprecated-dpd-log.zeek.
Includes the respective test, NEWS, etc. updates.
The main part of this commit is changes to tests: a lot of the tests
that previously relied on analyzer.log or dpd.log now use the new
analyzer-failed.log.
I verified all the changes and, as far as I can tell, everything behaves
as it should. This includes the external test baselines.
This change also enables logging of file and packet analyzers to
analyzer-failed.log and fixes some small behavior issues.
The analyzer_failed event is no longer raised when the removal of an
analyzer is vetoed.
If an analyzer is no longer active when an analyzer violation is raised,
the analyzer_failed event is now raised anyway. This can happen, e.g.,
when an analyzer error occurs at the very end of a connection. This
makes the behavior more similar to what happened in the past, and also
intuitively seems to make sense.
A bug introduced in the failed-service logging was fixed.
* origin/topic/awelzel/4177-4178-custom-event-metadata-part-2:
Event: Bail on add_missing_remote_network_timestamp without add_network_timestamp
btest/plugin: Test custom metadata publish
NEWS: Add note about generic event metadata
cluster: Remove deprecated Event constructor
cluster: Remove some explicit timestamp handling
broker/Manager: Fetch and forward all metadata from events
Event/init-bare: Add add_missing_remote_network_timestamp logic
cluster/Backend/DoProcessEvent: Use generic metadata, not just timestamps
cluster/Event: Support moving args and metadata from event
cluster/serializer/broker: Support generic metadata
cluster/Event: Generic metadata support
Event: Use -1.0 for undefined/unset timestamps
cluster: Use shorter obj_desc versions
Desc: Add obj_desc() / obj_desc_short() overloads for IntrusivePtr
This can happen either when there's no network timestamp associated with
an event, or when there's currently no event being dispatched. Using 0.0
isn't great, as it's the normal start timestamp before reading a network
packet. Using -1.0 gives the caller a chance to check and realize what's
going on.
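At the script layer, a caller can now tell the unset case apart, e.g.
(a sketch; assumes the sentinel surfaces through current_event_time()):

    global my_event: event();

    event my_event()
        {
        local ts = current_event_time();

        # -1.0 now means "no network timestamp attached / no event
        # being dispatched", instead of a misleading 0.0.
        if ( ts == double_to_time(-1.0) )
            print "no network timestamp for this event";
        else
            print fmt("event network time: %s", ts);
        }

    event zeek_init()
        {
        event my_event();
        }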
* origin/topic/vern/CPP-maint.May25:
minor BTest maintenance updates for -O gen-C++
fix for more robustly finding BTests to assess for -O gen-C++
fix for -O gen-C++ dealing with type constants of unnamed compound types
This change adds two new hooks to the Intel framework that can be used
to intercept added and removed indicators and their type.
These hooks are fairly low-level. One immediate use-case is to count the
number of indicators loaded per Intel::Type and enable and disable the
corresponding event groups of the intel/seen scripts.
I attempted to gauge the overhead, and while it's definitely there,
loading a file with ~500k DOMAIN entries takes somewhere around ~0.5
seconds longer with the hooks in place when populated via the
min_data_store mechanism. While that doesn't sound great, it actually
takes the manager on my system 2.5 seconds to serialize and
Cluster::publish() the min_data_store alone, and it's doing that
serially for every active worker. This is mostly to say that the bigger
overhead in that area is the manager doing redundant work per worker.
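A sketch of the counting use case (the hook names and signatures are
assumed from this branch):

    @load base/frameworks/intel

    global indicator_counts: table[Intel::Type] of count &default=0;

    hook Intel::indicator_inserted(indicator: string, indicator_type: Intel::Type)
        {
        ++indicator_counts[indicator_type];
        }

    hook Intel::indicator_removed(indicator: string, indicator_type: Intel::Type)
        {
        if ( indicator_counts[indicator_type] > 0 )
            --indicator_counts[indicator_type];
        }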
Co-authored-by: Mohan Dhawan <mohan@corelight.com>
* origin/topic/vern/ZAM-maint.May25:
fix for crash when interpreting transformed ASTs that include multi-field record assignments/additions
Remove unused ZAM compiler method
This only changes the script-layer API, but keeps the std::string host
in the C++ layer's ServerOptions, mostly because the ixwebsocket library
takes the host as a std::string. Also, maybe at some point we'd want to
support something scheme-based like unix:///var/run/zeek.sock, and
placing that in a string wouldn't be totally wrong.
Add tests for IPv6, too.
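Script-layer usage then looks roughly like this (a sketch; the
listen_addr/listen_port field names are assumed, not confirmed by this
commit):

    event zeek_init()
        {
        # Assumed option names: the script layer now takes the listen
        # address as an addr, while the C++ ServerOptions keeps a
        # std::string host internally.
        Cluster::listen_websocket([$listen_addr=127.0.0.1, $listen_port=9997/tcp]);
        }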
* origin/topic/etyp/redis-analyzer:
spicy-redis: Add NEWS entry
spicy-redis: Separate error replies from success
spicy-redis: Cleanup scripts and tests
spicy-redis: Bring Redis analyzer into Zeek proper
spicy-redis: Abort parsing if server data comes first
spicy-redis: Add recursion depth to server data
spicy-redis: Make client data only accept bulk strings
spicy-redis: Add dpd signature and clean pcaps
spicy-redis: Add some commands and touch up parsing
spicy-redis: Add some script logic for logging
spicy-redis: Separate client/server
spicy-redis: Touchup logging and Spicy issues
spicy-redis: Add synchronization and pipeline support
spicy-redis: Begin Spicy Redis analyzer