* origin/topic/johanna/new-style-analyzer-log:
NEWS entries for analyzer log changes
Move detect-protocol from frameworks/dpd to frameworks/analyzer
Introduce new c$failed_analyzers field
Settle on analyzer.log for the dpd.log replacement
dpd->analyzer.log change - rename files
Analyzer failure logging: tweaks and test fixes
Introduce analyzer-failed.log, as a replacement for dpd.log
Rename analyzer.log to analyzer-debug.log; move to policy
Move dpd.log to policy script
detect-protocol.zeek was the last non-deprecated script left in
policy/frameworks/dpd. It was moved to policy/frameworks/analyzer, and
a stub script was added that loads it from the new location and emits a
deprecation warning.
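For illustration, a minimal sketch of what such a deprecation stub can
look like (the exact path, message, and file name are assumptions, not
the literal contents of the new script):

    # policy/frameworks/dpd/detect-protocol.zeek (hypothetical stub)
    @deprecated "Load policy/frameworks/analyzer/detect-protocol instead."
    @load frameworks/analyzer/detect-protocol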
This field is used internally to track which analyzers already had a
violation, mostly to prevent duplicate logging.
In the past, c$service_violation was used for a similar purpose;
however, it has slightly different semantics. Whereas
c$failed_analyzers tracks analyzers that were removed due to a
violation, c$service_violation tracks violations themselves and doesn't
care whether an analyzer was actually removed because of one.
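A minimal sketch of the duplicate-suppression idea (the element type of
c$failed_analyzers and the exact wiring are assumptions here, not the
actual implementation):

    event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo)
        {
        if ( ! info?$c )
            return;

        local c = info$c;
        local aname = Analyzer::name(atype);

        if ( aname in c$failed_analyzers )
            return;  # a violation for this analyzer was already handled

        add c$failed_analyzers[aname];
        # ... log the analyzer failure exactly once ...
        }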
To address review feedback in GH-4362: rename analyzer-failed-log.zeek
to logging.zeek, analyzer-debug-log.zeek to debug-logging.zeek, and
dpd-log.zeek to deprecated-dpd-log.zeek.
Includes the respective test, NEWS, etc. updates.
The main part of this commit is changes to tests. A lot of the tests
that previously relied on analyzer.log or dpd.log now use the new
analyzer-failed.log.
I verified all the changes and, as far as I can tell, everything
behaves as it should. This includes the external test baselines.
This change also enables logging of file and packet analyzer failures
to analyzer-failed.log and fixes some small behavior issues.
The analyzer_failed event is no longer raised when the removal of an
analyzer is vetoed.
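For context, removal of an analyzer can be vetoed by breaking from the
disabling hook; a minimal sketch (hook and parameter names as recalled,
treat them as assumptions):

    hook Analyzer::disabling_analyzer(c: connection, atype: AllAnalyzers::Tag, aid: count)
        {
        # Veto disabling the HTTP analyzer; with this change, no
        # analyzer_failed event is raised for the vetoed removal.
        if ( atype == Analyzer::ANALYZER_HTTP )
            break;
        }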
If an analyzer is no longer active when an analyzer violation is
raised, the analyzer_failed event is now raised nonetheless. This can,
e.g., happen when an analyzer error occurs at the very end of the
connection. This makes the behavior more similar to what happened in
the past, and it also intuitively seems to make sense.
A bug introduced in the failed service logging was fixed.
analyzer-failed.log is, essentially, the replacement for dpd.log. The
name should make more sense, as it now logs analyzer failures. For
protocol analyzers specifically, these are failures that lead to the
analyzer being disabled.
The current analyzer.log is more useful for debugging than for
operational purposes. Hence it is disabled by default, moved to a
policy script, and renamed to analyzer-debug.log.
Furthermore, logging of analyzer confirmations and of analyzer
disabling is now enabled by default in that log.
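Keeping the debug-oriented log now requires loading the policy script
explicitly, roughly like this (script name per the later rename to
debug-logging.zeek; the exact load path is an assumption):

    @load frameworks/analyzer/debug-logging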
This is the first phase of moving from the current dpd log to a more
modern logfile, without some of the weirdnesses that the current dpd log
contains.
Tests will not pass in the current state; this is just splitting out
functionality.
* origin/topic/awelzel/4177-4178-custom-event-metadata-part-2:
Event: Bail on add_missing_remote_network_timestamp without add_network_timestamp
btest/plugin: Test custom metadata publish
NEWS: Add note about generic event metadata
cluster: Remove deprecated Event constructor
cluster: Remove some explicit timestamp handling
broker/Manager: Fetch and forward all metadata from events
Event/init-bare: Add add_missing_remote_network_timestamp logic
cluster/Backend/DoProcessEvent: Use generic metadata, not just timestamps
cluster/Event: Support moving args and metadata from event
cluster/serializer/broker: Support generic metadata
cluster/Event: Generic metadata support
Event: Use -1.0 for undefined/unset timestamps
cluster: Use shorter obj_desc versions
Desc: Add obj_desc() / obj_desc_short() overloads for IntrusivePtr
This change adds two new hooks to the Intel framework that can be used
to intercept added and removed indicators and their type.
These hooks are fairly low-level. One immediate use-case is to count the
number of indicators loaded per Intel::Type and enable and disable the
corresponding event groups of the intel/seen scripts.
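A minimal sketch of the counting part of that use case, assuming the
new hooks are named Intel::indicator_inserted and
Intel::indicator_removed (the names and signatures are assumptions):

    global indicators_per_type: table[Intel::Type] of count &default=0;

    hook Intel::indicator_inserted(indicator: string, indicator_type: Intel::Type)
        {
        ++indicators_per_type[indicator_type];
        }

    hook Intel::indicator_removed(indicator: string, indicator_type: Intel::Type)
        {
        if ( indicators_per_type[indicator_type] > 0 )
            --indicators_per_type[indicator_type];
        }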
I attempted to gauge the overhead and, while it's definitely there,
loading a file with ~500k DOMAIN entries spends somewhere around ~0.5
seconds in the hooks when populated via the min_data_store mechanism.
While that doesn't sound great, it actually takes the manager on my
system 2.5 seconds to serialize and Cluster::publish() the
min_data_store alone, and it's doing that serially for every active
worker. Mostly to say that the bigger overhead in that area is the
manager doing redundant work per worker.
Co-authored-by: Mohan Dhawan <mohan@corelight.com>
This only changes the script-layer API, but keeps the std::string host
in the C++ layer's ServerOptions, mostly because the ixwebsocket
library takes the host as a std::string. Also, maybe at some point we'd
want to support something scheme-based like unix:///var/run/zeek.sock,
and keeping that in a string wouldn't be totally wrong.
Add tests for IPv6, too.
- Recomputes checksums for pcaps to keep them clean
- Removes some tests that had big pcaps or weren't necessary
- Cleans up script naming and minor points
- Comments out Spicy code that currently causes a build failure, with a
  TODO to uncomment it
This deprecates the Event constructor and the ``ts`` parameter of
Enqueue(). Instead, versions are introduced that take a
detail::MetadataVectorPtr, which can hold the network timestamp
metadata and is meant to be allocated by the caller rather than
automatically during Enqueue() or within the Event constructor.
This also introduces a BifConst ``EventMetadata::add_network_timestamp``
to opt in to adding network timestamps to events globally. It's
disabled by default, as there are not a lot of known use cases that
need this.
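A minimal usage sketch of the opt-in (the BifConst name is taken from
the text above; nothing else is assumed):

    redef EventMetadata::add_network_timestamp = T;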
* origin/topic/vern/http-sqli-replacement:
site/local: Switch to detect-sql-injection
Add a revised script for detecting HTTP SQL injection, deprecate original
* 'smoot-improve-from_json' of github.com:/stevesmoot/zeek:
update baseline for zam
Update src/zeek.bif
Change from_json to return an error rather than print it.
When a node restarts or a peering between two nodes starts over for other
reasons, the internal tracking in the Broker manager resets its state (since
it's per-peering), and thus the message overflow counter. The script layer was
unaware of this, and threw errors when trying to reset the corresponding counter
metric down to zero at sync time.
We now track past buffer overflows via a separate epoch table, using Broker peer
ID comparisons to identify new peerings, and set the counter to the sum of past
and current overflows.
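A hypothetical sketch of the epoch idea (all names are invented for
illustration and are not the actual script code):

    type OverflowEpoch: record {
        peer_id: string;         # Broker ID of the peering last seen
        last: count &default=0;  # last overflow count seen for that peering
        past: count &default=0;  # overflows accumulated by earlier peerings
    };

    global overflow_epochs: table[string] of OverflowEpoch;

    # Value to assign to the counter metric at sync time.
    function total_overflows(peer: string, peer_id: string, current: count): count
        {
        if ( peer !in overflow_epochs )
            overflow_epochs[peer] = OverflowEpoch($peer_id=peer_id);

        local e = overflow_epochs[peer];

        if ( e$peer_id != peer_id )
            {
            # New peering: its counter restarted at zero, so fold what we
            # saw from the previous peering into the running total.
            e$past += e$last;
            e$peer_id = peer_id;
            }

        e$last = current;
        return e$past + current;
        }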
I considered just making this a gauge, but it seems more helpful to be able to
look at a counter to see whether any messages have ever been dropped over the
lifetime of the node process.
As an aside, this now also avoids repeatedly creating the labels vector,
re-using the same one for each metric.
Thanks to @pbcullen for identifying this one!
In GH-4422 it was pointed out that the
protocols/conn/failed-service-logging.zeek policy script only works
when `DPD::track_removed_services_in_connection=T` is set.
This was caused by a logic error in the script. This commit fixes the
logic error and adds a test that checks that failed-service-logging
works even when the option is not set to true.
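With the fix, loading the policy script is enough on its own; the redef
is no longer needed for it to work:

    @load protocols/conn/failed-service-logging
    # No longer required for the policy to take effect:
    # redef DPD::track_removed_services_in_connection = T;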
* origin/topic/timw/update-ct-ca-lists:
External tests: add removed logs to CT list to prevent baseline changes
Update Mozilla CA list and CT list to NSS 3.110