The .rst generation doesn't escape the trailing `_`, so the docs build
gets upset because the name is then treated as a reference to a `type`
target. For better or worse, revert to using tpe. Though I acknowledge
this means we need to be careful with trailing underscores because our
docs build is so fragile.
Partly reverts b9eabbabba.
* origin/topic/awelzel/4605-conn-id-context:
NEWS: Adapt for conn_id$ctx introduction
conn_key/fivetuple: Drop support for non conn_id records
Conn: Move conn_id init and flip to IPBasedConnKey
IPBasedConnKey: Add GetTransportProto() helper
input/Manager: Ignore empty record types
external: Bump commit hashes for external suites
ip/vlan_fivetuple: Populate nested conn_id_context, not conn_id
ConnKey: Extend DoPopulateConnIdVal() with ctx
btest: Update tests and baselines after adding ctx to conn_id
init-bare: Add conn_id_ctx to conn_id
This nested record can be used to distinguish the same orig_h or resp_h
being observed in different "contexts". A context can be based on VLAN
tags, but any custom ConnKey implementation should populate the ctx
field, making it possible to write context-aware Zeek scripts without
needing to know what the context really is.
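As a rough sketch (relying only on the ctx field named above and
assuming it is reachable as c$id$ctx), a context-aware script can treat
the context as an opaque value:

    event connection_established(c: connection)
        {
        # The ctx record is treated as opaque here; the vlan_fivetuple
        # policy fills it with VLAN tags, while custom ConnKey plugins
        # can put their own context into it.
        print "context for", c$id$orig_h, "is", c$id$ctx;
        }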
Closes #4504
Messages are not typical responses, so they need special handling. This
differs between RESP2 and RESP3, so this is the first instance where the
script layer needs to tell the two apart.
The formatting for the log is a bit ad hoc, but that's mostly because
cluster.log only has a message field and I don't think a dedicated
application_name column is worth it. That could still be added by custom
scripts if it's really wanted for a given deployment.
* origin/topic/awelzel/1474-cluster-telemetry:
btest/cluster/telemetry: Add smoke testing for telemetry
cluster/WebSocket: Fetch X-Application-Name header as app label
cluster/WebSocket: Pass X-Application-Name to dispatcher
broker/WebSocketShim: Add calls to Telemetry hooks
cluster/WebSocket: Configure telemetry for WebSocket backends
broker: Hook up generic cluster telemetry
cluster: Introduce telemetry component
One bug fix removing static from a variable that shouldn't be static.
Using a label named "endpoint" is not intuitive and requires explaining
to users that it's really just the Cluster::node value. Change the label
to "node" so that we don't need to do that explaining.
This probably breaks some existing users of the Prometheus metrics, but
after looking more at metrics recently, "endpoint" really is a thorn in
my side.
The response to BDAT LAST was never recognized, resulting in BDAT LAST
commands not being logged in a timely fashion and receiving the wrong
status.
This likely doesn't handle complex pipelining scenarios, but it fixes
smtp_reply() not handling responses to simple BDAT commands.
Thanks @cccs-jsjm for the report!
Closes #4522
This field is used internally to track which analyzers already had a
violation, mostly to prevent duplicate logging.
In the past, c$service_violation was used for a similar purpose -
however, it has slightly different semantics: where c$failed_analyzers
tracks analyzers that were removed due to a violation,
c$service_violation tracks violations and doesn't care whether an
analyzer was actually removed because of one.
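A minimal sketch of how a script could consult the new field (assuming
it is a set of analyzer names on the connection record, which the text
above doesn't spell out):

    event connection_state_remove(c: connection)
        {
        # Only report connections where at least one analyzer was
        # removed after a violation.
        if ( c?$failed_analyzers && |c$failed_analyzers| > 0 )
            print c$uid, c$failed_analyzers;
        }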
To address review feedback in GH-4362: rename analyzer-failed-log.zeek
to logging.zeek, analyzer-debug-log.zeek to debug-logging.zeek, and
dpd-log.zeek to deprecated-dpd-log.zeek.
Includes respective test, NEWS, etc. updates.
The main part of this commit consists of changes to tests. A lot of the
tests that previously relied on analyzer.log or dpd.log now use the new
analyzer-failed.log.
I verified all the changes and, as far as I can tell, everything behaves
as it should. This includes the external test baselines.
This change also enables logging of file and packet analyzer failures to
analyzer_failed.log and fixes some small behavior issues.
The analyzer_failed event is no longer raised when the removal of an
analyzer is vetoed.
If an analyzer is no longer active when an analyzer violation is raised,
the analyzer_failed event is currently raised. This can, e.g., happen
when an analyzer error occurs at the very end of a connection. This
makes the behavior more similar to what happened in the past, and also
intuitively seems to make sense.
A bug introduced in the failed service logging was fixed.
Analyzer-failed.log is, essentially, the replacement for dpd.log. The
name should make more sense, as it now logs analyzer failures. For
protocol analyzers specifically, these are failures that lead to the
analyzer being disabled.
The current analyzer.log is more useful for debugging than for
operational purposes. Hence it is disabled by default, moved to a policy
script, and the log is renamed to analyzer-debug.log.
Furthermore, logging of analyzer confirmations and the disabling of
analyzers are now enabled by default.
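For example, a deployment that wants the debugging-oriented log back can
load the policy script explicitly (paths inferred from the renames
described here, so treat this as a sketch):

    # Re-enable the optional debug log, written as analyzer-debug.log.
    @load frameworks/analyzer/debug-logging

    # Or keep the legacy dpd.log around during a transition period:
    # @load frameworks/analyzer/deprecated-dpd-log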
This is the first phase of moving from the current dpd log to a more
modern logfile, without some of the weirdnesses that the current dpd log
contains.
Tests will not pass in the current state; this is just splitting out
functionality.
* origin/topic/awelzel/4177-4178-custom-event-metadata-part-2:
Event: Bail on add_missing_remote_network_timestamp without add_network_timestamp
btest/plugin: Test custom metadata publish
NEWS: Add note about generic event metadata
cluster: Remove deprecated Event constructor
cluster: Remove some explicit timestamp handling
broker/Manager: Fetch and forward all metadata from events
Event/init-bare: Add add_missing_remote_network_timestamp logic
cluster/Backend/DoProcessEvent: Use generic metadata, not just timestamps
cluster/Event: Support moving args and metadata from event
cluster/serializer/broker: Support generic metadata
cluster/Event: Generic metadata support
Event: Use -1.0 for undefined/unset timestamps
cluster: Use shorter obj_desc versions
Desc: Add obj_desc() / obj_desc_short() overloads for IntrusivePtr
This change adds two new hooks to the Intel framework that can be used
to intercept added and removed indicators and their type.
These hooks are fairly low-level. One immediate use-case is to count the
number of indicators loaded per Intel::Type and enable and disable the
corresponding event groups of the intel/seen scripts.
I attempted to gauge the overhead and, while it's definitely there,
loading a file with ~500k DOMAIN entries adds somewhere around ~0.5
seconds of hook overhead when populated via the min_data_store
mechanism. While that doesn't sound great, it actually takes the manager
on my system 2.5 seconds to serialize and Cluster::publish() the
min_data_store alone, and it's doing that serially for every active
worker. Mostly to say that the bigger overhead in that area is the
manager doing redundant work per worker.
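A rough sketch of the counting use-case (the hook names and signatures
are assumptions for illustration; the text above only states that one
hook covers insertions and one covers removals):

    global indicators_per_type: table[Intel::Type] of count &default=0;

    # Hypothetical hook names/signatures, for illustration only.
    hook Intel::indicator_inserted(indicator: string, indicator_type: Intel::Type)
        {
        ++indicators_per_type[indicator_type];
        }

    hook Intel::indicator_removed(indicator: string, indicator_type: Intel::Type)
        {
        if ( indicators_per_type[indicator_type] > 0 )
            --indicators_per_type[indicator_type];
        }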
Co-authored-by: Mohan Dhawan <mohan@corelight.com>
This only changes the script-layer API, but keeps the std::string host
in the C++ layer's ServerOptions, mostly because the ixwebsocket library
takes the host as a std::string. Also, maybe at some point we'd want to
support something scheme-based like unix:///var/run/zeek.sock, and
keeping that in a string would not be totally wrong.
Add tests for IPv6, too.
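A sketch of what this looks like from the script layer (the function and
field names here are assumptions for illustration; the text above only
says that the host option is now a string):

    event zeek_init()
        {
        # Host given as a plain string; a future scheme-based form like
        # "unix:///var/run/zeek.sock" would fit into the same field.
        # Port chosen arbitrarily for the example.
        Cluster::listen_websocket([$listen_host="127.0.0.1", $listen_port=9997/tcp]);
        }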
- Recomputes checksums for pcaps to keep them clean
- Removes some tests that had big pcaps or weren't necessary
- Cleans up scripting names and minor points
- Comments out Spicy code that currently causes a build failure, with a
  TODO to uncomment it
This deprecates the Event constructor and the ``ts`` parameter of
Enqueue(). Instead, versions are introduced that take a
detail::MetadataVectorPtr which can hold the network timestamp metadata
and is meant to be allocated by the caller instead of automatically
during Enqueue() or within the Event constructor.
This also introduces a BifConst ``EventMetadata::add_network_timestamp``
to opt in to adding network timestamps to events globally. It's disabled
by default as there are not a lot of known use cases that need this.
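As a usage sketch, opting in is a one-line redef; the second option name
is taken from the commit subjects above and assumed to live in the same
module:

    # Attach network timestamp metadata to all events.
    redef EventMetadata::add_network_timestamp = T;

    # Optionally fill in a local timestamp for remote events arriving
    # without one; only meaningful with the option above enabled.
    redef EventMetadata::add_missing_remote_network_timestamp = T;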