This commit introduces a mechanism to attach lightweight analyzers to
the root analyzer of sessions in order to tap into the packets delivered
to child analyzers.
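
Conceptually, the mechanism looks roughly like the sketch below. All names
here (PacketTap, SessionAdapter, DeliverPacket, ForwardToChildren) are
placeholders chosen for illustration, not Zeek's actual classes or methods.

    // Hypothetical sketch only; names do not match Zeek's real API.
    #include <cstddef>
    #include <cstdint>
    #include <memory>
    #include <vector>

    class PacketTap {
    public:
        virtual ~PacketTap() = default;
        // Invoked for every packet the root analyzer hands to its children.
        virtual void TapPacket(const uint8_t* data, size_t len, bool is_orig) = 0;
    };

    class SessionAdapter {
    public:
        void AddTap(std::shared_ptr<PacketTap> tap) { taps.push_back(std::move(tap)); }

        void DeliverPacket(const uint8_t* data, size_t len, bool is_orig) {
            // Lightweight analyzers observe the packet first ...
            for ( const auto& tap : taps )
                tap->TapPacket(data, len, is_orig);

            // ... then the packet is forwarded to the regular child analyzers.
            ForwardToChildren(data, len, is_orig);
        }

    private:
        void ForwardToChildren(const uint8_t* data, size_t len, bool is_orig) {
            // Dispatch to child analyzers (omitted in this sketch).
        }

        std::vector<std::shared_ptr<PacketTap>> taps;
    };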
* origin/topic/awelzel/defer-more-stuff:
RecordType: Ensure &default fields are always re-initialized
Attr: Deprecate using &default and &optional together on record fields
RecordType: Allow deferring &default=vector(), set(), table() fields
This moves c$service_violation to the deprecated-dpd-log policy script.
This is the only script in the distribution that uses the field, and it
is unlikely to be used externally. It is also responsible for a
significant amount of memory use by itself.
This also restores the field being populated, which was broken in
GH-4362
The util:: versions of these methods remain as thin wrappers around them so
they can be used with const char* arguments. Otherwise, callers would have to
manually construct string_view objects from the input.
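
The shape of such a wrapper is roughly as follows; starts_with is a stand-in
name used for illustration, not one of the methods actually moved.

    #include <string_view>

    namespace detail {
    // New string_view-based implementation.
    inline bool starts_with(std::string_view s, std::string_view prefix) {
        return s.substr(0, prefix.size()) == prefix;
    }
    }

    namespace util {
    // Thin wrapper kept so existing const char* call sites keep working
    // without building a string_view by hand.
    inline bool starts_with(const char* s, const char* prefix) {
        return detail::starts_with(std::string_view{s}, std::string_view{prefix});
    }
    }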
Previously, we supported any records that happened to have orig_h,
resp_h, etc. fields, but it's not exactly clear why we ever did. Users
that relied on this can instantiate an explicit conn_id instance instead.
* origin/topic/awelzel/1474-cluster-telemetry:
btest/cluster/telemetry: Add smoke testing for telemetry
cluster/WebSocket: Fetch X-Application-Name header as app label
cluster/WebSocket: Pass X-Application-Name to dispatcher
broker/WebSocketShim: Add calls to Telemetry hooks
cluster/WebSocket: Configure telemetry for WebSocket backends
broker: Hook up generic cluster telemetry
cluster: Introduce telemetry component
Also includes one bug fix that removes static from a variable that shouldn't
have been static.
Using a label named "endpoint" is not intuitive and requires explaining to
users that it's really just the Cluster::node value. Change the label to
"node", so that we don't need to do the explaining.
This probably breaks some existing users of the Prometheus metrics, but after
looking more at metrics recently, "endpoint" really is a thorn in my eye.
This commit fixes the parsing of the data field in the SSL analyzer.
Previously, this field contained two extra bytes at the beginning, which
hold the length of the following data.
Now, the data passed to the event only contains the actual value of the
session ticket.
The Spicy analyzer already handles this field correctly and does not
need to be updated. A test that uses the event and exhibited the bug has
been added.
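
In effect, the fix amounts to skipping a two-byte, big-endian length prefix
before handing the value to the event. A rough standalone sketch, not the
analyzer's actual code:

    #include <cstddef>
    #include <cstdint>
    #include <string>

    std::string extract_session_ticket(const uint8_t* data, size_t len) {
        if ( len < 2 )
            return {}; // malformed: no room for the length prefix

        size_t ticket_len = (static_cast<size_t>(data[0]) << 8) | data[1];
        if ( ticket_len > len - 2 )
            ticket_len = len - 2; // clamp to the bytes actually present

        // Previously the two length bytes were included in what the event saw;
        // now only the value itself is passed on.
        return std::string(reinterpret_cast<const char*>(data) + 2, ticket_len);
    }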
Now the not_before/not_after fields output GMT-based times.
Also adds a new btest diff canonifier which only removes the first
timestamp in a line.
Fixes GH-4521
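
The canonifier's behavior can be mimicked with a short sketch like the one
below; this is illustrative only, not the actual script in the btest setup.

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        // Epoch-style timestamps such as 1747305600.123456.
        std::regex first_ts{R"(\d{9,10}\.\d+)"};
        std::string line;

        while ( std::getline(std::cin, line) )
            // format_first_only restricts the replacement to the first match,
            // leaving any later timestamps on the same line untouched.
            std::cout << std::regex_replace(line, first_ts, "XXXXXXXXXX.XXXXXX",
                                            std::regex_constants::format_first_only)
                      << '\n';
    }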
* origin/topic/awelzel/4177-4178-custom-event-metadata-part-2:
Event: Bail on add_missing_remote_network_timestamp without add_network_timestamp
btest/plugin: Test custom metadata publish
NEWS: Add note about generic event metadata
cluster: Remove deprecated Event constructor
cluster: Remove some explicit timestamp handling
broker/Manager: Fetch and forward all metadata from events
Event/init-bare: Add add_missing_remote_network_timestamp logic
cluster/Backend/DoProcessEvent: Use generic metadata, not just timestamps
cluster/Event: Support moving args and metadata from event
cluster/serializer/broker: Support generic metadata
cluster/Event: Generic metadata support
Event: Use -1.0 for undefined/unset timestamps
cluster: Use shorter obj_desc versions
Desc: Add obj_desc() / obj_desc_short() overloads for IntrusivePtr
This can happen if either there's no network timestamp associated with
an event, or there's currently no event being dispatched. Using 0.0
isn't great as it's the normal start timestamp before reading a network
packet. Using -1.0 gives the caller a chance to check and realize what's
going on.
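
A caller-side sketch of why the sentinel helps (plain C++, not Zeek's API):

    #include <cstdio>

    constexpr double kNoTimestamp = -1.0; // sentinel for "no network timestamp"

    void report(double ts) {
        if ( ts == kNoTimestamp ) {
            // Either no network timestamp was attached to the event, or no
            // event is currently being dispatched. With 0.0 this case would be
            // indistinguishable from "network time before the first packet".
            std::printf("no network timestamp available\n");
            return;
        }

        std::printf("event network time: %.6f\n", ts);
    }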
This change adds two new hooks to the Intel framework that can be used
to intercept added and removed indicators and their type.
These hooks are fairly low-level. One immediate use-case is to count the
number of indicators loaded per Intel::Type and enable and disable the
corresponding event groups of the intel/seen scripts.
I attempted to gauge the overhead, and while it's definitely there, loading
a file with ~500k DOMAIN entries takes somewhere around ~0.5 seconds longer
with the hooks in place when populated via the min_data_store mechanism.
While that doesn't sound great, it actually takes the manager on my system
2.5 seconds to serialize and Cluster::publish() the min_data_store alone,
and it's doing that serially for every active worker. Mostly to say that
the bigger overhead in that area is the manager doing redundant work
per worker.
Co-authored-by: Mohan Dhawan <mohan@corelight.com>