mirror of
https://github.com/zeek/zeek.git
synced 2025-10-02 06:38:20 +00:00
Merge remote-tracking branch 'origin/master' into topic/johanna/spicy-tls
* origin/master:
  Update broker submodule [nomail]
  telemetry: Deprecate prometheus.zeek policy script
  input/Manager: Improve type checks of record fields with type any
  Bump zeek-testing-cluster to pull in tee SIGPIPE fix
  ldap: Remove MessageWrapper with magic 0x30 searching
  ldap: Harden parsing a bit
  ldap: Handle integrity-only KRB wrap tokens
  Bump auxil/spicy to latest development snapshot
  CI: Set FETCH_CONTENT_FULLY_DISCONNECTED flag for configure
  Update broker and cmake submodules [nomail]
  Fix a broken merge
  Do not emit hook files for builtin modules
  Fix warning about grealpath when running 'make dist' on Linux
  Start of 7.1.0 development
  Updating submodule(s) [nomail]
  Update the scripts.base.frameworks.telemetry.internal-metrics test
  Revert "Temporarily disable the scripts/base/frameworks/telemetry/internal-metrics btest"
  Bump Broker to pull in new Prometheus support and pass in Zeek's registry
  Do not emit hook files for builtin modules
commit f95f5d2adb
47 changed files with 632 additions and 207 deletions
@@ -10,7 +10,7 @@ btest_jobs: &BTEST_JOBS 4
 btest_retries: &BTEST_RETRIES 2
 memory: &MEMORY 16GB

-config: &CONFIG --build-type=release --disable-broker-tests --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
+config: &CONFIG --build-type=release --disable-broker-tests --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror -D FETCHCONTENT_FULLY_DISCONNECTED:BOOL=ON
 no_spicy_config: &NO_SPICY_CONFIG --build-type=release --disable-broker-tests --disable-spicy --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
 static_config: &STATIC_CONFIG --build-type=release --disable-broker-tests --enable-static-broker --enable-static-binpac --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
 binary_config: &BINARY_CONFIG --prefix=$CIRRUS_WORKING_DIR/install --libdir=$CIRRUS_WORKING_DIR/install/lib --binary-package --enable-static-broker --enable-static-binpac --disable-broker-tests --build-type=Release --ccache --enable-werror
99 CHANGES
@@ -1,3 +1,102 @@
7.1.0-dev.23 | 2024-07-23 10:02:52 +0200

  * telemetry: Deprecate prometheus.zeek policy script (Arne Welzel, Corelight)

    With Cluster::Node$metrics_port being optional, there's no real
    need for the extra script. The new rule: if a metrics_port is set,
    the node will attempt to listen on it.

    Users can still redef Telemetry::metrics_port *after*
    base/frameworks/telemetry was loaded to change the port defined
    in cluster-layout.zeek.

  * Update broker submodule [nomail] (Tim Wojtulewicz, Corelight)

7.1.0-dev.20 | 2024-07-19 19:51:12 +0200

  * GH-3836: input/Manager: Improve type checks of record fields with type any (Arne Welzel, Corelight)

    Calling AsRecordType() or AsFunc() on a Val of type any isn't safe.

    Closes #3836

7.1.0-dev.18 | 2024-07-17 15:37:12 -0700

  * Bump zeek-testing-cluster to pull in tee SIGPIPE fix (Christian Kreibich, Corelight)

7.1.0-dev.16 | 2024-07-17 16:45:13 +0200

  * ldap: Remove MessageWrapper with magic 0x30 searching (Arne Welzel, Corelight)

    This unit implemented a heuristic to search for the 0x30 sequence
    byte if a Message couldn't readily be parsed. Remove it in favor of
    explicit and predictable support for SASL mechanisms.

  * ldap: Harden parsing a bit (Arne Welzel, Corelight)

    ASN1Message(True) may go off parsing arbitrary input data as
    "something ASN.1". This could be GBs of octet strings or just very
    long sequences. Avoid this by open-coding the expected top-level types.

    This also tries to avoid some of the &parse-from usages that result
    in unnecessary copies of data.

    Adds a locally generated PCAP with addRequest/addResponse that we
    don't currently handle.

  * ldap: Handle integrity-only KRB wrap tokens (Arne Welzel, Corelight)

    Mostly staring at the PCAPs and opened a few RFCs. For now, only if the
    MS_KRB5 OID is used and accepted in a bind response, start stripping
    KRB5 wrap tokens for both client and server traffic.

    It would probably be nice to forward the GSS-API data to the analyzer...

    Closes zeek/spicy-ldap#29.

7.1.0-dev.12 | 2024-07-16 10:16:02 -0700

  * Bump auxil/spicy to latest development snapshot (Benjamin Bannier, Corelight)

    This patch bumps Spicy to the latest development snapshot. This
    introduces a backwards-incompatible change in that it removes support
    for a never officially supported syntax for specifying unit fields (so I
    would argue: not strictly a breaking change).

7.1.0-dev.10 | 2024-07-12 16:02:22 -0700

  * CI: Set FETCH_CONTENT_FULLY_DISCONNECTED flag for configure (Tim Wojtulewicz, Corelight)

  * Fix a broken merge (Tim Wojtulewicz, Corelight)

    I merged an old version of the branch by accident and then merged the right
    one over top of it, but git ended up including both versions. This fixes
    that mistake.

7.1.0-dev.6 | 2024-07-12 09:51:39 -0700

  * Do not emit hook files for builtin modules (Benjamin Bannier, Corelight)

    We would previously emit a C++ file with hooks for at least the builtin
    `spicy` module, even though that module, like any other builtin module,
    never contains implementations of hooks for types in user code.

    This patch prevents modules with skipped implementations (such as our
    builtin modules) from being added to the compilation, which prevents
    generating their hook files.

7.1.0-dev.2 | 2024-07-12 09:46:34 -0700

  * Fix warning about grealpath when running 'make dist' on Linux (Tim Wojtulewicz, Corelight)

7.0.0-dev.467 | 2024-07-11 12:14:52 -0700

  * Update the scripts.base.frameworks.telemetry.internal-metrics test (Christian Kreibich, Corelight)

  * Revert "Temporarily disable the scripts/base/frameworks/telemetry/internal-metrics btest" (Christian Kreibich, Corelight)

  * Bump Broker to pull in new Prometheus support and pass in Zeek's registry (Dominik Charousset and Christian Kreibich, Corelight)

7.0.0-dev.461 | 2024-07-10 18:45:36 +0200

  * Extend btest for logging of disabled analyzers (Jan Grashoefer, Corelight)
2 Makefile

@@ -9,7 +9,7 @@ BUILD=build
 REPO=$$(cd $(CURDIR) && basename $$(git config --get remote.origin.url | sed 's/^[^:]*://g'))
 VERSION_FULL=$(REPO)-$$(cd $(CURDIR) && cat VERSION)
 GITDIR=$$(test -f .git && echo $$(cut -d" " -f2 .git) || echo .git)
-REALPATH=$$($$(realpath --relative-to=$(pwd) . >/dev/null 2>&1) && echo 'realpath' || echo 'grealpath')
+REALPATH=$$($$(realpath --relative-to=$(shell pwd) . >/dev/null 2>&1) && echo 'realpath' || echo 'grealpath')

 all: configured
 	$(MAKE) -C $(BUILD) $@
29 NEWS

@@ -3,6 +3,30 @@ This document summarizes the most important changes in the current Zeek
 release. For an exhaustive list of changes, see the ``CHANGES`` file
 (note that submodules, such as Broker, come with their own ``CHANGES``.)

+Zeek 7.1.0
+==========
+
+Breaking Changes
+----------------
+
+New Functionality
+-----------------
+
+* The LDAP analyzer now supports handling of non-sealed GSS-API WRAP tokens.
+
+Changed Functionality
+---------------------
+
+* Heuristics for parsing SASL encrypted and signed LDAP traffic have been
+  made more strict and predictable. Please provide input if this results in
+  less visibility in your environment.
+
+Removed Functionality
+---------------------
+
+Deprecated Functionality
+------------------------
+
 Zeek 7.0.0
 ==========

@@ -167,6 +191,11 @@ Deprecated Functionality
 - The ``--disable-archiver`` configure flag no longer does anything and will be
   removed in 7.1. zeek-archiver has moved into the zeek-aux repository.

+- The policy/frameworks/telemetry/prometheus.zeek script has been deprecated
+  and will be removed with Zeek 7.1. Setting the ``metrics_port`` field on a
+  ``Cluster::Node`` implies listening on that port and exposing telemetry
+  in Prometheus format.
+
 Zeek 6.2.0
 ==========
2 VERSION

@@ -1 +1 @@
-7.0.0-dev.461
+7.1.0-dev.23
@@ -1 +1 @@
-Subproject commit fd83a789848b485c81f28b8a6af23d28eca7b3c7
+Subproject commit 7c5ccc9aa91466004bc4a0dbbce11a239f3e742e

@@ -1 +1 @@
-Subproject commit 7db629d4e2f8128e3e27aa28200106fa6d553be0
+Subproject commit a5c8f19fb49c60171622536fa6d369fa168f19e0

@@ -1 +1 @@
-Subproject commit c47de11e4b84f24e8b501c3b1a446ad808e4964a
+Subproject commit 4348515873b4d1b0e44c7344011b18d21411accf

@@ -1 +1 @@
-Subproject commit 396723c04ba1f8f2f75555745a503b8edf353ff6
+Subproject commit 610cf8527dad7033b971595a1d556c2c95294f2b

@@ -1 +1 @@
-Subproject commit 6581b1855a5ea8cc102c66b4ac6a431fc67484a0
+Subproject commit 4a1b43ef07d1305a7e88a4f0866068dc49de9d06

@@ -1 +1 @@
-Subproject commit 1478f2ee550a0f99f5b93975c17ae814ebe515b7
+Subproject commit 8a66cd60fb29a1237b5070854cb194f43a3f7a30

@@ -1 +1 @@
-Subproject commit 7671450f34c65259463b4fd651a18df3935f235c
+Subproject commit 39c0ee1e1742bb28dff57632ee4620f905b892e7

2 cmake

@@ -1 +1 @@
-Subproject commit db0d52761f38f3602060da36adc1afff608730c1
+Subproject commit 2d42baf8e63a7494224aa9d02afa2cb43ddb96b8
@@ -1,3 +1 @@
 @load ./main
-
-@load base/frameworks/cluster
@@ -5,10 +5,28 @@
 ##! enabled by setting :zeek:see:`Telemetry::metrics_port`.

 @load base/misc/version
+@load base/frameworks/cluster

 @load base/frameworks/telemetry/options

 module Telemetry;

+# In a cluster configuration, open the port number for metrics
+# from the cluster node configuration for exporting data to
+# Prometheus.
+#
+# The manager node will also provide a ``/services.json`` endpoint
+# for the HTTP Service Discovery system in Prometheus to use for
+# configuration. This endpoint will include information for all of
+# the other nodes in the cluster.
+@if ( Cluster::is_enabled() )
+redef Telemetry::metrics_endpoint_name = Cluster::node;
+
+@if ( Cluster::local_node_metrics_port() != 0/unknown )
+redef Telemetry::metrics_port = Cluster::local_node_metrics_port();
+@endif
+@endif
+
 export {
 	## Alias for a vector of label values.
 	type labels_vector: vector of string;
@@ -1,19 +1,2 @@
-##! In a cluster configuration, open the port number for metrics
-##! from the cluster node configuration for exporting data to
-##! Prometheus.
-##!
-##! The manager node will also provide a ``/services.json`` endpoint
-##! for the HTTP Service Discovery system in Prometheus to use for
-##! configuration. This endpoint will include information for all of
-##! the other nodes in the cluster.
-@load base/frameworks/cluster
-
-@if ( Cluster::is_enabled() )
-
-redef Telemetry::metrics_endpoint_name = Cluster::node;
-
-@if ( Cluster::local_node_metrics_port() != 0/unknown )
-redef Telemetry::metrics_port = Cluster::local_node_metrics_port();
-@endif
-
-@endif
+@deprecated "Remove in v7.1: Cluster nodes now implicitly listen on metrics port if set in cluster-layout."
+@load base/frameworks/telemetry
@@ -94,10 +94,6 @@ redef digest_salt = "Please change this value.";
 # telemetry_histogram.log.
 @load frameworks/telemetry/log

-# Enable Prometheus metrics scraping in the cluster: each Zeek node will listen
-# on the metrics port defined in its Cluster::nodes entry.
-# @load frameworks/telemetry/prometheus
-
 # Uncomment the following line to enable detection of the heartbleed attack. Enabling
 # this might impact performance a bit.
 # @load policy/protocols/ssl/heartbleed
@@ -15,7 +15,7 @@ public type Request = unit {

     switch {
         -> : /\/W/ { self.whois = True; }
-        -> void;
+        -> : void;
     };

     : OptionalWhiteSpace;
@@ -126,125 +126,126 @@ public type Result = unit {
     # https://tools.ietf.org/html/rfc4511#section-4.1.10
 };

+# 1.2.840.48018.1.2.2 (MS KRB5 - Microsoft Kerberos 5)
+const GSSAPI_MECH_MS_KRB5 = "1.2.840.48018.1.2.2";
+
+# Supported SASL stripping modes.
+type SaslStripping = enum {
+    MS_KRB5 = 1, # Payload starts with a 4 byte length followed by a wrap token that may or may not be sealed.
+};
+
+type Ctx = struct {
+    saslStripping: SaslStripping; # Which mode of SASL stripping to use.
+};
+
 #-----------------------------------------------------------------------------
 public type Messages = unit {
-    : MessageWrapper[];
+    %context = Ctx;
+    : SASLStrip(self.context())[];
 };

 #-----------------------------------------------------------------------------
-type SASLLayer = unit {
-    # For the time being (before we support parsing the SASL layer) this unit
-    # is used by MessageWrapper below to strip it (SASL) so that the parser
-    # can attempt to resume parsing afterward. It also sets the success flag
-    # if '\x30' is found, otherwise backtracks so that we can deal with encrypted
-    # SASL payloads without raising a parse error.
-    var success: bool = False;
-    : bytes &until=b"\x30" {
-        self.success = True;
-    }
-
-    on %error {
-        self.backtrack();
-    }
-};
+public type SASLStrip = unit(ctx: Ctx&) {
+    switch ( ctx.saslStripping ) {
+        SaslStripping::Undef -> : Message(ctx);
+        SaslStripping::MS_KRB5 -> : SaslMsKrb5Stripper(ctx);
+    };
+};
+
+type KrbWrapToken = unit {
+    # https://datatracker.ietf.org/doc/html/rfc4121#section-4.2.6.2
+
+    # Number of bytes to expect *after* the payload.
+    var trailer_ec: uint64;
+    var header_ec: uint64;
+
+    ctx_flags: bitfield(8) {
+        send_by_acceptor: 0;
+        sealed: 1;
+        acceptor_subkey: 2;
+    };
+    filler: skip b"\xff";
+    ec: uint16; # extra count
+    rrc: uint16 { # right rotation count
+        # Handle rrc == ec or rrc == 0.
+        if ( self.rrc == self.ec ) {
+            self.header_ec = self.ec;
+        } else if ( self.rrc == 0 ) {
+            self.trailer_ec = self.ec;
+        } else {
+            throw "Unhandled rrc %s and ec %s" % (self.rrc, self.ec);
+        }
+    }
+    snd_seq: uint64;
+    header_e: skip bytes &size=self.header_ec;
+};
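The `ec`/`rrc` handling above follows the wrap-token layout of RFC 4121, section 4.2.6.2. As an illustration only (not the analyzer's code; the function name and return shape are made up for the example, and the two-byte TOK_ID that the surrounding unit consumes separately is included here), the same invariant can be sketched in Python:

```python
import struct

def parse_krb_wrap_token(data: bytes) -> dict:
    # RFC 4121 4.2.6.2: TOK_ID | Flags | Filler | EC | RRC | SND_SEQ (16 bytes)
    tok_id, flags, filler, ec, rrc, snd_seq = struct.unpack(">HBBHHQ", data[:16])
    if tok_id != 0x0504:
        raise ValueError("not a wrap token")
    if filler != 0xFF:
        raise ValueError("bad filler octet")
    sealed = bool(flags & 0x02)  # SealedFlag is bit 1 of the flags octet
    # Mirror the parser's invariant: the EC extra-count bytes either sit in
    # the rotated header (rrc == ec) or trail the payload (rrc == 0).
    header_ec = trailer_ec = 0
    if rrc == ec:
        header_ec = ec
    elif rrc == 0:
        trailer_ec = ec
    else:
        raise ValueError(f"unhandled rrc {rrc} and ec {ec}")
    return {"sealed": sealed, "snd_seq": snd_seq,
            "header_ec": header_ec, "trailer_ec": trailer_ec}
```

With `header_ec` known, a caller can skip the rotated EC bytes before the payload; with `trailer_ec`, it knows how many bytes to strip after it, which is exactly what the stripper's `switch_size` bookkeeping does.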
 #-----------------------------------------------------------------------------
-public type MessageWrapper = unit {
-    # A wrapper around 'Message'. First, we try to parse a Message unit.
-    # There are two possible outcomes:
-    # (1) Success -> We consumed all bytes and successfully parsed a Message unit
-    # (2) No success -> self.backtrack() is called in the Message unit,
-    #     so effectively we didn't consume any bytes yet.
-    # The outcome can be determined by checking the `success` variable of the Message unit.
-
-    # This success variable is different, because this keeps track of the status for the MessageWrapper object.
-    var success: bool = False;
-    var message: Message;
-
-    # Here, we try to parse the message...
-    : Message &try {
-        # ... and only if the Message unit successfully parsed, we can set
-        # the status of this MessageWrapper's success to 'True'.
-        if ( $$.success == True ) {
-            self.success = True;
-            self.message = $$;
-        }
-    }
-
-    # If we failed to parse the message, then we're going to scan the remaining bytes for the '\x30'
-    # start byte and try to parse a Message starting from that byte. This effectively
-    # strips the SASL layer if SASL signing was enabled. Until now, I haven't found a
-    # better way to scan / determine the exact SASL header length, so we'll stick with this
-    # for the time being. If the entire LDAP packet was encrypted with SASL, then we skip parsing for
-    # now (in the long run we need to be parsing SASL/GSSAPI instead, in which case encrypted payloads
-    # are just another message type).
-
-    # SASLLayer (see unit above) just consumes bytes &until=b"\x30" or backtracks if it isn't found
-    # and sets a success flag we can use later to decide if those bytes contain a parsable message.
-    var sasl_success: bool = False;
-    : SASLLayer &try if ( self.success == False ) {
-        if ( $$.success == True ) {
-            self.sasl_success = True;
-        }
-    }
-    var remainder: bytes;
-
-    # SASLLayer consumes the delimiter ('\x30'), and because this is the first byte of a valid LDAP message
-    # we should re-add it to the remainder if the delimiter was found. If the delimiter was not found, we
-    # leave the remainder empty, but note that the bytes must be consumed either way to avoid stalling the
-    # parser and causing an infinite loop error.
-    : bytes &eod if ( self.success == False ) {
-        if ( self.sasl_success == True ) {
-            self.remainder = b"\x30" + $$;
-        }
-    }
-
-    # Again, try to parse a Message unit. Be aware that this will sometimes fail if the '\x30' byte is
-    # also present in the SASL header.
-    #
-    # Also, we could try to do this recursively or try a few iterations, but for now I would suggest
-    # trying this extra parsing once to get the best cost/benefit tradeoff.
-    : Message &try &parse-from=self.remainder if ( self.success == False && self.sasl_success == True ) {
-        if ( $$.success == True ) {
-            self.success = True;
-            self.message = $$;
-        }
-    }
-
-    # If we still didn't manage to parse a message (so the &try resulted in another backtrack()) then
-    # this is probably an encrypted LDAP message, so skip it.
-} &convert=self.message;
+type SaslMsKrb5Stripper = unit(ctx: Ctx&) {
+    # This is based on Wireshark output and example traffic we have. There's always
+    # a 4 byte length field followed by the krb5_tok_id field in messages after
+    # MS_KRB5 was selected. I haven't read enough specs to understand if it's
+    # just this one case that works, or others could use the same stripping.
+    var switch_size: uint64;
+
+    len: uint32;
+    krb5_tok_id: uint16;
+
+    switch ( self.krb5_tok_id ) {
+        0x0504 -> krb_wrap_token: KrbWrapToken;
+        * -> : void;
+    };
+
+    : skip bytes &size=0 {
+        self.switch_size = self.len - (self.offset() - 4);
+        if ( self?.krb_wrap_token )
+            self.switch_size -= self.krb_wrap_token.trailer_ec;
+    }
+
+    switch ( self?.krb_wrap_token && ! self.krb_wrap_token.ctx_flags.sealed ) {
+        True -> : Message(ctx)[] &eod;
+        * -> : skip bytes &eod;
+    } &size=self.switch_size;
+
+    # Consume the wrap token trailer, if any.
+    trailer_e: skip bytes &size=self.krb_wrap_token.trailer_ec if (self?.krb_wrap_token);
+};

 #-----------------------------------------------------------------------------
-public type Message = unit {
+public type Message = unit(ctx: Ctx&) {
     var messageID: int64;
     var opcode: ProtocolOpcode = ProtocolOpcode::Undef;
-    var applicationBytes: bytes;
     var unsetResultDefault: Result;
     var result_: Result& = self.unsetResultDefault;
     var obj: string = "";
     var arg: string = "";
     var success: bool = False;
+    var seqHeaderLen: uint64;
+    var msgLen: uint64;

-    : ASN1::ASN1Message(True) {
-        if (($$.head.tag.type_ == ASN1::ASN1Type::Sequence) &&
-            ($$.body?.seq) &&
-            (|$$.body.seq.submessages| >= 2)) {
-            if ($$.body.seq.submessages[0].body?.num_value) {
-                self.messageID = $$.body.seq.submessages[0].body.num_value;
-            }
-            if ($$.body.seq.submessages[1]?.application_id) {
-                self.opcode = cast<ProtocolOpcode>(cast<uint8>($$.body.seq.submessages[1].application_id));
-                self.applicationBytes = $$.body.seq.submessages[1].application_data;
-            }
-        }
-    }
+    seqHeader: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Universal && $$.tag.type_ == ASN1::ASN1Type::Sequence) {
+        self.msgLen = $$.len.len;
+    }
+
+    # Use offset() to determine how many bytes the seqHeader took. This
+    # needs to be done after the seqHeader field hook.
+    : void {
+        self.seqHeaderLen = self.offset();
+    }
+
+    messageID_header: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Universal && $$.tag.type_ == ASN1::ASN1Type::Integer);
+    : ASN1::ASN1Body(self.messageID_header, False) {
+        self.messageID = $$.num_value;
+    }
+
+    protocolOp: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Application) {
+        self.opcode = cast<ProtocolOpcode>(cast<uint8>($$.tag.type_));
+    }

     switch ( self.opcode ) {
         ProtocolOpcode::BIND_REQUEST -> BIND_REQUEST: BindRequest(self);
-        ProtocolOpcode::BIND_RESPONSE -> BIND_RESPONSE: BindResponse(self);
+        ProtocolOpcode::BIND_RESPONSE -> BIND_RESPONSE: BindResponse(self, ctx);
         ProtocolOpcode::UNBIND_REQUEST -> UNBIND_REQUEST: UnbindRequest(self);
         ProtocolOpcode::SEARCH_REQUEST -> SEARCH_REQUEST: SearchRequest(self);
         ProtocolOpcode::SEARCH_RESULT_ENTRY -> SEARCH_RESULT_ENTRY: SearchResultEntry(self);
@@ -267,17 +268,15 @@ public type Message = unit {
         ProtocolOpcode::INTERMEDIATE_RESPONSE -> INTERMEDIATE_RESPONSE: NotImplemented(self);
         ProtocolOpcode::MOD_DN_REQUEST -> MOD_DN_REQUEST: NotImplemented(self);
         ProtocolOpcode::SEARCH_RESULT_REFERENCE -> SEARCH_RESULT_REFERENCE: NotImplemented(self);
-    } &parse-from=self.applicationBytes if ( self.opcode );
+    } &size=self.protocolOp.len.len;

     on %error {
         self.backtrack();
     }

+    # Ensure some invariants hold after parsing the command.
+    : void &requires=(self.offset() >= self.seqHeaderLen);
+    : void &requires=(self.msgLen >= (self.offset() - self.seqHeaderLen));
+
     on %done {
         self.success = True;
     }

-} &requires=((self?.messageID) && (self?.opcode) && (self.opcode != ProtocolOpcode::Undef));
+    # Eat the controls field if it exists.
+    : skip bytes &size=self.msgLen - (self.offset() - self.seqHeaderLen);
+};
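The `seqHeader`/`msgLen` bookkeeping above works on plain DER tag-length headers: `msgLen` comes out of the length octets, and `offset()` arithmetic skips whatever the header consumed. For illustration only (definite-length DER with single-byte identifiers; not the analyzer's ASN.1 helper), such header decoding looks roughly like:

```python
def parse_der_header(data: bytes):
    """Return (tag, content_length, header_length) for a DER TLV header."""
    tag = data[0]
    first = data[1]
    if first < 0x80:            # short form: length fits in 7 bits
        return tag, first, 2
    n = first & 0x7F            # long form: the next n bytes hold the length
    if n == 0 or len(data) < 2 + n:
        raise ValueError("indefinite or truncated length")
    length = int.from_bytes(data[2:2 + n], "big")
    return tag, length, 2 + n
```

A caller would read the header, note `header_length` (the `seqHeaderLen` analogue), and then enforce that no more than `content_length` bytes of body are consumed, which is what the `&requires` invariants above check.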
 #-----------------------------------------------------------------------------
 # Bind Operation

@@ -288,15 +287,99 @@ public type BindAuthType = enum {
     BIND_AUTH_SASL = 3,
 };

+type GSS_SPNEGO_negTokenInit = unit {
+    oidHeader: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Universal && $$.tag.type_ == ASN1::ASN1Type::ObjectIdentifier);
+    oid: ASN1::ASN1ObjectIdentifier(self.oidHeader.len.len) &requires=(self.oid.oidstring == "1.3.6.1.5.5.2");
+
+    # TODO: Parse the rest of negTokenInit.
+    : skip bytes &eod;
+};
+
+# Peek into the GSS-SPNEGO payload and ensure it is indeed GSS-SPNEGO.
+type GSS_SPNEGO = unit {
+    # This is the optional octet string in SaslCredentials.
+    credentialsHeader: ASN1::ASN1Header &requires=($$.tag.type_ == ASN1::ASN1Type::OctetString);
+
+    # Now we either have the initial message as specified in RFC 2743 or
+    # a continuation from RFC 4178.
+    #
+    # 60 -> APPLICATION [0] https://datatracker.ietf.org/doc/html/rfc2743#page-81
+    # a1 -> CHOICE [1] https://www.rfc-editor.org/rfc/rfc4178#section-4.2
+    #
+    gssapiHeader: ASN1::ASN1Header &requires=(
+        $$.tag.class == ASN1::ASN1Class::Application && $$.tag.type_ == ASN1::ASN1Type(0)
+        || $$.tag.class == ASN1::ASN1Class::ContextSpecific && $$.tag.type_ == ASN1::ASN1Type(1)
+    );
+
+    switch ( self.gssapiHeader.tag.type_ ) {
+        ASN1::ASN1Type(0) -> initial: GSS_SPNEGO_negTokenInit;
+        * -> : skip bytes &eod;
+    } &size=self.gssapiHeader.len.len;
+};
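The `oidstring` comparison above (`"1.3.6.1.5.5.2"`, the SPNEGO OID) and the `GSSAPI_MECH_MS_KRB5` constant compare dotted-decimal renderings of the BER-encoded OBJECT IDENTIFIER content octets. A sketch of that decoding (illustrative only; the first-octet split below assumes the common case of a first arc of 0, 1, or 2 with a small second arc):

```python
def decode_oid(body: bytes) -> str:
    """Decode the content octets of an ASN.1 OBJECT IDENTIFIER to dotted form."""
    if not body:
        raise ValueError("empty OID")
    # The first octet packs the first two arcs as 40 * arc1 + arc2.
    arcs = [body[0] // 40, body[0] % 40]
    value = 0
    for b in body[1:]:
        value = (value << 7) | (b & 0x7F)  # base-128 digits, high bit = continue
        if not b & 0x80:
            arcs.append(value)
            value = 0
    return ".".join(str(a) for a in arcs)
```

For example, the six content octets `2b 06 01 05 05 02` decode to `1.3.6.1.5.5.2`, the value the `&requires` on `oid` insists on.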
 type SaslCredentials = unit() {
-    mechanism: ASN1::ASN1Message(True) &convert=$$.body.str_value;
-    # TODO: if we want to parse the (optional) credentials string
+    mechanism: ASN1::ASN1Message(False) &convert=$$.body.str_value;
+
+    # Peek into the GSS-SPNEGO payload if we have any.
+    switch ( self.mechanism ) {
+        "GSS-SPNEGO" -> gss_spnego: GSS_SPNEGO;
+        * -> : skip bytes &eod;
+    };
 };

+type NegTokenResp = unit {
+    var accepted: bool;
+    var supportedMech: ASN1::ASN1Message;
+
+    # Parse the contained sequence.
+    seq: ASN1::ASN1Message(True) {
+        for ( msg in $$.body.seq.submessages ) {
+            # https://www.rfc-editor.org/rfc/rfc4178#section-4.2.2
+            if ( msg.application_id == 0 ) {
+                self.accepted = msg.application_data == b"\x0a\x01\x00";
+            } else if ( msg.application_id == 1 ) {
+                self.supportedMech = msg;
+            } else if ( msg.application_id == 2 ) {
+                # ignore responseToken
+            } else if ( msg.application_id == 3 ) {
+                # ignore mechListMIC
+            } else {
+                throw "unhandled NegTokenResp id %s" % msg.application_id;
+            }
+        }
+    }
+
+    switch ( self?.supportedMech ) {
+        True -> supportedMechOid: ASN1::ASN1Message(False) &convert=$$.body.str_value;
+        * -> : void;
+    } &parse-from=self.supportedMech.application_data;
+};
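The `accepted` check above compares the raw DER encoding of negState against `b"\x0a\x01\x00"`, i.e. ENUMERATED 0. Per RFC 4178, section 4.2.2, the four negState values map as sketched here (illustrative helper, not part of the commit):

```python
# RFC 4178 section 4.2.2: negState ::= ENUMERATED { ... }
NEG_STATES = {0: "accept-completed", 1: "accept-incomplete",
              2: "reject", 3: "request-mic"}

def neg_state(data: bytes) -> str:
    # 0x0A = ENUMERATED tag, then a one-byte length and the value octet.
    if len(data) != 3 or data[0] != 0x0A or data[1] != 0x01:
        raise ValueError("not a single-byte DER ENUMERATED")
    return NEG_STATES[data[2]]
```

Only `accept-completed` (the `b"\x0a\x01\x00"` encoding) flips `accepted` to true, which is the precondition for enabling KRB5 wrap-token stripping later in `BindResponse`.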
+type ServerSaslCreds = unit {
+    serverSaslCreds: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::ContextSpecific && $$.tag.type_ == ASN1::ASN1Type(7));
+
+    # The PCAP missing_ldap_logs.pcapng has a1 81 b6 here for the GSS-SPNEGO response.
+    #
+    # This is context-specific ID 1, constructed, with a length of 182, as
+    # specified in Section 4.2 of RFC 4178.
+    #
+    # https://www.rfc-editor.org/rfc/rfc4178#section-4.2
+    #
+    # TODO: This is only valid for a GSS-SPNEGO negTokenResp.
+    # If you want to support something else, remove the &requires
+    # and add more cases to the switch below.
+    choice: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::ContextSpecific);
+
+    switch ( self.choice.tag.type_ ) {
+        ASN1::ASN1Type(1) -> negTokenResp: NegTokenResp;
+        # ...
+    } &size=self.choice.len.len;
+};

 # TODO(fox-ds): A helper unit for requests for which no handling has been implemented.
 # Eventually all uses of this unit should be replaced with actual parsers so this unit can be removed.
 type NotImplemented = unit(inout message: Message) {
     # Do nothing
     : skip bytes &eod;
 };

 type BindRequest = unit(inout message: Message) {
@@ -324,14 +407,32 @@ type BindRequest = unit(inout message: Message) {
          (|self.authData| > 0)) {
         message.arg = self.saslCreds.mechanism;
     }
-} &requires=((self?.authType) && (self.authType != BindAuthType::Undef));
+} &requires=(self?.authType && (self.authType != BindAuthType::Undef));

-type BindResponse = unit(inout message: Message) {
+type BindResponse = unit(inout message: Message, ctx: Ctx&) {
     : Result {
         message.result_ = $$;
     }

-    # TODO: if we want to parse SASL credentials returned
+    # Try to parse serverSaslCreds if there's any input remaining. This
+    # unit is parsed with &size, so &eod here works.
+    #
+    # Technically we should be able to tell from the ASN.1 structure
+    # whether the serverSaslCreds field exists or not. But it's not clear we
+    # can check if there are any bytes left at this point, outside of passing
+    # in the length and playing with offset().
+    serverSaslCreds: ServerSaslCreds[] &eod {
+        if ( |self.serverSaslCreds| > 0 ) {
+            if ( self.serverSaslCreds[0]?.negTokenResp ) {
+                local token = self.serverSaslCreds[0].negTokenResp;
+                if ( token.accepted && token?.supportedMechOid ) {
+                    if ( token.supportedMechOid == GSSAPI_MECH_MS_KRB5 ) {
+                        ctx.saslStripping = SaslStripping::MS_KRB5;
+                    }
+                }
+            }
+        }
+    }
 };

 #-----------------------------------------------------------------------------
@@ -899,6 +1000,6 @@ type AbandonRequest = unit(inout message: Message) {
 #
 # };

-on LDAP::MessageWrapper::%done {
+on LDAP::Message::%done {
     spicy::accept_input();
 }
@@ -7,7 +7,7 @@ import spicy;
 public type Message = unit {
     switch {
         -> prio: Priority;
-        -> void;
+        -> : void;
     };

     msg: bytes &eod;
@@ -189,7 +189,7 @@ struct opt_mapping {
 class BrokerState {
 public:
     BrokerState(broker::configuration config, size_t congestion_queue_size)
-        : endpoint(std::move(config)),
+        : endpoint(std::move(config), telemetry_mgr->GetRegistry()),
           subscriber(
               endpoint.make_subscriber({broker::topic::statuses(), broker::topic::errors()}, congestion_queue_size)) {}
@ -264,6 +264,15 @@ bool Manager::CreateStream(Stream* info, RecordVal* description) {
|
|||
return true;
|
||||
}
|
||||
|
||||
// Return true if v is a TypeVal that contains a record type, else false.
|
||||
static bool is_record_type_val(const zeek::ValPtr& v) {
|
||||
const auto& t = v->GetType();
|
||||
return t->Tag() == TYPE_TYPE && t->AsTypeType()->GetType()->Tag() == TYPE_RECORD;
|
||||
}
|
||||
|
||||
// Return true if v contains a FuncVal, else false.
|
||||
static bool is_func_val(const zeek::ValPtr& v) { return v->GetType()->Tag() == TYPE_FUNC; }
|
||||
|
||||
bool Manager::CreateEventStream(RecordVal* fval) {
|
||||
RecordType* rtype = fval->GetType()->AsRecordType();
|
||||
if ( ! same_type(rtype, BifType::Record::Input::EventDescription, false) ) {
|
||||
|
@ -274,11 +283,21 @@ bool Manager::CreateEventStream(RecordVal* fval) {
|
|||
string stream_name = fval->GetFieldOrDefault("name")->AsString()->CheckString();
|
||||
|
||||
auto fields_val = fval->GetFieldOrDefault("fields");
|
||||
if ( ! is_record_type_val(fields_val) ) {
|
||||
reporter->Error("Input stream %s: 'idx' field is not a record type", stream_name.c_str());
|
||||
return false;
|
||||
}
|
||||
|
||||
RecordType* fields = fields_val->AsType()->AsTypeType()->GetType()->AsRecordType();
|
||||
|
||||
auto want_record = fval->GetFieldOrDefault("want_record");
|
||||
|
||||
auto ev_val = fval->GetFieldOrDefault("ev");
|
||||
if ( ev_val && ! is_func_val(ev_val) ) {
|
||||
reporter->Error("Input stream %s: 'ev' field is not an event", stream_name.c_str());
|
||||
return false;
|
||||
}
|
||||
|
||||
Func* event = ev_val->AsFunc();
|
||||
|
||||
const auto& etype = event->GetType();
|
||||
|
@@ -356,6 +375,11 @@ bool Manager::CreateEventStream(RecordVal* fval) {
        assert(false);

    auto error_event_val = fval->GetFieldOrDefault("error_ev");
    if ( error_event_val && ! is_func_val(error_event_val) ) {
        reporter->Error("Input stream %s: 'error_ev' field is not an event", stream_name.c_str());
        return false;
    }

    Func* error_event = error_event_val ? error_event_val->AsFunc() : nullptr;

    if ( ! CheckErrorEventTypes(stream_name, error_event, false) )
@@ -414,15 +438,31 @@ bool Manager::CreateTableStream(RecordVal* fval) {

    auto pred = fval->GetFieldOrDefault("pred");
    auto idx_val = fval->GetFieldOrDefault("idx");
    if ( ! is_record_type_val(idx_val) ) {
        reporter->Error("Input stream %s: 'idx' field is not a record type", stream_name.c_str());
        return false;
    }

    RecordType* idx = idx_val->AsType()->AsTypeType()->GetType()->AsRecordType();

    RecordTypePtr val;
    auto val_val = fval->GetFieldOrDefault("val");

-   if ( val_val )
+   if ( val_val ) {
        if ( ! is_record_type_val(val_val) ) {
            reporter->Error("Input stream %s: 'val' field is not a record type", stream_name.c_str());
            return false;
        }

        val = val_val->AsType()->AsTypeType()->GetType<RecordType>();
    }

    auto dst = fval->GetFieldOrDefault("destination");
    if ( ! dst->GetType()->IsSet() && ! dst->GetType()->IsTable() ) {
        reporter->Error("Input stream %s: 'destination' field has type %s, expected table or set identifier",
                        stream_name.c_str(), obj_desc_short(dst->GetType().get()).c_str());
        return false;
    }

    // check if index fields match table description
    size_t num = idx->NumFields();
@@ -497,6 +537,11 @@ bool Manager::CreateTableStream(RecordVal* fval) {
    }

    auto event_val = fval->GetFieldOrDefault("ev");
    if ( event_val && ! is_func_val(event_val) ) {
        reporter->Error("Input stream %s: 'ev' field is not an event", stream_name.c_str());
        return false;
    }

    Func* event = event_val ? event_val->AsFunc() : nullptr;

    if ( event ) {
@@ -572,6 +617,11 @@ bool Manager::CreateTableStream(RecordVal* fval) {
    }

    auto error_event_val = fval->GetFieldOrDefault("error_ev");
    if ( error_event_val && ! is_func_val(error_event_val) ) {
        reporter->Error("Input stream %s: 'error_ev' field is not an event", stream_name.c_str());
        return false;
    }

    Func* error_event = error_event_val ? error_event_val->AsFunc() : nullptr;

    if ( ! CheckErrorEventTypes(stream_name, error_event, true) )

@@ -4,10 +4,8 @@

#include <getopt.h>

#include <algorithm>
#include <memory>
#include <string>
#include <type_traits>
#include <utility>
#include <vector>

@@ -42,11 +40,10 @@ struct VisitorTypes : public spicy::visitor::PreOrder {
        module = {};
        return;
    }

    module = n->scopeID();
    path = n->uid().path;

-   if ( is_resolved )
+   if ( is_resolved && ! n->skipImplementation() )
        glue->addSpicyModule(module, path);
}

@@ -1375,7 +1375,7 @@ bool GlueCompiler::CreateSpicyHook(glue::Event* ev) {

    auto attrs = builder()->attributeSet({builder()->attribute("&priority", builder()->integer(ev->priority))});
    auto parameters = hilti::util::transform(ev->parameters, [](const auto& p) { return p.get(); });
-   auto unit_hook = builder()->declarationHook(parameters, body.block(), ::spicy::Engine::All, attrs, meta);
+   auto unit_hook = builder()->declarationHook(parameters, body.block(), attrs, meta);
    auto hook_decl = builder()->declarationUnitHook(ev->hook, unit_hook, meta);
    ev->spicy_module->spicy_module->add(context(), hook_decl);

@@ -40,4 +40,12 @@ error: Input stream error3: Error event's first attribute must be of type Input:
error: Input stream error4: Error event's second attribute must be of type string
error: Input stream error5: Error event's third attribute must be of type Reporter::Level
error: Input stream error6: 'destination' field is a table, but 'val' field is not provided (did you mean to use a set instead of a table?)
error: Input stream types1: 'idx' field is not a record type
error: Input stream types2: 'val' field is not a record type
error: Input stream types3: 'destination' field has type string, expected table or set identifier
error: Input stream types4: 'ev' field is not an event
error: Input stream types5: 'error_ev' field is not an event
error: Input stream types6: 'idx' field is not a record type
error: Input stream types7: 'ev' field is not an event
error: Input stream types8: 'error_ev' field is not an event
received termination signal
@@ -1,40 +1,27 @@
 ### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
-### broker |12|
-Telemetry::INT_GAUGE, broker, connections, [type], [native], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, connections, [type], [web-socket], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [data], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [command], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [routing-update], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [ping], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [pong], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [data], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [command], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [routing-update], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [ping], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [pong], 0.0
-count_value, 0
-### caf |5|
-Telemetry::INT_COUNTER, caf.system, rejected-messages, [], [], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, caf.system, processed-messages, [], [], 7.0
-count_value, 7
-Telemetry::INT_GAUGE, caf.system, running-actors, [], [], 2.0
-count_value, 2
-Telemetry::INT_GAUGE, caf.system, queued-messages, [], [], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, caf.actor, mailbox-size, [name], [broker.core], 0.0
-count_value, 0
-### caf |2|
-Telemetry::DOUBLE_HISTOGRAM, caf.actor, processing-time, [0.00001, 0.0001, 0.0005, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, inf], [name], [broker.core]
-Telemetry::DOUBLE_HISTOGRAM, caf.actor, mailbox-time, [0.00001, 0.0001, 0.0005, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, inf], [name], [broker.core]
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [command], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [command], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [data], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [data], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_connections, [type], [native], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [ping], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [ping], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [pong], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [pong], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [routing-update], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [routing-update], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_connections, [type], [web-socket], 0.0
+value, 0.0
+### broker |0|

@@ -0,0 +1,11 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string count string count count count count set[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 tcp ldap_tcp 3.537413 536 42 SF 0 ShADadFf 11 1116 6 362 -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,14 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id version opcode result diagnostic_message object argument
#types time string addr port addr port int int string string string string string
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 1 3 bind simple success - cn=admin,dc=example,dc=com REDACTED
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 2 - add success - - -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 3 - add success - - -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 4 - unbind - - - -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,11 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string count string count count count count set[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 tcp ldap_tcp 0.033404 3046 90400 RSTR 0 ShADdar 14 1733 68 93132 -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,12 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id version opcode result diagnostic_message object argument
#types time string addr port addr port int int string string string string string
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 3 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 9 - unbind - - - -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,14 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap_search
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id scope deref_aliases base_object result_count result diagnostic_message filter attributes
#types time string addr port addr port int string string string count string string string vector[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 1 base never - 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 4 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 6 single never CN=Schema,CN=Configuration,DC=matrix,DC=local 424 success - (&(!(isdefunct=TRUE))(|(|(|(|(|(attributeSyntax=2.5.5.17)(attributeSyntax=2.5.5.10))(attributeSyntax=2.5.5.15))(attributeSyntax=2.5.5.1))(attributeSyntax=2.5.5.7))(attributeSyntax=2.5.5.14))) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 8 tree never DC=matrix,DC=local 1 success - (samaccountname=krbtgt) -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,13 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string count string count count count count set[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 tcp ldap_tcp 63.273503 3963 400107 OTH 0 Dd 12 2595 282 411387 -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 tcp ldap_tcp 0.007979 2630 3327 OTH 0 Dd 6 990 6 3567 -
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 tcp ldap_tcp 0.001925 2183 3436 OTH 0 Dd 4 463 5 3636 -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,15 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id version opcode result diagnostic_message object argument
#types time string addr port addr port int int string string string string string
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 3 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 3 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 9 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 12 - unbind - - - -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 13 - unbind - - - -
#close XXXX-XX-XX-XX-XX-XX

@@ -0,0 +1,27 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap_search
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id scope deref_aliases base_object result_count result diagnostic_message filter attributes
#types time string addr port addr port int string string string count string string string vector[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 1 base never - 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 4 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 5 base never CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 6 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 7 tree never CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 2 success - (objectCategory=pKIEnrollmentService) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 8 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 9 base never CN=Schema,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=dMD) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 10 base never CN=Schema,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=dMD) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 11 base never CN=Aggregate,CN=Schema,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 1 base never - 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 4 base never CN=WS01,CN=Computers,DC=DMC,DC=local 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 5 base never CN=WS01,CN=Computers,DC=DMC,DC=local 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 6 base never CN=WS01,CN=Computers,DC=DMC,DC=local 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 10 base never - 1 success - (ObjectClass=*) -
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 11 base never CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=services,CN=Configuration,DC=DMC,DC=local 0 no such object 0000208D: NameErr: DSID-0310028B, problem 2001 (NO_OBJECT), data 0, best match of:??'CN=Services,CN=Configuration,DC=DMC,DC=local'?? (ObjectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 12 base never CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 13 tree never CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 38 success - (objectclass=pKICertificateTemplate) -
#close XXXX-XX-XX-XX-XX-XX
BIN  testing/btest/Traces/ldap/ldap-add.pcap  (new file; binary file not shown)
BIN  testing/btest/Traces/ldap/missing_krbtgt_ldap_request.pcapng  (new file; binary file not shown)
BIN  testing/btest/Traces/ldap/missing_ldap_logs.pcapng  (new file; binary file not shown)

@@ -9,4 +9,4 @@
#
# @TEST-EXEC: test -d $DIST/scripts
# @TEST-EXEC: for script in `find $DIST/scripts/ -name \*\.zeek`; do zeek -b --parse-only $script >>errors 2>&1; done
-# @TEST-EXEC: TEST_DIFF_CANONIFIER="grep -v -e 'load-balancing.zeek.*deprecated script loaded' | $SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-sort" btest-diff errors
+# @TEST-EXEC: TEST_DIFF_CANONIFIER="grep -v -e 'load-balancing.zeek.*deprecated script loaded' | grep -v -e 'prometheus.zeek.*deprecated script loaded' | $SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-sort" btest-diff errors

@@ -9,7 +9,7 @@
# @TEST-EXEC: CLUSTER_NODE=logger-1 zeek %INPUT
# @TEST-EXEC: CLUSTER_NODE=proxy-1 zeek %INPUT
# @TEST-EXEC: CLUSTER_NODE=worker-1 zeek %INPUT
-# @TEST-EXEC: TEST_DIFF_CANONIFIER='grep -v "load-balancing.zeek.*deprecated script" | $SCRIPTS/diff-remove-abspath' btest-diff .stderr
+# @TEST-EXEC: TEST_DIFF_CANONIFIER='grep -v "load-balancing.zeek.*deprecated script" | grep -v "prometheus.zeek.*deprecated script" | $SCRIPTS/diff-remove-abspath' btest-diff .stderr

@load base/frameworks/cluster
@load misc/loaded-scripts

@@ -59,6 +59,7 @@ global val_table: table[count] of Val = table();
global val_table2: table[count, int] of Val = table();
global val_table3: table[count, int] of int = table();
global val_table4: table[count] of int;
global val_set: set[count];

event line_file(description: Input::EventDescription, tpe: Input::Event, r: FileVal)
    {
@@ -190,5 +191,15 @@ event zeek_init()

    Input::add_table([$source="input.log", $name="error6", $idx=Idx, $destination=val_table]);

    # Check that we do not crash when a user passes unexpected types to any fields in the description records.
    Input::add_table([$source="input.log", $name="types1", $idx="string-is-not-allowed", $destination=val_set]);
    Input::add_table([$source="input.log", $name="types2", $idx=Idx, $val="string-is-not-allowed", $destination=val_set]);
    Input::add_table([$source="input.log", $name="types3", $idx=Idx, $destination="string-is-not-allowed"]);
    Input::add_table([$source="input.log", $name="types4", $idx=Idx, $destination=val_set, $ev="not-an-event"]);
    Input::add_table([$source="input.log", $name="types5", $idx=Idx, $destination=val_set, $error_ev="not-an-event"]);
    Input::add_event([$source="input.log", $name="types6", $fields="string-is-not-allowed", $ev=event11]);
    Input::add_event([$source="input.log", $name="types7", $fields=Val, $ev="not-an-event"]);
    Input::add_event([$source="input.log", $name="types8", $fields=Val, $ev=event11, $error_ev="not-an-event"]);

    schedule 3secs { kill_me() };
    }

@@ -1,7 +1,6 @@
-# @TEST-DOC: Query some internal broker/caf related metrics as they use the int64_t versions, too.
+# @TEST-DOC: Query Broker's telemetry to verify it ends up in Zeek's registry.
# Not compilable to C++ due to globals being initialized to a record that
# has an opaque type as a field.
-# @TEST-KNOWN-FAILURE: Implementation for prometheus-cpp missing in broker
# @TEST-REQUIRES: test "${ZEEK_USE_CPP}" != "1"
# @TEST-EXEC: zcat <$TRACES/echo-connections.pcap.gz | zeek -b -Cr - %INPUT > out
# @TEST-EXEC: btest-diff out

@@ -9,17 +8,19 @@

@load base/frameworks/telemetry

redef running_under_test = T;

function print_histogram_metrics(what: string, metrics: vector of Telemetry::HistogramMetric)
    {
    print fmt("### %s |%s|", what, |metrics|);
    for (i in metrics)
        {
        local m = metrics[i];
-       print m$opts$metric_type, m$opts$prefix, m$opts$name, m$opts$bounds, m$opts$labels, m$labels;
+       print m$opts$metric_type, m$opts$prefix, m$opts$name, m$opts$bounds, m$label_names, m?$label_values ? m$label_values : vector();
        # Don't output actual values as they are runtime dependent.
        # print m$values, m$sum, m$observations;
        if ( m$opts?$count_bounds )
            print m$opts$count_bounds;
        if ( m$opts?$bounds )
            print m$opts$bounds;
        }
    }

@@ -29,19 +30,17 @@ function print_metrics(what: string, metrics: vector of Telemetry::Metric)
    for (i in metrics)
        {
        local m = metrics[i];
-       print m$opts$metric_type, m$opts$prefix, m$opts$name, m$opts$labels, m$labels, m$value;
+       print m$opts$metric_type, m$opts$prefix, m$opts$name, m$label_names, m?$label_values ? m$label_values : vector(), m$value;

-       if (m?$count_value)
-           print "count_value", m$count_value;
+       if (m?$value)
+           print "value", m$value;
        }
    }

event zeek_done() &priority=-100
    {
-   local broker_metrics = Telemetry::collect_metrics("broker", "*");
+   local broker_metrics = Telemetry::collect_metrics("broker*", "*");
    print_metrics("broker", broker_metrics);
    local caf_metrics = Telemetry::collect_metrics("caf*", "*");
    print_metrics("caf", caf_metrics);
    local caf_histogram_metrics = Telemetry::collect_histogram_metrics("caf*", "*");
    print_histogram_metrics("caf", caf_histogram_metrics);
    local broker_histogram_metrics = Telemetry::collect_histogram_metrics("broker*", "*");
    print_histogram_metrics("broker", broker_histogram_metrics);
    }

testing/btest/scripts/base/protocols/ldap/add.zeek  (new file, 11 lines)

@@ -0,0 +1,11 @@
# Copyright (c) 2024 by the Zeek Project. See LICENSE for details.

# @TEST-REQUIRES: have-spicy
# @TEST-EXEC: zeek -C -r ${TRACES}/ldap/ldap-add.pcap %INPUT
# @TEST-EXEC: cat conn.log | zeek-cut -Cn local_orig local_resp > conn.log2 && mv conn.log2 conn.log
# @TEST-EXEC: btest-diff conn.log
# @TEST-EXEC: btest-diff ldap.log
# @TEST-EXEC: ! test -f dpd.log
# @TEST-EXEC: ! test -f analyzer.log
#
# @TEST-DOC: The addRequest/addResponse operation is not implemented, yet we process it.

@@ -0,0 +1,11 @@
# Copyright (c) 2024 by the Zeek Project. See LICENSE for details.

# @TEST-REQUIRES: have-spicy
# @TEST-EXEC: zeek -C -r ${TRACES}/ldap/missing_krbtgt_ldap_request.pcapng %INPUT
# @TEST-EXEC: cat conn.log | zeek-cut -Cn local_orig local_resp > conn.log2 && mv conn.log2 conn.log
# @TEST-EXEC: btest-diff conn.log
# @TEST-EXEC: btest-diff ldap.log
# @TEST-EXEC: btest-diff ldap_search.log
# @TEST-EXEC: ! test -f dpd.log
#
# @TEST-DOC: Test LDAP analyzer with GSS-API integrity traffic where we can still peek into LDAP wrapped into WRAP tokens.

@@ -0,0 +1,11 @@
# Copyright (c) 2024 by the Zeek Project. See LICENSE for details.

# @TEST-REQUIRES: have-spicy
# @TEST-EXEC: zeek -C -r ${TRACES}/ldap/missing_ldap_logs.pcapng %INPUT
# @TEST-EXEC: cat conn.log | zeek-cut -Cn local_orig local_resp > conn.log2 && mv conn.log2 conn.log
# @TEST-EXEC: btest-diff conn.log
# @TEST-EXEC: btest-diff ldap.log
# @TEST-EXEC: btest-diff ldap_search.log
# @TEST-EXEC: ! test -f dpd.log
#
# @TEST-DOC: Test LDAP analyzer with GSS-API integrity traffic where we can still peek into LDAP wrapped into WRAP tokens.

@@ -55,7 +55,6 @@ done
@TEST-END-FILE

@load policy/frameworks/cluster/experimental
-@load policy/frameworks/telemetry/prometheus
@load base/frameworks/telemetry

# So the cluster nodes don't terminate right away.

@@ -15,7 +15,7 @@
# # simply update this test's TEST-START-FILE with the latest contents
# site/local.zeek.

-@TEST-START-FILE local-7.0.zeek
+@TEST-START-FILE local-7.1.zeek
##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

@@ -1 +1 @@
-45582671c6715e719d91c8afde7ffb480c602441
+ded009fb7a0cdee6f36d5b40a6394788b760fa06