Merge remote-tracking branch 'origin/master' into topic/johanna/spicy-tls

* origin/master:
  Update broker submodule [nomail]
  telemetry: Deprecate prometheus.zeek policy script
  input/Manager: Improve type checks of record fields with type any
  Bump zeek-testing-cluster to pull in tee SIGPIPE fix
  ldap: Remove MessageWrapper with magic 0x30 searching
  ldap: Harden parsing a bit
  ldap: Handle integrity-only KRB wrap tokens
  Bump auxil/spicy to latest development snapshot
  CI: Set FETCH_CONTENT_FULLY_DISCONNECTED flag for configure
  Update broker and cmake submodules [nomail]
  Fix a broken merge
  Do not emit hook files for builtin modules
  Fix warning about grealpath when running 'make dist' on Linux
  Start of 7.1.0 development
  Updating submodule(s) [nomail]
  Update the scripts.base.frameworks.telemetry.internal-metrics test
  Revert "Temporarily disable the scripts/base/frameworks/telemetry/internal-metrics btest"
  Bump Broker to pull in new Prometheus support and pass in Zeek's registry
  Do not emit hook files for builtin modules
Commit f95f5d2adb by Johanna Amann, 2024-07-23 10:21:49 +01:00
47 changed files with 632 additions and 207 deletions


@@ -10,7 +10,7 @@ btest_jobs: &BTEST_JOBS 4
 btest_retries: &BTEST_RETRIES 2
 memory: &MEMORY 16GB
-config: &CONFIG --build-type=release --disable-broker-tests --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
+config: &CONFIG --build-type=release --disable-broker-tests --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror -D FETCHCONTENT_FULLY_DISCONNECTED:BOOL=ON
 no_spicy_config: &NO_SPICY_CONFIG --build-type=release --disable-broker-tests --disable-spicy --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
 static_config: &STATIC_CONFIG --build-type=release --disable-broker-tests --enable-static-broker --enable-static-binpac --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
 binary_config: &BINARY_CONFIG --prefix=$CIRRUS_WORKING_DIR/install --libdir=$CIRRUS_WORKING_DIR/install/lib --binary-package --enable-static-broker --enable-static-binpac --disable-broker-tests --build-type=Release --ccache --enable-werror

CHANGES

@@ -1,3 +1,102 @@
7.1.0-dev.23 | 2024-07-23 10:02:52 +0200
* telemetry: Deprecate prometheus.zeek policy script (Arne Welzel, Corelight)
With Cluster::Node$metrics_port being optional, there's not really
a need for the extra script. New rule: if a metrics_port is set, the
node will attempt to listen on it.
Users can still redef Telemetry::metrics_port *after*
base/frameworks/telemetry was loaded to change the port defined
in cluster-layout.zeek.
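For example, a minimal sketch of such an override in site policy (the
port value here is just a placeholder):

    @load base/frameworks/telemetry
    # Override the metrics port chosen in cluster-layout.zeek.
    redef Telemetry::metrics_port = 9911/tcp;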
* Update broker submodule [nomail] (Tim Wojtulewicz, Corelight)
7.1.0-dev.20 | 2024-07-19 19:51:12 +0200
* GH-3836: input/Manager: Improve type checks of record fields with type any (Arne Welzel, Corelight)
Calling AsRecordType() or AsFunc() on a Val of type any isn't safe.
Closes #3836
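For illustration, this is the kind of call that used to reach the
unchecked casts and now reports an error instead (following the new
btest cases added below; val_set is a set[count] as in the test):

    # 'idx' expects a record type; a string value used to hit
    # AsRecordType() on a Val of type any and could crash.
    Input::add_table([$source="input.log", $name="types1",
                      $idx="string-is-not-allowed", $destination=val_set]);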
7.1.0-dev.18 | 2024-07-17 15:37:12 -0700
* Bump zeek-testing-cluster to pull in tee SIGPIPE fix (Christian Kreibich, Corelight)
7.1.0-dev.16 | 2024-07-17 16:45:13 +0200
* ldap: Remove MessageWrapper with magic 0x30 searching (Arne Welzel, Corelight)
This unit implemented a heuristic to search for the 0x30 sequence
byte if a Message couldn't readily be parsed. Remove it in favor of
explicit and predictable support for SASL mechanisms.
* ldap: Harden parsing a bit (Arne Welzel, Corelight)
ASN1Message(True) may go off parsing arbitrary input data as
"something ASN.1". This could be gigabytes of octet strings or just
very long sequences. Avoid this by open-coding the expected top-level
types.
This also tries to avoid some of the &parse-from usages that result
in unnecessary copies of data.
Adds a locally generated PCAP with addRequest/addResponse that we
don't currently handle.
* ldap: Handle integrity-only KRB wrap tokens (Arne Welzel, Corelight)
This is mostly based on staring at the PCAPs and opening a few RFCs.
For now, only if the MS_KRB5 OID is used and accepted in a bind
response, start stripping KRB5 wrap tokens for both client and server
traffic.
It would probably be nice to forward the GSS-API data to the analyzer...
Closes zeek/spicy-ldap#29.
7.1.0-dev.12 | 2024-07-16 10:16:02 -0700
* Bump auxil/spicy to latest development snapshot (Benjamin Bannier, Corelight)
This patch bumps Spicy to the latest development snapshot. This
introduces a backwards-incompatible change in that it removes support
for a never officially supported syntax to specify unit fields (so I
would argue: not strictly a breaking change).
7.1.0-dev.10 | 2024-07-12 16:02:22 -0700
* CI: Set FETCH_CONTENT_FULLY_DISCONNECTED flag for configure (Tim Wojtulewicz, Corelight)
* Fix a broken merge (Tim Wojtulewicz, Corelight)
I merged an old version of the branch by accident and then merged the
right one on top of it, but git ended up including both versions. This
fixes that mistake.
7.1.0-dev.6 | 2024-07-12 09:51:39 -0700
* Do not emit hook files for builtin modules (Benjamin Bannier, Corelight)
We would previously emit a C++ file with hooks for at least the builtin
`spicy` module, even though that module, like any other builtin module,
never contains implementations of hooks for types in user code.
This patch prevents modules with skipped implementations (such as our
builtin modules) from being added to the compilation, which prevents
generating their hook files.
7.1.0-dev.2 | 2024-07-12 09:46:34 -0700
* Fix warning about grealpath when running 'make dist' on Linux (Tim Wojtulewicz, Corelight)
7.0.0-dev.467 | 2024-07-11 12:14:52 -0700
* Update the scripts.base.frameworks.telemetry.internal-metrics test (Christian Kreibich, Corelight)
* Revert "Temporarily disable the scripts/base/frameworks/telemetry/internal-metrics btest" (Christian Kreibich, Corelight)
* Bump Broker to pull in new Prometheus support and pass in Zeek's registry (Dominik Charousset and Christian Kreibich, Corelight)
7.0.0-dev.461 | 2024-07-10 18:45:36 +0200

* Extend btest for logging of disabled analyzers (Jan Grashoefer, Corelight)


@@ -9,7 +9,7 @@ BUILD=build
 REPO=$$(cd $(CURDIR) && basename $$(git config --get remote.origin.url | sed 's/^[^:]*://g'))
 VERSION_FULL=$(REPO)-$$(cd $(CURDIR) && cat VERSION)
 GITDIR=$$(test -f .git && echo $$(cut -d" " -f2 .git) || echo .git)
-REALPATH=$$($$(realpath --relative-to=$(pwd) . >/dev/null 2>&1) && echo 'realpath' || echo 'grealpath')
+REALPATH=$$($$(realpath --relative-to=$(shell pwd) . >/dev/null 2>&1) && echo 'realpath' || echo 'grealpath')

 all: configured
 	$(MAKE) -C $(BUILD) $@

NEWS

@@ -3,6 +3,30 @@ This document summarizes the most important changes in the current Zeek
release. For an exhaustive list of changes, see the ``CHANGES`` file
(note that submodules, such as Broker, come with their own ``CHANGES``.)
Zeek 7.1.0
==========
Breaking Changes
----------------
New Functionality
-----------------
* The LDAP analyzer now supports handling of non-sealed GSS-API WRAP tokens.
Changed Functionality
---------------------
* Heuristics for parsing SASL-encrypted and -signed LDAP traffic have been
  made stricter and more predictable. Please provide feedback if this results
  in less visibility in your environment.
Removed Functionality
---------------------
Deprecated Functionality
------------------------
Zeek 7.0.0
==========

@@ -167,6 +191,11 @@ Deprecated Functionality

- The ``--disable-archiver`` configure flag no longer does anything and will be
  removed in 7.1. zeek-archiver has moved into the zeek-aux repository.
- The policy/frameworks/telemetry/prometheus.zeek script has been deprecated
and will be removed with Zeek 7.1. Setting the ``metrics_port`` field on a
``Cluster::Node`` implies listening on that port and exposing telemetry
in Prometheus format.
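  As an illustration, a hypothetical cluster-layout entry along these lines
  makes worker-1 listen on the given port (all field values here are
  placeholders):

      redef Cluster::nodes += {
          ["worker-1"] = [$node_type=Cluster::WORKER, $ip=127.0.0.1,
                          $p=10001/tcp, $manager="manager",
                          $metrics_port=9101/tcp],
      };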
Zeek 6.2.0
==========


@@ -1 +1 @@
-7.0.0-dev.461
+7.1.0-dev.23

@@ -1 +1 @@
-Subproject commit fd83a789848b485c81f28b8a6af23d28eca7b3c7
+Subproject commit 7c5ccc9aa91466004bc4a0dbbce11a239f3e742e

@@ -1 +1 @@
-Subproject commit 7db629d4e2f8128e3e27aa28200106fa6d553be0
+Subproject commit a5c8f19fb49c60171622536fa6d369fa168f19e0

@@ -1 +1 @@
-Subproject commit c47de11e4b84f24e8b501c3b1a446ad808e4964a
+Subproject commit 4348515873b4d1b0e44c7344011b18d21411accf

@@ -1 +1 @@
-Subproject commit 396723c04ba1f8f2f75555745a503b8edf353ff6
+Subproject commit 610cf8527dad7033b971595a1d556c2c95294f2b

@@ -1 +1 @@
-Subproject commit 6581b1855a5ea8cc102c66b4ac6a431fc67484a0
+Subproject commit 4a1b43ef07d1305a7e88a4f0866068dc49de9d06

@@ -1 +1 @@
-Subproject commit 1478f2ee550a0f99f5b93975c17ae814ebe515b7
+Subproject commit 8a66cd60fb29a1237b5070854cb194f43a3f7a30

@@ -1 +1 @@
-Subproject commit 7671450f34c65259463b4fd651a18df3935f235c
+Subproject commit 39c0ee1e1742bb28dff57632ee4620f905b892e7

cmake

@@ -1 +1 @@
-Subproject commit db0d52761f38f3602060da36adc1afff608730c1
+Subproject commit 2d42baf8e63a7494224aa9d02afa2cb43ddb96b8


@@ -1,3 +1 @@
 @load ./main
-@load base/frameworks/cluster


@@ -5,10 +5,28 @@
 ##! enabled by setting :zeek:see:`Telemetry::metrics_port`.

 @load base/misc/version
+@load base/frameworks/cluster
 @load base/frameworks/telemetry/options

 module Telemetry;

+# In a cluster configuration, open the port number for metrics
+# from the cluster node configuration for exporting data to
+# Prometheus.
+#
+# The manager node will also provide a ``/services.json`` endpoint
+# for the HTTP Service Discovery system in Prometheus to use for
+# configuration. This endpoint will include information for all of
+# the other nodes in the cluster.
+@if ( Cluster::is_enabled() )
+redef Telemetry::metrics_endpoint_name = Cluster::node;
+
+@if ( Cluster::local_node_metrics_port() != 0/unknown )
+redef Telemetry::metrics_port = Cluster::local_node_metrics_port();
+@endif
+@endif
+
 export {
 	## Alias for a vector of label values.
 	type labels_vector: vector of string;


@@ -1,19 +1,2 @@
-##! In a cluster configuration, open the port number for metrics
-##! from the cluster node configuration for exporting data to
-##! Prometheus.
-##!
-##! The manager node will also provide a ``/services.json`` endpoint
-##! for the HTTP Service Discovery system in Prometheus to use for
-##! configuration. This endpoint will include information for all of
-##! the other nodes in the cluster.
-@load base/frameworks/cluster
-
-@if ( Cluster::is_enabled() )
-redef Telemetry::metrics_endpoint_name = Cluster::node;
-
-@if ( Cluster::local_node_metrics_port() != 0/unknown )
-redef Telemetry::metrics_port = Cluster::local_node_metrics_port();
-@endif
-@endif
+@deprecated "Remove in v7.1: Cluster nodes now implicitly listen on metrics port if set in cluster-layout."
+@load base/frameworks/telemetry


@@ -94,10 +94,6 @@ redef digest_salt = "Please change this value.";
 # telemetry_histogram.log.
 @load frameworks/telemetry/log

-# Enable Prometheus metrics scraping in the cluster: each Zeek node will listen
-# on the metrics port defined in its Cluster::nodes entry.
-# @load frameworks/telemetry/prometheus
-
 # Uncomment the following line to enable detection of the heartbleed attack. Enabling
 # this might impact performance a bit.
 # @load policy/protocols/ssl/heartbleed


@@ -15,7 +15,7 @@ public type Request = unit {
   switch {
     -> : /\/W/ { self.whois = True; }
-    -> void;
+    -> : void;
   };

   : OptionalWhiteSpace;


@@ -126,125 +126,126 @@ public type Result = unit {
   # https://tools.ietf.org/html/rfc4511#section-4.1.10
 };

+# 1.2.840.48018.1.2.2 (MS KRB5 - Microsoft Kerberos 5)
+const GSSAPI_MECH_MS_KRB5 = "1.2.840.48018.1.2.2";
+
+# Supported SASL stripping modes.
+type SaslStripping = enum {
+  MS_KRB5 = 1, # Payload starts with a 4 byte length followed by a wrap token that may or may not be sealed.
+};
+
+type Ctx = struct {
+  saslStripping: SaslStripping; # Which mode of SASL stripping to use.
+};
+
 #-----------------------------------------------------------------------------
 public type Messages = unit {
-  : MessageWrapper[];
+  %context = Ctx;
+  : SASLStrip(self.context())[];
 };

 #-----------------------------------------------------------------------------
-type SASLLayer = unit {
-  # For the time being (before we support parsing the SASL layer) this unit
-  # is used by MessageWrapper below to strip it (SASL) so that the parser
-  # can attempt to resume parsing afterward. It also sets the success flag
-  # if '\x30' is found, otherwise backtracks so that we can deal with encrypted
-  # SASL payloads without raising a parse error.
-  var success: bool = False;
-  : bytes &until=b"\x30" {
-    self.success = True;
-  }
-  on %error {
-    self.backtrack();
-  }
-};
+public type SASLStrip = unit(ctx: Ctx&) {
+  switch ( ctx.saslStripping ) {
+    SaslStripping::Undef -> : Message(ctx);
+    SaslStripping::MS_KRB5 -> : SaslMsKrb5Stripper(ctx);
+  };
+};
+
+type KrbWrapToken = unit {
+  # https://datatracker.ietf.org/doc/html/rfc4121#section-4.2.6.2
+  # Number of bytes to expect *after* the payload.
+  var trailer_ec: uint64;
+  var header_ec: uint64;
+  ctx_flags: bitfield(8) {
+    send_by_acceptor: 0;
+    sealed: 1;
+    acceptor_subkey: 2;
+  };
+  filler: skip b"\xff";
+  ec: uint16; # extra count
+  rrc: uint16 { # right rotation count
+    # Handle rrc == ec or rrc == 0.
+    if ( self.rrc == self.ec ) {
+      self.header_ec = self.ec;
+    } else if ( self.rrc == 0 ) {
+      self.trailer_ec = self.ec;
+    } else {
+      throw "Unhandled rc %s and ec %s" % (self.ec, self.rrc);
+    }
+  }
+  snd_seq: uint64;
+  header_e: skip bytes &size=self.header_ec;
+};

 #-----------------------------------------------------------------------------
-public type MessageWrapper = unit {
-  # A wrapper around 'Message'. First, we try to parse a Message unit.
-  # There are two possible outcomes:
-  # (1) Success -> We consumed all bytes and successfully parsed a Message unit
-  # (2) No success -> self.backtrack() is called in the Message unit,
-  #     so effectively we didn't consume any bytes yet.
-  # The outcome can be determined by checking the `success` variable of the Message unit.
-  # This success variable is different, because this keeps track of the status for the MessageWrapper object.
-  var success: bool = False;
-  var message: Message;
-
-  # Here, we try to parse the message...
-  : Message &try {
-    # ... and only if the Message unit successfully parsed, we can set
-    # the status of this MessageWrapper's success to 'True'
-    if ( $$.success == True ) {
-      self.success = True;
-      self.message = $$;
-    }
-  }
-  # If we failed to parse the message, then we're going to scan the remaining bytes for the '\x30'
-  # start byte and try to parse a Message starting from that byte. This effectively
-  # strips the SASL layer if SASL Signing was enabled. Until now, I haven't found a
-  # better way to scan / determine the exact SASL header length yet, so we'll stick with this
-  # for the time being. If the entire LDAP packet was encrypted with SASL, then we skip parsing for
-  # now (in the long run we need to be parsing SASL/GSSAPI instead, in which case encrypted payloads
-  # are just another message type).
-  # SASLLayer (see unit above) just consumes bytes &until=b"\x30" or backtracks if it isn't found
-  # and sets a success flag we can use later to decide if those bytes contain a parsable message.
-  var sasl_success: bool = False;
-  : SASLLayer &try if ( self.success == False ) {
-    if ( $$.success == True ) {
-      self.sasl_success = True;
-    }
-  }
-  var remainder: bytes;
-  # SASLLayer consumes the delimiter ('\x30'), and because this is the first byte of a valid LDAP message
-  # we should re-add it to the remainder if the delimiter was found. If the delimiter was not found, we
-  # leave the remainder empty, but note that the bytes must be consumed either way to avoid stalling the
-  # parser and causing an infinite loop error.
-  : bytes &eod if ( self.success == False ) {
-    if ( self.sasl_success == True ) {
-      self.remainder = b"\x30" + $$;
-    }
-  }
-  # Again, try to parse a Message unit. Be aware that this will sometimes fail if the '\x30' byte is
-  # also present in the SASL header.
-  # Also, we could try to do this recursively or try a few iterations, but for now I would suggest
-  # to try this extra parsing once to get the best cost/benefit tradeoff.
-  : Message &try &parse-from=self.remainder if ( self.success == False && self.sasl_success == True ) {
-    if ( $$.success == True ) {
-      self.success = True;
-      self.message = $$;
-    }
-  }
-  # If we still didn't manage to parse a message (so the &try resulted in another backtrack()) then
-  # this is probably an encrypted LDAP message, so skip it
-} &convert=self.message;
+type SaslMsKrb5Stripper = unit(ctx: Ctx&) {
+  # This is based on Wireshark output and example traffic we have. There's always
+  # a 4 byte length field followed by the krb5_tok_id field in messages after
+  # MS_KRB5 was selected. I haven't read enough specs to understand if it's
+  # just this one case that works, or others could use the same stripping.
+  var switch_size: uint64;
+
+  len: uint32;
+  krb5_tok_id: uint16;
+
+  switch ( self.krb5_tok_id ) {
+    0x0504 -> krb_wrap_token: KrbWrapToken;
+    * -> : void;
+  };
+
+  : skip bytes &size=0 {
+    self.switch_size = self.len - (self.offset() - 4);
+    if ( self?.krb_wrap_token )
+      self.switch_size -= self.krb_wrap_token.trailer_ec;
+  }
+
+  switch ( self?.krb_wrap_token && ! self.krb_wrap_token.ctx_flags.sealed ) {
+    True -> : Message(ctx)[] &eod;
+    * -> : skip bytes &eod;
+  } &size=self.switch_size;
+
+  # Consume the wrap token trailer, if any.
+  trailer_e: skip bytes &size=self.krb_wrap_token.trailer_ec if (self?.krb_wrap_token);
+};

 #-----------------------------------------------------------------------------
-public type Message = unit {
+public type Message = unit(ctx: Ctx&) {
   var messageID: int64;
   var opcode: ProtocolOpcode = ProtocolOpcode::Undef;
-  var applicationBytes: bytes;
   var unsetResultDefault: Result;
   var result_: Result& = self.unsetResultDefault;
   var obj: string = "";
   var arg: string = "";
-  var success: bool = False;
-
-  : ASN1::ASN1Message(True) {
-    if (($$.head.tag.type_ == ASN1::ASN1Type::Sequence) &&
-        ($$.body?.seq) &&
-        (|$$.body.seq.submessages| >= 2)) {
-      if ($$.body.seq.submessages[0].body?.num_value) {
-        self.messageID = $$.body.seq.submessages[0].body.num_value;
-      }
-      if ($$.body.seq.submessages[1]?.application_id) {
-        self.opcode = cast<ProtocolOpcode>(cast<uint8>($$.body.seq.submessages[1].application_id));
-        self.applicationBytes = $$.body.seq.submessages[1].application_data;
-      }
-    }
-  }
+  var seqHeaderLen: uint64;
+  var msgLen: uint64;
+
+  seqHeader: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Universal && $$.tag.type_ == ASN1::ASN1Type::Sequence) {
+    self.msgLen = $$.len.len;
+  }
+
+  # Use offset() to determine how many bytes the seqHeader took. This
+  # needs to be done after the seqHeader field hook.
+  : void {
+    self.seqHeaderLen = self.offset();
+  }
+
+  messageID_header: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Universal && $$.tag.type_ == ASN1::ASN1Type::Integer);
+  : ASN1::ASN1Body(self.messageID_header, False) {
+    self.messageID = $$.num_value;
+  }
+
+  protocolOp: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Application) {
+    self.opcode = cast<ProtocolOpcode>(cast<uint8>($$.tag.type_));
+  }

   switch ( self.opcode ) {
     ProtocolOpcode::BIND_REQUEST -> BIND_REQUEST: BindRequest(self);
-    ProtocolOpcode::BIND_RESPONSE -> BIND_RESPONSE: BindResponse(self);
+    ProtocolOpcode::BIND_RESPONSE -> BIND_RESPONSE: BindResponse(self, ctx);
     ProtocolOpcode::UNBIND_REQUEST -> UNBIND_REQUEST: UnbindRequest(self);
     ProtocolOpcode::SEARCH_REQUEST -> SEARCH_REQUEST: SearchRequest(self);
     ProtocolOpcode::SEARCH_RESULT_ENTRY -> SEARCH_RESULT_ENTRY: SearchResultEntry(self);

@@ -267,17 +268,15 @@ public type Message = unit {
     ProtocolOpcode::INTERMEDIATE_RESPONSE -> INTERMEDIATE_RESPONSE: NotImplemented(self);
     ProtocolOpcode::MOD_DN_REQUEST -> MOD_DN_REQUEST: NotImplemented(self);
     ProtocolOpcode::SEARCH_RESULT_REFERENCE -> SEARCH_RESULT_REFERENCE: NotImplemented(self);
-  } &parse-from=self.applicationBytes if ( self.opcode );
-
-  on %error {
-    self.backtrack();
-  }
-
-  on %done {
-    self.success = True;
-  }
-} &requires=((self?.messageID) && (self?.opcode) && (self.opcode != ProtocolOpcode::Undef));
+  } &size=self.protocolOp.len.len;
+
+  # Ensure some invariants hold after parsing the command.
+  : void &requires=(self.offset() >= self.seqHeaderLen);
+  : void &requires=(self.msgLen >= (self.offset() - self.seqHeaderLen));
+
+  # Eat the controls field if it exists.
+  : skip bytes &size=self.msgLen - (self.offset() - self.seqHeaderLen);
+};
 #-----------------------------------------------------------------------------
 # Bind Operation

@@ -288,15 +287,99 @@ public type BindAuthType = enum {
   BIND_AUTH_SASL = 3,
 };

+type GSS_SPNEGO_negTokenInit = unit {
+  oidHeader: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::Universal && $$.tag.type_ == ASN1::ASN1Type::ObjectIdentifier);
+  oid: ASN1::ASN1ObjectIdentifier(self.oidHeader.len.len) &requires=(self.oid.oidstring == "1.3.6.1.5.5.2");
+  # TODO: Parse the rest of negTokenInit.
+  : skip bytes &eod;
+};
+
+# Peek into the GSS-SPNEGO payload and ensure it is indeed GSS-SPNEGO.
+type GSS_SPNEGO = unit {
+  # This is the optional octet string in SaslCredentials.
+  credentialsHeader: ASN1::ASN1Header &requires=($$.tag.type_ == ASN1::ASN1Type::OctetString);
+
+  # Now we either have the initial message as specified in RFC2743 or
+  # a continuation from RFC4178
+  #
+  # 60 -> APPLICATION [0] (https://datatracker.ietf.org/doc/html/rfc2743#page-81)
+  # a1 -> CHOICE [1] (https://www.rfc-editor.org/rfc/rfc4178#section-4.2)
+  #
+  gssapiHeader: ASN1::ASN1Header &requires=(
+    $$.tag.class == ASN1::ASN1Class::Application && $$.tag.type_ == ASN1::ASN1Type(0)
+    || $$.tag.class == ASN1::ASN1Class::ContextSpecific && $$.tag.type_ == ASN1::ASN1Type(1)
+  );
+
+  switch ( self.gssapiHeader.tag.type_ ) {
+    ASN1::ASN1Type(0) -> initial: GSS_SPNEGO_negTokenInit;
+    * -> : skip bytes &eod;
+  } &size=self.gssapiHeader.len.len;
+};
+
 type SaslCredentials = unit() {
-  mechanism: ASN1::ASN1Message(True) &convert=$$.body.str_value;
-  # TODO: if we want to parse the (optional) credentials string
+  mechanism: ASN1::ASN1Message(False) &convert=$$.body.str_value;
+
+  # Peek into the GSS-SPNEGO payload if we have any.
+  switch ( self.mechanism ) {
+    "GSS-SPNEGO" -> gss_spnego: GSS_SPNEGO;
+    * -> : skip bytes &eod;
+  };
 };

+type NegTokenResp = unit {
+  var accepted: bool;
+  var supportedMech: ASN1::ASN1Message;
+
+  # Parse the contained Sequence.
+  seq: ASN1::ASN1Message(True) {
+    for ( msg in $$.body.seq.submessages ) {
+      # https://www.rfc-editor.org/rfc/rfc4178#section-4.2.2
+      if ( msg.application_id == 0 ) {
+        self.accepted = msg.application_data == b"\x0a\x01\x00";
+      } else if ( msg.application_id == 1 ) {
+        self.supportedMech = msg;
+      } else if ( msg.application_id == 2 ) {
+        # ignore responseToken
+      } else if ( msg.application_id == 3 ) {
+        # ignore mechListMIC
+      } else {
+        throw "unhandled NegTokenResp id %s" % msg.application_id;
+      }
+    }
+  }
+
+  switch ( self?.supportedMech ) {
+    True -> supportedMechOid: ASN1::ASN1Message(False) &convert=$$.body.str_value;
+    * -> : void;
+  } &parse-from=self.supportedMech.application_data;
+};
+
+type ServerSaslCreds = unit {
+  serverSaslCreds: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::ContextSpecific && $$.tag.type_ == ASN1::ASN1Type(7));
+
+  # The PCAP missing_ldap_logs.pcapng has a1 81 b6 here for the GSS-SPNEGO response.
+  #
+  # This is context-specific ID 1, constructed, and a length of 182 as
+  # specified in section 4.2 of RFC 4178.
+  #
+  # https://www.rfc-editor.org/rfc/rfc4178#section-4.2
+  #
+  # TODO: This is only valid for a GSS-SPNEGO negTokenResp.
+  # If you want to support something else, remove the requires
+  # and add more to the switch below.
+  choice: ASN1::ASN1Header &requires=($$.tag.class == ASN1::ASN1Class::ContextSpecific);
+  switch ( self.choice.tag.type_ ) {
+    ASN1::ASN1Type(1) -> negTokenResp: NegTokenResp;
+    # ...
+  } &size=self.choice.len.len;
+};
+
 # TODO(fox-ds): A helper unit for requests for which no handling has been implemented.
 # Eventually all uses of this unit should be replaced with actual parsers so this unit can be removed.
 type NotImplemented = unit(inout message: Message) {
-  # Do nothing
+  : skip bytes &eod;
 };

 type BindRequest = unit(inout message: Message) {

@@ -324,14 +407,32 @@ type BindRequest = unit(inout message: Message) {
        (|self.authData| > 0)) {
     message.arg = self.saslCreds.mechanism;
   }
-} &requires=((self?.authType) && (self.authType != BindAuthType::Undef));
+} &requires=(self?.authType && (self.authType != BindAuthType::Undef));

-type BindResponse = unit(inout message: Message) {
+type BindResponse = unit(inout message: Message, ctx: Ctx&) {
   : Result {
     message.result_ = $$;
   }

-  # TODO: if we want to parse SASL credentials returned
+  # Try to parse serverSaslCreds if there's any input remaining. This
+  # unit is parsed with &size, so &eod here works.
+  #
+  # Technically we should be able to tell from the ASN.1 structure
+  # if the serverSaslCreds field exists or not. But, not sure we can
+  # check if there's any bytes left at this point outside of passing
+  # in the length and playing with offset().
+  serverSaslCreds: ServerSaslCreds[] &eod {
+    if ( |self.serverSaslCreds| > 0 ) {
+      if ( self.serverSaslCreds[0]?.negTokenResp ) {
+        local token = self.serverSaslCreds[0].negTokenResp;
+        if ( token.accepted && token?.supportedMechOid ) {
+          if ( token.supportedMechOid == GSSAPI_MECH_MS_KRB5 ) {
+            ctx.saslStripping = SaslStripping::MS_KRB5;
+          }
+        }
+      }
+    }
+  }
 };

 #-----------------------------------------------------------------------------

@@ -899,6 +1000,6 @@ type AbandonRequest = unit(inout message: Message) {
 #
 # };

-on LDAP::MessageWrapper::%done {
+on LDAP::Message::%done {
   spicy::accept_input();
 }


@@ -7,7 +7,7 @@ import spicy;
 public type Message = unit {
   switch {
     -> prio: Priority;
-    -> void;
+    -> : void;
   };

   msg: bytes &eod;


@@ -189,7 +189,7 @@ struct opt_mapping {
 class BrokerState {
 public:
     BrokerState(broker::configuration config, size_t congestion_queue_size)
-        : endpoint(std::move(config)),
+        : endpoint(std::move(config), telemetry_mgr->GetRegistry()),
          subscriber(
              endpoint.make_subscriber({broker::topic::statuses(), broker::topic::errors()}, congestion_queue_size)) {}


@@ -264,6 +264,15 @@ bool Manager::CreateStream(Stream* info, RecordVal* description) {
     return true;
 }

+// Return true if v is a TypeVal that contains a record type, else false.
+static bool is_record_type_val(const zeek::ValPtr& v) {
+    const auto& t = v->GetType();
+    return t->Tag() == TYPE_TYPE && t->AsTypeType()->GetType()->Tag() == TYPE_RECORD;
+}
+
+// Return true if v contains a FuncVal, else false.
+static bool is_func_val(const zeek::ValPtr& v) { return v->GetType()->Tag() == TYPE_FUNC; }
+
 bool Manager::CreateEventStream(RecordVal* fval) {
     RecordType* rtype = fval->GetType()->AsRecordType();
     if ( ! same_type(rtype, BifType::Record::Input::EventDescription, false) ) {

@@ -274,11 +283,21 @@ bool Manager::CreateEventStream(RecordVal* fval) {
     string stream_name = fval->GetFieldOrDefault("name")->AsString()->CheckString();

     auto fields_val = fval->GetFieldOrDefault("fields");
+    if ( ! is_record_type_val(fields_val) ) {
+        reporter->Error("Input stream %s: 'idx' field is not a record type", stream_name.c_str());
+        return false;
+    }
+
     RecordType* fields = fields_val->AsType()->AsTypeType()->GetType()->AsRecordType();

     auto want_record = fval->GetFieldOrDefault("want_record");

     auto ev_val = fval->GetFieldOrDefault("ev");
+    if ( ev_val && ! is_func_val(ev_val) ) {
+        reporter->Error("Input stream %s: 'ev' field is not an event", stream_name.c_str());
+        return false;
+    }
+
     Func* event = ev_val->AsFunc();
     const auto& etype = event->GetType();

@@ -356,6 +375,11 @@ bool Manager::CreateEventStream(RecordVal* fval) {
         assert(false);

     auto error_event_val = fval->GetFieldOrDefault("error_ev");
+    if ( error_event_val && ! is_func_val(error_event_val) ) {
+        reporter->Error("Input stream %s: 'error_ev' field is not an event", stream_name.c_str());
+        return false;
+    }
+
     Func* error_event = error_event_val ? error_event_val->AsFunc() : nullptr;

     if ( ! CheckErrorEventTypes(stream_name, error_event, false) )

@@ -414,15 +438,31 @@ bool Manager::CreateTableStream(RecordVal* fval) {
     auto pred = fval->GetFieldOrDefault("pred");

     auto idx_val = fval->GetFieldOrDefault("idx");
+    if ( ! is_record_type_val(idx_val) ) {
+        reporter->Error("Input stream %s: 'idx' field is not a record type", stream_name.c_str());
+        return false;
+    }
+
     RecordType* idx = idx_val->AsType()->AsTypeType()->GetType()->AsRecordType();

     RecordTypePtr val;
     auto val_val = fval->GetFieldOrDefault("val");
-    if ( val_val )
+    if ( val_val ) {
+        if ( ! is_record_type_val(val_val) ) {
+            reporter->Error("Input stream %s: 'val' field is not a record type", stream_name.c_str());
+            return false;
+        }
+
         val = val_val->AsType()->AsTypeType()->GetType<RecordType>();
+    }

     auto dst = fval->GetFieldOrDefault("destination");
+    if ( ! dst->GetType()->IsSet() && ! dst->GetType()->IsTable() ) {
+        reporter->Error("Input stream %s: 'destination' field has type %s, expected table or set identifier",
+                        stream_name.c_str(), obj_desc_short(dst->GetType().get()).c_str());
+        return false;
+    }

     // check if index fields match table description
     size_t num = idx->NumFields();

@@ -497,6 +537,11 @@ bool Manager::CreateTableStream(RecordVal* fval) {
     }

     auto event_val = fval->GetFieldOrDefault("ev");
+    if ( event_val && ! is_func_val(event_val) ) {
+        reporter->Error("Input stream %s: 'ev' field is not an event", stream_name.c_str());
+        return false;
+    }
+
     Func* event = event_val ? event_val->AsFunc() : nullptr;

     if ( event ) {

@@ -572,6 +617,11 @@ bool Manager::CreateTableStream(RecordVal* fval) {
     }

     auto error_event_val = fval->GetFieldOrDefault("error_ev");
+    if ( error_event_val && ! is_func_val(error_event_val) ) {
+        reporter->Error("Input stream %s: 'error_ev' field is not an event", stream_name.c_str());
+        return false;
+    }
+
     Func* error_event = error_event_val ? error_event_val->AsFunc() : nullptr;

     if ( ! CheckErrorEventTypes(stream_name, error_event, true) )


@@ -4,10 +4,8 @@

 #include <getopt.h>

-#include <algorithm>
 #include <memory>
 #include <string>
-#include <type_traits>
 #include <utility>
 #include <vector>

@@ -42,11 +40,10 @@ struct VisitorTypes : public spicy::visitor::PreOrder {
             module = {};
             return;
         }

         module = n->scopeID();
         path = n->uid().path;

-        if ( is_resolved )
+        if ( is_resolved && ! n->skipImplementation() )
             glue->addSpicyModule(module, path);
     }


@@ -1375,7 +1375,7 @@ bool GlueCompiler::CreateSpicyHook(glue::Event* ev) {
     auto attrs = builder()->attributeSet({builder()->attribute("&priority", builder()->integer(ev->priority))});

     auto parameters = hilti::util::transform(ev->parameters, [](const auto& p) { return p.get(); });
-    auto unit_hook = builder()->declarationHook(parameters, body.block(), ::spicy::Engine::All, attrs, meta);
+    auto unit_hook = builder()->declarationHook(parameters, body.block(), attrs, meta);
     auto hook_decl = builder()->declarationUnitHook(ev->hook, unit_hook, meta);

     ev->spicy_module->spicy_module->add(context(), hook_decl);


@@ -40,4 +40,12 @@ error: Input stream error3: Error event's first attribute must be of type Input:
 error: Input stream error4: Error event's second attribute must be of type string
 error: Input stream error5: Error event's third attribute must be of type Reporter::Level
 error: Input stream error6: 'destination' field is a table, but 'val' field is not provided (did you mean to use a set instead of a table?)
+error: Input stream types1: 'idx' field is not a record type
+error: Input stream types2: 'val' field is not a record type
+error: Input stream types3: 'destination' field has type string, expected table or set identifier
+error: Input stream types4: 'ev' field is not an event
+error: Input stream types5: 'error_ev' field is not an event
+error: Input stream types6: 'idx' field is not a record type
+error: Input stream types7: 'ev' field is not an event
+error: Input stream types8: 'error_ev' field is not an event
 received termination signal


@@ -1,40 +1,27 @@
 ### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
 ### broker |12|
-Telemetry::INT_GAUGE, broker, connections, [type], [native], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, connections, [type], [web-socket], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [data], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [command], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [routing-update], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [ping], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, broker, processed-messages, [type], [pong], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [data], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [command], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [routing-update], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [ping], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, broker, buffered-messages, [type], [pong], 0.0
-count_value, 0
-### caf |5|
-Telemetry::INT_COUNTER, caf.system, rejected-messages, [], [], 0.0
-count_value, 0
-Telemetry::INT_COUNTER, caf.system, processed-messages, [], [], 7.0
-count_value, 7
-Telemetry::INT_GAUGE, caf.system, running-actors, [], [], 2.0
-count_value, 2
-Telemetry::INT_GAUGE, caf.system, queued-messages, [], [], 0.0
-count_value, 0
-Telemetry::INT_GAUGE, caf.actor, mailbox-size, [name], [broker.core], 0.0
-count_value, 0
-### caf |2|
-Telemetry::DOUBLE_HISTOGRAM, caf.actor, processing-time, [0.00001, 0.0001, 0.0005, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, inf], [name], [broker.core]
-Telemetry::DOUBLE_HISTOGRAM, caf.actor, mailbox-time, [0.00001, 0.0001, 0.0005, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, inf], [name], [broker.core]
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [command], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [command], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [data], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [data], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_connections, [type], [native], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [ping], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [ping], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [pong], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [pong], 0.0
+value, 0.0
+Telemetry::COUNTER, broker, broker_processed_messages_total, [type], [routing-update], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_buffered_messages, [type], [routing-update], 0.0
+value, 0.0
+Telemetry::GAUGE, broker, broker_connections, [type], [web-socket], 0.0
+value, 0.0
+### broker |0|


@@ -0,0 +1,11 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string count string count count count count set[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 tcp ldap_tcp 3.537413 536 42 SF 0 ShADadFf 11 1116 6 362 -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,14 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id version opcode result diagnostic_message object argument
#types time string addr port addr port int int string string string string string
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 1 3 bind simple success - cn=admin,dc=example,dc=com REDACTED
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 2 - add success - - -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 3 - add success - - -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 127.0.0.1 46160 127.0.1.1 389 4 - unbind - - - -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,11 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string count string count count count count set[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 tcp ldap_tcp 0.033404 3046 90400 RSTR 0 ShADdar 14 1733 68 93132 -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,12 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id version opcode result diagnostic_message object argument
#types time string addr port addr port int int string string string string string
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 3 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 9 - unbind - - - -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,14 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap_search
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id scope deref_aliases base_object result_count result diagnostic_message filter attributes
#types time string addr port addr port int string string string count string string string vector[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 1 base never - 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 4 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 6 single never CN=Schema,CN=Configuration,DC=matrix,DC=local 424 success - (&(!(isdefunct=TRUE))(|(|(|(|(|(attributeSyntax=2.5.5.17)(attributeSyntax=2.5.5.10))(attributeSyntax=2.5.5.15))(attributeSyntax=2.5.5.1))(attributeSyntax=2.5.5.7))(attributeSyntax=2.5.5.14))) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 192.168.10.138 63815 192.168.10.186 389 8 tree never DC=matrix,DC=local 1 success - (samaccountname=krbtgt) -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,13 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string count string count count count count set[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 tcp ldap_tcp 63.273503 3963 400107 OTH 0 Dd 12 2595 282 411387 -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 tcp ldap_tcp 0.007979 2630 3327 OTH 0 Dd 6 990 6 3567 -
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 tcp ldap_tcp 0.001925 2183 3436 OTH 0 Dd 4 463 5 3636 -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,15 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id version opcode result diagnostic_message object argument
#types time string addr port addr port int int string string string string string
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 3 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 3 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 9 3 bind SASL success - - GSS-SPNEGO
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 12 - unbind - - - -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 13 - unbind - - - -
#close XXXX-XX-XX-XX-XX-XX


@@ -0,0 +1,27 @@
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ldap_search
#open XXXX-XX-XX-XX-XX-XX
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p message_id scope deref_aliases base_object result_count result diagnostic_message filter attributes
#types time string addr port addr port int string string string count string string string vector[string]
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 1 base never - 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 4 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 5 base never CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 6 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 7 tree never CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 2 success - (objectCategory=pKIEnrollmentService) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 8 base never - 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 9 base never CN=Schema,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=dMD) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 10 base never CN=Schema,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=dMD) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 11 base never CN=Aggregate,CN=Schema,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 1 base never - 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 4 base never CN=WS01,CN=Computers,DC=DMC,DC=local 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 5 base never CN=WS01,CN=Computers,DC=DMC,DC=local 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX ClEkJM2Vm5giqnMf4h 10.199.2.121 59355 10.199.2.111 389 6 base never CN=WS01,CN=Computers,DC=DMC,DC=local 1 success - (objectclass=*) -
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 10 base never - 1 success - (ObjectClass=*) -
XXXXXXXXXX.XXXXXX C4J4Th3PJpwUYZZ6gc 10.199.2.121 59356 10.199.2.111 389 11 base never CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=services,CN=Configuration,DC=DMC,DC=local 0 no such object 0000208D: NameErr: DSID-0310028B, problem 2001 (NO_OBJECT), data 0, best match of:??'CN=Services,CN=Configuration,DC=DMC,DC=local'?? (ObjectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 12 base never CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 1 success - (objectClass=*) -
XXXXXXXXXX.XXXXXX CHhAvVGS1DHFjwGM9 10.199.2.121 59327 10.199.2.111 389 13 tree never CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=DMC,DC=local 38 success - (objectclass=pKICertificateTemplate) -
#close XXXX-XX-XX-XX-XX-XX

Binary file not shown.

Binary file not shown.


@@ -9,4 +9,4 @@
 #
 # @TEST-EXEC: test -d $DIST/scripts
 # @TEST-EXEC: for script in `find $DIST/scripts/ -name \*\.zeek`; do zeek -b --parse-only $script >>errors 2>&1; done
-# @TEST-EXEC: TEST_DIFF_CANONIFIER="grep -v -e 'load-balancing.zeek.*deprecated script loaded' | $SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-sort" btest-diff errors
+# @TEST-EXEC: TEST_DIFF_CANONIFIER="grep -v -e 'load-balancing.zeek.*deprecated script loaded' | grep -v -e 'prometheus.zeek.*deprecated script loaded' | $SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-sort" btest-diff errors


@@ -9,7 +9,7 @@
 # @TEST-EXEC: CLUSTER_NODE=logger-1 zeek %INPUT
 # @TEST-EXEC: CLUSTER_NODE=proxy-1 zeek %INPUT
 # @TEST-EXEC: CLUSTER_NODE=worker-1 zeek %INPUT
-# @TEST-EXEC: TEST_DIFF_CANONIFIER='grep -v "load-balancing.zeek.*deprecated script" | $SCRIPTS/diff-remove-abspath' btest-diff .stderr
+# @TEST-EXEC: TEST_DIFF_CANONIFIER='grep -v "load-balancing.zeek.*deprecated script" | grep -v "prometheus.zeek.*deprecated script" | $SCRIPTS/diff-remove-abspath' btest-diff .stderr

 @load base/frameworks/cluster
 @load misc/loaded-scripts


@@ -59,6 +59,7 @@ global val_table: table[count] of Val = table();
 global val_table2: table[count, int] of Val = table();
 global val_table3: table[count, int] of int = table();
 global val_table4: table[count] of int;
+global val_set: set[count];

 event line_file(description: Input::EventDescription, tpe: Input::Event, r:FileVal)
 	{

@@ -190,5 +191,15 @@ event zeek_init()
 	Input::add_table([$source="input.log", $name="error6", $idx=Idx, $destination=val_table]);

+	# Check that we do not crash when a user passes unexpected types to any fields in the description records.
+	Input::add_table([$source="input.log", $name="types1", $idx="string-is-not-allowed", $destination=val_set]);
+	Input::add_table([$source="input.log", $name="types2", $idx=Idx, $val="string-is-not-allowed", $destination=val_set]);
+	Input::add_table([$source="input.log", $name="types3", $idx=Idx, $destination="string-is-not-allowed"]);
+	Input::add_table([$source="input.log", $name="types4", $idx=Idx, $destination=val_set, $ev="not-an-event"]);
+	Input::add_table([$source="input.log", $name="types5", $idx=Idx, $destination=val_set, $error_ev="not-an-event"]);
+	Input::add_event([$source="input.log", $name="types6", $fields="string-is-not-allowed", $ev=event11]);
+	Input::add_event([$source="input.log", $name="types7", $fields=Val, $ev="not-an-event"]);
+	Input::add_event([$source="input.log", $name="types8", $fields=Val, $ev=event11, $error_ev="not-an-event"]);
+
 	schedule 3secs { kill_me() };
 	}


@@ -1,7 +1,6 @@
-# @TEST-DOC: Query some internal broker/caf related metrics as they use the int64_t versions, too.
+# @TEST-DOC: Query Broker's telemetry to verify it ends up in Zeek's registry.
 # Not compilable to C++ due to globals being initialized to a record that
 # has an opaque type as a field.
-# @TEST-KNOWN-FAILURE: Implementation for prometheus-cpp missing in broker
 # @TEST-REQUIRES: test "${ZEEK_USE_CPP}" != "1"
 # @TEST-EXEC: zcat <$TRACES/echo-connections.pcap.gz | zeek -b -Cr - %INPUT > out
 # @TEST-EXEC: btest-diff out

@@ -9,17 +8,19 @@
 @load base/frameworks/telemetry

+redef running_under_test = T;
+
 function print_histogram_metrics(what: string, metrics: vector of Telemetry::HistogramMetric)
 	{
 	print fmt("### %s |%s|", what, |metrics|);
 	for (i in metrics)
 		{
 		local m = metrics[i];
-		print m$opts$metric_type, m$opts$prefix, m$opts$name, m$opts$bounds, m$opts$labels, m$labels;
+		print m$opts$metric_type, m$opts$prefix, m$opts$name, m$opts$bounds, m$label_names, m?$label_values ? m$label_values : vector();
 		# Don't output actual values as they are runtime dependent.
 		# print m$values, m$sum, m$observations;
-		if ( m$opts?$count_bounds )
-			print m$opts$count_bounds;
+		if ( m$opts?$bounds )
+			print m$opts$bounds;
 		}
 	}

@@ -29,19 +30,17 @@ function print_metrics(what: string, metrics: vector of Telemetry::Metric)
 	for (i in metrics)
 		{
 		local m = metrics[i];
-		print m$opts$metric_type, m$opts$prefix, m$opts$name, m$opts$labels, m$labels, m$value;
-		if (m?$count_value)
-			print "count_value", m$count_value;
+		print m$opts$metric_type, m$opts$prefix, m$opts$name, m$label_names, m?$label_values ? m$label_values : vector(), m$value;
+		if (m?$value)
+			print "value", m$value;
 		}
 	}

 event zeek_done() &priority=-100
 	{
-	local broker_metrics = Telemetry::collect_metrics("broker", "*");
+	local broker_metrics = Telemetry::collect_metrics("broker*", "*");
 	print_metrics("broker", broker_metrics);
-	local caf_metrics = Telemetry::collect_metrics("caf*", "*");
-	print_metrics("caf", caf_metrics);
-	local caf_histogram_metrics = Telemetry::collect_histogram_metrics("caf*", "*");
-	print_histogram_metrics("caf", caf_histogram_metrics);
+	local broker_histogram_metrics = Telemetry::collect_histogram_metrics("broker*", "*");
+	print_histogram_metrics("broker", broker_histogram_metrics);
 	}


@@ -0,0 +1,11 @@
# Copyright (c) 2024 by the Zeek Project. See LICENSE for details.
# @TEST-REQUIRES: have-spicy
# @TEST-EXEC: zeek -C -r ${TRACES}/ldap/ldap-add.pcap %INPUT
# @TEST-EXEC: cat conn.log | zeek-cut -Cn local_orig local_resp > conn.log2 && mv conn.log2 conn.log
# @TEST-EXEC: btest-diff conn.log
# @TEST-EXEC: btest-diff ldap.log
# @TEST-EXEC: ! test -f dpd.log
# @TEST-EXEC: ! test -f analyzer.log
#
# @TEST-DOC: The addRequest/addResponse operation is not implemented, yet we process it.


@@ -0,0 +1,11 @@
# Copyright (c) 2024 by the Zeek Project. See LICENSE for details.
# @TEST-REQUIRES: have-spicy
# @TEST-EXEC: zeek -C -r ${TRACES}/ldap/missing_krbtgt_ldap_request.pcapng %INPUT
# @TEST-EXEC: cat conn.log | zeek-cut -Cn local_orig local_resp > conn.log2 && mv conn.log2 conn.log
# @TEST-EXEC: btest-diff conn.log
# @TEST-EXEC: btest-diff ldap.log
# @TEST-EXEC: btest-diff ldap_search.log
# @TEST-EXEC: ! test -f dpd.log
#
# @TEST-DOC: Test LDAP analyzer with GSS-API integrity traffic where we can still peek into LDAP wrapped in WRAP tokens.


@@ -0,0 +1,11 @@
# Copyright (c) 2024 by the Zeek Project. See LICENSE for details.
# @TEST-REQUIRES: have-spicy
# @TEST-EXEC: zeek -C -r ${TRACES}/ldap/missing_ldap_logs.pcapng %INPUT
# @TEST-EXEC: cat conn.log | zeek-cut -Cn local_orig local_resp > conn.log2 && mv conn.log2 conn.log
# @TEST-EXEC: btest-diff conn.log
# @TEST-EXEC: btest-diff ldap.log
# @TEST-EXEC: btest-diff ldap_search.log
# @TEST-EXEC: ! test -f dpd.log
#
# @TEST-DOC: Test LDAP analyzer with GSS-API integrity traffic where we can still peek into LDAP wrapped in WRAP tokens.


@@ -55,7 +55,6 @@ done
 @TEST-END-FILE

 @load policy/frameworks/cluster/experimental
-@load policy/frameworks/telemetry/prometheus
 @load base/frameworks/telemetry

 # So the cluster nodes don't terminate right away.


@@ -15,7 +15,7 @@
 # # simply update this test's TEST-START-FILE with the latest contents
 # site/local.zeek.

-@TEST-START-FILE local-7.0.zeek
+@TEST-START-FILE local-7.1.zeek
 ##! Local site policy. Customize as appropriate.
 ##!
 ##! This file will not be overwritten when upgrading or reinstalling!


@@ -1 +1 @@
-45582671c6715e719d91c8afde7ffb480c602441
+ded009fb7a0cdee6f36d5b40a6394788b760fa06