If the password contains a colon, the current implementation would only
log the part before the first colon (e.g., the password
`password:password` would be logged as `password`).
A test has been added to confirm the expected behaviour.
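A minimal sketch of the intended behaviour in Zeek script (illustrative only,
not the analyzer's actual code), assuming credentials of the form
`user:password`: splitting only at the first colon preserves any further
colons in the password.

```zeek
# Illustrative sketch: split credentials at the first colon only, so
# colons inside the password are kept.
event zeek_init()
    {
    local creds = "user:password:password";
    # split_string1() splits at the first match, yielding at most two parts.
    local parts = split_string1(creds, /:/);
    print parts[0];   # user
    print parts[1];   # password:password
    }
```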
* origin/topic/vern/cpp-init:
Func: Add SetCapturesVec()
marked some recently added BTests as not suitable for -O gen-C++ testing
robustness improvements for -O gen-C++ generation of lambdas / "when"s
speedups for compilation of initializers in -O gen-C++ generated code
fixes for -O gen-C++ generation of floating point constants
-O gen-C++ fix for dealing with use of more than one module qualifier
header tweaks to provide gen-C++ script optimization with more flexibility
fix for script optimization of constants of type "opaque"
fix for script optimization of "in" operations
some minor tidying of -O gen-C++ sources
This reworks the parser such that COM_CHANGE_USER switches the
connection back into the CONNECTION_PHASE so that we can remove the
EXPECT_AUTH_SWITCH special case in the COMMAND_PHASE. Adds two pcaps
produced with Python that actually perform COM_CHANGE_USER, since this
does not seem to be possible from the MySQL CLI.
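For script-level visibility, a handler along these lines could observe the
command (a sketch; the numeric code 17/0x11 for COM_CHANGE_USER comes from the
MySQL protocol, not from this change):

```zeek
# Sketch: watch for COM_CHANGE_USER via the generic mysql_command_request
# event. 17 (0x11) is the MySQL protocol code for COM_CHANGE_USER.
event mysql_command_request(c: connection, command: count, arg: string)
    {
    if ( command == 17 )
        print fmt("COM_CHANGE_USER on %s", c$id$orig_h);
    }
```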
The ctu-sme-11-win7ad-1-ldap-tcp-50041.pcap file was harvested
from the CTU-SME-11 (Experiment-VM-Microsoft-Windows7AD-1) dataset
at https://zenodo.org/records/7958259 (DOI 10.5281/zenodo.7958258).
Closes #3853
Remove caching_sha2_password parsing/state from the analyzer and implement
the generic events. If we actually want to peek into the authentication
mechanism, we could write a separate analyzer for it. For now, treat it
as opaque values that are exposed to script land.
The added tests show the --get-server-public-key option in use, where
mysql_auth_more_data contains an RSA public key.
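A sketch of consuming these opaque values from script land; the exact event
signature is an assumption here, not something this change defines:

```zeek
# Sketch (event signature assumed): the auth exchange is handed to script
# land as opaque bytes rather than being interpreted by the analyzer.
event mysql_auth_more_data(c: connection, is_orig: bool, data: string)
    {
    # With --get-server-public-key, the server-side data may carry an RSA
    # public key in PEM form; here we only report its size.
    print fmt("auth more data (%s): %d bytes", is_orig ? "client" : "server", |data|);
    }
```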
PCAP was produced with a local OpenLDAP server configured to support StartTLS.
This moves the Zeek-specific calls into a separate ldap_zeek.spicy
file/module, keeping them apart from the LDAP parsing itself.
With Cluster::Node$metrics_port being optional, there's not really
a need for the extra script. The new rule is: if a metrics_port is set,
the node will attempt to listen on it.
Users can still redef Telemetry::metrics_port *after*
base/frameworks/telemetry has been loaded to change the port defined
in cluster-layout.zeek.
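For example (the port value is a placeholder), a late redef still takes effect:

```zeek
# Sketch: override the metrics port after the telemetry framework is loaded.
@load base/frameworks/telemetry

redef Telemetry::metrics_port = 9911/tcp;
```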
ASN1Message(True) may go off parsing arbitrary input data as
"something ASN.1". This could be GBs of octet strings or just very
long sequences. Avoid this by open-coding the expected top-level types.
This also tries to avoid some of the &parse-from usages that result
in unnecessary copies of data.
Adds a locally generated PCAP with addRequest/addResponse that we
don't currently handle.
This is mostly based on staring at the PCAPs and opening a few RFCs. For now,
only if the MS_KRB5 OID is used and accepted in a bind response do we start
stripping KRB5 wrap tokens for both client and server traffic.
Would probably be nice to forward the GSS-API data to the analyzer...
Closes zeek/spicy-ldap#29.
* topic/christian/broker-prometheus-cpp:
Update the scripts.base.frameworks.telemetry.internal-metrics test
Revert "Temporarily disable the scripts/base/frameworks/telemetry/internal-metrics btest"
Bump Broker to pull in new Prometheus support and pass in Zeek's registry
This now uses different record fields, and for now we no longer have CAF
telemetry. We indicate we're running under test to get reliable ordering in the
baselined output.
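For reference, the script-level access the test exercises looks roughly like
the following sketch (assuming Telemetry::collect_metrics(); the printed
record fields are whatever Telemetry::Metric now carries):

```zeek
# Sketch: dump Zeek's own metrics from script land, roughly what the
# internal-metrics btest does.
event zeek_init()
    {
    local ms = Telemetry::collect_metrics("zeek", "*");
    for ( i in ms )
        print ms[i];
    }
```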
When Zeek flips the roles of an HTTP connection after the HTTP analyzer
has been attached, that analyzer would not update its own ContentLine analyzer
state, resulting in the wrong ContentLine analyzer being switched into
plain delivery mode.
In debug builds, this would result in assertion failures; in production
builds, the HTTP analyzer would receive HTTP bodies as individual header
lines or, conversely, individual header lines would be delivered as one
large chunk from the ContentLine analyzer.
PCAPs were generated locally using tcprewrite to select well-known-http ports
for both endpoints, then editcap to drop the first SYN packet.
Kudos to @JordanBarnartt for keeping at it.
Closes #3789
This reverts part of commit a0888b7e36, which inhibited analyzer
violations when parsing non-SSH traffic once the &restofdata path is
entered.
@J-Gras reported the analyzer not being disabled when sending HTTP
traffic on port 22.
This adds the verbose analyzer.log baselines so that future improvements
in these scenarios become visible.