The current test attempts to instantiate two spicy::SSH_1 protocol
analyzers in the .evt file. The intention likely was to use two
distinct protocol analyzers, both trying to replace the builtin SSH
analyzer.
Coincidentally, fixing this happens to work around TSAN errors tickled
by the FatalError() call while loading the .hlto with two identically
named analyzers.
$ cat .tmp/spicy.replaces-conflicts/output
error: redefinition of protocol analyzer spicy::SSH_1
ThreadSanitizer: main thread finished with ignores enabled
One of the following ignores was not ended (in order of probability)
Ignore was enabled at:
#0 __llvm_gcov_init __linker___d192e45c25d5ee23-484d3e0fc2caf5b4.cc (ssh.hlto+0x34036) (BuildId: 091934ca4da885e7)
#1 __llvm_gcov_init __linker___d192e45c25d5ee23-484d3e0fc2caf5b4.cc (ssh.hlto+0x34036) (BuildId: 091934ca4da885e7)
...
I was tempted to replace FatalError() with Error() and rely on
zeek-setup.cc's early exiting on any reporter errors, but this
seems easier for now.
Relates to #3865.
This prevents the callbacks from being processed on the worker thread
spawned by Civetweb. It fixes data race issues with lookups involving
global variables, amongst other threading issues.
The ctu-sme-11-win7ad-1-ldap-tcp-50041.pcap file was harvested
from the CTU-SME-11 (Experiment-VM-Microsoft-Windows7AD-1) dataset
at https://zenodo.org/records/7958259 (DOI 10.5281/zenodo.7958258).
Closes #3853
The PCAP was generated as follows. Wireshark doesn't even seem to parse
this properly right now.
import datetime

with common.get_connection() as c:  # "common": local helper module that opens the DB connection
    with c.cursor() as cur:
        date1 = datetime.date(1987, 10, 18)
        datetime1 = datetime.datetime(1990, 9, 26, 12, 13, 14)
        # Attach query attributes to the following statement.
        cur.add_attribute("number1", 42)
        cur.add_attribute("string1", "a string")
        cur.add_attribute("date1", date1)
        cur.add_attribute("datetime1", datetime1)
        cur.execute("SELECT version()")
        result = cur.fetchall()
        print("result", result)
Remove caching_sha2_password parsing/state from the analyzer and implement
the generic events. If we actually want to peek into the authentication
mechanism, we could write a separate analyzer for it. For now, treat it
as opaque values that are exposed to script land.
The added tests show the --get-server-public-key option in use, where
mysql_auth_more_data contains an RSA public key.
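A rough sketch of consuming the new generic event from script land (the exact
parameter list of mysql_auth_more_data is an assumption here, not taken from
the analyzer source):

    event mysql_auth_more_data(c: connection, is_orig: bool, data: string)
        {
        # With --get-server-public-key, the server-side data may carry an RSA
        # public key; scripts just see an opaque byte string.
        print "mysql auth more data", c$id, is_orig, |data|;
        }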
* origin/topic/vern/script-opt-maint.Aug24:
minor optimization of boolean comparisons
fix & regression test for GH-3839 (spurious warnings for "when" constructs)
PCAP was produced with a local OpenLDAP server configured to support StartTLS.
This puts the Zeek calls into a separate ldap_zeek.spicy file/module
to separate them from LDAP.
With Cluster::Node$metrics_port being optional, there's no real need
for the extra script anymore. New rule: if a metrics_port is set, the
node will attempt to listen on it.
Users can still redef Telemetry::metrics_port *after*
base/frameworks/telemetry has been loaded to change the port defined
in cluster-layout.zeek.
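As a minimal sketch (the port number here is arbitrary), overriding the port
assigned in cluster-layout.zeek after the framework has been loaded:

    @load base/frameworks/telemetry
    redef Telemetry::metrics_port = 9091/tcp;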
ASN1Message(True) may go off parsing arbitrary input data as
"something ASN.1". This could be GBs of octet strings or just very
long sequences. Avoid this by open-coding the expected top-level types.
This also tries to avoid some of the &parse-from usages that result
in unnecessary copies of data.
Adds a locally generated PCAP with addRequest/addResponse that we
don't currently handle.
This was mostly done by staring at the PCAPs and opening a few RFCs. For now,
only if the MS_KRB5 OID is used and accepted in a bind response, start
stripping KRB5 wrap tokens for both client and server traffic.
It would probably be nice to forward the GSS-API data to the analyzer...
Closes zeek/spicy-ldap#29.
* topic/christian/broker-prometheus-cpp:
Update the scripts.base.frameworks.telemetry.internal-metrics test
Revert "Temporarily disable the scripts/base/frameworks/telemetry/internal-metrics btest"
Bump Broker to pull in new Prometheus support and pass in Zeek's registry
This now uses different record fields, and for now we no longer have CAF
telemetry. We indicate we're running under test to get reliable ordering in the
baselined output.
This is a rework of b59bed9d06, moving
HILTI_JIT_PARALLELISM=1 into btest.cfg so that it applies by default to
btest -j users (and CI).
The background for this change is that spicyz may spawn up to nproc compiler
instances by default. Combined with btest -j, this can mean nproc x nproc
instances in the worst case. Particularly with gcc, this easily overloads CI or
local systems, putting them into hard-to-recover-from thrashing/OOM states.
Exporting HILTI_JIT_PARALLELISM in the shell still allows overriding the default.
When Zeek flips the roles of an HTTP connection subsequent to the HTTP analyzer
being attached, that analyzer would not update its own ContentLine analyzer
state, resulting in the wrong ContentLine analyzer being switched into
plain delivery mode.
In debug builds, this would result in assertion failures; in production
builds, the HTTP analyzer would receive HTTP bodies as individual header
lines, or conversely, individual header lines would be delivered as a
large chunk from the ContentLine analyzer.
PCAPs were generated locally using tcprewrite to select well-known-http ports
for both endpoints, then editcap to drop the first SYN packet.
Kudos to @JordanBarnartt for keeping at it.
Closes #3789
This eliminates one place in which we currently need to mirror changes to the
script-land Cluster::Node record. Instead of keeping an exact in-core equivalent, the
Supervisor now treats the data structure as opaque, and stores the whole cluster
table as a JSON string.
We may replace the script-layer Supervisor::ClusterEndpoint in the future, using
Cluster::Node directly. But that's a more invasive change that will affect how
people invoke Supervisor::create() and similar functions.
Relying on JSON for serialization has the side-effect of removing the
Supervisor's earlier quirk of using 0/tcp, not 0/unknown, to indicate unused
ports in the Supervisor::ClusterEndpoint record.
This needed a small tweak in the deserialization, since each roundtrip
would otherwise pad the prior pattern with an extra /^?(...)$?/.
This expands the language.set test to also verify serializing/unserializing for
sets, similarly to tables in the previous commit.
This allows additional data roundtripping through JSON since to_json() already
supports tables. There are some subtleties around the formatting of strings in
JSON object keys, for which this adds a bit of helper infrastructure.
This also expands the language.table test to verify the roundtrips, and adapts
bif.from_json to include a table in the test record.
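A minimal sketch of the kind of round-trip this enables (the named table type
is made up for illustration):

    type StringCounts: table[string] of count;

    event zeek_init()
        {
        local t: StringCounts = table(["a"] = 1, ["b"] = 2);
        # to_json() renders the table as a JSON object; from_json() reads it back.
        print from_json(to_json(t), StringCounts);
        }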
The from_json() BiF and its underlying code in Val.cc currently expect ports
expressed as a string ('80/tcp' etc). Zeek's own serialization via ToJSON()
renders them as an object ('{"port":80, "proto":"tcp"}'). This adds support
for the latter format to from_json(), so serialized values can be read back.
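For instance (a sketch; the record type is made up for illustration), both of
these should now yield the same port value:

    type ExampleEndpoint: record {
        p: port;
    };

    event zeek_init()
        {
        # String form, e.g. from hand-written JSON:
        print from_json("{\"p\": \"80/tcp\"}", ExampleEndpoint);
        # Object form, as emitted by Zeek's own to_json()/ToJSON():
        print from_json("{\"p\": {\"port\": 80, \"proto\": \"tcp\"}}", ExampleEndpoint);
        }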
The core.file-analyzer-violation test showed that it's possible to
create new threads (log writers) when Zeek is in the process of
terminating. This can result in the IO manager's destructor
deleting IO sources for threads that are still running.
This is sort of a scripting issue, so for now log a reporter warning
when it happens, to leave a bit of a breadcrumb about what might be
going on. In the future it might make sense to guard such APIs with
zeek_is_terminating().
This is a fixup for 0cd023b839 which
currently causes ASAN coverage builds to fail for non-master branches
due to a missing COVERALLS_REPO_TOKEN.
Instead of bailing out for non-master branches, pass `--dry-run` to the
coveralls-lcov invocation to test more of the script.