* origin/topic/timw/update-c-ares:
Configure c-ares before libkqueue
Update 3rdparty submodule to update sqlite to 3.45.0
Upgrade rapidjson to current upstream master
Upgrade c-ares to 1.26.0
On platforms without a native kqueue, c-ares picks up the existing
value of HAVE_KQUEUE that was set during the libkqueue setup. We don't
pass the libkqueue information down to the c-ares CMake run, so it
won't have the include paths or the library when it builds.
Seems reasonable given that we log the server SCID. Interestingly, the
Chromium examples actually have zero-length (empty) source connection
IDs. I wonder if that's part of their "protocol ossification avoidance"
effort.
The original logic stopped decrypting any INITIAL packets after the
first. The Firefox/Cloudflare pcaps actually show that the server
replies with a QUIC INITIAL packet containing just ACK frames and no
CRYPTO frames. Only the second QUIC INITIAL packet from the server
then contains the CRYPTO frames.
There's no good reason to stop decryption attempts: either we succeed
down the road and then stop, or we fail and raise analyzer violations.
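A minimal C++ sketch of the changed control flow, with made-up names
(the actual analyzer code differs): keep attempting decryption of
server INITIAL packets until one succeeds, rather than bailing after
the first.

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: state tracked per connection side.
struct InitialState {
    bool established = false;  // true once CRYPTO data was successfully extracted
};

// Placeholder for the real decryption attempt; assume it returns true when the
// packet decrypts and yields CRYPTO frames.
bool try_decrypt_initial(const std::vector<uint8_t>& packet) {
    return ! packet.empty();  // stand-in logic only
}

void on_server_initial(InitialState& st, const std::vector<uint8_t>& packet) {
    if ( st.established )
        return;  // done; no further decryption needed

    // Old behavior effectively stopped after the first INITIAL. Since the first
    // server INITIAL may carry only ACK frames, keep trying on later ones.
    if ( try_decrypt_initial(packet) )
        st.established = true;
    // Persistent failure surfaces as an analyzer violation elsewhere.
}
```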
* topic/christian/mmdb-fix:
Move GeoIP availability test in btests to `zeek-config --have-geoip`
Fix MMDB::Lookup() to check result status correctly
Add btest for succeeding/failing IPv4/IPv6 lookups
Add an IPv6 range to the test MMDB DBs
There's something wrong with Chocolatey's OpenSSL 3.2.0 package that
prevents CMake from finding libcrypto even though it's clearly in the
directory. Pinning to 3.1.1 fixes the build issue.
This function conflated checking the return status of
MMDB_lookup_sockaddr() with testing the found_entry flag of the
returned result when that call succeeds. Both checks need to happen.
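For illustration, a minimal sketch of the intended check order against
the libmaxminddb C API (not Zeek's actual MMDB code): both the call
status and found_entry have to be verified.

```cpp
#include <maxminddb.h>
#include <sys/socket.h>

#include <optional>

// Return the lookup result only if the call itself succeeded *and* the
// database actually contains an entry for the address.
std::optional<MMDB_lookup_result_s> lookup(MMDB_s* mmdb, const struct sockaddr* sa) {
    int mmdb_error = MMDB_SUCCESS;
    MMDB_lookup_result_s result = MMDB_lookup_sockaddr(mmdb, sa, &mmdb_error);

    if ( mmdb_error != MMDB_SUCCESS )
        return std::nullopt;  // the lookup call itself failed

    if ( ! result.found_entry )
        return std::nullopt;  // call succeeded, but no entry for this address

    return result;
}
```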
This introduces a new hook into the Intel::seen() function that allows
users to directly interact with the result of a find() call via external
scripts.
This should solve the use case brought up by @chrisanag1985 in
discussion #3256: recording and acting on "no intel match found".
@Canon88 was recently asking on Slack about enabling HTTP logging for a
given connection only when an Intel match occurred, and found that the
Intel::match() event would only occur on the manager. The
Intel::match_remote() event might be a workaround, but it possibly runs
a bit too late, and it's also just an internal "detail" event that
might not be stable.
Another internal use case revolved around enabling packet recording
based on Intel matches, which necessarily needs to happen on the worker
where the match happened. The proposed workaround is similar to the
above, using Intel::match_remote().
This hook also provides an opportunity to rate-limit heavy-hitter intel
items locally on the worker nodes, or even to replace the event
approach currently used with a customized one.
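Purely as a conceptual illustration of the rate-limiting idea (this is
not Zeek's hook API; all names below are hypothetical), a worker-local
counter per intel item could look like this:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical per-item counter; nothing here mirrors Zeek's actual Intel API.
struct HeavyHitterLimiter {
    std::unordered_map<std::string, uint64_t> hits;
    uint64_t max_hits_per_interval = 100;  // illustrative threshold

    // Return false when further processing of this indicator should be
    // suppressed on this worker until the counters are reset.
    bool allow(const std::string& indicator) {
        return ++hits[indicator] <= max_hits_per_interval;
    }

    void reset() { hits.clear(); }  // e.g. called once per interval
};
```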
- With `broker::data`, we always have actual `std::string` objects that
  we can pass to C functions expecting a null-terminated string.
  However, `broker::variant` returns a `std::string_view` where we
  previously received a `std::string`. Hence, we add an extra level of
  indirection that ensures views are converted to null-terminated
  strings, and we also use `c_str()` where we previously used `data()`.
  The former is not present on `std::string_view`, so using this member
  function acts as extra insurance that we do not accidentally pass the
  bytes of a view to a C function (see the sketch after this list).
- Switch from error and status views to actual error and status objects.
The view types from Broker only work with `broker::data` and thus
won't be available with `broker::variant`.
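As a hedged sketch of the extra indirection mentioned above (helper
names are illustrative, not Broker's API):

```cpp
#include <arpa/inet.h>

#include <string>
#include <string_view>

// Materialize a std::string from whatever the Broker layer hands us, so that
// callers can rely on c_str() producing a null-terminated buffer.
std::string to_null_terminated(std::string_view sv) {
    return std::string{sv};  // copies the bytes; std::string guarantees the trailing '\0'
}

// Example: a C function like inet_pton() expects a null-terminated string.
// A std::string_view's data() pointer carries no such guarantee.
bool parse_addr(std::string_view text, in_addr* out) {
    const std::string copy = to_null_terminated(text);
    return inet_pton(AF_INET, copy.c_str(), out) == 1;
}
```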
A continuation frame has the same type as the first frame, but that
information was neither used nor kept, resulting in the payload of
continuation frames not being forwarded. The pcap was created with a
fake Python server and a bit of message crafting.
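An illustrative C++ sketch of the idea (names are made up; the
analyzer's real code differs): remember the first frame's type so that
continuation-frame payloads get forwarded under it.

```cpp
#include <cstdint>
#include <string>

// Stand-in for handing data to the downstream consumer (no-op here).
void forward_payload(uint8_t /*message_type*/, const std::string& /*payload*/) {}

struct MessageState {
    uint8_t first_frame_type = 0;  // type of the frame that started the message
    bool in_message = false;
};

void on_frame(MessageState& st, uint8_t frame_type, bool is_continuation,
              const std::string& payload) {
    if ( ! is_continuation ) {
        st.first_frame_type = frame_type;  // keep the type for later frames
        st.in_message = true;
    }

    // Forward every frame's payload under the first frame's type, instead of
    // dropping continuation payloads because their own type field says nothing.
    forward_payload(st.in_message ? st.first_frame_type : frame_type, payload);
}
```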
I'm always a bit worried about using sed -E anywhere, because the
canonifiers give the impression it won't work everywhere consistently.
My manpage says sed -E should be preferred for portability, so let's
remove the sed -r / sed -E differentiation, assuming it's just a thing
from the past.
The implementation assumed that arg is null-terminated. Due to the
ContentLineAnalyzer wrongly being in plain delivery mode, this
assumption was violated. It shouldn't happen anymore, but protect
against it anyhow.
When resetting the BDAT state, we also need to switch the ContentLine
analyzer back into line mode, otherwise we'd be feeding plain-delivery
data through ProcessLine(), possibly violating some assumptions about
null termination.
Do this for both ContentLineAnalyzers: only one of them will be in
plain delivery mode anyhow, but we don't keep track of which one it
was.
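A self-contained sketch of the reset, using a stand-in class rather
than Zeek's real ContentLine analyzer (the SetPlainDelivery() name
mirrors the idea; the exact interface is an assumption):

```cpp
#include <cstdint>

// Minimal stand-in for a ContentLine-style analyzer: a positive plain-delivery
// length means "plain delivery mode", zero means ordinary line mode.
class ContentLineLike {
public:
    void SetPlainDelivery(int64_t length) { plain_delivery_length = length; }
    bool InPlainDelivery() const { return plain_delivery_length > 0; }

private:
    int64_t plain_delivery_length = 0;
};

// Resetting BDAT state: switch *both* analyzers back to line mode, since we do
// not record which direction was put into plain delivery for the BDAT chunk.
// The call is a no-op for the one that is already in line mode.
void ResetBDAT(ContentLineLike& cl_orig, ContentLineLike& cl_resp) {
    cl_orig.SetPlainDelivery(0);
    cl_resp.SetPlainDelivery(0);
}
```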