Parsing of client/server hellos that do not contain extensions should
now work correctly.
The port registration is now done Zeek-side, which fixes some test
failures.
Naming the analyzer differently from the old one was a mistake that
required unnecessary code changes; keeping the old name makes things
like StartTLS in other protocols work without additional code changes.
* origin/master:
Update doc submodule [nomail] [skip ci]
build_inner_connection: Use the outer packet's timestamp
build_inner_connection: Avoid one extra Init()
packet_analysis: Do not run DetectProtocol() on disabled analyzers
packet_analysis/Dispatcher: Do not index table twice
packet_analysis: Avoid shared_ptr copying for analyzer lookups
Update doc submodule [nomail] [skip ci]
Update doc submodule [nomail] [skip ci]
SSL: Add new extension types and ECH test
Update `.git-blame-ignore-revs`
Format JSON with clang-format
Bump pre-commit hooks
Reformat Zeek in Spicy style
This commit adds a multitude of extension types that were introduced in
the last few years; it also adds GREASE values for extensions, curves,
and cipher suites.
Furthermore, it adds a test that contains an encrypted-client-hello
key exchange (which uses several extension types that we did not have in
our baseline so far).
* origin/master: (386 commits)
Normalize version strings in test
Update doc submodule [nomail] [skip ci]
Update external testing baseline hashes
fuzzers: Add DTLS fuzzer
generic-analyzer-fuzzer: Support NextPacket() fuzzing
Require `truncate` for a test using it
Bump outdated baseline
Fix tests so they work both with GNU and BSD tools
Install libmaxminddb in macOS CI
Bump auxil/spicy to latest release
Supervisor: Handle EAGAIN error on stem pipe
fuzzer-setup: Allow customization without recompiling
ssl: Prevent unbounded ssl_history growth
ssl: Cap number of alerts parsed from SSL record
subdir-btest: Allow setting build_dir
Update doc submodule [nomail] [skip ci]
CI: Pass -A flag to btest for cluster-testing builds
Update doc submodule [nomail] [skip ci]
Update baselines
ftp: Do not base seq on number of pending commands
...
The ssl_history field may grow unbounded (e.g., via the ssl_alert event). Prevent
this by capping it at a configurable limit (default 100) and raising a weird once
the limit is reached.
Limit the number of events raised from an SSL record with content_type
alert (21) to a configurable maximum number (default 10). For TLS 1.3,
the limit is set to 1 as specified in the RFC. Add a new weird for cases
where the limit is exceeded.
OSS-Fuzz managed to generate a reproducer that raised ~660k ssl_plaintext
and ssl_alert events given ~810kb of input data. This change prevents that,
hopefully with no negative side effects in the real world.
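Both SSL limits above are tunable from script-land. A minimal sketch of
adjusting them, assuming they are exposed as redef-able options (the option
names below are assumptions, not taken from the patches):
```zeek
# Names assumed for illustration; check the shipped SSL scripts for the
# actual option identifiers.
redef SSL::max_ssl_history_length = 200;  # ssl_history cap (default 100)
redef SSL::max_alerts_per_record = 5;     # alerts parsed per record (default 10)
```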
Previously, seq was computed as the result of |pending_commands|+1. This
opened the possibility of overriding queued commands, as well as of logging
the same pending ftp reply multiple times.
For example, when commands 1, 2, 3 are pending, command 1 may be dequeued,
but the incoming command then receives seq 3 and overrides the already
pending command 3. The second scenario happens when ftp_reply() selects
command 3 as pending for logging but is then followed by many ftp_request()
events; this resulted in command 3's response being logged over and over
again for every following ftp_request().
Avoid both scenarios by tracking the command sequence as an absolute counter.
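As a rough illustration of the approach (a sketch only; the actual patch lives
in the FTP scripts and may differ), the sequence number can be drawn from a
per-connection counter that only ever increases:
```zeek
# Hypothetical sketch only (not the actual patch): a monotonically
# increasing per-connection counter, so a dequeued command's seq is
# never reassigned to a new command.
global ftp_cmd_seq: table[conn_id] of count;

function next_ftp_seq(cid: conn_id): count
	{
	if ( cid !in ftp_cmd_seq )
		ftp_cmd_seq[cid] = 0;

	# Old approach was |pending_commands| + 1, which could collide with a
	# still-pending command after earlier commands were dequeued.
	ftp_cmd_seq[cid] += 1;
	return ftp_cmd_seq[cid];
	}
```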
It's unclear what it's used for today, and it also results in the situation
that on some platforms we generate a reporter.log in bare mode, while on
others, where Spicy is disabled, we do not.
If we want base/frameworks/version loaded by default, we should put it into
init-bare.zeek and possibly remove the loading of the reporter framework
from it: Reporter::error() would still work and be visible on stderr, it
just would not create a reporter.log.
Makes testing easier and aligns better with log rotation and timer
expiration. Should not have an effect in practice. Also log whether the
inode or the modification time changed.
* origin/topic/awelzel/2326-import-quic:
ci/btest: Remove spicy-quic helper, disable Spicy on CentOS 7
btest/core/ppp: Run test in bare mode
btest/quic: Update other tests
testing/quic: Fixups and simplification after Zeek integration
quic: Integrate as default analyzer
quic: Include Copyright lines to the analyzer's source code contributed by Fox-IT
quic: Squashed follow-ups: quic.log, tests, various fixes, performance
quic: Initial implementation
* origin/topic/bbannier/issue-3234:
Introduce dedicated `LDAP::Info`
Remove redundant storing of protocol in LDAP logs
Use LDAP `RemovalHook` instead of implementing `connection_state_remove`
Tidy up LDAP code by using local references
Pluralize container names in LDAP types
Move LDAP script constants to their own file
Name `LDAP::Message` and `LDAP::Search` `*Info`
Make ports for LDAP analyzers fully configurable
Require have-spicy for tests which log spicy-ldap information
Fix LDAP analyzer setup for when Spicy analyzers are disabled
Bump zeek-testing-private
Integrate spicy-ldap test suite
Move spicy-ldap into Zeek protocol analyzer tree
Explicitly use all of spicy-ldap's modules
Explicitly list `asn1.spicy` as spicy-ldap source
Remove uses of `zeek` module in spicy-ldap
Fix typos in spicy-ldap
Remove project configuration files in spicy-ldap
Integrate spicy-ldap into build
Import zeek/spicy-ldap@57b5eff988
This moves the ports the LDAP analyzers should be triggered on from the
EVT file to the Zeek module. This gives users full control over which
ports the analyzers are registered for while previously they could only
register them for additional ports (there is no Zeek script equivalent
of `Manager::UnregisterAnalyzerForPort`).
The analyzers could still be triggered via DPD, but this is intentional.
To fully disable analyzers, users can use, e.g.:
```zeek
event zeek_init()
	{
	Analyzer::disable_analyzer(Analyzer::ANALYZER_LDAP_TCP);
	}
```
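Conversely, registering the analyzers for additional ports now also happens
purely in Zeek script. A minimal sketch, assuming the module exposes a
redef-able port set (the `LDAP::ports_tcp` name below is an assumption;
`Analyzer::register_for_port()` is the generic framework call):
```zeek
# Option name assumed for illustration:
redef LDAP::ports_tcp += { 3269/tcp };

# Generic alternative via the analyzer framework:
event zeek_init()
	{
	Analyzer::register_for_port(Analyzer::ANALYZER_LDAP_TCP, 3269/tcp);
	}
```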
On Linux with a default ext4 or tmpfs filesystem, the default buffer size for
reading a pcap is chosen as 4k (validated via strace/gdb). When reading large
pcaps containing raw data transfers, the syscall overhead for read becomes
visible in profiles. Make the buffer size configurable and default it to 128kb.
When processing a ~830M PCAP (16 UDP connections, each transferring ~50MB) in
bare mode, this change improves runtime from 1.39 sec to 1.29 sec. Increasing
the buffer further didn't provide a noticeable boost.
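A minimal sketch of adjusting the new buffer size, assuming it is exposed as a
redef-able option (the name below is an assumption, not taken from the patch):
```zeek
# Option name assumed for illustration; the new default is 128kb.
redef Pcap::bufsize_offline_bytes = 256 * 1024;
```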
So far we had trouble documenting Spicy analyzers through Zeekygen
because they would show up as components belonging to the
`Zeek::Spicy` plugin, whereas traditional analyzers are their own plugins
and hence are documented individually. This commit
teaches Zeekygen to track Spicy analyzers separately inside their own
`Info` instances. This information isn't further used in this commit
yet, but will be merged with the plugin output in a subsequent change
to get the expected joint output.
To pass additional information to Zeekygen, EVT files now also support
two new tags:
- `%doc-id = ID;` defines the global ID under which everything inside
the EVT file will be documented by Zeekygen, conceptually comparable
to plugin names (e.g., `Zeek::Syslog`).
- `%doc-description = "text"` provides additional text to go into the
documentation (comparable to plugin descriptions).
This information is carried through into the HLTO runtime
initialization code, from where it's registered with Zeekygen.
This commit also removes a couple of previous hacks in how Spicy
integrated with Zeekygen, which (1) ended up generating broken doc output
for Spicy components and (2) no longer seem to be necessary anyway.
Setting this option to false excludes missing bytes in files from the
extraction limits and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
In the past, we allocated a zero-filled buffer and wrote it out with
fwrite. Now we instead just fseek to the correct offset.
This changes the way in which the file extract limit is counted a bit;
skipped bytes no longer count against the file size limit.
(cherry picked from commit 5071592e9b7105090a1d9de19689c499070749d4)
Setting this option to false excludes missing bytes in files from the
extraction limits and allows extracting data up to the desired limit,
even when partial files are written.
When missing bytes are encountered, files are now written as sparse
files.
Using this option requires the underlying storage and utilities to support
sparse files.
(cherry picked from commit afa6f3a0d3b8db1ec5b5e82d26225504c2891089)
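Both entries above describe the same knob. A minimal sketch of turning it off,
assuming it is exposed as a redef-able boolean option (the name below is an
assumption, not taken from the patches):
```zeek
# Option name assumed for illustration: stop counting missing bytes
# towards the extraction limit; gaps become holes in a sparse file.
redef FileExtract::default_limit_includes_missing = F;
```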
OSS-Fuzz generated a CWD request and reply followed by very many EPRT
requests. This caused Zeek to re-log the CWD request and invoke `build_url_ftp()`
over and over again, resulting in long processing times.
Avoid this scenario by not logging commands that aren't pending anymore.
(cherry picked from commit b05dd31667ff634ec7d017f09d122f05878fdf65)
A call to `extract_filename_from_content_disposition()` is only
efficient if the string is guaranteed to contain the pattern that
is removed by `sub()`. Due to missing brackets around the `[:blank:]`
character class, an overly long string (756kb) ending in
"Type:dtanameaa=" matched the wrong pattern causing `sub()` to
exhibit quadratic runtime. Besides that, we may have potentially
extracted wrong information from a crafted header value.
(cherry picked from commit 6d385b1ca724a10444865e4ad38a58b31a2e2288)
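For context on the bracket issue: in a regular expression, the POSIX class
`[:blank:]` is only recognized inside a bracket expression, i.e. `[[:blank:]]`;
written on its own it is an ordinary character set matching the literal
characters `:`, `b`, `l`, `a`, `n` and `k`. A standalone illustration (not the
actual base-script pattern):
```zeek
event zeek_init()
	{
	print /[:blank:]/ in "k";    # T - matches the literal 'k'
	print /[[:blank:]]/ in "k";  # F - only space and tab match
	}
```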
When http_reply events are received before http_request events, either
through faked traffic or possible re-ordering, it is possible to trigger
unbounded state growth because later http_requests are never matched
with responses again.
Prevent this by synchronizing request/response counters when late
requests come in.
Also forcefully flush pending requests when http_replies are never
observed, either because the analyzer has been disabled or because of
half-duplex traffic.
Fixes #1705