* origin/topic/awelzel/2326-import-quic:
ci/btest: Remove spicy-quic helper, disable Spicy on CentOS 7
btest/core/ppp: Run test in bare mode
btest/quic: Update other tests
testing/quic: Fixups and simplification after Zeek integration
quic: Integrate as default analyzer
quic: Include copyright lines in the analyzer's source code contributed by Fox-IT
quic: Squashed follow-ups: quic.log, tests, various fixes, performance
quic: Initial implementation
* origin/topic/bbannier/issue-3234:
Introduce dedicated `LDAP::Info`
Remove redundant storing of protocol in LDAP logs
Use LDAP `RemovalHook` instead of implementing `connection_state_remove`
Tidy up LDAP code by using local references
Pluralize container names in LDAP types
Move LDAP script constants to their own file
Name `LDAP::Message` and `LDAP::Search` `*Info`
Make ports for LDAP analyzers fully configurable
Require have-spicy for tests which log spicy-ldap information
Fix LDAP analyzer setup for when Spicy analyzers are disabled
Bump zeek-testing-private
Integrate spicy-ldap test suite
Move spicy-ldap into Zeek protocol analyzer tree
Explicitly use all of spicy-ldap's modules
Explicitly list `asn1.spicy` as spicy-ldap source
Remove uses of `zeek` module in spicy-ldap
Fix typos in spicy-ldap
Remove project configuration files in spicy-ldap
Integrate spicy-ldap into build
Import zeek/spicy-ldap@57b5eff988
On Linux with a default ext4 or tmpfs filesystem, the default buffer size for
reading a pcap is chosen as 4k (strace/gdb validated). When reading large pcaps
containing raw data transfers, the syscall overhead of read() becomes visible
in profiles. Make the buffer size configurable and default it to 128 KB.
When processing a ~830M PCAP (16 UDP connections, each transferring ~50MB) in
bare mode, this change improves runtime from 1.39 sec to 1.29 sec. Increasing
the buffer further didn't provide a noticeable boost.
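A minimal sketch of tuning this from a script, assuming the new knob is
exposed as a redef-able option named Pcap::bufsize_offline_bytes (the exact
option name is an assumption here):

    # Assumed option name; bump the offline pcap read buffer from the
    # 128 KB default to 512 KB for large pcaps with raw data transfers.
    redef Pcap::bufsize_offline_bytes = 512 * 1024;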
* origin/topic/awelzel/3314-lambda-redefinition-segfault:
Var/Func: Render function parameters using comma, not semicolon
Var: Fix null-pointer deref on redefinition of lambdas
* origin/topic/awelzel/deferred-default-non-const-v4:
CreationInitsOptimizer: Use PreTypedef() instead of PreType()
Fix deferred record initialization
testing/btest: Un-deferred record initialization tests
* origin/topic/timw/3059-set-vector-conversion:
Fix conversion with record types
Add conversion between set and vector using 'as' keyword
Add std::move for a couple of variables passed by value
This is based on the discussion in zeek/zeek#2668. Using &default with tables
can be confusing as the default value is not inserted into the table. The
following example prints an empty table at the end even though new Service
records were instantiated:
    type Service: record {
        occurrences: count &default=0;
        last_seen: time &default=network_time();
    };

    global services: table[string] of Service &default=Service();

    event zeek_init()
        {
        services["http"]$occurrences += 1;
        services["http"]$last_seen = network_time();
        print services;
        }
Changing the above &default to &default_insert will insert the newly created
default value into the table upon a missed lookup, which is less surprising.
Other examples that previously caused confusion revolved around tables of sets
or tables of vectors where `add` or `+=` did not work as expected:
    tbl_of_vector["http"] += 1;
    add tbl_of_set["http"][1];
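As a minimal sketch, reusing the Service type from above and assuming
&default_insert otherwise behaves like &default, the earlier example becomes:

    global services: table[string] of Service &default_insert=Service();

    event zeek_init()
        {
        services["http"]$occurrences += 1;
        services["http"]$last_seen = network_time();
        print services;   # now prints a table containing the "http" entry
        }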
Include module names in the global_ids() table in an ad-hoc fashion. Their
table values have the type_name field set to "module", and their keys are
prefixed with "module " to avoid clashes with existing global identifiers
that shadow module names (Management::Node being an existing example).
Closes #3136
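A rough sketch of looking such an entry up, assuming the Management framework
scripts are loaded so that the Management::Node module mentioned above exists:

    event zeek_init()
        {
        local ids = global_ids();

        if ( "module Management::Node" in ids )
            # type_name is "module" for these ad-hoc module entries.
            print ids["module Management::Node"]$type_name;
        }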
Avoids losing state on a connection value when a connection is flipped.
Fixes up the NTP baseline as well, where this was visible:
analyzer_confirmation_info() was raised for a connection value which was
immediately forgotten due to the subsequent connection flip.
Closes #3028
When a JSON document contains key names with colons or other special
characters that are not valid in Zeek identifiers, from_json() cannot be
used to parse such input.
This change allows passing a customizable key normalization function.
Closes #3142.
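A hedged sketch of the idea, assuming the normalization function is passed as
an optional third argument to from_json() (the exact signature and default
mapper are assumptions):

    type Entry: record {
        content_type: string;
    };

    # Illustrative mapper turning "content-type" into the valid
    # Zeek identifier "content_type".
    function normalize_key(key: string): string
        {
        return subst_string(key, "-", "_");
        }

    event zeek_init()
        {
        print from_json("{\"content-type\": \"text/html\"}", Entry,
                        normalize_key);
        }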
While writing documentation about troubleshooting and looking a bit at
the older stats.log, we realized the packet lag metric isn't exposed via
metrics/telemetry. Add it.
The following is from a Zeek instance lagging ~6 seconds behind in
network time because it is heavily overloaded:

    zeek_net_packet_lag_seconds{endpoint=""} 6.169406 1684848998092
* jgras/topic/jgras/event-ts:
Add compatibility tests for timestamped events.
Add timestamps to auto published broker events.
Add timestamps to manually published broker events.
Annotate scheduled events with intended timestamp.
Add timestamp to events.
One rename of `timestamp` to `ts` during the merge.
This commit adds support for the connection_id extension, adds a trace
that uses DTLS 1.3 connection IDs, and adds parsing for the DTLS 1.3
unified header for the case in which connection IDs are not used.
In case connection IDs are used, parsing of the DTLS 1.3 unified header
is skipped. This is because the header then contains a variable-length
element whose length is not given in the header; instead, it is given in
the client/server hello message of the opposite side of the connection
(which we might have missed). Furthermore, parsing it is not of high
importance, since we do not pass the connection ID, or any of the other
parsed values of the unified header, into scriptland.
* amazing-pp/topic/fupeng/from_json_bif:
Implement from_json bif
Minor updates during merge: moved ValFromJSON into zeek::detail for the
time being, removed gotos, normalized some error messages to lower case,
extended tests minimally, added a raw reader input framework test reading
"JSON lines" as a demo, and added notes about the implicit type
conversions.
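A rough sketch of the "JSON lines" demo idea, assuming a file with one JSON
object per line is streamed through the input framework's raw reader and each
line is handed to from_json() (file name, record types, and event name are
illustrative):

    type Entry: record {
        name: string;
        value: count;
    };

    type Line: record {
        s: string;
    };

    event json_line(description: Input::EventDescription, tpe: Input::Event,
                    s: string)
        {
        # Each raw line is expected to hold one JSON object.
        print from_json(s, Entry);
        }

    event zeek_init()
        {
        Input::add_event([$source="entries.jsonl",
                          $reader=Input::READER_RAW,
                          $mode=Input::STREAM, $name="json_lines",
                          $fields=Line, $ev=json_line, $want_record=F]);
        }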
When multiple loggers are configured in a Supervisor-controlled cluster
configuration, encode extra information into the rotated filename to
identify which logger produced the log.
This is similar to the approach taken for ZeekControl, re-using the
log_suffix terminology, but as there's only a single zeek-archiver
process, no postprocessors, and no other side channel for additional
information, we encode the extra metadata into the filename.
zeek-archiver is extended to recognize the special metadata part of the
filename.
This also solves the issue that multiple loggers in a supervisor setup
overwrite each other's log files within a single log-queue directory.