Issue #3028 tracks how flipping a connection resets the connection's value,
including any state set during new_connection(). For the time being, switch
the community-id functionality back to the original connection_state_remove()
approach to avoid missing community_ids on flipped connections.
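A rough sketch of the connection_state_remove() approach referenced here; the
community_id field on Conn::Info follows the imported policy script and is
shown purely for illustration:
event connection_state_remove(c: connection)
    {
    # Illustrative only: compute the hash once the connection is removed.
    if ( c?$conn )
        c$conn$community_id = community_id_v1(c$id);
    }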
* amazing-pp/topic/fupeng/from_json_bif:
Implement from_json bif
Minor updates during merge: moved ValFromJSON into zeek::detail for the
time being, removed gotos, normalized some error messages to lower case,
minimally extended the tests, added a raw reader input framework test that
reads "JSON lines" as a demo, and added notes about the implicit type
conversions.
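A minimal usage sketch, assuming the bif takes a JSON string plus a type value
and returns the parsed result (the exact signature, error handling, and
implicit conversions may differ):
type Endpoint: record {
    host: addr;
    num_conns: count;
};

event zeek_init()
    {
    # Relies on the implicit type conversions noted above.
    print from_json("{\"host\": \"1.2.3.4\", \"num_conns\": 42}", Endpoint);
    }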
When multiple loggers are configured in a Supervisor-controlled cluster
configuration, encode extra information into the rotated filename to
identify which logger produced the log.
This is similar to the approach taken for ZeekControl, re-using the
log_suffix terminology, but as there is only a single zeek-archiver
process, no postprocessors, and no other side channel for additional
information, we encode the extra metadata into the filename. zeek-archiver
is extended to recognize the special metadata part of the filename.
This also solves the issue of multiple loggers in a Supervisor setup
overwriting each other's log files within a single log-queue directory.
* origin/topic/awelzel/smb2-state-handling:
NEWS: Add entry about SMB::max_pending_messages and state discarding
scripts/smb2-main: Reset script-level state upon smb2_discarded_messages_state()
smb2: Limit per-connection read/ioctl/tree state
DTLSv1.3 changes the DTLS record format, introducing a completely new
header - which is a first for DTLS.
We don't currently parse this header completely, as that requires a bit
more state keeping; this will be added in a future revision. It also
has little practical impact for now.
* topic/johanna/no-error-message-durning-tls-or-dtls-protocol-violations:
SSL: failing analyzer handling - address review feedback
SSL: do not try to disable failed analyzer
Also folds in minor feedback from GH-3012
It turns out that we never logged hello retry requests correctly in the
ssl_history field.
Hello retry requests are (in their final version) signaled by a specific
random value in the server random.
This commit fixes this oversight, and hello retry requests are now
correctly logged as such.
Currently, if a TLS/DTLS analyzer fails with a protocol violation, we
will still try to remove the analyzer later, which results in the
following error message:
error: connection does not have analyzer specified to disable
Now we no longer try to remove the analyzer after a violation has occurred.
This is similar to what the external corelight/zeek-smb-clear-state script
does, but leverages the smb2_discarded_messages_state() event instead of
regularly checking on the state of SMB connections.
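A handler sketch, assuming the event delivers the connection plus a string
naming the discarded state:
event smb2_discarded_messages_state(c: connection, state: string)
    {
    # Hypothetical reaction: just report which state was dropped.
    Reporter::info(fmt("SMB2 %s state discarded for %s", state, c$uid));
    }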
The pcap was created using the dperson/samba container image and mounting
a share with Linux's CIFS filesystem, then copying the content of a
directory with 100 files. The test uses a BPF filter to imitate mostly
"half-duplex" traffic.
Users on Slack observed memory growth in an environment with a lot of
SMB traffic. jeprof memory profiling pointed at the offset and fid maps
kept per-connection for smb2 read requests.
These maps can grow unbounded if responses are seen before requests, there are
packet drops, only one side of the connection is visible, or we fail to parse
responses properly.
Forcefully wipe out these maps when they grow too large and raise
smb2_discarded_messages_state() to notify script land about this.
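The threshold is governed by the new SMB::max_pending_messages option
referenced in the merge above; environments with lots of one-sided SMB
traffic can tune it, e.g. (value purely illustrative):
redef SMB::max_pending_messages = 1000;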
For low-level packet analysis use cases, these fields are currently not
accessible from script-land via raw_packet() and similar events. They are
available on the icmp_context record, but not on the actual ip4_hdr record,
so add them there.
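An illustrative sketch of reading these off the IPv4 header in script-land;
the exact field names added to ip4_hdr are assumptions here:
event raw_packet(p: raw_pkt_hdr)
    {
    # Hypothetical field access; names may differ in the actual record.
    if ( p?$ip && p$ip$MF )
        print "fragment from", p$ip$src;
    }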
* origin/topic/awelzel/zeekctl-multiple-loggers:
NEWS: Add entry for ZeekControl and multi-loggers
Bump zeekctl to multi-logger version
logging: Support rotation_postprocessor_command_env
This also fixes the GRE analyzer to forward into the IEEE 802.11 analyzer
if it encounters Aruba packets with the proper protocol types. This way
the QoS header can be handled correctly.
* 'topic/amazingpp/irc-fuid-missing' of github.com:AmazingPP/zeek:
Add irc_dcc_send_ack event and fix missing fields
I've moved IRC_Data back into the zeek::analyzer::file namespace, but the
declaration did move from protocol/file/File.h to protocol/irc/IRC.h.
If someone actually customized IRC_Data and didn't already include
protocol/irc/IRC.h for other reasons, I'll be surprised (and would just
suggest updating the include).
* origin/topic/awelzel/add-community-id:
testing/external: Bump hashes for community_id addition
NEWS: Add entry for Community ID
policy: Import zeek-community-id scripts into protocols/conn frameworks/notice
Add community_id_v1() based on corelight/zeek-community-id
"Community ID" has become an established flow hash for connection correlation
across different monitoring and storage systems. Other NSMs have had native
and built-in support for Community ID since late 2018. And even though the
roots of "Community ID" are very close to Zeek, Zeek itself has never provided
out-of-the-box support and instead required users to install an external plugin.
While we try to make that installation as easy as possible, an external plugin
always sets the bar higher for an initial setup and can be intimidating.
It also requires rebuilding the plugin during upgrades. Nothing
overly complicated, but somewhat unnecessary for such popular functionality.
This isn't a 1:1 import: the script-level options have become bif
parameters and the "verbose" functionality has been removed. Further,
instead of a `connection` record, the new bif works with `conn_id`,
allowing computation of the hash with little effort on the command line:
$ zeek -e 'print community_id_v1([$orig_h=1.2.3.4, $orig_p=1024/tcp, $resp_h=5.6.7.8, $resp_p=80/tcp])'
1:RcCrCS5fwYUeIzgDDx64EN3+okU
Reference: https://github.com/corelight/zeek-community-id/
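To populate conn.log and notice.log with the hash, the imported policy
scripts can be loaded; the paths follow the merge entries above and may
differ slightly:
@load protocols/conn/community-id-logging
@load frameworks/notice/community-id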
This set contains the topics needed to reach all cluster nodes. Due to
Broker's forwarding mechanism, we cannot define a single broadcast topic,
as it would create routing loops.
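A broadcast then iterates over the set; a sketch, assuming the set is exposed
as Cluster::broadcast_topics and using a made-up event for illustration:
global my_event: event(n: count);

function broadcast_my_event(n: count)
    {
    # Publish the (made-up) event once per broadcast topic.
    for ( topic in Cluster::broadcast_topics )
        Broker::publish(topic, Broker::make_event(my_event, n));
    }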
This new table provides a mechanism to add environment variables to the
postprocessor execution. The use case comes from ZeekControl, which injects
a suffix to be used when running with multiple loggers.
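For example, assuming the table lives in the Log module and is keyed by the
environment variable name (the variable and value shown are made up):
redef Log::rotation_postprocessor_command_env += {
    ["ZEEK_LOG_SUFFIX"] = "logger-1"
};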
While working on a rotation format function, I ran into Zeek crashing when
the function did not return a value; fix this and recover the same way as
for other scripting errors.
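Such a function should therefore always return a value; a minimal sketch
using the logging framework's record types:
redef Log::rotation_format_func = function(ri: Log::RotationFmtInfo): Log::RotationPath
    {
    # Always return a RotationPath, even in the simple default-like case.
    return Log::RotationPath($file_basename=fmt("%s-%s", ri$path,
        strftime("%Y-%m-%d-%H-%M-%S", ri$open)));
    };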
* security/topic/awelzel/152-smtp-validate-mail-transactions:
smtp: Validate mail transaction and disable SMTP analyzer if excessive
generic-analyzer-fuzzer: Detect disable_analyzer() from scripts
* security/topic/awelzel/148-ftp-skip-get-pending-commands-multi-line-response:
ftp/main: Special case for intermediate reply lines
ftp/main: Skip get_pending_command() for intermediate reply lines
The medium.trace in the private external test suite contains one
session/server that violates the multi-line reply protocol and yet
happened to work out fairly well because we previously looked up the
pending commands unconditionally.
Continue to match up reply lines that "look like they contain status codes"
even if cont_resp = T. This still improves runtime for the OSS-Fuzz-generated
test case and keeps the external baselines valid.
The affected session can be extracted as follows:
zcat Traces/medium.trace.gz | tcpdump -r - 'port 1491 and port 21'
We could push this into the analyzer, too; at a minimum, the RFC says:
> If an intermediary line begins with a 3-digit number, the Server
> must pad the front to avoid confusion.
* origin/topic/vern/ZAM-Apr23-maint:
minor ZAM BTest baseline updates
fixed type mismatch for ssl_certificate_request event
skip ZAM optimization of invalid scripts
extended script validation to be call-able on a per-function basis
Increasing this value 10x significantly lowered CPU usage on a Myricom-based
deployment, with reportedly no adverse side effects.
After reviewing the Zeek 3 IO loop, my hunch is that previously, when
no packets were available, we'd sleep 20usec on every loop iteration after
calling ->Process() on the packet source. With current master, ->Process()
is called 10 times on a packet source before going to sleep just once
for 20usec. This likely explains the increased CPU usage reported.
It's probably too risky to increase the current value, so introduce
a const &redef value for advanced users to tweak it. A middle ground
might be to lower ``io_poll_interval_live`` to 5 and increase the new
``Pcap::non_fd_timeout`` setting to 100usec.
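That middle-ground tuning would look like this (values as discussed above,
not a recommendation):
redef io_poll_interval_live = 5;
redef Pcap::non_fd_timeout = 100 usec;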
While this doesn't really fix #2296, we now have enough knobs for tweaking.
Closes #2296.
An invalid mail transaction is detected as
* a RCPT TO command without a preceding MAIL FROM
* a DATA command without a preceding RCPT TO
and is logged as a weird.
The testing pcap for invalid mail transactions was produced with a Python
script against a local exim4 configured to accept more errors and unknown
commands than its default of 3:
# exim4.conf.template
smtp_max_synprot_errors = 100
smtp_max_unknown_commands = 100
See also: https://www.rfc-editor.org/rfc/rfc5321#section-3.3
Intermediate lines of multi-line replies usually do not contain valid status
codes (even if servers may opt to include them). Their content may be anything
and is likely unrelated to the original command. There's little reason for us
to try to match them with a corresponding command.
OSS-Fuzz generated a large command reply with very many intermediate lines
which caused long processing times due to matching every line with all
currently pending commands.
This is a DoS vector against Zeek. The new ipv6-multiline-reply.trace and
ipv6-retr-samba.trace files have been extracted from the external ipv6.trace.
Add a central place for the decision of when it's okay to update network time
to the current (wallclock) time. It checks for pseudo_realtime, packet source
existence, and packet source idleness.
A new const &redef allows completely disabling the forwarding of network time.
Users probably should not change it, but it's useful for testing and
experimentation without needing to recompile.
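For testing, the forwarding can be switched off like so, assuming the new
option is named allow_network_time_forward:
redef allow_network_time_forward = F;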
Processing 100 packets without checking an FD-based IO source can
actually mean that FD-based sources are never checked during a read
of a very small pcap...