This introduces an option, DPD::track_removed_services_in_connection.
It adds failed services to the services column, prefixed with a
"-".
Alternatively, this commit also adds
policy/protocols/conn/failed-services.zeek, which provides the same
information in a new column in conn.log.
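For illustration, enabling either variant from local.zeek could look
like this (a minimal sketch, using only the names mentioned above):

    # Track failed services directly in conn.log's services column ...
    redef DPD::track_removed_services_in_connection = T;

    # ... or load the policy script that adds a separate column instead.
    @load policy/protocols/conn/failed-services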
This commit revamps the handling of analyzer violations that happen
before an analyzer confirms the protocol.
The current state is that an analyzer that has not been confirmed is
disabled after 5 violations; if it has been confirmed, it is disabled
after a single violation.
The reason for this is a historic mistake. In Zeek up to version 1.5,
analyzers were unconditionally removed when they raised the first
protocol violation.
When this script was ported to the new layout for Zeek 2.0 in
b4b990cfb5, a logic error was introduced
that caused analyzers to no longer be disabled if they were not
confirmed.
This was the state for ~8 years, until the DPD::max_violations option
was added, which instated the current approach of disabling unconfirmed
analyzers after 5 violations. Sadly, there is not much discussion about
this change - from my hazy memory, I think this was discovered during
performance tests and the new behavior was added without checking the
history of previous changes.
This commit reinstates the originally intended behavior of DPD. When an
analyzer that has not been confirmed raises a protocol violation, it is
immediately removed from the connection. This also makes a lot of sense
- this allows the analyzer to be in a "tasting" phase at the beginning
of the connection, and to error out quickly once it realizes that it was
attached to a connection not containing the desired protocol.
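For illustration, a rough, hypothetical sketch of that script-level
logic (not the actual DPD framework code, which differs in detail):

    # Hypothetical sketch: remove an analyzer on its first violation
    # unless it already confirmed its protocol on this connection.
    event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo)
        {
        if ( ! info?$c || ! info?$aid )
            return;

        # Confirmed analyzers end up in c$service; those keep the
        # regular single-violation handling.
        if ( Analyzer::name(atype) in info$c$service )
            return;

        # Unconfirmed analyzer: disable it immediately instead of
        # counting violations against a maximum.
        disable_analyzer(info$c$id, info$aid, F);
        }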
This commit also removes the DPD::max_violations option, as it no longer
serves any purpose. (In practice, the option remains, marked
&deprecated, but it is no longer used for anything.)
There are relatively few test-baseline changes due to this; they are
mostly triggered by the removal of the data structure and by fewer
analyzer errors being thrown, as unconfirmed analyzers are disabled
after the first error.
* origin/topic/johanna/gh-4061:
Update BiF-tracking, add is_event_handled
Address review comments and small updates for DNS warnings
Raise warnings for DNS events that are not raised due to dns_skip_all_addl
By default, dns_skip_all_addl is set to true. This causes several
events to not be raised. This change emits warnings when a user defines
event handlers for events that will not be raised.
Furthermore, it adds notes about this behavior to the documentation. We
also introduce a new BIF, `is_event_handled`, which checks if an event
is handled.
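For illustration (not part of this commit), usage could look roughly
like this, assuming the BIF takes the event name as a string and
returns a bool:

    # Sketch only; assumes is_event_handled(name: string): bool.
    event zeek_init()
        {
        if ( dns_skip_all_addl && is_event_handled("dns_EDNS_addl") )
            Reporter::warning("dns_EDNS_addl is handled but will not be raised while dns_skip_all_addl=T");
        }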
Fixes GH-4061
* origin/topic/johanna/sqlite-pragmas:
Options for SQLite log writer, eliminate duplicate definitions
Test synchronous/journal mode options for SQLite log writer
Added default options for synchronous and journal mode
Support for synchronous and journal_mode
* origin/topic/awelzel/pluggable-cluster-backends-part1:
btest: Test Broker::make_event() together with Cluster::publish_hrw()
btest: Add cluster dir, minimal test for enum value
broker: Add shim plugin adding a backend component
zeek-setup: Instantiate backend::manager
cluster: Add to src/CMakeLists.txt
cluster: Add Components and ComponentManager for new components
cluster/Backend: Interface for cluster backends
cluster/Serializer: Interface for event and log serializers
logging: Introduce logging/Types.h
SerialTypes/Field: Allow default construction and add move constructor
DebugLogger: Add cluster debugging stream
plugin: Add component enums for pluggable cluster backends
broker: Pass frame to MakeEvent()
@Sheco reported that standalone epoch processing may exclude
observations from scheduled events when the final sumstat epoch runs
before them. For example, this easily happens when attempting to do
sumstat observations within connection_state_remove().
Delay final epoch processing to zeek_done() instead.
This doesn't deal with the clustered version - this would need something
more elaborate and potentially a mechanism to delay the shutdown of
other cluster nodes until/after sumstat processing completed.
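For illustration, a minimal sketch of the affected pattern (stream and
key names are purely illustrative):

    @load base/frameworks/sumstats

    event zeek_init()
        {
        local r = SumStats::Reducer($stream="conn.seen", $apply=set(SumStats::SUM));
        SumStats::create([$name="conns-per-host",
                          $epoch=1hr,
                          $reducers=set(r),
                          $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                              {
                              print key$host, result["conn.seen"]$sum;
                              }]);
        }

    # Observations made this late previously risked falling after the
    # final standalone epoch; the final epoch now runs in zeek_done().
    event connection_state_remove(c: connection)
        {
        SumStats::observe("conn.seen", [$host=c$id$orig_h], [$num=1]);
        }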
This wasn't possible before #3028 was fixed, but now it's safe to set
the value in new_connection() and allow other users access to the
field much earlier. We do not have to deal with connection_flipped()
because the community-id hash is symmetric.
This admittedly is a quite esoteric combination of protocols. But since
we do support them correctly, it seems nice to have a slightly more
complete test case that covers this.
remove instance of plus sign to account for real plus in sql
account for spaces encoding to plus signs in sqli regex detection
add test cases for sqli space to plus
account for spaces encoding to plus signs in sqli regex detection
forgot semicolon
account for spaces encoding to plus signs in sqli regex detection
Adding a metric for the network time value itself should make it
possible to observe it stopping or growing slowly as compared to
realtime when Zeek isn't able to keep up.
Also, modify the telemetry/log.zeek test to include misc/stats and
log at a higher frequency with a more interesting pcap.
This stops invoking Telemetry::sync() via a scheduled event and instead
only invokes it on demand. This makes metric collection independent of
network time and lazier, too.
With Prometheus scrape requests now being processed on Zeek's main
thread, we can safely invoke the script-layer Telemetry::sync() hook.
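For scripts that maintain their own metrics, this means updating them
from the Telemetry::sync() hook rather than from a recurring scheduled
event; a minimal sketch:

    # Sketch: Telemetry::sync() now runs on demand (e.g. right before a
    # Prometheus scrape or a Telemetry::collect_metrics() call), so
    # update script-maintained gauges/counters here instead of on a timer.
    hook Telemetry::sync()
        {
        # ... update metric instruments here ...
        }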
Closes #3947
This commit prevents most non-Modbus TCP traffic on port 502 from being
reported as Modbus in conn.log as well as in modbus.log.
To do so, we have introduced two &enforce checks in the Modbus
protocol definition that check that specific fields of the
(supposedly) Modbus header are compatible with the values specified in
the spec.
To guard against regressions, this commit also introduces a new btest.
Closes #3962
This isn't a straightforward fix, unfortunately. The existing GetLine()
implementation didn't deal well with input that's incrementally produced
where individually read chunks wouldn't end with the separator.
The prior implementation increased the buffer each time it failed to find
a separator in the current buffer, but then also ended up not searching the
full new buffer size for the terminator, doing that endlessly.
This change reworks the Raw reader to rely only on bufpos for reading
and searching purposes, and to skip reallocating the buffer if it
wasn't actually exhausted.
Closes #3957
Processing out-of-order commands or finishing commands based on invalid
server responses resulted in inconsistent analyzer state, potentially
triggering null pointer dereferences for crafted traffic.
This commit reworks cf9fe91705 such that
excess pending commands are simply discarded, rather than any attempt
being made to process them. Further, invalid server responses do not
result in command completion anymore.
Test PCAP was crafted based on traffic produced by the OSS-Fuzz reproducer.
Closes #215
It seems like other similar tests most likely get by because they have
more "stuff" before they call `terminate()`. But, to be safe, just
removing the "received termination signal" line seems like the best
approach.
Invalid lines in a file were the one case that would not suppress future
warnings. Make this suppress warnings too, but clear the suppression
if a field in between does not error.
Fixes #3692
The cmds list may grow unbounded due to the POP3 analyzer being in
multiLine mode after seeing `AUTH` in a Redis connection, but never
a `.` terminator. This can easily be provoked by the Redis ping
command.
This adds two heuristics: 1) Forcefully process the oldest commands in
the cmds list and cap it at max_pending_commands. 2) Start raising
analyzer violations if the client has been using more than
max_unknown_client_commands commands (default 10).
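For illustration, a hedged tuning sketch (treating both thresholds as
redef-able constants in a POP3 module is an assumption):

    # Option names taken from the description above; module prefix assumed.
    redef POP3::max_pending_commands = 20;
    redef POP3::max_unknown_client_commands = 5;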
Closes #3936
* origin/master: (27 commits)
Update doc submodule [nomail] [skip ci]
btest/ldap: Add regression test for #3919
postgresql: Simplify SSL buffering and forwarding
postgresql: Initial parser implementation
testing/external: Update private baselines
analyzer/syslog: Reformat with spicy-format
analyzer/finger: Reformat with spicy-format
scripts/spicy: Reformat with spicy-format
pre-commit: Add spicy-format
Check for netbios to avoid reporting extra bad DNS opcodes
Add weird for unhandled opcodes in DNS analyzer
Bump zeek-aux for zeek/zeek-aux#57
Remove pre-commit exclusions for clang-format
Bump clang-format
Bump auxil/spicy to latest development snapshot
RunState: Drop broker_mgr->Active() usage
script_opt/ZAM/IterInfo.h: Add missing Dict.h dependency
script_opt/ZAM: ZBody.h / Support.h: Cleanup includes, use forward declarations
script_opt/ZAM/Profile: Remove Zeek header includes
script_opt: Extend Support.h to break include dependencies
...
This adds a protocol parser for the PostgreSQL protocol and a new
postgresql.log similar to the existing mysql.log.
This should be considered preliminary; hopefully during 7.1 and 7.2,
with feedback from the community, we can improve on the events and logs.
Even if most real-world PostgreSQL communication is encrypted, this
will at a minimum allow monitoring of the SSLRequest and handing off
further analysis to the SSL analyzer.
This originates from github.com/awelzel/spicy-postgresql, with lots of
polishing happening in the past two days.