This allows selecting the Upgrade analyzer based on the tuple
of [Upgrade, Content-Type], motivated by the Docker API trace in which
the HTTP response sets a very specific Content-Type.
Closes #4068
* origin/topic/johanna/sqlite-pragmas:
Options for SQLite log writer, eliminate duplicate definitions
Test synchronous/journal mode options for SQLite log writer
Added default options for synchronous and journal mode
Support for synchronous and journal_mode
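As a usage illustration, a sketch of tuning the new pragmas; the option and
enum value names below are assumptions inferred from the commit subjects and
should be checked against the SQLite writer's script-level exports:

    # Assumed names; verify against the shipped LogSQLite module.
    redef LogSQLite::synchronous = LogSQLite::SQLITE_SYNCHRONOUS_NORMAL;
    redef LogSQLite::journal_mode = LogSQLite::SQLITE_JOURNAL_MODE_WAL;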
* origin/topic/awelzel/pluggable-cluster-backends-part1:
btest: Test Broker::make_event() together with Cluster::publish_hrw()
btest: Add cluster dir, minimal test for enum value
broker: Add shim plugin adding a backend component
zeek-setup: Instantiate backend::manager
cluster: Add to src/CMakeLists.txt
cluster: Add Components and ComponentManager for new components
cluster/Backend: Interface for cluster backends
cluster/Serializer: Interface for event and log serializers
logging: Introduce logging/Types.h
SerialTypes/Field: Allow default construction and add move constructor
DebugLogger: Add cluster debugging stream
plugin: Add component enums for pluggable cluster backends
broker: Pass frame to MakeEvent()
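For reference, a minimal sketch of the combination exercised by the
Broker::make_event()/Cluster::publish_hrw() btest listed above; the event name
and key are made up, and a cluster with a proxy pool is assumed:

    # Pre-build an event, then distribute it to one proxy via HRW hashing.
    global process_key: event(key: string);

    event zeek_init()
        {
        local e = Broker::make_event(process_key, "example-key");
        Cluster::publish_hrw(Cluster::proxy_pool, "example-key", e);
        }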
@Sheco reported that standalone epoch processing may miss scheduled
events when the final sumstat epoch runs before them. For example, this easily
happens when attempting to make sumstat observations within connection_state_remove().
Delay final epoch processing to zeek_done() instead.
This doesn't address the clustered version - that would need something
more elaborate and potentially a mechanism to delay the shutdown of
other cluster nodes until sumstat processing has completed.
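For illustration, a minimal standalone sketch of the pattern that triggers this,
with observations fed from connection_state_remove(); the stream and sumstat
names are made up:

    @load base/frameworks/sumstats

    event zeek_init()
        {
        local r = SumStats::Reducer($stream="conns", $apply=set(SumStats::SUM));
        SumStats::create([$name="conns-by-host", $epoch=10min, $reducers=set(r),
                          $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                              {
                              print fmt("%s: %.0f connections", key$host, result["conns"]$sum);
                              }]);
        }

    event connection_state_remove(c: connection)
        {
        # Without the fix, observations made here near the end of a pcap read
        # could be missed if the final epoch ran first.
        SumStats::observe("conns", SumStats::Key($host=c$id$orig_h),
                          SumStats::Observation($num=1));
        }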
This commit fixes three issues with Zeek's Modbus message logging:
1 - Some exception responses (e.g., READ_COILS_EXCEPTION) are logged
twice: once without and once with the exception message.
2 - Some exception responses (e.g., PROGRAM_484_EXCEPTION) are not
logged.
3 - Some known but reserved function codes (e.g., PROGRAM_UNITY) are
logged as unk-xxx (e.g., unk-90), even though their known names
could be logged.
To address these inconsistencies, the modbus parser has been updated
to parse all exception responses (i.e., all responses where the MSB
of the function code is set) using the already defined Exception
message.
Also, the Modbus main.zeek script has been updated to consistently
delegate the logging of exception responses to the specialized
modbus_exception event, rather than logging some exception responses
in the modbus_message event and others in the modbus_exception event.
Finally, the main.zeek script has been updated to make sure that
for every known function code the corresponding exception code is
also present, and the enumeration of known function codes in
consts.zeek has been expanded.
Closes #3984
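For example, with this change a handler like the following sketch sees every
exception response exactly once; the print is illustrative only and assumes the
ModbusHeaders record's function_code field:

    event modbus_exception(c: connection, headers: ModbusHeaders, code: count)
        {
        print fmt("Modbus exception on %s: function=%d, exception code=%d",
                  c$uid, headers$function_code, code);
        }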
* topic/christian/telemetry-make-bifs-primary:
Telemetry framework: move BIFs to the primary-bif stage
Minor comment tweaks for init-frameworks-and-bifs.zeek
This stops invoking Telemetry::sync() via a scheduled event and instead
only invokes it on demand. This makes metric collection independent of
network time and lazier, too.
With Prometheus scrape requests being processed on Zeek's main thread
now, we can safely invoke the script layer Telemetry::sync() hook.
Closes #3947
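A minimal sketch of a script-level metric updated from the hook, whose body now
only runs when metrics are actually collected; the gauge name and value are
made up:

    global pending_gauge = Telemetry::register_gauge_family([
        $prefix="zeek",
        $name="example_pending_items",
        $unit="",
        $help_text="Number of pending items at collection time"]);

    hook Telemetry::sync()
        {
        # Invoked on demand (e.g., for a Prometheus scrape), not on a timer.
        Telemetry::gauge_set(Telemetry::gauge_with(pending_gauge), 42.0);
        }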
This moves the Telemetry framework's BIF-defined functionality from the
secondary-BIFs stage to the primary one. That is, this functionality is now
available from the end of init-bare.zeek, not only after the end of
init-frameworks-and-bifs.zeek.
This allows us to use script-layer telemetry in Zeek's own code that gets
pulled in during init-frameworks-and-bifs.
This change splits up the BIF features into functions, constants, and types,
because that's the granularity most workable in Func.cc and NetVar. It also now
defines the Telemetry::MetricsType enum once, rather than redundantly in both the
BIFs and the script layer.
Due to subtle load ordering issues between the telemetry and cluster frameworks,
this pushes the redef stage of Telemetry::metrics_port and address into
base/frameworks/telemetry/options.zeek, which is loaded sufficiently late in
init-frameworks-and-bifs.zeek to sidestep those issues. (When not doing this,
the effect is that the redef in telemetry/main.zeek doesn't yet find the
cluster-provided values, and Zeek does not end up listening on these ports.)
The need to add basic Zeek headers in script_opt/ZAM/ZBody.cc as a side-effect
of this is curious, but looks harmless.
Also includes baseline updates for the usual btests and adds a few doc strings.
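The user-facing knobs stay the same; for example, a standalone setup can still
export metrics with a redef like the following sketch (the port and address
values are arbitrary examples):

    redef Telemetry::metrics_port = 9911/tcp;
    redef Telemetry::metrics_address = "127.0.0.1";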
Log flushing is currently triggered based on the threading heartbeat timer
of WriterBackends and the hard-coded WRITE_BUFFER_SIZE 1000.
This change introduces a separate timer that is managed by the logger
manager instead of piggy-backing on the heartbeat timer, as well as a
const &redef for the buffer size.
This allows modifying the log flush frequency and batch size independently
of the threading heartbeat interval. Later, this will allow reusing the
buffering and flushing logic of writer frontends for non-Broker cluster
backends, too.
One change here is that even frontends that do not have a backend will
be flushed regularly. This is wanted for non-Broker backends and should be
very cheap. Possibly, Broker can piggyback on this timer down the road, too,
rather than using its own script-level timer (see Broker::log_flush()).
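A sketch of the intended tuning follows; the option names below are assumptions
based on this change and should be checked against the logging framework's
exports:

    # Assumed names for the new knobs.
    redef Log::flush_interval = 5sec;     # timer now owned by the logger manager
    redef Log::write_buffer_size = 2000;  # replaces the hard-coded WRITE_BUFFER_SIZE of 1000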
The cmds list may grow unbounded when the POP3 analyzer enters
multiLine mode after seeing `AUTH` in a Redis connection but never sees
a `.` terminator. This can easily be provoked by the Redis ping
command.
This adds two heuristics: 1) Forcefully process the oldest commands in
the cmds list and cap it at max_pending_commands. 2) Start raising
analyzer violations once the client has sent more than
max_unknown_client_commands unknown commands (default 10).
Closes #3936
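A sketch of adjusting the two new thresholds, assuming they are exposed as
redef-able constants in the POP3 module (the names come from the description
above; the namespace is an assumption):

    redef POP3::max_pending_commands = 20;        # cap on the cmds list
    redef POP3::max_unknown_client_commands = 5;  # violations past this many unknown commands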
Remove the overhead of unconditionally calling remove_teredo_connection()
for *every* connection by installing a connection removal hook only
when state was allocated.
This adds a protocol parser for the PostgreSQL protocol and a new
postgresql.log similar to the existing mysql.log.
This should be considered preliminary; hopefully, with feedback from the
community during 7.1 and 7.2, we can improve on the events and logs.
Even if most real-world PostgreSQL communication is encrypted, this
will minimally allow monitoring the SSLRequest and handing off further
analysis to the SSL analyzer.
This originates from github.com/awelzel/spicy-postgresql, with lots of
polishing happening in the past two days.
If the password contains a colon, the current implementation would only
log the part before the first colon (e.g., the password
`password:password` would be logged as `password`).
A test has been added to confirm the expected behaviour.
It turns out that, probably for a long time, we have reported an
incorrect version when parsing an SSLv2 client hello. We always reported
this as SSLv2, no matter which version the client hello actually
contained.
This bug probably went unnoticed for a long time, as SSLv2 is
essentially unused nowadays, and as this field does not show up in the
default logs.
This was found due to a baseline difference when writing the Spicy SSL
analyzer.
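The corrected value surfaces in the version parameter of ssl_client_hello; a
minimal sketch for inspecting it, using the existing SSL::version_strings
mapping:

    event ssl_client_hello(c: connection, version: count, record_version: count,
                           possible_ts: time, client_random: string, session_id: string,
                           ciphers: index_vec, comp_methods: index_vec)
        {
        if ( version in SSL::version_strings )
            print fmt("%s advertises %s", c$uid, SSL::version_strings[version]);
        }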
This avoids the earlier problem of not tracking ports correctly in
scriptland, while still supporting `port` in EVT files and `%port` in
Spicy files.
As it turns out, we are already following the same approach for file
analyzers' MIME types, so I'm applying the same pattern: it's one
event per port, without further customization points. That leaves the
patch pretty small after all while fixing the original issue.
* origin/topic/johanna/ssl-history-also-for-sslv2-not-only-for-things-that-use-the-more-modern-handshake:
Make ssl_history work for SSLv2 handshakes/connections