This moves c$service_violation to the deprecated-dpd-log policy script.
This is the only script in the distribution that uses the field, and it
is unlikely to be used externally. It is also responsible for a
significant amount of memory use by itself.
This also restores populating the field, which had been broken in
GH-4362.
Specifically, set a MIME part's parent_id to the rfc822_msg_fuid if it
is set, and take the current rfc822_msg_fuid into account in describe_file()
to avoid fuid collisions between the top-level RFC822 message and the first
MIME part.
* origin/topic/awelzel/4605-conn-id-context:
NEWS: Adapt for conn_id$ctx introduction
conn_key/fivetuple: Drop support for non conn_id records
Conn: Move conn_id init and flip to IPBasedConnKey
IPBasedConnKey: Add GetTransportProto() helper
input/Manager: Ignore empty record types
external: Bump commit hashes for external suites
ip/vlan_fivetuple: Populate nested conn_id_context, not conn_id
ConnKey: Extend DoPopulateConnIdVal() with ctx
btest: Update tests and baselines after adding ctx to conn_id
init-bare: Add conn_id_ctx to conn_id
This nested record can be used to discriminate orig_h or resp_h being
observed in different "contexts". A context can be based on VLAN tags,
but any custom ConnKey implementation should populate the ctx field,
making it possible to write context-aware Zeek scripts without needing
to know what the context really is.
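For example, a script can key per-context state on the nested record without
interpreting its contents. A minimal, hypothetical sketch (it only assumes the
field is reachable as c$id$ctx, per the commits above; conns_per_ctx is a
made-up name):

    global conns_per_ctx: table[string] of count;

    event new_connection(c: connection)
        {
        # Use the stringified context as an opaque key; the script does not
        # need to know which fields the active ConnKey populated.
        local ctx_key = cat(c$id$ctx);
        if ( ctx_key !in conns_per_ctx )
            conns_per_ctx[ctx_key] = 0;
        ++conns_per_ctx[ctx_key];
        }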
Closes #4504
Messages are not typical responses, so they need special handling. This
differs between RESP2 and RESP3, so this is the first instance where the
script layer needs to tell the two apart.
The response to BDAT LAST was never recognized, resulting in the
BDAT LAST commands not being logged in a timely fashion and receiving
the wrong status.
This likely doesn't handle complex pipelining scenarios, but it fixes
smtp_reply() not handling responses to simple BDAT commands.
Thanks @cccs-jsjm for the report!
Closes #4522
This field is used internally to track which analyzers already had a
violation; this is mostly used to prevent duplicate logging.
In the past, c$service_violation was used for a similar purpose -
however, it has slightly different semantics. Where c$failed_analyzers
tracks analyzers that were removed due to a violation,
c$service_violation tracks violations - and doesn't care whether an
analyzer was actually removed because of one.
The main part of this commit is changes to tests. A lot of the tests
that previously relied on analyzer.log or dpd.log now use the new
analyzer-failed.log.
I verified all the changes and, as far as I can tell, everything
behaves as it should. This includes the external test baselines.
This change also enables logging of file and packet analyzers to
analyzer_failed.log and fixes some small behavior issues.
The analyzer_failed event is no longer raised when the removal of an
analyzer is vetoed.
If an analyzer is no longer active when an analyzer violation is raised,
the analyzer_failed event is currently raised; this can, e.g., happen
when an analyzer error occurs at the very end of the connection. This
makes the behavior more similar to what happened in the past, and also
intuitively seems to make sense.
A bug introduced in the failed service logging was fixed.
- Recomputes checksums for pcaps to keep them clean
- Removes some tests that had big pcaps or weren't necessary
- Cleans up script names and other minor points
- Comments out Spicy code that now causes a build failure, with a TODO to
  uncomment it
This commit revamps the handling of analyzer violations that happen
before an analyzer confirms the protocol.
The current state is that an analyzer is disabled after 5 violations, if
it has not been confirmed. If it has been confirmed, it is disabled
after a single violation.
The reason for this is a historic mistake. In Zeek up to version 1.5,
analyzers were unconditionally removed when they raised their first
protocol violation.
When this script was ported to the new layout for Zeek 2.0 in
b4b990cfb5, a logic error was introduced
that caused analyzers to no longer be disabled if they were not
confirmed.
This remained the state for ~8 years, until the DPD::max_violations option
was added, which instated the current approach of disabling unconfirmed
analyzers after 5 violations. Sadly, there is not much discussion about
this change - from my hazy memory, I think this was discovered during
performance tests and the new behavior was added without looking into
the history of previous changes.
This commit reinstates the originally intended behavior of DPD. When an
analyzer that has not been confirmed raises a protocol violation, it is
immediately removed from the connection. This also makes a lot of sense
- this allows the analyzer to be in a "tasting" phase at the beginning
of the connection, and to error out quickly once it realizes that it was
attached to a connection not containing the desired protocol.
This change also removes the DPD::max_violations option, as it no longer
serves any purpose after this change. (In practice, the option remains
with an &deprecated warning, but it is no longer used for anything).
There are relatively few test-baseline changes due to this; they are
mostly triggered by the removal of the data structure and by fewer
analyzer errors being thrown, as unconfirmed analyzers are now disabled
after the first error.
This adds a protocol parser for the PostgreSQL protocol and a new
postgresql.log similar to the existing mysql.log.
This should be considered preliminary; hopefully, with feedback from the
community during 7.1 and 7.2, we can improve on the events and logs.
Even if most PostgreSQL communication is encrypted in the real world, this
will minimally allow monitoring the SSLRequest and handing off further
analysis to the SSL analyzer.
This originates from github.com/awelzel/spicy-postgresql, with lots of
polishing happening in the past two days.
Seems reasonable given we log the server SCID. Interestingly, the Chromium
examples actually have zero-length (empty) source connection IDs. I wonder
if that's part of their "protocol ossification avoidance" effort.
Don't log them; they are random and arbitrary in the normal case. Users
can do the following to log them if wanted.
redef record WebSocket::Info$client_key += { &log };
redef record WebSocket::Info$server_accept += { &log };
This adds a new WebSocket analyzer that is enabled with the HTTP upgrade
mechanism introduced previously. It is a first implementation in BinPac with
manual chunking of frame payload. Configuration of the analyzer is sketched
via the new websocket_handshake() event and a configuration BiF called
WebSocket::__configure_analyzer(). In short, script land collects WebSocket
related HTTP headers and can forward these to the analyzer to change its
parsing behavior at websocket_handshake() time. For now, however, there's
no actual logic that would change behavior based on agreed upon extensions
exchanged via HTTP headers (e.g. frame compression). WebSocket::Configure()
simply attaches a PIA_TCP analyzer to the WebSocket analyzer for dynamic
protocol detection (or a custom analyzer if set). The added pcaps show this
in action for tunneled ssh, http and https using wstunnel. One test pcap is
Broker's WebSocket traffic from our own test suite, the other is the
Jupyter websocket traffic from the ticket/discussion.
This commit further adds a basic websocket.log that aggregates the
WebSocket-specific (Sec-WebSocket-*) headers into a single log.
Closes #3424
Skimming through the RFC, the previous approach of having containers for most
fields seems unfounded for normal protocol operation. The new weirds could just
as well be considered protocol violations. Outside of duplicated or missed data,
they just shouldn't happen with well-behaved clients and servers.
Additionally, with non-conformant traffic it would be trivial to cause
unbounded state growth and immense log record sizes.
Unfortunately, things have become a bit clunky now.
Closes #3504
Previously, seq was computed as |pending_commands| + 1. This opened the
possibility of overwriting queued commands, as well as logging the same
pending FTP reply multiple times.
For example, when commands 1, 2, 3 are pending and command 1 is dequeued,
an incoming command receives seq 3 and overwrites the already pending
command 3. The second scenario happens when ftp_reply() selects command 3
as pending for logging but is then followed by many ftp_request() events.
This resulted in command 3's response being logged over and over again
for every following ftp_request().
Avoid both scenarios by tracking the command sequence as an absolute counter.
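Conceptually, the fix boils down to the following (an illustrative sketch with
made-up names, not the actual FTP scripts):

    global pending_commands: table[count] of string;
    global command_seq: count = 0;   # absolute counter, never reused

    function queue_command(cmd: string): count
        {
        # |pending_commands| + 1 can collide with an existing entry once
        # earlier commands were dequeued; an ever-increasing counter cannot.
        ++command_seq;
        pending_commands[command_seq] = cmd;
        return command_seq;
        }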
Justin pointed out that the misc/dump-events test shows fields added to
the connection record. Add a new test that prints the connection record
recursively in bare and default mode to cover that use case
specifically.
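A minimal sketch of the idea behind such a test (hypothetical, not the actual
test script):

    event connection_state_remove(c: connection)
        {
        # Printing the record recursively also exposes newly added nested
        # fields in the baseline, in both bare and default mode.
        print c;
        }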