This commit adds an optional event_groups field to the Logging::Stream record
to associate event groups with logging streams.
This can be used to disable all event groups of a logging stream when the
stream is disabled. It does, however, require making an explicit connection
between the logging stream and the involved groups.
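As a rough sketch of what such an association could look like at stream
creation time: the Log::create_stream() call and the Info record below are the
usual logging-framework boilerplate, while the event_groups field and the
group name are illustrative, based only on the description above.

    module Example;

    export {
        redef enum Log::ID += { LOG };

        type Info: record {
            ts:  time   &log;
            msg: string &log;
        };
    }

    event zeek_init()
        {
        # Hypothetical: the new optional event_groups field ties this stream to
        # an event group, so disabling the stream can also disable that group.
        Log::create_stream(Example::LOG,
            [$columns=Info, $path="example", $event_groups=set("Example::my-group")]);
        }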
When a fa_file object is created through the use of Input::add_analysis(),
the fa_file's source is likely not a valid representation of an analyzer's
tag, and Files::describe() should not error but instead return an empty
description.
Add a new Analyzer::is_tag() helper that can be used to pre-check `f$source`.
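A minimal usage sketch: the file_new handler and the print are illustrative
only; the Analyzer::is_tag() name is the helper introduced above.

    event file_new(f: fa_file)
        {
        # Only treat f$source as an analyzer tag if it actually is one.
        if ( Analyzer::is_tag(f$source) )
            print Files::describe(f);
        }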
* When a file is transferred over multiple connections, have
create_file_info() just pick the first one instead of none.
* Do not unconditionally assume cid and cuid are set on a
Notice::FileInfo object.
From Vern in GH-846: This is a conscious decision in the TCP analysis to
consider a connection's "duration" to run up through the end of its
productive (= data can be delivered) lifetime, not extending beyond that. So
once it's closed, packets seen subsequently (until the state-holding for the
connection times out) get processed in terms of updating the associated
history, but not the duration. This can include (unnecessarily) retransmitted
data packets, like in one of the examples above. An advantage of this definition
of "duration" is it allows more accurate computation of connection data rates.
* origin/topic/awelzel/2514-expire-all-timers-special-case:
TimerMgr: Add back max_timer_expires=0 special case
Add btest for expiration of all pending timers.
Commit 58fae22708 removed the max_expire==0
handling from DoAdvance() because it was not obvious what use it had. Jan
later reported that this broke `redef max_timer_expires=0` (#2514).
This commit adds back the special case, re-introducing the `max_timer_expires=0`
behavior and trying to make it fairly explicit that it exists.
This is an adaptation of #2516 that does not add a new option and tries
to avoid global variable accesses down in DoAdvance(), though those
just moved to InitPostScript().
Fixes #2514.
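For reference, the configuration this restores is simply the following; the
interpretation "no per-cycle cap on timer expirations" follows from the
description above.

    # Setting max_timer_expires to 0 removes the per-cycle cap, i.e. all
    # pending expired timers are processed in one advance of the timer manager.
    redef max_timer_expires = 0;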
oss-fuzz produced FTP traffic with a ~550KB long FTP command. Cap the FTP command
length at 100 bytes, log a weird if a command is larger than that, and move
on to the next one. It is likely not actual FTP traffic, but raising an
analyzer violation would give clients an easy way to disable the analyzer
by sending an overly long command.
The added test PCAP was generated using a fake Python socket server/client.
When a negotiate request offers no dialects, but the response contains
an ntlm record which selects a dialect, a script error is triggered.
$ zeek -C -r ./f2b0e.pcap 'DPD::ignore_violations+={ Analyzer::ANALYZER_SMB }'
1668615340.837882 expression error in /home/awelzel/corelight-oss/zeek/scripts/base/protocols/smb/./smb1-main.zeek, line 96: no such index (SMB1::c$smb_state$current_cmd$smb1_offered_dialects[SMB1::response$ntlm$dialect_index])
Script error triggered by fuzzing when testing Tim's all-the-fuzzing branch.
While unusual, analyzer_confirmation() may never be called for the
SSH analyzer even though ssh_auth_attempted is still invoked later, indicating
successful authentication. I haven't checked how that is actually possible,
but it seems prudent to check for the existence of c$ssh$analyzer_id before
referencing it (also in light of runtime enabling/disabling of events).
This was found testing Tim's all-the-fuzzing branch on a large system;
merging this should avoid oss-fuzz telling us about it.
$ zeek -C -r ./e83db.pcap 'DPD::ignore_violations+={ Analyzer::ANALYZER_SSH }'
1668610572.429058 expression error in scripts/base/protocols/ssh/./main.zeek, line 260: field value missing (SSH::c$ssh$analyzer_id)
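A sketch of the guard described above; the surrounding ssh_auth_attempted
handler and the disable_analyzer() call are assumptions about where the field
gets used, the point is only the ?$analyzer_id existence check.

    event ssh_auth_attempted(c: connection, authenticated: bool)
        {
        # Only touch analyzer_id if analyzer_confirmation() actually set it.
        if ( c?$ssh && c$ssh?$analyzer_id )
            disable_analyzer(c$id, c$ssh$analyzer_id);
        }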
The controller now listens on an additional port, defaulting to 2149, for Broker
connections via websockets. Configuration works as for the existing traditional
Broker port (2150), via ZEEK_CONTROLLER_WEBSOCKET_ADDR and
ZEEK_CONTROLLER_WEBSOCKET_PORT environment variables, as well as corresponding
redef'able constants.
To disable the websockets feature, leave ZEEK_CONTROLLER_WEBSOCKET_PORT unset
and redefine Management::Controller::default_port_websocket to 0/unknown.
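For example, in a site config (the constant name is taken from the text above):

    # Disable the controller's websocket listener; combine with leaving
    # ZEEK_CONTROLLER_WEBSOCKET_PORT unset in the environment.
    redef Management::Controller::default_port_websocket = 0/unknown;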
This uses the v3 json as a source for the first time. The test needed
some updating because Google removed a couple more logs - in the future
this should hopefully not be necessary anymore because I think v3
should retain all logs.
In theory this might be neat in 5.1.
This allows enabling/disabling file analyzers through the same interfaces
as packet and protocol analyzers; specifically, Analyzer::disable_analyzer()
could be interesting.
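A hedged sketch of that use, assuming Files::ANALYZER_PE is an available file
analyzer tag on the running Zeek:

    event zeek_init()
        {
        # File analyzer tags can now be passed to the same bif as
        # protocol analyzer tags.
        Analyzer::disable_analyzer(Files::ANALYZER_PE);
        }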
This adds machinery to the packet_analysis manager for disabling
and enabling packet analyzers and implements two low-level bifs
to use it.
Extend Analyzer::enable_analyzer() and Analyzer::disable_analyzer()
to transparently work with packet analyzers, too. This also allows
adding packet analyzers to Analyzer::disabled_analyzers.
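A short sketch of the second part; PacketAnalyzer::ANALYZER_VLAN is assumed
here merely as an example packet analyzer tag.

    # Packet analyzer tags can now be listed alongside protocol analyzer tags.
    redef Analyzer::disabled_analyzers += { PacketAnalyzer::ANALYZER_VLAN };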
Introduce two new events for analyzer confirmation and analyzer violation
reporting. The current analyzer_confirmation and analyzer_violation
events assume connection objects and analyzer ids are available, which
is not always the case. We're already passing aid=0 for packet analyzers,
and there's currently no way to report violations from file analyzers
using analyzer_violation, for example.
These new events use an extensible Info record approach so that additional
(optional) information can be added later without changing the signature.
It would also allow per-analyzer extensions to the Info records to pass
analyzer-specific information to script land. It's not clear that this would
be a good idea, however.
The previous analyzer_confirmation and analyzer_violation events
continue to exist, but are deprecated and will be removed with Zeek 6.1.
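A hedged handler sketch: the event and record names used below
(analyzer_violation_info, AnalyzerViolationInfo) and the optional c field are
assumptions based on the description above, not spelled out by it.

    event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo)
        {
        # The Info record only carries a connection when one is available.
        if ( info?$c )
            print fmt("violation of %s on %s: %s", atype, info$c$uid, info$reason);
        }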
The current_entity tracking in HTTP assumes that client and server never
send HTTP entities at the same time. The attached pcap (generated
artificially) violates this and triggers:
1663698249.307259 expression error in <...>base/protocols/http/./entities.zeek, line 89: field value missing (HTTP::c$http$current_entity)
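A minimal sketch of the kind of existence check that avoids this error; the
http_entity_data handler is only an illustrative place to show the guard.

    event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
        {
        # Guard against current_entity being unset when client and server
        # interleave entities.
        if ( c?$http && c$http?$current_entity )
            print c$http$current_entity;
        }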
For the http-no-crlf test, include weird.log as a baseline. Now that weird is
@load'ed from http, it is actually created, and it seems to make sense
to btest-diff it, too.
...the only known cases where the `-` for `connection$service` was
handled were to skip/ignore these analyzers.
Slight suspicion that join_string_set() should maybe become a bif
now that determine_service() runs once for each connection.
Closes #2388