GetChildAnalyzer() has the same semantics as HasChildAnalyzer(), but returns
a raw pointer to the child analyzer. The main issue is memory management: that
pointer is not guaranteed to stay valid. The child may be disabled from script
land or otherwise removed from the analyzer tree and subsequently
deleted in one of the Forward* methods.
IsPreventedChildAnalyzer() provides minimal introspection for prevented
child analyzer tags and makes it possible to remove some duplicated code.
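A small standalone sketch of the intended usage pattern (types and signatures
here are simplified stand-ins, not the actual Zeek API): consume the returned
pointer immediately and never cache it across Forward* calls.

    #include <string>
    #include <vector>

    // Simplified stand-in for an analyzer tree node.
    struct Analyzer {
        std::string tag;
        std::vector<Analyzer*> children;

        // Raw pointer to the first child with the given tag, or nullptr.
        // The pointer may become dangling once control returns to the
        // tree, e.g. when a Forward* method deletes a child that script
        // land disabled.
        Analyzer* GetChildAnalyzer(const std::string& t) {
            for ( auto* c : children )
                if ( c->tag == t )
                    return c;
            return nullptr;
        }

        bool HasChildAnalyzer(const std::string& t) {
            return GetChildAnalyzer(t) != nullptr;
        }
    };

    // Safe usage: use the pointer right away, then forget it.
    void DeliverToChild(Analyzer& parent, const std::string& tag) {
        if ( Analyzer* child = parent.GetChildAnalyzer(tag) )
            (void)child; // feed data to the child here; do not store it
    }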
* origin/topic/awelzel/broker-no-network-time-init:
btest/broker: Add test using Python bindings and zeek -r
Broker: Remove network time initialization
An invalid mail transaction is detected as
* a RCPT TO command without a preceding MAIL FROM
* a DATA command without a preceding RCPT TO
and logged as a weird (see the sketch further below).
The testing pcap for invalid mail transactions was produced with a Python
script against a local exim4 configured to accept more errors and unknown
commands than the default of 3:
# exim4.conf.template
smtp_max_synprot_errors = 100
smtp_max_unknown_commands = 100
See also: https://www.rfc-editor.org/rfc/rfc5321#section-3.3
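A minimal standalone sketch of the ordering check described above (state
handling simplified; the weird names are placeholders, not necessarily what
the analyzer actually reports):

    #include <string>

    enum class MailState { Init, MailSeen, RcptSeen };

    // Returns the weird to report for an out-of-order command, or an
    // empty string if the command is valid in the current state.
    std::string CheckMailTransaction(MailState& st, const std::string& cmd) {
        if ( cmd == "MAIL" )
            st = MailState::MailSeen;
        else if ( cmd == "RCPT" ) {
            if ( st == MailState::Init )
                return "smtp_rcpt_without_mail"; // placeholder weird name
            st = MailState::RcptSeen;
        }
        else if ( cmd == "DATA" ) {
            if ( st != MailState::RcptSeen )
                return "smtp_data_without_rcpt"; // placeholder weird name
            st = MailState::Init; // body follows; reset for the next transaction
        }
        return "";
    }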
It is currently not possible to call a->Conn()->GetVal() or construct a
zeek/file_analysis/File object from within doctests, as these quickly
reference the unpopulated zeek::id namespace to construct Val objects
of various types, making it hard to write basic tests without a complete
re-organization.
Move running of the unit tests to after script parsing, so tests can do
some basic exercising of File objects.
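For context, a minimal self-contained unit test in the doctest framework
that Zeek's C++ tests use (a generic example; constructing real File objects
in such a test only becomes possible once zeek::id is populated, i.e. after
script parsing):

    #define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
    #include "doctest.h"

    static int Twice(int x) { return 2 * x; }

    TEST_CASE("basic example") {
        CHECK(Twice(2) == 4);
    }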
Remove the special case of initializing network time if it hasn't
happened yet. The argument about broker.log containing 0.0 timestamps
is more a problem of the log, not something that would justify modifying
network time globally. For broker.log and possibly cluster.log, it might
be more reasonable to use current time, anyway.
I was a bit wary about tables backed by broker stores being populated
with network_time set to 0.0, but there seem to be existing logic and
assumptions that this is okay: it should be the same as populating a table
that has expirations set within zeek_init().
In fact, after staring a bit more, *not setting* network time might be more
correct, as workers that don't see packets would never set
zeek_start_network_time, which is used within the expiration computation.
Test whether the analyzer has been removed from the TCPSessionAdapter during
event processing. Without this check, we would keep feeding the analyzer
even after scripts decided to disable it.
The analyzer instance itself isn't flagged as disabled, so we need
to look at the parent's children.
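A standalone sketch of that check (types simplified; the real code goes
through the analyzer tree API):

    #include <algorithm>
    #include <vector>

    struct Analyzer {
        Analyzer* parent = nullptr;
        std::vector<Analyzer*> children;
    };

    // The analyzer carries no "disabled" flag of its own, so ask the
    // parent whether we are still among its children before feeding it
    // any further data.
    bool StillAttached(const Analyzer* a) {
        if ( ! a->parent )
            return false;
        const auto& kids = a->parent->children;
        return std::find(kids.begin(), kids.end(), a) != kids.end();
    }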
Intermediate lines of multiline replies usually do not contain valid status
codes (even if servers may opt to include them). Their content may be anything
and is likely unrelated to the original command. There's little reason for us
to try to match them with a corresponding command (sketched below).
OSS-Fuzz generated a large command reply with a huge number of intermediate
lines, which caused long processing times due to matching every line against
all currently pending commands.
This is a DoS vector against Zeek. The new ipv6-multiline-reply.trace and
ipv6-retr-samba.trace files have been extracted from the external ipv6.trace.
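A standalone sketch of telling the terminating line apart from intermediate
ones, following the shared FTP/SMTP reply syntax (RFC 959 / RFC 5321): a code
followed by '-' opens a multiline reply, and only the same code followed by a
space ends it. Only that final line needs matching against a pending command.

    #include <cctype>
    #include <string>

    // True only for the line that terminates a (possibly multiline)
    // reply: three digits matching the opening code, then a space.
    bool IsFinalReplyLine(const std::string& line, int open_code) {
        if ( line.size() < 4 )
            return false;

        for ( int i = 0; i < 3; ++i )
            if ( ! std::isdigit(static_cast<unsigned char>(line[i])) )
                return false;

        int code = (line[0] - '0') * 100 + (line[1] - '0') * 10 + (line[2] - '0');
        return code == open_code && line[3] == ' ';
    }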
* origin/topic/awelzel/try-update-network-time:
NEWS: Some notes about timing related changes
iosource: Remove non-existing ManagerBase friend
broker::Manager: use_realtime_false when allow_network_time_forward=F
A set of tests around set_network_time() and timer expiration
Remove suspend-processing test
Add a set of suspend_processing tests
btest: More verbose recursive-event output
broker::Manager: No more network_time forwarding
TimerMgr: No network_time updates in Process()
Event: No more network_time updates
RunState: Implement forward_network_time_if_applicable()
PktSrc: Add HasBeenIdleFor() method
PktSrc: Move termination pseudo_realtime special case to RunState
Run the broker in non-realtime mode when allow_network_time_forward=F.
This may need an extra option for really advanced use-cases, but for
now this seems reasonable.
This tests that timer expiration happens after a call to set_network_time()
the next time around the loop. This should be fairly stable, but I suspect
major changes in the main loop or around timer expiration may subtly change
the behavior.
This tested that timers continue working even if one calls
suspend_processing() in zeek -r mode. The new behavior is
that timers do not function in that scenario, rendering the
test invalid.
network_time forwarding will happen in the main loop before draining the
EventMgr, so timers/events scheduled based on broker messages should
behave similarly. This also keeps network_time unaffected during
non-pseudo-realtime trace processing.
network_time forwarding will now happen centrally in the main loop.
The TimerMgr returns a valid timeout that can be waited for and that will
trigger network_time advancement, so Process() no longer needs to update it.
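Taken together with the previous note, a sketch of the resulting
per-iteration ordering (stub bodies; forward_network_time_if_applicable()
is the name from this branch, the other two are invented for illustration):

    // One simplified main-loop iteration; the point is the ordering.
    static void forward_network_time_if_applicable() { /* maybe advance network_time */ }
    static void ProcessReadySources() { /* packets, broker, timers */ }
    static void DrainEventQueue() { /* handlers now see updated network_time */ }

    void LoopIteration() {
        forward_network_time_if_applicable(); // central, before draining
        ProcessReadySources();
        DrainEventQueue();
    }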
The docs read as if this was only used to update network_time, so there
may be a follow-up to stop EventMgr from being an IOSource entirely (one
could argue it isn't doing IO).
Add a central place for the decision of when it's okay to update network
time to the current time (wallclock). It checks for pseudo_realtime and
packet source existence, as well as packet source idleness.
A new const &redef makes it possible to completely disable forwarding of
network time.
This method will be used by the main loop to determine whether an interface
has become idle. Initially, this determines when it is acceptable to update
network_time to the current time (wallclock).
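A standalone sketch of that decision (allow_network_time_forward and
HasBeenIdleFor() are names from the commit subjects; the function name,
parameters, and idle threshold here are assumptions):

    struct PktSrc {
        double last_packet_seen = 0.0; // wallclock of last delivered packet

        bool HasBeenIdleFor(double seconds, double now) const {
            return now - last_packet_seen >= seconds;
        }
    };

    bool OkToForwardNetworkTime(bool allow_network_time_forward,
                                bool pseudo_realtime,
                                const PktSrc* src, double now) {
        if ( ! allow_network_time_forward )
            return false; // the new const &redef disables forwarding entirely

        if ( pseudo_realtime )
            return false; // trace replay drives network_time itself

        if ( src && ! src->HasBeenIdleFor(0.1, now) ) // threshold assumed
            return false; // the packet source is still delivering packets

        return true; // okay to set network_time to wallclock
    }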
This also removes setting pseudo_realtime to 0.0 in the main loop
when the packet source has been closed. I tried to understand the
implications, and it seems that if we shut down the iosource::Manager
anyway, it shouldn't matter; it was just confusing.
* origin/topic/awelzel/make-some-deprecations-errors:
Expr: Factor out type tag extraction
Var: Add version to deprecated initialization
Stmt: Error on deprecated when/local usage
Expr: Remove vector scalar operations
parse.y: Make out-of-scope use errors
scan.l: Remove unused deprecated_attr
* origin/topic/awelzel/main-loop-allow-multiple-zero-timeouts:
NEWS: main-loop changes around zero-timeout sources
Add a new plugin test with verbose IO source output
iosource: Make poll intervals configurable
iomanager/Poll: Add zero-timeout timeout_src also when there's other events ready
iomanager: Collect all sources with zero timeouts as ready
This is mostly for documentation/verification purposes of how the IO loop
currently does draining and when it picks up FD based (non-packet) IO
sources. For example, it shows that FD based sources are currently processed
with a fair delay and that we now also process two timeout sources that are
ready.
This probably should not be changed by users, but it's useful for
testing and experimentation without needing to recompile.
Processing 100 packets without checking an FD based IO source can
actually mean that FD based sources are never checked during a read
of a very small pcap...
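A standalone sketch of a configurable interval in place of the hard-coded
100 (structure and names assumed, not the actual implementation):

    struct MainLoop {
        int poll_interval = 100; // previously hard-coded, now configurable
        int packets_since_poll = 0;

        // Called once per processed packet. With a hard-coded 100, a pcap
        // shorter than that could be fully read before FD based sources
        // were ever polled; a smaller configurable interval avoids this
        // in tests.
        bool ShouldPollFdSources() {
            if ( ++packets_since_poll >= poll_interval ) {
                packets_since_poll = 0;
                return true;
            }
            return false;
        }
    };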
This would generally happen on the next loop iteration anyway, but it
seems nice to ensure a zero-timeout source is processed at the same
time as sources with ready FDs.
Previously, if two iosources returned 0.0 as their timeout, only
one of them would be considered ready. An always-ready source
could therefore starve other ready ones, and at a minimum this
behavior seems surprising.
Offline pcap sources are always ready and return 0.0 for
GetNextTimeout() (unless in pseudo-realtime), so we can
also remove the offline source special case.
One subtle side effect of this change is that if an IO source
returns a 0.0 timeout *and* its file descriptor is ready in
the same loop iteration, it may be processed twice.
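A standalone sketch of the collection logic (types simplified):

    #include <vector>

    struct Source {
        double next_timeout; // 0.0 means "ready right now"
    };

    // Collect every source reporting a 0.0 timeout as ready instead of
    // stopping at the first one, so an always-ready source can no longer
    // starve the others.
    std::vector<Source*> CollectZeroTimeoutSources(std::vector<Source>& sources) {
        std::vector<Source*> ready;
        for ( auto& s : sources )
            if ( s.next_timeout == 0.0 )
                ready.push_back(&s);
        return ready;
    }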