The user and password fields are replicated into every ftp.log entry.
Using a very large username (hundreds of KBs) makes it possible to
bloat the log without actually sending much traffic. Further, limit
the arg and reply_msg columns to large, but not unbounded, values.
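The mitigation boils down to capping unbounded string fields before they are written out. A minimal sketch, assuming hypothetical names (`MAX_FIELD_LEN`, `truncate_field`); the real change caps these columns inside Zeek itself:

```python
# Hypothetical sketch: cap unbounded string fields before they are
# replicated into every log entry. Names are illustrative, not Zeek's.
MAX_FIELD_LEN = 4096  # large, but not unbounded

def truncate_field(value: str, limit: int = MAX_FIELD_LEN) -> str:
    """Return value capped at `limit` characters."""
    return value if len(value) <= limit else value[:limit]

# A multi-hundred-KB username no longer bloats each ftp.log entry:
huge_user = "A" * 300_000
entry = {"user": truncate_field(huge_user), "password": truncate_field("<hidden>")}
```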
This fixes a potential memory leak when receiving responses for async DNS
requests where the TTL value on the response is zero. We were immediately
considering the request as expired and never removing it from the map of
requests. This led to the DNS_Mgr eventually stopping processing of async
requests.
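The fix amounts to not treating a zero TTL as already expired, so the entry survives until the next expiration sweep and gets removed normally. A sketch of the idea with illustrative names (`pending`, `MIN_TTL`), not Zeek's actual DNS_Mgr code:

```python
MIN_TTL = 1  # clamp zero TTLs so entries survive until the next sweep

pending = {}  # request id -> absolute expiration time

def record_response(req_id: int, ttl: int, now: float) -> None:
    # Previously a TTL of 0 made the request "expired" immediately,
    # so the sweep below never matched it and the entry leaked.
    pending[req_id] = now + max(ttl, MIN_TTL)

def sweep(now: float) -> None:
    # Remove every request whose expiration time has passed.
    for req_id in [r for r, exp in pending.items() if exp <= now]:
        del pending[req_id]
```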
I regularly forget the type specifier when redef'ing records or enums,
and it usually takes me a while to figure out what's going on because
the errors are not descriptive. Improve the error reporting and just
bail, as there's no sensible way to continue.
Closes #2777
* ekoyle/add-protocol-pbb:
Update seemingly-unrelated btests
Use a default analyzer
Simplify PBB analyzer by using Ethernet analyzer
Add btest for PBB and update baselines
Use constexpr instead of #define
Cleanup and add customer MAC addresses
Add PBB (802.1ah) support
We previously included any and all stderr output produced during
compilation in the test baseline. Depending on the compiler used, this
output may contain C++ compilation warnings that are irrelevant to the
behavior under test.
(cherry picked from commit 5221edf474)
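The effect can be sketched as filtering warning lines out of captured stderr before it lands in a baseline. This is an illustrative pattern, not the test harness's actual canonifier:

```python
import re

# Illustrative canonifier: drop compiler warning chatter from captured
# stderr so baselines only contain output relevant to the test.
WARNING_RE = re.compile(r"warning:", re.IGNORECASE)

def canonify_stderr(text: str) -> str:
    kept = [line for line in text.splitlines() if not WARNING_RE.search(line)]
    return "\n".join(kept)

stderr = "foo.cc:1: warning: unused variable\nreal test output\n"
```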
* origin/topic/awelzel/propagate-on-change-through-copy:
TableVal: Propagate &on_change attribute through copy()
testing/btest: Add test showing &expire_func/&create_expire is copied
Copying an &ordered table or set would result in a copy that is not ordered.
This seems rather surprising behavior, so propagate the &ordered attribute.
Closes #2793
Mostly for consistency with &default, &expire_func and other attributes
being propagated through a copy(). It seems this was simply missed during
the implementation and/or never tested for.
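Conceptually the fix extends copy() to carry these attributes along, the same way &default and &expire_func already were. A language-neutral sketch with invented names (`Tableish`, `PROPAGATED`), not Zeek's actual TableVal implementation:

```python
# Illustrative model of a table value carrying attributes through copy().
PROPAGATED = {"&default", "&expire_func", "&create_expire", "&on_change", "&ordered"}

class Tableish:
    def __init__(self, data, attrs):
        self.data = dict(data)
        self.attrs = set(attrs)

    def copy(self):
        # Previously &on_change (and &ordered) were dropped at this point;
        # now every propagated attribute survives the copy.
        return Tableish(self.data, self.attrs & PROPAGATED)

t = Tableish({1: "a"}, {"&ordered", "&on_change"})
```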
After the first 4 bytes, this traffic actually just looks like Ethernet.
Rather than re-implementing the Ethernet analyzer, just check the
length, skip 4 bytes, and pass the rest on.
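A sketch of that dispatch logic, with a hypothetical `forward_to_ethernet` callback standing in for the Ethernet analyzer: verify there is enough data, skip the 4-byte header, and delegate the remainder.

```python
PBB_HEADER_LEN = 4  # per the commit: after the first 4 bytes it's Ethernet

def analyze_pbb(data: bytes, forward_to_ethernet) -> bool:
    """Sketch: length-check, skip the 4-byte header, delegate the rest."""
    if len(data) < PBB_HEADER_LEN:
        return False  # too short to contain the PBB header
    forward_to_ethernet(data[PBB_HEADER_LEN:])
    return True

seen = []
analyze_pbb(b"\x00\x00\x00\x01" + b"inner-ethernet-frame", seen.append)
```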
* origin/topic/christian/btest-invocation-for-cluster-tests:
CI: remove no longer needed workaround for GITHUB_ACTION env var in cluster tests
CI: directly invoke btest in the cluster testsuite
This resembles the way we also invoke it in ci/test.sh, and "-d"'s direct
console output saves a roundtrip through uploaded artifacts when tests fail.
This skips test retries for now -- it's not clear we really need them
for this testsuite.
* origin/topic/timw/fix-windows-build:
Fix linking of zeek_build_info on Windows
CI: Enable Windows builds for PRs
Call python explicitly from cmake for collecting repo info on Windows
Rework zeek-inet-ntop snprintf return value handling
* origin/topic/vern/Feb23-C++-maint:
added to C++ script compiler maintainer notes the utility of starting with a full base-script compile
fixes for order-of-initialization in scripts compiled to C++
annotations of such initializations to tie them to the original Zeek script
Fixed bad memory access in compiled-to-C++ scripts when initializing attributes
* origin/topic/timw/2720-vxlan-geneve-confirmation:
Call AnalyzerConfirmation earlier in VXLAN/Geneve analysis
Add validation of session to start of AYIYA/VXLAN/Geneve analysis
This mimics how the Teredo analyzer is already doing it, including
sending a weird if the session is invalid and bailing out if the
protocol was already violated.
Unnecessary overhead in the Hash() method was uncovered for DEBUG builds:
a description of every HashKey was computed even when the DBG_HASHKEY
debug stream is not enabled. Squelch it.
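The squelch is essentially a guard: only build the (expensive) description once we know the debug stream is actually on. A sketch with illustrative names, not Zeek's actual debug-logging API:

```python
calls = {"describe": 0}

def describe_hash_key(key) -> str:
    calls["describe"] += 1          # stands in for the expensive work
    return f"HashKey({key!r})"

def debug_log(stream_enabled: bool, key) -> None:
    # Fix: bail before building the description unless the stream is on.
    if not stream_enabled:
        return
    print(describe_hash_key(key))

debug_log(False, ("orig_h", 80))  # no description computed
```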
This adds a new utility called ci/collect-repo-info.py to produce a JSON
document that is then baked into the Zeek executable. Further, when
creating a tarball via `make dist`, put a top-level repo-info.json file
in place that is picked up when no .git directory exists.
Closes #1405