Compare commits

152 commits

Author SHA1 Message Date
Tim Wojtulewicz
6aecda0699 Update CHANGES, VERSION, and NEWS for 7.0.10 2025-08-26 10:39:29 -07:00
Arne Welzel
f802819fae Merge remote-tracking branch 'origin/topic/vern/zam-record-fields-fixes'
* origin/topic/vern/zam-record-fields-fixes:
  fixes for specialized ZAM operations needing to check whether record fields exist

(cherry picked from commit d7fbd49d9e)
2025-08-22 13:32:29 -07:00
Vern Paxson
9ff78da69a modified merge_types() to skip work if given identical types, which
also preserves type names (useful for -O gen-C++)

(cherry picked from commit 7ed3f79c87)
2025-08-22 13:04:30 -07:00
Tim Wojtulewicz
0624a652ec Merge remote-tracking branch 'origin/topic/awelzel/4730-smb-read-response-data-offset'
* origin/topic/awelzel/4730-smb-read-response-data-offset:
  smb2/read: Parse only 1 byte for data_offset, ignore reserved1

(cherry picked from commit 76289a8022)
2025-08-22 13:01:19 -07:00
Tim Wojtulewicz
43f4ff255d Updating CHANGES and VERSION. 2025-07-21 10:24:00 -07:00
Tim Wojtulewicz
79a51c715a Update docs submodule [nomail] [skip ci] 2025-07-21 10:23:17 -07:00
Tim Wojtulewicz
19346b93ad Return weird if a log line is over a configurable size limit 2025-07-21 09:18:16 -07:00
Tim Wojtulewicz
9bda7493ec Update NEWS for 7.0.9 release [skip ci] [nomail] 2025-07-18 16:04:51 -07:00
Tim Wojtulewicz
66a801bd2c CI: Remove spicy-head task 2025-07-17 08:47:45 -07:00
Robin Sommer
6e4d3f0e56 Merge remote-tracking branch 'origin/topic/bbannier/protocol-handle-close-finish'
* origin/topic/bbannier/protocol-handle-close-finish:
  [Spicy] Let `zeek::protocol_handle_close()` send a TCP EOF.

(cherry picked from commit ce6c7a6cd1)
2025-07-17 08:43:15 -07:00
Tim Wojtulewicz
db5ab72d0e Remove libzmq5 from Docker images
This was accidentally added in 356685d82d and
doesn't need to be in our official 7.0 images.
2025-07-14 14:28:09 -07:00
Tim Wojtulewicz
8f877f9d58 CI: Force opensuse-tumbleweed image to rebuild 2025-07-14 14:24:10 -07:00
Arne Welzel
a0d35d6e28 Merge remote-tracking branch 'origin/topic/vern/ZAM-const-prop-fix'
* origin/topic/vern/ZAM-const-prop-fix:
  fix for error in ZAM's constant propagation logic

(cherry picked from commit 869bd181b2)
2025-07-14 14:16:09 -07:00
Arne Welzel
59a1c74ac5 Merge remote-tracking branch 'origin/topic/awelzel/4562-post-proc-lookup-failure'
* origin/topic/awelzel/4562-post-proc-lookup-failure:
  btest/logging: Fly-by cleanup
  logging/Ascii: Fix abort() for non-existing postrotation functions

(cherry picked from commit f4357485d2)
2025-07-14 14:13:37 -07:00
Arne Welzel
356685d82d Merge branch 'topic/ado/final-docker' of https://github.com/edoardomich/zeek
* 'topic/ado/final-docker' of https://github.com/edoardomich/zeek:
  docker: Add `net-tools` and `procps` dependencies

(cherry picked from commit 8189716adc)
2025-07-14 14:11:49 -07:00
Tim Wojtulewicz
d90c0d3730 Update ZeekJS to v0.18.0
This is primarily to bring in 26c8c3684c46dce2f00b191ed009b1ea9bfe9159.
2025-07-14 14:10:31 -07:00
Arne Welzel
181214ed78 Merge remote-tracking branch 'origin/topic/awelzel/4522-bdat-last-reply-fix'
* origin/topic/awelzel/4522-bdat-last-reply-fix:
  smtp: Fix last_reply column in smtp.log for BDAT LAST

(cherry picked from commit f5063bfcd4)
2025-07-14 13:57:07 -07:00
Tim Wojtulewicz
4021a0c654 Update CHANGES, VERSION, and NEWS for 7.0.8 2025-05-19 14:42:22 -07:00
Arne Welzel
b76a75d86e Merge remote-tracking branch 'origin/topic/awelzel/4035-btest-openssl-sha1-certs'
* origin/topic/awelzel/4035-btest-openssl-sha1-certs:
  external/subdir-btest.cfg: Set OPENSSL_ENABLE_SHA1_SIGNATURES=1
  btest/x509_verify: Drop OpenSSL 1.0 hack
  testing/btest: Use OPENSSL_ENABLE_SHA1_SIGNATURES

(cherry picked from commit 280e7acc6e)
2025-05-19 11:18:20 -07:00
Tim Wojtulewicz
737b7d0add Update paraglob submodule for GCC 15.1 build fix 2025-05-19 09:36:28 -07:00
Arne Welzel
a233788a69 Merge remote-tracking branch 'origin/topic/awelzel/ci-fedora-42'
* origin/topic/awelzel/ci-fedora-42:
  probabilistic/BitVector: Add include <cstdint>
  Bump spicy to fix build with GCC 15.1
  CI: Drop fedora-40
  CI: Add fedora-42

(cherry picked from commit 7583651bec)
2025-05-19 09:36:28 -07:00
Johanna Amann
1610fe9eaf Merge remote-tracking branch 'origin/topic/johanna/remove-bind-library-check'
* origin/topic/johanna/remove-bind-library-check:
  Remove unnecessary check for bind library.

Closes GH-432t log9

(cherry picked from commit 37be65dfd0)
2025-05-19 09:18:21 -07:00
Arne Welzel
94700130ed Merge remote-tracking branch 'origin/topic/vern/zam-aggr-change-in-loop'
* origin/topic/vern/zam-aggr-change-in-loop:
  fix for ZAM optimization when an aggregate is modified inside of a loop

(cherry picked from commit 2255fa23b8)
2025-05-19 09:16:10 -07:00
Tim Wojtulewicz
c700efc3c8 Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy-7.0' into release/7.0
* origin/topic/bbannier/bump-spicy-7.0:
  Bump `auxil/spicy` to v1.11.5
2025-05-19 09:09:14 -07:00
Benjamin Bannier
1b5ac2d2e5 Bump auxil/spicy to v1.11.5 2025-05-19 14:54:59 +02:00
Tim Wojtulewicz
05da1c5a52 Updating CHANGES and VERSION. 2025-05-09 07:30:44 -07:00
Tim Wojtulewicz
5f07b3a858 Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy-7.0' into release/7.0
* origin/topic/bbannier/bump-spicy-7.0:
  Bump auxil/spicy to spicy-1.11.4
2025-05-08 14:46:40 -07:00
Benjamin Bannier
98eb2a10de Bump auxil/spicy to spicy-1.11.4 2025-05-08 13:13:43 -07:00
Tim Wojtulewicz
c2874bf818 Update docs submodule [nomail] [skip ci] 2025-05-08 12:14:30 -07:00
Tim Wojtulewicz
83ea862c11 Update NEWS for 7.0.7 [nomail] [skip ci] 2025-05-06 13:42:21 -07:00
Tim Wojtulewicz
11cf9e99f2 Add fix to support CMake 4.0, plus update Spicy to version that supports it 2025-05-06 12:45:49 -07:00
Tim Wojtulewicz
76c94e84ac CI: Use brew version of python3 on macOS 2025-05-06 10:57:18 -07:00
Tim Wojtulewicz
37e7b57664 Update quic baselines due to service ordering 2025-05-06 10:09:16 -07:00
Tim Wojtulewicz
c8b42fe3c7 Merge remote-tracking branch 'origin/topic/awelzel/4275-for-release-7.0' into release/7.0
* origin/topic/awelzel/4275-for-release-7.0:
  ldap: Replace if with switch on bool
  Merge remote-tracking branch 'origin/topic/awelzel/4275-ldap-gss-spnego-auth-miss'
2025-05-06 09:54:58 -07:00
Arne Welzel
bdcb1c8a44 ldap: Replace if with switch on bool
The change from a2a535d0c9 used
zeek/spicy#1841, but Zeek 7.0 does not have that functionality
yet. Replace with switch ( bool ).
2025-05-06 09:47:42 +02:00
Arne Welzel
ceb798b42a Merge remote-tracking branch 'origin/topic/awelzel/4275-ldap-gss-spnego-auth-miss'
* origin/topic/awelzel/4275-ldap-gss-spnego-auth-miss:
  ldap: Clean up from code review
  ldap: Add Sicily Authentication constants
  ldap: Only switch into MS_KRB5 mode if responseToken exists

(cherry picked from commit a2a535d0c9)
2025-05-06 09:46:49 +02:00
Arne Welzel
ec18da8baa Merge remote-tracking branch 'origin/topic/awelzel/4405-quic-fragmented-crypto'
* origin/topic/awelzel/4405-quic-fragmented-crypto:
  Bump external/zeek-testing
  QUIC: Extract reset_crypto() function
  QUIC: Rename ConnectionIDInfo to Context
  QUIC: Switch initial_destination_conn_id to optional
  QUIC: Use initial destination conn_id for decryption
  QUIC: Handle CRYPTO frames across multiple INITIAL packets
  QUIC: Do not consume EncryptedLongPacketPayload
  QUIC: Fix ACK frame parsing

(cherry picked from commit 50ac8d1468)
2025-05-05 12:56:53 -07:00
Arne Welzel
e712461719 broker/main: Adapt enum values to agree with comm.bif
Logic to detect this error already existed, but because the enum
identifiers had no values set, it never triggered before.

Should probably backport this one.

(cherry picked from commit 6bc36e8cf8)
2025-05-05 12:54:42 -07:00
Tim Wojtulewicz
bc8dc65bd6 Update cmake submodule [nomail] 2025-05-05 12:16:39 -07:00
Tim Wojtulewicz
3e5060018a Update docs submodule to fix RTD [nomail] [skip ci] 2025-03-20 13:48:45 -07:00
Tim Wojtulewicz
9f8e27118e Update CHANGES, VERSION, and NEWS for 7.0.6 release 2025-03-20 12:24:26 -07:00
Tim Wojtulewicz
89376095dc Update zeekctl submodule to fix a couple btests 2025-03-19 13:04:31 -07:00
Tim Wojtulewicz
3e8af6497e Update zeekjs to v0.16.0 2025-03-19 10:43:17 -07:00
Tim Wojtulewicz
5051cce720 Updating CHANGES and VERSION. 2025-03-19 10:43:02 -07:00
Tim Wojtulewicz
c30b835a14 Update mozilla-ca-list.zeek and ct-list.zeek to NSS 3.109 2025-03-18 17:59:01 -07:00
Tim Wojtulewicz
a041080e3f Update core/vntag-in-vlan baseline to remove ip_proto field for 7.0 2025-03-18 17:03:05 -07:00
Tim Wojtulewicz
fc3001c76a CI: Force rebuild of tumbleweed docker image 2025-03-18 16:33:45 -07:00
Tim Wojtulewicz
e2b2c79306 Merge remote-tracking branch 'origin/topic/timw/ci-macos-upgrade-pip'
* origin/topic/timw/ci-macos-upgrade-pip:
  CI: Unconditionally upgrade pip on macOS

(cherry picked from commit e8d91c8227)
2025-03-18 16:21:45 -07:00
Tim Wojtulewicz
ed32ee73fa Merge remote-tracking branch 'origin/topic/timw/ci-macos-sequoia'
* origin/topic/timw/ci-macos-sequoia:
  ci/init-external-repo.sh: Use regex to match macos cirrus task
  CI: Change macOS runner to Sequoia

(cherry picked from commit 43f108bb71)
2025-03-18 16:21:13 -07:00
Tim Wojtulewicz
eed9858bc4 CI: Update freebsd to 13.4 and 14.2 2025-03-18 16:20:06 -07:00
Tim Wojtulewicz
ed081212ae Merge remote-tracking branch 'origin/topic/timw/vntag-in-vlan'
* origin/topic/timw/vntag-in-vlan:
  Add analyzer registration from VLAN to VNTAG

(cherry picked from commit cb5e3d0054)
2025-03-18 16:18:13 -07:00
Arne Welzel
ec04c925a0 Merge remote-tracking branch 'origin/topic/awelzel/2311-load-plugin-bare-mode'
* origin/topic/awelzel/2311-load-plugin-bare-mode:
  scan.l: Fix @load-plugin scripts loading
  scan.l: Extract switch_to() from load_files()
  ScannedFile: Allow skipping canonicalization

(cherry picked from commit a3a08fa0f3)
2025-03-18 16:16:39 -07:00
Arne Welzel
de8127f3cd Merge remote-tracking branch 'origin/topic/awelzel/4198-4201-quic-maintenance'
* origin/topic/awelzel/4198-4201-quic-maintenance:
  QUIC/decrypt_crypto: Rename all_data to data
  QUIC: Confirm before forwarding data to SSL
  QUIC: Parse all QUIC packets in a UDP datagram
  QUIC: Only slurp till packet end, not till &eod

(cherry picked from commit 44304973fb)
2025-03-18 16:15:34 -07:00
Arne Welzel
b5774f2de9 Merge remote-tracking branch 'origin/topic/vern/ZAM-field-assign-in-op'
* origin/topic/vern/ZAM-field-assign-in-op:
  pre-commit: Bump spicy-format to 0.23
  fix for ZAM optimization of assigning a record field to result of "in" operation

(cherry picked from commit 991bc9644d)
2025-03-18 16:13:01 -07:00
Tim Wojtulewicz
7c8a7680ba Update CHANGES, VERSION, and NEWS for 7.0.5 release 2024-12-16 11:12:48 -07:00
Tim Wojtulewicz
26b50908e1 Merge remote-tracking branch 'security/topic/timw/7.0.5-patches' into release/7.0
* security/topic/timw/7.0.5-patches:
  QUIC/decrypt_crypto: Actually check if decryption was successful
  QUIC/decrypt_crypto: Limit payload_length to 10k
  QUIC/decrypt_crypto: Fix decrypting into too small stack buffer
2024-12-16 10:21:59 -07:00
Arne Welzel
c2f2388f18 QUIC/decrypt_crypto: Actually check if decryption was successful
...and bail if it wasn't.

PCAP was produced using OSS-Fuzz input from issue 383379789.
2024-12-13 13:10:45 -07:00
Arne Welzel
d745d746bc QUIC/decrypt_crypto: Limit payload_length to 10k
Given we dynamically allocate memory for decryption, employ a limit
that is unlikely to be hit, but allows for large payloads produced
by the fuzzer or jumbo frames.
2024-12-13 13:10:45 -07:00
Arne Welzel
5fbb6b4599 QUIC/decrypt_crypto: Fix decrypting into too small stack buffer
A QUIC initial packet larger than 1500 bytes could lead to crashes
due to the usage of a fixed size stack buffer for decryption.

Allocate the necessary memory dynamically on the heap instead.
2024-12-13 13:10:45 -07:00
Tim Wojtulewicz
7c463b5f92 Update docs submodule [nomail] [skip ci] 2024-12-13 13:08:51 -07:00
Tim Wojtulewicz
e7f694bcbb Merge remote-tracking branch 'origin/topic/vern/ZAM-tbl-iteration-memory-mgt-fix'
* origin/topic/vern/ZAM-tbl-iteration-memory-mgt-fix:
  fix for memory management associated with ZAM table iteration

(cherry picked from commit 805e9db588)
2024-12-13 12:27:16 -07:00
Arne Welzel
f54416eae4 Merge remote-tracking branch 'origin/topic/christian/fix-zam-analyzer-name'
* origin/topic/christian/fix-zam-analyzer-name:
  Fix ZAM's implementation of Analyzer::name() BiF

(cherry picked from commit e100a8e698)
2024-12-12 13:14:10 -07:00
Arne Welzel
68bfe8d1c0 Merge remote-tracking branch 'origin/topic/vern/zam-exception-leaks'
* origin/topic/vern/zam-exception-leaks:
  More robust memory management for ZAM execution - fixes #4052

(cherry picked from commit c3b30b187e)
2024-12-12 13:05:13 -07:00
Arne Welzel
cf97ed6ac1 Merge remote-tracking branch 'origin/topic/awelzel/bump-zeekjs-0-14-0'
* origin/topic/awelzel/bump-zeekjs-0-14-0:
  Bump zeekjs to v0.14.0

(cherry picked from commit aac640ebff)
2024-12-12 12:45:14 -07:00
Benjamin Bannier
35cd891d6e Merge remote-tracking branch 'origin/topic/bbannier/doc-have-spicy'
(cherry picked from commit 4a96d34af6)
2024-12-12 12:43:43 -07:00
Tim Wojtulewicz
f300ddb9fe Update CHANGES, VERSION, and NEWS for 7.0.4 release 2024-11-19 12:35:32 -07:00
Arne Welzel
fa5a7c4a5b Merge remote-tracking branch 'origin/topic/awelzel/bump-zeekjs-0-13-2'
* origin/topic/awelzel/bump-zeekjs-0-13-2:
  Bump zeekjs to 0.13.2

(cherry picked from commit 6e916efe8d)
2024-11-19 11:19:31 -07:00
Tim Wojtulewicz
56b596a3e3 Merge remote-tracking branch 'origin/topic/timw/speed-up-zam-ci-testing'
* origin/topic/timw/speed-up-zam-ci-testing:
  CI: Use test.sh script for running ZAM tests, but disable parts of it

(cherry picked from commit d9a74680e0)
2024-11-19 10:56:28 -07:00
Tim Wojtulewicz
91067b32cc Update docs submodule [nomail] [skip ci] 2024-11-19 09:43:20 -07:00
Arne Welzel
43ab74b70f Merge branch 'sqli-spaces-encode-to-plus' of https://github.com/cooper-grill/zeek
* 'sqli-spaces-encode-to-plus' of https://github.com/cooper-grill/zeek:
  account for spaces encoding to plus signs in sqli regex detection

(cherry picked from commit 5200b84fb3)
2024-11-19 09:33:22 -07:00
Arne Welzel
887d92e26c Merge remote-tracking branch 'upstream/topic/awelzel/3774-skip-script-args-test-under-tsan'
* upstream/topic/awelzel/3774-skip-script-args-test-under-tsan:
  btest: Skip core.script-args under TSAN

(cherry picked from commit 159f40a4bf)
2024-11-14 19:07:51 -07:00
Tim Wojtulewicz
b1fec3284e Disable core.expr-execption btest under ZAM to fix CI builds 2024-11-14 16:04:41 -07:00
Tim Wojtulewicz
5ce0f2edb6 Fix ubsan warning with ZAM and mmdb btest 2024-11-14 13:14:58 -07:00
Tim Wojtulewicz
d5c3cdf33a Update doc submodule [nomail] [skip ci] 2024-11-14 13:02:08 -07:00
Arne Welzel
7ed52733d2 Merge remote-tracking branch 'origin/topic/awelzel/asan-zam-ci'
* origin/topic/awelzel/asan-zam-ci:
  ci: Add asan and ubsan sanitizer tasks for ZAM

(cherry picked from commit 8945b2b186)
2024-11-14 12:16:33 -07:00
Arne Welzel
056b70bd2d Merge remote-tracking branch 'origin/topic/awelzel/community-id-new-connection'
* origin/topic/awelzel/community-id-new-connection:
  policy/community-id: Populate conn$community_id in new_connection()

(cherry picked from commit d3579c1f34)
2024-11-14 12:15:27 -07:00
Tim Wojtulewicz
f697670668 Update zeekjs submodule to latest tagged version
This picks up the changes to support Node.js v22.11.0.
2024-11-14 12:07:00 -07:00
Benjamin Bannier
826d5e6fb7 Merge remote-tracking branch 'origin/topic/etyp/cookie-nullptr-spicy-dpd'
(cherry picked from commit 1d38c31071)
2024-11-14 11:59:05 -07:00
Benjamin Bannier
1c3be97fe9 Merge remote-tracking branch 'origin/topic/bbannier/spicy-cookie-nullptr-deref'
(cherry picked from commit 2e8d6e86e7)
2024-11-14 11:56:53 -07:00
Evan Typanski
107c0da15d Fix up minor warnings in touched files
(cherry picked from commit 36af0591a6)
2024-11-14 11:53:29 -07:00
Evan Typanski
e3845060dc Fix Clang 19 deprecation failure
Fixes #3994

Clang 19 with libc++ started failing to compile because the default
implementation of `std::char_traits` was removed, making uses of
`std::char_traits<unsigned char>` invalid. It was used more for
convenience before, and the behavior should be roughly the same with
`char`.

See relevant LLVM commits:

aeecef08c3

08a0faf4cd
(cherry picked from commit 985f4f7c72)
2024-11-14 11:52:23 -07:00
Arne Welzel
34ef830b9c Merge remote-tracking branch 'origin/topic/awelzel/3978-zeekjs-0.12.1-bump'
* origin/topic/awelzel/3978-zeekjs-0.12.1-bump:
  Bump zeekjs to 0.12.1

(cherry picked from commit d74b073852)
2024-11-14 11:33:38 -07:00
Arne Welzel
3ebe867193 Merge branch 'modbus-fixes' of https://github.com/zambo99/zeek
* 'modbus-fixes' of https://github.com/zambo99/zeek:
  Prevent non-Modbus traffic on port 502 from being reported as Modbus

(cherry picked from commit 4763282f36)
2024-11-14 11:32:17 -07:00
Christian Kreibich
300b7a11ac Merge branch 'topic/awelzel/3957-raw-reader-spinning'
* topic/awelzel/3957-raw-reader-spinning:
  input/Raw: Rework GetLine()

(cherry picked from commit 2a23e9fc19)
2024-11-14 11:30:55 -07:00
Tim Wojtulewicz
f5fefd17df Merge remote-tracking branch 'origin/topic/vern/zam-fixes-for-7.0.x' into release/7.0
* origin/topic/vern/zam-fixes-for-7.0.x:
  import of GH-4022 BTest additions ZAM baseline update
  fix for setting object locations to avoid use-after-free situation
  fixes for script optimization of coerce-to-any expressions
  porting of GH-4022
  porting of GH-4016
  porting of GH-4013
  fixed access to uninitialized memory in ZAM's "cat" built-in
2024-11-14 10:22:07 -07:00
Vern Paxson
3281aa6284 import of GH-4022 BTest additions
ZAM baseline update
2024-11-14 10:19:07 -07:00
Vern Paxson
bcfd47c28d fix for setting object locations to avoid use-after-free situation 2024-11-14 10:19:07 -07:00
Vern Paxson
10d5ca5948 fixes for script optimization of coerce-to-any expressions 2024-11-14 10:19:07 -07:00
Vern Paxson
f693f22192 porting of GH-4022 2024-11-12 15:41:20 -08:00
Vern Paxson
c86f9267ff porting of GH-4016 2024-11-11 11:54:15 -08:00
Vern Paxson
dfbeb3e71f porting of GH-4013 2024-11-11 11:38:04 -08:00
Vern Paxson
fabb4023c9 fixed access to uninitialized memory in ZAM's "cat" built-in 2024-11-11 10:54:23 -08:00
Christian Kreibich
9eb3ada8c8 Merge remote-tracking branch 'origin/topic/bbannier/fix-docs-ci-again'
* origin/topic/bbannier/fix-docs-ci-again:
  Fix installation of Python packages in generate docs CI job again

(cherry picked from commit c28442a9a1)
2024-10-18 17:15:51 -07:00
Christian Kreibich
7a73f81792 Update CHANGES, VERSION, and NEWS for 7.0.3 release 2024-10-04 15:42:59 -07:00
Christian Kreibich
ea44c30272 Merge remote-tracking branch 'security/topic/awelzel/215-pop3-mail-null-deref'
* security/topic/awelzel/215-pop3-mail-null-deref:
  POP3: Rework unbounded pending command fix

(cherry picked from commit 7fea32c6edc5d4d14646366f87c9208c8c9cf555)
2024-10-04 10:46:40 -07:00
Christian Kreibich
c988bd2e4d Update docs submodule [nomail] [skip ci] 2024-10-04 10:28:35 -07:00
Christian Kreibich
5579494d48 Merge branch 'topic/bbannier/bump-spicy' into release/7.0
* topic/bbannier/bump-spicy:
  Bump auxil/spicy to latest release
2024-10-04 09:56:47 -07:00
Benjamin Bannier
121170a5de Merge remote-tracking branch 'origin/topic/bbannier/ci-opensuse-leap-ps-dep'
(cherry picked from commit a27066e3fc)
2024-10-04 09:53:29 -07:00
Benjamin Bannier
0e4f2a2bab Bump auxil/spicy to latest release 2024-10-02 12:39:26 +02:00
Tim Wojtulewicz
270429bfea Update CHANGES, VERSION, and NEWS for 7.0.2 release 2024-09-23 12:15:32 -07:00
Tim Wojtulewicz
815001f2aa Update docs submodule [nomail] [skip ci] 2024-09-23 11:58:24 -07:00
Tim Wojtulewicz
88c37d0be8 Merge remote-tracking branch 'origin/topic/awelzel/3936-pop3-and-redis'
* origin/topic/awelzel/3936-pop3-and-redis:
  pop3: Remove unused headers
  pop3: Prevent unbounded state growth
  btest/pop3: Add somewhat more elaborate testing

(cherry picked from commit 702fb031a4)
2024-09-23 11:12:54 -07:00
Johanna Amann
40db8463df Merge remote-tracking branch 'origin/topic/timw/remove-negative-timestamp-test'
* origin/topic/timw/remove-negative-timestamp-test:
  Remove core.negative-time btest

(cherry picked from commit 899f7297d7)
2024-09-23 10:27:19 -07:00
Arne Welzel
fb51e3a88f Merge remote-tracking branch 'origin/topic/awelzel/prom-callbacks-2'
* origin/topic/awelzel/prom-callbacks-2:
  Update broker submodule
  telemetry: Move callbacks to Zeek
  auxil/prometheus-cpp: Pin to 1.2.4

(cherry picked from commit f24bc1ee88)
2024-09-23 10:00:58 -07:00
Arne Welzel
5a0e2bf771 Merge remote-tracking branch 'origin/topic/awelzel/3919-ldap-logs-missing'
* origin/topic/awelzel/3919-ldap-logs-missing:
  btest/ldap: Add regression test for #3919

(cherry picked from commit a339cfa4c0)
2024-09-23 09:24:52 -07:00
Arne Welzel
95e7c5a63e Merge remote-tracking branch 'origin/topic/awelzel/3853-ldap-spnego-ntlmssp'
* origin/topic/awelzel/3853-ldap-spnego-ntlmssp:
  ldap: Recognize SASL+SPNEGO+NTLMSSP

(cherry picked from commit 152bbbd680)
2024-09-23 09:23:19 -07:00
Tim Wojtulewicz
024304bddf Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy-7.0' into release/7.0
* origin/topic/bbannier/bump-spicy-7.0:
  Bump auxil/spicy to latest release
2024-09-23 09:07:50 -07:00
Benjamin Bannier
2cc6c735d3 Bump auxil/spicy to latest release 2024-09-19 13:40:34 +02:00
Tim Wojtulewicz
3bf8bfaac6 Update CHANGES, VERSION, and NEWS for 7.0.1 release 2024-09-03 13:04:36 -07:00
Tim Wojtulewicz
89b9f9a456 Update zeek-aux submodule to pick up zeek-archiver permissions fix 2024-09-03 13:03:51 -07:00
Tim Wojtulewicz
8de8fb8fae Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy-7.0' into release/7.0
* origin/topic/bbannier/bump-spicy-7.0:
  Bump auxil/spicy to latest release
  Update docs submodule [nomail] [skip ci]
2024-09-03 09:02:37 -07:00
Benjamin Bannier
595cdf8b55 Bump auxil/spicy to latest release 2024-09-02 12:51:07 +02:00
Tim Wojtulewicz
74b832fa39 Update docs submodule [nomail] [skip ci] 2024-08-30 14:39:46 -07:00
Robin Sommer
15be682f63 Merge remote-tracking branch 'origin/topic/robin/gh-3881-spicy-ports'
* origin/topic/robin/gh-3881-spicy-ports:
  Spicy: Register well-known ports through an event handler.
  Revert "Remove deprecated port/ports fields for spicy analyzers"

(cherry picked from commit a2079bcda6)
2024-08-30 13:26:16 -07:00
Tim Wojtulewicz
8f9c5f79c6 Updating CHANGES and VERSION. 2024-08-30 12:34:09 -07:00
Arne Welzel
382b4b5473 Merge remote-tracking branch 'origin/topic/awelzel/ldap-fix-uint8-shift'
* origin/topic/awelzel/ldap-fix-uint8-shift:
  ldap: Promote uint8 to uint64 before shifting

(cherry picked from commit 97fa7cdc0a)
2024-08-30 11:47:39 -07:00
Arne Welzel
6f65b88f1b Merge remote-tracking branch 'origin/topic/awelzel/ldap-extended-request-response-starttls'
* origin/topic/awelzel/ldap-extended-request-response-starttls:
  ldap: Add heuristic for wrap tokens
  ldap: Ignore ec/rrc for sealed wrap tokens
  ldap: Add LDAP sample with SASL-SRP mechanism
  ldap: Reintroduce encryption after SASL heuristic
  ldap: Fix assuming GSS-SPNEGO for all bindResponses
  ldap: Implement extended request/response and StartTLS support

(cherry picked from commit 6a6a5c3d0d)
2024-08-30 11:47:08 -07:00
Arne Welzel
cfe47f40a4 Merge remote-tracking branch 'origin/topic/awelzel/spicy-ldap-krb-wrap-tokens'
* origin/topic/awelzel/spicy-ldap-krb-wrap-tokens:
  ldap: Remove MessageWrapper with magic 0x30 searching
  ldap: Harden parsing a bit
  ldap: Handle integrity-only KRB wrap tokens

(cherry picked from commit 2ea3a651bd)
2024-08-30 11:46:47 -07:00
Arne Welzel
0fd6672dde Merge branch 'fix-http-password-capture' of https://github.com/p-l-/zeek
* 'fix-http-password-capture' of https://github.com/p-l-/zeek:
  http: fix password capture when enabled

(cherry picked from commit c27e18631c)
2024-08-30 11:34:24 -07:00
Arne Welzel
e7ab18b343 Merge remote-tracking branch 'origin/topic/awelzel/no-child-analyzer-on-finished-connections'
* origin/topic/awelzel/no-child-analyzer-on-finished-connections:
  Analyzer: Do not add child analyzers when finished

(cherry picked from commit 45b33bf5c1)
2024-08-30 11:33:35 -07:00
Arne Welzel
8a92b150a5 Merge remote-tracking branch 'origin/topic/awelzel/tcp-reassembler-undelivered-data-match-bool-bool-bool-confusion'
* origin/topic/awelzel/tcp-reassembler-undelivered-data-match-bool-bool-bool-confusion:
  TCP_Reassembler: Fix IsOrig() position in Match() call

(cherry picked from commit 4a4cbf2576)
2024-08-30 11:32:34 -07:00
Tim Wojtulewicz
dd4597865a Merge remote-tracking branch 'origin/topic/timw/telemetry-threading'
* origin/topic/timw/telemetry-threading:
  Process metric callbacks from the main-loop thread

(cherry picked from commit 3c3853dc7d)
2024-08-30 11:29:17 -07:00
Arne Welzel
056bbe04ea Merge remote-tracking branch 'origin/topic/timw/use-more-memory-for-freebsd-builds'
* origin/topic/timw/use-more-memory-for-freebsd-builds:
  CI: Use 16GB of memory for FreeBSD builds

(cherry picked from commit 9d9cc51e9d)
2024-08-30 11:28:15 -07:00
Christian Kreibich
f6b8864584 Update docs submodule [nomail] [skip ci] 2024-08-12 17:54:52 -07:00
Tim Wojtulewicz
d1f6e91988 Updating CHANGES and VERSION. 2024-08-01 10:42:25 -07:00
Tim Wojtulewicz
6bbaef3e09 Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy' into release/7.0
* origin/topic/bbannier/bump-spicy:
  Allowlist a name for typos check
  Bump Spicy to latest release
2024-07-31 09:37:03 -07:00
Benjamin Bannier
55d36fc2cd Allowlist a name for typos check 2024-07-31 15:06:47 +02:00
Benjamin Bannier
f8fbeca504 Bump Spicy to latest release 2024-07-31 14:57:53 +02:00
Tim Wojtulewicz
72ff343f17 Update docs submodule [nomail] [skip ci] 2024-07-29 11:40:28 -07:00
Tim Wojtulewicz
b76096a9ee Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy'
* origin/topic/bbannier/bump-spicy:
  Bump auxil/spicy to latest development snapshot

(cherry picked from commit 4c0c7581c8)
2024-07-26 10:18:47 -07:00
Tim Wojtulewicz
b9e4669632 Updating CHANGES and VERSION. 2024-07-25 11:06:51 -07:00
Tim Wojtulewicz
5974613cae Generate docs for 7.0.0-rc3 2024-07-25 10:52:29 -07:00
Christian Kreibich
3a44bda957 Bump zeek-testing-cluster to reflect deprecation of prometheus.zeek
(cherry picked from commit 146cf99ff6)
2024-07-24 17:07:14 -07:00
Christian Kreibich
51262d02c7 Merge branch 'topic/christian/ack-contribs' into release/7.0
* topic/christian/ack-contribs:
  Add contributors to 7.0.0 NEWS entry [skip ci]
2024-07-24 17:01:39 -07:00
Christian Kreibich
b46aeefbab Add contributors to 7.0.0 NEWS entry [skip ci] 2024-07-24 16:48:17 -07:00
Tim Wojtulewicz
a4b746e5e8 Merge remote-tracking branch 'origin/topic/timw/smb2-ioctl-errors'
* origin/topic/timw/smb2-ioctl-errors:
  Update 7.0 NEWS with blurb about multi-PDU parsing causing increased load [nomail] [skip ci]
  Fix handling of zero-length SMB2 error responses

(cherry picked from commit bd208f4c54)
2024-07-24 13:29:09 -07:00
Tim Wojtulewicz
746ae4d2cc Merge remote-tracking branch 'origin/topic/johanna/update-the-ct-list-and-the-ca-list-again'
* origin/topic/johanna/update-the-ct-list-and-the-ca-list-again:
  Update Mozilla CA list and CT list

(cherry picked from commit cb88f6316c)
2024-07-23 08:55:11 -07:00
Tim Wojtulewicz
a65a339aa8 Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy'
* origin/topic/bbannier/bump-spicy:
  Bump auxil/spicy to latest development snapshot

(cherry picked from commit da7c3d9138)
2024-07-23 08:52:47 -07:00
Arne Welzel
8014c4b8c3 telemetry: Deprecate prometheus.zeek policy script
With Cluster::Node$metrics_port being optional, there's not really
a need for the extra script. New rule: if a metrics_port is set, the
node will attempt to listen on it.

Users can still redef Telemetry::metrics_port *after*
base/frameworks/telemetry was loaded to change the port defined
in cluster-layout.zeek.

(cherry picked from commit bf9704f339)
2024-07-23 10:05:46 +02:00
Tim Wojtulewicz
d9dc121e9a Update broker submodule [nomail] 2024-07-22 15:00:22 -07:00
Tim Wojtulewicz
5a56ff92d2 Updating CHANGES and VERSION. 2024-07-18 14:54:47 -07:00
Tim Wojtulewicz
b13dfa3b16 Update docs submodule [nomail] 2024-07-18 14:31:49 -07:00
Christian Kreibich
d17a1f9822 Bump zeek-testing-cluster to pull in tee SIGPIPE fix
(cherry picked from commit b51a46f94d)
2024-07-17 15:39:45 -07:00
Tim Wojtulewicz
5cdddd92d5 Merge remote-tracking branch 'origin/topic/bbannier/bump-spicy'
* origin/topic/bbannier/bump-spicy:
  Bump auxil/spicy to latest development snapshot

(cherry picked from commit 9ba7c2ddaf)
2024-07-16 10:16:57 -07:00
Tim Wojtulewicz
b8d11f4688 CI: Set FETCH_CONTENT_FULLY_DISCONNECTED flag for configure 2024-07-12 16:13:11 -07:00
Tim Wojtulewicz
91b23a6e2e Update broker and cmake submodules [nomail] 2024-07-12 16:13:04 -07:00
Tim Wojtulewicz
a8c56c1f25 Fix a broken merge
I merged an old version of the branch by accident and then merged the right
one over top of it, but git ended up including both versions. This fixes
that mistake.

(cherry picked from commit f3bcf1a55d)
2024-07-12 10:04:16 -07:00
Tim Wojtulewicz
5f6df68463 Merge remote-tracking branch 'origin/topic/bbannier/lib-spicy-hooks'
* origin/topic/bbannier/lib-spicy-hooks:
  Do not emit hook files for builtin modules

(cherry picked from commit b935d2f59a)
2024-07-12 09:52:44 -07:00
Tim Wojtulewicz
ac95484382 Merge remote-tracking branch 'origin/topic/bbannier/lib-spicy-hooks'
* origin/topic/bbannier/lib-spicy-hooks:
  Do not emit hook files for builtin modules

(cherry picked from commit 7a38cee81f)
2024-07-12 09:48:40 -07:00
Tim Wojtulewicz
962b03a431 Merge remote-tracking branch 'origin/topic/timw/grealpath-make-dist-warning'
* origin/topic/timw/grealpath-make-dist-warning:
  Fix warning about grealpath when running 'make dist' on Linux

(cherry picked from commit e4716b6c91)
2024-07-12 09:47:38 -07:00
Tim Wojtulewicz
92a685df50 Fix a typo in the 7.0 NEWS 2024-07-11 14:20:47 -07:00
Tim Wojtulewicz
1bf439cd58 Updating CHANGES and VERSION. 2024-07-11 13:20:24 -07:00
3939 changed files with 178010 additions and 487401 deletions

@ -14,12 +14,9 @@ config: &CONFIG --build-type=release --disable-broker-tests --prefix=$CIRRUS_WOR
 no_spicy_config: &NO_SPICY_CONFIG --build-type=release --disable-broker-tests --disable-spicy --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
 static_config: &STATIC_CONFIG --build-type=release --disable-broker-tests --enable-static-broker --enable-static-binpac --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
 binary_config: &BINARY_CONFIG --prefix=$CIRRUS_WORKING_DIR/install --libdir=$CIRRUS_WORKING_DIR/install/lib --binary-package --enable-static-broker --enable-static-binpac --disable-broker-tests --build-type=Release --ccache --enable-werror
-spicy_ssl_config: &SPICY_SSL_CONFIG --build-type=release --disable-broker-tests --enable-spicy-ssl --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror
-asan_sanitizer_config: &ASAN_SANITIZER_CONFIG --build-type=debug --disable-broker-tests --sanitizers=address --enable-fuzzers --enable-coverage --ccache --enable-werror
-ubsan_sanitizer_config: &UBSAN_SANITIZER_CONFIG --build-type=debug --disable-broker-tests --sanitizers=undefined --enable-fuzzers --ccache --enable-werror
-tsan_sanitizer_config: &TSAN_SANITIZER_CONFIG --build-type=debug --disable-broker-tests --sanitizers=thread --enable-fuzzers --ccache --enable-werror
-macos_config: &MACOS_CONFIG --build-type=release --disable-broker-tests --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror --with-krb5=/opt/homebrew/opt/krb5
-clang_tidy_config: &CLANG_TIDY_CONFIG --build-type=debug --disable-broker-tests --prefix=$CIRRUS_WORKING_DIR/install --ccache --enable-werror --enable-clang-tidy
+asan_sanitizer_config: &ASAN_SANITIZER_CONFIG --build-type=debug --disable-broker-tests --sanitizers=address --enable-fuzzers --enable-coverage --disable-spicy --ccache
+ubsan_sanitizer_config: &UBSAN_SANITIZER_CONFIG --build-type=debug --disable-broker-tests --sanitizers=undefined --enable-fuzzers --disable-spicy --ccache --enable-werror
+tsan_sanitizer_config: &TSAN_SANITIZER_CONFIG --build-type=debug --disable-broker-tests --sanitizers=thread --enable-fuzzers --disable-spicy --ccache --enable-werror
 resources_template: &RESOURCES_TEMPLATE
 cpu: *CPUS
@@ -35,7 +32,6 @@ macos_environment: &MACOS_ENVIRONMENT
   ZEEK_CI_BTEST_JOBS: 12
   # No permission to write to default location of /zeek
   CIRRUS_WORKING_DIR: /tmp/zeek
-  ZEEK_CI_CONFIGURE_FLAGS: *MACOS_CONFIG

 freebsd_resources_template: &FREEBSD_RESOURCES_TEMPLATE
   cpu: 8
@@ -48,108 +44,46 @@ freebsd_environment: &FREEBSD_ENVIRONMENT
   ZEEK_CI_CPUS: 8
   ZEEK_CI_BTEST_JOBS: 8

-only_if_pr_master_release: &ONLY_IF_PR_MASTER_RELEASE
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
-      ( $CIRRUS_CRON != 'weekly' ) &&
-      ( $CIRRUS_PR != '' ||
-        $CIRRUS_BRANCH == 'master' ||
-        $CIRRUS_BRANCH =~ 'release/.*'
-      )
-    )
-
-only_if_pr_master_release_nightly: &ONLY_IF_PR_MASTER_RELEASE_NIGHTLY
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
-      ( $CIRRUS_CRON != 'weekly' ) &&
-      ( $CIRRUS_PR != '' ||
-        $CIRRUS_BRANCH == 'master' ||
-        $CIRRUS_BRANCH =~ 'release/.*' ||
-        ( $CIRRUS_CRON == 'nightly' && $CIRRUS_BRANCH == 'master' )
-      )
-    )
-
-only_if_pr_release_and_nightly: &ONLY_IF_PR_RELEASE_AND_NIGHTLY
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
-      ( $CIRRUS_CRON != 'weekly' ) &&
-      ( $CIRRUS_PR != '' ||
-        $CIRRUS_BRANCH =~ 'release/.*' ||
-        ( $CIRRUS_CRON == 'nightly' && $CIRRUS_BRANCH == 'master' )
-      )
-    )
-
-only_if_pr_nightly: &ONLY_IF_PR_NIGHTLY
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
-      ( $CIRRUS_CRON != 'weekly' ) &&
-      ( $CIRRUS_PR != '' ||
-        ( $CIRRUS_CRON == 'nightly' && $CIRRUS_BRANCH == 'master' )
-      )
-    )
-
-only_if_release_tag_nightly: &ONLY_IF_RELEASE_TAG_NIGHTLY
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' ) &&
-      ( $CIRRUS_CRON != 'weekly' ) &&
-      ( ( $CIRRUS_BRANCH =~ 'release/.*' && $CIRRUS_TAG =~ 'v[0-9]+\.[0-9]+\.[0-9]+(-rc[0-9]+)?$' ) ||
-        ( $CIRRUS_CRON == 'nightly' && $CIRRUS_BRANCH == 'master' )
-      )
-    )
-
-only_if_nightly: &ONLY_IF_NIGHTLY
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' ) &&
-      ( $CIRRUS_CRON == 'nightly' && $CIRRUS_BRANCH == 'master' )
-    )
-
-only_if_weekly: &ONLY_IF_WEEKLY
-  only_if: >
-    ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
-      ( $CIRRUS_CRON == 'weekly' && $CIRRUS_BRANCH == 'master' )
-    )
-
-skip_if_pr_skip_all: &SKIP_IF_PR_SKIP_ALL
-  skip: >
-    ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-
-skip_if_pr_not_full_ci: &SKIP_IF_PR_NOT_FULL_CI
-  skip: >
-    ( ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ ".*CI: Full.*") ||
-      ( $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-    )
-
-skip_if_pr_not_full_or_benchmark: &SKIP_IF_PR_NOT_FULL_OR_BENCHMARK
-  skip: >
-    ( ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ ".*CI: (Full|Benchmark).*" ) ||
-      ( $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-    )
-
-skip_if_pr_not_full_or_cluster_test: &SKIP_IF_PR_NOT_FULL_OR_CLUSTER_TEST
-  skip: >
-    ( ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ ".*CI: (Full|Cluster Test).*" ) ||
-      ( $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-    )
-
-skip_if_pr_not_full_or_zam: &SKIP_IF_PR_NOT_FULL_OR_ZAM
-  skip: >
-    ( ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ ".*CI: (Full|ZAM).*" ) ||
-      ( $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-    )
-
-skip_if_pr_not_full_or_zeekctl: &SKIP_IF_PR_NOT_FULL_OR_ZEEKCTL
-  skip: >
-    ( ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ ".*CI: (Full|Zeekctl).*" ) ||
-      ( $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-    )
-
-skip_if_pr_not_full_or_windows: &SKIP_IF_PR_NOT_FULL_OR_WINDOWS
-  skip: >
-    ( ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ ".*CI: (Full|Windows).*" ) ||
-      ( $CIRRUS_PR_LABELS =~ ".*CI: Skip All.*" )
-    )
+builds_only_if_template: &BUILDS_ONLY_IF_TEMPLATE
+  # Rules for skipping builds:
+  # - Do not run builds for anything that's cron triggered
+  # - Don't do darwin builds on zeek-security repo because they use up a ton of compute credits.
+  # - Always build PRs, but not if they come from dependabot
+  # - Always build master and release/* builds from the main repo
+  only_if: >
+    ( $CIRRUS_CRON == '' ) &&
+    ( ( $CIRRUS_PR != '' && $CIRRUS_BRANCH !=~ 'dependabot/.*' ) ||
+      ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
+        ( $CIRRUS_BRANCH == 'master' ||
+          $CIRRUS_BRANCH =~ 'release/.*'
+        )
+      )
+    )
+
+skip_task_on_pr: &SKIP_TASK_ON_PR
+  # Skip this task on PRs if it does not have the fullci label,
+  # it continues to run for direct pushes to master/release.
+  skip: >
+    ($CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ '.*fullci.*')
+
+zam_skip_task_on_pr: &ZAM_SKIP_TASK_ON_PR
+  # Skip this task on PRs if it does not have the fullci or zamci label,
+  # it continues to run for direct pushes to master/release.
+  skip: >
+    ($CIRRUS_PR != '' && $CIRRUS_PR_LABELS !=~ '.*fullci.*' && $CIRRUS_PR_LABELS !=~ '.*zamci.*')
+
+benchmark_only_if_template: &BENCHMARK_ONLY_IF_TEMPLATE
+  # only_if condition for cron-triggered benchmarking tests.
+  # These currently do not run for release/.*
+  only_if: >
+    ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
+    ( $CIRRUS_CRON == 'benchmark-nightly' ||
+      $CIRRUS_PR_LABELS =~ '.*fullci.*' ||
+      $CIRRUS_PR_LABELS =~ '.*benchmark.*' )

 ci_template: &CI_TEMPLATE
+  << : *BUILDS_ONLY_IF_TEMPLATE
   # Default timeout is 60 minutes, Cirrus hard limit is 120 minutes for free
   # tasks, so may as well ask for full time.
   timeout_in: 120m
@@ -193,7 +127,6 @@ ci_template: &CI_TEMPLATE
   env:
     CIRRUS_WORKING_DIR: /zeek
-    CIRRUS_LOG_TIMESTAMP: true
     ZEEK_CI_CPUS: *CPUS
     ZEEK_CI_BTEST_JOBS: *BTEST_JOBS
     ZEEK_CI_BTEST_RETRIES: *BTEST_RETRIES
@@ -244,10 +177,6 @@ fedora42_task:
     dockerfile: ci/fedora-42/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_SKIP_ALL
-  env:
-    ZEEK_CI_CONFIGURE_FLAGS: *BINARY_CONFIG

 fedora41_task:
   container:
@@ -255,71 +184,14 @@ fedora41_task:
     dockerfile: ci/fedora-41/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  << : *SKIP_TASK_ON_PR

 centosstream9_task:
   container:
-    # Stream 9 EOL: 31 May 2027
+    # Stream 9 EOL: Around Dec 2027
     dockerfile: ci/centos-stream-9/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-
-centosstream10_task:
-  container:
-    # Stream 10 EOL: 01 January 2030
-    dockerfile: ci/centos-stream-10/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-
-debian13_task:
-  container:
-    # Debian 13 (trixie) EOL: TBD
-    dockerfile: ci/debian-13/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-
-arm_debian13_task:
-  arm_container:
-    # Debian 13 (trixie) EOL: TBD
-    dockerfile: ci/debian-13/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_SKIP_ALL
-
-debian13_static_task:
-  container:
-    # Just use a recent/common distro to run a static compile test.
-    # Debian 13 (trixie) EOL: TBD
-    dockerfile: ci/debian-13/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-  env:
-    ZEEK_CI_CONFIGURE_FLAGS: *STATIC_CONFIG
-
-debian13_binary_task:
-  container:
-    # Just use a recent/common distro to run binary mode compile test.
-    # As of 2024-03, the used configure flags are equivalent to the flags
-    # that we use to create binary packages.
-    # Just use a recent/common distro to run a static compile test.
-    # Debian 13 (trixie) EOL: TBD
-    dockerfile: ci/debian-13/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-  env:
-    ZEEK_CI_CONFIGURE_FLAGS: *BINARY_CONFIG

 debian12_task:
   container:
@@ -327,8 +199,56 @@ debian12_task:
     dockerfile: ci/debian-12/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
+
+arm_debian12_task:
+  arm_container:
+    # Debian 12 (bookworm) EOL: TBD
+    dockerfile: ci/debian-12/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  env:
+    ZEEK_CI_CONFIGURE_FLAGS: *NO_SPICY_CONFIG
+
+debian12_static_task:
+  container:
+    # Just use a recent/common distro to run a static compile test.
+    # Debian 12 (bookworm) EOL: TBD
+    dockerfile: ci/debian-12/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  << : *SKIP_TASK_ON_PR
+  env:
+    ZEEK_CI_CONFIGURE_FLAGS: *STATIC_CONFIG
+
+debian12_binary_task:
+  container:
+    # Just use a recent/common distro to run binary mode compile test.
+    # As of 2024-03, the used configure flags are equivalent to the flags
+    # that we use to create binary packages.
+    # Just use a recent/common distro to run a static compile test.
+    # Debian 12 (bookworm) EOL: TBD
+    dockerfile: ci/debian-12/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  << : *SKIP_TASK_ON_PR
+  env:
+    ZEEK_CI_CONFIGURE_FLAGS: *BINARY_CONFIG
+
+debian11_task:
+  container:
+    # Debian 11 EOL: June 2026
+    dockerfile: ci/debian-11/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  << : *SKIP_TASK_ON_PR
+
+opensuse_leap_15_5_task:
+  container:
+    # Opensuse Leap 15.5 EOL: ~Dec 2024
+    dockerfile: ci/opensuse-leap-15.5/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  << : *SKIP_TASK_ON_PR

 opensuse_leap_15_6_task:
   container:
@@ -336,8 +256,6 @@ opensuse_leap_15_6_task:
     dockerfile: ci/opensuse-leap-15.6/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI

 opensuse_tumbleweed_task:
   container:
@@ -346,140 +264,56 @@ opensuse_tumbleweed_task:
   << : *RESOURCES_TEMPLATE
   prepare_script: ./ci/opensuse-tumbleweed/prepare.sh
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  # << : *SKIP_TASK_ON_PR

-weekly_current_gcc_task:
-  container:
-    # Opensuse Tumbleweed has no EOL
-    dockerfile: ci/opensuse-tumbleweed/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  prepare_script: ./ci/opensuse-tumbleweed/prepare-weekly.sh
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_WEEKLY
-  env:
-    ZEEK_CI_COMPILER: gcc
-
-weekly_current_clang_task:
-  container:
-    # Opensuse Tumbleweed has no EOL
-    dockerfile: ci/opensuse-tumbleweed/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  prepare_script: ./ci/opensuse-tumbleweed/prepare-weekly.sh
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_WEEKLY
-  env:
-    ZEEK_CI_COMPILER: clang
-
-ubuntu25_04_task:
-  container:
-    # Ubuntu 25.04 EOL: 2026-01-31
-    dockerfile: ci/ubuntu-25.04/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-
-ubuntu24_04_task:
+ubuntu24_task:
   container:
     # Ubuntu 24.04 EOL: Jun 2029
     dockerfile: ci/ubuntu-24.04/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_SKIP_ALL
-  env:
-    ZEEK_CI_CREATE_ARTIFACT: 1
-  upload_binary_artifacts:
-    path: build.tgz
-  benchmark_script: ./ci/benchmark.sh
-
-# Same as above, but running the ZAM tests instead of the regular tests.
-ubuntu24_04_zam_task:
-  container:
-    # Ubuntu 24.04 EOL: Jun 2029
-    dockerfile: ci/ubuntu-24.04/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_OR_ZAM
-  env:
-    ZEEK_CI_SKIP_UNIT_TESTS: 1
-    ZEEK_CI_SKIP_EXTERNAL_BTESTS: 1
-    ZEEK_CI_BTEST_EXTRA_ARGS: -a zam
-    # Use a lower number of jobs due to OOM issues with ZAM tasks
-    ZEEK_CI_BTEST_JOBS: 3
-
-# Same as above, but using Clang and libc++
-ubuntu24_04_clang_libcpp_task:
-  container:
-    # Ubuntu 24.04 EOL: Jun 2029
-    dockerfile: ci/ubuntu-24.04/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-  env:
-    CC: clang-19
-    CXX: clang++-19
-    CXXFLAGS: -stdlib=libc++
-
-ubuntu24_04_clang_tidy_task:
-  container:
-    # Ubuntu 24.04 EOL: Jun 2029
-    dockerfile: ci/ubuntu-24.04/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
-  env:
-    CC: clang-19
-    CXX: clang++-19
-    ZEEK_CI_CONFIGURE_FLAGS: *CLANG_TIDY_CONFIG
-
-# Also enable Spicy SSL for this
-ubuntu24_04_spicy_task:
-  container:
-    # Ubuntu 24.04 EOL: Jun 2029
-    dockerfile: ci/ubuntu-24.04/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_OR_BENCHMARK
-  env:
-    ZEEK_CI_CREATE_ARTIFACT: 1
-    ZEEK_CI_CONFIGURE_FLAGS: *SPICY_SSL_CONFIG
-  spicy_install_analyzers_script: ./ci/spicy-install-analyzers.sh
-  upload_binary_artifacts:
-    path: build.tgz
-  benchmark_script: ./ci/benchmark.sh
-
-ubuntu24_04_spicy_head_task:
-  container:
-    # Ubuntu 24.04 EOL: Jun 2029
-    dockerfile: ci/ubuntu-24.04/Dockerfile
-  << : *RESOURCES_TEMPLATE
-  << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_OR_BENCHMARK
-  env:
-    ZEEK_CI_CREATE_ARTIFACT: 1
-    ZEEK_CI_CONFIGURE_FLAGS: *SPICY_SSL_CONFIG
-    # Pull auxil/spicy to the latest head version. May or may not build.
-    ZEEK_CI_PREBUILD_COMMAND: 'cd auxil/spicy && git fetch && git reset --hard origin/main && git submodule update --init --recursive'
-  spicy_install_analyzers_script: ./ci/spicy-install-analyzers.sh
-  upload_binary_artifacts:
-    path: build.tgz
-  benchmark_script: ./ci/benchmark.sh
-
-ubuntu22_04_task:
+ubuntu22_task:
   container:
     # Ubuntu 22.04 EOL: June 2027
     dockerfile: ci/ubuntu-22.04/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  env:
+    ZEEK_CI_CREATE_ARTIFACT: 1
+  upload_binary_artifacts:
+    path: build.tgz
+  benchmark_script: ./ci/benchmark.sh
+  # Run on PRs, merges to master and release/.* and benchmark-nightly cron.
+  only_if: >
+    ( $CIRRUS_PR != '' && $CIRRUS_BRANCH !=~ 'dependabot/.*' ) ||
+    ( ( $CIRRUS_REPO_NAME == 'zeek' || $CIRRUS_REPO_NAME == 'zeek-security' ) &&
+      $CIRRUS_BRANCH == 'master' ||
+      $CIRRUS_BRANCH =~ 'release/.*' ||
+      $CIRRUS_CRON == 'benchmark-nightly' )
+
+ubuntu22_spicy_task:
+  container:
+    # Ubuntu 22.04 EOL: April 2027
+    dockerfile: ci/ubuntu-22.04/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  env:
+    ZEEK_CI_CREATE_ARTIFACT: 1
+  test_script: true # Don't run tests, these are redundant.
+  spicy_install_analyzers_script: ./ci/spicy-install-analyzers.sh
+  upload_binary_artifacts:
+    path: build.tgz
+  benchmark_script: ./ci/benchmark.sh
+  << : *BENCHMARK_ONLY_IF_TEMPLATE
+
+ubuntu20_task:
+  container:
+    # Ubuntu 20.04 EOL: April 2025
+    dockerfile: ci/ubuntu-20.04/Dockerfile
+  << : *RESOURCES_TEMPLATE
+  << : *CI_TEMPLATE
+  << : *SKIP_TASK_ON_PR

 alpine_task:
   container:
@@ -489,8 +323,6 @@ alpine_task:
     dockerfile: ci/alpine/Dockerfile
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI

 # Cirrus only supports the following macos runner currently, selecting
 # anything else automatically upgrades to this one.
@@ -503,8 +335,6 @@ macos_sequoia_task:
     image: ghcr.io/cirruslabs/macos-runner:sequoia
   prepare_script: ./ci/macos/prepare.sh
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_SKIP_ALL
   << : *MACOS_ENVIRONMENT

 # FreeBSD EOL timelines: https://www.freebsd.org/security/#sup
@@ -516,8 +346,6 @@ freebsd14_task:
   prepare_script: ./ci/freebsd/prepare.sh
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_SKIP_ALL
   << : *FREEBSD_ENVIRONMENT

 freebsd13_task:
@@ -528,8 +356,7 @@ freebsd13_task:
   prepare_script: ./ci/freebsd/prepare.sh
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  << : *SKIP_TASK_ON_PR
   << : *FREEBSD_ENVIRONMENT

 asan_sanitizer_task:
@@ -539,8 +366,6 @@ asan_sanitizer_task:
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_SKIP_ALL
   test_fuzzers_script: ./ci/test-fuzzers.sh
   coverage_script: ./ci/upload-coverage.sh
   env:
@@ -557,16 +382,13 @@ asan_sanitizer_zam_task:
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_OR_ZAM
   env:
     ZEEK_CI_CONFIGURE_FLAGS: *ASAN_SANITIZER_CONFIG
     ASAN_OPTIONS: detect_leaks=1:detect_odr_violation=0
     ZEEK_CI_SKIP_UNIT_TESTS: 1
     ZEEK_CI_SKIP_EXTERNAL_BTESTS: 1
     ZEEK_CI_BTEST_EXTRA_ARGS: -a zam
-    # Use a lower number of jobs due to OOM issues with ZAM tasks
-    ZEEK_CI_BTEST_JOBS: 3
+  << : *ZAM_SKIP_TASK_ON_PR

 ubsan_sanitizer_task:
   container:
@@ -575,12 +397,11 @@ ubsan_sanitizer_task:
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  << : *SKIP_TASK_ON_PR
   test_fuzzers_script: ./ci/test-fuzzers.sh
   env:
-    CC: clang-19
-    CXX: clang++-19
+    CC: clang-18
+    CXX: clang++-18
     CXXFLAGS: -DZEEK_DICT_DEBUG
     ZEEK_CI_CONFIGURE_FLAGS: *UBSAN_SANITIZER_CONFIG
     ZEEK_TAILORED_UB_CHECKS: 1
@@ -592,19 +413,16 @@ ubsan_sanitizer_zam_task:
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_OR_ZAM
   env:
-    CC: clang-19
-    CXX: clang++-19
+    CC: clang-18
+    CXX: clang++-18
     ZEEK_CI_CONFIGURE_FLAGS: *UBSAN_SANITIZER_CONFIG
     ZEEK_TAILORED_UB_CHECKS: 1
     UBSAN_OPTIONS: print_stacktrace=1
     ZEEK_CI_SKIP_UNIT_TESTS: 1
     ZEEK_CI_SKIP_EXTERNAL_BTESTS: 1
     ZEEK_CI_BTEST_EXTRA_ARGS: -a zam
-    # Use a lower number of jobs due to OOM issues with ZAM tasks
-    ZEEK_CI_BTEST_JOBS: 3
+  << : *ZAM_SKIP_TASK_ON_PR

 tsan_sanitizer_task:
   container:
@@ -613,11 +431,10 @@ tsan_sanitizer_task:
   << : *RESOURCES_TEMPLATE
   << : *CI_TEMPLATE
-  << : *ONLY_IF_PR_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  << : *SKIP_TASK_ON_PR
   env:
-    CC: clang-19
-    CXX: clang++-19
+    CC: clang-18
+    CXX: clang++-18
     ZEEK_CI_CONFIGURE_FLAGS: *TSAN_SANITIZER_CONFIG
     ZEEK_CI_DISABLE_SCRIPT_PROFILING: 1
     # If this is defined directly in the environment, configure fails to find
@@ -638,12 +455,11 @@ windows_task:
   prepare_script: ci/windows/prepare.cmd
   build_script: ci/windows/build.cmd
   test_script: ci/windows/test.cmd
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_OR_WINDOWS
   env:
     ZEEK_CI_CPUS: 8
     # Give verbose error output on a test failure.
     CTEST_OUTPUT_ON_FAILURE: 1
+  << : *BUILDS_ONLY_IF_TEMPLATE
# Container images # Container images
@@ -724,18 +540,22 @@ arm64_container_image_docker_builder:
   env:
     CIRRUS_ARCH: arm64
   << : *DOCKER_BUILD_TEMPLATE
-  << : *ONLY_IF_RELEASE_TAG_NIGHTLY
+  << : *SKIP_TASK_ON_PR

 amd64_container_image_docker_builder:
   env:
     CIRRUS_ARCH: amd64
   << : *DOCKER_BUILD_TEMPLATE
-  << : *ONLY_IF_PR_MASTER_RELEASE_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_OR_CLUSTER_TEST
+  << : *SKIP_TASK_ON_PR

 container_image_manifest_docker_builder:
   cpu: 1
-  << : *ONLY_IF_RELEASE_TAG_NIGHTLY
+  # Push master builds to zeek/zeek-dev, or tagged release branches to zeek/zeek
+  only_if: >
+    ( $CIRRUS_CRON == '' ) &&
+    ( $CIRRUS_REPO_FULL_NAME == 'zeek/zeek' &&
+      ( $CIRRUS_BRANCH == 'master' ||
+        $CIRRUS_TAG =~ 'v[0-9]+\.[0-9]+\.[0-9]+$' ) )
   env:
     DOCKER_USERNAME: ENCRYPTED[!505b3dee552a395730a7e79e6aab280ffbe1b84ec62ae7616774dfefe104e34f896d2e20ce3ad701f338987c13c33533!]
     DOCKER_PASSWORD: ENCRYPTED[!6c4b2f6f0e5379ef1091719cc5d2d74c90cfd2665ac786942033d6d924597ffb95dbbc1df45a30cc9ddeec76c07ac620!]
@@ -754,12 +574,8 @@ container_image_manifest_docker_builder:
       # for tags, or zeek/zeek-dev:latest for pushes to master.
       set -x
       if [ -n "${CIRRUS_TAG}" ]; then
-        echo "IMAGE_NAME=zeek" >> $CIRRUS_ENV
         echo "IMAGE_TAG=$(cat VERSION)" >> $CIRRUS_ENV
-        if [ "${CIRRUS_TAG}" != "v$(cat VERSION)" ]; then
-          echo "CIRRUS_TAG '${CIRRUS_TAG}' and VERSION '$(cat VERSION)' inconsistent!" >&2
-          exit 1
-        fi
+        echo "IMAGE_NAME=zeek" >> $CIRRUS_ENV
       elif [ "${CIRRUS_BRANCH}" = "master" ]; then
         echo "IMAGE_NAME=zeek-dev" >> $CIRRUS_ENV
         echo "IMAGE_TAG=latest" >> $CIRRUS_ENV
@@ -786,7 +602,31 @@ container_image_manifest_docker_builder:
         '+refs/heads/release/*:refs/remotes/origin/release/*' \
         '+refs/heads/master:refs/remotes/origin/master'
-      ./ci/container-images-addl-tags.sh "${CIRRUS_TAG}" | tee -a $CIRRUS_ENV
+      # Find current versions for lts and feature depending on branches and
+      # tags in the repo. sed for escaping the dot in the version for using
+      # it in the regex below to match against CIRRUS_TAG.
+      lts_ver=$(./ci/find-current-version.sh lts)
+      lts_pat="^v$(echo $lts_ver | sed 's,\.,\\.,g')\.[0-9]+\$"
+      feature_ver=$(./ci/find-current-version.sh feature)
+      feature_pat="^v$(echo $feature_ver | sed 's,\.,\\.,g')\.[0-9]+\$"
+
+      # Construct additional tags for the image. At most this will
+      # be "lts x.0 feature" for an lts branch x.0 that is currently
+      # also the latest feature branch.
+      ADDL_MANIFEST_TAGS=
+      if echo "${CIRRUS_TAG}" | grep -E "${lts_pat}"; then
+          ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} lts ${lts_ver}"
+      fi
+      if echo "${CIRRUS_TAG}" | grep -E "${feature_pat}"; then
+          ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} latest"
+          if [ "${feature_ver}" != "${lts_ver}" ]; then
+              ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} ${feature_ver}"
+          fi
+      fi
+
+      # Let downstream know about it.
+      echo "ADDITIONAL_MANIFEST_TAGS=${ADDL_MANIFEST_TAGS}" >> $CIRRUS_ENV

   # These should've been populated by the previous jobs
   zeek_image_arm64_cache:
@@ -814,7 +654,8 @@ container_image_manifest_docker_builder:
 # images from the public ECR repository to stay within free-tier bounds.
 public_ecr_cleanup_docker_builder:
   cpu: 1
-  << : *ONLY_IF_NIGHTLY
+  only_if: >
+    $CIRRUS_CRON == '' && $CIRRUS_REPO_FULL_NAME == 'zeek/zeek' && $CIRRUS_BRANCH == 'master'
   env:
     AWS_ACCESS_KEY_ID: ENCRYPTED[!eff52f6442e1bc78bce5b15a23546344df41bf519f6201924cb70c7af12db23f442c0e5f2b3687c2d856ceb11fcb8c49!]
     AWS_SECRET_ACCESS_KEY: ENCRYPTED[!748bc302dd196140a5fa8e89c9efd148882dc846d4e723787d2de152eb136fa98e8dea7e6d2d6779d94f72dd3c088228!]
@@ -854,23 +695,27 @@ cluster_testing_docker_builder:
       path: "testing/external/zeek-testing-cluster/.tmp/**"
   depends_on:
     - amd64_container_image
-  << : *ONLY_IF_PR_RELEASE_AND_NIGHTLY
-  << : *SKIP_IF_PR_NOT_FULL_OR_CLUSTER_TEST
+  << : *SKIP_TASK_ON_PR

 # Test zeekctl upon master and release pushes and also when
-# a PR has a "CI: Zeekctl" or "CI: Full" label.
+# a PR has a zeekctlci or fullci label.
 #
 # Also triggers on CIRRUS_CRON == 'zeekctl-nightly' if that is configured
 # through the Cirrus Web UI.
-zeekctl_debian12_task:
+zeekctl_debian11_task:
   cpu: *CPUS
   memory: *MEMORY
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_OR_ZEEKCTL
+  only_if: >
+    ( $CIRRUS_CRON == 'zeekctl-nightly' ) ||
+    ( $CIRRUS_PR != '' && $CIRRUS_PR_LABELS =~ '.*(zeekctlci|fullci).*' ) ||
+    ( $CIRRUS_REPO_NAME == 'zeek' && (
+        $CIRRUS_BRANCH == 'master' ||
+        $CIRRUS_BRANCH =~ 'release/.*' )
+    )
   container:
-    # Debian 13 (trixie) EOL: TBD
-    dockerfile: ci/debian-13/Dockerfile
+    # Debian 11 EOL: June 2026
+    dockerfile: ci/debian-11/Dockerfile
   << : *RESOURCES_TEMPLATE
   sync_submodules_script: git submodule update --recursive --init
   always:
@@ -884,46 +729,31 @@ zeekctl_debian12_task:
   build_script:
     - cd auxil/zeekctl/testing && ./Scripts/build-zeek
   test_script:
-    - cd auxil/zeekctl/testing && ../../btest/btest -A -d -j ${ZEEK_CI_BTEST_JOBS}
+    - cd auxil/zeekctl/testing && ../../btest/btest -A -d -j ${BTEST_JOBS}
   on_failure:
     upload_zeekctl_testing_artifacts:
       path: "auxil/zeekctl/testing/.tmp/**"

-include_plugins_debian12_task:
+# Test building Zeek with builtin plugins available in
+# testing/builtin-plugins/Files/
+include_plugins_debian11_task:
   cpu: *CPUS
   memory: *MEMORY
   container:
-    # Debian 13 (trixie) EOL: TBD
-    dockerfile: ci/debian-13/Dockerfile
+    # Debian 11 EOL: June 2026
+    dockerfile: ci/debian-11/Dockerfile
   << : *RESOURCES_TEMPLATE
   sync_submodules_script: git submodule update --recursive --init
-  fetch_external_plugins_script:
-    - cd /zeek/testing/builtin-plugins/external && git clone https://github.com/zeek/zeek-perf-support.git
-    - cd zeek-perf-support && echo "Cloned $(git rev-parse HEAD) for $(basename $(pwd))"
-    - cd /zeek/testing/builtin-plugins/external && git clone https://github.com/zeek/zeek-more-hashes.git
-    - cd zeek-more-hashes && echo "Cloned $(git rev-parse HEAD) for $(basename $(pwd))"
-    - cd /zeek/testing/builtin-plugins/external && git clone https://github.com/zeek/zeek-cluster-backend-nats.git
-    - cd zeek-cluster-backend-nats && echo "Cloned $(git rev-parse HEAD) for $(basename $(pwd))"
-    - cd /zeek/testing/builtin-plugins/external && git clone https://github.com/SeisoLLC/zeek-kafka.git
-    - cd zeek-kafka && echo "Cloned $(git rev-parse HEAD) for $(basename $(pwd))"
   always:
     ccache_cache:
       folder: /tmp/ccache
       fingerprint_script: echo builtin-plugins-ccache-$ZEEK_CCACHE_EPOCH-$CIRRUS_TASK_NAME-$CIRRUS_OS
       reupload_on_changes: true
-  build_script: ZEEK_CI_CONFIGURE_FLAGS="${ZEEK_CI_CONFIGURE_FLAGS} --include-plugins='/zeek/testing/builtin-plugins/Files/protocol-plugin;/zeek/testing/builtin-plugins/Files/py-lib-plugin;/zeek/testing/builtin-plugins/Files/zeek-version-plugin;/zeek/testing/builtin-plugins/external/zeek-perf-support;/zeek/testing/builtin-plugins/external/zeek-more-hashes;/zeek/testing/builtin-plugins/external/zeek-cluster-backend-nats;/zeek/testing/builtin-plugins/external/zeek-kafka'" ./ci/build.sh
+  build_script: ZEEK_CI_CONFIGURE_FLAGS="${ZEEK_CI_CONFIGURE_FLAGS} --include-plugins='/zeek/testing/builtin-plugins/Files/protocol-plugin;/zeek/testing/builtin-plugins/Files/py-lib-plugin;/zeek/testing/builtin-plugins/Files/zeek-version-plugin'" ./ci/build.sh
   test_script:
     - cd testing/builtin-plugins && ../../auxil/btest/btest -d -b -j ${ZEEK_CI_BTEST_JOBS}
-  test_external_plugins_script: |
-    . /zeek/build/zeek-path-dev.sh
-    set -ex
-    # For now, just check if the external plugins are available.
-    zeek -N Zeek::PerfSupport
-    zeek -N Zeek::MoreHashes
-    zeek -N Zeek::Cluster_Backend_NATS
-    zeek -N Seiso::Kafka
   on_failure:
     upload_include_plugins_testing_artifacts:
       path: "testing/builtin-plugins/.tmp/**"
-  << : *ONLY_IF_PR_MASTER_RELEASE
-  << : *SKIP_IF_PR_NOT_FULL_CI
+  << : *BUILDS_ONLY_IF_TEMPLATE
+  << : *SKIP_TASK_ON_PR
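The additional-manifest-tags hunk above computes image tags by turning a version string into a regex and matching it against the pushed git tag. The sketch below exercises that logic standalone; the version values are hypothetical stand-ins for the output of `./ci/find-current-version.sh` (not available here), and `grep` is run with `-q` since only the match result matters.

```shell
#!/bin/sh
# Hypothetical example values; in CI these come from
# ./ci/find-current-version.sh and the Cirrus environment.
lts_ver="7.0"
feature_ver="7.2"
CIRRUS_TAG="v7.0.10"

# Escape the dots so "7.0" matches only "7.0", not e.g. "7x0",
# then anchor the pattern to full tags like v7.0.10.
lts_pat="^v$(echo $lts_ver | sed 's,\.,\\.,g')\.[0-9]+$"
feature_pat="^v$(echo $feature_ver | sed 's,\.,\\.,g')\.[0-9]+$"

ADDL_MANIFEST_TAGS=
if echo "${CIRRUS_TAG}" | grep -qE "${lts_pat}"; then
    ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} lts ${lts_ver}"
fi
if echo "${CIRRUS_TAG}" | grep -qE "${feature_pat}"; then
    ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} latest"
    if [ "${feature_ver}" != "${lts_ver}" ]; then
        ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} ${feature_ver}"
    fi
fi

echo "ADDITIONAL_MANIFEST_TAGS=${ADDL_MANIFEST_TAGS}"
```

With these values the tag `v7.0.10` matches only the lts pattern, so the extra manifest tags are `lts 7.0`; a tag matching the feature pattern would additionally get `latest` (and the feature version, when it differs from lts).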


@@ -1,4 +1,4 @@
-# See the file "COPYING" in the main distribution directory for copyright.
+# Copyright (c) 2020-2023 by the Zeek Project. See LICENSE for details.
 ---
 Language: Cpp
@@ -71,7 +71,6 @@ IncludeBlocks: Regroup
 #   4: any header that starts with "zeek/"
 #   5: everything else, which should catch any of the auto-generated code from the
 #      build directory as well
-#   6: third party doctest header
 #
 # Sections 0-1 and 2-3 get grouped together in their respective blocks
 IncludeCategories:
@@ -87,8 +86,6 @@ IncludeCategories:
   - Regex: '^<[[:print:]]+>'
     Priority: 2
     SortPriority: 3
-  - Regex: '^"zeek/3rdparty/doctest.h'
-    Priority: 6
   - Regex: '^"zeek/'
     Priority: 4
   - Regex: '.*'


@@ -1,76 +1,5 @@
-Checks: [-*,
-  bugprone-*,
-  performance-*,
-  modernize-*,
-  readability-isolate-declaration,
-  readability-container-contains,
-  # Enable a very limited number of the cppcoreguidelines checkers.
-  # See the notes for some of the rest of them below.
-  cppcoreguidelines-macro-usage,
-  cppcoreguidelines-misleading-capture-default-by-value,
-  cppcoreguidelines-virtual-class-destructor,
-  # Skipping these temporarily because they are very noisy
-  -bugprone-forward-declaration-namespace,
-  -bugprone-narrowing-conversions,
-  -bugprone-unchecked-optional-access,
-  -performance-unnecessary-value-param,
-  -modernize-use-equals-default,
-  -modernize-use-integer-sign-comparison,
-  # The following cause either lots of pointless or advisory warnings
-  -bugprone-easily-swappable-parameters,
-  -bugprone-nondeterministic-pointer-iteration-order,
-  # bifcl generates a lot of code with double underscores in their name.
-  # ZAM uses a few identifiers that start with underscores or have
-  # double-underscores in the name.
-  -bugprone-reserved-identifier,
-  # bifcl generates almost every switch statement without a default case
-  # and so this one generates a lot of warnings.
-  -bugprone-switch-missing-default-case,
-  # These report warnings that are rather difficult to fix or are things
-  # we simply don't want to fix.
-  -bugprone-undefined-memory-manipulation,
-  -bugprone-pointer-arithmetic-on-polymorphic-object,
-  -bugprone-empty-catch,
-  -bugprone-exception-escape,
-  -bugprone-suspicious-include,
-  -modernize-avoid-c-arrays,
-  -modernize-concat-nested-namespaces,
-  -modernize-raw-string-literal,
-  -modernize-use-auto,
-  -modernize-use-nodiscard,
-  -modernize-use-trailing-return-type,
-  -modernize-use-designated-initializers,
-  # This one returns a bunch of findings in DFA and the sqlite library.
-  # We're unlikely to fix either of them.
-  -performance-no-int-to-ptr,
-  # These cppcoreguidelines checkers are things we should investigate
-  # and possibly fix, but there are so many findings that we're holding
-  # off doing it for now.
-  #cppcoreguidelines-init-variables,
-  #cppcoreguidelines-prefer-member-initializer,
-  #cppcoreguidelines-pro-type-member-init,
-  #cppcoreguidelines-pro-type-cstyle-cast,
-  #cppcoreguidelines-pro-type-static-cast-downcast,
-  #cppcoreguidelines-special-member-functions,
-  # These are features in newer version of C++ that we don't have
-  # access to yet.
-  -modernize-use-std-format,
-  -modernize-use-std-print,
-]
-HeaderFilterRegex: '.h'
-ExcludeHeaderFilterRegex: '.*(auxil|3rdparty)/.*'
-SystemHeaders: false
-CheckOptions:
-  - key: modernize-use-default-member-init.UseAssignment
-    value: 'true'
-WarningsAsErrors: '*'
+Checks: '-*,
+  bugprone-*,
+  -bugprone-easily-swappable-parameters,
+  clang-analyzer-*,
+  performance-*'


@@ -72,23 +72,10 @@
         "SOURCES": "*",
         "MODULES": "*"
       }
-    },
-    "zeek_add_plugin": {
-      "kwargs": {
-        "INCLUDE_DIRS": "*",
-        "DEPENDENCIES": "*",
-        "SOURCES": "*",
-        "BIFS": "*",
-        "PAC": "*"
-      }
     }
   }
 },
 "format": {
-  "always_wrap": [
-    "spicy_add_analyzer",
-    "zeek_add_plugin"
-  ],
   "line_width": 100,
   "tab_size": 4,
   "separate_ctrl_name_with_space": true,


@@ -33,6 +33,3 @@ f5a76c1aedc7f8886bc6abef0dfaa8065684b1f6
 # clang-format: Format JSON with clang-format
 e6256446ddef5c5d5240eefff974556f2e12ac46
-# analyzer/protocol: Reformat with spicy-format
-d70bcd07b9b26036b16092fe950eca40e2f5a032


@@ -10,10 +10,10 @@ permissions:
 jobs:
   scan:
     if: github.repository == 'zeek/zeek'
-    runs-on: ubuntu-24.04
+    runs-on: ubuntu-20.04
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
        with:
          submodules: "recursive"
@@ -21,71 +21,58 @@ jobs:
        run: |
          sudo apt-get update
          sudo apt-get -y install \
-            bison \
-            bsdmainutils \
-            cmake \
-            curl \
-            flex \
-            g++ \
-            gcc \
             git \
-            jq \
-            libfl-dev \
-            libfl2 \
-            libkrb5-dev \
-            libmaxminddb-dev \
+            cmake \
+            make \
+            gcc \
+            g++ \
+            flex \
+            bison \
             libpcap-dev \
             libssl-dev \
-            libzmq3-dev \
-            make \
             python3 \
             python3-dev \
             python3-pip \
-            sqlite3 \
             swig \
-            zlib1g-dev
+            zlib1g-dev \
+            libmaxminddb-dev \
+            libkrb5-dev \
+            bsdmainutils \
+            sqlite3 \
+            curl \
+            wget
       - name: Configure
-        run: ./configure --build-type=debug --disable-broker-tests
+        run: ./configure --build-type=debug --disable-broker-tests --disable-spicy
       - name: Fetch Coverity Tools
         env:
           COVERITY_TOKEN: ${{ secrets.COVERITY_TOKEN }}
         run: |
-          curl \
-            -o coverity_tool.tgz \
-            -d token=${COVERITY_TOKEN} \
-            -d project=Bro \
-            https://scan.coverity.com/download/cxx/linux64
+          wget \
+            -nv https://scan.coverity.com/download/cxx/linux64 \
+            --post-data "token=${COVERITY_TOKEN}&project=Bro" \
+            -O coverity_tool.tgz
           tar xzf coverity_tool.tgz
           rm coverity_tool.tgz
           mv cov-analysis* coverity-tools
       - name: Build
         run: |
-          export PATH=$(pwd)/coverity-tools/bin:$PATH
-          ( cd build && cov-build --dir cov-int make -j "$(nproc)" )
+          export PATH=`pwd`/coverity-tools/bin:$PATH
+          ( cd build && cov-build --dir cov-int make -j $(nproc) )
           cat build/cov-int/build-log.txt
       - name: Submit
         env:
           COVERITY_TOKEN: ${{ secrets.COVERITY_TOKEN }}
         run: |
-          ( cd build && tar czf myproject.tgz cov-int )
-          curl -X POST \
-            -d version=$(cat VERSION) \
-            -d description=$(git rev-parse HEAD) \
-            -d email=zeek-commits-internal@zeek.org \
-            -d token=${COVERITY_TOKEN} \
-            -d file_name=myproject.tgz \
-            -o response \
-            https://scan.coverity.com/projects/641/builds/init
-          upload_url=$(jq -r '.url' response)
-          build_id=$(jq -r '.build_id' response)
-          curl -X PUT \
-            --header 'Content-Type: application/json' \
-            --upload-file build/myproject.tgz \
-            ${upload_url}
-          curl -X PUT \
-            -d token=${COVERITY_TOKEN} \
-            https://scan.coverity.com/projects/641/builds/${build_id}/enqueue
+          cd build
+          tar czf myproject.tgz cov-int
+          curl \
+            --form token=${COVERITY_TOKEN} \
+            --form email=zeek-commits-internal@zeek.org \
+            --form file=@myproject.tgz \
+            --form "version=`cat ../VERSION`" \
+            --form "description=`git rev-parse HEAD`" \
+            https://scan.coverity.com/builds?project=Bro


@@ -16,20 +16,20 @@ jobs:
   generate:
     permissions:
       contents: write  # for Git to git push
-    if: "github.repository == 'zeek/zeek' && contains(github.event.pull_request.labels.*.name, 'CI: Skip All') == false"
-    runs-on: ubuntu-24.04
+    if: github.repository == 'zeek/zeek'
+    runs-on: ubuntu-22.04
     steps:
       # We only perform a push if the action was triggered via a schedule
       # event, so we only need to authenticate in that case. Use
       # unauthenticated access otherwise so this action can e.g., also run from
       # clones.
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
         if: github.event_name == 'schedule'
         with:
           submodules: "recursive"
           token: ${{ secrets.ZEEK_BOT_TOKEN }}
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
         if: github.event_name != 'schedule'
         with:
           submodules: "recursive"
@@ -51,29 +51,27 @@ jobs:
             bsdmainutils \
             ccache \
             cmake \
-            cppzmq-dev \
             flex \
             g++ \
             gcc \
             git \
-            libhiredis-dev \
             libfl-dev \
             libfl2 \
             libkrb5-dev \
-            libnode-dev \
             libpcap-dev \
             libssl-dev \
             make \
             python3 \
             python3-dev \
-            python3-pip \
+            python3-pip\
             sqlite3 \
             swig \
             zlib1g-dev
-          python3 -m venv ci-docs-venv
-          source ci-docs-venv/bin/activate
-          pip3 install -r doc/requirements.txt
-          pip3 install pre-commit
+          # Many distros adhere to PEP 394's recommendation for `python` =
+          # `python2` so this is a simple workaround until we drop Python 2
+          # support and explicitly use `python3` for all invocations.
+          sudo ln -sf /usr/bin/python3 /usr/local/bin/python
+          sudo pip3 install -r doc/requirements.txt
       - name: ccache
         uses: hendrikmuhs/ccache-action@v1.2
@@ -81,48 +79,25 @@ jobs:
           key: 'docs-gen-${{ github.job }}'
           max-size: '2000M'
-      # Github runners have node installed on them by default in /usr/local. This
-      # causes problems with configure finding the version from the apt package,
-      # plus gcc using it by default if we pass the right cmake variables to
-      # configure. The easiest solution is to move the directory away prior to
-      # running our build. It's moved back after just in case some workflow action
-      # expects it to exist.
-      - name: Move default node install to backup
-        run: sudo mv /usr/local/include/node /usr/local/include/node.bak
       - name: Configure
         run: ./configure --disable-broker-tests --disable-cpp-tests --ccache
       - name: Build
         run: cd build && make -j $(nproc)
-      - name: Move default node install to original location
-        run: sudo mv /usr/local/include/node.bak /usr/local/include/node
       - name: Check Spicy docs
         run: cd doc && make check-spicy-docs
-      # Cache pre-commit environment for reuse.
-      - uses: actions/cache@v4
-        with:
-          path: ~/.cache/pre-commit
-          key: doc-pre-commit-3|${{ env.pythonLocation }}|${{ hashFiles('doc/.pre-commit-config.yaml') }}
       - name: Generate Docs
         run: |
-          source ci-docs-venv/bin/activate
           git config --global user.name zeek-bot
           git config --global user.email info@zeek.org
           echo "*** Generating Zeekygen Docs ***"
           ./ci/update-zeekygen-docs.sh || exit 1
-          cd doc
-          echo "*** Running pre-commit ***"
-          pre-commit run -a --show-diff-on-failure --color=always
           echo "*** Generating Sphinx Docs ***"
+          cd doc
           make > make.out 2>&1
           make_status=$?
           echo "*** Sphinx Build Output ***"
@@ -156,7 +131,7 @@ jobs:
         # Only send notifications for scheduled runs. Runs from pull requests
         # show failures in the GitHub UI.
         if: failure() && github.event_name == 'schedule'
-        uses: dawidd6/action-send-mail@v3.12.0
+        uses: dawidd6/action-send-mail@v3.7.0
         with:
           server_address: ${{secrets.SMTP_HOST}}
           server_port: ${{secrets.SMTP_PORT}}


@@ -7,8 +7,8 @@ on:
 jobs:
   pre-commit:
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-python@v5
-      - uses: pre-commit/action@v3.0.1
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+      - uses: pre-commit/action@v3.0.0

.gitmodules

@@ -1,6 +1,9 @@
 [submodule "auxil/zeek-aux"]
     path = auxil/zeek-aux
     url = https://github.com/zeek/zeek-aux
+[submodule "auxil/binpac"]
+    path = auxil/binpac
+    url = https://github.com/zeek/binpac
 [submodule "auxil/zeekctl"]
     path = auxil/zeekctl
     url = https://github.com/zeek/zeekctl
@@ -10,12 +13,18 @@
 [submodule "cmake"]
     path = cmake
     url = https://github.com/zeek/cmake
+[submodule "src/3rdparty"]
+    path = src/3rdparty
+    url = https://github.com/zeek/zeek-3rdparty
 [submodule "auxil/broker"]
     path = auxil/broker
     url = https://github.com/zeek/broker
 [submodule "auxil/netcontrol-connectors"]
     path = auxil/netcontrol-connectors
     url = https://github.com/zeek/zeek-netcontrol
+[submodule "auxil/bifcl"]
+    path = auxil/bifcl
+    url = https://github.com/zeek/bifcl
 [submodule "doc"]
     path = doc
     url = https://github.com/zeek/zeek-docs
@@ -37,6 +46,9 @@
 [submodule "auxil/zeek-client"]
     path = auxil/zeek-client
     url = https://github.com/zeek/zeek-client
+[submodule "auxil/gen-zam"]
+    path = auxil/gen-zam
+    url = https://github.com/zeek/gen-zam
 [submodule "auxil/c-ares"]
     path = auxil/c-ares
     url = https://github.com/c-ares/c-ares
@@ -46,6 +58,12 @@
 [submodule "auxil/spicy"]
     path = auxil/spicy
     url = https://github.com/zeek/spicy
+[submodule "auxil/filesystem"]
+    path = auxil/filesystem
+    url = https://github.com/gulrak/filesystem.git
+[submodule "auxil/zeek-af_packet-plugin"]
+    path = auxil/zeek-af_packet-plugin
+    url = https://github.com/zeek/zeek-af_packet-plugin.git
 [submodule "auxil/libunistd"]
     path = auxil/libunistd
     url = https://github.com/zeek/libunistd
@@ -58,12 +76,3 @@
 [submodule "auxil/prometheus-cpp"]
     path = auxil/prometheus-cpp
     url = https://github.com/zeek/prometheus-cpp
-[submodule "src/cluster/backend/zeromq/auxil/cppzmq"]
-    path = src/cluster/backend/zeromq/auxil/cppzmq
-    url = https://github.com/zeromq/cppzmq
-[submodule "src/cluster/websocket/auxil/IXWebSocket"]
-    path = src/cluster/websocket/auxil/IXWebSocket
-    url = https://github.com/machinezone/IXWebSocket
-[submodule "auxil/expected-lite"]
-    path = auxil/expected-lite
-    url = https://github.com/martinmoene/expected-lite.git


@@ -2,58 +2,34 @@
 # See https://pre-commit.com/hooks.html for more hooks
 #
 repos:
-  - repo: local
-    hooks:
-      - id: license
-        name: Check for license headers
-        entry: ./ci/license-header.py
-        language: python
-        files: '\.(h|c|cpp|cc|spicy|evt)$'
-        types: [file]
-        exclude: '^(testing/btest/(Baseline|plugins|spicy|scripts)/.*|testing/builtin-plugins/.*|src/3rdparty/.*)$'
-      - id: btest-command-commented
-        name: Check that all BTest command lines are commented out
-        entry: '^\s*@TEST-'
-        language: pygrep
-        files: '^testing/btest/.*$'
   - repo: https://github.com/pre-commit/mirrors-clang-format
-    rev: v20.1.8
+    rev: 'v17.0.3'
     hooks:
       - id: clang-format
         types_or:
           - "c"
           - "c++"
           - "json"
-        exclude: '^src/3rdparty/.*'
   - repo: https://github.com/maxwinterstein/shfmt-py
-    rev: v3.12.0.1
+    rev: v3.7.0.1
     hooks:
       - id: shfmt
         args: ["-w", "-i", "4", "-ci"]
-  - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.12.8
+  - repo: https://github.com/google/yapf
+    rev: v0.40.2
     hooks:
-      - id: ruff-check
-        args: ["--fix"]
-      - id: ruff-format
+      - id: yapf
   - repo: https://github.com/cheshirekow/cmake-format-precommit
     rev: v0.6.13
     hooks:
       - id: cmake-format
-        exclude: '^auxil/.*$'
   - repo: https://github.com/crate-ci/typos
-    rev: v1.35.3
+    rev: v1.16.21
     hooks:
       - id: typos
-        exclude: '^(.typos.toml|src/SmithWaterman.cc|testing/.*|auxil/.*|scripts/base/frameworks/files/magic/.*|CHANGES|scripts/base/protocols/ssl/mozilla-ca-list.zeek|src/3rdparty/.*)$'
+        exclude: '^(.typos.toml|src/SmithWaterman.cc|testing/.*|auxil/.*|scripts/base/frameworks/files/magic/.*|CHANGES|scripts/base/protocols/ssl/mozilla-ca-list.zeek)$'
-  - repo: https://github.com/bbannier/spicy-format
-    rev: v0.26.0
-    hooks:
-      - id: spicy-format
-        exclude: '^testing/.*'

.style.yapf (new file)

@@ -0,0 +1,2 @@
+[style]
+column_limit=100


@@ -6,9 +6,9 @@ extend-ignore-re = [
   # ALLO is a valid FTP command
   "\"ALLO\".*200",
   "des-ede3-cbc-Env-OID",
+  "Remove in v6.1.*SupressWeird",
+  "max_repititions:.*Remove in v6.1",
   "mis-aliasing of",
-  "mis-indexing",
-  "compilability",
   # On purpose
   "\"THE NETBIOS NAM\"",
   # NFS stuff.
@@ -20,25 +20,17 @@ extend-ignore-re = [
   "ot->Tag\\(\\) == TYPE_.*",
   "auto.* ot =",
   "ot = OP_.*",
-  "ot\\[",
-  "ot.size",
-  "ot.empty",
-  "ot_i",
-  "ot.c_str",
-  "have_ot",
   "if \\( ot == OP_.*",
   "ot->Yield\\(\\)->InternalType\\(\\)",
   "switch \\( ot \\)",
   "\\(ZAMOpType ot\\)",
-  "exat", # Redis expire at
-  "EXAT",
   # News stuff
   "SupressWeirds.*deprecated",
   "\"BaR\"",
   "\"xFoObar\"",
   "\"FoO\"",
-  "Smoot",
+  "Steve Smoot",
 ]

 extend-ignore-identifiers-re = [
@@ -50,17 +42,6 @@ extend-ignore-identifiers-re = [
   "ND_ROUTER_.*",
   "ND_NEIGHBOR_.*",
   ".*_ND_option.*",
-  "bck", # Used with same length as `fwd`
-  "pn", # Use for `PoolNode` variables
-  "ffrom_[ip|port|mac]", # Used in netcontrol.
-  "complte_flag", # Existing use in exported record in base.
-  "VidP(n|N)", # In SMB.
-  "iin", # In DNP3.
-  "SCN[dioux]", # sccanf fixed-width identifiers
-  "(ScValidatePnPService|ScSendPnPMessage)", # In DCE-RPC.
-  "snet", # Used as shorthand for subnet in base scripts.
-  "typ",
-  "(e|i)it", # Used as name for some iterators.
 ]

 [default.extend-identifiers]
@@ -73,7 +54,7 @@ ND_REDIRECT = "ND_REDIRECT"
 NED_ACK = "NED_ACK"
 NFS3ERR_ACCES = "NFS3ERR_ACCES"
 NO_SEH = "NO_SEH"
-OP_SWITCHS_Vii = "OP_SWITCHS_Vii"
+OP_SWITCHS_VVV = "OP_SWITCHS_VVV"
 O_WRONLY = "O_WRONLY"
 RPC_NT_CALL_FAILED_DNE = "RPC_NT_CALL_FAILED_DNE"
 RpcAddPrintProvidor = "RpcAddPrintProvidor"
@@ -86,7 +67,6 @@ ot2 = "ot2"
 uses_seh = "uses_seh"
 ect0 = "ect0"
 ect1 = "ect1"
-tpe = "tpe"

 [default.extend-words]
 caf = "caf"

CHANGES (6747 changed lines)

File diff suppressed because it is too large.

@@ -59,8 +59,6 @@ option(ENABLE_DEBUG "Build Zeek with additional debugging support." ${ENABLE_DEB
 option(ENABLE_JEMALLOC "Link against jemalloc." OFF)
 option(ENABLE_PERFTOOLS "Build with support for Google perftools." OFF)
 option(ENABLE_ZEEK_UNIT_TESTS "Build the C++ unit tests." ON)
-option(ENABLE_IWYU "Enable include-what-you-use for the main Zeek target." OFF)
-option(ENABLE_CLANG_TIDY "Enable clang-tidy for the main Zeek target." OFF)
 option(INSTALL_AUX_TOOLS "Install additional tools from auxil." ${ZEEK_INSTALL_TOOLS_DEFAULT})
 option(INSTALL_BTEST "Install btest alongside Zeek." ${ZEEK_INSTALL_TOOLS_DEFAULT})
 option(INSTALL_BTEST_PCAPS "Install pcap files for testing." ${ZEEK_INSTALL_TOOLS_DEFAULT})
@@ -68,8 +66,7 @@ option(INSTALL_ZEEKCTL "Install zeekctl." ${ZEEK_INSTALL_TOOLS_DEFAULT})
 option(INSTALL_ZEEK_CLIENT "Install the zeek-client." ${ZEEK_INSTALL_TOOLS_DEFAULT})
 option(INSTALL_ZKG "Install zkg." ${ZEEK_INSTALL_TOOLS_DEFAULT})
 option(PREALLOCATE_PORT_ARRAY "Pre-allocate all ports for zeek::Val." ON)
-option(ZEEK_STANDALONE "Build Zeek as stand-alone binary." ON)
-option(ZEEK_ENABLE_FUZZERS "Build Zeek fuzzing targets." OFF)
+option(ZEEK_STANDALONE "Build Zeek as stand-alone binary?" ON)

 # Non-boolean options.
 if (NOT WIN32)
@@ -90,6 +87,8 @@ set(ZEEK_ETC_INSTALL_DIR "${CMAKE_INSTALL_PREFIX}/etc"
 set(CMAKE_EXPORT_COMPILE_COMMANDS ON CACHE INTERNAL
     "Whether to write a JSON compile commands database")
+set(ZEEK_CXX_STD cxx_std_17 CACHE STRING "The C++ standard to use.")

 set(ZEEK_SANITIZERS "" CACHE STRING "Sanitizers to use when building.")
 set(CPACK_SOURCE_IGNORE_FILES "" CACHE STRING "Files to be ignored by CPack")
@@ -192,53 +191,21 @@ if (MSVC)
         # TODO: This is disabled for now because there a bunch of known
         # compiler warnings on Windows that we don't have good fixes for.
         #set(WERROR_FLAG "/WX")
-        #set(WNOERROR_FLAG "/WX:NO")
+        #set(WERROR_FLAG "/WX")
     endif ()
-    # Always build binpac in static mode if building on Windows
-    set(BUILD_STATIC_BINPAC true)
 else ()
     include(GNUInstallDirs)
     if (BUILD_WITH_WERROR)
         set(WERROR_FLAG "-Werror")
-        set(WNOERROR_FLAG "-Wno-error")
-        # With versions >=13.0 GCC gained `-Warray-bounds` which reports false
-        # positives, see e.g., https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111273.
-        if (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 13.0)
-            list(APPEND WERROR_FLAG "-Wno-error=array-bounds")
-        endif ()
-        # With versions >=11.0 GCC is returning false positives for -Wrestrict. See
-        # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100366. It's more prevalent
-        # building with -std=c++20.
-        if (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 11.0)
-            list(APPEND WERROR_FLAG "-Wno-error=restrict")
-        endif ()
     endif ()
 endif ()

 include(cmake/CommonCMakeConfig.cmake)
-include(cmake/FindClangTidy.cmake)
 include(cmake/CheckCompilerArch.cmake)
-include(cmake/RequireCXXStd.cmake)

 string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)

-if (ENABLE_IWYU)
-    find_program(ZEEK_IWYU_PATH NAMES include-what-you-use iwyu)
-    if (NOT ZEEK_IWYU_PATH)
-        message(FATAL_ERROR "Could not find the program include-what-you-use")
-    endif ()
-endif ()
-
-if (ENABLE_CLANG_TIDY)
-    find_program(ZEEK_CLANG_TIDY_PATH NAMES clang-tidy)
-    if (NOT ZEEK_CLANG_TIDY_PATH)
-        message(FATAL_ERROR "Could not find the program clang-tidy")
-    endif ()
-endif ()
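For context, the `ENABLE_IWYU` and `ENABLE_CLANG_TIDY` options removed in this hunk are plain CMake cache booleans; on the side that defines them they could be switched on at configure time, for instance through an initial-cache file. A sketch (the file name `linters.cmake` is illustrative, only the option names come from the diff above):

```
# linters.cmake -- hypothetical initial-cache file, used as: cmake -C linters.cmake <srcdir>
# The two option names below are the ones declared in the hunk above.
set(ENABLE_CLANG_TIDY ON CACHE BOOL "Enable clang-tidy for the main Zeek target.")
set(ENABLE_IWYU ON CACHE BOOL "Enable include-what-you-use for the main Zeek target.")
```

With either option on, the configure step fails fast via `message(FATAL_ERROR ...)` when the corresponding linter binary cannot be found, as shown in the `find_program` blocks above.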
 # ##############################################################################
 # Main targets and utilities.
@@ -250,7 +217,7 @@ set(ZEEK_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}")
 # zeek-plugin-create-package.sh. Needed by ZeekPluginConfig.cmake.in.
 set(ZEEK_PLUGIN_SCRIPTS_PATH "${PROJECT_SOURCE_DIR}/cmake")

-# Our C++ base target for propagating compiler and linker flags. Note: for
+# Our C++17 base target for propagating compiler and linker flags. Note: for
 # now, we only use it for passing library dependencies around.
 add_library(zeek_internal INTERFACE)
 add_library(Zeek::Internal ALIAS zeek_internal)
@@ -338,16 +305,6 @@ function (zeek_target_link_libraries lib_target)
     endforeach ()
 endfunction ()

-function (zeek_target_add_linters lib_target)
-    if (ZEEK_IWYU_PATH)
-        set_target_properties(${lib_target} PROPERTIES CXX_INCLUDE_WHAT_YOU_USE ${ZEEK_IWYU_PATH})
-    endif ()
-    if (ZEEK_CLANG_TIDY_PATH)
-        set_target_properties(${lib_target} PROPERTIES CXX_CLANG_TIDY ${ZEEK_CLANG_TIDY_PATH})
-    endif ()
-endfunction ()
-
 function (zeek_include_directories)
     foreach (name zeek_exe zeek_lib zeek_fuzzer_shared)
         if (TARGET ${name})
@@ -369,7 +326,7 @@ endfunction ()
 find_package(Threads REQUIRED)

 # Interface library for propagating extra flags and include paths to dynamically
-# loaded plugins. Also propagates include paths and c++ standard mode on the install
+# loaded plugins. Also propagates include paths and C++17 mode on the install
 # interface.
 add_library(zeek_dynamic_plugin_base INTERFACE)
 target_include_directories(
@@ -396,17 +353,21 @@ endfunction ()
 add_zeek_dynamic_plugin_build_interface_include_directories(
     ${PROJECT_SOURCE_DIR}/src/include
-    ${PROJECT_SOURCE_DIR}/tools/binpac/lib
+    ${PROJECT_SOURCE_DIR}/auxil/binpac/lib
     ${PROJECT_SOURCE_DIR}/auxil/broker/libbroker
     ${PROJECT_SOURCE_DIR}/auxil/paraglob/include
+    ${PROJECT_SOURCE_DIR}/auxil/rapidjson/include
     ${PROJECT_SOURCE_DIR}/auxil/prometheus-cpp/core/include
-    ${PROJECT_SOURCE_DIR}/auxil/expected-lite/include
     ${CMAKE_BINARY_DIR}/src
     ${CMAKE_BINARY_DIR}/src/include
-    ${CMAKE_BINARY_DIR}/tools/binpac/lib
+    ${CMAKE_BINARY_DIR}/auxil/binpac/lib
     ${CMAKE_BINARY_DIR}/auxil/broker/libbroker
     ${CMAKE_BINARY_DIR}/auxil/prometheus-cpp/core/include)

+# threading/formatters/JSON.h includes rapidjson headers and may be used
+# by external plugins, extend the include path.
+target_include_directories(zeek_dynamic_plugin_base SYSTEM
+    INTERFACE $<INSTALL_INTERFACE:include/zeek/3rdparty/rapidjson/include>)

 target_include_directories(
     zeek_dynamic_plugin_base SYSTEM
     INTERFACE $<INSTALL_INTERFACE:include/zeek/3rdparty/prometheus-cpp/include>)
@@ -432,6 +393,7 @@ function (zeek_add_subdir_library name)
     target_compile_definitions(${target_name} PRIVATE ZEEK_CONFIG_SKIP_VERSION_H)
     add_dependencies(${target_name} zeek_autogen_files)
     target_link_libraries(${target_name} PRIVATE $<BUILD_INTERFACE:zeek_internal>)
+    add_clang_tidy_files(${FN_ARGS_SOURCES})
     target_compile_options(${target_name} PRIVATE ${WERROR_FLAG})

     # Take care of compiling BIFs.
@@ -455,9 +417,6 @@ function (zeek_add_subdir_library name)
     # Feed into the main Zeek target(s).
     zeek_target_link_libraries(${target_name})
-
-    # Add IWYU and clang-tidy to the target if enabled.
-    zeek_target_add_linters(${target_name})
 endfunction ()

 # ##############################################################################
@@ -666,7 +625,6 @@ if (ENABLE_DEBUG)
     set(VERSION_C_IDENT "${VERSION_C_IDENT}_debug")
     target_compile_definitions(zeek_internal INTERFACE DEBUG)
     target_compile_definitions(zeek_dynamic_plugin_base INTERFACE DEBUG)
-    set(SPICYZ_FLAGS "-d" CACHE STRING "Additional flags to pass to spicyz for builtin analyzers")
 endif ()

 if (NOT BINARY_PACKAGING_MODE)
@@ -835,7 +793,7 @@ if (NOT SED_EXE)
     endif ()
 endif ()

-set(ZEEK_PYTHON_MIN 3.9.0)
+set(ZEEK_PYTHON_MIN 3.5.0)
 set(Python_FIND_UNVERSIONED_NAMES FIRST)
 find_package(Python ${ZEEK_PYTHON_MIN} REQUIRED COMPONENTS Interpreter)
 find_package(FLEX REQUIRED)
@ -883,35 +841,46 @@ endif ()
set(PY_MOD_INSTALL_DIR ${py_mod_install_dir} CACHE STRING "Installation path for Python modules" set(PY_MOD_INSTALL_DIR ${py_mod_install_dir} CACHE STRING "Installation path for Python modules"
FORCE) FORCE)
# BinPAC uses the same 'ENABLE_STATIC_ONLY' variable to define whether if (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/auxil/binpac/CMakeLists.txt)
# to build statically. Save a local copy so it can be set based on the
# configure flag before we add the subdirectory.
set(ENABLE_STATIC_ONLY_SAVED ${ENABLE_STATIC_ONLY})
if (BUILD_STATIC_BINPAC) set(ENABLE_STATIC_ONLY_SAVED ${ENABLE_STATIC_ONLY})
set(ENABLE_STATIC_ONLY true) if (MSVC)
set(BUILD_STATIC_BINPAC true)
endif ()
if (BUILD_STATIC_BINPAC)
set(ENABLE_STATIC_ONLY true)
endif ()
add_subdirectory(auxil/binpac)
set(ENABLE_STATIC_ONLY ${ENABLE_STATIC_ONLY_SAVED})
# FIXME: avoid hard-coding a path for multi-config generator support. See the
# TODO in ZeekPluginConfig.cmake.in.
set(BINPAC_EXE_PATH "${CMAKE_BINARY_DIR}/auxil/binpac/src/binpac${CMAKE_EXECUTABLE_SUFFIX}")
endif () endif ()
add_subdirectory(tools/binpac)
set(ENABLE_STATIC_ONLY ${ENABLE_STATIC_ONLY_SAVED})
# FIXME: avoid hard-coding a path for multi-config generator support. See the
# TODO in ZeekPluginConfig.cmake.in.
set(BINPAC_EXE_PATH "${CMAKE_BINARY_DIR}/tools/binpac/src/binpac${CMAKE_EXECUTABLE_SUFFIX}")
set(_binpac_exe_path "included")
# Need to call find_package so it sets up the include paths used by plugin builds.
find_package(BinPAC REQUIRED) find_package(BinPAC REQUIRED)
# Add an alias (used by our plugin setup).
add_executable(Zeek::BinPAC ALIAS binpac) add_executable(Zeek::BinPAC ALIAS binpac)
-add_subdirectory(tools/bifcl)
-add_executable(Zeek::BifCl ALIAS bifcl)
-# FIXME: avoid hard-coding a path for multi-config generator support. See the
-# TODO in ZeekPluginConfig.cmake.in.
-set(BIFCL_EXE_PATH "${CMAKE_BINARY_DIR}/tools/bifcl/bifcl${CMAKE_EXECUTABLE_SUFFIX}")
-set(_bifcl_exe_path "included")
+if (NOT BIFCL_EXE_PATH)
+    add_subdirectory(auxil/bifcl)
+    add_executable(Zeek::BifCl ALIAS bifcl)
+    # FIXME: avoid hard-coding a path for multi-config generator support. See the
+    # TODO in ZeekPluginConfig.cmake.in.
+    set(BIFCL_EXE_PATH "${CMAKE_BINARY_DIR}/auxil/bifcl/bifcl${CMAKE_EXECUTABLE_SUFFIX}")
+    set(_bifcl_exe_path "included")
+else ()
+    add_executable(Zeek::BifCl IMPORTED)
+    set_property(TARGET Zeek::BifCl PROPERTY IMPORTED_LOCATION "${BIFCL_EXE_PATH}")
+    set(_bifcl_exe_path "BIFCL_EXE_PATH")
+endif ()

-add_subdirectory(tools/gen-zam)
+if (NOT GEN_ZAM_EXE_PATH)
+    add_subdirectory(auxil/gen-zam)
+endif ()
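The `BIFCL_EXE_PATH`/`GEN_ZAM_EXE_PATH` hunks above all follow the same prebuilt-or-vendored fallback pattern: build the bundled copy of a code-generator tool unless the caller supplies a path to an already-built binary (useful for cross-compiling, where the tool must run on the host). A minimal sketch of that pattern — `TOOL_EXE_PATH`, `auxil/tool`, and `Zeek::Tool` are placeholder names, not the real targets:

```cmake
# Hypothetical standalone version of the fallback used for bifcl above.
if (NOT TOOL_EXE_PATH)
    # No prebuilt binary given: compile the vendored sources and alias the
    # in-tree target under the namespaced name the rest of the build expects.
    add_subdirectory(auxil/tool)
    add_executable(Zeek::Tool ALIAS tool)
else ()
    # A prebuilt binary was passed via -DTOOL_EXE_PATH=...: wrap it in an
    # IMPORTED executable target so it can be used in add_custom_command().
    add_executable(Zeek::Tool IMPORTED)
    set_property(TARGET Zeek::Tool PROPERTY IMPORTED_LOCATION "${TOOL_EXE_PATH}")
endif ()
```

Either way, downstream rules depend only on `Zeek::Tool`, so they do not care which branch was taken.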
 if (ENABLE_JEMALLOC)
     if (${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD")

@@ -1016,7 +985,6 @@ if (NOT DISABLE_SPICY)
     set(Python3_EXECUTABLE ${Python_EXECUTABLE} CACHE STRING "Python3_EXECUTABLE hint")
 endif ()
-set(SPICY_ENABLE_TESTS OFF)
 add_subdirectory(auxil/spicy)
 include(ConfigureSpicyBuild) # set some options different for building Spicy

@@ -1055,24 +1023,27 @@ include(BuiltInSpicyAnalyzer)
 include_directories(BEFORE ${PCAP_INCLUDE_DIR} ${BIND_INCLUDE_DIR} ${BinPAC_INCLUDE_DIR}
                     ${ZLIB_INCLUDE_DIR} ${JEMALLOC_INCLUDE_DIR})

+install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/auxil/rapidjson/include/rapidjson
+        DESTINATION include/zeek/3rdparty/rapidjson/include)
+install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/auxil/filesystem/include/ghc
+        DESTINATION include/zeek/3rdparty/)
 install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/auxil/prometheus-cpp/core/include/prometheus
         DESTINATION include/zeek/3rdparty/prometheus-cpp/include)
 install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/auxil/prometheus-cpp/core/include/prometheus
         DESTINATION include/zeek/3rdparty/prometheus-cpp/include)

-install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/auxil/expected-lite/include/nonstd
-        DESTINATION include/zeek/3rdparty/)
+# Create 3rdparty/ghc within the build directory so that the include for
+# "zeek/3rdparty/ghc/filesystem.hpp" works within the build tree.
 execute_process(COMMAND "${CMAKE_COMMAND}" -E make_directory
                 "${CMAKE_CURRENT_BINARY_DIR}/3rdparty/")
-# Do the same for nonstd.
 execute_process(
     COMMAND
         "${CMAKE_COMMAND}" -E create_symlink
-        "${CMAKE_CURRENT_SOURCE_DIR}/auxil/expected-lite/include/nonstd"
-        "${CMAKE_CURRENT_BINARY_DIR}/3rdparty/nonstd")
+        "${CMAKE_CURRENT_SOURCE_DIR}/auxil/filesystem/include/ghc"
+        "${CMAKE_CURRENT_BINARY_DIR}/3rdparty/ghc")

 # Optional Dependencies
@@ -1080,16 +1051,18 @@ set(USE_GEOIP false)
 find_package(LibMMDB)
 if (LIBMMDB_FOUND)
     set(USE_GEOIP true)
-    include_directories(BEFORE SYSTEM ${LibMMDB_INCLUDE_DIR})
+    include_directories(BEFORE ${LibMMDB_INCLUDE_DIR})
     list(APPEND OPTLIBS ${LibMMDB_LIBRARY})
 endif ()

 set(USE_KRB5 false)
-find_package(LibKrb5)
-if (LIBKRB5_FOUND)
-    set(USE_KRB5 true)
-    include_directories(BEFORE SYSTEM ${LibKrb5_INCLUDE_DIR})
-    list(APPEND OPTLIBS ${LibKrb5_LIBRARY})
+if (${CMAKE_SYSTEM_NAME} MATCHES Linux)
+    find_package(LibKrb5)
+    if (LIBKRB5_FOUND)
+        set(USE_KRB5 true)
+        include_directories(BEFORE ${LibKrb5_INCLUDE_DIR})
+        list(APPEND OPTLIBS ${LibKrb5_LIBRARY})
+    endif ()
 endif ()
 set(HAVE_PERFTOOLS false)

@@ -1121,7 +1094,7 @@ endif ()
 # dependencies which tend to be in standard system locations and thus cause the
 # system OpenSSL headers to still be picked up even if one specifies
 # --with-openssl (which may be common).
-include_directories(BEFORE SYSTEM ${OPENSSL_INCLUDE_DIR})
+include_directories(BEFORE ${OPENSSL_INCLUDE_DIR})

 # Determine if libfts is external to libc, i.e. musl
 find_package(FTS)

@@ -1175,7 +1148,6 @@ include(FindKqueue)
 include(FindPrometheusCpp)

 include_directories(BEFORE "auxil/out_ptr/include")
-include_directories(BEFORE "auxil/expected-lite/include")

 if ((OPENSSL_VERSION VERSION_EQUAL "1.1.0") OR (OPENSSL_VERSION VERSION_GREATER "1.1.0"))
     set(ZEEK_HAVE_OPENSSL_1_1 true CACHE INTERNAL "" FORCE)
@@ -1187,6 +1159,18 @@ endif ()
 # Tell the plugin code that we're building as part of the main tree.
 set(ZEEK_PLUGIN_INTERNAL_BUILD true CACHE INTERNAL "" FORCE)

+set(ZEEK_HAVE_AF_PACKET no)
+if (${CMAKE_SYSTEM_NAME} MATCHES Linux)
+    if (NOT DISABLE_AF_PACKET)
+        if (NOT AF_PACKET_PLUGIN_PATH)
+            set(AF_PACKET_PLUGIN_PATH ${CMAKE_SOURCE_DIR}/auxil/zeek-af_packet-plugin)
+        endif ()
+
+        list(APPEND ZEEK_INCLUDE_PLUGINS ${AF_PACKET_PLUGIN_PATH})
+        set(ZEEK_HAVE_AF_PACKET yes)
+    endif ()
+endif ()
+
 set(ZEEK_HAVE_JAVASCRIPT no)
 if (NOT DISABLE_JAVASCRIPT)
     set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${PROJECT_SOURCE_DIR}/auxil/zeekjs/cmake)

@@ -1206,7 +1190,6 @@ if (NOT DISABLE_JAVASCRIPT)
     endif ()
 endif ()

-set(ZEEK_HAVE_AF_PACKET no CACHE INTERNAL "Zeek has AF_PACKET support")
 set(ZEEK_HAVE_JAVASCRIPT ${ZEEK_HAVE_JAVASCRIPT} CACHE INTERNAL "Zeek has JavaScript support")
 set(DEFAULT_ZEEKPATH_PATHS

@@ -1225,7 +1208,11 @@ endif ()
 include_directories(BEFORE ${CMAKE_CURRENT_BINARY_DIR})
 execute_process(COMMAND "${CMAKE_COMMAND}" -E create_symlink "." "${CMAKE_CURRENT_BINARY_DIR}/zeek")

-set(ZEEK_CONFIG_BINPAC_ROOT_DIR ${BinPAC_ROOT_DIR})
+if (BinPAC_ROOT_DIR)
+    set(ZEEK_CONFIG_BINPAC_ROOT_DIR ${BinPAC_ROOT_DIR})
+else ()
+    set(ZEEK_CONFIG_BINPAC_ROOT_DIR ${ZEEK_ROOT_DIR})
+endif ()

 if (BROKER_ROOT_DIR)
     set(ZEEK_CONFIG_BROKER_ROOT_DIR ${BROKER_ROOT_DIR})

@@ -1443,6 +1430,11 @@ else ()
     set(_install_btest_tools_msg "no pcaps")
 endif ()

+set(_binpac_exe_path "included")
+if (BINPAC_EXE_PATH)
+    set(_binpac_exe_path ${BINPAC_EXE_PATH})
+endif ()
+
 set(_gen_zam_exe_path "included")
 if (GEN_ZAM_EXE_PATH)
     set(_gen_zam_exe_path ${GEN_ZAM_EXE_PATH})
@@ -1472,118 +1464,57 @@ if (ZEEK_LEGACY_ANALYZERS OR ZEEK_SKIPPED_ANALYZERS)
     )
 endif ()

-set(_zeek_builtin_plugins "${ZEEK_BUILTIN_PLUGINS}")
-if (NOT ZEEK_BUILTIN_PLUGINS)
-    set(_zeek_builtin_plugins "none")
-endif ()
-
-set(_zeek_fuzzing_engine "${ZEEK_FUZZING_ENGINE}")
-if (NOT ZEEK_FUZZING_ENGINE)
-    if (ZEEK_ENABLE_FUZZERS)
-        # The default fuzzer used by gcc and clang is libFuzzer. This is if you
-        # simply pass '-fsanitize=fuzzer' to the compiler.
-        set(_zeek_fuzzing_engine "libFuzzer")
-    endif ()
-endif ()
-
-## Utility method for outputting status information for features that just have a
-## string representation. This can also take an optional second argument that is a
-## value string to print.
-function (output_summary_line what)
-    if ("${ARGV1}" MATCHES "^$")
-        message("${what}:")
-        return()
-    endif ()
-
-    set(_spaces "                         ")
-    string(LENGTH ${what} _what_length)
-    math(EXPR _num_spaces "25 - ${_what_length}")
-    string(SUBSTRING ${_spaces} 0 ${_num_spaces} _spacing)
-    message("${what}:${_spacing}${ARGV1}")
-endfunction ()
-
-## Utility method for outputting status information for features that have an ON/OFF
-## state.
-function (output_summary_bool what state)
-    if (${state})
-        output_summary_line("${what}" "ON")
-    else ()
-        output_summary_line("${what}" "OFF")
-    endif ()
-endfunction ()
-
-message("\n====================| Zeek Build Summary |====================\n")
-
-output_summary_line("Build type" "${CMAKE_BUILD_TYPE}")
-output_summary_line("Build dir" "${PROJECT_BINARY_DIR}")
-message("")
-
-output_summary_line("Install prefix" "${CMAKE_INSTALL_PREFIX}")
-output_summary_line("Config file dir" "${ZEEK_ETC_INSTALL_DIR}")
-output_summary_line("Log dir" "${ZEEK_LOG_DIR}")
-output_summary_line("Plugin dir" "${ZEEK_PLUGIN_DIR}")
-output_summary_line("Python module dir" "${PY_MOD_INSTALL_DIR}")
-output_summary_line("Script dir" "${ZEEK_SCRIPT_INSTALL_PATH}")
-output_summary_line("Spool dir" "${ZEEK_SPOOL_DIR}")
-output_summary_line("State dir" "${ZEEK_STATE_DIR}")
-output_summary_line("Spicy modules dir" "${ZEEK_SPICY_MODULE_PATH}")
-message("")
-
-output_summary_bool("Debug mode" ${ENABLE_DEBUG})
-output_summary_bool("Unit tests" ${ENABLE_ZEEK_UNIT_TESTS})
-message("")
-
-output_summary_line("Builtin Plugins" "${_zeek_builtin_plugins}")
-message("")
-
-output_summary_line("CC" "${CMAKE_C_COMPILER}")
-output_summary_line("CFLAGS" "${CMAKE_C_FLAGS} ${CMAKE_C_FLAGS_${BuildType}}")
-output_summary_line("CXX" "${CMAKE_CXX_COMPILER}")
-output_summary_line("CXXFLAGS" "${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_${BuildType}}")
-output_summary_line("CPP" "${CMAKE_CXX_COMPILER}")
-message("")
-
-output_summary_bool("AF_PACKET" ${ZEEK_HAVE_AF_PACKET})
-output_summary_bool("Aux. Tools" ${INSTALL_AUX_TOOLS})
-output_summary_bool("BTest" ${INSTALL_BTEST})
-output_summary_line("BTest tooling" ${_install_btest_tools_msg})
-output_summary_bool("JavaScript" ${ZEEK_HAVE_JAVASCRIPT})
-output_summary_line("Spicy" ${_spicy})
-output_summary_bool("Spicy analyzers" ${USE_SPICY_ANALYZERS})
-output_summary_bool("zeek-client" ${INSTALL_ZEEK_CLIENT})
-output_summary_bool("ZeekControl" ${INSTALL_ZEEKCTL})
-output_summary_bool("zkg" ${INSTALL_ZKG})
-message("")
-
-output_summary_bool("libmaxminddb" ${USE_GEOIP})
-output_summary_bool("Kerberos" ${USE_KRB5})
-output_summary_bool("gperftools" ${HAVE_PERFTOOLS})
-output_summary_bool(" - tcmalloc" ${USE_PERFTOOLS_TCMALLOC})
-output_summary_bool(" - debugging" ${USE_PERFTOOLS_DEBUG})
-output_summary_bool("jemalloc" ${ENABLE_JEMALLOC})
-message("")
-
-output_summary_line("Cluster backends")
-output_summary_bool(" - Broker" ON)
-output_summary_bool(" - ZeroMQ" ${ENABLE_CLUSTER_BACKEND_ZEROMQ})
-message("")
-
-output_summary_line("Storage backends")
-output_summary_bool(" - SQLite" ON)
-output_summary_bool(" - Redis" ${ENABLE_STORAGE_BACKEND_REDIS})
-message("")
-
-output_summary_bool("Fuzz Targets" ${ZEEK_ENABLE_FUZZERS})
-output_summary_line("Fuzz Engine" "${_zeek_fuzzing_engine}")
-message("")
-
-output_summary_line("External Tools/Linters")
-output_summary_bool(" - Include What You Use" ${ENABLE_IWYU})
-output_summary_bool(" - Clang-Tidy" ${ENABLE_CLANG_TIDY})
-
-if (${_analyzer_warning})
-    message("${_analyzer_warning}\n")
-endif ()
-
-message("\n================================================================")
+message(
+    "\n====================| Zeek Build Summary |===================="
+    "\n"
+    "\nBuild type:        ${CMAKE_BUILD_TYPE}"
+    "\nBuild dir:         ${PROJECT_BINARY_DIR}"
+    "\n"
+    "\nInstall prefix:    ${CMAKE_INSTALL_PREFIX}"
+    "\nConfig file dir:   ${ZEEK_ETC_INSTALL_DIR}"
+    "\nLog dir:           ${ZEEK_LOG_DIR}"
+    "\nPlugin dir:        ${ZEEK_PLUGIN_DIR}"
+    "\nPython module dir: ${PY_MOD_INSTALL_DIR}"
+    "\nScript dir:        ${ZEEK_SCRIPT_INSTALL_PATH}"
+    "\nSpool dir:         ${ZEEK_SPOOL_DIR}"
+    "\nState dir:         ${ZEEK_STATE_DIR}"
+    "\nSpicy modules dir: ${ZEEK_SPICY_MODULE_PATH}"
+    "\n"
+    "\nDebug mode:        ${ENABLE_DEBUG}"
+    "\nUnit tests:        ${ENABLE_ZEEK_UNIT_TESTS}"
+    "\nBuiltin Plugins:   ${ZEEK_BUILTIN_PLUGINS}"
+    "\n"
+    "\nCC:                ${CMAKE_C_COMPILER}"
+    "\nCFLAGS:            ${CMAKE_C_FLAGS} ${CMAKE_C_FLAGS_${BuildType}}"
+    "\nCXX:               ${CMAKE_CXX_COMPILER}"
+    "\nCXXFLAGS:          ${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_${BuildType}}"
+    "\nCPP:               ${CMAKE_CXX_COMPILER}"
+    "\n"
+    "\nAF_PACKET:         ${ZEEK_HAVE_AF_PACKET}"
+    "\nAux. Tools:        ${INSTALL_AUX_TOOLS}"
+    "\nBifCL:             ${_bifcl_exe_path}"
+    "\nBinPAC:            ${_binpac_exe_path}"
+    "\nBTest:             ${INSTALL_BTEST}"
+    "\nBTest tooling:     ${_install_btest_tools_msg}"
+    "\nGen-ZAM:           ${_gen_zam_exe_path}"
+    "\nJavaScript:        ${ZEEK_HAVE_JAVASCRIPT}"
+    "\nSpicy:             ${_spicy}"
+    "\nSpicy analyzers:   ${USE_SPICY_ANALYZERS}"
+    "\nzeek-client:       ${INSTALL_ZEEK_CLIENT}"
+    "\nZeekControl:       ${INSTALL_ZEEKCTL}"
+    "\nzkg:               ${INSTALL_ZKG}"
+    "\n"
+    "\nlibmaxminddb:      ${USE_GEOIP}"
+    "\nKerberos:          ${USE_KRB5}"
+    "\ngperftools found:  ${HAVE_PERFTOOLS}"
+    "\n - tcmalloc:       ${USE_PERFTOOLS_TCMALLOC}"
+    "\n - debugging:      ${USE_PERFTOOLS_DEBUG}"
+    "\njemalloc:          ${ENABLE_JEMALLOC}"
+    "\n"
+    "\nFuzz Targets:      ${ZEEK_ENABLE_FUZZERS}"
+    "\nFuzz Engine:       ${ZEEK_FUZZING_ENGINE}"
+    "${_analyzer_warning}"
+    "\n"
+    "\n================================================================\n")
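The `output_summary_line()` helper on the left-hand side of this hunk pads each label with spaces so that all values line up in a column. A minimal standalone sketch of that padding logic (hypothetical names; run with `cmake -P`, assuming every label is shorter than 25 characters, since `math(EXPR)` would otherwise produce a negative count):

```cmake
# pad_demo.cmake -- column-aligned status lines, modeled on output_summary_line().
function (print_aligned what value)
    set(_spaces "                         ") # 25 spaces of padding material
    string(LENGTH "${what}" _what_length)
    math(EXPR _num_spaces "25 - ${_what_length}") # spaces needed to reach column 25
    string(SUBSTRING "${_spaces}" 0 ${_num_spaces} _spacing)
    message("${what}:${_spacing}${value}")
endfunction ()

print_aligned("Build type" "RelWithDebInfo")
print_aligned("Install prefix" "/usr/local/zeek")
```

This is the usual CMake idiom for fixed-width output: carve the needed run of spaces out of a pre-built padding string, since `message()` itself has no format specifiers.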
include(UserChangedWarning) include(UserChangedWarning)


@@ -1 +0,0 @@
-Our code of conduct is published at https://zeek.org/community-code-of-conduct/


@@ -1,3 +0,0 @@
-Our contribution guide is available at https://github.com/zeek/zeek/wiki/Contribution-Guide.
-More information about contributing is also available at https://docs.zeek.org/en/master/devel/contributors.html.


@@ -1,4 +1,4 @@
-Copyright (c) 1995-now, The Regents of the University of California
+Copyright (c) 1995-2023, The Regents of the University of California
 through the Lawrence Berkeley National Laboratory and the
 International Computer Science Institute. All rights reserved.


@@ -533,6 +533,32 @@ POSSIBILITY OF SUCH DAMAGE.
 ==============================================================================

+%%% auxil/filesystem
+==============================================================================
+
+Copyright (c) 2018, Steffen Schümann <s.schuemann@pobox.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+==============================================================================
 %%% auxil/highwayhash
 ==============================================================================
@@ -756,433 +782,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
==============================================================================
%%% auxil/c-ares
==============================================================================
MIT License
Copyright (c) 1998 Massachusetts Institute of Technology
Copyright (c) 2007 - 2023 Daniel Stenberg with many contributors, see AUTHORS
file.
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================
%%% auxil/expected-lite
==============================================================================
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
==============================================================================
%%% auxil/out_ptr
==============================================================================
Copyright ⓒ 2018-2021 ThePhD.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
==============================================================================
%%% auxil/prometheus-cpp
==============================================================================
MIT License
Copyright (c) 2016-2021 Jupp Mueller
Copyright (c) 2017-2022 Gregor Jasny
And many contributors, see
https://github.com/jupp0r/prometheus-cpp/graphs/contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================
%%% auxil/rapidjson
==============================================================================
Tencent is pleased to support the open source community by making RapidJSON available.
Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip. All rights reserved.
If you have downloaded a copy of the RapidJSON binary from Tencent, please note that the RapidJSON binary is licensed under the MIT License.
If you have downloaded a copy of the RapidJSON source code from Tencent, please note that RapidJSON source code is licensed under the MIT License, except for the third-party components listed below which are subject to different license terms. Your integration of RapidJSON into your own projects may require compliance with the MIT License, as well as the other licenses applicable to the third-party components included within RapidJSON. To avoid the problematic JSON license in your own projects, it's sufficient to exclude the bin/jsonchecker/ directory, as it's the only code under the JSON license.
A copy of the MIT License is included in this file.
Other dependencies and licenses:
Open Source Software Licensed Under the BSD License:
--------------------------------------------------------------------
The msinttypes r29
Copyright (c) 2006-2013 Alexander Chemeris
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Open Source Software Licensed Under the JSON License:
--------------------------------------------------------------------
json.org
Copyright (c) 2002 JSON.org
All Rights Reserved.
JSON_checker
Copyright (c) 2002 JSON.org
All Rights Reserved.
Terms of the JSON License:
---------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
The Software shall be used for Good, not Evil.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Terms of the MIT License:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
==============================================================================
%%% auxil/vcpkg
==============================================================================
MIT License
Copyright (c) Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be included in all copies
or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
==============================================================================
%%% src/cluster/websocket/auxil/IXWebSocket
==============================================================================
Copyright (c) 2018 Machine Zone, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the
distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
==============================================================================
%%% src/cluster/backend/zeromq/auxil/cppzmq
==============================================================================
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.

NEWS (1135 changed lines)
File diff suppressed because it is too large

README (2 changed lines)
@@ -3,7 +3,7 @@ The Zeek Network Security Monitor
 =================================
 
 Zeek is a powerful framework for network traffic analysis and security
-monitoring.
+monitoring. Follow us on Twitter at @zeekurity.
 
 Key Features
 ============

@@ -15,15 +15,14 @@ traffic analysis and security monitoring.
 [_Development_](#development) —
 [_License_](#license)
 
+Follow us on Twitter at [@zeekurity](https://twitter.com/zeekurity).
 
 [![Coverage Status](https://coveralls.io/repos/github/zeek/zeek/badge.svg?branch=master)](https://coveralls.io/github/zeek/zeek?branch=master)
 [![Build Status](https://img.shields.io/cirrus/github/zeek/zeek)](https://cirrus-ci.com/github/zeek/zeek)
 [![Slack](https://img.shields.io/badge/slack-@zeek-brightgreen.svg?logo=slack)](https://zeek.org/slack)
 [![Discourse](https://img.shields.io/discourse/status?server=https%3A%2F%2Fcommunity.zeek.org)](https://community.zeek.org)
-[![Mastodon](https://img.shields.io/badge/mastodon-@zeek@infosec.exchange-brightgreen.svg?logo=mastodon)](https://infosec.exchange/@zeek)
-[![Bluesky](https://img.shields.io/badge/bluesky-@zeek-brightgreen.svg?logo=bluesky)](https://bsky.app/profile/zeek.org)
 
 </h4>
@@ -52,7 +51,7 @@ Getting Started
 
 The best place to find information about getting started with Zeek is
 our web site [www.zeek.org](https://www.zeek.org), specifically the
-[documentation](https://docs.zeek.org/en/stable/index.html) section
+[documentation](https://www.zeek.org/documentation/index.html) section
 there. On the web site you can also find downloads for stable
 releases, tutorials on getting Zeek set up, and many other useful
 resources.
@@ -105,9 +104,9 @@ you might find
 [these](https://github.com/zeek/zeek/labels/good%20first%20issue)
 to be a good place to get started. More information on Zeek's
 development can be found
-[here](https://docs.zeek.org/en/current/devel/index.html), and information
+[here](https://www.zeek.org/development/index.html), and information
 about its community and mailing lists (which are fairly active) can be
-found [here](https://www.zeek.org/community/).
+found [here](https://www.zeek.org/community/index.html).
 
 License
 -------

@@ -1,5 +0,0 @@
-# Security Policy
-
-Zeek's Security Policy is defined on our website at https://zeek.org/security-reporting/
-
-Our Security Release Process is further clarified at https://github.com/zeek/zeek/wiki/Security-Release-Process

@@ -1 +1 @@
-8.1.0-dev.626
+7.0.10

auxil/bifcl Submodule
@@ -0,0 +1 @@
+Subproject commit 7c5ccc9aa91466004bc4a0dbbce11a239f3e742e

auxil/binpac Submodule
@@ -0,0 +1 @@
+Subproject commit a5c8f19fb49c60171622536fa6d369fa168f19e0

@@ -1 +1 @@
-Subproject commit 06d491943f4bee6c2d1e17a5c7c31836d725273d
+Subproject commit a80bf420aa6f55b4eb959ae89c184522a096a119

@@ -1 +1 @@
-Subproject commit 8c0fbfd74325b6c9be022a98bcd414b6f103d09e
+Subproject commit 989c7513c3b6056a429a5d48dacdc9a2c1b216a7

@@ -1 +1 @@
-Subproject commit d3a507e920e7af18a5efb7f9f1d8044ed4750013
+Subproject commit 0ad09d251bf01cc2b7860950527e33e22cd64256

@@ -1 +0,0 @@
-Subproject commit f339d2f73730f8fee4412f5e4938717866ecef48

auxil/filesystem Submodule
@@ -0,0 +1 @@
+Subproject commit 72a76d774e4c7c605141fd6d11c33cc211209ed9

auxil/gen-zam Submodule
@@ -0,0 +1 @@
+Subproject commit 610cf8527dad7033b971595a1d556c2c95294f2b

@@ -1 +1 @@
-Subproject commit ea30540c77679ced3ce7886199384e8743628921
+Subproject commit 10d93cff9fd6c8d8c3e0bae58312aed470843ff8

@@ -1 +1 @@
-Subproject commit 7e3670aa1f6ab7623a87ff1e770f7f6b5a1c59f1
+Subproject commit b38e9c8ebff08959a712a5663ba25e0624a3af00

@@ -1 +1 @@
-Subproject commit ad301651ad0a7426757f8bc94cfc8e8cd98451a8
+Subproject commit bdc15fab95b1ca2bd370fa25d91f7879b5da35fc

@@ -1 +1 @@
-Subproject commit 7635e113080be6fc20cb308636c8c38565c95c8a
+Subproject commit 597ec897fb13f9995e87c8748486f359558415de

@@ -1 +1 @@
-Subproject commit ce613c41372b23b1f51333815feb3edd87ef8a8b
+Subproject commit 66b4b34d99ab272fcf21f2bd12b616e371c6bb31

@@ -0,0 +1 @@
+Subproject commit a3fe59b3f1ded5c3461995134b66c6db182fa56f

@@ -1 +1 @@
-Subproject commit 9a51ce1940a808aaad253077905c2b34f15f1e08
+Subproject commit e850412ab5dea10ee2ebb98e42527d80fcf9a7ed

@@ -1 +1 @@
-Subproject commit 16849ca3ec2f8637e3f8ef8ee27e2c279724387f
+Subproject commit 5bcc14085178ed4ddfa9ad972b441c36e8bc0787

@@ -1 +1 @@
-Subproject commit 485abcad45daeea6d09680e5fc7d29e97d2e3fbe
+Subproject commit 9419b9a4242a4dc3860511d827f395971ae58ca0

@@ -1 +1 @@
-Subproject commit e5985abfffc1ef5ead3a0bab196fa5d86bc5276f
+Subproject commit 1b7071e294fde14230c5908a2f0b05228d9d695c

@@ -2,7 +2,7 @@ FROM alpine:latest
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230823
 
 RUN apk add --no-cache \
     bash \
@@ -10,10 +10,8 @@ RUN apk add --no-cache \
     bsd-compat-headers \
     ccache \
     cmake \
-    cppzmq \
     curl \
     diffutils \
-    dnsmasq \
     flex-dev \
     musl-fts-dev \
     g++ \
@@ -23,13 +21,13 @@ RUN apk add --no-cache \
     linux-headers \
     make \
     openssh-client \
-    openssl \
     openssl-dev \
     procps \
     py3-pip \
+    py3-websockets \
     python3 \
     python3-dev \
     swig \
     zlib-dev
 
-RUN pip3 install --break-system-packages websockets junit2html
+RUN pip3 install --break-system-packages junit2html

@@ -1,49 +0,0 @@
-FROM quay.io/centos/centos:stream10
-
-# A version field to invalidate Cirrus's build cache when needed, as suggested in
-# https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
-
-# dnf config-manager isn't available at first, and
-# we need it to install the CRB repo below.
-RUN dnf -y install 'dnf-command(config-manager)'
-
-# What used to be powertools is now called "CRB".
-# We need it for some of the packages installed below.
-# https://docs.fedoraproject.org/en-US/epel/
-RUN dnf config-manager --set-enabled crb
-RUN dnf -y install \
-    https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
-
-# The --nobest flag is hopefully temporary. Without it we currently hit
-# package versioning conflicts around OpenSSL.
-RUN dnf -y --nobest install \
-    bison \
-    ccache \
-    cmake \
-    cppzmq-devel \
-    diffutils \
-    flex \
-    gcc \
-    gcc-c++ \
-    git \
-    jq \
-    libpcap-devel \
-    make \
-    openssl \
-    openssl-devel \
-    procps-ng \
-    python3 \
-    python3-devel \
-    python3-pip\
-    sqlite \
-    swig \
-    tar \
-    which \
-    zlib-devel \
-    && dnf clean all && rm -rf /var/cache/dnf
-
-# Set the crypto policy to allow SHA-1 certificates - which we have in our tests
-RUN dnf -y --nobest install crypto-policies-scripts && update-crypto-policies --set LEGACY
-
-RUN pip3 install websockets junit2html

@@ -2,7 +2,7 @@ FROM quay.io/centos/centos:stream9
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230801
 
 # dnf config-manager isn't available at first, and
 # we need it to install the CRB repo below.
@@ -22,7 +22,6 @@ RUN dnf -y --nobest install \
     bison \
     ccache \
     cmake \
-    cppzmq-devel \
     diffutils \
     flex \
     gcc \
@@ -34,9 +33,9 @@ RUN dnf -y --nobest install \
     openssl \
     openssl-devel \
     procps-ng \
-    python3.13 \
-    python3.13-devel \
-    python3.13-pip\
+    python3 \
+    python3-devel \
+    python3-pip\
     sqlite \
     swig \
     tar \
@@ -47,8 +46,4 @@ RUN dnf -y --nobest install \
 # Set the crypto policy to allow SHA-1 certificates - which we have in our tests
 RUN dnf -y --nobest install crypto-policies-scripts && update-crypto-policies --set LEGACY
-
-# Override the default python3.9 installation paths with 3.13
-RUN alternatives --install /usr/bin/python3 python3 /usr/bin/python3.13 10
-RUN alternatives --install /usr/bin/pip3 pip3 /usr/bin/pip3.13 10
 
 RUN pip3 install websockets junit2html

@@ -12,8 +12,8 @@ import argparse
 import copy
 import json
 import logging
-import os
 import pathlib
+import os
 import subprocess
 import sys
@@ -38,22 +38,14 @@ def git_available():
 def git_is_repo(d: pathlib.Path):
     try:
-        git(
-            "-C",
-            str(d),
-            "rev-parse",
-            "--is-inside-work-tree",
-            stderr=subprocess.DEVNULL,
-        )
+        git("-C", str(d), "rev-parse", "--is-inside-work-tree", stderr=subprocess.DEVNULL)
         return True
     except subprocess.CalledProcessError:
         return False
 
 
 def git_is_dirty(d: pathlib.Path):
-    return (
-        len(git("-C", str(d), "status", "--untracked=no", "--short").splitlines()) > 0
-    )
+    return (len(git("-C", str(d), "status", "--untracked=no", "--short").splitlines()) > 0)
 
 
 def git_generic_info(d: pathlib.Path):
@@ -119,9 +111,7 @@ def collect_git_info(zeek_dir: pathlib.Path):
     info["name"] = "zeek"
     info["version"] = (zeek_dir / "VERSION").read_text().strip()
     info["submodules"] = collect_submodule_info(zeek_dir)
-    info["branch"] = git(
-        "-C", str(zeek_dir), "rev-parse", "--abbrev-ref", "HEAD"
-    ).strip()
+    info["branch"] = git("-C", str(zeek_dir), "rev-parse", "--abbrev-ref", "HEAD").strip()
     info["source"] = "git"
 
     return info
@@ -166,13 +156,14 @@ def main():
         for p in [p.strip() for p in v.split(";") if p.strip()]:
             yield pathlib.Path(p)
 
-    parser.add_argument(
-        "included_plugin_dirs", default="", nargs="?", type=included_plugin_dir_conv
-    )
+    parser.add_argument("included_plugin_dirs",
+                        default="",
+                        nargs="?",
+                        type=included_plugin_dir_conv)
     parser.add_argument("--dir", default=".")
-    parser.add_argument(
-        "--only-git", action="store_true", help="Do not try repo-info.json fallback"
-    )
+    parser.add_argument("--only-git",
+                        action="store_true",
+                        help="Do not try repo-info.json fallback")
 
     args = parser.parse_args()
     logging.basicConfig(format="%(levelname)s: %(message)s")
@@ -219,9 +210,7 @@ def main():
     zkg_provides_info = copy.deepcopy(included_plugins_info)
 
     # Hardcode the former spicy-plugin so that zkg knows Spicy is available.
-    zkg_provides_info.append(
-        {"name": "spicy-plugin", "version": info["version"].split("-")[0]}
-    )
+    zkg_provides_info.append({"name": "spicy-plugin", "version": info["version"].split("-")[0]})
     info["zkg"] = {"provides": zkg_provides_info}
 
     json_str = json.dumps(info, indent=2, sort_keys=True)

@@ -1,44 +0,0 @@
-#!/bin/bash
-#
-# This script produces output in the form of
-#
-#     $ REMOTE=awelzel ./ci/container-images-addl-tags.sh v7.0.5
-#     ADDITIONAL_MANIFEST_TAGS= lts 7.0 latest
-#
-# This scripts expects visibility to all tags and release branches
-# to work correctly. See the find-current-version.sh for details.
-set -eu
-
-dir="$(cd "$(dirname "$0")" && pwd)"
-
-if [ $# -ne 1 ] || [ -z "${1}" ]; then
-    echo "Usage: $0 <tag>" >&2
-    exit 1
-fi
-
-TAG="${1}"
-
-# Find current versions for lts and feature depending on branches and
-# tags in the repo. sed for escaping the dot in the version for using
-# it in the regex below to match against TAG.
-lts_ver=$(${dir}/find-current-version.sh lts)
-lts_pat="^v$(echo $lts_ver | sed 's,\.,\\.,g')\.[0-9]+\$"
-feature_ver=$(${dir}/find-current-version.sh feature)
-feature_pat="^v$(echo $feature_ver | sed 's,\.,\\.,g')\.[0-9]+\$"
-
-# Construct additional tags for the image. At most this will
-# be "lts x.0 feature" for an lts branch x.0 that is currently
-# also the latest feature branch.
-ADDL_MANIFEST_TAGS=
-if echo "${TAG}" | grep -q -E "${lts_pat}"; then
-    ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} lts ${lts_ver}"
-fi
-
-if echo "${TAG}" | grep -q -E "${feature_pat}"; then
-    ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} latest"
-    if [ "${feature_ver}" != "${lts_ver}" ]; then
-        ADDL_MANIFEST_TAGS="${ADDL_MANIFEST_TAGS} ${feature_ver}"
-    fi
-fi
-
-echo "ADDITIONAL_MANIFEST_TAGS=${ADDL_MANIFEST_TAGS}"
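For readers skimming the removed tag script above, its selection rules can be sketched compactly. This is a hypothetical Python translation for illustration only; the function name and argument layout are invented here, and the real script obtains `lts_ver`/`feature_ver` from `find-current-version.sh` rather than taking them as parameters:

```python
import re


def additional_manifest_tags(tag: str, lts_ver: str, feature_ver: str) -> list[str]:
    """Mirror the shell logic: a release tag vX.Y.Z gains extra image
    tags when its X.Y series matches the current lts and/or feature series."""
    tags = []
    # Same anchored patterns the script builds with sed-escaped dots.
    lts_pat = rf"^v{re.escape(lts_ver)}\.[0-9]+$"
    feature_pat = rf"^v{re.escape(feature_ver)}\.[0-9]+$"
    if re.match(lts_pat, tag):
        tags += ["lts", lts_ver]
    if re.match(feature_pat, tag):
        tags.append("latest")
        if feature_ver != lts_ver:
            tags.append(feature_ver)
    return tags
```

With the header's example inputs (tag `v7.0.5` while 7.0 is both the lts and the feature series), this yields `lts`, `7.0`, and `latest`, matching the documented output.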

@@ -1,36 +1,31 @@
-FROM debian:13
+FROM debian:11
 
 ENV DEBIAN_FRONTEND="noninteractive" TZ="America/Los_Angeles"
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230801
 
 RUN apt-get update && apt-get -y install \
     bison \
     bsdmainutils \
     ccache \
     cmake \
-    cppzmq-dev \
     curl \
-    dnsmasq \
     flex \
     g++ \
     gcc \
     git \
     jq \
     libkrb5-dev \
-    libnats-dev \
     libnode-dev \
     libpcap-dev \
-    librdkafka-dev \
     libssl-dev \
     libuv1-dev \
     make \
     python3 \
     python3-dev \
    python3-pip\
-    python3-websockets \
     sqlite3 \
     swig \
     wget \
@@ -39,6 +34,4 @@ RUN apt-get update && apt-get -y install \
     && apt autoclean \
     && rm -rf /var/lib/apt/lists/*
 
-# Debian trixie really doesn't like using pip to install system wide stuff, but
-# doesn't seem there's a python3-junit2html package, so not sure what we'd break.
-RUN pip3 install --break-system-packages junit2html
+RUN pip3 install websockets junit2html

@@ -4,32 +4,29 @@ ENV DEBIAN_FRONTEND="noninteractive" TZ="America/Los_Angeles"
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230801
 
 RUN apt-get update && apt-get -y install \
     bison \
     bsdmainutils \
     ccache \
     cmake \
-    cppzmq-dev \
     curl \
-    dnsmasq \
     flex \
     g++ \
     gcc \
     git \
     jq \
     libkrb5-dev \
-    libnats-dev \
     libnode-dev \
     libpcap-dev \
-    librdkafka-dev \
     libssl-dev \
     libuv1-dev \
     make \
     python3 \
     python3-dev \
     python3-pip\
+    python3-websockets \
     sqlite3 \
     swig \
     wget \
@@ -40,4 +37,4 @@ RUN apt-get update && apt-get -y install \
 
 # Debian bookworm really doesn't like using pip to install system wide stuff, but
 # doesn't seem there's a python3-junit2html package, so not sure what we'd break.
-RUN pip3 install --break-system-packages websockets junit2html
+RUN pip3 install --break-system-packages junit2html

@@ -2,7 +2,7 @@ FROM fedora:41
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20250203
 
 RUN dnf -y install \
     bison \

@@ -2,13 +2,12 @@ FROM fedora:42
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20250508
 
 RUN dnf -y install \
     bison \
     ccache \
     cmake \
-    cppzmq-devel \
     diffutils \
     findutils \
     flex \

@@ -6,7 +6,7 @@ set -e
 set -x
 
 env ASSUME_ALWAYS_YES=YES pkg bootstrap
-pkg install -y bash cppzmq git cmake swig bison python3 base64 flex ccache jq dnsmasq krb5
+pkg install -y bash git cmake swig bison python3 base64 flex ccache jq
 pkg upgrade -y curl
 
 pyver=$(python3 -c 'import sys; print(f"py{sys.version_info[0]}{sys.version_info[1]}")')
 pkg install -y $pyver-sqlite3
@@ -17,6 +17,3 @@ python -m pip install websockets junit2html
 # Spicy detects whether it is run from build directory via `/proc`.
 echo "proc /proc procfs rw,noauto 0 0" >>/etc/fstab
 mount /proc
-
-# dnsmasq is in /usr/local/sbin and that's not in the PATH by default
-ln -s /usr/local/sbin/dnsmasq /usr/local/bin/dnsmasq

@@ -1,33 +0,0 @@
-#!/usr/bin/env python3
-import re
-import sys
-
-exit_code = 0
-
-copyright_pat = re.compile(
-    r"See the file \"COPYING\" in the main distribution directory for copyright."
-)
-
-def match_line(line):
-    m = copyright_pat.search(line)
-    if m is not None:
-        return True
-
-    return False
-
-for f in sys.argv[1:]:
-    has_license_header = False
-    with open(f) as fp:
-        for line in fp:
-            line = line.strip()
-            if has_license_header := match_line(line):
-                break
-
-    if not has_license_header:
-        print(f"{f}:does not seem to contain a license header", file=sys.stderr)
-        exit_code = 1
-
-sys.exit(exit_code)

@@ -7,7 +7,7 @@ set -x
 
 brew update
 brew upgrade cmake
-brew install cppzmq openssl@3 python@3 swig bison flex ccache libmaxminddb dnsmasq krb5
+brew install openssl@3 python@3 swig bison flex ccache libmaxminddb
 
 which python3
 python3 --version

@@ -0,0 +1,41 @@
+FROM opensuse/leap:15.5
+
+# A version field to invalidate Cirrus's build cache when needed, as suggested in
+# https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
+ENV DOCKERFILE_VERSION 20230905
+
+RUN zypper addrepo https://download.opensuse.org/repositories/openSUSE:Leap:15.5:Update/standard/openSUSE:Leap:15.5:Update.repo \
+    && zypper refresh \
+    && zypper in -y \
+    bison \
+    ccache \
+    cmake \
+    curl \
+    flex \
+    gcc12 \
+    gcc12-c++ \
+    git \
+    gzip \
+    jq \
+    libopenssl-devel \
+    libpcap-devel \
+    make \
+    openssh \
+    procps \
+    python311 \
+    python311-devel \
+    python311-pip \
+    swig \
+    tar \
+    which \
+    zlib-devel \
+    && rm -rf /var/cache/zypp
+
+RUN update-alternatives --install /usr/bin/pip3 pip3 /usr/bin/pip3.11 100
+RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 100
+RUN update-alternatives --install /usr/bin/python3-config python3-config /usr/bin/python3.11-config 100
+
+RUN pip3 install websockets junit2html
+
+RUN update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-12 100
+RUN update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++-12 100

@@ -2,7 +2,7 @@ FROM opensuse/leap:15.6
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230905
 
 RUN zypper addrepo https://download.opensuse.org/repositories/openSUSE:Leap:15.6:Update/standard/openSUSE:Leap:15.6:Update.repo \
     && zypper refresh \
@@ -10,9 +10,7 @@ RUN zypper addrepo https://download.opensuse.org/repositories/openSUSE:Leap:15.6
     bison \
     ccache \
     cmake \
-    cppzmq-devel \
     curl \
-    dnsmasq \
     flex \
     gcc12 \
     gcc12-c++ \

@@ -2,7 +2,7 @@ FROM opensuse/tumbleweed
 
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20250714
 
 # Remove the repo-openh264 repository, it caused intermittent issues
 # and we should not be needing any packages from it.
@@ -14,10 +14,8 @@ RUN zypper refresh \
     bison \
     ccache \
     cmake \
-    cppzmq-devel \
     curl \
     diffutils \
-    dnsmasq \
     findutils \
     flex \
     gcc \
@@ -32,6 +30,7 @@ RUN zypper refresh \
     python3 \
     python3-devel \
     python3-pip \
+    python3-websockets \
     swig \
     tar \
     util-linux \
@@ -39,4 +38,4 @@ RUN zypper refresh \
     zlib-devel \
     && rm -rf /var/cache/zypp
 
-RUN pip3 install --break-system-packages websockets junit2html
+RUN pip3 install --break-system-packages junit2html

@@ -1,27 +0,0 @@
-#!/bin/sh
-
-zypper refresh
-zypper patch -y --with-update --with-optional
-
-LATEST_VERSION=$(zypper search -n ${ZEEK_CI_COMPILER} |
-    awk -F "|" "match(\$2, / ${ZEEK_CI_COMPILER}([0-9]{2})[^-]/, a) {print a[1]}" |
-    sort | tail -1)
-
-echo "Installing ${ZEEK_CI_COMPILER} ${LATEST_VERSION}"
-
-zypper install -y "${ZEEK_CI_COMPILER}${LATEST_VERSION}"
-if [ "${ZEEK_CI_COMPILER}" == "gcc" ]; then
-    zypper install -y "${ZEEK_CI_COMPILER}${LATEST_VERSION}-c++"
-fi
-
-update-alternatives --install /usr/bin/cc cc "/usr/bin/${ZEEK_CI_COMPILER}-${LATEST_VERSION}" 100
-update-alternatives --set cc "/usr/bin/${ZEEK_CI_COMPILER}-${LATEST_VERSION}"
-
-if [ "${ZEEK_CI_COMPILER}" == "gcc" ]; then
-    update-alternatives --install /usr/bin/c++ c++ "/usr/bin/g++-${LATEST_VERSION}" 100
-    update-alternatives --set c++ "/usr/bin/g++-${LATEST_VERSION}"
-else
-    update-alternatives --install /usr/bin/c++ c++ "/usr/bin/clang++-${LATEST_VERSION}" 100
-    update-alternatives --set c++ "/usr/bin/clang++-${LATEST_VERSION}"
-fi

View file
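The deleted script above derives the newest available compiler major version by grepping `zypper search` output. The version-picking step can be exercised standalone on a fixed list of package names (the names below are made up for illustration, not real `zypper` output):

```shell
# Extract the highest two-digit version suffix from names like "gcc12",
# mirroring the LATEST_VERSION pipeline in the deleted script.
latest=$(printf '%s\n' gcc11 gcc13 gcc12 |
    sed -n 's/^gcc\([0-9][0-9]\)$/\1/p' |
    sort -n | tail -1)
echo "gcc${latest}"   # prints "gcc13"
```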

@@ -7,13 +7,6 @@
 result=0
 BTEST=$(pwd)/auxil/btest/btest

-# Due to issues with DNS lookups on macOS, one of the Cirrus support people recommended we
-# run our tests as root. See https://github.com/cirruslabs/cirrus-ci-docs/issues/1302 for
-# more details.
-if [[ "${CIRRUS_OS}" == "darwin" ]]; then
-    BTEST="sudo ${BTEST}"
-fi
-
 if [[ -z "${CIRRUS_CI}" ]]; then
     # Set default values to use in place of env. variables set by Cirrus CI.
     ZEEK_CI_CPUS=1
@@ -48,14 +41,14 @@ function banner {
 function run_unit_tests {
     if [[ ${ZEEK_CI_SKIP_UNIT_TESTS} -eq 1 ]]; then
-        printf "Skipping unit tests as requested by task configuration\n\n"
+        printf "Skipping unit tests as requested by task configureation\n\n"
         return 0
     fi

     banner "Running unit tests"
     pushd build
-    (. ./zeek-path-dev.sh && TZ=UTC zeek --test --no-skip) || result=1
+    (. ./zeek-path-dev.sh && zeek --test --no-skip) || result=1
     popd
     return 0
 }

@@ -46,16 +46,3 @@ deadlock:zeek::threading::Queue<zeek::threading::BasicInputMessage*>::LocksForAl
 # This only happens at shutdown. It was supposedly fixed in civetweb, but has cropped
 # up again. See https://github.com/civetweb/civetweb/issues/861 for details.
 race:mg_stop
-
-# Uninstrumented library.
-#
-# We'd need to build zmq with TSAN enabled, without it reports data races
-# as it doesn't see the synchronization done [1], but also there's reports
-# that ZeroMQ uses non-standard synchronization that may be difficult for
-# TSAN to see.
-#
-# [1] https://groups.google.com/g/thread-sanitizer/c/7UZqM02yMYg/m/KlHOv2ckr9sJ
-# [2] https://github.com/zeromq/libzmq/issues/3919
-#
-called_from_lib:libzmq.so.5
-called_from_lib:libzmq.so

@@ -1,22 +1,18 @@
-FROM ubuntu:25.04
+FROM ubuntu:20.04

 ENV DEBIAN_FRONTEND="noninteractive" TZ="America/Los_Angeles"

 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20240528

 RUN apt-get update && apt-get -y install \
     bc \
     bison \
     bsdmainutils \
     ccache \
-    clang-18 \
-    clang++-18 \
     cmake \
-    cppzmq-dev \
     curl \
-    dnsmasq \
     flex \
     g++ \
     gcc \
@@ -30,17 +26,14 @@ RUN apt-get update && apt-get -y install \
     make \
     python3 \
     python3-dev \
-    python3-pip \
+    python3-pip\
     ruby \
     sqlite3 \
     swig \
     unzip \
     wget \
     zlib1g-dev \
-    libc++-dev \
-    libc++abi-dev \
     && apt autoclean \
     && rm -rf /var/lib/apt/lists/*

-RUN pip3 install --break-system-packages websockets junit2html
-RUN gem install coveralls-lcov
+RUN pip3 install websockets junit2html

@@ -4,7 +4,7 @@ ENV DEBIAN_FRONTEND="noninteractive" TZ="America/Los_Angeles"
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230801

 RUN apt-get update && apt-get -y install \
     bc \
@@ -23,7 +23,6 @@ RUN apt-get update && apt-get -y install \
     libmaxminddb-dev \
     libpcap-dev \
     libssl-dev \
-    libzmq3-dev \
     make \
     python3 \
     python3-dev \

@@ -4,20 +4,17 @@ ENV DEBIAN_FRONTEND="noninteractive" TZ="America/Los_Angeles"
 # A version field to invalidate Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20240528

 RUN apt-get update && apt-get -y install \
     bc \
     bison \
     bsdmainutils \
     ccache \
-    clang-19 \
-    clang++-19 \
-    clang-tidy-19 \
+    clang-18 \
+    clang++-18 \
     cmake \
-    cppzmq-dev \
     curl \
-    dnsmasq \
     flex \
     g++ \
     gcc \
@@ -25,53 +22,22 @@ RUN apt-get update && apt-get -y install \
     jq \
     lcov \
     libkrb5-dev \
-    libhiredis-dev \
     libmaxminddb-dev \
     libpcap-dev \
     libssl-dev \
     make \
     python3 \
     python3-dev \
-    python3-git \
     python3-pip \
-    python3-semantic-version \
-    redis-server \
+    python3-websockets \
     ruby \
     sqlite3 \
     swig \
     unzip \
     wget \
     zlib1g-dev \
-    libc++-dev \
-    libc++abi-dev \
     && apt autoclean \
     && rm -rf /var/lib/apt/lists/*

-RUN pip3 install --break-system-packages websockets junit2html
+RUN pip3 install --break-system-packages junit2html
 RUN gem install coveralls-lcov
-
-# Ubuntu installs clang versions with the binaries having the version number
-# appended. Create a symlink for clang-tidy so cmake finds it correctly.
-RUN update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-19 1000
-
-# Download a newer pre-built ccache version that recognizes -fprofile-update=atomic
-# which is used when building with --coverage.
-#
-# This extracts the tarball into /opt/ccache-<version>-<platform> and
-# symlinks the executable to /usr/local/bin/ccache.
-#
-# See: https://ccache.dev/download.html
-ENV CCACHE_VERSION=4.10.2
-ENV CCACHE_PLATFORM=linux-x86_64
-ENV CCACHE_URL=https://github.com/ccache/ccache/releases/download/v${CCACHE_VERSION}/ccache-${CCACHE_VERSION}-${CCACHE_PLATFORM}.tar.xz
-ENV CCACHE_SHA256=80cab87bd510eca796467aee8e663c398239e0df1c4800a0b5dff11dca0b4f18
-RUN cd /opt \
-    && if [ "$(uname -p)" != "x86_64" ]; then echo "cannot use ccache pre-built for x86_64!" >&2; exit 1 ; fi \
-    && curl -L --fail --max-time 30 $CCACHE_URL -o ccache.tar.xz \
-    && sha256sum ./ccache.tar.xz >&2 \
-    && echo "${CCACHE_SHA256} ccache.tar.xz" | sha256sum -c - \
-    && tar xvf ./ccache.tar.xz \
-    && ln -s $(pwd)/ccache-${CCACHE_VERSION}-${CCACHE_PLATFORM}/ccache /usr/local/bin/ccache \
-    && test "$(command -v ccache)" = "/usr/local/bin/ccache" \
-    && test "$(ccache --print-version)" = "${CCACHE_VERSION}" \
-    && rm ./ccache.tar.xz
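The ccache bootstrap in the hunk above pins a SHA-256 checksum and verifies the download before unpacking it. The same verify-before-use pattern, demonstrated on a locally created file rather than a real download (the file name is a stand-in):

```shell
# Create a stand-in artifact, record its checksum, then verify it the
# same way the Dockerfile verifies ccache.tar.xz before extraction.
printf 'payload\n' > artifact.tar.xz
expected=$(sha256sum artifact.tar.xz | cut -d' ' -f1)

# sha256sum -c reads "<hash>  <file>" lines and exits non-zero on mismatch.
echo "${expected}  artifact.tar.xz" | sha256sum -c -
# prints "artifact.tar.xz: OK"
```

Failing the `RUN` step on a checksum mismatch keeps a tampered or truncated download out of the image.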

@@ -28,7 +28,7 @@ cd $build_dir
 export ZEEK_SEED_FILE=$source_dir/testing/btest/random.seed

 function run_zeek {
-    ZEEK_ALLOW_INIT_ERRORS=1 zeek -X $conf_file zeekygen
+    ZEEK_ALLOW_INIT_ERRORS=1 zeek -X $conf_file zeekygen >/dev/null

     if [ $? -ne 0 ]; then
         echo "Failed running zeek with zeekygen config file $conf_file" >&2
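The `run_zeek` helper above inspects `$?` on the line after the command. A minimal reproduction of that pattern; note that `$?` must be read immediately, since any intervening command overwrites it (`false` stands in for the real `zeek` invocation):

```shell
run_step() {
    false   # stand-in for the real command; exits non-zero

    # $? still holds the exit status of `false` here.
    if [ $? -ne 0 ]; then
        echo "step failed"
    fi
}

run_step   # prints "step failed"
```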

@@ -5,7 +5,7 @@ SHELL [ "powershell" ]
 # A version field to invalidatea Cirrus's build cache when needed, as suggested in
 # https://github.com/cirruslabs/cirrus-ci-docs/issues/544#issuecomment-566066822
-ENV DOCKERFILE_VERSION=20250905
+ENV DOCKERFILE_VERSION 20230801

 RUN Set-ExecutionPolicy Unrestricted -Force
@@ -14,8 +14,8 @@ RUN [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePoin
     iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

 # Install prerequisites
-RUN choco install -y --no-progress visualstudio2022buildtools --version=117.14.1
-RUN choco install -y --no-progress visualstudio2022-workload-vctools --version=1.0.0 --package-parameters '--add Microsoft.VisualStudio.Component.VC.ATLMFC'
+RUN choco install -y --no-progress visualstudio2019buildtools --version=16.11.11.0
+RUN choco install -y --no-progress visualstudio2019-workload-vctools --version=1.0.0 --package-parameters '--add Microsoft.VisualStudio.Component.VC.ATLMFC'
 RUN choco install -y --no-progress sed
 RUN choco install -y --no-progress winflexbison3
 RUN choco install -y --no-progress msysgit
@@ -30,4 +30,4 @@ RUN mkdir C:\build
 WORKDIR C:\build

 # This entry point starts the developer command prompt and launches the PowerShell shell.
-ENTRYPOINT ["C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\Common7\\Tools\\VsDevCmd.bat", "-arch=x64", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Unrestricted"]
+ENTRYPOINT ["C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\Common7\\Tools\\VsDevCmd.bat", "-arch=x64", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Unrestricted"]

@@ -2,7 +2,7 @@
 :: cmd current shell. This path is hard coded to the one on the CI image, but
 :: can be adjusted if running builds locally. Unfortunately, the initial path
 :: isn't in the environment so we have to hardcode the whole path.
-call "c:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64
+call "c:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64

 mkdir build
 cd build

@ -1,5 +1,5 @@
:: See build.cmd for documentation on this call. :: See build.cmd for documentation on this call.
call "c:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64 call "c:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64
cd build cd build

cmake (submodule)

@@ -1 +1 @@
-Subproject commit d51c6990446cf70cb9c01bca17dad171a1db05d3
+Subproject commit 621d098e6dcc52ad355cb2a196a7aa1a7b1a676f

@@ -2,9 +2,10 @@
 #pragma once

-constexpr char ZEEK_SCRIPT_INSTALL_PATH[] = "@ZEEK_SCRIPT_INSTALL_PATH@";
-constexpr char ZEEK_PLUGIN_INSTALL_PATH[] = "@ZEEK_PLUGIN_DIR@";
-constexpr char DEFAULT_ZEEKPATH[] = "@DEFAULT_ZEEKPATH@";
-constexpr char ZEEK_SPICY_MODULE_PATH[] = "@ZEEK_SPICY_MODULE_PATH@";
-constexpr char ZEEK_SPICY_LIBRARY_PATH[] = "@ZEEK_SPICY_LIBRARY_PATH@";
-constexpr char ZEEK_SPICY_DATA_PATH[] = "@ZEEK_SPICY_DATA_PATH@";
+#define ZEEK_SCRIPT_INSTALL_PATH "@ZEEK_SCRIPT_INSTALL_PATH@"
+#define BRO_PLUGIN_INSTALL_PATH "@ZEEK_PLUGIN_DIR@"
+#define ZEEK_PLUGIN_INSTALL_PATH "@ZEEK_PLUGIN_DIR@"
+#define DEFAULT_ZEEKPATH "@DEFAULT_ZEEKPATH@"
+#define ZEEK_SPICY_MODULE_PATH "@ZEEK_SPICY_MODULE_PATH@"
+#define ZEEK_SPICY_LIBRARY_PATH "@ZEEK_SPICY_LIBRARY_PATH@"
+#define ZEEK_SPICY_DATA_PATH "@ZEEK_SPICY_DATA_PATH@"

@ -1,6 +1,4 @@
// See the file "COPYING" in the main distribution directory for copyright. // See the file "COPYING" in the main distribution directory for copyright.
// NOLINTBEGIN(modernize-macro-to-enum)
// NOLINTBEGIN(cppcoreguidelines-macro-usage)
#pragma once #pragma once
@ -246,9 +244,6 @@
/* Enable/disable ZAM profiling capability */ /* Enable/disable ZAM profiling capability */
#cmakedefine ENABLE_ZAM_PROFILE #cmakedefine ENABLE_ZAM_PROFILE
/* Enable/disable the Spicy SSL analyzer */
#cmakedefine ENABLE_SPICY_SSL
/* String with host architecture (e.g., "linux-x86_64") */ /* String with host architecture (e.g., "linux-x86_64") */
#define HOST_ARCHITECTURE "@HOST_ARCHITECTURE@" #define HOST_ARCHITECTURE "@HOST_ARCHITECTURE@"
@ -308,6 +303,3 @@
/* compiled with Spicy support */ /* compiled with Spicy support */
#cmakedefine HAVE_SPICY #cmakedefine HAVE_SPICY
// NOLINTEND(cppcoreguidelines-macro-usage)
// NOLINTEND(modernize-macro-to-enum)

configure

@@ -69,17 +69,11 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
   --enable-static-broker           build Broker statically (ignored if --with-broker is specified)
   --enable-werror                  build with -Werror
   --enable-ZAM-profiling           build with ZAM profiling enabled (--enable-debug implies this)
-  --enable-spicy-ssl               build with spicy SSL/TLS analyzer (conflicts with --disable-spicy)
-  --enable-iwyu                    build with include-what-you-use enabled for the main Zeek target.
-                                   Requires include-what-you-use binary to be in the PATH.
-  --enable-clang-tidy              build with clang-tidy enabled for the main Zeek target.
-                                   Requires clang-tidy binary to be in the PATH.
   --disable-af-packet              don't include native AF_PACKET support (Linux only)
   --disable-auxtools               don't build or install auxiliary tools
   --disable-broker-tests           don't try to build Broker unit tests
   --disable-btest                  don't install BTest
   --disable-btest-pcaps            don't install Zeek's BTest input pcaps
-  --disable-cluster-backend-zeromq don't build Zeek's ZeroMQ cluster backend
   --disable-cpp-tests              don't build Zeek's C++ unit tests
   --disable-javascript             don't build Zeek's JavaScript support
   --disable-port-prealloc          disable pre-allocating the PortVal array in ValManager
@@ -90,9 +84,16 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
   --disable-zkg                    don't install zkg

 Required Packages in Non-Standard Locations:
+  --with-bifcl=PATH                path to Zeek BIF compiler executable
+                                   (useful for cross-compiling)
+  --with-bind=PATH                 path to BIND install root
+  --with-binpac=PATH               path to BinPAC executable
+                                   (useful for cross-compiling)
   --with-bison=PATH                path to bison executable
   --with-broker=PATH               path to Broker install root
                                    (Zeek uses an embedded version by default)
+  --with-gen-zam=PATH              path to Gen-ZAM code generator
+                                   (Zeek uses an embedded version by default)
   --with-flex=PATH                 path to flex executable
   --with-libkqueue=PATH            path to libkqueue install root
                                    (Zeek uses an embedded version by default)
@@ -309,18 +310,12 @@ while [ $# -ne 0 ]; do
         --enable-ZAM-profiling)
             append_cache_entry ENABLE_ZAM_PROFILE BOOL true
             ;;
-        --enable-spicy-ssl)
-            append_cache_entry ENABLE_SPICY_SSL BOOL true
-            ;;
-        --enable-iwyu)
-            append_cache_entry ENABLE_IWYU BOOL true
-            ;;
-        --enable-clang-tidy)
-            append_cache_entry ENABLE_CLANG_TIDY BOOL true
-            ;;
         --disable-af-packet)
             append_cache_entry DISABLE_AF_PACKET BOOL true
             ;;
+        --disable-archiver)
+            has_disable_archiver=1
+            ;;
         --disable-auxtools)
             append_cache_entry INSTALL_AUX_TOOLS BOOL false
             ;;
@@ -334,9 +329,6 @@ while [ $# -ne 0 ]; do
         --disable-btest-pcaps)
             append_cache_entry INSTALL_BTEST_PCAPS BOOL false
             ;;
-        --disable-cluster-backend-zeromq)
-            append_cache_entry ENABLE_CLUSTER_BACKEND_ZEROMQ BOOL false
-            ;;
         --disable-cpp-tests)
             append_cache_entry ENABLE_ZEEK_UNIT_TESTS BOOL false
             ;;
@@ -361,9 +353,15 @@ while [ $# -ne 0 ]; do
         --disable-zkg)
             append_cache_entry INSTALL_ZKG BOOL false
             ;;
+        --with-bifcl=*)
+            append_cache_entry BIFCL_EXE_PATH PATH $optarg
+            ;;
         --with-bind=*)
             append_cache_entry BIND_ROOT_DIR PATH $optarg
             ;;
+        --with-binpac=*)
+            append_cache_entry BINPAC_EXE_PATH PATH $optarg
+            ;;
         --with-bison=*)
             append_cache_entry BISON_EXECUTABLE PATH $optarg
             ;;
@@ -376,6 +374,9 @@ while [ $# -ne 0 ]; do
         --with-flex=*)
             append_cache_entry FLEX_EXECUTABLE PATH $optarg
             ;;
+        --with-gen-zam=*)
+            append_cache_entry GEN_ZAM_EXE_PATH PATH $optarg
+            ;;
         --with-geoip=*)
             append_cache_entry LibMMDB_ROOT_DIR PATH $optarg
             ;;
@@ -491,3 +492,8 @@ eval ${cmake} 2>&1
 echo "# This is the command used to configure this build" >config.status
 echo $command >>config.status
 chmod u+x config.status
+
+if [ $has_disable_archiver -eq 1 ]; then
+    echo
+    echo "NOTE: The --disable-archiver argument no longer has any effect and will be removed in v7.1. zeek-archiver is now part of zeek-aux, so consider --disable-auxtools instead."
+fi
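The option handling above funnels every flag through `append_cache_entry`, which accumulates CMake cache definitions that are later passed to `cmake`. A reduced sketch of that mechanism (the option values here are examples, not defaults from the script):

```shell
# Accumulate -D NAME:TYPE=VALUE definitions the way configure does,
# then hand them to cmake in one invocation.
CMakeCacheEntries=""
append_cache_entry() {
    # $1 = cache variable name, $2 = CMake type, $3 = value
    CMakeCacheEntries="$CMakeCacheEntries -D $1:$2=$3"
}

append_cache_entry BIND_ROOT_DIR PATH /opt/bind
append_cache_entry ENABLE_ZAM_PROFILE BOOL true

echo "cmake$CMakeCacheEntries <source-dir>"
# prints "cmake -D BIND_ROOT_DIR:PATH=/opt/bind -D ENABLE_ZAM_PROFILE:BOOL=true <source-dir>"
```

Deferring all definitions into one string keeps each `--with-*`/`--enable-*` case arm a one-liner and makes the final `cmake` command easy to log into `config.status`.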

doc (submodule)

@@ -1 +1 @@
-Subproject commit 8f38ae2fd563314393eb1ca58c827d26e9966520
+Subproject commit 2c5816ea62920979ff7cf92f42455e7a6827dd2f

@@ -1,7 +1,7 @@
 # See the file "COPYING" in the main distribution directory for copyright.

 # Layer to build Zeek.
-FROM debian:13-slim
+FROM debian:bookworm-slim

 # Make the shell split commands in the log so we can determine reasons for
 # failures more easily.
@@ -16,13 +16,11 @@ RUN echo 'Acquire::https::timeout "180";' >> /etc/apt/apt.conf.d/99-timeouts
 # Configure system for build.
 RUN apt-get -q update \
-    && apt-get upgrade -q -y \
     && apt-get install -q -y --no-install-recommends \
        bind9 \
        bison \
        ccache \
        cmake \
-        cppzmq-dev \
        flex \
        g++ \
        gcc \
@@ -37,7 +35,7 @@ RUN apt-get -q update \
        libz-dev \
        make \
        python3-minimal \
-        python3-dev \
+        python3.11-dev \
        swig \
        ninja-build \
        python3-pip \

@@ -1,7 +1,7 @@
 # See the file "COPYING" in the main distribution directory for copyright.

 # Final layer containing all artifacts.
-FROM debian:13-slim
+FROM debian:bookworm-slim

 # Make the shell split commands in the log so we can determine reasons for
 # failures more easily.
@@ -15,23 +15,21 @@ RUN echo 'Acquire::http::timeout "180";' > /etc/apt/apt.conf.d/99-timeouts
 RUN echo 'Acquire::https::timeout "180";' >> /etc/apt/apt.conf.d/99-timeouts

 RUN apt-get -q update \
-    && apt-get upgrade -q -y \
     && apt-get install -q -y --no-install-recommends \
        ca-certificates \
        git \
        jq \
        libmaxminddb0 \
-        libnode115 \
+        libnode108 \
+        libpython3.11 \
        libpcap0.8 \
-        libpython3.13 \
        libssl3 \
        libuv1 \
        libz1 \
-        libzmq5 \
        net-tools \
        procps \
-        python3-git \
        python3-minimal \
+        python3-git \
        python3-semantic-version \
        python3-websocket \
        && apt-get clean \
@@ -39,5 +37,5 @@ RUN apt-get -q update \
 # Copy over Zeek installation from build
 COPY --from=zeek-build /usr/local/zeek /usr/local/zeek

-ENV PATH="/usr/local/zeek/bin:${PATH}"
-ENV PYTHONPATH="/usr/local/zeek/lib/zeek/python:${PYTHONPATH}"
+ENV PATH "/usr/local/zeek/bin:${PATH}"
+ENV PYTHONPATH "/usr/local/zeek/lib/zeek/python:${PYTHONPATH}"

@ -1,8 +0,0 @@
target-version = "py39"
# Skip anything in the auxil directory. This includes pysubnetree which
# should be handled separately.
exclude = ["auxil"]
[lint]
select = ["C4", "F", "I", "ISC", "UP"]

View file

@@ -60,13 +60,13 @@ const pe_mime_types = { "application/x-dosexec" };
 event zeek_init() &priority=5
 	{
 	Files::register_for_mime_types(Files::ANALYZER_PE, pe_mime_types);
-	Log::create_stream(LOG, Log::Stream($columns=Info, $ev=log_pe, $path="pe", $policy=log_policy));
+	Log::create_stream(LOG, [$columns=Info, $ev=log_pe, $path="pe", $policy=log_policy]);
 	}

 hook set_file(f: fa_file) &priority=5
 	{
 	if ( ! f?$pe )
-		f$pe = PE::Info($ts=f$info$ts, $id=f$id);
+		f$pe = [$ts=f$info$ts, $id=f$id];
 	}

 event pe_dos_header(f: fa_file, h: PE::DOSHeader) &priority=5

@@ -40,7 +40,7 @@ export {
 event zeek_init() &priority=5
 	{
-	Log::create_stream(LOG, Log::Stream($columns=Info, $ev=log_ocsp, $path="ocsp", $policy=log_policy));
+	Log::create_stream(LOG, [$columns=Info, $ev=log_ocsp, $path="ocsp", $policy=log_policy]);
 	Files::register_for_mime_type(Files::ANALYZER_OCSP_REPLY, "application/ocsp-response");
 	}

@@ -105,29 +105,6 @@ export {
 	## Event for accessing logged records.
 	global log_x509: event(rec: Info);

-	## The maximum number of bytes that a single string field can contain when
-	## logging. If a string reaches this limit, the log output for the field will be
-	## truncated. Setting this to zero disables the limiting.
-	##
-	## .. zeek:see:: Log::default_max_field_string_bytes
-	const default_max_field_string_bytes = Log::default_max_field_string_bytes &redef;
-
-	## The maximum number of elements a single container field can contain when
-	## logging. If a container reaches this limit, the log output for the field will
-	## be truncated. Setting this to zero disables the limiting.
-	##
-	## .. zeek:see:: Log::default_max_field_container_elements
-	const default_max_field_container_elements = 500 &redef;
-
-	## The maximum total number of container elements a record may log. This is the
-	## sum of all container elements logged for the record. If this limit is reached,
-	## all further containers will be logged as empty containers. If the limit is
-	## reached while processing a container, the container will be truncated in the
-	## output. Setting this to zero disables the limiting.
-	##
-	## .. zeek:see:: Log::default_max_total_container_elements
-	const default_max_total_container_elements = 1500 &redef;
 }

 global known_log_certs_with_broker: set[LogCertHash] &create_expire=relog_known_certificates_after &backend=Broker::MEMORY;
@@ -140,12 +117,7 @@ redef record Files::Info += {
 event zeek_init() &priority=5
 	{
-	# x509 can have some very large certificates and very large sets of URIs. Expand the log size filters
-	# so that we're not truncating those.
-	Log::create_stream(X509::LOG, Log::Stream($columns=Info, $ev=log_x509, $path="x509", $policy=log_policy,
-	                              $max_field_string_bytes=X509::default_max_field_string_bytes,
-	                              $max_field_container_elements=X509::default_max_field_container_elements,
-	                              $max_total_container_elements=X509::default_max_total_container_elements));
+	Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509, $path="x509", $policy=log_policy]);

 	# We use MIME types internally to distinguish between user and CA certificates.
 	# The first certificate in a connection always gets tagged as user-cert, all
@@ -195,7 +167,7 @@ event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certifi
 	{
 	local der_cert = x509_get_certificate_string(cert_ref);
 	local fp = hash_function(der_cert);
-	f$info$x509 = X509::Info($ts=f$info$ts, $fingerprint=fp, $certificate=cert, $handle=cert_ref);
+	f$info$x509 = [$ts=f$info$ts, $fingerprint=fp, $certificate=cert, $handle=cert_ref];
 	if ( f$info$mime_type == "application/x-x509-user-cert" )
 		f$info$x509$host_cert = T;
 	if ( f$is_orig )
@@ -253,3 +225,4 @@ event file_state_remove(f: fa_file) &priority=5
 	Log::write(LOG, f$info$x509);
 	}
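The removed x509 options above raise the per-field log limits so that large certificates and URI sets are not cut off. The underlying behavior is a truncate-at-a-byte-budget rule; a standalone illustration (the 10-byte budget and the sample string are arbitrary, and `cut -c` counts characters, which matches bytes only for ASCII input):

```shell
# Truncate a field to a fixed budget, as a log writer with a small
# max_field_string_bytes setting would.
max_bytes=10
field="MIIDdTCCAl2gAwIBAgIL"   # stand-in for a long certificate blob

printf '%s\n' "$field" | cut -c "1-${max_bytes}"
# prints "MIIDdTCCAl"
```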

@@ -1,33 +1,61 @@
-##! Disables analyzers if protocol violations occur, and adds service information
-##! to connection log.
+##! Activates port-independent protocol detection and selectively disables
+##! analyzers if protocol violations occur.
+
+@load ./main

 module DPD;

 export {
-	## Analyzers which you don't want to remove on violations.
+	## Add the DPD logging stream identifier.
+	redef enum Log::ID += { LOG };
+
+	## A default logging policy hook for the stream.
+	global log_policy: Log::PolicyHook;
+
+	## The record type defining the columns to log in the DPD logging stream.
+	type Info: record {
+		## Timestamp for when protocol analysis failed.
+		ts: time &log;
+		## Connection unique ID.
+		uid: string &log;
+		## Connection ID containing the 4-tuple which identifies endpoints.
+		id: conn_id &log;
+		## Transport protocol for the violation.
+		proto: transport_proto &log;
+		## The analyzer that generated the violation.
+		analyzer: string &log;
+		## The textual reason for the analysis failure.
+		failure_reason: string &log;
+	};
+
+	## Ongoing DPD state tracking information.
+	type State: record {
+		## Current number of protocol violations seen per analyzer instance.
+		violations: table[count] of count;
+	};
+
+	## Number of protocol violations to tolerate before disabling an analyzer.
+	option max_violations: table[Analyzer::Tag] of count = table() &default = 5;
+
+	## Analyzers which you don't want to throw
 	option ignore_violations: set[Analyzer::Tag] = set();

 	## Ignore violations which go this many bytes into the connection.
 	## Set to 0 to never ignore protocol violations.
 	option ignore_violations_after = 10 * 1024;
-
-	## Change behavior of service field in conn.log:
-	## Failed services are no longer removed. Instead, for a failed
-	## service, a second entry with a "-" in front of it is added.
-	## E.g. a http connection with a violation would be logged as
-	## "http,-http".
-	option track_removed_services_in_connection = F;
 }

 redef record connection += {
-	## The set of prototol analyzers that were removed due to a protocol
-	## violation after the same analyzer had previously been confirmed.
-	failed_analyzers: set[string] &default=set() &ordered;
+	dpd: Info &optional;
+	dpd_state: State &optional;
+	## The set of services (analyzers) for which Zeek has observed a
+	## violation after the same service had previously been confirmed.
+	service_violation: set[string] &default=set();
 };

-# Add confirmed protocol analyzers to conn.log service field
+event zeek_init() &priority=5
+	{
+	Log::create_stream(DPD::LOG, [$columns=Info, $path="dpd", $policy=log_policy]);
+	}
+
 event analyzer_confirmation_info(atype: AllAnalyzers::Tag, info: AnalyzerConfirmationInfo) &priority=10
 	{
 	if ( ! is_protocol_analyzer(atype) && ! is_packet_analyzer(atype) )
@@ -41,11 +69,9 @@ event analyzer_confirmation_info(atype: AllAnalyzers::Tag, info: AnalyzerConfirm
 	add c$service[analyzer];
 	}

-# Remove failed analyzers from service field and add them to c$failed_analyzers
-# Low priority to allow other handlers to check if the analyzer was confirmed
-event analyzer_failed(ts: time, atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo) &priority=-5
+event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo) &priority=10
 	{
-	if ( ! is_protocol_analyzer(atype) )
+	if ( ! is_protocol_analyzer(atype) && ! is_packet_analyzer(atype) )
 		return;

 	if ( ! info?$c )
@@ -53,32 +79,38 @@ event analyzer_failed(ts: time, atype: AllAnalyzers::Tag, info: AnalyzerViolatio
 	local c = info$c;
 	local analyzer = Analyzer::name(atype);

-	# If the service hasn't been confirmed yet, or already failed,
-	# don't generate a log message for the protocol violation.
+	# If the service hasn't been confirmed yet, don't generate a log message
+	# for the protocol violation.
 	if ( analyzer !in c$service )
 		return;

-	# If removed service tracking is active, don't delete the service here.
-	if ( ! track_removed_services_in_connection )
-		delete c$service[analyzer];
-
-	# if statement is separate, to allow repeated removal of service, in case there are several
-	# confirmation and violation events
-	if ( analyzer !in c$failed_analyzers )
-		add c$failed_analyzers[analyzer];
-
-	# add "-service" to the list of services on removal due to violation, if analyzer was confirmed before
-	if ( track_removed_services_in_connection && Analyzer::name(atype) in c$service )
+	delete c$service[analyzer];
+	add c$service_violation[analyzer];
+
+	local dpd: Info;
+	dpd$ts = network_time();
+	dpd$uid = c$uid;
+	dpd$id = c$id;
+	dpd$proto = get_port_transport_proto(c$id$orig_p);
+	dpd$analyzer = analyzer;
+
+	# Encode data into the reason if there's any as done for the old
+	# analyzer_violation event, previously.
+	local reason = info$reason;
+	if ( info?$data )
 		{
-		local rname = cat("-", Analyzer::name(atype));
-		if ( rname !in c$service )
-			add c$service[rname];
+		local ellipsis = |info$data| > 40 ? "..." : "";
+		local data = info$data[0:40];
+		reason = fmt("%s [%s%s]", reason, data, ellipsis);
 		}
+
+	dpd$failure_reason = reason;
+	c$dpd = dpd;
 	}

 event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo ) &priority=5
 	{
-	if ( ! is_protocol_analyzer(atype) )
+	if ( ! is_protocol_analyzer(atype) && ! is_packet_analyzer(atype) )
 		return;

 	if ( ! info?$c || ! info?$aid )
@@ -93,17 +125,37 @@ event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationI
 	if ( ignore_violations_after > 0 && size > ignore_violations_after )
 		return;

-	# analyzer already was removed or connection finished
-	# let's still log this.
-	if ( lookup_connection_analyzer_id(c$id, atype) == 0 )
+	if ( ! c?$dpd_state )
 		{
-		event analyzer_failed(network_time(), atype, info);
-		return;
+		local s: State;
+		c$dpd_state = s;
 		}

-	local disabled = disable_analyzer(c$id, aid, F);
+	if ( aid in c$dpd_state$violations )
+		++c$dpd_state$violations[aid];
+	else
+		c$dpd_state$violations[aid] = 1;

-	# If analyzer was disabled, send failed event
+	if ( c?$dpd || c$dpd_state$violations[aid] > max_violations[atype] )
if ( disabled ) {
event analyzer_failed(network_time(), atype, info); # Disable an analyzer we've previously confirmed, but is now in
# violation, or else any analyzer in excess of the max allowed
# violations, regardless of whether it was previously confirmed.
disable_analyzer(c$id, aid, F);
}
}
event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo ) &priority=-5
{
if ( ! is_protocol_analyzer(atype) && ! is_packet_analyzer(atype) )
return;
if ( ! info?$c )
return;
if ( info$c?$dpd )
{
Log::write(DPD::LOG, info$c$dpd);
delete info$c$dpd;
}
} }
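The right-hand version stages a violation in `c$dpd` at high priority and writes it to `DPD::LOG` at priority -5. A site-local handler can run in between to veto individual entries before they are written; a minimal sketch (the NTLM filter is illustrative, not part of this diff):

```zeek
# Runs between the record-building (&priority=10) and the
# log-writing (&priority=-5) handlers shown above.
event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo) &priority=0
	{
	# Drop pending dpd.log entries for one noisy analyzer.
	if ( info?$c && info$c?$dpd && info$c$dpd$analyzer == "NTLM" )
		delete info$c$dpd;
	}
```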


@@ -1,6 +1,8 @@
-##! Logging analyzer violations into analyzer.log
+##! Logging analyzer confirmations and violations into analyzer.log
+@load base/frameworks/config
 @load base/frameworks/logging
 @load ./main
 module Analyzer::Logging;
@@ -9,10 +11,16 @@ export {
 	## Add the analyzer logging stream identifier.
 	redef enum Log::ID += { LOG };
+	## A default logging policy hook for the stream.
+	global log_policy: Log::PolicyHook;
 	## The record type defining the columns to log in the analyzer logging stream.
 	type Info: record {
-		## Timestamp of the violation.
+		## Timestamp of confirmation or violation.
 		ts: time &log;
+		## What caused this log entry to be produced. This can
+		## currently be "violation" or "confirmation".
+		cause: string &log;
 		## The kind of analyzer involved. Currently "packet", "file"
 		## or "protocol".
 		analyzer_kind: string &log;
@@ -23,64 +31,163 @@ export {
 		uid: string &log &optional;
 		## File UID if available.
 		fuid: string &log &optional;
-		## Connection identifier if available.
+		## Connection identifier if available
 		id: conn_id &log &optional;
-		## Transport protocol for the violation, if available.
-		proto: transport_proto &log &optional;
 		## Failure or violation reason, if available.
-		failure_reason: string &log;
+		failure_reason: string &log &optional;
 		## Data causing failure or violation if available. Truncated
 		## to :zeek:see:`Analyzer::Logging::failure_data_max_size`.
 		failure_data: string &log &optional;
 	};
+	## Enable logging of analyzer violations and optionally confirmations
+	## when :zeek:see:`Analyzer::Logging::include_confirmations` is set.
+	option enable = T;
+	## Enable analyzer_confirmation. They are usually less interesting
+	## outside of development of analyzers or troubleshooting scenarios.
+	## Setting this option may also generate multiple log entries per
+	## connection, minimally one for each conn.log entry with a populated
+	## service field.
+	option include_confirmations = F;
+	## Enable tracking of analyzers getting disabled. This is mostly
+	## interesting for troubleshooting of analyzers in DPD scenarios.
+	## Setting this option may also generate multiple log entries per
+	## connection.
+	option include_disabling = F;
 	## If a violation contains information about the data causing it,
 	## include at most this many bytes of it in the log.
 	option failure_data_max_size = 40;
-	## An event that can be handled to access the :zeek:type:`Analyzer::Logging::Info`
-	## record as it is sent on to the logging framework.
-	global log_analyzer: event(rec: Info);
-	## A default logging policy hook for the stream.
-	global log_policy: Log::PolicyHook;
+	## Set of analyzers for which to not log confirmations or violations.
+	option ignore_analyzers: set[AllAnalyzers::Tag] = set();
 }
 event zeek_init() &priority=5
 	{
-	Log::create_stream(LOG, Log::Stream($columns=Info, $path="analyzer", $ev=log_analyzer, $policy=log_policy));
+	Log::create_stream(LOG, [$columns=Info, $path="analyzer", $policy=log_policy,
+	                        $event_groups=set("Analyzer::Logging")]);
+	local enable_handler = function(id: string, new_value: bool): bool {
+		if ( new_value )
+			Log::enable_stream(LOG);
+		else
+			Log::disable_stream(LOG);
+		return new_value;
+	};
+	Option::set_change_handler("Analyzer::Logging::enable", enable_handler);
+	local include_confirmations_handler = function(id: string, new_value: bool): bool {
+		if ( new_value )
+			enable_event_group("Analyzer::Logging::include_confirmations");
+		else
+			disable_event_group("Analyzer::Logging::include_confirmations");
+		return new_value;
+	};
+	Option::set_change_handler("Analyzer::Logging::include_confirmations",
+	                           include_confirmations_handler);
+	local include_disabling_handler = function(id: string, new_value: bool): bool {
+		if ( new_value )
+			enable_event_group("Analyzer::Logging::include_disabling");
+		else
+			disable_event_group("Analyzer::Logging::include_disabling");
+		return new_value;
+	};
+	Option::set_change_handler("Analyzer::Logging::include_disabling",
+	                           include_disabling_handler);
+	# Call the handlers directly with the current values to avoid config
+	# framework interactions like creating entries in config.log.
+	enable_handler("Analyzer::Logging::enable", Analyzer::Logging::enable);
+	include_confirmations_handler("Analyzer::Logging::include_confirmations",
+	                              Analyzer::Logging::include_confirmations);
+	include_disabling_handler("Analyzer::Logging::include_disabling",
+	                          Analyzer::Logging::include_disabling);
 	}
-function log_analyzer_failure(ts: time, atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo)
+function analyzer_kind(atype: AllAnalyzers::Tag): string
 	{
+	if ( is_protocol_analyzer(atype) )
+		return "protocol";
+	else if ( is_packet_analyzer(atype) )
+		return "packet";
+	else if ( is_file_analyzer(atype) )
+		return "file";
+	Reporter::warning(fmt("Unknown kind of analyzer %s", atype));
+	return "unknown";
+	}
+function populate_from_conn(rec: Info, c: connection)
+	{
+	rec$id = c$id;
+	rec$uid = c$uid;
+	}
+function populate_from_file(rec: Info, f: fa_file)
+	{
+	rec$fuid = f$id;
+	# If the confirmation didn't have a connection, but the
+	# fa_file object has exactly one, use it.
+	if ( ! rec?$uid && f?$conns && |f$conns| == 1 )
+		{
+		for ( _, c in f$conns )
+			{
+			rec$id = c$id;
+			rec$uid = c$uid;
+			}
+		}
+	}
+event analyzer_confirmation_info(atype: AllAnalyzers::Tag, info: AnalyzerConfirmationInfo) &group="Analyzer::Logging::include_confirmations"
+	{
+	if ( atype in ignore_analyzers )
+		return;
 	local rec = Info(
-		$ts=ts,
-		$analyzer_kind=Analyzer::kind(atype),
+		$ts=network_time(),
+		$cause="confirmation",
+		$analyzer_kind=analyzer_kind(atype),
 		$analyzer_name=Analyzer::name(atype),
-		$failure_reason=info$reason
 	);
 	if ( info?$c )
-		{
-		rec$id = info$c$id;
-		rec$uid = info$c$uid;
-		rec$proto = get_port_transport_proto(info$c$id$orig_p);
-		}
+		populate_from_conn(rec, info$c);
 	if ( info?$f )
-		{
-		rec$fuid = info$f$id;
-		# If the confirmation didn't have a connection, but the
-		# fa_file object has exactly one, use it.
-		if ( ! rec?$uid && info$f?$conns && |info$f$conns| == 1 )
-			{
-			for ( _, c in info$f$conns )
-				{
-				rec$id = c$id;
-				rec$uid = c$uid;
-				}
-			}
-		}
+		populate_from_file(rec, info$f);
+	Log::write(LOG, rec);
+	}
+event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo) &priority=6
+	{
+	if ( atype in ignore_analyzers )
+		return;
+	local rec = Info(
+		$ts=network_time(),
+		$cause="violation",
+		$analyzer_kind=analyzer_kind(atype),
+		$analyzer_name=Analyzer::name(atype),
+		$failure_reason=info$reason,
+	);
+	if ( info?$c )
+		populate_from_conn(rec, info$c);
+	if ( info?$f )
+		populate_from_file(rec, info$f);
 	if ( info?$data )
 		{
@@ -93,31 +200,24 @@ function log_analyzer_failure(ts: time, atype: AllAnalyzers::Tag, info: Analyzer
 	Log::write(LOG, rec);
 	}
-# event currently is only raised for protocol analyzers; we do not fail packet and file analyzers
-event analyzer_failed(ts: time, atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo)
+hook Analyzer::disabling_analyzer(c: connection, atype: AllAnalyzers::Tag, aid: count) &priority=-1000 &group="Analyzer::Logging::include_disabling"
 	{
-	if ( ! is_protocol_analyzer(atype) )
-		return;
-	if ( ! info?$c )
-		return;
-	# log only for previously confirmed service that did not already log violation
-	# note that analyzers can fail repeatedly in some circumstances - e.g. when they
-	# are re-attached by the dynamic protocol detection due to later data.
-	local analyzer_name = Analyzer::name(atype);
-	if ( analyzer_name !in info$c$service || analyzer_name in info$c$failed_analyzers )
-		return;
-	log_analyzer_failure(ts, atype, info);
+	if ( atype in ignore_analyzers )
+		return;
+	local rec = Info(
+		$ts=network_time(),
+		$cause="disabled",
+		$analyzer_kind=analyzer_kind(atype),
+		$analyzer_name=Analyzer::name(atype),
+	);
+	populate_from_conn(rec, c);
+	if ( c?$dpd_state && aid in c$dpd_state$violations )
+		{
+		rec$failure_data = fmt("Disabled after %d violations", c$dpd_state$violations[aid]);
+		}
+	Log::write(LOG, rec);
 	}
-# log packet and file analyzers here separately
-event analyzer_violation_info(atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo )
-	{
-	if ( is_protocol_analyzer(atype) )
-		return;
-	log_analyzer_failure(network_time(), atype, info);
-	}
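Because the confirmation and disabling handlers on the right-hand side carry `&group` attributes tied to the two options, sites can opt in without code changes; a sketch for a local.zeek (the DCE_RPC choice is illustrative):

```zeek
redef Analyzer::Logging::include_confirmations = T;
redef Analyzer::Logging::include_disabling = T;
# Keep one chatty analyzer out of analyzer.log entirely.
redef Analyzer::Logging::ignore_analyzers += { Analyzer::ANALYZER_DCE_RPC };
```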


@@ -88,15 +88,6 @@ export {
 	## Returns: The analyzer name corresponding to the tag.
 	global name: function(tag: Analyzer::Tag) : string;
-	## Translates an analyzer type to a string with the analyzer's type.
-	##
-	## Possible values are "protocol", "packet", "file", or "unknown".
-	##
-	## tag: The analyzer tag.
-	##
-	## Returns: The analyzer kind corresponding to the tag.
-	global kind: function(tag: Analyzer::Tag) : string;
 	## Check whether the given analyzer name exists.
 	##
 	## This can be used before calling :zeek:see:`Analyzer::get_tag` to
@@ -109,10 +100,6 @@ export {
 	## Translates an analyzer's name to a tag enum value.
 	##
-	## The analyzer is assumed to exist; call
-	## :zeek:see:`Analyzer::has_tag` first to verify that name is a
-	## valid analyzer name.
-	##
 	## name: The analyzer name.
 	##
 	## Returns: The analyzer tag corresponding to the name.
@@ -172,23 +159,6 @@ export {
 	##
 	## This set can be added to via :zeek:see:`redef`.
 	global requested_analyzers: set[AllAnalyzers::Tag] = {} &redef;
-	## Event that is raised when an analyzer raised a service violation and was
-	## removed.
-	##
-	## The event is also raised if the analyzer already was no longer active by
-	## the time that the violation was handled - so if it happens at the very
-	## end of a connection.
-	##
-	## Currently this event is only raised for protocol analyzers, as packet
-	## and file analyzers are never actively removed/disabled.
-	##
-	## ts: time at which the violation occurred
-	##
-	## atype: The analyzer tag, such as ``Analyzer::ANALYZER_HTTP``.
-	##
-	## info: Details about the violation. This record should include a :zeek:type:`connection`.
-	global analyzer_failed: event(ts: time, atype: AllAnalyzers::Tag, info: AnalyzerViolationInfo);
 }
 @load base/bif/analyzer.bif
@@ -272,19 +242,6 @@ function name(atype: AllAnalyzers::Tag) : string
 	return __name(atype);
 	}
-function kind(atype: AllAnalyzers::Tag): string
-	{
-	if ( is_protocol_analyzer(atype) )
-		return "protocol";
-	else if ( is_packet_analyzer(atype) )
-		return "packet";
-	else if ( is_file_analyzer(atype) )
-		return "file";
-	Reporter::warning(fmt("Unknown kind of analyzer %s", atype));
-	return "unknown";
-	}
 function has_tag(name: string): bool
 	{
 	return __has_tag(name);
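Even with the assumed-to-exist caveat removed from the `get_tag` docs, guarding the lookup with `has_tag` works on both sides of this diff; a minimal sketch:

```zeek
event zeek_init()
	{
	# Resolve an analyzer name defensively before using its tag.
	if ( Analyzer::has_tag("HTTP") )
		{
		local tag = Analyzer::get_tag("HTTP");
		print fmt("resolved %s", Analyzer::name(tag));
		}
	}
```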


@@ -1,4 +1,3 @@
 @load ./main
 @load ./store
 @load ./log
-@load ./backpressure


@@ -1,31 +0,0 @@
-##! This handles Broker peers that fall so far behind in handling messages that
-##! this node sends it that the local Broker endpoint decides to unpeer them.
-##! Zeek captures this as follows:
-##!
-##! - In broker.log, with a regular "peer-removed" entry indicating CAF's reason.
-##! - Via eventing through :zeek:see:`Broker::peer_removed` as done in this script.
-##!
-##! The cluster framework additionally captures the unpeering as follows:
-##!
-##! - In cluster.log, with a higher-level message indicating the node names involved.
-##! - Via telemetry, using a labeled counter.
-event Broker::peer_removed(ep: Broker::EndpointInfo, msg: string)
-	{
-	if ( "caf::sec::backpressure_overflow" !in msg ) {
-		return;
-	}
-	if ( ! ep?$network ) {
-		Reporter::error(fmt("Missing network info to re-peer with %s", ep$id));
-		return;
-	}
-	# Re-establish the peering. Broker will periodically re-try connecting
-	# as necessary. Do this only if the local node originally established
-	# the peering, otherwise we would connect to an ephemeral client-side
-	# TCP port that doesn't listen. If we didn't originally establish the
-	# peering, the other side will retry anyway.
-	if ( Broker::is_outbound_peering(ep$network$address, ep$network$bound_port) )
-		Broker::peer(ep$network$address, ep$network$bound_port);
-	}


@@ -14,19 +14,7 @@ export {
 		## An informational status update.
 		STATUS,
 		## An error situation.
-		ERROR,
-		## Fatal event, normal operation has most likely broken down.
-		CRITICAL_EVENT,
-		## Unrecoverable event that imparts at least part of the system.
-		ERROR_EVENT,
-		## Unexpected or conspicuous event that may still be recoverable.
-		WARNING_EVENT,
-		## Noteworthy event during normal operation.
-		INFO_EVENT,
-		## Information that might be relevant for a user to understand system behavior.
-		VERBOSE_EVENT,
-		## An event that is relevant only for troubleshooting and debugging.
-		DEBUG_EVENT,
+		ERROR
 	};
 	## A record type containing the column fields of the Broker log.
@@ -47,17 +35,17 @@ export {
 event zeek_init() &priority=5
 	{
-	Log::create_stream(Broker::LOG, Log::Stream($columns=Info, $path="broker", $policy=log_policy));
+	Log::create_stream(Broker::LOG, [$columns=Info, $path="broker", $policy=log_policy]);
 	}
 function log_status(ev: string, endpoint: EndpointInfo, msg: string)
 	{
 	local r: Info;
-	r = Broker::Info($ts = network_time(),
-	                 $ev = ev,
-	                 $ty = STATUS,
-	                 $message = msg);
+	r = [$ts = network_time(),
+	     $ev = ev,
+	     $ty = STATUS,
+	     $message = msg];
 	if ( endpoint?$network )
 		r$peer = endpoint$network;
@@ -87,36 +75,11 @@ event Broker::error(code: ErrorCode, msg: string)
 	ev = subst_string(ev, "_", "-");
 	ev = to_lower(ev);
-	Log::write(Broker::LOG, Info($ts = network_time(),
-	                             $ev = ev,
-	                             $ty = ERROR,
-	                             $message = msg));
+	Log::write(Broker::LOG, [$ts = network_time(),
+	                         $ev = ev,
+	                         $ty = ERROR,
+	                         $message = msg]);
 	Reporter::error(fmt("Broker error (%s): %s", code, msg));
 	}
-event Broker::internal_log_event(lvl: LogSeverityLevel, id: string, description: string)
-	{
-	local severity = Broker::CRITICAL_EVENT;
-	switch lvl {
-	case Broker::LOG_ERROR:
-		severity = Broker::ERROR_EVENT;
-		break;
-	case Broker::LOG_WARNING:
-		severity = Broker::WARNING_EVENT;
-		break;
-	case Broker::LOG_INFO:
-		severity = Broker::INFO_EVENT;
-		break;
-	case Broker::LOG_VERBOSE:
-		severity = Broker::VERBOSE_EVENT;
-		break;
-	case Broker::LOG_DEBUG:
-		severity = Broker::DEBUG_EVENT;
-		break;
-	}
-	Log::write(Broker::LOG, Info($ts = network_time(),
-	                             $ty = severity,
-	                             $ev = id,
-	                             $message = description));
-	}


@@ -19,7 +19,7 @@ export {
 	## use already. Use of the ZEEK_DEFAULT_LISTEN_RETRY environment variable
 	## (set as a number of seconds) will override this option and also
 	## any values given to :zeek:see:`Broker::listen`.
-	const default_listen_retry = 1sec &redef;
+	const default_listen_retry = 30sec &redef;
 	## Default address on which to listen.
 	##
@@ -28,7 +28,7 @@ export {
 	## Default address on which to listen for WebSocket connections.
 	##
-	## .. zeek:see:: Cluster::listen_websocket
+	## .. zeek:see:: Broker::listen_websocket
 	const default_listen_address_websocket = getenv("ZEEK_DEFAULT_LISTEN_ADDRESS") &redef;
 	## Default interval to retry connecting to a peer if it cannot be made to
@@ -36,7 +36,7 @@ export {
 	## ZEEK_DEFAULT_CONNECT_RETRY environment variable (set as number of
 	## seconds) will override this option and also any values given to
 	## :zeek:see:`Broker::peer`.
-	const default_connect_retry = 1sec &redef;
+	const default_connect_retry = 30sec &redef;
 	## If true, do not use SSL for network connections. By default, SSL will
 	## even be used if no certificates / CAs have been configured. In that case
@@ -69,6 +69,11 @@ export {
 	## all peers.
 	const ssl_keyfile = "" &redef;
+	## The number of buffered messages at the Broker/CAF layer after which
+	## a subscriber considers themselves congested (i.e. tune the congestion
+	## control mechanisms).
+	const congestion_queue_size = 200 &redef;
 	## The max number of log entries per log stream to batch together when
 	## sending log messages to a remote logger.
 	const log_batch_size = 400 &redef;
@@ -78,31 +83,9 @@ export {
 	const log_batch_interval = 1sec &redef;
 	## Max number of threads to use for Broker/CAF functionality. The
-	## ``ZEEK_BROKER_MAX_THREADS`` environment variable overrides this setting.
+	## ZEEK_BROKER_MAX_THREADS environment variable overrides this setting.
 	const max_threads = 1 &redef;
-	## Max number of items we buffer at most per peer. What action to take when
-	## the buffer reaches its maximum size is determined by
-	## :zeek:see:`Broker::peer_overflow_policy`.
-	const peer_buffer_size = 8192 &redef;
-	## Configures how Broker responds to peers that cannot keep up with the
-	## incoming message rate. Available strategies:
-	## - disconnect: drop the connection to the unresponsive peer
-	## - drop_newest: replace the newest message in the buffer
-	## - drop_oldest: remove the oldest message from the buffer, then append
-	const peer_overflow_policy = "drop_oldest" &redef;
-	## Same as :zeek:see:`Broker::peer_buffer_size` but for WebSocket clients.
-	const web_socket_buffer_size = 8192 &redef;
-	## Same as :zeek:see:`Broker::peer_overflow_policy` but for WebSocket clients.
-	const web_socket_overflow_policy = "drop_oldest" &redef;
-	## How frequently Zeek resets some peering/client buffer statistics,
-	## such as ``max_queued_recently`` in :zeek:see:`BrokerPeeringStats`.
-	const buffer_stats_reset_interval = 1min &redef;
 	## The CAF scheduling policy to use. Available options are "sharing" and
 	## "stealing". The "sharing" policy uses a single, global work queue along
 	## with mutex and condition variable used for accessing it, which may be
@@ -175,28 +158,6 @@ export {
 	## will be sent.
 	const log_topic: function(id: Log::ID, path: string): string = default_log_topic &redef;
-	## The possible log event severity levels for Broker.
-	type LogSeverityLevel: enum {
-		## Fatal event, normal operation has most likely broken down.
-		LOG_CRITICAL,
-		## Unrecoverable event that imparts at least part of the system.
-		LOG_ERROR,
-		## Unexpected or conspicuous event that may still be recoverable.
-		LOG_WARNING,
-		## Noteworthy event during normal operation.
-		LOG_INFO,
-		## Information that might be relevant for a user to understand system behavior.
-		LOG_VERBOSE,
-		## An event that is relevant only for troubleshooting and debugging.
-		LOG_DEBUG,
-	};
-	## The log event severity level for the Broker log output.
-	const log_severity_level = LOG_WARNING &redef;
-	## Event severity level for also printing the Broker log output to stderr.
-	const log_stderr_severity_level = LOG_CRITICAL &redef;
 	type ErrorCode: enum {
 		## The unspecified default error code.
 		UNSPECIFIED = 1,
@@ -263,10 +224,6 @@ export {
 	type PeerInfo: record {
 		peer: EndpointInfo;
 		status: PeerStatus;
-		## Whether the local node created the peering, as opposed to a
-		## remote establishing it by connecting to us.
-		is_outbound: bool;
 	};
 	type PeerInfos: vector of PeerInfo;
@@ -314,6 +271,26 @@ export {
 	                      p: port &default = default_port,
 	                      retry: interval &default = default_listen_retry): port;
+	## Listen for remote connections using WebSocket.
+	##
+	## a: an address string on which to accept connections, e.g.
+	##    "127.0.0.1". An empty string refers to INADDR_ANY.
+	##
+	## p: the TCP port to listen on. The value 0 means that the OS should choose
+	##    the next available free port.
+	##
+	## retry: If non-zero, retries listening in regular intervals if the port cannot be
+	##        acquired immediately. 0 disables retries. If the
+	##        ZEEK_DEFAULT_LISTEN_RETRY environment variable is set (as number
+	##        of seconds), it overrides any value given here.
+	##
+	## Returns: the bound port or 0/? on failure.
+	##
+	## .. zeek:see:: Broker::status
+	global listen_websocket: function(a: string &default = default_listen_address_websocket,
+	                                  p: port &default = default_port_websocket,
+	                                  retry: interval &default = default_listen_retry): port;
 	## Initiate a remote connection.
 	##
 	## a: an address to connect to, e.g. "localhost" or "127.0.0.1".
@@ -350,16 +327,6 @@ export {
 	## TODO: We do not have a function yet to terminate a connection.
 	global unpeer: function(a: string, p: port): bool;
-	## Whether the local node originally initiated the peering with the
-	## given endpoint.
-	##
-	## a: the address used in a previous successful call to :zeek:see:`Broker::peer`.
-	##
-	## p: the port used in a previous successful call to :zeek:see:`Broker::peer`.
-	##
-	## Returns: True if this node initiated the peering.
-	global is_outbound_peering: function(a: string, p: port): bool;
 	## Get a list of all peer connections.
 	##
 	## Returns: a list of all peer connections.
@@ -370,12 +337,6 @@ export {
 	## Returns: a unique identifier for the local broker endpoint.
 	global node_id: function(): string;
-	## Obtain each peering's send-buffer statistics. The keys are Broker
-	## endpoint IDs.
-	##
-	## Returns: per-peering statistics.
-	global peering_stats: function(): table[string] of BrokerPeeringStats;
 	## Sends all pending log messages to remote peers. This normally
 	## doesn't need to be used except for test cases that are time-sensitive.
 	global flush_logs: function(): count;
@@ -424,6 +385,29 @@ export {
 	##
 	## Returns: true if a new event forwarding/subscription is now registered.
 	global forward: function(topic_prefix: string): bool;
+	## Automatically send an event to any interested peers whenever it is
+	## locally dispatched. (For example, using "event my_event(...);" in a
+	## script.)
+	##
+	## topic: a topic string associated with the event message.
+	##        Peers advertise interest by registering a subscription to some
+	##        prefix of this topic name.
+	##
+	## ev: a Zeek event value.
+	##
+	## Returns: true if automatic event sending is now enabled.
+	global auto_publish: function(topic: string, ev: any): bool;
+	## Stop automatically sending an event to peers upon local dispatch.
+	##
+	## topic: a topic originally given to :zeek:see:`Broker::auto_publish`.
+	##
+	## ev: an event originally given to :zeek:see:`Broker::auto_publish`.
+	##
+	## Returns: true if automatic events will not occur for the topic/event
+	##          pair.
+	global auto_unpublish: function(topic: string, ev: any): bool;
 }
 @load base/bif/comm.bif
@@ -465,6 +449,29 @@ function listen(a: string, p: port, retry: interval): port
 	return bound;
 	}
+event retry_listen_websocket(a: string, p: port, retry: interval)
+	{
+	listen_websocket(a, p, retry);
+	}
+function listen_websocket(a: string, p: port, retry: interval): port
+	{
+	local bound = __listen(a, p, Broker::WEBSOCKET);
+	if ( bound == 0/tcp )
+		{
+		local e = getenv("ZEEK_DEFAULT_LISTEN_RETRY");
+		if ( e != "" )
+			retry = double_to_interval(to_double(e));
+		if ( retry != 0secs )
+			schedule retry { retry_listen_websocket(a, p, retry) };
+		}
+	return bound;
+	}
 function peer(a: string, p: port, retry: interval): bool
 	{
 	return __peer(a, p, retry);
@@ -475,11 +482,6 @@ function unpeer(a: string, p: port): bool
 	return __unpeer(a, p);
 	}
-function is_outbound_peering(a: string, p: port): bool
-	{
-	return __is_outbound_peering(a, p);
-	}
 function peers(): vector of PeerInfo
 	{
 	return __peers();
@@ -490,11 +492,6 @@ function node_id(): string
 	return __node_id();
 	}
-function peering_stats(): table[string] of BrokerPeeringStats
-	{
-	return __peering_stats();
-	}
 function flush_logs(): count
 	{
 	return __flush_logs();
@@ -519,3 +516,13 @@ function unsubscribe(topic_prefix: string): bool
 	{
 	return __unsubscribe(topic_prefix);
 	}
+function auto_publish(topic: string, ev: any): bool
+	{
+	return __auto_publish(topic, ev);
+	}
+function auto_unpublish(topic: string, ev: any): bool
+	{
+	return __auto_unpublish(topic, ev);
+	}
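The `auto_publish`/`auto_unpublish` pair restored on the right-hand side wraps the `__auto_publish`/`__auto_unpublish` BiFs; typical usage looks like this (topic and event names are illustrative):

```zeek
global my_event: event(msg: string, n: count);

event zeek_init()
	{
	# From now on, every local "event my_event(...)" is also sent to
	# peers subscribed to a prefix of this topic.
	Broker::auto_publish("zeek/example/my_event", my_event);
	}
```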


@@ -1,7 +1,6 @@
 # Load the core cluster support.
 @load ./main
 @load ./pools
-@load ./telemetry
 @if ( Cluster::is_enabled() )
@@ -15,12 +14,6 @@ redef Broker::log_topic = Cluster::rr_log_topic;
 # Add a cluster prefix.
 @prefixes += cluster
-# Broker-specific additions:
-@if ( Cluster::backend == Cluster::CLUSTER_BACKEND_BROKER )
-@load ./broker-backpressure
-@load ./broker-telemetry
-@endif
 @if ( Supervisor::is_supervised() )
 # When running a supervised cluster, populate Cluster::nodes from the node table
 # the Supervisor provides to new Zeek nodes. The management framework configures


@@ -1,29 +0,0 @@
-# Notifications for Broker-reported backpressure overflow.
-# See base/frameworks/broker/backpressure.zeek for context.
-@load base/frameworks/telemetry
-module Cluster;
-global broker_backpressure_disconnects_cf = Telemetry::register_counter_family(Telemetry::MetricOpts(
-	$prefix="zeek",
-	$name="broker-backpressure-disconnects",
-	$unit="",
-	$label_names=vector("peer"),
-	$help_text="Number of Broker peerings dropped due to a neighbor falling behind in message I/O",
-	));
-event Broker::peer_removed(endpoint: Broker::EndpointInfo, msg: string)
-	{
-	if ( ! endpoint?$network || "caf::sec::backpressure_overflow" !in msg )
-		return;
-	local nn = nodeid_to_node(endpoint$id);
-	Cluster::log(fmt("removed due to backpressure overflow: %s%s:%s (%s)",
-	    nn$name != "" ? "" : "non-cluster peer ",
-	    endpoint$network$address, endpoint$network$bound_port,
-	    nn$name != "" ? nn$name : endpoint$id));
-	Telemetry::counter_family_inc(broker_backpressure_disconnects_cf,
-	    vector(nn$name != "" ? nn$name : "unknown"));
-	}


@@ -1,104 +0,0 @@
-# Additional Broker-specific metrics that use Zeek cluster-level node names.
-@load base/frameworks/telemetry
-module Cluster;
-## This gauge tracks the current number of locally queued messages in each
-## Broker peering's send buffer. The "peer" label identifies the remote side of
-## the peering, containing a Zeek cluster node name.
-global broker_peer_buffer_messages_gf = Telemetry::register_gauge_family(Telemetry::MetricOpts(
-	$prefix="zeek",
-	$name="broker-peer-buffer-messages",
-	$unit="",
-	$label_names=vector("peer"),
-	$help_text="Number of messages queued in Broker's send buffers",
-	));
-## This gauge tracks recent maximum queue lengths for each Broker peering's send
-## buffer. Most of the time the send buffers are nearly empty, so this gauge
-## helps understand recent bursts of messages. "Recent" here means
-## :zeek:see:`Broker::buffer_stats_reset_interval`. The time window advances in
-## increments of at least the stats interval, not incrementally with every new
-## observed message. That is, Zeek keeps a timestamp of when the window started,
-## and once it notices that the interval has passed, it moves the start of the
-## window to current time.
-global broker_peer_buffer_recent_max_messages_gf = Telemetry::register_gauge_family(Telemetry::MetricOpts(
-	$prefix="zeek",
-	$name="broker-peer-buffer-recent-max-messages",
-	$unit="",
-	$label_names=vector("peer"),
-	$help_text="Maximum number of messages recently queued in Broker's send buffers",
-	));
-## This counter tracks for each Broker peering the number of times its send
-## buffer has overflowed. For the "disconnect" policy this can at most be 1,
-## since Broker stops the peering at this time. For the "drop_oldest" and
-## "drop_newest" policies (see :zeek:see:`Broker::peer_overflow_policy`) the count
-## instead reflects the number of messages lost.
-global broker_peer_buffer_overflows_cf = Telemetry::register_counter_family(Telemetry::MetricOpts(
-	$prefix="zeek",
-	$name="broker-peer-buffer-overflows",
-	$unit="",
-	$label_names=vector("peer"),
-	$help_text="Number of overflows in Broker's send buffers",
-	));
-# A helper to track overflow counts over past peerings as well as the current
# one. The peer_id field allows us to identify when the counter has reset: a
# Broker ID different from the one on file means it's a new peering.
type EpochData: record {
peer_id: string;
num_overflows: count &default=0;
num_past_overflows: count &default=0;
};
# This maps from a cluster node name to its EpochData.
global peering_epoch_data: table[string] of EpochData;
hook Telemetry::sync()
{
local peers = Broker::peering_stats();
local nn: NamedNode;
local labels: vector of string;
local ed: EpochData;
for ( peer_id, stats in peers )
{
# Translate the Broker IDs to Zeek-level node names. We skip
# telemetry for peers where this mapping fails, i.e. ones for
# connections to external systems.
nn = nodeid_to_node(peer_id);
if ( |nn$name| == 0 )
next;
labels = vector(nn$name);
Telemetry::gauge_family_set(broker_peer_buffer_messages_gf,
labels, stats$num_queued);
Telemetry::gauge_family_set(broker_peer_buffer_recent_max_messages_gf,
labels, stats$max_queued_recently);
if ( nn$name !in peering_epoch_data )
peering_epoch_data[nn$name] = EpochData($peer_id=peer_id);
ed = peering_epoch_data[nn$name];
if ( peer_id != ed$peer_id )
{
# A new peering. Ensure that we account for overflows in
# past ones. There is a risk here that we might have
# missed a peering altogether if we scrape infrequently,
# but re-peering should be a rare event.
ed$peer_id = peer_id;
ed$num_past_overflows += ed$num_overflows;
}
ed$num_overflows = stats$num_overflows;
Telemetry::counter_family_set(broker_peer_buffer_overflows_cf,
labels, ed$num_past_overflows + ed$num_overflows);
}
}
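The overflow accounting across re-peerings described by the EpochData comments above can be sketched in Python; the class and method names are illustrative, not Zeek API:

```python
# Sketch of the EpochData accounting above: per-peering overflow counters
# reset when Broker assigns a new peer ID, so counts from past peerings are
# folded into num_past_overflows to keep the exported counter monotonic.
class EpochData:
    def __init__(self, peer_id: str):
        self.peer_id = peer_id
        self.num_overflows = 0
        self.num_past_overflows = 0

    def update(self, peer_id: str, num_overflows: int) -> int:
        if peer_id != self.peer_id:
            # A new peering: bank the previous peering's overflow count.
            self.peer_id = peer_id
            self.num_past_overflows += self.num_overflows
        self.num_overflows = num_overflows
        # The metric exports past plus current overflows.
        return self.num_past_overflows + self.num_overflows
```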


@@ -40,6 +40,10 @@ export {
## worker nodes in a cluster. Used with broker-enabled cluster communication.
const worker_topic = "zeek/cluster/worker" &redef;
+## The topic name used for exchanging messages that are relevant to
+## time machine nodes in a cluster. Used with broker-enabled cluster communication.
+const time_machine_topic = "zeek/cluster/time_machine" &redef &deprecated="Remove in v7.1: Unused.";
## A set of topic names to be used for broadcasting messages that are
## relevant to all nodes in a cluster. Currently, there is not a common
## topic to broadcast to, because enabling implicit Broker forwarding would
@@ -49,6 +53,9 @@ export {
manager_topic,
proxy_topic,
worker_topic,
+@pragma push ignore-deprecations
+time_machine_topic,
+@pragma pop ignore-deprecations
};
## The topic prefix used for exchanging messages that are relevant to
@@ -75,19 +82,6 @@ export {
## :zeek:see:`Cluster::create_store` with the *persistent* argument set true.
const default_persistent_backend = Broker::SQLITE &redef;
## The default maximum queue size for WebSocket event dispatcher instances.
##
## If the maximum queue size is reached, events from external WebSocket
## clients will be stalled and processed once the queue has been drained.
##
## An internal metric named ``cluster_onloop_queue_stalls`` and
## labeled with a ``WebSocketEventDispatcher:<host>:<port>`` tag
## is incremented when the maximum queue size is reached.
const default_websocket_max_event_queue_size = 32 &redef;
## The default ping interval for WebSocket clients.
const default_websocket_ping_interval = 5 sec &redef;
## Setting a default dir will, for persistent backends that have not
## been given an explicit file path via :zeek:see:`Cluster::stores`,
## automatically create a path within this dir that is based on the name of
@@ -175,6 +169,10 @@ export {
PROXY,
## The node type doing all the actual traffic analysis.
WORKER,
+## A node acting as a traffic recorder using the
+## `Time Machine <https://github.com/zeek/time-machine>`_
+## software.
+TIME_MACHINE &deprecated="Remove in v7.1: Unused.",
};
## Record type to indicate a node in a cluster.
@@ -189,8 +187,12 @@ export {
## The port that this node will listen on for peer connections.
## A value of ``0/unknown`` means the node is not pre-configured to listen.
p: port &default=0/unknown;
+## Identifier for the interface a worker is sniffing.
+interface: string &optional &deprecated="Remove in v7.1: interface is not required and not set consistently on workers. Replace usages with packet_source() or keep a separate worker-to-interface mapping in a global table.";
## Name of the manager node this node uses. For workers and proxies.
manager: string &optional;
+## Name of a time machine node with which this node connects.
+time_machine: string &optional &deprecated="Remove in v7.1: Unused.";
## A unique identifier assigned to the node by the broker framework.
## This field is only set while a node is connected.
id: string &optional;
@@ -255,17 +257,10 @@ export {
## of the cluster that is started up.
const node = getenv("CLUSTER_NODE") &redef;
-## Function returning this node's identifier.
-##
-## By default this is :zeek:see:`Broker::node_id`, but can be
-## redefined by other cluster backends. This identifier should be
-## a short lived identifier that resets when a node is restarted.
-global node_id: function(): string = Broker::node_id &redef;
## Interval for retrying failed connections between cluster nodes.
## If set, the ZEEK_DEFAULT_CONNECT_RETRY (given in number of seconds)
## environment variable overrides this option.
-const retry_interval = 1sec &redef;
+const retry_interval = 1min &redef;
## When using broker-enabled cluster framework, nodes broadcast this event
## to exchange their user-defined name along with a string that uniquely
@@ -290,7 +285,7 @@ export {
##
## Returns: a topic string that may used to send a message exclusively to
## a given cluster node.
-global node_topic: function(name: string): string &redef;
+global node_topic: function(name: string): string;
## Retrieve the topic associated with a specific node in the cluster.
##
@@ -299,126 +294,9 @@ export {
##
## Returns: a topic string that may used to send a message exclusively to
## a given cluster node.
-global nodeid_topic: function(id: string): string &redef;
+global nodeid_topic: function(id: string): string;
## Retrieve the cluster-level naming of a node based on its node ID,
## a backend-specific identifier.
##
## id: the node ID of a peer.
##
## Returns: the :zeek:see:`Cluster::NamedNode` for the requested node, if
## known, otherwise a "null" instance with an empty name field.
global nodeid_to_node: function(id: string): NamedNode;
## Initialize the cluster backend.
##
## Cluster backends usually invoke this from a :zeek:see:`zeek_init` handler.
##
## Returns: T on success, else F.
global init: function(): bool;
## Subscribe to the given topic.
##
## topic: The topic to subscribe to.
##
## Returns: T on success, else F.
global subscribe: function(topic: string): bool;
## Unsubscribe from the given topic.
##
## topic: The topic to unsubscribe from.
##
## Returns: T on success, else F.
global unsubscribe: function(topic: string): bool;
## An event instance for cluster pub/sub.
##
## See :zeek:see:`Cluster::publish` and :zeek:see:`Cluster::make_event`.
type Event: record {
## The event handler to be invoked on the remote node.
ev: any;
## The arguments for the event.
args: vector of any;
};
## The TLS options for a WebSocket server.
##
## If cert_file and key_file are set, TLS is enabled. If both
## are unset, TLS is disabled. Any other combination is an error.
type WebSocketTLSOptions: record {
## The cert file to use.
cert_file: string &optional;
## The key file to use.
key_file: string &optional;
## Expect peers to send client certificates.
enable_peer_verification: bool &default=F;
## The CA certificate or CA bundle used for peer verification.
## Empty will use the implementations's default when
## ``enable_peer_verification`` is T.
ca_file: string &default="";
## The ciphers to use. Empty will use the implementation's defaults.
ciphers: string &default="";
};
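The TLS rule documented above (cert and key both set, both unset, or error) can be sketched as a small validator. The function name here is illustrative, not part of Zeek:

```python
from typing import Optional

# Sketch of the WebSocketTLSOptions rule above: TLS is enabled when
# cert_file and key_file are both set, disabled when both are unset, and
# any other combination is an error.
def tls_mode(cert_file: Optional[str], key_file: Optional[str]) -> str:
    if cert_file is not None and key_file is not None:
        return "enabled"
    if cert_file is None and key_file is None:
        return "disabled"
    raise ValueError("cert_file and key_file must be set together")
```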
## WebSocket server options to pass to :zeek:see:`Cluster::listen_websocket`.
type WebSocketServerOptions: record {
## The address to listen on, cannot be used together with ``listen_host``.
listen_addr: addr &optional;
## The port the WebSocket server is supposed to listen on.
listen_port: port;
## The maximum event queue size for this server.
max_event_queue_size: count &default=default_websocket_max_event_queue_size;
## Ping interval to use. A WebSocket client not responding to
## the pings will be disconnected. Set to a negative value to
## disable pings. Subsecond intervals are currently not supported.
ping_interval: interval &default=default_websocket_ping_interval;
## The TLS options used for this WebSocket server. By default,
## TLS is disabled. See also :zeek:see:`Cluster::WebSocketTLSOptions`.
tls_options: WebSocketTLSOptions &default=WebSocketTLSOptions();
};
## Start listening on a WebSocket address.
##
## options: The server :zeek:see:`Cluster::WebSocketServerOptions` to use.
##
## Returns: T on success, else F.
global listen_websocket: function(options: WebSocketServerOptions): bool;
## Network information of an endpoint.
type NetworkInfo: record {
## The IP address or hostname where the endpoint listens.
address: string;
## The port where the endpoint is bound to.
bound_port: port;
};
## Information about a WebSocket endpoint.
type EndpointInfo: record {
id: string;
network: NetworkInfo;
## The value of the X-Application-Name HTTP header, if any.
application_name: string &optional;
};
## A hook invoked for every :zeek:see:`Cluster::subscribe` call.
##
## Breaking from this hook has no effect.
##
## topic: The topic string as given to :zeek:see:`Cluster::subscribe`.
global on_subscribe: hook(topic: string);
## A hook invoked for every :zeek:see:`Cluster::subscribe` call.
##
## Breaking from this hook has no effect.
##
## topic: The topic string as given to :zeek:see:`Cluster::subscribe`.
global on_unsubscribe: hook(topic: string);
}
-# Needs declaration of Cluster::Event type.
-@load base/bif/cluster.bif
-@load base/bif/plugins/Zeek_Cluster_WebSocket.events.bif.zeek
# Track active nodes per type.
global active_node_ids: table[NodeType] of set[string];
@@ -438,7 +316,7 @@ function nodes_with_type(node_type: NodeType): vector of NamedNode
{ return strcmp(n1$name, n2$name); });
}
-function get_node_count(node_type: NodeType): count
+function Cluster::get_node_count(node_type: NodeType): count
{
local cnt = 0;
@@ -451,7 +329,7 @@ function get_node_count(node_type: NodeType): count
return cnt;
}
-function get_active_node_count(node_type: NodeType): count
+function Cluster::get_active_node_count(node_type: NodeType): count
{
return node_type in active_node_ids ? |active_node_ids[node_type]| : 0;
}
@@ -496,17 +374,6 @@ function nodeid_topic(id: string): string
return nodeid_topic_prefix + id + "/";
}
function nodeid_to_node(id: string): NamedNode
{
for ( name, n in nodes )
{
if ( n?$id && n$id == id )
return NamedNode($name=name, $node=n);
}
return NamedNode($name="", $node=Node($node_type=NONE, $ip=0.0.0.0));
}
event Cluster::hello(name: string, id: string) &priority=10
{
if ( name !in nodes )
@@ -539,7 +406,7 @@ event Broker::peer_added(endpoint: Broker::EndpointInfo, msg: string) &priority=
if ( ! Cluster::is_enabled() )
return;
-local e = Broker::make_event(Cluster::hello, node, Cluster::node_id());
+local e = Broker::make_event(Cluster::hello, node, Broker::node_id());
Broker::publish(nodeid_topic(endpoint$id), e);
}
@@ -549,32 +416,16 @@ event Broker::peer_lost(endpoint: Broker::EndpointInfo, msg: string) &priority=1
{
if ( n?$id && n$id == endpoint$id )
{
+Cluster::log(fmt("node down: %s", node_name));
+delete n$id;
+delete active_node_ids[n$node_type][endpoint$id];
event Cluster::node_down(node_name, endpoint$id);
break;
}
}
}
event node_down(name: string, id: string) &priority=10
{
local found = F;
for ( node_name, n in nodes )
{
if ( n?$id && n$id == id )
{
Cluster::log(fmt("node down: %s", node_name));
delete n$id;
delete active_node_ids[n$node_type][id];
found = T;
break;
}
}
if ( ! found )
Reporter::error(fmt("No node found in Cluster::node_down() node:%s id:%s",
name, id));
}
event zeek_init() &priority=5
{
# If a node is given, but it's an unknown name we need to fail.
@@ -584,7 +435,7 @@ event zeek_init() &priority=5
terminate();
}
-Log::create_stream(Cluster::LOG, Log::Stream($columns=Info, $path="cluster", $policy=log_policy));
+Log::create_stream(Cluster::LOG, [$columns=Info, $path="cluster", $policy=log_policy]);
}
function create_store(name: string, persistent: bool &default=F): Cluster::StoreInfo
@@ -666,55 +517,5 @@ function create_store(name: string, persistent: bool &default=F): Cluster::Store
function log(msg: string)
{
-Log::write(Cluster::LOG, Info($ts = network_time(), $node = node, $message = msg));
+Log::write(Cluster::LOG, [$ts = network_time(), $node = node, $message = msg]);
}
function init(): bool
{
return Cluster::Backend::__init(Cluster::node_id());
}
function subscribe(topic: string): bool
{
return Cluster::__subscribe(topic);
}
function unsubscribe(topic: string): bool
{
return Cluster::__unsubscribe(topic);
}
function listen_websocket(options: WebSocketServerOptions): bool
{
return Cluster::__listen_websocket(options);
}
function format_endpoint_info(ei: EndpointInfo): string
{
local s = fmt("'%s' (%s:%d)", ei$id, ei$network$address, ei$network$bound_port);
if ( ei?$application_name )
s += fmt(" application_name=%s", ei$application_name);
return s;
}
event websocket_client_added(endpoint: EndpointInfo, subscriptions: string_vec)
{
local msg = fmt("WebSocket client %s subscribed to %s",
format_endpoint_info(endpoint), subscriptions);
Cluster::log(msg);
}
event websocket_client_lost(endpoint: EndpointInfo, code: count, reason: string)
{
local msg = fmt("WebSocket client %s gone with code %d%s",
format_endpoint_info(endpoint), code,
|reason| > 0 ? fmt(" and reason '%s'", reason) : "");
Cluster::log(msg);
}
# If a backend reports an error, propagate it via a reporter error message.
event Cluster::Backend::error(tag: string, message: string)
{
local msg = fmt("Cluster::Backend::error: %s (%s)", tag, message);
Reporter::error(msg);
} }


@@ -18,8 +18,6 @@ export {
site_id: count;
## Whether the node is currently alive and can receive work.
alive: bool &default=F;
-## The pre-computed result from Cluster::node_topic
-topic: string;
};
## A pool specification.
@@ -174,7 +172,7 @@ function hrw_topic(pool: Pool, key: any): string
local site = HashHRW::get_site(pool$hrw_pool, key);
local pn: PoolNode = site$user_data;
-return pn$topic;
+return Cluster::node_topic(pn$name);
}
function rr_topic(pool: Pool, key: string): string
@@ -200,7 +198,7 @@ function rr_topic(pool: Pool, key: string): string
if ( pn$alive )
{
-rval = pn$topic;
+rval = Cluster::node_topic(pn$name);
break;
}
@@ -278,7 +276,7 @@ function init_pool_node(pool: Pool, name: string): bool
else
{
local pn = PoolNode($name=name, $alias=alias, $site_id=site_id,
-$alive=Cluster::node == name, $topic=Cluster::node_topic(name));
+$alive=Cluster::node == name);
pool$nodes[name] = pn;
pool$node_list += pn;


@@ -36,8 +36,6 @@ function connect_peer(node_type: NodeType, node_name: string)
status));
return;
}
-Reporter::warning(fmt("connect_peer: node '%s' (%s) not found", node_name, node_type));
}
function connect_peers_with_type(node_type: NodeType)
@@ -71,7 +69,7 @@ event zeek_init() &priority=-10
local pool = registered_pools[i];
if ( node in pool$nodes )
-Cluster::subscribe(pool$spec$topic);
+Broker::subscribe(pool$spec$topic);
}
switch ( self$node_type ) {
@@ -80,47 +78,34 @@ event zeek_init() &priority=-10
case CONTROL:
break;
case LOGGER:
-Cluster::subscribe(Cluster::logger_topic);
+Broker::subscribe(Cluster::logger_topic);
+Broker::subscribe(Broker::default_log_topic_prefix);
break;
case MANAGER:
-Cluster::subscribe(Cluster::manager_topic);
+Broker::subscribe(Cluster::manager_topic);
+if ( Cluster::manager_is_logger )
+Broker::subscribe(Broker::default_log_topic_prefix);
break;
case PROXY:
-Cluster::subscribe(Cluster::proxy_topic);
+Broker::subscribe(Cluster::proxy_topic);
break;
case WORKER:
-Cluster::subscribe(Cluster::worker_topic);
+Broker::subscribe(Cluster::worker_topic);
break;
+@pragma push ignore-deprecations
+case TIME_MACHINE:
+Broker::subscribe(Cluster::time_machine_topic);
+break;
+@pragma pop ignore-deprecations
default:
Reporter::error(fmt("Unhandled cluster node type: %s", self$node_type));
return;
}
-Cluster::subscribe(nodeid_topic(Cluster::node_id()));
+Broker::subscribe(nodeid_topic(Broker::node_id()));
-Cluster::subscribe(node_topic(node));
+Broker::subscribe(node_topic(node));
# Listening and connecting to other peers is broker specific,
# short circuit if Zeek is configured with a different
# cluster backend.
#
# In the future, this could move into a policy script, but
# for the time being it's easier for backwards compatibility
# to keep this here.
if ( Cluster::backend != Cluster::CLUSTER_BACKEND_BROKER )
return;
# Logging setup: Anything handling logging additionally subscribes
# to Broker::default_log_topic_prefix.
switch ( self$node_type ) {
case LOGGER:
Cluster::subscribe(Broker::default_log_topic_prefix);
break;
case MANAGER:
if ( Cluster::manager_is_logger )
Cluster::subscribe(Broker::default_log_topic_prefix);
break;
}
if ( self$p != 0/unknown )
{
@@ -136,6 +121,11 @@ event zeek_init() &priority=-10
case MANAGER:
connect_peers_with_type(LOGGER);
+@pragma push ignore-deprecations
+if ( self?$time_machine )
+connect_peer(TIME_MACHINE, self$time_machine);
+@pragma pop ignore-deprecations
break;
case PROXY:
connect_peers_with_type(LOGGER);
@@ -151,6 +141,11 @@ event zeek_init() &priority=-10
if ( self?$manager )
connect_peer(MANAGER, self$manager);
+@pragma push ignore-deprecations
+if ( self?$time_machine )
+connect_peer(TIME_MACHINE, self$time_machine);
+@pragma pop ignore-deprecations
break;
}
}


@@ -42,7 +42,11 @@ function __init_cluster_nodes(): bool
if ( endp$role in rolemap )
typ = rolemap[endp$role];
-cnode = Cluster::Node($node_type=typ, $ip=endp$host, $p=endp$p);
+cnode = [$node_type=typ, $ip=endp$host, $p=endp$p];
+@pragma push ignore-deprecations
+if ( endp?$interface )
+cnode$interface = endp$interface;
+@pragma pop ignore-deprecations
if ( |manager_name| > 0 && cnode$node_type != Cluster::MANAGER )
cnode$manager = manager_name;
if ( endp?$metrics_port )


@ -1,39 +0,0 @@
## Module for cluster telemetry.
module Cluster::Telemetry;
export {
type Type: enum {
## Creates counter metrics for incoming and for outgoing
## events without labels.
INFO,
## Creates counter metrics for incoming and outgoing events
## labeled with handler and normalized topic names.
VERBOSE,
## Creates histogram metrics using the serialized message size
## for events, labeled by topic, handler and script location
## (outgoing only).
DEBUG,
};
## The telemetry types to enable for the core backend.
const core_metrics: set[Type] = {
INFO,
} &redef;
## The telemetry types to enable for WebSocket backends.
const websocket_metrics: set[Type] = {
INFO,
} &redef;
## Table used for normalizing topic names that contain random parts.
## Map to an empty string to skip recording a specific metric
## completely.
const topic_normalizations: table[pattern] of string = {
[/^zeek\/cluster\/nodeid\/.*/] = "zeek/cluster/nodeid/__normalized__",
} &ordered &redef;
## For the DEBUG metrics, the histogram buckets to use.
const message_size_bounds: vector of double = {
10.0, 50.0, 100.0, 500.0, 1000.0, 5000.0, 10000.0, 50000.0,
} &redef;
}
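The `topic_normalizations` table above maps topics with random per-node parts onto stable metric labels. A Python sketch of that lookup (the names here are illustrative, not Zeek API):

```python
import re

# Sketch of the topic normalization above: topics containing random parts
# are rewritten to a stable label before being used as a metric label; a
# replacement of "" would skip recording the metric entirely.
TOPIC_NORMALIZATIONS = [
    (re.compile(r"^zeek/cluster/nodeid/.*"), "zeek/cluster/nodeid/__normalized__"),
]

def normalize_topic(topic: str) -> str:
    for pattern, replacement in TOPIC_NORMALIZATIONS:
        if pattern.match(topic):
            return replacement
    return topic
```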


@@ -40,14 +40,14 @@ event zeek_init() &priority=5
return;
for ( fi in config_files )
-Input::add_table(Input::TableDescription($reader=Input::READER_CONFIG,
+Input::add_table([$reader=Input::READER_CONFIG,
$mode=Input::REREAD,
$source=fi,
$name=cat("config-", fi),
$idx=ConfigItem,
$val=ConfigItem,
$want_record=F,
-$destination=current_config));
+$destination=current_config]);
}
event InputConfig::new_value(name: string, source: string, id: string, value: any)
@@ -67,11 +67,11 @@ function read_config(filename: string)
local iname = cat("config-oneshot-", filename);
-Input::add_event(Input::EventDescription($reader=Input::READER_CONFIG,
+Input::add_event([$reader=Input::READER_CONFIG,
$mode=Input::MANUAL,
$source=filename,
$name=iname,
$fields=EventFields,
-$ev=config_line));
+$ev=config_line]);
Input::remove(iname);
}


@@ -60,7 +60,7 @@ global Config::cluster_set_option: event(ID: string, val: any, location: string)
function broadcast_option(ID: string, val: any, location: string) &is_used
{
for ( topic in Cluster::broadcast_topics )
-Cluster::publish(topic, Config::cluster_set_option, ID, val, location);
+Broker::publish(topic, Config::cluster_set_option, ID, val, location);
}
event Config::cluster_set_option(ID: string, val: any, location: string)
@@ -89,7 +89,7 @@ function set_value(ID: string, val: any, location: string &default = ""): bool
option_cache[ID] = OptionCacheValue($val=val, $location=location);
broadcast_option(ID, val, location);
@else
-Cluster::publish(Cluster::manager_topic, Config::cluster_set_option,
+Broker::publish(Cluster::manager_topic, Config::cluster_set_option,
ID, val, location);
@endif
@@ -109,7 +109,7 @@ event Cluster::node_up(name: string, id: string) &priority=-10
# When a node connects, send it all current Option values.
if ( name in Cluster::nodes )
for ( ID in option_cache )
-Cluster::publish(Cluster::node_topic(name), Config::cluster_set_option, ID, option_cache[ID]$val, option_cache[ID]$location);
+Broker::publish(Cluster::node_topic(name), Config::cluster_set_option, ID, option_cache[ID]$val, option_cache[ID]$location);
}
@endif
@@ -153,7 +153,7 @@ function config_option_changed(ID: string, new_value: any, location: string): an
event zeek_init() &priority=10
{
-Log::create_stream(LOG, Log::Stream($columns=Info, $ev=log_config, $path="config", $policy=log_policy));
+Log::create_stream(LOG, [$columns=Info, $ev=log_config, $path="config", $policy=log_policy]);
# Limit logging to the manager - everyone else just feeds off it.
@if ( !Cluster::is_enabled() || Cluster::local_node_type() == Cluster::MANAGER )


@@ -7,7 +7,6 @@
@load-sigs ./java
@load-sigs ./office
@load-sigs ./programming
-@load-sigs ./python
@load-sigs ./video
@load-sigs ./libmagic


@@ -41,3 +41,66 @@ signature file-elc {
file-mime "application/x-elc", 10
file-magic /\x3bELC[\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff]/
}
# Python 1 bytecode
signature file-pyc-1 {
file-magic /^(\xfc\xc4|\x99\x4e)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 2 bytecode
signature file-pyc-2 {
file-magic /^(\x87\xc6|[\x2a\x2d]\xed|[\x3b\x45\x59\x63\x6d\x77\x81\x8b\x8c\x95\x9f\xa9\xb3\xc7\xd1\xdb\xe5\xef\xf9]\xf2|\x03\xf3)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.0 bytecode
signature file-pyc-3-0 {
file-magic /^([\xb8\xc2\xcc\xd6\xe0\xea\xf4\xf5\xff]\x0b|[\x09\x13\x1d\x1f\x27\x3b]\x0c)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.1 bytecode
signature file-pyc-3-1 {
file-magic /^[\x45\x4f]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.2 bytecode
signature file-pyc-3-2 {
file-magic /^[\x58\x62\x6c]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.3 bytecode
signature file-pyc-3-3 {
file-magic /^[\x76\x80\x94\x9e]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.4 bytecode
signature file-pyc-3-4 {
file-magic /^[\xb2\xcc\xc6\xd0\xda\xe4\xee]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.5 bytecode
signature file-pyc-3-5 {
file-magic /^(\xf8\x0c|[\x02\x0c\x16\x17]\x0d)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.6 bytecode
signature file-pyc-3-6 {
file-magic /^[\x20\x21\x2a-\x2d\x2f-\x33]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.7 bytecode
signature file-pyc-3-7 {
file-magic /^[\x3e-\x42]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}


@@ -1,111 +0,0 @@
# Python magic numbers can be updated/added by looking at the list at
# https://github.com/python/cpython/blob/main/Include/internal/pycore_magic_number.h
# The numbers in the list are converted to little-endian and then to hex for the
# file-magic entries below.
# Python 1 bytecode
signature file-pyc-1 {
file-magic /^(\xfc\xc4|\x99\x4e)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 2 bytecode
signature file-pyc-2 {
file-magic /^(\x87\xc6|[\x2a\x2d]\xed|[\x3b\x45\x59\x63\x6d\x77\x81\x8b\x8c\x95\x9f\xa9\xb3\xc7\xd1\xdb\xe5\xef\xf9]\xf2|\x03\xf3)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.0 bytecode
signature file-pyc-3-0 {
file-magic /^([\xb8\xc2\xcc\xd6\xe0\xea\xf4\xf5\xff]\x0b|[\x09\x13\x1d\x1f\x27\x3b]\x0c)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.1 bytecode
signature file-pyc-3-1 {
file-magic /^[\x45\x4f]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.2 bytecode
signature file-pyc-3-2 {
file-magic /^[\x58\x62\x6c]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.3 bytecode
signature file-pyc-3-3 {
file-magic /^[\x76\x80\x94\x9e]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.4 bytecode
signature file-pyc-3-4 {
file-magic /^[\xb2\xcc\xc6\xd0\xda\xe4\xee]\x0c\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.5 bytecode
signature file-pyc-3-5 {
file-magic /^(\xf8\x0c|[\x02\x0c\x16\x17]\x0d)\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.6 bytecode
signature file-pyc-3-6 {
file-magic /^[\x20\x21\x2a-\x2d\x2f-\x33]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.7 bytecode
signature file-pyc-3-7 {
file-magic /^[\x3e-\x42]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.8 bytecode
signature file-pyc-3-8 {
file-magic /^[\x48\x49\x52-\x55]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.9 bytecode
signature file-pyc-3-9 {
file-magic /^[\x5c-\x61]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.10 bytecode
signature file-pyc-3-10 {
file-magic /^[\x66-\x6f]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.11 bytecode
signature file-pyc-3-11 {
file-magic /^[\x7a-\xa7]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.12 bytecode
signature file-pyc-3-12 {
file-magic /^[\xac-\xcb]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.13 bytecode
signature file-pyc-3-13 {
file-magic /^[\xde-\xf3]\x0d\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
# Python 3.14 bytecode
# This is in pre-release at this time, and may need to be updated as new
# versions come out.
signature file-pyc-3-14 {
file-magic /^[\x10-\x19]\x0e\x0d\x0a/
file-mime "application/x-python-bytecode", 80
}
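The header comment in the file above describes how these file-magic entries are derived: each CPython magic number from pycore_magic_number.h is packed as a 16-bit little-endian value and followed by `\r\n` at the start of a `.pyc` file. A minimal sketch of that conversion (the example values 3495 for CPython 3.11 and 20121 for Python 1.5 are taken from that list, not from this diff):

```python
import struct

def magic_to_signature_bytes(magic_number: int) -> str:
    """Render a CPython magic number as the escaped little-endian hex
    bytes that begin a .pyc file: the magic as 2 bytes LE, then \r\n."""
    raw = struct.pack("<H", magic_number) + b"\r\n"
    return "".join(f"\\x{b:02x}" for b in raw)

# 3495 is CPython 3.11's magic number; 0x0da7 in little-endian is a7 0d,
# which falls inside the file-pyc-3-11 range [\x7a-\xa7]\x0d above.
print(magic_to_signature_bytes(3495))   # \xa7\x0d\x0d\x0a

# 20121 is Python 1.5's magic number, matching \x99\x4e in file-pyc-1.
print(magic_to_signature_bytes(20121))  # \x99\x4e\x0d\x0a
```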


@@ -341,7 +341,7 @@ global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: A
event zeek_init() &priority=5
	{
-	Log::create_stream(Files::LOG, Log::Stream($columns=Info, $ev=log_files, $path="files", $policy=log_policy));
+	Log::create_stream(Files::LOG, [$columns=Info, $ev=log_files, $path="files", $policy=log_policy]);
	}

function set_info(f: fa_file)


@@ -24,10 +24,10 @@ export {
		STREAM = 2
	};

-	## The default input reader used. Defaults to :zeek:see:`Input::READER_ASCII`.
+	## The default input reader used. Defaults to `READER_ASCII`.
	option default_reader = READER_ASCII;

-	## The default reader mode used. Defaults to :zeek:see:`Input::MANUAL`.
+	## The default reader mode used. Defaults to `MANUAL`.
	option default_mode = MANUAL;

	## Separator between fields.
@@ -60,7 +60,7 @@ export {
		# Common definitions for tables and events

		## String that allows the reader to find the source of the data.
-		## For :zeek:see:`Input::READER_ASCII`, this is the filename.
+		## For `READER_ASCII`, this is the filename.
		source: string;

		## Reader to use for this stream.
@@ -112,7 +112,7 @@ export {
		##
		## The event is raised like if it had been declared as follows:
		## error_ev: function(desc: TableDescription, message: string, level: Reporter::Level) &optional;
-		## The actual declaration uses the :zeek:type:`any` type because of deficiencies of the Zeek type system.
+		## The actual declaration uses the ``any`` type because of deficiencies of the Zeek type system.
		error_ev: any &optional;

		## A key/value table that will be passed to the reader.
@@ -126,7 +126,7 @@ export {
		# Common definitions for tables and events

		## String that allows the reader to find the source.
-		## For :zeek:see:`Input::READER_ASCII`, this is the filename.
+		## For `READER_ASCII`, this is the filename.
		source: string;

		## Reader to use for this stream.
@@ -151,8 +151,8 @@ export {
		want_record: bool &default=T;

		## The event that is raised each time a new line is received from the
-		## reader. The event will receive an :zeek:see:`Input::EventDescription` record
-		## as the first argument, an :zeek:see:`Input::Event` enum as the second
+		## reader. The event will receive an Input::EventDescription record
+		## as the first argument, an Input::Event enum as the second
		## argument, and the fields (as specified in *fields*) as the following
		## arguments (this will either be a single record value containing
		## all fields, or each field value as a separate argument).
@@ -161,12 +161,12 @@ export {
		## Error event that is raised when an information, warning or error
		## is raised by the input stream. If the level is error, the stream will automatically
		## be closed.
-		## The event receives the :zeek:see:`Input::EventDescription` as the first argument, the
-		## message as the second argument and the :zeek:see:`Reporter::Level` as the third argument.
+		## The event receives the Input::EventDescription as the first argument, the
+		## message as the second argument and the Reporter::Level as the third argument.
		##
		## The event is raised like it had been declared as follows:
		## error_ev: function(desc: EventDescription, message: string, level: Reporter::Level) &optional;
-		## The actual declaration uses the :zeek:type:`any` type because of deficiencies of the Zeek type system.
+		## The actual declaration uses the ``any`` type because of deficiencies of the Zeek type system.
		error_ev: any &optional;

		## A key/value table that will be passed to the reader.
@@ -179,7 +179,7 @@ export {
	## file analysis framework.
	type AnalysisDescription: record {
		## String that allows the reader to find the source.
-		## For :zeek:see:`Input::READER_ASCII`, this is the filename.
+		## For `READER_ASCII`, this is the filename.
		source: string;

		## Reader to use for this stream. Compatible readers must be
@@ -205,14 +205,14 @@ export {
	## Create a new table input stream from a given source.
	##
-	## description: :zeek:see:`Input::TableDescription` record describing the source.
+	## description: `TableDescription` record describing the source.
	##
	## Returns: true on success.
	global add_table: function(description: Input::TableDescription) : bool;

	## Create a new event input stream from a given source.
	##
-	## description: :zeek:see:`Input::EventDescription` record describing the source.
+	## description: `EventDescription` record describing the source.
	##
	## Returns: true on success.
	global add_event: function(description: Input::EventDescription) : bool;
@@ -278,3 +278,4 @@ function force_update(id: string) : bool
	{
	return __force_update(id);
	}


@@ -11,9 +11,6 @@ module Intel;
global insert_item: event(item: Item) &is_used;
global insert_indicator: event(item: Item) &is_used;

-# Event to transfer the min_data_store to connecting nodes.
-global new_min_data_store: event(store: MinDataStore) &is_used;
-
# By default the manager sends its current min_data_store to connecting workers.
# During testing it's handy to suppress this, since receipt of the store
# introduces nondeterminism when mixed with explicit data insertions.
@@ -25,10 +22,9 @@ redef have_full_data = F;
@endif

@if ( Cluster::local_node_type() == Cluster::MANAGER )
-# The manager propagates remove_indicator() to workers.
-event remove_indicator(item: Item)
+event zeek_init()
	{
-	Cluster::publish(Cluster::worker_topic, remove_indicator, item);
+	Broker::auto_publish(Cluster::worker_topic, remove_indicator);
	}

# Handling of new worker nodes.
@@ -39,7 +35,7 @@ event Cluster::node_up(name: string, id: string)
	# this by the insert_indicator event.
	if ( send_store_on_node_up && name in Cluster::nodes && Cluster::nodes[name]$node_type == Cluster::WORKER )
		{
-		Cluster::publish(Cluster::node_topic(name), new_min_data_store, min_data_store);
+		Broker::publish_id(Cluster::node_topic(name), "Intel::min_data_store");
		}
	}
@@ -47,9 +43,6 @@ event Cluster::node_up(name: string, id: string)
# has to be distributed.
event Intel::new_item(item: Item) &priority=5
	{
-	# This shouldn't be required, pushing directly from
-	# the manager is more efficient and has less round
-	# trips for non-broker backends.
	local pt = Cluster::rr_topic(Cluster::proxy_pool, "intel_insert_rr_key");

	if ( pt == "" )
@@ -57,7 +50,7 @@ event Intel::new_item(item: Item) &priority=5
		# relaying via a proxy.
		pt = Cluster::worker_topic;

-	Cluster::publish(pt, Intel::insert_indicator, item);
+	Broker::publish(pt, Intel::insert_indicator, item);
	}

# Handling of item insertion triggered by remote node.
@@ -80,23 +73,18 @@ event Intel::match_remote(s: Seen) &priority=5
	}
@endif

@if ( Cluster::local_node_type() == Cluster::WORKER )
-event match_remote(s: Seen)
+event zeek_init()
	{
-	Cluster::publish(Cluster::manager_topic, match_remote, s);
-	}
-
-event remove_item(item: Item, purge_indicator: bool)
-	{
-	Cluster::publish(Cluster::manager_topic, remove_item, item, purge_indicator);
+	Broker::auto_publish(Cluster::manager_topic, match_remote);
+	Broker::auto_publish(Cluster::manager_topic, remove_item);
	}

# On a worker, the new_item event requires to trigger the insertion
# on the manager to update the back-end data store.
event Intel::new_item(item: Intel::Item) &priority=5
	{
-	Cluster::publish(Cluster::manager_topic, Intel::insert_item, item);
+	Broker::publish(Cluster::manager_topic, Intel::insert_item, item);
	}

# Handling of new indicators published by the manager.
@@ -104,39 +92,13 @@ event Intel::insert_indicator(item: Intel::Item) &priority=5
	{
	Intel::_insert(item, F);
	}

-function invoke_indicator_hook(store: MinDataStore, h: hook(v: string, t: Intel::Type))
-	{
-	for ( a in store$host_data )
-		hook h(cat(a), Intel::ADDR);
-
-	for ( sn in store$subnet_data)
-		hook h(cat(sn), Intel::SUBNET);
-
-	for ( [indicator_value, indicator_type] in store$string_data )
-		hook h(indicator_value, indicator_type);
-	}
-
-# Handling of a complete MinDataStore snapshot
-#
-# Invoke the removed and inserted hooks using the old and new min data store
-# instances, respectively. The way this event is used, the original
-# min_data_store should essentially be empty.
-event new_min_data_store(store: MinDataStore)
-	{
-	invoke_indicator_hook(min_data_store, Intel::indicator_removed);
-
-	min_data_store = store;
-
-	invoke_indicator_hook(min_data_store, Intel::indicator_inserted);
-	}
@endif

@if ( Cluster::local_node_type() == Cluster::PROXY )
event Intel::insert_indicator(item: Intel::Item) &priority=5
	{
	# Just forwarding from manager to workers.
-	Cluster::publish(Cluster::worker_topic, Intel::insert_indicator, item);
+	Broker::publish(Cluster::worker_topic, Intel::insert_indicator, item);
	}
@endif

Some files were not shown because too many files have changed in this diff.