update master and merge into this branch

Mauro Palumbo 2019-05-05 16:46:41 +02:00
commit c90eec6b54
1667 changed files with 12111 additions and 6888 deletions


@ -7,15 +7,7 @@ function new_version_hook
# test suite repos to check out on a CI system.
version=$1
if [ -d testing/external/zeek-testing ]; then
echo "Updating testing/external/commit-hash.zeek-testing"
( cd testing/external/zeek-testing && git fetch origin && git rev-parse origin/master ) > testing/external/commit-hash.zeek-testing
git add testing/external/commit-hash.zeek-testing
fi
./testing/scripts/update-external-repo-pointer.sh testing/external/zeek-testing testing/external/commit-hash.zeek-testing
if [ -d testing/external/zeek-testing-private ]; then
echo "Updating testing/external/commit-hash.zeek-testing-private"
( cd testing/external/zeek-testing-private && git fetch origin && git rev-parse origin/master ) > testing/external/commit-hash.zeek-testing-private
git add testing/external/commit-hash.zeek-testing-private
fi
./testing/scripts/update-external-repo-pointer.sh testing/external/zeek-testing-private testing/external/commit-hash.zeek-testing-private
}

CHANGES (276 lines changed)

@ -1,4 +1,280 @@
2.6-249 | 2019-04-26 19:26:44 -0700
* Fix parsing of hybrid IPv6-IPv4 addr literals with no zero compression (Jon Siwek, Corelight)
2.6-246 | 2019-04-25 10:22:11 -0700
* Add Zeekygen cross-reference links for some events (Jon Siwek, Corelight)
2.6-245 | 2019-04-23 18:42:02 -0700
* Expose TCP analyzer utility functions to derived classes (Vern Paxson, Corelight)
2.6-243 | 2019-04-22 19:42:52 -0700
* GH-234: rename Broxygen to Zeekygen along with roles/directives (Jon Siwek, Corelight)
* All "Broxygen" usages have been replaced in code, documentation, filenames, etc.
* Sphinx roles/directives like ":bro:see" are now ":zeek:see" (see the example below)
* The "--broxygen" command-line option is now "--zeekygen"
2.6-242 | 2019-04-22 22:43:09 +0200
* update SSL consts from TLS 1.3 (Johanna Amann)
2.6-241 | 2019-04-22 12:38:06 -0700
* Add 'g' character to conn.log history field to flag content gaps (Vern Paxson, Corelight)
There's also a small change to the TCP state machine: it now distrusts ACKs
appearing at the end of connections (in FIN or RST segments) so that they
won't count towards revealing a true content gap.
2.6-237 | 2019-04-19 12:00:37 -0700
* GH-236: Add zeek_script_loaded event, deprecate bro_script_loaded (Jon Siwek, Corelight)
Existing handlers for bro_script_loaded automatically alias to the new
zeek_script_loaded event, but emit a deprecation warning.
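For instance, a handler written directly against the new event might look like this (a sketch; the parameters mirror the long-standing bro_script_loaded signature of script path and nesting level):

event zeek_script_loaded(path: string, level: count)
    {
    print fmt("loaded %s at nesting level %d", path, level);
    }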
2.6-236 | 2019-04-19 11:16:35 -0700
* Add zeek_init/zeek_done events and deprecate bro_init/bro_done (Seth Hall, Corelight)
Any existing handlers for bro_init and bro_done will automatically alias
to the new zeek_init and zeek_done events such that code will not break,
but will emit a deprecation warning.
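A minimal sketch of handlers using the new names; old bro_init/bro_done handlers keep working but trigger the deprecation warning:

event zeek_init()
    {
    print "Zeek is starting up";
    }

event zeek_done()
    {
    print "Zeek is shutting down";
    }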
2.6-232 | 2019-04-18 09:34:13 +0200
* Prevent topk_merge from crashing when second argument is empty set (Jeff Barber)
2.6-230 | 2019-04-17 16:44:16 -0700
* Fix unit test failures on case-insensitive file systems (Jon Siwek, Corelight)
2.6-227 | 2019-04-16 17:44:31 -0700
* GH-237: add `@load foo.bro` -> foo.zeek fallback (Jon Siwek, Corelight)
When failing to locate a script with explicit .bro suffix, check for
whether one with a .zeek suffix exists and use it instead.
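For example, a legacy directive with an explicit .bro suffix (the script name here is hypothetical) now also matches the renamed file:

@load my-site/tuning.bro   # falls back to my-site/tuning.zeek if no .bro file exists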
2.6-225 | 2019-04-16 16:07:49 -0700
* Use .zeek file suffix in unit tests (Jon Siwek, Corelight)
2.6-223 | 2019-04-16 11:56:00 -0700
* Update tests and baselines due to renaming all scripts (Daniel Thayer)
* Rename all scripts to have ".zeek" file extension (Daniel Thayer)
* Add test cases to verify new file extension is recognized (Daniel Thayer)
* Fix the core/load-duplicates.bro test (Daniel Thayer)
* Update script search logic for new .zeek file extension (Daniel Thayer)
When searching for script files, look for both the new and old file
extensions. If a file with ".zeek" can't be found, then search for
a file with ".bro" as a fallback.
* Remove unnecessary ".bro" from @load directives (Daniel Thayer)
2.6-212 | 2019-04-12 10:12:31 -0700
* smb2_write_response event added (Mauro Palumbo)
2.6-210 | 2019-04-10 09:54:27 -0700
* Add options to tune BinPAC flowbuffer policy (Jon Siwek, Corelight)
2.6-208 | 2019-04-10 11:36:17 +0000
* Improve PE file analysis (Jon Siwek, Corelight)
* Set PE analyzer CMake dependencies correctly (Jon Siwek, Corelight)
2.6-205 | 2019-04-05 17:06:26 -0700
* Add script to update external test repo commit pointers (Jon Siwek, Corelight)
2.6-203 | 2019-04-04 16:35:52 -0700
* Update DTLS error handling (Johanna Amann, Corelight)
- Adds tuning options: SSL::dtls_max_version_errors and
SSL::dtls_max_reported_version_errors
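Both options are redef-able constants; the values below are only illustrative and assume they are simple count thresholds:

redef SSL::dtls_max_version_errors = 5;
redef SSL::dtls_max_reported_version_errors = 2;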
2.6-200 | 2019-04-03 09:44:53 -0700
* Fix reporter net_weird API usage for unknown_mobility_type
(Jon Siwek, Corelight)
* Remove variable content from weird names
This changes many weird names to move non-static content from the
weird name into the "addl" field to help ensure the total number of
weird names is reasonably bounded. Note the net_weird and flow_weird
events do not have an "addl" parameter, so information may no longer
be available in those cases -- to make it available again we'd need
to either (1) define new events that contain such a parameter, or
(2) change net_weird/flow_weird event signature (which is a breaking
change for user-code at the moment).
Also, the generic handling of binpac exceptions for analyzers which
do not otherwise catch and handle them has been changed from a Weird
to a ProtocolViolation.
Finally, a new "file_weird" event has been added for reporting
weirdness found during file analysis. (Jon Siwek, Corelight)
2.6-197 | 2019-04-03 09:08:58 -0700
* Make Syslog analyzer accept non-conformant messages that omit Priority.
(Jon Siwek, Corelight)
2.6-195 | 2019-03-27 12:36:34 -0700
* Reduce weird-stats overhead (Justin Azoff, Corelight)
2.6-193 | 2019-03-27 10:53:01 -0700
* Update now-broken Broker API usages (Jon Siwek, Corelight)
Related to https://github.com/zeek/broker/pull/38, see Broker's NEWS file
for C++ code migration hints.
2.6-192 | 2019-03-25 17:49:18 -0700
* Deprecate str_shell_escape, add safe_shell_quote replacement (Jon Siwek, Corelight)
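A usage sketch of the replacement function; the exact quoting behavior is assumed from its name and this entry, and the filename is made up:

event zeek_init()
    {
    local fname = "some file; odd name.txt";
    # Assumed: safe_shell_quote() returns a shell-safe, quoted copy of its argument.
    system(fmt("touch %s", safe_shell_quote(fname)));
    }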
2.6-191 | 2019-03-25 16:43:10 -0700
* Add support for SMB filenames to the intel framework (Stephen Hosom)
2.6-186 | 2019-03-25 09:41:57 -0700
* Added policy script for intel removal. (Jan Grashoefer)
* Added Intel::filter_item hook to filter intelligence items. (Jan Grashoefer)
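A sketch of a handler for the new hook; terminating the hook with break keeps the item out of the data store (the feed name is made up):

hook Intel::filter_item(item: Intel::Item)
    {
    # Drop indicators coming from a feed we do not want to act on.
    if ( item$meta$source == "untrusted-feed" )
        break;
    }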
2.6-178 | 2019-03-21 14:10:44 -0700
* Add support for parsing SMB 3.1.1 NegotiateContextList response values (Mauro Palumbo)
2.6-175 | 2019-03-20 19:25:11 -0700
* Parse SMB2 TRANSFORM_HEADER messages and generate new smb2_transform_header event (Mauro Palumbo)
2.6-172 | 2019-03-20 17:59:30 -0700
* Fix smb_files.log missing FUID field in read/write actions (Mauro Palumbo)
2.6-169 | 2019-03-19 19:12:47 -0700
* Add support for NFLOG link-layer type (Ryan Denniston)
2.6-167 | 2019-03-18 13:58:28 -0700
* GH-307: Build binpac as a shared lib, not static by default (Jon Siwek, Corelight)
2.6-166 | 2019-03-18 11:45:35 -0700
* Add source file path control options for Input and Intel frameworks (Christian Kreibich, Corelight)
This introduces the following redefinable string constants, empty by
default:
- InputAscii::path_prefix
- InputBinary::path_prefix
- Intel::path_prefix
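Being plain redef-able constants, they can be set like this (the directories are arbitrary examples):

redef InputAscii::path_prefix = "/var/zeek/inputs";
redef InputBinary::path_prefix = "/var/zeek/inputs";
redef Intel::path_prefix = "/var/zeek/intel";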
2.6-164 | 2019-03-15 19:45:48 -0700
* Migrate table-based for-loops to key-value iteration (Jon Siwek, Corelight)
* GH-154: Extend for-loops to allow iteration over a table's key-value pairs (Zeke Medley)
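In script code the migration looks like this (a generic sketch):

event zeek_init()
    {
    local t = table(["one"] = 1, ["two"] = 2);

    # Old style: iterate over keys and look each value up.
    for ( k in t )
        print k, t[k];

    # New style: iterate over key-value pairs directly.
    for ( key, val in t )
        print key, val;
    }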
2.6-161 | 2019-03-15 12:59:31 -0700
* Fix SSH remote_location geo-data not being logged for successful authNs. (Michael Dopheide)
2.6-159 | 2019-03-14 16:39:52 -0700
* Move NEWS file back into main repo from zeek-docs (Jon Siwek, Corelight)
2.6-158 | 2019-03-14 16:23:30 -0700
* Fix signed/unsigned comparison compiler warning (Jon Siwek, Corelight)
2.6-157 | 2019-03-14 16:18:13 +0000
* GH-250: Add VXLAN decapsulation support (Henrik Lund Kramshoej; Jon Siwek, Corelight)
Zeek now automatically decapsulates VXLAN traffic on UDP port
4789. It will log such sessions as Tunnel::VXLAN in tunnel.log and
proceed to analyze the inner payload. Two options allow tuning
the analysis:
* "Tunnel::vxlan_ports" controls the set of VXLAN ports
to analyze/decapsulate.
* "Tunnel::validate_vxlan_checksums" allows for tuning of how
checksums associated with the outer UDP header of a possible
VXLAN tunnel are handled.
A new "vxlan_packet" event also provides per-packet access to
VXLAN traffic.
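As a usage sketch, the port set can be extended and checksum validation relaxed; 8472/udp is only an example, and treating validate_vxlan_checksums as an on/off switch is an assumption based on the description above:

redef Tunnel::vxlan_ports += { 8472/udp };   # e.g. the Linux kernel's default VXLAN port
redef Tunnel::validate_vxlan_checksums = F;  # assumed boolean toggle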
2.6-154 | 2019-03-13 17:28:26 -0700
* Decrease memory usage via deferred list/dict initialization (Justin Azoff, Corelight)
2.6-152 | 2019-03-13 13:46:17 -0700
* Add field to the default http.log for the Origin header (Nate Guagenti)
2.6-149 | 2019-03-13 18:21:59 +0000
* GH-289: Add options to limit entries in http.log file fields. The
"orig_fuids", "orig_filenames", "orig_mime_types" http.log fields
as well as their "resp" counterparts are now limited to having
"HTTP::max_files_orig" or "HTTP::max_files_resp" entries, which
are 15 by default. The limit can also be ignored case-by-case via
the "HTTP::max_files_policy" hook. (Jon Siwek, Corelight)
* GH-282: Remove JSON formatter's range restriction on numbers. It
now produces numbers as large as is required to match the data it
needs to represent. (Jon Siwek, Corelight)
* GH-281: Improve parsing of Google Pixel user agent. (Jon Siwek,
Corelight)
* GH-286: Check for record type mismatch in ternary operator. (Jon
Siwek, Corelight)
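For the http.log file-field limits in the first item above, raising them is a one-line redef each (the values are arbitrary):

redef HTTP::max_files_orig = 50;
redef HTTP::max_files_resp = 50;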
2.6-141 | 2019-03-08 18:36:25 -0800
* Improve DNS query queuing logic (Jon Siwek, Corelight)
2.6-140 | 2019-03-08 16:21:42 -0800
* Improve performance of DNS policy scripts (Justin Azoff, Corelight)
2.6-135 | 2019-03-07 13:14:00 -0800
* Fix typos in dnp3-protocol.pac (g0nzu1)
2.6-132 | 2019-03-06 15:30:58 -0800
* GH-219: revert a breaking change to |x| operator for interval/time (Jon Siwek, Corelight)
2.6-130 | 2019-02-22 14:56:41 -0600
* Make input framework parse whitespace around various data types. (Johanna Amann, Corelight)


@ -97,7 +97,15 @@ FindRequiredPackage(ZLIB)
if (NOT BINPAC_EXE_PATH AND
EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/aux/binpac/CMakeLists.txt)
set(ENABLE_STATIC_ONLY_SAVED ${ENABLE_STATIC_ONLY})
if ( BUILD_STATIC_BINPAC )
set(ENABLE_STATIC_ONLY true)
endif()
add_subdirectory(aux/binpac)
set(ENABLE_STATIC_ONLY ${ENABLE_STATIC_ONLY_SAVED})
endif ()
FindRequiredPackage(BinPAC)
@ -286,10 +294,14 @@ if ( BROKER_ROOT_DIR )
set(brodeps ${brodeps} ${BROKER_LIBRARY} ${CAF_LIBRARIES})
include_directories(BEFORE ${BROKER_INCLUDE_DIR})
else ()
set(ENABLE_STATIC_ONLY_SAVED ${ENABLE_STATIC_ONLY})
if ( BUILD_STATIC_BROKER )
set(ENABLE_STATIC_ONLY true)
endif()
add_subdirectory(aux/broker)
set(ENABLE_STATIC_ONLY ${ENABLE_STATIC_ONLY_SAVED})
if ( BUILD_STATIC_BROKER )
set(brodeps ${brodeps} broker_static)

NEWS (1 line changed)

@ -1 +0,0 @@
doc/install/NEWS.rst

NEWS (normal file, 2530 lines; diff suppressed because it is too large)


@ -1 +1 @@
2.6-130
2.6-249

@ -1 +1 @@
Subproject commit 0fae77f96abe63c93c2b8ab902651ad42e5d6de4
Subproject commit 1b5375e9f81ecec59f983e6abe86300c6bbbcb8f

@ -1 +1 @@
Subproject commit 24d7a40fa81c906510150fb89ff15579be282bb2
Subproject commit 04c7e27a22491a91ee309877253da0922d0822bc

@ -1 +1 @@
Subproject commit d5839843727b2dd17f2f85159522879f0d455318
Subproject commit 8668422406cb74f4f0c574a0c9b6365a21f3e81a

@ -1 +1 @@
Subproject commit 1748d8fe7fa2f32d775045079dd11d3048cb1696
Subproject commit 39ae4a469d6ae86c12b49020b361da4fcab24b5b

@ -1 +1 @@
Subproject commit 7065ab0d25f3db797b0290724da87c02c262827c
Subproject commit 56408c5582c80db6774c8b25642149dfb542345a

@ -1 +1 @@
Subproject commit 46411f7e4235f119fea5f38fc0329a60631400e3
Subproject commit ba482418c4e16551fd7b9128a4082348ef2842f0

cmake (2 lines changed)

@ -1 +1 @@
Subproject commit 6135c1a6639dfbfcf9b1fd720fa6a96118b3ab43
Subproject commit 5521da04df0190e3362e4c5164df5c2c8884dd2c

configure (4 lines changed)

@ -53,6 +53,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--enable-jemalloc link against jemalloc
--enable-broccoli build or install the Broccoli library (deprecated)
--enable-static-broker build broker statically (ignored if --with-broker is specified)
--enable-static-binpac build binpac statically (ignored if --with-binpac is specified)
--disable-broctl don't install Broctl
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
@ -227,6 +228,9 @@ while [ $# -ne 0 ]; do
--enable-static-broker)
append_cache_entry BUILD_STATIC_BROKER BOOL true
;;
--enable-static-binpac)
append_cache_entry BUILD_STATIC_BINPAC BOOL true
;;
--disable-broctl)
append_cache_entry INSTALL_BROCTL BOOL false
;;

doc (2 lines changed)

@ -1 +1 @@
Subproject commit 650a136dccefe44fa276e4fb06d9dc854f9ab06c
Subproject commit 073bb08473b8172b8bb175e0702204f15f522392


@ -99,7 +99,7 @@ Record process status in file
\fB\-W\fR,\ \-\-watchdog
activate watchdog timer
.TP
\fB\-X\fR,\ \-\-broxygen <cfgfile>
\fB\-X\fR,\ \-\-zeekygen <cfgfile>
generate documentation based on config file
.TP
\fB\-\-pseudo\-realtime[=\fR<speedup>]
@ -150,7 +150,7 @@ ASCII log file extension
Output file for script execution statistics
.TP
.B BRO_DISABLE_BROXYGEN
Disable Broxygen documentation support
Disable Zeekygen (Broxygen) documentation support
.SH AUTHOR
.B bro
was written by The Bro Project <info@bro.org>.


@ -2,8 +2,8 @@ include(InstallPackageConfigFile)
install(DIRECTORY ./ DESTINATION ${BRO_SCRIPT_INSTALL_PATH} FILES_MATCHING
PATTERN "site/local*" EXCLUDE
PATTERN "test-all-policy.bro" EXCLUDE
PATTERN "*.bro"
PATTERN "test-all-policy.zeek" EXCLUDE
PATTERN "*.zeek"
PATTERN "*.sig"
PATTERN "*.fp"
)
@ -11,6 +11,6 @@ install(DIRECTORY ./ DESTINATION ${BRO_SCRIPT_INSTALL_PATH} FILES_MATCHING
# Install all local* scripts as config files since they are meant to be
# user modify-able.
InstallPackageConfigFile(
${CMAKE_CURRENT_SOURCE_DIR}/site/local.bro
${CMAKE_CURRENT_SOURCE_DIR}/site/local.zeek
${BRO_SCRIPT_INSTALL_PATH}/site
local.bro)
local.zeek)


@ -29,12 +29,12 @@ export {
## to know where to write the file to. If not specified, then
## a filename in the format "extract-<source>-<id>" is
## automatically assigned (using the *source* and *id*
## fields of :bro:see:`fa_file`).
## fields of :zeek:see:`fa_file`).
extract_filename: string &optional;
## The maximum allowed file size in bytes of *extract_filename*.
## Once reached, a :bro:see:`file_extraction_limit` event is
## Once reached, a :zeek:see:`file_extraction_limit` event is
## raised and the analyzer will be removed unless
## :bro:see:`FileExtract::set_limit` is called to increase the
## :zeek:see:`FileExtract::set_limit` is called to increase the
## limit. A value of zero means "no limit".
extract_limit: count &default=default_limit;
};
@ -75,7 +75,7 @@ event file_extraction_limit(f: fa_file, args: Files::AnalyzerArgs, limit: count,
f$info$extracted_size = limit;
}
event bro_init() &priority=10
event zeek_init() &priority=10
{
Files::register_analyzer_add_callback(Files::ANALYZER_EXTRACT, on_add);
}
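A brief usage sketch for the options documented above, attaching the extraction analyzer from a file_new handler (the filename pattern and size limit are arbitrary examples):

event file_new(f: fa_file)
    {
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
                        [$extract_filename = fmt("extract-%s-%s", f$source, f$id),
                         $extract_limit = 5 * 1024 * 1024]);
    }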


@ -1,6 +1,6 @@
module PE;
@load ./consts.bro
@load ./consts
export {
redef enum Log::ID += { LOG };
@ -55,7 +55,7 @@ redef record fa_file += {
const pe_mime_types = { "application/x-dosexec" };
event bro_init() &priority=5
event zeek_init() &priority=5
{
Files::register_for_mime_types(Files::ANALYZER_PE, pe_mime_types);
Log::create_stream(LOG, [$columns=Info, $ev=log_pe, $path="pe"]);


@ -193,7 +193,7 @@ event Input::end_of_data(name: string, source: string)
start_watching();
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(Unified2::LOG, [$columns=Info, $ev=log_unified2, $path="unified2"]);
@ -289,9 +289,9 @@ event file_state_remove(f: fa_file)
{
# In case any events never had matching packets, flush
# the extras to the log.
for ( i in f$u2_events )
for ( i, ev in f$u2_events )
{
Log::write(LOG, create_info(f$u2_events[i]));
Log::write(LOG, create_info(ev));
}
}
}


@ -29,7 +29,7 @@ export {
global log_x509: event(rec: Info);
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509, $path="x509"]);


@ -5,7 +5,7 @@
##! particular analyzer for new connections.
##!
##! Protocol analyzers are identified by unique tags of type
##! :bro:type:`Analyzer::Tag`, such as :bro:enum:`Analyzer::ANALYZER_HTTP`.
##! :zeek:type:`Analyzer::Tag`, such as :zeek:enum:`Analyzer::ANALYZER_HTTP`.
##! These tags are defined internally by
##! the analyzers themselves, and documented in their analyzer-specific
##! description along with the events that they generate.
@ -17,7 +17,7 @@ module Analyzer;
export {
## If true, all available analyzers are initially disabled at startup.
## One can then selectively enable them with
## :bro:id:`Analyzer::enable_analyzer`.
## :zeek:id:`Analyzer::enable_analyzer`.
global disable_all = F &redef;
## Enables an analyzer. Once enabled, the analyzer may be used for analysis
@ -109,7 +109,7 @@ export {
## Automatically creates a BPF filter for the specified protocol based
## on the data supplied for the protocol through the
## :bro:see:`Analyzer::register_for_ports` function.
## :zeek:see:`Analyzer::register_for_ports` function.
##
## tag: The analyzer tag.
##
@ -135,7 +135,7 @@ export {
global ports: table[Analyzer::Tag] of set[port];
event bro_init() &priority=5
event zeek_init() &priority=5
{
if ( disable_all )
__disable_all_analyzers();


@ -30,7 +30,7 @@ export {
};
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(Broker::LOG, [$columns=Info, $path="broker"]);
}


@ -10,19 +10,19 @@ export {
## Default interval to retry listening on a port if it's currently in
## use already. Use of the BRO_DEFAULT_LISTEN_RETRY environment variable
## (set as a number of seconds) will override this option and also
## any values given to :bro:see:`Broker::listen`.
## any values given to :zeek:see:`Broker::listen`.
const default_listen_retry = 30sec &redef;
## Default address on which to listen.
##
## .. bro:see:: Broker::listen
## .. zeek:see:: Broker::listen
const default_listen_address = getenv("BRO_DEFAULT_LISTEN_ADDRESS") &redef;
## Default interval to retry connecting to a peer if it cannot be made to
## work initially, or if it ever becomes disconnected. Use of the
## BRO_DEFAULT_CONNECT_RETRY environment variable (set as number of
## seconds) will override this option and also any values given to
## :bro:see:`Broker::peer`.
## :zeek:see:`Broker::peer`.
const default_connect_retry = 30sec &redef;
## If true, do not use SSL for network connections. By default, SSL will
@ -47,7 +47,7 @@ export {
const ssl_certificate = "" &redef;
## Passphrase to decrypt the private key specified by
## :bro:see:`Broker::ssl_keyfile`. If set, Bro will require valid
## :zeek:see:`Broker::ssl_keyfile`. If set, Bro will require valid
## certificates for all peers.
const ssl_passphrase = "" &redef;
@ -96,7 +96,7 @@ export {
## Forward all received messages to subscribing peers.
const forward_messages = F &redef;
## Whether calling :bro:see:`Broker::peer` will register the Broker
## Whether calling :zeek:see:`Broker::peer` will register the Broker
## system as an I/O source that will block the process from shutting
## down. For example, set this to false when you are reading pcaps,
but also want to initiate a Broker peering and still shut down after
@ -107,7 +107,7 @@ export {
## id is appended when writing to a particular stream.
const default_log_topic_prefix = "bro/logs/" &redef;
## The default implementation for :bro:see:`Broker::log_topic`.
## The default implementation for :zeek:see:`Broker::log_topic`.
function default_log_topic(id: Log::ID, path: string): string
{
return default_log_topic_prefix + cat(id);
@ -116,7 +116,7 @@ export {
## A function that will be called for each log entry to determine what
## broker topic string will be used for sending it to peers. The
## default implementation will return a value based on
## :bro:see:`Broker::default_log_topic_prefix`.
## :zeek:see:`Broker::default_log_topic_prefix`.
##
## id: the ID associated with the log stream entry that will be sent.
##
@ -232,7 +232,7 @@ export {
##
## Returns: the bound port or 0/? on failure.
##
## .. bro:see:: Broker::status
## .. zeek:see:: Broker::status
global listen: function(a: string &default = default_listen_address,
p: port &default = default_port,
retry: interval &default = default_listen_retry): port;
@ -252,7 +252,7 @@ export {
## it's a new peer. The actual connection may not be established
## until a later point in time.
##
## .. bro:see:: Broker::status
## .. zeek:see:: Broker::status
global peer: function(a: string, p: port &default=default_port,
retry: interval &default=default_connect_retry): bool;
@ -262,12 +262,12 @@ export {
## just means that we won't exchange any further information with it
## unless peering resumes later.
##
## a: the address used in previous successful call to :bro:see:`Broker::peer`.
## a: the address used in previous successful call to :zeek:see:`Broker::peer`.
##
## p: the port used in previous successful call to :bro:see:`Broker::peer`.
## p: the port used in previous successful call to :zeek:see:`Broker::peer`.
##
## Returns: true if the arguments match a previously successful call to
## :bro:see:`Broker::peer`.
## :zeek:see:`Broker::peer`.
##
## TODO: We do not have a function yet to terminate a connection.
global unpeer: function(a: string, p: port): bool;
@ -298,7 +298,7 @@ export {
## Register interest in all peer event messages that use a certain topic
## prefix. Note that subscriptions may not be altered immediately after
## calling (except during :bro:see:`bro_init`).
## calling (except during :zeek:see:`zeek_init`).
##
## topic_prefix: a prefix to match against remote message topics.
## e.g. an empty prefix matches everything and "a" matches
@ -309,10 +309,10 @@ export {
## Unregister interest in all peer event messages that use a topic prefix.
## Note that subscriptions may not be altered immediately after calling
## (except during :bro:see:`bro_init`).
## (except during :zeek:see:`zeek_init`).
##
## topic_prefix: a prefix previously supplied to a successful call to
## :bro:see:`Broker::subscribe` or :bro:see:`Broker::forward`.
## :zeek:see:`Broker::subscribe` or :zeek:see:`Broker::forward`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
global unsubscribe: function(topic_prefix: string): bool;
@ -320,8 +320,8 @@ export {
## Register a topic prefix subscription for events that should only be
## forwarded to any subscribing peers and not raise any event handlers
## on the receiving/forwarding node. i.e. it's the same as
## :bro:see:`Broker::subscribe` except matching events are not raised
## on the receiver, just forwarded. Use :bro:see:`Broker::unsubscribe`
## :zeek:see:`Broker::subscribe` except matching events are not raised
## on the receiver, just forwarded. Use :zeek:see:`Broker::unsubscribe`
## with the same argument to undo this operation.
##
## topic_prefix: a prefix to match against remote message topics.
@ -346,9 +346,9 @@ export {
## Stop automatically sending an event to peers upon local dispatch.
##
## topic: a topic originally given to :bro:see:`Broker::auto_publish`.
## topic: a topic originally given to :zeek:see:`Broker::auto_publish`.
##
## ev: an event originally given to :bro:see:`Broker::auto_publish`.
## ev: an event originally given to :zeek:see:`Broker::auto_publish`.
##
## Returns: true if automatic events will not occur for the topic/event
## pair.
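A small end-to-end sketch of the calls documented in this file (topic name, address, and port are made up):

event zeek_init()
    {
    Broker::subscribe("myorg/zeek/events");
    Broker::peer("127.0.0.1", 9999/tcp);
    }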


@ -353,7 +353,7 @@ export {
##
## Returns: a set with the keys. If you expect the keys to be of
## non-uniform type, consider using
## :bro:see:`Broker::set_iterator` to iterate over the result.
## :zeek:see:`Broker::set_iterator` to iterate over the result.
global keys: function(h: opaque of Broker::Store): QueryResult;
## Deletes all of a store's content, it will be empty afterwards.


@ -17,7 +17,7 @@ redef Broker::log_topic = Cluster::rr_log_topic;
# If this script isn't found anywhere, the cluster bombs out.
# Loading the cluster framework requires that a script by this name exists
# somewhere in the BROPATH. The only thing in the file should be the
# cluster definition in the :bro:id:`Cluster::nodes` variable.
# cluster definition in the :zeek:id:`Cluster::nodes` variable.
@load cluster-layout
@if ( Cluster::node in Cluster::nodes )


@ -1,8 +1,8 @@
##! A framework for establishing and controlling a cluster of Bro instances.
##! In order to use the cluster framework, a script named
##! ``cluster-layout.bro`` must exist somewhere in Bro's script search path
##! which has a cluster definition of the :bro:id:`Cluster::nodes` variable.
##! The ``CLUSTER_NODE`` environment variable or :bro:id:`Cluster::node`
##! ``cluster-layout.zeek`` must exist somewhere in Bro's script search path
##! which has a cluster definition of the :zeek:id:`Cluster::nodes` variable.
##! The ``CLUSTER_NODE`` environment variable or :zeek:id:`Cluster::node`
must also be set and the cluster framework loaded as a package like
##! ``@load base/frameworks/cluster``.
@ -44,23 +44,23 @@ export {
const nodeid_topic_prefix = "bro/cluster/nodeid/" &redef;
## Name of the node on which master data stores will be created if no other
## has already been specified by the user in :bro:see:`Cluster::stores`.
## has already been specified by the user in :zeek:see:`Cluster::stores`.
## An empty value means "use whatever name corresponds to the manager
## node".
const default_master_node = "" &redef;
## The type of data store backend that will be used for all data stores if
## no other has already been specified by the user in :bro:see:`Cluster::stores`.
## no other has already been specified by the user in :zeek:see:`Cluster::stores`.
const default_backend = Broker::MEMORY &redef;
## The type of persistent data store backend that will be used for all data
## stores if no other has already been specified by the user in
## :bro:see:`Cluster::stores`. This will be used when script authors call
## :bro:see:`Cluster::create_store` with the *persistent* argument set true.
## :zeek:see:`Cluster::stores`. This will be used when script authors call
## :zeek:see:`Cluster::create_store` with the *persistent* argument set true.
const default_persistent_backend = Broker::SQLITE &redef;
## Setting a default dir will, for persistent backends that have not
## been given an explicit file path via :bro:see:`Cluster::stores`,
## been given an explicit file path via :zeek:see:`Cluster::stores`,
## automatically create a path within this dir that is based on the name of
## the data store.
const default_store_dir = "" &redef;
@ -81,21 +81,21 @@ export {
## Parameters used for configuring the backend.
options: Broker::BackendOptions &default=Broker::BackendOptions();
## A resync/reconnect interval to pass through to
## :bro:see:`Broker::create_clone`.
## :zeek:see:`Broker::create_clone`.
clone_resync_interval: interval &default=Broker::default_clone_resync_interval;
## A staleness duration to pass through to
## :bro:see:`Broker::create_clone`.
## :zeek:see:`Broker::create_clone`.
clone_stale_interval: interval &default=Broker::default_clone_stale_interval;
## A mutation buffer interval to pass through to
## :bro:see:`Broker::create_clone`.
## :zeek:see:`Broker::create_clone`.
clone_mutation_buffer_interval: interval &default=Broker::default_clone_mutation_buffer_interval;
};
## A table of cluster-enabled data stores that have been created, indexed
## by their name. This table will be populated automatically by
## :bro:see:`Cluster::create_store`, but if you need to customize
## :zeek:see:`Cluster::create_store`, but if you need to customize
## the options related to a particular data store, you may redef this
## table. Calls to :bro:see:`Cluster::create_store` will first check
## table. Calls to :zeek:see:`Cluster::create_store` will first check
## the table for an entry of the same name and, if found, will use the
## predefined options there when setting up the store.
global stores: table[string] of StoreInfo &default=StoreInfo() &redef;
@ -174,15 +174,15 @@ export {
## This function can be called at any time to determine if the cluster
## framework is being enabled for this run.
##
## Returns: True if :bro:id:`Cluster::node` has been set.
## Returns: True if :zeek:id:`Cluster::node` has been set.
global is_enabled: function(): bool;
## This function can be called at any time to determine what type of
## cluster node the current Bro instance is going to be acting as.
## If :bro:id:`Cluster::is_enabled` returns false, then
## :bro:enum:`Cluster::NONE` is returned.
## If :zeek:id:`Cluster::is_enabled` returns false, then
## :zeek:enum:`Cluster::NONE` is returned.
##
## Returns: The :bro:type:`Cluster::NodeType` the calling node acts as.
## Returns: The :zeek:type:`Cluster::NodeType` the calling node acts as.
global local_node_type: function(): NodeType;
## This gives the value for the number of workers currently connected to,
@ -192,7 +192,7 @@ export {
global worker_count: count = 0;
## The cluster layout definition. This should be placed into a file
## named cluster-layout.bro somewhere in the BROPATH. It will be
## named cluster-layout.zeek somewhere in the BROPATH. It will be
## automatically loaded if the CLUSTER_NODE environment variable is set.
## Note that BroControl handles all of this automatically.
## The table is typically indexed by node names/labels (e.g. "manager"
@ -200,7 +200,7 @@ export {
const nodes: table[string] of Node = {} &redef;
## Indicates whether or not the manager will act as the logger and receive
## logs. This value should be set in the cluster-layout.bro script (the
## logs. This value should be set in the cluster-layout.zeek script (the
## value should be true only if no logger is specified in Cluster::nodes).
## Note that BroControl handles this automatically.
const manager_is_logger = T &redef;
@ -241,8 +241,8 @@ export {
## Retrieve the topic associated with a specific node in the cluster.
##
## id: the id of the cluster node (from :bro:see:`Broker::EndpointInfo`
## or :bro:see:`Broker::node_id`.
## id: the id of the cluster node (from :zeek:see:`Broker::EndpointInfo`
## or :zeek:see:`Broker::node_id`.
##
## Returns: a topic string that may used to send a message exclusively to
## a given cluster node.
@ -340,10 +340,8 @@ event Broker::peer_added(endpoint: Broker::EndpointInfo, msg: string) &priority=
event Broker::peer_lost(endpoint: Broker::EndpointInfo, msg: string) &priority=10
{
for ( node_name in nodes )
for ( node_name, n in nodes )
{
local n = nodes[node_name];
if ( n?$id && n$id == endpoint$id )
{
Cluster::log(fmt("node down: %s", node_name));
@ -361,7 +359,7 @@ event Broker::peer_lost(endpoint: Broker::EndpointInfo, msg: string) &priority=1
}
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
# If a node is given, but it's an unknown name we need to fail.
if ( node != "" && node !in nodes )


@ -58,17 +58,17 @@ export {
alive_count: count &default = 0;
};
## The specification for :bro:see:`Cluster::proxy_pool`.
## The specification for :zeek:see:`Cluster::proxy_pool`.
global proxy_pool_spec: PoolSpec =
PoolSpec($topic = "bro/cluster/pool/proxy",
$node_type = Cluster::PROXY) &redef;
## The specification for :bro:see:`Cluster::worker_pool`.
## The specification for :zeek:see:`Cluster::worker_pool`.
global worker_pool_spec: PoolSpec =
PoolSpec($topic = "bro/cluster/pool/worker",
$node_type = Cluster::WORKER) &redef;
## The specification for :bro:see:`Cluster::logger_pool`.
## The specification for :zeek:see:`Cluster::logger_pool`.
global logger_pool_spec: PoolSpec =
PoolSpec($topic = "bro/cluster/pool/logger",
$node_type = Cluster::LOGGER) &redef;
@ -120,10 +120,10 @@ export {
global rr_topic: function(pool: Pool, key: string &default=""): string;
## Distributes log message topics among logger nodes via round-robin.
## This will be automatically assigned to :bro:see:`Broker::log_topic`
## if :bro:see:`Cluster::enable_round_robin_logging` is enabled.
## This will be automatically assigned to :zeek:see:`Broker::log_topic`
## if :zeek:see:`Cluster::enable_round_robin_logging` is enabled.
## If no logger nodes are active, then this will return the value
## of :bro:see:`Broker::default_log_topic`.
## of :zeek:see:`Broker::default_log_topic`.
global rr_log_topic: function(id: Log::ID, path: string): string;
}
@ -136,7 +136,7 @@ export {
## Returns: F if a node of the same name already exists in the pool, else T.
global init_pool_node: function(pool: Pool, name: string): bool;
## Mark a pool node as alive/online/available. :bro:see:`Cluster::hrw_topic`
## Mark a pool node as alive/online/available. :zeek:see:`Cluster::hrw_topic`
## will distribute keys to nodes marked as alive.
##
## pool: the pool to which the node belongs.
@ -146,7 +146,7 @@ global init_pool_node: function(pool: Pool, name: string): bool;
## Returns: F if the node does not exist in the pool, else T.
global mark_pool_node_alive: function(pool: Pool, name: string): bool;
## Mark a pool node as dead/offline/unavailable. :bro:see:`Cluster::hrw_topic`
## Mark a pool node as dead/offline/unavailable. :zeek:see:`Cluster::hrw_topic`
## will not distribute keys to nodes marked as dead.
##
## pool: the pool to which the node belongs.
@ -246,10 +246,8 @@ event Cluster::node_down(name: string, id: string) &priority=10
function site_id_in_pool(pool: Pool, site_id: count): bool
{
for ( i in pool$nodes )
for ( i, pn in pool$nodes )
{
local pn = pool$nodes[i];
if ( pn$site_id == site_id )
return T;
}
@ -326,7 +324,7 @@ function mark_pool_node_dead(pool: Pool, name: string): bool
return T;
}
event bro_init()
event zeek_init()
{
worker_pool = register_pool(worker_pool_spec);
proxy_pool = register_pool(proxy_pool_spec);
@ -346,8 +344,8 @@ function pool_sorter(a: Pool, b: Pool): int
return strcmp(a$spec$topic, b$spec$topic);
}
# Needs to execute before the bro_init in setup-connections
event bro_init() &priority=-5
# Needs to execute before the zeek_init in setup-connections
event zeek_init() &priority=-5
{
if ( ! Cluster::is_enabled() )
return;
@ -395,10 +393,8 @@ event bro_init() &priority=-5
pet$excluded += pool$spec$max_nodes;
}
for ( nt in pool_eligibility )
for ( nt, pet in pool_eligibility )
{
pet = pool_eligibility[nt];
if ( pet$excluded > |pet$eligible_nodes| )
Reporter::fatal(fmt("not enough %s nodes to satisfy pool exclusivity requirements: need %d nodes", nt, pet$excluded));
}


@ -1,5 +1,5 @@
##! This script establishes communication among all nodes in a cluster
##! as defined by :bro:id:`Cluster::nodes`.
##! as defined by :zeek:id:`Cluster::nodes`.
@load ./main
@load ./pools
@ -42,7 +42,7 @@ function connect_peers_with_type(node_type: NodeType)
}
}
event bro_init() &priority=-10
event zeek_init() &priority=-10
{
if ( getenv("BROCTL_CHECK_CONFIG") != "" )
return;


@ -34,7 +34,7 @@ event config_line(description: Input::EventDescription, tpe: Input::Event, p: Ev
{
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
return;


@ -24,14 +24,14 @@ export {
location: string &optional &log;
};
## Event that can be handled to access the :bro:type:`Config::Info`
## Event that can be handled to access the :zeek:type:`Config::Info`
## record as it is sent on to the logging framework.
global log_config: event(rec: Info);
## This function is the config framework layer around the lower-level
## :bro:see:`Option::set` call. Config::set_value will set the configuration
## :zeek:see:`Option::set` call. Config::set_value will set the configuration
## value for all nodes in the cluster, no matter where it was called. Note
## that :bro:see:`Option::set` does not distribute configuration changes
## that :zeek:see:`Option::set` does not distribute configuration changes
## to other nodes.
##
## ID: The ID of the option to update.
@ -150,7 +150,7 @@ function config_option_changed(ID: string, new_value: any, location: string): an
return new_value;
}
event bro_init() &priority=10
event zeek_init() &priority=10
{
Log::create_stream(LOG, [$columns=Info, $ev=log_config, $path="config"]);
@ -159,9 +159,9 @@ event bro_init() &priority=10
# Iterate over all existing options and add ourselves as change handlers
# with a low priority so that we can log the changes.
local gids = global_ids();
for ( i in gids )
for ( i, gid in gids )
{
if ( ! gids[i]$option_value )
if ( ! gid$option_value )
next;
Option::set_change_handler(i, config_option_changed, -100);


@ -35,7 +35,7 @@ function weird_option_change_interval(ID: string, new_value: interval, location:
return new_value;
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Option::set_change_handler("Weird::sampling_whitelist", weird_option_change_sampling_whitelist, 5);
Option::set_change_handler("Weird::sampling_threshold", weird_option_change_count, 5);


@ -8,7 +8,7 @@ export {
## The topic prefix used for exchanging control messages via Broker.
const topic_prefix = "bro/control";
## Whether the controllee should call :bro:see:`Broker::listen`.
## Whether the controllee should call :zeek:see:`Broker::listen`.
## In a cluster, this isn't needed since the setup process calls it.
const controllee_listen = T &redef;
@ -18,7 +18,7 @@ export {
## The port of the host that will be controlled.
const host_port = 0/tcp &redef;
## If :bro:id:`Control::host` is a non-global IPv6 address and
## If :zeek:id:`Control::host` is a non-global IPv6 address and
## requires a specific :rfc:`4007` ``zone_id``, it can be set here.
const zone_id = "" &redef;
@ -45,7 +45,7 @@ export {
## Event for requesting the value of an ID (a variable).
global id_value_request: event(id: string);
## Event for returning the value of an ID after an
## :bro:id:`Control::id_value_request` event.
## :zeek:id:`Control::id_value_request` event.
global id_value_response: event(id: string, val: string);
## Requests the current communication status.
@ -62,7 +62,7 @@ export {
## updated.
global configuration_update_request: event();
## This event is a wrapper and alias for the
## :bro:id:`Control::configuration_update_request` event.
## :zeek:id:`Control::configuration_update_request` event.
## This event is also a primary hooking point for the control framework.
global configuration_update: event();
## Message in response to a configuration update request.


@ -39,7 +39,7 @@ redef record connection += {
dpd: Info &optional;
};
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(DPD::LOG, [$columns=Info, $path="dpd"]);
}


@ -1,2 +0,0 @@
@load ./main.bro
@load ./magic


@ -0,0 +1,2 @@
@load ./main
@load ./magic


@ -18,19 +18,19 @@ export {
type AnalyzerArgs: record {
## An event which will be generated for all new file contents,
## chunk-wise. Used when *tag* (in the
## :bro:see:`Files::add_analyzer` function) is
## :bro:see:`Files::ANALYZER_DATA_EVENT`.
## :zeek:see:`Files::add_analyzer` function) is
## :zeek:see:`Files::ANALYZER_DATA_EVENT`.
chunk_event: event(f: fa_file, data: string, off: count) &optional;
## An event which will be generated for all new file contents,
## stream-wise. Used when *tag* is
## :bro:see:`Files::ANALYZER_DATA_EVENT`.
## :zeek:see:`Files::ANALYZER_DATA_EVENT`.
stream_event: event(f: fa_file, data: string) &optional;
} &redef;
## Contains all metadata related to the analysis of a given file.
## For the most part, fields here are derived from ones of the same name
## in :bro:see:`fa_file`.
## in :zeek:see:`fa_file`.
type Info: record {
## The time when the file was first seen.
ts: time &log;
@ -66,7 +66,7 @@ export {
analyzers: set[string] &default=string_set() &log;
## A mime type provided by the strongest file magic signature
## match against the *bof_buffer* field of :bro:see:`fa_file`,
## match against the *bof_buffer* field of :zeek:see:`fa_file`,
## or in the cases where no buffering of the beginning of file
## occurs, an initial guess of the mime type based on the first
## data seen.
@ -82,7 +82,7 @@ export {
## If the source of this file is a network connection, this field
## indicates if the data originated from the local network or not as
## determined by the configured :bro:see:`Site::local_nets`.
## determined by the configured :zeek:see:`Site::local_nets`.
local_orig: bool &log &optional;
## If the source of this file is a network connection, this field
@ -118,8 +118,8 @@ export {
const disable: table[Files::Tag] of bool = table() &redef;
## The salt concatenated to unique file handle strings generated by
## :bro:see:`get_file_handle` before hashing them in to a file id
## (the *id* field of :bro:see:`fa_file`).
## :zeek:see:`get_file_handle` before hashing them in to a file id
## (the *id* field of :zeek:see:`fa_file`).
## Provided to help mitigate the possibility of manipulating parts of
## network connections that factor in to the file handle in order to
## generate two handles that would hash to the same file id.
@ -142,11 +142,11 @@ export {
## Returns: T if the file uid is known.
global file_exists: function(fuid: string): bool;
## Lookup an :bro:see:`fa_file` record with the file id.
## Lookup an :zeek:see:`fa_file` record with the file id.
##
## fuid: the file id.
##
## Returns: the associated :bro:see:`fa_file` record.
## Returns: the associated :zeek:see:`fa_file` record.
global lookup_file: function(fuid: string): fa_file;
## Allows the file reassembler to be used if it's necessary because the
@ -169,10 +169,10 @@ export {
## max: Maximum allowed size of the reassembly buffer.
global set_reassembly_buffer_size: function(f: fa_file, max: count);
## Sets the *timeout_interval* field of :bro:see:`fa_file`, which is
## Sets the *timeout_interval* field of :zeek:see:`fa_file`, which is
## used to determine the length of inactivity that is allowed for a file
## before internal state related to it is cleaned up. When used within
## a :bro:see:`file_timeout` handler, the analysis will delay timing out
## a :zeek:see:`file_timeout` handler, the analysis will delay timing out
## again for the period specified by *t*.
##
## f: the file.
@ -255,7 +255,7 @@ export {
##
## tag: Tag for the protocol analyzer having a callback being registered.
##
## reg: A :bro:see:`Files::ProtoRegistration` record.
## reg: A :zeek:see:`Files::ProtoRegistration` record.
##
## Returns: true if the protocol being registered was not previously registered.
global register_protocol: function(tag: Analyzer::Tag, reg: ProtoRegistration): bool;
@ -324,7 +324,7 @@ global mime_type_to_analyzers: table[string] of set[Files::Tag];
global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: AnalyzerArgs) = table();
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(Files::LOG, [$columns=Info, $ev=log_files, $path="files"]);
}


@ -193,7 +193,7 @@ export {
## Descriptive name that uniquely identifies the input source.
## Can be used to remove a stream at a later time.
## This will also be used for the unique *source* field of
## :bro:see:`fa_file`. Most of the time, the best choice for this
## :zeek:see:`fa_file`. Most of the time, the best choice for this
## field will be the same value as the *source* field.
name: string;


@ -47,4 +47,10 @@ export {
## fail_on_file_problem = T was the default behavior
## until Bro 2.6.
const fail_on_file_problem = F &redef;
## On input streams with a pathless or relative-path source filename,
## prefix the following path. This prefix can, but need not be, absolute.
## The default is to leave any filenames unchanged. This prefix has no
## effect if the source already is an absolute path.
const path_prefix = "" &redef;
}


@ -1,8 +0,0 @@
##! Interface for the binary input reader.
module InputBinary;
export {
## Size of data chunks to read from the input file at a time.
const chunk_size = 1024 &redef;
}


@ -0,0 +1,14 @@
##! Interface for the binary input reader.
module InputBinary;
export {
## Size of data chunks to read from the input file at a time.
const chunk_size = 1024 &redef;
## On input streams with a pathless or relative-path source filename,
## prefix the following path. This prefix can, but need not be, absolute.
## The default is to leave any filenames unchanged. This prefix has no
## effect if the source already is an absolute path.
const path_prefix = "" &redef;
}


@ -16,7 +16,7 @@ redef have_full_data = F;
@endif
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event bro_init()
event zeek_init()
{
Broker::auto_publish(Cluster::worker_topic, remove_indicator);
}
@ -67,7 +67,7 @@ event Intel::match_remote(s: Seen) &priority=5
@endif
@if ( Cluster::local_node_type() == Cluster::WORKER )
event bro_init()
event zeek_init()
{
Broker::auto_publish(Cluster::manager_topic, match_remote);
Broker::auto_publish(Cluster::manager_topic, remove_item);


@ -53,8 +53,8 @@ hook extend_match(info: Info, s: Seen, items: set[Item]) &priority=6
if ( s$f?$conns && |s$f$conns| == 1 )
{
for ( cid in s$f$conns )
s$conn = s$f$conns[cid];
for ( cid, c in s$f$conns )
s$conn = c;
}
if ( ! info?$file_mime_type && s$f?$info && s$f$info?$mime_type )


@ -1,36 +0,0 @@
##! Input handling for the intelligence framework. This script implements the
##! import of intelligence data from files using the input framework.
@load ./main
module Intel;
export {
## Intelligence files that will be read off disk. The files are
## reread every time they are updated so updates must be atomic
## with "mv" instead of writing the file in place.
const read_files: set[string] = {} &redef;
}
event Intel::read_entry(desc: Input::EventDescription, tpe: Input::Event, item: Intel::Item)
{
Intel::insert(item);
}
event bro_init() &priority=5
{
if ( ! Cluster::is_enabled() ||
Cluster::local_node_type() == Cluster::MANAGER )
{
for ( a_file in read_files )
{
Input::add_event([$source=a_file,
$reader=Input::READER_ASCII,
$mode=Input::REREAD,
$name=cat("intel-", a_file),
$fields=Intel::Item,
$ev=Intel::read_entry]);
}
}
}


@ -0,0 +1,56 @@
##! Input handling for the intelligence framework. This script implements the
##! import of intelligence data from files using the input framework.
@load ./main
module Intel;
export {
## Intelligence files that will be read off disk. The files are
## reread every time they are updated so updates must be atomic
## with "mv" instead of writing the file in place.
const read_files: set[string] = {} &redef;
## An optional path prefix for intel files. This prefix can, but
## need not be, absolute. The default is to leave any filenames
## unchanged. This prefix has no effect if a read_file entry is
## an absolute path. This prefix gets applied _before_ entering
## the input framework, so if the prefix is absolute, the input
## framework won't munge it further. If it is relative, then
## any path_prefix specified in the input framework will apply
## additionally.
const path_prefix = "" &redef;
}
event Intel::read_entry(desc: Input::EventDescription, tpe: Input::Event, item: Intel::Item)
{
Intel::insert(item);
}
event zeek_init() &priority=5
{
if ( ! Cluster::is_enabled() ||
Cluster::local_node_type() == Cluster::MANAGER )
{
for ( a_file in read_files )
{
# Handle prefixing of the source file name. Note
# that this currently always uses the ASCII reader,
# so we know we're dealing with filenames.
local source = a_file;
# If we have a path prefix and the file doesn't
# already have an absolute path, prepend the prefix.
if ( |path_prefix| > 0 && sub_bytes(a_file, 0, 1) != "/" )
source = cat(rstrip(path_prefix, "/"), "/", a_file);
Input::add_event([$source=source,
$reader=Input::READER_ASCII,
$mode=Input::REREAD,
$name=cat("intel-", a_file),
$fields=Intel::Item,
$ev=Intel::read_entry]);
}
}
}
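Putting the new knob together with read_files, a deployment might list relative feed names plus a prefix (the paths and feed names are examples):

redef Intel::path_prefix = "/opt/zeek/feeds";
redef Intel::read_files += { "badguys.intel", "malware-hashes.intel" };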


@ -35,7 +35,7 @@ export {
## Set of intelligence data types.
type TypeSet: set[Type];
## Data about an :bro:type:`Intel::Item`.
## Data about an :zeek:type:`Intel::Item`.
type MetaData: record {
## An arbitrary string value representing the data source. This
## value is used as unique key to identify a metadata record in
@ -75,7 +75,7 @@ export {
## The type of data that the indicator represents.
indicator_type: Type &log &optional;
## If the indicator type was :bro:enum:`Intel::ADDR`, then this
## If the indicator type was :zeek:enum:`Intel::ADDR`, then this
## field will be present.
host: addr &optional;
@ -155,7 +155,7 @@ export {
global extend_match: hook(info: Info, s: Seen, items: set[Item]);
## The expiration timeout for intelligence items. Once an item expires, the
## :bro:id:`Intel::item_expired` hook is called. Reinsertion of an item
## :zeek:id:`Intel::item_expired` hook is called. Reinsertion of an item
## resets the timeout. A negative value disables expiration of intelligence
## items.
const item_expiration = -1 min &redef;
@ -173,6 +173,14 @@ export {
## be removed.
global item_expired: hook(indicator: string, indicator_type: Type, metas: set[MetaData]);
## This hook can be used to filter intelligence items that are about to be
## inserted into the internal data store. In case the hook execution is
## terminated using break, the item will not be (re)added to the internal
## data store.
##
## item: The intel item that should be inserted.
global filter_item: hook(item: Intel::Item);
global log_intel: event(rec: Info);
}
@ -215,7 +223,7 @@ type MinDataStore: record {
global min_data_store: MinDataStore &redef;
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(LOG, [$columns=Info, $ev=log_intel, $path="intel"]);
}
@ -235,8 +243,8 @@ function expire_host_data(data: table[addr] of MetaDataTable, idx: addr): interv
{
local meta_tbl: MetaDataTable = data[idx];
local metas: set[MetaData];
for ( src in meta_tbl )
add metas[meta_tbl[src]];
for ( src, md in meta_tbl )
add metas[md];
return expire_item(cat(idx), ADDR, metas);
}
@ -245,8 +253,8 @@ function expire_subnet_data(data: table[subnet] of MetaDataTable, idx: subnet):
{
local meta_tbl: MetaDataTable = data[idx];
local metas: set[MetaData];
for ( src in meta_tbl )
add metas[meta_tbl[src]];
for ( src, md in meta_tbl )
add metas[md];
return expire_item(cat(idx), SUBNET, metas);
}
@ -259,8 +267,8 @@ function expire_string_data(data: table[string, Type] of MetaDataTable, idx: any
local meta_tbl: MetaDataTable = data[indicator, indicator_type];
local metas: set[MetaData];
for ( src in meta_tbl )
add metas[meta_tbl[src]];
for ( src, md in meta_tbl )
add metas[md];
return expire_item(indicator, indicator_type, metas);
}
@ -268,16 +276,21 @@ function expire_string_data(data: table[string, Type] of MetaDataTable, idx: any
# Function to check for intelligence hits.
function find(s: Seen): bool
{
local ds = have_full_data ? data_store : min_data_store;
if ( s?$host )
{
return ((s$host in ds$host_data) ||
(|matching_subnets(addr_to_subnet(s$host), ds$subnet_data)| > 0));
if ( have_full_data )
return ((s$host in data_store$host_data) ||
(|matching_subnets(addr_to_subnet(s$host), data_store$subnet_data)| > 0));
else
return ((s$host in min_data_store$host_data) ||
(|matching_subnets(addr_to_subnet(s$host), min_data_store$subnet_data)| > 0));
}
else
{
return ([to_lower(s$indicator), s$indicator_type] in ds$string_data);
if ( have_full_data )
return ([to_lower(s$indicator), s$indicator_type] in data_store$string_data);
else
return ([to_lower(s$indicator), s$indicator_type] in min_data_store$string_data);
}
}
@ -301,20 +314,19 @@ function get_items(s: Seen): set[Item]
if ( s$host in data_store$host_data )
{
mt = data_store$host_data[s$host];
for ( m in mt )
for ( m, md in mt )
{
add return_data[Item($indicator=cat(s$host), $indicator_type=ADDR, $meta=mt[m])];
add return_data[Item($indicator=cat(s$host), $indicator_type=ADDR, $meta=md)];
}
}
# See if the host is part of a known subnet, which has meta values
local nets: table[subnet] of MetaDataTable;
nets = filter_subnet_table(addr_to_subnet(s$host), data_store$subnet_data);
for ( n in nets )
for ( n, mt in nets )
{
mt = nets[n];
for ( m in mt )
for ( m, md in mt )
{
add return_data[Item($indicator=cat(n), $indicator_type=SUBNET, $meta=mt[m])];
add return_data[Item($indicator=cat(n), $indicator_type=SUBNET, $meta=md)];
}
}
}
@ -325,9 +337,9 @@ function get_items(s: Seen): set[Item]
if ( [lower_indicator, s$indicator_type] in data_store$string_data )
{
mt = data_store$string_data[lower_indicator, s$indicator_type];
for ( m in mt )
for ( m, md in mt )
{
add return_data[Item($indicator=s$indicator, $indicator_type=s$indicator_type, $meta=mt[m])];
add return_data[Item($indicator=s$indicator, $indicator_type=s$indicator_type, $meta=md)];
}
}
}
@ -491,24 +503,28 @@ function _insert(item: Item, first_dispatch: bool &default = T)
}
function insert(item: Item)
{
if ( hook filter_item(item) )
{
# Insert possibly new item.
_insert(item, T);
}
}
# Function to check whether an item is present.
function item_exists(item: Item): bool
{
local ds = have_full_data ? data_store : min_data_store;
switch ( item$indicator_type )
{
case ADDR:
return to_addr(item$indicator) in ds$host_data;
return have_full_data ? to_addr(item$indicator) in data_store$host_data :
to_addr(item$indicator) in min_data_store$host_data;
case SUBNET:
return to_subnet(item$indicator) in ds$subnet_data;
return have_full_data ? to_subnet(item$indicator) in data_store$subnet_data :
to_subnet(item$indicator) in min_data_store$subnet_data;
default:
return [item$indicator, item$indicator_type] in ds$string_data;
return have_full_data ? [item$indicator, item$indicator_type] in data_store$string_data :
[item$indicator, item$indicator_type] in min_data_store$string_data;
}
}


@ -176,7 +176,7 @@ export {
## easy to flood the disk by returning a new string for each
## connection. Upon adding a filter to a stream, if neither
## ``path`` nor ``path_func`` is explicitly set by them, then
## :bro:see:`Log::default_path_func` is used.
## :zeek:see:`Log::default_path_func` is used.
##
## id: The ID associated with the log stream.
##
@ -191,7 +191,7 @@ export {
##
## Returns: The path to be used for the filter, which will be
## subject to the same automatic correction rules as
## the *path* field of :bro:type:`Log::Filter` in the
## the *path* field of :zeek:type:`Log::Filter` in the
## case of conflicts with other filters trying to use
## the same writer/path pair.
path_func: function(id: ID, path: string, rec: any): string &optional;
@ -232,7 +232,7 @@ export {
interv: interval &default=default_rotation_interval;
## Callback function to trigger for rotated files. If not set, the
## default comes out of :bro:id:`Log::default_rotation_postprocessors`.
## default comes out of :zeek:id:`Log::default_rotation_postprocessors`.
postprocessor: function(info: RotationInfo) : bool &optional;
## A key/value table that will be passed on to the writer.
@ -253,7 +253,7 @@ export {
## Returns: True if a new logging stream was successfully created and
## a default filter added to it.
##
## .. bro:see:: Log::add_default_filter Log::remove_default_filter
## .. zeek:see:: Log::add_default_filter Log::remove_default_filter
global create_stream: function(id: ID, stream: Stream) : bool;
## Removes a logging stream completely, stopping all the threads.
@ -262,7 +262,7 @@ export {
##
## Returns: True if the stream was successfully removed.
##
## .. bro:see:: Log::create_stream
## .. zeek:see:: Log::create_stream
global remove_stream: function(id: ID) : bool;
## Enables a previously disabled logging stream. Disabled streams
@ -273,7 +273,7 @@ export {
##
## Returns: True if the stream is re-enabled or was not previously disabled.
##
## .. bro:see:: Log::disable_stream
## .. zeek:see:: Log::disable_stream
global enable_stream: function(id: ID) : bool;
## Disables a currently enabled logging stream. Disabled streams
@ -284,7 +284,7 @@ export {
##
## Returns: True if the stream is now disabled or was already disabled.
##
## .. bro:see:: Log::enable_stream
## .. zeek:see:: Log::enable_stream
global disable_stream: function(id: ID) : bool;
## Adds a custom filter to an existing logging stream. If a filter
@ -299,7 +299,7 @@ export {
## the filter was not added or the *filter* argument was not
## the correct type.
##
## .. bro:see:: Log::remove_filter Log::add_default_filter
## .. zeek:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter Log::get_filter Log::get_filter_names
global add_filter: function(id: ID, filter: Filter) : bool;
@ -309,12 +309,12 @@ export {
## remove a filter.
##
## name: A string to match against the ``name`` field of a
## :bro:type:`Log::Filter` for identification purposes.
## :zeek:type:`Log::Filter` for identification purposes.
##
## Returns: True if the logging stream's filter was removed or
## if no filter associated with *name* was found.
##
## .. bro:see:: Log::remove_filter Log::add_default_filter
## .. zeek:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter Log::get_filter Log::get_filter_names
global remove_filter: function(id: ID, name: string) : bool;
@@ -326,7 +326,7 @@ export {
##
## Returns: The set of filter names associated with the stream.
##
## ..bro:see:: Log::remove_filter Log::add_default_filter
## .. zeek:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter Log::get_filter
global get_filter_names: function(id: ID) : set[string];
@@ -336,13 +336,13 @@ export {
## obtain one of its filters.
##
## name: A string to match against the ``name`` field of a
## :bro:type:`Log::Filter` for identification purposes.
## :zeek:type:`Log::Filter` for identification purposes.
##
## Returns: A filter attached to the logging stream *id* matching
## *name* or, if no matches are found returns the
## :bro:id:`Log::no_filter` sentinel value.
## :zeek:id:`Log::no_filter` sentinel value.
##
## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter
## .. zeek:see:: Log::add_filter Log::remove_filter Log::add_default_filter
## Log::remove_default_filter Log::get_filter_names
global get_filter: function(id: ID, name: string) : Filter;
@@ -360,7 +360,7 @@ export {
## to handle, or one of the stream's filters has an invalid
## ``path_func``.
##
## .. bro:see:: Log::enable_stream Log::disable_stream
## .. zeek:see:: Log::enable_stream Log::disable_stream
global write: function(id: ID, columns: any) : bool;
## Sets the buffering status for all the writers of a given logging stream.
@@ -375,7 +375,7 @@ export {
## Returns: True if buffering status was set, false if the logging stream
## does not exist.
##
## .. bro:see:: Log::flush
## .. zeek:see:: Log::flush
global set_buf: function(id: ID, buffered: bool): bool;
## Flushes any currently buffered output for all the writers of a given
@@ -388,50 +388,50 @@ export {
## buffered data or if the logging stream is disabled,
## false if the logging stream does not exist.
##
## .. bro:see:: Log::set_buf Log::enable_stream Log::disable_stream
## .. zeek:see:: Log::set_buf Log::enable_stream Log::disable_stream
global flush: function(id: ID): bool;
## Adds a default :bro:type:`Log::Filter` record with ``name`` field
## Adds a default :zeek:type:`Log::Filter` record with ``name`` field
## set as "default" to a given logging stream.
##
## id: The ID associated with a logging stream for which to add a default
## filter.
##
## Returns: The status of a call to :bro:id:`Log::add_filter` using a
## default :bro:type:`Log::Filter` argument with ``name`` field
## Returns: The status of a call to :zeek:id:`Log::add_filter` using a
## default :zeek:type:`Log::Filter` argument with ``name`` field
## set to "default".
##
## .. bro:see:: Log::add_filter Log::remove_filter
## .. zeek:see:: Log::add_filter Log::remove_filter
## Log::remove_default_filter
global add_default_filter: function(id: ID) : bool;
## Removes the :bro:type:`Log::Filter` with ``name`` field equal to
## Removes the :zeek:type:`Log::Filter` with ``name`` field equal to
## "default".
##
## id: The ID associated with a logging stream from which to remove the
## default filter.
##
## Returns: The status of a call to :bro:id:`Log::remove_filter` using
## Returns: The status of a call to :zeek:id:`Log::remove_filter` using
## "default" as the argument.
##
## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter
## .. zeek:see:: Log::add_filter Log::remove_filter Log::add_default_filter
global remove_default_filter: function(id: ID) : bool;
## Runs a command given by :bro:id:`Log::default_rotation_postprocessor_cmd`
## Runs a command given by :zeek:id:`Log::default_rotation_postprocessor_cmd`
## on a rotated file. Meant to be called from postprocessor functions
## that are added to :bro:id:`Log::default_rotation_postprocessors`.
## that are added to :zeek:id:`Log::default_rotation_postprocessors`.
##
## info: A record holding meta-information about the log being rotated.
##
## npath: The new path of the file (after already being rotated/processed
## by writer-specific postprocessor as defined in
## :bro:id:`Log::default_rotation_postprocessors`).
## :zeek:id:`Log::default_rotation_postprocessors`).
##
## Returns: True when :bro:id:`Log::default_rotation_postprocessor_cmd`
## Returns: True when :zeek:id:`Log::default_rotation_postprocessor_cmd`
## is empty or the system command given by it has been invoked
## to postprocess a rotated log file.
##
## .. bro:see:: Log::default_rotation_date_format
## .. zeek:see:: Log::default_rotation_date_format
## Log::default_rotation_postprocessor_cmd
## Log::default_rotation_postprocessors
global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool;
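To tie these declarations together, here is a minimal sketch of the intended workflow: create a stream, attach a filter with a custom path_func, and write to it. The Example module, its Info record, and the trivial path function are hypothetical and not part of this diff; only the Log:: identifiers come from the framework documented above.

module Example;

export {
	redef enum Log::ID += { LOG };

	type Info: record {
		ts: time &log;
		msg: string &log;
	};
}

# Matches the documented path_func signature; deliberately trivial, it just
# appends a suffix to whatever path the framework hands in.
function custom_path(id: Log::ID, path: string, rec: any): string
	{
	return fmt("%s-custom", path);
	}

event zeek_init()
	{
	Log::create_stream(Example::LOG, [$columns=Info, $path="example"]);
	Log::add_filter(Example::LOG, [$name="custom", $path_func=custom_path]);
	Log::write(Example::LOG, Info($ts=network_time(), $msg="hello"));
	}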

View file

@@ -2,22 +2,22 @@
##! to a logging filter in order to automatically SCP (secure copy)
##! a log stream (or a subset of it) to a remote host at configurable
##! rotation time intervals. Generally, to use this functionality
##! you must handle the :bro:id:`bro_init` event and do the following
##! you must handle the :zeek:id:`zeek_init` event and do the following
##! in your handler:
##!
##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path,
##! 1) Create a new :zeek:type:`Log::Filter` record that defines a name/path,
##! rotation interval, and set the ``postprocessor`` to
##! :bro:id:`Log::scp_postprocessor`.
##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`.
##! 3) Add a table entry to :bro:id:`Log::scp_destinations` for the filter's
##! writer/path pair which defines a set of :bro:type:`Log::SCPDestination`
##! :zeek:id:`Log::scp_postprocessor`.
##! 2) Add the filter to a logging stream using :zeek:id:`Log::add_filter`.
##! 3) Add a table entry to :zeek:id:`Log::scp_destinations` for the filter's
##! writer/path pair which defines a set of :zeek:type:`Log::SCPDestination`
##! records.
module Log;
export {
## Secure-copies the rotated log to all the remote hosts
## defined in :bro:id:`Log::scp_destinations` and then deletes
## defined in :zeek:id:`Log::scp_destinations` and then deletes
## the local copy of the rotated log. It's not active when
## reading from trace files.
##
@@ -42,7 +42,7 @@ export {
};
## A table indexed by a particular log writer and filter path, that yields
## a set of remote destinations. The :bro:id:`Log::scp_postprocessor`
## a set of remote destinations. The :zeek:id:`Log::scp_postprocessor`
## function queries this table upon log rotation and performs a secure
## copy of the rotated log to each destination in the set. This
## table can be modified at run-time.
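A sketch of the three numbered steps in the header comment above, wired into a zeek_init handler. Conn::LOG is just an example stream, Log::WRITER_ASCII names the stock ASCII writer, and the SCPDestination fields ($user, $host, $path) are assumed from the record definition, which this hunk does not show.

event zeek_init()
	{
	# 1) A filter with its own path, rotation interval, and the SCP postprocessor.
	local f = Log::Filter($name="scp-offload", $path="conn-scp", $interv=1hr,
	                      $postprocessor=Log::scp_postprocessor);

	# 2) Attach the filter to an existing logging stream.
	Log::add_filter(Conn::LOG, f);

	# 3) Map the writer/path pair to one or more remote destinations.
	Log::scp_destinations[Log::WRITER_ASCII, "conn-scp"] =
		set(Log::SCPDestination($user="archiver", $host="archive.example.com",
		                        $path="/data/logs"));
	}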

View file

@@ -2,22 +2,22 @@
##! to a logging filter in order to automatically SFTP
##! a log stream (or a subset of it) to a remote host at configurable
##! rotation time intervals. Generally, to use this functionality
##! you must handle the :bro:id:`bro_init` event and do the following
##! you must handle the :zeek:id:`zeek_init` event and do the following
##! in your handler:
##!
##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path,
##! 1) Create a new :zeek:type:`Log::Filter` record that defines a name/path,
##! rotation interval, and set the ``postprocessor`` to
##! :bro:id:`Log::sftp_postprocessor`.
##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`.
##! 3) Add a table entry to :bro:id:`Log::sftp_destinations` for the filter's
##! writer/path pair which defines a set of :bro:type:`Log::SFTPDestination`
##! :zeek:id:`Log::sftp_postprocessor`.
##! 2) Add the filter to a logging stream using :zeek:id:`Log::add_filter`.
##! 3) Add a table entry to :zeek:id:`Log::sftp_destinations` for the filter's
##! writer/path pair which defines a set of :zeek:type:`Log::SFTPDestination`
##! records.
module Log;
export {
## Securely transfers the rotated log to all the remote hosts
## defined in :bro:id:`Log::sftp_destinations` and then deletes
## defined in :zeek:id:`Log::sftp_destinations` and then deletes
## the local copy of the rotated log. It's not active when
## reading from trace files.
##
@@ -44,7 +44,7 @@ export {
};
## A table indexed by a particular log writer and filter path, that yields
## a set of remote destinations. The :bro:id:`Log::sftp_postprocessor`
## a set of remote destinations. The :zeek:id:`Log::sftp_postprocessor`
## function queries this table upon log rotation and performs a secure
## transfer of the rotated log to each destination in the set. This
## table can be modified at run-time.
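The SFTP variant mirrors the SCP sketch above; only the postprocessor, destination table, and destination record change. As before, the SFTPDestination fields are assumptions, since this hunk does not show the record body.

event zeek_init()
	{
	local f = Log::Filter($name="sftp-offload", $path="conn-sftp", $interv=1hr,
	                      $postprocessor=Log::sftp_postprocessor);
	Log::add_filter(Conn::LOG, f);
	Log::sftp_destinations[Log::WRITER_ASCII, "conn-sftp"] =
		set(Log::SFTPDestination($user="archiver", $host="archive.example.com",
		                         $path="/data/logs"));
	}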

View file

@@ -80,7 +80,7 @@ export {
## again.
##
## In cluster mode, this function works on workers as well as the manager. On managers,
## the returned :bro:see:`NetControl::BlockInfo` record will not contain the block ID,
## the returned :zeek:see:`NetControl::BlockInfo` record will not contain the block ID,
## which will be assigned on the manager.
##
## a: The address to be dropped.
@@ -89,7 +89,7 @@ export {
##
## location: An optional string describing where the drop was triggered.
##
## Returns: The :bro:see:`NetControl::BlockInfo` record containing information about
## Returns: The :zeek:see:`NetControl::BlockInfo` record containing information about
## the inserted block.
global drop_address_catch_release: function(a: addr, location: string &default="") : BlockInfo;
@@ -114,7 +114,7 @@ export {
## a: The address that was seen and should be re-dropped if it is being watched.
global catch_release_seen: function(a: addr);
## Get the :bro:see:`NetControl::BlockInfo` record for an address currently blocked by catch and release.
## Get the :zeek:see:`NetControl::BlockInfo` record for an address currently blocked by catch and release.
## If the address is unknown to catch and release, the watch_until time will be set to 0.
##
## In cluster mode, this function works on the manager and workers. On workers, the data will
@@ -123,7 +123,7 @@ export {
##
## a: The address to get information about.
##
## Returns: The :bro:see:`NetControl::BlockInfo` record containing information about
## Returns: The :zeek:see:`NetControl::BlockInfo` record containing information about
## the inserted block.
global get_catch_release_info: function(a: addr) : BlockInfo;
@@ -132,7 +132,7 @@ export {
##
## a: The address that is no longer being managed.
##
## bi: The :bro:see:`NetControl::BlockInfo` record containing information about the block.
## bi: The :zeek:see:`NetControl::BlockInfo` record containing information about the block.
global catch_release_forgotten: event(a: addr, bi: BlockInfo);
## If true, catch_release_seen is called on the connection originator in new_connection,
@@ -148,7 +148,7 @@ export {
## effect.
const catch_release_intervals: vector of interval = vector(10min, 1hr, 24hrs, 7days) &redef;
## Event that can be handled to access the :bro:type:`NetControl::CatchReleaseInfo`
## Event that can be handled to access the :zeek:type:`NetControl::CatchReleaseInfo`
## record as it is sent on to the logging framework.
global log_netcontrol_catch_release: event(rec: CatchReleaseInfo);
@@ -163,7 +163,7 @@ export {
# Set that is used to only send seen notifications to the master every ~30 seconds.
global catch_release_recently_notified: set[addr] &create_expire=30secs;
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(NetControl::CATCH_RELEASE, [$columns=CatchReleaseInfo, $ev=log_netcontrol_catch_release, $path="netcontrol_catch_release"]);
}
@@ -227,13 +227,13 @@ global blocks: table[addr] of BlockInfo = {}
@if ( Cluster::is_enabled() )
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event bro_init()
event zeek_init()
{
Broker::auto_publish(Cluster::worker_topic, NetControl::catch_release_block_new);
Broker::auto_publish(Cluster::worker_topic, NetControl::catch_release_block_delete);
}
@else
event bro_init()
event zeek_init()
{
Broker::auto_publish(Cluster::manager_topic, NetControl::catch_release_add);
Broker::auto_publish(Cluster::manager_topic, NetControl::catch_release_delete);
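As a usage sketch for the catch-and-release API documented above: the trigger condition below is hypothetical, but drop_address_catch_release and get_catch_release_info are the entry points described in this file.

event connection_established(c: connection)
	{
	# Hypothetical trigger: treat anything probing TCP port 23 as hostile.
	if ( c$id$resp_p != 23/tcp )
		return;

	local bi = NetControl::drop_address_catch_release(c$id$orig_h, "telnet probe");
	print bi;

	# The state can be queried later; the returned BlockInfo has watch_until
	# set to 0 if the address is unknown to catch and release.
	local info = NetControl::get_catch_release_info(c$id$orig_h);
	print info;
	}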

View file

@@ -17,7 +17,7 @@ export {
}
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event bro_init()
event zeek_init()
{
Broker::auto_publish(Cluster::worker_topic, NetControl::rule_added);
Broker::auto_publish(Cluster::worker_topic, NetControl::rule_removed);
@@ -28,7 +28,7 @@ event bro_init()
Broker::auto_publish(Cluster::worker_topic, NetControl::rule_destroyed);
}
@else
event bro_init()
event zeek_init()
{
Broker::auto_publish(Cluster::manager_topic, NetControl::cluster_netcontrol_add_rule);
Broker::auto_publish(Cluster::manager_topic, NetControl::cluster_netcontrol_remove_rule);

View file

@@ -50,12 +50,12 @@ export {
## r: The rule to be added.
global NetControl::drop_rule_policy: hook(r: Rule);
## Event that can be handled to access the :bro:type:`NetControl::ShuntInfo`
## Event that can be handled to access the :zeek:type:`NetControl::ShuntInfo`
## record as it is sent on to the logging framework.
global log_netcontrol_drop: event(rec: DropInfo);
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(NetControl::DROP, [$columns=DropInfo, $ev=log_netcontrol_drop, $path="netcontrol_drop"]);
}
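The log_netcontrol_* events declared in these files are ordinary log hooks; handling one is enough to mirror or post-process entries as they head to the logging framework. The print is purely illustrative.

event NetControl::log_netcontrol_drop(rec: NetControl::DropInfo)
	{
	# Runs once per record written to the netcontrol_drop stream.
	print rec;
	}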

View file

@@ -43,8 +43,8 @@ export {
# ### High-level API.
# ###
# ### Note - other high level primitives are in catch-and-release.bro, shunt.bro and
# ### drop.bro
# ### Note - other high level primitives are in catch-and-release.zeek,
# ### shunt.zeek and drop.zeek
## Allows all traffic involving a specific IP address to be forwarded.
##
@@ -98,7 +98,7 @@ export {
## Returns: Vector of inserted rules on success, empty list on failure.
global quarantine_host: function(infected: addr, dns: addr, quarantine: addr, t: interval, location: string &default="") : vector of string;
## Flushes all state by calling :bro:see:`NetControl::remove_rule` on all currently active rules.
## Flushes all state by calling :zeek:see:`NetControl::remove_rule` on all currently active rules.
global clear: function();
# ###
@@ -122,7 +122,7 @@ export {
## Removes a rule.
##
## id: The rule to remove, specified as the ID returned by :bro:see:`NetControl::add_rule`.
## id: The rule to remove, specified as the ID returned by :zeek:see:`NetControl::add_rule`.
##
## reason: Optional string argument giving information on why the rule was removed.
##
@@ -138,7 +138,7 @@ export {
## the rule has been added; if it is not removed from them by a separate mechanism,
## it will stay installed and not be removed later.
##
## id: The rule to delete, specified as the ID returned by :bro:see:`NetControl::add_rule`.
## id: The rule to delete, specified as the ID returned by :zeek:see:`NetControl::add_rule`.
##
## reason: Optional string argument giving information on why the rule was deleted.
##
@@ -262,7 +262,7 @@ export {
##### Plugin functions
## Function called by plugins once they finished their activation. After all
## plugins defined in bro_init finished to activate, rules will start to be sent
## plugins defined in zeek_init finished to activate, rules will start to be sent
## to the plugins. Rules that scripts try to set before the backends are ready
## will be discarded.
global plugin_activated: function(p: PluginState);
@@ -321,7 +321,7 @@ export {
plugin: string &log &optional;
};
## Event that can be handled to access the :bro:type:`NetControl::Info`
## Event that can be handled to access the :zeek:type:`NetControl::Info`
## record as it is sent on to the logging framework.
global log_netcontrol: event(rec: Info);
}
@@ -338,13 +338,13 @@ redef record Rule += {
};
# Variable tracking the state of plugin activation. Once all plugins that
# have been added in bro_init are activated, this will switch to T and
# have been added in zeek_init are activated, this will switch to T and
# the event NetControl::init_done will be raised.
global plugins_active: bool = F;
# Set to true at the end of bro_init (with very low priority).
# Set to true at the end of zeek_init (with very low priority).
# Used to track when plugin activation could potentially be finished
global bro_init_done: bool = F;
global zeek_init_done: bool = F;
# The counters that are used to generate the rule and plugin IDs
global rule_counter: count = 1;
@@ -364,7 +364,7 @@ global rules_by_subnets: table[subnet] of set[string];
# There always only can be one rule of each type for one entity.
global rule_entities: table[Entity, RuleType] of Rule;
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(NetControl::LOG, [$columns=Info, $ev=log_netcontrol, $path="netcontrol"]);
}
@@ -613,18 +613,18 @@ function plugin_activated(p: PluginState)
plugin_ids[id]$_activated = T;
log_msg("activation finished", p);
if ( bro_init_done )
if ( zeek_init_done )
check_plugins();
}
event bro_init() &priority=-5
event zeek_init() &priority=-5
{
event NetControl::init();
}
event NetControl::init() &priority=-20
{
bro_init_done = T;
zeek_init_done = T;
check_plugins();
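Because rules submitted before the backends finish activating are discarded, scripts usually wait for NetControl::init_done, which the comments above say is raised once all plugins added in zeek_init have activated. A sketch, assuming the event carries no arguments and using a placeholder address:

event NetControl::init_done()
	{
	# Safe point to start inserting rules: all backends are active.
	NetControl::drop_address_catch_release(192.0.2.1, "preconfigured block");
	}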

View file

@@ -9,7 +9,7 @@ module NetControl;
@load base/frameworks/broker
export {
## This record specifies the configuration that is passed to :bro:see:`NetControl::create_broker`.
## This record specifies the configuration that is passed to :zeek:see:`NetControl::create_broker`.
type BrokerConfig: record {
## The broker topic to send events to.
topic: string &optional;
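A hedged configuration sketch for the broker backend. Only the $topic field appears in this hunk; the $host/$bport fields, the second create_broker argument, and the NetControl::activate call are assumptions about how the plugin is typically wired up, not something this diff shows.

event zeek_init()
	{
	local bc = NetControl::BrokerConfig($topic="netcontrol/example",
	                                    $host=127.0.0.1, $bport=9999/tcp);
	local p = NetControl::create_broker(bc, T);
	NetControl::activate(p, 0);
	}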

View file

@@ -7,7 +7,7 @@
module NetControl;
export {
## This record specifies the configuration that is passed to :bro:see:`NetControl::create_openflow`.
## This record specifies the configuration that is passed to :zeek:see:`NetControl::create_openflow`.
type OfConfig: record {
monitor: bool &default=T; ##< Accept rules that target the monitor path.
forward: bool &default=T; ##< Accept rules that target the forward path.

View file

@@ -31,12 +31,12 @@ export {
location: string &log &optional;
};
## Event that can be handled to access the :bro:type:`NetControl::ShuntInfo`
## Event that can be handled to access the :zeek:type:`NetControl::ShuntInfo`
## record as it is sent on to the logging framework.
global log_netcontrol_shunt: event(rec: ShuntInfo);
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(NetControl::SHUNT, [$columns=ShuntInfo, $ev=log_netcontrol_shunt, $path="netcontrol_shunt"]);
}

View file

@@ -1,6 +1,6 @@
##! This file defines the types that are used by the NetControl framework.
##!
##! The most important type defined in this file is :bro:see:`NetControl::Rule`,
##! The most important type defined in this file is :zeek:see:`NetControl::Rule`,
##! which is used to describe all rules that can be expressed by the NetControl framework.
module NetControl;
@@ -10,11 +10,11 @@ export {
option default_priority: int = +0;
## The default priority that is used when using the high-level functions to
## push whitelist entries to the backends (:bro:see:`NetControl::whitelist_address` and
## :bro:see:`NetControl::whitelist_subnet`).
## push whitelist entries to the backends (:zeek:see:`NetControl::whitelist_address` and
## :zeek:see:`NetControl::whitelist_subnet`).
##
## Note that this priority is not automatically used when manually creating rules
## that have a :bro:see:`NetControl::RuleType` of :bro:enum:`NetControl::WHITELIST`.
## that have a :zeek:see:`NetControl::RuleType` of :zeek:enum:`NetControl::WHITELIST`.
const whitelist_priority: int = +5 &redef;
## Type defining the entity that a rule applies to.
@@ -25,7 +25,7 @@ export {
MAC, ##< Activity involving a MAC address.
};
## Flow is used in :bro:type:`NetControl::Entity` together with :bro:enum:`NetControl::FLOW` to specify
## Flow is used in :zeek:type:`NetControl::Entity` together with :zeek:enum:`NetControl::FLOW` to specify
## a uni-directional flow that a rule applies to.
##
## If optional fields are not set, they are interpreted as wildcarded.
@@ -41,10 +41,10 @@ export {
## Type defining the entity a rule is operating on.
type Entity: record {
ty: EntityType; ##< Type of entity.
conn: conn_id &optional; ##< Used with :bro:enum:`NetControl::CONNECTION`.
flow: Flow &optional; ##< Used with :bro:enum:`NetControl::FLOW`.
ip: subnet &optional; ##< Used with :bro:enum:`NetControl::ADDRESS` to specifiy a CIDR subnet.
mac: string &optional; ##< Used with :bro:enum:`NetControl::MAC`.
conn: conn_id &optional; ##< Used with :zeek:enum:`NetControl::CONNECTION`.
flow: Flow &optional; ##< Used with :zeek:enum:`NetControl::FLOW`.
ip: subnet &optional; ##< Used with :zeek:enum:`NetControl::ADDRESS` to specify a CIDR subnet.
mac: string &optional; ##< Used with :zeek:enum:`NetControl::MAC`.
};
## Type defining the target of a rule.
@@ -59,7 +59,7 @@ export {
};
## Type of rules that the framework supports. Each type lists the extra
## :bro:type:`NetControl::Rule` fields it uses, if any.
## :zeek:type:`NetControl::Rule` fields it uses, if any.
##
## Plugins may extend this type to define their own.
type RuleType: enum {
@@ -108,8 +108,8 @@ export {
priority: int &default=default_priority; ##< Priority if multiple rules match an entity (larger value is higher priority).
location: string &optional; ##< Optional string describing where/what installed the rule.
out_port: count &optional; ##< Argument for :bro:enum:`NetControl::REDIRECT` rules.
mod: FlowMod &optional; ##< Argument for :bro:enum:`NetControl::MODIFY` rules.
out_port: count &optional; ##< Argument for :zeek:enum:`NetControl::REDIRECT` rules.
mod: FlowMod &optional; ##< Argument for :zeek:enum:`NetControl::MODIFY` rules.
id: string &default=""; ##< Internally determined unique ID for this rule. Will be set when added.
cid: count &default=0; ##< Internally determined unique numeric ID for this rule. Set when added.
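Putting the types together: a sketch that builds an Entity and a Rule and hands them to NetControl::add_rule and NetControl::remove_rule. The Rule fields $ty, $target, $entity and $expire, plus the DROP and FORWARD enum values, come from parts of the definitions this diff does not show, so treat them as assumptions; the address and location strings are placeholders.

event zeek_init()
	{
	local e = NetControl::Entity($ty=NetControl::ADDRESS, $ip=192.0.2.1/32);
	local r = NetControl::Rule($ty=NetControl::DROP, $target=NetControl::FORWARD,
	                           $entity=e, $expire=1hr, $location="types sketch");

	local id = NetControl::add_rule(r);

	# The returned ID is what remove_rule and delete_rule expect; in practice
	# you would call this later, from some other event.
	NetControl::remove_rule(id, "example cleanup");
	}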

View file

@@ -13,7 +13,7 @@ module Notice;
export {
redef enum Action += {
## Indicates that the notice should have geodata added for the
## "remote" host. :bro:id:`Site::local_nets` must be defined
## "remote" host. :zeek:id:`Site::local_nets` must be defined
## in order for this to work.
ACTION_ADD_GEODATA
};

View file

@@ -8,7 +8,7 @@ module Notice;
export {
redef enum Action += {
## Drops the address via :bro:see:`NetControl::drop_address_catch_release`.
## Drops the address via :zeek:see:`NetControl::drop_address_catch_release`.
ACTION_DROP
};

View file

@@ -1,6 +1,6 @@
##! Adds a new notice action type which can be used to email notices
##! to the administrators of a particular address space as set by
##! :bro:id:`Site::local_admins` if the notice contains a source
##! :zeek:id:`Site::local_admins` if the notice contains a source
##! or destination address that lies within their space.
@load ../main
@@ -12,7 +12,7 @@ export {
redef enum Action += {
## Indicate that the generated email should be addressed to the
## appropriate email addresses as found by the
## :bro:id:`Site::get_emails` function based on the relevant
## :zeek:id:`Site::get_emails` function based on the relevant
## address or addresses indicated in the notice.
ACTION_EMAIL_ADMIN
};

View file

@@ -7,12 +7,12 @@ module Notice;
export {
redef enum Action += {
## Indicates that the notice should be sent to the pager email
## address configured in the :bro:id:`Notice::mail_page_dest`
## address configured in the :zeek:id:`Notice::mail_page_dest`
## variable.
ACTION_PAGE
};
## Email address to send notices with the :bro:enum:`Notice::ACTION_PAGE`
## Email address to send notices with the :zeek:enum:`Notice::ACTION_PAGE`
## action.
option mail_page_dest = "";
}

View file

@@ -12,7 +12,7 @@ export {
const pretty_print_alarms = T &redef;
## Address to send the pretty-printed reports to. Default if not set is
## :bro:id:`Notice::mail_dest`.
## :zeek:id:`Notice::mail_dest`.
##
## Note that this is overridden by the BroControl MailAlarmsTo option.
const mail_dest_pretty_printed = "" &redef;
@@ -95,7 +95,7 @@ function pp_postprocessor(info: Log::RotationInfo): bool
return T;
}
event bro_init()
event zeek_init()
{
if ( ! want_pp() )
return;
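These two options are plain redefs; a minimal tuning sketch (the address is a placeholder):

# Send the pretty-printed alarm summaries somewhere other than Notice::mail_dest.
redef Notice::mail_dest_pretty_printed = "secteam@example.com";

# Or turn the feature off entirely:
# redef Notice::pretty_print_alarms = F;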

View file

@@ -18,7 +18,7 @@ export {
## Scripts creating new notices need to redef this enum to add their
## own specific notice types which would then get used when they call
## the :bro:id:`NOTICE` function. The convention is to give a general
## the :zeek:id:`NOTICE` function. The convention is to give a general
## category along with the specific notice separating words with
## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example,
@@ -37,12 +37,12 @@ export {
## logging stream.
ACTION_LOG,
## Indicates that the notice should be sent to the email
## address(es) configured in the :bro:id:`Notice::mail_dest`
## address(es) configured in the :zeek:id:`Notice::mail_dest`
## variable.
ACTION_EMAIL,
## Indicates that the notice should be alarmed. A readable
## ASCII version of the alarm log is emailed in bulk to the
## address(es) configured in :bro:id:`Notice::mail_dest`.
## address(es) configured in :zeek:id:`Notice::mail_dest`.
ACTION_ALARM,
};
@@ -50,7 +50,7 @@ export {
type ActionSet: set[Notice::Action];
## The notice framework is able to do automatic notice suppression by
## utilizing the *identifier* field in :bro:type:`Notice::Info` records.
## utilizing the *identifier* field in :zeek:type:`Notice::Info` records.
## Set this to "0secs" to completely disable automated notice
## suppression.
option default_suppression_interval = 1hrs;
@@ -103,18 +103,18 @@ export {
## *conn*, *iconn* or *p* is specified.
proto: transport_proto &log &optional;
## The :bro:type:`Notice::Type` of the notice.
## The :zeek:type:`Notice::Type` of the notice.
note: Type &log;
## The human readable message for the notice.
msg: string &log &optional;
## The human readable sub-message.
sub: string &log &optional;
## Source address, if we don't have a :bro:type:`conn_id`.
## Source address, if we don't have a :zeek:type:`conn_id`.
src: addr &log &optional;
## Destination address.
dst: addr &log &optional;
## Associated port, if we don't have a :bro:type:`conn_id`.
## Associated port, if we don't have a :zeek:type:`conn_id`.
p: port &log &optional;
## Associated count, or perhaps a status code.
n: count &log &optional;
@@ -131,14 +131,14 @@ export {
## By adding chunks of text into this element, other scripts
## can expand on notices that are being emailed. The normal
## way to add text is to extend the vector by handling the
## :bro:id:`Notice::notice` event and modifying the notice in
## :zeek:id:`Notice::notice` event and modifying the notice in
## place.
email_body_sections: vector of string &optional;
## Adding a string "token" to this set will cause the notice
## framework's built-in emailing functionality to delay sending
## the email until either the token has been removed or the
## email has been delayed for :bro:id:`Notice::max_email_delay`.
## email has been delayed for :zeek:id:`Notice::max_email_delay`.
email_delay_tokens: set[string] &optional;
## This field is to be provided when a notice is generated for
@@ -192,8 +192,8 @@ export {
## Note that this is overridden by the BroControl SendMail option.
option sendmail = "/usr/sbin/sendmail";
## Email address to send notices with the
## :bro:enum:`Notice::ACTION_EMAIL` action or to send bulk alarm logs
## on rotation with :bro:enum:`Notice::ACTION_ALARM`.
## :zeek:enum:`Notice::ACTION_EMAIL` action or to send bulk alarm logs
## on rotation with :zeek:enum:`Notice::ACTION_ALARM`.
##
## Note that this is overridden by the BroControl MailTo option.
const mail_dest = "" &redef;
@@ -212,18 +212,18 @@ export {
## The maximum amount of time a plugin can delay email from being sent.
const max_email_delay = 15secs &redef;
## Contains a portion of :bro:see:`fa_file` that's also contained in
## :bro:see:`Notice::Info`.
## Contains a portion of :zeek:see:`fa_file` that's also contained in
## :zeek:see:`Notice::Info`.
type FileInfo: record {
fuid: string; ##< File UID.
desc: string; ##< File description from e.g.
##< :bro:see:`Files::describe`.
##< :zeek:see:`Files::describe`.
mime: string &optional; ##< Strongest mime type match for file.
cid: conn_id &optional; ##< Connection tuple over which file is sent.
cuid: string &optional; ##< Connection UID over which file is sent.
};
## Creates a record containing a subset of a full :bro:see:`fa_file` record.
## Creates a record containing a subset of a full :zeek:see:`fa_file` record.
##
## f: record containing metadata about a file.
##
@@ -245,7 +245,7 @@ export {
global populate_file_info2: function(fi: Notice::FileInfo, n: Notice::Info);
## A log postprocessing function that implements emailing the contents
## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`.
## of a log upon rotation to any configured :zeek:id:`Notice::mail_dest`.
## The rotated log is removed upon being sent.
##
## info: A record containing the rotated log file information.
@@ -254,9 +254,9 @@ export {
global log_mailing_postprocessor: function(info: Log::RotationInfo): bool;
## This is the event that is called as the entry point to the
## notice framework by the global :bro:id:`NOTICE` function. By the
## notice framework by the global :zeek:id:`NOTICE` function. By the
## time this event is generated, default values have already been
## filled out in the :bro:type:`Notice::Info` record and the notice
## filled out in the :zeek:type:`Notice::Info` record and the notice
## policy has also been applied.
##
## n: The record containing notice data.
@@ -268,7 +268,7 @@ export {
##
## suppress_for: length of time that this notice should be suppressed.
##
## note: The :bro:type:`Notice::Type` of the notice.
## note: The :zeek:type:`Notice::Type` of the notice.
##
## identifier: The identifier string of the notice that should be suppressed.
global begin_suppression: event(ts: time, suppress_for: interval, note: Type, identifier: string);
@@ -286,8 +286,8 @@ export {
global suppressed: event(n: Notice::Info);
## Call this function to send a notice in an email. It is already used
## by default with the built in :bro:enum:`Notice::ACTION_EMAIL` and
## :bro:enum:`Notice::ACTION_PAGE` actions.
## by default with the built in :zeek:enum:`Notice::ACTION_EMAIL` and
## :zeek:enum:`Notice::ACTION_PAGE` actions.
##
## n: The record of notice data to email.
##
@@ -308,13 +308,13 @@ export {
## appended.
global email_headers: function(subject_desc: string, dest: string): string;
## This event can be handled to access the :bro:type:`Notice::Info`
## This event can be handled to access the :zeek:type:`Notice::Info`
## record as it is sent on to the logging framework.
##
## rec: The record containing notice data before it is logged.
global log_notice: event(rec: Info);
## This is an internal wrapper for the global :bro:id:`NOTICE`
## This is an internal wrapper for the global :zeek:id:`NOTICE`
## function; disregard.
##
## n: The record of notice data.
@@ -385,7 +385,7 @@ function log_mailing_postprocessor(info: Log::RotationInfo): bool
return T;
}
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(Notice::LOG, [$columns=Info, $ev=log_notice, $path="notice"]);
@@ -531,7 +531,7 @@ event Notice::begin_suppression(ts: time, suppress_for: interval, note: Type,
suppressing[note, identifier] = suppress_until;
}
event bro_init()
event zeek_init()
{
if ( ! Cluster::is_enabled() )
return;
@@ -569,10 +569,10 @@ function create_file_info(f: fa_file): Notice::FileInfo
fi$mime = f$info$mime_type;
if ( f?$conns && |f$conns| == 1 )
for ( id in f$conns )
for ( id, c in f$conns )
{
fi$cid = id;
fi$cuid = f$conns[id]$uid;
fi$cuid = c$uid;
}
return fi;
@@ -598,7 +598,7 @@ function populate_file_info2(fi: Notice::FileInfo, n: Notice::Info)
# This is run synchronously as a function before all of the other
# notice related functions and events. It also modifies the
# :bro:type:`Notice::Info` record in place.
# :zeek:type:`Notice::Info` record in place.
function apply_policy(n: Notice::Info)
{
# Fill in some defaults.
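A compact sketch of the workflow these declarations describe: define a notice type, raise it with NOTICE, and steer its actions from the policy hook. The Notice::policy hook name and the actions field of Notice::Info are assumed from the framework rather than shown in this hunk, and the module, condition, and threshold are hypothetical.

module Example;

export {
	redef enum Notice::Type += {
		## Example notice following the General_Specific naming convention.
		Long_Connection,
	};
}

event connection_state_remove(c: connection)
	{
	# Hypothetical condition standing in for real detection logic.
	if ( c$duration > 1hr )
		NOTICE([$note=Example::Long_Connection,
		        $msg="connection exceeded the example duration threshold",
		        $conn=c,
		        $identifier=cat(c$id$orig_h)]);
	}

# Route this notice type to email in addition to the default logging action.
hook Notice::policy(n: Notice::Info)
	{
	if ( n$note == Example::Long_Connection )
		add n$actions[Notice::ACTION_EMAIL];
	}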

View file

@@ -296,7 +296,7 @@ const notice_actions = {
ACTION_NOTICE_ONCE,
};
event bro_init() &priority=5
event zeek_init() &priority=5
{
Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird, $path="weird"]);
}
@@ -422,3 +422,13 @@ event net_weird(name: string)
local i = Info($ts=network_time(), $name=name);
weird(i);
}
event file_weird(name: string, f: fa_file, addl: string)
{
local i = Info($ts=network_time(), $name=name, $addl=f$id);
if ( addl != "" )
i$addl += fmt(": %s", addl);
weird(i);
}

Some files were not shown because too many files have changed in this diff.