Merge remote-tracking branch 'origin/master' into topic/seth/log-framework-ext

Seth Hall 2016-08-10 10:28:04 -04:00
commit a60ce35103
885 changed files with 141119 additions and 120109 deletions

CHANGES

@ -1,4 +1,466 @@
2.4-905 | 2016-08-09 08:19:37 -0700
* GSSAPI analyzer now forwards authentication blobs more correctly.
(Seth Hall)
* The KRB analyzer now includes support for the PA_ENCTYPE_INFO2
pre-auth data type. (Seth Hall)
* Add an argument to the "disable_analyzer" function to not generate a
reporter message by default. (Seth Hall)
2.4-902 | 2016-08-08 16:50:35 -0400
* Adding SMB analyzer. (Seth Hall, Vlad Grigorescu and many others)
* NetControl: allow reasons in remove_rule calls. Addresses BIT-1655
(Johanna Amann)
2.4-893 | 2016-08-05 15:43:04 -0700
* Remove -z/--analysis option. (Johanna Amann)
* Remove already defunct code for XML serialization. (Johanna Amann)
2.4-885 | 2016-08-05 15:03:59 -0700
* Reverting SMB analyzer merge. (Robin Sommer)
2.4-883 | 2016-08-05 12:57:26 -0400
* Add a new Bro node type for doing logging with the cluster framework
scripts (this is intended to reduce the load on the manager). If a
user chooses not to specify a logger node in the cluster
configuration, then the manager will write logs locally as usual.
(Daniel Thayer)
2.4-874 | 2016-08-05 12:43:06 -0400
* SMB analyzer (Seth Hall, Vlad Grigorescu and many others)
2.4-759 | 2016-08-05 09:32:42 -0400
* Intel framework improvements (Jan Grashoefer)
* Added expiration for intelligence items.
* Improved intel notices.
* Added hook to allow extending the intel log.
* Added support for subnets to the intel framework.
2.4-742 | 2016-08-02 15:28:31 -0700
* Fix duplicate SSH authentication failure events. Addresses BIT-1641.
(Robin Sommer)
* Remove OpenSSL dependency for plugins. (Robin Sommer)
2.4-737 | 2016-08-02 11:38:07 -0700
* Fix some Coverity warnings. (Robin Sommer)
2.4-735 | 2016-08-02 11:05:36 -0700
* Added string slicing examples to documentation. (Moshe Kaplan)
2.4-733 | 2016-08-01 09:09:29 -0700
* Fixing a CMake dependency issue for the pcap bifs. (Robin Sommer)
2.4-732 | 2016-08-01 08:33:00 -0700
* Removing pkg/make-*-packages scripts. BIT-1509 #closed (Robin
Sommer)
2.4-731 | 2016-08-01 08:14:06 -0700
* Correct endianness of IP addresses in SNMP. Addresses BIT-1644.
(Anony Mous)
2.4-729 | 2016-08-01 08:00:54 -0700
* Fix behavior of connection_pending event. It is now really only
raised when Bro is terminating. Also adds a test-case that raises
the event. (Johanna Amann)
* Removed the -J/-K options (set md5/hash key) from the manpage.
They had already been removed from the code. (Johanna Amann)
* NetControl: Add catch-and-release event when IPs are forgotten.
This adds an event catch_release_forgotten() that is raised once
Catch & Release ceases block management for an IP address because
the IP has not been seen in traffic during the watch interval.
(Johanna Amann)
2.4-723 | 2016-07-26 15:04:26 -0700
* Add error events to input framework. (Johanna Amann)
This change introduces error events for Table and Event readers.
Users can now specify an event that is called when an info,
warning, or error is emitted by their input reader. This can,
e.g., be used to raise notices in case errors occur when reading
an important input stream.
Example:
    event error_event(desc: Input::TableDescription, msg: string, level: Reporter::Level)
        {
        ...
        }

    event bro_init()
        {
        Input::add_table([$source="a", $error_ev=error_event, ...]);
        }
Addresses BIT-1181.
* Calling Error() in an input reader will now automatically disable
the reader and return a failure in the Update/Heartbeat calls.
(Johanna Amann)
* Convert all errors in the ASCII formatter into warnings (to show
that they are non-fatal). (Johanna Amann)
* Enable SQLite shared cache mode. This allows all threads accessing
the same database to share sqlite objects. See
https://www.sqlite.org/sharedcache.html. Addresses BIT-1325.
(Johanna Amann)
* NetControl: Adjust default priority of ACTION_DROP hook to standard
level. (Johanna Amann)
* Fix types when constructing SYN_packet record. Fixes BIT-1650.
(Grant Moyer).
2.4-715 | 2016-07-23 07:27:05 -0700
* SQLite writer: Remove unused string formatting function. (Johanna Amann)
* Deprecated the ElasticSearch log writer. (Johanna Amann)
2.4-709 | 2016-07-15 09:05:20 -0700
* Change Bro's hashing for short inputs and Bloomfilters from H3 to
Siphash, which produces much better results for HLL in particular.
(Johanna Amann)
* Fix a long-standing bug which truncated hash values to 32-bit on
most machines. (Johanna Amann)
* Fixes to HLL. Addresses BIT-1612. (Johanna Amann)
* Add test checking the quality of HLL. (Johanna Amann)
* Remove the -K/-J options for setting keys. (Johanna Amann)
* SSL: Fix memory management problem. (Johanna Amann)
2.4-693 | 2016-07-12 11:29:17 -0700
* Change TCP analysis to process connections without the initial SYN as
non-partial connections. Addresses BIT-1492. (Robin Sommer).
2.4-691 | 2016-07-12 09:58:38 -0700
* SSL: add support for signature_algorithms extension. (Johanna
Amann)
2.4-688 | 2016-07-11 11:10:33 -0700
* Disable broker by default. To enable it, use --enable-broker.
Addresses BIT-1645. (Daniel Thayer)
2.4-686 | 2016-07-08 19:14:43 -0700
* Added flagging of retransmission to the connection history.
Addresses BIT-977. (Robin Sommer)
2.4-683 | 2016-07-08 14:55:04 -0700
* Extending the connection history field to flag with '^' when Bro
flips a connection's endpoints. Addresses BIT-1629. (Robin Sommer)
2.4-680 | 2016-07-06 09:18:21 -0700
* Remove ack_above_hole() event, which was a subset of content_gap
and led to plenty of noise. Addresses BIT-688. (Robin Sommer)
2.4-679 | 2016-07-05 16:35:53 -0700
* Fix segfault when an existing enum identifier is added again with
a different value. Addresses BIT-931. (Robin Sommer)
* Escape the empty indicator in logs if it occurs literally as a
field's actual content. Addresses BIT-931. (Robin Sommer)
2.4-676 | 2016-06-30 17:27:54 -0700
* A larger series of NetControl updates. (Johanna Amann)
* Add NetControl framework documentation to the Bro manual.
* Use NetControl for ACTION_DROP of notice framework. So far,
this action did nothing by default.
* Rewrite of catch-and-release.
* Fix several small logging issues.
* find_rules_subnet() now works in cluster mode. This
introduces two new events, NetControl::rule_new and
NetControl::rule_destroyed, which are raised when rules are
first added and then deleted from the internal state
tracking.
* Fix acld whitelist command.
* Add rule existence as a state besides added and failure.
* Suppress duplicate "plugin activated" messages.
* Make new Broker plugin options accessible.
* Add predicates to Broker plugin.
* Tweak SMTP scripts to not pull in the notice framework.
2.4-658 | 2016-06-30 16:55:32 -0700
* Fix a number of documentation building errors. (Johanna Amann)
* Input/Logging: Make bool conversion operator explicit. (Johanna Amann)
* Add new TLS ciphers from RFC 7905. (Johanna Amann)
2.4-648 | 2016-06-21 18:33:22 -0700
* Fix memory leaks. Reported by Dk Jack. (Johanna Amann)
2.4-644 | 2016-06-21 13:59:05 -0400
* Fix an off-by-one error when grabbing x-originating-ip header in
email. (Seth Hall, Aashish Sharma)
2.4-642 | 2016-06-18 13:18:23 -0700
* Fix potential mismatches when ignoring duplicate weirds. (Johanna Amann)
* Weird: Rewrite internals of weird logging. (Johanna Amann)
- "flow weirds" now actually log information about the flow
that they occur in.
- weirds can now be generated by calling Weird::weird() with
the info record directly, allowing more fine-grained passing
of information. This is e.g. used for DNS weirds.
Addresses BIT-1578 (Johanna Amann)
* Exec: fix reader cleanup when using read_files, preventing file
descriptors from leaking every time it was used. (Johanna Amann)
* Raw Writer: Make code more c++11-y, remove raw pointers. (Johanna
Amann)
* Add separate section with logging changes to NEWS. (Seth Hall)
2.4-635 | 2016-06-18 01:40:17 -0400
* Add some documentation for modbus data types. Addresses
BIT-1216. (Seth Hall)
* Removed app-stats scripts. Addresses BIT-1171. (Seth Hall)
2.4-631 | 2016-06-16 16:45:10 -0400
* Fixed matching of mail address intel and added a test. (Jan Grashoefer)
* A new utilities script named email.bro with some utilities
for parsing out email addresses from strings. (Seth Hall)
* SMTP "rcptto" and "mailfrom" fields now do some minimal
parsing to clean up email addresses. (Seth Hall)
* Added "cc" to the SMTP log and feed it into the Intel framework
with the policy/frameworks/intel/seen/smtp.bro script. (Seth Hall)
2.4-623 | 2016-06-15 17:31:12 -0700
* &default values are no longer overwritten with uninitialized
values by the input framework. (Jan Grashoefer)
2.4-621 | 2016-06-15 09:18:02 -0700
* Fixing memory leak in changed table expiration code. (Robin
Sommer)
* Fixing test portability. (Robin Sommer)
* Move the HTTP "filename" field (which was never filled out
anyways) to "orig_filenames" and "resp_filenames". (Seth Hall)
* Add a round trip time (rtt) field to dns.log. (Seth Hall)
* Add ACE archive files to the identified file types. Addresses
BIT-1609. (Stephen Hosom)
2.4-613 | 2016-06-14 18:10:37 -0700
* Preventing the event processing from looping endlessly when an
event reraised itself during execution of its handlers. (Robin
Sommer)
2.4-612 | 2016-06-14 17:42:52 -0700
* Improved handling of 802.11 headers. (Jan Grashoefer)
2.4-609 | 2016-06-14 17:15:28 -0700
* Fixed table expiration evaluation. The expiration attribute
expression is now evaluated for every use. Thus later adjustments
of the value (e.g. by redefining a const) will now take effect.
Values less than 0 will disable expiration. (Jan Grashoefer)
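For example (hypothetical names; a minimal sketch of the new
behavior):

    const scan_timeout = 5 min &redef;
    global scanners: table[addr] of count &create_expire=scan_timeout;

    # Redefining the const later (e.g., in local.bro) now takes
    # effect for entries touched afterwards:
    redef scan_timeout = 10 min;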
2.4-606 | 2016-06-14 16:11:07 -0700
* Fix parsing precedence of "hook" expression. Addresses BIT-1619
(Johanna Amann)
* Update the "configure" usage message for --with-caf (Daniel
Thayer)
2.4-602 | 2016-06-13 08:16:34 -0700
* Fixing Coverity warning (CID 1356391). (Robin Sommer)
* Guarding against reading beyond packet data when accessing L2
address in Radiotap header. (Robin Sommer)
2.4-600 | 2016-06-07 15:53:19 -0700
* Fixing typo in BIF macros. Reported by Jeff Barber. (Robin Sommer)
2.4-599 | 2016-06-07 12:37:32 -0700
* Add new functions haversine_distance() and haversine_distance_ip()
for calculating geographic distances. They require that Bro be
built with libgeoip. (Aashish Sharma/Daniel Thayer).
2.4-597 | 2016-06-07 11:46:45 -0700
* Fixing memory leak triggered by new MAC address logging. (Robin
Sommer)
2.4-596 | 2016-06-07 11:07:29 -0700
* Don't create debug.log immediately upon startup (BIT-1616).
(Daniel Thayer)
2.4-594 | 2016-06-06 18:11:16 -0700
* ASCII Input: Accept DOS/Windows newlines. Addresses BIT-1198
(Johanna Amann)
* Fix BinPAC exception in RFB analyzer. (Martin van Hensbergen)
* Add URL decoding for the unofficial %u00AE style of encoding. (Seth Hall)
* Remove the unescaped_special_char HTTP weird. (Seth Hall)
2.4-588 | 2016-06-06 17:59:34 -0700
* Moved link-layer addresses into endpoints. The link-layer
addresses are now part of the connection endpoints following the
originator/responder pattern. (Jan Grashoefer)
* Link-layer addresses are extracted for 802.11 plus RadioTap. (Jan
Grashoefer)
* Fix coverity error (uninitialized variable) (Johanna Amann)
* Use ether_ntoa instead of ether_ntoa_r. The latter is thread-safe,
but a GNU addition which does not exist on OS X. Since the function
is only called in the main thread, it should not matter whether it
is threadsafe. (Johanna Amann)
* Fix FreeBSD/OSX compile problem due to headers (Johanna Amann)
2.4-581 | 2016-05-30 10:58:19 -0700
* Adding missing new script file mac-logging.bro. (Robin Sommer)
2.4-580 | 2016-05-29 13:41:10 -0700
* Add Ethernet MAC addresses to connection record. c$eth_src and
c$eth_dst now contain the Ethernet address if available. A new
script protocols/conn/mac-logging.bro adds these to conn.log when
loaded. (Robin Sommer)
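For example, in local.bro:

    @load protocols/conn/mac-logging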
2.4-579 | 2016-05-29 08:54:57 -0700
* Fixing Coverity warning. Addresses CID 1356116. (Robin Sommer)
* Fixing FTP cwd getting overly long. (Robin Sommer)
* Clarifying notice documentation. Addresses BIT-1405. (Robin
Sommer)
* Changing protocol_{confirmation,violation} events to queue like
any other event. Addresses BIT-1530. (Robin Sommer)
* Normalizing test baseline. (Robin Sommer)
* Do not use scientific notation when printing doubles in logs.
Addresses BIT-1558. (Robin Sommer)
2.4-573 | 2016-05-23 13:21:03 -0700
* Ignoring packets with negative timestamps. Addresses BIT-1562 and
BIT-1443. (Robin Sommer)
2.4-572 | 2016-05-23 12:45:23 -0700
* Fix for a table referring to an expire function that's not defined.
Addresses BIT-1597. (Robin Sommer)
2.4-571 | 2016-05-23 08:26:43 -0700
* Fixing a few Coverity warnings. (Robin Sommer)
2.4-569 | 2016-05-18 07:39:35 -0700
* DTLS: Use magic constant from RFC 5389 for STUN detection.
(Johanna Amann)
* DTLS: Fix binpac bug with DTLSv1.2 client hellos. (Johanna Amann)
* DTLS: Fix interaction with STUN. Now the DTLS analyzer cleanly
skips all STUN messages. (Johanna Amann)
* Fix the way that child analyzers are added. (Johanna Amann)
2.4-563 | 2016-05-17 16:25:21 -0700
* Fix duplication of new_connection_contents event. Addresses
BIT-1602 (Johanna Amann)
* SMTP: Support SSL upgrade via X-ANONYMOUSTLS. This seems to be a
non-standardized Microsoft extension that, besides having a
different name, works pretty much the same as StartTLS. We just
treat it as such. (Johanna Amann)
* Fixing control framework's net_stats and peer_status commands. For
the latter, this removes most of the values returned, as we don't
have access to them anymore. (Robin Sommer)
2.4-555 | 2016-05-16 20:10:15 -0700
* Fix failing plugin tests on OS X 10.11. (Daniel Thayer)
* Fix failing test on Debian/FreeBSD. (Johanna Amann)
2.4-552 | 2016-05-12 08:04:33 -0700
* Fix a bug in receiving remote logs via broker. (Daniel Thayer)

NEWS

@ -13,19 +13,52 @@ New Dependencies
- Bro now requires a compiler with C++11 support for building the
source code.
- Bro now requires Python instead of Perl to compile the source code.
- When enabling Broker (which is disabled by default), Bro now requires
version 0.14 of the C++ Actor Framework.
New Functionality
-----------------
- SMB analyzer. This is the rewrite that has been in development for
several years. The scripts are currently not loaded by default and
must be loaded manually by loading policy/protocols/smb (see the
example after this list). The next release will load the smb scripts
by default.
- Implements SMB1+2.
- Fully integrated with the file analysis framework so that files
transferred over SMB can be analyzed.
- Includes GSSAPI and NTLM analyzers and reimplements the DCE-RPC
analyzer.
- New logs: smb_files.log, smb_mapping.log, ntlm.log, and dce_rpc.log
- Not every possible SMB command or functionality is implemented, but
generally, file handling should work whenever files are transferred.
Please speak up on the mailing list if there is an obvious oversight.
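To load the SMB scripts manually, adding the load line to your site
policy (e.g., local.bro) should be all that is needed:

    @load policy/protocols/smb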
- Bro now includes the NetControl framework. The framework allows Bro to
easily interact with hardware and software switches, firewalls, etc.
- Bro's Intelligence Framework was refactored and new functionality
has been added:
- The framework now supports the new indicator type Intel::SUBNET.
As subnets are matched against seen addresses, the field 'matched'
was introduced to indicate which indicator type(s) caused the hit.
- The new function remove() allows deleting intelligence items.
- The intel framework now supports expiration of intelligence items.
Expiration can be configured by using Intel::item_expiration and
can be handled by using the item_expired() hook (see the sketch at
the end of this list). The new script do_expire.bro removes expired
items.
- The new hook extend_match() allows extending the framework. The new
policy script whitelist.bro uses the hook to implement whitelisting.
- Intel notices are now suppressible and mails for intel notices now
list the identified services as well as the intel source.
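As a minimal sketch of the new expiration functionality (the hook's
exact signature is an assumption here; check the intel framework
sources for the authoritative one):

    redef Intel::item_expiration = 10 min;

    hook Intel::item_expired(indicator: string, indicator_type: Intel::Type,
                             metas: set[Intel::MetaData])
        {
        print fmt("intel item expired: %s", indicator);
        }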
- There is a new file entropy analyzer for files.
- Bro now supports the remote framebuffer protocol (RFB) that is used by
@ -38,6 +71,10 @@ New Functionality
STARTTLS sessions, handing them over to TLS analysis. The analyzer
does not yet analyze any further IMAP/XMPP content.
- The new event ssl_extension_signature_algorithm allows access to the
TLS signature_algorithms extension that lists client supported signature
and hash algorithm pairs.
- Bro now tracks VLAN IDs. To record them inside the connection log,
load protocols/conn/vlan-logging.bro.
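For example, in local.bro:

    @load protocols/conn/vlan-logging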
@ -93,18 +130,58 @@ New Functionality
get_timer_stats(), get_file_analysis_stats(), get_thread_stats(),
get_gap_stats(), get_matcher_stats(),
- Two new functions haversine_distance() and haversine_distance_ip()
for calculating geographic distances. They require that Bro be
built with libgeoip.
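A quick sketch (illustrative coordinates; the argument order lat1,
long1, lat2, long2 is an assumption):

    print haversine_distance(37.77, -122.42, 40.71, -74.01);
    print haversine_distance_ip(1.2.3.4, 5.6.7.8);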
- Table expiration timeout expressions are evaluated dynamically as
timestamps are updated.
- New Bro plugins in aux/plugins:
- af_packet: Native AF_PACKET support.
- kafka: Log writer interfacing to Kafka.
- myricom: Native Myricom SNF v3 support.
- pf_ring: Native PF_RING support.
- postgresql: A PostgreSQL reader/writer.
- redis: An experimental log writer for Redis.
- tcprs: A TCP-level analyzer detecting retransmissions, reordering, and more.
- The pcap buffer size can be set through the new option Pcap::bufsize.
- Input framework readers Table and Event can now define a custom
event to receive logging messages.
Changed Functionality
---------------------
- Log changes:
- Connections
* The 'history' field gains two new flags: '^' indicates that
Bro heuristically flipped the direction of the connection.
't/T' indicates the first TCP payload retransmission from
originator or responder, respectively.
- DNS
* New 'rtt' field to indicate the round trip time between when a
request was sent and when a reply started.
- SMTP
* New 'cc' field which includes the 'Cc' header from MIME
messages sent over SMTP.
* Changes in 'mailfrom' and 'rcptto' fields to remove some
non-address cruft that will tend to be found. The main
example is the change from "<user@domain>" to
"user@domain".
- HTTP
* Removed 'filename' field.
* New 'orig_filenames' and 'resp_filenames' fields which each
contain a vector of filenames seen in entities transferred.
- The BrokerComm and BrokerStore namespaces were renamed to Broker.
The Broker "print" function was renamed to Broker::send_print, and
"event" to "Broker::send_event".
@ -122,6 +199,34 @@ Changed Functionality
install_pcap_filter() -> Pcap::install_pcap_filter()
pcap_error() -> Pcap::pcap_error()
- In http.log, the "filename" field (which it turns out was never
filled out in the first place) has been split into
"orig_filenames" and "resp_filenames".
- TCP analysis was changed to process connections without the initial
SYN packet. In the past, connections without a full handshake were
treated as partial, meaning that most application-layer analyzers
would refuse to inspect the payload. Now, Bro will consider these
connections as complete and all analyzers will process them normally.
Removed Functionality
---------------------
- The app-stats scripts have been removed because they weren't
being maintained and they were becoming inaccurate. They
were also prone to needing more regular updates as the internet
changed and will likely be more relevant if maintained externally.
- The event ack_above_hole() has been removed, as it was a subset
of content_gap() and led to plenty of noise.
- The command line options --set-seed and --md5-hashkey have been
removed.
- The packaging scripts pkg/make-*-packages are gone. They aren't
used anymore for the binary Bro packages that the project
distributes; haven't been supported in a while; and have
problems.
Deprecated Functionality
------------------------
@ -132,6 +237,10 @@ Deprecated Functionality
decode_base64() and encode_base64(), which take an optional
parameter to change the Base64 alphabet.
- The ElasticSearch log writer hasn't been maintained for a while
and is now deprecated. It will be removed with the next release.
Bro 2.4
=======


@ -1 +1 @@
2.4-552
2.4-905

@ -1 +1 @@
Subproject commit 4179f9f00f4df21e4bcfece0323ec3468f688e8a
Subproject commit 3664242a218c21100d62917866d6b8cb0d6f0fa1

@ -1 +1 @@
Subproject commit cb771a3cf592d46643eea35d206b9f3e1a0758f7
Subproject commit 3568621c9bd5836956f2a6401039fdd7d0886c9e

@ -1 +1 @@
Subproject commit b4d1686cdd3f5505e405667b1083e8335cae6928
Subproject commit d587dba7def9a2af2a2506d54e267e4838de7575

@ -1 +1 @@
Subproject commit 6f12b4da74e9e0885e1bd8cb67c2eda2b33c93a5
Subproject commit b74414cd5ead14d17d41c73e9de0ac7bcb79c8c3

@ -1 +1 @@
Subproject commit bb3f55f198f9cfd5e545345dd6425dd08ca1d45e
Subproject commit 2ae5fdd0214bdc8eea625bfcf0c457547510a391

@ -1 +1 @@
Subproject commit ebab672fa404b26944a6df6fbfb1aaab95ec5d48
Subproject commit e928438abad479cac8f42678588db208eeec2bf1


@ -23,6 +23,9 @@
/* Define if you have the <memory.h> header file. */
#cmakedefine HAVE_MEMORY_H
/* Define if you have the <netinet/ether.h> header file */
#cmakedefine HAVE_NETINET_ETHER_H
/* Define if you have the <netinet/if_ether.h> header file. */
#cmakedefine HAVE_NETINET_IF_ETHER_H

cmake

@ -1 +1 @@
Subproject commit 0a2b36874ad5c1a22829135f8aeeac534469053f
Subproject commit accc57f1baa2b0fcc894f4a3c8f1ffc78416aeec

configure

@ -41,7 +41,8 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--enable-perftools-debug use Google's perftools for debugging
--enable-jemalloc link against jemalloc
--enable-ruby build ruby bindings for broccoli (deprecated)
--disable-broker disable use of the Broker communication library
--enable-broker enable use of the Broker communication library
(requires C++ Actor Framework)
--disable-broccoli don't build or install the Broccoli library
--disable-broctl don't install Broctl
--disable-auxtools don't build or install auxiliary tools
@ -57,10 +58,10 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-flex=PATH path to flex executable
--with-bison=PATH path to bison executable
--with-python=PATH path to Python executable
--with-libcaf=PATH path to C++ Actor Framework installation
(a required Broker dependency)
Optional Packages in Non-Standard Locations:
--with-caf=PATH path to C++ Actor Framework installation
(a required Broker dependency)
--with-geoip=PATH path to the libGeoIP install root
--with-perftools=PATH path to Google Perftools install root
--with-jemalloc=PATH path to jemalloc install root
@ -121,13 +122,12 @@ append_cache_entry BRO_ROOT_DIR PATH $prefix
append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl
append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro
append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
append_cache_entry BROKER_PYTHON_HOME PATH $prefix
append_cache_entry BROKER_PYTHON_BINDINGS BOOL false
append_cache_entry ENABLE_DEBUG BOOL false
append_cache_entry ENABLE_PERFTOOLS BOOL false
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
append_cache_entry ENABLE_JEMALLOC BOOL false
append_cache_entry ENABLE_BROKER BOOL true
append_cache_entry ENABLE_BROKER BOOL false
append_cache_entry BinPAC_SKIP_INSTALL BOOL true
append_cache_entry BUILD_SHARED_LIBS BOOL true
append_cache_entry INSTALL_AUX_TOOLS BOOL true
@ -162,7 +162,7 @@ while [ $# -ne 0 ]; do
append_cache_entry BRO_ROOT_DIR PATH $optarg
append_cache_entry PY_MOD_INSTALL_DIR PATH $optarg/lib/broctl
if [ -z "$user_disabled_broker" ]; then
if [ -n "$user_enabled_broker" ]; then
append_cache_entry BROKER_PYTHON_HOME PATH $optarg
fi
;;
@ -199,10 +199,12 @@ while [ $# -ne 0 ]; do
--enable-jemalloc)
append_cache_entry ENABLE_JEMALLOC BOOL true
;;
--enable-broker)
append_cache_entry ENABLE_BROKER BOOL true
append_cache_entry BROKER_PYTHON_HOME PATH $prefix
user_enabled_broker="true"
;;
--disable-broker)
append_cache_entry ENABLE_BROKER BOOL false
remove_cache_entry BROKER_PYTHON_HOME
user_disabled_broker="true"
;;
--disable-broccoli)
append_cache_entry INSTALL_BROCCOLI BOOL false


@ -39,9 +39,11 @@ Manager
*******
The manager is a Bro process that has two primary jobs. It receives log
messages and notices from the rest of the nodes in the cluster using the Bro
communications protocol (note that if you are using a logger, then the
logger receives all logs instead of the manager). The result
is a single log instead of many discrete logs that you have to
combine in some manner with post-processing. The manager also takes
the opportunity to de-duplicate notices, and it has the
ability to do so since it's acting as the choke point for notices and how
notices might be processed into actions (e.g., emailing, paging, or blocking).
@ -51,6 +53,20 @@ connections to the rest of the cluster. Once the workers are started and
connect to the manager, logs and notices will start arriving to the manager
process from the workers.
Logger
******
The logger is an optional Bro process that receives log messages from the
rest of the nodes in the cluster using the Bro communications protocol.
The purpose of having a logger receive logs instead of the manager is
to reduce the load on the manager. If no logger is needed, then the
manager will receive logs instead.
The logger process is started first by BroControl and only opens its
designated port and waits for connections; it doesn't initiate any
connections to the rest of the cluster. Once the rest of the cluster is
started and connects to the logger, logs will start arriving to the logger
process.
Proxy
*****
The proxy is a Bro process that manages synchronized state. Variables can


@ -0,0 +1 @@
../../../../aux/plugins/elasticsearch-deprecated/README


@ -1 +0,0 @@
../../../../aux/plugins/elasticsearch/README


@ -44,7 +44,10 @@ workers can consume a lot of CPU resources. The maximum recommended
number of workers to run on a machine should be one or two less than
the number of CPU cores available on that machine. Using a load-balancing
method (such as PF_RING) along with CPU pinning can decrease the load on
the worker machines.
the worker machines. Also, in order to reduce the load on the manager
process, it is recommended to have a logger in your configuration. If a
logger is defined in your cluster configuration, then it will receive logs
instead of the manager process.
Basic Cluster Configuration
@ -61,13 +64,17 @@ a Bro cluster (do this as the Bro user on the manager host only):
:doc:`BroControl <../components/broctl/README>` documentation.
- Edit the BroControl node configuration file, ``<prefix>/etc/node.cfg``
to define where logger, manager, proxies, and workers are to run. For a
cluster configuration, you must comment-out (or remove) the standalone node
in that file, and either uncomment or add node entries for each node
in your cluster (logger, manager, proxy, and workers). For example, if you
wanted to run five Bro nodes (two workers, one proxy, a logger, and a
manager) on a cluster consisting of three machines, your cluster
configuration would look like this::
[logger]
type=logger
host=10.0.0.10
[manager]
type=manager
@ -94,7 +101,7 @@ a Bro cluster (do this as the Bro user on the manager host only):
file lists all of the networks which the cluster should consider as local
to the monitored environment.
- Install Bro on all machines in the cluster using BroControl::
> broctl install
@ -174,7 +181,7 @@ Installing PF_RING
5. Configure BroControl to use PF_RING (explained below).
6. Run "broctl install" on the manager. This command will install Bro and
required scripts to all machines in your cluster.
Using PF_RING
^^^^^^^^^^^^^


@ -11,6 +11,7 @@ Frameworks
input
intel
logging
netcontrol
notice
signatures
sumstats


@ -7,7 +7,7 @@ Input Framework
.. rst-class:: opening
Bro features a flexible input framework that allows users
to import data into Bro. Data is either read into Bro tables or
converted to events which can then be handled by scripts.
This document gives an overview of how to use the input framework


@ -0,0 +1,10 @@
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
event connection_established(c: connection)
{
NetControl::drop_connection(c$id, 20 secs);
}


@ -0,0 +1,10 @@
event NetControl::init()
{
local skeleton_plugin = NetControl::create_skeleton("");
NetControl::activate(skeleton_plugin, 0);
}
event connection_established(c: connection)
{
NetControl::drop_connection(c$id, 20 secs);
}


@ -0,0 +1,16 @@
@load protocols/ssh/detect-bruteforcing
redef SSH::password_guesses_limit=10;
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Password_Guessing )
NetControl::drop_address(n$src, 60min);
}


@ -0,0 +1,16 @@
@load protocols/ssh/detect-bruteforcing
redef SSH::password_guesses_limit=10;
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Password_Guessing )
add n$actions[Notice::ACTION_DROP];
}


@ -0,0 +1,26 @@
function our_drop_connection(c: conn_id, t: interval)
{
# As a first step, create the NetControl::Entity that we want to block
local e = NetControl::Entity($ty=NetControl::CONNECTION, $conn=c);
# Then, use the entity to create the rule to drop the entity in the forward path
local r = NetControl::Rule($ty=NetControl::DROP,
$target=NetControl::FORWARD, $entity=e, $expire=t);
# Add the rule
local id = NetControl::add_rule(r);
if ( id == "" )
print "Error while dropping";
}
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
event connection_established(c: connection)
{
our_drop_connection(c$id, 20 secs);
}


@ -0,0 +1,22 @@
hook NetControl::rule_policy(r: NetControl::Rule)
{
if ( r$ty == NetControl::DROP &&
r$entity$ty == NetControl::CONNECTION &&
r$entity$conn$orig_h in 192.168.0.0/16 )
{
print "Ignored connection from", r$entity$conn$orig_h;
break;
}
}
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
event connection_established(c: connection)
{
NetControl::drop_connection(c$id, 20 secs);
}


@ -0,0 +1,17 @@
event NetControl::init()
{
local netcontrol_debug = NetControl::create_debug(T);
NetControl::activate(netcontrol_debug, 0);
}
event connection_established(c: connection)
{
if ( |NetControl::find_rules_addr(c$id$orig_h)| > 0 )
{
print "Rule already exists";
return;
}
NetControl::drop_connection(c$id, 20 secs);
print "Rule added";
}


@ -0,0 +1,10 @@
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
event connection_established(c: connection)
{
NetControl::drop_address_catch_release(c$id$orig_h);
}


@ -0,0 +1,29 @@
function our_openflow_check(p: NetControl::PluginState, r: NetControl::Rule): bool
{
if ( r$ty == NetControl::DROP &&
r$entity$ty == NetControl::ADDRESS &&
subnet_width(r$entity$ip) == 32 &&
subnet_to_addr(r$entity$ip) in 192.168.17.0/24 )
return F;
return T;
}
event NetControl::init()
{
# Add debug plugin with low priority
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
# Instantiate OpenFlow debug plugin with higher priority
local of_controller = OpenFlow::log_new(42);
local netcontrol_of = NetControl::create_openflow(of_controller, [$check_pred=our_openflow_check]);
NetControl::activate(netcontrol_of, 10);
}
event NetControl::init_done()
{
NetControl::drop_address(10.0.0.1, 1min);
NetControl::drop_address(192.168.17.2, 1min);
NetControl::drop_address(192.168.18.2, 1min);
}


@ -0,0 +1,39 @@
module NetControl;
export {
## Instantiates the plugin.
global create_skeleton: function(argument: string) : PluginState;
}
function skeleton_name(p: PluginState) : string
{
return "NetControl skeleton plugin";
}
function skeleton_add_rule_fun(p: PluginState, r: Rule) : bool
{
print "add", r;
event NetControl::rule_added(r, p);
return T;
}
function skeleton_remove_rule_fun(p: PluginState, r: Rule, reason: string &default="") : bool
{
print "remove", r;
event NetControl::rule_removed(r, p);
return T;
}
global skeleton_plugin = Plugin(
$name = skeleton_name,
$can_expire = F,
$add_rule = skeleton_add_rule_fun,
$remove_rule = skeleton_remove_rule_fun
);
function create_skeleton(argument: string) : PluginState
{
local p = PluginState($plugin=skeleton_plugin);
return p;
}


@ -0,0 +1,633 @@
.. _framework-netcontrol:
====================
NetControl Framework
====================
.. rst-class:: opening
Bro can connect with network devices such as switches
or soft- and hardware firewalls using the NetControl framework. The
NetControl framework provides a flexible, unified interface for active
response and hides the complexity of heterogeneous network equipment
behind a simple task-oriented API, which is easily usable via Bro
scripts. This document gives an overview of how to use the NetControl
framework in different scenarios; to get a better understanding of how
it can be used in practice, it might be worthwhile to take a look at
the unit tests.
.. contents::
NetControl Architecture
=======================
.. figure:: netcontrol-architecture.png
:width: 600
:align: center
:alt: NetControl framework architecture
:target: ../_images/netcontrol-architecture.png
NetControl architecture (click to enlarge).
The basic architecture of the NetControl framework is shown in the figure above.
Conceptually, the NetControl framework sits between the user-provided scripts
(which use the Bro event engine) and the network device (which can either be a
hardware or software device) that is used to implement the commands.
The NetControl framework supports a number of high-level calls, like the
:bro:see:`NetControl::drop_address` function, or a lower-level rule
syntax. After a rule has been added to the NetControl framework, NetControl
sends the rule to one or several of its *backends*. Each backend is responsible
for communicating with a single hard- or software device. The NetControl framework
tracks rules throughout their entire lifecycle and reports the status (like
success, failure and timeouts) back to the user scripts.
The backends are implemented as Bro scripts using a plugin based API; an example
for this is :doc:`/scripts/base/frameworks/netcontrol/plugins/broker.bro`. This
document will show how to write plugins in
:ref:`framework-netcontrol-plugins`.
NetControl API
==============
High-level NetControl API
-------------------------
In this section, we will introduce the high level NetControl API. As mentioned
above, NetControl uses *backends* to communicate with the external devices that
will implement the rules. You will need at least one active backend before you
can use NetControl. For our examples, we will just use the debug plugin to
create a backend. This plugin outputs all actions that are taken to the standard
output.
Backends should be initialized in the :bro:see:`NetControl::init` event, calling
the :bro:see:`NetControl::activate` function after the plugin instance has been
initialized. The debug plugin can be initialized as follows:
.. code:: bro
event NetControl::init()
{
local debug_plugin = NetControl::create_debug(T);
NetControl::activate(debug_plugin, 0);
}
After at least one backend has been added to the NetControl framework, the
framework can be used and will send added rules to the added backend.
The NetControl framework contains several high level functions that allow users
to drop connections of certain addresses and networks, shunt network traffic,
etc. The following table shows and describes all of the currently available
high-level functions.
.. list-table::
:widths: 32 40
:header-rows: 1
* - Function
- Description
* - :bro:see:`NetControl::drop_address`
- Calling this function causes NetControl to block all packets involving
an IP address from being forwarded.
* - :bro:see:`NetControl::drop_connection`
- Calling this function stops all packets of a specific connection
(identified by its 5-tuple) from being forwarded.
* - :bro:see:`NetControl::drop_address_catch_release`
- Calling this function causes all packets of a specific source IP to be
blocked. This function uses catch-and-release functionality and the IP
address is only dropped for a short amount of time to conserve rule
space in the network hardware. It is immediately re-dropped when it is
seen again in traffic. See :ref:`framework-netcontrol-catchrelease` for
more information.
* - :bro:see:`NetControl::shunt_flow`
- Calling this function causes NetControl to stop forwarding a
uni-directional flow of packets to Bro. This allows Bro to conserve
resources by shunting flows that have been identified as being benign.
* - :bro:see:`NetControl::redirect_flow`
- Calling this function causes NetControl to redirect a uni-directional
flow to another port of the networking hardware.
* - :bro:see:`NetControl::quarantine_host`
- Calling this function allows Bro to quarantine a host by sending DNS
traffic to a host with a special DNS server, which resolves all queries
as pointing to itself. The quarantined host is only allowed to communicate
with the special server, which will serve a warning message detailing
the next steps for the user.
* - :bro:see:`NetControl::whitelist_address`
- Calling this function causes NetControl to push a whitelist entry for an
IP address to the networking hardware.
* - :bro:see:`NetControl::whitelist_subnet`
- Calling this function causes NetControl to push a whitelist entry for a
subnet to the networking hardware.
After adding a backend, all of these functions can immediately be used and will
start sending the rules to the added backend(s). To give a very simple example,
the following script will simply block the traffic of all connections that it
sees being established:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-1-drop-with-debug.bro
Running this script on a file containing one connection will cause the debug
plugin to print one line to the standard output, which contains information
about the rule that was added. It will also cause creation of `netcontrol.log`,
which contains information about all actions that are taken by NetControl:
.. btest:: netcontrol-1-drop-with-debug.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/tls/ecdhe.pcap ${DOC_ROOT}/frameworks/netcontrol-1-drop-with-debug.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol.log
In our case, `netcontrol.log` contains several :bro:see:`NetControl::MESSAGE`
entries, which show that the debug plugin has been initialized and added.
Afterwards, there are two :bro:see:`NetControl::RULE` entries; the first shows
that the addition of a rule has been requested (state is
:bro:see:`NetControl::REQUESTED`). The following line shows that the rule was
successfully added (the state is :bro:see:`NetControl::SUCCEEDED`). The
remainder of the log line gives more information about the added rule, which in
our case applies to a specific 5-tuple.
In addition to the netcontrol.log, the drop commands also create a second
log called `netcontrol_drop.log`. This log file is much more succinct and
only contains information that is specific to drops that are enacted by
NetControl:
.. btest:: netcontrol-1-drop-with-debug.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol_drop.log
While this example of blocking all connections is usually not very useful, the
high-level API gives an easy way to take action, for example when a host is
identified doing some harmful activity. To give a more realistic example, the
following code automatically blocks a recognized SSH guesser:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-2-ssh-guesser.bro
.. btest:: netcontrol-2-ssh-guesser.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/ssh/sshguess.pcap ${DOC_ROOT}/frameworks/netcontrol-2-ssh-guesser.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol.log
Note that in this case, instead of calling NetControl directly, we can also use
the :bro:see:`Notice::ACTION_DROP` action of the notice framework:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-3-ssh-guesser.bro
.. btest:: netcontrol-3-ssh-guesser.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/ssh/sshguess.pcap ${DOC_ROOT}/frameworks/netcontrol-3-ssh-guesser.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol.log
Using the :bro:see:`Notice::ACTION_DROP` action of the notice framework also
will cause the `dropped` column in `notice.log` to be set to true each time that
the NetControl framework enacts a block:
.. btest:: netcontrol-3-ssh-guesser.bro
@TEST-EXEC: btest-rst-cmd cat notice.log
Rule API
--------
As already mentioned in the last section, in addition to the high-level API, the
NetControl framework also supports a Rule based API which allows greater
flexibility while adding rules. Actually, all the high-level functions are
implemented using this lower-level rule API; the high-level functions simply
convert their arguments into the lower-level rules and then add the rules
directly to the NetControl framework (by calling :bro:see:`NetControl::add_rule`).
The following figure shows the main components of NetControl rules:
.. figure:: netcontrol-rules.png
:width: 600
:align: center
:alt: NetControl rule overview
:target: ../_images/netcontrol-rules.png
NetControl Rule overview (click to enlarge).
The types that are used to make up a rule are defined in
:doc:`/scripts/base/frameworks/netcontrol/types.bro`.
Rules are defined as a :bro:see:`NetControl::Rule` record. Rules have a *type*,
which specifies what kind of action is taken. The possible actions are to
**drop** packets, to **modify** them, to **redirect** or to **whitelist** them.
The *target* of a rule specifies if the rule is applied in the *forward path*,
and affects packets as they are forwarded through the network, or if it affects
the *monitor path* and only affects the packets that are sent to Bro, but not
the packets that traverse the network. The *entity* specifies the address,
connection, etc. that the rule applies to. In addition, each rule has a
*timeout* (which can be left empty) and a *priority* (with higher priority rules
overriding lower priority rules). Furthermore, a *location* string with more
text information about each rule can be provided.
There are a couple more fields that are only needed for some rule types. For
example, when you insert a redirect rule, you have to specify the port that
packets should be redirected to. All these fields are shown in the
:bro:see:`NetControl::Rule` documentation.
To give an example of how to construct your own rule, we are going to write
our own version of the :bro:see:`NetControl::drop_connection` function. The only
difference between our function and the one provided by NetControl is the fact
that the NetControl function has additional functionality, e.g. for logging.
Once again, we are going to test our function with a simple example that
drops all connections on the network:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-4-drop.bro
.. btest:: netcontrol-4-drop.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/tls/ecdhe.pcap ${DOC_ROOT}/frameworks/netcontrol-4-drop.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol.log
The last example shows that :bro:see:`NetControl::add_rule` returns a string
identifier that is unique for each rule (uniqueness is not preserved across
restarts of Bro). This rule id can be used to later remove rules manually using
:bro:see:`NetControl::remove_rule`.
Similar to :bro:see:`NetControl::add_rule`, all the high-level functions also
return their rule IDs, which can be removed in the same way.
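For instance, the following sketch drops each new connection and then
lifts the block again right away; the reason string passed to
:bro:see:`NetControl::remove_rule` is optional and purely illustrative:

.. code:: bro

    event connection_established(c: connection)
        {
        # The high-level functions return the id of the added rule.
        local id = NetControl::drop_connection(c$id, 20 secs);

        # The id can later be used to remove the rule manually.
        NetControl::remove_rule(id, "example: lifting the block again");
        }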
Interacting with Rules
----------------------
The NetControl framework offers a number of different ways to interact with
Rules. Before a rule is applied by the framework, a number of different hooks
allow you to either modify or discard rules before they are added. Furthermore,
a number of events can be used to track the lifecycle of a rule while it is
being managed by the NetControl framework. It is also possible to query and
access the current set of active rules.
Rule Policy
***********
The hook :bro:see:`NetControl::rule_policy` provides the mechanism for modifying
or discarding a rule before it is sent onwards to the backends. Hooks can be
thought of as multi-bodied functions and using them looks very similar to
handling events. Unlike events, however, they are processed immediately. Like
events, hooks can have priorities to sort the order in which they are applied.
Hooks can use the ``break`` keyword to show that processing should be aborted;
if any :bro:see:`NetControl::rule_policy` hook uses ``break``, the rule will be
discarded before further processing.
Here is a simple example which tells Bro to discard all rules for connections
originating from the 192.168.* network:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-5-hook.bro
.. btest:: netcontrol-5-hook.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/tls/ecdhe.pcap ${DOC_ROOT}/frameworks/netcontrol-5-hook.bro
NetControl Events
*****************
In addition to the hooks, the NetControl framework offers a variety of events
that are raised by the framework to allow users to track rules, as well as the
state of the framework.
We already encountered and used one event of the NetControl framework,
:bro:see:`NetControl::init`, which is used to initialize the framework. After
the framework has finished initialization and starts accepting rules, the
:bro:see:`NetControl::init_done` event will be raised.
When rules are added to the framework, the following events will be called in
this order:
.. list-table::
:widths: 20 80
:header-rows: 1
* - Event
- Description
* - :bro:see:`NetControl::rule_new`
- Signals that a new rule is created by the NetControl framework due to
:bro:see:`NetControl::add_rule`. At this point in time, the rule has not
yet been added to any backend.
* - :bro:see:`NetControl::rule_added`
- Signals that a new rule has successfully been added by a backend.
* - :bro:see:`NetControl::rule_exists`
- This event is raised instead of :bro:see:`NetControl::rule_added` when a
backend reports that a rule already existed.
* - :bro:see:`NetControl::rule_timeout`
- Signals that a rule timeout was reached. If the hardware does not support
automatic timeouts, the NetControl framework will automatically call
:bro:see:`NetControl::remove_rule`.
* - :bro:see:`NetControl::rule_removed`
- Signals that a new rule has successfully been removed by a backend.
* - :bro:see:`NetControl::rule_destroyed`
- This event is the counterpart to :bro:see:`NetControl::rule_added`, and
reports that a rule is no longer being tracked by the NetControl framework.
This happens, for example, when a rule was removed from all backends.
* - :bro:see:`NetControl::rule_error`
- This event is raised whenever an error occurs during any rule operation.
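As a minimal sketch, the two state-tracking events can be handled as
follows (we assume here that both events carry just the rule and that
rules expose a string ``id`` field; check the NetControl sources for
the authoritative signatures):

.. code:: bro

    event NetControl::rule_new(r: NetControl::Rule)
        {
        # Raised when a rule enters NetControl's internal state tracking.
        print "rule created", r$id;
        }

    event NetControl::rule_destroyed(r: NetControl::Rule)
        {
        # Raised once no backend tracks the rule anymore.
        print "rule destroyed", r$id;
        }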
Finding active rules
********************
The NetControl framework provides two functions for finding currently active
rules: :bro:see:`NetControl::find_rules_addr` finds all rules that affect a
certain IP address and :bro:see:`NetControl::find_rules_subnet` finds all rules
that affect a specified subnet.
Consider, for example, the case where a Bro instance monitors the traffic at the
border, before any firewall or switch rules were applied. Bro will then
still be able to see connection attempts of already blocked IP addresses. Here,
:bro:see:`NetControl::find_rules_addr` could be used to check if an
address already was blocked in the past.
Here is a simple example, which uses a trace that contains two connections from
the same IP address. During the second connection, the script recognizes that
the address was already blocked after the first connection.
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-6-find.bro
.. btest:: netcontrol-6-find.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/tls/google-duplicate.trace ${DOC_ROOT}/frameworks/netcontrol-6-find.bro
Notice that the functions return vectors because it is possible that several
rules exist simultaneously that affect one IP; either there could be
rules with different priorities, or rules for the subnet that an IP address is
part of.
.. _framework-netcontrol-catchrelease:
Catch and Release
-----------------
We already mentioned earlier that in addition to the
:bro:see:`NetControl::drop_connection` and :bro:see:`NetControl::drop_address`
functions, which drop a connection or address for a specified amount of time,
NetControl also comes with a blocking function that uses an approach called
*catch and release*.
Catch and release is a blocking scheme that conserves valuable rule space in
your hardware. Instead of using long-lasting blocks, catch and release first
only installs blocks for a short amount of time (typically a few minutes). After
these minutes pass, the block is lifted, but the IP address is added to a
watchlist and will immediately be re-blocked (for a longer
amount of time) if it is seen reappearing in any traffic, no matter whether
the new traffic triggers an alert or not.
This makes catch and release blocks similar to normal, longer duration blocks,
while only requiring a small amount of space for the currently active rules. IP
addresses that are only seen once for a short time are only blocked for a few
minutes, monitored for a while and then forgotten. IP addresses that keep
appearing will get re-blocked for longer amounts of time.
In contrast to the other high-level functions that we documented so far, the
catch and release functionality is much more complex and adds a number of
different specialized functions to NetControl. The documentation for catch and
release is contained in the file
:doc:`/scripts/base/frameworks/netcontrol/catch-and-release.bro`.
Using catch and release in your scripts is easy; just use
:bro:see:`NetControl::drop_address_catch_release` like in this example:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-7-catch-release.bro
.. btest:: netcontrol-7-catch-release.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/tls/ecdhe.pcap ${DOC_ROOT}/frameworks/netcontrol-7-catch-release.bro
Note that you do not have to provide the block time for catch and release;
instead, catch and release uses the time intervals specified in
:bro:see:`NetControl::catch_release_intervals` (by default 10 minutes, 1 hour,
24 hours, 7 days). That means when an address is first blocked, it is blocked
for 10 minutes and monitored for 1 hour. If the address reappears after the
first 10 minutes, it is blocked for 1 hour and then monitored for 24 hours, etc.
Catch and release adds its own new logfile in addition to the already existing
ones (netcontrol_catch_release.log):
.. btest:: netcontrol-7-catch-release.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol_catch_release.log
In addition to the blocking function, catch and release comes with the
:bro:see:`NetControl::get_catch_release_info` function to
check if an address is already blocked by catch and release (and get information
about the block). The :bro:see:`NetControl::unblock_address_catch_release`
function can be used to unblock addresses from catch and release.
.. note::
Since catch and release does its own connection tracking in addition to the
tracking used by the NetControl framework, it is not sufficient to remove
rules that were added by catch and release using :bro:see:`NetControl::remove_rule`.
You have to use :bro:see:`NetControl::unblock_address_catch_release` in this
case.
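A hypothetical sketch of checking and lifting a catch-and-release
block (the exact argument lists of the two functions below are
assumptions; consult catch-and-release.bro for the authoritative
signatures):

.. code:: bro

    event NetControl::init_done()
        {
        NetControl::drop_address_catch_release(192.168.17.42);

        # Inspect the catch-and-release state for the address ...
        print NetControl::get_catch_release_info(192.168.17.42);

        # ... and lift the block, including the watchlist entry.
        NetControl::unblock_address_catch_release(192.168.17.42, "manual unblock");
        }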
.. _framework-netcontrol-plugins:
NetControl Plugins
==================
Using the existing plugins
--------------------------
In the API part of the documentation, we exclusively used the debug plugin,
which simply outputs its actions to the screen. In addition to this debugging
plugin, Bro ships with a small number of plugins that can be used to interface
the NetControl framework with your networking hard- and software.
The plugins that currently ship with NetControl are:
.. list-table::
:widths: 15 55
:header-rows: 1
* - Plugin name
- Description
* - OpenFlow plugin
- This is the most fully featured plugin which allows the NetControl
framework to be interfaced with OpenFlow switches. The source of this
plugin is contained in :doc:`/scripts/base/frameworks/netcontrol/plugins/openflow.bro`.
* - Broker plugin
- This plugin provides a generic way to send NetControl commands using the
new Bro communication library (Broker). External programs can receive
the rules and take action; we provide an example script that calls
command-line programs triggered by NetControl. The source of this
plugin is contained in :doc:`/scripts/base/frameworks/netcontrol/plugins/broker.bro`.
* - acld plugin
- This plugin adds support for the acld daemon, which can interface with
several switches and routers. The current version of acld is available
from the `LBL ftp server <ftp://ftp.ee.lbl.gov/acld.tar.gz>`_. The source of this
plugin is contained in :doc:`/scripts/base/frameworks/netcontrol/plugins/acld.bro`.
* - PacketFilter plugin
- This plugin uses the Bro process-level packet filter (see
:bro:see:`install_src_net_filter` and
:bro:see:`install_dst_net_filter`). Since the functionality of the
PacketFilter is limited, this plugin is mostly for demonstration purposes. The source of this
plugin is contained in :doc:`/scripts/base/frameworks/netcontrol/plugins/packetfilter.bro`.
* - Debug plugin
- The debug plugin simply outputs its action to the standard output. The source of this
plugin is contained in :doc:`/scripts/base/frameworks/netcontrol/plugins/debug.bro`.
Activating plugins
******************
In the API reference part of this document, we already used the debug plugin. To
use the plugin, we first had to instantiate it by calling
:bro:see:`NetControl::create_debug` and then add it to NetControl by
calling :bro:see:`NetControl::activate`.
As we already hinted before, NetControl supports having several plugins that are
active at the same time. The second argument to the `NetControl::activate`
function is the priority of the backend that was just added. Each rule is sent
to all plugins in order, from highest priority to lowest priority. The backend
can then choose if it accepts the rule and pushes it out to the hardware that it
manages. Or, it can opt to reject the rule. In this case, the NetControl
framework will try to apply the rule to the backend with the next lower
priority. If no backend accepts a rule, the rule insertion is marked as failed.
The choice of whether a rule is accepted or rejected rests completely with each plugin.
The debug plugin we used so far just accepts all rules. However, for other
plugins you can specify what rules they will accept. Consider, for example, a
network with two OpenFlow switches. The first switch forwards packets from the
network to the external world, the second switch sits in front of your Bro
cluster to provide packet shunting. In this case, you can add two OpenFlow
backends to NetControl. When you create the instances using
:bro:see:`NetControl::create_openflow`, you set the `monitor` and `forward`
attributes of the configuration in :bro:see:`NetControl::OfConfig`
appropriately. Afterwards, one of the backends will only accept rules for the
monitor path; the other backend will only accept rules for the forward path.
Commonly, plugins also support predicate functions that allow the user to
specify restrictions on the rules that they will accept. This can for example be
used if you have a network where certain switches are responsible for specified
subnets. The predicate can examine the subnet of the rule and only accept the
rule if the rule matches the subnet that the specific switch is responsible for.
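Such a predicate is an ordinary Bro function. As a sketch (the subnet is
hypothetical; the ``$check_pred`` field follows the OpenFlow plugin's
:bro:see:`NetControl::OfConfig` record), a backend whose switch is responsible
for 10.1.0.0/16 could be restricted like this:

.. code:: bro

    function our_switch_pred(p: NetControl::PluginState, r: NetControl::Rule): bool
        {
        # Accept only rules that target an address inside 10.1.0.0/16.
        if ( r$entity$ty == NetControl::ADDRESS )
            return subnet_to_addr(r$entity$ip) in 10.1.0.0/16;

        return F;
        }

The predicate is then passed in when the backend is created, e.g. via
``NetControl::create_openflow(of_controller, [$check_pred=our_switch_pred])``.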
To give an example, the following script adds two backends to NetControl. One
backend is the NetControl debug backend, which just outputs the rules to the
console. The second backend is an OpenFlow backend, which uses the OpenFlow
debug mode that outputs the OpenFlow rules to openflow.log. The OpenFlow
backend uses a predicate function to reject rules with a source address in
the 192.168.17.0/24 network; those rejected rules are passed on to the debug
:bro:see:`NetControl::init_done` event to verify the correct functionality.
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-8-multiple.bro
.. btest:: netcontrol-8-multiple.bro
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/frameworks/netcontrol-8-multiple.bro
As you can see, only the single block affecting the 192.168.17.0/24 network is
output to the command line. The other two lines are handled by the OpenFlow
plugin. We can verify this by looking at netcontrol.log. The plugin column shows
which plugin handled a rule and reveals that two rules were handled by OpenFlow:
.. btest:: netcontrol-8-multiple.bro
@TEST-EXEC: btest-rst-cmd cat netcontrol.log
Furthermore, openflow.log also shows the two added rules, converted to OpenFlow
flow mods:
.. btest:: netcontrol-8-multiple.bro
@TEST-EXEC: btest-rst-cmd cat openflow.log
.. note::
You might have asked yourself what happens when you add two or more backends with the
same priority. In this case, the rule is sent to all the backends
simultaneously. This can be useful, for example when you have redundant
switches that should keep the same rule state.
Interfacing with external hardware
**********************************
Now that we know which plugins exist, and how they can be added to NetControl,
it is time to discuss how we can interface Bro with actual hardware. The typical
way to accomplish this is to use the Bro communication library (Broker), which
can be used to exchange Bro events with external programs and scripts. The
NetControl plugins can use Broker to send events to external programs, which can
then take action depending on these events.
The following figure shows this architecture with the example of the OpenFlow
plugin. The OpenFlow plugin uses Broker to send events to an external Python
script, which uses the `Ryu SDN controller <https://osrg.github.io/ryu/>`_ to
communicate with the switch.
.. figure:: netcontrol-openflow.png
:width: 600
:align: center
:alt: NetControl and OpenFlow architecture.
:target: ../_images/netcontrol-openflow.png
NetControl and OpenFlow architecture (click to enlarge).
The Python scripts that are used to interface with the available NetControl
plugins are contained in the `bro-netcontrol` repository (`github link <https://github.com/bro/bro-netcontrol>`_).
The repository contains scripts for the OpenFlow as well as the acld plugin.
Furthermore, it contains a script for the broker plugin that can be used to
call configurable command-line programs.
The repository also contains documentation on how to install these connectors.
The `netcontrol` directory contains an API that allows you to write your own
connectors to the broker plugin.
.. note::
Note that the API of the Broker communication library is not finalized yet.
You might have to rewrite any scripts for use in future Bro versions.
Writing plugins
---------------
In addition to using the plugins that are part of NetControl, you can write your
own plugins to interface with hard- or software that we currently do not support
out of the box.
Creating your own plugin is easy; besides a bit of boilerplate, you only need to
create two functions: one that is called when a rule is added, and one that is
called when a rule is removed. The following script creates a minimal plugin
that just outputs a rule when it is added or removed. Note that you have to
raise the :bro:see:`NetControl::rule_added` and
:bro:see:`NetControl::rule_removed` events in your plugin to let NetControl know
when a rule has been added or removed successfully.
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-9-skeleton.bro
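In essence, the skeleton consists of the two callbacks, which raise the events
above once a rule has been applied or removed, plus a constructor that returns
the plugin's state. A condensed sketch (signatures follow the plugin API; the
print statements are illustrative):

.. code:: bro

    function skeleton_add_rule(p: NetControl::PluginState, r: NetControl::Rule): bool
        {
        print "add rule", r;
        # Tell NetControl that the rule was added successfully.
        event NetControl::rule_added(r, p);
        return T;
        }

    function skeleton_remove_rule(p: NetControl::PluginState, r: NetControl::Rule, reason: string): bool
        {
        print "remove rule", r;
        event NetControl::rule_removed(r, p);
        return T;
        }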
The skeleton is already fully functional, and we can use it with a script
similar to our very first example:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-10-use-skeleton.bro
.. btest:: netcontrol-9-skeleton.bro
@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/tls/ecdhe.pcap ${DOC_ROOT}/frameworks/netcontrol-9-skeleton.bro ${DOC_ROOT}/frameworks/netcontrol-10-use-skeleton.bro
If you want to write your own plugins, it will be worthwhile to look at the
plugins that ship with the NetControl framework to see how they define the
predicates and interact with Broker.
View file
@ -83,9 +83,9 @@ The hook :bro:see:`Notice::policy` provides the mechanism for applying
actions and generally modifying the notice before it's sent onward to
the action plugins. Hooks can be thought of as multi-bodied functions
and using them looks very similar to handling events. The difference
is that they don't go through the event queue like events. Users should
directly make modifications to the :bro:see:`Notice::Info` record
given as the argument to the hook.
is that they don't go through the event queue like events. Users can
alter notice processing by directly modifying fields in the
:bro:see:`Notice::Info` record given as the argument to the hook.
Here's a simple example which tells Bro to send an email for all notices of
type :bro:see:`SSH::Password_Guessing` if the guesser attempted to log in to
View file
@ -32,7 +32,6 @@ before you begin:
* Libz
* Bash (for BroControl)
* Python (for BroControl)
* C++ Actor Framework (CAF) version 0.14 (http://actor-framework.org)
To build Bro from source, the following additional dependencies are required:
@ -47,8 +46,6 @@ To build Bro from source, the following additional dependencies are required:
* zlib headers
* Python
To install CAF, first download the source code of the required version from: https://github.com/actor-framework/actor-framework/releases
To install the required dependencies, you can use:
* RPM/RedHat-based Linux:
@ -98,12 +95,12 @@ To install the required dependencies, you can use:
component).
OS X comes with all required dependencies except for CMake_, SWIG_,
OpenSSL, and CAF. (OpenSSL used to be part of OS X versions 10.10
and OpenSSL. (OpenSSL used to be part of OS X versions 10.10
and older, for which it does not need to be installed manually. It
was removed in OS X 10.11). Distributions of these dependencies can
likely be obtained from your preferred Mac OS X package management
system (e.g. Homebrew_, MacPorts_, or Fink_). Specifically for
Homebrew, the ``cmake``, ``swig``, ``openssl`` and ``caf`` packages
Homebrew, the ``cmake``, ``swig``, and ``openssl`` packages
provide the required dependencies.
@ -113,6 +110,7 @@ Optional Dependencies
Bro can make use of some optional libraries and tools if they are found at
build time:
* C++ Actor Framework (CAF) version 0.14 (http://actor-framework.org)
* LibGeoIP (for geolocating IP addresses)
* sendmail (enables Bro and BroControl to send mail)
* curl (used by a Bro script that implements active HTTP)
View file
@ -197,7 +197,7 @@ file:
Often times log files from multiple sources are stored in UTC time to
allow easy correlation. Converting the timestamp from a log file to
UTC can be accomplished with the ``-u`` option:
UTC can be accomplished with the ``-u`` option:
.. btest:: using_bro
@ -227,7 +227,7 @@ trip. A common progression of review includes correlating a session
across multiple log files. As a connection is processed by Bro, a
unique identifier is assigned to each session. This unique identifier
is generally included in any log file entry associated with that
connection and can be used to cross-reference different log files.
connection and can be used to cross-reference different log files.
A simple example would be to cross-reference a UID seen in a
``conn.log`` file. Here, we're looking for the connection with the
@ -244,7 +244,7 @@ crossreference that with the UIDs in the ``http.log`` file.
.. btest:: using_bro
@TEST-EXEC: btest-rst-cmd "cat http.log | bro-cut uid id.resp_h method status_code host uri | grep VW0XPVINV8a"
@TEST-EXEC: btest-rst-cmd "cat http.log | bro-cut uid id.resp_h method status_code host uri | grep UM0KZ3MLUfNB0cl11"
As you can see there are two HTTP ``GET`` requests within the
session that Bro identified and logged. Given that HTTP is a stream
View file
@ -71,6 +71,23 @@ Files
| x509.log | X.509 certificate info | :bro:type:`X509::Info` |
+----------------------------+---------------------------------------+---------------------------------+
NetControl
----------
+------------------------------+---------------------------------------+------------------------------------------+
| Log File | Description | Field Descriptions |
+==============================+=======================================+==========================================+
| netcontrol.log | NetControl actions | :bro:type:`NetControl::Info` |
+------------------------------+---------------------------------------+------------------------------------------+
| netcontrol_drop.log | NetControl drop actions | :bro:type:`NetControl::DropInfo` |
+------------------------------+---------------------------------------+------------------------------------------+
| netcontrol_shunt.log | NetControl shunt actions | :bro:type:`NetControl::ShuntInfo` |
+------------------------------+---------------------------------------+------------------------------------------+
| netcontrol_catch_release.log | NetControl catch and release actions | :bro:type:`NetControl::CatchReleaseInfo` |
+------------------------------+---------------------------------------+------------------------------------------+
| openflow.log | OpenFlow debug log | :bro:type:`OpenFlow::Info` |
+------------------------------+---------------------------------------+------------------------------------------+
Detection
---------
@ -95,8 +112,6 @@ Network Observations
+----------------------------+---------------------------------------+---------------------------------+
| Log File | Description | Field Descriptions |
+============================+=======================================+=================================+
| app_stats.log | Web app usage statistics | :bro:type:`AppStats::Info` |
+----------------------------+---------------------------------------+---------------------------------+
| known_certs.log | SSL certificates | :bro:type:`Known::CertsInfo` |
+----------------------------+---------------------------------------+---------------------------------+
| known_devices.log | MAC addresses of devices on the | :bro:type:`Known::DevicesInfo` |
View file
@ -181,11 +181,14 @@ Here is a more detailed description of each type:
second-to-last character, etc. Here are a few examples::
local orig = "0123456789";
local second_char = orig[1];
local last_char = orig[-1];
local first_two_chars = orig[:2];
local last_two_chars = orig[8:];
local no_first_and_last = orig[1:9];
local second_char = orig[1]; # "1"
local last_char = orig[-1]; # "9"
local first_two_chars = orig[:2]; # "01"
local last_two_chars = orig[8:]; # "89"
local no_first_and_last = orig[1:9]; # "12345678"
local no_first = orig[1:]; # "123456789"
local no_last = orig[:-1]; # "012345678"
local copy_orig = orig[:]; # "0123456789"
Note that the subscript operator cannot be used to modify a string (i.e.,
it cannot be on the left side of an assignment operator).
View file
@ -66,9 +66,6 @@ print version and exit
\fB\-x\fR,\ \-\-print\-state <file.bst>
print contents of state file
.TP
\fB\-z\fR,\ \-\-analyze <analysis>
run the specified policy file analysis
.TP
\fB\-C\fR,\ \-\-no\-checksums
ignore checksums
.TP
@ -78,12 +75,6 @@ force DNS
\fB\-I\fR,\ \-\-print\-id <ID name>
print out given ID
.TP
\fB\-J\fR,\ \-\-set\-seed <seed>
set the random number seed
.TP
\fB\-K\fR,\ \-\-md5\-hashkey <hashkey>
set key for MD5\-keyed hashing
.TP
\fB\-N\fR,\ \-\-print\-plugins
print available plugins and exit (\fB\-NN\fR for verbose)
.TP
View file
@ -1,46 +0,0 @@
#!/bin/sh
# This script generates binary DEB packages.
# They can be found in ../build/ after running.
# The DEB CPack generator depends on `dpkg-shlibdeps` to automatically
# determine what dependencies to set for the packages
type dpkg-shlibdeps > /dev/null 2>&1 || {
echo "\
Creating DEB packages requires the "dpkg-shlibs" command, usually provided by
the 'dpkg-dev' package, please install it first.
" >&2;
exit 1;
}
prefix=/opt/bro
localstatedir=/var/opt/bro
# During the packaging process, `dpkg-shlibs` will fail if used on a library
# that links to other internal/project libraries unless an RPATH is used or
# we set LD_LIBRARY_PATH such that it can find the internal/project library
# in the temporary packaging tree.
export LD_LIBRARY_PATH=./${prefix}/lib
cd ..
# Minimum Bro
./configure --prefix=${prefix} --disable-broccoli --disable-broctl \
--pkg-name-prefix=Bro-minimal --binary-package
( cd build && make package )
# Full Bro package
./configure --prefix=${prefix} --localstatedir=${localstatedir} --pkg-name-prefix=Bro --binary-package
( cd build && make package )
# Broccoli
cd aux/broccoli
./configure --prefix=${prefix} --binary-package
( cd build && make package && mv *.deb ../../../build/ )
cd ../..
# Broctl
cd aux/broctl
./configure --prefix=${prefix} --localstatedir=${localstatedir} --binary-package
( cd build && make package && mv *.deb ../../../build/ )
cd ../..
View file
@ -1,57 +0,0 @@
#!/bin/sh
# This script creates binary packages for Mac OS X.
# They can be found in ../build/ after running.
type sw_vers > /dev/null 2>&1 || {
echo "Unable to get Mac OS X version" >&2;
exit 1;
}
# Get the OS X minor version
# 5 = Leopard, 6 = Snow Leopard, 7 = Lion ...
osx_ver=`sw_vers | sed -n 's/ProductVersion://p' | cut -d . -f 2`
if [ ${osx_ver} -lt 5 ]; then
echo "Packages for OS X < 10.5 are not supported" >&2
exit 1
elif [ ${osx_ver} -eq 5 ]; then
# On OS X 10.5, the x86_64 version of libresolv is broken,
# so we build for i386 as the easiest solution
arch=i386
else
# Currently it's just easiest to build the 10.5 package on
# on 10.5, but if it weren't for the libresolv issue, we could
# potentially build packages for older OS X version by using the
# --osx-sysroot and --osx-min-version options
arch=x86_64
fi
prefix=/opt/bro
cd ..
# Minimum Bro
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--disable-broccoli --disable-broctl --pkg-name-prefix=Bro-minimal \
--binary-package
( cd build && make package )
# Full Bro package
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--pkg-name-prefix=Bro --binary-package
( cd build && make package )
# Broccoli
cd aux/broccoli
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--binary-package
( cd build && make package && mv *.dmg ../../../build/ )
cd ../..
# Broctl
cd aux/broctl
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--binary-package
( cd build && make package && mv *.dmg ../../../build/ )
cd ../..
View file
@ -1,39 +0,0 @@
#!/bin/sh
# This script generates binary RPM packages.
# They can be found in ../build/ after running.
# The RPM CPack generator depends on `rpmbuild` to create packages
type rpmbuild > /dev/null 2>&1 || {
echo "\
Creating RPM packages requires the "rpmbuild" command, usually provided by
the 'rpm-build' package, please install it first.
" >&2;
exit 1;
}
prefix=/opt/bro
localstatedir=/var/opt/bro
cd ..
# Minimum Bro
./configure --prefix=${prefix} --disable-broccoli --disable-broctl \
--pkg-name-prefix=Bro-minimal --binary-package
( cd build && make package )
# Full Bro package
./configure --prefix=${prefix} --localstatedir=${localstatedir} --pkg-name-prefix=Bro --binary-package
( cd build && make package )
# Broccoli
cd aux/broccoli
./configure --prefix=${prefix} --binary-package
( cd build && make package && mv *.rpm ../../../build/ )
cd ../..
# Broctl
cd aux/broctl
./configure --prefix=${prefix} --localstatedir=${localstatedir} --binary-package
( cd build && make package && mv *.rpm ../../../build/ )
cd ../..
View file
@ -28,6 +28,14 @@ redef Communication::listen_port = Cluster::nodes[Cluster::node]$p;
@if ( Cluster::local_node_type() == Cluster::MANAGER )
@load ./nodes/manager
# If no logger is defined, then the manager receives logs.
@if ( Cluster::manager_is_logger )
@load ./nodes/logger
@endif
@endif
@if ( Cluster::local_node_type() == Cluster::LOGGER )
@load ./nodes/logger
@endif
@if ( Cluster::local_node_type() == Cluster::PROXY )
View file
@ -31,7 +31,9 @@ export {
## A node type which is allowed to view/manipulate the configuration
## of other nodes in the cluster.
CONTROL,
## A node type responsible for log and policy management.
## A node type responsible for log management.
LOGGER,
## A node type responsible for policy management.
MANAGER,
## A node type for relaying worker node communication and synchronizing
## worker node state.
@ -50,12 +52,21 @@ export {
## Events raised by a manager and handled by proxies.
const manager2proxy_events = /EMPTY/ &redef;
## Events raised by a manager and handled by loggers.
const manager2logger_events = /EMPTY/ &redef;
## Events raised by proxies and handled by loggers.
const proxy2logger_events = /EMPTY/ &redef;
## Events raised by proxies and handled by a manager.
const proxy2manager_events = /EMPTY/ &redef;
## Events raised by proxies and handled by workers.
const proxy2worker_events = /EMPTY/ &redef;
## Events raised by workers and handled by loggers.
const worker2logger_events = /EMPTY/ &redef;
## Events raised by workers and handled by a manager.
const worker2manager_events = /(TimeMachine::command|Drop::.*)/ &redef;
@ -86,6 +97,8 @@ export {
p: port;
## Identifier for the interface a worker is sniffing.
interface: string &optional;
## Name of the logger node this node uses. For manager, proxies and workers.
logger: string &optional;
## Name of the manager node this node uses. For workers and proxies.
manager: string &optional;
## Name of the proxy node this node uses. For workers and managers.
@ -123,6 +136,12 @@ export {
## Note that BroControl handles all of this automatically.
const nodes: table[string] of Node = {} &redef;
## Indicates whether or not the manager will act as the logger and receive
## logs. This value should be set in the cluster-layout.bro script (the
## value should be true only if no logger is specified in Cluster::nodes).
## Note that BroControl handles this automatically.
const manager_is_logger = T &redef;
## This is usually supplied on the command line for each instance
## of the cluster that is started up.
const node = getenv("CLUSTER_NODE") &redef;
View file
@ -0,0 +1,29 @@
##! This is the core Bro script to support the notion of a cluster logger.
##!
##! The logger is passive (other Bro instances connect to us), and once
##! connected the logger receives logs from other Bro instances.
##! This script will be automatically loaded if necessary based on the
##! type of node being started.
## This is where the cluster logger sets its specific settings for other
##! frameworks and in the core.
@prefixes += cluster-logger
## Turn on local logging.
redef Log::enable_local_logging = T;
## Turn off remote logging since this is the logger and should only log here.
redef Log::enable_remote_logging = F;
## Log rotation interval.
redef Log::default_rotation_interval = 1 hrs;
## Alarm summary mail interval.
redef Log::default_mail_alarms_interval = 24 hrs;
## Use the cluster's archive logging script.
redef Log::default_rotation_postprocessor_cmd = "archive-log";
## We're processing essentially *only* remote events.
redef max_remote_events_processed = 10000;
View file
@ -10,17 +10,17 @@
@prefixes += cluster-manager
## Turn off remote logging since this is the manager and should only log here.
redef Log::enable_remote_logging = F;
## Don't do any local logging since the logger handles writing logs.
redef Log::enable_local_logging = F;
## Turn on remote logging since the logger handles writing logs.
redef Log::enable_remote_logging = T;
## Log rotation interval.
redef Log::default_rotation_interval = 1 hrs;
redef Log::default_rotation_interval = 24 hrs;
## Alarm summary mail interval.
redef Log::default_mail_alarms_interval = 24 hrs;
## Use the cluster's archive logging script.
redef Log::default_rotation_postprocessor_cmd = "archive-log";
## Use the cluster's delete-log script.
redef Log::default_rotation_postprocessor_cmd = "delete-log";
## We're processing essentially *only* remote events.
redef max_remote_events_processed = 10000;
View file
@ -1,6 +1,6 @@
##! Redefines some options common to all worker nodes within a Bro cluster.
##! In particular, worker nodes do not produce logs locally, instead they
##! send them off to a manager node for processing.
##! send them off to a logger node for processing.
@prefixes += cluster-worker
View file
@ -23,17 +23,40 @@ event bro_init() &priority=9
$connect=F, $class="control",
$events=control_events];
if ( me$node_type == MANAGER )
if ( me$node_type == LOGGER )
{
if ( n$node_type == MANAGER && n$logger == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=manager2logger_events, $request_logs=T];
if ( n$node_type == PROXY && n$logger == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=proxy2logger_events, $request_logs=T];
if ( n$node_type == WORKER && n$logger == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=worker2logger_events, $request_logs=T];
}
else if ( me$node_type == MANAGER )
{
if ( n$node_type == LOGGER && me$logger == i )
Communication::nodes["logger"] =
[$host=n$ip, $zone_id=n$zone_id, $p=n$p,
$connect=T, $retry=retry_interval,
$class=node];
if ( n$node_type == WORKER && n$manager == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=worker2manager_events, $request_logs=T];
$class=i, $events=worker2manager_events,
$request_logs=Cluster::manager_is_logger];
if ( n$node_type == PROXY && n$manager == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=proxy2manager_events, $request_logs=T];
$class=i, $events=proxy2manager_events,
$request_logs=Cluster::manager_is_logger];
if ( n$node_type == TIME_MACHINE && me?$time_machine && me$time_machine == i )
Communication::nodes["time-machine"] = [$host=nodes[i]$ip,
@ -45,6 +68,12 @@ event bro_init() &priority=9
else if ( me$node_type == PROXY )
{
if ( n$node_type == LOGGER && me$logger == i )
Communication::nodes["logger"] =
[$host=n$ip, $zone_id=n$zone_id, $p=n$p,
$connect=T, $retry=retry_interval,
$class=node];
if ( n$node_type == WORKER && n$proxy == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F, $class=i,
@ -76,6 +105,12 @@ event bro_init() &priority=9
}
else if ( me$node_type == WORKER )
{
if ( n$node_type == LOGGER && me$logger == i )
Communication::nodes["logger"] =
[$host=n$ip, $zone_id=n$zone_id, $p=n$p,
$connect=T, $retry=retry_interval,
$class=node];
if ( n$node_type == MANAGER && me$manager == i )
Communication::nodes["manager"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
View file
@ -27,6 +27,9 @@ export {
disabled_aids: set[count];
};
## Analyzers which you don't want to disable on protocol violations.
const ignore_violations: set[Analyzer::Tag] = set() &redef;
## Ignore violations which go this many bytes into the connection.
## Set to 0 to never ignore protocol violations.
const ignore_violations_after = 10 * 1024 &redef;
@ -82,8 +85,11 @@ event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count, reason
if ( ignore_violations_after > 0 && size > ignore_violations_after )
return;
if ( atype in ignore_violations )
return;
# Disable the analyzer that raised the last core-generated event.
disable_analyzer(c$id, aid);
disable_analyzer(c$id, aid, F);
add c$dpd$disabled_aids[aid];
}
View file
@ -174,3 +174,8 @@ signature file-lzma {
file-magic /^\x5d\x00\x00/
}
# ACE archive file.
signature file-ace-archive {
file-mime "application/x-ace", 100
file-magic /^.{7}\*\*ACE\*\*/
}
View file
@ -93,7 +93,7 @@ signature file-ini {
# Microsoft LNK files
signature file-lnk {
file-mime "application/x-ms-shortcut", 49
file-magic /^\x4C\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x10\x00\x00\x00\x46/
file-magic /^\x4c\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xc0\x00\x00\x00\x00\x00\x00\x46/
}
# Microsoft Registry policies
@ -310,4 +310,4 @@ signature file-elf-sharedlib {
signature file-elf-coredump {
file-mime "application/x-coredump", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x04\x00|\x02.{10}\x00\x04)/
}
}
View file
@ -103,6 +103,17 @@ export {
## it is skipped.
pred: function(typ: Input::Event, left: any, right: any): bool &optional;
## Error event that is raised when an info, warning, or error message
## is emitted by the input stream. If the level is error, the stream will
## automatically be closed.
## The event receives the Input::TableDescription as the first argument, the
## message as the second argument, and the Reporter::Level as the third argument.
##
## The event is raised as if it had been declared as follows:
## error_ev: function(desc: TableDescription, message: string, level: Reporter::Level) &optional;
## The actual declaration uses the ``any`` type because of deficiencies of the Bro type system.
error_ev: any &optional;
## A key/value table that will be passed to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
@ -146,6 +157,17 @@ export {
## all fields, or each field value as a separate argument).
ev: any;
## Error event that is raised when an info, warning, or error message
## is emitted by the input stream. If the level is error, the stream will
## automatically be closed.
## The event receives the Input::EventDescription as the first argument, the
## message as the second argument, and the Reporter::Level as the third argument.
##
## The event is raised as if it had been declared as follows:
## error_ev: function(desc: EventDescription, message: string, level: Reporter::Level) &optional;
## The actual declaration uses the ``any`` type because of deficiencies of the Bro type system.
error_ev: any &optional;
## A key/value table that will be passed to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
View file
@ -1,5 +1,8 @@
@load ./main
# File analysis framework integration.
@load ./files
# The cluster framework must be loaded first.
@load base/frameworks/cluster
View file
@ -1,8 +1,8 @@
##! Cluster transparency support for the intelligence framework. This is mostly
##! oriented toward distributing intelligence information across clusters.
@load ./main
@load base/frameworks/cluster
@load ./input
module Intel;
@ -17,19 +17,17 @@ redef record Item += {
redef have_full_data = F;
@endif
# Internal event for cluster data distribution.
global cluster_new_item: event(item: Item);
# Primary intelligence distribution comes from manager.
redef Cluster::manager2worker_events += /^Intel::(cluster_new_item)$/;
# If a worker finds intelligence and adds it, it should share it back to the manager.
redef Cluster::worker2manager_events += /^Intel::(cluster_new_item|match_no_items)$/;
# Primary intelligence management is done by the manager:
# The manager informs the workers about new items and item removal.
redef Cluster::manager2worker_events += /^Intel::(cluster_new_item|purge_item)$/;
# A worker queries the manager to insert, remove or indicate the match of an item.
redef Cluster::worker2manager_events += /^Intel::(cluster_new_item|remove_item|match_no_items)$/;
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event Intel::match_no_items(s: Seen) &priority=5
{
event Intel::match(s, Intel::get_items(s));
}
# Handling of new worker nodes.
event remote_connection_handshake_done(p: event_peer)
{
# When a worker connects, send it the complete minimal data store.
@ -39,15 +37,22 @@ event remote_connection_handshake_done(p: event_peer)
send_id(p, "Intel::min_data_store");
}
}
@endif
event Intel::cluster_new_item(item: Intel::Item) &priority=5
# Handling of matches triggered by worker nodes.
event Intel::match_no_items(s: Seen) &priority=5
{
# Ignore locally generated events to avoid event storms.
if ( is_remote_event() )
Intel::insert(item);
if ( Intel::find(s) )
event Intel::match(s, Intel::get_items(s));
}
# Handling of item removal triggered by worker nodes.
event Intel::remove_item(item: Item, purge_indicator: bool)
{
remove(item, purge_indicator);
}
@endif
# Handling of item insertion.
event Intel::new_item(item: Intel::Item) &priority=5
{
# The cluster manager always rebroadcasts intelligence.
@ -59,3 +64,11 @@ event Intel::new_item(item: Intel::Item) &priority=5
event Intel::cluster_new_item(item);
}
}
# Handling of item insertion by remote node.
event Intel::cluster_new_item(item: Intel::Item) &priority=5
{
# Ignore locally generated events to avoid event storms.
if ( is_remote_event() )
Intel::insert(item);
}
View file
@ -0,0 +1,84 @@
##! File analysis framework integration for the intelligence framework. This
##! script manages file information in intelligence framework datastructures.
@load ./main
module Intel;
export {
## Enum type to represent various types of intelligence data.
redef enum Type += {
## File hash which is non-hash type specific. It's up to the
## user to query for any relevant hash types.
FILE_HASH,
## File name. Typically with protocols with definite
## indications of a file name.
FILE_NAME,
};
## Information about a piece of "seen" data.
redef record Seen += {
## If the data was discovered within a file, the file record
## should go here to provide context to the data.
f: fa_file &optional;
## If the data was discovered within a file, the file uid should
## go here to provide context to the data. If the file record *f*
## is provided, this will be automatically filled out.
fuid: string &optional;
};
## Record used for the logging framework representing a positive
## hit within the intelligence framework.
redef record Info += {
## If a file was associated with this intelligence hit,
## this is the uid for the file.
fuid: string &log &optional;
## A mime type if the intelligence hit is related to a file.
## If the $f field is provided this will be automatically filled
## out.
file_mime_type: string &log &optional;
## Frequently files can be "described" to give a bit more context.
## If the $f field is provided this field will be automatically
## filled out.
file_desc: string &log &optional;
};
}
# Add file information to matches if available.
hook extend_match(info: Info, s: Seen, items: set[Item]) &priority=5
{
if ( s?$f )
{
s$fuid = s$f$id;
if ( s$f?$conns && |s$f$conns| == 1 )
{
for ( cid in s$f$conns )
s$conn = s$f$conns[cid];
}
if ( ! info?$file_mime_type && s$f?$info && s$f$info?$mime_type )
info$file_mime_type = s$f$info$mime_type;
if ( ! info?$file_desc )
info$file_desc = Files::describe(s$f);
}
if ( s?$fuid )
info$fuid = s$fuid;
if ( s?$conn )
{
s$uid = s$conn$uid;
info$id = s$conn$id;
}
if ( s?$uid )
info$uid = s$uid;
for ( item in items )
{
add info$sources[item$meta$source];
add info$matched[item$indicator_type];
}
}
View file
@ -1,11 +1,14 @@
##! Input handling for the intelligence framework. This script implements the
##! import of intelligence data from files using the input framework.
@load ./main
module Intel;
export {
## Intelligence files that will be read off disk. The files are
## reread every time they are updated so updates must be atomic with
## "mv" instead of writing the file in place.
## Intelligence files that will be read off disk. The files are
## reread every time they are updated so updates must be atomic
## with "mv" instead of writing the file in place.
const read_files: set[string] = {} &redef;
}
View file
@ -1,7 +1,6 @@
##! The intelligence framework provides a way to store and query IP addresses,
##! and strings (with a str_type). Metadata can
##! also be associated with the intelligence, like for making more informed
##! decisions about matching and handling of intelligence.
##! The intelligence framework provides a way to store and query intelligence data
##! (e.g. IP addresses, URLs and hashes). The intelligence items can be associated
##! with metadata to allow informed decisions about matching and handling.
@load base/frameworks/notice
@ -14,6 +13,8 @@ export {
type Type: enum {
## An IP address.
ADDR,
## A subnet in CIDR notation.
SUBNET,
## A complete URL without the prefix ``"http://"``.
URL,
## Software name.
@ -24,24 +25,20 @@ export {
DOMAIN,
## A user name.
USER_NAME,
## File hash which is non-hash type specific. It's up to the
## user to query for any relevant hash types.
FILE_HASH,
## File name. Typically with protocols with definite
## indications of a file name.
FILE_NAME,
## Certificate SHA-1 hash.
CERT_HASH,
## Public key MD5 hash. (SSH server host keys are a good example.)
PUBKEY_HASH,
};
## Set of intelligence data types.
type TypeSet: set[Type];
## Data about an :bro:type:`Intel::Item`.
type MetaData: record {
## An arbitrary string value representing the data source.
## Typically, the convention for this field will be the source
## name and feed name separated by a hyphen.
## For example: "source1-c&c".
## An arbitrary string value representing the data source. This
## value is used as unique key to identify a metadata record in
## the scope of a single intelligence item.
source: string;
## A freeform description for the data.
desc: string &optional;
@ -57,7 +54,7 @@ export {
## The type of data that the indicator field represents.
indicator_type: Type;
## Metadata for the item. Typically represents more deeply
## Metadata for the item. Typically represents more deeply
## descriptive data for a piece of intelligence.
meta: MetaData;
};
@ -96,15 +93,6 @@ export {
## If the *conn* field is provided, this will be automatically
## filled out.
uid: string &optional;
## If the data was discovered within a file, the file record
## should go here to provide context to the data.
f: fa_file &optional;
## If the data was discovered within a file, the file uid should
## go here to provide context to the data. If the *f* field is
## provided, this will be automatically filled out.
fuid: string &optional;
};
## Record used for the logging framework representing a positive
@ -120,41 +108,70 @@ export {
## this is the conn_id for the connection.
id: conn_id &log &optional;
## If a file was associated with this intelligence hit,
## this is the uid for the file.
fuid: string &log &optional;
## A mime type if the intelligence hit is related to a file.
## If the $f field is provided this will be automatically filled
## out.
file_mime_type: string &log &optional;
## Frequently files can be "described" to give a bit more context.
## If the $f field is provided this field will be automatically
## filled out.
file_desc: string &log &optional;
## Where the data was seen.
seen: Seen &log;
## Which indicator types matched.
matched: TypeSet &log;
## Sources which supplied data that resulted in this match.
sources: set[string] &log &default=string_set();
};
## Intelligence data manipulation function.
## Function to insert intelligence data. If the indicator is already
## present, the associated metadata will be added to the indicator. If
## the indicator already contains a metadata record from the same source,
## the existing metadata record will be updated.
global insert: function(item: Item);
## Function to remove intelligence data. If purge_indicator is set, the
## given metadata is ignored and the indicator is removed completely.
global remove: function(item: Item, purge_indicator: bool &default = F);
## Function to declare discovery of a piece of data in order to check
## it against known intelligence for matches.
global seen: function(s: Seen);
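# Usage sketch (indicator and source values are illustrative): inserting a
# subnet item and later removing it again could look like this:
#
#   Intel::insert([$indicator="192.168.17.0/24", $indicator_type=Intel::SUBNET,
#                  $meta=[$source="local-feed"]]);
#   Intel::remove([$indicator="192.168.17.0/24", $indicator_type=Intel::SUBNET,
#                  $meta=[$source="local-feed"]], T);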
## Event to represent a match in the intelligence data from data that
## was seen. On clusters there is no assurance as to where this event
## was seen. On clusters there is no assurance as to when this event
## will be generated so do not assume that arbitrary global state beyond
## the given data will be available.
##
## This is the primary mechanism where a user will take actions based on
## data within the intelligence framework.
## This is the primary mechanism where a user may take actions based on
## data provided by the intelligence framework.
global match: event(s: Seen, items: set[Item]);
## This hook can be used to influence the logging of intelligence hits
## (e.g. by adding data to the Info record). The default information is
## added with a priority of 5.
##
## info: The Info record that will be logged.
##
## s: Information about the data seen.
##
## items: The intel items that match the seen data.
##
## In case the hook execution is terminated using break, the match will
## not be logged.
global extend_match: hook(info: Info, s: Seen, items: set[Item]);
## The expiration timeout for intelligence items. Once an item expires, the
## :bro:id:`Intel::item_expired` hook is called. Reinsertion of an item
## resets the timeout. A negative value disables expiration of intelligence
## items.
const item_expiration = -1 min &redef;
## This hook can be used to handle expiration of intelligence items.
##
## indicator: The indicator of the expired item.
##
## indicator_type: The indicator type of the expired item.
##
## metas: The set of metadata describing the expired item.
##
## If all hook handlers are executed, the expiration timeout will be reset.
## Otherwise, if one of the handlers terminates using break, the item will
## be removed.
global item_expired: hook(indicator: string, indicator_type: Type, metas: set[MetaData]);
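# Usage sketch (the handler would live in a user script; the feed name is
# illustrative): keep items from one source alive, let all others expire.
#
#   hook Intel::item_expired(indicator: string, indicator_type: Intel::Type,
#                            metas: set[Intel::MetaData])
#       {
#       for ( m in metas )
#           if ( m$source == "critical-feed" )
#               return;   # handler completes: the expiration timeout resets
#
#       break;   # terminating with break removes the item
#       }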
global log_intel: event(rec: Info);
}
@ -163,16 +180,26 @@ global match_no_items: event(s: Seen);
# Internal events for cluster data distribution.
global new_item: event(item: Item);
global updated_item: event(item: Item);
global remove_item: event(item: Item, purge_indicator: bool);
global purge_item: event(item: Item);
# Optionally store metadata. This is used internally depending on
# if this is a cluster deployment or not.
const have_full_data = T &redef;
# Table of metadata, indexed by source string.
type MetaDataTable: table[string] of MetaData;
# Expiration handlers.
global expire_host_data: function(data: table[addr] of MetaDataTable, idx: addr): interval;
global expire_subnet_data: function(data: table[subnet] of MetaDataTable, idx: subnet): interval;
global expire_string_data: function(data: table[string, Type] of MetaDataTable, idx: any): interval;
# The in memory data structure for holding intelligence.
type DataStore: record {
host_data: table[addr] of set[MetaData];
string_data: table[string, Type] of set[MetaData];
host_data: table[addr] of MetaDataTable &write_expire=item_expiration &expire_func=expire_host_data;
subnet_data: table[subnet] of MetaDataTable &write_expire=item_expiration &expire_func=expire_subnet_data;
string_data: table[string, Type] of MetaDataTable &write_expire=item_expiration &expire_func=expire_string_data;
};
global data_store: DataStore &redef;
@ -181,6 +208,7 @@ global data_store: DataStore &redef;
# a minimal amount of data for the full match to happen on the manager.
type MinDataStore: record {
host_data: set[addr];
subnet_data: set[subnet];
string_data: set[string, Type];
};
global min_data_store: MinDataStore &redef;
@ -191,33 +219,78 @@ event bro_init() &priority=5
Log::create_stream(LOG, [$columns=Info, $ev=log_intel, $path="intel"]);
}
# Function that abstracts expiration of different types.
function expire_item(indicator: string, indicator_type: Type, metas: set[MetaData]): interval
{
if ( hook item_expired(indicator, indicator_type, metas) )
return item_expiration;
else
remove([$indicator=indicator, $indicator_type=indicator_type, $meta=[$source=""]], T);
return 0 sec;
}
# Expiration handler definitions.
function expire_host_data(data: table[addr] of MetaDataTable, idx: addr): interval
{
local meta_tbl: MetaDataTable = data[idx];
local metas: set[MetaData];
for ( src in meta_tbl )
add metas[meta_tbl[src]];
return expire_item(cat(idx), ADDR, metas);
}
function expire_subnet_data(data: table[subnet] of MetaDataTable, idx: subnet): interval
{
local meta_tbl: MetaDataTable = data[idx];
local metas: set[MetaData];
for ( src in meta_tbl )
add metas[meta_tbl[src]];
return expire_item(cat(idx), SUBNET, metas);
}
function expire_string_data(data: table[string, Type] of MetaDataTable, idx: any): interval
{
local indicator: string;
local indicator_type: Type;
[indicator, indicator_type] = idx;
local meta_tbl: MetaDataTable = data[indicator, indicator_type];
local metas: set[MetaData];
for ( src in meta_tbl )
add metas[meta_tbl[src]];
return expire_item(indicator, indicator_type, metas);
}
# Function to check for intelligence hits.
function find(s: Seen): bool
{
local ds = have_full_data ? data_store : min_data_store;
if ( s?$host )
{
return ((s$host in min_data_store$host_data) ||
(have_full_data && s$host in data_store$host_data));
}
else if ( ([to_lower(s$indicator), s$indicator_type] in min_data_store$string_data) ||
(have_full_data && [to_lower(s$indicator), s$indicator_type] in data_store$string_data) )
{
return T;
return ((s$host in ds$host_data) ||
(|matching_subnets(addr_to_subnet(s$host), ds$subnet_data)| > 0));
}
else
{
return F;
return ([to_lower(s$indicator), s$indicator_type] in ds$string_data);
}
}
# Function to retrieve intelligence items while abstracting from different
# data stores for different indicator types.
function get_items(s: Seen): set[Item]
{
local return_data: set[Item];
local mt: MetaDataTable;
if ( ! have_full_data )
{
# A reporter warning should be generated here because this function
# should never be called from a host that doesn't have the full data.
# TODO: do a reporter warning.
Reporter::warning(fmt("Intel::get_items was called from a host (%s) that doesn't have the full data.",
peer_description));
return return_data;
}
@ -226,11 +299,23 @@ function get_items(s: Seen): set[Item]
# See if the host is known about and it has meta values
if ( s$host in data_store$host_data )
{
for ( m in data_store$host_data[s$host] )
mt = data_store$host_data[s$host];
for ( m in mt )
{
add return_data[Item($indicator=cat(s$host), $indicator_type=ADDR, $meta=m)];
add return_data[Item($indicator=cat(s$host), $indicator_type=ADDR, $meta=mt[m])];
}
}
# See if the host is part of a known subnet, which has meta values
local nets: table[subnet] of MetaDataTable;
nets = filter_subnet_table(addr_to_subnet(s$host), data_store$subnet_data);
for ( n in nets )
{
mt = nets[n];
for ( m in mt )
{
add return_data[Item($indicator=cat(n), $indicator_type=SUBNET, $meta=mt[m])];
}
}
}
else
{
@ -238,9 +323,10 @@ function get_items(s: Seen): set[Item]
# See if the string is known about and it has meta values
if ( [lower_indicator, s$indicator_type] in data_store$string_data )
{
for ( m in data_store$string_data[lower_indicator, s$indicator_type] )
mt = data_store$string_data[lower_indicator, s$indicator_type];
for ( m in mt )
{
add return_data[Item($indicator=s$indicator, $indicator_type=s$indicator_type, $meta=m)];
add return_data[Item($indicator=s$indicator, $indicator_type=s$indicator_type, $meta=mt[m])];
}
}
}
@ -275,64 +361,20 @@ function Intel::seen(s: Seen)
}
}
function has_meta(check: MetaData, metas: set[MetaData]): bool
{
local check_hash = md5_hash(check);
for ( m in metas )
{
if ( check_hash == md5_hash(m) )
return T;
}
# The records must not be equivalent if we made it this far.
return F;
}
event Intel::match(s: Seen, items: set[Item]) &priority=5
{
local info = Info($ts=network_time(), $seen=s);
local info = Info($ts=network_time(), $seen=s, $matched=TypeSet());
if ( s?$f )
{
s$fuid = s$f$id;
if ( s$f?$conns && |s$f$conns| == 1 )
{
for ( cid in s$f$conns )
s$conn = s$f$conns[cid];
}
if ( ! info?$file_mime_type && s$f?$info && s$f$info?$mime_type )
info$file_mime_type = s$f$info$mime_type;
if ( ! info?$file_desc )
info$file_desc = Files::describe(s$f);
}
if ( s?$fuid )
info$fuid = s$fuid;
if ( s?$conn )
{
s$uid = s$conn$uid;
info$id = s$conn$id;
}
if ( s?$uid )
info$uid = s$uid;
for ( item in items )
add info$sources[item$meta$source];
Log::write(Intel::LOG, info);
if ( hook extend_match(info, s, items) )
Log::write(Intel::LOG, info);
}
function insert(item: Item)
{
# Create and fill out the meta data item.
# Create and fill out the metadata item.
local meta = item$meta;
local metas: set[MetaData];
local meta_tbl: table [string] of MetaData;
local is_new: bool = T;
# All intelligence is case insensitive at the moment.
local lower_indicator = to_lower(item$indicator);
@ -343,51 +385,133 @@ function insert(item: Item)
if ( have_full_data )
{
if ( host !in data_store$host_data )
data_store$host_data[host] = set();
data_store$host_data[host] = table();
else
is_new = F;
metas = data_store$host_data[host];
meta_tbl = data_store$host_data[host];
}
add min_data_store$host_data[host];
}
else if ( item$indicator_type == SUBNET )
{
local net = to_subnet(item$indicator);
if ( have_full_data )
{
if ( !check_subnet(net, data_store$subnet_data) )
data_store$subnet_data[net] = table();
else
is_new = F;
meta_tbl = data_store$subnet_data[net];
}
add min_data_store$subnet_data[net];
}
else
{
if ( have_full_data )
{
if ( [lower_indicator, item$indicator_type] !in data_store$string_data )
data_store$string_data[lower_indicator, item$indicator_type] = set();
data_store$string_data[lower_indicator, item$indicator_type] = table();
else
is_new = F;
metas = data_store$string_data[lower_indicator, item$indicator_type];
meta_tbl = data_store$string_data[lower_indicator, item$indicator_type];
}
add min_data_store$string_data[lower_indicator, item$indicator_type];
}
local updated = F;
if ( have_full_data )
{
for ( m in metas )
{
if ( meta$source == m$source )
{
if ( has_meta(meta, metas) )
{
# It's the same item being inserted again.
return;
}
else
{
# Same source, different metadata means updated item.
updated = T;
}
}
}
add metas[item$meta];
# Insert new metadata or update if already present
meta_tbl[meta$source] = meta;
}
if ( updated )
event Intel::updated_item(item);
else
if ( is_new )
# Trigger insert for cluster in case the item is new
# or insert was called on a worker
event Intel::new_item(item);
}
# Function to remove metadata of an item. The function returns T
# if there is no metadata left for the given indicator.
function remove_meta_data(item: Item): bool
{
if ( ! have_full_data )
{
Reporter::warning(fmt("Intel::remove_meta_data was called from a host (%s) that doesn't have the full data.",
peer_description));
return F;
}
switch ( item$indicator_type )
{
case ADDR:
local host = to_addr(item$indicator);
delete data_store$host_data[host][item$meta$source];
return (|data_store$host_data[host]| == 0);
case SUBNET:
local net = to_subnet(item$indicator);
delete data_store$subnet_data[net][item$meta$source];
return (|data_store$subnet_data[net]| == 0);
default:
delete data_store$string_data[item$indicator, item$indicator_type][item$meta$source];
return (|data_store$string_data[item$indicator, item$indicator_type]| == 0);
}
}
function remove(item: Item, purge_indicator: bool)
{
# Delegate removal if we are on a worker
if ( !have_full_data )
{
event Intel::remove_item(item, purge_indicator);
return;
}
# Remove metadata from manager's data store
local no_meta_data = remove_meta_data(item);
# Remove whole indicator if necessary
if ( no_meta_data || purge_indicator )
{
switch ( item$indicator_type )
{
case ADDR:
local host = to_addr(item$indicator);
delete data_store$host_data[host];
break;
case SUBNET:
local net = to_subnet(item$indicator);
delete data_store$subnet_data[net];
break;
default:
delete data_store$string_data[item$indicator, item$indicator_type];
break;
}
# Trigger deletion in minimal data stores
event Intel::purge_item(item);
}
}
# Handling of indicator removal in minimal data stores.
event purge_item(item: Item)
{
switch ( item$indicator_type )
{
case ADDR:
local host = to_addr(item$indicator);
delete min_data_store$host_data[host];
break;
case SUBNET:
local net = to_subnet(item$indicator);
delete min_data_store$subnet_data[net];
break;
default:
delete min_data_store$string_data[item$indicator, item$indicator_type];
break;
}
}
View file
@ -0,0 +1,3 @@
The NetControl framework provides a way for Bro to interact with networking
hard- and software, e.g. for dropping and shunting IP addresses/connections,
etc.
View file
@ -2,103 +2,522 @@
module NetControl;
@load base/frameworks/cluster
@load ./main
@load ./drop
export {
redef enum Log::ID += { CATCH_RELEASE };
## This record is used for storing information about current blocks that are
## part of catch and release.
type BlockInfo: record {
## Absolute time indicating until when a block is inserted using NetControl
block_until: time &optional;
## Absolute time indicating until when an IP address is watched to reblock it
watch_until: time;
## Number of times an IP address was reblocked
num_reblocked: count &default=0;
## Number indicating at which catch and release interval we currently are
current_interval: count;
## ID of the inserted block, if any.
current_block_id: string;
## User specified string
location: string &optional;
};
## The enum that contains the different kinds of messages that are logged by
## catch and release
type CatchReleaseActions: enum {
## Log lines marked with info are purely informational; no action was taken
INFO,
## A rule for the specified IP address already existed in NetControl (outside
## of catch-and-release). Catch and release did not add a new rule, but is now
## watching the IP address and will add a new rule after the current rule expires.
ADDED,
## A drop was requested by catch and release
DROP,
## An address was successfully blocked by catch and release
DROPPED,
## An address was unblocked after the timeout expired
UNBLOCK,
## An address was forgotten because it did not reappear within the `watch_until` interval
FORGOTTEN,
## A watched IP address was seen again; catch and release will re-block it.
SEEN_AGAIN
};
## The record type that is used for representing and logging catch and release actions.
type CatchReleaseInfo: record {
## The absolute time indicating when the action for this log line occurred.
ts: time &log;
## The rule id that this log line refers to.
rule_id: string &log &optional;
## The IP address that this line refers to.
ip: addr &log;
## The action that was taken in this log line.
action: CatchReleaseActions &log;
## The current block_interval (for how long the address is blocked).
block_interval: interval &log &optional;
## The current watch_interval (for how long the address will be watched and re-blocked if it reappears).
watch_interval: interval &log &optional;
## The absolute time until which the address is blocked.
blocked_until: time &log &optional;
## The absolute time until which the address will be monitored.
watched_until: time &log &optional;
## Number of times that this address was blocked in the current cycle.
num_blocked: count &log &optional;
## The user specified location string.
location: string &log &optional;
## Additional informational string from the catch and release framework about this log line.
message: string &log &optional;
};
## Stops all packets involving an IP address from being forwarded. This function
## uses catch-and-release functionality, where the IP address is only dropped for
## a short amount of time that is incremented steadily when the IP is encountered
## again.
##
## In cluster mode, this function works on workers as well as the manager. On workers,
## the returned :bro:see:`NetControl::BlockInfo` record will not contain the block ID,
## which will be assigned on the manager.
##
## a: The address to be dropped.
##
## t: How long to drop it, with 0 being indefinitly.
##
## location: An optional string describing where the drop was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
global drop_address_catch_release: function(a: addr, location: string &default="") : string;
## Returns: The :bro:see:`NetControl::BlockInfo` record containing information about
## the inserted block.
global drop_address_catch_release: function(a: addr, location: string &default="") : BlockInfo;
## Time intervals for which a subsequent drops of the same IP take
## effect.
## Removes an address from being watched with catch and release. Returns true if the
## address was found and removed; returns false if it was unknown to catch and release.
##
## If the address is currently blocked, and the block was inserted by catch and release,
## the block is removed.
##
## a: The address to be unblocked.
##
## reason: A reason for the unblock
##
## Returns: True if the address was unblocked.
global unblock_address_catch_release: function(a: addr, reason: string &default="") : bool;
## This function can be called to notify the catch and release script that activity by
## an IP address was seen. If the respective IP address is currently monitored by catch and
## release and not blocked, the block will be reinstated. See the documentation of
## watch_new_connection for the events that the catch and release functionality usually
## monitors for activity.
##
## a: The address that was seen and should be re-dropped if it is being watched
global catch_release_seen: function(a: addr);
## Get the :bro:see:`NetControl::BlockInfo` record for an address currently blocked by catch and release.
## If the address is unknown to catch and release, the watch_until time will be set to 0.
##
## In cluster mode, this function works on the manager and workers. On workers, the data will
## lag slightly behind the manager; if you add a block, it will not be instantly available via
## this function.
##
## a: The address to get information about.
##
## Returns: The :bro:see:`NetControl::BlockInfo` record containing information about
## the inserted block.
global get_catch_release_info: function(a: addr) : BlockInfo;
## Event is raised when catch and release ceases management of an IP address because no
## activity was seen within the watch_until period.
##
## a: The address that is no longer being managed.
##
## bi: The :bro:see:`NetControl::BlockInfo` record containing information about the block.
global catch_release_forgotten: event(a: addr, bi: BlockInfo);
## If true, catch_release_seen is called on the connection originator in new_connection,
## connection_established, partial_connection, connection_attempt, connection_rejected,
## connection_reset and connection_pending
const watch_connections = T &redef;
## If true, catch and release warns if packets of an IP address are still seen after it
## should have been blocked.
const catch_release_warn_blocked_ip_encountered = F &redef;
## Time intervals for which subsequent drops of the same IP take
## effect.
const catch_release_intervals: vector of interval = vector(10min, 1hr, 24hrs, 7days) &redef;
## Event that can be handled to access the :bro:type:`NetControl::CatchReleaseInfo`
## record as it is sent on to the logging framework.
global log_netcontrol_catch_release: event(rec: CatchReleaseInfo);
# Cluster events for catch and release
global catch_release_block_new: event(a: addr, b: BlockInfo);
global catch_release_block_delete: event(a: addr);
global catch_release_add: event(a: addr, location: string);
global catch_release_delete: event(a: addr, reason: string);
global catch_release_encountered: event(a: addr);
}
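# Usage sketch (address and location string are illustrative):
#
#   local bi = NetControl::drop_address_catch_release(192.168.17.42, "seen scanning");
#   print bi$watch_until, bi$num_reblocked;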
function per_block_interval(t: table[addr] of count, idx: addr): interval
# set that is used to only send seen notifications to the master every ~30 seconds.
global catch_release_recently_notified: set[addr] &create_expire=30secs;
event bro_init() &priority=5
{
local ct = t[idx];
# watch for the time of the next block...
local blocktime = catch_release_intervals[ct];
if ( (ct+1) in catch_release_intervals )
blocktime = catch_release_intervals[ct+1];
return blocktime;
Log::create_stream(NetControl::CATCH_RELEASE, [$columns=CatchReleaseInfo, $ev=log_netcontrol_catch_release, $path="netcontrol_catch_release"]);
}
# This is the internally maintained table containing all the currently going on catch-and-release
# blocks.
global blocks: table[addr] of count = {}
function get_watch_interval(current_interval: count): interval
{
if ( (current_interval + 1) in catch_release_intervals )
return catch_release_intervals[current_interval+1];
else
return catch_release_intervals[current_interval];
}
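# E.g., with the default catch_release_intervals, get_watch_interval(0) yields
# 1hr (the next interval), while get_watch_interval(3) stays at 7days since no
# further interval is configured.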
function populate_log_record(ip: addr, bi: BlockInfo, action: CatchReleaseActions): CatchReleaseInfo
{
local log = CatchReleaseInfo($ts=network_time(), $ip=ip, $action=action,
$block_interval=catch_release_intervals[bi$current_interval],
$watch_interval=get_watch_interval(bi$current_interval),
$watched_until=bi$watch_until,
$num_blocked=bi$num_reblocked+1
);
if ( bi?$block_until )
log$blocked_until = bi$block_until;
if ( bi?$current_block_id && bi$current_block_id != "" )
log$rule_id = bi$current_block_id;
if ( bi?$location )
log$location = bi$location;
return log;
}
function per_block_interval(t: table[addr] of BlockInfo, idx: addr): interval
{
local remaining_time = t[idx]$watch_until - network_time();
if ( remaining_time < 0secs )
remaining_time = 0secs;
@if ( ! Cluster::is_enabled() || ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) )
if ( remaining_time == 0secs )
{
local log = populate_log_record(idx, t[idx], FORGOTTEN);
Log::write(CATCH_RELEASE, log);
event NetControl::catch_release_forgotten(idx, t[idx]);
}
@endif
return remaining_time;
}
# This is the internally maintained table containing all the addresses that are currently being
# watched to see if they will re-surface. After the time is reached, monitoring of that specific
# IP will stop.
global blocks: table[addr] of BlockInfo = {}
&create_expire=0secs
&expire_func=per_block_interval;
function current_block_interval(s: set[addr], idx: addr): interval
    {
    if ( idx !in blocks )
        {
        Reporter::error(fmt("Address %s not in blocks while inserting into current_blocks!", idx));
        return 0sec;
        }
    return catch_release_intervals[blocks[idx]];
    }

global current_blocks: set[addr] = set()
    &create_expire=0secs
    &expire_func=current_block_interval;

@if ( Cluster::is_enabled() )
@load base/frameworks/cluster
redef Cluster::manager2worker_events += /NetControl::catch_release_block_(new|delete)/;
redef Cluster::worker2manager_events += /NetControl::catch_release_(add|delete|encountered)/;
@endif

function cr_check_rule(r: Rule): bool
    {
    if ( r$ty == DROP && r$entity$ty == ADDRESS )
        {
        local ip = r$entity$ip;
        if ( ( is_v4_subnet(ip) && subnet_width(ip) == 32 ) || ( is_v6_subnet(ip) && subnet_width(ip) == 128 ) )
            {
            if ( subnet_to_addr(ip) in blocks )
                return T;
            }
        }
    return F;
    }
function drop_address_catch_release(a: addr, location: string &default=""): string

@if ( ! Cluster::is_enabled() || ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) )
event rule_added(r: Rule, p: PluginState, msg: string &default="")
{
if ( !cr_check_rule(r) )
return;
local ip = subnet_to_addr(r$entity$ip);
local bi = blocks[ip];
local log = populate_log_record(ip, bi, DROPPED);
if ( msg != "" )
log$message = msg;
Log::write(CATCH_RELEASE, log);
}
event rule_timeout(r: Rule, i: FlowInfo, p: PluginState)
{
if ( !cr_check_rule(r) )
return;
local ip = subnet_to_addr(r$entity$ip);
local bi = blocks[ip];
local log = populate_log_record(ip, bi, UNBLOCK);
if ( bi?$block_until )
{
local difference: interval = network_time() - bi$block_until;
if ( interval_to_double(difference) > 60 || interval_to_double(difference) < -60 )
log$message = fmt("Difference between network_time and block time excessive: %f", difference);
}
Log::write(CATCH_RELEASE, log);
}
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER )
event catch_release_add(a: addr, location: string)
{
drop_address_catch_release(a, location);
}
event catch_release_delete(a: addr, reason: string)
{
unblock_address_catch_release(a, reason);
}
event catch_release_encountered(a: addr)
{
catch_release_seen(a);
}
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
event catch_release_block_new(a: addr, b: BlockInfo)
{
blocks[a] = b;
}
event catch_release_block_delete(a: addr)
{
if ( a in blocks )
delete blocks[a];
}
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER )
@endif
function get_catch_release_info(a: addr): BlockInfo
{
if ( a in blocks )
return blocks[a];
return BlockInfo($watch_until=double_to_time(0), $current_interval=0, $current_block_id="");
}
function drop_address_catch_release(a: addr, location: string &default=""): BlockInfo
{
local bi: BlockInfo;
local log: CatchReleaseInfo;
if ( a in blocks )
{
Reporter::warning(fmt("Address %s already blocked using catch-and-release - ignoring duplicate", a));
return "";
log = populate_log_record(a, blocks[a], INFO);
log$message = "Already blocked using catch-and-release - ignoring duplicate";
Log::write(CATCH_RELEASE, log);
return blocks[a];
}
local e = Entity($ty=ADDRESS, $ip=addr_to_subnet(a));
if ( [e,DROP] in rule_entities )
{
local r = rule_entities[e,DROP];
bi = BlockInfo($watch_until=network_time()+catch_release_intervals[1], $current_interval=0, $current_block_id=r$id);
if ( location != "" )
bi$location = location;
@if ( ! Cluster::is_enabled() || ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) )
log = populate_log_record(a, bi, ADDED);
log$message = "Address already blocked outside of catch-and-release. Catch and release will monitor and only actively block if it appears in network traffic.";
Log::write(CATCH_RELEASE, log);
blocks[a] = bi;
event NetControl::catch_release_block_new(a, bi);
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
event NetControl::catch_release_add(a, location);
@endif
return bi;
}
local block_interval = catch_release_intervals[0];
@if ( ! Cluster::is_enabled() || ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) )
local ret = drop_address(a, block_interval, location);
if ( ret != "" )
{
blocks[a] = 0;
add current_blocks[a];
bi = BlockInfo($watch_until=network_time()+catch_release_intervals[1], $block_until=network_time()+block_interval, $current_interval=0, $current_block_id=ret);
if ( location != "" )
bi$location = location;
blocks[a] = bi;
event NetControl::catch_release_block_new(a, bi);
log = populate_log_record(a, bi, DROP);
Log::write(CATCH_RELEASE, log);
return bi;
}
Reporter::error(fmt("Catch and release could not add block for %s; failing.", a));
return BlockInfo($watch_until=double_to_time(0), $current_interval=0, $current_block_id="");
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
bi = BlockInfo($watch_until=network_time()+catch_release_intervals[1], $block_until=network_time()+block_interval, $current_interval=0, $current_block_id="");
event NetControl::catch_release_add(a, location);
return bi;
@endif
return ret;
}
function check_conn(a: addr)
function unblock_address_catch_release(a: addr, reason: string &default=""): bool
{
if ( a !in blocks )
return F;
@if ( ! Cluster::is_enabled() || ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) )
local bi = blocks[a];
local log = populate_log_record(a, bi, UNBLOCK);
if ( reason != "" )
log$message = reason;
Log::write(CATCH_RELEASE, log);
delete blocks[a];
if ( bi?$block_until && bi$block_until > network_time() && bi$current_block_id != "" )
remove_rule(bi$current_block_id, reason);
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER )
event NetControl::catch_release_block_delete(a);
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
event NetControl::catch_release_delete(a, reason);
@endif
return T;
}
function catch_release_seen(a: addr)
{
local e = Entity($ty=ADDRESS, $ip=addr_to_subnet(a));
if ( a in blocks )
{
if ( a in current_blocks )
# block has not been applied yet?
@if ( ! Cluster::is_enabled() || ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER ) )
local bi = blocks[a];
local log: CatchReleaseInfo;
if ( [e,DROP] in rule_entities )
{
if ( catch_release_warn_blocked_ip_encountered == F )
return;
# This should be blocked - block has not been applied yet by hardware? Ignore for the moment...
log = populate_log_record(a, bi, INFO);
log$action = INFO;
log$message = "Block seen while in rule_entities. No action taken.";
Log::write(CATCH_RELEASE, log);
return;
}
# ok, this one returned again while still in the backoff period.
local try = blocks[a];
local try = bi$current_interval;
if ( (try+1) in catch_release_intervals )
++try;
blocks[a] = try;
add current_blocks[a];
bi$current_interval = try;
if ( (try+1) in catch_release_intervals )
bi$watch_until = network_time() + catch_release_intervals[try+1];
else
bi$watch_until = network_time() + catch_release_intervals[try];
bi$block_until = network_time() + catch_release_intervals[try];
++bi$num_reblocked;
local block_interval = catch_release_intervals[try];
drop_address(a, block_interval, "Re-drop by catch-and-release");
local location = "";
if ( bi?$location )
location = bi$location;
local drop = drop_address(a, block_interval, fmt("Re-drop by catch-and-release: %s", location));
bi$current_block_id = drop;
blocks[a] = bi;
log = populate_log_record(a, bi, SEEN_AGAIN);
Log::write(CATCH_RELEASE, log);
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER )
event NetControl::catch_release_block_new(a, bi);
@endif
@if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
if ( a in catch_release_recently_notified )
return;
event NetControl::catch_release_encountered(a);
add catch_release_recently_notified[a];
@endif
return;
}
return;
}
event new_connection(c: connection)
{
# let's only check originating connections...
check_conn(c$id$orig_h);
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}
event connection_established(c: connection)
{
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}
event partial_connection(c: connection)
{
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}
event connection_attempt(c: connection)
{
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}
event connection_rejected(c: connection)
{
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}
event connection_reset(c: connection)
{
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}
event connection_pending(c: connection)
{
if ( watch_connections )
catch_release_seen(c$id$orig_h);
}

View file

@ -10,14 +10,16 @@ export {
global cluster_netcontrol_add_rule: event(r: Rule);
## This is the event used to transport remove_rule calls to the manager.
global cluster_netcontrol_remove_rule: event(id: string);
global cluster_netcontrol_remove_rule: event(id: string, reason: string);
## This is the event used to transport delete_rule calls to the manager.
global cluster_netcontrol_delete_rule: event(id: string, reason: string);
}
## Workers need the ability to forward commands to the manager.
redef Cluster::worker2manager_events += /NetControl::cluster_netcontrol_(add|remove)_rule/;
redef Cluster::worker2manager_events += /NetControl::cluster_netcontrol_(add|remove|delete)_rule/;
## Workers need to see the result events from the manager.
redef Cluster::manager2worker_events += /NetControl::rule_(added|removed|timeout|error)/;
redef Cluster::manager2worker_events += /NetControl::rule_(added|removed|timeout|error|exists|new|destroyed)/;
function activate(p: PluginState, priority: int)
{
@ -36,6 +38,16 @@ function add_rule(r: Rule) : string
return add_rule_impl(r);
else
{
# We sync rule entities across the cluster, so we
# can actually test whether the rule already exists. If yes,
# refuse the insertion already at the node.
if ( [r$entity, r$ty] in rule_entities )
{
log_rule_no_plugin(r, FAILED, "discarded duplicate insertion");
return "";
}
if ( r$id == "" )
r$id = cat(Cluster::node, ":", ++local_rule_count);
@ -44,38 +56,60 @@ function add_rule(r: Rule) : string
}
}
function remove_rule(id: string) : bool
function delete_rule(id: string, reason: string &default="") : bool
{
if ( Cluster::local_node_type() == Cluster::MANAGER )
return remove_rule_impl(id);
return delete_rule_impl(id, reason);
else
{
event NetControl::cluster_netcontrol_remove_rule(id);
event NetControl::cluster_netcontrol_delete_rule(id, reason);
return T; # well, we can't know here. So - just hope...
}
}
function remove_rule(id: string, reason: string &default="") : bool
{
if ( Cluster::local_node_type() == Cluster::MANAGER )
return remove_rule_impl(id, reason);
else
{
event NetControl::cluster_netcontrol_remove_rule(id, reason);
return T; # well, we can't know here. So - just hope...
}
}
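# A usage sketch (the rule id variable is hypothetical): any node may request
# a removal with a reason; on workers the call is transparently relayed to the
# manager through cluster_netcontrol_remove_rule.
#
#     NetControl::remove_rule(rid, "cleared by operator");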
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event NetControl::cluster_netcontrol_delete_rule(id: string, reason: string)
{
delete_rule_impl(id, reason);
}
event NetControl::cluster_netcontrol_add_rule(r: Rule)
{
add_rule_impl(r);
}
event NetControl::cluster_netcontrol_remove_rule(id: string)
event NetControl::cluster_netcontrol_remove_rule(id: string, reason: string)
{
remove_rule_impl(id);
remove_rule_impl(id, reason);
}
@endif
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event rule_expire(r: Rule, p: PluginState) &priority=-5
{
rule_expire_impl(r, p);
}
event rule_exists(r: Rule, p: PluginState, msg: string &default="") &priority=5
{
rule_added_impl(r, p, T, msg);
if ( r?$expire && r$expire > 0secs && ! p$plugin$can_expire )
schedule r$expire { rule_expire(r, p) };
}
event rule_added(r: Rule, p: PluginState, msg: string &default="") &priority=5
{
rule_added_impl(r, p, msg);
rule_added_impl(r, p, F, msg);
if ( r?$expire && r$expire > 0secs && ! p$plugin$can_expire )
schedule r$expire { rule_expire(r, p) };
@ -97,3 +131,30 @@ event rule_error(r: Rule, p: PluginState, msg: string &default="") &priority=-5
}
@endif
# Workers use these events to keep track of rules in their local state tables
@if ( Cluster::local_node_type() != Cluster::MANAGER )
event rule_new(r: Rule) &priority=5
{
if ( r$id in rules )
return;
rules[r$id] = r;
rule_entities[r$entity, r$ty] = r;
add_subnet_entry(r);
}
event rule_destroyed(r: Rule) &priority=5
{
if ( r$id !in rules )
return;
remove_subnet_entry(r);
if ( [r$entity, r$ty] in rule_entities )
delete rule_entities[r$entity, r$ty];
delete rules[r$id];
}
@endif

View file

@ -44,6 +44,12 @@ export {
location: string &log &optional;
};
## Hook that allows the modification of rules passed to drop_* before they
## are passed on. If one of the hooks uses break, the rule is ignored.
##
## r: The rule to be added.
global NetControl::drop_rule_policy: hook(r: Rule);
## Event that can be handled to access the :bro:type:`NetControl::ShuntInfo`
## record as it is sent on to the logging framework.
global log_netcontrol_drop: event(rec: DropInfo);
@ -59,6 +65,9 @@ function drop_connection(c: conn_id, t: interval, location: string &default="")
local e: Entity = [$ty=CONNECTION, $conn=c];
local r: Rule = [$ty=DROP, $target=FORWARD, $entity=e, $expire=t, $location=location];
if ( ! hook NetControl::drop_rule_policy(r) )
return "";
local id = add_rule(r);
# Error should already be logged
@ -80,6 +89,9 @@ function drop_address(a: addr, t: interval, location: string &default="") : stri
local e: Entity = [$ty=ADDRESS, $ip=addr_to_subnet(a)];
local r: Rule = [$ty=DROP, $target=FORWARD, $entity=e, $expire=t, $location=location];
if ( ! hook NetControl::drop_rule_policy(r) )
return "";
local id = add_rule(r);
# Error should already be logged
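# A sketch of a drop_rule_policy handler (the protected address is a
# hypothetical example):
hook NetControl::drop_rule_policy(r: Rule)
    {
    # Never allow drops of a critical server.
    if ( r$entity$ty == ADDRESS && r$entity?$ip && r$entity$ip == 192.0.2.53/32 )
        break;
    }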

View file

@ -1,4 +1,4 @@
##! Bro's packet aquisition and control framework.
##! Bro's NetControl framework.
##!
##! This plugin-based framework allows one to control the traffic that Bro monitors
##! as well as, if having access to the forwarding path, the traffic the network
@ -81,9 +81,11 @@ export {
## Returns: The id of the inserted rule on success and zero on failure.
global redirect_flow: function(f: flow_id, out_port: count, t: interval, location: string &default="") : string;
## Quarantines a host by redirecting rewriting DNS queries to the network dns server dns
## to the host. Host has to answer to all queries with its own address. Only http communication
## from infected to quarantinehost is allowed.
## Quarantines a host. This requires a special quarantine server, which runs an HTTP server explaining
## the quarantine and a DNS server which resolves all requests to the quarantine server. DNS queries
## from the host to the network DNS server will be rewritten and will be sent to the quarantine server
## instead. Only HTTP communication from the infected host to the quarantine host is allowed. All other
## network communication is blocked.
##
## infected: the host to quarantine
##
@ -96,7 +98,7 @@ export {
## Returns: Vector of inserted rules on success, empty list on failure.
global quarantine_host: function(infected: addr, dns: addr, quarantine: addr, t: interval, location: string &default="") : vector of string;
## Flushes all state.
## Flushes all state by calling :bro:see:`NetControl::remove_rule` on all currently active rules.
global clear: function();
# ###
@ -120,17 +122,36 @@ export {
## Removes a rule.
##
## id: The rule to remove, specified as the ID returned by :bro:id:`NetControl::add_rule`.
## id: The rule to remove, specified as the ID returned by :bro:see:`NetControl::add_rule`.
##
## reason: Optional string argument giving information on why the rule was removed.
##
## Returns: True if successful, i.e., the relevant plugin indicated that it knew
## how to handle the removal. Note that again "success" means the
## plugin accepted the removal. It might still fail to put it
## into effect, as that might happen asynchronously and thus go
## wrong at that point.
global remove_rule: function(id: string) : bool;
global remove_rule: function(id: string, reason: string &default="") : bool;
## Deletes a rule without removing it from the backends to which it has been
## added before. This means that no messages will be sent to the switches to which
## the rule has been added; if it is not removed from them by a separate mechanism,
## it will stay installed and not be removed later.
##
## id: The rule to delete, specified as the ID returned by :bro:see:`add_rule`.
##
## reason: Optional string argument giving information on why the rule was deleted.
##
## Returns: True if the removal was successful or the request was sent to the manager;
## false if the rule could not be found.
global delete_rule: function(id: string, reason: string &default="") : bool;
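# For example, if an external system is known to have flushed its own state
# already, the rule can be dropped from Bro's bookkeeping alone (sketch;
# "rid" is a hypothetical id returned by an earlier add_rule call):
#
#     NetControl::delete_rule(rid, "backend state flushed externally");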
## Searches all rules affecting a certain IP address.
##
## This function works on both the manager and workers of a cluster. Note that on
## the worker, the internal rule variables (starting with _) will not reflect the
## current state.
##
## ip: The IP address to search for.
##
## Returns: vector of all rules affecting the IP address
@ -138,6 +159,18 @@ export {
## Searches all rules affecting a certain subnet.
##
## A rule affects a subnet if it covers the whole subnet. Note especially that
## this function will not reveal all rules that are covered by a subnet.
##
## For example, a search for 192.168.17.0/24 will reveal a rule that exists for
## 192.168.0.0/16, since this rule affects the subnet. However, it will not reveal
## a more specific rule for 192.168.17.1/32, which does not directly affect the whole
## subnet.
##
## This function works on both the manager and workers of a cluster. Note that on
## the worker, the internal rule variables (starting with _) will not reflect the
## current state.
##
## sn: The subnet to search for
##
## Returns: vector of all rules affecting the subnet
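#
# Sketch (hypothetical subnet): check whether an active rule already covers a
# subnet before inserting a broader block:
#
#     local covering = NetControl::find_rules_subnet(192.168.17.0/24);
#     if ( |covering| > 0 )
#         print "subnet already covered by an existing rule";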
@ -145,7 +178,7 @@ export {
###### Asynchronous feedback on rules.
## Confirms that a rule was put in place.
## Confirms that a rule was put in place by a plugin.
##
## r: The rule now in place.
##
@ -154,7 +187,21 @@ export {
## msg: An optional informational message by the plugin.
global rule_added: event(r: Rule, p: PluginState, msg: string &default="");
## Reports that a rule was removed due to a remove: function() call.
## Signals that a rule that was supposed to be put in place already
## existed at the specified plugin. Rules that already existed
## continue to be tracked like normal, but no timeout calls will be sent
## to the specified plugins. Removal of the rule from the hardware can
## still be forced by manually issuing a remove_rule call.
##
## r: The rule that was already in place.
##
## p: The plugin that reported that the rule already was in place.
##
## msg: An optional informational message by the plugin.
global rule_exists: event(r: Rule, p: PluginState, msg: string &default="");
## Reports that a plugin reported a rule was removed due to a
## remove_rule function call.
##
## r: The rule now removed.
##
@ -164,7 +211,7 @@ export {
## msg: An optional informational message by the plugin.
global rule_removed: event(r: Rule, p: PluginState, msg: string &default="");
## Reports that a rule was removed internally due to a timeout.
## Reports that a rule was removed from a plugin due to a timeout.
##
## r: The rule now removed.
##
@ -185,6 +232,26 @@ export {
## msg: An optional informational message by the plugin.
global rule_error: event(r: Rule, p: PluginState, msg: string &default="");
## This event is raised when a new rule is created by the NetControl framework
## due to a call to add_rule. From this moment, until the rule_destroyed event
## is raised, the rule is tracked internally by the NetControl framework.
##
## Note that this event does not mean that a rule was successfully added by
## any backend; it just means that the rule has been accepted and addition
## to the specified backend is queued. To get information when rules are actually
## installed by the hardware, use the rule_added, rule_exists, rule_removed, rule_timeout
## and rule_error events.
global rule_new: event(r: Rule);
## This event is raised when a rule is deleted from the NetControl framework,
## because it is no longer in use. This can happen because the rule
## was removed by all plugins to which it was added, because it timed out,
## or due to rule errors.
##
## To get the cause of a rule removal, handle the rule_removed, rule_timeout and
## rule_error events.
global rule_destroyed: event(r: Rule);
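# A lifecycle-tracking sketch (hypothetical counter kept in a user script):
#
#     global active_netcontrol_rules = 0;
#
#     event NetControl::rule_new(r: NetControl::Rule)
#         { ++active_netcontrol_rules; }
#
#     event NetControl::rule_destroyed(r: NetControl::Rule)
#         { --active_netcontrol_rules; }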
## Hook that allows the modification of rules passed to add_rule before they
## are passed on to the plugins. If one of the hooks uses break, the rule is
## ignored and not passed on to any plugin.
@ -206,17 +273,18 @@ export {
MESSAGE,
## A log entry reflecting a framework message.
ERROR,
## A log entry about about a rule.
## A log entry about a rule.
RULE
};
## State of an entry in the NetControl log.
## State of an entry in the NetControl log.
type InfoState: enum {
REQUESTED,
SUCCEEDED,
FAILED,
REMOVED,
TIMEOUT,
REQUESTED, ##< The request to add/remove a rule was sent to the respective backend
SUCCEEDED, ##< A rule was successfully added by a backend
EXISTS, ##< A backend reported that a rule was already existing
FAILED, ##< A rule addition failed
REMOVED, ##< A rule was successfully removed by a backend
TIMEOUT, ##< A rule timeout was triggered by the NetControl framework or a backend
};
## The record type defining the column fields of the NetControl log.
@ -259,11 +327,13 @@ export {
}
redef record Rule += {
##< Internally set to the plugins handling the rule.
## Internally set to the plugins handling the rule.
_plugin_ids: set[count] &default=count_set();
##< Internally set to the plugins on which the rule is currently active.
## Internally set to the plugins on which the rule is currently active.
_active_plugin_ids: set[count] &default=count_set();
##< Track if the rule was added succesfully by all responsible plugins.
## Internally set to plugins where the rule should not be removed upon timeout.
_no_expire_plugins: set[count] &default=count_set();
## Track if the rule was added successfully by all responsible plugins.
_added: bool &default=F;
};
@ -535,6 +605,11 @@ function plugin_activated(p: PluginState)
log_error("unknown plugin activated", p);
return;
}
# Suppress duplicate activation
if ( plugin_ids[id]$_activated == T )
return;
plugin_ids[id]$_activated = T;
log_msg("activation finished", p);
@ -727,6 +802,8 @@ function add_rule_impl(rule: Rule) : string
add_subnet_entry(rule);
event NetControl::rule_new(rule);
return rule$id;
}
@ -734,25 +811,62 @@ function add_rule_impl(rule: Rule) : string
return "";
}
function remove_rule_plugin(r: Rule, p: PluginState): bool
function rule_cleanup(r: Rule)
{
if ( |r$_active_plugin_ids| > 0 )
return;
remove_subnet_entry(r);
delete rule_entities[r$entity, r$ty];
delete rules[r$id];
event NetControl::rule_destroyed(r);
}
function delete_rule_impl(id: string, reason: string): bool
{
if ( id !in rules )
{
Reporter::error(fmt("Rule %s does not exist in NetControl::delete_rule", id));
return F;
}
local rule = rules[id];
rule$_active_plugin_ids = set();
rule_cleanup(rule);
if ( reason != "" )
log_rule_no_plugin(rule, REMOVED, fmt("delete_rule: %s", reason));
else
log_rule_no_plugin(rule, REMOVED, "delete_rule");
return T;
}
function remove_rule_plugin(r: Rule, p: PluginState, reason: string &default=""): bool
{
local success = T;
if ( ! p$plugin$remove_rule(p, r) )
if ( ! p$plugin$remove_rule(p, r, reason) )
{
# still continue and send to other plugins
log_rule_error(r, "remove failed", p);
if ( reason != "" )
log_rule_error(r, fmt("remove failed (original reason: %s)", reason), p);
else
log_rule_error(r, "remove failed", p);
success = F;
}
else
{
log_rule(r, "REMOVE", REQUESTED, p);
log_rule(r, "REMOVE", REQUESTED, p, reason);
}
return success;
}
function remove_rule_impl(id: string) : bool
function remove_rule_impl(id: string, reason: string) : bool
{
if ( id !in rules )
{
@ -766,7 +880,7 @@ function remove_rule_impl(id: string) : bool
for ( plugin_id in r$_active_plugin_ids )
{
local p = plugin_ids[plugin_id];
success = remove_rule_plugin(r, p);
success = remove_rule_plugin(r, p, reason);
}
return success;
@ -782,10 +896,21 @@ function rule_expire_impl(r: Rule, p: PluginState) &priority=-5
# Removed already.
return;
event NetControl::rule_timeout(r, FlowInfo(), p); # timeout implementation will handle the removal
local rule = rules[r$id];
if ( p$_id in rule$_no_expire_plugins )
{
# In this case - don't log anything, just remove the plugin from the rule
# and clean up.
delete rule$_active_plugin_ids[p$_id];
delete rule$_no_expire_plugins[p$_id];
rule_cleanup(rule);
}
else
event NetControl::rule_timeout(r, FlowInfo(), p); # timeout implementation will handle the removal
}
function rule_added_impl(r: Rule, p: PluginState, msg: string &default="")
function rule_added_impl(r: Rule, p: PluginState, exists: bool, msg: string &default="")
{
if ( r$id !in rules )
{
@ -801,7 +926,15 @@ function rule_added_impl(r: Rule, p: PluginState, msg: string &default="")
return;
}
log_rule(r, "ADD", SUCCEEDED, p, msg);
# The rule already existed on the backend. Mark this so we don't time it
# out on this backend.
if ( exists )
{
add rule$_no_expire_plugins[p$_id];
log_rule(r, "ADD", EXISTS, p, msg);
}
else
log_rule(r, "ADD", SUCCEEDED, p, msg);
add rule$_active_plugin_ids[p$_id];
if ( |rule$_plugin_ids| == |rule$_active_plugin_ids| )
@ -811,17 +944,6 @@ function rule_added_impl(r: Rule, p: PluginState, msg: string &default="")
}
}
function rule_cleanup(r: Rule)
{
if ( |r$_active_plugin_ids| > 0 )
return;
remove_subnet_entry(r);
delete rule_entities[r$entity, r$ty];
delete rules[r$id];
}
function rule_removed_impl(r: Rule, p: PluginState, msg: string &default="")
{
if ( r$id !in rules )

View file

@ -12,9 +12,14 @@ function add_rule(r: Rule) : string
return add_rule_impl(r);
}
function remove_rule(id: string) : bool
function delete_rule(id: string, reason: string &default="") : bool
{
return remove_rule_impl(id);
return delete_rule_impl(id, reason);
}
function remove_rule(id: string, reason: string &default="") : bool
{
return remove_rule_impl(id, reason);
}
event rule_expire(r: Rule, p: PluginState) &priority=-5
@ -22,9 +27,17 @@ event rule_expire(r: Rule, p: PluginState) &priority=-5
rule_expire_impl(r, p);
}
event rule_exists(r: Rule, p: PluginState, msg: string &default="") &priority=5
{
rule_added_impl(r, p, T, msg);
if ( r?$expire && r$expire > 0secs && ! p$plugin$can_expire )
schedule r$expire { rule_expire(r, p) };
}
event rule_added(r: Rule, p: PluginState, msg: string &default="") &priority=5
{
rule_added_impl(r, p, msg);
rule_added_impl(r, p, F, msg);
if ( r?$expire && r$expire > 0secs && ! p$plugin$can_expire )
schedule r$expire { rule_expire(r, p) };

View file

@ -1,11 +1,13 @@
##! Plugin interface for NetControl backends.
##! This file defines the plugin interface for NetControl.
module NetControl;
@load ./types
export {
## State for a plugin instance.
## This record keeps the per-instance state of a plugin.
##
## Individual plugins commonly extend this record to suit their needs.
type PluginState: record {
## Table for a plugin to store custom, instance-specific state.
config: table[string] of string &default=table();
@ -20,69 +22,63 @@ export {
_activated: bool &default=F;
};
# Definition of a plugin.
#
# Generally a plugin needs to implement only what it can support. By
# returning failure, it indicates that it can't support something and the
# the framework will then try another plugin, if available; or inform the
# that the operation failed. If a function isn't implemented by a plugin,
# that's considered an implicit failure to support the operation.
#
# If plugin accepts a rule operation, it *must* generate one of the reporting
# events ``rule_{added,remove,error}`` to signal if it indeed worked out;
# this is separate from accepting the operation because often a plugin
# will only know later (i.e., asynchrously) if that was an error for
# something it thought it could handle.
## Definition of a plugin.
##
## Generally a plugin needs to implement only what it can support. By
## returning failure, it indicates that it can't support something and
## the framework will then try another plugin, if available; or inform the
## caller that the operation failed. If a function isn't implemented by a plugin,
## that's considered an implicit failure to support the operation.
##
## If a plugin accepts a rule operation, it *must* generate one of the reporting
## events ``rule_{added,removed,error}`` to signal if it indeed worked out;
## this is separate from accepting the operation because often a plugin
## will only know later (i.e., asynchronously) if that was an error for
## something it thought it could handle.
type Plugin: record {
# Returns a descriptive name of the plugin instance, suitable for use in logging
# messages. Note that this function is not optional.
## Returns a descriptive name of the plugin instance, suitable for use in logging
## messages. Note that this function is not optional.
name: function(state: PluginState) : string;
## If true, plugin can expire rules itself. If false,
## If true, plugin can expire rules itself. If false, the NetControl
## framework will manage rule expiration.
can_expire: bool;
# One-time initialization function called when plugin gets registered, and
# before any other methods are called.
#
# If this function is provided, NetControl assumes that the plugin has to
# perform, potentially lengthy, initialization before the plugin will become
# active. In this case, the plugin has to call ``NetControl::plugin_activated``,
# once initialization finishes.
## One-time initialization function called when plugin gets registered, and
## before any other methods are called.
##
## If this function is provided, NetControl assumes that the plugin has to
## perform, potentially lengthy, initialization before the plugin will become
## active. In this case, the plugin has to call ``NetControl::plugin_activated``,
## once initialization finishes.
init: function(state: PluginState) &optional;
# One-time finalization function called when a plugin is shutdown; no further
# functions will be called afterwords.
## One-time finalization function called when a plugin is shut down; no further
## functions will be called afterwards.
done: function(state: PluginState) &optional;
# Implements the add_rule() operation. If the plugin accepts the rule,
# it returns true, false otherwise. The rule will already have its
# ``id`` field set, which the plugin may use for identification
# purposes.
## Implements the add_rule() operation. If the plugin accepts the rule,
## it returns true, false otherwise. The rule will already have its
## ``id`` field set, which the plugin may use for identification
## purposes.
add_rule: function(state: PluginState, r: Rule) : bool &optional;
# Implements the remove_rule() operation. This will only be called for
# rules that the plugins has previously accepted with add_rule(). The
# ``id`` field will match that of the add_rule() call. Generally,
# a plugin that accepts an add_rule() should also accept the
# remove_rule().
remove_rule: function(state: PluginState, r: Rule) : bool &optional;
# A transaction groups a number of operations. The plugin can add them internally
# and postpone putting them into effect until committed. This allows to build a
# configuration of multiple rules at once, including replaying a previous state.
transaction_begin: function(state: PluginState) &optional;
transaction_end: function(state: PluginState) &optional;
## Implements the remove_rule() operation. This will only be called for
## rules that the plugin has previously accepted with add_rule(). The
## ``id`` field will match that of the add_rule() call. Generally,
## a plugin that accepts an add_rule() should also accept the
## remove_rule().
remove_rule: function(state: PluginState, r: Rule, reason: string) : bool &optional;
};
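# A minimal plugin sketch (hypothetical "null" backend that accepts every rule
# and immediately confirms it, mirroring what the debug plugin does):
#
#     function null_name(p: PluginState) : string
#         { return "null"; }
#
#     function null_add_rule(p: PluginState, r: Rule) : bool
#         {
#         event NetControl::rule_added(r, p);
#         return T;
#         }
#
#     global null_plugin = Plugin($name=null_name, $can_expire=F, $add_rule=null_add_rule);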
# Table for a plugin to store instance-specific configuration information.
#
# Note, it would be nicer to pass the Plugin instance to all the below, instead
# of this state table. However Bro's type resolver has trouble with refering to a
# record type from inside itself.
## Table for a plugin to store instance-specific configuration information.
##
## Note, it would be nicer to pass the Plugin instance to all the below, instead
## of this state table. However, Bro's type resolver has trouble with referring to a
## record type from inside itself.
redef record PluginState += {
## The plugin that the state belongs to. (Defined separately
## because of cyclic type dependency.)
plugin: Plugin &optional;
};

View file

@ -0,0 +1 @@
Plugins for the NetControl framework.

View file

@ -66,6 +66,7 @@ export {
## Events that are sent from Broker to us
global acld_rule_added: event(id: count, r: Rule, msg: string);
global acld_rule_removed: event(id: count, r: Rule, msg: string);
global acld_rule_exists: event(id: count, r: Rule, msg: string);
global acld_rule_error: event(id: count, r: Rule, msg: string);
}
@ -76,7 +77,7 @@ global netcontrol_acld_current_id: count = 0;
const acld_add_to_remove: table[string] of string = {
["drop"] = "restore",
["whitelist"] = "remwhitelist",
["addwhitelist"] = "remwhitelist",
["blockhosthost"] = "restorehosthost",
["droptcpport"] = "restoretcpport",
["dropudpport"] = "restoreudpport",
@ -100,6 +101,19 @@ event NetControl::acld_rule_added(id: count, r: Rule, msg: string)
event NetControl::rule_added(r, p, msg);
}
event NetControl::acld_rule_exists(id: count, r: Rule, msg: string)
{
if ( id !in netcontrol_acld_id )
{
Reporter::error(fmt("NetControl acld plugin with id %d not found, aborting", id));
return;
}
local p = netcontrol_acld_id[id];
event NetControl::rule_exists(r, p, msg);
}
event NetControl::acld_rule_removed(id: count, r: Rule, msg: string)
{
if ( id !in netcontrol_acld_id )
@ -155,7 +169,7 @@ function rule_to_acl_rule(p: PluginState, r: Rule) : AclRule
if ( r$ty == DROP )
command = "drop";
else if ( r$ty == WHITELIST )
command = "whitelist";
command = "addwhitelist";
arg = cat(e$ip);
}
else if ( e$ty == FLOW )
@ -233,7 +247,7 @@ function acld_add_rule_fun(p: PluginState, r: Rule) : bool
return T;
}
function acld_remove_rule_fun(p: PluginState, r: Rule) : bool
function acld_remove_rule_fun(p: PluginState, r: Rule, reason: string) : bool
{
if ( ! acld_check_rule(p, r) )
return F;
@ -244,6 +258,14 @@ function acld_remove_rule_fun(p: PluginState, r: Rule) : bool
else
return F;
if ( reason != "" )
{
if ( ar?$comment )
ar$comment = fmt("%s (%s)", reason, ar$comment);
else
ar$comment = reason;
}
Broker::send_event(p$acld_config$acld_topic, Broker::event_args(acld_remove_rule, p$acld_id, r, ar));
return T;
}

View file

@ -11,25 +11,46 @@ module NetControl;
@ifdef ( Broker::__enable )
export {
## This record specifies the configuration that is passed to :bro:see:`NetControl::create_broker`.
type BrokerConfig: record {
## The broker topic used to send events to
topic: string &optional;
## Broker host to connect to
host: addr &optional;
## Broker port to connect to
bport: port &optional;
## Do we accept rules for the monitor path? Default true
monitor: bool &default=T;
## Do we accept rules for the forward path? Default true
forward: bool &default=T;
## Predicate that is called on rule insertion or removal.
##
## p: Current plugin state
##
## r: The rule to be inserted or removed
##
## Returns: T if the rule can be handled by the current backend, F otherwise.
check_pred: function(p: PluginState, r: Rule): bool &optional;
};
## Instantiates the broker plugin.
global create_broker: function(host: addr, host_port: port, topic: string, can_expire: bool &default=F) : PluginState;
global create_broker: function(config: BrokerConfig, can_expire: bool) : PluginState;
redef record PluginState += {
## The broker topic used to send events to
broker_topic: string &optional;
## Configuration record for the NetControl Broker plugin
broker_config: BrokerConfig &optional;
## The ID of this broker instance - for the mapping to PluginStates
broker_id: count &optional;
## Broker host to connect to
broker_host: addr &optional;
## Broker port to connect to
broker_port: port &optional;
};
global broker_add_rule: event(id: count, r: Rule);
global broker_remove_rule: event(id: count, r: Rule);
global broker_remove_rule: event(id: count, r: Rule, reason: string);
global broker_rule_added: event(id: count, r: Rule, msg: string);
global broker_rule_removed: event(id: count, r: Rule, msg: string);
global broker_rule_exists: event(id: count, r: Rule, msg: string);
global broker_rule_error: event(id: count, r: Rule, msg: string);
global broker_rule_timeout: event(id: count, r: Rule, i: FlowInfo);
}
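# A hedged instantiation sketch (host, port and topic are hypothetical):
event NetControl::init()
    {
    local pacf = create_broker(BrokerConfig($topic="bro/event/netcontrol", $host=127.0.0.1, $bport=9999/tcp), F);
    activate(pacf, 0);
    }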
@ -52,6 +73,19 @@ event NetControl::broker_rule_added(id: count, r: Rule, msg: string)
event NetControl::rule_added(r, p, msg);
}
event NetControl::broker_rule_exists(id: count, r: Rule, msg: string)
{
if ( id !in netcontrol_broker_id )
{
Reporter::error(fmt("NetControl broker plugin with id %d not found, aborting", id));
return;
}
local p = netcontrol_broker_id[id];
event NetControl::rule_exists(r, p, msg);
}
event NetControl::broker_rule_removed(id: count, r: Rule, msg: string)
{
if ( id !in netcontrol_broker_id )
@ -93,26 +127,48 @@ event NetControl::broker_rule_timeout(id: count, r: Rule, i: FlowInfo)
function broker_name(p: PluginState) : string
{
return fmt("Broker-%s", p$broker_topic);
return fmt("Broker-%s", p$broker_config$topic);
}
function broker_check_rule(p: PluginState, r: Rule) : bool
{
local c = p$broker_config;
if ( p$broker_config?$check_pred )
return p$broker_config$check_pred(p, r);
if ( r$target == MONITOR && c$monitor )
return T;
if ( r$target == FORWARD && c$forward )
return T;
return F;
}
function broker_add_rule_fun(p: PluginState, r: Rule) : bool
{
Broker::send_event(p$broker_topic, Broker::event_args(broker_add_rule, p$broker_id, r));
if ( ! broker_check_rule(p, r) )
return F;
Broker::send_event(p$broker_config$topic, Broker::event_args(broker_add_rule, p$broker_id, r));
return T;
}
function broker_remove_rule_fun(p: PluginState, r: Rule) : bool
function broker_remove_rule_fun(p: PluginState, r: Rule, reason: string) : bool
{
Broker::send_event(p$broker_topic, Broker::event_args(broker_remove_rule, p$broker_id, r));
if ( ! broker_check_rule(p, r) )
return F;
Broker::send_event(p$broker_config$topic, Broker::event_args(broker_remove_rule, p$broker_id, r, reason));
return T;
}
function broker_init(p: PluginState)
{
Broker::enable();
Broker::connect(cat(p$broker_host), p$broker_port, 1sec);
Broker::subscribe_to_events(p$broker_topic);
Broker::connect(cat(p$broker_config$host), p$broker_config$bport, 1sec);
Broker::subscribe_to_events(p$broker_config$topic);
}
event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
@ -140,23 +196,23 @@ global broker_plugin_can_expire = Plugin(
$init = broker_init
);
function create_broker(host: addr, host_port: port, topic: string, can_expire: bool &default=F) : PluginState
function create_broker(config: BrokerConfig, can_expire: bool) : PluginState
{
if ( topic in netcontrol_broker_topics )
Reporter::warning(fmt("Topic %s was added to NetControl broker plugin twice. Possible duplication of commands", topic));
if ( config$topic in netcontrol_broker_topics )
Reporter::warning(fmt("Topic %s was added to NetControl broker plugin twice. Possible duplication of commands", config$topic));
else
add netcontrol_broker_topics[topic];
add netcontrol_broker_topics[config$topic];
local plugin = broker_plugin;
if ( can_expire )
plugin = broker_plugin_can_expire;
local p: PluginState = [$broker_host=host, $broker_port=host_port, $plugin=plugin, $broker_topic=topic, $broker_id=netcontrol_broker_current_id];
local p = PluginState($plugin=plugin, $broker_id=netcontrol_broker_current_id, $broker_config=config);
if ( [host_port, cat(host)] in netcontrol_broker_peers )
Reporter::warning(fmt("Peer %s:%s was added to NetControl broker plugin twice.", host, host_port));
if ( [config$bport, cat(config$host)] in netcontrol_broker_peers )
Reporter::warning(fmt("Peer %s:%s was added to NetControl broker plugin twice.", config$host, config$bport));
else
netcontrol_broker_peers[host_port, cat(host)] = p;
netcontrol_broker_peers[config$bport, cat(config$host)] = p;
netcontrol_broker_id[netcontrol_broker_current_id] = p;
++netcontrol_broker_current_id;

View file

@ -55,34 +55,22 @@ function debug_add_rule(p: PluginState, r: Rule) : bool
return F;
}
function debug_remove_rule(p: PluginState, r: Rule) : bool
function debug_remove_rule(p: PluginState, r: Rule, reason: string) : bool
{
local s = fmt("remove_rule: %s", r);
local s = fmt("remove_rule (%s): %s", reason, r);
debug_log(p, s);
event NetControl::rule_removed(r, p);
return T;
}
function debug_transaction_begin(p: PluginState)
{
debug_log(p, "transaction_begin");
}
function debug_transaction_end(p: PluginState)
{
debug_log(p, "transaction_end");
}
global debug_plugin = Plugin(
$name=debug_name,
$can_expire = F,
$init = debug_init,
$done = debug_done,
$add_rule = debug_add_rule,
$remove_rule = debug_remove_rule,
$transaction_begin = debug_transaction_begin,
$transaction_end = debug_transaction_end
$remove_rule = debug_remove_rule
);
function create_debug(do_something: bool) : PluginState

View file

@ -7,22 +7,46 @@
module NetControl;
export {
## This record specifies the configuration that is passed to :bro:see:`NetControl::create_openflow`.
type OfConfig: record {
monitor: bool &default=T;
forward: bool &default=T;
idle_timeout: count &default=0;
table_id: count &optional;
monitor: bool &default=T; ##< accept rules that target the monitor path
forward: bool &default=T; ##< accept rules that target the forward path
idle_timeout: count &default=0; ##< default OpenFlow idle timeout
table_id: count &optional; ##< default OpenFlow table ID.
priority_offset: int &default=+0; ##< Add this to all rule priorities. Can be useful if you want the OpenFlow priorities to be offset from the NetControl priorities without having to write a filter function.
## Predicate that is called on rule insertion or removal.
##
## p: Current plugin state
## p: Current plugin state.
##
## r: The rule to be inserted or removed
## r: The rule to be inserted or removed.
##
## Returns: T if the rule can be handled by the current backend, F otherwhise
## Returns: T if the rule can be handled by the current backend, F otherwise.
check_pred: function(p: PluginState, r: Rule): bool &optional;
## This predicate is called each time an OpenFlow match record is created.
## The predicate can modify the match structure before it is sent on to the
## device.
##
## p: Current plugin state.
##
## r: The rule to be inserted or removed.
##
## m: The OpenFlow match structures that were generated for this rule.
##
## Returns: The modified OpenFlow match structures that will be used in place of the structures passed in m.
match_pred: function(p: PluginState, e: Entity, m: vector of OpenFlow::ofp_match): vector of OpenFlow::ofp_match &optional;
## This predicate is called before a FlowMod message is sent to the OpenFlow
## device. It can modify the FlowMod message before it is passed on.
##
## p: Current plugin state.
##
## r: The rule to be inserted or removed.
##
## m: The OpenFlow FlowMod message.
##
## Returns: The modified FlowMod message that is used in lieu of m.
flow_mod_pred: function(p: PluginState, r: Rule, m: OpenFlow::ofp_flow_mod): OpenFlow::ofp_flow_mod &optional;
};
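# Instantiation sketch (the controller variable "of_controller" is a
# hypothetical handle created via the OpenFlow framework):
#
#     local pacf = NetControl::create_openflow(of_controller, NetControl::OfConfig($monitor=F, $priority_offset=+100));
#     NetControl::activate(pacf, 0);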
@ -300,7 +324,7 @@ function openflow_add_rule(p: PluginState, r: Rule) : bool
return T;
}
function openflow_remove_rule(p: PluginState, r: Rule) : bool
function openflow_remove_rule(p: PluginState, r: Rule, reason: string) : bool
{
if ( ! openflow_check_rule(p, r) )
return F;
@ -420,8 +444,6 @@ global openflow_plugin = Plugin(
# $done = openflow_done,
$add_rule = openflow_add_rule,
$remove_rule = openflow_remove_rule
# $transaction_begin = openflow_transaction_begin,
# $transaction_end = openflow_transaction_end
);
function create_openflow(controller: OpenFlow::Controller, config: OfConfig &default=[]) : PluginState

View file

@ -63,7 +63,7 @@ function packetfilter_add_rule(p: PluginState, r: Rule) : bool
return F;
}
function packetfilter_remove_rule(p: PluginState, r: Rule) : bool
function packetfilter_remove_rule(p: PluginState, r: Rule, reason: string) : bool
{
if ( ! packetfilter_check_rule(r) )
return F;

View file

@ -1,30 +1,45 @@
##! Types used by the NetControl framework.
##! This file defines the types that are used by the NetControl framework.
##!
##! The most important type defined in this file is :bro:see:`NetControl::Rule`,
##! which is used to describe all rules that can be expressed by the NetControl framework.
module NetControl;
export {
## The default priority that is used when creating rules.
const default_priority: int = +0 &redef;
## The default priority that is used when using the high-level functions to
## push whitelist entries to the backends (:bro:see:`NetControl::whitelist_address` and
## :bro:see:`NetControl::whitelist_subnet`).
##
## Note that this priority is not automatically used when manually creating rules
## that have a :bro:see:`NetControl::RuleType` of :bro:enum:`NetControl::WHITELIST`.
const whitelist_priority: int = +5 &redef;
## Type of a :bro:id:`Entity` for defining an action.
## The EntityType is used in :bro:id:`Entity` for defining the entity that a rule
## applies to.
type EntityType: enum {
ADDRESS, ##< Activity involving a specific IP address.
CONNECTION, ##< All of a bi-directional connection's activity.
FLOW, ##< All of a uni-directional flow's activity. Can contain wildcards.
CONNECTION, ##< Activity involving all of a bi-directional connection. 
FLOW, ##< Activity involving a uni-directional flow. Can contain wildcards.
MAC, ##< Activity involving a MAC address.
};
## Type for defining a flow.
## Flow is used in :bro:id:`Entity` together with :bro:enum:`NetControl::FLOW` to specify
## a uni-directional flow that a :bro:id:`Rule` applies to.
##
## If optional fields are not set, they are interpreted as wildcarded.
type Flow: record {
src_h: subnet &optional; ##< The source IP address/subnet.
src_p: port &optional; ##< The source port number.
dst_h: subnet &optional; ##< The destination IP address/subnet.
dst_p: port &optional; ##< The desintation port number.
dst_p: port &optional; ##< The destination port number.
src_m: string &optional; ##< The source MAC address.
dst_m: string &optional; ##< The destination MAC address.
};
## Type defining the enity an :bro:id:`Rule` is operating on.
## Type defining the entity an :bro:id:`Rule` is operating on.
type Entity: record {
ty: EntityType; ##< Type of entity.
conn: conn_id &optional; ##< Used with :bro:enum:`NetControl::CONNECTION`.
@ -33,32 +48,36 @@ export {
mac: string &optional; ##< Used with :bro:enum:`NetControl::MAC`.
};
## Target of :bro:id:`Rule` action.
## The :bro:id:`TargetType` defines the target of a :bro:id:`Rule`.
##
## Rules can either be applied to the forward path, affecting all network traffic, or
## to the monitor path, only affecting the traffic that is sent to Bro. The second
## is mostly used for shunting, which allows Bro to tell the networking hardware that
## it wants to no longer see traffic that it identified as benign.
type TargetType: enum {
FORWARD, ##< Apply rule actively to traffic on forwarding path.
MONITOR, ##< Apply rule passively to traffic sent to Bro for monitoring.
};
## Type of rules that the framework supports. Each type lists the
## Type of rules that the framework supports. Each type lists the extra
## :bro:id:`Rule` argument(s) it uses, if any.
##
## Plugins may extend this type to define their own.
type RuleType: enum {
## Stop forwarding all packets matching entity.
## Stop forwarding all packets matching the entity.
##
## No arguments.
## No additional arguments.
DROP,
## Begin modifying all packets matching entity.
## Modify all packets matching entity. The packets
## will be modified according to the `mod` entry of
## the rule.
##
## .. todo::
## Define arguments.
MODIFY,
## Begin redirecting all packets matching entity.
## Redirect all packets matching entity to a different switch port,
## given in the `out_port` argument of the rule.
##
## .. todo::
## c: output port to redirect traffic to.
REDIRECT,
## Whitelists all packets of an entity, meaning no restrictions will be applied.

View file

@ -2,13 +2,13 @@
##! dropping functionality.
@load ../main
@load base/frameworks/netcontrol
module Notice;
export {
redef enum Action += {
## Drops the address via Drop::drop_address, and generates an
## alarm.
## Drops the address via :bro:see:`NetControl::drop_address_catch_release`.
ACTION_DROP
};
@ -19,13 +19,17 @@ export {
};
}
hook notice(n: Notice::Info)
hook notice(n: Notice::Info) &priority=-5
{
if ( ACTION_DROP in n$actions )
{
#local drop = React::drop_address(n$src, "");
#local addl = drop?$sub ? fmt(" %s", drop$sub) : "";
#n$dropped = drop$note != Drop::AddressDropIgnored;
#n$msg += fmt(" [%s%s]", drop$note, addl);
local ci = NetControl::get_catch_release_info(n$src);
if ( ci$watch_until == double_to_time(0) )
{
# we have not seen this one yet. Drop it.
local addl = n?$msg ? fmt("ACTION_DROP: %s", n$msg) : "ACTION_DROP";
local res = NetControl::drop_address_catch_release(n$src, addl);
n$dropped = res$watch_until != double_to_time(0);
}
}
}
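# A sketch of wiring this up from a notice policy (the notice type is a
# hypothetical example):
#
#     hook Notice::policy(n: Notice::Info)
#         {
#         if ( n$note == Scan::Address_Scan )
#             add n$actions[Notice::ACTION_DROP];
#         }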

View file

@ -16,31 +16,47 @@ module Weird;
export {
## The weird logging stream identifier.
redef enum Log::ID += { LOG };
redef enum Notice::Type += {
## Generic unusual but notice-worthy weird activity.
Activity,
};
## The record type which contains the column fields of the weird log.
## The record which is used for representing and logging weirds.
type Info: record {
## The time when the weird occurred.
ts: time &log;
## If a connection is associated with this weird, this will be
## the connection's unique ID.
uid: string &log &optional;
## conn_id for the optional connection.
id: conn_id &log &optional;
## A shorthand way of giving the uid and id to a weird.
conn: connection &optional;
## The name of the weird that occurred.
name: string &log;
## Additional information accompanying the weird if any.
addl: string &log &optional;
## Indicate if this weird was also turned into a notice.
notice: bool &log &default=F;
## The peer that originated this weird. This is helpful in
## cluster deployments to identify which node is having
## trouble.
peer: string &log &optional;
peer: string &log &optional &default=peer_description;
## This field is to be provided when a weird is generated for
## the purpose of deduplicating weirds. The identifier string
## should be unique for a single instance of the weird. This field
## is used to define when a weird is conceptually a duplicate of
## a previous weird.
identifier: string &optional;
};
## Types of actions that may be taken when handling weird activity events.
@ -59,13 +75,13 @@ export {
## Log the weird event once per originator host.
ACTION_LOG_PER_ORIG,
## Always generate a notice associated with the weird event.
ACTION_NOTICE,
## Generate a notice associated with the weird event only once.
ACTION_NOTICE_ONCE,
## Generate a notice for the weird event once per connection.
ACTION_NOTICE_PER_CONN,
## Generate a notice for the weird event once per originator host.
ACTION_NOTICE_PER_ORIG,
};
## A table specifying default/recommended actions per weird type.
@ -246,7 +262,7 @@ export {
"bad_IP_checksum", "bad_TCP_checksum", "bad_UDP_checksum",
"bad_ICMP_checksum",
} &redef;
## This table is used to track identifier and name pairs that should be
## temporarily ignored because the problem has already been reported.
## This helps reduce the volume of high volume weirds by only allowing
@ -267,9 +283,11 @@ export {
##
## rec: The weird columns about to be logged to the weird stream.
global log_weird: event(rec: Info);
global weird: function(w: Weird::Info);
}
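# A usage sketch (hypothetical weird name): script-layer code can now raise
# weirds directly by building an Info record and handing it to weird().
#
#     local i = Weird::Info($ts=network_time(), $name="my_custom_weird", $identifier="once-per-run");
#     Weird::weird(i);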
# These actions result in the output being limited and further redundant
# weirds not progressing to being logged or noticed.
const limiting_actions = {
ACTION_LOG_ONCE,
@ -277,21 +295,18 @@ const limiting_actions = {
ACTION_LOG_PER_ORIG,
ACTION_NOTICE_ONCE,
ACTION_NOTICE_PER_CONN,
ACTION_NOTICE_PER_ORIG,
};
# This is an internal set to track which Weird::Action values lead to notice
# creation.
const notice_actions = {
ACTION_NOTICE,
ACTION_NOTICE_PER_CONN,
ACTION_NOTICE_PER_ORIG,
ACTION_NOTICE_ONCE,
};
# Used to pass the optional connection into report().
global current_conn: connection;
event bro_init() &priority=5
{
Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird, $path="weird"]);
@ -302,110 +317,119 @@ function flow_id_string(src: addr, dst: addr): string
return fmt("%s -> %s", src, dst);
}
function report(t: time, name: string, identifier: string, have_conn: bool, addl: string)
function weird(w: Weird::Info)
{
local action = actions[name];
local action = actions[w$name];
local identifier = "";
if ( w?$identifier )
identifier = w$identifier;
else
{
if ( w?$id )
identifier = id_string(w$id);
}
# If this weird is to be ignored let's drop out of here very early.
if ( action == ACTION_IGNORE || [name, identifier] in weird_ignore )
if ( action == ACTION_IGNORE || [w$name, identifier] in weird_ignore )
return;
if ( w?$conn )
{
w$uid = w$conn$uid;
w$id = w$conn$id;
}
if ( w?$id )
{
if ( [w$id$orig_h, w$name] in ignore_hosts ||
[w$id$resp_h, w$name] in ignore_hosts )
return;
}
if ( action in limiting_actions )
{
local notice_identifier = identifier;
if ( action in notice_actions )
{
# Handle notices
if ( have_conn && action == ACTION_NOTICE_PER_ORIG )
identifier = fmt("%s", current_conn$id$orig_h);
if ( w?$id && action == ACTION_NOTICE_PER_ORIG )
notice_identifier = fmt("%s", w$id$orig_h);
else if ( action == ACTION_NOTICE_ONCE )
identifier = "";
notice_identifier = "";
# If this weird was already noticed then we're done.
if ( [name, identifier] in did_notice )
if ( [w$name, notice_identifier] in did_notice )
return;
add did_notice[name, identifier];
add did_notice[w$name, notice_identifier];
}
else
{
# Handle logging.
if ( have_conn && action == ACTION_LOG_PER_ORIG )
identifier = fmt("%s", current_conn$id$orig_h);
if ( w?$id && action == ACTION_LOG_PER_ORIG )
notice_identifier = fmt("%s", w$id$orig_h);
else if ( action == ACTION_LOG_ONCE )
identifier = "";
notice_identifier = "";
# If this weird was already logged then we're done.
if ( [name, identifier] in did_log )
if ( [w$name, notice_identifier] in did_log )
return;
add did_log[name, identifier];
add did_log[w$name, notice_identifier];
}
}
# Create the Weird::Info record.
local info: Info;
info$ts = t;
info$name = name;
info$peer = peer_description;
if ( addl != "" )
info$addl = addl;
if ( have_conn )
{
info$uid = current_conn$uid;
info$id = current_conn$id;
}
if ( action in notice_actions )
{
info$notice = T;
w$notice = T;
local n: Notice::Info;
n$note = Activity;
n$msg = info$name;
if ( have_conn )
n$conn = current_conn;
if ( info?$addl )
n$sub = info$addl;
n$msg = w$name;
if ( w?$conn )
n$conn = w$conn;
else
{
if ( w?$uid )
n$uid = w$uid;
if ( w?$id )
n$id = w$id;
}
if ( w?$addl )
n$sub = w$addl;
NOTICE(n);
}
# This is for the temporary ignoring to reduce volume for identical weirds.
if ( name !in weird_do_not_ignore_repeats )
add weird_ignore[name, identifier];
Log::write(Weird::LOG, info);
if ( w$name !in weird_do_not_ignore_repeats )
add weird_ignore[w$name, identifier];
Log::write(Weird::LOG, w);
}
function report_conn(t: time, name: string, identifier: string, addl: string, c: connection)
{
local cid = c$id;
if ( [cid$orig_h, name] in ignore_hosts ||
[cid$resp_h, name] in ignore_hosts )
return;
current_conn = c;
report(t, name, identifier, T, addl);
}
function report_orig(t: time, name: string, identifier: string, orig: addr)
{
if ( [orig, name] in ignore_hosts )
return;
report(t, name, identifier, F, "");
}
# The following events come from core generated weirds typically.
event conn_weird(name: string, c: connection, addl: string)
{
report_conn(network_time(), name, id_string(c$id), addl, c);
local i = Info($ts=network_time(), $name=name, $conn=c, $identifier=id_string(c$id));
if ( addl != "" )
i$addl = addl;
weird(i);
}
event flow_weird(name: string, src: addr, dst: addr)
{
report_orig(network_time(), name, flow_id_string(src, dst), src);
# We add the source and destination with port 0/unknown because
# flow-level weirds carry no port information.
local id = conn_id($orig_h=src, $orig_p=count_to_port(0, unknown_transport),
$resp_h=dst, $resp_p=count_to_port(0, unknown_transport));
local i = Info($ts=network_time(), $name=name, $id=id, $identifier=flow_id_string(src,dst));
weird(i);
}
event net_weird(name: string)
{
report(network_time(), name, "", F, "");
local i = Info($ts=network_time(), $name=name);
weird(i);
}
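# Illustration (not part of the original diff): with the new API, any script
# can report a weird by building a Weird::Info record itself. The event name
# below is hypothetical.
global my_custom_check: event(c: connection);
event my_custom_check(c: connection)
{
local wi = Weird::Info($ts=network_time(), $name="my_custom_weird", $conn=c);
Weird::weird(wi);
}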
@ -0,0 +1,2 @@
The OpenFlow framework exposes the datastructures and functions
necessary to interface to OpenFlow capable hardware.
@ -0,0 +1 @@
Plugins for the OpenFlow framework.
@ -11,7 +11,7 @@ export {
## Indicates packets were dropped by the packet filter.
Dropped_Packets,
};
## This is the interval between individual statistics collections.
const stats_collection_interval = 5min;
}
@ -29,7 +29,7 @@ event net_stats_update(last_stat: NetStats)
new_dropped, new_recvd + new_dropped,
new_link != 0 ? fmt(", %d on link", new_link) : "")]);
}
schedule stats_collection_interval { net_stats_update(ns) };
}
@ -17,22 +17,14 @@ export {
## The reporter logging stream identifier.
redef enum Log::ID += { LOG };
## An indicator of reporter message severity.
type Level: enum {
## Informational, not needing specific attention.
INFO,
## Warning of a potential problem.
WARNING,
## A non-fatal error that should be addressed, but doesn't
## terminate program execution.
ERROR
};
## The record type which contains the column fields of the reporter log.
type Info: record {
## The network time at which the reporter event was generated.
ts: time &log;
## The severity of the reporter message.
## The severity of the reporter message. Levels are INFO for informational
## messages not needing specific attention; WARNING for potential
## problems; and ERROR for non-fatal errors that should be addressed but
## don't terminate program execution.
level: Level &log;
## An info/warning/error message that could have either been
## generated from the internal Bro core or at the scripting layer.
@ -329,6 +329,8 @@ type endpoint: record {
## The current IPv6 flow label that the connection endpoint is using.
## Always 0 if the connection is over IPv4.
flow_label: count;
## The link-layer address seen in the first packet (if available).
l2_addr: string &optional;
};
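# Sketch (not part of the original diff): the new field is &optional, so a
# consumer should test for its presence before use.
event connection_established(c: connection)
{
if ( c$orig?$l2_addr )
print fmt("originator link-layer address: %s", c$orig$l2_addr);
}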
## A connection. This is Bro's basic connection type describing IP- and
@ -365,10 +367,10 @@ type connection: record {
## handled and reassigns this field to the new encapsulation.
tunnel: EncapsulatingConnVector &optional;
## The outer VLAN, if applicable, for this connection.
## The outer VLAN, if applicable for this connection.
vlan: int &optional;
## The inner VLAN, if applicable, for this connection.
## The inner VLAN, if applicable for this connection.
inner_vlan: int &optional;
};
@ -461,7 +463,7 @@ type SYN_packet: record {
## Packet capture statistics. All counts are cumulative.
##
## .. bro:see:: net_stats
## .. bro:see:: get_net_stats
type NetStats: record {
pkts_recvd: count &default=0; ##< Packets received by Bro.
pkts_dropped: count &default=0; ##< Packets reported dropped by the system.
@ -704,7 +706,7 @@ global capture_filters: table[string] of string &redef;
global restrict_filters: table[string] of string &redef;
## Enum type identifying dynamic BPF filters. These are used by
## :bro:see:`precompile_pcap_filter` and :bro:see:`precompile_pcap_filter`.
## :bro:see:`Pcap::precompile_pcap_filter` and :bro:see:`Pcap::install_pcap_filter`.
type PcapFilterID: enum { None };
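# Sketch (not part of the original diff, assuming the Pcap BIF API): dynamic
# filters are keyed by values of this enum, e.g.:
redef enum PcapFilterID += { HTTPFilter };
event bro_init()
{
if ( ! Pcap::precompile_pcap_filter(HTTPFilter, "tcp port 80") )
Reporter::warning("could not precompile HTTP BPF filter");
}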
## Deprecated.
@ -1540,7 +1542,7 @@ type l2_hdr: record {
};
## A raw packet header, consisting of L2 header and everything in
## :bro:id:`pkt_hdr`. .
## :bro:see:`pkt_hdr`.
##
## .. bro:see:: raw_packet pkt_hdr
type raw_pkt_hdr: record {
@ -2378,83 +2380,508 @@ type ntp_msg: record {
};
## Maps SMB command numbers to descriptive names.
global samba_cmds: table[count] of string &redef
&default = function(c: count): string
{ return fmt("samba-unknown-%d", c); };
module NTLM;
## An SMB command header.
##
## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx
## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx
## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot
## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction
## smb_com_transaction2 smb_com_tree_connect_andx smb_com_tree_disconnect
## smb_com_write_andx smb_error smb_get_dfs_referral smb_message
type smb_hdr : record {
command: count; ##< The command number (see :bro:see:`samba_cmds`).
status: count; ##< The status code.
flags: count; ##< Flag set 1.
flags2: count; ##< Flag set 2.
tid: count; ##< TODO.
pid: count; ##< Process ID.
uid: count; ##< User ID.
mid: count; ##< TODO.
};
export {
type NTLM::Version: record {
## The major version of the Windows operating system in use
major : count;
## The minor version of the Windows operating system in use
minor : count;
## The build number of the Windows operating system in use
build : count;
## The current revision of NTLMSSP in use
ntlmssp : count;
};
## An SMB transaction.
##
## .. bro:see:: smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap
## smb_com_transaction smb_com_transaction2
type smb_trans : record {
word_count: count; ##< TODO.
total_param_count: count; ##< TODO.
total_data_count: count; ##< TODO.
max_param_count: count; ##< TODO.
max_data_count: count; ##< TODO.
max_setup_count: count; ##< TODO.
# flags: count;
# timeout: count;
param_count: count; ##< TODO.
param_offset: count; ##< TODO.
data_count: count; ##< TODO.
data_offset: count; ##< TODO.
setup_count: count; ##< TODO.
setup0: count; ##< TODO.
setup1: count; ##< TODO.
setup2: count; ##< TODO.
setup3: count; ##< TODO.
byte_count: count; ##< TODO.
parameters: string; ##< TODO.
};
type NTLM::NegotiateFlags: record {
## If set, requires 56-bit encryption
negotiate_56 : bool;
## If set, requests an explicit key exchange
negotiate_key_exch : bool;
## If set, requests 128-bit session key negotiation
negotiate_128 : bool;
## If set, requests the protocol version number
negotiate_version : bool;
## If set, indicates that the TargetInfo fields in the
## CHALLENGE_MESSAGE are populated
negotiate_target_info : bool;
## If set, requests the usage of the LMOWF function
request_non_nt_session_key : bool;
## If set, requests an identify level token
negotiate_identify : bool;
## If set, requests usage of NTLM v2 session security
## Note: NTLM v2 session security is actually NTLM v1
negotiate_extended_sessionsecurity : bool;
## If set, TargetName must be a server name
target_type_server : bool;
## If set, TargetName must be a domain name
target_type_domain : bool;
## If set, requests the presence of a signature block
## on all messages
negotiate_always_sign : bool;
## If set, the workstation name is provided
negotiate_oem_workstation_supplied : bool;
## If set, the domain name is provided
negotiate_oem_domain_supplied : bool;
## If set, the connection should be anonymous
negotiate_anonymous_connection : bool;
## If set, requests usage of NTLM v1
negotiate_ntlm : bool;
## If set, requests LAN Manager session key computation
negotiate_lm_key : bool;
## If set, requests connectionless authentication
negotiate_datagram : bool;
## If set, requests session key negotiation for message
## confidentiality
negotiate_seal : bool;
## If set, requests session key negotiation for message
## signatures
negotiate_sign : bool;
## If set, the TargetName field is present
request_target : bool;
## If set, requests OEM character set encoding
negotiate_oem : bool;
## If set, requests Unicode character set encoding
negotiate_unicode : bool;
};
type NTLM::Negotiate: record {
## The negotiate flags
flags : NTLM::NegotiateFlags;
## The domain name of the client, if known
domain_name : string &optional;
## The machine name of the client, if known
workstation : string &optional;
## The Windows version information, if supplied
version : NTLM::Version &optional;
};
type NTLM::AVs: record {
## The server's NetBIOS computer name
nb_computer_name : string;
## The server's NetBIOS domain name
nb_domain_name : string;
## The FQDN of the computer
dns_computer_name : string &optional;
## The FQDN of the domain
dns_domain_name : string &optional;
## The FQDN of the forest
dns_tree_name : string &optional;
## Indicates to the client that the account
## authentication is constrained
constrained_auth : bool &optional;
## The associated timestamp, if present
timestamp : time &optional;
## Indicates that the client is providing
## a machine ID created at computer startup to
## identify the calling machine
single_host_id : count &optional;
## The SPN of the target server
target_name : string &optional;
};
type NTLM::Challenge: record {
## The negotiate flags
flags : NTLM::NegotiateFlags;
## The server authentication realm. If the server is
## domain-joined, the name of the domain. Otherwise
## the server name. See flags.target_type_domain
## and flags.target_type_server
target_name : string &optional;
## The Windows version information, if supplied
version : NTLM::Version &optional;
## Attribute-value pairs specified by the server
target_info : NTLM::AVs &optional;
};
type NTLM::Authenticate: record {
## The negotiate flags
flags : NTLM::NegotiateFlags;
## The domain or computer name hosting the account
domain_name : string;
## The name of the user to be authenticated.
user_name : string;
## The name of the computer to which the user was logged on.
workstation : string;
## The Windows version information, if supplied
version : NTLM::Version &optional;
};
}
module SMB;
export {
## MAC times for a file.
type SMB::MACTimes: record {
modified : time &log;
accessed : time &log;
created : time &log;
changed : time &log;
} &log;
}
module SMB1;
export {
## An SMB1 header.
##
## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx
## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx
## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot
## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction
## smb_com_transaction2 smb_com_tree_connect_andx smb_com_tree_disconnect
## smb_com_write_andx smb_error smb_get_dfs_referral smb_message
type SMB1::Header : record {
command: count; ##< The command number
status: count; ##< The status code.
flags: count; ##< Flag set 1.
flags2: count; ##< Flag set 2.
tid: count; ##< Tree ID.
pid: count; ##< Process ID.
uid: count; ##< User ID.
mid: count; ##< Multiplex ID.
};
type SMB1::NegotiateRawMode: record {
## Read raw supported
read_raw : bool;
## Write raw supported
write_raw : bool;
};
type SMB1::NegotiateCapabilities: record {
## The server supports SMB_COM_READ_RAW and SMB_COM_WRITE_RAW
raw_mode : bool;
## The server supports SMB_COM_READ_MPX and SMB_COM_WRITE_MPX
mpx_mode : bool;
## The server supports unicode strings
unicode : bool;
## The server supports large files with 64 bit offsets
large_files : bool;
## The server supports the SMBs particular to the NT LM 0.12 dialect. Implies nt_find.
nt_smbs : bool;
## The server supports remote admin API requests via DCE-RPC
rpc_remote_apis : bool;
## The server can respond with 32 bit status codes in Status.Status
status32 : bool;
## The server supports level 2 oplocks
level_2_oplocks : bool;
## The server supports SMB_COM_LOCK_AND_READ
lock_and_read : bool;
## Reserved
nt_find : bool;
## The server is DFS aware
dfs : bool;
## The server supports NT information level requests passing through
infolevel_passthru : bool;
## The server supports large SMB_COM_READ_ANDX (up to 64k)
large_readx : bool;
## The server supports large SMB_COM_WRITE_ANDX (up to 64k)
large_writex : bool;
## The server supports CIFS Extensions for UNIX
unix : bool;
## The server supports SMB_BULK_READ, SMB_BULK_WRITE
## Note: No known implementations support this
bulk_transfer : bool;
## The server supports compressed data transfer. Requires bulk_transfer.
## Note: No known implementations support this
compressed_data : bool;
## The server supports extended security exchanges
extended_security : bool;
};
type SMB1::NegotiateResponseSecurity: record {
## This indicates whether the server, as a whole, is operating under
## Share Level or User Level security.
user_level : bool;
## This indicates whether or not the server supports Challenge/Response
## authentication. If the bit is false, then plaintext passwords must
## be used.
challenge_response: bool;
## This indicates if the server is capable of performing MAC message
## signing. Note: Requires NT LM 0.12 or later.
signatures_enabled: bool &optional;
## This indicates if the server is requiring the use of a MAC in each
## packet. If false, message signing is optional. Note: Requires NT LM 0.12
## or later.
signatures_required: bool &optional;
};
type SMB1::NegotiateResponseCore: record {
## Index of selected dialect
dialect_index : count;
};
type SMB1::NegotiateResponseLANMAN: record {
## Count of parameter words (should be 13)
word_count : count;
## Index of selected dialect
dialect_index : count;
## Security mode
security_mode : SMB1::NegotiateResponseSecurity;
## Max transmit buffer size (>= 1024)
max_buffer_size : count;
## Max pending multiplexed requests
max_mpx_count : count;
## Max number of virtual circuits (VCs - transport-layer connections)
## between client and server
max_number_vcs : count;
## Raw mode
raw_mode : SMB1::NegotiateRawMode;
## Unique token identifying this session
session_key : count;
## Current date and time at server
server_time : time;
## The challenge encryption key
encryption_key : string;
## The server's primary domain
primary_domain : string;
};
type SMB1::NegotiateResponseNTLM: record {
## Count of parameter words (should be 17)
word_count : count;
## Index of selected dialect
dialect_index : count;
## Security mode
security_mode : SMB1::NegotiateResponseSecurity;
## Max transmit buffer size
max_buffer_size : count;
## Max pending multiplexed requests
max_mpx_count : count;
## Max number of virtual circuits (VCs - transport-layer connections)
## between client and server
max_number_vcs : count;
## Max raw buffer size
max_raw_size : count;
## Unique token identifying this session
session_key : count;
## Server capabilities
capabilities : SMB1::NegotiateCapabilities;
## Current date and time at server
server_time : time;
## The challenge encryption key.
## Present only for non-extended security (i.e. capabilities$extended_security = F)
encryption_key : string &optional;
## The name of the domain.
## Present only for non-extended security (i.e. capabilities$extended_security = F)
domain_name : string &optional;
## A globally unique identifier assigned to the server.
## Present only for extended security (i.e. capabilities$extended_security = T)
guid : string &optional;
## Opaque security blob associated with the security package if capabilities$extended_security = T
## Otherwise, the challenge for challenge/response authentication.
security_blob : string;
};
type SMB1::NegotiateResponse: record {
## If the server does not understand any of the dialect strings, or if
## PC NETWORK PROGRAM 1.0 is the chosen dialect.
core : SMB1::NegotiateResponseCore &optional;
## If the chosen dialect is greater than core up to and including
## LANMAN 2.1.
lanman : SMB1::NegotiateResponseLANMAN &optional;
## If the chosen dialect is NT LM 0.12.
ntlm : SMB1::NegotiateResponseNTLM &optional;
};
type SMB1::SessionSetupAndXCapabilities: record {
## The client can use unicode strings
unicode : bool;
## The client can deal with files having 64 bit offsets
large_files : bool;
## The client understands the SMBs introduced with NT LM 0.12
## Implies nt_find
nt_smbs : bool;
## The client can receive 32 bit errors encoded in Status.Status
status32 : bool;
## The client understands Level II oplocks
level_2_oplocks : bool;
## Reserved. Implied by nt_smbs.
nt_find : bool;
};
type SMB1::SessionSetupAndXRequest: record {
## Count of parameter words
## - 10 for pre NT LM 0.12
## - 12 for NT LM 0.12 with extended security
## - 13 for NT LM 0.12 without extended security
word_count : count;
## Client maximum buffer size
max_buffer_size : count;
## Actual maximum multiplexed pending requests
max_mpx_count : count;
## Virtual circuit number. First VC == 0
vc_number : count;
## Session key (valid iff vc_number > 0)
session_key : count;
## Client's native operating system
native_os : string;
## Client's native LAN Manager type
native_lanman : string;
## Account name
## Note: not set for NT LM 0.12 with extended security
account_name : string &optional;
## If challenge/response auth is not being used, this is the password.
## Otherwise, it's the response to the server's challenge.
## Note: Only set for pre NT LM 0.12
account_password : string &optional;
## Client's primary domain, if known
## Note: not set for NT LM 0.12 with extended security
primary_domain : string &optional;
## Case insensitive password
## Note: only set for NT LM 0.12 without extended security
case_insensitive_password : string &optional;
## Case sensitive password
## Note: only set for NT LM 0.12 without extended security
case_sensitive_password : string &optional;
## Security blob
## Note: only set for NT LM 0.12 with extended security
security_blob : string &optional;
## Client capabilities
## Note: only set for NT LM 0.12
capabilities : SMB1::SessionSetupAndXCapabilities &optional;
};
type SMB1::SessionSetupAndXResponse: record {
## Count of parameter words (should be 3 for pre NT LM 0.12 and 4 for NT LM 0.12)
word_count : count;
## Were we logged in as a guest user?
is_guest : bool &optional;
## Server's native operating system
native_os : string &optional;
## Server's native LAN Manager type
native_lanman : string &optional;
## Server's primary domain
primary_domain : string &optional;
## Security blob if NTLM
security_blob : string &optional;
};
type SMB1::Find_First2_Request_Args: record {
## File attributes to apply as a constraint to the search
search_attrs : count;
## Max search results
search_count : count;
## Misc. flags for how the server should manage the transaction
## once results are returned
flags : count;
## How detailed the information returned in the results should be
info_level : count;
## Specify whether to search for directories or files
search_storage_type : count;
## The string to search for (note: may contain wildcards)
file_name : string;
};
type SMB1::Find_First2_Response_Args: record {
## The server generated search identifier
sid : count;
## Number of results returned by the search
search_count : count;
## Whether or not the search can be continued using
## the TRANS2_FIND_NEXT2 transaction
end_of_search : bool;
## An extended attribute name that couldn't be retrieved
ext_attr_error : string &optional;
};
## SMB transaction data.
##
## .. bro:see:: smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap
## smb_com_transaction smb_com_transaction2
##
## .. todo:: Should this really be a record type?
type smb_trans_data : record {
data : string; ##< The transaction's data.
};
}
## Deprecated.
##
## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere
## else.
type smb_tree_connect : record {
flags: count;
password: string;
path: string;
service: string;
};
module SMB2;
## Deprecated.
##
## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere
## else.
type smb_negotiate : table[count] of string;
export {
type SMB2::Header: record {
credit_charge: count;
status: count;
command: count;
credits: count;
flags: count;
message_id: count;
process_id: count;
tree_id: count;
session_id: count;
signature: string;
};
type SMB2::GUID: record {
persistent: count;
volatile: count;
};
type SMB2::FileAttrs: record {
read_only: bool;
hidden: bool;
system: bool;
directory: bool;
archive: bool;
normal: bool;
temporary: bool;
sparse_file: bool;
reparse_point: bool;
compressed: bool;
offline: bool;
not_content_indexed: bool;
encrypted: bool;
integrity_stream: bool;
no_scrub_data: bool;
};
type SMB2::CloseResponse: record {
alloc_size : count;
eof : count;
times : SMB::MACTimes;
attrs : SMB2::FileAttrs;
};
type SMB2::NegotiateResponse: record {
dialect_revision : count;
security_mode : count;
server_guid : string;
system_time : time;
server_start_time : time;
};
type SMB2::SessionSetupRequest: record {
security_mode: count;
};
type SMB2::SessionSetupFlags: record {
guest: bool;
anonymous: bool;
encrypt: bool;
};
type SMB2::SessionSetupResponse: record {
flags: SMB2::SessionSetupFlags;
};
type SMB2::SetInfoRequest: record {
eof: count;
};
type SMB2::TreeConnectResponse: record {
share_type: count;
};
}
module GLOBAL;
## A list of router addresses offered by a DHCP server.
##
@ -2952,14 +3379,22 @@ type bittorrent_benc_dir: table[string] of bittorrent_benc_value;
## bt_tracker_response_not_ok
type bt_tracker_headers: table[string] of string;
## A vector of boolean values that indicate the setting
## for a range of modbus coils.
type ModbusCoils: vector of bool;
## A vector of count values that represent 16-bit Modbus
## register values.
type ModbusRegisters: vector of count;
type ModbusHeaders: record {
## Transaction identifier
tid: count;
## Protocol identifier
pid: count;
## Length of the remaining data in this frame
len: count;
## Unit identifier (previously 'slave address')
uid: count;
## MODBUS function code
function_code: count;
};
@ -2999,6 +3434,23 @@ export {
};
}
module SSL;
export {
type SignatureAndHashAlgorithm: record {
HashAlgorithm: count; ##< Hash algorithm number
SignatureAlgorithm: count; ##< Signature algorithm number
};
}
module GLOBAL;
## A vector of Signature and Hash Algorithms.
##
## .. todo:: We need this type definition only for declaring builtin functions
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
## directly and then remove this alias.
type signature_and_hashalgorithm_vec: vector of SSL::SignatureAndHashAlgorithm;
module X509;
export {
type Certificate: record {
@ -3504,11 +3956,11 @@ global load_sample_freq = 20 &redef;
## be reported via :bro:see:`content_gap`.
const detect_filtered_trace = F &redef;
## Whether we want :bro:see:`content_gap` and :bro:see:`get_gap_summary` for partial
## Whether we want :bro:see:`content_gap` for partial
## connections. A connection is partial if it is missing a full handshake. Note
## that gap reports for partial connections might not be reliable.
##
## .. bro:see:: content_gap get_gap_summary partial_connection
## .. bro:see:: content_gap partial_connection
const report_gaps_for_partial = F &redef;
## Flag to prevent Bro from exiting automatically when input is exhausted.
@ -3615,6 +4067,14 @@ const remote_trace_sync_peers = 0 &redef;
## consistency check.
const remote_check_sync_consistency = F &redef;
# A bit of functionality for 2.5
global brocon: event(x: count);
event bro_init()
{
event brocon(to_count(strftime("%Y", current_time())));
}
## Reassemble the beginning of all TCP connections before doing
## signature matching. Enabling this provides more accurate matching at the
## expense of CPU cycles.
@ -10,8 +10,10 @@
@load base/utils/conn-ids
@load base/utils/dir
@load base/utils/directions-and-hosts
@load base/utils/email
@load base/utils/exec
@load base/utils/files
@load base/utils/geoip-distance
@load base/utils/numbers
@load base/utils/paths
@load base/utils/patterns
@ -41,6 +43,7 @@
@load base/frameworks/netcontrol
@load base/protocols/conn
@load base/protocols/dce-rpc
@load base/protocols/dhcp
@load base/protocols/dnp3
@load base/protocols/dns
@ -51,12 +54,16 @@
@load base/protocols/krb
@load base/protocols/modbus
@load base/protocols/mysql
@load base/protocols/ntlm
@load base/protocols/pop3
@load base/protocols/radius
@load base/protocols/rdp
@load base/protocols/rfb
@load base/protocols/sip
@load base/protocols/snmp
# This DOES NOT enable the SMB analyzer. It's just some base support
# for other protocols.
@load base/protocols/smb
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@ -87,8 +87,10 @@ export {
## f packet with FIN bit set
## r packet with RST bit set
## c packet with a bad checksum
## t packet with retransmitted payload
## i inconsistent packet (e.g. FIN+RST bits set)
## q multi-flag packet (SYN+FIN or SYN+RST bits set)
## ^ connection direction was flipped by Bro's heuristic
## ====== ====================================================
##
## If the event comes from the originator, the letter is in
@ -0,0 +1,4 @@
@load ./consts
@load ./main
@load-sigs ./dpd.sig
File diff suppressed because it is too large.
@ -0,0 +1,6 @@
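# The payload match below checks, byte by byte: DCE/RPC major version 5,
# minor version 0 or 1, a PDU type in the range 0x00-0x13, and fragment
# flags 0x03 (first and last fragment set).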
signature dpd_dce_rpc {
ip-proto == tcp
payload /^\x05[\x00\x01][\x00-\x13]\x03/
enable "DCE_RPC"
}
@ -0,0 +1,207 @@
@load ./consts
@load base/frameworks/dpd
module DCE_RPC;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the event happened.
ts : time &log;
## Unique ID for the connection.
uid : string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id : conn_id &log;
## Round trip time from the request to the response.
## If either the request or response wasn't seen,
## this will be null.
rtt : interval &log &optional;
## Remote pipe name.
named_pipe : string &log &optional;
## Endpoint name looked up from the uuid.
endpoint : string &log &optional;
## Operation seen in the call.
operation : string &log &optional;
};
## These are DCE-RPC operations that are ignored, typically because
## the operations are noisy and of low value on most networks.
const ignored_operations: table[string] of set[string] = {
["winreg"] = set("BaseRegCloseKey", "BaseRegGetVersion", "BaseRegOpenKey", "BaseRegQueryValue", "BaseRegDeleteKeyEx", "OpenLocalMachine", "BaseRegEnumKey", "OpenClassesRoot"),
["spoolss"] = set("RpcSplOpenPrinter", "RpcClosePrinter"),
["wkssvc"] = set("NetrWkstaGetInfo"),
} &redef;
}
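# Illustration (not part of the original diff): sites can extend the table
# via redef; the "samr" entry here is hypothetical, and "*" suppresses
# logging of every operation on that endpoint.
redef ignored_operations += {
["samr"] = set("*"),
};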
redef DPD::ignore_violations += { Analyzer::ANALYZER_DCE_RPC };
type State: record {
uuid : string &optional;
named_pipe : string &optional;
};
# This is to store the log and state information
# for multiple DCE/RPC bindings over a single TCP connection (named pipes).
type BackingState: record {
info: Info;
state: State;
};
redef record connection += {
dce_rpc: Info &optional;
dce_rpc_state: State &optional;
dce_rpc_backing: table[count] of BackingState &optional;
};
const ports = { 135/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(DCE_RPC::LOG, [$columns=Info, $path="dce_rpc"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DCE_RPC, ports);
}
function normalize_named_pipe_name(pn: string): string
{
local parts = split_string(pn, /\\[pP][iI][pP][eE]\\/);
if ( 1 in parts )
return to_lower(parts[1]);
else
return to_lower(pn);
}
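# Illustration (not part of the original diff): both
# normalize_named_pipe_name("\\PIPE\\srvsvc") and
# normalize_named_pipe_name("srvsvc") return "srvsvc".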
function set_state(c: connection, state_x: BackingState)
{
c$dce_rpc = state_x$info;
c$dce_rpc_state = state_x$state;
if ( c$dce_rpc_state?$uuid )
c$dce_rpc$endpoint = uuid_endpoint_map[c$dce_rpc_state$uuid];
if ( c$dce_rpc_state?$named_pipe )
c$dce_rpc$named_pipe = c$dce_rpc_state$named_pipe;
}
function set_session(c: connection, fid: count)
{
if ( ! c?$dce_rpc_backing )
{
c$dce_rpc_backing = table();
}
if ( fid !in c$dce_rpc_backing )
{
local info = Info($ts=network_time(),$id=c$id,$uid=c$uid);
c$dce_rpc_backing[fid] = BackingState($info=info, $state=State());
}
local state_x = c$dce_rpc_backing[fid];
set_state(c, state_x);
}
event dce_rpc_bind(c: connection, fid: count, uuid: string, ver_major: count, ver_minor: count) &priority=5
{
set_session(c, fid);
local uuid_str = uuid_to_string(uuid);
c$dce_rpc_state$uuid = uuid_str;
c$dce_rpc$endpoint = uuid_endpoint_map[uuid_str];
}
event dce_rpc_bind_ack(c: connection, fid: count, sec_addr: string) &priority=5
{
set_session(c, fid);
if ( sec_addr != "" )
{
c$dce_rpc_state$named_pipe = sec_addr;
c$dce_rpc$named_pipe = sec_addr;
}
}
event dce_rpc_request(c: connection, fid: count, opnum: count, stub_len: count) &priority=5
{
set_session(c, fid);
if ( c?$dce_rpc )
{
c$dce_rpc$ts = network_time();
}
}
event dce_rpc_response(c: connection, fid: count, opnum: count, stub_len: count) &priority=5
{
set_session(c, fid);
# In the event that the binding wasn't seen, but the pipe
# name is known, go ahead and see if we have a pipe name to
# uuid mapping...
if ( ! c$dce_rpc?$endpoint && c$dce_rpc?$named_pipe )
{
local npn = normalize_named_pipe_name(c$dce_rpc$named_pipe);
if ( npn in pipe_name_to_common_uuid )
{
c$dce_rpc_state$uuid = pipe_name_to_common_uuid[npn];
}
}
if ( c?$dce_rpc && c$dce_rpc?$endpoint )
{
c$dce_rpc$operation = operations[c$dce_rpc_state$uuid, opnum];
if ( c$dce_rpc$ts != network_time() )
c$dce_rpc$rtt = network_time() - c$dce_rpc$ts;
}
}
event dce_rpc_response(c: connection, fid: count, opnum: count, stub_len: count) &priority=-5
{
if ( c?$dce_rpc )
{
# If there is not an endpoint, there isn't much reason to log.
# This can happen if the request isn't seen.
if ( (c$dce_rpc?$endpoint && c$dce_rpc$endpoint !in ignored_operations)
||
(c$dce_rpc?$endpoint && c$dce_rpc?$operation &&
c$dce_rpc$operation !in ignored_operations[c$dce_rpc$endpoint] &&
"*" !in ignored_operations[c$dce_rpc$endpoint]) )
{
Log::write(LOG, c$dce_rpc);
}
delete c$dce_rpc;
}
}
event connection_state_remove(c: connection)
{
if ( ! c?$dce_rpc )
return;
# TODO: Go through any remaining dce_rpc requests that haven't been processed with replies.
for ( i in c$dce_rpc_backing )
{
local x = c$dce_rpc_backing[i];
set_state(c, x);
# In the event that the binding wasn't seen, but the pipe
# name is known, go ahead and see if we have a pipe name to
# uuid mapping...
if ( ! c$dce_rpc?$endpoint && c$dce_rpc?$named_pipe )
{
local npn = normalize_named_pipe_name(c$dce_rpc$named_pipe);
if ( npn in pipe_name_to_common_uuid )
{
c$dce_rpc_state$uuid = pipe_name_to_common_uuid[npn];
}
}
if ( (c$dce_rpc?$endpoint && c$dce_rpc$endpoint !in ignored_operations)
||
(c$dce_rpc?$endpoint && c$dce_rpc?$operation &&
c$dce_rpc$operation !in ignored_operations[c$dce_rpc$endpoint] &&
"*" !in ignored_operations[c$dce_rpc$endpoint]) )
{
Log::write(LOG, c$dce_rpc);
}
}
}
@ -2,6 +2,7 @@
##! their responses.
@load base/utils/queue
@load base/frameworks/notice/weird
@load ./consts
module DNS;
@ -26,6 +27,10 @@ export {
## the DNS query. Also used in responses to match up replies to
## outstanding queries.
trans_id: count &log &optional;
## Round trip time for the query and response. This indicates
## the delay from when the request was seen to when the
## answer started.
rtt: interval &log &optional;
## The domain name that is the subject of the DNS query.
query: string &log &optional;
## The QCLASS value specifying the class of the query.
@ -99,7 +104,7 @@ export {
## when creating a new session value.
##
## c: The connection involved in the new session.
##
## msg: The DNS message header information.
##
## is_query: Indicator for if this is being called for a query or a response.
@ -172,8 +177,9 @@ function log_unmatched_msgs_queue(q: Queue::Queue)
for ( i in infos )
{
event flow_weird("dns_unmatched_msg",
infos[i]$id$orig_h, infos[i]$id$resp_h);
local wi = Weird::Info($ts=network_time(), $name="dns_unmatched_msg", $uid=infos[i]$uid,
$id=infos[i]$id);
Weird::weird(wi);
Log::write(DNS::LOG, infos[i]);
}
}
@ -188,12 +194,14 @@ function log_unmatched_msgs(msgs: PendingMessages)
function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
{
local wi: Weird::Info;
if ( id !in msgs )
{
if ( |msgs| > max_pending_query_ids )
{
event flow_weird("dns_unmatched_query_id_quantity",
msg$id$orig_h, msg$id$resp_h);
wi = Weird::Info($ts=network_time(), $name="dns_unmatched_query_id_quantity", $uid=msg$uid,
$id=msg$id);
Weird::weird(wi);
# Throw away all unmatched on assumption they'll never be matched.
log_unmatched_msgs(msgs);
}
@ -204,8 +212,9 @@ function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
{
if ( Queue::len(msgs[id]) > max_pending_msgs )
{
event flow_weird("dns_unmatched_msg_quantity",
msg$id$orig_h, msg$id$resp_h);
wi = Weird::Info($ts=network_time(), $name="dns_unmatched_msg_quantity", $uid=msg$uid,
$id=msg$id);
Weird::weird(wi);
log_unmatched_msgs_queue(msgs[id]);
# Throw away all unmatched on assumption they'll never be matched.
msgs[id] = Queue::init();
@ -311,6 +320,16 @@ hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;
if ( ! c$dns?$rtt )
{
c$dns$rtt = network_time() - c$dns$ts;
# This could mean that only a reply was seen since
# we assume there must be some passage of time between
# request and response.
if ( c$dns$rtt == 0secs )
delete c$dns$rtt;
}
if ( reply != "" )
{
if ( ! c$dns?$answers )
@ -241,10 +241,10 @@ event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) &prior
if ( [c$ftp$cmdarg$cmd, code] in directory_cmds )
{
if ( c$ftp$cmdarg$cmd == "CWD" )
c$ftp$cwd = build_path(c$ftp$cwd, c$ftp$cmdarg$arg);
c$ftp$cwd = build_path_compressed(c$ftp$cwd, c$ftp$cmdarg$arg);
else if ( c$ftp$cmdarg$cmd == "CDUP" )
c$ftp$cwd = cat(c$ftp$cwd, "/..");
c$ftp$cwd = build_path_compressed(c$ftp$cwd, "/..");
else if ( c$ftp$cmdarg$cmd == "PWD" || c$ftp$cmdarg$cmd == "XPWD" )
c$ftp$cwd = extract_path(msg);
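# For illustration (not part of the original diff): assuming the usual
# compress_path semantics, build_path_compressed("/a/b", "../c") yields
# "/a/c", where plain build_path() would have left "/a/b/../c".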
@ -17,12 +17,18 @@ export {
## An ordered vector of file unique IDs.
orig_fuids: vector of string &log &optional;
## An ordered vector of filenames from the client.
orig_filenames: vector of string &log &optional;
## An ordered vector of mime types.
orig_mime_types: vector of string &log &optional;
## An ordered vector of file unique IDs.
resp_fuids: vector of string &log &optional;
## An ordered vector of filenames from the server.
resp_filenames: vector of string &log &optional;
## An ordered vector of mime types.
resp_mime_types: vector of string &log &optional;
@ -82,13 +88,31 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
c$http$orig_fuids = string_vec(f$id);
else
c$http$orig_fuids[|c$http$orig_fuids|] = f$id;
if ( f$info?$filename )
{
if ( ! c$http?$orig_filenames )
c$http$orig_filenames = string_vec(f$info$filename);
else
c$http$orig_filenames[|c$http$orig_filenames|] = f$info$filename;
}
}
else
{
if ( ! c$http?$resp_fuids )
c$http$resp_fuids = string_vec(f$id);
else
c$http$resp_fuids[|c$http$resp_fuids|] = f$id;
if ( f$info?$filename )
{
if ( ! c$http?$resp_filenames )
c$http$resp_filenames = string_vec(f$info$filename);
else
c$http$resp_filenames[|c$http$resp_filenames|] = f$info$filename;
}
}
}
}
@ -60,9 +60,6 @@ export {
info_code: count &log &optional;
## Last seen 1xx informational reply message returned by the server.
info_msg: string &log &optional;
## Filename given in the Content-Disposition header sent by the
## server.
filename: string &log &optional;
## A set of indicators of various attributes discovered and
## related to a particular request/response pair.
tags: set[Tags] &log;
@ -0,0 +1 @@
@load ./main
@ -0,0 +1,130 @@
@load base/protocols/smb
@load base/frameworks/dpd
module NTLM;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the event happened.
ts : time &log;
## Unique ID for the connection.
uid : string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id : conn_id &log;
## Username given by the client.
username : string &log &optional;
## Hostname given by the client.
hostname : string &log &optional;
## Domainname given by the client.
domainname : string &log &optional;
## Indicate whether or not the authentication was successful.
success : bool &log &optional;
## A string representation of the status code that was
## returned in response to the authentication attempt.
status : string &log &optional;
## Internally used field to indicate if the login attempt
## has already been logged.
done: bool &default=F;
};
## DOS and NT status codes that indicate authentication failure.
const auth_failure_statuses: set[count] = {
0x052e0001, # logonfailure
0x08c00002, # badClient
0x08c10002, # badLogonTime
0x08c20002, # passwordExpired
0xC0000022, # ACCESS_DENIED
0xC000006A, # WRONG_PASSWORD
0xC000006F, # INVALID_LOGON_HOURS
0xC0000070, # INVALID_WORKSTATION
0xC0000071, # PASSWORD_EXPIRED
0xC0000072, # ACCOUNT_DISABLED
} &redef;
}
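# Illustration (not part of the original diff): deployments can add further
# failure codes via redef, e.g. the NT status for locked-out accounts:
redef auth_failure_statuses += {
0xC0000234, # ACCOUNT_LOCKED_OUT
};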
redef DPD::ignore_violations += { Analyzer::ANALYZER_NTLM };
redef record connection += {
ntlm: Info &optional;
};
event bro_init() &priority=5
{
Log::create_stream(NTLM::LOG, [$columns=Info, $path="ntlm"]);
}
event ntlm_negotiate(c: connection, request: NTLM::Negotiate) &priority=5
{
}
event ntlm_challenge(c: connection, challenge: NTLM::Challenge) &priority=5
{
}
event ntlm_authenticate(c: connection, request: NTLM::Authenticate) &priority=5
{
c$ntlm = NTLM::Info($ts=network_time(), $uid=c$uid, $id=c$id);
if ( request?$domain_name )
c$ntlm$domainname = request$domain_name;
if ( request?$workstation )
c$ntlm$hostname = request$workstation;
if ( request?$user_name )
c$ntlm$username = request$user_name;
}
event gssapi_neg_result(c: connection, state: count) &priority=3
{
if ( c?$ntlm )
c$ntlm$success = (state == 0);
}
event gssapi_neg_result(c: connection, state: count) &priority=-3
{
if ( c?$ntlm && ! c$ntlm$done )
{
if ( c$ntlm?$username || c$ntlm?$hostname )
{
Log::write(NTLM::LOG, c$ntlm);
c$ntlm$done = T;
}
}
}
event smb1_message(c: connection, hdr: SMB1::Header, is_orig: bool) &priority=3
{
if ( c?$ntlm && ! c$ntlm$done &&
( c$ntlm?$username || c$ntlm?$hostname ) )
{
c$ntlm$success = (hdr$status !in auth_failure_statuses);
c$ntlm$status = SMB::statuses[hdr$status]$id;
Log::write(NTLM::LOG, c$ntlm);
c$ntlm$done = T;
}
}
event smb2_message(c: connection, hdr: SMB2::Header, is_orig: bool) &priority=3
{
if ( c?$ntlm && ! c$ntlm$done &&
( c$ntlm?$username || c$ntlm?$hostname ) )
{
c$ntlm$success = (hdr$status !in auth_failure_statuses);
c$ntlm$status = SMB::statuses[hdr$status]$id;
Log::write(NTLM::LOG, c$ntlm);
c$ntlm$done = T;
}
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$ntlm && ! c$ntlm$done )
{
Log::write(NTLM::LOG, c$ntlm);
}
}
@ -0,0 +1,3 @@
@load ./consts
@load ./const-dos-error
@load ./const-nt-status
@ -0,0 +1,132 @@
# DOS error codes.
@load ./consts
module SMB;
redef SMB::statuses += {
[0x00010001] = [$id="badfunc", $desc="Incorrect function."],
[0x00010002] = [$id="error", $desc="Incorrect function."],
[0x00020001] = [$id="badfile", $desc="The system cannot find the file specified."],
[0x00020002] = [$id="badpw", $desc="Bad password."],
[0x00030001] = [$id="badpath", $desc="The system cannot find the path specified."],
[0x00030002] = [$id="badtype", $desc="reserved"],
[0x00040001] = [$id="nofids", $desc="The system cannot open the file."],
[0x00040002] = [$id="access", $desc="The client does not have the necessary access rights to perform the requested function."],
[0x00050001] = [$id="noaccess", $desc="Access is denied."],
[0x00050002] = [$id="invnid", $desc="The TID specified was invalid."],
[0x00060001] = [$id="badfid", $desc="The handle is invalid."],
[0x00060002] = [$id="invnetname", $desc="The network name cannot be found."],
[0x00070001] = [$id="badmcb", $desc="The storage control blocks were destroyed."],
[0x00070002] = [$id="invdevice", $desc="The device specified is invalid."],
[0x00080001] = [$id="nomem", $desc="Not enough storage is available to process this command."],
[0x00090001] = [$id="badmem", $desc="The storage control block address is invalid."],
[0x000a0001] = [$id="badenv", $desc="The environment is incorrect."],
[0x000c0001] = [$id="badaccess", $desc="The access code is invalid."],
[0x000d0001] = [$id="baddata", $desc="The data is invalid."],
[0x000e0001] = [$id="res", $desc="reserved"],
[0x000f0001] = [$id="baddrive", $desc="The system cannot find the drive specified."],
[0x00100001] = [$id="remcd", $desc="The directory cannot be removed."],
[0x00110001] = [$id="diffdevice", $desc="The system cannot move the file to a different disk drive."],
[0x00120001] = [$id="nofiles", $desc="There are no more files."],
[0x00130003] = [$id="nowrite", $desc="The media is write protected."],
[0x00140003] = [$id="badunit", $desc="The system cannot find the device specified."],
[0x00150003] = [$id="notready", $desc="The device is not ready."],
[0x00160002] = [$id="unknownsmb", $desc="The device does not recognize the command."],
[0x00160003] = [$id="badcmd", $desc="The device does not recognize the command."],
[0x00170003] = [$id="data", $desc="Data error (cyclic redundancy check)."],
[0x00180003] = [$id="badreq", $desc="The program issued a command but the command length is incorrect."],
[0x00190003] = [$id="seek", $desc="The drive cannot locate a specific area or track on the disk."],
[0x001a0003] = [$id="badmedia", $desc="The specified disk or diskette cannot be accessed."],
[0x001b0003] = [$id="badsector", $desc="The drive cannot find the sector requested."],
[0x001c0003] = [$id="nopaper", $desc="The printer is out of paper."],
[0x001d0003] = [$id="write", $desc="The system cannot write to the specified device."],
[0x001e0003] = [$id="read", $desc="The system cannot read from the specified device."],
[0x001f0001] = [$id="general", $desc="A device attached to the system is not functioning."],
[0x001f0003] = [$id="general", $desc="A device attached to the system is not functioning."],
[0x00200001] = [$id="badshare", $desc="The process cannot access the file because it is being used by another process."],
[0x00200003] = [$id="badshare", $desc="The process cannot access the file because it is being used by another process."],
[0x00210001] = [$id="lock", $desc="The process cannot access the file because another process has locked a portion of the file."],
[0x00210003] = [$id="lock", $desc="The process cannot access the file because another process has locked a portion of the file."],
[0x00220003] = [$id="wrongdisk", $desc="The wrong diskette is in the drive."],
[0x00230003] = [$id="FCBunavail", $desc="No FCBs are available to process the request."],
[0x00240003] = [$id="sharebufexc", $desc="A sharing buffer has been exceeded."],
[0x00270003] = [$id="diskfull", $desc="The disk is full."],
[0x00310002] = [$id="qfull", $desc="The print queue is full."],
[0x00320001] = [$id="unsup", $desc="The network request is not supported."],
[0x00320002] = [$id="qtoobig", $desc="The queued item too big."],
[0x00340002] = [$id="invpfid", $desc="The print file FID is invalid."],
[0x00340001] = [$id="dupname", $desc="A duplicate name exists on the network."],
[0x00400001] = [$id="netnamedel", $desc="The specified network name is no longer available."],
[0x00400002] = [$id="smbcmd", $desc="The server did not recognize the command received."],
[0x00410002] = [$id="srverror", $desc="The server encountered an internal error."],
[0x00420001] = [$id="noipc", $desc="The network resource type is not correct."],
[0x00430001] = [$id="nosuchshare", $desc="The network name cannot be found."],
[0x00430002] = [$id="filespecs", $desc="The specified FID and pathname combination is invalid."],
[0x00440002] = [$id="badlink", $desc="reserved"],
[0x00450002] = [$id="badpermits", $desc="The access permissions specified for a file or directory are not a valid combination."],
[0x00460002] = [$id="badpid", $desc="reserved"],
[0x00470001] = [$id="nomoreconn", $desc="nomoreconn."],
[0x00470002] = [$id="setattrmode", $desc="The attribute mode specified is invalid."],
[0x00500001] = [$id="filexists", $desc="The file exists."],
[0x00510002] = [$id="paused", $desc="The message server is paused."],
[0x00520002] = [$id="msgoff", $desc="Not receiving messages."],
[0x00530002] = [$id="noroom", $desc="No room to buffer message."],
[0x00570001] = [$id="invalidparam", $desc="The parameter is incorrect."],
[0x00570002] = [$id="rmuns", $desc="Too many remote usernames."],
[0x00580002] = [$id="timeout", $desc="Operation timed out."],
[0x00590002] = [$id="noresource", $desc="No resources currently available for request."],
[0x005a0002] = [$id="toomanyuids", $desc="Too many Uids active on this session."],
[0x005b0002] = [$id="baduid", $desc="The Uid is not known as a valid user identifier on this session."],
[0x006d0001] = [$id="brokenpipe", $desc="The pipe has been ended."],
[0x006e0001] = [$id="cannotopen", $desc="The system cannot open the device or file specified."],
[0x007a0001] = [$id="insufficientbuffer", $desc="The data area passed to a system call is too small."],
[0x007b0001] = [$id="invalidname", $desc="The filename, directory name, or volume label syntax is incorrect."],
[0x007c0001] = [$id="unknownlevel", $desc="The system call level is not correct."],
[0x00910001] = [$id="notempty", $desc="The directory is not empty."],
[0x009e0001] = [$id="notlocked", $desc="The segment is already unlocked."],
[0x00b70001] = [$id="rename", $desc="Cannot create a file when that file already exists."],
[0x00e60001] = [$id="badpipe", $desc="The pipe state is invalid."],
[0x00e70001] = [$id="pipebusy", $desc="All pipe instances are busy."],
[0x00e80001] = [$id="pipeclosing", $desc="The pipe is being closed."],
[0x00e90001] = [$id="notconnected", $desc="No process is on the other end of the pipe."],
[0x00ea0001] = [$id="moredata", $desc="More data is available."],
[0x00fa0002] = [$id="usempx", $desc="Temporarily unable to support Raw, use Mpx mode."],
[0x00fb0002] = [$id="usestd", $desc="Temporarily unable to support Raw, use standard read/write."],
[0x00fc0002] = [$id="contmpx", $desc="Continue in MPX mode."],
[0x00fe0002] = [$id="badPassword", $desc="reserved"],
[0x01030001] = [$id="nomoreitems", $desc="No more data is available."],
[0x010b0001] = [$id="baddirectory", $desc="The directory name is invalid."],
[0x011a0001] = [$id="easnotsupported", $desc="The mounted file system does not support extended attributes."],
[0x04000002] = [$id="_NOTIFY_ENUM_DIR", $desc="Too many files have changed since the last time an NT_TRANSACT_NOTIFY_CHANGE was issued."],
[0x052e0001] = [$id="logonfailure", $desc="Logon failure: unknown user name or bad password."],
[0x07030001] = [$id="driveralreadyinstalled", $desc="The specified printer driver is already installed."],
[0x07040001] = [$id="unknownprinterport", $desc="The specified port is unknown."],
[0x07050001] = [$id="unknownprinterdriver", $desc="The printer driver is unknown."],
[0x07060001] = [$id="unknownprintprocessor", $desc="The print processor is unknown."],
[0x07070001] = [$id="invalidseparatorfile", $desc="The specified separator file is invalid."],
[0x07080001] = [$id="invalidjobpriority", $desc="The specified priority is invalid."],
[0x07090001] = [$id="invalidprintername", $desc="The printer name is invalid."],
[0x070a0001] = [$id="printeralreadyexists", $desc="The printer already exists."],
[0x070b0001] = [$id="invalidprintercommand", $desc="The printer command is invalid."],
[0x070c0001] = [$id="invaliddatatype", $desc="The specified datatype is invalid."],
[0x070d0001] = [$id="invalidenvironment", $desc="The Environment specified is invalid."],
[0x084b0001] = [$id="buftoosmall", $desc="The API return buffer is too small."],
[0x085e0001] = [$id="unknownipc", $desc="The requested API is not supported on the remote server."],
[0x08670001] = [$id="nosuchprintjob", $desc="The print job does not exist."],
[0x08bf0002] = [$id="accountExpired", $desc="This user account has expired."],
[0x08c00002] = [$id="badClient", $desc="The user is not allowed to log on from this workstation."],
[0x08c10002] = [$id="badLogonTime", $desc="The user is not allowed to log on at this time."],
[0x08c20002] = [$id="passwordExpired", $desc="The password of this user has expired."],
[0x09970001] = [$id="invgroup", $desc="invgroup"],
[0x0bb80001] = [$id="unknownprintmonitor", $desc="The specified print monitor is unknown."],
[0x0bb90001] = [$id="printerdriverinuse", $desc="The specified printer driver is currently in use."],
[0x0bba0001] = [$id="spoolfilenotfound", $desc="The spool file was not found."],
[0x0bbb0001] = [$id="nostartdoc", $desc="A StartDocPrinter call was not issued."],
[0x0bbc0001] = [$id="noaddjob", $desc="An AddJob call was not issued."],
[0x0bbd0001] = [$id="printprocessoralreadyinstalled", $desc="The specified print processor has already been installed."],
[0x0bbe0001] = [$id="printmonitoralreadyinstalled", $desc="The specified print monitor has already been installed."],
[0x0bbf0001] = [$id="invalidprintmonitor", $desc="The specified print monitor does not have the required functions."],
[0x0bc00001] = [$id="printmonitorinuse", $desc="The specified print monitor is currently in use."],
[0x0bc10001] = [$id="printerhasjobsqueued", $desc="The requested operation is not allowed when there are jobs queued to the printer."],
[0xffff0002] = [$id="nosupport", $desc="Function not supported."],
};
File diff suppressed because it is too large.
@ -0,0 +1,271 @@
module SMB;
export {
type StatusCode: record {
id: string;
desc: string;
};
const statuses: table[count] of StatusCode = {
[0x00000000] = [$id="SUCCESS", $desc="The operation completed successfully."],
} &redef &default=function(i: count):StatusCode { local unknown=fmt("unknown-%d", i); return [$id=unknown, $desc=unknown]; };
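# Illustration (not part of the original diff): because of the &default
# function, lookups never fail; SMB::statuses[0x00000000]$id yields
# "SUCCESS", while an unknown code such as 0x12345678 yields
# "unknown-305419896".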
## These are file names that are used for special
## cases by the file system and would not be
## considered "normal" files.
const pipe_names: set[string] = {
"\\netdfs",
"\\spoolss",
"\\NETLOGON",
"\\winreg",
"\\lsarpc",
"\\samr",
"\\srvsvc",
"srvsvc",
"MsFteWds",
"\\wkssvc",
};
## The UUIDs used by the various RPC endpoints
const rpc_uuids: table[string] of string = {
["4b324fc8-1670-01d3-1278-5a47bf6ee188"] = "Server Service",
["6bffd098-a112-3610-9833-46c3f87e345a"] = "Workstation Service",
} &redef &default=function(i: string):string { return fmt("unknown-uuid-%s", i); };
## Server service sub commands
const srv_cmds: table[count] of string = {
[8] = "NetrConnectionEnum",
[9] = "NetrFileEnum",
[10] = "NetrFileGetInfo",
[11] = "NetrFileClose",
[12] = "NetrSessionEnum",
[13] = "NetrSessionDel",
[14] = "NetrShareAdd",
[15] = "NetrShareEnum",
[16] = "NetrShareGetInfo",
[17] = "NetrShareSetInfo",
[18] = "NetrShareDel",
[19] = "NetrShareDelSticky",
[20] = "NetrShareCheck",
[21] = "NetrServerGetInfo",
[22] = "NetrServerSetInfo",
[23] = "NetrServerDiskEnum",
[24] = "NetrServerStatisticsGet",
[25] = "NetrServerTransportAdd",
[26] = "NetrServerTransportEnum",
[27] = "NetrServerTransportDel",
[28] = "NetrRemoteTOD",
[30] = "NetprPathType",
[31] = "NetprPathCanonicalize",
[32] = "NetprPathCompare",
[33] = "NetprNameValidate",
[34] = "NetprNameCanonicalize",
[35] = "NetprNameCompare",
[36] = "NetrShareEnumSticky",
[37] = "NetrShareDelStart",
[38] = "NetrShareDelCommit",
[39] = "NetrGetFileSecurity",
[40] = "NetrSetFileSecurity",
[41] = "NetrServerTransportAddEx",
[43] = "NetrDfsGetVersion",
[44] = "NetrDfsCreateLocalPartition",
[45] = "NetrDfsDeleteLocalPartition",
[46] = "NetrDfsSetLocalVolumeState",
[48] = "NetrDfsCreateExitPoint",
[49] = "NetrDfsDeleteExitPoint",
[50] = "NetrDfsModifyPrefix",
[51] = "NetrDfsFixLocalVolume",
[52] = "NetrDfsManagerReportSiteInfo",
[53] = "NetrServerTransportDelEx",
[54] = "NetrServerAliasAdd",
[55] = "NetrServerAliasEnum",
[56] = "NetrServerAliasDel",
[57] = "NetrShareDelEx",
} &redef &default=function(i: count):string { return fmt("unknown-srv-command-%d", i); };
## Workstation service sub commands
const wksta_cmds: table[count] of string = {
[0] = "NetrWkstaGetInfo",
[1] = "NetrWkstaSetInfo",
[2] = "NetrWkstaUserEnum",
[5] = "NetrWkstaTransportEnum",
[6] = "NetrWkstaTransportAdd",
[7] = "NetrWkstaTransportDel",
[8] = "NetrUseAdd",
[9] = "NetrUseGetInfo",
[10] = "NetrUseDel",
[11] = "NetrUseEnum",
[13] = "NetrWorkstationStatisticsGet",
[20] = "NetrGetJoinInformation",
[22] = "NetrJoinDomain2",
[23] = "NetrUnjoinDomain2",
[24] = "NetrRenameMachineInDomain2",
[25] = "NetrValidateName2",
[26] = "NetrGetJoinableOUs2",
[27] = "NetrAddAlternateComputerName",
[28] = "NetrRemoveAlternateComputerName",
[29] = "NetrSetPrimaryComputerName",
[30] = "NetrEnumerateComputerNames",
} &redef &default=function(i: count):string { return fmt("unknown-wksta-command-%d", i); };
type rpc_cmd_table: table[count] of string;
## The subcommands for RPC endpoints
const rpc_sub_cmds: table[string] of rpc_cmd_table = {
["4b324fc8-1670-01d3-1278-5a47bf6ee188"] = srv_cmds,
["6bffd098-a112-3610-9833-46c3f87e345a"] = wksta_cmds,
} &redef &default=function(i: string):rpc_cmd_table { return table() &default=function(j: string):string { return fmt("unknown-uuid-%s", j); }; };
}
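# Sketch (not part of the original diff): resolving an operation name from
# a bind UUID and opnum via the nested tables above.
event bro_init()
{
local srv_ops = rpc_sub_cmds["4b324fc8-1670-01d3-1278-5a47bf6ee188"];
print srv_ops[15]; # prints "NetrShareEnum"
}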
module SMB1;
export {
const commands: table[count] of string = {
[0x00] = "CREATE_DIRECTORY",
[0x01] = "DELETE_DIRECTORY",
[0x02] = "OPEN",
[0x03] = "CREATE",
[0x04] = "CLOSE",
[0x05] = "FLUSH",
[0x06] = "DELETE",
[0x07] = "RENAME",
[0x08] = "QUERY_INFORMATION",
[0x09] = "SET_INFORMATION",
[0x0A] = "READ",
[0x0B] = "WRITE",
[0x0C] = "LOCK_BYTE_RANGE",
[0x0D] = "UNLOCK_BYTE_RANGE",
[0x0E] = "CREATE_TEMPORARY",
[0x0F] = "CREATE_NEW",
[0x10] = "CHECK_DIRECTORY",
[0x11] = "PROCESS_EXIT",
[0x12] = "SEEK",
[0x13] = "LOCK_AND_READ",
[0x14] = "WRITE_AND_UNLOCK",
[0x1A] = "READ_RAW",
[0x1B] = "READ_MPX",
[0x1C] = "READ_MPX_SECONDARY",
[0x1D] = "WRITE_RAW",
[0x1E] = "WRITE_MPX",
[0x1F] = "WRITE_MPX_SECONDARY",
[0x20] = "WRITE_COMPLETE",
[0x21] = "QUERY_SERVER",
[0x22] = "SET_INFORMATION2",
[0x23] = "QUERY_INFORMATION2",
[0x24] = "LOCKING_ANDX",
[0x25] = "TRANSACTION",
[0x26] = "TRANSACTION_SECONDARY",
[0x27] = "IOCTL",
[0x28] = "IOCTL_SECONDARY",
[0x29] = "COPY",
[0x2A] = "MOVE",
[0x2B] = "ECHO",
[0x2C] = "WRITE_AND_CLOSE",
[0x2D] = "OPEN_ANDX",
[0x2E] = "READ_ANDX",
[0x2F] = "WRITE_ANDX",
[0x30] = "NEW_FILE_SIZE",
[0x31] = "CLOSE_AND_TREE_DISC",
[0x32] = "TRANSACTION2",
[0x33] = "TRANSACTION2_SECONDARY",
[0x34] = "FIND_CLOSE2",
[0x35] = "FIND_NOTIFY_CLOSE",
[0x70] = "TREE_CONNECT",
[0x71] = "TREE_DISCONNECT",
[0x72] = "NEGOTIATE",
[0x73] = "SESSION_SETUP_ANDX",
[0x74] = "LOGOFF_ANDX",
[0x75] = "TREE_CONNECT_ANDX",
[0x80] = "QUERY_INFORMATION_DISK",
[0x81] = "SEARCH",
[0x82] = "FIND",
[0x83] = "FIND_UNIQUE",
[0x84] = "FIND_CLOSE",
[0xA0] = "NT_TRANSACT",
[0xA1] = "NT_TRANSACT_SECONDARY",
[0xA2] = "NT_CREATE_ANDX",
[0xA4] = "NT_CANCEL",
[0xA5] = "NT_RENAME",
[0xC0] = "OPEN_PRINT_FILE",
[0xC1] = "WRITE_PRINT_FILE",
[0xC2] = "CLOSE_PRINT_FILE",
[0xC3] = "GET_PRINT_QUEUE",
[0xD8] = "READ_BULK",
[0xD9] = "WRITE_BULK",
[0xDA] = "WRITE_BULK_DATA",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const trans2_sub_commands: table[count] of string = {
[0x00] = "OPEN2",
[0x01] = "FIND_FIRST2",
[0x02] = "FIND_NEXT2",
[0x03] = "QUERY_FS_INFORMATION",
[0x04] = "SET_FS_INFORMATION",
[0x05] = "QUERY_PATH_INFORMATION",
[0x06] = "SET_PATH_INFORMATION",
[0x07] = "QUERY_FILE_INFORMATION",
[0x08] = "SET_FILE_INFORMATION",
[0x09] = "FSCTL",
[0x0A] = "IOCTL",
[0x0B] = "FIND_NOTIFY_FIRST",
[0x0C] = "FIND_NOTIFY_NEXT",
[0x0D] = "CREATE_DIRECTORY",
[0x0E] = "SESSION_SETUP",
[0x10] = "GET_DFS_REFERRAL",
[0x11] = "REPORT_DFS_INCONSISTENCY",
} &default=function(i: count):string { return fmt("unknown-trans2-sub-cmd-%d", i); };
const trans_sub_commands: table[count] of string = {
[0x01] = "SET_NMPIPE_STATE",
[0x11] = "RAW_READ_NMPIPE",
[0x21] = "QUERY_NMPIPE_STATE",
[0x22] = "QUERY_NMPIPE_INFO",
[0x23] = "PEEK_NMPIPE",
[0x26] = "TRANSACT_NMPIPE",
[0x31] = "RAW_WRITE_NMPIPE",
[0x36] = "READ_NMPIPE",
[0x37] = "WRITE_NMPIPE",
[0x53] = "WAIT_NMPIPE",
[0x54] = "CALL_NMPIPE",
} &default=function(i: count):string { return fmt("unknown-trans-sub-cmd-%d", i); };
}
module SMB2;
export {
const commands: table[count] of string = {
[0] = "NEGOTIATE_PROTOCOL",
[1] = "SESSION_SETUP",
[2] = "LOGOFF",
[3] = "TREE_CONNECT",
[4] = "TREE_DISCONNECT",
[5] = "CREATE",
[6] = "CLOSE",
[7] = "FLUSH",
[8] = "READ",
[9] = "WRITE",
[10] = "LOCK",
[11] = "IOCTL",
[12] = "CANCEL",
[13] = "ECHO",
[14] = "QUERY_DIRECTORY",
[15] = "CHANGE_NOTIFY",
[16] = "QUERY_INFO",
[17] = "SET_INFO",
[18] = "OPLOCK_BREAK"
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const dialects: table[count] of string = {
[0x0202] = "2.002",
[0x0210] = "2.1",
[0x0300] = "3.0",
[0x0302] = "3.02",
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const share_types: table[count] of string = {
[1] = "DISK",
[2] = "PIPE",
[3] = "PRINT",
} &default=function(i: count): string { return fmt("unknown-%d", i); };
}
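# A quick sanity sketch of the lookup tables above; unknown indices fall
# through to each table's &default instead of raising an error:
event bro_init()
	{
	print SMB1::commands[0x72];   # "NEGOTIATE"
	print SMB2::commands[5];      # "CREATE"
	print SMB2::dialects[0x0311]; # "unknown-785" (0x0311 is not mapped here)
	}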

@@ -1,13 +1,12 @@
@load base/frameworks/notice
@load base/utils/addrs
@load base/utils/directions-and-hosts
@load base/utils/email
module SMTP;
export {
redef enum Log::ID += { LOG };
## The record type which contains the fields of the SMTP log.
type Info: record {
## Time when the message was first seen.
ts: time &log;
@@ -20,9 +19,9 @@ export {
trans_depth: count &log;
## Contents of the Helo header.
helo: string &log &optional;
## Contents of the From header.
## Email addresses found in the From header.
mailfrom: string &log &optional;
## Contents of the Rcpt header.
## Email addresses found in the Rcpt header.
rcptto: set[string] &log &optional;
## Contents of the Date header.
date: string &log &optional;
@@ -100,7 +99,7 @@ event bro_init() &priority=5
}
function find_address_in_smtp_header(header: string): string
{
local ips = extract_ip_addresses(header);
# If more than one IP address is found, return the second.
if ( |ips| > 1 )
@@ -111,7 +110,7 @@ function find_address_in_smtp_header(header: string): string
# Otherwise, there wasn't an IP address found.
else
return "";
}
function new_smtp_log(c: connection): Info
{
@@ -166,7 +165,14 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
{
if ( ! c$smtp?$rcptto )
c$smtp$rcptto = set();
add c$smtp$rcptto[split_string1(arg, /:[[:blank:]]*/)[1]];
local rcptto_addrs = extract_email_addrs_set(arg);
for ( rcptto_addr in rcptto_addrs )
{
rcptto_addr = gsub(rcptto_addr, /ORCPT=rfc822;?/, "");
add c$smtp$rcptto[rcptto_addr];
}
c$smtp$has_client_activity = T;
}
@@ -175,8 +181,9 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
# Flush last message in case we didn't see the server's acknowledgement.
smtp_message(c);
local partially_done = split_string1(arg, /:[[:blank:]]*/)[1];
c$smtp$mailfrom = split_string1(partially_done, /[[:blank:]]?/)[0];
local mailfrom = extract_first_email_addr(arg);
if ( mailfrom != "" )
c$smtp$mailfrom = mailfrom;
c$smtp$has_client_activity = T;
}
}
@@ -237,9 +244,11 @@ event mime_one_header(c: connection, h: mime_header_rec) &priority=5
if ( ! c$smtp?$to )
c$smtp$to = set();
local to_parts = split_string(h$value, /[[:blank:]]*,[[:blank:]]*/);
for ( i in to_parts )
add c$smtp$to[to_parts[i]];
local to_email_addrs = split_mime_email_addresses(h$value);
for ( to_email_addr in to_email_addrs )
{
add c$smtp$to[to_email_addr];
}
}
else if ( h$name == "CC" )
@@ -247,16 +256,16 @@ event mime_one_header(c: connection, h: mime_header_rec) &priority=5
if ( ! c$smtp?$cc )
c$smtp$cc = set();
local cc_parts = split_string(h$value, /[[:blank:]]*,[[:blank:]]*/);
for ( i in cc_parts )
add c$smtp$cc[cc_parts[i]];
local cc_parts = split_mime_email_addresses(h$value);
for ( cc_part in cc_parts )
add c$smtp$cc[cc_part];
}
else if ( h$name == "X-ORIGINATING-IP" )
{
local addresses = extract_ip_addresses(h$value);
if ( 1 in addresses )
c$smtp$x_originating_ip = to_addr(addresses[1]);
if ( 0 in addresses )
c$smtp$x_originating_ip = to_addr(addresses[0]);
}
else if ( h$name == "X-MAILER" ||
@@ -309,9 +318,9 @@ function describe(rec: Info): string
if ( rec?$mailfrom && rec?$rcptto )
{
local one_to = "";
for ( to in rec$rcptto )
for ( email in rec$rcptto )
{
one_to = to;
one_to = email;
break;
}
local abbrev_subject = "";

@@ -87,14 +87,6 @@ event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address
c$socks$bound_p = p;
}
event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=-5
{
# This will handle the case where the analyzer failed in some way and was removed. We probably
# don't want to log these connections.
if ( "SOCKS" in c$service )
Log::write(SOCKS::LOG, c$socks);
}
event socks_login_userpass_request(c: connection, user: string, password: string) &priority=5
{
# Authentication only possible with the version 5.
@@ -112,3 +104,10 @@ event socks_login_userpass_reply(code: count) &priority=5
c$socks$status = v5_status[code];
}
event connection_state_remove(c: connection)
{
# This will handle the case where the analyzer failed in some way and was
# removed. We probably don't want to log these connections.
if ( "SOCKS" in c$service )
Log::write(SOCKS::LOG, c$socks);
}

@@ -57,6 +57,27 @@ export {
[2] = "fatal",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for hash
## algorithms.
const hash_algorithms: table[count] of string = {
[0] = "none",
[1] = "md5",
[2] = "sha1",
[3] = "sha224",
[4] = "sha256",
[5] = "sha384",
[6] = "sha512",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for signature
## algorithms.
const signature_algorithms: table[count] of string = {
[0] = "anonymous",
[1] = "rsa",
[2] = "dsa",
[3] = "ecdsa",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
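## A usage sketch: these two tables are typically combined when rendering a
## TLS SignatureAndHashAlgorithm pair. Assuming they live in the SSL module,
## as in Bro's ssl/consts.bro:
##
##     print fmt("%s with %s", SSL::signature_algorithms[3],
##               SSL::hash_algorithms[4]);  # "ecdsa with sha256"
##     print SSL::hash_algorithms[7];       # falls back to "unknown-7"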
## Mapping between numeric codes and human readable strings for alert
## descriptions.
const alert_descriptions: table[count] of string = {
@@ -542,9 +563,17 @@ export {
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 = 0xC0AE;
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 = 0xC0AF;
# draft-agl-tls-chacha20poly1305-02
const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC13;
const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC14;
const TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC15;
const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256_OLD = 0xCC13;
const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256_OLD = 0xCC14;
const TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256_OLD = 0xCC15;
# RFC 7905
const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA8;
const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA9;
const TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAA;
const TLS_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAB;
const TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAC;
const TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAD;
const TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAE;
const SSL_RSA_FIPS_WITH_DES_CBC_SHA = 0xFEFE;
const SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA = 0xFEFF;
@@ -908,9 +937,16 @@ export {
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM",
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8",
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8",
[TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256_OLD] = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256_OLD",
[TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256_OLD] = "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256_OLD",
[TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256_OLD] = "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256_OLD",
[TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_PSK_WITH_CHACHA20_POLY1305_SHA256",
[TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256",
[TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256",
[TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256",
[SSL_RSA_FIPS_WITH_DES_CBC_SHA] = "SSL_RSA_FIPS_WITH_DES_CBC_SHA",
[SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA] = "SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA",
[SSL_RSA_FIPS_WITH_DES_CBC_SHA_2] = "SSL_RSA_FIPS_WITH_DES_CBC_SHA_2",

@@ -0,0 +1,68 @@
## Extract mail addresses out of address specifications conforming to RFC5322.
##
## str: A string potentially containing email addresses.
##
## Returns: A vector of extracted email addresses. An empty vector is returned
## if no email addresses are discovered.
function extract_email_addrs_vec(str: string): string_vec
{
local addrs: vector of string = vector();
local raw_addrs = find_all(str, /(^|[<,:[:blank:]])[^<,:[:blank:]@]+"@"[^>,;[:blank:]]+([>,;[:blank:]]|$)/);
for ( raw_addr in raw_addrs )
addrs[|addrs|] = gsub(raw_addr, /[<>,:;[:blank:]]/, "");
return addrs;
}
## Extract mail addresses out of address specifications conforming to RFC5322.
##
## str: A string potentially containing email addresses.
##
## Returns: A set of extracted email addresses. An empty set is returned
## if no email addresses are discovered.
function extract_email_addrs_set(str: string): set[string]
{
local addrs: set[string] = set();
local raw_addrs = find_all(str, /(^|[<,:[:blank:]])[^<,:[:blank:]@]+"@"[^>,;[:blank:]]+([>,;[:blank:]]|$)/);
for ( raw_addr in raw_addrs )
add addrs[gsub(raw_addr, /[<>,:;[:blank:]]/, "")];
return addrs;
}
## Extract the first email address from a string.
##
## str: A string potentially containing email addresses.
##
## Returns: An email address, or an empty string if none is found.
function extract_first_email_addr(str: string): string
{
local addrs = extract_email_addrs_vec(str);
if ( |addrs| > 0 )
return addrs[0];
else
return "";
}
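# A short sketch of the three extractors above, on a hypothetical header
# value. Note that find_all returns a set, so the order of elements in the
# vector is not guaranteed:
event bro_init()
	{
	local hdr = "Alice <alice@example.com>, bob@example.org";
	print extract_email_addrs_vec(hdr);  # both bare addresses, order may vary
	print extract_email_addrs_set(hdr);  # the same two addresses, as a set
	print extract_first_email_addr(hdr); # one of them; "" if nothing matched
	}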
## Split email addresses from MIME headers. The email addresses will
## include the display name and email address as given by the mail
## client. Note that this currently does not account for MIME group
## addresses and won't handle them correctly; the group name will show
## up as part of an email address.
##
## line: The argument from a MIME header.
##
## Returns: A set of addresses; an empty set if none are found.
function split_mime_email_addresses(line: string): set[string]
{
local output = string_set();
local addrs = find_all(line, /(\"[^"]*\")?[^,]+/);
for ( part in addrs )
{
add output[strip(part)];
}
return output;
}
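# A sketch of the splitter above, on a hypothetical header value; display
# names are kept, and each comma-separated mailbox becomes one set element:
event bro_init()
	{
	print split_mime_email_addresses("\"Ada L.\" <ada@example.org>, Bob <bob@example.net>");
	}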

@@ -116,7 +116,7 @@ event Input::end_of_data(orig_name: string, source:string)
if ( track_file !in result$files )
result$files[track_file] = vector();
Input::remove(name);
Input::remove(orig_name);
if ( name !in pending_files )
delete pending_commands[name];

@@ -0,0 +1,26 @@
##! Functions to calculate distance between two locations, based on GeoIP data.
## Returns the distance between two IP addresses using the haversine formula,
## based on GeoIP database locations. Requires Bro to be built with libgeoip.
##
## a1: First IP address.
##
## a2: Second IP address.
##
## Returns: The distance between *a1* and *a2* in miles, or -1.0 if GeoIP data
## is not available for either of the IP addresses.
##
## .. bro:see:: haversine_distance lookup_location
function haversine_distance_ip(a1: addr, a2: addr): double
{
local loc1 = lookup_location(a1);
local loc2 = lookup_location(a2);
local miles: double;
if ( loc1?$latitude && loc1?$longitude && loc2?$latitude && loc2?$longitude )
miles = haversine_distance(loc1$latitude, loc1$longitude, loc2$latitude, loc2$longitude);
else
miles = -1.0;
return miles;
}
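# A usage sketch with documentation-prefix addresses; the result depends on
# the GeoIP database installed and is -1.0 when location data is missing:
event bro_init()
	{
	local d = haversine_distance_ip(198.51.100.1, 203.0.113.7);
	if ( d >= 0.0 )
		print fmt("%.1f miles apart", d);
	else
		print "no GeoIP location data for one of the endpoints";
	}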

@@ -22,10 +22,26 @@ event Control::id_value_request(id: string)
event Control::peer_status_request()
{
local status = "";
for ( p in Communication::nodes )
{
local peer = Communication::nodes[p];
if ( ! peer$connected )
next;
status += fmt("%.6f peer=%s host=%s\n",
network_time(), peer$peer$descr, peer$host);
}
event Control::peer_status_response(status);
}
event Control::net_stats_request()
{
local ns = get_net_stats();
local reply = fmt("%.6f recvd=%d dropped=%d link=%d\n", network_time(),
ns$pkts_recvd, ns$pkts_dropped, ns$pkts_link);
event Control::net_stats_response(reply);
}
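# get_net_stats() can also be called outside the Control framework; a minimal
# standalone sketch that reports capture statistics at shutdown:
event bro_done()
	{
	local ns = get_net_stats();
	print fmt("%.6f recvd=%d dropped=%d link=%d", network_time(),
	          ns$pkts_recvd, ns$pkts_dropped, ns$pkts_link);
	}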
event Control::configuration_update_request()

Some files were not shown because too many files have changed in this diff.