diff --git a/CHANGES b/CHANGES
index 5fd012d64e..af063f122d 100644
--- a/CHANGES
+++ b/CHANGES
@@ -1,4 +1,514 @@
+2.4-471 | 2016-04-25 15:37:15 -0700
+
+  * Add DNS tests for huge TTLs and CAA. (Johanna Amann)
+
+ * Add DNS "CAA" RR type and event. (Mark Taylor)
+
+ * Fix DNS response parsing: TTLs are unsigned. (Mark Taylor)
+
+2.4-466 | 2016-04-22 16:25:33 -0700
+
+ * Rename BrokerStore and BrokerComm to Broker. Also split broker main.bro
+ into two scripts. (Daniel Thayer)
+
+ * Add get_current_packet_header bif. (Jan Grashoefer)
+
+2.4-457 | 2016-04-22 08:36:27 -0700
+
+ * Fix Intel framework not checking the CERT_HASH indicator type. (Johanna Amann)
+
+2.4-454 | 2016-04-14 10:06:58 -0400
+
+ * Additional mime types for file identification and a few fixes. (Seth Hall)
+
+ New file mime types:
+ - .ini files
+ - MS Registry policy files
+ - MS Registry files
+ - MS Registry format files (e.g. DESKTOP.DAT)
+ - MS Outlook PST files
+ - Apple AFPInfo files
+
+ Mime type fixes:
+ - MP3 files with ID3 tags.
+ - JSON and XML matchers were extended
+
+ * Avoid a macro name conflict on FreeBSD. (Seth Hall, Daniel Thayer)
+
+2.4-452 | 2016-04-13 01:15:20 -0400
+
+ * Add a simple file entropy analyzer. (Seth Hall)
+
+ * Analyzer and bro script for RFB/VNC protocol (Martin van Hensbergen)
+
+ This analyzer parses the Remote Frame Buffer
+ protocol, usually referred to as the 'VNC protocol'.
+
+ It supports several dialects (3.3, 3.7, 3.8) and
+ also handles the Apple Remote Desktop variant.
+
+ It will log such facts as client/server versions,
+ authentication method used, authentication result,
+ height, width and name of the shared screen.
+
+
+2.4-430 | 2016-04-07 13:36:36 -0700
+
+ * Fix regex literal in scripting documentation. (William Tom)
+
+2.4-428 | 2016-04-07 13:33:08 -0700
+
+ * Confirm protocol in SNMP/SIP only if we saw a response SNMP/SIP
+ packet. (Vlad Grigorescu)
+
+2.4-424 | 2016-03-24 13:38:47 -0700
+
+ * Only load openflow/netcontrol if compiled with broker. (Johanna Amann)
+
+ * Adding canonifier to test. (Robin Sommer)
+
+2.4-422 | 2016-03-21 19:48:30 -0700
+
+ * Adapt to recent change in CAF CMake script. (Matthias Vallentin)
+
+ * Deprecate --with-libcaf in favor of --with-caf, as already done in
+ Broker. (Matthias Vallentin)
+
+2.4-418 | 2016-03-21 12:22:15 -0700
+
+ * Add protocol confirmation to MySQL analyzer. (Vlad Grigorescu)
+
+ * Check that there is only one of &read_expire, &write_expire,
+ &create_expire. (Johanna Amann)
+
+ * Fixed &read_expire for subnet-indexed tables, plus test case. (Jan
+ Grashoefer)
+
+  * Add filter_subnet_table() that works similarly to matching_subnets()
+    but returns a filtered view of the original set/table containing
+    only the matching subnets. (Jan Grashoefer)
+
+  * Fix bug in table values' tracking of read operations. (Johanna
+    Amann)
+
+ * Update TLS constants and extensions from IANA. (Johanna Amann)
+
+2.4-406 | 2016-03-11 14:27:47 -0800
+
+ * Add NetControl and OpenFlow frameworks. (Johanna Amann)
+
+2.4-313 | 2016-03-08 07:47:57 -0800
+
+ * Remove old string functions in C++ code. This removes the
+ functions: strcasecmp_n, strchr_n, and strrchr_n. (Johanna Amann)
+
+2.4-307 | 2016-03-07 13:33:45 -0800
+
+ * Add "disable_analyzer_after_detection" and remove
+ "skip_processing_after_detection". Addresses BIT-1545.
+ (Aaron Eppert & Johanna Amann)
+
+ * Add bad_HTTP_request_with_version weird (William Glodek)
+
+2.4-299 | 2016-03-04 12:51:55 -0800
+
+ * More detailed installation instructions for FreeBSD 9.X. (Johanna Amann)
+
+ * Update CMake OpenSSL checks. (Johanna Amann)
+
+  * "SUBSCRIBE" is a valid SIP message per RFC 3265. Addresses
+    BIT-1529. (Johanna Amann)
+
+ * Update documentation for connection log's RSTR. Addresses BIT-1535
+ (Johanna Amann)
+
+2.4-284 | 2016-02-17 14:12:15 -0800
+
+ * Fix sometimes failing dump-events test. (Johanna Amann)
+
+2.4-282 | 2016-02-13 10:48:21 -0800
+
+  * Add missing break in StartTLS case of IRC analyzer. Found by
+    Aaron Eppert. (Johanna Amann)
+
+2.4-280 | 2016-02-13 10:40:16 -0800
+
+ * Fix memory leaks in stats.cc and smb.cc. (Johanna Amann)
+
+2.4-278 | 2016-02-12 18:53:35 -0800
+
+  * Better multi-space separator handling. (Mark Taylor & Johanna Amann)
+
+2.4-276 | 2016-02-10 21:29:33 -0800
+
+ * Allow IRC commands to not have parameters. (Mark Taylor)
+
+2.4-272 | 2016-02-08 14:27:58 -0800
+
+  * Fix memory leaks in find_all() and IRC analyzer. (Dirk Leinenbach)
+
+2.4-270 | 2016-02-08 13:00:57 -0800
+
+ * Removed duplicate parameter for IRC "QUIT" event handler. (Mark Taylor)
+
+2.4-267 | 2016-02-01 12:38:32 -0800
+
+ * Add testcase for CVE-2015-3194. (Johanna Amann)
+
+ * Fix portability issue with use of mktemp. (Daniel Thayer)
+
+2.4-260 | 2016-01-28 08:05:27 -0800
+
+ * Correct irc_privmsg_message event handling bug. (Mark Taylor)
+
+ * Update copyright year for Sphinx. (Johanna Amann)
+
+2.4-253 | 2016-01-20 17:41:20 -0800
+
+  * Support for RadioTap encapsulation for 802.11. (Seth Hall)
+
+ Radiotap support should be fully functional with Radiotap
+ packets that include IPv4 and IPv6. Other radiotap packets are
+ silently ignored.
+
+2.4-247 | 2016-01-19 10:19:48 -0800
+
+ * Fixing C++11 compiler warnings. (Seth Hall)
+
+ * Updating plugin documentation building. (Johanna Amann)
+
+2.4-238 | 2016-01-15 12:56:33 -0800
+
+ * Add HTTP version information to HTTP log file. (Aaron Eppert)
+
+ * Add NOTIFY as a valid SIP message, per RFC 3265. (Aaron Eppert)
+
+ * Improve HTTP parser's handling of requests that don't have a URI.
+ (William Glodek/Robin Sommer)
+
+  * Fix crash when deleting a non-existing record member. Addresses
+    BIT-1519. (Johanna Amann)
+
+2.4-228 | 2015-12-19 13:40:09 -0800
+
+ * Updating BroControl submodule.
+
+2.4-227 | 2015-12-18 17:47:24 -0800
+
+ * Update host name in windows-version-detection.bro. (Aaron Eppert)
+
+  * Update installation instructions to mention OpenSSL dependency for
+    newer OS X versions. (Johanna Amann)
+
+ * Change a stale bro-ids.org to bro.org. (Johanna Amann)
+
+ * StartTLS support for IRC. (Johanna Amann)
+
+ * Adding usage guard to canonifier script. (Robin Sommer)
+
+2.4-217 | 2015-12-04 16:50:46 -0800
+
+ * SIP scripts code cleanup. (Seth Hall)
+
+ - Daniel Guerra pointed out a type issue for SIP request and
+ response code length fields which is now corrected.
+
+ - Some redundant code was removed.
+
+ - if/else tree modified to use switch instead.
+
+2.4-214 | 2015-12-04 16:40:15 -0800
+
+  * Delaying BinPAC initialization until after plugins have been
+    activated. (Robin Sommer)
+
+2.4-213 | 2015-12-04 15:25:48 -0800
+
+ * Use better data structure for storing BPF filters. (Robin Sommer)
+
+2.4-211 | 2015-11-17 13:28:29 -0800
+
+ * Making cluster reconnect timeout configurable. (Robin Sommer)
+
+ * Bugfix for child process' communication loop. (Robin Sommer)
+
+2.4-209 | 2015-11-16 07:31:22 -0800
+
+ * Updating submodule(s).
+
+2.4-207 | 2015-11-10 13:34:42 -0800
+
+  * Fix to compile with OpenSSL that has SSLv3 disabled. (Christoph
+    Pietsch)
+
+ * Fix potential race condition when logging VLAN info to conn.log.
+ (Daniel Thayer)
+
+2.4-201 | 2015-10-27 16:11:15 -0700
+
+ * Updating NEWS. (Robin Sommer)
+
+2.4-200 | 2015-10-26 16:57:39 -0700
+
+ * Adding missing file. (Robin Sommer)
+
+2.4-199 | 2015-10-26 16:51:47 -0700
+
+ * Fix problem with the JSON Serialization code. (Aaron Eppert)
+
+2.4-188 | 2015-10-26 14:11:21 -0700
+
+ * Extending rexmit_inconsistency() event to receive an additional
+ parameter with the packet's TCP flags, if available. (Robin
+ Sommer)
+
+2.4-187 | 2015-10-26 13:43:32 -0700
+
+ * Updating NEWS for new plugins. (Robin Sommer)
+
+2.4-186 | 2015-10-23 15:07:06 -0700
+
+ * Removing pcap options for AF_PACKET support. Addresses BIT-1363.
+ (Robin Sommer)
+
+ * Correct a typo in controller.bro documentation. (Daniel Thayer)
+
+ * Extend SSL DPD signature to allow alert before server_hello.
+ (Johanna Amann)
+
+ * Make join_string_vec work with vectors containing empty elements.
+ (Johanna Amann)
+
+ * Fix support for HTTP CONNECT when server adds headers to response.
+ (Eric Karasuda).
+
+ * Load static CA list for validation tests too. (Johanna Amann)
+
+ * Remove cluster certificate validation script. (Johanna Amann)
+
+ * Fix a bug in diff-remove-x509-names canonifier. (Daniel Thayer)
+
+ * Fix test canonifiers in scripts/policy/protocols/ssl. (Daniel
+ Thayer)
+
+2.4-169 | 2015-10-01 17:21:21 -0700
+
+ * Fixed parsing of V_ASN1_GENERALIZEDTIME timestamps in x509
+ certificates. (Yun Zheng Hu)
+
+ * Improve X509 end-of-string-check code. (Johanna Amann)
+
+ * Refactor X509 generalizedtime support and test. (Johanna Amann)
+
+ * Fix case of offset=-1 (EOF) for RAW reader. Addresses BIT-1479.
+ (Johanna Amann)
+
+ * Improve a number of test canonifiers. (Daniel Thayer)
+
+ * Remove unnecessary use of TEST_DIFF_CANONIFIER. (Daniel Thayer)
+
+  * Fixed some test canonifiers to read only from stdin.
+
+ * Remove unused test canonifier scripts. (Daniel Thayer)
+
+ * A potpourri of updates and improvements across the documentation.
+ (Daniel Thayer)
+
+ * Add configure option to disable Broker Python bindings. Also
+ improve the configure summary output to more clearly show whether
+ or not Broker Python bindings will be built. (Daniel Thayer)
+
+2.4-131 | 2015-09-11 12:16:39 -0700
+
+ * Add README.rst symlink. Addresses BIT-1413 (Vlad Grigorescu)
+
+2.4-129 | 2015-09-11 11:56:04 -0700
+
+ * hash-all-files.bro depends on base/files/hash (Richard van den Berg)
+
+ * Make dns_max_queries redef-able, and bump default to 25. Addresses
+ BIT-1460 (Vlad Grigorescu)
+
+2.4-125 | 2015-09-03 20:10:36 -0700
+
+  * Move SIP analyzer to flowunit instead of datagram. Addresses
+    BIT-1458 (Vlad Grigorescu)
+
+2.4-122 | 2015-08-31 14:39:41 -0700
+
+ * Add a number of out-of-bound checks to layer 2 code. Addresses
+ BIT-1463 (Johanna Amann)
+
+ * Fix error in 2.4 release notes regarding SSH events. (Robin
+ Sommer)
+
+2.4-118 | 2015-08-31 10:55:29 -0700
+
+ * Fix FreeBSD build errors (Johanna Amann)
+
+2.4-117 | 2015-08-30 22:16:24 -0700
+
+ * Fix initialization of a pointer in RDP analyzer. (Daniel
+ Thayer/Robin Sommer)
+
+2.4-115 | 2015-08-30 21:57:35 -0700
+
+ * Enable Bro to leverage packet fanout mode on Linux. (Kris
+ Nielander).
+
+ ## Toggle whether to do packet fanout (Linux-only).
+ const Pcap::packet_fanout_enable = F &redef;
+
+      ## If packet fanout is enabled, the id to use for it. This should be shared amongst
+ ## worker processes processing the same socket.
+ const Pcap::packet_fanout_id = 0 &redef;
+
+ ## If packet fanout is enabled, whether packets are to be defragmented before
+ ## fanout is applied.
+ const Pcap::packet_fanout_defrag = T &redef;
+
+ * Allow libpcap buffer size to be set via configuration. (Kris Nielander)
+
+ ## Number of Mbytes to provide as buffer space when capturing from live
+ ## interfaces.
+ const Pcap::bufsize = 128 &redef;
+
+ * Move the pcap-related script-level identifiers into the new Pcap
+ namespace. (Robin Sommer)
+
+ snaplen -> Pcap::snaplen
+ precompile_pcap_filter() -> Pcap::precompile_pcap_filter()
+ install_pcap_filter() -> Pcap::install_pcap_filter()
+ pcap_error() -> Pcap::pcap_error()
+
+
+2.4-108 | 2015-08-30 20:14:31 -0700
+
+ * Update Base64 decoding. (Jan Grashoefer)
+
+ - A new built-in function, decode_base64_conn() for Base64
+ decoding. It works like decode_base64() but receives an
+ additional connection argument that will be used for
+ reporting decoding errors into weird.log (instead of
+ reporter.log).
+
+ - FTP, POP3, and HTTP analyzers now likewise log Base64
+ decoding errors to weird.log.
+
+ - The built-in functions decode_base64_custom() and
+ encode_base64_custom() are now deprecated. Their
+ functionality is provided directly by decode_base64() and
+ encode_base64(), which take an optional parameter to change
+ the Base64 alphabet.
+
+ * Fix potential crash if TCP header was captured incompletely.
+ (Robin Sommer)
+
+2.4-103 | 2015-08-29 10:51:55 -0700
+
+ * Make ASN.1 date/time parsing more robust. (Johanna Amann)
+
+ * Be more permissive on what characters we accept as an unquoted
+ multipart boundary. Addresses BIT-1459. (Johanna Amann)
+
+2.4-99 | 2015-08-25 07:56:57 -0700
+
+ * Add ``Q`` and update ``I`` documentation for connection history
+ field. Addresses BIT-1466. (Vlad Grigorescu)
+
+2.4-96 | 2015-08-21 17:37:56 -0700
+
+ * Update SIP analyzer. (balintm)
+
+ - Allows space on both sides of ':'.
+    - Requires CR/LF after request/reply line.
+
+2.4-94 | 2015-08-21 17:31:32 -0700
+
+ * Add file type detection support for video/MP2T. (Mike Freemon)
+
+2.4-93 | 2015-08-21 17:23:39 -0700
+
+ * Make plugin install honor DESTDIR= convention. (Jeff Barber)
+
+2.4-89 | 2015-08-18 07:53:36 -0700
+
+ * Fix diff-canonifier-external to use basename of input file.
+ (Daniel Thayer)
+
+2.4-87 | 2015-08-14 08:34:41 -0700
+
+ * Removing the yielding_teredo_decapsulation option. (Robin Sommer)
+
+2.4-86 | 2015-08-12 17:02:24 -0700
+
+ * Make Teredo DPD signature more precise. (Martina Balint)
+
+2.4-84 | 2015-08-10 14:44:39 -0700
+
+ * Add hook 'HookSetupAnalyzerTree' to allow plugins access to a
+ connection's initial analyzer tree for customization. (James
+ Swaro)
+
+ * Plugins now look for a file "__preload__.bro" in the top-level
+ script directory. If found, they load it first, before any scripts
+ defining BiF elements. This can be used to define types that the
+ BiFs already depend on (like a custom type for an event argument).
+ (Robin Sommer)
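+
+    As an illustration only (the module and type names below are
+    hypothetical, not part of this change), such a script could look
+    like:
+
+        # scripts/__preload__.bro -- loaded before the plugin's BiF scripts.
+        module MyPlugin;
+
+        export {
+            ## A type that events declared in the plugin's .bif files
+            ## can already reference as an argument type.
+            type Info: record {
+                ts: time;
+            };
+        }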
+
+2.4-81 | 2015-08-08 07:38:42 -0700
+
+ * Fix a test that is failing very frequently. (Daniel Thayer)
+
+2.4-78 | 2015-08-06 22:25:19 -0400
+
+  * Remove build dependency on Perl (now requiring Python instead).
+ (Daniel Thayer)
+
+ * CID 1314754: Fixing unreachable code in RSH analyzer. (Robin
+ Sommer)
+
+ * CID 1312752: Add comment to mark 'case' fallthrough as ok. (Robin
+ Sommer)
+
+ * CID 1312751: Removing redundant assignment. (Robin Sommer)
+
+2.4-73 | 2015-07-31 08:53:49 -0700
+
+ * BIT-1429: SMTP logs now include CC: addresses. (Albert Zaharovits)
+
+2.4-70 | 2015-07-30 07:23:44 -0700
+
+ * Updated detection of Flash and AdobeAIR. (Jan Grashoefer)
+
+ * Adding tests for Flash version parsing and browser plugin
+ detection. (Robin Sommer)
+
+2.4-63 | 2015-07-28 12:26:37 -0700
+
+ * Updating submodule(s).
+
+2.4-61 | 2015-07-28 12:13:39 -0700
+
+ * Renaming config.h to bro-config.h. (Robin Sommer)
+
+2.4-58 | 2015-07-24 15:06:07 -0700
+
+ * Add script protocols/conn/vlan-logging.bro to record VLAN data in
+ conn.log. (Aaron Brown)
+
+  * Add fields "vlan" and "inner_vlan" to the connection record. (Aaron
+ Brown)
+
+ * Save the inner vlan in the Packet object for Q-in-Q setups. (Aaron
+ Brown)
+
+ * Increasing plugin API version for recent packet source changes.
+ (Robin Sommer)
+
+ * Slightly earlier protocol confirmation for POP3. (Johanna Amann)
+
2.4-46 | 2015-07-22 10:56:40 -0500
* Fix broker python bindings install location to track --prefix.
@@ -1550,21 +2060,21 @@
2.3-beta-18 | 2014-06-06 13:11:50 -0700
* Add two more SSL events, one triggered for each handshake message
- and one triggered for the tls change cipherspec message. (Bernhard
+ and one triggered for the tls change cipherspec message. (Johanna
Amann)
* Small SSL bug fix. In case SSL::disable_analyzer_after_detection
was set to false, the ssl_established event would fire after each
- data packet once the session is established. (Bernhard Amann)
+ data packet once the session is established. (Johanna Amann)
2.3-beta-16 | 2014-06-06 13:05:44 -0700
* Re-activate notice suppression for expiring certificates.
- (Bernhard Amann)
+ (Johanna Amann)
2.3-beta-14 | 2014-06-05 14:43:33 -0700
- * Add new TLS extension type numbers from IANA (Bernhard Amann)
+ * Add new TLS extension type numbers from IANA (Johanna Amann)
* Switch to double hashing for Bloomfilters for better performance.
(Matthias Vallentin)
@@ -1574,7 +2084,7 @@
(Matthias Vallentin)
* Make buffer for X509 certificate subjects larger. Addresses
- BIT-1195 (Bernhard Amann)
+ BIT-1195 (Johanna Amann)
2.3-beta-5 | 2014-05-29 15:34:42 -0500
@@ -1596,19 +2106,19 @@
* Release 2.3-beta
- * Clean up OpenSSL data structures on exit. (Bernhard Amann)
+ * Clean up OpenSSL data structures on exit. (Johanna Amann)
- * Fixes for OCSP & x509 analysis memory leak issues. (Bernhard Amann)
+ * Fixes for OCSP & x509 analysis memory leak issues. (Johanna Amann)
* Remove remaining references to BROMAGIC (Daniel Thayer)
* Fix typos and formatting in event and BiF documentation (Daniel Thayer)
* Update intel framework plugin for ssl server_name extension API
- changes. (Bernhard Amann, Justin Azoff)
+ changes. (Johanna Amann, Justin Azoff)
* Fix expression errors in SSL/x509 scripts when unparseable data
- is in certificate chain. (Bernhard Amann)
+ is in certificate chain. (Johanna Amann)
2.2-478 | 2014-05-19 15:31:33 -0500
@@ -1617,7 +2127,7 @@
2.2-477 | 2014-05-19 14:13:00 -0500
- * Fix X509::Result record's "result" field to be set internally as type int instead of type count. (Bernhard Amann)
+ * Fix X509::Result record's "result" field to be set internally as type int instead of type count. (Johanna Amann)
* Fix a couple of doc build warnings (Daniel Thayer)
@@ -1635,19 +2145,19 @@
* New script policy/protocols/ssl/validate-ocsp.bro that adds OSCP
validation to ssl.log. The work is done by a new bif
- x509_ocsp_verify(). (Bernhard Amann)
+ x509_ocsp_verify(). (Johanna Amann)
* STARTTLS support for POP3 and SMTP. The SSL analyzer takes over
when seen. smtp.log now logs when a connection switches to SSL.
- (Bernhard Amann)
+ (Johanna Amann)
- * Replace errors when parsing x509 certs with weirds. (Bernhard
+ * Replace errors when parsing x509 certs with weirds. (Johanna
Amann)
- * Improved Heartbleed attack/scan detection. (Bernhard Amann)
+ * Improved Heartbleed attack/scan detection. (Johanna Amann)
* Let TLS analyzer fail better when no longer in sync with the data
- stream. (Bernhard Amann)
+ stream. (Johanna Amann)
2.2-444 | 2014-05-16 14:10:32 -0500
@@ -1666,7 +2176,7 @@
2.2-427 | 2014-05-15 13:37:23 -0400
- * Fix dynamic SumStats update on clusters (Bernhard Amann)
+ * Fix dynamic SumStats update on clusters (Johanna Amann)
2.2-425 | 2014-05-08 16:34:44 -0700
@@ -1718,11 +2228,11 @@
* Add DH support to SSL analyzer. When using DHE or DH-Anon, sever
key parameters are now available in scriptland. Also add script to
- alert on weak certificate keys or weak dh-params. (Bernhard Amann)
+ alert on weak certificate keys or weak dh-params. (Johanna Amann)
- * Add a few more ciphers Bro did not know at all so far. (Bernhard Amann)
+ * Add a few more ciphers Bro did not know at all so far. (Johanna Amann)
- * Log chosen curve when using ec cipher suite in TLS. (Bernhard Amann)
+ * Log chosen curve when using ec cipher suite in TLS. (Johanna Amann)
2.2-397 | 2014-05-01 20:29:20 -0700
@@ -1734,7 +2244,7 @@
(Jon Siwek)
* Correct a notice for heartbleed. The notice is thrown correctly,
- just the message conteined wrong values. (Bernhard Amann)
+      just the message contained wrong values. (Johanna Amann)
* Improve/standardize some malloc/realloc return value checks. (Jon
Siwek)
@@ -1761,7 +2271,7 @@
2.2-377 | 2014-04-24 16:57:54 -0700
* A larger set of SSL improvements and extensions. Addresses
- BIT-1178. (Bernhard Amann)
+ BIT-1178. (Johanna Amann)
- Fixes TLS protocol version detection. It also should
bail-out correctly on non-tls-connections now
@@ -1822,9 +2332,9 @@
2.2-335 | 2014-04-10 15:04:57 -0700
- * Small logic fix for main SSL script. (Bernhard Amann)
+ * Small logic fix for main SSL script. (Johanna Amann)
- * Update DPD signatures for detecting TLS 1.2. (Bernhard Amann)
+ * Update DPD signatures for detecting TLS 1.2. (Johanna Amann)
* Remove unused data member of SMTP_Analyzer to silence a Coverity
warning. (Jon Siwek)
@@ -1853,7 +2363,7 @@
2.2-315 | 2014-04-01 16:50:01 -0700
* Change logging's "#types" description of sets to "set". Addresses
- BIT-1163 (Bernhard Amann)
+ BIT-1163 (Johanna Amann)
2.2-313 | 2014-04-01 16:40:19 -0700
@@ -1868,7 +2378,7 @@
(Jon Siwek)
* Fix potential memory leak in x509 parser reported by Coverity.
- (Bernhard Amann)
+ (Johanna Amann)
2.2-304 | 2014-03-30 23:05:54 +0200
@@ -1939,7 +2449,7 @@
from the certificates (e.g. elliptic curve information, subject
alternative names, basic constraints). Certificate validation also
was improved, should be easier to use and exposes information like
- the full verified certificate chain. (Bernhard Amann)
+ the full verified certificate chain. (Johanna Amann)
This update changes the format of ssl.log and adds a new x509.log
with certificate information. Furthermore all x509 events and
@@ -1977,7 +2487,7 @@
2.2-256 | 2014-03-30 19:57:28 +0200
* For the summary statistics framewirk, change all &create_expire
- attributes to &read_expire in the cluster part. (Bernhard Amann)
+ attributes to &read_expire in the cluster part. (Johanna Amann)
2.2-254 | 2014-03-30 19:55:22 +0200
@@ -2001,7 +2511,7 @@
2.2-244 | 2014-03-17 08:24:17 -0700
* Fix compile errror on FreeBSD caused by wrong include file order.
- (Bernhard Amann)
+ (Johanna Amann)
2.2-240 | 2014-03-14 10:23:54 -0700
@@ -2097,7 +2607,7 @@
* Improve SSL logging so that connections are logged even when the
ssl_established event is not generated as well as other small SSL
- fixes. (Bernhard Amann)
+ fixes. (Johanna Amann)
2.2-206 | 2014-03-03 16:52:28 -0800
@@ -2114,7 +2624,7 @@
* Allow iterating over bif functions with result type vector of any.
This changes the internal type that is used to signal that a
vector is unspecified from any to void. Addresses BIT-1144
- (Bernhard Amann)
+ (Johanna Amann)
2.2-197 | 2014-02-28 15:36:58 -0800
@@ -2122,37 +2632,37 @@
2.2-194 | 2014-02-28 14:50:53 -0800
- * Remove packet sorter. Addresses BIT-700. (Bernhard Amann)
+ * Remove packet sorter. Addresses BIT-700. (Johanna Amann)
2.2-192 | 2014-02-28 09:46:43 -0800
- * Update Mozilla root bundle. (Bernhard Amann)
+ * Update Mozilla root bundle. (Johanna Amann)
2.2-190 | 2014-02-27 07:34:44 -0800
- * Adjust timings of a few leak tests. (Bernhard Amann)
+ * Adjust timings of a few leak tests. (Johanna Amann)
2.2-187 | 2014-02-25 07:24:42 -0800
- * More Google TLS extensions that are being actively used. (Bernhard
+  * More Google TLS extensions that are being actively used. (Johanna
Amann)
* Remove unused, and potentially unsafe, function
- ListVal::IncludedInString. (Bernhard Amann)
+ ListVal::IncludedInString. (Johanna Amann)
2.2-184 | 2014-02-24 07:28:18 -0800
* New TLS constants from
https://tools.ietf.org/html/draft-bmoeller-tls-downgrade-scsv-01.
- (Bernhard Amann)
+ (Johanna Amann)
2.2-180 | 2014-02-20 17:29:14 -0800
* New SSL alert descriptions from
https://tools.ietf.org/html/draft-ietf-tls-applayerprotoneg-04.
- (Bernhard Amann)
+ (Johanna Amann)
- * Update SQLite. (Bernhard Amann)
+ * Update SQLite. (Johanna Amann)
2.2-177 | 2014-02-20 17:27:46 -0800
@@ -2183,7 +2693,7 @@
'modbus_read_fifo_queue_response' event handler. (Jon Siwek)
* Add channel_id TLS extension number. This number is not IANA
- defined, but we see it being actively used. (Bernhard Amann)
+ defined, but we see it being actively used. (Johanna Amann)
* Test baseline updates for DNS change. (Robin Sommer)
@@ -2225,7 +2735,7 @@
2.2-147 | 2014-02-07 08:06:53 -0800
- * Fix x509-extension test sometimes failing. (Bernhard Amann)
+ * Fix x509-extension test sometimes failing. (Johanna Amann)
2.2-144 | 2014-02-06 20:31:18 -0800
@@ -2261,7 +2771,7 @@
2.2-128 | 2014-01-30 15:58:47 -0800
- * Add leak test for Exec module. (Bernhard Amann)
+ * Add leak test for Exec module. (Johanna Amann)
* Fix file_over_new_connection event to trigger when entire file is
missed. (Jon Siwek)
@@ -2279,7 +2789,7 @@
2.2-120 | 2014-01-28 10:25:23 -0800
* Fix and extend x509_extension() event, which now actually returns
- the extension. (Bernhard Amann)
+ the extension. (Johanna Amann)
New event signauture:
@@ -2394,7 +2904,7 @@
* Several improvements to input framework error handling for more
robustness and more helpful error messages. Includes tests for
- many cases. (Bernhard Amann)
+ many cases. (Johanna Amann)
2.2-66 | 2013-12-09 13:54:16 -0800
@@ -2420,7 +2930,7 @@
* Fix memory leak in input framework. If the input framework was
used to read event streams and those streams contained records
with more than one field, not all elements of the threading Values
- were cleaned up. Addresses BIT-1103. (Bernhard Amann)
+ were cleaned up. Addresses BIT-1103. (Johanna Amann)
* Minor Broxygen improvements. Addresses BIT-1098. (Jon Siwek)
@@ -2464,7 +2974,7 @@
2.2-40 | 2013-12-04 12:16:38 -0800
* ssl_client_hello() now receives a vector of ciphers, instead of a
- set, to preserve their order. (Bernhard Amann)
+ set, to preserve their order. (Johanna Amann)
2.2-38 | 2013-12-04 12:10:54 -0800
@@ -2601,13 +3111,13 @@
2.2-beta-157 | 2013-10-25 11:11:17 -0700
* Extend the documentation of the SQLite reader/writer framework.
- (Bernhard Amann)
+ (Johanna Amann)
* Fix inclusion of wrong example file in scripting tutorial.
- Reported by Michael Auger @LM4K. (Bernhard Amann)
+ Reported by Michael Auger @LM4K. (Johanna Amann)
* Alternative fix for the thrading deadlock issue to avoid potential
- performance impact. (Bernhard Amann)
+ performance impact. (Johanna Amann)
2.2-beta-152 | 2013-10-24 18:16:49 -0700
@@ -2620,7 +3130,7 @@
2.2-beta-150 | 2013-10-24 16:32:14 -0700
* Change temporary ASCII reader workaround for getline() on
- Mavericks to permanent fix. (Bernhard Amann)
+ Mavericks to permanent fix. (Johanna Amann)
2.2-beta-148 | 2013-10-24 14:34:35 -0700
@@ -2634,7 +3144,7 @@
* Intel framework notes added to NEWS. (Seth Hall)
* Temporary OSX Mavericks libc++ issue workaround for getline()
- problem in ASCII reader. (Bernhard Amann)
+ problem in ASCII reader. (Johanna Amann)
* Change test of identify_data BIF to ignore charset as it may vary
with libmagic version. (Jon Siwek)
@@ -2677,16 +3187,16 @@
2.2-beta-80 | 2013-10-18 13:18:05 -0700
- * SQLite reader/writer documentation. (Bernhard Amann)
+ * SQLite reader/writer documentation. (Johanna Amann)
* Check that the SQLite reader is only used in MANUAL reading mode.
- (Bernhard Amann)
+ (Johanna Amann)
* Rename the SQLite writer "dbname" configuration option to
- "tablename". (Bernhard Amann)
+ "tablename". (Johanna Amann)
* Remove the "dbname" configuration option from the SQLite reader as
- it wasn't used there. (Bernhard Amann)
+ it wasn't used there. (Johanna Amann)
2.2-beta-73 | 2013-10-14 14:28:25 -0700
@@ -2718,9 +3228,9 @@
2.2-beta-55 | 2013-10-10 13:36:38 -0700
- * A couple of new TLS extension numbers. (Bernhard Amann)
+ * A couple of new TLS extension numbers. (Johanna Amann)
- * Suport for three more new TLS ciphers. (Bernhard Amann)
+  * Support for three more new TLS ciphers. (Johanna Amann)
* Removing ICSI notary from default site config. (Robin Sommer)
@@ -2765,7 +3275,7 @@
2.2-beta-18 | 2013-10-02 10:28:17 -0700
- * Add support for further TLS cipher suites. (Bernhard Amann)
+ * Add support for further TLS cipher suites. (Johanna Amann)
2.2-beta-13 | 2013-10-01 11:31:55 -0700
@@ -2815,7 +3325,7 @@
* Add links to Intelligence Framework documentation. (Daniel Thayer)
- * Update Mozilla root CA list. (Bernhard Amann, Jon Siwek)
+ * Update Mozilla root CA list. (Johanna Amann, Jon Siwek)
* Update documentation of required packages. (Daniel Thayer)
@@ -2826,10 +3336,10 @@
2.1-1357 | 2013-09-18 14:58:52 -0700
- * Update HLL API and its documentation. (Bernhard Amann)
+ * Update HLL API and its documentation. (Johanna Amann)
* Fix case in HLL where hll_error_margin could be undefined.
- (Bernhard Amann)
+ (Johanna Amann)
2.1-1352 | 2013-09-18 14:42:28 -0700
@@ -2890,7 +3400,7 @@
* Support for probabilistic set cardinality, using the HyperLogLog
- algorithm. (Bernhard Amann, Soumya Basu)
+ algorithm. (Johanna Amann, Soumya Basu)
Bro now provides the following BiFs:
@@ -2929,7 +3439,7 @@
2.1-1137 | 2013-08-27 13:26:44 -0700
* Add BiF hexstr_to_bytestring() that does exactly the opposite of
- bytestring_to_hexstr(). (Bernhard Amann)
+ bytestring_to_hexstr(). (Johanna Amann)
2.1-1135 | 2013-08-27 12:16:26 -0700
@@ -3001,7 +3511,7 @@
2.1-1078 | 2013-08-19 09:29:30 -0700
- * Moving sqlite code into new external 3rdparty submodule. (Bernhard
+  * Moving sqlite code into new external 3rdparty submodule. (Johanna
Amann)
2.1-1074 | 2013-08-14 10:29:54 -0700
@@ -3101,12 +3611,12 @@
2.1-1007 | 2013-08-01 15:41:54 -0700
- * More function documentation. (Bernhard Amann)
+ * More function documentation. (Johanna Amann)
2.1-1004 | 2013-08-01 14:37:43 -0700
* Adding a probabilistic data structure for computing "top k"
- elements. (Bernhard Amann)
+ elements. (Johanna Amann)
The corresponding functions are:
@@ -3140,7 +3650,7 @@
2.1-948 | 2013-07-31 20:08:28 -0700
* Fix segfault caused by merging an empty bloom-filter with a
- bloom-filter already containing values. (Bernhard Amann)
+ bloom-filter already containing values. (Johanna Amann)
2.1-945 | 2013-07-30 10:05:10 -0700
@@ -3280,12 +3790,12 @@
2.1-814 | 2013-07-15 18:18:20 -0700
* Fixing raw reader crash when accessing nonexistant file, and
- memory leak when reading from file. Addresses #1038. (Bernhard
+ memory leak when reading from file. Addresses #1038. (Johanna
Amann)
2.1-811 | 2013-07-14 08:01:54 -0700
- * Bump sqlite to 3.7.17. (Bernhard Amann)
+ * Bump sqlite to 3.7.17. (Johanna Amann)
* Small test fixes. (Seth Hall)
@@ -3335,7 +3845,7 @@
2.1-780 | 2013-07-03 16:46:26 -0700
* Rewrite of the RAW input reader for improved robustness and new
- features. (Bernhard Amann) This includes:
+ features. (Johanna Amann) This includes:
- Send "end_of_data" event for all kind of streams.
- Send "process_finished" event with exit code of child
@@ -3464,12 +3974,12 @@
2.1-656 | 2013-05-17 15:58:07 -0700
- * Fix mutex lock problem for writers. (Bernhard Amann)
+ * Fix mutex lock problem for writers. (Johanna Amann)
2.1-654 | 2013-05-17 13:49:52 -0700
* Tweaks to sqlite3 configuration to address threading issues.
- (Bernhard Amann)
+ (Johanna Amann)
2.1-651 | 2013-05-17 13:37:16 -0700
@@ -3495,7 +4005,7 @@
2.1-640 | 2013-05-15 17:24:09 -0700
- * Support for cleaning up threads that have terminated. (Bernhard
+ * Support for cleaning up threads that have terminated. (Johanna
Amann and Robin Sommer). Includes:
- Both logging and input frameworks now clean up threads once
@@ -3512,14 +4022,14 @@
2.1-626 | 2013-05-15 16:09:31 -0700
* Add "reservoir" sampler for SumStats framework. This maintains
- a set of N uniquely distributed random samples. (Bernhard Amann)
+ a set of N uniquely distributed random samples. (Johanna Amann)
2.1-619 | 2013-05-15 16:01:42 -0700
* SQLite reader and writer combo. This allows to read/write
persistent data from on disk SQLite databases. The current
interface is quite low-level, we'll add higher-level abstractions
- in the future. (Bernhard Amann)
+ in the future. (Johanna Amann)
2.1-576 | 2013-05-15 14:29:09 -0700
@@ -3540,7 +4050,7 @@
2.1-500 | 2013-05-10 19:22:24 -0700
* Fix to prevent merge-hook of SumStat's unique plugin from damaging
- source data. (Bernhard Amann)
+ source data. (Johanna Amann)
2.1-498 | 2013-05-03 17:44:08 -0700
@@ -3556,7 +4066,7 @@
2.1-492 | 2013-05-02 12:46:26 -0700
* Work-around for sumstats framework not propagating updates after
- intermediate check in cluster environments. (Bernhard Amann)
+ intermediate check in cluster environments. (Johanna Amann)
* Always apply tcp_connection_attempt. Before this change it was
only applied when a connection_attempt() event handler was
@@ -3611,7 +4121,7 @@
2.1-380 | 2013-03-18 12:18:10 -0700
* Fix gcc compile warnings in base64 encoder and benchmark reader.
- (Bernhard Amann)
+ (Johanna Amann)
2.1-377 | 2013-03-17 17:36:09 -0700
@@ -3620,10 +4130,10 @@
2.1-375 | 2013-03-17 13:14:26 -0700
* Add base64 encoding functionality, including new BiFs
- encode_base64() and encode_base64_custom(). (Bernhard Amann)
+ encode_base64() and encode_base64_custom(). (Johanna Amann)
* Replace call to external "openssl" in extract-certs-pem.bro with
- that encode_base64(). (Bernhard Amann)
+ that encode_base64(). (Johanna Amann)
* Adding a test for extract-certs-pem.pem. (Robin Sommer)
@@ -3657,7 +4167,7 @@
2.1-357 | 2013-03-08 09:18:35 -0800
- * Fix race-condition in table-event test. (Bernhard Amann)
+ * Fix race-condition in table-event test. (Johanna Amann)
* s/bro-ids.org/bro.org/g. (Robin Sommer)
@@ -3674,9 +4184,9 @@
2.1-347 | 2013-03-06 16:48:44 -0800
- * Remove unused parameter from vector assignment method. (Bernhard Amann)
+ * Remove unused parameter from vector assignment method. (Johanna Amann)
- * Remove the byte_len() and length() bifs. (Bernhard Amann)
+ * Remove the byte_len() and length() bifs. (Johanna Amann)
2.1-342 | 2013-03-06 15:42:52 -0800
@@ -3728,7 +4238,7 @@
2.1-319 | 2013-02-04 09:45:34 -0800
- * Update input tests to use exit_only_after_terminate. (Bernhard
+ * Update input tests to use exit_only_after_terminate. (Johanna
Amann)
* New option exit_only_after_terminate to prevent Bro from exiting.
@@ -3760,7 +4270,7 @@
2.1-302 | 2013-01-23 16:17:29 -0800
* Refactoring ASCII formatting/parsing from loggers/readers into a
- separate AsciiFormatter class. (Bernhard Amann)
+ separate AsciiFormatter class. (Johanna Amann)
* Fix uninitialized locals in event/hook handlers from having a
value. Addresses #932. (Jon Siwek)
@@ -3791,7 +4301,7 @@
* Removing unused class member. (Robin Sommer)
* Add opaque type-ignoring for the accept_unsupported_types input
- framework option. (Bernhard Amann)
+ framework option. (Johanna Amann)
2.1-271 | 2013-01-08 10:18:57 -0800
@@ -3872,7 +4382,7 @@
2.1-229 | 2012-12-14 14:46:12 -0800
* Fix memory leak in ASCII reader when encoutering errors in input.
- (Bernhard Amann)
+ (Johanna Amann)
* Improvements for the "bad checksums" detector to make it detect
bad TCP checksums. (Seth Hall)
@@ -3943,7 +4453,7 @@
yet. Addresses #66. (Jon Siwek)
* Fix segfault: Delete correct entry in error case in input
- framework. (Bernhard Amann)
+ framework. (Johanna Amann)
* Bad record constructor initializers now give an error. Addresses
#34. (Jon Siwek)
@@ -4201,7 +4711,7 @@
* Rename the Input Framework's update_finished event to end_of_data.
It will now not only fire after table-reads have been completed,
but also after the last event of a whole-file-read (or
- whole-db-read, etc.). (Bernhard Amann)
+ whole-db-read, etc.). (Johanna Amann)
* Fix for DNS log problem when a DNS response is seen with 0 RRs.
(Seth Hall)
@@ -4216,7 +4726,7 @@
2.1-61 | 2012-10-12 09:32:48 -0700
* Fix bug in the input framework: the config table did not work.
- (Bernhard Amann)
+ (Johanna Amann)
2.1-58 | 2012-10-08 10:10:09 -0700
@@ -4251,7 +4761,7 @@
* Fix for the input framework: BroStrings were constructed without a
final \0, which makes them unusable by basically all internal
- functions (like to_count). (Bernhard Amann)
+ functions (like to_count). (Johanna Amann)
* Remove deprecated script functionality (see NEWS for details).
(Daniel Thayer)
@@ -4303,7 +4813,7 @@
* Small change to non-blocking DNS initialization. (Jon Siwek)
* Reorder a few statements in scan.l to make 1.5msecs etc work.
- Adresses #872. (Bernhard Amann)
+    Addresses #872. (Johanna Amann)
2.1-6 | 2012-09-06 23:23:14 -0700
@@ -4332,11 +4842,11 @@
* Fix uninitialized value for 'is_partial' in TCP analyzer. (Jon
Siwek)
- * Parse 64-bit consts in Bro scripts correctly. (Bernhard Amann)
+ * Parse 64-bit consts in Bro scripts correctly. (Johanna Amann)
- * Output 64-bit counts correctly on 32-bit machines (Bernhard Amann)
+ * Output 64-bit counts correctly on 32-bit machines (Johanna Amann)
- * Input framework fixes, including: (Bernhard Amann)
+ * Input framework fixes, including: (Johanna Amann)
- One of the change events got the wrong parameters.
@@ -4377,7 +4887,7 @@
2.1-beta-45 | 2012-08-22 16:11:10 -0700
* Add an option to the input framework that allows the user to chose
- to not die upon encountering files/functions. (Bernhard Amann)
+ to not die upon encountering files/functions. (Johanna Amann)
2.1-beta-41 | 2012-08-22 16:05:21 -0700
@@ -4396,7 +4906,7 @@
2.1-beta-35 | 2012-08-22 08:44:52 -0700
* Add testcase for input framework reading sets (rather than
- tables). (Bernhard Amann)
+ tables). (Johanna Amann)
2.1-beta-31 | 2012-08-21 15:46:05 -0700
@@ -4455,9 +4965,9 @@
2.1-beta-6 | 2012-08-10 12:22:52 -0700
- * Fix bug in input framework with an edge case. (Bernhard Amann)
+ * Fix bug in input framework with an edge case. (Johanna Amann)
- * Fix small bug in input framework test script. (Bernhard Amann)
+ * Fix small bug in input framework test script. (Johanna Amann)
2.1-beta-3 | 2012-08-03 10:46:49 -0700
@@ -4506,13 +5016,13 @@
writers that don't have a postprocessor. (Seth Hall)
* Update input framework documentation to reflect want_record
- change. (Bernhard Amann)
+ change. (Johanna Amann)
* Fix crash when encountering an InterpreterException in a predicate
- in logging or input Framework. (Bernhard Amann)
+ in logging or input Framework. (Johanna Amann)
* Input framework: Make want_record=T the default for events
- (Bernhard Amann)
+ (Johanna Amann)
* Changing the start/end markers in logs to open/close now
reflecting wall clock. (Robin Sommer)
@@ -4534,10 +5044,10 @@
* Add comprehensive error handling for close() calls. (Jon Siwek)
- * Add more test cases for input framework. (Bernhard Amann)
+ * Add more test cases for input framework. (Johanna Amann)
* Input framework: make error output for non-matching event types
- much more verbose. (Bernhard Amann)
+ much more verbose. (Johanna Amann)
2.0-877 | 2012-07-25 17:20:34 -0700
@@ -4577,12 +5087,12 @@
* Fix initialization problem in logging class. (Jon Siwek)
* Input framework now accepts escaped ASCII values as input (\x##),
- and unescapes appropiately. (Bernhard Amann)
+    and unescapes appropriately. (Johanna Amann)
* Make reading ASCII logfiles work when the input separator is
- different from \t. (Bernhard Amann)
+ different from \t. (Johanna Amann)
- * A number of smaller fixes for input framework. (Bernhard Amann)
+ * A number of smaller fixes for input framework. (Johanna Amann)
2.0-851 | 2012-07-24 15:04:14 -0700
@@ -4602,7 +5112,7 @@
* Reworking parts of the internal threading/logging/input APIs for
thread-safety. (Robin Sommer)
- * Bugfix for SSL version check. (Bernhard Amann)
+ * Bugfix for SSL version check. (Johanna Amann)
* Changing a HTTP DPD from port 3138 to 3128. Addresses #857. (Robin
Sommer)
@@ -4622,7 +5132,7 @@
#763. (Robin Sommer)
* Fix bug, where in dns.log rcode always was set to 0/NOERROR when
- no reply package was seen. (Bernhard Amann)
+ no reply package was seen. (Johanna Amann)
* Updating to Mozilla's current certificate bundle. (Seth Hall)
@@ -4638,7 +5148,7 @@
* Remove baselines for some leak-detecting unit tests. (Jon Siwek)
* Unblock SIGFPE, SIGILL, SIGSEGV and SIGBUS for threads, so that
- they now propagate to the main thread. Adresses #848. (Bernhard
+    they now propagate to the main thread. Addresses #848. (Johanna
Amann)
2.0-761 | 2012-07-12 08:14:38 -0700
@@ -4646,7 +5156,7 @@
* Some small fixes to further reduce SOCKS false positive logs. (Seth Hall)
* Calls to pthread_mutex_unlock now log the reason for failures.
- (Bernhard Amann)
+ (Johanna Amann)
2.0-757 | 2012-07-11 08:30:19 -0700
@@ -4677,11 +5187,11 @@
2.0-733 | 2012-07-02 15:31:24 -0700
- * Extending the input reader DoInit() API. (Bernhard Amann). It now
+ * Extending the input reader DoInit() API. (Johanna Amann). It now
provides a Info struct similar to what we introduced for log
writers, including a corresponding "config" key/value table.
- * Fix to make writer-info work when debugging is enabled. (Bernhard
+ * Fix to make writer-info work when debugging is enabled. (Johanna
Amann)
2.0-726 | 2012-07-02 15:19:15 -0700
@@ -4720,7 +5230,7 @@
* Set input frontend type before starting the thread. This means
that the thread type will be output correctly in the error
- message. (Bernhard Amann)
+ message. (Johanna Amann)
2.0-719 | 2012-07-02 14:49:03 -0700
@@ -4809,7 +5319,7 @@
2.0-622 | 2012-06-15 15:38:43 -0700
- * Input framework updates. (Bernhard Amann)
+ * Input framework updates. (Johanna Amann)
- Disable streaming reads from executed commands. This lead to
hanging Bros because pclose apparently can wait for eternity if
@@ -4888,7 +5398,7 @@
* A new input framework enables scripts to read in external data
dynamically on the fly as Bro is processing network traffic.
- (Bernhard Amann)
+ (Johanna Amann)
Currently, the framework supports reading ASCII input that's
structured similar as Bro's log files as well as raw blobs of
@@ -5055,7 +5565,7 @@
2.0-315 | 2012-05-03 11:44:17 -0700
* Add two more TLS extension values that we see in live traffic.
- (Bernhard Amann)
+ (Johanna Amann)
* Fixed IPv6 link local unicast CIDR and added IPv6 loopback to
private address space. (Seth Hall)
@@ -5443,7 +5953,7 @@
2.0-41 | 2012-02-03 04:10:53 -0500
- * Updates to the Software framework to simplify the API. (Bernhard
+ * Updates to the Software framework to simplify the API. (Johanna
Amann)
2.0-40 | 2012-02-03 01:55:27 -0800
@@ -5586,7 +6096,7 @@
2.0-beta-152 | 2012-01-03 14:51:34 -0800
- * Notices now record the transport-layer protocol. (Bernhard Amann)
+ * Notices now record the transport-layer protocol. (Johanna Amann)
2.0-beta-150 | 2012-01-03 14:42:45 -0800
@@ -5613,7 +6123,7 @@
assignments. Addresses #722. (Jon Siwek)
* Make log headers include the type of data stored inside a set or
- vector ("vector[string]"). (Bernhard Amann)
+ vector ("vector[string]"). (Johanna Amann)
2.0-beta-126 | 2011-12-18 15:18:05 -0800
@@ -5750,11 +6260,11 @@
* Fix order of include directories. (Jon Siwek)
* Catch if logged vectors do not contain only atomic types.
- (Bernhard Amann)
+ (Johanna Amann)
2.0-beta-47 | 2011-11-16 08:24:33 -0800
- * Catch if logged sets do not contain only atomic types. (Bernhard
+ * Catch if logged sets do not contain only atomic types. (Johanna
Amann)
* Promote libz and libmagic to required dependencies. (Jon Siwek)
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 30e1a4a545..374af64a18 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -61,7 +61,7 @@ if (NOT SED_EXE)
endif ()
endif ()
-FindRequiredPackage(Perl)
+FindRequiredPackage(PythonInterp)
FindRequiredPackage(FLEX)
FindRequiredPackage(BISON)
FindRequiredPackage(PCAP)
@@ -88,7 +88,7 @@ endif ()
include_directories(BEFORE
${PCAP_INCLUDE_DIR}
- ${OpenSSL_INCLUDE_DIR}
+ ${OPENSSL_INCLUDE_DIR}
${BIND_INCLUDE_DIR}
${BinPAC_INCLUDE_DIR}
${ZLIB_INCLUDE_DIR}
@@ -141,7 +141,7 @@ endif ()
set(brodeps
${BinPAC_LIBRARY}
${PCAP_LIBRARY}
- ${OpenSSL_LIBRARIES}
+ ${OPENSSL_LIBRARIES}
${BIND_LIBRARY}
${ZLIB_LIBRARY}
${JEMALLOC_LIBRARIES}
@@ -170,8 +170,8 @@ include(RequireCXX11)
# Tell the plugin code that we're building as part of the main tree.
set(BRO_PLUGIN_INTERNAL_BUILD true CACHE INTERNAL "" FORCE)
-configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in
- ${CMAKE_CURRENT_BINARY_DIR}/config.h)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/bro-config.h.in
+ ${CMAKE_CURRENT_BINARY_DIR}/bro-config.h)
include_directories(${CMAKE_CURRENT_BINARY_DIR})
@@ -233,6 +233,7 @@ message(
"\nCPP: ${CMAKE_CXX_COMPILER}"
"\n"
"\nBroker: ${ENABLE_BROKER}"
+ "\nBroker Python: ${BROKER_PYTHON_BINDINGS}"
"\nBroccoli: ${INSTALL_BROCCOLI}"
"\nBroctl: ${INSTALL_BROCTL}"
"\nAux. Tools: ${INSTALL_AUX_TOOLS}"
diff --git a/NEWS b/NEWS
index 348c179bdc..4f1a84b7b6 100644
--- a/NEWS
+++ b/NEWS
@@ -16,6 +16,102 @@ New Dependencies
- Bro now requires the C++ Actor Framework, CAF, which must be
installed first. See http://actor-framework.org.
+- Bro now requires Python instead of Perl to compile the source code.
+
+- The pcap buffer size can be set through the new option Pcap::bufsize.
+
+New Functionality
+-----------------
+
+- Bro now includes the NetControl framework. The framework allows Bro to
+  interact easily with hardware and software switches, firewalls, etc.
+
+- There is a new entropy analyzer for files.
+
+- Bro now supports the remote framebuffer protocol (RFB) that is used by
+ VNC servers for remote graphical displays.
+
+- Bro now supports the Radiotap header for 802.11 frames.
+
+- Bro now tracks VLAN IDs. To record them inside the connection log,
+ load protocols/conn/vlan-logging.bro.
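+
+  For example, loading the script (e.g. from local.bro) is all that is
+  needed to add the VLAN fields to conn.log:
+
+    @load protocols/conn/vlan-logging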
+
+- A new dns_CAA_reply event gives access to DNS Certification Authority
+ Authorization replies.
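+
+  As an illustrative sketch only (the exact parameter list of the event
+  is an assumption here, not taken from this document):
+
+    event dns_CAA_reply(c: connection, msg: dns_msg, ans: dns_answer,
+                        flags: count, tag: string, value: string)
+        {
+        print fmt("CAA %s=%s (flags=%d)", tag, value, flags);
+        }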
+
+- A new per-packet event raw_packet() provides access to layer 2
+  information. Use with care: generating an event per packet is
+  expensive.
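+
+  A minimal handler sketch (the raw_pkt_hdr field layout used below is
+  an assumption):
+
+    event raw_packet(p: raw_pkt_hdr)
+        {
+        # Layer 2 details are expected under p$l2; printing per packet is
+        # expensive, so only do this while debugging.
+        print p$l2;
+        }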
+
+- A new built-in function, decode_base64_conn(), for Base64 decoding.
+  It works like decode_base64() but receives an additional connection
+  argument that will be used for reporting decoding errors into
+  weird.log (instead of reporter.log).
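+
+  A hypothetical usage sketch (whether the function expects the
+  connection itself or its conn_id is an assumption here):
+
+    event connection_established(c: connection)
+        {
+        # Decoding errors would be reported to weird.log for this connection.
+        print decode_base64_conn(c$id, "YnJv");  # prints "bro"
+        }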
+
+- A new get_current_packet_header bif returns the headers of the current
+ packet.
+
+- Three new built-in functions for handling set[subnet] and table[subnet]
+  (a short usage sketch follows the list):
+
+ - check_subnet(subnet, table) checks if a specific subnet is a member
+ of a set/table. This is different from the "in" operator, which always
+ performs a longest prefix match.
+
+ - matching_subnets(subnet, table) returns all subnets of the set or table
+ that contain the given subnet.
+
+ - filter_subnet_table(subnet, table) works like check_subnet, but returns
+ a table containing all matching entries.
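+
+  A small sketch of how the three differ (hypothetical data; expected
+  output shown as comments):
+
+    global nets: table[subnet] of string = {
+        [10.0.0.0/8] = "corp",
+        [10.2.0.0/16] = "lab"
+    };
+
+    event bro_init()
+        {
+        print check_subnet(10.2.0.0/16, nets);        # T (exact entry exists)
+        print check_subnet(10.2.3.0/24, nets);        # F (no exact entry)
+        print matching_subnets(10.2.3.0/24, nets);    # both containing subnets
+        print filter_subnet_table(10.2.3.0/24, nets); # table with both entries
+        }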
+
+- Several built-in functions for handling IP addresses and subnets were added:
+
+ - is_v4_subnet(subnet) checks whether a subnet specification is IPv4.
+
+ - is_v6_subnet(subnet) checks whether a subnet specification is IPv6.
+
+ - addr_to_subnet(addr) converts an IP address to a /32 subnet.
+
+ - subnet_to_addr(subnet) returns the IP address part of a subnet.
+
+ - subnet_width(subnet) returns the width of a subnet.
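+
+  For example:
+
+    event bro_init()
+        {
+        print addr_to_subnet(192.168.1.1);  # 192.168.1.1/32
+        print is_v4_subnet(10.0.0.0/8);     # T
+        print is_v6_subnet([::1]/128);      # T
+        print subnet_to_addr(10.0.0.0/8);   # 10.0.0.0
+        print subnet_width(10.0.0.0/8);     # 8
+        }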
+
+- The IRC analyzer now recognizes StartTLS sessions and enables the SSL
+  analyzer for them.
+
+- New Bro plugins in aux/plugins:
+
+ - af_packet: Native AF_PACKET support.
+  - kafka: Log writer interfacing to Kafka.
+ - myricom: Native Myricom SNF v3 support.
+ - pf_ring: Native PF_RING support.
+ - redis: An experimental log writer for Redis.
+  - tcprs: A TCP-level analyzer detecting retransmissions, reordering, and more.
+
+Changed Functionality
+---------------------
+
+- The BrokerComm and BrokerStore namespaces were renamed to Broker.
+
+- ``SSH::skip_processing_after_detection`` was removed. The functionality was
+ replaced by ``SSH::disable_analyzer_after_detection``.
+
+- Some script-level identifiers have changed their names:
+
+ snaplen -> Pcap::snaplen
+ precompile_pcap_filter() -> Pcap::precompile_pcap_filter()
+ install_pcap_filter() -> Pcap::install_pcap_filter()
+ pcap_error() -> Pcap::pcap_error()
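+
+  For example, a site that previously set ``redef snaplen = 65535;`` in
+  local.bro now writes:
+
+    redef Pcap::snaplen = 65535;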
+
+
+Deprecated Functionality
+------------------------
+
+ - The built-in functions decode_base64_custom() and
+ encode_base64_custom() are no longer needed and will be removed
+ in the future. Their functionality is now provided directly by
+ decode_base64() and encode_base64(), which take an optional
+ parameter to change the Base64 alphabet.
+
Bro 2.4
=======
@@ -176,8 +272,8 @@ Changed Functionality
- The SSH changes come with a few incompatibilities. The following
events have been renamed:
- * ``SSH::heuristic_failed_login`` to ``SSH::ssh_auth_failed``
- * ``SSH::heuristic_successful_login`` to ``SSH::ssh_auth_successful``
+ * ``SSH::heuristic_failed_login`` to ``ssh_auth_failed``
+ * ``SSH::heuristic_successful_login`` to ``ssh_auth_successful``
The ``SSH::Info`` status field has been removed and replaced with
the ``auth_success`` field. This field has been changed from a
diff --git a/README.rst b/README.rst
new file mode 120000
index 0000000000..100b93820a
--- /dev/null
+++ b/README.rst
@@ -0,0 +1 @@
+README
\ No newline at end of file
diff --git a/VERSION b/VERSION
index 32dfa2f555..33a6cae723 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-2.4-46
+2.4-471
diff --git a/aux/binpac b/aux/binpac
index 4f33233aef..424d40c1e8 160000
--- a/aux/binpac
+++ b/aux/binpac
@@ -1 +1 @@
-Subproject commit 4f33233aef5539ae4f12c6d0e4338247833c3900
+Subproject commit 424d40c1e8d5888311b50c0e5a9dfc9c5f818b66
diff --git a/aux/bro-aux b/aux/bro-aux
index 07af9748f4..105dfe4ad6 160000
--- a/aux/bro-aux
+++ b/aux/bro-aux
@@ -1 +1 @@
-Subproject commit 07af9748f40dc47d3a2b3290db494a90dcbddbdc
+Subproject commit 105dfe4ad6c4ae4563b21cb0466ee350f0af0d43
diff --git a/aux/broccoli b/aux/broccoli
index 74bb4bbd94..f83038b17f 160000
--- a/aux/broccoli
+++ b/aux/broccoli
@@ -1 +1 @@
-Subproject commit 74bb4bbd949e61e099178f8a97499d3f1355de8b
+Subproject commit f83038b17fc83788415a58d77f75ad182ca6a9b7
diff --git a/aux/broctl b/aux/broctl
index 54377d4746..583f3a3ff1 160000
--- a/aux/broctl
+++ b/aux/broctl
@@ -1 +1 @@
-Subproject commit 54377d4746e2fd3ba7b7ca97e4a6ceccbd2cc236
+Subproject commit 583f3a3ff1847cf96a87f865d5cf0f36fae9dd67
diff --git a/aux/broker b/aux/broker
index d25efc7d5f..6684ab5109 160000
--- a/aux/broker
+++ b/aux/broker
@@ -1 +1 @@
-Subproject commit d25efc7d5f495c30294b11180c1857477078f2d6
+Subproject commit 6684ab5109f526fb535013760f17a4c8dff093ae
diff --git a/aux/btest b/aux/btest
index a89cd0fda0..4bea8fa948 160000
--- a/aux/btest
+++ b/aux/btest
@@ -1 +1 @@
-Subproject commit a89cd0fda0f17f69b96c935959cae89145b92927
+Subproject commit 4bea8fa948be2bc86ff92399137131bc1c029b08
diff --git a/aux/plugins b/aux/plugins
index 98ad8a5b97..ab61be0c4f 160000
--- a/aux/plugins
+++ b/aux/plugins
@@ -1 +1 @@
-Subproject commit 98ad8a5b97f601a3ec9a773d87582438212b8290
+Subproject commit ab61be0c4f128c976f72dfa5a09a87cd842f387a
diff --git a/config.h.in b/bro-config.h.in
similarity index 100%
rename from config.h.in
rename to bro-config.h.in
diff --git a/cmake b/cmake
index 6406fb79d3..0a2b36874a 160000
--- a/cmake
+++ b/cmake
@@ -1 +1 @@
-Subproject commit 6406fb79d30df8d7956110ce65a97d18e4bc8c3b
+Subproject commit 0a2b36874ad5c1a22829135f8aeeac534469053f
diff --git a/configure b/configure
index ae2f337117..8859a6fa9b 100755
--- a/configure
+++ b/configure
@@ -47,6 +47,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
+ --disable-pybroker don't try to build python bindings for broker
Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root
@@ -55,7 +56,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-binpac=PATH path to BinPAC install root
--with-flex=PATH path to flex executable
--with-bison=PATH path to bison executable
- --with-perl=PATH path to perl executable
+ --with-python=PATH path to Python executable
--with-libcaf=PATH path to C++ Actor Framework installation
(a required Broker dependency)
@@ -63,7 +64,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-geoip=PATH path to the libGeoIP install root
--with-perftools=PATH path to Google Perftools install root
--with-jemalloc=PATH path to jemalloc install root
- --with-python=PATH path to Python interpreter
--with-python-lib=PATH path to libpython
--with-python-inc=PATH path to Python headers
--with-ruby=PATH path to ruby interpreter
@@ -122,6 +122,7 @@ append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl
append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro
append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
append_cache_entry BROKER_PYTHON_HOME PATH $prefix
+append_cache_entry BROKER_PYTHON_BINDINGS BOOL false
append_cache_entry ENABLE_DEBUG BOOL false
append_cache_entry ENABLE_PERFTOOLS BOOL false
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
@@ -218,11 +219,14 @@ while [ $# -ne 0 ]; do
--disable-python)
append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
;;
+ --disable-pybroker)
+ append_cache_entry DISABLE_PYBROKER BOOL true
+ ;;
--enable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
;;
--with-openssl=*)
- append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
+ append_cache_entry OPENSSL_ROOT_DIR PATH $optarg
;;
--with-bind=*)
append_cache_entry BIND_ROOT_DIR PATH $optarg
@@ -239,9 +243,6 @@ while [ $# -ne 0 ]; do
--with-bison=*)
append_cache_entry BISON_EXECUTABLE PATH $optarg
;;
- --with-perl=*)
- append_cache_entry PERL_EXECUTABLE PATH $optarg
- ;;
--with-geoip=*)
append_cache_entry LibGeoIP_ROOT_DIR PATH $optarg
;;
@@ -275,8 +276,12 @@ while [ $# -ne 0 ]; do
--with-swig=*)
append_cache_entry SWIG_EXECUTABLE PATH $optarg
;;
+ --with-caf=*)
+ append_cache_entry CAF_ROOT_DIR PATH $optarg
+ ;;
--with-libcaf=*)
- append_cache_entry LIBCAF_ROOT_DIR PATH $optarg
+ echo "warning: --with-libcaf deprecated, use --with-caf instead"
+ append_cache_entry CAF_ROOT_DIR PATH $optarg
;;
--with-rocksdb=*)
append_cache_entry ROCKSDB_ROOT_DIR PATH $optarg
diff --git a/doc/components/bro-plugins/af_packet/README.rst b/doc/components/bro-plugins/af_packet/README.rst
new file mode 120000
index 0000000000..b8f745bed2
--- /dev/null
+++ b/doc/components/bro-plugins/af_packet/README.rst
@@ -0,0 +1 @@
+../../../../aux/plugins/af_packet/README
\ No newline at end of file
diff --git a/doc/components/bro-plugins/dataseries/README.rst b/doc/components/bro-plugins/dataseries/README.rst
deleted file mode 120000
index 3362e911fc..0000000000
--- a/doc/components/bro-plugins/dataseries/README.rst
+++ /dev/null
@@ -1 +0,0 @@
-../../../../aux/plugins/dataseries/README
\ No newline at end of file
diff --git a/doc/components/bro-plugins/myricom/README.rst b/doc/components/bro-plugins/myricom/README.rst
new file mode 120000
index 0000000000..3bfabcdae3
--- /dev/null
+++ b/doc/components/bro-plugins/myricom/README.rst
@@ -0,0 +1 @@
+../../../../aux/plugins/myricom/README
\ No newline at end of file
diff --git a/doc/components/bro-plugins/pf_ring/README.rst b/doc/components/bro-plugins/pf_ring/README.rst
new file mode 120000
index 0000000000..5ea666e8c9
--- /dev/null
+++ b/doc/components/bro-plugins/pf_ring/README.rst
@@ -0,0 +1 @@
+../../../../aux/plugins/pf_ring/README
\ No newline at end of file
diff --git a/doc/components/bro-plugins/redis/README.rst b/doc/components/bro-plugins/redis/README.rst
new file mode 120000
index 0000000000..c42051828e
--- /dev/null
+++ b/doc/components/bro-plugins/redis/README.rst
@@ -0,0 +1 @@
+../../../../aux/plugins/redis/README
\ No newline at end of file
diff --git a/doc/components/bro-plugins/tcprs/README.rst b/doc/components/bro-plugins/tcprs/README.rst
new file mode 120000
index 0000000000..c0e84fd579
--- /dev/null
+++ b/doc/components/bro-plugins/tcprs/README.rst
@@ -0,0 +1 @@
+../../../../aux/plugins/tcprs/README
\ No newline at end of file
diff --git a/doc/conf.py.in b/doc/conf.py.in
index 4faebed3b8..ef9367483a 100644
--- a/doc/conf.py.in
+++ b/doc/conf.py.in
@@ -66,7 +66,7 @@ master_doc = 'index'
# General information about the project.
project = u'Bro'
-copyright = u'2013, The Bro Project'
+copyright = u'2016, The Bro Project'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
diff --git a/doc/devel/plugins.rst b/doc/devel/plugins.rst
index 091a0090d1..dc1c9a3cd4 100644
--- a/doc/devel/plugins.rst
+++ b/doc/devel/plugins.rst
@@ -209,8 +209,15 @@ directory. With the skeleton, ```` corresponds to ``build/``.
"@load"ed.
``scripts``/__load__.bro
- A Bro script that will be loaded immediately when the plugin gets
- activated. See below for more information on activating plugins.
+ A Bro script that will be loaded when the plugin gets activated.
+ When this script executes, any BiF elements that the plugin
+ defines will already be available. See below for more information
+ on activating plugins.
+
+``scripts``/__preload__.bro
+ A Bro script that will be loaded when the plugin gets activated,
+ but before any BiF elements become available. See below for more
+ information on activating plugins.
``lib/bif/``
Directory with auto-generated Bro scripts that declare the plugin's
@@ -279,7 +286,9 @@ Activating a plugin will:
1. Load the dynamic module
2. Make any bif items available
3. Add the ``scripts/`` directory to ``BROPATH``
- 4. Load ``scripts/__load__.bro``
+ 4. Load ``scripts/__preload__.bro``
+ 5. Make BiF elements available to scripts.
+ 6. Load ``scripts/__load__.bro``
By default, Bro will automatically activate all dynamic plugins found
in its search path ``BRO_PLUGIN_PATH``. However, in bare mode (``bro
diff --git a/doc/frameworks/broker.rst b/doc/frameworks/broker.rst
index 3cd8dab6e3..328c465c18 100644
--- a/doc/frameworks/broker.rst
+++ b/doc/frameworks/broker.rst
@@ -9,10 +9,7 @@ Broker-Enabled Communication Framework
Bro can now use the `Broker Library
<../components/broker/README.html>`_ to exchange information with
- other Bro processes. To enable it run Bro's ``configure`` script
- with the ``--enable-broker`` option. Note that a C++11 compatible
- compiler (e.g. GCC 4.8+ or Clang 3.3+) is required as well as the
- `C++ Actor Framework `_.
+ other Bro processes.
.. contents::
@@ -20,35 +17,35 @@ Connecting to Peers
===================
Communication via Broker must first be turned on via
-:bro:see:`BrokerComm::enable`.
+:bro:see:`Broker::enable`.
-Bro can accept incoming connections by calling :bro:see:`BrokerComm::listen`
-and then monitor connection status updates via
-:bro:see:`BrokerComm::incoming_connection_established` and
-:bro:see:`BrokerComm::incoming_connection_broken`.
+Bro can accept incoming connections by calling :bro:see:`Broker::listen`
+and then monitor connection status updates via the
+:bro:see:`Broker::incoming_connection_established` and
+:bro:see:`Broker::incoming_connection_broken` events.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-listener.bro
-Bro can initiate outgoing connections by calling :bro:see:`BrokerComm::connect`
-and then monitor connection status updates via
-:bro:see:`BrokerComm::outgoing_connection_established`,
-:bro:see:`BrokerComm::outgoing_connection_broken`, and
-:bro:see:`BrokerComm::outgoing_connection_incompatible`.
+Bro can initiate outgoing connections by calling :bro:see:`Broker::connect`
+and then monitor connection status updates via the
+:bro:see:`Broker::outgoing_connection_established`,
+:bro:see:`Broker::outgoing_connection_broken`, and
+:bro:see:`Broker::outgoing_connection_incompatible` events.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-connector.bro
Remote Printing
===============
-To receive remote print messages, first use
-:bro:see:`BrokerComm::subscribe_to_prints` to advertise to peers a topic
-prefix of interest and then create an event handler for
-:bro:see:`BrokerComm::print_handler` to handle any print messages that are
+To receive remote print messages, first use the
+:bro:see:`Broker::subscribe_to_prints` function to advertise to peers a
+topic prefix of interest and then create an event handler for
+:bro:see:`Broker::print_handler` to handle any print messages that are
received.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/printing-listener.bro
-To send remote print messages, just call :bro:see:`BrokerComm::print`.
+To send remote print messages, just call :bro:see:`Broker::print`.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/printing-connector.bro
@@ -71,17 +68,17 @@ the Broker message format is simply:
Remote Events
=============
-Receiving remote events is similar to remote prints. Just use
-:bro:see:`BrokerComm::subscribe_to_events` and possibly define any new events
-along with handlers that peers may want to send.
+Receiving remote events is similar to remote prints. Just use the
+:bro:see:`Broker::subscribe_to_events` function and possibly define any
+new events along with handlers that peers may want to send.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-listener.bro
-To send events, there are two choices. The first is to use call
-:bro:see:`BrokerComm::event` directly. The second option is to use
-:bro:see:`BrokerComm::auto_event` to make it so a particular event is
-automatically sent to peers whenever it is called locally via the normal
-event invocation syntax.
+There are two different ways to send events. The first is to call the
+:bro:see:`Broker::event` function directly. The second option is to call
+the :bro:see:`Broker::auto_event` function where you specify a
+particular event that will be automatically sent to peers whenever the
+event is called locally via the normal event invocation syntax.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-connector.bro
@@ -98,7 +95,7 @@ the Broker message format is:
broker::message{std::string{}, ...};
The first parameter is the name of the event and the remaining ``...``
-are its arguments, which are any of the support Broker data types as
+are its arguments, which are any of the supported Broker data types as
they correspond to the Bro types for the event named in the first
parameter of the message.
@@ -107,23 +104,23 @@ Remote Logging
.. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro
-Use :bro:see:`BrokerComm::subscribe_to_logs` to advertise interest in logs
-written by peers. The topic names that Bro uses are implicitly of the
+Use the :bro:see:`Broker::subscribe_to_logs` function to advertise interest
+in logs written by peers. The topic names that Bro uses are implicitly of the
form "bro/log/".
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-listener.bro
-To send remote logs either use :bro:see:`Log::enable_remote_logging` or
-:bro:see:`BrokerComm::enable_remote_logs`. The former allows any log stream
-to be sent to peers while the later toggles remote logging for
-particular streams.
+To send remote logs either redef :bro:see:`Log::enable_remote_logging` or
+use the :bro:see:`Broker::enable_remote_logs` function. The former
+allows any log stream to be sent to peers while the latter enables remote
+logging for particular streams.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-connector.bro
Message Format
--------------
-For other applications that want to exchange logs messages with Bro,
+For other applications that want to exchange log messages with Bro,
the Broker message format is:
.. code:: c++
@@ -132,7 +129,7 @@ the Broker message format is:
The enum value corresponds to the stream's :bro:see:`Log::ID` value, and
the record corresponds to a single entry of that log's columns record,
-in this case a ``Test::INFO`` value.
+in this case a ``Test::Info`` value.
Tuning Access Control
=====================
@@ -140,23 +137,24 @@ Tuning Access Control
By default, endpoints do not restrict the message topics that they send
to peers and do not restrict what message topics and data store
identifiers get advertised to peers. These are the default
-:bro:see:`BrokerComm::EndpointFlags` supplied to :bro:see:`BrokerComm::enable`.
+:bro:see:`Broker::EndpointFlags` supplied to :bro:see:`Broker::enable`.
If not using the ``auto_publish`` flag, one can use the
-:bro:see:`BrokerComm::publish_topic` and :bro:see:`BrokerComm::unpublish_topic`
+:bro:see:`Broker::publish_topic` and :bro:see:`Broker::unpublish_topic`
functions to manipulate the set of message topics (must match exactly)
that are allowed to be sent to peer endpoints. These settings take
precedence over the per-message ``peers`` flag supplied to functions
-that take a :bro:see:`BrokerComm::SendFlags` such as :bro:see:`BrokerComm::print`,
-:bro:see:`BrokerComm::event`, :bro:see:`BrokerComm::auto_event` or
-:bro:see:`BrokerComm::enable_remote_logs`.
+that take a :bro:see:`Broker::SendFlags` such as :bro:see:`Broker::print`,
+:bro:see:`Broker::event`, :bro:see:`Broker::auto_event` or
+:bro:see:`Broker::enable_remote_logs`.
If not using the ``auto_advertise`` flag, one can use the
-:bro:see:`BrokerComm::advertise_topic` and :bro:see:`BrokerComm::unadvertise_topic`
-to manupulate the set of topic prefixes that are allowed to be
-advertised to peers. If an endpoint does not advertise a topic prefix,
-the only way a peers can send messages to it is via the ``unsolicited``
-flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching
+:bro:see:`Broker::advertise_topic` and
+:bro:see:`Broker::unadvertise_topic` functions
+to manipulate the set of topic prefixes that are allowed to be
+advertised to peers. If an endpoint does not advertise a topic prefix, then
+the only way peers can send messages to it is via the ``unsolicited``
+flag of :bro:see:`Broker::SendFlags` and choosing a topic with a matching
prefix (i.e. the full topic may be longer than the receiver's prefix, just the
prefix needs to match).
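
For illustration only, a minimal sketch of an endpoint that whitelists a single
topic and advertises a single prefix might look like this (it assumes the
``Broker::EndpointFlags`` fields are named ``auto_publish`` and
``auto_advertise``; the topic strings are arbitrary examples):

.. code:: bro

    event bro_init()
        {
        # Disable automatic publishing and advertising, so only the topics
        # explicitly allowed below are sent or advertised to peers.
        Broker::enable([$auto_publish=F, $auto_advertise=F]);

        # Allow this exact topic to be sent to peers.
        Broker::publish_topic("bro/print/hi");

        # Advertise this topic prefix to peers.
        Broker::advertise_topic("bro/print/");
        }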
@@ -172,7 +170,7 @@ specific type of frontend, but a standalone frontend can also exist to
e.g. query and modify the contents of a remote master store without
actually "owning" any of the contents itself.
-A master data store can be be cloned from remote peers which may then
+A master data store can be cloned from remote peers which may then
perform lightweight, local queries against the clone, which
automatically stays synchronized with the master store. Clones cannot
modify their content directly, instead they send modifications to the
@@ -181,7 +179,7 @@ all clones.
Master and clone stores get to choose what type of storage backend to
use. E.g. In-memory versus SQLite for persistence. Note that if clones
-are used, data store sizes should still be able to fit within memory
+are used, then data store sizes must be able to fit within memory
regardless of the storage backend as a single snapshot of the master
store is sent in a single chunk to initialize the clone.
@@ -194,9 +192,9 @@ last modification time.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/stores-connector.bro
In the above example, if a local copy of the store contents isn't
-needed, just replace the :bro:see:`BrokerStore::create_clone` call with
-:bro:see:`BrokerStore::create_frontend`. Queries will then be made against
+needed, just replace the :bro:see:`Broker::create_clone` call with
+:bro:see:`Broker::create_frontend`. Queries will then be made against
the remote master store instead of the local clone.
-Note that all queries are made within Bro's asynchrounous ``when``
-statements and must specify a timeout block.
+Note that all data store queries must be made within Bro's asynchronous
+``when`` statements and must specify a timeout block.
diff --git a/doc/frameworks/broker/connecting-connector.bro b/doc/frameworks/broker/connecting-connector.bro
index a7e621e4a6..adf901ea6a 100644
--- a/doc/frameworks/broker/connecting-connector.bro
+++ b/doc/frameworks/broker/connecting-connector.bro
@@ -1,19 +1,18 @@
-
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "connector";
+redef Broker::endpoint_name = "connector";
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::connect("127.0.0.1", broker_port, 1sec);
+ Broker::enable();
+ Broker::connect("127.0.0.1", broker_port, 1sec);
}
-event BrokerComm::outgoing_connection_established(peer_address: string,
+event Broker::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
- print "BrokerComm::outgoing_connection_established",
+ print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name;
terminate();
}
diff --git a/doc/frameworks/broker/connecting-listener.bro b/doc/frameworks/broker/connecting-listener.bro
index c37af3ae4d..aa2b945dbe 100644
--- a/doc/frameworks/broker/connecting-listener.bro
+++ b/doc/frameworks/broker/connecting-listener.bro
@@ -1,21 +1,20 @@
-
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "listener";
+redef Broker::endpoint_name = "listener";
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::listen(broker_port, "127.0.0.1");
+ Broker::enable();
+ Broker::listen(broker_port, "127.0.0.1");
}
-event BrokerComm::incoming_connection_established(peer_name: string)
+event Broker::incoming_connection_established(peer_name: string)
{
- print "BrokerComm::incoming_connection_established", peer_name;
+ print "Broker::incoming_connection_established", peer_name;
}
-event BrokerComm::incoming_connection_broken(peer_name: string)
+event Broker::incoming_connection_broken(peer_name: string)
{
- print "BrokerComm::incoming_connection_broken", peer_name;
+ print "Broker::incoming_connection_broken", peer_name;
terminate();
}
diff --git a/doc/frameworks/broker/events-connector.bro b/doc/frameworks/broker/events-connector.bro
index 1ad458c245..19a617c9cd 100644
--- a/doc/frameworks/broker/events-connector.bro
+++ b/doc/frameworks/broker/events-connector.bro
@@ -1,30 +1,30 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "connector";
+redef Broker::endpoint_name = "connector";
global my_event: event(msg: string, c: count);
global my_auto_event: event(msg: string, c: count);
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::connect("127.0.0.1", broker_port, 1sec);
- BrokerComm::auto_event("bro/event/my_auto_event", my_auto_event);
+ Broker::enable();
+ Broker::connect("127.0.0.1", broker_port, 1sec);
+ Broker::auto_event("bro/event/my_auto_event", my_auto_event);
}
-event BrokerComm::outgoing_connection_established(peer_address: string,
+event Broker::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
- print "BrokerComm::outgoing_connection_established",
+ print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name;
- BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "hi", 0));
+ Broker::event("bro/event/my_event", Broker::event_args(my_event, "hi", 0));
event my_auto_event("stuff", 88);
- BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "...", 1));
+ Broker::event("bro/event/my_event", Broker::event_args(my_event, "...", 1));
event my_auto_event("more stuff", 51);
- BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "bye", 2));
+ Broker::event("bro/event/my_event", Broker::event_args(my_event, "bye", 2));
}
-event BrokerComm::outgoing_connection_broken(peer_address: string,
+event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
diff --git a/doc/frameworks/broker/events-listener.bro b/doc/frameworks/broker/events-listener.bro
index aa6ea9ee4e..b803e646ec 100644
--- a/doc/frameworks/broker/events-listener.bro
+++ b/doc/frameworks/broker/events-listener.bro
@@ -1,21 +1,20 @@
-
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "listener";
+redef Broker::endpoint_name = "listener";
global msg_count = 0;
global my_event: event(msg: string, c: count);
global my_auto_event: event(msg: string, c: count);
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::subscribe_to_events("bro/event/");
- BrokerComm::listen(broker_port, "127.0.0.1");
+ Broker::enable();
+ Broker::subscribe_to_events("bro/event/");
+ Broker::listen(broker_port, "127.0.0.1");
}
-event BrokerComm::incoming_connection_established(peer_name: string)
+event Broker::incoming_connection_established(peer_name: string)
{
- print "BrokerComm::incoming_connection_established", peer_name;
+ print "Broker::incoming_connection_established", peer_name;
}
event my_event(msg: string, c: count)
diff --git a/doc/frameworks/broker/logs-connector.bro b/doc/frameworks/broker/logs-connector.bro
index 6089419cab..9c5df335b9 100644
--- a/doc/frameworks/broker/logs-connector.bro
+++ b/doc/frameworks/broker/logs-connector.bro
@@ -2,16 +2,16 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "connector";
+redef Broker::endpoint_name = "connector";
redef Log::enable_local_logging = F;
redef Log::enable_remote_logging = F;
global n = 0;
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::enable_remote_logs(Test::LOG);
- BrokerComm::connect("127.0.0.1", broker_port, 1sec);
+ Broker::enable();
+ Broker::enable_remote_logs(Test::LOG);
+ Broker::connect("127.0.0.1", broker_port, 1sec);
}
event do_write()
@@ -24,16 +24,16 @@ event do_write()
event do_write();
}
-event BrokerComm::outgoing_connection_established(peer_address: string,
+event Broker::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
- print "BrokerComm::outgoing_connection_established",
+ print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name;
event do_write();
}
-event BrokerComm::outgoing_connection_broken(peer_address: string,
+event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
diff --git a/doc/frameworks/broker/logs-listener.bro b/doc/frameworks/broker/logs-listener.bro
index 5c807f08b7..34d475512a 100644
--- a/doc/frameworks/broker/logs-listener.bro
+++ b/doc/frameworks/broker/logs-listener.bro
@@ -2,18 +2,18 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "listener";
+redef Broker::endpoint_name = "listener";
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::subscribe_to_logs("bro/log/Test::LOG");
- BrokerComm::listen(broker_port, "127.0.0.1");
+ Broker::enable();
+ Broker::subscribe_to_logs("bro/log/Test::LOG");
+ Broker::listen(broker_port, "127.0.0.1");
}
-event BrokerComm::incoming_connection_established(peer_name: string)
+event Broker::incoming_connection_established(peer_name: string)
{
- print "BrokerComm::incoming_connection_established", peer_name;
+ print "Broker::incoming_connection_established", peer_name;
}
event Test::log_test(rec: Test::Info)
diff --git a/doc/frameworks/broker/printing-connector.bro b/doc/frameworks/broker/printing-connector.bro
index 2a504ffba0..0ab14d926b 100644
--- a/doc/frameworks/broker/printing-connector.bro
+++ b/doc/frameworks/broker/printing-connector.bro
@@ -1,25 +1,25 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "connector";
+redef Broker::endpoint_name = "connector";
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::connect("127.0.0.1", broker_port, 1sec);
+ Broker::enable();
+ Broker::connect("127.0.0.1", broker_port, 1sec);
}
-event BrokerComm::outgoing_connection_established(peer_address: string,
+event Broker::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
- print "BrokerComm::outgoing_connection_established",
+ print "Broker::outgoing_connection_established",
peer_address, peer_port, peer_name;
- BrokerComm::print("bro/print/hi", "hello");
- BrokerComm::print("bro/print/stuff", "...");
- BrokerComm::print("bro/print/bye", "goodbye");
+ Broker::print("bro/print/hi", "hello");
+ Broker::print("bro/print/stuff", "...");
+ Broker::print("bro/print/bye", "goodbye");
}
-event BrokerComm::outgoing_connection_broken(peer_address: string,
+event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
diff --git a/doc/frameworks/broker/printing-listener.bro b/doc/frameworks/broker/printing-listener.bro
index 080d09e8f5..4630a7e6d7 100644
--- a/doc/frameworks/broker/printing-listener.bro
+++ b/doc/frameworks/broker/printing-listener.bro
@@ -1,22 +1,21 @@
-
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-redef BrokerComm::endpoint_name = "listener";
+redef Broker::endpoint_name = "listener";
global msg_count = 0;
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::subscribe_to_prints("bro/print/");
- BrokerComm::listen(broker_port, "127.0.0.1");
+ Broker::enable();
+ Broker::subscribe_to_prints("bro/print/");
+ Broker::listen(broker_port, "127.0.0.1");
}
-event BrokerComm::incoming_connection_established(peer_name: string)
+event Broker::incoming_connection_established(peer_name: string)
{
- print "BrokerComm::incoming_connection_established", peer_name;
+ print "Broker::incoming_connection_established", peer_name;
}
-event BrokerComm::print_handler(msg: string)
+event Broker::print_handler(msg: string)
{
++msg_count;
print "got print message", msg;
diff --git a/doc/frameworks/broker/stores-connector.bro b/doc/frameworks/broker/stores-connector.bro
index 5db8657a68..d50807cc89 100644
--- a/doc/frameworks/broker/stores-connector.bro
+++ b/doc/frameworks/broker/stores-connector.bro
@@ -1,42 +1,42 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-global h: opaque of BrokerStore::Handle;
+global h: opaque of Broker::Handle;
-function dv(d: BrokerComm::Data): BrokerComm::DataVector
+function dv(d: Broker::Data): Broker::DataVector
{
- local rval: BrokerComm::DataVector;
+ local rval: Broker::DataVector;
rval[0] = d;
return rval;
}
global ready: event();
-event BrokerComm::outgoing_connection_broken(peer_address: string,
+event Broker::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
}
-event BrokerComm::outgoing_connection_established(peer_address: string,
+event Broker::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
local myset: set[string] = {"a", "b", "c"};
local myvec: vector of string = {"alpha", "beta", "gamma"};
- h = BrokerStore::create_master("mystore");
- BrokerStore::insert(h, BrokerComm::data("one"), BrokerComm::data(110));
- BrokerStore::insert(h, BrokerComm::data("two"), BrokerComm::data(223));
- BrokerStore::insert(h, BrokerComm::data("myset"), BrokerComm::data(myset));
- BrokerStore::insert(h, BrokerComm::data("myvec"), BrokerComm::data(myvec));
- BrokerStore::increment(h, BrokerComm::data("one"));
- BrokerStore::decrement(h, BrokerComm::data("two"));
- BrokerStore::add_to_set(h, BrokerComm::data("myset"), BrokerComm::data("d"));
- BrokerStore::remove_from_set(h, BrokerComm::data("myset"), BrokerComm::data("b"));
- BrokerStore::push_left(h, BrokerComm::data("myvec"), dv(BrokerComm::data("delta")));
- BrokerStore::push_right(h, BrokerComm::data("myvec"), dv(BrokerComm::data("omega")));
+ h = Broker::create_master("mystore");
+ Broker::insert(h, Broker::data("one"), Broker::data(110));
+ Broker::insert(h, Broker::data("two"), Broker::data(223));
+ Broker::insert(h, Broker::data("myset"), Broker::data(myset));
+ Broker::insert(h, Broker::data("myvec"), Broker::data(myvec));
+ Broker::increment(h, Broker::data("one"));
+ Broker::decrement(h, Broker::data("two"));
+ Broker::add_to_set(h, Broker::data("myset"), Broker::data("d"));
+ Broker::remove_from_set(h, Broker::data("myset"), Broker::data("b"));
+ Broker::push_left(h, Broker::data("myvec"), dv(Broker::data("delta")));
+ Broker::push_right(h, Broker::data("myvec"), dv(Broker::data("omega")));
- when ( local res = BrokerStore::size(h) )
+ when ( local res = Broker::size(h) )
{
print "master size", res;
event ready();
@@ -47,7 +47,7 @@ event BrokerComm::outgoing_connection_established(peer_address: string,
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::connect("127.0.0.1", broker_port, 1secs);
- BrokerComm::auto_event("bro/event/ready", ready);
+ Broker::enable();
+ Broker::connect("127.0.0.1", broker_port, 1secs);
+ Broker::auto_event("bro/event/ready", ready);
}
diff --git a/doc/frameworks/broker/stores-listener.bro b/doc/frameworks/broker/stores-listener.bro
index 454e41a8c2..3dac30deca 100644
--- a/doc/frameworks/broker/stores-listener.bro
+++ b/doc/frameworks/broker/stores-listener.bro
@@ -1,13 +1,13 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
-global h: opaque of BrokerStore::Handle;
+global h: opaque of Broker::Handle;
global expected_key_count = 4;
global key_count = 0;
function do_lookup(key: string)
{
- when ( local res = BrokerStore::lookup(h, BrokerComm::data(key)) )
+ when ( local res = Broker::lookup(h, Broker::data(key)) )
{
++key_count;
print "lookup", key, res;
@@ -21,15 +21,15 @@ function do_lookup(key: string)
event ready()
{
- h = BrokerStore::create_clone("mystore");
+ h = Broker::create_clone("mystore");
- when ( local res = BrokerStore::keys(h) )
+ when ( local res = Broker::keys(h) )
{
print "clone keys", res;
- do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 0)));
- do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 1)));
- do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 2)));
- do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 3)));
+ do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 0)));
+ do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 1)));
+ do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 2)));
+ do_lookup(Broker::refine_to_string(Broker::vector_lookup(res$result, 3)));
}
timeout 10sec
{ print "timeout"; }
@@ -37,7 +37,7 @@ event ready()
event bro_init()
{
- BrokerComm::enable();
- BrokerComm::subscribe_to_events("bro/event/ready");
- BrokerComm::listen(broker_port, "127.0.0.1");
+ Broker::enable();
+ Broker::subscribe_to_events("bro/event/ready");
+ Broker::listen(broker_port, "127.0.0.1");
}
diff --git a/doc/frameworks/broker/testlog.bro b/doc/frameworks/broker/testlog.bro
index f63c19ac48..0099671e6d 100644
--- a/doc/frameworks/broker/testlog.bro
+++ b/doc/frameworks/broker/testlog.bro
@@ -1,4 +1,3 @@
-
module Test;
export {
@@ -14,6 +13,6 @@ export {
event bro_init() &priority=5
{
- BrokerComm::enable();
+ Broker::enable();
Log::create_stream(Test::LOG, [$columns=Test::Info, $ev=log_test, $path="test"]);
}
diff --git a/doc/frameworks/geoip.rst b/doc/frameworks/geoip.rst
index 98252d7184..d756f97589 100644
--- a/doc/frameworks/geoip.rst
+++ b/doc/frameworks/geoip.rst
@@ -20,11 +20,13 @@ GeoLocation
Install libGeoIP
----------------
+Before building Bro, you need to install libGeoIP.
+
* FreeBSD:
.. console::
- sudo pkg_add -r GeoIP
+ sudo pkg install GeoIP
* RPM/RedHat-based Linux:
@@ -40,80 +42,99 @@ Install libGeoIP
* Mac OS X:
- Vanilla OS X installations don't ship with libGeoIP, but if
- installed from your preferred package management system (e.g.
- MacPorts, Fink, or Homebrew), they should be automatically detected
- and Bro will compile against them.
+ You need to install libGeoIP from your preferred package management
+ system (e.g. MacPorts, Fink, or Homebrew). Depending on which package
+ manager you use, the package may be named libgeoip, geoip, or geoip-dev.
GeoIPLite Database Installation
-------------------------------------
+-------------------------------
A country database for GeoIPLite is included when you do the C API
install, but for Bro, we are using the city database which includes
cities and regions in addition to countries.
`Download `__ the GeoLite city
-binary database.
+binary database:
- .. console::
+.. console::
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz
-Next, the file needs to be put in the database directory. This directory
-should already exist and will vary depending on which platform and package
-you are using. For FreeBSD, use ``/usr/local/share/GeoIP``. For Linux,
-use ``/usr/share/GeoIP`` or ``/var/lib/GeoIP`` (choose whichever one
+Next, the file needs to be renamed and put in the GeoIP database directory.
+This directory should already exist and will vary depending on which platform
+and package you are using. For FreeBSD, use ``/usr/local/share/GeoIP``. For
+Linux, use ``/usr/share/GeoIP`` or ``/var/lib/GeoIP`` (choose whichever one
already exists).
- .. console::
+.. console::
mv GeoLiteCity.dat /GeoIPCity.dat
+Note that there is a separate database for IPv6 addresses, which can also
+be installed if you want GeoIP functionality for IPv6.
+
+Testing
+-------
+
+Before using the GeoIP functionality, it is a good idea to verify that
+everything is set up correctly. After installing libGeoIP and the GeoIP city
+database, and building Bro, you can quickly check whether the GeoIP
+functionality works by running a command like this:
+
+.. console::
+
+ bro -e "print lookup_location(8.8.8.8);"
+
+If you see an error message similar to "Failed to open GeoIP City database",
+then you may need to either rename or move your GeoIP city database file (the
+error message should give you the full pathname of the database file that
+Bro is looking for).
+
+If you see an error message similar to "Bro was not configured for GeoIP
+support", then you need to rebuild Bro and make sure it is linked against
+libGeoIP. Normally, if libGeoIP is installed correctly then it should
+automatically be found when building Bro. If this doesn't happen, then
+you may need to specify the path to the libGeoIP installation
+(e.g. ``./configure --with-geoip=``).
Usage
-----
-There is a single built in function that provides the GeoIP
-functionality:
+There is a built-in function that provides the GeoIP functionality:
.. code:: bro
function lookup_location(a:addr): geo_location
-There is also the :bro:see:`geo_location` data structure that is returned
-from the :bro:see:`lookup_location` function:
-
-.. code:: bro
-
- type geo_location: record {
- country_code: string;
- region: string;
- city: string;
- latitude: double;
- longitude: double;
- };
-
+The return value of the :bro:see:`lookup_location` function is a record
+type called :bro:see:`geo_location`, and it consists of several fields
+containing the country, region, city, latitude, and longitude of the specified
+IP address. Since one or more fields in this record will be uninitialized
+for some IP addresses (for example, the country and region of an IP address
+might be known, but the city could be unknown), check whether a field has a
+value before trying to access it.
Example
-------
-To write a line in a log file for every ftp connection from hosts in
-Ohio, this is now very easy:
+To show every ftp connection from hosts in Ohio, this is now very easy:
.. code:: bro
- global ftp_location_log: file = open_log_file("ftp-location");
-
event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool)
{
local client = c$id$orig_h;
local loc = lookup_location(client);
- if (loc$region == "OH" && loc$country_code == "US")
+
+ if (loc?$region && loc$region == "OH" && loc$country_code == "US")
{
- print ftp_location_log, fmt("FTP Connection from:%s (%s,%s,%s)", client, loc$city, loc$region, loc$country_code);
+ local city = loc?$city ? loc$city : "";
+
+ print fmt("FTP Connection from:%s (%s,%s,%s)", client, city,
+ loc$region, loc$country_code);
}
}
-
diff --git a/doc/frameworks/input.rst b/doc/frameworks/input.rst
index ef40756a26..aa2dce6417 100644
--- a/doc/frameworks/input.rst
+++ b/doc/frameworks/input.rst
@@ -32,7 +32,8 @@ For this example we assume that we want to import data from a blacklist
that contains server IP addresses as well as the timestamp and the reason
for the block.
-An example input file could look like this:
+An example input file could look like this (note that all fields must be
+tab-separated):
::
@@ -63,19 +64,23 @@ The two records are defined as:
reason: string;
};
-Note that the names of the fields in the record definitions have to correspond
+Note that the names of the fields in the record definitions must correspond
to the column names listed in the '#fields' line of the log file, in this
-case 'ip', 'timestamp', and 'reason'.
+case 'ip', 'timestamp', and 'reason'. Also note that the ordering of the
+columns does not matter, because each column is identified by name.
-The log file is read into the table with a simple call of the ``add_table``
-function:
+The log file is read into the table with a simple call of the
+:bro:id:`Input::add_table` function:
.. code:: bro
global blacklist: table[addr] of Val = table();
- Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist]);
- Input::remove("blacklist");
+ event bro_init() {
+ Input::add_table([$source="blacklist.file", $name="blacklist",
+ $idx=Idx, $val=Val, $destination=blacklist]);
+ Input::remove("blacklist");
+ }
With these three lines we first create an empty table that should contain the
blacklist data and then instruct the input framework to open an input stream
@@ -92,7 +97,7 @@ Because of this, the data is not immediately accessible. Depending on the
size of the data source it might take from a few milliseconds up to a few
seconds until all data is present in the table. Please note that this means
that when Bro is running without an input source or on very short captured
-files, it might terminate before the data is present in the system (because
+files, it might terminate before the data is present in the table (because
Bro already handled all packets before the import thread finished).
Subsequent calls to an input source are queued until the previous action has
@@ -101,8 +106,8 @@ been completed. Because of this, it is, for example, possible to call
will remain queued until the first read has been completed.
Once the input framework finishes reading from a data source, it fires
-the ``end_of_data`` event. Once this event has been received all data
-from the input file is available in the table.
+the :bro:id:`Input::end_of_data` event. Once this event has been received all
+data from the input file is available in the table.
.. code:: bro
@@ -111,9 +116,9 @@ from the input file is available in the table.
print blacklist;
}
-The table can also already be used while the data is still being read - it
-just might not contain all lines in the input file when the event has not
-yet fired. After it has been populated it can be used like any other Bro
+The table can be used while the data is still being read - it
+just might not contain all lines from the input file before the event has
+fired. After the table has been populated it can be used like any other Bro
table and blacklist entries can easily be tested:
.. code:: bro
@@ -130,10 +135,11 @@ changing. For these cases, the Bro input framework supports several ways to
deal with changing data files.
The first, very basic method is an explicit refresh of an input stream. When
-an input stream is open, the function ``force_update`` can be called. This
-will trigger a complete refresh of the table; any changed elements from the
-file will be updated. After the update is finished the ``end_of_data``
-event will be raised.
+an input stream is open (this means it has not yet been removed by a call to
+:bro:id:`Input::remove`), the function :bro:id:`Input::force_update` can be
+called. This will trigger a complete refresh of the table; any changed
+elements from the file will be updated. After the update is finished the
+:bro:id:`Input::end_of_data` event will be raised.
In our example the call would look like:
@@ -141,30 +147,35 @@ In our example the call would look like:
Input::force_update("blacklist");
-The input framework also supports two automatic refresh modes. The first mode
-continually checks if a file has been changed. If the file has been changed, it
+Alternatively, the input framework can automatically refresh the table
+contents when it detects a change to the input file. To use this feature,
+you need to specify a non-default read mode by setting the ``mode`` option
+of the :bro:id:`Input::add_table` call. Valid values are ``Input::MANUAL``
+(the default), ``Input::REREAD`` and ``Input::STREAM``. For example,
+setting the value of the ``mode`` option in the previous example
+would look like this:
+
+.. code:: bro
+
+ Input::add_table([$source="blacklist.file", $name="blacklist",
+ $idx=Idx, $val=Val, $destination=blacklist,
+ $mode=Input::REREAD]);
+
+When using the reread mode (i.e., ``$mode=Input::REREAD``), Bro continually
+checks if the input file has been changed. If the file has been changed, it
is re-read and the data in the Bro table is updated to reflect the current
state. Each time a change has been detected and all the new data has been
read into the table, the ``end_of_data`` event is raised.
-The second mode is a streaming mode. This mode assumes that the source data
-file is an append-only file to which new data is continually appended. Bro
-continually checks for new data at the end of the file and will add the new
-data to the table. If newer lines in the file have the same index as previous
-lines, they will overwrite the values in the output table. Because of the
-nature of streaming reads (data is continually added to the table),
-the ``end_of_data`` event is never raised when using streaming reads.
+When using the streaming mode (i.e., ``$mode=Input::STREAM``), Bro assumes
+that the source data file is an append-only file to which new data is
+continually appended. Bro continually checks for new data at the end of
+the file and will add the new data to the table. If newer lines in the
+file have the same index as previous lines, they will overwrite the
+values in the output table. Because of the nature of streaming reads
+(data is continually added to the table), the ``end_of_data`` event
+is never raised when using streaming reads.
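
As a further illustration (re-using the ``Idx``/``Val`` records and the
``blacklist`` table from the earlier examples), a streaming read of the same
file would only change the ``mode`` option:

.. code:: bro

    # Same stream as before, but new lines appended to the file are picked
    # up continuously instead of re-reading the whole file.
    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::STREAM]);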
-The reading mode can be selected by setting the ``mode`` option of the
-add_table call. Valid values are ``MANUAL`` (the default), ``REREAD``
-and ``STREAM``.
-
-Hence, when adding ``$mode=Input::REREAD`` to the previous example, the
-blacklist table will always reflect the state of the blacklist input file.
-
-.. code:: bro
-
- Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD]);
Receiving change events
-----------------------
@@ -173,34 +184,40 @@ When re-reading files, it might be interesting to know exactly which lines in
the source files have changed.
For this reason, the input framework can raise an event each time when a data
-item is added to, removed from or changed in a table.
+item is added to, removed from, or changed in a table.
-The event definition looks like this:
+The event definition looks like this (note that you can change the name of
+this event in your own Bro script):
.. code:: bro
- event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) {
- # act on values
+ event entry(description: Input::TableDescription, tpe: Input::Event,
+ left: Idx, right: Val) {
+ # do something here...
+ print fmt("%s = %s", left, right);
}
-The event has to be specified in ``$ev`` in the ``add_table`` call:
+The event must be specified in ``$ev`` in the ``add_table`` call:
.. code:: bro
- Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]);
+ Input::add_table([$source="blacklist.file", $name="blacklist",
+ $idx=Idx, $val=Val, $destination=blacklist,
+ $mode=Input::REREAD, $ev=entry]);
-The ``description`` field of the event contains the arguments that were
+The ``description`` argument of the event contains the arguments that were
originally supplied to the add_table call. Hence, the name of the stream can,
-for example, be accessed with ``description$name``. ``tpe`` is an enum
-containing the type of the change that occurred.
+for example, be accessed with ``description$name``. The ``tpe`` argument of the
+event is an enum containing the type of the change that occurred.
If a line that was not previously present in the table has been added,
-then ``tpe`` will contain ``Input::EVENT_NEW``. In this case ``left`` contains
-the index of the added table entry and ``right`` contains the values of the
-added entry.
+then the value of ``tpe`` will be ``Input::EVENT_NEW``. In this case ``left``
+contains the index of the added table entry and ``right`` contains the
+values of the added entry.
If a table entry that already was present is altered during the re-reading or
-streaming read of a file, ``tpe`` will contain ``Input::EVENT_CHANGED``. In
+streaming read of a file, then the value of ``tpe`` will be
+``Input::EVENT_CHANGED``. In
this case ``left`` contains the index of the changed table entry and ``right``
contains the values of the entry before the change. The reason for this is
that the table already has been updated when the event is raised. The current
@@ -208,8 +225,9 @@ value in the table can be ascertained by looking up the current table value.
Hence it is possible to compare the new and the old values of the table.
If a table element is removed because it was no longer present during a
-re-read, then ``tpe`` will contain ``Input::REMOVED``. In this case ``left``
-contains the index and ``right`` the values of the removed element.
+re-read, then the value of ``tpe`` will be ``Input::EVENT_REMOVED``. In this
+case ``left`` contains the index and ``right`` the values of the removed
+element.
Filtering data during import
@@ -222,24 +240,26 @@ can either accept or veto the change by returning true for an accepted
change and false for a rejected change. Furthermore, it can alter the data
before it is written to the table.
-The following example filter will reject to add entries to the table when
+The following example filter will reject adding entries to the table when
they were generated over a month ago. It will accept all changes and all
removals of values that are already present in the table.
.. code:: bro
- Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD,
- $pred(typ: Input::Event, left: Idx, right: Val) = {
- if ( typ != Input::EVENT_NEW ) {
- return T;
- }
- return ( ( current_time() - right$timestamp ) < (30 day) );
- }]);
+ Input::add_table([$source="blacklist.file", $name="blacklist",
+ $idx=Idx, $val=Val, $destination=blacklist,
+ $mode=Input::REREAD,
+ $pred(typ: Input::Event, left: Idx, right: Val) = {
+ if ( typ != Input::EVENT_NEW ) {
+ return T;
+ }
+ return (current_time() - right$timestamp) < 30day;
+ }]);
To change elements while they are being imported, the predicate function can
manipulate ``left`` and ``right``. Note that predicate functions are called
before the change is committed to the table. Hence, when a table element is
-changed (``tpe`` is ``INPUT::EVENT_CHANGED``), ``left`` and ``right``
+changed (``typ`` is ``Input::EVENT_CHANGED``), ``left`` and ``right``
contain the new values, but the destination (``blacklist`` in our example)
still contains the old values. This allows predicate functions to examine
the changes between the old and the new version before deciding if they
@@ -250,14 +270,19 @@ Different readers
The input framework supports different kinds of readers for different kinds
of source data files. At the moment, the default reader reads ASCII files
-formatted in the Bro log file format (tab-separated values). At the moment,
-Bro comes with two other readers. The ``RAW`` reader reads a file that is
-split by a specified record separator (usually newline). The contents are
+formatted in the Bro log file format (tab-separated values with a "#fields"
+header line). Several other readers are included in Bro.
+
+The raw reader reads a file that is
+split by a specified record separator (newline by default). The contents are
returned line-by-line as strings; it can, for example, be used to read
configuration files and the like and is probably
only useful in the event mode and not for reading data to tables.
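
As a sketch of the raw reader in event mode (the file name, record name, and
event name here are made up; it assumes the reader's default behavior of
delivering one record-separated chunk per event as a string):

.. code:: bro

    type OneLine: record {
        s: string;
    };

    # Each line of the input file arrives as a separate event.
    event raw_line(description: Input::EventDescription, tpe: Input::Event,
                   s: string)
        {
        print fmt("line: %s", s);
        }

    event bro_init()
        {
        Input::add_event([$source="/etc/motd", $name="motd",
                          $reader=Input::READER_RAW, $fields=OneLine,
                          $ev=raw_line]);
        }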
-Another included reader is the ``BENCHMARK`` reader, which is being used
+The binary reader is intended to be used with file analysis input streams (and
+is the default type of reader for those streams).
+
+The benchmark reader is used
to optimize the speed of the input framework. It can generate arbitrary
amounts of semi-random data in all Bro data types supported by the input
framework.
@@ -270,75 +295,17 @@ aforementioned ones:
logging-input-sqlite
-Add_table options
------------------
-
-This section lists all possible options that can be used for the add_table
-function and gives a short explanation of their use. Most of the options
-already have been discussed in the previous sections.
-
-The possible fields that can be set for a table stream are:
-
- ``source``
- A mandatory string identifying the source of the data.
- For the ASCII reader this is the filename.
-
- ``name``
- A mandatory name for the filter that can later be used
- to manipulate it further.
-
- ``idx``
- Record type that defines the index of the table.
-
- ``val``
- Record type that defines the values of the table.
-
- ``reader``
- The reader used for this stream. Default is ``READER_ASCII``.
-
- ``mode``
- The mode in which the stream is opened. Possible values are
- ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
- ``MANUAL`` means that the file is not updated after it has
- been read. Changes to the file will not be reflected in the
- data Bro knows. ``REREAD`` means that the whole file is read
- again each time a change is found. This should be used for
- files that are mapped to a table where individual lines can
- change. ``STREAM`` means that the data from the file is
- streamed. Events / table entries will be generated as new
- data is appended to the file.
-
- ``destination``
- The destination table.
-
- ``ev``
- Optional event that is raised, when values are added to,
- changed in, or deleted from the table. Events are passed an
- Input::Event description as the first argument, the index
- record as the second argument and the values as the third
- argument.
-
- ``pred``
- Optional predicate, that can prevent entries from being added
- to the table and events from being sent.
-
- ``want_record``
- Boolean value, that defines if the event wants to receive the
- fields inside of a single record value, or individually
- (default). This can be used if ``val`` is a record
- containing only one type. In this case, if ``want_record`` is
- set to false, the table will contain elements of the type
- contained in ``val``.
Reading Data to Events
======================
The second supported mode of the input framework is reading data to Bro
-events instead of reading them to a table using event streams.
+events instead of reading them to a table.
Event streams work very similarly to table streams that were already
discussed in much detail. To read the blacklist of the previous example
-into an event stream, the following Bro code could be used:
+into an event stream, the :bro:id:`Input::add_event` function is used.
+For example:
.. code:: bro
@@ -348,12 +315,15 @@ into an event stream, the following Bro code could be used:
reason: string;
};
- event blacklistentry(description: Input::EventDescription, tpe: Input::Event, ip: addr, timestamp: time, reason: string) {
- # work with event data
+ event blacklistentry(description: Input::EventDescription,
+ t: Input::Event, data: Val) {
+ # do something here...
+ print "data:", data;
}
event bro_init() {
- Input::add_event([$source="blacklist.file", $name="blacklist", $fields=Val, $ev=blacklistentry]);
+ Input::add_event([$source="blacklist.file", $name="blacklist",
+ $fields=Val, $ev=blacklistentry]);
}
@@ -364,52 +334,3 @@ data types are provided in a single record definition.
Apart from this, event streams work exactly the same as table streams and
support most of the options that are also supported for table streams.
-The options that can be set when creating an event stream with
-``add_event`` are:
-
- ``source``
- A mandatory string identifying the source of the data.
- For the ASCII reader this is the filename.
-
- ``name``
- A mandatory name for the stream that can later be used
- to remove it.
-
- ``fields``
- Name of a record type containing the fields, which should be
- retrieved from the input stream.
-
- ``ev``
- The event which is fired, after a line has been read from the
- input source. The first argument that is passed to the event
- is an Input::Event structure, followed by the data, either
- inside of a record (if ``want_record is set``) or as
- individual fields. The Input::Event structure can contain
- information, if the received line is ``NEW``, has been
- ``CHANGED`` or ``DELETED``. Since the ASCII reader cannot
- track this information for event filters, the value is
- always ``NEW`` at the moment.
-
- ``mode``
- The mode in which the stream is opened. Possible values are
- ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
- ``MANUAL`` means that the file is not updated after it has
- been read. Changes to the file will not be reflected in the
- data Bro knows. ``REREAD`` means that the whole file is read
- again each time a change is found. This should be used for
- files that are mapped to a table where individual lines can
- change. ``STREAM`` means that the data from the file is
- streamed. Events / table entries will be generated as new
- data is appended to the file.
-
- ``reader``
- The reader used for this stream. Default is ``READER_ASCII``.
-
- ``want_record``
- Boolean value, that defines if the event wants to receive the
- fields inside of a single record value, or individually
- (default). If this is set to true, the event will receive a
- single record of the type provided in ``fields``.
-
-
-
diff --git a/doc/frameworks/logging-input-sqlite.rst b/doc/frameworks/logging-input-sqlite.rst
index 6f5e867686..e0f10308ae 100644
--- a/doc/frameworks/logging-input-sqlite.rst
+++ b/doc/frameworks/logging-input-sqlite.rst
@@ -23,17 +23,18 @@ In contrast to the ASCII reader and writer, the SQLite plugins have not yet
seen extensive use in production environments. While we are not aware
of any issues with them, we urge caution when using them
in production environments. There could be lingering issues which only occur
-when the plugins are used with high amounts of data or in high-load environments.
+when the plugins are used with high amounts of data or in high-load
+environments.
Logging Data into SQLite Databases
==================================
Logging support for SQLite is available in all Bro installations starting with
-version 2.2. There is no need to load any additional scripts or for any compile-time
-configurations.
+version 2.2. There is no need to load any additional scripts or for any
+compile-time configurations.
-Sending data from existing logging streams to SQLite is rather straightforward. You
-have to define a filter which specifies SQLite as the writer.
+Sending data from existing logging streams to SQLite is rather straightforward.
+You have to define a filter which specifies SQLite as the writer.
The following example code adds SQLite as a filter for the connection log:
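
The included example script is not reproduced in this patch. A minimal sketch
of such a filter (assuming the stock ``Log::WRITER_SQLITE`` writer and the
``/var/db/conn`` path referred to below) could look like:

.. code:: bro

    event bro_init()
        {
        # Add an SQLite filter to the connection log; the writer appends
        # ".sqlite" to the path, yielding /var/db/conn.sqlite.
        local filter: Log::Filter = [$name="sqlite", $path="/var/db/conn",
                                     $writer=Log::WRITER_SQLITE];
        Log::add_filter(Conn::LOG, filter);
        }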
@@ -44,15 +45,15 @@ The following example code adds SQLite as a filter for the connection log:
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-conn-filter.bro
-Bro will create the database file ``/var/db/conn.sqlite``, if it does not already exist.
-It will also create a table with the name ``conn`` (if it does not exist) and start
-appending connection information to the table.
+Bro will create the database file ``/var/db/conn.sqlite``, if it does not
+already exist. It will also create a table with the name ``conn`` (if it
+does not exist) and start appending connection information to the table.
-At the moment, SQLite databases are not rotated the same way ASCII log-files are. You
-have to take care to create them in an adequate location.
+At the moment, SQLite databases are not rotated the same way ASCII log-files
+are. You have to take care to create them in an adequate location.
-If you examine the resulting SQLite database, the schema will contain the same fields
-that are present in the ASCII log files::
+If you examine the resulting SQLite database, the schema will contain the
+same fields that are present in the ASCII log files::
# sqlite3 /var/db/conn.sqlite
@@ -75,27 +76,31 @@ from being created, you can remove the default filter:
Log::remove_filter(Conn::LOG, "default");
-To create a custom SQLite log file, you have to create a new log stream that contains
-just the information you want to commit to the database. Please refer to the
-:ref:`framework-logging` documentation on how to create custom log streams.
+To create a custom SQLite log file, you have to create a new log stream
+that contains just the information you want to commit to the database.
+Please refer to the :ref:`framework-logging` documentation on how to
+create custom log streams.
Reading Data from SQLite Databases
==================================
-Like logging support, support for reading data from SQLite databases is built into Bro starting
-with version 2.2.
+Like logging support, support for reading data from SQLite databases is
+built into Bro starting with version 2.2.
-Just as with the text-based input readers (please refer to the :ref:`framework-input`
-documentation for them and for basic information on how to use the input-framework), the SQLite reader
-can be used to read data - in this case the result of SQL queries - into tables or into events.
+Just as with the text-based input readers (please refer to the
+:ref:`framework-input` documentation for them and for basic information
+on how to use the input framework), the SQLite reader can be used to
+read data - in this case the result of SQL queries - into tables or into
+events.
Reading Data into Tables
------------------------
-To read data from a SQLite database, we first have to provide Bro with the information, how
-the resulting data will be structured. For this example, we expect that we have a SQLite database,
-which contains host IP addresses and the user accounts that are allowed to log into a specific
-machine.
+To read data from a SQLite database, we first have to tell Bro how the
+resulting data will be structured. For this example, we assume that we have
+a SQLite database which contains host IP addresses and the user accounts
+that are allowed to log into a specific machine.
The SQLite commands to create the schema are as follows::
@@ -107,8 +112,8 @@ The SQLite commands to create the schema are as follows::
insert into machines_to_users values ('192.168.17.2', 'bernhard');
insert into machines_to_users values ('192.168.17.3', 'seth,matthias');
-After creating a file called ``hosts.sqlite`` with this content, we can read the resulting table
-into Bro:
+After creating a file called ``hosts.sqlite`` with this content, we can
+read the resulting table into Bro:
.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-table.bro
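
The included script itself is likewise not reproduced here. A sketch of such a
read (the record field names, the ``userlist`` table, and the use of the
reader's ``query`` config option are assumptions for illustration) might look
like:

.. code:: bro

    type Idx: record {
        host: addr;
    };

    type Val: record {
        users: string;
    };

    global userlist: table[addr] of Val = table();

    event bro_init()
        {
        # Run the query against hosts.sqlite and fill the table with the
        # result; then remove the stream, since a one-time read is enough.
        Input::add_table([$source="/var/db/hosts", $name="hosts",
                          $idx=Idx, $val=Val, $destination=userlist,
                          $reader=Input::READER_SQLITE,
                          $config=table(["query"] = "select * from machines_to_users;")]);
        Input::remove("hosts");
        }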
@@ -117,22 +122,25 @@ into Bro:
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-table.bro
-Afterwards, that table can be used to check logins into hosts against the available
-userlist.
+Afterwards, that table can be used to check logins into hosts against
+the available userlist.
Turning Data into Events
------------------------
-The second mode is to use the SQLite reader to output the input data as events. Typically there
-are two reasons to do this. First, when the structure of the input data is too complicated
-for a direct table import. In this case, the data can be read into an event which can then
-create the necessary data structures in Bro in scriptland.
+The second mode is to use the SQLite reader to output the input data as events.
+Typically there are two reasons to do this. First, when the structure of
+the input data is too complicated for a direct table import. In this case,
+the data can be read into an event which can then create the necessary
+data structures in Bro in scriptland.
-The second reason is, that the dataset is too big to hold it in memory. In this case, the checks
-can be performed on-demand, when Bro encounters a situation where it needs additional information.
+The second reason is that the dataset is too big to hold in memory. In
+this case, the checks can be performed on demand, when Bro encounters a
+situation where it needs additional information.
-An example for this would be an internal huge database with malware hashes. Live database queries
-could be used to check the sporadically happening downloads against the database.
+An example of this would be a huge internal database of malware hashes.
+Live database queries could be used to check sporadic downloads against
+the database.
The SQLite commands to create the schema are as follows::
@@ -151,9 +159,10 @@ The SQLite commands to create the schema are as follows::
insert into malware_hashes values ('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace');
-The following code uses the file-analysis framework to get the sha1 hashes of files that are
-transmitted over the network. For each hash, a SQL-query is run against SQLite. If the query
-returns with a result, we had a hit against our malware-database and output the matching hash.
+The following code uses the file-analysis framework to get the sha1 hashes
+of files that are transmitted over the network. For each hash, an SQL query
+is run against SQLite. If the query returns a result, we have a hit
+against our malware database and output the matching hash.
.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-events.bro
@@ -162,5 +171,5 @@ returns with a result, we had a hit against our malware-database and output the
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-events.bro
-If you run this script against the trace in ``testing/btest/Traces/ftp/ipv4.trace``, you
-will get one hit.
+If you run this script against the trace in
+``testing/btest/Traces/ftp/ipv4.trace``, you will get one hit.
diff --git a/doc/frameworks/logging.rst b/doc/frameworks/logging.rst
index 9b6fef0c15..a5128da202 100644
--- a/doc/frameworks/logging.rst
+++ b/doc/frameworks/logging.rst
@@ -537,6 +537,5 @@ Additional writers are available as external plugins:
.. toctree::
:maxdepth: 1
- ../components/bro-plugins/dataseries/README
- ../components/bro-plugins/elasticsearch/README
+ ../components/bro-plugins/README
diff --git a/doc/install/guidelines.rst b/doc/install/guidelines.rst
index d1e1777165..a56110f865 100644
--- a/doc/install/guidelines.rst
+++ b/doc/install/guidelines.rst
@@ -46,4 +46,4 @@ where Bro was originally installed). Review the files for differences
before copying and make adjustments as necessary (use the new version for
differences that aren't a result of a local change). Of particular note,
the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes
-to the ``SpoolDir`` and ``LogDir`` settings.
+to any settings that specify a pathname.
diff --git a/doc/install/install.rst b/doc/install/install.rst
index eff3ec9728..60c7cf27d1 100644
--- a/doc/install/install.rst
+++ b/doc/install/install.rst
@@ -4,7 +4,7 @@
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://brew.sh
-.. _bro downloads page: http://bro.org/download/index.html
+.. _bro downloads page: https://www.bro.org/download/index.html
.. _installing-bro:
@@ -32,24 +32,22 @@ before you begin:
* Libz
* Bash (for BroControl)
* Python (for BroControl)
- * C++ Actor Framework (CAF) (http://actor-framework.org)
+ * C++ Actor Framework (CAF) version 0.14 (http://actor-framework.org)
To build Bro from source, the following additional dependencies are required:
* CMake 2.8 or greater (http://www.cmake.org)
* Make
- * C/C++ compiler with C++11 support
+ * C/C++ compiler with C++11 support (GCC 4.8+ or Clang 3.3+)
* SWIG (http://www.swig.org)
* Bison (GNU Parser Generator)
* Flex (Fast Lexical Analyzer)
* Libpcap headers (http://www.tcpdump.org)
* OpenSSL headers (http://www.openssl.org)
* zlib headers
- * Perl
+ * Python
-.. todo::
-
- Update with instructions for installing CAF.
+To install CAF, first download the source code of the required version from: https://github.com/actor-framework/actor-framework/releases
To install the required dependencies, you can use:
@@ -72,11 +70,26 @@ To install the required dependencies, you can use:
.. console::
- sudo pkg install bash cmake swig bison python perl5 py27-sqlite3
+ sudo pkg install bash cmake swig bison python py27-sqlite3
Note that in older versions of FreeBSD, you might have to use the
"pkg_add -r" command instead of "pkg install".
+ For older versions of FreeBSD (especially FreeBSD 9.x), the system compiler
+ is not new enough to compile Bro. For these systems, you will have to install
+ a newer compiler using pkg; the ``clang34`` package should work.
+
+   You will also have to define several environment variables on these older
+   systems, similar to the following, so that the new compiler and headers
+   are used when calling configure:
+
+ .. console::
+
+ export CC=clang34
+ export CXX=clang++34
+ export CXXFLAGS="-stdlib=libc++ -I${LOCALBASE}/include/c++/v1 -L${LOCALBASE}/lib"
+ export LDFLAGS="-pthread"
+
* Mac OS X:
Compiling source code on Macs requires first installing Xcode_ (in older
@@ -84,11 +97,14 @@ To install the required dependencies, you can use:
"Preferences..." -> "Downloads" menus to install the "Command Line Tools"
component).
- OS X comes with all required dependencies except for CMake_ and SWIG_.
- Distributions of these dependencies can likely be obtained from your
- preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
- or Homebrew_). Specifically for MacPorts, the ``cmake``, ``swig``,
- and ``swig-python`` packages provide the required dependencies.
+ OS X comes with all required dependencies except for CMake_, SWIG_,
+   OpenSSL, and CAF. (OpenSSL is included in OS X versions 10.10 and
+   older, where it does not need to be installed manually; it was
+   removed in OS X 10.11.) Distributions of these dependencies can
+ likely be obtained from your preferred Mac OS X package management
+ system (e.g. Homebrew_, MacPorts_, or Fink_). Specifically for
+ Homebrew, the ``cmake``, ``swig``, ``openssl`` and ``caf`` packages
+ provide the required dependencies.
Optional Dependencies
@@ -101,6 +117,8 @@ build time:
* sendmail (enables Bro and BroControl to send mail)
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
+ * jemalloc (http://www.canonware.com/jemalloc/)
+ * PF_RING (Linux only, see :doc:`Cluster Configuration <../configuration/index>`)
* ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)
LibGeoIP is probably the most interesting and can be installed
@@ -117,7 +135,7 @@ code forms.
Using Pre-Built Binary Release Packages
-=======================================
+---------------------------------------
See the `bro downloads page`_ for currently supported/targeted
platforms for binary releases and for installation instructions.
@@ -126,25 +144,21 @@ platforms for binary releases and for installation instructions.
Linux based binary installations are usually performed by adding
information about the Bro packages to the respective system packaging
- tool. Then the usual system utilities such as ``apt``, ``yum``
- or ``zypper`` are used to perform the installation. By default,
- installations of binary packages will go into ``/opt/bro``.
-
-* MacOS Disk Image with Installer
-
- Just open the ``Bro-*.dmg`` and then run the ``.pkg`` installer.
- Everything installed by the package will go into ``/opt/bro``.
+ tool. Then the usual system utilities such as ``apt``, ``dnf``, ``yum``,
+ or ``zypper`` are used to perform the installation.
The primary install prefix for binary packages is ``/opt/bro``.
Installing from Source
-======================
+----------------------
Bro releases are bundled into source packages for convenience and are
-available on the `bro downloads page`_. Alternatively, the latest
-Bro development version can be obtained through git repositories
+available on the `bro downloads page`_.
+
+Alternatively, the latest Bro development version
+can be obtained through git repositories
hosted at ``git.bro.org``. See our `git development documentation
-`_ for comprehensive
+`_ for comprehensive
information on Bro's use of git revision control, but the short story
for downloading the full source code experience for Bro via git is:
@@ -165,13 +179,23 @@ run ``./configure --help``):
make
make install
+If the ``configure`` script fails, then it is most likely because it either
+couldn't find a required dependency or it couldn't find a sufficiently new
+version of a dependency. Assuming that you already installed all required
+dependencies, you may need to use one of the ``--with-*`` options
+that can be given to the ``configure`` script to help it locate a dependency.
+
The default installation path is ``/usr/local/bro``, which would typically
-require root privileges when doing the ``make install``. A different
-installation path can be chosen by specifying the ``--prefix`` option.
-Note that ``/usr`` and ``/opt/bro`` are the
+require root privileges when doing the ``make install``. A different
+installation path can be chosen by specifying the ``configure`` script
+``--prefix`` option. Note that ``/usr`` and ``/opt/bro`` are the
standard prefixes for binary Bro packages to be installed, so those are
typically not good choices unless you are creating such a package.
+OpenBSD users, please see our `FAQ
+<https://www.bro.org/documentation/faq.html>`_ if you are having
+problems installing Bro.
+
Depending on the Bro package you downloaded, there may be auxiliary
tools and libraries available in the ``aux/`` directory. Some of them
will be automatically built and installed along with Bro. There are
@@ -180,10 +204,6 @@ turn off unwanted auxiliary projects that would otherwise be installed
automatically. Finally, use ``make install-aux`` to install some of
the other programs that are in the ``aux/bro-aux`` directory.
-OpenBSD users, please see our `FAQ
-/www.bro.org/documentation/faq.html>`_ if you are having
-problems installing Bro.
-
Finally, if you want to build the Bro documentation (not required, because
all of the documentation for the latest Bro release is available on the
Bro web site), there are instructions in ``doc/README`` in the source
@@ -192,7 +212,7 @@ distribution.
Configure the Run-Time Environment
==================================
-Just remember that you may need to adjust your ``PATH`` environment variable
+You may want to adjust your ``PATH`` environment variable
according to the platform/shell/package you're using. For example:
Bourne-Shell Syntax:
diff --git a/doc/script-reference/attributes.rst b/doc/script-reference/attributes.rst
index d37cc2a98a..fec72570d2 100644
--- a/doc/script-reference/attributes.rst
+++ b/doc/script-reference/attributes.rst
@@ -54,13 +54,16 @@ Here is a more detailed explanation of each attribute:
.. bro:attr:: &redef
- Allows for redefinition of initial values of global objects declared as
- constant.
-
- In this example, the constant (assuming it is global) can be redefined
- with a :bro:keyword:`redef` at some later point::
+ Allows use of a :bro:keyword:`redef` to redefine initial values of
+ global variables (i.e., variables declared either :bro:keyword:`global`
+ or :bro:keyword:`const`). Example::
const clever = T &redef;
+ global cache_size = 256 &redef;
+
+   Note that a variable declared "global" can also have its value changed
+   with assignment statements (regardless of whether it has the "&redef"
+   attribute).
.. bro:attr:: &priority
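To illustrate the ``&redef`` attribute described above: once a global has been declared with ``&redef``, a later script (for example ``local.bro``) can change its initial value with ``redef``, and a ``global`` (as opposed to ``const``) variable can additionally be reassigned at runtime. A brief sketch, reusing the variable names from the example above:

	# In a script loaded later:
	redef clever = F;
	redef cache_size = 1024;

	# Because cache_size was declared "global" rather than "const", normal
	# assignment also works, independent of &redef:
	event bro_init()
		{
		cache_size = 2048;
		}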
diff --git a/doc/script-reference/statements.rst b/doc/script-reference/statements.rst
index 1f5b388e7f..e2f93a5627 100644
--- a/doc/script-reference/statements.rst
+++ b/doc/script-reference/statements.rst
@@ -71,9 +71,11 @@ Statements
Declarations
------------
-The following global declarations cannot occur within a function, hook, or
-event handler. Also, these declarations cannot appear after any statements
-that are outside of a function, hook, or event handler.
+Declarations cannot occur within a function, hook, or event handler.
+
+Declarations must appear before any statements (except those statements
+that are in a function, hook, or event handler) in the concatenation of
+all loaded Bro scripts.
.. bro:keyword:: module
@@ -126,9 +128,12 @@ that are outside of a function, hook, or event handler.
.. bro:keyword:: global
Variables declared with the "global" keyword will be global.
+
If a type is not specified, then an initializer is required so that
the type can be inferred. Likewise, if an initializer is not supplied,
- then the type must be specified. Example::
+ then the type must be specified. In some cases, when the type cannot
+ be correctly inferred, the type must be specified even when an
+ initializer is present. Example::
global pi = 3.14;
global hosts: set[addr];
@@ -136,10 +141,11 @@ that are outside of a function, hook, or event handler.
Variable declarations outside of any function, hook, or event handler are
required to use this keyword (unless they are declared with the
- :bro:keyword:`const` keyword). Definitions of functions, hooks, and
- event handlers are not allowed to use the "global"
- keyword (they already have global scope), except function declarations
- where no function body is supplied use the "global" keyword.
+ :bro:keyword:`const` keyword instead).
+
+ Definitions of functions, hooks, and event handlers are not allowed
+ to use the "global" keyword. However, function declarations (i.e., no
+ function body is provided) can use the "global" keyword.
The scope of a global variable begins where the declaration is located,
and extends through all remaining Bro scripts that are loaded (however,
@@ -150,18 +156,22 @@ that are outside of a function, hook, or event handler.
.. bro:keyword:: const
A variable declared with the "const" keyword will be constant.
+
Variables declared as constant are required to be initialized at the
- time of declaration. Example::
+ time of declaration. Normally, the type is inferred from the initializer,
+ but the type can be explicitly specified. Example::
const pi = 3.14;
const ssh_port: port = 22/tcp;
- The value of a constant cannot be changed later (the only
- exception is if the variable is global and has the :bro:attr:`&redef`
- attribute, then its value can be changed only with a :bro:keyword:`redef`).
+ The value of a constant cannot be changed. The only exception is if the
+ variable is a global constant and has the :bro:attr:`&redef`
+ attribute, but even then its value can be changed only with a
+ :bro:keyword:`redef`.
The scope of a constant is local if the declaration is in a
function, hook, or event handler, and global otherwise.
+
Note that the "const" keyword cannot be used with either the "local"
or "global" keywords (i.e., "const" replaces "local" and "global").
@@ -184,7 +194,8 @@ that are outside of a function, hook, or event handler.
.. bro:keyword:: redef
There are three ways that "redef" can be used: to change the value of
- a global variable, to extend a record type or enum type, or to specify
+ a global variable (but only if it has the :bro:attr:`&redef` attribute),
+ to extend a record type or enum type, or to specify
a new event handler body that replaces all those that were previously
defined.
@@ -237,13 +248,14 @@ that are outside of a function, hook, or event handler.
Statements
----------
+Statements (except those contained within a function, hook, or event
+handler) can appear only after all global declarations in the concatenation
+of all loaded Bro scripts.
+
Each statement in a Bro script must be terminated with a semicolon (with a
few exceptions noted below). An individual statement can span multiple
lines.
-All statements (except those contained within a function, hook, or event
-handler) must appear after all global declarations.
-
Here are the statements that the Bro scripting language supports.
.. bro:keyword:: add
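As a brief sketch of the three uses of ``redef`` described above (the ``Example`` module and its identifiers are hypothetical; ``connection`` and ``Notice::Type`` are standard Bro types):

	@load base/frameworks/notice

	module Example;

	export {
		## A tunable constant; &redef permits later redefinition.
		const max_items = 100 &redef;
	}

	# 1. Change the value of a global that carries the &redef attribute.
	redef Example::max_items = 500;

	# 2. Extend a record type or an enum type.
	redef record connection += {
		seen_by_example: bool &default=F;
	};

	redef enum Notice::Type += {
		Something_Odd
	};

	# 3. Supply an event handler body that replaces all previously defined
	#    bodies (shown for illustration only, since it would suppress every
	#    other bro_init handler loaded so far).
	redef event bro_init()
		{
		print "replacement bro_init body";
		}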
diff --git a/doc/script-reference/types.rst b/doc/script-reference/types.rst
index cc601db75f..847e0f8fab 100644
--- a/doc/script-reference/types.rst
+++ b/doc/script-reference/types.rst
@@ -340,15 +340,18 @@ Here is a more detailed description of each type:
table [ type^+ ] of type
- where *type^+* is one or more types, separated by commas.
- For example:
+ where *type^+* is one or more types, separated by commas. The
+ index type cannot be any of the following types: pattern, table, set,
+ vector, file, opaque, any.
+
+ Here is an example of declaring a table indexed by "count" values
+ and yielding "string" values:
.. code:: bro
global a: table[count] of string;
- declares a table indexed by "count" values and yielding
- "string" values. The yield type can also be more complex:
+ The yield type can also be more complex:
.. code:: bro
@@ -441,7 +444,9 @@ Here is a more detailed description of each type:
set [ type^+ ]
- where *type^+* is one or more types separated by commas.
+ where *type^+* is one or more types separated by commas. The
+ index type cannot be any of the following types: pattern, table, set,
+ vector, file, opaque, any.
Sets can be initialized by listing elements enclosed by curly braces:
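A short sketch of tables and sets with compound indices, which are allowed as long as every index type falls outside the excluded list above:

	# A table indexed by (addr, port) pairs, yielding counts.
	global conn_bytes: table[addr, port] of count;

	# A set of (addr, port) pairs, initialized with curly braces.
	global allowed_services: set[addr, port] = {
		[192.168.1.1, 80/tcp],
		[192.168.1.1, 443/tcp]
	};

	event bro_init()
		{
		conn_bytes[192.168.1.1, 80/tcp] = 42;
		print ([192.168.1.1, 443/tcp] in allowed_services);   # prints T
		}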
diff --git a/doc/scripting/data_struct_record_01.bro b/doc/scripting/data_struct_record_01.bro
index a80d30faae..ab28501f96 100644
--- a/doc/scripting/data_struct_record_01.bro
+++ b/doc/scripting/data_struct_record_01.bro
@@ -4,7 +4,7 @@ type Service: record {
rfc: count;
};
-function print_service(serv: Service): string
+function print_service(serv: Service)
{
print fmt("Service: %s(RFC%d)",serv$name, serv$rfc);
diff --git a/doc/scripting/data_struct_record_02.bro b/doc/scripting/data_struct_record_02.bro
index b10b3feac0..515c8a716c 100644
--- a/doc/scripting/data_struct_record_02.bro
+++ b/doc/scripting/data_struct_record_02.bro
@@ -9,7 +9,7 @@ type System: record {
services: set[Service];
};
-function print_service(serv: Service): string
+function print_service(serv: Service)
{
print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc);
@@ -17,7 +17,7 @@ function print_service(serv: Service): string
print fmt(" port: %s", p);
}
-function print_system(sys: System): string
+function print_system(sys: System)
{
print fmt("System: %s", sys$name);
diff --git a/doc/scripting/index.rst b/doc/scripting/index.rst
index 2b5cfbb49c..a776fc0ad3 100644
--- a/doc/scripting/index.rst
+++ b/doc/scripting/index.rst
@@ -776,7 +776,7 @@ string against which it will be tested to be on the right.
In the sample above, two local variables are declared to hold our
sample sentence and regular expression. Our regular expression in
this case will return true if the string contains either the word
-``quick`` or the word ``fox``. The ``if`` statement in the script uses
+``quick`` or the word ``lazy``. The ``if`` statement in the script uses
embedded matching and the ``in`` operator to check for the existence
of the pattern within the string. If the statement resolves to true,
:bro:id:`split` is called to break the string into separate pieces.
diff --git a/scripts/base/frameworks/broker/__load__.bro b/scripts/base/frameworks/broker/__load__.bro
index a10fe855df..018d772f4f 100644
--- a/scripts/base/frameworks/broker/__load__.bro
+++ b/scripts/base/frameworks/broker/__load__.bro
@@ -1 +1,2 @@
@load ./main
+@load ./store
diff --git a/scripts/base/frameworks/broker/main.bro b/scripts/base/frameworks/broker/main.bro
index e8b57d57d9..d8b4a208a2 100644
--- a/scripts/base/frameworks/broker/main.bro
+++ b/scripts/base/frameworks/broker/main.bro
@@ -1,11 +1,11 @@
##! Various data structure definitions for use with Bro's communication system.
-module BrokerComm;
+module Broker;
export {
## A name used to identify this endpoint to peers.
- ## .. bro:see:: BrokerComm::connect BrokerComm::listen
+ ## .. bro:see:: Broker::connect Broker::listen
const endpoint_name = "" &redef;
## Change communication behavior.
@@ -32,11 +32,11 @@ export {
## Opaque communication data.
type Data: record {
- d: opaque of BrokerComm::Data &optional;
+ d: opaque of Broker::Data &optional;
};
## Opaque communication data.
- type DataVector: vector of BrokerComm::Data;
+ type DataVector: vector of Broker::Data;
## Opaque event communication data.
type EventArgs: record {
@@ -49,55 +49,7 @@ export {
## Opaque communication data used as a convenient way to wrap key-value
## pairs that comprise table entries.
type TableItem : record {
- key: BrokerComm::Data;
- val: BrokerComm::Data;
- };
-}
-
-module BrokerStore;
-
-export {
-
- ## Whether a data store query could be completed or not.
- type QueryStatus: enum {
- SUCCESS,
- FAILURE,
- };
-
- ## An expiry time for a key-value pair inserted in to a data store.
- type ExpiryTime: record {
- ## Absolute point in time at which to expire the entry.
- absolute: time &optional;
- ## A point in time relative to the last modification time at which
- ## to expire the entry. New modifications will delay the expiration.
- since_last_modification: interval &optional;
- };
-
- ## The result of a data store query.
- type QueryResult: record {
- ## Whether the query completed or not.
- status: BrokerStore::QueryStatus;
- ## The result of the query. Certain queries may use a particular
- ## data type (e.g. querying store size always returns a count, but
- ## a lookup may return various data types).
- result: BrokerComm::Data;
- };
-
- ## Options to tune the SQLite storage backend.
- type SQLiteOptions: record {
- ## File system path of the database.
- path: string &default = "store.sqlite";
- };
-
- ## Options to tune the RocksDB storage backend.
- type RocksDBOptions: record {
- ## File system path of the database.
- path: string &default = "store.rocksdb";
- };
-
- ## Options to tune the particular storage backends.
- type BackendOptions: record {
- sqlite: SQLiteOptions &default = SQLiteOptions();
- rocksdb: RocksDBOptions &default = RocksDBOptions();
+ key: Broker::Data;
+ val: Broker::Data;
};
}
diff --git a/scripts/base/frameworks/broker/store.bro b/scripts/base/frameworks/broker/store.bro
new file mode 100644
index 0000000000..e6468f2b2c
--- /dev/null
+++ b/scripts/base/frameworks/broker/store.bro
@@ -0,0 +1,51 @@
+##! Various data structure definitions for use with Bro's communication system.
+
+@load ./main
+
+module Broker;
+
+export {
+
+ ## Whether a data store query could be completed or not.
+ type QueryStatus: enum {
+ SUCCESS,
+ FAILURE,
+ };
+
+	## An expiry time for a key-value pair inserted into a data store.
+ type ExpiryTime: record {
+ ## Absolute point in time at which to expire the entry.
+ absolute: time &optional;
+ ## A point in time relative to the last modification time at which
+ ## to expire the entry. New modifications will delay the expiration.
+ since_last_modification: interval &optional;
+ };
+
+ ## The result of a data store query.
+ type QueryResult: record {
+ ## Whether the query completed or not.
+ status: Broker::QueryStatus;
+ ## The result of the query. Certain queries may use a particular
+ ## data type (e.g. querying store size always returns a count, but
+ ## a lookup may return various data types).
+ result: Broker::Data;
+ };
+
+ ## Options to tune the SQLite storage backend.
+ type SQLiteOptions: record {
+ ## File system path of the database.
+ path: string &default = "store.sqlite";
+ };
+
+ ## Options to tune the RocksDB storage backend.
+ type RocksDBOptions: record {
+ ## File system path of the database.
+ path: string &default = "store.rocksdb";
+ };
+
+ ## Options to tune the particular storage backends.
+ type BackendOptions: record {
+ sqlite: SQLiteOptions &default = SQLiteOptions();
+ rocksdb: RocksDBOptions &default = RocksDBOptions();
+ };
+}
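The records above are plain Bro types, so backend options can be built with ordinary record constructors. A small sketch (the path is an arbitrary example; actually attaching the options to a data store happens through the Broker store API, which is not shown here):

	event bro_init()
		{
		local opts = Broker::BackendOptions(
		    $sqlite = Broker::SQLiteOptions($path = "/var/db/bro-store.sqlite"));
		print opts$sqlite$path;    # /var/db/bro-store.sqlite
		print opts$rocksdb$path;   # default: store.rocksdb
		}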
diff --git a/scripts/base/frameworks/cluster/main.bro b/scripts/base/frameworks/cluster/main.bro
index 218e309bad..3451cb4169 100644
--- a/scripts/base/frameworks/cluster/main.bro
+++ b/scripts/base/frameworks/cluster/main.bro
@@ -43,35 +43,35 @@ export {
## software.
TIME_MACHINE,
};
-
+
## Events raised by a manager and handled by the workers.
const manager2worker_events = /Drop::.*/ &redef;
-
+
## Events raised by a manager and handled by proxies.
const manager2proxy_events = /EMPTY/ &redef;
-
+
## Events raised by proxies and handled by a manager.
const proxy2manager_events = /EMPTY/ &redef;
-
+
## Events raised by proxies and handled by workers.
const proxy2worker_events = /EMPTY/ &redef;
-
+
## Events raised by workers and handled by a manager.
const worker2manager_events = /(TimeMachine::command|Drop::.*)/ &redef;
-
+
## Events raised by workers and handled by proxies.
const worker2proxy_events = /EMPTY/ &redef;
-
+
## Events raised by TimeMachine instances and handled by a manager.
const tm2manager_events = /EMPTY/ &redef;
-
+
## Events raised by TimeMachine instances and handled by workers.
const tm2worker_events = /EMPTY/ &redef;
-
- ## Events sent by the control host (i.e. BroControl) when dynamically
+
+ ## Events sent by the control host (i.e. BroControl) when dynamically
## connecting to a running instance to update settings or request data.
const control_events = Control::controller_events &redef;
-
+
## Record type to indicate a node in a cluster.
type Node: record {
## Identifies the type of cluster node in this node's configuration.
@@ -96,13 +96,13 @@ export {
## Name of a time machine node with which this node connects.
time_machine: string &optional;
};
-
+
## This function can be called at any time to determine if the cluster
## framework is being enabled for this run.
##
## Returns: True if :bro:id:`Cluster::node` has been set.
global is_enabled: function(): bool;
-
+
## This function can be called at any time to determine what type of
## cluster node the current Bro instance is going to be acting as.
## If :bro:id:`Cluster::is_enabled` returns false, then
@@ -110,22 +110,25 @@ export {
##
## Returns: The :bro:type:`Cluster::NodeType` the calling node acts as.
global local_node_type: function(): NodeType;
-
+
## This gives the value for the number of workers currently connected to,
- ## and it's maintained internally by the cluster framework. It's
- ## primarily intended for use by managers to find out how many workers
+ ## and it's maintained internally by the cluster framework. It's
+ ## primarily intended for use by managers to find out how many workers
## should be responding to requests.
global worker_count: count = 0;
-
+
## The cluster layout definition. This should be placed into a filter
- ## named cluster-layout.bro somewhere in the BROPATH. It will be
+ ## named cluster-layout.bro somewhere in the BROPATH. It will be
## automatically loaded if the CLUSTER_NODE environment variable is set.
## Note that BroControl handles all of this automatically.
const nodes: table[string] of Node = {} &redef;
-
+
## This is usually supplied on the command line for each instance
## of the cluster that is started up.
const node = getenv("CLUSTER_NODE") &redef;
+
+ ## Interval for retrying failed connections between cluster nodes.
+ const retry_interval = 1min &redef;
}
function is_enabled(): bool
@@ -158,6 +161,6 @@ event bro_init() &priority=5
Reporter::error(fmt("'%s' is not a valid node in the Cluster::nodes configuration", node));
terminate();
}
-
+
Log::create_stream(Cluster::LOG, [$columns=Info, $path="cluster"]);
}
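Since the new ``Cluster::retry_interval`` constant carries ``&redef``, a site can tune how aggressively cluster nodes retry failed connections, e.g. from ``local.bro`` (the value below is just an example):

	@load base/frameworks/cluster

	redef Cluster::retry_interval = 30secs;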
diff --git a/scripts/base/frameworks/cluster/setup-connections.bro b/scripts/base/frameworks/cluster/setup-connections.bro
index 4576f5b913..95aff64a6c 100644
--- a/scripts/base/frameworks/cluster/setup-connections.bro
+++ b/scripts/base/frameworks/cluster/setup-connections.bro
@@ -11,7 +11,7 @@ module Cluster;
event bro_init() &priority=9
{
local me = nodes[node];
-
+
for ( i in Cluster::nodes )
{
local n = nodes[i];
@@ -22,35 +22,35 @@ event bro_init() &priority=9
Communication::nodes["control"] = [$host=n$ip, $zone_id=n$zone_id,
$connect=F, $class="control",
$events=control_events];
-
+
if ( me$node_type == MANAGER )
{
if ( n$node_type == WORKER && n$manager == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=worker2manager_events, $request_logs=T];
-
+
if ( n$node_type == PROXY && n$manager == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F,
$class=i, $events=proxy2manager_events, $request_logs=T];
-
+
if ( n$node_type == TIME_MACHINE && me?$time_machine && me$time_machine == i )
Communication::nodes["time-machine"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1min,
+ $connect=T, $retry=retry_interval,
$events=tm2manager_events];
}
-
+
else if ( me$node_type == PROXY )
{
if ( n$node_type == WORKER && n$proxy == node )
Communication::nodes[i] =
[$host=n$ip, $zone_id=n$zone_id, $connect=F, $class=i,
$sync=T, $auth=T, $events=worker2proxy_events];
-
- # accepts connections from the previous one.
+
+ # accepts connections from the previous one.
# (This is not ideal for setups with many proxies)
# FIXME: Once we're using multiple proxies, we should also figure out some $class scheme ...
if ( n$node_type == PROXY )
@@ -58,49 +58,49 @@ event bro_init() &priority=9
if ( n?$proxy )
Communication::nodes[i]
= [$host=n$ip, $zone_id=n$zone_id, $p=n$p,
- $connect=T, $auth=F, $sync=T, $retry=1mins];
+ $connect=T, $auth=F, $sync=T, $retry=retry_interval];
else if ( me?$proxy && me$proxy == i )
Communication::nodes[me$proxy]
= [$host=nodes[i]$ip, $zone_id=nodes[i]$zone_id,
$connect=F, $auth=T, $sync=T];
}
-
+
# Finally the manager, to send it status updates.
if ( n$node_type == MANAGER && me$manager == i )
- Communication::nodes["manager"] = [$host=nodes[i]$ip,
- $zone_id=nodes[i]$zone_id,
- $p=nodes[i]$p,
- $connect=T, $retry=1mins,
+ Communication::nodes["manager"] = [$host=nodes[i]$ip,
+ $zone_id=nodes[i]$zone_id,
+ $p=nodes[i]$p,
+ $connect=T, $retry=retry_interval,
$class=node,
$events=manager2proxy_events];
}
else if ( me$node_type == WORKER )
{
if ( n$node_type == MANAGER && me$manager == i )
- Communication::nodes["manager"] = [$host=nodes[i]$ip,
+ Communication::nodes["manager"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1mins,
- $class=node,
+ $connect=T, $retry=retry_interval,
+ $class=node,
$events=manager2worker_events];
-
+
if ( n$node_type == PROXY && me$proxy == i )
- Communication::nodes["proxy"] = [$host=nodes[i]$ip,
+ Communication::nodes["proxy"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1mins,
- $sync=T, $class=node,
+ $connect=T, $retry=retry_interval,
+ $sync=T, $class=node,
$events=proxy2worker_events];
-
- if ( n$node_type == TIME_MACHINE &&
+
+ if ( n$node_type == TIME_MACHINE &&
me?$time_machine && me$time_machine == i )
- Communication::nodes["time-machine"] = [$host=nodes[i]$ip,
+ Communication::nodes["time-machine"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T,
- $retry=1min,
+ $connect=T,
+ $retry=retry_interval,
$events=tm2worker_events];
-
+
}
}
}
diff --git a/scripts/base/frameworks/files/magic/audio.sig b/scripts/base/frameworks/files/magic/audio.sig
index efba99ed0d..9b4d7da66b 100644
--- a/scripts/base/frameworks/files/magic/audio.sig
+++ b/scripts/base/frameworks/files/magic/audio.sig
@@ -2,7 +2,7 @@
# MPEG v3 audio
signature file-mpeg-audio {
file-mime "audio/mpeg", 20
- file-magic /^\xff[\xe2\xe3\xf2\xf3\xf6\xf7\xfa\xfb\xfc\xfd]/
+ file-magic /^(ID3|\xff[\xe2\xe3\xf2\xf3\xf6\xf7\xfa\xfb\xfc\xfd])/
}
# MPEG v4 audio
diff --git a/scripts/base/frameworks/files/magic/general.sig b/scripts/base/frameworks/files/magic/general.sig
index 268412ff05..bea6ae9ece 100644
--- a/scripts/base/frameworks/files/magic/general.sig
+++ b/scripts/base/frameworks/files/magic/general.sig
@@ -9,53 +9,53 @@ signature file-plaintext {
signature file-json {
file-mime "text/json", 1
- file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*\{[\x0d\x0a[:blank:]]*(["][^"]{1,}["]|[a-zA-Z][a-zA-Z0-9\\_]*)[\x0d\x0a[:blank:]]*:[\x0d\x0a[:blank:]]*(["]|\[|\{|[0-9]|true|false)/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x0d\x0a[:blank:]]*\{[\x0d\x0a[:blank:]]*(["][^"]{1,}["]|[a-zA-Z][a-zA-Z0-9\\_]*)[\x0d\x0a[:blank:]]*:[\x0d\x0a[:blank:]]*(["]|\[|\{|[0-9]|true|false)/
}
signature file-json2 {
file-mime "text/json", 1
- file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*\[[\x0d\x0a[:blank:]]*(((["][^"]{1,}["]|[0-9]{1,}(\.[0-9]{1,})?|true|false)[\x0d\x0a[:blank:]]*,)|\{|\[)[\x0d\x0a[:blank:]]*/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x0d\x0a[:blank:]]*\[[\x0d\x0a[:blank:]]*(((["][^"]{1,}["]|[0-9]{1,}(\.[0-9]{1,})?|true|false)[\x0d\x0a[:blank:]]*,)|\{|\[)[\x0d\x0a[:blank:]]*/
}
# Match empty JSON documents.
signature file-json3 {
file-mime "text/json", 0
- file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*(\[\]|\{\})[\x0d\x0a[:blank:]]*$/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x0d\x0a[:blank:]]*(\[\]|\{\})[\x0d\x0a[:blank:]]*$/
}
signature file-xml {
file-mime "application/xml", 10
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<\?xml /
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*\x00?<\x00?\?\x00?x\x00?m\x00?l\x00? \x00?/
}
signature file-xhtml {
file-mime "text/html", 100
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<(![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]|[hH][tT][mM][lL]|[mM][eE][tT][aA] {1,}[hH][tT][tT][pP]-[eE][qQ][uU][iI][vV])/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<(![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]|[hH][tT][mM][lL]|[mM][eE][tT][aA] {1,}[hH][tT][tT][pP]-[eE][qQ][uU][iI][vV])/
}
signature file-html {
file-mime "text/html", 49
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<([hH][eE][aA][dD]|[hH][tT][mM][lL]|[tT][iI][tT][lL][eE]|[bB][oO][dD][yY])/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<([hH][eE][aA][dD]|[hH][tT][mM][lL]|[tT][iI][tT][lL][eE]|[bB][oO][dD][yY])/
}
signature file-rss {
file-mime "text/rss", 90
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[rR][sS][sS]/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[rR][sS][sS]/
}
signature file-atom {
file-mime "text/atom", 100
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<([rR][sS][sS][^>]*xmlns:atom|[fF][eE][eE][dD][^>]*xmlns=["']?http:\/\/www.w3.org\/2005\/Atom["']?)/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<([rR][sS][sS][^>]*xmlns:atom|[fF][eE][eE][dD][^>]*xmlns=["']?http:\/\/www.w3.org\/2005\/Atom["']?)/
}
signature file-soap {
file-mime "application/soap+xml", 49
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[sS][oO][aA][pP](-[eE][nN][vV])?:[eE][nN][vV][eE][lL][oO][pP][eE]/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[sS][oO][aA][pP](-[eE][nN][vV])?:[eE][nN][vV][eE][lL][oO][pP][eE]/
}
signature file-cross-domain-policy {
@@ -70,7 +70,7 @@ signature file-cross-domain-policy2 {
signature file-xmlrpc {
file-mime "application/xml-rpc", 49
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[mM][eE][tT][hH][oO][dD][rR][eE][sS][pP][oO][nN][sS][eE]>/
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[mM][eE][tT][hH][oO][dD][rR][eE][sS][pP][oO][nN][sS][eE]>/
}
signature file-coldfusion {
@@ -81,7 +81,13 @@ signature file-coldfusion {
# Adobe Flash Media Manifest
signature file-f4m {
file-mime "application/f4m", 49
- file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[mM][aA][nN][iI][fF][eE][sS][tT][\x0d\x0a[:blank:]]{1,}xmlns=\"http:\/\/ns\.adobe\.com\/f4m\//
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*()?[\x0d\x0a[:blank:]]*)*<[mM][aA][nN][iI][fF][eE][sS][tT][\x0d\x0a[:blank:]]{1,}xmlns=\"http:\/\/ns\.adobe\.com\/f4m\//
+}
+
+# .ini style files
+signature file-ini {
+ file-mime "text/ini", 20
+ file-magic /^(\xef\xbb\xbf|\xff\xfe|\xfe\xff)?[\x00\x0d\x0a[:blank:]]*\[[^\x0d\x0a]+\][[:blank:]\x00]*[\x0d\x0a]/
}
# Microsoft LNK files
@@ -90,6 +96,41 @@ signature file-lnk {
file-magic /^\x4C\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x10\x00\x00\x00\x46/
}
+# Microsoft Registry policies
+signature file-pol {
+ file-mime "application/vnd.ms-pol", 49
+ file-magic /^PReg/
+}
+
+# Old style Windows registry file
+signature file-reg {
+ file-mime "application/vnd.ms-reg", 49
+ file-magic /^REGEDIT4/
+}
+
+# Newer Windows registry file
+signature file-reg-utf16 {
+ file-mime "application/vnd.ms-reg", 49
+ file-magic /^\xFF\xFEW\x00i\x00n\x00d\x00o\x00w\x00s\x00 \x00R\x00e\x00g\x00i\x00s\x00t\x00r\x00y\x00 \x00E\x00d\x00i\x00t\x00o\x00r\x00 \x00V\x00e\x00r\x00s\x00i\x00o\x00n\x00 \x005\x00\.\x000\x000/
+}
+
+# Microsoft Registry format (typically DESKTOP.DAT)
+signature file-regf {
+	file-mime "application/vnd.ms-regf", 49
+ file-magic /^\x72\x65\x67\x66/
+}
+
+# Microsoft Outlook PST files
+signature file-pst {
+ file-mime "application/vnd.ms-outlook", 49
+ file-magic /!BDN......[\x0e\x0f\x15\x17][\x00-\x02]/
+}
+
+signature file-afpinfo {
+ file-mime "application/vnd.apple-afpinfo"
+ file-magic /^AFP/
+}
+
signature file-jar {
file-mime "application/java-archive", 100
file-magic /^PK\x03\x04.{1,200}\x14\x00..META-INF\/MANIFEST\.MF/
diff --git a/scripts/base/frameworks/files/magic/video.sig b/scripts/base/frameworks/files/magic/video.sig
index 5d499f2119..d939c15618 100644
--- a/scripts/base/frameworks/files/magic/video.sig
+++ b/scripts/base/frameworks/files/magic/video.sig
@@ -71,6 +71,14 @@ signature file-mp2p {
file-magic /\x00\x00\x01\xba([\x40-\x7f\xc0-\xff])/
}
+# MPEG transport stream data. These files typically have the extension "ts".
+# Note: The 0x47 repeats every 188 bytes. Using four as the number of
+# occurrences for the test here is arbitrary.
+signature file-mp2t {
+ file-mime "video/mp2t", 40
+ file-magic /^(\x47.{187}){4}/
+}
+
# Silicon Graphics video
signature file-sgi-movie {
file-mime "video/x-sgi-movie", 70
@@ -94,3 +102,4 @@ signature file-3gpp {
file-mime "video/3gpp", 60
file-magic /^....ftyp(3g[egps2]|avc1|mmp4)/
}
+
diff --git a/scripts/base/frameworks/input/main.bro b/scripts/base/frameworks/input/main.bro
index fa766ba27b..3df418315f 100644
--- a/scripts/base/frameworks/input/main.bro
+++ b/scripts/base/frameworks/input/main.bro
@@ -1,18 +1,25 @@
##! The input framework provides a way to read previously stored data either
-##! as an event stream or into a bro table.
+##! as an event stream or into a Bro table.
module Input;
export {
type Event: enum {
+ ## New data has been imported.
EVENT_NEW = 0,
+ ## Existing data has been changed.
EVENT_CHANGED = 1,
+ ## Previously existing data has been removed.
EVENT_REMOVED = 2,
};
+ ## Type that defines the input stream read mode.
type Mode: enum {
+ ## Do not automatically reread the file after it has been read.
MANUAL = 0,
+ ## Reread the entire file each time a change is found.
REREAD = 1,
+ ## Read data from end of file each time new data is appended.
STREAM = 2
};
@@ -24,20 +31,20 @@ export {
## Separator between fields.
## Please note that the separator has to be exactly one character long.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const separator = "\t" &redef;
## Separator between set elements.
## Please note that the separator has to be exactly one character long.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const set_separator = "," &redef;
## String to use for empty fields.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const empty_field = "(empty)" &redef;
## String to use for an unset &optional field.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const unset_field = "-" &redef;
## Flag that controls if the input framework accepts records
@@ -47,11 +54,11 @@ export {
## abort. Defaults to false (abort).
const accept_unsupported_types = F &redef;
- ## TableFilter description type used for the `table` method.
+ ## A table input stream type used to send data to a Bro table.
type TableDescription: record {
# Common definitions for tables and events
- ## String that allows the reader to find the source.
+ ## String that allows the reader to find the source of the data.
## For `READER_ASCII`, this is the filename.
source: string;
@@ -61,7 +68,8 @@ export {
## Read mode to use for this stream.
mode: Mode &default=default_mode;
- ## Descriptive name. Used to remove a stream at a later time.
+ ## Name of the input stream. This is used by some functions to
+ ## manipulate the stream.
name: string;
# Special definitions for tables
@@ -73,31 +81,35 @@ export {
idx: any;
## Record that defines the values used as the elements of the table.
- ## If this is undefined, then *destination* has to be a set.
+ ## If this is undefined, then *destination* must be a set.
val: any &optional;
- ## Defines if the value of the table is a record (default), or a single value.
- ## When this is set to false, then *val* can only contain one element.
+ ## Defines if the value of the table is a record (default), or a single
+ ## value. When this is set to false, then *val* can only contain one
+ ## element.
want_record: bool &default=T;
- ## The event that is raised each time a value is added to, changed in or removed
- ## from the table. The event will receive an Input::Event enum as the first
- ## argument, the *idx* record as the second argument and the value (record) as the
- ## third argument.
- ev: any &optional; # event containing idx, val as values.
+ ## The event that is raised each time a value is added to, changed in,
+ ## or removed from the table. The event will receive an
+ ## Input::TableDescription as the first argument, an Input::Event
+ ## enum as the second argument, the *idx* record as the third argument
+ ## and the value (record) as the fourth argument.
+ ev: any &optional;
- ## Predicate function that can decide if an insertion, update or removal should
- ## really be executed. Parameters are the same as for the event. If true is
- ## returned, the update is performed. If false is returned, it is skipped.
+ ## Predicate function that can decide if an insertion, update or removal
+ ## should really be executed. Parameters have same meaning as for the
+		## should really be executed. Parameters have the same meaning as for the
+ ## If true is returned, the update is performed. If false is returned,
+ ## it is skipped.
pred: function(typ: Input::Event, left: any, right: any): bool &optional;
- ## A key/value table that will be passed on the reader.
- ## Interpretation of the values is left to the writer, but
+ ## A key/value table that will be passed to the reader.
+ ## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
- config: table[string] of string &default=table();
+ config: table[string] of string &default=table();
};
- ## EventFilter description type used for the `event` method.
+ ## An event input stream type used to send input data to a Bro event.
type EventDescription: record {
# Common definitions for tables and events
@@ -116,19 +128,26 @@ export {
# Special definitions for events
- ## Record describing the fields to be retrieved from the source input.
+ ## Record type describing the fields to be retrieved from the input
+ ## source.
fields: any;
- ## If this is false, the event receives each value in fields as a separate argument.
- ## If this is set to true (default), the event receives all fields in a single record value.
+ ## If this is false, the event receives each value in *fields* as a
+ ## separate argument.
+ ## If this is set to true (default), the event receives all fields in
+ ## a single record value.
want_record: bool &default=T;
- ## The event that is raised each time a new line is received from the reader.
- ## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
+ ## The event that is raised each time a new line is received from the
+ ## reader. The event will receive an Input::EventDescription record
+ ## as the first argument, an Input::Event enum as the second
+ ## argument, and the fields (as specified in *fields*) as the following
+ ## arguments (this will either be a single record value containing
+ ## all fields, or each field value as a separate argument).
ev: any;
- ## A key/value table that will be passed on the reader.
- ## Interpretation of the values is left to the writer, but
+ ## A key/value table that will be passed to the reader.
+ ## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
@@ -155,28 +174,29 @@ export {
## field will be the same value as the *source* field.
name: string;
- ## A key/value table that will be passed on the reader.
- ## Interpretation of the values is left to the writer, but
+ ## A key/value table that will be passed to the reader.
+ ## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
- ## Create a new table input from a given source.
+ ## Create a new table input stream from a given source.
##
## description: `TableDescription` record describing the source.
##
## Returns: true on success.
global add_table: function(description: Input::TableDescription) : bool;
- ## Create a new event input from a given source.
+ ## Create a new event input stream from a given source.
##
## description: `EventDescription` record describing the source.
##
## Returns: true on success.
global add_event: function(description: Input::EventDescription) : bool;
- ## Create a new file analysis input from a given source. Data read from
- ## the source is automatically forwarded to the file analysis framework.
+ ## Create a new file analysis input stream from a given source. Data read
+ ## from the source is automatically forwarded to the file analysis
+ ## framework.
##
## description: A record describing the source.
##
@@ -199,7 +219,11 @@ export {
## Event that is called when the end of a data source has been reached,
## including after an update.
- global end_of_data: event(name: string, source:string);
+ ##
+ ## name: Name of the input stream.
+ ##
+ ## source: String that identifies the data source (such as the filename).
+ global end_of_data: event(name: string, source: string);
}
@load base/bif/input.bif
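A minimal sketch of an event input stream that follows the updated event signature documented above (the file name and record layout are illustrative; with the default ASCII reader the input file needs a matching ``#fields`` header):

	module Example;

	export {
		type Entry: record {
			ip: addr;
			reason: string;
		};

		global entry_line: event(desc: Input::EventDescription,
		                         tpe: Input::Event, e: Entry);
	}

	event Example::entry_line(desc: Input::EventDescription, tpe: Input::Event, e: Entry)
		{
		print fmt("read %s (%s) from %s", e$ip, e$reason, desc$source);
		}

	event bro_init()
		{
		Input::add_event([$source="blacklist.file", $name="blacklist",
		                  $fields=Entry, $ev=Example::entry_line]);
		Input::remove("blacklist");
		}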
diff --git a/scripts/base/frameworks/input/readers/raw.bro b/scripts/base/frameworks/input/readers/raw.bro
index b1e0fb6831..a1e95b71a1 100644
--- a/scripts/base/frameworks/input/readers/raw.bro
+++ b/scripts/base/frameworks/input/readers/raw.bro
@@ -11,7 +11,9 @@ export {
##
## name: name of the input stream.
## source: source of the input stream.
- ## exit_code: exit code of the program, or number of the signal that forced the program to exit.
- ## signal_exit: false when program exited normally, true when program was forced to exit by a signal.
+ ## exit_code: exit code of the program, or number of the signal that forced
+ ## the program to exit.
+ ## signal_exit: false when program exited normally, true when program was
+ ## forced to exit by a signal.
global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool);
}
diff --git a/scripts/base/frameworks/netcontrol/__load__.bro b/scripts/base/frameworks/netcontrol/__load__.bro
new file mode 100644
index 0000000000..a8e391f7c8
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/__load__.bro
@@ -0,0 +1,15 @@
+@load ./types
+@load ./main
+@load ./plugins
+@load ./drop
+@load ./shunt
+@load ./catch-and-release
+
+# The cluster framework must be loaded first.
+@load base/frameworks/cluster
+
+@if ( Cluster::is_enabled() )
+@load ./cluster
+@else
+@load ./non-cluster
+@endif
diff --git a/scripts/base/frameworks/netcontrol/catch-and-release.bro b/scripts/base/frameworks/netcontrol/catch-and-release.bro
new file mode 100644
index 0000000000..a95954ac07
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/catch-and-release.bro
@@ -0,0 +1,104 @@
+##! Implementation of catch-and-release functionality for NetControl.
+
+module NetControl;
+
+@load ./main
+@load ./drop
+
+export {
+ ## Stops all packets involving an IP address from being forwarded. This function
+ ## uses catch-and-release functionality, where the IP address is only dropped for
+ ## a short amount of time that is incremented steadily when the IP is encountered
+ ## again.
+	##
+	## a: The address to be dropped.
+	##
+	## location: An optional string describing where the drop was triggered.
+	##
+	## Returns: The id of the inserted rule on success and an empty string on failure.
+ global drop_address_catch_release: function(a: addr, location: string &default="") : string;
+
+	## Time intervals for which subsequent drops of the same IP take
+	## effect.
+ const catch_release_intervals: vector of interval = vector(10min, 1hr, 24hrs, 7days) &redef;
+}
+
+function per_block_interval(t: table[addr] of count, idx: addr): interval
+ {
+ local ct = t[idx];
+
+ # watch for the time of the next block...
+ local blocktime = catch_release_intervals[ct];
+ if ( (ct+1) in catch_release_intervals )
+ blocktime = catch_release_intervals[ct+1];
+
+ return blocktime;
+ }
+
+# This is the internally maintained table containing all currently ongoing
+# catch-and-release blocks.
+global blocks: table[addr] of count = {}
+ &create_expire=0secs
+ &expire_func=per_block_interval;
+
+function current_block_interval(s: set[addr], idx: addr): interval
+ {
+ if ( idx !in blocks )
+ {
+ Reporter::error(fmt("Address %s not in blocks while inserting into current_blocks!", idx));
+ return 0sec;
+ }
+
+ return catch_release_intervals[blocks[idx]];
+ }
+
+global current_blocks: set[addr] = set()
+ &create_expire=0secs
+ &expire_func=current_block_interval;
+
+function drop_address_catch_release(a: addr, location: string &default=""): string
+ {
+ if ( a in blocks )
+ {
+ Reporter::warning(fmt("Address %s already blocked using catch-and-release - ignoring duplicate", a));
+ return "";
+ }
+
+ local block_interval = catch_release_intervals[0];
+ local ret = drop_address(a, block_interval, location);
+ if ( ret != "" )
+ {
+ blocks[a] = 0;
+ add current_blocks[a];
+ }
+
+ return ret;
+ }
+
+function check_conn(a: addr)
+ {
+ if ( a in blocks )
+ {
+ if ( a in current_blocks )
+ # block has not been applied yet?
+ return;
+
+ # ok, this one returned again while still in the backoff period.
+ local try = blocks[a];
+ if ( (try+1) in catch_release_intervals )
+ ++try;
+
+ blocks[a] = try;
+ add current_blocks[a];
+ local block_interval = catch_release_intervals[try];
+ drop_address(a, block_interval, "Re-drop by catch-and-release");
+ }
+ }
+
+event new_connection(c: connection)
+ {
+ # let's only check originating connections...
+ check_conn(c$id$orig_h);
+ }
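A usage sketch for the functionality above (the trigger condition is hypothetical, and a backend has to be activated for rules to have any effect; the bundled debug plugin is assumed to be available here):

	@load base/frameworks/netcontrol

	event NetControl::init()
		{
		# Activate a backend; the debug plugin only logs what would happen.
		NetControl::activate(NetControl::create_debug(T), 0);
		}

	# Hypothetical trigger: block hosts that contact port 23 (telnet),
	# with the block duration escalating via catch-and-release.
	event connection_established(c: connection)
		{
		if ( c$id$resp_p == 23/tcp )
			NetControl::drop_address_catch_release(c$id$orig_h, "telnet contact");
		}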
diff --git a/scripts/base/frameworks/netcontrol/cluster.bro b/scripts/base/frameworks/netcontrol/cluster.bro
new file mode 100644
index 0000000000..7f635b4375
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/cluster.bro
@@ -0,0 +1,99 @@
+##! Cluster support for the NetControl framework.
+
+@load ./main
+@load base/frameworks/cluster
+
+module NetControl;
+
+export {
+ ## This is the event used to transport add_rule calls to the manager.
+ global cluster_netcontrol_add_rule: event(r: Rule);
+
+ ## This is the event used to transport remove_rule calls to the manager.
+ global cluster_netcontrol_remove_rule: event(id: string);
+}
+
+## Workers need the ability to forward commands to the manager.
+redef Cluster::worker2manager_events += /NetControl::cluster_netcontrol_(add|remove)_rule/;
+## Workers need to see the result events from the manager.
+redef Cluster::manager2worker_events += /NetControl::rule_(added|removed|timeout|error)/;
+
+
+function activate(p: PluginState, priority: int)
+ {
+ # we only run the activate function on the manager.
+ if ( Cluster::local_node_type() != Cluster::MANAGER )
+ return;
+
+ activate_impl(p, priority);
+ }
+
+global local_rule_count: count = 1;
+
+function add_rule(r: Rule) : string
+ {
+ if ( Cluster::local_node_type() == Cluster::MANAGER )
+ return add_rule_impl(r);
+ else
+ {
+ if ( r$id == "" )
+ r$id = cat(Cluster::node, ":", ++local_rule_count);
+
+ event NetControl::cluster_netcontrol_add_rule(r);
+ return r$id;
+ }
+ }
+
+function remove_rule(id: string) : bool
+ {
+ if ( Cluster::local_node_type() == Cluster::MANAGER )
+ return remove_rule_impl(id);
+ else
+ {
+ event NetControl::cluster_netcontrol_remove_rule(id);
+ return T; # well, we can't know here. So - just hope...
+ }
+ }
+
+@if ( Cluster::local_node_type() == Cluster::MANAGER )
+event NetControl::cluster_netcontrol_add_rule(r: Rule)
+ {
+ add_rule_impl(r);
+ }
+
+event NetControl::cluster_netcontrol_remove_rule(id: string)
+ {
+ remove_rule_impl(id);
+ }
+@endif
+
+@if ( Cluster::local_node_type() == Cluster::MANAGER )
+event rule_expire(r: Rule, p: PluginState) &priority=-5
+ {
+ rule_expire_impl(r, p);
+ }
+
+event rule_added(r: Rule, p: PluginState, msg: string &default="") &priority=5
+ {
+ rule_added_impl(r, p, msg);
+
+ if ( r?$expire && r$expire > 0secs && ! p$plugin$can_expire )
+ schedule r$expire { rule_expire(r, p) };
+ }
+
+event rule_removed(r: Rule, p: PluginState, msg: string &default="") &priority=-5
+ {
+ rule_removed_impl(r, p, msg);
+ }
+
+event rule_timeout(r: Rule, i: FlowInfo, p: PluginState) &priority=-5
+ {
+ rule_timeout_impl(r, i, p);
+ }
+
+event rule_error(r: Rule, p: PluginState, msg: string &default="") &priority=-5
+ {
+ rule_error_impl(r, p, msg);
+ }
+@endif
+
diff --git a/scripts/base/frameworks/netcontrol/drop.bro b/scripts/base/frameworks/netcontrol/drop.bro
new file mode 100644
index 0000000000..63c70f0e88
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/drop.bro
@@ -0,0 +1,98 @@
+##! Implementation of the drop functionality for NetControl.
+
+module NetControl;
+
+@load ./main
+
+export {
+ redef enum Log::ID += { DROP };
+
+ ## Stops all packets involving an IP address from being forwarded.
+ ##
+ ## a: The address to be dropped.
+ ##
+	## t: How long to drop it, with 0 being indefinitely.
+	##
+	## location: An optional string describing where the drop was triggered.
+	##
+	## Returns: The id of the inserted rule on success and an empty string on failure.
+ global drop_address: function(a: addr, t: interval, location: string &default="") : string;
+
+	## Stops all packets involving a connection from being forwarded.
+	##
+	## c: The connection to be dropped.
+	##
+	## t: How long to drop it, with 0 being indefinitely.
+	##
+	## location: An optional string describing where the drop was triggered.
+	##
+	## Returns: The id of the inserted rule on success and an empty string on failure.
+ global drop_connection: function(c: conn_id, t: interval, location: string &default="") : string;
+
+ type DropInfo: record {
+ ## Time at which the recorded activity occurred.
+ ts: time &log;
+ ## ID of the rule; unique during each Bro run
+ rule_id: string &log;
+ orig_h: addr &log; ##< The originator's IP address.
+ orig_p: port &log &optional; ##< The originator's port number.
+ resp_h: addr &log &optional; ##< The responder's IP address.
+ resp_p: port &log &optional; ##< The responder's port number.
+		## Expiry time of the drop
+ expire: interval &log;
+ ## Location where the underlying action was triggered.
+ location: string &log &optional;
+ };
+
+	## Event that can be handled to access the :bro:type:`NetControl::DropInfo`
+ ## record as it is sent on to the logging framework.
+ global log_netcontrol_drop: event(rec: DropInfo);
+}
+
+event bro_init() &priority=5
+ {
+ Log::create_stream(NetControl::DROP, [$columns=DropInfo, $ev=log_netcontrol_drop, $path="netcontrol_drop"]);
+ }
+
+function drop_connection(c: conn_id, t: interval, location: string &default="") : string
+ {
+ local e: Entity = [$ty=CONNECTION, $conn=c];
+ local r: Rule = [$ty=DROP, $target=FORWARD, $entity=e, $expire=t, $location=location];
+
+ local id = add_rule(r);
+
+ # Error should already be logged
+ if ( id == "" )
+ return id;
+
+ local log = DropInfo($ts=network_time(), $rule_id=id, $orig_h=c$orig_h, $orig_p=c$orig_p, $resp_h=c$resp_h, $resp_p=c$resp_p, $expire=t);
+
+ if ( location != "" )
+ log$location=location;
+
+ Log::write(DROP, log);
+
+ return id;
+ }
+
+function drop_address(a: addr, t: interval, location: string &default="") : string
+ {
+ local e: Entity = [$ty=ADDRESS, $ip=addr_to_subnet(a)];
+ local r: Rule = [$ty=DROP, $target=FORWARD, $entity=e, $expire=t, $location=location];
+
+ local id = add_rule(r);
+
+ # Error should already be logged
+ if ( id == "" )
+ return id;
+
+ local log = DropInfo($ts=network_time(), $rule_id=id, $orig_h=a, $expire=t);
+
+ if ( location != "" )
+ log$location=location;
+
+ Log::write(DROP, log);
+
+ return id;
+ }
+
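A corresponding sketch for the two drop functions defined above (the durations and trigger are arbitrary examples; as in the previous sketch, some backend plugin must have been activated for the rules to take effect):

	@load base/frameworks/netcontrol

	event connection_established(c: connection)
		{
		# Drop just this connection for ten minutes ...
		NetControl::drop_connection(c$id, 10min, "example drop");

		# ... or everything from the originating address for an hour.
		NetControl::drop_address(c$id$orig_h, 1hr, "example drop");
		}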
diff --git a/scripts/base/frameworks/netcontrol/main.bro b/scripts/base/frameworks/netcontrol/main.bro
new file mode 100644
index 0000000000..563188921d
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/main.bro
@@ -0,0 +1,935 @@
+##! Bro's packet acquisition and control framework.
+##!
+##! This plugin-based framework allows Bro to control the traffic that it
+##! monitors as well as, if it has access to the forwarding path, the traffic
+##! that the network forwards. By default, the framework lets everything
+##! through, both to Bro itself and on the network. Scripts can then add rules
+##! to impose restrictions on entities, such as specific connections or IP addresses.
+##!
+##! This framework has two APIs: a high-level one and a low-level one. The
+##! high-level API provides convenience functions for a set of common
+##! operations. The low-level API provides full flexibility.
+
+module NetControl;
+
+@load ./plugin
+@load ./types
+
+export {
+ ## The framework's logging stream identifier.
+ redef enum Log::ID += { LOG };
+
+ # ###
+ # ### Generic functions and events.
+ # ###
+
+ # Activates a plugin.
+ #
+ # p: The plugin to activate.
+ #
+ # priority: The higher the priority, the earlier this plugin will be checked
+ # whether it supports an operation, relative to other plugins.
+ global activate: function(p: PluginState, priority: int);
+
+ # Event that is used to initialize plugins. Place all plugin initialization
+ # related functionality in this event.
+ global NetControl::init: event();
+
+ # Event that is raised once all plugins activated in ``NetControl::init`` have finished
+ # their initialization.
+ global NetControl::init_done: event();
+
+ # ###
+ # ### High-level API.
+ # ###
+
+ # ### Note - other high level primitives are in catch-and-release.bro, shunt.bro and
+ # ### drop.bro
+
+ ## Allows all traffic involving a specific IP address to be forwarded.
+ ##
+ ## a: The address to be whitelisted.
+ ##
+ ## t: How long to whitelist it, with 0 being indefinitely.
+ ##
+ ## location: An optional string describing where the whitelist was triggered.
+ ##
+ ## Returns: The id of the inserted rule on success and an empty string on failure.
+ global whitelist_address: function(a: addr, t: interval, location: string &default="") : string;
+
+ ## Allows all traffic involving a specific IP subnet to be forwarded.
+ ##
+ ## s: The subnet to be whitelisted.
+ ##
+ ## t: How long to whitelist it, with 0 being indefinitely.
+ ##
+ ## location: An optional string describing where the whitelist was triggered.
+ ##
+ ## Returns: The id of the inserted rule on success and an empty string on failure.
+ global whitelist_subnet: function(s: subnet, t: interval, location: string &default="") : string;
+
+ ## Redirects a uni-directional flow to another port.
+ ##
+ ## f: The flow to redirect.
+ ##
+ ## out_port: The port to redirect the flow to.
+ ##
+ ## t: How long to leave the redirect in place, with 0 being indefinitely.
+ ##
+ ## location: An optional string describing where the redirect was triggered.
+ ##
+ ## Returns: The id of the inserted rule on success and an empty string on failure.
+ global redirect_flow: function(f: flow_id, out_port: count, t: interval, location: string &default="") : string;
+
+ ## Quarantines a host by rewriting its DNS queries to the network's DNS server so that
+ ## they resolve to the quarantine host, which has to answer all queries with its own
+ ## address. Only HTTP communication from the infected host to the quarantine host is allowed.
+ ##
+ ## infected: The host to quarantine.
+ ##
+ ## dns: The network's DNS server.
+ ##
+ ## quarantine: The quarantine server, which runs a DNS server and a web server.
+ ##
+ ## t: How long to leave the quarantine in place.
+ ##
+ ## Returns: Vector of inserted rules on success, empty list on failure.
+ global quarantine_host: function(infected: addr, dns: addr, quarantine: addr, t: interval, location: string &default="") : vector of string;
+
+ ## Flushes all state.
+ global clear: function();
+
+ # ###
+ # ### Low-level API.
+ # ###
+
+ ###### Manipulation of rules.
+
+ ## Installs a rule.
+ ##
+ ## r: The rule to install.
+ ##
+ ## Returns: If successful, returns an ID string unique to the rule that can later
+ ## be used to refer to it. If unsuccessful, returns an empty string. The ID is also
+ ## assigned to ``r$id``. Note that "successful" means "a plugin knew how to handle
+ ## the rule", it doesn't necessarily mean that it was indeed successfully put in
+ ## place, because that might happen asynchronously and thus fail only later.
+ global add_rule: function(r: Rule) : string;
+
+ ## Removes a rule.
+ ##
+ ## id: The rule to remove, specified as the ID returned by :bro:id:`add_rule`.
+ ##
+ ## Returns: True if successful, i.e., the relevant plugin indicated that it knew
+ ## how to handle the removal. Note that again "success" means the plugin accepted
+ ## the removal; it might still fail to put it into effect, as that might happen
+ ## asynchronously and thus go wrong at that point.
+ global remove_rule: function(id: string) : bool;
+
+ ## Searches all rules affecting a certain IP address.
+ ##
+ ## ip: The IP address to search for.
+ ##
+ ## Returns: Vector of all rules affecting the IP address.
+ global find_rules_addr: function(ip: addr) : vector of Rule;
+
+ ## Searches all rules affecting a certain subnet.
+ ##
+ ## sn: The subnet to search for.
+ ##
+ ## Returns: Vector of all rules affecting the subnet.
+ global find_rules_subnet: function(sn: subnet) : vector of Rule;
+
+ ###### Asynchronous feedback on rules.
+
+ ## Confirms that a rule was put in place.
+ ##
+ ## r: The rule now in place.
+ ##
+ ## p: The state for the plugin that put it into place.
+ ##
+ ## msg: An optional informational message by the plugin.
+ global rule_added: event(r: Rule, p: PluginState, msg: string &default="");
+
+ ## Reports that a rule was removed due to a :bro:id:`remove_rule` call.
+ ##
+ ## r: The rule now removed.
+ ##
+ ## p: The state for the plugin that had the rule in place and now
+ ## removed it.
+ ##
+ ## msg: An optional informational message by the plugin.
+ global rule_removed: event(r: Rule, p: PluginState, msg: string &default="");
+
+ ## Reports that a rule was removed internally due to a timeout.
+ ##
+ ## r: The rule now removed.
+ ##
+ ## i: Additional flow information, if supported by the protocol.
+ ##
+ ## p: The state for the plugin that had the rule in place and now
+ ## removed it.
+ global rule_timeout: event(r: Rule, i: FlowInfo, p: PluginState);
+
+ ## Reports an error when operating on a rule.
+ ##
+ ## r: The rule that encountered an error.
+ ##
+ ## p: The state for the plugin that reported the error.
+ ##
+ ## msg: An optional informational message by the plugin.
+ global rule_error: event(r: Rule, p: PluginState, msg: string &default="");
+
+ ## Hook that allows the modification of rules passed to add_rule before they
+ ## are passed on to the plugins. If one of the hooks uses break, the rule is
+ ## ignored and not passed on to any plugin.
+ ##
+ ## r: The rule to be added
+ global NetControl::rule_policy: hook(r: Rule);
+
+ ##### Plugin functions
+
+ ## Function called by plugins once they have finished their activation. After all
+ ## plugins defined in bro_init have finished activating, rules will start to be
+ ## sent to the plugins. Rules that scripts try to set before the backends are
+ ## ready will be discarded.
+ global plugin_activated: function(p: PluginState);
+
+ ## Type of an entry in the NetControl log.
+ type InfoCategory: enum {
+ ## A log entry reflecting a framework message.
+ MESSAGE,
+ ## A log entry reflecting a framework error.
+ ERROR,
+ ## A log entry about a rule.
+ RULE
+ };
+
+ ## State of an entry in the NetControl log.
+ type InfoState: enum {
+ REQUESTED,
+ SUCCEEDED,
+ FAILED,
+ REMOVED,
+ TIMEOUT,
+ };
+
+ ## The record type defining the column fields of the NetControl log.
+ type Info: record {
+ ## Time at which the recorded activity occurred.
+ ts: time &log;
+ ## ID of the rule; unique during each Bro run
+ rule_id: string &log &optional;
+ ## Type of the log entry.
+ category: InfoCategory &log &optional;
+ ## The command the log entry is about.
+ cmd: string &log &optional;
+ ## State the log entry reflects.
+ state: InfoState &log &optional;
+ ## String describing an action the entry is about.
+ action: string &log &optional;
+ ## The target type of the action.
+ target: TargetType &log &optional;
+ ## Type of the entity the log entry is about.
+ entity_type: string &log &optional;
+ ## String describing the entity the log entry is about.
+ entity: string &log &optional;
+ ## String describing the optional modification of the entry (e.g., redirect)
+ mod: string &log &optional;
+ ## String with an additional message.
+ msg: string &log &optional;
+ ## Number describing the priority of the log entry
+ priority: int &log &optional;
+ ## Expiry time of the log entry
+ expire: interval &log &optional;
+ ## Location where the underlying action was triggered.
+ location: string &log &optional;
+ ## Plugin triggering the log entry.
+ plugin: string &log &optional;
+ };
+
+ ## Event that can be handled to access the :bro:type:`NetControl::Info`
+ ## record as it is sent on to the logging framework.
+ global log_netcontrol: event(rec: Info);
+}
+
+redef record Rule += {
+ ##< Internally set to the plugins handling the rule.
+ _plugin_ids: set[count] &default=count_set();
+ ##< Internally set to the plugins on which the rule is currently active.
+ _active_plugin_ids: set[count] &default=count_set();
+ ##< Track if the rule was added successfully by all responsible plugins.
+ _added: bool &default=F;
+};
+
+# Variable tracking the state of plugin activation. Once all plugins that
+# have been added in bro_init are activated, this will switch to T and
+# the event NetControl::init_done will be raised.
+global plugins_active: bool = F;
+
+# Set to true at the end of bro_init (with very low priority).
+# Used to track when plugin activation could potentially be finished
+global bro_init_done: bool = F;
+
+# The counters that are used to generate the rule and plugin IDs
+global rule_counter: count = 1;
+global plugin_counter: count = 1;
+
+# List of the currently active plugins
+global plugins: vector of PluginState;
+global plugin_ids: table[count] of PluginState;
+
+# These tables hold information about rules.
+global rules: table[string] of Rule; # Rules indexed by id and cid
+
+# All rules that apply to a certain subnet/IP address.
+global rules_by_subnets: table[subnet] of set[string];
+
+# Rules pertaining to a specific entity.
+# There always only can be one rule of each type for one entity.
+global rule_entities: table[Entity, RuleType] of Rule;
+
+event bro_init() &priority=5
+ {
+ Log::create_stream(NetControl::LOG, [$columns=Info, $ev=log_netcontrol, $path="netcontrol"]);
+ }
+
+function entity_to_info(info: Info, e: Entity)
+ {
+ info$entity_type = fmt("%s", e$ty);
+
+ switch ( e$ty ) {
+ case ADDRESS:
+ info$entity = fmt("%s", e$ip);
+ break;
+
+ case CONNECTION:
+ info$entity = fmt("%s/%d<->%s/%d",
+ e$conn$orig_h, e$conn$orig_p,
+ e$conn$resp_h, e$conn$resp_p);
+ break;
+
+ case FLOW:
+ local ffrom_ip = "*";
+ local ffrom_port = "*";
+ local fto_ip = "*";
+ local fto_port = "*";
+ local ffrom_mac = "*";
+ local fto_mac = "*";
+ if ( e$flow?$src_h )
+ ffrom_ip = cat(e$flow$src_h);
+ if ( e$flow?$src_p )
+ ffrom_port = fmt("%d", e$flow$src_p);
+ if ( e$flow?$dst_h )
+ fto_ip = cat(e$flow$dst_h);
+ if ( e$flow?$dst_p )
+ fto_port = fmt("%d", e$flow$dst_p);
+ info$entity = fmt("%s/%s->%s/%s",
+ ffrom_ip, ffrom_port,
+ fto_ip, fto_port);
+ if ( e$flow?$src_m || e$flow?$dst_m )
+ {
+ if ( e$flow?$src_m )
+ ffrom_mac = e$flow$src_m;
+ if ( e$flow?$dst_m )
+ fto_mac = e$flow$dst_m;
+
+ info$entity = fmt("%s (%s->%s)", info$entity, ffrom_mac, fto_mac);
+ }
+ break;
+
+ case MAC:
+ info$entity = e$mac;
+ break;
+
+ default:
+ info$entity = "";
+ break;
+ }
+ }
+
+function rule_to_info(info: Info, r: Rule)
+ {
+ info$action = fmt("%s", r$ty);
+ info$target = r$target;
+ info$rule_id = r$id;
+ info$expire = r$expire;
+ info$priority = r$priority;
+
+ if ( r?$location && r$location != "" )
+ info$location = r$location;
+
+ if ( r$ty == REDIRECT )
+ info$mod = fmt("-> %d", r$out_port);
+
+ if ( r$ty == MODIFY )
+ {
+ local mfrom_ip = "_";
+ local mfrom_port = "_";
+ local mto_ip = "_";
+ local mto_port = "_";
+ local mfrom_mac = "_";
+ local mto_mac = "_";
+ if ( r$mod?$src_h )
+ mfrom_ip = cat(r$mod$src_h);
+ if ( r$mod?$src_p )
+ mfrom_port = fmt("%d", r$mod$src_p);
+ if ( r$mod?$dst_h )
+ mto_ip = cat(r$mod$dst_h);
+ if ( r$mod?$dst_p )
+ mto_port = fmt("%d", r$mod$dst_p);
+
+ if ( r$mod?$src_m )
+ mfrom_mac = r$mod$src_m;
+ if ( r$mod?$dst_m )
+ mto_mac = r$mod$dst_m;
+
+ info$mod = fmt("Src: %s/%s (%s) Dst: %s/%s (%s)",
+ mfrom_ip, mfrom_port, mfrom_mac, mto_ip, mto_port, mto_mac);
+
+ if ( r$mod?$redirect_port )
+ info$mod = fmt("%s -> %d", info$mod, r$mod$redirect_port);
+
+ }
+
+ entity_to_info(info, r$entity);
+ }
+
+function log_msg(msg: string, p: PluginState)
+ {
+ Log::write(LOG, [$ts=network_time(), $category=MESSAGE, $msg=msg, $plugin=p$plugin$name(p)]);
+ }
+
+function log_error(msg: string, p: PluginState)
+ {
+ Log::write(LOG, [$ts=network_time(), $category=ERROR, $msg=msg, $plugin=p$plugin$name(p)]);
+ }
+
+function log_msg_no_plugin(msg: string)
+ {
+ Log::write(LOG, [$ts=network_time(), $category=MESSAGE, $msg=msg]);
+ }
+
+function log_rule(r: Rule, cmd: string, state: InfoState, p: PluginState, msg: string &default="")
+ {
+ local info: Info = [$ts=network_time()];
+ info$category = RULE;
+ info$cmd = cmd;
+ info$state = state;
+ info$plugin = p$plugin$name(p);
+ if ( msg != "" )
+ info$msg = msg;
+
+ rule_to_info(info, r);
+
+ Log::write(LOG, info);
+ }
+
+function log_rule_error(r: Rule, msg: string, p: PluginState)
+ {
+ local info: Info = [$ts=network_time(), $category=ERROR, $msg=msg, $plugin=p$plugin$name(p)];
+ rule_to_info(info, r);
+ Log::write(LOG, info);
+ }
+
+function log_rule_no_plugin(r: Rule, state: InfoState, msg: string)
+ {
+ local info: Info = [$ts=network_time()];
+ info$category = RULE;
+ info$state = state;
+ info$msg = msg;
+
+ rule_to_info(info, r);
+
+ Log::write(LOG, info);
+ }
+
+function whitelist_address(a: addr, t: interval, location: string &default="") : string
+ {
+ local e: Entity = [$ty=ADDRESS, $ip=addr_to_subnet(a)];
+ local r: Rule = [$ty=WHITELIST, $priority=whitelist_priority, $target=FORWARD, $entity=e, $expire=t, $location=location];
+
+ return add_rule(r);
+ }
+
+function whitelist_subnet(s: subnet, t: interval, location: string &default="") : string
+ {
+ local e: Entity = [$ty=ADDRESS, $ip=s];
+ local r: Rule = [$ty=WHITELIST, $priority=whitelist_priority, $target=FORWARD, $entity=e, $expire=t, $location=location];
+
+ return add_rule(r);
+ }
+
+
+function redirect_flow(f: flow_id, out_port: count, t: interval, location: string &default="") : string
+ {
+ local flow = NetControl::Flow(
+ $src_h=addr_to_subnet(f$src_h),
+ $src_p=f$src_p,
+ $dst_h=addr_to_subnet(f$dst_h),
+ $dst_p=f$dst_p
+ );
+ local e: Entity = [$ty=FLOW, $flow=flow];
+ local r: Rule = [$ty=REDIRECT, $target=FORWARD, $entity=e, $expire=t, $location=location, $out_port=out_port];
+
+ return add_rule(r);
+ }
+
+function quarantine_host(infected: addr, dns: addr, quarantine: addr, t: interval, location: string &default="") : vector of string
+ {
+ local orules: vector of string = vector();
+ local edrop: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected))];
+ local rdrop: Rule = [$ty=DROP, $target=FORWARD, $entity=edrop, $expire=t, $location=location];
+ orules[|orules|] = add_rule(rdrop);
+
+ local todnse: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected), $dst_h=addr_to_subnet(dns), $dst_p=53/udp)];
+ local todnsr = Rule($ty=MODIFY, $target=FORWARD, $entity=todnse, $expire=t, $location=location, $mod=FlowMod($dst_h=quarantine), $priority=+5);
+ orules[|orules|] = add_rule(todnsr);
+
+ local fromdnse: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(dns), $src_p=53/udp, $dst_h=addr_to_subnet(infected))];
+ local fromdnsr = Rule($ty=MODIFY, $target=FORWARD, $entity=fromdnse, $expire=t, $location=location, $mod=FlowMod($src_h=dns), $priority=+5);
+ orules[|orules|] = add_rule(fromdnsr);
+
+ local wle: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected), $dst_h=addr_to_subnet(quarantine), $dst_p=80/tcp)];
+ local wlr = Rule($ty=WHITELIST, $target=FORWARD, $entity=wle, $expire=t, $location=location, $priority=+5);
+ orules[|orules|] = add_rule(wlr);
+
+ return orules;
+ }
+
+function check_plugins()
+ {
+ if ( plugins_active )
+ return;
+
+ local all_active = T;
+ for ( i in plugins )
+ {
+ local p = plugins[i];
+ if ( p$_activated == F )
+ all_active = F;
+ }
+
+ if ( all_active )
+ {
+ plugins_active = T;
+
+ # Skip log message if there are no plugins
+ if ( |plugins| > 0 )
+ log_msg_no_plugin("plugin initialization done");
+
+ event NetControl::init_done();
+ }
+ }
+
+function plugin_activated(p: PluginState)
+ {
+ local id = p$_id;
+ if ( id !in plugin_ids )
+ {
+ log_error("unknown plugin activated", p);
+ return;
+ }
+ plugin_ids[id]$_activated = T;
+ log_msg("activation finished", p);
+
+ if ( bro_init_done )
+ check_plugins();
+ }
+
+event bro_init() &priority=-5
+ {
+ event NetControl::init();
+ }
+
+event NetControl::init() &priority=-20
+ {
+ bro_init_done = T;
+
+ check_plugins();
+
+ if ( plugins_active == F )
+ log_msg_no_plugin("waiting for plugins to initialize");
+ }
+
+# Low-level functions that only run on the manager (or standalone) Bro node.
+
+function activate_impl(p: PluginState, priority: int)
+ {
+ p$_priority = priority;
+ plugins[|plugins|] = p;
+ sort(plugins, function(p1: PluginState, p2: PluginState) : int { return p2$_priority - p1$_priority; });
+
+ plugin_ids[plugin_counter] = p;
+ p$_id = plugin_counter;
+ ++plugin_counter;
+
+ # perform one-time initialization
+ if ( p$plugin?$init )
+ {
+ log_msg(fmt("activating plugin with priority %d", priority), p);
+ p$plugin$init(p);
+ }
+ else
+ {
+ # no initialization necessary, mark plugin as active right away
+ plugin_activated(p);
+ }
+
+ }
+
+function add_one_subnet_entry(s: subnet, r: Rule)
+ {
+ if ( ! check_subnet(s, rules_by_subnets) )
+ rules_by_subnets[s] = set(r$id);
+ else
+ add rules_by_subnets[s][r$id];
+ }
+
+function add_subnet_entry(rule: Rule)
+ {
+ local e = rule$entity;
+ if ( e$ty == ADDRESS )
+ {
+ add_one_subnet_entry(e$ip, rule);
+ }
+ else if ( e$ty == CONNECTION )
+ {
+ add_one_subnet_entry(addr_to_subnet(e$conn$orig_h), rule);
+ add_one_subnet_entry(addr_to_subnet(e$conn$resp_h), rule);
+ }
+ else if ( e$ty == FLOW )
+ {
+ if ( e$flow?$src_h )
+ add_one_subnet_entry(e$flow$src_h, rule);
+ if ( e$flow?$dst_h )
+ add_one_subnet_entry(e$flow$dst_h, rule);
+ }
+ }
+
+function remove_one_subnet_entry(s: subnet, r: Rule)
+ {
+ if ( ! check_subnet(s, rules_by_subnets) )
+ return;
+
+ if ( r$id !in rules_by_subnets[s] )
+ return;
+
+ delete rules_by_subnets[s][r$id];
+ if ( |rules_by_subnets[s]| == 0 )
+ delete rules_by_subnets[s];
+ }
+
+function remove_subnet_entry(rule: Rule)
+ {
+ local e = rule$entity;
+ if ( e$ty == ADDRESS )
+ {
+ remove_one_subnet_entry(e$ip, rule);
+ }
+ else if ( e$ty == CONNECTION )
+ {
+ remove_one_subnet_entry(addr_to_subnet(e$conn$orig_h), rule);
+ remove_one_subnet_entry(addr_to_subnet(e$conn$resp_h), rule);
+ }
+ else if ( e$ty == FLOW )
+ {
+ if ( e$flow?$src_h )
+ remove_one_subnet_entry(e$flow$src_h, rule);
+ if ( e$flow?$dst_h )
+ remove_one_subnet_entry(e$flow$dst_h, rule);
+ }
+ }
+
+function find_rules_subnet(sn: subnet) : vector of Rule
+ {
+ local ret: vector of Rule = vector();
+
+ local matches = matching_subnets(sn, rules_by_subnets);
+
+ for ( m in matches )
+ {
+ local sn_entry = matches[m];
+ local rule_ids = rules_by_subnets[sn_entry];
+ for ( rule_id in rules_by_subnets[sn_entry] )
+ {
+ if ( rule_id in rules )
+ ret[|ret|] = rules[rule_id];
+ else
+ Reporter::error("find_rules_subnet - internal data structure error, missing rule");
+ }
+ }
+
+ return ret;
+ }
+
+function find_rules_addr(ip: addr) : vector of Rule
+ {
+ return find_rules_subnet(addr_to_subnet(ip));
+ }
+
+function add_rule_impl(rule: Rule) : string
+ {
+ if ( ! plugins_active )
+ {
+ log_rule_no_plugin(rule, FAILED, "plugins not initialized yet");
+ return "";
+ }
+
+ rule$cid = ++rule_counter; # numeric id that can be used by plugins for their rules.
+
+ if ( ! rule?$id || rule$id == "" )
+ rule$id = cat(rule$cid);
+
+ if ( ! hook NetControl::rule_policy(rule) )
+ return "";
+
+ if ( [rule$entity, rule$ty] in rule_entities )
+ {
+ log_rule_no_plugin(rule, FAILED, "discarded duplicate insertion");
+ return "";
+ }
+
+ local accepted = F;
+ local priority: int = +0;
+
+ for ( i in plugins )
+ {
+ local p = plugins[i];
+
+ if ( p$_activated == F )
+ next;
+
+ # in this case, rule was accepted by earlier plugin and this plugin has a lower
+ # priority. Abort and do not send there...
+ if ( accepted == T && p$_priority != priority )
+ break;
+
+ if ( p$plugin$add_rule(p, rule) )
+ {
+ accepted = T;
+ priority = p$_priority;
+ log_rule(rule, "ADD", REQUESTED, p);
+
+ add rule$_plugin_ids[p$_id];
+ }
+ }
+
+ if ( accepted )
+ {
+ rules[rule$id] = rule;
+ rule_entities[rule$entity, rule$ty] = rule;
+
+ add_subnet_entry(rule);
+
+ return rule$id;
+ }
+
+ log_rule_no_plugin(rule, FAILED, "not supported");
+ return "";
+ }
+
+function remove_rule_plugin(r: Rule, p: PluginState): bool
+ {
+ local success = T;
+
+ if ( ! p$plugin$remove_rule(p, r) )
+ {
+ # still continue and send to other plugins
+ log_rule_error(r, "remove failed", p);
+ success = F;
+ }
+ else
+ {
+ log_rule(r, "REMOVE", REQUESTED, p);
+ }
+
+ return success;
+ }
+
+function remove_rule_impl(id: string) : bool
+ {
+ if ( id !in rules )
+ {
+ Reporter::error(fmt("Rule %s does not exist in NetControl::remove_rule", id));
+ return F;
+ }
+
+ local r = rules[id];
+
+ local success = T;
+ for ( plugin_id in r$_active_plugin_ids )
+ {
+ local p = plugin_ids[plugin_id];
+ success = remove_rule_plugin(r, p);
+ }
+
+ return success;
+ }
+
+function rule_expire_impl(r: Rule, p: PluginState) &priority=-5
+ {
+ # do not emit timeout events on shutdown
+ if ( bro_is_terminating() )
+ return;
+
+ if ( r$id !in rules )
+ # Removed already.
+ return;
+
+ event NetControl::rule_timeout(r, FlowInfo(), p); # timeout implementation will handle the removal
+ }
+
+function rule_added_impl(r: Rule, p: PluginState, msg: string &default="")
+ {
+ if ( r$id !in rules )
+ {
+ log_rule_error(r, "Addition of unknown rule", p);
+ return;
+ }
+
+ # use our version to prevent operating on copies.
+ local rule = rules[r$id];
+ if ( p$_id !in rule$_plugin_ids )
+ {
+ log_rule_error(rule, "Rule added to non-responsible plugin", p);
+ return;
+ }
+
+ log_rule(r, "ADD", SUCCEEDED, p, msg);
+
+ add rule$_active_plugin_ids[p$_id];
+ if ( |rule$_plugin_ids| == |rule$_active_plugin_ids| )
+ {
+ # rule was completely added.
+ rule$_added = T;
+ }
+ }
+
+function rule_cleanup(r: Rule)
+ {
+ if ( |r$_active_plugin_ids| > 0 )
+ return;
+
+ remove_subnet_entry(r);
+
+ delete rule_entities[r$entity, r$ty];
+ delete rules[r$id];
+ }
+
+function rule_removed_impl(r: Rule, p: PluginState, msg: string &default="")
+ {
+ if ( r$id !in rules )
+ {
+ log_rule_error(r, "Removal of non-existing rule", p);
+ return;
+ }
+
+ # use our version to prevent operating on copies.
+ local rule = rules[r$id];
+
+ if ( p$_id !in rule$_plugin_ids )
+ {
+ log_rule_error(r, "Removed from non-assigned plugin", p);
+ return;
+ }
+
+ if ( p$_id in rule$_active_plugin_ids )
+ {
+ delete rule$_active_plugin_ids[p$_id];
+ }
+
+ log_rule(rule, "REMOVE", SUCCEEDED, p, msg);
+ rule_cleanup(rule);
+ }
+
+function rule_timeout_impl(r: Rule, i: FlowInfo, p: PluginState)
+ {
+ if ( r$id !in rules )
+ {
+ log_rule_error(r, "Timeout of non-existing rule", p);
+ return;
+ }
+
+ local rule = rules[r$id];
+
+ local msg = "";
+ if ( i?$packet_count )
+ msg = fmt("Packets: %d", i$packet_count);
+ if ( i?$byte_count )
+ {
+ if ( msg != "" )
+ msg = msg + " ";
+ msg = fmt("%sBytes: %s", msg, i$byte_count);
+ }
+
+ log_rule(rule, "EXPIRE", TIMEOUT, p, msg);
+
+ if ( ! p$plugin$can_expire )
+ {
+ # in this case, we actually have to delete the rule and the timeout
+ # call just originated locally
+ remove_rule_plugin(rule, p);
+ return;
+ }
+
+ if ( p$_id !in rule$_plugin_ids )
+ {
+ log_rule_error(r, "Timeout from non-assigned plugin", p);
+ return;
+ }
+
+ if ( p$_id in rule$_active_plugin_ids )
+ {
+ delete rule$_active_plugin_ids[p$_id];
+ }
+
+ rule_cleanup(rule);
+ }
+
+function rule_error_impl(r: Rule, p: PluginState, msg: string &default="")
+ {
+ if ( r$id !in rules )
+ {
+ log_rule_error(r, "Error of non-existing rule", p);
+ return;
+ }
+
+ local rule = rules[r$id];
+
+ log_rule_error(rule, msg, p);
+
+ # Remove the plugin both from active and all plugins of the rule. If there
+ # are no plugins left afterwards - delete it
+ if ( p$_id !in rule$_plugin_ids )
+ {
+ log_rule_error(r, "Error from non-assigned plugin", p);
+ return;
+ }
+
+ if ( p$_id in rule$_active_plugin_ids )
+ {
+ # error during removal. Let's pretend it worked.
+ delete rule$_plugin_ids[p$_id];
+ delete rule$_active_plugin_ids[p$_id];
+ rule_cleanup(rule);
+ }
+ else
+ {
+ # error during insertion. Meh. If we are the only plugin, remove the rule again.
+ # Otherwise - keep it, minus us.
+ delete rule$_plugin_ids[p$_id];
+ if ( |rule$_plugin_ids| == 0 )
+ {
+ rule_cleanup(rule);
+ }
+ }
+ }
+
+function clear()
+ {
+ for ( id in rules )
+ remove_rule(id);
+ }
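
A minimal usage sketch of the API above, assuming the Entity/Flow/Rule fields defined in types.bro (part of the same patch) and the debug backend from plugins/debug.bro; all concrete values are illustrative:

    @load base/frameworks/netcontrol

    event NetControl::init()
        {
        # Register at least one backend; rules added before any backend is
        # active are discarded.
        NetControl::activate(NetControl::create_debug(T), 0);
        }

    event NetControl::init_done()
        {
        # High-level API: whitelist a host indefinitely.
        NetControl::whitelist_address(192.168.17.2, 0secs);

        # Low-level API: drop traffic to TCP port 8080 on the forward path
        # for 30 minutes.
        local e = NetControl::Entity($ty=NetControl::FLOW,
                                     $flow=NetControl::Flow($dst_p=8080/tcp));
        local r = NetControl::Rule($ty=NetControl::DROP, $target=NetControl::FORWARD,
                                   $entity=e, $expire=30min);
        NetControl::add_rule(r);
        }
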
diff --git a/scripts/base/frameworks/netcontrol/non-cluster.bro b/scripts/base/frameworks/netcontrol/non-cluster.bro
new file mode 100644
index 0000000000..4098586be4
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/non-cluster.bro
@@ -0,0 +1,47 @@
+module NetControl;
+
+@load ./main
+
+function activate(p: PluginState, priority: int)
+ {
+ activate_impl(p, priority);
+ }
+
+function add_rule(r: Rule) : string
+ {
+ return add_rule_impl(r);
+ }
+
+function remove_rule(id: string) : bool
+ {
+ return remove_rule_impl(id);
+ }
+
+event rule_expire(r: Rule, p: PluginState) &priority=-5
+ {
+ rule_expire_impl(r, p);
+ }
+
+event rule_added(r: Rule, p: PluginState, msg: string &default="") &priority=5
+ {
+ rule_added_impl(r, p, msg);
+
+ if ( r?$expire && r$expire > 0secs && ! p$plugin$can_expire )
+ schedule r$expire { rule_expire(r, p) };
+ }
+
+event rule_removed(r: Rule, p: PluginState, msg: string &default="") &priority=-5
+ {
+ rule_removed_impl(r, p, msg);
+ }
+
+event rule_timeout(r: Rule, i: FlowInfo, p: PluginState) &priority=-5
+ {
+ rule_timeout_impl(r, i, p);
+ }
+
+event rule_error(r: Rule, p: PluginState, msg: string &default="") &priority=-5
+ {
+ rule_error_impl(r, p, msg);
+ }
+
diff --git a/scripts/base/frameworks/netcontrol/plugin.bro b/scripts/base/frameworks/netcontrol/plugin.bro
new file mode 100644
index 0000000000..7d0ee13a81
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugin.bro
@@ -0,0 +1,89 @@
+##! Plugin interface for NetControl backends.
+
+module NetControl;
+
+@load ./types
+
+export {
+ ## State for a plugin instance.
+ type PluginState: record {
+ ## Table for a plugin to store custom, instance-specific state.
+ config: table[string] of string &default=table();
+
+ ## Unique plugin identifier -- used for reverse lookup of plugins from rules. Set internally.
+ _id: count &optional;
+
+ ## Set internally.
+ _priority: int &default=+0;
+
+ ## Set internally. Signifies whether the plugin has reported that it activated successfully.
+ _activated: bool &default=F;
+ };
+
+ # Definition of a plugin.
+ #
+ # Generally a plugin needs to implement only what it can support. By
+ # returning failure, it indicates that it can't support something, and the
+ # framework will then try another plugin, if available, or inform the caller
+ # that the operation failed. If a function isn't implemented by a plugin,
+ # that's considered an implicit failure to support the operation.
+ #
+ # If a plugin accepts a rule operation, it *must* generate one of the reporting
+ # events ``rule_{added,removed,error}`` to signal if it indeed worked out;
+ # this is separate from accepting the operation because often a plugin
+ # will only know later (i.e., asynchronously) if that was an error for
+ # something it thought it could handle.
+ type Plugin: record {
+ # Returns a descriptive name of the plugin instance, suitable for use in logging
+ # messages. Note that this function is not optional.
+ name: function(state: PluginState) : string;
+
+ ## If true, plugin can expire rules itself. If false,
+ ## framework will manage rule expiration.
+ can_expire: bool;
+
+ # One-time initialization function called when plugin gets registered, and
+ # before any other methods are called.
+ #
+ # If this function is provided, NetControl assumes that the plugin has to
+ # perform, potentially lengthy, initialization before the plugin will become
+ # active. In this case, the plugin has to call ``NetControl::plugin_activated``,
+ # once initialization finishes.
+ init: function(state: PluginState) &optional;
+
+ # One-time finalization function called when a plugin is shut down; no further
+ # functions will be called afterwards.
+ done: function(state: PluginState) &optional;
+
+ # Implements the add_rule() operation. If the plugin accepts the rule,
+ # it returns true, false otherwise. The rule will already have its
+ # ``id`` field set, which the plugin may use for identification
+ # purposes.
+ add_rule: function(state: PluginState, r: Rule) : bool &optional;
+
+ # Implements the remove_rule() operation. This will only be called for
+ # rules that the plugin has previously accepted with add_rule(). The
+ # ``id`` field will match that of the add_rule() call. Generally,
+ # a plugin that accepts an add_rule() should also accept the
+ # remove_rule().
+ remove_rule: function(state: PluginState, r: Rule) : bool &optional;
+
+ # A transaction groups a number of operations. The plugin can add them internally
+ # and postpone putting them into effect until committed. This allows building a
+ # configuration of multiple rules at once, including replaying a previous state.
+ transaction_begin: function(state: PluginState) &optional;
+ transaction_end: function(state: PluginState) &optional;
+ };
+
+ # Table for a plugin to store instance-specific configuration information.
+ #
+ # Note: it would be nicer to pass the Plugin instance to all of the below, instead
+ # of this state table. However, Bro's type resolver has trouble with referring to a
+ # record type from inside itself.
+ redef record PluginState += {
+ ## The plugin that the state belongs to. (Defined separately
+ ## because of cyclic type dependency.)
+ plugin: Plugin &optional;
+ };
+
+}
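
For illustration, a minimal sketch of a plugin implementing this interface; it accepts every rule and confirms it right away (all names below are made up for the example):

    @load base/frameworks/netcontrol

    module NetControl;

    function null_name(p: PluginState) : string
        {
        return "Null-Example";
        }

    function null_add_rule(p: PluginState, r: Rule) : bool
        {
        # Accept the rule and immediately report it as being in place.
        event NetControl::rule_added(r, p);
        return T;
        }

    function null_remove_rule(p: PluginState, r: Rule) : bool
        {
        event NetControl::rule_removed(r, p);
        return T;
        }

    global null_example_plugin = Plugin(
        $name=null_name,
        $can_expire=F,
        $add_rule=null_add_rule,
        $remove_rule=null_remove_rule
        );
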
diff --git a/scripts/base/frameworks/netcontrol/plugins/__load__.bro b/scripts/base/frameworks/netcontrol/plugins/__load__.bro
new file mode 100644
index 0000000000..255cee5f69
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugins/__load__.bro
@@ -0,0 +1,5 @@
+@load ./debug
+@load ./openflow
+@load ./packetfilter
+@load ./broker
+@load ./acld
diff --git a/scripts/base/frameworks/netcontrol/plugins/acld.bro b/scripts/base/frameworks/netcontrol/plugins/acld.bro
new file mode 100644
index 0000000000..13802f2e21
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugins/acld.bro
@@ -0,0 +1,294 @@
+##! Acld plugin for the netcontrol framework.
+
+module NetControl;
+
+@load ../main
+@load ../plugin
+@load base/frameworks/broker
+
+export {
+ type AclRule : record {
+ command: string;
+ cookie: count;
+ arg: string;
+ comment: string &optional;
+ };
+
+ type AcldConfig: record {
+ ## The acld topic used to send events to
+ acld_topic: string;
+ ## Broker host to connect to
+ acld_host: addr;
+ ## Broker port to connect to
+ acld_port: port;
+ ## Do we accept rules for the monitor path? Default false
+ monitor: bool &default=F;
+ ## Do we accept rules for the forward path? Default true
+ forward: bool &default=T;
+
+ ## Predicate that is called on rule insertion or removal.
+ ##
+ ## p: Current plugin state
+ ##
+ ## r: The rule to be inserted or removed
+ ##
+ ## Returns: T if the rule can be handled by the current backend, F otherwise
+ check_pred: function(p: PluginState, r: Rule): bool &optional;
+ };
+
+ ## Instantiates the acld plugin.
+ global create_acld: function(config: AcldConfig) : PluginState;
+
+ redef record PluginState += {
+ acld_config: AcldConfig &optional;
+ ## The ID of this acld instance - for the mapping to PluginStates
+ acld_id: count &optional;
+ };
+
+ ## Hook that is called after a rule is converted to an acld rule.
+ ## The hook may modify the rule before it is sent to acld.
+ ## Setting the acld command to an empty string will cause the rule to be
+ ## rejected by the plugin.
+ ##
+ ## p: Current plugin state
+ ##
+ ## r: The rule to be inserted or removed
+ ##
+ ## ar: The acld rule to be inserted or removed
+ global NetControl::acld_rule_policy: hook(p: PluginState, r: Rule, ar: AclRule);
+
+ ## Events that are sent from us to Broker
+ global acld_add_rule: event(id: count, r: Rule, ar: AclRule);
+ global acld_remove_rule: event(id: count, r: Rule, ar: AclRule);
+
+ ## Events that are sent from Broker to us
+ global acld_rule_added: event(id: count, r: Rule, msg: string);
+ global acld_rule_removed: event(id: count, r: Rule, msg: string);
+ global acld_rule_error: event(id: count, r: Rule, msg: string);
+}
+
+global netcontrol_acld_peers: table[port, string] of PluginState;
+global netcontrol_acld_topics: set[string] = set();
+global netcontrol_acld_id: table[count] of PluginState = table();
+global netcontrol_acld_current_id: count = 0;
+
+const acld_add_to_remove: table[string] of string = {
+ ["drop"] = "restore",
+ ["whitelist"] = "remwhitelist",
+ ["blockhosthost"] = "restorehosthost",
+ ["droptcpport"] = "restoretcpport",
+ ["dropudpport"] = "restoreudpport",
+ ["droptcpdsthostport"] = "restoretcpdsthostport",
+ ["dropudpdsthostport"] = "restoreudpdsthostport",
+ ["permittcpdsthostport"] = "unpermittcpdsthostport",
+ ["permitudpdsthostport"] = "unpermitudpdsthostport",
+ ["nullzero"] = "nonullzero"
+};
+
+event NetControl::acld_rule_added(id: count, r: Rule, msg: string)
+ {
+ if ( id !in netcontrol_acld_id )
+ {
+ Reporter::error(fmt("NetControl acld plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_acld_id[id];
+
+ event NetControl::rule_added(r, p, msg);
+ }
+
+event NetControl::acld_rule_removed(id: count, r: Rule, msg: string)
+ {
+ if ( id !in netcontrol_acld_id )
+ {
+ Reporter::error(fmt("NetControl acld plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_acld_id[id];
+
+ event NetControl::rule_removed(r, p, msg);
+ }
+
+event NetControl::acld_rule_error(id: count, r: Rule, msg: string)
+ {
+ if ( id !in netcontrol_acld_id )
+ {
+ Reporter::error(fmt("NetControl acld plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_acld_id[id];
+
+ event NetControl::rule_error(r, p, msg);
+ }
+
+function acld_name(p: PluginState) : string
+ {
+ return fmt("Acld-%s", p$acld_config$acld_topic);
+ }
+
+# check that subnet specifies an addr
+function check_sn(sn: subnet) : bool
+ {
+ if ( is_v4_subnet(sn) && subnet_width(sn) == 32 )
+ return T;
+ if ( is_v6_subnet(sn) && subnet_width(sn) == 128 )
+ return T;
+
+ Reporter::error(fmt("Acld: rule_to_acl_rule was given a subnet that does not specify a distinct address where needed - %s", sn));
+ return F;
+ }
+
+function rule_to_acl_rule(p: PluginState, r: Rule) : AclRule
+ {
+ local e = r$entity;
+
+ local command: string = "";
+ local arg: string = "";
+
+ if ( e$ty == ADDRESS )
+ {
+ if ( r$ty == DROP )
+ command = "drop";
+ else if ( r$ty == WHITELIST )
+ command = "whitelist";
+ arg = cat(e$ip);
+ }
+ else if ( e$ty == FLOW )
+ {
+ local f = e$flow;
+ if ( ( ! f?$src_h ) && ( ! f?$src_p ) && f?$dst_h && f?$dst_p && ( ! f?$src_m ) && ( ! f?$dst_m ) )
+ {
+ if ( !check_sn(f$dst_h) )
+ command = ""; # invalid addr, do nothing
+ else if ( is_tcp_port(f$dst_p) && r$ty == DROP )
+ command = "droptcpdsthostport";
+ else if ( is_tcp_port(f$dst_p) && r$ty == WHITELIST )
+ command = "permittcpdsthostport";
+ else if ( is_udp_port(f$dst_p) && r$ty == DROP )
+ command = "dropudpdsthostport";
+ else if ( is_udp_port(f$dst_p) && r$ty == WHITELIST )
+ command = "permitudpdsthostport";
+
+ arg = fmt("%s %d", subnet_to_addr(f$dst_h), f$dst_p);
+ }
+ else if ( f?$src_h && ( ! f?$src_p ) && f?$dst_h && ( ! f?$dst_p ) && ( ! f?$src_m ) && ( ! f?$dst_m ) )
+ {
+ if ( !check_sn(f$src_h) || !check_sn(f$dst_h) )
+ command = "";
+ else if ( r$ty == DROP )
+ command = "blockhosthost";
+ arg = fmt("%s %s", subnet_to_addr(f$src_h), subnet_to_addr(f$dst_h));
+ }
+ else if ( ( ! f?$src_h ) && ( ! f?$src_p ) && ( ! f?$dst_h ) && f?$dst_p && ( ! f?$src_m ) && ( ! f?$dst_m ) )
+ {
+ if ( is_tcp_port(f$dst_p) && r$ty == DROP )
+ command = "droptcpport";
+ else if ( is_udp_port(f$dst_p) && r$ty == DROP )
+ command = "dropudpport";
+ arg = fmt("%d", f$dst_p);
+ }
+ }
+
+ local ar = AclRule($command=command, $cookie=r$cid, $arg=arg);
+ if ( r?$location )
+ ar$comment = r$location;
+
+ hook NetControl::acld_rule_policy(p, r, ar);
+
+ return ar;
+ }
+
+function acld_check_rule(p: PluginState, r: Rule) : bool
+ {
+ local c = p$acld_config;
+
+ if ( p$acld_config?$check_pred )
+ return p$acld_config$check_pred(p, r);
+
+ if ( r$target == MONITOR && c$monitor )
+ return T;
+
+ if ( r$target == FORWARD && c$forward )
+ return T;
+
+ return F;
+ }
+
+function acld_add_rule_fun(p: PluginState, r: Rule) : bool
+ {
+ if ( ! acld_check_rule(p, r) )
+ return F;
+
+ local ar = rule_to_acl_rule(p, r);
+
+ if ( ar$command == "" )
+ return F;
+
+ Broker::event(p$acld_config$acld_topic, Broker::event_args(acld_add_rule, p$acld_id, r, ar));
+ return T;
+ }
+
+function acld_remove_rule_fun(p: PluginState, r: Rule) : bool
+ {
+ if ( ! acld_check_rule(p, r) )
+ return F;
+
+ local ar = rule_to_acl_rule(p, r);
+ if ( ar$command in acld_add_to_remove )
+ ar$command = acld_add_to_remove[ar$command];
+ else
+ return F;
+
+ Broker::event(p$acld_config$acld_topic, Broker::event_args(acld_remove_rule, p$acld_id, r, ar));
+ return T;
+ }
+
+function acld_init(p: PluginState)
+ {
+ Broker::enable();
+ Broker::connect(cat(p$acld_config$acld_host), p$acld_config$acld_port, 1sec);
+ Broker::subscribe_to_events(p$acld_config$acld_topic);
+ }
+
+event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
+ {
+ if ( [peer_port, peer_address] !in netcontrol_acld_peers )
+ # ok, this one was none of ours...
+ return;
+
+ local p = netcontrol_acld_peers[peer_port, peer_address];
+ plugin_activated(p);
+ }
+
+global acld_plugin = Plugin(
+ $name=acld_name,
+ $can_expire = F,
+ $add_rule = acld_add_rule_fun,
+ $remove_rule = acld_remove_rule_fun,
+ $init = acld_init
+ );
+
+function create_acld(config: AcldConfig) : PluginState
+ {
+ if ( config$acld_topic in netcontrol_acld_topics )
+ Reporter::warning(fmt("Topic %s was added to NetControl acld plugin twice. Possible duplication of commands", config$acld_topic));
+ else
+ add netcontrol_acld_topics[config$acld_topic];
+
+ local host = cat(config$acld_host);
+ local p: PluginState = [$acld_config=config, $plugin=acld_plugin, $acld_id=netcontrol_acld_current_id];
+
+ if ( [config$acld_port, host] in netcontrol_acld_peers )
+ Reporter::warning(fmt("Peer %s:%s was added to NetControl acld plugin twice.", host, config$acld_port));
+ else
+ netcontrol_acld_peers[config$acld_port, host] = p;
+
+ netcontrol_acld_id[netcontrol_acld_current_id] = p;
+ ++netcontrol_acld_current_id;
+
+ return p;
+ }
+
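
A usage sketch for this plugin, assuming an acld endpoint reachable over Broker; host, port, and topic below are placeholders:

    @load base/frameworks/netcontrol

    event NetControl::init()
        {
        local acld_config = NetControl::AcldConfig($acld_host=127.0.0.1,
                                                   $acld_port=9999/tcp,
                                                   $acld_topic="bro/event/netcontrol-acld");
        NetControl::activate(NetControl::create_acld(acld_config), 0);
        }
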
diff --git a/scripts/base/frameworks/netcontrol/plugins/broker.bro b/scripts/base/frameworks/netcontrol/plugins/broker.bro
new file mode 100644
index 0000000000..2af3724db7
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugins/broker.bro
@@ -0,0 +1,163 @@
+##! Broker plugin for the netcontrol framework. Sends the raw data structures
+##! used in NetControl on to Broker to allow for easy handling, e.g., by
+##! command-line scripts.
+
+module NetControl;
+
+@load ../main
+@load ../plugin
+@load base/frameworks/broker
+
+export {
+ ## Instantiates the broker plugin.
+ global create_broker: function(host: addr, host_port: port, topic: string, can_expire: bool &default=F) : PluginState;
+
+ redef record PluginState += {
+ ## The broker topic used to send events to
+ broker_topic: string &optional;
+ ## The ID of this broker instance - for the mapping to PluginStates
+ broker_id: count &optional;
+ ## Broker host to connect to
+ broker_host: addr &optional;
+ ## Broker port to connect to
+ broker_port: port &optional;
+ };
+
+ global broker_add_rule: event(id: count, r: Rule);
+ global broker_remove_rule: event(id: count, r: Rule);
+
+ global broker_rule_added: event(id: count, r: Rule, msg: string);
+ global broker_rule_removed: event(id: count, r: Rule, msg: string);
+ global broker_rule_error: event(id: count, r: Rule, msg: string);
+ global broker_rule_timeout: event(id: count, r: Rule, i: FlowInfo);
+}
+
+global netcontrol_broker_peers: table[port, string] of PluginState;
+global netcontrol_broker_topics: set[string] = set();
+global netcontrol_broker_id: table[count] of PluginState = table();
+global netcontrol_broker_current_id: count = 0;
+
+event NetControl::broker_rule_added(id: count, r: Rule, msg: string)
+ {
+ if ( id !in netcontrol_broker_id )
+ {
+ Reporter::error(fmt("NetControl broker plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_broker_id[id];
+
+ event NetControl::rule_added(r, p, msg);
+ }
+
+event NetControl::broker_rule_removed(id: count, r: Rule, msg: string)
+ {
+ if ( id !in netcontrol_broker_id )
+ {
+ Reporter::error(fmt("NetControl broker plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_broker_id[id];
+
+ event NetControl::rule_removed(r, p, msg);
+ }
+
+event NetControl::broker_rule_error(id: count, r: Rule, msg: string)
+ {
+ if ( id !in netcontrol_broker_id )
+ {
+ Reporter::error(fmt("NetControl broker plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_broker_id[id];
+
+ event NetControl::rule_error(r, p, msg);
+ }
+
+event NetControl::broker_rule_timeout(id: count, r: Rule, i: FlowInfo)
+ {
+ if ( id !in netcontrol_broker_id )
+ {
+ Reporter::error(fmt("NetControl broker plugin with id %d not found, aborting", id));
+ return;
+ }
+
+ local p = netcontrol_broker_id[id];
+
+ event NetControl::rule_timeout(r, i, p);
+ }
+
+function broker_name(p: PluginState) : string
+ {
+ return fmt("Broker-%s", p$broker_topic);
+ }
+
+function broker_add_rule_fun(p: PluginState, r: Rule) : bool
+ {
+ Broker::event(p$broker_topic, Broker::event_args(broker_add_rule, p$broker_id, r));
+ return T;
+ }
+
+function broker_remove_rule_fun(p: PluginState, r: Rule) : bool
+ {
+ Broker::event(p$broker_topic, Broker::event_args(broker_remove_rule, p$broker_id, r));
+ return T;
+ }
+
+function broker_init(p: PluginState)
+ {
+ Broker::enable();
+ Broker::connect(cat(p$broker_host), p$broker_port, 1sec);
+ Broker::subscribe_to_events(p$broker_topic);
+ }
+
+event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
+ {
+ if ( [peer_port, peer_address] !in netcontrol_broker_peers )
+ return;
+
+ local p = netcontrol_broker_peers[peer_port, peer_address];
+ plugin_activated(p);
+ }
+
+global broker_plugin = Plugin(
+ $name=broker_name,
+ $can_expire = F,
+ $add_rule = broker_add_rule_fun,
+ $remove_rule = broker_remove_rule_fun,
+ $init = broker_init
+ );
+
+global broker_plugin_can_expire = Plugin(
+ $name=broker_name,
+ $can_expire = T,
+ $add_rule = broker_add_rule_fun,
+ $remove_rule = broker_remove_rule_fun,
+ $init = broker_init
+ );
+
+function create_broker(host: addr, host_port: port, topic: string, can_expire: bool &default=F) : PluginState
+ {
+ if ( topic in netcontrol_broker_topics )
+ Reporter::warning(fmt("Topic %s was added to NetControl broker plugin twice. Possible duplication of commands", topic));
+ else
+ add netcontrol_broker_topics[topic];
+
+ local plugin = broker_plugin;
+ if ( can_expire )
+ plugin = broker_plugin_can_expire;
+
+ local p: PluginState = [$broker_host=host, $broker_port=host_port, $plugin=plugin, $broker_topic=topic, $broker_id=netcontrol_broker_current_id];
+
+ if ( [host_port, cat(host)] in netcontrol_broker_peers )
+ Reporter::warning(fmt("Peer %s:%s was added to NetControl broker plugin twice.", host, host_port));
+ else
+ netcontrol_broker_peers[host_port, cat(host)] = p;
+
+ netcontrol_broker_id[netcontrol_broker_current_id] = p;
+ ++netcontrol_broker_current_id;
+
+ return p;
+ }
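
A usage sketch, assuming an external Broker endpoint (e.g., a command-line script) listening on the placeholder host, port, and topic below:

    @load base/frameworks/netcontrol

    event NetControl::init()
        {
        local p = NetControl::create_broker(127.0.0.1, 9977/tcp, "bro/event/netcontrol-example");
        NetControl::activate(p, 0);
        }
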
diff --git a/scripts/base/frameworks/netcontrol/plugins/debug.bro b/scripts/base/frameworks/netcontrol/plugins/debug.bro
new file mode 100644
index 0000000000..f421dc55e3
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugins/debug.bro
@@ -0,0 +1,99 @@
+##! Debugging plugin for the NetControl framework, providing insight into
+##! executed operations.
+
+@load ../plugin
+@load ../main
+
+module NetControl;
+
+export {
+ ## Instantiates a debug plugin for the NetControl framework. The debug
+ ## plugin simply logs the operations it receives.
+ ##
+ ## do_something: If true, the plugin will claim it supports all operations; if
+ ## false, it will indicate it doesn't support any.
+ global create_debug: function(do_something: bool) : PluginState;
+}
+
+function do_something(p: PluginState) : bool
+ {
+ return p$config["all"] == "1";
+ }
+
+function debug_name(p: PluginState) : string
+ {
+ return fmt("Debug-%s", (do_something(p) ? "All" : "None"));
+ }
+
+function debug_log(p: PluginState, msg: string)
+ {
+ print fmt("netcontrol debug (%s): %s", debug_name(p), msg);
+ }
+
+function debug_init(p: PluginState)
+ {
+ debug_log(p, "init");
+ plugin_activated(p);
+ }
+
+function debug_done(p: PluginState)
+ {
+ debug_log(p, "done");
+ }
+
+function debug_add_rule(p: PluginState, r: Rule) : bool
+ {
+ local s = fmt("add_rule: %s", r);
+ debug_log(p, s);
+
+ if ( do_something(p) )
+ {
+ event NetControl::rule_added(r, p);
+ return T;
+ }
+
+ return F;
+ }
+
+function debug_remove_rule(p: PluginState, r: Rule) : bool
+ {
+ local s = fmt("remove_rule: %s", r);
+ debug_log(p, s);
+
+ event NetControl::rule_removed(r, p);
+ return T;
+ }
+
+function debug_transaction_begin(p: PluginState)
+ {
+ debug_log(p, "transaction_begin");
+ }
+
+function debug_transaction_end(p: PluginState)
+ {
+ debug_log(p, "transaction_end");
+ }
+
+global debug_plugin = Plugin(
+ $name=debug_name,
+ $can_expire = F,
+ $init = debug_init,
+ $done = debug_done,
+ $add_rule = debug_add_rule,
+ $remove_rule = debug_remove_rule,
+ $transaction_begin = debug_transaction_begin,
+ $transaction_end = debug_transaction_end
+ );
+
+function create_debug(do_something: bool) : PluginState
+ {
+ local p: PluginState = [$plugin=debug_plugin];
+
+ # FIXME: Why's the default not working?
+ p$config = table();
+ p$config["all"] = (do_something ? "1" : "0");
+
+ return p;
+ }
+
+
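
A small sketch of how plugin priorities interact with this plugin: the higher-priority instance is asked first, and since create_debug(F) rejects every rule, rules fall through to the lower-priority instance that accepts them:

    @load base/frameworks/netcontrol

    event NetControl::init()
        {
        NetControl::activate(NetControl::create_debug(F), 10);
        NetControl::activate(NetControl::create_debug(T), 0);
        }
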
diff --git a/scripts/base/frameworks/netcontrol/plugins/openflow.bro b/scripts/base/frameworks/netcontrol/plugins/openflow.bro
new file mode 100644
index 0000000000..44a8bb2f1a
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugins/openflow.bro
@@ -0,0 +1,432 @@
+##! OpenFlow plugin for the NetControl framework.
+
+@load ../main
+@load ../plugin
+@load base/frameworks/openflow
+
+module NetControl;
+
+export {
+ type OfConfig: record {
+ monitor: bool &default=T;
+ forward: bool &default=T;
+ idle_timeout: count &default=0;
+ table_id: count &optional;
+ priority_offset: int &default=+0; ##< Add this to all rule priorities. Can be useful if you want the OpenFlow priorities to be offset from the NetControl priorities without having to write a filter function.
+
+ ## Predicate that is called on rule insertion or removal.
+ ##
+ ## p: Current plugin state
+ ##
+ ## r: The rule to be inserted or removed
+ ##
+ ## Returns: T if the rule can be handled by the current backend, F otherwise
+ check_pred: function(p: PluginState, r: Rule): bool &optional;
+ match_pred: function(p: PluginState, e: Entity, m: vector of OpenFlow::ofp_match): vector of OpenFlow::ofp_match &optional;
+ flow_mod_pred: function(p: PluginState, r: Rule, m: OpenFlow::ofp_flow_mod): OpenFlow::ofp_flow_mod &optional;
+ };
+
+ redef record PluginState += {
+ ## OpenFlow controller for NetControl OpenFlow plugin
+ of_controller: OpenFlow::Controller &optional;
+ ## OpenFlow configuration record that is passed on initialization
+ of_config: OfConfig &optional;
+ };
+
+ type OfTable: record {
+ p: PluginState;
+ r: Rule;
+ c: count &default=0; # how many replies did we see so far? needed for ids where we have multiple rules...
+ packet_count: count &default=0;
+ byte_count: count &default=0;
+ duration_sec: double &default=0.0;
+ };
+
+ ## The time interval after which an OpenFlow message is considered to have timed
+ ## out and we delete it from our internal tracking.
+ const openflow_message_timeout = 20secs &redef;
+
+ ## The time interval after which we consider a flow timed out. This should be fairly
+ ## high (or even disabled) if you expect a lot of long-lived flows. However, state will
+ ## also build up for quite a while if this is kept around...
+ const openflow_flow_timeout = 24hrs &redef;
+
+ ## Instantiates an openflow plugin for the NetControl framework.
+ global create_openflow: function(controller: OpenFlow::Controller, config: OfConfig &default=[]) : PluginState;
+}
+
+global of_messages: table[count, OpenFlow::ofp_flow_mod_command] of OfTable &create_expire=openflow_message_timeout
+ &expire_func=function(t: table[count, OpenFlow::ofp_flow_mod_command] of OfTable, idx: any): interval
+ {
+ local rid: count;
+ local command: OpenFlow::ofp_flow_mod_command;
+ [rid, command] = idx;
+
+ local p = t[rid, command]$p;
+ local r = t[rid, command]$r;
+ event NetControl::rule_error(r, p, "Timeout during rule insertion/removal");
+ return 0secs;
+ };
+
+global of_flows: table[count] of OfTable &create_expire=openflow_flow_timeout;
+global of_instances: table[string] of PluginState;
+
+function openflow_name(p: PluginState) : string
+ {
+ return fmt("Openflow-%s", p$of_controller$describe(p$of_controller$state));
+ }
+
+function openflow_check_rule(p: PluginState, r: Rule) : bool
+ {
+ local c = p$of_config;
+
+ if ( p$of_config?$check_pred )
+ return p$of_config$check_pred(p, r);
+
+ if ( r$target == MONITOR && c$monitor )
+ return T;
+
+ if ( r$target == FORWARD && c$forward )
+ return T;
+
+ return F;
+ }
+
+function openflow_match_pred(p: PluginState, e: Entity, m: vector of OpenFlow::ofp_match) : vector of OpenFlow::ofp_match
+ {
+ if ( p$of_config?$match_pred )
+ return p$of_config$match_pred(p, e, m);
+
+ return m;
+ }
+
+function openflow_flow_mod_pred(p: PluginState, r: Rule, m: OpenFlow::ofp_flow_mod): OpenFlow::ofp_flow_mod
+ {
+ if ( p$of_config?$flow_mod_pred )
+ return p$of_config$flow_mod_pred(p, r, m);
+
+ return m;
+ }
+
+function determine_dl_type(s: subnet): count
+ {
+ local pdl = OpenFlow::ETH_IPv4;
+ if ( is_v6_subnet(s) )
+ pdl = OpenFlow::ETH_IPv6;
+
+ return pdl;
+ }
+
+function determine_proto(p: port): count
+ {
+ local proto = OpenFlow::IP_TCP;
+ if ( is_udp_port(p) )
+ proto = OpenFlow::IP_UDP;
+ else if ( is_icmp_port(p) )
+ proto = OpenFlow::IP_ICMP;
+
+ return proto;
+ }
+
+function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_match
+ {
+ local v : vector of OpenFlow::ofp_match = vector();
+
+ if ( e$ty == CONNECTION )
+ {
+ v[|v|] = OpenFlow::match_conn(e$conn); # forward and...
+ v[|v|] = OpenFlow::match_conn(e$conn, T); # reverse
+ return openflow_match_pred(p, e, v);
+ }
+
+ if ( e$ty == MAC )
+ {
+ v[|v|] = OpenFlow::ofp_match(
+ $dl_src=e$mac
+ );
+ v[|v|] = OpenFlow::ofp_match(
+ $dl_dst=e$mac
+ );
+
+ return openflow_match_pred(p, e, v);
+ }
+
+ local dl_type = OpenFlow::ETH_IPv4;
+
+ if ( e$ty == ADDRESS )
+ {
+ if ( is_v6_subnet(e$ip) )
+ dl_type = OpenFlow::ETH_IPv6;
+
+ v[|v|] = OpenFlow::ofp_match(
+ $dl_type=dl_type,
+ $nw_src=e$ip
+ );
+
+ v[|v|] = OpenFlow::ofp_match(
+ $dl_type=dl_type,
+ $nw_dst=e$ip
+ );
+
+ return openflow_match_pred(p, e, v);
+ }
+
+ local proto = OpenFlow::IP_TCP;
+
+ if ( e$ty == FLOW )
+ {
+ local m = OpenFlow::ofp_match();
+ local f = e$flow;
+
+ if ( f?$src_m )
+ m$dl_src=f$src_m;
+ if ( f?$dst_m )
+ m$dl_dst=f$dst_m;
+
+ if ( f?$src_h )
+ {
+ m$dl_type = determine_dl_type(f$src_h);
+ m$nw_src = f$src_h;
+ }
+
+ if ( f?$dst_h )
+ {
+ m$dl_type = determine_dl_type(f$dst_h);
+ m$nw_dst = f$dst_h;
+ }
+
+ if ( f?$src_p )
+ {
+ m$nw_proto = determine_proto(f$src_p);
+ m$tp_src = port_to_count(f$src_p);
+ }
+
+ if ( f?$dst_p )
+ {
+ m$nw_proto = determine_proto(f$dst_p);
+ m$tp_dst = port_to_count(f$dst_p);
+ }
+
+ v[|v|] = m;
+
+ return openflow_match_pred(p, e, v);
+ }
+
+ Reporter::error(fmt("Entity type %s not supported for openflow yet", cat(e$ty)));
+ return openflow_match_pred(p, e, v);
+ }
+
+function openflow_rule_to_flow_mod(p: PluginState, r: Rule) : OpenFlow::ofp_flow_mod
+ {
+ local c = p$of_config;
+
+ local flow_mod = OpenFlow::ofp_flow_mod(
+ $cookie=OpenFlow::generate_cookie(r$cid*2), # leave one space for the cases in which we need two rules.
+ $command=OpenFlow::OFPFC_ADD,
+ $idle_timeout=c$idle_timeout,
+ $priority=int_to_count(r$priority + c$priority_offset),
+ $flags=OpenFlow::OFPFF_SEND_FLOW_REM # please notify us when flows are removed
+ );
+
+ if ( r?$expire )
+ flow_mod$hard_timeout = double_to_count(interval_to_double(r$expire));
+ if ( c?$table_id )
+ flow_mod$table_id = c$table_id;
+
+ if ( r$ty == DROP )
+ {
+ # default, nothing to do. We simply do not add an output port to the rule...
+ }
+ else if ( r$ty == WHITELIST )
+ {
+ # At the moment, our interpretation of whitelist is to hand this off to the switch's L2/L3 routing.
+ flow_mod$actions$out_ports = vector(OpenFlow::OFPP_NORMAL);
+ }
+ else if ( r$ty == MODIFY )
+ {
+ # if no ports are given, just assume normal pipeline...
+ flow_mod$actions$out_ports = vector(OpenFlow::OFPP_NORMAL);
+
+ local mod = r$mod;
+ if ( mod?$redirect_port )
+ flow_mod$actions$out_ports = vector(mod$redirect_port);
+
+ if ( mod?$src_h )
+ flow_mod$actions$nw_src = mod$src_h;
+ if ( mod?$dst_h )
+ flow_mod$actions$nw_dst = mod$dst_h;
+ if ( mod?$src_m )
+ flow_mod$actions$dl_src = mod$src_m;
+ if ( mod?$dst_m )
+ flow_mod$actions$dl_dst = mod$dst_m;
+ if ( mod?$src_p )
+ flow_mod$actions$tp_src = mod$src_p;
+ if ( mod?$dst_p )
+ flow_mod$actions$tp_dst = mod$dst_p;
+ }
+ else if ( r$ty == REDIRECT )
+ {
+ # redirect to the port given in the rule
+ flow_mod$actions$out_ports = vector(r$out_port);
+ }
+ else
+ {
+ Reporter::error(fmt("Rule type %s not supported for openflow yet", cat(r$ty)));
+ }
+
+ return openflow_flow_mod_pred(p, r, flow_mod);
+ }
+
+function openflow_add_rule(p: PluginState, r: Rule) : bool
+ {
+ if ( ! openflow_check_rule(p, r) )
+ return F;
+
+ local flow_mod = openflow_rule_to_flow_mod(p, r);
+ local matches = entity_to_match(p, r$entity);
+
+ for ( i in matches )
+ {
+ if ( OpenFlow::flow_mod(p$of_controller, matches[i], flow_mod) )
+ {
+ of_messages[r$cid, flow_mod$command] = OfTable($p=p, $r=r);
+ flow_mod = copy(flow_mod);
+ ++flow_mod$cookie;
+ }
+ else
+ event rule_error(r, p, "Error while executing OpenFlow::flow_mod");
+ }
+
+ return T;
+ }
+
+function openflow_remove_rule(p: PluginState, r: Rule) : bool
+ {
+ if ( ! openflow_check_rule(p, r) )
+ return F;
+
+ local flow_mod: OpenFlow::ofp_flow_mod = [
+ $cookie=OpenFlow::generate_cookie(r$cid*2),
+ $command=OpenFlow::OFPFC_DELETE
+ ];
+
+ if ( OpenFlow::flow_mod(p$of_controller, [], flow_mod) )
+ of_messages[r$cid, flow_mod$command] = OfTable($p=p, $r=r);
+ else
+ {
+ event rule_error(r, p, "Error while executing OpenFlow::flow_mod");
+ return F;
+ }
+
+ # if this was an address or mac match, we also need to remove the reverse
+ if ( r$entity$ty == ADDRESS || r$entity$ty == MAC )
+ {
+ local flow_mod_2 = copy(flow_mod);
+ ++flow_mod_2$cookie;
+ OpenFlow::flow_mod(p$of_controller, [], flow_mod_2);
+ }
+
+ return T;
+ }
+
+event OpenFlow::flow_mod_success(name: string, match: OpenFlow::ofp_match, flow_mod: OpenFlow::ofp_flow_mod, msg: string) &priority=3
+ {
+ local id = OpenFlow::get_cookie_uid(flow_mod$cookie)/2;
+ if ( [id, flow_mod$command] !in of_messages )
+ return;
+
+ local r = of_messages[id,flow_mod$command]$r;
+ local p = of_messages[id,flow_mod$command]$p;
+ local c = of_messages[id,flow_mod$command]$c;
+
+ if ( r$entity$ty == ADDRESS || r$entity$ty == MAC )
+ {
+ ++of_messages[id,flow_mod$command]$c;
+ if ( of_messages[id,flow_mod$command]$c < 2 )
+ return; # will do stuff once the second part arrives...
+ }
+
+ delete of_messages[id,flow_mod$command];
+
+ if ( p$of_controller$supports_flow_removed )
+ of_flows[id] = OfTable($p=p, $r=r);
+
+ if ( flow_mod$command == OpenFlow::OFPFC_ADD )
+ event NetControl::rule_added(r, p, msg);
+ else if ( flow_mod$command == OpenFlow::OFPFC_DELETE || flow_mod$command == OpenFlow::OFPFC_DELETE_STRICT )
+ event NetControl::rule_removed(r, p, msg);
+ }
+
+event OpenFlow::flow_mod_failure(name: string, match: OpenFlow::ofp_match, flow_mod: OpenFlow::ofp_flow_mod, msg: string) &priority=3
+ {
+ local id = OpenFlow::get_cookie_uid(flow_mod$cookie)/2;
+ if ( [id, flow_mod$command] !in of_messages )
+ return;
+
+ local r = of_messages[id,flow_mod$command]$r;
+ local p = of_messages[id,flow_mod$command]$p;
+ delete of_messages[id,flow_mod$command];
+
+ event NetControl::rule_error(r, p, msg);
+ }
+
+event OpenFlow::flow_removed(name: string, match: OpenFlow::ofp_match, cookie: count, priority: count, reason: count, duration_sec: count, idle_timeout: count, packet_count: count, byte_count: count)
+ {
+ local id = OpenFlow::get_cookie_uid(cookie)/2;
+ if ( id !in of_flows )
+ return;
+
+ local rec = of_flows[id];
+ local r = rec$r;
+ local p = rec$p;
+
+ if ( r$entity$ty == ADDRESS || r$entity$ty == MAC )
+ {
+ ++of_flows[id]$c;
+ if ( of_flows[id]$c < 2 )
+			return; # wait until the second part arrives before proceeding...
+ else
+ event NetControl::rule_timeout(r, FlowInfo($duration=double_to_interval((rec$duration_sec+duration_sec)/2), $packet_count=packet_count+rec$packet_count, $byte_count=byte_count+rec$byte_count), p);
+
+ return;
+ }
+
+ event NetControl::rule_timeout(r, FlowInfo($duration=double_to_interval(duration_sec+0.0), $packet_count=packet_count, $byte_count=byte_count), p);
+ }
+
+function openflow_init(p: PluginState)
+ {
+ local name = p$of_controller$state$_name;
+ if ( name in of_instances )
+ Reporter::error(fmt("OpenFlow instance %s added to NetControl twice.", name));
+
+ of_instances[name] = p;
+
+	# Check if our OpenFlow controller is already active. If not, we have to wait for it to become active.
+ if ( p$of_controller$state$_activated )
+ plugin_activated(p);
+ }
+
+event OpenFlow::controller_activated(name: string, controller: OpenFlow::Controller)
+ {
+ if ( name in of_instances )
+ plugin_activated(of_instances[name]);
+ }
+
+global openflow_plugin = Plugin(
+ $name=openflow_name,
+ $can_expire = T,
+ $init = openflow_init,
+# $done = openflow_done,
+ $add_rule = openflow_add_rule,
+ $remove_rule = openflow_remove_rule
+# $transaction_begin = openflow_transaction_begin,
+# $transaction_end = openflow_transaction_end
+ );
+
+function create_openflow(controller: OpenFlow::Controller, config: OfConfig &default=[]) : PluginState
+ {
+ local p: PluginState = [$plugin=openflow_plugin, $of_controller=controller, $of_config=config];
+
+ return p;
+ }
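+
+# Usage sketch (illustrative only): wiring an OpenFlow controller into
+# NetControl via create_openflow(). The NetControl::init event and
+# NetControl::activate() are assumed from the framework's main script; the
+# log controller and dpid 42 are placeholders.
+#
+#	event NetControl::init()
+#		{
+#		local of_controller = OpenFlow::log_new(42);
+#		NetControl::activate(NetControl::create_openflow(of_controller), 0);
+#		}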
diff --git a/scripts/base/frameworks/netcontrol/plugins/packetfilter.bro b/scripts/base/frameworks/netcontrol/plugins/packetfilter.bro
new file mode 100644
index 0000000000..437c08eb73
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/plugins/packetfilter.bro
@@ -0,0 +1,113 @@
+##! NetControl plugin for the process-level PacketFilter that comes with
+##! Bro. Since the PacketFilter in Bro can only add/remove filters for
+##! addresses, this plugin is quite limited in scope at the moment.
+
+module NetControl;
+
+@load ../plugin
+
+export {
+ ## Instantiates the packetfilter plugin.
+ global create_packetfilter: function() : PluginState;
+}
+
+# Check if we can handle this rule. If it specifies ports or
+# anything Bro cannot handle, simply ignore it for now.
+function packetfilter_check_rule(r: Rule) : bool
+ {
+ if ( r$ty != DROP )
+ return F;
+
+ if ( r$target != MONITOR )
+ return F;
+
+ local e = r$entity;
+ if ( e$ty == ADDRESS )
+ return T;
+
+	if ( e$ty != FLOW ) # everything else requires port or MAC matching
+ return F;
+
+ if ( e$flow?$src_p || e$flow?$dst_p || e$flow?$src_m || e$flow?$dst_m )
+ return F;
+
+ return T;
+ }
+
+
+function packetfilter_add_rule(p: PluginState, r: Rule) : bool
+ {
+ if ( ! packetfilter_check_rule(r) )
+ return F;
+
+ local e = r$entity;
+ if ( e$ty == ADDRESS )
+ {
+ install_src_net_filter(e$ip, 0, 1.0);
+ install_dst_net_filter(e$ip, 0, 1.0);
+ return T;
+ }
+
+ if ( e$ty == FLOW )
+ {
+ local f = e$flow;
+ if ( f?$src_h )
+ install_src_net_filter(f$src_h, 0, 1.0);
+ if ( f?$dst_h )
+ install_dst_net_filter(f$dst_h, 0, 1.0);
+
+ return T;
+ }
+
+ return F;
+ }
+
+function packetfilter_remove_rule(p: PluginState, r: Rule) : bool
+ {
+ if ( ! packetfilter_check_rule(r) )
+ return F;
+
+ local e = r$entity;
+ if ( e$ty == ADDRESS )
+ {
+ uninstall_src_net_filter(e$ip);
+ uninstall_dst_net_filter(e$ip);
+ return T;
+ }
+
+ if ( e$ty == FLOW )
+ {
+ local f = e$flow;
+ if ( f?$src_h )
+ uninstall_src_net_filter(f$src_h);
+ if ( f?$dst_h )
+ uninstall_dst_net_filter(f$dst_h);
+
+ return T;
+ }
+
+ return F;
+ }
+
+function packetfilter_name(p: PluginState) : string
+ {
+ return "Packetfilter";
+ }
+
+global packetfilter_plugin = Plugin(
+ $name=packetfilter_name,
+ $can_expire = F,
+# $init = packetfilter_init,
+# $done = packetfilter_done,
+ $add_rule = packetfilter_add_rule,
+ $remove_rule = packetfilter_remove_rule
+ );
+
+function create_packetfilter() : PluginState
+ {
+ local p: PluginState = [$plugin=packetfilter_plugin];
+
+ return p;
+ }
+
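+# Usage sketch (illustrative only): activating the packetfilter backend and
+# dropping traffic from a single address. NetControl::activate(),
+# NetControl::init and NetControl::drop_address() are assumed from the
+# framework's main scripts; the interval and location string are placeholders.
+#
+#	event NetControl::init()
+#		{
+#		NetControl::activate(NetControl::create_packetfilter(), 0);
+#		}
+#
+#	event connection_established(c: connection)
+#		{
+#		NetControl::drop_address(c$id$orig_h, 1min, "example drop");
+#		}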
diff --git a/scripts/base/frameworks/netcontrol/shunt.bro b/scripts/base/frameworks/netcontrol/shunt.bro
new file mode 100644
index 0000000000..e1a5715582
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/shunt.bro
@@ -0,0 +1,69 @@
+##! Implementation of the shunt functionality for NetControl.
+
+module NetControl;
+
+@load ./main
+
+export {
+ redef enum Log::ID += { SHUNT };
+
+ ## Stops forwarding a uni-directional flow's packets to Bro.
+ ##
+ ## f: The flow to shunt.
+ ##
+	## t: How long to leave the shunt in place, with 0 meaning indefinitely.
+ ##
+ ## location: An optional string describing where the shunt was triggered.
+ ##
+	## Returns: The id of the inserted rule on success, or an empty string on failure.
+ global shunt_flow: function(f: flow_id, t: interval, location: string &default="") : string;
+
+ type ShuntInfo: record {
+ ## Time at which the recorded activity occurred.
+ ts: time &log;
+ ## ID of the rule; unique during each Bro run
+ rule_id: string &log;
+ ## Flow ID of the shunted flow
+ f: flow_id &log;
+ ## Expiry time of the shunt
+ expire: interval &log;
+ ## Location where the underlying action was triggered.
+ location: string &log &optional;
+ };
+
+ ## Event that can be handled to access the :bro:type:`NetControl::ShuntInfo`
+ ## record as it is sent on to the logging framework.
+ global log_netcontrol_shunt: event(rec: ShuntInfo);
+}
+
+event bro_init() &priority=5
+ {
+ Log::create_stream(NetControl::SHUNT, [$columns=ShuntInfo, $ev=log_netcontrol_shunt, $path="netcontrol_shunt"]);
+ }
+
+function shunt_flow(f: flow_id, t: interval, location: string &default="") : string
+ {
+ local flow = NetControl::Flow(
+ $src_h=addr_to_subnet(f$src_h),
+ $src_p=f$src_p,
+ $dst_h=addr_to_subnet(f$dst_h),
+ $dst_p=f$dst_p
+ );
+ local e: Entity = [$ty=FLOW, $flow=flow];
+ local r: Rule = [$ty=DROP, $target=MONITOR, $entity=e, $expire=t, $location=location];
+
+ local id = add_rule(r);
+
+ # Error should already be logged
+ if ( id == "" )
+ return id;
+
+ local log = ShuntInfo($ts=network_time(), $rule_id=id, $f=f, $expire=t);
+ if ( location != "" )
+ log$location=location;
+
+ Log::write(SHUNT, log);
+
+ return id;
+ }
+
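+# Usage sketch (illustrative only): shunting the originator-side direction of
+# an established connection. The flow_id fields are taken straight from the
+# connection's conn_id; interval and location are placeholders.
+#
+#	event connection_established(c: connection)
+#		{
+#		local f = flow_id($src_h=c$id$orig_h, $src_p=c$id$orig_p,
+#		                  $dst_h=c$id$resp_h, $dst_p=c$id$resp_p);
+#		NetControl::shunt_flow(f, 30min, "example shunt");
+#		}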
diff --git a/scripts/base/frameworks/netcontrol/types.bro b/scripts/base/frameworks/netcontrol/types.bro
new file mode 100644
index 0000000000..440d63d8bc
--- /dev/null
+++ b/scripts/base/frameworks/netcontrol/types.bro
@@ -0,0 +1,109 @@
+##! Types used by the NetControl framework.
+
+module NetControl;
+
+export {
+ const default_priority: int = +0 &redef;
+ const whitelist_priority: int = +5 &redef;
+
+ ## Type of a :bro:id:`Entity` for defining an action.
+ type EntityType: enum {
+ ADDRESS, ##< Activity involving a specific IP address.
+ CONNECTION, ##< All of a bi-directional connection's activity.
+ FLOW, ##< All of a uni-directional flow's activity. Can contain wildcards.
+ MAC, ##< Activity involving a MAC address.
+ };
+
+ ## Type of a :bro:id:`Flow` for defining a flow.
+ type Flow: record {
+ src_h: subnet &optional; ##< The source IP address/subnet.
+ src_p: port &optional; ##< The source port number.
+ dst_h: subnet &optional; ##< The destination IP address/subnet.
+		dst_p: port &optional;	##< The destination port number.
+ src_m: string &optional; ##< The source MAC address.
+ dst_m: string &optional; ##< The destination MAC address.
+ };
+
+	## Type defining the entity an :bro:id:`Rule` is operating on.
+ type Entity: record {
+ ty: EntityType; ##< Type of entity.
+ conn: conn_id &optional; ##< Used with :bro:id:`CONNECTION` .
+ flow: Flow &optional; ##< Used with :bro:id:`FLOW` .
+		ip: subnet &optional;	##< Used with :bro:id:`ADDRESS`; can specify a CIDR subnet.
+ mac: string &optional; ##< Used with :bro:id:`MAC`.
+ };
+
+ ## Target of :bro:id:`Rule` action.
+ type TargetType: enum {
+		FORWARD,	##< Apply rule actively to traffic on the forwarding path.
+		MONITOR,	##< Apply rule passively to traffic sent to Bro for monitoring.
+ };
+
+ ## Type of rules that the framework supports. Each type lists the
+ ## :bro:id:`Rule` argument(s) it uses, if any.
+ ##
+ ## Plugins may extend this type to define their own.
+ type RuleType: enum {
+ ## Stop forwarding all packets matching entity.
+ ##
+ ## No arguments.
+ DROP,
+
+ ## Begin modifying all packets matching entity.
+ ##
+ ## .. todo::
+ ## Define arguments.
+ MODIFY,
+
+ ## Begin redirecting all packets matching entity.
+ ##
+ ## .. todo::
+		##	out_port: output port to redirect traffic to.
+ REDIRECT,
+
+		## Whitelists all packets of an entity, meaning no restrictions will be applied.
+		## While whitelisting is the default if no rule matches, this type can be
+		## used to override lower-priority rules that would otherwise take effect for the
+		## entity.
+ WHITELIST,
+ };
+
+ ## Type of a :bro:id:`FlowMod` for defining a flow modification action.
+ type FlowMod: record {
+ src_h: addr &optional; ##< The source IP address.
+ src_p: count &optional; ##< The source port number.
+ dst_h: addr &optional; ##< The destination IP address.
+		dst_p: count &optional;	##< The destination port number.
+ src_m: string &optional; ##< The source MAC address.
+ dst_m: string &optional; ##< The destination MAC address.
+ redirect_port: count &optional;
+ };
+
+ ## A rule for the framework to put in place. Of all rules currently in
+ ## place, the first match will be taken, sorted by priority. All
+ ## further rules will be ignored.
+ type Rule: record {
+ ty: RuleType; ##< Type of rule.
+ target: TargetType; ##< Where to apply rule.
+ entity: Entity; ##< Entity to apply rule to.
+ expire: interval &optional; ##< Timeout after which to expire the rule.
+ priority: int &default=default_priority; ##< Priority if multiple rules match an entity (larger value is higher priority).
+ location: string &optional; ##< Optional string describing where/what installed the rule.
+
+		out_port: count &optional;	##< Argument for :bro:id:`REDIRECT` rules.
+ mod: FlowMod &optional; ##< Argument for :bro:id:`MODIFY` rules.
+
+ id: string &default=""; ##< Internally determined unique ID for this rule. Will be set when added.
+ cid: count &default=0; ##< Internally determined unique numeric ID for this rule. Set when added.
+ };
+
+ ## Information of a flow that can be provided by switches when the flow times out.
+ ## Currently this is heavily influenced by the data that OpenFlow returns by default.
+ ## That being said - their design makes sense and this is probably the data one
+ ## can expect to be available.
+ type FlowInfo: record {
+ duration: interval &optional; ##< total duration of the rule
+ packet_count: count &optional; ##< number of packets exchanged over connections matched by the rule
+ byte_count: count &optional; ##< total bytes exchanged over connections matched by the rule
+ };
+}
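+
+# Usage sketch (illustrative only) of how these types compose into a rule: a
+# drop rule for a single address on the forwarding path, expiring after one
+# hour. NetControl::add_rule() is assumed from the framework's main script;
+# the address is a placeholder.
+#
+#	local e = NetControl::Entity($ty=NetControl::ADDRESS, $ip=addr_to_subnet(192.0.2.1));
+#	local r = NetControl::Rule($ty=NetControl::DROP, $target=NetControl::FORWARD,
+#	                           $entity=e, $expire=1hr);
+#	local id = NetControl::add_rule(r);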
diff --git a/scripts/base/frameworks/openflow/__load__.bro b/scripts/base/frameworks/openflow/__load__.bro
new file mode 100644
index 0000000000..bd9128b5aa
--- /dev/null
+++ b/scripts/base/frameworks/openflow/__load__.bro
@@ -0,0 +1,13 @@
+@load ./consts
+@load ./types
+@load ./main
+@load ./plugins
+
+# The cluster framework must be loaded first.
+@load base/frameworks/cluster
+
+@if ( Cluster::is_enabled() )
+@load ./cluster
+@else
+@load ./non-cluster
+@endif
diff --git a/scripts/base/frameworks/openflow/cluster.bro b/scripts/base/frameworks/openflow/cluster.bro
new file mode 100644
index 0000000000..28de1db3c3
--- /dev/null
+++ b/scripts/base/frameworks/openflow/cluster.bro
@@ -0,0 +1,120 @@
+##! Cluster support for the OpenFlow framework.
+
+@load ./main
+@load base/frameworks/cluster
+
+module OpenFlow;
+
+export {
+ ## This is the event used to transport flow_mod messages to the manager.
+ global cluster_flow_mod: event(name: string, match: ofp_match, flow_mod: ofp_flow_mod);
+
+ ## This is the event used to transport flow_clear messages to the manager.
+ global cluster_flow_clear: event(name: string);
+}
+
+## Workers need the ability to forward commands to the manager.
+redef Cluster::worker2manager_events += /OpenFlow::cluster_flow_(mod|clear)/;
+
+# the flow_mod function wrapper
+function flow_mod(controller: Controller, match: ofp_match, flow_mod: ofp_flow_mod): bool
+ {
+ if ( ! controller?$flow_mod )
+ return F;
+
+ if ( Cluster::local_node_type() == Cluster::MANAGER )
+ return controller$flow_mod(controller$state, match, flow_mod);
+ else
+ event OpenFlow::cluster_flow_mod(controller$state$_name, match, flow_mod);
+
+ return T;
+ }
+
+function flow_clear(controller: Controller): bool
+ {
+ if ( ! controller?$flow_clear )
+ return F;
+
+ if ( Cluster::local_node_type() == Cluster::MANAGER )
+ return controller$flow_clear(controller$state);
+ else
+ event OpenFlow::cluster_flow_clear(controller$state$_name);
+
+ return T;
+ }
+
+@if ( Cluster::local_node_type() == Cluster::MANAGER )
+event OpenFlow::cluster_flow_mod(name: string, match: ofp_match, flow_mod: ofp_flow_mod)
+ {
+ if ( name !in name_to_controller )
+ {
+ Reporter::error(fmt("OpenFlow controller %s not found in mapping on master", name));
+ return;
+ }
+
+ local c = name_to_controller[name];
+
+ if ( ! c$state$_activated )
+ return;
+
+ if ( c?$flow_mod )
+ c$flow_mod(c$state, match, flow_mod);
+ }
+
+event OpenFlow::cluster_flow_clear(name: string)
+ {
+ if ( name !in name_to_controller )
+ {
+ Reporter::error(fmt("OpenFlow controller %s not found in mapping on master", name));
+ return;
+ }
+
+ local c = name_to_controller[name];
+
+ if ( ! c$state$_activated )
+ return;
+
+ if ( c?$flow_clear )
+ c$flow_clear(c$state);
+ }
+@endif
+
+function register_controller(tpe: OpenFlow::Plugin, name: string, controller: Controller)
+ {
+ controller$state$_name = cat(tpe, name);
+ controller$state$_plugin = tpe;
+
+ # we only run the init functions on the manager.
+ if ( Cluster::local_node_type() != Cluster::MANAGER )
+ return;
+
+ register_controller_impl(tpe, name, controller);
+ }
+
+function unregister_controller(controller: Controller)
+ {
+	# We only run this on the manager.
+ if ( Cluster::local_node_type() != Cluster::MANAGER )
+ return;
+
+ unregister_controller_impl(controller);
+ }
+
+function lookup_controller(name: string): vector of Controller
+ {
+	# We only run this on the manager. Otherwise we don't have a mapping or state -> return empty.
+ if ( Cluster::local_node_type() != Cluster::MANAGER )
+ return vector();
+
+	# I am not quite sure if we can actually get away with this - in the
+	# current state, this means that the individual nodes cannot look up
+	# a controller by name.
+	#
+	# This means that there can be no reactions to things on the actual
+	# worker nodes - because they cannot look up a name. On the other hand,
+	# currently we do not even send the events to the worker nodes (at least
+	# not if we are using broker). Because of that I do not feel too bad
+	# about it...
+
+ return lookup_controller_impl(name);
+ }
diff --git a/scripts/base/frameworks/openflow/consts.bro b/scripts/base/frameworks/openflow/consts.bro
new file mode 100644
index 0000000000..ca956702a7
--- /dev/null
+++ b/scripts/base/frameworks/openflow/consts.bro
@@ -0,0 +1,229 @@
+##! Constants used by the OpenFlow framework.
+
+# All types/constants not specific to OpenFlow will be defined here
+# until they somehow get into Bro.
+
+module OpenFlow;
+
+# Some cookie specific constants.
+# first 24 bits
+const COOKIE_BID_SIZE = 16777216;
+# start at bit 40 (1 << 40)
+const COOKIE_BID_START = 1099511627776;
+# bro specific cookie IDs shall have bit 42 set (1 << 42)
+const BRO_COOKIE_ID = 4;
+# 8 bits group identifier
+const COOKIE_GID_SIZE = 256;
+# start at bit 32 (1 << 32)
+const COOKIE_GID_START = 4294967296;
+# 32 bits unique identifier
+const COOKIE_UID_SIZE = 4294967296;
+# start at bit 0 (1 << 0)
+const COOKIE_UID_START = 0;
+
+export {
+ # All ethertypes can be found at
+ # http://standards.ieee.org/develop/regauth/ethertype/eth.txt
+ # but are not interesting for us at this point
+#type ethertype: enum {
+ # Internet protocol version 4
+ const ETH_IPv4 = 0x0800;
+ # Address resolution protocol
+ const ETH_ARP = 0x0806;
+ # Wake on LAN
+ const ETH_WOL = 0x0842;
+ # Reverse address resolution protocol
+ const ETH_RARP = 0x8035;
+ # Appletalk
+ const ETH_APPLETALK = 0x809B;
+ # Appletalk address resolution protocol
+ const ETH_APPLETALK_ARP = 0x80F3;
+ # IEEE 802.1q & IEEE 802.1aq
+ const ETH_VLAN = 0x8100;
+ # Novell IPX old
+ const ETH_IPX_OLD = 0x8137;
+ # Novell IPX
+ const ETH_IPX = 0x8138;
+ # Internet protocol version 6
+ const ETH_IPv6 = 0x86DD;
+ # IEEE 802.3x
+ const ETH_ETHER_FLOW_CONTROL = 0x8808;
+ # Multiprotocol Label Switching unicast
+ const ETH_MPLS_UNICAST = 0x8847;
+ # Multiprotocol Label Switching multicast
+ const ETH_MPLS_MULTICAST = 0x8848;
+ # Point-to-point protocol over Ethernet discovery phase (rfc2516)
+ const ETH_PPPOE_DISCOVERY = 0x8863;
+ # Point-to-point protocol over Ethernet session phase (rfc2516)
+ const ETH_PPPOE_SESSION = 0x8864;
+ # Jumbo frames
+ const ETH_JUMBO_FRAMES = 0x8870;
+ # IEEE 802.1X
+ const ETH_EAP_OVER_LAN = 0x888E;
+ # IEEE 802.1ad & IEEE 802.1aq
+ const ETH_PROVIDER_BRIDING = 0x88A8;
+ # IEEE 802.1ae
+ const ETH_MAC_SECURITY = 0x88E5;
+ # IEEE 802.1ad (QinQ)
+ const ETH_QINQ = 0x9100;
+#};
+
+ # A list of ip protocol numbers can be found at
+ # http://en.wikipedia.org/wiki/List_of_IP_protocol_numbers
+#type iptype: enum {
+ # IPv6 Hop-by-Hop Option (RFC2460)
+ const IP_HOPOPT = 0x00;
+ # Internet Control Message Protocol (RFC792)
+ const IP_ICMP = 0x01;
+ # Internet Group Management Protocol (RFC1112)
+ const IP_IGMP = 0x02;
+ # Gateway-to-Gateway Protocol (RFC823)
+ const IP_GGP = 0x03;
+ # IP-Within-IP (encapsulation) (RFC2003)
+ const IP_IPIP = 0x04;
+ # Internet Stream Protocol (RFC1190;RFC1819)
+ const IP_ST = 0x05;
+	# Transmission Control Protocol (RFC793)
+ const IP_TCP = 0x06;
+ # Core-based trees (RFC2189)
+ const IP_CBT = 0x07;
+ # Exterior Gateway Protocol (RFC888)
+ const IP_EGP = 0x08;
+ # Interior Gateway Protocol (any private interior
+ # gateway (used by Cisco for their IGRP))
+ const IP_IGP = 0x09;
+ # User Datagram Protocol (RFC768)
+ const IP_UDP = 0x11;
+ # Reliable Datagram Protocol (RFC908)
+ const IP_RDP = 0x1B;
+ # IPv6 Encapsulation (RFC2473)
+ const IP_IPv6 = 0x29;
+ # Resource Reservation Protocol (RFC2205)
+ const IP_RSVP = 0x2E;
+ # Generic Routing Encapsulation (RFC2784;RFC2890)
+ const IP_GRE = 0x2F;
+ # Open Shortest Path First (RFC1583)
+ const IP_OSPF = 0x59;
+ # Multicast Transport Protocol
+ const IP_MTP = 0x5C;
+ # IP-within-IP Encapsulation Protocol (RFC2003)
+ ### error 0x5E;
+ # Ethernet-within-IP Encapsulation Protocol (RFC3378)
+ const IP_ETHERIP = 0x61;
+ # Layer Two Tunneling Protocol Version 3 (RFC3931)
+ const IP_L2TP = 0x73;
+ # Intermediate System to Intermediate System (IS-IS) Protocol over IPv4 (RFC1142;RFC1195)
+ const IP_ISIS = 0x7C;
+ # Fibre Channel
+ const IP_FC = 0x85;
+ # Multiprotocol Label Switching Encapsulated in IP (RFC4023)
+ const IP_MPLS = 0x89;
+#};
+
+ ## Return value for a cookie from a flow
+ ## which is not added, modified or deleted
+ ## from the bro openflow framework
+ const INVALID_COOKIE = 0xffffffffffffffff;
+	# Openflow physical port definitions
+	## Send the packet out the input port. This
+	## virtual port must be explicitly used in
+ ## order to send back out of the input port.
+ const OFPP_IN_PORT = 0xfffffff8;
+ ## Perform actions in flow table.
+ ## NB: This can only be the destination port
+ ## for packet-out messages.
+ const OFPP_TABLE = 0xfffffff9;
+ ## Process with normal L2/L3 switching.
+ const OFPP_NORMAL = 0xfffffffa;
+	## All physical ports except input port and
+ ## those disabled by STP.
+ const OFPP_FLOOD = 0xfffffffb;
+	## All physical ports except input port.
+ const OFPP_ALL = 0xfffffffc;
+ ## Send to controller.
+ const OFPP_CONTROLLER = 0xfffffffd;
+ ## Local openflow "port".
+ const OFPP_LOCAL = 0xfffffffe;
+ ## Wildcard port used only for flow mod (delete) and flow stats requests.
+ const OFPP_ANY = 0xffffffff;
+ # Openflow no buffer constant.
+ const OFP_NO_BUFFER = 0xffffffff;
+ ## Send flow removed message when flow
+ ## expires or is deleted.
+ const OFPFF_SEND_FLOW_REM = 0x1;
+ ## Check for overlapping entries first.
+ const OFPFF_CHECK_OVERLAP = 0x2;
+	## Remark: this is for emergency use.
+ ## Flows added with this are only used
+ ## when the controller is disconnected.
+ const OFPFF_EMERG = 0x4;
+
+ # Wildcard table used for table config,
+ # flow stats and flow deletes.
+ const OFPTT_ALL = 0xff;
+
+ ## Openflow action_type definitions
+ ##
+ ## The openflow action type defines
+ ## what actions openflow can take
+ ## to modify a packet
+ type ofp_action_type: enum {
+ ## Output to switch port.
+ OFPAT_OUTPUT = 0x0000,
+ ## Set the 802.1q VLAN id.
+ OFPAT_SET_VLAN_VID = 0x0001,
+ ## Set the 802.1q priority.
+ OFPAT_SET_VLAN_PCP = 0x0002,
+ ## Strip the 802.1q header.
+ OFPAT_STRIP_VLAN = 0x0003,
+ ## Ethernet source address.
+ OFPAT_SET_DL_SRC = 0x0004,
+ ## Ethernet destination address.
+ OFPAT_SET_DL_DST = 0x0005,
+ ## IP source address
+ OFPAT_SET_NW_SRC = 0x0006,
+ ## IP destination address.
+ OFPAT_SET_NW_DST = 0x0007,
+ ## IP ToS (DSCP field, 6 bits).
+ OFPAT_SET_NW_TOS = 0x0008,
+ ## TCP/UDP source port.
+ OFPAT_SET_TP_SRC = 0x0009,
+ ## TCP/UDP destination port.
+ OFPAT_SET_TP_DST = 0x000a,
+ ## Output to queue.
+ OFPAT_ENQUEUE = 0x000b,
+ ## Vendor specific
+ OFPAT_VENDOR = 0xffff,
+ };
+
+ ## Openflow flow_mod_command definitions
+ ##
+	## The openflow flow_mod_command describes
+	## what kind of action a flow_mod message performs.
+ type ofp_flow_mod_command: enum {
+ ## New flow.
+ OFPFC_ADD = 0x0,
+ ## Modify all matching flows.
+ OFPFC_MODIFY = 0x1,
+ ## Modify entry strictly matching wildcards.
+ OFPFC_MODIFY_STRICT = 0x2,
+ ## Delete all matching flows.
+ OFPFC_DELETE = 0x3,
+ ## Strictly matching wildcards and priority.
+ OFPFC_DELETE_STRICT = 0x4,
+ };
+
+ ## Openflow config flag definitions
+ ##
+ ## TODO: describe
+ type ofp_config_flags: enum {
+ ## No special handling for fragments.
+ OFPC_FRAG_NORMAL = 0,
+ ## Drop fragments.
+ OFPC_FRAG_DROP = 1,
+ ## Reassemble (only if OFPC_IP_REASM set).
+ OFPC_FRAG_REASM = 2,
+ OFPC_FRAG_MASK = 3,
+ };
+
+}
diff --git a/scripts/base/frameworks/openflow/main.bro b/scripts/base/frameworks/openflow/main.bro
new file mode 100644
index 0000000000..889929c641
--- /dev/null
+++ b/scripts/base/frameworks/openflow/main.bro
@@ -0,0 +1,289 @@
+##! Bro's OpenFlow control framework
+##!
+##! This plugin-based framework allows Bro to control OpenFlow capable
+##! switches by implementing communication to an OpenFlow controller
+##! via plugins. The framework has to be instantiated via the ``*_new``
+##! function of one of the plugins. This framework only offers very low-level
+##! functionality; if you want to use OpenFlow capable switches, e.g.,
+##! for shunting, please look at the NetControl framework, which provides
+##! higher-level functions and can use the OpenFlow framework as a backend.
+
+module OpenFlow;
+
+@load ./consts
+@load ./types
+
+export {
+ ## Global flow_mod function.
+ ##
+ ## controller: The controller which should execute the flow modification
+ ##
+ ## match: The ofp_match record which describes the flow to match.
+ ##
+ ## flow_mod: The openflow flow_mod record which describes the action to take.
+ ##
+ ## Returns: F on error or if the plugin does not support the operation, T when the operation was queued.
+ global flow_mod: function(controller: Controller, match: ofp_match, flow_mod: ofp_flow_mod): bool;
+
+ ## Clear the current flow table of the controller.
+ ##
+ ## controller: The controller which should execute the flow modification
+ ##
+ ## Returns: F on error or if the plugin does not support the operation, T when the operation was queued.
+ global flow_clear: function(controller: Controller): bool;
+
+ ## Event confirming successful modification of a flow rule.
+ ##
+ ## name: The unique name of the OpenFlow controller from which this event originated.
+ ##
+ ## match: The ofp_match record which describes the flow to match.
+ ##
+ ## flow_mod: The openflow flow_mod record which describes the action to take.
+ ##
+ ## msg: An optional informational message by the plugin.
+ global flow_mod_success: event(name: string, match: ofp_match, flow_mod: ofp_flow_mod, msg: string &default="");
+
+ ## Reports an error while installing a flow Rule.
+ ##
+ ## name: The unique name of the OpenFlow controller from which this event originated.
+ ##
+ ## match: The ofp_match record which describes the flow to match.
+ ##
+ ## flow_mod: The openflow flow_mod record which describes the action to take.
+ ##
+ ## msg: Message to describe the event.
+ global flow_mod_failure: event(name: string, match: ofp_match, flow_mod: ofp_flow_mod, msg: string &default="");
+
+ ## Reports that a flow was removed by the switch because of either the hard or the idle timeout.
+ ## This message is only generated by controllers that indicate that they support flow removal
+ ## in supports_flow_removed.
+ ##
+ ## name: The unique name of the OpenFlow controller from which this event originated.
+ ##
+ ## match: The ofp_match record which was used to create the flow.
+ ##
+ ## cookie: The cookie that was specified when creating the flow.
+ ##
+ ## priority: The priority that was specified when creating the flow.
+ ##
+ ## reason: The reason for flow removal (OFPRR_*)
+ ##
+ ## duration_sec: duration of the flow in seconds
+	##
+	## idle_timeout: the idle timeout that was set when the flow was created
+	##
+ ## packet_count: packet count of the flow
+ ##
+ ## byte_count: byte count of the flow
+ global flow_removed: event(name: string, match: ofp_match, cookie: count, priority: count, reason: count, duration_sec: count, idle_timeout: count, packet_count: count, byte_count: count);
+
+ ## Convert a conn_id record into an ofp_match record that can be used to
+ ## create match objects for OpenFlow.
+ ##
+ ## id: the conn_id record that describes the record.
+ ##
+ ## reverse: reverse the sources and destinations when creating the match record (default F)
+ ##
+ ## Returns: ofp_match object for the conn_id record.
+ global match_conn: function(id: conn_id, reverse: bool &default=F): ofp_match;
+
+ # ###
+ # ### Low-level functions for cookie handling and plugin registration.
+ # ###
+
+ ## Function to get the unique id out of a given cookie.
+ ##
+ ## cookie: The openflow match cookie.
+ ##
+ ## Returns: The cookie unique id.
+ global get_cookie_uid: function(cookie: count): count;
+
+ ## Function to get the group id out of a given cookie.
+ ##
+ ## cookie: The openflow match cookie.
+ ##
+ ## Returns: The cookie group id.
+ global get_cookie_gid: function(cookie: count): count;
+
+ ## Function to generate a new cookie using our group id.
+ ##
+ ## cookie: The openflow match cookie.
+ ##
+	## Returns: The new cookie value with the Bro cookie bits set.
+ global generate_cookie: function(cookie: count &default=0): count;
+
+ ## Function to register a controller instance. This function
+ ## is called automatically by the plugin _new functions.
+ ##
+ ## tpe: type of this plugin
+ ##
+ ## name: unique name of this controller instance.
+ ##
+ ## controller: The controller to register
+ global register_controller: function(tpe: OpenFlow::Plugin, name: string, controller: Controller);
+
+ ## Function to unregister a controller instance. This function
+ ## should be called when a specific controller should no longer
+ ## be used.
+ ##
+ ## controller: The controller to unregister
+ global unregister_controller: function(controller: Controller);
+
+ ## Function to signal that a controller finished activation and is
+ ## ready to use. Will throw the ``OpenFlow::controller_activated``
+ ## event.
+ global controller_init_done: function(controller: Controller);
+
+ ## Event that is raised once a controller finishes initialization
+	## and is completely activated.
+	##
+ ## name: unique name of this controller instance.
+ ##
+ ## controller: The controller that finished activation.
+ global OpenFlow::controller_activated: event(name: string, controller: Controller);
+
+ ## Function to lookup a controller instance by name
+ ##
+ ## name: unique name of the controller to look up
+ ##
+	## Returns: A one-element vector containing the controller, if found. An empty vector otherwise.
+ global lookup_controller: function(name: string): vector of Controller;
+}
+
+global name_to_controller: table[string] of Controller;
+
+
+function match_conn(id: conn_id, reverse: bool &default=F): ofp_match
+ {
+ local dl_type = ETH_IPv4;
+ local proto = IP_TCP;
+
+ local orig_h: addr;
+ local orig_p: port;
+ local resp_h: addr;
+ local resp_p: port;
+
+ if ( reverse == F )
+ {
+ orig_h = id$orig_h;
+ orig_p = id$orig_p;
+ resp_h = id$resp_h;
+ resp_p = id$resp_p;
+ }
+ else
+ {
+ orig_h = id$resp_h;
+ orig_p = id$resp_p;
+ resp_h = id$orig_h;
+ resp_p = id$orig_p;
+ }
+
+ if ( is_v6_addr(orig_h) )
+ dl_type = ETH_IPv6;
+
+ if ( is_udp_port(orig_p) )
+ proto = IP_UDP;
+ else if ( is_icmp_port(orig_p) )
+ proto = IP_ICMP;
+
+ return ofp_match(
+ $dl_type=dl_type,
+ $nw_proto=proto,
+ $nw_src=addr_to_subnet(orig_h),
+ $tp_src=port_to_count(orig_p),
+ $nw_dst=addr_to_subnet(resp_h),
+ $tp_dst=port_to_count(resp_p)
+ );
+ }
+
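+# Usage sketch (illustrative only) of using match_conn() together with
+# flow_mod(). `controller` is assumed to be a previously constructed
+# OpenFlow::Controller; a flow without output ports is assumed to drop
+# matching packets. Cookie id and idle timeout are placeholders.
+#
+#	event connection_established(c: connection)
+#		{
+#		local m = OpenFlow::match_conn(c$id);
+#		local fm = OpenFlow::ofp_flow_mod($cookie=OpenFlow::generate_cookie(1),
+#		                                  $command=OpenFlow::OFPFC_ADD,
+#		                                  $idle_timeout=30);
+#		OpenFlow::flow_mod(controller, m, fm);
+#		}
+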
+# local function to forge a flow_mod cookie for this framework.
+# all flow entries from the openflow framework should have the
+# 42 bit of the cookie set.
+function generate_cookie(cookie: count &default=0): count
+ {
+ local c = BRO_COOKIE_ID * COOKIE_BID_START;
+
+ if ( cookie >= COOKIE_UID_SIZE )
+ Reporter::warning(fmt("The given cookie uid '%d' is > 32bit and will be discarded", cookie));
+ else
+ c += cookie;
+
+ return c;
+ }
+
+# local function to check if a given flow_mod cookie is forged from this framework.
+function is_valid_cookie(cookie: count): bool
+ {
+ if ( cookie / COOKIE_BID_START == BRO_COOKIE_ID )
+ return T;
+
+ Reporter::warning(fmt("The given Openflow cookie '%d' is not valid", cookie));
+
+ return F;
+ }
+
+function get_cookie_uid(cookie: count): count
+ {
+ if( is_valid_cookie(cookie) )
+ return (cookie - ((cookie / COOKIE_GID_START) * COOKIE_GID_START));
+
+ return INVALID_COOKIE;
+ }
+
+function get_cookie_gid(cookie: count): count
+ {
+ if( is_valid_cookie(cookie) )
+ return (
+ (cookie - (COOKIE_BID_START * BRO_COOKIE_ID) -
+ (cookie - ((cookie / COOKIE_GID_START) * COOKIE_GID_START))) /
+ COOKIE_GID_START
+ );
+
+ return INVALID_COOKIE;
+ }
+
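+# Worked example (comment only) of the cookie layout implemented above, with
+# BRO_COOKIE_ID = 4 and COOKIE_BID_START = 2^40:
+#
+#	generate_cookie(5)            -> 4 * 2^40 + 5 = 4398046511109
+#	get_cookie_uid(4398046511109) -> 5  (lower 32 bits)
+#	get_cookie_gid(4398046511109) -> 0  (bits 32-39, unused here)
+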
+function controller_init_done(controller: Controller)
+ {
+ if ( controller$state$_name !in name_to_controller )
+ {
+ Reporter::error(fmt("Openflow initialized unknown plugin %s successfully?", controller$state$_name));
+ return;
+ }
+
+ controller$state$_activated = T;
+ event OpenFlow::controller_activated(controller$state$_name, controller);
+ }
+
+# Functions that are called from cluster.bro and non-cluster.bro
+
+function register_controller_impl(tpe: OpenFlow::Plugin, name: string, controller: Controller)
+ {
+ if ( controller$state$_name in name_to_controller )
+ {
+ Reporter::error(fmt("OpenFlow Controller %s was already registered. Ignored duplicate registration", controller$state$_name));
+ return;
+ }
+
+ name_to_controller[controller$state$_name] = controller;
+
+ if ( controller?$init )
+ controller$init(controller$state);
+ else
+ controller_init_done(controller);
+ }
+
+function unregister_controller_impl(controller: Controller)
+ {
+ if ( controller$state$_name in name_to_controller )
+ delete name_to_controller[controller$state$_name];
+ else
+		Reporter::error(fmt("OpenFlow Controller %s was not registered in unregister.", controller$state$_name));
+
+ if ( controller?$destroy )
+ controller$destroy(controller$state);
+ }
+
+function lookup_controller_impl(name: string): vector of Controller
+ {
+ if ( name in name_to_controller )
+ return vector(name_to_controller[name]);
+ else
+ return vector();
+ }
diff --git a/scripts/base/frameworks/openflow/non-cluster.bro b/scripts/base/frameworks/openflow/non-cluster.bro
new file mode 100644
index 0000000000..22b5980924
--- /dev/null
+++ b/scripts/base/frameworks/openflow/non-cluster.bro
@@ -0,0 +1,44 @@
+@load ./main
+
+module OpenFlow;
+
+# the flow_mod function wrapper
+function flow_mod(controller: Controller, match: ofp_match, flow_mod: ofp_flow_mod): bool
+ {
+ if ( ! controller$state$_activated )
+ return F;
+
+ if ( controller?$flow_mod )
+ return controller$flow_mod(controller$state, match, flow_mod);
+ else
+ return F;
+ }
+
+function flow_clear(controller: Controller): bool
+ {
+ if ( ! controller$state$_activated )
+ return F;
+
+ if ( controller?$flow_clear )
+ return controller$flow_clear(controller$state);
+ else
+ return F;
+ }
+
+function register_controller(tpe: OpenFlow::Plugin, name: string, controller: Controller)
+ {
+ controller$state$_name = cat(tpe, name);
+ controller$state$_plugin = tpe;
+
+ register_controller_impl(tpe, name, controller);
+ }
+
+function unregister_controller(controller: Controller)
+ {
+ unregister_controller_impl(controller);
+ }
+
+function lookup_controller(name: string): vector of Controller
+ {
+ return lookup_controller_impl(name);
+ }
diff --git a/scripts/base/frameworks/openflow/plugins/__load__.bro b/scripts/base/frameworks/openflow/plugins/__load__.bro
new file mode 100644
index 0000000000..e387132034
--- /dev/null
+++ b/scripts/base/frameworks/openflow/plugins/__load__.bro
@@ -0,0 +1,3 @@
+@load ./ryu
+@load ./log
+@load ./broker
diff --git a/scripts/base/frameworks/openflow/plugins/broker.bro b/scripts/base/frameworks/openflow/plugins/broker.bro
new file mode 100644
index 0000000000..93a627a8f4
--- /dev/null
+++ b/scripts/base/frameworks/openflow/plugins/broker.bro
@@ -0,0 +1,95 @@
+##! OpenFlow plugin for interfacing to controllers via Broker.
+
+@load base/frameworks/openflow
+@load base/frameworks/broker
+
+module OpenFlow;
+
+export {
+ redef enum Plugin += {
+ BROKER,
+ };
+
+ ## Broker controller constructor.
+ ##
+ ## host: Controller ip.
+ ##
+ ## host_port: Controller listen port.
+ ##
+ ## topic: broker topic to send messages to.
+ ##
+ ## dpid: OpenFlow switch datapath id.
+ ##
+ ## Returns: OpenFlow::Controller record
+ global broker_new: function(name: string, host: addr, host_port: port, topic: string, dpid: count): OpenFlow::Controller;
+
+ redef record ControllerState += {
+ ## Controller ip.
+ broker_host: addr &optional;
+ ## Controller listen port.
+ broker_port: port &optional;
+ ## OpenFlow switch datapath id.
+ broker_dpid: count &optional;
+		## Topic to send events for this controller to.
+ broker_topic: string &optional;
+ };
+
+ global broker_flow_mod: event(name: string, dpid: count, match: ofp_match, flow_mod: ofp_flow_mod);
+ global broker_flow_clear: event(name: string, dpid: count);
+}
+
+global broker_peers: table[port, string] of Controller;
+
+function broker_describe(state: ControllerState): string
+ {
+ return fmt("Broker-%s:%d-%d", state$broker_host, state$broker_port, state$broker_dpid);
+ }
+
+function broker_flow_mod_fun(state: ControllerState, match: ofp_match, flow_mod: OpenFlow::ofp_flow_mod): bool
+ {
+ Broker::event(state$broker_topic, Broker::event_args(broker_flow_mod, state$_name, state$broker_dpid, match, flow_mod));
+
+ return T;
+ }
+
+function broker_flow_clear_fun(state: OpenFlow::ControllerState): bool
+ {
+ Broker::event(state$broker_topic, Broker::event_args(broker_flow_clear, state$_name, state$broker_dpid));
+
+ return T;
+ }
+
+function broker_init(state: OpenFlow::ControllerState)
+ {
+ Broker::enable();
+ Broker::connect(cat(state$broker_host), state$broker_port, 1sec);
+	Broker::subscribe_to_events(state$broker_topic); # openflow success and failure events are sent back directly by the remote plugin via broker.
+ }
+
+event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
+ {
+ if ( [peer_port, peer_address] !in broker_peers )
+ # ok, this one was none of ours...
+ return;
+
+ local p = broker_peers[peer_port, peer_address];
+ controller_init_done(p);
+ delete broker_peers[peer_port, peer_address];
+ }
+
+# broker controller constructor
+function broker_new(name: string, host: addr, host_port: port, topic: string, dpid: count): OpenFlow::Controller
+ {
+ local c = OpenFlow::Controller($state=OpenFlow::ControllerState($broker_host=host, $broker_port=host_port, $broker_dpid=dpid, $broker_topic=topic),
+ $flow_mod=broker_flow_mod_fun, $flow_clear=broker_flow_clear_fun, $describe=broker_describe, $supports_flow_removed=T, $init=broker_init);
+
+ register_controller(OpenFlow::BROKER, name, c);
+
+ if ( [host_port, cat(host)] in broker_peers )
+		Reporter::warning(fmt("Peer %s:%s was added to the OpenFlow broker plugin twice.", host, host_port));
+ else
+ broker_peers[host_port, cat(host)] = c;
+
+ return c;
+ }
+
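+# Construction sketch (illustrative only; all values are placeholders):
+# connect to a Broker-speaking OpenFlow agent and address the switch with
+# datapath id 1.
+#
+#	local broker_controller = OpenFlow::broker_new("of-broker", 192.0.2.2,
+#	                                               9999/tcp, "bro/openflow", 1);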
diff --git a/scripts/base/frameworks/openflow/plugins/log.bro b/scripts/base/frameworks/openflow/plugins/log.bro
new file mode 100644
index 0000000000..18aa0c1584
--- /dev/null
+++ b/scripts/base/frameworks/openflow/plugins/log.bro
@@ -0,0 +1,76 @@
+##! OpenFlow plugin that outputs flow-modification commands
+##! to a Bro log file.
+
+@load base/frameworks/openflow
+@load base/frameworks/logging
+
+module OpenFlow;
+
+export {
+ redef enum Plugin += {
+ OFLOG,
+ };
+
+ redef enum Log::ID += { LOG };
+
+ ## Log controller constructor.
+ ##
+ ## dpid: OpenFlow switch datapath id.
+ ##
+ ## success_event: If true, flow_mod_success is raised for each logged line.
+ ##
+ ## Returns: OpenFlow::Controller record
+ global log_new: function(dpid: count, success_event: bool &default=T): OpenFlow::Controller;
+
+ redef record ControllerState += {
+ ## OpenFlow switch datapath id.
+ log_dpid: count &optional;
+ ## Raise or do not raise success event
+ log_success_event: bool &optional;
+ };
+
+ ## The record type which contains column fields of the OpenFlow log.
+ type Info: record {
+ ## Network time
+ ts: time &log;
+ ## OpenFlow switch datapath id
+ dpid: count &log;
+ ## OpenFlow match fields
+ match: ofp_match &log;
+ ## OpenFlow modify flow entry message
+ flow_mod: ofp_flow_mod &log;
+ };
+
+ ## Event that can be handled to access the :bro:type:`OpenFlow::Info`
+ ## record as it is sent on to the logging framework.
+ global log_openflow: event(rec: Info);
+}
+
+event bro_init() &priority=5
+ {
+ Log::create_stream(OpenFlow::LOG, [$columns=Info, $ev=log_openflow, $path="openflow"]);
+ }
+
+function log_flow_mod(state: ControllerState, match: ofp_match, flow_mod: OpenFlow::ofp_flow_mod): bool
+ {
+ Log::write(OpenFlow::LOG, [$ts=network_time(), $dpid=state$log_dpid, $match=match, $flow_mod=flow_mod]);
+ if ( state$log_success_event )
+ event OpenFlow::flow_mod_success(state$_name, match, flow_mod);
+
+ return T;
+ }
+
+function log_describe(state: ControllerState): string
+ {
+ return fmt("Log-%d", state$log_dpid);
+ }
+
+function log_new(dpid: count, success_event: bool &default=T): OpenFlow::Controller
+ {
+ local c = OpenFlow::Controller($state=OpenFlow::ControllerState($log_dpid=dpid, $log_success_event=success_event),
+ $flow_mod=log_flow_mod, $describe=log_describe, $supports_flow_removed=F);
+
+ register_controller(OpenFlow::OFLOG, cat(dpid), c);
+
+ return c;
+ }
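+
+# Usage sketch (illustrative only): a log-only controller is handy for
+# testing, since every flow_mod simply ends up in openflow.log. The dpid
+# value 42 is arbitrary.
+#
+#	event bro_init()
+#		{
+#		local log_controller = OpenFlow::log_new(42);
+#		}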
diff --git a/scripts/base/frameworks/openflow/plugins/ryu.bro b/scripts/base/frameworks/openflow/plugins/ryu.bro
new file mode 100644
index 0000000000..69d51adc9b
--- /dev/null
+++ b/scripts/base/frameworks/openflow/plugins/ryu.bro
@@ -0,0 +1,190 @@
+##! OpenFlow plugin for the Ryu controller.
+
+@load base/frameworks/openflow
+@load base/utils/active-http
+@load base/utils/exec
+@load base/utils/json
+
+module OpenFlow;
+
+export {
+ redef enum Plugin += {
+ RYU,
+ };
+
+ ## Ryu controller constructor.
+ ##
+ ## host: Controller ip.
+ ##
+ ## host_port: Controller listen port.
+ ##
+ ## dpid: OpenFlow switch datapath id.
+ ##
+ ## Returns: OpenFlow::Controller record
+ global ryu_new: function(host: addr, host_port: count, dpid: count): OpenFlow::Controller;
+
+ redef record ControllerState += {
+ ## Controller ip.
+ ryu_host: addr &optional;
+ ## Controller listen port.
+ ryu_port: count &optional;
+ ## OpenFlow switch datapath id.
+ ryu_dpid: count &optional;
+ ## Enable debug mode - output JSON to stdout; do not perform actions
+ ryu_debug: bool &default=F;
+ };
+}
+
+# Ryu ReST API flow_mod URL-path
+const RYU_FLOWENTRY_PATH = "/stats/flowentry/";
+# Ryu ReST API flow_stats URL-path
+#const RYU_FLOWSTATS_PATH = "/stats/flow/";
+
+# Ryu ReST API action_output type.
+type ryu_flow_action: record {
+ # Ryu uses strings as its ReST API output action.
+ _type: string;
+ # The output port for type OUTPUT
+ _port: count &optional;
+};
+
+# The ReST API documentation can be found at
+# https://media.readthedocs.org/pdf/ryu/latest/ryu.pdf
+# Ryu ReST API flow_mod type.
+type ryu_ofp_flow_mod: record {
+ dpid: count;
+ cookie: count &optional;
+ cookie_mask: count &optional;
+ table_id: count &optional;
+ idle_timeout: count &optional;
+ hard_timeout: count &optional;
+ priority: count &optional;
+ flags: count &optional;
+ match: OpenFlow::ofp_match;
+ actions: vector of ryu_flow_action;
+ out_port: count &optional;
+ out_group: count &optional;
+};
+
+# Mapping between ofp flow mod commands and ryu urls
+const ryu_url: table[ofp_flow_mod_command] of string = {
+ [OFPFC_ADD] = "add",
+ [OFPFC_MODIFY] = "modify",
+ [OFPFC_MODIFY_STRICT] = "modify_strict",
+ [OFPFC_DELETE] = "delete",
+ [OFPFC_DELETE_STRICT] = "delete_strict",
+};
+
+# Ryu flow_mod function
+function ryu_flow_mod(state: OpenFlow::ControllerState, match: ofp_match, flow_mod: OpenFlow::ofp_flow_mod): bool
+ {
+ if ( state$_plugin != RYU )
+ {
+ Reporter::error("Ryu openflow plugin was called with state of non-ryu plugin");
+ return F;
+ }
+
+ # Generate ryu_flow_actions because their type differs (using strings as type).
+ local flow_actions: vector of ryu_flow_action = vector();
+
+ for ( i in flow_mod$actions$out_ports )
+ flow_actions[|flow_actions|] = ryu_flow_action($_type="OUTPUT", $_port=flow_mod$actions$out_ports[i]);
+
+ # Generate our ryu_flow_mod record for the ReST API call.
+ local mod: ryu_ofp_flow_mod = ryu_ofp_flow_mod(
+ $dpid=state$ryu_dpid,
+ $cookie=flow_mod$cookie,
+ $idle_timeout=flow_mod$idle_timeout,
+ $hard_timeout=flow_mod$hard_timeout,
+ $priority=flow_mod$priority,
+ $flags=flow_mod$flags,
+ $match=match,
+ $actions=flow_actions
+ );
+
+ if ( flow_mod?$out_port )
+ mod$out_port = flow_mod$out_port;
+ if ( flow_mod?$out_group )
+ mod$out_group = flow_mod$out_group;
+
+ # Type of the command
+ local command_type: string;
+
+ if ( flow_mod$command in ryu_url )
+ command_type = ryu_url[flow_mod$command];
+ else
+ {
+ Reporter::warning(fmt("The given OpenFlow command type '%s' is not available", cat(flow_mod$command)));
+ return F;
+ }
+
+ local url=cat("http://", cat(state$ryu_host), ":", cat(state$ryu_port), RYU_FLOWENTRY_PATH, command_type);
+
+ if ( state$ryu_debug )
+ {
+ print url;
+ print to_json(mod);
+ event OpenFlow::flow_mod_success(state$_name, match, flow_mod);
+ return T;
+ }
+
+ # Create the ActiveHTTP request and convert the record to a Ryu ReST API JSON string
+ local request: ActiveHTTP::Request = ActiveHTTP::Request(
+ $url=url,
+ $method="POST",
+ $client_data=to_json(mod)
+ );
+
+ # Execute call to Ryu's ReST API
+ when ( local result = ActiveHTTP::request(request) )
+ {
+		if ( result$code == 200 )
+ event OpenFlow::flow_mod_success(state$_name, match, flow_mod, result$body);
+ else
+ {
+ Reporter::warning(fmt("Flow modification failed with error: %s", result$body));
+ event OpenFlow::flow_mod_failure(state$_name, match, flow_mod, result$body);
+ return F;
+ }
+ }
+
+ return T;
+ }
+
+function ryu_flow_clear(state: OpenFlow::ControllerState): bool
+ {
+ local url=cat("http://", cat(state$ryu_host), ":", cat(state$ryu_port), RYU_FLOWENTRY_PATH, "clear", "/", state$ryu_dpid);
+
+ if ( state$ryu_debug )
+ {
+ print url;
+ return T;
+ }
+
+ local request: ActiveHTTP::Request = ActiveHTTP::Request(
+ $url=url,
+ $method="DELETE"
+ );
+
+ when ( local result = ActiveHTTP::request(request) )
+ {
+ }
+
+ return T;
+ }
+
+function ryu_describe(state: ControllerState): string
+ {
+ return fmt("Ryu-%d-http://%s:%d", state$ryu_dpid, state$ryu_host, state$ryu_port);
+ }
+
+# Ryu controller constructor
+function ryu_new(host: addr, host_port: count, dpid: count): OpenFlow::Controller
+ {
+ local c = OpenFlow::Controller($state=OpenFlow::ControllerState($ryu_host=host, $ryu_port=host_port, $ryu_dpid=dpid),
+ $flow_mod=ryu_flow_mod, $flow_clear=ryu_flow_clear, $describe=ryu_describe, $supports_flow_removed=F);
+
+ register_controller(OpenFlow::RYU, cat(host,host_port,dpid), c);
+
+ return c;
+ }
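+
+# Usage sketch (illustrative only; address, port and dpid are placeholders):
+# create a Ryu controller and clear the switch's flow table once the
+# controller reports itself as activated.
+#
+#	global ryu: OpenFlow::Controller;
+#
+#	event bro_init()
+#		{
+#		ryu = OpenFlow::ryu_new(127.0.0.1, 8080, 42);
+#		}
+#
+#	event OpenFlow::controller_activated(name: string, controller: OpenFlow::Controller)
+#		{
+#		OpenFlow::flow_clear(ryu);
+#		}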
diff --git a/scripts/base/frameworks/openflow/types.bro b/scripts/base/frameworks/openflow/types.bro
new file mode 100644
index 0000000000..f527cd51a7
--- /dev/null
+++ b/scripts/base/frameworks/openflow/types.bro
@@ -0,0 +1,132 @@
+##! Types used by the OpenFlow framework.
+
+module OpenFlow;
+
+@load ./consts
+
+export {
+ ## Available openflow plugins
+ type Plugin: enum {
+ ## Internal placeholder plugin
+ INVALID,
+ };
+
+ ## Controller related state.
+ ## Can be redefined by plugins to
+ ## add state.
+ type ControllerState: record {
+ ## Internally set to the type of plugin used.
+ _plugin: Plugin &optional;
+ ## Internally set to the unique name of the controller.
+ _name: string &optional;
+ ## Internally set to true once the controller is activated
+ _activated: bool &default=F;
+ } &redef;
+
+ ## Openflow match definition.
+ ##
+ ## The openflow match record describes
+ ## which packets match to a specific
+ ## rule in a flow table.
+ type ofp_match: record {
+ # Input switch port.
+ in_port: count &optional;
+ # Ethernet source address.
+ dl_src: string &optional;
+ # Ethernet destination address.
+ dl_dst: string &optional;
+ # Input VLAN id.
+ dl_vlan: count &optional;
+ # Input VLAN priority.
+ dl_vlan_pcp: count &optional;
+ # Ethernet frame type.
+ dl_type: count &optional;
+ # IP ToS (actually DSCP field, 6bits).
+ nw_tos: count &optional;
+ # IP protocol or lower 8 bits of ARP opcode.
+ nw_proto: count &optional;
+ # At the moment, we store both v4 and v6 in the same fields.
+ # This is not how OpenFlow does it, we might want to change that...
+ # IP source address.
+ nw_src: subnet &optional;
+ # IP destination address.
+ nw_dst: subnet &optional;
+ # TCP/UDP source port.
+ tp_src: count &optional;
+ # TCP/UDP destination port.
+ tp_dst: count &optional;
+ } &log;
+
+ ## The actions that can be taken in a flow.
+	## (Separate record to make ofp_flow_mod less crowded.)
+ type ofp_flow_action: record {
+ ## Output ports to send data to.
+ out_ports: vector of count &default=vector();
+ ## set vlan vid to this value
+ vlan_vid: count &optional;
+ ## set vlan priority to this value
+ vlan_pcp: count &optional;
+ ## strip vlan tag
+ vlan_strip: bool &default=F;
+ ## set ethernet source address
+ dl_src: string &optional;
+ ## set ethernet destination address
+ dl_dst: string &optional;
+ ## set ip tos to this value
+ nw_tos: count &optional;
+ ## set source to this ip
+ nw_src: addr &optional;
+ ## set destination to this ip
+ nw_dst: addr &optional;
+ ## set tcp/udp source port
+ tp_src: count &optional;
+ ## set tcp/udp destination port
+ tp_dst: count &optional;
+ } &log;
+
+ ## Openflow flow_mod definition, describing the action to perform.
+ type ofp_flow_mod: record {
+ ## Opaque controller-issued identifier.
+ # This is optional in the specification - but let's force
+		# it so we can always identify our flows...
+ cookie: count; # &default=BRO_COOKIE_ID * COOKIE_BID_START;
+ # Flow actions
+ ## Table to put the flow in. OFPTT_ALL can be used for delete,
+ ## to delete flows from all matching tables.
+ table_id: count &optional;
+ ## One of OFPFC_*.
+ command: ofp_flow_mod_command; # &default=OFPFC_ADD;
+ ## Idle time before discarding (seconds).
+ idle_timeout: count &default=0;
+ ## Max time before discarding (seconds).
+ hard_timeout: count &default=0;
+ ## Priority level of flow entry.
+ priority: count &default=0;
+		## For OFPFC_DELETE* commands, require matching entries to include
+ ## this as an output port/group. OFPP_ANY/OFPG_ANY means no restrictions.
+ out_port: count &optional;
+ out_group: count &optional;
+ ## Bitmap of the OFPFF_* flags
+ flags: count &default=0;
+ ## Actions to take on match
+ actions: ofp_flow_action &default=ofp_flow_action();
+ } &log;
+
+ ## Controller record representing an openflow controller
+ type Controller: record {
+ ## Controller related state.
+ state: ControllerState;
+ ## Does the controller support the flow_removed event?
+ supports_flow_removed: bool;
+ ## function that describes the controller. Has to be implemented.
+ describe: function(state: ControllerState): string;
+ ## one-time initialization function. If defined, controller_init_done has to be called once initialization finishes.
+ init: function (state: ControllerState) &optional;
+ ## one-time destruction function
+ destroy: function (state: ControllerState) &optional;
+ ## flow_mod function
+ flow_mod: function(state: ControllerState, match: ofp_match, flow_mod: ofp_flow_mod): bool &optional;
+ ## flow_clear function
+ flow_clear: function(state: ControllerState): bool &optional;
+ };
+}
diff --git a/scripts/base/frameworks/packet-filter/main.bro b/scripts/base/frameworks/packet-filter/main.bro
index b0a6f144e3..8a9cb4eb98 100644
--- a/scripts/base/frameworks/packet-filter/main.bro
+++ b/scripts/base/frameworks/packet-filter/main.bro
@@ -138,7 +138,7 @@ redef enum PcapFilterID += {
function test_filter(filter: string): bool
{
- if ( ! precompile_pcap_filter(FilterTester, filter) )
+ if ( ! Pcap::precompile_pcap_filter(FilterTester, filter) )
{
# The given filter was invalid
# TODO: generate a notice.
@@ -273,7 +273,7 @@ function install(): bool
return F;
local ts = current_time();
- if ( ! precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
+ if ( ! Pcap::precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
{
NOTICE([$note=Compile_Failure,
$msg=fmt("Compiling packet filter failed"),
@@ -303,7 +303,7 @@ function install(): bool
}
info$filter = current_filter;
- if ( ! install_pcap_filter(DefaultPcapFilter) )
+ if ( ! Pcap::install_pcap_filter(DefaultPcapFilter) )
{
# Installing the filter failed for some reason.
info$success = F;
diff --git a/scripts/base/frameworks/software/main.bro b/scripts/base/frameworks/software/main.bro
index bcb791b4f4..0c1c4cd302 100644
--- a/scripts/base/frameworks/software/main.bro
+++ b/scripts/base/frameworks/software/main.bro
@@ -280,6 +280,13 @@ function parse_mozilla(unparsed_version: string): Description
v = parse(parts[1])$version;
}
}
+ else if ( /AdobeAIR\/[0-9\.]*/ in unparsed_version )
+ {
+ software_name = "AdobeAIR";
+ parts = split_string_all(unparsed_version, /AdobeAIR\/[0-9\.]*/);
+ if ( 1 in parts )
+ v = parse(parts[1])$version;
+ }
else if ( /AppleWebKit\/[0-9\.]*/ in unparsed_version )
{
software_name = "Unspecified WebKit";
diff --git a/scripts/base/init-bare.bro b/scripts/base/init-bare.bro
index 1a5c9e2e96..a2cb3e4c5e 100644
--- a/scripts/base/init-bare.bro
+++ b/scripts/base/init-bare.bro
@@ -39,6 +39,13 @@ type count_set: set[count];
## directly and then remove this alias.
type index_vec: vector of count;
+## A vector of subnets.
+##
+## .. todo:: We need this type definition only for declaring builtin functions
+## via ``bifcl``. We should extend ``bifcl`` to understand composite types
+## directly and then remove this alias.
+type subnet_vec: vector of subnet;
+
## A vector of any, used by some builtin functions to store a list of varying
## types.
##
@@ -120,6 +127,18 @@ type conn_id: record {
resp_p: port; ##< The responder's port number.
} &log;
+## The identifying 4-tuple of a uni-directional flow.
+##
+## .. note:: It's actually a 5-tuple: the transport-layer protocol is stored as
+## part of the port values, `src_p` and `dst_p`, and can be extracted from
+## them with :bro:id:`get_port_transport_proto`.
+type flow_id : record {
+ src_h: addr; ##< The source IP address.
+ src_p: port; ##< The source port number.
+ dst_h: addr; ##< The destination IP address.
+	dst_p: port;	##< The destination port number.
+} &log;
+
## Specifics about an ICMP conversation. ICMP events typically pass this in
## addition to :bro:type:`conn_id`.
##
@@ -345,6 +364,12 @@ type connection: record {
## for the connection unless the :bro:id:`tunnel_changed` event is
## handled and reassigns this field to the new encapsulation.
tunnel: EncapsulatingConnVector &optional;
+
+ ## The outer VLAN, if applicable, for this connection.
+ vlan: int &optional;
+
+ ## The inner VLAN, if applicable, for this connection.
+ inner_vlan: int &optional;
};
## Default amount of time a file can be inactive before the file analysis
@@ -768,71 +793,6 @@ type entropy_test_result: record {
serial_correlation: double; ##< Serial correlation coefficient.
};
-# Prototypes of Bro built-in functions.
-@load base/bif/strings.bif
-@load base/bif/bro.bif
-@load base/bif/reporter.bif
-
-## Deprecated. This is superseded by the new logging framework.
-global log_file_name: function(tag: string): string &redef;
-
-## Deprecated. This is superseded by the new logging framework.
-global open_log_file: function(tag: string): file &redef;
-
-## Specifies a directory for Bro to store its persistent state. All globals can
-## be declared persistent via the :bro:attr:`&persistent` attribute.
-const state_dir = ".state" &redef;
-
-## Length of the delays inserted when storing state incrementally. To avoid
-## dropping packets when serializing larger volumes of persistent state to
-## disk, Bro interleaves the operation with continued packet processing.
-const state_write_delay = 0.01 secs &redef;
-
-global done_with_network = F;
-event net_done(t: time) { done_with_network = T; }
-
-function log_file_name(tag: string): string
- {
- local suffix = getenv("BRO_LOG_SUFFIX") == "" ? "log" : getenv("BRO_LOG_SUFFIX");
- return fmt("%s.%s", tag, suffix);
- }
-
-function open_log_file(tag: string): file
- {
- return open(log_file_name(tag));
- }
-
-## Internal function.
-function add_interface(iold: string, inew: string): string
- {
- if ( iold == "" )
- return inew;
- else
- return fmt("%s %s", iold, inew);
- }
-
-## Network interfaces to listen on. Use ``redef interfaces += "eth0"`` to
-## extend.
-global interfaces = "" &add_func = add_interface;
-
-## Internal function.
-function add_signature_file(sold: string, snew: string): string
- {
- if ( sold == "" )
- return snew;
- else
- return cat(sold, " ", snew);
- }
-
-## Signature files to read. Use ``redef signature_files += "foo.sig"`` to
-## extend. Signature files added this way will be searched relative to
-## ``BROPATH``. Using the ``@load-sigs`` directive instead is preferred
-## since that can search paths relative to the current script.
-global signature_files = "" &add_func = add_signature_file;
-
-## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``.
-const passive_fingerprint_file = "base/misc/p0f.fp" &redef;
-
# TCP values for :bro:see:`endpoint` *state* field.
# todo:: these should go into an enum to make them autodoc'able.
const TCP_INACTIVE = 0; ##< Endpoint is still inactive.
@@ -1511,6 +1471,7 @@ type l2_hdr: record {
src: string &optional; ##< L2 source (if Ethernet).
dst: string &optional; ##< L2 destination (if Ethernet).
vlan: count &optional; ##< Outermost VLAN tag if any (and Ethernet).
+ inner_vlan: count &optional; ##< Innermost VLAN tag if any (and Ethernet).
eth_type: count &optional; ##< Innermost Ethertype (if Ethernet).
proto: layer3_proto; ##< L3 protocol.
};
@@ -1742,6 +1703,71 @@ type gtp_delete_pdp_ctx_response_elements: record {
ext: gtp_private_extension &optional;
};
+# Prototypes of Bro built-in functions.
+@load base/bif/strings.bif
+@load base/bif/bro.bif
+@load base/bif/reporter.bif
+
+## Deprecated. This is superseded by the new logging framework.
+global log_file_name: function(tag: string): string &redef;
+
+## Deprecated. This is superseded by the new logging framework.
+global open_log_file: function(tag: string): file &redef;
+
+## Specifies a directory for Bro to store its persistent state. All globals can
+## be declared persistent via the :bro:attr:`&persistent` attribute.
+const state_dir = ".state" &redef;
+
+## Length of the delays inserted when storing state incrementally. To avoid
+## dropping packets when serializing larger volumes of persistent state to
+## disk, Bro interleaves the operation with continued packet processing.
+const state_write_delay = 0.01 secs &redef;
+
+global done_with_network = F;
+event net_done(t: time) { done_with_network = T; }
+
+function log_file_name(tag: string): string
+ {
+ local suffix = getenv("BRO_LOG_SUFFIX") == "" ? "log" : getenv("BRO_LOG_SUFFIX");
+ return fmt("%s.%s", tag, suffix);
+ }
+
+function open_log_file(tag: string): file
+ {
+ return open(log_file_name(tag));
+ }
+
+## Internal function.
+function add_interface(iold: string, inew: string): string
+ {
+ if ( iold == "" )
+ return inew;
+ else
+ return fmt("%s %s", iold, inew);
+ }
+
+## Network interfaces to listen on. Use ``redef interfaces += "eth0"`` to
+## extend.
+global interfaces = "" &add_func = add_interface;
+
+## Internal function.
+function add_signature_file(sold: string, snew: string): string
+ {
+ if ( sold == "" )
+ return snew;
+ else
+ return cat(sold, " ", snew);
+ }
+
+## Signature files to read. Use ``redef signature_files += "foo.sig"`` to
+## extend. Signature files added this way will be searched relative to
+## ``BROPATH``. Using the ``@load-sigs`` directive instead is preferred
+## since that can search paths relative to the current script.
+global signature_files = "" &add_func = add_signature_file;
+
+## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``.
+const passive_fingerprint_file = "base/misc/p0f.fp" &redef;
+
## Definition of "secondary filters". A secondary filter is a BPF filter given
## as index in this table. For each such filter, the corresponding event is
## raised for all matching packets.
@@ -2502,7 +2528,7 @@ global dns_skip_all_addl = T &redef;
## If a DNS request includes more than this many queries, assume it's non-DNS
## traffic and do not process it. Set to 0 to turn off this functionality.
-global dns_max_queries = 5;
+global dns_max_queries = 25 &redef;
## HTTP session statistics.
##
@@ -3655,20 +3681,11 @@ export {
## Toggle whether to do GRE decapsulation.
const enable_gre = T &redef;
- ## With this option set, the Teredo analysis will first check to see if
- ## other protocol analyzers have confirmed that they think they're
- ## parsing the right protocol and only continue with Teredo tunnel
- ## decapsulation if nothing else has yet confirmed. This can help
- ## reduce false positives of UDP traffic (e.g. DNS) that also happens
- ## to have a valid Teredo encapsulation.
- const yielding_teredo_decapsulation = T &redef;
-
## With this set, the Teredo analyzer waits until it sees both sides
## of a connection using a valid Teredo encapsulation before issuing
## a :bro:see:`protocol_confirmation`. If it's false, the first
## occurrence of a packet with valid Teredo encapsulation causes a
- ## confirmation. Both cases are still subject to effects of
- ## :bro:see:`Tunnel::yielding_teredo_decapsulation`.
+ ## confirmation.
const delay_teredo_confirmation = T &redef;
## With this set, the GTP analyzer waits until the most-recent upflow
@@ -3684,7 +3701,6 @@ export {
## (includes GRE tunnels).
const ip_tunnel_timeout = 24hrs &redef;
} # end export
-module GLOBAL;
module Reporter;
export {
@@ -3703,10 +3719,18 @@ export {
## external harness and shouldn't output anything to the console.
const errors_to_stderr = T &redef;
}
-module GLOBAL;
-## Number of bytes per packet to capture from live interfaces.
-const snaplen = 8192 &redef;
+module Pcap;
+export {
+ ## Number of bytes per packet to capture from live interfaces.
+ const snaplen = 8192 &redef;
+
+ ## Number of Mbytes to provide as buffer space when capturing from live
+ ## interfaces.
+ const bufsize = 128 &redef;
+} # end export
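+
+# A minimal tuning sketch for local.bro (the values are illustrative only):
+#   redef Pcap::snaplen = 65535;
+#   redef Pcap::bufsize = 256;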
+
+module GLOBAL;
## Seed for hashes computed internally for probabilistic data structures. Using
## the same value here will make the hashes compatible between independent Bro
diff --git a/scripts/base/init-default.bro b/scripts/base/init-default.bro
index 58d2b4b2b9..19f7f82dd8 100644
--- a/scripts/base/init-default.bro
+++ b/scripts/base/init-default.bro
@@ -37,6 +37,10 @@
@load base/frameworks/reporter
@load base/frameworks/sumstats
@load base/frameworks/tunnels
+@ifdef ( Broker::enable )
+@load base/frameworks/openflow
+@load base/frameworks/netcontrol
+@endif
@load base/protocols/conn
@load base/protocols/dhcp
@@ -52,6 +56,7 @@
@load base/protocols/pop3
@load base/protocols/radius
@load base/protocols/rdp
+@load base/protocols/rfb
@load base/protocols/sip
@load base/protocols/snmp
@load base/protocols/smtp
diff --git a/scripts/base/protocols/conn/main.bro b/scripts/base/protocols/conn/main.bro
index 7ef204268b..c6eef5d2d5 100644
--- a/scripts/base/protocols/conn/main.bro
+++ b/scripts/base/protocols/conn/main.bro
@@ -47,7 +47,7 @@ export {
## S2 Connection established and close attempt by originator seen (but no reply from responder).
## S3 Connection established and close attempt by responder seen (but no reply from originator).
## RSTO Connection established, originator aborted (sent a RST).
- ## RSTR Established, responder aborted.
+ ## RSTR Responder sent a RST.
## RSTOS0 Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder.
## RSTRH Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator.
## SH Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was "half" open).
@@ -87,7 +87,8 @@ export {
## f packet with FIN bit set
## r packet with RST bit set
## c packet with a bad checksum
- ## i inconsistent packet (e.g. SYN+RST bits both set)
+ ## i inconsistent packet (e.g. FIN+RST bits set)
+ ## q multi-flag packet (SYN+FIN or SYN+RST bits set)
## ====== ====================================================
##
## If the event comes from the originator, the letter is in
diff --git a/scripts/base/protocols/dns/consts.bro b/scripts/base/protocols/dns/consts.bro
index 13af6c3e81..026588f777 100644
--- a/scripts/base/protocols/dns/consts.bro
+++ b/scripts/base/protocols/dns/consts.bro
@@ -26,6 +26,7 @@ export {
[49] = "DHCID", [99] = "SPF", [100] = "DINFO", [101] = "UID",
[102] = "GID", [103] = "UNSPEC", [249] = "TKEY", [250] = "TSIG",
[251] = "IXFR", [252] = "AXFR", [253] = "MAILB", [254] = "MAILA",
+ [257] = "CAA",
[32768] = "TA", [32769] = "DLV",
[ANY] = "*",
} &default = function(n: count): string { return fmt("query-%d", n); };
diff --git a/scripts/base/protocols/ftp/main.bro b/scripts/base/protocols/ftp/main.bro
index f98e33b315..717b3a0669 100644
--- a/scripts/base/protocols/ftp/main.bro
+++ b/scripts/base/protocols/ftp/main.bro
@@ -213,7 +213,7 @@ event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) &prior
# on a different file could be checked, but the file size will
# be overwritten by the server response to the RETR command
# if that's given as well which would be more correct.
- c$ftp$file_size = extract_count(msg);
+ c$ftp$file_size = extract_count(msg, F);
}
# PASV and EPSV processing
diff --git a/scripts/base/protocols/http/main.bro b/scripts/base/protocols/http/main.bro
index 916723ebcb..e70d166f11 100644
--- a/scripts/base/protocols/http/main.bro
+++ b/scripts/base/protocols/http/main.bro
@@ -41,6 +41,8 @@ export {
## misspelled like the standard declares, but the name used here
## is "referrer" spelled correctly.
referrer: string &log &optional;
+ ## Value of the version portion of the request.
+ version: string &log &optional;
## Value of the User-Agent header from the client.
user_agent: string &log &optional;
## Actual uncompressed content size of the data transferred from
@@ -222,6 +224,8 @@ event http_reply(c: connection, version: string, code: count, reason: string) &p
c$http$status_code = code;
c$http$status_msg = reason;
+ c$http$version = version;
+
if ( code_in_range(code, 100, 199) )
{
c$http$info_code = code;
@@ -270,7 +274,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
{
if ( /^[bB][aA][sS][iI][cC] / in value )
{
- local userpass = decode_base64(sub(value, /[bB][aA][sS][iI][cC][[:blank:]]/, ""));
+ local userpass = decode_base64_conn(c$id, sub(value, /[bB][aA][sS][iI][cC][[:blank:]]/, ""));
local up = split_string(userpass, /:/);
if ( |up| >= 2 )
{
diff --git a/scripts/base/protocols/rfb/README b/scripts/base/protocols/rfb/README
new file mode 100644
index 0000000000..afe67958a2
--- /dev/null
+++ b/scripts/base/protocols/rfb/README
@@ -0,0 +1 @@
+Support for Remote FrameBuffer analysis. This includes all VNC servers.
\ No newline at end of file
diff --git a/scripts/base/protocols/rfb/__load__.bro b/scripts/base/protocols/rfb/__load__.bro
new file mode 100644
index 0000000000..9e43682d13
--- /dev/null
+++ b/scripts/base/protocols/rfb/__load__.bro
@@ -0,0 +1,3 @@
+# Generated by binpac_quickstart
+@load ./main
+@load-sigs ./dpd.sig
\ No newline at end of file
diff --git a/scripts/base/protocols/rfb/dpd.sig b/scripts/base/protocols/rfb/dpd.sig
new file mode 100644
index 0000000000..40793ad590
--- /dev/null
+++ b/scripts/base/protocols/rfb/dpd.sig
@@ -0,0 +1,12 @@
+signature dpd_rfb_server {
+ ip-proto == tcp
+ payload /^RFB/
+ requires-reverse-signature dpd_rfb_client
+ enable "rfb"
+}
+
+signature dpd_rfb_client {
+ ip-proto == tcp
+ payload /^RFB/
+ tcp-state originator
+}
\ No newline at end of file
diff --git a/scripts/base/protocols/rfb/main.bro b/scripts/base/protocols/rfb/main.bro
new file mode 100644
index 0000000000..03e39a40f9
--- /dev/null
+++ b/scripts/base/protocols/rfb/main.bro
@@ -0,0 +1,164 @@
+module RFB;
+
+export {
+ redef enum Log::ID += { LOG };
+
+ type Info: record {
+ ## Timestamp for when the event happened.
+ ts: time &log;
+ ## Unique ID for the connection.
+ uid: string &log;
+ ## The connection's 4-tuple of endpoint addresses/ports.
+ id: conn_id &log;
+
+ ## Major version of the client.
+ client_major_version: string &log &optional;
+ ## Minor version of the client.
+ client_minor_version: string &log &optional;
+ ## Major version of the server.
+ server_major_version: string &log &optional;
+ ## Minor version of the server.
+ server_minor_version: string &log &optional;
+
+ ## Identifier of authentication method used.
+ authentication_method: string &log &optional;
+ ## Whether or not authentication was successful.
+ auth: bool &log &optional;
+
+ ## Whether the client has an exclusive or a shared session.
+ share_flag: bool &log &optional;
+ ## Name of the screen that is being shared.
+ desktop_name: string &log &optional;
+ ## Width of the screen that is being shared.
+ width: count &log &optional;
+ ## Height of the screen that is being shared.
+ height: count &log &optional;
+
+ ## Internally used value to determine if this connection
+ ## has already been logged.
+ done: bool &default=F;
+ };
+
+ global log_rfb: event(rec: Info);
+}
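+
+# A hypothetical consumer of the new log stream (not part of this script; the
+# handler body is illustrative only):
+#   event RFB::log_rfb(rec: RFB::Info)
+#       {
+#       if ( rec?$auth && ! rec$auth )
+#           print fmt("failed RFB authentication on %s", rec$uid);
+#       }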
+
+function friendly_auth_name(auth: count): string
+ {
+ switch (auth) {
+ case 0:
+ return "Invalid";
+ case 1:
+ return "None";
+ case 2:
+ return "VNC";
+ case 16:
+ return "Tight";
+ case 17:
+ return "Ultra";
+ case 18:
+ return "TLS";
+ case 19:
+ return "VeNCrypt";
+ case 20:
+ return "GTK-VNC SASL";
+ case 21:
+ return "MD5 hash authentication";
+ case 22:
+ return "Colin Dean xvp";
+ case 30:
+ return "Apple Remote Desktop";
+ }
+ return "RealVNC";
+}
+
+redef record connection += {
+ rfb: Info &optional;
+};
+
+event bro_init() &priority=5
+ {
+ Log::create_stream(RFB::LOG, [$columns=Info, $ev=log_rfb, $path="rfb"]);
+ }
+
+function write_log(c: connection)
+ {
+ local state = c$rfb;
+ if ( state$done )
+ {
+ return;
+ }
+
+ Log::write(RFB::LOG, c$rfb);
+ c$rfb$done = T;
+ }
+
+function set_session(c: connection)
+ {
+ if ( ! c?$rfb )
+ {
+ local info: Info;
+ info$ts = network_time();
+ info$uid = c$uid;
+ info$id = c$id;
+
+ c$rfb = info;
+ }
+ }
+
+event rfb_event(c: connection) &priority=5
+ {
+ set_session(c);
+ }
+
+event rfb_client_version(c: connection, major_version: string, minor_version: string) &priority=5
+ {
+ set_session(c);
+ c$rfb$client_major_version = major_version;
+ c$rfb$client_minor_version = minor_version;
+ }
+
+event rfb_server_version(c: connection, major_version: string, minor_version: string) &priority=5
+ {
+ set_session(c);
+ c$rfb$server_major_version = major_version;
+ c$rfb$server_minor_version = minor_version;
+ }
+
+event rfb_authentication_type(c: connection, authtype: count) &priority=5
+ {
+ set_session(c);
+
+ c$rfb$authentication_method = friendly_auth_name(authtype);
+ }
+
+event rfb_server_parameters(c: connection, name: string, width: count, height: count) &priority=5
+ {
+ set_session(c);
+
+ c$rfb$desktop_name = name;
+ c$rfb$width = width;
+ c$rfb$height = height;
+ }
+
+event rfb_server_parameters(c: connection, name: string, width: count, height: count) &priority=-5
+ {
+ write_log(c);
+ }
+
+event rfb_auth_result(c: connection, result: bool) &priority=5
+ {
+ c$rfb$auth = !result;
+ }
+
+event rfb_share_flag(c: connection, flag: bool) &priority=5
+ {
+ c$rfb$share_flag = flag;
+ }
+
+event connection_state_remove(c: connection) &priority=-5
+ {
+ if ( c?$rfb )
+ {
+ write_log(c);
+ }
+ }
diff --git a/scripts/base/protocols/sip/main.bro b/scripts/base/protocols/sip/main.bro
index 0f396b8f74..dc790ad560 100644
--- a/scripts/base/protocols/sip/main.bro
+++ b/scripts/base/protocols/sip/main.bro
@@ -60,9 +60,9 @@ export {
## Contents of the Warning: header
warning: string &log &optional;
## Contents of the Content-Length: header from the client
- request_body_len: string &log &optional;
+ request_body_len: count &log &optional;
## Contents of the Content-Length: header from the server
- response_body_len: string &log &optional;
+ response_body_len: count &log &optional;
## Contents of the Content-Type: header from the server
content_type: string &log &optional;
};
@@ -80,7 +80,7 @@ export {
## that the SIP analyzer will only accept methods consisting solely
## of letters ``[A-Za-z]``.
const sip_methods: set[string] = {
- "REGISTER", "INVITE", "ACK", "CANCEL", "BYE", "OPTIONS"
+ "REGISTER", "INVITE", "ACK", "CANCEL", "BYE", "OPTIONS", "NOTIFY", "SUBSCRIBE"
} &redef;
## Event that can be handled to access the SIP record as it is sent on
@@ -127,17 +127,6 @@ function set_state(c: connection, is_request: bool)
c$sip_state = s;
}
- # These deal with new requests and responses.
- if ( is_request && c$sip_state$current_request !in c$sip_state$pending )
- c$sip_state$pending[c$sip_state$current_request] = new_sip_session(c);
- if ( ! is_request && c$sip_state$current_response !in c$sip_state$pending )
- c$sip_state$pending[c$sip_state$current_response] = new_sip_session(c);
-
- if ( is_request )
- c$sip = c$sip_state$pending[c$sip_state$current_request];
- else
- c$sip = c$sip_state$pending[c$sip_state$current_response];
-
if ( is_request )
{
if ( c$sip_state$current_request !in c$sip_state$pending )
@@ -152,7 +141,6 @@ function set_state(c: connection, is_request: bool)
c$sip = c$sip_state$pending[c$sip_state$current_response];
}
-
}
function flush_pending(c: connection)
@@ -163,7 +151,9 @@ function flush_pending(c: connection)
for ( r in c$sip_state$pending )
{
# We don't use pending elements at index 0.
- if ( r == 0 ) next;
+ if ( r == 0 )
+ next;
+
Log::write(SIP::LOG, c$sip_state$pending[r]);
}
}
@@ -205,16 +195,39 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
if ( c$sip_state$current_request !in c$sip_state$pending )
++c$sip_state$current_request;
set_state(c, is_request);
- if ( name == "CALL-ID" ) c$sip$call_id = value;
- else if ( name == "CONTENT-LENGTH" || name == "L" ) c$sip$request_body_len = value;
- else if ( name == "CSEQ" ) c$sip$seq = value;
- else if ( name == "DATE" ) c$sip$date = value;
- else if ( name == "FROM" || name == "F" ) c$sip$request_from = split_string1(value, /;[ ]?tag=/)[0];
- else if ( name == "REPLY-TO" ) c$sip$reply_to = value;
- else if ( name == "SUBJECT" || name == "S" ) c$sip$subject = value;
- else if ( name == "TO" || name == "T" ) c$sip$request_to = value;
- else if ( name == "USER-AGENT" ) c$sip$user_agent = value;
- else if ( name == "VIA" || name == "V" ) c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0];
+ switch ( name )
+ {
+ case "CALL-ID":
+ c$sip$call_id = value;
+ break;
+ case "CONTENT-LENGTH", "L":
+ c$sip$request_body_len = to_count(value);
+ break;
+ case "CSEQ":
+ c$sip$seq = value;
+ break;
+ case "DATE":
+ c$sip$date = value;
+ break;
+ case "FROM", "F":
+ c$sip$request_from = split_string1(value, /;[ ]?tag=/)[0];
+ break;
+ case "REPLY-TO":
+ c$sip$reply_to = value;
+ break;
+ case "SUBJECT", "S":
+ c$sip$subject = value;
+ break;
+ case "TO", "T":
+ c$sip$request_to = value;
+ break;
+ case "USER-AGENT":
+ c$sip$user_agent = value;
+ break;
+ case "VIA", "V":
+ c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0];
+ break;
+ }
c$sip_state$pending[c$sip_state$current_request] = c$sip;
}
@@ -222,13 +235,29 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
{
if ( c$sip_state$current_response !in c$sip_state$pending )
++c$sip_state$current_response;
+
set_state(c, is_request);
- if ( name == "CONTENT-LENGTH" || name == "L" ) c$sip$response_body_len = value;
- else if ( name == "CONTENT-TYPE" || name == "C" ) c$sip$content_type = value;
- else if ( name == "WARNING" ) c$sip$warning = value;
- else if ( name == "FROM" || name == "F" ) c$sip$response_from = split_string1(value, /;[ ]?tag=/)[0];
- else if ( name == "TO" || name == "T" ) c$sip$response_to = value;
- else if ( name == "VIA" || name == "V" ) c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0];
+ switch ( name )
+ {
+ case "CONTENT-LENGTH", "L":
+ c$sip$response_body_len = to_count(value);
+ break;
+ case "CONTENT-TYPE", "C":
+ c$sip$content_type = value;
+ break;
+ case "WARNING":
+ c$sip$warning = value;
+ break;
+ case "FROM", "F":
+ c$sip$response_from = split_string1(value, /;[ ]?tag=/)[0];
+ break;
+ case "TO", "T":
+ c$sip$response_to = value;
+ break;
+ case "VIA", "V":
+ c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0];
+ break;
+ }
c$sip_state$pending[c$sip_state$current_response] = c$sip;
}
diff --git a/scripts/base/protocols/smtp/main.bro b/scripts/base/protocols/smtp/main.bro
index 5fb5cac4bc..6df9bddb54 100644
--- a/scripts/base/protocols/smtp/main.bro
+++ b/scripts/base/protocols/smtp/main.bro
@@ -29,6 +29,8 @@ export {
from: string &log &optional;
## Contents of the To header.
to: set[string] &log &optional;
+ ## Contents of the CC header.
+ cc: set[string] &log &optional;
## Contents of the ReplyTo header.
reply_to: string &log &optional;
## Contents of the MsgID header.
@@ -239,6 +241,16 @@ event mime_one_header(c: connection, h: mime_header_rec) &priority=5
add c$smtp$to[to_parts[i]];
}
+ else if ( h$name == "CC" )
+ {
+ if ( ! c$smtp?$cc )
+ c$smtp$cc = set();
+
+ local cc_parts = split_string(h$value, /[[:blank:]]*,[[:blank:]]*/);
+ for ( i in cc_parts )
+ add c$smtp$cc[cc_parts[i]];
+ }
+
else if ( h$name == "X-ORIGINATING-IP" )
{
local addresses = extract_ip_addresses(h$value);
diff --git a/scripts/base/protocols/ssh/main.bro b/scripts/base/protocols/ssh/main.bro
index d9e1e2b3cf..fad2da0b8e 100644
--- a/scripts/base/protocols/ssh/main.bro
+++ b/scripts/base/protocols/ssh/main.bro
@@ -46,11 +46,10 @@ export {
## authentication success or failure when compression is enabled.
const compression_algorithms = set("zlib", "zlib@openssh.com") &redef;
- ## If true, we tell the event engine to not look at further data
- ## packets after the initial SSH handshake. Helps with performance
- ## (especially with large file transfers) but precludes some
- ## kinds of analyses. Defaults to T.
- const skip_processing_after_detection = T &redef;
+ ## If true, detach the SSH analyzer from the connection once authentication is
+ ## detected, so that the encrypted traffic is not processed further. Helps with performance
+ ## (especially with large file transfers).
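+ ## For example, a site that wants to keep the analyzer attached for the whole
+ ## connection can use ``redef SSH::disable_analyzer_after_detection = F;``.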
+ const disable_analyzer_after_detection = T &redef;
## Event that can be handled to access the SSH record as it is sent on
## to the logging framework.
@@ -70,6 +69,8 @@ redef record Info += {
# Store capabilities from the first host for
# comparison with the second (internal use)
capabilities: Capabilities &optional;
+ ## Analyzer ID.
+ analyzer_id: count &optional;
};
redef record connection += {
@@ -130,11 +131,8 @@ event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=5
c$ssh$auth_success = T;
- if ( skip_processing_after_detection)
- {
- skip_further_processing(c$id);
- set_record_packets(c$id, F);
- }
+ if ( disable_analyzer_after_detection )
+ disable_analyzer(c$id, c$ssh$analyzer_id);
}
event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=-5
@@ -179,7 +177,7 @@ function find_bidirectional_alg(client_prefs: Algorithm_Prefs, server_prefs: Alg
# Usually these are the same, but if they're not, return the details
return c_to_s == s_to_c ? c_to_s : fmt("To server: %s, to client: %s", c_to_s, s_to_c);
}
-
+
event ssh_capabilities(c: connection, cookie: string, capabilities: Capabilities)
{
if ( !c?$ssh || ( c$ssh?$capabilities && c$ssh$capabilities$is_server == capabilities$is_server ) )
@@ -233,3 +231,12 @@ event ssh2_server_host_key(c: connection, key: string) &priority=5
{
generate_fingerprint(c, key);
}
+
+event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=20
+ {
+ if ( atype == Analyzer::ANALYZER_SSH )
+ {
+ set_session(c);
+ c$ssh$analyzer_id = aid;
+ }
+ }
diff --git a/scripts/base/protocols/ssl/consts.bro b/scripts/base/protocols/ssl/consts.bro
index 7a95d63cc6..35cfc7681d 100644
--- a/scripts/base/protocols/ssl/consts.bro
+++ b/scripts/base/protocols/ssl/consts.bro
@@ -109,7 +109,7 @@ export {
[7] = "client_authz",
[8] = "server_authz",
[9] = "cert_type",
- [10] = "elliptic_curves",
+ [10] = "elliptic_curves", # new name: supported_groups - draft-ietf-tls-negotiated-ff-dhe
[11] = "ec_point_formats",
[12] = "srp",
[13] = "signature_algorithms",
@@ -120,9 +120,10 @@ export {
[18] = "signed_certificate_timestamp",
[19] = "client_certificate_type",
[20] = "server_certificate_type",
- [21] = "padding", # temporary till 2016-03-12
+ [21] = "padding",
[22] = "encrypt_then_mac",
[23] = "extended_master_secret",
+ [24] = "token_binding", # temporary till 2017-02-04 - draft-ietf-tokbind-negotiation
[35] = "SessionTicket TLS",
[40] = "extended_random",
[13172] = "next_protocol_negotiation",
@@ -165,7 +166,10 @@ export {
[26] = "brainpoolP256r1",
[27] = "brainpoolP384r1",
[28] = "brainpoolP512r1",
- # draft-ietf-tls-negotiated-ff-dhe-05
+ # Temporary till 2017-03-01 - draft-ietf-tls-rfc4492bis
+ [29] = "ecdh_x25519",
+ [30] = "ecdh_x448",
+ # draft-ietf-tls-negotiated-ff-dhe-10
[256] = "ffdhe2048",
[257] = "ffdhe3072",
[258] = "ffdhe4096",
diff --git a/scripts/base/protocols/ssl/dpd.sig b/scripts/base/protocols/ssl/dpd.sig
index e238575568..2ebe1cc634 100644
--- a/scripts/base/protocols/ssl/dpd.sig
+++ b/scripts/base/protocols/ssl/dpd.sig
@@ -1,7 +1,7 @@
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
- payload /^(\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
+ payload /^((\x15\x03[\x00\x01\x02\x03]....)?\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
diff --git a/scripts/base/protocols/tunnels/dpd.sig b/scripts/base/protocols/tunnels/dpd.sig
index 0c66775f5d..9c4bddeffd 100644
--- a/scripts/base/protocols/tunnels/dpd.sig
+++ b/scripts/base/protocols/tunnels/dpd.sig
@@ -9,6 +9,6 @@ signature dpd_ayiya {
signature dpd_teredo {
ip-proto = udp
- payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
+ payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f].{7}((\x20\x01\x00\x00)).{28})|([\x60-\x6f].{23}((\x20\x01\x00\x00))).{12}/
enable "teredo"
}
diff --git a/scripts/base/utils/json.bro b/scripts/base/utils/json.bro
new file mode 100644
index 0000000000..b6d0093b58
--- /dev/null
+++ b/scripts/base/utils/json.bro
@@ -0,0 +1,105 @@
+##! Functions to assist with generating JSON data from Bro data structures.
+# We might want to implement this in the core sometime; this looks... hacky at best.
+
+@load base/utils/strings
+
+## A function to convert arbitrary Bro data into a JSON string.
+##
+## v: The value to convert to JSON. Typically a record.
+##
+## only_loggable: If v is a record, only fields with the &log attribute
+## are included in the JSON output.
+##
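+## field_escape_pattern: If a field name matches this pattern, the first match
+## is removed from the field name in the output (defaults to a leading underscore).
+##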
+## returns: a JSON formatted string.
+function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: pattern &default=/^_/): string
+ {
+ local tn = type_name(v);
+ switch ( tn )
+ {
+ case "type":
+ return "";
+
+ case "string":
+ return cat("\"", gsub(gsub(clean(v), /\\/, "\\\\"), /\"/, "\\\""), "\"");
+
+ case "port":
+ return cat(port_to_count(to_port(cat(v))));
+
+ case "addr":
+ fallthrough;
+ case "subnet":
+ return cat("\"", v, "\"");
+
+ case "int":
+ fallthrough;
+ case "count":
+ fallthrough;
+ case "time":
+ fallthrough;
+ case "double":
+ fallthrough;
+ case "bool":
+ fallthrough;
+ case "enum":
+ return cat(v);
+
+ default:
+ break;
+ }
+
+ if ( /^record/ in tn )
+ {
+ local rec_parts: string_vec = vector();
+
+ local ft = record_fields(v);
+ for ( field in ft )
+ {
+ local field_desc = ft[field];
+ # replace the escape pattern in the field.
+ if( field_escape_pattern in field )
+ field = cat(sub(field, field_escape_pattern, ""));
+ if ( field_desc?$value && (!only_loggable || field_desc$log) )
+ {
+ local onepart = cat("\"", field, "\": ", to_json(field_desc$value, only_loggable));
+ rec_parts[|rec_parts|] = onepart;
+ }
+ }
+ return cat("{", join_string_vec(rec_parts, ", "), "}");
+ }
+
+ # The remaining compound types (sets, tables, vectors) are converted by iterating over their members.
+ else if ( /^set/ in tn )
+ {
+ local set_parts: string_vec = vector();
+ local sa: set[bool] = v;
+ for ( sv in sa )
+ {
+ set_parts[|set_parts|] = to_json(sv, only_loggable);
+ }
+ return cat("[", join_string_vec(set_parts, ", "), "]");
+ }
+ else if ( /^table/ in tn )
+ {
+ local tab_parts: vector of string = vector();
+ local ta: table[bool] of any = v;
+ for ( ti in ta )
+ {
+ local ts = to_json(ti);
+ local if_quotes = (ts[0] == "\"") ? "" : "\"";
+ tab_parts[|tab_parts|] = cat(if_quotes, ts, if_quotes, ": ", to_json(ta[ti], only_loggable));
+ }
+ return cat("{", join_string_vec(tab_parts, ", "), "}");
+ }
+ else if ( /^vector/ in tn )
+ {
+ local vec_parts: string_vec = vector();
+ local va: vector of any = v;
+ for ( vi in va )
+ {
+ vec_parts[|vec_parts|] = to_json(va[vi], only_loggable);
+ }
+ return cat("[", join_string_vec(vec_parts, ", "), "]");
+ }
+
+ return "\"\"";
+ }
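+
+# A small usage sketch (the record value is illustrative only and the field
+# order of the output is not guaranteed):
+#   event bro_init()
+#       {
+#       local rec = [$host="example.com", $code=200];
+#       print to_json(rec);   # {"host": "example.com", "code": 200}
+#       }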
diff --git a/scripts/base/utils/numbers.bro b/scripts/base/utils/numbers.bro
index da8c15d7a0..d2adb49ea2 100644
--- a/scripts/base/utils/numbers.bro
+++ b/scripts/base/utils/numbers.bro
@@ -1,10 +1,26 @@
-## Extract the first integer found in the given string.
-## If no integer can be found, 0 is returned.
-function extract_count(s: string): count
+
+## Extract an integer from a string.
+##
+## s: The string to search for a number.
+##
+## get_first: Provide ``F`` to return the last number found instead of the first.
+##
+## Returns: The requested integer from the given string, or 0 if
+## no integer was found.
+function extract_count(s: string, get_first: bool &default=T): count
{
- local parts = split_string_n(s, /[0-9]+/, T, 1);
- if ( 1 in parts )
- return to_count(parts[1]);
+ local extract_num_pattern = /[0-9]+/;
+ if ( get_first )
+ {
+ local first_parts = split_string_n(s, extract_num_pattern, T, 1);
+ if ( 1 in first_parts )
+ return to_count(first_parts[1]);
+ }
else
- return 0;
+ {
+ local last_parts = split_string_all(s, extract_num_pattern);
+ if ( |last_parts| > 1 )
+ return to_count(last_parts[|last_parts|-2]);
+ }
+ return 0;
}
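+
+# For example (illustrative strings):
+#   extract_count("HTTP/1.1 213 bytes")      # first number found: 1
+#   extract_count("HTTP/1.1 213 bytes", F)   # last number found: 213
+#   extract_count("no digits here")          # nothing found: 0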
diff --git a/scripts/policy/frameworks/control/controller.bro b/scripts/policy/frameworks/control/controller.bro
index cc94767370..edef4149f9 100644
--- a/scripts/policy/frameworks/control/controller.bro
+++ b/scripts/policy/frameworks/control/controller.bro
@@ -4,7 +4,7 @@
##!
##! It's intended to be used from the command line like this::
##!
-##! bro frameworks/control/controller Control::host= Control::port= Control::cmd= [Control::arg=]
+##! bro frameworks/control/controller Control::host= Control::host_port= Control::cmd= [Control::arg=]
@load base/frameworks/control
@load base/frameworks/communication
diff --git a/scripts/policy/frameworks/files/entropy-test-all-files.bro b/scripts/policy/frameworks/files/entropy-test-all-files.bro
new file mode 100644
index 0000000000..fd02b9ecaa
--- /dev/null
+++ b/scripts/policy/frameworks/files/entropy-test-all-files.bro
@@ -0,0 +1,20 @@
+
+module Files;
+
+export {
+ redef record Files::Info += {
+ ## The information density of the contents of the file,
+ ## expressed as a number of bits per character.
+ entropy: double &log &optional;
+ };
+}
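+
+# As a rough guide (illustrative figures): uniformly random or encrypted file
+# content approaches 8.0 bits per character, plain ASCII text usually falls
+# around 4.0-5.0, and long runs of a single byte approach 0.0.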
+
+event file_new(f: fa_file)
+ {
+ Files::add_analyzer(f, Files::ANALYZER_ENTROPY);
+ }
+
+event file_entropy(f: fa_file, ent: entropy_test_result)
+ {
+ f$info$entropy = ent$entropy;
+ }
\ No newline at end of file
diff --git a/scripts/policy/frameworks/files/hash-all-files.bro b/scripts/policy/frameworks/files/hash-all-files.bro
index 74bea47bb9..f076abdd91 100644
--- a/scripts/policy/frameworks/files/hash-all-files.bro
+++ b/scripts/policy/frameworks/files/hash-all-files.bro
@@ -1,5 +1,7 @@
##! Perform MD5 and SHA1 hashing on all files.
+@load base/files/hash
+
event file_new(f: fa_file)
{
Files::add_analyzer(f, Files::ANALYZER_MD5);
diff --git a/scripts/policy/frameworks/intel/seen/x509.bro b/scripts/policy/frameworks/intel/seen/x509.bro
index 3a2859b6d5..9dcbc3edb9 100644
--- a/scripts/policy/frameworks/intel/seen/x509.bro
+++ b/scripts/policy/frameworks/intel/seen/x509.bro
@@ -26,3 +26,14 @@ event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certifi
$where=X509::IN_CERT]);
}
}
+
+event file_hash(f: fa_file, kind: string, hash: string)
+ {
+ if ( ! f?$info || ! f$info?$x509 || kind != "sha1" )
+ return;
+
+ Intel::seen([$indicator=hash,
+ $indicator_type=Intel::CERT_HASH,
+ $f=f,
+ $where=X509::IN_CERT]);
+ }
diff --git a/scripts/policy/frameworks/software/windows-version-detection.bro b/scripts/policy/frameworks/software/windows-version-detection.bro
index 0162dddf75..7ed1ab359e 100644
--- a/scripts/policy/frameworks/software/windows-version-detection.bro
+++ b/scripts/policy/frameworks/software/windows-version-detection.bro
@@ -53,7 +53,7 @@ export {
event HTTP::log_http(rec: HTTP::Info) &priority=5
{
- if ( rec?$host && rec?$user_agent && rec$host == "crl.microsoft.com" &&
+ if ( rec?$host && rec?$user_agent && /crl.microsoft.com/ in rec$host &&
/Microsoft-CryptoAPI\// in rec$user_agent )
{
if ( rec$user_agent !in crypto_api_mapping )
diff --git a/scripts/policy/protocols/conn/vlan-logging.bro b/scripts/policy/protocols/conn/vlan-logging.bro
new file mode 100644
index 0000000000..e0692c5ab5
--- /dev/null
+++ b/scripts/policy/protocols/conn/vlan-logging.bro
@@ -0,0 +1,26 @@
+##! This script adds VLAN information to the connection logs.
+
+@load base/protocols/conn
+
+module Conn;
+
+redef record Info += {
+ ## The outer VLAN for this connection, if applicable.
+ vlan: int &log &optional;
+
+ ## The inner VLAN for this connection, if applicable.
+ inner_vlan: int &log &optional;
+};
+
+# Add the VLAN information to the Conn::Info structure after the connection
+# has been removed. This ensures it's only done once, and is done before the
+# connection information is written to the log.
+event connection_state_remove(c: connection)
+ {
+ if ( c?$vlan )
+ c$conn$vlan = c$vlan;
+
+ if ( c?$inner_vlan )
+ c$conn$inner_vlan = c$inner_vlan;
+ }
+
diff --git a/scripts/policy/protocols/conn/weirds.bro b/scripts/policy/protocols/conn/weirds.bro
index 9d6730819c..8710635418 100644
--- a/scripts/policy/protocols/conn/weirds.bro
+++ b/scripts/policy/protocols/conn/weirds.bro
@@ -19,12 +19,12 @@ export {
};
}
-event rexmit_inconsistency(c: connection, t1: string, t2: string)
+event rexmit_inconsistency(c: connection, t1: string, t2: string, tcp_flags: string)
{
NOTICE([$note=Retransmission_Inconsistency,
$conn=c,
- $msg=fmt("%s rexmit inconsistency (%s) (%s)",
- id_string(c$id), t1, t2),
+ $msg=fmt("%s rexmit inconsistency (%s) (%s) [%s]",
+ id_string(c$id), t1, t2, tcp_flags),
$identifier=fmt("%s", c$id)]);
}
diff --git a/scripts/policy/protocols/http/software-browser-plugins.bro b/scripts/policy/protocols/http/software-browser-plugins.bro
index ab4bb93b15..c43e19dca2 100644
--- a/scripts/policy/protocols/http/software-browser-plugins.bro
+++ b/scripts/policy/protocols/http/software-browser-plugins.bro
@@ -1,4 +1,4 @@
-##! Detect browser plugins as they leak through requests to Omniture
+##! Detect browser plugins as they leak through requests to Omniture
##! advertising servers.
@load base/protocols/http
@@ -10,8 +10,10 @@ export {
redef record Info += {
## Indicates if the server is an omniture advertising server.
omniture: bool &default=F;
+ ## The unparsed Flash version, if detected.
+ flash_version: string &optional;
};
-
+
redef enum Software::Type += {
## Identifier for browser plugins in the software framework.
BROWSER_PLUGIN
@@ -22,12 +24,20 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
{
if ( is_orig )
{
- if ( name == "X-FLASH-VERSION" )
+ switch ( name )
{
- # Flash doesn't include it's name so we'll add it here since it
- # simplifies the version parsing.
- value = cat("Flash/", value);
- Software::found(c$id, [$unparsed_version=value, $host=c$id$orig_h, $software_type=BROWSER_PLUGIN]);
+ case "X-FLASH-VERSION":
+ # Flash doesn't include its name so we'll add it here since it
+ # simplifies the version parsing.
+ c$http$flash_version = cat("Flash/", value);
+ break;
+
+ case "X-REQUESTED-WITH":
+ # This header is usually used to indicate AJAX requests (XMLHttpRequest),
+ # but Chrome also uses this header to indicate the use of Flash.
+ if ( /Flash/ in value )
+ c$http$flash_version = value;
+ break;
}
}
else
@@ -38,9 +48,26 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
}
}
+event http_message_done(c: connection, is_orig: bool, stat: http_message_stat)
+ {
+ # If a Flash version was detected, log it, taking the user agent into account.
+ if ( is_orig && c$http?$flash_version )
+ {
+ # AdobeAIR contains a separate Flash, which should be emphasized.
+ # Note: We assume that the user agent header was not reset by the app.
+ if( c$http?$user_agent )
+ {
+ if ( /AdobeAIR/ in c$http$user_agent )
+ c$http$flash_version = cat("AdobeAIR-", c$http$flash_version);
+ }
+
+ Software::found(c$id, [$unparsed_version=c$http$flash_version, $host=c$id$orig_h, $software_type=BROWSER_PLUGIN]);
+ }
+ }
+
event log_http(rec: Info)
{
- # We only want to inspect requests that were sent to omniture advertising
+ # We only want to inspect requests that were sent to omniture advertising
# servers.
if ( rec$omniture && rec?$uri )
{
@@ -48,11 +75,11 @@ event log_http(rec: Info)
local parts = split_string_n(rec$uri, /&p=([^&]{5,});&/, T, 1);
if ( 1 in parts )
{
- # We do sub_bytes here just to remove the extra extracted
+ # We do sub_bytes here just to remove the extra extracted
# characters from the regex split above.
local sw = sub_bytes(parts[1], 4, |parts[1]|-5);
local plugins = split_string(sw, /[[:blank:]]*;[[:blank:]]*/);
-
+
for ( i in plugins )
Software::found(rec$id, [$unparsed_version=plugins[i], $host=rec$id$orig_h, $software_type=BROWSER_PLUGIN]);
}
diff --git a/scripts/site/local.bro b/scripts/site/local.bro
index afe1d9d4f2..8c6e495a07 100644
--- a/scripts/site/local.bro
+++ b/scripts/site/local.bro
@@ -1,4 +1,4 @@
-##! Local site policy. Customize as appropriate.
+##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!
@@ -11,16 +11,16 @@
# Load the scan detection script.
@load misc/scan
-# Log some information about web applications being used by users
+# Log some information about web applications being used by users
# on your network.
@load misc/app-stats
-# Detect traceroute being run on the network.
+# Detect traceroute being run on the network.
@load misc/detect-traceroute
# Generate notices when vulnerable versions of software are discovered.
# The default is to only monitor software found in the address space defined
-# as "local". Refer to the software framework's documentation for more
+# as "local". Refer to the software framework's documentation for more
# information.
@load frameworks/software/vulnerable
@@ -35,12 +35,12 @@
@load protocols/smtp/software
@load protocols/ssh/software
@load protocols/http/software
-# The detect-webapps script could possibly cause performance trouble when
+# The detect-webapps script could possibly cause performance trouble when
# running on live traffic. Enable it cautiously.
#@load protocols/http/detect-webapps
-# This script detects DNS results pointing toward your Site::local_nets
-# where the name is not part of your local DNS zone and is being hosted
+# This script detects DNS results pointing toward your Site::local_nets
+# where the name is not part of your local DNS zone and is being hosted
# externally. Requires that the Site::local_zones variable is defined.
@load protocols/dns/detect-external-names
@@ -62,7 +62,7 @@
# certificate notary service; see http://notary.icsi.berkeley.edu .
# @load protocols/ssl/notary
-# If you have libGeoIP support built in, do some geographic detections and
+# If you have libGeoIP support built in, do some geographic detections and
# logging for SSH traffic.
@load protocols/ssh/geo-data
# Detect hosts doing SSH bruteforce attacks.
@@ -84,3 +84,7 @@
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
# this might impact performance a bit.
# @load policy/protocols/ssl/heartbleed
+
+# Uncomment the following line to enable logging of connection VLANs. Enabling
+# this adds two VLAN fields to the conn.log file.
+# @load policy/protocols/conn/vlan-logging
diff --git a/scripts/test-all-policy.bro b/scripts/test-all-policy.bro
index 2ffdda8e6a..f85fdb58b0 100644
--- a/scripts/test-all-policy.bro
+++ b/scripts/test-all-policy.bro
@@ -29,6 +29,7 @@
@load frameworks/intel/seen/where-locations.bro
@load frameworks/intel/seen/x509.bro
@load frameworks/files/detect-MHR.bro
+@load frameworks/files/entropy-test-all-files.bro
#@load frameworks/files/extract-all-files.bro
@load frameworks/files/hash-all-files.bro
@load frameworks/packet-filter/shunt.bro
@@ -62,6 +63,7 @@
@load misc/trim-trace-file.bro
@load protocols/conn/known-hosts.bro
@load protocols/conn/known-services.bro
+@load protocols/conn/vlan-logging.bro
@load protocols/conn/weirds.bro
@load protocols/dhcp/known-devices-and-hostnames.bro
@load protocols/dns/auth-addl.bro
diff --git a/src/3rdparty b/src/3rdparty
index 6a429e79bb..f1eaca0e08 160000
--- a/src/3rdparty
+++ b/src/3rdparty
@@ -1 +1 @@
-Subproject commit 6a429e79bbaf0fcc11eff5f639bfb9d1f62be6f2
+Subproject commit f1eaca0e085a8b37ec6a32c7e1b0e9571414a2e3
diff --git a/src/Attr.cc b/src/Attr.cc
index dad51c608e..14b00bd0d7 100644
--- a/src/Attr.cc
+++ b/src/Attr.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Attr.h"
#include "Expr.h"
@@ -375,12 +375,33 @@ void Attributes::CheckAttr(Attr* a)
case ATTR_EXPIRE_READ:
case ATTR_EXPIRE_WRITE:
case ATTR_EXPIRE_CREATE:
+ {
if ( type->Tag() != TYPE_TABLE )
{
Error("expiration only applicable to tables");
break;
}
+ int num_expires = 0;
+ if ( attrs )
+ {
+ loop_over_list(*attrs, i)
+ {
+ Attr* a = (*attrs)[i];
+ if ( a->Tag() == ATTR_EXPIRE_READ ||
+ a->Tag() == ATTR_EXPIRE_WRITE ||
+ a->Tag() == ATTR_EXPIRE_CREATE )
+ num_expires++;
+ }
+ }
+
+ if ( num_expires > 1 )
+ {
+ Error("set/table can only have one of &read_expire, &write_expire, &create_expire");
+ break;
+ }
+ }
+
#if 0
//### not easy to test this w/o knowing the ID.
if ( ! IsGlobal() )
diff --git a/src/Attr.h b/src/Attr.h
index 63f2524c21..0960a9d5f9 100644
--- a/src/Attr.h
+++ b/src/Attr.h
@@ -93,7 +93,7 @@ public:
void RemoveAttr(attr_tag t);
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
void DescribeReST(ODesc* d) const;
attr_list* Attrs() { return attrs; }
diff --git a/src/Base64.cc b/src/Base64.cc
index e76621e634..3644740c7e 100644
--- a/src/Base64.cc
+++ b/src/Base64.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include "Base64.h"
#include
@@ -82,7 +82,7 @@ int* Base64Converter::InitBase64Table(const string& alphabet)
return base64_table;
}
-Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string& arg_alphabet)
+Base64Converter::Base64Converter(Connection* arg_conn, const string& arg_alphabet)
{
if ( arg_alphabet.size() > 0 )
{
@@ -98,7 +98,7 @@ Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string&
base64_group_next = 0;
base64_padding = base64_after_padding = 0;
errored = 0;
- analyzer = arg_analyzer;
+ conn = arg_conn;
}
Base64Converter::~Base64Converter()
@@ -216,9 +216,9 @@ int Base64Converter::Done(int* pblen, char** pbuf)
}
-BroString* decode_base64(const BroString* s, const BroString* a)
+BroString* decode_base64(const BroString* s, const BroString* a, Connection* conn)
{
- if ( a && a->Len() != 64 )
+ if ( a && a->Len() != 0 && a->Len() != 64 )
{
reporter->Error("base64 decoding alphabet is not 64 characters: %s",
a->CheckString());
@@ -229,7 +229,7 @@ BroString* decode_base64(const BroString* s, const BroString* a)
int rlen2, rlen = buf_len;
char* rbuf2, *rbuf = new char[rlen];
- Base64Converter dec(0, a ? a->CheckString() : "");
+ Base64Converter dec(conn, a ? a->CheckString() : "");
if ( dec.Decode(s->Len(), (const char*) s->Bytes(), &rlen, &rbuf) == -1 )
goto err;
@@ -248,9 +248,9 @@ err:
return 0;
}
-BroString* encode_base64(const BroString* s, const BroString* a)
+BroString* encode_base64(const BroString* s, const BroString* a, Connection* conn)
{
- if ( a && a->Len() != 64 )
+ if ( a && a->Len() != 0 && a->Len() != 64 )
{
reporter->Error("base64 alphabet is not 64 characters: %s",
a->CheckString());
@@ -259,7 +259,7 @@ BroString* encode_base64(const BroString* s, const BroString* a)
char* outbuf = 0;
int outlen = 0;
- Base64Converter enc(0, a ? a->CheckString() : "");
+ Base64Converter enc(conn, a ? a->CheckString() : "");
enc.Encode(s->Len(), (const unsigned char*) s->Bytes(), &outlen, &outbuf);
return new BroString(1, (u_char*)outbuf, outlen);
diff --git a/src/Base64.h b/src/Base64.h
index d7e4384ac5..fb030915ef 100644
--- a/src/Base64.h
+++ b/src/Base64.h
@@ -8,15 +8,17 @@
#include "util.h"
#include "BroString.h"
#include "Reporter.h"
-#include "analyzer/Analyzer.h"
+#include "Conn.h"
// Maybe we should have a base class for generic decoders?
class Base64Converter {
public:
- // <analyzer> is used for error reporting, and it should be zero when
- // the decoder is called by the built-in function decode_base64() or encode_base64().
- // Empty alphabet indicates the default base64 alphabet.
- Base64Converter(analyzer::Analyzer* analyzer, const string& alphabet = "");
+ // <conn> is used for error reporting. If it is set to zero (as,
+ // e.g., done by the built-in functions decode_base64() and
+ // encode_base64()), encoding-errors will go to Reporter instead of
+ // Weird. Usage errors go to Reporter in any case. Empty alphabet
+ // indicates the default base64 alphabet.
+ Base64Converter(Connection* conn, const string& alphabet = "");
~Base64Converter();
// A note on Decode():
@@ -42,8 +44,8 @@ public:
void IllegalEncoding(const char* msg)
{
// strncpy(error_msg, msg, sizeof(error_msg));
- if ( analyzer )
- analyzer->Weird("base64_illegal_encoding", msg);
+ if ( conn )
+ conn->Weird("base64_illegal_encoding", msg);
else
reporter->Error("%s", msg);
}
@@ -63,11 +65,11 @@ protected:
int base64_after_padding;
int* base64_table;
int errored; // if true, we encountered an error - skip further processing
- analyzer::Analyzer* analyzer;
+ Connection* conn;
};
-BroString* decode_base64(const BroString* s, const BroString* a = 0);
-BroString* encode_base64(const BroString* s, const BroString* a = 0);
+BroString* decode_base64(const BroString* s, const BroString* a = 0, Connection* conn = 0);
+BroString* encode_base64(const BroString* s, const BroString* a = 0, Connection* conn = 0);
#endif /* base64_h */
diff --git a/src/BroString.cc b/src/BroString.cc
index 086a7f8dde..c86e14cf37 100644
--- a/src/BroString.cc
+++ b/src/BroString.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/CCL.cc b/src/CCL.cc
index 6c4ec5ea2e..a725257c75 100644
--- a/src/CCL.cc
+++ b/src/CCL.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "CCL.h"
#include "RE.h"
diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index bdbd3839ce..9a807b3182 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -223,16 +223,16 @@ endmacro(COLLECT_HEADERS _var)
cmake_policy(POP)
-# define a command that's used to run the make_dbg_constants.pl script
+# define a command that's used to run the make_dbg_constants.py script
# building the bro binary depends on the outputs of this script
add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/DebugCmdConstants.h
${CMAKE_CURRENT_BINARY_DIR}/DebugCmdInfoConstants.cc
- COMMAND ${PERL_EXECUTABLE}
- ARGS ${CMAKE_CURRENT_SOURCE_DIR}/make_dbg_constants.pl
+ COMMAND ${PYTHON_EXECUTABLE}
+ ARGS ${CMAKE_CURRENT_SOURCE_DIR}/make_dbg_constants.py
${CMAKE_CURRENT_SOURCE_DIR}/DebugCmdInfoConstants.in
- DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/make_dbg_constants.pl
+ DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/make_dbg_constants.py
${CMAKE_CURRENT_SOURCE_DIR}/DebugCmdInfoConstants.in
- COMMENT "[Perl] Processing debug commands"
+ COMMENT "[Python] Processing debug commands"
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
diff --git a/src/ChunkedIO.cc b/src/ChunkedIO.cc
index 1e581806d6..0c402dc2af 100644
--- a/src/ChunkedIO.cc
+++ b/src/ChunkedIO.cc
@@ -9,7 +9,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "ChunkedIO.h"
#include "NetVar.h"
#include "RemoteSerializer.h"
@@ -709,7 +709,7 @@ bool ChunkedIOSSL::Init()
{
SSL_load_error_strings();
- ctx = SSL_CTX_new(SSLv3_method());
+ ctx = SSL_CTX_new(SSLv23_method());
if ( ! ctx )
{
Log("can't create SSL context");
diff --git a/src/ChunkedIO.h b/src/ChunkedIO.h
index afb239b325..238bea5044 100644
--- a/src/ChunkedIO.h
+++ b/src/ChunkedIO.h
@@ -3,7 +3,7 @@
#ifndef CHUNKEDIO_H
#define CHUNKEDIO_H
-#include "config.h"
+#include "bro-config.h"
#include "List.h"
#include "util.h"
#include "Flare.h"
diff --git a/src/CompHash.cc b/src/CompHash.cc
index 5a972f6016..2e28bff78e 100644
--- a/src/CompHash.cc
+++ b/src/CompHash.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "CompHash.h"
#include "Val.h"
diff --git a/src/Conn.cc b/src/Conn.cc
index 5c1a58a7b1..3f6757d89c 100644
--- a/src/Conn.cc
+++ b/src/Conn.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
@@ -115,7 +115,8 @@ unsigned int Connection::external_connections = 0;
IMPLEMENT_SERIAL(Connection, SER_CONNECTION);
Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
- uint32 flow, const EncapsulationStack* arg_encap)
+ uint32 flow, uint32 arg_vlan, uint32 arg_inner_vlan,
+ const EncapsulationStack* arg_encap)
{
sessions = s;
key = k;
@@ -131,6 +132,9 @@ Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
saw_first_orig_packet = 1;
saw_first_resp_packet = 0;
+ vlan = arg_vlan;
+ inner_vlan = arg_inner_vlan;
+
conn_val = 0;
login_conn = 0;
@@ -378,6 +382,12 @@ RecordVal* Connection::BuildConnVal()
if ( encapsulation && encapsulation->Depth() > 0 )
conn_val->Assign(8, encapsulation->GetVectorVal());
+
+ if ( vlan != 0 )
+ conn_val->Assign(9, new Val(vlan, TYPE_INT));
+
+ if ( inner_vlan != 0 )
+ conn_val->Assign(10, new Val(inner_vlan, TYPE_INT));
}
if ( root_analyzer )
diff --git a/src/Conn.h b/src/Conn.h
index 62a1aa9613..11dbb11abe 100644
--- a/src/Conn.h
+++ b/src/Conn.h
@@ -56,7 +56,7 @@ namespace analyzer { class Analyzer; }
class Connection : public BroObj {
public:
Connection(NetSessions* s, HashKey* k, double t, const ConnID* id,
- uint32 flow, const EncapsulationStack* arg_encap);
+ uint32 flow, uint32 vlan, uint32 inner_vlan, const EncapsulationStack* arg_encap);
virtual ~Connection();
// Invoked when an encapsulation is discovered. It records the
@@ -201,7 +201,7 @@ public:
bool IsPersistent() { return persistent; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
void IDString(ODesc* d) const;
TimerMgr* GetTimerMgr() const;
@@ -294,7 +294,8 @@ protected:
IPAddr resp_addr;
uint32 orig_port, resp_port; // in network order
TransportProto proto;
- uint32 orig_flow_label, resp_flow_label; // most recent IPv6 flow labels
+ uint32 orig_flow_label, resp_flow_label; // most recent IPv6 flow labels
+ uint32 vlan, inner_vlan; // VLAN this connection traverses, if available
double start_time, last_time;
double inactivity_timeout;
RecordVal* conn_val;
@@ -335,7 +336,7 @@ public:
{ Init(arg_conn, arg_timer, arg_do_expire); }
virtual ~ConnectionTimer();
- void Dispatch(double t, int is_expire);
+ void Dispatch(double t, int is_expire) override;
protected:
ConnectionTimer() {}
diff --git a/src/ConvertUTF.h b/src/ConvertUTF.h
index 9be51e57f1..4eb7900e9f 100644
--- a/src/ConvertUTF.h
+++ b/src/ConvertUTF.h
@@ -91,6 +91,8 @@
targetEnd. Note: the end pointers are *after* the last item: e.g.
*(sourceEnd - 1) is the last item.
+ !!! NOTE: The source and end pointers must be aligned properly !!!
+
The return result indicates whether the conversion was successful,
and if not, whether the problem was in the source or target buffers.
(Only the first encountered problem is indicated.)
@@ -199,18 +201,22 @@ ConversionResult ConvertUTF8toUTF32(
const UTF8** sourceStart, const UTF8* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
+/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF16toUTF8 (
const UTF16** sourceStart, const UTF16* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags);
+/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF32toUTF8 (
const UTF32** sourceStart, const UTF32* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags);
+/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF16toUTF32 (
const UTF16** sourceStart, const UTF16* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
+/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF32toUTF16 (
const UTF32** sourceStart, const UTF32* sourceEnd,
UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags);
diff --git a/src/DFA.cc b/src/DFA.cc
index 514183165a..e7b2279ed5 100644
--- a/src/DFA.cc
+++ b/src/DFA.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/DNS_Mgr.cc b/src/DNS_Mgr.cc
index 99947e3531..7040b9a882 100644
--- a/src/DNS_Mgr.cc
+++ b/src/DNS_Mgr.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/DbgBreakpoint.cc b/src/DbgBreakpoint.cc
index 9000d89077..c573a8d3b8 100644
--- a/src/DbgBreakpoint.cc
+++ b/src/DbgBreakpoint.cc
@@ -1,6 +1,6 @@
// Implementation of breakpoints.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/DbgHelp.cc b/src/DbgHelp.cc
index accf7ce6f6..6bbf9c6ecb 100644
--- a/src/DbgHelp.cc
+++ b/src/DbgHelp.cc
@@ -1,5 +1,5 @@
// Bro Debugger Help
-#include "config.h"
+#include "bro-config.h"
#include "Debug.h"
diff --git a/src/DbgWatch.cc b/src/DbgWatch.cc
index 74ac26cb73..c34144dc1f 100644
--- a/src/DbgWatch.cc
+++ b/src/DbgWatch.cc
@@ -1,6 +1,6 @@
// Implementation of watches
-#include "config.h"
+#include "bro-config.h"
#include "Debug.h"
#include "DbgWatch.h"
diff --git a/src/Debug.cc b/src/Debug.cc
index 09e8810edb..7f1250cf49 100644
--- a/src/Debug.cc
+++ b/src/Debug.cc
@@ -1,6 +1,6 @@
// Debugging support for Bro policy files.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/DebugCmds.cc b/src/DebugCmds.cc
index bfb4d6ecc8..4e856b00f5 100644
--- a/src/DebugCmds.cc
+++ b/src/DebugCmds.cc
@@ -1,7 +1,7 @@
// Support routines to help deal with Bro debugging commands and
// implementation of most commands.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/Desc.cc b/src/Desc.cc
index ebe5fb616c..4654454129 100644
--- a/src/Desc.cc
+++ b/src/Desc.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
@@ -351,3 +351,24 @@ void ODesc::Clear()
}
}
+bool ODesc::PushType(const BroType* type)
+ {
+ auto res = encountered_types.insert(type);
+ return std::get<1>(res);
+ }
+
+bool ODesc::PopType(const BroType* type)
+ {
+ size_t res = encountered_types.erase(type);
+ return (res == 1);
+ }
+
+bool ODesc::FindType(const BroType* type)
+ {
+ auto res = encountered_types.find(type);
+
+ if ( res != encountered_types.end() )
+ return true;
+
+ return false;
+ }
diff --git a/src/Desc.h b/src/Desc.h
index cc6ba3a662..df850378c4 100644
--- a/src/Desc.h
+++ b/src/Desc.h
@@ -23,6 +23,7 @@ typedef enum {
class BroFile;
class IPAddr;
class IPPrefix;
+class BroType;
class ODesc {
public:
@@ -140,6 +141,12 @@ public:
void Clear();
+ // Used to determine recursive types. Records push their types on here;
+ // if the same type (by address) is re-encountered, processing aborts.
+ bool PushType(const BroType* type);
+ bool PopType(const BroType* type);
+ bool FindType(const BroType* type);
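+ //
+ // A hypothetical caller (sketch only):
+ //   if ( ! d->PushType(t) )
+ //       return;            // t is already being described; stop the recursion
+ //   ...describe t's fields...
+ //   d->PopType(t);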
+
protected:
void Indent();
@@ -190,6 +197,8 @@ protected:
int do_flush;
int include_stats;
int indent_with_spaces;
+
+ std::set<const BroType*> encountered_types;
};
#endif
diff --git a/src/Dict.cc b/src/Dict.cc
index cd7792b539..1d32eccde3 100644
--- a/src/Dict.cc
+++ b/src/Dict.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#ifdef HAVE_MEMORY_H
#include <memory.h>
diff --git a/src/Discard.cc b/src/Discard.cc
index edfeea1408..2a20c897aa 100644
--- a/src/Discard.cc
+++ b/src/Discard.cc
@@ -2,7 +2,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "Net.h"
#include "Var.h"
diff --git a/src/EquivClass.cc b/src/EquivClass.cc
index 6ab667b146..7f54f07060 100644
--- a/src/EquivClass.cc
+++ b/src/EquivClass.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "EquivClass.h"
diff --git a/src/Event.cc b/src/Event.cc
index 82ea80988e..89e745361f 100644
--- a/src/Event.cc
+++ b/src/Event.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Event.h"
#include "Func.h"
diff --git a/src/Expr.cc b/src/Expr.cc
index ba44149ec3..9927ca52ec 100644
--- a/src/Expr.cc
+++ b/src/Expr.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Expr.h"
#include "Event.h"
diff --git a/src/Expr.h b/src/Expr.h
index 97092c1315..fb533b1469 100644
--- a/src/Expr.h
+++ b/src/Expr.h
@@ -220,18 +220,18 @@ public:
ID* Id() const { return id; }
- Val* Eval(Frame* f) const;
- void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
- Expr* MakeLvalue();
- int IsPure() const;
+ Val* Eval(Frame* f) const override;
+ void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN) override;
+ Expr* MakeLvalue() override;
+ int IsPure() const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
NameExpr() { id = 0; }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(NameExpr);
@@ -246,15 +246,15 @@ public:
Val* Value() const { return val; }
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
ConstExpr() { val = 0; }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(ConstExpr);
Val* val;
@@ -267,11 +267,11 @@ public:
// UnaryExpr::Eval correctly handles vector types. Any child
// class that overrides Eval() should be modified to handle
// vectors correctly as necessary.
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- int IsPure() const;
+ int IsPure() const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
@@ -280,7 +280,7 @@ protected:
UnaryExpr(BroExprTag arg_tag, Expr* arg_op);
virtual ~UnaryExpr();
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
// Returns the expression folded using the given constant.
virtual Val* Fold(Val* v) const;
@@ -295,14 +295,14 @@ public:
Expr* Op1() const { return op1; }
Expr* Op2() const { return op2; }
- int IsPure() const;
+ int IsPure() const override;
// BinaryExpr::Eval correctly handles vector types. Any child
// class that overrides Eval() should be modified to handle
// vectors correctly as necessary.
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
@@ -340,7 +340,7 @@ protected:
// operands and also set expression's type).
void PromoteType(TypeTag t, bool is_vector);
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(BinaryExpr);
@@ -351,13 +351,13 @@ protected:
class CloneExpr : public UnaryExpr {
public:
CloneExpr(Expr* op);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
CloneExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(CloneExpr);
};
@@ -366,9 +366,9 @@ class IncrExpr : public UnaryExpr {
public:
IncrExpr(BroExprTag tag, Expr* op);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
Val* DoSingleEval(Frame* f, Val* v) const;
- int IsPure() const;
+ int IsPure() const override;
protected:
friend class Expr;
@@ -385,7 +385,7 @@ protected:
friend class Expr;
NotExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(NotExpr);
};
@@ -398,7 +398,7 @@ protected:
friend class Expr;
PosExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(PosExpr);
};
@@ -411,7 +411,7 @@ protected:
friend class Expr;
NegExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(NegExpr);
};
@@ -419,20 +419,20 @@ protected:
class SizeExpr : public UnaryExpr {
public:
SizeExpr(Expr* op);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
SizeExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(SizeExpr);
};
class AddExpr : public BinaryExpr {
public:
AddExpr(Expr* op1, Expr* op2);
- void Canonicize();
+ void Canonicize() override;
protected:
friend class Expr;
@@ -445,7 +445,7 @@ protected:
class AddToExpr : public BinaryExpr {
public:
AddToExpr(Expr* op1, Expr* op2);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
@@ -457,7 +457,7 @@ protected:
class RemoveFromExpr : public BinaryExpr {
public:
RemoveFromExpr(Expr* op1, Expr* op2);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
@@ -481,7 +481,7 @@ protected:
class TimesExpr : public BinaryExpr {
public:
TimesExpr(Expr* op1, Expr* op2);
- void Canonicize();
+ void Canonicize() override;
protected:
friend class Expr;
@@ -499,7 +499,7 @@ protected:
friend class Expr;
DivideExpr() { }
- Val* AddrFold(Val* v1, Val* v2) const;
+ Val* AddrFold(Val* v1, Val* v2) const override;
DECLARE_SERIAL(DivideExpr);
@@ -520,7 +520,7 @@ class BoolExpr : public BinaryExpr {
public:
BoolExpr(BroExprTag tag, Expr* op1, Expr* op2);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
Val* DoSingleEval(Frame* f, Val* v1, Expr* op2) const;
protected:
@@ -533,13 +533,13 @@ protected:
class EqExpr : public BinaryExpr {
public:
EqExpr(BroExprTag tag, Expr* op1, Expr* op2);
- void Canonicize();
+ void Canonicize() override;
protected:
friend class Expr;
EqExpr() { }
- Val* Fold(Val* v1, Val* v2) const;
+ Val* Fold(Val* v1, Val* v2) const override;
DECLARE_SERIAL(EqExpr);
};
@@ -547,7 +547,7 @@ protected:
class RelExpr : public BinaryExpr {
public:
RelExpr(BroExprTag tag, Expr* op1, Expr* op2);
- void Canonicize();
+ void Canonicize() override;
protected:
friend class Expr;
@@ -565,16 +565,16 @@ public:
const Expr* Op2() const { return op2; }
const Expr* Op3() const { return op3; }
- Val* Eval(Frame* f) const;
- int IsPure() const;
+ Val* Eval(Frame* f) const override;
+ int IsPure() const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
CondExpr() { op1 = op2 = op3 = 0; }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(CondExpr);
@@ -587,8 +587,8 @@ class RefExpr : public UnaryExpr {
public:
RefExpr(Expr* op);
- void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
- Expr* MakeLvalue();
+ void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN) override;
+ Expr* MakeLvalue() override;
protected:
friend class Expr;
@@ -604,12 +604,12 @@ public:
AssignExpr(Expr* op1, Expr* op2, int is_init, Val* val = 0, attr_list* attrs = 0);
virtual ~AssignExpr() { Unref(val); }
- Val* Eval(Frame* f) const;
- void EvalIntoAggregate(const BroType* t, Val* aggr, Frame* f) const;
- BroType* InitType() const;
- int IsRecordElement(TypeDecl* td) const;
- Val* InitVal(const BroType* t, Val* aggr) const;
- int IsPure() const;
+ Val* Eval(Frame* f) const override;
+ void EvalIntoAggregate(const BroType* t, Val* aggr, Frame* f) const override;
+ BroType* InitType() const override;
+ int IsRecordElement(TypeDecl* td) const override;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
+ int IsPure() const override;
protected:
friend class Expr;
@@ -628,28 +628,28 @@ class IndexExpr : public BinaryExpr {
public:
IndexExpr(Expr* op1, ListExpr* op2, bool is_slice = false);
- int CanAdd() const;
- int CanDel() const;
+ int CanAdd() const override;
+ int CanDel() const override;
- void Add(Frame* f);
- void Delete(Frame* f);
+ void Add(Frame* f) override;
+ void Delete(Frame* f) override;
- void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
- Expr* MakeLvalue();
+ void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN) override;
+ Expr* MakeLvalue() override;
// Need to override Eval since it can take a vector arg but does
// not necessarily return a vector.
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
IndexExpr() { }
- Val* Fold(Val* v1, Val* v2) const;
+ Val* Fold(Val* v1, Val* v2) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(IndexExpr);
};
@@ -662,20 +662,20 @@ public:
int Field() const { return field; }
const char* FieldName() const { return field_name; }
- int CanDel() const;
+ int CanDel() const override;
- void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
- void Delete(Frame* f);
+ void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN) override;
+ void Delete(Frame* f) override;
- Expr* MakeLvalue();
+ Expr* MakeLvalue() override;
protected:
friend class Expr;
FieldExpr() { field_name = 0; td = 0; }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(FieldExpr);
@@ -697,9 +697,9 @@ protected:
friend class Expr;
HasFieldExpr() { field_name = 0; }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(HasFieldExpr);
@@ -716,10 +716,10 @@ protected:
friend class Expr;
RecordConstructorExpr() { }
- Val* InitVal(const BroType* t, Val* aggr) const;
- Val* Fold(Val* v) const;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
+ Val* Fold(Val* v) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(RecordConstructorExpr);
};
@@ -732,15 +732,15 @@ public:
Attributes* Attrs() { return attrs; }
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
TableConstructorExpr() { }
- Val* InitVal(const BroType* t, Val* aggr) const;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(TableConstructorExpr);
@@ -755,15 +755,15 @@ public:
Attributes* Attrs() { return attrs; }
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
SetConstructorExpr() { }
- Val* InitVal(const BroType* t, Val* aggr) const;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(SetConstructorExpr);
@@ -774,15 +774,15 @@ class VectorConstructorExpr : public UnaryExpr {
public:
VectorConstructorExpr(ListExpr* constructor_list, BroType* arg_type = 0);
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
protected:
friend class Expr;
VectorConstructorExpr() { }
- Val* InitVal(const BroType* t, Val* aggr) const;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(VectorConstructorExpr);
};
@@ -793,14 +793,14 @@ public:
const char* FieldName() const { return field_name.c_str(); }
- void EvalIntoAggregate(const BroType* t, Val* aggr, Frame* f) const;
- int IsRecordElement(TypeDecl* td) const;
+ void EvalIntoAggregate(const BroType* t, Val* aggr, Frame* f) const override;
+ int IsRecordElement(TypeDecl* td) const override;
protected:
friend class Expr;
FieldAssignExpr() { }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(FieldAssignExpr);
@@ -816,7 +816,7 @@ protected:
ArithCoerceExpr() { }
Val* FoldSingleVal(Val* v, InternalTypeTag t) const;
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(ArithCoerceExpr);
};
@@ -830,8 +830,8 @@ protected:
friend class Expr;
RecordCoerceExpr() { map = 0; }
- Val* InitVal(const BroType* t, Val* aggr) const;
- Val* Fold(Val* v) const;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(RecordCoerceExpr);
@@ -850,7 +850,7 @@ protected:
friend class Expr;
TableCoerceExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(TableCoerceExpr);
};
@@ -864,7 +864,7 @@ protected:
friend class Expr;
VectorCoerceExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(VectorCoerceExpr);
};
@@ -879,7 +879,7 @@ protected:
friend class Expr;
FlattenExpr() { }
- Val* Fold(Val* v) const;
+ Val* Fold(Val* v) const override;
DECLARE_SERIAL(FlattenExpr);
@@ -907,20 +907,20 @@ public:
ScheduleExpr(Expr* when, EventExpr* event);
~ScheduleExpr();
- int IsPure() const;
+ int IsPure() const override;
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
Expr* When() const { return when; }
EventExpr* Event() const { return event; }
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
ScheduleExpr() { when = 0; event = 0; }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(ScheduleExpr);
@@ -936,7 +936,7 @@ protected:
friend class Expr;
InExpr() { }
- Val* Fold(Val* v1, Val* v2) const;
+ Val* Fold(Val* v1, Val* v2) const override;
DECLARE_SERIAL(InExpr);
@@ -950,17 +950,17 @@ public:
Expr* Func() const { return func; }
ListExpr* Args() const { return args; }
- int IsPure() const;
+ int IsPure() const override;
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
CallExpr() { func = 0; args = 0; }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(CallExpr);
@@ -977,15 +977,15 @@ public:
ListExpr* Args() const { return args; }
EventHandlerPtr Handler() const { return handler; }
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Expr;
EventExpr() { args = 0; }
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(EventExpr);
@@ -1006,24 +1006,24 @@ public:
expr_list& Exprs() { return exprs; }
// True if the entire list represents pure values.
- int IsPure() const;
+ int IsPure() const override;
// True if the entire list represents constant values.
int AllConst() const;
- Val* Eval(Frame* f) const;
+ Val* Eval(Frame* f) const override;
- BroType* InitType() const;
- Val* InitVal(const BroType* t, Val* aggr) const;
- Expr* MakeLvalue();
- void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
+ BroType* InitType() const override;
+ Val* InitVal(const BroType* t, Val* aggr) const override;
+ Expr* MakeLvalue() override;
+ void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN) override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
Val* AddSetInit(const BroType* t, Val* aggr) const;
- void ExprDescribe(ODesc* d) const;
+ void ExprDescribe(ODesc* d) const override;
DECLARE_SERIAL(ListExpr);
@@ -1035,7 +1035,7 @@ class RecordAssignExpr : public ListExpr {
public:
RecordAssignExpr(Expr* record, Expr* init_list, int is_init);
- Val* Eval(Frame* f) const { return ListExpr::Eval(f); }
+ Val* Eval(Frame* f) const override { return ListExpr::Eval(f); }
protected:
friend class Expr;
diff --git a/src/File.cc b/src/File.cc
index e62ca732cd..16d4259fe5 100644
--- a/src/File.cc
+++ b/src/File.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#ifdef TIME_WITH_SYS_TIME
diff --git a/src/File.h b/src/File.h
index dc56c5a3fe..f3fdf2f271 100644
--- a/src/File.h
+++ b/src/File.h
@@ -49,7 +49,7 @@ public:
// closed, not active, or whatever.
int Close();
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
void SetRotateInterval(double secs);
diff --git a/src/Frag.cc b/src/Frag.cc
index 8ada148750..6a8b901a73 100644
--- a/src/Frag.cc
+++ b/src/Frag.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "Hash.h"
diff --git a/src/Frame.cc b/src/Frame.cc
index 8754c02a9f..e97b948dbe 100644
--- a/src/Frame.cc
+++ b/src/Frame.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Frame.h"
#include "Stmt.h"
diff --git a/src/Func.cc b/src/Func.cc
index 82f73e1f19..e1eadb8c9f 100644
--- a/src/Func.cc
+++ b/src/Func.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/Func.h b/src/Func.h
index 0e50d546f4..791f8b7135 100644
--- a/src/Func.h
+++ b/src/Func.h
@@ -92,15 +92,15 @@ public:
BroFunc(ID* id, Stmt* body, id_list* inits, int frame_size, int priority);
~BroFunc();
- int IsPure() const;
- Val* Call(val_list* args, Frame* parent) const;
+ int IsPure() const override;
+ Val* Call(val_list* args, Frame* parent) const override;
void AddBody(Stmt* new_body, id_list* new_inits, int new_frame_size,
- int priority);
+ int priority) override;
int FrameSize() const { return frame_size; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
protected:
BroFunc() : Func(BRO_FUNC) {}
@@ -118,11 +118,11 @@ public:
BuiltinFunc(built_in_func func, const char* name, int is_pure);
~BuiltinFunc();
- int IsPure() const;
- Val* Call(val_list* args, Frame* parent) const;
+ int IsPure() const override;
+ Val* Call(val_list* args, Frame* parent) const override;
built_in_func TheFunc() const { return func; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
protected:
BuiltinFunc() { func = 0; is_pure = 0; }
diff --git a/src/Hash.cc b/src/Hash.cc
index 7873e398c3..d723601635 100644
--- a/src/Hash.cc
+++ b/src/Hash.cc
@@ -15,7 +15,7 @@
// for the adversary to construct conflicts, though I do not know if
// HMAC/MD5 is provably universal.
-#include "config.h"
+#include "bro-config.h"
#include "Hash.h"
diff --git a/src/ID.cc b/src/ID.cc
index a308ffa81d..efc488449b 100644
--- a/src/ID.cc
+++ b/src/ID.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "ID.h"
#include "Expr.h"
diff --git a/src/ID.h b/src/ID.h
index 805a8e391b..2e0d5708a9 100644
--- a/src/ID.h
+++ b/src/ID.h
@@ -87,7 +87,7 @@ public:
void Error(const char* msg, const BroObj* o2 = 0);
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
// Adds type and value to description.
void DescribeExtended(ODesc* d) const;
// Produces a description that's reST-ready.
diff --git a/src/IP.cc b/src/IP.cc
index 3a19f02d23..ebe778e3d7 100644
--- a/src/IP.cc
+++ b/src/IP.cc
@@ -1,5 +1,9 @@
// See the file "COPYING" in the main distribution directory for copyright.
+#include
+#include
+#include
+
#include "IP.h"
#include "Type.h"
#include "Val.h"
@@ -403,6 +407,17 @@ RecordVal* IP_Hdr::BuildPktHdrVal(RecordVal* pkt_hdr, int sindex) const
break;
}
+ case IPPROTO_ICMPV6:
+ {
+ const struct icmp6_hdr* icmpp = (const struct icmp6_hdr*) data;
+ RecordVal* icmp_hdr = new RecordVal(icmp_hdr_type);
+
+ icmp_hdr->Assign(0, new Val(icmpp->icmp6_type, TYPE_COUNT));
+
+ pkt_hdr->Assign(sindex + 4, icmp_hdr);
+ break;
+ }
+
default:
{
// This is not a protocol we understand.
diff --git a/src/IP.h b/src/IP.h
index bfd3ce8a41..8be2d3e609 100644
--- a/src/IP.h
+++ b/src/IP.h
@@ -3,7 +3,7 @@
#ifndef ip_h
#define ip_h
-#include "config.h"
+#include "bro-config.h"
#include "net_util.h"
#include "IPAddr.h"
#include "Reporter.h"
diff --git a/src/IntSet.cc b/src/IntSet.cc
index fb198f0e25..f5b004666c 100644
--- a/src/IntSet.cc
+++ b/src/IntSet.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#ifdef HAVE_MEMORY_H
#include
diff --git a/src/List.cc b/src/List.cc
index 9a1af3fe4f..a2b4609975 100644
--- a/src/List.cc
+++ b/src/List.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/NFA.cc b/src/NFA.cc
index 4849755941..def04d79a1 100644
--- a/src/NFA.cc
+++ b/src/NFA.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "NFA.h"
#include "EquivClass.h"
diff --git a/src/Net.cc b/src/Net.cc
index 2a368c47ef..0b0491719f 100644
--- a/src/Net.cc
+++ b/src/Net.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#ifdef TIME_WITH_SYS_TIME
diff --git a/src/Net.h b/src/Net.h
index d19bd9083c..370f08a3ca 100644
--- a/src/Net.h
+++ b/src/Net.h
@@ -70,9 +70,6 @@ extern bool terminating;
// True if the remote serializer is to be activated.
extern bool using_communication;
-// Snaplen passed to libpcap.
-extern int snaplen;
-
extern const Packet* current_pkt;
extern int current_dispatched;
extern double current_timestamp;
diff --git a/src/NetVar.cc b/src/NetVar.cc
index 5585cf8211..ccc94c97a6 100644
--- a/src/NetVar.cc
+++ b/src/NetVar.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Var.h"
#include "NetVar.h"
@@ -15,6 +15,8 @@ RecordType* icmp_conn;
RecordType* icmp_context;
RecordType* SYN_packet;
RecordType* pcap_packet;
+RecordType* raw_pkt_hdr_type;
+RecordType* l2_hdr_type;
RecordType* signature_state;
EnumType* transport_proto;
TableType* string_set;
@@ -324,6 +326,8 @@ void init_net_var()
signature_state = internal_type("signature_state")->AsRecordType();
SYN_packet = internal_type("SYN_packet")->AsRecordType();
pcap_packet = internal_type("pcap_packet")->AsRecordType();
+ raw_pkt_hdr_type = internal_type("raw_pkt_hdr")->AsRecordType();
+ l2_hdr_type = internal_type("l2_hdr")->AsRecordType();
transport_proto = internal_type("transport_proto")->AsEnumType();
string_set = internal_type("string_set")->AsTableType();
string_array = internal_type("string_array")->AsTableType();
diff --git a/src/NetVar.h b/src/NetVar.h
index 97018121f9..909a2a4c1c 100644
--- a/src/NetVar.h
+++ b/src/NetVar.h
@@ -19,6 +19,8 @@ extern RecordType* icmp_context;
extern RecordType* signature_state;
extern RecordType* SYN_packet;
extern RecordType* pcap_packet;
+extern RecordType* raw_pkt_hdr_type;
+extern RecordType* l2_hdr_type;
extern EnumType* transport_proto;
extern TableType* string_set;
extern TableType* string_array;
diff --git a/src/Obj.cc b/src/Obj.cc
index 99ddb1329c..5553674598 100644
--- a/src/Obj.cc
+++ b/src/Obj.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/OpaqueVal.h b/src/OpaqueVal.h
index 70ba48f8d1..df928dff60 100644
--- a/src/OpaqueVal.h
+++ b/src/OpaqueVal.h
@@ -48,9 +48,9 @@ public:
protected:
friend class Val;
- virtual bool DoInit() /* override */;
- virtual bool DoFeed(const void* data, size_t size) /* override */;
- virtual StringVal* DoGet() /* override */;
+ virtual bool DoInit() override;
+ virtual bool DoFeed(const void* data, size_t size) override;
+ virtual StringVal* DoGet() override;
DECLARE_SERIAL(MD5Val);
@@ -67,9 +67,9 @@ public:
protected:
friend class Val;
- virtual bool DoInit() /* override */;
- virtual bool DoFeed(const void* data, size_t size) /* override */;
- virtual StringVal* DoGet() /* override */;
+ virtual bool DoInit() override;
+ virtual bool DoFeed(const void* data, size_t size) override;
+ virtual StringVal* DoGet() override;
DECLARE_SERIAL(SHA1Val);
@@ -86,9 +86,9 @@ public:
protected:
friend class Val;
- virtual bool DoInit() /* override */;
- virtual bool DoFeed(const void* data, size_t size) /* override */;
- virtual StringVal* DoGet() /* override */;
+ virtual bool DoInit() override;
+ virtual bool DoFeed(const void* data, size_t size) override;
+ virtual StringVal* DoGet() override;
DECLARE_SERIAL(SHA256Val);
diff --git a/src/PacketDumper.cc b/src/PacketDumper.cc
index 84b22ff17c..1a53550dfd 100644
--- a/src/PacketDumper.cc
+++ b/src/PacketDumper.cc
@@ -1,7 +1,7 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/PolicyFile.cc b/src/PolicyFile.cc
index 5d0082c6a9..bd41c15e9d 100644
--- a/src/PolicyFile.cc
+++ b/src/PolicyFile.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/PrefixTable.cc b/src/PrefixTable.cc
index d31203c9d3..007e08349c 100644
--- a/src/PrefixTable.cc
+++ b/src/PrefixTable.cc
@@ -1,7 +1,7 @@
#include "PrefixTable.h"
#include "Reporter.h"
-inline static prefix_t* make_prefix(const IPAddr& addr, int width)
+prefix_t* PrefixTable::MakePrefix(const IPAddr& addr, int width)
{
prefix_t* prefix = (prefix_t*) safe_malloc(sizeof(prefix_t));
@@ -13,9 +13,14 @@ inline static prefix_t* make_prefix(const IPAddr& addr, int width)
return prefix;
}
+IPPrefix PrefixTable::PrefixToIPPrefix(prefix_t* prefix)
+ {
+ return IPPrefix(IPAddr(IPv6, reinterpret_cast<const uint32_t*>(&prefix->add.sin6), IPAddr::Network), prefix->bitlen, 1);
+ }
+
void* PrefixTable::Insert(const IPAddr& addr, int width, void* data)
{
- prefix_t* prefix = make_prefix(addr, width);
+ prefix_t* prefix = MakePrefix(addr, width);
patricia_node_t* node = patricia_lookup(tree, prefix);
Deref_Prefix(prefix);
@@ -57,13 +62,39 @@ void* PrefixTable::Insert(const Val* value, void* data)
}
}
+list<tuple<IPPrefix, void*>> PrefixTable::FindAll(const IPAddr& addr, int width) const
+ {
+ std::list<std::tuple<IPPrefix, void*>> out;
+ prefix_t* prefix = MakePrefix(addr, width);
+
+ int elems = 0;
+ patricia_node_t** list = nullptr;
+
+ patricia_search_all(tree, prefix, &list, &elems);
+
+ for ( int i = 0; i < elems; ++i )
+ out.push_back(std::make_tuple(PrefixToIPPrefix(list[i]->prefix), list[i]->data));
+
+ Deref_Prefix(prefix);
+ free(list);
+ return out;
+ }
+
+list<tuple<IPPrefix, void*>> PrefixTable::FindAll(const SubNetVal* value) const
+ {
+ return FindAll(value->AsSubNet().Prefix(), value->AsSubNet().LengthIPv6());
+ }
+
void* PrefixTable::Lookup(const IPAddr& addr, int width, bool exact) const
{
- prefix_t* prefix = make_prefix(addr, width);
+ prefix_t* prefix = MakePrefix(addr, width);
patricia_node_t* node =
exact ? patricia_search_exact(tree, prefix) :
patricia_search_best(tree, prefix);
+ int elems = 0;
+ patricia_node_t** list = nullptr;
+
Deref_Prefix(prefix);
return node ? node->data : 0;
}
@@ -94,7 +125,7 @@ void* PrefixTable::Lookup(const Val* value, bool exact) const
void* PrefixTable::Remove(const IPAddr& addr, int width)
{
- prefix_t* prefix = make_prefix(addr, width);
+ prefix_t* prefix = MakePrefix(addr, width);
patricia_node_t* node = patricia_search_exact(tree, prefix);
Deref_Prefix(prefix);
diff --git a/src/PrefixTable.h b/src/PrefixTable.h
index 2e5f43a0a8..6606b77e81 100644
--- a/src/PrefixTable.h
+++ b/src/PrefixTable.h
@@ -36,6 +36,10 @@ public:
void* Lookup(const IPAddr& addr, int width, bool exact = false) const;
void* Lookup(const Val* value, bool exact = false) const;
+ // Returns list of all found matches or empty list otherwise.
+ list<tuple<IPPrefix, void*>> FindAll(const IPAddr& addr, int width) const;
+ list<tuple<IPPrefix, void*>> FindAll(const SubNetVal* value) const;
+
// Returns pointer to data or nil if not found.
void* Remove(const IPAddr& addr, int width);
void* Remove(const Val* value);
@@ -45,6 +49,10 @@ public:
iterator InitIterator();
void* GetNext(iterator* i);
+private:
+ static prefix_t* MakePrefix(const IPAddr& addr, int width);
+ static IPPrefix PrefixToIPPrefix(prefix_t* p);
+
patricia_tree_t* tree;
};
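
FindAll() above returns all entries matching the queried address or subnet, each as
a tuple of the stored prefix and its data pointer, rather than the single best match
that Lookup() gives. A sketch of how a caller might consume the result (assumes the
declarations above; the helper itself is illustrative and not part of this patch):

    #include <list>
    #include <tuple>

    #include "PrefixTable.h"

    // Collect the data pointers of all entries matching addr/width.
    static std::list<void*> matching_data(const PrefixTable& table,
                                          const IPAddr& addr, int width)
        {
        std::list<void*> out;

        for ( const auto& match : table.FindAll(addr, width) )
            out.push_back(std::get<1>(match));  // std::get<0>() is the matching IPPrefix

        return out;
        }
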
diff --git a/src/PriorityQueue.cc b/src/PriorityQueue.cc
index 8db161b10a..75b731142e 100644
--- a/src/PriorityQueue.cc
+++ b/src/PriorityQueue.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/Queue.cc b/src/Queue.cc
index 28bcb92405..587e37063f 100644
--- a/src/Queue.cc
+++ b/src/Queue.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/RE.cc b/src/RE.cc
index f52eff47eb..6c1e80588f 100644
--- a/src/RE.cc
+++ b/src/RE.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/Reassem.cc b/src/Reassem.cc
index bfac7f7a07..54f27bd895 100644
--- a/src/Reassem.cc
+++ b/src/Reassem.cc
@@ -2,7 +2,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "Reassem.h"
#include "Serializer.h"
diff --git a/src/RemoteSerializer.cc b/src/RemoteSerializer.cc
index 44ec678a0f..16add7c9c5 100644
--- a/src/RemoteSerializer.cc
+++ b/src/RemoteSerializer.cc
@@ -159,7 +159,7 @@
#include
#include
-#include "config.h"
+#include "bro-config.h"
#ifdef TIME_WITH_SYS_TIME
# include
# include
@@ -3459,7 +3459,11 @@ void SocketComm::Run()
if ( io->CanWrite() )
++canwrites;
- int a = select(max_fd + 1, &fd_read, &fd_write, &fd_except, 0);
+ struct timeval timeout;
+ timeout.tv_sec = 1;
+ timeout.tv_usec = 0;
+
+ int a = select(max_fd + 1, &fd_read, &fd_write, &fd_except, &timeout);
if ( selects % 100000 == 0 )
Log(fmt("selects=%ld canwrites=%ld pending=%lu",
diff --git a/src/Reporter.cc b/src/Reporter.cc
index cd1aa09d4c..6020b6569c 100644
--- a/src/Reporter.cc
+++ b/src/Reporter.cc
@@ -4,7 +4,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "Reporter.h"
#include "Event.h"
#include "NetVar.h"
@@ -393,4 +393,3 @@ void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out,
if ( alloced )
free(alloced);
}
-
diff --git a/src/Rule.cc b/src/Rule.cc
index c978b93177..c483527c63 100644
--- a/src/Rule.cc
+++ b/src/Rule.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include "Rule.h"
#include "RuleMatcher.h"
diff --git a/src/RuleAction.cc b/src/RuleAction.cc
index a0f4e89010..bac38a1236 100644
--- a/src/RuleAction.cc
+++ b/src/RuleAction.cc
@@ -1,7 +1,7 @@
#include
using std::string;
-#include "config.h"
+#include "bro-config.h"
#include "RuleAction.h"
#include "RuleMatcher.h"
diff --git a/src/RuleCondition.cc b/src/RuleCondition.cc
index 36d8cba39d..68eb13121f 100644
--- a/src/RuleCondition.cc
+++ b/src/RuleCondition.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include "RuleCondition.h"
#include "analyzer/protocol/tcp/TCP.h"
diff --git a/src/RuleMatcher.cc b/src/RuleMatcher.cc
index 967c4e4e65..f40a5c4349 100644
--- a/src/RuleMatcher.cc
+++ b/src/RuleMatcher.cc
@@ -1,7 +1,7 @@
#include
#include
-#include "config.h"
+#include "bro-config.h"
#include "analyzer/Analyzer.h"
#include "RuleMatcher.h"
diff --git a/src/Scope.cc b/src/Scope.cc
index 4916cdbfce..091dbabb9b 100644
--- a/src/Scope.cc
+++ b/src/Scope.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "ID.h"
#include "Val.h"
diff --git a/src/SerialObj.h b/src/SerialObj.h
index 4794f2bf20..ca661db8af 100644
--- a/src/SerialObj.h
+++ b/src/SerialObj.h
@@ -37,7 +37,7 @@
#include "DebugLogger.h"
#include "Continuation.h"
#include "SerialTypes.h"
-#include "config.h"
+#include "bro-config.h"
#if SIZEOF_LONG_LONG < 8
# error "Serialization requires that sizeof(long long) is at least 8. (Remove this message only if you know what you're doing.)"
@@ -169,10 +169,10 @@ public:
#define DECLARE_SERIAL(classname) \
static classname* Instantiate(); \
static SerialTypeRegistrator register_type; \
- virtual bool DoSerialize(SerialInfo*) const; \
- virtual bool DoUnserialize(UnserialInfo*); \
- virtual const TransientID* GetTID() const { return &tid; } \
- virtual SerialType GetSerialType() const; \
+ virtual bool DoSerialize(SerialInfo*) const override; \
+ virtual bool DoUnserialize(UnserialInfo*) override; \
+ virtual const TransientID* GetTID() const override { return &tid; } \
+ virtual SerialType GetSerialType() const override; \
TransientID tid;
// Only needed (and usable) for non-abstract classes.
diff --git a/src/Sessions.cc b/src/Sessions.cc
index 7b7974bfce..b8bfe82b34 100644
--- a/src/Sessions.cc
+++ b/src/Sessions.cc
@@ -1,7 +1,7 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
@@ -674,7 +674,7 @@ void NetSessions::DoNextPacket(double t, const Packet* pkt, const IP_Hdr* ip_hdr
conn = (Connection*) d->Lookup(h);
if ( ! conn )
{
- conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation);
+ conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), pkt->vlan, pkt->inner_vlan, encapsulation);
if ( conn )
d->Insert(h, conn);
}
@@ -694,7 +694,7 @@ void NetSessions::DoNextPacket(double t, const Packet* pkt, const IP_Hdr* ip_hdr
conn->Event(connection_reused, 0);
Remove(conn);
- conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation);
+ conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), pkt->vlan, pkt->inner_vlan, encapsulation);
if ( conn )
d->Insert(h, conn);
}
@@ -1173,6 +1173,7 @@ void NetSessions::GetStats(SessionStats& s) const
Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,
const u_char* data, int proto, uint32 flow_label,
+ uint32 vlan, uint32 inner_vlan,
const EncapsulationStack* encapsulation)
{
// FIXME: This should be cleaned up a bit, it's too protocol-specific.
@@ -1229,7 +1230,7 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id,
id = &flip_id;
}
- Connection* conn = new Connection(this, k, t, id, flow_label, encapsulation);
+ Connection* conn = new Connection(this, k, t, id, flow_label, vlan, inner_vlan, encapsulation);
conn->SetTransport(tproto);
if ( ! analyzer_mgr->BuildInitialAnalyzerTree(conn) )
diff --git a/src/Sessions.h b/src/Sessions.h
index 1780bbdb24..2aca292789 100644
--- a/src/Sessions.h
+++ b/src/Sessions.h
@@ -184,6 +184,7 @@ protected:
Connection* NewConn(HashKey* k, double t, const ConnID* id,
const u_char* data, int proto, uint32 flow_lable,
+ uint32 vlan, uint32 inner_vlan,
const EncapsulationStack* encapsulation);
// Check whether the tag of the current packet is consistent with
diff --git a/src/SmithWaterman.cc b/src/SmithWaterman.cc
index 5f2786caa0..ae57bab00c 100644
--- a/src/SmithWaterman.cc
+++ b/src/SmithWaterman.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/Stats.cc b/src/Stats.cc
index 00f603cba7..eb5ac67e26 100644
--- a/src/Stats.cc
+++ b/src/Stats.cc
@@ -362,12 +362,16 @@ SampleLogger::~SampleLogger()
void SampleLogger::FunctionSeen(const Func* func)
{
- load_samples->Assign(new StringVal(func->Name()), 0);
+ Val* idx = new StringVal(func->Name());
+ load_samples->Assign(idx, 0);
+ Unref(idx);
}
void SampleLogger::LocationSeen(const Location* loc)
{
- load_samples->Assign(new StringVal(loc->filename), 0);
+ Val* idx = new StringVal(loc->filename);
+ load_samples->Assign(idx, 0);
+ Unref(idx);
}
void SampleLogger::SegmentProfile(const char* /* name */,
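
The FunctionSeen()/LocationSeen() fixes above close small reference leaks:
TableVal::Assign() takes ownership of the assigned value but not of the index (see
the clarified Assign() comment in Val.h further below), so a temporary index created
just for the call has to be released by the caller. The pattern, as a sketch against
the interfaces shown in this patch:

    // Leaky: the StringVal created inline is never released.
    load_samples->Assign(new StringVal(func->Name()), 0);

    // Fixed: keep a handle so the temporary index can be Unref()'d afterwards.
    Val* idx = new StringVal(func->Name());
    load_samples->Assign(idx, 0);
    Unref(idx);
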
diff --git a/src/Stmt.cc b/src/Stmt.cc
index 932943803c..d93e8ff14e 100644
--- a/src/Stmt.cc
+++ b/src/Stmt.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Expr.h"
#include "Event.h"
@@ -994,6 +994,9 @@ bool AddStmt::DoUnserialize(UnserialInfo* info)
DelStmt::DelStmt(Expr* arg_e) : ExprStmt(STMT_DELETE, arg_e)
{
+ if ( e->IsError() )
+ return;
+
if ( ! e->CanDel() )
Error("illegal delete statement");
}
diff --git a/src/Stmt.h b/src/Stmt.h
index 36fe624e68..1c3bef2984 100644
--- a/src/Stmt.h
+++ b/src/Stmt.h
@@ -124,7 +124,7 @@ protected:
friend class Stmt;
PrintStmt() {}
- Val* DoExec(val_list* vals, stmt_flow_type& flow) const;
+ Val* DoExec(val_list* vals, stmt_flow_type& flow) const override;
DECLARE_SERIAL(PrintStmt);
};
@@ -134,13 +134,13 @@ public:
ExprStmt(Expr* e);
virtual ~ExprStmt();
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
const Expr* StmtExpr() const { return e; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
@@ -149,7 +149,7 @@ protected:
virtual Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const;
- int IsPure() const;
+ int IsPure() const override;
DECLARE_SERIAL(ExprStmt);
@@ -164,16 +164,16 @@ public:
const Stmt* TrueBranch() const { return s1; }
const Stmt* FalseBranch() const { return s2; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
IfStmt() { s1 = s2 = 0; }
- Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const override;
+ int IsPure() const override;
DECLARE_SERIAL(IfStmt);
@@ -192,7 +192,7 @@ public:
const Stmt* Body() const { return s; }
Stmt* Body() { return s; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
bool Serialize(SerialInfo* info) const;
static Case* Unserialize(UnserialInfo* info);
@@ -216,16 +216,16 @@ public:
const case_list* Cases() const { return cases; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
SwitchStmt() { cases = 0; default_case_idx = -1; comp_hash = 0; }
- Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const override;
+ int IsPure() const override;
DECLARE_SERIAL(SwitchStmt);
@@ -252,10 +252,10 @@ class AddStmt : public ExprStmt {
public:
AddStmt(Expr* e);
- int IsPure() const;
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ int IsPure() const override;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
@@ -268,10 +268,10 @@ class DelStmt : public ExprStmt {
public:
DelStmt(Expr* e);
- int IsPure() const;
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ int IsPure() const override;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
@@ -284,9 +284,9 @@ class EventStmt : public ExprStmt {
public:
EventStmt(EventExpr* e);
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
@@ -303,11 +303,11 @@ public:
WhileStmt(Expr* loop_condition, Stmt* body);
~WhileStmt();
- int IsPure() const;
+ int IsPure() const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
@@ -315,7 +315,7 @@ protected:
WhileStmt()
{ loop_condition = 0; body = 0; }
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
DECLARE_SERIAL(WhileStmt);
@@ -334,17 +334,17 @@ public:
const Expr* LoopExpr() const { return e; }
const Stmt* LoopBody() const { return body; }
- int IsPure() const;
+ int IsPure() const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
ForStmt() { loop_vars = 0; body = 0; }
- Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const;
+ Val* DoExec(Frame* f, Val* v, stmt_flow_type& flow) const override;
DECLARE_SERIAL(ForStmt);
@@ -356,12 +356,12 @@ class NextStmt : public Stmt {
public:
NextStmt() : Stmt(STMT_NEXT) { }
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
+ int IsPure() const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
DECLARE_SERIAL(NextStmt);
@@ -371,12 +371,12 @@ class BreakStmt : public Stmt {
public:
BreakStmt() : Stmt(STMT_BREAK) { }
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
+ int IsPure() const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
DECLARE_SERIAL(BreakStmt);
@@ -386,12 +386,12 @@ class FallthroughStmt : public Stmt {
public:
FallthroughStmt() : Stmt(STMT_FALLTHROUGH) { }
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
+ int IsPure() const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
DECLARE_SERIAL(FallthroughStmt);
@@ -401,9 +401,9 @@ class ReturnStmt : public ExprStmt {
public:
ReturnStmt(Expr* e);
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
protected:
friend class Stmt;
@@ -417,17 +417,17 @@ public:
StmtList();
~StmtList();
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
const stmt_list& Stmts() const { return stmts; }
stmt_list& Stmts() { return stmts; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
- int IsPure() const;
+ int IsPure() const override;
DECLARE_SERIAL(StmtList);
@@ -439,9 +439,9 @@ public:
EventBodyList() : StmtList()
{ topmost = false; tag = STMT_EVENT_BODY_LIST; }
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
// "Topmost" means that this is the main body of a function or event.
// void SetTopmost(bool is_topmost) { topmost = is_topmost; }
@@ -465,13 +465,13 @@ public:
~InitStmt();
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
const id_list* Inits() const { return inits; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
friend class Stmt;
@@ -486,12 +486,12 @@ class NullStmt : public Stmt {
public:
NullStmt() : Stmt(STMT_NULL) { }
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
+ int IsPure() const override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
DECLARE_SERIAL(NullStmt);
@@ -503,17 +503,17 @@ public:
WhenStmt(Expr* cond, Stmt* s1, Stmt* s2, Expr* timeout, bool is_return);
~WhenStmt();
- Val* Exec(Frame* f, stmt_flow_type& flow) const;
- int IsPure() const;
+ Val* Exec(Frame* f, stmt_flow_type& flow) const override;
+ int IsPure() const override;
const Expr* Cond() const { return cond; }
const Stmt* Body() const { return s1; }
const Expr* TimeoutExpr() const { return timeout; }
const Stmt* TimeoutBody() const { return s2; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- TraversalCode Traverse(TraversalCallback* cb) const;
+ TraversalCode Traverse(TraversalCallback* cb) const override;
protected:
WhenStmt() { cond = 0; s1 = s2 = 0; timeout = 0; is_return = 0; }
diff --git a/src/Tag.h b/src/Tag.h
index 2c76f253a5..a3d7197fa0 100644
--- a/src/Tag.h
+++ b/src/Tag.h
@@ -3,7 +3,7 @@
#ifndef TAG_H
#define TAG_H
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "Type.h"
diff --git a/src/Timer.cc b/src/Timer.cc
index b8871ee489..f4370ed735 100644
--- a/src/Timer.cc
+++ b/src/Timer.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "Timer.h"
diff --git a/src/TunnelEncapsulation.h b/src/TunnelEncapsulation.h
index 419a3000b4..b853fc01b3 100644
--- a/src/TunnelEncapsulation.h
+++ b/src/TunnelEncapsulation.h
@@ -3,7 +3,7 @@
#ifndef TUNNELS_H
#define TUNNELS_H
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "IPAddr.h"
#include "Val.h"
diff --git a/src/Type.cc b/src/Type.cc
index 7fab05673f..1a6a4d36b8 100644
--- a/src/Type.cc
+++ b/src/Type.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Type.h"
#include "Attr.h"
@@ -1045,6 +1045,8 @@ TypeDecl* RecordType::FieldDecl(int field)
void RecordType::Describe(ODesc* d) const
{
+ d->PushType(this);
+
if ( d->IsReadable() )
{
if ( d->IsShort() && GetName().size() )
@@ -1064,10 +1066,13 @@ void RecordType::Describe(ODesc* d) const
d->Add(int(Tag()));
DescribeFields(d);
}
+
+ d->PopType(this);
}
void RecordType::DescribeReST(ODesc* d, bool roles_only) const
{
+ d->PushType(this);
d->Add(":bro:type:`record`");
if ( num_fields == 0 )
@@ -1075,6 +1080,7 @@ void RecordType::DescribeReST(ODesc* d, bool roles_only) const
d->NL();
DescribeFieldsReST(d, false);
+ d->PopType(this);
}
const char* RecordType::AddFields(type_decl_list* others, attr_list* attr)
@@ -1129,7 +1135,12 @@ void RecordType::DescribeFields(ODesc* d) const
const TypeDecl* td = FieldDecl(i);
d->Add(td->id);
d->Add(":");
- td->type->Describe(d);
+
+ if ( d->FindType(td->type) )
+ d->Add("");
+ else
+ td->type->Describe(d);
+
d->Add(";");
}
}
@@ -1170,7 +1181,11 @@ void RecordType::DescribeFieldsReST(ODesc* d, bool func_args) const
}
const TypeDecl* td = FieldDecl(i);
- td->DescribeReST(d);
+
+ if ( d->FindType(td->type) )
+ d->Add("");
+ else
+ td->DescribeReST(d);
if ( func_args )
continue;
diff --git a/src/Type.h b/src/Type.h
index f902b0d907..9f7a439986 100644
--- a/src/Type.h
+++ b/src/Type.h
@@ -182,6 +182,7 @@ public:
CHECK_TYPE_TAG(TYPE_FUNC, "BroType::AsFuncType");
return (const FuncType*) this;
}
+
FuncType* AsFuncType()
{
CHECK_TYPE_TAG(TYPE_FUNC, "BroType::AsFuncType");
@@ -201,7 +202,7 @@ public:
}
const VectorType* AsVectorType() const
- {
+ {
CHECK_TYPE_TAG(TYPE_VECTOR, "BroType::AsVectorType");
return (VectorType*) this;
}
@@ -219,19 +220,19 @@ public:
}
VectorType* AsVectorType()
- {
+ {
CHECK_TYPE_TAG(TYPE_VECTOR, "BroType::AsVectorType");
return (VectorType*) this;
}
const TypeType* AsTypeType() const
- {
+ {
CHECK_TYPE_TAG(TYPE_TYPE, "BroType::AsTypeType");
return (TypeType*) this;
}
TypeType* AsTypeType()
- {
+ {
CHECK_TYPE_TAG(TYPE_TYPE, "BroType::AsTypeType");
return (TypeType*) this;
}
@@ -248,7 +249,7 @@ public:
BroType* Ref() { ::Ref(this); return this; }
- virtual void Describe(ODesc* d) const;
+ virtual void Describe(ODesc* d) const override;
virtual void DescribeReST(ODesc* d, bool roles_only = false) const;
virtual unsigned MemoryAllocation() const;
@@ -312,9 +313,9 @@ public:
void Append(BroType* t);
void AppendEvenIfNotPure(BroType* t);
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- unsigned int MemoryAllocation() const
+ unsigned int MemoryAllocation() const override
{
return BroType::MemoryAllocation()
+ padded_sizeof(*this) - padded_sizeof(BroType)
@@ -330,15 +331,15 @@ protected:
class IndexType : public BroType {
public:
- int MatchesIndex(ListExpr*& index) const;
+ int MatchesIndex(ListExpr*& index) const override;
TypeList* Indices() const { return indices; }
const type_list* IndexTypes() const { return indices->Types(); }
- BroType* YieldType();
+ BroType* YieldType() override;
const BroType* YieldType() const;
- void Describe(ODesc* d) const;
- void DescribeReST(ODesc* d, bool roles_only = false) const;
+ void Describe(ODesc* d) const override;
+ void DescribeReST(ODesc* d, bool roles_only = false) const override;
// Returns true if this table is solely indexed by subnet.
bool IsSubNetIndex() const;
@@ -397,7 +398,7 @@ public:
~FuncType();
RecordType* Args() const { return args; }
- BroType* YieldType();
+ BroType* YieldType() override;
const BroType* YieldType() const;
void SetYieldType(BroType* arg_yield) { yield = arg_yield; }
function_flavor Flavor() const { return flavor; }
@@ -407,13 +408,13 @@ public:
void ClearYieldType(function_flavor arg_flav)
{ Unref(yield); yield = 0; flavor = arg_flav; }
- int MatchesIndex(ListExpr*& index) const;
+ int MatchesIndex(ListExpr*& index) const override;
int CheckArgs(const type_list* args, bool is_init = false) const;
TypeList* ArgTypes() const { return arg_types; }
- void Describe(ODesc* d) const;
- void DescribeReST(ODesc* d, bool roles_only = false) const;
+ void Describe(ODesc* d) const override;
+ void DescribeReST(ODesc* d, bool roles_only = false) const override;
protected:
FuncType() { args = 0; arg_types = 0; yield = 0; flavor = FUNC_FLAVOR_FUNCTION; }
@@ -463,8 +464,8 @@ public:
~RecordType();
- int HasField(const char* field) const;
- BroType* FieldType(const char* field) const;
+ int HasField(const char* field) const override;
+ BroType* FieldType(const char* field) const override;
BroType* FieldType(int field) const;
Val* FieldDefault(int field) const; // Ref's the returned value; 0 if none.
@@ -487,8 +488,8 @@ public:
// Takes ownership of list.
const char* AddFields(type_decl_list* types, attr_list* attr);
- void Describe(ODesc* d) const;
- void DescribeReST(ODesc* d, bool roles_only = false) const;
+ void Describe(ODesc* d) const override;
+ void DescribeReST(ODesc* d, bool roles_only = false) const override;
void DescribeFields(ODesc* d) const;
void DescribeFieldsReST(ODesc* d, bool func_args) const;
@@ -504,7 +505,7 @@ protected:
class SubNetType : public BroType {
public:
SubNetType();
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
protected:
DECLARE_SERIAL(SubNetType)
};
@@ -514,9 +515,9 @@ public:
FileType(BroType* yield_type);
~FileType();
- BroType* YieldType();
+ BroType* YieldType() override;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
protected:
FileType() { yield = 0; }
@@ -533,8 +534,8 @@ public:
const string& Name() const { return name; }
- void Describe(ODesc* d) const;
- void DescribeReST(ODesc* d, bool roles_only = false) const;
+ void Describe(ODesc* d) const override;
+ void DescribeReST(ODesc* d, bool roles_only = false) const override;
protected:
OpaqueType() { }
@@ -569,7 +570,7 @@ public:
// will be fully qualified with their module name.
enum_name_list Names() const;
- void DescribeReST(ODesc* d, bool roles_only = false) const;
+ void DescribeReST(ODesc* d, bool roles_only = false) const override;
protected:
EnumType() { counter = 0; }
@@ -599,17 +600,17 @@ class VectorType : public BroType {
public:
VectorType(BroType* t);
virtual ~VectorType();
- BroType* YieldType();
+ BroType* YieldType() override;
const BroType* YieldType() const;
- int MatchesIndex(ListExpr*& index) const;
+ int MatchesIndex(ListExpr*& index) const override;
// Returns true if this table type is "unspecified", which is what one
// gets using an empty "vector()" constructor.
bool IsUnspecifiedVector() const;
- void Describe(ODesc* d) const;
- void DescribeReST(ODesc* d, bool roles_only = false) const;
+ void Describe(ODesc* d) const override;
+ void DescribeReST(ODesc* d, bool roles_only = false) const override;
protected:
VectorType() { yield_type = 0; }
diff --git a/src/Val.cc b/src/Val.cc
index f3825dc9da..ed4ca40e14 100644
--- a/src/Val.cc
+++ b/src/Val.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
@@ -1787,7 +1787,16 @@ Val* TableVal::Lookup(Val* index, bool use_default_val)
{
TableEntryVal* v = (TableEntryVal*) subnets->Lookup(index);
if ( v )
+ {
+ if ( attrs && attrs->FindAttr(ATTR_EXPIRE_READ) )
+ {
+ v->SetExpireAccess(network_time);
+ if ( LoggingAccess() && expire_time )
+ ReadOperation(index, v);
+ }
+
return v->Value() ? v->Value() : this;
+ }
if ( ! use_default_val )
return 0;
@@ -1810,9 +1819,7 @@ Val* TableVal::Lookup(Val* index, bool use_default_val)
if ( v )
{
- if ( attrs &&
- ! (attrs->FindAttr(ATTR_EXPIRE_WRITE) ||
- attrs->FindAttr(ATTR_EXPIRE_CREATE)) )
+ if ( attrs && attrs->FindAttr(ATTR_EXPIRE_READ) )
{
v->SetExpireAccess(network_time);
if ( LoggingAccess() && expire_time )
@@ -1833,6 +1840,57 @@ Val* TableVal::Lookup(Val* index, bool use_default_val)
return def;
}
+VectorVal* TableVal::LookupSubnets(const SubNetVal* search)
+ {
+ if ( ! subnets )
+ reporter->InternalError("LookupSubnets called on wrong table type");
+
+ VectorVal* result = new VectorVal(internal_type("subnet_vec")->AsVectorType());
+
+ auto matches = subnets->FindAll(search);
+ for ( auto element : matches )
+ {
+ SubNetVal* s = new SubNetVal(get<0>(element));
+ result->Assign(result->Size(), s);
+ }
+
+ return result;
+ }
+
+TableVal* TableVal::LookupSubnetValues(const SubNetVal* search)
+ {
+ if ( ! subnets )
+ reporter->InternalError("LookupSubnetValues called on wrong table type");
+
+ TableVal* nt = new TableVal(this->Type()->Ref()->AsTableType());
+
+ auto matches = subnets->FindAll(search);
+ for ( auto element : matches )
+ {
+ SubNetVal* s = new SubNetVal(get<0>(element));
+ TableEntryVal* entry = reinterpret_cast<TableEntryVal*>(get<1>(element));
+
+ if ( entry && entry->Value() )
+ nt->Assign(s, entry->Value()->Ref());
+ else
+ nt->Assign(s, 0); // set
+
+ if ( entry )
+ {
+ if ( attrs && attrs->FindAttr(ATTR_EXPIRE_READ) )
+ {
+ entry->SetExpireAccess(network_time);
+ if ( LoggingAccess() && expire_time )
+ ReadOperation(s, entry);
+ }
+ }
+
+ Unref(s); // assign does not consume index
+ }
+
+ return nt;
+ }
+
bool TableVal::UpdateTimestamp(Val* index)
{
TableEntryVal* v;
@@ -1854,7 +1912,7 @@ bool TableVal::UpdateTimestamp(Val* index)
return false;
v->SetExpireAccess(network_time);
- if ( attrs->FindAttr(ATTR_EXPIRE_READ) )
+ if ( LoggingAccess() && attrs->FindAttr(ATTR_EXPIRE_READ) )
ReadOperation(index, v);
return true;
@@ -2478,7 +2536,7 @@ bool TableVal::DoUnserialize(UnserialInfo* info)
}
// If necessary, activate the expire timer.
- if ( attrs)
+ if ( attrs )
{
CheckExpireAttr(ATTR_EXPIRE_READ);
CheckExpireAttr(ATTR_EXPIRE_WRITE);
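
LookupSubnets() and LookupSubnetValues() added above both query a subnet-indexed
table for all entries matching a given subnet: the first returns just the matching
subnets as a vector, the second a new table holding the matching subnets together
with their stored values (refreshing &read_expire timestamps along the way). A usage
sketch against the declarations in this patch ("tbl" and "query" are placeholders;
both results are newly allocated, so whoever ends up holding them releases them):

    // tbl: a subnet-indexed TableVal; query: a SubNetVal to match against.
    VectorVal* nets = tbl->LookupSubnets(query);          // matching subnets only
    TableVal* filtered = tbl->LookupSubnetValues(query);  // matching subnets plus values

    // ... hand the results on, or drop them when done:
    Unref(nets);
    Unref(filtered);
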
diff --git a/src/Val.h b/src/Val.h
index 58b24a3e5d..a49a2e2235 100644
--- a/src/Val.h
+++ b/src/Val.h
@@ -325,7 +325,7 @@ public:
return (MutableVal*) this;
}
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
virtual void DescribeReST(ODesc* d) const;
bool Serialize(SerialInfo* info) const;
@@ -443,7 +443,7 @@ public:
#endif
}
- virtual uint64 LastModified() const { return last_modified; }
+ virtual uint64 LastModified() const override { return last_modified; }
// Mark value as changed.
void Modified()
@@ -487,7 +487,7 @@ public:
protected:
IntervalVal() {}
- void ValDescribe(ODesc* d) const;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(IntervalVal);
};
@@ -509,7 +509,7 @@ public:
PortVal(uint32 p, TransportProto port_type);
PortVal(uint32 p); // used for already-massaged port value.
- Val* SizeVal() const { return new Val(val.uint_val, TYPE_INT); }
+ Val* SizeVal() const override { return new Val(val.uint_val, TYPE_INT); }
// Returns the port number in host order (not including the mask).
uint32 Port() const;
@@ -535,7 +535,7 @@ protected:
friend class Val;
PortVal() {}
- void ValDescribe(ODesc* d) const;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(PortVal);
};
@@ -545,14 +545,14 @@ public:
AddrVal(const char* text);
~AddrVal();
- Val* SizeVal() const;
+ Val* SizeVal() const override;
// Constructor for address already in network order.
AddrVal(uint32 addr); // IPv4.
AddrVal(const uint32 addr[4]); // IPv6.
AddrVal(const IPAddr& addr);
- unsigned int MemoryAllocation() const;
+ unsigned int MemoryAllocation() const override;
protected:
friend class Val;
@@ -573,7 +573,7 @@ public:
SubNetVal(const IPPrefix& prefix);
~SubNetVal();
- Val* SizeVal() const;
+ Val* SizeVal() const override;
const IPAddr& Prefix() const;
int Width() const;
@@ -581,13 +581,13 @@ public:
bool Contains(const IPAddr& addr) const;
- unsigned int MemoryAllocation() const;
+ unsigned int MemoryAllocation() const override;
protected:
friend class Val;
SubNetVal() {}
- void ValDescribe(ODesc* d) const;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(SubNetVal);
};
@@ -599,7 +599,7 @@ public:
StringVal(const string& s);
StringVal(int length, const char* s);
- Val* SizeVal() const
+ Val* SizeVal() const override
{ return new Val(val.string_val->Len(), TYPE_COUNT); }
int Len() { return AsString()->Len(); }
@@ -613,13 +613,13 @@ public:
StringVal* ToUpper();
- unsigned int MemoryAllocation() const;
+ unsigned int MemoryAllocation() const override;
protected:
friend class Val;
StringVal() {}
- void ValDescribe(ODesc* d) const;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(StringVal);
};
@@ -629,17 +629,17 @@ public:
PatternVal(RE_Matcher* re);
~PatternVal();
- int AddTo(Val* v, int is_first_init) const;
+ int AddTo(Val* v, int is_first_init) const override;
void SetMatcher(RE_Matcher* re);
- unsigned int MemoryAllocation() const;
+ unsigned int MemoryAllocation() const override;
protected:
friend class Val;
PatternVal() {}
- void ValDescribe(ODesc* d) const;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(PatternVal);
};
@@ -653,7 +653,7 @@ public:
TypeTag BaseTag() const { return tag; }
- Val* SizeVal() const { return new Val(vals.length(), TYPE_COUNT); }
+ Val* SizeVal() const override { return new Val(vals.length(), TYPE_COUNT); }
int Length() const { return vals.length(); }
Val* Index(const int n) { return vals[n]; }
@@ -677,9 +677,9 @@ public:
const val_list* Vals() const { return &vals; }
val_list* Vals() { return &vals; }
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
- unsigned int MemoryAllocation() const;
+ unsigned int MemoryAllocation() const override;
protected:
friend class Val;
@@ -753,21 +753,22 @@ public:
TableVal(TableType* t, Attributes* attrs = 0);
~TableVal();
- // Returns true if the assignment typechecked, false if not.
- // Second version takes a HashKey and Unref()'s it when done.
- // If we're a set, new_val has to be nil.
- // If we aren't a set, index may be nil in the second version.
+ // Returns true if the assignment typechecked, false if not. The
+ // methods take ownership of new_val, but not of the index. Second
+ // version takes a HashKey and Unref()'s it when done. If we're a
+ // set, new_val has to be nil. If we aren't a set, index may be nil
+ // in the second version.
int Assign(Val* index, Val* new_val, Opcode op = OP_ASSIGN);
int Assign(Val* index, HashKey* k, Val* new_val, Opcode op = OP_ASSIGN);
- Val* SizeVal() const { return new Val(Size(), TYPE_COUNT); }
+ Val* SizeVal() const override { return new Val(Size(), TYPE_COUNT); }
// Add the entire contents of the table to the given value,
// which must also be a TableVal.
// Returns true if the addition typechecked, false if not.
// If is_first_init is true, then this is the *first* initialization
// (and so should be strictly adding new elements).
- int AddTo(Val* v, int is_first_init) const;
+ int AddTo(Val* v, int is_first_init) const override;
// Same but allows suppression of state operations.
int AddTo(Val* v, int is_first_init, bool propagate_ops) const;
@@ -778,7 +779,7 @@ public:
// Remove the entire contents of the table from the given value.
// which must also be a TableVal.
// Returns true if the addition typechecked, false if not.
- int RemoveFrom(Val* v) const;
+ int RemoveFrom(Val* v) const override;
// Expands any lists in the index into multiple initializations.
// Returns true if the initializations typecheck, false if not.
@@ -789,6 +790,16 @@ public:
// need to Ref/Unref it when calling the default function.
Val* Lookup(Val* index, bool use_default_val = true);
+ // For a table[subnet]/set[subnet], return all subnets that cover
+ // the given subnet.
+ // Causes an internal error if called for any other kind of table.
+ VectorVal* LookupSubnets(const SubNetVal* s);
+
+ // For a set[subnet]/table[subnet], return a new table that only contains
+ // entries that cover the given subnet.
+ // Causes an internal error if called for any other kind of table.
+ TableVal* LookupSubnetValues(const SubNetVal* s);
+
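
 A minimal Bro-script sketch of what these lookups make possible at the script
 level; it assumes a script-level wrapper such as matching_subnets() sits on
 top of LookupSubnets(), and the table name and contents are purely
 illustrative:

    # Illustrative table; contents are made up for the example.
    global nets: table[subnet] of string = {
        [10.0.0.0/8]  = "rfc1918",
        [10.2.0.0/16] = "lab"
    } &read_expire = 1 hr;

    event bro_init()
        {
        # matching_subnets() (assumed wrapper) returns every index covering the query.
        local covering = matching_subnets(10.2.3.0/24, nets);
        for ( i in covering )
            print fmt("%s covers 10.2.3.0/24", covering[i]);
        }
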
// Sets the timestamp for the given index to network time.
// Returns false if index does not exist.
bool UpdateTimestamp(Val* index);
@@ -813,12 +824,17 @@ public:
int Size() const { return AsTable()->Length(); }
int RecursiveSize() const;
- void Describe(ODesc* d) const;
+ // Returns the Prefix table used inside the table (if present).
+ // This allows us to do more direct queries to this specialized
+ // type that the general Table API does not allow.
+ const PrefixTable* Subnets() const { return subnets; }
+
+ void Describe(ODesc* d) const override;
void InitTimer(double delay);
void DoExpire(double t);
- unsigned int MemoryAllocation() const;
+ unsigned int MemoryAllocation() const override;
void ClearTimer(Timer* t)
{
@@ -840,8 +856,8 @@ protected:
int ExpandCompoundAndInit(val_list* vl, int k, Val* new_val);
int CheckAndAssign(Val* index, Val* new_val, Opcode op = OP_ASSIGN);
- bool AddProperties(Properties arg_state);
- bool RemoveProperties(Properties arg_state);
+ bool AddProperties(Properties arg_state) override;
+ bool RemoveProperties(Properties arg_state) override;
// Calculates default value for index. Returns 0 if none.
Val* Default(Val* index);
@@ -871,7 +887,7 @@ public:
RecordVal(RecordType* t);
~RecordVal();
- Val* SizeVal() const
+ Val* SizeVal() const override
{ return new Val(record_type->NumFields(), TYPE_COUNT); }
void Assign(int field, Val* new_val, Opcode op = OP_ASSIGN);
@@ -889,7 +905,7 @@ public:
*/
Val* Lookup(const char* field, bool with_default = false) const;
- void Describe(ODesc* d) const;
+ void Describe(ODesc* d) const override;
// This is an experiment to associate a BroObj within the
// event engine to a record value in bro script.
@@ -910,15 +926,15 @@ public:
RecordVal* CoerceTo(const RecordType* other, Val* aggr, bool allow_orphaning = false) const;
RecordVal* CoerceTo(RecordType* other, bool allow_orphaning = false);
- unsigned int MemoryAllocation() const;
- void DescribeReST(ODesc* d) const;
+ unsigned int MemoryAllocation() const override;
+ void DescribeReST(ODesc* d) const override;
protected:
friend class Val;
RecordVal() {}
- bool AddProperties(Properties arg_state);
- bool RemoveProperties(Properties arg_state);
+ bool AddProperties(Properties arg_state) override;
+ bool RemoveProperties(Properties arg_state) override;
DECLARE_SERIAL(RecordVal);
@@ -934,13 +950,13 @@ public:
type = t;
}
- Val* SizeVal() const { return new Val(val.int_val, TYPE_INT); }
+ Val* SizeVal() const override { return new Val(val.int_val, TYPE_INT); }
protected:
friend class Val;
EnumVal() {}
- void ValDescribe(ODesc* d) const;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(EnumVal);
};
@@ -951,7 +967,7 @@ public:
VectorVal(VectorType* t);
~VectorVal();
- Val* SizeVal() const
+ Val* SizeVal() const override
{ return new Val(uint32(val.vector_val->size()), TYPE_COUNT); }
// Returns false if the type of the argument was wrong.
@@ -996,9 +1012,9 @@ protected:
friend class Val;
VectorVal() { }
- bool AddProperties(Properties arg_state);
- bool RemoveProperties(Properties arg_state);
- void ValDescribe(ODesc* d) const;
+ bool AddProperties(Properties arg_state) override;
+ bool RemoveProperties(Properties arg_state) override;
+ void ValDescribe(ODesc* d) const override;
DECLARE_SERIAL(VectorVal);
diff --git a/src/Var.cc b/src/Var.cc
index ed0f486875..e923e2ec37 100644
--- a/src/Var.cc
+++ b/src/Var.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Var.h"
#include "Func.h"
diff --git a/src/analyzer/Component.h b/src/analyzer/Component.h
index f538c17919..0704852145 100644
--- a/src/analyzer/Component.h
+++ b/src/analyzer/Component.h
@@ -7,7 +7,7 @@
#include "plugin/Component.h"
#include "plugin/TaggedComponent.h"
-#include "../config.h"
+#include "../bro-config.h"
#include "../util.h"
class Connection;
diff --git a/src/analyzer/Manager.cc b/src/analyzer/Manager.cc
index bc8fceaf39..67aa6a0d33 100644
--- a/src/analyzer/Manager.cc
+++ b/src/analyzer/Manager.cc
@@ -505,6 +505,8 @@ bool Manager::BuildInitialAnalyzerTree(Connection* conn)
if ( ! analyzed )
conn->SetLifetime(non_analyzed_lifetime);
+ PLUGIN_HOOK_VOID(HOOK_SETUP_ANALYZER_TREE, HookSetupAnalyzerTree(conn));
+
return true;
}
diff --git a/src/analyzer/Tag.h b/src/analyzer/Tag.h
index d01c8902ee..9ba04b2ef8 100644
--- a/src/analyzer/Tag.h
+++ b/src/analyzer/Tag.h
@@ -3,7 +3,7 @@
#ifndef ANALYZER_TAG_H
#define ANALYZER_TAG_H
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "../Tag.h"
#include "plugin/TaggedComponent.h"
diff --git a/src/analyzer/protocol/CMakeLists.txt b/src/analyzer/protocol/CMakeLists.txt
index 9e824d42d2..a1f283af6e 100644
--- a/src/analyzer/protocol/CMakeLists.txt
+++ b/src/analyzer/protocol/CMakeLists.txt
@@ -31,6 +31,7 @@ add_subdirectory(pia)
add_subdirectory(pop3)
add_subdirectory(radius)
add_subdirectory(rdp)
+add_subdirectory(rfb)
add_subdirectory(rpc)
add_subdirectory(sip)
add_subdirectory(snmp)
diff --git a/src/analyzer/protocol/arp/ARP.h b/src/analyzer/protocol/arp/ARP.h
index 1778f5e200..c4deddee03 100644
--- a/src/analyzer/protocol/arp/ARP.h
+++ b/src/analyzer/protocol/arp/ARP.h
@@ -3,7 +3,7 @@
#ifndef ANALYZER_PROTOCOL_ARP_ARP_H
#define ANALYZER_PROTOCOL_ARP_ARP_H
-#include "config.h"
+#include "bro-config.h"
#include
#include
#include
diff --git a/src/analyzer/protocol/backdoor/BackDoor.cc b/src/analyzer/protocol/backdoor/BackDoor.cc
index 984b2a5dcf..4119b66121 100644
--- a/src/analyzer/protocol/backdoor/BackDoor.cc
+++ b/src/analyzer/protocol/backdoor/BackDoor.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "BackDoor.h"
#include "Event.h"
diff --git a/src/analyzer/protocol/dce-rpc/DCE_RPC.cc b/src/analyzer/protocol/dce-rpc/DCE_RPC.cc
index dd31cfa8a7..1d3b6ef0ef 100644
--- a/src/analyzer/protocol/dce-rpc/DCE_RPC.cc
+++ b/src/analyzer/protocol/dce-rpc/DCE_RPC.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/analyzer/protocol/dns/DNS.cc b/src/analyzer/protocol/dns/DNS.cc
index 0c5ef53000..1fc94a80ba 100644
--- a/src/analyzer/protocol/dns/DNS.cc
+++ b/src/analyzer/protocol/dns/DNS.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
@@ -282,6 +282,10 @@ int DNS_Interpreter::ParseAnswer(DNS_MsgInfo* msg,
status = ParseRR_TXT(msg, data, len, rdlength, msg_start);
break;
+ case TYPE_CAA:
+ status = ParseRR_CAA(msg, data, len, rdlength, msg_start);
+ break;
+
case TYPE_NBS:
status = ParseRR_NBS(msg, data, len, rdlength, msg_start);
break;
@@ -904,6 +908,51 @@ int DNS_Interpreter::ParseRR_TXT(DNS_MsgInfo* msg,
return rdlength == 0;
}
+int DNS_Interpreter::ParseRR_CAA(DNS_MsgInfo* msg,
+ const u_char*& data, int& len, int rdlength,
+ const u_char* msg_start)
+ {
+ if ( ! dns_CAA_reply || msg->skip_event )
+ {
+ data += rdlength;
+ len -= rdlength;
+ return 1;
+ }
+
+ unsigned int flags = ExtractShort(data, len);
+ unsigned int tagLen = flags & 0xff;
+ flags = flags >> 8;
+ rdlength -= 2;
+ if ( (int) tagLen >= rdlength )
+ {
+ analyzer->Weird("DNS_CAA_char_str_past_rdlen");
+ return 0;
+ }
+ BroString* tag = new BroString(data, tagLen, 1);
+ len -= tagLen;
+ data += tagLen;
+ rdlength -= tagLen;
+ BroString* value = new BroString(data, rdlength, 0);
+
+ len -= value->Len();
+ data += value->Len();
+ rdlength -= value->Len();
+
+ val_list* vl = new val_list;
+
+ vl->append(analyzer->BuildConnVal());
+ vl->append(msg->BuildHdrVal());
+ vl->append(msg->BuildAnswerVal());
+ vl->append(new Val(flags, TYPE_COUNT));
+ vl->append(new StringVal(tag));
+ vl->append(new StringVal(value));
+
+ analyzer->ConnectionEvent(dns_CAA_reply, vl);
+
+ return rdlength == 0;
+ }
+
+
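
 For orientation, the RDATA layout walked here is the one from RFC 6844: a
 flags byte, a tag-length byte, the tag itself, and the remaining bytes as the
 value. An illustrative record of "0 issue ca.example.net" would arrive as:

    00                               flags = 0 (critical bit clear)
    05                               tag length = 5
    69 73 73 75 65                   tag   = "issue"
    63 61 2e 65 78 61 6d 70 6c 65    value = "ca.example.net"
    2e 6e 65 74                              (everything left in the RDATA)

 The ExtractShort() call reads the first two of these bytes as one 16-bit
 value, which is why the code masks the tag length out of the low byte and
 shifts the flags down from the high byte.
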
void DNS_Interpreter::SendReplyOrRejectEvent(DNS_MsgInfo* msg,
EventHandlerPtr event,
const u_char*& data, int& len,
diff --git a/src/analyzer/protocol/dns/DNS.h b/src/analyzer/protocol/dns/DNS.h
index 59f51812ca..87618cd18e 100644
--- a/src/analyzer/protocol/dns/DNS.h
+++ b/src/analyzer/protocol/dns/DNS.h
@@ -56,6 +56,7 @@ typedef enum {
TYPE_EDNS = 41, ///< OPT pseudo-RR (RFC 2671)
TYPE_TKEY = 249, ///< Transaction Key (RFC 2930)
TYPE_TSIG = 250, ///< Transaction Signature (RFC 2845)
+ TYPE_CAA = 257, ///< Certification Authority Authorization (RFC 6844)
// The following are only valid in queries.
TYPE_AXFR = 252,
@@ -132,7 +133,7 @@ public:
StringVal* query_name;
RR_Type atype;
int aclass; ///< normally = 1, inet
- int ttl;
+ uint32 ttl;
DNS_AnswerType answer_type;
int skip_event; ///< if true, don't generate corresponding events
@@ -211,6 +212,9 @@ protected:
int ParseRR_TXT(DNS_MsgInfo* msg,
const u_char*& data, int& len, int rdlength,
const u_char* msg_start);
+ int ParseRR_CAA(DNS_MsgInfo* msg,
+ const u_char*& data, int& len, int rdlength,
+ const u_char* msg_start);
int ParseRR_TSIG(DNS_MsgInfo* msg,
const u_char*& data, int& len, int rdlength,
const u_char* msg_start);
diff --git a/src/analyzer/protocol/dns/events.bif b/src/analyzer/protocol/dns/events.bif
index 9350939a2e..ae796c8e4c 100644
--- a/src/analyzer/protocol/dns/events.bif
+++ b/src/analyzer/protocol/dns/events.bif
@@ -378,6 +378,25 @@ event dns_MX_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string,
## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth
event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec%);
+## Generated for DNS replies of type *CAA* (Certification Authority Authorization).
+## For replies with multiple answers, an individual event of the corresponding type
+## is raised for each.
+## See `RFC 6844 <http://tools.ietf.org/html/rfc6844>`__ for more details.
+##
+## c: The connection, which may be UDP or TCP depending on the type of the
+## transport-layer session being analyzed.
+##
+## msg: The parsed DNS message header.
+##
+## ans: The type-independent part of the parsed answer record.
+##
+## flags: The flags byte of the CAA reply.
+##
+## tag: The property identifier of the CAA reply.
+##
+## value: The property value of the CAA reply.
+event dns_CAA_reply%(c: connection, msg: dns_msg, ans: dns_answer, flags: count, tag: string, value: string%);
+
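
 A minimal usage sketch for the new event; the handler below only prints, and
 the output text is illustrative:

    event dns_CAA_reply(c: connection, msg: dns_msg, ans: dns_answer,
                        flags: count, tag: string, value: string)
        {
        # Sketch only: report each CAA answer as it is parsed.
        print fmt("CAA for %s: flags=%d tag=%s value=%s",
                  ans$query, flags, tag, value);
        }
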
## Generated for DNS replies of type *SRV*. For replies with multiple answers,
## an individual event of the corresponding type is raised for each.
##
diff --git a/src/analyzer/protocol/finger/Finger.cc b/src/analyzer/protocol/finger/Finger.cc
index bf9bdcc68a..a9818ff7af 100644
--- a/src/analyzer/protocol/finger/Finger.cc
+++ b/src/analyzer/protocol/finger/Finger.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
@@ -54,7 +54,9 @@ void Finger_Analyzer::DeliverStream(int length, const u_char* data, bool is_orig
if ( long_cnt )
line = skip_whitespace(line+2, end_of_line);
- const char* at = strchr_n(line, end_of_line, '@');
+ assert(line <= end_of_line);
+ size_t n = end_of_line >= line ? end_of_line - line : 0; // just to be sure if assertions aren't on.
+ const char* at = reinterpret_cast<const char*>(memchr(line, '@', n));
const char* host = 0;
if ( ! at )
at = host = end_of_line;
diff --git a/src/analyzer/protocol/ftp/FTP.cc b/src/analyzer/protocol/ftp/FTP.cc
index 91afe6f8a4..70d1be5777 100644
--- a/src/analyzer/protocol/ftp/FTP.cc
+++ b/src/analyzer/protocol/ftp/FTP.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
@@ -206,7 +206,7 @@ void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{
line = skip_whitespace(line + cmd_len, end_of_line);
StringVal encoded(end_of_line - line, line);
- decoded_adat = decode_base64(encoded.AsString());
+ decoded_adat = decode_base64(encoded.AsString(), 0, Conn());
if ( first_token )
{
@@ -273,7 +273,7 @@ void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{
line += 5;
StringVal encoded(end_of_line - line, line);
- decoded_adat = decode_base64(encoded.AsString());
+ decoded_adat = decode_base64(encoded.AsString(), 0, Conn());
}
break;
diff --git a/src/analyzer/protocol/gnutella/Gnutella.cc b/src/analyzer/protocol/gnutella/Gnutella.cc
index 84a33381a0..60c5475d4a 100644
--- a/src/analyzer/protocol/gnutella/Gnutella.cc
+++ b/src/analyzer/protocol/gnutella/Gnutella.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/analyzer/protocol/http/HTTP.cc b/src/analyzer/protocol/http/HTTP.cc
index ff72c6f350..490a9d2324 100644
--- a/src/analyzer/protocol/http/HTTP.cc
+++ b/src/analyzer/protocol/http/HTTP.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
@@ -995,28 +995,9 @@ void HTTP_Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)
HTTP_Reply();
- if ( connect_request && reply_code == 200 )
- {
- pia = new pia::PIA_TCP(Conn());
-
- if ( AddChildAnalyzer(pia) )
- {
- pia->FirstPacket(true, 0);
- pia->FirstPacket(false, 0);
-
- // This connection has transitioned to no longer
- // being http and the content line support analyzers
- // need to be removed.
- RemoveSupportAnalyzer(content_line_orig);
- RemoveSupportAnalyzer(content_line_resp);
-
- return;
- }
-
- else
- // AddChildAnalyzer() will have deleted PIA.
- pia = 0;
- }
+ if ( connect_request && reply_code != 200 )
+ // Request failed, do not set up tunnel.
+ connect_request = false;
InitHTTPMessage(content_line,
reply_message, is_orig,
@@ -1036,6 +1017,30 @@ void HTTP_Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)
case EXPECT_REPLY_MESSAGE:
reply_message->Deliver(len, line, 1);
+
+ if ( connect_request && len == 0 )
+ {
+ // End of message header reached, set up
+ // tunnel decapsulation.
+ pia = new pia::PIA_TCP(Conn());
+
+ if ( AddChildAnalyzer(pia) )
+ {
+ pia->FirstPacket(true, 0);
+ pia->FirstPacket(false, 0);
+
+ // This connection has transitioned to no longer
+ // being http and the content line support analyzers
+ // need to be removed.
+ RemoveSupportAnalyzer(content_line_orig);
+ RemoveSupportAnalyzer(content_line_resp);
+ }
+
+ else
+ // AddChildAnalyzer() will have deleted PIA.
+ pia = 0;
+ }
+
break;
case EXPECT_REPLY_TRAILER:
@@ -1204,7 +1209,15 @@ int HTTP_Analyzer::HTTP_RequestLine(const char* line, const char* end_of_line)
const char* end_of_method = get_HTTP_token(line, end_of_line);
if ( end_of_method == line )
+ {
+ // something went wrong with get_HTTP_token
+ // perform a weak test to see if the string "HTTP/"
+ // is found at the end of the RequestLine
+ if ( end_of_line - 9 >= line && strncasecmp(end_of_line - 9, " HTTP/", 6) == 0 )
+ goto bad_http_request_with_version;
+
goto error;
+ }
rest = skip_whitespace(end_of_method, end_of_line);
@@ -1225,6 +1238,10 @@ int HTTP_Analyzer::HTTP_RequestLine(const char* line, const char* end_of_line)
return 1;
+bad_http_request_with_version:
+ reporter->Weird(Conn(), "bad_HTTP_request_with_version");
+ return 0;
+
error:
reporter->Weird(Conn(), "bad_HTTP_request");
return 0;
@@ -1244,6 +1261,12 @@ int HTTP_Analyzer::ParseRequest(const char* line, const char* end_of_line)
break;
}
+ if ( end_of_uri >= end_of_line && PrefixMatch(line, end_of_line, "HTTP/") )
+ {
+ Weird("missing_HTTP_uri");
+ end_of_uri = line; // Leave URI empty.
+ }
+
for ( version_start = end_of_uri; version_start < end_of_line; ++version_start )
{
end_of_uri = version_start;
@@ -1359,7 +1382,7 @@ void HTTP_Analyzer::HTTP_Request()
const char* method = (const char*) request_method->AsString()->Bytes();
int method_len = request_method->AsString()->Len();
- if ( strcasecmp_n(method_len, method, "CONNECT") == 0 )
+ if ( strncasecmp(method, "CONNECT", method_len) == 0 )
connect_request = true;
if ( http_request )
@@ -1553,7 +1576,7 @@ int HTTP_Analyzer::ExpectReplyMessageBody()
const BroString* method = UnansweredRequestMethod();
- if ( method && strcasecmp_n(method->Len(), (const char*) (method->Bytes()), "HEAD") == 0 )
+ if ( method && strncasecmp((const char*) (method->Bytes()), "HEAD", method->Len()) == 0 )
return HTTP_BODY_NOT_EXPECTED;
if ( (reply_code >= 100 && reply_code < 200) ||
diff --git a/src/analyzer/protocol/icmp/ICMP.cc b/src/analyzer/protocol/icmp/ICMP.cc
index 84df7ab0d2..6a42e064d7 100644
--- a/src/analyzer/protocol/icmp/ICMP.cc
+++ b/src/analyzer/protocol/icmp/ICMP.cc
@@ -2,7 +2,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "Net.h"
#include "NetVar.h"
diff --git a/src/analyzer/protocol/ident/Ident.cc b/src/analyzer/protocol/ident/Ident.cc
index 8e25775af8..f668be921c 100644
--- a/src/analyzer/protocol/ident/Ident.cc
+++ b/src/analyzer/protocol/ident/Ident.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
@@ -153,8 +153,10 @@ void Ident_Analyzer::DeliverStream(int length, const u_char* data, bool is_orig)
else
{
const char* sys_type = line;
- const char* colon = strchr_n(line, end_of_line, ':');
- const char* comma = strchr_n(line, end_of_line, ',');
+ assert(line <= end_of_line);
+ size_t n = end_of_line >= line ? end_of_line - line : 0; // just to be sure if assertions aren't on.
+ const char* colon = reinterpret_cast<const char*>(memchr(line, ':', n));
+ const char* comma = reinterpret_cast<const char*>(memchr(line, ',', n));
if ( ! colon )
{
BadReply(length, orig_line);
diff --git a/src/analyzer/protocol/interconn/InterConn.cc b/src/analyzer/protocol/interconn/InterConn.cc
index eb529cbb6d..c1bf0f37f5 100644
--- a/src/analyzer/protocol/interconn/InterConn.cc
+++ b/src/analyzer/protocol/interconn/InterConn.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "InterConn.h"
#include "Event.h"
diff --git a/src/analyzer/protocol/irc/IRC.cc b/src/analyzer/protocol/irc/IRC.cc
index d621ce2cce..a26045f250 100644
--- a/src/analyzer/protocol/irc/IRC.cc
+++ b/src/analyzer/protocol/irc/IRC.cc
@@ -2,7 +2,6 @@
#include
#include "IRC.h"
-#include "analyzer/protocol/tcp/ContentLine.h"
#include "NetVar.h"
#include "Event.h"
#include "analyzer/protocol/zip/ZIP.h"
@@ -21,8 +20,11 @@ IRC_Analyzer::IRC_Analyzer(Connection* conn)
resp_status = WAIT_FOR_REGISTRATION;
orig_zip_status = NO_ZIP;
resp_zip_status = NO_ZIP;
- AddSupportAnalyzer(new tcp::ContentLine_Analyzer(conn, true));
- AddSupportAnalyzer(new tcp::ContentLine_Analyzer(conn, false));
+ starttls = false;
+ cl_orig = new tcp::ContentLine_Analyzer(conn, true);
+ AddSupportAnalyzer(cl_orig);
+ cl_resp = new tcp::ContentLine_Analyzer(conn, false);
+ AddSupportAnalyzer(cl_resp);
}
void IRC_Analyzer::Done()
@@ -30,10 +32,25 @@ void IRC_Analyzer::Done()
tcp::TCP_ApplicationAnalyzer::Done();
}
+inline void IRC_Analyzer::SkipLeadingWhitespace(string& str)
+ {
+ const auto first_char = str.find_first_not_of(" ");
+ if ( first_char == string::npos )
+ str = "";
+ else
+ str = str.substr(first_char);
+ }
+
void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
{
tcp::TCP_ApplicationAnalyzer::DeliverStream(length, line, orig);
+ if ( starttls )
+ {
+ ForwardStream(length, line, orig);
+ return;
+ }
+
// check line size
if ( length > 512 )
{
@@ -41,20 +58,21 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
return;
}
- if ( length < 2 )
+ string myline = string((const char*) line, length);
+ SkipLeadingWhitespace(myline);
+
+ if ( myline.length() < 3 )
{
Weird("irc_line_too_short");
return;
}
- string myline = string((const char*) line);
-
// Check for prefix.
string prefix = "";
- if ( line[0] == ':' )
+ if ( myline[0] == ':' )
{ // find end of prefix and extract it
- unsigned int pos = myline.find(' ');
- if ( pos > (unsigned int) length )
+ auto pos = myline.find(' ');
+ if ( pos == string::npos )
{
Weird("irc_invalid_line");
return;
@@ -62,9 +80,9 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
prefix = myline.substr(1, pos - 1);
myline = myline.substr(pos + 1); // remove prefix from line
+ SkipLeadingWhitespace(myline);
}
-
if ( orig )
ProtocolConfirmation();
@@ -72,7 +90,8 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
string command = "";
// Check if line is long enough to include status code or command.
- if ( myline.size() < 4 )
+ // (shortest command with optional params is "WHO")
+ if ( myline.length() < 3 )
{
Weird("irc_invalid_line");
ProtocolViolation("line too short");
@@ -98,23 +117,30 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
}
else
{ // get command
- unsigned int pos = myline.find(' ');
- if ( pos > (unsigned int) length )
- {
- Weird("irc_invalid_line");
- return;
- }
+ auto pos = myline.find(' ');
+ // Not all commands require parameters
+ if ( pos == string::npos )
+ pos = myline.length();
command = myline.substr(0, pos);
for ( unsigned int i = 0; i < command.size(); ++i )
command[i] = toupper(command[i]);
+ // Adjust for the no-parameter case
+ if ( pos == myline.length() )
+ pos--;
+
myline = myline.substr(pos + 1);
+ SkipLeadingWhitespace(myline);
}
// Extract parameters.
string params = myline;
+ // special case
+ if ( command == "STARTTLS" )
+ return;
+
// Check for Server2Server - connections with ZIP enabled.
if ( orig && orig_status == WAIT_FOR_REGISTRATION )
{
@@ -135,7 +161,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
//
// (### This seems not quite prudent to me - VP)
if ( command == "SERVER" && prefix == "")
- {
+ {
orig_status = REGISTERED;
}
}
@@ -143,7 +169,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
if ( ! orig && resp_status == WAIT_FOR_REGISTRATION )
{
if ( command == "PASS" )
- {
+ {
vector<string> p = SplitWords(params,' ');
if ( p.size() > 3 &&
(p[3].find('Z')<=p[3].size() ||
@@ -255,7 +281,9 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
{
if ( parts[i][0] == '@' )
parts[i] = parts[i].substr(1);
- set->Assign(new StringVal(parts[i].c_str()), 0);
+ Val* idx = new StringVal(parts[i].c_str());
+ set->Assign(idx, 0);
+ Unref(idx);
}
vl->append(set);
@@ -556,6 +584,11 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
}
break;
+ case 670:
+ // StartTLS success reply to StartTLS
+ StartTLS();
+ break;
+
// All other server replies.
default:
val_list* vl = new val_list;
@@ -584,7 +617,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
return;
}
- else if ( irc_privmsg_message || (irc_dcc_message && command == "PRIVMSG") )
+ else if ( ( irc_privmsg_message || irc_dcc_message ) && command == "PRIVMSG")
{
unsigned int pos = params.find(' ');
if ( pos >= params.size() )
@@ -595,6 +628,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
string target = params.substr(0, pos);
string message = params.substr(pos + 1);
+ SkipLeadingWhitespace(message);
if ( message.size() > 0 && message[0] == ':' )
message = message.substr(1);
@@ -669,6 +703,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
string target = params.substr(0, pos);
string message = params.substr(pos + 1);
+ SkipLeadingWhitespace(message);
if ( message[0] == ':' )
message = message.substr(1);
@@ -693,6 +728,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
string target = params.substr(0, pos);
string message = params.substr(pos + 1);
+ SkipLeadingWhitespace(message);
if ( message[0] == ':' )
message = message.substr(1);
@@ -918,7 +954,10 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
{
channels = params.substr(0, pos);
if ( params.size() > pos + 1 )
+ {
message = params.substr(pos + 1);
+ SkipLeadingWhitespace(message);
+ }
if ( message[0] == ':' )
message = message.substr(1);
}
@@ -965,7 +1004,6 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
val_list* vl = new val_list;
vl->append(BuildConnVal());
vl->append(new Val(orig, TYPE_BOOL));
- vl->append(new Val(orig, TYPE_BOOL));
vl->append(new StringVal(nickname.c_str()));
vl->append(new StringVal(message.c_str()));
@@ -990,7 +1028,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
else if ( irc_who_message && command == "WHO" )
{
vector<string> parts = SplitWords(params, ' ');
- if ( parts.size() < 1 || parts.size() > 2 )
+ if ( parts.size() > 2 )
{
Weird("irc_invalid_who_message_format");
return;
@@ -1001,13 +1039,16 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
oper = true;
// Remove ":" from mask.
- if ( parts[0].size() > 0 && parts[0][0] == ':' )
+ if ( parts.size() > 0 && parts[0].size() > 0 && parts[0][0] == ':' )
parts[0] = parts[0].substr(1);
val_list* vl = new val_list;
vl->append(BuildConnVal());
vl->append(new Val(orig, TYPE_BOOL));
- vl->append(new StringVal(parts[0].c_str()));
+ if ( parts.size() > 0 )
+ vl->append(new StringVal(parts[0].c_str()));
+ else
+ vl->append(new StringVal(""));
vl->append(new Val(oper, TYPE_BOOL));
ConnectionEvent(irc_who_message, vl);
@@ -1112,6 +1153,7 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
{
server = params.substr(0, pos);
message = params.substr(pos + 1);
+ SkipLeadingWhitespace(message);
if ( message[0] == ':' )
message = message.substr(1);
}
@@ -1169,6 +1211,25 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig)
return;
}
+void IRC_Analyzer::StartTLS()
+ {
+ // STARTTLS was successful. Remove support analyzers, add SSL
+ // analyzer, and throw event signifying the change.
+ starttls = true;
+
+ RemoveSupportAnalyzer(cl_orig);
+ RemoveSupportAnalyzer(cl_resp);
+
+ Analyzer* ssl = analyzer_mgr->InstantiateAnalyzer("SSL", Conn());
+ if ( ssl )
+ AddChildAnalyzer(ssl);
+
+ val_list* vl = new val_list;
+ vl->append(BuildConnVal());
+
+ ConnectionEvent(irc_starttls, vl);
+ }
+
vector<string> IRC_Analyzer::SplitWords(const string input, const char split)
{
vector<string> words;
diff --git a/src/analyzer/protocol/irc/IRC.h b/src/analyzer/protocol/irc/IRC.h
index bce9cdf054..497225846d 100644
--- a/src/analyzer/protocol/irc/IRC.h
+++ b/src/analyzer/protocol/irc/IRC.h
@@ -3,6 +3,7 @@
#ifndef ANALYZER_PROTOCOL_IRC_IRC_H
#define ANALYZER_PROTOCOL_IRC_IRC_H
#include "analyzer/protocol/tcp/TCP.h"
+#include "analyzer/protocol/tcp/ContentLine.h"
namespace analyzer { namespace irc {
@@ -21,7 +22,7 @@ public:
/**
* \brief Called when connection is closed.
*/
- virtual void Done();
+ void Done() override;
/**
* \brief New input line in network stream.
@@ -30,7 +31,7 @@ public:
* \param data pointer to line start
* \param orig was this data sent from connection originator?
*/
- virtual void DeliverStream(int len, const u_char* data, bool orig);
+ void DeliverStream(int len, const u_char* data, bool orig) override;
static analyzer::Analyzer* Instantiate(Connection* conn)
{
@@ -44,6 +45,10 @@ protected:
int resp_zip_status;
private:
+ void StartTLS();
+
+ inline void SkipLeadingWhitespace(string& str);
+
/** \brief counts number of invalid IRC messages */
int invalid_msg_count;
@@ -60,6 +65,9 @@ private:
*/
vector<string> SplitWords(const string input, const char split);
+ tcp::ContentLine_Analyzer* cl_orig;
+ tcp::ContentLine_Analyzer* cl_resp;
+ bool starttls; // if true, connection has been upgraded to tls
};
} } // namespace analyzer::*
diff --git a/src/analyzer/protocol/irc/events.bif b/src/analyzer/protocol/irc/events.bif
index 4e69b9ad33..be425817b2 100644
--- a/src/analyzer/protocol/irc/events.bif
+++ b/src/analyzer/protocol/irc/events.bif
@@ -797,3 +797,10 @@ event irc_user_message%(c: connection, is_orig: bool, user: string, host: string
## irc_nick_message irc_notice_message irc_oper_message irc_oper_response
## irc_part_message
event irc_password_message%(c: connection, is_orig: bool, password: string%);
+
+## Generated if an IRC connection switched to TLS using STARTTLS. After this
+## event no more IRC events will be raised for the connection. See the SSL
+## analyzer for related SSL events, which will now be generated.
+##
+## c: The connection.
+event irc_starttls%(c: connection%);
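
 A minimal handler sketch (output text is illustrative only); once this event
 fires, the SSL analyzer's events describe the rest of the connection:

    event irc_starttls(c: connection)
        {
        # Sketch only: note the upgrade; later activity shows up via SSL events.
        print fmt("IRC session %s -> %s upgraded to TLS",
                  c$id$orig_h, c$id$resp_h);
        }
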
diff --git a/src/analyzer/protocol/login/Login.cc b/src/analyzer/protocol/login/Login.cc
index 8dcb7dba55..c39c4cf383 100644
--- a/src/analyzer/protocol/login/Login.cc
+++ b/src/analyzer/protocol/login/Login.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/analyzer/protocol/login/NVT.cc b/src/analyzer/protocol/login/NVT.cc
index 462cd42177..11952103bf 100644
--- a/src/analyzer/protocol/login/NVT.cc
+++ b/src/analyzer/protocol/login/NVT.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/analyzer/protocol/login/RSH.cc b/src/analyzer/protocol/login/RSH.cc
index f768f4bdc2..ff8e6bad3e 100644
--- a/src/analyzer/protocol/login/RSH.cc
+++ b/src/analyzer/protocol/login/RSH.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "Event.h"
@@ -93,8 +93,7 @@ void Contents_Rsh_Analyzer::DoDeliver(int len, const u_char* data)
case RSH_LINE_MODE:
case RSH_UNKNOWN:
case RSH_PRESUMED_REJECTED:
- if ( state == RSH_LINE_MODE &&
- state == RSH_PRESUMED_REJECTED )
+ if ( state == RSH_PRESUMED_REJECTED )
{
Conn()->Weird("rsh_text_after_rejected");
state = RSH_UNKNOWN;
diff --git a/src/analyzer/protocol/login/Rlogin.cc b/src/analyzer/protocol/login/Rlogin.cc
index d90c9be123..6979148676 100644
--- a/src/analyzer/protocol/login/Rlogin.cc
+++ b/src/analyzer/protocol/login/Rlogin.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "Event.h"
diff --git a/src/analyzer/protocol/login/Telnet.cc b/src/analyzer/protocol/login/Telnet.cc
index c22b2afc5e..78a3289931 100644
--- a/src/analyzer/protocol/login/Telnet.cc
+++ b/src/analyzer/protocol/login/Telnet.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "Telnet.h"
#include "NVT.h"
diff --git a/src/analyzer/protocol/mime/MIME.cc b/src/analyzer/protocol/mime/MIME.cc
index cbc1abd17d..bcdfe03248 100644
--- a/src/analyzer/protocol/mime/MIME.cc
+++ b/src/analyzer/protocol/mime/MIME.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "MIME.h"
@@ -148,7 +148,7 @@ void MIME_Mail::Undelivered(int len)
int strcasecmp_n(data_chunk_t s, const char* t)
{
- return ::strcasecmp_n(s.length, s.data, t);
+ return strncasecmp(s.data, t, s.length);
}
int MIME_count_leading_lws(int len, const char* data)
@@ -248,9 +248,7 @@ int MIME_get_field_name(int len, const char* data, data_chunk_t* name)
int MIME_is_tspecial (char ch, bool is_boundary = false)
{
if ( is_boundary )
- return ch == '(' || ch == ')' || ch == '@' ||
- ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
- ch == '/' || ch == '[' || ch == ']' || ch == '?' || ch == '=';
+ return ch == '"';
else
return ch == '(' || ch == ')' || ch == '<' || ch == '>' || ch == '@' ||
ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
@@ -272,7 +270,11 @@ int MIME_is_token_char (char ch, bool is_boundary = false)
int MIME_get_token(int len, const char* data, data_chunk_t* token,
bool is_boundary)
{
- int i = MIME_skip_lws_comments(len, data);
+ int i = 0;
+
+ if ( ! is_boundary )
+ i = MIME_skip_lws_comments(len, data);
+
while ( i < len )
{
int j;
@@ -366,7 +368,10 @@ int MIME_get_quoted_string(int len, const char* data, data_chunk_t* str)
int MIME_get_value(int len, const char* data, BroString*& buf, bool is_boundary)
{
- int offset = MIME_skip_lws_comments(len, data);
+ int offset = 0;
+
+ if ( ! is_boundary ) // For boundaries, simply accept everything.
+ offset = MIME_skip_lws_comments(len, data);
len -= offset;
data += offset;
@@ -876,6 +881,13 @@ int MIME_Entity::ParseFieldParameters(int len, const char* data)
// token or quoted-string (and some lenience for characters
// not explicitly allowed by the RFC, but encountered in the wild)
offset = MIME_get_value(len, data, val, true);
+
+ if ( ! val )
+ {
+ IllegalFormat("Could not parse multipart boundary");
+ continue;
+ }
+
data_chunk_t vd = get_data_chunk(val);
multipart_boundary = new BroString((const u_char*)vd.data,
vd.length, 1);
@@ -1122,7 +1134,15 @@ void MIME_Entity::StartDecodeBase64()
delete base64_decoder;
}
- base64_decoder = new Base64Converter(message->GetAnalyzer());
+ analyzer::Analyzer* analyzer = message->GetAnalyzer();
+
+ if ( ! analyzer )
+ {
+ reporter->InternalWarning("no analyzer associated with MIME message");
+ return;
+ }
+
+ base64_decoder = new Base64Converter(analyzer->Conn());
}
void MIME_Entity::FinishDecodeBase64()
diff --git a/src/analyzer/protocol/mysql/mysql-analyzer.pac b/src/analyzer/protocol/mysql/mysql-analyzer.pac
index 2108401436..66710fb2bb 100644
--- a/src/analyzer/protocol/mysql/mysql-analyzer.pac
+++ b/src/analyzer/protocol/mysql/mysql-analyzer.pac
@@ -19,6 +19,9 @@ refine flow MySQL_Flow += {
function proc_mysql_handshake_response_packet(msg: Handshake_Response_Packet): bool
%{
+ if ( ${msg.version} == 9 || ${msg.version} == 10 )
+ connection()->bro_analyzer()->ProtocolConfirmation();
+
if ( mysql_handshake )
{
if ( ${msg.version} == 10 )
diff --git a/src/analyzer/protocol/ncp/NCP.cc b/src/analyzer/protocol/ncp/NCP.cc
index 3858f0b2ad..4605ad2bca 100644
--- a/src/analyzer/protocol/ncp/NCP.cc
+++ b/src/analyzer/protocol/ncp/NCP.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/analyzer/protocol/netbios/NetbiosSSN.cc b/src/analyzer/protocol/netbios/NetbiosSSN.cc
index d65a152b2f..a75c23525c 100644
--- a/src/analyzer/protocol/netbios/NetbiosSSN.cc
+++ b/src/analyzer/protocol/netbios/NetbiosSSN.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/analyzer/protocol/ntp/NTP.cc b/src/analyzer/protocol/ntp/NTP.cc
index 5778da9a0e..d46972b8cb 100644
--- a/src/analyzer/protocol/ntp/NTP.cc
+++ b/src/analyzer/protocol/ntp/NTP.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "NTP.h"
diff --git a/src/analyzer/protocol/pia/PIA.cc b/src/analyzer/protocol/pia/PIA.cc
index 1adeb54a2d..7d73624dd0 100644
--- a/src/analyzer/protocol/pia/PIA.cc
+++ b/src/analyzer/protocol/pia/PIA.cc
@@ -1,5 +1,6 @@
#include "PIA.h"
#include "RuleMatcher.h"
+#include "analyzer/protocol/tcp/TCP_Flags.h"
#include "analyzer/protocol/tcp/TCP_Reassembler.h"
#include "events.bif.h"
@@ -348,12 +349,16 @@ void PIA_TCP::ActivateAnalyzer(analyzer::Tag tag, const Rule* rule)
for ( DataBlock* b = pkt_buffer.head; b; b = b->next )
{
+ // We don't have the TCP flags here during replay. We could
+ // funnel them through, but it's non-trivial and doesn't seem
+ // worth the effort.
+
if ( b->is_orig )
reass_orig->DataSent(network_time, orig_seq = b->seq,
- b->len, b->data, true);
+ b->len, b->data, tcp::TCP_Flags(), true);
else
reass_resp->DataSent(network_time, resp_seq = b->seq,
- b->len, b->data, true);
+ b->len, b->data, tcp::TCP_Flags(), true);
}
// We also need to pass the current packet on.
@@ -363,11 +368,11 @@ void PIA_TCP::ActivateAnalyzer(analyzer::Tag tag, const Rule* rule)
if ( current->is_orig )
reass_orig->DataSent(network_time,
orig_seq = current->seq,
- current->len, current->data, true);
+ current->len, current->data, analyzer::tcp::TCP_Flags(), true);
else
reass_resp->DataSent(network_time,
resp_seq = current->seq,
- current->len, current->data, true);
+ current->len, current->data, analyzer::tcp::TCP_Flags(), true);
}
ClearBuffer(&pkt_buffer);
diff --git a/src/analyzer/protocol/pop3/POP3.cc b/src/analyzer/protocol/pop3/POP3.cc
index 05ea4434d3..b7d6aa0dcb 100644
--- a/src/analyzer/protocol/pop3/POP3.cc
+++ b/src/analyzer/protocol/pop3/POP3.cc
@@ -1,7 +1,7 @@
// This code contributed to Bro by Florian Schimandl, Hugh Dollman and
// Robin Sommer.
-#include "config.h"
+#include "bro-config.h"
#include
#include
@@ -137,7 +137,7 @@ void POP3_Analyzer::ProcessRequest(int length, const char* line)
++authLines;
BroString encoded(line);
- BroString* decoded = decode_base64(&encoded);
+ BroString* decoded = decode_base64(&encoded, 0, Conn());
if ( ! decoded )
{
@@ -720,14 +720,18 @@ void POP3_Analyzer::ProcessReply(int length, const char* line)
break;
}
+ case CAPA:
+ ProtocolConfirmation();
+ // Fall-through.
+
case UIDL:
case LIST:
- case CAPA:
if (requestForMultiLine == true)
multiLine = true;
break;
case STLS:
+ ProtocolConfirmation();
tls = true;
StartTLS();
return;
diff --git a/src/analyzer/protocol/rdp/rdp-analyzer.pac b/src/analyzer/protocol/rdp/rdp-analyzer.pac
index a70d55fb7b..fdfb8c44fc 100644
--- a/src/analyzer/protocol/rdp/rdp-analyzer.pac
+++ b/src/analyzer/protocol/rdp/rdp-analyzer.pac
@@ -9,9 +9,8 @@ refine flow RDP_Flow += {
function utf16_to_utf8_val(utf16: bytestring): StringVal
%{
std::string resultstring;
- size_t widesize = utf16.length();
- size_t utf8size = 3 * widesize + 1;
+ size_t utf8size = (3 * utf16.length() + 1);
if ( utf8size > resultstring.max_size() )
{
@@ -20,8 +19,16 @@ refine flow RDP_Flow += {
}
resultstring.resize(utf8size, '\0');
- const UTF16* sourcestart = reinterpret_cast<const UTF16*>(utf16.begin());
- const UTF16* sourceend = sourcestart + widesize;
+
+ // We can't assume that the string data is properly aligned
+ // here, so make a copy.
+ UTF16 utf16_copy[utf16.length()]; // Twice as much memory as necessary.
+ memcpy(utf16_copy, utf16.begin(), utf16.length());
+
+ const char* utf16_copy_end = reinterpret_cast<const char*>(utf16_copy) + utf16.length();
+ const UTF16* sourcestart = utf16_copy;
+ const UTF16* sourceend = reinterpret_cast<const UTF16*>(utf16_copy_end);
+
UTF8* targetstart = reinterpret_cast<UTF8*>(&resultstring[0]);
UTF8* targetend = targetstart + utf8size;
@@ -37,6 +44,7 @@ refine flow RDP_Flow += {
}
*targetstart = 0;
+
// We're relying on no nulls being in the string.
return new StringVal(resultstring.c_str());
%}
diff --git a/src/analyzer/protocol/rfb/CMakeLists.txt b/src/analyzer/protocol/rfb/CMakeLists.txt
new file mode 100644
index 0000000000..28523bfe2d
--- /dev/null
+++ b/src/analyzer/protocol/rfb/CMakeLists.txt
@@ -0,0 +1,9 @@
+include(BroPlugin)
+
+include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR})
+
+bro_plugin_begin(Bro RFB)
+ bro_plugin_cc(RFB.cc Plugin.cc)
+ bro_plugin_bif(events.bif)
+ bro_plugin_pac(rfb.pac rfb-analyzer.pac rfb-protocol.pac)
+bro_plugin_end()
\ No newline at end of file
diff --git a/src/analyzer/protocol/rfb/Plugin.cc b/src/analyzer/protocol/rfb/Plugin.cc
new file mode 100644
index 0000000000..b3bed0f093
--- /dev/null
+++ b/src/analyzer/protocol/rfb/Plugin.cc
@@ -0,0 +1,23 @@
+#include "plugin/Plugin.h"
+
+#include "RFB.h"
+
+namespace plugin {
+namespace Bro_RFB {
+
+class Plugin : public plugin::Plugin {
+public:
+ plugin::Configuration Configure()
+ {
+ AddComponent(new ::analyzer::Component("RFB",
+ ::analyzer::rfb::RFB_Analyzer::InstantiateAnalyzer));
+
+ plugin::Configuration config;
+ config.name = "Bro::RFB";
+ config.description = "Parser for rfb (VNC) analyzer";
+ return config;
+ }
+} plugin;
+
+}
+}
\ No newline at end of file
diff --git a/src/analyzer/protocol/rfb/RFB.cc b/src/analyzer/protocol/rfb/RFB.cc
new file mode 100644
index 0000000000..2669d6ed56
--- /dev/null
+++ b/src/analyzer/protocol/rfb/RFB.cc
@@ -0,0 +1,67 @@
+#include "RFB.h"
+
+#include "analyzer/protocol/tcp/TCP_Reassembler.h"
+
+#include "Reporter.h"
+
+#include "events.bif.h"
+
+using namespace analyzer::rfb;
+
+RFB_Analyzer::RFB_Analyzer(Connection* c)
+
+: tcp::TCP_ApplicationAnalyzer("RFB", c)
+
+ {
+ interp = new binpac::RFB::RFB_Conn(this);
+ had_gap = false;
+ }
+
+RFB_Analyzer::~RFB_Analyzer()
+ {
+ delete interp;
+ }
+
+void RFB_Analyzer::Done()
+ {
+ tcp::TCP_ApplicationAnalyzer::Done();
+
+ interp->FlowEOF(true);
+ interp->FlowEOF(false);
+
+ }
+
+void RFB_Analyzer::EndpointEOF(bool is_orig)
+ {
+ tcp::TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
+ interp->FlowEOF(is_orig);
+ }
+
+void RFB_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
+ {
+ tcp::TCP_ApplicationAnalyzer::DeliverStream(len, data, orig);
+ assert(TCP());
+ if ( TCP()->IsPartial() )
+ return;
+
+ if ( had_gap )
+ // If only one side had a content gap, we could still try to
+ // deliver data to the other side if the script layer can handle this.
+ return;
+
+ try
+ {
+ interp->NewData(orig, data, data + len);
+ }
+ catch ( const binpac::Exception& e )
+ {
+ ProtocolViolation(fmt("Binpac exception: %s", e.c_msg()));
+ }
+ }
+
+void RFB_Analyzer::Undelivered(uint64 seq, int len, bool orig)
+ {
+ tcp::TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);
+ had_gap = true;
+ interp->NewGap(orig, len);
+ }
diff --git a/src/analyzer/protocol/rfb/RFB.h b/src/analyzer/protocol/rfb/RFB.h
new file mode 100644
index 0000000000..88a17eea5a
--- /dev/null
+++ b/src/analyzer/protocol/rfb/RFB.h
@@ -0,0 +1,43 @@
+#ifndef ANALYZER_PROTOCOL_RFB_RFB_H
+#define ANALYZER_PROTOCOL_RFB_RFB_H
+
+#include "events.bif.h"
+
+
+#include "analyzer/protocol/tcp/TCP.h"
+
+#include "rfb_pac.h"
+
+namespace analyzer { namespace rfb {
+
+class RFB_Analyzer
+
+: public tcp::TCP_ApplicationAnalyzer {
+
+public:
+ RFB_Analyzer(Connection* conn);
+ virtual ~RFB_Analyzer();
+
+ // Overriden from Analyzer.
+ virtual void Done();
+
+ virtual void DeliverStream(int len, const u_char* data, bool orig);
+ virtual void Undelivered(uint64 seq, int len, bool orig);
+
+ // Overriden from tcp::TCP_ApplicationAnalyzer.
+ virtual void EndpointEOF(bool is_orig);
+
+
+ static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
+ { return new RFB_Analyzer(conn); }
+
+protected:
+ binpac::RFB::RFB_Conn* interp;
+
+ bool had_gap;
+
+};
+
+} } // namespace analyzer::*
+
+#endif
diff --git a/src/analyzer/protocol/rfb/events.bif b/src/analyzer/protocol/rfb/events.bif
new file mode 100644
index 0000000000..4a5bb40121
--- /dev/null
+++ b/src/analyzer/protocol/rfb/events.bif
@@ -0,0 +1,50 @@
+## Generated for RFB event
+##
+## c: The connection record for the underlying transport-layer session/flow.
+event rfb_event%(c: connection%);
+
+## Generated for RFB event authentication mechanism selection
+##
+## c: The connection record for the underlying transport-layer session/flow.
+##
+## authtype: the value of the chosen authentication mechanism
+event rfb_authentication_type%(c: connection, authtype: count%);
+
+## Generated for RFB event authentication result message
+##
+## c: The connection record for the underlying transport-layer session/flow.
+##
+## result: whether or not authentication was successful
+event rfb_auth_result%(c: connection, result: bool%);
+
+## Generated for RFB event share flag messages
+##
+## c: The connection record for the underlying transport-layer session/flow.
+##
+## flag: whether or not the share flag was set
+event rfb_share_flag%(c: connection, flag: bool%);
+
+## Generated for RFB event client banner message
+##
+## c: The connection record for the underlying transport-layer session/flow.
+##
+## version: of the client's rfb library
+event rfb_client_version%(c: connection, major_version: string, minor_version: string%);
+
+## Generated for RFB event server banner message
+##
+## c: The connection record for the underlying transport-layer session/flow.
+##
+## version: of the server's rfb library
+event rfb_server_version%(c: connection, major_version: string, minor_version: string%);
+
+## Generated for RFB event server parameter message
+##
+## c: The connection record for the underlying transport-layer session/flow.
+##
+## name: name of the shared screen
+##
+## width: width of the shared screen
+##
+## height: height of the shared screen
+event rfb_server_parameters%(c: connection, name: string, width: count, height: count%);
\ No newline at end of file
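
 A short usage sketch for two of the new events; handler bodies and output
 text are illustrative only:

    event rfb_client_version(c: connection, major_version: string, minor_version: string)
        {
        # Sketch only: record the protocol version the client announces.
        print fmt("RFB client at %s speaks %s.%s",
                  c$id$orig_h, major_version, minor_version);
        }

    event rfb_auth_result(c: connection, result: bool)
        {
        # Sketch only: surface the documented authentication result flag.
        print fmt("RFB authentication result for %s: %s", c$id$resp_h, result);
        }
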
diff --git a/src/analyzer/protocol/rfb/rfb-analyzer.pac b/src/analyzer/protocol/rfb/rfb-analyzer.pac
new file mode 100644
index 0000000000..cd24ea0ced
--- /dev/null
+++ b/src/analyzer/protocol/rfb/rfb-analyzer.pac
@@ -0,0 +1,199 @@
+refine flow RFB_Flow += {
+ function proc_rfb_message(msg: RFB_PDU): bool
+ %{
+ BifEvent::generate_rfb_event(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn());
+ return true;
+ %}
+
+ function proc_rfb_version(client: bool, major: bytestring, minor: bytestring) : bool
+ %{
+ if (client)
+ {
+ BifEvent::generate_rfb_client_version(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), bytestring_to_val(major), bytestring_to_val(minor));
+
+ connection()->bro_analyzer()->ProtocolConfirmation();
+ }
+ else
+ {
+ BifEvent::generate_rfb_server_version(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), bytestring_to_val(major), bytestring_to_val(minor));
+ }
+ return true;
+ %}
+
+ function proc_rfb_share_flag(shared: bool) : bool
+ %{
+ BifEvent::generate_rfb_share_flag(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), shared);
+ return true;
+ %}
+
+ function proc_security_types(msg: RFBSecurityTypes) : bool
+ %{
+ BifEvent::generate_rfb_authentication_type(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), ${msg.sectype});
+ return true;
+ %}
+
+ function proc_security_types37(msg: RFBAuthTypeSelected) : bool
+ %{
+ BifEvent::generate_rfb_authentication_type(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), ${msg.type});
+ return true;
+ %}
+
+ function proc_handle_server_params(msg:RFBServerInit) : bool
+ %{
+ BifEvent::generate_rfb_server_parameters(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), bytestring_to_val(${msg.name}), ${msg.width}, ${msg.height});
+ return true;
+ %}
+
+ function proc_handle_security_result(result : uint32) : bool
+ %{
+ BifEvent::generate_rfb_auth_result(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), result);
+ return true;
+ %}
+};
+
+refine connection RFB_Conn += {
+ %member{
+ enum states {
+ AWAITING_SERVER_BANNER = 0,
+ AWAITING_CLIENT_BANNER = 1,
+ AWAITING_SERVER_AUTH_TYPES = 2,
+ AWAITING_SERVER_CHALLENGE = 3,
+ AWAITING_CLIENT_RESPONSE = 4,
+ AWAITING_SERVER_AUTH_RESULT = 5,
+ AWAITING_CLIENT_SHARE_FLAG = 6,
+ AWAITING_SERVER_PARAMS = 7,
+ AWAITING_CLIENT_AUTH_METHOD = 8,
+ AWAITING_SERVER_ARD_CHALLENGE = 9,
+ AWAITING_CLIENT_ARD_RESPONSE = 10,
+ AWAITING_SERVER_AUTH_TYPES37 = 11,
+ AWAITING_CLIENT_AUTH_TYPE_SELECTED37 = 12,
+ RFB_MESSAGE = 13
+ };
+ %}
+
+ function get_state(client: bool) : int
+ %{
+ return state;
+ %}
+
+ function handle_banners(client: bool, msg: RFBProtocolVersion) : bool
+ %{
+ if ( client )
+ {
+ // Set protocol version on client's version
+ int minor_version = bytestring_to_int(${msg.minor_ver},10);
+ version = minor_version;
+
+ // Apple specifies minor version "889" but talks v37
+ if ( minor_version >= 7 )
+ state = AWAITING_SERVER_AUTH_TYPES37;
+ else
+ state = AWAITING_SERVER_AUTH_TYPES;
+ }
+ else
+ state = AWAITING_CLIENT_BANNER;
+
+ return true;
+ %}
+
+ function handle_ard_challenge() : bool
+ %{
+ state = AWAITING_CLIENT_ARD_RESPONSE;
+ return true;
+ %}
+
+ function handle_ard_response() : bool
+ %{
+ state = AWAITING_SERVER_AUTH_RESULT;
+ return true;
+ %}
+
+ function handle_auth_request() : bool
+ %{
+ state = AWAITING_CLIENT_RESPONSE;
+ return true;
+ %}
+
+ function handle_auth_response() : bool
+ %{
+ state = AWAITING_SERVER_AUTH_RESULT;
+ return true;
+ %}
+
+ function handle_security_result(msg: RFBSecurityResult) : bool
+ %{
+ if ( ${msg.result} == 0 )
+ {
+ state = AWAITING_CLIENT_SHARE_FLAG;
+ }
+ return true;
+ %}
+
+ function handle_client_init(msg: RFBClientInit) : bool
+ %{
+ state = AWAITING_SERVER_PARAMS;
+ return true;
+ %}
+
+ function handle_server_init(msg: RFBServerInit) : bool
+ %{
+ state = RFB_MESSAGE;
+ return true;
+ %}
+
+ function handle_security_types(msg: RFBSecurityTypes): bool
+ %{
+ if ( msg->sectype() == 0 )
+ { // No auth
+ state = AWAITING_CLIENT_SHARE_FLAG;
+ return true;
+ }
+
+ if ( msg->sectype() == 2 )
+ { //VNC
+ state = AWAITING_SERVER_CHALLENGE;
+ }
+ return true;
+ %}
+
+ function handle_security_types37(msg: RFBSecurityTypes37): bool
+ %{
+ if ( ${msg.count} == 0 )
+ { // No auth
+ state = AWAITING_CLIENT_SHARE_FLAG;
+ return true;
+ }
+ state = AWAITING_CLIENT_AUTH_TYPE_SELECTED37;
+ return true;
+ %}
+
+ function handle_auth_type_selected(msg: RFBAuthTypeSelected): bool
+ %{
+ if ( ${msg.type} == 30 )
+ { // Apple Remote Desktop
+ state = AWAITING_SERVER_ARD_CHALLENGE;
+ return true;
+ }
+
+ if ( ${msg.type} == 1 )
+ {
+ if ( version > 7 )
+ state = AWAITING_SERVER_AUTH_RESULT;
+ else
+ state = AWAITING_CLIENT_SHARE_FLAG;
+ }
+ else
+ state = AWAITING_SERVER_CHALLENGE;
+
+ return true;
+ %}
+
+ %member{
+ uint8 state = AWAITING_SERVER_BANNER;
+ int version = 0;
+ %}
+};
+
+refine typeattr RFB_PDU += &let {
+ proc: bool = $context.flow.proc_rfb_message(this);
+};
diff --git a/src/analyzer/protocol/rfb/rfb-protocol.pac b/src/analyzer/protocol/rfb/rfb-protocol.pac
new file mode 100644
index 0000000000..d80416664b
--- /dev/null
+++ b/src/analyzer/protocol/rfb/rfb-protocol.pac
@@ -0,0 +1,139 @@
+enum states {
+ AWAITING_SERVER_BANNER = 0,
+ AWAITING_CLIENT_BANNER = 1,
+ AWAITING_SERVER_AUTH_TYPES = 2,
+ AWAITING_SERVER_CHALLENGE = 3,
+ AWAITING_CLIENT_RESPONSE = 4,
+ AWAITING_SERVER_AUTH_RESULT = 5,
+ AWAITING_CLIENT_SHARE_FLAG = 6,
+ AWAITING_SERVER_PARAMS = 7,
+ AWAITING_CLIENT_AUTH_METHOD = 8,
+ AWAITING_SERVER_ARD_CHALLENGE = 9,
+ AWAITING_CLIENT_ARD_RESPONSE = 10,
+ AWAITING_SERVER_AUTH_TYPES37 = 11,
+ AWAITING_CLIENT_AUTH_TYPE_SELECTED37 = 12,
+ RFB_MESSAGE = 13
+ };
+
+type RFBProtocolVersion (client: bool) = record {
+ header: "RFB ";
+ major_ver: bytestring &length=3;
+ dot: ".";
+ minor_ver: bytestring &length=3;
+ pad: uint8;
+} &let {
+ proc: bool = $context.connection.handle_banners(client, this);
+ proc2: bool = $context.flow.proc_rfb_version(client, major_ver, minor_ver);
+}
+
+type RFBSecurityTypes = record {
+ sectype: uint32;
+} &let {
+ proc: bool = $context.connection.handle_security_types(this);
+ proc2: bool = $context.flow.proc_security_types(this);
+};
+
+type RFBSecurityTypes37 = record {
+ count: uint8;
+ types: uint8[count];
+} &let {
+ proc: bool = $context.connection.handle_security_types37(this);
+};
+
+type RFBAuthTypeSelected = record {
+ type: uint8;
+} &let {
+ proc: bool = $context.connection.handle_auth_type_selected(this);
+ proc2: bool = $context.flow.proc_security_types37(this);
+};
+
+type RFBSecurityResult = record {
+ result: uint32;
+} &let {
+ proc: bool = $context.connection.handle_security_result(this);
+ proc2: bool = $context.flow.proc_handle_security_result(result);
+};
+
+type RFBSecurityResultReason = record {
+ len: uint32;
+ reason: bytestring &length=len;
+};
+
+type RFBVNCAuthenticationRequest = record {
+ challenge: bytestring &length=16;
+} &let {
+ proc: bool = $context.connection.handle_auth_request();
+};
+
+type RFBVNCAuthenticationResponse = record {
+ response: bytestring &length= 16;
+} &let {
+ proc: bool = $context.connection.handle_auth_response();
+};
+
+type RFBSecurityARDChallenge = record {
+ challenge: bytestring &restofdata;
+} &let {
+ proc: bool = $context.connection.handle_ard_challenge();
+}
+
+type RFBSecurityARDResponse = record {
+ response: bytestring &restofdata;
+} &let {
+ proc: bool = $context.connection.handle_ard_response();
+}
+
+type RFBClientInit = record {
+ shared_flag: uint8;
+} &let {
+ proc: bool = $context.connection.handle_client_init(this);
+ proc2: bool = $context.flow.proc_rfb_share_flag(shared_flag);
+}
+
+type RFBServerInit = record {
+ width: uint16;
+ height: uint16;
+ pixel_format: bytestring &length= 16;
+ len : uint32;
+ name: bytestring &length = len;
+} &let {
+ proc: bool = $context.connection.handle_server_init(this);
+ proc2: bool = $context.flow.proc_handle_server_params(this);
+};
+
+type RFB_PDU_request = record {
+ request: case state of {
+ AWAITING_CLIENT_BANNER -> version: RFBProtocolVersion(true);
+ AWAITING_CLIENT_RESPONSE -> response: RFBVNCAuthenticationResponse;
+ AWAITING_CLIENT_SHARE_FLAG -> shareflag: RFBClientInit;
+ AWAITING_CLIENT_AUTH_TYPE_SELECTED37 -> authtype: RFBAuthTypeSelected;
+ AWAITING_CLIENT_ARD_RESPONSE -> ard_response: RFBSecurityARDResponse;
+ RFB_MESSAGE -> ignore: bytestring &restofdata &transient;
+ default -> data: bytestring &restofdata &transient;
+ } &requires(state);
+ } &let {
+ state: uint8 = $context.connection.get_state(true);
+};
+
+type RFB_PDU_response = record {
+ request: case rstate of {
+ AWAITING_SERVER_BANNER -> version: RFBProtocolVersion(false);
+ AWAITING_SERVER_AUTH_TYPES -> auth_types: RFBSecurityTypes;
+ AWAITING_SERVER_AUTH_TYPES37 -> auth_types37: RFBSecurityTypes37;
+ AWAITING_SERVER_CHALLENGE -> challenge: RFBVNCAuthenticationRequest;
+ AWAITING_SERVER_AUTH_RESULT -> authresult : RFBSecurityResult;
+ AWAITING_SERVER_ARD_CHALLENGE -> ard_challenge: RFBSecurityARDChallenge;
+ AWAITING_SERVER_PARAMS -> serverinit: RFBServerInit;
+ RFB_MESSAGE -> ignore: bytestring &restofdata &transient;
+ default -> data: bytestring &restofdata &transient;
+ } &requires(rstate);
+ } &let {
+ rstate: uint8 = $context.connection.get_state(false);
+};
+
+type RFB_PDU(is_orig: bool) = record {
+ payload: case is_orig of {
+ true -> request: RFB_PDU_request;
+ false -> response: RFB_PDU_response;
+ };
+} &byteorder = bigendian;
diff --git a/src/analyzer/protocol/rfb/rfb.pac b/src/analyzer/protocol/rfb/rfb.pac
new file mode 100644
index 0000000000..2e88f8e5bb
--- /dev/null
+++ b/src/analyzer/protocol/rfb/rfb.pac
@@ -0,0 +1,30 @@
+# Analyzer for the RFB (VNC) protocol
+# - rfb-protocol.pac: describes the rfb protocol messages
+# - rfb-analyzer.pac: describes the rfb analyzer code
+
+%include binpac.pac
+%include bro.pac
+
+%extern{
+ #include "events.bif.h"
+%}
+
+analyzer RFB withcontext {
+ connection: RFB_Conn;
+ flow: RFB_Flow;
+};
+
+# Our connection consists of two flows, one in each direction.
+connection RFB_Conn(bro_analyzer: BroAnalyzer) {
+ upflow = RFB_Flow(true);
+ downflow = RFB_Flow(false);
+};
+
+%include rfb-protocol.pac
+
+# Now we define the flow:
+flow RFB_Flow(is_orig: bool) {
+ datagram = RFB_PDU(is_orig) withcontext(connection, this);
+};
+
+%include rfb-analyzer.pac
\ No newline at end of file
diff --git a/src/analyzer/protocol/rpc/NFS.cc b/src/analyzer/protocol/rpc/NFS.cc
index 136491ec84..8a2620e2e5 100644
--- a/src/analyzer/protocol/rpc/NFS.cc
+++ b/src/analyzer/protocol/rpc/NFS.cc
@@ -2,7 +2,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "XDR.h"
diff --git a/src/analyzer/protocol/rpc/Portmap.cc b/src/analyzer/protocol/rpc/Portmap.cc
index f57d9a915c..5d7c980879 100644
--- a/src/analyzer/protocol/rpc/Portmap.cc
+++ b/src/analyzer/protocol/rpc/Portmap.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "XDR.h"
diff --git a/src/analyzer/protocol/rpc/RPC.cc b/src/analyzer/protocol/rpc/RPC.cc
index 38ed229a10..aff6bfefc0 100644
--- a/src/analyzer/protocol/rpc/RPC.cc
+++ b/src/analyzer/protocol/rpc/RPC.cc
@@ -4,7 +4,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "NetVar.h"
#include "XDR.h"
diff --git a/src/analyzer/protocol/rpc/XDR.cc b/src/analyzer/protocol/rpc/XDR.cc
index 981a982716..9ae1ba1236 100644
--- a/src/analyzer/protocol/rpc/XDR.cc
+++ b/src/analyzer/protocol/rpc/XDR.cc
@@ -2,7 +2,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "XDR.h"
diff --git a/src/analyzer/protocol/sip/sip-analyzer.pac b/src/analyzer/protocol/sip/sip-analyzer.pac
index 36a1dae7e2..829904aa3a 100644
--- a/src/analyzer/protocol/sip/sip-analyzer.pac
+++ b/src/analyzer/protocol/sip/sip-analyzer.pac
@@ -18,7 +18,6 @@ refine flow SIP_Flow += {
function proc_sip_request(method: bytestring, uri: bytestring, vers: SIP_Version): bool
%{
- connection()->bro_analyzer()->ProtocolConfirmation();
if ( sip_request )
{
BifEvent::generate_sip_request(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(),
diff --git a/src/analyzer/protocol/sip/sip-protocol.pac b/src/analyzer/protocol/sip/sip-protocol.pac
index a9e03cf2c1..15f07df44a 100644
--- a/src/analyzer/protocol/sip/sip-protocol.pac
+++ b/src/analyzer/protocol/sip/sip-protocol.pac
@@ -1,14 +1,5 @@
-enum ExpectBody {
- BODY_EXPECTED,
- BODY_NOT_EXPECTED,
- BODY_MAYBE,
-};
-
type SIP_TOKEN = RE/[^()<>@,;:\\"\/\[\]?={} \t]+/;
type SIP_WS = RE/[ \t]*/;
-type SIP_COLON = RE/:/;
-type SIP_TO_EOL = RE/[^\r\n]*/;
-type SIP_EOL = RE/(\r\n){1,2}/;
type SIP_URI = RE/[[:alnum:]@[:punct:]]+/;
type SIP_PDU(is_orig: bool) = case is_orig of {
@@ -17,14 +8,12 @@ type SIP_PDU(is_orig: bool) = case is_orig of {
};
type SIP_Request = record {
- request: SIP_RequestLine;
- newline: padding[2];
+ request: SIP_RequestLine &oneline;
msg: SIP_Message;
};
type SIP_Reply = record {
- reply: SIP_ReplyLine;
- newline: padding[2];
+ reply: SIP_ReplyLine &oneline;
msg: SIP_Message;
};
@@ -33,7 +22,7 @@ type SIP_RequestLine = record {
: SIP_WS;
uri: SIP_URI;
: SIP_WS;
- version: SIP_Version;
+ version: SIP_Version &restofdata;
} &oneline;
type SIP_ReplyLine = record {
@@ -41,7 +30,7 @@ type SIP_ReplyLine = record {
: SIP_WS;
status: SIP_Status;
: SIP_WS;
- reason: SIP_TO_EOL;
+ reason: bytestring &restofdata;
} &oneline;
type SIP_Status = record {
@@ -51,7 +40,7 @@ type SIP_Status = record {
};
type SIP_Version = record {
- : "SIP/";
+ : "SIP/";
vers_str: RE/[0-9]+\.[0-9]+/;
} &let {
vers_num: double = bytestring_to_double(vers_str);
@@ -67,11 +56,11 @@ type SIP_Message = record {
type SIP_HEADER_NAME = RE/[^: \t]+/;
type SIP_Header = record {
name: SIP_HEADER_NAME;
- : SIP_COLON;
: SIP_WS;
- value: SIP_TO_EOL;
- : SIP_EOL;
-} &oneline &byteorder=bigendian;
+ : ":";
+ : SIP_WS;
+ value: bytestring &restofdata;
+} &oneline;
type SIP_Body = record {
body: bytestring &length = $context.flow.get_content_length();
diff --git a/src/analyzer/protocol/sip/sip.pac b/src/analyzer/protocol/sip/sip.pac
index f527a90117..15addb8c1e 100644
--- a/src/analyzer/protocol/sip/sip.pac
+++ b/src/analyzer/protocol/sip/sip.pac
@@ -21,7 +21,7 @@ connection SIP_Conn(bro_analyzer: BroAnalyzer) {
%include sip-protocol.pac
flow SIP_Flow(is_orig: bool) {
- datagram = SIP_PDU(is_orig) withcontext(connection, this);
+ flowunit = SIP_PDU(is_orig) withcontext(connection, this);
};
%include sip-analyzer.pac
diff --git a/src/analyzer/protocol/sip/sip_TCP.pac b/src/analyzer/protocol/sip/sip_TCP.pac
index 5546d28ece..2e51675dea 100644
--- a/src/analyzer/protocol/sip/sip_TCP.pac
+++ b/src/analyzer/protocol/sip/sip_TCP.pac
@@ -24,7 +24,7 @@ connection SIP_Conn(bro_analyzer: BroAnalyzer) {
%include sip-protocol.pac
flow SIP_Flow(is_orig: bool) {
- datagram = SIP_PDU(is_orig) withcontext(connection, this);
+ flowunit = SIP_PDU(is_orig) withcontext(connection, this);
};
%include sip-analyzer.pac
diff --git a/src/analyzer/protocol/smb/SMB.cc b/src/analyzer/protocol/smb/SMB.cc
index 9d388a0886..f72dbf4e19 100644
--- a/src/analyzer/protocol/smb/SMB.cc
+++ b/src/analyzer/protocol/smb/SMB.cc
@@ -336,7 +336,9 @@ int SMB_Session::ParseNegotiate(binpac::SMB::SMB_header const& hdr,
{
binpac::SMB::SMB_dialect* d = (*msg.dialects())[i];
BroString* tmp = ExtractString(d->dialectname());
- t->Assign(new Val(i, TYPE_COUNT), new StringVal(tmp));
+ Val* idx = new Val(i, TYPE_COUNT);
+ t->Assign(idx, new StringVal(tmp));
+ Unref(idx);
}
val_list* vl = new val_list;
diff --git a/src/analyzer/protocol/smtp/SMTP.cc b/src/analyzer/protocol/smtp/SMTP.cc
index 614457dbca..efc55ecc74 100644
--- a/src/analyzer/protocol/smtp/SMTP.cc
+++ b/src/analyzer/protocol/smtp/SMTP.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
@@ -809,7 +809,7 @@ void SMTP_Analyzer::ProcessExtension(int ext_len, const char* ext)
if ( ! ext )
return;
- if ( ! strcasecmp_n(ext_len, ext, "PIPELINING") )
+ if ( ! strncasecmp(ext, "PIPELINING", ext_len) )
pipelining = 1;
}
@@ -819,7 +819,7 @@ int SMTP_Analyzer::ParseCmd(int cmd_len, const char* cmd)
return -1;
for ( int code = SMTP_CMD_EHLO; code < SMTP_CMD_LAST; ++code )
- if ( ! strcasecmp_n(cmd_len, cmd, smtp_cmd_word[code - SMTP_CMD_EHLO]) )
+ if ( ! strncasecmp(cmd, smtp_cmd_word[code - SMTP_CMD_EHLO], cmd_len) )
return code;
return -1;
diff --git a/src/analyzer/protocol/snmp/snmp-analyzer.pac b/src/analyzer/protocol/snmp/snmp-analyzer.pac
index 891531b292..44dce4dbf5 100644
--- a/src/analyzer/protocol/snmp/snmp-analyzer.pac
+++ b/src/analyzer/protocol/snmp/snmp-analyzer.pac
@@ -373,10 +373,12 @@ refine connection SNMP_Conn += {
function proc_header(rec: Header): bool
%{
+ if ( ! ${rec.is_orig} )
+ bro_analyzer()->ProtocolConfirmation();
+
if ( rec->unknown() )
return false;
- bro_analyzer()->ProtocolConfirmation();
return true;
%}
diff --git a/src/analyzer/protocol/stepping-stone/SteppingStone.cc b/src/analyzer/protocol/stepping-stone/SteppingStone.cc
index b6473dcf6e..c85b34172f 100644
--- a/src/analyzer/protocol/stepping-stone/SteppingStone.cc
+++ b/src/analyzer/protocol/stepping-stone/SteppingStone.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
diff --git a/src/analyzer/protocol/tcp/TCP.cc b/src/analyzer/protocol/tcp/TCP.cc
index 72cad8a05c..8b3876c7ce 100644
--- a/src/analyzer/protocol/tcp/TCP.cc
+++ b/src/analyzer/protocol/tcp/TCP.cc
@@ -442,7 +442,7 @@ const struct tcphdr* TCP_Analyzer::ExtractTCP_Header(const u_char*& data,
}
if ( tcp_hdr_len > uint32(len) ||
- sizeof(struct tcphdr) > uint32(caplen) )
+ tcp_hdr_len > uint32(caplen) )
{
// This can happen even with the above test, due to TCP
// options.
@@ -946,23 +946,11 @@ void TCP_Analyzer::GeneratePacketEvent(
const u_char* data, int len, int caplen,
int is_orig, TCP_Flags flags)
{
- char tcp_flags[256];
- int tcp_flag_len = 0;
-
- if ( flags.SYN() ) tcp_flags[tcp_flag_len++] = 'S';
- if ( flags.FIN() ) tcp_flags[tcp_flag_len++] = 'F';
- if ( flags.RST() ) tcp_flags[tcp_flag_len++] = 'R';
- if ( flags.ACK() ) tcp_flags[tcp_flag_len++] = 'A';
- if ( flags.PUSH() ) tcp_flags[tcp_flag_len++] = 'P';
- if ( flags.URG() ) tcp_flags[tcp_flag_len++] = 'U';
-
- tcp_flags[tcp_flag_len] = '\0';
-
val_list* vl = new val_list();
vl->append(BuildConnVal());
vl->append(new Val(is_orig, TYPE_BOOL));
- vl->append(new StringVal(tcp_flags));
+ vl->append(new StringVal(flags.AsString()));
vl->append(new Val(rel_seq, TYPE_COUNT));
vl->append(new Val(flags.ACK() ? rel_ack : 0, TYPE_COUNT));
vl->append(new Val(len, TYPE_COUNT));
diff --git a/src/analyzer/protocol/tcp/TCP.h b/src/analyzer/protocol/tcp/TCP.h
index 608c06a5aa..e5589b01a3 100644
--- a/src/analyzer/protocol/tcp/TCP.h
+++ b/src/analyzer/protocol/tcp/TCP.h
@@ -8,6 +8,7 @@
#include "PacketDumper.h"
#include "IPAddr.h"
#include "TCP_Endpoint.h"
+#include "TCP_Flags.h"
#include "Conn.h"
// We define two classes here:
@@ -23,21 +24,6 @@ class TCP_Endpoint;
class TCP_ApplicationAnalyzer;
class TCP_Reassembler;
-class TCP_Flags {
-public:
- TCP_Flags(const struct tcphdr* tp) { flags = tp->th_flags; }
-
- bool SYN() { return flags & TH_SYN; }
- bool FIN() { return flags & TH_FIN; }
- bool RST() { return flags & TH_RST; }
- bool ACK() { return flags & TH_ACK; }
- bool URG() { return flags & TH_URG; }
- bool PUSH() { return flags & TH_PUSH; }
-
-protected:
- u_char flags;
-};
-
class TCP_Analyzer : public analyzer::TransportLayerAnalyzer {
public:
TCP_Analyzer(Connection* conn);
diff --git a/src/analyzer/protocol/tcp/TCP_Endpoint.cc b/src/analyzer/protocol/tcp/TCP_Endpoint.cc
index 846eb6d9d1..7c359623f3 100644
--- a/src/analyzer/protocol/tcp/TCP_Endpoint.cc
+++ b/src/analyzer/protocol/tcp/TCP_Endpoint.cc
@@ -204,7 +204,7 @@ int TCP_Endpoint::DataSent(double t, uint64 seq, int len, int caplen,
if ( contents_processor )
{
if ( caplen >= len )
- status = contents_processor->DataSent(t, seq, len, data);
+ status = contents_processor->DataSent(t, seq, len, data, TCP_Flags(tp));
else
TCP()->Weird("truncated_tcp_payload");
}
diff --git a/src/analyzer/protocol/tcp/TCP_Flags.h b/src/analyzer/protocol/tcp/TCP_Flags.h
new file mode 100644
index 0000000000..cc3c1f5915
--- /dev/null
+++ b/src/analyzer/protocol/tcp/TCP_Flags.h
@@ -0,0 +1,55 @@
+#ifndef ANALYZER_PROTOCOL_TCP_TCP_FLAGS_H
+#define ANALYZER_PROTOCOL_TCP_TCP_FLAGS_H
+
+namespace analyzer { namespace tcp {
+
+class TCP_Flags {
+public:
+ TCP_Flags(const struct tcphdr* tp) { flags = tp->th_flags; }
+ TCP_Flags() { flags = 0; }
+
+ bool SYN() const { return flags & TH_SYN; }
+ bool FIN() const { return flags & TH_FIN; }
+ bool RST() const { return flags & TH_RST; }
+ bool ACK() const { return flags & TH_ACK; }
+ bool URG() const { return flags & TH_URG; }
+ bool PUSH() const { return flags & TH_PUSH; }
+
+ string AsString() const;
+
+protected:
+ u_char flags;
+};
+
+inline string TCP_Flags::AsString() const
+ {
+ char tcp_flags[10];
+ char* p = tcp_flags;
+
+ if ( SYN() )
+ *p++ = 'S';
+
+ if ( FIN() )
+ *p++ = 'F';
+
+ if ( RST() )
+ *p++ = 'R';
+
+ if ( ACK() )
+ *p++ = 'A';
+
+ if ( PUSH() )
+ *p++ = 'P';
+
+ if ( URG() )
+ *p++ = 'U';
+
+ *p++ = '\0';
+ return tcp_flags;
+ }
+}
+
+
+}
+
+#endif
diff --git a/src/analyzer/protocol/tcp/TCP_Reassembler.cc b/src/analyzer/protocol/tcp/TCP_Reassembler.cc
index bbcd9cb43a..5b88d2dafb 100644
--- a/src/analyzer/protocol/tcp/TCP_Reassembler.cc
+++ b/src/analyzer/protocol/tcp/TCP_Reassembler.cc
@@ -433,8 +433,13 @@ void TCP_Reassembler::Overlap(const u_char* b1, const u_char* b2, uint64 n)
{
BroString* b1_s = new BroString((const u_char*) b1, n, 0);
BroString* b2_s = new BroString((const u_char*) b2, n, 0);
- tcp_analyzer->Event(rexmit_inconsistency,
- new StringVal(b1_s), new StringVal(b2_s));
+
+ val_list* vl = new val_list(3);
+ vl->append(tcp_analyzer->BuildConnVal());
+ vl->append(new StringVal(b1_s));
+ vl->append(new StringVal(b2_s));
+ vl->append(new StringVal(flags.AsString()));
+ tcp_analyzer->ConnectionEvent(rexmit_inconsistency, vl);
}
}
@@ -461,7 +466,7 @@ void TCP_Reassembler::Deliver(uint64 seq, int len, const u_char* data)
}
int TCP_Reassembler::DataSent(double t, uint64 seq, int len,
- const u_char* data, bool replaying)
+ const u_char* data, TCP_Flags arg_flags, bool replaying)
{
uint64 ack = endp->ToRelativeSeqSpace(endp->AckSeq(), endp->AckWraps());
uint64 upper_seq = seq + len;
@@ -492,7 +497,9 @@ int TCP_Reassembler::DataSent(double t, uint64 seq, int len,
len -= amount_acked;
}
+ flags = arg_flags;
NewBlock(t, seq, len, data);
+ flags = TCP_Flags();
if ( Endpoint()->NoDataAcked() && tcp_max_above_hole_without_any_acks &&
NumUndeliveredBytes() > static_cast(tcp_max_above_hole_without_any_acks) )
diff --git a/src/analyzer/protocol/tcp/TCP_Reassembler.h b/src/analyzer/protocol/tcp/TCP_Reassembler.h
index c2ed0175ca..2e85e48e2f 100644
--- a/src/analyzer/protocol/tcp/TCP_Reassembler.h
+++ b/src/analyzer/protocol/tcp/TCP_Reassembler.h
@@ -3,6 +3,7 @@
#include "Reassem.h"
#include "TCP_Endpoint.h"
+#include "TCP_Flags.h"
class BroFile;
class Connection;
@@ -61,7 +62,7 @@ public:
void SkipToSeq(uint64 seq);
int DataSent(double t, uint64 seq, int len, const u_char* data,
- bool replaying=true);
+ analyzer::tcp::TCP_Flags flags, bool replaying=true);
void AckReceived(uint64 seq);
// Checks if we have delivered all contents that we can possibly
@@ -90,15 +91,15 @@ private:
DECLARE_SERIAL(TCP_Reassembler);
- void Undelivered(uint64 up_to_seq);
+ void Undelivered(uint64 up_to_seq) override;
void Gap(uint64 seq, uint64 len);
void RecordToSeq(uint64 start_seq, uint64 stop_seq, BroFile* f);
void RecordBlock(DataBlock* b, BroFile* f);
void RecordGap(uint64 start_seq, uint64 upper_seq, BroFile* f);
- void BlockInserted(DataBlock* b);
- void Overlap(const u_char* b1, const u_char* b2, uint64 n);
+ void BlockInserted(DataBlock* b) override;
+ void Overlap(const u_char* b1, const u_char* b2, uint64 n) override;
TCP_Endpoint* endp;
@@ -110,6 +111,7 @@ private:
uint64 seq_to_skip;
bool in_delivery;
+ analyzer::tcp::TCP_Flags flags;
BroFile* record_contents_file; // file on which to reassemble contents
diff --git a/src/analyzer/protocol/teredo/Teredo.cc b/src/analyzer/protocol/teredo/Teredo.cc
index 400f38839e..6ad00a82dc 100644
--- a/src/analyzer/protocol/teredo/Teredo.cc
+++ b/src/analyzer/protocol/teredo/Teredo.cc
@@ -189,36 +189,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
else
valid_resp = true;
- if ( BifConst::Tunnel::yielding_teredo_decapsulation &&
- ! ProtocolConfirmed() )
- {
- // Only confirm the Teredo tunnel and start decapsulating packets
- // when no other sibling analyzer thinks it's already parsing the
- // right protocol.
- bool sibling_has_confirmed = false;
- if ( Parent() )
- {
- LOOP_OVER_GIVEN_CONST_CHILDREN(i, Parent()->GetChildren())
- {
- if ( (*i)->ProtocolConfirmed() )
- {
- sibling_has_confirmed = true;
- break;
- }
- }
- }
-
- if ( ! sibling_has_confirmed )
- Confirm();
- else
- {
- delete inner;
- return;
- }
- }
- else
- // Aggressively decapsulate anything with valid Teredo encapsulation.
- Confirm();
+ Confirm();
}
else
diff --git a/src/analyzer/protocol/udp/UDP.cc b/src/analyzer/protocol/udp/UDP.cc
index 36d5831a6a..3bd3736b2a 100644
--- a/src/analyzer/protocol/udp/UDP.cc
+++ b/src/analyzer/protocol/udp/UDP.cc
@@ -2,7 +2,7 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "Net.h"
#include "NetVar.h"
diff --git a/src/analyzer/protocol/zip/ZIP.h b/src/analyzer/protocol/zip/ZIP.h
index b284529d86..580235ec63 100644
--- a/src/analyzer/protocol/zip/ZIP.h
+++ b/src/analyzer/protocol/zip/ZIP.h
@@ -3,7 +3,7 @@
#ifndef ANALYZER_PROTOCOL_ZIP_ZIP_H
#define ANALYZER_PROTOCOL_ZIP_ZIP_H
-#include "config.h"
+#include "bro-config.h"
#include "zlib.h"
#include "analyzer/protocol/tcp/TCP.h"
diff --git a/src/bif_arg.cc b/src/bif_arg.cc
index 92e228032b..f5e25f3746 100644
--- a/src/bif_arg.cc
+++ b/src/bif_arg.cc
@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/bro.bif b/src/bro.bif
index 629abe7735..5d097734a4 100644
--- a/src/bro.bif
+++ b/src/bro.bif
@@ -1031,6 +1031,72 @@ function clear_table%(v: any%): any
return 0;
%}
+## Gets all subnets that contain a given subnet from a set[subnet]/table[subnet].
+##
+## search: the subnet to search for.
+##
+## t: the set[subnet] or table[subnet].
+##
+## Returns: All the keys of the set or table that cover the subnet searched for.
+function matching_subnets%(search: subnet, t: any%): subnet_vec
+ %{
+ if ( t->Type()->Tag() != TYPE_TABLE || ! t->Type()->AsTableType()->IsSubNetIndex() )
+ {
+ reporter->Error("matching_subnets needs to be called on a set[subnet]/table[subnet].");
+ return nullptr;
+ }
+
+ return t->AsTableVal()->LookupSubnets(search);
+ %}
+
+## For a set[subnet]/table[subnet], create a new table that contains all entries that
+## contain a given subnet.
+##
+## search: the subnet to search for.
+##
+## t: the set[subnet] or table[subnet].
+##
+## Returns: A new table that contains all the entries that cover the subnet searched for.
+function filter_subnet_table%(search: subnet, t: any%): any
+ %{
+ if ( t->Type()->Tag() != TYPE_TABLE || ! t->Type()->AsTableType()->IsSubNetIndex() )
+ {
+ reporter->Error("filter_subnet_table needs to be called on a set[subnet]/table[subnet].");
+ return nullptr;
+ }
+
+ return t->AsTableVal()->LookupSubnetValues(search);
+ %}
+
+## Checks if a specific subnet is a member of a set/table[subnet].
+## Unlike the ``in`` operator, this performs an exact match rather than a
+## longest prefix match.
+##
+## search: the subnet to search for.
+##
+## t: the set[subnet] or table[subnet].
+##
+## Returns: True if the exact subnet is a member, false otherwise.
+function check_subnet%(search: subnet, t: any%): bool
+ %{
+ if ( t->Type()->Tag() != TYPE_TABLE || ! t->Type()->AsTableType()->IsSubNetIndex() )
+ {
+ reporter->Error("check_subnet needs to be called on a set[subnet]/table[subnet].");
+ return nullptr;
+ }
+
+ const PrefixTable* pt = t->AsTableVal()->Subnets();
+ if ( ! pt )
+ {
+ reporter->Error("check_subnet encountered nonexisting prefix table.");
+ return nullptr;
+ }
+
+ void* res = pt->Lookup(search, true);
+
+ return new Val(res != nullptr, TYPE_BOOL);
+ %}
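For illustration, a minimal Bro script sketch of how the three new subnet-table BIFs behave together; the table contents are made up:

    global nets: table[subnet] of string = {
        [10.0.0.0/8] = "rfc1918",
        [10.2.0.0/16] = "lab"
    };

    event bro_init()
        {
        # All keys covering 10.2.3.0/24 (here: 10.0.0.0/8 and 10.2.0.0/16).
        print matching_subnets(10.2.3.0/24, nets);

        # The same matching entries, returned as a filtered table with values.
        print filter_subnet_table(10.2.3.0/24, nets);

        # Exact-match membership test, unlike the ``in`` operator.
        print check_subnet(10.2.0.0/16, nets);   # T
        print check_subnet(10.2.0.0/24, nets);   # F
        }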
+
## Checks whether two objects reference the same internal object. This function
## uses equality comparison of C++ raw pointer values to determine if the two
## objects are the same.
@@ -2078,6 +2144,33 @@ function is_v6_addr%(a: addr%): bool
return new Val(0, TYPE_BOOL);
%}
+## Returns whether a subnet specification is IPv4 or not.
+##
+## s: the subnet to check.
+##
+## Returns: true if *s* is an IPv4 subnet, else false.
+function is_v4_subnet%(s: subnet%): bool
+ %{
+ if ( s->AsSubNet().Prefix().GetFamily() == IPv4 )
+ return new Val(1, TYPE_BOOL);
+ else
+ return new Val(0, TYPE_BOOL);
+ %}
+
+## Returns whether a subnet specification is IPv6 or not.
+##
+## s: the subnet to check.
+##
+## Returns: true if *s* is an IPv6 subnet, else false.
+function is_v6_subnet%(s: subnet%): bool
+ %{
+ if ( s->AsSubNet().Prefix().GetFamily() == IPv6 )
+ return new Val(1, TYPE_BOOL);
+ else
+ return new Val(0, TYPE_BOOL);
+ %}
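A small illustrative sketch of the two new predicates (expected output shown as comments):

    event bro_init()
        {
        print is_v4_subnet(192.168.0.0/16);    # T
        print is_v6_subnet([2001:db8::]/32);   # T
        print is_v6_subnet(10.0.0.0/8);        # F
        }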
+
+
# ===========================================================================
#
# Conversion
@@ -2368,6 +2461,44 @@ function to_subnet%(sn: string%): subnet
return ret;
%}
+## Converts a :bro:type:`addr` to a :bro:type:`subnet`.
+##
+## a: The address to convert.
+##
+## Returns: The *a* address as a :bro:type:`subnet`.
+##
+## .. bro:see:: to_subnet
+function addr_to_subnet%(a: addr%): subnet
+ %{
+ int width = (a->AsAddr().GetFamily() == IPv4 ? 32 : 128);
+ return new SubNetVal(a->AsAddr(), width);
+ %}
+
+## Converts a :bro:type:`subnet` to a :bro:type:`addr` by
+## extracting the prefix.
+##
+## sn: The subnet to convert.
+##
+## Returns: The *sn* subnet as a :bro:type:`addr`.
+##
+## .. bro:see:: to_subnet
+function subnet_to_addr%(sn: subnet%): addr
+ %{
+ return new AddrVal(sn->Prefix());
+ %}
+
+## Returns the width of a :bro:type:`subnet`.
+##
+## sn: The subnet whose width to return.
+##
+## Returns: The width of the subnet.
+##
+## .. bro:see:: to_subnet
+function subnet_width%(sn: subnet%): count
+ %{
+ return new Val(sn->Width(), TYPE_COUNT);
+ %}
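A minimal sketch of the new conversion helpers (expected results shown as comments):

    event bro_init()
        {
        print addr_to_subnet(192.168.1.1);   # 192.168.1.1/32
        print subnet_to_addr(10.0.0.0/8);    # 10.0.0.0
        print subnet_width(10.0.0.0/8);      # 8
        }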
+
## Converts a :bro:type:`string` to a :bro:type:`double`.
##
## str: The :bro:type:`string` to convert.
@@ -2723,14 +2854,17 @@ function hexstr_to_bytestring%(hexstr: string%): string
## Encodes a string into Base64 format.
##
-## s: The string to encode
+## s: The string to encode.
+##
+## a: An optional custom alphabet. The empty string indicates the default
+## alphabet. If given, the string must consist of 64 unique characters.
##
## Returns: The encoded version of *s*.
##
-## .. bro:see:: encode_base64_custom decode_base64
-function encode_base64%(s: string%): string
+## .. bro:see:: decode_base64
+function encode_base64%(s: string, a: string &default=""%): string
%{
- BroString* t = encode_base64(s->AsString());
+ BroString* t = encode_base64(s->AsString(), a->AsString());
if ( t )
return new StringVal(t);
else
@@ -2740,18 +2874,18 @@ function encode_base64%(s: string%): string
}
%}
+
+## Encodes a string into Base64 format with a custom alphabet.
##
-## s: The string to encode
+## s: The string to encode.
##
-## a: The custom alphabet. The empty string indicates the default alphabet. The
-## length of *a* must be 64. For example, a custom alphabet could be
-## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``.
+## a: The custom alphabet. The string must consist of 64 unique
+## characters. The empty string indicates the default alphabet.
##
## Returns: The encoded version of *s*.
##
-## .. bro:see:: encode_base64 decode_base64_custom
-function encode_base64_custom%(s: string, a: string%): string
+## .. bro:see:: encode_base64
+function encode_base64_custom%(s: string, a: string%): string &deprecated
%{
BroString* t = encode_base64(s->AsString(), a->AsString());
if ( t )
@@ -2767,12 +2901,48 @@ function encode_base64_custom%(s: string, a: string%): string
##
## s: The Base64-encoded string.
##
+## a: An optional custom alphabet. The empty string indicates the default
+## alphabet. If given, the string must consist of 64 unique characters.
+##
## Returns: The decoded version of *s*.
##
-## .. bro:see:: decode_base64_custom encode_base64
-function decode_base64%(s: string%): string
+## .. bro:see:: decode_base64_conn encode_base64
+function decode_base64%(s: string, a: string &default=""%): string
%{
- BroString* t = decode_base64(s->AsString());
+ BroString* t = decode_base64(s->AsString(), a->AsString());
+ if ( t )
+ return new StringVal(t);
+ else
+ {
+ reporter->Error("error in decoding string %s", s->CheckString());
+ return new StringVal("");
+ }
+ %}
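For illustration, a hypothetical round trip using the new optional alphabet argument; ``my_alpha`` is a made-up placeholder for a 64-character custom alphabet:

    event bro_init()
        {
        print decode_base64(encode_base64("hello"));   # hello

        # With a custom alphabet (must contain 64 unique characters):
        # local my_alpha = "...64 unique characters...";
        # print decode_base64(encode_base64("hello", my_alpha), my_alpha);
        }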
+
+## Decodes a Base64-encoded string that was derived from processing a connection.
+## If an error is encountered while decoding the string, it is logged to
+## ``weird.log`` along with the associated connection.
+##
+## cid: The identifier of the connection that the encoding originates from.
+##
+## s: The Base64-encoded string.
+##
+## a: An optional custom alphabet. The empty string indicates the default
+## alphabet. If given, the string must consist of 64 unique characters.
+##
+## Returns: The decoded version of *s*.
+##
+## .. bro:see:: decode_base64
+function decode_base64_conn%(cid: conn_id, s: string, a: string &default=""%): string
+ %{
+ Connection* conn = sessions->FindConnection(cid);
+ if ( ! conn )
+ {
+ builtin_error("connection ID not a known connection", cid);
+ return new StringVal("");
+ }
+
+ BroString* t = decode_base64(s->AsString(), a->AsString(), conn);
if ( t )
return new StringVal(t);
else
@@ -2786,14 +2956,13 @@ function decode_base64%(s: string%): string
##
## s: The Base64-encoded string.
##
-## a: The custom alphabet. The empty string indicates the default alphabet. The
-## length of *a* must be 64. For example, a custom alphabet could be
-## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``.
+## a: The custom alphabet. The string must consist of 64 unique characters.
+## The empty string indicates the default alphabet.
##
## Returns: The decoded version of *s*.
##
-## .. bro:see:: decode_base64 encode_base64_custom
-function decode_base64_custom%(s: string, a: string%): string
+## .. bro:see:: decode_base64 decode_base64_conn
+function decode_base64_custom%(s: string, a: string%): string &deprecated
%{
BroString* t = decode_base64(s->AsString(), a->AsString());
if ( t )
@@ -3289,6 +3458,26 @@ function get_current_packet%(%) : pcap_packet
return pkt;
%}
+## Function to get the raw headers of the currently processed packet.
+##
+## Returns: The :bro:type:`raw_pkt_hdr` record containing the Layer 2, 3 and
+## 4 headers of the currently processed packet.
+##
+## .. bro:see:: raw_pkt_hdr get_current_packet
+function get_current_packet_header%(%) : raw_pkt_hdr
+ %{
+ const Packet* p;
+
+ if ( current_pktsrc &&
+ current_pktsrc->GetCurrentPacket(&p) )
+ {
+ return p->BuildPktHdrVal();
+ }
+
+ RecordVal* hdr = new RecordVal(raw_pkt_hdr_type);
+ return hdr;
+ %}
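An illustrative handler calling the new BIF from the existing new_packet event; it assumes the optional ip field of raw_pkt_hdr is only set for IPv4 packets:

    event new_packet(c: connection, p: pkt_hdr)
        {
        local hdr = get_current_packet_header();

        if ( hdr?$ip )
            print hdr$ip$src, hdr$ip$dst;
        }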
+
## Writes a given packet to a file.
##
## pkt: The PCAP packet.
diff --git a/src/broker/CMakeLists.txt b/src/broker/CMakeLists.txt
index 7329bfd46e..988855cafb 100644
--- a/src/broker/CMakeLists.txt
+++ b/src/broker/CMakeLists.txt
@@ -10,8 +10,8 @@ if ( ROCKSDB_INCLUDE_DIR )
include_directories(BEFORE ${ROCKSDB_INCLUDE_DIR})
endif ()
-include_directories(BEFORE ${LIBCAF_INCLUDE_DIR_CORE})
-include_directories(BEFORE ${LIBCAF_INCLUDE_DIR_IO})
+include_directories(BEFORE ${CAF_INCLUDE_DIR_CORE})
+include_directories(BEFORE ${CAF_INCLUDE_DIR_IO})
set(comm_SRCS
Data.cc
diff --git a/src/broker/Data.cc b/src/broker/Data.cc
index 8f66427bb5..fe3f271c49 100644
--- a/src/broker/Data.cc
+++ b/src/broker/Data.cc
@@ -539,7 +539,7 @@ broker::util::optional bro_broker::val_to_data(Val* v)
return {rval};
}
default:
- reporter->Error("unsupported BrokerComm::Data type: %s",
+ reporter->Error("unsupported Broker::Data type: %s",
type_name(v->Type()->Tag()));
break;
}
@@ -549,7 +549,7 @@ broker::util::optional bro_broker::val_to_data(Val* v)
RecordVal* bro_broker::make_data_val(Val* v)
{
- auto rval = new RecordVal(BifType::Record::BrokerComm::Data);
+ auto rval = new RecordVal(BifType::Record::Broker::Data);
auto data = val_to_data(v);
if ( data )
@@ -560,7 +560,7 @@ RecordVal* bro_broker::make_data_val(Val* v)
RecordVal* bro_broker::make_data_val(broker::data d)
{
- auto rval = new RecordVal(BifType::Record::BrokerComm::Data);
+ auto rval = new RecordVal(BifType::Record::Broker::Data);
rval->Assign(0, new DataVal(move(d)));
return rval;
}
@@ -570,92 +570,92 @@ struct data_type_getter {
result_type operator()(bool a)
{
- return new EnumVal(BifEnum::BrokerComm::BOOL,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::BOOL,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(uint64_t a)
{
- return new EnumVal(BifEnum::BrokerComm::COUNT,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::COUNT,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(int64_t a)
{
- return new EnumVal(BifEnum::BrokerComm::INT,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::INT,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(double a)
{
- return new EnumVal(BifEnum::BrokerComm::DOUBLE,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::DOUBLE,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const std::string& a)
{
- return new EnumVal(BifEnum::BrokerComm::STRING,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::STRING,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::address& a)
{
- return new EnumVal(BifEnum::BrokerComm::ADDR,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::ADDR,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::subnet& a)
{
- return new EnumVal(BifEnum::BrokerComm::SUBNET,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::SUBNET,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::port& a)
{
- return new EnumVal(BifEnum::BrokerComm::PORT,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::PORT,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::time_point& a)
{
- return new EnumVal(BifEnum::BrokerComm::TIME,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::TIME,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::time_duration& a)
{
- return new EnumVal(BifEnum::BrokerComm::INTERVAL,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::INTERVAL,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::enum_value& a)
{
- return new EnumVal(BifEnum::BrokerComm::ENUM,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::ENUM,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::set& a)
{
- return new EnumVal(BifEnum::BrokerComm::SET,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::SET,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::table& a)
{
- return new EnumVal(BifEnum::BrokerComm::TABLE,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::TABLE,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::vector& a)
{
- return new EnumVal(BifEnum::BrokerComm::VECTOR,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::VECTOR,
+ BifType::Enum::Broker::DataType);
}
result_type operator()(const broker::record& a)
{
- return new EnumVal(BifEnum::BrokerComm::RECORD,
- BifType::Enum::BrokerComm::DataType);
+ return new EnumVal(BifEnum::Broker::RECORD,
+ BifType::Enum::Broker::DataType);
}
};
@@ -670,7 +670,7 @@ broker::data& bro_broker::opaque_field_to_data(RecordVal* v, Frame* f)
if ( ! d )
reporter->RuntimeError(f->GetCall()->GetLocationInfo(),
- "BrokerComm::Data's opaque field is not set");
+ "Broker::Data's opaque field is not set");
return static_cast(d)->data;
}
diff --git a/src/broker/Data.h b/src/broker/Data.h
index 84495056be..f212979853 100644
--- a/src/broker/Data.h
+++ b/src/broker/Data.h
@@ -21,25 +21,25 @@ extern OpaqueType* opaque_of_record_iterator;
TransportProto to_bro_port_proto(broker::port::protocol tp);
/**
- * Create a BrokerComm::Data value from a Bro value.
+ * Create a Broker::Data value from a Bro value.
* @param v the Bro value to convert to a Broker data value.
- * @return a BrokerComm::Data value, where the optional field is set if the conversion
+ * @return a Broker::Data value, where the optional field is set if the conversion
* was possible, else it is unset.
*/
RecordVal* make_data_val(Val* v);
/**
- * Create a BrokerComm::Data value from a Broker data value.
+ * Create a Broker::Data value from a Broker data value.
* @param d the Broker value to wrap in an opaque type.
- * @return a BrokerComm::Data value that wraps the Broker value.
+ * @return a Broker::Data value that wraps the Broker value.
*/
RecordVal* make_data_val(broker::data d);
/**
- * Get the type of Broker data that BrokerComm::Data wraps.
- * @param v a BrokerComm::Data value.
+ * Get the type of Broker data that Broker::Data wraps.
+ * @param v a Broker::Data value.
* @param frame used to get location info upon error.
- * @return a BrokerComm::DataType value.
+ * @return a Broker::DataType value.
*/
EnumVal* get_data_type(RecordVal* v, Frame* frame);
@@ -141,8 +141,8 @@ struct type_name_getter {
};
/**
- * Retrieve Broker data value associated with a BrokerComm::Data Bro value.
- * @param v a BrokerComm::Data value.
+ * Retrieve Broker data value associated with a Broker::Data Bro value.
+ * @param v a Broker::Data value.
* @param f used to get location information on error.
* @return a reference to the wrapped Broker data value. A runtime interpreter
* exception is thrown if the optional opaque value of \a v is not set.
@@ -183,9 +183,9 @@ inline T& require_data_type(RecordVal* v, TypeTag tag, Frame* f)
}
/**
- * Convert a BrokerComm::Data Bro value to a Bro value of a given type.
+ * Convert a Broker::Data Bro value to a Bro value of a given type.
* @tparam a type that a Broker data variant may contain.
- * @param v a BrokerComm::Data value.
+ * @param v a Broker::Data value.
* @param tag a Bro type to convert to.
* @param f used to get location information on error.
* A runtime interpreter exception is thrown if trying to access a type which
diff --git a/src/broker/Manager.cc b/src/broker/Manager.cc
index 06ece6d6c1..334b7f84f5 100644
--- a/src/broker/Manager.cc
+++ b/src/broker/Manager.cc
@@ -77,20 +77,20 @@ bool bro_broker::Manager::Enable(Val* broker_endpoint_flags)
if ( endpoint != nullptr )
return true;
- auto send_flags_type = internal_type("BrokerComm::SendFlags")->AsRecordType();
+ auto send_flags_type = internal_type("Broker::SendFlags")->AsRecordType();
send_flags_self_idx = require_field(send_flags_type, "self");
send_flags_peers_idx = require_field(send_flags_type, "peers");
send_flags_unsolicited_idx = require_field(send_flags_type, "unsolicited");
log_id_type = internal_type("Log::ID")->AsEnumType();
- bro_broker::opaque_of_data_type = new OpaqueType("BrokerComm::Data");
- bro_broker::opaque_of_set_iterator = new OpaqueType("BrokerComm::SetIterator");
- bro_broker::opaque_of_table_iterator = new OpaqueType("BrokerComm::TableIterator");
- bro_broker::opaque_of_vector_iterator = new OpaqueType("BrokerComm::VectorIterator");
- bro_broker::opaque_of_record_iterator = new OpaqueType("BrokerComm::RecordIterator");
- bro_broker::opaque_of_store_handle = new OpaqueType("BrokerStore::Handle");
- vector_of_data_type = new VectorType(internal_type("BrokerComm::Data")->Ref());
+ bro_broker::opaque_of_data_type = new OpaqueType("Broker::Data");
+ bro_broker::opaque_of_set_iterator = new OpaqueType("Broker::SetIterator");
+ bro_broker::opaque_of_table_iterator = new OpaqueType("Broker::TableIterator");
+ bro_broker::opaque_of_vector_iterator = new OpaqueType("Broker::VectorIterator");
+ bro_broker::opaque_of_record_iterator = new OpaqueType("Broker::RecordIterator");
+ bro_broker::opaque_of_store_handle = new OpaqueType("Broker::Handle");
+ vector_of_data_type = new VectorType(internal_type("Broker::Data")->Ref());
auto res = broker::init();
@@ -110,7 +110,7 @@ bool bro_broker::Manager::Enable(Val* broker_endpoint_flags)
}
const char* name;
- auto name_from_script = internal_val("BrokerComm::endpoint_name")->AsString();
+ auto name_from_script = internal_val("Broker::endpoint_name")->AsString();
if ( name_from_script->Len() )
name = name_from_script->CheckString();
@@ -290,7 +290,7 @@ bool bro_broker::Manager::AutoEvent(string topic, Val* event, Val* flags)
if ( event->Type()->Tag() != TYPE_FUNC )
{
- reporter->Error("BrokerComm::auto_event must operate on an event");
+ reporter->Error("Broker::auto_event must operate on an event");
return false;
}
@@ -298,7 +298,7 @@ bool bro_broker::Manager::AutoEvent(string topic, Val* event, Val* flags)
if ( event_val->Flavor() != FUNC_FLAVOR_EVENT )
{
- reporter->Error("BrokerComm::auto_event must operate on an event");
+ reporter->Error("Broker::auto_event must operate on an event");
return false;
}
@@ -306,7 +306,7 @@ bool bro_broker::Manager::AutoEvent(string topic, Val* event, Val* flags)
if ( ! handler )
{
- reporter->Error("BrokerComm::auto_event failed to lookup event '%s'",
+ reporter->Error("Broker::auto_event failed to lookup event '%s'",
event_val->Name());
return false;
}
@@ -322,7 +322,7 @@ bool bro_broker::Manager::AutoEventStop(const string& topic, Val* event)
if ( event->Type()->Tag() != TYPE_FUNC )
{
- reporter->Error("BrokerComm::auto_event_stop must operate on an event");
+ reporter->Error("Broker::auto_event_stop must operate on an event");
return false;
}
@@ -330,7 +330,7 @@ bool bro_broker::Manager::AutoEventStop(const string& topic, Val* event)
if ( event_val->Flavor() != FUNC_FLAVOR_EVENT )
{
- reporter->Error("BrokerComm::auto_event_stop must operate on an event");
+ reporter->Error("Broker::auto_event_stop must operate on an event");
return false;
}
@@ -338,7 +338,7 @@ bool bro_broker::Manager::AutoEventStop(const string& topic, Val* event)
if ( ! handler )
{
- reporter->Error("BrokerComm::auto_event_stop failed to lookup event '%s'",
+ reporter->Error("Broker::auto_event_stop failed to lookup event '%s'",
event_val->Name());
return false;
}
@@ -353,7 +353,7 @@ RecordVal* bro_broker::Manager::MakeEventArgs(val_list* args)
if ( ! Enabled() )
return nullptr;
- auto rval = new RecordVal(BifType::Record::BrokerComm::EventArgs);
+ auto rval = new RecordVal(BifType::Record::Broker::EventArgs);
auto arg_vec = new VectorVal(vector_of_data_type);
rval->Assign(1, arg_vec);
Func* func = 0;
@@ -368,7 +368,7 @@ RecordVal* bro_broker::Manager::MakeEventArgs(val_list* args)
if ( arg_val->Type()->Tag() != TYPE_FUNC )
{
- reporter->Error("1st param of BrokerComm::event_args must be event");
+ reporter->Error("1st param of Broker::event_args must be event");
return rval;
}
@@ -376,7 +376,7 @@ RecordVal* bro_broker::Manager::MakeEventArgs(val_list* args)
if ( func->Flavor() != FUNC_FLAVOR_EVENT )
{
- reporter->Error("1st param of BrokerComm::event_args must be event");
+ reporter->Error("1st param of Broker::event_args must be event");
return rval;
}
@@ -384,7 +384,7 @@ RecordVal* bro_broker::Manager::MakeEventArgs(val_list* args)
if ( num_args != args->length() - 1 )
{
- reporter->Error("bad # of BrokerComm::event_args: got %d, expect %d",
+ reporter->Error("bad # of Broker::event_args: got %d, expect %d",
args->length(), num_args + 1);
return rval;
}
@@ -398,7 +398,7 @@ RecordVal* bro_broker::Manager::MakeEventArgs(val_list* args)
if ( ! same_type((*args)[i]->Type(), expected_type) )
{
rval->Assign(0, 0);
- reporter->Error("BrokerComm::event_args param %d type mismatch", i);
+ reporter->Error("Broker::event_args param %d type mismatch", i);
return rval;
}
@@ -408,7 +408,7 @@ RecordVal* bro_broker::Manager::MakeEventArgs(val_list* args)
{
Unref(data_val);
rval->Assign(0, 0);
- reporter->Error("BrokerComm::event_args unsupported event/params");
+ reporter->Error("Broker::event_args unsupported event/params");
return rval;
}
@@ -584,7 +584,7 @@ struct response_converter {
case broker::store::query::tag::lookup:
// A boolean result means the key doesn't exist (if it did, then
// the result would contain the broker::data value, not a bool).
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
default:
return bro_broker::make_data_val(broker::data{d});
}
@@ -639,36 +639,36 @@ void bro_broker::Manager::Process()
{
switch ( u.status ) {
case broker::outgoing_connection_status::tag::established:
- if ( BrokerComm::outgoing_connection_established )
+ if ( Broker::outgoing_connection_established )
{
val_list* vl = new val_list;
vl->append(new StringVal(u.relation.remote_tuple().first));
vl->append(new PortVal(u.relation.remote_tuple().second,
TRANSPORT_TCP));
vl->append(new StringVal(u.peer_name));
- mgr.QueueEvent(BrokerComm::outgoing_connection_established, vl);
+ mgr.QueueEvent(Broker::outgoing_connection_established, vl);
}
break;
case broker::outgoing_connection_status::tag::disconnected:
- if ( BrokerComm::outgoing_connection_broken )
+ if ( Broker::outgoing_connection_broken )
{
val_list* vl = new val_list;
vl->append(new StringVal(u.relation.remote_tuple().first));
vl->append(new PortVal(u.relation.remote_tuple().second,
TRANSPORT_TCP));
- mgr.QueueEvent(BrokerComm::outgoing_connection_broken, vl);
+ mgr.QueueEvent(Broker::outgoing_connection_broken, vl);
}
break;
case broker::outgoing_connection_status::tag::incompatible:
- if ( BrokerComm::outgoing_connection_incompatible )
+ if ( Broker::outgoing_connection_incompatible )
{
val_list* vl = new val_list;
vl->append(new StringVal(u.relation.remote_tuple().first));
vl->append(new PortVal(u.relation.remote_tuple().second,
TRANSPORT_TCP));
- mgr.QueueEvent(BrokerComm::outgoing_connection_incompatible, vl);
+ mgr.QueueEvent(Broker::outgoing_connection_incompatible, vl);
}
break;
@@ -684,20 +684,20 @@ void bro_broker::Manager::Process()
{
switch ( u.status ) {
case broker::incoming_connection_status::tag::established:
- if ( BrokerComm::incoming_connection_established )
+ if ( Broker::incoming_connection_established )
{
val_list* vl = new val_list;
vl->append(new StringVal(u.peer_name));
- mgr.QueueEvent(BrokerComm::incoming_connection_established, vl);
+ mgr.QueueEvent(Broker::incoming_connection_established, vl);
}
break;
case broker::incoming_connection_status::tag::disconnected:
- if ( BrokerComm::incoming_connection_broken )
+ if ( Broker::incoming_connection_broken )
{
val_list* vl = new val_list;
vl->append(new StringVal(u.peer_name));
- mgr.QueueEvent(BrokerComm::incoming_connection_broken, vl);
+ mgr.QueueEvent(Broker::incoming_connection_broken, vl);
}
break;
@@ -718,7 +718,7 @@ void bro_broker::Manager::Process()
ps.second.received += print_messages.size();
- if ( ! BrokerComm::print_handler )
+ if ( ! Broker::print_handler )
continue;
for ( auto& pm : print_messages )
@@ -741,7 +741,7 @@ void bro_broker::Manager::Process()
val_list* vl = new val_list;
vl->append(new StringVal(move(*msg)));
- mgr.QueueEvent(BrokerComm::print_handler, vl);
+ mgr.QueueEvent(Broker::print_handler, vl);
}
}
diff --git a/src/broker/Manager.h b/src/broker/Manager.h
index 9e1ac7a70b..9fb7b9e328 100644
--- a/src/broker/Manager.h
+++ b/src/broker/Manager.h
@@ -63,7 +63,7 @@ public:
/**
* Enable use of communication.
* @param flags used to tune the local Broker endpoint's behavior.
- * See the BrokerComm::EndpointFlags record type.
+ * See the Broker::EndpointFlags record type.
* @return true if communication is successfully initialized.
*/
bool Enable(Val* flags);
@@ -122,7 +122,7 @@ public:
* of this topic name.
* @param msg the string to send to peers.
* @param flags tune the behavior of how the message is sent.
- * See the BrokerComm::SendFlags record type.
+ * See the Broker::SendFlags record type.
* @return true if the message is sent successfully.
*/
bool Print(std::string topic, std::string msg, Val* flags);
@@ -135,7 +135,7 @@ public:
* @param msg the event to send to peers, which is the name of the event
* as a string followed by all of its arguments.
* @param flags tune the behavior of how the message is sent.
- * See the BrokerComm::SendFlags record type.
+ * See the Broker::SendFlags record type.
* @return true if the message is sent successfully.
*/
bool Event(std::string topic, broker::message msg, int flags);
@@ -146,9 +146,9 @@ public:
* Peers advertise interest by registering a subscription to some prefix
* of this topic name.
* @param args the event and its arguments to send to peers. See the
- * BrokerComm::EventArgs record type.
+ * Broker::EventArgs record type.
* @param flags tune the behavior of how the message is sent.
- * See the BrokerComm::SendFlags record type.
+ * See the Broker::SendFlags record type.
* @return true if the message is sent successfully.
*/
bool Event(std::string topic, RecordVal* args, Val* flags);
@@ -160,7 +160,7 @@ public:
* @param columns the data which comprises the log entry.
* @param info the record type corresponding to the log's columns.
* @param flags tune the behavior of how the message is sent.
- * See the BrokerComm::SendFlags record type.
+ * See the Broker::SendFlags record type.
* @return true if the message is sent successfully.
*/
bool Log(EnumVal* stream_id, RecordVal* columns, RecordType* info,
@@ -174,7 +174,7 @@ public:
* of this topic name.
* @param event a Bro event value.
* @param flags tune the behavior of how the message is sent.
- * See the BrokerComm::SendFlags record type.
+ * See the Broker::SendFlags record type.
* @return true if automatic event sending is now enabled.
*/
bool AutoEvent(std::string topic, Val* event, Val* flags);
@@ -320,7 +320,7 @@ public:
Stats ConsumeStatistics();
/**
- * Convert BrokerComm::SendFlags to int flags for use with broker::send().
+ * Convert Broker::SendFlags to int flags for use with broker::send().
*/
static int send_flags_to_int(Val* flags);
@@ -335,7 +335,7 @@ private:
void Process() override;
const char* Tag() override
- { return "BrokerComm::Manager"; }
+ { return "Broker::Manager"; }
broker::endpoint& Endpoint()
{ return *endpoint; }
diff --git a/src/broker/Store.cc b/src/broker/Store.cc
index f9effa6d9e..97954bb328 100644
--- a/src/broker/Store.cc
+++ b/src/broker/Store.cc
@@ -14,12 +14,12 @@ OpaqueType* bro_broker::opaque_of_store_handle;
bro_broker::StoreHandleVal::StoreHandleVal(broker::store::identifier id,
bro_broker::StoreType arg_type,
- broker::util::optional arg_back,
+ broker::util::optional arg_back,
RecordVal* backend_options, std::chrono::duration resync)
: OpaqueVal(opaque_of_store_handle),
store(), store_type(arg_type), backend_type(arg_back)
{
- using BifEnum::BrokerStore::BackendType;
+ using BifEnum::Broker::BackendType;
std::unique_ptr backend;
if ( backend_type )
@@ -91,7 +91,7 @@ bro_broker::StoreHandleVal::StoreHandleVal(broker::store::identifier id,
void bro_broker::StoreHandleVal::ValDescribe(ODesc* d) const
{
- using BifEnum::BrokerStore::BackendType;
+ using BifEnum::Broker::BackendType;
d->Add("broker::store::");
switch ( store_type ) {
diff --git a/src/broker/Store.h b/src/broker/Store.h
index 5823e0c3f8..4b673e70dc 100644
--- a/src/broker/Store.h
+++ b/src/broker/Store.h
@@ -25,9 +25,9 @@ enum StoreType {
};
/**
- * Create a BrokerStore::QueryStatus value.
+ * Create a Broker::QueryStatus value.
* @param success whether the query status should be set to success or failure.
- * @return a BrokerStore::QueryStatus value.
+ * @return a Broker::QueryStatus value.
*/
inline EnumVal* query_status(bool success)
{
@@ -37,34 +37,34 @@ inline EnumVal* query_status(bool success)
if ( ! store_query_status )
{
- store_query_status = internal_type("BrokerStore::QueryStatus")->AsEnumType();
- success_val = store_query_status->Lookup("BrokerStore", "SUCCESS");
- failure_val = store_query_status->Lookup("BrokerStore", "FAILURE");
+ store_query_status = internal_type("Broker::QueryStatus")->AsEnumType();
+ success_val = store_query_status->Lookup("Broker", "SUCCESS");
+ failure_val = store_query_status->Lookup("Broker", "FAILURE");
}
return new EnumVal(success ? success_val : failure_val, store_query_status);
}
/**
- * @return a BrokerStore::QueryResult value that has a BrokerStore::QueryStatus indicating
+ * @return a Broker::QueryResult value that has a Broker::QueryStatus indicating
* a failure.
*/
inline RecordVal* query_result()
{
- auto rval = new RecordVal(BifType::Record::BrokerStore::QueryResult);
+ auto rval = new RecordVal(BifType::Record::Broker::QueryResult);
rval->Assign(0, query_status(false));
- rval->Assign(1, new RecordVal(BifType::Record::BrokerComm::Data));
+ rval->Assign(1, new RecordVal(BifType::Record::Broker::Data));
return rval;
}
/**
* @param data the result of the query.
- * @return a BrokerStore::QueryResult value that has a BrokerStore::QueryStatus indicating
+ * @return a Broker::QueryResult value that has a Broker::QueryStatus indicating
* a success.
*/
inline RecordVal* query_result(RecordVal* data)
{
- auto rval = new RecordVal(BifType::Record::BrokerStore::QueryResult);
+ auto rval = new RecordVal(BifType::Record::Broker::QueryResult);
rval->Assign(0, query_status(true));
rval->Assign(1, data);
return rval;
@@ -130,7 +130,7 @@ public:
StoreHandleVal(broker::store::identifier id,
bro_broker::StoreType arg_type,
- broker::util::optional arg_back,
+ broker::util::optional arg_back,
RecordVal* backend_options,
std::chrono::duration resync = std::chrono::seconds(1));
@@ -140,7 +140,7 @@ public:
broker::store::frontend* store;
bro_broker::StoreType store_type;
- broker::util::optional backend_type;
+ broker::util::optional backend_type;
protected:
diff --git a/src/broker/comm.bif b/src/broker/comm.bif
index f8dd546965..4caa1f8859 100644
--- a/src/broker/comm.bif
+++ b/src/broker/comm.bif
@@ -5,124 +5,124 @@
#include "broker/Manager.h"
%%}
-module BrokerComm;
+module Broker;
-type BrokerComm::EndpointFlags: record;
+type Broker::EndpointFlags: record;
## Enable use of communication.
##
## flags: used to tune the local Broker endpoint behavior.
##
## Returns: true if communication is successfully initialized.
-function BrokerComm::enable%(flags: EndpointFlags &default = EndpointFlags()%): bool
+function Broker::enable%(flags: EndpointFlags &default = EndpointFlags()%): bool
%{
return new Val(broker_mgr->Enable(flags), TYPE_BOOL);
%}
-## Changes endpoint flags originally supplied to :bro:see:`BrokerComm::enable`.
+## Changes endpoint flags originally supplied to :bro:see:`Broker::enable`.
##
## flags: the new endpoint behavior flags to use.
##
## Returns: true if flags were changed.
-function BrokerComm::set_endpoint_flags%(flags: EndpointFlags &default = EndpointFlags()%): bool
+function Broker::set_endpoint_flags%(flags: EndpointFlags &default = EndpointFlags()%): bool
%{
return new Val(broker_mgr->SetEndpointFlags(flags), TYPE_BOOL);
%}
## Allow sending messages to peers if associated with the given topic.
## This has no effect if auto publication behavior is enabled via the flags
-## supplied to :bro:see:`BrokerComm::enable` or :bro:see:`BrokerComm::set_endpoint_flags`.
+## supplied to :bro:see:`Broker::enable` or :bro:see:`Broker::set_endpoint_flags`.
##
## topic: a topic to allow messages to be published under.
##
## Returns: true if successful.
-function BrokerComm::publish_topic%(topic: string%): bool
+function Broker::publish_topic%(topic: string%): bool
%{
return new Val(broker_mgr->PublishTopic(topic->CheckString()), TYPE_BOOL);
%}
## Disallow sending messages to peers if associated with the given topic.
## This has no effect if auto publication behavior is enabled via the flags
-## supplied to :bro:see:`BrokerComm::enable` or :bro:see:`BrokerComm::set_endpoint_flags`.
+## supplied to :bro:see:`Broker::enable` or :bro:see:`Broker::set_endpoint_flags`.
##
## topic: a topic to disallow messages to be published under.
##
## Returns: true if successful.
-function BrokerComm::unpublish_topic%(topic: string%): bool
+function Broker::unpublish_topic%(topic: string%): bool
%{
return new Val(broker_mgr->UnpublishTopic(topic->CheckString()), TYPE_BOOL);
%}
## Allow advertising interest in the given topic to peers.
## This has no effect if auto advertise behavior is enabled via the flags
-## supplied to :bro:see:`BrokerComm::enable` or :bro:see:`BrokerComm::set_endpoint_flags`.
+## supplied to :bro:see:`Broker::enable` or :bro:see:`Broker::set_endpoint_flags`.
##
## topic: a topic to allow advertising interest/subscription to peers.
##
## Returns: true if successful.
-function BrokerComm::advertise_topic%(topic: string%): bool
+function Broker::advertise_topic%(topic: string%): bool
%{
return new Val(broker_mgr->AdvertiseTopic(topic->CheckString()), TYPE_BOOL);
%}
## Disallow advertising interest in the given topic to peers.
## This has no effect if auto advertise behavior is enabled via the flags
-## supplied to :bro:see:`BrokerComm::enable` or :bro:see:`BrokerComm::set_endpoint_flags`.
+## supplied to :bro:see:`Broker::enable` or :bro:see:`Broker::set_endpoint_flags`.
##
## topic: a topic to disallow advertising interest/subscription to peers.
##
## Returns: true if successful.
-function BrokerComm::unadvertise_topic%(topic: string%): bool
+function Broker::unadvertise_topic%(topic: string%): bool
%{
return new Val(broker_mgr->UnadvertiseTopic(topic->CheckString()), TYPE_BOOL);
%}
## Generated when a connection has been established due to a previous call
-## to :bro:see:`BrokerComm::connect`.
+## to :bro:see:`Broker::connect`.
##
## peer_address: the address used to connect to the peer.
##
## peer_port: the port used to connect to the peer.
##
## peer_name: the name by which the peer identified itself.
-event BrokerComm::outgoing_connection_established%(peer_address: string,
+event Broker::outgoing_connection_established%(peer_address: string,
peer_port: port,
peer_name: string%);
## Generated when a previously established connection becomes broken.
## Reconnection will automatically be attempted at a frequency given
-## by the original call to :bro:see:`BrokerComm::connect`.
+## by the original call to :bro:see:`Broker::connect`.
##
## peer_address: the address used to connect to the peer.
##
## peer_port: the port used to connect to the peer.
##
-## .. bro:see:: BrokerComm::outgoing_connection_established
-event BrokerComm::outgoing_connection_broken%(peer_address: string,
+## .. bro:see:: Broker::outgoing_connection_established
+event Broker::outgoing_connection_broken%(peer_address: string,
peer_port: port%);
-## Generated when a connection via :bro:see:`BrokerComm::connect` has failed
+## Generated when a connection via :bro:see:`Broker::connect` has failed
## because the remote side is incompatible.
##
## peer_address: the address used to connect to the peer.
##
## peer_port: the port used to connect to the peer.
-event BrokerComm::outgoing_connection_incompatible%(peer_address: string,
+event Broker::outgoing_connection_incompatible%(peer_address: string,
peer_port: port%);
## Generated when a peer has established a connection with this process
-## as a result of previously performing a :bro:see:`BrokerComm::listen`.
+## as a result of previously performing a :bro:see:`Broker::listen`.
##
## peer_name: the name by which the peer identified itself.
-event BrokerComm::incoming_connection_established%(peer_name: string%);
+event Broker::incoming_connection_established%(peer_name: string%);
## Generated when a peer that previously established a connection with this
## process becomes disconnected.
##
## peer_name: the name by which the peer identified itself.
##
-## .. bro:see:: BrokerComm::incoming_connection_established
-event BrokerComm::incoming_connection_broken%(peer_name: string%);
+## .. bro:see:: Broker::incoming_connection_established
+event Broker::incoming_connection_broken%(peer_name: string%);
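A minimal script-level sketch of handling the renamed connection events:

    event Broker::outgoing_connection_established(peer_address: string,
                                                   peer_port: port,
                                                   peer_name: string)
        {
        print fmt("peered with %s at %s:%s", peer_name, peer_address, peer_port);
        }

    event Broker::incoming_connection_broken(peer_name: string)
        {
        print fmt("lost peer %s", peer_name);
        }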
## Listen for remote connections.
##
@@ -135,8 +135,8 @@ event BrokerComm::incoming_connection_broken%(peer_name: string%);
##
## Returns: true if the local endpoint is now listening for connections.
##
-## .. bro:see:: BrokerComm::incoming_connection_established
-function BrokerComm::listen%(p: port, a: string &default = "",
+## .. bro:see:: Broker::incoming_connection_established
+function Broker::listen%(p: port, a: string &default = "",
reuse: bool &default = T%): bool
%{
if ( ! p->IsTCP() )
@@ -164,8 +164,8 @@ function BrokerComm::listen%(p: port, a: string &default = "",
## it's a new peer. The actual connection may not be established
## until a later point in time.
##
-## .. bro:see:: BrokerComm::outgoing_connection_established
-function BrokerComm::connect%(a: string, p: port, retry: interval%): bool
+## .. bro:see:: Broker::outgoing_connection_established
+function Broker::connect%(a: string, p: port, retry: interval%): bool
%{
if ( ! p->IsTCP() )
{
@@ -180,13 +180,13 @@ function BrokerComm::connect%(a: string, p: port, retry: interval%): bool
## Remove a remote connection.
##
-## a: the address used in previous successful call to :bro:see:`BrokerComm::connect`.
+## a: the address used in previous successful call to :bro:see:`Broker::connect`.
##
-## p: the port used in previous successful call to :bro:see:`BrokerComm::connect`.
+## p: the port used in previous successful call to :bro:see:`Broker::connect`.
##
## Returns: true if the arguments match a previously successful call to
-## :bro:see:`BrokerComm::connect`.
-function BrokerComm::disconnect%(a: string, p: port%): bool
+## :bro:see:`Broker::connect`.
+function Broker::disconnect%(a: string, p: port%): bool
%{
if ( ! p->IsTCP() )
{
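As a usage sketch for the peering calls above (not part of the patch): one endpoint listens, the other connects and reacts to the connection events. The address, port and retry interval are illustrative values, and Broker::enable() is assumed to be called with its default flags.

    # Listening side
    event bro_init()
        {
        Broker::enable();
        Broker::listen(9999/tcp, "127.0.0.1");
        }

    event Broker::incoming_connection_established(peer_name: string)
        {
        print "incoming peer", peer_name;
        }

    # Connecting side
    event bro_init()
        {
        Broker::enable();
        Broker::connect("127.0.0.1", 9999/tcp, 1sec);
        }

    event Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string)
        {
        print "connected to", peer_address, peer_port, peer_name;
        }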
diff --git a/src/broker/data.bif b/src/broker/data.bif
index 9ea1ca1e86..d4744f07c6 100644
--- a/src/broker/data.bif
+++ b/src/broker/data.bif
@@ -5,9 +5,9 @@
#include "broker/Data.h"
%%}
-module BrokerComm;
+module Broker;
-## Enumerates the possible types that :bro:see:`BrokerComm::Data` may be in
+## Enumerates the possible types that :bro:see:`Broker::Data` may be in
## terms of Bro data types.
enum DataType %{
BOOL,
@@ -27,9 +27,9 @@ enum DataType %{
RECORD,
%}
-type BrokerComm::Data: record;
+type Broker::Data: record;
-type BrokerComm::TableItem: record;
+type Broker::TableItem: record;
## Convert any Bro value to communication data.
##
@@ -39,7 +39,7 @@ type BrokerComm::TableItem: record;
## field will not be set if the conversion was not possible (this can
## happen if the Bro data type does not support being converted to
## communication data).
-function BrokerComm::data%(d: any%): BrokerComm::Data
+function Broker::data%(d: any%): Broker::Data
%{
return bro_broker::make_data_val(d);
%}
@@ -49,75 +49,75 @@ function BrokerComm::data%(d: any%): BrokerComm::Data
## d: the communication data.
##
## Returns: the data type associated with the communication data.
-function BrokerComm::data_type%(d: BrokerComm::Data%): BrokerComm::DataType
+function Broker::data_type%(d: Broker::Data%): Broker::DataType
%{
return bro_broker::get_data_type(d->AsRecordVal(), frame);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::BOOL` to
+## Convert communication data with a type of :bro:see:`Broker::BOOL` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_bool%(d: BrokerComm::Data%): bool
+function Broker::refine_to_bool%(d: Broker::Data%): bool
%{
return bro_broker::refine(d->AsRecordVal(), TYPE_BOOL, frame);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::INT` to
+## Convert communication data with a type of :bro:see:`Broker::INT` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_int%(d: BrokerComm::Data%): int
+function Broker::refine_to_int%(d: Broker::Data%): int
%{
return bro_broker::refine(d->AsRecordVal(), TYPE_INT, frame);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::COUNT` to
+## Convert communication data with a type of :bro:see:`Broker::COUNT` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_count%(d: BrokerComm::Data%): count
+function Broker::refine_to_count%(d: Broker::Data%): count
%{
return bro_broker::refine(d->AsRecordVal(), TYPE_COUNT, frame);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::DOUBLE` to
+## Convert communication data with a type of :bro:see:`Broker::DOUBLE` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_double%(d: BrokerComm::Data%): double
+function Broker::refine_to_double%(d: Broker::Data%): double
%{
return bro_broker::refine(d->AsRecordVal(), TYPE_DOUBLE, frame);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::STRING` to
+## Convert communication data with a type of :bro:see:`Broker::STRING` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_string%(d: BrokerComm::Data%): string
+function Broker::refine_to_string%(d: Broker::Data%): string
%{
return new StringVal(bro_broker::require_data_type(d->AsRecordVal(),
TYPE_STRING,
frame));
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::ADDR` to
+## Convert communication data with a type of :bro:see:`Broker::ADDR` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_addr%(d: BrokerComm::Data%): addr
+function Broker::refine_to_addr%(d: Broker::Data%): addr
%{
auto& a = bro_broker::require_data_type(d->AsRecordVal(),
TYPE_ADDR, frame);
@@ -125,13 +125,13 @@ function BrokerComm::refine_to_addr%(d: BrokerComm::Data%): addr
return new AddrVal(IPAddr(*bits));
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::SUBNET` to
+## Convert communication data with a type of :bro:see:`Broker::SUBNET` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_subnet%(d: BrokerComm::Data%): subnet
+function Broker::refine_to_subnet%(d: Broker::Data%): subnet
%{
auto& a = bro_broker::require_data_type(d->AsRecordVal(),
TYPE_SUBNET, frame);
@@ -139,53 +139,53 @@ function BrokerComm::refine_to_subnet%(d: BrokerComm::Data%): subnet
return new SubNetVal(IPPrefix(IPAddr(*bits), a.length()));
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::PORT` to
+## Convert communication data with a type of :bro:see:`Broker::PORT` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_port%(d: BrokerComm::Data%): port
+function Broker::refine_to_port%(d: Broker::Data%): port
%{
auto& a = bro_broker::require_data_type(d->AsRecordVal(),
TYPE_SUBNET, frame);
return new PortVal(a.number(), bro_broker::to_bro_port_proto(a.type()));
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::TIME` to
+## Convert communication data with a type of :bro:see:`Broker::TIME` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_time%(d: BrokerComm::Data%): time
+function Broker::refine_to_time%(d: Broker::Data%): time
%{
auto v = bro_broker::require_data_type(d->AsRecordVal(),
TYPE_TIME, frame).value;
return new Val(v, TYPE_TIME);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::INTERVAL` to
+## Convert communication data with a type of :bro:see:`Broker::INTERVAL` to
## an actual Bro value.
##
## d: the communication data to convert.
##
## Returns: the value retrieved from the communication data.
-function BrokerComm::refine_to_interval%(d: BrokerComm::Data%): interval
+function Broker::refine_to_interval%(d: Broker::Data%): interval
%{
auto v = bro_broker::require_data_type(d->AsRecordVal(),
TYPE_TIME, frame).value;
return new Val(v, TYPE_INTERVAL);
%}
-## Convert communication data with a type of :bro:see:`BrokerComm::ENUM` to
+## Convert communication data with a type of :bro:see:`Broker::ENUM` to
## the name of the enum value. :bro:see:`lookup_ID` may be used to convert
## the name to the actual enum value.
##
## d: the communication data to convert.
##
## Returns: the enum name retrieved from the communication data.
-function BrokerComm::refine_to_enum_name%(d: BrokerComm::Data%): string
+function Broker::refine_to_enum_name%(d: Broker::Data%): string
%{
auto& v = bro_broker::require_data_type(d->AsRecordVal(),
TYPE_ENUM, frame).name;
@@ -193,7 +193,7 @@ function BrokerComm::refine_to_enum_name%(d: BrokerComm::Data%): string
%}
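To illustrate the conversion functions above, a minimal round trip through communication data (the value is arbitrary):

    event bro_init()
        {
        local d = Broker::data(42);

        if ( Broker::data_type(d) == Broker::COUNT )
            print Broker::refine_to_count(d);   # prints 42
        }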
## Create communication data of type "set".
-function BrokerComm::set_create%(%): BrokerComm::Data
+function Broker::set_create%(%): Broker::Data
%{
return bro_broker::make_data_val(broker::set());
%}
@@ -203,7 +203,7 @@ function BrokerComm::set_create%(%): BrokerComm::Data
## s: the set to clear.
##
## Returns: always true.
-function BrokerComm::set_clear%(s: BrokerComm::Data%): bool
+function Broker::set_clear%(s: Broker::Data%): bool
%{
auto& v = bro_broker::require_data_type(s->AsRecordVal(), TYPE_TABLE,
frame);
@@ -216,7 +216,7 @@ function BrokerComm::set_clear%(s: BrokerComm::Data%): bool
## s: the set to query.
##
## Returns: the number of elements in the set.
-function BrokerComm::set_size%(s: BrokerComm::Data%): count
+function Broker::set_size%(s: Broker::Data%): count
%{
auto& v = bro_broker::require_data_type(s->AsRecordVal(), TYPE_TABLE,
frame);
@@ -230,7 +230,7 @@ function BrokerComm::set_size%(s: BrokerComm::Data%): count
## key: the element to check for existence.
##
## Returns: true if the key exists in the set.
-function BrokerComm::set_contains%(s: BrokerComm::Data, key: BrokerComm::Data%): bool
+function Broker::set_contains%(s: Broker::Data, key: Broker::Data%): bool
%{
auto& v = bro_broker::require_data_type(s->AsRecordVal(), TYPE_TABLE,
frame);
@@ -245,7 +245,7 @@ function BrokerComm::set_contains%(s: BrokerComm::Data, key: BrokerComm::Data%):
## key: the element to insert.
##
## Returns: true if the key was inserted, or false if it already existed.
-function BrokerComm::set_insert%(s: BrokerComm::Data, key: BrokerComm::Data%): bool
+function Broker::set_insert%(s: Broker::Data, key: Broker::Data%): bool
%{
auto& v = bro_broker::require_data_type(s->AsRecordVal(), TYPE_TABLE,
frame);
@@ -260,7 +260,7 @@ function BrokerComm::set_insert%(s: BrokerComm::Data, key: BrokerComm::Data%): b
## key: the element to remove.
##
## Returns: true if the element existed in the set and is now removed.
-function BrokerComm::set_remove%(s: BrokerComm::Data, key: BrokerComm::Data%): bool
+function Broker::set_remove%(s: Broker::Data, key: Broker::Data%): bool
%{
auto& v = bro_broker::require_data_type(s->AsRecordVal(), TYPE_TABLE,
frame);
@@ -274,7 +274,7 @@ function BrokerComm::set_remove%(s: BrokerComm::Data, key: BrokerComm::Data%): b
## s: the set to iterate over.
##
## Returns: an iterator.
-function BrokerComm::set_iterator%(s: BrokerComm::Data%): opaque of BrokerComm::SetIterator
+function Broker::set_iterator%(s: Broker::Data%): opaque of Broker::SetIterator
%{
return new bro_broker::SetIterator(s->AsRecordVal(), TYPE_TABLE, frame);
%}
@@ -285,7 +285,7 @@ function BrokerComm::set_iterator%(s: BrokerComm::Data%): opaque of BrokerComm::
##
## Returns: true if there are no more elements to iterate over, i.e.
## the iterator is one-past-the-final-element.
-function BrokerComm::set_iterator_last%(it: opaque of BrokerComm::SetIterator%): bool
+function Broker::set_iterator_last%(it: opaque of Broker::SetIterator%): bool
%{
auto set_it = static_cast<bro_broker::SetIterator*>(it);
return new Val(set_it->it == set_it->dat.end(), TYPE_BOOL);
@@ -298,7 +298,7 @@ function BrokerComm::set_iterator_last%(it: opaque of BrokerComm::SetIterator%):
## Returns: true if the iterator, after advancing, still references an element
## in the collection. False if the iterator, after advancing, is
## one-past-the-final-element.
-function BrokerComm::set_iterator_next%(it: opaque of BrokerComm::SetIterator%): bool
+function Broker::set_iterator_next%(it: opaque of Broker::SetIterator%): bool
%{
auto set_it = static_cast<bro_broker::SetIterator*>(it);
@@ -314,10 +314,10 @@ function BrokerComm::set_iterator_next%(it: opaque of BrokerComm::SetIterator%):
## it: an iterator.
##
## Returns: element in the collection that the iterator currently references.
-function BrokerComm::set_iterator_value%(it: opaque of BrokerComm::SetIterator%): BrokerComm::Data
+function Broker::set_iterator_value%(it: opaque of Broker::SetIterator%): Broker::Data
%{
auto set_it = static_cast<bro_broker::SetIterator*>(it);
- auto rval = new RecordVal(BifType::Record::BrokerComm::Data);
+ auto rval = new RecordVal(BifType::Record::Broker::Data);
if ( set_it->it == set_it->dat.end() )
{
@@ -332,7 +332,7 @@ function BrokerComm::set_iterator_value%(it: opaque of BrokerComm::SetIterator%)
%}
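A sketch of building and walking a set with the functions above, assuming the while statement is available; the element values are arbitrary:

    event bro_init()
        {
        local s = Broker::set_create();
        Broker::set_insert(s, Broker::data("alice"));
        Broker::set_insert(s, Broker::data("bob"));

        local it = Broker::set_iterator(s);

        while ( ! Broker::set_iterator_last(it) )
            {
            print Broker::refine_to_string(Broker::set_iterator_value(it));
            Broker::set_iterator_next(it);
            }
        }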
## Create communication data of type "table".
-function BrokerComm::table_create%(%): BrokerComm::Data
+function Broker::table_create%(%): Broker::Data
%{
return bro_broker::make_data_val(broker::table());
%}
@@ -342,7 +342,7 @@ function BrokerComm::table_create%(%): BrokerComm::Data
## t: the table to clear.
##
## Returns: always true.
-function BrokerComm::table_clear%(t: BrokerComm::Data%): bool
+function Broker::table_clear%(t: Broker::Data%): bool
%{
auto& v = bro_broker::require_data_type(t->AsRecordVal(),
TYPE_TABLE, frame);
@@ -355,7 +355,7 @@ function BrokerComm::table_clear%(t: BrokerComm::Data%): bool
## t: the table to query.
##
## Returns: the number of elements in the table.
-function BrokerComm::table_size%(t: BrokerComm::Data%): count
+function Broker::table_size%(t: Broker::Data%): count
%{
auto& v = bro_broker::require_data_type(t->AsRecordVal(),
TYPE_TABLE, frame);
@@ -369,7 +369,7 @@ function BrokerComm::table_size%(t: BrokerComm::Data%): count
## key: the key to check for existence.
##
## Returns: true if the key exists in the table.
-function BrokerComm::table_contains%(t: BrokerComm::Data, key: BrokerComm::Data%): bool
+function Broker::table_contains%(t: Broker::Data, key: Broker::Data%): bool
%{
auto& v = bro_broker::require_data_type(t->AsRecordVal(),
TYPE_TABLE, frame);
@@ -387,7 +387,7 @@ function BrokerComm::table_contains%(t: BrokerComm::Data, key: BrokerComm::Data%
##
## Returns: true if the key-value pair was inserted, or false if the key
## already existed in the table.
-function BrokerComm::table_insert%(t: BrokerComm::Data, key: BrokerComm::Data, val: BrokerComm::Data%): BrokerComm::Data
+function Broker::table_insert%(t: Broker::Data, key: Broker::Data, val: Broker::Data%): Broker::Data
%{
auto& table = bro_broker::require_data_type(t->AsRecordVal(),
TYPE_TABLE, frame);
@@ -404,7 +404,7 @@ function BrokerComm::table_insert%(t: BrokerComm::Data, key: BrokerComm::Data, v
catch (const std::out_of_range&)
{
table[k] = v;
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
}
%}
@@ -416,7 +416,7 @@ function BrokerComm::table_insert%(t: BrokerComm::Data, key: BrokerComm::Data, v
##
## Returns: the value associated with the key. If the key did not exist, then
## the optional field of the returned record is not set.
-function BrokerComm::table_remove%(t: BrokerComm::Data, key: BrokerComm::Data%): BrokerComm::Data
+function Broker::table_remove%(t: Broker::Data, key: Broker::Data%): Broker::Data
%{
auto& table = bro_broker::require_data_type(t->AsRecordVal(),
TYPE_TABLE, frame);
@@ -424,7 +424,7 @@ function BrokerComm::table_remove%(t: BrokerComm::Data, key: BrokerComm::Data%):
auto it = table.find(k);
if ( it == table.end() )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
else
{
auto rval = bro_broker::make_data_val(move(it->second));
@@ -441,7 +441,7 @@ function BrokerComm::table_remove%(t: BrokerComm::Data, key: BrokerComm::Data%):
##
## Returns: the value associated with the key. If the key did not exist, then
## the optional field of the returned record is not set.
-function BrokerComm::table_lookup%(t: BrokerComm::Data, key: BrokerComm::Data%): BrokerComm::Data
+function Broker::table_lookup%(t: Broker::Data, key: Broker::Data%): Broker::Data
%{
auto& table = bro_broker::require_data_type(t->AsRecordVal(),
TYPE_TABLE, frame);
@@ -449,7 +449,7 @@ function BrokerComm::table_lookup%(t: BrokerComm::Data, key: BrokerComm::Data%):
auto it = table.find(k);
if ( it == table.end() )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
else
return bro_broker::make_data_val(it->second);
%}
@@ -460,7 +460,7 @@ function BrokerComm::table_lookup%(t: BrokerComm::Data, key: BrokerComm::Data%):
## t: the table to iterate over.
##
## Returns: an iterator.
-function BrokerComm::table_iterator%(t: BrokerComm::Data%): opaque of BrokerComm::TableIterator
+function Broker::table_iterator%(t: Broker::Data%): opaque of Broker::TableIterator
%{
return new bro_broker::TableIterator(t->AsRecordVal(), TYPE_TABLE, frame);
%}
@@ -471,7 +471,7 @@ function BrokerComm::table_iterator%(t: BrokerComm::Data%): opaque of BrokerComm
##
## Returns: true if there are no more elements to iterate over, i.e.
## the iterator is one-past-the-final-element.
-function BrokerComm::table_iterator_last%(it: opaque of BrokerComm::TableIterator%): bool
+function Broker::table_iterator_last%(it: opaque of Broker::TableIterator%): bool
%{
auto ti = static_cast<bro_broker::TableIterator*>(it);
return new Val(ti->it == ti->dat.end(), TYPE_BOOL);
@@ -484,7 +484,7 @@ function BrokerComm::table_iterator_last%(it: opaque of BrokerComm::TableIterato
## Returns: true if the iterator, after advancing, still references an element
## in the collection. False if the iterator, after advancing, is
## one-past-the-final-element.
-function BrokerComm::table_iterator_next%(it: opaque of BrokerComm::TableIterator%): bool
+function Broker::table_iterator_next%(it: opaque of Broker::TableIterator%): bool
%{
auto ti = static_cast<bro_broker::TableIterator*>(it);
@@ -500,12 +500,12 @@ function BrokerComm::table_iterator_next%(it: opaque of BrokerComm::TableIterato
## it: an iterator.
##
## Returns: element in the collection that the iterator currently references.
-function BrokerComm::table_iterator_value%(it: opaque of BrokerComm::TableIterator%): BrokerComm::TableItem
+function Broker::table_iterator_value%(it: opaque of Broker::TableIterator%): Broker::TableItem
%{
auto ti = static_cast<bro_broker::TableIterator*>(it);
- auto rval = new RecordVal(BifType::Record::BrokerComm::TableItem);
- auto key_val = new RecordVal(BifType::Record::BrokerComm::Data);
- auto val_val = new RecordVal(BifType::Record::BrokerComm::Data);
+ auto rval = new RecordVal(BifType::Record::Broker::TableItem);
+ auto key_val = new RecordVal(BifType::Record::Broker::Data);
+ auto val_val = new RecordVal(BifType::Record::Broker::Data);
rval->Assign(0, key_val);
rval->Assign(1, val_val);
@@ -523,7 +523,7 @@ function BrokerComm::table_iterator_value%(it: opaque of BrokerComm::TableIterat
%}
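A corresponding sketch for the table functions; keys and values are arbitrary:

    event bro_init()
        {
        local t = Broker::table_create();
        Broker::table_insert(t, Broker::data("one"), Broker::data(1));

        if ( Broker::table_contains(t, Broker::data("one")) )
            print Broker::refine_to_count(Broker::table_lookup(t, Broker::data("one")));
        }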
## Create communication data of type "vector".
-function BrokerComm::vector_create%(%): BrokerComm::Data
+function Broker::vector_create%(%): Broker::Data
%{
return bro_broker::make_data_val(broker::vector());
%}
@@ -533,7 +533,7 @@ function BrokerComm::vector_create%(%): BrokerComm::Data
## v: the vector to clear.
##
## Returns: always true.
-function BrokerComm::vector_clear%(v: BrokerComm::Data%): bool
+function Broker::vector_clear%(v: Broker::Data%): bool
%{
auto& vec = bro_broker::require_data_type(v->AsRecordVal(),
TYPE_VECTOR, frame);
@@ -546,7 +546,7 @@ function BrokerComm::vector_clear%(v: BrokerComm::Data%): bool
## v: the vector to query.
##
## Returns: the number of elements in the vector.
-function BrokerComm::vector_size%(v: BrokerComm::Data%): count
+function Broker::vector_size%(v: Broker::Data%): count
%{
auto& vec = bro_broker::require_data_type(v->AsRecordVal(),
TYPE_VECTOR, frame);
@@ -564,7 +564,7 @@ function BrokerComm::vector_size%(v: BrokerComm::Data%): count
## current size of the vector, the element is inserted at the end.
##
## Returns: always true.
-function BrokerComm::vector_insert%(v: BrokerComm::Data, d: BrokerComm::Data, idx: count%): bool
+function Broker::vector_insert%(v: Broker::Data, d: Broker::Data, idx: count%): bool
%{
auto& vec = bro_broker::require_data_type(v->AsRecordVal(),
TYPE_VECTOR, frame);
@@ -584,14 +584,14 @@ function BrokerComm::vector_insert%(v: BrokerComm::Data, d: BrokerComm::Data, id
##
## Returns: the value that was just evicted. If the index was larger than any
## valid index, the optional field of the returned record is not set.
-function BrokerComm::vector_replace%(v: BrokerComm::Data, d: BrokerComm::Data, idx: count%): BrokerComm::Data
+function Broker::vector_replace%(v: Broker::Data, d: Broker::Data, idx: count%): Broker::Data
%{
auto& vec = bro_broker::require_data_type(v->AsRecordVal(),
TYPE_VECTOR, frame);
auto& item = bro_broker::opaque_field_to_data(d->AsRecordVal(), frame);
if ( idx >= vec.size() )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
auto rval = bro_broker::make_data_val(move(vec[idx]));
vec[idx] = item;
@@ -606,13 +606,13 @@ function BrokerComm::vector_replace%(v: BrokerComm::Data, d: BrokerComm::Data, i
##
## Returns: the value that was just evicted. If the index was larger than any
## valid index, the optional field of the returned record is not set.
-function BrokerComm::vector_remove%(v: BrokerComm::Data, idx: count%): BrokerComm::Data
+function Broker::vector_remove%(v: Broker::Data, idx: count%): Broker::Data
%{
auto& vec = bro_broker::require_data_type(v->AsRecordVal(),
TYPE_VECTOR, frame);
if ( idx >= vec.size() )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
auto rval = bro_broker::make_data_val(move(vec[idx]));
vec.erase(vec.begin() + idx);
@@ -627,13 +627,13 @@ function BrokerComm::vector_remove%(v: BrokerComm::Data, idx: count%): BrokerCom
##
## Returns: the value at the index. If the index was larger than any
## valid index, the optional field of the returned record is not set.
-function BrokerComm::vector_lookup%(v: BrokerComm::Data, idx: count%): BrokerComm::Data
+function Broker::vector_lookup%(v: Broker::Data, idx: count%): Broker::Data
%{
auto& vec = bro_broker::require_data_type(v->AsRecordVal(),
TYPE_VECTOR, frame);
if ( idx >= vec.size() )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
return bro_broker::make_data_val(vec[idx]);
%}
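A sketch for the vector functions above; indices and values are arbitrary:

    event bro_init()
        {
        local v = Broker::vector_create();
        Broker::vector_insert(v, Broker::data("a"), 0);
        Broker::vector_insert(v, Broker::data("b"), 1);

        print Broker::vector_size(v);                                  # 2
        print Broker::refine_to_string(Broker::vector_lookup(v, 0));   # a
        }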
@@ -644,7 +644,7 @@ function BrokerComm::vector_lookup%(v: BrokerComm::Data, idx: count%): BrokerCom
## v: the vector to iterate over.
##
## Returns: an iterator.
-function BrokerComm::vector_iterator%(v: BrokerComm::Data%): opaque of BrokerComm::VectorIterator
+function Broker::vector_iterator%(v: Broker::Data%): opaque of Broker::VectorIterator
%{
return new bro_broker::VectorIterator(v->AsRecordVal(), TYPE_VECTOR, frame);
%}
@@ -655,7 +655,7 @@ function BrokerComm::vector_iterator%(v: BrokerComm::Data%): opaque of BrokerCom
##
## Returns: true if there are no more elements to iterate over, i.e.
## the iterator is one-past-the-final-element.
-function BrokerComm::vector_iterator_last%(it: opaque of BrokerComm::VectorIterator%): bool
+function Broker::vector_iterator_last%(it: opaque of Broker::VectorIterator%): bool
%{
auto vi = static_cast<bro_broker::VectorIterator*>(it);
return new Val(vi->it == vi->dat.end(), TYPE_BOOL);
@@ -668,7 +668,7 @@ function BrokerComm::vector_iterator_last%(it: opaque of BrokerComm::VectorItera
## Returns: true if the iterator, after advancing, still references an element
## in the collection. False if the iterator, after advancing, is
## one-past-the-final-element.
-function BrokerComm::vector_iterator_next%(it: opaque of BrokerComm::VectorIterator%): bool
+function Broker::vector_iterator_next%(it: opaque of Broker::VectorIterator%): bool
%{
auto vi = static_cast<bro_broker::VectorIterator*>(it);
@@ -684,10 +684,10 @@ function BrokerComm::vector_iterator_next%(it: opaque of BrokerComm::VectorItera
## it: an iterator.
##
## Returns: element in the collection that the iterator currently references.
-function BrokerComm::vector_iterator_value%(it: opaque of BrokerComm::VectorIterator%): BrokerComm::Data
+function Broker::vector_iterator_value%(it: opaque of Broker::VectorIterator%): Broker::Data
%{
auto vi = static_cast<bro_broker::VectorIterator*>(it);
- auto rval = new RecordVal(BifType::Record::BrokerComm::Data);
+ auto rval = new RecordVal(BifType::Record::Broker::Data);
if ( vi->it == vi->dat.end() )
{
@@ -706,7 +706,7 @@ function BrokerComm::vector_iterator_value%(it: opaque of BrokerComm::VectorIter
## sz: the number of fields in the record.
##
## Returns: record data, with all fields uninitialized.
-function BrokerComm::record_create%(sz: count%): BrokerComm::Data
+function Broker::record_create%(sz: count%): Broker::Data
%{
return bro_broker::make_data_val(broker::record(std::vector<broker::record::field>(sz)));
%}
@@ -716,7 +716,7 @@ function BrokerComm::record_create%(sz: count%): BrokerComm::Data
## r: the record to query.
##
## Returns: the number of fields in the record.
-function BrokerComm::record_size%(r: BrokerComm::Data%): count
+function Broker::record_size%(r: Broker::Data%): count
%{
auto& v = bro_broker::require_data_type(r->AsRecordVal(),
TYPE_RECORD, frame);
@@ -732,7 +732,7 @@ function BrokerComm::record_size%(r: BrokerComm::Data%): count
## idx: the index to replace.
##
## Returns: false if the index was larger than any valid index, else true.
-function BrokerComm::record_assign%(r: BrokerComm::Data, d: BrokerComm::Data, idx: count%): bool
+function Broker::record_assign%(r: Broker::Data, d: Broker::Data, idx: count%): bool
%{
auto& v = bro_broker::require_data_type(r->AsRecordVal(),
TYPE_RECORD, frame);
@@ -754,16 +754,16 @@ function BrokerComm::record_assign%(r: BrokerComm::Data, d: BrokerComm::Data, id
## Returns: the value at the index. The optional field of the returned record
## may not be set if the field of the record has no value or if the
## index was not valid.
-function BrokerComm::record_lookup%(r: BrokerComm::Data, idx: count%): BrokerComm::Data
+function Broker::record_lookup%(r: Broker::Data, idx: count%): Broker::Data
%{
auto& v = bro_broker::require_data_type(r->AsRecordVal(),
TYPE_RECORD, frame);
if ( idx >= v.size() )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
if ( ! v.fields[idx] )
- return new RecordVal(BifType::Record::BrokerComm::Data);
+ return new RecordVal(BifType::Record::Broker::Data);
return bro_broker::make_data_val(*v.fields[idx]);
%}
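A sketch for the record functions above; field values are arbitrary:

    event bro_init()
        {
        local r = Broker::record_create(2);
        Broker::record_assign(r, Broker::data("alice"), 0);
        Broker::record_assign(r, Broker::data(42), 1);

        print Broker::record_size(r);                                   # 2
        print Broker::refine_to_string(Broker::record_lookup(r, 0));    # alice
        }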
@@ -774,7 +774,7 @@ function BrokerComm::record_lookup%(r: BrokerComm::Data, idx: count%): BrokerCom
## r: the record to iterate over.
##
## Returns: an iterator.
-function BrokerComm::record_iterator%(r: BrokerComm::Data%): opaque of BrokerComm::RecordIterator
+function Broker::record_iterator%(r: Broker::Data%): opaque of Broker::RecordIterator
%{
return new bro_broker::RecordIterator(r->AsRecordVal(), TYPE_RECORD, frame);
%}
@@ -785,7 +785,7 @@ function BrokerComm::record_iterator%(r: BrokerComm::Data%): opaque of BrokerCom
##
## Returns: true if there are no more elements to iterate over, i.e.
## the iterator is one-past-the-final-element.
-function BrokerComm::record_iterator_last%(it: opaque of BrokerComm::RecordIterator%): bool
+function Broker::record_iterator_last%(it: opaque of Broker::RecordIterator%): bool
%{
auto ri = static_cast<bro_broker::RecordIterator*>(it);
return new Val(ri->it == ri->dat.fields.end(), TYPE_BOOL);
@@ -798,7 +798,7 @@ function BrokerComm::record_iterator_last%(it: opaque of BrokerComm::RecordItera
## Returns: true if the iterator, after advancing, still references an element
## in the collection. False if the iterator, after advancing, is
## one-past-the-final-element.
-function BrokerComm::record_iterator_next%(it: opaque of BrokerComm::RecordIterator%): bool
+function Broker::record_iterator_next%(it: opaque of Broker::RecordIterator%): bool
%{
auto ri = static_cast<bro_broker::RecordIterator*>(it);
@@ -814,10 +814,10 @@ function BrokerComm::record_iterator_next%(it: opaque of BrokerComm::RecordItera
## it: an iterator.
##
## Returns: element in the collection that the iterator currently references.
-function BrokerComm::record_iterator_value%(it: opaque of BrokerComm::RecordIterator%): BrokerComm::Data
+function Broker::record_iterator_value%(it: opaque of Broker::RecordIterator%): Broker::Data
%{
auto ri = static_cast<bro_broker::RecordIterator*>(it);
- auto rval = new RecordVal(BifType::Record::BrokerComm::Data);
+ auto rval = new RecordVal(BifType::Record::Broker::Data);
if ( ri->it == ri->dat.fields.end() )
{
diff --git a/src/broker/messaging.bif b/src/broker/messaging.bif
index 97b794b50e..3c3240ff16 100644
--- a/src/broker/messaging.bif
+++ b/src/broker/messaging.bif
@@ -6,18 +6,18 @@
#include "logging/Manager.h"
%%}
-module BrokerComm;
+module Broker;
-type BrokerComm::SendFlags: record;
+type Broker::SendFlags: record;
-type BrokerComm::EventArgs: record;
+type Broker::EventArgs: record;
## Used to handle remote print messages from peers that call
-## :bro:see:`BrokerComm::print`.
-event BrokerComm::print_handler%(msg: string%);
+## :bro:see:`Broker::print`.
+event Broker::print_handler%(msg: string%);
## Print a simple message to any interested peers. The receiver can use
-## :bro:see:`BrokerComm::print_handler` to handle messages.
+## :bro:see:`Broker::print_handler` to handle messages.
##
## topic: a topic associated with the printed message.
##
@@ -26,7 +26,7 @@ event BrokerComm::print_handler%(msg: string%);
## flags: tune the behavior of how the message is sent.
##
## Returns: true if the message is sent.
-function BrokerComm::print%(topic: string, msg: string,
+function Broker::print%(topic: string, msg: string,
flags: SendFlags &default = SendFlags()%): bool
%{
auto rval = broker_mgr->Print(topic->CheckString(), msg->CheckString(),
@@ -35,14 +35,14 @@ function BrokerComm::print%(topic: string, msg: string,
%}
## Register interest in all peer print messages that use a certain topic prefix.
-## Use :bro:see:`BrokerComm::print_handler` to handle received messages.
+## Use :bro:see:`Broker::print_handler` to handle received messages.
##
## topic_prefix: a prefix to match against remote message topics.
## e.g. an empty prefix matches everything and "a" matches
## "alice" and "amy" but not "bob".
##
## Returns: true if it's a new print subscription and it is now registered.
-function BrokerComm::subscribe_to_prints%(topic_prefix: string%): bool
+function Broker::subscribe_to_prints%(topic_prefix: string%): bool
%{
auto rval = broker_mgr->SubscribeToPrints(topic_prefix->CheckString());
return new Val(rval, TYPE_BOOL);
@@ -51,23 +51,23 @@ function BrokerComm::subscribe_to_prints%(topic_prefix: string%): bool
## Unregister interest in all peer print messages that use a topic prefix.
##
## topic_prefix: a prefix previously supplied to a successful call to
-## :bro:see:`BrokerComm::subscribe_to_prints`.
+## :bro:see:`Broker::subscribe_to_prints`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
-function BrokerComm::unsubscribe_to_prints%(topic_prefix: string%): bool
+function Broker::unsubscribe_to_prints%(topic_prefix: string%): bool
%{
auto rval = broker_mgr->UnsubscribeToPrints(topic_prefix->CheckString());
return new Val(rval, TYPE_BOOL);
%}
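A sketch of the print messaging pair above; the topic strings are illustrative, and only peers subscribed to a matching prefix receive the message:

    # Receiver
    event bro_init()
        {
        Broker::subscribe_to_prints("bro/print/");
        }

    event Broker::print_handler(msg: string)
        {
        print "remote print", msg;
        }

    # Sender
    event bro_init()
        {
        Broker::print("bro/print/hello", "hi there");
        }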
## Create a data structure that may be used to send a remote event via
-## :bro:see:`BrokerComm::event`.
+## :bro:see:`Broker::event`.
##
## args: an event, followed by a list of argument values that may be used
## to call it.
##
## Returns: opaque communication data that may be used to send a remote event.
-function BrokerComm::event_args%(...%): BrokerComm::EventArgs
+function Broker::event_args%(...%): Broker::EventArgs
%{
auto rval = broker_mgr->MakeEventArgs(@ARGS@);
return rval;
@@ -77,12 +77,12 @@ function BrokerComm::event_args%(...%): BrokerComm::EventArgs
##
## topic: a topic associated with the event message.
##
-## args: event arguments as made by :bro:see:`BrokerComm::event_args`.
+## args: event arguments as made by :bro:see:`Broker::event_args`.
##
## flags: tune the behavior of how the message is sent.
##
## Returns: true if the message is sent.
-function BrokerComm::event%(topic: string, args: BrokerComm::EventArgs,
+function Broker::event%(topic: string, args: Broker::EventArgs,
flags: SendFlags &default = SendFlags()%): bool
%{
auto rval = broker_mgr->Event(topic->CheckString(), args->AsRecordVal(),
@@ -102,7 +102,7 @@ function BrokerComm::event%(topic: string, args: BrokerComm::EventArgs,
## flags: tune the behavior of how the message is sent.
##
## Returns: true if automatic event sending is now enabled.
-function BrokerComm::auto_event%(topic: string, ev: any,
+function Broker::auto_event%(topic: string, ev: any,
flags: SendFlags &default = SendFlags()%): bool
%{
auto rval = broker_mgr->AutoEvent(topic->CheckString(), ev, flags);
@@ -111,12 +111,12 @@ function BrokerComm::auto_event%(topic: string, ev: any,
## Stop automatically sending an event to peers upon local dispatch.
##
-## topic: a topic originally given to :bro:see:`BrokerComm::auto_event`.
+## topic: a topic originally given to :bro:see:`Broker::auto_event`.
##
-## ev: an event originally given to :bro:see:`BrokerComm::auto_event`.
+## ev: an event originally given to :bro:see:`Broker::auto_event`.
##
## Returns: true if automatic events will not occur for the topic/event pair.
-function BrokerComm::auto_event_stop%(topic: string, ev: any%): bool
+function Broker::auto_event_stop%(topic: string, ev: any%): bool
%{
auto rval = broker_mgr->AutoEventStop(topic->CheckString(), ev);
return new Val(rval, TYPE_BOOL);
@@ -129,7 +129,7 @@ function BrokerComm::auto_event_stop%(topic: string, ev: any%): bool
## "alice" and "amy" but not "bob".
##
## Returns: true if it's a new event subscription and it is now registered.
-function BrokerComm::subscribe_to_events%(topic_prefix: string%): bool
+function Broker::subscribe_to_events%(topic_prefix: string%): bool
%{
auto rval = broker_mgr->SubscribeToEvents(topic_prefix->CheckString());
return new Val(rval, TYPE_BOOL);
@@ -138,10 +138,10 @@ function BrokerComm::subscribe_to_events%(topic_prefix: string%): bool
## Unregister interest in all peer event messages that use a topic prefix.
##
## topic_prefix: a prefix previously supplied to a successful call to
-## :bro:see:`BrokerComm::subscribe_to_events`.
+## :bro:see:`Broker::subscribe_to_events`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
-function BrokerComm::unsubscribe_to_events%(topic_prefix: string%): bool
+function Broker::unsubscribe_to_events%(topic_prefix: string%): bool
%{
auto rval = broker_mgr->UnsubscribeToEvents(topic_prefix->CheckString());
return new Val(rval, TYPE_BOOL);
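A sketch of sending an event with the functions above; the event, its arguments and the topic names are illustrative, and both peers are assumed to declare the same event:

    global my_event: event(s: string, c: count);

    # Receiver
    event bro_init()
        {
        Broker::subscribe_to_events("bro/event/");
        }

    event my_event(s: string, c: count)
        {
        print "got my_event", s, c;
        }

    # Sender: explicit send, or re-publish every local dispatch.
    event bro_init()
        {
        Broker::event("bro/event/my_event", Broker::event_args(my_event, "hi", 42));
        Broker::auto_event("bro/event/my_event", my_event);
        }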
@@ -155,7 +155,7 @@ function BrokerComm::unsubscribe_to_events%(topic_prefix: string%): bool
##
## Returns: true if remote logs are enabled for the stream.
function
-BrokerComm::enable_remote_logs%(id: Log::ID,
+Broker::enable_remote_logs%(id: Log::ID,
flags: SendFlags &default = SendFlags()%): bool
%{
auto rval = log_mgr->EnableRemoteLogs(id->AsEnumVal(),
@@ -168,7 +168,7 @@ BrokerComm::enable_remote_logs%(id: Log::ID,
## id: the log stream to disable remote logs for.
##
## Returns: true if remote logs are disabled for the stream.
-function BrokerComm::disable_remote_logs%(id: Log::ID%): bool
+function Broker::disable_remote_logs%(id: Log::ID%): bool
%{
auto rval = log_mgr->DisableRemoteLogs(id->AsEnumVal());
return new Val(rval, TYPE_BOOL);
@@ -179,7 +179,7 @@ function BrokerComm::disable_remote_logs%(id: Log::ID%): bool
## id: the log stream to check.
##
## Returns: true if remote logs are enabled for the given stream.
-function BrokerComm::remote_logs_enabled%(id: Log::ID%): bool
+function Broker::remote_logs_enabled%(id: Log::ID%): bool
%{
auto rval = log_mgr->RemoteLogsAreEnabled(id->AsEnumVal());
return new Val(rval, TYPE_BOOL);
@@ -194,7 +194,7 @@ function BrokerComm::remote_logs_enabled%(id: Log::ID%): bool
## "alice" and "amy" but not "bob".
##
## Returns: true if it's a new log subscription and it is now registered.
-function BrokerComm::subscribe_to_logs%(topic_prefix: string%): bool
+function Broker::subscribe_to_logs%(topic_prefix: string%): bool
%{
auto rval = broker_mgr->SubscribeToLogs(topic_prefix->CheckString());
return new Val(rval, TYPE_BOOL);
@@ -205,10 +205,10 @@ function BrokerComm::subscribe_to_logs%(topic_prefix: string%): bool
## receiving side processes them through the logging framework as usual.
##
## topic_prefix: a prefix previously supplied to a successful call to
-## :bro:see:`BrokerComm::subscribe_to_logs`.
+## :bro:see:`Broker::subscribe_to_logs`.
##
## Returns: true if interest in the topic prefix is no longer advertised.
-function BrokerComm::unsubscribe_to_logs%(topic_prefix: string%): bool
+function Broker::unsubscribe_to_logs%(topic_prefix: string%): bool
%{
auto rval = broker_mgr->UnsubscribeToLogs(topic_prefix->CheckString());
return new Val(rval, TYPE_BOOL);
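A sketch of remote logging with the functions above; Conn::LOG stands in for any log stream, and the "bro/log/" topic prefix is assumed to be the one used for log messages:

    # Sender
    event bro_init()
        {
        Broker::enable_remote_logs(Conn::LOG);
        }

    # Receiver: entries arrive and are written through the logging framework.
    event bro_init()
        {
        Broker::subscribe_to_logs("bro/log/");
        }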
diff --git a/src/broker/store.bif b/src/broker/store.bif
index 853bd1f2d7..57bddd3da7 100644
--- a/src/broker/store.bif
+++ b/src/broker/store.bif
@@ -8,13 +8,13 @@
#include "Trigger.h"
%%}
-module BrokerStore;
+module Broker;
-type BrokerStore::ExpiryTime: record;
+type Broker::ExpiryTime: record;
-type BrokerStore::QueryResult: record;
+type Broker::QueryResult: record;
-type BrokerStore::BackendOptions: record;
+type Broker::BackendOptions: record;
## Enumerates the possible storage backends.
enum BackendType %{
@@ -32,8 +32,8 @@ enum BackendType %{
## options: tunes how some storage backends operate.
##
## Returns: a handle to the data store.
-function BrokerStore::create_master%(id: string, b: BackendType &default = MEMORY,
- options: BackendOptions &default = BackendOptions()%): opaque of BrokerStore::Handle
+function Broker::create_master%(id: string, b: BackendType &default = MEMORY,
+ options: BackendOptions &default = BackendOptions()%): opaque of Broker::Handle
%{
auto id_str = id->CheckString();
auto type = bro_broker::StoreType::MASTER;
@@ -46,7 +46,7 @@ function BrokerStore::create_master%(id: string, b: BackendType &default = MEMOR
}
rval = new bro_broker::StoreHandleVal(id_str, type,
- static_cast<BifEnum::BrokerStore::BackendType>(b->AsEnum()),
+ static_cast<BifEnum::Broker::BackendType>(b->AsEnum()),
options->AsRecordVal());
auto added = broker_mgr->AddStore(rval);
assert(added);
@@ -75,9 +75,9 @@ function BrokerStore::create_master%(id: string, b: BackendType &default = MEMOR
## but updates will be lost until the master is once again available.
##
## Returns: a handle to the data store.
-function BrokerStore::create_clone%(id: string, b: BackendType &default = MEMORY,
+function Broker::create_clone%(id: string, b: BackendType &default = MEMORY,
options: BackendOptions &default = BackendOptions(),
- resync: interval &default = 1sec%): opaque of BrokerStore::Handle
+ resync: interval &default = 1sec%): opaque of Broker::Handle
%{
auto id_str = id->CheckString();
auto type = bro_broker::StoreType::CLONE;
@@ -90,7 +90,7 @@ function BrokerStore::create_clone%(id: string, b: BackendType &default = MEMORY
}
rval = new bro_broker::StoreHandleVal(id_str, type,
- static_cast<BifEnum::BrokerStore::BackendType>(b->AsEnum()),
+ static_cast<BifEnum::Broker::BackendType>(b->AsEnum()),
options->AsRecordVal(),
std::chrono::duration<double>(resync));
auto added = broker_mgr->AddStore(rval);
@@ -104,7 +104,7 @@ function BrokerStore::create_clone%(id: string, b: BackendType &default = MEMORY
## id: the unique name which identifies the master data store.
##
## Returns: a handle to the data store.
-function BrokerStore::create_frontend%(id: string%): opaque of BrokerStore::Handle
+function Broker::create_frontend%(id: string%): opaque of Broker::Handle
%{
auto id_str = id->CheckString();
auto type = bro_broker::StoreType::FRONTEND;
@@ -128,7 +128,7 @@ function BrokerStore::create_frontend%(id: string%): opaque of BrokerStore::Hand
##
## Returns: true if store was valid and is now closed. The handle can no
## longer be used for data store operations.
-function BrokerStore::close_by_handle%(h: opaque of BrokerStore::Handle%): bool
+function Broker::close_by_handle%(h: opaque of Broker::Handle%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -154,9 +154,9 @@ function BrokerStore::close_by_handle%(h: opaque of BrokerStore::Handle%): bool
## e: the expiration time of the key-value pair.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::insert%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data, v: BrokerComm::Data,
- e: BrokerStore::ExpiryTime &default = BrokerStore::ExpiryTime()%): bool
+function Broker::insert%(h: opaque of Broker::Handle,
+ k: Broker::Data, v: Broker::Data,
+ e: Broker::ExpiryTime &default = Broker::ExpiryTime()%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -198,7 +198,7 @@ function BrokerStore::insert%(h: opaque of BrokerStore::Handle,
## k: the key to remove.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::erase%(h: opaque of BrokerStore::Handle, k: BrokerComm::Data%): bool
+function Broker::erase%(h: opaque of Broker::Handle, k: Broker::Data%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -215,7 +215,7 @@ function BrokerStore::erase%(h: opaque of BrokerStore::Handle, k: BrokerComm::Da
## h: the handle of the store to modify.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::clear%(h: opaque of BrokerStore::Handle%): bool
+function Broker::clear%(h: opaque of Broker::Handle%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -236,8 +236,8 @@ function BrokerStore::clear%(h: opaque of BrokerStore::Handle%): bool
## create it with an implicit value of zero before incrementing.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::increment%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data, by: int &default = +1%): bool
+function Broker::increment%(h: opaque of Broker::Handle,
+ k: Broker::Data, by: int &default = +1%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -259,8 +259,8 @@ function BrokerStore::increment%(h: opaque of BrokerStore::Handle,
## create it with an implicit value of zero before decrementing.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::decrement%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data, by: int &default = +1%): bool
+function Broker::decrement%(h: opaque of Broker::Handle,
+ k: Broker::Data, by: int &default = +1%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -282,8 +282,8 @@ function BrokerStore::decrement%(h: opaque of BrokerStore::Handle,
## create it with an implicit empty set value before modifying.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::add_to_set%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data, element: BrokerComm::Data%): bool
+function Broker::add_to_set%(h: opaque of Broker::Handle,
+ k: Broker::Data, element: Broker::Data%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -306,8 +306,8 @@ function BrokerStore::add_to_set%(h: opaque of BrokerStore::Handle,
## implicitly create an empty set value associated with the key.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::remove_from_set%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data, element: BrokerComm::Data%): bool
+function Broker::remove_from_set%(h: opaque of Broker::Handle,
+ k: Broker::Data, element: Broker::Data%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -330,8 +330,8 @@ function BrokerStore::remove_from_set%(h: opaque of BrokerStore::Handle,
## create an empty vector value before modifying.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::push_left%(h: opaque of BrokerStore::Handle, k: BrokerComm::Data,
- items: BrokerComm::DataVector%): bool
+function Broker::push_left%(h: opaque of Broker::Handle, k: Broker::Data,
+ items: Broker::DataVector%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -363,8 +363,8 @@ function BrokerStore::push_left%(h: opaque of BrokerStore::Handle, k: BrokerComm
## create an empty vector value before modifying.
##
## Returns: false if the store handle was not valid.
-function BrokerStore::push_right%(h: opaque of BrokerStore::Handle, k: BrokerComm::Data,
- items: BrokerComm::DataVector%): bool
+function Broker::push_right%(h: opaque of Broker::Handle, k: Broker::Data,
+ items: Broker::DataVector%): bool
%{
auto handle = static_cast<bro_broker::StoreHandleVal*>(h);
@@ -401,7 +401,7 @@ static bool prepare_for_query(Val* opaque, Frame* frame,
if ( ! (*handle)->store )
{
reporter->PushLocation(frame->GetCall()->GetLocationInfo());
- reporter->Error("BrokerStore query has an invalid data store");
+ reporter->Error("Broker query has an invalid data store");
reporter->PopLocation();
return false;
}
@@ -411,7 +411,7 @@ static bool prepare_for_query(Val* opaque, Frame* frame,
if ( ! trigger )
{
reporter->PushLocation(frame->GetCall()->GetLocationInfo());
- reporter->Error("BrokerStore queries can only be called inside when-condition");
+ reporter->Error("Broker queries can only be called inside when-condition");
reporter->PopLocation();
return false;
}
@@ -421,7 +421,7 @@ static bool prepare_for_query(Val* opaque, Frame* frame,
if ( *timeout < 0 )
{
reporter->PushLocation(frame->GetCall()->GetLocationInfo());
- reporter->Error("BrokerStore queries must specify a timeout block");
+ reporter->Error("Broker queries must specify a timeout block");
reporter->PopLocation();
return false;
}
@@ -444,8 +444,8 @@ static bool prepare_for_query(Val* opaque, Frame* frame,
## k: the key associated with the vector to modify.
##
## Returns: the result of the query.
-function BrokerStore::pop_left%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data%): BrokerStore::QueryResult
+function Broker::pop_left%(h: opaque of Broker::Handle,
+ k: Broker::Data%): Broker::QueryResult
%{
if ( ! broker_mgr->Enabled() )
return bro_broker::query_result();
@@ -474,8 +474,8 @@ function BrokerStore::pop_left%(h: opaque of BrokerStore::Handle,
## k: the key associated with the vector to modify.
##
## Returns: the result of the query.
-function BrokerStore::pop_right%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data%): BrokerStore::QueryResult
+function Broker::pop_right%(h: opaque of Broker::Handle,
+ k: Broker::Data%): Broker::QueryResult
%{
if ( ! broker_mgr->Enabled() )
return bro_broker::query_result();
@@ -504,8 +504,8 @@ function BrokerStore::pop_right%(h: opaque of BrokerStore::Handle,
## k: the key to lookup.
##
## Returns: the result of the query.
-function BrokerStore::lookup%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data%): BrokerStore::QueryResult
+function Broker::lookup%(h: opaque of Broker::Handle,
+ k: Broker::Data%): Broker::QueryResult
%{
if ( ! broker_mgr->Enabled() )
return bro_broker::query_result();
@@ -533,9 +533,9 @@ function BrokerStore::lookup%(h: opaque of BrokerStore::Handle,
##
## k: the key to check for existence.
##
-## Returns: the result of the query (uses :bro:see:`BrokerComm::BOOL`).
-function BrokerStore::exists%(h: opaque of BrokerStore::Handle,
- k: BrokerComm::Data%): BrokerStore::QueryResult
+## Returns: the result of the query (uses :bro:see:`Broker::BOOL`).
+function Broker::exists%(h: opaque of Broker::Handle,
+ k: Broker::Data%): Broker::QueryResult
%{
if ( ! broker_mgr->Enabled() )
return bro_broker::query_result();
@@ -561,8 +561,8 @@ function BrokerStore::exists%(h: opaque of BrokerStore::Handle,
##
## h: the handle of the store to query.
##
-## Returns: the result of the query (uses :bro:see:`BrokerComm::VECTOR`).
-function BrokerStore::keys%(h: opaque of BrokerStore::Handle%): BrokerStore::QueryResult
+## Returns: the result of the query (uses :bro:see:`Broker::VECTOR`).
+function Broker::keys%(h: opaque of Broker::Handle%): Broker::QueryResult
%{
double timeout;
bro_broker::StoreQueryCallback* cb;
@@ -579,8 +579,8 @@ function BrokerStore::keys%(h: opaque of BrokerStore::Handle%): BrokerStore::Que
##
## h: the handle of the store to query.
##
-## Returns: the result of the query (uses :bro:see:`BrokerComm::COUNT`).
-function BrokerStore::size%(h: opaque of BrokerStore::Handle%): BrokerStore::QueryResult
+## Returns: the result of the query (uses :bro:see:`Broker::COUNT`).
+function Broker::size%(h: opaque of Broker::Handle%): Broker::QueryResult
%{
if ( ! broker_mgr->Enabled() )
return bro_broker::query_result();
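A sketch of the data store API above; the store name, key and value are arbitrary, and the result field name follows the QueryResult record declared in the accompanying Broker scripts:

    global h: opaque of Broker::Handle;

    event bro_init()
        {
        Broker::enable();
        h = Broker::create_master("mystore");
        Broker::insert(h, Broker::data("key1"), Broker::data(42));

        when ( local res = Broker::lookup(h, Broker::data("key1")) )
            {
            print Broker::refine_to_count(res$result);
            }
        timeout 10sec
            {
            print "lookup timed out";
            }
        }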
diff --git a/src/bsd-getopt-long.c b/src/bsd-getopt-long.c
index 7ecb064fc8..65a3d94093 100644
--- a/src/bsd-getopt-long.c
+++ b/src/bsd-getopt-long.c
@@ -54,7 +54,7 @@
#define IN_GETOPT_LONG_C 1
-#include <config.h>
+#include <bro-config.h>
#include
#include
#include
diff --git a/src/const.bif b/src/const.bif
index 0ba168ca85..2d062d854a 100644
--- a/src/const.bif
+++ b/src/const.bif
@@ -19,7 +19,6 @@ const Tunnel::enable_ayiya: bool;
const Tunnel::enable_teredo: bool;
const Tunnel::enable_gtpv1: bool;
const Tunnel::enable_gre: bool;
-const Tunnel::yielding_teredo_decapsulation: bool;
const Tunnel::delay_teredo_confirmation: bool;
const Tunnel::delay_gtp_confirmation: bool;
const Tunnel::ip_tunnel_timeout: interval;
diff --git a/src/event.bif b/src/event.bif
index 456de20b3a..ff6ec059fb 100644
--- a/src/event.bif
+++ b/src/event.bif
@@ -305,8 +305,14 @@ event packet_contents%(c: connection, contents: string%);
##
## t2: The new payload.
##
+## tcp_flags: A string with the TCP flags of the packet triggering the
+## inconsistency. In the string, each character corresponds to one set flag,
+## as follows: ``S`` -> SYN; ``F`` -> FIN; ``R`` -> RST; ``A`` -> ACK; ``P`` ->
+## PUSH. This string is not always set; it is provided on a best-effort basis,
+## only when the information is available.
+##
## .. bro:see:: tcp_rexmit tcp_contents
-event rexmit_inconsistency%(c: connection, t1: string, t2: string%);
+event rexmit_inconsistency%(c: connection, t1: string, t2: string, tcp_flags: string%);
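A minimal handler for the extended event signature; the output format is arbitrary:

    event rexmit_inconsistency(c: connection, t1: string, t2: string, tcp_flags: string)
        {
        print fmt("inconsistent retransmission on %s (flags: %s)", c$uid, tcp_flags);
        }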
## Generated when a TCP endpoint acknowledges payload that Bro never saw.
##
diff --git a/src/file_analysis/Component.h b/src/file_analysis/Component.h
index 1900369f10..6e282da205 100644
--- a/src/file_analysis/Component.h
+++ b/src/file_analysis/Component.h
@@ -9,7 +9,7 @@
#include "Val.h"
-#include "../config.h"
+#include "../bro-config.h"
#include "../util.h"
namespace file_analysis {
diff --git a/src/file_analysis/FileReassembler.h b/src/file_analysis/FileReassembler.h
index 396aa062e1..aa07a84d42 100644
--- a/src/file_analysis/FileReassembler.h
+++ b/src/file_analysis/FileReassembler.h
@@ -52,9 +52,9 @@ protected:
DECLARE_SERIAL(FileReassembler);
- void Undelivered(uint64 up_to_seq);
- void BlockInserted(DataBlock* b);
- void Overlap(const u_char* b1, const u_char* b2, uint64 n);
+ void Undelivered(uint64 up_to_seq) override;
+ void BlockInserted(DataBlock* b) override;
+ void Overlap(const u_char* b1, const u_char* b2, uint64 n) override;
File* the_file;
bool flushing;
diff --git a/src/file_analysis/Tag.h b/src/file_analysis/Tag.h
index aa38836403..c28183a07f 100644
--- a/src/file_analysis/Tag.h
+++ b/src/file_analysis/Tag.h
@@ -3,7 +3,7 @@
#ifndef FILE_ANALYZER_TAG_H
#define FILE_ANALYZER_TAG_H
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "../Tag.h"
#include "plugin/TaggedComponent.h"
diff --git a/src/file_analysis/analyzer/CMakeLists.txt b/src/file_analysis/analyzer/CMakeLists.txt
index 225504c56a..ef17247997 100644
--- a/src/file_analysis/analyzer/CMakeLists.txt
+++ b/src/file_analysis/analyzer/CMakeLists.txt
@@ -1,4 +1,5 @@
add_subdirectory(data_event)
+add_subdirectory(entropy)
add_subdirectory(extract)
add_subdirectory(hash)
add_subdirectory(pe)
diff --git a/src/file_analysis/analyzer/entropy/CMakeLists.txt b/src/file_analysis/analyzer/entropy/CMakeLists.txt
new file mode 100644
index 0000000000..38db5e726a
--- /dev/null
+++ b/src/file_analysis/analyzer/entropy/CMakeLists.txt
@@ -0,0 +1,9 @@
+include(BroPlugin)
+
+include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR}
+ ${CMAKE_CURRENT_BINARY_DIR})
+
+bro_plugin_begin(Bro FileEntropy)
+bro_plugin_cc(Entropy.cc Plugin.cc ../../Analyzer.cc)
+bro_plugin_bif(events.bif)
+bro_plugin_end()
diff --git a/src/file_analysis/analyzer/entropy/Entropy.cc b/src/file_analysis/analyzer/entropy/Entropy.cc
new file mode 100644
index 0000000000..2a1bc72723
--- /dev/null
+++ b/src/file_analysis/analyzer/entropy/Entropy.cc
@@ -0,0 +1,71 @@
+// See the file "COPYING" in the main distribution directory for copyright.
+
+#include <string>
+
+#include "Entropy.h"
+#include "util.h"
+#include "Event.h"
+#include "file_analysis/Manager.h"
+
+using namespace file_analysis;
+
+Entropy::Entropy(RecordVal* args, File* file)
+ : file_analysis::Analyzer(file_mgr->GetComponentTag("ENTROPY"), args, file)
+ {
+ //entropy->Init();
+ entropy = new EntropyVal;
+ fed = false; // no data delivered yet
+ }
+
+Entropy::~Entropy()
+ {
+ Unref(entropy);
+ }
+
+file_analysis::Analyzer* Entropy::Instantiate(RecordVal* args, File* file)
+ {
+ return new Entropy(args, file);
+ }
+
+bool Entropy::DeliverStream(const u_char* data, uint64 len)
+ {
+ if ( ! fed )
+ fed = len > 0;
+
+ entropy->Feed(data, len);
+ return true;
+ }
+
+bool Entropy::EndOfFile()
+ {
+ Finalize();
+ return false;
+ }
+
+bool Entropy::Undelivered(uint64 offset, uint64 len)
+ {
+ return false;
+ }
+
+void Entropy::Finalize()
+ {
+ //if ( ! entropy->IsValid() || ! fed )
+ if ( ! fed )
+ return;
+
+ val_list* vl = new val_list();
+ vl->append(GetFile()->GetVal()->Ref());
+
+ double montepi, scc, ent, mean, chisq;
+ montepi = scc = ent = mean = chisq = 0.0;
+ entropy->Get(&ent, &chisq, &mean, &montepi, &scc);
+
+ RecordVal* ent_result = new RecordVal(entropy_test_result);
+ ent_result->Assign(0, new Val(ent, TYPE_DOUBLE));
+ ent_result->Assign(1, new Val(chisq, TYPE_DOUBLE));
+ ent_result->Assign(2, new Val(mean, TYPE_DOUBLE));
+ ent_result->Assign(3, new Val(montepi, TYPE_DOUBLE));
+ ent_result->Assign(4, new Val(scc, TYPE_DOUBLE));
+
+ vl->append(ent_result);
+ mgr.QueueEvent(file_entropy, vl);
+ }
diff --git a/src/file_analysis/analyzer/entropy/Entropy.h b/src/file_analysis/analyzer/entropy/Entropy.h
new file mode 100644
index 0000000000..6a5075263c
--- /dev/null
+++ b/src/file_analysis/analyzer/entropy/Entropy.h
@@ -0,0 +1,84 @@
+// See the file "COPYING" in the main distribution directory for copyright.
+
+#ifndef FILE_ANALYSIS_ENTROPY_H
+#define FILE_ANALYSIS_ENTROPY_H
+
+#include <string>
+
+#include "Val.h"
+#include "OpaqueVal.h"
+#include "File.h"
+#include "Analyzer.h"
+
+#include "events.bif.h"
+
+namespace file_analysis {
+
+/**
+ * An analyzer that performs entropy testing on file contents.
+ */
+class Entropy : public file_analysis::Analyzer {
+public:
+
+ /**
+ * Destructor.
+ */
+ virtual ~Entropy();
+
+ /**
+ * Create a new instance of an Entropy analyzer.
+ * @param args the \c AnalyzerArgs value which represents the analyzer.
+ * @param file the file to which the analyzer will be attached.
+ * @return the new Entropy analyzer instance.
+ */
+ static file_analysis::Analyzer* Instantiate(RecordVal* args, File* file);
+
+ /**
+ * Feed the next chunk of file contents into the entropy calculation.
+ * @param data pointer to start of a chunk of file data.
+ * @param len number of bytes in the data chunk.
+ * @return always true.
+ */
+ virtual bool DeliverStream(const u_char* data, uint64 len);
+
+ /**
+ * Finalizes the entropy calculation and raises the "file_entropy" event.
+ * @return always false so the analyzer will be detached from the file.
+ */
+ virtual bool EndOfFile();
+
+ /**
+ * Missing data can't be handled, so just indicate that this analyzer should
+ * be removed from receiving further data. The entropy calculation will not be finalized.
+ * @param offset byte offset in file at which missing chunk starts.
+ * @param len number of missing bytes.
+ * @return always false so analyzer will detach from file.
+ */
+ virtual bool Undelivered(uint64 offset, uint64 len);
+
+protected:
+
+ /**
+ * Constructor.
+ * @param args the \c AnalyzerArgs value which represents the analyzer.
+ * @param file the file to which the analyzer will be attached.
+ */
+ Entropy(RecordVal* args, File* file);
+
+ /**
+ * If some file contents have been seen, finalizes the entropy calculation
+ * and raises the "file_entropy" event with the results.
+ */
+ void Finalize();
+
+private:
+ EntropyVal* entropy;
+ bool fed;
+};
+
+} // namespace file_analysis
+
+#endif
diff --git a/src/file_analysis/analyzer/entropy/Plugin.cc b/src/file_analysis/analyzer/entropy/Plugin.cc
new file mode 100644
index 0000000000..f1dd954cba
--- /dev/null
+++ b/src/file_analysis/analyzer/entropy/Plugin.cc
@@ -0,0 +1,24 @@
+// See the file in the main distribution directory for copyright.
+
+#include "plugin/Plugin.h"
+
+#include "Entropy.h"
+
+namespace plugin {
+namespace Bro_FileEntropy {
+
+class Plugin : public plugin::Plugin {
+public:
+ plugin::Configuration Configure()
+ {
+ AddComponent(new ::file_analysis::Component("ENTROPY", ::file_analysis::Entropy::Instantiate));
+
+ plugin::Configuration config;
+ config.name = "Bro::FileEntropy";
+ config.description = "Entropy test file content";
+ return config;
+ }
+} plugin;
+
+}
+}
diff --git a/src/file_analysis/analyzer/entropy/events.bif b/src/file_analysis/analyzer/entropy/events.bif
new file mode 100644
index 0000000000..a51bb3d39b
--- /dev/null
+++ b/src/file_analysis/analyzer/entropy/events.bif
@@ -0,0 +1,8 @@
+## This event is generated each time file analysis performs
+## entropy testing on a file.
+##
+## f: The file.
+##
+## ent: The results of the entropy testing.
+##
+event file_entropy%(f: fa_file, ent: entropy_test_result%);
\ No newline at end of file
diff --git a/src/file_analysis/analyzer/x509/X509.cc b/src/file_analysis/analyzer/x509/X509.cc
index 8c70597dca..e8ea5cb7b4 100644
--- a/src/file_analysis/analyzer/x509/X509.cc
+++ b/src/file_analysis/analyzer/x509/X509.cc
@@ -52,7 +52,8 @@ bool file_analysis::X509::EndOfFile()
X509Val* cert_val = new X509Val(ssl_cert); // cert_val takes ownership of ssl_cert
- RecordVal* cert_record = ParseCertificate(cert_val); // parse basic information into record
+ // parse basic information into record.
+ RecordVal* cert_record = ParseCertificate(cert_val, GetFile()->GetID().c_str());
// and send the record on to scriptland
val_list* vl = new val_list();
@@ -84,7 +85,7 @@ bool file_analysis::X509::EndOfFile()
return false;
}
-RecordVal* file_analysis::X509::ParseCertificate(X509Val* cert_val)
+RecordVal* file_analysis::X509::ParseCertificate(X509Val* cert_val, const char* fid)
{
::X509* ssl_cert = cert_val->GetCertificate();
@@ -131,8 +132,8 @@ RecordVal* file_analysis::X509::ParseCertificate(X509Val* cert_val)
pX509Cert->Assign(3, new StringVal(len, buf));
BIO_free(bio);
- pX509Cert->Assign(5, new Val(GetTimeFromAsn1(X509_get_notBefore(ssl_cert)), TYPE_TIME));
- pX509Cert->Assign(6, new Val(GetTimeFromAsn1(X509_get_notAfter(ssl_cert)), TYPE_TIME));
+ pX509Cert->Assign(5, new Val(GetTimeFromAsn1(X509_get_notBefore(ssl_cert), fid), TYPE_TIME));
+ pX509Cert->Assign(6, new Val(GetTimeFromAsn1(X509_get_notAfter(ssl_cert), fid), TYPE_TIME));
// we only read 255 bytes because byte 256 is always 0.
// if the string is longer than 255, that will be our null-termination,
@@ -515,54 +516,103 @@ unsigned int file_analysis::X509::KeyLength(EVP_PKEY *key)
reporter->InternalError("cannot be reached");
}
-double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
+double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* arg_fid)
{
+ const char *fid = arg_fid ? arg_fid : "";
time_t lResult = 0;
- char lBuffer[24];
+ char lBuffer[26];
char* pBuffer = lBuffer;
- size_t lTimeLength = atime->length;
- char * pString = (char *) atime->data;
+ const char *pString = (const char *) atime->data;
+ unsigned int remaining = atime->length;
if ( atime->type == V_ASN1_UTCTIME )
{
- if ( lTimeLength < 11 || lTimeLength > 17 )
+ if ( remaining < 11 || remaining > 17 )
+ {
+ reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- UTCTime has wrong length", fid));
return 0;
+ }
+
+ if ( pString[remaining-1] != 'Z' )
+ {
+ // not valid according to RFC 2459 4.1.2.5.1
+ reporter->Weird(fmt("Could not parse UTC time in non-YY-format in X509 certificate (x509 %s)", fid));
+ return 0;
+ }
+
+ // year is first two digits in YY format. Buffer expects YYYY format.
+ if ( pString[0] - '0' < 50 ) // RFC 2459 4.1.2.5.1
+ {
+ *(pBuffer++) = '2';
+ *(pBuffer++) = '0';
+ }
+ else
+ {
+ *(pBuffer++) = '1';
+ *(pBuffer++) = '9';
+ }
memcpy(pBuffer, pString, 10);
pBuffer += 10;
pString += 10;
+ remaining -= 10;
}
-
- else
+ else if ( atime->type == V_ASN1_GENERALIZEDTIME )
{
- if ( lTimeLength < 13 )
+ // generalized time. We apparently ignore the YYYYMMDDHH case
+ // for now and assume we always have minutes and seconds.
+ // This should be ok because it is specified as a requirement in RFC 2459 4.1.2.5.2
+
+ if ( remaining < 12 || remaining > 23 )
+ {
+ reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- Generalized time has wrong length", fid));
return 0;
+ }
memcpy(pBuffer, pString, 12);
pBuffer += 12;
pString += 12;
+ remaining -= 12;
+ }
+ else
+ {
+ reporter->Weird(fmt("Invalid time type in X509 certificate (fuid %s)", fid));
+ return 0;
}
- if ((*pString == 'Z') || (*pString == '-') || (*pString == '+'))
+ if ( (remaining == 0) || (*pString == 'Z') || (*pString == '-') || (*pString == '+') )
{
*(pBuffer++) = '0';
*(pBuffer++) = '0';
}
+ else if ( remaining >= 2 )
+ {
+ *(pBuffer++) = *(pString++);
+ *(pBuffer++) = *(pString++);
+
+ remaining -= 2;
+
+ // Skip any fractional seconds...
+ if ( (remaining > 0) && (*pString == '.') )
+ {
+ pString++;
+ remaining--;
+
+ while ( (remaining > 0) && (*pString >= '0') && (*pString <= '9') )
+ {
+ pString++;
+ remaining--;
+ }
+ }
+ }
+
else
{
- *(pBuffer++) = *(pString++);
- *(pBuffer++) = *(pString++);
-
- // Skip any fractional seconds...
- if (*pString == '.')
- {
- pString++;
- while ((*pString >= '0') && (*pString <= '9'))
- pString++;
- }
+ reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- additional char after time", fid));
+ return 0;
}
*(pBuffer++) = 'Z';
@@ -570,31 +620,39 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
time_t lSecondsFromUTC;
- if ( *pString == 'Z' )
+ if ( remaining == 0 || *pString == 'Z' )
lSecondsFromUTC = 0;
-
else
{
- if ((*pString != '+') && (pString[5] != '-'))
+ if ( remaining < 5 )
+ {
+ reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- not enough bytes remaining for offset", fid));
return 0;
+ }
- lSecondsFromUTC = ((pString[1]-'0') * 10 + (pString[2]-'0')) * 60;
- lSecondsFromUTC += (pString[3]-'0') * 10 + (pString[4]-'0');
+ if ((*pString != '+') && (*pString != '-'))
+ {
+ reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- unknown offset type", fid));
+ return 0;
+ }
+
+ lSecondsFromUTC = ((pString[1] - '0') * 10 + (pString[2] - '0')) * 60;
+ lSecondsFromUTC += (pString[3] - '0') * 10 + (pString[4] - '0');
if (*pString == '-')
lSecondsFromUTC = -lSecondsFromUTC;
}
tm lTime;
- lTime.tm_sec = ((lBuffer[10] - '0') * 10) + (lBuffer[11] - '0');
- lTime.tm_min = ((lBuffer[8] - '0') * 10) + (lBuffer[9] - '0');
- lTime.tm_hour = ((lBuffer[6] - '0') * 10) + (lBuffer[7] - '0');
- lTime.tm_mday = ((lBuffer[4] - '0') * 10) + (lBuffer[5] - '0');
- lTime.tm_mon = (((lBuffer[2] - '0') * 10) + (lBuffer[3] - '0')) - 1;
- lTime.tm_year = ((lBuffer[0] - '0') * 10) + (lBuffer[1] - '0');
+ lTime.tm_sec = ((lBuffer[12] - '0') * 10) + (lBuffer[13] - '0');
+ lTime.tm_min = ((lBuffer[10] - '0') * 10) + (lBuffer[11] - '0');
+ lTime.tm_hour = ((lBuffer[8] - '0') * 10) + (lBuffer[9] - '0');
+ lTime.tm_mday = ((lBuffer[6] - '0') * 10) + (lBuffer[7] - '0');
+ lTime.tm_mon = (((lBuffer[4] - '0') * 10) + (lBuffer[5] - '0')) - 1;
+ lTime.tm_year = (lBuffer[0] - '0') * 1000 + (lBuffer[1] - '0') * 100 + ((lBuffer[2] - '0') * 10) + (lBuffer[3] - '0');
- if ( lTime.tm_year < 50 )
- lTime.tm_year += 100; // RFC 2459
+ if ( lTime.tm_year > 1900)
+ lTime.tm_year -= 1900;
lTime.tm_wday = 0;
lTime.tm_yday = 0;
@@ -604,7 +662,7 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
if ( lResult )
{
- if ( 0 != lTime.tm_isdst )
+ if ( lTime.tm_isdst != 0 )
lResult -= 3600; // mktime may adjust for DST (OS dependent)
lResult += lSecondsFromUTC;
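
A note on the two-digit-year handling added above: per RFC 2459/5280, a UTCTime year below 50 is read as 20YY and anything else as 19YY, which is why the code now prepends "20" or "19" before copying the remaining digits into the buffer. A minimal standalone sketch of just that rule (hypothetical helper, not part of the patch, no ASN1_TIME handling):

#include <cstdio>

// Sketch of the RFC 2459 4.1.2.5.1 two-digit-year rule applied in
// GetTimeFromAsn1() above; illustration only.
static int utc_two_digit_year(const char* yy)
	{
	int y = (yy[0] - '0') * 10 + (yy[1] - '0');
	return y < 50 ? 2000 + y : 1900 + y;
	}

int main()
	{
	printf("%d %d\n", utc_two_digit_year("16"), utc_two_digit_year("99")); // 2016 1999
	return 0;
	}
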
diff --git a/src/file_analysis/analyzer/x509/X509.h b/src/file_analysis/analyzer/x509/X509.h
index bd4c8fc7a5..c671c68a99 100644
--- a/src/file_analysis/analyzer/x509/X509.h
+++ b/src/file_analysis/analyzer/x509/X509.h
@@ -29,10 +29,13 @@ public:
*
 * @param cert_val The certificate to convert.
*
+ * @param fid A file ID associated with the certificate, if any
+ * (primarily for error reporting).
+ *
 * @return Returns the new record value and passes ownership to
* caller.
*/
- static RecordVal* ParseCertificate(X509Val* cert_val);
+ static RecordVal* ParseCertificate(X509Val* cert_val, const char* fid = 0);
static file_analysis::Analyzer* Instantiate(RecordVal* args, File* file)
{ return new X509(args, file); }
@@ -59,7 +62,7 @@ private:
std::string cert_data;
// Helpers for ParseCertificate.
- static double GetTimeFromAsn1(const ASN1_TIME * atime);
+ static double GetTimeFromAsn1(const ASN1_TIME * atime, const char* fid);
static StringVal* KeyCurve(EVP_PKEY *key);
static unsigned int KeyLength(EVP_PKEY *key);
};
diff --git a/src/input/Tag.h b/src/input/Tag.h
index 8188fbc294..c9e997f8e9 100644
--- a/src/input/Tag.h
+++ b/src/input/Tag.h
@@ -3,7 +3,7 @@
#ifndef INPUT_TAG_H
#define INPUT_TAG_H
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "../Tag.h"
#include "plugin/TaggedComponent.h"
diff --git a/src/input/readers/raw/Raw.cc b/src/input/readers/raw/Raw.cc
index 2aae96abf7..76d8958fea 100644
--- a/src/input/readers/raw/Raw.cc
+++ b/src/input/readers/raw/Raw.cc
@@ -302,8 +302,10 @@ bool Raw::OpenInput()
if ( offset )
{
- int whence = (offset > 0) ? SEEK_SET : SEEK_END;
- if ( fseek(file, offset, whence) < 0 )
+ int whence = (offset >= 0) ? SEEK_SET : SEEK_END;
+ int64_t pos = (offset >= 0) ? offset : offset + 1; // we want -1 to be the end of the file
+
+ if ( fseek(file, pos, whence) < 0 )
{
char buf[256];
strerror_r(errno, buf, sizeof(buf));
@@ -395,8 +397,6 @@ bool Raw::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fie
{
string offset_s = it->second;
offset = strtoll(offset_s.c_str(), 0, 10);
- if ( offset < 0 )
- offset++; // we want -1 to be the end of the file
}
else if ( it != info.config.end() )
{
diff --git a/src/input/readers/sqlite/SQLite.cc b/src/input/readers/sqlite/SQLite.cc
index 3790e5919d..9352d04742 100644
--- a/src/input/readers/sqlite/SQLite.cc
+++ b/src/input/readers/sqlite/SQLite.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/input/readers/sqlite/SQLite.h b/src/input/readers/sqlite/SQLite.h
index 5d82bc55f1..5add678b16 100644
--- a/src/input/readers/sqlite/SQLite.h
+++ b/src/input/readers/sqlite/SQLite.h
@@ -3,7 +3,7 @@
#ifndef INPUT_READERS_SQLITE_H
#define INPUT_READERS_SQLITE_H
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/iosource/BPF_Program.cc b/src/iosource/BPF_Program.cc
index 70469c97e7..451a74bed3 100644
--- a/src/iosource/BPF_Program.cc
+++ b/src/iosource/BPF_Program.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "BPF_Program.h"
diff --git a/src/iosource/CMakeLists.txt b/src/iosource/CMakeLists.txt
index b1de9bddaf..27c42e9a40 100644
--- a/src/iosource/CMakeLists.txt
+++ b/src/iosource/CMakeLists.txt
@@ -17,8 +17,6 @@ set(iosource_SRCS
PktSrc.cc
)
-bif_target(pcap.bif)
-
bro_add_subdir_library(iosource ${iosource_SRCS})
add_dependencies(bro_iosource generate_outputs)
diff --git a/src/iosource/Packet.cc b/src/iosource/Packet.cc
index 2ff910ed52..c75b62a832 100644
--- a/src/iosource/Packet.cc
+++ b/src/iosource/Packet.cc
@@ -43,9 +43,16 @@ void Packet::Init(int arg_link_type, struct timeval *arg_ts, uint32 arg_caplen,
l3_proto = L3_UNKNOWN;
eth_type = 0;
vlan = 0;
+ inner_vlan = 0;
l2_valid = false;
+ if ( data && cap_len < hdr_size )
+ {
+ Weird("truncated_link_header");
+ return;
+ }
+
if ( data )
ProcessLayer2();
}
@@ -76,6 +83,9 @@ int Packet::GetLinkHeaderSize(int link_type)
case DLT_PPP_SERIAL: // PPP_SERIAL
return 4;
+ case DLT_IEEE802_11_RADIO: // 802.11 plus RadioTap
+ return 59;
+
case DLT_RAW:
return 0;
}
@@ -93,6 +103,7 @@ void Packet::ProcessLayer2()
bool have_mpls = false;
const u_char* pdata = data;
+ const u_char* end_of_data = data + cap_len;
switch ( link_type ) {
case DLT_NULL:
@@ -139,6 +150,12 @@ void Packet::ProcessLayer2()
// 802.1q / 802.1ad
case 0x8100:
case 0x9100:
+ if ( pdata + 4 >= end_of_data )
+ {
+ Weird("truncated_link_header");
+ return;
+ }
+
vlan = ((pdata[0] << 8) + pdata[1]) & 0xfff;
protocol = ((pdata[2] << 8) + pdata[3]);
pdata += 4; // Skip the vlan header
@@ -153,6 +170,13 @@ void Packet::ProcessLayer2()
// Check for double-tagged (802.1ad)
if ( protocol == 0x8100 || protocol == 0x9100 )
{
+ if ( pdata + 4 >= end_of_data )
+ {
+ Weird("truncated_link_header");
+ return;
+ }
+
+ inner_vlan = ((pdata[0] << 8) + pdata[1]) & 0xfff;
protocol = ((pdata[2] << 8) + pdata[3]);
pdata += 4; // Skip the vlan header
}
@@ -162,6 +186,12 @@ void Packet::ProcessLayer2()
// PPPoE carried over the ethernet frame.
case 0x8864:
+ if ( pdata + 8 >= end_of_data )
+ {
+ Weird("truncated_link_header");
+ return;
+ }
+
protocol = (pdata[6] << 8) + pdata[7];
pdata += 8; // Skip the PPPoE session and PPP header
@@ -224,10 +254,90 @@ void Packet::ProcessLayer2()
break;
}
+ case DLT_IEEE802_11_RADIO:
+ {
+ if ( pdata + 3 >= end_of_data )
+ {
+ Weird("truncated_radiotap_header");
+ return;
+ }
+ // Skip over the RadioTap header
+ int rtheader_len = (pdata[3] << 8) + pdata[2];
+ if ( pdata + rtheader_len >= end_of_data )
+ {
+ Weird("truncated_radiotap_header");
+ return;
+ }
+ pdata += rtheader_len;
+
+ int type_80211 = pdata[0];
+ int len_80211 = 0;
+ if ( (type_80211 >> 4) & 0x04 )
+ {
+ // Identified a null frame (ignored for now); no weird is raised.
+ return;
+ }
+ // Look for the QoS indicator bit.
+ if ( (type_80211 >> 4) & 0x08 )
+ len_80211 = 26;
+ else
+ len_80211 = 24;
+
+ if ( pdata + len_80211 >= end_of_data )
+ {
+ Weird("truncated_radiotap_header");
+ return;
+ }
+ // skip 802.11 data header
+ pdata += len_80211;
+
+ if ( pdata + 8 >= end_of_data )
+ {
+ Weird("truncated_radiotap_header");
+ return;
+ }
+ // Check that the DSAP and SSAP are both SNAP and that the control
+ // field indicates that this is an unnumbered frame.
+ // The organization code (24bits) needs to also be zero to
+ // indicate that this is encapsulated ethernet.
+ if ( pdata[0] == 0xAA && pdata[1] == 0xAA && pdata[2] == 0x03 &&
+ pdata[3] == 0 && pdata[4] == 0 && pdata[5] == 0 )
+ {
+ pdata += 6;
+ }
+ else
+ {
+ // If this is a logical link control frame without the
+ // possibility of having a protocol we care about, we'll
+ // just skip it for now.
+ return;
+ }
+
+ int protocol = (pdata[0] << 8) + pdata[1];
+ if ( protocol == 0x0800 )
+ l3_proto = L3_IPV4;
+ else if ( protocol == 0x86DD )
+ l3_proto = L3_IPV6;
+ else
+ {
+ Weird("non_ip_packet_in_ieee802_11_radio_encapsulation");
+ return;
+ }
+ pdata += 2;
+
+ break;
+ }
+
default:
{
// Assume we're pointing at IP. Just figure out which version.
pdata += GetLinkHeaderSize(link_type);
+ if ( pdata + sizeof(struct ip) >= end_of_data )
+ {
+ Weird("truncated_link_header");
+ return;
+ }
+
const struct ip* ip = (const struct ip *)pdata;
if ( ip->ip_v == 4 )
@@ -252,18 +362,18 @@ void Packet::ProcessLayer2()
while ( ! end_of_stack )
{
- end_of_stack = *(pdata + 2) & 0x01;
- pdata += 4;
-
- if ( pdata >= pdata + cap_len )
+ if ( pdata + 4 >= end_of_data )
{
- Weird("no_mpls_payload");
+ Weird("truncated_link_header");
return;
}
+
+ end_of_stack = *(pdata + 2) & 0x01;
+ pdata += 4;
}
// We assume that what remains is IP
- if ( pdata + sizeof(struct ip) >= data + cap_len )
+ if ( pdata + sizeof(struct ip) >= end_of_data )
{
Weird("no_ip_in_mpls_payload");
return;
@@ -286,13 +396,14 @@ void Packet::ProcessLayer2()
else if ( encap_hdr_size )
{
// Blanket encapsulation. We assume that what remains is IP.
- pdata += encap_hdr_size;
- if ( pdata + sizeof(struct ip) >= data + cap_len )
+ if ( pdata + encap_hdr_size + sizeof(struct ip) >= end_of_data )
{
Weird("no_ip_left_after_encap");
return;
}
+ pdata += encap_hdr_size;
+
const struct ip* ip = (const struct ip *)pdata;
if ( ip->ip_v == 4 )
@@ -308,9 +419,8 @@ void Packet::ProcessLayer2()
}
- // We've now determined (a) L3_IPV4 vs (b) L3_IPV6 vs
- // (c) L3_ARP vs (d) L3_UNKNOWN.
- l3_proto = l3_proto;
+ // We've now determined (a) L3_IPV4 vs (b) L3_IPV6 vs (c) L3_ARP vs
+ // (d) L3_UNKNOWN.
// Calculate how much header we've used up.
hdr_size = (pdata - data);
@@ -318,15 +428,6 @@ void Packet::ProcessLayer2()
RecordVal* Packet::BuildPktHdrVal() const
{
- static RecordType* l2_hdr_type = 0;
- static RecordType* raw_pkt_hdr_type = 0;
-
- if ( ! raw_pkt_hdr_type )
- {
- raw_pkt_hdr_type = internal_type("raw_pkt_hdr")->AsRecordType();
- l2_hdr_type = internal_type("l2_hdr")->AsRecordType();
- }
-
RecordVal* pkt_hdr = new RecordVal(raw_pkt_hdr_type);
RecordVal* l2_hdr = new RecordVal(l2_hdr_type);
@@ -350,6 +451,7 @@ RecordVal* Packet::BuildPktHdrVal() const
// src: string &optional; ##< L2 source (if ethernet)
// dst: string &optional; ##< L2 destination (if ethernet)
// vlan: count &optional; ##< VLAN tag if any (and ethernet)
+ // inner_vlan: count &optional; ##< Inner VLAN tag if any (and ethernet)
// ethertype: count &optional; ##< If ethernet
// proto: layer3_proto; ##< L3 proto
@@ -364,7 +466,10 @@ RecordVal* Packet::BuildPktHdrVal() const
if ( vlan )
l2_hdr->Assign(5, new Val(vlan, TYPE_COUNT));
- l2_hdr->Assign(6, new Val(eth_type, TYPE_COUNT));
+ if ( inner_vlan )
+ l2_hdr->Assign(6, new Val(inner_vlan, TYPE_COUNT));
+
+ l2_hdr->Assign(7, new Val(eth_type, TYPE_COUNT));
if ( eth_type == ETHERTYPE_ARP || eth_type == ETHERTYPE_REVARP )
// We also identify ARP for L3 over ethernet
@@ -376,7 +481,7 @@ RecordVal* Packet::BuildPktHdrVal() const
l2_hdr->Assign(1, new Val(len, TYPE_COUNT));
l2_hdr->Assign(2, new Val(cap_len, TYPE_COUNT));
- l2_hdr->Assign(7, new EnumVal(l3, BifType::Enum::layer3_proto));
+ l2_hdr->Assign(8, new EnumVal(l3, BifType::Enum::layer3_proto));
pkt_hdr->Assign(0, l2_hdr);
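
The 802.1Q/802.1ad handling above stores the outer tag in vlan and, for double-tagged frames, the second tag in the new inner_vlan field. A minimal standalone sketch of that tag extraction over a hypothetical byte sequence (illustration only; the real code additionally bounds-checks against end_of_data):

#include <cstdio>
#include <cstdint>

// Sketch of the VLAN parsing in Packet::ProcessLayer2() above: each
// 802.1Q/802.1ad tag is 4 bytes -- 2 bytes TCI (low 12 bits carry the VLAN
// ID) followed by the encapsulated ethertype.
int main()
	{
	// Hypothetical bytes following ethertype 0x8100: outer tag (VLAN 100),
	// inner ethertype 0x8100, inner tag (VLAN 200), then IPv4 (0x0800).
	const uint8_t pdata[] = { 0x00, 0x64, 0x81, 0x00, 0x00, 0xC8, 0x08, 0x00 };
	const uint8_t* p = pdata;

	uint32_t vlan = ((p[0] << 8) + p[1]) & 0xfff;
	uint32_t protocol = (p[2] << 8) + p[3];
	p += 4; // Skip the vlan header.

	uint32_t inner_vlan = 0;
	if ( protocol == 0x8100 || protocol == 0x9100 )
		{
		inner_vlan = ((p[0] << 8) + p[1]) & 0xfff;
		protocol = (p[2] << 8) + p[3];
		p += 4; // Skip the inner vlan header.
		}

	printf("vlan=%u inner_vlan=%u ethertype=0x%04x\n", vlan, inner_vlan, protocol);
	// vlan=100 inner_vlan=200 ethertype=0x0800
	return 0;
	}
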
diff --git a/src/iosource/Packet.h b/src/iosource/Packet.h
index eaa1b90210..a96f14ebdd 100644
--- a/src/iosource/Packet.h
+++ b/src/iosource/Packet.h
@@ -167,13 +167,13 @@ public:
* Layer 3 protocol identified (if any). Valid iff Layer2Valid()
* returns true.
*/
- Layer3Proto l3_proto; ///
+ Layer3Proto l3_proto; ///
/**
* If layer 2 is Ethernet, innermost ethertype field. Valid iff
* Layer2Valid() returns true.
*/
- uint32 eth_type; ///
+ uint32 eth_type; ///
/**
* (Outermost) VLAN tag if any, else 0. Valid iff Layer2Valid()
@@ -181,6 +181,12 @@ public:
*/
uint32 vlan; ///
+ /**
+ * (Innermost) VLAN tag if any, else 0. Valid iff Layer2Valid()
+ * returns true.
+ */
+ uint32 inner_vlan;
+
private:
// Calculate layer 2 attributes. Sets
void ProcessLayer2();
diff --git a/src/iosource/PktDumper.cc b/src/iosource/PktDumper.cc
index a4bc3a82f8..10c95e8021 100644
--- a/src/iosource/PktDumper.cc
+++ b/src/iosource/PktDumper.cc
@@ -4,7 +4,7 @@
#include
#include
-#include "config.h"
+#include "bro-config.h"
#include "PktDumper.h"
diff --git a/src/iosource/PktSrc.cc b/src/iosource/PktSrc.cc
index 8012f79f1b..8db9db6ef1 100644
--- a/src/iosource/PktSrc.cc
+++ b/src/iosource/PktSrc.cc
@@ -3,7 +3,7 @@
#include
#include
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "PktSrc.h"
@@ -11,6 +11,8 @@
#include "Net.h"
#include "Sessions.h"
+#include "pcap/const.bif.h"
+
using namespace iosource;
PktSrc::Properties::Properties()
@@ -34,9 +36,7 @@ PktSrc::PktSrc()
PktSrc::~PktSrc()
{
- BPF_Program* code;
- IterCookie* cookie = filters.InitForIteration();
- while ( (code = filters.NextEntry(cookie)) )
+ for ( auto code : filters )
delete code;
}
@@ -66,11 +66,6 @@ bool PktSrc::IsError() const
return ErrorMsg();
}
-int PktSrc::SnapLen() const
- {
- return snaplen; // That's a global. Change?
- }
-
bool PktSrc::IsLive() const
{
return props.is_live;
@@ -112,7 +107,7 @@ void PktSrc::Opened(const Properties& arg_props)
}
if ( props.is_live )
- Info(fmt("listening on %s, capture length %d bytes\n", props.path.c_str(), SnapLen()));
+ Info(fmt("listening on %s\n", props.path.c_str()));
DBG_LOG(DBG_PKTIO, "Opened source %s", props.path.c_str());
}
@@ -325,7 +320,7 @@ bool PktSrc::PrecompileBPFFilter(int index, const std::string& filter)
// Compile filter.
BPF_Program* code = new BPF_Program();
- if ( ! code->Compile(SnapLen(), LinkType(), filter.c_str(), Netmask(), errbuf, sizeof(errbuf)) )
+ if ( ! code->Compile(BifConst::Pcap::snaplen, LinkType(), filter.c_str(), Netmask(), errbuf, sizeof(errbuf)) )
{
string msg = fmt("cannot compile BPF filter \"%s\"", filter.c_str());
@@ -338,16 +333,16 @@ bool PktSrc::PrecompileBPFFilter(int index, const std::string& filter)
return 0;
}
- // Store it in hash.
- HashKey* hash = new HashKey(HashKey(bro_int_t(index)));
- BPF_Program* oldcode = filters.Lookup(hash);
- if ( oldcode )
- delete oldcode;
+ // Store it in vector.
+ if ( index >= static_cast<int>(filters.size()) )
+ filters.resize(index + 1);
- filters.Insert(hash, code);
- delete hash;
+ if ( auto old = filters[index] )
+ delete old;
- return 1;
+ filters[index] = code;
+
+ return true;
}
BPF_Program* PktSrc::GetBPFFilter(int index)
@@ -355,10 +350,7 @@ BPF_Program* PktSrc::GetBPFFilter(int index)
if ( index < 0 )
return 0;
- HashKey* hash = new HashKey(HashKey(bro_int_t(index)));
- BPF_Program* code = filters.Lookup(hash);
- delete hash;
- return code;
+ return (static_cast<int>(filters.size()) > index ? filters[index] : 0);
}
bool PktSrc::ApplyBPFFilter(int index, const struct pcap_pkthdr *hdr, const u_char *pkt)
diff --git a/src/iosource/PktSrc.h b/src/iosource/PktSrc.h
index bf4c811dca..25a743dc53 100644
--- a/src/iosource/PktSrc.h
+++ b/src/iosource/PktSrc.h
@@ -3,6 +3,8 @@
#ifndef IOSOURCE_PKTSRC_PKTSRC_H
#define IOSOURCE_PKTSRC_PKTSRC_H
+#include <vector>
+
#include "IOSource.h"
#include "BPF_Program.h"
#include "Dict.h"
@@ -95,11 +97,6 @@ public:
*/
int HdrSize() const;
- /**
- * Returns the snap length for this source.
- */
- int SnapLen() const;
-
/**
* In pseudo-realtime mode, returns the logical timestamp of the
* current packet. Undefined if not running pseudo-realtime mode.
@@ -367,7 +364,7 @@ private:
Packet current_packet;
// For BPF filtering support.
- PDict(BPF_Program) filters;
+ std::vector<BPF_Program*> filters;
// Only set in pseudo-realtime mode.
double first_timestamp;
diff --git a/src/iosource/pcap/CMakeLists.txt b/src/iosource/pcap/CMakeLists.txt
index 1c57bb6ac9..cf9f577760 100644
--- a/src/iosource/pcap/CMakeLists.txt
+++ b/src/iosource/pcap/CMakeLists.txt
@@ -5,4 +5,6 @@ include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DI
bro_plugin_begin(Bro Pcap)
bro_plugin_cc(Source.cc Dumper.cc Plugin.cc)
+bif_target(functions.bif)
+bif_target(const.bif)
bro_plugin_end()
diff --git a/src/iosource/pcap/Dumper.cc b/src/iosource/pcap/Dumper.cc
index 5bea6231f7..20e36420c6 100644
--- a/src/iosource/pcap/Dumper.cc
+++ b/src/iosource/pcap/Dumper.cc
@@ -7,6 +7,8 @@
#include "../PktSrc.h"
#include "../../Net.h"
+#include "const.bif.h"
+
using namespace iosource::pcap;
PcapDumper::PcapDumper(const std::string& path, bool arg_append)
@@ -25,7 +27,8 @@ void PcapDumper::Open()
{
int linktype = -1;
- pd = pcap_open_dead(DLT_EN10MB, snaplen);
+ pd = pcap_open_dead(DLT_EN10MB, BifConst::Pcap::snaplen);
+
if ( ! pd )
{
Error("error for pcap_open_dead");
diff --git a/src/iosource/pcap/Source.cc b/src/iosource/pcap/Source.cc
index bebe02c018..8158266f1c 100644
--- a/src/iosource/pcap/Source.cc
+++ b/src/iosource/pcap/Source.cc
@@ -2,11 +2,13 @@
#include
-#include "config.h"
+#include "bro-config.h"
#include "Source.h"
#include "iosource/Packet.h"
+#include "const.bif.h"
+
#ifdef HAVE_PCAP_INT_H
#include
#endif
@@ -84,32 +86,64 @@ void PcapSource::OpenLive()
props.netmask = PktSrc::NETMASK_UNKNOWN;
#endif
- // We use the smallest time-out possible to return almost immediately if
- // no packets are available. (We can't use set_nonblocking() as it's
- // broken on FreeBSD: even when select() indicates that we can read
- // something, we may get nothing if the store buffer hasn't filled up
- // yet.)
- pd = pcap_open_live(props.path.c_str(), SnapLen(), 1, 1, tmp_errbuf);
+ pd = pcap_create(props.path.c_str(), errbuf);
if ( ! pd )
{
- Error(tmp_errbuf);
+ PcapError("pcap_create");
return;
}
- // ### This needs autoconf'ing.
-#ifdef HAVE_PCAP_INT_H
- Info(fmt("pcap bufsize = %d\n", ((struct pcap *) pd)->bufsize));
-#endif
+ if ( pcap_set_snaplen(pd, BifConst::Pcap::snaplen) )
+ {
+ PcapError("pcap_set_snaplen");
+ return;
+ }
+
+ if ( pcap_set_promisc(pd, 1) )
+ {
+ PcapError("pcap_set_promisc");
+ return;
+ }
+
+ // We use the smallest time-out possible to return almost immediately
+ // if no packets are available. (We can't use set_nonblocking() as
+ // it's broken on FreeBSD: even when select() indicates that we can
+ // read something, we may get nothing if the store buffer hasn't
+ // filled up yet.)
+ //
+ // TODO: The comment about FreeBSD is pretty old and may not apply
+ // anymore these days.
+ if ( pcap_set_timeout(pd, 1) )
+ {
+ PcapError("pcap_set_timeout");
+ return;
+ }
+
+ if ( pcap_set_buffer_size(pd, BifConst::Pcap::bufsize * 1024 * 1024) )
+ {
+ PcapError("pcap_set_buffer_size");
+ return;
+ }
+
+ if ( pcap_activate(pd) )
+ {
+ PcapError("pcap_activate");
+ return;
+ }
#ifdef HAVE_LINUX
if ( pcap_setnonblock(pd, 1, tmp_errbuf) < 0 )
{
- PcapError();
+ PcapError("pcap_setnonblock");
return;
}
#endif
+#ifdef HAVE_PCAP_INT_H
+ Info(fmt("pcap bufsize = %d\n", ((struct pcap *) pd)->bufsize));
+#endif
+
props.selectable_fd = pcap_fileno(pd);
SetHdrSize();
@@ -257,12 +291,17 @@ void PcapSource::Statistics(Stats* s)
s->dropped = 0;
}
-void PcapSource::PcapError()
+void PcapSource::PcapError(const char* where)
{
+ string location;
+
+ if ( where )
+ location = fmt(" (%s)", where);
+
if ( pd )
- Error(fmt("pcap_error: %s", pcap_geterr(pd)));
+ Error(fmt("pcap_error: %s%s", pcap_geterr(pd), location.c_str()));
else
- Error("pcap_error: not open");
+ Error(fmt("pcap_error: not open%s", location.c_str()));
Close();
}
diff --git a/src/iosource/pcap/Source.h b/src/iosource/pcap/Source.h
index f627e30afa..f3c193d855 100644
--- a/src/iosource/pcap/Source.h
+++ b/src/iosource/pcap/Source.h
@@ -28,7 +28,7 @@ protected:
private:
void OpenLive();
void OpenOffline();
- void PcapError();
+ void PcapError(const char* where = 0);
void SetHdrSize();
Properties props;
diff --git a/src/iosource/pcap/const.bif b/src/iosource/pcap/const.bif
new file mode 100644
index 0000000000..877dccef74
--- /dev/null
+++ b/src/iosource/pcap/const.bif
@@ -0,0 +1,4 @@
+
+
+const Pcap::snaplen: count;
+const Pcap::bufsize: count;
diff --git a/src/iosource/pcap.bif b/src/iosource/pcap/functions.bif
similarity index 89%
rename from src/iosource/pcap.bif
rename to src/iosource/pcap/functions.bif
index ee4e1e6c06..4465510987 100644
--- a/src/iosource/pcap.bif
+++ b/src/iosource/pcap/functions.bif
@@ -1,4 +1,6 @@
+module Pcap;
+
## Precompiles a PCAP filter and binds it to a given identifier.
##
## id: The PCAP identifier to reference the filter *s* later on.
@@ -19,6 +21,15 @@
## pcap_error
function precompile_pcap_filter%(id: PcapFilterID, s: string%): bool
%{
+ if ( id->AsEnum() >= 100 )
+ {
+ // We use a vector as underlying data structure for fast
+ // lookups and limit the ID space so that it doesn't grow too
+ // large.
+ builtin_error(fmt("PCAP filter ids must remain below 100 (is %" PRId64 ")", id->AsInt()));
+ return new Val(false, TYPE_BOOL);
+ }
+
bool success = true;
const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
@@ -86,7 +97,7 @@ function install_pcap_filter%(id: PcapFilterID%): bool
## install_dst_net_filter
## uninstall_dst_addr_filter
## uninstall_dst_net_filter
-function pcap_error%(%): string
+function error%(%): string
%{
const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
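
The new ID check above exists because the PktSrc changes earlier in this patch replace the per-source PDict with a std::vector indexed directly by the filter ID, so IDs are capped to keep that vector small. A sketch of the index-into-vector storage pattern, with a hypothetical Filter type standing in for BPF_Program:

#include <cstdio>
#include <vector>

// Sketch of the vector-backed filter storage used in PktSrc above: the filter
// ID is the vector index, the vector grows on demand, and lookups are O(1).
struct Filter { int id; };

static std::vector<Filter*> filters;

static bool store_filter(int index, Filter* code)
	{
	if ( index >= static_cast<int>(filters.size()) )
		filters.resize(index + 1); // new slots are null

	delete filters[index]; // replace any previously stored program
	filters[index] = code;
	return true;
	}

static Filter* get_filter(int index)
	{
	if ( index < 0 || index >= static_cast<int>(filters.size()) )
		return nullptr;

	return filters[index];
	}

int main()
	{
	store_filter(3, new Filter{3});
	printf("%p %p\n", (void*) get_filter(3), (void*) get_filter(42)); // second is null
	return 0;
	}
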
diff --git a/src/logging/Tag.h b/src/logging/Tag.h
index b5b235154a..ae75487664 100644
--- a/src/logging/Tag.h
+++ b/src/logging/Tag.h
@@ -3,7 +3,7 @@
#ifndef LOGGING_TAG_H
#define LOGGING_TAG_H
-#include "config.h"
+#include "bro-config.h"
#include "util.h"
#include "../Tag.h"
#include "plugin/TaggedComponent.h"
diff --git a/src/logging/writers/sqlite/SQLite.cc b/src/logging/writers/sqlite/SQLite.cc
index 090810055d..ce04839337 100644
--- a/src/logging/writers/sqlite/SQLite.cc
+++ b/src/logging/writers/sqlite/SQLite.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/logging/writers/sqlite/SQLite.h b/src/logging/writers/sqlite/SQLite.h
index a820530456..cce87da2ef 100644
--- a/src/logging/writers/sqlite/SQLite.h
+++ b/src/logging/writers/sqlite/SQLite.h
@@ -5,7 +5,7 @@
#ifndef LOGGING_WRITER_SQLITE_H
#define LOGGING_WRITER_SQLITE_H
-#include "config.h"
+#include "bro-config.h"
#include "logging/WriterBackend.h"
#include "threading/formatters/Ascii.h"
diff --git a/src/main.cc b/src/main.cc
index 425269b713..73181c82f2 100644
--- a/src/main.cc
+++ b/src/main.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
@@ -121,7 +121,6 @@ char* command_line_policy = 0;
vector params;
set requested_plugins;
char* proc_status_file = 0;
-int snaplen = 0; // this gets set from the scripting-layer's value
OpaqueType* md5_type = 0;
OpaqueType* sha1_type = 0;
@@ -762,9 +761,6 @@ int main(int argc, char** argv)
// DEBUG_MSG("HMAC key: %s\n", md5_digest_print(shared_hmac_md5_key));
init_hash_function();
- // Must come after hash initialization.
- binpac::init();
-
ERR_load_crypto_strings();
OPENSSL_add_all_algorithms_conf();
SSL_library_init();
@@ -864,6 +860,10 @@ int main(int argc, char** argv)
if ( events_file )
event_player = new EventPlayer(events_file);
+ // Must come after plugin activation (and also after hash
+ // initialization).
+ binpac::init();
+
init_event_handlers();
md5_type = new OpaqueType("md5");
@@ -989,8 +989,6 @@ int main(int argc, char** argv)
}
}
- snaplen = internal_val("snaplen")->AsCount();
-
if ( dns_type != DNS_PRIME )
net_init(interfaces, read_files, writefile, do_watchdog);
diff --git a/src/make_dbg_constants.pl b/src/make_dbg_constants.pl
deleted file mode 100644
index 29efac8050..0000000000
--- a/src/make_dbg_constants.pl
+++ /dev/null
@@ -1,143 +0,0 @@
-# Build the DebugCmdConstants.h and DebugCmdInfoConstants.h files from the
-# DebugCmdInfoConstants.in file.
-#
-# We do this via a script rather than maintaining them directly because
-# the struct is a little complicated, so has to be initialized from code,
-# plus we want to make adding new constants somewhat less painful.
-#
-# The input filename should be supplied as an argument
-#
-# DebugCmds are printed to DebugCmdConstants.h
-# DebugCmdInfos are printed to DebugCmdInfoConstants.h
-#
-# The input format is:
-#
-# cmd: [DebugCmd]
-# names: [space delimited names of cmd]
-# resume: ['true' or 'false': should execution resume after this command?]
-# help: [some help text]
-#
-# Blank lines are skipped.
-# Comments should start with // and should be on a line by themselves.
-
-use strict;
-
-open INPUT, $ARGV[0] or die "Input file $ARGV[0] not found.";
-open DEBUGCMDS, ">DebugCmdConstants.h"
- or die "Unable to open DebugCmdConstants.h";
-open DEBUGCMDINFOS, ">DebugCmdInfoConstants.cc"
- or die "Unable to open DebugCmdInfoConstants.cc";
-
-my $init_tmpl =
-'
- {
- DebugCmdInfo* info;
- @@name_init
- info = new DebugCmdInfo (@@cmd, names, @@num_names, @@resume, "@@help",
- @@repeatable);
- g_DebugCmdInfos.push_back(info);
- }
-';
-
-my $enum_str = "
-//
-// This file was automatically generated from $ARGV[0]
-// DO NOT EDIT.
-//
-enum DebugCmd {
-";
-
-my $init_str = "
-//
-// This file was automatically generated from $ARGV[0]
-// DO NOT EDIT.
-//
-
-#include \"util.h\"
-void init_global_dbg_constants () {
-";
-
-my %dbginfo;
-# { cmd, num_names, \@names, name_init, resume, help, repeatable }
-
-no strict "refs";
-sub OutputRecord {
- $dbginfo{name_init} .= "const char * const names[] = {\n\t";
- $_ = "\"$_\"" foreach @{$dbginfo{names}}; # put quotes around the strings
- my $name_strs = join ",\n\t", @{$dbginfo{names}};
- $dbginfo{name_init} .= "$name_strs\n };\n";
-
- $dbginfo{num_names} = scalar @{$dbginfo{names}};
-
- # substitute into template
- my $init = $init_tmpl;
- $init =~ s/(\@\@(\w+))/defined $dbginfo{$2} ? $dbginfo{$2} : ""/eg;
-
- $init_str .= $init;
-
- $enum_str .= "\t$dbginfo{cmd},\n";
-}
-use strict "refs";
-
-sub InitDbginfo
- {
- my $dbginfo = shift;
- %$dbginfo = ( num_names => 0, names => [], resume => 'false', help => '',
- repeatable => 'false' );
- }
-
-
-InitDbginfo(\%dbginfo);
-
-while () {
- chomp ($_);
- next if $_ =~ /^\s*$/; # skip blank
- next if $_ =~ /^\s*\/\//; # skip comments
-
- $_ =~ /^\s*([a-z]+):\s*(.*)$/ or
- die "Error in debug constant file on line: $_";
-
- if ($1 eq 'cmd')
- {
- my $newcmd = $2;
- if (defined $dbginfo{cmd}) { # output the previous record
- OutputRecord();
- InitDbginfo(\%dbginfo);
- }
-
- $dbginfo{cmd} = $newcmd;
- }
- elsif ($1 eq 'names')
- {
- my @names = split / /, $2;
- $dbginfo{names} = \@names;
- }
- elsif ($1 eq 'resume')
- {
- $dbginfo{resume} = $2;
- }
- elsif ($1 eq 'help')
- {
- $dbginfo{help} = $2;
- $dbginfo{help} =~ s{\"}{\\\"}g; # escape quotation marks
- }
- elsif ($1 eq 'repeatable')
- {
- $dbginfo{repeatable} = $2;
- }
- else {
- die "Unknown command: $_\n";
- }
-}
-
-# output the last record
-OutputRecord();
-
-$init_str .= " \n}\n";
-$enum_str .= " dcLast\n};\n";
-
-print DEBUGCMDS $enum_str;
-close DEBUGCMDS;
-
-print DEBUGCMDINFOS $init_str;
-close DEBUGCMDINFOS;
diff --git a/src/make_dbg_constants.py b/src/make_dbg_constants.py
new file mode 100644
index 0000000000..e18330db87
--- /dev/null
+++ b/src/make_dbg_constants.py
@@ -0,0 +1,114 @@
+# Build the DebugCmdConstants.h and DebugCmdInfoConstants.cc files from the
+# DebugCmdInfoConstants.in file.
+#
+# We do this via a script rather than maintaining them directly because
+# the struct is a little complicated, so has to be initialized from code,
+# plus we want to make adding new constants somewhat less painful.
+#
+# The input filename should be supplied as an argument.
+#
+# DebugCmds are printed to DebugCmdConstants.h
+# DebugCmdInfos are printed to DebugCmdInfoConstants.cc
+#
+# The input format is:
+#
+# cmd: [DebugCmd]
+# names: [space delimited names of cmd]
+# resume: ['true' or 'false': should execution resume after this command?]
+# help: [some help text]
+#
+# Blank lines are skipped.
+# Comments should start with // and should be on a line by themselves.
+
+import sys
+
+inputfile = sys.argv[1]
+
+init_tmpl = '''
+ {
+ DebugCmdInfo* info;
+ %(name_init)s
+ info = new DebugCmdInfo (%(cmd)s, names, %(num_names)s, %(resume)s, "%(help)s",
+ %(repeatable)s);
+ g_DebugCmdInfos.push_back(info);
+ }
+'''
+
+enum_str = '''
+//
+// This file was automatically generated from %s
+// DO NOT EDIT.
+//
+enum DebugCmd {
+''' % inputfile
+
+init_str = '''
+//
+// This file was automatically generated from %s
+// DO NOT EDIT.
+//
+
+#include "util.h"
+void init_global_dbg_constants () {
+''' % inputfile
+
+def outputrecord():
+ global init_str, enum_str
+
+ dbginfo["name_init"] = "const char * const names[] = {\n\t%s\n };\n" % ",\n\t".join(dbginfo["names"])
+
+ dbginfo["num_names"] = len(dbginfo["names"])
+
+ # substitute into template
+ init_str += init_tmpl % dbginfo
+
+ enum_str += "\t%s,\n" % dbginfo["cmd"]
+
+def initdbginfo():
+ return {"cmd": "", "name_init": "", "num_names": 0, "names": [],
+ "resume": "false", "help": "", "repeatable": "false"}
+
+dbginfo = initdbginfo()
+
+inputf = open(inputfile, "r")
+for line in inputf:
+ line = line.strip()
+ if not line or line.startswith("//"): # skip empty lines and comments
+ continue
+
+ fields = line.split(":", 1)
+ if len(fields) != 2:
+ raise RuntimeError("Error in debug constant file on line: %s" % line)
+
+ f1, f2 = fields
+ f2 = f2.strip()
+
+ if f1 == "cmd":
+ if dbginfo[f1]: # output the previous record
+ outputrecord()
+ dbginfo = initdbginfo()
+
+ dbginfo[f1] = f2
+ elif f1 == "names":
+ # put quotes around the strings
+ dbginfo[f1] = [ '"%s"' % n for n in f2.split() ]
+ elif f1 == "help":
+ dbginfo[f1] = f2.replace('"', '\\"') # escape quotation marks
+ elif f1 in ("resume", "repeatable"):
+ dbginfo[f1] = f2
+ else:
+ raise RuntimeError("Unknown command: %s" % line)
+
+# output the last record
+outputrecord()
+
+init_str += " \n}\n"
+enum_str += " dcLast\n};\n"
+
+debugcmds = open("DebugCmdConstants.h", "w")
+debugcmds.write(enum_str)
+debugcmds.close()
+
+debugcmdinfos = open("DebugCmdInfoConstants.cc", "w")
+debugcmdinfos.write(init_str)
+debugcmdinfos.close()
diff --git a/src/nb_dns.c b/src/nb_dns.c
index 33a00837e4..1e5d427924 100644
--- a/src/nb_dns.c
+++ b/src/nb_dns.c
@@ -11,7 +11,7 @@
* crack reply buffers is private.
*/
-#include "config.h" /* must appear before first ifdef */
+#include "bro-config.h" /* must appear before first ifdef */
#include
#include
diff --git a/src/net_util.cc b/src/net_util.cc
index aa88903a8a..95be1f8b0c 100644
--- a/src/net_util.cc
+++ b/src/net_util.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/net_util.h b/src/net_util.h
index d68a7110ce..ebdd0cbb88 100644
--- a/src/net_util.h
+++ b/src/net_util.h
@@ -3,7 +3,7 @@
#ifndef netutil_h
#define netutil_h
-#include "config.h"
+#include "bro-config.h"
// Define first.
typedef enum {
diff --git a/src/patricia.c b/src/patricia.c
index c4815b40ec..552019be09 100644
--- a/src/patricia.c
+++ b/src/patricia.c
@@ -1,3 +1,8 @@
+/*
+ * Johanna Amann
+ *
+ * Added patricia_search_all function.
+ */
/*
* Dave Plonka
*
@@ -61,6 +66,7 @@ static char copyright[] =
#include /* memcpy, strchr, strlen */
#include /* for inet_addr */
#include /* for u_short, etc. */
+#include <stdbool.h>
#include "patricia.h"
@@ -561,6 +567,105 @@ patricia_search_exact (patricia_tree_t *patricia, prefix_t *prefix)
return (NULL);
}
+bool
+patricia_search_all (patricia_tree_t *patricia, prefix_t *prefix, patricia_node_t ***list, int *n)
+{
+ patricia_node_t *node;
+ patricia_node_t *stack[PATRICIA_MAXBITS + 1];
+ u_char *addr;
+ u_int bitlen;
+ int cnt = 0;
+
+ assert (patricia);
+ assert (prefix);
+ assert (prefix->bitlen <= patricia->maxbits);
+ assert (n);
+ assert (list);
+ assert (*list == NULL);
+
+ *n = 0;
+
+ if (patricia->head == NULL)
+ return (NULL);
+
+ node = patricia->head;
+ addr = prefix_touchar (prefix);
+ bitlen = prefix->bitlen;
+
+ while (node->bit < bitlen) {
+
+ if (node->prefix) {
+#ifdef PATRICIA_DEBUG
+ fprintf (stderr, "patricia_search_all: push %s/%d\n",
+ prefix_toa (node->prefix), node->prefix->bitlen);
+#endif /* PATRICIA_DEBUG */
+ stack[cnt++] = node;
+ }
+
+ if (BIT_TEST (addr[node->bit >> 3], 0x80 >> (node->bit & 0x07))) {
+#ifdef PATRICIA_DEBUG
+ if (node->prefix)
+ fprintf (stderr, "patricia_search_all: take right %s/%d\n",
+ prefix_toa (node->prefix), node->prefix->bitlen);
+ else
+ fprintf (stderr, "patricia_search_all: take right at %d\n",
+ node->bit);
+#endif /* PATRICIA_DEBUG */
+ node = node->r;
+ } else {
+#ifdef PATRICIA_DEBUG
+ if (node->prefix)
+ fprintf (stderr, "patricia_search_all: take left %s/%d\n",
+ prefix_toa (node->prefix), node->prefix->bitlen);
+ else
+ fprintf (stderr, "patricia_search_all: take left at %d\n",
+ node->bit);
+#endif /* PATRICIA_DEBUG */
+ node = node->l;
+ }
+
+ if (node == NULL)
+ break;
+ }
+
+ if (node && node->prefix)
+ stack[cnt++] = node;
+
+#ifdef PATRICIA_DEBUG
+ if (node == NULL)
+ fprintf (stderr, "patricia_search_all: stop at null\n");
+ else if (node->prefix)
+ fprintf (stderr, "patricia_search_all: stop at %s/%d\n",
+ prefix_toa (node->prefix), node->prefix->bitlen);
+ else
+ fprintf (stderr, "patricia_search_all: stop at %d\n", node->bit);
+#endif /* PATRICIA_DEBUG */
+
+ if (cnt <= 0)
+ return false;
+
+ // OK, we now have an upper bound on how many nodes we can return; just allocate that many.
+ patricia_node_t **outlist = calloc(cnt, sizeof(patricia_node_t*));
+
+ while (--cnt >= 0) {
+ node = stack[cnt];
+#ifdef PATRICIA_DEBUG
+ fprintf (stderr, "patricia_search_all: pop %s/%d\n",
+ prefix_toa (node->prefix), node->prefix->bitlen);
+#endif /* PATRICIA_DEBUG */
+ if (comp_with_mask (prefix_tochar (node->prefix), prefix_tochar (prefix), node->prefix->bitlen)) {
+#ifdef PATRICIA_DEBUG
+ fprintf (stderr, "patricia_search_all: found %s/%d\n",
+ prefix_toa (node->prefix), node->prefix->bitlen);
+#endif /* PATRICIA_DEBUG */
+ outlist[*n] = node;
+ (*n)++;
+ }
+ }
+ *list = outlist;
+ return (*n == 0);
+}
+
/* if inclusive != 0, "best" may be the given prefix itself */
patricia_node_t *
diff --git a/src/patricia.h b/src/patricia.h
index dc67226362..3a9badd29a 100644
--- a/src/patricia.h
+++ b/src/patricia.h
@@ -104,6 +104,7 @@ typedef struct _patricia_tree_t {
patricia_node_t *patricia_search_exact (patricia_tree_t *patricia, prefix_t *prefix);
+bool patricia_search_all (patricia_tree_t *patricia, prefix_t *prefix, patricia_node_t ***list, int *n);
patricia_node_t *patricia_search_best (patricia_tree_t *patricia, prefix_t *prefix);
patricia_node_t * patricia_search_best2 (patricia_tree_t *patricia, prefix_t *prefix,
int inclusive);
diff --git a/src/plugin/Manager.cc b/src/plugin/Manager.cc
index 8e58c1296b..a449fb34e4 100644
--- a/src/plugin/Manager.cc
+++ b/src/plugin/Manager.cc
@@ -182,9 +182,17 @@ bool Manager::ActivateDynamicPluginInternal(const std::string& name, bool ok_if_
add_to_bro_path(scripts);
}
- // Load {bif,scripts}/__load__.bro automatically.
+ // First load {scripts}/__preload__.bro automatically.
+ string init = dir + "scripts/__preload__.bro";
- string init = dir + "lib/bif/__load__.bro";
+ if ( is_file(init) )
+ {
+ DBG_LOG(DBG_PLUGINS, " Loading %s", init.c_str());
+ scripts_to_load.push_back(init);
+ }
+
+ // Load {bif,scripts}/__load__.bro automatically.
+ init = dir + "lib/bif/__load__.bro";
if ( is_file(init) )
{
@@ -660,6 +668,33 @@ void Manager::HookDrainEvents() const
}
+void Manager::HookSetupAnalyzerTree(Connection *conn) const
+ {
+ HookArgumentList args;
+
+ if ( HavePluginForHook(META_HOOK_PRE) )
+ {
+ args.push_back(conn);
+ MetaHookPre(HOOK_SETUP_ANALYZER_TREE, args);
+ }
+
+ hook_list *l = hooks[HOOK_SETUP_ANALYZER_TREE];
+
+ if ( l )
+ {
+ for (hook_list::iterator i = l->begin() ; i != l->end(); ++i)
+ {
+ Plugin *p = (*i).second;
+ p->HookSetupAnalyzerTree(conn);
+ }
+ }
+
+ if ( HavePluginForHook(META_HOOK_POST) )
+ {
+ MetaHookPost(HOOK_SETUP_ANALYZER_TREE, args, HookArgument());
+ }
+ }
+
void Manager::HookUpdateNetworkTime(double network_time) const
{
HookArgumentList args;
diff --git a/src/plugin/Manager.h b/src/plugin/Manager.h
index db812b6a8c..04c632d61a 100644
--- a/src/plugin/Manager.h
+++ b/src/plugin/Manager.h
@@ -264,6 +264,15 @@ public:
*/
void HookUpdateNetworkTime(double network_time) const;
+ /**
+ * Hook that executes when a connection's initial analyzer tree
+ * has been fully set up. The hook can manipulate the tree at this time,
+ * for example by adding further analyzers.
+ *
+ * @param conn The connection.
+ */
+ void HookSetupAnalyzerTree(Connection *conn) const;
+
/**
* Hook that informs plugins that the event queue is being drained.
*/
diff --git a/src/plugin/Plugin.cc b/src/plugin/Plugin.cc
index f05378eb84..190ae02cde 100644
--- a/src/plugin/Plugin.cc
+++ b/src/plugin/Plugin.cc
@@ -23,6 +23,7 @@ const char* plugin::hook_name(HookType h)
"DrainEvents",
"UpdateNetworkTime",
"BroObjDtor",
+ "SetupAnalyzerTree",
// MetaHooks
"MetaHookPre",
"MetaHookPost",
@@ -310,6 +311,10 @@ void Plugin::HookUpdateNetworkTime(double network_time)
{
}
+void Plugin::HookSetupAnalyzerTree(Connection *conn)
+ {
+ }
+
void Plugin::HookBroObjDtor(void* obj)
{
}
diff --git a/src/plugin/Plugin.h b/src/plugin/Plugin.h
index 32359b4686..e23173f726 100644
--- a/src/plugin/Plugin.h
+++ b/src/plugin/Plugin.h
@@ -7,14 +7,14 @@
#include
#include
-#include "config.h"
+#include "bro-config.h"
#include "analyzer/Component.h"
#include "file_analysis/Component.h"
#include "iosource/Component.h"
// We allow to override this externally for testing purposes.
#ifndef BRO_PLUGIN_API_VERSION
-#define BRO_PLUGIN_API_VERSION 2
+#define BRO_PLUGIN_API_VERSION 4
#endif
class ODesc;
@@ -39,6 +39,7 @@ enum HookType {
HOOK_DRAIN_EVENTS, //< Activates Plugin::HookDrainEvents()
HOOK_UPDATE_NETWORK_TIME, //< Activates Plugin::HookUpdateNetworkTime.
HOOK_BRO_OBJ_DTOR, //< Activates Plugin::HookBroObjDtor.
+ HOOK_SETUP_ANALYZER_TREE, //< Activates Plugin::HookSetupAnalyzerTree
// Meta hooks.
META_HOOK_PRE, //< Activates Plugin::MetaHookPre().
@@ -636,6 +637,8 @@ protected:
*/
virtual void HookUpdateNetworkTime(double network_time);
+ virtual void HookSetupAnalyzerTree(Connection *conn);
+
/**
* Hook for destruction of objects registered with
* RequestBroObjDtor(). When Bro's reference counting triggers the
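
To illustrate the new hook added above: a plugin that wants to be called once a connection's analyzer tree is set up overrides HookSetupAnalyzerTree() and registers for HOOK_SETUP_ANALYZER_TREE. The sketch below is hypothetical -- the plugin name, description, and the EnableHook() registration call are assumptions; only the hook type and the override signature come from this patch:

#include "plugin/Plugin.h"

namespace plugin {
namespace Demo_AnalyzerTweak {

// Hypothetical plugin sketch using the new HOOK_SETUP_ANALYZER_TREE hook.
class Plugin : public plugin::Plugin {
public:
	plugin::Configuration Configure()
		{
		// Assumed registration call; request the new hook.
		EnableHook(HOOK_SETUP_ANALYZER_TREE);

		plugin::Configuration config;
		config.name = "Demo::AnalyzerTweak";
		config.description = "Sketch of HookSetupAnalyzerTree usage (illustration only)";
		return config;
		}

	void HookSetupAnalyzerTree(Connection* conn) override
		{
		// The connection's initial analyzer tree is complete at this point;
		// further analyzers could be attached to conn here. Left empty.
		}
} plugin;

}
}
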
diff --git a/src/probabilistic/BloomFilter.h b/src/probabilistic/BloomFilter.h
index 53b66c377e..7fc32a9442 100644
--- a/src/probabilistic/BloomFilter.h
+++ b/src/probabilistic/BloomFilter.h
@@ -158,11 +158,11 @@ public:
static size_t K(size_t cells, size_t capacity);
// Overridden from BloomFilter.
- virtual bool Empty() const;
- virtual void Clear();
- virtual bool Merge(const BloomFilter* other);
- virtual BasicBloomFilter* Clone() const;
- virtual string InternalState() const;
+ virtual bool Empty() const override;
+ virtual void Clear() override;
+ virtual bool Merge(const BloomFilter* other) override;
+ virtual BasicBloomFilter* Clone() const override;
+ virtual string InternalState() const override;
protected:
DECLARE_SERIAL(BasicBloomFilter);
@@ -173,8 +173,8 @@ protected:
BasicBloomFilter();
// Overridden from BloomFilter.
- virtual void Add(const HashKey* key);
- virtual size_t Count(const HashKey* key) const;
+ virtual void Add(const HashKey* key) override;
+ virtual size_t Count(const HashKey* key) const override;
private:
BitVector* bits;
@@ -203,11 +203,11 @@ public:
~CountingBloomFilter();
// Overridden from BloomFilter.
- virtual bool Empty() const;
- virtual void Clear();
- virtual bool Merge(const BloomFilter* other);
- virtual CountingBloomFilter* Clone() const;
- virtual string InternalState() const;
+ virtual bool Empty() const override;
+ virtual void Clear() override;
+ virtual bool Merge(const BloomFilter* other) override;
+ virtual CountingBloomFilter* Clone() const override;
+ virtual string InternalState() const override;
protected:
DECLARE_SERIAL(CountingBloomFilter);
@@ -218,8 +218,8 @@ protected:
CountingBloomFilter();
// Overridden from BloomFilter.
- virtual void Add(const HashKey* key);
- virtual size_t Count(const HashKey* key) const;
+ virtual void Add(const HashKey* key) override;
+ virtual size_t Count(const HashKey* key) const override;
private:
CounterVector* cells;
diff --git a/src/probabilistic/Hasher.h b/src/probabilistic/Hasher.h
index 6128f3e04e..6ce13c6302 100644
--- a/src/probabilistic/Hasher.h
+++ b/src/probabilistic/Hasher.h
@@ -191,9 +191,9 @@ public:
DefaultHasher(size_t k, size_t seed);
// Overridden from Hasher.
- virtual digest_vector Hash(const void* x, size_t n) const /* final */;
- virtual DefaultHasher* Clone() const /* final */;
- virtual bool Equals(const Hasher* other) const /* final */;
+ virtual digest_vector Hash(const void* x, size_t n) const final;
+ virtual DefaultHasher* Clone() const final;
+ virtual bool Equals(const Hasher* other) const final;
DECLARE_SERIAL(DefaultHasher);
@@ -219,9 +219,9 @@ public:
DoubleHasher(size_t k, size_t seed);
// Overridden from Hasher.
- virtual digest_vector Hash(const void* x, size_t n) const /* final */;
- virtual DoubleHasher* Clone() const /* final */;
- virtual bool Equals(const Hasher* other) const /* final */;
+ virtual digest_vector Hash(const void* x, size_t n) const final;
+ virtual DoubleHasher* Clone() const final;
+ virtual bool Equals(const Hasher* other) const final;
DECLARE_SERIAL(DoubleHasher);
diff --git a/src/rule-parse.y b/src/rule-parse.y
index b0e00d10ed..32ada02cb3 100644
--- a/src/rule-parse.y
+++ b/src/rule-parse.y
@@ -2,7 +2,7 @@
#include
#include
#include
-#include "config.h"
+#include "bro-config.h"
#include "RuleMatcher.h"
#include "Reporter.h"
#include "IPAddr.h"
diff --git a/src/setsignal.c b/src/setsignal.c
index b49f0784e9..6344820398 100644
--- a/src/setsignal.c
+++ b/src/setsignal.c
@@ -2,7 +2,7 @@
* See the file "COPYING" in the main distribution directory for copyright.
*/
-#include "config.h" /* must appear before first ifdef */
+#include "bro-config.h" /* must appear before first ifdef */
#include
diff --git a/src/strings.bif b/src/strings.bif
index 80b60a57d0..914baaebbf 100644
--- a/src/strings.bif
+++ b/src/strings.bif
@@ -216,7 +216,13 @@ function join_string_vec%(vec: string_vec, sep: string%): string
if ( i > 0 )
d.Add(sep->CheckString(), 0);
- v->Lookup(i)->Describe(&d);
+ Val* e = v->Lookup(i);
+
+ // If the element is empty, skip it.
+ if ( ! e )
+ continue;
+
+ e->Describe(&d);
}
BroString* s = new BroString(1, d.TakeBytes(), d.Len());
@@ -1155,7 +1161,9 @@ function find_all%(str: string, re: pattern%) : string_set
int n = re->MatchPrefix(t, e - t);
if ( n >= 0 )
{
- a->Assign(new StringVal(n, (const char*) t), 0);
+ Val* idx = new StringVal(n, (const char*) t);
+ a->Assign(idx, 0);
+ Unref(idx);
t += n - 1;
}
}
diff --git a/src/strsep.c b/src/strsep.c
index 15a750885d..8540ac3688 100644
--- a/src/strsep.c
+++ b/src/strsep.c
@@ -31,7 +31,7 @@
* SUCH DAMAGE.
*/
-#include "config.h"
+#include "bro-config.h"
#ifndef HAVE_STRSEP
diff --git a/src/threading/BasicThread.cc b/src/threading/BasicThread.cc
index ffee21bc16..86d7d7b560 100644
--- a/src/threading/BasicThread.cc
+++ b/src/threading/BasicThread.cc
@@ -2,7 +2,7 @@
#include
#include
-#include "config.h"
+#include "bro-config.h"
#include "BasicThread.h"
#include "Manager.h"
diff --git a/src/threading/Formatter.cc b/src/threading/Formatter.cc
index f003f37d29..3f366de90a 100644
--- a/src/threading/Formatter.cc
+++ b/src/threading/Formatter.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/threading/formatters/Ascii.cc b/src/threading/formatters/Ascii.cc
index 6c114ff3fd..07ec05ca8b 100644
--- a/src/threading/formatters/Ascii.cc
+++ b/src/threading/formatters/Ascii.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include
#include
diff --git a/src/threading/formatters/JSON.cc b/src/threading/formatters/JSON.cc
index e1a5713461..3558baee5c 100644
--- a/src/threading/formatters/JSON.cc
+++ b/src/threading/formatters/JSON.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
@@ -35,7 +35,12 @@ bool JSON::Describe(ODesc* desc, int num_fields, const Field* const * fields,
const u_char* bytes = desc->Bytes();
int len = desc->Len();
- if ( i > 0 && len > 0 && bytes[len-1] != ',' && vals[i]->present )
+ if ( i > 0 &&
+ len > 0 &&
+ bytes[len-1] != ',' &&
+ bytes[len-1] != '{' &&
+ bytes[len-1] != '[' &&
+ vals[i]->present )
desc->AddRaw(",");
if ( ! Describe(desc, vals[i], fields[i]->name) )
diff --git a/src/util.cc b/src/util.cc
index a76ba84de3..0ea89beb90 100644
--- a/src/util.cc
+++ b/src/util.cc
@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
-#include "config.h"
+#include "bro-config.h"
#include "util-config.h"
#ifdef TIME_WITH_SYS_TIME
@@ -323,24 +323,6 @@ string to_upper(const std::string& s)
return t;
}
-const char* strchr_n(const char* s, const char* end_of_s, char ch)
- {
- for ( ; s < end_of_s; ++s )
- if ( *s == ch )
- return s;
-
- return 0;
- }
-
-const char* strrchr_n(const char* s, const char* end_of_s, char ch)
- {
- for ( --end_of_s; end_of_s >= s; --end_of_s )
- if ( *end_of_s == ch )
- return end_of_s;
-
- return 0;
- }
-
int decode_hex(char ch)
{
if ( ch >= '0' && ch <= '9' )
@@ -382,27 +364,6 @@ const char* strpbrk_n(size_t len, const char* s, const char* charset)
return 0;
}
-int strcasecmp_n(int b_len, const char* b, const char* t)
- {
- if ( ! b )
- return -1;
-
- int i;
- for ( i = 0; i < b_len; ++i )
- {
- char c1 = islower(b[i]) ? toupper(b[i]) : b[i];
- char c2 = islower(t[i]) ? toupper(t[i]) : t[i];
-
- if ( c1 < c2 )
- return -1;
-
- if ( c1 > c2 )
- return 1;
- }
-
- return t[i] != '\0';
- }
-
#ifndef HAVE_STRCASESTR
// This code is derived from software contributed to BSD by Chris Torek.
char* strcasestr(const char* s, const char* find)
@@ -421,7 +382,7 @@ char* strcasestr(const char* s, const char* find)
if ( sc == 0 )
return 0;
} while ( char(tolower((unsigned char) sc)) != c );
- } while ( strcasecmp_n(len, s, find) != 0 );
+ } while ( strncasecmp(s, find, len) != 0 );
--s;
}
diff --git a/src/util.h b/src/util.h
index f65e0fb7d0..15d1a059cd 100644
--- a/src/util.h
+++ b/src/util.h
@@ -23,7 +23,7 @@
#include
#include
#include
-#include "config.h"
+#include "bro-config.h"
#if __STDC__
#define myattribute __attribute__
@@ -143,11 +143,8 @@ extern char* get_word(char*& s);
extern void get_word(int length, const char* s, int& pwlen, const char*& pw);
extern void to_upper(char* s);
extern std::string to_upper(const std::string& s);
-extern const char* strchr_n(const char* s, const char* end_of_s, char ch);
-extern const char* strrchr_n(const char* s, const char* end_of_s, char ch);
extern int decode_hex(char ch);
extern unsigned char encode_hex(int h);
-extern int strcasecmp_n(int s_len, const char* s, const char* t);
#ifndef HAVE_STRCASESTR
extern char* strcasestr(const char* s, const char* find);
#endif
diff --git a/testing/Makefile b/testing/Makefile
index 122262f865..e83ec09396 100644
--- a/testing/Makefile
+++ b/testing/Makefile
@@ -17,8 +17,8 @@ make-brief:
coverage:
@for repo in $(DIRS); do (cd $$repo && echo "Coverage for '$$repo' dir:" && make -s coverage); done
- @test -f btest/coverage.log && cp btest/coverage.log `mktemp brocov.tmp.XXX` || true
- @for f in external/*/coverage.log; do test -f $$f && cp $$f `mktemp brocov.tmp.XXX` || true; done
+ @test -f btest/coverage.log && cp btest/coverage.log `mktemp brocov.tmp.XXXXXX` || true
+ @for f in external/*/coverage.log; do test -f $$f && cp $$f `mktemp brocov.tmp.XXXXXX` || true; done
@echo "Complete test suite code coverage:"
@./scripts/coverage-calc "brocov.tmp.*" coverage.log `pwd`/../scripts
@rm -f brocov.tmp.*
diff --git a/testing/btest/Baseline/bifs.check_subnet/output b/testing/btest/Baseline/bifs.check_subnet/output
new file mode 100644
index 0000000000..d2f111f555
--- /dev/null
+++ b/testing/btest/Baseline/bifs.check_subnet/output
@@ -0,0 +1,8 @@
+in says: 10.2.0.2/32 is member
+check_subnet says: 10.2.0.2/32 is no member
+in says: 10.2.0.2/31 is member
+check_subnet says: 10.2.0.2/31 is member
+in says: 10.0.0.0/9 is member
+check_subnet says: 10.0.0.0/9 is no member
+in says: 10.0.0.0/8 is member
+check_subnet says: 10.0.0.0/8 is member
diff --git a/testing/btest/Baseline/bifs.decode_base64/out b/testing/btest/Baseline/bifs.decode_base64/out
index af0d32fbb8..aa265d2148 100644
--- a/testing/btest/Baseline/bifs.decode_base64/out
+++ b/testing/btest/Baseline/bifs.decode_base64/out
@@ -4,3 +4,11 @@ bro
bro
bro
bro
+bro
+bro
+bro
+bro
+bro
+bro
+bro
+bro
diff --git a/testing/btest/Baseline/bifs.decode_base64_conn/weird.log b/testing/btest/Baseline/bifs.decode_base64_conn/weird.log
new file mode 100644
index 0000000000..e263a05ccc
--- /dev/null
+++ b/testing/btest/Baseline/bifs.decode_base64_conn/weird.log
@@ -0,0 +1,12 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path weird
+#open 2015-08-31-03-09-20
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
+#types time string addr port addr port string string bool string
+1254722767.875996 CjhGID4nQcgTWjvg4c 10.10.1.4 1470 74.53.140.153 25 base64_illegal_encoding incomplete base64 group, padding with 12 bits of 0 F bro
+1437831787.861602 CPbrpk1qSsw6ESzHV4 192.168.133.100 49648 192.168.133.102 25 base64_illegal_encoding incomplete base64 group, padding with 12 bits of 0 F bro
+1437831799.610433 C7XEbhP654jzLoe3a 192.168.133.100 49655 17.167.150.73 443 base64_illegal_encoding incomplete base64 group, padding with 12 bits of 0 F bro
+#close 2015-08-31-03-09-20
diff --git a/testing/btest/Baseline/bifs.encode_base64/out b/testing/btest/Baseline/bifs.encode_base64/out
index 84c2c98264..3008115853 100644
--- a/testing/btest/Baseline/bifs.encode_base64/out
+++ b/testing/btest/Baseline/bifs.encode_base64/out
@@ -1,5 +1,9 @@
YnJv
YnJv
+YnJv
+}n-v
+YnJv
+YnJv
}n-v
cGFkZGluZw==
cGFkZGluZzE=
diff --git a/testing/btest/Baseline/bifs.filter_subnet_table/output b/testing/btest/Baseline/bifs.filter_subnet_table/output
new file mode 100644
index 0000000000..d86ca621a5
--- /dev/null
+++ b/testing/btest/Baseline/bifs.filter_subnet_table/output
@@ -0,0 +1,20 @@
+{
+10.0.0.0/8,
+10.2.0.2/31,
+10.2.0.0/16
+}
+{
+[10.0.0.0/8] = a,
+[10.2.0.2/31] = c,
+[10.2.0.0/16] = b
+}
+{
+[10.0.0.0/8] = a,
+[10.3.0.0/16] = e
+}
+{
+
+}
+{
+
+}
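
For reference, a minimal sketch of filter_subnet_table, assuming the signature filter_subnet_table(search: subnet, t: any): any; it returns a copy of the set/table restricted to the entries whose subnets contain the search value:

    event bro_init()
        {
        local t: table[subnet] of string = {
            [10.0.0.0/8] = "a",
            [10.2.0.0/16] = "b",
            [10.2.0.2/31] = "c",
            [10.3.0.0/16] = "e"
        };

        # Keeps only the entries containing 10.2.0.2, i.e. "a", "b" and "c".
        print filter_subnet_table(10.2.0.2/32, t);
        }
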
diff --git a/testing/btest/Baseline/bifs.get_current_packet_header/output b/testing/btest/Baseline/bifs.get_current_packet_header/output
new file mode 100644
index 0000000000..761a248077
--- /dev/null
+++ b/testing/btest/Baseline/bifs.get_current_packet_header/output
@@ -0,0 +1 @@
+[l2=[encap=LINK_ETHERNET, len=78, cap_len=78, src=00:00:00:00:00:00, dst=ff:ff:ff:ff:ff:ff, vlan=, inner_vlan=, eth_type=34525, proto=L3_IPV6], ip=, ip6=[class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::dead, dst=fe80::beef, exts=[]], tcp=, udp=, icmp=[icmp_type=135]]
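
The record above is what the new BiF yields for an ICMPv6 neighbor-solicitation packet; a minimal sketch, assuming get_current_packet_header(): raw_pkt_hdr returns the layer-2/3/4 headers of the packet currently being processed, with unused layers left unset:

    event new_connection(c: connection)
        {
        # Header of the packet that triggered this event.
        local hdr: raw_pkt_hdr = get_current_packet_header();
        print hdr;
        }
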
diff --git a/testing/btest/Baseline/bifs.join_string/out b/testing/btest/Baseline/bifs.join_string/out
index f1640a57ee..e916fc304a 100644
--- a/testing/btest/Baseline/bifs.join_string/out
+++ b/testing/btest/Baseline/bifs.join_string/out
@@ -4,3 +4,4 @@ mytest
this__is__another__test
thisisanothertest
Test
+...hi..there
diff --git a/testing/btest/Baseline/bifs.matching_subnets/output b/testing/btest/Baseline/bifs.matching_subnets/output
new file mode 100644
index 0000000000..e051d89b79
--- /dev/null
+++ b/testing/btest/Baseline/bifs.matching_subnets/output
@@ -0,0 +1,18 @@
+{
+10.0.0.0/8,
+10.3.0.0/16,
+10.2.0.2/31,
+2607:f8b0:4007:807::/64,
+10.2.0.0/16,
+5.2.0.0/32,
+5.5.0.0/25,
+10.1.0.0/16,
+5.0.0.0/8,
+2607:f8b0:4007:807::200e/128,
+7.2.0.0/32,
+2607:f8b0:4008:807::/64
+}
+[10.2.0.2/31, 10.2.0.0/16, 10.0.0.0/8]
+[2607:f8b0:4007:807::200e/128, 2607:f8b0:4007:807::/64]
+[]
+[10.0.0.0/8]
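
A minimal sketch of matching_subnets, assuming the signature matching_subnets(search: subnet, t: any): vector of subnet; it returns every member subnet that contains the search value (most specific first in this baseline):

    event bro_init()
        {
        local nets: set[subnet] = set(10.0.0.0/8, 10.1.0.0/16, 10.2.0.0/16,
                                      10.2.0.2/31, 10.3.0.0/16);

        # All members containing 10.2.0.2: 10.2.0.2/31, 10.2.0.0/16, 10.0.0.0/8.
        print matching_subnets(10.2.0.2/32, nets);
        # No member contains this address, so the result is empty.
        print matching_subnets(192.168.0.1/32, nets);
        }
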
diff --git a/testing/btest/Baseline/bifs.subnet_to_addr/error b/testing/btest/Baseline/bifs.subnet_to_addr/error
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/testing/btest/Baseline/bifs.subnet_to_addr/output b/testing/btest/Baseline/bifs.subnet_to_addr/output
new file mode 100644
index 0000000000..8c89b3920a
--- /dev/null
+++ b/testing/btest/Baseline/bifs.subnet_to_addr/output
@@ -0,0 +1,3 @@
+subnet_to_addr(0.0.0.0/32) = 0.0.0.0 (SUCCESS)
+subnet_to_addr(1.2.0.0/16) = 1.2.0.0 (SUCCESS)
+subnet_to_addr(2607:f8b0:4005:803::200e/128) = 2607:f8b0:4005:803::200e (SUCCESS)
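
A minimal sketch of subnet_to_addr, assuming subnet_to_addr(sn: subnet): addr simply drops the prefix length and returns the network address:

    event bro_init()
        {
        print subnet_to_addr(1.2.0.0/16);                     # 1.2.0.0
        print subnet_to_addr(2607:f8b0:4005:803::200e/128);   # 2607:f8b0:4005:803::200e
        }
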
diff --git a/testing/btest/Baseline/bifs.subnet_version/out b/testing/btest/Baseline/bifs.subnet_version/out
new file mode 100644
index 0000000000..328bff6687
--- /dev/null
+++ b/testing/btest/Baseline/bifs.subnet_version/out
@@ -0,0 +1,4 @@
+T
+F
+F
+T
diff --git a/testing/btest/Baseline/broker.clone_store/clone.clone.out b/testing/btest/Baseline/broker.clone_store/clone.clone.out
index 570f3f25ca..3db1dd4e00 100644
--- a/testing/btest/Baseline/broker.clone_store/clone.clone.out
+++ b/testing/btest/Baseline/broker.clone_store/clone.clone.out
@@ -1,5 +1,5 @@
-clone keys, [status=BrokerStore::SUCCESS, result=[d=broker::data{[one, two, myset, myvec]}]]
-lookup, one, [status=BrokerStore::SUCCESS, result=[d=broker::data{111}]]
-lookup, myset, [status=BrokerStore::SUCCESS, result=[d=broker::data{{a, c, d}}]]
-lookup, two, [status=BrokerStore::SUCCESS, result=[d=broker::data{222}]]
-lookup, myvec, [status=BrokerStore::SUCCESS, result=[d=broker::data{[delta, alpha, beta, gamma, omega]}]]
+clone keys, [status=Broker::SUCCESS, result=[d=broker::data{[one, two, myset, myvec]}]]
+lookup, two, [status=Broker::SUCCESS, result=[d=broker::data{222}]]
+lookup, one, [status=Broker::SUCCESS, result=[d=broker::data{111}]]
+lookup, myvec, [status=Broker::SUCCESS, result=[d=broker::data{[delta, alpha, beta, gamma, omega]}]]
+lookup, myset, [status=Broker::SUCCESS, result=[d=broker::data{{a, c, d}}]]
diff --git a/testing/btest/Baseline/broker.connection_updates/recv.recv.out b/testing/btest/Baseline/broker.connection_updates/recv.recv.out
index 714cbfbac4..d246bf153f 100644
--- a/testing/btest/Baseline/broker.connection_updates/recv.recv.out
+++ b/testing/btest/Baseline/broker.connection_updates/recv.recv.out
@@ -1,2 +1,2 @@
-BrokerComm::incoming_connection_established, connector
-BrokerComm::incoming_connection_broken, connector
+Broker::incoming_connection_established, connector
+Broker::incoming_connection_broken, connector
diff --git a/testing/btest/Baseline/broker.connection_updates/send.send.out b/testing/btest/Baseline/broker.connection_updates/send.send.out
index 61c988d1c8..205782c8f0 100644
--- a/testing/btest/Baseline/broker.connection_updates/send.send.out
+++ b/testing/btest/Baseline/broker.connection_updates/send.send.out
@@ -1 +1 @@
-BrokerComm::outgoing_connection_established, 127.0.0.1, 9999/tcp, listener
+Broker::outgoing_connection_established, 127.0.0.1, 9999/tcp, listener
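
The renamed connection events keep their previous arguments; a minimal sketch of handlers under the new Broker namespace, assuming Broker::outgoing_connection_established(peer_address: string, peer_port: port, peer_name: string) and Broker::incoming_connection_established(peer_name: string):

    event Broker::outgoing_connection_established(peer_address: string,
                                                  peer_port: port,
                                                  peer_name: string)
        {
        print "Broker::outgoing_connection_established", peer_address, peer_port, peer_name;
        }

    event Broker::incoming_connection_established(peer_name: string)
        {
        print "Broker::incoming_connection_established", peer_name;
        }
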
diff --git a/testing/btest/Baseline/broker.data/out b/testing/btest/Baseline/broker.data/out
index 628870144a..281eb9b316 100644
--- a/testing/btest/Baseline/broker.data/out
+++ b/testing/btest/Baseline/broker.data/out
@@ -1,18 +1,18 @@
-BrokerComm::BOOL
-BrokerComm::INT
-BrokerComm::COUNT
-BrokerComm::DOUBLE
-BrokerComm::STRING
-BrokerComm::ADDR
-BrokerComm::SUBNET
-BrokerComm::PORT
-BrokerComm::TIME
-BrokerComm::INTERVAL
-BrokerComm::ENUM
-BrokerComm::SET
-BrokerComm::TABLE
-BrokerComm::VECTOR
-BrokerComm::RECORD
+Broker::BOOL
+Broker::INT
+Broker::COUNT
+Broker::DOUBLE
+Broker::STRING
+Broker::ADDR
+Broker::SUBNET
+Broker::PORT
+Broker::TIME
+Broker::INTERVAL
+Broker::ENUM
+Broker::SET
+Broker::TABLE
+Broker::VECTOR
+Broker::RECORD
***************************
T
F
@@ -29,7 +29,7 @@ hello
22/tcp
42.0
180.0
-BrokerComm::BOOL
+Broker::BOOL
***************************
{
two,
diff --git a/testing/btest/Baseline/broker.master_store/master.out b/testing/btest/Baseline/broker.master_store/master.out
index 4208503151..1983d0bccc 100644
--- a/testing/btest/Baseline/broker.master_store/master.out
+++ b/testing/btest/Baseline/broker.master_store/master.out
@@ -1,14 +1,14 @@
-lookup(two): [status=BrokerStore::SUCCESS, result=[d=broker::data{222}]]
-lookup(four): [status=BrokerStore::SUCCESS, result=[d=