From 6d031c41f14353c7e5e7819de6392fb3a41649eb Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Tue, 4 Aug 2015 22:00:54 -0500 Subject: [PATCH 01/34] Significant improvements to the GeoLocation doc Updated the install section for FreeBSD and OS X. Added a section to explain how to quickly test that everything is setup correctly. Improved the usage section by removing the misleading record definition (a link to the reference doc is provided), and explaining that some fields will be uninitialized. Corrected the example so that it doesn't try to access uninitialized fields. --- doc/frameworks/geoip.rst | 91 ++++++++++++++++++++++++---------------- 1 file changed, 56 insertions(+), 35 deletions(-) diff --git a/doc/frameworks/geoip.rst b/doc/frameworks/geoip.rst index 98252d7184..d756f97589 100644 --- a/doc/frameworks/geoip.rst +++ b/doc/frameworks/geoip.rst @@ -20,11 +20,13 @@ GeoLocation Install libGeoIP ---------------- +Before building Bro, you need to install libGeoIP. + * FreeBSD: .. console:: - sudo pkg_add -r GeoIP + sudo pkg install GeoIP * RPM/RedHat-based Linux: @@ -40,80 +42,99 @@ Install libGeoIP * Mac OS X: - Vanilla OS X installations don't ship with libGeoIP, but if - installed from your preferred package management system (e.g. - MacPorts, Fink, or Homebrew), they should be automatically detected - and Bro will compile against them. + You need to install from your preferred package management system + (e.g. MacPorts, Fink, or Homebrew). The name of the package that you need + may be libgeoip, geoip, or geoip-dev, depending on which package management + system you are using. GeoIPLite Database Installation ------------------------------------- +------------------------------- A country database for GeoIPLite is included when you do the C API install, but for Bro, we are using the city database which includes cities and regions in addition to countries. `Download `__ the GeoLite city -binary database. +binary database: - .. console:: +.. console:: wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz gunzip GeoLiteCity.dat.gz -Next, the file needs to be put in the database directory. This directory -should already exist and will vary depending on which platform and package -you are using. For FreeBSD, use ``/usr/local/share/GeoIP``. For Linux, -use ``/usr/share/GeoIP`` or ``/var/lib/GeoIP`` (choose whichever one +Next, the file needs to be renamed and put in the GeoIP database directory. +This directory should already exist and will vary depending on which platform +and package you are using. For FreeBSD, use ``/usr/local/share/GeoIP``. For +Linux, use ``/usr/share/GeoIP`` or ``/var/lib/GeoIP`` (choose whichever one already exists). - .. console:: +.. console:: mv GeoLiteCity.dat /GeoIPCity.dat +Note that there is a separate database for IPv6 addresses, which can also +be installed if you want GeoIP functionality for IPv6. + +Testing +------- + +Before using the GeoIP functionality, it is a good idea to verify that +everything is setup correctly. After installing libGeoIP and the GeoIP city +database, and building Bro, you can quickly check if the GeoIP functionality +works by running a command like this: + +.. console:: + + bro -e "print lookup_location(8.8.8.8);" + +If you see an error message similar to "Failed to open GeoIP City database", +then you may need to either rename or move your GeoIP city database file (the +error message should give you the full pathname of the database file that +Bro is looking for). 
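A slightly longer check can also confirm that individual fields of the
returned record get populated. The following is only a rough sketch (8.8.8.8
is just an arbitrary public address, and the field test mirrors the usage
notes further below):

.. code:: bro

    event bro_init()
        {
        local loc = lookup_location(8.8.8.8);

        if ( loc?$country_code )
            print fmt("GeoIP lookup works, country code: %s", loc$country_code);
        else
            print "lookup_location returned no country code; check the database";
        }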
+ +If you see an error message similar to "Bro was not configured for GeoIP +support", then you need to rebuild Bro and make sure it is linked against +libGeoIP. Normally, if libGeoIP is installed correctly then it should +automatically be found when building Bro. If this doesn't happen, then +you may need to specify the path to the libGeoIP installation +(e.g. ``./configure --with-geoip=``). Usage ----- -There is a single built in function that provides the GeoIP -functionality: +There is a built-in function that provides the GeoIP functionality: .. code:: bro function lookup_location(a:addr): geo_location -There is also the :bro:see:`geo_location` data structure that is returned -from the :bro:see:`lookup_location` function: - -.. code:: bro - - type geo_location: record { - country_code: string; - region: string; - city: string; - latitude: double; - longitude: double; - }; - +The return value of the :bro:see:`lookup_location` function is a record +type called :bro:see:`geo_location`, and it consists of several fields +containing the country, region, city, latitude, and longitude of the specified +IP address. Since one or more fields in this record will be uninitialized +for some IP addresses (for example, the country and region of an IP address +might be known, but the city could be unknown), a field should be checked +if it has a value before trying to access the value. Example ------- -To write a line in a log file for every ftp connection from hosts in -Ohio, this is now very easy: +To show every ftp connection from hosts in Ohio, this is now very easy: .. code:: bro - global ftp_location_log: file = open_log_file("ftp-location"); - event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) { local client = c$id$orig_h; local loc = lookup_location(client); - if (loc$region == "OH" && loc$country_code == "US") + + if (loc?$region && loc$region == "OH" && loc$country_code == "US") { - print ftp_location_log, fmt("FTP Connection from:%s (%s,%s,%s)", client, loc$city, loc$region, loc$country_code); + local city = loc?$city ? loc$city : ""; + + print fmt("FTP Connection from:%s (%s,%s,%s)", client, city, + loc$region, loc$country_code); } } - From 7b6ab180b69914953bd722f4a5950f23fb5a00f7 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Mon, 17 Aug 2015 14:58:22 -0500 Subject: [PATCH 02/34] Fix typo in documentation of a field in connection record --- scripts/base/init-bare.bro | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/base/init-bare.bro b/scripts/base/init-bare.bro index 40f518b682..ade1169091 100644 --- a/scripts/base/init-bare.bro +++ b/scripts/base/init-bare.bro @@ -349,7 +349,7 @@ type connection: record { ## The outer VLAN, if applicable, for this connection. vlan: int &optional; - ## The VLAN vlan, if applicable, for this connection. + ## The inner VLAN, if applicable, for this connection. inner_vlan: int &optional; }; From c6dec18e2b913f259fd6bae8efef61913ff4a501 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Mon, 17 Aug 2015 16:24:02 -0500 Subject: [PATCH 03/34] Improve documentation of table and set types Add a list of the types that are not allowed to be the index type of a table or set. 
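For example, indexing by simple value types (or tuples of them) is fine,
while the listed types are rejected as index types; a quick illustrative
sketch:

.. code:: bro

    # Allowed: simple value types, alone or in combination, as the index.
    global conns_seen: table[addr, port] of count;
    global trusted: set[subnet];

    # Not allowed: any of the listed types as an index, e.g. a pattern.
    # global bad_idx: set[pattern];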
--- doc/script-reference/types.rst | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/doc/script-reference/types.rst b/doc/script-reference/types.rst index cc601db75f..847e0f8fab 100644 --- a/doc/script-reference/types.rst +++ b/doc/script-reference/types.rst @@ -340,15 +340,18 @@ Here is a more detailed description of each type: table [ type^+ ] of type - where *type^+* is one or more types, separated by commas. - For example: + where *type^+* is one or more types, separated by commas. The + index type cannot be any of the following types: pattern, table, set, + vector, file, opaque, any. + + Here is an example of declaring a table indexed by "count" values + and yielding "string" values: .. code:: bro global a: table[count] of string; - declares a table indexed by "count" values and yielding - "string" values. The yield type can also be more complex: + The yield type can also be more complex: .. code:: bro @@ -441,7 +444,9 @@ Here is a more detailed description of each type: set [ type^+ ] - where *type^+* is one or more types separated by commas. + where *type^+* is one or more types separated by commas. The + index type cannot be any of the following types: pattern, table, set, + vector, file, opaque, any. Sets can be initialized by listing elements enclosed by curly braces: From f56b3ebd93856af6c4f1d25c3c9fa95aeab126f4 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Tue, 18 Aug 2015 14:23:48 -0500 Subject: [PATCH 04/34] Fix some doc build warnings --- doc/components/bro-plugins/pf_ring/README.rst | 1 + doc/components/bro-plugins/redis/README.rst | 1 + doc/devel/plugins.rst | 6 +++--- 3 files changed, 5 insertions(+), 3 deletions(-) create mode 120000 doc/components/bro-plugins/pf_ring/README.rst create mode 120000 doc/components/bro-plugins/redis/README.rst diff --git a/doc/components/bro-plugins/pf_ring/README.rst b/doc/components/bro-plugins/pf_ring/README.rst new file mode 120000 index 0000000000..5ea666e8c9 --- /dev/null +++ b/doc/components/bro-plugins/pf_ring/README.rst @@ -0,0 +1 @@ +../../../../aux/plugins/pf_ring/README \ No newline at end of file diff --git a/doc/components/bro-plugins/redis/README.rst b/doc/components/bro-plugins/redis/README.rst new file mode 120000 index 0000000000..c42051828e --- /dev/null +++ b/doc/components/bro-plugins/redis/README.rst @@ -0,0 +1 @@ +../../../../aux/plugins/redis/README \ No newline at end of file diff --git a/doc/devel/plugins.rst b/doc/devel/plugins.rst index 0ed22a0cb9..dc1c9a3cd4 100644 --- a/doc/devel/plugins.rst +++ b/doc/devel/plugins.rst @@ -286,9 +286,9 @@ Activating a plugin will: 1. Load the dynamic module 2. Make any bif items available 3. Add the ``scripts/`` directory to ``BROPATH`` - 5. Load ``scripts/__preload__.bro`` - 6. Make BiF elements available to scripts. - 7. Load ``scripts/__load__.bro`` + 4. Load ``scripts/__preload__.bro`` + 5. Make BiF elements available to scripts. + 6. Load ``scripts/__load__.bro`` By default, Bro will automatically activate all dynamic plugins found in its search path ``BRO_PLUGIN_PATH``. 
However, in bare mode (``bro From 92c5885f06f8e8ee88aa335671bfe089a3bd0e26 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Tue, 18 Aug 2015 15:50:58 -0500 Subject: [PATCH 05/34] Remove unnecessary blank lines from some broker doc files --- doc/frameworks/broker/connecting-connector.bro | 1 - doc/frameworks/broker/connecting-listener.bro | 1 - doc/frameworks/broker/events-listener.bro | 1 - doc/frameworks/broker/printing-listener.bro | 1 - doc/frameworks/broker/testlog.bro | 1 - .../output | 1 - .../output | 1 - .../output | 1 - .../output | 1 - .../doc.sphinx.include-doc_frameworks_broker_testlog_bro/output | 1 - .../include-doc_frameworks_broker_connecting-connector_bro.btest | 1 - .../include-doc_frameworks_broker_connecting-listener_bro.btest | 1 - .../include-doc_frameworks_broker_events-listener_bro.btest | 1 - .../include-doc_frameworks_broker_printing-listener_bro.btest | 1 - .../doc/sphinx/include-doc_frameworks_broker_testlog_bro.btest | 1 - 15 files changed, 15 deletions(-) diff --git a/doc/frameworks/broker/connecting-connector.bro b/doc/frameworks/broker/connecting-connector.bro index a7e621e4a6..cd5c74add8 100644 --- a/doc/frameworks/broker/connecting-connector.bro +++ b/doc/frameworks/broker/connecting-connector.bro @@ -1,4 +1,3 @@ - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "connector"; diff --git a/doc/frameworks/broker/connecting-listener.bro b/doc/frameworks/broker/connecting-listener.bro index c37af3ae4d..21c67f9696 100644 --- a/doc/frameworks/broker/connecting-listener.bro +++ b/doc/frameworks/broker/connecting-listener.bro @@ -1,4 +1,3 @@ - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/doc/frameworks/broker/events-listener.bro b/doc/frameworks/broker/events-listener.bro index aa6ea9ee4e..dc18795903 100644 --- a/doc/frameworks/broker/events-listener.bro +++ b/doc/frameworks/broker/events-listener.bro @@ -1,4 +1,3 @@ - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/doc/frameworks/broker/printing-listener.bro b/doc/frameworks/broker/printing-listener.bro index 080d09e8f5..f55c5b9bad 100644 --- a/doc/frameworks/broker/printing-listener.bro +++ b/doc/frameworks/broker/printing-listener.bro @@ -1,4 +1,3 @@ - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/doc/frameworks/broker/testlog.bro b/doc/frameworks/broker/testlog.bro index f63c19ac48..506d359bb7 100644 --- a/doc/frameworks/broker/testlog.bro +++ b/doc/frameworks/broker/testlog.bro @@ -1,4 +1,3 @@ - module Test; export { diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-connector_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-connector_bro/output index 0953d88a3e..042b8999f3 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-connector_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-connector_bro/output @@ -2,7 +2,6 @@ connecting-connector.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "connector"; diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-listener_bro/output 
b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-listener_bro/output index 2879beb396..33e3df2330 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-listener_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_connecting-listener_bro/output @@ -2,7 +2,6 @@ connecting-listener.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_events-listener_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_events-listener_bro/output index 59e697601b..9f004692cb 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_events-listener_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_events-listener_bro/output @@ -2,7 +2,6 @@ events-listener.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_printing-listener_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_printing-listener_bro/output index 9cb48a0528..fb416612ab 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_printing-listener_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_printing-listener_bro/output @@ -2,7 +2,6 @@ printing-listener.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_testlog_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_testlog_bro/output index da2261ebc4..c87fc3cd6f 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_testlog_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_frameworks_broker_testlog_bro/output @@ -2,7 +2,6 @@ testlog.bro - module Test; export { diff --git a/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-connector_bro.btest b/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-connector_bro.btest index 0953d88a3e..042b8999f3 100644 --- a/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-connector_bro.btest +++ b/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-connector_bro.btest @@ -2,7 +2,6 @@ connecting-connector.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "connector"; diff --git a/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-listener_bro.btest b/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-listener_bro.btest index 2879beb396..33e3df2330 100644 --- a/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-listener_bro.btest +++ b/testing/btest/doc/sphinx/include-doc_frameworks_broker_connecting-listener_bro.btest @@ -2,7 +2,6 @@ connecting-listener.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/testing/btest/doc/sphinx/include-doc_frameworks_broker_events-listener_bro.btest b/testing/btest/doc/sphinx/include-doc_frameworks_broker_events-listener_bro.btest index 59e697601b..9f004692cb 100644 --- 
a/testing/btest/doc/sphinx/include-doc_frameworks_broker_events-listener_bro.btest +++ b/testing/btest/doc/sphinx/include-doc_frameworks_broker_events-listener_bro.btest @@ -2,7 +2,6 @@ events-listener.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/testing/btest/doc/sphinx/include-doc_frameworks_broker_printing-listener_bro.btest b/testing/btest/doc/sphinx/include-doc_frameworks_broker_printing-listener_bro.btest index 9cb48a0528..fb416612ab 100644 --- a/testing/btest/doc/sphinx/include-doc_frameworks_broker_printing-listener_bro.btest +++ b/testing/btest/doc/sphinx/include-doc_frameworks_broker_printing-listener_bro.btest @@ -2,7 +2,6 @@ printing-listener.bro - const broker_port: port = 9999/tcp &redef; redef exit_only_after_terminate = T; redef BrokerComm::endpoint_name = "listener"; diff --git a/testing/btest/doc/sphinx/include-doc_frameworks_broker_testlog_bro.btest b/testing/btest/doc/sphinx/include-doc_frameworks_broker_testlog_bro.btest index da2261ebc4..c87fc3cd6f 100644 --- a/testing/btest/doc/sphinx/include-doc_frameworks_broker_testlog_bro.btest +++ b/testing/btest/doc/sphinx/include-doc_frameworks_broker_testlog_bro.btest @@ -2,7 +2,6 @@ testlog.bro - module Test; export { From 7ce0cefcba939008b9f3fb038bcbcc2322119243 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Wed, 19 Aug 2015 13:28:35 -0500 Subject: [PATCH 06/34] Minor clarifications and typo fixes in broker doc --- doc/frameworks/broker.rst | 72 +++++++++++++++++++-------------------- doc/install/install.rst | 4 +-- 2 files changed, 37 insertions(+), 39 deletions(-) diff --git a/doc/frameworks/broker.rst b/doc/frameworks/broker.rst index 3cd8dab6e3..8c5ed24e25 100644 --- a/doc/frameworks/broker.rst +++ b/doc/frameworks/broker.rst @@ -9,10 +9,7 @@ Broker-Enabled Communication Framework Bro can now use the `Broker Library <../components/broker/README.html>`_ to exchange information with - other Bro processes. To enable it run Bro's ``configure`` script - with the ``--enable-broker`` option. Note that a C++11 compatible - compiler (e.g. GCC 4.8+ or Clang 3.3+) is required as well as the - `C++ Actor Framework `_. + other Bro processes. .. contents:: @@ -23,26 +20,26 @@ Communication via Broker must first be turned on via :bro:see:`BrokerComm::enable`. Bro can accept incoming connections by calling :bro:see:`BrokerComm::listen` -and then monitor connection status updates via +and then monitor connection status updates via the :bro:see:`BrokerComm::incoming_connection_established` and -:bro:see:`BrokerComm::incoming_connection_broken`. +:bro:see:`BrokerComm::incoming_connection_broken` events. .. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-listener.bro Bro can initiate outgoing connections by calling :bro:see:`BrokerComm::connect` -and then monitor connection status updates via +and then monitor connection status updates via the :bro:see:`BrokerComm::outgoing_connection_established`, :bro:see:`BrokerComm::outgoing_connection_broken`, and -:bro:see:`BrokerComm::outgoing_connection_incompatible`. +:bro:see:`BrokerComm::outgoing_connection_incompatible` events. .. 
btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-connector.bro Remote Printing =============== -To receive remote print messages, first use -:bro:see:`BrokerComm::subscribe_to_prints` to advertise to peers a topic -prefix of interest and then create an event handler for +To receive remote print messages, first use the +:bro:see:`BrokerComm::subscribe_to_prints` function to advertise to peers a +topic prefix of interest and then create an event handler for :bro:see:`BrokerComm::print_handler` to handle any print messages that are received. @@ -71,17 +68,17 @@ the Broker message format is simply: Remote Events ============= -Receiving remote events is similar to remote prints. Just use -:bro:see:`BrokerComm::subscribe_to_events` and possibly define any new events -along with handlers that peers may want to send. +Receiving remote events is similar to remote prints. Just use the +:bro:see:`BrokerComm::subscribe_to_events` function and possibly define any +new events along with handlers that peers may want to send. .. btest-include:: ${DOC_ROOT}/frameworks/broker/events-listener.bro -To send events, there are two choices. The first is to use call -:bro:see:`BrokerComm::event` directly. The second option is to use -:bro:see:`BrokerComm::auto_event` to make it so a particular event is -automatically sent to peers whenever it is called locally via the normal -event invocation syntax. +There are two different ways to send events. The first is to call the +:bro:see:`BrokerComm::event` function directly. The second option is to call +the :bro:see:`BrokerComm::auto_event` function where you specify a +particular event that will be automatically sent to peers whenever the +event is called locally via the normal event invocation syntax. .. btest-include:: ${DOC_ROOT}/frameworks/broker/events-connector.bro @@ -98,7 +95,7 @@ the Broker message format is: broker::message{std::string{}, ...}; The first parameter is the name of the event and the remaining ``...`` -are its arguments, which are any of the support Broker data types as +are its arguments, which are any of the supported Broker data types as they correspond to the Bro types for the event named in the first parameter of the message. @@ -107,23 +104,23 @@ Remote Logging .. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro -Use :bro:see:`BrokerComm::subscribe_to_logs` to advertise interest in logs -written by peers. The topic names that Bro uses are implicitly of the +Use the :bro:see:`BrokerComm::subscribe_to_logs` function to advertise interest +in logs written by peers. The topic names that Bro uses are implicitly of the form "bro/log/". .. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-listener.bro -To send remote logs either use :bro:see:`Log::enable_remote_logging` or -:bro:see:`BrokerComm::enable_remote_logs`. The former allows any log stream -to be sent to peers while the later toggles remote logging for -particular streams. +To send remote logs either redef :bro:see:`Log::enable_remote_logging` or +use the :bro:see:`BrokerComm::enable_remote_logs` function. The former +allows any log stream to be sent to peers while the latter enables remote +logging for particular streams. .. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-connector.bro Message Format -------------- -For other applications that want to exchange logs messages with Bro, +For other applications that want to exchange log messages with Bro, the Broker message format is: .. 
code:: c++ @@ -132,7 +129,7 @@ the Broker message format is: The enum value corresponds to the stream's :bro:see:`Log::ID` value, and the record corresponds to a single entry of that log's columns record, -in this case a ``Test::INFO`` value. +in this case a ``Test::Info`` value. Tuning Access Control ===================== @@ -152,11 +149,12 @@ that take a :bro:see:`BrokerComm::SendFlags` such as :bro:see:`BrokerComm::print :bro:see:`BrokerComm::enable_remote_logs`. If not using the ``auto_advertise`` flag, one can use the -:bro:see:`BrokerComm::advertise_topic` and :bro:see:`BrokerComm::unadvertise_topic` -to manupulate the set of topic prefixes that are allowed to be -advertised to peers. If an endpoint does not advertise a topic prefix, -the only way a peers can send messages to it is via the ``unsolicited`` -flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching +:bro:see:`BrokerComm::advertise_topic` and +:bro:see:`BrokerComm::unadvertise_topic` functions +to manipulate the set of topic prefixes that are allowed to be +advertised to peers. If an endpoint does not advertise a topic prefix, then +the only way peers can send messages to it is via the ``unsolicited`` +flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching prefix (i.e. full topic may be longer than receivers prefix, just the prefix needs to match). @@ -172,7 +170,7 @@ specific type of frontend, but a standalone frontend can also exist to e.g. query and modify the contents of a remote master store without actually "owning" any of the contents itself. -A master data store can be be cloned from remote peers which may then +A master data store can be cloned from remote peers which may then perform lightweight, local queries against the clone, which automatically stays synchronized with the master store. Clones cannot modify their content directly, instead they send modifications to the @@ -181,7 +179,7 @@ all clones. Master and clone stores get to choose what type of storage backend to use. E.g. In-memory versus SQLite for persistence. Note that if clones -are used, data store sizes should still be able to fit within memory +are used, then data store sizes must be able to fit within memory regardless of the storage backend as a single snapshot of the master store is sent in a single chunk to initialize the clone. @@ -198,5 +196,5 @@ needed, just replace the :bro:see:`BrokerStore::create_clone` call with :bro:see:`BrokerStore::create_frontend`. Queries will then be made against the remote master store instead of the local clone. -Note that all queries are made within Bro's asynchrounous ``when`` -statements and must specify a timeout block. +Note that all data store queries must be made within Bro's asynchronous +``when`` statements and must specify a timeout block. 
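The ``when``/``timeout`` construct itself looks like the sketch below. It is
shown here with the unrelated ``lookup_hostname`` built-in function purely to
illustrate the asynchronous pattern that data store queries follow; see the
BrokerStore API documentation for the actual query functions:

.. code:: bro

    event bro_init()
        {
        when ( local addrs = lookup_hostname("www.example.com") )
            {
            print addrs;
            }
        timeout 5sec
            {
            print "lookup timed out";
            }
        }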
diff --git a/doc/install/install.rst b/doc/install/install.rst index ff8d83ad97..10fdfeefaf 100644 --- a/doc/install/install.rst +++ b/doc/install/install.rst @@ -32,13 +32,13 @@ before you begin: * Libz * Bash (for BroControl) * Python (for BroControl) - * C++ Actor Framework (CAF) (http://actor-framework.org) + * C++ Actor Framework (CAF) version 0.14 (http://actor-framework.org) To build Bro from source, the following additional dependencies are required: * CMake 2.8 or greater (http://www.cmake.org) * Make - * C/C++ compiler with C++11 support + * C/C++ compiler with C++11 support (GCC 4.8+ or Clang 3.3+) * SWIG (http://www.swig.org) * Bison (GNU Parser Generator) * Flex (Fast Lexical Analyzer) From ac9552a0cf2aedbfb678d9927b26c5cd3fb4ed7e Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Thu, 20 Aug 2015 10:45:22 -0500 Subject: [PATCH 07/34] Update documentation of Conn::Info history field --- scripts/base/protocols/conn/main.bro | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/scripts/base/protocols/conn/main.bro b/scripts/base/protocols/conn/main.bro index 7ef204268b..015c5520db 100644 --- a/scripts/base/protocols/conn/main.bro +++ b/scripts/base/protocols/conn/main.bro @@ -87,7 +87,8 @@ export { ## f packet with FIN bit set ## r packet with RST bit set ## c packet with a bad checksum - ## i inconsistent packet (e.g. SYN+RST bits both set) + ## i inconsistent packet (e.g. FIN+RST bits set) + ## q multi-flag packet (SYN+FIN or SYN+RST bits set) ## ====== ==================================================== ## ## If the event comes from the originator, the letter is in From ab8a8d3ef3ac1164cd9056774764490a5866dba5 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Fri, 21 Aug 2015 16:30:51 -0500 Subject: [PATCH 08/34] Split long lines in input framework docs --- doc/frameworks/logging-input-sqlite.rst | 91 ++++++++++--------- scripts/base/frameworks/input/main.bro | 22 +++-- scripts/base/frameworks/input/readers/raw.bro | 6 +- 3 files changed, 66 insertions(+), 53 deletions(-) diff --git a/doc/frameworks/logging-input-sqlite.rst b/doc/frameworks/logging-input-sqlite.rst index 6f5e867686..e0f10308ae 100644 --- a/doc/frameworks/logging-input-sqlite.rst +++ b/doc/frameworks/logging-input-sqlite.rst @@ -23,17 +23,18 @@ In contrast to the ASCII reader and writer, the SQLite plugins have not yet seen extensive use in production environments. While we are not aware of any issues with them, we urge to caution when using them in production environments. There could be lingering issues which only occur -when the plugins are used with high amounts of data or in high-load environments. +when the plugins are used with high amounts of data or in high-load +environments. Logging Data into SQLite Databases ================================== Logging support for SQLite is available in all Bro installations starting with -version 2.2. There is no need to load any additional scripts or for any compile-time -configurations. +version 2.2. There is no need to load any additional scripts or for any +compile-time configurations. -Sending data from existing logging streams to SQLite is rather straightforward. You -have to define a filter which specifies SQLite as the writer. +Sending data from existing logging streams to SQLite is rather straightforward. +You have to define a filter which specifies SQLite as the writer. 
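In outline, such a filter is just a standard ``Log::Filter`` record that
selects the SQLite writer; a rough sketch (the path matches the
``/var/db/conn.sqlite`` file mentioned below) might look like:

.. code:: bro

    event bro_init()
        {
        local f: Log::Filter = [$name="sqlite", $path="/var/db/conn",
                                $writer=Log::WRITER_SQLITE];
        Log::add_filter(Conn::LOG, f);
        }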
The following example code adds SQLite as a filter for the connection log: @@ -44,15 +45,15 @@ The following example code adds SQLite as a filter for the connection log: # Make sure this parses correctly at least. @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-conn-filter.bro -Bro will create the database file ``/var/db/conn.sqlite``, if it does not already exist. -It will also create a table with the name ``conn`` (if it does not exist) and start -appending connection information to the table. +Bro will create the database file ``/var/db/conn.sqlite``, if it does not +already exist. It will also create a table with the name ``conn`` (if it +does not exist) and start appending connection information to the table. -At the moment, SQLite databases are not rotated the same way ASCII log-files are. You -have to take care to create them in an adequate location. +At the moment, SQLite databases are not rotated the same way ASCII log-files +are. You have to take care to create them in an adequate location. -If you examine the resulting SQLite database, the schema will contain the same fields -that are present in the ASCII log files:: +If you examine the resulting SQLite database, the schema will contain the +same fields that are present in the ASCII log files:: # sqlite3 /var/db/conn.sqlite @@ -75,27 +76,31 @@ from being created, you can remove the default filter: Log::remove_filter(Conn::LOG, "default"); -To create a custom SQLite log file, you have to create a new log stream that contains -just the information you want to commit to the database. Please refer to the -:ref:`framework-logging` documentation on how to create custom log streams. +To create a custom SQLite log file, you have to create a new log stream +that contains just the information you want to commit to the database. +Please refer to the :ref:`framework-logging` documentation on how to +create custom log streams. Reading Data from SQLite Databases ================================== -Like logging support, support for reading data from SQLite databases is built into Bro starting -with version 2.2. +Like logging support, support for reading data from SQLite databases is +built into Bro starting with version 2.2. -Just as with the text-based input readers (please refer to the :ref:`framework-input` -documentation for them and for basic information on how to use the input-framework), the SQLite reader -can be used to read data - in this case the result of SQL queries - into tables or into events. +Just as with the text-based input readers (please refer to the +:ref:`framework-input` documentation for them and for basic information +on how to use the input framework), the SQLite reader can be used to +read data - in this case the result of SQL queries - into tables or into +events. Reading Data into Tables ------------------------ -To read data from a SQLite database, we first have to provide Bro with the information, how -the resulting data will be structured. For this example, we expect that we have a SQLite database, -which contains host IP addresses and the user accounts that are allowed to log into a specific -machine. +To read data from a SQLite database, we first have to provide Bro with +the information, how the resulting data will be structured. For this +example, we expect that we have a SQLite database, which contains +host IP addresses and the user accounts that are allowed to log into +a specific machine. 
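On the Bro side, that structure is typically described with record types for
the index and value columns of the query result; a minimal sketch (the type
and variable names here are arbitrary) could be:

.. code:: bro

    type Idx: record {
        host: addr;
    };

    type Val: record {
        users: string;
    };

    global hostlist: table[addr] of Val = table();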
The SQLite commands to create the schema are as follows:: @@ -107,8 +112,8 @@ The SQLite commands to create the schema are as follows:: insert into machines_to_users values ('192.168.17.2', 'bernhard'); insert into machines_to_users values ('192.168.17.3', 'seth,matthias'); -After creating a file called ``hosts.sqlite`` with this content, we can read the resulting table -into Bro: +After creating a file called ``hosts.sqlite`` with this content, we can +read the resulting table into Bro: .. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-table.bro @@ -117,22 +122,25 @@ into Bro: # Make sure this parses correctly at least. @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-table.bro -Afterwards, that table can be used to check logins into hosts against the available -userlist. +Afterwards, that table can be used to check logins into hosts against +the available userlist. Turning Data into Events ------------------------ -The second mode is to use the SQLite reader to output the input data as events. Typically there -are two reasons to do this. First, when the structure of the input data is too complicated -for a direct table import. In this case, the data can be read into an event which can then -create the necessary data structures in Bro in scriptland. +The second mode is to use the SQLite reader to output the input data as events. +Typically there are two reasons to do this. First, when the structure of +the input data is too complicated for a direct table import. In this case, +the data can be read into an event which can then create the necessary +data structures in Bro in scriptland. -The second reason is, that the dataset is too big to hold it in memory. In this case, the checks -can be performed on-demand, when Bro encounters a situation where it needs additional information. +The second reason is, that the dataset is too big to hold it in memory. In +this case, the checks can be performed on-demand, when Bro encounters a +situation where it needs additional information. -An example for this would be an internal huge database with malware hashes. Live database queries -could be used to check the sporadically happening downloads against the database. +An example for this would be an internal huge database with malware +hashes. Live database queries could be used to check the sporadically +happening downloads against the database. The SQLite commands to create the schema are as follows:: @@ -151,9 +159,10 @@ The SQLite commands to create the schema are as follows:: insert into malware_hashes values ('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace'); -The following code uses the file-analysis framework to get the sha1 hashes of files that are -transmitted over the network. For each hash, a SQL-query is run against SQLite. If the query -returns with a result, we had a hit against our malware-database and output the matching hash. +The following code uses the file-analysis framework to get the sha1 hashes +of files that are transmitted over the network. For each hash, a SQL-query +is run against SQLite. If the query returns with a result, we had a hit +against our malware-database and output the matching hash. .. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-events.bro @@ -162,5 +171,5 @@ returns with a result, we had a hit against our malware-database and output the # Make sure this parses correctly at least. 
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-events.bro -If you run this script against the trace in ``testing/btest/Traces/ftp/ipv4.trace``, you -will get one hit. +If you run this script against the trace in +``testing/btest/Traces/ftp/ipv4.trace``, you will get one hit. diff --git a/scripts/base/frameworks/input/main.bro b/scripts/base/frameworks/input/main.bro index fa766ba27b..82c46b870c 100644 --- a/scripts/base/frameworks/input/main.bro +++ b/scripts/base/frameworks/input/main.bro @@ -73,22 +73,23 @@ export { idx: any; ## Record that defines the values used as the elements of the table. - ## If this is undefined, then *destination* has to be a set. + ## If this is undefined, then *destination* must be a set. val: any &optional; ## Defines if the value of the table is a record (default), or a single value. ## When this is set to false, then *val* can only contain one element. want_record: bool &default=T; - ## The event that is raised each time a value is added to, changed in or removed - ## from the table. The event will receive an Input::Event enum as the first - ## argument, the *idx* record as the second argument and the value (record) as the - ## third argument. + ## The event that is raised each time a value is added to, changed in or + ## removed from the table. The event will receive an Input::Event enum + ## as the first argument, the *idx* record as the second argument and + ## the value (record) as the third argument. ev: any &optional; # event containing idx, val as values. - ## Predicate function that can decide if an insertion, update or removal should - ## really be executed. Parameters are the same as for the event. If true is - ## returned, the update is performed. If false is returned, it is skipped. + ## Predicate function that can decide if an insertion, update or removal + ## should really be executed. Parameters are the same as for the event. + ## If true is returned, the update is performed. If false is returned, + ## it is skipped. pred: function(typ: Input::Event, left: any, right: any): bool &optional; ## A key/value table that will be passed on the reader. @@ -123,8 +124,9 @@ export { ## If this is set to true (default), the event receives all fields in a single record value. want_record: bool &default=T; - ## The event that is raised each time a new line is received from the reader. - ## The event will receive an Input::Event enum as the first element, and the fields as the following arguments. + ## The event that is raised each time a new line is received from the + ## reader. The event will receive an Input::Event enum as the first + ## element, and the fields as the following arguments. ev: any; ## A key/value table that will be passed on the reader. diff --git a/scripts/base/frameworks/input/readers/raw.bro b/scripts/base/frameworks/input/readers/raw.bro index b1e0fb6831..a1e95b71a1 100644 --- a/scripts/base/frameworks/input/readers/raw.bro +++ b/scripts/base/frameworks/input/readers/raw.bro @@ -11,7 +11,9 @@ export { ## ## name: name of the input stream. ## source: source of the input stream. - ## exit_code: exit code of the program, or number of the signal that forced the program to exit. - ## signal_exit: false when program exited normally, true when program was forced to exit by a signal. + ## exit_code: exit code of the program, or number of the signal that forced + ## the program to exit. + ## signal_exit: false when program exited normally, true when program was + ## forced to exit by a signal. 
global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool); } From d85e5d776d28d86647a13454c24e3f15169d3d45 Mon Sep 17 00:00:00 2001 From: Vlad Grigorescu Date: Thu, 3 Sep 2015 16:29:58 -0500 Subject: [PATCH 09/34] Move SIP analyzer to flowunit instead of datagram Moving to flowunit simplifies the BinPAC constructs by allowing the use of &oneline instead of relying on regular expressions which sometimes didn't work as intended. Addresses BIT-1458 --- src/analyzer/protocol/sip/sip-protocol.pac | 29 +++++-------------- src/analyzer/protocol/sip/sip.pac | 2 +- src/analyzer/protocol/sip/sip_TCP.pac | 2 +- .../sip.log | 5 ++-- 4 files changed, 13 insertions(+), 25 deletions(-) diff --git a/src/analyzer/protocol/sip/sip-protocol.pac b/src/analyzer/protocol/sip/sip-protocol.pac index ce26b8be95..15f07df44a 100644 --- a/src/analyzer/protocol/sip/sip-protocol.pac +++ b/src/analyzer/protocol/sip/sip-protocol.pac @@ -1,16 +1,6 @@ -enum ExpectBody { - BODY_EXPECTED, - BODY_NOT_EXPECTED, - BODY_MAYBE, -}; - type SIP_TOKEN = RE/[^()<>@,;:\\"\/\[\]?={} \t]+/; type SIP_WS = RE/[ \t]*/; -type SIP_COLON = RE/:/; -type SIP_TO_EOL = RE/[^\r\n]*/; -type SIP_EOL = RE/(\r\n){1,2}/; type SIP_URI = RE/[[:alnum:]@[:punct:]]+/; -type SIP_NL = RE/(\r\n)/; type SIP_PDU(is_orig: bool) = case is_orig of { true -> request: SIP_Request; @@ -18,14 +8,12 @@ type SIP_PDU(is_orig: bool) = case is_orig of { }; type SIP_Request = record { - request: SIP_RequestLine; - newline: SIP_NL; + request: SIP_RequestLine &oneline; msg: SIP_Message; }; type SIP_Reply = record { - reply: SIP_ReplyLine; - newline: SIP_NL; + reply: SIP_ReplyLine &oneline; msg: SIP_Message; }; @@ -34,7 +22,7 @@ type SIP_RequestLine = record { : SIP_WS; uri: SIP_URI; : SIP_WS; - version: SIP_Version; + version: SIP_Version &restofdata; } &oneline; type SIP_ReplyLine = record { @@ -42,7 +30,7 @@ type SIP_ReplyLine = record { : SIP_WS; status: SIP_Status; : SIP_WS; - reason: SIP_TO_EOL; + reason: bytestring &restofdata; } &oneline; type SIP_Status = record { @@ -52,7 +40,7 @@ type SIP_Status = record { }; type SIP_Version = record { - : "SIP/"; + : "SIP/"; vers_str: RE/[0-9]+\.[0-9]+/; } &let { vers_num: double = bytestring_to_double(vers_str); @@ -69,11 +57,10 @@ type SIP_HEADER_NAME = RE/[^: \t]+/; type SIP_Header = record { name: SIP_HEADER_NAME; : SIP_WS; - : SIP_COLON; + : ":"; : SIP_WS; - value: SIP_TO_EOL; - : SIP_EOL; -} &oneline &byteorder=bigendian; + value: bytestring &restofdata; +} &oneline; type SIP_Body = record { body: bytestring &length = $context.flow.get_content_length(); diff --git a/src/analyzer/protocol/sip/sip.pac b/src/analyzer/protocol/sip/sip.pac index f527a90117..15addb8c1e 100644 --- a/src/analyzer/protocol/sip/sip.pac +++ b/src/analyzer/protocol/sip/sip.pac @@ -21,7 +21,7 @@ connection SIP_Conn(bro_analyzer: BroAnalyzer) { %include sip-protocol.pac flow SIP_Flow(is_orig: bool) { - datagram = SIP_PDU(is_orig) withcontext(connection, this); + flowunit = SIP_PDU(is_orig) withcontext(connection, this); }; %include sip-analyzer.pac diff --git a/src/analyzer/protocol/sip/sip_TCP.pac b/src/analyzer/protocol/sip/sip_TCP.pac index 5546d28ece..2e51675dea 100644 --- a/src/analyzer/protocol/sip/sip_TCP.pac +++ b/src/analyzer/protocol/sip/sip_TCP.pac @@ -24,7 +24,7 @@ connection SIP_Conn(bro_analyzer: BroAnalyzer) { %include sip-protocol.pac flow SIP_Flow(is_orig: bool) { - datagram = SIP_PDU(is_orig) withcontext(connection, this); + flowunit = SIP_PDU(is_orig) withcontext(connection, this); }; %include 
sip-analyzer.pac diff --git a/testing/btest/Baseline/scripts.base.protocols.sip.wireshark/sip.log b/testing/btest/Baseline/scripts.base.protocols.sip.wireshark/sip.log index 19f05ec1b9..047fa4e2d1 100644 --- a/testing/btest/Baseline/scripts.base.protocols.sip.wireshark/sip.log +++ b/testing/btest/Baseline/scripts.base.protocols.sip.wireshark/sip.log @@ -3,7 +3,7 @@ #empty_field (empty) #unset_field - #path sip -#open 2015-04-30-03-33-33 +#open 2015-09-03-21-02-33 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method uri date request_from request_to response_from response_to reply_to call_id seq subject request_path response_path user_agent status_code status_msg warning request_body_len response_body_len content_type #types time string addr port addr port count string string string string string string string string string string string vector[string] vector[string] string count string string string string string 1120469572.844249 CXWv6p3arKYeMETxOg 192.168.1.2 5060 212.242.33.35 5060 0 REGISTER sip:sip.cybercity.dk - ;tag=00-04092-1701af62-120c67172 - 578222729-4665d775@578222732-4665d772 68 REGISTER - SIP/2.0/UDP 192.168.1.2 SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 401 Unauthorized - 0 0 - @@ -37,8 +37,9 @@ 1120470900.060556 CIPOse170MGiRM1Qf4 192.168.1.2 5060 212.242.33.35 5060 0 ACK sip:0097239287044@sip.cybercity.dk - "arik" ;tag=00-04083-1701ba17-57d493ef5 - - - 24487391-449bf2a0@192.168.1.2 2 ACK - SIP/2.0/UDP 192.168.1.2 (empty) - - - - 0 - - 1120470966.443914 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 INVITE sip:35104724@sip.cybercity.dk - "arik" "arik" ;tag=00-04079-1701ba6f-3e08e2f66 - 11894297-4432a9f8@192.168.1.2 1 INVITE - SIP/2.0/UDP 192.168.1.2:5060 SIP/2.0/UDP 192.168.1.2:5060;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 407 authentication required - 270 0 - 1120470966.606422 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 INVITE sip:35104724@sip.cybercity.dk Mon, 04 Jul 2005 09:56:06 GMT "arik" "arik" - 11894297-4432a9f8@192.168.1.2 2 INVITE - SIP/2.0/UDP 192.168.1.2:5060,SIP/2.0/UDP 192.168.1.2 SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 100 Trying - 270 0 - +1120470966.606422 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 INVITE sip:35104724@sip.cybercity.dk Mon, 04 Jul 2005 09:56:06 GMT "arik" "arik" ;tag=00-04075-1701baa2-2dfdf7c21 - 11894297-4432a9f8@192.168.1.2 2 INVITE - SIP/2.0/UDP 192.168.1.2:5060,SIP/2.0/UDP 192.168.1.2 SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060,SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 183 In band info available - 270 199 application/sdp 1120470966.606422 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 INVITE sip:35104724@sip.cybercity.dk Mon, 04 Jul 2005 09:56:06 GMT "arik" "arik" ;tag=00-04075-1701baa2-2dfdf7c21 - 11894297-4432a9f8@192.168.1.2 2 INVITE - SIP/2.0/UDP 192.168.1.2:5060,SIP/2.0/UDP 192.168.1.2 SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060,SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060,SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 480 Error - 270 0 application/sdp 1120470984.353086 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 REGISTER sip:sip.cybercity.dk - ;tag=00-04074-1701bac9-1daa0b4c5 - 29858147-465b0752@29858051-465b07b2 5 REGISTER - SIP/2.0/UDP 192.168.1.2,SIP/2.0/UDP 192.168.1.2 
SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 401 Unauthorized - 0 0 - 1120471018.723316 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 REGISTER sip:sip.cybercity.dk - - 29858147-465b0752@29858051-465b07b2 6 REGISTER - SIP/2.0/UDP 192.168.1.2 SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 100 Trying - 0 0 - 1120471018.723316 C7XEbhP654jzLoe3a 192.168.1.2 5060 212.242.33.35 5060 0 REGISTER sip:sip.cybercity.dk - ;tag=00-04087-1701bae7-76fb74995 - 29858147-465b0752@29858051-465b07b2 6 REGISTER - SIP/2.0/UDP 192.168.1.2 SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060,SIP/2.0/UDP 192.168.1.2;received=80.230.219.70;rport=5060 Nero SIPPS IP Phone Version 2.0.51.16 200 OK - 0 0 - -#close 2015-04-30-03-33-33 +#close 2015-09-03-21-02-33 From 4ac8ae61f7fdb3842ffac6b8355913aa0ab79dde Mon Sep 17 00:00:00 2001 From: Vlad Grigorescu Date: Fri, 4 Sep 2015 07:39:31 -0500 Subject: [PATCH 10/34] Make dns_max_queries redef-able, and bump up the default from 5 to 25. Addresses BIT-1460 --- scripts/base/init-bare.bro | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/base/init-bare.bro b/scripts/base/init-bare.bro index 40f518b682..14d3ed3a4b 100644 --- a/scripts/base/init-bare.bro +++ b/scripts/base/init-bare.bro @@ -2509,7 +2509,7 @@ global dns_skip_all_addl = T &redef; ## If a DNS request includes more than this many queries, assume it's non-DNS ## traffic and do not process it. Set to 0 to turn off this functionality. -global dns_max_queries = 5; +global dns_max_queries = 25 &redef; ## HTTP session statistics. ## From bebd08484cb61345152864af38bbedde6aaa265a Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Mon, 7 Sep 2015 03:35:23 -0500 Subject: [PATCH 11/34] Clarifications to the script reference docs --- doc/script-reference/attributes.rst | 13 +++++---- doc/script-reference/statements.rst | 44 ++++++++++++++++++----------- 2 files changed, 36 insertions(+), 21 deletions(-) diff --git a/doc/script-reference/attributes.rst b/doc/script-reference/attributes.rst index d37cc2a98a..fec72570d2 100644 --- a/doc/script-reference/attributes.rst +++ b/doc/script-reference/attributes.rst @@ -54,13 +54,16 @@ Here is a more detailed explanation of each attribute: .. bro:attr:: &redef - Allows for redefinition of initial values of global objects declared as - constant. - - In this example, the constant (assuming it is global) can be redefined - with a :bro:keyword:`redef` at some later point:: + Allows use of a :bro:keyword:`redef` to redefine initial values of + global variables (i.e., variables declared either :bro:keyword:`global` + or :bro:keyword:`const`). Example:: const clever = T &redef; + global cache_size = 256 &redef; + + Note that a variable declared "global" can also have its value changed + with assignment statements (doesn't matter if it has the "&redef" + attribute or not). .. bro:attr:: &priority diff --git a/doc/script-reference/statements.rst b/doc/script-reference/statements.rst index 1f5b388e7f..e2f93a5627 100644 --- a/doc/script-reference/statements.rst +++ b/doc/script-reference/statements.rst @@ -71,9 +71,11 @@ Statements Declarations ------------ -The following global declarations cannot occur within a function, hook, or -event handler. Also, these declarations cannot appear after any statements -that are outside of a function, hook, or event handler. +Declarations cannot occur within a function, hook, or event handler. 
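A short sketch of how this plays out in a script (the names are arbitrary):

.. code:: bro

    module Example;

    # Declarations: only at the outermost level, before any outer-level
    # statements.
    global counter: count = 0;
    const greeting = "hello" &redef;

    event bro_init()
        {
        # Inside a handler, statements such as "local" and "print" are fine.
        local msg = fmt("%s, counter=%d", greeting, counter);
        print msg;
        }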
+ +Declarations must appear before any statements (except those statements +that are in a function, hook, or event handler) in the concatenation of +all loaded Bro scripts. .. bro:keyword:: module @@ -126,9 +128,12 @@ that are outside of a function, hook, or event handler. .. bro:keyword:: global Variables declared with the "global" keyword will be global. + If a type is not specified, then an initializer is required so that the type can be inferred. Likewise, if an initializer is not supplied, - then the type must be specified. Example:: + then the type must be specified. In some cases, when the type cannot + be correctly inferred, the type must be specified even when an + initializer is present. Example:: global pi = 3.14; global hosts: set[addr]; @@ -136,10 +141,11 @@ that are outside of a function, hook, or event handler. Variable declarations outside of any function, hook, or event handler are required to use this keyword (unless they are declared with the - :bro:keyword:`const` keyword). Definitions of functions, hooks, and - event handlers are not allowed to use the "global" - keyword (they already have global scope), except function declarations - where no function body is supplied use the "global" keyword. + :bro:keyword:`const` keyword instead). + + Definitions of functions, hooks, and event handlers are not allowed + to use the "global" keyword. However, function declarations (i.e., no + function body is provided) can use the "global" keyword. The scope of a global variable begins where the declaration is located, and extends through all remaining Bro scripts that are loaded (however, @@ -150,18 +156,22 @@ that are outside of a function, hook, or event handler. .. bro:keyword:: const A variable declared with the "const" keyword will be constant. + Variables declared as constant are required to be initialized at the - time of declaration. Example:: + time of declaration. Normally, the type is inferred from the initializer, + but the type can be explicitly specified. Example:: const pi = 3.14; const ssh_port: port = 22/tcp; - The value of a constant cannot be changed later (the only - exception is if the variable is global and has the :bro:attr:`&redef` - attribute, then its value can be changed only with a :bro:keyword:`redef`). + The value of a constant cannot be changed. The only exception is if the + variable is a global constant and has the :bro:attr:`&redef` + attribute, but even then its value can be changed only with a + :bro:keyword:`redef`. The scope of a constant is local if the declaration is in a function, hook, or event handler, and global otherwise. + Note that the "const" keyword cannot be used with either the "local" or "global" keywords (i.e., "const" replaces "local" and "global"). @@ -184,7 +194,8 @@ that are outside of a function, hook, or event handler. .. bro:keyword:: redef There are three ways that "redef" can be used: to change the value of - a global variable, to extend a record type or enum type, or to specify + a global variable (but only if it has the :bro:attr:`&redef` attribute), + to extend a record type or enum type, or to specify a new event handler body that replaces all those that were previously defined. @@ -237,13 +248,14 @@ that are outside of a function, hook, or event handler. Statements ---------- +Statements (except those contained within a function, hook, or event +handler) can appear only after all global declarations in the concatenation +of all loaded Bro scripts. 
+ Each statement in a Bro script must be terminated with a semicolon (with a few exceptions noted below). An individual statement can span multiple lines. -All statements (except those contained within a function, hook, or event -handler) must appear after all global declarations. - Here are the statements that the Bro scripting language supports. .. bro:keyword:: add From 2327f5bba509244f3c00d0d5ecc4176c181d1a46 Mon Sep 17 00:00:00 2001 From: Yun Zheng Hu Date: Thu, 10 Sep 2015 10:50:35 +0200 Subject: [PATCH 12/34] Fixed parsing of V_ASN1_GENERALIZEDTIME timestamps in x509 certificates --- src/file_analysis/analyzer/x509/X509.cc | 34 +++++++++++++----- .../Baseline/core.x509-generalizedtime/output | 16 +++++++++ .../Traces/tls/x509-generalizedtime.pcap | Bin 0 -> 8770 bytes testing/btest/core/x509-generalizedtime.bro | 10 ++++++ 4 files changed, 52 insertions(+), 8 deletions(-) create mode 100644 testing/btest/Baseline/core.x509-generalizedtime/output create mode 100644 testing/btest/Traces/tls/x509-generalizedtime.pcap create mode 100644 testing/btest/core/x509-generalizedtime.bro diff --git a/src/file_analysis/analyzer/x509/X509.cc b/src/file_analysis/analyzer/x509/X509.cc index d9604740a7..9059a3e250 100644 --- a/src/file_analysis/analyzer/x509/X509.cc +++ b/src/file_analysis/analyzer/x509/X509.cc @@ -620,15 +620,33 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* } tm lTime; - lTime.tm_sec = ((lBuffer[10] - '0') * 10) + (lBuffer[11] - '0'); - lTime.tm_min = ((lBuffer[8] - '0') * 10) + (lBuffer[9] - '0'); - lTime.tm_hour = ((lBuffer[6] - '0') * 10) + (lBuffer[7] - '0'); - lTime.tm_mday = ((lBuffer[4] - '0') * 10) + (lBuffer[5] - '0'); - lTime.tm_mon = (((lBuffer[2] - '0') * 10) + (lBuffer[3] - '0')) - 1; - lTime.tm_year = ((lBuffer[0] - '0') * 10) + (lBuffer[1] - '0'); + size_t i; + if ( atime->type == V_ASN1_GENERALIZEDTIME ) + { + // YYYY format + lTime.tm_year = (lBuffer[0] - '0') * 1000; + lTime.tm_year += (lBuffer[1] - '0') * 100; + lTime.tm_year += (lBuffer[2] - '0') * 10; + lTime.tm_year += (lBuffer[3] - '0'); + if ( lTime.tm_year > 1900) + lTime.tm_year -= 1900; + i = 4; + } + else + { + // YY format + lTime.tm_year = (lBuffer[0] - '0') * 10; + lTime.tm_year += (lBuffer[1] - '0'); + if ( lTime.tm_year < 50 ) + lTime.tm_year += 100; // RFC 2459 + i = 2; + } - if ( lTime.tm_year < 50 ) - lTime.tm_year += 100; // RFC 2459 + lTime.tm_mon = ((lBuffer[i+0] - '0') * 10) + (lBuffer[i+1] - '0') - 1; // MM + lTime.tm_mday = ((lBuffer[i+2] - '0') * 10) + (lBuffer[i+3] - '0'); // DD + lTime.tm_hour = ((lBuffer[i+4] - '0') * 10) + (lBuffer[i+5] - '0'); // hh + lTime.tm_min = ((lBuffer[i+6] - '0') * 10) + (lBuffer[i+7] - '0'); // mm + lTime.tm_sec = ((lBuffer[i+8] - '0') * 10) + (lBuffer[i+9] - '0'); // ss lTime.tm_wday = 0; lTime.tm_yday = 0; diff --git a/testing/btest/Baseline/core.x509-generalizedtime/output b/testing/btest/Baseline/core.x509-generalizedtime/output new file mode 100644 index 0000000000..75605f5668 --- /dev/null +++ b/testing/btest/Baseline/core.x509-generalizedtime/output @@ -0,0 +1,16 @@ +----- x509_certificate ---- +subject: CN=bro-generalizedtime-test,O=Bro,C=NL +not_valid_before: 2015-09-01-13:33:37.000000000 (epoch: 1441114417.0) +not_valid_after : 2025-09-01-13:33:37.000000000 (epoch: 1756733617.0) +----- x509_certificate ---- +subject: CN=*.taleo.net,OU=Comodo PremiumSSL Wildcard,OU=Web,O=Taleo Inc.,street=4140 Dublin Boulevard,street=Suite 400,L=Dublin,ST=CA,postalCode=94568,C=US +not_valid_before: 2011-05-04-00:00:00.000000000 
(epoch: 1304467200.0) +not_valid_after : 2016-07-04-23:59:59.000000000 (epoch: 1467676799.0) +----- x509_certificate ---- +subject: CN=COMODO High-Assurance Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB +not_valid_before: 2010-04-16-00:00:00.000000000 (epoch: 1271376000.0) +not_valid_after : 2020-05-30-10:48:38.000000000 (epoch: 1590835718.0) +----- x509_certificate ---- +subject: CN=AddTrust External CA Root,OU=AddTrust External TTP Network,O=AddTrust AB,C=SE +not_valid_before: 2000-05-30-10:48:38.000000000 (epoch: 959683718.0) +not_valid_after : 2020-05-30-10:48:38.000000000 (epoch: 1590835718.0) diff --git a/testing/btest/Traces/tls/x509-generalizedtime.pcap b/testing/btest/Traces/tls/x509-generalizedtime.pcap new file mode 100644 index 0000000000000000000000000000000000000000..6f026034df358a0dd174d3f7b5d69047444d88cb GIT binary patch literal 8770 zcmbuFbyQSs*T&B<4BZWag0zTqDh<*hB^}a&bO}mHr*un$w9?HG(%ndxgmj0%H{-jO zua7?K{pXvt_FA07%QDd$1&{y$;lr=OfG`sFDD>kD2o}|Ir=a?L6Gs{Nn?E7k-qt?>00cxx1qcj= zfIui{56Q^*;lD#f_zztiMu)%o=Lddr|FgzVI_$G>Izkzo3||3MKXcB0O21*S@gQ7xFpbSuYC;=27iU*~G(n29n zL?{^g0Llo(hGIc60ZYIP-~%`SB0vCm0&oE=02&k-iVH=6VgL|WkxPLRAPfivf`Jg& z^7jEW00lq-hyfCS6hH?MVD~Tq3;+!RLIfdz5Wxswxc6(A_g~rkvre2|)o|}V2!GUR z2Ue#OFC5@IFWY~-!+dMQY9s_FzG6cHJix%u_|J~|Z{@+ZUNvx{$CrO4k^tWyzxqYg zffI?qU_cWA0Mx+XHFS~`R-$|5x0PlN7u}tP2JN`}UBhPbr~9c~yBy>Dl$sjOynK=@ zwc`y-O6WWe6ND7VAyVh}zDsv@n5`sA+CUj#DYyl}PXIcA2nIk1FgoZ49ROVdpfdn; z%<2Q$hlM5r0(gTJ6AAT!zm)$OCI|$90IY&+=&YznU^PS}d;|qKHVjrY_yrmUSk&Hz zjf(Xi{0fXg^1|MR+04Y+#NN=#!p+3k(c+Z}v!jWFBkO$=EPHZ~5}i$@E7 zgZ=j#|L--t(|%VCf&gIeFo4xN2*KMMSbQ(Wd%H+3>Jf{Gkc2);ae`wo(e#vgI_PD# zobRTFy1X*+JRQZ!b%mcRF{8x6HZ*A4c4%9S(siSdE{nN#Br|(iTuJHl!?SRsdyEG; z*X89@OKs|M+cdR`kyZ6wR>_9`ji=U7E;|hcD@6yJfocG~aofyk_A^s}EoioO7y))H zK)=g_B^3CM)y3vzPR*5x=Cy7r4SxQzL*qv*ex=2U+%P?aFgkOkT9r-9k@fKo= zt;eZ$a^gZjfEVABnh&OcF-RQXjB=muA=7WBY&}1}=v}DJgKT+)q1u0*tlSclj^#fm z+9I5zXtR2to?N-FVrr^>aeewj->0gL6AKxE&XVmoa;zAU`R+?kRbn}2PX&)4tedH9 z>WU%%s|T;`5AR@JiOEna{jkp*E?hIZ>#xYeaf4bcS@#g`2oVd`%HXFFoBg-*{b!fw z%u-6JXf7<8!IU%ZJd8kl4f9x0T)RzRBmF=Txr$d>*PX(7x_ z$nAX8rt(waq7GSDG++CH-vgz2azy6lRxX>44twPxc0Ke~WBM*HPD^sQ_*t0v|Iyv+ z;Nrb+|0NxT|B?=CI1zL2w{-lO;54uLpTy{YCBnKpZ_2Ok4o|T4Z|SIZEm2I!qwi`( zNq$CtOz0k&<;N3Zh+t)l?Z#zYYjoh7fW#q?yko*s`xQOqME)H({r)<}lm5@zyH|DI zhy@HLqoRHNp@#A6`SOiFmV9P%Vr#Ri%CAgmT1zt6qO~~S;)h`3f2HHE!;tO+7yp9i zk6ISOY8eAN3_s#8{#i>nk(~({n1pQ%c>J|Hx|Ke`iJu++l?dA%1^+z^9`M~k4BHp) zh4P=K81_x6OfVu{L1W7nNF_Cg*?64&7yb&Kt8RCDA>MfP!}fSOnMb)&Ng+yfjRR8YxklsH{8s9V0pz->A_LK-tggp{lLt^Inc^_Dw0t7|10ond4O z>LvAqQT$^^L+-Bn2bh2)Gu2!Kx^B$ca}qKxKfSHe>QtVhsoFg{&5ws&p2SeaZ;s8^ zOFnqk>K&9}bLKdHp~86M+ezy6?v_{AsC9}FA9ZzVWo9xRVx7vpRG|xwf=7W(rsGqe z+!u?=Z#e90KUHWEN=my9^dGzX5>RI`^5m}bZJxThi=JU%4JLY8;e)CrJjIb(bTifN zuH2A>a*5hx4^->Pq@olwr+IcMPN>?qXI%O2X~|Tc$t-b=q4t(C*d>OAFP7C!A0SO~ z0EON6`E%%`S1ohqQoWe_?Kt!KqctkBKeAY-VQD=gh<%&aA7W}4*;*DLeF`QtJPo!P zx#)A_k4$!)WbCDl;*Jir7wr|n(&2E9#>lWVh= zlkdh@&|V4&bCBoc-4p|!cQxL3-m{}c(q6{cyI^GY%K1LR;%X!-xOnJRFO6NB6&rmu z^oEnRtn0$~0M*9LRYC*%fj<*PgMaCFD-Zw4@3!gUm9l(MCWmP+w~vz<<>KZiq ze)6ls(@2j8T8P0=ic4(=KG1;w*Ar zsDJ9`T{{LYEPTNyu43)4g@W$)N;zRl7A{CeEGMU%+EroUXZjX05R z)uu#bb%(5uO%CO)&=};v)*~j7m1(pnet*%dw_Wroj?w1j2lUmg=9Nx^J?0DcjYCfU z%gtUEp?pXY$2m_n$TIm^L#` z$$k3vuS-vWlAsVN4$`?qW74(hCu+!YO++IjD<5~Zq0)_V=W%mUcPjw83K4+ne z2yZT8HBXzF)*+kTUG2;+uT@+IN%8jHucf?@+UZ7(?2>_8iyO+ zVm6!>rtkBc>Z3>6NT`Lq#^Doh%TQ`w(oQVw0Qv`rw->KMkVb2fICqG9ye#MB`=a2Y8knd zJ7k|KmG4>I!Hr%~X*8Mg;9?38X5uoxpC~F&i_8;#uU0y@bFG84z?+&IvImJ$^Sx(V zw4H`f9WCePb1NOv{8*)vudo=&K$IzlCs}ac{ML{$iCkb+6$eVUEEMF8xM~ieA}~}v 
z^SB6A`o`q)LE!5;d!xh4dZ(8Qvn1K#IF!fv!66m)PkAp^4uvB-LcX*KY3Fla%CGcw zcT$V8i3T1E3jNri-X{k=m)LEe9I~qr=%)N=ix$E_#y|P`1=s1PbUfLw<9bN6uH_3K z*h~AA*WRBU-4VS1RxF=)0^{ zj1`&pVzbmEw+?w#gje0!%=+r&A7Hplv;~gc@t5ZC`>@_?8;C|lBH|rc)7&1LQ?vT-*T3)B1=#b!Aj0yDGB9(|;m$5hpOFYzq_o9F|!@(E~UUH6eH zk!_H56FGCL_6;t}w1fdMpJIkf@w8KlPIyhs2l2<5Y%@iC{-V|hn7kc7hD`HJw;aws zy(}3$FkbD6^~#xyeo?LZ1)<=2sb}Q*#uqcG_jf-xYh~0Z4}Z<8QLFwjI$VY4tr71~ z5#9SK0KB4jR|mt40}Bi~t}uTb9Rj|ObMk!EODJzcg=ZLJSyb{dpB#UNAc)>q-Wc0K zx87;}5kBYO=k-npzqu&QA0_!lWil$9wG%>^n?$%u8Sy5W?^*pK(g!H@9~ve?^PQMh zUGifPLjsGfMir5`SiCFqJ)WP}E`Jdm{34*7)LyXNlwbp4%e3Ch9jUSnKGOa`kOsv7 zzer%YQy(a0LNrTqd=+l}`g3Tg*ZbPR*iUUw*bTuS7&M61!;-vFXR(D)ymth~2p>;y z`DB00Ik3Pccj}kSTP7t9B~ua`{Uqfk&g~`2O=obV-+~%5eU^>Ov!K`5&v;^<=#@I3 zfp=_S@B>6&l=BW{I+uri!UHuem%ZZyS7-{ORr^q|rQmm0D51lBI}fuWc`YC^P1GIa zsaf}?G0P@T#T|9K6zJ1P`zF6N!?T7QPx_D%!jaSc1vgSxeT{sRS4iPbvCUqyn`dcO z1`dsf`jQTxSoi+f;P|KrTNB?gB3R3$UCKQ_wXm=|PD(oT=&bWK1Foq#_9^9)MzT*Y zIA11Hi0g$UBPyxduD(N0H@s$6jhklo0f8m5kCn$PM1w`#Jw7Xg6WuiSS7_qZ41Ig& z2x9~LY`wx%TbC2)7aSy(y%72rj{<<)XSY3N_tGa)HQzoPD%m(kk8^R>C8O$|SAT0R zxkmmGFVPBwM>th^rvOWVcSe`_m9ags*%y=rBn%>oaI{{OK1%2uzQFyqM7#-U6JOfw zG<>b@_#M|A;xy$w)`5Myo=N(R#!^afA4+A2WvdHn$x>o>^((anS&FJiA5O-zd@0bN@)W2~176)l}4mfLyO zTUPbRcjtE{qLYOr)}@Ja%LKBot1BO<-ui*c9c~c5MwvX@?7VAChF>_C5($WR-Y6z~ z`gYA^huNjdoJuH1cMv;wE9I|wt&1sPo+;TewI-SO#n5`~Dc2#xNgaEu1pyUXt9uu1 z;2luet>kB$b$iaz=)BTAn#Slw{A9Kclf46!M!6~FMO_JnkW^X+C!~j{_ug|fai9eq zTM0S&QP48HmG^Q7B!J>d@iUH>1|C@9BFu6M$#nSS+-ky(j^gnb=)bVo~SUn*5x3} zY3=M#Vt79^auXq;?4=hoN7%^H^*p0<zZLiAW>bOM9w8i6*RBDwB~Z|1^p+X0=nFU#3es}<_KxoyVi>e7B>U^TMrD&vpp zQp}rd`9onQy^}~UN^?8oJ9avc_1D4-X}K}3>pF1Io(Eeb^uC=Z)5Ck|-fQ?vaN66=uk(qhB2 z5cADz=f-&RgEY<8rVEPVt(B*{4~!c3@EPJuLhcs0eYc5tZqCE`@+b>CtDJ0I!(E@< zMAFo;>DlW>)eb6mL4voqlSc~+d-QZ70NVXwn{#zlB}sX4I(F4j4u5-InhO-bR$QdF zLs2&Wieb$CtNLR-r#?tB8ckpr!rgZ>5KQ$GR3%eaUrc1qSWMlHb)Q;5Wy_nQ7jI~9R)&NikU}@o?-UGIqOMQW>pUL@kH|nv zHaNc&Fx9;i?-!g@!F{vBD; zckQ0v$3y3Li4NZ@T^*XtR@4mbBvy3TT*g&sTSvK=5$R-0$PF;(cs`AFY=gs-|NV(Q8G*0F2{Jik@XZWm zJ@n} z&qqIdCm>tFHT0R<#6!P6s^xI`UBp{aa?|I;WUHzLyIsW}Ie~AFp_TRR1@hz0kpwhO z_45t4_6u(MZMy6+iV2tl?hYPDSFM}`0UM;~H5%~Rhqj)Ec8nQNf>F~Q zZD=H}c2i2drf)%7vwFM8SU%Zzay%0CVjY(+vX&pXZ_hQy20y7do&)UFoa_p(j6ID; zNKdD1D>K5K>0&x#a~nh(4C&{^){j!QxL`0rZq> z&t)LX=shDvLCmq$`#Pd02a$}99CgP(s7QA1QcF4X(mJ@j4bdWK8hum%UjOQ$mxahn zD0loV*VSQLtmd59jTS?Ac+M~eM+|pb=~39T)LW&te)9wHKEo-(0y)Tf>|Mn5MhcO% zV%%)=D0i9~Ru^J^XS8t`CmZdqozqK$Itsqmfp4Hf2?NnvllA?wO?sU<6%1rUf+_k618 z7_%n!Ld9JzMsQ`t!yloFxd=ibF;;wMlYK>|Kx*Iof{yzn@^ob($eCkOZjSRI;ofVL z;QBR=A0p$i6@Ka&`7{-`I0u_fv?lkV(T=_^sqEciHO^-xTEa_0C3Pa0KUnG`I7Dw5 zCb+4LE07)J&zzR9zMZUgFpG9)bupN|?;{I9d1&veZPU1~Aj=q5K4w8bN5?3Yk?F^9 zXZC>$GY^S9-9YLaEDxkvU5@*7Mpch5$>JY=J;NDqLMLy&{VIjN1-e8YEUi4BJ3>VA zBNREfNT;j^%FE`u8E`f5-6%h*2=1}UMhEC?QhoE5YeShIORCVm@}UXTmL7C-J$h3a54Q48uFqdjDFhb8(ZuZW(Ijm5S)5Yk%w2Dxdi z_~C5qVAV$axb_nwXYM!}?f7Hb4$x+xcprllwWNVXFLrt>GAvw#0h`Yp7iubs=oh7Al0BNkc?J{a5NC9b z9~?;0-vo^!eq2tydzbLhJiPfy(L zeb1gN*Y*uO@ifh~G+(-H@YVix%zi!}t!{0H2;+Fy+uV`u1F5^sPiG*Er;n3!3BO=v z#%XWlLi_Iiz&?+OyWqQH|HZg;OMl0+J#9p5h#>O{vx`;S9>X&AUWNNwl`M09pKZj` z)Q%dv`m)jOhE|+m!wqxrHsMMGwsig!nKM57jZOe)!wiN7+Pkk>f(bpLccbito z3;iTJ5$a6c+ZsEG@diKRui)#C01RBl={ zmn|G5jmYqu>9_mmZ7KsDrg`i}#L9pgeC&$IjQrnu^`FCbPA}-sVf)2D$F3T+KgX`Pz|n6H zXMc`e|04c10cVSZ6Nef8Ij?qxJSVe-qLFCsFo0oLDvRH&LS&rg7d!`fsAr|0M35!-)hf|3a*TYxJS}n|SXL zG7t|N`NQL{xqlGKPvV8uzY<|{fAN-o5(!~*e z+>eMt(~eabBoR`D5uct2HOjv6yOmZya-Ulh3Lq_tJ2uRTf4CrmCohwsW#2_9v;5 x+n%?gtJ1=Y!v>Sd4O0k@Kj-!UgZ%>Bc*HlD%-_C+p;7nq&9E>output 2>&1 +# @TEST-EXEC: bro -C -r $TRACES/tls/tls1.2.trace %INPUT >>output 2>&1 +# 
@TEST-EXEC: btest-diff output +event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) + { + print "----- x509_certificate ----"; + print fmt("subject: %s", cert$subject); + print fmt("not_valid_before: %T (epoch: %s)", cert$not_valid_before, cert$not_valid_before); + print fmt("not_valid_after : %T (epoch: %s)", cert$not_valid_after, cert$not_valid_after); + } From 20ac0c5aeb8be78df37ac6723c3fd59df09dd373 Mon Sep 17 00:00:00 2001 From: Vlad Grigorescu Date: Thu, 10 Sep 2015 15:22:13 -0500 Subject: [PATCH 13/34] Add README.rst -> README symlink. Addresses BIT-1413 --- README.rst | 1 + 1 file changed, 1 insertion(+) create mode 120000 README.rst diff --git a/README.rst b/README.rst new file mode 120000 index 0000000000..100b93820a --- /dev/null +++ b/README.rst @@ -0,0 +1 @@ +README \ No newline at end of file From aa8f56c2bd39705198b60553bcf4f8c0a6cb4466 Mon Sep 17 00:00:00 2001 From: Richard van den Berg Date: Fri, 11 Sep 2015 13:01:43 +0200 Subject: [PATCH 14/34] hash-all-files.bro depends on base/files/hash --- scripts/policy/frameworks/files/hash-all-files.bro | 2 ++ 1 file changed, 2 insertions(+) diff --git a/scripts/policy/frameworks/files/hash-all-files.bro b/scripts/policy/frameworks/files/hash-all-files.bro index 74bea47bb9..f076abdd91 100644 --- a/scripts/policy/frameworks/files/hash-all-files.bro +++ b/scripts/policy/frameworks/files/hash-all-files.bro @@ -1,5 +1,7 @@ ##! Perform MD5 and SHA1 hashing on all files. +@load base/files/hash + event file_new(f: fa_file) { Files::add_analyzer(f, Files::ANALYZER_MD5); From 09904aeb5401c561c7bf21ee84a78e0eca25aaa9 Mon Sep 17 00:00:00 2001 From: Johanna Amann Date: Fri, 11 Sep 2015 12:26:15 -0700 Subject: [PATCH 15/34] Updating sumbodule [nomail] --- aux/binpac | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/aux/binpac b/aux/binpac index ff16caf3d8..239a1a809d 160000 --- a/aux/binpac +++ b/aux/binpac @@ -1 +1 @@ -Subproject commit ff16caf3d8c5b12febd465a8ddd1524af60eae1a +Subproject commit 239a1a809dad7eb190fe31f03d949da3cbf79b3a From 401743313c63402da82f7ecdf8b86f0fafb28b0d Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Mon, 14 Sep 2015 13:30:25 -0500 Subject: [PATCH 16/34] Fixed some examples in "Writing Bro Scripts" doc --- doc/scripting/data_struct_record_01.bro | 2 +- doc/scripting/data_struct_record_02.bro | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/scripting/data_struct_record_01.bro b/doc/scripting/data_struct_record_01.bro index a80d30faae..ab28501f96 100644 --- a/doc/scripting/data_struct_record_01.bro +++ b/doc/scripting/data_struct_record_01.bro @@ -4,7 +4,7 @@ type Service: record { rfc: count; }; -function print_service(serv: Service): string +function print_service(serv: Service) { print fmt("Service: %s(RFC%d)",serv$name, serv$rfc); diff --git a/doc/scripting/data_struct_record_02.bro b/doc/scripting/data_struct_record_02.bro index b10b3feac0..515c8a716c 100644 --- a/doc/scripting/data_struct_record_02.bro +++ b/doc/scripting/data_struct_record_02.bro @@ -9,7 +9,7 @@ type System: record { services: set[Service]; }; -function print_service(serv: Service): string +function print_service(serv: Service) { print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc); @@ -17,7 +17,7 @@ function print_service(serv: Service): string print fmt(" port: %s", p); } -function print_system(sys: System): string +function print_system(sys: System) { print fmt("System: %s", sys$name); From a052dc4e355e206f42b47337b17ed0af749bf44c Mon Sep 17 00:00:00 2001 From: 
Johanna Amann Date: Wed, 16 Sep 2015 15:16:04 -0700 Subject: [PATCH 17/34] Fix offset=-1 (eof) for raw reader Addresses BIT-1479 --- src/input/readers/raw/Raw.cc | 5 ++--- .../scripts.base.frameworks.input.raw.offset/out | 1 + .../scripts/base/frameworks/input/raw/offset.bro | 13 ++++++++++--- 3 files changed, 13 insertions(+), 6 deletions(-) diff --git a/src/input/readers/raw/Raw.cc b/src/input/readers/raw/Raw.cc index 2aae96abf7..8a1f469ddb 100644 --- a/src/input/readers/raw/Raw.cc +++ b/src/input/readers/raw/Raw.cc @@ -303,7 +303,8 @@ bool Raw::OpenInput() if ( offset ) { int whence = (offset > 0) ? SEEK_SET : SEEK_END; - if ( fseek(file, offset, whence) < 0 ) + int64_t pos = (offset >= 0) ? offset : offset + 1; // we want -1 to be the end of the file + if ( fseek(file, pos, whence) < 0 ) { char buf[256]; strerror_r(errno, buf, sizeof(buf)); @@ -395,8 +396,6 @@ bool Raw::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fie { string offset_s = it->second; offset = strtoll(offset_s.c_str(), 0, 10); - if ( offset < 0 ) - offset++; // we want -1 to be the end of the file } else if ( it != info.config.end() ) { diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.raw.offset/out b/testing/btest/Baseline/scripts.base.frameworks.input.raw.offset/out index 3af2451db9..c8dd06805b 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.input.raw.offset/out +++ b/testing/btest/Baseline/scripts.base.frameworks.input.raw.offset/out @@ -1,2 +1,3 @@ fkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF F +hi diff --git a/testing/btest/scripts/base/frameworks/input/raw/offset.bro b/testing/btest/scripts/base/frameworks/input/raw/offset.bro index 8161785fdd..5ab2d84655 100644 --- a/testing/btest/scripts/base/frameworks/input/raw/offset.bro +++ b/testing/btest/scripts/base/frameworks/input/raw/offset.bro @@ -1,5 +1,8 @@ +# @TEST-EXEC: cp input.log input2.log # @TEST-EXEC: btest-bg-run bro bro -b %INPUT -# @TEST-EXEC: btest-bg-wait 5 +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: echo "hi" >> input2.log +# @TEST-EXEC: btest-bg-wait 10 # @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff out @TEST-START-FILE input.log @@ -7,6 +10,7 @@ sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF @TEST-END-FILE redef exit_only_after_terminate = T; +@load base/frameworks/communication # keep network time running global outfile: file; global try: count; @@ -21,9 +25,8 @@ event line(description: Input::EventDescription, tpe: Input::Event, s: string) { print outfile, s; try = try + 1; - if ( try == 2 ) + if ( try == 3 ) { - Input::remove("input"); close(outfile); terminate(); } @@ -39,7 +42,11 @@ event bro_init() local config_strings_two: table[string] of string = { ["offset"] = "-3", # 2 characters before end, last char is newline. 
}; + local config_strings_three: table[string] of string = { + ["offset"] = "-1", # End of file + }; Input::add_event([$source="../input.log", $config=config_strings, $reader=Input::READER_RAW, $mode=Input::STREAM, $name="input", $fields=Val, $ev=line, $want_record=F]); Input::add_event([$source="../input.log", $config=config_strings_two, $reader=Input::READER_RAW, $mode=Input::STREAM, $name="input2", $fields=Val, $ev=line, $want_record=F]); + Input::add_event([$source="../input2.log", $config=config_strings_three, $reader=Input::READER_RAW, $mode=Input::STREAM, $name="input3", $fields=Val, $ev=line, $want_record=F]); } From 708ede22c6781e854739c67332ac18a391f4782f Mon Sep 17 00:00:00 2001 From: Johanna Amann Date: Fri, 18 Sep 2015 12:32:23 -0700 Subject: [PATCH 18/34] Refactor X509 generalizedtime support and test. The generalizedtime support in for certificates now fits more seamlessly to how the rest of the code was structured and does the different processing for UTC and generalized times at the beginning, when checking for them. The test does not output the common name anymore, since the output format might change accross openssl versions (inserted the serial instead). I also added a bit more error checking for the UTC time case. --- src/file_analysis/analyzer/x509/X509.cc | 64 ++++++++++--------- .../Baseline/core.x509-generalizedtime/output | 8 +-- testing/btest/core/x509-generalizedtime.bro | 2 +- 3 files changed, 40 insertions(+), 34 deletions(-) diff --git a/src/file_analysis/analyzer/x509/X509.cc b/src/file_analysis/analyzer/x509/X509.cc index 9059a3e250..534676d41f 100644 --- a/src/file_analysis/analyzer/x509/X509.cc +++ b/src/file_analysis/analyzer/x509/X509.cc @@ -521,7 +521,7 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* const char *fid = arg_fid ? arg_fid : ""; time_t lResult = 0; - char lBuffer[24]; + char lBuffer[26]; char* pBuffer = lBuffer; const char *pString = (const char *) atime->data; @@ -535,16 +535,35 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* return 0; } + if ( pString[atime->length-1] != 'Z' ) + { + // not valid according to RFC 2459 4.1.2.5.1 + reporter->Weird(fmt("Could not parse UTC time in non-YY-format in X509 certificate (x509 %s)", fid)); + return 0; + } + + // year is first two digits in YY format. Buffer expects YYYY format. + if ( pString[0] - '0' < 50 ) // RFC 2459 4.1.2.5.1 + { + *(pBuffer++) = '2'; + *(pBuffer++) = '0'; + } + else + { + *(pBuffer++) = '1'; + *(pBuffer++) = '9'; + } + memcpy(pBuffer, pString, 10); pBuffer += 10; pString += 10; remaining -= 10; } - - else + else if ( atime->type == V_ASN1_GENERALIZEDTIME ) { // generalized time. We apparently ignore the YYYYMMDDHH case // for now and assume we always have minutes and seconds. 
+ // This should be ok because it is specified as a requirement in RFC 2459 4.1.2.5.2 if ( remaining < 12 || remaining > 23 ) { @@ -557,6 +576,11 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* pString += 12; remaining -= 12; } + else + { + reporter->Weird(fmt("Invalid time type in X509 certificate (fuid %s)", fid)); + return 0; + } if ( (remaining == 0) || (*pString == 'Z') || (*pString == '-') || (*pString == '+') ) { @@ -620,33 +644,15 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* } tm lTime; - size_t i; - if ( atime->type == V_ASN1_GENERALIZEDTIME ) - { - // YYYY format - lTime.tm_year = (lBuffer[0] - '0') * 1000; - lTime.tm_year += (lBuffer[1] - '0') * 100; - lTime.tm_year += (lBuffer[2] - '0') * 10; - lTime.tm_year += (lBuffer[3] - '0'); - if ( lTime.tm_year > 1900) - lTime.tm_year -= 1900; - i = 4; - } - else - { - // YY format - lTime.tm_year = (lBuffer[0] - '0') * 10; - lTime.tm_year += (lBuffer[1] - '0'); - if ( lTime.tm_year < 50 ) - lTime.tm_year += 100; // RFC 2459 - i = 2; - } + lTime.tm_sec = ((lBuffer[12] - '0') * 10) + (lBuffer[13] - '0'); + lTime.tm_min = ((lBuffer[10] - '0') * 10) + (lBuffer[11] - '0'); + lTime.tm_hour = ((lBuffer[8] - '0') * 10) + (lBuffer[9] - '0'); + lTime.tm_mday = ((lBuffer[6] - '0') * 10) + (lBuffer[7] - '0'); + lTime.tm_mon = (((lBuffer[4] - '0') * 10) + (lBuffer[5] - '0')) - 1; + lTime.tm_year = (lBuffer[0] - '0') * 1000 + (lBuffer[1] - '0') * 100 + ((lBuffer[2] - '0') * 10) + (lBuffer[3] - '0'); - lTime.tm_mon = ((lBuffer[i+0] - '0') * 10) + (lBuffer[i+1] - '0') - 1; // MM - lTime.tm_mday = ((lBuffer[i+2] - '0') * 10) + (lBuffer[i+3] - '0'); // DD - lTime.tm_hour = ((lBuffer[i+4] - '0') * 10) + (lBuffer[i+5] - '0'); // hh - lTime.tm_min = ((lBuffer[i+6] - '0') * 10) + (lBuffer[i+7] - '0'); // mm - lTime.tm_sec = ((lBuffer[i+8] - '0') * 10) + (lBuffer[i+9] - '0'); // ss + if ( lTime.tm_year > 1900) + lTime.tm_year -= 1900; lTime.tm_wday = 0; lTime.tm_yday = 0; diff --git a/testing/btest/Baseline/core.x509-generalizedtime/output b/testing/btest/Baseline/core.x509-generalizedtime/output index 75605f5668..349551efe5 100644 --- a/testing/btest/Baseline/core.x509-generalizedtime/output +++ b/testing/btest/Baseline/core.x509-generalizedtime/output @@ -1,16 +1,16 @@ ----- x509_certificate ---- -subject: CN=bro-generalizedtime-test,O=Bro,C=NL +serial: 03E8 not_valid_before: 2015-09-01-13:33:37.000000000 (epoch: 1441114417.0) not_valid_after : 2025-09-01-13:33:37.000000000 (epoch: 1756733617.0) ----- x509_certificate ---- -subject: CN=*.taleo.net,OU=Comodo PremiumSSL Wildcard,OU=Web,O=Taleo Inc.,street=4140 Dublin Boulevard,street=Suite 400,L=Dublin,ST=CA,postalCode=94568,C=US +serial: 99FAA8037A4EB2FAEF84EB5E55D5B8C8 not_valid_before: 2011-05-04-00:00:00.000000000 (epoch: 1304467200.0) not_valid_after : 2016-07-04-23:59:59.000000000 (epoch: 1467676799.0) ----- x509_certificate ---- -subject: CN=COMODO High-Assurance Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB +serial: 1690C329B6780607511F05B0344846CB not_valid_before: 2010-04-16-00:00:00.000000000 (epoch: 1271376000.0) not_valid_after : 2020-05-30-10:48:38.000000000 (epoch: 1590835718.0) ----- x509_certificate ---- -subject: CN=AddTrust External CA Root,OU=AddTrust External TTP Network,O=AddTrust AB,C=SE +serial: 01 not_valid_before: 2000-05-30-10:48:38.000000000 (epoch: 959683718.0) not_valid_after : 2020-05-30-10:48:38.000000000 (epoch: 1590835718.0) diff --git 
a/testing/btest/core/x509-generalizedtime.bro b/testing/btest/core/x509-generalizedtime.bro index 5d82b28ca8..b69ab31743 100644 --- a/testing/btest/core/x509-generalizedtime.bro +++ b/testing/btest/core/x509-generalizedtime.bro @@ -4,7 +4,7 @@ event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) { print "----- x509_certificate ----"; - print fmt("subject: %s", cert$subject); + print fmt("serial: %s", cert$serial); print fmt("not_valid_before: %T (epoch: %s)", cert$not_valid_before, cert$not_valid_before); print fmt("not_valid_after : %T (epoch: %s)", cert$not_valid_after, cert$not_valid_after); } From 5785530c6b2b4c6d4ae2083d1866e411cd6feb93 Mon Sep 17 00:00:00 2001 From: Johanna Amann Date: Fri, 18 Sep 2015 12:55:55 -0700 Subject: [PATCH 19/34] Make x509 end-of-string-check nicer. Use remaining instead of the total length, in case someone changes the code later and changes pString before. --- src/file_analysis/analyzer/x509/X509.cc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/file_analysis/analyzer/x509/X509.cc b/src/file_analysis/analyzer/x509/X509.cc index 534676d41f..e8ea5cb7b4 100644 --- a/src/file_analysis/analyzer/x509/X509.cc +++ b/src/file_analysis/analyzer/x509/X509.cc @@ -535,7 +535,7 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* return 0; } - if ( pString[atime->length-1] != 'Z' ) + if ( pString[remaining-1] != 'Z' ) { // not valid according to RFC 2459 4.1.2.5.1 reporter->Weird(fmt("Could not parse UTC time in non-YY-format in X509 certificate (x509 %s)", fid)); From 6f1e07f6d5ada9a34a51e0aae1a5823bb7a7f9cc Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Fri, 18 Sep 2015 17:27:30 -0500 Subject: [PATCH 20/34] Fixed some test canonifiers to read only from stdin Fixed some test canonifier scripts to read from stdin instead of from a filename specified as a cmd-line argument. This is needed in order to be able to reliably use them in a pipeline with other test canonifiers. Also removed some unused test canonifier scripts. --- testing/scripts/diff-canon-notice-policy | 10 -------- testing/scripts/diff-canonifier-external | 1 - testing/scripts/diff-remove-file-ids | 7 +++--- testing/scripts/diff-remove-mime-types | 29 ------------------------ testing/scripts/diff-remove-uids | 7 +++--- testing/scripts/diff-remove-x509-names | 18 ++++++++------- 6 files changed, 16 insertions(+), 56 deletions(-) delete mode 100755 testing/scripts/diff-canon-notice-policy delete mode 100755 testing/scripts/diff-remove-mime-types diff --git a/testing/scripts/diff-canon-notice-policy b/testing/scripts/diff-canon-notice-policy deleted file mode 100755 index f05abaa103..0000000000 --- a/testing/scripts/diff-canon-notice-policy +++ /dev/null @@ -1,10 +0,0 @@ -#! /usr/bin/awk -f -# -# A diff canonifier that removes the priorities in notice_policy.log. 
- -/^#/ && $2 == "notice_policy" { filter = 1; } - -filter == 1 && /^[^#]/ { sub("^[0-9]*", "X"); } - -{ print; } - diff --git a/testing/scripts/diff-canonifier-external b/testing/scripts/diff-canonifier-external index ee6405b3a8..611d7c7baf 100755 --- a/testing/scripts/diff-canonifier-external +++ b/testing/scripts/diff-canonifier-external @@ -18,7 +18,6 @@ fi | `dirname $0`/diff-remove-uids \ | `dirname $0`/diff-remove-file-ids \ | `dirname $0`/diff-remove-x509-names \ - | `dirname $0`/diff-canon-notice-policy \ | `dirname $0`/diff-sort \ | eval $addl diff --git a/testing/scripts/diff-remove-file-ids b/testing/scripts/diff-remove-file-ids index 965a74442e..b34191f2c8 100755 --- a/testing/scripts/diff-remove-file-ids +++ b/testing/scripts/diff-remove-file-ids @@ -1,7 +1,8 @@ -#! /usr/bin/awk -f +#! /usr/bin/env bash # # A diff canonifier that removes all file IDs from files.log +awk ' BEGIN { FS="\t"; OFS="\t"; @@ -28,6 +29,4 @@ process && column1 > 0 && column2 > 0 { } { print } - - - +' diff --git a/testing/scripts/diff-remove-mime-types b/testing/scripts/diff-remove-mime-types deleted file mode 100755 index b8cc3d1e6d..0000000000 --- a/testing/scripts/diff-remove-mime-types +++ /dev/null @@ -1,29 +0,0 @@ -#! /usr/bin/awk -f -# -# A diff canonifier that removes all MIME types because libmagic output -# can differ between installations. - -BEGIN { FS="\t"; OFS="\t"; type_col = -1; desc_col = -1 } - -/^#fields/ { - for ( i = 2; i < NF; ++i ) - { - if ( $i == "mime_type" ) - type_col = i-1; - if ( $i == "mime_desc" ) - desc_col = i-1; - } -} - -function remove_mime (n) { - if ( n >= 0 && $n != "-" ) - # Mark that it's set, but ignore content. - $n = "+" -} - -remove_mime(type_col) -remove_mime(desc_col) - -{ - print; -} diff --git a/testing/scripts/diff-remove-uids b/testing/scripts/diff-remove-uids index 8e12b7abe5..3c3faae083 100755 --- a/testing/scripts/diff-remove-uids +++ b/testing/scripts/diff-remove-uids @@ -1,7 +1,8 @@ -#! /usr/bin/awk -f +#! /usr/bin/env bash # # A diff canonifier that removes all connection UIDs. +awk ' BEGIN { FS="\t"; OFS="\t"; } column > 0 { @@ -16,6 +17,4 @@ column > 0 { } { print } - - - +' diff --git a/testing/scripts/diff-remove-x509-names b/testing/scripts/diff-remove-x509-names index 4534cb7d87..d7c1fe7032 100755 --- a/testing/scripts/diff-remove-x509-names +++ b/testing/scripts/diff-remove-x509-names @@ -1,8 +1,9 @@ -#! /usr/bin/awk -f +#! /usr/bin/env bash # # A diff canonifier that removes all X.509 Distinguished Name subject fields # because that output can differ depending on installed OpenSSL version. +awk ' BEGIN { FS="\t"; OFS="\t"; s_col = -1; i_col = -1; is_col = -1; cs_col = -1; ci_col = -1; cert_subj_col = -1; cert_issuer_col = -1 } /^#fields/ { @@ -27,46 +28,47 @@ BEGIN { FS="\t"; OFS="\t"; s_col = -1; i_col = -1; is_col = -1; cs_col = -1; ci_ s_col >= 0 { if ( $s_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. $s_col = "+"; } i_col >= 0 { if ( $i_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. $i_col = "+"; } is_col >= 0 { if ( $is_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. $is_col = "+"; } cs_col >= 0 { if ( $cs_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. $cs_col = "+"; } ci_col >= 0 { if ( $ci_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. 
$ci_col = "+"; } cert_subj_col >= 0 { if ( $cert_subj_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. $cert_subj_col = "+"; } cert_issuer_col >= 0 { if ( $cert_issuer_col != "-" ) - # Mark that it's set, but ignore content. + # Mark that it is set, but ignore content. $cert_issuer_col = "+"; } { print; } +' From 8a16145e31a2acddab274ff95f17d27b70752c48 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Fri, 18 Sep 2015 17:32:30 -0500 Subject: [PATCH 21/34] Remove unnecessary use of TEST_DIFF_CANONIFIER Removed a TEST_DIFF_CANONIFIER from a test, because it is already set in btest.cfg, and this one also doesn't actually specify the path to the script. --- testing/btest/plugins/writer.bro | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/testing/btest/plugins/writer.bro b/testing/btest/plugins/writer.bro index 8cecff6843..732d726fd7 100644 --- a/testing/btest/plugins/writer.bro +++ b/testing/btest/plugins/writer.bro @@ -4,5 +4,5 @@ # @TEST-EXEC: BRO_PLUGIN_PATH=`pwd` bro -NN Demo::Foo >>output # @TEST-EXEC: echo === >>output # @TEST-EXEC: BRO_PLUGIN_PATH=`pwd` bro -r $TRACES/socks.trace Log::default_writer=Log::WRITER_FOO %INPUT | sort >>output -# @TEST-EXEC: TEST_DIFF_CANONIFIER=diff-remove-timestamps btest-diff output +# @TEST-EXEC: btest-diff output From a7aa393aefc8f4b8e71dfc70d0812b84972a83f4 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Sat, 19 Sep 2015 18:08:31 -0500 Subject: [PATCH 22/34] Improve a few test canonifiers --- testing/scripts/diff-remove-fields | 18 ++++++++---------- testing/scripts/diff-remove-file-ids | 12 +++++++----- testing/scripts/diff-remove-uids | 10 ++++++---- 3 files changed, 21 insertions(+), 19 deletions(-) diff --git a/testing/scripts/diff-remove-fields b/testing/scripts/diff-remove-fields index 7f18748a5f..3b20425f72 100755 --- a/testing/scripts/diff-remove-fields +++ b/testing/scripts/diff-remove-fields @@ -4,8 +4,8 @@ # prefix. if [ $# != 1 ]; then - echo "usage: `basename $0` " - exit 1 + echo "usage: `basename $0` " + exit 1 fi awk -v "PREFIX=$1" ' @@ -18,17 +18,15 @@ BEGIN { FS="\t"; OFS="\t"; } if ( index($i, PREFIX) == 1 ) rem[i-1] = 1; } - print; - next; +} + +/^[^#]/ { + for ( i in rem ) + # Mark that it is set, but ignore content. + $i = "+"; } { - for ( i in rem ) - # Mark that it iss set, but ignore content. 
- $i = "+"; - print; } - ' - diff --git a/testing/scripts/diff-remove-file-ids b/testing/scripts/diff-remove-file-ids index b34191f2c8..d6c3e7c813 100755 --- a/testing/scripts/diff-remove-file-ids +++ b/testing/scripts/diff-remove-file-ids @@ -13,13 +13,15 @@ $1 == "#path" && $2 == "files" { process = 1; } -process && column1 > 0 && column2 > 0 { - $column1 = "XXXXXXXXXXX"; - $column2 = "XXXXXXXXXXX"; +/^[^#]/ { + if ( process && column1 > 0 && column2 > 0 ) { + $column1 = "XXXXXXXXXXX"; + $column2 = "XXXXXXXXXXX"; + } } -/^#/ { - for ( i = 0; i < NF; ++i ) { +/^#fields/ { + for ( i = 2; i <= NF; ++i ) { if ( $i == "fuid" ) column1 = i - 1; diff --git a/testing/scripts/diff-remove-uids b/testing/scripts/diff-remove-uids index 3c3faae083..4d4d041b12 100755 --- a/testing/scripts/diff-remove-uids +++ b/testing/scripts/diff-remove-uids @@ -5,12 +5,14 @@ awk ' BEGIN { FS="\t"; OFS="\t"; } -column > 0 { - $column = "XXXXXXXXXXX"; +/^[^#]/ { + if ( column > 0 ) { + $column = "XXXXXXXXXXX"; } +} -/^#/ { - for ( i = 0; i < NF; ++i ) { +/^#fields/ { + for ( i = 2; i <= NF; ++i ) { if ( $i == "uid" ) column = i - 1; } From aa5471ec153c5b21371097fe862a6bbe53e0abbd Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Mon, 21 Sep 2015 16:42:53 -0500 Subject: [PATCH 23/34] Improve documentation of input framework --- scripts/base/frameworks/input/main.bro | 82 ++++++++++++++++---------- 1 file changed, 52 insertions(+), 30 deletions(-) diff --git a/scripts/base/frameworks/input/main.bro b/scripts/base/frameworks/input/main.bro index 82c46b870c..41e42d2da9 100644 --- a/scripts/base/frameworks/input/main.bro +++ b/scripts/base/frameworks/input/main.bro @@ -1,18 +1,25 @@ ##! The input framework provides a way to read previously stored data either -##! as an event stream or into a bro table. +##! as an event stream or into a Bro table. module Input; export { type Event: enum { + ## New data has been imported. EVENT_NEW = 0, + ## Existing data has been changed. EVENT_CHANGED = 1, + ## Previously existing data has been removed. EVENT_REMOVED = 2, }; + ## Type that defines the input stream read mode. type Mode: enum { + ## Do not automatically reread the file after it has been read. MANUAL = 0, + ## Reread the entire file each time a change is found. REREAD = 1, + ## Read data from end of file each time new data is appended. STREAM = 2 }; @@ -47,11 +54,11 @@ export { ## abort. Defaults to false (abort). const accept_unsupported_types = F &redef; - ## TableFilter description type used for the `table` method. + ## A table input stream type used to send data to a Bro table. type TableDescription: record { # Common definitions for tables and events - ## String that allows the reader to find the source. + ## String that allows the reader to find the source of the data. ## For `READER_ASCII`, this is the filename. source: string; @@ -61,7 +68,8 @@ export { ## Read mode to use for this stream. mode: Mode &default=default_mode; - ## Descriptive name. Used to remove a stream at a later time. + ## Name of the input stream. This is used by some functions to + ## manipulate the stream. name: string; # Special definitions for tables @@ -76,29 +84,32 @@ export { ## If this is undefined, then *destination* must be a set. val: any &optional; - ## Defines if the value of the table is a record (default), or a single value. - ## When this is set to false, then *val* can only contain one element. + ## Defines if the value of the table is a record (default), or a single + ## value. 
When this is set to false, then *val* can only contain one + ## element. want_record: bool &default=T; - ## The event that is raised each time a value is added to, changed in or - ## removed from the table. The event will receive an Input::Event enum - ## as the first argument, the *idx* record as the second argument and - ## the value (record) as the third argument. - ev: any &optional; # event containing idx, val as values. + ## The event that is raised each time a value is added to, changed in, + ## or removed from the table. The event will receive an + ## Input::TableDescription as the first argument, an Input::Event + ## enum as the second argument, the *idx* record as the third argument + ## and the value (record) as the fourth argument. + ev: any &optional; ## Predicate function that can decide if an insertion, update or removal - ## should really be executed. Parameters are the same as for the event. + ## should really be executed. Parameters have same meaning as for the + ## event. ## If true is returned, the update is performed. If false is returned, ## it is skipped. pred: function(typ: Input::Event, left: any, right: any): bool &optional; - ## A key/value table that will be passed on the reader. - ## Interpretation of the values is left to the writer, but + ## A key/value table that will be passed to the reader. + ## Interpretation of the values is left to the reader, but ## usually they will be used for configuration purposes. - config: table[string] of string &default=table(); + config: table[string] of string &default=table(); }; - ## EventFilter description type used for the `event` method. + ## An event input stream type used to send input data to a Bro event. type EventDescription: record { # Common definitions for tables and events @@ -117,20 +128,26 @@ export { # Special definitions for events - ## Record describing the fields to be retrieved from the source input. + ## Record type describing the fields to be retrieved from the input + ## source. fields: any; - ## If this is false, the event receives each value in fields as a separate argument. - ## If this is set to true (default), the event receives all fields in a single record value. + ## If this is false, the event receives each value in *fields* as a + ## separate argument. + ## If this is set to true (default), the event receives all fields in + ## a single record value. want_record: bool &default=T; ## The event that is raised each time a new line is received from the - ## reader. The event will receive an Input::Event enum as the first - ## element, and the fields as the following arguments. + ## reader. The event will receive an Input::EventDescription record + ## as the first argument, an Input::Event enum as the second + ## argument, and the fields (as specified in *fields*) as the following + ## arguments (this will either be a single record value containing + ## all fields, or each field value as a separate argument). ev: any; - ## A key/value table that will be passed on the reader. - ## Interpretation of the values is left to the writer, but + ## A key/value table that will be passed to the reader. + ## Interpretation of the values is left to the reader, but ## usually they will be used for configuration purposes. config: table[string] of string &default=table(); }; @@ -157,28 +174,29 @@ export { ## field will be the same value as the *source* field. name: string; - ## A key/value table that will be passed on the reader. 
- ## Interpretation of the values is left to the writer, but + ## A key/value table that will be passed to the reader. + ## Interpretation of the values is left to the reader, but ## usually they will be used for configuration purposes. config: table[string] of string &default=table(); }; - ## Create a new table input from a given source. + ## Create a new table input stream from a given source. ## ## description: `TableDescription` record describing the source. ## ## Returns: true on success. global add_table: function(description: Input::TableDescription) : bool; - ## Create a new event input from a given source. + ## Create a new event input stream from a given source. ## ## description: `EventDescription` record describing the source. ## ## Returns: true on success. global add_event: function(description: Input::EventDescription) : bool; - ## Create a new file analysis input from a given source. Data read from - ## the source is automatically forwarded to the file analysis framework. + ## Create a new file analysis input stream from a given source. Data read + ## from the source is automatically forwarded to the file analysis + ## framework. ## ## description: A record describing the source. ## @@ -201,7 +219,11 @@ export { ## Event that is called when the end of a data source has been reached, ## including after an update. - global end_of_data: event(name: string, source:string); + ## + ## name: Name of the input stream. + ## + ## source: String that identifies the data source (such as the filename). + global end_of_data: event(name: string, source: string); } @load base/bif/input.bif From 160b852f643d35c02dfc634ba1077da19bdabbdd Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Tue, 22 Sep 2015 13:03:28 -0500 Subject: [PATCH 24/34] Update install instructions for CAF --- doc/install/install.rst | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/doc/install/install.rst b/doc/install/install.rst index 10fdfeefaf..8321ee7504 100644 --- a/doc/install/install.rst +++ b/doc/install/install.rst @@ -47,9 +47,7 @@ To build Bro from source, the following additional dependencies are required: * zlib headers * Python -.. todo:: - - Update with instructions for installing CAF. +To install CAF, first download the source code of the required version from: https://github.com/actor-framework/actor-framework/releases To install the required dependencies, you can use: @@ -84,11 +82,11 @@ To install the required dependencies, you can use: "Preferences..." -> "Downloads" menus to install the "Command Line Tools" component). - OS X comes with all required dependencies except for CMake_ and SWIG_. + OS X comes with all required dependencies except for CMake_, SWIG_, and CAF. Distributions of these dependencies can likely be obtained from your - preferred Mac OS X package management system (e.g. MacPorts_, Fink_, - or Homebrew_). Specifically for MacPorts, the ``cmake``, ``swig``, - and ``swig-python`` packages provide the required dependencies. + preferred Mac OS X package management system (e.g. Homebrew_, MacPorts_, + or Fink_). Specifically for Homebrew, the ``cmake``, ``swig``, + and ``caf`` packages provide the required dependencies. 
Optional Dependencies From 8896679a015b115eb78a7c3128243a04fc0ba422 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Tue, 22 Sep 2015 15:08:16 -0500 Subject: [PATCH 25/34] More improvements to input framework documentation Fixed more typos, reformatted the code examples to remove the horizontal scroll bars, and removed some redundant sections that were just outdated copies of information in the auto-generated reference docs. --- doc/frameworks/input.rst | 275 +++++++++---------------- scripts/base/frameworks/input/main.bro | 8 +- 2 files changed, 102 insertions(+), 181 deletions(-) diff --git a/doc/frameworks/input.rst b/doc/frameworks/input.rst index ef40756a26..aa2dce6417 100644 --- a/doc/frameworks/input.rst +++ b/doc/frameworks/input.rst @@ -32,7 +32,8 @@ For this example we assume that we want to import data from a blacklist that contains server IP addresses as well as the timestamp and the reason for the block. -An example input file could look like this: +An example input file could look like this (note that all fields must be +tab-separated): :: @@ -63,19 +64,23 @@ The two records are defined as: reason: string; }; -Note that the names of the fields in the record definitions have to correspond +Note that the names of the fields in the record definitions must correspond to the column names listed in the '#fields' line of the log file, in this -case 'ip', 'timestamp', and 'reason'. +case 'ip', 'timestamp', and 'reason'. Also note that the ordering of the +columns does not matter, because each column is identified by name. -The log file is read into the table with a simple call of the ``add_table`` -function: +The log file is read into the table with a simple call of the +:bro:id:`Input::add_table` function: .. code:: bro global blacklist: table[addr] of Val = table(); - Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist]); - Input::remove("blacklist"); + event bro_init() { + Input::add_table([$source="blacklist.file", $name="blacklist", + $idx=Idx, $val=Val, $destination=blacklist]); + Input::remove("blacklist"); + } With these three lines we first create an empty table that should contain the blacklist data and then instruct the input framework to open an input stream @@ -92,7 +97,7 @@ Because of this, the data is not immediately accessible. Depending on the size of the data source it might take from a few milliseconds up to a few seconds until all data is present in the table. Please note that this means that when Bro is running without an input source or on very short captured -files, it might terminate before the data is present in the system (because +files, it might terminate before the data is present in the table (because Bro already handled all packets before the import thread finished). Subsequent calls to an input source are queued until the previous action has @@ -101,8 +106,8 @@ been completed. Because of this, it is, for example, possible to call will remain queued until the first read has been completed. Once the input framework finishes reading from a data source, it fires -the ``end_of_data`` event. Once this event has been received all data -from the input file is available in the table. +the :bro:id:`Input::end_of_data` event. Once this event has been received all +data from the input file is available in the table. .. code:: bro @@ -111,9 +116,9 @@ from the input file is available in the table. 
print blacklist; } -The table can also already be used while the data is still being read - it -just might not contain all lines in the input file when the event has not -yet fired. After it has been populated it can be used like any other Bro +The table can be used while the data is still being read - it +just might not contain all lines from the input file before the event has +fired. After the table has been populated it can be used like any other Bro table and blacklist entries can easily be tested: .. code:: bro @@ -130,10 +135,11 @@ changing. For these cases, the Bro input framework supports several ways to deal with changing data files. The first, very basic method is an explicit refresh of an input stream. When -an input stream is open, the function ``force_update`` can be called. This -will trigger a complete refresh of the table; any changed elements from the -file will be updated. After the update is finished the ``end_of_data`` -event will be raised. +an input stream is open (this means it has not yet been removed by a call to +:bro:id:`Input::remove`), the function :bro:id:`Input::force_update` can be +called. This will trigger a complete refresh of the table; any changed +elements from the file will be updated. After the update is finished the +:bro:id:`Input::end_of_data` event will be raised. In our example the call would look like: @@ -141,30 +147,35 @@ In our example the call would look like: Input::force_update("blacklist"); -The input framework also supports two automatic refresh modes. The first mode -continually checks if a file has been changed. If the file has been changed, it +Alternatively, the input framework can automatically refresh the table +contents when it detects a change to the input file. To use this feature, +you need to specify a non-default read mode by setting the ``mode`` option +of the :bro:id:`Input::add_table` call. Valid values are ``Input::MANUAL`` +(the default), ``Input::REREAD`` and ``Input::STREAM``. For example, +setting the value of the ``mode`` option in the previous example +would look like this: + +.. code:: bro + + Input::add_table([$source="blacklist.file", $name="blacklist", + $idx=Idx, $val=Val, $destination=blacklist, + $mode=Input::REREAD]); + +When using the reread mode (i.e., ``$mode=Input::REREAD``), Bro continually +checks if the input file has been changed. If the file has been changed, it is re-read and the data in the Bro table is updated to reflect the current state. Each time a change has been detected and all the new data has been read into the table, the ``end_of_data`` event is raised. -The second mode is a streaming mode. This mode assumes that the source data -file is an append-only file to which new data is continually appended. Bro -continually checks for new data at the end of the file and will add the new -data to the table. If newer lines in the file have the same index as previous -lines, they will overwrite the values in the output table. Because of the -nature of streaming reads (data is continually added to the table), -the ``end_of_data`` event is never raised when using streaming reads. +When using the streaming mode (i.e., ``$mode=Input::STREAM``), Bro assumes +that the source data file is an append-only file to which new data is +continually appended. Bro continually checks for new data at the end of +the file and will add the new data to the table. If newer lines in the +file have the same index as previous lines, they will overwrite the +values in the output table. 
Because of the nature of streaming reads +(data is continually added to the table), the ``end_of_data`` event +is never raised when using streaming reads. -The reading mode can be selected by setting the ``mode`` option of the -add_table call. Valid values are ``MANUAL`` (the default), ``REREAD`` -and ``STREAM``. - -Hence, when adding ``$mode=Input::REREAD`` to the previous example, the -blacklist table will always reflect the state of the blacklist input file. - -.. code:: bro - - Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD]); Receiving change events ----------------------- @@ -173,34 +184,40 @@ When re-reading files, it might be interesting to know exactly which lines in the source files have changed. For this reason, the input framework can raise an event each time when a data -item is added to, removed from or changed in a table. +item is added to, removed from, or changed in a table. -The event definition looks like this: +The event definition looks like this (note that you can change the name of +this event in your own Bro script): .. code:: bro - event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) { - # act on values + event entry(description: Input::TableDescription, tpe: Input::Event, + left: Idx, right: Val) { + # do something here... + print fmt("%s = %s", left, right); } -The event has to be specified in ``$ev`` in the ``add_table`` call: +The event must be specified in ``$ev`` in the ``add_table`` call: .. code:: bro - Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]); + Input::add_table([$source="blacklist.file", $name="blacklist", + $idx=Idx, $val=Val, $destination=blacklist, + $mode=Input::REREAD, $ev=entry]); -The ``description`` field of the event contains the arguments that were +The ``description`` argument of the event contains the arguments that were originally supplied to the add_table call. Hence, the name of the stream can, -for example, be accessed with ``description$name``. ``tpe`` is an enum -containing the type of the change that occurred. +for example, be accessed with ``description$name``. The ``tpe`` argument of the +event is an enum containing the type of the change that occurred. If a line that was not previously present in the table has been added, -then ``tpe`` will contain ``Input::EVENT_NEW``. In this case ``left`` contains -the index of the added table entry and ``right`` contains the values of the -added entry. +then the value of ``tpe`` will be ``Input::EVENT_NEW``. In this case ``left`` +contains the index of the added table entry and ``right`` contains the +values of the added entry. If a table entry that already was present is altered during the re-reading or -streaming read of a file, ``tpe`` will contain ``Input::EVENT_CHANGED``. In +streaming read of a file, then the value of ``tpe`` will be +``Input::EVENT_CHANGED``. In this case ``left`` contains the index of the changed table entry and ``right`` contains the values of the entry before the change. The reason for this is that the table already has been updated when the event is raised. The current @@ -208,8 +225,9 @@ value in the table can be ascertained by looking up the current table value. Hence it is possible to compare the new and the old values of the table. 
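For instance, a change handler along the following lines could compare the old ``reason`` value (passed in ``right``) with the value now stored in the table; this is only a sketch that reuses the ``Idx``, ``Val``, and ``blacklist`` definitions from the earlier example:

.. code:: bro

    event entry(description: Input::TableDescription, tpe: Input::Event,
                left: Idx, right: Val)
        {
        if ( tpe == Input::EVENT_CHANGED && (left$ip in blacklist) )
            {
            # "right" still holds the old values; the table already
            # contains the updated entry.
            local updated = blacklist[left$ip];
            if ( updated$reason != right$reason )
                print fmt("%s: reason changed from '%s' to '%s'",
                          left$ip, right$reason, updated$reason);
            }
        }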
If a table element is removed because it was no longer present during a -re-read, then ``tpe`` will contain ``Input::REMOVED``. In this case ``left`` -contains the index and ``right`` the values of the removed element. +re-read, then the value of ``tpe`` will be ``Input::EVENT_REMOVED``. In this +case ``left`` contains the index and ``right`` the values of the removed +element. Filtering data during import @@ -222,24 +240,26 @@ can either accept or veto the change by returning true for an accepted change and false for a rejected change. Furthermore, it can alter the data before it is written to the table. -The following example filter will reject to add entries to the table when +The following example filter will reject adding entries to the table when they were generated over a month ago. It will accept all changes and all removals of values that are already present in the table. .. code:: bro - Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, - $pred(typ: Input::Event, left: Idx, right: Val) = { - if ( typ != Input::EVENT_NEW ) { - return T; - } - return ( ( current_time() - right$timestamp ) < (30 day) ); - }]); + Input::add_table([$source="blacklist.file", $name="blacklist", + $idx=Idx, $val=Val, $destination=blacklist, + $mode=Input::REREAD, + $pred(typ: Input::Event, left: Idx, right: Val) = { + if ( typ != Input::EVENT_NEW ) { + return T; + } + return (current_time() - right$timestamp) < 30day; + }]); To change elements while they are being imported, the predicate function can manipulate ``left`` and ``right``. Note that predicate functions are called before the change is committed to the table. Hence, when a table element is -changed (``tpe`` is ``INPUT::EVENT_CHANGED``), ``left`` and ``right`` +changed (``typ`` is ``Input::EVENT_CHANGED``), ``left`` and ``right`` contain the new values, but the destination (``blacklist`` in our example) still contains the old values. This allows predicate functions to examine the changes between the old and the new version before deciding if they @@ -250,14 +270,19 @@ Different readers The input framework supports different kinds of readers for different kinds of source data files. At the moment, the default reader reads ASCII files -formatted in the Bro log file format (tab-separated values). At the moment, -Bro comes with two other readers. The ``RAW`` reader reads a file that is -split by a specified record separator (usually newline). The contents are +formatted in the Bro log file format (tab-separated values with a "#fields" +header line). Several other readers are included in Bro. + +The raw reader reads a file that is +split by a specified record separator (newline by default). The contents are returned line-by-line as strings; it can, for example, be used to read configuration files and the like and is probably only useful in the event mode and not for reading data to tables. -Another included reader is the ``BENCHMARK`` reader, which is being used +The binary reader is intended to be used with file analysis input streams (and +is the default type of reader for those streams). + +The benchmark reader is being used to optimize the speed of the input framework. It can generate arbitrary amounts of semi-random data in all Bro data types supported by the input framework. 
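As a sketch of how the raw reader might be used in event mode, the following reads a text file line by line and hands each line to an event as a plain string; the file name ``mydata.txt``, the record ``Line``, and the event ``one_line`` are invented names for this example:

.. code:: bro

    type Line: record {
        s: string;
    };

    event one_line(description: Input::EventDescription, tpe: Input::Event,
                   s: string)
        {
        print fmt("read one line: %s", s);
        }

    event bro_init()
        {
        Input::add_event([$source="mydata.txt", $name="rawlines",
                          $reader=Input::READER_RAW,
                          $fields=Line, $ev=one_line, $want_record=F]);
        }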
@@ -270,75 +295,17 @@ aforementioned ones: logging-input-sqlite -Add_table options ------------------ - -This section lists all possible options that can be used for the add_table -function and gives a short explanation of their use. Most of the options -already have been discussed in the previous sections. - -The possible fields that can be set for a table stream are: - - ``source`` - A mandatory string identifying the source of the data. - For the ASCII reader this is the filename. - - ``name`` - A mandatory name for the filter that can later be used - to manipulate it further. - - ``idx`` - Record type that defines the index of the table. - - ``val`` - Record type that defines the values of the table. - - ``reader`` - The reader used for this stream. Default is ``READER_ASCII``. - - ``mode`` - The mode in which the stream is opened. Possible values are - ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``. - ``MANUAL`` means that the file is not updated after it has - been read. Changes to the file will not be reflected in the - data Bro knows. ``REREAD`` means that the whole file is read - again each time a change is found. This should be used for - files that are mapped to a table where individual lines can - change. ``STREAM`` means that the data from the file is - streamed. Events / table entries will be generated as new - data is appended to the file. - - ``destination`` - The destination table. - - ``ev`` - Optional event that is raised, when values are added to, - changed in, or deleted from the table. Events are passed an - Input::Event description as the first argument, the index - record as the second argument and the values as the third - argument. - - ``pred`` - Optional predicate, that can prevent entries from being added - to the table and events from being sent. - - ``want_record`` - Boolean value, that defines if the event wants to receive the - fields inside of a single record value, or individually - (default). This can be used if ``val`` is a record - containing only one type. In this case, if ``want_record`` is - set to false, the table will contain elements of the type - contained in ``val``. Reading Data to Events ====================== The second supported mode of the input framework is reading data to Bro -events instead of reading them to a table using event streams. +events instead of reading them to a table. Event streams work very similarly to table streams that were already discussed in much detail. To read the blacklist of the previous example -into an event stream, the following Bro code could be used: +into an event stream, the :bro:id:`Input::add_event` function is used. +For example: .. code:: bro @@ -348,12 +315,15 @@ into an event stream, the following Bro code could be used: reason: string; }; - event blacklistentry(description: Input::EventDescription, tpe: Input::Event, ip: addr, timestamp: time, reason: string) { - # work with event data + event blacklistentry(description: Input::EventDescription, + t: Input::Event, data: Val) { + # do something here... + print "data:", data; } event bro_init() { - Input::add_event([$source="blacklist.file", $name="blacklist", $fields=Val, $ev=blacklistentry]); + Input::add_event([$source="blacklist.file", $name="blacklist", + $fields=Val, $ev=blacklistentry]); } @@ -364,52 +334,3 @@ data types are provided in a single record definition. Apart from this, event streams work exactly the same as table streams and support most of the options that are also supported for table streams. 
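If the individual fields are preferred over a single record value, a variant of the example above could set ``$want_record=F``, in which case the handler receives one argument per field of ``Val``; this sketch assumes the same ``Val`` record and blacklist file as before:

.. code:: bro

    event blacklistentry(description: Input::EventDescription,
                         tpe: Input::Event, ip: addr, timestamp: time,
                         reason: string)
        {
        print fmt("%s was blacklisted at %T: %s", ip, timestamp, reason);
        }

    event bro_init()
        {
        Input::add_event([$source="blacklist.file", $name="blacklist",
                          $fields=Val, $ev=blacklistentry, $want_record=F]);
        }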
-The options that can be set when creating an event stream with -``add_event`` are: - - ``source`` - A mandatory string identifying the source of the data. - For the ASCII reader this is the filename. - - ``name`` - A mandatory name for the stream that can later be used - to remove it. - - ``fields`` - Name of a record type containing the fields, which should be - retrieved from the input stream. - - ``ev`` - The event which is fired, after a line has been read from the - input source. The first argument that is passed to the event - is an Input::Event structure, followed by the data, either - inside of a record (if ``want_record is set``) or as - individual fields. The Input::Event structure can contain - information, if the received line is ``NEW``, has been - ``CHANGED`` or ``DELETED``. Since the ASCII reader cannot - track this information for event filters, the value is - always ``NEW`` at the moment. - - ``mode`` - The mode in which the stream is opened. Possible values are - ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``. - ``MANUAL`` means that the file is not updated after it has - been read. Changes to the file will not be reflected in the - data Bro knows. ``REREAD`` means that the whole file is read - again each time a change is found. This should be used for - files that are mapped to a table where individual lines can - change. ``STREAM`` means that the data from the file is - streamed. Events / table entries will be generated as new - data is appended to the file. - - ``reader`` - The reader used for this stream. Default is ``READER_ASCII``. - - ``want_record`` - Boolean value, that defines if the event wants to receive the - fields inside of a single record value, or individually - (default). If this is set to true, the event will receive a - single record of the type provided in ``fields``. - - - diff --git a/scripts/base/frameworks/input/main.bro b/scripts/base/frameworks/input/main.bro index 41e42d2da9..3df418315f 100644 --- a/scripts/base/frameworks/input/main.bro +++ b/scripts/base/frameworks/input/main.bro @@ -31,20 +31,20 @@ export { ## Separator between fields. ## Please note that the separator has to be exactly one character long. - ## Can be overwritten by individual writers. + ## Individual readers can use a different value. const separator = "\t" &redef; ## Separator between set elements. ## Please note that the separator has to be exactly one character long. - ## Can be overwritten by individual writers. + ## Individual readers can use a different value. const set_separator = "," &redef; ## String to use for empty fields. - ## Can be overwritten by individual writers. + ## Individual readers can use a different value. const empty_field = "(empty)" &redef; ## String to use for an unset &optional field. - ## Can be overwritten by individual writers. + ## Individual readers can use a different value. const unset_field = "-" &redef; ## Flag that controls if the input framework accepts records From 6ff68ce6ae4ce3bfc362518279cfbc5d3ab55328 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Tue, 22 Sep 2015 17:42:58 -0500 Subject: [PATCH 26/34] Update and improve install instructions Added info about optional dependencies, and what to do when the configure script fails. A few other clarifications and updates. 
--- doc/install/guidelines.rst | 2 +- doc/install/install.rst | 36 +++++++++++++++++++++++------------- 2 files changed, 24 insertions(+), 14 deletions(-) diff --git a/doc/install/guidelines.rst b/doc/install/guidelines.rst index d1e1777165..a56110f865 100644 --- a/doc/install/guidelines.rst +++ b/doc/install/guidelines.rst @@ -46,4 +46,4 @@ where Bro was originally installed). Review the files for differences before copying and make adjustments as necessary (use the new version for differences that aren't a result of a local change). Of particular note, the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes -to the ``SpoolDir`` and ``LogDir`` settings. +to any settings that specify a pathname. diff --git a/doc/install/install.rst b/doc/install/install.rst index 8321ee7504..ca1ea7f26a 100644 --- a/doc/install/install.rst +++ b/doc/install/install.rst @@ -4,7 +4,7 @@ .. _MacPorts: http://www.macports.org .. _Fink: http://www.finkproject.org .. _Homebrew: http://brew.sh -.. _bro downloads page: http://bro.org/download/index.html +.. _bro downloads page: https://www.bro.org/download/index.html .. _installing-bro: @@ -99,6 +99,8 @@ build time: * sendmail (enables Bro and BroControl to send mail) * curl (used by a Bro script that implements active HTTP) * gperftools (tcmalloc is used to improve memory and CPU usage) + * jemalloc (http://www.canonware.com/jemalloc/) + * PF_RING (Linux only, see :doc:`Cluster Configuration <../configuration/index>`) * ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump) LibGeoIP is probably the most interesting and can be installed @@ -115,7 +117,7 @@ code forms. Using Pre-Built Binary Release Packages -======================================= +--------------------------------------- See the `bro downloads page`_ for currently supported/targeted platforms for binary releases and for installation instructions. @@ -136,13 +138,15 @@ platforms for binary releases and for installation instructions. The primary install prefix for binary packages is ``/opt/bro``. Installing from Source -====================== +---------------------- Bro releases are bundled into source packages for convenience and are -available on the `bro downloads page`_. Alternatively, the latest -Bro development version can be obtained through git repositories +available on the `bro downloads page`_. + +Alternatively, the latest Bro development version +can be obtained through git repositories hosted at ``git.bro.org``. See our `git development documentation -`_ for comprehensive +`_ for comprehensive information on Bro's use of git revision control, but the short story for downloading the full source code experience for Bro via git is: @@ -163,13 +167,23 @@ run ``./configure --help``): make make install +If the ``configure`` script fails, then it is most likely because it either +couldn't find a required dependency or it couldn't find a sufficiently new +version of a dependency. Assuming that you already installed all required +dependencies, then you may need to use one of the ``--with-*`` options +that can be given to the ``configure`` script to help it locate a dependency. + The default installation path is ``/usr/local/bro``, which would typically require root privileges when doing the ``make install``. A different -installation path can be chosen by specifying the ``--prefix`` option. -Note that ``/usr`` and ``/opt/bro`` are the +installation path can be chosen by specifying the ``configure`` script +``--prefix`` option. 
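For example (a sketch only; the pathnames below are placeholders that you
would replace with locations appropriate for your system):

.. console::

    # Example paths only: install into a custom prefix and point the
    # configure script at a non-standard OpenSSL location.
    ./configure --prefix=/home/user/bro-install --with-openssl=/opt/local/ssl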
Note that ``/usr`` and ``/opt/bro`` are the standard prefixes for binary Bro packages to be installed, so those are typically not good choices unless you are creating such a package. +OpenBSD users, please see our `FAQ +`_ if you are having +problems installing Bro. + Depending on the Bro package you downloaded, there may be auxiliary tools and libraries available in the ``aux/`` directory. Some of them will be automatically built and installed along with Bro. There are @@ -178,10 +192,6 @@ turn off unwanted auxiliary projects that would otherwise be installed automatically. Finally, use ``make install-aux`` to install some of the other programs that are in the ``aux/bro-aux`` directory. -OpenBSD users, please see our `FAQ -`_ if you are having -problems installing Bro. - Finally, if you want to build the Bro documentation (not required, because all of the documentation for the latest Bro release is available on the Bro web site), there are instructions in ``doc/README`` in the source @@ -190,7 +200,7 @@ distribution. Configure the Run-Time Environment ================================== -Just remember that you may need to adjust your ``PATH`` environment variable +You may want to adjust your ``PATH`` environment variable according to the platform/shell/package you're using. For example: Bourne-Shell Syntax: From 34adce126b860d21ce8bcc6d8f33e8f6fe5deeb1 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Wed, 23 Sep 2015 11:39:36 -0500 Subject: [PATCH 27/34] Update some doc tests and baselines --- .../output | 2 +- .../output | 4 ++-- .../include-doc_scripting_data_struct_record_01_bro.btest | 2 +- .../include-doc_scripting_data_struct_record_02_bro.btest | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_01_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_01_bro/output index ea390412f6..e67783fdeb 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_01_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_01_bro/output @@ -8,7 +8,7 @@ type Service: record { rfc: count; }; -function print_service(serv: Service): string +function print_service(serv: Service) { print fmt("Service: %s(RFC%d)",serv$name, serv$rfc); diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_02_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_02_bro/output index 143e6c5672..04da3522f2 100644 --- a/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_02_bro/output +++ b/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_record_02_bro/output @@ -13,7 +13,7 @@ type System: record { services: set[Service]; }; -function print_service(serv: Service): string +function print_service(serv: Service) { print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc); @@ -21,7 +21,7 @@ function print_service(serv: Service): string print fmt(" port: %s", p); } -function print_system(sys: System): string +function print_system(sys: System) { print fmt("System: %s", sys$name); diff --git a/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_01_bro.btest b/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_01_bro.btest index ea390412f6..e67783fdeb 100644 --- a/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_01_bro.btest +++ 
b/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_01_bro.btest @@ -8,7 +8,7 @@ type Service: record { rfc: count; }; -function print_service(serv: Service): string +function print_service(serv: Service) { print fmt("Service: %s(RFC%d)",serv$name, serv$rfc); diff --git a/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_02_bro.btest b/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_02_bro.btest index 143e6c5672..04da3522f2 100644 --- a/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_02_bro.btest +++ b/testing/btest/doc/sphinx/include-doc_scripting_data_struct_record_02_bro.btest @@ -13,7 +13,7 @@ type System: record { services: set[Service]; }; -function print_service(serv: Service): string +function print_service(serv: Service) { print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc); @@ -21,7 +21,7 @@ function print_service(serv: Service): string print fmt(" port: %s", p); } -function print_system(sys: System): string +function print_system(sys: System) { print fmt("System: %s", sys$name); From 87170652baa840d0177e2e591e3902fbd241d7b8 Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Wed, 23 Sep 2015 13:23:38 -0500 Subject: [PATCH 28/34] Fix documentation of encode/decode_base64 BiFs Some of these were generating warnings during "make doc". Also simplified the description for some, and corrected a few minor typos. --- src/bro.bif | 37 ++++++++++++++++--------------------- 1 file changed, 16 insertions(+), 21 deletions(-) diff --git a/src/bro.bif b/src/bro.bif index 04394434b3..b0465b9609 100644 --- a/src/bro.bif +++ b/src/bro.bif @@ -2725,13 +2725,12 @@ function hexstr_to_bytestring%(hexstr: string%): string ## ## s: The string to encode. ## -## a: An optional custom alphabet. The empty string indicates the default alphabet. -## If given, the length of *a* must be 64. For example, a custom alphabet could be -## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``. +## a: An optional custom alphabet. The empty string indicates the default +## alphabet. If given, the string must consist of 64 unique characters. ## ## Returns: The encoded version of *s*. ## -## .. bro:see:: decode_base64 decode_base64_conn +## .. bro:see:: decode_base64 function encode_base64%(s: string, a: string &default=""%): string %{ BroString* t = encode_base64(s->AsString(), a->AsString()); @@ -2749,13 +2748,12 @@ function encode_base64%(s: string, a: string &default=""%): string ## ## s: The string to encode. ## -## a: An optional custom alphabet. The empty string indicates the default alphabet. -## If given, the length of *a* must be 64. For example, a custom alphabet could be -## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``. +## a: The custom alphabet. The string must consist of 64 unique +## characters. The empty string indicates the default alphabet. ## ## Returns: The encoded version of *s*. ## -## .. bro:see:: encode_base64 decode_base64 decode_base64_conn +## .. bro:see:: encode_base64 function encode_base64_custom%(s: string, a: string%): string &deprecated %{ BroString* t = encode_base64(s->AsString(), a->AsString()); @@ -2772,13 +2770,12 @@ function encode_base64_custom%(s: string, a: string%): string &deprecated ## ## s: The Base64-encoded string. ## -## a: An optional custom alphabet. The empty string indicates the default alphabet. -## If given, the length of *a* must be 64. For example, a custom alphabet could be -## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``. 
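As a usage illustration (an editorial sketch, not part of the BiF
documentation itself; the strings are arbitrary examples and only the
default alphabet is used):

.. code:: bro

    event bro_init()
        {
        # Round-trip a string through the default Base64 alphabet.
        local enc = encode_base64("Hello, Bro!");
        print enc;
        print decode_base64(enc);
        }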
+## a: An optional custom alphabet. The empty string indicates the default +## alphabet. If given, the string must consist of 64 unique characters. ## ## Returns: The decoded version of *s*. ## -## .. bro:see:: decode_base64_intern encode_base64 +## .. bro:see:: decode_base64_conn encode_base64 function decode_base64%(s: string, a: string &default=""%): string %{ BroString* t = decode_base64(s->AsString(), a->AsString()); @@ -2793,19 +2790,18 @@ function decode_base64%(s: string, a: string &default=""%): string ## Decodes a Base64-encoded string that was derived from processing a connection. ## If an error is encountered decoding the string, that will be logged to -## ``weird.log`` with the associated connection, +## ``weird.log`` with the associated connection. ## ## cid: The identifier of the connection that the encoding originates from. ## ## s: The Base64-encoded string. ## -## a: An optional custom alphabet. The empty string indicates the default alphabet. -## If given, the length of *a* must be 64. For example, a custom alphabet could be -## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``. +## a: An optional custom alphabet. The empty string indicates the default +## alphabet. If given, the string must consist of 64 unique characters. ## ## Returns: The decoded version of *s*. ## -## .. bro:see:: decode_base64 encode_base64_intern +## .. bro:see:: decode_base64 function decode_base64_conn%(cid: conn_id, s: string, a: string &default=""%): string %{ Connection* conn = sessions->FindConnection(cid); @@ -2829,13 +2825,12 @@ function decode_base64_conn%(cid: conn_id, s: string, a: string &default=""%): s ## ## s: The Base64-encoded string. ## -## a: The custom alphabet. The empty string indicates the default alphabet. The -## length of *a* must be 64. For example, a custom alphabet could be -## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``. +## a: The custom alphabet. The string must consist of 64 unique characters. +## The empty string indicates the default alphabet. ## ## Returns: The decoded version of *s*. ## -## .. bro:see:: decode_base64 decode_base64_conn encode_base64 +## .. bro:see:: decode_base64 decode_base64_conn function decode_base64_custom%(s: string, a: string%): string &deprecated %{ BroString* t = decode_base64(s->AsString(), a->AsString()); From ec245241471cda86227a21bd95a4fd013da835ee Mon Sep 17 00:00:00 2001 From: Daniel Thayer Date: Fri, 25 Sep 2015 15:11:41 -0500 Subject: [PATCH 29/34] Add configure option to disable broker python bindings Also improved the configure summary output to more clearly show whether or not broker python bindings will be built. --- CMakeLists.txt | 1 + configure | 5 +++++ 2 files changed, 6 insertions(+) diff --git a/CMakeLists.txt b/CMakeLists.txt index bf55696eb6..846f2b484a 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -234,6 +234,7 @@ message( "\nCPP: ${CMAKE_CXX_COMPILER}" "\n" "\nBroker: ${ENABLE_BROKER}" + "\nBroker Python: ${BROKER_PYTHON_BINDINGS}" "\nBroccoli: ${INSTALL_BROCCOLI}" "\nBroctl: ${INSTALL_BROCTL}" "\nAux. Tools: ${INSTALL_AUX_TOOLS}" diff --git a/configure b/configure index 3e844735a5..f94085f9d3 100755 --- a/configure +++ b/configure @@ -47,6 +47,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]... 
--disable-auxtools don't build or install auxiliary tools --disable-perftools don't try to build with Google Perftools --disable-python don't try to build python bindings for broccoli + --disable-pybroker don't try to build python bindings for broker Required Packages in Non-Standard Locations: --with-openssl=PATH path to OpenSSL install root @@ -121,6 +122,7 @@ append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc append_cache_entry BROKER_PYTHON_HOME PATH $prefix +append_cache_entry BROKER_PYTHON_BINDINGS BOOL false append_cache_entry ENABLE_DEBUG BOOL false append_cache_entry ENABLE_PERFTOOLS BOOL false append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false @@ -217,6 +219,9 @@ while [ $# -ne 0 ]; do --disable-python) append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true ;; + --disable-pybroker) + append_cache_entry DISABLE_PYBROKER BOOL true + ;; --enable-ruby) append_cache_entry DISABLE_RUBY_BINDINGS BOOL false ;; From f1e0ca0be18d313ce7a1bf2d621ab13e07d2cbbb Mon Sep 17 00:00:00 2001 From: Seth Hall Date: Tue, 29 Sep 2015 15:20:26 -0400 Subject: [PATCH 30/34] Update the cmake module to match the commit tcmalloc finding commit. --- cmake | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cmake b/cmake index 0fab31c3b3..843cdf6a91 160000 --- a/cmake +++ b/cmake @@ -1 +1 @@ -Subproject commit 0fab31c3b3b6606831364a9c4266128bb7e53465 +Subproject commit 843cdf6a91f06e5407bffbc79a343bff3cf4c81f From e66b236ae81c6c1d4f9d6a6fc31ee4bc6844506a Mon Sep 17 00:00:00 2001 From: Robin Sommer Date: Thu, 1 Oct 2015 16:31:25 -0700 Subject: [PATCH 31/34] Tiny tweak for code consistency in RAW reader. --- aux/plugins | 2 +- src/input/readers/raw/Raw.cc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/aux/plugins b/aux/plugins index 082676f548..a8bce4c277 160000 --- a/aux/plugins +++ b/aux/plugins @@ -1 +1 @@ -Subproject commit 082676f54874de968bc95bb8fede13a6c2521b5e +Subproject commit a8bce4c27749405c3aaec01686c363163897539b diff --git a/src/input/readers/raw/Raw.cc b/src/input/readers/raw/Raw.cc index df5a32b36c..76d8958fea 100644 --- a/src/input/readers/raw/Raw.cc +++ b/src/input/readers/raw/Raw.cc @@ -302,7 +302,7 @@ bool Raw::OpenInput() if ( offset ) { - int whence = (offset > 0) ? SEEK_SET : SEEK_END; + int whence = (offset >= 0) ? SEEK_SET : SEEK_END; int64_t pos = (offset >= 0) ? offset : offset + 1; // we want -1 to be the end of the file if ( fseek(file, pos, whence) < 0 ) From 24973e56bde91b4626e3835e2af9e132b4ef75fa Mon Sep 17 00:00:00 2001 From: Robin Sommer Date: Thu, 1 Oct 2015 17:02:46 -0700 Subject: [PATCH 32/34] Updating submodule(s). [nomail] --- CHANGES | 27 +++++++++++++++++++++++++++ VERSION | 2 +- aux/binpac | 2 +- 3 files changed, 29 insertions(+), 2 deletions(-) diff --git a/CHANGES b/CHANGES index f83a9cae82..aed573645f 100644 --- a/CHANGES +++ b/CHANGES @@ -1,4 +1,31 @@ +2.4-165 | 2015-10-01 17:02:46 -0700 + + * Fixed parsing of V_ASN1_GENERALIZEDTIME timestamps in x509 + certificates. (Yun Zheng Hu) + + * Improve X509 end-of-string-check code. (Johanna Amann) + + * Refactor X509 generalizedtime support and test. (Johanna Amann) + + * Fix case of offset=-1 (EOF) for RAW reader. Addresses BIT-1479. + (Johanna Amann) + + * Improve a number of test canonifiers. (Daniel Thayer) + + * Remove unnecessary use of TEST_DIFF_CANONIFIER. 
(Daniel Thayer) + + * Fixed some test canonifiers to read only from stdin + + * Remove unused test canonifier scripts. (Daniel Thayer) + + * A potpourri of updates and improvements across the documentation. + (Daniel Thayer) + + * Add configure option to disable Broker Python bindings. Also + improve the configure summary output to more clearly show whether + or not Broker Python bindings will be built. (Daniel Thayer) + 2.4-131 | 2015-09-11 12:16:39 -0700 * Add README.rst symlink. Addresses BIT-1413 (Vlad Grigorescu) diff --git a/VERSION b/VERSION index 9b4d5e4401..75ce3ba2c6 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -2.4-131 +2.4-165 diff --git a/aux/binpac b/aux/binpac index 239a1a809d..ff16caf3d8 160000 --- a/aux/binpac +++ b/aux/binpac @@ -1 +1 @@ -Subproject commit 239a1a809dad7eb190fe31f03d949da3cbf79b3a +Subproject commit ff16caf3d8c5b12febd465a8ddd1524af60eae1a From 7a435026073845cfd5c925d61b56ff9430b8af67 Mon Sep 17 00:00:00 2001 From: Robin Sommer Date: Thu, 1 Oct 2015 17:13:34 -0700 Subject: [PATCH 33/34] Updating submodule(s). [nomail] --- aux/binpac | 2 +- aux/broctl | 2 +- aux/broker | 2 +- aux/btest | 2 +- aux/plugins | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/aux/binpac b/aux/binpac index ff16caf3d8..239a1a809d 160000 --- a/aux/binpac +++ b/aux/binpac @@ -1 +1 @@ -Subproject commit ff16caf3d8c5b12febd465a8ddd1524af60eae1a +Subproject commit 239a1a809dad7eb190fe31f03d949da3cbf79b3a diff --git a/aux/broctl b/aux/broctl index 992a79e1e3..b230db3b0f 160000 --- a/aux/broctl +++ b/aux/broctl @@ -1 +1 @@ -Subproject commit 992a79e1e36cef032373bf42cff456bb3598597d +Subproject commit b230db3b0fba14415177f451ac9b420344901549 diff --git a/aux/broker b/aux/broker index c3cd43a314..b48d061c3a 160000 --- a/aux/broker +++ b/aux/broker @@ -1 +1 @@ -Subproject commit c3cd43a31447cce366394b7bb76399f36509c374 +Subproject commit b48d061c3a13b12dbadbb2bdb725cb861c4ec7b9 diff --git a/aux/btest b/aux/btest index 31fa85eb6b..ce1d474859 160000 --- a/aux/btest +++ b/aux/btest @@ -1 +1 @@ -Subproject commit 31fa85eb6b9564450b3a70f96a1c0e908c810f75 +Subproject commit ce1d474859cc8a0f39d5eaf69fb1bb56eb1a5161 diff --git a/aux/plugins b/aux/plugins index a8bce4c277..9b7943e1a6 160000 --- a/aux/plugins +++ b/aux/plugins @@ -1 +1 @@ -Subproject commit a8bce4c27749405c3aaec01686c363163897539b +Subproject commit 9b7943e1a61062005f01b48eaad11bbb3b7ae757 From 8e1ce364343cab26dccaf39b8bf607065d94b895 Mon Sep 17 00:00:00 2001 From: Robin Sommer Date: Thu, 1 Oct 2015 17:21:21 -0700 Subject: [PATCH 34/34] Updating submodule(s). [nomail] --- CHANGES | 4 ++-- VERSION | 2 +- aux/binpac | 2 +- aux/bro-aux | 2 +- aux/broccoli | 2 +- aux/broctl | 2 +- aux/broker | 2 +- 7 files changed, 8 insertions(+), 8 deletions(-) diff --git a/CHANGES b/CHANGES index aed573645f..77b404540f 100644 --- a/CHANGES +++ b/CHANGES @@ -1,5 +1,5 @@ -2.4-165 | 2015-10-01 17:02:46 -0700 +2.4-169 | 2015-10-01 17:21:21 -0700 * Fixed parsing of V_ASN1_GENERALIZEDTIME timestamps in x509 certificates. (Yun Zheng Hu) @@ -12,7 +12,7 @@ (Johanna Amann) * Improve a number of test canonifiers. (Daniel Thayer) - + * Remove unnecessary use of TEST_DIFF_CANONIFIER. 
(Daniel Thayer) * Fixed some test canonifiers to read only from stdin diff --git a/VERSION b/VERSION index 75ce3ba2c6..622ec2383c 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -2.4-165 +2.4-169 diff --git a/aux/binpac b/aux/binpac index 239a1a809d..214294c502 160000 --- a/aux/binpac +++ b/aux/binpac @@ -1 +1 @@ -Subproject commit 239a1a809dad7eb190fe31f03d949da3cbf79b3a +Subproject commit 214294c502d377bb7bf511eac8c43608e54c875a diff --git a/aux/bro-aux b/aux/bro-aux index 2ec49971f1..4e0d2bff4b 160000 --- a/aux/bro-aux +++ b/aux/bro-aux @@ -1 +1 @@ -Subproject commit 2ec49971f12176e1fabe9db21445435b77bad68e +Subproject commit 4e0d2bff4b2c287f66186c3654ef784bb0748d11 diff --git a/aux/broccoli b/aux/broccoli index 0c051fb343..8046800085 160000 --- a/aux/broccoli +++ b/aux/broccoli @@ -1 +1 @@ -Subproject commit 0c051fb3439abe7b4c915dbdaa751e91140dcf1e +Subproject commit 80468000859bcb7c3784c69280888fcfe89d8922 diff --git a/aux/broctl b/aux/broctl index b230db3b0f..921b0abcb9 160000 --- a/aux/broctl +++ b/aux/broctl @@ -1 +1 @@ -Subproject commit b230db3b0fba14415177f451ac9b420344901549 +Subproject commit 921b0abcb967666d8349c0c6c2bb8e41e1300579 diff --git a/aux/broker b/aux/broker index b48d061c3a..e7da54a3f4 160000 --- a/aux/broker +++ b/aux/broker @@ -1 +1 @@ -Subproject commit b48d061c3a13b12dbadbb2bdb725cb861c4ec7b9 +Subproject commit e7da54a3f40e71ca9020f9846256f60c0b885963