Merge remote-tracking branch 'origin/master' into topic/johanna/ocsp

This commit is contained in:
Johanna Amann 2015-12-14 16:05:41 -08:00
commit da9b5425e4
157 changed files with 1830 additions and 1130 deletions

230
CHANGES

@ -1,4 +1,234 @@
2.4-217 | 2015-12-04 16:50:46 -0800
* SIP scripts code cleanup. (Seth Hall)
- Daniel Guerra pointed out a type issue for SIP request and
response code length fields which is now corrected.
- Some redundant code was removed.
- if/else tree modified to use switch instead.
2.4-214 | 2015-12-04 16:40:15 -0800
* Delaying BinPAC initialization until after plugins have been
activated. (Robin Sommer)
2.4-213 | 2015-12-04 15:25:48 -0800
* Use better data structure for storing BPF filters. (Robin Sommer)
2.4-211 | 2015-11-17 13:28:29 -0800
* Making cluster reconnect timeout configurable. (Robin Sommer)
* Bugfix for child process' communication loop. (Robin Sommer)
2.4-209 | 2015-11-16 07:31:22 -0800
* Updating submodule(s).
2.4-207 | 2015-11-10 13:34:42 -0800
* Fix to compile with OpenSSL that has SSLv3 disabled. (Christoph
Pietsch)
* Fix potential race condition when logging VLAN info to conn.log.
(Daniel Thayer)
2.4-201 | 2015-10-27 16:11:15 -0700
* Updating NEWS. (Robin Sommer)
2.4-200 | 2015-10-26 16:57:39 -0700
* Adding missing file. (Robin Sommer)
2.4-199 | 2015-10-26 16:51:47 -0700
* Fix problem with the JSON Serialization code. (Aaron Eppert)
2.4-188 | 2015-10-26 14:11:21 -0700
* Extending rexmit_inconsistency() event to receive an additional
parameter with the packet's TCP flags, if available. (Robin
Sommer)
2.4-187 | 2015-10-26 13:43:32 -0700
* Updating NEWS for new plugins. (Robin Sommer)
2.4-186 | 2015-10-23 15:07:06 -0700
* Removing pcap options for AF_PACKET support. Addresses BIT-1363.
(Robin Sommer)
* Correct a typo in controller.bro documentation. (Daniel Thayer)
* Extend SSL DPD signature to allow alert before server_hello.
(Johanna Amann)
* Make join_string_vec work with vectors containing empty elements.
(Johanna Amann)
* Fix support for HTTP CONNECT when server adds headers to response.
(Eric Karasuda).
* Load static CA list for validation tests too. (Johanna Amann)
* Remove cluster certificate validation script. (Johanna Amann)
* Fix a bug in diff-remove-x509-names canonifier. (Daniel Thayer)
* Fix test canonifiers in scripts/policy/protocols/ssl. (Daniel
Thayer)
2.4-169 | 2015-10-01 17:21:21 -0700
* Fixed parsing of V_ASN1_GENERALIZEDTIME timestamps in x509
certificates. (Yun Zheng Hu)
* Improve X509 end-of-string-check code. (Johanna Amann)
* Refactor X509 generalizedtime support and test. (Johanna Amann)
* Fix case of offset=-1 (EOF) for RAW reader. Addresses BIT-1479.
(Johanna Amann)
* Improve a number of test canonifiers. (Daniel Thayer)
* Remove unnecessary use of TEST_DIFF_CANONIFIER. (Daniel Thayer)
* Fixed some test canonifiers to read only from stdin
* Remove unused test canonifier scripts. (Daniel Thayer)
* A potpourri of updates and improvements across the documentation.
(Daniel Thayer)
* Add configure option to disable Broker Python bindings. Also
improve the configure summary output to more clearly show whether
or not Broker Python bindings will be built. (Daniel Thayer)
2.4-131 | 2015-09-11 12:16:39 -0700
* Add README.rst symlink. Addresses BIT-1413 (Vlad Grigorescu)
2.4-129 | 2015-09-11 11:56:04 -0700
* hash-all-files.bro depends on base/files/hash (Richard van den Berg)
* Make dns_max_queries redef-able, and bump default to 25. Addresses
BIT-1460 (Vlad Grigorescu)
2.4-125 | 2015-09-03 20:10:36 -0700
* Move SIP analyzer to flowunit instead of datagram. Addresses
BIT-1458 (Vlad Grigorescu)
2.4-122 | 2015-08-31 14:39:41 -0700
* Add a number of out-of-bound checks to layer 2 code. Addresses
BIT-1463 (Johanna Amann)
* Fix error in 2.4 release notes regarding SSH events. (Robin
Sommer)
2.4-118 | 2015-08-31 10:55:29 -0700
* Fix FreeBSD build errors (Johanna Amann)
2.4-117 | 2015-08-30 22:16:24 -0700
* Fix initialization of a pointer in RDP analyzer. (Daniel
Thayer/Robin Sommer)
2.4-115 | 2015-08-30 21:57:35 -0700
* Enable Bro to leverage packet fanout mode on Linux. (Kris
Nielander).
## Toggle whether to do packet fanout (Linux-only).
const Pcap::packet_fanout_enable = F &redef;
## If packet fanout is enabled, the id to use for it. This should be shared amongst
## worker processes processing the same socket.
const Pcap::packet_fanout_id = 0 &redef;
## If packet fanout is enabled, whether packets are to be defragmented before
## fanout is applied.
const Pcap::packet_fanout_defrag = T &redef;
* Allow libpcap buffer size to be set via configuration. (Kris Nielander)
## Number of Mbytes to provide as buffer space when capturing from live
## interfaces.
const Pcap::bufsize = 128 &redef;
* Move the pcap-related script-level identifiers into the new Pcap
namespace. (Robin Sommer)
snaplen -> Pcap::snaplen
precompile_pcap_filter() -> Pcap::precompile_pcap_filter()
install_pcap_filter() -> Pcap::install_pcap_filter()
pcap_error() -> Pcap::pcap_error()
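Taken together, the fanout and buffer options and the renamed identifiers above might be used from a site script roughly like this. This is a sketch only: the id and snaplen values are arbitrary examples, and packet fanout is Linux-only.

```bro
# Enable Linux packet fanout; workers reading the same socket share the id.
redef Pcap::packet_fanout_enable = T;
redef Pcap::packet_fanout_id = 42;        # arbitrary example id
redef Pcap::packet_fanout_defrag = T;

# Larger libpcap buffer (in Mbytes) for busy interfaces.
redef Pcap::bufsize = 256;

# Formerly the global `snaplen`; now lives in the Pcap namespace.
redef Pcap::snaplen = 65535;
```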
2.4-108 | 2015-08-30 20:14:31 -0700
* Update Base64 decoding. (Jan Grashoefer)
- A new built-in function, decode_base64_conn() for Base64
decoding. It works like decode_base64() but receives an
additional connection argument that will be used for
reporting decoding errors into weird.log (instead of
reporter.log).
- FTP, POP3, and HTTP analyzers now likewise log Base64
decoding errors to weird.log.
- The built-in functions decode_base64_custom() and
encode_base64_custom() are now deprecated. Their
functionality is provided directly by decode_base64() and
encode_base64(), which take an optional parameter to change
the Base64 alphabet.
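A short sketch of the updated interface. Only the function names, the extra connection argument, and the optional alphabet parameter come from the entry above; the HTTP-header context, the use of `c$id` as the connection argument, and the alphabet string are illustrative assumptions.

```bro
event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    # Decoding errors are reported to weird.log for this connection,
    # instead of reporter.log as with plain decode_base64().
    local decoded = decode_base64_conn(c$id, value);

    # decode_base64() now optionally takes a custom Base64 alphabet,
    # replacing decode_base64_custom(). (Alphabet shown is the URL-safe
    # variant, as an example.)
    local custom = decode_base64(value,
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_");
    }
```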
* Fix potential crash if TCP header was captured incompletely.
(Robin Sommer)
2.4-103 | 2015-08-29 10:51:55 -0700
* Make ASN.1 date/time parsing more robust. (Johanna Amann)
* Be more permissive on what characters we accept as an unquoted
multipart boundary. Addresses BIT-1459. (Johanna Amann)
2.4-99 | 2015-08-25 07:56:57 -0700
* Add ``Q`` and update ``I`` documentation for connection history
field. Addresses BIT-1466. (Vlad Grigorescu)
2.4-96 | 2015-08-21 17:37:56 -0700
* Update SIP analyzer. (balintm)
- Allows space on both sides of ':'.
- Require CR/LF after request/reply line.
2.4-94 | 2015-08-21 17:31:32 -0700
* Add file type detection support for video/MP2T. (Mike Freemon)
2.4-93 | 2015-08-21 17:23:39 -0700
* Make plugin install honor DESTDIR= convention. (Jeff Barber)
2.4-89 | 2015-08-18 07:53:36 -0700
* Fix diff-canonifier-external to use basename of input file.
(Daniel Thayer)
2.4-87 | 2015-08-14 08:34:41 -0700
* Removing the yielding_teredo_decapsulation option. (Robin Sommer)


@ -233,6 +233,7 @@ message(
"\nCPP: ${CMAKE_CXX_COMPILER}"
"\n"
"\nBroker: ${ENABLE_BROKER}"
"\nBroker Python: ${BROKER_PYTHON_BINDINGS}"
"\nBroccoli: ${INSTALL_BROCCOLI}"
"\nBroctl: ${INSTALL_BROCTL}"
"\nAux. Tools: ${INSTALL_AUX_TOOLS}"

34
NEWS

@ -18,6 +18,8 @@ New Dependencies
- Bro now requires Python instead of Perl to compile the source code.
- The pcap buffer size can be set through the new option Pcap::bufsize.
New Functionality
-----------------
@ -28,10 +30,38 @@ New Functionality
information. Use with care, generating events per packet is
expensive.
- A new built-in function, decode_base64_conn() for Base64 decoding.
It works like decode_base64() but receives an additional connection
argument that will be used for reporting decoding errors into weird.log
(instead of reporter.log).
- New Bro plugins in aux/plugins:
- af_packet: Native AF_PACKET support.
- myricom: Native Myricom SNF v3 support.
- pf_ring: Native PF_RING support.
- redis: An experimental log writer for Redis.
- tcprs: A TCP-level analyzer detecting retransmissions, reordering, and more.
Changed Functionality
---------------------
- Some script-level identifiers have changed their names:
snaplen -> Pcap::snaplen
precompile_pcap_filter() -> Pcap::precompile_pcap_filter()
install_pcap_filter() -> Pcap::install_pcap_filter()
pcap_error() -> Pcap::pcap_error()
Deprecated Functionality
------------------------
- The built-in functions decode_base64_custom() and
encode_base64_custom() are no longer needed and will be removed
in the future. Their functionality is now provided directly by
decode_base64() and encode_base64(), which take an optional
parameter to change the Base64 alphabet.
Bro 2.4
=======
@ -193,8 +223,8 @@ Changed Functionality
- The SSH changes come with a few incompatibilities. The following
events have been renamed:
* ``SSH::heuristic_failed_login`` to ``ssh_auth_failed``
* ``SSH::heuristic_successful_login`` to ``ssh_auth_successful``
The ``SSH::Info`` status field has been removed and replaced with
the ``auth_success`` field. This field has been changed from a

1
README.rst Symbolic link

@ -0,0 +1 @@
README


@ -1 +1 @@
2.4-87 2.4-217

@ -1 +1 @@
Subproject commit 4f33233aef5539ae4f12c6d0e4338247833c3900 Subproject commit 214294c502d377bb7bf511eac8c43608e54c875a

@ -1 +1 @@
Subproject commit 2470f64b58d875f9491e251b866a15a2ec4c05da Subproject commit 4e0d2bff4b2c287f66186c3654ef784bb0748d11

@ -1 +1 @@
Subproject commit 74bb4bbd949e61e099178f8a97499d3f1355de8b Subproject commit 959cc0a8181e7f4b07559a6aecca2a0d7d3d445c

@ -1 +1 @@
Subproject commit d37009f1e81b5fac8e34f6707690841e6d4d739a Subproject commit 1299fab8f6e98c8b0b88d01c60bb6b21329e19e5

@ -1 +1 @@
Subproject commit d25efc7d5f495c30294b11180c1857477078f2d6 Subproject commit 9a2e8ec7b365bde282edc7301c7936eed6b4fbbb

@ -1 +1 @@
Subproject commit a89cd0fda0f17f69b96c935959cae89145b92927 Subproject commit 71a1e3efc437aa9f981be71affa1c4615e8d98a5

@ -1 +1 @@
Subproject commit bb86ad945c823c94ea8385ec4ebb9546ba5198af Subproject commit 35007df0974b566f75d7c82af5b4d5a022333d87

2
cmake

@ -1 +1 @@
Subproject commit 6406fb79d30df8d7956110ce65a97d18e4bc8c3b Subproject commit 843cdf6a91f06e5407bffbc79a343bff3cf4c81f

5
configure vendored

@ -47,6 +47,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
--disable-pybroker don't try to build python bindings for broker
Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root
@ -121,6 +122,7 @@ append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl
append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro
append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
append_cache_entry BROKER_PYTHON_HOME PATH $prefix
append_cache_entry BROKER_PYTHON_BINDINGS BOOL false
append_cache_entry ENABLE_DEBUG BOOL false
append_cache_entry ENABLE_PERFTOOLS BOOL false
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
@ -217,6 +219,9 @@ while [ $# -ne 0 ]; do
--disable-python)
append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
;;
--disable-pybroker)
append_cache_entry DISABLE_PYBROKER BOOL true
;;
--enable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
;;


@ -0,0 +1 @@
../../../../aux/plugins/pf_ring/README


@ -0,0 +1 @@
../../../../aux/plugins/redis/README


@ -286,9 +286,9 @@ Activating a plugin will:
1. Load the dynamic module
2. Make any bif items available
3. Add the ``scripts/`` directory to ``BROPATH``
4. Load ``scripts/__preload__.bro``
5. Make BiF elements available to scripts.
6. Load ``scripts/__load__.bro``
By default, Bro will automatically activate all dynamic plugins found
in its search path ``BRO_PLUGIN_PATH``. However, in bare mode (``bro


@ -9,10 +9,7 @@ Broker-Enabled Communication Framework
Bro can now use the `Broker Library
<../components/broker/README.html>`_ to exchange information with
other Bro processes.
.. contents::
@ -23,26 +20,26 @@ Communication via Broker must first be turned on via
:bro:see:`BrokerComm::enable`.
Bro can accept incoming connections by calling :bro:see:`BrokerComm::listen`
and then monitor connection status updates via the
:bro:see:`BrokerComm::incoming_connection_established` and
:bro:see:`BrokerComm::incoming_connection_broken` events.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-listener.bro
Bro can initiate outgoing connections by calling :bro:see:`BrokerComm::connect`
and then monitor connection status updates via the
:bro:see:`BrokerComm::outgoing_connection_established`,
:bro:see:`BrokerComm::outgoing_connection_broken`, and
:bro:see:`BrokerComm::outgoing_connection_incompatible` events.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-connector.bro
Remote Printing
===============
To receive remote print messages, first use the
:bro:see:`BrokerComm::subscribe_to_prints` function to advertise to peers a
topic prefix of interest and then create an event handler for
:bro:see:`BrokerComm::print_handler` to handle any print messages that are
received.
@ -71,17 +68,17 @@ the Broker message format is simply:
Remote Events
=============
Receiving remote events is similar to remote prints. Just use the
:bro:see:`BrokerComm::subscribe_to_events` function and possibly define any
new events along with handlers that peers may want to send.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-listener.bro
There are two different ways to send events. The first is to call the
:bro:see:`BrokerComm::event` function directly. The second option is to call
the :bro:see:`BrokerComm::auto_event` function where you specify a
particular event that will be automatically sent to peers whenever the
event is called locally via the normal event invocation syntax.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-connector.bro
@ -98,7 +95,7 @@ the Broker message format is:
broker::message{std::string{}, ...};
The first parameter is the name of the event and the remaining ``...``
are its arguments, which are any of the supported Broker data types as
they correspond to the Bro types for the event named in the first
parameter of the message.
@ -107,23 +104,23 @@ Remote Logging
.. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro
Use the :bro:see:`BrokerComm::subscribe_to_logs` function to advertise interest
in logs written by peers. The topic names that Bro uses are implicitly of the
form "bro/log/<stream-name>".
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-listener.bro
To send remote logs either redef :bro:see:`Log::enable_remote_logging` or
use the :bro:see:`BrokerComm::enable_remote_logs` function. The former
allows any log stream to be sent to peers while the latter enables remote
logging for particular streams.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-connector.bro
Message Format
--------------
For other applications that want to exchange log messages with Bro,
the Broker message format is:
.. code:: c++
@ -132,7 +129,7 @@ the Broker message format is:
The enum value corresponds to the stream's :bro:see:`Log::ID` value, and
the record corresponds to a single entry of that log's columns record,
in this case a ``Test::Info`` value.
Tuning Access Control
=====================
@ -152,10 +149,11 @@ that take a :bro:see:`BrokerComm::SendFlags` such as :bro:see:`BrokerComm::print
:bro:see:`BrokerComm::enable_remote_logs`.
If not using the ``auto_advertise`` flag, one can use the
:bro:see:`BrokerComm::advertise_topic` and
:bro:see:`BrokerComm::unadvertise_topic` functions
to manipulate the set of topic prefixes that are allowed to be
advertised to peers. If an endpoint does not advertise a topic prefix, then
the only way peers can send messages to it is via the ``unsolicited``
flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching
prefix (i.e. the full topic may be longer than the receiver's prefix, just the
prefix needs to match).
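As a sketch of the advertise calls just described (the topic string is an arbitrary example, not one used elsewhere in this document):

```bro
# Offer only this prefix to peers...
BrokerComm::advertise_topic("bro/events/my_app/");

# ...and withdraw it again later.
BrokerComm::unadvertise_topic("bro/events/my_app/");
```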
@ -172,7 +170,7 @@ specific type of frontend, but a standalone frontend can also exist to
e.g. query and modify the contents of a remote master store without
actually "owning" any of the contents itself.
A master data store can be cloned from remote peers which may then
perform lightweight, local queries against the clone, which
automatically stays synchronized with the master store. Clones cannot
modify their content directly, instead they send modifications to the
@ -181,7 +179,7 @@ all clones.
Master and clone stores get to choose what type of storage backend to
use. E.g. In-memory versus SQLite for persistence. Note that if clones
are used, then data store sizes must be able to fit within memory
regardless of the storage backend as a single snapshot of the master
store is sent in a single chunk to initialize the clone.
@ -198,5 +196,5 @@ needed, just replace the :bro:see:`BrokerStore::create_clone` call with
:bro:see:`BrokerStore::create_frontend`. Queries will then be made against
the remote master store instead of the local clone.
Note that all data store queries must be made within Bro's asynchronous
``when`` statements and must specify a timeout block.
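For illustration, a clone-side query under that constraint might look roughly like this; ``BrokerStore::lookup`` and the surrounding details are assumptions based on the store API named above, not verbatim from this document:

```bro
# h is a store handle obtained earlier, e.g. from BrokerStore::create_clone().
when ( local res = BrokerStore::lookup(h, BrokerComm::data("key")) )
    {
    print res;
    }
timeout 10sec
    {
    print "store lookup timed out";
    }
```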


@ -1,4 +1,3 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";


@ -1,4 +1,3 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";


@ -1,4 +1,3 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";


@ -1,4 +1,3 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";


@ -1,4 +1,3 @@
module Test;
export {


@ -20,11 +20,13 @@ GeoLocation
Install libGeoIP
----------------
Before building Bro, you need to install libGeoIP.
* FreeBSD:
.. console::
sudo pkg install GeoIP
* RPM/RedHat-based Linux:
@ -40,80 +42,99 @@ Install libGeoIP
* Mac OS X:
You need to install from your preferred package management system
(e.g. MacPorts, Fink, or Homebrew). The name of the package that you need
may be libgeoip, geoip, or geoip-dev, depending on which package management
system you are using.
GeoIPLite Database Installation
-------------------------------
A country database for GeoIPLite is included when you do the C API
install, but for Bro, we are using the city database which includes
cities and regions in addition to countries.
`Download <http://www.maxmind.com/app/geolitecity>`__ the GeoLite city
binary database:
.. console::
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz
Next, the file needs to be renamed and put in the GeoIP database directory.
This directory should already exist and will vary depending on which platform
and package you are using. For FreeBSD, use ``/usr/local/share/GeoIP``. For
Linux, use ``/usr/share/GeoIP`` or ``/var/lib/GeoIP`` (choose whichever one
already exists).
.. console::
mv GeoLiteCity.dat <path_to_database_dir>/GeoIPCity.dat
Note that there is a separate database for IPv6 addresses, which can also
be installed if you want GeoIP functionality for IPv6.
Testing
-------
Before using the GeoIP functionality, it is a good idea to verify that
everything is set up correctly. After installing libGeoIP and the GeoIP city
database, and building Bro, you can quickly check if the GeoIP functionality
works by running a command like this:
.. console::
bro -e "print lookup_location(8.8.8.8);"
If you see an error message similar to "Failed to open GeoIP City database",
then you may need to either rename or move your GeoIP city database file (the
error message should give you the full pathname of the database file that
Bro is looking for).
If you see an error message similar to "Bro was not configured for GeoIP
support", then you need to rebuild Bro and make sure it is linked against
libGeoIP. Normally, if libGeoIP is installed correctly then it should
automatically be found when building Bro. If this doesn't happen, then
you may need to specify the path to the libGeoIP installation
(e.g. ``./configure --with-geoip=<path>``).
Usage
-----
There is a built-in function that provides the GeoIP functionality:
.. code:: bro
function lookup_location(a:addr): geo_location
The return value of the :bro:see:`lookup_location` function is a record
type called :bro:see:`geo_location`, and it consists of several fields
containing the country, region, city, latitude, and longitude of the specified
IP address. Since one or more fields in this record will be uninitialized
for some IP addresses (for example, the country and region of an IP address
might be known, but the city could be unknown), a field should be checked
if it has a value before trying to access the value.
Example
-------
To show every ftp connection from hosts in Ohio, this is now very easy:
.. code:: bro
event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool)
    {
    local client = c$id$orig_h;
    local loc = lookup_location(client);

    if ( loc?$region && loc$region == "OH" && loc$country_code == "US" )
        {
        local city = loc?$city ? loc$city : "<unknown>";
        print fmt("FTP Connection from:%s (%s,%s,%s)", client, city,
                  loc$region, loc$country_code);
        }
    }


@ -32,7 +32,8 @@ For this example we assume that we want to import data from a blacklist
that contains server IP addresses as well as the timestamp and the reason
for the block.
An example input file could look like this (note that all fields must be
tab-separated):
::
@ -63,19 +64,23 @@ The two records are defined as:
reason: string;
};
Note that the names of the fields in the record definitions must correspond
to the column names listed in the '#fields' line of the log file, in this
case 'ip', 'timestamp', and 'reason'. Also note that the ordering of the
columns does not matter, because each column is identified by name.
The log file is read into the table with a simple call of the
:bro:id:`Input::add_table` function:
.. code:: bro
global blacklist: table[addr] of Val = table();

event bro_init() {
    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist]);
    Input::remove("blacklist");
}
With these three lines we first create an empty table that should contain the With these three lines we first create an empty table that should contain the
blacklist data and then instruct the input framework to open an input stream blacklist data and then instruct the input framework to open an input stream
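Conceptually, the ASCII reader parses the '#fields' header line to map the
tab-separated columns onto the index and value records. The following Python
sketch is purely illustrative (it is not part of Bro, and `read_blacklist` is
a made-up name); it mimics that mapping for the blacklist example:

```python
# Illustrative sketch of what the ASCII reader does conceptually:
# tab-separated columns named by a "#fields" line become table entries.
def read_blacklist(lines):
    fields = None
    table = {}
    for line in lines:
        if line.startswith("#fields"):
            # The column names follow the "#fields" token, tab-separated.
            fields = line.rstrip("\n").split("\t")[1:]
            continue
        if line.startswith("#") or not line.strip():
            continue  # skip other header lines and blanks
        row = dict(zip(fields, line.rstrip("\n").split("\t")))
        # "ip" is the index (Idx); the remaining columns form the value (Val).
        table[row["ip"]] = (row["timestamp"], row["reason"])
    return table

lines = [
    "#fields\tip\ttimestamp\treason\n",
    "192.168.17.1\t1333252748\tMalware host\n",
    "192.168.17.2\t1330235733\tRequested by HR\n",
]
table = read_blacklist(lines)
assert table["192.168.17.1"] == ("1333252748", "Malware host")
```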
@ -92,7 +97,7 @@ Because of this, the data is not immediately accessible. Depending on the
size of the data source it might take from a few milliseconds up to a few
seconds until all data is present in the table. Please note that this means
that when Bro is running without an input source or on very short captured
files, it might terminate before the data is present in the table (because
Bro already handled all packets before the import thread finished).

Subsequent calls to an input source are queued until the previous action has
@ -101,8 +106,8 @@ been completed. Because of this, it is, for example, possible to call
will remain queued until the first read has been completed.

Once the input framework finishes reading from a data source, it fires
the :bro:id:`Input::end_of_data` event. Once this event has been received all
data from the input file is available in the table.

.. code:: bro
@ -111,9 +116,9 @@ from the input file is available in the table.
        print blacklist;
    }

The table can be used while the data is still being read - it
just might not contain all lines from the input file before the event has
fired. After the table has been populated it can be used like any other Bro
table and blacklist entries can easily be tested:

.. code:: bro
@ -130,10 +135,11 @@ changing. For these cases, the Bro input framework supports several ways to
deal with changing data files.

The first, very basic method is an explicit refresh of an input stream. When
an input stream is open (this means it has not yet been removed by a call to
:bro:id:`Input::remove`), the function :bro:id:`Input::force_update` can be
called. This will trigger a complete refresh of the table; any changed
elements from the file will be updated. After the update is finished the
:bro:id:`Input::end_of_data` event will be raised.

In our example the call would look like:
@ -141,30 +147,35 @@ In our example the call would look like:
    Input::force_update("blacklist");

Alternatively, the input framework can automatically refresh the table
contents when it detects a change to the input file. To use this feature,
you need to specify a non-default read mode by setting the ``mode`` option
of the :bro:id:`Input::add_table` call. Valid values are ``Input::MANUAL``
(the default), ``Input::REREAD`` and ``Input::STREAM``. For example,
setting the value of the ``mode`` option in the previous example
would look like this:

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::REREAD]);

When using the reread mode (i.e., ``$mode=Input::REREAD``), Bro continually
checks if the input file has been changed. If the file has been changed, it
is re-read and the data in the Bro table is updated to reflect the current
state. Each time a change has been detected and all the new data has been
read into the table, the ``end_of_data`` event is raised.

When using the streaming mode (i.e., ``$mode=Input::STREAM``), Bro assumes
that the source data file is an append-only file to which new data is
continually appended. Bro continually checks for new data at the end of
the file and will add the new data to the table. If newer lines in the
file have the same index as previous lines, they will overwrite the
values in the output table. Because of the nature of streaming reads
(data is continually added to the table), the ``end_of_data`` event
is never raised when using streaming reads.
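The practical difference between the two modes can be illustrated outside of
Bro. In the following Python sketch (an analogy only, not Bro code; the
function names are made up), a re-read mirrors the file exactly, so vanished
lines are dropped, while a streaming read only ever adds or overwrites entries:

```python
# Analogy for REREAD vs. STREAM semantics of the input framework.
def reread(table, file_lines):
    """REREAD: the table afterwards mirrors the file exactly."""
    table.clear()
    table.update(file_lines)

def stream(table, appended_lines):
    """STREAM: newly appended lines are merged; same index overwrites."""
    table.update(appended_lines)

table = {}
reread(table, {"192.168.17.1": "spam", "192.168.17.2": "malware"})
# The file changes: the first entry is removed, a third is added.
reread(table, {"192.168.17.2": "malware", "192.168.17.3": "ssh"})
assert "192.168.17.1" not in table   # REREAD drops lines no longer present

stream(table, {"192.168.17.4": "ftp"})  # STREAM never deletes anything
assert len(table) == 3
```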
Receiving change events
-----------------------
@ -173,34 +184,40 @@ When re-reading files, it might be interesting to know exactly which lines in
the source files have changed.

For this reason, the input framework can raise an event each time when a data
item is added to, removed from, or changed in a table.

The event definition looks like this (note that you can change the name of
this event in your own Bro script):

.. code:: bro

    event entry(description: Input::TableDescription, tpe: Input::Event,
                left: Idx, right: Val) {
        # do something here...
        print fmt("%s = %s", left, right);
    }

The event must be specified in ``$ev`` in the ``add_table`` call:

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::REREAD, $ev=entry]);
The ``description`` argument of the event contains the arguments that were
originally supplied to the add_table call. Hence, the name of the stream can,
for example, be accessed with ``description$name``. The ``tpe`` argument of
the event is an enum containing the type of the change that occurred.

If a line that was not previously present in the table has been added,
then the value of ``tpe`` will be ``Input::EVENT_NEW``. In this case ``left``
contains the index of the added table entry and ``right`` contains the
values of the added entry.

If a table entry that already was present is altered during the re-reading or
streaming read of a file, then the value of ``tpe`` will be
``Input::EVENT_CHANGED``. In this case ``left`` contains the index of the
changed table entry and ``right`` contains the values of the entry before
the change. The reason for this is that the table already has been updated
when the event is raised. The current
@ -208,8 +225,9 @@ value in the table can be ascertained by looking up the current table value.
Hence it is possible to compare the new and the old values of the table.

If a table element is removed because it was no longer present during a
re-read, then the value of ``tpe`` will be ``Input::EVENT_REMOVED``. In this
case ``left`` contains the index and ``right`` the values of the removed
element.
Filtering data during import
@ -222,24 +240,26 @@ can either accept or veto the change by returning true for an accepted
change and false for a rejected change. Furthermore, it can alter the data
before it is written to the table.

The following example filter will reject adding entries to the table when
they were generated over a month ago. It will accept all changes and all
removals of values that are already present in the table.

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::REREAD,
                      $pred(typ: Input::Event, left: Idx, right: Val) = {
                          if ( typ != Input::EVENT_NEW ) {
                              return T;
                          }
                          return (current_time() - right$timestamp) < 30day;
                      }]);
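The predicate's age check can be mimicked in plain Python. This is an
illustration of the logic only; `accept` and `THIRTY_DAYS` are made-up names
and not part of Bro:

```python
import time

THIRTY_DAYS = 30 * 24 * 60 * 60  # seconds; stands in for Bro's "30day"

def accept(event_is_new, entry_timestamp, now=None):
    """Veto only NEW entries older than 30 days; accept everything else."""
    if not event_is_new:
        return True  # changes and removals are always accepted
    if now is None:
        now = time.time()
    return (now - entry_timestamp) < THIRTY_DAYS

now = 1_000_000_000.0
assert accept(True, now - 5 * 24 * 3600, now)        # 5 days old: accepted
assert not accept(True, now - 40 * 24 * 3600, now)   # 40 days old: vetoed
assert accept(False, now - 40 * 24 * 3600, now)      # non-NEW: accepted
```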
To change elements while they are being imported, the predicate function can
manipulate ``left`` and ``right``. Note that predicate functions are called
before the change is committed to the table. Hence, when a table element is
changed (``typ`` is ``Input::EVENT_CHANGED``), ``left`` and ``right``
contain the new values, but the destination (``blacklist`` in our example)
still contains the old values. This allows predicate functions to examine
the changes between the old and the new version before deciding if they
@ -250,14 +270,19 @@ Different readers
The input framework supports different kinds of readers for different kinds
of source data files. At the moment, the default reader reads ASCII files
formatted in the Bro log file format (tab-separated values with a "#fields"
header line). Several other readers are included in Bro.

The raw reader reads a file that is split by a specified record separator
(newline by default). The contents are returned line-by-line as strings;
it can, for example, be used to read configuration files and the like and
is probably only useful in the event mode and not for reading data to tables.

The binary reader is intended to be used with file analysis input streams
(and is the default type of reader for those streams).

The benchmark reader is used to optimize the speed of the input framework.
It can generate arbitrary amounts of semi-random data in all Bro data types
supported by the input framework.
@ -270,75 +295,17 @@ aforementioned ones:
   logging-input-sqlite
Add_table options
-----------------
This section lists all possible options that can be used for the add_table
function and gives a short explanation of their use. Most of the options
already have been discussed in the previous sections.
The possible fields that can be set for a table stream are:
``source``
A mandatory string identifying the source of the data.
For the ASCII reader this is the filename.
``name``
A mandatory name for the filter that can later be used
to manipulate it further.
``idx``
Record type that defines the index of the table.
``val``
Record type that defines the values of the table.
``reader``
The reader used for this stream. Default is ``READER_ASCII``.
``mode``
The mode in which the stream is opened. Possible values are
``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
``MANUAL`` means that the file is not updated after it has
been read. Changes to the file will not be reflected in the
data Bro knows. ``REREAD`` means that the whole file is read
again each time a change is found. This should be used for
files that are mapped to a table where individual lines can
change. ``STREAM`` means that the data from the file is
streamed. Events / table entries will be generated as new
data is appended to the file.
``destination``
The destination table.
``ev``
Optional event that is raised, when values are added to,
changed in, or deleted from the table. Events are passed an
Input::Event description as the first argument, the index
record as the second argument and the values as the third
argument.
``pred``
Optional predicate, that can prevent entries from being added
to the table and events from being sent.
``want_record``
Boolean value, that defines if the event wants to receive the
fields inside of a single record value, or individually
(default). This can be used if ``val`` is a record
containing only one type. In this case, if ``want_record`` is
set to false, the table will contain elements of the type
contained in ``val``.
Reading Data to Events
======================

The second supported mode of the input framework is reading data to Bro
events instead of reading them to a table.

Event streams work very similarly to table streams that were already
discussed in much detail. To read the blacklist of the previous example
into an event stream, the :bro:id:`Input::add_event` function is used.
For example:

.. code:: bro
@ -348,12 +315,15 @@ into an event stream, the following Bro code could be used:
        reason: string;
    };

    event blacklistentry(description: Input::EventDescription,
                         t: Input::Event, data: Val) {
        # do something here...
        print "data:", data;
    }

    event bro_init() {
        Input::add_event([$source="blacklist.file", $name="blacklist",
                          $fields=Val, $ev=blacklistentry]);
    }
@ -364,52 +334,3 @@ data types are provided in a single record definition.
Apart from this, event streams work exactly the same as table streams and
support most of the options that are also supported for table streams.
The options that can be set when creating an event stream with
``add_event`` are:
``source``
A mandatory string identifying the source of the data.
For the ASCII reader this is the filename.
``name``
A mandatory name for the stream that can later be used
to remove it.
``fields``
Name of a record type containing the fields, which should be
retrieved from the input stream.
``ev``
The event which is fired, after a line has been read from the
input source. The first argument that is passed to the event
is an Input::Event structure, followed by the data, either
inside of a record (if ``want_record is set``) or as
individual fields. The Input::Event structure can contain
information, if the received line is ``NEW``, has been
``CHANGED`` or ``DELETED``. Since the ASCII reader cannot
track this information for event filters, the value is
always ``NEW`` at the moment.
``mode``
The mode in which the stream is opened. Possible values are
``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
``MANUAL`` means that the file is not updated after it has
been read. Changes to the file will not be reflected in the
data Bro knows. ``REREAD`` means that the whole file is read
again each time a change is found. This should be used for
files that are mapped to a table where individual lines can
change. ``STREAM`` means that the data from the file is
streamed. Events / table entries will be generated as new
data is appended to the file.
``reader``
The reader used for this stream. Default is ``READER_ASCII``.
``want_record``
Boolean value, that defines if the event wants to receive the
fields inside of a single record value, or individually
(default). If this is set to true, the event will receive a
single record of the type provided in ``fields``.

View file

@ -23,17 +23,18 @@ In contrast to the ASCII reader and writer, the SQLite plugins have not yet
seen extensive use in production environments. While we are not aware
of any issues with them, we urge caution when using them
in production environments. There could be lingering issues which only occur
when the plugins are used with high amounts of data or in high-load
environments.

Logging Data into SQLite Databases
==================================

Logging support for SQLite is available in all Bro installations starting with
version 2.2. There is no need to load any additional scripts or for any
compile-time configurations.

Sending data from existing logging streams to SQLite is rather straightforward.
You have to define a filter which specifies SQLite as the writer.

The following example code adds SQLite as a filter for the connection log:
@ -44,15 +45,15 @@ The following example code adds SQLite as a filter for the connection log:
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-conn-filter.bro

Bro will create the database file ``/var/db/conn.sqlite``, if it does not
already exist. It will also create a table with the name ``conn`` (if it
does not exist) and start appending connection information to the table.

At the moment, SQLite databases are not rotated the same way ASCII log-files
are. You have to take care to create them in an adequate location.

If you examine the resulting SQLite database, the schema will contain the
same fields that are present in the ASCII log files::
# sqlite3 /var/db/conn.sqlite
@ -75,27 +76,31 @@ from being created, you can remove the default filter:
Log::remove_filter(Conn::LOG, "default");
To create a custom SQLite log file, you have to create a new log stream
that contains just the information you want to commit to the database.
Please refer to the :ref:`framework-logging` documentation on how to
create custom log streams.

Reading Data from SQLite Databases
==================================

Like logging support, support for reading data from SQLite databases is
built into Bro starting with version 2.2.

Just as with the text-based input readers (please refer to the
:ref:`framework-input` documentation for them and for basic information
on how to use the input framework), the SQLite reader can be used to
read data - in this case the result of SQL queries - into tables or into
events.

Reading Data into Tables
------------------------

To read data from a SQLite database, we first have to provide Bro with
information about how the resulting data will be structured. For this
example, we expect that we have a SQLite database which contains
host IP addresses and the user accounts that are allowed to log into
a specific machine.

The SQLite commands to create the schema are as follows::
@ -107,8 +112,8 @@ The SQLite commands to create the schema are as follows::
    insert into machines_to_users values ('192.168.17.2', 'bernhard');
    insert into machines_to_users values ('192.168.17.3', 'seth,matthias');

After creating a file called ``hosts.sqlite`` with this content, we can
read the resulting table into Bro:

.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-table.bro
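For experimenting with the example data outside of Bro, the same table can be
created and queried with Python's sqlite3 module. This is an illustration
only; the column names ``host`` and ``users`` are assumptions, since the
CREATE statement is not shown here:

```python
import sqlite3

# Build the example database in memory; data mirrors the inserts above.
# Column names "host" and "users" are assumed for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE machines_to_users (host TEXT, users TEXT)")
conn.executemany("INSERT INTO machines_to_users VALUES (?, ?)",
                 [("192.168.17.1", "bernhard"),
                  ("192.168.17.2", "bernhard"),
                  ("192.168.17.3", "seth,matthias")])

# Look up the allowed users for one host, as the imported Bro table would.
row = conn.execute("SELECT users FROM machines_to_users WHERE host = ?",
                   ("192.168.17.3",)).fetchone()
assert row[0].split(",") == ["seth", "matthias"]
```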
@ -117,22 +122,25 @@ into Bro:
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-table.bro

Afterwards, that table can be used to check logins into hosts against
the available userlist.

Turning Data into Events
------------------------

The second mode is to use the SQLite reader to output the input data as events.
Typically there are two reasons to do this. First, when the structure of
the input data is too complicated for a direct table import. In this case,
the data can be read into an event which can then create the necessary
data structures in Bro in scriptland.

The second reason is that the dataset is too big to hold in memory. In
this case, the checks can be performed on-demand, when Bro encounters a
situation where it needs additional information.

An example for this would be a huge internal database with malware
hashes. Live database queries could be used to check sporadically
occurring downloads against the database.

The SQLite commands to create the schema are as follows::
@ -151,9 +159,10 @@ The SQLite commands to create the schema are as follows::
    insert into malware_hashes values ('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace');

The following code uses the file-analysis framework to get the sha1 hashes
of files that are transmitted over the network. For each hash, a SQL-query
is run against SQLite. If the query returns with a result, we had a hit
against our malware-database and output the matching hash.

.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-events.bro
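The on-demand lookup can be sketched with Python's sqlite3 module. This is
illustrative only; the column names ``hash`` and ``description`` and the
helper ``check_hash`` are assumptions, not part of Bro:

```python
import sqlite3

# Query the malware database on demand for each observed hash,
# instead of loading the whole dataset into memory.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE malware_hashes (hash TEXT PRIMARY KEY, description TEXT)")
db.execute("INSERT INTO malware_hashes VALUES (?, ?)",
           ("73f45106968ff8dc51fba105fa91306af1ff6666", "ftp-trace"))

def check_hash(sha1):
    """Return the matching description, or None if the hash is unknown."""
    row = db.execute("SELECT description FROM malware_hashes WHERE hash = ?",
                     (sha1,)).fetchone()
    return row[0] if row else None

assert check_hash("73f45106968ff8dc51fba105fa91306af1ff6666") == "ftp-trace"
assert check_hash("0000000000000000000000000000000000000000") is None
```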
@ -162,5 +171,5 @@ returns with a result, we had a hit against our malware-database and output the
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-events.bro

If you run this script against the trace in
``testing/btest/Traces/ftp/ipv4.trace``, you will get one hit.

View file

@ -46,4 +46,4 @@ where Bro was originally installed). Review the files for differences
before copying and make adjustments as necessary (use the new version for
differences that aren't a result of a local change). Of particular note,
the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes
to any settings that specify a pathname.

View file

@ -4,7 +4,7 @@
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://brew.sh
.. _bro downloads page: https://www.bro.org/download/index.html
.. _installing-bro:
@ -32,13 +32,13 @@ before you begin:
* Libz
* Bash (for BroControl)
* Python (for BroControl)
* C++ Actor Framework (CAF) version 0.14 (http://actor-framework.org)

To build Bro from source, the following additional dependencies are required:

* CMake 2.8 or greater (http://www.cmake.org)
* Make
* C/C++ compiler with C++11 support (GCC 4.8+ or Clang 3.3+)
* SWIG (http://www.swig.org)
* Bison (GNU Parser Generator)
* Flex (Fast Lexical Analyzer)
@ -47,9 +47,7 @@ To build Bro from source, the following additional dependencies are required:
* zlib headers
* Python

To install CAF, first download the source code of the required version from:
https://github.com/actor-framework/actor-framework/releases

To install the required dependencies, you can use:
@ -84,11 +82,11 @@ To install the required dependencies, you can use:
"Preferences..." -> "Downloads" menus to install the "Command Line Tools"
component).

- OS X comes with all required dependencies except for CMake_ and SWIG_.
+ OS X comes with all required dependencies except for CMake_, SWIG_, and CAF.
Distributions of these dependencies can likely be obtained from your
- preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
- or Homebrew_). Specifically for MacPorts, the ``cmake``, ``swig``,
- and ``swig-python`` packages provide the required dependencies.
+ preferred Mac OS X package management system (e.g. Homebrew_, MacPorts_,
+ or Fink_). Specifically for Homebrew, the ``cmake``, ``swig``,
+ and ``caf`` packages provide the required dependencies.

Optional Dependencies
@ -101,6 +99,8 @@ build time:
* sendmail (enables Bro and BroControl to send mail)
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
+ * jemalloc (http://www.canonware.com/jemalloc/)
+ * PF_RING (Linux only, see :doc:`Cluster Configuration <../configuration/index>`)
* ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)

LibGeoIP is probably the most interesting and can be installed
@ -117,7 +117,7 @@ code forms.
Using Pre-Built Binary Release Packages
- =======================================
+ ---------------------------------------

See the `bro downloads page`_ for currently supported/targeted
platforms for binary releases and for installation instructions.
@ -138,13 +138,15 @@ platforms for binary releases and for installation instructions.
The primary install prefix for binary packages is ``/opt/bro``.

Installing from Source
- ======================
+ ----------------------

Bro releases are bundled into source packages for convenience and are
- available on the `bro downloads page`_. Alternatively, the latest
- Bro development version can be obtained through git repositories
+ available on the `bro downloads page`_.
+
+ Alternatively, the latest Bro development version
+ can be obtained through git repositories
hosted at ``git.bro.org``. See our `git development documentation
- <http://bro.org/development/howtos/process.html>`_ for comprehensive
+ <https://www.bro.org/development/howtos/process.html>`_ for comprehensive
information on Bro's use of git revision control, but the short story
for downloading the full source code experience for Bro via git is:
@ -165,13 +167,23 @@ run ``./configure --help``):
make
make install

+ If the ``configure`` script fails, then it is most likely because it either
+ couldn't find a required dependency or it couldn't find a sufficiently new
+ version of a dependency. Assuming that you already installed all required
+ dependencies, then you may need to use one of the ``--with-*`` options
+ that can be given to the ``configure`` script to help it locate a dependency.

The default installation path is ``/usr/local/bro``, which would typically
require root privileges when doing the ``make install``. A different
- installation path can be chosen by specifying the ``--prefix`` option.
- Note that ``/usr`` and ``/opt/bro`` are the
+ installation path can be chosen by specifying the ``configure`` script
+ the ``--prefix`` option. Note that ``/usr`` and ``/opt/bro`` are the
standard prefixes for binary Bro packages to be installed, so those are
typically not good choices unless you are creating such a package.
+ OpenBSD users, please see our `FAQ
+ <https://www.bro.org/documentation/faq.html>`_ if you are having
+ problems installing Bro.

Depending on the Bro package you downloaded, there may be auxiliary
tools and libraries available in the ``aux/`` directory. Some of them
will be automatically built and installed along with Bro. There are
@ -180,10 +192,6 @@ turn off unwanted auxiliary projects that would otherwise be installed
automatically. Finally, use ``make install-aux`` to install some of
the other programs that are in the ``aux/bro-aux`` directory.

- OpenBSD users, please see our `FAQ
- <//www.bro.org/documentation/faq.html>`_ if you are having
- problems installing Bro.

Finally, if you want to build the Bro documentation (not required, because
all of the documentation for the latest Bro release is available on the
Bro web site), there are instructions in ``doc/README`` in the source
@ -192,7 +200,7 @@ distribution.
Configure the Run-Time Environment
==================================

- Just remember that you may need to adjust your ``PATH`` environment variable
+ You may want to adjust your ``PATH`` environment variable
according to the platform/shell/package you're using. For example:

Bourne-Shell Syntax:
View file
@ -54,13 +54,16 @@ Here is a more detailed explanation of each attribute:
.. bro:attr:: &redef

- Allows for redefinition of initial values of global objects declared as
- constant.
-
- In this example, the constant (assuming it is global) can be redefined
- with a :bro:keyword:`redef` at some later point::
+ Allows use of a :bro:keyword:`redef` to redefine initial values of
+ global variables (i.e., variables declared either :bro:keyword:`global`
+ or :bro:keyword:`const`). Example::

    const clever = T &redef;
+     global cache_size = 256 &redef;

+ Note that a variable declared "global" can also have its value changed
+ with assignment statements (doesn't matter if it has the "&redef"
+ attribute or not).
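As an illustrative sketch (using the variable names from the example above, which are not real Bro options), a later script could then adjust both initial values:

```bro
# Both globals were declared with &redef, so a later script may
# change their initial values before Bro starts processing.
redef clever = F;
redef cache_size = 512;
```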
.. bro:attr:: &priority
View file
@ -71,9 +71,11 @@ Statements
Declarations
------------

- The following global declarations cannot occur within a function, hook, or
- event handler. Also, these declarations cannot appear after any statements
- that are outside of a function, hook, or event handler.
+ Declarations cannot occur within a function, hook, or event handler.
+
+ Declarations must appear before any statements (except those statements
+ that are in a function, hook, or event handler) in the concatenation of
+ all loaded Bro scripts.
.. bro:keyword:: module
@ -126,9 +128,12 @@ that are outside of a function, hook, or event handler.
.. bro:keyword:: global

Variables declared with the "global" keyword will be global.
If a type is not specified, then an initializer is required so that
the type can be inferred. Likewise, if an initializer is not supplied,
- then the type must be specified. Example::
+ then the type must be specified. In some cases, when the type cannot
+ be correctly inferred, the type must be specified even when an
+ initializer is present. Example::

    global pi = 3.14;
    global hosts: set[addr];
@ -136,10 +141,11 @@ that are outside of a function, hook, or event handler.
Variable declarations outside of any function, hook, or event handler are
required to use this keyword (unless they are declared with the
- :bro:keyword:`const` keyword). Definitions of functions, hooks, and
- event handlers are not allowed to use the "global"
- keyword (they already have global scope), except function declarations
- where no function body is supplied use the "global" keyword.
+ :bro:keyword:`const` keyword instead).
+
+ Definitions of functions, hooks, and event handlers are not allowed
+ to use the "global" keyword. However, function declarations (i.e., no
+ function body is provided) can use the "global" keyword.
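A short hypothetical sketch of the forward-declaration case described above:

```bro
# Declaration only: no body is supplied, so "global" is used.
global format_host: function(a: addr): string;

# Definition later (possibly in another script): no "global" keyword.
function format_host(a: addr): string
	{
	return fmt("host=%s", a);
	}
```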
The scope of a global variable begins where the declaration is located,
and extends through all remaining Bro scripts that are loaded (however,
@ -150,18 +156,22 @@ that are outside of a function, hook, or event handler.
.. bro:keyword:: const

A variable declared with the "const" keyword will be constant.
Variables declared as constant are required to be initialized at the
- time of declaration. Example::
+ time of declaration. Normally, the type is inferred from the initializer,
+ but the type can be explicitly specified. Example::

    const pi = 3.14;
    const ssh_port: port = 22/tcp;

- The value of a constant cannot be changed later (the only
- exception is if the variable is global and has the :bro:attr:`&redef`
- attribute, then its value can be changed only with a :bro:keyword:`redef`).
+ The value of a constant cannot be changed. The only exception is if the
+ variable is a global constant and has the :bro:attr:`&redef`
+ attribute, but even then its value can be changed only with a
+ :bro:keyword:`redef`.

The scope of a constant is local if the declaration is in a
function, hook, or event handler, and global otherwise.
Note that the "const" keyword cannot be used with either the "local"
or "global" keywords (i.e., "const" replaces "local" and "global").
@ -184,7 +194,8 @@ that are outside of a function, hook, or event handler.
.. bro:keyword:: redef

There are three ways that "redef" can be used: to change the value of
- a global variable, to extend a record type or enum type, or to specify
+ a global variable (but only if it has the :bro:attr:`&redef` attribute),
+ to extend a record type or enum type, or to specify
a new event handler body that replaces all those that were previously
defined.
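The first two uses can be sketched as follows (the record field added here is hypothetical; the third use, replacing all previously defined bodies of an event handler, is omitted):

```bro
# 1. Change the value of a global carrying the &redef attribute:
redef dns_max_queries = 50;

# 2. Extend an existing record type with a new (hypothetical) field:
redef record connection += {
	analyst_note: string &optional;
};
```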
@ -237,13 +248,14 @@ that are outside of a function, hook, or event handler.
Statements
----------

+ Statements (except those contained within a function, hook, or event
+ handler) can appear only after all global declarations in the concatenation
+ of all loaded Bro scripts.

Each statement in a Bro script must be terminated with a semicolon (with a
few exceptions noted below). An individual statement can span multiple
lines.

- All statements (except those contained within a function, hook, or event
- handler) must appear after all global declarations.

Here are the statements that the Bro scripting language supports.

.. bro:keyword:: add
View file
@ -340,15 +340,18 @@ Here is a more detailed description of each type:
table [ type^+ ] of type

- where *type^+* is one or more types, separated by commas.
- For example:
+ where *type^+* is one or more types, separated by commas. The
+ index type cannot be any of the following types: pattern, table, set,
+ vector, file, opaque, any.
+
+ Here is an example of declaring a table indexed by "count" values
+ and yielding "string" values:

.. code:: bro

    global a: table[count] of string;

- declares a table indexed by "count" values and yielding
- "string" values. The yield type can also be more complex:
+ The yield type can also be more complex:
@ -441,7 +444,9 @@ Here is a more detailed description of each type:
set [ type^+ ]

- where *type^+* is one or more types separated by commas.
+ where *type^+* is one or more types separated by commas. The
+ index type cannot be any of the following types: pattern, table, set,
+ vector, file, opaque, any.

Sets can be initialized by listing elements enclosed by curly braces:
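For instance, a sketch of a valid set declaration with initialization, next to an index type that the rules above disallow:

```bro
# Valid: "port" is an allowed index type.
const ssl_ports: set[port] = { 443/tcp, 563/tcp };

# Not allowed: "pattern" cannot be used as an index type.
# global bad: set[pattern];
```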
View file
@ -4,7 +4,7 @@ type Service: record {
rfc: count;
};

- function print_service(serv: Service): string
+ function print_service(serv: Service)
{
print fmt("Service: %s(RFC%d)",serv$name, serv$rfc);
View file
@ -9,7 +9,7 @@ type System: record {
services: set[Service]; services: set[Service];
}; };
function print_service(serv: Service): string function print_service(serv: Service)
{ {
print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc); print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc);
@ -17,7 +17,7 @@ function print_service(serv: Service): string
print fmt(" port: %s", p); print fmt(" port: %s", p);
} }
function print_system(sys: System): string function print_system(sys: System)
{ {
print fmt("System: %s", sys$name); print fmt("System: %s", sys$name);
View file
@ -126,6 +126,9 @@ export {
## This is usually supplied on the command line for each instance ## This is usually supplied on the command line for each instance
## of the cluster that is started up. ## of the cluster that is started up.
const node = getenv("CLUSTER_NODE") &redef; const node = getenv("CLUSTER_NODE") &redef;
## Interval for retrying failed connections between cluster nodes.
const retry_interval = 1min &redef;
} }
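With this option in place, a site could tune the reconnect behavior from its local configuration, e.g.:

```bro
# Hypothetical site tuning: retry failed cluster connections every 30s.
redef Cluster::retry_interval = 30sec;
```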
function is_enabled(): bool
View file
@ -39,7 +39,7 @@ event bro_init() &priority=9
Communication::nodes["time-machine"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1min,
+ $connect=T, $retry=retry_interval,
$events=tm2manager_events];
}
@ -58,7 +58,7 @@ event bro_init() &priority=9
if ( n?$proxy )
Communication::nodes[i]
= [$host=n$ip, $zone_id=n$zone_id, $p=n$p,
- $connect=T, $auth=F, $sync=T, $retry=1mins];
+ $connect=T, $auth=F, $sync=T, $retry=retry_interval];
else if ( me?$proxy && me$proxy == i )
Communication::nodes[me$proxy]
= [$host=nodes[i]$ip, $zone_id=nodes[i]$zone_id,
@ -70,7 +70,7 @@ event bro_init() &priority=9
Communication::nodes["manager"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1mins,
+ $connect=T, $retry=retry_interval,
$class=node,
$events=manager2proxy_events];
}
@ -80,7 +80,7 @@ event bro_init() &priority=9
Communication::nodes["manager"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1mins,
+ $connect=T, $retry=retry_interval,
$class=node,
$events=manager2worker_events];
@ -88,7 +88,7 @@ event bro_init() &priority=9
Communication::nodes["proxy"] = [$host=nodes[i]$ip,
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
- $connect=T, $retry=1mins,
+ $connect=T, $retry=retry_interval,
$sync=T, $class=node,
$events=proxy2worker_events];
@ -98,7 +98,7 @@ event bro_init() &priority=9
$zone_id=nodes[i]$zone_id,
$p=nodes[i]$p,
$connect=T,
- $retry=1min,
+ $retry=retry_interval,
$events=tm2worker_events];
}
View file
@ -71,6 +71,14 @@ signature file-mp2p {
file-magic /\x00\x00\x01\xba([\x40-\x7f\xc0-\xff])/
}
+ # MPEG transport stream data. These files typically have the extension "ts".
+ # Note: The 0x47 repeats every 188 bytes. Using four as the number of
+ # occurrences for the test here is arbitrary.
+ signature file-mp2t {
+     file-mime "video/mp2t", 40
+     file-magic /^(\x47.{187}){4}/
+ }
# Silicon Graphics video
signature file-sgi-movie {
file-mime "video/x-sgi-movie", 70
@ -94,3 +102,4 @@ signature file-3gpp {
file-mime "video/3gpp", 60
file-magic /^....ftyp(3g[egps2]|avc1|mmp4)/
}
View file
@ -1,18 +1,25 @@
##! The input framework provides a way to read previously stored data either
- ##! as an event stream or into a bro table.
+ ##! as an event stream or into a Bro table.
module Input;

export {
type Event: enum {
+     ## New data has been imported.
    EVENT_NEW = 0,
+     ## Existing data has been changed.
    EVENT_CHANGED = 1,
+     ## Previously existing data has been removed.
    EVENT_REMOVED = 2,
};

+ ## Type that defines the input stream read mode.
type Mode: enum {
+     ## Do not automatically reread the file after it has been read.
    MANUAL = 0,
+     ## Reread the entire file each time a change is found.
    REREAD = 1,
+     ## Read data from end of file each time new data is appended.
    STREAM = 2
};
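A minimal sketch of how these modes are used in practice (the filename and record types here are hypothetical):

```bro
type Idx: record {
	ip: addr;
};

type Val: record {
	reason: string;
};

global blacklist: table[addr] of Val = table();

event bro_init()
	{
	# Reread blacklist.file in its entirety whenever it changes.
	Input::add_table([$source="blacklist.file", $name="blacklist",
	                  $idx=Idx, $val=Val, $destination=blacklist,
	                  $mode=Input::REREAD]);
	}
```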
@ -24,20 +31,20 @@ export {
## Separator between fields.
## Please note that the separator has to be exactly one character long.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const separator = "\t" &redef;

## Separator between set elements.
## Please note that the separator has to be exactly one character long.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const set_separator = "," &redef;

## String to use for empty fields.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const empty_field = "(empty)" &redef;

## String to use for an unset &optional field.
- ## Can be overwritten by individual writers.
+ ## Individual readers can use a different value.
const unset_field = "-" &redef;

## Flag that controls if the input framework accepts records
@ -47,11 +54,11 @@ export {
## abort. Defaults to false (abort).
const accept_unsupported_types = F &redef;

- ## TableFilter description type used for the `table` method.
+ ## A table input stream type used to send data to a Bro table.
type TableDescription: record {
# Common definitions for tables and events

- ## String that allows the reader to find the source.
+ ## String that allows the reader to find the source of the data.
## For `READER_ASCII`, this is the filename.
source: string;
@ -61,7 +68,8 @@ export {
## Read mode to use for this stream.
mode: Mode &default=default_mode;

- ## Descriptive name. Used to remove a stream at a later time.
+ ## Name of the input stream. This is used by some functions to
+ ## manipulate the stream.
name: string;

# Special definitions for tables
@ -73,31 +81,35 @@ export {
idx: any;

## Record that defines the values used as the elements of the table.
- ## If this is undefined, then *destination* has to be a set.
+ ## If this is undefined, then *destination* must be a set.
val: any &optional;

- ## Defines if the value of the table is a record (default), or a single value.
- ## When this is set to false, then *val* can only contain one element.
+ ## Defines if the value of the table is a record (default), or a single
+ ## value. When this is set to false, then *val* can only contain one
+ ## element.
want_record: bool &default=T;

- ## The event that is raised each time a value is added to, changed in or removed
- ## from the table. The event will receive an Input::Event enum as the first
- ## argument, the *idx* record as the second argument and the value (record) as the
- ## third argument.
- ev: any &optional; # event containing idx, val as values.
+ ## The event that is raised each time a value is added to, changed in,
+ ## or removed from the table. The event will receive an
+ ## Input::TableDescription as the first argument, an Input::Event
+ ## enum as the second argument, the *idx* record as the third argument
+ ## and the value (record) as the fourth argument.
+ ev: any &optional;

- ## Predicate function that can decide if an insertion, update or removal should
- ## really be executed. Parameters are the same as for the event. If true is
- ## returned, the update is performed. If false is returned, it is skipped.
+ ## Predicate function that can decide if an insertion, update or removal
+ ## should really be executed. Parameters have same meaning as for the
+ ## event.
+ ## If true is returned, the update is performed. If false is returned,
+ ## it is skipped.
pred: function(typ: Input::Event, left: any, right: any): bool &optional;

- ## A key/value table that will be passed on the reader.
- ## Interpretation of the values is left to the writer, but
+ ## A key/value table that will be passed to the reader.
+ ## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
- ## EventFilter description type used for the `event` method.
+ ## An event input stream type used to send input data to a Bro event.
type EventDescription: record {
# Common definitions for tables and events
@ -116,19 +128,26 @@ export {
# Special definitions for events

- ## Record describing the fields to be retrieved from the source input.
+ ## Record type describing the fields to be retrieved from the input
+ ## source.
fields: any;

- ## If this is false, the event receives each value in fields as a separate argument.
- ## If this is set to true (default), the event receives all fields in a single record value.
+ ## If this is false, the event receives each value in *fields* as a
+ ## separate argument.
+ ## If this is set to true (default), the event receives all fields in
+ ## a single record value.
want_record: bool &default=T;

- ## The event that is raised each time a new line is received from the reader.
- ## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
+ ## The event that is raised each time a new line is received from the
+ ## reader. The event will receive an Input::EventDescription record
+ ## as the first argument, an Input::Event enum as the second
+ ## argument, and the fields (as specified in *fields*) as the following
+ ## arguments (this will either be a single record value containing
+ ## all fields, or each field value as a separate argument).
ev: any;

- ## A key/value table that will be passed on the reader.
- ## Interpretation of the values is left to the writer, but
+ ## A key/value table that will be passed to the reader.
+ ## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
@ -155,28 +174,29 @@ export {
## field will be the same value as the *source* field.
name: string;

- ## A key/value table that will be passed on the reader.
- ## Interpretation of the values is left to the writer, but
+ ## A key/value table that will be passed to the reader.
+ ## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
- ## Create a new table input from a given source.
+ ## Create a new table input stream from a given source.
##
## description: `TableDescription` record describing the source.
##
## Returns: true on success.
global add_table: function(description: Input::TableDescription) : bool;

- ## Create a new event input from a given source.
+ ## Create a new event input stream from a given source.
##
## description: `EventDescription` record describing the source.
##
## Returns: true on success.
global add_event: function(description: Input::EventDescription) : bool;
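As a sketch of the event-stream variant (source name and record type are hypothetical; the handler signature follows the description above, and with the default ``want_record=T`` all fields arrive in a single record):

```bro
type LogLine: record {
	ip: addr;
	msg: string;
};

event line_seen(desc: Input::EventDescription, tpe: Input::Event, rec: LogLine)
	{
	print rec$ip, rec$msg;
	}

event bro_init()
	{
	Input::add_event([$source="input.file", $name="input",
	                  $fields=LogLine, $ev=line_seen]);
	}
```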
- ## Create a new file analysis input from a given source. Data read from
- ## the source is automatically forwarded to the file analysis framework.
+ ## Create a new file analysis input stream from a given source. Data read
+ ## from the source is automatically forwarded to the file analysis
+ ## framework.
##
## description: A record describing the source.
##
@ -199,6 +219,10 @@ export {
## Event that is called when the end of a data source has been reached,
## including after an update.
+ ##
+ ## name: Name of the input stream.
+ ##
+ ## source: String that identifies the data source (such as the filename).
global end_of_data: event(name: string, source: string);
}
View file
@ -11,7 +11,9 @@ export {
##
## name: name of the input stream.
## source: source of the input stream.
- ## exit_code: exit code of the program, or number of the signal that forced the program to exit.
- ## signal_exit: false when program exited normally, true when program was forced to exit by a signal.
+ ## exit_code: exit code of the program, or number of the signal that forced
+ ##            the program to exit.
+ ## signal_exit: false when program exited normally, true when program was
+ ##              forced to exit by a signal.
global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool);
}
View file
@ -138,7 +138,7 @@ redef enum PcapFilterID += {
function test_filter(filter: string): bool
{
- if ( ! precompile_pcap_filter(FilterTester, filter) )
+ if ( ! Pcap::precompile_pcap_filter(FilterTester, filter) )
{
# The given filter was invalid
# TODO: generate a notice.
@ -273,7 +273,7 @@ function install(): bool
return F;

local ts = current_time();
- if ( ! precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
+ if ( ! Pcap::precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
{
NOTICE([$note=Compile_Failure,
$msg=fmt("Compiling packet filter failed"),
@ -303,7 +303,7 @@ function install(): bool
}

info$filter = current_filter;
- if ( ! install_pcap_filter(DefaultPcapFilter) )
+ if ( ! Pcap::install_pcap_filter(DefaultPcapFilter) )
{
# Installing the filter failed for some reason.
info$success = F;
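Outside this script, the now-namespaced BIFs are used the same way; a sketch with a hypothetical filter ID and BPF expression:

```bro
redef enum PcapFilterID += { MyHTTPOnly };

event bro_init()
	{
	# Precompile the filter, then install it if compilation succeeded.
	if ( Pcap::precompile_pcap_filter(MyHTTPOnly, "tcp port 80") )
		Pcap::install_pcap_filter(MyHTTPOnly);
	}
```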
View file
@ -349,7 +349,7 @@ type connection: record {
## The outer VLAN, if applicable, for this connection.
vlan: int &optional;

- ## The VLAN vlan, if applicable, for this connection.
+ ## The inner VLAN, if applicable, for this connection.
inner_vlan: int &optional;
};
@ -2509,7 +2509,7 @@ global dns_skip_all_addl = T &redef;
## If a DNS request includes more than this many queries, assume it's non-DNS
## traffic and do not process it. Set to 0 to turn off this functionality.
- global dns_max_queries = 5;
+ global dns_max_queries = 25 &redef;
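Since the option now carries ``&redef``, a local policy can adjust the cap, e.g.:

```bro
# Raise the per-request query cap, or set it to 0 to disable the check.
redef dns_max_queries = 50;
```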
## HTTP session statistics.
##
@ -3733,7 +3733,6 @@ export {
## (includes GRE tunnels).
const ip_tunnel_timeout = 24hrs &redef;
} # end export
- module GLOBAL;
module Reporter; module Reporter;
export { export {
@ -3752,11 +3751,19 @@ export {
## external harness and shouldn't output anything to the console. ## external harness and shouldn't output anything to the console.
const errors_to_stderr = T &redef; const errors_to_stderr = T &redef;
} }
module GLOBAL;
module Pcap;
export {
## Number of bytes per packet to capture from live interfaces. ## Number of bytes per packet to capture from live interfaces.
const snaplen = 8192 &redef; const snaplen = 8192 &redef;
## Number of Mbytes to provide as buffer space when capturing from live
## interfaces.
const bufsize = 128 &redef;
} # end export
module GLOBAL;
## Seed for hashes computed internally for probabilistic data structures. Using ## Seed for hashes computed internally for probabilistic data structures. Using
## the same value here will make the hashes compatible between independent Bro ## the same value here will make the hashes compatible between independent Bro
## instances. If left unset, Bro will use a temporary local seed. ## instances. If left unset, Bro will use a temporary local seed.
@ -87,7 +87,8 @@ export {
## f packet with FIN bit set ## f packet with FIN bit set
## r packet with RST bit set ## r packet with RST bit set
## c packet with a bad checksum ## c packet with a bad checksum
## i inconsistent packet (e.g. SYN+RST bits both set) ## i inconsistent packet (e.g. FIN+RST bits set)
## q multi-flag packet (SYN+FIN or SYN+RST bits set)
## ====== ==================================================== ## ====== ====================================================
## ##
## If the event comes from the originator, the letter is in ## If the event comes from the originator, the letter is in
@ -270,7 +270,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
{ {
if ( /^[bB][aA][sS][iI][cC] / in value ) if ( /^[bB][aA][sS][iI][cC] / in value )
{ {
local userpass = decode_base64(sub(value, /[bB][aA][sS][iI][cC][[:blank:]]/, "")); local userpass = decode_base64_conn(c$id, sub(value, /[bB][aA][sS][iI][cC][[:blank:]]/, ""));
local up = split_string(userpass, /:/); local up = split_string(userpass, /:/);
if ( |up| >= 2 ) if ( |up| >= 2 )
{ {
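The hunk above matches the `Basic` authorization scheme case-insensitively and splits the decoded credentials on `:`; the change routes decoding through `decode_base64_conn` so errors are attributed to the connection. A sketch of the prefix check and credential split (base64 decoding elided; C++ here, whereas the script above is Bro):

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <utility>

// Case-insensitive check for the "Basic " authorization scheme,
// mirroring the /^[bB][aA][sS][iI][cC] / pattern above.
bool is_basic_auth(const std::string& value) {
    const std::string prefix = "basic ";
    if (value.size() < prefix.size())
        return false;
    for (size_t i = 0; i < prefix.size(); ++i)
        if (std::tolower(static_cast<unsigned char>(value[i])) != prefix[i])
            return false;
    return true;
}

// Split already-decoded "user:pass" on the first ':' only -- passwords
// may themselves contain ':'.
std::pair<std::string, std::string> split_userpass(const std::string& userpass) {
    size_t pos = userpass.find(':');
    if (pos == std::string::npos)
        return {userpass, ""};
    return {userpass.substr(0, pos), userpass.substr(pos + 1)};
}
```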
@ -60,9 +60,9 @@ export {
## Contents of the Warning: header ## Contents of the Warning: header
warning: string &log &optional; warning: string &log &optional;
## Contents of the Content-Length: header from the client ## Contents of the Content-Length: header from the client
request_body_len: string &log &optional; request_body_len: count &log &optional;
## Contents of the Content-Length: header from the server ## Contents of the Content-Length: header from the server
response_body_len: string &log &optional; response_body_len: count &log &optional;
## Contents of the Content-Type: header from the server ## Contents of the Content-Type: header from the server
content_type: string &log &optional; content_type: string &log &optional;
}; };
@ -127,17 +127,6 @@ function set_state(c: connection, is_request: bool)
c$sip_state = s; c$sip_state = s;
} }
# These deal with new requests and responses.
if ( is_request && c$sip_state$current_request !in c$sip_state$pending )
c$sip_state$pending[c$sip_state$current_request] = new_sip_session(c);
if ( ! is_request && c$sip_state$current_response !in c$sip_state$pending )
c$sip_state$pending[c$sip_state$current_response] = new_sip_session(c);
if ( is_request )
c$sip = c$sip_state$pending[c$sip_state$current_request];
else
c$sip = c$sip_state$pending[c$sip_state$current_response];
if ( is_request ) if ( is_request )
{ {
if ( c$sip_state$current_request !in c$sip_state$pending ) if ( c$sip_state$current_request !in c$sip_state$pending )
@ -152,7 +141,6 @@ function set_state(c: connection, is_request: bool)
c$sip = c$sip_state$pending[c$sip_state$current_response]; c$sip = c$sip_state$pending[c$sip_state$current_response];
} }
} }
function flush_pending(c: connection) function flush_pending(c: connection)
@ -163,7 +151,9 @@ function flush_pending(c: connection)
for ( r in c$sip_state$pending ) for ( r in c$sip_state$pending )
{ {
# We don't use pending elements at index 0. # We don't use pending elements at index 0.
if ( r == 0 ) next; if ( r == 0 )
next;
Log::write(SIP::LOG, c$sip_state$pending[r]); Log::write(SIP::LOG, c$sip_state$pending[r]);
} }
} }
@ -205,16 +195,39 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
if ( c$sip_state$current_request !in c$sip_state$pending ) if ( c$sip_state$current_request !in c$sip_state$pending )
++c$sip_state$current_request; ++c$sip_state$current_request;
set_state(c, is_request); set_state(c, is_request);
if ( name == "CALL-ID" ) c$sip$call_id = value; switch ( name )
else if ( name == "CONTENT-LENGTH" || name == "L" ) c$sip$request_body_len = value; {
else if ( name == "CSEQ" ) c$sip$seq = value; case "CALL-ID":
else if ( name == "DATE" ) c$sip$date = value; c$sip$call_id = value;
else if ( name == "FROM" || name == "F" ) c$sip$request_from = split_string1(value, /;[ ]?tag=/)[0]; break;
else if ( name == "REPLY-TO" ) c$sip$reply_to = value; case "CONTENT-LENGTH", "L":
else if ( name == "SUBJECT" || name == "S" ) c$sip$subject = value; c$sip$request_body_len = to_count(value);
else if ( name == "TO" || name == "T" ) c$sip$request_to = value; break;
else if ( name == "USER-AGENT" ) c$sip$user_agent = value; case "CSEQ":
else if ( name == "VIA" || name == "V" ) c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0]; c$sip$seq = value;
break;
case "DATE":
c$sip$date = value;
break;
case "FROM", "F":
c$sip$request_from = split_string1(value, /;[ ]?tag=/)[0];
break;
case "REPLY-TO":
c$sip$reply_to = value;
break;
case "SUBJECT", "S":
c$sip$subject = value;
break;
case "TO", "T":
c$sip$request_to = value;
break;
case "USER-AGENT":
c$sip$user_agent = value;
break;
case "VIA", "V":
c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0];
break;
}
c$sip_state$pending[c$sip_state$current_request] = c$sip; c$sip_state$pending[c$sip_state$current_request] = c$sip;
} }
@ -222,13 +235,29 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
{ {
if ( c$sip_state$current_response !in c$sip_state$pending ) if ( c$sip_state$current_response !in c$sip_state$pending )
++c$sip_state$current_response; ++c$sip_state$current_response;
set_state(c, is_request); set_state(c, is_request);
if ( name == "CONTENT-LENGTH" || name == "L" ) c$sip$response_body_len = value; switch ( name )
else if ( name == "CONTENT-TYPE" || name == "C" ) c$sip$content_type = value; {
else if ( name == "WARNING" ) c$sip$warning = value; case "CONTENT-LENGTH", "L":
else if ( name == "FROM" || name == "F" ) c$sip$response_from = split_string1(value, /;[ ]?tag=/)[0]; c$sip$response_body_len = to_count(value);
else if ( name == "TO" || name == "T" ) c$sip$response_to = value; break;
else if ( name == "VIA" || name == "V" ) c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0]; case "CONTENT-TYPE", "C":
c$sip$content_type = value;
break;
case "WARNING":
c$sip$warning = value;
break;
case "FROM", "F":
c$sip$response_from = split_string1(value, /;[ ]?tag=/)[0];
break;
case "TO", "T":
c$sip$response_to = value;
break;
case "VIA", "V":
c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0];
break;
}
c$sip_state$pending[c$sip_state$current_response] = c$sip; c$sip_state$pending[c$sip_state$current_response] = c$sip;
} }
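The refactor above replaces the if/else chain with a Bro `switch` whose multi-label cases (`case "FROM", "F":`) also cover the compact SIP header forms. C++ cannot switch on strings, so an equivalent dispatch can use a lookup table — an illustrative sketch with hypothetical names, not the analyzer's actual code:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

struct SIPInfo {
    std::string call_id, seq, request_to;
};

// Map each header name -- including compact forms like "T" for "TO" --
// to a setter, mimicking the multi-label switch cases above.
void handle_header(SIPInfo& sip, const std::string& name, const std::string& value) {
    static const std::unordered_map<std::string,
        std::function<void(SIPInfo&, const std::string&)>> handlers = {
        {"CALL-ID", [](SIPInfo& s, const std::string& v) { s.call_id = v; }},
        {"CSEQ",    [](SIPInfo& s, const std::string& v) { s.seq = v; }},
        {"TO",      [](SIPInfo& s, const std::string& v) { s.request_to = v; }},
        {"T",       [](SIPInfo& s, const std::string& v) { s.request_to = v; }},
    };
    auto it = handlers.find(name);
    if (it != handlers.end())  // unknown headers are ignored, as above
        it->second(sip, value);
}
```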
@ -1,7 +1,7 @@
signature dpd_ssl_server { signature dpd_ssl_server {
ip-proto == tcp ip-proto == tcp
# Server hello. # Server hello.
payload /^(\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/ payload /^((\x15\x03[\x00\x01\x02\x03]....)?\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client requires-reverse-signature dpd_ssl_client
enable "ssl" enable "ssl"
tcp-state responder tcp-state responder
@ -4,7 +4,7 @@
##! ##!
##! It's intended to be used from the command line like this:: ##! It's intended to be used from the command line like this::
##! ##!
##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>] ##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::host_port=<host_port> Control::cmd=<command> [Control::arg=<arg>]
@load base/frameworks/control @load base/frameworks/control
@load base/frameworks/communication @load base/frameworks/communication
@ -1,5 +1,7 @@
##! Perform MD5 and SHA1 hashing on all files. ##! Perform MD5 and SHA1 hashing on all files.
@load base/files/hash
event file_new(f: fa_file) event file_new(f: fa_file)
{ {
Files::add_analyzer(f, Files::ANALYZER_MD5); Files::add_analyzer(f, Files::ANALYZER_MD5);
@ -15,7 +15,7 @@ redef record Info += {
# Add the VLAN information to the Conn::Info structure after the connection # Add the VLAN information to the Conn::Info structure after the connection
# has been removed. This ensures it's only done once, and is done before the # has been removed. This ensures it's only done once, and is done before the
# connection information is written to the log. # connection information is written to the log.
event connection_state_remove(c: connection) &priority=5 event connection_state_remove(c: connection)
{ {
if ( c?$vlan ) if ( c?$vlan )
c$conn$vlan = c$vlan; c$conn$vlan = c$vlan;
@ -19,12 +19,12 @@ export {
}; };
} }
event rexmit_inconsistency(c: connection, t1: string, t2: string) event rexmit_inconsistency(c: connection, t1: string, t2: string, tcp_flags: string)
{ {
NOTICE([$note=Retransmission_Inconsistency, NOTICE([$note=Retransmission_Inconsistency,
$conn=c, $conn=c,
$msg=fmt("%s rexmit inconsistency (%s) (%s)", $msg=fmt("%s rexmit inconsistency (%s) (%s) [%s]",
id_string(c$id), t1, t2), id_string(c$id), t1, t2, tcp_flags),
$identifier=fmt("%s", c$id)]); $identifier=fmt("%s", c$id)]);
} }
@ -82,7 +82,7 @@ int* Base64Converter::InitBase64Table(const string& alphabet)
return base64_table; return base64_table;
} }
Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string& arg_alphabet) Base64Converter::Base64Converter(Connection* arg_conn, const string& arg_alphabet)
{ {
if ( arg_alphabet.size() > 0 ) if ( arg_alphabet.size() > 0 )
{ {
@ -98,7 +98,7 @@ Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string&
base64_group_next = 0; base64_group_next = 0;
base64_padding = base64_after_padding = 0; base64_padding = base64_after_padding = 0;
errored = 0; errored = 0;
analyzer = arg_analyzer; conn = arg_conn;
} }
Base64Converter::~Base64Converter() Base64Converter::~Base64Converter()
@ -216,9 +216,9 @@ int Base64Converter::Done(int* pblen, char** pbuf)
} }
BroString* decode_base64(const BroString* s, const BroString* a) BroString* decode_base64(const BroString* s, const BroString* a, Connection* conn)
{ {
if ( a && a->Len() != 64 ) if ( a && a->Len() != 0 && a->Len() != 64 )
{ {
reporter->Error("base64 decoding alphabet is not 64 characters: %s", reporter->Error("base64 decoding alphabet is not 64 characters: %s",
a->CheckString()); a->CheckString());
@ -229,7 +229,7 @@ BroString* decode_base64(const BroString* s, const BroString* a)
int rlen2, rlen = buf_len; int rlen2, rlen = buf_len;
char* rbuf2, *rbuf = new char[rlen]; char* rbuf2, *rbuf = new char[rlen];
Base64Converter dec(0, a ? a->CheckString() : ""); Base64Converter dec(conn, a ? a->CheckString() : "");
if ( dec.Decode(s->Len(), (const char*) s->Bytes(), &rlen, &rbuf) == -1 ) if ( dec.Decode(s->Len(), (const char*) s->Bytes(), &rlen, &rbuf) == -1 )
goto err; goto err;
@ -248,9 +248,9 @@ err:
return 0; return 0;
} }
BroString* encode_base64(const BroString* s, const BroString* a) BroString* encode_base64(const BroString* s, const BroString* a, Connection* conn)
{ {
if ( a && a->Len() != 64 ) if ( a && a->Len() != 0 && a->Len() != 64 )
{ {
reporter->Error("base64 alphabet is not 64 characters: %s", reporter->Error("base64 alphabet is not 64 characters: %s",
a->CheckString()); a->CheckString());
@ -259,7 +259,7 @@ BroString* encode_base64(const BroString* s, const BroString* a)
char* outbuf = 0; char* outbuf = 0;
int outlen = 0; int outlen = 0;
Base64Converter enc(0, a ? a->CheckString() : ""); Base64Converter enc(conn, a ? a->CheckString() : "");
enc.Encode(s->Len(), (const unsigned char*) s->Bytes(), &outlen, &outbuf); enc.Encode(s->Len(), (const unsigned char*) s->Bytes(), &outlen, &outbuf);
return new BroString(1, (u_char*)outbuf, outlen); return new BroString(1, (u_char*)outbuf, outlen);
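The relaxed checks above accept an explicitly empty alphabet (meaning "use the default") in addition to a 64-character one. A condensed sketch of that validation — the uniqueness check is an extra added here for illustration, not part of the diff:

```cpp
#include <cassert>
#include <set>
#include <string>

// Mirrors the relaxed check above: an empty alphabet selects the
// default; otherwise it must contain exactly 64 characters (here
// additionally required to be distinct, which the diff does not check).
bool valid_base64_alphabet(const std::string& a) {
    if (a.empty())
        return true;
    if (a.size() != 64)
        return false;
    return std::set<char>(a.begin(), a.end()).size() == 64;
}
```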
@ -8,15 +8,17 @@
#include "util.h" #include "util.h"
#include "BroString.h" #include "BroString.h"
#include "Reporter.h" #include "Reporter.h"
#include "analyzer/Analyzer.h" #include "Conn.h"
// Maybe we should have a base class for generic decoders? // Maybe we should have a base class for generic decoders?
class Base64Converter { class Base64Converter {
public: public:
// <analyzer> is used for error reporting, and it should be zero when // <conn> is used for error reporting. If it is set to zero (as,
// the decoder is called by the built-in function decode_base64() or encode_base64(). // e.g., done by the built-in functions decode_base64() and
// Empty alphabet indicates the default base64 alphabet. // encode_base64()), encoding-errors will go to Reporter instead of
Base64Converter(analyzer::Analyzer* analyzer, const string& alphabet = ""); // Weird. Usage errors go to Reporter in any case. Empty alphabet
// indicates the default base64 alphabet.
Base64Converter(Connection* conn, const string& alphabet = "");
~Base64Converter(); ~Base64Converter();
// A note on Decode(): // A note on Decode():
@ -42,8 +44,8 @@ public:
void IllegalEncoding(const char* msg) void IllegalEncoding(const char* msg)
{ {
// strncpy(error_msg, msg, sizeof(error_msg)); // strncpy(error_msg, msg, sizeof(error_msg));
if ( analyzer ) if ( conn )
analyzer->Weird("base64_illegal_encoding", msg); conn->Weird("base64_illegal_encoding", msg);
else else
reporter->Error("%s", msg); reporter->Error("%s", msg);
} }
@ -63,11 +65,11 @@ protected:
int base64_after_padding; int base64_after_padding;
int* base64_table; int* base64_table;
int errored; // if true, we encountered an error - skip further processing int errored; // if true, we encountered an error - skip further processing
analyzer::Analyzer* analyzer; Connection* conn;
}; };
BroString* decode_base64(const BroString* s, const BroString* a = 0); BroString* decode_base64(const BroString* s, const BroString* a = 0, Connection* conn = 0);
BroString* encode_base64(const BroString* s, const BroString* a = 0); BroString* encode_base64(const BroString* s, const BroString* a = 0, Connection* conn = 0);
#endif /* base64_h */ #endif /* base64_h */
@ -709,7 +709,7 @@ bool ChunkedIOSSL::Init()
{ {
SSL_load_error_strings(); SSL_load_error_strings();
ctx = SSL_CTX_new(SSLv3_method()); ctx = SSL_CTX_new(SSLv23_method());
if ( ! ctx ) if ( ! ctx )
{ {
Log("can't create SSL context"); Log("can't create SSL context");
@ -91,6 +91,8 @@
targetEnd. Note: the end pointers are *after* the last item: e.g. targetEnd. Note: the end pointers are *after* the last item: e.g.
*(sourceEnd - 1) is the last item. *(sourceEnd - 1) is the last item.
!!! NOTE: The source and end pointers must be aligned properly !!!
The return result indicates whether the conversion was successful, The return result indicates whether the conversion was successful,
and if not, whether the problem was in the source or target buffers. and if not, whether the problem was in the source or target buffers.
(Only the first encountered problem is indicated.) (Only the first encountered problem is indicated.)
@ -199,18 +201,22 @@ ConversionResult ConvertUTF8toUTF32(
const UTF8** sourceStart, const UTF8* sourceEnd, const UTF8** sourceStart, const UTF8* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags); UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF16toUTF8 ( ConversionResult ConvertUTF16toUTF8 (
const UTF16** sourceStart, const UTF16* sourceEnd, const UTF16** sourceStart, const UTF16* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags); UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags);
/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF32toUTF8 ( ConversionResult ConvertUTF32toUTF8 (
const UTF32** sourceStart, const UTF32* sourceEnd, const UTF32** sourceStart, const UTF32* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags); UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags);
/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF16toUTF32 ( ConversionResult ConvertUTF16toUTF32 (
const UTF16** sourceStart, const UTF16* sourceEnd, const UTF16** sourceStart, const UTF16* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags); UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
/* NOTE: The source and end pointers must be aligned properly. */
ConversionResult ConvertUTF32toUTF16 ( ConversionResult ConvertUTF32toUTF16 (
const UTF32** sourceStart, const UTF32* sourceEnd, const UTF32** sourceStart, const UTF32* sourceEnd,
UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags); UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags);
@ -70,9 +70,6 @@ extern bool terminating;
// True if the remote serializer is to be activated. // True if the remote serializer is to be activated.
extern bool using_communication; extern bool using_communication;
// Snaplen passed to libpcap.
extern int snaplen;
extern const Packet* current_pkt; extern const Packet* current_pkt;
extern int current_dispatched; extern int current_dispatched;
extern double current_timestamp; extern double current_timestamp;
@ -3459,7 +3459,11 @@ void SocketComm::Run()
if ( io->CanWrite() ) if ( io->CanWrite() )
++canwrites; ++canwrites;
int a = select(max_fd + 1, &fd_read, &fd_write, &fd_except, 0); struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
int a = select(max_fd + 1, &fd_read, &fd_write, &fd_except, &timeout);
if ( selects % 100000 == 0 ) if ( selects % 100000 == 0 )
Log(fmt("selects=%ld canwrites=%ld pending=%lu", Log(fmt("selects=%ld canwrites=%ld pending=%lu",
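The bugfix above bounds the child process' `select()` with a one-second timeout, so the communication loop can no longer block indefinitely when no descriptor becomes ready. A self-contained sketch of that bounded wait using POSIX `select`:

```cpp
#include <cassert>
#include <sys/select.h>

// Wait at most `ms` milliseconds for activity. With no descriptors
// registered this degenerates to a bounded sleep, returning true on
// timeout -- the property the fix above relies on to keep the loop live.
bool bounded_wait(long ms) {
    struct timeval timeout;
    timeout.tv_sec = ms / 1000;
    timeout.tv_usec = (ms % 1000) * 1000;
    int rc = select(0, nullptr, nullptr, nullptr, &timeout);
    return rc == 0;  // 0 => timed out, no fd became ready
}
```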
@ -393,4 +393,3 @@ void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out,
if ( alloced ) if ( alloced )
free(alloced); free(alloced);
} }
@ -206,7 +206,7 @@ void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{ {
line = skip_whitespace(line + cmd_len, end_of_line); line = skip_whitespace(line + cmd_len, end_of_line);
StringVal encoded(end_of_line - line, line); StringVal encoded(end_of_line - line, line);
decoded_adat = decode_base64(encoded.AsString()); decoded_adat = decode_base64(encoded.AsString(), 0, Conn());
if ( first_token ) if ( first_token )
{ {
@ -273,7 +273,7 @@ void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{ {
line += 5; line += 5;
StringVal encoded(end_of_line - line, line); StringVal encoded(end_of_line - line, line);
decoded_adat = decode_base64(encoded.AsString()); decoded_adat = decode_base64(encoded.AsString(), 0, Conn());
} }
break; break;
@ -995,28 +995,9 @@ void HTTP_Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)
HTTP_Reply(); HTTP_Reply();
if ( connect_request && reply_code == 200 ) if ( connect_request && reply_code != 200 )
{ // Request failed, do not set up tunnel.
pia = new pia::PIA_TCP(Conn()); connect_request = false;
if ( AddChildAnalyzer(pia) )
{
pia->FirstPacket(true, 0);
pia->FirstPacket(false, 0);
// This connection has transitioned to no longer
// being http and the content line support analyzers
// need to be removed.
RemoveSupportAnalyzer(content_line_orig);
RemoveSupportAnalyzer(content_line_resp);
return;
}
else
// AddChildAnalyzer() will have deleted PIA.
pia = 0;
}
InitHTTPMessage(content_line, InitHTTPMessage(content_line,
reply_message, is_orig, reply_message, is_orig,
@ -1036,6 +1017,30 @@ void HTTP_Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)
case EXPECT_REPLY_MESSAGE: case EXPECT_REPLY_MESSAGE:
reply_message->Deliver(len, line, 1); reply_message->Deliver(len, line, 1);
if ( connect_request && len == 0 )
{
// End of message header reached, set up
// tunnel decapsulation.
pia = new pia::PIA_TCP(Conn());
if ( AddChildAnalyzer(pia) )
{
pia->FirstPacket(true, 0);
pia->FirstPacket(false, 0);
// This connection has transitioned to no longer
// being http and the content line support analyzers
// need to be removed.
RemoveSupportAnalyzer(content_line_orig);
RemoveSupportAnalyzer(content_line_resp);
}
else
// AddChildAnalyzer() will have deleted PIA.
pia = 0;
}
break; break;
case EXPECT_REPLY_TRAILER: case EXPECT_REPLY_TRAILER:
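The reordered logic above defers tunnel setup: a successful CONNECT (reply code 200) no longer switches to tunnel decapsulation right after the reply line, but only once the empty line ending the reply's header section arrives; a failed CONNECT simply clears the flag. A toy model of that decision, with hypothetical names:

```cpp
#include <cassert>

// Minimal model of the reordered CONNECT handling: remember whether a
// CONNECT is pending, drop it on a non-200 reply, and start the tunnel
// only at the blank line that ends the reply headers.
struct ConnectState {
    bool connect_request = false;
    bool tunnel_started = false;

    void OnRequest() { connect_request = true; }

    void OnReply(int code) {
        if (connect_request && code != 200)
            connect_request = false;  // request failed, no tunnel
    }

    void OnReplyHeaderLine(int len) {
        if (connect_request && len == 0)  // end of header section
            tunnel_started = true;
    }
};
```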
@ -248,9 +248,7 @@ int MIME_get_field_name(int len, const char* data, data_chunk_t* name)
int MIME_is_tspecial (char ch, bool is_boundary = false) int MIME_is_tspecial (char ch, bool is_boundary = false)
{ {
if ( is_boundary ) if ( is_boundary )
return ch == '(' || ch == ')' || ch == '@' || return ch == '"';
ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
ch == '/' || ch == '[' || ch == ']' || ch == '?' || ch == '=';
else else
return ch == '(' || ch == ')' || ch == '<' || ch == '>' || ch == '@' || return ch == '(' || ch == ')' || ch == '<' || ch == '>' || ch == '@' ||
ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' || ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' ||
@ -272,7 +270,11 @@ int MIME_is_token_char (char ch, bool is_boundary = false)
int MIME_get_token(int len, const char* data, data_chunk_t* token, int MIME_get_token(int len, const char* data, data_chunk_t* token,
bool is_boundary) bool is_boundary)
{ {
int i = MIME_skip_lws_comments(len, data); int i = 0;
if ( ! is_boundary )
i = MIME_skip_lws_comments(len, data);
while ( i < len ) while ( i < len )
{ {
int j; int j;
@ -366,7 +368,10 @@ int MIME_get_quoted_string(int len, const char* data, data_chunk_t* str)
int MIME_get_value(int len, const char* data, BroString*& buf, bool is_boundary) int MIME_get_value(int len, const char* data, BroString*& buf, bool is_boundary)
{ {
int offset = MIME_skip_lws_comments(len, data); int offset = 0;
if ( ! is_boundary ) // For boundaries, simply accept everything.
offset = MIME_skip_lws_comments(len, data);
len -= offset; len -= offset;
data += offset; data += offset;
@ -876,6 +881,13 @@ int MIME_Entity::ParseFieldParameters(int len, const char* data)
// token or quoted-string (and some lenience for characters // token or quoted-string (and some lenience for characters
// not explicitly allowed by the RFC, but encountered in the wild) // not explicitly allowed by the RFC, but encountered in the wild)
offset = MIME_get_value(len, data, val, true); offset = MIME_get_value(len, data, val, true);
if ( ! val )
{
IllegalFormat("Could not parse multipart boundary");
continue;
}
data_chunk_t vd = get_data_chunk(val); data_chunk_t vd = get_data_chunk(val);
multipart_boundary = new BroString((const u_char*)vd.data, multipart_boundary = new BroString((const u_char*)vd.data,
vd.length, 1); vd.length, 1);
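The lenient parsing above treats only `"` as special inside a multipart boundary, while the full RFC 2045 tspecials set still applies elsewhere. A condensed sketch of that predicate:

```cpp
#include <cassert>
#include <cstring>

// For boundary parameters only '"' terminates a token (the lenient
// behavior introduced above); elsewhere the full tspecials set applies.
bool mime_is_tspecial(char ch, bool is_boundary) {
    if (is_boundary)
        return ch == '"';
    return std::strchr("()<>@,;:\\\"/[]?=", ch) != nullptr;
}
```

This lets boundaries containing characters like `=` or `/` — common in the wild — parse as a single token.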
@ -1122,7 +1134,15 @@ void MIME_Entity::StartDecodeBase64()
delete base64_decoder; delete base64_decoder;
} }
base64_decoder = new Base64Converter(message->GetAnalyzer()); analyzer::Analyzer* analyzer = message->GetAnalyzer();
if ( ! analyzer )
{
reporter->InternalWarning("no analyzer associated with MIME message");
return;
}
base64_decoder = new Base64Converter(analyzer->Conn());
} }
void MIME_Entity::FinishDecodeBase64() void MIME_Entity::FinishDecodeBase64()
@ -1,5 +1,6 @@
#include "PIA.h" #include "PIA.h"
#include "RuleMatcher.h" #include "RuleMatcher.h"
#include "analyzer/protocol/tcp/TCP_Flags.h"
#include "analyzer/protocol/tcp/TCP_Reassembler.h" #include "analyzer/protocol/tcp/TCP_Reassembler.h"
#include "events.bif.h" #include "events.bif.h"
@ -348,12 +349,16 @@ void PIA_TCP::ActivateAnalyzer(analyzer::Tag tag, const Rule* rule)
for ( DataBlock* b = pkt_buffer.head; b; b = b->next ) for ( DataBlock* b = pkt_buffer.head; b; b = b->next )
{ {
// We don't have the TCP flags here during replay. We could
// funnel them through, but it's non-trivial and doesn't seem
// worth the effort.
if ( b->is_orig ) if ( b->is_orig )
reass_orig->DataSent(network_time, orig_seq = b->seq, reass_orig->DataSent(network_time, orig_seq = b->seq,
b->len, b->data, true); b->len, b->data, tcp::TCP_Flags(), true);
else else
reass_resp->DataSent(network_time, resp_seq = b->seq, reass_resp->DataSent(network_time, resp_seq = b->seq,
b->len, b->data, true); b->len, b->data, tcp::TCP_Flags(), true);
} }
// We also need to pass the current packet on. // We also need to pass the current packet on.
@ -363,11 +368,11 @@ void PIA_TCP::ActivateAnalyzer(analyzer::Tag tag, const Rule* rule)
if ( current->is_orig ) if ( current->is_orig )
reass_orig->DataSent(network_time, reass_orig->DataSent(network_time,
orig_seq = current->seq, orig_seq = current->seq,
current->len, current->data, true); current->len, current->data, analyzer::tcp::TCP_Flags(), true);
else else
reass_resp->DataSent(network_time, reass_resp->DataSent(network_time,
resp_seq = current->seq, resp_seq = current->seq,
current->len, current->data, true); current->len, current->data, analyzer::tcp::TCP_Flags(), true);
} }
ClearBuffer(&pkt_buffer); ClearBuffer(&pkt_buffer);
@ -137,7 +137,7 @@ void POP3_Analyzer::ProcessRequest(int length, const char* line)
++authLines; ++authLines;
BroString encoded(line); BroString encoded(line);
BroString* decoded = decode_base64(&encoded); BroString* decoded = decode_base64(&encoded, 0, Conn());
if ( ! decoded ) if ( ! decoded )
{ {
@ -9,9 +9,8 @@ refine flow RDP_Flow += {
function utf16_to_utf8_val(utf16: bytestring): StringVal function utf16_to_utf8_val(utf16: bytestring): StringVal
%{ %{
std::string resultstring; std::string resultstring;
size_t widesize = utf16.length();
size_t utf8size = 3 * widesize + 1; size_t utf8size = (3 * utf16.length() + 1);
if ( utf8size > resultstring.max_size() ) if ( utf8size > resultstring.max_size() )
{ {
@ -20,8 +19,16 @@ refine flow RDP_Flow += {
} }
resultstring.resize(utf8size, '\0'); resultstring.resize(utf8size, '\0');
const UTF16* sourcestart = reinterpret_cast<const UTF16*>(utf16.begin());
const UTF16* sourceend = sourcestart + widesize; // We can't assume that the string data is properly aligned
// here, so make a copy.
UTF16 utf16_copy[utf16.length()]; // Twice as much memory than necessary.
memcpy(utf16_copy, utf16.begin(), utf16.length());
const char* utf16_copy_end = reinterpret_cast<const char*>(utf16_copy) + utf16.length();
const UTF16* sourcestart = utf16_copy;
const UTF16* sourceend = reinterpret_cast<const UTF16*>(utf16_copy_end);
UTF8* targetstart = reinterpret_cast<UTF8*>(&resultstring[0]); UTF8* targetstart = reinterpret_cast<UTF8*>(&resultstring[0]);
UTF8* targetend = targetstart + utf8size; UTF8* targetend = targetstart + utf8size;
@ -37,6 +44,7 @@ refine flow RDP_Flow += {
} }
*targetstart = 0; *targetstart = 0;
// We're relying on no nulls being in the string. // We're relying on no nulls being in the string.
return new StringVal(resultstring.c_str()); return new StringVal(resultstring.c_str());
%} %}
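The RDP fix above copies the possibly unaligned byte stream into properly aligned storage before reinterpreting it as `UTF16` code units, since casting unaligned data is undefined behavior on strict-alignment platforms. A condensed, self-contained sketch of the same idea:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Reinterpret a raw byte buffer as 16-bit units by first memcpy'ing it
// into aligned storage -- safe regardless of the source alignment.
// A trailing odd byte is dropped, as len/2 truncates.
std::vector<uint16_t> as_utf16_units(const unsigned char* data, size_t len) {
    std::vector<uint16_t> units(len / 2);  // vector storage is aligned
    std::memcpy(units.data(), data, units.size() * 2);
    return units;
}
```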
@ -1,14 +1,5 @@
enum ExpectBody {
BODY_EXPECTED,
BODY_NOT_EXPECTED,
BODY_MAYBE,
};
type SIP_TOKEN = RE/[^()<>@,;:\\"\/\[\]?={} \t]+/; type SIP_TOKEN = RE/[^()<>@,;:\\"\/\[\]?={} \t]+/;
type SIP_WS = RE/[ \t]*/; type SIP_WS = RE/[ \t]*/;
type SIP_COLON = RE/:/;
type SIP_TO_EOL = RE/[^\r\n]*/;
type SIP_EOL = RE/(\r\n){1,2}/;
type SIP_URI = RE/[[:alnum:]@[:punct:]]+/; type SIP_URI = RE/[[:alnum:]@[:punct:]]+/;
type SIP_PDU(is_orig: bool) = case is_orig of { type SIP_PDU(is_orig: bool) = case is_orig of {
@ -17,14 +8,12 @@ type SIP_PDU(is_orig: bool) = case is_orig of {
}; };
type SIP_Request = record { type SIP_Request = record {
request: SIP_RequestLine; request: SIP_RequestLine &oneline;
newline: padding[2];
msg: SIP_Message; msg: SIP_Message;
}; };
type SIP_Reply = record { type SIP_Reply = record {
reply: SIP_ReplyLine; reply: SIP_ReplyLine &oneline;
newline: padding[2];
msg: SIP_Message; msg: SIP_Message;
}; };
@ -33,7 +22,7 @@ type SIP_RequestLine = record {
: SIP_WS; : SIP_WS;
uri: SIP_URI; uri: SIP_URI;
: SIP_WS; : SIP_WS;
version: SIP_Version; version: SIP_Version &restofdata;
} &oneline; } &oneline;
type SIP_ReplyLine = record { type SIP_ReplyLine = record {
@ -41,7 +30,7 @@ type SIP_ReplyLine = record {
: SIP_WS; : SIP_WS;
status: SIP_Status; status: SIP_Status;
: SIP_WS; : SIP_WS;
reason: SIP_TO_EOL; reason: bytestring &restofdata;
} &oneline; } &oneline;
type SIP_Status = record { type SIP_Status = record {
@ -67,11 +56,11 @@ type SIP_Message = record {
type SIP_HEADER_NAME = RE/[^: \t]+/; type SIP_HEADER_NAME = RE/[^: \t]+/;
type SIP_Header = record { type SIP_Header = record {
name: SIP_HEADER_NAME; name: SIP_HEADER_NAME;
: SIP_COLON;
: SIP_WS; : SIP_WS;
value: SIP_TO_EOL; : ":";
: SIP_EOL; : SIP_WS;
} &oneline &byteorder=bigendian; value: bytestring &restofdata;
} &oneline;
type SIP_Body = record { type SIP_Body = record {
body: bytestring &length = $context.flow.get_content_length(); body: bytestring &length = $context.flow.get_content_length();
@ -21,7 +21,7 @@ connection SIP_Conn(bro_analyzer: BroAnalyzer) {
%include sip-protocol.pac %include sip-protocol.pac
flow SIP_Flow(is_orig: bool) { flow SIP_Flow(is_orig: bool) {
datagram = SIP_PDU(is_orig) withcontext(connection, this); flowunit = SIP_PDU(is_orig) withcontext(connection, this);
}; };
%include sip-analyzer.pac %include sip-analyzer.pac
@ -24,7 +24,7 @@ connection SIP_Conn(bro_analyzer: BroAnalyzer) {
%include sip-protocol.pac %include sip-protocol.pac
flow SIP_Flow(is_orig: bool) { flow SIP_Flow(is_orig: bool) {
datagram = SIP_PDU(is_orig) withcontext(connection, this); flowunit = SIP_PDU(is_orig) withcontext(connection, this);
}; };
%include sip-analyzer.pac %include sip-analyzer.pac
@ -442,7 +442,7 @@ const struct tcphdr* TCP_Analyzer::ExtractTCP_Header(const u_char*& data,
} }
if ( tcp_hdr_len > uint32(len) || if ( tcp_hdr_len > uint32(len) ||
sizeof(struct tcphdr) > uint32(caplen) ) tcp_hdr_len > uint32(caplen) )
{ {
// This can happen even with the above test, due to TCP // This can happen even with the above test, due to TCP
// options. // options.
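The corrected test above compares the full, option-bearing TCP header length against the captured length instead of only `sizeof(struct tcphdr)`, so headers whose options were cut off by the snaplen are rejected. A sketch of the corrected bounds check:

```cpp
#include <cassert>
#include <cstdint>

// Corrected bounds check: the header, including options, must fit in
// both the wire length and the captured length.
bool tcp_header_fits(uint32_t tcp_hdr_len, uint32_t len, uint32_t caplen) {
    return tcp_hdr_len <= len && tcp_hdr_len <= caplen;
}
```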
@ -946,23 +946,11 @@ void TCP_Analyzer::GeneratePacketEvent(
const u_char* data, int len, int caplen, const u_char* data, int len, int caplen,
int is_orig, TCP_Flags flags) int is_orig, TCP_Flags flags)
{ {
char tcp_flags[256];
int tcp_flag_len = 0;
if ( flags.SYN() ) tcp_flags[tcp_flag_len++] = 'S';
if ( flags.FIN() ) tcp_flags[tcp_flag_len++] = 'F';
if ( flags.RST() ) tcp_flags[tcp_flag_len++] = 'R';
if ( flags.ACK() ) tcp_flags[tcp_flag_len++] = 'A';
if ( flags.PUSH() ) tcp_flags[tcp_flag_len++] = 'P';
if ( flags.URG() ) tcp_flags[tcp_flag_len++] = 'U';
tcp_flags[tcp_flag_len] = '\0';
val_list* vl = new val_list(); val_list* vl = new val_list();
vl->append(BuildConnVal()); vl->append(BuildConnVal());
vl->append(new Val(is_orig, TYPE_BOOL)); vl->append(new Val(is_orig, TYPE_BOOL));
vl->append(new StringVal(tcp_flags)); vl->append(new StringVal(flags.AsString()));
vl->append(new Val(rel_seq, TYPE_COUNT)); vl->append(new Val(rel_seq, TYPE_COUNT));
vl->append(new Val(flags.ACK() ? rel_ack : 0, TYPE_COUNT)); vl->append(new Val(flags.ACK() ? rel_ack : 0, TYPE_COUNT));
vl->append(new Val(len, TYPE_COUNT)); vl->append(new Val(len, TYPE_COUNT));

View file

@@ -8,6 +8,7 @@
 #include "PacketDumper.h"
 #include "IPAddr.h"
 #include "TCP_Endpoint.h"
+#include "TCP_Flags.h"
 #include "Conn.h"
 // We define two classes here:
@@ -23,21 +24,6 @@ class TCP_Endpoint;
 class TCP_ApplicationAnalyzer;
 class TCP_Reassembler;
-class TCP_Flags {
-public:
-	TCP_Flags(const struct tcphdr* tp) { flags = tp->th_flags; }
-	bool SYN() { return flags & TH_SYN; }
-	bool FIN() { return flags & TH_FIN; }
-	bool RST() { return flags & TH_RST; }
-	bool ACK() { return flags & TH_ACK; }
-	bool URG() { return flags & TH_URG; }
-	bool PUSH() { return flags & TH_PUSH; }
-protected:
-	u_char flags;
-};
 class TCP_Analyzer : public analyzer::TransportLayerAnalyzer {
 public:
 	TCP_Analyzer(Connection* conn);

View file

@@ -204,7 +204,7 @@ int TCP_Endpoint::DataSent(double t, uint64 seq, int len, int caplen,
 	if ( contents_processor )
 		{
 		if ( caplen >= len )
-			status = contents_processor->DataSent(t, seq, len, data);
+			status = contents_processor->DataSent(t, seq, len, data, TCP_Flags(tp));
 		else
 			TCP()->Weird("truncated_tcp_payload");
 		}

View file

@@ -0,0 +1,55 @@
#ifndef ANALYZER_PROTOCOL_TCP_TCP_FLAGS_H
#define ANALYZER_PROTOCOL_TCP_TCP_FLAGS_H
namespace analyzer { namespace tcp {
class TCP_Flags {
public:
TCP_Flags(const struct tcphdr* tp) { flags = tp->th_flags; }
TCP_Flags() { flags = 0; }
bool SYN() const { return flags & TH_SYN; }
bool FIN() const { return flags & TH_FIN; }
bool RST() const { return flags & TH_RST; }
bool ACK() const { return flags & TH_ACK; }
bool URG() const { return flags & TH_URG; }
bool PUSH() const { return flags & TH_PUSH; }
string AsString() const;
protected:
u_char flags;
};
inline string TCP_Flags::AsString() const
{
char tcp_flags[10];
char* p = tcp_flags;
if ( SYN() )
*p++ = 'S';
if ( FIN() )
*p++ = 'F';
if ( RST() )
*p++ = 'R';
if ( ACK() )
*p++ = 'A';
if ( PUSH() )
*p++ = 'P';
if ( URG() )
*p++ = 'U';
*p++ = '\0';
return tcp_flags;
}
}
}
#endif
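As an aside, the flag-to-string mapping that the new `TCP_Flags::AsString()` above implements can be sketched in Python for illustration. This is not part of the commit; the `TH_*` bit values mirror the classic constants from `<netinet/tcp.h>`, and the output order matches `AsString()` (SYN, FIN, RST, ACK, PUSH, URG).

```python
# Illustrative sketch of TCP_Flags::AsString(); bit values from <netinet/tcp.h>.
TH_FIN, TH_SYN, TH_RST, TH_PUSH, TH_ACK, TH_URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def tcp_flags_string(flags):
    """Return one character per set flag, in AsString()'s order."""
    out = []
    if flags & TH_SYN:
        out.append("S")
    if flags & TH_FIN:
        out.append("F")
    if flags & TH_RST:
        out.append("R")
    if flags & TH_ACK:
        out.append("A")
    if flags & TH_PUSH:
        out.append("P")
    if flags & TH_URG:
        out.append("U")
    return "".join(out)
```

For example, a SYN/ACK packet yields the string "SA", the same value scriptland receives in the `tcp_packet` event.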

View file

@@ -433,8 +433,13 @@ void TCP_Reassembler::Overlap(const u_char* b1, const u_char* b2, uint64 n)
 		{
 		BroString* b1_s = new BroString((const u_char*) b1, n, 0);
 		BroString* b2_s = new BroString((const u_char*) b2, n, 0);
-		tcp_analyzer->Event(rexmit_inconsistency,
-			new StringVal(b1_s), new StringVal(b2_s));
+		val_list* vl = new val_list(3);
+		vl->append(tcp_analyzer->BuildConnVal());
+		vl->append(new StringVal(b1_s));
+		vl->append(new StringVal(b2_s));
+		vl->append(new StringVal(flags.AsString()));
+		tcp_analyzer->ConnectionEvent(rexmit_inconsistency, vl);
 		}
 	}
@@ -461,7 +466,7 @@ void TCP_Reassembler::Deliver(uint64 seq, int len, const u_char* data)
 	}
 int TCP_Reassembler::DataSent(double t, uint64 seq, int len,
-				const u_char* data, bool replaying)
+				const u_char* data, TCP_Flags arg_flags, bool replaying)
 	{
 	uint64 ack = endp->ToRelativeSeqSpace(endp->AckSeq(), endp->AckWraps());
 	uint64 upper_seq = seq + len;
@@ -492,7 +497,9 @@ int TCP_Reassembler::DataSent(double t, uint64 seq, int len,
 		len -= amount_acked;
 		}
+	flags = arg_flags;
 	NewBlock(t, seq, len, data);
+	flags = TCP_Flags();
 	if ( Endpoint()->NoDataAcked() && tcp_max_above_hole_without_any_acks &&
 	     NumUndeliveredBytes() > static_cast<uint64>(tcp_max_above_hole_without_any_acks) )

View file

@@ -3,6 +3,7 @@
 #include "Reassem.h"
 #include "TCP_Endpoint.h"
+#include "TCP_Flags.h"
 class BroFile;
 class Connection;
@@ -61,7 +62,7 @@ public:
 	void SkipToSeq(uint64 seq);
 	int DataSent(double t, uint64 seq, int len, const u_char* data,
-			bool replaying=true);
+			analyzer::tcp::TCP_Flags flags, bool replaying=true);
 	void AckReceived(uint64 seq);
 	// Checks if we have delivered all contents that we can possibly
@@ -110,6 +111,7 @@ private:
 	uint64 seq_to_skip;
 	bool in_delivery;
+	analyzer::tcp::TCP_Flags flags;
 	BroFile* record_contents_file;	// file on which to reassemble contents

View file

@@ -2723,14 +2723,17 @@ function hexstr_to_bytestring%(hexstr: string%): string
 ## Encodes a Base64-encoded string.
 ##
-## s: The string to encode
+## s: The string to encode.
+##
+## a: An optional custom alphabet. The empty string indicates the default
+##    alphabet. If given, the string must consist of 64 unique characters.
 ##
 ## Returns: The encoded version of *s*.
 ##
-## .. bro:see:: encode_base64_custom decode_base64
-function encode_base64%(s: string%): string
+## .. bro:see:: decode_base64
+function encode_base64%(s: string, a: string &default=""%): string
 	%{
-	BroString* t = encode_base64(s->AsString());
+	BroString* t = encode_base64(s->AsString(), a->AsString());
 	if ( t )
 		return new StringVal(t);
 	else
@@ -2740,18 +2743,18 @@ function encode_base64%(s: string%): string
 		}
 	%}
 ## Encodes a Base64-encoded string with a custom alphabet.
 ##
-## s: The string to encode
+## s: The string to encode.
 ##
-## a: The custom alphabet. The empty string indicates the default alphabet. The
-##    length of *a* must be 64. For example, a custom alphabet could be
-##    ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``.
+## a: The custom alphabet. The string must consist of 64 unique
+##    characters. The empty string indicates the default alphabet.
 ##
 ## Returns: The encoded version of *s*.
 ##
-## .. bro:see:: encode_base64 decode_base64_custom
-function encode_base64_custom%(s: string, a: string%): string
+## .. bro:see:: encode_base64
+function encode_base64_custom%(s: string, a: string%): string &deprecated
 	%{
 	BroString* t = encode_base64(s->AsString(), a->AsString());
 	if ( t )
@@ -2767,12 +2770,48 @@ function encode_base64_custom%(s: string, a: string%): string
 ##
 ## s: The Base64-encoded string.
 ##
+## a: An optional custom alphabet. The empty string indicates the default
+##    alphabet. If given, the string must consist of 64 unique characters.
+##
 ## Returns: The decoded version of *s*.
 ##
-## .. bro:see:: decode_base64_custom encode_base64
-function decode_base64%(s: string%): string
+## .. bro:see:: decode_base64_conn encode_base64
+function decode_base64%(s: string, a: string &default=""%): string
 	%{
-	BroString* t = decode_base64(s->AsString());
+	BroString* t = decode_base64(s->AsString(), a->AsString());
+	if ( t )
+		return new StringVal(t);
+	else
+		{
+		reporter->Error("error in decoding string %s", s->CheckString());
+		return new StringVal("");
+		}
+	%}
+
+## Decodes a Base64-encoded string that was derived from processing a connection.
+## If an error is encountered decoding the string, that will be logged to
+## ``weird.log`` with the associated connection.
+##
+## cid: The identifier of the connection that the encoding originates from.
+##
+## s: The Base64-encoded string.
+##
+## a: An optional custom alphabet. The empty string indicates the default
+##    alphabet. If given, the string must consist of 64 unique characters.
+##
+## Returns: The decoded version of *s*.
+##
+## .. bro:see:: decode_base64
+function decode_base64_conn%(cid: conn_id, s: string, a: string &default=""%): string
+	%{
+	Connection* conn = sessions->FindConnection(cid);
+	if ( ! conn )
+		{
+		builtin_error("connection ID not a known connection", cid);
+		return new StringVal("");
+		}
+	BroString* t = decode_base64(s->AsString(), a->AsString(), conn);
 	if ( t )
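The custom-alphabet mechanism these BIFs expose can be illustrated outside of Bro. The following Python sketch (not part of the commit) decodes by translating a 64-character custom alphabet back to the standard one before applying a normal Base64 decode; `decode_base64_custom` here is a stand-in name, not the Bro BIF itself.

```python
import base64

# The standard Base64 alphabet, in index order.
STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode_base64_custom(s, alphabet=""):
    """Decode s; a non-empty alphabet of 64 unique characters replaces the default."""
    if alphabet:
        assert len(alphabet) == 64 and len(set(alphabet)) == 64
        # Map each custom symbol back to its standard counterpart.
        s = s.translate(str.maketrans(alphabet, STD))
    # Re-pad to a multiple of four before decoding.
    return base64.b64decode(s + "=" * (-len(s) % 4))
```

An empty alphabet falls through to plain Base64, matching the "empty string indicates the default alphabet" convention in the documentation above.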

View file

@@ -305,8 +305,14 @@ event packet_contents%(c: connection, contents: string%);
 ##
 ## t2: The new payload.
 ##
+## tcp_flags: A string with the TCP flags of the packet triggering the
+##            inconsistency. In the string, each character corresponds to one set
+##            flag, as follows: ``S`` -> SYN; ``F`` -> FIN; ``R`` -> RST;
+##            ``A`` -> ACK; ``P`` -> PUSH. This string will not always be set,
+##            only if the information is available; it's "best effort".
+##
 ## .. bro:see:: tcp_rexmit tcp_contents
-event rexmit_inconsistency%(c: connection, t1: string, t2: string%);
+event rexmit_inconsistency%(c: connection, t1: string, t2: string, tcp_flags: string%);
 ## Generated when a TCP endpoint acknowledges payload that Bro never saw.
 ##

View file

@@ -52,7 +52,8 @@ bool file_analysis::X509::EndOfFile()
 	X509Val* cert_val = new X509Val(ssl_cert); // cert_val takes ownership of ssl_cert
-	RecordVal* cert_record = ParseCertificate(cert_val); // parse basic information into record
+	// parse basic information into record.
+	RecordVal* cert_record = ParseCertificate(cert_val, GetFile()->GetID().c_str());
 	// and send the record on to scriptland
 	val_list* vl = new val_list();
@@ -84,7 +85,7 @@ bool file_analysis::X509::EndOfFile()
 	return false;
 	}
-RecordVal* file_analysis::X509::ParseCertificate(X509Val* cert_val)
+RecordVal* file_analysis::X509::ParseCertificate(X509Val* cert_val, const char* fid)
 	{
 	::X509* ssl_cert = cert_val->GetCertificate();
@@ -131,8 +132,8 @@ RecordVal* file_analysis::X509::ParseCertificate(X509Val* cert_val)
 	pX509Cert->Assign(3, new StringVal(len, buf));
 	BIO_free(bio);
-	pX509Cert->Assign(5, new Val(GetTimeFromAsn1(X509_get_notBefore(ssl_cert)), TYPE_TIME));
-	pX509Cert->Assign(6, new Val(GetTimeFromAsn1(X509_get_notAfter(ssl_cert)), TYPE_TIME));
+	pX509Cert->Assign(5, new Val(GetTimeFromAsn1(X509_get_notBefore(ssl_cert), fid), TYPE_TIME));
+	pX509Cert->Assign(6, new Val(GetTimeFromAsn1(X509_get_notAfter(ssl_cert), fid), TYPE_TIME));
 	// we only read 255 bytes because byte 256 is always 0.
 	// if the string is longer than 255, that will be our null-termination,
@@ -515,54 +516,103 @@ unsigned int file_analysis::X509::KeyLength(EVP_PKEY *key)
 	reporter->InternalError("cannot be reached");
 	}
-double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
+double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime, const char* arg_fid)
 	{
+	const char *fid = arg_fid ? arg_fid : "";
 	time_t lResult = 0;
-	char lBuffer[24];
+	char lBuffer[26];
 	char* pBuffer = lBuffer;
-	size_t lTimeLength = atime->length;
-	char * pString = (char *) atime->data;
+	const char *pString = (const char *) atime->data;
+	unsigned int remaining = atime->length;
 	if ( atime->type == V_ASN1_UTCTIME )
 		{
-		if ( lTimeLength < 11 || lTimeLength > 17 )
+		if ( remaining < 11 || remaining > 17 )
+			{
+			reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- UTCTime has wrong length", fid));
 			return 0;
+			}
+
+		if ( pString[remaining-1] != 'Z' )
+			{
+			// not valid according to RFC 2459 4.1.2.5.1
+			reporter->Weird(fmt("Could not parse UTC time in non-YY-format in X509 certificate (x509 %s)", fid));
+			return 0;
+			}
+
+		// year is first two digits in YY format. Buffer expects YYYY format.
+		if ( pString[0] - '0' < 50 ) // RFC 2459 4.1.2.5.1
+			{
+			*(pBuffer++) = '2';
+			*(pBuffer++) = '0';
+			}
+		else
+			{
+			*(pBuffer++) = '1';
+			*(pBuffer++) = '9';
+			}
+
 		memcpy(pBuffer, pString, 10);
 		pBuffer += 10;
 		pString += 10;
+		remaining -= 10;
 		}
-	else
+	else if ( atime->type == V_ASN1_GENERALIZEDTIME )
 		{
-		if ( lTimeLength < 13 )
+		// generalized time. We apparently ignore the YYYYMMDDHH case
+		// for now and assume we always have minutes and seconds.
+		// This should be ok because it is specified as a requirement in RFC 2459 4.1.2.5.2
+		if ( remaining < 12 || remaining > 23 )
+			{
+			reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- Generalized time has wrong length", fid));
 			return 0;
+			}
+
 		memcpy(pBuffer, pString, 12);
 		pBuffer += 12;
 		pString += 12;
+		remaining -= 12;
+		}
+	else
+		{
+		reporter->Weird(fmt("Invalid time type in X509 certificate (fuid %s)", fid));
+		return 0;
 		}
-	if ((*pString == 'Z') || (*pString == '-') || (*pString == '+'))
+	if ( (remaining == 0) || (*pString == 'Z') || (*pString == '-') || (*pString == '+') )
 		{
 		*(pBuffer++) = '0';
 		*(pBuffer++) = '0';
 		}
+	else if ( remaining >= 2 )
+		{
+		*(pBuffer++) = *(pString++);
+		*(pBuffer++) = *(pString++);
+		remaining -= 2;
+
+		// Skip any fractional seconds...
+		if ( (remaining > 0) && (*pString == '.') )
+			{
+			pString++;
+			remaining--;
+			while ( (remaining > 0) && (*pString >= '0') && (*pString <= '9') )
+				{
+				pString++;
+				remaining--;
+				}
+			}
+		}
 	else
 		{
-		*(pBuffer++) = *(pString++);
-		*(pBuffer++) = *(pString++);
-
-		// Skip any fractional seconds...
-		if (*pString == '.')
-			{
-			pString++;
-			while ((*pString >= '0') && (*pString <= '9'))
-				pString++;
-			}
+		reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- additional char after time", fid));
+		return 0;
 		}
 	*(pBuffer++) = 'Z';
@@ -570,13 +620,21 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
 	time_t lSecondsFromUTC;
-	if ( *pString == 'Z' )
+	if ( remaining == 0 || *pString == 'Z' )
 		lSecondsFromUTC = 0;
 	else
 		{
-		if ((*pString != '+') && (pString[5] != '-'))
+		if ( remaining < 5 )
+			{
+			reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- not enough bytes remaining for offset", fid));
 			return 0;
+			}
+
+		if ((*pString != '+') && (*pString != '-'))
+			{
+			reporter->Weird(fmt("Could not parse time in X509 certificate (fuid %s) -- unknown offset type", fid));
+			return 0;
+			}
 		lSecondsFromUTC = ((pString[1] - '0') * 10 + (pString[2] - '0')) * 60;
 		lSecondsFromUTC += (pString[3] - '0') * 10 + (pString[4] - '0');
@@ -586,15 +644,15 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
 		}
 	tm lTime;
-	lTime.tm_sec = ((lBuffer[10] - '0') * 10) + (lBuffer[11] - '0');
-	lTime.tm_min = ((lBuffer[8] - '0') * 10) + (lBuffer[9] - '0');
-	lTime.tm_hour = ((lBuffer[6] - '0') * 10) + (lBuffer[7] - '0');
-	lTime.tm_mday = ((lBuffer[4] - '0') * 10) + (lBuffer[5] - '0');
-	lTime.tm_mon = (((lBuffer[2] - '0') * 10) + (lBuffer[3] - '0')) - 1;
-	lTime.tm_year = ((lBuffer[0] - '0') * 10) + (lBuffer[1] - '0');
-	if ( lTime.tm_year < 50 )
-		lTime.tm_year += 100; // RFC 2459
+	lTime.tm_sec = ((lBuffer[12] - '0') * 10) + (lBuffer[13] - '0');
+	lTime.tm_min = ((lBuffer[10] - '0') * 10) + (lBuffer[11] - '0');
+	lTime.tm_hour = ((lBuffer[8] - '0') * 10) + (lBuffer[9] - '0');
+	lTime.tm_mday = ((lBuffer[6] - '0') * 10) + (lBuffer[7] - '0');
+	lTime.tm_mon = (((lBuffer[4] - '0') * 10) + (lBuffer[5] - '0')) - 1;
+	lTime.tm_year = (lBuffer[0] - '0') * 1000 + (lBuffer[1] - '0') * 100 + ((lBuffer[2] - '0') * 10) + (lBuffer[3] - '0');
+	if ( lTime.tm_year > 1900)
+		lTime.tm_year -= 1900;
 	lTime.tm_wday = 0;
 	lTime.tm_yday = 0;
@@ -604,7 +662,7 @@ double file_analysis::X509::GetTimeFromAsn1(const ASN1_TIME* atime)
 	if ( lResult )
 		{
-		if ( 0 != lTime.tm_isdst )
+		if ( lTime.tm_isdst != 0 )
 			lResult -= 3600; // mktime may adjust for DST (OS dependent)
 		lResult += lSecondsFromUTC;
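The UTCTime handling above hinges on the RFC 2459 two-digit-year pivot: values below 50 mean 20YY, the rest 19YY. A small Python sketch (illustrative only, not the Bro code) of the same parsing for the common `YYMMDDHHMMSSZ` form:

```python
from datetime import datetime, timezone

def parse_utctime(s):
    """Parse an ASN.1 UTCTime like '151214160541Z' (YYMMDDHHMMSSZ)."""
    if not s.endswith("Z"):
        raise ValueError("expected Zulu time (RFC 2459 4.1.2.5.1)")
    yy = int(s[0:2])
    # Same pivot GetTimeFromAsn1() applies: < 50 -> 20YY, otherwise 19YY.
    century = "20" if yy < 50 else "19"
    return datetime.strptime(century + s[:-1], "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
```

This also makes clear why the commit widens `lBuffer`: the buffer now holds a four-digit year, so every subsequent field index shifts by two, which is exactly the change in the `tm_sec`/`tm_min`/... lines above.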

View file

@@ -29,10 +29,13 @@ public:
 	 *
 	 * @param cert_val The certificate to converts.
 	 *
+	 * @param fid A file ID associated with the certificate, if any
+	 *            (primarily for error reporting).
+	 *
 	 * @param Returns the new record value and passes ownership to
 	 * caller.
 	 */
-	static RecordVal* ParseCertificate(X509Val* cert_val);
+	static RecordVal* ParseCertificate(X509Val* cert_val, const char* fid = 0);
 	static file_analysis::Analyzer* Instantiate(RecordVal* args, File* file)
 		{ return new X509(args, file); }
@@ -59,7 +62,7 @@ private:
 	std::string cert_data;
 	// Helpers for ParseCertificate.
-	static double GetTimeFromAsn1(const ASN1_TIME * atime);
+	static double GetTimeFromAsn1(const ASN1_TIME * atime, const char* fid);
 	static StringVal* KeyCurve(EVP_PKEY *key);
 	static unsigned int KeyLength(EVP_PKEY *key);
 };

View file

@@ -302,8 +302,10 @@ bool Raw::OpenInput()
 	if ( offset )
 		{
-		int whence = (offset > 0) ? SEEK_SET : SEEK_END;
-		if ( fseek(file, offset, whence) < 0 )
+		int whence = (offset >= 0) ? SEEK_SET : SEEK_END;
+		int64_t pos = (offset >= 0) ? offset : offset + 1; // we want -1 to be the end of the file
+		if ( fseek(file, pos, whence) < 0 )
 			{
 			char buf[256];
 			strerror_r(errno, buf, sizeof(buf));
@@ -395,8 +397,6 @@ bool Raw::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fie
 		{
 		string offset_s = it->second;
 		offset = strtoll(offset_s.c_str(), 0, 10);
-		if ( offset < 0 )
-			offset++; // we want -1 to be the end of the file
 		}
 	else if ( it != info.config.end() )
 		{
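The offset convention this hunk moves into `OpenInput()` can be modeled in a few lines of Python (an illustrative sketch, not the reader code): non-negative offsets seek from the start of the file, negative ones from the end, with `-1` meaning "end of file" — hence the `+ 1` adjustment before seeking.

```python
import io
import os

def seek_with_signed_offset(f, offset):
    """offset >= 0 seeks from the start; negative offsets count back from
    the end, with -1 meaning end-of-file (hence the +1 adjustment)."""
    if offset >= 0:
        f.seek(offset, os.SEEK_SET)
    else:
        f.seek(offset + 1, os.SEEK_END)
```

For a 10-byte file, an offset of `-1` lands at position 10 (nothing left to read) and `-3` lands at position 8, so the last two bytes remain.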

View file

@@ -17,8 +17,6 @@ set(iosource_SRCS
 	PktSrc.cc
 )
-bif_target(pcap.bif)
-
 bro_add_subdir_library(iosource ${iosource_SRCS})
 add_dependencies(bro_iosource generate_outputs)

View file

@@ -47,6 +47,12 @@ void Packet::Init(int arg_link_type, struct timeval *arg_ts, uint32 arg_caplen,
 	l2_valid = false;
+	if ( data && cap_len < hdr_size )
+		{
+		Weird("truncated_link_header");
+		return;
+		}
+
 	if ( data )
 		ProcessLayer2();
 	}
@@ -94,6 +100,7 @@ void Packet::ProcessLayer2()
 	bool have_mpls = false;
 	const u_char* pdata = data;
+	const u_char* end_of_data = data + cap_len;
 	switch ( link_type ) {
 	case DLT_NULL:
@@ -140,6 +147,12 @@ void Packet::ProcessLayer2()
 		// 802.1q / 802.1ad
 		case 0x8100:
 		case 0x9100:
+			if ( pdata + 4 >= end_of_data )
+				{
+				Weird("truncated_link_header");
+				return;
+				}
+
 			vlan = ((pdata[0] << 8) + pdata[1]) & 0xfff;
 			protocol = ((pdata[2] << 8) + pdata[3]);
 			pdata += 4; // Skip the vlan header
@@ -154,6 +167,12 @@ void Packet::ProcessLayer2()
 			// Check for double-tagged (802.1ad)
 			if ( protocol == 0x8100 || protocol == 0x9100 )
 				{
+				if ( pdata + 4 >= end_of_data )
+					{
+					Weird("truncated_link_header");
+					return;
+					}
+
 				inner_vlan = ((pdata[0] << 8) + pdata[1]) & 0xfff;
 				protocol = ((pdata[2] << 8) + pdata[3]);
 				pdata += 4; // Skip the vlan header
@@ -164,6 +183,12 @@ void Packet::ProcessLayer2()
 		// PPPoE carried over the ethernet frame.
 		case 0x8864:
+			if ( pdata + 8 >= end_of_data )
+				{
+				Weird("truncated_link_header");
+				return;
+				}
+
 			protocol = (pdata[6] << 8) + pdata[7];
 			pdata += 8; // Skip the PPPoE session and PPP header
@@ -230,6 +255,12 @@ void Packet::ProcessLayer2()
 		{
 		// Assume we're pointing at IP. Just figure out which version.
 		pdata += GetLinkHeaderSize(link_type);
+		if ( pdata + sizeof(struct ip) >= end_of_data )
+			{
+			Weird("truncated_link_header");
+			return;
+			}
+
 		const struct ip* ip = (const struct ip *)pdata;
 		if ( ip->ip_v == 4 )
@@ -254,18 +285,18 @@ void Packet::ProcessLayer2()
 		while ( ! end_of_stack )
 			{
-			end_of_stack = *(pdata + 2) & 0x01;
-			pdata += 4;
-			if ( pdata >= pdata + cap_len )
+			if ( pdata + 4 >= end_of_data )
 				{
-				Weird("no_mpls_payload");
+				Weird("truncated_link_header");
 				return;
 				}
+
+			end_of_stack = *(pdata + 2) & 0x01;
+			pdata += 4;
 			}
 		// We assume that what remains is IP
-		if ( pdata + sizeof(struct ip) >= data + cap_len )
+		if ( pdata + sizeof(struct ip) >= end_of_data )
 			{
 			Weird("no_ip_in_mpls_payload");
 			return;
@@ -288,13 +319,14 @@ void Packet::ProcessLayer2()
 	else if ( encap_hdr_size )
 		{
 		// Blanket encapsulation. We assume that what remains is IP.
-		pdata += encap_hdr_size;
-		if ( pdata + sizeof(struct ip) >= data + cap_len )
+		if ( pdata + encap_hdr_size + sizeof(struct ip) >= end_of_data )
 			{
 			Weird("no_ip_left_after_encap");
 			return;
 			}
+
+		pdata += encap_hdr_size;
 		const struct ip* ip = (const struct ip *)pdata;
 		if ( ip->ip_v == 4 )
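The 802.1Q handling these hunks guard can be sketched in Python (illustration only, not the Bro code): a VLAN tag is four bytes, the low 12 bits of the first two bytes are the VLAN ID, and the next two bytes are the encapsulated EtherType; with fewer than four bytes left, the parse bails out as in the `truncated_link_header` case above.

```python
def parse_vlan_tag(pdata):
    """Return (vlan_id, inner_ethertype, remainder) for an 802.1Q tag,
    or None if fewer than 4 bytes remain (the truncated-header case)."""
    if len(pdata) < 4:
        return None
    vlan = ((pdata[0] << 8) | pdata[1]) & 0x0FFF   # mask off PCP/DEI bits
    proto = (pdata[2] << 8) | pdata[3]             # encapsulated EtherType
    return vlan, proto, pdata[4:]
```

Double-tagged (802.1ad) frames are handled by simply calling this again when the inner EtherType is itself 0x8100 or 0x9100, mirroring the second bounds check in the diff.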

View file

@@ -11,6 +11,8 @@
 #include "Net.h"
 #include "Sessions.h"
+#include "pcap/const.bif.h"
+
 using namespace iosource;
 PktSrc::Properties::Properties()
@@ -34,9 +36,7 @@ PktSrc::PktSrc()
 PktSrc::~PktSrc()
 	{
-	BPF_Program* code;
-	IterCookie* cookie = filters.InitForIteration();
-	while ( (code = filters.NextEntry(cookie)) )
+	for ( auto code : filters )
 		delete code;
 	}
@@ -66,11 +66,6 @@ bool PktSrc::IsError() const
 	return ErrorMsg();
 	}
-int PktSrc::SnapLen() const
-	{
-	return snaplen; // That's a global. Change?
-	}
-
 bool PktSrc::IsLive() const
 	{
 	return props.is_live;
@@ -112,7 +107,7 @@ void PktSrc::Opened(const Properties& arg_props)
 		}
 	if ( props.is_live )
-		Info(fmt("listening on %s, capture length %d bytes\n", props.path.c_str(), SnapLen()));
+		Info(fmt("listening on %s\n", props.path.c_str()));
 	DBG_LOG(DBG_PKTIO, "Opened source %s", props.path.c_str());
 	}
@@ -325,7 +320,7 @@ bool PktSrc::PrecompileBPFFilter(int index, const std::string& filter)
 	// Compile filter.
 	BPF_Program* code = new BPF_Program();
-	if ( ! code->Compile(SnapLen(), LinkType(), filter.c_str(), Netmask(), errbuf, sizeof(errbuf)) )
+	if ( ! code->Compile(BifConst::Pcap::snaplen, LinkType(), filter.c_str(), Netmask(), errbuf, sizeof(errbuf)) )
 		{
 		string msg = fmt("cannot compile BPF filter \"%s\"", filter.c_str());
@@ -338,16 +333,16 @@ bool PktSrc::PrecompileBPFFilter(int index, const std::string& filter)
 		return 0;
 		}
-	// Store it in hash.
-	HashKey* hash = new HashKey(HashKey(bro_int_t(index)));
-	BPF_Program* oldcode = filters.Lookup(hash);
-	if ( oldcode )
-		delete oldcode;
-	filters.Insert(hash, code);
-	delete hash;
-	return 1;
+	// Store it in vector.
+	if ( index >= static_cast<int>(filters.size()) )
+		filters.resize(index + 1);
+
+	if ( auto old = filters[index] )
+		delete old;
+
+	filters[index] = code;
+	return true;
 	}
 BPF_Program* PktSrc::GetBPFFilter(int index)
@@ -355,10 +350,7 @@ BPF_Program* PktSrc::GetBPFFilter(int index)
 	if ( index < 0 )
 		return 0;
-	HashKey* hash = new HashKey(HashKey(bro_int_t(index)));
-	BPF_Program* code = filters.Lookup(hash);
-	delete hash;
-	return code;
+	return (static_cast<int>(filters.size()) > index ? filters[index] : 0);
 	}
 bool PktSrc::ApplyBPFFilter(int index, const struct pcap_pkthdr *hdr, const u_char *pkt)

View file

@@ -3,6 +3,8 @@
 #ifndef IOSOURCE_PKTSRC_PKTSRC_H
 #define IOSOURCE_PKTSRC_PKTSRC_H
+#include <vector>
+
 #include "IOSource.h"
 #include "BPF_Program.h"
 #include "Dict.h"
@@ -95,11 +97,6 @@ public:
 	 */
 	int HdrSize() const;
-	/**
-	 * Returns the snap length for this source.
-	 */
-	int SnapLen() const;
-
 	/**
 	 * In pseudo-realtime mode, returns the logical timestamp of the
 	 * current packet. Undefined if not running pseudo-realtime mode.
@@ -367,7 +364,7 @@ private:
 	Packet current_packet;
 	// For BPF filtering support.
-	PDict(BPF_Program) filters;
+	std::vector<BPF_Program *> filters;
 	// Only set in pseudo-realtime mode.
 	double first_timestamp;

View file

@@ -5,4 +5,6 @@ include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DI
 bro_plugin_begin(Bro Pcap)
 bro_plugin_cc(Source.cc Dumper.cc Plugin.cc)
+bif_target(functions.bif)
+bif_target(const.bif)
 bro_plugin_end()

View file

@@ -7,6 +7,8 @@
 #include "../PktSrc.h"
 #include "../../Net.h"
+#include "const.bif.h"
+
 using namespace iosource::pcap;
 PcapDumper::PcapDumper(const std::string& path, bool arg_append)
@@ -25,7 +27,8 @@ void PcapDumper::Open()
 	{
 	int linktype = -1;
-	pd = pcap_open_dead(DLT_EN10MB, snaplen);
+	pd = pcap_open_dead(DLT_EN10MB, BifConst::Pcap::snaplen);
 	if ( ! pd )
 		{
 		Error("error for pcap_open_dead");

View file

@@ -7,6 +7,8 @@
 #include "Source.h"
 #include "iosource/Packet.h"
+#include "const.bif.h"
+
 #ifdef HAVE_PCAP_INT_H
 #include <pcap-int.h>
 #endif
@@ -84,32 +86,64 @@ void PcapSource::OpenLive()
 	props.netmask = PktSrc::NETMASK_UNKNOWN;
 #endif
-	// We use the smallest time-out possible to return almost immediately if
-	// no packets are available. (We can't use set_nonblocking() as it's
-	// broken on FreeBSD: even when select() indicates that we can read
-	// something, we may get nothing if the store buffer hasn't filled up
-	// yet.)
-	pd = pcap_open_live(props.path.c_str(), SnapLen(), 1, 1, tmp_errbuf);
+	pd = pcap_create(props.path.c_str(), errbuf);
 	if ( ! pd )
 		{
-		Error(tmp_errbuf);
+		PcapError("pcap_create");
 		return;
 		}
-	// ### This needs autoconf'ing.
-#ifdef HAVE_PCAP_INT_H
-	Info(fmt("pcap bufsize = %d\n", ((struct pcap *) pd)->bufsize));
-#endif
+	if ( pcap_set_snaplen(pd, BifConst::Pcap::snaplen) )
+		{
+		PcapError("pcap_set_snaplen");
+		return;
+		}
+
+	if ( pcap_set_promisc(pd, 1) )
+		{
+		PcapError("pcap_set_promisc");
+		return;
+		}
+
+	// We use the smallest time-out possible to return almost immediately
+	// if no packets are available. (We can't use set_nonblocking() as
+	// it's broken on FreeBSD: even when select() indicates that we can
+	// read something, we may get nothing if the store buffer hasn't
+	// filled up yet.)
+	//
+	// TODO: The comment about FreeBSD is pretty old and may not apply
+	// anymore these days.
+	if ( pcap_set_timeout(pd, 1) )
+		{
+		PcapError("pcap_set_timeout");
+		return;
+		}
+
+	if ( pcap_set_buffer_size(pd, BifConst::Pcap::bufsize * 1024 * 1024) )
+		{
+		PcapError("pcap_set_buffer_size");
+		return;
+		}
+
+	if ( pcap_activate(pd) )
+		{
+		PcapError("pcap_activate");
+		return;
+		}
+
 #ifdef HAVE_LINUX
 	if ( pcap_setnonblock(pd, 1, tmp_errbuf) < 0 )
 		{
-		PcapError();
+		PcapError("pcap_setnonblock");
 		return;
 		}
 #endif
+
+#ifdef HAVE_PCAP_INT_H
+	Info(fmt("pcap bufsize = %d\n", ((struct pcap *) pd)->bufsize));
+#endif
+
 	props.selectable_fd = pcap_fileno(pd);
 	SetHdrSize();
@@ -257,12 +291,17 @@ void PcapSource::Statistics(Stats* s)
 	s->dropped = 0;
 	}
-void PcapSource::PcapError()
+void PcapSource::PcapError(const char* where)
 	{
+	string location;
+	if ( where )
+		location = fmt(" (%s)", where);
+
 	if ( pd )
-		Error(fmt("pcap_error: %s", pcap_geterr(pd)));
+		Error(fmt("pcap_error: %s%s", pcap_geterr(pd), location.c_str()));
 	else
-		Error("pcap_error: not open");
+		Error(fmt("pcap_error: not open%s", location.c_str()));
 	Close();
 	}

View file

@@ -28,7 +28,7 @@ protected:
 private:
 	void OpenLive();
 	void OpenOffline();
-	void PcapError();
+	void PcapError(const char* where = 0);
 	void SetHdrSize();
 	Properties props;

View file

@@ -0,0 +1,4 @@
const Pcap::snaplen: count;
const Pcap::bufsize: count;

View file

@@ -1,4 +1,6 @@
+module Pcap;
+
## Precompiles a PCAP filter and binds it to a given identifier.
##
## id: The PCAP identifier to reference the filter *s* later on.
@@ -19,6 +21,15 @@
## pcap_error
function precompile_pcap_filter%(id: PcapFilterID, s: string%): bool
	%{
+	if ( id->AsEnum() >= 100 )
+		{
+		// We use a vector as the underlying data structure for fast
+		// lookups and limit the ID space so that it doesn't grow too
+		// large.
+		builtin_error(fmt("PCAP filter ids must remain below 100 (is %ld)", id->AsInt()));
+		return new Val(false, TYPE_BOOL);
+		}
+
	bool success = true;

	const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
@@ -86,7 +97,7 @@ function install_pcap_filter%(id: PcapFilterID%): bool
## install_dst_net_filter
## uninstall_dst_addr_filter
## uninstall_dst_net_filter
-function pcap_error%(%): string
+function error%(%): string
	%{
	const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());


@@ -121,7 +121,6 @@ char* command_line_policy = 0;
vector<string> params;
set<string> requested_plugins;
char* proc_status_file = 0;
-int snaplen = 0;	// this gets set from the scripting-layer's value

OpaqueType* md5_type = 0;
OpaqueType* sha1_type = 0;
@@ -764,9 +763,6 @@ int main(int argc, char** argv)
	// DEBUG_MSG("HMAC key: %s\n", md5_digest_print(shared_hmac_md5_key));
	init_hash_function();

-	// Must come after hash initialization.
-	binpac::init();
-
	ERR_load_crypto_strings();
	OPENSSL_add_all_algorithms_conf();
	SSL_library_init();
@@ -866,6 +862,10 @@ int main(int argc, char** argv)
	if ( events_file )
		event_player = new EventPlayer(events_file);

+	// Must come after plugin activation (and also after hash
+	// initialization).
+	binpac::init();
+
	init_event_handlers();

	md5_type = new OpaqueType("md5");
@@ -993,8 +993,6 @@ int main(int argc, char** argv)
		}
	}

-	snaplen = internal_val("snaplen")->AsCount();
-
	if ( dns_type != DNS_PRIME )
		net_init(interfaces, read_files, writefile, do_watchdog);


@@ -216,7 +216,13 @@ function join_string_vec%(vec: string_vec, sep: string%): string
		if ( i > 0 )
			d.Add(sep->CheckString(), 0);

-		v->Lookup(i)->Describe(&d);
+		Val* e = v->Lookup(i);
+
+		// If the element is empty, skip it.
+		if ( ! e )
+			continue;
+
+		e->Describe(&d);
		}

	BroString* s = new BroString(1, d.TakeBytes(), d.Len());


@@ -35,7 +35,12 @@ bool JSON::Describe(ODesc* desc, int num_fields, const Field* const * fields,
		const u_char* bytes = desc->Bytes();
		int len = desc->Len();

-		if ( i > 0 && len > 0 && bytes[len-1] != ',' && vals[i]->present )
+		if ( i > 0 &&
+		     len > 0 &&
+		     bytes[len-1] != ',' &&
+		     bytes[len-1] != '{' &&
+		     bytes[len-1] != '[' &&
+		     vals[i]->present )
			desc->AddRaw(",");

		if ( ! Describe(desc, vals[i], fields[i]->name) )


@@ -4,3 +4,11 @@ bro
bro
bro
bro
+bro
+bro
+bro
+bro
+bro
+bro
+bro
+bro


@@ -0,0 +1,12 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#open 2015-08-31-03-09-20
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1254722767.875996 CjhGID4nQcgTWjvg4c 10.10.1.4 1470 74.53.140.153 25 base64_illegal_encoding incomplete base64 group, padding with 12 bits of 0 F bro
1437831787.861602 CPbrpk1qSsw6ESzHV4 192.168.133.100 49648 192.168.133.102 25 base64_illegal_encoding incomplete base64 group, padding with 12 bits of 0 F bro
1437831799.610433 C7XEbhP654jzLoe3a 192.168.133.100 49655 17.167.150.73 443 base64_illegal_encoding incomplete base64 group, padding with 12 bits of 0 F bro
#close 2015-08-31-03-09-20


@@ -1,5 +1,9 @@
YnJv
YnJv
+YnJv
+}n-v
+YnJv
+YnJv
}n-v
cGFkZGluZw==
cGFkZGluZzE=


@@ -4,3 +4,4 @@ mytest
this__is__another__test
thisisanothertest
Test
+...hi..there


@@ -23,10 +23,10 @@ net_weird, truncated_IP
net_weird, truncated_IP
net_weird, truncated_IP
net_weird, truncated_IP
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfOOOOOOOOOOOOOOOOOOOOOOOOOOOO, 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfqkrodjdmrqfpiodgphidfliidlhd rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfOOOOOOOOOOOOOOOOOOOOOOOOOOOO, 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfqkrodjdmrqfpiodgphidfliidlhd, A
-rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], dgphrodofqhq, orgmmpelofil
+rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], dgphrodofqhq, orgmmpelofil, A
-rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], lenhfdqhqfgs, dfpqssidkpdg
+rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], lenhfdqhqfgs, dfpqssidkpdg, A
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfOOOOOOOOOOOOOOOOOOOOOOOOOOOO, 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfqkrodjdmrqfpiodgphidfliislrr rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfOOOOOOOOOOOOOOOOOOOOOOOOOOOO, 
nlkmlpjfjjnoomfnqmdqgrdsgpefslhjrdjghsshrmosrkosidknnieiggpmnggelfhlkflfqojpjrsmeqghklmjlkdskjollmensjiqosemknoehellhlsspjfjpddfgqkemghskqosrksmkpsdomfoghllfokilshsisgpjhjoosidirlnmespjhdogdidoemejrnjjrookfrmiqllllqhlqfgolfqssfjrhrjhgfkpdnigiilrmnespjspeqjfedjhrkisjdhoofqdfeqnmihrelmildkngirkqorjslhmglripdojfedjjngjnpikoliqhdipgpshenekqiphmrsqmemghklodqnqoeggfkdqngrfollhjmddjreeghdqflohgrhqhelqsmdghgihpifpnikrddpmdfejhrhgfdfdlepmmhlhrnrslepqgmkopmdfogpoljeepqoemisfeksdeddiplnkfjddjioqhojlnmlirehidipdhqlddssssgpgikieeldsmfrkidpldsngdkidkoshkrofnonrrehghlmgmqshkedgpkpgjjkoneigsfjdlgjsngepfkndqoefqmsssrgegspromqepdpdeglmmegjljlmljeeorhhfmrohjeregpfshqjsqkekrihjdpfdjflgspepqjrqfemsjffmjfkhejdkrokmgdrhojgmgjpldjeiphroeheipolfmshoglkfnllfnhlflhlpddjflekhiqilefjpfqepdrrdokkjiekmelkhdpjlqjdlnfjemqdrksirdnjlrhrdijgqjhdqlidpfdisgrmnlfnsdlishlpfkshhglpdiqhpgmhpjdrpednjljfsqknsiqpfeqhlphgqdphflglpmqfkkhdjeodkelinkfpmfedidhphldmqjqggrljlhriehqqemeimkjhoqnsrdgengmgjokpeiijgrseppeoiflngggomdfjkndpqedhgnkiqlodkpjfkqoifidjmrdhhmglledkomllhpehdfjfdspmklkjdnhkdgpgqephfdfdrfplmepoegsekmrnikknelnprdpslmfkhghhooknieksjjhdeelidikndedijqqhfmphdondndpehmfoqelqigdpgioeljhedhfoeqlinriemqjigerkphgepqmiiidqlhriqioimpglonlsgomeloipndiihqqfiekkeriokrsjlmsjqiehqsrqkhdjlddjrrllirqkidqiggdrjpjirssgqepnqmhigfsqlekiqdddllnsjmroiofkieqnghddpjnhdjkfloilheljofddrkherkrieeoijrlfghiikmhpfdhekdjloejlmpperkgrhomedpfqkrodjdmrqfpiodgphidfliislrr, A
-rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], iokgedlsdkjkiefgmeqkfjoh, ggdeolssksemrhedoledddml
+rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], iokgedlsdkjkiefgmeqkfjoh, ggdeolssksemrhedoledddml, A
net_weird, truncated_IP
rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO HTTP/1.1\x0d\x0aHost: 127.0.0.1\x0d\x0aContent-Type: text/xml\x0d\x0aContent-length: 1\x0d\x0a\x0d\x0aO<?xml version="1.0"?>\x0d\x0a<g:searchrequest xmlns:g=, OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO HTTP/1.1\x0d\x0aHost: 127.0.0.1\x0d\x0aContent-Type: text/xml\x0d\x0aContent-length: 1\x0d\x0a\x0d\x0aO<?xml version="1.0"?igplqgeqsonkllfshdjplhjspmde rexmit_inconsistency, [orig_h=63.193.213.194, orig_p=2564/tcp, resp_h=128.3.97.175, resp_p=80/tcp], OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO HTTP/1.1\x0d\x0aHost: 127.0.0.1\x0d\x0aContent-Type: text/xml\x0d\x0aContent-length: 1\x0d\x0a\x0d\x0aO<?xml version="1.0"?>\x0d\x0a<g:searchrequest xmlns:g=, OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO HTTP/1.1\x0d\x0aHost: 127.0.0.1\x0d\x0aContent-Type: text/xml\x0d\x0aContent-length: 1\x0d\x0a\x0d\x0aO<?xml version="1.0"?igplqgeqsonkllfshdjplhjspmde, AP


@@ -2,3 +2,4 @@
1429652006.683290 c: [orig_h=178.200.100.200, orig_p=39976/tcp, resp_h=96.126.98.124, resp_p=80/tcp]
1429652006.683290 t1: HTTP/1.1 200 OK\x0d\x0aContent-Length: 5\x0d\x0a\x0d\x0aBANG!
1429652006.683290 t2: HTTP/1.1 200 OK\x0d\x0aServer: nginx/1.4.4\x0d\x0aDate:
+1429652006.683290 tcp_flags: AP


@@ -0,0 +1,23 @@
1103139821.635001, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139821.833528, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139821.841126, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.039902, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.040151, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.040254, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.040878, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.240529, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.240632, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.247627, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.450278, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.450381, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.453253, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.65178, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.651883, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.652756, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.882264, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.933982, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.934084, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.934209, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139822.934214, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139823.145731, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]
1103139823.145958, [orig_h=128.3.26.249, orig_p=25/tcp, resp_h=201.186.157.67, resp_p=60827/tcp]


@@ -3,38 +3,48 @@
#empty_field (empty)
#unset_field -
#path weird
-#open 2012-04-11-16-01-35
+#open 2015-08-31-21-35-27
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334160095.895421 - - - - - truncated_IP - F bro
-#close 2012-04-11-16-01-35
+#close 2015-08-31-21-35-27
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
-#open 2012-04-11-14-57-21
+#open 2015-08-31-21-35-27
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334156241.519125 - - - - - truncated_IP - F bro
-#close 2012-04-11-14-57-21
+#close 2015-08-31-21-35-27
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
-#open 2012-04-10-21-50-48
+#open 2015-08-31-21-35-28
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334094648.590126 - - - - - truncated_IP - F bro
-#close 2012-04-10-21-50-48
+#close 2015-08-31-21-35-28
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
-#open 2012-05-29-22-02-34
+#open 2015-08-31-21-35-30
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1338328954.078361 - - - - - internally_truncated_header - F bro
-#close 2012-05-29-22-02-34
+#close 2015-08-31-21-35-30
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path weird
+#open 2015-08-31-21-35-30
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
+#types time string addr port addr port string string bool string
+0.000000 - - - - - truncated_link_header - F bro
+#close 2015-08-31-21-35-30


@@ -0,0 +1,16 @@
----- x509_certificate ----
serial: 03E8
not_valid_before: 2015-09-01-13:33:37.000000000 (epoch: 1441114417.0)
not_valid_after : 2025-09-01-13:33:37.000000000 (epoch: 1756733617.0)
----- x509_certificate ----
serial: 99FAA8037A4EB2FAEF84EB5E55D5B8C8
not_valid_before: 2011-05-04-00:00:00.000000000 (epoch: 1304467200.0)
not_valid_after : 2016-07-04-23:59:59.000000000 (epoch: 1467676799.0)
----- x509_certificate ----
serial: 1690C329B6780607511F05B0344846CB
not_valid_before: 2010-04-16-00:00:00.000000000 (epoch: 1271376000.0)
not_valid_after : 2020-05-30-10:48:38.000000000 (epoch: 1590835718.0)
----- x509_certificate ----
serial: 01
not_valid_before: 2000-05-30-10:48:38.000000000 (epoch: 959683718.0)
not_valid_after : 2020-05-30-10:48:38.000000000 (epoch: 1590835718.0)


@@ -3,7 +3,7 @@
#empty_field (empty)
#unset_field -
#path loaded_scripts
-#open 2015-04-21-22-29-19
+#open 2015-08-31-04-50-43
#fields name
#types string
scripts/base/init-bare.bro
@@ -46,7 +46,7 @@ scripts/base/init-bare.bro
scripts/base/frameworks/files/magic/__load__.bro
build/scripts/base/bif/__load__.bro
build/scripts/base/bif/broxygen.bif.bro
-build/scripts/base/bif/pcap.bif.bro
+build/scripts/base/bif/functions.bif.bro
build/scripts/base/bif/bloom-filter.bif.bro
build/scripts/base/bif/cardinality-counter.bif.bro
build/scripts/base/bif/top-k.bif.bro
@@ -128,4 +128,4 @@ scripts/base/init-bare.bro
build/scripts/base/bif/plugins/Bro_SQLiteWriter.sqlite.bif.bro
scripts/policy/misc/loaded-scripts.bro
scripts/base/utils/paths.bro
-#close 2015-04-21-22-29-19
+#close 2015-08-31-04-50-43


@@ -3,7 +3,7 @@
#empty_field (empty)
#unset_field -
#path loaded_scripts
-#open 2015-04-21-22-29-27
+#open 2015-08-31-05-07-15
#fields name
#types string
scripts/base/init-bare.bro
@@ -46,7 +46,7 @@ scripts/base/init-bare.bro
scripts/base/frameworks/files/magic/__load__.bro
build/scripts/base/bif/__load__.bro
build/scripts/base/bif/broxygen.bif.bro
-build/scripts/base/bif/pcap.bif.bro
+build/scripts/base/bif/functions.bif.bro
build/scripts/base/bif/bloom-filter.bif.bro
build/scripts/base/bif/cardinality-counter.bif.bro
build/scripts/base/bif/top-k.bif.bro
@@ -273,4 +273,4 @@ scripts/base/init-default.bro
scripts/base/misc/find-checksum-offloading.bro
scripts/base/misc/find-filtered-trace.bro
scripts/policy/misc/loaded-scripts.bro
-#close 2015-04-21-22-29-27
+#close 2015-08-31-05-07-15


@@ -2,7 +2,6 @@
connecting-connector.bro
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";

Some files were not shown because too many files have changed in this diff.