Merge remote-tracking branch 'origin/master' into topic/bernhard/sqlite

Conflicts:
	scripts/base/frameworks/logging/__load__.bro
	src/CMakeLists.txt
	src/logging.bif
	src/types.bif
Bernhard Amann 2012-07-25 15:04:23 -07:00
commit da157c8ded
296 changed files with 4703 additions and 2175 deletions

CHANGES (197 changed lines)

@@ -1,4 +1,201 @@
+2.0-871 | 2012-07-25 13:08:00 -0700
+
+  * Fix complaint from valgrind about uninitialized memory usage. (Jon
+    Siwek)
+
+  * Fix differing log filters of streams from writing to same
+    writer/path (which now produces a warning, but is otherwise
+    skipped for the second). Addresses #842. (Jon Siwek)
+
+  * Fix tests and error message for to_double BIF. (Daniel Thayer)
+
+  * Compile fix. (Robin Sommer)
+
+2.0-866 | 2012-07-24 16:02:07 -0700
+
+  * Correct a typo in usage message. (Daniel Thayer)
+
+  * Fix file permissions of log files (which were created with execute
+    permissions after a recent change). (Daniel Thayer)
+
+2.0-862 | 2012-07-24 15:22:52 -0700
+
+  * Fix initialization problem in logging class. (Jon Siwek)
+
+  * Input framework now accepts escaped ASCII values as input (\x##),
+    and unescapes appropriately. (Bernhard Amann)
+
+  * Make reading ASCII logfiles work when the input separator is
+    different from \t. (Bernhard Amann)
+
+  * A number of smaller fixes for the input framework. (Bernhard Amann)
+
+2.0-851 | 2012-07-24 15:04:14 -0700
+
+  * New built-in function to_double(s: string). (Scott Campbell)
+
+2.0-849 | 2012-07-24 11:06:16 -0700
+
+  * Adding missing include needed on some systems. (Robin Sommer)
+
+2.0-846 | 2012-07-23 16:36:37 -0700
+
+  * Fix WriterBackend::WriterInfo serialization, reenable ascii
+    start/end tags. (Jon Siwek)
+
+2.0-844 | 2012-07-23 16:20:59 -0700
+
+  * Reworking parts of the internal threading/logging/input APIs for
+    thread-safety. (Robin Sommer)
+
+  * Bugfix for SSL version check. (Bernhard Amann)
+
+  * Changing an HTTP DPD from port 3138 to 3128. Addresses #857. (Robin
+    Sommer)
+
+  * ElasticSearch logging writer. See logging-elasticsearch.rst for
+    more information. (Vlad Grigorescu and Seth Hall)
+
+  * Give configure a --disable-perftools option to disable Perftools
+    support even if found. (Robin Sommer)
+
+  * The ASCII log writer now includes "#start <timestamp>" and "#end
+    <timestamp>" lines in each file. (Robin Sommer)
+
+  * Renamed ASCII logger "header" options to "meta". (Robin Sommer)
+
+  * ASCII logs now escape '#' at the beginning of log lines. Addresses
+    #763. (Robin Sommer)
+
+  * Fix bug where in dns.log rcode was always set to 0/NOERROR when
+    no reply packet was seen. (Bernhard Amann)
+
+  * Updating to Mozilla's current certificate bundle. (Seth Hall)
+
+2.0-769 | 2012-07-13 16:17:33 -0700
+
+  * Fix some Info:Record field documentation. (Vlad Grigorescu)
+
+  * Fix overrides of TCP_ApplicationAnalyzer::EndpointEOF. (Jon Siwek)
+
+  * Fix segfault when incrementing whole vector values. Also removed
+    the RefExpr::Eval(Val*) method since it was never called. (Jon Siwek)
+
+  * Remove baselines for some leak-detecting unit tests. (Jon Siwek)
+
+  * Unblock SIGFPE, SIGILL, SIGSEGV and SIGBUS for threads, so that
+    they now propagate to the main thread. Addresses #848. (Bernhard
+    Amann)
+
+2.0-761 | 2012-07-12 08:14:38 -0700
+
+  * Some small fixes to further reduce SOCKS false positive logs. (Seth Hall)
+
+  * Calls to pthread_mutex_unlock now log the reason for failures.
+    (Bernhard Amann)
+
+2.0-757 | 2012-07-11 08:30:19 -0700
+
+  * Fixing memory leak. (Seth Hall)
+
+2.0-755 | 2012-07-10 16:25:16 -0700
+
+  * Add sorting canonifier to rotate-custom unit test. Addresses #846.
+    (Jon Siwek)
+
+  * Fix many compiler warnings. (Daniel Thayer)
+
+  * Fix segfault when there's an error/timeout resolving DNS requests.
+    Addresses #846. (Jon Siwek)
+
+  * Remove a non-portable test case. (Daniel Thayer)
+
+  * Fix typos in input framework doc. (Daniel Thayer)
+
+  * Fix typos in DataSeries documentation. (Daniel Thayer)
+
+  * Bugfix making custom rotate functions work again. (Robin Sommer)
+
+  * Tiny bugfix for returning writer name. (Robin Sommer)
+
+  * Moving make target update-doc-sources from top-level Makefile to
+    btest Makefile. (Robin Sommer)
+
+2.0-733 | 2012-07-02 15:31:24 -0700
+
+  * Extending the input reader DoInit() API. (Bernhard Amann) It now
+    provides an Info struct similar to what we introduced for log
+    writers, including a corresponding "config" key/value table.
+
+  * Fix to make writer-info work when debugging is enabled. (Bernhard
+    Amann)
+
+2.0-726 | 2012-07-02 15:19:15 -0700
+
+  * Extending the log writer DoInit() API. (Robin Sommer)
+
+    We now pass in an Info struct that contains:
+
+      - the path name (as before)
+      - the rotation interval
+      - the log_rotate_base_time in seconds
+      - a table of key/value pairs with further configuration options
+
+    To fill the table, log filters have a new field "config:
+    table[string] of string". This gives a way to pass arbitrary
+    values from script-land to writers. Interpretation is left up to
+    the writer.
+
+  * Split calc_next_rotate() into two functions, one of which is
+    thread-safe and can be used with the log_rotate_base_time value
+    from DoInit(). (Robin Sommer)
+
+  * Updates to the None writer. (Robin Sommer)
+
+      - It gets its own script writers/none.bro.
+
+      - New bool option LogNone::debug to enable debug output. It then
+        prints out all the values passed to DoInit().
+
+      - Fixed a bug that prevented Bro from terminating.
+
+2.0-723 | 2012-07-02 15:02:56 -0700
+
+  * Extract ICMPv6 NDP options and include them in ICMP events. This
+    adds a new parameter of type "icmp6_nd_options" to the ICMPv6
+    neighbor discovery events. Addresses #833. (Jon Siwek)
+
+  * Set the input frontend type before starting the thread. This means
+    that the thread type will be output correctly in the error
+    message. (Bernhard Amann)
+
+2.0-719 | 2012-07-02 14:49:03 -0700
+
+  * Fix inconsistencies in random number generation. The
+    srand()/rand() interface was being intermixed with the
+    srandom()/random() one. The latter is now used throughout. (Jon
+    Siwek)
+
+  * Changed the srand() and rand() BIFs to work deterministically if
+    Bro was given a seed file. Addresses #825. (Jon Siwek)
+
+  * Updating input framework unit tests to make them more reliable and
+    execute quicker. (Jon Siwek)
+
+  * Fixed race condition in writer and reader initializations. (Jon
+    Siwek)
+
+  * Small tweak to make a test complete quicker. (Jon Siwek)
+
+  * Drain events before terminating log/thread managers. (Jon Siwek)
+
+  * Fix strict-aliasing warning in RemoteSerializer.cc. Addresses
+    #834. (Jon Siwek)
+
+  * Fix typos in event documentation. (Daniel Thayer)
+
+  * Fix typos in NEWS for Bro 2.1 beta. (Daniel Thayer)
+
 2.0-709 | 2012-06-21 10:14:24 -0700
 
   * Fix exceptions thrown in event handlers preventing others from running. (Jon Siwek)
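
As an illustrative aside, the log-filter "config" field described in the
2.0-726 entry above might be used along these lines. This is a sketch only:
the filter name and the "compression" key are hypothetical, since the
CHANGES entry says interpretation of the table is left entirely to each
writer.

```bro
# Sketch: pass writer-specific options through the new log-filter
# "config" field (table[string] of string). The "compression" key is
# a made-up example; each writer interprets the table itself.
event bro_init()
    {
    local f: Log::Filter = [$name="conn-tuned",
                            $config=table(["compression"] = "gzip")];
    Log::add_filter(Conn::LOG, f);
    }
```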


@@ -91,7 +91,9 @@ endif ()
 set(USE_PERFTOOLS false)
 set(USE_PERFTOOLS_DEBUG false)
 
-find_package(GooglePerftools)
+if (NOT DISABLE_PERFTOOLS)
+    find_package(GooglePerftools)
+endif ()
 
 if (GOOGLEPERFTOOLS_FOUND)
     include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR})
@@ -122,6 +124,17 @@ if (LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
     list(APPEND OPTLIBS ${LibXML2_LIBRARIES})
 endif()
 
+set(USE_ELASTICSEARCH false)
+set(USE_CURL false)
+find_package(CURL)
+if (CURL_FOUND)
+    set(USE_ELASTICSEARCH true)
+    set(USE_CURL true)
+    include_directories(BEFORE ${CURL_INCLUDE_DIR})
+    list(APPEND OPTLIBS ${CURL_LIBRARIES})
+endif()
+
 if (ENABLE_PERFTOOLS_DEBUG)
     # Just a no op to prevent CMake from complaining about manually-specified
     # ENABLE_PERFTOOLS_DEBUG not being used if google perftools weren't found
@@ -213,7 +226,10 @@ message(
     "\nGeoIP: ${USE_GEOIP}"
     "\nGoogle perftools: ${USE_PERFTOOLS}"
     "\n        debugging: ${USE_PERFTOOLS_DEBUG}"
+    "\ncURL: ${USE_CURL}"
+    "\n"
     "\nDataSeries: ${USE_DATASERIES}"
+    "\nElasticSearch: ${USE_ELASTICSEARCH}"
     "\n"
     "\n================================================================\n"
 )


@@ -41,9 +41,6 @@ broxygen: configured
 broxygenclean: configured
 	$(MAKE) -C $(BUILD) $@
 
-update-doc-sources:
-	./doc/scripts/genDocSourcesList.sh ./doc/scripts/DocSourcesList.cmake
-
 dist:
 	@rm -rf $(VERSION_FULL) $(VERSION_FULL).tgz
 	@rm -rf $(VERSION_MIN) $(VERSION_MIN).tgz

NEWS (61 changed lines)

@@ -3,8 +3,9 @@ Release Notes
 =============
 
 This document summarizes the most important changes in the current Bro
-release. For a complete list of changes, see the ``CHANGES`` file.
+release. For a complete list of changes, see the ``CHANGES`` file
+(note that submodules, such as BroControl and Broccoli, come with
+their own CHANGES.)
 
 Bro 2.1 Beta
 ------------
@@ -38,14 +39,14 @@ New Functionality
 
 - Bro now decapsulates tunnels via its new tunnel framework located in
   scripts/base/frameworks/tunnels. It currently supports Teredo,
   AYIYA, IP-in-IP (both IPv4 and IPv6), and SOCKS. For all these, it
-  logs the outher tunnel connections in both conn.log and tunnel.log,
+  logs the outer tunnel connections in both conn.log and tunnel.log,
   and then proceeds to analyze the inner payload as if it were not
   tunneled, including also logging that session in conn.log. For
   SOCKS, it generates a new socks.log in addition with more
   information.
 
 - Bro now features a flexible input framework that allows users to
-  integrate external information in real-time into Bro while it
+  integrate external information in real-time into Bro while it's
   processing network traffic. The most direct use-case at the moment
   is reading data from ASCII files into Bro tables, with updates
   picked up automatically when the file changes during runtime. See
@@ -55,18 +56,44 @@ New Functionality
   "reader plugins" that make it easy to interface to different data
   sources. We will add more in the future.
 
+- BroControl now has built-in support for host-based load-balancing
+  when using either PF_RING, Myricom cards, or individual interfaces.
+  Instead of adding a separate worker entry in node.cfg for each Bro
+  worker process on each worker host, it is now possible to just
+  specify the number of worker processes on each host and BroControl
+  configures everything correctly (including any necessary environment
+  variables for the balancers).
+
+  This change adds three new keywords to the node.cfg file (to be used
+  with worker entries): lb_procs (specifies number of workers on a
+  host), lb_method (specifies what type of load balancing to use:
+  pf_ring, myricom, or interfaces), and lb_interfaces (used only with
+  "lb_method=interfaces" to specify which interfaces to load-balance
+  on).
+
 - Bro's default ASCII log format is not exactly the most efficient way
-  for storing and searching large volumes of data. An an alternative,
-  Bro nows comes with experimental support for DataSeries output, an
-  efficient binary format for recording structured bulk data.
-  DataSeries is developed and maintained at HP Labs. See
-  doc/logging-dataseries for more information.
+  for storing and searching large volumes of data. As alternatives,
+  Bro now comes with experimental support for two alternative output
+  formats:
+
+  * DataSeries: an efficient binary format for recording structured
+    bulk data. DataSeries is developed and maintained at HP Labs.
+    See doc/logging-dataseries for more information.
+
+  * ElasticSearch: a distributed, RESTful storage and search engine
+    built on top of Apache Lucene. It scales very well, both for
+    distributed indexing and distributed searching.
+
+  Note that at this point, we consider Bro's support for these two
+  formats as prototypes for collecting experience with alternative
+  outputs. We do not yet recommend them for production (but welcome
+  feedback!)
 
 Changed Functionality
 ~~~~~~~~~~~~~~~~~~~~~
 
-The following summarized the most important differences in existing
+The following summarizes the most important differences in existing
 functionality. Note that this list is not complete, see CHANGES for
 the full set.
@@ -100,7 +127,7 @@ the full set.
   a bunch of Bro threads.
 
 - We renamed the configure option --enable-perftools to
-  --enable-perftool-debug to indicate that the switch is only relevant
+  --enable-perftools-debug to indicate that the switch is only relevant
   for debugging the heap.
 
 - Bro's ICMP analyzer now handles both IPv4 and IPv6 messages with a
@@ -110,8 +137,8 @@ the full set.
 - Log postprocessor scripts get an additional argument indicating the
   type of the log writer in use (e.g., "ascii").
 
-- BroControl's make-archive-name scripts also receives the writer
-  type, but as it's 2nd(!) argument. If you're using a custom version
+- BroControl's make-archive-name script also receives the writer
+  type, but as its 2nd(!) argument. If you're using a custom version
   of that script, you need to adapt it. See the shipped version for
   details.
@@ -124,6 +151,14 @@ the full set.
   Bro now supports decapsulating tunnels directly for protocols it
   understands.
 
+- ASCII logs now record the time when they were opened/closed at the
+  beginning and end of the file, respectively. The options
+  LogAscii::header_prefix and LogAscii::include_header have been
+  renamed to LogAscii::meta_prefix and LogAscii::include_meta,
+  respectively.
+
+- The ASCII writer's "header_*" options have been renamed to "meta_*"
+  (because there's now also a footer).
+
 Bro 2.0
 -------
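
As an illustrative aside, the BroControl load-balancing keywords described
in the NEWS entry above might appear in a node.cfg worker entry along these
lines. The host name, interface, and process count are hypothetical; only
the lb_method/lb_procs keywords come from the text.

```ini
# Hypothetical node.cfg worker entry using the new keywords.
[worker-1]
type=worker
host=10.0.0.50
interface=eth0
lb_method=pf_ring
lb_procs=4
```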


@@ -1 +1 @@
-2.0-709
+2.0-871

@@ -1 +1 @@
-Subproject commit 6f43a8115d8e6483a50957c5d21c5d69270ab3aa
+Subproject commit 4f01ea40817ad232a96535c64fce7dc16d4e2fff

@@ -1 +1 @@
-Subproject commit c6391412e902e896836450ab98910309b2ca2d9b
+Subproject commit c691c01e9cefae5a79bcd4b0f84ca387c8c587a7

@@ -1 +1 @@
-Subproject commit f1b0a395ab32388d8375ab72ec263b6029833f96
+Subproject commit 8234b8903cbc775f341bdb6a1c0159981d88d27b

@@ -1 +1 @@
-Subproject commit 880f3e48d33bb28d17184656f858a4a0e2e1574c
+Subproject commit 231358f166f61cc32201a8ac3671ea0c0f5c324e

@@ -1 +1 @@
-Subproject commit 585645371256e8ec028cabae24c5f4a2108546d2
+Subproject commit 44441a6c912c7c9f8d4771e042306ec5f44e461d


@@ -114,9 +114,15 @@
 /* Analyze Mobile IPv6 traffic */
 #cmakedefine ENABLE_MOBILE_IPV6
 
+/* Use libCurl. */
+#cmakedefine USE_CURL
+
 /* Use the DataSeries writer. */
 #cmakedefine USE_DATASERIES
 
+/* Use the ElasticSearch writer. */
+#cmakedefine USE_ELASTICSEARCH
+
 /* Version number of package */
 #define VERSION "@VERSION@"

configure (5 changed lines)

@@ -33,6 +33,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
     --disable-broccoli don't build or install the Broccoli library
     --disable-broctl don't install Broctl
     --disable-auxtools don't build or install auxiliary tools
+    --disable-perftools don't try to build with Google Perftools
     --disable-python don't try to build python bindings for broccoli
     --disable-ruby don't try to build ruby bindings for broccoli
@@ -105,6 +106,7 @@ append_cache_entry INSTALL_BROCCOLI BOOL true
 append_cache_entry INSTALL_BROCTL BOOL true
 append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING
 append_cache_entry ENABLE_MOBILE_IPV6 BOOL false
+append_cache_entry DISABLE_PERFTOOLS BOOL false
 
 # parse arguments
 while [ $# -ne 0 ]; do
@@ -156,6 +158,9 @@ while [ $# -ne 0 ]; do
         --disable-auxtools)
             append_cache_entry INSTALL_AUX_TOOLS BOOL false
             ;;
+        --disable-perftools)
+            append_cache_entry DISABLE_PERFTOOLS BOOL true
+            ;;
         --disable-python)
             append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
             ;;


@@ -4,19 +4,13 @@ Loading Data into Bro with the Input Framework
 
 .. rst-class:: opening
 
-    Bro now features a flexible input frameworks that allows users
+    Bro now features a flexible input framework that allows users
     to import data into Bro. Data is either read into Bro tables or
     converted to events which can then be handled by scripts.
 
-    The input framework is merged into the git master and we
-    will give a short summary on how to use it.
-    The input framework is automatically compiled and installed
-    together with Bro. The interface to it is exposed via the
-    scripting layer.
-
-    This document gives the most common examples. For more complex
-    scenarios it is worthwhile to take a look at the unit tests in
-    ``testing/btest/scripts/base/frameworks/input/``.
+    This document gives an overview of how to use the input framework
+    with some examples. For more complex scenarios it is
+    worthwhile to take a look at the unit tests in
+    ``testing/btest/scripts/base/frameworks/input/``.
 
 .. contents::
@@ -66,11 +60,12 @@ The two records are defined as:
         reason: string;
     };
 
-ote that the names of the fields in the record definitions have to correspond to
-the column names listed in the '#fields' line of the log file, in this case 'ip',
-'timestamp', and 'reason'.
+Note that the names of the fields in the record definitions have to correspond
+to the column names listed in the '#fields' line of the log file, in this
+case 'ip', 'timestamp', and 'reason'.
 
-The log file is read into the table with a simple call of the add_table function:
+The log file is read into the table with a simple call of the ``add_table``
+function:
 
 .. code:: bro
@@ -80,7 +75,7 @@ The log file is read into the table with a simple call of the add_table function
 
     Input::remove("blacklist");
 
 With these three lines we first create an empty table that should contain the
-blacklist data and then instruct the Input framework to open an input stream
+blacklist data and then instruct the input framework to open an input stream
 named ``blacklist`` to read the data into the table. The third line removes the
 input stream again, because we do not need it any more after the data has been
 read.
@@ -91,20 +86,20 @@ This thread opens the input data file, converts the data into a Bro format and
 sends it back to the main Bro thread.
 
 Because of this, the data is not immediately accessible. Depending on the
-size of the data source it might take from a few milliseconds up to a few seconds
-until all data is present in the table. Please note that this means that when Bro
-is running without an input source or on very short captured files, it might terminate
-before the data is present in the system (because Bro already handled all packets
-before the import thread finished).
+size of the data source it might take from a few milliseconds up to a few
+seconds until all data is present in the table. Please note that this means
+that when Bro is running without an input source or on very short captured
+files, it might terminate before the data is present in the system (because
+Bro already handled all packets before the import thread finished).
 
-Subsequent calls to an input source are queued until the previous action has been
-completed. Because of this, it is, for example, possible to call ``add_table`` and
-``remove`` in two subsequent lines: the ``remove`` action will remain queued until
-the first read has been completed.
+Subsequent calls to an input source are queued until the previous action has
+been completed. Because of this, it is, for example, possible to call
+``add_table`` and ``remove`` in two subsequent lines: the ``remove`` action
+will remain queued until the first read has been completed.
 
-Once the input framework finishes reading from a data source, it fires the ``update_finished``
-event. Once this event has been received all data from the input file is available
-in the table.
+Once the input framework finishes reading from a data source, it fires
+the ``update_finished`` event. Once this event has been received all data
+from the input file is available in the table.
 
 .. code:: bro
@@ -113,10 +108,10 @@ in the table.
         print blacklist;
         }
 
-The table can also already be used while the data is still being read - it just might
-not contain all lines in the input file when the event has not yet fired. After it has
-been populated it can be used like any other Bro table and blacklist entries easily be
-tested:
+The table can also already be used while the data is still being read - it
+just might not contain all lines in the input file when the event has not
+yet fired. After it has been populated it can be used like any other Bro
+table and blacklist entries can easily be tested:
 
 .. code:: bro
@@ -128,13 +123,14 @@ Re-reading and streaming data
 -----------------------------
 
 For many data sources, like for many blacklists, the source data is continually
-changing. For this cases, the Bro input framework supports several ways to
+changing. For these cases, the Bro input framework supports several ways to
 deal with changing data files.
 
-The first, very basic method is an explicit refresh of an input stream. When an input
-stream is open, the function ``force_update`` can be called. This will trigger
-a complete refresh of the table; any changed elements from the file will be updated.
-After the update is finished the ``update_finished`` event will be raised.
+The first, very basic method is an explicit refresh of an input stream. When
+an input stream is open, the function ``force_update`` can be called. This
+will trigger a complete refresh of the table; any changed elements from the
+file will be updated. After the update is finished the ``update_finished``
+event will be raised.
 
 In our example the call would look like:
@@ -142,25 +138,26 @@ In our example the call would look like:
 
     Input::force_update("blacklist");
 
-The input framework also supports two automatic refresh mode. The first mode
+The input framework also supports two automatic refresh modes. The first mode
 continually checks if a file has been changed. If the file has been changed, it
-is re-read and the data in the Bro table is updated to reflect the current state.
-Each time a change has been detected and all the new data has been read into the
-table, the ``update_finished`` event is raised.
+is re-read and the data in the Bro table is updated to reflect the current
+state. Each time a change has been detected and all the new data has been
+read into the table, the ``update_finished`` event is raised.
 
-The second mode is a streaming mode. This mode assumes that the source data file
-is an append-only file to which new data is continually appended. Bro continually
-checks for new data at the end of the file and will add the new data to the table.
-If newer lines in the file have the same index as previous lines, they will overwrite
-the values in the output table.
-Because of the nature of streaming reads (data is continually added to the table),
+The second mode is a streaming mode. This mode assumes that the source data
+file is an append-only file to which new data is continually appended. Bro
+continually checks for new data at the end of the file and will add the new
+data to the table. If newer lines in the file have the same index as previous
+lines, they will overwrite the values in the output table. Because of the
+nature of streaming reads (data is continually added to the table),
 the ``update_finished`` event is never raised when using streaming reads.
 
-The reading mode can be selected by setting the ``mode`` option of the add_table call.
-Valid values are ``MANUAL`` (the default), ``REREAD`` and ``STREAM``.
+The reading mode can be selected by setting the ``mode`` option of the
+add_table call. Valid values are ``MANUAL`` (the default), ``REREAD``
+and ``STREAM``.
 
-Hence, when using adding ``$mode=Input::REREAD`` to the previous example, the blacklists
-table will always reflect the state of the blacklist input file.
+Hence, when adding ``$mode=Input::REREAD`` to the previous example, the
+blacklist table will always reflect the state of the blacklist input file.
 
 .. code:: bro
@@ -169,11 +166,11 @@ table will always reflect the state of the blacklist input file.
 Receiving change events
 -----------------------
 
-When re-reading files, it might be interesting to know exactly which lines in the source
-files have changed.
+When re-reading files, it might be interesting to know exactly which lines in
+the source files have changed.
 
-For this reason, the input framework can raise an event each time when a data item is added to,
-removed from or changed in a table.
+For this reason, the input framework can raise an event each time when a data
+item is added to, removed from or changed in a table.
 
 The event definition looks like this:
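
(The definition itself falls outside the excerpted hunks. As a hedged sketch
only, a handler for such an event might look like the following; the
``Input::TableDescription`` and ``Input::Event`` type names are assumptions
inferred from the surrounding text, and ``Idx``/``Val`` are the record types
defined earlier for the blacklist example.)

```bro
# Sketch of a change-event handler for the blacklist stream. Type
# names in the signature are assumptions, not confirmed by this diff.
event entry(description: Input::TableDescription, tpe: Input::Event,
            left: Idx, right: Val)
    {
    if ( tpe == Input::EVENT_NEW )
        print fmt("new blacklist entry for %s: %s", left$ip, right$reason);
    }
```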
@ -189,34 +186,42 @@ The event has to be specified in ``$ev`` in the ``add_table`` call:
Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]); Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]);
The ``description`` field of the event contains the arguments that were originally supplied to the add_table call. The ``description`` field of the event contains the arguments that were
Hence, the name of the stream can, for example, be accessed with ``description$name``. ``tpe`` is an enum containing originally supplied to the add_table call. Hence, the name of the stream can,
the type of the change that occurred. for example, be accessed with ``description$name``. ``tpe`` is an enum
containing the type of the change that occurred.
It will contain ``Input::EVENT_NEW``, when a line that was not previously been If a line that was not previously present in the table has been added,
present in the table has been added. In this case ``left`` contains the Index of the added table entry and ``right`` contains then ``tpe`` will contain ``Input::EVENT_NEW``. In this case ``left`` contains
the values of the added entry. the index of the added table entry and ``right`` contains the values of the
added entry.
If a table entry that already was present is altered during the re-reading or streaming read of a file, ``tpe`` will contain If a table entry that already was present is altered during the re-reading or
``Input::EVENT_CHANGED``. In this case ``left`` contains the Index of the changed table entry and ``right`` contains the streaming read of a file, ``tpe`` will contain ``Input::EVENT_CHANGED``. In
values of the entry before the change. The reason for this is, that the table already has been updated when the event is this case ``left`` contains the index of the changed table entry and ``right``
raised. The current value in the table can be ascertained by looking up the current table value. Hence it is possible to compare contains the values of the entry before the change. The reason for this is
the new and the old value of the table. that the table already has been updated when the event is raised. The current
value in the table can be ascertained by looking up the current table value.
Hence it is possible to compare the new and the old values of the table.
``tpe`` contains ``Input::REMOVED``, when a table element is removed because it was no longer present during a re-read. If a table element is removed because it was no longer present during a
In this case ``left`` contains the index and ``right`` the values of the removed element. re-read, then ``tpe`` will contain ``Input::REMOVED``. In this case ``left``
contains the index and ``right`` the values of the removed element.
Filtering data during import Filtering data during import
---------------------------- ----------------------------
The input framework also allows a user to filter the data during the import. To this end, predicate functions are used. A predicate The input framework also allows a user to filter the data during the import.
function is called before a new element is added/changed/removed from a table. The predicate can either accept or veto To this end, predicate functions are used. A predicate function is called
the change by returning true for an accepted change and false for an rejected change. Furthermore, it can alter the data before a new element is added/changed/removed from a table. The predicate
can either accept or veto the change by returning true for an accepted
change and false for a rejected change. Furthermore, it can alter the data
before it is written to the table. before it is written to the table.
The following example filter will reject to add entries to the table when they were generated over a month ago. It The following example filter will reject to add entries to the table when
will accept all changes and all removals of values that are already present in the table. they were generated over a month ago. It will accept all changes and all
removals of values that are already present in the table.
.. code:: bro
   return ( ( current_time() - right$timestamp ) < (30 day) );
   }]);
To change elements while they are being imported, the predicate function can
manipulate ``left`` and ``right``. Note that predicate functions are called
before the change is committed to the table. Hence, when a table element is
changed (``tpe`` is ``Input::EVENT_CHANGED``), ``left`` and ``right``
contain the new values, but the destination (``blacklist`` in our example)
still contains the old values. This allows predicate functions to examine
the changes between the old and the new version before deciding if they
should be allowed.
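A predicate can also rewrite entries in place. As a hedged sketch (reusing
the ``Idx``, ``Val`` and ``blacklist`` names of the earlier example, and
assuming ``Val`` has a string field ``reason``):

.. code:: bro

   Input::add_table([$source="blacklist.file", $name="blacklist",
                     $idx=Idx, $val=Val, $destination=blacklist,
                     $pred(typ: Input::Event, left: Idx, right: Val) = {
                     # Normalize the (assumed) reason field before the entry
                     # is committed to the table.
                     right$reason = to_lower(right$reason);
                     return T;
                     }]);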
Different readers
-----------------
The input framework supports different kinds of readers for different kinds
of source data files. At the moment, the default reader reads ASCII files
formatted in the Bro log file format (tab-separated values). Besides this,
Bro comes with two other readers. The ``RAW`` reader reads a file that is
split by a specified record separator (usually newline). The contents are
returned line-by-line as strings; it can, for example, be used to read
configuration files and the like and is probably only useful in the event
mode and not for reading data to tables.
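A minimal sketch of using the ``RAW`` reader in event mode could look like
the following (the file name, event name and record type are made up for
illustration):

.. code:: bro

   type OneLine: record {
       s: string;
   };

   event config_line(description: Input::EventDescription,
                     tpe: Input::Event, s: string)
       {
       print fmt("read line: %s", s);
       }

   event bro_init()
       {
       # The RAW reader returns each line of the file as a single string.
       Input::add_event([$source="/etc/mydaemon.conf", $name="config",
                         $reader=Input::READER_RAW, $fields=OneLine,
                         $ev=config_line]);
       }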
Another included reader is the ``BENCHMARK`` reader, which is used to
optimize the speed of the input framework. It can generate arbitrary
amounts of semi-random data in all Bro data types supported by the input
framework.
In the future, the input framework will get support for new data sources
like, for example, different databases.
Add_table options
-----------------
This section lists all possible options that can be used for the add_table
function and gives a short explanation of their use. Most of the options
have already been discussed in the previous sections.
The possible fields that can be set for a table stream are:
``source``
    A mandatory string identifying the source of the data.
    to manipulate it further.
``idx``
    Record type that defines the index of the table.

``val``
    Record type that defines the values of the table.

``reader``
    The reader used for this stream. Default is ``READER_ASCII``.
``mode``
    The mode in which the stream is opened. Possible values are
    ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
    ``MANUAL`` means that the file is not updated after it has
    been read. Changes to the file will not be reflected in the
    data Bro knows. ``REREAD`` means that the whole file is read
    again each time a change is found. This should be used for
    files that are mapped to a table where individual lines can
    change. ``STREAM`` means that the data from the file is
    streamed. Events / table entries will be generated as new
    data is appended to the file.
``destination``
    The destination table.
``ev``
    Optional event that is raised when values are added to,
    changed in, or deleted from the table. Events are passed an
    Input::Event description as the first argument, the index
    record as the second argument and the values as the third
    argument.
``pred``
    Optional predicate that can prevent entries from being added
    to the table and events from being sent.
``want_record``
    Boolean value that defines if the event wants to receive the
    fields inside of a single record value, or individually
    (default). This can be used if ``val`` is a record
    containing only one type. In this case, if ``want_record`` is
    set to false, the table will contain elements of the type
    contained in ``val``.
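Putting several of these options together, a minimal ``add_table`` sketch
(assuming the ``Idx``, ``Val`` and ``blacklist`` definitions from the
earlier example) might be:

.. code:: bro

   event bro_init()
       {
       # Re-read the whole file whenever it changes on disk.
       Input::add_table([$source="blacklist.file", $name="blacklist",
                         $idx=Idx, $val=Val, $destination=blacklist,
                         $mode=Input::REREAD]);
       }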
Reading Data to Events
======================
The second supported mode of the input framework is reading data to Bro
events instead of reading them to a table using event streams.
Event streams work very similarly to table streams that were already
discussed in much detail. To read the blacklist of the previous example
into an event stream, the following Bro code could be used:
.. code:: bro

   }
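A hedged sketch of such an event stream declaration (the record fields are
assumed from the earlier blacklist example) is:

.. code:: bro

   type Val: record {
       ip: addr;
       timestamp: time;
       reason: string;
   };

   event blacklist_entry(description: Input::EventDescription,
                         tpe: Input::Event,
                         ip: addr, timestamp: time, reason: string)
       {
       print fmt("blacklist entry: %s, %s", ip, reason);
       }

   event bro_init()
       {
       Input::add_event([$source="blacklist.file", $name="blacklist",
                         $fields=Val, $ev=blacklist_entry]);
       }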
The main difference in the declaration of the event stream is that an event
stream needs no separate index and value declarations -- instead, all source
data types are provided in a single record definition.
Apart from this, event streams work exactly the same as table streams and
support most of the options that are also supported for table streams.
The options that can be set when creating an event stream with
``add_event`` are:
``source``
    A mandatory string identifying the source of the data.
    to remove it.
``fields``
    Name of a record type containing the fields that should be
    retrieved from the input stream.
``ev``
    The event which is fired after a line has been read from the
    input source. The first argument that is passed to the event
    is an Input::Event structure, followed by the data, either
    inside of a record (if ``want_record`` is set) or as
    individual fields. The Input::Event structure can contain
    information on whether the received line is ``NEW``, has been
    ``CHANGED`` or ``DELETED``. Since the ASCII reader cannot
    track this information for event filters, the value is
    always ``NEW`` at the moment.
``mode``
    The mode in which the stream is opened. Possible values are
    ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
    ``MANUAL`` means that the file is not updated after it has
    been read. Changes to the file will not be reflected in the
    data Bro knows. ``REREAD`` means that the whole file is read
    again each time a change is found. This should be used for
    files that are mapped to a table where individual lines can
    change. ``STREAM`` means that the data from the file is
    streamed. Events / table entries will be generated as new
    data is appended to the file.
``reader``
    The reader used for this stream. Default is ``READER_ASCII``.
``want_record``
    Boolean value that defines if the event wants to receive the
    fields inside of a single record value, or individually
    (default). If this is set to true, the event will receive a
    single record of the type provided in ``fields``.
To use DataSeries, its libraries must be available at compile-time,
along with the supporting *Lintel* package. Generally, both are
distributed on `HP Labs' web site
<http://tesla.hpl.hp.com/opensource/>`_. Currently, however, you need
to use recent development versions for both packages, which you can
download from github like this::
    git clone http://github.com/dataseries/Lintel
tools, which its installation process installs into ``<prefix>/bin``.
For example, to convert a file back into an ASCII representation::

    $ ds2txt conn.log
    [... We skip a bunch of metadata here ...]
    ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes
    1300475167.096535 CRCC5OdDlXe 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0
    1300475167.097012 o7XBsfvo3U1 fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0
    1300475168.854837 k6T92WxgNAh 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211
    [...]

(``--skip-all`` suppresses the metadata.)
Note that the ASCII conversion is *not* equivalent to Bro's default
output format.
You can also switch only individual files over to DataSeries by adding
code like this to your ``local.bro``:

.. code:: bro
Working with DataSeries
=======================

Here are a few examples of using DataSeries command line tools to work
with the output files.

* Printing CSV::
* Calculate some statistics:

  Mean/stddev/min/max over a column::

      $ dsstatgroupby '*' basic duration from conn.ds
      # Begin DSStatGroupByModule
  Quantiles of total connection volume::

      $ dsstatgroupby '*' quantile 'orig_bytes + resp_bytes' from conn.ds
      [...]
      2159 data points, mean 24616 +- 343295 [0,1.26615e+07]
      quantiles about every 216 data points:
      tails: 90%: 1469, 95%: 7302, 99%: 242629, 99.5%: 1226262
      [...]
The ``man`` pages for these tools show further options, and their
``-h`` option gives some more information (unfortunately, either can
be a bit cryptic).
Deficiencies
------------
Due to limitations of the DataSeries format, one cannot inspect its
files before they have been fully written. In other words, when using
DataSeries, it's currently not possible to inspect the live log
files inside the spool directory before they are rotated to their
final location. It seems that this could be fixed with some effort,
and we will work with the DataSeries development team on that if the
=========================================
Indexed Logging Output with ElasticSearch
=========================================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for searching large volumes of data. ElasticSearch
is a new data storage technology for dealing with tons of data.
It's also a search engine built on top of Apache's Lucene
project. It scales very well, both for distributed indexing and
distributed searching.
.. contents::
Warning
-------
This writer plugin is still in testing and is not yet recommended for
production use! The approach to how logs are handled in the plugin is "fire
and forget" at this time; there is no error handling if the server fails to
respond successfully to the insertion request.
Installing ElasticSearch
------------------------
Download the latest version from: <http://www.elasticsearch.org/download/>.
Once extracted, start ElasticSearch with::
    # ./bin/elasticsearch
For more detailed information, refer to the ElasticSearch installation
documentation: http://www.elasticsearch.org/guide/reference/setup/installation.html
Compiling Bro with ElasticSearch Support
----------------------------------------
First, ensure that you have libcurl installed, then run configure::
    # ./configure
    [...]
    ====================| Bro Build Summary |=====================
    [...]
    cURL:              true
    [...]
    ElasticSearch:     true
    [...]
    ================================================================
Activating ElasticSearch
------------------------
The easiest way to enable ElasticSearch output is to load the
tuning/logs-to-elasticsearch.bro script. If you are using BroControl, the
following line in local.bro will enable it.
.. console::

    @load tuning/logs-to-elasticsearch
With that, Bro will now write most of its logs into ElasticSearch in addition
to maintaining the ASCII logs like it would do by default. That script has
some tunable options for choosing which logs to send to ElasticSearch; refer
to the autogenerated script documentation for those options.
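For instance, assuming the writer exports the ``server_host`` and
``server_port`` options, a ``local.bro`` could point Bro at a remote
ElasticSearch node (the host name below is a placeholder):

.. code:: bro

   @load tuning/logs-to-elasticsearch

   # Hypothetical host; both options are redef-able constants of the writer.
   redef LogElasticSearch::server_host = "es.example.com";
   redef LogElasticSearch::server_port = 9200;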
There is an interface named Brownian being written specifically to integrate
with the data that Bro outputs into ElasticSearch. It can be found here::

    https://github.com/grigorescu/Brownian
Tuning
------
A common problem encountered with ElasticSearch is too many files being held
open. The ElasticSearch website has some suggestions on how to increase the
open file limit.
- http://www.elasticsearch.org/tutorials/2011/04/06/too-many-open-files.html
TODO
----
Lots.
- Perform multicast discovery for server.
- Better error detection.
- Better defaults (don't index loaded-plugins, for instance).
Other Writers
-------------

Bro supports the following output formats other than ASCII:

.. toctree::
   :maxdepth: 1
rest_target(${psd} base/frameworks/logging/postprocessors/scp.bro)
rest_target(${psd} base/frameworks/logging/postprocessors/sftp.bro)
rest_target(${psd} base/frameworks/logging/writers/ascii.bro)
rest_target(${psd} base/frameworks/logging/writers/dataseries.bro)
rest_target(${psd} base/frameworks/logging/writers/elasticsearch.bro)
rest_target(${psd} base/frameworks/logging/writers/none.bro)
rest_target(${psd} base/frameworks/metrics/cluster.bro)
rest_target(${psd} base/frameworks/metrics/main.bro)
rest_target(${psd} base/frameworks/metrics/non-cluster.bro)
rest_target(${psd} policy/protocols/ssl/known-certs.bro)
rest_target(${psd} policy/protocols/ssl/validate-certs.bro)
rest_target(${psd} policy/tuning/defaults/packet-fragments.bro)
rest_target(${psd} policy/tuning/defaults/warnings.bro)
rest_target(${psd} policy/tuning/logs-to-elasticsearch.bro)
rest_target(${psd} policy/tuning/track-all-assets.bro)
rest_target(${psd} site/local-manager.bro)
rest_target(${psd} site/local-proxy.bro)
type Info: record {
## The network time at which a communication event occurred.
ts: time &log;
## The peer name (if any) with which a communication event is concerned.
peer: string &log &optional;
## Where the communication event message originated from, that is,
## either from the scripting layer or inside the Bro process.
pred: function(typ: Input::Event, left: any, right: any): bool &optional;

## A key/value table that will be passed on to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
ev: any;

## A key/value table that will be passed on to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
@load ./writers/ascii
@load ./writers/dataseries
@load ./writers/sqlite
@load ./writers/elasticsearch
@load ./writers/none
## default comes out of :bro:id:`Log::default_rotation_postprocessors`.
postprocessor: function(info: RotationInfo) : bool &optional;

## A key/value table that will be passed on to the writer.
## Interpretation of the values is left to the writer, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
## into files. This is primarily for debugging purposes.
const output_to_stdout = F &redef;

## If true, include lines with log meta information such as column names
## with types, the values of ASCII logging options that are in use, and
## the time when the file was opened and closed (the latter at the end).
const include_meta = T &redef;

## Prefix for lines with meta information.
const meta_prefix = "#" &redef;

## Separator between fields.
const separator = "\t" &redef;
##! Log writer for sending logs to an ElasticSearch server.
##!
##! Note: This module is in testing and is not yet considered stable!
##!
##! There is one known memory issue. If your ElasticSearch server is
##! running slowly and taking too long to return from bulk insert
##! requests, the message queue to the writer thread will continue
##! growing larger and larger, giving the appearance of a memory leak.
module LogElasticSearch;
export {
## Name of the ES cluster
const cluster_name = "elasticsearch" &redef;
## ES Server
const server_host = "127.0.0.1" &redef;
## ES Port
const server_port = 9200 &redef;
## Name of the ES index
const index_prefix = "bro" &redef;
## The ES type prefix comes before the name of the related log.
## e.g. prefix = "bro_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout.
## This is not working!
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before
## they are sent to be bulk indexed.
const max_batch_size = 1000 &redef;
## The maximum amount of wall-clock time that is allowed to pass without
## finishing a bulk log send. This represents the maximum delay you
## would like to have with your logs before they are sent to ElasticSearch.
const max_batch_interval = 1min &redef;
## The maximum byte size for a buffered JSON string to send to the bulk
## insert API.
const max_byte_size = 1024 * 1024 &redef;
}
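# Usage sketch (not part of this script): a site's local.bro can tune the
# batching behavior via the options exported above, e.g.:
#
#   redef LogElasticSearch::max_batch_size = 5000;
#   redef LogElasticSearch::max_batch_interval = 5min;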
module LogNone;

export {
## If true, emit debugging output that can be useful for unit
## testing the logging framework.
const debug = F &redef;
}
type icmp_context: record {
DF: bool; ##< True if the packet's *don't fragment* flag is set.
};
## Values extracted from a Prefix Information option in an ICMPv6 neighbor
## discovery message as specified by :rfc:`4861`.
##
## .. bro:see:: icmp6_nd_option
type icmp6_nd_prefix_info: record {
## Number of leading bits of the *prefix* that are valid.
prefix_len: count;
## Flag indicating the prefix can be used for on-link determination.
L_flag: bool;
## Autonomous address-configuration flag.
A_flag: bool;
## Length of time in seconds that the prefix is valid for the purpose of
## on-link determination (0xffffffff represents infinity).
valid_lifetime: interval;
## Length of time in seconds that the addresses generated from the prefix
## via stateless address autoconfiguration remain preferred
## (0xffffffff represents infinity).
preferred_lifetime: interval;
## An IP address or prefix of an IP address. Use the *prefix_len* field
## to convert this into a :bro:type:`subnet`.
prefix: addr;
};
## Options extracted from ICMPv6 neighbor discovery messages as specified
## by :rfc:`4861`.
##
## .. bro:see:: icmp_router_solicitation icmp_router_advertisement
## icmp_neighbor_advertisement icmp_neighbor_solicitation icmp_redirect
## icmp6_nd_options
type icmp6_nd_option: record {
## 8-bit identifier of the type of option.
otype: count;
## 8-bit integer representing the length of the option (including the type
## and length fields) in units of 8 octets.
len: count;
## Source Link-Layer Address (Type 1) or Target Link-Layer Address (Type 2).
## Byte ordering of this is dependent on the actual link-layer.
link_address: string &optional;
## Prefix Information (Type 3).
prefix: icmp6_nd_prefix_info &optional;
## Redirected header (Type 4). This field contains the context of the
## original, redirected packet.
redirect: icmp_context &optional;
## Recommended MTU for the link (Type 5).
mtu: count &optional;
## The raw data of the option (everything after type & length fields),
## useful for unknown option types or when the full option payload is
## truncated in the captured packet. In those cases, option fields
## won't be pre-extracted into the fields above.
payload: string &optional;
};
## A type alias for a vector of ICMPv6 neighbor discovery message options.
type icmp6_nd_options: vector of icmp6_nd_option;
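# As a usage sketch (the function name is made up): walk a vector of
# neighbor discovery options and print any advertised prefixes, using only
# the fields defined above.
#
# function print_nd_prefixes(opts: icmp6_nd_options)
#     {
#     for ( i in opts )
#         {
#         if ( opts[i]?$prefix )
#             print fmt("prefix: %s/%d", opts[i]$prefix$prefix,
#                       opts[i]$prefix$prefix_len);
#         }
#     }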
# A DNS mapping between IP address and hostname resolved by Bro's internal
# resolver.
#
type Info: record {
## This is the time of the first packet.
ts: time &log;
## A unique identifier of the connection.
uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log;
## be left empty at all times.
local_orig: bool &log &optional;

## Indicates the number of bytes missed in content gaps, which is
## representative of packet loss. A value other than zero will
## normally cause protocol analysis to fail but some analysis may
## have been completed prior to the packet loss.
## i inconsistent packet (e.g. SYN+RST bits both set)
## ====== ====================================================
##
## If the event comes from the originator, the letter is in upper-case; if it
## comes from the responder, it's in lower-case. Multiple packets of the same
## type will only be noted once (e.g. we only record one "d" in each
## direction, regardless of how many data packets were seen).
history: string &log &optional;
## Number of packets that the originator sent.
## Only set if :bro:id:`use_conn_size_analyzer` = T
orig_pkts: count &log &optional;
## Number IP level bytes the originator sent (as seen on the wire, ## Number of IP level bytes that the originator sent (as seen on the wire,
## taken from IP total_length header field). ## taken from IP total_length header field).
## Only set if :bro:id:`use_conn_size_analyzer` = T ## Only set if :bro:id:`use_conn_size_analyzer` = T
orig_ip_bytes: count &log &optional; orig_ip_bytes: count &log &optional;
## Number of packets the responder sent. See ``orig_pkts``. ## Number of packets that the responder sent.
## Only set if :bro:id:`use_conn_size_analyzer` = T
resp_pkts: count &log &optional; resp_pkts: count &log &optional;
## Number IP level bytes the responder sent. See ``orig_pkts``. ## Number of IP level bytes that the responder sent (as seen on the wire,
## taken from IP total_length header field).
## Only set if :bro:id:`use_conn_size_analyzer` = T
resp_ip_bytes: count &log &optional; resp_ip_bytes: count &log &optional;
## If this connection was over a tunnel, indicate the ## If this connection was over a tunnel, indicate the
## *uid* values for any encapsulating parent connections ## *uid* values for any encapsulating parent connections


@ -45,16 +45,16 @@ export {
AA: bool &log &default=F; AA: bool &log &default=F;
## The Truncation bit specifies that the message was truncated. ## The Truncation bit specifies that the message was truncated.
TC: bool &log &default=F; TC: bool &log &default=F;
## The Recursion Desired bit indicates to a name server to recursively ## The Recursion Desired bit in a request message indicates that
## purse the query. ## the client wants recursive service for this query.
RD: bool &log &default=F; RD: bool &log &default=F;
## The Recursion Available bit in a response message indicates if ## The Recursion Available bit in a response message indicates that
## the name server supports recursive queries. ## the name server supports recursive queries.
RA: bool &log &default=F; RA: bool &log &default=F;
## A reserved field that is currently supposed to be zero in all ## A reserved field that is currently supposed to be zero in all
## queries and responses. ## queries and responses.
Z: count &log &default=0; Z: count &log &default=0;
## The set of resource descriptions in answer of the query. ## The set of resource descriptions in the query answer.
answers: vector of string &log &optional; answers: vector of string &log &optional;
## The caching intervals of the associated RRs described by the ## The caching intervals of the associated RRs described by the
## ``answers`` field. ## ``answers`` field.
@ -162,11 +162,11 @@ function set_session(c: connection, msg: dns_msg, is_query: bool)
c$dns = c$dns_state$pending[msg$id]; c$dns = c$dns_state$pending[msg$id];
if ( ! is_query )
{
c$dns$rcode = msg$rcode; c$dns$rcode = msg$rcode;
c$dns$rcode_name = base_errors[msg$rcode]; c$dns$rcode_name = base_errors[msg$rcode];
if ( ! is_query )
{
if ( ! c$dns?$total_answers ) if ( ! c$dns?$total_answers )
c$dns$total_answers = msg$num_answers; c$dns$total_answers = msg$num_answers;


@ -28,7 +28,9 @@ export {
type Info: record { type Info: record {
## Time when the command was sent. ## Time when the command was sent.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## User name for the current FTP session. ## User name for the current FTP session.
user: string &log &default="<unknown>"; user: string &log &default="<unknown>";


@ -22,7 +22,9 @@ export {
type Info: record { type Info: record {
## Timestamp for when the request happened. ## Timestamp for when the request happened.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## Represents the pipelined depth into the connection of this ## Represents the pipelined depth into the connection of this
## request/response transaction. ## request/response transaction.
@ -112,7 +114,7 @@ event bro_init() &priority=5
# DPD configuration. # DPD configuration.
const ports = { const ports = {
80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3138/tcp, 80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3128/tcp,
8000/tcp, 8080/tcp, 8888/tcp, 8000/tcp, 8080/tcp, 8888/tcp,
}; };
redef dpd_config += { redef dpd_config += {


@ -11,7 +11,9 @@ export {
type Info: record { type Info: record {
## Timestamp when the command was seen. ## Timestamp when the command was seen.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## Nick name given for the connection. ## Nick name given for the connection.
nick: string &log &optional; nick: string &log &optional;


@ -8,33 +8,51 @@ export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Info: record { type Info: record {
## Time when the message was first seen.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## This is a number that indicates the number of messages deep into ## A count to represent the depth of this message transaction in a single
## this connection where this particular message was transferred. ## connection where multiple messages were transferred.
trans_depth: count &log; trans_depth: count &log;
## Contents of the Helo header.
helo: string &log &optional; helo: string &log &optional;
## Contents of the From header.
mailfrom: string &log &optional; mailfrom: string &log &optional;
## Contents of the Rcpt header.
rcptto: set[string] &log &optional; rcptto: set[string] &log &optional;
## Contents of the Date header.
date: string &log &optional; date: string &log &optional;
## Contents of the From header.
from: string &log &optional; from: string &log &optional;
## Contents of the To header.
to: set[string] &log &optional; to: set[string] &log &optional;
## Contents of the ReplyTo header.
reply_to: string &log &optional; reply_to: string &log &optional;
## Contents of the MsgID header.
msg_id: string &log &optional; msg_id: string &log &optional;
## Contents of the In-Reply-To header.
in_reply_to: string &log &optional; in_reply_to: string &log &optional;
## Contents of the Subject header.
subject: string &log &optional; subject: string &log &optional;
## Contents of the X-Originating-IP header.
x_originating_ip: addr &log &optional; x_originating_ip: addr &log &optional;
## Contents of the first Received header.
first_received: string &log &optional; first_received: string &log &optional;
## Contents of the second Received header.
second_received: string &log &optional; second_received: string &log &optional;
## The last message the server sent to the client. ## The last message that the server sent to the client.
last_reply: string &log &optional; last_reply: string &log &optional;
## The message transmission path, as extracted from the headers.
path: vector of addr &log &optional; path: vector of addr &log &optional;
## Value of the User-Agent header from the client.
user_agent: string &log &optional; user_agent: string &log &optional;
## Indicate if the "Received: from" headers should still be processed. ## Indicates if the "Received: from" headers should still be processed.
process_received_from: bool &default=T; process_received_from: bool &default=T;
## Indicates if client activity has been seen, but not yet logged ## Indicates if client activity has been seen, but not yet logged.
has_client_activity: bool &default=F; has_client_activity: bool &default=F;
}; };


@ -9,11 +9,13 @@ export {
type Info: record { type Info: record {
## Time when the proxy connection was first detected. ## Time when the proxy connection was first detected.
ts: time &log; ts: time &log;
## Unique ID for the tunnel - may correspond to connection uid or be non-existent.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## Protocol version of SOCKS. ## Protocol version of SOCKS.
version: count &log; version: count &log;
## Username for the proxy if extracted from the network. ## Username for the proxy if extracted from the network.
user: string &log &optional; user: string &log &optional;
## Server status for the attempt at using the proxy. ## Server status for the attempt at using the proxy.
status: string &log &optional; status: string &log &optional;
@ -83,5 +85,8 @@ event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Addres
event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=-5 event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=-5
{ {
# This will handle the case where the analyzer failed in some way and was removed. We probably
# don't want to log these connections.
if ( "SOCKS" in c$service )
Log::write(SOCKS::LOG, c$socks); Log::write(SOCKS::LOG, c$socks);
} }


@ -26,19 +26,21 @@ export {
type Info: record { type Info: record {
## Time when the SSH connection began. ## Time when the SSH connection began.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## Indicates if the login was heuristically guessed to be "success" ## Indicates if the login was heuristically guessed to be "success"
## or "failure". ## or "failure".
status: string &log &optional; status: string &log &optional;
## Direction of the connection. If the client was a local host ## Direction of the connection. If the client was a local host
## logging into an external host, this would be OUTBOUD. INBOUND ## logging into an external host, this would be OUTBOUND. INBOUND
## would be set for the opposite situation. ## would be set for the opposite situation.
# TODO: handle local-local and remote-remote better. # TODO: handle local-local and remote-remote better.
direction: Direction &log &optional; direction: Direction &log &optional;
## Software string given by the client. ## Software string from the client.
client: string &log &optional; client: string &log &optional;
## Software string given by the server. ## Software string from the server.
server: string &log &optional; server: string &log &optional;
## Amount of data returned from the server. This is currently ## Amount of data returned from the server. This is currently
## the only measure of the success heuristic and it is logged to ## the only measure of the success heuristic and it is logged to


@ -9,13 +9,15 @@ export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Info: record { type Info: record {
## Time when the SSL connection began. ## Time when the SSL connection was first detected.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## SSL/TLS version the server offered. ## SSL/TLS version that the server offered.
version: string &log &optional; version: string &log &optional;
## SSL/TLS cipher suite the server chose. ## SSL/TLS cipher suite that the server chose.
cipher: string &log &optional; cipher: string &log &optional;
## Value of the Server Name Indicator SSL/TLS extension. It ## Value of the Server Name Indicator SSL/TLS extension. It
## indicates the server name that the client was requesting. ## indicates the server name that the client was requesting.

File diff suppressed because one or more lines are too long


@ -9,9 +9,11 @@ export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Info: record { type Info: record {
## Timestamp of when the syslog message was seen. ## Timestamp when the syslog message was seen.
ts: time &log; ts: time &log;
## Unique ID for the connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## Protocol over which the message was seen. ## Protocol over which the message was seen.
proto: transport_proto &log; proto: transport_proto &log;


@ -0,0 +1,45 @@
##! Load this script to enable global log output to an ElasticSearch database.
module LogElasticSearch;
export {
## An elasticsearch specific rotation interval.
const rotation_interval = 24hr &redef;
## Optionally ignore any :bro:enum:`Log::ID` from being sent to
## ElasticSearch with this script.
const excluded_log_ids: set[string] = set("Communication::LOG") &redef;
## If you want to explicitly only send certain :bro:enum:`Log::ID`
## streams, add them to this set. If the set remains empty, all will
## be sent. The :bro:id:`excluded_log_ids` option will remain in
## effect as well.
const send_logs: set[string] = set() &redef;
}
module Log;
event bro_init() &priority=-5
{
local my_filters: table[ID, string] of Filter = table();
for ( [id, name] in filters )
{
local filter = filters[id, name];
if ( fmt("%s", id) in LogElasticSearch::excluded_log_ids ||
(|LogElasticSearch::send_logs| > 0 && fmt("%s", id) !in LogElasticSearch::send_logs) )
next;
filter$name = cat(name, "-es");
filter$writer = Log::WRITER_ELASTICSEARCH;
filter$interv = LogElasticSearch::rotation_interval;
my_filters[id, name] = filter;
}
# This had to be done separately to avoid an ever growing filters list
# where the for loop would never end.
for ( [id, name] in my_filters )
{
Log::add_filter(id, my_filters[id, name]);
}
}


@ -60,4 +60,5 @@
@load tuning/defaults/__load__.bro @load tuning/defaults/__load__.bro
@load tuning/defaults/packet-fragments.bro @load tuning/defaults/packet-fragments.bro
@load tuning/defaults/warnings.bro @load tuning/defaults/warnings.bro
# @load tuning/logs-to-elasticsearch.bro
@load tuning/track-all-assets.bro @load tuning/track-all-assets.bro


@ -106,10 +106,10 @@ void BitTorrent_Analyzer::Undelivered(int seq, int len, bool orig)
// } // }
} }
void BitTorrent_Analyzer::EndpointEOF(TCP_Reassembler* endp) void BitTorrent_Analyzer::EndpointEOF(bool is_orig)
{ {
TCP_ApplicationAnalyzer::EndpointEOF(endp); TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
interp->FlowEOF(endp->IsOrig()); interp->FlowEOF(is_orig);
} }
void BitTorrent_Analyzer::DeliverWeird(const char* msg, bool orig) void BitTorrent_Analyzer::DeliverWeird(const char* msg, bool orig)


@ -15,7 +15,7 @@ public:
virtual void Done(); virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig); virtual void Undelivered(int seq, int len, bool orig);
virtual void EndpointEOF(TCP_Reassembler* endp); virtual void EndpointEOF(bool is_orig);
static Analyzer* InstantiateAnalyzer(Connection* conn) static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new BitTorrent_Analyzer(conn); } { return new BitTorrent_Analyzer(conn); }


@ -215,9 +215,9 @@ void BitTorrentTracker_Analyzer::Undelivered(int seq, int len, bool orig)
stop_resp = true; stop_resp = true;
} }
void BitTorrentTracker_Analyzer::EndpointEOF(TCP_Reassembler* endp) void BitTorrentTracker_Analyzer::EndpointEOF(bool is_orig)
{ {
TCP_ApplicationAnalyzer::EndpointEOF(endp); TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
} }
void BitTorrentTracker_Analyzer::InitBencParser(void) void BitTorrentTracker_Analyzer::InitBencParser(void)


@ -48,7 +48,7 @@ public:
virtual void Done(); virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig); virtual void Undelivered(int seq, int len, bool orig);
virtual void EndpointEOF(TCP_Reassembler* endp); virtual void EndpointEOF(bool is_orig);
static Analyzer* InstantiateAnalyzer(Connection* conn) static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new BitTorrentTracker_Analyzer(conn); } { return new BitTorrentTracker_Analyzer(conn); }


@ -429,6 +429,7 @@ set(bro_SRCS
logging/writers/Ascii.cc logging/writers/Ascii.cc
logging/writers/DataSeries.cc logging/writers/DataSeries.cc
logging/writers/SQLite.cc logging/writers/SQLite.cc
logging/writers/ElasticSearch.cc
logging/writers/None.cc logging/writers/None.cc
input/Manager.cc input/Manager.cc


@ -63,10 +63,10 @@ void DNS_TCP_Analyzer_binpac::Done()
interp->FlowEOF(false); interp->FlowEOF(false);
} }
void DNS_TCP_Analyzer_binpac::EndpointEOF(TCP_Reassembler* endp) void DNS_TCP_Analyzer_binpac::EndpointEOF(bool is_orig)
{ {
TCP_ApplicationAnalyzer::EndpointEOF(endp); TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
interp->FlowEOF(endp->IsOrig()); interp->FlowEOF(is_orig);
} }
void DNS_TCP_Analyzer_binpac::DeliverStream(int len, const u_char* data, void DNS_TCP_Analyzer_binpac::DeliverStream(int len, const u_char* data,


@ -45,7 +45,7 @@ public:
virtual void Done(); virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig); virtual void Undelivered(int seq, int len, bool orig);
virtual void EndpointEOF(TCP_Reassembler* endp); virtual void EndpointEOF(bool is_orig);
static Analyzer* InstantiateAnalyzer(Connection* conn) static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new DNS_TCP_Analyzer_binpac(conn); } { return new DNS_TCP_Analyzer_binpac(conn); }


@ -693,7 +693,7 @@ Val* DNS_Mgr::BuildMappingVal(DNS_Mapping* dm)
void DNS_Mgr::AddResult(DNS_Mgr_Request* dr, struct nb_dns_result* r) void DNS_Mgr::AddResult(DNS_Mgr_Request* dr, struct nb_dns_result* r)
{ {
struct hostent* h = (r && r->host_errno == 0) ? r->hostent : 0; struct hostent* h = (r && r->host_errno == 0) ? r->hostent : 0;
u_int32_t ttl = r->ttl; u_int32_t ttl = (r && r->host_errno == 0) ? r->ttl : 0;
DNS_Mapping* new_dm; DNS_Mapping* new_dm;
DNS_Mapping* prev_dm; DNS_Mapping* prev_dm;


@ -96,7 +96,7 @@ EventHandler* EventHandler::Unserialize(UnserialInfo* info)
{ {
char* name; char* name;
if ( ! UNSERIALIZE_STR(&name, 0) ) if ( ! UNSERIALIZE_STR(&name, 0) )
return false; return 0;
EventHandler* h = event_registry->Lookup(name); EventHandler* h = event_registry->Lookup(name);
if ( ! h ) if ( ! h )


@ -1035,12 +1035,10 @@ Val* IncrExpr::Eval(Frame* f) const
{ {
Val* new_elt = DoSingleEval(f, elt); Val* new_elt = DoSingleEval(f, elt);
v_vec->Assign(i, new_elt, this, OP_INCR); v_vec->Assign(i, new_elt, this, OP_INCR);
Unref(new_elt); // was Ref()'d by Assign()
} }
else else
v_vec->Assign(i, 0, this, OP_INCR); v_vec->Assign(i, 0, this, OP_INCR);
} }
// FIXME: Is the next line needed?
op->Assign(f, v_vec, OP_INCR); op->Assign(f, v_vec, OP_INCR);
} }
@ -2402,11 +2400,6 @@ Expr* RefExpr::MakeLvalue()
return this; return this;
} }
Val* RefExpr::Eval(Val* v) const
{
return Fold(v);
}
void RefExpr::Assign(Frame* f, Val* v, Opcode opcode) void RefExpr::Assign(Frame* f, Val* v, Opcode opcode)
{ {
op->Assign(f, v, opcode); op->Assign(f, v, opcode);


@ -608,10 +608,6 @@ public:
void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN); void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
Expr* MakeLvalue(); Expr* MakeLvalue();
// Only overridden to avoid special vector handling which doesn't apply
// for this class.
Val* Eval(Val* v) const;
protected: protected:
friend class Expr; friend class Expr;
RefExpr() { } RefExpr() { }


@ -100,7 +100,7 @@ Func* Func::Unserialize(UnserialInfo* info)
if ( ! (id->HasVal() && id->ID_Val()->Type()->Tag() == TYPE_FUNC) ) if ( ! (id->HasVal() && id->ID_Val()->Type()->Tag() == TYPE_FUNC) )
{ {
info->s->Error(fmt("ID %s is not a built-in", name)); info->s->Error(fmt("ID %s is not a built-in", name));
return false; return 0;
} }
Unref(f); Unref(f);


@ -20,10 +20,10 @@ void HTTP_Analyzer_binpac::Done()
interp->FlowEOF(false); interp->FlowEOF(false);
} }
void HTTP_Analyzer_binpac::EndpointEOF(TCP_Reassembler* endp) void HTTP_Analyzer_binpac::EndpointEOF(bool is_orig)
{ {
TCP_ApplicationAnalyzer::EndpointEOF(endp); TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
interp->FlowEOF(endp->IsOrig()); interp->FlowEOF(is_orig);
} }
void HTTP_Analyzer_binpac::DeliverStream(int len, const u_char* data, bool orig) void HTTP_Analyzer_binpac::DeliverStream(int len, const u_char* data, bool orig)


@ -13,7 +13,7 @@ public:
virtual void Done(); virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig); virtual void Undelivered(int seq, int len, bool orig);
virtual void EndpointEOF(TCP_Reassembler* endp); virtual void EndpointEOF(bool is_orig);
static Analyzer* InstantiateAnalyzer(Connection* conn) static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new HTTP_Analyzer_binpac(conn); } { return new HTTP_Analyzer_binpac(conn); }


@ -169,8 +169,10 @@ void ICMP_Analyzer::NextICMP6(double t, const struct icmp* icmpp, int len, int c
NeighborSolicit(t, icmpp, len, caplen, data, ip_hdr); NeighborSolicit(t, icmpp, len, caplen, data, ip_hdr);
break; break;
case ND_ROUTER_SOLICIT: case ND_ROUTER_SOLICIT:
RouterSolicit(t, icmpp, len, caplen, data, ip_hdr);
break;
case ICMP6_ROUTER_RENUMBERING: case ICMP6_ROUTER_RENUMBERING:
Router(t, icmpp, len, caplen, data, ip_hdr); ICMPEvent(icmp_sent, icmpp, len, 1, ip_hdr);
break; break;
#if 0 #if 0
@ -515,9 +517,12 @@ void ICMP_Analyzer::RouterAdvert(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr) int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{ {
EventHandlerPtr f = icmp_router_advertisement; EventHandlerPtr f = icmp_router_advertisement;
uint32 reachable, retrans; uint32 reachable = 0, retrans = 0;
if ( caplen >= (int)sizeof(reachable) )
memcpy(&reachable, data, sizeof(reachable)); memcpy(&reachable, data, sizeof(reachable));
if ( caplen >= (int)sizeof(reachable) + (int)sizeof(retrans) )
memcpy(&retrans, data + sizeof(reachable), sizeof(retrans)); memcpy(&retrans, data + sizeof(reachable), sizeof(retrans));
val_list* vl = new val_list; val_list* vl = new val_list;
@ -534,6 +539,9 @@ void ICMP_Analyzer::RouterAdvert(double t, const struct icmp* icmpp, int len,
vl->append(new IntervalVal((double)ntohl(reachable), Milliseconds)); vl->append(new IntervalVal((double)ntohl(reachable), Milliseconds));
vl->append(new IntervalVal((double)ntohl(retrans), Milliseconds)); vl->append(new IntervalVal((double)ntohl(retrans), Milliseconds));
int opt_offset = sizeof(reachable) + sizeof(retrans);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl); ConnectionEvent(f, vl);
} }
@ -542,9 +550,10 @@ void ICMP_Analyzer::NeighborAdvert(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr) int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{ {
EventHandlerPtr f = icmp_neighbor_advertisement; EventHandlerPtr f = icmp_neighbor_advertisement;
in6_addr tgtaddr; IPAddr tgtaddr;
memcpy(&tgtaddr.s6_addr, data, sizeof(tgtaddr.s6_addr)); if ( caplen >= (int)sizeof(in6_addr) )
tgtaddr = IPAddr(*((const in6_addr*)data));
val_list* vl = new val_list; val_list* vl = new val_list;
vl->append(BuildConnVal()); vl->append(BuildConnVal());
@ -552,7 +561,10 @@ void ICMP_Analyzer::NeighborAdvert(double t, const struct icmp* icmpp, int len,
vl->append(new Val(icmpp->icmp_num_addrs & 0x80, TYPE_BOOL)); // Router vl->append(new Val(icmpp->icmp_num_addrs & 0x80, TYPE_BOOL)); // Router
vl->append(new Val(icmpp->icmp_num_addrs & 0x40, TYPE_BOOL)); // Solicited vl->append(new Val(icmpp->icmp_num_addrs & 0x40, TYPE_BOOL)); // Solicited
vl->append(new Val(icmpp->icmp_num_addrs & 0x20, TYPE_BOOL)); // Override vl->append(new Val(icmpp->icmp_num_addrs & 0x20, TYPE_BOOL)); // Override
vl->append(new AddrVal(IPAddr(tgtaddr))); vl->append(new AddrVal(tgtaddr));
int opt_offset = sizeof(in6_addr);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl); ConnectionEvent(f, vl);
} }
@ -562,14 +574,18 @@ void ICMP_Analyzer::NeighborSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr) int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{ {
EventHandlerPtr f = icmp_neighbor_solicitation; EventHandlerPtr f = icmp_neighbor_solicitation;
in6_addr tgtaddr; IPAddr tgtaddr;
memcpy(&tgtaddr.s6_addr, data, sizeof(tgtaddr.s6_addr)); if ( caplen >= (int)sizeof(in6_addr) )
tgtaddr = IPAddr(*((const in6_addr*)data));
val_list* vl = new val_list; val_list* vl = new val_list;
vl->append(BuildConnVal()); vl->append(BuildConnVal());
vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr));
vl->append(new AddrVal(IPAddr(tgtaddr))); vl->append(new AddrVal(tgtaddr));
int opt_offset = sizeof(in6_addr);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl); ConnectionEvent(f, vl);
} }
@ -579,40 +595,36 @@ void ICMP_Analyzer::Redirect(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr) int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{ {
EventHandlerPtr f = icmp_redirect; EventHandlerPtr f = icmp_redirect;
in6_addr tgtaddr, dstaddr; IPAddr tgtaddr, dstaddr;
memcpy(&tgtaddr.s6_addr, data, sizeof(tgtaddr.s6_addr)); if ( caplen >= (int)sizeof(in6_addr) )
memcpy(&dstaddr.s6_addr, data + sizeof(tgtaddr.s6_addr), sizeof(dstaddr.s6_addr)); tgtaddr = IPAddr(*((const in6_addr*)data));
if ( caplen >= 2 * (int)sizeof(in6_addr) )
dstaddr = IPAddr(*((const in6_addr*)(data + sizeof(in6_addr))));
val_list* vl = new val_list; val_list* vl = new val_list;
vl->append(BuildConnVal()); vl->append(BuildConnVal());
vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr));
vl->append(new AddrVal(IPAddr(tgtaddr))); vl->append(new AddrVal(tgtaddr));
vl->append(new AddrVal(IPAddr(dstaddr))); vl->append(new AddrVal(dstaddr));
int opt_offset = 2 * sizeof(in6_addr);
vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset));
ConnectionEvent(f, vl); ConnectionEvent(f, vl);
} }
void ICMP_Analyzer::Router(double t, const struct icmp* icmpp, int len, void ICMP_Analyzer::RouterSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr) int caplen, const u_char*& data, const IP_Hdr* ip_hdr)
{ {
EventHandlerPtr f = 0; EventHandlerPtr f = icmp_router_solicitation;
switch ( icmpp->icmp_type )
{
case ND_ROUTER_SOLICIT:
f = icmp_router_solicitation;
break;
case ICMP6_ROUTER_RENUMBERING:
default:
ICMPEvent(icmp_sent, icmpp, len, 1, ip_hdr);
return;
}
val_list* vl = new val_list; val_list* vl = new val_list;
vl->append(BuildConnVal()); vl->append(BuildConnVal());
vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr));
vl->append(BuildNDOptionsVal(caplen, data));
ConnectionEvent(f, vl); ConnectionEvent(f, vl);
} }
@ -685,6 +697,144 @@ void ICMP_Analyzer::Context6(double t, const struct icmp* icmpp,
} }
} }
VectorVal* ICMP_Analyzer::BuildNDOptionsVal(int caplen, const u_char* data)
{
static RecordType* icmp6_nd_option_type = 0;
static RecordType* icmp6_nd_prefix_info_type = 0;
if ( ! icmp6_nd_option_type )
{
icmp6_nd_option_type = internal_type("icmp6_nd_option")->AsRecordType();
icmp6_nd_prefix_info_type =
internal_type("icmp6_nd_prefix_info")->AsRecordType();
}
VectorVal* vv = new VectorVal(
internal_type("icmp6_nd_options")->AsVectorType());
while ( caplen > 0 )
{
// Must have at least type & length to continue parsing options.
if ( caplen < 2 )
{
Weird("truncated_ICMPv6_ND_options");
break;
}
uint8 type = *((const uint8*)data);
uint8 length = *((const uint8*)(data + 1));
if ( length == 0 )
{
Weird("zero_length_ICMPv6_ND_option");
break;
}
RecordVal* rv = new RecordVal(icmp6_nd_option_type);
rv->Assign(0, new Val(type, TYPE_COUNT));
rv->Assign(1, new Val(length, TYPE_COUNT));
// Adjust length to be in units of bytes, exclude type/length fields.
length = length * 8 - 2;
data += 2;
caplen -= 2;
bool set_payload_field = false;
// Only parse out known options that are there in full.
switch ( type ) {
case 1:
case 2:
// Source/Target Link-layer Address option
{
if ( caplen >= length )
{
BroString* link_addr = new BroString(data, length, 0);
rv->Assign(2, new StringVal(link_addr));
}
else
set_payload_field = true;
break;
}
case 3:
// Prefix Information option
{
if ( caplen >= 30 )
{
RecordVal* info = new RecordVal(icmp6_nd_prefix_info_type);
uint8 prefix_len = *((const uint8*)(data));
bool L_flag = (*((const uint8*)(data + 1)) & 0x80) != 0;
bool A_flag = (*((const uint8*)(data + 1)) & 0x40) != 0;
uint32 valid_life = *((const uint32*)(data + 2));
uint32 prefer_life = *((const uint32*)(data + 6));
in6_addr prefix = *((const in6_addr*)(data + 14));
info->Assign(0, new Val(prefix_len, TYPE_COUNT));
info->Assign(1, new Val(L_flag, TYPE_BOOL));
info->Assign(2, new Val(A_flag, TYPE_BOOL));
info->Assign(3, new IntervalVal((double)ntohl(valid_life), Seconds));
info->Assign(4, new IntervalVal((double)ntohl(prefer_life), Seconds));
info->Assign(5, new AddrVal(IPAddr(prefix)));
rv->Assign(3, info);
}
else
set_payload_field = true;
break;
}
case 4:
// Redirected Header option
{
if ( caplen >= length )
{
const u_char* hdr = data + 6;
rv->Assign(4, ExtractICMP6Context(length - 6, hdr));
}
else
set_payload_field = true;
break;
}
case 5:
// MTU option
{
if ( caplen >= 6 )
rv->Assign(5, new Val(ntohl(*((const uint32*)(data + 2))),
TYPE_COUNT));
else
set_payload_field = true;
break;
}
default:
{
set_payload_field = true;
break;
}
}
if ( set_payload_field )
{
BroString* payload =
new BroString(data, min((int)length, caplen), 0);
rv->Assign(6, new StringVal(payload));
}
data += length;
caplen -= length;
vv->Assign(vv->Size(), rv, 0);
}
return vv;
}
int ICMP4_counterpart(int icmp_type, int icmp_code, bool& is_one_way) int ICMP4_counterpart(int icmp_type, int icmp_code, bool& is_one_way)
{ {
is_one_way = false; is_one_way = false;


@ -48,7 +48,7 @@ protected:
int caplen, const u_char*& data, const IP_Hdr* ip_hdr); int caplen, const u_char*& data, const IP_Hdr* ip_hdr);
void NeighborSolicit(double t, const struct icmp* icmpp, int len, void NeighborSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr); int caplen, const u_char*& data, const IP_Hdr* ip_hdr);
void Router(double t, const struct icmp* icmpp, int len, void RouterSolicit(double t, const struct icmp* icmpp, int len,
int caplen, const u_char*& data, const IP_Hdr* ip_hdr); int caplen, const u_char*& data, const IP_Hdr* ip_hdr);
void Describe(ODesc* d) const; void Describe(ODesc* d) const;
@ -75,6 +75,9 @@ protected:
void Context6(double t, const struct icmp* icmpp, int len, int caplen, void Context6(double t, const struct icmp* icmpp, int len, int caplen,
const u_char*& data, const IP_Hdr* ip_hdr); const u_char*& data, const IP_Hdr* ip_hdr);
// RFC 4861 Neighbor Discover message options
VectorVal* BuildNDOptionsVal(int caplen, const u_char* data);
RecordVal* icmp_conn_val; RecordVal* icmp_conn_val;
int type; int type;
int code; int code;


@@ -2692,12 +2692,12 @@ bool RemoteSerializer::ProcessLogCreateWriter()
 	int id, writer;
 	int num_fields;
-	logging::WriterBackend::WriterInfo info;
+	logging::WriterBackend::WriterInfo* info = new logging::WriterBackend::WriterInfo();
 
 	bool success = fmt.Read(&id, "id") &&
 		fmt.Read(&writer, "writer") &&
 		fmt.Read(&num_fields, "num_fields") &&
-		info.Read(&fmt);
+		info->Read(&fmt);
 
 	if ( ! success )
 		goto error;
@@ -4208,32 +4208,38 @@ bool SocketComm::Listen()
 
 bool SocketComm::AcceptConnection(int fd)
 	{
-	sockaddr_storage client;
-	socklen_t len = sizeof(client);
+	union {
+		sockaddr_storage ss;
+		sockaddr_in s4;
+		sockaddr_in6 s6;
+	} client;
 
-	int clientfd = accept(fd, (sockaddr*) &client, &len);
+	socklen_t len = sizeof(client.ss);
+	int clientfd = accept(fd, (sockaddr*) &client.ss, &len);
+
 	if ( clientfd < 0 )
 		{
 		Error(fmt("accept failed, %s %d", strerror(errno), errno));
 		return false;
 		}
 
-	if ( client.ss_family != AF_INET && client.ss_family != AF_INET6 )
+	if ( client.ss.ss_family != AF_INET && client.ss.ss_family != AF_INET6 )
 		{
-		Error(fmt("accept fail, unknown address family %d", client.ss_family));
+		Error(fmt("accept fail, unknown address family %d",
+			client.ss.ss_family));
 		close(clientfd);
 		return false;
 		}
 
 	Peer* peer = new Peer;
 	peer->id = id_counter++;
-	peer->ip = client.ss_family == AF_INET ?
-		IPAddr(((sockaddr_in*)&client)->sin_addr) :
-		IPAddr(((sockaddr_in6*)&client)->sin6_addr);
+	peer->ip = client.ss.ss_family == AF_INET ?
+		IPAddr(client.s4.sin_addr) :
+		IPAddr(client.s6.sin6_addr);
 
-	peer->port = client.ss_family == AF_INET ?
-		ntohs(((sockaddr_in*)&client)->sin_port) :
-		ntohs(((sockaddr_in6*)&client)->sin6_port);
+	peer->port = client.ss.ss_family == AF_INET ?
+		ntohs(client.s4.sin_port) :
+		ntohs(client.s6.sin6_port);
 
 	peer->connected = true;
 	peer->ssl = listen_ssl;


@@ -17,8 +17,8 @@
 class IncrementalSendTimer;
 
 namespace threading {
-	class Field;
-	class Value;
+	struct Field;
+	struct Value;
 }
 
 // This class handles the communication done in Bro's main loop.


@@ -368,7 +368,7 @@ int SMB_Session::ParseSetupAndx(int is_orig, binpac::SMB::SMB_header const& hdr,
 
 	// The binpac type depends on the negotiated server settings -
 	// possibly we can just pick the "right" format here, and use that?
-	if ( hdr.flags2() && 0x0800 )
+	if ( hdr.flags2() & 0x0800 )
 		{
 		binpac::SMB::SMB_setup_andx_ext msg(hdr.unicode());
 		msg.Parse(body.data(), body.data() + body.length());


@@ -31,10 +31,10 @@ void SOCKS_Analyzer::Done()
 	interp->FlowEOF(false);
 	}
 
-void SOCKS_Analyzer::EndpointEOF(TCP_Reassembler* endp)
+void SOCKS_Analyzer::EndpointEOF(bool is_orig)
 	{
-	TCP_ApplicationAnalyzer::EndpointEOF(endp);
-	interp->FlowEOF(endp->IsOrig());
+	TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
+	interp->FlowEOF(is_orig);
 	}
 
 void SOCKS_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
@@ -66,9 +66,16 @@ void SOCKS_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
 		ForwardStream(len, data, orig);
 		}
 	else
+		{
+		try
 			{
 			interp->NewData(orig, data, data + len);
 			}
+		catch ( const binpac::Exception& e )
+			{
+			ProtocolViolation(fmt("Binpac exception: %s", e.c_msg()));
+			}
+		}
 	}
 
 void SOCKS_Analyzer::Undelivered(int seq, int len, bool orig)


@@ -23,7 +23,7 @@ public:
 	virtual void Done();
 	virtual void DeliverStream(int len, const u_char* data, bool orig);
 	virtual void Undelivered(int seq, int len, bool orig);
-	virtual void EndpointEOF(TCP_Reassembler* endp);
+	virtual void EndpointEOF(bool is_orig);
 
 	static Analyzer* InstantiateAnalyzer(Connection* conn)
 		{ return new SOCKS_Analyzer(conn); }


@@ -23,10 +23,10 @@ void SSL_Analyzer::Done()
 	interp->FlowEOF(false);
 	}
 
-void SSL_Analyzer::EndpointEOF(TCP_Reassembler* endp)
+void SSL_Analyzer::EndpointEOF(bool is_orig)
 	{
-	TCP_ApplicationAnalyzer::EndpointEOF(endp);
-	interp->FlowEOF(endp->IsOrig());
+	TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
+	interp->FlowEOF(is_orig);
 	}
 
 void SSL_Analyzer::DeliverStream(int len, const u_char* data, bool orig)


@@ -15,7 +15,7 @@ public:
 	virtual void Undelivered(int seq, int len, bool orig);
 
 	// Overriden from TCP_ApplicationAnalyzer.
-	virtual void EndpointEOF(TCP_Reassembler* endp);
+	virtual void EndpointEOF(bool is_orig);
 
 	static Analyzer* InstantiateAnalyzer(Connection* conn)
 		{ return new SSL_Analyzer(conn); }


@@ -163,7 +163,7 @@ SerialObj* SerialObj::Unserialize(UnserialInfo* info, SerialType type)
 	if ( ! result )
 		{
 		DBG_POP(DBG_SERIAL);
-		return false;
+		return 0;
 		}
 
 	DBG_POP(DBG_SERIAL);


@@ -18,9 +18,9 @@ struct pcap_pkthdr;
 
 class EncapsulationStack;
 class Connection;
-class ConnID;
 class OSFingerprint;
 class ConnCompressor;
+struct ConnID;
 
 declare(PDict,Connection);
 declare(PDict,FragReassembler);


@@ -910,7 +910,7 @@ Val* RecordType::FieldDefault(int field) const
 	const TypeDecl* td = FieldDecl(field);
 
 	if ( ! td->attrs )
-		return false;
+		return 0;
 
 	const Attr* def_attr = td->attrs->FindAttr(ATTR_DEFAULT);


@@ -972,12 +972,12 @@ function sha256_hash_finish%(index: any%): string
 ##
 ## .. note::
 ##
-##      This function is a wrapper about the function ``rand`` provided by
-##      the OS.
+##      This function is a wrapper about the function ``random``
+##      provided by the OS.
 function rand%(max: count%): count
 	%{
 	int result;
-	result = bro_uint_t(double(max) * double(rand()) / (RAND_MAX + 1.0));
+	result = bro_uint_t(double(max) * double(bro_random()) / (RAND_MAX + 1.0));
 	return new Val(result, TYPE_COUNT);
 	%}
@@ -989,11 +989,11 @@ function rand%(max: count%): count
 ##
 ## .. note::
 ##
-##      This function is a wrapper about the function ``srand`` provided
-##      by the OS.
+##      This function is a wrapper about the function ``srandom``
+##      provided by the OS.
 function srand%(seed: count%): any
 	%{
-	srand(seed);
+	bro_srandom(seed);
 	return 0;
 	%}
@@ -2604,6 +2604,29 @@ function to_subnet%(sn: string%): subnet
 	return ret;
 	%}
 
+## Converts a :bro:type:`string` to a :bro:type:`double`.
+##
+## str: The :bro:type:`string` to convert.
+##
+## Returns: The :bro:type:`string` *str* as double, or 0 if *str* has
+##          an invalid format.
+##
+function to_double%(str: string%): double
+	%{
+	const char* s = str->CheckString();
+	char* end_s;
+	double d = strtod(s, &end_s);
+
+	if ( s[0] == '\0' || end_s[0] != '\0' )
+		{
+		builtin_error("bad conversion to double", @ARG@[0]);
+		d = 0;
+		}
+
+	return new Val(d, TYPE_DOUBLE);
+	%}
+
 ## Converts a :bro:type:`count` to an :bro:type:`addr`.
 ##
 ## ip: The :bro:type:`count` to convert.


@@ -157,7 +157,7 @@ event new_connection%(c: connection%);
 ## e: The new encapsulation.
 event tunnel_changed%(c: connection, e: EncapsulatingConnVector%);
 
-## Generated when reassembly starts for a TCP connection. The event is raised
+## Generated when reassembly starts for a TCP connection. This event is raised
 ## at the moment when Bro's TCP analyzer enables stream reassembly for a
 ## connection.
 ##
@@ -522,7 +522,7 @@ event esp_packet%(p: pkt_hdr%);
 ## .. bro:see:: new_packet tcp_packet ipv6_ext_headers
 event mobile_ipv6_message%(p: pkt_hdr%);
 
-## Genereated for any IPv6 packet encapsulated in a Teredo tunnel.
+## Generated for any IPv6 packet encapsulated in a Teredo tunnel.
 ## See :rfc:`4380` for more information about the Teredo protocol.
 ##
 ## outer: The Teredo tunnel connection.
@@ -532,10 +532,10 @@ event mobile_ipv6_message%(p: pkt_hdr%);
 ## .. bro:see:: teredo_authentication teredo_origin_indication teredo_bubble
 ##
 ## .. note:: Since this event may be raised on a per-packet basis, handling
-##    it may become particular expensive for real-time analysis.
+##    it may become particularly expensive for real-time analysis.
 event teredo_packet%(outer: connection, inner: teredo_hdr%);
 
-## Genereated for IPv6 packets encapsulated in a Teredo tunnel that
+## Generated for IPv6 packets encapsulated in a Teredo tunnel that
 ## use the Teredo authentication encapsulation method.
 ## See :rfc:`4380` for more information about the Teredo protocol.
 ##
@@ -546,10 +546,10 @@ event teredo_packet%(outer: connection, inner: teredo_hdr%);
 ## .. bro:see:: teredo_packet teredo_origin_indication teredo_bubble
 ##
 ## .. note:: Since this event may be raised on a per-packet basis, handling
-##    it may become particular expensive for real-time analysis.
+##    it may become particularly expensive for real-time analysis.
 event teredo_authentication%(outer: connection, inner: teredo_hdr%);
 
-## Genereated for IPv6 packets encapsulated in a Teredo tunnel that
+## Generated for IPv6 packets encapsulated in a Teredo tunnel that
 ## use the Teredo origin indication encapsulation method.
 ## See :rfc:`4380` for more information about the Teredo protocol.
 ##
@@ -560,10 +560,10 @@ event teredo_authentication%(outer: connection, inner: teredo_hdr%);
 ## .. bro:see:: teredo_packet teredo_authentication teredo_bubble
 ##
 ## .. note:: Since this event may be raised on a per-packet basis, handling
-##    it may become particular expensive for real-time analysis.
+##    it may become particularly expensive for real-time analysis.
 event teredo_origin_indication%(outer: connection, inner: teredo_hdr%);
 
-## Genereated for Teredo bubble packets. That is, IPv6 packets encapsulated
+## Generated for Teredo bubble packets. That is, IPv6 packets encapsulated
 ## in a Teredo tunnel that have a Next Header value of :bro:id:`IPPROTO_NONE`.
 ## See :rfc:`4380` for more information about the Teredo protocol.
 ##
@@ -574,15 +574,15 @@ event teredo_origin_indication%(outer: connection, inner: teredo_hdr%);
 ## .. bro:see:: teredo_packet teredo_authentication teredo_origin_indication
 ##
 ## .. note:: Since this event may be raised on a per-packet basis, handling
-##    it may become particular expensive for real-time analysis.
+##    it may become particularly expensive for real-time analysis.
 event teredo_bubble%(outer: connection, inner: teredo_hdr%);
 
-## Generated for every packet that has non-empty transport-layer payload. This is a
-## very low-level and expensive event that should be avoided when at all possible.
-## It's usually infeasible to handle when processing even medium volumes of
-## traffic in real-time. It's even worse than :bro:id:`new_packet`. That said, if
-## you work from a trace and want to do some packet-level analysis, it may come in
-## handy.
+## Generated for every packet that has a non-empty transport-layer payload.
+## This is a very low-level and expensive event that should be avoided when
+## at all possible. It's usually infeasible to handle when processing even
+## medium volumes of traffic in real-time. It's even worse than
+## :bro:id:`new_packet`. That said, if you work from a trace and want to
+## do some packet-level analysis, it may come in handy.
 ##
 ## c: The connection the packet is part of.
 ##
@@ -1054,9 +1054,11 @@ event icmp_parameter_problem%(c: connection, icmp: icmp_conn, code: count, conte
 ## icmp: Additional ICMP-specific information augmenting the standard connection
 ##       record *c*.
 ##
+## options: Any Neighbor Discovery options included with message (:rfc:`4861`).
+##
 ## .. bro:see:: icmp_router_advertisement
 ##    icmp_neighbor_solicitation icmp_neighbor_advertisement icmp_redirect
-event icmp_router_solicitation%(c: connection, icmp: icmp_conn%);
+event icmp_router_solicitation%(c: connection, icmp: icmp_conn, options: icmp6_nd_options%);
 
 ## Generated for ICMP *router advertisement* messages.
 ##
@@ -1090,9 +1092,11 @@ event icmp_router_solicitation%(c: connection, icmp: icmp_conn%);
 ##
 ## retrans_timer: How long a host should wait before retransmitting.
 ##
+## options: Any Neighbor Discovery options included with message (:rfc:`4861`).
+##
 ## .. bro:see:: icmp_router_solicitation
 ##    icmp_neighbor_solicitation icmp_neighbor_advertisement icmp_redirect
-event icmp_router_advertisement%(c: connection, icmp: icmp_conn, cur_hop_limit: count, managed: bool, other: bool, home_agent: bool, pref: count, proxy: bool, rsv: count, router_lifetime: interval, reachable_time: interval, retrans_timer: interval%);
+event icmp_router_advertisement%(c: connection, icmp: icmp_conn, cur_hop_limit: count, managed: bool, other: bool, home_agent: bool, pref: count, proxy: bool, rsv: count, router_lifetime: interval, reachable_time: interval, retrans_timer: interval, options: icmp6_nd_options%);
 
 ## Generated for ICMP *neighbor solicitation* messages.
 ##
@@ -1107,9 +1111,11 @@ event icmp_router_advertisement%(c: connection, icmp: icmp_conn, cur_hop_limit:
 ##
 ## tgt: The IP address of the target of the solicitation.
 ##
+## options: Any Neighbor Discovery options included with message (:rfc:`4861`).
+##
 ## .. bro:see:: icmp_router_solicitation icmp_router_advertisement
 ##    icmp_neighbor_advertisement icmp_redirect
-event icmp_neighbor_solicitation%(c: connection, icmp: icmp_conn, tgt:addr%);
+event icmp_neighbor_solicitation%(c: connection, icmp: icmp_conn, tgt: addr, options: icmp6_nd_options%);
 
 ## Generated for ICMP *neighbor advertisement* messages.
 ##
@@ -1131,9 +1137,11 @@ event icmp_neighbor_solicitation%(c: connection, icmp: icmp_conn, tgt:addr%);
 ## tgt: the Target Address in the soliciting message or the address whose
 ##      link-layer address has changed for unsolicited adverts.
 ##
+## options: Any Neighbor Discovery options included with message (:rfc:`4861`).
+##
 ## .. bro:see:: icmp_router_solicitation icmp_router_advertisement
 ##    icmp_neighbor_solicitation icmp_redirect
-event icmp_neighbor_advertisement%(c: connection, icmp: icmp_conn, router: bool, solicited: bool, override: bool, tgt:addr%);
+event icmp_neighbor_advertisement%(c: connection, icmp: icmp_conn, router: bool, solicited: bool, override: bool, tgt: addr, options: icmp6_nd_options%);
 
 ## Generated for ICMP *redirect* messages.
 ##
@@ -1151,9 +1159,11 @@ event icmp_neighbor_advertisement%(c: connection, icmp: icmp_conn, router: bool,
 ##
 ## dest: The address of the destination which is redirected to the target.
 ##
+## options: Any Neighbor Discovery options included with message (:rfc:`4861`).
+##
 ## .. bro:see:: icmp_router_solicitation icmp_router_advertisement
 ##    icmp_neighbor_solicitation icmp_neighbor_advertisement
-event icmp_redirect%(c: connection, icmp: icmp_conn, tgt: addr, dest: addr%);
+event icmp_redirect%(c: connection, icmp: icmp_conn, tgt: addr, dest: addr, options: icmp6_nd_options%);
 
 ## Generated when a TCP connection terminated, passing on statistics about the
 ## two endpoints. This event is always generated when Bro flushes the internal
@@ -6216,13 +6226,12 @@ event signature_match%(state: signature_state, msg: string, data: string%);
 ##
 ## request_type: The type of the request.
 ##
-## dstaddr: Address that the tunneled traffic should be sent to.
-##
-## dstname: DNS name of the host that the tunneled traffic should be sent to.
+## sa: Address that the tunneled traffic should be sent to.
 ##
 ## p: The destination port for the proxied traffic.
 ##
-## user: Username given for the SOCKS connection. This is not yet implemented for SOCKSv5.
+## user: Username given for the SOCKS connection. This is not yet implemented
+##       for SOCKSv5.
 event socks_request%(c: connection, version: count, request_type: count, sa: SOCKS::Address, p: port, user: string%);
 
 ## Generated when a SOCKS reply is analyzed.
@@ -6233,9 +6242,7 @@ event socks_request%(c: connection, version: count, request_type: count, sa: SOC
 ##
 ## reply: The status reply from the server.
 ##
-## dstaddr: The address that the server sent the traffic to.
-##
-## dstname: The name the server sent the traffic to. Only applicable for SOCKSv5.
+## sa: The address that the server sent the traffic to.
 ##
 ## p: The destination port for the proxied traffic.
 event socks_reply%(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port%);


@@ -71,11 +71,9 @@ declare(PDict, InputHash);
 class Manager::Stream {
 public:
 	string name;
-	ReaderBackend::ReaderInfo info;
+	ReaderBackend::ReaderInfo* info;
 	bool removed;
 
-	ReaderMode mode;
-
 	StreamType stream_type; // to distinguish between event and table streams
 
 	EnumVal* type;
@@ -262,7 +260,6 @@ ReaderBackend* Manager::CreateBackend(ReaderFrontend* frontend, bro_int_t type)
 
 	ReaderBackend* backend = (*ir->factory)(frontend);
 	assert(backend);
-	frontend->ty_name = ir->name;
 
 	return backend;
 	}
@@ -293,9 +290,6 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
 
 	EnumVal* reader = description->LookupWithDefault(rtype->FieldOffset("reader"))->AsEnumVal();
 
-	ReaderFrontend* reader_obj = new ReaderFrontend(reader->InternalInt());
-	assert(reader_obj);
-
 	// get the source ...
 	Val* sourceval = description->LookupWithDefault(rtype->FieldOffset("source"));
 	assert ( sourceval != 0 );
@@ -303,21 +297,22 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
 	string source((const char*) bsource->Bytes(), bsource->Len());
 	Unref(sourceval);
 
-	EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal();
-	Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));
+	ReaderBackend::ReaderInfo* rinfo = new ReaderBackend::ReaderInfo();
+	rinfo->source = copy_string(source.c_str());
 
+	EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal();
 	switch ( mode->InternalInt() )
 		{
 		case 0:
-			info->mode = MODE_MANUAL;
+			rinfo->mode = MODE_MANUAL;
 			break;
 
 		case 1:
-			info->mode = MODE_REREAD;
+			rinfo->mode = MODE_REREAD;
 			break;
 
 		case 2:
-			info->mode = MODE_STREAM;
+			rinfo->mode = MODE_STREAM;
 			break;
 
 		default:
@@ -326,13 +321,16 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
 	Unref(mode);
 
+	Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));
+
+	ReaderFrontend* reader_obj = new ReaderFrontend(*rinfo, reader);
+	assert(reader_obj);
+
 	info->reader = reader_obj;
 	info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
 	info->name = name;
 	info->config = config->AsTableVal(); // ref'd by LookupWithDefault
+	info->info = rinfo;
 
-	ReaderBackend::ReaderInfo readerinfo;
-	readerinfo.source = source;
-
 	Ref(description);
 	info->description = description;
@@ -347,16 +345,13 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
 			ListVal* index = info->config->RecoverIndex(k);
 			string key = index->Index(0)->AsString()->CheckString();
 			string value = v->Value()->AsString()->CheckString();
-			info->info.config.insert(std::make_pair(key, value));
+			info->info->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str())));
 			Unref(index);
 			delete k;
 			}
 		}
 
-	info->info = readerinfo;
-
 	DBG_LOG(DBG_INPUT, "Successfully created new input stream %s",
 		name.c_str());
@@ -481,7 +476,7 @@ bool Manager::CreateEventStream(RecordVal* fval)
 
 	assert(stream->reader);
-	stream->reader->Init(stream->info, stream->mode, stream->num_fields, logf );
+	stream->reader->Init(stream->num_fields, logf );
 
 	readers[stream->reader] = stream;
@@ -658,7 +653,7 @@ bool Manager::CreateTableStream(RecordVal* fval)
 
 	assert(stream->reader);
-	stream->reader->Init(stream->info, stream->mode, fieldsV.size(), fields );
+	stream->reader->Init(fieldsV.size(), fields );
 
 	readers[stream->reader] = stream;
@@ -732,8 +727,6 @@ bool Manager::RemoveStream(Stream *i)
 	i->removed = true;
 
-	i->reader->Close();
-
 	DBG_LOG(DBG_INPUT, "Successfully queued removal of stream %s",
 		i->name.c_str());
@@ -799,17 +792,19 @@ bool Manager::UnrollRecordType(vector<Field*> *fields,
 	else
 		{
-		Field* field = new Field();
-		field->name = nameprepend + rec->FieldName(i);
-		field->type = rec->FieldType(i)->Tag();
+		string name = nameprepend + rec->FieldName(i);
+		const char* secondary = 0;
+		TypeTag ty = rec->FieldType(i)->Tag();
+		TypeTag st = TYPE_VOID;
+		bool optional = false;
 
-		if ( field->type == TYPE_TABLE )
-			field->subtype = rec->FieldType(i)->AsSetType()->Indices()->PureType()->Tag();
+		if ( ty == TYPE_TABLE )
+			st = rec->FieldType(i)->AsSetType()->Indices()->PureType()->Tag();
 
-		else if ( field->type == TYPE_VECTOR )
-			field->subtype = rec->FieldType(i)->AsVectorType()->YieldType()->Tag();
+		else if ( ty == TYPE_VECTOR )
+			st = rec->FieldType(i)->AsVectorType()->YieldType()->Tag();
 
-		else if ( field->type == TYPE_PORT &&
+		else if ( ty == TYPE_PORT &&
 			  rec->FieldDecl(i)->FindAttr(ATTR_TYPE_COLUMN) )
 			{
 			// we have an annotation for the second column
@@ -819,12 +814,13 @@ bool Manager::UnrollRecordType(vector<Field*> *fields,
 			assert(c);
 			assert(c->Type()->Tag() == TYPE_STRING);
 
-			field->secondary_name = c->AsStringVal()->AsString()->CheckString();
+			secondary = c->AsStringVal()->AsString()->CheckString();
 			}
 
 		if ( rec->FieldDecl(i)->FindAttr(ATTR_OPTIONAL ) )
-			field->optional = true;
+			optional = true;
 
+		Field* field = new Field(name.c_str(), secondary, ty, st, optional);
 		fields->push_back(field);
 		}
 	}
@@ -1238,7 +1234,7 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
 #endif
 
 	// Send event that the current update is indeed finished.
-	SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->info.source.c_str()));
+	SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->info->source));
 	}
 
 void Manager::Put(ReaderFrontend* reader, Value* *vals)
@@ -1715,7 +1711,7 @@ int Manager::GetValueLength(const Value* val) {
 	case TYPE_STRING:
 	case TYPE_ENUM:
 		{
-		length += val->val.string_val->size();
+		length += val->val.string_val.length;
 		break;
 		}
@@ -1814,13 +1810,13 @@ int Manager::CopyValue(char *data, const int startpos, const Value* val)
 	case TYPE_STRING:
 	case TYPE_ENUM:
 		{
-		memcpy(data+startpos, val->val.string_val->c_str(), val->val.string_val->length());
-		return val->val.string_val->size();
+		memcpy(data+startpos, val->val.string_val.data, val->val.string_val.length);
+		return val->val.string_val.length;
 		}
 
 	case TYPE_ADDR:
 		{
-		int length;
+		int length = 0;
 		switch ( val->val.addr_val.family ) {
 		case IPv4:
 			length = sizeof(val->val.addr_val.in.in4);
@@ -1841,7 +1837,7 @@ int Manager::CopyValue(char *data, const int startpos, const Value* val)
 	case TYPE_SUBNET:
 		{
-		int length;
+		int length = 0;
 		switch ( val->val.subnet_val.prefix.family ) {
 		case IPv4:
 			length = sizeof(val->val.addr_val.in.in4);
@@ -1963,7 +1959,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
 	case TYPE_STRING:
 		{
-		BroString *s = new BroString(*(val->val.string_val));
+		BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 0);
 		return new StringVal(s);
 		}
@@ -1972,7 +1968,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
 	case TYPE_ADDR:
 		{
-		IPAddr* addr;
+		IPAddr* addr = 0;
 		switch ( val->val.addr_val.family ) {
 		case IPv4:
 			addr = new IPAddr(val->val.addr_val.in.in4);
@@ -1993,7 +1989,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
 	case TYPE_SUBNET:
 		{
-		IPAddr* addr;
+		IPAddr* addr = 0;
 		switch ( val->val.subnet_val.prefix.family ) {
 		case IPv4:
 			addr = new IPAddr(val->val.subnet_val.prefix.in.in4);
@@ -2047,8 +2043,8 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
 	case TYPE_ENUM: {
 		// well, this is kind of stupid, because EnumType just mangles the module name and the var name together again...
 		// but well
-		string module = extract_module_name(val->val.string_val->c_str());
-		string var = extract_var_name(val->val.string_val->c_str());
+		string module = extract_module_name(val->val.string_val.data);
+		string var = extract_var_name(val->val.string_val.data);
 		bro_int_t index = request_type->AsEnumType()->Lookup(module, var.c_str());
 		if ( index == -1 )
 			reporter->InternalError("Value not found in enum mappimg. Module: %s, var: %s",


@@ -56,22 +56,24 @@ private:
 class SendEventMessage : public threading::OutputMessage<ReaderFrontend> {
 public:
-	SendEventMessage(ReaderFrontend* reader, const string& name, const int num_vals, Value* *val)
+	SendEventMessage(ReaderFrontend* reader, const char* name, const int num_vals, Value* *val)
 		: threading::OutputMessage<ReaderFrontend>("SendEvent", reader),
-		name(name), num_vals(num_vals), val(val) {}
+		name(copy_string(name)), num_vals(num_vals), val(val) {}
+
+	virtual ~SendEventMessage()	{ delete [] name; }

 	virtual bool Process()
 		{
 		bool success = input_mgr->SendEvent(name, num_vals, val);

 		if ( ! success )
-			reporter->Error("SendEvent for event %s failed", name.c_str());
+			reporter->Error("SendEvent for event %s failed", name);

 		return true; // We do not want to die if sendEvent fails because the event did not return.
 		}

 private:
-	const string name;
+	const char* name;
 	const int num_vals;
 	Value* *val;
 };
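The SendEventMessage change above swaps a std::string member for a heap-copied C string that the message frees itself, so the event name stays valid when the message crosses the thread boundary. A minimal sketch of that own-a-copy pattern (the helper and class names here are illustrative stand-ins, not Bro's actual implementation):

```cpp
#include <cassert>
#include <cstring>

// Illustrative stand-in for Bro's copy_string(): duplicate with new[],
// so the owning object can release the buffer with delete[].
static const char* copy_str(const char* s)
	{
	char* c = new char[strlen(s) + 1];
	strcpy(c, s);
	return c;
	}

// A message that owns a private copy of its event name, mirroring the
// copy-in-constructor / delete-in-destructor pattern from the diff.
class EventMessage {
public:
	explicit EventMessage(const char* arg_name) : name(copy_str(arg_name)) {}
	~EventMessage()	{ delete [] name; }
	const char* Name() const	{ return name; }
private:
	const char* name;
};
```

The caller can hand the message a stack or temporary string and forget about it; the message's copy outlives the caller's buffer.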
@@ -142,58 +144,18 @@ public:
 using namespace logging;

-bool ReaderBackend::ReaderInfo::Read(SerializationFormat* fmt)
-	{
-	int size;
-
-	if ( ! (fmt->Read(&source, "source") &&
-		fmt->Read(&size, "config_size")) )
-		return false;
-
-	config.clear();
-
-	while ( size )
-		{
-		string value;
-		string key;
-
-		if ( ! (fmt->Read(&value, "config-value") && fmt->Read(&value, "config-key")) )
-			return false;
-
-		config.insert(std::make_pair(value, key));
-		}
-
-	return true;
-	}
-
-bool ReaderBackend::ReaderInfo::Write(SerializationFormat* fmt) const
-	{
-	int size = config.size();
-
-	if ( ! (fmt->Write(source, "source") &&
-		fmt->Write(size, "config_size")) )
-		return false;
-
-	for ( config_map::const_iterator i = config.begin(); i != config.end(); ++i )
-		{
-		if ( ! (fmt->Write(i->first, "config-value") && fmt->Write(i->second, "config-key")) )
-			return false;
-		}
-
-	return true;
-	}
-
 ReaderBackend::ReaderBackend(ReaderFrontend* arg_frontend) : MsgThread()
 	{
 	disabled = true; // disabled will be set correcty in init.
 	frontend = arg_frontend;
+	info = new ReaderInfo(frontend->Info());

 	SetName(frontend->Name());
 	}

 ReaderBackend::~ReaderBackend()
 	{
+	delete info;
 	}
 void ReaderBackend::Put(Value* *val)
@@ -211,7 +173,7 @@ void ReaderBackend::Clear()
 	SendOut(new ClearMessage(frontend));
 	}

-void ReaderBackend::SendEvent(const string& name, const int num_vals, Value* *vals)
+void ReaderBackend::SendEvent(const char* name, const int num_vals, Value* *vals)
 	{
 	SendOut(new SendEventMessage(frontend, name, num_vals, vals));
 	}
@@ -226,18 +188,14 @@ void ReaderBackend::SendEntry(Value* *vals)
 	SendOut(new SendEntryMessage(frontend, vals));
 	}

-bool ReaderBackend::Init(const ReaderInfo& arg_info, ReaderMode arg_mode, const int arg_num_fields,
+bool ReaderBackend::Init(const int arg_num_fields,
 			 const threading::Field* const* arg_fields)
 	{
-	info = arg_info;
-	mode = arg_mode;
 	num_fields = arg_num_fields;
 	fields = arg_fields;
-	SetName("InputReader/"+info.source);

 	// disable if DoInit returns error.
-	int success = DoInit(arg_info, mode, arg_num_fields, arg_fields);
+	int success = DoInit(*info, arg_num_fields, arg_fields);

 	if ( ! success )
 		{
@@ -250,7 +208,7 @@ bool ReaderBackend::Init(const ReaderInfo& arg_info, ReaderMode arg_mode, const
 	return success;
 	}

-void ReaderBackend::Close()
+bool ReaderBackend::OnFinish(double network_time)
 	{
 	DoClose();
 	disabled = true; // frontend disables itself when it gets the Close-message.
@@ -264,6 +222,8 @@ void ReaderBackend::Close()
 		delete [] (fields);
 		fields = 0;
 		}
+
+	return true;
 	}

 bool ReaderBackend::Update()
@@ -286,10 +246,9 @@ void ReaderBackend::DisableFrontend()
 	SendOut(new DisableMessage(frontend));
 	}

-bool ReaderBackend::DoHeartbeat(double network_time, double current_time)
+bool ReaderBackend::OnHeartbeat(double network_time, double current_time)
 	{
-	MsgThread::DoHeartbeat(network_time, current_time);
-	return true;
+	return DoHeartbeat(network_time, current_time);
 	}

 TransportProto ReaderBackend::StringToProto(const string &proto)
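The refactoring above routes thread lifecycle callbacks through one chain: the thread base class invokes the virtual OnHeartbeat()/OnFinish(), and the backend forwards them to the reader-specific DoHeartbeat()/DoClose(). A rough sketch of that delegation (simplified class names, not the real MsgThread API):

```cpp
#include <cassert>

// Simplified model of the callback chain: the thread base class calls
// OnHeartbeat(), which the backend forwards to the reader's DoHeartbeat().
class ThreadBase {
public:
	virtual ~ThreadBase()	{}

	// Called from the main thread's timer loop (simplified).
	bool Heartbeat(double network_time, double current_time)
		{ return OnHeartbeat(network_time, current_time); }

protected:
	virtual bool OnHeartbeat(double network_time, double current_time) = 0;
};

class BackendSketch : public ThreadBase {
public:
	BackendSketch() : beats(0)	{}
	int beats;

protected:
	virtual bool OnHeartbeat(double network_time, double current_time)
		{ return DoHeartbeat(network_time, current_time); }

	// What a concrete reader (Ascii, Raw, Benchmark) would override.
	virtual bool DoHeartbeat(double, double)	{ ++beats; return true; }
};
```

The benefit of the indirection is that readers no longer need to remember to call the base class's heartbeat first; the backend does the bookkeeping once.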

@@ -7,8 +7,6 @@
 #include "threading/SerialTypes.h"
 #include "threading/MsgThread.h"

-class RemoteSerializer;
-
 namespace input {
@@ -36,7 +34,10 @@ enum ReaderMode {
 	 * for new appended data. When new data is appended is has to be sent
 	 * using the Put api functions.
 	 */
-	MODE_STREAM
+	MODE_STREAM,
+
+	/** Internal dummy mode for initialization. */
+	MODE_NONE
 };
class ReaderFrontend; class ReaderFrontend;
@@ -72,14 +73,17 @@ public:
 	 */
 	struct ReaderInfo
 		{
-		typedef std::map<string, string> config_map;
+		// Structure takes ownership of the strings.
+		typedef std::map<const char*, const char*, CompareString> config_map;

 		/**
 		 * A string left to the interpretation of the reader
 		 * implementation; it corresponds to the value configured on
 		 * the script-level for the logging filter.
+		 *
+		 * Structure takes ownership of the string.
 		 */
-		string source;
+		const char* source;

 		/**
 		 * A map of key/value pairs corresponding to the relevant
@@ -87,23 +91,45 @@ public:
 		 */
 		config_map config;

-	private:
-		friend class ::RemoteSerializer;
+		/**
+		 * The opening mode for the input source.
+		 */
+		ReaderMode mode;

-		// Note, these need to be adapted when changing the struct's
-		// fields. They serialize/deserialize the struct.
-		bool Read(SerializationFormat* fmt);
-		bool Write(SerializationFormat* fmt) const;
+		ReaderInfo()
+			{
+			source = 0;
+			mode = MODE_NONE;
+			}
+
+		ReaderInfo(const ReaderInfo& other)
+			{
+			source = other.source ? copy_string(other.source) : 0;
+			mode = other.mode;
+
+			for ( config_map::const_iterator i = other.config.begin(); i != other.config.end(); i++ )
+				config.insert(std::make_pair(copy_string(i->first), copy_string(i->second)));
+			}
+
+		~ReaderInfo()
+			{
+			delete [] source;
+
+			for ( config_map::iterator i = config.begin(); i != config.end(); i++ )
+				{
+				delete [] i->first;
+				delete [] i->second;
+				}
+			}
+
+	private:
+		const ReaderInfo& operator=(const ReaderInfo& other); // Disable.
 		};
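Because the new config_map keys on raw const char* pointers, it needs a comparator that orders by string content; without one, std::map would compare addresses and lookups for equal strings stored elsewhere would miss. A small sketch of how such a comparator works (the functor below is inferred from the diff's CompareString, not Bro's exact definition):

```cpp
#include <cassert>
#include <cstring>
#include <map>

// Orders const char* keys by their contents, so map::find() matches
// equal strings even when they live at different addresses.
struct CompareStr {
	bool operator()(const char* a, const char* b) const
		{ return strcmp(a, b) < 0; }
};

// Hypothetical mirror of the diff's config_map typedef.
typedef std::map<const char*, const char*, CompareStr> config_map;
```

Note the sketch stores unowned literals for brevity; the real struct copies every key and value on insert and deletes them in its destructor, as the diff shows.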
/** /**
* One-time initialization of the reader to define the input source. * One-time initialization of the reader to define the input source.
* *
* @param source A string left to the interpretation of the * @param @param info Meta information for the writer.
* reader implementation; it corresponds to the value configured on
* the script-level for the input stream.
*
* @param mode The opening mode for the input source.
* *
* @param num_fields Number of fields contained in \a fields. * @param num_fields Number of fields contained in \a fields.
* *
@ -115,16 +141,7 @@ public:
* *
* @return False if an error occured. * @return False if an error occured.
*/ */
bool Init(const ReaderInfo& info, ReaderMode mode, int num_fields, const threading::Field* const* fields); bool Init(int num_fields, const threading::Field* const* fields);
/**
* Finishes reading from this input stream in a regular fashion. Must
* not be called if an error has been indicated earlier. After
* calling this, no further reading from the stream can be performed.
*
* @return False if an error occured.
*/
void Close();
/** /**
* Force trigger an update of the input stream. The action that will * Force trigger an update of the input stream. The action that will
@ -151,13 +168,16 @@ public:
/** /**
* Returns the additional reader information into the constructor. * Returns the additional reader information into the constructor.
*/ */
const ReaderInfo& Info() const { return info; } const ReaderInfo& Info() const { return *info; }
/** /**
* Returns the number of log fields as passed into the constructor. * Returns the number of log fields as passed into the constructor.
*/ */
int NumFields() const { return num_fields; } int NumFields() const { return num_fields; }
// Overridden from MsgThread.
virtual bool OnHeartbeat(double network_time, double current_time);
virtual bool OnFinish(double network_time);
protected: protected:
// Methods that have to be overwritten by the individual readers // Methods that have to be overwritten by the individual readers
@ -180,7 +200,7 @@ protected:
* provides accessor methods to get them later, and they are passed * provides accessor methods to get them later, and they are passed
* in here only for convinience. * in here only for convinience.
*/ */
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields) = 0; virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields) = 0;
/** /**
* Reader-specific method implementing input finalization at * Reader-specific method implementing input finalization at
@ -210,9 +230,9 @@ protected:
virtual bool DoUpdate() = 0; virtual bool DoUpdate() = 0;
/** /**
* Returns the reader mode as passed into Init(). * Triggered by regular heartbeat messages from the main thread.
*/ */
const ReaderMode Mode() const { return mode; } virtual bool DoHeartbeat(double network_time, double current_time) = 0;
/** /**
* Method allowing a reader to send a specified Bro event. Vals must * Method allowing a reader to send a specified Bro event. Vals must
@ -224,7 +244,7 @@ protected:
* *
* @param vals the values to be given to the event * @param vals the values to be given to the event
*/ */
void SendEvent(const string& name, const int num_vals, threading::Value* *vals); void SendEvent(const char* name, const int num_vals, threading::Value* *vals);
// Content-sending-functions (simple mode). Include table-specific // Content-sending-functions (simple mode). Include table-specific
// functionality that simply is not used if we have no table. // functionality that simply is not used if we have no table.
@ -285,14 +305,6 @@ protected:
*/ */
void EndCurrentSend(); void EndCurrentSend();
/**
* Triggered by regular heartbeat messages from the main thread.
*
* This method can be overridden but once must call
* ReaderBackend::DoHeartbeat().
*/
virtual bool DoHeartbeat(double network_time, double current_time);
/** /**
* Convert a string into a TransportProto. This is just a utility * Convert a string into a TransportProto. This is just a utility
* function for Readers. * function for Readers.
@ -314,8 +326,7 @@ private:
// from this class, it's running in a different thread! // from this class, it's running in a different thread!
ReaderFrontend* frontend; ReaderFrontend* frontend;
ReaderInfo info; ReaderInfo* info;
ReaderMode mode;
unsigned int num_fields; unsigned int num_fields;
const threading::Field* const * fields; // raw mapping const threading::Field* const * fields; // raw mapping

@@ -11,19 +11,17 @@ namespace input {
 class InitMessage : public threading::InputMessage<ReaderBackend>
 {
 public:
-	InitMessage(ReaderBackend* backend, const ReaderBackend::ReaderInfo& info, ReaderMode mode,
+	InitMessage(ReaderBackend* backend,
 		    const int num_fields, const threading::Field* const* fields)
 		: threading::InputMessage<ReaderBackend>("Init", backend),
-		info(info), mode(mode), num_fields(num_fields), fields(fields) { }
+		num_fields(num_fields), fields(fields) { }

 	virtual bool Process()
 		{
-		return Object()->Init(info, mode, num_fields, fields);
+		return Object()->Init(num_fields, fields);
 		}

 private:
-	const ReaderBackend::ReaderInfo info;
-	const ReaderMode mode;
 	const int num_fields;
 	const threading::Field* const* fields;
 };
@@ -38,32 +36,26 @@ public:
 	virtual bool Process() { return Object()->Update(); }
 };

-class CloseMessage : public threading::InputMessage<ReaderBackend>
-{
-public:
-	CloseMessage(ReaderBackend* backend)
-		: threading::InputMessage<ReaderBackend>("Close", backend)
-	{ }
-
-	virtual bool Process() { Object()->Close(); return true; }
-};
-
-ReaderFrontend::ReaderFrontend(bro_int_t type)
+ReaderFrontend::ReaderFrontend(const ReaderBackend::ReaderInfo& arg_info, EnumVal* type)
 	{
 	disabled = initialized = false;
-	ty_name = "<not set>";
-	backend = input_mgr->CreateBackend(this, type);
+	info = new ReaderBackend::ReaderInfo(arg_info);
+
+	const char* t = type->Type()->AsEnumType()->Lookup(type->InternalInt());
+	name = copy_string(fmt("%s/%s", arg_info.source, t));
+
+	backend = input_mgr->CreateBackend(this, type->InternalInt());

 	assert(backend);
 	backend->Start();
 	}

 ReaderFrontend::~ReaderFrontend()
 	{
+	delete [] name;
+	delete info;
 	}

-void ReaderFrontend::Init(const ReaderBackend::ReaderInfo& arg_info, ReaderMode mode, const int arg_num_fields,
+void ReaderFrontend::Init(const int arg_num_fields,
 			  const threading::Field* const* arg_fields)
 	{
 	if ( disabled )
@@ -72,12 +64,11 @@ void ReaderFrontend::Init(const ReaderBackend::ReaderInfo& arg_info, ReaderMode
 	if ( initialized )
 		reporter->InternalError("reader initialize twice");

-	info = arg_info;
 	num_fields = arg_num_fields;
 	fields = arg_fields;
 	initialized = true;

-	backend->SendIn(new InitMessage(backend, info, mode, num_fields, fields));
+	backend->SendIn(new InitMessage(backend, num_fields, fields));
 	}

 void ReaderFrontend::Update()
@@ -94,27 +85,9 @@ void ReaderFrontend::Update()
 	backend->SendIn(new UpdateMessage(backend));
 	}

-void ReaderFrontend::Close()
+const char* ReaderFrontend::Name() const
 	{
-	if ( disabled )
-		return;
-
-	if ( ! initialized )
-		{
-		reporter->Error("Tried to call finish on uninitialized reader");
-		return;
-		}
-
-	disabled = true;
-	backend->SendIn(new CloseMessage(backend));
-	}
-
-string ReaderFrontend::Name() const
-	{
-	if ( info.source.size() )
-		return ty_name;
-
-	return ty_name + "/" + info.source;
+	return name;
 	}

 }

@ -4,10 +4,11 @@
#define INPUT_READERFRONTEND_H #define INPUT_READERFRONTEND_H
#include "ReaderBackend.h" #include "ReaderBackend.h"
#include "threading/MsgThread.h" #include "threading/MsgThread.h"
#include "threading/SerialTypes.h" #include "threading/SerialTypes.h"
#include "Val.h"
namespace input { namespace input {
class Manager; class Manager;
@ -25,6 +26,8 @@ public:
/** /**
* Constructor. * Constructor.
* *
* info: The meta information struct for the writer.
*
* type: The backend writer type, with the value corresponding to the * type: The backend writer type, with the value corresponding to the
* script-level \c Input::Reader enum (e.g., \a READER_ASCII). The * script-level \c Input::Reader enum (e.g., \a READER_ASCII). The
* frontend will internally instantiate a ReaderBackend of the * frontend will internally instantiate a ReaderBackend of the
@ -32,7 +35,7 @@ public:
* *
* Frontends must only be instantiated by the main thread. * Frontends must only be instantiated by the main thread.
*/ */
ReaderFrontend(bro_int_t type); ReaderFrontend(const ReaderBackend::ReaderInfo& info, EnumVal* type);
/** /**
* Destructor. * Destructor.
@ -52,7 +55,7 @@ public:
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*/ */
void Init(const ReaderBackend::ReaderInfo& info, ReaderMode mode, const int arg_num_fields, const threading::Field* const* fields); void Init(const int arg_num_fields, const threading::Field* const* fields);
/** /**
* Force an update of the current input source. Actual action depends * Force an update of the current input source. Actual action depends
@ -100,12 +103,12 @@ public:
* *
* This method is safe to call from any thread. * This method is safe to call from any thread.
*/ */
string Name() const; const char* Name() const;
/** /**
* Returns the additional reader information into the constructor. * Returns the additional reader information passed into the constructor.
*/ */
const ReaderBackend::ReaderInfo& Info() const { return info; } const ReaderBackend::ReaderInfo& Info() const { assert(info); return *info; }
/** /**
* Returns the number of log fields as passed into the constructor. * Returns the number of log fields as passed into the constructor.
@ -120,19 +123,14 @@ public:
protected: protected:
friend class Manager; friend class Manager;
/**
* Returns the name of the backend's type.
*/
const string& TypeName() const { return ty_name; }
private: private:
ReaderBackend* backend; // The backend we have instanatiated. ReaderBackend* backend; // The backend we have instanatiated.
ReaderBackend::ReaderInfo info; // Meta information as passed to Init(). ReaderBackend::ReaderInfo* info; // Meta information.
const threading::Field* const* fields; // The log fields. const threading::Field* const* fields; // The input fields.
int num_fields; // Information as passed to init(); int num_fields; // Information as passed to Init().
string ty_name; // Backend type, set by manager.
bool disabled; // True if disabled. bool disabled; // True if disabled.
bool initialized; // True if initialized. bool initialized; // True if initialized.
const char* name; // Descriptive name.
}; };
} }

@ -83,14 +83,14 @@ void Ascii::DoClose()
} }
} }
bool Ascii::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const Field* const* fields) bool Ascii::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields)
{ {
mtime = 0; mtime = 0;
file = new ifstream(info.source.c_str()); file = new ifstream(info.source);
if ( ! file->is_open() ) if ( ! file->is_open() )
{ {
Error(Fmt("Init: cannot open %s", info.source.c_str())); Error(Fmt("Init: cannot open %s", info.source));
delete(file); delete(file);
file = 0; file = 0;
return false; return false;
@ -98,7 +98,7 @@ bool Ascii::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, cons
if ( ReadHeader(false) == false ) if ( ReadHeader(false) == false )
{ {
Error(Fmt("Init: cannot open %s; headers are incorrect", info.source.c_str())); Error(Fmt("Init: cannot open %s; headers are incorrect", info.source));
file->close(); file->close();
delete(file); delete(file);
file = 0; file = 0;
@ -144,7 +144,7 @@ bool Ascii::ReadHeader(bool useCached)
pos++; pos++;
} }
//printf("Updating fields from description %s\n", line.c_str()); // printf("Updating fields from description %s\n", line.c_str());
columnMap.clear(); columnMap.clear();
for ( int i = 0; i < NumFields(); i++ ) for ( int i = 0; i < NumFields(); i++ )
@ -164,20 +164,20 @@ bool Ascii::ReadHeader(bool useCached)
} }
Error(Fmt("Did not find requested field %s in input data file %s.", Error(Fmt("Did not find requested field %s in input data file %s.",
field->name.c_str(), Info().source.c_str())); field->name, Info().source));
return false; return false;
} }
FieldMapping f(field->name, field->type, field->subtype, ifields[field->name]); FieldMapping f(field->name, field->type, field->subtype, ifields[field->name]);
if ( field->secondary_name != "" ) if ( field->secondary_name && strlen(field->secondary_name) != 0 )
{ {
map<string, uint32_t>::iterator fit2 = ifields.find(field->secondary_name); map<string, uint32_t>::iterator fit2 = ifields.find(field->secondary_name);
if ( fit2 == ifields.end() ) if ( fit2 == ifields.end() )
{ {
Error(Fmt("Could not find requested port type field %s in input data file.", Error(Fmt("Could not find requested port type field %s in input data file.",
field->secondary_name.c_str())); field->secondary_name));
return false; return false;
} }
@ -199,7 +199,7 @@ bool Ascii::GetLine(string& str)
if ( str[0] != '#' ) if ( str[0] != '#' )
return true; return true;
if ( str.compare(0,8, "#fields\t") == 0 ) if ( ( str.length() > 8 ) && ( str.compare(0,7, "#fields") == 0 ) && ( str[7] == separator[0] ) )
{ {
str = str.substr(8); str = str.substr(8);
return true; return true;
@ -220,7 +220,8 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
switch ( field.type ) { switch ( field.type ) {
case TYPE_ENUM: case TYPE_ENUM:
case TYPE_STRING: case TYPE_STRING:
val->val.string_val = new string(s); val->val.string_val.length = s.size();
val->val.string_val.data = copy_string(s.c_str());
break; break;
case TYPE_BOOL: case TYPE_BOOL:
@ -232,7 +233,7 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
{ {
Error(Fmt("Field: %s Invalid value for boolean: %s", Error(Fmt("Field: %s Invalid value for boolean: %s",
field.name.c_str(), s.c_str())); field.name.c_str(), s.c_str()));
return false; return 0;
} }
break; break;
@ -262,7 +263,7 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
if ( pos == s.npos ) if ( pos == s.npos )
{ {
Error(Fmt("Invalid value for subnet: %s", s.c_str())); Error(Fmt("Invalid value for subnet: %s", s.c_str()));
return false; return 0;
} }
int width = atoi(s.substr(pos+1).c_str()); int width = atoi(s.substr(pos+1).c_str());
@ -362,14 +363,14 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
// read the entire file and send appropriate thingies back to InputMgr // read the entire file and send appropriate thingies back to InputMgr
bool Ascii::DoUpdate() bool Ascii::DoUpdate()
{ {
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_REREAD: case MODE_REREAD:
{ {
// check if the file has changed // check if the file has changed
struct stat sb; struct stat sb;
if ( stat(Info().source.c_str(), &sb) == -1 ) if ( stat(Info().source, &sb) == -1 )
{ {
Error(Fmt("Could not get stat for %s", Info().source.c_str())); Error(Fmt("Could not get stat for %s", Info().source));
return false; return false;
} }
@ -389,7 +390,7 @@ bool Ascii::DoUpdate()
// - this is not that bad) // - this is not that bad)
if ( file && file->is_open() ) if ( file && file->is_open() )
{ {
if ( Mode() == MODE_STREAM ) if ( Info().mode == MODE_STREAM )
{ {
file->clear(); // remove end of file evil bits file->clear(); // remove end of file evil bits
if ( !ReadHeader(true) ) if ( !ReadHeader(true) )
@ -403,10 +404,10 @@ bool Ascii::DoUpdate()
file = 0; file = 0;
} }
file = new ifstream(Info().source.c_str()); file = new ifstream(Info().source);
if ( ! file->is_open() ) if ( ! file->is_open() )
{ {
Error(Fmt("cannot open %s", Info().source.c_str())); Error(Fmt("cannot open %s", Info().source));
return false; return false;
} }
@ -437,6 +438,8 @@ bool Ascii::DoUpdate()
if ( ! getline(splitstream, s, separator[0]) ) if ( ! getline(splitstream, s, separator[0]) )
break; break;
s = get_unescaped_string(s);
stringfields[pos] = s; stringfields[pos] = s;
pos++; pos++;
} }
@ -492,13 +495,13 @@ bool Ascii::DoUpdate()
//printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields); //printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields);
assert ( fpos == NumFields() ); assert ( fpos == NumFields() );
if ( Mode() == MODE_STREAM ) if ( Info().mode == MODE_STREAM )
Put(fields); Put(fields);
else else
SendEntry(fields); SendEntry(fields);
} }
if ( Mode () != MODE_STREAM ) if ( Info().mode != MODE_STREAM )
EndCurrentSend(); EndCurrentSend();
return true; return true;
@ -506,9 +509,7 @@ bool Ascii::DoUpdate()
bool Ascii::DoHeartbeat(double network_time, double current_time) bool Ascii::DoHeartbeat(double network_time, double current_time)
{ {
ReaderBackend::DoHeartbeat(network_time, current_time); switch ( Info().mode ) {
switch ( Mode() ) {
case MODE_MANUAL: case MODE_MANUAL:
// yay, we do nothing :) // yay, we do nothing :)
break; break;
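The Ascii reader changes above replace the heap-allocated std::string inside threading::Value with an explicit pointer-plus-length pair, copied out of the parsed std::string. A hedged sketch of that conversion (the struct below only mimics the shape of the new string_val field; it is not the real threading::Value):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Mimics the new string_val layout: an owned char buffer plus an
// explicit length, with no embedded std::string object.
struct StringValSketch {
	const char* data;
	int length;
};

static StringValSketch ToStringVal(const std::string& s)
	{
	char* buf = new char[s.size() + 1];
	memcpy(buf, s.c_str(), s.size() + 1); // keep the trailing NUL
	StringValSketch v;
	v.data = buf;
	v.length = static_cast<int>(s.size());
	return v;
	}
```

Carrying the length explicitly means values containing embedded NULs survive the thread hand-off, which a plain c_str() round-trip would truncate.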

@ -38,7 +38,7 @@ public:
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); } static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); }
protected: protected:
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields); virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields);
virtual void DoClose(); virtual void DoClose();
virtual bool DoUpdate(); virtual bool DoUpdate();
virtual bool DoHeartbeat(double network_time, double current_time); virtual bool DoHeartbeat(double network_time, double current_time);

@ -36,9 +36,9 @@ void Benchmark::DoClose()
{ {
} }
bool Benchmark::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const Field* const* fields) bool Benchmark::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields)
{ {
num_lines = atoi(info.source.c_str()); num_lines = atoi(info.source);
if ( autospread != 0.0 ) if ( autospread != 0.0 )
autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) ); autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) );
@ -59,7 +59,7 @@ string Benchmark::RandomString(const int len)
"abcdefghijklmnopqrstuvwxyz"; "abcdefghijklmnopqrstuvwxyz";
for (int i = 0; i < len; ++i) for (int i = 0; i < len; ++i)
s[i] = values[rand() / (RAND_MAX / sizeof(values))]; s[i] = values[random() / (RAND_MAX / sizeof(values))];
return s; return s;
} }
@ -83,7 +83,7 @@ bool Benchmark::DoUpdate()
for (int j = 0; j < NumFields(); j++ ) for (int j = 0; j < NumFields(); j++ )
field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype); field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype);
if ( Mode() == MODE_STREAM ) if ( Info().mode == MODE_STREAM )
// do not do tracking, spread out elements over the second that we have... // do not do tracking, spread out elements over the second that we have...
Put(field); Put(field);
else else
@ -109,7 +109,7 @@ bool Benchmark::DoUpdate()
} }
if ( Mode() != MODE_STREAM ) if ( Info().mode != MODE_STREAM )
EndCurrentSend(); EndCurrentSend();
return true; return true;
@ -126,15 +126,19 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
assert(false); // no enums, please. assert(false); // no enums, please.
case TYPE_STRING: case TYPE_STRING:
val->val.string_val = new string(RandomString(10)); {
string rnd = RandomString(10);
val->val.string_val.data = copy_string(rnd.c_str());
val->val.string_val.length = rnd.size();
break; break;
}
case TYPE_BOOL: case TYPE_BOOL:
val->val.int_val = 1; // we never lie. val->val.int_val = 1; // we never lie.
break; break;
case TYPE_INT: case TYPE_INT:
val->val.int_val = rand(); val->val.int_val = random();
break; break;
case TYPE_TIME: case TYPE_TIME:
@ -148,11 +152,11 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
case TYPE_COUNT: case TYPE_COUNT:
case TYPE_COUNTER: case TYPE_COUNTER:
val->val.uint_val = rand(); val->val.uint_val = random();
break; break;
case TYPE_PORT: case TYPE_PORT:
val->val.port_val.port = rand() / (RAND_MAX / 60000); val->val.port_val.port = random() / (RAND_MAX / 60000);
val->val.port_val.proto = TRANSPORT_UNKNOWN; val->val.port_val.proto = TRANSPORT_UNKNOWN;
break; break;
@ -175,7 +179,7 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
// Then - common stuff // Then - common stuff
{ {
// how many entries do we have... // how many entries do we have...
unsigned int length = rand() / (RAND_MAX / 15); unsigned int length = random() / (RAND_MAX / 15);
Value** lvals = new Value* [length]; Value** lvals = new Value* [length];
@ -222,12 +226,11 @@ threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype)
bool Benchmark::DoHeartbeat(double network_time, double current_time) bool Benchmark::DoHeartbeat(double network_time, double current_time)
{ {
ReaderBackend::DoHeartbeat(network_time, current_time);
num_lines = (int) ( (double) num_lines*multiplication_factor); num_lines = (int) ( (double) num_lines*multiplication_factor);
num_lines += add; num_lines += add;
heartbeatstarttime = CurrTime(); heartbeatstarttime = CurrTime();
switch ( Mode() ) { switch ( Info().mode ) {
case MODE_MANUAL: case MODE_MANUAL:
// yay, we do nothing :) // yay, we do nothing :)
break; break;

@ -18,7 +18,7 @@ public:
static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); } static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); }
protected: protected:
virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields); virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields);
virtual void DoClose(); virtual void DoClose();
virtual bool DoUpdate(); virtual bool DoUpdate();
virtual bool DoHeartbeat(double network_time, double current_time); virtual bool DoHeartbeat(double network_time, double current_time);

@ -66,7 +66,7 @@ bool Raw::OpenInput()
// This is defined in input/fdstream.h // This is defined in input/fdstream.h
in = new boost::fdistream(fileno(file)); in = new boost::fdistream(fileno(file));
if ( execute && Mode() == MODE_STREAM ) if ( execute && Info().mode == MODE_STREAM )
fcntl(fileno(file), F_SETFL, O_NONBLOCK); fcntl(fileno(file), F_SETFL, O_NONBLOCK);
return true; return true;
@ -100,7 +100,7 @@ bool Raw::CloseInput()
return true; return true;
} }
bool Raw::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const Field* const* fields) bool Raw::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields)
{ {
 	fname = info.source;
 	mtime = 0;
@@ -108,7 +108,7 @@ bool Raw::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const
 	firstrun = true;
 	bool result;
-	if ( info.source.length() == 0 )
+	if ( ! info.source || strlen(info.source) == 0 )
 		{
 		Error("No source path provided");
 		return false;
@@ -129,16 +129,17 @@ bool Raw::DoInit(const ReaderInfo& info, ReaderMode mode, int num_fields, const
 		}
 	// do Initialization
-	char last = info.source[info.source.length()-1];
+	string source = string(info.source);
+	char last = info.source[source.length() - 1];
 	if ( last == '|' )
 		{
 		execute = true;
-		fname = info.source.substr(0, fname.length() - 1);
-		if ( (mode != MODE_MANUAL) )
+		fname = source.substr(0, fname.length() - 1);
+		if ( (info.mode != MODE_MANUAL) )
 			{
 			Error(Fmt("Unsupported read mode %d for source %s in execution mode",
-				  mode, fname.c_str()));
+				  info.mode, fname.c_str()));
 			return false;
 			}
@@ -187,7 +188,7 @@ bool Raw::DoUpdate()
 	else
 		{
-		switch ( Mode() ) {
+		switch ( Info().mode ) {
 		case MODE_REREAD:
 			{
 			// check if the file has changed
@@ -210,7 +211,7 @@ bool Raw::DoUpdate()
 		case MODE_MANUAL:
 		case MODE_STREAM:
-			if ( Mode() == MODE_STREAM && file != NULL && in != NULL )
+			if ( Info().mode == MODE_STREAM && file != NULL && in != NULL )
 				{
 				//fpurge(file);
 				in->clear(); // remove end of file evil bits
@@ -237,7 +238,8 @@ bool Raw::DoUpdate()
 		// filter has exactly one text field. convert to it.
 		Value* val = new Value(TYPE_STRING, true);
-		val->val.string_val = new string(line);
+		val->val.string_val.data = copy_string(line.c_str());
+		val->val.string_val.length = line.size();
 		fields[0] = val;
 		Put(fields);
@@ -252,9 +254,7 @@
 bool Raw::DoHeartbeat(double network_time, double current_time)
 	{
-	ReaderBackend::DoHeartbeat(network_time, current_time);
-	switch ( Mode() ) {
+	switch ( Info().mode ) {
 	case MODE_MANUAL:
 		// yay, we do nothing :)
 		break;


@@ -22,7 +22,7 @@ public:
 	static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Raw(frontend); }
 
 protected:
-	virtual bool DoInit(const ReaderInfo& info, ReaderMode mode, int arg_num_fields, const threading::Field* const* fields);
+	virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields);
 	virtual void DoClose();
 	virtual bool DoUpdate();
 	virtual bool DoHeartbeat(double network_time, double current_time);


@@ -65,8 +65,8 @@ function Log::__flush%(id: Log::ID%): bool
 module LogAscii;
 
 const output_to_stdout: bool;
-const include_header: bool;
-const header_prefix: string;
+const include_meta: bool;
+const meta_prefix: string;
 const separator: string;
 const set_separator: string;
 const empty_field: string;
@@ -82,10 +82,26 @@ const dump_schema: bool;
 const use_integer_for_time: bool;
 const num_threads: count;
 
+# Options for the SQLite writer
 module LogSQLite;
 
 const set_separator: string;
 
+# Options for the ElasticSearch writer.
+module LogElasticSearch;
+
+const cluster_name: string;
+const server_host: string;
+const server_port: count;
+const index_prefix: string;
+const type_prefix: string;
+const transfer_timeout: interval;
+const max_batch_size: count;
+const max_batch_interval: interval;
+const max_byte_size: count;
+
 # Options for the None writer.
 module LogNone;


@@ -6,6 +6,7 @@
 #include "../EventHandler.h"
 #include "../NetVar.h"
 #include "../Net.h"
+#include "../Type.h"
 
 #include "threading/Manager.h"
 #include "threading/SerialTypes.h"
@@ -17,6 +18,10 @@
 #include "writers/Ascii.h"
 #include "writers/None.h"
 
+#ifdef USE_ELASTICSEARCH
+#include "writers/ElasticSearch.h"
+#endif
+
 #ifdef USE_DATASERIES
 #include "writers/DataSeries.h"
 #endif
@@ -41,6 +46,11 @@ struct WriterDefinition {
 WriterDefinition log_writers[] = {
 	{ BifEnum::Log::WRITER_NONE, "None", 0, writer::None::Instantiate },
 	{ BifEnum::Log::WRITER_ASCII, "Ascii", 0, writer::Ascii::Instantiate },
+
+#ifdef USE_ELASTICSEARCH
+	{ BifEnum::Log::WRITER_ELASTICSEARCH, "ElasticSearch", 0, writer::ElasticSearch::Instantiate },
+#endif
+
 #ifdef USE_DATASERIES
 	{ BifEnum::Log::WRITER_DATASERIES, "DataSeries", 0, writer::DataSeries::Instantiate },
 #endif
@@ -84,7 +94,8 @@ struct Manager::WriterInfo {
 	double interval;
 	Func* postprocessor;
 	WriterFrontend* writer;
-	WriterBackend::WriterInfo info;
+	WriterBackend::WriterInfo* info;
+	string instantiating_filter;
 	};
 
 struct Manager::Stream {
@@ -127,6 +138,7 @@ Manager::Stream::~Stream()
 		Unref(winfo->type);
 		delete winfo->writer;
+		delete winfo->info;
 		delete winfo;
 		}
@@ -205,7 +217,6 @@ WriterBackend* Manager::CreateBackend(WriterFrontend* frontend, bro_int_t type)
 	WriterBackend* backend = (*ld->factory)(frontend);
 	assert(backend);
 
-	frontend->ty_name = ld->name;
 	return backend;
 	}
@@ -485,18 +496,17 @@ bool Manager::TraverseRecord(Stream* stream, Filter* filter, RecordType* rt,
 			return false;
 			}
 
-		threading::Field* field = new threading::Field();
-		field->name = new_path;
-		field->type = t->Tag();
-		field->optional = rt->FieldDecl(i)->FindAttr(ATTR_OPTIONAL);
+		TypeTag st = TYPE_VOID;
 
-		if ( field->type == TYPE_TABLE )
-			field->subtype = t->AsSetType()->Indices()->PureType()->Tag();
+		if ( t->Tag() == TYPE_TABLE )
+			st = t->AsSetType()->Indices()->PureType()->Tag();
 
-		else if ( field->type == TYPE_VECTOR )
-			field->subtype = t->AsVectorType()->YieldType()->Tag();
+		else if ( t->Tag() == TYPE_VECTOR )
+			st = t->AsVectorType()->YieldType()->Tag();
 
-		filter->fields[filter->num_fields - 1] = field;
+		bool optional = rt->FieldDecl(i)->FindAttr(ATTR_OPTIONAL);
+		filter->fields[filter->num_fields - 1] = new threading::Field(new_path.c_str(), 0, t->Tag(), st, optional);
 		}
 
 	return true;
@@ -603,7 +613,7 @@ bool Manager::AddFilter(EnumVal* id, RecordVal* fval)
 		{
 		threading::Field* field = filter->fields[i];
 		DBG_LOG(DBG_LOGGING, "   field %10s: %s",
-			field->name.c_str(), type_name(field->type));
+			field->name, type_name(field->type));
 		}
 #endif
@@ -764,8 +774,18 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
 		WriterFrontend* writer = 0;
 
 		if ( w != stream->writers.end() )
+			{
+			if ( w->second->instantiating_filter != filter->name )
+				{
+				reporter->Warning("Skipping write to filter '%s' on path '%s'"
+				                  " because filter '%s' has already instantiated the same"
+				                  " writer type for that path", filter->name.c_str(),
+				                  filter->path.c_str(), w->second->instantiating_filter.c_str());
+				continue;
+				}
+
 			// We know this writer already.
 			writer = w->second->writer;
+			}
 
 		else
 			{
@@ -778,8 +798,9 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
 			for ( int j = 0; j < filter->num_fields; ++j )
 				arg_fields[j] = new threading::Field(*filter->fields[j]);
 
-			WriterBackend::WriterInfo info;
-			info.path = path;
+			WriterBackend::WriterInfo* info = new WriterBackend::WriterInfo;
+			info->path = copy_string(path.c_str());
+			info->network_time = network_time;
 
 			HashKey* k;
 			IterCookie* c = filter->config->AsTable()->InitForIteration();
@@ -790,7 +811,7 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
 				ListVal* index = filter->config->RecoverIndex(k);
 				string key = index->Index(0)->AsString()->CheckString();
 				string value = v->Value()->AsString()->CheckString();
-				info.config.insert(std::make_pair(key, value));
+				info->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str())));
 				Unref(index);
 				delete k;
 				}
@@ -799,7 +820,7 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
 			writer = CreateWriter(stream->id, filter->writer,
 					      info, filter->num_fields,
-					      arg_fields, filter->local, filter->remote);
+					      arg_fields, filter->local, filter->remote, filter->name);
 
 			if ( ! writer )
 				{
@@ -852,11 +873,16 @@ threading::Value* Manager::ValToLogVal(Val* val, BroType* ty)
 			val->Type()->AsEnumType()->Lookup(val->InternalInt());
 
 		if ( s )
-			lval->val.string_val = new string(s);
+			{
+			lval->val.string_val.data = copy_string(s);
+			lval->val.string_val.length = strlen(s);
+			}
 
 		else
 			{
 			val->Type()->Error("enum type does not contain value", val);
-			lval->val.string_val = new string();
+			lval->val.string_val.data = copy_string("");
+			lval->val.string_val.length = 0;
 			}
 		break;
 		}
@@ -888,15 +914,20 @@ threading::Value* Manager::ValToLogVal(Val* val, BroType* ty)
 	case TYPE_STRING:
 		{
 		const BroString* s = val->AsString();
-		lval->val.string_val =
-			new string((const char*) s->Bytes(), s->Len());
+		char* buf = new char[s->Len()];
+		memcpy(buf, s->Bytes(), s->Len());
+		lval->val.string_val.data = buf;
+		lval->val.string_val.length = s->Len();
 		break;
 		}
 
 	case TYPE_FILE:
 		{
 		const BroFile* f = val->AsFile();
-		lval->val.string_val = new string(f->Name());
+		string s = f->Name();
+		lval->val.string_val.data = copy_string(s.c_str());
+		lval->val.string_val.length = s.size();
 		break;
 		}
@@ -905,7 +936,9 @@ threading::Value* Manager::ValToLogVal(Val* val, BroType* ty)
 		ODesc d;
 		const Func* f = val->AsFunc();
 		f->Describe(&d);
-		lval->val.string_val = new string(d.Description());
+		const char* s = d.Description();
+		lval->val.string_val.data = copy_string(s);
+		lval->val.string_val.length = strlen(s);
 		break;
 		}
@@ -985,34 +1018,33 @@ threading::Value** Manager::RecordToFilterVals(Stream* stream, Filter* filter,
 	return vals;
 	}
 
-WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, const WriterBackend::WriterInfo& info,
-				      int num_fields, const threading::Field* const* fields, bool local, bool remote)
+WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, WriterBackend::WriterInfo* info,
+				      int num_fields, const threading::Field* const* fields, bool local, bool remote,
+				      const string& instantiating_filter)
 	{
 	Stream* stream = FindStream(id);
 
 	if ( ! stream )
 		// Don't know this stream.
-		return false;
+		return 0;
 
 	Stream::WriterMap::iterator w =
-		stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), info.path));
+		stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), info->path));
 
 	if ( w != stream->writers.end() )
 		// If we already have a writer for this. That's fine, we just
 		// return it.
 		return w->second->writer;
 
-	WriterFrontend* writer_obj = new WriterFrontend(id, writer, local, remote);
-	assert(writer_obj);
-
 	WriterInfo* winfo = new WriterInfo;
 	winfo->type = writer->Ref()->AsEnumVal();
-	winfo->writer = writer_obj;
+	winfo->writer = 0;
 	winfo->open_time = network_time;
 	winfo->rotation_timer = 0;
 	winfo->interval = 0;
 	winfo->postprocessor = 0;
 	winfo->info = info;
+	winfo->instantiating_filter = instantiating_filter;
 
 	// Search for a corresponding filter for the writer/path pair and use its
 	// rotation settings. If no matching filter is found, fall back on
@@ -1024,7 +1056,7 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, const Writer
 		{
 		Filter* f = *it;
 		if ( f->writer->AsEnum() == writer->AsEnum() &&
-		     f->path == winfo->writer->info.path )
+		     f->path == info->path )
 			{
 			found_filter_match = true;
 			winfo->interval = f->interval;
@@ -1040,10 +1072,8 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, const Writer
 		winfo->interval = id->ID_Val()->AsInterval();
 		}
 
-	InstallRotationTimer(winfo);
-
 	stream->writers.insert(
-		Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), info.path),
+		Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), info->path),
 		winfo));
 
 	// Still need to set the WriterInfo's rotation parameters, which we
@@ -1051,12 +1081,15 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, const Writer
 	const char* base_time = log_rotate_base_time ?
 		log_rotate_base_time->AsString()->CheckString() : 0;
 
-	winfo->info.rotation_interval = winfo->interval;
-	winfo->info.rotation_base = parse_rotate_base_time(base_time);
+	winfo->info->rotation_interval = winfo->interval;
+	winfo->info->rotation_base = parse_rotate_base_time(base_time);
 
-	writer_obj->Init(winfo->info, num_fields, fields);
+	winfo->writer = new WriterFrontend(*winfo->info, id, writer, local, remote);
+	winfo->writer->Init(num_fields, fields);
 
-	return writer_obj;
+	InstallRotationTimer(winfo);
+
+	return winfo->writer;
 	}
 
 void Manager::DeleteVals(int num_fields, threading::Value** vals)
@@ -1134,7 +1167,7 @@ void Manager::SendAllWritersTo(RemoteSerializer::PeerID peer)
 			EnumVal writer_val(i->first.first, BifType::Enum::Log::Writer);
 			remote_serializer->SendLogCreateWriter(peer, (*s)->id,
 							       &writer_val,
-							       i->second->info,
+							       *i->second->info,
 							       writer->NumFields(),
 							       writer->Fields());
 			}
@@ -1167,7 +1200,7 @@ bool Manager::Flush(EnumVal* id)
 	for ( Stream::WriterMap::iterator i = stream->writers.begin();
 	      i != stream->writers.end(); i++ )
-		i->second->writer->Flush();
+		i->second->writer->Flush(network_time);
 
 	RemoveDisabledWriters(stream);
@@ -1270,14 +1303,14 @@ void Manager::InstallRotationTimer(WriterInfo* winfo)
 		timer_mgr->Add(winfo->rotation_timer);
 
 		DBG_LOG(DBG_LOGGING, "Scheduled rotation timer for %s to %.6f",
-			winfo->writer->Name().c_str(), winfo->rotation_timer->Time());
+			winfo->writer->Name(), winfo->rotation_timer->Time());
 		}
 	}
 
 void Manager::Rotate(WriterInfo* winfo)
 	{
 	DBG_LOG(DBG_LOGGING, "Rotating %s at %.6f",
-		winfo->writer->Name().c_str(), network_time);
+		winfo->writer->Name(), network_time);
 
 	// Build a temporary path for the writer to move the file to.
 	struct tm tm;
@@ -1288,15 +1321,14 @@ void Manager::Rotate(WriterInfo* winfo)
 	localtime_r(&teatime, &tm);
 	strftime(buf, sizeof(buf), date_fmt, &tm);
 
-	string tmp = string(fmt("%s-%s", winfo->writer->Info().path.c_str(), buf));
-
 	// Trigger the rotation.
+	const char* tmp = fmt("%s-%s", winfo->writer->Info().path, buf);
 	winfo->writer->Rotate(tmp, winfo->open_time, network_time, terminating);
 
 	++rotations_pending;
 	}
 
-bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string old_name,
+bool Manager::FinishedRotation(WriterFrontend* writer, const char* new_name, const char* old_name,
 			       double open, double close, bool terminating)
 	{
 	--rotations_pending;
@@ -1306,7 +1338,7 @@ bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string o
 		return true;
 
 	DBG_LOG(DBG_LOGGING, "Finished rotating %s at %.6f, new name %s",
-		writer->Name().c_str(), network_time, new_name.c_str());
+		writer->Name(), network_time, new_name);
 
 	WriterInfo* winfo = FindWriter(writer);
 	if ( ! winfo )
@@ -1315,8 +1347,8 @@ bool Manager::FinishedRotation(WriterFrontend* writer, string new_name, string o
 	// Create the RotationInfo record.
 	RecordVal* info = new RecordVal(BifType::Record::Log::RotationInfo);
 	info->Assign(0, winfo->type->Ref());
-	info->Assign(1, new StringVal(new_name.c_str()));
-	info->Assign(2, new StringVal(winfo->writer->Info().path.c_str()));
+	info->Assign(1, new StringVal(new_name));
+	info->Assign(2, new StringVal(winfo->writer->Info().path));
 	info->Assign(3, new Val(open, TYPE_TIME));
 	info->Assign(4, new Val(close, TYPE_TIME));
 	info->Assign(5, new Val(terminating, TYPE_BOOL));


@@ -162,10 +162,10 @@ protected:
 	//// Function also used by the RemoteSerializer.
 
-	// Takes ownership of fields.
-	WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, const WriterBackend::WriterInfo& info,
+	// Takes ownership of fields and info.
+	WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, WriterBackend::WriterInfo* info,
 				     int num_fields, const threading::Field* const* fields,
-				     bool local, bool remote);
+				     bool local, bool remote, const string& instantiating_filter="");
 
 	// Takes ownership of values..
 	bool Write(EnumVal* id, EnumVal* writer, string path,
@@ -175,7 +175,7 @@ protected:
 	void SendAllWritersTo(RemoteSerializer::PeerID peer);
 
 	// Signals that a file has been rotated.
-	bool FinishedRotation(WriterFrontend* writer, string new_name, string old_name,
+	bool FinishedRotation(WriterFrontend* writer, const char* new_name, const char* old_name,
 			      double open, double close, bool terminating);
 
 	// Deletes the values as passed into Write().


@@ -18,20 +18,26 @@ namespace logging {
 class RotationFinishedMessage : public threading::OutputMessage<WriterFrontend>
 {
 public:
-	RotationFinishedMessage(WriterFrontend* writer, string new_name, string old_name,
+	RotationFinishedMessage(WriterFrontend* writer, const char* new_name, const char* old_name,
 				double open, double close, bool terminating)
 		: threading::OutputMessage<WriterFrontend>("RotationFinished", writer),
-		new_name(new_name), old_name(old_name), open(open),
+		new_name(copy_string(new_name)), old_name(copy_string(old_name)), open(open),
 		close(close), terminating(terminating)	{ }
 
+	virtual ~RotationFinishedMessage()
+		{
+		delete [] new_name;
+		delete [] old_name;
+		}
+
 	virtual bool Process()
 		{
 		return log_mgr->FinishedRotation(Object(), new_name, old_name, open, close, terminating);
 		}
 
private:
-	string new_name;
-	string old_name;
+	const char* new_name;
+	const char* old_name;
 	double open;
 	double close;
 	bool terminating;
@@ -65,12 +71,17 @@ bool WriterBackend::WriterInfo::Read(SerializationFormat* fmt)
 	{
 	int size;
 
-	if ( ! (fmt->Read(&path, "path") &&
+	string tmp_path;
+
+	if ( ! (fmt->Read(&tmp_path, "path") &&
 		fmt->Read(&rotation_base, "rotation_base") &&
 		fmt->Read(&rotation_interval, "rotation_interval") &&
+		fmt->Read(&network_time, "network_time") &&
 		fmt->Read(&size, "config_size")) )
 		return false;
 
+	path = copy_string(tmp_path.c_str());
+
 	config.clear();
 
 	while ( size )
@@ -81,7 +92,7 @@ bool WriterBackend::WriterInfo::Read(SerializationFormat* fmt)
 		if ( ! (fmt->Read(&value, "config-value") && fmt->Read(&value, "config-key")) )
 			return false;
 
-		config.insert(std::make_pair(value, key));
+		config.insert(std::make_pair(copy_string(value.c_str()), copy_string(key.c_str())));
 		}
 
 	return true;
@@ -95,6 +106,7 @@ bool WriterBackend::WriterInfo::Write(SerializationFormat* fmt) const
 	if ( ! (fmt->Write(path, "path") &&
 		fmt->Write(rotation_base, "rotation_base") &&
 		fmt->Write(rotation_interval, "rotation_interval") &&
+		fmt->Write(network_time, "network_time") &&
 		fmt->Write(size, "config_size")) )
 		return false;
 
@@ -113,8 +125,7 @@ WriterBackend::WriterBackend(WriterFrontend* arg_frontend) : MsgThread()
 	fields = 0;
 	buffering = true;
 	frontend = arg_frontend;
-
-	info.path = "<path not yet set>";
+	info = new WriterInfo(frontend->Info());
 
 	SetName(frontend->Name());
 	}
@@ -128,6 +139,8 @@ WriterBackend::~WriterBackend()
 		delete [] fields;
 		}
 
+	delete info;
 	}
 
 void WriterBackend::DeleteVals(int num_writes, Value*** vals)
@@ -144,7 +157,7 @@ void WriterBackend::DeleteVals(int num_writes, Value*** vals)
 	delete [] vals;
 	}
 
-bool WriterBackend::FinishedRotation(string new_name, string old_name,
+bool WriterBackend::FinishedRotation(const char* new_name, const char* old_name,
 				     double open, double close, bool terminating)
 	{
 	SendOut(new RotationFinishedMessage(frontend, new_name, old_name, open, close, terminating));
@@ -156,17 +169,12 @@ void WriterBackend::DisableFrontend()
 	SendOut(new DisableMessage(frontend));
 	}
 
-bool WriterBackend::Init(const WriterInfo& arg_info, int arg_num_fields, const Field* const* arg_fields)
+bool WriterBackend::Init(int arg_num_fields, const Field* const* arg_fields)
 	{
-	info = arg_info;
 	num_fields = arg_num_fields;
 	fields = arg_fields;
 
-	string name = Fmt("%s/%s", info.path.c_str(), frontend->Name().c_str());
-	SetName(name);
-
-	if ( ! DoInit(arg_info, arg_num_fields, arg_fields) )
+	if ( ! DoInit(*info, arg_num_fields, arg_fields) )
 		{
 		DisableFrontend();
 		return false;
@@ -193,7 +201,6 @@ bool WriterBackend::Write(int arg_num_fields, int num_writes, Value*** vals)
 		return false;
 		}
 
-#ifdef DEBUG
 	// Double-check all the types match.
 	for ( int j = 0; j < num_writes; j++ )
 		{
@@ -201,17 +208,17 @@ bool WriterBackend::Write(int arg_num_fields, int num_writes, Value*** vals)
 			{
 			if ( vals[j][i]->type != fields[i]->type )
 				{
+#ifdef DEBUG
 				const char* msg = Fmt("Field type doesn't match in WriterBackend::Write() (%d vs. %d)",
 						      vals[j][i]->type, fields[i]->type);
 				Debug(DBG_LOGGING, msg);
+#endif
 				DisableFrontend();
 				DeleteVals(num_writes, vals);
 				return false;
 				}
 			}
 		}
-#endif
 
 	bool success = true;
@@ -248,7 +255,7 @@ bool WriterBackend::SetBuf(bool enabled)
 	return true;
 	}
 
-bool WriterBackend::Rotate(string rotated_path, double open,
+bool WriterBackend::Rotate(const char* rotated_path, double open,
 			   double close, bool terminating)
 	{
 	if ( ! DoRotate(rotated_path, open, close, terminating) )
@@ -260,9 +267,9 @@ bool WriterBackend::Rotate(string rotated_path, double open,
 	return true;
 	}
 
-bool WriterBackend::Flush()
+bool WriterBackend::Flush(double network_time)
 	{
-	if ( ! DoFlush() )
+	if ( ! DoFlush(network_time) )
 		{
 		DisableFrontend();
 		return false;
@@ -271,13 +278,15 @@ bool WriterBackend::Flush()
 	return true;
 	}
 
-bool WriterBackend::DoHeartbeat(double network_time, double current_time)
+bool WriterBackend::OnFinish(double network_time)
 	{
-	MsgThread::DoHeartbeat(network_time, current_time);
+	return DoFinish(network_time);
+	}
 
+bool WriterBackend::OnHeartbeat(double network_time, double current_time)
+	{
 	SendOut(new FlushWriteBufferMessage(frontend));
-
-	return DoHeartbeat(network_time, current_time);
+	return true;
 	}
 
 string WriterBackend::Render(const threading::Value::addr_t& addr) const


@ -48,14 +48,17 @@ public:
*/ */
struct WriterInfo struct WriterInfo
{ {
typedef std::map<string, string> config_map; // Structure takes ownership of these strings.
typedef std::map<const char*, const char*, CompareString> config_map;
/** /**
* A string left to the interpretation of the writer * A string left to the interpretation of the writer
* implementation; it corresponds to the value configured on * implementation; it corresponds to the 'path' value configured
* the script-level for the logging filter. * on the script-level for the logging filter.
*
* Structure takes ownership of string.
*/ */
string path; const char* path;
/** /**
* The rotation interval as configured for this writer. * The rotation interval as configured for this writer.
@ -67,13 +70,47 @@ public:
*/ */
double rotation_base; double rotation_base;
/**
* The network time when the writer is created.
*/
double network_time;
/** /**
* A map of key/value pairs corresponding to the relevant * A map of key/value pairs corresponding to the relevant
* filter's "config" table. * filter's "config" table.
*/ */
std::map<string, string> config; config_map config;
WriterInfo() : path(0), rotation_interval(0.0), rotation_base(0.0),
network_time(0.0)
{
}
WriterInfo(const WriterInfo& other)
{
path = other.path ? copy_string(other.path) : 0;
rotation_interval = other.rotation_interval;
rotation_base = other.rotation_base;
network_time = other.network_time;
for ( config_map::const_iterator i = other.config.begin(); i != other.config.end(); i++ )
config.insert(std::make_pair(copy_string(i->first), copy_string(i->second)));
}
~WriterInfo()
{
delete [] path;
for ( config_map::iterator i = config.begin(); i != config.end(); i++ )
{
delete [] i->first;
delete [] i->second;
}
}
private: private:
const WriterInfo& operator=(const WriterInfo& other); // Disable.
friend class ::RemoteSerializer; friend class ::RemoteSerializer;
// Note, these need to be adapted when changing the struct's // Note, these need to be adapted when changing the struct's
@ -85,15 +122,16 @@ public:
/** /**
* One-time initialization of the writer to define the logged fields. * One-time initialization of the writer to define the logged fields.
* *
* @param info Meta information for the writer.
* @param num_fields * @param num_fields
* *
* @param fields An array of size \a num_fields with the log fields. * @param fields An array of size \a num_fields with the log fields.
* The methods takes ownership of the array. * The methods takes ownership of the array.
* *
* @param frontend_name The name of the front-end writer implementation.
*
* @return False if an error occured. * @return False if an error occured.
*/ */
bool Init(const WriterInfo& info, int num_fields, const threading::Field* const* fields); bool Init(int num_fields, const threading::Field* const* fields);
/** /**
* Writes one log entry. * Writes one log entry.
@ -127,9 +165,11 @@ public:
* Flushes any currently buffered output, assuming the writer * Flushes any currently buffered output, assuming the writer
* supports that. (If not, it will be ignored). * supports that. (If not, it will be ignored).
* *
* @param network_time The network time when the flush was triggered.
*
* @return False if an error occured. * @return False if an error occured.
*/ */
bool Flush(); bool Flush(double network_time);
/** /**
* Triggers rotation, if the writer supports that. (If not, it will * Triggers rotation, if the writer supports that. (If not, it will
@ -137,7 +177,7 @@ public:
* *
* @return False if an error occured. * @return False if an error occured.
*/ */
bool Rotate(string rotated_path, double open, double close, bool terminating); bool Rotate(const char* rotated_path, double open, double close, bool terminating);
/** /**
* Disables the frontend that has instantiated this backend. Once * Disables the frontend that has instantiated this backend. Once
@ -146,9 +186,9 @@ public:
void DisableFrontend(); void DisableFrontend();
/** /**
* Returns the additional writer information into the constructor. * Returns the additional writer information passed into the constructor.
*/ */
const WriterInfo& Info() const { return info; } const WriterInfo& Info() const { return *info; }
/** /**
* Returns the number of log fields as passed into the constructor. * Returns the number of log fields as passed into the constructor.
@ -184,7 +224,7 @@ public:
* @param terminating: True if the original rotation request occured * @param terminating: True if the original rotation request occured
* due to the main Bro process shutting down. * due to the main Bro process shutting down.
*/ */
bool FinishedRotation(string new_name, string old_name, bool FinishedRotation(const char* new_name, const char* old_name,
double open, double close, bool terminating); double open, double close, bool terminating);
/** Helper method to render an IP address as a string. /** Helper method to render an IP address as a string.
@ -211,6 +251,10 @@ public:
*/ */
string Render(double d) const; string Render(double d) const;
// Overridden from MsgThread.
virtual bool OnHeartbeat(double network_time, double current_time);
virtual bool OnFinish(double network_time);
protected: protected:
friend class FinishMessage; friend class FinishMessage;
@ -270,8 +314,10 @@ protected:
* will then be disabled and eventually deleted. When returning * will then be disabled and eventually deleted. When returning
* false, an implementation should also call Error() to indicate what * false, an implementation should also call Error() to indicate what
* happened. * happened.
*
* @param network_time The network time when the flush was triggered.
*/ */
virtual bool DoFlush() = 0; virtual bool DoFlush(double network_time) = 0;
/** /**
* Writer-specific method implementing log rotation. Most directly * Writer-specific method implementing log rotation. Most directly
@ -307,25 +353,24 @@ protected:
* due to the main Bro process terminating (and not because we've * due to the main Bro process terminating (and not because we've
* reached a regularly scheduled time for rotation). * reached a regularly scheduled time for rotation).
*/ */
virtual bool DoRotate(string rotated_path, double open, double close, virtual bool DoRotate(const char* rotated_path, double open, double close,
bool terminating) = 0; bool terminating) = 0;
/** /**
* Writer-specific method called just before the threading system is * Writer-specific method called just before the threading system is
* going to shut down. * going to shut down. It is assumed that once this message returns,
* the thread can be safely terminated.
* *
* This method can be overridden but one must call * @param network_time The network time when the finish is triggered.
* WriterBackend::DoFinish().
*/ */
virtual bool DoFinish() { return MsgThread::DoFinish(); } virtual bool DoFinish(double network_time) = 0;
/** /**
* Triggered by regular heartbeat messages from the main thread. * Triggered by regular heartbeat messages from the main thread.
* *
* This method can be overridden but one must call * This method can be overridden. Default implementation does
* WriterBackend::DoHeartbeat(). * nothing.
*/ */
virtual bool DoHeartbeat(double network_time, double current_time); virtual bool DoHeartbeat(double network_time, double current_time) = 0;
private: private:
/** /**
@ -337,7 +382,7 @@ private:
// this class, it's running in a different thread! // this class, it's running in a different thread!
WriterFrontend* frontend; WriterFrontend* frontend;
WriterInfo info; // Meta information as passed to Init(). const WriterInfo* info; // Meta information.
int num_fields; // Number of log fields. int num_fields; // Number of log fields.
const threading::Field* const* fields; // Log fields. const threading::Field* const* fields; // Log fields.
bool buffering; // True if buffering is enabled. bool buffering; // True if buffering is enabled.
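With this change DoFlush(), DoFinish(), and DoHeartbeat() become pure virtual and carry timestamps, so every writer backend must now implement them explicitly. A minimal stand-in sketch of the new contract (StubBackend/NullWriter are illustrative names, not the real Bro classes):

```cpp
#include <cassert>

// Stand-in for the reworked backend interface: the three lifecycle hooks
// are pure virtual and take network/current time parameters.
class StubBackend {
public:
	virtual ~StubBackend() {}
	virtual bool DoFlush(double network_time) = 0;
	virtual bool DoFinish(double network_time) = 0;
	virtual bool DoHeartbeat(double network_time, double current_time) = 0;
};

// A minimal writer: nothing to flush, nothing to clean up.
class NullWriter : public StubBackend {
public:
	bool DoFlush(double) { return true; }
	bool DoFinish(double) { return true; }
	bool DoHeartbeat(double, double) { return true; }
};
```

A backend that forgets one of these no longer compiles, which is the point of dropping the previous default implementations.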
@ -16,14 +16,15 @@ namespace logging {
class InitMessage : public threading::InputMessage<WriterBackend> class InitMessage : public threading::InputMessage<WriterBackend>
{ {
public: public:
InitMessage(WriterBackend* backend, const WriterBackend::WriterInfo& info, const int num_fields, const Field* const* fields) InitMessage(WriterBackend* backend, const int num_fields, const Field* const* fields)
: threading::InputMessage<WriterBackend>("Init", backend), : threading::InputMessage<WriterBackend>("Init", backend),
info(info), num_fields(num_fields), fields(fields) { } num_fields(num_fields), fields(fields)
{}
virtual bool Process() { return Object()->Init(info, num_fields, fields); }
virtual bool Process() { return Object()->Init(num_fields, fields); }
private: private:
WriterBackend::WriterInfo info;
const int num_fields; const int num_fields;
const Field * const* fields; const Field * const* fields;
}; };
@ -31,18 +32,20 @@ private:
class RotateMessage : public threading::InputMessage<WriterBackend> class RotateMessage : public threading::InputMessage<WriterBackend>
{ {
public: public:
RotateMessage(WriterBackend* backend, WriterFrontend* frontend, const string rotated_path, const double open, RotateMessage(WriterBackend* backend, WriterFrontend* frontend, const char* rotated_path, const double open,
const double close, const bool terminating) const double close, const bool terminating)
: threading::InputMessage<WriterBackend>("Rotate", backend), : threading::InputMessage<WriterBackend>("Rotate", backend),
frontend(frontend), frontend(frontend),
rotated_path(rotated_path), open(open), rotated_path(copy_string(rotated_path)), open(open),
close(close), terminating(terminating) { } close(close), terminating(terminating) { }
virtual ~RotateMessage() { delete [] rotated_path; }
virtual bool Process() { return Object()->Rotate(rotated_path, open, close, terminating); } virtual bool Process() { return Object()->Rotate(rotated_path, open, close, terminating); }
private: private:
WriterFrontend* frontend; WriterFrontend* frontend;
const string rotated_path; const char* rotated_path;
const double open; const double open;
const double close; const double close;
const bool terminating; const bool terminating;
@ -79,19 +82,13 @@ private:
class FlushMessage : public threading::InputMessage<WriterBackend> class FlushMessage : public threading::InputMessage<WriterBackend>
{ {
public: public:
FlushMessage(WriterBackend* backend) FlushMessage(WriterBackend* backend, double network_time)
: threading::InputMessage<WriterBackend>("Flush", backend) {} : threading::InputMessage<WriterBackend>("Flush", backend),
network_time(network_time) {}
virtual bool Process() { return Object()->Flush(); } virtual bool Process() { return Object()->Flush(network_time); }
}; private:
double network_time;
class FinishMessage : public threading::InputMessage<WriterBackend>
{
public:
FinishMessage(WriterBackend* backend)
: threading::InputMessage<WriterBackend>("Finish", backend) {}
virtual bool Process() { return Object()->DoFinish(); }
}; };
} }
@ -100,7 +97,7 @@ public:
using namespace logging; using namespace logging;
WriterFrontend::WriterFrontend(EnumVal* arg_stream, EnumVal* arg_writer, bool arg_local, bool arg_remote) WriterFrontend::WriterFrontend(const WriterBackend::WriterInfo& arg_info, EnumVal* arg_stream, EnumVal* arg_writer, bool arg_local, bool arg_remote)
{ {
stream = arg_stream; stream = arg_stream;
writer = arg_writer; writer = arg_writer;
@ -113,7 +110,10 @@ WriterFrontend::WriterFrontend(EnumVal* arg_stream, EnumVal* arg_writer, bool ar
remote = arg_remote; remote = arg_remote;
write_buffer = 0; write_buffer = 0;
write_buffer_pos = 0; write_buffer_pos = 0;
ty_name = "<not set>"; info = new WriterBackend::WriterInfo(arg_info);
const char* w = arg_writer->Type()->AsEnumType()->Lookup(arg_writer->InternalInt());
name = copy_string(fmt("%s/%s", arg_info.path, w));
if ( local ) if ( local )
{ {
@ -131,26 +131,16 @@ WriterFrontend::~WriterFrontend()
{ {
Unref(stream); Unref(stream);
Unref(writer); Unref(writer);
} delete info;
string WriterFrontend::Name() const
{
if ( info.path.size() )
return ty_name;
return ty_name + "/" + info.path;
} }
void WriterFrontend::Stop() void WriterFrontend::Stop()
{ {
FlushWriteBuffer(); FlushWriteBuffer();
SetDisable(); SetDisable();
if ( backend )
backend->Stop();
} }
void WriterFrontend::Init(const WriterBackend::WriterInfo& arg_info, int arg_num_fields, const Field* const * arg_fields) void WriterFrontend::Init(int arg_num_fields, const Field* const * arg_fields)
{ {
if ( disabled ) if ( disabled )
return; return;
@ -158,19 +148,18 @@ void WriterFrontend::Init(const WriterBackend::WriterInfo& arg_info, int arg_num
if ( initialized ) if ( initialized )
reporter->InternalError("writer initialized twice"); reporter->InternalError("writer initialized twice");
info = arg_info;
num_fields = arg_num_fields; num_fields = arg_num_fields;
fields = arg_fields; fields = arg_fields;
initialized = true; initialized = true;
if ( backend ) if ( backend )
backend->SendIn(new InitMessage(backend, arg_info, arg_num_fields, arg_fields)); backend->SendIn(new InitMessage(backend, arg_num_fields, arg_fields));
if ( remote ) if ( remote )
remote_serializer->SendLogCreateWriter(stream, remote_serializer->SendLogCreateWriter(stream,
writer, writer,
arg_info, *info,
arg_num_fields, arg_num_fields,
arg_fields); arg_fields);
@ -184,7 +173,7 @@ void WriterFrontend::Write(int num_fields, Value** vals)
if ( remote ) if ( remote )
remote_serializer->SendLogWrite(stream, remote_serializer->SendLogWrite(stream,
writer, writer,
info.path, info->path,
num_fields, num_fields,
vals); vals);
@ -238,7 +227,7 @@ void WriterFrontend::SetBuf(bool enabled)
FlushWriteBuffer(); FlushWriteBuffer();
} }
void WriterFrontend::Flush() void WriterFrontend::Flush(double network_time)
{ {
if ( disabled ) if ( disabled )
return; return;
@ -246,10 +235,10 @@ void WriterFrontend::Flush()
FlushWriteBuffer(); FlushWriteBuffer();
if ( backend ) if ( backend )
backend->SendIn(new FlushMessage(backend)); backend->SendIn(new FlushMessage(backend, network_time));
} }
void WriterFrontend::Rotate(string rotated_path, double open, double close, bool terminating) void WriterFrontend::Rotate(const char* rotated_path, double open, double close, bool terminating)
{ {
if ( disabled ) if ( disabled )
return; return;
@ -264,17 +253,6 @@ void WriterFrontend::Rotate(string rotated_path, double open, double close, bool
log_mgr->FinishedRotation(0, "", rotated_path, open, close, terminating); log_mgr->FinishedRotation(0, "", rotated_path, open, close, terminating);
} }
void WriterFrontend::Finish()
{
if ( disabled )
return;
FlushWriteBuffer();
if ( backend )
backend->SendIn(new FinishMessage(backend));
}
void WriterFrontend::DeleteVals(Value** vals) void WriterFrontend::DeleteVals(Value** vals)
{ {
// Note this code is duplicated in Manager::DeleteVals(). // Note this code is duplicated in Manager::DeleteVals().
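The frontend now caches its descriptive name once in the constructor via copy_string(fmt("%s/%s", arg_info.path, w)) instead of assembling a string on every Name() call. A sketch of that formatting (make_writer_name is a hypothetical helper, shown with std::string to sidestep the ownership handled by copy_string in the real code):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Build the "path/WRITER" display name the frontend caches, e.g.
// the conn stream written by the ASCII backend becomes "conn/ASCII".
std::string make_writer_name(const char* path, const char* writer_type)
	{
	char buf[256];
	snprintf(buf, sizeof(buf), "%s/%s", path, writer_type);
	return buf;
	}
```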
@ -32,6 +32,10 @@ public:
* frontend will internally instantiate a WriterBackend of the * frontend will internally instantiate a WriterBackend of the
* corresponding type. * corresponding type.
* *
* info: The meta information struct for the writer.
*
* writer_name: A descriptive name for the writer's type.
*
* local: If true, the writer will instantiate a local backend. * local: If true, the writer will instantiate a local backend.
* *
* remote: If true, the writer will forward all data to remote * remote: If true, the writer will forward all data to remote
@ -39,7 +43,7 @@ public:
* *
* Frontends must only be instantiated by the main thread. * Frontends must only be instantiated by the main thread.
*/ */
WriterFrontend(EnumVal* stream, EnumVal* writer, bool local, bool remote); WriterFrontend(const WriterBackend::WriterInfo& info, EnumVal* stream, EnumVal* writer, bool local, bool remote);
/** /**
* Destructor. * Destructor.
@ -50,7 +54,7 @@ public:
/** /**
* Stops all output to this writer. Calling this method disables all * Stops all output to this writer. Calling this method disables all
* message forwarding to the backend and stops the backend thread. * message forwarding to the backend.
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*/ */
@ -68,7 +72,7 @@ public:
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*/ */
void Init(const WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const* fields); void Init(int num_fields, const threading::Field* const* fields);
/** /**
* Write out a record. * Write out a record.
@ -114,8 +118,10 @@ public:
* message back that will asynchronously call Disable(). * message back that will asynchronously call Disable().
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*
* @param network_time The network time when the flush was triggered.
*/ */
void Flush(); void Flush(double network_time);
/** /**
* Triggers log rotation. * Triggers log rotation.
@ -128,7 +134,7 @@ public:
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*/ */
void Rotate(string rotated_path, double open, double close, bool terminating); void Rotate(const char* rotated_path, double open, double close, bool terminating);
/** /**
* Finalizes writing to this stream. * Finalizes writing to this stream.
@ -138,8 +144,10 @@ public:
* sends a message back that will asynchronously call Disable(). * sends a message back that will asynchronously call Disable().
* *
* This method must only be called from the main thread. * This method must only be called from the main thread.
*
* @param network_time The network time when the finish was triggered.
*/ */
void Finish(); void Finish(double network_time);
/** /**
* Explicitly triggers a transfer of all potentially buffered Write() * Explicitly triggers a transfer of all potentially buffered Write()
@ -171,7 +179,7 @@ public:
/** /**
* Returns the additional writer information as passed into the constructor. * Returns the additional writer information as passed into the constructor.
*/ */
const WriterBackend::WriterInfo& Info() const { return info; } const WriterBackend::WriterInfo& Info() const { return *info; }
/** /**
* Returns the number of log fields as passed into the constructor. * Returns the number of log fields as passed into the constructor.
@ -184,7 +192,7 @@ public:
* *
* This method is safe to call from any thread. * This method is safe to call from any thread.
*/ */
string Name() const; const char* Name() const { return name; }
/** /**
* Returns the log fields as passed into the constructor. * Returns the log fields as passed into the constructor.
@ -206,8 +214,8 @@ protected:
bool local; // True if logging locally. bool local; // True if logging locally.
bool remote; // True if logging remotely. bool remote; // True if logging remotely.
string ty_name; // Name of the backend type. Set by the manager. const char* name; // Descriptive name of the writer.
WriterBackend::WriterInfo info; // The writer information. WriterBackend::WriterInfo* info; // The writer information.
int num_fields; // The number of log fields. int num_fields; // The number of log fields.
const threading::Field* const* fields; // The log fields. const threading::Field* const* fields; // The log fields.

@ -2,6 +2,8 @@
#include <string> #include <string>
#include <errno.h> #include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include "NetVar.h" #include "NetVar.h"
#include "threading/SerialTypes.h" #include "threading/SerialTypes.h"
@ -15,10 +17,11 @@ using threading::Field;
Ascii::Ascii(WriterFrontend* frontend) : WriterBackend(frontend) Ascii::Ascii(WriterFrontend* frontend) : WriterBackend(frontend)
{ {
file = 0; fd = 0;
ascii_done = false;
output_to_stdout = BifConst::LogAscii::output_to_stdout; output_to_stdout = BifConst::LogAscii::output_to_stdout;
include_header = BifConst::LogAscii::include_header; include_meta = BifConst::LogAscii::include_meta;
separator_len = BifConst::LogAscii::separator->Len(); separator_len = BifConst::LogAscii::separator->Len();
separator = new char[separator_len]; separator = new char[separator_len];
@ -40,10 +43,10 @@ Ascii::Ascii(WriterFrontend* frontend) : WriterBackend(frontend)
memcpy(unset_field, BifConst::LogAscii::unset_field->Bytes(), memcpy(unset_field, BifConst::LogAscii::unset_field->Bytes(),
unset_field_len); unset_field_len);
header_prefix_len = BifConst::LogAscii::header_prefix->Len(); meta_prefix_len = BifConst::LogAscii::meta_prefix->Len();
header_prefix = new char[header_prefix_len]; meta_prefix = new char[meta_prefix_len];
memcpy(header_prefix, BifConst::LogAscii::header_prefix->Bytes(), memcpy(meta_prefix, BifConst::LogAscii::meta_prefix->Bytes(),
header_prefix_len); meta_prefix_len);
desc.EnableEscaping(); desc.EnableEscaping();
desc.AddEscapeSequence(separator, separator_len); desc.AddEscapeSequence(separator, separator_len);
@ -51,26 +54,46 @@ Ascii::Ascii(WriterFrontend* frontend) : WriterBackend(frontend)
Ascii::~Ascii() Ascii::~Ascii()
{ {
if ( file ) if ( ! ascii_done )
fclose(file); {
fprintf(stderr, "internal error: finish missing\n");
abort();
}
delete [] separator; delete [] separator;
delete [] set_separator; delete [] set_separator;
delete [] empty_field; delete [] empty_field;
delete [] unset_field; delete [] unset_field;
delete [] header_prefix; delete [] meta_prefix;
} }
bool Ascii::WriteHeaderField(const string& key, const string& val) bool Ascii::WriteHeaderField(const string& key, const string& val)
{ {
string str = string(header_prefix, header_prefix_len) + string str = string(meta_prefix, meta_prefix_len) +
key + string(separator, separator_len) + val + "\n"; key + string(separator, separator_len) + val + "\n";
return (fwrite(str.c_str(), str.length(), 1, file) == 1); return safe_write(fd, str.c_str(), str.length());
}
void Ascii::CloseFile(double t)
{
if ( ! fd )
return;
if ( include_meta )
{
string ts = t ? Timestamp(t) : string("<abnormal termination>");
WriteHeaderField("end", ts);
}
close(fd);
fd = 0;
} }
bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const * fields) bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const * fields)
{ {
assert(! fd);
string path = info.path; string path = info.path;
if ( output_to_stdout ) if ( output_to_stdout )
@ -78,34 +101,39 @@ bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const *
fname = IsSpecial(path) ? path : path + "." + LogExt(); fname = IsSpecial(path) ? path : path + "." + LogExt();
if ( ! (file = fopen(fname.c_str(), "w")) ) fd = open(fname.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666);
if ( fd < 0 )
{ {
Error(Fmt("cannot open %s: %s", fname.c_str(), Error(Fmt("cannot open %s: %s", fname.c_str(),
strerror(errno))); Strerror(errno)));
fd = 0;
return false; return false;
} }
if ( include_header ) if ( include_meta )
{ {
string names; string names;
string types; string types;
string str = string(header_prefix, header_prefix_len) string str = string(meta_prefix, meta_prefix_len)
+ "separator " // Always use space as separator here. + "separator " // Always use space as separator here.
+ get_escaped_string(string(separator, separator_len), false) + get_escaped_string(string(separator, separator_len), false)
+ "\n"; + "\n";
if( fwrite(str.c_str(), str.length(), 1, file) != 1 ) if ( ! safe_write(fd, str.c_str(), str.length()) )
goto write_error; goto write_error;
string ts = Timestamp(info.network_time);
if ( ! (WriteHeaderField("set_separator", get_escaped_string( if ( ! (WriteHeaderField("set_separator", get_escaped_string(
string(set_separator, set_separator_len), false)) && string(set_separator, set_separator_len), false)) &&
WriteHeaderField("empty_field", get_escaped_string( WriteHeaderField("empty_field", get_escaped_string(
string(empty_field, empty_field_len), false)) && string(empty_field, empty_field_len), false)) &&
WriteHeaderField("unset_field", get_escaped_string( WriteHeaderField("unset_field", get_escaped_string(
string(unset_field, unset_field_len), false)) && string(unset_field, unset_field_len), false)) &&
WriteHeaderField("path", get_escaped_string(path, false))) ) WriteHeaderField("path", get_escaped_string(path, false)) &&
WriteHeaderField("start", ts)) )
goto write_error; goto write_error;
for ( int i = 0; i < num_fields; ++i ) for ( int i = 0; i < num_fields; ++i )
@ -116,8 +144,8 @@ bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const *
types += string(separator, separator_len); types += string(separator, separator_len);
} }
names += fields[i]->name; names += string(fields[i]->name);
types += fields[i]->TypeName(); types += fields[i]->TypeName().c_str();
} }
if ( ! (WriteHeaderField("fields", names) if ( ! (WriteHeaderField("fields", names)
@ -128,21 +156,32 @@ bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const *
return true; return true;
write_error: write_error:
Error(Fmt("error writing to %s: %s", fname.c_str(), strerror(errno))); Error(Fmt("error writing to %s: %s", fname.c_str(), Strerror(errno)));
return false; return false;
} }
bool Ascii::DoFlush() bool Ascii::DoFlush(double network_time)
{ {
fflush(file); fsync(fd);
return true; return true;
} }
bool Ascii::DoFinish() bool Ascii::DoFinish(double network_time)
{ {
return WriterBackend::DoFinish(); if ( ascii_done )
{
fprintf(stderr, "internal error: duplicate finish\n");
abort();
} }
ascii_done = true;
CloseFile(network_time);
return true;
}
bool Ascii::DoWriteOne(ODesc* desc, Value* val, const Field* field) bool Ascii::DoWriteOne(ODesc* desc, Value* val, const Field* field)
{ {
if ( ! val->present ) if ( ! val->present )
@ -198,8 +237,8 @@ bool Ascii::DoWriteOne(ODesc* desc, Value* val, const Field* field)
case TYPE_FILE: case TYPE_FILE:
case TYPE_FUNC: case TYPE_FUNC:
{ {
int size = val->val.string_val->size(); int size = val->val.string_val.length;
const char* data = val->val.string_val->data(); const char* data = val->val.string_val.data;
if ( ! size ) if ( ! size )
{ {
@ -280,8 +319,7 @@ bool Ascii::DoWriteOne(ODesc* desc, Value* val, const Field* field)
} }
default: default:
Error(Fmt("unsupported field format %d for %s", val->type, Error(Fmt("unsupported field format %d for %s", val->type, field->name));
field->name.c_str()));
return false; return false;
} }
@ -291,7 +329,7 @@ bool Ascii::DoWriteOne(ODesc* desc, Value* val, const Field* field)
bool Ascii::DoWrite(int num_fields, const Field* const * fields, bool Ascii::DoWrite(int num_fields, const Field* const * fields,
Value** vals) Value** vals)
{ {
if ( ! file ) if ( ! fd )
DoInit(Info(), NumFields(), Fields()); DoInit(Info(), NumFields(), Fields());
desc.Clear(); desc.Clear();
@ -307,31 +345,47 @@ bool Ascii::DoWrite(int num_fields, const Field* const * fields,
desc.AddRaw("\n", 1); desc.AddRaw("\n", 1);
if ( fwrite(desc.Bytes(), desc.Len(), 1, file) != 1 ) const char* bytes = (const char*)desc.Bytes();
int len = desc.Len();
if ( strncmp(bytes, meta_prefix, meta_prefix_len) == 0 )
{ {
Error(Fmt("error writing to %s: %s", fname.c_str(), strerror(errno))); // If so, escape the first character so the line is not mistaken for meta data.
char buf[16];
snprintf(buf, sizeof(buf), "\\x%02x", bytes[0]);
if ( ! safe_write(fd, buf, strlen(buf)) )
goto write_error;
++bytes;
--len;
}
if ( ! safe_write(fd, bytes, len) )
goto write_error;
if ( IsBuf() )
fsync(fd);
return true;
write_error:
Error(Fmt("error writing to %s: %s", fname.c_str(), Strerror(errno)));
return false; return false;
} }
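The new guard in DoWrite() handles a data line that happens to begin with the meta prefix (by default "#"): its first byte is emitted as a \xNN escape so the line cannot be misread as a header. A standalone sketch of that logic (guard_meta_prefix is an illustrative name):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// If the rendered line starts with the meta prefix, rewrite its first
// byte as a hex escape; otherwise pass the line through unchanged.
std::string guard_meta_prefix(const std::string& line, const std::string& meta_prefix)
	{
	if ( line.compare(0, meta_prefix.size(), meta_prefix) != 0 )
		return line;

	char buf[8];
	snprintf(buf, sizeof(buf), "\\x%02x", (unsigned char)line[0]);
	return buf + line.substr(1);
	}
```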
if ( IsBuf() ) bool Ascii::DoRotate(const char* rotated_path, double open, double close, bool terminating)
fflush(file);
return true;
}
bool Ascii::DoRotate(string rotated_path, double open, double close, bool terminating)
{ {
// Don't rotate special files, or when no file is currently open. // Don't rotate special files, or when no file is currently open.
if ( ! file || IsSpecial(Info().path) ) if ( ! fd || IsSpecial(Info().path) )
return true; return true;
fclose(file); CloseFile(close);
file = 0;
string nname = rotated_path + "." + LogExt(); string nname = string(rotated_path) + "." + LogExt();
rename(fname.c_str(), nname.c_str()); rename(fname.c_str(), nname.c_str());
if ( ! FinishedRotation(nname, fname, open, close, terminating) ) if ( ! FinishedRotation(nname.c_str(), fname.c_str(), open, close, terminating) )
{ {
Error(Fmt("error rotating %s to %s", fname.c_str(), nname.c_str())); Error(Fmt("error rotating %s to %s", fname.c_str(), nname.c_str()));
return false; return false;
@ -346,9 +400,33 @@ bool Ascii::DoSetBuf(bool enabled)
return true; return true;
} }
bool Ascii::DoHeartbeat(double network_time, double current_time)
{
// Nothing to do.
return true;
}
string Ascii::LogExt() string Ascii::LogExt()
{ {
const char* ext = getenv("BRO_LOG_SUFFIX"); const char* ext = getenv("BRO_LOG_SUFFIX");
if ( ! ext ) ext = "log"; if ( ! ext )
ext = "log";
return ext; return ext;
} }
string Ascii::Timestamp(double t)
{
time_t teatime = time_t(t);
struct tm tmbuf;
struct tm* tm = localtime_r(&teatime, &tmbuf);
char tmp[128];
const char* const date_fmt = "%Y-%m-%d-%H-%M-%S";
strftime(tmp, sizeof(tmp), date_fmt, tm);
return tmp;
}
@ -24,23 +24,27 @@ protected:
virtual bool DoWrite(int num_fields, const threading::Field* const* fields, virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
threading::Value** vals); threading::Value** vals);
virtual bool DoSetBuf(bool enabled); virtual bool DoSetBuf(bool enabled);
virtual bool DoRotate(string rotated_path, double open, virtual bool DoRotate(const char* rotated_path, double open,
double close, bool terminating); double close, bool terminating);
virtual bool DoFlush(); virtual bool DoFlush(double network_time);
virtual bool DoFinish(); virtual bool DoFinish(double network_time);
virtual bool DoHeartbeat(double network_time, double current_time);
private: private:
bool IsSpecial(string path) { return path.find("/dev/") == 0; } bool IsSpecial(string path) { return path.find("/dev/") == 0; }
bool DoWriteOne(ODesc* desc, threading::Value* val, const threading::Field* field); bool DoWriteOne(ODesc* desc, threading::Value* val, const threading::Field* field);
bool WriteHeaderField(const string& key, const string& value); bool WriteHeaderField(const string& key, const string& value);
void CloseFile(double t);
string Timestamp(double t);
FILE* file; int fd;
string fname; string fname;
ODesc desc; ODesc desc;
bool ascii_done;
// Options set from the script-level. // Options set from the script-level.
bool output_to_stdout; bool output_to_stdout;
bool include_header; bool include_meta;
char* separator; char* separator;
int separator_len; int separator_len;
@ -54,8 +58,8 @@ private:
char* unset_field; char* unset_field;
int unset_field_len; int unset_field_len;
char* header_prefix; char* meta_prefix;
int header_prefix_len; int meta_prefix_len;
}; };
} }
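The new start/end metadata lines use the Timestamp() helper added above: a thread-safe localtime_r() conversion formatted as "%Y-%m-%d-%H-%M-%S". A self-contained mirror of it (format_log_timestamp is an illustrative name):

```cpp
#include <ctime>
#include <string>

// Render a double timestamp the way the ASCII writer does for its
// "#start"/"#end" meta lines, e.g. "2012-07-25-15-04-23".
std::string format_log_timestamp(double t)
	{
	time_t teatime = time_t(t);
	struct tm tmbuf;
	localtime_r(&teatime, &tmbuf); // Reentrant; safe in writer threads.

	char tmp[128];
	strftime(tmp, sizeof(tmp), "%Y-%m-%d-%H-%M-%S", &tmbuf);
	return tmp;
	}
```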
@ -78,10 +78,10 @@ std::string DataSeries::LogValueToString(threading::Value *val)
case TYPE_STRING: case TYPE_STRING:
case TYPE_FILE: case TYPE_FILE:
case TYPE_FUNC: case TYPE_FUNC:
if ( ! val->val.string_val->size() ) if ( ! val->val.string_val.length )
return ""; return "";
return string(val->val.string_val->data(), val->val.string_val->size()); return string(val->val.string_val.data, val->val.string_val.length);
case TYPE_TABLE: case TYPE_TABLE:
{ {
@ -302,7 +302,8 @@ bool DataSeries::DoInit(const WriterInfo& info, int num_fields, const threading:
if( ds_dump_schema ) if( ds_dump_schema )
{ {
FILE* pFile = fopen ( string(info.path + ".ds.xml").c_str() , "wb" ); string name = string(info.path) + ".ds.xml";
FILE* pFile = fopen(name.c_str(), "wb" );
if( pFile ) if( pFile )
{ {
@ -311,7 +312,7 @@ bool DataSeries::DoInit(const WriterInfo& info, int num_fields, const threading:
} }
else else
Error(Fmt("cannot dump schema: %s", strerror(errno))); Error(Fmt("cannot dump schema: %s", Strerror(errno)));
} }
compress_type = Extent::compress_all; compress_type = Extent::compress_all;
@ -343,7 +344,7 @@ bool DataSeries::DoInit(const WriterInfo& info, int num_fields, const threading:
return OpenLog(info.path); return OpenLog(info.path);
} }
bool DataSeries::DoFlush() bool DataSeries::DoFlush(double network_time)
{ {
// Flushing is handled by DataSeries automatically, so this function // Flushing is handled by DataSeries automatically, so this function
// doesn't do anything. // doesn't do anything.
@ -366,11 +367,10 @@ void DataSeries::CloseLog()
log_file = 0; log_file = 0;
} }
bool DataSeries::DoFinish() bool DataSeries::DoFinish(double network_time)
{ {
CloseLog(); CloseLog();
return true;
return WriterBackend::DoFinish();
} }
bool DataSeries::DoWrite(int num_fields, const threading::Field* const * fields, bool DataSeries::DoWrite(int num_fields, const threading::Field* const * fields,
@ -395,17 +395,17 @@ bool DataSeries::DoWrite(int num_fields, const threading::Field* const * fields,
return true; return true;
} }
bool DataSeries::DoRotate(string rotated_path, double open, double close, bool terminating) bool DataSeries::DoRotate(const char* rotated_path, double open, double close, bool terminating)
{ {
// Note that if DS files are rotated too often, the aggregate log // Note that if DS files are rotated too often, the aggregate log
// size will be (much) larger. // size will be (much) larger.
CloseLog(); CloseLog();
string dsname = Info().path + ".ds"; string dsname = string(Info().path) + ".ds";
string nname = rotated_path + ".ds"; string nname = string(rotated_path) + ".ds";
rename(dsname.c_str(), nname.c_str()); rename(dsname.c_str(), nname.c_str());
if ( ! FinishedRotation(nname, dsname, open, close, terminating) ) if ( ! FinishedRotation(nname.c_str(), dsname.c_str(), open, close, terminating) )
{ {
Error(Fmt("error rotating %s to %s", dsname.c_str(), nname.c_str())); Error(Fmt("error rotating %s to %s", dsname.c_str(), nname.c_str()));
return false; return false;
@ -420,4 +420,9 @@ bool DataSeries::DoSetBuf(bool enabled)
return true; return true;
} }
bool DataSeries::DoHeartbeat(double network_time, double current_time)
{
return true;
}
#endif /* USE_DATASERIES */ #endif /* USE_DATASERIES */
@ -32,10 +32,11 @@ protected:
virtual bool DoWrite(int num_fields, const threading::Field* const* fields, virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
threading::Value** vals); threading::Value** vals);
virtual bool DoSetBuf(bool enabled); virtual bool DoSetBuf(bool enabled);
virtual bool DoRotate(string rotated_path, double open, virtual bool DoRotate(const char* rotated_path, double open,
double close, bool terminating); double close, bool terminating);
virtual bool DoFlush(); virtual bool DoFlush(double network_time);
virtual bool DoFinish(); virtual bool DoFinish(double network_time);
virtual bool DoHeartbeat(double network_time, double current_time);
private: private:
static const size_t ROW_MIN = 2048; // Minimum extent size. static const size_t ROW_MIN = 2048; // Minimum extent size.
@ -0,0 +1,416 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// This is experimental code that is not yet ready for production usage.
//
#include "config.h"
#ifdef USE_ELASTICSEARCH
#include "util.h" // Needs to come first for stdint.h
#include <string>
#include <errno.h>
#include "BroString.h"
#include "NetVar.h"
#include "threading/SerialTypes.h"
#include <curl/curl.h>
#include <curl/easy.h>
#include "ElasticSearch.h"
using namespace logging;
using namespace writer;
using threading::Value;
using threading::Field;
ElasticSearch::ElasticSearch(WriterFrontend* frontend) : WriterBackend(frontend)
{
cluster_name_len = BifConst::LogElasticSearch::cluster_name->Len();
cluster_name = new char[cluster_name_len + 1];
memcpy(cluster_name, BifConst::LogElasticSearch::cluster_name->Bytes(), cluster_name_len);
cluster_name[cluster_name_len] = 0;
index_prefix = string((const char*) BifConst::LogElasticSearch::index_prefix->Bytes(), BifConst::LogElasticSearch::index_prefix->Len());
es_server = string(Fmt("http://%s:%d", BifConst::LogElasticSearch::server_host->Bytes(),
(int) BifConst::LogElasticSearch::server_port));
bulk_url = string(Fmt("%s/_bulk", es_server.c_str()));
http_headers = curl_slist_append(NULL, "Content-Type: text/json; charset=utf-8");
buffer.Clear();
counter = 0;
current_index = string();
prev_index = string();
last_send = current_time();
failing = false;
transfer_timeout = BifConst::LogElasticSearch::transfer_timeout * 1000;
curl_handle = HTTPSetup();
}
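The constructor derives both the server base URL and the _bulk indexing endpoint from the script-level host/port constants. The string assembly, in isolation (es_base_url/es_bulk_url are illustrative names; "localhost"/9200 below are example values, not Bro defaults):

```cpp
#include <cstdio>
#include <string>

// Base URL from configured host and port, mirroring
// Fmt("http://%s:%d", server_host, server_port).
std::string es_base_url(const std::string& host, int port)
	{
	char buf[256];
	snprintf(buf, sizeof(buf), "http://%s:%d", host.c_str(), port);
	return buf;
	}

// ElasticSearch's bulk API endpoint hangs off the base URL.
std::string es_bulk_url(const std::string& base)
	{
	return base + "/_bulk";
	}
```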
ElasticSearch::~ElasticSearch()
{
delete [] cluster_name;
}
bool ElasticSearch::DoInit(const WriterInfo& info, int num_fields, const threading::Field* const* fields)
{
return true;
}
bool ElasticSearch::DoFlush(double network_time)
{
BatchIndex();
return true;
}
bool ElasticSearch::DoFinish(double network_time)
{
BatchIndex();
curl_slist_free_all(http_headers);
curl_easy_cleanup(curl_handle);
return true;
}
bool ElasticSearch::BatchIndex()
{
curl_easy_reset(curl_handle);
curl_easy_setopt(curl_handle, CURLOPT_URL, bulk_url.c_str());
curl_easy_setopt(curl_handle, CURLOPT_POST, 1);
curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)buffer.Len());
curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDS, buffer.Bytes());
failing = ! HTTPSend(curl_handle);
// We are currently throwing the data out regardless of whether the send failed. Fire and forget!
buffer.Clear();
counter = 0;
last_send = current_time();
return true;
}
bool ElasticSearch::AddValueToBuffer(ODesc* b, Value* val)
{
switch ( val->type )
{
// ES treats 0 as false and any other value as true so bool types go here.
case TYPE_BOOL:
case TYPE_INT:
b->Add(val->val.int_val);
break;
case TYPE_COUNT:
case TYPE_COUNTER:
{
// ElasticSearch doesn't seem to support unsigned 64-bit ints.
if ( val->val.uint_val >= INT64_MAX )
{
Error(Fmt("count value too large: %" PRIu64, val->val.uint_val));
b->AddRaw("null", 4);
}
else
b->Add(val->val.uint_val);
break;
}
case TYPE_PORT:
b->Add(val->val.port_val.port);
break;
case TYPE_SUBNET:
b->AddRaw("\"", 1);
b->Add(Render(val->val.subnet_val));
b->AddRaw("\"", 1);
break;
case TYPE_ADDR:
b->AddRaw("\"", 1);
b->Add(Render(val->val.addr_val));
b->AddRaw("\"", 1);
break;
case TYPE_DOUBLE:
case TYPE_INTERVAL:
b->Add(val->val.double_val);
break;
case TYPE_TIME:
{
// ElasticSearch uses milliseconds for timestamps and json only
// supports signed ints (uints can be too large).
uint64_t ts = (uint64_t) (val->val.double_val * 1000);
if ( ts >= INT64_MAX )
{
Error(Fmt("time value too large: %" PRIu64, ts));
b->AddRaw("null", 4);
}
else
b->Add(ts);
break;
}
case TYPE_ENUM:
case TYPE_STRING:
case TYPE_FILE:
case TYPE_FUNC:
{
b->AddRaw("\"", 1);
for ( int i = 0; i < val->val.string_val.length; ++i )
{
char c = val->val.string_val.data[i];
			// Escape special characters as 2-byte Unicode (\u00xx) sequences.
if ( c < 32 || c > 126 || c == '\n' || c == '"' || c == '\'' || c == '\\' || c == '&' )
{
static const char hex_chars[] = "0123456789abcdef";
b->AddRaw("\\u00", 4);
b->AddRaw(&hex_chars[(c & 0xf0) >> 4], 1);
b->AddRaw(&hex_chars[c & 0x0f], 1);
}
else
b->AddRaw(&c, 1);
}
b->AddRaw("\"", 1);
break;
}
case TYPE_TABLE:
{
b->AddRaw("[", 1);
for ( int j = 0; j < val->val.set_val.size; j++ )
{
if ( j > 0 )
b->AddRaw(",", 1);
AddValueToBuffer(b, val->val.set_val.vals[j]);
}
b->AddRaw("]", 1);
break;
}
case TYPE_VECTOR:
{
b->AddRaw("[", 1);
for ( int j = 0; j < val->val.vector_val.size; j++ )
{
if ( j > 0 )
b->AddRaw(",", 1);
AddValueToBuffer(b, val->val.vector_val.vals[j]);
}
b->AddRaw("]", 1);
break;
}
default:
return false;
}
return true;
}
bool ElasticSearch::AddFieldToBuffer(ODesc *b, Value* val, const Field* field)
{
if ( ! val->present )
return false;
b->AddRaw("\"", 1);
b->Add(field->name);
b->AddRaw("\":", 2);
AddValueToBuffer(b, val);
return true;
}
bool ElasticSearch::DoWrite(int num_fields, const Field* const * fields,
Value** vals)
{
if ( current_index.empty() )
UpdateIndex(network_time, Info().rotation_interval, Info().rotation_base);
	// Each record is preceded by a bulk API action line of the form:
	// {"index":{"_index":"<current_index>","_type":"<path>"}}
buffer.AddRaw("{\"index\":{\"_index\":\"", 20);
buffer.Add(current_index);
buffer.AddRaw("\",\"_type\":\"", 11);
buffer.Add(Info().path);
buffer.AddRaw("\"}}\n", 4);
buffer.AddRaw("{", 1);
for ( int i = 0; i < num_fields; i++ )
{
		if ( i > 0 && buffer.Bytes()[buffer.Len() - 1] != ',' && vals[i]->present )
buffer.AddRaw(",", 1);
AddFieldToBuffer(&buffer, vals[i], fields[i]);
}
buffer.AddRaw("}\n", 2);
counter++;
if ( counter >= BifConst::LogElasticSearch::max_batch_size ||
uint(buffer.Len()) >= BifConst::LogElasticSearch::max_byte_size )
BatchIndex();
return true;
}
bool ElasticSearch::UpdateIndex(double now, double rinterval, double rbase)
{
if ( rinterval == 0 )
{
		// If logs aren't being rotated, don't use a rotation-oriented index name.
current_index = index_prefix;
}
else
{
double nr = calc_next_rotate(now, rinterval, rbase);
double interval_beginning = now - (rinterval - nr);
struct tm tm;
char buf[128];
time_t teatime = (time_t)interval_beginning;
localtime_r(&teatime, &tm);
strftime(buf, sizeof(buf), "%Y%m%d%H%M", &tm);
prev_index = current_index;
current_index = index_prefix + "-" + buf;
// Send some metadata about this index.
buffer.AddRaw("{\"index\":{\"_index\":\"@", 21);
buffer.Add(index_prefix);
buffer.AddRaw("-meta\",\"_type\":\"index\",\"_id\":\"", 30);
buffer.Add(current_index);
buffer.AddRaw("-", 1);
buffer.Add(Info().rotation_base);
buffer.AddRaw("-", 1);
buffer.Add(Info().rotation_interval);
buffer.AddRaw("\"}}\n{\"name\":\"", 13);
buffer.Add(current_index);
buffer.AddRaw("\",\"start\":", 10);
buffer.Add(interval_beginning);
buffer.AddRaw(",\"end\":", 7);
buffer.Add(interval_beginning+rinterval);
buffer.AddRaw("}\n", 2);
}
//printf("%s - prev:%s current:%s\n", Info().path.c_str(), prev_index.c_str(), current_index.c_str());
return true;
}
bool ElasticSearch::DoRotate(const char* rotated_path, double open, double close, bool terminating)
{
// Update the currently used index to the new rotation interval.
UpdateIndex(close, Info().rotation_interval, Info().rotation_base);
// Only do this stuff if there was a previous index.
if ( ! prev_index.empty() )
{
// FIXME: I think this section is taking too long and causing the thread to die.
// Compress the previous index
//curl_easy_reset(curl_handle);
//curl_easy_setopt(curl_handle, CURLOPT_URL, Fmt("%s/%s/_settings", es_server.c_str(), prev_index.c_str()));
//curl_easy_setopt(curl_handle, CURLOPT_CUSTOMREQUEST, "PUT");
//curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDS, "{\"index\":{\"store.compress.stored\":\"true\"}}");
//curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t) 42);
//HTTPSend(curl_handle);
// Optimize the previous index.
// TODO: make this into variables.
//curl_easy_reset(curl_handle);
//curl_easy_setopt(curl_handle, CURLOPT_URL, Fmt("%s/%s/_optimize?max_num_segments=1&wait_for_merge=false", es_server.c_str(), prev_index.c_str()));
//HTTPSend(curl_handle);
}
if ( ! FinishedRotation(current_index.c_str(), prev_index.c_str(), open, close, terminating) )
{
Error(Fmt("error rotating %s to %s", prev_index.c_str(), current_index.c_str()));
}
return true;
}
bool ElasticSearch::DoSetBuf(bool enabled)
{
// Nothing to do.
return true;
}
bool ElasticSearch::DoHeartbeat(double network_time, double current_time)
{
if ( last_send > 0 && buffer.Len() > 0 &&
current_time-last_send > BifConst::LogElasticSearch::max_batch_interval )
{
BatchIndex();
}
return true;
}
CURL* ElasticSearch::HTTPSetup()
{
CURL* handle = curl_easy_init();
if ( ! handle )
{
Error("cURL did not initialize correctly.");
return 0;
}
return handle;
}
bool ElasticSearch::HTTPReceive(void* ptr, int size, int nmemb, void* userdata)
{
//TODO: Do some verification on the result?
return true;
}
bool ElasticSearch::HTTPSend(CURL *handle)
{
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, http_headers);
curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, &logging::writer::ElasticSearch::HTTPReceive); // This gets called with the result.
// HTTP 1.1 likes to use chunked encoded transfers, which aren't good for speed.
// The best (only?) way to disable that is to just use HTTP 1.0
curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
//curl_easy_setopt(handle, CURLOPT_TIMEOUT_MS, transfer_timeout);
CURLcode return_code = curl_easy_perform(handle);
switch ( return_code )
{
case CURLE_COULDNT_CONNECT:
case CURLE_COULDNT_RESOLVE_HOST:
case CURLE_WRITE_ERROR:
case CURLE_RECV_ERROR:
{
if ( ! failing )
Error(Fmt("ElasticSearch server may not be accessible."));
}
case CURLE_OPERATION_TIMEDOUT:
{
if ( ! failing )
Warning(Fmt("HTTP operation with elasticsearch server timed out at %" PRIu64 " msecs.", transfer_timeout));
}
case CURLE_OK:
{
			long http_code = 0;
			curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE, &http_code);
if ( http_code == 200 )
// Hopefully everything goes through here.
return true;
else if ( ! failing )
Error(Fmt("Received a non-successful status code back from ElasticSearch server, check the elasticsearch server log."));
}
default:
{
}
}
// The "successful" return happens above
return false;
}
#endif
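The per-character escaping loop in AddValueToBuffer() can be exercised on its own. The sketch below mirrors the same rule with a hypothetical `escape_json` helper (not part of the writer's API): everything outside printable ASCII, plus a few JSON-sensitive characters, becomes a `\u00xx` escape.

```cpp
#include <cassert>
#include <string>

// Mirror of the writer's escaping rule: anything outside printable ASCII,
// plus a handful of JSON-sensitive characters, becomes \u00xx.
static std::string escape_json(const std::string& in)
	{
	static const char hex_chars[] = "0123456789abcdef";
	std::string out;

	for ( size_t i = 0; i < in.size(); ++i )
		{
		char c = in[i];

		if ( c < 32 || c > 126 || c == '\n' || c == '"' || c == '\'' || c == '\\' || c == '&' )
			{
			out += "\\u00";
			out += hex_chars[(c & 0xf0) >> 4];
			out += hex_chars[c & 0x0f];
			}
		else
			out += c;
		}

	return out;
	}
```

Note this only produces escapes for code points below 256; multi-byte UTF-8 input gets escaped byte-by-byte, which is the same behavior as the loop above.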


@@ -0,0 +1,81 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// Log writer for writing to an ElasticSearch database
//
// This is experimental code that is not yet ready for production usage.
//
#ifndef LOGGING_WRITER_ELASTICSEARCH_H
#define LOGGING_WRITER_ELASTICSEARCH_H
#include <curl/curl.h>
#include "../WriterBackend.h"
namespace logging { namespace writer {
class ElasticSearch : public WriterBackend {
public:
ElasticSearch(WriterFrontend* frontend);
~ElasticSearch();
static WriterBackend* Instantiate(WriterFrontend* frontend)
{ return new ElasticSearch(frontend); }
static string LogExt();
protected:
	// Overridden from WriterBackend.
virtual bool DoInit(const WriterInfo& info, int num_fields,
const threading::Field* const* fields);
virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
threading::Value** vals);
virtual bool DoSetBuf(bool enabled);
virtual bool DoRotate(const char* rotated_path, double open,
double close, bool terminating);
virtual bool DoFlush(double network_time);
virtual bool DoFinish(double network_time);
virtual bool DoHeartbeat(double network_time, double current_time);
private:
bool AddFieldToBuffer(ODesc *b, threading::Value* val, const threading::Field* field);
bool AddValueToBuffer(ODesc *b, threading::Value* val);
bool BatchIndex();
bool SendMappings();
bool UpdateIndex(double now, double rinterval, double rbase);
CURL* HTTPSetup();
bool HTTPReceive(void* ptr, int size, int nmemb, void* userdata);
bool HTTPSend(CURL *handle);
// Buffers, etc.
ODesc buffer;
uint64 counter;
double last_send;
string current_index;
string prev_index;
CURL* curl_handle;
// From scripts
char* cluster_name;
int cluster_name_len;
string es_server;
string bulk_url;
struct curl_slist *http_headers;
string path;
string index_prefix;
uint64 transfer_timeout;
bool failing;
uint64 batch_size;
};
}
}
#endif
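The millisecond timestamp handling used for TYPE_TIME values in the writer can be sketched standalone (the `to_es_millis` helper name is hypothetical and not part of this header): ElasticSearch takes millisecond timestamps, and JSON consumers only reliably handle signed 64-bit integers, so the conversion guards against overflow.

```cpp
#include <cassert>
#include <cstdint>

// Convert a time value in seconds to the millisecond timestamps
// ElasticSearch expects. Returns false when the result would not fit
// a signed 64-bit integer, in which case the writer emits "null".
static bool to_es_millis(double seconds, uint64_t* out)
	{
	uint64_t ts = (uint64_t) (seconds * 1000);

	if ( ts >= INT64_MAX )
		return false;

	*out = ts;
	return true;
	}
```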


@@ -1,4 +1,6 @@
+#include <algorithm>
+
 #include "None.h"
 #include "NetVar.h"
@@ -15,8 +17,17 @@ bool None::DoInit(const WriterInfo& info, int num_fields,
 	std::cout << "  rotation_interval=" << info.rotation_interval << std::endl;
 	std::cout << "  rotation_base=" << info.rotation_base << std::endl;

-	for ( std::map<string,string>::const_iterator i = info.config.begin(); i != info.config.end(); i++ )
-		std::cout << "  config[" << i->first << "] = " << i->second << std::endl;
+	// Output the config sorted by keys.
+	std::vector<std::pair<string, string> > keys;
+	for ( WriterInfo::config_map::const_iterator i = info.config.begin(); i != info.config.end(); i++ )
+		keys.push_back(std::make_pair(i->first, i->second));
+
+	std::sort(keys.begin(), keys.end());
+
+	for ( std::vector<std::pair<string,string> >::const_iterator i = keys.begin(); i != keys.end(); i++ )
+		std::cout << "  config[" << (*i).first << "] = " << (*i).second << std::endl;

 	for ( int i = 0; i < num_fields; i++ )
 		{
@@ -31,11 +42,11 @@ bool None::DoInit(const WriterInfo& info, int num_fields,
 	return true;
 	}

-bool None::DoRotate(string rotated_path, double open, double close, bool terminating)
+bool None::DoRotate(const char* rotated_path, double open, double close, bool terminating)
 	{
-	if ( ! FinishedRotation(string("/dev/null"), Info().path, open, close, terminating))
+	if ( ! FinishedRotation("/dev/null", Info().path, open, close, terminating))
 		{
-		Error(Fmt("error rotating %s", Info().path.c_str()));
+		Error(Fmt("error rotating %s", Info().path));
 		return false;
 		}


@@ -24,10 +24,11 @@ protected:
 	virtual bool DoWrite(int num_fields, const threading::Field* const* fields,
 			     threading::Value** vals)	{ return true; }
 	virtual bool DoSetBuf(bool enabled)	{ return true; }
-	virtual bool DoRotate(string rotated_path, double open,
+	virtual bool DoRotate(const char* rotated_path, double open,
 			      double close, bool terminating);
-	virtual bool DoFlush()	{ return true; }
-	virtual bool DoFinish()	{ WriterBackend::DoFinish(); return true; }
+	virtual bool DoFlush(double network_time)	{ return true; }
+	virtual bool DoFinish(double network_time)	{ return true; }
+	virtual bool DoHeartbeat(double network_time, double current_time)	{ return true; }
 };

 }


@@ -12,6 +12,10 @@
 #include <getopt.h>
 #endif

+#ifdef USE_CURL
+#include <curl/curl.h>
+#endif
+
 #ifdef USE_IDMEF
 extern "C" {
 #include <libidmef/idmefxml.h>
@@ -313,6 +317,8 @@ void terminate_bro()
 	if ( remote_serializer )
 		remote_serializer->LogStats();

+	mgr.Drain();
+
 	log_mgr->Terminate();
 	thread_mgr->Terminate();
@@ -359,12 +365,6 @@ RETSIGTYPE sig_handler(int signo)
 	set_processing_status("TERMINATING", "sig_handler");
 	signal_val = signo;

-	if ( thread_mgr->Terminating() && (signal_val == SIGTERM || signal_val == SIGINT) )
-		// If the thread manager is already terminating (i.e.,
-		// waiting for child threads to exit), another term signal
-		// will send the threads a kill.
-		thread_mgr->KillThreads();
-
 	return RETSIGVAL;
 	}
@@ -716,6 +716,10 @@ int main(int argc, char** argv)
 	SSL_library_init();
 	SSL_load_error_strings();

+#ifdef USE_CURL
+	curl_global_init(CURL_GLOBAL_ALL);
+#endif
+
 	// FIXME: On systems that don't provide /dev/urandom, OpenSSL doesn't
 	// seed the PRNG. We should do this here (but at least Linux, FreeBSD
 	// and Solaris provide /dev/urandom).
@@ -1066,6 +1070,10 @@ int main(int argc, char** argv)
 	done_with_network();
 	net_delete();

+#ifdef USE_CURL
+	curl_global_cleanup();
+#endif
+
 	terminate_bro();

 	// Close files after net_delete(), because net_delete()


@@ -93,6 +93,7 @@ function version_ok(vers : uint16) : bool
 	case SSLv30:
 	case TLSv10:
 	case TLSv11:
+	case TLSv12:
 		return true;

 	default:
@@ -295,7 +296,7 @@ refine connection SSL_Conn += {
 		for ( int k = 0; k < num_ext; ++k )
 			{
 			unsigned char *pBuffer = 0;
-			uint length = 0;
+			int length = 0;

 			X509_EXTENSION* ex = X509_get_ext(pTemp, k);
 			if (ex)
@@ -303,14 +304,14 @@ refine connection SSL_Conn += {
 				ASN1_STRING *pString = X509_EXTENSION_get_data(ex);
 				length = ASN1_STRING_to_UTF8(&pBuffer, pString);
 				//i2t_ASN1_OBJECT(&pBuffer, length, obj)
-				// printf("extension length: %u\n", length);
+				// printf("extension length: %d\n", length);
 				// -1 indicates an error.
-				if ( length < 0 )
-					continue;
+				if ( length >= 0 )
+					{
 					StringVal* value = new StringVal(length, (char*)pBuffer);
 					BifEvent::generate_x509_extension(bro_analyzer(),
 						bro_analyzer()->Conn(), ${rec.is_orig}, value);
+					}
 				OPENSSL_free(pBuffer);
 				}
 			}


@@ -22,5 +22,6 @@ enum SSLVersions {
 	SSLv20 = 0x0002,
 	SSLv30 = 0x0300,
 	TLSv10 = 0x0301,
-	TLSv11 = 0x0302
+	TLSv11 = 0x0302,
+	TLSv12 = 0x0303
 };


@@ -12,18 +12,23 @@
 using namespace threading;

+static const int STD_FMT_BUF_LEN = 2048;
+
 uint64_t BasicThread::thread_counter = 0;

 BasicThread::BasicThread()
 	{
 	started = false;
 	terminating = false;
+	killed = false;
 	pthread = 0;

-	buf_len = 2048;
+	buf_len = STD_FMT_BUF_LEN;
 	buf = (char*) malloc(buf_len);

-	name = Fmt("thread-%d", ++thread_counter);
+	strerr_buffer = 0;
+
+	name = copy_string(fmt("thread-%" PRIu64, ++thread_counter));

 	thread_mgr->AddThread(this);
 	}
@@ -32,31 +37,42 @@ BasicThread::~BasicThread()
 	{
 	if ( buf )
 		free(buf);
+
+	delete [] name;
+	delete [] strerr_buffer;
 	}

-void BasicThread::SetName(const string& arg_name)
+void BasicThread::SetName(const char* arg_name)
 	{
-	// Slight race condition here with reader threads, but shouldn't matter.
-	name = arg_name;
+	delete [] name;
+	name = copy_string(arg_name);
 	}

-void BasicThread::SetOSName(const string& name)
+void BasicThread::SetOSName(const char* arg_name)
 	{
 #ifdef HAVE_LINUX
-	prctl(PR_SET_NAME, name.c_str(), 0, 0, 0);
+	prctl(PR_SET_NAME, arg_name, 0, 0, 0);
 #endif

 #ifdef __APPLE__
-	pthread_setname_np(name.c_str());
+	pthread_setname_np(arg_name);
 #endif

 #ifdef FREEBSD
-	pthread_set_name_np(pthread_self(), name, name.c_str());
+	pthread_set_name_np(pthread_self(), arg_name, arg_name);
 #endif
 	}

 const char* BasicThread::Fmt(const char* format, ...)
 	{
+	if ( buf_len > 10 * STD_FMT_BUF_LEN )
+		{
+		// Shrink back to normal.
+		buf = (char*) safe_realloc(buf, STD_FMT_BUF_LEN);
+		buf_len = STD_FMT_BUF_LEN;
+		}
+
 	va_list al;
 	va_start(al, format);
 	int n = safe_vsnprintf(buf, buf_len, format, al);
@@ -64,42 +80,56 @@ const char* BasicThread::Fmt(const char* format, ...)
 	if ( (unsigned int) n >= buf_len )
 		{ // Not enough room, grow the buffer.
-		int tmp_len = n + 32;
-		char* tmp = (char*) malloc(tmp_len);
+		buf_len = n + 32;
+		buf = (char*) safe_realloc(buf, buf_len);

 		// Is it portable to restart?
 		va_start(al, format);
-		n = safe_vsnprintf(tmp, tmp_len, format, al);
+		n = safe_vsnprintf(buf, buf_len, format, al);
 		va_end(al);
-
-		free(tmp);
 		}

 	return buf;
 	}

+const char* BasicThread::Strerror(int err)
+	{
+	if ( ! strerr_buffer )
+		strerr_buffer = new char[256];
+
+	strerror_r(err, strerr_buffer, 256);
+	return strerr_buffer;
+	}
+
 void BasicThread::Start()
 	{
 	if ( started )
 		return;

-	if ( pthread_mutex_init(&terminate, 0) != 0 )
-		reporter->FatalError("Cannot create terminate mutex for thread %s", name.c_str());
-
-	// We use this like a binary semaphore and acquire it immediately.
-	if ( pthread_mutex_lock(&terminate) != 0 )
-		reporter->FatalError("Cannot aquire terminate mutex for thread %s", name.c_str());
-
-	if ( pthread_create(&pthread, 0, BasicThread::launcher, this) != 0 )
-		reporter->FatalError("Cannot create thread %s", name.c_str());
-
-	DBG_LOG(DBG_THREADING, "Started thread %s", name.c_str());
-
 	started = true;

+	int err = pthread_create(&pthread, 0, BasicThread::launcher, this);
+	if ( err != 0 )
+		reporter->FatalError("Cannot create thread %s: %s", name, Strerror(err));
+
+	DBG_LOG(DBG_THREADING, "Started thread %s", name);
+
 	OnStart();
 	}

+void BasicThread::PrepareStop()
+	{
+	if ( ! started )
+		return;
+
+	if ( terminating )
+		return;
+
+	DBG_LOG(DBG_THREADING, "Preparing thread %s to terminate ...", name);
+
+	OnPrepareStop();
+	}
+
 void BasicThread::Stop()
 	{
 	if ( ! started )
@@ -108,16 +138,11 @@ void BasicThread::Stop()
 	if ( terminating )
 		return;

-	DBG_LOG(DBG_THREADING, "Signaling thread %s to terminate ...", name.c_str());
-
-	// Signal that it's ok for the thread to exit now by unlocking the
-	// mutex.
-	if ( pthread_mutex_unlock(&terminate) != 0 )
-		reporter->FatalError("Failure flagging terminate condition for thread %s", name.c_str());
-
-	terminating = true;
+	DBG_LOG(DBG_THREADING, "Signaling thread %s to terminate ...", name);

 	OnStop();
+
+	terminating = true;
 	}

 void BasicThread::Join()
@@ -125,30 +150,34 @@ void BasicThread::Join()
 	if ( ! started )
 		return;

-	if ( ! terminating )
-		Stop();
+	assert(terminating);

-	DBG_LOG(DBG_THREADING, "Joining thread %s ...", name.c_str());
+	DBG_LOG(DBG_THREADING, "Joining thread %s ...", name);

-	if ( pthread_join(pthread, 0) != 0 )
-		reporter->FatalError("Failure joining thread %s", name.c_str());
+	if ( pthread && pthread_join(pthread, 0) != 0 )
+		reporter->FatalError("Failure joining thread %s", name);

-	pthread_mutex_destroy(&terminate);
-
-	DBG_LOG(DBG_THREADING, "Done with thread %s", name.c_str());
+	DBG_LOG(DBG_THREADING, "Joined with thread %s", name);

 	pthread = 0;
 	}

 void BasicThread::Kill()
 	{
-	if ( ! (started && pthread) )
-		return;
-
-	// I believe this is safe to call from a signal handler ... Not error
-	// checking so that killing doesn't bail out if we have already
-	// terminated.
-	pthread_kill(pthread, SIGKILL);
+	// We don't *really* kill the thread here because that leads to race
+	// conditions. Instead we set a flag that parts of the code need
+	// to check and get out of any loops they might be in.
 	terminating = true;
 	killed = true;
+	OnKill();
+	}
+
+void BasicThread::Done()
+	{
+	DBG_LOG(DBG_THREADING, "Thread %s has finished", name);
 	}

 void* BasicThread::launcher(void *arg)
@@ -159,16 +188,21 @@ void* BasicThread::launcher(void *arg)
 	// process.
 	sigset_t mask_set;
 	sigfillset(&mask_set);
+
+	// Unblock the signals where according to POSIX the result is undefined if they are blocked
+	// in a thread and received by that thread. If those are not unblocked, threads will just
+	// hang when they crash without the user being notified.
+	sigdelset(&mask_set, SIGFPE);
+	sigdelset(&mask_set, SIGILL);
+	sigdelset(&mask_set, SIGSEGV);
+	sigdelset(&mask_set, SIGBUS);
 	int res = pthread_sigmask(SIG_BLOCK, &mask_set, 0);
-	assert(res == 0);
+	// assert(res == 0);

 	// Run thread's main function.
 	thread->Run();

-	// Wait until somebody actually wants us to terminate.
-	if ( pthread_mutex_lock(&thread->terminate) != 0 )
-		reporter->FatalError("Failure acquiring terminate mutex at end of thread %s", thread->Name().c_str());
+	thread->Done();

 	return 0;
 	}
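The grow-and-retry pattern in BasicThread::Fmt() can be sketched with plain libc calls (a minimal sketch: `safe_vsnprintf`/`safe_realloc` are Bro utilities and are replaced here by `vsnprintf`/`realloc`; the `grow_fmt` helper name is hypothetical):

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Format into a caller-owned, growable buffer, mirroring BasicThread::Fmt():
// try once, and if the result was truncated, grow the buffer and restart
// the varargs processing from scratch.
static char* grow_fmt(char** buf, unsigned int* buf_len, const char* format, ...)
	{
	va_list al;
	va_start(al, format);
	int n = vsnprintf(*buf, *buf_len, format, al);
	va_end(al);

	if ( (unsigned int) n >= *buf_len )
		{
		// Not enough room; grow and restart the formatting.
		*buf_len = n + 32;
		*buf = (char*) realloc(*buf, *buf_len);

		va_start(al, format);
		vsnprintf(*buf, *buf_len, format, al);
		va_end(al);
		}

	return *buf;
	}
```

The key detail is that `vsnprintf` returns the length the full string *would* have had, which is exactly the information needed to size the second attempt.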


@@ -5,7 +5,6 @@
 #include <pthread.h>
 #include <semaphore.h>

-#include "Queue.h"
 #include "util.h"

 using namespace std;
@@ -42,22 +41,25 @@ public:
 	 *
 	 * This method is safe to call from any thread.
 	 */
-	const string& Name() const { return name; }
+	const char* Name() const { return name; }

 	/**
 	 * Sets a descriptive name for the thread. This should be a string
 	 * that's useful in output presented to the user and uniquely
 	 * identifies the thread.
 	 *
-	 * This method must be called only from the thread itself.
+	 * This method must be called only from the main thread at
+	 * initialization time.
 	 */
-	void SetName(const string& name);
+	void SetName(const char* name);

 	/**
 	 * Set the name shown by the OS as the thread's description. Not
 	 * supported on all OSs.
+	 *
+	 * Must be called only from the child thread.
 	 */
-	void SetOSName(const string& name);
+	void SetOSName(const char* name);

 	/**
 	 * Starts the thread. Calling this methods will spawn a new OS thread
@@ -68,6 +70,18 @@ public:
 	 */
 	void Start();

+	/**
+	 * Signals the thread to prepare for stopping. This must be called
+	 * before Stop() and allows the thread to trigger shutting down
+	 * without yet blocking for doing so.
+	 *
+	 * Calling this method has no effect if Start() hasn't been executed
+	 * yet.
+	 *
+	 * Only Bro's main thread must call this method.
+	 */
+	void PrepareStop();
+
 	/**
 	 * Signals the thread to stop. The method lets Terminating() now
 	 * return true. It does however not force the thread to terminate.
@@ -88,6 +102,13 @@ public:
 	 */
 	bool Terminating() const { return terminating; }

+	/**
+	 * Returns true if Kill() has been called.
+	 *
+	 * This method is safe to call from any thread.
+	 */
+	bool Killed() const { return killed; }
+
 	/**
 	 * A version of fmt() that the thread can safely use.
 	 *
@@ -96,6 +117,14 @@ public:
 	 */
 	const char* Fmt(const char* format, ...);

+	/**
+	 * A version of strerror() that the thread can safely use. This is
+	 * essentially a wrapper around strerror_r(). Note that it keeps a
+	 * single buffer per thread internally so the result remains valid
+	 * only until the next call.
+	 */
+	const char* Strerror(int err);
+
 protected:
 	friend class Manager;
@@ -116,12 +145,24 @@ protected:
 	virtual void OnStart()	{}

 	/**
-	 * Executed with Stop(). This is a hook into stopping the thread. It
-	 * will be called from Bro's main thread after the thread has been
-	 * signaled to stop.
+	 * Executed with PrepareStop() (and before OnStop()). This is a hook
+	 * into preparing the thread for stopping. It will be called from
+	 * Bro's main thread before the thread has been signaled to stop.
+	 */
+	virtual void OnPrepareStop()	{}
+
+	/**
+	 * Executed with Stop() (and after OnPrepareStop()). This is a hook
+	 * into stopping the thread. It will be called from Bro's main thread
+	 * after the thread has been signaled to stop.
 	 */
 	virtual void OnStop()	{}

+	/**
+	 * Executed with Kill(). This is a hook into killing the thread.
+	 */
+	virtual void OnKill()	{}
+
 	/**
 	 * Destructor. This will be called by the manager.
 	 *
@@ -145,14 +186,18 @@ protected:
 	 */
 	void Kill();

+	/** Called by child thread's launcher when it's done processing. */
+	void Done();
+
 private:
 	// pthread entry function.
 	static void* launcher(void *arg);

-	string name;
+	const char* name;
 	pthread_t pthread;
 	bool started;		// Set to true once running.
 	bool terminating;	// Set to true to signal termination.
+	bool killed;		// Set to true once forcefully killed.

 	// Used as a semaphore to tell the pthread thread when it may
 	// terminate.
@@ -162,6 +207,9 @@ private:
 	char* buf;
 	unsigned int buf_len;

+	// For implementing Strerror().
+	char* strerr_buffer;
+
 	static uint64_t thread_counter;
 };


@@ -30,6 +30,10 @@ void Manager::Terminate()
 	do Process(); while ( did_process );

 	// Signal all to stop.
+
+	for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ )
+		(*i)->PrepareStop();
+
 	for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ )
 		(*i)->Stop();

@@ -48,24 +52,16 @@ void Manager::Terminate()
 	terminating = false;
 	}

-void Manager::KillThreads()
-	{
-	DBG_LOG(DBG_THREADING, "Killing threads ...");
-
-	for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ )
-		(*i)->Kill();
-	}
-
 void Manager::AddThread(BasicThread* thread)
 	{
-	DBG_LOG(DBG_THREADING, "Adding thread %s ...", thread->Name().c_str());
+	DBG_LOG(DBG_THREADING, "Adding thread %s ...", thread->Name());
 	all_threads.push_back(thread);
 	idle = false;
 	}

 void Manager::AddMsgThread(MsgThread* thread)
 	{
-	DBG_LOG(DBG_THREADING, "%s is a MsgThread ...", thread->Name().c_str());
+	DBG_LOG(DBG_THREADING, "%s is a MsgThread ...", thread->Name());
 	msg_threads.push_back(thread);
 	}

@@ -91,6 +87,14 @@ double Manager::NextTimestamp(double* network_time)
 	return -1.0;
 	}

+void Manager::KillThreads()
+	{
+	DBG_LOG(DBG_THREADING, "Killing threads ...");
+
+	for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ )
+		(*i)->Kill();
+	}
+
 void Manager::Process()
 	{
 	bool do_beat = false;
@@ -114,6 +118,12 @@ void Manager::Process()
 			{
 			Message* msg = t->RetrieveOut();

+			if ( ! msg )
+				{
+				assert(t->Killed());
+				break;
+				}
+
 			if ( msg->Process() )
 				{
 				if ( network_time )
@@ -122,8 +132,7 @@ void Manager::Process()
 			else
 				{
-				string s = msg->Name() + " failed, terminating thread";
-				reporter->Error("%s", s.c_str());
+				reporter->Error("%s failed, terminating thread", msg->Name());
 				t->Stop();
 				}


@@ -49,15 +49,6 @@ public:
 	 */
 	bool Terminating() const { return terminating; }

-	/**
-	 * Immediately kills all child threads. It does however not yet join
-	 * them, one still needs to call Terminate() for that.
-	 *
-	 * This method is safe to call from a signal handler, and can in fact
-	 * be called while Terminate() is already in progress.
-	 */
-	void KillThreads();
-
 	typedef std::list<std::pair<string, MsgThread::Stats> > msg_stats_list;

 	/**
@@ -115,6 +106,13 @@ protected:
 	 */
 	virtual double NextTimestamp(double* network_time);

+	/**
+	 * Kills all threads immediately. Note that this may cause race conditions
+	 * if a child thread currently holds a lock that might block somebody
+	 * else.
+	 */
+	virtual void KillThreads();
+
 	/**
 	 * Part of the IOSource interface.
 	 */


@@ -5,6 +5,7 @@
 #include "Manager.h"
 #include <unistd.h>
+#include <signal.h>
 
 using namespace threading;
@@ -16,19 +17,17 @@ namespace threading {
 class FinishMessage : public InputMessage<MsgThread>
 	{
 public:
-	FinishMessage(MsgThread* thread) : InputMessage<MsgThread>("Finish", thread) { }
+	FinishMessage(MsgThread* thread, double network_time) : InputMessage<MsgThread>("Finish", thread),
+		network_time(network_time) { }
 
-	virtual bool Process() { return Object()->DoFinish(); }
-	};
-
-// A dummy message that's only purpose is unblock the current read operation
-// so that the child's Run() methods can check the termination status.
-class UnblockMessage : public InputMessage<MsgThread>
-	{
-public:
-	UnblockMessage(MsgThread* thread) : InputMessage<MsgThread>("Unblock", thread) { }
-
-	virtual bool Process() { return true; }
+	virtual bool Process()	{
+		bool result = Object()->OnFinish(network_time);
+		Object()->Finished();
+		return result;
+		}
+
+private:
+	double network_time;
 	};
 
 /// Sends a heartbeat to the child thread.
@@ -39,7 +38,10 @@ public:
 		: InputMessage<MsgThread>("Heartbeat", thread)
 		{ network_time = arg_network_time; current_time = arg_current_time; }
 
-	virtual bool Process() { return Object()->DoHeartbeat(network_time, current_time); }
+	virtual bool Process()	{
+		Object()->HeartbeatInChild();
+		return Object()->OnHeartbeat(network_time, current_time);
+		}
 
 private:
 	double network_time;
@@ -55,14 +57,16 @@ public:
 		INTERNAL_WARNING, INTERNAL_ERROR
 	};
 
-	ReporterMessage(Type arg_type, MsgThread* thread, const string& arg_msg)
+	ReporterMessage(Type arg_type, MsgThread* thread, const char* arg_msg)
 		: OutputMessage<MsgThread>("ReporterMessage", thread)
-		{ type = arg_type; msg = arg_msg; }
+		{ type = arg_type; msg = copy_string(arg_msg); }
+
+	~ReporterMessage() { delete [] msg; }
 
 	virtual bool Process();
 
 private:
-	string msg;
+	const char* msg;
 	Type type;
 	};
@@ -71,18 +75,19 @@ private:
 class DebugMessage : public OutputMessage<MsgThread>
 	{
 public:
-	DebugMessage(DebugStream arg_stream, MsgThread* thread, const string& arg_msg)
+	DebugMessage(DebugStream arg_stream, MsgThread* thread, const char* arg_msg)
 		: OutputMessage<MsgThread>("DebugMessage", thread)
-		{ stream = arg_stream; msg = arg_msg; }
+		{ stream = arg_stream; msg = copy_string(arg_msg); }
+
+	virtual ~DebugMessage() { delete [] msg; }
 
 	virtual bool Process()
 		{
-		string s = Object()->Name() + ": " + msg;
-		debug_logger.Log(stream, "%s", s.c_str());
+		debug_logger.Log(stream, "%s: %s", Object()->Name(), msg);
 		return true;
 		}
 private:
-	string msg;
+	const char* msg;
 	DebugStream stream;
 	};
 
 #endif
@@ -93,41 +98,39 @@ private:
 
 Message::~Message()
 	{
+	delete [] name;
 	}
 
 bool ReporterMessage::Process()
 	{
-	string s = Object()->Name() + ": " + msg;
-	const char* cmsg = s.c_str();
-
 	switch ( type ) {
 
 	case INFO:
-		reporter->Info("%s", cmsg);
+		reporter->Info("%s: %s", Object()->Name(), msg);
 		break;
 
 	case WARNING:
-		reporter->Warning("%s", cmsg);
+		reporter->Warning("%s: %s", Object()->Name(), msg);
 		break;
 
 	case ERROR:
-		reporter->Error("%s", cmsg);
+		reporter->Error("%s: %s", Object()->Name(), msg);
 		break;
 
 	case FATAL_ERROR:
-		reporter->FatalError("%s", cmsg);
+		reporter->FatalError("%s: %s", Object()->Name(), msg);
		break;
 
 	case FATAL_ERROR_WITH_CORE:
-		reporter->FatalErrorWithCore("%s", cmsg);
+		reporter->FatalErrorWithCore("%s: %s", Object()->Name(), msg);
 		break;
 
 	case INTERNAL_WARNING:
-		reporter->InternalWarning("%s", cmsg);
+		reporter->InternalWarning("%s: %s", Object()->Name(), msg);
 		break;
 
 	case INTERNAL_ERROR :
-		reporter->InternalError("%s", cmsg);
+		reporter->InternalError("%s: %s", Object()->Name(), msg);
 		break;
 
 	default:
@@ -137,32 +140,74 @@ bool ReporterMessage::Process()
 	return true;
 	}
 
-MsgThread::MsgThread() : BasicThread()
+MsgThread::MsgThread() : BasicThread(), queue_in(this, 0), queue_out(0, this)
 	{
 	cnt_sent_in = cnt_sent_out = 0;
 	finished = false;
 	thread_mgr->AddMsgThread(this);
 	}
 
+// Set by Bro's main signal handler.
+extern int signal_val;
+
+void MsgThread::OnPrepareStop()
+	{
+	if ( finished || Killed() )
+		return;
+
+	// Signal thread to terminate and wait until it has acknowledged.
+	SendIn(new FinishMessage(this, network_time), true);
+	}
+
 void MsgThread::OnStop()
 	{
-	// Signal thread to terminate and wait until it has acknowledged.
-	SendIn(new FinishMessage(this), true);
+	int signal_count = 0;
+	int old_signal_val = signal_val;
+	signal_val = 0;
 
 	int cnt = 0;
-	while ( ! finished )
+	uint64_t last_size = 0;
+	uint64_t cur_size = 0;
+
+	while ( ! (finished || Killed() ) )
 		{
-		if ( ++cnt > 1000 ) // Insurance against broken threads ...
+		// Terminate if we get another kill signal.
+		if ( signal_val == SIGTERM || signal_val == SIGINT )
 			{
-			reporter->Warning("thread %s didn't finish in time", Name().c_str());
-			break;
+			++signal_count;
+
+			if ( signal_count == 1 )
+				{
+				// Abort all threads here so that we won't hang next
+				// on another one.
+				fprintf(stderr, "received signal while waiting for thread %s, aborting all ...\n", Name());
+				thread_mgr->KillThreads();
+				}
+			else
+				{
+				// More than one signal. Abort processing
+				// right away.
+				fprintf(stderr, "received another signal while waiting for thread %s, aborting processing\n", Name());
+				exit(1);
+				}
+
+			signal_val = 0;
 			}
 
+		queue_in.WakeUp();
 		usleep(1000);
 		}
 
-	// One more message to make sure the current queue read operation unblocks.
-	SendIn(new UnblockMessage(this), true);
+	signal_val = old_signal_val;
+	}
+
+void MsgThread::OnKill()
+	{
+	// Send a message to unblock the reader if its currently waiting for
+	// input. This is just an optimization to make it terminate more
+	// quickly, even without the message it will eventually time out.
+	queue_in.WakeUp();
 	}
 
 void MsgThread::Heartbeat()
@@ -170,25 +215,20 @@ void MsgThread::Heartbeat()
 	SendIn(new HeartbeatMessage(this, network_time, current_time()));
 	}
 
-bool MsgThread::DoHeartbeat(double network_time, double current_time)
+void MsgThread::HeartbeatInChild()
 	{
-	string n = Name();
-
-	n = Fmt("bro: %s (%" PRIu64 "/%" PRIu64 ")", n.c_str(),
+	string n = Fmt("bro: %s (%" PRIu64 "/%" PRIu64 ")", Name(),
 		cnt_sent_in - queue_in.Size(),
 		cnt_sent_out - queue_out.Size());
 
 	SetOSName(n.c_str());
-	return true;
 	}
 
-bool MsgThread::DoFinish()
+void MsgThread::Finished()
 	{
 	// This is thread-safe "enough", we're the only one ever writing
 	// there.
 	finished = true;
-	return true;
 	}
 
 void MsgThread::Info(const char* msg)
@@ -245,7 +285,7 @@ void MsgThread::SendIn(BasicInputMessage* msg, bool force)
 		return;
 		}
 
-	DBG_LOG(DBG_THREADING, "Sending '%s' to %s ...", msg->Name().c_str(), Name().c_str());
+	DBG_LOG(DBG_THREADING, "Sending '%s' to %s ...", msg->Name(), Name());
 
 	queue_in.Put(msg);
 	++cnt_sent_in;
@@ -268,9 +308,10 @@ void MsgThread::SendOut(BasicOutputMessage* msg, bool force)
 BasicOutputMessage* MsgThread::RetrieveOut()
 	{
 	BasicOutputMessage* msg = queue_out.Get();
-	assert(msg);
+	if ( ! msg )
+		return 0;
 
-	DBG_LOG(DBG_THREADING, "Retrieved '%s' from %s", msg->Name().c_str(), Name().c_str());
+	DBG_LOG(DBG_THREADING, "Retrieved '%s' from %s", msg->Name(), Name());
 
 	return msg;
 	}
@@ -278,10 +319,12 @@ BasicOutputMessage* MsgThread::RetrieveOut()
 BasicInputMessage* MsgThread::RetrieveIn()
 	{
 	BasicInputMessage* msg = queue_in.Get();
-	assert(msg);
+
+	if ( ! msg )
+		return 0;
 
 #ifdef DEBUG
-	string s = Fmt("Retrieved '%s' in %s", msg->Name().c_str(), Name().c_str());
+	string s = Fmt("Retrieved '%s' in %s", msg->Name(), Name());
 	Debug(DBG_THREADING, s.c_str());
 #endif
@@ -290,26 +333,32 @@ BasicInputMessage* MsgThread::RetrieveIn()
 
 void MsgThread::Run()
 	{
-	while ( true )
+	while ( ! (finished || Killed() ) )
 		{
-		// When requested to terminate, we only do so when
-		// all input has been processed.
-		if ( Terminating() && ! queue_in.Ready() )
-			break;
-
 		BasicInputMessage* msg = RetrieveIn();
 
+		if ( ! msg )
+			continue;
+
 		bool result = msg->Process();
+		delete msg;
 
 		if ( ! result )
 			{
-			string s = msg->Name() + " failed, terminating thread (MsgThread)";
+			string s = Fmt("%s failed, terminating thread (MsgThread)", Name());
 			Error(s.c_str());
-			Stop();
 			break;
 			}
+		}
 
-		delete msg;
+	// In case we haven't send the finish method yet, do it now. Reading
+	// global network_time here should be fine, it isn't changing
+	// anymore.
+	if ( ! finished )
+		{
+		OnFinish(network_time);
+		Finished();
 		}
 	}
Some files were not shown because too many files have changed in this diff.