Merge remote-tracking branch 'origin/master' into topic/bernhard/sqlite

This commit is contained in:
Bernhard Amann 2012-10-08 10:31:22 -07:00
commit 87ef8fe649
360 changed files with 6010 additions and 990 deletions

CHANGES

@@ -1,4 +1,331 @@
2.1-56 | 2012-10-03 16:04:52 -0700
* Add general FAQ entry about upgrading Bro. (Jon Siwek)
2.1-53 | 2012-10-03 16:00:40 -0700
* Add new Tunnel::delay_teredo_confirmation option that indicates
that the Teredo analyzer should wait until it sees both sides of a
connection using a valid Teredo encapsulation before issuing a
protocol_confirmation. Default is on. Addresses #890. (Jon Siwek)
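A minimal sketch of how a site policy could revert to the old behavior (the option name is taken from the entry above; the snippet itself is illustrative):

```bro
# local.bro: confirm Teredo on the first well-formed encapsulation
# instead of waiting for both sides of the connection.
redef Tunnel::delay_teredo_confirmation = F;
```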
2.1-50 | 2012-10-02 12:06:08 -0700
* Fix a typing issue that prevented the ElasticSearch timeout from
working. (Matthias Vallentin)
* Use second granularity for ElasticSearch timeouts. (Matthias
Vallentin)
* Fix compile issues with older versions of libcurl, which don't
offer *_MS timeout constants. (Matthias Vallentin)
2.1-47 | 2012-10-02 11:59:29 -0700
* Fix for the input framework: BroStrings were constructed without a
final \0, which makes them unusable by basically all internal
functions (like to_count). (Bernhard Amann)
* Remove deprecated script functionality (see NEWS for details).
(Daniel Thayer)
2.1-39 | 2012-09-29 14:09:16 -0700
* Reliability adjustments to istate tests with network
communication. (Jon Siwek)
2.1-37 | 2012-09-25 14:21:37 -0700
* Reenable some tests that previously would cause Bro to exit with
an error. (Daniel Thayer)
* Fix parsing of large integers on 32-bit systems. (Daniel Thayer)
* Serialize language.when unit test with the "comm" group. (Jon
Siwek)
2.1-32 | 2012-09-24 16:24:34 -0700
* Fix race condition in language/when.bro test. (Daniel Thayer)
2.1-26 | 2012-09-23 08:46:03 -0700
* Add an item to FAQ page about broctl options. (Daniel Thayer)
* Add more language tests. We now have tests of all built-in Bro
data types (including different representations of constant
values, and max./min. values), keywords, and operators (including
special properties of certain operators, such as short-circuit
evaluation and associativity). (Daniel Thayer)
* Fix construction of ip6_ah (Authentication Header) record values.
Authentication Headers with a Payload Len field set to zero would
cause a crash due to invalid memory allocation because the
previous code assumed Payload Len would always be large enough to
contain all mandatory fields of the header. (Jon Siwek)
* Update compile/dependency docs for OS X. (Jon Siwek)
* Adjusting Mac binary packaging script. Setting CMAKE_PREFIX_PATH
helps link against standard system libs instead of ones that come
from other package manager (e.g. MacPorts). (Jon Siwek)
* Adjusting some unit tests that do cluster communication. (Jon Siwek)
* Small change to non-blocking DNS initialization. (Jon Siwek)
* Reorder a few statements in scan.l to make 1.5msecs etc work.
Addresses #872. (Bernhard Amann)
2.1-6 | 2012-09-06 23:23:14 -0700
* Fixed a bug where "a -= b" (both operands are intervals) was not
allowed in Bro scripts (although "a = a - b" is allowed). (Daniel
Thayer)
* Fixed a bug where the "!=" operator with subnet operands was
treated the same as the "==" operator. (Daniel Thayer)
* Add sleeps to configuration_update test for better reliability.
(Jon Siwek)
* Fix a segfault when iterating over a set when using malformed
index. (Daniel Thayer)
2.1 | 2012-08-28 16:46:42 -0700
* Make bif.identify_magic robust against FreeBSD's libmagic config.
(Robin Sommer)
* Remove automatic use of gperftools on non-Linux systems.
--enable-perftools must now explicitly be supplied to ./configure
on non-Linux systems to link against the tcmalloc library.
* Fix uninitialized value for 'is_partial' in TCP analyzer. (Jon
Siwek)
* Parse 64-bit consts in Bro scripts correctly. (Bernhard Amann)
* Output 64-bit counts correctly on 32-bit machines (Bernhard Amann)
* Input framework fixes, including: (Bernhard Amann)
- One of the change events got the wrong parameters.
- Escape commas in sets and vectors that were unescaped before
tokenization.
- Handling of zero-length-strings as last element in a set was
broken (sets ending with a ,).
- Hashing of lines just containing zero-length-strings was broken.
- Make set_separators different from , work for input framework.
- Input framework was not handling counts and ints out of
32-bit-range correctly.
- Errors in single lines do not kill processing, but simply ignore
the line, log it, and continue.
* Update documentation for builtin types. (Daniel Thayer)
- Add missing description of interval "msec" unit.
- Improved description of pattern by clarifying the issue of
operand order and difference between exact and embedded
matching.
* Documentation fixes for signature 'eval' conditions. (Jon Siwek)
* Remove orphaned 1.5 unit tests. (Jon Siwek)
* Add type checking for signature 'eval' condition functions. (Jon
Siwek)
* Adding an identifier to the SMTP blocklist notices for duplicate
suppression. (Seth Hall)
2.1-beta-45 | 2012-08-22 16:11:10 -0700
* Add an option to the input framework that allows the user to choose
not to abort upon encountering unsupported types
(files/functions). (Bernhard Amann)
2.1-beta-41 | 2012-08-22 16:05:21 -0700
* Add test serialization to "leak" unit tests that use
communication. (Jon Siwek)
* Change to metrics/basic-cluster unit test for reliability. (Jon
Siwek)
* Fixed ack tracking which could overflow quickly in some
situations. (Seth Hall)
* Minor tweak to coverage.bare-mode-errors unit test to work with a
symlinked 'scripts' dir. (Jon Siwek)
2.1-beta-35 | 2012-08-22 08:44:52 -0700
* Add testcase for input framework reading sets (rather than
tables). (Bernhard Amann)
2.1-beta-31 | 2012-08-21 15:46:05 -0700
* Tweak to rotate-custom.bro unit test. (Jon Siwek)
* Ignore small mem leak every rotation interval for dataseries logs.
(Jon Siwek)
2.1-beta-28 | 2012-08-21 08:32:42 -0700
* Linking ES docs into logging document. (Robin Sommer)
2.1-beta-27 | 2012-08-20 20:06:20 -0700
* Add the Stream record to Log::active_streams to make more dynamic
logging possible. (Seth Hall)
* Fix portability of printing to files returned by
open("/dev/stderr"). (Jon Siwek)
* Fix mime type diff canonifier to also skip mime_desc columns. (Jon
Siwek)
* Unit test tweaks/fixes. (Jon Siwek)
- Some baselines for tests in "leaks" group were outdated.
- Changed a few of the cluster/communication tests to terminate
more explicitly instead of relying on btest-bg-wait to kill
processes. This makes the tests finish faster in the success case
and makes the reason for failure clearer in the failing case.
* Fix memory leak of serialized IDs when compiled with
--enable-debug. (Jon Siwek)
2.1-beta-21 | 2012-08-16 11:48:56 -0700
* Installing a handler for running out of memory in "new". Bro will
now print an error message in that case rather than abort with an
uncaught exception. (Robin Sommer)
2.1-beta-20 | 2012-08-16 11:43:31 -0700
* Fixed potential problems with ElasticSearch output plugin. (Seth
Hall)
2.1-beta-13 | 2012-08-10 12:28:04 -0700
* Reporter warnings and errors now print to stderr by default. The
new options Reporter::warnings_to_stderr and
Reporter::errors_to_stderr can be used to disable this. (Seth Hall)
2.1-beta-9 | 2012-08-10 12:24:29 -0700
* Add more BIF tests. (Daniel Thayer)
2.1-beta-6 | 2012-08-10 12:22:52 -0700
* Fix bug in input framework with an edge case. (Bernhard Amann)
* Fix small bug in input framework test script. (Bernhard Amann)
2.1-beta-3 | 2012-08-03 10:46:49 -0700
* Merge branch 'master' of ssh://git.bro-ids.org/bro (Robin Sommer)
* Fix configure script to exit with non-zero status on error (Jon
Siwek)
* Improve ASCII output performance. (Robin Sommer)
2.1-beta | 2012-07-30 11:59:53 -0700
* Improve log filter compatibility with remote logging. Addresses
#842. (Jon Siwek)
2.0-907 | 2012-07-30 09:13:36 -0700
* Add missing breaks to switch cases in
ElasticSearch::HTTPReceive(). (Jon Siwek)
2.0-905 | 2012-07-28 16:24:34 -0700
* Fix log manager hanging on waiting for pending file rotations,
plus writer API tweak for failed rotations. Addresses #860. (Jon
Siwek and Robin Sommer)
* Tweaking logs-to-elasticsearch.bro so that it doesn't do anything
if ES server is unset. (Robin Sommer)
2.0-902 | 2012-07-27 12:42:13 -0700
* New variable in logging framework Log::active_streams to indicate
Log::ID enums which are currently active. (Seth Hall)
* Reworked how the logs-to-elasticsearch script works to stop
abusing the logging framework. (Seth Hall)
* Fix input test for recent default change on fastpath. (Robin
Sommer)
2.0-898 | 2012-07-27 12:22:03 -0700
* Small (potential performance) improvement for logging framework. (Seth Hall)
* Script-level rotation postprocessor fix. This fixes a problem with
writers that don't have a postprocessor. (Seth Hall)
* Update input framework documentation to reflect want_record
change. (Bernhard Amann)
* Fix crash when encountering an InterpreterException in a predicate
in logging or input Framework. (Bernhard Amann)
* Input framework: Make want_record=T the default for events
(Bernhard Amann)
* Changing the start/end markers in logs to open/close, now
reflecting wall clock time. (Robin Sommer)
2.0-891 | 2012-07-26 17:15:10 -0700
* Reader/writer API: preventing plugins from receiving further
messages after a failure. (Robin Sommer)
* New test for input framework that fails to find a file. (Robin
Sommer)
* Improving error handling for threads. (Robin Sommer)
* Tweaking the custom-rotate test to produce stable output. (Robin
Sommer)
2.0-884 | 2012-07-26 14:33:21 -0700
* Add comprehensive error handling for close() calls. (Jon Siwek)
* Add more test cases for input framework. (Bernhard Amann)
* Input framework: make error output for non-matching event types
much more verbose. (Bernhard Amann)
2.0-877 | 2012-07-25 17:20:34 -0700
* Fix double close() in FileSerializer class. (Jon Siwek)
* Fix build warnings. (Daniel Thayer)
* Fixes to ElasticSearch plugin to make libcurl handle http
responses correctly. (Seth Hall)
* Fixing FreeBSD compiler error. (Robin Sommer)
* Silencing compiler warnings. (Robin Sommer)
2.0-871 | 2012-07-25 13:08:00 -0700
* Fix complaint from valgrind about uninitialized memory usage. (Jon


@@ -88,24 +88,30 @@ if (LIBGEOIP_FOUND)
list(APPEND OPTLIBS ${LibGeoIP_LIBRARY})
endif ()
set(USE_PERFTOOLS false)
set(HAVE_PERFTOOLS false)
set(USE_PERFTOOLS_DEBUG false)
set(USE_PERFTOOLS_TCMALLOC false)
if (NOT DISABLE_PERFTOOLS)
find_package(GooglePerftools)
endif ()
if (GOOGLEPERFTOOLS_FOUND)
include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR})
set(USE_PERFTOOLS true)
set(HAVE_PERFTOOLS true)
# Non-Linux systems may not be well-supported by gperftools, so
# require explicit request from user to enable it in that case.
if (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ENABLE_PERFTOOLS)
set(USE_PERFTOOLS_TCMALLOC true)
if (ENABLE_PERFTOOLS_DEBUG)
# Enable heap debugging with perftools.
set(USE_PERFTOOLS_DEBUG true)
list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES_DEBUG})
else ()
# Link in tcmalloc for better performance.
list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES})
if (ENABLE_PERFTOOLS_DEBUG)
# Enable heap debugging with perftools.
set(USE_PERFTOOLS_DEBUG true)
include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR})
list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES_DEBUG})
else ()
# Link in tcmalloc for better performance.
list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES})
endif ()
endif ()
endif ()
@@ -224,7 +230,8 @@ message(
"\nAux. Tools: ${INSTALL_AUX_TOOLS}"
"\n"
"\nGeoIP: ${USE_GEOIP}"
"\nGoogle perftools: ${USE_PERFTOOLS}"
"\ngperftools found: ${HAVE_PERFTOOLS}"
"\n tcmalloc: ${USE_PERFTOOLS_TCMALLOC}"
"\n debugging: ${USE_PERFTOOLS_DEBUG}"
"\ncURL: ${USE_CURL}"
"\n"

NEWS

@@ -7,8 +7,33 @@ release. For a complete list of changes, see the ``CHANGES`` file
(note that submodules, such as BroControl and Broccoli, come with
their own CHANGES.)
Bro 2.1 Beta
------------
Bro 2.2
-------
New Functionality
~~~~~~~~~~~~~~~~~
- TODO: Update.
Changed Functionality
~~~~~~~~~~~~~~~~~~~~~
- We removed the following, already deprecated, functionality:
* Scripting language:
- &disable_print_hook attribute.
* BiF functions:
- parse_dotted_addr(), dump_config(),
make_connection_persistent(), generate_idmef(),
split_complete()
- Removed a now unused argument from "do_split" helper function.
- "this" is no longer a reserved keyword.
Bro 2.1
-------
New Functionality
~~~~~~~~~~~~~~~~~
@@ -82,7 +107,8 @@ New Functionality
* ElasticSearch: a distributed, RESTful storage and search engine
built on top of Apache Lucene. It scales very well, both
for distributed indexing and distributed searching.
for distributed indexing and distributed searching. See
doc/logging-elasticsearch.rst for more information.
Note that at this point, we consider Bro's support for these two
formats as prototypes for collecting experience with alternative
@@ -101,9 +127,14 @@ the full set.
* Bro now requires CMake >= 2.6.3.
* Bro now links in tcmalloc (part of Google perftools) if found at
configure time. Doing so can significantly improve memory and
CPU use.
* On Linux, Bro now links in tcmalloc (part of Google perftools)
if found at configure time. Doing so can significantly improve
memory and CPU use.
On the other platforms, the new configure option
--enable-perftools can be used to enable linking to tcmalloc.
(Note that perftools's support for non-Linux platforms may be
less reliable).
- The configure switch --enable-brov6 is gone.
@@ -152,14 +183,15 @@ the full set.
understands.
- ASCII logs now record the time when they were opened/closed at the
beginning and end of the file, respectively. The options
LogAscii::header_prefix and LogAscii::include_header have been
renamed to LogAscii::meta_prefix and LogAscii::include_meta,
beginning and end of the file, respectively (wall clock). The
options LogAscii::header_prefix and LogAscii::include_header have
been renamed to LogAscii::meta_prefix and LogAscii::include_meta,
respectively.
- The ASCII writers "header_*" options have been renamed to "meta_*"
(because there's now also a footer).
Bro 2.0
-------


@@ -1 +1 @@
2.0-871
2.1-56

@@ -1 +1 @@
Subproject commit 4f01ea40817ad232a96535c64fce7dc16d4e2fff
Subproject commit a93ef1373512c661ffcd0d0a61bd19b96667e0d5

@@ -1 +1 @@
Subproject commit c691c01e9cefae5a79bcd4b0f84ca387c8c587a7
Subproject commit 6748ec3a96d582a977cd9114ef19c76fe75c57ff

@@ -1 +1 @@
Subproject commit 8234b8903cbc775f341bdb6a1c0159981d88d27b
Subproject commit ebfa4de45a839e58aec200e7e4bad33eaab4f1ed

@@ -1 +1 @@
Subproject commit 231358f166f61cc32201a8ac3671ea0c0f5c324e
Subproject commit b0e3c0d84643878c135dcb8a9774ed78147dd648

@@ -1 +1 @@
Subproject commit 44441a6c912c7c9f8d4771e042306ec5f44e461d
Subproject commit 44a43e62452302277f88e8fac08d1f979dc53f98

cmake

@@ -1 +1 @@
Subproject commit 2a72c5e08e018cf632033af3920432d5f684e130
Subproject commit 125f9a5fa851381d0350efa41a4d14f27be263a2

configure vendored

@@ -1,7 +1,7 @@
#!/bin/sh
# Convenience wrapper for easily viewing/setting options that
# the project's CMake scripts will recognize
set -e
command="$0 $*"
# check for `cmake` command
@@ -29,6 +29,8 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
Optional Features:
--enable-debug compile in debugging mode
--enable-mobile-ipv6 analyze mobile IPv6 features defined by RFC 6275
--enable-perftools force use of Google perftools on non-Linux systems
(automatically on when perftools is present on Linux)
--enable-perftools-debug use Google's perftools for debugging
--disable-broccoli don't build or install the Broccoli library
--disable-broctl don't install Broctl
@@ -98,6 +100,7 @@ append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl
append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro
append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
append_cache_entry ENABLE_DEBUG BOOL false
append_cache_entry ENABLE_PERFTOOLS BOOL false
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
append_cache_entry BinPAC_SKIP_INSTALL BOOL true
append_cache_entry BUILD_SHARED_LIBS BOOL true
@@ -146,7 +149,11 @@ while [ $# -ne 0 ]; do
--enable-mobile-ipv6)
append_cache_entry ENABLE_MOBILE_IPV6 BOOL true
;;
--enable-perftools)
append_cache_entry ENABLE_PERFTOOLS BOOL true
;;
--enable-perftools-debug)
append_cache_entry ENABLE_PERFTOOLS BOOL true
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL true
;;
--disable-broccoli)


@@ -29,7 +29,7 @@ class BroLexer(RegexLexer):
r'|vector)\b', Keyword.Type),
(r'(T|F)\b', Keyword.Constant),
(r'(&)((?:add|delete|expire)_func|attr|(create|read|write)_expire'
r'|default|disable_print_hook|raw_output|encrypt|group|log'
r'|default|raw_output|encrypt|group|log'
r'|mergeable|optional|persistent|priority|redef'
r'|rotate_(?:interval|size)|synchronized)\b', bygroups(Punctuation,
Keyword)),

Binary file not shown.


@@ -12,6 +12,43 @@ Frequently Asked Questions
Installation and Configuration
==============================
How do I upgrade to a new version of Bro?
-----------------------------------------
There are two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over.
Re-Use Previous Install Prefix
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you choose to configure and install Bro with the same prefix
directory as before, local customization and configuration to files in
``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten
(``$prefix`` indicating the root of where Bro was installed). Also, logs
generated at run-time won't be touched by the upgrade. (But making
a backup of local changes before proceeding is still recommended.)
After upgrading, remember to check ``$prefix/share/bro/site`` and
``$prefix/etc`` for ``.example`` files, which indicate the
distribution's version of the file differs from the local one, which may
include local changes. Review the differences, and make adjustments
as necessary (for differences that aren't the result of a local change,
use the new version's).
Pick a New Install Prefix
^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to install the newer version in a different prefix
directory than before, you can just copy local customization and
configuration files from ``$prefix/share/bro/site`` and ``$prefix/etc``
to the new location (``$prefix`` indicating the root of where Bro was
originally installed). Make sure to review the files for differences
before copying and make adjustments as necessary (for differences that
aren't the result of a local change, use the new version's). Of
particular note, the copied version of ``$prefix/etc/broctl.cfg`` is
likely to need changes to the ``SpoolDir`` and ``LogDir`` settings.
How can I tune my operating system for best capture performance?
----------------------------------------------------------------
@@ -46,7 +83,7 @@ directions:
http://securityonion.blogspot.com/2011/10/when-is-full-packet-capture-not-full.html
What does an error message like ``internal error: NB-DNS error`` mean?
---------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------
That often means that DNS is not set up correctly on the system
running Bro. Try verifying from the command line that DNS lookups
@@ -65,6 +102,15 @@ Generally, please note that we do not regularly test OpenBSD builds.
We appreciate any patches that improve Bro's support for this
platform.
How do BroControl options affect Bro script variables?
------------------------------------------------------
Some (but not all) BroControl options override a corresponding Bro script variable.
For example, setting the BroControl option "LogRotationInterval" will override
the value of the Bro script variable "Log::default_rotation_interval".
See the :doc:`BroControl Documentation <components/broctl/README>` to find out
which BroControl options override Bro script variables, and for more discussion
on site-specific customization.
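As a sketch of that interaction (the variable name comes from the FAQ entry above; the value is illustrative), a setting like this in a local Bro script only takes effect when the corresponding BroControl option does not override it:

```bro
# Illustrative only: rotate logs hourly. The BroControl option
# LogRotationInterval, if set in broctl.cfg, overrides this value.
redef Log::default_rotation_interval = 1 hr;
```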
Usage
=====


@@ -383,3 +383,4 @@ Bro supports the following output formats other than ASCII:
:maxdepth: 1
logging-dataseries
logging-elasticsearch


@@ -1,5 +1,6 @@
.. _CMake: http://www.cmake.org
.. _SWIG: http://www.swig.org
.. _Xcode: https://developer.apple.com/xcode/
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://mxcl.github.com/homebrew
@@ -85,17 +86,20 @@ The following dependencies are required to build Bro:
* Mac OS X
Snow Leopard (10.6) comes with all required dependencies except for CMake_.
Compiling source code on Macs requires first downloading Xcode_,
then going through its "Preferences..." -> "Downloads" menus to
install the "Command Line Tools" component.
Lion (10.7) comes with all required dependencies except for CMake_ and SWIG_.
Lion (10.7) and Mountain Lion (10.8) come with all required
dependencies except for CMake_, SWIG_, and ``libmagic``.
Distributions of these dependencies can be obtained from the project websites
linked above, but they're also likely available from your preferred Mac OS X
package management system (e.g. MacPorts_, Fink_, or Homebrew_).
Distributions of these dependencies can be obtained from the project
websites linked above, but they're also likely available from your
preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
or Homebrew_).
Note that the MacPorts ``swig`` package may not include any specific
language support so you may need to also install ``swig-ruby`` and
``swig-python``.
Specifically for MacPorts, the ``swig``, ``swig-ruby``, ``swig-python``
and ``file`` packages provide the required dependencies.
Optional Dependencies
~~~~~~~~~~~~~~~~~~~~~


@@ -55,8 +55,8 @@ The Bro scripting language supports the following built-in types.
A temporal type representing a relative time. An ``interval``
constant can be written as a numeric constant followed by a time
unit where the time unit is one of ``usec``, ``sec``, ``min``,
``hr``, or ``day`` which respectively represent microseconds,
unit where the time unit is one of ``usec``, ``msec``, ``sec``, ``min``,
``hr``, or ``day`` which respectively represent microseconds, milliseconds,
seconds, minutes, hours, and days. Whitespace between the numeric
constant and time unit is optional. Appending the letter "s" to the
time unit in order to pluralize it is also optional (to no semantic
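The newly documented ``msec`` unit, together with the optional whitespace and pluralization described above, permits constants like these (an illustrative sketch):

```bro
# Interval constants; whitespace and the plural "s" are both optional.
const a = 1.5 msecs;   # milliseconds, equivalently written 1.5msec
const b = 90 sec;      # the same value as 1.5 min
```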
@@ -95,14 +95,14 @@ The Bro scripting language supports the following built-in types.
and embedded.
In exact matching the ``==`` equality relational operator is used
with one :bro:type:`string` operand and one :bro:type:`pattern`
operand to check whether the full string exactly matches the
pattern. In this case, the ``^`` beginning-of-line and ``$``
end-of-line anchors are redundant since pattern is implicitly
anchored to the beginning and end of the line to facilitate an exact
match. For example::
with one :bro:type:`pattern` operand and one :bro:type:`string`
operand (order of operands does not matter) to check whether the full
string exactly matches the pattern. In exact matching, the ``^``
beginning-of-line and ``$`` end-of-line anchors are redundant since
the pattern is implicitly anchored to the beginning and end of the
line to facilitate an exact match. For example::
"foo" == /foo|bar/
/foo|bar/ == "foo"
yields true, while::
@@ -110,9 +110,9 @@ The Bro scripting language supports the following built-in types.
yields false. The ``!=`` operator would yield the negation of ``==``.
In embedded matching the ``in`` operator is again used with one
:bro:type:`string` operand and one :bro:type:`pattern` operand
(which must be on the left-hand side), but tests whether the pattern
In embedded matching the ``in`` operator is used with one
:bro:type:`pattern` operand (which must be on the left-hand side) and
one :bro:type:`string` operand, but tests whether the pattern
appears anywhere within the given string. For example::
/foo|bar/ in "foobar"
@@ -600,10 +600,6 @@ scripting language supports the following built-in attributes.
.. TODO: needs to be documented.
.. bro:attr:: &disable_print_hook
Deprecated. Will be removed.
.. bro:attr:: &raw_output
Opens a file in raw mode, i.e., non-ASCII characters are not


@@ -229,20 +229,10 @@ matched. The following context conditions are defined:
confirming the match. If false is returned, no signature match is
going to be triggered. The function has to be of type ``function
cond(state: signature_state, data: string): bool``. Here,
``content`` may contain the most recent content chunk available at
``data`` may contain the most recent content chunk available at
the time the signature was matched. If no such chunk is available,
``content`` will be the empty string. ``signature_state`` is
defined as follows:
.. code:: bro
type signature_state: record {
id: string; # ID of the signature
conn: connection; # Current connection
is_orig: bool; # True if current endpoint is originator
payload_size: count; # Payload size of the first packet
};
``data`` will be the empty string. See :bro:type:`signature_state`
for its definition.
``payload-size <cmp> <integer>``
Compares the integer to the size of the payload of a packet. For


@@ -3,7 +3,13 @@
# This script creates binary packages for Mac OS X.
# They can be found in ../build/ after running.
./check-cmake || { exit 1; }
cmake -P /dev/stdin << "EOF"
if ( ${CMAKE_VERSION} VERSION_LESS 2.8.9 )
message(FATAL_ERROR "CMake >= 2.8.9 required to build package")
endif ()
EOF
[ $? -ne 0 ] && exit 1;
type sw_vers > /dev/null 2>&1 || {
echo "Unable to get Mac OS X version" >&2;
@@ -34,26 +40,26 @@ prefix=/opt/bro
cd ..
# Minimum Bro
CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--disable-broccoli --disable-broctl --pkg-name-prefix=Bro-minimal \
--binary-package
( cd build && make package )
# Full Bro package
CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--pkg-name-prefix=Bro --binary-package
( cd build && make package )
# Broccoli
cd aux/broccoli
CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--binary-package
( cd build && make package && mv *.dmg ../../../build/ )
cd ../..
# Broctl
cd aux/broctl
CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
--binary-package
( cd build && make package && mv *.dmg ../../../build/ )
cd ../..


@@ -8,8 +8,16 @@ export {
## The default input reader used. Defaults to `READER_ASCII`.
const default_reader = READER_ASCII &redef;
## The default reader mode used. Defaults to `MANUAL`.
const default_mode = MANUAL &redef;
## Flag that controls whether the input framework accepts records
## that contain unsupported types (at the moment, file and
## function). If true, the input framework will warn in these
## cases but continue. If false, it will abort. Defaults to
## false (abort).
const accept_unsupported_types = F &redef;
## TableFilter description type used for the `table` method.
type TableDescription: record {
## Common definitions for tables and events
@@ -82,11 +90,11 @@ export {
## Record describing the fields to be retrieved from the source input.
fields: any;
## If want_record if false (default), the event receives each value in fields as a seperate argument.
## If it is set to true, the event receives all fields in a signle record value.
want_record: bool &default=F;
## If want_record is false, the event receives each value in fields as a separate argument.
## If it is set to true (default), the event receives all fields in a single record value.
want_record: bool &default=T;
## The event that is rised each time a new line is received from the reader.
## The event that is raised each time a new line is received from the reader.
## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
ev: any;
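A sketch of what the new default means for event handlers (the record and event names here are hypothetical; the handler signature follows the input framework's description/event-type/fields convention): with want_record=T the handler receives one record argument instead of one argument per field.

```bro
module Example;

type Entry: record {
    host: addr;
    msg:  string;
};

# With want_record=T (now the default), all fields arrive as one record.
event entry_line(desc: Input::EventDescription, tpe: Input::Event, e: Entry)
    {
    print e$host, e$msg;
    }

event bro_init()
    {
    # want_record=T need not be given explicitly anymore.
    Input::add_event([$source="entries.log", $name="entries",
                      $fields=Entry, $ev=entry_line]);
    }
```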


@@ -96,6 +96,12 @@ export {
## file name. Generally, filenames are expected to be given
## without any extensions; writers will add appropriate
## extensions automatically.
##
## If this path is found to conflict with another filter's
## for the same writer type, it is automatically corrected
## by appending "-N", where N is the smallest integer greater than
## or equal to 2 that allows the corrected path name to not
## conflict with another filter's.
path: string &optional;
## A function returning the output path for recording entries
@@ -115,7 +121,10 @@ export {
## rec: An instance of the streams's ``columns`` type with its
## fields set to the values to be logged.
##
## Returns: The path to be used for the filter.
## Returns: The path to be used for the filter, which will be subject
## to the same automatic correction rules as the *path*
## field of :bro:type:`Log::Filter` in the case of conflicts
## with other filters trying to use the same writer/path pair.
path_func: function(id: ID, path: string, rec: any): string &optional;
## Subset of column names to record. If not given, all
@@ -318,6 +327,11 @@ export {
## Log::default_rotation_postprocessor_cmd
## Log::default_rotation_postprocessors
global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool;
## The streams which are currently active and not disabled.
## This table is not meant to be modified by users! Only use it for
## examining which streams are active.
global active_streams: table[ID] of Stream = table();
}
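The new table makes a check like this possible (a sketch; the priority just ensures it runs after streams are created):

```bro
# Print which log streams are enabled once bro_init handlers have run.
event bro_init() &priority=-10
    {
    for ( id in Log::active_streams )
        print fmt("active log stream: %s", id);
    }
```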
# We keep a script-level copy of all filters so that we can manipulate them.
@@ -332,22 +346,23 @@ function __default_rotation_postprocessor(info: RotationInfo) : bool
{
if ( info$writer in default_rotation_postprocessors )
return default_rotation_postprocessors[info$writer](info);
return F;
else
# Return T by default so that postprocessor-less writers don't shutdown.
return T;
}
function default_path_func(id: ID, path: string, rec: any) : string
{
# The suggested path value is a previous result of this function
# or a filter path explicitly set by the user, so continue using it.
if ( path != "" )
return path;
local id_str = fmt("%s", id);
local parts = split1(id_str, /::/);
if ( |parts| == 2 )
{
# The suggested path value is a previous result of this function
# or a filter path explicitly set by the user, so continue using it.
if ( path != "" )
return path;
# Example: Notice::LOG -> "notice"
if ( parts[2] == "LOG" )
{
@@ -402,11 +417,15 @@ function create_stream(id: ID, stream: Stream) : bool
if ( ! __create_stream(id, stream) )
return F;
active_streams[id] = stream;
return add_default_filter(id);
}
function disable_stream(id: ID) : bool
{
delete active_streams[id];
return __disable_stream(id);
}


@@ -23,11 +23,13 @@ export {
const index_prefix = "bro" &redef;
## The ES type prefix comes before the name of the related log.
## e.g. prefix = "bro_" would create types of bro_dns, bro_software, etc.
## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout.
## This is not working!
## The time before an ElasticSearch transfer will timeout. Note that
## the fractional part of the timeout will be ignored. In particular, time
## specifications less than a second result in a timeout value of 0, which
## means "no timeout."
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before


@@ -1,5 +1,5 @@
##! This framework is intended to create an output and filtering path for
##! internal messages/warnings/errors. It should typically be loaded to
##! This framework is intended to create an output and filtering path for
##! internal messages/warnings/errors. It should typically be loaded to
##! avoid Bro spewing internal messages to standard error and instead log
##! them to a file in a standard way. Note that this framework deals with
##! the handling of internally-generated reporter messages, for the
@ -13,11 +13,11 @@ export {
redef enum Log::ID += { LOG };
## An indicator of reporter message severity.
type Level: enum {
type Level: enum {
## Informational, not needing specific attention.
INFO,
INFO,
## Warning of a potential problem.
WARNING,
WARNING,
## A non-fatal error that should be addressed, but doesn't
## terminate program execution.
ERROR
@ -36,24 +36,55 @@ export {
## Not all reporter messages will have locations in them though.
location: string &log &optional;
};
## Tunable for sending reporter warning messages to STDERR. The option to
## turn it off is presented here in case Bro is being run by some
## external harness and shouldn't output anything to the console.
const warnings_to_stderr = T &redef;
## Tunable for sending reporter error messages to STDERR. The option to
## turn it off is presented here in case Bro is being run by some
## external harness and shouldn't output anything to the console.
const errors_to_stderr = T &redef;
}
global stderr: file;
event bro_init() &priority=5
{
Log::create_stream(Reporter::LOG, [$columns=Info]);
if ( errors_to_stderr || warnings_to_stderr )
stderr = open("/dev/stderr");
}
event reporter_info(t: time, msg: string, location: string)
event reporter_info(t: time, msg: string, location: string) &priority=-5
{
Log::write(Reporter::LOG, [$ts=t, $level=INFO, $message=msg, $location=location]);
}
event reporter_warning(t: time, msg: string, location: string)
event reporter_warning(t: time, msg: string, location: string) &priority=-5
{
if ( warnings_to_stderr )
{
if ( t > double_to_time(0.0) )
print stderr, fmt("WARNING: %.6f %s (%s)", t, msg, location);
else
print stderr, fmt("WARNING: %s (%s)", msg, location);
}
Log::write(Reporter::LOG, [$ts=t, $level=WARNING, $message=msg, $location=location]);
}
event reporter_error(t: time, msg: string, location: string)
event reporter_error(t: time, msg: string, location: string) &priority=-5
{
if ( errors_to_stderr )
{
if ( t > double_to_time(0.0) )
print stderr, fmt("ERROR: %.6f %s (%s)", t, msg, location);
else
print stderr, fmt("ERROR: %s (%s)", msg, location);
}
Log::write(Reporter::LOG, [$ts=t, $level=ERROR, $message=msg, $location=location]);
}
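Since `warnings_to_stderr` and `errors_to_stderr` exist precisely for the external-harness case described above, a harness-side configuration might look like this hedged sketch (placed, for example, in a site policy file):

```bro
# Keep reporter warnings/errors off the console; they are still
# recorded in the reporter log via Log::write.
redef Reporter::warnings_to_stderr = F;
redef Reporter::errors_to_stderr = F;
```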


@ -1135,10 +1135,10 @@ type ip6_ah: record {
rsv: count;
## Security Parameter Index.
spi: count;
## Sequence number.
seq: count;
## Authentication data.
data: string;
## Sequence number, unset in the case that *len* field is zero.
seq: count &optional;
## Authentication data, unset in the case that *len* field is zero.
data: string &optional;
};
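With *seq* and *data* now `&optional`, script code must test for presence before dereferencing them. A sketch of the pattern, assuming `ah` holds a populated `ip6_ah` record inside some event handler:

```bro
# seq/data are unset when the AH *len* field is zero, so guard with ?$.
if ( ah?$seq )
    print fmt("AH sequence number: %d", ah$seq);
if ( ah?$data )
    print fmt("AH authentication data is %d bytes", |ah$data|);
```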
## Values extracted from an IPv6 ESP extension header.
@ -2784,6 +2784,14 @@ export {
## to have a valid Teredo encapsulation.
const yielding_teredo_decapsulation = T &redef;
## With this set, the Teredo analyzer waits until it sees both sides
## of a connection using a valid Teredo encapsulation before issuing
## a :bro:see:`protocol_confirmation`. If it's false, the first
## occurrence of a packet with valid Teredo encapsulation causes a
## confirmation. Both cases are still subject to effects of
## :bro:see:`Tunnel::yielding_teredo_decapsulation`.
const delay_teredo_confirmation = T &redef;
## How often to cleanup internal state for inactive IP tunnels.
const ip_tunnel_timeout = 24hrs &redef;
} # end export
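To revert to the previous behavior of confirming on the first validly encapsulated packet instead of waiting for both directions, the new option can be turned off; a minimal sketch:

```bro
# Confirm the Teredo analyzer as soon as one direction decapsulates
# cleanly, rather than waiting for both sides of the connection.
redef Tunnel::delay_teredo_confirmation = F;
```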


@ -1,3 +1,4 @@
##! Watch for various SPAM blocklist URLs in SMTP error messages.
@load base/protocols/smtp
@ -5,9 +6,11 @@ module SMTP;
export {
redef enum Notice::Type += {
## Indicates that the server sent a reply mentioning an SMTP block list.
## An SMTP server sent a reply mentioning an SMTP block list.
Blocklist_Error_Message,
## Indicates the client's address is seen in the block list error message.
## The originator's address is seen in the block list error message.
## This is useful to detect local hosts sending SPAM with a high
## positive rate.
Blocklist_Blocked_Host,
};
@ -52,7 +55,8 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
message = fmt("%s is on an SMTP block list", c$id$orig_h);
}
NOTICE([$note=note, $conn=c, $msg=message, $sub=msg]);
NOTICE([$note=note, $conn=c, $msg=message, $sub=msg,
$identifier=cat(c$id$orig_h)]);
}
}
}


@ -4,42 +4,33 @@ module LogElasticSearch;
export {
## An elasticsearch specific rotation interval.
const rotation_interval = 24hr &redef;
const rotation_interval = 3hr &redef;
## Optionally ignore any :bro:enum:`Log::ID` from being sent to
## Optionally ignore any :bro:type:`Log::ID` from being sent to
## ElasticSearch with this script.
const excluded_log_ids: set[string] = set("Communication::LOG") &redef;
const excluded_log_ids: set[Log::ID] &redef;
## If you want to explicitly only send certain :bro:enum:`Log::ID`
## If you want to explicitly only send certain :bro:type:`Log::ID`
## streams, add them to this set. If the set remains empty, all will
## be sent. The :bro:id:`excluded_log_ids` option will remain in
## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option will remain in
## effect as well.
const send_logs: set[string] = set() &redef;
const send_logs: set[Log::ID] &redef;
}
module Log;
event bro_init() &priority=-5
{
local my_filters: table[ID, string] of Filter = table();
for ( [id, name] in filters )
if ( server_host == "" )
return;
for ( stream_id in Log::active_streams )
{
local filter = filters[id, name];
if ( fmt("%s", id) in LogElasticSearch::excluded_log_ids ||
(|LogElasticSearch::send_logs| > 0 && fmt("%s", id) !in LogElasticSearch::send_logs) )
if ( stream_id in excluded_log_ids ||
(|send_logs| > 0 && stream_id !in send_logs) )
next;
filter$name = cat(name, "-es");
filter$writer = Log::WRITER_ELASTICSEARCH;
filter$interv = LogElasticSearch::rotation_interval;
my_filters[id, name] = filter;
local filter: Log::Filter = [$name = "default-es",
$writer = Log::WRITER_ELASTICSEARCH,
$interv = LogElasticSearch::rotation_interval];
Log::add_filter(stream_id, filter);
}
# This had to be done separately to avoid an ever growing filters list
# where the for loop would never end.
for ( [id, name] in my_filters )
{
Log::add_filter(id, filter);
}
}
}
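With `send_logs` and `excluded_log_ids` now keyed on `Log::ID` enum values instead of strings, stream selection becomes type-checked. A hedged example, assuming the named streams are loaded:

```bro
# Ship only connection and HTTP logs to ElasticSearch; all other
# streams are skipped because send_logs is non-empty.
redef LogElasticSearch::send_logs += { Conn::LOG, HTTP::LOG };
```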


@ -60,5 +60,5 @@
@load tuning/defaults/__load__.bro
@load tuning/defaults/packet-fragments.bro
@load tuning/defaults/warnings.bro
# @load tuning/logs-to-elasticsearch.bro
@load tuning/logs-to-elasticsearch.bro
@load tuning/track-all-assets.bro


@ -15,7 +15,7 @@ const char* attr_name(attr_tag t)
"&add_func", "&delete_func", "&expire_func",
"&read_expire", "&write_expire", "&create_expire",
"&persistent", "&synchronized", "&postprocessor",
"&encrypt", "&match", "&disable_print_hook",
"&encrypt", "&match",
"&raw_output", "&mergeable", "&priority",
"&group", "&log", "&error_handler", "&type_column",
"(&tracked)",
@ -385,11 +385,6 @@ void Attributes::CheckAttr(Attr* a)
// FIXME: Check here for global ID?
break;
case ATTR_DISABLE_PRINT_HOOK:
if ( type->Tag() != TYPE_FILE )
Error("&disable_print_hook only applicable to files");
break;
case ATTR_RAW_OUTPUT:
if ( type->Tag() != TYPE_FILE )
Error("&raw_output only applicable to files");


@ -28,7 +28,6 @@ typedef enum {
ATTR_POSTPROCESSOR,
ATTR_ENCRYPT,
ATTR_MATCH,
ATTR_DISABLE_PRINT_HOOK,
ATTR_RAW_OUTPUT,
ATTR_MERGEABLE,
ATTR_PRIORITY,


@ -76,7 +76,7 @@ void ChunkedIO::DumpDebugData(const char* basefnname, bool want_reads)
ChunkedIOFd io(fd, "dump-file");
io.Write(*i);
io.Flush();
close(fd);
safe_close(fd);
}
l->clear();
@ -127,7 +127,7 @@ ChunkedIOFd::~ChunkedIOFd()
delete [] read_buffer;
delete [] write_buffer;
close(fd);
safe_close(fd);
if ( partial )
{
@ -686,7 +686,7 @@ ChunkedIOSSL::~ChunkedIOSSL()
ssl = 0;
}
close(socket);
safe_close(socket);
}


@ -872,10 +872,12 @@ Val* BinaryExpr::SubNetFold(Val* v1, Val* v2) const
const IPPrefix& n1 = v1->AsSubNet();
const IPPrefix& n2 = v2->AsSubNet();
if ( n1 == n2 )
return new Val(1, TYPE_BOOL);
else
return new Val(0, TYPE_BOOL);
bool result = ( n1 == n2 ) ? true : false;
if ( tag == EXPR_NE )
result = ! result;
return new Val(result, TYPE_BOOL);
}
void BinaryExpr::SwapOps()
@ -1515,6 +1517,8 @@ RemoveFromExpr::RemoveFromExpr(Expr* arg_op1, Expr* arg_op2)
if ( BothArithmetic(bt1, bt2) )
PromoteType(max_type(bt1, bt2), is_vector(op1) || is_vector(op2));
else if ( BothInterval(bt1, bt2) )
SetType(base_type(bt1));
else
ExprError("requires two arithmetic operands");
}


@ -138,11 +138,22 @@ BroFile::BroFile(FILE* arg_f, const char* arg_name, const char* arg_access)
BroFile::BroFile(const char* arg_name, const char* arg_access, BroType* arg_t)
{
Init();
f = 0;
name = copy_string(arg_name);
access = copy_string(arg_access);
t = arg_t ? arg_t : base_type(TYPE_STRING);
if ( ! Open() )
if ( streq(name, "/dev/stdin") )
f = stdin;
else if ( streq(name, "/dev/stdout") )
f = stdout;
else if ( streq(name, "/dev/stderr") )
f = stderr;
if ( f )
is_open = 1;
else if ( ! Open() )
{
reporter->Error("cannot open %s: %s", name, strerror(errno));
is_open = 0;
@ -342,8 +353,8 @@ int BroFile::Close()
FinishEncrypt();
// Do not close stdout/stderr.
if ( f == stdout || f == stderr )
// Do not close stdin/stdout/stderr.
if ( f == stdin || f == stdout || f == stderr )
return 0;
if ( is_in_cache )
@ -503,12 +514,9 @@ void BroFile::SetAttrs(Attributes* arg_attrs)
InitEncrypt(log_encryption_key->AsString()->CheckString());
}
if ( attrs->FindAttr(ATTR_DISABLE_PRINT_HOOK) )
DisablePrintHook();
if ( attrs->FindAttr(ATTR_RAW_OUTPUT) )
EnableRawOutput();
InstallRotateTimer();
}
@ -523,6 +531,10 @@ RecordVal* BroFile::Rotate()
if ( ! is_open )
return 0;
// Do not rotate stdin/stdout/stderr.
if ( f == stdin || f == stdout || f == stderr )
return 0;
if ( okay_to_manage && ! is_in_cache )
BringIntoCache();


@ -57,7 +57,7 @@ public:
RecordVal* Rotate();
// Set &rotate_interval, &rotate_size, &postprocessor,
// &disable_print_hook, and &raw_output attributes.
// and &raw_output attributes.
void SetAttrs(Attributes* attrs);
// Returns the current size of the file, after fresh stat'ing.


@ -58,7 +58,7 @@ void FlowSrc::Process()
void FlowSrc::Close()
{
close(selectable_fd);
safe_close(selectable_fd);
}


@ -148,9 +148,15 @@ RecordVal* IPv6_Hdr::BuildRecordVal(VectorVal* chain) const
rv->Assign(1, new Val(((ip6_ext*)data)->ip6e_len, TYPE_COUNT));
rv->Assign(2, new Val(ntohs(((uint16*)data)[1]), TYPE_COUNT));
rv->Assign(3, new Val(ntohl(((uint32*)data)[1]), TYPE_COUNT));
rv->Assign(4, new Val(ntohl(((uint32*)data)[2]), TYPE_COUNT));
uint16 off = 3 * sizeof(uint32);
rv->Assign(5, new StringVal(new BroString(data + off, Length() - off, 1)));
if ( Length() >= 12 )
{
// Sequence Number and ICV fields can only be extracted if
// Payload Len was non-zero for this header.
rv->Assign(4, new Val(ntohl(((uint32*)data)[2]), TYPE_COUNT));
uint16 off = 3 * sizeof(uint32);
rv->Assign(5, new StringVal(new BroString(data + off, Length() - off, 1)));
}
}
break;


@ -647,7 +647,7 @@ void RemoteSerializer::Fork()
exit(1); // FIXME: Better way to handle this?
}
close(pipe[1]);
safe_close(pipe[1]);
return;
}
@ -664,12 +664,12 @@ void RemoteSerializer::Fork()
}
child.SetParentIO(io);
close(pipe[0]);
safe_close(pipe[0]);
// Close file descriptors.
close(0);
close(1);
close(2);
safe_close(0);
safe_close(1);
safe_close(2);
// Be nice.
setpriority(PRIO_PROCESS, 0, 5);
@ -2716,7 +2716,8 @@ bool RemoteSerializer::ProcessLogCreateWriter()
id_val = new EnumVal(id, BifType::Enum::Log::ID);
writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields, true, false) )
if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields,
true, false, true) )
goto error;
Unref(id_val);
@ -2896,11 +2897,6 @@ void RemoteSerializer::GotID(ID* id, Val* val)
(desc && *desc) ? desc : "not set"),
current_peer);
#ifdef USE_PERFTOOLS_DEBUG
// May still be cached, but we don't care.
heap_checker->IgnoreObject(id);
#endif
Unref(id);
return;
}
@ -4001,7 +3997,7 @@ bool SocketComm::Connect(Peer* peer)
if ( connect(sockfd, res->ai_addr, res->ai_addrlen) < 0 )
{
Error(fmt("connect failed: %s", strerror(errno)), peer);
close(sockfd);
safe_close(sockfd);
sockfd = -1;
continue;
}
@ -4174,16 +4170,18 @@ bool SocketComm::Listen()
{
Error(fmt("can't bind to %s:%s, %s", l_addr_str.c_str(),
port_str, strerror(errno)));
close(fd);
if ( errno == EADDRINUSE )
{
// Abandon completely this attempt to set up listening sockets,
// try again later.
safe_close(fd);
CloseListenFDs();
listen_next_try = time(0) + bind_retry_interval;
return false;
}
safe_close(fd);
continue;
}
@ -4191,7 +4189,7 @@ bool SocketComm::Listen()
{
Error(fmt("can't listen on %s:%s, %s", l_addr_str.c_str(),
port_str, strerror(errno)));
close(fd);
safe_close(fd);
continue;
}
@ -4227,7 +4225,7 @@ bool SocketComm::AcceptConnection(int fd)
{
Error(fmt("accept fail, unknown address family %d",
client.ss.ss_family));
close(clientfd);
safe_close(clientfd);
return false;
}
@ -4298,7 +4296,7 @@ const char* SocketComm::MakeLogString(const char* msg, Peer* peer)
void SocketComm::CloseListenFDs()
{
for ( size_t i = 0; i < listen_fds.size(); ++i )
close(listen_fds[i]);
safe_close(listen_fds[i]);
listen_fds.clear();
}


@ -126,6 +126,23 @@ RuleConditionEval::RuleConditionEval(const char* func)
rules_error("unknown identifier", func);
return;
}
if ( id->Type()->Tag() == TYPE_FUNC )
{
// Validate argument quantity and type.
FuncType* f = id->Type()->AsFuncType();
if ( f->YieldType()->Tag() != TYPE_BOOL )
rules_error("eval function type must yield a 'bool'", func);
TypeList tl;
tl.Append(internal_type("signature_state")->Ref());
tl.Append(base_type(TYPE_STRING));
if ( ! f->CheckArgs(tl.Types()) )
rules_error("eval function parameters must be a 'signature_state' "
"and a 'string' type", func);
}
}
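The new validation above requires a signature `eval` function that takes a `signature_state` and a `string` and yields a `bool`. A minimal conforming sketch on the script side (function name and pattern are hypothetical):

```bro
function my_eval_cond(state: signature_state, data: string): bool
    {
    # Accept the signature match only if the matched payload
    # carries an example marker string.
    return /EXAMPLE-MARKER/ in data;
    }
```

A `.sig` rule would then reference it through an `eval my_eval_cond` condition, which this constructor now type-checks.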
bool RuleConditionEval::DoMatch(Rule* rule, RuleEndpointState* state,


@ -742,10 +742,11 @@ FileSerializer::~FileSerializer()
io->Flush();
delete [] file;
delete io;
if ( fd >= 0 )
close(fd);
if ( io )
delete io; // destructor will call close() on fd
else if ( fd >= 0 )
safe_close(fd);
}
bool FileSerializer::Open(const char* file, bool pure)
@ -808,8 +809,8 @@ void FileSerializer::CloseFile()
if ( io )
io->Flush();
if ( fd >= 0 )
close(fd);
if ( fd >= 0 && ! io ) // destructor of io calls close() on fd
safe_close(fd);
fd = -1;
delete [] file;


@ -12,10 +12,10 @@
int killed_by_inactivity = 0;
uint32 tot_ack_events = 0;
uint32 tot_ack_bytes = 0;
uint32 tot_gap_events = 0;
uint32 tot_gap_bytes = 0;
uint64 tot_ack_events = 0;
uint64 tot_ack_bytes = 0;
uint64 tot_gap_events = 0;
uint64 tot_gap_bytes = 0;
class ProfileTimer : public Timer {


@ -116,10 +116,10 @@ extern SampleLogger* sample_logger;
extern int killed_by_inactivity;
// Content gap statistics.
extern uint32 tot_ack_events;
extern uint32 tot_ack_bytes;
extern uint32 tot_gap_events;
extern uint32 tot_gap_bytes;
extern uint64 tot_ack_events;
extern uint64 tot_ack_bytes;
extern uint64 tot_gap_events;
extern uint64 tot_gap_bytes;
// A TCPStateStats object tracks the distribution of TCP states for


@ -943,7 +943,10 @@ ForStmt::ForStmt(id_list* arg_loop_vars, Expr* loop_expr)
{
const type_list* indices = e->Type()->AsTableType()->IndexTypes();
if ( indices->length() != loop_vars->length() )
{
e->Error("wrong index size");
return;
}
for ( int i = 0; i < indices->length(); i++ )
{


@ -46,6 +46,7 @@ TCP_Analyzer::TCP_Analyzer(Connection* conn)
finished = 0;
reassembling = 0;
first_packet_seen = 0;
is_partial = 0;
orig = new TCP_Endpoint(this, 1);
resp = new TCP_Endpoint(this, 0);


@ -20,10 +20,10 @@ const bool DEBUG_tcp_connection_close = false;
const bool DEBUG_tcp_match_undelivered = false;
static double last_gap_report = 0.0;
static uint32 last_ack_events = 0;
static uint32 last_ack_bytes = 0;
static uint32 last_gap_events = 0;
static uint32 last_gap_bytes = 0;
static uint64 last_ack_events = 0;
static uint64 last_ack_bytes = 0;
static uint64 last_gap_events = 0;
static uint64 last_gap_bytes = 0;
TCP_Reassembler::TCP_Reassembler(Analyzer* arg_dst_analyzer,
TCP_Analyzer* arg_tcp_analyzer,
@ -513,10 +513,10 @@ void TCP_Reassembler::AckReceived(int seq)
if ( gap_report && gap_report_freq > 0.0 &&
dt >= gap_report_freq )
{
int devents = tot_ack_events - last_ack_events;
int dbytes = tot_ack_bytes - last_ack_bytes;
int dgaps = tot_gap_events - last_gap_events;
int dgap_bytes = tot_gap_bytes - last_gap_bytes;
uint64 devents = tot_ack_events - last_ack_events;
uint64 dbytes = tot_ack_bytes - last_ack_bytes;
uint64 dgaps = tot_gap_events - last_gap_events;
uint64 dgap_bytes = tot_gap_bytes - last_gap_bytes;
RecordVal* r = new RecordVal(gap_info);
r->Assign(0, new Val(devents, TYPE_COUNT));


@ -138,6 +138,11 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
{
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
if ( orig )
valid_orig = false;
else
valid_resp = false;
TeredoEncapsulation te(this);
if ( ! te.Parse(data, len) )
@ -150,7 +155,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
{
Weird("tunnel_depth");
Weird("tunnel_depth", true);
return;
}
@ -162,7 +167,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
if ( inner->NextProto() == IPPROTO_NONE && inner->PayloadLen() == 0 )
// Teredo bubbles having data after IPv6 header isn't strictly a
// violation, but a little weird.
Weird("Teredo_bubble_with_payload");
Weird("Teredo_bubble_with_payload", true);
else
{
delete inner;
@ -173,6 +178,11 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
if ( rslt == 0 || rslt > 0 )
{
if ( orig )
valid_orig = true;
else
valid_resp = true;
if ( BifConst::Tunnel::yielding_teredo_decapsulation &&
! ProtocolConfirmed() )
{
@ -193,7 +203,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
}
if ( ! sibling_has_confirmed )
ProtocolConfirmation();
Confirm();
else
{
delete inner;
@ -201,10 +211,8 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
}
}
else
{
// Aggressively decapsulate anything with valid Teredo encapsulation
ProtocolConfirmation();
}
// Aggressively decapsulate anything with valid Teredo encapsulation.
Confirm();
}
else


@ -6,7 +6,8 @@
class Teredo_Analyzer : public Analyzer {
public:
Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn)
Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn),
valid_orig(false), valid_resp(false)
{}
virtual ~Teredo_Analyzer()
@ -26,18 +27,34 @@ public:
/**
* Emits a weird only if the analyzer has previously been able to
* decapsulate a Teredo packet since otherwise the weirds could happen
* frequently enough to be less than helpful.
* decapsulate a Teredo packet in both directions or if *force* param is
* set, since otherwise the weirds could happen frequently enough to be less
* than helpful. The *force* param is meant for cases where just one side
* has a valid encapsulation and so the weird would be informative.
*/
void Weird(const char* name) const
void Weird(const char* name, bool force = false) const
{
if ( ProtocolConfirmed() )
if ( ProtocolConfirmed() || force )
reporter->Weird(Conn(), name);
}
/**
* If the delayed confirmation option is set, then a valid encapsulation
* seen from both end points is required before confirming.
*/
void Confirm()
{
if ( ! BifConst::Tunnel::delay_teredo_confirmation ||
( valid_orig && valid_resp ) )
ProtocolConfirmation();
}
protected:
friend class AnalyzerTimer;
void ExpireTimer(double t);
bool valid_orig;
bool valid_resp;
};
class TeredoEncapsulation {


@ -64,7 +64,7 @@ Val::~Val()
Unref(type);
#ifdef DEBUG
Unref(bound_id);
delete [] bound_id;
#endif
}


@ -347,13 +347,15 @@ public:
#ifdef DEBUG
// For debugging, we keep a reference to the global ID to which a
// value has been bound *last*.
ID* GetID() const { return bound_id; }
ID* GetID() const
{
return bound_id ? global_scope()->Lookup(bound_id) : 0;
}
void SetID(ID* id)
{
if ( bound_id )
::Unref(bound_id);
bound_id = id;
::Ref(bound_id);
delete [] bound_id;
bound_id = id ? copy_string(id->Name()) : 0;
}
#endif
@ -401,8 +403,8 @@ protected:
RecordVal* attribs;
#ifdef DEBUG
// For debugging, we keep the ID to which a Val is bound.
ID* bound_id;
// For debugging, we keep the name of the ID to which a Val is bound.
const char* bound_id;
#endif
};


@ -3787,7 +3787,7 @@ static GeoIP* open_geoip_db(GeoIPDBTypes type)
geoip = GeoIP_open_type(type, GEOIP_MEMORY_CACHE);
if ( ! geoip )
reporter->Warning("Failed to open GeoIP database: %s",
reporter->Info("Failed to open GeoIP database: %s",
GeoIPDBFileName[type]);
return geoip;
}
@ -3827,7 +3827,7 @@ function lookup_location%(a: addr%) : geo_location
if ( ! geoip )
builtin_error("Can't initialize GeoIP City/Country database");
else
reporter->Warning("Fell back to GeoIP Country database");
reporter->Info("Fell back to GeoIP Country database");
}
else
have_city_db = true;
@ -4858,7 +4858,7 @@ function file_size%(f: string%) : double
%}
## Disables sending :bro:id:`print_hook` events to remote peers for a given
## file. This function is equivalent to :bro:attr:`&disable_print_hook`. In a
## file. In a
## distributed setup, communicating Bro instances generate the event
## :bro:id:`print_hook` for each print statement and send it to the remote
## side. When disabled for a particular file, these events will not be
@ -4874,7 +4874,7 @@ function disable_print_hook%(f: file%): any
%}
## Prevents escaping of non-ASCII characters when writing to a file.
## This function is equivalent to :bro:attr:`&disable_print_hook`.
## This function is equivalent to :bro:attr:`&raw_output`.
##
## f: The file to disable raw output for.
##
@ -5683,12 +5683,6 @@ function match_signatures%(c: connection, pattern_type: int, s: string,
#
# ===========================================================================
## Deprecated. Will be removed.
function parse_dotted_addr%(s: string%): addr
%{
IPAddr a(s->CheckString());
return new AddrVal(a);
%}
%%{
@ -5788,75 +5782,3 @@ function anonymize_addr%(a: addr, cl: IPAddrAnonymizationClass%): addr
}
%}
## Deprecated. Will be removed.
function dump_config%(%) : bool
%{
return new Val(persistence_serializer->WriteConfig(true), TYPE_BOOL);
%}
## Deprecated. Will be removed.
function make_connection_persistent%(c: connection%) : any
%{
c->MakePersistent();
return 0;
%}
%%{
// Experimental code to add support for IDMEF XML output based on
// notices. For now, we're implementing it as a builtin you can call on an
// notices record.
#ifdef USE_IDMEF
extern "C" {
#include <libidmef/idmefxml.h>
}
#endif
#include <sys/socket.h>
char* port_to_string(PortVal* port)
{
char buf[256]; // to hold sprintf results on port numbers
snprintf(buf, sizeof(buf), "%u", port->Port());
return copy_string(buf);
}
%%}
## Deprecated. Will be removed.
function generate_idmef%(src_ip: addr, src_port: port,
dst_ip: addr, dst_port: port%) : bool
%{
#ifdef USE_IDMEF
xmlNodePtr message =
newIDMEF_Message(newAttribute("version","1.0"),
newAlert(newCreateTime(NULL),
newSource(
newNode(newAddress(
newAttribute("category","ipv4-addr"),
newSimpleElement("address",
copy_string(src_ip->AsAddr().AsString().c_str())),
NULL), NULL),
newService(
newSimpleElement("port",
port_to_string(src_port)),
NULL), NULL),
newTarget(
newNode(newAddress(
newAttribute("category","ipv4-addr"),
newSimpleElement("address",
copy_string(dst_ip->AsAddr().AsString().c_str())),
NULL), NULL),
newService(
newSimpleElement("port",
port_to_string(dst_port)),
NULL), NULL), NULL), NULL);
// if ( validateCurrentDoc() )
printCurrentMessage(stderr);
return new Val(1, TYPE_BOOL);
#else
builtin_error("Bro was not configured for IDMEF support");
return new Val(0, TYPE_BOOL);
#endif
%}


@ -16,6 +16,7 @@ const Tunnel::enable_ip: bool;
const Tunnel::enable_ayiya: bool;
const Tunnel::enable_teredo: bool;
const Tunnel::yielding_teredo_decapsulation: bool;
const Tunnel::delay_teredo_confirmation: bool;
const Tunnel::ip_tunnel_timeout: interval;
const Threading::heartbeat_interval: interval;


@ -34,6 +34,10 @@ function Input::__force_update%(id: string%) : bool
return new Val(res, TYPE_BOOL);
%}
# Options for the input framework
const accept_unsupported_types: bool;
# Options for Ascii Reader
module InputAscii;
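The new option can be flipped from script-land; a hedged sketch (the module-qualified name is assumed from the `BifConst::Input::accept_unsupported_types` usage elsewhere in this commit):

```bro
# Let optional file/function-typed fields in an input record be
# ignored (assigned null) instead of rejecting the whole stream.
redef Input::accept_unsupported_types = T;
```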


@ -388,6 +388,8 @@ bool Manager::CreateEventStream(RecordVal* fval)
FuncType* etype = event->FType()->AsFuncType();
bool allow_file_func = false;
if ( ! etype->IsEvent() )
{
reporter->Error("stream event is a function, not an event");
@ -441,12 +443,20 @@ bool Manager::CreateEventStream(RecordVal* fval)
return false;
}
if ( !same_type((*args)[2], fields ) )
if ( ! same_type((*args)[2], fields ) )
{
reporter->Error("Incompatible type for event");
ODesc desc1;
ODesc desc2;
(*args)[2]->Describe(&desc1);
fields->Describe(&desc2);
reporter->Error("Incompatible type '%s':%s for event, which needs type '%s':%s\n",
type_name((*args)[2]->Tag()), desc1.Description(),
type_name(fields->Tag()), desc2.Description());
return false;
}
allow_file_func = BifConst::Input::accept_unsupported_types;
}
else
@ -455,7 +465,7 @@ bool Manager::CreateEventStream(RecordVal* fval)
vector<Field*> fieldsV; // vector, because UnrollRecordType needs it
bool status = !UnrollRecordType(&fieldsV, fields, "");
bool status = (! UnrollRecordType(&fieldsV, fields, "", allow_file_func));
if ( status )
{
@ -603,12 +613,12 @@ bool Manager::CreateTableStream(RecordVal* fval)
vector<Field*> fieldsV; // vector, because we don't know the length beforehand
bool status = !UnrollRecordType(&fieldsV, idx, "");
bool status = (! UnrollRecordType(&fieldsV, idx, "", false));
int idxfields = fieldsV.size();
if ( val ) // if we are not a set
status = status || !UnrollRecordType(&fieldsV, val, "");
status = status || ! UnrollRecordType(&fieldsV, val, "", BifConst::Input::accept_unsupported_types);
int valfields = fieldsV.size() - idxfields;
@ -766,15 +776,29 @@ bool Manager::RemoveStreamContinuation(ReaderFrontend* reader)
return true;
}
bool Manager::UnrollRecordType(vector<Field*> *fields,
const RecordType *rec, const string& nameprepend)
bool Manager::UnrollRecordType(vector<Field*> *fields, const RecordType *rec,
const string& nameprepend, bool allow_file_func)
{
for ( int i = 0; i < rec->NumFields(); i++ )
{
if ( ! IsCompatibleType(rec->FieldType(i)) )
{
{
// If the field is a file or a function type
// and it is optional, we accept it nevertheless.
// This allows importing logfiles containing this
// stuff that we actually cannot read :)
if ( allow_file_func )
{
if ( ( rec->FieldType(i)->Tag() == TYPE_FILE ||
rec->FieldType(i)->Tag() == TYPE_FUNC ) &&
rec->FieldDecl(i)->FindAttr(ATTR_OPTIONAL) )
{
reporter->Info("Encountered incompatible type \"%s\" in table definition for ReaderFrontend. Ignoring field.", type_name(rec->FieldType(i)->Tag()));
continue;
}
}
reporter->Error("Incompatible type \"%s\" in table definition for ReaderFrontend", type_name(rec->FieldType(i)->Tag()));
return false;
}
@ -783,7 +807,7 @@ bool Manager::UnrollRecordType(vector<Field*> *fields,
{
string prep = nameprepend + rec->FieldName(i) + ".";
if ( !UnrollRecordType(fields, rec->FieldType(i)->AsRecordType(), prep) )
if ( !UnrollRecordType(fields, rec->FieldType(i)->AsRecordType(), prep, allow_file_func) )
{
return false;
}
@ -1038,9 +1062,7 @@ int Manager::SendEntryTable(Stream* i, const Value* const *vals)
if ( ! updated )
{
// throw away. Hence - we quit. And remove the entry from the current dictionary...
// (but why should it be in there? assert this).
assert ( stream->currDict->RemoveEntry(idxhash) == 0 );
// just quit and delete everything we created.
delete idxhash;
delete h;
return stream->num_val_fields + stream->num_idx_fields;
@ -1206,7 +1228,7 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
Ref(predidx);
Ref(val);
Ref(ev);
SendEvent(stream->event, 3, ev, predidx, val);
SendEvent(stream->event, 4, stream->description->Ref(), ev, predidx, val);
}
if ( predidx ) // if we have a stream or an event...
@ -1540,7 +1562,7 @@ bool Manager::Delete(ReaderFrontend* reader, Value* *vals)
bool Manager::CallPred(Func* pred_func, const int numvals, ...)
{
bool result;
bool result = false;
val_list vl(numvals);
va_list lP;
@ -1551,10 +1573,13 @@ bool Manager::CallPred(Func* pred_func, const int numvals, ...)
va_end(lP);
Val* v = pred_func->Call(&vl);
result = v->AsBool();
Unref(v);
if ( v )
{
result = v->AsBool();
Unref(v);
}
return(result);
return result;
}
bool Manager::SendEvent(const string& name, const int num_vals, Value* *vals)
@ -1668,6 +1693,18 @@ RecordVal* Manager::ValueToRecordVal(const Value* const *vals,
Val* fieldVal = 0;
if ( request_type->FieldType(i)->Tag() == TYPE_RECORD )
fieldVal = ValueToRecordVal(vals, request_type->FieldType(i)->AsRecordType(), position);
else if ( request_type->FieldType(i)->Tag() == TYPE_FILE ||
request_type->FieldType(i)->Tag() == TYPE_FUNC )
{
// If those two unsupported types are encountered here, they have
// been let through by the type checking.
// That means that they are optional & the user agreed to ignore
// them and has been warned by reporter.
// Hence -> assign null to the field, done.
// Better check that it really is optional. You never know.
assert(request_type->FieldDecl(i)->FindAttr(ATTR_OPTIONAL));
}
else
{
fieldVal = ValueToVal(vals[*position], request_type->FieldType(i));
@ -1711,7 +1748,7 @@ int Manager::GetValueLength(const Value* val) {
case TYPE_STRING:
case TYPE_ENUM:
{
length += val->val.string_val.length;
length += val->val.string_val.length + 1;
break;
}
@ -1811,7 +1848,10 @@ int Manager::CopyValue(char *data, const int startpos, const Value* val)
case TYPE_ENUM:
{
memcpy(data+startpos, val->val.string_val.data, val->val.string_val.length);
return val->val.string_val.length;
// Add a \0 to the end. To be able to hash zero-length
// strings and differentiate from !present.
memset(data + startpos + val->val.string_val.length, 0, 1);
return val->val.string_val.length + 1;
}
case TYPE_ADDR:
@ -1902,13 +1942,15 @@ HashKey* Manager::HashValues(const int num_elements, const Value* const *vals)
const Value* val = vals[i];
if ( val->present )
length += GetValueLength(val);
// And in any case add 1 for the end-of-field-identifier.
length++;
}
if ( length == 0 )
{
reporter->Error("Input reader sent line where all elements are null values. Ignoring line");
assert ( length >= num_elements );
if ( length == num_elements )
return NULL;
}
int position = 0;
char *data = (char*) malloc(length);
@ -1920,6 +1962,12 @@ HashKey* Manager::HashValues(const int num_elements, const Value* const *vals)
const Value* val = vals[i];
if ( val->present )
position += CopyValue(data, position, val);
memset(data + position, 1, 1); // Add end-of-field-marker. Does not really matter which value it is,
// it just has to be... something.
position++;
}
HashKey *key = new HashKey(data, length);
@ -1959,7 +2007,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
case TYPE_STRING:
{
BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 0);
BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 1);
return new StringVal(s);
}


@ -158,7 +158,7 @@ private:
// Check if a record is made up of compatible types and return a list
// of all fields that are in the record in order. Recursively unrolls
// records
bool UnrollRecordType(vector<threading::Field*> *fields, const RecordType *rec, const string& nameprepend);
bool UnrollRecordType(vector<threading::Field*> *fields, const RecordType *rec, const string& nameprepend, bool allow_file_func);
// Send events
void SendEvent(EventHandlerPtr ev, const int numvals, ...);

View file

@ -191,6 +191,9 @@ void ReaderBackend::SendEntry(Value* *vals)
bool ReaderBackend::Init(const int arg_num_fields,
const threading::Field* const* arg_fields)
{
if ( Failed() )
return true;
num_fields = arg_num_fields;
fields = arg_fields;
@ -210,7 +213,9 @@ bool ReaderBackend::Init(const int arg_num_fields,
bool ReaderBackend::OnFinish(double network_time)
{
DoClose();
if ( ! Failed() )
DoClose();
disabled = true; // frontend disables itself when it gets the Close-message.
SendOut(new ReaderClosedMessage(frontend));
@ -231,6 +236,9 @@ bool ReaderBackend::Update()
if ( disabled )
return false;
if ( Failed() )
return true;
bool success = DoUpdate();
if ( ! success )
DisableFrontend();
@ -248,6 +256,9 @@ void ReaderBackend::DisableFrontend()
bool ReaderBackend::OnHeartbeat(double network_time, double current_time)
{
if ( Failed() )
return true;
return DoHeartbeat(network_time, current_time);
}
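The repeated `Failed()` guards in these hunks implement a simple fail-safe pattern: once a backend has failed (and the failure has been reported), every later entry point short-circuits with `true` so the same failure is not surfaced twice. A small sketch with a hypothetical `Backend` class, not the real reader interface:

```cpp
#include <cassert>

// Once an operation fails, remember it; subsequent calls become
// no-ops that report success so callers don't re-handle the error.
class Backend
	{
public:
	bool Update()
		{
		if ( failed )
			return true; // already failed; swallow the call

		if ( ! DoUpdate() )
			{
			failed = true;
			return false;
			}

		return true;
		}

	bool HasFailed() const { return failed; }

private:
	bool DoUpdate() { return false; } // stand-in that always fails
	bool failed = false;
	};
```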

View file

@ -11,6 +11,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <errno.h>
using namespace input::reader;
using threading::Value;
@ -209,6 +210,42 @@ bool Ascii::GetLine(string& str)
return false;
}
bool Ascii::CheckNumberError(const string& s, const char * end)
{
// Do this check first, before calling s.c_str() or similar;
// otherwise the value that *end points to might already be gone.
bool endnotnull = (*end != '\0');
if ( s.length() == 0 )
{
Error("Got empty string for number field");
return true;
}
if ( end == s.c_str() ) {
Error(Fmt("String '%s' contained no parseable number", s.c_str()));
return true;
}
if ( endnotnull )
Warning(Fmt("Number '%s' contained non-numeric trailing characters. Ignored trailing characters '%s'", s.c_str(), end));
if ( errno == EINVAL )
{
Error(Fmt("String '%s' could not be converted to a number", s.c_str()));
return true;
}
else if ( errno == ERANGE )
{
Error(Fmt("Number '%s' out of supported range.", s.c_str()));
return true;
}
return false;
}
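`CheckNumberError()` above centralizes the error cases of the `strtoll`/`strtod` family: clear `errno`, convert, then inspect both the end pointer and `errno`. The general pattern can be sketched as a standalone parse function (a hypothetical helper, not part of the reader):

```cpp
#include <cassert>
#include <cerrno>
#include <cstdlib>
#include <string>

// Parse a base-10 signed integer with full error detection, using
// the same strtoll/errno pattern as the ASCII reader. Returns false
// on empty input, no digits at all, or out-of-range values; trailing
// garbage is tolerated here (the reader only warns about it).
bool parse_int64(const std::string& s, long long& out)
	{
	if ( s.empty() )
		return false;

	errno = 0;
	char* end = 0;
	long long v = strtoll(s.c_str(), &end, 10);

	if ( end == s.c_str() )
		return false; // no parseable number

	if ( errno == ERANGE || errno == EINVAL )
		return false; // out of range / unsupported conversion

	out = v;
	return true;
	}
```

Unlike `atoi()`, which the diff replaces, this reports overflow and non-numeric input instead of silently returning 0.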
Value* Ascii::EntryToVal(string s, FieldMapping field)
{
@ -216,10 +253,13 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
return new Value(field.type, false);
Value* val = new Value(field.type, true);
char* end = 0;
errno = 0;
switch ( field.type ) {
case TYPE_ENUM:
case TYPE_STRING:
s = get_unescaped_string(s);
val->val.string_val.length = s.size();
val->val.string_val.data = copy_string(s.c_str());
break;
@ -238,27 +278,37 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
break;
case TYPE_INT:
val->val.int_val = atoi(s.c_str());
val->val.int_val = strtoll(s.c_str(), &end, 10);
if ( CheckNumberError(s, end) )
return 0;
break;
case TYPE_DOUBLE:
case TYPE_TIME:
case TYPE_INTERVAL:
val->val.double_val = atof(s.c_str());
val->val.double_val = strtod(s.c_str(), &end);
if ( CheckNumberError(s, end) )
return 0;
break;
case TYPE_COUNT:
case TYPE_COUNTER:
val->val.uint_val = atoi(s.c_str());
val->val.uint_val = strtoull(s.c_str(), &end, 10);
if ( CheckNumberError(s, end) )
return 0;
break;
case TYPE_PORT:
val->val.port_val.port = atoi(s.c_str());
val->val.port_val.port = strtoull(s.c_str(), &end, 10);
if ( CheckNumberError(s, end) )
return 0;
val->val.port_val.proto = TRANSPORT_UNKNOWN;
break;
case TYPE_SUBNET:
{
s = get_unescaped_string(s);
size_t pos = s.find("/");
if ( pos == s.npos )
{
@ -266,7 +316,11 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
return 0;
}
int width = atoi(s.substr(pos+1).c_str());
uint8_t width = (uint8_t) strtol(s.substr(pos+1).c_str(), &end, 10);
if ( CheckNumberError(s, end) )
return 0;
string addr = s.substr(0, pos);
val->val.subnet_val.prefix = StringToAddr(addr);
@ -275,6 +329,7 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
}
case TYPE_ADDR:
s = get_unescaped_string(s);
val->val.addr_val = StringToAddr(s);
break;
@ -288,7 +343,10 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
// how many entries do we have...
unsigned int length = 1;
for ( unsigned int i = 0; i < s.size(); i++ )
if ( s[i] == ',' ) length++;
{
if ( s[i] == set_separator[0] )
length++;
}
unsigned int pos = 0;
@ -342,9 +400,24 @@ Value* Ascii::EntryToVal(string s, FieldMapping field)
pos++;
}
// Test if the string ends with a set_separator... or if the
// complete string is empty. In either of these cases we have
// to push an empty val on top of it.
if ( s.empty() || *s.rbegin() == set_separator[0] )
{
lvals[pos] = EntryToVal("", field.subType());
if ( lvals[pos] == 0 )
{
Error("Error while trying to add empty set element");
return 0;
}
pos++;
}
if ( pos != length )
{
Error("Internal error while parsing set: did not find all elements");
Error(Fmt("Internal error while parsing set: did not find all elements: %s", s.c_str()));
return 0;
}
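The set-parsing logic above sizes the result by counting separators (N separators imply N+1 elements) and then special-cases an empty string or a trailing separator by pushing one extra empty element. A compact sketch of the same splitting rule, as a hypothetical helper:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Split on a single-character separator the way the reader sizes its
// sets: N separators yield N+1 elements, and a trailing separator (or
// a completely empty input) contributes a final empty element.
std::vector<std::string> split_set(const std::string& s, char sep)
	{
	std::vector<std::string> out;
	std::string cur;

	for ( char c : s )
		{
		if ( c == sep )
			{
			out.push_back(cur);
			cur.clear();
			}
		else
			cur += c;
		}

	out.push_back(cur); // also covers "" and trailing-separator cases
	return out;
	}
```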
@ -428,6 +501,7 @@ bool Ascii::DoUpdate()
while ( GetLine(line ) )
{
// split on tabs
bool error = false;
istringstream splitstream(line);
map<int, string> stringfields;
@ -438,8 +512,6 @@ bool Ascii::DoUpdate()
if ( ! getline(splitstream, s, separator[0]) )
break;
s = get_unescaped_string(s);
stringfields[pos] = s;
pos++;
}
@ -474,8 +546,9 @@ bool Ascii::DoUpdate()
Value* val = EntryToVal(stringfields[(*fit).position], *fit);
if ( val == 0 )
{
Error("Could not convert String value to Val");
return false;
Error(Fmt("Could not convert line '%s' to Val. Ignoring line.", line.c_str()));
error = true;
break;
}
if ( (*fit).secondary_position != -1 )
@ -492,6 +565,19 @@ bool Ascii::DoUpdate()
fpos++;
}
if ( error )
{
// Encountered non-fatal error, ignoring line. But
// first, delete all successfully read fields and the
// array structure.
for ( int i = 0; i < fpos; i++ )
delete fields[i];
delete [] fields;
continue;
}
//printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields);
assert ( fpos == NumFields() );

View file

@ -48,6 +48,7 @@ private:
bool ReadHeader(bool useCached);
bool GetLine(string& str);
threading::Value* EntryToVal(string s, FieldMapping type);
bool CheckNumberError(const string& s, const char * end);
ifstream* file;
time_t mtime;

View file

@ -95,6 +95,7 @@ struct Manager::WriterInfo {
Func* postprocessor;
WriterFrontend* writer;
WriterBackend::WriterInfo* info;
bool from_remote;
string instantiating_filter;
};
@ -249,6 +250,29 @@ Manager::WriterInfo* Manager::FindWriter(WriterFrontend* writer)
return 0;
}
bool Manager::CompareFields(const Filter* filter, const WriterFrontend* writer)
{
if ( filter->num_fields != writer->NumFields() )
return false;
for ( int i = 0; i < filter->num_fields; ++ i)
if ( filter->fields[i]->type != writer->Fields()[i]->type )
return false;
return true;
}
bool Manager::CheckFilterWriterConflict(const WriterInfo* winfo, const Filter* filter)
{
if ( winfo->from_remote )
// If the writer was instantiated as a result of remote logging, then
// a filter and writer are only compatible if field types match
return ! CompareFields(filter, winfo->writer);
else
// If the writer was instantiated locally, it is bound to one filter
return winfo->instantiating_filter != filter->name;
}
void Manager::RemoveDisabledWriters(Stream* stream)
{
list<Stream::WriterPathPair> disabled;
@ -695,16 +719,13 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
int result = 1;
try
Val* v = filter->pred->Call(&vl);
if ( v )
{
Val* v = filter->pred->Call(&vl);
result = v->AsBool();
Unref(v);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
if ( ! result )
continue;
}
@ -735,15 +756,10 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
Val* v = 0;
try
{
v = filter->path_func->Call(&vl);
}
v = filter->path_func->Call(&vl);
catch ( InterpreterException& e )
{
if ( ! v )
return false;
}
if ( ! v->Type()->Tag() == TYPE_STRING )
{
@ -767,22 +783,43 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
#endif
}
Stream::WriterPathPair wpp(filter->writer->AsEnum(), path);
// See if we already have a writer for this path.
Stream::WriterMap::iterator w =
stream->writers.find(Stream::WriterPathPair(filter->writer->AsEnum(), path));
Stream::WriterMap::iterator w = stream->writers.find(wpp);
if ( w != stream->writers.end() &&
CheckFilterWriterConflict(w->second, filter) )
{
// Auto-correct path due to conflict over the writer/path pairs.
string instantiator = w->second->instantiating_filter;
string new_path;
unsigned int i = 2;
do {
char num[32];
snprintf(num, sizeof(num), "-%u", i++);
new_path = path + num;
wpp.second = new_path;
w = stream->writers.find(wpp);
} while ( w != stream->writers.end() &&
CheckFilterWriterConflict(w->second, filter) );
Unref(filter->path_val);
filter->path_val = new StringVal(new_path.c_str());
reporter->Warning("Write using filter '%s' on path '%s' changed to"
" use new path '%s' to avoid conflict with filter '%s'",
filter->name.c_str(), path.c_str(), new_path.c_str(),
instantiator.c_str());
path = filter->path = filter->path_val->AsString()->CheckString();
}
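The auto-correction loop above probes `path-2`, `path-3`, … until it finds a writer/path pair without a conflict. The suffix-probing part can be sketched in isolation, with a plain set standing in for the stream's writer map (hypothetical helper, not the manager's code):

```cpp
#include <cassert>
#include <cstdio>
#include <set>
#include <string>

// Pick the first "<path>-N" (N >= 2) that is not already claimed,
// mirroring the log manager's path conflict auto-correction.
std::string resolve_conflict(const std::string& path,
                             const std::set<std::string>& taken)
	{
	std::string candidate;
	unsigned int i = 2;

	do
		{
		char num[32];
		snprintf(num, sizeof(num), "-%u", i++);
		candidate = path + num;
		} while ( taken.count(candidate) );

	return candidate;
	}
```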
WriterFrontend* writer = 0;
if ( w != stream->writers.end() )
{
if ( w->second->instantiating_filter != filter->name )
{
reporter->Warning("Skipping write to filter '%s' on path '%s'"
" because filter '%s' has already instantiated the same"
" writer type for that path", filter->name.c_str(),
filter->path.c_str(), w->second->instantiating_filter.c_str());
continue;
}
// We know this writer already.
writer = w->second->writer;
}
@ -819,8 +856,8 @@ bool Manager::Write(EnumVal* id, RecordVal* columns)
// CreateWriter() will set the other fields in info.
writer = CreateWriter(stream->id, filter->writer,
info, filter->num_fields,
arg_fields, filter->local, filter->remote, filter->name);
info, filter->num_fields, arg_fields, filter->local,
filter->remote, false, filter->name);
if ( ! writer )
{
@ -1019,7 +1056,7 @@ threading::Value** Manager::RecordToFilterVals(Stream* stream, Filter* filter,
}
WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, WriterBackend::WriterInfo* info,
int num_fields, const threading::Field* const* fields, bool local, bool remote,
int num_fields, const threading::Field* const* fields, bool local, bool remote, bool from_remote,
const string& instantiating_filter)
{
Stream* stream = FindStream(id);
@ -1044,6 +1081,7 @@ WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, WriterBacken
winfo->interval = 0;
winfo->postprocessor = 0;
winfo->info = info;
winfo->from_remote = from_remote;
winfo->instantiating_filter = instantiating_filter;
// Search for a corresponding filter for the writer/path pair and use its
@ -1210,12 +1248,16 @@ bool Manager::Flush(EnumVal* id)
void Manager::Terminate()
{
// Make sure we process all the pending rotations.
while ( rotations_pending )
while ( rotations_pending > 0 )
{
thread_mgr->ForceProcessing(); // A blatant layering violation ...
usleep(1000);
}
if ( rotations_pending < 0 )
reporter->InternalError("Negative pending log rotations: %d", rotations_pending);
for ( vector<Stream *>::iterator s = streams.begin(); s != streams.end(); ++s )
{
if ( ! *s )
@ -1329,13 +1371,18 @@ void Manager::Rotate(WriterInfo* winfo)
}
bool Manager::FinishedRotation(WriterFrontend* writer, const char* new_name, const char* old_name,
double open, double close, bool terminating)
double open, double close, bool success, bool terminating)
{
assert(writer);
--rotations_pending;
if ( ! writer )
// Writer didn't produce local output.
if ( ! success )
{
DBG_LOG(DBG_LOGGING, "Non-successful rotation for writer '%s' at %.6f",
writer->Name(), network_time);
return true;
}
DBG_LOG(DBG_LOGGING, "Finished rotating %s at %.6f, new name %s",
writer->Name(), network_time, new_name);
@ -1369,16 +1416,12 @@ bool Manager::FinishedRotation(WriterFrontend* writer, const char* new_name, con
int result = 0;
try
Val* v = func->Call(&vl);
if ( v )
{
Val* v = func->Call(&vl);
result = v->AsBool();
Unref(v);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
return result;
}

View file

@ -153,6 +153,7 @@ public:
protected:
friend class WriterFrontend;
friend class RotationFinishedMessage;
friend class RotationFailedMessage;
friend class ::RemoteSerializer;
friend class ::RotationTimer;
@ -165,7 +166,7 @@ protected:
// Takes ownership of fields and info.
WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, WriterBackend::WriterInfo* info,
int num_fields, const threading::Field* const* fields,
bool local, bool remote, const string& instantiating_filter="");
bool local, bool remote, bool from_remote, const string& instantiating_filter="");
// Takes ownership of values..
bool Write(EnumVal* id, EnumVal* writer, string path,
@ -176,7 +177,7 @@ protected:
// Signals that a file has been rotated.
bool FinishedRotation(WriterFrontend* writer, const char* new_name, const char* old_name,
double open, double close, bool terminating);
double open, double close, bool success, bool terminating);
// Deletes the values as passed into Write().
void DeleteVals(int num_fields, threading::Value** vals);
@ -199,6 +200,8 @@ private:
void Rotate(WriterInfo* info);
Filter* FindFilter(EnumVal* id, StringVal* filter);
WriterInfo* FindWriter(WriterFrontend* writer);
bool CompareFields(const Filter* filter, const WriterFrontend* writer);
bool CheckFilterWriterConflict(const WriterInfo* winfo, const Filter* filter);
vector<Stream *> streams; // Indexed by stream enum.
int rotations_pending; // Number of rotations not yet finished.

View file

@ -19,10 +19,10 @@ class RotationFinishedMessage : public threading::OutputMessage<WriterFrontend>
{
public:
RotationFinishedMessage(WriterFrontend* writer, const char* new_name, const char* old_name,
double open, double close, bool terminating)
double open, double close, bool success, bool terminating)
: threading::OutputMessage<WriterFrontend>("RotationFinished", writer),
new_name(copy_string(new_name)), old_name(copy_string(old_name)), open(open),
close(close), terminating(terminating) { }
close(close), success(success), terminating(terminating) { }
virtual ~RotationFinishedMessage()
{
@ -32,7 +32,7 @@ public:
virtual bool Process()
{
return log_mgr->FinishedRotation(Object(), new_name, old_name, open, close, terminating);
return log_mgr->FinishedRotation(Object(), new_name, old_name, open, close, success, terminating);
}
private:
@ -40,6 +40,7 @@ private:
const char* old_name;
double open;
double close;
bool success;
bool terminating;
};
@ -126,6 +127,7 @@ WriterBackend::WriterBackend(WriterFrontend* arg_frontend) : MsgThread()
buffering = true;
frontend = arg_frontend;
info = new WriterInfo(frontend->Info());
rotation_counter = 0;
SetName(frontend->Name());
}
@ -160,7 +162,15 @@ void WriterBackend::DeleteVals(int num_writes, Value*** vals)
bool WriterBackend::FinishedRotation(const char* new_name, const char* old_name,
double open, double close, bool terminating)
{
SendOut(new RotationFinishedMessage(frontend, new_name, old_name, open, close, terminating));
--rotation_counter;
SendOut(new RotationFinishedMessage(frontend, new_name, old_name, open, close, true, terminating));
return true;
}
bool WriterBackend::FinishedRotation()
{
--rotation_counter;
SendOut(new RotationFinishedMessage(frontend, 0, 0, 0, 0, false, false));
return true;
}
@ -174,6 +184,9 @@ bool WriterBackend::Init(int arg_num_fields, const Field* const* arg_fields)
num_fields = arg_num_fields;
fields = arg_fields;
if ( Failed() )
return true;
if ( ! DoInit(*info, arg_num_fields, arg_fields) )
{
DisableFrontend();
@ -222,12 +235,15 @@ bool WriterBackend::Write(int arg_num_fields, int num_writes, Value*** vals)
bool success = true;
for ( int j = 0; j < num_writes; j++ )
if ( ! Failed() )
{
success = DoWrite(num_fields, fields, vals[j]);
for ( int j = 0; j < num_writes; j++ )
{
success = DoWrite(num_fields, fields, vals[j]);
if ( ! success )
break;
if ( ! success )
break;
}
}
DeleteVals(num_writes, vals);
@ -244,6 +260,9 @@ bool WriterBackend::SetBuf(bool enabled)
// No change.
return true;
if ( Failed() )
return true;
buffering = enabled;
if ( ! DoSetBuf(enabled) )
@ -258,17 +277,32 @@ bool WriterBackend::SetBuf(bool enabled)
bool WriterBackend::Rotate(const char* rotated_path, double open,
double close, bool terminating)
{
if ( Failed() )
return true;
rotation_counter = 1;
if ( ! DoRotate(rotated_path, open, close, terminating) )
{
DisableFrontend();
return false;
}
// Insurance against broken writers.
if ( rotation_counter > 0 )
InternalError(Fmt("writer %s did not call FinishedRotation() in DoRotate()", Name()));
if ( rotation_counter < 0 )
InternalError(Fmt("writer %s called FinishedRotation() more than once in DoRotate()", Name()));
return true;
}
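The `rotation_counter` bookkeeping above is an "exactly once" guard: the caller sets the counter to 1 before invoking the writer hook, the hook must decrement it exactly once via `FinishedRotation()`, and the caller checks afterwards. The same insurance pattern in isolation (hypothetical class, exceptions instead of `InternalError()`):

```cpp
#include <cassert>
#include <stdexcept>

// Guard that a required callback is invoked exactly once per
// operation, the same counter trick the writer backend uses
// around DoRotate().
class OnceGuard
	{
public:
	void Start()	{ counter = 1; }
	void Signal()	{ --counter; }

	// Call after the operation; complains if the callback was
	// skipped or invoked more than once.
	void Check()
		{
		if ( counter > 0 )
			throw std::logic_error("callback never invoked");
		if ( counter < 0 )
			throw std::logic_error("callback invoked more than once");
		}

private:
	int counter = 0;
	};
```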
bool WriterBackend::Flush(double network_time)
{
if ( Failed() )
return true;
if ( ! DoFlush(network_time) )
{
DisableFrontend();
@ -280,11 +314,17 @@ bool WriterBackend::Flush(double network_time)
bool WriterBackend::OnFinish(double network_time)
{
if ( Failed() )
return true;
return DoFinish(network_time);
}
bool WriterBackend::OnHeartbeat(double network_time, double current_time)
{
if ( Failed() )
return true;
SendOut(new FlushWriteBufferMessage(frontend));
return DoHeartbeat(network_time, current_time);
}

View file

@ -182,6 +182,8 @@ public:
/**
* Disables the frontend that has instantiated this backend. Once
* disabled, the frontend will not send any further messages.
*
* TODO: Do we still need this method (and the corresponding message)?
*/
void DisableFrontend();
@ -208,11 +210,15 @@ public:
bool IsBuf() { return buffering; }
/**
* Signals that a file has been rotated. This must be called by a
* writer's implementation of DoRotate() once rotation has finished.
* Signals that a file has been successfully rotated and any
* potential post-processor can now run.
*
* Most of the parameters should be passed through from DoRotate().
*
* Note: Exactly one of the two FinishedRotation() methods must be
* called by a writer's implementation of DoRotate() once rotation
* has finished.
*
* @param new_name The filename of the rotated file.
*
* @param old_name The filename of the original file.
@ -227,6 +233,29 @@ public:
bool FinishedRotation(const char* new_name, const char* old_name,
double open, double close, bool terminating);
/**
* Signals that a file rotation request has been processed, but no
* further post-processing needs to be performed (either because
* there was an error, or there was nothing to rotate to begin with
* for this writer).
*
* Note: Exactly one of the two FinishedRotation() methods must be
* called by a writer's implementation of DoRotate() once rotation
* has finished.
*/
bool FinishedRotation();
/** Helper method to render an IP address as a string.
*
* @param addr The address.
@ -323,8 +352,8 @@ protected:
* Writer-specific method implementing log rotation. Most directly
* this only applies to writers writing into files, which should then
* close the current file and open a new one. However, a writer may
* also trigger other apppropiate actions if semantics are similar. *
* Once rotation has finished, the implementation must call
* also trigger other appropriate actions if semantics are similar.
* Once rotation has finished, the implementation *must* call
* FinishedRotation() to signal the log manager that potential
* postprocessors can now run.
*
@ -386,6 +415,8 @@ private:
int num_fields; // Number of log fields.
const threading::Field* const* fields; // Log fields.
bool buffering; // True if buffering is enabled.
int rotation_counter; // Tracks FinishedRotation() calls.
};

View file

@ -248,9 +248,8 @@ void WriterFrontend::Rotate(const char* rotated_path, double open, double close,
if ( backend )
backend->SendIn(new RotateMessage(backend, this, rotated_path, open, close, terminating));
else
// Still signal log manager that we're done, but signal that
// nothing happened by setting the writer to zeri.
log_mgr->FinishedRotation(0, "", rotated_path, open, close, terminating);
// Still signal log manager that we're done.
log_mgr->FinishedRotation(this, 0, 0, 0, 0, false, terminating);
}
void WriterFrontend::DeleteVals(Value** vals)

View file

@ -81,18 +81,15 @@ void Ascii::CloseFile(double t)
return;
if ( include_meta )
{
string ts = t ? Timestamp(t) : string("<abnormal termination>");
WriteHeaderField("end", ts);
}
WriteHeaderField("close", Timestamp(0));
close(fd);
safe_close(fd);
fd = 0;
}
bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const * fields)
{
assert(! fd);
assert(! fd);
string path = info.path;
@ -124,8 +121,6 @@ bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const *
if ( ! safe_write(fd, str.c_str(), str.length()) )
goto write_error;
string ts = Timestamp(info.network_time);
if ( ! (WriteHeaderField("set_separator", get_escaped_string(
string(set_separator, set_separator_len), false)) &&
WriteHeaderField("empty_field", get_escaped_string(
@ -133,7 +128,7 @@ bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const *
WriteHeaderField("unset_field", get_escaped_string(
string(unset_field, unset_field_len), false)) &&
WriteHeaderField("path", get_escaped_string(path, false)) &&
WriteHeaderField("start", ts)) )
WriteHeaderField("open", Timestamp(0))) )
goto write_error;
for ( int i = 0; i < num_fields; ++i )
@ -364,7 +359,7 @@ bool Ascii::DoWrite(int num_fields, const Field* const * fields,
if ( ! safe_write(fd, bytes, len) )
goto write_error;
if ( IsBuf() )
if ( ! IsBuf() )
fsync(fd);
return true;
@ -378,7 +373,10 @@ bool Ascii::DoRotate(const char* rotated_path, double open, double close, bool t
{
// Don't rotate special files or if there's not one currently open.
if ( ! fd || IsSpecial(Info().path) )
{
FinishedRotation();
return true;
}
CloseFile(close);
@ -419,6 +417,16 @@ string Ascii::Timestamp(double t)
{
time_t teatime = time_t(t);
if ( ! teatime )
{
// Use wall clock.
struct timeval tv;
if ( gettimeofday(&tv, 0) < 0 )
Error("gettimeofday failed");
else
teatime = tv.tv_sec;
}
struct tm tmbuf;
struct tm* tm = localtime_r(&teatime, &tmbuf);

View file

@ -35,7 +35,7 @@ private:
bool DoWriteOne(ODesc* desc, threading::Value* val, const threading::Field* field);
bool WriteHeaderField(const string& key, const string& value);
void CloseFile(double t);
string Timestamp(double t);
string Timestamp(double t); // Uses current time if t is zero.
int fd;
string fname;

View file

@ -243,8 +243,25 @@ bool DataSeries::OpenLog(string path)
log_file->writeExtentLibrary(log_types);
for( size_t i = 0; i < schema_list.size(); ++i )
extents.insert(std::make_pair(schema_list[i].field_name,
GeneralField::create(log_series, schema_list[i].field_name)));
{
string fn = schema_list[i].field_name;
GeneralField* gf = 0;
#ifdef USE_PERFTOOLS_DEBUG
{
// GeneralField isn't cleaning up some results of xml parsing, reported
// here: https://github.com/dataseries/DataSeries/issues/1
// Ignore for now to make leak tests pass. There's confidence that
// we do clean up the GeneralField* since the ExtentSeries dtor for
// member log_series would trigger an assert if dynamically allocated
// fields aren't deleted beforehand.
HeapLeakChecker::Disabler disabler;
#endif
gf = GeneralField::create(log_series, fn);
#ifdef USE_PERFTOOLS_DEBUG
}
#endif
extents.insert(std::make_pair(fn, gf));
}
if ( ds_extent_size < ROW_MIN )
{

View file

@ -48,7 +48,7 @@ ElasticSearch::ElasticSearch(WriterFrontend* frontend) : WriterBackend(frontend)
last_send = current_time();
failing = false;
transfer_timeout = BifConst::LogElasticSearch::transfer_timeout * 1000;
transfer_timeout = static_cast<long>(BifConst::LogElasticSearch::transfer_timeout);
curl_handle = HTTPSetup();
}
@ -322,9 +322,7 @@ bool ElasticSearch::DoRotate(const char* rotated_path, double open, double close
}
if ( ! FinishedRotation(current_index.c_str(), prev_index.c_str(), open, close, terminating) )
{
Error(Fmt("error rotating %s to %s", prev_index.c_str(), current_index.c_str()));
}
return true;
}
@ -359,10 +357,10 @@ CURL* ElasticSearch::HTTPSetup()
return handle;
}
bool ElasticSearch::HTTPReceive(void* ptr, int size, int nmemb, void* userdata)
size_t ElasticSearch::HTTPReceive(void* ptr, int size, int nmemb, void* userdata)
{
//TODO: Do some verification on the result?
return true;
return size;
}
bool ElasticSearch::HTTPSend(CURL *handle)
@ -373,7 +371,11 @@ bool ElasticSearch::HTTPSend(CURL *handle)
// The best (only?) way to disable that is to just use HTTP 1.0
curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
//curl_easy_setopt(handle, CURLOPT_TIMEOUT_MS, transfer_timeout);
// Some timeout options. These will need more attention later.
curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, transfer_timeout);
curl_easy_setopt(handle, CURLOPT_TIMEOUT, transfer_timeout);
curl_easy_setopt(handle, CURLOPT_DNS_CACHE_TIMEOUT, 60*60);
CURLcode return_code = curl_easy_perform(handle);
@ -386,12 +388,16 @@ bool ElasticSearch::HTTPSend(CURL *handle)
{
if ( ! failing )
Error(Fmt("ElasticSearch server may not be accessible."));
break;
}
case CURLE_OPERATION_TIMEDOUT:
{
if ( ! failing )
Warning(Fmt("HTTP operation with elasticsearch server timed out at %" PRIu64 " msecs.", transfer_timeout));
break;
}
case CURLE_OK:
@ -403,10 +409,13 @@ bool ElasticSearch::HTTPSend(CURL *handle)
return true;
else if ( ! failing )
Error(Fmt("Received a non-successful status code back from ElasticSearch server, check the elasticsearch server log."));
break;
}
default:
{
break;
}
}
// The "successful" return happens above

View file

@ -45,7 +45,7 @@ private:
bool UpdateIndex(double now, double rinterval, double rbase);
CURL* HTTPSetup();
bool HTTPReceive(void* ptr, int size, int nmemb, void* userdata);
size_t HTTPReceive(void* ptr, int size, int nmemb, void* userdata);
bool HTTPSend(CURL *handle);
// Buffers, etc.
@ -68,7 +68,7 @@ private:
string path;
string index_prefix;
uint64 transfer_timeout;
long transfer_timeout;
bool failing;
uint64 batch_size;

View file

@ -337,6 +337,8 @@ void terminate_bro()
delete log_mgr;
delete thread_mgr;
delete reporter;
reporter = 0;
}
void termination_signal()
@ -380,6 +382,8 @@ static void bro_new_handler()
int main(int argc, char** argv)
{
std::set_new_handler(bro_new_handler);
brofiler.ReadStats();
bro_argc = argc;

View file

@ -56,7 +56,7 @@ void modp_uitoa10(uint32_t value, char* str)
void modp_litoa10(int64_t value, char* str)
{
char* wstr=str;
unsigned long uvalue = (value < 0) ? -value : value;
uint64_t uvalue = (value < 0) ? -value : value;
// Conversion. Number is reversed.
do *wstr++ = (char)(48 + (uvalue % 10)); while(uvalue /= 10);
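The fix above matters because `unsigned long` is 32 bits on many platforms, so large magnitudes were being truncated before conversion; `uint64_t` keeps the full range. The conversion itself is the classic reversed-digit loop, sketched here for the non-negative case only (hypothetical helper, reversing via `std::reverse` instead of pointer swapping):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>

// Convert a 64-bit value to decimal using the same reversed-digit
// loop as modp_litoa10: emit digits least-significant first, then
// reverse the buffer.
std::string u64_to_dec(uint64_t v)
	{
	std::string s;
	do
		s += char('0' + v % 10);
	while ( v /= 10 );
	std::reverse(s.begin(), s.end());
	return s;
	}
```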

View file

@ -124,7 +124,7 @@ nb_dns_init(char *errstr)
nd->s = -1;
/* XXX should be able to init static hostent struct some other way */
(void)gethostbyname("localhost.");
(void)gethostbyname("localhost");
if ((_res.options & RES_INIT) == 0 && res_init() == -1) {
snprintf(errstr, NB_DNS_ERRSIZE, "res_init() failed");

View file

@ -2,7 +2,7 @@
// See the file "COPYING" in the main distribution directory for copyright.
%}
%expect 90
%expect 87
%token TOK_ADD TOK_ADD_TO TOK_ADDR TOK_ANY
%token TOK_ATENDIF TOK_ATELSE TOK_ATIF TOK_ATIFDEF TOK_ATIFNDEF
@ -14,7 +14,7 @@
%token TOK_NEXT TOK_OF TOK_PATTERN TOK_PATTERN_TEXT
%token TOK_PORT TOK_PRINT TOK_RECORD TOK_REDEF
%token TOK_REMOVE_FROM TOK_RETURN TOK_SCHEDULE TOK_SET
%token TOK_STRING TOK_SUBNET TOK_SWITCH TOK_TABLE TOK_THIS
%token TOK_STRING TOK_SUBNET TOK_SWITCH TOK_TABLE
%token TOK_TIME TOK_TIMEOUT TOK_TIMER TOK_TYPE TOK_UNION TOK_VECTOR TOK_WHEN
%token TOK_ATTR_ADD_FUNC TOK_ATTR_ATTR TOK_ATTR_ENCRYPT TOK_ATTR_DEFAULT
@ -22,7 +22,7 @@
%token TOK_ATTR_ROTATE_SIZE TOK_ATTR_DEL_FUNC TOK_ATTR_EXPIRE_FUNC
%token TOK_ATTR_EXPIRE_CREATE TOK_ATTR_EXPIRE_READ TOK_ATTR_EXPIRE_WRITE
%token TOK_ATTR_PERSISTENT TOK_ATTR_SYNCHRONIZED
%token TOK_ATTR_DISABLE_PRINT_HOOK TOK_ATTR_RAW_OUTPUT TOK_ATTR_MERGEABLE
%token TOK_ATTR_RAW_OUTPUT TOK_ATTR_MERGEABLE
%token TOK_ATTR_PRIORITY TOK_ATTR_GROUP TOK_ATTR_LOG TOK_ATTR_ERROR_HANDLER
%token TOK_ATTR_TYPE_COLUMN
@ -118,7 +118,6 @@ extern const char* g_curr_debug_error;
#define YYLTYPE yyltype
Expr* bro_this = 0;
int in_init = 0;
int in_record = 0;
bool resolving_global_ID = false;
@ -584,12 +583,6 @@ expr:
$$ = new ConstExpr(new PatternVal($1));
}
| TOK_THIS
{
set_location(@1);
$$ = bro_this->Ref();
}
| '|' expr '|'
{
set_location(@1, @3);
@ -1297,8 +1290,6 @@ attr:
{ $$ = new Attr(ATTR_ENCRYPT); }
| TOK_ATTR_ENCRYPT '=' expr
{ $$ = new Attr(ATTR_ENCRYPT, $3); }
| TOK_ATTR_DISABLE_PRINT_HOOK
{ $$ = new Attr(ATTR_DISABLE_PRINT_HOOK); }
| TOK_ATTR_RAW_OUTPUT
{ $$ = new Attr(ATTR_RAW_OUTPUT); }
| TOK_ATTR_MERGEABLE

View file

@ -306,7 +306,6 @@ string return TOK_STRING;
subnet return TOK_SUBNET;
switch return TOK_SWITCH;
table return TOK_TABLE;
this return TOK_THIS;
time return TOK_TIME;
timeout return TOK_TIMEOUT;
timer return TOK_TIMER;
@ -320,7 +319,6 @@ when return TOK_WHEN;
&create_expire return TOK_ATTR_EXPIRE_CREATE;
&default return TOK_ATTR_DEFAULT;
&delete_func return TOK_ATTR_DEL_FUNC;
&disable_print_hook return TOK_ATTR_DISABLE_PRINT_HOOK;
&raw_output return TOK_ATTR_RAW_OUTPUT;
&encrypt return TOK_ATTR_ENCRYPT;
&error_handler return TOK_ATTR_ERROR_HANDLER;
@ -437,9 +435,7 @@ F RET_CONST(new Val(false, TYPE_BOOL))
}
{D} {
// TODO: check if we can use strtoull instead of atol,
// and similarly for {HEX}.
RET_CONST(new Val(static_cast<unsigned int>(atol(yytext)),
RET_CONST(new Val(static_cast<bro_uint_t>(strtoull(yytext, (char**) NULL, 10)),
TYPE_COUNT))
}
{FLOAT} RET_CONST(new Val(atof(yytext), TYPE_DOUBLE))
@ -481,12 +477,6 @@ F RET_CONST(new Val(false, TYPE_BOOL))
RET_CONST(new PortVal(p, TRANSPORT_UNKNOWN))
}
({D}"."){3}{D} RET_CONST(new AddrVal(yytext))
"0x"{HEX}+ RET_CONST(new Val(static_cast<bro_uint_t>(strtol(yytext, 0, 16)), TYPE_COUNT))
{H}("."{H})+ RET_CONST(dns_mgr->LookupHost(yytext))
{FLOAT}{OWS}day(s?) RET_CONST(new IntervalVal(atof(yytext),Days))
{FLOAT}{OWS}hr(s?) RET_CONST(new IntervalVal(atof(yytext),Hours))
{FLOAT}{OWS}min(s?) RET_CONST(new IntervalVal(atof(yytext),Minutes))
@ -494,6 +484,12 @@ F RET_CONST(new Val(false, TYPE_BOOL))
{FLOAT}{OWS}msec(s?) RET_CONST(new IntervalVal(atof(yytext),Milliseconds))
{FLOAT}{OWS}usec(s?) RET_CONST(new IntervalVal(atof(yytext),Microseconds))
({D}"."){3}{D} RET_CONST(new AddrVal(yytext))
"0x"{HEX}+ RET_CONST(new Val(static_cast<bro_uint_t>(strtoull(yytext, 0, 16)), TYPE_COUNT))
{H}("."{H})+ RET_CONST(dns_mgr->LookupHost(yytext))
\"([^\\\n\"]|{ESCSEQ})*\" {
const char* text = yytext;
int len = strlen(text) + 1;

View file

@ -311,15 +311,9 @@ static int match_prefix(int s_len, const char* s, int t_len, const char* t)
return 1;
}
Val* do_split(StringVal* str_val, RE_Matcher* re, TableVal* other_sep,
int incl_sep, int max_num_sep)
Val* do_split(StringVal* str_val, RE_Matcher* re, int incl_sep, int max_num_sep)
{
TableVal* a = new TableVal(string_array);
ListVal* other_strings = 0;
if ( other_sep && other_sep->Size() > 0 )
other_strings = other_sep->ConvertToPureList();
const u_char* s = str_val->Bytes();
int n = str_val->Len();
const u_char* end_of_s = s + n;
@ -373,9 +367,6 @@ Val* do_split(StringVal* str_val, RE_Matcher* re, TableVal* other_sep,
reporter->InternalError("RegMatch in split goes beyond the string");
}
if ( other_strings )
delete other_strings;
return a;
}
@ -483,7 +474,7 @@ Val* do_sub(StringVal* str_val, RE_Matcher* re, StringVal* repl, int do_all)
##
function split%(str: string, re: pattern%): string_array
%{
return do_split(str, re, 0, 0, 0);
return do_split(str, re, 0, 0);
%}
## Splits a string *once* into a two-element array of strings according to a
@ -503,7 +494,7 @@ function split%(str: string, re: pattern%): string_array
## .. bro:see:: split split_all split_n str_split
function split1%(str: string, re: pattern%): string_array
%{
return do_split(str, re, 0, 0, 1);
return do_split(str, re, 0, 1);
%}
## Splits a string into an array of strings according to a pattern. This
@ -523,7 +514,7 @@ function split1%(str: string, re: pattern%): string_array
## .. bro:see:: split split1 split_n str_split
function split_all%(str: string, re: pattern%): string_array
%{
return do_split(str, re, 0, 1, 0);
return do_split(str, re, 1, 0);
%}
## Splits a string a given number of times into an array of strings according
@ -549,16 +540,7 @@ function split_all%(str: string, re: pattern%): string_array
function split_n%(str: string, re: pattern,
incl_sep: bool, max_num_sep: count%): string_array
%{
return do_split(str, re, 0, incl_sep, max_num_sep);
%}
## Deprecated. Will be removed.
# Reason: the parameter ``other`` does nothing.
function split_complete%(str: string,
re: pattern, other: string_set,
incl_sep: bool, max_num_sep: count%): string_array
%{
return do_split(str, re, other->AsTableVal(), incl_sep, max_num_sep);
return do_split(str, re, incl_sep, max_num_sep);
%}
## Substitutes a given replacement string for the first occurrence of a pattern

View file

@ -80,8 +80,10 @@ double Manager::NextTimestamp(double* network_time)
for ( msg_thread_list::iterator i = msg_threads.begin(); i != msg_threads.end(); i++ )
{
if ( (*i)->MightHaveOut() )
return timer_mgr->Time();
MsgThread* t = *i;
if ( (*i)->MightHaveOut() && ! t->Killed() )
return timer_mgr->Time();
}
return -1.0;
@ -95,6 +97,12 @@ void Manager::KillThreads()
(*i)->Kill();
}
void Manager::KillThread(BasicThread* thread)
{
DBG_LOG(DBG_THREADING, "Killing thread %s ...", thread->Name());
thread->Kill();
}
void Manager::Process()
{
bool do_beat = false;
@ -114,7 +122,7 @@ void Manager::Process()
if ( do_beat )
t->Heartbeat();
while ( t->HasOut() )
while ( t->HasOut() && ! t->Killed() )
{
Message* msg = t->RetrieveOut();

View file

@ -74,6 +74,16 @@ public:
*/
void ForceProcessing() { Process(); }
/**
* Signals a specific thread to terminate immediately.
*/
void KillThread(BasicThread* thread);
/**
* Signals all threads to terminate immediately.
*/
void KillThreads();
protected:
friend class BasicThread;
friend class MsgThread;
@ -106,13 +116,6 @@ protected:
*/
virtual double NextTimestamp(double* network_time);
/**
* Kills all threads immediately. Note that this may cause race conditions
* if a child thread currently holds a lock that might block somebody
* else.
*/
virtual void KillThreads();
/**
* Part of the IOSource interface.
*/

View file

@ -70,6 +70,16 @@ private:
Type type;
};
// A message from the child to the main process, requesting suicide.
class KillMeMessage : public OutputMessage<MsgThread>
{
public:
KillMeMessage(MsgThread* thread)
: OutputMessage<MsgThread>("KillMeMessage", thread) {}
virtual bool Process() { thread_mgr->KillThread(Object()); return true; }
};
#ifdef DEBUG
// A debug message from the child to be passed on to the DebugLogger.
class DebugMessage : public OutputMessage<MsgThread>
@ -144,6 +154,7 @@ MsgThread::MsgThread() : BasicThread(), queue_in(this, 0), queue_out(0, this)
{
cnt_sent_in = cnt_sent_out = 0;
finished = false;
failed = false;
thread_mgr->AddMsgThread(this);
}
@ -346,16 +357,21 @@ void MsgThread::Run()
if ( ! result )
{
string s = Fmt("%s failed, terminating thread (MsgThread)", Name());
Error(s.c_str());
break;
Error("terminating thread");
// This will eventually kill this thread, but only
// after all other outgoing messages (in particular
// error messages) have been processed by the main
// thread.
SendOut(new KillMeMessage(this));
failed = true;
}
}
// In case we haven't sent the finish message yet, do it now. Reading
// global network_time here should be fine, it isn't changing
// anymore.
if ( ! finished )
if ( ! finished && ! Killed() )
{
OnFinish(network_time);
Finished();

View file

@ -201,6 +201,12 @@ protected:
*/
void HeartbeatInChild();
/** Returns true if a child command has reported a failure. In that case, we'll
* be in the process of killing this thread and no further activity
* should be carried out. To be called only from this child thread.
*/
bool Failed() const { return failed; }
/**
* Regularly triggered for execution in the child thread.
*
@ -294,6 +300,7 @@ private:
uint64_t cnt_sent_out; // Counts messages sent by child.
bool finished; // Set to true by Finished message.
bool failed; // Set to true when a command failed.
};
/**

View file

@ -113,6 +113,9 @@ std::string get_escaped_string(const std::string& str, bool escape_all)
char* copy_string(const char* s)
{
if ( ! s )
return 0;
char* c = new char[strlen(s)+1];
strcpy(c, s);
return c;
@ -722,7 +725,7 @@ void init_random_seed(uint32 seed, const char* read_file, const char* write_file
{
int amt = read(fd, buf + pos,
sizeof(uint32) * (bufsiz - pos));
close(fd);
safe_close(fd);
if ( amt > 0 )
pos += amt / sizeof(uint32);
@ -1204,7 +1207,7 @@ void _set_processing_status(const char* status)
len -= n;
}
close(fd);
safe_close(fd);
errno = old_errno;
}
@ -1353,9 +1356,40 @@ bool safe_write(int fd, const char* data, int len)
return true;
}
void safe_close(int fd)
{
/*
* Failure cases of close(2) are ...
* EBADF: Indicative of programming logic error that needs to be fixed, we
* should always be attempting to close a valid file descriptor.
* EINTR: Ignore signal interruptions, most implementations will actually
* reclaim the open descriptor and POSIX standard doesn't leave many
* options by declaring the state of the descriptor as "unspecified".
* Attempting to inspect actual state or re-attempt close() is not
* thread safe.
* EIO: Again the state of descriptor is "unspecified", but don't recover
* from an I/O error, safe_write() won't either.
*
* Note that we don't use the reporter here to allow use from different threads.
*/
if ( close(fd) < 0 && errno != EINTR )
{
char buf[128];
strerror_r(errno, buf, sizeof(buf));
fprintf(stderr, "safe_close error %d: %s\n", errno, buf);
abort();
}
}
void out_of_memory(const char* where)
{
reporter->FatalError("out of memory in %s.\n", where);
fprintf(stderr, "out of memory in %s.\n", where);
if ( reporter )
// Note that this itself might fail if memory is really tight ...
reporter->FatalError("out of memory in %s.\n", where);
abort();
}
void get_memory_usage(unsigned int* total, unsigned int* malloced)

View file

@ -297,6 +297,9 @@ inline size_t pad_size(size_t size)
// thread-safe as long as no two threads write to the same descriptor.
extern bool safe_write(int fd, const char* data, int len);
// Wraps close(2) to emit error messages and abort on unrecoverable errors.
extern void safe_close(int fd);
extern void out_of_memory(const char* where);
inline void* safe_realloc(void* ptr, size_t size)

View file

@ -1,5 +0,0 @@
1128727430.350788 ? 141.42.64.125 125.190.109.199 other 56729 12345 tcp ? ? S0 X 1 60 0 0 cc=1
1144876538.705610 5.921003 169.229.147.203 239.255.255.253 other 49370 427 udp 147 ? S0 X 3 231 0 0
1144876599.397603 0.815763 192.150.186.169 194.64.249.244 http 53063 80 tcp 377 445 SF X 6 677 5 713
1144876709.032670 9.000191 169.229.147.43 239.255.255.253 other 49370 427 udp 196 ? S0 X 4 308 0 0
1144876697.068273 0.000650 192.150.186.169 192.150.186.15 icmp-unreach 3 3 icmp 56 ? OTH X 2 112 0 0

View file

@ -1,5 +0,0 @@
1128727430.350788 ? 141.42.64.125 125.190.109.199 other 56729 12345 tcp ? ? S0 X 1 60 0 0
1144876538.705610 5.921003 169.229.147.203 239.255.255.253 other 49370 427 udp 147 ? S0 X 3 231 0 0
1144876599.397603 0.815763 192.150.186.169 194.64.249.244 http 53063 80 tcp 377 445 SF X 6 697 5 713
1144876709.032670 9.000191 169.229.147.43 239.255.255.253 other 49370 427 udp 196 ? S0 X 4 308 0 0
1144876697.068273 0.000650 192.150.186.169 192.150.186.15 icmp-unreach 3 3 icmp 56 ? OTH X 2 112 0 0

View file

@ -0,0 +1 @@
PIA_TCP

View file

@ -0,0 +1 @@
T

View file

@ -0,0 +1,2 @@
[entropy=4.715374, chi_square=591.981818, mean=75.472727, monte_carlo_pi=4.0, serial_correlation=-0.11027]
[entropy=2.083189, chi_square=3906.018182, mean=69.054545, monte_carlo_pi=4.0, serial_correlation=0.849402]

View file

@ -0,0 +1 @@
found bro_init

View file

@ -0,0 +1,4 @@
ASCII text, with no line terminators
text/plain; charset=us-ascii
PNG image
image/png; charset=binary

View file

@ -0,0 +1,4 @@
T
F
F
T

View file

@ -0,0 +1 @@
F

View file

@ -0,0 +1 @@
T

View file

@ -0,0 +1,4 @@
1970-01-01 00:00:00
000000 19700101
1973-11-29 21:33:09
213309 19731129

View file

@ -3,101 +3,101 @@
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-18-03-01
#open 2012-03-26-18-03-01
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332784981.078396 - - - - - bad_IP_checksum - F bro
#end 2012-03-26-18-03-01
#close 2012-03-26-18-03-01
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-18-01-25
#open 2012-03-26-18-01-25
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332784885.686428 UWkUyAuUGXf 127.0.0.1 30000 127.0.0.1 80 bad_TCP_checksum - F bro
#end 2012-03-26-18-01-25
#close 2012-03-26-18-01-25
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-18-02-13
#open 2012-03-26-18-02-13
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332784933.501023 UWkUyAuUGXf 127.0.0.1 30000 127.0.0.1 13000 bad_UDP_checksum - F bro
#end 2012-03-26-18-02-13
#close 2012-03-26-18-02-13
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-29-23
#open 2012-04-10-16-29-23
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334075363.536871 UWkUyAuUGXf 192.168.1.100 8 192.168.1.101 0 bad_ICMP_checksum - F bro
#end 2012-04-10-16-29-23
#close 2012-04-10-16-29-23
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-18-06-50
#open 2012-03-26-18-06-50
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332785210.013051 - - - - - routing0_hdr - F bro
1332785210.013051 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:78:1:32::2 80 bad_TCP_checksum - F bro
#end 2012-03-26-18-06-50
#close 2012-03-26-18-06-50
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-17-23-00
#open 2012-03-26-17-23-00
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332782580.798420 - - - - - routing0_hdr - F bro
1332782580.798420 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:78:1:32::2 13000 bad_UDP_checksum - F bro
#end 2012-03-26-17-23-00
#close 2012-03-26-17-23-00
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-25-11
#open 2012-04-10-16-25-11
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334075111.800086 - - - - - routing0_hdr - F bro
1334075111.800086 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 128 2001:78:1:32::1 129 bad_ICMP_checksum - F bro
#end 2012-04-10-16-25-11
#close 2012-04-10-16-25-11
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-18-07-30
#open 2012-03-26-18-07-30
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332785250.469132 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:4f8:4:7:2e0:81ff:fe52:9a6b 80 bad_TCP_checksum - F bro
#end 2012-03-26-18-07-30
#close 2012-03-26-18-07-30
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-17-02-22
#open 2012-03-26-17-02-22
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332781342.923813 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:4f8:4:7:2e0:81ff:fe52:9a6b 13000 bad_UDP_checksum - F bro
#end 2012-03-26-17-02-22
#close 2012-03-26-17-02-22
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-22-19
#open 2012-04-10-16-22-19
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334074939.467194 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 128 2001:4f8:4:7:2e0:81ff:fe52:9a6b 129 bad_ICMP_checksum - F bro
#end 2012-04-10-16-22-19
#close 2012-04-10-16-22-19

View file

@ -3,68 +3,68 @@
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-22-19
#open 2012-04-10-16-22-19
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334074939.467194 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 128 2001:4f8:4:7:2e0:81ff:fe52:9a6b 129 bad_ICMP_checksum - F bro
#end 2012-04-10-16-22-19
#close 2012-04-10-16-22-19
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-18-05-25
#open 2012-03-26-18-05-25
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332785125.596793 - - - - - routing0_hdr - F bro
#end 2012-03-26-18-05-25
#close 2012-03-26-18-05-25
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-03-26-17-21-48
#open 2012-03-26-17-21-48
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1332782508.592037 - - - - - routing0_hdr - F bro
#end 2012-03-26-17-21-48
#close 2012-03-26-17-21-48
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-23-47
#open 2012-04-10-16-23-47
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334075027.053380 - - - - - routing0_hdr - F bro
#end 2012-04-10-16-23-47
#close 2012-04-10-16-23-47
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-23-47
#open 2012-04-10-16-23-47
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334075027.053380 - - - - - routing0_hdr - F bro
#end 2012-04-10-16-23-47
#close 2012-04-10-16-23-47
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-23-47
#open 2012-04-10-16-23-47
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334075027.053380 - - - - - routing0_hdr - F bro
#end 2012-04-10-16-23-47
#close 2012-04-10-16-23-47
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-10-16-23-47
#open 2012-04-10-16-23-47
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1334075027.053380 - - - - - routing0_hdr - F bro
#end 2012-04-10-16-23-47
#close 2012-04-10-16-23-47

View file

@ -3,8 +3,8 @@
#empty_field (empty)
#unset_field -
#path weird
#start 2012-04-05-21-56-51
#open 2012-04-05-21-56-51
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1333663011.602839 - - - - - unknown_protocol_135 - F bro
#end 2012-04-05-21-56-51
#close 2012-04-05-21-56-51

View file

@ -3,7 +3,7 @@
#empty_field (empty)
#unset_field -
#path reporter
#start 2011-03-18-19-06-08
#open 2011-03-18-19-06-08
#fields ts level message location
#types time enum string string
1300475168.783842 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
@ -15,4 +15,4 @@
1300475168.954761 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
1300475168.962628 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
1300475169.780331 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10
#end 2011-03-18-19-06-13
#close 2011-03-18-19-06-13

View file

@ -3,9 +3,9 @@
#empty_field (empty)
#unset_field -
#path dns
#start 2012-03-07-01-37-58
#open 2012-03-07-01-37-58
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval]
1331084278.438444 UWkUyAuUGXf 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51850 2607:f740:b::f93 53 udp 3903 txtpadding_323.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000
1331084293.592245 arKYeMETxOg 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51851 2607:f740:b::f93 53 udp 40849 txtpadding_3230.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000
#end 2012-03-07-01-38-18
#close 2012-03-07-01-38-18

View file

@ -0,0 +1,2 @@
[orig_h=2000:1300::1, orig_p=128/icmp, resp_h=2000:1300::2, resp_p=129/icmp]
[ip=<uninitialized>, ip6=[class=0, flow=0, len=166, nxt=51, hlim=255, src=2000:1300::1, dst=2000:1300::2, exts=[[id=51, hopopts=<uninitialized>, dstopts=<uninitialized>, routing=<uninitialized>, fragment=<uninitialized>, ah=[nxt=58, len=0, rsv=0, spi=0, seq=<uninitialized>, data=<uninitialized>], esp=<uninitialized>, mobility=<uninitialized>]]], tcp=<uninitialized>, udp=<uninitialized>, icmp=<uninitialized>]

View file

@ -3,8 +3,10 @@
#empty_field (empty)
#unset_field -
#path metrics
#open 2012-07-20-01-50-41
#fields ts metric_id filter_name index.host index.str index.network value
#types time enum string addr string subnet count
1331256494.591966 TEST_METRIC foo-bar 6.5.4.3 - - 4
1331256494.591966 TEST_METRIC foo-bar 7.2.1.5 - - 2
1331256494.591966 TEST_METRIC foo-bar 1.2.3.4 - - 6
1342749041.601712 TEST_METRIC foo-bar 6.5.4.3 - - 4
1342749041.601712 TEST_METRIC foo-bar 7.2.1.5 - - 2
1342749041.601712 TEST_METRIC foo-bar 1.2.3.4 - - 6
#close 2012-07-20-01-50-49

View file

@ -3,8 +3,10 @@
#empty_field (empty)
#unset_field -
#path test.failure
#open 2012-07-20-01-50-18
#fields t id.orig_h id.orig_p id.resp_h id.resp_p status country
#types time addr port addr port string string
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 failure US
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 failure UK
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 failure MX
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure US
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure UK
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure MX
#close 2012-07-20-01-50-18

View file

@ -3,10 +3,12 @@
#empty_field (empty)
#unset_field -
#path test
#open 2012-07-20-01-50-18
#fields t id.orig_h id.orig_p id.resp_h id.resp_p status country
#types time addr port addr port string string
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 success unknown
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 failure US
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 failure UK
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 success BR
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 failure MX
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success unknown
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure US
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure UK
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success BR
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure MX
#close 2012-07-20-01-50-18

View file

@ -3,7 +3,9 @@
#empty_field (empty)
#unset_field -
#path test.success
#open 2012-07-20-01-50-18
#fields t id.orig_h id.orig_p id.resp_h id.resp_p status country
#types time addr port addr port string string
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 success unknown
1331256472.375609 1.2.3.4 1234 2.3.4.5 80 success BR
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success unknown
1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success BR
#close 2012-07-20-01-50-18

View file

@ -3,8 +3,8 @@
#empty_field (empty)
#unset_field -
#path conn
#start 2005-10-07-23-23-57
#open 2005-10-07-23-23-57
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
1128727435.450898 UWkUyAuUGXf 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945 (empty)
#end 2005-10-07-23-23-57
#close 2005-10-07-23-23-57

View file

@ -3,38 +3,38 @@
#empty_field (empty)
#unset_field -
#path packet_filter
#start 1970-01-01-00-00-00
#open 2012-07-27-19-14-29
#fields ts node filter init success
#types time string string bool bool
1342748953.570646 - ip or not ip T T
#end <abnormal termination>
1343416469.508262 - ip or not ip T T
#close 2012-07-27-19-14-29
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path packet_filter
#start 1970-01-01-00-00-00
#open 2012-07-27-19-14-29
#fields ts node filter init success
#types time string string bool bool
1342748953.898675 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (tcp port 1080)) or (udp and port 5355)) or (tcp port 22)) or (tcp port 995)) or (port 21)) or (tcp port 25 or tcp port 587)) or (port 6667)) or (tcp port 614)) or (tcp port 990)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T
#end <abnormal termination>
1343416469.888870 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (tcp port 1080)) or (udp and port 5355)) or (tcp port 22)) or (tcp port 995)) or (port 21)) or (tcp port 25 or tcp port 587)) or (port 6667)) or (tcp port 614)) or (tcp port 990)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T
#close 2012-07-27-19-14-29
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path packet_filter
#start 1970-01-01-00-00-00
#open 2012-07-27-19-14-30
#fields ts node filter init success
#types time string string bool bool
1342748954.278211 - port 42 T T
#end <abnormal termination>
1343416470.252918 - port 42 T T
#close 2012-07-27-19-14-30
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path packet_filter
#start 1970-01-01-00-00-00
#open 2012-07-27-19-14-30
#fields ts node filter init success
#types time string string bool bool
1342748954.883780 - port 56730 T T
#end 2005-10-07-23-23-57
1343416470.614962 - port 56730 T T
#close 2012-07-27-19-14-30

Some files were not shown because too many files have changed in this diff.