Merge remote-tracking branch 'origin/master' into topic/dnthayer/alarms-mail

Daniel Thayer 2012-10-30 11:32:58 -05:00
commit 0f97f0b6e4
618 changed files with 11183 additions and 2057 deletions

CHANGES

@@ -1,4 +1,539 @@
2.1-91 | 2012-10-24 16:04:47 -0700
* Adding PPPoE support to Bro. (Seth Hall)
2.1-87 | 2012-10-24 15:40:06 -0700
* Adding missing &redef for some TCP options. Addresses #905, #906,
#907. (Carsten Langer)
2.1-86 | 2012-10-24 15:37:11 -0700
* Add parsing rules for IPv4/IPv6 subnet literal constants.
Addresses #888. (Jon Siwek)
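  For instance, a subnet literal in either address family can now be
  written directly in a script; a minimal sketch:

      event bro_init()
          {
          local n: subnet = [fe80::]/64;    # IPv6 subnet literal
          if ( [fe80::1] in n )
              print "fe80::1 is in fe80::/64";
          }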
2.1-84 | 2012-10-19 15:12:56 -0700
* Added a BiF strptime() to wrap the corresponding C function. (Seth
Hall)
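  A minimal usage sketch (assuming the BiF takes the format string
  first, mirroring C's strptime(), and returns a Bro time value):

      event bro_init()
          {
          local t = strptime("%Y-%m-%d %H:%M:%S", "2012-10-19 15:12:56");
          print t;
          }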
2.1-82 | 2012-10-19 15:05:40 -0700
* Add IPv6 support to signature header conditions. (Jon Siwek)
- "src-ip" and "dst-ip" conditions can now use IPv6 addresses/subnets.
They must be written in colon-hexadecimal representation and enclosed
in square brackets (e.g. [fe80::1]). Addresses #774.
- "icmp6" is now a valid protocol for use with "ip-proto" and "header"
conditions. This allows signatures to be written that can match
against ICMPv6 payloads. Addresses #880.
- "ip6" is now a valid protocol for use with the "header" condition.
(also the "ip-proto" condition, but it results in a no-op in that
case since signatures apply only to the inner-most IP packet when
packets are tunneled). This allows signatures to match specifically
against IPv6 packets (whereas "ip" only matches against IPv4 packets).
- "ip-proto" conditions can now match against IPv6 packets. Before,
IPv6 packets were just silently ignored which meant DPD based on
signatures did not function for IPv6 -- protocol analyzers would only
get attached to a connection over IPv6 based on the well-known ports
set in the "dpd_config" table.
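  Taken together, these additions permit signatures like the following
  hypothetical sketch, which matches ICMPv6 traffic from link-local
  sources:

      signature icmp6-link-local {
          ip-proto == icmp6
          src-ip == [fe80::]/10
          event "ICMPv6 from a link-local source"
      }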
2.1-80 | 2012-10-19 14:48:42 -0700
* Change how "gridftp" gets added to service field of connection
records. In addition to checking for a finished SSL handshake over
an FTP connection, it now also requires that the SSL handshake
occurs after the FTP client requested AUTH GSSAPI, which more
specifically identifies the characteristics of GridFTP control
channels. Addresses #891. (Jon Siwek)
* Allow faster rebuilds in certain cases. Previously, when
rebuilding with a different "--prefix" or "--scriptdir", all Bro
source files were recompiled. With this change, only util.cc is
recompiled. (Daniel Thayer)
2.1-76 | 2012-10-12 10:32:39 -0700
* Add support for recognizing GridFTP connections as an extension to
the standard FTP analyzer. (Jon Siwek)
This is enabled by default and includes:
- An analyzer for the GSI mechanism of the GSSAPI FTP AUTH method. GSI
authentication involves an encoded TLS/SSL handshake over the
FTP control session. For FTP sessions that attempt GSI
authentication, the *service* field of the connection log will
include "gridftp" (as well as "ftp" and "ssl").
- Add an example of a GridFTP data channel detection script. It
relies on the heuristics that GridFTP data channels commonly
default to SSL mutual authentication with a NULL bulk cipher
and that they usually transfer large datasets (the script's
default threshold is 1 GB). The script also defaults to calling
skip_further_processing() after detection to try to save
cycles analyzing the large, benign connection.
For identified GridFTP data channels, the *service* field of
the connection log will include "gridftp-data".
* Add *client_subject* and *client_issuer_subject* as &log'd fields
to SSL::Info record. Also add *client_cert* and
*client_cert_chain* fields to track client cert chain. (Jon Siwek)
* Add a script in base/protocols/conn/polling that generalizes the
process of polling a connection for interesting features. The
GridFTP data channel detection script depends on it to monitor
bytes transferred. (Jon Siwek)
2.1-68 | 2012-10-12 09:46:41 -0700
* Rename the Input Framework's update_finished event to end_of_data.
It will now not only fire after table-reads have been completed,
but also after the last event of a whole-file-read (or
whole-db-read, etc.). (Bernhard Amann)
* Fix for DNS log problem when a DNS response is seen with 0 RRs.
(Seth Hall)
2.1-64 | 2012-10-12 09:36:41 -0700
* Teach --disable-dataseries/--disable-elasticsearch to ./configure.
Addresses #877. (Jon Siwek)
* Add --with-curl option to ./configure. Addresses #877. (Jon Siwek)
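  For example, a hypothetical invocation combining these new switches:

      ./configure --disable-dataseries --with-curl=/usr/local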
2.1-61 | 2012-10-12 09:32:48 -0700
* Fix bug in the input framework: the config table did not work.
(Bernhard Amann)
2.1-58 | 2012-10-08 10:10:09 -0700
* Fix a problem with non-manager cluster nodes applying
Notice::policy. This could, for example, result in duplicate
emails being sent if Notice::emailed_types is redef'd in local.bro
(or any script that gets loaded on all cluster nodes). (Jon Siwek)
2.1-56 | 2012-10-03 16:04:52 -0700
* Add general FAQ entry about upgrading Bro. (Jon Siwek)
2.1-53 | 2012-10-03 16:00:40 -0700
* Add new Tunnel::delay_teredo_confirmation option that indicates
that the Teredo analyzer should wait until it sees both sides of a
connection using a valid Teredo encapsulation before issuing a
protocol_confirmation. Default is on. Addresses #890. (Jon Siwek)
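  Sites preferring the old behavior can switch the option back off,
  e.g. in local.bro (assuming the option is redef-able, as Bro tuning
  options typically are):

      redef Tunnel::delay_teredo_confirmation = F;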
2.1-50 | 2012-10-02 12:06:08 -0700
* Fix a typing issue that prevented the ElasticSearch timeout from
working. (Matthias Vallentin)
* Use second granularity for ElasticSearch timeouts. (Matthias
Vallentin)
* Fix compile issues with older versions of libcurl, which don't
offer *_MS timeout constants. (Matthias Vallentin)
2.1-47 | 2012-10-02 11:59:29 -0700
* Fix for the input framework: BroStrings were constructed without a
final \0, which made them unusable by basically all internal
functions (like to_count). (Bernhard Amann)
* Remove deprecated script functionality (see NEWS for details).
(Daniel Thayer)
2.1-39 | 2012-09-29 14:09:16 -0700
* Reliability adjustments to istate tests with network
communication. (Jon Siwek)
2.1-37 | 2012-09-25 14:21:37 -0700
* Reenable some tests that previously would cause Bro to exit with
an error. (Daniel Thayer)
* Fix parsing of large integers on 32-bit systems. (Daniel Thayer)
* Serialize language.when unit test with the "comm" group. (Jon
Siwek)
2.1-32 | 2012-09-24 16:24:34 -0700
* Fix race condition in language/when.bro test. (Daniel Thayer)
2.1-26 | 2012-09-23 08:46:03 -0700
* Add an item to FAQ page about broctl options. (Daniel Thayer)
* Add more language tests. We now have tests of all built-in Bro
data types (including different representations of constant
values, and max./min. values), keywords, and operators (including
special properties of certain operators, such as short-circuit
evaluation and associativity). (Daniel Thayer)
* Fix construction of ip6_ah (Authentication Header) record values.
Authentication Headers with a Payload Len field set to zero would
cause a crash due to invalid memory allocation, because the
previous code assumed Payload Len would always be large enough to
contain all mandatory fields of the header. (Jon Siwek)
* Update compile/dependency docs for OS X. (Jon Siwek)
* Adjusting Mac binary packaging script. Setting CMAKE_PREFIX_PATH
helps link against standard system libs instead of ones that come
from other package managers (e.g. MacPorts). (Jon Siwek)
* Adjusting some unit tests that do cluster communication. (Jon Siwek)
* Small change to non-blocking DNS initialization. (Jon Siwek)
* Reorder a few statements in scan.l to make 1.5msecs etc. work.
Addresses #872. (Bernhard Amann)
2.1-6 | 2012-09-06 23:23:14 -0700
* Fixed a bug where "a -= b" (both operands are intervals) was not
allowed in Bro scripts (although "a = a - b" is allowed). (Daniel
Thayer)
* Fixed a bug where the "!=" operator with subnet operands was
treated the same as the "==" operator. (Daniel Thayer)
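  A minimal sketch exercising both fixes:

      event bro_init()
          {
          local a = 10sec;
          a -= 3sec;                           # now accepted; same as a = a - 3sec
          print a;                             # 7.0 secs
          print 10.0.0.0/8 != 192.168.0.0/16;  # now correctly yields T
          }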
* Add sleeps to configuration_update test for better reliability.
(Jon Siwek)
* Fix a segfault when iterating over a set using a malformed
index. (Daniel Thayer)
2.1 | 2012-08-28 16:46:42 -0700
* Make bif.identify_magic robust against FreeBSD's libmagic config.
(Robin Sommer)
* Remove automatic use of gperftools on non-Linux systems.
--enable-perftools must now explicitly be supplied to ./configure
on non-Linux systems to link against the tcmalloc library.
* Fix uninitialized value for 'is_partial' in TCP analyzer. (Jon
Siwek)
* Parse 64-bit consts in Bro scripts correctly. (Bernhard Amann)
* Output 64-bit counts correctly on 32-bit machines. (Bernhard Amann)
* Input framework fixes, including: (Bernhard Amann)
- One of the change events got the wrong parameters.
- Escape commas in sets and vectors that were unescaped before
tokenization.
- Handling of zero-length-strings as last element in a set was
broken (sets ending with a ,).
- Hashing of lines just containing zero-length-strings was broken.
- Make set_separators different from "," work for the input framework.
- Input framework was not handling counts and ints out of
32-bit-range correctly.
- Errors in single lines no longer kill processing; the offending
line is logged and skipped, and processing continues.
* Update documentation for builtin types. (Daniel Thayer)
- Add missing description of interval "msec" unit.
- Improved description of pattern by clarifying the issue of
operand order and difference between exact and embedded
matching.
* Documentation fixes for signature 'eval' conditions. (Jon Siwek)
* Remove orphaned 1.5 unit tests. (Jon Siwek)
* Add type checking for signature 'eval' condition functions. (Jon
Siwek)
* Adding an identifier to the SMTP blocklist notices for duplicate
suppression. (Seth Hall)
2.1-beta-45 | 2012-08-22 16:11:10 -0700
* Add an option to the input framework that allows the user to
choose not to die upon encountering files/functions. (Bernhard Amann)
2.1-beta-41 | 2012-08-22 16:05:21 -0700
* Add test serialization to "leak" unit tests that use
communication. (Jon Siwek)
* Change to metrics/basic-cluster unit test for reliability. (Jon
Siwek)
* Fixed ack tracking which could overflow quickly in some
situations. (Seth Hall)
* Minor tweak to coverage.bare-mode-errors unit test to work with a
symlinked 'scripts' dir. (Jon Siwek)
2.1-beta-35 | 2012-08-22 08:44:52 -0700
* Add testcase for input framework reading sets (rather than
tables). (Bernhard Amann)
2.1-beta-31 | 2012-08-21 15:46:05 -0700
* Tweak to rotate-custom.bro unit test. (Jon Siwek)
* Ignore small mem leak every rotation interval for dataseries logs.
(Jon Siwek)
2.1-beta-28 | 2012-08-21 08:32:42 -0700
* Linking ES docs into logging document. (Robin Sommer)
2.1-beta-27 | 2012-08-20 20:06:20 -0700
* Add the Stream record to Log::active_streams to make more dynamic
logging possible. (Seth Hall)
* Fix portability of printing to files returned by
open("/dev/stderr"). (Jon Siwek)
* Fix mime type diff canonifier to also skip mime_desc columns. (Jon
Siwek)
* Unit test tweaks/fixes. (Jon Siwek)
- Some baselines for tests in "leaks" group were outdated.
- Changed a few of the cluster/communication tests to terminate
more explicitly instead of relying on btest-bg-wait to kill
processes. This makes the tests finish faster in the success case
and makes the reason for failing clearer in the failure case.
* Fix memory leak of serialized IDs when compiled with
--enable-debug. (Jon Siwek)
2.1-beta-21 | 2012-08-16 11:48:56 -0700
* Installing a handler for running out of memory in "new". Bro will
now print an error message in that case rather than abort with an
uncaught exception. (Robin Sommer)
2.1-beta-20 | 2012-08-16 11:43:31 -0700
* Fixed potential problems with ElasticSearch output plugin. (Seth
Hall)
2.1-beta-13 | 2012-08-10 12:28:04 -0700
* Reporter warnings and errors now print to stderr by default. New
options Reporter::warnings_to_stderr and
Reporter::errors_to_stderr to disable. (Seth Hall)
2.1-beta-9 | 2012-08-10 12:24:29 -0700
* Add more BIF tests. (Daniel Thayer)
2.1-beta-6 | 2012-08-10 12:22:52 -0700
* Fix bug in input framework with an edge case. (Bernhard Amann)
* Fix small bug in input framework test script. (Bernhard Amann)
2.1-beta-3 | 2012-08-03 10:46:49 -0700
* Merge branch 'master' of ssh://git.bro-ids.org/bro (Robin Sommer)
* Fix configure script to exit with non-zero status on error (Jon
Siwek)
* Improve ASCII output performance. (Robin Sommer)
2.1-beta | 2012-07-30 11:59:53 -0700
* Improve log filter compatibility with remote logging. Addresses
#842. (Jon Siwek)
2.0-907 | 2012-07-30 09:13:36 -0700
* Add missing breaks to switch cases in
ElasticSearch::HTTPReceive(). (Jon Siwek)
2.0-905 | 2012-07-28 16:24:34 -0700
* Fix log manager hanging on waiting for pending file rotations,
plus writer API tweak for failed rotations. Addresses #860. (Jon
Siwek and Robin Sommer)
* Tweaking logs-to-elasticsearch.bro so that it doesn't do anything
if the ES server is unset. (Robin Sommer)
2.0-902 | 2012-07-27 12:42:13 -0700
* New variable in logging framework Log::active_streams to indicate
Log::ID enums which are currently active. (Seth Hall)
* Reworked how the logs-to-elasticsearch script works to stop
abusing the logging framework. (Seth Hall)
* Fix input test for recent default change on fastpath. (Robin
Sommer)
2.0-898 | 2012-07-27 12:22:03 -0700
* Small (potential performance) improvement for logging framework. (Seth Hall)
* Script-level rotation postprocessor fix. This fixes a problem with
writers that don't have a postprocessor. (Seth Hall)
* Update input framework documentation to reflect want_record
change. (Bernhard Amann)
* Fix crash when encountering an InterpreterException in a predicate
in logging or input Framework. (Bernhard Amann)
* Input framework: Make want_record=T the default for events
(Bernhard Amann)
* Changing the start/end markers in logs to open/close, which now
reflect wall clock time. (Robin Sommer)
2.0-891 | 2012-07-26 17:15:10 -0700
* Reader/writer API: preventing plugins from receiving further
messages after a failure. (Robin Sommer)
* New test for input framework that fails to find a file. (Robin
Sommer)
* Improving error handling for threads. (Robin Sommer)
* Tweaking the custom-rotate test to produce stable output. (Robin
Sommer)
2.0-884 | 2012-07-26 14:33:21 -0700
* Add comprehensive error handling for close() calls. (Jon Siwek)
* Add more test cases for input framework. (Bernhard Amann)
* Input framework: make error output for non-matching event types
much more verbose. (Bernhard Amann)
2.0-877 | 2012-07-25 17:20:34 -0700
* Fix double close() in FileSerializer class. (Jon Siwek)
* Fix build warnings. (Daniel Thayer)
* Fixes to ElasticSearch plugin to make libcurl handle http
responses correctly. (Seth Hall)
* Fixing FreeBSD compiler error. (Robin Sommer)
* Silencing compiler warnings. (Robin Sommer)
2.0-871 | 2012-07-25 13:08:00 -0700
* Fix complaint from valgrind about uninitialized memory usage. (Jon
Siwek)
* Prevent differing log filters of streams from writing to the same
writer/path (this now produces a warning, and the second filter is
otherwise skipped). Addresses #842. (Jon Siwek)
* Fix tests and error message for to_double BIF. (Daniel Thayer)
* Compile fix. (Robin Sommer)
2.0-866 | 2012-07-24 16:02:07 -0700
* Correct a typo in usage message. (Daniel Thayer)
* Fix file permissions of log files (which were created with execute
permissions after a recent change). (Daniel Thayer)
2.0-862 | 2012-07-24 15:22:52 -0700
* Fix initialization problem in logging class. (Jon Siwek)
* Input framework now accepts escaped ASCII values as input (\x##),
and unescapes them appropriately. (Bernhard Amann)
* Make reading ASCII logfiles work when the input separator is
different from \t. (Bernhard Amann)
* A number of smaller fixes for input framework. (Bernhard Amann)
2.0-851 | 2012-07-24 15:04:14 -0700
* New built-in function to_double(s: string). (Scott Campbell)
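  A minimal usage sketch:

      event bro_init()
          {
          print to_double("3.14");    # prints 3.14
          }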
2.0-849 | 2012-07-24 11:06:16 -0700
* Adding missing include needed on some systems. (Robin Sommer)
2.0-846 | 2012-07-23 16:36:37 -0700
* Fix WriterBackend::WriterInfo serialization, reenable ascii
start/end tags. (Jon Siwek)
2.0-844 | 2012-07-23 16:20:59 -0700
* Reworking parts of the internal threading/logging/input APIs for
thread-safety. (Robin Sommer)
* Bugfix for SSL version check. (Bernhard Amann)
* Changing an HTTP DPD from port 3138 to 3128. Addresses #857. (Robin
Sommer)
* ElasticSearch logging writer. See logging-elasticsearch.rst for
more information. (Vlad Grigorescu and Seth Hall).
* Give configure a --disable-perftools option to disable Perftools
support even if found. (Robin Sommer)
* The ASCII log writer now includes "#start <timestamp>" and "#end
<timestamp>" lines in each file. (Robin Sommer)
* Renamed ASCII logger "header" options to "meta". (Robin Sommer)
* ASCII logs now escape '#' at the beginning of log lines. Addresses
#763. (Robin Sommer)
* Fix bug where the dns.log rcode was always set to 0/NOERROR when
no reply packet was seen. (Bernhard Amann)
* Updating to Mozilla's current certificate bundle. (Seth Hall)
2.0-769 | 2012-07-13 16:17:33 -0700
* Fix some Info:Record field documentation. (Vlad Grigorescu)
* Fix overrides of TCP_ApplicationAnalyzer::EndpointEOF. (Jon Siwek)
* Fix segfault when incrementing whole vector values. Also removed
RefExpr::Eval(Val*) method since it was never called. (Jon Siwek)
* Remove baselines for some leak-detecting unit tests. (Jon Siwek)
* Unblock SIGFPE, SIGILL, SIGSEGV and SIGBUS for threads, so that
they now propagate to the main thread. Addresses #848. (Bernhard
Amann)
2.0-761 | 2012-07-12 08:14:38 -0700

* Some small fixes to further reduce SOCKS false positive logs. (Seth Hall)

CMakeLists.txt

@@ -88,23 +88,31 @@ if (LIBGEOIP_FOUND)
     list(APPEND OPTLIBS ${LibGeoIP_LIBRARY})
 endif ()

-set(USE_PERFTOOLS false)
+set(HAVE_PERFTOOLS false)
 set(USE_PERFTOOLS_DEBUG false)
+set(USE_PERFTOOLS_TCMALLOC false)

-find_package(GooglePerftools)
+if (NOT DISABLE_PERFTOOLS)
+    find_package(GooglePerftools)
+endif ()

 if (GOOGLEPERFTOOLS_FOUND)
-    include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR})
-    set(USE_PERFTOOLS true)
+    set(HAVE_PERFTOOLS true)
+
+    # Non-Linux systems may not be well-supported by gperftools, so
+    # require explicit request from user to enable it in that case.
+    if (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ENABLE_PERFTOOLS)
+        set(USE_PERFTOOLS_TCMALLOC true)

         if (ENABLE_PERFTOOLS_DEBUG)
             # Enable heap debugging with perftools.
             set(USE_PERFTOOLS_DEBUG true)
+            include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR})
             list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES_DEBUG})
         else ()
             # Link in tcmalloc for better performance.
             list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES})
         endif ()
+    endif ()
 endif ()

 set(USE_DATASERIES false)
@@ -112,7 +120,8 @@ find_package(Lintel)
 find_package(DataSeries)
 find_package(LibXML2)

-if (LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
+if (NOT DISABLE_DATASERIES AND
+    LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
     set(USE_DATASERIES true)
     include_directories(BEFORE ${Lintel_INCLUDE_DIR})
     include_directories(BEFORE ${DataSeries_INCLUDE_DIR})
@@ -122,6 +131,17 @@ if (LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
     list(APPEND OPTLIBS ${LibXML2_LIBRARIES})
 endif()

+set(USE_ELASTICSEARCH false)
+set(USE_CURL false)
+find_package(LibCURL)
+
+if (NOT DISABLE_ELASTICSEARCH AND LIBCURL_FOUND)
+    set(USE_ELASTICSEARCH true)
+    set(USE_CURL true)
+    include_directories(BEFORE ${LibCURL_INCLUDE_DIR})
+    list(APPEND OPTLIBS ${LibCURL_LIBRARIES})
+endif()
+
 if (ENABLE_PERFTOOLS_DEBUG)
     # Just a no op to prevent CMake from complaining about manually-specified
     # ENABLE_PERFTOOLS_DEBUG not being used if google perftools weren't found
@@ -211,9 +231,13 @@ message(
     "\nAux. Tools:        ${INSTALL_AUX_TOOLS}"
     "\n"
     "\nGeoIP:             ${USE_GEOIP}"
-    "\nGoogle perftools:  ${USE_PERFTOOLS}"
+    "\ngperftools found:  ${HAVE_PERFTOOLS}"
+    "\n        tcmalloc:  ${USE_PERFTOOLS_TCMALLOC}"
     "\n       debugging:  ${USE_PERFTOOLS_DEBUG}"
+    "\ncURL:              ${USE_CURL}"
+    "\n"
     "\nDataSeries:        ${USE_DATASERIES}"
+    "\nElasticSearch:     ${USE_ELASTICSEARCH}"
     "\n"
     "\n================================================================\n"
)

NEWS

@@ -7,8 +7,40 @@ release. For a complete list of changes, see the ``CHANGES`` file
 (note that submodules, such as BroControl and Broccoli, come with
 their own CHANGES.)

-Bro 2.1 Beta
-------------
+Bro 2.2
+-------
+
+New Functionality
+~~~~~~~~~~~~~~~~~
+
+- GridFTP support. TODO: Extend.
+
+- ssl.log now also records the subject client and issuer certificates.
+
+Changed Functionality
+~~~~~~~~~~~~~~~~~~~~~
+
+- We removed the following, already deprecated, functionality:
+
+  * Scripting language:
+    - &disable_print_hook attribute.
+
+  * BiF functions:
+    - parse_dotted_addr(), dump_config(),
+      make_connection_persistent(), generate_idmef(),
+      split_complete()
+
+- Removed a now unused argument from "do_split" helper function.
+
+- "this" is no longer a reserved keyword.
+
+- The Input Framework's update_finished event has been renamed to
+  end_of_data. It will now not only fire after table-reads have been
+  completed, but also after the last event of a whole-file-read (or
+  whole-db-read, etc.).
+
+Bro 2.1
+-------

 New Functionality
 ~~~~~~~~~~~~~~~~~
@@ -56,13 +88,6 @@ New Functionality
   "reader plugins" that make it easy to interface to different data
   sources. We will add more in the future.

-- Bro's default ASCII log format is not exactly the most efficient way
-  for storing and searching large volumes of data. As an alternative,
-  Bro now comes with experimental support for DataSeries output, an
-  efficient binary format for recording structured bulk data.
-  DataSeries is developed and maintained at HP Labs. See
-  doc/logging-dataseries for more information.
-
 - BroControl now has built-in support for host-based load-balancing
   when using either PF_RING, Myricom cards, or individual interfaces.
   Instead of adding a separate worker entry in node.cfg for each Bro
@@ -78,6 +103,25 @@ New Functionality
   "lb_method=interfaces" to specify which interfaces to load-balance
   on).

+- Bro's default ASCII log format is not exactly the most efficient way
+  for storing and searching large volumes of data. As alternatives,
+  Bro now comes with experimental support for two alternative output
+  formats:
+
+  * DataSeries: an efficient binary format for recording structured
+    bulk data. DataSeries is developed and maintained at HP Labs.
+    See doc/logging-dataseries for more information.
+
+  * ElasticSearch: a distributed, RESTful storage and search engine
+    built on top of Apache Lucene. It scales very well, both for
+    distributed indexing and distributed searching. See
+    doc/logging-elasticsearch.rst for more information.
+
+  Note that at this point, we consider Bro's support for these two
+  formats as prototypes for collecting experience with alternative
+  outputs. We do not yet recommend them for production (but welcome
+  feedback!).
+
 Changed Functionality
 ~~~~~~~~~~~~~~~~~~~~~
@@ -90,9 +134,14 @@ the full set.
 * Bro now requires CMake >= 2.6.3.

-* Bro now links in tcmalloc (part of Google perftools) if found at
-  configure time. Doing so can significantly improve memory and
-  CPU use.
+* On Linux, Bro now links in tcmalloc (part of Google perftools)
+  if found at configure time. Doing so can significantly improve
+  memory and CPU use.
+
+  On other platforms, the new configure option --enable-perftools
+  can be used to enable linking to tcmalloc. (Note that perftools's
+  support for non-Linux platforms may be less reliable.)

 - The configure switch --enable-brov6 is gone.
@@ -140,6 +189,15 @@ the full set.
   Bro now supports decapsulating tunnels directly for protocols it
   understands.

+- ASCII logs now record the time when they were opened/closed at the
+  beginning and end of the file, respectively (wall clock). The
+  options LogAscii::header_prefix and LogAscii::include_header have
+  been renamed to LogAscii::meta_prefix and LogAscii::include_meta,
+  respectively.
+
+- The ASCII writer's "header_*" options have been renamed to "meta_*"
+  (because there's now also a footer).
+
 Bro 2.0
 -------

VERSION

@@ -1 +1 @@
-2.0-761
+2.1-91

@@ -1 +1 @@
-Subproject commit 4ad8d15b6395925c9875c9d2912a6cc3b4918e0a
+Subproject commit 74e6a5401c4228d5293c0e309283f43c389e7c12

@@ -1 +1 @@
-Subproject commit c691c01e9cefae5a79bcd4b0f84ca387c8c587a7
+Subproject commit 01bb93cb23f31a98fb400584e8d2f2fbe8a589ef

@@ -1 +1 @@
-Subproject commit 8234b8903cbc775f341bdb6a1c0159981d88d27b
+Subproject commit 907210ce1470724fb386f939cc1b10a4caa2ae39

@@ -1 +1 @@
-Subproject commit d5ecd1a42c04b0dca332edc31811e5a6d0f7f2fb
+Subproject commit 8c53c57ebf16f5aaf34052eab3b02be75774cd75

@@ -1 +1 @@
-Subproject commit 44441a6c912c7c9f8d4771e042306ec5f44e461d
+Subproject commit 44a43e62452302277f88e8fac08d1f979dc53f98

cmake

@@ -1 +1 @@
-Subproject commit 2a72c5e08e018cf632033af3920432d5f684e130
+Subproject commit 14537f56d66b18ab9d5024f798caf4d1f356fc67

config.h.in

@@ -114,9 +114,15 @@
 /* Analyze Mobile IPv6 traffic */
 #cmakedefine ENABLE_MOBILE_IPV6

+/* Use libCurl. */
+#cmakedefine USE_CURL
+
 /* Use the DataSeries writer. */
 #cmakedefine USE_DATASERIES

+/* Use the ElasticSearch writer. */
+#cmakedefine USE_ELASTICSEARCH
+
 /* Version number of package */
 #define VERSION "@VERSION@"

configure

@@ -1,7 +1,7 @@
 #!/bin/sh
 # Convenience wrapper for easily viewing/setting options that
 # the project's CMake scripts will recognize
+set -e

 command="$0 $*"

 # check for `cmake` command
# check for `cmake` command # check for `cmake` command
@@ -29,12 +29,17 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
 Optional Features:
   --enable-debug              compile in debugging mode
   --enable-mobile-ipv6        analyze mobile IPv6 features defined by RFC 6275
+  --enable-perftools          force use of Google perftools on non-Linux systems
+                              (automatically on when perftools is present on Linux)
   --enable-perftools-debug    use Google's perftools for debugging
   --disable-broccoli          don't build or install the Broccoli library
   --disable-broctl            don't install Broctl
   --disable-auxtools          don't build or install auxiliary tools
+  --disable-perftools         don't try to build with Google Perftools
   --disable-python            don't try to build python bindings for broccoli
   --disable-ruby              don't try to build ruby bindings for broccoli
+  --disable-dataseries        don't use the optional DataSeries log writer
+  --disable-elasticsearch     don't use the optional ElasticSearch log writer

 Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root --with-openssl=PATH path to OpenSSL install root
@@ -58,6 +63,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
   --with-swig=PATH            path to SWIG executable
   --with-dataseries=PATH      path to DataSeries and Lintel libraries
   --with-xml2=PATH            path to libxml2 installation (for DataSeries)
+  --with-curl=PATH            path to libcurl install root (for ElasticSearch)

 Packaging Options (for developers):
--binary-package toggle special logic for binary packaging --binary-package toggle special logic for binary packaging
@@ -97,6 +103,7 @@ append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl
 append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro
 append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
 append_cache_entry ENABLE_DEBUG BOOL false
+append_cache_entry ENABLE_PERFTOOLS BOOL false
 append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
 append_cache_entry BinPAC_SKIP_INSTALL BOOL true
 append_cache_entry BUILD_SHARED_LIBS BOOL true
append_cache_entry BUILD_SHARED_LIBS BOOL true append_cache_entry BUILD_SHARED_LIBS BOOL true
@@ -105,6 +112,7 @@ append_cache_entry INSTALL_BROCCOLI BOOL true
 append_cache_entry INSTALL_BROCTL BOOL true
 append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING
 append_cache_entry ENABLE_MOBILE_IPV6 BOOL false
+append_cache_entry DISABLE_PERFTOOLS BOOL false

 # parse arguments
 while [ $# -ne 0 ]; do
@@ -144,7 +152,11 @@ while [ $# -ne 0 ]; do
         --enable-mobile-ipv6)
             append_cache_entry ENABLE_MOBILE_IPV6 BOOL true
             ;;
+        --enable-perftools)
+            append_cache_entry ENABLE_PERFTOOLS BOOL true
+            ;;
         --enable-perftools-debug)
+            append_cache_entry ENABLE_PERFTOOLS BOOL true
             append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL true
             ;;
         --disable-broccoli)
@@ -156,12 +168,21 @@ while [ $# -ne 0 ]; do
         --disable-auxtools)
             append_cache_entry INSTALL_AUX_TOOLS BOOL false
             ;;
+        --disable-perftools)
+            append_cache_entry DISABLE_PERFTOOLS BOOL true
+            ;;
         --disable-python)
            append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
             ;;
         --disable-ruby)
             append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
             ;;
+        --disable-dataseries)
+            append_cache_entry DISABLE_DATASERIES BOOL true
+            ;;
+        --disable-elasticsearch)
+            append_cache_entry DISABLE_ELASTICSEARCH BOOL true
+            ;;
         --with-openssl=*)
             append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
             ;;
@@ -222,6 +243,9 @@ while [ $# -ne 0 ]; do
         --with-xml2=*)
             append_cache_entry LibXML2_ROOT_DIR PATH $optarg
             ;;
+        --with-curl=*)
+            append_cache_entry LibCURL_ROOT_DIR PATH $optarg
+            ;;
         --binary-package)
             append_cache_entry BINARY_PACKAGING_MODE BOOL true
             ;;


@@ -29,7 +29,7 @@ class BroLexer(RegexLexer):
              r'|vector)\b', Keyword.Type),
             (r'(T|F)\b', Keyword.Constant),
             (r'(&)((?:add|delete|expire)_func|attr|(create|read|write)_expire'
-             r'|default|disable_print_hook|raw_output|encrypt|group|log'
+             r'|default|raw_output|encrypt|group|log'
              r'|mergeable|optional|persistent|priority|redef'
              r'|rotate_(?:interval|size)|synchronized)\b', bygroups(Punctuation,
              Keyword)),


doc/faq.rst

@@ -12,6 +12,43 @@ Frequently Asked Questions
 Installation and Configuration
 ==============================

+How do I upgrade to a new version of Bro?
+-----------------------------------------
+
+There are two suggested approaches: either install Bro using the same
+installation prefix directory as before, or pick a new prefix and copy
+local customizations over.
+
+Re-Use Previous Install Prefix
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you choose to configure and install Bro with the same prefix
+directory as before, local customization and configuration to files in
+``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten
+(``$prefix`` indicating the root of where Bro was installed). Also, logs
+generated at run-time won't be touched by the upgrade. (But making
+a backup of local changes before proceeding is still recommended.)
+
+After upgrading, remember to check ``$prefix/share/bro/site`` and
+``$prefix/etc`` for ``.example`` files, which indicate that the
+distribution's version of the file differs from the local one, which may
+include local changes. Review the differences, and make adjustments
+as necessary (for differences that aren't the result of a local change,
+use the new version's).
+
+Pick a New Install Prefix
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to install the newer version in a different prefix
+directory than before, you can just copy local customization and
+configuration files from ``$prefix/share/bro/site`` and ``$prefix/etc``
+to the new location (``$prefix`` indicating the root of where Bro was
+originally installed). Make sure to review the files for differences
+before copying, and make adjustments as necessary (for differences that
+aren't the result of a local change, use the new version's). Of
+particular note, the copied version of ``$prefix/etc/broctl.cfg`` is
+likely to need changes to the ``SpoolDir`` and ``LogDir`` settings.
+
 How can I tune my operating system for best capture performance?
 ----------------------------------------------------------------
@@ -46,7 +83,7 @@ directions:
 http://securityonion.blogspot.com/2011/10/when-is-full-packet-capture-not-full.html

 What does an error message like ``internal error: NB-DNS error`` mean?
----------------------------------------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------

 That often means that DNS is not set up correctly on the system
 running Bro. Try verifying from the command line that DNS lookups
@@ -65,6 +102,15 @@ Generally, please note that we do not regularly test OpenBSD builds.
 We appreciate any patches that improve Bro's support for this
 platform.

+How do BroControl options affect Bro script variables?
+------------------------------------------------------
+
+Some (but not all) BroControl options override a corresponding Bro script variable.
+For example, setting the BroControl option "LogRotationInterval" will override
+the value of the Bro script variable "Log::default_rotation_interval".
+See the :doc:`BroControl Documentation <components/broctl/README>` to find out
+which BroControl options override Bro script variables, and for more discussion
+on site-specific customization.
+
 Usage
 =====
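As an illustration of the FAQ answer above, a hypothetical ``broctl.cfg``
excerpt (the value is in seconds):

    # Overrides the Bro script variable Log::default_rotation_interval.
    LogRotationInterval = 3600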

doc/input.rst

@@ -98,12 +98,12 @@ been completed. Because of this, it is, for example, possible to call
 will remain queued until the first read has been completed.

 Once the input framework finishes reading from a data source, it fires
-the ``update_finished`` event. Once this event has been received all data
+the ``end_of_data`` event. Once this event has been received all data
 from the input file is available in the table.

 .. code:: bro

-    event Input::update_finished(name: string, source: string) {
+    event Input::end_of_data(name: string, source: string) {
         # now all data is in the table
         print blacklist;
     }

@@ -129,7 +129,7 @@ deal with changing data files.
 The first, very basic method is an explicit refresh of an input stream. When
 an input stream is open, the function ``force_update`` can be called. This
 will trigger a complete refresh of the table; any changed elements from the
-file will be updated. After the update is finished the ``update_finished``
+file will be updated. After the update is finished the ``end_of_data``
 event will be raised.

 In our example the call would look like:

@@ -142,7 +142,7 @@ The input framework also supports two automatic refresh modes. The first mode
 continually checks if a file has been changed. If the file has been changed, it
 is re-read and the data in the Bro table is updated to reflect the current
 state. Each time a change has been detected and all the new data has been
-read into the table, the ``update_finished`` event is raised.
+read into the table, the ``end_of_data`` event is raised.

 The second mode is a streaming mode. This mode assumes that the source data
 file is an append-only file to which new data is continually appended. Bro

@@ -150,7 +150,7 @@ continually checks for new data at the end of the file and will add the new
 data to the table. If newer lines in the file have the same index as previous
 lines, they will overwrite the values in the output table. Because of the
 nature of streaming reads (data is continually added to the table),
-the ``update_finished`` event is never raised when using streaming reads.
+the ``end_of_data`` event is never raised when using streaming reads.

 The reading mode can be selected by setting the ``mode`` option of the
 add_table call. Valid values are ``MANUAL`` (the default), ``REREAD``

doc/logging-elasticsearch.rst

@@ -0,0 +1,89 @@
=========================================
Indexed Logging Output with ElasticSearch
=========================================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for searching large volumes of data. ElasticSearch
is a new data storage technology for dealing with tons of data.
It's also a search engine built on top of Apache's Lucene
project. It scales very well, both for distributed indexing and
distributed searching.
.. contents::
Warning
-------
This writer plugin is still in testing and is not yet recommended for
production use! The approach to how logs are handled in the plugin is "fire
and forget" at this time; there is no error handling if the server fails to
respond successfully to the insertion request.
Installing ElasticSearch
------------------------
Download the latest version from: <http://www.elasticsearch.org/download/>.
Once extracted, start ElasticSearch with::
# ./bin/elasticsearch
For more detailed information, refer to the ElasticSearch installation
documentation: http://www.elasticsearch.org/guide/reference/setup/installation.html
Compiling Bro with ElasticSearch Support
----------------------------------------
First, ensure that you have libcurl installed, then run configure::
# ./configure
[...]
====================| Bro Build Summary |=====================
[...]
cURL: true
[...]
ElasticSearch: true
[...]
================================================================
Activating ElasticSearch
------------------------
The easiest way to enable ElasticSearch output is to load the
tuning/logs-to-elasticsearch.bro script. If you are using BroControl, the
following line in local.bro will enable it.
.. console::
@load tuning/logs-to-elasticsearch
With that, Bro will now write most of its logs into ElasticSearch in addition
to maintaining the ASCII logs as it does by default. That script has
some tunable options for choosing which logs to send to ElasticSearch; refer
to the autogenerated script documentation for those options.
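For example, the destination server could be pointed at a non-default
location from local.bro; the option names below are assumed from the
writer's LogElasticSearch namespace rather than confirmed against the
script's source:

    redef LogElasticSearch::server_host = "127.0.0.1";
    redef LogElasticSearch::server_port = 9200;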
There is an interface named Brownian being written specifically to integrate
with the data that Bro outputs into ElasticSearch. It can be found here::
https://github.com/grigorescu/Brownian
Tuning
------
A common problem encountered with ElasticSearch is too many files being held
open. The ElasticSearch website has some suggestions on how to increase the
open file limit.
- http://www.elasticsearch.org/tutorials/2011/04/06/too-many-open-files.html
TODO
----
Lots.
- Perform multicast discovery for server.
- Better error detection.
- Better defaults (don't index loaded-plugins, for instance).

doc/logging.rst

@@ -383,3 +383,4 @@ Bro supports the following output formats other than ASCII:
    :maxdepth: 1

    logging-dataseries
+   logging-elasticsearch


@@ -1,5 +1,6 @@
 .. _CMake: http://www.cmake.org
 .. _SWIG: http://www.swig.org
+.. _Xcode: https://developer.apple.com/xcode/
 .. _MacPorts: http://www.macports.org
 .. _Fink: http://www.finkproject.org
 .. _Homebrew: http://mxcl.github.com/homebrew

@@ -85,17 +86,20 @@ The following dependencies are required to build Bro:
 * Mac OS X

-    Snow Leopard (10.6) comes with all required dependencies except for CMake_.
+    Compiling source code on Macs requires first downloading Xcode_,
+    then going through its "Preferences..." -> "Downloads" menus to
+    install the "Command Line Tools" component.

-    Lion (10.7) comes with all required dependencies except for CMake_ and SWIG_.
+    Lion (10.7) and Mountain Lion (10.8) come with all required
+    dependencies except for CMake_, SWIG_, and ``libmagic``.

-    Distributions of these dependencies can be obtained from the project websites
-    linked above, but they're also likely available from your preferred Mac OS X
-    package management system (e.g. MacPorts_, Fink_, or Homebrew_).
+    Distributions of these dependencies can be obtained from the project
+    websites linked above, but they're also likely available from your
+    preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
+    or Homebrew_).

-    Note that the MacPorts ``swig`` package may not include any specific
-    language support so you may need to also install ``swig-ruby`` and
-    ``swig-python``.
+    Specifically for MacPorts, the ``swig``, ``swig-ruby``, ``swig-python``
+    and ``file`` packages provide the required dependencies.

 Optional Dependencies
 ~~~~~~~~~~~~~~~~~~~~~


@@ -42,6 +42,7 @@ rest_target(${psd} base/frameworks/logging/postprocessors/scp.bro)
 rest_target(${psd} base/frameworks/logging/postprocessors/sftp.bro)
 rest_target(${psd} base/frameworks/logging/writers/ascii.bro)
 rest_target(${psd} base/frameworks/logging/writers/dataseries.bro)
+rest_target(${psd} base/frameworks/logging/writers/elasticsearch.bro)
 rest_target(${psd} base/frameworks/logging/writers/none.bro)
 rest_target(${psd} base/frameworks/metrics/cluster.bro)
 rest_target(${psd} base/frameworks/metrics/main.bro)

@@ -64,9 +65,11 @@ rest_target(${psd} base/frameworks/tunnels/main.bro)
 rest_target(${psd} base/protocols/conn/contents.bro)
 rest_target(${psd} base/protocols/conn/inactivity.bro)
 rest_target(${psd} base/protocols/conn/main.bro)
+rest_target(${psd} base/protocols/conn/polling.bro)
 rest_target(${psd} base/protocols/dns/consts.bro)
 rest_target(${psd} base/protocols/dns/main.bro)
 rest_target(${psd} base/protocols/ftp/file-extract.bro)
+rest_target(${psd} base/protocols/ftp/gridftp.bro)
 rest_target(${psd} base/protocols/ftp/main.bro)
 rest_target(${psd} base/protocols/ftp/utils-commands.bro)
 rest_target(${psd} base/protocols/http/file-extract.bro)

@@ -145,6 +148,7 @@ rest_target(${psd} policy/protocols/ssl/known-certs.bro)
 rest_target(${psd} policy/protocols/ssl/validate-certs.bro)
 rest_target(${psd} policy/tuning/defaults/packet-fragments.bro)
 rest_target(${psd} policy/tuning/defaults/warnings.bro)
+rest_target(${psd} policy/tuning/logs-to-elasticsearch.bro)
 rest_target(${psd} policy/tuning/track-all-assets.bro)
 rest_target(${psd} site/local-manager.bro)
 rest_target(${psd} site/local-proxy.bro)


@@ -55,8 +55,8 @@ The Bro scripting language supports the following built-in types.
     A temporal type representing a relative time. An ``interval``
     constant can be written as a numeric constant followed by a time
-    unit where the time unit is one of ``usec``, ``sec``, ``min``,
-    ``hr``, or ``day`` which respectively represent microseconds,
+    unit where the time unit is one of ``usec``, ``msec``, ``sec``, ``min``,
+    ``hr``, or ``day`` which respectively represent microseconds, milliseconds,
     seconds, minutes, hours, and days. Whitespace between the numeric
     constant and time unit is optional. Appending the letter "s" to the
     time unit in order to pluralize it is also optional (to no semantic

@@ -95,14 +95,14 @@ The Bro scripting language supports the following built-in types.
     and embedded.

     In exact matching the ``==`` equality relational operator is used
-    with one :bro:type:`string` operand and one :bro:type:`pattern`
-    operand to check whether the full string exactly matches the
-    pattern. In this case, the ``^`` beginning-of-line and ``$``
-    end-of-line anchors are redundant since pattern is implicitly
-    anchored to the beginning and end of the line to facilitate an exact
-    match. For example::
+    with one :bro:type:`pattern` operand and one :bro:type:`string`
+    operand (order of operands does not matter) to check whether the full
+    string exactly matches the pattern. In exact matching, the ``^``
+    beginning-of-line and ``$`` end-of-line anchors are redundant since
+    the pattern is implicitly anchored to the beginning and end of the
+    line to facilitate an exact match. For example::

-        "foo" == /foo|bar/
+        /foo|bar/ == "foo"

     yields true, while::

@@ -110,9 +110,9 @@ The Bro scripting language supports the following built-in types.
     yields false. The ``!=`` operator would yield the negation of ``==``.

-    In embedded matching the ``in`` operator is again used with one
-    :bro:type:`string` operand and one :bro:type:`pattern` operand
-    (which must be on the left-hand side), but tests whether the pattern
+    In embedded matching the ``in`` operator is used with one
+    :bro:type:`pattern` operand (which must be on the left-hand side) and
+    one :bro:type:`string` operand, but tests whether the pattern
     appears anywhere within the given string. For example::

         /foo|bar/ in "foobar"

@@ -600,10 +600,6 @@ scripting language supports the following built-in attributes.
     .. TODO: needs to be documented.

-.. bro:attr:: &disable_print_hook
-
-    Deprecated. Will be removed.
-
 .. bro:attr:: &raw_output

     Opens a file in raw mode, i.e., non-ASCII characters are not

doc/signatures.rst

@ -83,9 +83,8 @@ Header Conditions
~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~
Header conditions limit the applicability of the signature to a subset Header conditions limit the applicability of the signature to a subset
of traffic that contains matching packet headers. For TCP, this match of traffic that contains matching packet headers. This type of matching
is performed only for the first packet of a connection. For other is performed only for the first packet of a connection.
protocols, it is done on each individual packet.
There are pre-defined header conditions for some of the most used There are pre-defined header conditions for some of the most used
header fields. All of them generally have the format ``<keyword> <cmp> header fields. All of them generally have the format ``<keyword> <cmp>
@ -95,14 +94,22 @@ one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``; and
against. The following keywords are defined: against. The following keywords are defined:
``src-ip``/``dst-ip <cmp> <address-list>`` ``src-ip``/``dst-ip <cmp> <address-list>``
Source and destination address, respectively. Addresses can be Source and destination address, respectively. Addresses can be given
given as IP addresses or CIDR masks. as IPv4 or IPv6 addresses or CIDR masks. For IPv6 addresses/masks
the colon-hexadecimal representation of the address must be enclosed
in square brackets (e.g. ``[fe80::1]`` or ``[fe80::0]/16``).
``src-port``/``dst-port`` ``<int-list>`` ``src-port``/``dst-port <cmp> <int-list>``
Source and destination port, respectively. Source and destination port, respectively.
``ip-proto tcp|udp|icmp`` ``ip-proto <cmp> tcp|udp|icmp|icmp6|ip|ip6``
IP protocol. IPv4 header's Protocol field or the Next Header field of the final
IPv6 header (i.e. either Next Header field in the fixed IPv6 header
if no extension headers are present or that field from the last
extension header in the chain). Note that the IP-in-IP forms of
tunneling are automatically decapsulated by default and signatures
apply to only the inner-most packet, so specifying ``ip`` or ``ip6``
is a no-op.
For lists of multiple values, they are sequentially compared against For lists of multiple values, they are sequentially compared against
the corresponding header field. If at least one of the comparisons the corresponding header field. If at least one of the comparisons
@ -116,20 +123,22 @@ condition can be defined either as
header <proto>[<offset>:<size>] [& <integer>] <cmp> <value-list> header <proto>[<offset>:<size>] [& <integer>] <cmp> <value-list>
This compares the value found at the given position of the packet This compares the value found at the given position of the packet header
header with a list of values. ``offset`` defines the position of the with a list of values. ``offset`` defines the position of the value
value within the header of the protocol defined by ``proto`` (which within the header of the protocol defined by ``proto`` (which can be
can be ``ip``, ``tcp``, ``udp`` or ``icmp``). ``size`` is either 1, 2, ``ip``, ``ip6``, ``tcp``, ``udp``, ``icmp`` or ``icmp6``). ``size`` is
or 4 and specifies the value to have a size of this many bytes. If the either 1, 2, or 4 and specifies the value to have a size of this many
optional ``& <integer>`` is given, the packet's value is first masked bytes. If the optional ``& <integer>`` is given, the packet's value is
with the integer before it is compared to the value-list. ``cmp`` is first masked with the integer before it is compared to the value-list.
one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``. ``value-list`` is ``cmp`` is one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``.
a list of comma-separated integers similar to those described above. ``value-list`` is a list of comma-separated integers similar to those
The integers within the list may be followed by an additional ``/ described above. The integers within the list may be followed by an
mask`` where ``mask`` is a value from 0 to 32. This corresponds to the additional ``/ mask`` where ``mask`` is a value from 0 to 32. This
CIDR notation for netmasks and is translated into a corresponding corresponds to the CIDR notation for netmasks and is translated into a
bitmask applied to the packet's value prior to the comparison (similar corresponding bitmask applied to the packet's value prior to the
to the optional ``& integer``). comparison (similar to the optional ``& integer``). IPv6 address values
are not allowed in the value-list, though you can still inspect any 1,
2, or 4 byte section of an IPv6 header using this keyword.
Putting it all together, this is an example condition that is Putting it all together, this is an example condition that is
equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``: equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``:
@ -138,8 +147,8 @@ equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``:
header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24 header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24
Internally, the predefined header conditions are in fact just Note that the analogous example for IPv6 isn't currently possible since
short-cuts and mapped into a generic condition. 4 bytes is the max width of a value that can be compared.
Content Conditions Content Conditions
~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~
@ -229,20 +238,10 @@ matched. The following context conditions are defined:
confirming the match. If false is returned, no signature match is
going to be triggered. The function has to be of type ``function
cond(state: signature_state, data: string): bool``. Here,
``data`` may contain the most recent content chunk available at
the time the signature was matched. If no such chunk is available,
``data`` will be the empty string. See :bro:type:`signature_state`
for its definition.
.. code:: bro
type signature_state: record {
id: string; # ID of the signature
conn: connection; # Current connection
is_orig: bool; # True if current endpoint is originator
payload_size: count; # Payload size of the first packet
};
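A minimal sketch of such an eval function (the function name and the
port check are illustrative assumptions, not from the manual):

.. code:: bro

    function my_sig_cond(state: signature_state, data: string): bool
        {
        # Only confirm the match when the responder side is 8080/tcp.
        return state$conn$id$resp_p == 8080/tcp;
        }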
``payload-size <cmp> <integer>``
    Compares the integer to the size of the payload of a packet. For

View file

@ -3,7 +3,13 @@
# This script creates binary packages for Mac OS X.
# They can be found in ../build/ after running.

cmake -P /dev/stdin << "EOF"
if ( ${CMAKE_VERSION} VERSION_LESS 2.8.9 )
message(FATAL_ERROR "CMake >= 2.8.9 required to build package")
endif ()
EOF
[ $? -ne 0 ] && exit 1;
type sw_vers > /dev/null 2>&1 || {
    echo "Unable to get Mac OS X version" >&2;
@ -34,26 +40,26 @@ prefix=/opt/bro
cd ..

# Minimum Bro
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
    --disable-broccoli --disable-broctl --pkg-name-prefix=Bro-minimal \
    --binary-package
( cd build && make package )

# Full Bro package
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
    --pkg-name-prefix=Bro --binary-package
( cd build && make package )

# Broccoli
cd aux/broccoli
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
    --binary-package
( cd build && make package && mv *.dmg ../../../build/ )
cd ../..

# Broctl
cd aux/broctl
CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \
    --binary-package
( cd build && make package && mv *.dmg ../../../build/ )
cd ../..

View file

@ -42,7 +42,7 @@ export {
type Info: record {
    ## The network time at which a communication event occurred.
    ts: time &log;
    ## The peer name (if any) with which a communication event is concerned.
    peer: string &log &optional;
    ## Where the communication event message originated from, that is,
    ## either from the scripting layer or inside the Bro process.

View file

@ -8,8 +8,16 @@ export {
## The default input reader used. Defaults to `READER_ASCII`.
const default_reader = READER_ASCII &redef;

## The default reader mode used. Defaults to `MANUAL`.
const default_mode = MANUAL &redef;

## Flag that controls if the input framework accepts records
## that contain types that are not supported (at the moment,
## file and function). If true, the input framework will
## warn in these cases, but continue. If false, it will
## abort. Defaults to false (abort).
const accept_unsupported_types = F &redef;
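A one-line usage sketch (hypothetical site policy), accepting such
records with a warning instead of aborting:

    redef Input::accept_unsupported_types = T;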
## TableFilter description type used for the `table` method.
type TableDescription: record {
    ## Common definitions for tables and events
@ -82,11 +90,11 @@ export {
    ## Record describing the fields to be retrieved from the source input.
    fields: any;
    ## If want_record is false, the event receives each value in fields as a separate argument.
    ## If it is set to true (default), the event receives all fields in a single record value.
    want_record: bool &default=T;
    ## The event that is raised each time a new line is received from the reader.
    ## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
    ev: any;
@ -106,7 +114,8 @@ export {
## description: `EventDescription` record describing the source.
global add_event: function(description: Input::EventDescription) : bool;

## Remove an input stream. Returns true on success and false if the named stream was
## not found.
##
## id: string value identifying the stream to be removed
global remove: function(id: string) : bool;
@ -117,8 +126,9 @@ export {
## id: string value identifying the stream
global force_update: function(id: string) : bool;

## Event that is called when the end of a data source has been reached,
## including after an update.
global end_of_data: event(name: string, source: string);
}

@load base/input.bif

View file

@ -2,4 +2,5 @@
@load ./postprocessors
@load ./writers/ascii
@load ./writers/dataseries
@load ./writers/elasticsearch
@load ./writers/none

View file

@ -99,6 +99,12 @@ export {
## file name. Generally, filenames are expected to be given
## without any extensions; writers will add appropriate
## extensions automatically.
##
## If this path is found to conflict with another filter's
## for the same writer type, it is automatically corrected
## by appending "-N", where N is the smallest integer greater
## than or equal to 2 that allows the corrected path name to not
## conflict with another filter's.
path: string &optional;

## A function returning the output path for recording entries
@ -118,7 +124,10 @@ export {
## rec: An instance of the stream's ``columns`` type with its
## fields set to the values to be logged.
##
## Returns: The path to be used for the filter, which will be subject
## to the same automatic correction rules as the *path*
## field of :bro:type:`Log::Filter` in the case of conflicts
## with other filters trying to use the same writer/path pair.
path_func: function(id: ID, path: string, rec: any): string &optional;

## Subset of column names to record. If not given, all
@ -321,6 +330,11 @@ export {
## Log::default_rotation_postprocessor_cmd
## Log::default_rotation_postprocessors
global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool;
## The streams which are currently active and not disabled.
## This table is not meant to be modified by users! Only use it for
## examining which streams are active.
global active_streams: table[ID] of Stream = table();
}
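As a usage sketch (a hypothetical snippet), a script can enumerate the
active streams at startup:

    event bro_init() &priority=-10
        {
        for ( id in Log::active_streams )
            print fmt("active log stream: %s", id);
        }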
# We keep a script-level copy of all filters so that we can manipulate them.
@ -335,22 +349,23 @@ function __default_rotation_postprocessor(info: RotationInfo) : bool
    {
    if ( info$writer in default_rotation_postprocessors )
        return default_rotation_postprocessors[info$writer](info);

    # Return T by default so that postprocessor-less writers don't shutdown.
    return T;
    }
function default_path_func(id: ID, path: string, rec: any) : string
    {
    # The suggested path value is a previous result of this function
    # or a filter path explicitly set by the user, so continue using it.
    if ( path != "" )
        return path;

    local id_str = fmt("%s", id);
    local parts = split1(id_str, /::/);
    if ( |parts| == 2 )
        {
        # Example: Notice::LOG -> "notice"
        if ( parts[2] == "LOG" )
            {
@ -405,11 +420,15 @@ function create_stream(id: ID, stream: Stream) : bool
    if ( ! __create_stream(id, stream) )
        return F;

    active_streams[id] = stream;
    return add_default_filter(id);
    }

function disable_stream(id: ID) : bool
    {
    delete active_streams[id];
    return __disable_stream(id);
    }

View file

@ -8,12 +8,13 @@ export {
## into files. This is primarily for debugging purposes.
const output_to_stdout = F &redef;

## If true, include lines with log meta information such as column names with
## types, the values of ASCII logging options that are in use, and the time
## when the file was opened and closed (the latter at the end).
const include_meta = T &redef;

## Prefix for lines with meta information.
const meta_prefix = "#" &redef;

## Separator between fields.
const separator = "\t" &redef;

View file

@ -0,0 +1,48 @@
##! Log writer for sending logs to an ElasticSearch server.
##!
##! Note: This module is in testing and is not yet considered stable!
##!
##! There is one known memory issue. If your ElasticSearch server is
##! running slowly and takes too long to return from bulk insert
##! requests, the message queue to the writer thread will keep
##! growing, giving the appearance of a memory leak.
module LogElasticSearch;
export {
## Name of the ES cluster.
const cluster_name = "elasticsearch" &redef;

## Host of the ES server.
const server_host = "127.0.0.1" &redef;

## Port of the ES server.
const server_port = 9200 &redef;

## Prefix for the name of the ES index.
const index_prefix = "bro" &redef;
## The ES type prefix comes before the name of the related log.
## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout. Note that
## the fractional part of the timeout will be ignored. In particular, time
## specifications less than a second result in a timeout value of 0, which
## means "no timeout."
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before
## they are sent to be bulk indexed.
const max_batch_size = 1000 &redef;
## The maximum amount of wall-clock time that is allowed to pass without
## finishing a bulk log send. This represents the maximum delay you
## would like to have with your logs before they are sent to ElasticSearch.
const max_batch_interval = 1min &redef;
## The maximum byte size for a buffered JSON string to send to the bulk
## insert API.
const max_byte_size = 1024 * 1024 &redef;
}
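A configuration sketch (the host name and values are arbitrary
examples) using the &redef-able options above:

    redef LogElasticSearch::cluster_name = "my-cluster";
    redef LogElasticSearch::server_host = "es.example.com";
    redef LogElasticSearch::max_batch_size = 5000;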

View file

@ -23,7 +23,7 @@ redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER )
# The notice policy is completely handled by the manager and shouldn't be
# done by workers or proxies to save time for packet processing.
event bro_init() &priority=11
    {
    Notice::policy = table();
    }

View file

@ -36,24 +36,55 @@ export {
    ## Not all reporter messages will have locations in them though.
    location: string &log &optional;
    };
## Tunable for sending reporter warning messages to STDERR. The option to
## turn it off is presented here in case Bro is being run by some
## external harness and shouldn't output anything to the console.
const warnings_to_stderr = T &redef;
## Tunable for sending reporter error messages to STDERR. The option to
## turn it off is presented here in case Bro is being run by some
## external harness and shouldn't output anything to the console.
const errors_to_stderr = T &redef;
}
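For example (a hypothetical usage), a harness that captures Bro's
output itself could silence both:

    redef Reporter::warnings_to_stderr = F;
    redef Reporter::errors_to_stderr = F;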
global stderr: file;
event bro_init() &priority=5
    {
    Log::create_stream(Reporter::LOG, [$columns=Info]);
if ( errors_to_stderr || warnings_to_stderr )
stderr = open("/dev/stderr");
    }

event reporter_info(t: time, msg: string, location: string) &priority=-5
    {
    Log::write(Reporter::LOG, [$ts=t, $level=INFO, $message=msg, $location=location]);
    }
event reporter_warning(t: time, msg: string, location: string) &priority=-5
    {
if ( warnings_to_stderr )
{
if ( t > double_to_time(0.0) )
print stderr, fmt("WARNING: %.6f %s (%s)", t, msg, location);
else
print stderr, fmt("WARNING: %s (%s)", msg, location);
}
    Log::write(Reporter::LOG, [$ts=t, $level=WARNING, $message=msg, $location=location]);
    }

event reporter_error(t: time, msg: string, location: string) &priority=-5
    {
if ( errors_to_stderr )
{
if ( t > double_to_time(0.0) )
print stderr, fmt("ERROR: %.6f %s (%s)", t, msg, location);
else
print stderr, fmt("ERROR: %s (%s)", msg, location);
}
    Log::write(Reporter::LOG, [$ts=t, $level=ERROR, $message=msg, $location=location]);
    }

View file

@ -826,7 +826,7 @@ const tcp_storm_interarrival_thresh = 1 sec &redef;
## peer's ACKs. Set to zero to turn off this determination.
##
## .. bro:see:: tcp_max_above_hole_without_any_acks tcp_excessive_data_without_further_acks
const tcp_max_initial_window = 4096 &redef;

## If we're not seeing our peer's ACKs, the maximum volume of data above a sequence
## hole that we'll tolerate before assuming that there's been a packet drop and we
@ -834,7 +834,7 @@ const tcp_max_initial_window = 4096;
## up.
##
## .. bro:see:: tcp_max_initial_window tcp_excessive_data_without_further_acks
const tcp_max_above_hole_without_any_acks = 4096 &redef;

## If we've seen this much data without any of it being acked, we give up
## on that connection to avoid memory exhaustion due to buffering all that
@ -843,7 +843,7 @@ const tcp_max_above_hole_without_any_acks = 4096;
## has in fact gone too far, but for now we just make this quite beefy.
##
## .. bro:see:: tcp_max_initial_window tcp_max_above_hole_without_any_acks
const tcp_excessive_data_without_further_acks = 10 * 1024 * 1024 &redef;
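With &redef added, sites can now tune these thresholds, e.g. (example
values only):

    redef tcp_max_initial_window = 8192;
    redef tcp_max_above_hole_without_any_acks = 8192;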
## For services without a handler, these sets define originator-side ports that
## still trigger reassembly.
@ -1135,10 +1135,10 @@ type ip6_ah: record {
    rsv: count;
    ## Security Parameter Index.
    spi: count;
    ## Sequence number, unset in the case that the *len* field is zero.
    seq: count &optional;
    ## Authentication data, unset in the case that the *len* field is zero.
    data: string &optional;
};
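Since *seq* and *data* may now be unset, script code should test for
their presence first; a small sketch (a hypothetical helper):

    function ah_summary(ah: ip6_ah): string
        {
        if ( ah?$seq )
            return fmt("AH spi=%d seq=%d", ah$spi, ah$seq);
        return fmt("AH spi=%d (no sequence number)", ah$spi);
        }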
## Values extracted from an IPv6 ESP extension header.
@ -2784,6 +2784,14 @@ export {
## to have a valid Teredo encapsulation.
const yielding_teredo_decapsulation = T &redef;
## With this set, the Teredo analyzer waits until it sees both sides
## of a connection using a valid Teredo encapsulation before issuing
## a :bro:see:`protocol_confirmation`. If it's false, the first
## occurrence of a packet with valid Teredo encapsulation causes a
## confirmation. Both cases are still subject to effects of
## :bro:see:`Tunnel::yielding_teredo_decapsulation`.
const delay_teredo_confirmation = T &redef;
## How often to clean up internal state for inactive IP tunnels.
const ip_tunnel_timeout = 24hrs &redef;
} # end export

View file

@ -1,3 +1,4 @@
@load ./main
@load ./contents
@load ./inactivity
@load ./polling

View file

@ -17,7 +17,7 @@ export {
type Info: record {
    ## This is the time of the first packet.
    ts: time &log;
    ## A unique identifier of the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
@ -61,7 +61,7 @@ export {
    ## be left empty at all times.
    local_orig: bool &log &optional;

    ## Indicates the number of bytes missed in content gaps, which is
    ## representative of packet loss. A value other than zero will
    ## normally cause protocol analysis to fail but some analysis may
    ## have been completed prior to the packet loss.
@ -83,23 +83,24 @@ export {
    ## i      inconsistent packet (e.g. SYN+RST bits both set)
    ## ====== ====================================================
    ##
    ## If the event comes from the originator, the letter is in upper-case; if it comes
    ## from the responder, it's in lower-case. Multiple packets of the same type will
    ## only be noted once (e.g. we only record one "d" in each direction, regardless of
    ## how many data packets were seen).
    history: string &log &optional;
    ## Number of packets that the originator sent.
    ## Only set if :bro:id:`use_conn_size_analyzer` = T
    orig_pkts: count &log &optional;
    ## Number of IP level bytes that the originator sent (as seen on the wire,
    ## taken from IP total_length header field).
    ## Only set if :bro:id:`use_conn_size_analyzer` = T
    orig_ip_bytes: count &log &optional;
    ## Number of packets that the responder sent.
    ## Only set if :bro:id:`use_conn_size_analyzer` = T
    resp_pkts: count &log &optional;
    ## Number of IP level bytes that the responder sent (as seen on the wire,
    ## taken from IP total_length header field).
    ## Only set if :bro:id:`use_conn_size_analyzer` = T
    resp_ip_bytes: count &log &optional;
    ## If this connection was over a tunnel, indicate the
    ## *uid* values for any encapsulating parent connections

View file

@ -0,0 +1,49 @@
##! Implements a generic way to poll connections looking for certain features
##! (e.g. monitor bytes transferred). The specific feature of a connection
##! to look for, the polling interval, and the code to execute if the feature
##! is found are all controlled by user-defined callback functions.
module ConnPolling;
export {
## Starts monitoring a given connection.
##
## c: The connection to watch.
##
## callback: A callback function that takes as arguments the monitored
## *connection*, and counter *cnt* that increments each time the
## callback is called. It returns an interval indicating how long
## in the future to schedule an event which will call the
## callback. A negative return interval causes polling to stop.
##
## cnt: The initial value of a counter which gets passed to *callback*.
##
## i: The initial interval at which to schedule the next callback.
## May be ``0secs`` to poll right away.
global watch: function(c: connection,
callback: function(c: connection, cnt: count): interval,
cnt: count, i: interval);
}
event ConnPolling::check(c: connection,
callback: function(c: connection, cnt: count): interval,
cnt: count)
{
if ( ! connection_exists(c$id) )
return;
lookup_connection(c$id); # updates the conn val
local next_interval = callback(c, cnt);
if ( next_interval < 0secs )
return;
watch(c, callback, cnt + 1, next_interval);
}
function watch(c: connection,
callback: function(c: connection, cnt: count): interval,
cnt: count, i: interval)
{
schedule i { ConnPolling::check(c, callback, cnt) };
}
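A usage sketch (the callback name and thresholds are illustrative):
poll a connection once a minute and give up after ten checks:

    function my_poll_cb(c: connection, cnt: count): interval
        {
        print fmt("%s: %d bytes from originator so far", c$uid, c$orig$size);
        if ( cnt >= 10 )
            return -1sec; # a negative interval stops the polling
        return 1min;
        }

    event connection_established(c: connection)
        {
        ConnPolling::watch(c, my_poll_cb, 0, 0secs);
        }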

View file

@ -45,27 +45,29 @@ export {
    AA: bool &log &default=F;
    ## The Truncation bit specifies that the message was truncated.
    TC: bool &log &default=F;
    ## The Recursion Desired bit in a request message indicates that
    ## the client wants recursive service for this query.
    RD: bool &log &default=F;
    ## The Recursion Available bit in a response message indicates that
    ## the name server supports recursive queries.
    RA: bool &log &default=F;
    ## A reserved field that is currently supposed to be zero in all
    ## queries and responses.
    Z: count &log &default=0;
    ## The set of resource descriptions in the query answer.
    answers: vector of string &log &optional;
    ## The caching intervals of the associated RRs described by the
    ## ``answers`` field.
    TTLs: vector of interval &log &optional;
## The DNS query was rejected by the server.
rejected: bool &log &default=F;
    ## This value indicates if this request/response pair is ready to be
    ## logged.
    ready: bool &default=F;
    ## The total number of resource records in a reply message's answer
    ## section.
    total_answers: count &default=0;
    ## The total number of resource records in a reply message's answer,
    ## authority, and additional sections.
    total_replies: count &optional;
@ -162,11 +164,11 @@ function set_session(c: connection, msg: dns_msg, is_query: bool)
    c$dns = c$dns_state$pending[msg$id];

    c$dns$rcode = msg$rcode;
    c$dns$rcode_name = base_errors[msg$rcode];

    if ( ! is_query )
        {
        if ( ! c$dns?$total_answers )
            c$dns$total_answers = msg$num_answers;
@ -186,10 +188,13 @@ function set_session(c: connection, msg: dns_msg, is_query: bool)
        }
    }
event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5
{
set_session(c, msg, is_orig);
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
    {
    if ( ans$answer_type == DNS_ANS )
        {
        c$dns$AA = msg$AA;
@ -209,7 +214,8 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
        c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
        }

    if ( c$dns?$answers && c$dns?$total_answers &&
         |c$dns$answers| == c$dns$total_answers )
        {
        add c$dns_state$finished_answers[c$dns$trans_id];
        # Indicate this request/reply pair is ready to be logged.
@ -230,8 +236,6 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
    {
    c$dns$RD = msg$RD;
    c$dns$TC = msg$TC;
    c$dns$qclass = qclass;
@ -321,11 +325,9 @@ event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
#
# }

event dns_rejected(c: connection, msg: dns_msg,
        query: string, qtype: count, qclass: count) &priority=5
    {
    c$dns$rejected = T;
    }
event connection_state_remove(c: connection) &priority=-5

View file

@ -1,3 +1,4 @@
@load ./utils-commands
@load ./main
@load ./file-extract
@load ./gridftp

View file

@ -0,0 +1,121 @@
##! A detection script for GridFTP data and control channels.
##!
##! GridFTP control channels are identified by FTP control channels
##! that successfully negotiate the GSSAPI method of an AUTH request
##! and for which the exchange involved an encoded TLS/SSL handshake,
##! indicating the GSI mechanism for GSSAPI was used. This analysis
##! is all supported internally; this script simply adds the "gridftp"
##! label to the *service* field of the control channel's
##! :bro:type:`connection` record.
##!
##! GridFTP data channels are identified by a heuristic that relies on
##! the fact that default settings for GridFTP clients typically
##! mutually authenticate the data channel with TLS/SSL and negotiate a
##! NULL bulk cipher (no encryption). Connections with those
##! attributes are then polled for two minutes with decreasing frequency
##! to check if the transfer sizes are large enough to indicate a
##! GridFTP data channel that would be undesirable to analyze further
##! (e.g. stop TCP reassembly). A side effect is that true connection
##! sizes are not logged, but with the benefit of saving CPU cycles that
##! otherwise go to analyzing the large (and likely benign) connections.
@load ./main
@load base/protocols/conn
@load base/protocols/ssl
@load base/frameworks/notice
module GridFTP;
export {
## Number of bytes transferred before guessing a connection is a
## GridFTP data channel.
const size_threshold = 1073741824 &redef;
## Max number of times to check whether a connection's size exceeds the
## :bro:see:`GridFTP::size_threshold`.
const max_poll_count = 15 &redef;
## Whether to skip further processing of the GridFTP data channel once
## detected, which may help performance.
const skip_data = T &redef;
## Base amount of time between checking whether a GridFTP data connection
## has transferred more than :bro:see:`GridFTP::size_threshold` bytes.
const poll_interval = 1sec &redef;
## The amount of time the base :bro:see:`GridFTP::poll_interval` is
## increased by each poll interval. Can be used to make more frequent
## checks at the start of a connection and gradually slow down.
const poll_interval_increase = 1sec &redef;
## Raised when a GridFTP data channel is detected.
##
## c: The connection pertaining to the GridFTP data channel.
global data_channel_detected: event(c: connection);
## The initial criteria used to determine whether to start polling
## the connection for the :bro:see:`GridFTP::size_threshold` to have
## been exceeded. This is called in a :bro:see:`ssl_established` event
## handler and by default looks for both a client and server certificate
## and for a NULL bulk cipher. One way in which this function could be
## redefined is to make it also consider client/server certificate issuer
## subjects.
##
## c: The connection which may possibly be a GridFTP data channel.
##
## Returns: true if the connection should be further polled for an
## exceeded :bro:see:`GridFTP::size_threshold`, else false.
const data_channel_initial_criteria: function(c: connection): bool &redef;
}
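For example (values are arbitrary), a site could lower the detection
threshold and poll more aggressively via the &redef-able options above:

    redef GridFTP::size_threshold = 536870912; # 512 MB
    redef GridFTP::max_poll_count = 30;
    redef GridFTP::poll_interval = 500msec;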
redef record FTP::Info += {
last_auth_requested: string &optional;
};
event ftp_request(c: connection, command: string, arg: string) &priority=4
{
if ( command == "AUTH" && c?$ftp )
c$ftp$last_auth_requested = arg;
}
function size_callback(c: connection, cnt: count): interval
{
if ( c$orig$size > size_threshold || c$resp$size > size_threshold )
{
add c$service["gridftp-data"];
event GridFTP::data_channel_detected(c);
if ( skip_data )
skip_further_processing(c$id);
return -1sec;
}
if ( cnt >= max_poll_count )
return -1sec;
return poll_interval + poll_interval_increase * cnt;
}
event ssl_established(c: connection) &priority=5
{
# If an FTP client requests AUTH GSSAPI and later an SSL handshake
# finishes, it's likely a GridFTP control channel, so add service label.
if ( c?$ftp && c$ftp?$last_auth_requested &&
/GSSAPI/ in c$ftp$last_auth_requested )
add c$service["gridftp"];
}
function data_channel_initial_criteria(c: connection): bool
{
return ( c?$ssl && c$ssl?$client_subject && c$ssl?$subject &&
c$ssl?$cipher && /WITH_NULL/ in c$ssl$cipher );
}
event ssl_established(c: connection) &priority=-3
{
# By default GridFTP data channels do mutual authentication and
# negotiate a cipher suite with a NULL bulk cipher.
if ( data_channel_initial_criteria(c) )
ConnPolling::watch(c, size_callback, 0, 0secs);
}

View file

@ -28,7 +28,9 @@ export {
type Info: record {
    ## Time when the command was sent.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## User name for the current FTP session.
    user: string &log &default="<unknown>";
@ -94,11 +96,11 @@ redef record connection += {
};

# Configure DPD
const ports = { 21/tcp, 2811/tcp } &redef; # 2811/tcp is GridFTP.
redef capture_filters += { ["ftp"] = "port 21 or port 2811" };
redef dpd_config += { [ANALYZER_FTP] = [$ports = ports] };
redef likely_server_ports += { 21/tcp, 2811/tcp };

# Establish the variable for tracking expected connections.
global ftp_data_expected: table[addr, port] of Info &create_expire=5mins;

View file

@ -22,7 +22,9 @@ export {
type Info: record {
    ## Timestamp for when the request happened.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## Represents the pipelined depth into the connection of this
    ## request/response transaction.
@ -112,7 +114,7 @@ event bro_init() &priority=5
# DPD configuration.
const ports = {
    80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3128/tcp,
    8000/tcp, 8080/tcp, 8888/tcp,
};
redef dpd_config += {

View file

@ -11,7 +11,9 @@ export {
type Info: record {
    ## Timestamp when the command was seen.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## Nick name given for the connection.
    nick: string &log &optional;

View file

@ -8,33 +8,51 @@ export {
redef enum Log::ID += { LOG };

type Info: record {
    ## Time when the message was first seen.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## A count to represent the depth of this message transaction in a single
    ## connection where multiple messages were transferred.
    trans_depth: count &log;
    ## Contents of the Helo header.
    helo: string &log &optional;
    ## Contents of the From header.
    mailfrom: string &log &optional;
    ## Contents of the Rcpt header.
    rcptto: set[string] &log &optional;
    ## Contents of the Date header.
    date: string &log &optional;
    ## Contents of the From header.
    from: string &log &optional;
    ## Contents of the To header.
    to: set[string] &log &optional;
    ## Contents of the ReplyTo header.
    reply_to: string &log &optional;
    ## Contents of the MsgID header.
    msg_id: string &log &optional;
    ## Contents of the In-Reply-To header.
    in_reply_to: string &log &optional;
    ## Contents of the Subject header.
    subject: string &log &optional;
    ## Contents of the X-Originating-IP header.
    x_originating_ip: addr &log &optional;
    ## Contents of the first Received header.
    first_received: string &log &optional;
    ## Contents of the second Received header.
    second_received: string &log &optional;
    ## The last message that the server sent to the client.
    last_reply: string &log &optional;
    ## The message transmission path, as extracted from the headers.
    path: vector of addr &log &optional;
    ## Value of the User-Agent header from the client.
    user_agent: string &log &optional;

    ## Indicates if the "Received: from" headers should still be processed.
    process_received_from: bool &default=T;
    ## Indicates if client activity has been seen, but not yet logged.
    has_client_activity: bool &default=F;
};

View file

@ -9,11 +9,13 @@ export {
type Info: record {
    ## Time when the proxy connection was first detected.
    ts: time &log;
    ## Unique ID for the tunnel - may correspond to connection uid or be non-existent.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## Protocol version of SOCKS.
    version: count &log;
    ## Username for the proxy if extracted from the network.
    user: string &log &optional;
    ## Server status for the attempt at using the proxy.
    status: string &log &optional;

View file

@ -26,19 +26,21 @@ export {
type Info: record {
    ## Time when the SSH connection began.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## Indicates if the login was heuristically guessed to be "success"
    ## or "failure".
    status: string &log &optional;
    ## Direction of the connection. If the client was a local host
    ## logging into an external host, this would be OUTBOUND. INBOUND
    ## would be set for the opposite situation.
    # TODO: handle local-local and remote-remote better.
    direction: Direction &log &optional;
    ## Software string from the client.
    client: string &log &optional;
    ## Software string from the server.
    server: string &log &optional;
    ## Amount of data returned from the server. This is currently
    ## the only measure of the success heuristic and it is logged to

View file

@ -9,13 +9,15 @@ export {
redef enum Log::ID += { LOG };

type Info: record {
    ## Time when the SSL connection was first detected.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## SSL/TLS version that the server offered.
    version: string &log &optional;
    ## SSL/TLS cipher suite that the server chose.
    cipher: string &log &optional;
    ## Value of the Server Name Indicator SSL/TLS extension. It
    ## indicates the server name that the client was requesting.
@ -28,17 +30,28 @@ export {
    issuer_subject: string &log &optional;
    ## NotValidBefore field value from the server certificate.
    not_valid_before: time &log &optional;
    ## NotValidAfter field value from the server certificate.
    not_valid_after: time &log &optional;
    ## Last alert that was seen during the connection.
    last_alert: string &log &optional;
## Subject of the X.509 certificate offered by the client.
client_subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the client.
client_issuer_subject: string &log &optional;
    ## Full binary server certificate stored in DER format.
    cert: string &optional;
    ## Chain of certificates offered by the server to validate its
    ## complete signing chain.
    cert_chain: vector of string &optional;
## Full binary client certificate stored in DER format.
client_cert: string &optional;
## Chain of certificates offered by the client to validate its
## complete signing chain.
client_cert_chain: vector of string &optional;
    ## The analyzer ID used for the analyzer instance attached
    ## to each connection. It is not used for logging since it's a
    ## meaningless arbitrary number.
@ -105,7 +118,8 @@ redef likely_server_ports += {
function set_session(c: connection)
    {
    if ( ! c?$ssl )
        c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector(),
                 $client_cert_chain=vector()];
    }

function finish(c: connection)
@ -139,8 +153,24 @@ event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: coun
    # We aren't doing anything with client certificates yet.
    if ( is_orig )
        {
if ( chain_idx == 0 )
{
# Save the primary cert.
c$ssl$client_cert = der_cert;
# Also save other certificate information about the primary cert.
c$ssl$client_subject = cert$subject;
c$ssl$client_issuer_subject = cert$issuer;
}
else
{
# Otherwise, add it to the cert validation chain.
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = der_cert;
}
}
else
{
        if ( chain_idx == 0 )
            {
            # Save the primary cert.
@ -158,6 +188,7 @@ event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: coun
            c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert;
            }
        }
}
event ssl_extension(c: connection, is_orig: bool, code: count, val: string) &priority=5
    {

File diff suppressed because one or more lines are too long

View file

@ -9,9 +9,11 @@ export {
redef enum Log::ID += { LOG };

type Info: record {
    ## Timestamp when the syslog message was seen.
    ts: time &log;
    ## Unique ID for the connection.
    uid: string &log;
    ## The connection's 4-tuple of endpoint addresses/ports.
    id: conn_id &log;
    ## Protocol over which the message was seen.
    proto: transport_proto &log;

View file

@ -1,3 +1,4 @@
##! Watch for various SPAM blocklist URLs in SMTP error messages.
@load base/protocols/smtp
@ -5,9 +6,11 @@ module SMTP;
export {
    redef enum Notice::Type += {
        ## An SMTP server sent a reply mentioning an SMTP block list.
        Blocklist_Error_Message,
        ## The originator's address is seen in the block list error message.
        ## This is useful to detect local hosts sending SPAM with a high
        ## positive rate.
        Blocklist_Blocked_Host,
    };
@ -52,7 +55,8 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
            message = fmt("%s is on an SMTP block list", c$id$orig_h);
            }

        NOTICE([$note=note, $conn=c, $msg=message, $sub=msg,
            $identifier=cat(c$id$orig_h)]);
        }
    }
    }

View file

@ -0,0 +1,36 @@
##! Load this script to enable global log output to an ElasticSearch database.
module LogElasticSearch;
export {
## An ElasticSearch-specific rotation interval.
const rotation_interval = 3hr &redef;

## Optionally exclude any :bro:type:`Log::ID` from being sent to
## ElasticSearch with this script.
const excluded_log_ids: set[Log::ID] &redef;
## If you want to explicitly only send certain :bro:type:`Log::ID`
## streams, add them to this set. If the set remains empty, all will
## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option will remain in
## effect as well.
const send_logs: set[Log::ID] &redef;
}
event bro_init() &priority=-5
{
if ( server_host == "" )
return;
for ( stream_id in Log::active_streams )
{
if ( stream_id in excluded_log_ids ||
(|send_logs| > 0 && stream_id !in send_logs) )
next;
local filter: Log::Filter = [$name = "default-es",
$writer = Log::WRITER_ELASTICSEARCH,
$interv = LogElasticSearch::rotation_interval];
Log::add_filter(stream_id, filter);
}
}
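For instance (the stream choices are hypothetical), to ship only
connection and HTTP logs:

    redef LogElasticSearch::send_logs += { Conn::LOG, HTTP::LOG };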

View file

@ -60,4 +60,5 @@
@load tuning/defaults/__load__.bro
@load tuning/defaults/packet-fragments.bro
@load tuning/defaults/warnings.bro
@load tuning/logs-to-elasticsearch.bro
@load tuning/track-all-assets.bro

View file

@ -171,6 +171,7 @@ const Analyzer::Config Analyzer::analyzer_configs[] = {
    { AnalyzerTag::Contents_SMB, "CONTENTS_SMB", 0, 0, 0, false },
    { AnalyzerTag::Contents_RPC, "CONTENTS_RPC", 0, 0, 0, false },
    { AnalyzerTag::Contents_NFS, "CONTENTS_NFS", 0, 0, 0, false },
    { AnalyzerTag::FTP_ADAT, "FTP_ADAT", 0, 0, 0, false },
};

AnalyzerTimer::~AnalyzerTimer()

View file

@ -46,6 +46,7 @@ namespace AnalyzerTag {
    Contents, ContentLine, NVT, Zip, Contents_DNS, Contents_NCP,
    Contents_NetbiosSSN, Contents_Rlogin, Contents_Rsh,
    Contents_DCE_RPC, Contents_SMB, Contents_RPC, Contents_NFS,
    FTP_ADAT,
    // End-marker.
    LastAnalyzer
};

View file

@ -15,7 +15,7 @@ const char* attr_name(attr_tag t)
    "&add_func", "&delete_func", "&expire_func",
    "&read_expire", "&write_expire", "&create_expire",
    "&persistent", "&synchronized", "&postprocessor",
    "&encrypt", "&match",
    "&raw_output", "&mergeable", "&priority",
    "&group", "&log", "&error_handler", "&type_column",
    "(&tracked)",
@ -385,11 +385,6 @@ void Attributes::CheckAttr(Attr* a)
        // FIXME: Check here for global ID?
        break;
case ATTR_DISABLE_PRINT_HOOK:
if ( type->Tag() != TYPE_FILE )
Error("&disable_print_hook only applicable to files");
break;
    case ATTR_RAW_OUTPUT:
        if ( type->Tag() != TYPE_FILE )
            Error("&raw_output only applicable to files");

View file

@ -28,7 +28,6 @@ typedef enum {
    ATTR_POSTPROCESSOR,
    ATTR_ENCRYPT,
    ATTR_MATCH,
ATTR_DISABLE_PRINT_HOOK,
    ATTR_RAW_OUTPUT,
    ATTR_MERGEABLE,
    ATTR_PRIORITY,

View file

@ -106,10 +106,10 @@ void BitTorrent_Analyzer::Undelivered(int seq, int len, bool orig)
    // }
    }

void BitTorrent_Analyzer::EndpointEOF(bool is_orig)
    {
    TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
    interp->FlowEOF(is_orig);
    }

void BitTorrent_Analyzer::DeliverWeird(const char* msg, bool orig)

View file

@ -15,7 +15,7 @@ public:
    virtual void Done();
    virtual void DeliverStream(int len, const u_char* data, bool orig);
    virtual void Undelivered(int seq, int len, bool orig);
    virtual void EndpointEOF(bool is_orig);

    static Analyzer* InstantiateAnalyzer(Connection* conn)
        { return new BitTorrent_Analyzer(conn); }

View file

@ -215,9 +215,9 @@ void BitTorrentTracker_Analyzer::Undelivered(int seq, int len, bool orig)
        stop_resp = true;
    }

void BitTorrentTracker_Analyzer::EndpointEOF(bool is_orig)
    {
    TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
    }

void BitTorrentTracker_Analyzer::InitBencParser(void)

View file

@ -48,7 +48,7 @@ public:
    virtual void Done();
    virtual void DeliverStream(int len, const u_char* data, bool orig);
    virtual void Undelivered(int seq, int len, bool orig);
    virtual void EndpointEOF(bool is_orig);

    static Analyzer* InstantiateAnalyzer(Connection* conn)
        { return new BitTorrentTracker_Analyzer(conn); }

View file

@ -4,6 +4,7 @@ include_directories(BEFORE
)

configure_file(version.c.in ${CMAKE_CURRENT_BINARY_DIR}/version.c)
configure_file(util-config.h.in ${CMAKE_CURRENT_BINARY_DIR}/util-config.h)

# This creates a custom command to transform a bison output file (inFile)
# into outFile in order to avoid symbol conflicts:
@ -428,6 +429,7 @@ set(bro_SRCS
    logging/WriterFrontend.cc
    logging/writers/Ascii.cc
    logging/writers/DataSeries.cc
    logging/writers/ElasticSearch.cc
    logging/writers/None.cc

    input/Manager.cc
@ -443,10 +445,6 @@ set(bro_SRCS
collect_headers(bro_HEADERS ${bro_SRCS})
add_definitions(-DBRO_SCRIPT_INSTALL_PATH="${BRO_SCRIPT_INSTALL_PATH}")
add_definitions(-DBRO_SCRIPT_SOURCE_PATH="${BRO_SCRIPT_SOURCE_PATH}")
add_definitions(-DBRO_BUILD_PATH="${CMAKE_CURRENT_BINARY_DIR}")
add_executable(bro ${bro_SRCS} ${bro_HEADERS})

target_link_libraries(bro ${brodeps} ${CMAKE_THREAD_LIBS_INIT})

View file

@ -76,7 +76,7 @@ void ChunkedIO::DumpDebugData(const char* basefnname, bool want_reads)
        ChunkedIOFd io(fd, "dump-file");
        io.Write(*i);
        io.Flush();

        safe_close(fd);
        }

    l->clear();
@ -127,7 +127,7 @@ ChunkedIOFd::~ChunkedIOFd()
    delete [] read_buffer;
    delete [] write_buffer;

    safe_close(fd);

    if ( partial )
        {
@ -686,7 +686,7 @@ ChunkedIOSSL::~ChunkedIOSSL()
        ssl = 0;
        }

    safe_close(socket);
    }

View file

@ -63,10 +63,10 @@ void DNS_TCP_Analyzer_binpac::Done()
    interp->FlowEOF(false);
    }

void DNS_TCP_Analyzer_binpac::EndpointEOF(bool is_orig)
    {
    TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
    interp->FlowEOF(is_orig);
    }

void DNS_TCP_Analyzer_binpac::DeliverStream(int len, const u_char* data,


@@ -45,7 +45,7 @@ public:
 	virtual void Done();
 	virtual void DeliverStream(int len, const u_char* data, bool orig);
 	virtual void Undelivered(int seq, int len, bool orig);
-	virtual void EndpointEOF(TCP_Reassembler* endp);
+	virtual void EndpointEOF(bool is_orig);
 
 	static Analyzer* InstantiateAnalyzer(Connection* conn)
 		{ return new DNS_TCP_Analyzer_binpac(conn); }


@@ -872,10 +872,12 @@ Val* BinaryExpr::SubNetFold(Val* v1, Val* v2) const
 	const IPPrefix& n1 = v1->AsSubNet();
 	const IPPrefix& n2 = v2->AsSubNet();
 
-	if ( n1 == n2 )
-		return new Val(1, TYPE_BOOL);
-	else if ( tag == EXPR_NE )
-		return new Val(0, TYPE_BOOL);
+	bool result = ( n1 == n2 ) ? true : false;
+
+	if ( tag == EXPR_NE )
+		result = ! result;
+
+	return new Val(result, TYPE_BOOL);
 	}
 
 void BinaryExpr::SwapOps()
@@ -1035,12 +1037,10 @@ Val* IncrExpr::Eval(Frame* f) const
 			{
 			Val* new_elt = DoSingleEval(f, elt);
 			v_vec->Assign(i, new_elt, this, OP_INCR);
-			Unref(new_elt); // was Ref()'d by Assign()
 			}
 		else
 			v_vec->Assign(i, 0, this, OP_INCR);
 		}
 
-	// FIXME: Is the next line needed?
 	op->Assign(f, v_vec, OP_INCR);
@@ -1517,6 +1517,8 @@ RemoveFromExpr::RemoveFromExpr(Expr* arg_op1, Expr* arg_op2)
 	if ( BothArithmetic(bt1, bt2) )
 		PromoteType(max_type(bt1, bt2), is_vector(op1) || is_vector(op2));
+	else if ( BothInterval(bt1, bt2) )
+		SetType(base_type(bt1));
 	else
 		ExprError("requires two arithmetic operands");
 	}
@@ -2402,11 +2404,6 @@ Expr* RefExpr::MakeLvalue()
 	return this;
 	}
 
-Val* RefExpr::Eval(Val* v) const
-	{
-	return Fold(v);
-	}
-
 void RefExpr::Assign(Frame* f, Val* v, Opcode opcode)
 	{
 	op->Assign(f, v, opcode);


@@ -608,10 +608,6 @@ public:
 	void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN);
 	Expr* MakeLvalue();
 
-	// Only overridden to avoid special vector handling which doesn't apply
-	// for this class.
-	Val* Eval(Val* v) const;
-
 protected:
 	friend class Expr;
 
 	RefExpr()	{ }


@@ -8,6 +8,8 @@
 #include "FTP.h"
 #include "NVT.h"
 #include "Event.h"
+#include "SSL.h"
+#include "Base64.h"
 
 FTP_Analyzer::FTP_Analyzer(Connection* conn)
 : TCP_ApplicationAnalyzer(AnalyzerTag::FTP, conn)
@@ -44,6 +46,14 @@ void FTP_Analyzer::Done()
 		Weird("partial_ftp_request");
 	}
 
+static uint32 get_reply_code(int len, const char* line)
+	{
+	if ( len >= 3 && isdigit(line[0]) && isdigit(line[1]) && isdigit(line[2]) )
+		return (line[0] - '0') * 100 + (line[1] - '0') * 10 + (line[2] - '0');
+	else
+		return 0;
+	}
+
 void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
 	{
 	TCP_ApplicationAnalyzer::DeliverStream(length, data, orig);
@@ -93,16 +103,7 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
 		}
 	else
 		{
-		uint32 reply_code;
-		if ( length >= 3 &&
-		     isdigit(line[0]) && isdigit(line[1]) && isdigit(line[2]) )
-			{
-			reply_code = (line[0] - '0') * 100 +
-					(line[1] - '0') * 10 +
-					(line[2] - '0');
-			}
-		else
-			reply_code = 0;
+		uint32 reply_code = get_reply_code(length, line);
 
 		int cont_resp;
@@ -143,19 +144,22 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
 				else
 					line = end_of_line;
 
-				if ( auth_requested.size() > 0 &&
-				     (reply_code == 234 || reply_code == 335) )
-					// Server accepted AUTH requested,
-					// which means that very likely we
-					// won't be able to parse the rest
-					// of the session, and thus we stop
-					// here.
-					SetSkip(true);
-
 				cont_resp = 0;
 				}
 			}
 
+		if ( reply_code == 334 && auth_requested.size() > 0 &&
+		     auth_requested == "GSSAPI" )
+			{
+			// Server wants to proceed with an ADAT exchange and we
+			// know how to analyze the GSI mechanism, so attach analyzer
+			// to look for that.
+			SSL_Analyzer* ssl = new SSL_Analyzer(Conn());
+			ssl->AddSupportAnalyzer(new FTP_ADAT_Analyzer(Conn(), true));
+			ssl->AddSupportAnalyzer(new FTP_ADAT_Analyzer(Conn(), false));
+			AddChildAnalyzer(ssl);
+			}
+
 		vl->append(new Val(reply_code, TYPE_COUNT));
 		vl->append(new StringVal(end_of_line - line, line));
 		vl->append(new Val(cont_resp, TYPE_BOOL));
@@ -164,5 +168,140 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
 		}
 
 	ConnectionEvent(f, vl);
+	ForwardStream(length, data, orig);
+	}
+
+void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
+	{
+	// Don't know how to parse anything but the ADAT exchanges of GSI GSSAPI,
+	// which is basically just TLS/SSL.
+	if ( Parent()->GetTag() != AnalyzerTag::SSL )
+		{
+		Parent()->Remove();
+		return;
+		}
+
+	bool done = false;
+	const char* line = (const char*) data;
+	const char* end_of_line = line + len;
+
+	BroString* decoded_adat = 0;
+
+	if ( orig )
+		{
+		int cmd_len;
+		const char* cmd;
+		line = skip_whitespace(line, end_of_line);
+		get_word(len, line, cmd_len, cmd);
+
+		if ( strncmp(cmd, "ADAT", cmd_len) == 0 )
+			{
+			line = skip_whitespace(line + cmd_len, end_of_line);
+			StringVal encoded(end_of_line - line, line);
+			decoded_adat = decode_base64(encoded.AsString());
+
+			if ( first_token )
+				{
+				// RFC 2743 section 3.1 specifies a framing format for tokens
+				// that includes an identifier for the mechanism type. The
+				// framing is supposed to be required for the initial context
+				// token, but GSI doesn't do that and starts right in on a
+				// TLS/SSL handshake, so look for that to identify it.
+				const u_char* msg = decoded_adat->Bytes();
+				int msg_len = decoded_adat->Len();
+
+				// Just check that it looks like a viable TLS/SSL handshake
+				// record from the first byte (content type of 0x16) and
+				// that the fourth and fifth bytes indicating the length of
+				// the record match the length of the decoded data.
+				if ( msg_len < 5 || msg[0] != 0x16 ||
+				     msg_len - 5 != ntohs(*((uint16*)(msg + 3))) )
+					{
+					// Doesn't look like TLS/SSL, so done analyzing.
+					done = true;
+					delete decoded_adat;
+					decoded_adat = 0;
+					}
+				}
+
+			first_token = false;
+			}
+
+		else if ( strncmp(cmd, "AUTH", cmd_len) == 0 )
+			// Security state will be reset by a reissued AUTH.
+			done = true;
+		}
+
+	else
+		{
+		uint32 reply_code = get_reply_code(len, line);
+
+		switch ( reply_code ) {
+		case 232:
+		case 234:
+			// Indicates security data exchange is complete, but nothing
+			// more to decode in replies.
+			done = true;
+			break;
+
+		case 235:
+			// Security data exchange complete, but may have more to decode
+			// in the reply (same format as 334 and 335).
+			done = true;
+			// Fall-through.
+
+		case 334:
+		case 335:
+			// Security data exchange still in progress, and there could be
+			// data to decode in the reply.
+			line += 3;
+			if ( len > 3 && line[0] == '-' )
+				line++;
+			line = skip_whitespace(line, end_of_line);
+
+			if ( end_of_line - line >= 5 && strncmp(line, "ADAT=", 5) == 0 )
+				{
+				line += 5;
+				StringVal encoded(end_of_line - line, line);
+				decoded_adat = decode_base64(encoded.AsString());
+				}
+			break;
+
+		case 421:
+		case 431:
+		case 500:
+		case 501:
+		case 503:
+		case 535:
+			// Server isn't going to accept named security mechanism.
+			// Client has to restart back at the AUTH.
+			done = true;
+			break;
+
+		case 631:
+		case 632:
+		case 633:
+			// If the server is sending protected replies, the security
+			// data exchange must have already succeeded. It does have
+			// encoded data in the reply, but 632 and 633 are also encrypted.
+			done = true;
+			break;
+
+		default:
+			break;
+		}
+		}
+
+	if ( decoded_adat )
+		{
+		ForwardStream(decoded_adat->Len(), decoded_adat->Bytes(), orig);
+		delete decoded_adat;
+		}
+
+	if ( done )
+		Parent()->Remove();
+	}
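In isolation, the TLS-record sanity check used on the first ADAT token amounts to the following (a minimal sketch; the helper name is illustrative and not part of the patch):

    #include <arpa/inet.h>  // ntohs
    #include <stdint.h>
    #include <string.h>

    // True if buf looks like a single TLS/SSL handshake record: content
    // type 0x16 (handshake) and a record length field (bytes 3-4, in
    // network order) that matches the remaining payload length.
    static bool looks_like_tls_handshake(const unsigned char* buf, int len)
        {
        if ( len < 5 || buf[0] != 0x16 )
            return false;

        uint16_t rec_len;
        memcpy(&rec_len, buf + 3, sizeof(rec_len));
        return len - 5 == ntohs(rec_len);
        }

Using memcpy here sidesteps the unaligned uint16 cast that the analyzer itself performs on the decoded buffer.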


@@ -30,4 +30,26 @@ protected:
 	string auth_requested;	// AUTH method requested
 };
 
+/**
+ * Analyzes security data of ADAT exchanges over FTP control session (RFC 2228).
+ * Currently only the GSI mechanism of GSSAPI AUTH method is understood.
+ * The ADAT exchange for GSI is base64 encoded TLS/SSL handshake tokens. This
+ * analyzer just decodes the tokens and passes them on to the parent, which must
+ * be an SSL analyzer instance.
+ */
+class FTP_ADAT_Analyzer : public SupportAnalyzer {
+public:
+	FTP_ADAT_Analyzer(Connection* conn, bool arg_orig)
+	: SupportAnalyzer(AnalyzerTag::FTP_ADAT, conn, arg_orig),
+	  first_token(true) { }
+
+	void DeliverStream(int len, const u_char* data, bool orig);
+
+protected:
+	// Used by the client-side analyzer to tell if it needs to peek at the
+	// initial context token and do sanity checking (i.e. does it look like
+	// a TLS/SSL handshake token).
+	bool first_token;
+};
+
 #endif


@@ -138,11 +138,22 @@ BroFile::BroFile(FILE* arg_f, const char* arg_name, const char* arg_access)
 BroFile::BroFile(const char* arg_name, const char* arg_access, BroType* arg_t)
 	{
 	Init();
+	f = 0;
 	name = copy_string(arg_name);
 	access = copy_string(arg_access);
 	t = arg_t ? arg_t : base_type(TYPE_STRING);
-	if ( ! Open() )
+
+	if ( streq(name, "/dev/stdin") )
+		f = stdin;
+	else if ( streq(name, "/dev/stdout") )
+		f = stdout;
+	else if ( streq(name, "/dev/stderr") )
+		f = stderr;
+
+	if ( f )
+		is_open = 1;
+
+	else if ( ! Open() )
 		{
 		reporter->Error("cannot open %s: %s", name, strerror(errno));
 		is_open = 0;
@@ -342,8 +353,8 @@ int BroFile::Close()
 	FinishEncrypt();
 
-	// Do not close stdout/stderr.
-	if ( f == stdout || f == stderr )
+	// Do not close stdin/stdout/stderr.
+	if ( f == stdin || f == stdout || f == stderr )
 		return 0;
 
 	if ( is_in_cache )
@@ -503,9 +514,6 @@ void BroFile::SetAttrs(Attributes* arg_attrs)
 		InitEncrypt(log_encryption_key->AsString()->CheckString());
 		}
 
-	if ( attrs->FindAttr(ATTR_DISABLE_PRINT_HOOK) )
-		DisablePrintHook();
-
 	if ( attrs->FindAttr(ATTR_RAW_OUTPUT) )
 		EnableRawOutput();
@@ -523,6 +531,10 @@ RecordVal* BroFile::Rotate()
 	if ( ! is_open )
 		return 0;
 
+	// Do not rotate stdin/stdout/stderr.
+	if ( f == stdin || f == stdout || f == stderr )
+		return 0;
+
 	if ( okay_to_manage && ! is_in_cache )
 		BringIntoCache();


@@ -57,7 +57,7 @@ public:
 	RecordVal* Rotate();
 
 	// Set &rotate_interval, &rotate_size, &postprocessor,
-	// &disable_print_hook, and &raw_output attributes.
+	// and &raw_output attributes.
 	void SetAttrs(Attributes* attrs);
 
 	// Returns the current size of the file, after fresh stat'ing.


@@ -58,7 +58,7 @@ void FlowSrc::Process()
 
 void FlowSrc::Close()
 	{
-	close(selectable_fd);
+	safe_close(selectable_fd);
 	}


@@ -20,10 +20,10 @@ void HTTP_Analyzer_binpac::Done()
 	interp->FlowEOF(false);
 	}
 
-void HTTP_Analyzer_binpac::EndpointEOF(TCP_Reassembler* endp)
+void HTTP_Analyzer_binpac::EndpointEOF(bool is_orig)
 	{
-	TCP_ApplicationAnalyzer::EndpointEOF(endp);
-	interp->FlowEOF(endp->IsOrig());
+	TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
+	interp->FlowEOF(is_orig);
 	}
 
 void HTTP_Analyzer_binpac::DeliverStream(int len, const u_char* data, bool orig)


@@ -13,7 +13,7 @@ public:
 	virtual void Done();
 	virtual void DeliverStream(int len, const u_char* data, bool orig);
 	virtual void Undelivered(int seq, int len, bool orig);
-	virtual void EndpointEOF(TCP_Reassembler* endp);
+	virtual void EndpointEOF(bool is_orig);
 
 	static Analyzer* InstantiateAnalyzer(Connection* conn)
 		{ return new HTTP_Analyzer_binpac(conn); }


@@ -148,10 +148,16 @@ RecordVal* IPv6_Hdr::BuildRecordVal(VectorVal* chain) const
 		rv->Assign(1, new Val(((ip6_ext*)data)->ip6e_len, TYPE_COUNT));
 		rv->Assign(2, new Val(ntohs(((uint16*)data)[1]), TYPE_COUNT));
 		rv->Assign(3, new Val(ntohl(((uint32*)data)[1]), TYPE_COUNT));
+
+		if ( Length() >= 12 )
+			{
+			// Sequence Number and ICV fields can only be extracted if
+			// Payload Len was non-zero for this header.
 			rv->Assign(4, new Val(ntohl(((uint32*)data)[2]), TYPE_COUNT));
 			uint16 off = 3 * sizeof(uint32);
 			rv->Assign(5, new StringVal(new BroString(data + off, Length() - off, 1)));
 			}
+		}
 		break;
 
 	case IPPROTO_ESP:
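The Length() >= 12 guard follows from the Authentication Header wire format: per RFC 4302 the Payload Len field counts 32-bit words minus two, so the fixed fields through the Sequence Number occupy 12 bytes and any ICV starts at offset 12. A small sketch of the arithmetic (helper name illustrative, not part of the patch):

    #include <stdint.h>

    // RFC 4302: ip6e_len is the AH length in 4-octet units minus 2, so an
    // AH carrying a sequence number (bytes 8-11) and an ICV is at least
    // 12 bytes long.
    static inline int ah_total_len(uint8_t ip6e_len)
        {
        return (ip6e_len + 2) * 4;
        }

    // Example: ip6e_len == 4 gives 24 bytes total, i.e. 12 bytes of fixed
    // header plus a 12-byte ICV starting at offset 12.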


@@ -248,10 +248,10 @@ IPPrefix::IPPrefix(const in6_addr& in6, uint8_t length)
 	prefix.Mask(this->length);
 	}
 
-IPPrefix::IPPrefix(const IPAddr& addr, uint8_t length)
+IPPrefix::IPPrefix(const IPAddr& addr, uint8_t length, bool len_is_v6_relative)
 	: prefix(addr)
 	{
-	if ( prefix.GetFamily() == IPv4 )
+	if ( prefix.GetFamily() == IPv4 && ! len_is_v6_relative )
 		{
 		if ( length > 32 )
 			reporter->InternalError("Bad IPAddr(v4) IPPrefix length : %d",


@@ -342,6 +342,21 @@ public:
 		return memcmp(&addr1.in6, &addr2.in6, sizeof(in6_addr)) < 0;
 		}
 
+	friend bool operator<=(const IPAddr& addr1, const IPAddr& addr2)
+		{
+		return addr1 < addr2 || addr1 == addr2;
+		}
+
+	friend bool operator>=(const IPAddr& addr1, const IPAddr& addr2)
+		{
+		return ! ( addr1 < addr2 );
+		}
+
+	friend bool operator>(const IPAddr& addr1, const IPAddr& addr2)
+		{
+		return ! ( addr1 <= addr2 );
+		}
+
 	/** Converts the address into the type used internally by the
 	  * inter-thread communication.
 	  */
@@ -481,8 +496,15 @@ public:
 	 * @param addr The IP address.
 	 *
 	 * @param length The prefix length in the range from 0 to 128
+	 *
+	 * @param len_is_v6_relative Whether \a length is relative to the full
+	 * 128 bits of an IPv6 address. If false and \a addr is an IPv4
+	 * address, then \a length is expected to range from 0 to 32. If true
+	 * \a length is expected to range from 0 to 128 even if \a addr is IPv4,
+	 * meaning that the mask is to apply to the IPv4-mapped-IPv6 representation.
 	 */
-	IPPrefix(const IPAddr& addr, uint8_t length);
+	IPPrefix(const IPAddr& addr, uint8_t length,
+	         bool len_is_v6_relative = false);
 
 	/**
 	 * Copy constructor.
@@ -583,6 +605,11 @@ public:
 		return net1.Prefix() == net2.Prefix() && net1.Length() == net2.Length();
 		}
 
+	friend bool operator!=(const IPPrefix& net1, const IPPrefix& net2)
+		{
+		return ! (net1 == net2);
+		}
+
 	/**
 	 * Comparison operator IP prefixes. This defines a well-defined order for
 	 * IP prefix. However, the order does not necessarily corresponding to their
@@ -600,6 +627,21 @@ public:
 		return false;
 		}
 
+	friend bool operator<=(const IPPrefix& net1, const IPPrefix& net2)
+		{
+		return net1 < net2 || net1 == net2;
+		}
+
+	friend bool operator>=(const IPPrefix& net1, const IPPrefix& net2)
+		{
+		return ! ( net1 < net2 );
+		}
+
+	friend bool operator>(const IPPrefix& net1, const IPPrefix& net2)
+		{
+		return ! ( net1 <= net2 );
+		}
+
 private:
 	IPAddr prefix;	// We store it as an address with the non-prefix bits masked out via Mask().
 	uint8_t length;	// The bit length of the prefix relative to full IPv6 addr.
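Numerically, len_is_v6_relative just shifts IPv4 prefix lengths past the 96-bit ::ffff:0:0/96 mapping prefix. A minimal sketch of that correspondence, assuming the IPv4-mapped-IPv6 representation described above (helper name illustrative):

    #include <cassert>
    #include <cstdint>

    // Convert an IPv4-relative prefix length (0-32) to the equivalent
    // length relative to the full 128 bits of the IPv4-mapped-IPv6 form.
    static inline uint8_t v4_len_to_v6_relative(uint8_t len)
        {
        assert(len <= 32);
        return 96 + len;  // the ::ffff:0:0/96 mapping prefix comes first
        }

    // e.g. 192.168.1.0/24, masked as an IPv4-mapped address, is a /120.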


@@ -219,16 +219,35 @@ void PktSrc::Process()
 		// Get protocol being carried from the ethernet frame.
 		protocol = (data[12] << 8) + data[13];
 
-		// MPLS carried over the ethernet frame.
-		if ( protocol == 0x8847 )
-			have_mpls = true;
-
-		// VLAN carried over ethernet frame.
-		else if ( protocol == 0x8100 )
+		switch ( protocol )
 			{
+			// MPLS carried over the ethernet frame.
+			case 0x8847:
+				have_mpls = true;
+				break;
+
+			// VLAN carried over the ethernet frame.
+			case 0x8100:
 				data += get_link_header_size(datalink);
 				data += 4; // Skip the vlan header
 				pkt_hdr_size = 0;
+				break;
+
+			// PPPoE carried over the ethernet frame.
+			case 0x8864:
+				data += get_link_header_size(datalink);
+				protocol = (data[6] << 8) + data[7];
+				data += 8; // Skip the PPPoE session and PPP header
+				pkt_hdr_size = 0;
+
+				if ( protocol != 0x0021 && protocol != 0x0057 )
+					{
+					// Neither IPv4 nor IPv6.
+					sessions->Weird("non_ip_packet_in_pppoe_encapsulation", &hdr, data);
+					data = 0;
+					return;
+					}
+				break;
 			}
 
 		break;
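The offsets used for PPPoE follow the RFC 2516 session framing: a 6-byte PPPoE header (version/type, code, session ID, length), then a 2-byte PPP protocol field (0x0021 for IPv4, 0x0057 for IPv6), then the encapsulated IP packet. A standalone sketch of the same parse (function name illustrative, not part of the patch):

    #include <stdint.h>

    // Parse a PPPoE session payload; on success sets *proto to the PPP
    // protocol and returns the offset of the encapsulated IP packet,
    // otherwise returns -1.
    static int parse_pppoe(const unsigned char* p, int len, uint16_t* proto)
        {
        if ( len < 8 )
            return -1;

        *proto = (uint16_t)((p[6] << 8) | p[7]);

        if ( *proto != 0x0021 && *proto != 0x0057 )
            return -1;  // not IP; analogous to the weird raised above

        return 8;  // 6-byte PPPoE session header + 2-byte PPP protocol
        }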


@@ -647,7 +647,7 @@ void RemoteSerializer::Fork()
 		exit(1); // FIXME: Better way to handle this?
 		}
 
-	close(pipe[1]);
+	safe_close(pipe[1]);
 	return;
 	}
 
@@ -664,12 +664,12 @@ void RemoteSerializer::Fork()
 		}
 
 	child.SetParentIO(io);
-	close(pipe[0]);
+	safe_close(pipe[0]);
 
 	// Close file descriptors.
-	close(0);
-	close(1);
-	close(2);
+	safe_close(0);
+	safe_close(1);
+	safe_close(2);
 
 	// Be nice.
 	setpriority(PRIO_PROCESS, 0, 5);
@@ -2692,12 +2692,12 @@ bool RemoteSerializer::ProcessLogCreateWriter()
 	int id, writer;
 	int num_fields;
-	logging::WriterBackend::WriterInfo info;
+	logging::WriterBackend::WriterInfo* info = new logging::WriterBackend::WriterInfo();
 
 	bool success = fmt.Read(&id, "id") &&
 		fmt.Read(&writer, "writer") &&
 		fmt.Read(&num_fields, "num_fields") &&
-		info.Read(&fmt);
+		info->Read(&fmt);
 
 	if ( ! success )
 		goto error;
@@ -2716,7 +2716,8 @@ bool RemoteSerializer::ProcessLogCreateWriter()
 	id_val = new EnumVal(id, BifType::Enum::Log::ID);
 	writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
 
-	if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields, true, false) )
+	if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields,
+				     true, false, true) )
 		goto error;
 
 	Unref(id_val);
@@ -2896,11 +2897,6 @@ void RemoteSerializer::GotID(ID* id, Val* val)
 				    (desc && *desc) ? desc : "not set"),
 				current_peer);
 
-#ifdef USE_PERFTOOLS_DEBUG
-		// May still be cached, but we don't care.
-		heap_checker->IgnoreObject(id);
-#endif
-
 		Unref(id);
 		return;
 		}
@@ -4001,7 +3997,7 @@ bool SocketComm::Connect(Peer* peer)
 		if ( connect(sockfd, res->ai_addr, res->ai_addrlen) < 0 )
 			{
 			Error(fmt("connect failed: %s", strerror(errno)), peer);
-			close(sockfd);
+			safe_close(sockfd);
 			sockfd = -1;
 			continue;
 			}
@@ -4174,16 +4170,18 @@ bool SocketComm::Listen()
 			{
 			Error(fmt("can't bind to %s:%s, %s", l_addr_str.c_str(),
 				  port_str, strerror(errno)));
-			close(fd);
 
 			if ( errno == EADDRINUSE )
 				{
 				// Abandon completely this attempt to set up listening sockets,
 				// try again later.
+				safe_close(fd);
 				CloseListenFDs();
 				listen_next_try = time(0) + bind_retry_interval;
 				return false;
 				}
+
+			safe_close(fd);
 			continue;
 			}
 
@@ -4191,7 +4189,7 @@ bool SocketComm::Listen()
 			{
 			Error(fmt("can't listen on %s:%s, %s", l_addr_str.c_str(),
 				  port_str, strerror(errno)));
-			close(fd);
+			safe_close(fd);
 			continue;
 			}
 
@@ -4227,7 +4225,7 @@ bool SocketComm::AcceptConnection(int fd)
 		{
 		Error(fmt("accept fail, unknown address family %d",
 			  client.ss.ss_family));
-		close(clientfd);
+		safe_close(clientfd);
 		return false;
 		}
 
@@ -4298,7 +4296,7 @@ const char* SocketComm::MakeLogString(const char* msg, Peer* peer)
 void SocketComm::CloseListenFDs()
 	{
 	for ( size_t i = 0; i < listen_fds.size(); ++i )
-		close(listen_fds[i]);
+		safe_close(listen_fds[i]);
 
 	listen_fds.clear();
 	}


@@ -126,6 +126,23 @@ RuleConditionEval::RuleConditionEval(const char* func)
 		rules_error("unknown identifier", func);
 		return;
 		}
 
+	if ( id->Type()->Tag() == TYPE_FUNC )
+		{
+		// Validate argument quantity and type.
+		FuncType* f = id->Type()->AsFuncType();
+
+		if ( f->YieldType()->Tag() != TYPE_BOOL )
+			rules_error("eval function type must yield a 'bool'", func);
+
+		TypeList tl;
+		tl.Append(internal_type("signature_state")->Ref());
+		tl.Append(base_type(TYPE_STRING));
+
+		if ( ! f->CheckArgs(tl.Types()) )
+			rules_error("eval function parameters must be a 'signature_state' "
+			            "and a 'string' type", func);
+		}
 	}
 
 bool RuleConditionEval::DoMatch(Rule* rule, RuleEndpointState* state,


@@ -1,4 +1,5 @@
 #include <algorithm>
+#include <functional>
 
 #include "config.h"
@@ -41,6 +42,23 @@ RuleHdrTest::RuleHdrTest(Prot arg_prot, uint32 arg_offset, uint32 arg_size,
 	level = 0;
 	}
 
+RuleHdrTest::RuleHdrTest(Prot arg_prot, Comp arg_comp, vector<IPPrefix> arg_v)
+	{
+	prot = arg_prot;
+	offset = 0;
+	size = 0;
+	comp = arg_comp;
+	vals = new maskedvalue_list;
+	prefix_vals = arg_v;
+	sibling = 0;
+	child = 0;
+	pattern_rules = 0;
+	pure_rules = 0;
+	ruleset = new IntSet;
+	id = ++idcounter;
+	level = 0;
+	}
+
 Val* RuleMatcher::BuildRuleStateValue(const Rule* rule,
 				      const RuleEndpointState* state) const
 	{
@@ -63,6 +81,8 @@ RuleHdrTest::RuleHdrTest(RuleHdrTest& h)
 	loop_over_list(*h.vals, i)
 		vals->append(new MaskedValue(*(*h.vals)[i]));
 
+	prefix_vals = h.prefix_vals;
+
 	for ( int j = 0; j < Rule::TYPES; ++j )
 		{
 		loop_over_list(h.psets[j], k)
@@ -114,6 +134,10 @@ bool RuleHdrTest::operator==(const RuleHdrTest& h)
 		     (*vals)[i]->mask != (*h.vals)[i]->mask )
 			return false;
 
+	for ( size_t i = 0; i < prefix_vals.size(); ++i )
+		if ( ! (prefix_vals[i] == h.prefix_vals[i]) )
+			return false;
+
 	return true;
 	}
@@ -129,6 +153,9 @@ void RuleHdrTest::PrintDebug()
 		fprintf(stderr, " 0x%08x/0x%08x",
 			(*vals)[i]->val, (*vals)[i]->mask);
 
+	for ( size_t i = 0; i < prefix_vals.size(); ++i )
+		fprintf(stderr, " %s", prefix_vals[i].AsString().c_str());
+
 	fprintf(stderr, "\n");
 	}
@@ -410,29 +437,129 @@ static inline uint32 getval(const u_char* data, int size)
 	}
 
-// A line which can be inserted into the macros below for debugging
-// fprintf(stderr, "%.06f %08x & %08x %s %08x\n", network_time, v, (mvals)[i]->mask, #op, (mvals)[i]->val);
-
 // Evaluate a value list (matches if at least one value matches).
-#define DO_MATCH_OR( mvals, v, op )	\
-	{	\
-	loop_over_list((mvals), i)	\
-		{	\
-		if ( ((v) & (mvals)[i]->mask) op (mvals)[i]->val )	\
-			goto match;	\
-		}	\
-	goto no_match;	\
-	}
+template <typename FuncT>
+static inline bool match_or(const maskedvalue_list& mvals, uint32 v, FuncT comp)
+	{
+	loop_over_list(mvals, i)
+		{
+		if ( comp(v & mvals[i]->mask, mvals[i]->val) )
+			return true;
+		}
+	return false;
+	}
+
+// Evaluate a prefix list (matches if at least one value matches).
+template <typename FuncT>
+static inline bool match_or(const vector<IPPrefix>& prefixes, const IPAddr& a,
+                            FuncT comp)
+	{
+	for ( size_t i = 0; i < prefixes.size(); ++i )
+		{
+		IPAddr masked(a);
+		masked.Mask(prefixes[i].LengthIPv6());
+		if ( comp(masked, prefixes[i].Prefix()) )
+			return true;
+		}
+	return false;
+	}
 
 // Evaluate a value list (doesn't match if any value matches).
-#define DO_MATCH_NOT_AND( mvals, v, op )	\
-	{	\
-	loop_over_list((mvals), i)	\
-		{	\
-		if ( ((v) & (mvals)[i]->mask) op (mvals)[i]->val )	\
-			goto no_match;	\
-		}	\
-	goto match;	\
-	}
+template <typename FuncT>
+static inline bool match_not_and(const maskedvalue_list& mvals, uint32 v,
+                                 FuncT comp)
+	{
+	loop_over_list(mvals, i)
+		{
+		if ( comp(v & mvals[i]->mask, mvals[i]->val) )
+			return false;
+		}
+	return true;
+	}
+
+// Evaluate a prefix list (doesn't match if any value matches).
+template <typename FuncT>
+static inline bool match_not_and(const vector<IPPrefix>& prefixes,
+                                 const IPAddr& a, FuncT comp)
+	{
+	for ( size_t i = 0; i < prefixes.size(); ++i )
+		{
+		IPAddr masked(a);
+		masked.Mask(prefixes[i].LengthIPv6());
+		if ( comp(masked, prefixes[i].Prefix()) )
+			return false;
+		}
+	return true;
+	}
+
+static inline bool compare(const maskedvalue_list& mvals, uint32 v,
+                           RuleHdrTest::Comp comp)
+	{
+	switch ( comp ) {
+	case RuleHdrTest::EQ:
+		return match_or(mvals, v, std::equal_to<uint32>());
+		break;
+
+	case RuleHdrTest::NE:
+		return match_not_and(mvals, v, std::equal_to<uint32>());
+		break;
+
+	case RuleHdrTest::LT:
+		return match_or(mvals, v, std::less<uint32>());
+		break;
+
+	case RuleHdrTest::GT:
+		return match_or(mvals, v, std::greater<uint32>());
+		break;
+
+	case RuleHdrTest::LE:
+		return match_or(mvals, v, std::less_equal<uint32>());
+		break;
+
+	case RuleHdrTest::GE:
+		return match_or(mvals, v, std::greater_equal<uint32>());
+		break;
+
+	default:
+		reporter->InternalError("unknown comparison type");
+		break;
+	}
+	return false;
+	}
+
+static inline bool compare(const vector<IPPrefix>& prefixes, const IPAddr& a,
+                           RuleHdrTest::Comp comp)
+	{
+	switch ( comp ) {
+	case RuleHdrTest::EQ:
+		return match_or(prefixes, a, std::equal_to<IPAddr>());
+		break;
+
+	case RuleHdrTest::NE:
+		return match_not_and(prefixes, a, std::equal_to<IPAddr>());
+		break;
+
+	case RuleHdrTest::LT:
+		return match_or(prefixes, a, std::less<IPAddr>());
+		break;
+
+	case RuleHdrTest::GT:
+		return match_or(prefixes, a, std::greater<IPAddr>());
+		break;
+
+	case RuleHdrTest::LE:
+		return match_or(prefixes, a, std::less_equal<IPAddr>());
+		break;
+
+	case RuleHdrTest::GE:
+		return match_or(prefixes, a, std::greater_equal<IPAddr>());
		break;
+
+	default:
+		reporter->InternalError("unknown comparison type");
+		break;
+	}
+	return false;
+	}
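This refactoring replaces the goto-based macros with a single dispatch on the comparison kind plus a list scan parameterized by a standard functor. The same pattern can be exercised outside Bro's types with a generic analogue (MaskedVal and match_any are illustrative stand-ins, not Bro code):

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct MaskedVal { uint32_t val, mask; };

    // Generic analogue of match_or() above: true if any masked value
    // satisfies the comparison functor.
    template <typename FuncT>
    static bool match_any(const std::vector<MaskedVal>& mvals, uint32_t v,
                          FuncT comp)
        {
        for ( size_t i = 0; i < mvals.size(); ++i )
            if ( comp(v & mvals[i].mask, mvals[i].val) )
                return true;

        return false;
        }

    // Matching "ip-proto == 6 (TCP) or 17 (UDP)" against a protocol byte:
    static bool is_tcp_or_udp(uint32_t proto)
        {
        std::vector<MaskedVal> mvals;
        MaskedVal tcp = { 6, 0xff };
        MaskedVal udp = { 17, 0xff };
        mvals.push_back(tcp);
        mvals.push_back(udp);

        return match_any(mvals, proto, std::equal_to<uint32_t>());
        }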
 RuleEndpointState* RuleMatcher::InitEndpoint(Analyzer* analyzer,
@@ -492,65 +619,53 @@ RuleEndpointState* RuleMatcher::InitEndpoint(Analyzer* analyzer,
 	if ( ip )
 		{
-		// Get start of transport layer.
-		const u_char* transport = ip->Payload();
-
 		// Descend the RuleHdrTest tree further.
 		for ( RuleHdrTest* h = hdr_test->child; h;
 		      h = h->sibling )
 			{
-			const u_char* data;
+			bool match = false;
 
 			// Evaluate the header test.
 			switch ( h->prot ) {
+			case RuleHdrTest::NEXT:
+				match = compare(*h->vals, ip->NextProto(), h->comp);
+				break;
+
 			case RuleHdrTest::IP:
-				data = (const u_char*) ip->IP4_Hdr();
+				if ( ! ip->IP4_Hdr() )
+					continue;
+
+				match = compare(*h->vals, getval((const u_char*)ip->IP4_Hdr() + h->offset, h->size), h->comp);
+				break;
+
+			case RuleHdrTest::IPv6:
+				if ( ! ip->IP6_Hdr() )
+					continue;
+
+				match = compare(*h->vals, getval((const u_char*)ip->IP6_Hdr() + h->offset, h->size), h->comp);
 				break;
 
 			case RuleHdrTest::ICMP:
+			case RuleHdrTest::ICMPv6:
 			case RuleHdrTest::TCP:
 			case RuleHdrTest::UDP:
-				data = transport;
+				match = compare(*h->vals, getval(ip->Payload() + h->offset, h->size), h->comp);
+				break;
+
+			case RuleHdrTest::IPSrc:
+				match = compare(h->prefix_vals, ip->IPHeaderSrcAddr(), h->comp);
+				break;
+
+			case RuleHdrTest::IPDst:
+				match = compare(h->prefix_vals, ip->IPHeaderDstAddr(), h->comp);
 				break;
 
 			default:
-				data = 0;
 				reporter->InternalError("unknown protocol");
+				break;
 			}
 
-			// ### data can be nil here if it's an
-			// IPv6 packet and we're doing an IP test.
-			if ( ! data )
-				continue;
-
-			// Sorry for the hidden gotos :-)
-			switch ( h->comp ) {
-			case RuleHdrTest::EQ:
-				DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), ==);
-
-			case RuleHdrTest::NE:
-				DO_MATCH_NOT_AND(*h->vals, getval(data + h->offset, h->size), ==);
-
-			case RuleHdrTest::LT:
-				DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), <);
-
-			case RuleHdrTest::GT:
-				DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), >);
-
-			case RuleHdrTest::LE:
-				DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), <=);
-
-			case RuleHdrTest::GE:
-				DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), >=);
-
-			default:
-				reporter->InternalError("unknown comparision type");
-			}
-
-no_match:
-			continue;
-
-match:
-			tests.append(h);
+			if ( match )
+				tests.append(h);
 			}
 		}
@@ -1050,8 +1165,11 @@ static Val* get_bro_val(const char* label)
 	}
 
-// Converts an atomic Val and appends it to the list
-static bool val_to_maskedval(Val* v, maskedvalue_list* append_to)
+// Converts an atomic Val and appends it to the list. For subnet types,
+// if the prefix_vector param isn't null, appending to that is preferred
+// over appending to the masked val list.
+static bool val_to_maskedval(Val* v, maskedvalue_list* append_to,
+                             vector<IPPrefix>* prefix_vector)
 	{
 	MaskedValue* mval = new MaskedValue;
 
@@ -1070,6 +1188,14 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to,
 			break;
 
 		case TYPE_SUBNET:
+			{
+			if ( prefix_vector )
+				{
+				prefix_vector->push_back(v->AsSubNet());
+				delete mval;
+				return true;
+				}
+			else
 				{
 				const uint32* n;
 				uint32 m[4];
@@ -1082,13 +1208,12 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to,
 				bool is_v4_mask = m[0] == 0xffffffff &&
 					m[1] == m[0] && m[2] == m[0];
 
-				if ( v->AsSubNet().Prefix().GetFamily() == IPv4 &&
-				     is_v4_mask )
+				if ( v->AsSubNet().Prefix().GetFamily() == IPv4 && is_v4_mask )
 					{
 					mval->val = ntohl(*n);
 					mval->mask = m[3];
 					}
 				else
 					{
 					rules_error("IPv6 subnets not supported");
@@ -1096,6 +1221,7 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to,
 					mval->mask = 0;
 					}
 				}
+			}
 			break;
 
 		default:
@@ -1108,7 +1234,8 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to,
 	return true;
 	}
 
-void id_to_maskedvallist(const char* id, maskedvalue_list* append_to)
+void id_to_maskedvallist(const char* id, maskedvalue_list* append_to,
+                         vector<IPPrefix>* prefix_vector)
 	{
 	Val* v = get_bro_val(id);
 	if ( ! v )
@@ -1118,7 +1245,7 @@ void id_to_maskedvallist(const char* id, maskedvalue_list* append_to,
 		{
 		val_list* vals = v->AsTableVal()->ConvertToPureList()->Vals();
 		loop_over_list(*vals, i )
-			if ( ! val_to_maskedval((*vals)[i], append_to) )
+			if ( ! val_to_maskedval((*vals)[i], append_to, prefix_vector) )
 				{
 				delete_vals(vals);
 				return;
@@ -1128,7 +1255,7 @@ void id_to_maskedvallist(const char* id, maskedvalue_list* append_to,
 		}
 	else
-		val_to_maskedval(v, append_to);
+		val_to_maskedval(v, append_to, prefix_vector);
 	}
 
 char* id_to_str(const char* id)


@@ -2,7 +2,9 @@
 #define sigs_h
 
 #include <limits.h>
+#include <vector>
 
+#include "IPAddr.h"
 #include "BroString.h"
 #include "List.h"
 #include "RE.h"
@@ -59,17 +61,19 @@ declare(PList, BroString);
 typedef PList(BroString) bstr_list;
 
 // Get values from Bro's script-level variables.
-extern void id_to_maskedvallist(const char* id, maskedvalue_list* append_to);
+extern void id_to_maskedvallist(const char* id, maskedvalue_list* append_to,
+                                vector<IPPrefix>* prefix_vector = 0);
 extern char* id_to_str(const char* id);
 extern uint32 id_to_uint(const char* id);
 
 class RuleHdrTest {
 public:
 	enum Comp { LE, GE, LT, GT, EQ, NE };
-	enum Prot { NOPROT, IP, ICMP, TCP, UDP };
+	enum Prot { NOPROT, IP, IPv6, ICMP, ICMPv6, TCP, UDP, NEXT, IPSrc, IPDst };
 
 	RuleHdrTest(Prot arg_prot, uint32 arg_offset, uint32 arg_size,
 		    Comp arg_comp, maskedvalue_list* arg_vals);
+	RuleHdrTest(Prot arg_prot, Comp arg_comp, vector<IPPrefix> arg_v);
 	~RuleHdrTest();
 
 	void PrintDebug();
@@ -86,6 +90,7 @@ private:
 	Prot prot;
 	Comp comp;
 	maskedvalue_list* vals;
+	vector<IPPrefix> prefix_vals; // for use with IPSrc/IPDst comparisons
 	uint32 offset;
 	uint32 size;


@@ -31,10 +31,10 @@ void SOCKS_Analyzer::Done()
 	interp->FlowEOF(false);
 	}
 
-void SOCKS_Analyzer::EndpointEOF(TCP_Reassembler* endp)
+void SOCKS_Analyzer::EndpointEOF(bool is_orig)
 	{
-	TCP_ApplicationAnalyzer::EndpointEOF(endp);
-	interp->FlowEOF(endp->IsOrig());
+	TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
+	interp->FlowEOF(is_orig);
 	}
 
 void SOCKS_Analyzer::DeliverStream(int len, const u_char* data, bool orig)


@@ -23,7 +23,7 @@ public:
 	virtual void Done();
 	virtual void DeliverStream(int len, const u_char* data, bool orig);
 	virtual void Undelivered(int seq, int len, bool orig);
-	virtual void EndpointEOF(TCP_Reassembler* endp);
+	virtual void EndpointEOF(bool is_orig);
 
 	static Analyzer* InstantiateAnalyzer(Connection* conn)
 		{ return new SOCKS_Analyzer(conn); }


@@ -23,10 +23,10 @@ void SSL_Analyzer::Done()
 	interp->FlowEOF(false);
 	}
 
-void SSL_Analyzer::EndpointEOF(TCP_Reassembler* endp)
+void SSL_Analyzer::EndpointEOF(bool is_orig)
 	{
-	TCP_ApplicationAnalyzer::EndpointEOF(endp);
-	interp->FlowEOF(endp->IsOrig());
+	TCP_ApplicationAnalyzer::EndpointEOF(is_orig);
+	interp->FlowEOF(is_orig);
 	}
 
 void SSL_Analyzer::DeliverStream(int len, const u_char* data, bool orig)


@@ -15,7 +15,7 @@ public:
 	virtual void Undelivered(int seq, int len, bool orig);
 
 	// Overriden from TCP_ApplicationAnalyzer.
-	virtual void EndpointEOF(TCP_Reassembler* endp);
+	virtual void EndpointEOF(bool is_orig);
 
 	static Analyzer* InstantiateAnalyzer(Connection* conn)
 		{ return new SSL_Analyzer(conn); }


@@ -742,10 +742,11 @@ FileSerializer::~FileSerializer()
 		io->Flush();
 
 	delete [] file;
-	delete io;
 
-	if ( fd >= 0 )
-		close(fd);
+	if ( io )
+		delete io; // destructor will call close() on fd
+	else if ( fd >= 0 )
+		safe_close(fd);
 	}
 
 bool FileSerializer::Open(const char* file, bool pure)
@@ -808,8 +809,8 @@ void FileSerializer::CloseFile()
 	if ( io )
 		io->Flush();
 
-	if ( fd >= 0 )
-		close(fd);
+	if ( fd >= 0 && ! io ) // destructor of io calls close() on fd
+		safe_close(fd);
 
 	fd = -1;
 	delete [] file;


@@ -12,10 +12,10 @@
 int killed_by_inactivity = 0;
 
-uint32 tot_ack_events = 0;
-uint32 tot_ack_bytes = 0;
-uint32 tot_gap_events = 0;
-uint32 tot_gap_bytes = 0;
+uint64 tot_ack_events = 0;
+uint64 tot_ack_bytes = 0;
+uint64 tot_gap_events = 0;
+uint64 tot_gap_bytes = 0;
 
 class ProfileTimer : public Timer {


@@ -116,10 +116,10 @@ extern SampleLogger* sample_logger;
 extern int killed_by_inactivity;
 
 // Content gap statistics.
-extern uint32 tot_ack_events;
-extern uint32 tot_ack_bytes;
-extern uint32 tot_gap_events;
-extern uint32 tot_gap_bytes;
+extern uint64 tot_ack_events;
+extern uint64 tot_ack_bytes;
+extern uint64 tot_gap_events;
+extern uint64 tot_gap_bytes;
 
 // A TCPStateStats object tracks the distribution of TCP states for


@@ -943,7 +943,10 @@ ForStmt::ForStmt(id_list* arg_loop_vars, Expr* loop_expr)
 		{
 		const type_list* indices = e->Type()->AsTableType()->IndexTypes();
 		if ( indices->length() != loop_vars->length() )
+			{
 			e->Error("wrong index size");
+			return;
+			}
 
 		for ( int i = 0; i < indices->length(); i++ )
 			{


@@ -46,6 +46,7 @@ TCP_Analyzer::TCP_Analyzer(Connection* conn)
 	finished = 0;
 	reassembling = 0;
 	first_packet_seen = 0;
+	is_partial = 0;
 
 	orig = new TCP_Endpoint(this, 1);
 	resp = new TCP_Endpoint(this, 0);


@@ -20,10 +20,10 @@ const bool DEBUG_tcp_connection_close = false;
 const bool DEBUG_tcp_match_undelivered = false;
 
 static double last_gap_report = 0.0;
-static uint32 last_ack_events = 0;
-static uint32 last_ack_bytes = 0;
-static uint32 last_gap_events = 0;
-static uint32 last_gap_bytes = 0;
+static uint64 last_ack_events = 0;
+static uint64 last_ack_bytes = 0;
+static uint64 last_gap_events = 0;
+static uint64 last_gap_bytes = 0;
 
 TCP_Reassembler::TCP_Reassembler(Analyzer* arg_dst_analyzer,
 				TCP_Analyzer* arg_tcp_analyzer,
@@ -513,10 +513,10 @@ void TCP_Reassembler::AckReceived(int seq)
 	if ( gap_report && gap_report_freq > 0.0 &&
 	     dt >= gap_report_freq )
 		{
-		int devents = tot_ack_events - last_ack_events;
-		int dbytes = tot_ack_bytes - last_ack_bytes;
-		int dgaps = tot_gap_events - last_gap_events;
-		int dgap_bytes = tot_gap_bytes - last_gap_bytes;
+		uint64 devents = tot_ack_events - last_ack_events;
+		uint64 dbytes = tot_ack_bytes - last_ack_bytes;
+		uint64 dgaps = tot_gap_events - last_gap_events;
+		uint64 dgap_bytes = tot_gap_bytes - last_gap_bytes;
 
 		RecordVal* r = new RecordVal(gap_info);
 		r->Assign(0, new Val(devents, TYPE_COUNT));


@@ -138,6 +138,11 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
 	{
 	Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
 
+	if ( orig )
+		valid_orig = false;
+	else
+		valid_resp = false;
+
 	TeredoEncapsulation te(this);
 
 	if ( ! te.Parse(data, len) )
@@ -150,7 +155,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
 	if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
 		{
-		Weird("tunnel_depth");
+		Weird("tunnel_depth", true);
 		return;
 		}
 
@@ -162,7 +167,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
 	if ( inner->NextProto() == IPPROTO_NONE && inner->PayloadLen() == 0 )
 		// Teredo bubbles having data after IPv6 header isn't strictly a
 		// violation, but a little weird.
-		Weird("Teredo_bubble_with_payload");
+		Weird("Teredo_bubble_with_payload", true);
 	else
 		{
 		delete inner;
@@ -173,6 +178,11 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
 	if ( rslt == 0 || rslt > 0 )
 		{
+		if ( orig )
+			valid_orig = true;
+		else
+			valid_resp = true;
+
 		if ( BifConst::Tunnel::yielding_teredo_decapsulation &&
 		     ! ProtocolConfirmed() )
 			{
@@ -193,7 +203,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
 			}
 
 			if ( ! sibling_has_confirmed )
-				ProtocolConfirmation();
+				Confirm();
 			else
 				{
 				delete inner;
@@ -201,10 +211,8 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
 			}
 		}
 		else
-			{
-			// Aggressively decapsulate anything with valid Teredo encapsulation
-			ProtocolConfirmation();
-			}
+			// Aggressively decapsulate anything with valid Teredo encapsulation.
+			Confirm();
 		}
 	else


@@ -6,7 +6,8 @@
 class Teredo_Analyzer : public Analyzer {
 public:
-	Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn)
+	Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn),
+	                                    valid_orig(false), valid_resp(false)
 		{}
 
 	virtual ~Teredo_Analyzer()
@@ -26,18 +27,34 @@ public:
 	/**
 	 * Emits a weird only if the analyzer has previously been able to
-	 * decapsulate a Teredo packet since otherwise the weirds could happen
-	 * frequently enough to be less than helpful.
+	 * decapsulate a Teredo packet in both directions or if *force* param is
+	 * set, since otherwise the weirds could happen frequently enough to be less
+	 * than helpful. The *force* param is meant for cases where just one side
+	 * has a valid encapsulation and so the weird would be informative.
 	 */
-	void Weird(const char* name) const
+	void Weird(const char* name, bool force = false) const
 		{
-		if ( ProtocolConfirmed() )
+		if ( ProtocolConfirmed() || force )
 			reporter->Weird(Conn(), name);
 		}
 
+	/**
+	 * If the delayed confirmation option is set, then a valid encapsulation
+	 * seen from both end points is required before confirming.
+	 */
+	void Confirm()
+		{
+		if ( ! BifConst::Tunnel::delay_teredo_confirmation ||
+		     ( valid_orig && valid_resp ) )
+			ProtocolConfirmation();
+		}
+
 protected:
 	friend class AnalyzerTimer;
 	void ExpireTimer(double t);
+
+	bool valid_orig;
+	bool valid_resp;
 };
 
 class TeredoEncapsulation {


@@ -64,7 +64,7 @@ Val::~Val()
 	Unref(type);
 
 #ifdef DEBUG
-	Unref(bound_id);
+	delete [] bound_id;
 #endif
 	}


@@ -347,13 +347,15 @@ public:
 #ifdef DEBUG
 	// For debugging, we keep a reference to the global ID to which a
 	// value has been bound *last*.
-	ID* GetID() const	{ return bound_id; }
+	ID* GetID() const
+		{
+		return bound_id ? global_scope()->Lookup(bound_id) : 0;
+		}
 
 	void SetID(ID* id)
 		{
-		if ( bound_id )
-			::Unref(bound_id);
-		bound_id = id;
-		::Ref(bound_id);
+		delete [] bound_id;
+		bound_id = id ? copy_string(id->Name()) : 0;
 		}
 #endif
@@ -401,8 +403,8 @@ protected:
 	RecordVal* attribs;
 
 #ifdef DEBUG
-	// For debugging, we keep the ID to which a Val is bound.
-	ID* bound_id;
+	// For debugging, we keep the name of the ID to which a Val is bound.
+	const char* bound_id;
 #endif
 };


@@ -11,6 +11,7 @@
 #include <cmath>
 #include <sys/stat.h>
 #include <cstdio>
+#include <time.h>
 
 #include "digest.h"
 #include "Reporter.h"
@@ -2604,6 +2605,29 @@ function to_subnet%(sn: string%): subnet
 	return ret;
 	%}
 
+## Converts a :bro:type:`string` to a :bro:type:`double`.
+##
+## str: The :bro:type:`string` to convert.
+##
+## Returns: The :bro:type:`string` *str* as double, or 0 if *str* has
+##          an invalid format.
+##
+function to_double%(str: string%): double
+	%{
+	const char* s = str->CheckString();
+	char* end_s;
+	double d = strtod(s, &end_s);
+
+	if ( s[0] == '\0' || end_s[0] != '\0' )
+		{
+		builtin_error("bad conversion to double", @ARG@[0]);
+		d = 0;
+		}
+
+	return new Val(d, TYPE_DOUBLE);
+	%}
+
 ## Converts a :bro:type:`count` to an :bro:type:`addr`.
 ##
 ## ip: The :bro:type:`count` to convert.
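The acceptance test in to_double() is the usual strtod() full-consumption idiom: reject unless the entire string was parsed. In isolation (helper name illustrative, not part of the patch):

    #include <cstdlib>

    // Returns true and stores the value in *out only if s is entirely a
    // valid double literal, mirroring the check in to_double() above.
    static bool parse_double(const char* s, double* out)
        {
        char* end = 0;
        double d = std::strtod(s, &end);

        if ( s[0] == '\0' || end[0] != '\0' )
            return false;  // empty input or trailing junk

        *out = d;
        return true;
        }

    // parse_double("3.14", &d) succeeds; parse_double("3.14x", &d) fails.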
@@ -3262,6 +3286,31 @@ function strftime%(fmt: string, d: time%) : string
 	return new StringVal(buffer);
 	%}
 
+## Parse a textual representation of a date/time value into a ``time`` type
+## value.
+##
+## fmt: The format string used to parse the following *d* argument. See
+##      ``man strptime`` for the syntax.
+##
+## d: The string representing the time.
+##
+## Returns: The time value calculated from parsing *d* with *fmt*.
+function strptime%(fmt: string, d: string%) : time
+	%{
+	const time_t timeval = time(0);
+	struct tm t = *localtime(&timeval);
+
+	if ( strptime(d->CheckString(), fmt->CheckString(), &t) == NULL )
+		{
+		reporter->Warning("strptime conversion failed: fmt:%s d:%s",
+		                  fmt->CheckString(), d->CheckString());
+		return new Val(0.0, TYPE_TIME);
+		}
+
+	double ret = mktime(&t);
+	return new Val(ret, TYPE_TIME);
+	%}
+
 # ===========================================================================
 #
 # Network Type Processing
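A minimal round-trip of the strptime()/mktime() pairing the BiF wraps (values illustrative). Fields the format string doesn't mention keep whatever the seeded struct tm held, which is why the BiF initializes it from localtime() first:

    #define _XOPEN_SOURCE 700  // for strptime() on glibc
    #include <stdio.h>
    #include <time.h>

    int main()
        {
        struct tm t = {};  // zeroed; strptime fills what the format covers

        if ( strptime("2012-10-30 11:32:58", "%Y-%m-%d %H:%M:%S", &t) )
            {
            t.tm_isdst = -1;  // let mktime determine DST
            printf("epoch: %ld\n", (long) mktime(&t));
            }

        return 0;
        }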
@@ -3764,7 +3813,7 @@ static GeoIP* open_geoip_db(GeoIPDBTypes type)
 	geoip = GeoIP_open_type(type, GEOIP_MEMORY_CACHE);
 
 	if ( ! geoip )
-		reporter->Warning("Failed to open GeoIP database: %s",
+		reporter->Info("Failed to open GeoIP database: %s",
 				  GeoIPDBFileName[type]);
 
 	return geoip;
 	}
@@ -3804,7 +3853,7 @@ function lookup_location%(a: addr%) : geo_location
 		if ( ! geoip )
 			builtin_error("Can't initialize GeoIP City/Country database");
 		else
-			reporter->Warning("Fell back to GeoIP Country database");
+			reporter->Info("Fell back to GeoIP Country database");
 		}
 	else
 		have_city_db = true;
@@ -4835,7 +4884,7 @@ function file_size%(f: string%) : double
 	%}
 
 ## Disables sending :bro:id:`print_hook` events to remote peers for a given
-## file. This function is equivalent to :bro:attr:`&disable_print_hook`. In a
+## file. In a
 ## distributed setup, communicating Bro instances generate the event
 ## :bro:id:`print_hook` for each print statement and send it to the remote
 ## side. When disabled for a particular file, these events will not be
@@ -4851,7 +4900,7 @@ function disable_print_hook%(f: file%): any
 	%}
 
 ## Prevents escaping of non-ASCII characters when writing to a file.
-## This function is equivalent to :bro:attr:`&disable_print_hook`.
+## This function is equivalent to :bro:attr:`&raw_output`.
 ##
 ## f: The file to disable raw output for.
 ##
@@ -5660,12 +5709,6 @@ function match_signatures%(c: connection, pattern_type: int, s: string,
 #
 # ===========================================================================
 
-## Deprecated. Will be removed.
-function parse_dotted_addr%(s: string%): addr
-	%{
-	IPAddr a(s->CheckString());
-	return new AddrVal(a);
-	%}
 
 %%{
@@ -5765,75 +5808,3 @@ function anonymize_addr%(a: addr, cl: IPAddrAnonymizationClass%): addr
 	}
 	%}
-
-## Deprecated. Will be removed.
-function dump_config%(%) : bool
-	%{
-	return new Val(persistence_serializer->WriteConfig(true), TYPE_BOOL);
-	%}
-
-## Deprecated. Will be removed.
-function make_connection_persistent%(c: connection%) : any
-	%{
-	c->MakePersistent();
-	return 0;
-	%}
-
-%%{
-// Experimental code to add support for IDMEF XML output based on
-// notices. For now, we're implementing it as a builtin you can call on an
-// notices record.
-
-#ifdef USE_IDMEF
-extern "C" {
-#include <libidmef/idmefxml.h>
-}
-#endif
-
-#include <sys/socket.h>
-
-char* port_to_string(PortVal* port)
-	{
-	char buf[256]; // to hold sprintf results on port numbers
-	snprintf(buf, sizeof(buf), "%u", port->Port());
-	return copy_string(buf);
-	}
-%%}
-
-## Deprecated. Will be removed.
-function generate_idmef%(src_ip: addr, src_port: port,
-			 dst_ip: addr, dst_port: port%) : bool
-	%{
-#ifdef USE_IDMEF
-	xmlNodePtr message =
-		newIDMEF_Message(newAttribute("version","1.0"),
-			newAlert(newCreateTime(NULL),
-				newSource(
-					newNode(newAddress(
-						newAttribute("category","ipv4-addr"),
-						newSimpleElement("address",
-							copy_string(src_ip->AsAddr().AsString().c_str())),
-						NULL), NULL),
-					newService(
-						newSimpleElement("port",
-							port_to_string(src_port)),
-						NULL), NULL),
-				newTarget(
-					newNode(newAddress(
-						newAttribute("category","ipv4-addr"),
-						newSimpleElement("address",
-							copy_string(dst_ip->AsAddr().AsString().c_str())),
-						NULL), NULL),
-					newService(
-						newSimpleElement("port",
-							port_to_string(dst_port)),
-						NULL), NULL), NULL), NULL);
-
-	// if ( validateCurrentDoc() )
-	printCurrentMessage(stderr);
-	return new Val(1, TYPE_BOOL);
-#else
-	builtin_error("Bro was not configured for IDMEF support");
-	return new Val(0, TYPE_BOOL);
-#endif
-	%}


@@ -16,6 +16,7 @@ const Tunnel::enable_ip: bool;
 const Tunnel::enable_ayiya: bool;
 const Tunnel::enable_teredo: bool;
 const Tunnel::yielding_teredo_decapsulation: bool;
+const Tunnel::delay_teredo_confirmation: bool;
 const Tunnel::ip_tunnel_timeout: interval;
 
 const Threading::heartbeat_interval: interval;


@@ -34,6 +34,10 @@ function Input::__force_update%(id: string%) : bool
 	return new Val(res, TYPE_BOOL);
 	%}
 
+# Options for the input framework
+const accept_unsupported_types: bool;
+
 # Options for Ascii Reader
 module InputAscii;


@ -71,7 +71,7 @@ declare(PDict, InputHash);
class Manager::Stream { class Manager::Stream {
public: public:
     string name;
-    ReaderBackend::ReaderInfo info;
+    ReaderBackend::ReaderInfo* info;
     bool removed;
     StreamType stream_type; // to distinguish between event and table streams
@@ -196,7 +196,7 @@ Manager::TableStream::~TableStream()
 Manager::Manager()
     {
-    update_finished = internal_handler("Input::update_finished");
+    end_of_data = internal_handler("Input::end_of_data");
     }

 Manager::~Manager()
@@ -257,7 +257,6 @@ ReaderBackend* Manager::CreateBackend(ReaderFrontend* frontend, bro_int_t type)
     assert(ir->factory);

-    frontend->SetTypeName(ir->name);
     ReaderBackend* backend = (*ir->factory)(frontend);
     assert(backend);
@@ -291,9 +290,6 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
     EnumVal* reader = description->LookupWithDefault(rtype->FieldOffset("reader"))->AsEnumVal();

-    ReaderFrontend* reader_obj = new ReaderFrontend(reader->InternalInt());
-    assert(reader_obj);
-
     // get the source ...
     Val* sourceval = description->LookupWithDefault(rtype->FieldOffset("source"));
     assert ( sourceval != 0 );
@@ -301,21 +297,22 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
     string source((const char*) bsource->Bytes(), bsource->Len());
     Unref(sourceval);

-    EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal();
-    Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));
+    ReaderBackend::ReaderInfo* rinfo = new ReaderBackend::ReaderInfo();
+    rinfo->source = copy_string(source.c_str());
+
+    EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal();
     switch ( mode->InternalInt() )
         {
         case 0:
-            info->info.mode = MODE_MANUAL;
+            rinfo->mode = MODE_MANUAL;
             break;

         case 1:
-            info->info.mode = MODE_REREAD;
+            rinfo->mode = MODE_REREAD;
             break;

         case 2:
-            info->info.mode = MODE_STREAM;
+            rinfo->mode = MODE_STREAM;
             break;

         default:
@@ -324,17 +321,11 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
     Unref(mode);

-    info->reader = reader_obj;
-    info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
-    info->name = name;
+    Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));
     info->config = config->AsTableVal(); // ref'd by LookupWithDefault
-    info->info.source = source;
-
-    Ref(description);
-    info->description = description;

     {
+    // create config mapping in ReaderInfo. Has to be done before the construction of reader_obj.
     HashKey* k;
     IterCookie* c = info->config->AsTable()->InitForIteration();
@@ -344,13 +335,26 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
         ListVal* index = info->config->RecoverIndex(k);
         string key = index->Index(0)->AsString()->CheckString();
         string value = v->Value()->AsString()->CheckString();
-        info->info.config.insert(std::make_pair(key, value));
+        rinfo->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str())));
         Unref(index);
         delete k;
         }
     }

+    ReaderFrontend* reader_obj = new ReaderFrontend(*rinfo, reader);
+    assert(reader_obj);
+
+    info->reader = reader_obj;
+    info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
+    info->name = name;
+    info->info = rinfo;
+
+    Ref(description);
+    info->description = description;
+
     DBG_LOG(DBG_INPUT, "Successfully created new input stream %s",
         name.c_str());
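The config table above is copied out of the script-layer TableVal with Bro's usual dictionary-iteration idiom; the hunk elides the loop header, so here is a condensed sketch of the whole pattern, with the elided declarations filled in as assumptions from this diff's context (config_table and rinfo stand in for info->config and the new ReaderInfo):

    // Sketch: iterating a script-level TableVal of string->string pairs.
    HashKey* k;
    TableEntryVal* v;
    IterCookie* c = config_table->AsTable()->InitForIteration();

    while ( (v = config_table->AsTable()->NextEntry(k, c)) )
        {
        ListVal* index = config_table->RecoverIndex(k);
        string key = index->Index(0)->AsString()->CheckString();
        string value = v->Value()->AsString()->CheckString();

        // ReaderInfo now owns plain C strings, so deep-copy both sides.
        rinfo->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str())));

        Unref(index);
        delete k;
        }

The deep copies matter because *rinfo is handed to the ReaderFrontend and from there to the reader's thread; presumably the switch away from std::string avoids sharing string internals across threads.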
@@ -387,6 +391,8 @@ bool Manager::CreateEventStream(RecordVal* fval)
     FuncType* etype = event->FType()->AsFuncType();

+    bool allow_file_func = false;
+
     if ( ! etype->IsEvent() )
         {
         reporter->Error("stream event is a function, not an event");
@@ -440,12 +446,20 @@ bool Manager::CreateEventStream(RecordVal* fval)
         return false;
         }

-    if ( !same_type((*args)[2], fields ) )
+    if ( ! same_type((*args)[2], fields ) )
         {
-        reporter->Error("Incompatible type for event");
+        ODesc desc1;
+        ODesc desc2;
+        (*args)[2]->Describe(&desc1);
+        fields->Describe(&desc2);
+        reporter->Error("Incompatible type '%s':%s for event, which needs type '%s':%s\n",
+            type_name((*args)[2]->Tag()), desc1.Description(),
+            type_name(fields->Tag()), desc2.Description());
         return false;
         }

+    allow_file_func = BifConst::Input::accept_unsupported_types;
+
     }
 else
@@ -454,7 +468,7 @@ bool Manager::CreateEventStream(RecordVal* fval)
     vector<Field*> fieldsV; // vector, because UnrollRecordType needs it

-    bool status = !UnrollRecordType(&fieldsV, fields, "");
+    bool status = (! UnrollRecordType(&fieldsV, fields, "", allow_file_func));

     if ( status )
         {
@@ -475,7 +489,7 @@ bool Manager::CreateEventStream(RecordVal* fval)
     assert(stream->reader);

-    stream->reader->Init(stream->info, stream->num_fields, logf );
+    stream->reader->Init(stream->num_fields, logf );

     readers[stream->reader] = stream;
@@ -602,12 +616,12 @@ bool Manager::CreateTableStream(RecordVal* fval)
     vector<Field*> fieldsV; // vector, because we don't know the length beforehand

-    bool status = !UnrollRecordType(&fieldsV, idx, "");
+    bool status = (! UnrollRecordType(&fieldsV, idx, "", false));

     int idxfields = fieldsV.size();

     if ( val ) // if we are not a set
-        status = status || !UnrollRecordType(&fieldsV, val, "");
+        status = status || ! UnrollRecordType(&fieldsV, val, "", BifConst::Input::accept_unsupported_types);

     int valfields = fieldsV.size() - idxfields;
@@ -652,7 +666,7 @@ bool Manager::CreateTableStream(RecordVal* fval)
     assert(stream->reader);

-    stream->reader->Init(stream->info, fieldsV.size(), fields );
+    stream->reader->Init(fieldsV.size(), fields );

     readers[stream->reader] = stream;
@@ -726,8 +740,6 @@ bool Manager::RemoveStream(Stream *i)
     i->removed = true;

-    i->reader->Close();
-
     DBG_LOG(DBG_INPUT, "Successfully queued removal of stream %s",
         i->name.c_str());
@@ -767,15 +779,29 @@ bool Manager::RemoveStreamContinuation(ReaderFrontend* reader)
     return true;
     }

-bool Manager::UnrollRecordType(vector<Field*> *fields,
-    const RecordType *rec, const string& nameprepend)
+bool Manager::UnrollRecordType(vector<Field*> *fields, const RecordType *rec,
+    const string& nameprepend, bool allow_file_func)
     {
     for ( int i = 0; i < rec->NumFields(); i++ )
         {
         if ( ! IsCompatibleType(rec->FieldType(i)) )
             {
+            // If the field is a file or a function type
+            // and it is optional, we accept it nevertheless.
+            // This allows importing log files that contain
+            // fields we actually cannot read.
+            if ( allow_file_func )
+                {
+                if ( ( rec->FieldType(i)->Tag() == TYPE_FILE ||
+                       rec->FieldType(i)->Tag() == TYPE_FUNC ) &&
+                     rec->FieldDecl(i)->FindAttr(ATTR_OPTIONAL) )
+                    {
+                    reporter->Info("Encountered incompatible type \"%s\" in table definition for ReaderFrontend. Ignoring field.", type_name(rec->FieldType(i)->Tag()));
+                    continue;
+                    }
+                }
+
             reporter->Error("Incompatible type \"%s\" in table definition for ReaderFrontend", type_name(rec->FieldType(i)->Tag()));
             return false;
             }
@@ -784,7 +810,7 @@ bool Manager::UnrollRecordType(vector<Field*> *fields,
             {
             string prep = nameprepend + rec->FieldName(i) + ".";

-            if ( !UnrollRecordType(fields, rec->FieldType(i)->AsRecordType(), prep) )
+            if ( !UnrollRecordType(fields, rec->FieldType(i)->AsRecordType(), prep, allow_file_func) )
                {
                return false;
                }
@@ -793,17 +819,19 @@ bool Manager::UnrollRecordType(vector<Field*> *fields,
         else
             {
-            Field* field = new Field();
-            field->name = nameprepend + rec->FieldName(i);
-            field->type = rec->FieldType(i)->Tag();
-
-            if ( field->type == TYPE_TABLE )
-                field->subtype = rec->FieldType(i)->AsSetType()->Indices()->PureType()->Tag();
-            else if ( field->type == TYPE_VECTOR )
-                field->subtype = rec->FieldType(i)->AsVectorType()->YieldType()->Tag();
-            else if ( field->type == TYPE_PORT &&
+            string name = nameprepend + rec->FieldName(i);
+            const char* secondary = 0;
+            TypeTag ty = rec->FieldType(i)->Tag();
+            TypeTag st = TYPE_VOID;
+            bool optional = false;
+
+            if ( ty == TYPE_TABLE )
+                st = rec->FieldType(i)->AsSetType()->Indices()->PureType()->Tag();
+            else if ( ty == TYPE_VECTOR )
+                st = rec->FieldType(i)->AsVectorType()->YieldType()->Tag();
+            else if ( ty == TYPE_PORT &&
                 rec->FieldDecl(i)->FindAttr(ATTR_TYPE_COLUMN) )
                 {
                 // we have an annotation for the second column
@@ -813,12 +841,13 @@ bool Manager::UnrollRecordType(vector<Field*> *fields,
                 assert(c);
                 assert(c->Type()->Tag() == TYPE_STRING);

-                field->secondary_name = c->AsStringVal()->AsString()->CheckString();
+                secondary = c->AsStringVal()->AsString()->CheckString();
                 }

             if ( rec->FieldDecl(i)->FindAttr(ATTR_OPTIONAL ) )
-                field->optional = true;
+                optional = true;

+            Field* field = new Field(name.c_str(), secondary, ty, st, optional);
             fields->push_back(field);
             }
         }
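UnrollRecordType flattens arbitrarily nested records into one linear field vector, joining nested names with dots, so an input line's columns match the flattened order. A standalone toy model of just the naming scheme (not Bro code; MiniField is an invented stand-in for RecordType):

    #include <iostream>
    #include <string>
    #include <vector>

    struct MiniField { std::string name; bool is_record; std::vector<MiniField> sub; };

    // Depth-first flattening with the same "nameprepend" trick as above.
    static void unroll(const MiniField& f, const std::string& prefix,
                       std::vector<std::string>& out)
        {
        if ( ! f.is_record )
            {
            out.push_back(prefix + f.name);
            return;
            }

        for ( const auto& s : f.sub )
            unroll(s, prefix + f.name + ".", out);
        }

    int main()
        {
        // Models: record { ts: time; id: record { orig_h: addr; resp_h: addr; }; }
        MiniField rec{"", true, {
            {"ts", false, {}},
            {"id", true, {{"orig_h", false, {}}, {"resp_h", false, {}}}}}};

        std::vector<std::string> names;
        for ( const auto& f : rec.sub )
            unroll(f, "", names);

        for ( const auto& n : names )
            std::cout << n << "\n"; // prints: ts, id.orig_h, id.resp_h
        }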
@@ -1036,9 +1065,7 @@ int Manager::SendEntryTable(Stream* i, const Value* const *vals)
     if ( ! updated )
         {
-        // throw away. Hence - we quit. And remove the entry from the current dictionary...
-        // (but why should it be in there? assert this).
-        assert ( stream->currDict->RemoveEntry(idxhash) == 0 );
+        // just quit and delete everything we created.
         delete idxhash;
         delete h;
         return stream->num_val_fields + stream->num_idx_fields;
@@ -1145,8 +1172,12 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
     DBG_LOG(DBG_INPUT, "Got EndCurrentSend stream %s", i->name.c_str());
 #endif

-    if ( i->stream_type == EVENT_STREAM ) // nothing to do..
+    if ( i->stream_type == EVENT_STREAM )
+        {
+        // just signal the end of the data source
+        SendEndOfData(i);
         return;
+        }

     assert(i->stream_type == TABLE_STREAM);
     TableStream* stream = (TableStream*) i;
@@ -1204,7 +1235,7 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
         Ref(predidx);
         Ref(val);
         Ref(ev);
-        SendEvent(stream->event, 3, ev, predidx, val);
+        SendEvent(stream->event, 4, stream->description->Ref(), ev, predidx, val);
         }

     if ( predidx ) // if we have a stream or an event...
@@ -1227,12 +1258,29 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
     stream->currDict->SetDeleteFunc(input_hash_delete_func);

 #ifdef DEBUG
-    DBG_LOG(DBG_INPUT, "EndCurrentSend complete for stream %s, queueing update_finished event",
+    DBG_LOG(DBG_INPUT, "EndCurrentSend complete for stream %s",
         i->name.c_str());
 #endif

-    // Send event that the current update is indeed finished.
-    SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->info.source.c_str()));
+    SendEndOfData(i);
+    }
+
+void Manager::SendEndOfData(ReaderFrontend* reader)
+    {
+    Stream *i = FindStream(reader);
+    if ( i == 0 )
+        {
+        reporter->InternalError("Unknown reader in SendEndOfData");
+        return;
+        }
+
+    SendEndOfData(i);
+    }
+
+void Manager::SendEndOfData(const Stream *i)
+    {
+    SendEvent(end_of_data, 2, new StringVal(i->name.c_str()), new StringVal(i->info->source));
     }

 void Manager::Put(ReaderFrontend* reader, Value* *vals)
@@ -1538,7 +1586,7 @@ bool Manager::Delete(ReaderFrontend* reader, Value* *vals)
 bool Manager::CallPred(Func* pred_func, const int numvals, ...)
     {
-    bool result;
+    bool result = false;
     val_list vl(numvals);
     va_list lP;
@@ -1549,10 +1597,13 @@ bool Manager::CallPred(Func* pred_func, const int numvals, ...)
     va_end(lP);

     Val* v = pred_func->Call(&vl);
+    if ( v )
+        {
         result = v->AsBool();
         Unref(v);
+        }

-    return(result);
+    return result;
     }
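The CallPred change hardens a core-to-script call: Func::Call() can come back with a null Val (for instance, when the script-level predicate aborts with a runtime error), and the old code read the result unconditionally. The guarded pattern, isolated:

    Val* v = pred_func->Call(&vl);
    bool result = false; // conservative default if the predicate failed

    if ( v )
        {
        result = v->AsBool();
        Unref(v); // Call() hands back a referenced Val
        }

Defaulting to false presumably means a failing predicate suppresses the entry or event rather than letting it through.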
 bool Manager::SendEvent(const string& name, const int num_vals, Value* *vals)
@@ -1666,6 +1717,18 @@ RecordVal* Manager::ValueToRecordVal(const Value* const *vals,
     Val* fieldVal = 0;
     if ( request_type->FieldType(i)->Tag() == TYPE_RECORD )
         fieldVal = ValueToRecordVal(vals, request_type->FieldType(i)->AsRecordType(), position);
+    else if ( request_type->FieldType(i)->Tag() == TYPE_FILE ||
+              request_type->FieldType(i)->Tag() == TYPE_FUNC )
+        {
+        // If those two unsupported types are encountered here, they have
+        // been let through by the type checking. That means they are
+        // optional and the user agreed to ignore them (and has been
+        // warned by the reporter). Hence we assign null to the field and
+        // are done. Better check that it really is optional; you never know.
+        assert(request_type->FieldDecl(i)->FindAttr(ATTR_OPTIONAL));
+        }
     else
         {
         fieldVal = ValueToVal(vals[*position], request_type->FieldType(i));
@@ -1709,7 +1772,7 @@ int Manager::GetValueLength(const Value* val) {
     case TYPE_STRING:
     case TYPE_ENUM:
         {
-        length += val->val.string_val->size();
+        length += val->val.string_val.length + 1;
         break;
         }
@@ -1808,13 +1871,16 @@ int Manager::CopyValue(char *data, const int startpos, const Value* val)
     case TYPE_STRING:
     case TYPE_ENUM:
         {
-        memcpy(data+startpos, val->val.string_val->c_str(), val->val.string_val->length());
-        return val->val.string_val->size();
+        memcpy(data+startpos, val->val.string_val.data, val->val.string_val.length);
+        // Add a \0 to the end, to be able to hash zero-length
+        // strings and differentiate them from !present.
+        memset(data + startpos + val->val.string_val.length, 0, 1);
+        return val->val.string_val.length + 1;
         }

     case TYPE_ADDR:
         {
-        int length;
+        int length = 0;
         switch ( val->val.addr_val.family ) {
         case IPv4:
             length = sizeof(val->val.addr_val.in.in4);
@@ -1835,7 +1901,7 @@ int Manager::CopyValue(char *data, const int startpos, const Value* val)
     case TYPE_SUBNET:
         {
-        int length;
+        int length = 0;
         switch ( val->val.subnet_val.prefix.family ) {
         case IPv4:
             length = sizeof(val->val.addr_val.in.in4);
@@ -1900,13 +1966,15 @@ HashKey* Manager::HashValues(const int num_elements, const Value* const *vals)
         const Value* val = vals[i];
         if ( val->present )
             length += GetValueLength(val);
+
+        // And in any case add 1 for the end-of-field-identifier.
+        length++;
         }

-    if ( length == 0 )
-        {
-        reporter->Error("Input reader sent line where all elements are null values. Ignoring line");
+    assert ( length >= num_elements );
+
+    if ( length == num_elements )
         return NULL;
-        }

     int position = 0;
     char *data = (char*) malloc(length);
@@ -1918,6 +1986,12 @@ HashKey* Manager::HashValues(const int num_elements, const Value* const *vals)
         const Value* val = vals[i];
         if ( val->present )
             position += CopyValue(data, position, val);
+
+        // Add the end-of-field-marker. The value itself does not
+        // matter; it only has to be present.
+        memset(data + position, 1, 1);
+        position++;
         }

     HashKey *key = new HashKey(data, length);
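The two hashing changes work together: CopyValue now appends a NUL to every present string, and HashValues closes every field (present or not) with a one-byte marker, so an all-absent tuple has length exactly num_elements and is rejected before hashing. A standalone model of the byte layout, showing the collisions the markers prevent (not Bro code; the real buffer also carries non-string types):

    #include <iostream>
    #include <string>
    #include <vector>

    // Each optional string field is either absent or carries a value.
    struct OptStr { bool present; std::string val; };

    // Mirrors the scheme above: present strings contribute "data\0",
    // and every field is closed with a 1-byte end-of-field marker.
    static std::string serialize(const std::vector<OptStr>& fields)
        {
        std::string buf;
        for ( const auto& f : fields )
            {
            if ( f.present )
                {
                buf.append(f.val);
                buf.push_back('\0'); // distinguishes "" from absent
                }
            buf.push_back('\x01'); // end-of-field marker
            }
        return buf;
        }

    int main()
        {
        // Without the \0s and markers, both pairs below would serialize
        // to the same raw bytes and collide in the table index.
        std::cout << (serialize({{true, "ab"}, {false, ""}}) ==
                      serialize({{true, "a"}, {true, "b"}})) << "\n"; // 0
        std::cout << (serialize({{true, ""}, {true, ""}}) ==
                      serialize({{false, ""}, {false, ""}})) << "\n"; // 0
        }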
@@ -1957,7 +2031,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
     case TYPE_STRING:
         {
-        BroString *s = new BroString(*(val->val.string_val));
+        BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 1);
         return new StringVal(s);
         }
@@ -1966,7 +2040,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
     case TYPE_ADDR:
         {
-        IPAddr* addr;
+        IPAddr* addr = 0;
         switch ( val->val.addr_val.family ) {
         case IPv4:
             addr = new IPAddr(val->val.addr_val.in.in4);
@@ -1987,7 +2061,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
     case TYPE_SUBNET:
         {
-        IPAddr* addr;
+        IPAddr* addr = 0;
         switch ( val->val.subnet_val.prefix.family ) {
         case IPv4:
             addr = new IPAddr(val->val.subnet_val.prefix.in.in4);
@@ -2041,8 +2115,8 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
     case TYPE_ENUM: {
         // This is somewhat redundant, because EnumType just mangles the
         // module name and the var name together again afterwards.
-        string module = extract_module_name(val->val.string_val->c_str());
-        string var = extract_var_name(val->val.string_val->c_str());
+        string module = extract_module_name(val->val.string_val.data);
+        string var = extract_var_name(val->val.string_val.data);
         bro_int_t index = request_type->AsEnumType()->Lookup(module, var.c_str());
         if ( index == -1 )
             reporter->InternalError("Value not found in enum mapping. Module: %s, var: %s",

View file

@@ -89,6 +89,7 @@ protected:
     friend class EndCurrentSendMessage;
     friend class ReaderClosedMessage;
     friend class DisableMessage;
+    friend class EndOfDataMessage;

     // For readers to write to input stream in direct mode (reporting
     // new/deleted values directly). Functions take ownership of
@@ -96,6 +97,9 @@ protected:
     void Put(ReaderFrontend* reader, threading::Value* *vals);
     void Clear(ReaderFrontend* reader);
     bool Delete(ReaderFrontend* reader, threading::Value* *vals);
+    // Trigger sending the End-of-Data event when the input source has
+    // finished reading. Only use in direct mode.
+    void SendEndOfData(ReaderFrontend* reader);

     // For readers to write to input stream in indirect mode (manager is
     // monitoring new/deleted values) Functions take ownership of
@@ -154,16 +158,18 @@ private:
     // equivalent in threading cannot be used, because we support
     // different types than the log framework
     bool IsCompatibleType(BroType* t, bool atomic_only=false);

     // Check if a record is made up of compatible types and return a list
     // of all fields that are in the record in order. Recursively unrolls
     // records
-    bool UnrollRecordType(vector<threading::Field*> *fields, const RecordType *rec, const string& nameprepend);
+    bool UnrollRecordType(vector<threading::Field*> *fields, const RecordType *rec, const string& nameprepend, bool allow_file_func);

     // Send events
     void SendEvent(EventHandlerPtr ev, const int numvals, ...);
     void SendEvent(EventHandlerPtr ev, list<Val*> events);
+
+    // Implementation of SendEndOfData (send end_of_data event).
+    void SendEndOfData(const Stream *i);

     // Call predicate function and return result.
     bool CallPred(Func* pred_func, const int numvals, ...);
@@ -200,7 +206,7 @@ private:
     map<ReaderFrontend*, Stream*> readers;

-    EventHandlerPtr update_finished;
+    EventHandlerPtr end_of_data;
 };

View file

@@ -56,22 +56,24 @@ private:
 class SendEventMessage : public threading::OutputMessage<ReaderFrontend> {
 public:
-    SendEventMessage(ReaderFrontend* reader, const string& name, const int num_vals, Value* *val)
+    SendEventMessage(ReaderFrontend* reader, const char* name, const int num_vals, Value* *val)
         : threading::OutputMessage<ReaderFrontend>("SendEvent", reader),
-        name(name), num_vals(num_vals), val(val) {}
+        name(copy_string(name)), num_vals(num_vals), val(val) {}
+
+    virtual ~SendEventMessage() { delete [] name; }

     virtual bool Process()
         {
         bool success = input_mgr->SendEvent(name, num_vals, val);

         if ( ! success )
-            reporter->Error("SendEvent for event %s failed", name.c_str());
+            reporter->Error("SendEvent for event %s failed", name);

         return true; // We do not want to die if SendEvent fails because the event did not return.
         }

 private:
-    const string name;
+    const char* name;
     const int num_vals;
     Value* *val;
 };
@@ -106,6 +108,20 @@ public:
 private:
 };

+class EndOfDataMessage : public threading::OutputMessage<ReaderFrontend> {
+public:
+    EndOfDataMessage(ReaderFrontend* reader)
+        : threading::OutputMessage<ReaderFrontend>("EndOfData", reader) {}
+
+    virtual bool Process()
+        {
+        input_mgr->SendEndOfData(Object());
+        return true;
+        }
+
+private:
+};
+
 class ReaderClosedMessage : public threading::OutputMessage<ReaderFrontend> {
 public:
     ReaderClosedMessage(ReaderFrontend* reader)
@@ -146,12 +162,14 @@ ReaderBackend::ReaderBackend(ReaderFrontend* arg_frontend) : MsgThread()
     {
     disabled = true; // disabled will be set correctly in init.
     frontend = arg_frontend;
+    info = new ReaderInfo(frontend->Info());

     SetName(frontend->Name());
     }

 ReaderBackend::~ReaderBackend()
     {
+    delete info;
     }

 void ReaderBackend::Put(Value* *val)
@@ -169,7 +187,7 @@ void ReaderBackend::Clear()
     SendOut(new ClearMessage(frontend));
     }

-void ReaderBackend::SendEvent(const string& name, const int num_vals, Value* *vals)
+void ReaderBackend::SendEvent(const char* name, const int num_vals, Value* *vals)
     {
     SendOut(new SendEventMessage(frontend, name, num_vals, vals));
     }
@@ -179,22 +197,27 @@ void ReaderBackend::EndCurrentSend()
     SendOut(new EndCurrentSendMessage(frontend));
     }

+void ReaderBackend::EndOfData()
+    {
+    SendOut(new EndOfDataMessage(frontend));
+    }
+
 void ReaderBackend::SendEntry(Value* *vals)
     {
     SendOut(new SendEntryMessage(frontend, vals));
     }

-bool ReaderBackend::Init(const ReaderInfo& arg_info, const int arg_num_fields,
+bool ReaderBackend::Init(const int arg_num_fields,
             const threading::Field* const* arg_fields)
     {
-    info = arg_info;
+    if ( Failed() )
+        return true;
+
     num_fields = arg_num_fields;
     fields = arg_fields;
-    SetName("InputReader/"+info.source);

     // disable if DoInit returns error.
-    int success = DoInit(arg_info, arg_num_fields, arg_fields);
+    int success = DoInit(*info, arg_num_fields, arg_fields);

     if ( ! success )
         {
@@ -207,9 +230,11 @@ bool ReaderBackend::Init(const ReaderInfo& arg_info, const int arg_num_fields,
     return success;
     }

-void ReaderBackend::Close()
+bool ReaderBackend::OnFinish(double network_time)
     {
+    if ( ! Failed() )
         DoClose();
+
     disabled = true; // frontend disables itself when it gets the Close-message.
     SendOut(new ReaderClosedMessage(frontend));
@@ -221,6 +246,8 @@ void ReaderBackend::Close()
         delete [] (fields);
         fields = 0;
         }
+
+    return true;
     }

 bool ReaderBackend::Update()
@@ -228,6 +255,9 @@ bool ReaderBackend::Update()
     if ( disabled )
         return false;

+    if ( Failed() )
+        return true;
+
     bool success = DoUpdate();
     if ( ! success )
         DisableFrontend();
@@ -243,10 +273,12 @@ void ReaderBackend::DisableFrontend()
     SendOut(new DisableMessage(frontend));
     }

-bool ReaderBackend::DoHeartbeat(double network_time, double current_time)
+bool ReaderBackend::OnHeartbeat(double network_time, double current_time)
     {
-    MsgThread::DoHeartbeat(network_time, current_time);
+    if ( Failed() )
         return true;
+
+    return DoHeartbeat(network_time, current_time);
     }
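With Init(), OnFinish(), Update() and OnHeartbeat() now guarding on Failed() centrally, a concrete reader only fills in the Do*() hooks. A hypothetical minimal reader, sketched against the signatures visible in this diff (NullReader is invented for illustration and is not a reader that ships with Bro):

    // Sketch only; assumes the ReaderBackend interface as changed here.
    class NullReader : public ReaderBackend {
    public:
        NullReader(ReaderFrontend* frontend) : ReaderBackend(frontend) {}

    protected:
        virtual bool DoInit(const ReaderInfo& info, int num_fields,
                            const threading::Field* const* fields)
            {
            // info.source, info.mode and info.config now live in the
            // backend's own ReaderInfo copy; no Init(info, ...) anymore.
            return true;
            }

        virtual bool DoUpdate()
            {
            // Direct mode: report values with Put(), then signal
            // completion; triggers Input::end_of_data at script level.
            EndOfData();
            return true;
            }

        virtual void DoClose() {}

        virtual bool DoHeartbeat(double network_time, double current_time)
            {
            // Now pure virtual; OnHeartbeat() only forwards here while
            // the thread has not failed.
            return true;
            }
    };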
 TransportProto ReaderBackend::StringToProto(const string &proto)

View file

@@ -34,7 +34,10 @@ enum ReaderMode {
      * for new appended data. When new data is appended it has to be sent
      * using the Put api functions.
      */
-    MODE_STREAM
+    MODE_STREAM,
+
+    /** Internal dummy mode for initialization. */
+    MODE_NONE
 };

 class ReaderFrontend;
@@ -70,14 +73,17 @@ public:
      */
     struct ReaderInfo
         {
-        typedef std::map<string, string> config_map;
+        // Structure takes ownership of the strings.
+        typedef std::map<const char*, const char*, CompareString> config_map;

         /**
          * A string left to the interpretation of the reader
          * implementation; it corresponds to the value configured on
          * the script-level for the input stream.
+         *
+         * Structure takes ownership of the string.
          */
-        string source;
+        const char* source;

         /**
          * A map of key/value pairs corresponding to the relevant
@@ -89,6 +95,35 @@ public:
          * The opening mode for the input source.
          */
         ReaderMode mode;
+
+        ReaderInfo()
+            {
+            source = 0;
+            mode = MODE_NONE;
+            }
+
+        ReaderInfo(const ReaderInfo& other)
+            {
+            source = other.source ? copy_string(other.source) : 0;
+            mode = other.mode;
+
+            for ( config_map::const_iterator i = other.config.begin(); i != other.config.end(); i++ )
+                config.insert(std::make_pair(copy_string(i->first), copy_string(i->second)));
+            }
+
+        ~ReaderInfo()
+            {
+            delete [] source;
+
+            for ( config_map::iterator i = config.begin(); i != config.end(); i++ )
+                {
+                delete [] i->first;
+                delete [] i->second;
+                }
+            }
+
+    private:
+        const ReaderInfo& operator=(const ReaderInfo& other); // Disable.
         };
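Keying config_map on raw const char* only works with an explicit ordering functor; CompareString is referenced here but defined elsewhere in the threading code. A sketch of what such a comparator presumably looks like, plus why it is needed (assumption; the shipped definition may differ in detail):

    #include <cstring>
    #include <map>

    // Without a content-based comparator, a std::map keyed on const char*
    // would order by pointer value, so lookups with an equal-but-distinct
    // string would miss.
    struct CompareString
        {
        bool operator()(const char* a, const char* b) const
            { return std::strcmp(a, b) < 0; }
        };

    typedef std::map<const char*, const char*, CompareString> config_map;

Because the map owns its C strings (see the destructor above), insertions must always go through copy_string(), as CreateStream and the copy constructor do.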
     /**
@@ -106,16 +141,7 @@ public:
      *
      * @return False if an error occurred.
      */
-    bool Init(const ReaderInfo& info, int num_fields, const threading::Field* const* fields);
-
-    /**
-     * Finishes reading from this input stream in a regular fashion. Must
-     * not be called if an error has been indicated earlier. After
-     * calling this, no further reading from the stream can be performed.
-     *
-     * @return False if an error occurred.
-     */
-    void Close();
+    bool Init(int num_fields, const threading::Field* const* fields);

     /**
      * Force trigger an update of the input stream. The action that will
@@ -142,13 +168,16 @@ public:
     /**
      * Returns the additional reader information as passed into the constructor.
      */
-    const ReaderInfo& Info() const { return info; }
+    const ReaderInfo& Info() const { return *info; }

     /**
      * Returns the number of log fields as passed into the constructor.
      */
     int NumFields() const { return num_fields; }

+    // Overridden from MsgThread.
+    virtual bool OnHeartbeat(double network_time, double current_time);
+    virtual bool OnFinish(double network_time);
+
 protected:
     // Methods that have to be overwritten by the individual readers
@@ -200,6 +229,11 @@ protected:
      */
     virtual bool DoUpdate() = 0;

+    /**
+     * Triggered by regular heartbeat messages from the main thread.
+     */
+    virtual bool DoHeartbeat(double network_time, double current_time) = 0;
+
     /**
      * Method allowing a reader to send a specified Bro event. Vals must
      * match the values expected by the bro event.
@@ -210,7 +244,7 @@ protected:
      *
      * @param vals the values to be given to the event
      */
-    void SendEvent(const string& name, const int num_vals, threading::Value* *vals);
+    void SendEvent(const char* name, const int num_vals, threading::Value* *vals);

     // Content-sending-functions (simple mode). Include table-specific
     // functionality that simply is not used if we have no table.
@@ -247,6 +281,16 @@ protected:
      */
     void Clear();

+    /**
+     * Method telling the manager that we finished reading the current
+     * data source. Will trigger an end_of_data event.
+     *
+     * Note: When using SendEntry as the tracking mode this is triggered
+     * automatically by EndCurrentSend(). Only use if not using the
+     * tracking mode. Otherwise the event will be sent twice.
+     */
+    void EndOfData();
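That note is the crux of the new event: a reader signals completion differently depending on how it reports values. A fragment-level sketch of the two styles from inside a hypothetical DoUpdate() (vals and other_vals are placeholders):

    // Tracking mode: hand every line to the manager and let it diff
    // successive snapshots. EndCurrentSend() itself triggers end_of_data,
    // so do not also call EndOfData() here.
    SendEntry(vals);
    EndCurrentSend();

    // Direct mode: report changes yourself, then signal EOF explicitly.
    Put(other_vals);
    EndOfData();

Either way, the script level sees a single Input::end_of_data carrying the stream name and source.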
     // Content-sending-functions (tracking mode): Only changed lines are propagated.

     /**
@@ -271,14 +315,6 @@ protected:
      */
     void EndCurrentSend();

-    /**
-     * Triggered by regular heartbeat messages from the main thread.
-     *
-     * This method can be overridden, but one must call
-     * ReaderBackend::DoHeartbeat().
-     */
-    virtual bool DoHeartbeat(double network_time, double current_time);
-
     /**
      * Convert a string into a TransportProto. This is just a utility
      * function for Readers.
@@ -300,7 +336,7 @@ private:
     // from this class, it's running in a different thread!
     ReaderFrontend* frontend;

-    ReaderInfo info;
+    ReaderInfo* info;
     unsigned int num_fields;
     const threading::Field* const * fields; // raw mapping

Some files were not shown because too many files have changed in this diff.