Merge remote-tracking branch 'origin/master' into topic/seth/files-tracking

Conflicts:
	scripts/base/frameworks/files/main.bro
	src/file_analysis/File.cc
	testing/btest/Baseline/scripts.base.frameworks.file-analysis.actions.data_event/out
Seth Hall 2014-09-23 13:05:39 -04:00
commit 42b2d56279
486 changed files with 106378 additions and 85985 deletions

.gitmodules (vendored) | 3

@ -19,3 +19,6 @@
[submodule "src/3rdparty"]
path = src/3rdparty
url = git://git.bro.org/bro-3rdparty
[submodule "aux/plugins"]
path = aux/plugins
url = git://git.bro.org/bro-plugins

CHANGES | 321

@ -1,4 +1,325 @@
2.3-180 | 2014-09-22 12:52:41 -0500
* BIT-1259: Fix issue w/ duplicate TCP reassembly deliveries.
(Jon Siwek)
2.3-178 | 2014-09-18 14:29:46 -0500
* BIT-1256: Fix file analysis events from coming after bro_done().
(Jon Siwek)
2.3-177 | 2014-09-17 09:41:27 -0500
* Documentation fixes. (Chris Mavrakis)
2.3-174 | 2014-09-17 09:37:09 -0500
* Fixed some "make doc" warnings caused by reST formatting
(Daniel Thayer).
2.3-172 | 2014-09-15 13:38:52 -0500
* Remove unneeded allocations for HTTP messages. (Jon Siwek)
2.3-171 | 2014-09-15 11:14:57 -0500
* Fix a compile error on systems without pcap-int.h. (Jon Siwek)
2.3-170 | 2014-09-12 19:28:01 -0700
* Fix incorrect data delivery skips after gap in HTTP Content-Range.
Addresses BIT-1247. (Jon Siwek)
* Fix file analysis placement of data after gap in HTTP
Content-Range. Addresses BIT-1248. (Jon Siwek)
* Fix issue w/ TCP reassembler not delivering some segments.
Addresses BIT-1246. (Jon Siwek)
* Fix MIME entity file data/gap ordering and raise http_entity_data
in line with data arrival. Addresses BIT-1240. (Jon Siwek)
* Implement file ID caching for MIME_Mail. (Jon Siwek)
* Fix a compile error. (Jon Siwek)
2.3-161 | 2014-09-09 12:35:38 -0500
* Bugfixes and test updates/additions. (Robin Sommer)
* Interface tweaks and docs for PktSrc/PktDumper. (Robin Sommer)
* Moving PCAP-related bifs to iosource/pcap.bif. (Robin Sommer)
* Moving some of the BPF filtering code into base class.
This will allow packet sources that don't support BPF natively to
emulate the filtering via libpcap. (Robin Sommer)
* Removing FlowSrc. (Robin Sommer)
* Removing remaining pieces of the 2ndary path, and left-over
files of packet sorter. (Robin Sommer)
* A bunch of infrastructure work to move IOSource, IOSourceRegistry
(now iosource::Manager) and PktSrc/PktDumper code into iosource/,
and over to a plugin structure. (Robin Sommer)
2.3-137 | 2014-09-08 19:01:13 -0500
* Fix Broxygen's rendering of opaque types. (Jon Siwek)
2.3-136 | 2014-09-07 20:50:46 -0700
* Change more http links to https. (Johanna Amann)
2.3-134 | 2014-09-04 16:16:36 -0700
* Fixed a number of issues with OCSP reply validation. Addresses
BIT-1212. (Johanna Amann)
* Fix null pointer dereference in OCSP verification code in case no
certificate is sent as part as the ocsp reply. Addresses BIT-1212.
(Johanna Amann)
2.3-131 | 2014-09-04 16:10:32 -0700
* Make links in documentation templates protocol relative. (Johanna
Amann)
2.3-129 | 2014-09-02 17:21:21 -0700
* Simplify a conditional with equivalent branches. (Jon Siwek)
* Change EDNS parsing code to use rdlength more cautiously. (Jon
Siwek)
* Fix a memory leak when bind() fails due to EADDRINUSE. (Jon Siwek)
* Fix possible buffer over-read in DNS TSIG parsing. (Jon Siwek)
2.3-124 | 2014-08-26 09:24:19 -0500
* Better documentation for sub_bytes (Jimmy Jones)
* BIT-1234: Fix build on systems that already have ntohll/htonll
(Jon Siwek)
2.3-121 | 2014-08-22 15:22:15 -0700
* Detect functions that try to bind variables from an outer scope
and raise an error saying that's not supported. Addresses
BIT-1233. (Jon Siwek)
2.3-116 | 2014-08-21 16:04:13 -0500
* Adding plugin testing to Makefile's test-all. (Robin Sommer)
* Converting log writers and input readers to plugins.
DataSeries and ElasticSearch plugins have moved to the new
bro-plugins repository, which is now a git submodule in the
aux/plugins directory. (Robin Sommer)
2.3-98 | 2014-08-19 11:03:46 -0500
* Silence some doc-related warnings when using `bro -e`.
Closes BIT-1232. (Jon Siwek)
* Fix possible null ptr derefs reported by Coverity. (Jon Siwek)
2.3-96 | 2014-08-01 14:35:01 -0700
* Small change to DHCP documentation. In server->client messages the
host name may differ from the one requested by the client.
(Johanna Amann)
* Split DHCP log writing from record creation. This allows users to
customize dhcp.log by changing the record in their own dhcp_ack
event. (Johanna Amann)
* Update PATH so that documentation btests can find bro-cut. (Daniel
Thayer)
* Remove gawk from list of optional packages in documentation.
(Daniel Thayer)
* Fix for redefining built-in constants. (Robin Sommer)
2.3-86 | 2014-07-31 14:19:58 -0700
* Fix for redefining built-in constants. (Robin Sommer)
* Adding missing check that a plugin's API version matches what Bro
defines. (Robin Sommer)
* Adding NEWS entry for plugins. (Robin Sommer)
2.3-83 | 2014-07-30 16:26:11 -0500
* Minor adjustments to plugin code/docs. (Jon Siwek)
* Dynamic plugin support. (Robin Sommer)
Bro now supports extending core functionality, like protocol and
file analysis, dynamically with external plugins in the form of
shared libraries. See doc/devel/plugins.rst for an overview of the
main functionality. Changes coming with this:
- Replacing the old Plugin macro magic with a new API.
- The plugin API changed to generally use std::strings instead
of const char*.
- There are a number of invocations of PLUGIN_HOOK_
{VOID,WITH_RESULT} across the code base, which allow plugins
to hook into the processing at those locations.
- A few new accessor methods to various classes to allow
plugins to get to that information.
- network_time cannot be just assigned to anymore, there's now
function net_update_time() for that.
- Redoing how builtin variables are initialized, so that it
works for plugins as well. No more init_net_var(), but
instead bifcl-generated code that registers them.
- Various changes for adjusting to the now dynamic generation
of analyzer instances.
- same_type() gets an optional extra argument allowing record type
comparison to ignore if field names don't match. (Robin Sommer)
- Further unify file analysis API with the protocol analyzer API
(assigning IDs to analyzers; adding Init()/Done() methods;
adding subtypes). (Robin Sommer)
- A new command line option -Q that prints some basic execution
time stats. (Robin Sommer)
- Add support to the file analysis for activating analyzers by
MIME type. (Robin Sommer)
- File::register_for_mime_type(tag: Analyzer::Tag, mt:
string): Associates a file analyzer with a MIME type.
- File::add_analyzers_for_mime_type(f: fa_file, mtype:
string): Activates all analyzers registered for a MIME
type for the file.
- The default file_new() handler calls
File::add_analyzers_for_mime_type() with the file's MIME
type.
2.3-20 | 2014-07-22 17:41:02 -0700
* Updating submodule(s).
2.3-19 | 2014-07-22 17:29:19 -0700
* Implement bytestring_to_coils() in Modbus analyzer so that coils
gets passed to the corresponding events. (Hui Lin)
* Add length field to ModbusHeaders. (Hui Lin)
2.3-12 | 2014-07-10 19:17:37 -0500
* Include yield of vectors in Broxygen's type descriptions.
Addresses BIT-1217. (Jon Siwek)
2.3-11 | 2014-07-10 14:49:27 -0700
* Fixing DataSeries output. It was using a now illegal value as its
default compression level. (Robin Sommer)
2.3-7 | 2014-06-26 17:35:18 -0700
* Extending "make test-all" to include aux/bro-aux. (Robin Sommer)
2.3-6 | 2014-06-26 17:24:10 -0700
* DataSeries compilation issue fixed. (mlaterman)
* Fix a reference counting bug in ListVal ctor. (Jon Siwek)
2.3-3 | 2014-06-26 15:41:04 -0500
* Support tilde expansion when Bro tries to find its own path. (Jon
Siwek)
2.3-2 | 2014-06-23 16:54:15 -0500
* Remove references to line numbers in tutorial text. (Daniel Thayer)
2.3 | 2014-06-16 09:48:25 -0500
* Release 2.3.
2.3-beta-33 | 2014-06-12 11:59:28 -0500
* Documentation improvements/fixes. (Daniel Thayer)
2.3-beta-24 | 2014-06-11 15:35:31 -0500
* Fix SMTP state tracking when server response is missing.
(Robin Sommer)
2.3-beta-22 | 2014-06-11 12:31:38 -0500
* Fix doc/test that broke due to a Bro script change. (Jon Siwek)
* Remove unused --with-libmagic configure option. (Jon Siwek)
2.3-beta-20 | 2014-06-10 18:16:51 -0700
* Fix use-after-free in some cases of reassigning a table index.
Addresses BIT-1202. (Jon Siwek)
2.3-beta-18 | 2014-06-06 13:11:50 -0700
* Add two more SSL events, one triggered for each handshake message
and one triggered for the tls change cipherspec message. (Bernhard
Amann)
* Small SSL bug fix. In case SSL::disable_analyzer_after_detection
was set to false, the ssl_established event would fire after each
data packet once the session is established. (Bernhard Amann)
2.3-beta-16 | 2014-06-06 13:05:44 -0700
* Re-activate notice suppression for expiring certificates.
(Bernhard Amann)
2.3-beta-14 | 2014-06-05 14:43:33 -0700
* Add new TLS extension type numbers from IANA (Bernhard Amann)
* Switch to double hashing for Bloomfilters for better performance.
(Matthias Vallentin)
* Bugfix to use full digest length instead of just one byte for
Bloomfilter's universal hash function. Addresses BIT-1140.
(Matthias Vallentin)
* Make buffer for X509 certificate subjects larger. Addresses
BIT-1195 (Bernhard Amann)
2.3-beta-5 | 2014-05-29 15:34:42 -0500
* Fix misc/load-balancing.bro's reference to
PacketFilter::sampling_filter (Jon Siwek)
2.3-beta-4 | 2014-05-28 14:55:24 -0500
* Fix potential mem leak in remote function/event unserialization.
(Jon Siwek)
* Fix reference counting bug in table coercion expressions (Jon Siwek)
* Fix an "unused value" warning. (Jon Siwek)
* Remove a duplicate unit test baseline dir. (Jon Siwek)
2.3-beta | 2014-05-19 16:36:50 -0500
* Release 2.3-beta

CMakeLists.txt

@ -1,5 +1,9 @@
project(Bro C CXX)
+# When changing the minimum version here, also adapt
+# aux/bro-aux/plugin-support/skeleton/CMakeLists.txt
cmake_minimum_required(VERSION 2.6.3 FATAL_ERROR)
include(cmake/CommonCMakeConfig.cmake)
########################################################################
@ -16,12 +20,18 @@ endif ()
get_filename_component(BRO_SCRIPT_INSTALL_PATH ${BRO_SCRIPT_INSTALL_PATH}
ABSOLUTE)
+set(BRO_PLUGIN_INSTALL_PATH ${BRO_ROOT_DIR}/lib/bro/plugins CACHE STRING "Installation path for plugins" FORCE)
configure_file(bro-path-dev.in ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev)
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.sh
"export BROPATH=`${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
"export BRO_PLUGIN_PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src:${BRO_PLUGIN_INSTALL_PATH}\"\n"
"export PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.csh
"setenv BROPATH `${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
"setenv BRO_PLUGIN_PATH \"${CMAKE_CURRENT_BINARY_DIR}/src:${BRO_PLUGIN_INSTALL_PATH}\"\n"
"setenv PATH \"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/VERSION" VERSION LIMIT_COUNT 1)
@ -117,33 +127,6 @@ if (GOOGLEPERFTOOLS_FOUND)
endif ()
endif ()
-set(USE_DATASERIES false)
-find_package(Lintel)
-find_package(DataSeries)
-find_package(LibXML2)
-if (NOT DISABLE_DATASERIES AND
-    LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
-    set(USE_DATASERIES true)
-    include_directories(BEFORE ${Lintel_INCLUDE_DIR})
-    include_directories(BEFORE ${DataSeries_INCLUDE_DIR})
-    include_directories(BEFORE ${LibXML2_INCLUDE_DIR})
-    list(APPEND OPTLIBS ${Lintel_LIBRARIES})
-    list(APPEND OPTLIBS ${DataSeries_LIBRARIES})
-    list(APPEND OPTLIBS ${LibXML2_LIBRARIES})
-endif()
-set(USE_ELASTICSEARCH false)
-set(USE_CURL false)
-find_package(LibCURL)
-if (NOT DISABLE_ELASTICSEARCH AND LIBCURL_FOUND)
-    set(USE_ELASTICSEARCH true)
-    set(USE_CURL true)
-    include_directories(BEFORE ${LibCURL_INCLUDE_DIR})
-    list(APPEND OPTLIBS ${LibCURL_LIBRARIES})
-endif()
if (ENABLE_PERFTOOLS_DEBUG OR ENABLE_PERFTOOLS)
# Just a no op to prevent CMake from complaining about manually-specified
# ENABLE_PERFTOOLS_DEBUG or ENABLE_PERFTOOLS not being used if google
@ -165,6 +148,8 @@ set(brodeps
include(TestBigEndian)
test_big_endian(WORDS_BIGENDIAN)
include(CheckSymbolExists)
+check_symbol_exists(htonll arpa/inet.h HAVE_BYTEORDER_64)
include(OSSpecific)
include(CheckTypes)
@ -174,6 +159,10 @@ include(MiscTests)
include(PCAPTests)
include(OpenSSLTests)
include(CheckNameserCompat)
include(GetArchitecture)
+# Tell the plugin code that we're building as part of the main tree.
+set(BRO_PLUGIN_INTERNAL_BUILD true CACHE INTERNAL "" FORCE)
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in
${CMAKE_CURRENT_BINARY_DIR}/config.h)
@ -238,10 +227,6 @@ message(
"\n tcmalloc: ${USE_PERFTOOLS_TCMALLOC}"
"\n debugging: ${USE_PERFTOOLS_DEBUG}"
"\njemalloc: ${ENABLE_JEMALLOC}"
"\ncURL: ${USE_CURL}"
"\n"
"\nDataSeries: ${USE_DATASERIES}"
"\nElasticSearch: ${USE_ELASTICSEARCH}"
"\n"
"\n================================================================\n"
)

Makefile

@ -55,6 +55,8 @@ test:
test-all: test
test -d aux/broctl && ( cd aux/broctl && make test )
test -d aux/btest && ( cd aux/btest && make test )
+	test -d aux/bro-aux && ( cd aux/bro-aux && make test )
+	test -d aux/plugins && ( cd aux/plugins && make test-all )
configured:
@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )

NEWS | 26

@ -4,6 +4,32 @@ release. For an exhaustive list of changes, see the ``CHANGES`` file
(note that submodules, such as BroControl and Broccoli, come with
their own ``CHANGES``.)
+Bro 2.4 (in progress)
+=====================
+
+Dependencies
+------------
+
+New Functionality
+-----------------
+
+- Bro now has support for external plugins that can extend its core
+  functionality, like protocol/file analysis, via shared libraries.
+  Plugins can be developed and distributed externally, and will be
+  pulled in dynamically at startup. Currently, a plugin can provide
+  custom protocol analyzers, file analyzers, log writers[TODO], input
+  readers[TODO], packet sources[TODO], and new built-in functions. A
+  plugin can furthermore hook into Bro's processing at a number of
+  places to add custom logic.
+
+  See https://www.bro.org/sphinx-git/devel/plugins.html for more
+  information on writing plugins.
+
+Changed Functionality
+---------------------
+
+- bro-cut has been rewritten in C, and is hence much faster.
Bro 2.3
=======

VERSION

@ -1 +1 @@
-2.3-beta
+2.3-180

@ -1 +1 @@
-Subproject commit ec1e052afd5a8cd3d1d2cbb28fcd688018e379a5
+Subproject commit 3a4684801aafa0558383199e9abd711650b53af9

@ -1 +1 @@
-Subproject commit 5721df4f5f6fa84de6257cca6582a28e45831786
+Subproject commit 9ea20c3905bd3fd5109849c474a2f2b4ed008357

@ -1 +1 @@
-Subproject commit c2f5dd2cb7876158fdf9721aebd22567db840db1
+Subproject commit 33d0ed4a54a6ecf08a0b5fe18831aa413b437066

@ -1 +1 @@
-Subproject commit ca3b12421269fce655028c05939e0ff2ee82ca2d
+Subproject commit 2f808bc8541378b1a4953cca02c58c43945d154f

@ -1 +1 @@
-Subproject commit 4da1bd24038d4977e655f2b210f34e37f0b73b78
+Subproject commit 1efa4d10f943351efea96def68e598b053fd217a

aux/plugins (new submodule) | 1

@ -0,0 +1 @@
+Subproject commit 23055b473c689a79da12b2825d8388f71f28c709

cmake (submodule) | 2

@ -1 +1 @@
-Subproject commit 0f301aa08a970150195a2ea5b3ed43d2d98b35b3
+Subproject commit 03de0cc467d2334dcb851eddd843d59fef217909

config.h.in

@ -129,6 +129,9 @@
/* whether words are stored with the most significant byte first */
#cmakedefine WORDS_BIGENDIAN
+/* whether htonll/ntohll is defined in <arpa/inet.h> */
+#cmakedefine HAVE_BYTEORDER_64
/* ultrix can't hack const */
#cmakedefine NEED_ULTRIX_CONST_HACK
#ifdef NEED_ULTRIX_CONST_HACK
@ -209,3 +212,14 @@
/* Common IPv6 extension structure */
#cmakedefine HAVE_IP6_EXT
+/* String with host architecture (e.g., "linux-x86_64") */
+#define HOST_ARCHITECTURE "@HOST_ARCHITECTURE@"
+
+/* String with extension of dynamic libraries (e.g., ".so") */
+#define DYNAMIC_PLUGIN_SUFFIX "@CMAKE_SHARED_MODULE_SUFFIX@"
+
+/* True if we're building outside of the main Bro source code tree. */
+#ifndef BRO_PLUGIN_INTERNAL_BUILD
+#define BRO_PLUGIN_INTERNAL_BUILD @BRO_PLUGIN_INTERNAL_BUILD@
+#endif

configure (vendored) | 25

@ -39,8 +39,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
-  --disable-dataseries     don't use the optional DataSeries log writer
-  --disable-elasticsearch  don't use the optional ElasticSearch log writer
Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root
@ -50,7 +48,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-flex=PATH path to flex executable
--with-bison=PATH path to bison executable
--with-perl=PATH path to perl executable
-  --with-libmagic=PATH     path to libmagic install root
Optional Packages in Non-Standard Locations:
--with-geoip=PATH path to the libGeoIP install root
@ -63,9 +60,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-ruby-lib=PATH path to ruby library
--with-ruby-inc=PATH path to ruby headers
--with-swig=PATH path to SWIG executable
-  --with-dataseries=PATH   path to DataSeries and Lintel libraries
-  --with-xml2=PATH         path to libxml2 installation (for DataSeries)
-  --with-curl=PATH         path to libcurl install root (for ElasticSearch)
Packaging Options (for developers):
--binary-package toggle special logic for binary packaging
@ -184,12 +178,6 @@ while [ $# -ne 0 ]; do
--enable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
;;
-    --disable-dataseries)
-        append_cache_entry DISABLE_DATASERIES BOOL true
-        ;;
-    --disable-elasticsearch)
-        append_cache_entry DISABLE_ELASTICSEARCH BOOL true
-        ;;
--with-openssl=*)
append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
;;
@ -211,9 +199,6 @@ while [ $# -ne 0 ]; do
--with-perl=*)
append_cache_entry PERL_EXECUTABLE PATH $optarg
;;
-    --with-libmagic=*)
-        append_cache_entry LibMagic_ROOT_DIR PATH $optarg
-        ;;
--with-geoip=*)
append_cache_entry LibGeoIP_ROOT_DIR PATH $optarg
;;
@ -247,16 +232,6 @@ while [ $# -ne 0 ]; do
--with-swig=*)
append_cache_entry SWIG_EXECUTABLE PATH $optarg
;;
-    --with-dataseries=*)
-        append_cache_entry DataSeries_ROOT_DIR PATH $optarg
-        append_cache_entry Lintel_ROOT_DIR PATH $optarg
-        ;;
-    --with-xml2=*)
-        append_cache_entry LibXML2_ROOT_DIR PATH $optarg
-        ;;
-    --with-curl=*)
-        append_cache_entry LibCURL_ROOT_DIR PATH $optarg
-        ;;
--binary-package)
append_cache_entry BINARY_PACKAGING_MODE BOOL true
;;


@ -10,7 +10,7 @@
{% endblock %}
{% block header %}
<iframe src="http://www.bro.org/frames/header-no-logo.html" width="100%" height="100px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
<iframe src="//www.bro.org/frames/header-no-logo.html" width="100%" height="100px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}
@ -108,6 +108,6 @@
{% endblock %}
{% block footer %}
<iframe src="http://www.bro.org/frames/footer.html" width="100%" height="420px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
<iframe src="//www.bro.org/frames/footer.html" width="100%" height="420px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}


@ -21,7 +21,7 @@ sys.path.insert(0, os.path.abspath('sphinx_input/ext'))
# ----- Begin of BTest configuration. -----
btest = os.path.abspath("@CMAKE_SOURCE_DIR@/aux/btest")
brocut = os.path.abspath("@CMAKE_SOURCE_DIR@/aux/bro-aux/bro-cut")
brocut = os.path.abspath("@CMAKE_SOURCE_DIR@/build/aux/bro-aux/bro-cut")
bro = os.path.abspath("@CMAKE_SOURCE_DIR@/build/src")
os.environ["PATH"] += (":%s:%s/sphinx:%s:%s" % (btest, btest, bro, brocut))

doc/devel/plugins.rst (new file) | 436

@ -0,0 +1,436 @@
===================
Writing Bro Plugins
===================
Bro is internally moving to a plugin structure that enables extending
the system dynamically, without modifying the core code base. That way
custom code remains self-contained and can be maintained, compiled,
and installed independently. Currently, plugins can add the following
functionality to Bro:
- Bro scripts.
- Builtin functions/events/types for the scripting language.
- Protocol analyzers.
- File analyzers.
- Packet sources and packet dumpers. TODO: Not yet.
- Logging framework backends. TODO: Not yet.
- Input framework readers. TODO: Not yet.
A plugin's functionality is available to the user just as if Bro had
the corresponding code built-in. Indeed, internally many of Bro's
pieces are structured as plugins as well; they are just statically
compiled into the binary rather than loaded dynamically at runtime.
Quick Start
===========
Writing a basic plugin is quite straightforward as long as one
follows a few conventions. In the following we walk through a simple
example plugin that adds a new built-in function (bif) to Bro: we'll add
``rot13(s: string) : string``, a function that rotates every letter
of a string by 13 places.
Generally, a plugin comes in the form of a directory following a
certain structure. To get started, Bro's distribution provides a
helper script ``aux/bro-aux/plugin-support/init-plugin`` that creates
a skeleton plugin that can then be customized. Let's use that::
# mkdir rot13-plugin
# cd rot13-plugin
# init-plugin Demo Rot13
As you can see, the script takes two arguments. The first is a
namespace the plugin will live in, and the second a descriptive name
for the plugin itself. Bro uses the combination of the two to identify
a plugin. The namespace serves to avoid naming conflicts between
plugins written by independent developers; pick, e.g., the name of
your organisation. The namespace ``Bro`` is reserved for functionality
distributed by the Bro Project. In our example, the plugin will be
called ``Demo::Rot13``.
The ``init-plugin`` script puts a number of files in place. The full
layout is described later. For now, all we need is
``src/functions.bif``. It's initially empty, but we'll add our new bif
there as follows::
# cat src/functions.bif
module CaesarCipher;
function rot13%(s: string%) : string
%{
char* rot13 = copy_string(s->CheckString()); // Work on a copy of the argument.

for ( char* p = rot13; *p; p++ )
	{
	if ( ! isalpha(*p) )
		continue; // Leave non-letters untouched.

	char b = islower(*p) ? 'a' : 'A'; // Alphabet base for the letter's case.
	*p = (*p - b + 13) % 26 + b; // Rotate within the 26 letters.
	}

return new StringVal(new BroString(1, rot13, strlen(rot13)));
%}
The syntax of this file is just like any other ``*.bif`` file; we
won't go into it here.
Now we can compile our plugin; we just need to tell the
Makefile put in place by ``init-plugin`` where the Bro source tree is
located (Bro needs to have been built there first)::
# make BRO=/path/to/bro/dist
[... cmake output ...]
Now our ``rot13-plugin`` directory has everything that it needs
for Bro to recognize it as a dynamic plugin. Once we point Bro to it,
it will pull it in automatically, as we can check with the ``-N``
option::
# export BRO_PLUGIN_PATH=/path/to/rot13-plugin
# bro -N
[...]
Plugin: Demo::Rot13 - <Insert brief description of plugin> (dynamic, version 1)
[...]
That looks quite good, except for the dummy description that we should
replace with something nicer so that users will know what our plugin
is about. We do this by editing the ``config.description`` line in
``src/Plugin.cc``, like this::
[...]
plugin::Configuration Configure()
{
plugin::Configuration config;
config.name = "Demo::Rot13";
config.description = "Caesar cipher rotating a string's characters by 13 places.";
config.version.major = 1;
config.version.minor = 0;
return config;
}
[...]
# make
[...]
# bro -N | grep Rot13
Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
Better. Bro can also show us what exactly the plugin provides with the
more verbose option ``-NN``::
# bro -NN
[...]
Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
[Function] CaesarCipher::rot13
[...]
There's our function. Now let's use it::
# bro -e 'print CaesarCipher::rot13("Hello")'
Uryyb
It works. We next install the plugin along with Bro itself, so that Bro
will find it directly without needing the ``BRO_PLUGIN_PATH``
environment variable. If we first unset the variable, the function
will no longer be available::
# unset BRO_PLUGIN_PATH
# bro -e 'print CaesarCipher::rot13("Hello")'
error in <command line>, line 1: unknown identifier CaesarCipher::rot13, at or near "CaesarCipher::rot13"
Once we install it, it works again::
# make install
# bro -e 'print CaesarCipher::rot13("Hello")'
Uryyb
The installed version went into
``<bro-install-prefix>/lib/bro/plugins/Demo_Rot13``.
We can distribute the plugin in either source or binary form by using
the Makefile's ``sdist`` and ``bdist`` targets, respectively. Both
create corresponding tarballs::
# make sdist
[...]
Source distribution in build/sdist/Demo_Rot13.tar.gz
# make bdist
[...]
Binary distribution in build/Demo_Rot13-darwin-x86_64.tar.gz
The source archive will contain everything in the plugin directory
except any generated files. The binary archive will contain anything
needed to install and run the plugin, i.e., just what ``make install``
puts into place as well. As the binary distribution is
platform-dependent, its name includes the OS and architecture the
plugin was built on.
Plugin Directory Layout
=======================
A plugin's directory needs to follow a set of conventions so that Bro
(1) recognizes it as a plugin, and (2) knows what to load. While
``init-plugin`` takes care of most of this, the following is the full
story. We'll use ``<base>`` to represent a plugin's top-level
directory.
``<base>/__bro_plugin__``
A file that marks a directory as containing a Bro plugin. The file
must exist, and its content must consist of a single line with the
qualified name of the plugin (e.g., "Demo::Rot13").
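For the example plugin from the Quick Start, the marker file would hence
look like this (a hypothetical listing)::

    # cat rot13-plugin/__bro_plugin__
    Demo::Rot13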
``<base>/lib/<plugin-name>-<os>-<arch>.so``
The shared library containing the plugin's compiled code. Bro will
load it dynamically at run-time if the OS and architecture match the
current platform.
``scripts/``
A directory with the plugin's custom Bro scripts. When the plugin
gets activated, this directory will be automatically added to
``BROPATH``, so that any scripts/modules inside can be
"@load"ed.
``scripts/__load__.bro``
A Bro script that will be loaded immediately when the plugin gets
activated. See below for more information on activating plugins.
``lib/bif/``
Directory with auto-generated Bro scripts that declare the plugin's
bif elements. The files here are produced by ``bifcl``.
By convention, a plugin should put its custom scripts into subfolders
of ``scripts/``, i.e., ``scripts/<script-namespace>/<script>.bro``, to
avoid conflicts. As usual, you can then put a ``__load__.bro`` in
there as well so that, e.g., ``@load Demo/Rot13`` could load a whole
module in the form of multiple individual scripts, as sketched below.
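A script tree for the example plugin following that convention might
look like this (a hypothetical layout)::

    scripts/
        __load__.bro
        Demo/
            Rot13/
                __load__.bro
                main.bro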
Note that in addition to the paths above, the ``init-plugin`` helper
puts some more files and directories in place that help with
development and installation (e.g., ``CMakeLists.txt``, ``Makefile``,
and source code in ``src/``). However, none of these has any special
meaning for Bro at runtime, and they aren't necessary for a plugin to
function.
``init-plugin``
===============
``init-plugin`` puts a basic plugin structure in place that follows
the above layout and augments it with a CMake build and installation
system. Plugins with this structure can be used both directly out of
their source directory (after ``make`` and setting Bro's
``BRO_PLUGIN_PATH``), and when installed alongside Bro (after ``make
install``).
``make install`` copies over the ``lib`` and ``scripts`` directories,
as well as the ``__bro_plugin__`` magic file and the ``README`` (which
you should customize). One can add further CMake ``install`` rules to
install additional files if needed.
``init-plugin`` will never overwrite existing files, so it's safe to
rerun in an existing plugin directory; it only puts files in place that
don't exist yet. That also provides a convenient way to revert a file
back to what ``init-plugin`` created originally: just delete it and
rerun.
Activating a Plugin
===================
A plugin needs to be *activated* to make it available to the user.
Activating a plugin will:
1. Load the dynamic module
2. Make any bif items available
3. Add the ``scripts/`` directory to ``BROPATH``
4. Load ``scripts/__load__.bro``
By default, Bro will automatically activate all dynamic plugins found
in its search path ``BRO_PLUGIN_PATH``. However, in bare mode (``bro
-b``), no dynamic plugins will be activated by default; instead the
user can selectively enable individual plugins in scriptland using the
``@load-plugin <qualified-plugin-name>`` directive (e.g.,
``@load-plugin Demo::Rot13``). Alternatively, one can activate a
plugin from the command-line by specifying its full name
(``Demo::Rot13``), or set the environment variable
``BRO_PLUGIN_ACTIVATE`` to a list of comma(!)-separated names of
plugins to unconditionally activate, even in bare mode.
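For example, either of the following would activate the Quick Start
plugin even in bare mode (a hypothetical session based on the
mechanisms just described)::

    # bro -b Demo::Rot13 -e 'print CaesarCipher::rot13("Hello")'
    Uryyb

    # BRO_PLUGIN_ACTIVATE=Demo::Rot13 bro -b -e 'print CaesarCipher::rot13("Hello")'
    Uryyb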
``bro -N`` shows activated plugins separately from found but not yet
activated plugins. Note that plugins compiled statically into Bro are
always activated, and hence show up as such even in bare mode.
Plugin Components
=================
The following gives additional information about providing individual
types of functionality via plugins. Note that a single plugin can
provide more than one type. For example, a plugin could provide
multiple protocol analyzers at once; or both a logging backend and
input reader at the same time.
We now walk briefly through the specifics of providing a specific type
of functionality (a *component*) through a plugin. We'll focus on
their interfaces to the plugin system, rather than specifics on
writing the corresponding logic (usually the best way to get going on
that is to start with an existing plugin providing a corresponding
component and adapt that). We'll also point out how the CMake
infrastructure put in place by the ``init-plugin`` helper script ties
the various pieces together.
Bro Scripts
-----------
Scripts are easy: just put them into ``scripts/``, as described above.
The CMake infrastructure will automatically install them, as well as
include them in the source and binary plugin distributions.
Builtin Language Elements
-------------------------
Functions
TODO
Events
TODO
Types
TODO
Protocol Analyzers
------------------
TODO.
File Analyzers
--------------
TODO.
Logging Writer
--------------
Not yet available as plugins.
Input Reader
------------
Not yet available as plugins.
Packet Sources
--------------
Not yet available as plugins.
Packet Dumpers
--------------
Not yet available as plugins.
Hooks
=====
TODO.
Testing Plugins
===============
A plugin should come with a test suite to exercise its functionality.
The ``init-plugin`` script puts in place a basic BTest setup
to start with. Initially, it comes with a single test that just checks
that Bro loads the plugin correctly. It won't have a baseline yet, so
let's get that in place::
# cd tests
# btest -d
[ 0%] plugin.loading ... failed
% 'btest-diff output' failed unexpectedly (exit code 100)
% cat .diag
== File ===============================
Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1.0)
[Function] CaesarCipher::rot13
== Error ===============================
test-diff: no baseline found.
=======================================
# btest -U
all 1 tests successful
# cd ..
# make test
make -C tests
make[1]: Entering directory `tests'
all 1 tests successful
make[1]: Leaving directory `tests'
Now let's add a custom test that ensures that our bif works
correctly::
# cd tests
# cat >plugin/rot13.bro
# @TEST-EXEC: bro %INPUT >output
# @TEST-EXEC: btest-diff output
event bro_init()
{
print CaesarCipher::rot13("Hello");
}
Check the output::
# btest -d plugin/rot13.bro
[ 0%] plugin.rot13 ... failed
% 'btest-diff output' failed unexpectedly (exit code 100)
% cat .diag
== File ===============================
Uryyb
== Error ===============================
test-diff: no baseline found.
=======================================
% cat .stderr
1 of 1 test failed
Install the baseline::
# btest -U plugin/rot13.bro
all 1 tests successful
Run the test-suite::
# btest
all 2 tests successful
Debugging Plugins
=================
Plugins can use Bro's standard debug logger by using the
``PLUGIN_DBG_LOG(<plugin>, <args>)`` macro (defined in
``DebugLogger.h``), where ``<plugin>`` is the ``Plugin`` instance and
``<args>`` are printf-style arguments, just as with Bro's standard
debugging macros.

At runtime, one then activates a plugin's debugging output with ``-B
plugin-<name>``, where ``<name>`` is the name of the plugin as
returned by its ``Configure()`` method, with the namespace separator
``::`` replaced by a dash. Example: If the plugin is called
``Bro::Demo``, use ``-B plugin-Bro-Demo``. As usual, the debugging
output will be recorded to ``debug.log`` if Bro is compiled in debug
mode.
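For the Quick Start plugin, whose ``Configure()`` method sets the name
``Demo::Rot13``, a session could look like this (hypothetical; it
assumes a debug build of Bro and that the plugin actually calls
``PLUGIN_DBG_LOG`` somewhere)::

    # bro -B plugin-Demo-Rot13 -e 'print CaesarCipher::rot13("Hello")'
    # grep -i rot13 debug.log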
Documenting Plugins
===================
.. todo::
Integrate all this with Broxygen.

logging-dataseries.rst (file deleted)

@ -1,186 +0,0 @@
=============================
Binary Output with DataSeries
=============================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for storing and searching large volumes of data. An an
alternative, Bro comes with experimental support for `DataSeries
<http://www.hpl.hp.com/techreports/2009/HPL-2009-323.html>`_
output, an efficient binary format for recording structured bulk
data. DataSeries is developed and maintained at HP Labs.
.. contents::
Installing DataSeries
---------------------
To use DataSeries, its libraries must be available at compile-time,
along with the supporting *Lintel* package. Generally, both are
distributed on `HP Labs' web site
<http://tesla.hpl.hp.com/opensource/>`_. Currently, however, you need
to use recent development versions for both packages, which you can
download from github like this::
git clone http://github.com/dataseries/Lintel
git clone http://github.com/dataseries/DataSeries
To build and install the two into ``<prefix>``, do::
( cd Lintel && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=<prefix> .. && make && make install )
( cd DataSeries && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=<prefix> .. && make && make install )
Please refer to the packages' documentation for more information about
the installation process. In particular, there's more information on
required and optional `dependencies for Lintel
<https://raw.github.com/dataseries/Lintel/master/doc/dependencies.txt>`_
and `dependencies for DataSeries
<https://raw.github.com/dataseries/DataSeries/master/doc/dependencies.txt>`_.
For users on RedHat-style systems, you'll need the following::
yum install libxml2-devel boost-devel
Compiling Bro with DataSeries Support
-------------------------------------
Once you have installed DataSeries, Bro's ``configure`` should pick it
up automatically as long as it finds it in a standard system location.
Alternatively, you can specify the DataSeries installation prefix
manually with ``--with-dataseries=<prefix>``. Keep an eye on
``configure``'s summary output; if it looks like the following, Bro
found DataSeries and will compile in the support::
# ./configure --with-dataseries=/usr/local
[...]
====================| Bro Build Summary |=====================
[...]
DataSeries: true
[...]
================================================================
Activating DataSeries
---------------------
The direct way to use DataSeries is to switch *all* log files over to
the binary format. To do that, just add ``redef
Log::default_writer=Log::WRITER_DATASERIES;`` to your ``local.bro``.
For testing, you can also just pass that on the command line::
bro -r trace.pcap Log::default_writer=Log::WRITER_DATASERIES
With that, Bro will now write all its output into DataSeries files
``*.ds``. You can inspect these using DataSeries's set of command line
tools, which its installation process installs into ``<prefix>/bin``.
For example, to convert a file back into an ASCII representation::
$ ds2txt conn.log
[... We skip a bunch of metadata here ...]
ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes
1300475167.096535 CRCC5OdDlXe 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0
1300475167.097012 o7XBsfvo3U1 fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0
1300475167.099816 pXPi1kPMgxb 141.142.220.50 5353 224.0.0.251 5353 udp 0.000000 0 0 S0 F 0 D 1 179 0 0
1300475168.853899 R7sOc16woCj 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF F 0 Dd 1 66 1 117
1300475168.854378 Z6dfHVmt0X7 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF F 0 Dd 1 80 1 127
1300475168.854837 k6T92WxgNAh 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211
[...]
(``--skip-all`` suppresses the metadata.)
Note that the ASCII conversion is *not* equivalent to Bro's default
output format.
You can also switch only individual files over to DataSeries by adding
code like this to your ``local.bro``:
.. code:: bro
event bro_init()
{
local f = Log::get_filter(Conn::LOG, "default"); # Get default filter for connection log.
f$writer = Log::WRITER_DATASERIES; # Change writer type.
Log::add_filter(Conn::LOG, f); # Replace filter with adapted version.
}
Bro's DataSeries writer comes with a few tuning options, see
:doc:`/scripts/base/frameworks/logging/writers/dataseries.bro`.
Working with DataSeries
=======================
Here are a few examples of using DataSeries command line tools to work
with the output files.
* Printing CSV::
$ ds2txt --csv conn.log
ts,uid,id.orig_h,id.orig_p,id.resp_h,id.resp_p,proto,service,duration,orig_bytes,resp_bytes,conn_state,local_orig,missed_bytes,history,orig_pkts,orig_ip_bytes,resp_pkts,resp_ip_bytes
1258790493.773208,ZTtgbHvf4s3,192.168.1.104,137,192.168.1.255,137,udp,dns,3.748891,350,0,S0,F,0,D,7,546,0,0
1258790451.402091,pOY6Rw7lhUd,192.168.1.106,138,192.168.1.255,138,udp,,0.000000,0,0,S0,F,0,D,1,229,0,0
1258790493.787448,pn5IiEslca9,192.168.1.104,138,192.168.1.255,138,udp,,2.243339,348,0,S0,F,0,D,2,404,0,0
1258790615.268111,D9slyIu3hFj,192.168.1.106,137,192.168.1.255,137,udp,dns,3.764626,350,0,S0,F,0,D,7,546,0,0
[...]
Add ``--separator=X`` to set a different separator.
* Extracting a subset of columns::
$ ds2txt --select '*' ts,id.resp_h,id.resp_p --skip-all conn.log
1258790493.773208 192.168.1.255 137
1258790451.402091 192.168.1.255 138
1258790493.787448 192.168.1.255 138
1258790615.268111 192.168.1.255 137
1258790615.289842 192.168.1.255 138
[...]
* Filtering rows::
$ ds2txt --where '*' 'duration > 5 && id.resp_p > 1024' --skip-all conn.ds
1258790631.532888 V8mV5WLITu5 192.168.1.105 55890 239.255.255.250 1900 udp 15.004568 798 0 S0 F 0 D 6 966 0 0
1258792413.439596 tMcWVWQptvd 192.168.1.105 55890 239.255.255.250 1900 udp 15.004581 798 0 S0 F 0 D 6 966 0 0
1258794195.346127 cQwQMRdBrKa 192.168.1.105 55890 239.255.255.250 1900 udp 15.005071 798 0 S0 F 0 D 6 966 0 0
1258795977.253200 i8TEjhWd2W8 192.168.1.105 55890 239.255.255.250 1900 udp 15.004824 798 0 S0 F 0 D 6 966 0 0
1258797759.160217 MsLsBA8Ia49 192.168.1.105 55890 239.255.255.250 1900 udp 15.005078 798 0 S0 F 0 D 6 966 0 0
1258799541.068452 TsOxRWJRGwf 192.168.1.105 55890 239.255.255.250 1900 udp 15.004082 798 0 S0 F 0 D 6 966 0 0
[...]
* Calculate some statistics:
Mean/stddev/min/max over a column::
$ dsstatgroupby '*' basic duration from conn.ds
# Begin DSStatGroupByModule
# processed 2159 rows, where clause eliminated 0 rows
# count(*), mean(duration), stddev, min, max
2159, 42.7938, 1858.34, 0, 86370
[...]
Quantiles of total connection volume::
$ dsstatgroupby '*' quantile 'orig_bytes + resp_bytes' from conn.ds
[...]
2159 data points, mean 24616 +- 343295 [0,1.26615e+07]
quantiles about every 216 data points:
10%: 0, 124, 317, 348, 350, 350, 601, 798, 1469
tails: 90%: 1469, 95%: 7302, 99%: 242629, 99.5%: 1226262
[...]
The ``man`` pages for these tools show further options, and their
``-h`` option gives some more information (either can be a bit cryptic
unfortunately though).
Deficiencies
------------
Due to limitations of the DataSeries format, one cannot inspect its
files before they have been fully written. In other words, when using
DataSeries, it's currently not possible to inspect the live log
files inside the spool directory before they are rotated to their
final location. It seems that this could be fixed with some effort,
and we will work with DataSeries development team on that if the
format gains traction among Bro users.
Likewise, we're considering writing custom command line tools for
interacting with DataSeries files, making that a bit more convenient
than what the standard utilities provide.

logging-elasticsearch.rst (file deleted)

@ -1,89 +0,0 @@
=========================================
Indexed Logging Output with ElasticSearch
=========================================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for searching large volumes of data. ElasticSearch
is a new data storage technology for dealing with tons of data.
It's also a search engine built on top of Apache's Lucene
project. It scales very well, both for distributed indexing and
distributed searching.
.. contents::
Warning
-------
This writer plugin is still in testing and is not yet recommended for
production use! The approach to how logs are handled in the plugin is "fire
and forget" at this time; there is no error handling if the server fails to
respond successfully to the insertion request.
Installing ElasticSearch
------------------------
Download the latest version from: http://www.elasticsearch.org/download/.
Once extracted, start ElasticSearch with::
# ./bin/elasticsearch
For more detailed information, refer to the ElasticSearch installation
documentation: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html
Compiling Bro with ElasticSearch Support
----------------------------------------
First, ensure that you have libcurl installed then run configure::
# ./configure
[...]
====================| Bro Build Summary |=====================
[...]
cURL: true
[...]
ElasticSearch: true
[...]
================================================================
Activating ElasticSearch
------------------------
The easiest way to enable ElasticSearch output is to load the
tuning/logs-to-elasticsearch.bro script. If you are using BroControl,
the following line in local.bro will enable it:
.. console::
@load tuning/logs-to-elasticsearch
With that, Bro will now write most of its logs into ElasticSearch in addition
to maintaining the ASCII logs like it would do by default. That script has
some tunable options for choosing which logs to send to ElasticSearch; refer
to the autogenerated script documentation for those options.
There is an interface named Brownian, written specifically to integrate
with the data that Bro outputs into ElasticSearch. It can be found here::
https://github.com/grigorescu/Brownian
Tuning
------
A common problem encountered with ElasticSearch is too many files being held
open. The ElasticSearch website has some suggestions on how to increase the
open file limit.
- http://www.elasticsearch.org/tutorials/too-many-open-files/
TODO
----
Lots.
- Perform multicast discovery for server.
- Better error detection.
- Better defaults (don't index loaded-plugins, for instance).


@ -380,11 +380,11 @@ uncommon to need to delete that data before the end of the connection.
Other Writers
-------------
-Bro supports the following output formats other than ASCII:
+Bro supports the following built-in output formats other than ASCII:
.. toctree::
:maxdepth: 1
-logging-dataseries
-logging-elasticsearch
logging-input-sqlite
+Further formats are available as external plugins.


@ -31,6 +31,7 @@ Using Bro Section
httpmonitor/index.rst
broids/index.rst
mimestats/index.rst
+scripting/index.rst
..
@ -40,7 +41,6 @@ Reference Section
.. toctree::
:maxdepth: 2
-scripting/index.rst
frameworks/index.rst
script-reference/index.rst
components/index.rst


@ -91,7 +91,6 @@ build time:
* LibGeoIP (for geolocating IP addresses)
* sendmail (enables Bro and BroControl to send mail)
-* gawk (enables all features of bro-cut)
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
* ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)
@ -181,7 +180,7 @@ automatically. Finally, use ``make install-aux`` to install some of
the other programs that are in the ``aux/bro-aux`` directory.
OpenBSD users, please see our `FAQ
<http://www.bro.org/documentation/faq.html>`_ if you are having
<//www.bro.org/documentation/faq.html>`_ if you are having
problems installing Bro.
Finally, if you want to build the Bro documentation (not required, because


@ -162,8 +162,8 @@ tools like ``awk`` allow you to indicate the log file as a command
line option, bro-cut only takes input through redirection such as
``|`` and ``<``. There are a couple of ways to direct log file data
into ``bro-cut``, each dependent upon the type of log file you're
-processing. A caveat of its use, however, is that the 8 lines of
-header data must be present.
+processing. A caveat of its use, however, is that all of the
+header lines must be present.
.. note::
@ -177,8 +177,8 @@ moving the current log file into a directory with format
``YYYY-MM-DD`` and gzip compressing the file with a file format that
includes the log file type and time range of the file. In the case of
processing a compressed log file you simply adjust your command line
-tools to use the complementary ``z*`` versions of commands such as cat
-(``zcat``), ``grep`` (``zgrep``), and ``head`` (``zhead``).
+tools to use the complementary ``z*`` versions of commands such as ``cat``
+(``zcat``) or ``grep`` (``zgrep``).
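For example, a rotated and compressed connection log could be piped
through ``bro-cut`` like this (a hypothetical file name)::

    $ zcat conn.00:00:00-01:00:00.log.gz | bro-cut ts id.orig_h id.resp_h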
Working with Timestamps
-----------------------


@ -1,5 +1,5 @@
-.. _FAQ: http://www.bro.org/documentation/faq.html
+.. _FAQ: //www.bro.org/documentation/faq.html
.. _quickstart:
@ -234,7 +234,7 @@ is valid before installing it and then restarting the Bro instance:
.. console::
[BroControl] > check
-bro is ok.
+bro scripts are ok.
[BroControl] > install
removing old policies in /usr/local/bro/spool/policy/site ... done.
removing old policies in /usr/local/bro/spool/policy/auto ... done.
@ -250,15 +250,15 @@ is valid before installing it and then restarting the Bro instance:
Now that the SSL notice is ignored, let's look at how to send an email on
the SSH notice. The notice framework has a similar option called
``emailed_types``, but that can't differentiate between SSH servers and we
only want email for logins to certain ones. Then we come to the ``PolicyItem``
record and ``policy`` set and realize that those are actually what get used
to implement the simple functionality of ``ignored_types`` and
``emailed_types``, but using that would generate email for all SSH servers and
we only want email for logins to certain ones. There is a ``policy`` hook
that is actually what is used to implement the simple functionality of
``ignored_types`` and
``emailed_types``, but it's extensible such that the condition and action taken
on notices can be user-defined.
-In ``local.bro``, let's add a new ``PolicyItem`` record to the ``policy`` set
-that only takes the email action for SSH logins to a defined set of servers:
+In ``local.bro``, let's define a new ``policy`` hook handler body
+that takes the email action for SSH logins only for a defined set of servers:
.. code:: bro
@ -276,9 +276,9 @@ that only takes the email action for SSH logins to a defined set of servers:
You'll just have to trust the syntax for now, but what we've done is
first declare our own variable to hold a set of watched addresses,
-``watched_servers``; then added a record to the policy that will generate
-an email on the condition that the predicate function evaluates to true, which
-is whenever the notice type is an SSH login and the responding host stored
+``watched_servers``; then added a hook handler body to the policy that will
+generate an email whenever the notice type is an SSH login and the responding
+host stored
inside the ``Info`` record's connection field is in the set of watched servers.
.. note:: Record field member access is done with the '$' character


@ -1,13 +1,19 @@
event bro_init()
{
+# Declaration of the table.
local ssl_services: table[string] of port;
+# Initialize the table.
ssl_services = table(["SSH"] = 22/tcp, ["HTTPS"] = 443/tcp);
+# Insert one key-yield pair into the table.
ssl_services["IMAPS"] = 993/tcp;
+# Check if the key "SMTPS" is not in the table.
if ( "SMTPS" !in ssl_services )
ssl_services["SMTPS"] = 587/tcp;
+# Iterate over each key in the table.
for ( k in ssl_services )
print fmt("Service Name: %s - Common Port: %s", k, ssl_services[k]);
}


@ -1,8 +1,10 @@
module Factor;
export {
+# Append the value LOG to the Log::ID enumerable.
redef enum Log::ID += { LOG };
+# Define a new type called Factor::Info.
type Info: record {
num: count &log;
factorial_num: count &log;
@ -20,6 +22,7 @@ function factorial(n: count): count
event bro_init()
{
+# Create the logging stream.
Log::create_stream(LOG, [$columns=Info]);
}


@ -2,7 +2,6 @@
@load base/protocols/ssh/
redef Notice::emailed_types += {
-SSH::Interesting_Hostname_Login,
-SSH::Login
+SSH::Interesting_Hostname_Login
};


@ -3,5 +3,4 @@
redef Notice::type_suppression_intervals += {
[SSH::Interesting_Hostname_Login] = 1day,
-[SSH::Login] = 12hrs,
};


@ -54,7 +54,8 @@ script and much more in following sections.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 4-6
-Lines 3 to 5 of the script process the ``__load__.bro`` script in the
+The first part of the script consists of ``@load`` directives which
+process the ``__load__.bro`` script in the
respective directories being loaded. The ``@load`` directives are
often considered good practice or even just good manners when writing
Bro scripts to make sure they can be used on their own. While it's unlikely that in a
@ -78,29 +79,37 @@ of the :bro:id:`NOTICE` function to generate notices of type
``TeamCymruMalwareHashRegistry::Match`` as done in the next section. Notices
allow Bro to generate some kind of extra notification beyond its
default log types. Often times, this extra notification comes in the
-form of an email generated and sent to a preconfigured address, but can be altered
-depending on the needs of the deployment. The export section is finished off with
-the definition of two constants that list the kind of files we want to match against and
-the minimum percentage of detection threshold in which we are interested.
+form of an email generated and sent to a preconfigured address, but can
+be altered depending on the needs of the deployment. The export section
+is finished off with the definition of a few constants that list the kind
+of files we want to match against and the minimum percentage of
+detection threshold in which we are interested.
-Up until this point, the script has merely done some basic setup. With the next section,
-the script starts to define instructions to take in a given event.
+Up until this point, the script has merely done some basic setup. With
+the next section, the script starts to define instructions to take in
+a given event.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 38-71
The workhorse of the script is contained in the event handler for
``file_hash``. The :bro:see:`file_hash` event allows scripts to access
-the information associated with a file for which Bro's file analysis framework has
-generated a hash. The event handler is passed the file itself as ``f``, the type of digest
-algorithm used as ``kind`` and the hash generated as ``hash``.
+the information associated with a file for which Bro's file analysis
+framework has generated a hash. The event handler is passed the
+file itself as ``f``, the type of digest algorithm used as ``kind``
+and the hash generated as ``hash``.
-On line 3, an ``if`` statement is used to check for the correct type of hash, in this case
-a SHA1 hash. It also checks for a mime type we've defined as being of interest as defined in the
-constant ``match_file_types``. The comparison is made against the expression ``f$mime_type``, which uses
-the ``$`` dereference operator to check the value ``mime_type`` inside the variable ``f``. Once both
-values resolve to true, a local variable is defined to hold a string comprised of the SHA1 hash concatenated
-with ``.malware.hash.cymru.com``; this value will be the domain queried in the malware hash registry.
+In the ``file_hash`` event handler, there is an ``if`` statement that is used
+to check for the correct type of hash, in this case
+a SHA1 hash. It also checks for a mime type we've defined as
+being of interest as defined in the constant ``match_file_types``.
+The comparison is made against the expression ``f$mime_type``, which uses
+the ``$`` dereference operator to check the value ``mime_type``
+inside the variable ``f``. If the entire expression evaluates to true,
+then a helper function is called to do the rest of the work. In that
+function, a local variable is defined to hold a string comprised of
+the SHA1 hash concatenated with ``.malware.hash.cymru.com``; this
+value will be the domain queried in the malware hash registry.
The rest of the script is contained within a ``when`` block. In
short, a ``when`` block is used when Bro needs to perform asynchronous
@ -111,24 +120,28 @@ this event continues and upon receipt of the values returned by
:bro:id:`lookup_hostname_txt`, the ``when`` block is executed. The
``when`` block splits the string returned into a portion for the date on which
the malware was first detected and the detection rate by splitting on a text space
-and storing the values returned in a local table variable. In line 12, if the table
-returned by ``split1`` has two entries, indicating a successful split, we store the detection
-date in ``mhr_first_detected`` and the rate in ``mhr_detect_rate`` on lines 14 and 15 respectively
+and storing the values returned in a local table variable.
+In the ``do_mhr_lookup`` function, if the table
+returned by ``split1`` has two entries, indicating a successful split, we
+store the detection
+date in ``mhr_first_detected`` and the rate in ``mhr_detect_rate``
using the appropriate conversion functions. From this point on, Bro knows it has seen a file
transmitted which has a hash that has been seen by the Team Cymru Malware Hash Registry; the rest
of the script is dedicated to producing a notice.
-On line 17, the detection time is processed into a string representation and stored in
-``readable_first_detected``. The script then compares the detection rate against the
-``notice_threshold`` that was defined earlier. If the detection rate is high enough, the script
-creates a concise description of the notice on line 22, a possible URL to check the sample against
-``virustotal.com``'s database, and makes the call to :bro:id:`NOTICE` to hand the relevant information
-off to the Notice framework.
+The detection time is processed into a string representation and stored in
+``readable_first_detected``. The script then compares the detection rate
+against the ``notice_threshold`` that was defined earlier. If the
+detection rate is high enough, the script creates a concise description
+of the notice and stores it in the ``message`` variable. It also
+creates a possible URL to check the sample against
+``virustotal.com``'s database, and makes the call to :bro:id:`NOTICE`
+to hand the relevant information off to the Notice framework.
-In approximately 25 lines of code, Bro provides an amazing
+In approximately a few dozen lines of code, Bro provides an amazing
utility that would be incredibly difficult to implement and deploy
-with other products. In truth, claiming that Bro does this in 25
-lines is a misdirection; there is a truly massive number of things
+with other products. In truth, claiming that Bro does this in such a small
+number of lines is a misdirection; there is a truly massive number of things
going on behind-the-scenes in Bro, but it is the inclusion of the
scripting language that gives analysts access to those underlying
layers in a succinct and well defined manner.
@ -180,7 +193,7 @@ event definition used by Bro. As Bro detects DNS requests being
issued by an originator, it issues this event and any number of
scripts then have access to the data Bro passes along with the event.
In this example, Bro passes not only the message, the query, query
-type and query class for the DNS request, but also a then record used
+type and query class for the DNS request, but also a record used
for the connection itself.
The Connection Record Data Type
@ -210,8 +223,7 @@ into the connection record data type will be
:bro:id:`connection_state_remove`. As detailed in the in-line
documentation, Bro generates this event just before it decides to
remove the connection from memory, effectively forgetting about it. Let's
take a look at a simple example script that will output the connection record
for a single connection.
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_01.bro
@ -243,12 +255,12 @@ of reference for accessing data in a script.
Bro makes extensive use of nested data structures to store state and
information gleaned from the analysis of a connection as a complete
unit. To break down this collection of information, you will have to
make use of Bro's field delimiter ``$``. For example, the
originating host is referenced by ``c$id$orig_h`` which if given a
narrative relates to ``orig_h`` which is a member of ``id`` which is
a member of the data structure referred to as ``c`` that was passed
into the event handler." Given that the responder port
(``c$id$resp_p``) is ``53/tcp``, it's likely that Bro's base HTTP scripts
into the event handler. Given that the responder port
``c$id$resp_p`` is ``53/tcp``, it's likely that Bro's base DNS scripts
can further populate the connection record. Let's load the
``base/protocols/dns`` scripts and check the output of our script.
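For instance, a minimal sketch of walking that nesting with the ``$``
delimiter (the printed wording is illustrative):

.. code:: bro

    event connection_state_remove(c: connection)
        {
        # c -> id -> orig_h / resp_h / resp_p, joined with $.
        print fmt("%s contacted %s on port %s",
                  c$id$orig_h, c$id$resp_h, c$id$resp_p);
        }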
@ -276,7 +288,7 @@ As mentioned above, including the appropriate ``@load`` statements is
not only good practice, but can also help to indicate which
functionalities are being used in a script. Take a second to run the
script without the ``-b`` flag and check the output when all of Bro's
functionality is applied to the trace file.
Data Types and Data Structures
==============================
@ -384,9 +396,12 @@ which it was declared. Local variables tend to be used for values
that are only needed within a specific scope, and once the processing
of a script passes beyond that scope and the value is no longer used, the variable
is deleted. Bro maintains names of locals separately from globally
visible ones, an example of which is illustrated below.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_local.bro
The script executes the event handler :bro:id:`bro_init` which in turn calls
the function ``add_two(i: count)`` with an argument of ``10``. Once Bro
enters the ``add_two`` function, it provisions a locally scoped
variable called ``added_two`` to hold the value of ``i+2``, in this
case, ``12``. The ``add_two`` function then prints the value of the
@ -398,8 +413,6 @@ processing the ``bro_init`` function, the variable called ``test`` is
no longer in scope and, since there exist no other references to the
value ``12``, the value is also deleted.
Data Structures
---------------
@ -506,20 +519,7 @@ Tables
A table in Bro is a mapping of a key to a value or yield. While the
values don't have to be unique, each key in the table must be unique
to preserve a one-to-one mapping of keys to values.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_declaration.bro
@ -527,6 +527,21 @@ to iterate over each key currently in the table.
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_table_declaration.bro
In this example,
we've compiled a table of SSL-enabled services and their common
ports. The explicit declaration and constructor for the table are on
two different lines; they lay out the data types of the keys (strings) and
the data types of the yields (ports), then fill in some sample key and
yield pairs. You can also use a table accessor to insert one
key-yield pair into the table. When using the ``in``
operator on a table, you are effectively working with the keys of the table.
In the case of an ``if`` statement, the ``in`` operator will check for
membership among the set of keys and return a true or false value.
The example shows how to check if ``SMTPS`` is not in the set
of keys for the ``ssl_services`` table and if the condition holds true,
we add the key-yield pair to the table. Finally, the example shows how
to use a ``for`` statement to iterate over each key currently in the table.
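To make that concrete, here is a minimal sketch of the same pattern
(the port numbers are illustrative):

.. code:: bro

    event bro_init()
        {
        # Explicit declaration and constructor: string keys, port yields.
        local ssl_services: table[string] of port =
            table(["SSH"] = 22/tcp, ["HTTPS"] = 443/tcp);

        # Insert a single key-yield pair via a table accessor.
        ssl_services["IMAPS"] = 993/tcp;

        # `in` works against the keys; add SMTPS only if it's missing.
        if ( "SMTPS" !in ssl_services )
            ssl_services["SMTPS"] = 587/tcp;

        # Iterate over the keys currently in the table.
        for ( k in ssl_services )
            print fmt("%s: %s", k, ssl_services[k]);
        }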
Simple examples aside, tables can become extremely complex as the keys
and values for the table become more intricate. Tables can have keys
comprised of multiple data types and even a series of elements called
@ -535,9 +550,15 @@ Bro implies a cost in complexity for the person writing the scripts
but pays off in effectiveness given the power of Bro as a network
security platform.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_complex.bro
.. btest:: data_struct_table_complex
@TEST-EXEC: btest-rst-cmd bro -b ${DOC_ROOT}/scripting/data_struct_table_complex.bro
This script shows a sample table of strings indexed by two
strings, a count, and a final string. With a tuple acting as an
aggregate key, the order is important as a change in order would
result in a new key. Here, we're using the table to track the
director, studio, year of release, and lead actor in a series of
samurai flicks. It's important to note that in the case of the ``for``
@ -546,14 +567,9 @@ iterate over, say, the directors; we have to iterate with the exact
format as the keys themselves. In this case, we need square brackets
surrounding four temporary variables to act as a collection for our
iteration. While this is a contrived example, we could easily have
had keys containing IP addresses (``addr``), ports (``port``) and even
a ``string`` calculated as the result of a reverse hostname lookup.
Vectors
~~~~~~~
@ -657,7 +673,7 @@ using a 20 bit subnet mask.
Because this is a script that doesn't use any kind of network
analysis, we can handle the event :bro:id:`bro_init` which is always
generated by Bro's core upon startup. In the example script, two
locally scoped vectors are created to hold our lists of subnets and IP
addresses respectively. Then, using a set of nested ``for`` loops, we
iterate over every subnet and every IP address and use an ``if``
@ -714,7 +730,7 @@ Bro supports ``usec``, ``msec``, ``sec``, ``min``, ``hr``, or ``day`` which repr
microseconds, milliseconds, seconds, minutes, hours, and days
respectively. In fact, the interval data type allows for a surprising
amount of variation in its definitions. There can be a space between
the numeric constant and the time unit, or they can be crammed together like a temporal
portmanteau. The time unit can be either singular or plural. All of
this adds up to the fact that both ``42hrs`` and ``42 hr`` are
perfectly valid and logically equivalent in Bro. The point, however,
@ -760,7 +776,7 @@ string against which it will be tested to be on the right.
In the sample above, two local variables are declared to hold our
sample sentence and regular expression. Our regular expression in
this case will return true if the string contains either the word
``quick`` or the word ``fox``. The ``if`` statement in the script uses
embedded matching and the ``in`` operator to check for the existence
of the pattern within the string. If the statement resolves to true,
:bro:id:`split` is called to break the string into separate pieces.
@ -768,8 +784,8 @@ of the pattern within the string. If the statement resolves to true,
table of strings indexed by a count. Each element of the table will
be the segments before and after any matches against the pattern but
excluding the actual matches. In this case, our pattern matches
twice, and results in a table with three entries. Lines 11 through 13
print the contents of the table in order.
twice, and results in a table with three entries. The ``print`` statements
in the script will print the contents of the table in order.
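A compact sketch of the behavior just described (the sentence and
pattern are arbitrary):

.. code:: bro

    event bro_init()
        {
        local sentence = "The quick brown fox jumps over the lazy dog.";
        local re = /quick|fox/;

        # Embedded match: true if the pattern occurs anywhere within.
        if ( re in sentence )
            {
            # Two matches yield a table with three entries: the
            # segments around the matches, matches excluded.
            local parts = split(sentence, re);
            print parts[1];
            print parts[2];
            print parts[3];
            }
        }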
.. btest:: data_type_pattern
@ -780,7 +796,7 @@ inequality operators through the ``==`` and ``!=`` operators
respectively. When used in this manner, however, the string must match
entirely to resolve to true. For example, the script below uses two
ternary conditional statements to illustrate the use of the ``==``
operator with patterns. The output is altered based
on the result of the comparison between the pattern and the string.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_pattern_02.bro
@ -803,7 +819,7 @@ with the ``typedef`` and ``struct`` keywords, Bro allows you to cobble
together new data types to suit the needs of your situation.
When combined with the ``type`` keyword, ``record`` can generate a
composite type. We have, in fact, already encountered a complex
example of the ``record`` data type in the earlier sections, the
:bro:type:`connection` record passed to many events. Another one,
:bro:type:`Conn::Info`, which corresponds to the fields logged into
@ -915,11 +931,7 @@ through a contrived example of simply logging the digits 1 through 10
and their corresponding factorial to the default ASCII log writer.
It's always best to work through the problem once, simulating the
desired output with ``print`` and ``fmt`` before attempting to dive
into the Logging Framework.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
@ -927,19 +939,28 @@ calculations correctly as well get an idea of the answers ourselves.
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
This script defines a factorial function to recursively calculate the
factorial of an unsigned integer passed as an argument to the function. Using
``print`` and :bro:id:`fmt` we can ensure that Bro can perform these
calculations correctly as well as get an idea of the answers ourselves.
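For reference, the recursion can be sketched like this (the included
script remains the authoritative version):

.. code:: bro

    function factorial(n: count): count
        {
        # 0! is 1; otherwise recurse toward the base case.
        if ( n == 0 )
            return 1;

        return n * factorial(n - 1);
        }

    event bro_init()
        {
        print fmt("10! = %d", factorial(10));
        }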
The output of the script aligns with what we expect so now it's time
to integrate the Logging Framework.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_02.bro
As mentioned above, we have to perform a few steps before we can
call the :bro:id:`Log::write` method and produce a logfile.
As we are working within a namespace and informing an outside
entity of workings and data internal to the namespace, we use
an ``export`` block. First we need to inform Bro
that we are going to be adding another Log Stream by adding a value to
the :bro:type:`Log::ID` enumerable. In this script, we append the
value ``LOG`` to the ``Log::ID`` enumerable, however due to this being in
an export block the value appended to ``Log::ID`` is actually
``Factor::Log``. Next, we need to define the name and value pairs
that make up the data of our logs and dictate its format. This script
defines a new record datatype called ``Info`` (actually,
``Factor::Info``) with two fields, both unsigned integers. Each of the
fields in the ``Factor::Info`` record type includes the ``&log``
attribute, indicating that these fields should be passed to the
@ -947,16 +968,14 @@ Logging Framework when ``Log::write`` is called. Were there to be
any name-value pairs without the ``&log`` attribute, those fields
would simply be ignored during logging but remain available for the
lifespan of the variable. The next step is to create the logging
stream with :bro:id:`Log::create_stream` which takes a ``Log::ID`` and a
record as its arguments. In this example, we call the
``Log::create_stream`` method and pass ``Factor::LOG`` and the
``Factor::Info`` record as arguments. From here on out, if we issue
the ``Log::write`` command with the correct ``Log::ID`` and a properly
formatted ``Factor::Info`` record, a log entry will be generated.
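Condensed, the steps described above amount to something like the
following sketch (field names are illustrative; the included script
defines the real ones):

.. code:: bro

    module Factor;

    export {
        # Appended inside the export block, LOG becomes Factor::LOG.
        redef enum Log::ID += { LOG };

        # The name and value pairs of a log entry; &log marks the
        # fields handed to the Logging Framework by Log::write.
        type Info: record {
            num:           count &log;
            factorial_num: count &log;
        };
    }

    event bro_init()
        {
        # Register the stream; from here on, Log::write(Factor::LOG, ...)
        # with a properly formatted Factor::Info record creates entries.
        Log::create_stream(LOG, [$columns=Info]);
        }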
Now, if we run this script, instead of generating
logging information to stdout, no output is created. Instead the
output is all in ``factor.log``, properly formatted and organized.
@ -995,13 +1014,13 @@ remaining logs to factor.log.
:lines: 38-62
:linenos:
To dynamically alter the file in which a stream writes its logs, a
filter can specify a function that returns a string to be used as the
filename for the current call to ``Log::write``. The definition for
this function has to take as its parameters a ``Log::ID`` called ``id``, a
string called ``path`` and the appropriate record type for the logs called
``rec``. You can see the definition of ``mod5`` used in this example
conforms to that requirement. The function simply returns
``factor-mod5`` if the factorial is divisible evenly by 5, otherwise, it
returns ``factor-non5``. In the additional ``bro_init`` event
handler, we define a locally scoped ``Log::Filter`` and assign it a
@ -1074,7 +1093,8 @@ make a call to :bro:id:`NOTICE` supplying it with an appropriate
:bro:type:`Notice::Info` record. Oftentimes the call to ``NOTICE``
includes just the ``Notice::Type``, and a concise message. There are
however, significantly more options available when raising notices as
seen in the definition of :bro:type:`Notice::Info`. The only field in
``Notice::Info`` whose
attributes make it a required field is the ``note`` field. Still,
good manners are always important and including a concise message in
``$msg`` and, where necessary, the contents of the connection record
@ -1086,57 +1106,6 @@ that are commonly included, ``$identifier`` and ``$suppress_for`` are
built around the automated suppression feature of the Notice Framework
which we will cover shortly.
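As a sketch, raising such a notice follows this shape (the notice type
and message here are invented for illustration):

.. code:: bro

    module Example;

    export {
        redef enum Notice::Type += { Suspicious_Login };
    }

    event bro_init()
        {
        # $note is the only required field; $msg, $identifier and
        # $suppress_for are the commonly supplied extras noted above.
        NOTICE([$note=Suspicious_Login,
                $msg="example notice raised for illustration",
                $identifier="example-identifier",
                $suppress_for=10min]);
        }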
One of the default policy scripts raises a notice when an SSH login
has been heuristically detected and the originating hostname is one
that would raise suspicion. Effectively, the script attempts to
@ -1153,15 +1122,15 @@ possible while staying concise.
While much of the script relates to the actual detection, the parts
specific to the Notice Framework are actually quite interesting in
themselves. The script's ``export`` block adds the value
``SSH::Interesting_Hostname_Login`` to the enumerable constant
``Notice::Type`` to indicate to the Bro core that a new type of notice
is being defined. The script then calls ``NOTICE`` and defines the
``$note``, ``$msg``, ``$sub`` and ``$conn`` fields of the
:bro:type:`Notice::Info` record. There are two ternary if
statements that modify the ``$msg`` text depending on whether the
host is a local address and whether it is the client or the server.
This use of :bro:id:`fmt` and ternary operators is a concise way to
lend readability to the notices that are generated without the need
for branching ``if`` statements that each raise a specific notice.
@ -1222,7 +1191,7 @@ from the connection relative to the behavior that has been observed by
Bro.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
:lines: 64-68
In the :doc:`/scripts/policy/protocols/ssl/expiring-certs.bro` script
which identifies when SSL certificates are set to expire and raises
@ -1302,9 +1271,9 @@ in the call to ``NOTICE``.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_01.bro
The Notice Policy shortcut above adds the ``Notice::Type`` of
``SSH::Interesting_Hostname_Login`` to the
``Notice::emailed_types`` set while the shortcut below alters the length
of time for which those notices will be suppressed.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_02.bro
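Condensed, the two shortcuts amount to redefs of the following shape
(the interval value is arbitrary):

.. code:: bro

    # Email notices of this type ...
    redef Notice::emailed_types += { SSH::Interesting_Hostname_Login };

    # ... and suppress repeats of it for a day.
    redef Notice::type_suppression_intervals += {
        [SSH::Interesting_Hostname_Login] = 1day,
    };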


@ -181,6 +181,15 @@ export {
tag: Files::Tag,
args: AnalyzerArgs &default=AnalyzerArgs()): bool;
## Adds all analyzers associated with a given MIME type to the analysis of
## a file. Note that analyzers added via MIME types cannot take further
## arguments.
##
## f: the file.
##
## mtype: the MIME type; it will be compared case-insensitively.
global add_analyzers_for_mime_type: function(f: fa_file, mtype: string);
## Removes an analyzer from the analysis of a given file.
##
## f: the file.
@ -253,6 +262,42 @@ export {
## callback: Function to execute when the given file analyzer is being added.
global register_analyzer_add_callback: function(tag: Files::Tag, callback: function(f: fa_file, args: AnalyzerArgs));
## Registers a set of MIME types for an analyzer. If a future file with one of
## these types is seen, the analyzer will be automatically assigned to parsing it.
## The function *adds* to all MIME types already registered; it doesn't replace
## them.
##
## tag: The tag of the analyzer.
##
## mts: The set of MIME types, each in the form "foo/bar" (case-insensitive).
##
## Returns: True if the MIME types were successfully registered.
global register_for_mime_types: function(tag: Analyzer::Tag, mts: set[string]) : bool;
## Registers a MIME type for an analyzer. If a future file with this type is seen,
## the analyzer will be automatically assigned to parsing it. The function *adds*
## to all MIME types already registered; it doesn't replace them.
##
## tag: The tag of the analyzer.
##
## mt: The MIME type in the form "foo/bar" (case-insensitive).
##
## Returns: True if the MIME type was successfully registered.
global register_for_mime_type: function(tag: Analyzer::Tag, mt: string) : bool;
## Returns a set of all MIME types currently registered for a specific analyzer.
##
## tag: The tag of the analyzer.
##
## Returns: The set of MIME types.
global registered_mime_types: function(tag: Analyzer::Tag) : set[string];
## Returns a table of all MIME-type-to-analyzer mappings currently registered.
##
## Returns: A table mapping each analyzer to the set of MIME types
## registered for it.
global all_registered_mime_types: function() : table[Analyzer::Tag] of set[string];
## Event that can be handled to access the Info record as it is sent on
## to the logging framework.
global log_files: event(rec: Info);
@ -265,6 +310,9 @@ redef record fa_file += {
# Store the callbacks for protocol analyzers that have files.
global registered_protocols: table[Analyzer::Tag] of ProtoRegistration = table();
# Store the MIME type to analyzer mappings.
global mime_types: table[Analyzer::Tag] of set[string];
global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: AnalyzerArgs) = table();
event bro_init() &priority=5
@ -332,6 +380,15 @@ function add_analyzer(f: fa_file, tag: Files::Tag, args: AnalyzerArgs): bool
return T;
}
function add_analyzers_for_mime_type(f: fa_file, mtype: string)
{
local dummy_args: AnalyzerArgs;
local analyzers = __add_analyzers_for_mime_type(f$id, mtype, dummy_args);
for ( tag in analyzers )
add f$info$analyzers[Files::analyzer_name(tag)];
}
function register_analyzer_add_callback(tag: Files::Tag, callback: function(f: fa_file, args: AnalyzerArgs))
{
analyzer_add_callbacks[tag] = callback;
@ -356,6 +413,9 @@ event file_new(f: fa_file) &priority=10
{
set_info(f);
if ( f?$mime_type )
add_analyzers_for_mime_type(f, f$mime_type);
if ( enable_reassembler )
{
Files::enable_reassembly(f);
@ -405,6 +465,41 @@ function register_protocol(tag: Analyzer::Tag, reg: ProtoRegistration): bool
return result;
}
function register_for_mime_types(tag: Analyzer::Tag, mime_types: set[string]) : bool
{
local rc = T;
for ( mt in mime_types )
{
if ( ! register_for_mime_type(tag, mt) )
rc = F;
}
return rc;
}
function register_for_mime_type(tag: Analyzer::Tag, mt: string) : bool
{
if ( ! __register_for_mime_type(tag, mt) )
return F;
if ( tag !in mime_types )
mime_types[tag] = set();
add mime_types[tag][mt];
return T;
}
function registered_mime_types(tag: Analyzer::Tag) : set[string]
{
return tag in mime_types ? mime_types[tag] : set();
}
function all_registered_mime_types(): table[Analyzer::Tag] of set[string]
{
return mime_types;
}
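# As a hypothetical usage sketch of the registration functions above
# (the analyzer tag and MIME types are examples only):
#
#     event bro_init()
#         {
#         Files::register_for_mime_types(Files::ANALYZER_MD5,
#             set("application/x-dosexec", "application/pdf"));
#         }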
function describe(f: fa_file): string
{
local tag = Analyzer::get_tag(f$source);


@ -4,6 +4,17 @@
module Input;
export {
## Type that describes what kind of change occurred to imported data.
type Event: enum {
## New data has been imported.
EVENT_NEW = 0,
## Existing data has been changed.
EVENT_CHANGED = 1,
## Previously existing data has been removed.
EVENT_REMOVED = 2,
};
## Type that defines the input stream read mode.
type Mode: enum {
## Do not automatically reread the file after it has been read.
MANUAL = 0,
## Reread the entire file each time a change is found.
REREAD = 1,
## Read data from the end of the file each time new data is appended.
STREAM = 2
};
## The default input reader used. Defaults to ``READER_ASCII``.
const default_reader = READER_ASCII &redef;
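# A minimal usage sketch of these modes (path and record types are
# hypothetical): re-read a whole file into a table whenever it changes.
#
#     type Idx: record { ip: addr; };
#     type Val: record { reason: string; };
#     global blacklist: table[addr] of Val = table();
#
#     event bro_init()
#         {
#         Input::add_table([$source="/tmp/blacklist.file", $name="blacklist",
#                           $idx=Idx, $val=Val, $destination=blacklist,
#                           $mode=Input::REREAD]);
#         }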


@ -1,7 +1,5 @@
@load ./main
@load ./postprocessors
@load ./writers/ascii
@load ./writers/dataseries
@load ./writers/sqlite
@load ./writers/elasticsearch
@load ./writers/none


@ -5,9 +5,15 @@
module Log;
# Log::ID and Log::Writer are defined in types.bif due to circular dependencies.
export {
## Type that defines an ID unique to each log stream. Scripts creating new log
## streams need to redef this enum to add their own specific log ID. The log ID
## implicitly determines the default name of the generated log file.
type Log::ID: enum {
## Dummy place-holder.
UNKNOWN
};
## If true, local logging is by default enabled for all filters.
const enable_local_logging = T &redef;


@ -26,9 +26,9 @@ export {
## This option is also available as a per-filter ``$config`` option.
const use_json = F &redef;
## Format of timestamps when writing out JSON. By default, the JSON
## formatter will use double values for timestamps which represent the
## number of seconds from the UNIX epoch.
const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;
## If true, include lines with log meta information such as column names


@ -1,60 +0,0 @@
##! Interface for the DataSeries log writer.
module LogDataSeries;
export {
## Compression to use with the DS output file. Options are:
##
## 'none' -- No compression.
## 'lzf' -- LZF compression (very quick, but leads to larger output files).
## 'lzo' -- LZO compression (very fast decompression times).
## 'gz' -- GZIP compression (slower than LZF, but also produces smaller output).
## 'bz2' -- BZIP2 compression (slower than GZIP, but also produces smaller output).
const compression = "gz" &redef;
## The extent buffer size.
## Larger values here lead to better compression and more efficient writes,
## but also increase the lag between the time events are received and
## the time they are actually written to disk.
const extent_size = 65536 &redef;
## Should we dump the XML schema we use for this DS file to disk?
## If yes, the XML schema shares the name of the logfile, but has
## an XML ending.
const dump_schema = F &redef;
## How many threads should DataSeries spawn to perform compression?
## Note that this dictates the number of threads per log stream. If
## you're using a lot of streams, you may want to keep this number
## relatively small.
##
## Default value is 1, which will spawn one thread / stream.
##
## Maximum is 128, minimum is 1.
const num_threads = 1 &redef;
## Should time be stored as an integer or a double?
## Storing time as a double leads to possible precision issues and
## can (significantly) increase the size of the resulting DS log.
## That said, timestamps stored in double form are consistent
## with the rest of Bro, including the standard ASCII log. Hence, we
## use them by default.
const use_integer_for_time = F &redef;
}
# Default function to postprocess a rotated DataSeries log file. It moves the
# rotated file to a new name that includes a timestamp with the opening time,
# and then runs the writer's default postprocessor command on it.
function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool
{
# Move file to name including both opening and closing time.
local dst = fmt("%s.%s.ds", info$path,
strftime(Log::default_rotation_date_format, info$open));
system(fmt("/bin/mv %s %s", info$fname, dst));
# Run default postprocessor.
return Log::run_rotation_postprocessor_cmd(info, dst);
}
redef Log::default_rotation_postprocessors += { [Log::WRITER_DATASERIES] = default_rotation_postprocessor_func };


@ -1,48 +0,0 @@
##! Log writer for sending logs to an ElasticSearch server.
##!
##! Note: This module is in testing and is not yet considered stable!
##!
##! There is one known memory issue. If your elasticsearch server is
##! running slowly and taking too long to return from bulk insert
##! requests, the message queue to the writer thread will continue
##! growing larger and larger giving the appearance of a memory leak.
module LogElasticSearch;
export {
## Name of the ES cluster.
const cluster_name = "elasticsearch" &redef;
## ES server.
const server_host = "127.0.0.1" &redef;
## ES port.
const server_port = 9200 &redef;
## Name of the ES index.
const index_prefix = "bro" &redef;
## The ES type prefix comes before the name of the related log.
## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout. Note that
## the fractional part of the timeout will be ignored. In particular,
## time specifications less than a second result in a timeout value of
## 0, which means "no timeout."
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before
## they are sent to be bulk indexed.
const max_batch_size = 1000 &redef;
## The maximum amount of wall-clock time that is allowed to pass without
## finishing a bulk log send. This represents the maximum delay you
## would like to have with your logs before they are sent to ElasticSearch.
const max_batch_interval = 1min &redef;
## The maximum byte size for a buffered JSON string to send to the bulk
## insert API.
const max_byte_size = 1024 * 1024 &redef;
}


@ -20,7 +20,8 @@ export {
## category along with the specific notice separating words with
## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example,
## SSH::Password_Guessing is for hosts that have crossed a threshold of
## heuristically determined failed SSH logins.
type Type: enum {
## Notice reporting a count of how often a notice occurred.
Tally,


@ -71,7 +71,7 @@ export {
## to be logged has occurred.
ts: time &log;
## A unique identifier of the connection which triggered the
## signature match event
## signature match event.
uid: string &log &optional;
## The host which triggered the signature match event.
src_addr: addr &log &optional;


@ -75,6 +75,13 @@ type addr_vec: vector of addr;
## directly and then remove this alias.
type table_string_of_string: table[string] of string;
## A set of file analyzer tags.
##
## .. todo:: We need this type definition only for declaring builtin functions
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
## directly and then remove this alias.
type files_tag_set: set[Files::Tag];
## A structure indicating a MIME type and strength of a match against
## file magic signatures.
##
@ -2479,8 +2486,7 @@ type http_message_stat: record {
header_length: count;
};
## Maximum number of HTTP entity data delivered to events.
##
## .. bro:see:: http_entity_data skip_http_entity_data skip_http_data
global http_entity_data_delivery_size = 1500 &redef;
@ -2732,6 +2738,7 @@ type ModbusRegisters: vector of count;
type ModbusHeaders: record {
tid: count;
pid: count;
len: count;
uid: count;
function_code: count;
};
@ -3357,9 +3364,6 @@ const global_hash_seed: string = "" &redef;
## The maximum is currently 128 bits.
const bits_per_uid: count = 96 &redef;
# Load BiFs defined by plugins.
@load base/bif/plugins
# Load these frameworks here because they use fairly deep integration with
# BiFs and script-land defined types.
@load base/frameworks/logging
@ -3368,3 +3372,7 @@ const bits_per_uid: count = 96 &redef;
@load base/frameworks/files
@load base/bif
# Load BiFs defined by plugins.
@load base/bif/plugins


@ -47,13 +47,13 @@ redef record connection += {
const ports = { 67/udp, 68/udp };
redef likely_server_ports += { 67/udp };
event bro_init()
event bro_init() &priority=5
{
Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DHCP, ports);
}
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string)
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string) &priority=5
{
local info: Info;
info$ts = network_time();
@ -71,6 +71,9 @@ event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_lis
info$assigned_ip = c$id$orig_h;
c$dhcp = info;
}
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string) &priority=-5
{
Log::write(DHCP::LOG, c$dhcp);
}


@ -142,7 +142,10 @@ function set_smtp_session(c: connection)
function smtp_message(c: connection)
{
if ( c$smtp$has_client_activity )
{
Log::write(SMTP::LOG, c$smtp);
c$smtp = new_smtp_log(c);
}
}
event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &priority=5
@ -150,9 +153,6 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
set_smtp_session(c);
local upper_command = to_upper(command);
if ( upper_command != "QUIT" )
c$smtp$has_client_activity = T;
if ( upper_command == "HELO" || upper_command == "EHLO" )
{
c$smtp_state$helo = arg;
@ -164,12 +164,17 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
if ( ! c$smtp?$rcptto )
c$smtp$rcptto = set();
add c$smtp$rcptto[split1(arg, /:[[:blank:]]*/)[2]];
c$smtp$has_client_activity = T;
}
else if ( upper_command == "MAIL" && /^[fF][rR][oO][mM]:/ in arg )
{
# Flush last message in case we didn't see the server's acknowledgement.
smtp_message(c);
local partially_done = split1(arg, /:[[:blank:]]*/)[2];
c$smtp$mailfrom = split1(partially_done, /[[:blank:]]?/)[1];
c$smtp$has_client_activity = T;
}
}
@ -198,7 +203,6 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
event mime_one_header(c: connection, h: mime_header_rec) &priority=5
{
if ( ! c?$smtp ) return;
c$smtp$has_client_activity = T;
if ( h$name == "MESSAGE-ID" )
c$smtp$msg_id = h$value;
@ -281,7 +285,10 @@ event connection_state_remove(c: connection) &priority=-5
event smtp_starttls(c: connection) &priority=5
{
if ( c?$smtp )
{
c$smtp$tls = T;
c$smtp$has_client_activity = T;
}
}
function describe(rec: Info): string


@ -26,6 +26,21 @@ export {
const V2_CLIENT_MASTER_KEY = 302;
const V2_SERVER_HELLO = 304;
## TLS Handshake types:
const HELLO_REQUEST = 0;
const CLIENT_HELLO = 1;
const SERVER_HELLO = 2;
const SESSION_TICKET = 4; # RFC 5077
const CERTIFICATE = 11;
const SERVER_KEY_EXCHANGE = 12;
const CERTIFICATE_REQUEST = 13;
const SERVER_HELLO_DONE = 14;
const CERTIFICATE_VERIFY = 15;
const CLIENT_KEY_EXCHANGE = 16;
const FINISHED = 20;
const CERTIFICATE_URL = 21; # RFC 3546
const CERTIFICATE_STATUS = 22; # RFC 3546
## Mapping between numeric codes and human readable strings for alert
## levels.
const alert_levels: table[count] of string = {
@ -94,6 +109,10 @@ export {
[16] = "application_layer_protocol_negotiation",
[17] = "status_request_v2",
[18] = "signed_certificate_timestamp",
[19] = "client_certificate_type",
[20] = "server_certificate_type",
[21] = "padding", # temporary till 2015-03-12
[22] = "encrypt_then_mac", # temporary till 2015-06-05
[35] = "SessionTicket TLS",
[40] = "extended_random",
[13172] = "next_protocol_negotiation",


@ -82,7 +82,7 @@ event bro_init() &priority=5
++lb_proc_track[that_node$ip, that_node$interface];
if ( total_lb_procs > 1 )
{
that_node$lb_filter = PacketFilter::sample_filter(total_lb_procs, this_lb_proc);
that_node$lb_filter = PacketFilter::sampling_filter(total_lb_procs, this_lb_proc);
Communication::nodes[no]$capture_filter = that_node$lb_filter;
}
}


@ -39,27 +39,31 @@ event ssl_established(c: connection) &priority=3
# If there are no certificates or we are not interested in the server, just return.
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! addr_matches_host(c$id$resp_h, notify_certs_expiration) ||
! c$ssl$cert_chain[0]?$x509 )
! c$ssl$cert_chain[0]?$x509 || ! c$ssl$cert_chain[0]?$sha1 )
return;
local fuid = c$ssl$cert_chain_fuids[0];
local cert = c$ssl$cert_chain[0]$x509$certificate;
local hash = c$ssl$cert_chain[0]$sha1;
if ( cert$not_valid_before > network_time() )
NOTICE([$note=Certificate_Not_Valid_Yet,
$conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s isn't valid until %T", cert$subject, cert$not_valid_before),
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
else if ( cert$not_valid_after < network_time() )
NOTICE([$note=Certificate_Expired,
$conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s expired at %T", cert$subject, cert$not_valid_after),
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
else if ( cert$not_valid_after - notify_when_cert_expiring_in < network_time() )
NOTICE([$note=Certificate_Expires_Soon,
$msg=fmt("Certificate %s is going to expire at %T", cert$subject, cert$not_valid_after),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
}


@ -1,36 +0,0 @@
##! Load this script to enable global log output to an ElasticSearch database.
module LogElasticSearch;
export {
## An elasticsearch specific rotation interval.
const rotation_interval = 3hr &redef;
## Optionally ignore any :bro:type:`Log::ID` from being sent to
## ElasticSearch with this script.
const excluded_log_ids: set[Log::ID] &redef;
## If you want to explicitly only send certain :bro:type:`Log::ID`
## streams, add them to this set. If the set remains empty, all will
## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option
## will remain in effect as well.
const send_logs: set[Log::ID] &redef;
}
event bro_init() &priority=-5
{
if ( server_host == "" )
return;
for ( stream_id in Log::active_streams )
{
if ( stream_id in excluded_log_ids ||
(|send_logs| > 0 && stream_id !in send_logs) )
next;
local filter: Log::Filter = [$name = "default-es",
$writer = Log::WRITER_ELASTICSEARCH,
$interv = LogElasticSearch::rotation_interval];
Log::add_filter(stream_id, filter);
}
}


@ -98,7 +98,4 @@
@load tuning/defaults/packet-fragments.bro
@load tuning/defaults/warnings.bro
@load tuning/json-logs.bro
@load tuning/logs-to-elasticsearch.bro
@load tuning/track-all-assets.bro
redef LogElasticSearch::server_host = "";


@ -8,6 +8,7 @@ set(bro_ALL_GENERATED_OUTPUTS CACHE INTERNAL "automatically generated files" FO
# This collects bif inputs that we'll load automatically.
set(bro_AUTO_BIFS CACHE INTERNAL "BIFs for automatic inclusion" FORCE)
set(bro_REGISTER_BIFS CACHE INTERNAL "BIFs for automatic registering" FORCE)
set(bro_BASE_BIF_SCRIPTS CACHE INTERNAL "Bro script stubs for BIFs in base distribution of Bro" FORCE)
set(bro_PLUGIN_BIF_SCRIPTS CACHE INTERNAL "Bro script stubs for BIFs in Bro plugins" FORCE)
@ -117,8 +118,6 @@ include(BifCl)
set(BIF_SRCS
bro.bif
logging.bif
input.bif
event.bif
const.bif
types.bif
@ -155,21 +154,25 @@ set(bro_SUBDIR_LIBS CACHE INTERNAL "subdir libraries" FORCE)
set(bro_PLUGIN_LIBS CACHE INTERNAL "plugin libraries" FORCE)
add_subdirectory(analyzer)
add_subdirectory(file_analysis)
add_subdirectory(probabilistic)
add_subdirectory(broxygen)
add_subdirectory(file_analysis)
add_subdirectory(input)
add_subdirectory(iosource)
add_subdirectory(logging)
add_subdirectory(probabilistic)
set(bro_SUBDIRS
${bro_SUBDIR_LIBS}
# Order is important here.
${bro_PLUGIN_LIBS}
${bro_SUBDIR_LIBS}
)
if ( NOT bro_HAVE_OBJECT_LIBRARIES )
foreach (_plugin ${bro_PLUGIN_LIBS})
string(REGEX REPLACE "plugin-" "" _plugin "${_plugin}")
string(REGEX REPLACE "-" "_" _plugin "${_plugin}")
set(_decl "namespace plugin { namespace ${_plugin} { class Plugin; extern Plugin __plugin; } };")
set(_use "i += (size_t)(&(plugin::${_plugin}::__plugin));")
set(_decl "namespace plugin { namespace ${_plugin} { class Plugin; extern Plugin plugin; } };")
set(_use "i += (size_t)(&(plugin::${_plugin}::plugin));")
set(__BRO_DECL_PLUGINS "${__BRO_DECL_PLUGINS}${_decl}\n")
set(__BRO_USE_PLUGINS "${__BRO_USE_PLUGINS}${_use}\n")
endforeach()
@ -252,7 +255,6 @@ set(bro_SRCS
Anon.cc
Attr.cc
Base64.cc
BPF_Program.cc
Brofiler.cc
BroString.cc
CCL.cc
@ -277,14 +279,12 @@ set(bro_SRCS
EventRegistry.cc
Expr.cc
File.cc
FlowSrc.cc
Frag.cc
Frame.cc
Func.cc
Hash.cc
ID.cc
IntSet.cc
IOSource.cc
IP.cc
IPAddr.cc
List.cc
@ -297,7 +297,6 @@ set(bro_SRCS
OSFinger.cc
PacketFilter.cc
PersistenceSerializer.cc
PktSrc.cc
PolicyFile.cc
PrefixTable.cc
PriorityQueue.cc
@ -346,24 +345,6 @@ set(bro_SRCS
threading/formatters/Ascii.cc
threading/formatters/JSON.cc
logging/Manager.cc
logging/WriterBackend.cc
logging/WriterFrontend.cc
logging/writers/Ascii.cc
logging/writers/DataSeries.cc
logging/writers/SQLite.cc
logging/writers/ElasticSearch.cc
logging/writers/None.cc
input/Manager.cc
input/ReaderBackend.cc
input/ReaderFrontend.cc
input/readers/Ascii.cc
input/readers/Raw.cc
input/readers/Benchmark.cc
input/readers/Binary.cc
input/readers/SQLite.cc
3rdparty/sqlite3.c
plugin/Component.cc
@ -371,7 +352,6 @@ set(bro_SRCS
plugin/TaggedComponent.h
plugin/Manager.cc
plugin/Plugin.cc
plugin/Macros.h
nb_dns.c
digest.h
@ -387,22 +367,31 @@ else ()
target_link_libraries(bro ${bro_SUBDIRS} ${brodeps} ${CMAKE_THREAD_LIBS_INIT} ${CMAKE_DL_LIBS})
endif ()
if ( NOT "${bro_LINKER_FLAGS}" STREQUAL "" )
set_target_properties(bro PROPERTIES LINK_FLAGS "${bro_LINKER_FLAGS}")
endif ()
install(TARGETS bro DESTINATION bin)
set(BRO_EXE bro
CACHE STRING "Bro executable binary" FORCE)
set(BRO_EXE_PATH ${CMAKE_CURRENT_BINARY_DIR}/bro
CACHE STRING "Path to Bro executable binary" FORCE)
# Target to create all the autogenerated files.
add_custom_target(generate_outputs_stage1)
add_dependencies(generate_outputs_stage1 ${bro_ALL_GENERATED_OUTPUTS})
# Target to create the joint includes files that pull in the bif code.
bro_bif_create_includes(generate_outputs_stage2 ${CMAKE_CURRENT_BINARY_DIR} "${bro_AUTO_BIFS}")
add_dependencies(generate_outputs_stage2 generate_outputs_stage1)
bro_bif_create_includes(generate_outputs_stage2a ${CMAKE_CURRENT_BINARY_DIR} "${bro_AUTO_BIFS}")
bro_bif_create_register(generate_outputs_stage2b ${CMAKE_CURRENT_BINARY_DIR} "${bro_REGISTER_BIFS}")
add_dependencies(generate_outputs_stage2a generate_outputs_stage1)
add_dependencies(generate_outputs_stage2b generate_outputs_stage1)
# Global target to trigger creation of autogenerated code.
add_custom_target(generate_outputs)
add_dependencies(generate_outputs generate_outputs_stage2)
add_dependencies(generate_outputs generate_outputs_stage2a generate_outputs_stage2b)
# Build __load__.bro files for standard *.bif.bro.
bro_bif_create_loader(bif_loader "${bro_BASE_BIF_SCRIPTS}")


@ -35,6 +35,7 @@
#include "Net.h"
#include "Var.h"
#include "Reporter.h"
#include "iosource/Manager.h"
extern "C" {
extern int select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
@ -404,17 +405,17 @@ DNS_Mgr::~DNS_Mgr()
delete [] dir;
}
bool DNS_Mgr::Init()
void DNS_Mgr::InitPostScript()
{
if ( did_init )
return true;
return;
const char* cache_dir = dir ? dir : ".";
if ( mode == DNS_PRIME && ! ensure_dir(cache_dir) )
{
did_init = 0;
return false;
return;
}
cache_name = new char[strlen(cache_dir) + 64];
@ -433,14 +434,12 @@ bool DNS_Mgr::Init()
did_init = 1;
io_sources.Register(this, true);
iosource_mgr->Register(this, true);
// We never set idle to false, having the main loop only calling us from
// time to time. If we're issuing more DNS requests than we can handle
// in this way, we are having problems anyway ...
idle = true;
return true;
SetIdle(true);
}
static TableVal* fake_name_lookup_result(const char* name)


@ -12,7 +12,7 @@
#include "BroList.h"
#include "Dict.h"
#include "EventHandler.h"
#include "IOSource.h"
#include "iosource/IOSource.h"
#include "IPAddr.h"
class Val;
@ -40,12 +40,12 @@ enum DNS_MgrMode {
// Number of seconds we'll wait for a reply.
#define DNS_TIMEOUT 5
class DNS_Mgr : public IOSource {
class DNS_Mgr : public iosource::IOSource {
public:
DNS_Mgr(DNS_MgrMode mode);
virtual ~DNS_Mgr();
bool Init();
void InitPostScript();
void Flush();
// Looks up the address or addresses of the given host, and returns


@ -5,6 +5,7 @@
#include "DebugLogger.h"
#include "Net.h"
#include "plugin/Plugin.h"
DebugLogger debug_logger("debug");
@ -17,7 +18,8 @@ DebugLogger::Stream DebugLogger::streams[NUM_DBGS] = {
{ "dpd", 0, false }, { "tm", 0, false },
{ "logging", 0, false }, {"input", 0, false },
{ "threading", 0, false }, { "file_analysis", 0, false },
{ "plugins", 0, false }, { "broxygen", 0, false }
{ "plugins", 0, false }, { "broxygen", 0, false },
{ "pktio", 0, false}
};
DebugLogger::DebugLogger(const char* filename)
@ -73,10 +75,12 @@ void DebugLogger::EnableStreams(const char* s)
{
if ( strcasecmp("verbose", tok) == 0 )
verbose = true;
else
else if ( strncmp(tok, "plugin-", 7) != 0 )
reporter->FatalError("unknown debug stream %s\n", tok);
}
enabled_streams.insert(tok);
tok = strtok(0, ",");
}
@ -105,4 +109,24 @@ void DebugLogger::Log(DebugStream stream, const char* fmt, ...)
fflush(file);
}
void DebugLogger::Log(const plugin::Plugin& plugin, const char* fmt, ...)
{
string tok = string("plugin-") + plugin.Name();
tok = strreplace(tok, "::", "-");
if ( enabled_streams.find(tok) == enabled_streams.end() )
return;
fprintf(file, "%17.06f/%17.06f [plugin %s] ",
network_time, current_time(true), plugin.Name().c_str());
va_list ap;
va_start(ap, fmt);
vfprintf(file, fmt, ap);
va_end(ap);
fputc('\n', file);
fflush(file);
}
#endif


@ -7,6 +7,8 @@
#ifdef DEBUG
#include <stdio.h>
#include <string>
#include <set>
// To add a new debugging stream, add a constant here as well as
// an entry to DebugLogger::streams in DebugLogger.cc.
@ -27,8 +29,9 @@ enum DebugStream {
DBG_INPUT, // Input streams
DBG_THREADING, // Threading system
DBG_FILE_ANALYSIS, // File analysis
DBG_PLUGINS,
DBG_BROXYGEN,
DBG_PLUGINS, // Plugin system
DBG_BROXYGEN, // Broxygen
DBG_PKTIO, // Packet sources and dumpers.
NUM_DBGS // Has to be last
};
@ -42,6 +45,10 @@ enum DebugStream {
#define DBG_PUSH(stream) debug_logger.PushIndent(stream)
#define DBG_POP(stream) debug_logger.PopIndent(stream)
#define PLUGIN_DBG_LOG(plugin, args...) debug_logger.Log(plugin, args)
namespace plugin { class Plugin; }
class DebugLogger {
public:
// Output goes to stderr per default.
@ -49,6 +56,7 @@ public:
~DebugLogger();
void Log(DebugStream stream, const char* fmt, ...);
void Log(const plugin::Plugin& plugin, const char* fmt, ...);
void PushIndent(DebugStream stream)
{ ++streams[int(stream)].indent; }
@ -79,6 +87,8 @@ private:
bool enabled;
};
std::set<std::string> enabled_streams;
static Stream streams[NUM_DBGS];
};
@ -89,6 +99,7 @@ extern DebugLogger debug_logger;
#define DBG_LOG_VERBOSE(args...)
#define DBG_PUSH(stream)
#define DBG_POP(stream)
#define PLUGIN_DBG_LOG(plugin, args...)
#endif
#endif


@ -6,6 +6,7 @@
#include "Func.h"
#include "NetVar.h"
#include "Trigger.h"
#include "plugin/Manager.h"
EventMgr mgr;
@ -77,6 +78,11 @@ EventMgr::~EventMgr()
void EventMgr::QueueEvent(Event* event)
{
bool done = PLUGIN_HOOK_WITH_RESULT(HOOK_QUEUE_EVENT, HookQueueEvent(event), false);
if ( done )
return;
if ( ! head )
head = tail = event;
else
@ -115,6 +121,8 @@ void EventMgr::Drain()
SegmentProfiler(segment_logger, "draining-events");
PLUGIN_HOOK_VOID(HOOK_DRAIN_EVENTS, HookDrainEvents());
draining = true;
while ( head )
Dispatch();


@ -24,6 +24,8 @@ public:
SourceID Source() const { return src; }
analyzer::ID Analyzer() const { return aid; }
TimerMgr* Mgr() const { return mgr; }
EventHandlerPtr Handler() const { return handler; }
val_list* Args() const { return args; }
void Describe(ODesc* d) const;


@ -13,6 +13,7 @@ EventHandler::EventHandler(const char* arg_name)
type = 0;
error_handler = false;
enabled = true;
generate_always = false;
}
EventHandler::~EventHandler()
@ -23,7 +24,9 @@ EventHandler::~EventHandler()
EventHandler::operator bool() const
{
return enabled && ((local && local->HasBodies()) || receivers.length());
return enabled && ((local && local->HasBodies())
|| receivers.length()
|| generate_always);
}
FuncType* EventHandler::FType()


@ -43,6 +43,11 @@ public:
void SetEnable(bool arg_enable) { enabled = arg_enable; }
// Flags the event as interesting even if there is no body defined. In
// particular, this will then still pass the event on to plugins.
void SetGenerateAlways() { generate_always = true; }
bool GenerateAlways() { return generate_always; }
// We don't serialize the handler(s) itself here, but
// just the reference to it.
bool Serialize(SerialInfo* info) const;
@ -57,6 +62,7 @@ private:
bool used; // this handler is indeed used somewhere
bool enabled;
bool error_handler; // this handler reports error messages.
bool generate_always;
declare(List, SourceID);
typedef List(SourceID) receiver_list;


@ -71,6 +71,23 @@ EventRegistry::string_list* EventRegistry::UsedHandlers()
return names;
}
EventRegistry::string_list* EventRegistry::AllHandlers()
{
string_list* names = new string_list;
IterCookie* c = handlers.InitForIteration();
HashKey* k;
EventHandler* v;
while ( (v = handlers.NextEntry(k, c)) )
{
names->append(v->Name());
delete k;
}
return names;
}
void EventRegistry::PrintDebug()
{
IterCookie* c = handlers.InitForIteration();


@ -33,6 +33,8 @@ public:
string_list* UnusedHandlers();
string_list* UsedHandlers();
string_list* AllHandlers();
void PrintDebug();
private:


@ -4330,7 +4330,7 @@ Val* TableCoerceExpr::Fold(Val* v) const
if ( tv->Size() > 0 )
Internal("coercion of non-empty table/set");
return new TableVal(Type()->Ref()->AsTableType(), tv->Attrs());
return new TableVal(Type()->AsTableType(), tv->Attrs());
}
IMPLEMENT_SERIAL(TableCoerceExpr, SER_TABLE_COERCE_EXPR);


@ -608,6 +608,10 @@ public:
CondExpr(Expr* op1, Expr* op2, Expr* op3);
~CondExpr();
const Expr* Op1() const { return op1; }
const Expr* Op2() const { return op2; }
const Expr* Op3() const { return op3; }
Expr* Simplify(SimplifyType simp_type);
Val* Eval(Frame* f) const;
int IsPure() const;
@ -706,6 +710,7 @@ public:
~FieldExpr();
int Field() const { return field; }
const char* FieldName() const { return field_name; }
int CanDel() const;
@ -737,6 +742,8 @@ public:
HasFieldExpr(Expr* op, const char* field_name);
~HasFieldExpr();
const char* FieldName() const { return field_name; }
protected:
friend class Expr;
HasFieldExpr() { field_name = 0; }


@ -1,228 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// Written by Bernhard Ager, TU Berlin (2006/2007).
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <netdb.h>
#include "FlowSrc.h"
#include "Net.h"
#include "analyzer/protocol/netflow/netflow_pac.h"
#include <errno.h>
FlowSrc::FlowSrc()
{ // TODO: v9.
selectable_fd = -1;
idle = false;
data = 0;
pdu_len = -1;
exporter_ip = 0;
current_timestamp = next_timestamp = 0.0;
netflow_analyzer = new binpac::NetFlow::NetFlow_Analyzer();
}
FlowSrc::~FlowSrc()
{
delete netflow_analyzer;
}
void FlowSrc::GetFds(int* read, int* write, int* except)
{
if ( selectable_fd >= 0 )
*read = selectable_fd;
}
double FlowSrc::NextTimestamp(double* network_time)
{
if ( ! data && ! ExtractNextPDU() )
return -1.0;
else
return next_timestamp;
}
void FlowSrc::Process()
{
if ( ! data && ! ExtractNextPDU() )
return;
// This is normally done by calling net_packet_dispatch(),
// but as we don't have a packet to dispatch ...
network_time = next_timestamp;
expire_timers();
netflow_analyzer->downflow()->set_exporter_ip(exporter_ip);
// We handle exceptions in NewData (might have changed w/ new binpac).
netflow_analyzer->NewData(0, data, data + pdu_len);
data = 0;
}
void FlowSrc::Close()
{
safe_close(selectable_fd);
}
FlowSocketSrc::~FlowSocketSrc()
{
}
int FlowSocketSrc::ExtractNextPDU()
{
sockaddr_in from;
socklen_t fromlen = sizeof(from);
pdu_len = recvfrom(selectable_fd, buffer, NF_MAX_PKT_SIZE, 0,
(struct sockaddr*) &from, &fromlen);
if ( pdu_len < 0 )
{
reporter->Error("problem reading NetFlow data from socket");
data = 0;
next_timestamp = -1.0;
closed = 1;
return 0;
}
if ( fromlen != sizeof(from) )
{
reporter->Error("malformed NetFlow PDU");
return 0;
}
data = buffer;
exporter_ip = from.sin_addr.s_addr;
next_timestamp = current_time();
if ( next_timestamp < current_timestamp )
next_timestamp = current_timestamp;
else
current_timestamp = next_timestamp;
return 1;
}
FlowSocketSrc::FlowSocketSrc(const char* listen_parms)
{
int n = strlen(listen_parms) + 1;
char laddr[n], port[n], ident[n];
laddr[0] = port[0] = ident[0] = '\0';
int ret = sscanf(listen_parms, "%[^:]:%[^=]=%s", laddr, port, ident);
if ( ret < 2 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"parsing your listen-spec went nuts: laddr='%s', port='%s'\n",
laddr[0] ? laddr : "", port[0] ? port : "");
closed = 1;
return;
}
const char* id = (ret == 3) ? ident : listen_parms;
netflow_analyzer->downflow()->set_identifier(id);
struct addrinfo aiprefs = {
0, PF_INET, SOCK_DGRAM, IPPROTO_UDP, 0, NULL, NULL, NULL
};
struct addrinfo* ainfo = 0;
if ( (ret = getaddrinfo(laddr, port, &aiprefs, &ainfo)) != 0 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"getaddrinfo(%s, %s, ...): %s",
laddr, port, gai_strerror(ret));
closed = 1;
return;
}
if ( (selectable_fd = socket (PF_INET, SOCK_DGRAM, 0)) < 0 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"socket: %s", strerror(errno));
closed = 1;
goto cleanup;
}
if ( bind (selectable_fd, ainfo->ai_addr, ainfo->ai_addrlen) < 0 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"bind: %s", strerror(errno));
closed = 1;
goto cleanup;
}
cleanup:
freeaddrinfo(ainfo);
}
FlowFileSrc::~FlowFileSrc()
{
delete [] readfile;
}
int FlowFileSrc::ExtractNextPDU()
{
FlowFileSrcPDUHeader pdu_header;
if ( read(selectable_fd, &pdu_header, sizeof(pdu_header)) <
int(sizeof(pdu_header)) )
return Error(errno, "read header");
if ( pdu_header.pdu_length > NF_MAX_PKT_SIZE )
{
reporter->Error("NetFlow packet too long");
// Safely skip over the too-long PDU.
if ( lseek(selectable_fd, pdu_header.pdu_length, SEEK_CUR) < 0 )
return Error(errno, "lseek");
return 0;
}
if ( read(selectable_fd, buffer, pdu_header.pdu_length) <
pdu_header.pdu_length )
return Error(errno, "read data");
if ( next_timestamp < pdu_header.network_time )
{
next_timestamp = pdu_header.network_time;
current_timestamp = pdu_header.network_time;
}
else
current_timestamp = next_timestamp;
data = buffer;
pdu_len = pdu_header.pdu_length;
exporter_ip = pdu_header.ipaddr;
return 1;
}
FlowFileSrc::FlowFileSrc(const char* readfile)
{
int n = strlen(readfile) + 1;
char ident[n];
this->readfile = new char[n];
int ret = sscanf(readfile, "%[^=]=%s", this->readfile, ident);
const char* id = (ret == 2) ? ident : this->readfile;
netflow_analyzer->downflow()->set_identifier(id);
selectable_fd = open(this->readfile, O_RDONLY);
if ( selectable_fd < 0 )
{
closed = 1;
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"open: %s", strerror(errno));
}
}
int FlowFileSrc::Error(int errlvl, const char* errmsg)
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"%s: %s", errmsg, strerror(errlvl));
data = 0;
next_timestamp = -1.0;
closed = 1;
return 0;
}

View file

@ -1,84 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// Written by Bernhard Ager, TU Berlin (2006/2007).
#ifndef flowsrc_h
#define flowsrc_h
#include "IOSource.h"
#include "NetVar.h"
#include "binpac.h"
#define BRO_FLOW_ERRBUF_SIZE 512
// TODO: 1500 is enough for v5 - how about the others?
// 65536 would be enough for any UDP packet.
#define NF_MAX_PKT_SIZE 8192
struct FlowFileSrcPDUHeader {
double network_time;
int pdu_length;
uint32 ipaddr;
};
// Avoid including netflow_pac.h by explicitly declaring the NetFlow_Analyzer.
namespace binpac {
namespace NetFlow {
class NetFlow_Analyzer;
}
}
class FlowSrc : public IOSource {
public:
virtual ~FlowSrc();
// IOSource interface:
bool IsReady();
void GetFds(int* read, int* write, int* except);
double NextTimestamp(double* network_time);
void Process();
const char* Tag() { return "FlowSrc"; }
const char* ErrorMsg() const { return errbuf; }
protected:
FlowSrc();
virtual int ExtractNextPDU() = 0;
virtual void Close();
int selectable_fd;
double current_timestamp;
double next_timestamp;
binpac::NetFlow::NetFlow_Analyzer* netflow_analyzer;
u_char buffer[NF_MAX_PKT_SIZE];
u_char* data;
int pdu_len;
uint32 exporter_ip; // in network byte order
char errbuf[BRO_FLOW_ERRBUF_SIZE];
};
class FlowSocketSrc : public FlowSrc {
public:
FlowSocketSrc(const char* listen_parms);
virtual ~FlowSocketSrc();
int ExtractNextPDU();
};
class FlowFileSrc : public FlowSrc {
public:
FlowFileSrc(const char* readfile);
~FlowFileSrc();
int ExtractNextPDU();
protected:
int Error(int errlvl, const char* errmsg);
char* readfile;
};
#endif

View file

@ -46,6 +46,7 @@
#include "Event.h"
#include "Traverse.h"
#include "Reporter.h"
#include "plugin/Manager.h"
extern RETSIGTYPE sig_handler(int signo);
@ -226,7 +227,7 @@ TraversalCode Func::Traverse(TraversalCallback* cb) const
HANDLE_TC_STMT_PRE(tc);
// FIXME: Traverse arguments to builtin functions, too.
if ( kind == BRO_FUNC )
if ( kind == BRO_FUNC && scope )
{
tc = scope->Traverse(cb);
HANDLE_TC_STMT_PRE(tc);
@ -244,6 +245,49 @@ TraversalCode Func::Traverse(TraversalCallback* cb) const
HANDLE_TC_STMT_POST(tc);
}
Val* Func::HandlePluginResult(Val* plugin_result, val_list* args, function_flavor flavor) const
{
// Helper function factoring out this code from BroFunc::Call() for better

// readability.
switch ( flavor ) {
case FUNC_FLAVOR_EVENT:
Unref(plugin_result);
plugin_result = 0;
break;
case FUNC_FLAVOR_HOOK:
if ( plugin_result->Type()->Tag() != TYPE_BOOL )
reporter->InternalError("plugin returned non-bool for hook");
break;
case FUNC_FLAVOR_FUNCTION:
{
BroType* yt = FType()->YieldType();
if ( (! yt) || yt->Tag() == TYPE_VOID )
{
Unref(plugin_result);
plugin_result = 0;
}
else
{
if ( plugin_result->Type()->Tag() != yt->Tag() )
reporter->InternalError("plugin returned wrong type for function call");
}
break;
}
}
loop_over_list(*args, i)
Unref((*args)[i]);
return plugin_result;
}
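HandlePluginResult() encodes the per-flavor contract for HOOK_CALL_FUNCTION: an event's result is discarded, a hook must produce a bool, and a function's result must match its declared yield type. A hypothetical plugin using the short-circuit path might look like this (MyPlugin and the intercepted name are illustrative; the hook signature is the one invoked above):

	Val* MyPlugin::HookCallFunction(const Func* func, val_list* args)
		{
		if ( streq(func->Name(), "my_intercepted_hook") )
			return new Val(true, TYPE_BOOL); // Interpreted per flavor, as above.

		return 0; // Null lets Bro execute the function normally.
		}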
BroFunc::BroFunc(ID* arg_id, Stmt* arg_body, id_list* aggr_inits,
int arg_frame_size, int priority)
: Func(BRO_FUNC)
@ -281,6 +325,17 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
#ifdef PROFILE_BRO_FUNCTIONS
DEBUG_MSG("Function: %s\n", id->Name());
#endif
SegmentProfiler(segment_logger, location);
if ( sample_logger )
sample_logger->FunctionSeen(this);
Val* plugin_result = PLUGIN_HOOK_WITH_RESULT(HOOK_CALL_FUNCTION, HookCallFunction(this, args), 0);
if ( plugin_result )
return HandlePluginResult(plugin_result, args, Flavor());
if ( bodies.empty() )
{
// Can only happen for events and hooks.
@ -291,7 +346,6 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
return Flavor() == FUNC_FLAVOR_HOOK ? new Val(true, TYPE_BOOL) : 0;
}
SegmentProfiler(segment_logger, location);
Frame* f = new Frame(frame_size, this, args);
// Hand down any trigger.
@ -319,9 +373,6 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
Val* result = 0;
if ( sample_logger )
sample_logger->FunctionSeen(this);
for ( size_t i = 0; i < bodies.size(); ++i )
{
if ( sample_logger )
@ -497,6 +548,11 @@ Val* BuiltinFunc::Call(val_list* args, Frame* parent) const
if ( sample_logger )
sample_logger->FunctionSeen(this);
Val* plugin_result = PLUGIN_HOOK_WITH_RESULT(HOOK_CALL_FUNCTION, HookCallFunction(this, args), 0);
if ( plugin_result )
return HandlePluginResult(plugin_result, args, FUNC_FLAVOR_FUNCTION);
if ( g_trace_state.DoTrace() )
{
ODesc d;
@ -550,18 +606,15 @@ void builtin_error(const char* msg, BroObj* arg)
}
#include "bro.bif.func_h"
#include "logging.bif.func_h"
#include "input.bif.func_h"
#include "reporter.bif.func_h"
#include "strings.bif.func_h"
#include "bro.bif.func_def"
#include "logging.bif.func_def"
#include "input.bif.func_def"
#include "reporter.bif.func_def"
#include "strings.bif.func_def"
#include "__all__.bif.cc" // Autogenerated for compiling in the bif_target() code.
#include "__all__.bif.register.cc" // Autogenerated for compiling in the bif_target() code.
void init_builtin_funcs()
{
@ -572,16 +625,17 @@ void init_builtin_funcs()
gap_info = internal_type("gap_info")->AsRecordType();
#include "bro.bif.func_init"
#include "logging.bif.func_init"
#include "input.bif.func_init"
#include "reporter.bif.func_init"
#include "strings.bif.func_init"
#include "__all__.bif.init.cc" // Autogenerated for compiling in the bif_target() code.
did_builtin_init = true;
}
void init_builtin_funcs_subdirs()
{
#include "__all__.bif.init.cc" // Autogenerated for compiling in the bif_target() code.
}
bool check_built_in_call(BuiltinFunc* f, CallExpr* call)
{
if ( f->TheFunc() != BifFunc::bro_fmt )

View file

@ -52,6 +52,7 @@ public:
Kind GetKind() const { return kind; }
const char* Name() const { return name.c_str(); }
void SetName(const char* arg_name) { name = arg_name; }
virtual void Describe(ODesc* d) const = 0;
virtual void DescribeDebug(ODesc* d, const val_list* args) const;
@ -69,6 +70,9 @@ public:
protected:
Func();
// Helper function for handling result of plugin hook.
Val* HandlePluginResult(Val* plugin_result, val_list* args, function_flavor flavor) const;
DECLARE_ABSTRACT_SERIAL(Func);
vector<Body> bodies;
@ -130,6 +134,7 @@ protected:
extern void builtin_error(const char* msg, BroObj* arg = 0);
extern void init_builtin_funcs();
extern void init_builtin_funcs_subdirs();
extern bool check_built_in_call(BuiltinFunc* f, CallExpr* call);

View file

@ -32,9 +32,11 @@ public:
void SetType(BroType* t) { Unref(type); type = t; }
BroType* Type() { return type; }
const BroType* Type() const { return type; }
void MakeType() { is_type = 1; }
BroType* AsType() { return is_type ? Type() : 0; }
const BroType* AsType() const { return is_type ? Type() : 0; }
// If weak_ref is false, the Val is assumed to be already ref'ed
// and will be deref'ed when the ID is deleted.

View file

@ -1,103 +0,0 @@
// Interface for classes providing/consuming data during Bro's main loop.
#ifndef iosource_h
#define iosource_h
#include <list>
#include "Timer.h"
using namespace std;
class IOSource {
public:
IOSource() { idle = closed = false; }
virtual ~IOSource() {}
// Returns true if source has nothing ready to process.
bool IsIdle() const { return idle; }
// Returns true if more data is to be expected in the future.
// Otherwise, source may be removed.
bool IsOpen() const { return ! closed; }
// Returns select'able fds (leaves args untouched if we don't have
// selectable fds).
virtual void GetFds(int* read, int* write, int* except) = 0;
// The following two methods are only called when either IsIdle()
// returns false or select() on one of the fds indicates that there's
// data to process.
// Returns timestamp (in global network time) associated with next
// data item. If the source wants the data item to be processed
// with a local network time, it sets the argument accordingly.
virtual double NextTimestamp(double* network_time) = 0;
// Processes and consumes next data item.
virtual void Process() = 0;
// Returns tag of timer manager associated with last processed
// data item, nil for global timer manager.
virtual TimerMgr::Tag* GetCurrentTag() { return 0; }
// Returns a descriptive tag for debugging.
virtual const char* Tag() = 0;
protected:
// Derived classes are to set this to true if they have gone dry
// temporarily.
bool idle;
// Derived classes are to set this to true if they have gone dry
// permanently.
bool closed;
};
class IOSourceRegistry {
public:
IOSourceRegistry() { call_count = 0; dont_counts = 0; }
~IOSourceRegistry();
// If dont_count is true, this source does not contribute to the
// number of IOSources returned by Size(). The effect is that
// if all sources but the non-counting ones have gone dry,
// processing will shut down.
void Register(IOSource* src, bool dont_count = false);
// This may block for some time.
IOSource* FindSoonest(double* ts);
int Size() const { return sources.size() - dont_counts; }
// Terminate IOSource processing immediately by removing all
// sources (and therefore returning a Size() of zero).
void Terminate() { RemoveAll(); }
protected:
// When looking for a source with something to process,
// every SELECT_FREQUENCY calls we will go ahead and
// block on a select().
static const int SELECT_FREQUENCY = 25;
// Microseconds to wait in an empty select if no source is ready.
static const int SELECT_TIMEOUT = 50;
void RemoveAll();
unsigned int call_count;
int dont_counts;
struct Source {
IOSource* src;
int fd_read;
int fd_write;
int fd_except;
};
typedef list<Source*> SourceList;
SourceList sources;
};
extern IOSourceRegistry io_sources;
#endif

View file

@ -29,6 +29,10 @@
#include "Anon.h"
#include "Serializer.h"
#include "PacketDumper.h"
#include "iosource/Manager.h"
#include "iosource/PktSrc.h"
#include "iosource/PktDumper.h"
#include "plugin/Manager.h"
extern "C" {
#include "setsignal.h"
@ -38,10 +42,7 @@ extern "C" {
extern int select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
}
PList(PktSrc) pkt_srcs;
// FIXME: We should really merge PktDumper and PacketDumper.
PktDumper* pkt_dumper = 0;
iosource::PktDumper* pkt_dumper = 0;
int reading_live = 0;
int reading_traces = 0;
@ -62,8 +63,8 @@ const u_char* current_pkt = 0;
int current_dispatched = 0;
int current_hdr_size = 0;
double current_timestamp = 0.0;
PktSrc* current_pktsrc = 0;
IOSource* current_iosrc;
iosource::PktSrc* current_pktsrc = 0;
iosource::IOSource* current_iosrc = 0;
std::list<ScannedFile> files_scanned;
std::vector<string> sig_files;
@ -112,17 +113,21 @@ RETSIGTYPE watchdog(int /* signo */)
// saving the packet which caused the
// watchdog to trigger may be helpful,
// so we'll save that one nevertheless.
pkt_dumper = new PktDumper("watchdog-pkt.pcap");
if ( pkt_dumper->IsError() )
pkt_dumper = iosource_mgr->OpenPktDumper("watchdog-pkt.pcap", false);
if ( ! pkt_dumper || pkt_dumper->IsError() )
{
reporter->Error("watchdog: can't open watchdog-pkt.pcap for writing\n");
delete pkt_dumper;
reporter->Error("watchdog: can't open watchdog-pkt.pcap for writing");
pkt_dumper = 0;
}
}
if ( pkt_dumper )
pkt_dumper->Dump(current_hdr, current_pkt);
{
iosource::PktDumper::Packet p;
p.hdr = current_hdr;
p.data = current_pkt;
pkt_dumper->Dump(&p);
}
}
net_get_final_stats();
@ -141,121 +146,47 @@ RETSIGTYPE watchdog(int /* signo */)
return RETSIGVAL;
}
void net_init(name_list& interfaces, name_list& readfiles,
name_list& netflows, name_list& flowfiles,
const char* writefile, const char* filter,
const char* secondary_filter, int do_watchdog)
void net_update_time(double new_network_time)
{
init_net_var();
network_time = new_network_time;
PLUGIN_HOOK_VOID(HOOK_UPDATE_NETWORK_TIME, HookUpdateNetworkTime(new_network_time));
}
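net_update_time() becomes the single choke point for advancing network_time, so plugins subscribed to HOOK_UPDATE_NETWORK_TIME observe every change; the direct assignments to network_time later in this file are rewritten to call it. On the plugin side this pairs with an override along these lines (hypothetical body; only the hook itself comes from the plugin API):

	void MyPlugin::HookUpdateNetworkTime(double network_time)
		{
		// React to the clock advancing, e.g. update plugin-internal state.
		}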
if ( readfiles.length() > 0 || flowfiles.length() > 0 )
void net_init(name_list& interfaces, name_list& readfiles,
const char* writefile, int do_watchdog)
{
if ( readfiles.length() > 0 )
{
reading_live = pseudo_realtime > 0.0;
reading_traces = 1;
for ( int i = 0; i < readfiles.length(); ++i )
{
PktFileSrc* ps = new PktFileSrc(readfiles[i], filter);
iosource::PktSrc* ps = iosource_mgr->OpenPktSrc(readfiles[i], false);
assert(ps);
if ( ! ps->IsOpen() )
reporter->FatalError("%s: problem with trace file %s - %s\n",
prog, readfiles[i], ps->ErrorMsg());
else
{
pkt_srcs.append(ps);
io_sources.Register(ps);
}
if ( secondary_filter )
{
// We use a second PktFileSrc for the
// secondary path.
PktFileSrc* ps = new PktFileSrc(readfiles[i],
secondary_filter,
TYPE_FILTER_SECONDARY);
if ( ! ps->IsOpen() )
reporter->FatalError("%s: problem with trace file %s - %s\n",
prog, readfiles[i],
reporter->FatalError("problem with trace file %s (%s)",
readfiles[i],
ps->ErrorMsg());
else
{
pkt_srcs.append(ps);
io_sources.Register(ps);
}
ps->AddSecondaryTablePrograms();
}
}
for ( int i = 0; i < flowfiles.length(); ++i )
{
FlowFileSrc* fs = new FlowFileSrc(flowfiles[i]);
if ( ! fs->IsOpen() )
reporter->FatalError("%s: problem with netflow file %s - %s\n",
prog, flowfiles[i], fs->ErrorMsg());
else
{
io_sources.Register(fs);
}
}
}
else if ((interfaces.length() > 0 || netflows.length() > 0))
else if ( interfaces.length() > 0 )
{
reading_live = 1;
reading_traces = 0;
for ( int i = 0; i < interfaces.length(); ++i )
{
PktSrc* ps;
ps = new PktInterfaceSrc(interfaces[i], filter);
iosource::PktSrc* ps = iosource_mgr->OpenPktSrc(interfaces[i], true);
assert(ps);
if ( ! ps->IsOpen() )
reporter->FatalError("%s: problem with interface %s - %s\n",
prog, interfaces[i], ps->ErrorMsg());
else
{
pkt_srcs.append(ps);
io_sources.Register(ps);
}
if ( secondary_filter )
{
PktSrc* ps;
ps = new PktInterfaceSrc(interfaces[i],
filter, TYPE_FILTER_SECONDARY);
if ( ! ps->IsOpen() )
reporter->Error("%s: problem with interface %s - %s\n",
prog, interfaces[i],
reporter->FatalError("problem with interface %s (%s)",
interfaces[i],
ps->ErrorMsg());
else
{
pkt_srcs.append(ps);
io_sources.Register(ps);
}
ps->AddSecondaryTablePrograms();
}
}
for ( int i = 0; i < netflows.length(); ++i )
{
FlowSocketSrc* fs = new FlowSocketSrc(netflows[i]);
if ( ! fs->IsOpen() )
{
reporter->Error("%s: problem with netflow socket %s - %s\n",
prog, netflows[i], fs->ErrorMsg());
delete fs;
}
else
io_sources.Register(fs);
}
}
else
@ -267,12 +198,12 @@ void net_init(name_list& interfaces, name_list& readfiles,
if ( writefile )
{
// ### This will fail horribly if there are multiple
// interfaces with different-length media.
pkt_dumper = new PktDumper(writefile);
if ( pkt_dumper->IsError() )
reporter->FatalError("%s: can't open write file \"%s\" - %s\n",
prog, writefile, pkt_dumper->ErrorMsg());
pkt_dumper = iosource_mgr->OpenPktDumper(writefile, false);
assert(pkt_dumper);
if ( ! pkt_dumper->IsOpen() )
reporter->FatalError("problem opening dump file %s (%s)",
writefile, pkt_dumper->ErrorMsg());
ID* id = global_scope()->Lookup("trace_output_file");
if ( ! id )
@ -293,7 +224,7 @@ void net_init(name_list& interfaces, name_list& readfiles,
}
}
void expire_timers(PktSrc* src_ps)
void expire_timers(iosource::PktSrc* src_ps)
{
SegmentProfiler(segment_logger, "expiring-timers");
TimerMgr* tmgr =
@ -307,7 +238,7 @@ void expire_timers(PktSrc* src_ps)
void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
const u_char* pkt, int hdr_size,
PktSrc* src_ps)
iosource::PktSrc* src_ps)
{
if ( ! bro_start_network_time )
bro_start_network_time = t;
@ -315,7 +246,7 @@ void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
TimerMgr* tmgr = sessions->LookupTimerMgr(src_ps->GetCurrentTag());
// network_time never goes back.
network_time = tmgr->Time() < t ? t : tmgr->Time();
net_update_time(tmgr->Time() < t ? t : tmgr->Time());
current_pktsrc = src_ps;
current_iosrc = src_ps;
@ -363,11 +294,11 @@ void net_run()
{
set_processing_status("RUNNING", "net_run");
while ( io_sources.Size() ||
while ( iosource_mgr->Size() ||
(BifConst::exit_only_after_terminate && ! terminating) )
{
double ts;
IOSource* src = io_sources.FindSoonest(&ts);
iosource::IOSource* src = iosource_mgr->FindSoonest(&ts);
#ifdef DEBUG
static int loop_counter = 0;
@ -395,7 +326,7 @@ void net_run()
{
// Take advantage of the lull to get up to
// date on timers and events.
network_time = ct;
net_update_time(ct);
expire_timers();
usleep(1); // Just yield.
}
@ -408,7 +339,7 @@ void net_run()
// date on timers and events. Because we only
// have timers as sources, going to sleep here
// doesn't risk blocking on other inputs.
network_time = current_time();
net_update_time(current_time());
expire_timers();
// Avoid busy-waiting - pause for 100 ms.
@ -465,16 +396,19 @@ void net_run()
void net_get_final_stats()
{
loop_over_list(pkt_srcs, i)
const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
for ( iosource::Manager::PktSrcList::const_iterator i = pkt_srcs.begin();
i != pkt_srcs.end(); i++ )
{
PktSrc* ps = pkt_srcs[i];
iosource::PktSrc* ps = *i;
if ( ps->IsLive() )
{
struct PktSrc::Stats s;
iosource::PktSrc::Stats s;
ps->Statistics(&s);
reporter->Info("%d packets received on interface %s, %d dropped\n",
s.received, ps->Interface(), s.dropped);
reporter->Info("%d packets received on interface %s, %d dropped",
s.received, ps->Path().c_str(), s.dropped);
}
}
}
@ -494,8 +428,6 @@ void net_finish(int drain_events)
sessions->Done();
}
delete pkt_dumper;
#ifdef DEBUG
extern int reassem_seen_bytes, reassem_copied_bytes;
// DEBUG_MSG("Reassembly (TCP and IP/Frag): %d bytes seen, %d bytes copied\n",
@ -516,29 +448,6 @@ void net_delete()
delete ip_anonymizer[i];
}
// net_packet_match
//
// Description:
// - Checks if a packet matches a filter. It just wraps up a call to
// [pcap.h's] bpf_filter().
//
// Inputs:
// - fp: a BPF-compiled filter
// - pkt: a pointer to the packet
// - len: the original packet length
// - caplen: the captured packet length; equals len unless the packet was truncated
//
// Output:
// - return: 1 if the packet matches the filter, 0 otherwise
int net_packet_match(BPF_Program* fp, const u_char* pkt,
u_int len, u_int caplen)
{
// NOTE: Casting away the const on the pkt variable here is not ideal.
return bpf_filter(fp->GetProgram()->bf_insns, (u_char*) pkt, len, caplen);
}
int _processing_suspended = 0;
static double suspend_start = 0;
@ -556,8 +465,12 @@ void net_continue_processing()
if ( _processing_suspended == 1 )
{
reporter->Info("processing continued");
loop_over_list(pkt_srcs, i)
pkt_srcs[i]->ContinueAfterSuspend();
const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
for ( iosource::Manager::PktSrcList::const_iterator i = pkt_srcs.begin();
i != pkt_srcs.end(); i++ )
(*i)->ContinueAfterSuspend();
}
--_processing_suspended;

View file

@ -5,27 +5,24 @@
#include "net_util.h"
#include "util.h"
#include "BPF_Program.h"
#include "List.h"
#include "PktSrc.h"
#include "FlowSrc.h"
#include "Func.h"
#include "RemoteSerializer.h"
#include "iosource/IOSource.h"
#include "iosource/PktSrc.h"
#include "iosource/PktDumper.h"
extern void net_init(name_list& interfaces, name_list& readfiles,
name_list& netflows, name_list& flowfiles,
const char* writefile, const char* filter,
const char* secondary_filter, int do_watchdog);
const char* writefile, int do_watchdog);
extern void net_run();
extern void net_get_final_stats();
extern void net_finish(int drain_events);
extern void net_delete(); // Reclaim all memory, etc.
extern void net_update_time(double new_network_time);
extern void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
const u_char* pkt, int hdr_size,
PktSrc* src_ps);
extern int net_packet_match(BPF_Program* fp, const u_char* pkt,
u_int len, u_int caplen);
extern void expire_timers(PktSrc* src_ps = 0);
iosource::PktSrc* src_ps);
extern void expire_timers(iosource::PktSrc* src_ps = 0);
extern void termination_signal();
// Functions to temporarily suspend processing of live input (network packets
@ -82,13 +79,10 @@ extern const u_char* current_pkt;
extern int current_dispatched;
extern int current_hdr_size;
extern double current_timestamp;
extern PktSrc* current_pktsrc;
extern IOSource* current_iosrc;
extern iosource::PktSrc* current_pktsrc;
extern iosource::IOSource* current_iosrc;
declare(PList,PktSrc);
extern PList(PktSrc) pkt_srcs;
extern PktDumper* pkt_dumper; // where to save packets
extern iosource::PktDumper* pkt_dumper; // where to save packets
extern char* writefile;

View file

@ -245,8 +245,6 @@ bro_uint_t bits_per_uid;
#include "const.bif.netvar_def"
#include "types.bif.netvar_def"
#include "event.bif.netvar_def"
#include "logging.bif.netvar_def"
#include "input.bif.netvar_def"
#include "reporter.bif.netvar_def"
void init_event_handlers()
@ -311,8 +309,6 @@ void init_net_var()
{
#include "const.bif.netvar_init"
#include "types.bif.netvar_init"
#include "logging.bif.netvar_init"
#include "input.bif.netvar_init"
#include "reporter.bif.netvar_init"
conn_id = internal_type("conn_id")->AsRecordType();

View file

@ -255,8 +255,6 @@ extern void init_net_var();
#include "const.bif.netvar_h"
#include "types.bif.netvar_h"
#include "event.bif.netvar_h"
#include "logging.bif.netvar_h"
#include "input.bif.netvar_h"
#include "reporter.bif.netvar_h"
#endif

View file

@ -7,6 +7,7 @@
#include "Obj.h"
#include "Serializer.h"
#include "File.h"
#include "plugin/Manager.h"
Location no_location("<no location>", 0, 0, 0, 0);
Location start_location("<start uninitialized>", 0, 0, 0, 0);
@ -92,6 +93,9 @@ int BroObj::suppress_errors = 0;
BroObj::~BroObj()
{
if ( notify_plugins )
PLUGIN_HOOK_VOID(HOOK_BRO_OBJ_DTOR, HookBroObjDtor(this));
delete location;
}
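Plugins are consulted only for objects that opted in via NotifyPluginsOnDtor(), keeping destruction of ordinary objects cheap. A plugin on the receiving end would implement something like the following (hypothetical body; the object arrives as an untyped pointer):

	void MyPlugin::HookBroObjDtor(void* obj)
		{
		// obj is about to be deleted; invalidate any cached references to it.
		}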

View file

@ -92,6 +92,7 @@ public:
{
ref_cnt = 1;
in_ser_cache = false;
notify_plugins = false;
// A bit of a hack. We'd like to associate location
// information with every object created when parsing,
@ -151,6 +152,9 @@ public:
// extend compound objects such as statement lists.
virtual void UpdateLocationEndInfo(const Location& end);
// Enable notification of plugins when this objects gets destroyed.
void NotifyPluginsOnDtor() { notify_plugins = true; }
int RefCnt() const { return ref_cnt; }
// Helper class to temporarily suppress errors
@ -181,6 +185,7 @@ private:
friend inline void Ref(BroObj* o);
friend inline void Unref(BroObj* o);
bool notify_plugins;
int ref_cnt;
// If non-zero, do not print runtime errors. Useful for

View file

@ -1,804 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include <errno.h>
#include <sys/stat.h>
#include "config.h"
#include "util.h"
#include "PktSrc.h"
#include "Hash.h"
#include "Net.h"
#include "Sessions.h"
// ### This needs auto-confing.
#ifdef HAVE_PCAP_INT_H
#include <pcap-int.h>
#endif
PktSrc::PktSrc()
{
interface = readfile = 0;
data = last_data = 0;
memset(&hdr, 0, sizeof(hdr));
hdr_size = 0;
datalink = 0;
netmask = 0xffffff00;
pd = 0;
idle = false;
next_sync_point = 0;
first_timestamp = current_timestamp = next_timestamp = 0.0;
first_wallclock = current_wallclock = 0;
stats.received = stats.dropped = stats.link = 0;
}
PktSrc::~PktSrc()
{
Close();
loop_over_list(program_list, i)
delete program_list[i];
BPF_Program* code;
IterCookie* cookie = filters.InitForIteration();
while ( (code = filters.NextEntry(cookie)) )
delete code;
delete [] interface;
delete [] readfile;
}
void PktSrc::GetFds(int* read, int* write, int* except)
{
if ( pseudo_realtime )
{
// Select would give erroneous results. But we simulate it
// by setting idle accordingly.
idle = CheckPseudoTime() == 0;
return;
}
if ( selectable_fd >= 0 )
*read = selectable_fd;
}
int PktSrc::ExtractNextPacket()
{
// Don't return any packets if processing is suspended (except for the
// very first packet which we need to set up times).
if ( net_is_processing_suspended() && first_timestamp )
{
idle = true;
return 0;
}
data = last_data = pcap_next(pd, &hdr);
if ( data && (hdr.len == 0 || hdr.caplen == 0) )
{
sessions->Weird("empty_pcap_header", &hdr, data);
return 0;
}
if ( data )
next_timestamp = hdr.ts.tv_sec + double(hdr.ts.tv_usec) / 1e6;
if ( pseudo_realtime )
current_wallclock = current_time(true);
if ( ! first_timestamp )
first_timestamp = next_timestamp;
idle = (data == 0);
if ( data )
++stats.received;
// Source has gone dry. If it's a network interface, this just means
// it's timed out. If it's a file, though, then the file has been
// exhausted.
if ( ! data && ! IsLive() )
{
closed = true;
if ( pseudo_realtime && using_communication )
{
if ( remote_trace_sync_interval )
remote_serializer->SendFinalSyncPoint();
else
remote_serializer->Terminate();
}
}
return data != 0;
}
double PktSrc::NextTimestamp(double* local_network_time)
{
if ( ! data && ! ExtractNextPacket() )
return -1.0;
if ( pseudo_realtime )
{
// Delay packet if necessary.
double packet_time = CheckPseudoTime();
if ( packet_time )
return packet_time;
idle = true;
return -1.0;
}
return next_timestamp;
}
void PktSrc::ContinueAfterSuspend()
{
current_wallclock = current_time(true);
}
double PktSrc::CurrentPacketWallClock()
{
// We stop time when we are suspended.
if ( net_is_processing_suspended() )
current_wallclock = current_time(true);
return current_wallclock;
}
double PktSrc::CheckPseudoTime()
{
if ( ! data && ! ExtractNextPacket() )
return 0;
if ( ! current_timestamp )
return bro_start_time;
if ( remote_trace_sync_interval )
{
if ( next_sync_point == 0 || next_timestamp >= next_sync_point )
{
int n = remote_serializer->SendSyncPoint();
next_sync_point = first_timestamp +
n * remote_trace_sync_interval;
remote_serializer->Log(RemoteSerializer::LogInfo,
fmt("stopping at packet %.6f, next sync-point at %.6f",
current_timestamp, next_sync_point));
return 0;
}
}
double pseudo_time = next_timestamp - first_timestamp;
double ct = (current_time(true) - first_wallclock) * pseudo_realtime;
return pseudo_time <= ct ? bro_start_time + pseudo_time : 0;
}
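As a worked example of the release rule: with pseudo_realtime = 2.0 and 1.5 s of wall clock elapsed since first_wallclock, ct = 1.5 * 2.0 = 3.0, so a packet whose offset into the trace (pseudo_time) is at most 3.0 s is released now, while later packets are held back. The same arithmetic carries over to the relocated packet-source implementation under iosource/.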
void PktSrc::Process()
{
if ( ! data && ! ExtractNextPacket() )
return;
current_timestamp = next_timestamp;
int pkt_hdr_size = hdr_size;
// Unfortunately some packets on the link might have MPLS labels
// while others don't. That means we need to ask the link-layer if
// labels are in place.
bool have_mpls = false;
int protocol = 0;
switch ( datalink ) {
case DLT_NULL:
{
protocol = (data[3] << 24) + (data[2] << 16) + (data[1] << 8) + data[0];
// From the Wireshark Wiki: "AF_INET6, unfortunately, has
// different values in {NetBSD,OpenBSD,BSD/OS},
// {FreeBSD,DragonFlyBSD}, and {Darwin/Mac OS X}, so an IPv6
// packet might have a link-layer header with 24, 28, or 30
// as the AF_ value." As we may be reading traces captured on
// platforms other than what we're running on, we accept them
// all here.
if ( protocol != AF_INET
&& protocol != AF_INET6
&& protocol != 24
&& protocol != 28
&& protocol != 30 )
{
sessions->Weird("non_ip_packet_in_null_transport", &hdr, data);
data = 0;
return;
}
break;
}
case DLT_EN10MB:
{
// Get protocol being carried from the ethernet frame.
protocol = (data[12] << 8) + data[13];
switch ( protocol )
{
// MPLS carried over the ethernet frame.
case 0x8847:
// Remove the data link layer and denote a
// header size of zero before the IP header.
have_mpls = true;
data += get_link_header_size(datalink);
pkt_hdr_size = 0;
break;
// VLAN carried over the ethernet frame.
case 0x8100:
data += get_link_header_size(datalink);
// Check for MPLS in VLAN.
if ( ((data[2] << 8) + data[3]) == 0x8847 )
have_mpls = true;
data += 4; // Skip the vlan header
pkt_hdr_size = 0;
// Check for 802.1ah (Q-in-Q) containing IP.
// Only do a second layer of vlan tag
// stripping because there is no
// specification that allows for deeper
// nesting.
if ( ((data[2] << 8) + data[3]) == 0x0800 )
data += 4;
break;
// PPPoE carried over the ethernet frame.
case 0x8864:
data += get_link_header_size(datalink);
protocol = (data[6] << 8) + data[7];
data += 8; // Skip the PPPoE session and PPP header
pkt_hdr_size = 0;
if ( protocol != 0x0021 && protocol != 0x0057 )
{
// Neither IPv4 nor IPv6.
sessions->Weird("non_ip_packet_in_pppoe_encapsulation", &hdr, data);
data = 0;
return;
}
break;
}
break;
}
case DLT_PPP_SERIAL:
{
// Get PPP protocol.
protocol = (data[2] << 8) + data[3];
if ( protocol == 0x0281 )
{
// MPLS Unicast. Remove the data link layer and
// denote a header size of zero before the IP header.
have_mpls = true;
data += get_link_header_size(datalink);
pkt_hdr_size = 0;
}
else if ( protocol != 0x0021 && protocol != 0x0057 )
{
// Neither IPv4 nor IPv6.
sessions->Weird("non_ip_packet_in_ppp_encapsulation", &hdr, data);
data = 0;
return;
}
break;
}
}
if ( have_mpls )
{
// Skip the MPLS label stack.
bool end_of_stack = false;
while ( ! end_of_stack )
{
end_of_stack = *(data + 2) & 0x01;
data += 4;
}
}
if ( pseudo_realtime )
{
current_pseudo = CheckPseudoTime();
net_packet_dispatch(current_pseudo, &hdr, data, pkt_hdr_size, this);
if ( ! first_wallclock )
first_wallclock = current_time(true);
}
else
net_packet_dispatch(current_timestamp, &hdr, data, pkt_hdr_size, this);
data = 0;
}
bool PktSrc::GetCurrentPacket(const struct pcap_pkthdr** arg_hdr,
const u_char** arg_pkt)
{
if ( ! last_data )
return false;
*arg_hdr = &hdr;
*arg_pkt = last_data;
return true;
}
int PktSrc::PrecompileFilter(int index, const char* filter)
{
// Compile filter.
BPF_Program* code = new BPF_Program();
if ( ! code->Compile(pd, filter, netmask, errbuf, sizeof(errbuf)) )
{
delete code;
return 0;
}
// Store it in hash.
HashKey* hash = new HashKey(bro_int_t(index));
BPF_Program* oldcode = filters.Lookup(hash);
if ( oldcode )
delete oldcode;
filters.Insert(hash, code);
delete hash;
return 1;
}
int PktSrc::SetFilter(int index)
{
// We don't want load-level filters for the secondary path.
if ( filter_type == TYPE_FILTER_SECONDARY && index > 0 )
return 1;
HashKey* hash = new HashKey(bro_int_t(index));
BPF_Program* code = filters.Lookup(hash);
delete hash;
if ( ! code )
{
safe_snprintf(errbuf, sizeof(errbuf),
"No precompiled pcap filter for index %d",
index);
return 0;
}
if ( pcap_setfilter(pd, code->GetProgram()) < 0 )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_setfilter(%d): %s",
index, pcap_geterr(pd));
return 0;
}
#ifndef HAVE_LINUX
// Linux doesn't clear counters when resetting filter.
stats.received = stats.dropped = stats.link = 0;
#endif
return 1;
}
void PktSrc::SetHdrSize()
{
int dl = pcap_datalink(pd);
hdr_size = get_link_header_size(dl);
if ( hdr_size < 0 )
{
safe_snprintf(errbuf, sizeof(errbuf),
"unknown data link type 0x%x", dl);
Close();
}
datalink = dl;
}
void PktSrc::Close()
{
if ( pd )
{
pcap_close(pd);
pd = 0;
closed = true;
}
}
void PktSrc::AddSecondaryTablePrograms()
{
BPF_Program* program;
loop_over_list(secondary_path->EventTable(), i)
{
SecondaryEvent* se = secondary_path->EventTable()[i];
program = new BPF_Program();
if ( ! program->Compile(snaplen, datalink, se->Filter(),
netmask, errbuf, sizeof(errbuf)) )
{
delete program;
Close();
return;
}
SecondaryProgram* sp = new SecondaryProgram(program, se);
program_list.append(sp);
}
}
void PktSrc::Statistics(Stats* s)
{
if ( reading_traces )
s->received = s->dropped = s->link = 0;
else
{
struct pcap_stat pstat;
if ( pcap_stats(pd, &pstat) < 0 )
{
reporter->Error("problem getting packet filter statistics: %s",
ErrorMsg());
s->received = s->dropped = s->link = 0;
}
else
{
s->dropped = pstat.ps_drop;
s->link = pstat.ps_recv;
}
}
s->received = stats.received;
if ( pseudo_realtime )
s->dropped = 0;
stats.dropped = s->dropped;
}
PktInterfaceSrc::PktInterfaceSrc(const char* arg_interface, const char* filter,
PktSrc_Filter_Type ft)
: PktSrc()
{
char tmp_errbuf[PCAP_ERRBUF_SIZE];
filter_type = ft;
// Determine interface if not specified.
if ( ! arg_interface && ! (arg_interface = pcap_lookupdev(tmp_errbuf)) )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_lookupdev: %s", tmp_errbuf);
return;
}
interface = copy_string(arg_interface);
// Determine network and netmask.
uint32 net;
if ( pcap_lookupnet(interface, &net, &netmask, tmp_errbuf) < 0 )
{
// ### The lookup can fail if no address is assigned to
// the interface; and libpcap doesn't have any useful notion
// of error codes, just error strings - how bogus - so we
// just kludge around the error :-(.
// sprintf(errbuf, "pcap_lookupnet %s", tmp_errbuf);
// return;
net = 0;
netmask = 0xffffff00;
}
// We use the smallest time-out possible to return almost immediately if
// no packets are available. (We can't use set_nonblocking() as it's
// broken on FreeBSD: even when select() indicates that we can read
// something, we may get nothing if the store buffer hasn't filled up
// yet.)
pd = pcap_open_live(interface, snaplen, 1, 1, tmp_errbuf);
if ( ! pd )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_open_live: %s", tmp_errbuf);
closed = true;
return;
}
// ### This needs autoconf'ing.
#ifdef HAVE_PCAP_INT_H
reporter->Info("pcap bufsize = %d\n", ((struct pcap *) pd)->bufsize);
#endif
#ifdef HAVE_LINUX
if ( pcap_setnonblock(pd, 1, tmp_errbuf) < 0 )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_setnonblock: %s", tmp_errbuf);
pcap_close(pd);
closed = true;
return;
}
#endif
selectable_fd = pcap_fileno(pd);
if ( PrecompileFilter(0, filter) && SetFilter(0) )
{
SetHdrSize();
if ( closed )
// Couldn't get header size.
return;
reporter->Info("listening on %s, capture length %d bytes\n", interface, snaplen);
}
else
closed = true;
}
PktFileSrc::PktFileSrc(const char* arg_readfile, const char* filter,
PktSrc_Filter_Type ft)
: PktSrc()
{
readfile = copy_string(arg_readfile);
filter_type = ft;
pd = pcap_open_offline((char*) readfile, errbuf);
if ( pd && PrecompileFilter(0, filter) && SetFilter(0) )
{
SetHdrSize();
if ( closed )
// Unknown link layer type.
return;
// We don't put file sources into non-blocking mode as
// otherwise we would not be able to identify the EOF.
selectable_fd = fileno(pcap_file(pd));
if ( selectable_fd < 0 )
reporter->InternalError("OS does not support selectable pcap fd");
}
else
closed = true;
}
SecondaryPath::SecondaryPath()
{
filter = 0;
// Glue together the secondary filter, if one exists.
Val* secondary_fv = internal_val("secondary_filters");
if ( secondary_fv->AsTableVal()->Size() == 0 )
return;
int did_first = 0;
const TableEntryValPDict* v = secondary_fv->AsTable();
IterCookie* c = v->InitForIteration();
TableEntryVal* tv;
HashKey* h;
while ( (tv = v->NextEntry(h, c)) )
{
// Get the index values.
ListVal* index =
secondary_fv->AsTableVal()->RecoverIndex(h);
const char* str =
index->Index(0)->Ref()->AsString()->CheckString();
if ( ++did_first == 1 )
{
filter = copy_string(str);
}
else
{
if ( strlen(filter) > 0 )
{
char* tmp_f = new char[strlen(str) + strlen(filter) + 32];
if ( strlen(str) == 0 )
sprintf(tmp_f, "%s", filter);
else
sprintf(tmp_f, "(%s) or (%s)", filter, str);
delete [] filter;
filter = tmp_f;
}
}
// Build secondary_path event table item and link it.
SecondaryEvent* se =
new SecondaryEvent(index->Index(0)->Ref()->AsString()->CheckString(),
tv->Value()->AsFunc() );
event_list.append(se);
delete h;
Unref(index);
}
}
SecondaryPath::~SecondaryPath()
{
loop_over_list(event_list, i)
delete event_list[i];
delete [] filter;
}
SecondaryProgram::~SecondaryProgram()
{
delete program;
}
PktDumper::PktDumper(const char* arg_filename, bool arg_append)
{
filename[0] = '\0';
is_error = false;
append = arg_append;
dumper = 0;
open_time = 0.0;
// We need a pcap_t with a reasonable link-layer type. We try to get it
// from the packet sources. If not available, we fall back to Ethernet.
// FIXME: Perhaps we should make this configurable?
int linktype = -1;
if ( pkt_srcs.length() )
linktype = pkt_srcs[0]->LinkType();
if ( linktype < 0 )
linktype = DLT_EN10MB;
pd = pcap_open_dead(linktype, snaplen);
if ( ! pd )
{
Error("error for pcap_open_dead");
return;
}
if ( arg_filename )
Open(arg_filename);
}
bool PktDumper::Open(const char* arg_filename)
{
if ( ! arg_filename && ! *filename )
{
Error("no filename given");
return false;
}
if ( arg_filename )
{
if ( dumper && streq(arg_filename, filename) )
// Already open.
return true;
safe_strncpy(filename, arg_filename, FNBUF_LEN);
}
if ( dumper )
Close();
struct stat s;
int exists = 0;
if ( append )
{
// See if output file already exists (and is non-empty).
exists = stat(filename, &s);
if ( exists < 0 && errno != ENOENT )
{
Error(fmt("can't stat file %s: %s", filename, strerror(errno)));
return false;
}
}
if ( ! append || exists < 0 || s.st_size == 0 )
{
// Open new file.
dumper = pcap_dump_open(pd, filename);
if ( ! dumper )
{
Error(pcap_geterr(pd));
return false;
}
}
else
{
// Old file and we need to append, which, unfortunately,
// is not supported by libpcap. So, we have to hack a
// little bit, knowing that pcap_dumper_t is, in fact,
// a FILE ... :-(
dumper = (pcap_dumper_t*) fopen(filename, "a");
if ( ! dumper )
{
Error(fmt("can't open dump %s: %s", filename, strerror(errno)));
return false;
}
}
open_time = network_time;
is_error = false;
return true;
}
bool PktDumper::Close()
{
if ( dumper )
{
pcap_dump_close(dumper);
dumper = 0;
is_error = false;
}
return true;
}
bool PktDumper::Dump(const struct pcap_pkthdr* hdr, const u_char* pkt)
{
if ( ! dumper )
return false;
if ( ! open_time )
open_time = network_time;
pcap_dump((u_char*) dumper, hdr, pkt);
return true;
}
void PktDumper::Error(const char* errstr)
{
safe_strncpy(errbuf, errstr, sizeof(errbuf));
is_error = true;
}
int get_link_header_size(int dl)
{
switch ( dl ) {
case DLT_NULL:
return 4;
case DLT_EN10MB:
return 14;
case DLT_FDDI:
return 13 + 8; // fddi_header + LLC
#ifdef DLT_LINUX_SLL
case DLT_LINUX_SLL:
return 16;
#endif
case DLT_PPP_SERIAL: // PPP_SERIAL
return 4;
case DLT_RAW:
return 0;
}
return -1;
}

View file

@ -1,258 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef pktsrc_h
#define pktsrc_h
#include "Dict.h"
#include "Expr.h"
#include "BPF_Program.h"
#include "IOSource.h"
#include "RemoteSerializer.h"
#define BRO_PCAP_ERRBUF_SIZE PCAP_ERRBUF_SIZE + 256
extern "C" {
#include <pcap.h>
}
declare(PDict,BPF_Program);
// Whether a PktSrc object is used by the normal filter structure or the
// secondary-path structure.
typedef enum {
TYPE_FILTER_NORMAL, // the normal filter
TYPE_FILTER_SECONDARY, // the secondary-path filter
} PktSrc_Filter_Type;
// {filter,event} tuples forming the secondary path.
class SecondaryEvent {
public:
SecondaryEvent(const char* arg_filter, Func* arg_event)
{
filter = arg_filter;
event = arg_event;
}
const char* Filter() { return filter; }
Func* Event() { return event; }
private:
const char* filter;
Func* event;
};
declare(PList,SecondaryEvent);
typedef PList(SecondaryEvent) secondary_event_list;
class SecondaryPath {
public:
SecondaryPath();
~SecondaryPath();
secondary_event_list& EventTable() { return event_list; }
const char* Filter() { return filter; }
private:
secondary_event_list event_list;
// OR'ed union of all SecondaryEvent filters
char* filter;
};
// Main secondary-path object.
extern SecondaryPath* secondary_path;
// {program, {filter,event}} tuple table.
class SecondaryProgram {
public:
SecondaryProgram(BPF_Program* arg_program, SecondaryEvent* arg_event)
{
program = arg_program;
event = arg_event;
}
~SecondaryProgram();
BPF_Program* Program() { return program; }
SecondaryEvent* Event() { return event; }
private:
// Associated program.
BPF_Program *program;
// Event that is run in case the program is matched.
SecondaryEvent* event;
};
declare(PList,SecondaryProgram);
typedef PList(SecondaryProgram) secondary_program_list;
class PktSrc : public IOSource {
public:
~PktSrc();
// IOSource interface
bool IsReady();
void GetFds(int* read, int* write, int* except);
double NextTimestamp(double* local_network_time);
void Process();
const char* Tag() { return "PktSrc"; }
const char* ErrorMsg() const { return errbuf; }
void ClearErrorMsg() { *errbuf ='\0'; }
// Returns the packet last processed; false if there is no
// current packet available.
bool GetCurrentPacket(const pcap_pkthdr** hdr, const u_char** pkt);
int HdrSize() const { return hdr_size; }
int DataLink() const { return datalink; }
void ConsumePacket() { data = 0; }
int IsLive() const { return interface != 0; }
pcap_t* PcapHandle() const { return pd; }
int LinkType() const { return pcap_datalink(pd); }
const char* ReadFile() const { return readfile; }
const char* Interface() const { return interface; }
PktSrc_Filter_Type FilterType() const { return filter_type; }
void AddSecondaryTablePrograms();
const secondary_program_list& ProgramTable() const
{ return program_list; }
// Signal packet source that processing was suspended and is now going
// to be continued.
void ContinueAfterSuspend();
// Only valid in pseudo-realtime mode.
double CurrentPacketTimestamp() { return current_pseudo; }
double CurrentPacketWallClock();
struct Stats {
unsigned int received; // pkts received (w/o drops)
unsigned int dropped; // pkts dropped
unsigned int link; // total packets on link
// (not always available)
};
virtual void Statistics(Stats* stats);
// Precompiles a filter and associates the given index with it.
// Returns 1 on success, 0 if a problem occurred.
virtual int PrecompileFilter(int index, const char* filter);
// Activates the filter with the given index.
// Returns 1 on success, 0 if a problem occurred.
virtual int SetFilter(int index);
protected:
PktSrc();
static const int PCAP_TIMEOUT = 20;
void SetHdrSize();
virtual void Close();
// Returns 1 on success, 0 on time-out/gone dry.
virtual int ExtractNextPacket();
// Checks if the current packet has a pseudo-time <= current_time.
// If yes, returns pseudo-time, otherwise 0.
double CheckPseudoTime();
double current_timestamp;
double next_timestamp;
// Only set in pseudo-realtime mode.
double first_timestamp;
double first_wallclock;
double current_wallclock;
double current_pseudo;
struct pcap_pkthdr hdr;
const u_char* data; // contents of current packet
const u_char* last_data; // same, but unaffected by consuming
int hdr_size;
int datalink;
double next_sync_point; // For trace synchronization in pseudo-realtime
char* interface; // nil if not reading from an interface
char* readfile; // nil if not reading from a file
pcap_t* pd;
int selectable_fd;
uint32 netmask;
char errbuf[BRO_PCAP_ERRBUF_SIZE];
Stats stats;
PDict(BPF_Program) filters; // precompiled filters
PktSrc_Filter_Type filter_type; // normal path or secondary path
secondary_program_list program_list;
};
class PktInterfaceSrc : public PktSrc {
public:
PktInterfaceSrc(const char* interface, const char* filter,
PktSrc_Filter_Type ft=TYPE_FILTER_NORMAL);
};
class PktFileSrc : public PktSrc {
public:
PktFileSrc(const char* readfile, const char* filter,
PktSrc_Filter_Type ft=TYPE_FILTER_NORMAL);
};
extern int get_link_header_size(int dl);
class PktDumper {
public:
PktDumper(const char* file = 0, bool append = false);
~PktDumper() { Close(); }
bool Open(const char* file = 0);
bool Close();
bool Dump(const struct pcap_pkthdr* hdr, const u_char* pkt);
pcap_dumper_t* PcapDumper() { return dumper; }
const char* FileName() const { return filename; }
bool IsError() const { return is_error; }
const char* ErrorMsg() const { return errbuf; }
// This heuristic will horribly fail if we're using packets
// with different link layers. (If we can't derive a reasonable value
// from the packet sources, our fall-back is Ethernet.)
int HdrSize() const
{ return get_link_header_size(pcap_datalink(pd)); }
// Network time when dump file was opened.
double OpenTime() const { return open_time; }
private:
void InitPd();
void Error(const char* str);
static const int FNBUF_LEN = 1024;
char filename[FNBUF_LEN];
bool append;
pcap_dumper_t* dumper;
pcap_t* pd;
double open_time;
bool is_error;
char errbuf[BRO_PCAP_ERRBUF_SIZE];
};
#endif

View file

@ -188,10 +188,11 @@
#include "File.h"
#include "Conn.h"
#include "Reporter.h"
#include "threading/SerialTypes.h"
#include "logging/Manager.h"
#include "IPAddr.h"
#include "bro_inet_ntop.h"
#include "iosource/Manager.h"
#include "logging/Manager.h"
#include "logging/logging.bif.h"
extern "C" {
#include "setsignal.h"
@ -284,10 +285,10 @@ struct ping_args {
\
if ( ! c ) \
{ \
idle = io->IsIdle();\
SetIdle(io->IsIdle());\
return true; \
} \
idle = false; \
SetIdle(false); \
}
static const char* msgToStr(int msg)
@ -533,7 +534,6 @@ RemoteSerializer::RemoteSerializer()
current_sync_point = 0;
syncing_times = false;
io = 0;
closed = false;
terminating = false;
in_sync = 0;
last_flush = 0;
@ -558,7 +558,7 @@ RemoteSerializer::~RemoteSerializer()
delete io;
}
void RemoteSerializer::Init()
void RemoteSerializer::Enable()
{
if ( initialized )
return;
@ -571,7 +571,7 @@ void RemoteSerializer::Init()
Fork();
io_sources.Register(this);
iosource_mgr->Register(this);
Log(LogInfo, fmt("communication started, parent pid is %d, child pid is %d", getpid(), child_pid));
initialized = 1;
@ -1275,7 +1275,7 @@ bool RemoteSerializer::Listen(const IPAddr& ip, uint16 port, bool expect_ssl,
return false;
listening = true;
closed = false;
SetClosed(false);
return true;
}
@ -1344,7 +1344,7 @@ bool RemoteSerializer::StopListening()
return false;
listening = false;
closed = ! IsActive();
SetClosed(! IsActive());
return true;
}
@ -1382,7 +1382,7 @@ double RemoteSerializer::NextTimestamp(double* local_network_time)
if ( received_logs > 0 )
{
// If we processed logs last time, assume there's more.
idle = false;
SetIdle(false);
received_logs = 0;
return timer_mgr->Time();
}
@ -1397,7 +1397,7 @@ double RemoteSerializer::NextTimestamp(double* local_network_time)
pt = timer_mgr->Time();
if ( packets.length() )
idle = false;
SetIdle(false);
if ( et >= 0 && (et < pt || pt < 0) )
return et;
@ -1452,7 +1452,7 @@ void RemoteSerializer::Process()
// FIXME: The following chunk of code is copied from
// net_packet_dispatch(). We should change that function
// to accept an IOSource instead of the PktSrc.
network_time = p->time;
net_update_time(p->time);
SegmentProfiler(segment_logger, "expiring-timers");
TimerMgr* tmgr = sessions->LookupTimerMgr(GetCurrentTag());
@ -1476,7 +1476,7 @@ void RemoteSerializer::Process()
}
if ( packets.length() )
idle = false;
SetIdle(false);
}
void RemoteSerializer::Finish()
@ -1508,7 +1508,7 @@ bool RemoteSerializer::Poll(bool may_block)
}
io->Flush();
idle = false;
SetIdle(false);
switch ( msgstate ) {
case TYPE:
@ -1690,7 +1690,7 @@ bool RemoteSerializer::DoMessage()
case MSG_TERMINATE:
assert(terminating);
io_sources.Terminate();
iosource_mgr->Terminate();
return true;
case MSG_REMOTE_PRINT:
@ -1878,7 +1878,7 @@ void RemoteSerializer::RemovePeer(Peer* peer)
delete peer->cache_out;
delete peer;
closed = ! IsActive();
SetClosed(! IsActive());
if ( in_sync == peer )
in_sync = 0;
@ -2723,8 +2723,8 @@ bool RemoteSerializer::ProcessLogCreateWriter()
fmt.EndRead();
id_val = new EnumVal(id, BifType::Enum::Log::ID);
writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
id_val = new EnumVal(id, internal_type("Log::ID")->AsEnumType());
writer_val = new EnumVal(writer, internal_type("Log::Writer")->AsEnumType());
if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields,
true, false, true) )
@ -2796,8 +2796,8 @@ bool RemoteSerializer::ProcessLogWrite()
}
}
id_val = new EnumVal(id, BifType::Enum::Log::ID);
writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
id_val = new EnumVal(id, internal_type("Log::ID")->AsEnumType());
writer_val = new EnumVal(writer, internal_type("Log::Writer")->AsEnumType());
success = log_mgr->Write(id_val, writer_val, path, num_fields, vals);
@ -2833,13 +2833,14 @@ void RemoteSerializer::GotEvent(const char* name, double time,
if ( ! current_peer )
{
Error("unserialized event from unknown peer");
delete_vals(args);
return;
}
BufferedEvent* e = new BufferedEvent;
// Our time, not the time when the event was generated.
e->time = pkt_srcs.length() ?
e->time = iosource_mgr->GetPktSrcs().size() ?
time_t(network_time) : time_t(timer_mgr->Time());
e->src = current_peer->id;
@ -2882,6 +2883,7 @@ void RemoteSerializer::GotFunctionCall(const char* name, double time,
if ( ! current_peer )
{
Error("unserialized function from unknown peer");
delete_vals(args);
return;
}
@ -3083,7 +3085,7 @@ RecordVal* RemoteSerializer::GetPeerVal(PeerID id)
void RemoteSerializer::ChildDied()
{
Log(LogError, "child died");
closed = true;
SetClosed(true);
child_pid = 0;
// Shut down the main process as well.
@ -3182,7 +3184,7 @@ void RemoteSerializer::FatalError(const char* msg)
Log(LogError, msg);
reporter->Error("%s", msg);
closed = true;
SetClosed(true);
if ( kill(child_pid, SIGQUIT) < 0 )
reporter->Warning("warning: cannot kill child pid %d, %s", child_pid, strerror(errno));
@ -4209,6 +4211,7 @@ bool SocketComm::Listen()
safe_close(fd);
CloseListenFDs();
listen_next_try = time(0) + bind_retry_interval;
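// Also free the resolved address list on this early-return path;
// bailing out without it leaked the addrinfo chain.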
freeaddrinfo(res0);
return false;
}

View file

@ -6,7 +6,7 @@
#include "Dict.h"
#include "List.h"
#include "Serializer.h"
#include "IOSource.h"
#include "iosource/IOSource.h"
#include "Stats.h"
#include "File.h"
#include "logging/WriterBackend.h"
@ -22,13 +22,13 @@ namespace threading {
}
// This class handles the communication done in Bro's main loop.
class RemoteSerializer : public Serializer, public IOSource {
class RemoteSerializer : public Serializer, public iosource::IOSource {
public:
RemoteSerializer();
virtual ~RemoteSerializer();
// Initialize the remote serializer (calling this will fork).
void Init();
void Enable();
// FIXME: Use SourceID directly (or rename everything to Peer*).
typedef SourceID PeerID;

View file

@ -65,11 +65,11 @@ RuleActionAnalyzer::RuleActionAnalyzer(const char* arg_analyzer)
void RuleActionAnalyzer::PrintDebug()
{
if ( ! child_analyzer )
fprintf(stderr, "|%s|\n", analyzer_mgr->GetComponentName(analyzer));
fprintf(stderr, "|%s|\n", analyzer_mgr->GetComponentName(analyzer).c_str());
else
fprintf(stderr, "|%s:%s|\n",
analyzer_mgr->GetComponentName(analyzer),
analyzer_mgr->GetComponentName(child_analyzer));
analyzer_mgr->GetComponentName(analyzer).c_str(),
analyzer_mgr->GetComponentName(child_analyzer).c_str());
}

View file

@ -19,6 +19,7 @@
#include "Conn.h"
#include "Timer.h"
#include "RemoteSerializer.h"
#include "iosource/Manager.h"
Serializer::Serializer(SerializationFormat* arg_format)
{
@ -1045,7 +1046,7 @@ EventPlayer::EventPlayer(const char* file)
Error(fmt("event replayer: cannot open %s", file));
if ( ReadHeader() )
io_sources.Register(this);
iosource_mgr->Register(this);
}
EventPlayer::~EventPlayer()
@ -1085,7 +1086,7 @@ double EventPlayer::NextTimestamp(double* local_network_time)
{
UnserialInfo info(this);
Unserialize(&info);
closed = io->Eof();
SetClosed(io->Eof());
}
if ( ! ne_time )
@ -1142,7 +1143,7 @@ bool Packet::Serialize(SerialInfo* info) const
static BroFile* profiling_output = 0;
#ifdef DEBUG
static PktDumper* dump = 0;
static iosource::PktDumper* dump = 0;
#endif
Packet* Packet::Unserialize(UnserialInfo* info)
@ -1188,7 +1189,7 @@ Packet* Packet::Unserialize(UnserialInfo* info)
p->hdr = hdr;
p->pkt = (u_char*) pkt;
p->tag = tag;
p->hdr_size = get_link_header_size(p->link_type);
p->hdr_size = iosource::PktSrc::GetLinkHeaderSize(p->link_type);
delete [] tag;
@ -1213,9 +1214,15 @@ Packet* Packet::Unserialize(UnserialInfo* info)
if ( debug_logger.IsEnabled(DBG_TM) )
{
if ( ! dump )
dump = new PktDumper("tm.pcap");
dump = iosource_mgr->OpenPktDumper("tm.pcap", true);
dump->Dump(p->hdr, p->pkt);
if ( dump )
{
iosource::PktDumper::Packet dp;
dp.hdr = p->hdr;
dp.data = p->pkt;
dump->Dump(&dp);
}
}
#endif

View file

@ -15,7 +15,7 @@
#include "SerialInfo.h"
#include "IP.h"
#include "Timer.h"
#include "IOSource.h"
#include "iosource/IOSource.h"
#include "Reporter.h"
class SerializationCache;
@ -350,7 +350,7 @@ public:
};
// Plays a file of events back.
class EventPlayer : public FileSerializer, public IOSource {
class EventPlayer : public FileSerializer, public iosource::IOSource {
public:
EventPlayer(const char* file);
virtual ~EventPlayer();

View file

@ -167,7 +167,7 @@ void NetSessions::Done()
void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
const u_char* pkt, int hdr_size,
PktSrc* src_ps)
iosource::PktSrc* src_ps)
{
const struct ip* ip_hdr = 0;
const u_char* ip_data = 0;
@ -184,10 +184,7 @@ void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
// Blanket encapsulation
hdr_size += encap_hdr_size;
if ( src_ps->FilterType() == TYPE_FILTER_NORMAL )
NextPacket(t, hdr, pkt, hdr_size);
else
NextPacketSecondary(t, hdr, pkt, hdr_size, src_ps);
}
void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
@ -262,53 +259,6 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
DumpPacket(hdr, pkt);
}
void NetSessions::NextPacketSecondary(double /* t */, const struct pcap_pkthdr* hdr,
const u_char* const pkt, int hdr_size,
const PktSrc* src_ps)
{
SegmentProfiler(segment_logger, "processing-secondary-packet");
++num_packets_processed;
uint32 caplen = hdr->caplen - hdr_size;
if ( caplen < sizeof(struct ip) )
{
Weird("truncated_IP", hdr, pkt);
return;
}
const struct ip* ip = (const struct ip*) (pkt + hdr_size);
if ( ip->ip_v == 4 )
{
const secondary_program_list& spt = src_ps->ProgramTable();
loop_over_list(spt, i)
{
SecondaryProgram* sp = spt[i];
if ( ! net_packet_match(sp->Program(), pkt,
hdr->len, hdr->caplen) )
continue;
val_list* args = new val_list;
StringVal* cmd_val =
new StringVal(sp->Event()->Filter());
args->append(cmd_val);
IP_Hdr ip_hdr(ip, false);
args->append(ip_hdr.BuildPktHdrVal());
// ### Need to queue event here.
try
{
sp->Event()->Event()->Call(args);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
delete args;
}
}
}
int NetSessions::CheckConnectionTag(Connection* conn)
{
if ( current_iosrc->GetCurrentTag() )
@ -1440,14 +1390,24 @@ void NetSessions::DumpPacket(const struct pcap_pkthdr* hdr,
return;
if ( len == 0 )
pkt_dumper->Dump(hdr, pkt);
{
iosource::PktDumper::Packet p;
p.hdr = hdr;
p.data = pkt;
pkt_dumper->Dump(&p);
}
else
{
struct pcap_pkthdr h = *hdr;
h.caplen = len;
if ( h.caplen > hdr->caplen )
reporter->InternalError("bad modified caplen");
pkt_dumper->Dump(&h, pkt);
iosource::PktDumper::Packet p;
p.hdr = &h;
p.data = pkt;
pkt_dumper->Dump(&p);
}
}

View file

@ -69,11 +69,11 @@ public:
~NetSessions();
// Main entry point for packet processing. Dispatches the packet
// either through NextPacket() or NextPacketSecondary(), optionally
// employing the packet sorter first.
// through NextPacket(), optionally employing the packet sorter
// first.
void DispatchPacket(double t, const struct pcap_pkthdr* hdr,
const u_char* const pkt, int hdr_size,
PktSrc* src_ps);
iosource::PktSrc* src_ps);
void Done(); // call to drain events before destructing
@ -221,10 +221,6 @@ protected:
void NextPacket(double t, const struct pcap_pkthdr* hdr,
const u_char* const pkt, int hdr_size);
void NextPacketSecondary(double t, const struct pcap_pkthdr* hdr,
const u_char* const pkt, int hdr_size,
const PktSrc* src_ps);
// Record the given packet (if a dumper is active). If len=0
// then the whole packet is recorded, otherwise just the first
// len bytes.
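
A short sketch of the len semantics described above, as called from within NetSessions itself (DumpPacket() is a protected member; the values are illustrative):

DumpPacket(hdr, pkt, 0);	// len = 0: record the whole packet
DumpPacket(hdr, pkt, 128);	// record only the first 128 bytes
// Note: len must not exceed hdr->caplen, or the dump aborts with
// "bad modified caplen" (see the implementation hunk above).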

View file

@ -660,8 +660,13 @@ void Case::Describe(ODesc* d) const
TraversalCode Case::Traverse(TraversalCallback* cb) const
{
TraversalCode tc = cases->Traverse(cb);
TraversalCode tc;
if ( cases )
{
tc = cases->Traverse(cb);
HANDLE_TC_STMT_PRE(tc);
}
tc = s->Traverse(cb);
HANDLE_TC_STMT_PRE(tc);

View file

@ -55,6 +55,7 @@ Tag& Tag::operator=(const Tag& other)
{
type = other.type;
subtype = other.subtype;
Unref(val);
val = other.val;
if ( val )
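
The added Unref(val) plugs a reference leak: the previously held value was overwritten without being released. An illustrative reconstruction of the whole corrected operator, assuming the usual self-assignment guard and a trailing Ref() inferred from the truncated context:

Tag& Tag::operator=(const Tag& other)
	{
	if ( this != &other )
		{
		type = other.type;
		subtype = other.subtype;
		Unref(val);	// release the previously held value
		val = other.val;
		if ( val )
			Ref(val);	// take our own reference
		}

	return *this;
	}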

View file

@ -449,6 +449,11 @@ BroType* IndexType::YieldType()
return yield_type;
}
const BroType* IndexType::YieldType() const
{
return yield_type;
}
void IndexType::Describe(ODesc* d) const
{
BroType::Describe(d);
@ -742,6 +747,11 @@ BroType* FuncType::YieldType()
return yield;
}
const BroType* FuncType::YieldType() const
{
return yield;
}
int FuncType::MatchesIndex(ListExpr*& index) const
{
return check_and_promote_args(index, args) ?
@ -1371,6 +1381,11 @@ void OpaqueType::Describe(ODesc* d) const
d->Add(name.c_str());
}
void OpaqueType::DescribeReST(ODesc* d, bool roles_only) const
{
d->Add(fmt(":bro:type:`%s` of %s", type_name(Tag()), name.c_str()));
}
IMPLEMENT_SERIAL(OpaqueType, SER_OPAQUE_TYPE);
bool OpaqueType::DoSerialize(SerialInfo* info) const
@ -1393,6 +1408,23 @@ bool OpaqueType::DoUnserialize(UnserialInfo* info)
return true;
}
EnumType::EnumType(const string& name)
: BroType(TYPE_ENUM)
{
counter = 0;
SetName(name);
}
EnumType::EnumType(EnumType* e)
: BroType(TYPE_ENUM)
{
counter = e->counter;
SetName(e->GetName());
for ( NameMap::iterator it = e->names.begin(); it != e->names.end(); ++it )
names[copy_string(it->first)] = it->second;
}
EnumType::~EnumType()
{
for ( NameMap::iterator iter = names.begin(); iter != names.end(); ++iter )
@ -1448,6 +1480,12 @@ void EnumType::CheckAndAddName(const string& module_name, const char* name,
broxygen_mgr->Identifier(id);
}
else
{
// We allow double-definitions if they match exactly. This is so that
// an enum can be defined in both a *.bif and a *.bro to avoid
// cyclic dependencies.
if ( id->Name() != make_full_var_name(module_name.c_str(), name)
|| (id->HasVal() && val != id->ID_Val()->AsEnum()) )
{
Unref(id);
reporter->Error("identifier or enumerator value in enumerated type definition already exists");
@ -1455,6 +1493,9 @@ void EnumType::CheckAndAddName(const string& module_name, const char* name,
return;
}
Unref(id);
}
AddNameInternal(module_name, name, val, is_export);
set<BroType*> types = BroType::GetAliases(GetName());
@ -1473,9 +1514,9 @@ void EnumType::AddNameInternal(const string& module_name, const char* name,
names[copy_string(fullname.c_str())] = val;
}
bro_int_t EnumType::Lookup(const string& module_name, const char* name)
bro_int_t EnumType::Lookup(const string& module_name, const char* name) const
{
NameMap::iterator pos =
NameMap::const_iterator pos =
names.find(make_full_var_name(module_name.c_str(), name).c_str());
if ( pos == names.end() )
@ -1484,9 +1525,9 @@ bro_int_t EnumType::Lookup(const string& module_name, const char* name)
return pos->second;
}
const char* EnumType::Lookup(bro_int_t value)
const char* EnumType::Lookup(bro_int_t value) const
{
for ( NameMap::iterator iter = names.begin();
for ( NameMap::const_iterator iter = names.begin();
iter != names.end(); ++iter )
if ( iter->second == value )
return iter->first;
@ -1494,6 +1535,16 @@ const char* EnumType::Lookup(bro_int_t value)
return 0;
}
EnumType::enum_name_list EnumType::Names() const
{
enum_name_list n;
for ( NameMap::const_iterator iter = names.begin();
iter != names.end(); ++iter )
n.push_back(std::make_pair(iter->first, iter->second));
return n;
}
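
A short usage sketch for the new accessor, assuming a populated EnumType* et:

// Iterate the fully qualified name/value pairs exposed by Names().
EnumType::enum_name_list names = et->Names();

for ( EnumType::enum_name_list::const_iterator it = names.begin();
      it != names.end(); ++it )
	printf("%s = %lld\n", it->first.c_str(), (long long) it->second);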
void EnumType::DescribeReST(ODesc* d, bool roles_only) const
{
d->Add(":bro:type:`enum`");
@ -1644,6 +1695,23 @@ BroType* VectorType::YieldType()
return yield_type;
}
const BroType* VectorType::YieldType() const
{
// Work around the fact that we use void internally to mark a vector
// as being unspecified. When looking at its yield type, we need to
// return any as that's what other code historically expects for type
// comparisons.
if ( IsUnspecifiedVector() )
{
BroType* ret = ::base_type(TYPE_ANY);
Unref(ret); // unref, because this won't be held by anyone.
assert(ret);
return ret;
}
return yield_type;
}
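
In effect, an unspecified vector reports any as its yield through the const accessor as well. A hedged sketch, assuming (per the internal convention noted above) that a void yield marks a vector as unspecified:

VectorType* vt = new VectorType(base_type(TYPE_VOID));	// unspecified

if ( vt->IsUnspecifiedVector() )
	assert(vt->YieldType()->Tag() == TYPE_ANY);

Unref(vt);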
int VectorType::MatchesIndex(ListExpr*& index) const
{
expr_list& el = index->Exprs();
@ -1691,7 +1759,17 @@ void VectorType::Describe(ODesc* d) const
yield_type->Describe(d);
}
BroType* base_type(TypeTag tag)
void VectorType::DescribeReST(ODesc* d, bool roles_only) const
{
d->Add(fmt(":bro:type:`%s` of ", type_name(Tag())));
if ( yield_type->GetName().empty() )
yield_type->DescribeReST(d, roles_only);
else
d->Add(fmt(":bro:type:`%s`", yield_type->GetName().c_str()));
}
BroType* base_type_no_ref(TypeTag tag)
{
static BroType* base_types[NUM_TYPES];
@ -1707,7 +1785,7 @@ BroType* base_type(TypeTag tag)
base_types[t]->SetLocationInfo(&l);
}
return base_types[t]->Ref();
return base_types[t];
}
@ -1732,7 +1810,7 @@ static int is_init_compat(const BroType* t1, const BroType* t2)
return 0;
}
int same_type(const BroType* t1, const BroType* t2, int is_init)
int same_type(const BroType* t1, const BroType* t2, int is_init, bool match_record_field_names)
{
if ( t1 == t2 ||
t1->Tag() == TYPE_ANY ||
@ -1788,7 +1866,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
if ( tl1 || tl2 )
{
if ( ! tl1 || ! tl2 || ! same_type(tl1, tl2, is_init) )
if ( ! tl1 || ! tl2 || ! same_type(tl1, tl2, is_init, match_record_field_names) )
return 0;
}
@ -1797,7 +1875,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
if ( y1 || y2 )
{
if ( ! y1 || ! y2 || ! same_type(y1, y2, is_init) )
if ( ! y1 || ! y2 || ! same_type(y1, y2, is_init, match_record_field_names) )
return 0;
}
@ -1815,7 +1893,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
if ( t1->YieldType() || t2->YieldType() )
{
if ( ! t1->YieldType() || ! t2->YieldType() ||
! same_type(t1->YieldType(), t2->YieldType(), is_init) )
! same_type(t1->YieldType(), t2->YieldType(), is_init, match_record_field_names) )
return 0;
}
@ -1835,8 +1913,8 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
const TypeDecl* td1 = rt1->FieldDecl(i);
const TypeDecl* td2 = rt2->FieldDecl(i);
if ( ! streq(td1->id, td2->id) ||
! same_type(td1->type, td2->type, is_init) )
if ( (match_record_field_names && ! streq(td1->id, td2->id)) ||
! same_type(td1->type, td2->type, is_init, match_record_field_names) )
return 0;
}
@ -1852,7 +1930,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
return 0;
loop_over_list(*tl1, i)
if ( ! same_type((*tl1)[i], (*tl2)[i], is_init) )
if ( ! same_type((*tl1)[i], (*tl2)[i], is_init, match_record_field_names) )
return 0;
return 1;
@ -1860,7 +1938,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
case TYPE_VECTOR:
case TYPE_FILE:
return same_type(t1->YieldType(), t2->YieldType(), is_init);
return same_type(t1->YieldType(), t2->YieldType(), is_init, match_record_field_names);
case TYPE_OPAQUE:
{
@ -1870,7 +1948,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
}
case TYPE_TYPE:
return same_type(t1, t2, is_init);
return same_type(t1, t2, is_init, match_record_field_names);
case TYPE_UNION:
reporter->Error("union type in same_type()");

View file

@ -6,6 +6,7 @@
#include <string>
#include <set>
#include <map>
#include <list>
#include "Obj.h"
#include "Attr.h"
@ -334,6 +335,7 @@ public:
TypeList* Indices() const { return indices; }
const type_list* IndexTypes() const { return indices->Types(); }
BroType* YieldType();
const BroType* YieldType() const;
void Describe(ODesc* d) const;
void DescribeReST(ODesc* d, bool roles_only = false) const;
@ -396,6 +398,7 @@ public:
RecordType* Args() const { return args; }
BroType* YieldType();
const BroType* YieldType() const;
void SetYieldType(BroType* arg_yield) { yield = arg_yield; }
function_flavor Flavor() const { return flavor; }
string FlavorString() const;
@ -531,6 +534,7 @@ public:
const string& Name() const { return name; }
void Describe(ODesc* d) const;
void DescribeReST(ODesc* d, bool roles_only = false) const;
protected:
OpaqueType() { }
@ -542,7 +546,10 @@ protected:
class EnumType : public BroType {
public:
EnumType() : BroType(TYPE_ENUM) { counter = 0; }
typedef std::list<std::pair<string, bro_int_t> > enum_name_list;
EnumType(EnumType* e);
EnumType(const string& arg_name);
~EnumType();
// The value of this name is the next internal counter value, starting
@ -555,12 +562,18 @@ public:
void AddName(const string& module_name, const char* name, bro_int_t val, bool is_export);
// -1 indicates not found.
bro_int_t Lookup(const string& module_name, const char* name);
const char* Lookup(bro_int_t value); // Returns 0 if not found
bro_int_t Lookup(const string& module_name, const char* name) const;
const char* Lookup(bro_int_t value) const; // Returns 0 if not found
// Returns the list of defined names with their values. The names
// will be fully qualified with their module name.
enum_name_list Names() const;
void DescribeReST(ODesc* d, bool roles_only = false) const;
protected:
EnumType() { counter = 0; }
DECLARE_SERIAL(EnumType)
void AddNameInternal(const string& module_name,
@ -586,6 +599,7 @@ public:
VectorType(BroType* t);
virtual ~VectorType();
BroType* YieldType();
const BroType* YieldType() const;
int MatchesIndex(ListExpr*& index) const;
@ -594,6 +608,7 @@ public:
bool IsUnspecifiedVector() const;
void Describe(ODesc* d) const;
void DescribeReST(ODesc* d, bool roles_only = false) const;
protected:
VectorType() { yield_type = 0; }
@ -612,15 +627,22 @@ extern OpaqueType* topk_type;
extern OpaqueType* bloomfilter_type;
extern OpaqueType* x509_opaque_type;
// Returns the Bro basic (non-parameterized) type with the given tag.
// The reference count of the type is not increased.
BroType* base_type_no_ref(TypeTag tag);
// Returns the BRO basic (non-parameterized) type with the given tag.
extern BroType* base_type(TypeTag tag);
// The caller assumes responsibility for a reference to the type.
inline BroType* base_type(TypeTag tag)
{ return base_type_no_ref(tag)->Ref(); }
// Returns the BRO basic error type.
inline BroType* error_type() { return base_type(TYPE_ERROR); }
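
The split makes the reference-count contract explicit; a small sketch:

BroType* owned = base_type(TYPE_COUNT);	// caller holds a reference
// ... use owned ...
Unref(owned);	// and must release it

BroType* borrowed = base_type_no_ref(TYPE_COUNT);	// borrowed; never Unref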
// True if the two types are equivalent. If is_init is true then the
// test is done in the context of an initialization.
extern int same_type(const BroType* t1, const BroType* t2, int is_init=0);
// True if the two types are equivalent. If is_init is true then the test is
// done in the context of an initialization. If match_record_field_names is
// true then for record types the field names have to match, too.
extern int same_type(const BroType* t1, const BroType* t2, int is_init=0, bool match_record_field_names=true);
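
A hedged illustration of the new parameter, where a and b are hypothetical record types that differ only in their field names (say, record { host: addr; } versus record { ip: addr; }):

int strict = same_type(a, b);	// 0: field names differ
int loose = same_type(a, b, 0, false);	// 1: only field types compared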
// True if the two attribute lists are equivalent.
extern int same_attrs(const Attributes* a1, const Attributes* a2);

View file

@ -465,10 +465,7 @@ void Val::Describe(ODesc* d) const
d->SP();
}
if ( d->IsReadable() )
ValDescribe(d);
else
Val::ValDescribe(d);
}
void Val::DescribeReST(ODesc* d) const
@ -1152,7 +1149,7 @@ bool PatternVal::DoUnserialize(UnserialInfo* info)
}
ListVal::ListVal(TypeTag t)
: Val(new TypeList(t == TYPE_ANY ? 0 : base_type(t)))
: Val(new TypeList(t == TYPE_ANY ? 0 : base_type_no_ref(t)))
{
tag = t;
}
@ -1471,13 +1468,20 @@ int TableVal::Assign(Val* index, HashKey* k, Val* new_val, Opcode op)
}
TableEntryVal* new_entry_val = new TableEntryVal(new_val);
HashKey k_copy(k->Key(), k->Size(), k->Hash());
TableEntryVal* old_entry_val = AsNonConstTable()->Insert(k, new_entry_val);
// If the dictionary index already existed, the insert may free up the
// memory allocated to the key bytes, so we have to assume k is invalid
// from here on out.
delete k;
k = 0;
if ( subnets )
{
if ( ! index )
{
Val* v = RecoverIndex(k);
Val* v = RecoverIndex(&k_copy);
subnets->Insert(v, new_entry_val);
Unref(v);
}
@ -1489,7 +1493,7 @@ int TableVal::Assign(Val* index, HashKey* k, Val* new_val, Opcode op)
{
Val* rec_index = 0;
if ( ! index )
index = rec_index = RecoverIndex(k);
index = rec_index = RecoverIndex(&k_copy);
if ( new_val )
{
@ -1547,7 +1551,6 @@ int TableVal::Assign(Val* index, HashKey* k, Val* new_val, Opcode op)
if ( old_entry_val && attrs && attrs->FindAttr(ATTR_EXPIRE_CREATE) )
new_entry_val->SetExpireAccess(old_entry_val->ExpireAccessTime());
delete k;
if ( old_entry_val )
{
old_entry_val->Unref();
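
The fix pins down key ownership; a condensed sketch of the pattern (dict stands in for the table's dictionary):

// Copy the key material first: Insert() may free the original key's
// bytes when the index already exists.
HashKey k_copy(k->Key(), k->Size(), k->Hash());
TableEntryVal* old_entry_val = dict->Insert(k, new_entry_val);

delete k;	// k may already be invalid; never touch it again
k = 0;

Val* index = RecoverIndex(&k_copy);	// use only the copy from here on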

View file

@ -9,6 +9,7 @@
#include "Serializer.h"
#include "RemoteSerializer.h"
#include "EventRegistry.h"
#include "Traverse.h"
static Val* init_val(Expr* init, const BroType* t, Val* aggr)
{
@ -392,6 +393,34 @@ void begin_func(ID* id, const char* module_name, function_flavor flavor,
}
}
class OuterIDBindingFinder : public TraversalCallback {
public:
OuterIDBindingFinder(Scope* s)
: scope(s) { }
virtual TraversalCode PreExpr(const Expr*);
Scope* scope;
vector<const NameExpr*> outer_id_references;
};
TraversalCode OuterIDBindingFinder::PreExpr(const Expr* expr)
{
if ( expr->Tag() != EXPR_NAME )
return TC_CONTINUE;
const NameExpr* e = static_cast<const NameExpr*>(expr);
if ( e->Id()->IsGlobal() )
return TC_CONTINUE;
if ( scope->GetIDs()->Lookup(e->Id()->Name()) )
return TC_CONTINUE;
outer_id_references.push_back(e);
return TC_CONTINUE;
}
void end_func(Stmt* body, attr_list* attrs)
{
int frame_size = current_scope()->Length();
@ -429,6 +458,16 @@ void end_func(Stmt* body, attr_list* attrs)
}
}
if ( streq(id->Name(), "anonymous-function") )
{
OuterIDBindingFinder cb(scope);
body->Traverse(&cb);
for ( size_t i = 0; i < cb.outer_id_references.size(); ++i )
cb.outer_id_references[i]->Error(
"referencing outer function IDs not supported");
}
if ( id->HasVal() )
id->ID_Val()->AsFunc()->AddBody(body, inits, frame_size, priority);
else
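
The same traversal machinery can host other checks; a minimal sketch of a custom callback (MyFinder is hypothetical):

#include "Traverse.h"

// Collect every name expression in a statement body. Returning
// TC_CONTINUE keeps the traversal descending into child nodes.
class MyFinder : public TraversalCallback {
public:
	virtual TraversalCode PreExpr(const Expr* expr)
		{
		if ( expr->Tag() == EXPR_NAME )
			names.push_back(static_cast<const NameExpr*>(expr));

		return TC_CONTINUE;
		}

	vector<const NameExpr*> names;
};

// Usage, given a function body 'body':
MyFinder cb;
body->Traverse(&cb);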

View file

@ -4,6 +4,7 @@
#include "Analyzer.h"
#include "Manager.h"
#include "binpac.h"
#include "analyzer/protocol/pia/PIA.h"
#include "../Event.h"
@ -75,7 +76,7 @@ analyzer::ID Analyzer::id_counter = 0;
const char* Analyzer::GetAnalyzerName() const
{
assert(tag);
return analyzer_mgr->GetComponentName(tag);
return analyzer_mgr->GetComponentName(tag).c_str();
}
void Analyzer::SetAnalyzerTag(const Tag& arg_tag)
@ -87,7 +88,7 @@ void Analyzer::SetAnalyzerTag(const Tag& arg_tag)
bool Analyzer::IsAnalyzer(const char* name)
{
assert(tag);
return strcmp(analyzer_mgr->GetComponentName(tag), name) == 0;
return strcmp(analyzer_mgr->GetComponentName(tag).c_str(), name) == 0;
}
// Used in debugging output.
@ -642,12 +643,12 @@ void Analyzer::FlipRoles()
resp_supporters = tmp;
}
void Analyzer::ProtocolConfirmation()
void Analyzer::ProtocolConfirmation(Tag arg_tag)
{
if ( protocol_confirmed )
return;
EnumVal* tval = tag.AsEnumVal();
EnumVal* tval = arg_tag ? arg_tag.AsEnumVal() : tag.AsEnumVal();
Ref(tval);
val_list* vl = new val_list;

View file

@ -97,8 +97,8 @@ public:
/**
* Constructor. As this version of the constructor does not receive a
* name or tag, setTag() must be called before the instance can be
* used.
* name or tag, SetAnalyzerTag() must be called before the instance
* can be used.
*
* @param conn The connection the analyzer is associated with.
*/
@ -471,8 +471,11 @@ public:
* may turn into \c protocol_confirmed event at the script-layer (but
* only once per analyzer for each connection, even if the method is
* called multiple times).
*
* If tag is given, it overrides the analyzer tag passed to the
* scripting layer; the default is the one of the analyzer itself.
*/
virtual void ProtocolConfirmation();
virtual void ProtocolConfirmation(Tag tag = Tag());
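
A sketch of both confirmation forms as called from inside an analyzer ("AYIYA" is illustrative):

ProtocolConfirmation();	// report the analyzer's own tag

// Report a different tag, e.g. when a generic analyzer confirms a
// more specific protocol on the connection:
ProtocolConfirmation(analyzer_mgr->GetAnalyzerTag("AYIYA"));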
/**
* Signals Bro's protocol detection that the analyzer has found a

View file

@ -8,62 +8,29 @@
using namespace analyzer;
Component::Component(const char* arg_name, factory_callback arg_factory, Tag::subtype_t arg_subtype, bool arg_enabled, bool arg_partial)
: plugin::Component(plugin::component::ANALYZER),
Component::Component(const std::string& name, factory_callback arg_factory, Tag::subtype_t arg_subtype, bool arg_enabled, bool arg_partial)
: plugin::Component(plugin::component::ANALYZER, name),
plugin::TaggedComponent<analyzer::Tag>(arg_subtype)
{
name = copy_string(arg_name);
canon_name = canonify_name(arg_name);
factory = arg_factory;
enabled = arg_enabled;
partial = arg_partial;
}
Component::Component(const Component& other)
: plugin::Component(Type()),
plugin::TaggedComponent<analyzer::Tag>(other)
{
name = copy_string(other.name);
canon_name = copy_string(other.canon_name);
factory = other.factory;
enabled = other.enabled;
partial = other.partial;
analyzer_mgr->RegisterComponent(this, "ANALYZER_");
}
Component::~Component()
{
delete [] name;
delete [] canon_name;
}
void Component::Describe(ODesc* d) const
void Component::DoDescribe(ODesc* d) const
{
plugin::Component::Describe(d);
d->Add(name);
d->Add(" (");
if ( factory )
{
d->Add("ANALYZER_");
d->Add(canon_name);
d->Add(CanonicalName());
d->Add(", ");
}
d->Add(enabled ? "enabled" : "disabled");
d->Add(")");
}
Component& Component::operator=(const Component& other)
{
plugin::TaggedComponent<analyzer::Tag>::operator=(other);
if ( &other != this )
{
name = copy_string(other.name);
factory = other.factory;
enabled = other.enabled;
partial = other.partial;
}
return *this;
}
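
Derived components now override the DoDescribe() hook instead of Describe(); a minimal sketch (MyComponent is hypothetical):

// The base class prints the component name; DoDescribe() adds only
// the component-specific details shown by "bro -NN".
void MyComponent::DoDescribe(ODesc* d) const
	{
	d->Add("ANALYZER_");
	d->Add(CanonicalName());
	d->Add(", enabled");
	}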

View file

@ -1,7 +1,7 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef ANALYZER_PLUGIN_COMPONENT_H
#define ANALYZER_PLUGIN_COMPONENT_H
#ifndef ANALYZER_COMPONENT_H
#define ANALYZER_COMPONENT_H
#include "Tag.h"
#include "plugin/Component.h"
@ -56,34 +56,13 @@ public:
* connections has generally not seen much testing yet as virtually
* no existing analyzer supports it.
*/
Component(const char* name, factory_callback factory, Tag::subtype_t subtype = 0, bool enabled = true, bool partial = false);
/**
* Copy constructor.
*/
Component(const Component& other);
Component(const std::string& name, factory_callback factory, Tag::subtype_t subtype = 0, bool enabled = true, bool partial = false);
/**
* Destructor.
*/
~Component();
/**
* Returns the name of the analyzer. This name is unique across all
* analyzers and used to identify it. The returned name is derived
* from what's passed to the constructor but upper-cased and
* canonified to allow being part of a script-level ID.
*/
virtual const char* Name() const { return name; }
/**
* Returns a canonicalized version of the analyzer's name. The
* returned name is derived from what's passed to the constructor but
* upper-cased and transformed to allow being part of a script-level
* ID.
*/
const char* CanonicalName() const { return canon_name; }
/**
* Returns the analyzer's factory function.
*/
@ -110,17 +89,13 @@ public:
*/
void SetEnabled(bool arg_enabled) { enabled = arg_enabled; }
protected:
/**
* Generates a human-readable description of the component's main
* parameters. This goes into the output of \c "bro -NN".
* Overridden from plugin::Component.
*/
virtual void Describe(ODesc* d) const;
Component& operator=(const Component& other);
virtual void DoDescribe(ODesc* d) const;
private:
const char* name; // The analyzer's name.
const char* canon_name; // The analyzer's canonical name.
factory_callback factory; // The analyzer's factory callback.
bool partial; // True if the analyzer supports partial connections.
bool enabled; // True if the analyzer is enabled.

View file

@ -60,7 +60,7 @@ bool Manager::ConnIndex::operator<(const ConnIndex& other) const
}
Manager::Manager()
: plugin::ComponentManager<analyzer::Tag, analyzer::Component>("Analyzer")
: plugin::ComponentManager<analyzer::Tag, analyzer::Component>("Analyzer", "Tag")
{
}
@ -86,11 +86,6 @@ Manager::~Manager()
void Manager::InitPreScript()
{
std::list<Component*> analyzers = plugin_mgr->Components<Component>();
for ( std::list<Component*>::const_iterator i = analyzers.begin(); i != analyzers.end(); i++ )
RegisterComponent(*i, "ANALYZER_");
// Cache these tags.
analyzer_backdoor = GetComponentTag("BACKDOOR");
analyzer_connsize = GetComponentTag("CONNSIZE");
@ -109,7 +104,8 @@ void Manager::DumpDebug()
DBG_LOG(DBG_ANALYZER, "Available analyzers after bro_init():");
list<Component*> all_analyzers = GetComponents();
for ( list<Component*>::const_iterator i = all_analyzers.begin(); i != all_analyzers.end(); ++i )
DBG_LOG(DBG_ANALYZER, " %s (%s)", (*i)->Name(), IsEnabled((*i)->Tag()) ? "enabled" : "disabled");
DBG_LOG(DBG_ANALYZER, " %s (%s)", (*i)->Name().c_str(),
IsEnabled((*i)->Tag()) ? "enabled" : "disabled");
DBG_LOG(DBG_ANALYZER, "");
DBG_LOG(DBG_ANALYZER, "Analyzers by port:");
@ -148,7 +144,7 @@ bool Manager::EnableAnalyzer(Tag tag)
if ( ! p )
return false;
DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name());
DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name().c_str());
p->SetEnabled(true);
return true;
@ -161,7 +157,7 @@ bool Manager::EnableAnalyzer(EnumVal* val)
if ( ! p )
return false;
DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name());
DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name().c_str());
p->SetEnabled(true);
return true;
@ -174,7 +170,7 @@ bool Manager::DisableAnalyzer(Tag tag)
if ( ! p )
return false;
DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name());
DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name().c_str());
p->SetEnabled(false);
return true;
@ -187,7 +183,7 @@ bool Manager::DisableAnalyzer(EnumVal* val)
if ( ! p )
return false;
DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name());
DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name().c_str());
p->SetEnabled(false);
return true;
@ -202,6 +198,11 @@ void Manager::DisableAllAnalyzers()
(*i)->SetEnabled(false);
}
analyzer::Tag Manager::GetAnalyzerTag(const char* name)
{
return GetComponentTag(name);
}
bool Manager::IsEnabled(Tag tag)
{
if ( ! tag )
@ -254,7 +255,7 @@ bool Manager::RegisterAnalyzerForPort(Tag tag, TransportProto proto, uint32 port
return false;
#ifdef DEBUG
const char* name = GetComponentName(tag);
const char* name = GetComponentName(tag).c_str();
DBG_LOG(DBG_ANALYZER, "Registering analyzer %s for port %" PRIu32 "/%d", name, port, proto);
#endif
@ -270,7 +271,7 @@ bool Manager::UnregisterAnalyzerForPort(Tag tag, TransportProto proto, uint32 po
return true; // still a "successful" unregistration
#ifdef DEBUG
const char* name = GetComponentName(tag);
const char* name = GetComponentName(tag).c_str();
DBG_LOG(DBG_ANALYZER, "Unregistering analyzer %s for port %" PRIu32 "/%d", name, port, proto);
#endif
@ -293,7 +294,8 @@ Analyzer* Manager::InstantiateAnalyzer(Tag tag, Connection* conn)
if ( ! c->Factory() )
{
reporter->InternalWarning("analyzer %s cannot be instantiated dynamically", GetComponentName(tag));
reporter->InternalWarning("analyzer %s cannot be instantiated dynamically",
GetComponentName(tag).c_str());
return 0;
}
@ -413,7 +415,7 @@ bool Manager::BuildInitialAnalyzerTree(Connection* conn)
root->AddChildAnalyzer(analyzer, false);
DBG_ANALYZER_ARGS(conn, "activated %s analyzer due to port %d",
analyzer_mgr->GetComponentName(*j), resp_port);
analyzer_mgr->GetComponentName(*j).c_str(), resp_port);
}
}
}
@ -532,7 +534,7 @@ void Manager::ExpireScheduledAnalyzers()
conns.erase(i);
DBG_LOG(DBG_ANALYZER, "Expiring expected analyzer %s for connection %s",
analyzer_mgr->GetComponentName(a->analyzer),
analyzer_mgr->GetComponentName(a->analyzer).c_str(),
fmt_conn_id(a->conn.orig, 0, a->conn.resp, a->conn.resp_p));
delete a;
@ -638,7 +640,7 @@ bool Manager::ApplyScheduledAnalyzers(Connection* conn, bool init, TransportLaye
conn->Event(scheduled_analyzer_applied, 0, tag);
DBG_ANALYZER_ARGS(conn, "activated %s analyzer as scheduled",
analyzer_mgr->GetComponentName(*it));
analyzer_mgr->GetComponentName(*it).c_str());
}
return expected.size();
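
GetComponentName() now returns a std::string by value, so call sites take .c_str() only for immediate use; a sketch of the pattern:

// Safe: the named string outlives the c_str() pointer.
std::string name = analyzer_mgr->GetComponentName(tag);
DBG_LOG(DBG_ANALYZER, "analyzer %s", name.c_str());

// Also fine inline, since the temporary lives to the end of the
// full expression:
DBG_LOG(DBG_ANALYZER, "analyzer %s",
	analyzer_mgr->GetComponentName(tag).c_str());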

View file

@ -45,10 +45,6 @@ namespace analyzer {
* sets up their initial analyzer tree, including adding the right \c PIA,
* respecting well-known ports, and tracking any analyzers specifically
* scheduled for individual connections.
*
* Note that we keep the public interface of this class free of std::*
* classes. This allows external analyzer code to potentially use a
* different C++ standard library.
*/
class Manager : public plugin::ComponentManager<Tag, Component> {
public:
@ -133,6 +129,14 @@ public:
*/
void DisableAllAnalyzers();
/**
* Returns the tag associated with an analyzer name, or the tag
* associated with an error if no such analyzer exists.
*
* @param name The canonical analyzer name to check.
*/
Tag GetAnalyzerTag(const char* name);
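
Usage sketch, pairing the lookup with a disable call from the hunks above ("HTTP" is illustrative):

// An invalid tag evaluates false, signaling an unknown analyzer name.
analyzer::Tag tag = analyzer_mgr->GetAnalyzerTag("HTTP");

if ( tag )
	analyzer_mgr->DisableAnalyzer(tag);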
/**
* Returns true if an analyzer is enabled.
*

View file

@ -1,7 +1,21 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h"
BRO_PLUGIN_BEGIN(Bro, ARP)
BRO_PLUGIN_DESCRIPTION("ARP Parsing Code");
BRO_PLUGIN_BIF_FILE(events);
BRO_PLUGIN_END
namespace plugin {
namespace Bro_ARP {
class Plugin : public plugin::Plugin {
public:
plugin::Configuration Configure()
{
plugin::Configuration config;
config.name = "Bro::ARP";
config.description = "ARP Parsing";
return config;
}
} plugin;
}
}

View file

@ -14,7 +14,7 @@ public:
virtual void DeliverPacket(int len, const u_char* data, bool orig,
uint64 seq, const IP_Hdr* ip, int caplen);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new AYIYA_Analyzer(conn); }
protected:

View file

@ -1,10 +1,25 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h"
#include "AYIYA.h"
BRO_PLUGIN_BEGIN(Bro, AYIYA)
BRO_PLUGIN_DESCRIPTION("AYIYA Analyzer");
BRO_PLUGIN_ANALYZER("AYIYA", ayiya::AYIYA_Analyzer);
BRO_PLUGIN_BIF_FILE(events);
BRO_PLUGIN_END
namespace plugin {
namespace Bro_AYIYA {
class Plugin : public plugin::Plugin {
public:
plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("AYIYA", ::analyzer::ayiya::AYIYA_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::AYIYA";
config.description = "AYIYA Analyzer";
return config;
}
} plugin;
}
}

View file

@ -73,7 +73,7 @@ public:
virtual void Done();
void StatTimer(double t, int is_expire);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new BackDoor_Analyzer(conn); }
protected:

View file

@ -1,10 +1,25 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h"
#include "BackDoor.h"
BRO_PLUGIN_BEGIN(Bro, BackDoor)
BRO_PLUGIN_DESCRIPTION("Backdoor Analyzer (deprecated)");
BRO_PLUGIN_ANALYZER("BackDoor", backdoor::BackDoor_Analyzer);
BRO_PLUGIN_BIF_FILE(events);
BRO_PLUGIN_END
namespace plugin {
namespace Bro_BackDoor {
class Plugin : public plugin::Plugin {
public:
plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("BackDoor", ::analyzer::backdoor::BackDoor_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::BackDoor";
config.description = "Backdoor Analyzer deprecated";
return config;
}
} plugin;
}
}

View file

@ -19,7 +19,7 @@ public:
virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new BitTorrent_Analyzer(conn); }
protected:

View file

@ -52,7 +52,7 @@ public:
virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new BitTorrentTracker_Analyzer(conn); }
protected:

Some files were not shown because too many files have changed in this diff.