Merge remote-tracking branch 'origin/master' into topic/vladg/smb

Conflicts:
	src/analyzer/protocol/smb/Plugin.cc
Commit 6ee2ec666f by Vlad Grigorescu, 2014-09-24 18:38:43 -04:00
451 changed files with 105777 additions and 85781 deletions

.gitmodules
@ -19,3 +19,6 @@
[submodule "src/3rdparty"]
	path = src/3rdparty
	url = git://git.bro.org/bro-3rdparty
[submodule "aux/plugins"]
path = aux/plugins
url = git://git.bro.org/bro-plugins

CHANGES

@ -1,4 +1,234 @@
2.3-183 | 2014-09-24 10:08:04 -0500
* Add a "node" field to Intel::Seen structure and intel.log to
indicate which node discovered a hit on an intel item. (Seth Hall)
* BIT-1261: Fixes to plugin quick start doc. (Jon Siwek)
2.3-180 | 2014-09-22 12:52:41 -0500
* BIT-1259: Fix issue w/ duplicate TCP reassembly deliveries.
(Jon Siwek)
2.3-178 | 2014-09-18 14:29:46 -0500
* BIT-1256: Fix file analysis events from coming after bro_done().
(Jon Siwek)
2.3-177 | 2014-09-17 09:41:27 -0500
* Documentation fixes. (Chris Mavrakis)
2.3-174 | 2014-09-17 09:37:09 -0500
* Fixed some "make doc" warnings caused by reST formatting
(Daniel Thayer).
2.3-172 | 2014-09-15 13:38:52 -0500
* Remove unneeded allocations for HTTP messages. (Jon Siwek)
2.3-171 | 2014-09-15 11:14:57 -0500
* Fix a compile error on systems without pcap-int.h. (Jon Siwek)
2.3-170 | 2014-09-12 19:28:01 -0700
* Fix incorrect data delivery skips after gap in HTTP Content-Range.
Addresses BIT-1247. (Jon Siwek)
* Fix file analysis placement of data after gap in HTTP
Content-Range. Addresses BIT-1248. (Jon Siwek)
* Fix issue w/ TCP reassembler not delivering some segments.
Addresses BIT-1246. (Jon Siwek)
* Fix MIME entity file data/gap ordering and raise http_entity_data
in line with data arrival. Addresses BIT-1240. (Jon Siwek)
* Implement file ID caching for MIME_Mail. (Jon Siwek)
* Fix a compile error. (Jon Siwek)
2.3-161 | 2014-09-09 12:35:38 -0500
* Bugfixes and test updates/additions. (Robin Sommer)
* Interface tweaks and docs for PktSrc/PktDumper. (Robin Sommer)
* Moving PCAP-related bifs to iosource/pcap.bif. (Robin Sommer)
* Moving some of the BPF filtering code into base class.
This will allow packet sources that don't support BPF natively to
emulate the filtering via libpcap. (Robin Sommer)
* Removing FlowSrc. (Robin Sommer)
* Removing remaining pieces of the 2ndary path, and left-over
files of packet sorter. (Robin Sommer)
* A bunch of infrastructure work to move IOSource, IOSourceRegistry
(now iosource::Manager) and PktSrc/PktDumper code into iosource/,
and over to a plugin structure. (Robin Sommer)
2.3-137 | 2014-09-08 19:01:13 -0500
* Fix Broxygen's rendering of opaque types. (Jon Siwek)
2.3-136 | 2014-09-07 20:50:46 -0700
* Change more http links to https. (Johanna Amann)
2.3-134 | 2014-09-04 16:16:36 -0700
* Fixed a number of issues with OCSP reply validation. Addresses
BIT-1212. (Johanna Amann)
* Fix null pointer dereference in OCSP verification code in case no
certificate is sent as part as the ocsp reply. Addresses BIT-1212.
(Johanna Amann)
2.3-131 | 2014-09-04 16:10:32 -0700
* Make links in documentation templates protocol relative. (Johanna
Amann)
2.3-129 | 2014-09-02 17:21:21 -0700
* Simplify a conditional with equivalent branches. (Jon Siwek)
* Change EDNS parsing code to use rdlength more cautiously. (Jon
Siwek)
* Fix a memory leak when bind() fails due to EADDRINUSE. (Jon Siwek)
* Fix possible buffer over-read in DNS TSIG parsing. (Jon Siwek)
2.3-124 | 2014-08-26 09:24:19 -0500
* Better documentation for sub_bytes (Jimmy Jones)
* BIT-1234: Fix build on systems that already have ntohll/htonll
(Jon Siwek)
2.3-121 | 2014-08-22 15:22:15 -0700
* Detect functions that try to bind variables from an outer scope
and raise an error saying that's not supported. Addresses
BIT-1233. (Jon Siwek)
2.3-116 | 2014-08-21 16:04:13 -0500
* Adding plugin testing to Makefile's test-all. (Robin Sommer)
* Converting log writers and input readers to plugins.
DataSeries and ElasticSearch plugins have moved to the new
bro-plugins repository, which is now a git submodule in the
aux/plugins directory. (Robin Sommer)
2.3-98 | 2014-08-19 11:03:46 -0500
* Silence some doc-related warnings when using `bro -e`.
Closes BIT-1232. (Jon Siwek)
* Fix possible null ptr derefs reported by Coverity. (Jon Siwek)
2.3-96 | 2014-08-01 14:35:01 -0700
* Small change to DHCP documentation. In server->client messages the
host name may differ from the one requested by the client.
(Johanna Amann)
* Split DHCP log writing from record creation. This allows users to
customize dhcp.log by changing the record in their own dhcp_ack
event. (Johanna Amann)
* Update PATH so that documentation btests can find bro-cut. (Daniel
Thayer)
* Remove gawk from list of optional packages in documentation.
(Daniel Thayer)
* Fix for redefining built-in constants. (Robin Sommer)
2.3-86 | 2014-07-31 14:19:58 -0700
* Fix for redefining built-in constants. (Robin Sommer)
* Adding missing check that a plugin's API version matches what Bro
defines. (Robin Sommer)
* Adding NEWS entry for plugins. (Robin Sommer)
2.3-83 | 2014-07-30 16:26:11 -0500
* Minor adjustments to plugin code/docs. (Jon Siwek)
* Dynamic plugin support. (Robin Sommer)
Bro now supports extending core functionality, like protocol and
file analysis, dynamically with external plugins in the form of
shared libraries. See doc/devel/plugins.rst for an overview of the
main functionality. Changes coming with this:
- Replacing the old Plugin macro magic with a new API.
- The plugin API changed to generally use std::strings instead
of const char*.
- There are a number of invocations of PLUGIN_HOOK_
{VOID,WITH_RESULT} across the code base, which allow plugins
to hook into the processing at those locations.
- A few new accessor methods to various classes to allow
plugins to get to that information.
- network_time cannot be just assigned to anymore, there's now
function net_update_time() for that.
- Redoing how builtin variables are initialized, so that it
works for plugins as well. No more init_net_var(), but
instead bifcl-generated code that registers them.
- Various changes for adjusting to the now dynamic generation
of analyzer instances.
- same_type() gets an optional extra argument allowing record type
comparison to ignore if field names don't match. (Robin Sommer)
- Further unify file analysis API with the protocol analyzer API
(assigning IDs to analyzers; adding Init()/Done() methods;
adding subtypes). (Robin Sommer)
- A new command line option -Q that prints some basic execution
time stats. (Robin Sommer)
- Add support to the file analysis for activating analyzers by
MIME type. (Robin Sommer)
- File::register_for_mime_type(tag: Analyzer::Tag, mt:
string): Associates a file analyzer with a MIME type.
- File::add_analyzers_for_mime_type(f: fa_file, mtype:
string): Activates all analyzers registered for a MIME
type for the file.
- The default file_new() handler calls
File::add_analyzers_for_mime_type() with the file's MIME
type.
2.3-20 | 2014-07-22 17:41:02 -0700
* Updating submodule(s).
2.3-19 | 2014-07-22 17:29:19 -0700
* Implement bytestring_to_coils() in Modbus analyzer so that coils
gets passed to the corresponding events. (Hui Lin)
* Add length field to ModbusHeaders. (Hui Lin)
2.3-12 | 2014-07-10 19:17:37 -0500
* Include yield of vectors in Broxygen's type descriptions.


@ -1,5 +1,9 @@
project(Bro C CXX)
# When changing the minimum version here, also adapt
# aux/bro-aux/plugin-support/skeleton/CMakeLists.txt
cmake_minimum_required(VERSION 2.6.3 FATAL_ERROR)
include(cmake/CommonCMakeConfig.cmake)
########################################################################
@ -16,12 +20,18 @@ endif ()
get_filename_component(BRO_SCRIPT_INSTALL_PATH ${BRO_SCRIPT_INSTALL_PATH}
ABSOLUTE)
set(BRO_PLUGIN_INSTALL_PATH ${BRO_ROOT_DIR}/lib/bro/plugins CACHE STRING "Installation path for plugins" FORCE)
configure_file(bro-path-dev.in ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev)
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.sh
"export BROPATH=`${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
"export BRO_PLUGIN_PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src:${BRO_PLUGIN_INSTALL_PATH}\"\n"
"export PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.csh
"setenv BROPATH `${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
"setenv BRO_PLUGIN_PATH \"${CMAKE_CURRENT_BINARY_DIR}/src:${BRO_PLUGIN_INSTALL_PATH}\"\n"
"setenv PATH \"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/VERSION" VERSION LIMIT_COUNT 1)
@ -117,33 +127,6 @@ if (GOOGLEPERFTOOLS_FOUND)
endif ()
endif ()
set(USE_DATASERIES false)
find_package(Lintel)
find_package(DataSeries)
find_package(LibXML2)
if (NOT DISABLE_DATASERIES AND
LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
set(USE_DATASERIES true)
include_directories(BEFORE ${Lintel_INCLUDE_DIR})
include_directories(BEFORE ${DataSeries_INCLUDE_DIR})
include_directories(BEFORE ${LibXML2_INCLUDE_DIR})
list(APPEND OPTLIBS ${Lintel_LIBRARIES})
list(APPEND OPTLIBS ${DataSeries_LIBRARIES})
list(APPEND OPTLIBS ${LibXML2_LIBRARIES})
endif()
set(USE_ELASTICSEARCH false)
set(USE_CURL false)
find_package(LibCURL)
if (NOT DISABLE_ELASTICSEARCH AND LIBCURL_FOUND)
set(USE_ELASTICSEARCH true)
set(USE_CURL true)
include_directories(BEFORE ${LibCURL_INCLUDE_DIR})
list(APPEND OPTLIBS ${LibCURL_LIBRARIES})
endif()
if (ENABLE_PERFTOOLS_DEBUG OR ENABLE_PERFTOOLS)
# Just a no op to prevent CMake from complaining about manually-specified
# ENABLE_PERFTOOLS_DEBUG or ENABLE_PERFTOOLS not being used if google
@ -165,6 +148,8 @@ set(brodeps
include(TestBigEndian)
test_big_endian(WORDS_BIGENDIAN)
include(CheckSymbolExists)
check_symbol_exists(htonll arpa/inet.h HAVE_BYTEORDER_64)
include(OSSpecific)
include(CheckTypes)
@ -174,6 +159,10 @@ include(MiscTests)
include(PCAPTests)
include(OpenSSLTests)
include(CheckNameserCompat)
include(GetArchitecture)
# Tell the plugin code that we're building as part of the main tree.
set(BRO_PLUGIN_INTERNAL_BUILD true CACHE INTERNAL "" FORCE)
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in
${CMAKE_CURRENT_BINARY_DIR}/config.h)
@ -238,10 +227,6 @@ message(
"\n tcmalloc: ${USE_PERFTOOLS_TCMALLOC}"
"\n debugging: ${USE_PERFTOOLS_DEBUG}"
"\njemalloc: ${ENABLE_JEMALLOC}"
"\ncURL: ${USE_CURL}"
"\n"
"\nDataSeries: ${USE_DATASERIES}"
"\nElasticSearch: ${USE_ELASTICSEARCH}"
"\n"
"\n================================================================\n"
)


@ -56,6 +56,7 @@ test-all: test
	test -d aux/broctl && ( cd aux/broctl && make test )
	test -d aux/btest && ( cd aux/btest && make test )
	test -d aux/bro-aux && ( cd aux/bro-aux && make test )
	test -d aux/plugins && ( cd aux/plugins && make test-all )
configured:
	@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )

NEWS

@ -4,6 +4,32 @@ release. For an exhaustive list of changes, see the ``CHANGES`` file
(note that submodules, such as BroControl and Broccoli, come with
their own ``CHANGES``.)
Bro 2.4 (in progress)
=====================
Dependencies
------------
New Functionality
-----------------
- Bro now has support for external plugins that can extend its core
functionality, like protocol/file analysis, via shared libraries.
Plugins can be developed and distributed externally, and will be
pulled in dynamically at startup. Currently, a plugin can provide
custom protocol analyzers, file analyzers, log writers[TODO], input
readers[TODO], packet sources[TODO], and new built-in functions. A
plugin can furthermore hook into Bro's processing at a number of
places to add custom logic.
See https://www.bro.org/sphinx-git/devel/plugins.html for more
information on writing plugins.
Changed Functionality
---------------------
- bro-cut has been rewritten in C, and is hence much faster.
Bro 2.3
=======


@ -1 +1 @@
2.3-12 2.3-183

@ -1 +1 @@
Subproject commit ec1e052afd5a8cd3d1d2cbb28fcd688018e379a5 Subproject commit 3a4684801aafa0558383199e9abd711650b53af9

@ -1 +1 @@
Subproject commit 31d011479a4e956e029d8b708446841a088dd7e3 Subproject commit 9ea20c3905bd3fd5109849c474a2f2b4ed008357

@ -1 +1 @@
Subproject commit 1ee129f7159a2c32fe0cb0f44c9412486fb7a479 Subproject commit 33d0ed4a54a6ecf08a0b5fe18831aa413b437066

@ -1 +1 @@
Subproject commit 8a13886f322f3b618832c0ca3976e07f686d14da Subproject commit 2f808bc8541378b1a4953cca02c58c43945d154f

@ -1 +1 @@
Subproject commit 4da1bd24038d4977e655f2b210f34e37f0b73b78 Subproject commit 1efa4d10f943351efea96def68e598b053fd217a

aux/plugins Submodule

@ -0,0 +1 @@
Subproject commit 23055b473c689a79da12b2825d8388f71f28c709

cmake

@ -1 +1 @@
Subproject commit 0f301aa08a970150195a2ea5b3ed43d2d98b35b3 Subproject commit 03de0cc467d2334dcb851eddd843d59fef217909


@ -129,6 +129,9 @@
/* whether words are stored with the most significant byte first */
#cmakedefine WORDS_BIGENDIAN
/* whether htonll/ntohll is defined in <arpa/inet.h> */
#cmakedefine HAVE_BYTEORDER_64
/* ultrix can't hack const */
#cmakedefine NEED_ULTRIX_CONST_HACK
#ifdef NEED_ULTRIX_CONST_HACK
@ -209,3 +212,14 @@
/* Common IPv6 extension structure */
#cmakedefine HAVE_IP6_EXT
/* String with host architecture (e.g., "linux-x86_64") */
#define HOST_ARCHITECTURE "@HOST_ARCHITECTURE@"
/* String with extension of dynamic libraries (e.g., ".so") */
#define DYNAMIC_PLUGIN_SUFFIX "@CMAKE_SHARED_MODULE_SUFFIX@"
/* True if we're building outside of the main Bro source code tree. */
#ifndef BRO_PLUGIN_INTERNAL_BUILD
#define BRO_PLUGIN_INTERNAL_BUILD @BRO_PLUGIN_INTERNAL_BUILD@
#endif

configure

@ -39,8 +39,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
--disable-dataseries don't use the optional DataSeries log writer
--disable-elasticsearch don't use the optional ElasticSearch log writer
Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root
@ -62,9 +60,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-ruby-lib=PATH path to ruby library
--with-ruby-inc=PATH path to ruby headers
--with-swig=PATH path to SWIG executable
--with-dataseries=PATH path to DataSeries and Lintel libraries
--with-xml2=PATH path to libxml2 installation (for DataSeries)
--with-curl=PATH path to libcurl install root (for ElasticSearch)
Packaging Options (for developers):
--binary-package toggle special logic for binary packaging
@ -183,12 +178,6 @@ while [ $# -ne 0 ]; do
--enable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
;;
--disable-dataseries)
append_cache_entry DISABLE_DATASERIES BOOL true
;;
--disable-elasticsearch)
append_cache_entry DISABLE_ELASTICSEARCH BOOL true
;;
--with-openssl=*)
append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
;;
@ -243,16 +232,6 @@ while [ $# -ne 0 ]; do
--with-swig=*)
append_cache_entry SWIG_EXECUTABLE PATH $optarg
;;
--with-dataseries=*)
append_cache_entry DataSeries_ROOT_DIR PATH $optarg
append_cache_entry Lintel_ROOT_DIR PATH $optarg
;;
--with-xml2=*)
append_cache_entry LibXML2_ROOT_DIR PATH $optarg
;;
--with-curl=*)
append_cache_entry LibCURL_ROOT_DIR PATH $optarg
;;
--binary-package)
append_cache_entry BINARY_PACKAGING_MODE BOOL true
;;


@ -10,7 +10,7 @@
{% endblock %}
{% block header %}
<iframe src="//www.bro.org/frames/header-no-logo.html" width="100%" height="100px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}
@ -108,6 +108,6 @@
{% endblock %}
{% block footer %}
<iframe src="//www.bro.org/frames/footer.html" width="100%" height="420px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}


@ -21,7 +21,7 @@ sys.path.insert(0, os.path.abspath('sphinx_input/ext'))
# ----- Begin of BTest configuration. -----
btest = os.path.abspath("@CMAKE_SOURCE_DIR@/aux/btest")
brocut = os.path.abspath("@CMAKE_SOURCE_DIR@/build/aux/bro-aux/bro-cut")
bro = os.path.abspath("@CMAKE_SOURCE_DIR@/build/src")
os.environ["PATH"] += (":%s:%s/sphinx:%s:%s" % (btest, btest, bro, brocut))

doc/devel/plugins.rst

@ -0,0 +1,438 @@
===================
Writing Bro Plugins
===================
Bro is internally moving to a plugin structure that enables extending
the system dynamically, without modifying the core code base. That way
custom code remains self-contained and can be maintained, compiled,
and installed independently. Currently, plugins can add the following
functionality to Bro:
- Bro scripts.
- Builtin functions/events/types for the scripting language.
- Protocol analyzers.
- File analyzers.
- Packet sources and packet dumpers. TODO: Not yet.
- Logging framework backends. TODO: Not yet.
- Input framework readers. TODO: Not yet.
A plugin's functionality is available to the user just as if Bro had
the corresponding code built-in. Indeed, internally many of Bro's
pieces are structured as plugins as well; they are just statically
compiled into the binary rather than loaded dynamically at runtime.
Quick Start
===========
Writing a basic plugin is quite straightforward as long as one
follows a few conventions. In the following we walk through a simple
example plugin that adds a new built-in function (bif) to Bro: we'll
add ``rot13(s: string) : string``, a function that rotates every
character in a string by 13 places.
Generally, a plugin comes in the form of a directory following a
certain structure. To get started, Bro's distribution provides a
helper script ``aux/bro-aux/plugin-support/init-plugin`` that creates
a skeleton plugin that can then be customized. Let's use that::
# mkdir rot13-plugin
# cd rot13-plugin
# init-plugin Demo Rot13
As you can see the script takes two arguments. The first is a
namespace the plugin will live in, and the second a descriptive name
for the plugin itself. Bro uses the combination of the two to identify
a plugin. The namespace serves to avoid naming conflicts between
plugins written by independent developers; pick, e.g., the name of
your organisation. The namespace ``Bro`` is reserved for functionality
distributed by the Bro Project. In our example, the plugin will be
called ``Demo::Rot13``.
The ``init-plugin`` script puts a number of files in place. The full
layout is described later. For now, all we need is
``src/rot13.bif``. It's initially empty, but we'll add our new bif
there as follows::
# cat src/rot13.bif
module CaesarCipher;
function rot13%(s: string%) : string
%{
char* rot13 = copy_string(s->CheckString());
for ( char* p = rot13; *p; p++ )
{
char b = islower(*p) ? 'a' : 'A';
*p = (*p - b + 13) % 26 + b;
}
BroString* bs = new BroString(1, reinterpret_cast<byte_vec>(rot13),
strlen(rot13));
return new StringVal(bs);
%}
The syntax of this file is just like any other ``*.bif`` file; we
won't go into it here.
Now we can compile our plugin; we just need to tell the configure
script put in place by ``init-plugin`` where the Bro source tree is
located (Bro needs to have been built there first)::
# ./configure --bro-dist=/path/to/bro/dist && make
[... cmake output ...]
Now our ``rot13-plugin`` directory has everything that it needs
for Bro to recognize it as a dynamic plugin. Once we point Bro to it,
it will pull it in automatically, as we can check with the ``-N``
option::
# export BRO_PLUGIN_PATH=/path/to/rot13-plugin
# bro -N
[...]
Plugin: Demo::Rot13 - <Insert brief description of plugin> (dynamic, version 1)
[...]
That looks quite good, except for the dummy description that we should
replace with something nicer so that users will know what our plugin
is about. We do this by editing the ``config.description`` line in
``src/Plugin.cc``, like this::
[...]
plugin::Configuration Configure()
{
plugin::Configuration config;
config.name = "Demo::Rot13";
config.description = "Caesar cipher rotating a string's characters by 13 places.";
config.version.major = 1;
config.version.minor = 0;
return config;
}
[...]
# make
[...]
# bro -N | grep Rot13
Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
Better. Bro can also show us what exactly the plugin provides with the
more verbose option ``-NN``::
# bro -NN
[...]
Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
[Function] CaesarCipher::rot13
[...]
There's our function. Now let's use it::
# bro -e 'print CaesarCipher::rot13("Hello")'
Uryyb
It works. We next install the plugin along with Bro itself, so that it
will find it directly without needing the ``BRO_PLUGIN_PATH``
environment variable. If we first unset the variable, the function
will no longer be available::
# unset BRO_PLUGIN_PATH
# bro -e 'print CaesarCipher::rot13("Hello")'
error in <command line>, line 1: unknown identifier CaesarCipher::rot13, at or near "CaesarCipher::rot13"
Once we install it, it works again::
# make install
# bro -e 'print CaesarCipher::rot13("Hello")'
Uryyb
The installed version went into
``<bro-install-prefix>/lib/bro/plugins/Demo_Rot13``.
We can distribute the plugin in either source or binary form by using
the Makefile's ``sdist`` and ``bdist`` targets, respectively. Both
create corresponding tarballs::
# make sdist
[...]
Source distribution in build/sdist/Demo_Rot13.tar.gz
# make bdist
[...]
Binary distribution in build/Demo_Rot13-darwin-x86_64.tar.gz
The source archive will contain everything in the plugin directory
except any generated files. The binary archive will contain anything
needed to install and run the plugin, i.e., just what ``make install``
puts into place as well. As the binary distribution is
platform-dependent, its name includes the OS and architecture the
plugin was built on.
Plugin Directory Layout
=======================
A plugin's directory needs to follow a set of conventions so that Bro
(1) recognizes it as a plugin, and (2) knows what to load. While
``init-plugin`` takes care of most of this, the following is the full
story. We'll use ``<base>`` to represent a plugin's top-level
directory.
``<base>/__bro_plugin__``
A file that marks a directory as containing a Bro plugin. The file
must exist, and its content must consist of a single line with the
qualified name of the plugin (e.g., "Demo::Rot13").
``<base>/lib/<plugin-name>-<os>-<arch>.so``
The shared library containing the plugin's compiled code. Bro will
load this in dynamically at run-time if OS and architecture match
the current platform.
``scripts/``
A directory with the plugin's custom Bro scripts. When the plugin
gets activated, this directory will be automatically added to
``BROPATH``, so that any scripts/modules inside can be
"@load"ed.
``scripts/__load__.bro``
A Bro script that will be loaded immediately when the plugin gets
activated. See below for more information on activating plugins.
``lib/bif/``
Directory with auto-generated Bro scripts that declare the plugin's
bif elements. The files here are produced by ``bifcl``.
By convention, a plugin should put its custom scripts into subfolders
of ``scripts/``, i.e., ``scripts/<script-namespace>/<script>.bro`` to
avoid conflicts. As usual, you can then put a ``__load__.bro`` in
there as well so that, e.g., ``@load Demo/Rot13`` could load a whole
module in the form of multiple individual scripts.
Note that in addition to the paths above, the ``init-plugin`` helper
puts some more files and directories in place that help with
development and installation (e.g., ``CMakeLists.txt``, ``Makefile``,
and source code in ``src/``). However, none of these has a special
meaning for Bro at runtime, and they aren't necessary for a plugin to
function.
``init-plugin``
===============
``init-plugin`` puts a basic plugin structure in place that follows
the above layout and augments it with a CMake build and installation
system. Plugins with this structure can be used both directly out of
their source directory (after ``make`` and setting Bro's
``BRO_PLUGIN_PATH``), and when installed alongside Bro (after ``make
install``).
``make install`` copies over the ``lib`` and ``scripts`` directories,
as well as the ``__bro_plugin__`` magic file and the ``README`` (which
you should customize). One can add further CMake ``install`` rules to
install additional files if needed.
``init-plugin`` will never overwrite existing files, so it's safe to
rerun in an existing plugin directory; it only puts files in place that
don't exist yet. That also provides a convenient way to revert a file
back to what ``init-plugin`` created originally: just delete it and
rerun.
Activating a Plugin
===================
A plugin needs to be *activated* to make it available to the user.
Activating a plugin will:
1. Load the dynamic module
2. Make any bif items available
3. Add the ``scripts/`` directory to ``BROPATH``
4. Load ``scripts/__load__.bro``
By default, Bro will automatically activate all dynamic plugins found
in its search path ``BRO_PLUGIN_PATH``. However, in bare mode (``bro
-b``), no dynamic plugins will be activated by default; instead the
user can selectively enable individual plugins in scriptland using the
``@load-plugin <qualified-plugin-name>`` directive (e.g.,
``@load-plugin Demo::Rot13``). Alternatively, one can activate a
plugin from the command-line by specifying its full name
(``Demo::Rot13``), or set the environment variable
``BRO_PLUGIN_ACTIVATE`` to a list of comma(!)-separated names of
plugins to unconditionally activate, even in bare mode.
``bro -N`` shows activated plugins separately from found but not yet
activated plugins. Note that plugins compiled statically into Bro are
always activated, and hence show up as such even in bare mode.
Plugin Components
=================
The following gives additional information about providing individual
types of functionality via plugins. Note that a single plugin can
provide more than one type. For example, a plugin could provide
multiple protocol analyzers at once; or both a logging backend and
input reader at the same time.
We now walk briefly through the specifics of providing a specific type
of functionality (a *component*) through a plugin. We'll focus on
their interfaces to the plugin system, rather than specifics on
writing the corresponding logic (usually the best way to get going on
that is to start with an existing plugin providing a corresponding
component and adapt that). We'll also point out how the CMake
infrastructure put in place by the ``init-plugin`` helper script ties
the various pieces together.
Bro Scripts
-----------
Scripts are easy: just put them into ``scripts/``, as described above.
The CMake infrastructure will automatically install them, as well
include them into the source and binary plugin distributions.
Builtin Language Elements
-------------------------
Functions
TODO
Events
TODO
Types
TODO
Protocol Analyzers
------------------
TODO.
File Analyzers
--------------
TODO.
Logging Writer
--------------
Not yet available as plugins.
Input Reader
------------
Not yet available as plugins.
Packet Sources
--------------
Not yet available as plugins.
Packet Dumpers
--------------
Not yet available as plugins.
Hooks
=====
TODO.
Testing Plugins
===============
A plugin should come with a test suite to exercise its functionality.
The ``init-plugin`` script puts in place a basic BTest setup
to start with. Initially, it comes with a single test that just checks
that Bro loads the plugin correctly. It won't have a baseline yet, so
let's get that in place::
# cd tests
# btest -d
[ 0%] plugin.loading ... failed
% 'btest-diff output' failed unexpectedly (exit code 100)
% cat .diag
== File ===============================
Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1.0)
[Function] CaesarCipher::rot13
== Error ===============================
test-diff: no baseline found.
=======================================
# btest -U
all 1 tests successful
# cd ..
# make test
make -C tests
make[1]: Entering directory `tests'
all 1 tests successful
make[1]: Leaving directory `tests'
Now let's add a custom test that ensures that our bif works
correctly::
# cd tests
# cat >plugin/rot13.bro
# @TEST-EXEC: bro %INPUT >output
# @TEST-EXEC: btest-diff output
event bro_init()
{
print CaesarCipher::rot13("Hello");
}
Check the output::
# btest -d plugin/rot13.bro
[ 0%] plugin.rot13 ... failed
% 'btest-diff output' failed unexpectedly (exit code 100)
% cat .diag
== File ===============================
Uryyb
== Error ===============================
test-diff: no baseline found.
=======================================
% cat .stderr
1 of 1 test failed
Install the baseline::
# btest -U plugin/rot13.bro
all 1 tests successful
Run the test-suite::
# btest
all 2 tests successful
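Further tests follow the same pattern. As a sketch, a second test (the file
name is ours) could pin down the round-trip property of the ``rot13`` BiF,
i.e., that applying it twice yields the original string::

    # cat >plugin/rot13-roundtrip.bro
    # @TEST-EXEC: bro %INPUT >output
    # @TEST-EXEC: btest-diff output

    event bro_init()
        {
        # Applying rot13 twice should print the original string again.
        print CaesarCipher::rot13(CaesarCipher::rot13("Hello"));
        }

After recording its baseline once with ``btest -U``, the test becomes part of
the suite just like the others.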
Debugging Plugins
=================
Plugins can use Bro's standard debug logger by using the
``PLUGIN_DBG_LOG(<plugin>, <args>)`` macro (defined in
``DebugLogger.h``), where ``<plugin>`` is the ``Plugin`` instance and
``<args>`` are printf-style arguments, just as with Bro's standard
debugging macros.
At runtime, one then activates a plugin's debugging output with ``-B
plugin-<name>``, where ``<name>`` is the name of the plugin as
returned by its ``Configure()`` method, but with the
namespace separator ``::`` replaced with a dash. For example, if the
plugin is called ``Bro::Demo``, use ``-B plugin-Bro-Demo``. As usual,
the debugging output is recorded in ``debug.log`` if Bro was compiled
in debug mode.
Documenting Plugins
===================
.. todo::

   Integrate all this with Broxygen.


@ -1,186 +0,0 @@
=============================
Binary Output with DataSeries
=============================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for storing and searching large volumes of data. As an
alternative, Bro comes with experimental support for `DataSeries
<http://www.hpl.hp.com/techreports/2009/HPL-2009-323.html>`_
output, an efficient binary format for recording structured bulk
data. DataSeries is developed and maintained at HP Labs.
.. contents::
Installing DataSeries
---------------------
To use DataSeries, its libraries must be available at compile-time,
along with the supporting *Lintel* package. Generally, both are
distributed on `HP Labs' web site
<http://tesla.hpl.hp.com/opensource/>`_. Currently, however, you need
to use recent development versions for both packages, which you can
download from github like this::
git clone http://github.com/dataseries/Lintel
git clone http://github.com/dataseries/DataSeries
To build and install the two into ``<prefix>``, do::
( cd Lintel && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=<prefix> .. && make && make install )
( cd DataSeries && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=<prefix> .. && make && make install )
Please refer to the packages' documentation for more information about
the installation process. In particular, there's more information on
required and optional `dependencies for Lintel
<https://raw.github.com/dataseries/Lintel/master/doc/dependencies.txt>`_
and `dependencies for DataSeries
<https://raw.github.com/dataseries/DataSeries/master/doc/dependencies.txt>`_.
For users on RedHat-style systems, you'll need the following::
yum install libxml2-devel boost-devel
Compiling Bro with DataSeries Support
-------------------------------------
Once you have installed DataSeries, Bro's ``configure`` should pick it
up automatically as long as it finds it in a standard system location.
Alternatively, you can specify the DataSeries installation prefix
manually with ``--with-dataseries=<prefix>``. Keep an eye on
``configure``'s summary output: if it looks like the following, Bro
has found DataSeries and will compile in the support::
# ./configure --with-dataseries=/usr/local
[...]
====================| Bro Build Summary |=====================
[...]
DataSeries: true
[...]
================================================================
Activating DataSeries
---------------------
The direct way to use DataSeries is to switch *all* log files over to
the binary format. To do that, just add ``redef
Log::default_writer=Log::WRITER_DATASERIES;`` to your ``local.bro``.
For testing, you can also just pass that on the command line::
bro -r trace.pcap Log::default_writer=Log::WRITER_DATASERIES
With that, Bro will now write all its output into DataSeries files
``*.ds``. You can inspect these using DataSeries's set of command line
tools, which its installation process installs into ``<prefix>/bin``.
For example, to convert a file back into an ASCII representation::
$ ds2txt conn.log
[... We skip a bunch of metadata here ...]
ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes
1300475167.096535 CRCC5OdDlXe 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0
1300475167.097012 o7XBsfvo3U1 fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0
1300475167.099816 pXPi1kPMgxb 141.142.220.50 5353 224.0.0.251 5353 udp 0.000000 0 0 S0 F 0 D 1 179 0 0
1300475168.853899 R7sOc16woCj 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF F 0 Dd 1 66 1 117
1300475168.854378 Z6dfHVmt0X7 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF F 0 Dd 1 80 1 127
1300475168.854837 k6T92WxgNAh 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211
[...]
(``--skip-all`` suppresses the metadata.)
Note that the ASCII conversion is *not* equivalent to Bro's default
output format.
You can also switch only individual files over to DataSeries by adding
code like this to your ``local.bro``:
.. code:: bro
event bro_init()
{
local f = Log::get_filter(Conn::LOG, "default"); # Get default filter for connection log.
f$writer = Log::WRITER_DATASERIES; # Change writer type.
Log::add_filter(Conn::LOG, f); # Replace filter with adapted version.
}
Bro's DataSeries writer comes with a few tuning options, see
:doc:`/scripts/base/frameworks/logging/writers/dataseries.bro`.
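For instance, a few of these options could be tuned in ``local.bro`` like
this (a sketch only; see the writer's documentation for the full list and
exact semantics):

.. code:: bro

    # Trade smaller files for faster decompression by using LZO.
    redef LogDataSeries::compression = "lzo";

    # Larger extents compress better but delay writes to disk.
    redef LogDataSeries::extent_size = 131072;

    # Also dump the XML schema for each log alongside the .ds file.
    redef LogDataSeries::dump_schema = T;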
Working with DataSeries
=======================
Here are a few examples of using DataSeries command line tools to work
with the output files.
* Printing CSV::
$ ds2txt --csv conn.log
ts,uid,id.orig_h,id.orig_p,id.resp_h,id.resp_p,proto,service,duration,orig_bytes,resp_bytes,conn_state,local_orig,missed_bytes,history,orig_pkts,orig_ip_bytes,resp_pkts,resp_ip_bytes
1258790493.773208,ZTtgbHvf4s3,192.168.1.104,137,192.168.1.255,137,udp,dns,3.748891,350,0,S0,F,0,D,7,546,0,0
1258790451.402091,pOY6Rw7lhUd,192.168.1.106,138,192.168.1.255,138,udp,,0.000000,0,0,S0,F,0,D,1,229,0,0
1258790493.787448,pn5IiEslca9,192.168.1.104,138,192.168.1.255,138,udp,,2.243339,348,0,S0,F,0,D,2,404,0,0
1258790615.268111,D9slyIu3hFj,192.168.1.106,137,192.168.1.255,137,udp,dns,3.764626,350,0,S0,F,0,D,7,546,0,0
[...]
Add ``--separator=X`` to set a different separator.
* Extracting a subset of columns::
$ ds2txt --select '*' ts,id.resp_h,id.resp_p --skip-all conn.log
1258790493.773208 192.168.1.255 137
1258790451.402091 192.168.1.255 138
1258790493.787448 192.168.1.255 138
1258790615.268111 192.168.1.255 137
1258790615.289842 192.168.1.255 138
[...]
* Filtering rows::
$ ds2txt --where '*' 'duration > 5 && id.resp_p > 1024' --skip-all conn.ds
1258790631.532888 V8mV5WLITu5 192.168.1.105 55890 239.255.255.250 1900 udp 15.004568 798 0 S0 F 0 D 6 966 0 0
1258792413.439596 tMcWVWQptvd 192.168.1.105 55890 239.255.255.250 1900 udp 15.004581 798 0 S0 F 0 D 6 966 0 0
1258794195.346127 cQwQMRdBrKa 192.168.1.105 55890 239.255.255.250 1900 udp 15.005071 798 0 S0 F 0 D 6 966 0 0
1258795977.253200 i8TEjhWd2W8 192.168.1.105 55890 239.255.255.250 1900 udp 15.004824 798 0 S0 F 0 D 6 966 0 0
1258797759.160217 MsLsBA8Ia49 192.168.1.105 55890 239.255.255.250 1900 udp 15.005078 798 0 S0 F 0 D 6 966 0 0
1258799541.068452 TsOxRWJRGwf 192.168.1.105 55890 239.255.255.250 1900 udp 15.004082 798 0 S0 F 0 D 6 966 0 0
[...]
* Calculate some statistics:
Mean/stddev/min/max over a column::
$ dsstatgroupby '*' basic duration from conn.ds
# Begin DSStatGroupByModule
# processed 2159 rows, where clause eliminated 0 rows
# count(*), mean(duration), stddev, min, max
2159, 42.7938, 1858.34, 0, 86370
[...]
Quantiles of total connection volume::
$ dsstatgroupby '*' quantile 'orig_bytes + resp_bytes' from conn.ds
[...]
2159 data points, mean 24616 +- 343295 [0,1.26615e+07]
quantiles about every 216 data points:
10%: 0, 124, 317, 348, 350, 350, 601, 798, 1469
tails: 90%: 1469, 95%: 7302, 99%: 242629, 99.5%: 1226262
[...]
The ``man`` pages for these tools show further options, and their
``-h`` option gives some more information (both can unfortunately be
a bit cryptic).
Deficiencies
------------
Due to limitations of the DataSeries format, one cannot inspect its
files before they have been fully written. In other words, when using
DataSeries, it's currently not possible to inspect the live log
files inside the spool directory before they are rotated to their
final location. It seems that this could be fixed with some effort,
and we will work with the DataSeries development team on that if the
format gains traction among Bro users.
Likewise, we're considering writing custom command line tools for
interacting with DataSeries files, making that a bit more convenient
than what the standard utilities provide.


@ -1,89 +0,0 @@
=========================================
Indexed Logging Output with ElasticSearch
=========================================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for searching large volumes of data. ElasticSearch is a data
storage technology designed for exactly that scale.
It's also a search engine built on top of Apache's Lucene
project. It scales very well, both for distributed indexing and
distributed searching.
.. contents::
Warning
-------
This writer plugin is still in testing and is not yet recommended for
production use! The plugin currently handles logs in a "fire and
forget" manner: there is no error handling if the server fails to
respond successfully to an insertion request.
Installing ElasticSearch
------------------------
Download the latest version from: http://www.elasticsearch.org/download/.
Once extracted, start ElasticSearch with::
# ./bin/elasticsearch
For more detailed information, refer to the ElasticSearch installation
documentation: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html
Compiling Bro with ElasticSearch Support
----------------------------------------
First, ensure that you have libcurl installed, then run ``configure``::
# ./configure
[...]
====================| Bro Build Summary |=====================
[...]
cURL: true
[...]
ElasticSearch: true
[...]
================================================================
Activating ElasticSearch
------------------------
The easiest way to enable ElasticSearch output is to load the
tuning/logs-to-elasticsearch.bro script. If you are using BroControl,
the following line in local.bro will enable it:
.. console::
@load tuning/logs-to-elasticsearch
With that, Bro will now write most of its logs into ElasticSearch in addition
to maintaining the ASCII logs as it does by default. That script has
some tunable options for choosing which logs to send to ElasticSearch;
refer to the autogenerated script documentation for those options.
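As a sketch, a ``local.bro`` that points the writer at a remote
ElasticSearch server and restricts output to a couple of streams might look
like this (the server address is just an example):

.. code:: bro

    @load tuning/logs-to-elasticsearch

    # Example address of the ElasticSearch server; adjust for your setup.
    redef LogElasticSearch::server_host = "10.0.0.5";
    redef LogElasticSearch::server_port = 9200;

    # Only ship the connection and HTTP logs.
    redef LogElasticSearch::send_logs += { Conn::LOG, HTTP::LOG };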
There is an interface named Brownian being written specifically to
integrate with the data that Bro outputs into ElasticSearch. It can
be found here::
https://github.com/grigorescu/Brownian
Tuning
------
A common problem encountered with ElasticSearch is too many files being held
open. The ElasticSearch website has some suggestions on how to increase the
open file limit.
- http://www.elasticsearch.org/tutorials/too-many-open-files/
TODO
----
Lots.
- Perform multicast discovery for server.
- Better error detection.
- Better defaults (don't index loaded-plugins, for instance).


@ -380,11 +380,11 @@ uncommon to need to delete that data before the end of the connection.
 Other Writers
 -------------
-Bro supports the following output formats other than ASCII:
+Bro supports the following built-in output formats other than ASCII:
 .. toctree::
    :maxdepth: 1
-   logging-dataseries
-   logging-elasticsearch
    logging-input-sqlite
+Further formats are available as external plugins.


@ -91,7 +91,6 @@ build time:
 * LibGeoIP (for geolocating IP addresses)
 * sendmail (enables Bro and BroControl to send mail)
-* gawk (enables all features of bro-cut)
 * curl (used by a Bro script that implements active HTTP)
 * gperftools (tcmalloc is used to improve memory and CPU usage)
 * ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)
@ -181,7 +180,7 @@ automatically. Finally, use ``make install-aux`` to install some of
 the other programs that are in the ``aux/bro-aux`` directory.
 OpenBSD users, please see our `FAQ
-<http://www.bro.org/documentation/faq.html>`_ if you are having
+<//www.bro.org/documentation/faq.html>`_ if you are having
 problems installing Bro.
 Finally, if you want to build the Bro documentation (not required, because


@ -1,5 +1,5 @@
-.. _FAQ: http://www.bro.org/documentation/faq.html
+.. _FAQ: //www.bro.org/documentation/faq.html
 .. _quickstart:


@ -730,7 +730,7 @@ Bro supports ``usec``, ``msec``, ``sec``, ``min``, ``hr``, or ``day`` which repr
 microseconds, milliseconds, seconds, minutes, hours, and days
 respectively. In fact, the interval data type allows for a surprising
 amount of variation in its definitions. There can be a space between
-the numeric constant or they can crammed together like a temporal
+the numeric constant or they can be crammed together like a temporal
 portmanteau. The time unit can be either singular or plural. All of
 this adds up to to the fact that both ``42hrs`` and ``42 hr`` are
 perfectly valid and logically equivalent in Bro. The point, however,
@ -819,7 +819,7 @@ with the ``typedef`` and ``struct`` keywords, Bro allows you to cobble
 together new data types to suit the needs of your situation.
 When combined with the ``type`` keyword, ``record`` can generate a
-composite type. We have, in fact, already encountered a a complex
+composite type. We have, in fact, already encountered a complex
 example of the ``record`` data type in the earlier sections, the
 :bro:type:`connection` record passed to many events. Another one,
 :bro:type:`Conn::Info`, which corresponds to the fields logged into
@ -1014,8 +1014,8 @@ remaining logs to factor.log.
    :lines: 38-62
    :linenos:
-To dynamically alter the file in which a stream writes its logs a
-filter can specify function returns a string to be used as the
+To dynamically alter the file in which a stream writes its logs, a
+filter can specify a function that returns a string to be used as the
 filename for the current call to ``Log::write``. The definition for
 this function has to take as its parameters a ``Log::ID`` called id, a
 string called ``path`` and the appropriate record type for the logs called


@ -153,6 +153,15 @@ export {
 			tag: Files::Tag,
 			args: AnalyzerArgs &default=AnalyzerArgs()): bool;
+	## Adds all analyzers associated with a given MIME type to the analysis of
+	## a file. Note that analyzers added via MIME types cannot take further
+	## arguments.
+	##
+	## f: the file.
+	##
+	## mtype: the MIME type; it will be compared case-insensitively.
+	global add_analyzers_for_mime_type: function(f: fa_file, mtype: string);
 	## Removes an analyzer from the analysis of a given file.
 	##
 	## f: the file.
@ -225,6 +234,42 @@ export {
 	## callback: Function to execute when the given file analyzer is being added.
 	global register_analyzer_add_callback: function(tag: Files::Tag, callback: function(f: fa_file, args: AnalyzerArgs));
+	## Registers a set of MIME types for an analyzer. If a future file with one of
+	## these types is seen, the analyzer will be automatically assigned to parsing it.
+	## The function *adds* to all MIME types already registered, it doesn't replace
+	## them.
+	##
+	## tag: The tag of the analyzer.
+	##
+	## mts: The set of MIME types, each in the form "foo/bar" (case-insensitive).
+	##
+	## Returns: True if the MIME types were successfully registered.
+	global register_for_mime_types: function(tag: Analyzer::Tag, mts: set[string]) : bool;
+	## Registers a MIME type for an analyzer. If a future file with this type is seen,
+	## the analyzer will be automatically assigned to parsing it. The function *adds*
+	## to all MIME types already registered, it doesn't replace them.
+	##
+	## tag: The tag of the analyzer.
+	##
+	## mt: The MIME type in the form "foo/bar" (case-insensitive).
+	##
+	## Returns: True if the MIME type was successfully registered.
+	global register_for_mime_type: function(tag: Analyzer::Tag, mt: string) : bool;
+	## Returns a set of all MIME types currently registered for a specific analyzer.
+	##
+	## tag: The tag of the analyzer.
+	##
+	## Returns: The set of MIME types.
+	global registered_mime_types: function(tag: Analyzer::Tag) : set[string];
+	## Returns a table of all MIME-type-to-analyzer mappings currently registered.
+	##
+	## Returns: A table mapping each analyzer to the set of MIME types
+	## registered for it.
+	global all_registered_mime_types: function() : table[Analyzer::Tag] of set[string];
 	## Event that can be handled to access the Info record as it is sent on
 	## to the logging framework.
 	global log_files: event(rec: Info);
@ -237,6 +282,9 @@ redef record fa_file += {
 # Store the callbacks for protocol analyzers that have files.
 global registered_protocols: table[Analyzer::Tag] of ProtoRegistration = table();
+# Store the MIME type to analyzer mappings.
+global mime_types: table[Analyzer::Tag] of set[string];
 global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: AnalyzerArgs) = table();
 event bro_init() &priority=5
@ -289,6 +337,15 @@ function add_analyzer(f: fa_file, tag: Files::Tag, args: AnalyzerArgs): bool
 	return T;
 	}
+function add_analyzers_for_mime_type(f: fa_file, mtype: string)
+	{
+	local dummy_args: AnalyzerArgs;
+	local analyzers = __add_analyzers_for_mime_type(f$id, mtype, dummy_args);
+	for ( tag in analyzers )
+		add f$info$analyzers[Files::analyzer_name(tag)];
+	}
 function register_analyzer_add_callback(tag: Files::Tag, callback: function(f: fa_file, args: AnalyzerArgs))
 	{
 	analyzer_add_callbacks[tag] = callback;
@ -312,6 +369,9 @@ function analyzer_name(tag: Files::Tag): string
 event file_new(f: fa_file) &priority=10
 	{
 	set_info(f);
+	if ( f?$mime_type )
+		add_analyzers_for_mime_type(f, f$mime_type);
 	}
 event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=10
@ -349,6 +409,41 @@ function register_protocol(tag: Analyzer::Tag, reg: ProtoRegistration): bool
 	return result;
 	}
+function register_for_mime_types(tag: Analyzer::Tag, mime_types: set[string]) : bool
+	{
+	local rc = T;
+	for ( mt in mime_types )
+		{
+		if ( ! register_for_mime_type(tag, mt) )
+			rc = F;
+		}
+	return rc;
+	}
+function register_for_mime_type(tag: Analyzer::Tag, mt: string) : bool
+	{
+	if ( ! __register_for_mime_type(tag, mt) )
+		return F;
+	if ( tag !in mime_types )
+		mime_types[tag] = set();
+	add mime_types[tag][mt];
+	return T;
+	}
+function registered_mime_types(tag: Analyzer::Tag) : set[string]
+	{
+	return tag in mime_types ? mime_types[tag] : set();
+	}
+function all_registered_mime_types(): table[Analyzer::Tag] of set[string]
+	{
+	return mime_types;
+	}
 function describe(f: fa_file): string
 	{
 	local tag = Analyzer::get_tag(f$source);


@ -4,6 +4,17 @@
 module Input;
 export {
+	type Event: enum {
+		EVENT_NEW = 0,
+		EVENT_CHANGED = 1,
+		EVENT_REMOVED = 2,
+	};
+	type Mode: enum {
+		MANUAL = 0,
+		REREAD = 1,
+		STREAM = 2
+	};
 	## The default input reader used. Defaults to `READER_ASCII`.
 	const default_reader = READER_ASCII &redef;


@ -81,6 +81,9 @@ export {
 		## Where the data was discovered.
 		where: Where &log;
+		## The name of the node where the match was discovered.
+		node: string &optional &log;
 		## If the data was discovered within a connection, the
 		## connection record should go here to give context to the data.
 		conn: connection &optional;
@ -240,6 +243,11 @@ function Intel::seen(s: Seen)
 		s$indicator_type = Intel::ADDR;
 		}
+	if ( ! s?$node )
+		{
+		s$node = peer_description;
+		}
 	if ( have_full_data )
 		{
 		local items = get_items(s);


@ -1,7 +1,5 @@
 @load ./main
 @load ./postprocessors
 @load ./writers/ascii
-@load ./writers/dataseries
 @load ./writers/sqlite
-@load ./writers/elasticsearch
 @load ./writers/none


@ -5,9 +5,15 @@
 module Log;
-# Log::ID and Log::Writer are defined in types.bif due to circular dependencies.
 export {
+	## Type that defines an ID unique to each log stream. Scripts creating new log
+	## streams need to redef this enum to add their own specific log ID. The log ID
+	## implicitly determines the default name of the generated log file.
+	type Log::ID: enum {
+		## Dummy place-holder.
+		UNKNOWN
+	};
 	## If true, local logging is by default enabled for all filters.
 	const enable_local_logging = T &redef;
@ -27,7 +33,7 @@ export {
 	const set_separator = "," &redef;
 	## String to use for empty fields. This should be different from
 	## *unset_field* to make the output unambiguous.
 	## Can be overwritten by individual writers.
 	const empty_field = "(empty)" &redef;


@ -1,60 +0,0 @@
##! Interface for the DataSeries log writer.
module LogDataSeries;
export {
## Compression to use with the DS output file. Options are:
##
## 'none' -- No compression.
## 'lzf' -- LZF compression (very quick, but leads to larger output files).
## 'lzo' -- LZO compression (very fast decompression times).
## 'zlib' -- GZIP compression (slower than LZF, but also produces smaller output).
## 'bz2' -- BZIP2 compression (slower than GZIP, but also produces smaller output).
const compression = "zlib" &redef;
## The extent buffer size.
## Larger values here lead to better compression and more efficient writes,
## but also increase the lag between the time events are received and
## the time they are actually written to disk.
const extent_size = 65536 &redef;
## Should we dump the XML schema we use for this DS file to disk?
## If yes, the XML schema shares the name of the logfile, but has
## an XML ending.
const dump_schema = F &redef;
## How many threads should DataSeries spawn to perform compression?
## Note that this dictates the number of threads per log stream. If
## you're using a lot of streams, you may want to keep this number
## relatively small.
##
## Default value is 1, which will spawn one thread / stream.
##
## Maximum is 128, minimum is 1.
const num_threads = 1 &redef;
## Should time be stored as an integer or a double?
## Storing time as a double leads to possible precision issues and
## can (significantly) increase the size of the resulting DS log.
## That said, timestamps stored in double form are consistent
## with the rest of Bro, including the standard ASCII log. Hence, we
## use them by default.
const use_integer_for_time = F &redef;
}
# Default function to postprocess a rotated DataSeries log file. It moves the
# rotated file to a new name that includes a timestamp with the opening time,
# and then runs the writer's default postprocessor command on it.
function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool
{
# Move file to name including both opening and closing time.
local dst = fmt("%s.%s.ds", info$path,
strftime(Log::default_rotation_date_format, info$open));
system(fmt("/bin/mv %s %s", info$fname, dst));
# Run default postprocessor.
return Log::run_rotation_postprocessor_cmd(info, dst);
}
redef Log::default_rotation_postprocessors += { [Log::WRITER_DATASERIES] = default_rotation_postprocessor_func };


@ -1,48 +0,0 @@
##! Log writer for sending logs to an ElasticSearch server.
##!
##! Note: This module is in testing and is not yet considered stable!
##!
##! There is one known memory issue. If your elasticsearch server is
##! running slowly and taking too long to return from bulk insert
##! requests, the message queue to the writer thread will continue
##! growing larger and larger giving the appearance of a memory leak.
module LogElasticSearch;
export {
## Name of the ES cluster.
const cluster_name = "elasticsearch" &redef;
## ES server.
const server_host = "127.0.0.1" &redef;
## ES port.
const server_port = 9200 &redef;
## Name of the ES index.
const index_prefix = "bro" &redef;
## The ES type prefix comes before the name of the related log.
## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout. Note that
## the fractional part of the timeout will be ignored. In particular,
## time specifications less than a second result in a timeout value of
## 0, which means "no timeout."
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before
## they are sent to be bulk indexed.
const max_batch_size = 1000 &redef;
## The maximum amount of wall-clock time that is allowed to pass without
## finishing a bulk log send. This represents the maximum delay you
## would like to have with your logs before they are sent to ElasticSearch.
const max_batch_interval = 1min &redef;
## The maximum byte size for a buffered JSON string to send to the bulk
## insert API.
const max_byte_size = 1024 * 1024 &redef;
}


@ -75,6 +75,13 @@ type addr_vec: vector of addr;
 ## directly and then remove this alias.
 type table_string_of_string: table[string] of string;
+## A set of file analyzer tags.
+##
+## .. todo:: We need this type definition only for declaring builtin functions
+##    via ``bifcl``. We should extend ``bifcl`` to understand composite types
+##    directly and then remove this alias.
+type files_tag_set: set[Files::Tag];
 ## A structure indicating a MIME type and strength of a match against
 ## file magic signatures.
 ##
@ -2871,8 +2878,7 @@ type http_message_stat: record {
 	header_length: count;
 };
-## Maximum number of HTTP entity data delivered to events. The amount of data
-## can be limited for better performance, zero disables truncation.
+## Maximum number of HTTP entity data delivered to events.
 ##
 ## .. bro:see:: http_entity_data skip_http_entity_data skip_http_data
 global http_entity_data_delivery_size = 1500 &redef;
@ -3124,6 +3130,7 @@ type ModbusRegisters: vector of count;
 type ModbusHeaders: record {
 	tid: count;
 	pid: count;
+	len: count;
 	uid: count;
 	function_code: count;
 };
@ -3749,9 +3756,6 @@ const global_hash_seed: string = "" &redef;
 ## The maximum is currently 128 bits.
 const bits_per_uid: count = 96 &redef;
-# Load BiFs defined by plugins.
-@load base/bif/plugins
 # Load these frameworks here because they use fairly deep integration with
 # BiFs and script-land defined types.
 @load base/frameworks/logging
@ -3760,3 +3764,7 @@ const bits_per_uid: count = 96 &redef;
 @load base/frameworks/files
 @load base/bif
+
+# Load BiFs defined by plugins.
+@load base/bif/plugins


@ -47,13 +47,13 @@ redef record connection += {
 const ports = { 67/udp, 68/udp };
 redef likely_server_ports += { 67/udp };
-event bro_init()
+event bro_init() &priority=5
 	{
 	Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp]);
 	Analyzer::register_for_ports(Analyzer::ANALYZER_DHCP, ports);
 	}
-event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string)
+event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string) &priority=5
 	{
 	local info: Info;
 	info$ts = network_time();
@ -71,6 +71,9 @@ event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_lis
 	info$assigned_ip = c$id$orig_h;
 	c$dhcp = info;
+	}
+
+event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string) &priority=-5
+	{
 	Log::write(DHCP::LOG, c$dhcp);
 	}

@@ -1,36 +0,0 @@
##! Load this script to enable global log output to an ElasticSearch database.
module LogElasticSearch;
export {
## An elasticsearch specific rotation interval.
const rotation_interval = 3hr &redef;
## Optionally ignore any :bro:type:`Log::ID` from being sent to
## ElasticSearch with this script.
const excluded_log_ids: set[Log::ID] &redef;
## If you want to explicitly only send certain :bro:type:`Log::ID`
## streams, add them to this set. If the set remains empty, all will
## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option
## will remain in effect as well.
const send_logs: set[Log::ID] &redef;
}
event bro_init() &priority=-5
{
if ( server_host == "" )
return;
for ( stream_id in Log::active_streams )
{
if ( stream_id in excluded_log_ids ||
(|send_logs| > 0 && stream_id !in send_logs) )
next;
local filter: Log::Filter = [$name = "default-es",
$writer = Log::WRITER_ELASTICSEARCH,
$interv = LogElasticSearch::rotation_interval];
Log::add_filter(stream_id, filter);
}
}

@@ -98,7 +98,4 @@
 @load tuning/defaults/packet-fragments.bro
 @load tuning/defaults/warnings.bro
 @load tuning/json-logs.bro
-@load tuning/logs-to-elasticsearch.bro
 @load tuning/track-all-assets.bro
-
-redef LogElasticSearch::server_host = "";

@@ -7,7 +7,8 @@ include_directories(BEFORE
 set(bro_ALL_GENERATED_OUTPUTS CACHE INTERNAL "automatically generated files" FORCE)
 
 # This collects bif inputs that we'll load automatically.
 set(bro_AUTO_BIFS CACHE INTERNAL "BIFs for automatic inclusion" FORCE)
+set(bro_REGISTER_BIFS CACHE INTERNAL "BIFs for automatic registering" FORCE)
 set(bro_BASE_BIF_SCRIPTS CACHE INTERNAL "Bro script stubs for BIFs in base distribution of Bro" FORCE)
 set(bro_PLUGIN_BIF_SCRIPTS CACHE INTERNAL "Bro script stubs for BIFs in Bro plugins" FORCE)
@@ -117,8 +118,6 @@ include(BifCl)
 set(BIF_SRCS
     bro.bif
-    logging.bif
-    input.bif
     event.bif
     const.bif
     types.bif
@@ -155,21 +154,25 @@ set(bro_SUBDIR_LIBS CACHE INTERNAL "subdir libraries" FORCE)
 set(bro_PLUGIN_LIBS CACHE INTERNAL "plugin libraries" FORCE)
 
 add_subdirectory(analyzer)
-add_subdirectory(file_analysis)
-add_subdirectory(probabilistic)
 add_subdirectory(broxygen)
+add_subdirectory(file_analysis)
+add_subdirectory(input)
+add_subdirectory(iosource)
+add_subdirectory(logging)
+add_subdirectory(probabilistic)
 
 set(bro_SUBDIRS
-    ${bro_SUBDIR_LIBS}
+    # Order is important here.
     ${bro_PLUGIN_LIBS}
+    ${bro_SUBDIR_LIBS}
     )
 
 if ( NOT bro_HAVE_OBJECT_LIBRARIES )
     foreach (_plugin ${bro_PLUGIN_LIBS})
         string(REGEX REPLACE "plugin-" "" _plugin "${_plugin}")
         string(REGEX REPLACE "-" "_" _plugin "${_plugin}")
-        set(_decl "namespace plugin { namespace ${_plugin} { class Plugin; extern Plugin __plugin; } };")
-        set(_use "i += (size_t)(&(plugin::${_plugin}::__plugin));")
+        set(_decl "namespace plugin { namespace ${_plugin} { class Plugin; extern Plugin plugin; } };")
+        set(_use "i += (size_t)(&(plugin::${_plugin}::plugin));")
         set(__BRO_DECL_PLUGINS "${__BRO_DECL_PLUGINS}${_decl}\n")
         set(__BRO_USE_PLUGINS "${__BRO_USE_PLUGINS}${_use}\n")
     endforeach()
@@ -252,7 +255,6 @@ set(bro_SRCS
     Anon.cc
     Attr.cc
     Base64.cc
-    BPF_Program.cc
     Brofiler.cc
     BroString.cc
     CCL.cc
@@ -277,14 +279,12 @@ set(bro_SRCS
     EventRegistry.cc
     Expr.cc
     File.cc
-    FlowSrc.cc
     Frag.cc
     Frame.cc
     Func.cc
     Hash.cc
     ID.cc
     IntSet.cc
-    IOSource.cc
     IP.cc
     IPAddr.cc
     List.cc
@@ -297,7 +297,6 @@ set(bro_SRCS
     OSFinger.cc
     PacketFilter.cc
     PersistenceSerializer.cc
-    PktSrc.cc
     PolicyFile.cc
     PrefixTable.cc
     PriorityQueue.cc
@@ -346,24 +345,6 @@ set(bro_SRCS
     threading/formatters/Ascii.cc
     threading/formatters/JSON.cc
 
-    logging/Manager.cc
-    logging/WriterBackend.cc
-    logging/WriterFrontend.cc
-    logging/writers/Ascii.cc
-    logging/writers/DataSeries.cc
-    logging/writers/SQLite.cc
-    logging/writers/ElasticSearch.cc
-    logging/writers/None.cc
-
-    input/Manager.cc
-    input/ReaderBackend.cc
-    input/ReaderFrontend.cc
-    input/readers/Ascii.cc
-    input/readers/Raw.cc
-    input/readers/Benchmark.cc
-    input/readers/Binary.cc
-    input/readers/SQLite.cc
-
     3rdparty/sqlite3.c
 
     plugin/Component.cc
@@ -371,7 +352,6 @@ set(bro_SRCS
     plugin/TaggedComponent.h
     plugin/Manager.cc
     plugin/Plugin.cc
-    plugin/Macros.h
 
     nb_dns.c
     digest.h
@@ -387,22 +367,31 @@ else ()
     target_link_libraries(bro ${bro_SUBDIRS} ${brodeps} ${CMAKE_THREAD_LIBS_INIT} ${CMAKE_DL_LIBS})
 endif ()
 
+if ( NOT "${bro_LINKER_FLAGS}" STREQUAL "" )
+    set_target_properties(bro PROPERTIES LINK_FLAGS "${bro_LINKER_FLAGS}")
+endif ()
+
 install(TARGETS bro DESTINATION bin)
 
 set(BRO_EXE bro
     CACHE STRING "Bro executable binary" FORCE)
 
+set(BRO_EXE_PATH ${CMAKE_CURRENT_BINARY_DIR}/bro
+    CACHE STRING "Path to Bro executable binary" FORCE)
+
 # Target to create all the autogenerated files.
 add_custom_target(generate_outputs_stage1)
 add_dependencies(generate_outputs_stage1 ${bro_ALL_GENERATED_OUTPUTS})
 
 # Target to create the joint includes files that pull in the bif code.
-bro_bif_create_includes(generate_outputs_stage2 ${CMAKE_CURRENT_BINARY_DIR} "${bro_AUTO_BIFS}")
-add_dependencies(generate_outputs_stage2 generate_outputs_stage1)
+bro_bif_create_includes(generate_outputs_stage2a ${CMAKE_CURRENT_BINARY_DIR} "${bro_AUTO_BIFS}")
+bro_bif_create_register(generate_outputs_stage2b ${CMAKE_CURRENT_BINARY_DIR} "${bro_REGISTER_BIFS}")
+add_dependencies(generate_outputs_stage2a generate_outputs_stage1)
+add_dependencies(generate_outputs_stage2b generate_outputs_stage1)
 
 # Global target to trigger creation of autogenerated code.
 add_custom_target(generate_outputs)
-add_dependencies(generate_outputs generate_outputs_stage2)
+add_dependencies(generate_outputs generate_outputs_stage2a generate_outputs_stage2b)
 
 # Build __load__.bro files for standard *.bif.bro.
 bro_bif_create_loader(bif_loader "${bro_BASE_BIF_SCRIPTS}")

@@ -35,6 +35,7 @@
 #include "Net.h"
 #include "Var.h"
 #include "Reporter.h"
+#include "iosource/Manager.h"
 
 extern "C" {
 extern int select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
@@ -404,17 +405,17 @@ DNS_Mgr::~DNS_Mgr()
 	delete [] dir;
 	}
 
-bool DNS_Mgr::Init()
+void DNS_Mgr::InitPostScript()
 	{
 	if ( did_init )
-		return true;
+		return;
 
 	const char* cache_dir = dir ? dir : ".";
 
 	if ( mode == DNS_PRIME && ! ensure_dir(cache_dir) )
 		{
 		did_init = 0;
-		return false;
+		return;
 		}
 
 	cache_name = new char[strlen(cache_dir) + 64];
@@ -433,14 +434,12 @@ bool DNS_Mgr::Init()
 	did_init = 1;
 
-	io_sources.Register(this, true);
+	iosource_mgr->Register(this, true);
 
 	// We never set idle to false, having the main loop only calling us from
 	// time to time. If we're issuing more DNS requests than we can handle
 	// in this way, we are having problems anyway ...
-	idle = true;
-
-	return true;
+	SetIdle(true);
 	}
 
 static TableVal* fake_name_lookup_result(const char* name)

@@ -12,7 +12,7 @@
 #include "BroList.h"
 #include "Dict.h"
 #include "EventHandler.h"
-#include "IOSource.h"
+#include "iosource/IOSource.h"
 #include "IPAddr.h"
 
 class Val;
@@ -40,12 +40,12 @@ enum DNS_MgrMode {
 // Number of seconds we'll wait for a reply.
 #define DNS_TIMEOUT 5
 
-class DNS_Mgr : public IOSource {
+class DNS_Mgr : public iosource::IOSource {
 public:
 	DNS_Mgr(DNS_MgrMode mode);
 	virtual ~DNS_Mgr();
 
-	bool Init();
+	void InitPostScript();
 	void Flush();
 
 	// Looks up the address or addresses of the given host, and returns

@@ -5,6 +5,7 @@
 #include "DebugLogger.h"
 #include "Net.h"
+#include "plugin/Plugin.h"
 
 DebugLogger debug_logger("debug");
 
@@ -17,7 +18,8 @@ DebugLogger::Stream DebugLogger::streams[NUM_DBGS] = {
 	{ "dpd", 0, false }, { "tm", 0, false },
 	{ "logging", 0, false }, {"input", 0, false },
 	{ "threading", 0, false }, { "file_analysis", 0, false },
-	{ "plugins", 0, false }, { "broxygen", 0, false }
+	{ "plugins", 0, false }, { "broxygen", 0, false },
+	{ "pktio", 0, false}
 };
 
 DebugLogger::DebugLogger(const char* filename)
@@ -73,10 +75,12 @@ void DebugLogger::EnableStreams(const char* s)
 		{
 		if ( strcasecmp("verbose", tok) == 0 )
 			verbose = true;
-		else
+		else if ( strncmp(tok, "plugin-", 7) != 0 )
 			reporter->FatalError("unknown debug stream %s\n", tok);
 		}
 
+	enabled_streams.insert(tok);
+
 	tok = strtok(0, ",");
 	}
 
@@ -105,4 +109,24 @@ void DebugLogger::Log(DebugStream stream, const char* fmt, ...)
 	fflush(file);
 	}
 
+void DebugLogger::Log(const plugin::Plugin& plugin, const char* fmt, ...)
+	{
+	string tok = string("plugin-") + plugin.Name();
+	tok = strreplace(tok, "::", "-");
+
+	if ( enabled_streams.find(tok) == enabled_streams.end() )
+		return;
+
+	fprintf(file, "%17.06f/%17.06f [plugin %s] ",
+		network_time, current_time(true), plugin.Name().c_str());
+
+	va_list ap;
+	va_start(ap, fmt);
+	vfprintf(file, fmt, ap);
+	va_end(ap);
+
+	fputc('\n', file);
+	fflush(file);
+	}
+
 #endif

@@ -7,6 +7,8 @@
 #ifdef DEBUG
 
 #include <stdio.h>
+#include <string>
+#include <set>
 
 // To add a new debugging stream, add a constant here as well as
 // an entry to DebugLogger::streams in DebugLogger.cc.
@@ -27,8 +29,9 @@ enum DebugStream {
 	DBG_INPUT,	// Input streams
 	DBG_THREADING,	// Threading system
 	DBG_FILE_ANALYSIS,	// File analysis
-	DBG_PLUGINS,
-	DBG_BROXYGEN,
+	DBG_PLUGINS,	// Plugin system
+	DBG_BROXYGEN,	// Broxygen
+	DBG_PKTIO,	// Packet sources and dumpers.
 
 	NUM_DBGS	// Has to be last
 };
@@ -42,6 +45,10 @@ enum DebugStream {
 #define DBG_PUSH(stream) debug_logger.PushIndent(stream)
 #define DBG_POP(stream) debug_logger.PopIndent(stream)
 
+#define PLUGIN_DBG_LOG(plugin, args...) debug_logger.Log(plugin, args)
+
+namespace plugin { class Plugin; }
+
 class DebugLogger {
 public:
 	// Output goes to stderr per default.
@@ -49,6 +56,7 @@ public:
 	~DebugLogger();
 
 	void Log(DebugStream stream, const char* fmt, ...);
+	void Log(const plugin::Plugin& plugin, const char* fmt, ...);
 
 	void PushIndent(DebugStream stream)
 		{ ++streams[int(stream)].indent; }
@@ -79,6 +87,8 @@ private:
 		bool enabled;
 	};
 
+	std::set<std::string> enabled_streams;
+
 	static Stream streams[NUM_DBGS];
 };
 
@@ -89,6 +99,7 @@ extern DebugLogger debug_logger;
 #define DBG_LOG_VERBOSE(args...)
 #define DBG_PUSH(stream)
 #define DBG_POP(stream)
+#define PLUGIN_DBG_LOG(plugin, args...)
 
 #endif
 
 #endif

@@ -6,6 +6,7 @@
 #include "Func.h"
 #include "NetVar.h"
 #include "Trigger.h"
+#include "plugin/Manager.h"
 
 EventMgr mgr;
 
@@ -77,6 +78,11 @@ EventMgr::~EventMgr()
 
 void EventMgr::QueueEvent(Event* event)
 	{
+	bool done = PLUGIN_HOOK_WITH_RESULT(HOOK_QUEUE_EVENT, HookQueueEvent(event), false);
+
+	if ( done )
+		return;
+
 	if ( ! head )
 		head = tail = event;
 	else
@@ -115,6 +121,8 @@ void EventMgr::Drain()
 	SegmentProfiler(segment_logger, "draining-events");
 
+	PLUGIN_HOOK_VOID(HOOK_DRAIN_EVENTS, HookDrainEvents());
+
 	draining = true;
 	while ( head )
 		Dispatch();

@@ -24,6 +24,8 @@ public:
 	SourceID Source() const	{ return src; }
 	analyzer::ID Analyzer() const	{ return aid; }
 	TimerMgr* Mgr() const	{ return mgr; }
+	EventHandlerPtr Handler() const	{ return handler; }
+	val_list* Args() const	{ return args; }
 
 	void Describe(ODesc* d) const;

@@ -13,6 +13,7 @@ EventHandler::EventHandler(const char* arg_name)
 	type = 0;
 	error_handler = false;
 	enabled = true;
+	generate_always = false;
 	}
 
 EventHandler::~EventHandler()
@@ -23,7 +24,9 @@ EventHandler::~EventHandler()
 
 EventHandler::operator bool() const
 	{
-	return enabled && ((local && local->HasBodies()) || receivers.length());
+	return enabled && ((local && local->HasBodies())
+			   || receivers.length()
+			   || generate_always);
 	}
 
 FuncType* EventHandler::FType()

@@ -43,6 +43,11 @@ public:
 	void SetEnable(bool arg_enable)	{ enabled = arg_enable; }
 
+	// Flags the event as interesting even if there is no body defined. In
+	// particular, this will then still pass the event on to plugins.
+	void SetGenerateAlways()	{ generate_always = true; }
+	bool GenerateAlways()	{ return generate_always; }
+
 	// We don't serialize the handler(s) itself here, but
 	// just the reference to it.
 	bool Serialize(SerialInfo* info) const;
@@ -57,6 +62,7 @@ private:
 	bool used;	// this handler is indeed used somewhere
 	bool enabled;
 	bool error_handler;	// this handler reports error messages.
+	bool generate_always;
 
 	declare(List, SourceID);
 	typedef List(SourceID) receiver_list;

@@ -71,6 +71,23 @@ EventRegistry::string_list* EventRegistry::UsedHandlers()
 	return names;
 	}
 
+EventRegistry::string_list* EventRegistry::AllHandlers()
+	{
+	string_list* names = new string_list;
+
+	IterCookie* c = handlers.InitForIteration();
+
+	HashKey* k;
+	EventHandler* v;
+	while ( (v = handlers.NextEntry(k, c)) )
+		{
+		names->append(v->Name());
+		delete k;
+		}
+
+	return names;
+	}
+
 void EventRegistry::PrintDebug()
 	{
 	IterCookie* c = handlers.InitForIteration();

@@ -33,6 +33,8 @@ public:
 	string_list* UnusedHandlers();
 	string_list* UsedHandlers();
 
+	string_list* AllHandlers();
+
 	void PrintDebug();
 
 private:

@@ -608,6 +608,10 @@ public:
 	CondExpr(Expr* op1, Expr* op2, Expr* op3);
 	~CondExpr();
 
+	const Expr* Op1() const	{ return op1; }
+	const Expr* Op2() const	{ return op2; }
+	const Expr* Op3() const	{ return op3; }
+
 	Expr* Simplify(SimplifyType simp_type);
 	Val* Eval(Frame* f) const;
 	int IsPure() const;
@@ -706,6 +710,7 @@ public:
 	~FieldExpr();
 
 	int Field() const	{ return field; }
+	const char* FieldName() const	{ return field_name; }
 
 	int CanDel() const;
 
@@ -737,6 +742,8 @@ public:
 	HasFieldExpr(Expr* op, const char* field_name);
 	~HasFieldExpr();
 
+	const char* FieldName() const	{ return field_name; }
+
 protected:
 	friend class Expr;
 	HasFieldExpr()	{ field_name = 0; }

@@ -1,228 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// Written by Bernhard Ager, TU Berlin (2006/2007).
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <netdb.h>
#include "FlowSrc.h"
#include "Net.h"
#include "analyzer/protocol/netflow/netflow_pac.h"
#include <errno.h>
FlowSrc::FlowSrc()
{ // TODO: v9.
selectable_fd = -1;
idle = false;
data = 0;
pdu_len = -1;
exporter_ip = 0;
current_timestamp = next_timestamp = 0.0;
netflow_analyzer = new binpac::NetFlow::NetFlow_Analyzer();
}
FlowSrc::~FlowSrc()
{
delete netflow_analyzer;
}
void FlowSrc::GetFds(int* read, int* write, int* except)
{
if ( selectable_fd >= 0 )
*read = selectable_fd;
}
double FlowSrc::NextTimestamp(double* network_time)
{
if ( ! data && ! ExtractNextPDU() )
return -1.0;
else
return next_timestamp;
}
void FlowSrc::Process()
{
if ( ! data && ! ExtractNextPDU() )
return;
// This is normally done by calling net_packet_dispatch(),
// but as we don't have a packet to dispatch ...
network_time = next_timestamp;
expire_timers();
netflow_analyzer->downflow()->set_exporter_ip(exporter_ip);
// We handle exceptions in NewData (might have changed w/ new binpac).
netflow_analyzer->NewData(0, data, data + pdu_len);
data = 0;
}
void FlowSrc::Close()
{
safe_close(selectable_fd);
}
FlowSocketSrc::~FlowSocketSrc()
{
}
int FlowSocketSrc::ExtractNextPDU()
{
sockaddr_in from;
socklen_t fromlen = sizeof(from);
pdu_len = recvfrom(selectable_fd, buffer, NF_MAX_PKT_SIZE, 0,
(struct sockaddr*) &from, &fromlen);
if ( pdu_len < 0 )
{
reporter->Error("problem reading NetFlow data from socket");
data = 0;
next_timestamp = -1.0;
closed = 1;
return 0;
}
if ( fromlen != sizeof(from) )
{
reporter->Error("malformed NetFlow PDU");
return 0;
}
data = buffer;
exporter_ip = from.sin_addr.s_addr;
next_timestamp = current_time();
if ( next_timestamp < current_timestamp )
next_timestamp = current_timestamp;
else
current_timestamp = next_timestamp;
return 1;
}
FlowSocketSrc::FlowSocketSrc(const char* listen_parms)
{
int n = strlen(listen_parms) + 1;
char laddr[n], port[n], ident[n];
laddr[0] = port[0] = ident[0] = '\0';
int ret = sscanf(listen_parms, "%[^:]:%[^=]=%s", laddr, port, ident);
if ( ret < 2 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"parsing your listen-spec went nuts: laddr='%s', port='%s'\n",
laddr[0] ? laddr : "", port[0] ? port : "");
closed = 1;
return;
}
const char* id = (ret == 3) ? ident : listen_parms;
netflow_analyzer->downflow()->set_identifier(id);
struct addrinfo aiprefs = {
0, PF_INET, SOCK_DGRAM, IPPROTO_UDP, 0, NULL, NULL, NULL
};
struct addrinfo* ainfo = 0;
if ( (ret = getaddrinfo(laddr, port, &aiprefs, &ainfo)) != 0 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"getaddrinfo(%s, %s, ...): %s",
laddr, port, gai_strerror(ret));
closed = 1;
return;
}
if ( (selectable_fd = socket (PF_INET, SOCK_DGRAM, 0)) < 0 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"socket: %s", strerror(errno));
closed = 1;
goto cleanup;
}
if ( bind (selectable_fd, ainfo->ai_addr, ainfo->ai_addrlen) < 0 )
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"bind: %s", strerror(errno));
closed = 1;
goto cleanup;
}
cleanup:
freeaddrinfo(ainfo);
}
FlowFileSrc::~FlowFileSrc()
{
delete [] readfile;
}
int FlowFileSrc::ExtractNextPDU()
{
FlowFileSrcPDUHeader pdu_header;
if ( read(selectable_fd, &pdu_header, sizeof(pdu_header)) <
int(sizeof(pdu_header)) )
return Error(errno, "read header");
if ( pdu_header.pdu_length > NF_MAX_PKT_SIZE )
{
reporter->Error("NetFlow packet too long");
// Safely skip over the too-long PDU.
if ( lseek(selectable_fd, pdu_header.pdu_length, SEEK_CUR) < 0 )
return Error(errno, "lseek");
return 0;
}
if ( read(selectable_fd, buffer, pdu_header.pdu_length) <
pdu_header.pdu_length )
return Error(errno, "read data");
if ( next_timestamp < pdu_header.network_time )
{
next_timestamp = pdu_header.network_time;
current_timestamp = pdu_header.network_time;
}
else
current_timestamp = next_timestamp;
data = buffer;
pdu_len = pdu_header.pdu_length;
exporter_ip = pdu_header.ipaddr;
return 1;
}
FlowFileSrc::FlowFileSrc(const char* readfile)
{
int n = strlen(readfile) + 1;
char ident[n];
this->readfile = new char[n];
int ret = sscanf(readfile, "%[^=]=%s", this->readfile, ident);
const char* id = (ret == 2) ? ident : this->readfile;
netflow_analyzer->downflow()->set_identifier(id);
selectable_fd = open(this->readfile, O_RDONLY);
if ( selectable_fd < 0 )
{
closed = 1;
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"open: %s", strerror(errno));
}
}
int FlowFileSrc::Error(int errlvl, const char* errmsg)
{
snprintf(errbuf, BRO_FLOW_ERRBUF_SIZE,
"%s: %s", errmsg, strerror(errlvl));
data = 0;
next_timestamp = -1.0;
closed = 1;
return 0;
}

@@ -1,84 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// Written by Bernhard Ager, TU Berlin (2006/2007).
#ifndef flowsrc_h
#define flowsrc_h
#include "IOSource.h"
#include "NetVar.h"
#include "binpac.h"
#define BRO_FLOW_ERRBUF_SIZE 512
// TODO: 1500 is enough for v5 - how about the others?
// 65536 would be enough for any UDP packet.
#define NF_MAX_PKT_SIZE 8192
struct FlowFileSrcPDUHeader {
double network_time;
int pdu_length;
uint32 ipaddr;
};
// Avoid including netflow_pac.h by explicitly declaring the NetFlow_Analyzer.
namespace binpac {
namespace NetFlow {
class NetFlow_Analyzer;
}
}
class FlowSrc : public IOSource {
public:
virtual ~FlowSrc();
// IOSource interface:
bool IsReady();
void GetFds(int* read, int* write, int* except);
double NextTimestamp(double* network_time);
void Process();
const char* Tag() { return "FlowSrc"; }
const char* ErrorMsg() const { return errbuf; }
protected:
FlowSrc();
virtual int ExtractNextPDU() = 0;
virtual void Close();
int selectable_fd;
double current_timestamp;
double next_timestamp;
binpac::NetFlow::NetFlow_Analyzer* netflow_analyzer;
u_char buffer[NF_MAX_PKT_SIZE];
u_char* data;
int pdu_len;
uint32 exporter_ip; // in network byte order
char errbuf[BRO_FLOW_ERRBUF_SIZE];
};
class FlowSocketSrc : public FlowSrc {
public:
FlowSocketSrc(const char* listen_parms);
virtual ~FlowSocketSrc();
int ExtractNextPDU();
};
class FlowFileSrc : public FlowSrc {
public:
FlowFileSrc(const char* readfile);
~FlowFileSrc();
int ExtractNextPDU();
protected:
int Error(int errlvl, const char* errmsg);
char* readfile;
};
#endif

@@ -46,6 +46,7 @@
 #include "Event.h"
 #include "Traverse.h"
 #include "Reporter.h"
+#include "plugin/Manager.h"
 
 extern RETSIGTYPE sig_handler(int signo);
@@ -226,7 +227,7 @@ TraversalCode Func::Traverse(TraversalCallback* cb) const
 	HANDLE_TC_STMT_PRE(tc);
 
 	// FIXME: Traverse arguments to builtin functions, too.
-	if ( kind == BRO_FUNC )
+	if ( kind == BRO_FUNC && scope )
 		{
 		tc = scope->Traverse(cb);
 		HANDLE_TC_STMT_PRE(tc);
@@ -244,6 +245,49 @@ TraversalCode Func::Traverse(TraversalCallback* cb) const
 	HANDLE_TC_STMT_POST(tc);
 	}
 
+Val* Func::HandlePluginResult(Val* plugin_result, val_list* args, function_flavor flavor) const
+	{
+	// Helper function factoring out this code from BroFunc:Call() for better
+	// readability.
+
+	switch ( flavor ) {
+	case FUNC_FLAVOR_EVENT:
+		Unref(plugin_result);
+		plugin_result = 0;
+		break;
+
+	case FUNC_FLAVOR_HOOK:
+		if ( plugin_result->Type()->Tag() != TYPE_BOOL )
+			reporter->InternalError("plugin returned non-bool for hook");
+		break;
+
+	case FUNC_FLAVOR_FUNCTION:
+		{
+		BroType* yt = FType()->YieldType();
+
+		if ( (! yt) || yt->Tag() == TYPE_VOID )
+			{
+			Unref(plugin_result);
+			plugin_result = 0;
+			}
+		else
+			{
+			if ( plugin_result->Type()->Tag() != yt->Tag() )
+				reporter->InternalError("plugin returned wrong type for function call");
+			}
+
+		break;
+		}
+	}
+
+	loop_over_list(*args, i)
+		Unref((*args)[i]);
+
+	return plugin_result;
+	}
+
 BroFunc::BroFunc(ID* arg_id, Stmt* arg_body, id_list* aggr_inits,
 		int arg_frame_size, int priority)
 : Func(BRO_FUNC)
BroFunc::BroFunc(ID* arg_id, Stmt* arg_body, id_list* aggr_inits, BroFunc::BroFunc(ID* arg_id, Stmt* arg_body, id_list* aggr_inits,
int arg_frame_size, int priority) int arg_frame_size, int priority)
: Func(BRO_FUNC) : Func(BRO_FUNC)
@@ -281,6 +325,17 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
 #ifdef PROFILE_BRO_FUNCTIONS
 	DEBUG_MSG("Function: %s\n", id->Name());
 #endif
+	SegmentProfiler(segment_logger, location);
+
+	if ( sample_logger )
+		sample_logger->FunctionSeen(this);
+
+	Val* plugin_result = PLUGIN_HOOK_WITH_RESULT(HOOK_CALL_FUNCTION, HookCallFunction(this, args), 0);
+
+	if ( plugin_result )
+		return HandlePluginResult(plugin_result, args, Flavor());
+
 	if ( bodies.empty() )
 		{
 		// Can only happen for events and hooks.
@@ -291,7 +346,6 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
 		return Flavor() == FUNC_FLAVOR_HOOK ? new Val(true, TYPE_BOOL) : 0;
 		}
 
-	SegmentProfiler(segment_logger, location);
 	Frame* f = new Frame(frame_size, this, args);
 
 	// Hand down any trigger.
@@ -319,9 +373,6 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
 	Val* result = 0;
 
-	if ( sample_logger )
-		sample_logger->FunctionSeen(this);
-
 	for ( size_t i = 0; i < bodies.size(); ++i )
 		{
 		if ( sample_logger )
@@ -497,6 +548,11 @@ Val* BuiltinFunc::Call(val_list* args, Frame* parent) const
 	if ( sample_logger )
 		sample_logger->FunctionSeen(this);
 
+	Val* plugin_result = PLUGIN_HOOK_WITH_RESULT(HOOK_CALL_FUNCTION, HookCallFunction(this, args), 0);
+
+	if ( plugin_result )
+		return HandlePluginResult(plugin_result, args, FUNC_FLAVOR_FUNCTION);
+
 	if ( g_trace_state.DoTrace() )
 		{
 		ODesc d;
@@ -550,18 +606,15 @@ void builtin_error(const char* msg, BroObj* arg)
 	}
 
 #include "bro.bif.func_h"
-#include "logging.bif.func_h"
-#include "input.bif.func_h"
 #include "reporter.bif.func_h"
 #include "strings.bif.func_h"
 
 #include "bro.bif.func_def"
-#include "logging.bif.func_def"
-#include "input.bif.func_def"
 #include "reporter.bif.func_def"
 #include "strings.bif.func_def"
 
 #include "__all__.bif.cc" // Autogenerated for compiling in the bif_target() code.
+#include "__all__.bif.register.cc" // Autogenerated for compiling in the bif_target() code.
 
 void init_builtin_funcs()
 	{
@@ -572,16 +625,17 @@ void init_builtin_funcs()
 	gap_info = internal_type("gap_info")->AsRecordType();
 
 #include "bro.bif.func_init"
-#include "logging.bif.func_init"
-#include "input.bif.func_init"
 #include "reporter.bif.func_init"
 #include "strings.bif.func_init"
-#include "__all__.bif.init.cc" // Autogenerated for compiling in the bif_target() code.
 
 	did_builtin_init = true;
 	}
 
+void init_builtin_funcs_subdirs()
+	{
+#include "__all__.bif.init.cc" // Autogenerated for compiling in the bif_target() code.
+	}
+
 bool check_built_in_call(BuiltinFunc* f, CallExpr* call)
 	{
 	if ( f->TheFunc() != BifFunc::bro_fmt )

@@ -52,6 +52,7 @@ public:
 	Kind GetKind() const	{ return kind; }
 
 	const char* Name() const	{ return name.c_str(); }
+	void SetName(const char* arg_name)	{ name = arg_name; }
 
 	virtual void Describe(ODesc* d) const = 0;
 	virtual void DescribeDebug(ODesc* d, const val_list* args) const;
@@ -69,6 +70,9 @@ public:
 protected:
 	Func();
 
+	// Helper function for handling result of plugin hook.
+	Val* HandlePluginResult(Val* plugin_result, val_list* args, function_flavor flavor) const;
+
 	DECLARE_ABSTRACT_SERIAL(Func);
 
 	vector<Body> bodies;
@@ -130,6 +134,7 @@ protected:
 extern void builtin_error(const char* msg, BroObj* arg = 0);
 extern void init_builtin_funcs();
+extern void init_builtin_funcs_subdirs();
 extern bool check_built_in_call(BuiltinFunc* f, CallExpr* call);

@@ -32,9 +32,11 @@ public:
 	void SetType(BroType* t)	{ Unref(type); type = t; }
 	BroType* Type()	{ return type; }
+	const BroType* Type() const	{ return type; }
 
 	void MakeType()	{ is_type = 1; }
 	BroType* AsType()	{ return is_type ? Type() : 0; }
+	const BroType* AsType() const	{ return is_type ? Type() : 0; }
 
 	// If weak_ref is false, the Val is assumed to be already ref'ed
 	// and will be deref'ed when the ID is deleted.


@@ -1,103 +0,0 @@
// Interface for classes providing/consuming data during Bro's main loop.
#ifndef iosource_h
#define iosource_h
#include <list>
#include "Timer.h"
using namespace std;
class IOSource {
public:
IOSource() { idle = closed = false; }
virtual ~IOSource() {}
// Returns true if source has nothing ready to process.
bool IsIdle() const { return idle; }
// Returns true if more data is to be expected in the future.
// Otherwise, source may be removed.
bool IsOpen() const { return ! closed; }
// Returns select'able fds (leaves args untouched if we don't have
// selectable fds).
virtual void GetFds(int* read, int* write, int* except) = 0;
// The following two methods are only called when either IsIdle()
// returns false or select() on one of the fds indicates that there's
// data to process.
// Returns timestamp (in global network time) associated with next
// data item. If the source wants the data item to be processed
// with a local network time, it sets the argument accordingly.
virtual double NextTimestamp(double* network_time) = 0;
// Processes and consumes next data item.
virtual void Process() = 0;
// Returns tag of timer manager associated with last processed
// data item, nil for global timer manager.
virtual TimerMgr::Tag* GetCurrentTag() { return 0; }
// Returns a descriptual tag for debugging.
virtual const char* Tag() = 0;
protected:
// Derived classed are to set this to true if they have gone dry
// temporarily.
bool idle;
// Derived classed are to set this to true if they have gone dry
// permanently.
bool closed;
};
class IOSourceRegistry {
public:
IOSourceRegistry() { call_count = 0; dont_counts = 0; }
~IOSourceRegistry();
// If dont_count is true, this source does not contribute to the
// number of IOSources returned by Size(). The effect is that
// if all sources but the non-counting ones have gone dry,
// processing will shut down.
void Register(IOSource* src, bool dont_count = false);
// This may block for some time.
IOSource* FindSoonest(double* ts);
int Size() const { return sources.size() - dont_counts; }
// Terminate IOSource processing immediately by removing all
// sources (and therefore returning a Size() of zero).
void Terminate() { RemoveAll(); }
protected:
// When looking for a source with something to process,
// every SELECT_FREQUENCY calls we will go ahead and
// block on a select().
static const int SELECT_FREQUENCY = 25;
// Microseconds to wait in an empty select if no source is ready.
static const int SELECT_TIMEOUT = 50;
void RemoveAll();
unsigned int call_count;
int dont_counts;
struct Source {
IOSource* src;
int fd_read;
int fd_write;
int fd_except;
};
typedef list<Source*> SourceList;
SourceList sources;
};
extern IOSourceRegistry io_sources;
#endif
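The deleted header above defined the contract Bro's old main loop relied on: each registered source reports the timestamp of its next item via `NextTimestamp()`, the registry's `FindSoonest()` picks the earliest, and the loop dispatches that source's `Process()`. A minimal sketch of that contract under those assumptions (everything other than the `IOSource`/`NextTimestamp`/`Process` names is illustrative, not the real registry):

```cpp
#include <cassert>
#include <list>

// Minimal sketch of the IOSource contract: a source reports the
// timestamp of its next data item (negative when dry), and a
// FindSoonest()-style helper picks the source with the earliest item.
class IOSource {
public:
	virtual ~IOSource() {}
	virtual double NextTimestamp(double* network_time) = 0;
	virtual void Process() = 0;
};

// Toy source holding a single timestamped item.
class QueueSource : public IOSource {
public:
	QueueSource(double t) : next(t), processed(0) {}
	double NextTimestamp(double* /* network_time */) { return next; }
	void Process() { ++processed; next = -1.0; } // go dry after one item
	double next;
	int processed;
};

// Return the registered source with the smallest non-negative
// timestamp, storing that timestamp in *ts; 0 if all sources are dry.
IOSource* find_soonest(std::list<IOSource*>& sources, double* ts)
	{
	IOSource* soonest = 0;
	for ( std::list<IOSource*>::iterator i = sources.begin();
	      i != sources.end(); ++i )
		{
		double t = (*i)->NextTimestamp(0);
		if ( t >= 0 && (! soonest || t < *ts) )
			{
			soonest = *i;
			*ts = t;
			}
		}
	return soonest;
	}
```

In the real loop this selection repeats until every counting source has gone dry, which is exactly the `Size()`/`dont_counts` bookkeeping in `IOSourceRegistry` above.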


@@ -29,6 +29,10 @@
 #include "Anon.h"
 #include "Serializer.h"
 #include "PacketDumper.h"
+#include "iosource/Manager.h"
+#include "iosource/PktSrc.h"
+#include "iosource/PktDumper.h"
+#include "plugin/Manager.h"

 extern "C" {
 #include "setsignal.h"
@@ -38,10 +42,7 @@ extern "C" {
 extern int select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
 }

-PList(PktSrc) pkt_srcs;
-
-// FIXME: We should really merge PktDumper and PacketDumper.
-PktDumper* pkt_dumper = 0;
+iosource::PktDumper* pkt_dumper = 0;

 int reading_live = 0;
 int reading_traces = 0;
@@ -62,8 +63,8 @@ const u_char* current_pkt = 0;
 int current_dispatched = 0;
 int current_hdr_size = 0;
 double current_timestamp = 0.0;
-PktSrc* current_pktsrc = 0;
-IOSource* current_iosrc;
+iosource::PktSrc* current_pktsrc = 0;
+iosource::IOSource* current_iosrc = 0;

 std::list<ScannedFile> files_scanned;
 std::vector<string> sig_files;
@@ -112,17 +113,21 @@ RETSIGTYPE watchdog(int /* signo */)
 			// saving the packet which caused the
 			// watchdog to trigger may be helpful,
 			// so we'll save that one nevertheless.
-			pkt_dumper = new PktDumper("watchdog-pkt.pcap");
-			if ( pkt_dumper->IsError() )
+			pkt_dumper = iosource_mgr->OpenPktDumper("watchdog-pkt.pcap", false);
+			if ( ! pkt_dumper || pkt_dumper->IsError() )
 				{
-				reporter->Error("watchdog: can't open watchdog-pkt.pcap for writing\n");
-				delete pkt_dumper;
+				reporter->Error("watchdog: can't open watchdog-pkt.pcap for writing");
 				pkt_dumper = 0;
 				}
 			}

 		if ( pkt_dumper )
-			pkt_dumper->Dump(current_hdr, current_pkt);
+			{
+			iosource::PktDumper::Packet p;
+			p.hdr = current_hdr;
+			p.data = current_pkt;
+			pkt_dumper->Dump(&p);
+			}
 		}

 	net_get_final_stats();
@@ -141,121 +146,47 @@ RETSIGTYPE watchdog(int /* signo */)
 	return RETSIGVAL;
 	}

-void net_init(name_list& interfaces, name_list& readfiles,
-		name_list& netflows, name_list& flowfiles,
-		const char* writefile, const char* filter,
-		const char* secondary_filter, int do_watchdog)
+void net_update_time(double new_network_time)
 	{
-	init_net_var();
+	network_time = new_network_time;
+	PLUGIN_HOOK_VOID(HOOK_UPDATE_NETWORK_TIME, HookUpdateNetworkTime(new_network_time));
+	}

-	if ( readfiles.length() > 0 || flowfiles.length() > 0 )
+void net_init(name_list& interfaces, name_list& readfiles,
+	      const char* writefile, int do_watchdog)
+	{
+	if ( readfiles.length() > 0 )
 		{
 		reading_live = pseudo_realtime > 0.0;
 		reading_traces = 1;

 		for ( int i = 0; i < readfiles.length(); ++i )
 			{
-			PktFileSrc* ps = new PktFileSrc(readfiles[i], filter);
+			iosource::PktSrc* ps = iosource_mgr->OpenPktSrc(readfiles[i], false);
+			assert(ps);

 			if ( ! ps->IsOpen() )
-				reporter->FatalError("%s: problem with trace file %s - %s\n",
-					prog, readfiles[i], ps->ErrorMsg());
-			else
-				{
-				pkt_srcs.append(ps);
-				io_sources.Register(ps);
-				}
-
-			if ( secondary_filter )
-				{
-				// We use a second PktFileSrc for the
-				// secondary path.
-				PktFileSrc* ps = new PktFileSrc(readfiles[i],
-						secondary_filter,
-						TYPE_FILTER_SECONDARY);
-
-				if ( ! ps->IsOpen() )
-					reporter->FatalError("%s: problem with trace file %s - %s\n",
-						prog, readfiles[i],
-						ps->ErrorMsg());
-				else
-					{
-					pkt_srcs.append(ps);
-					io_sources.Register(ps);
-					}
-
-				ps->AddSecondaryTablePrograms();
-				}
-			}
-
-		for ( int i = 0; i < flowfiles.length(); ++i )
-			{
-			FlowFileSrc* fs = new FlowFileSrc(flowfiles[i]);
-
-			if ( ! fs->IsOpen() )
-				reporter->FatalError("%s: problem with netflow file %s - %s\n",
-					prog, flowfiles[i], fs->ErrorMsg());
-			else
-				{
-				io_sources.Register(fs);
-				}
+				reporter->FatalError("problem with trace file %s (%s)",
+						     readfiles[i],
+						     ps->ErrorMsg());
 			}
 		}

-	else if ((interfaces.length() > 0 || netflows.length() > 0))
+	else if ( interfaces.length() > 0 )
 		{
 		reading_live = 1;
 		reading_traces = 0;

 		for ( int i = 0; i < interfaces.length(); ++i )
 			{
-			PktSrc* ps;
-			ps = new PktInterfaceSrc(interfaces[i], filter);
+			iosource::PktSrc* ps = iosource_mgr->OpenPktSrc(interfaces[i], true);
+			assert(ps);

 			if ( ! ps->IsOpen() )
-				reporter->FatalError("%s: problem with interface %s - %s\n",
-					prog, interfaces[i], ps->ErrorMsg());
-			else
-				{
-				pkt_srcs.append(ps);
-				io_sources.Register(ps);
-				}
-
-			if ( secondary_filter )
-				{
-				PktSrc* ps;
-				ps = new PktInterfaceSrc(interfaces[i],
-						filter, TYPE_FILTER_SECONDARY);
-
-				if ( ! ps->IsOpen() )
-					reporter->Error("%s: problem with interface %s - %s\n",
-						prog, interfaces[i],
-						ps->ErrorMsg());
-				else
-					{
-					pkt_srcs.append(ps);
-					io_sources.Register(ps);
-					}
-
-				ps->AddSecondaryTablePrograms();
-				}
+				reporter->FatalError("problem with interface %s (%s)",
+						     interfaces[i],
+						     ps->ErrorMsg());
 			}
-
-		for ( int i = 0; i < netflows.length(); ++i )
-			{
-			FlowSocketSrc* fs = new FlowSocketSrc(netflows[i]);
-
-			if ( ! fs->IsOpen() )
-				{
-				reporter->Error("%s: problem with netflow socket %s - %s\n",
-					prog, netflows[i], fs->ErrorMsg());
-				delete fs;
-				}
-			else
-				io_sources.Register(fs);
-			}
 		}

 	else
@@ -267,12 +198,12 @@ void net_init(name_list& interfaces, name_list& readfiles,
 	if ( writefile )
 		{
-		// ### This will fail horribly if there are multiple
-		// interfaces with different-lengthed media.
-		pkt_dumper = new PktDumper(writefile);
-		if ( pkt_dumper->IsError() )
-			reporter->FatalError("%s: can't open write file \"%s\" - %s\n",
-				prog, writefile, pkt_dumper->ErrorMsg());
+		pkt_dumper = iosource_mgr->OpenPktDumper(writefile, false);
+		assert(pkt_dumper);
+
+		if ( ! pkt_dumper->IsOpen() )
+			reporter->FatalError("problem opening dump file %s (%s)",
+					     writefile, pkt_dumper->ErrorMsg());

 		ID* id = global_scope()->Lookup("trace_output_file");
 		if ( ! id )
@@ -293,7 +224,7 @@ void net_init(name_list& interfaces, name_list& readfiles,
 		}
 	}

-void expire_timers(PktSrc* src_ps)
+void expire_timers(iosource::PktSrc* src_ps)
 	{
 	SegmentProfiler(segment_logger, "expiring-timers");
 	TimerMgr* tmgr =
@@ -306,8 +237,8 @@ void expire_timers(iosource::PktSrc* src_ps)
 	}

 void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 			const u_char* pkt, int hdr_size,
-			PktSrc* src_ps)
+			iosource::PktSrc* src_ps)
 	{
 	if ( ! bro_start_network_time )
 		bro_start_network_time = t;
@@ -315,7 +246,7 @@ void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 	TimerMgr* tmgr = sessions->LookupTimerMgr(src_ps->GetCurrentTag());

 	// network_time never goes back.
-	network_time = tmgr->Time() < t ? t : tmgr->Time();
+	net_update_time(tmgr->Time() < t ? t : tmgr->Time());

 	current_pktsrc = src_ps;
 	current_iosrc = src_ps;
@@ -363,11 +294,11 @@ void net_run()
 	{
 	set_processing_status("RUNNING", "net_run");

-	while ( io_sources.Size() ||
+	while ( iosource_mgr->Size() ||
 		(BifConst::exit_only_after_terminate && ! terminating) )
 		{
 		double ts;
-		IOSource* src = io_sources.FindSoonest(&ts);
+		iosource::IOSource* src = iosource_mgr->FindSoonest(&ts);

 #ifdef DEBUG
 		static int loop_counter = 0;
@@ -395,7 +326,7 @@ void net_run()
 				{
 				// Take advantage of the lull to get up to
 				// date on timers and events.
-				network_time = ct;
+				net_update_time(ct);
 				expire_timers();
 				usleep(1); // Just yield.
 				}
@@ -408,7 +339,7 @@ void net_run()
 			// date on timers and events.  Because we only
 			// have timers as sources, going to sleep here
 			// doesn't risk blocking on other inputs.
-			network_time = current_time();
+			net_update_time(current_time());
 			expire_timers();

 			// Avoid busy-waiting - pause for 100 ms.
@@ -465,16 +396,19 @@ void net_run()
 void net_get_final_stats()
 	{
-	loop_over_list(pkt_srcs, i)
+	const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
+
+	for ( iosource::Manager::PktSrcList::const_iterator i = pkt_srcs.begin();
+	      i != pkt_srcs.end(); i++ )
 		{
-		PktSrc* ps = pkt_srcs[i];
+		iosource::PktSrc* ps = *i;
+
 		if ( ps->IsLive() )
 			{
-			struct PktSrc::Stats s;
+			iosource::PktSrc::Stats s;
 			ps->Statistics(&s);
-			reporter->Info("%d packets received on interface %s, %d dropped\n",
-					s.received, ps->Interface(), s.dropped);
+			reporter->Info("%d packets received on interface %s, %d dropped",
+				       s.received, ps->Path().c_str(), s.dropped);
 			}
 		}
 	}
@@ -494,8 +428,6 @@ void net_finish(int drain_events)
 		sessions->Done();
 		}

-	delete pkt_dumper;
-
 #ifdef DEBUG
 	extern int reassem_seen_bytes, reassem_copied_bytes;
 	// DEBUG_MSG("Reassembly (TCP and IP/Frag): %d bytes seen, %d bytes copied\n",
@@ -516,29 +448,6 @@ void net_delete()
 		delete ip_anonymizer[i];
 	}

-// net_packet_match
-//
-// Description:
-//  - Checks if a packet matches a filter. It just wraps up a call to
-//    [pcap.h's] bpf_filter().
-//
-// Inputs:
-//  - fp: a BPF-compiled filter
-//  - pkt: a pointer to the packet
-//  - len: the original packet length
-//  - caplen: the captured packet length. This is pkt length
-//
-// Output:
-//  - return: 1 if the packet matches the filter, 0 otherwise
-
-int net_packet_match(BPF_Program* fp, const u_char* pkt,
-		u_int len, u_int caplen)
-	{
-	// NOTE: I don't like too much un-const'ing the pkt variable.
-	return bpf_filter(fp->GetProgram()->bf_insns, (u_char*) pkt, len, caplen);
-	}
-
 int _processing_suspended = 0;

 static double suspend_start = 0;
@@ -556,8 +465,12 @@ void net_continue_processing()
 	if ( _processing_suspended == 1 )
 		{
 		reporter->Info("processing continued");
-		loop_over_list(pkt_srcs, i)
-			pkt_srcs[i]->ContinueAfterSuspend();
+
+		const iosource::Manager::PktSrcList& pkt_srcs(iosource_mgr->GetPktSrcs());
+
+		for ( iosource::Manager::PktSrcList::const_iterator i = pkt_srcs.begin();
+		      i != pkt_srcs.end(); i++ )
+			(*i)->ContinueAfterSuspend();
 		}

 	--_processing_suspended;
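The suspend logic above nests: each suspend increments `_processing_suspended`, each continue decrements it, and packet sources only resume when the counter returns to zero (the `_processing_suspended == 1` check fires on the final continue). A toy model of that counting, with hypothetical wrapper names standing in for Bro's `net_suspend_processing()`/`net_continue_processing()`:

```cpp
#include <cassert>

// Toy model of nested suspend/continue counting: processing is
// considered suspended while the counter is non-zero, so suspends
// must be matched one-for-one by continues before packets flow again.
static int suspended = 0;

void suspend_processing()
	{
	++suspended;
	}

void continue_processing()
	{
	if ( suspended > 0 ) // guard added here; Bro decrements unconditionally
		--suspended;
	}

bool is_processing_suspended()
	{
	return suspended > 0;
	}
```

The real code additionally resets each packet source's wallclock reference via `ContinueAfterSuspend()` on the final continue, so pseudo-realtime replay does not try to "catch up" for the suspended interval.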


@@ -5,27 +5,24 @@
 #include "net_util.h"
 #include "util.h"
-#include "BPF_Program.h"
 #include "List.h"
-#include "PktSrc.h"
-#include "FlowSrc.h"
 #include "Func.h"
 #include "RemoteSerializer.h"
+#include "iosource/IOSource.h"
+#include "iosource/PktSrc.h"
+#include "iosource/PktDumper.h"

 extern void net_init(name_list& interfaces, name_list& readfiles,
-		name_list& netflows, name_list& flowfiles,
-		const char* writefile, const char* filter,
-		const char* secondary_filter, int do_watchdog);
+		const char* writefile, int do_watchdog);
 extern void net_run();
 extern void net_get_final_stats();
 extern void net_finish(int drain_events);
 extern void net_delete();	// Reclaim all memory, etc.
+extern void net_update_time(double new_network_time);
 extern void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 			const u_char* pkt, int hdr_size,
-			PktSrc* src_ps);
-extern int net_packet_match(BPF_Program* fp, const u_char* pkt,
-		u_int len, u_int caplen);
-extern void expire_timers(PktSrc* src_ps = 0);
+			iosource::PktSrc* src_ps);
+extern void expire_timers(iosource::PktSrc* src_ps = 0);
 extern void termination_signal();

 // Functions to temporarily suspend processing of live input (network packets
@@ -82,13 +79,10 @@ extern const u_char* current_pkt;
 extern int current_dispatched;
 extern int current_hdr_size;
 extern double current_timestamp;
-extern PktSrc* current_pktsrc;
-extern IOSource* current_iosrc;
+extern iosource::PktSrc* current_pktsrc;
+extern iosource::IOSource* current_iosrc;

-declare(PList,PktSrc);
-extern PList(PktSrc) pkt_srcs;
-
-extern PktDumper* pkt_dumper;	// where to save packets
+extern iosource::PktDumper* pkt_dumper;	// where to save packets

 extern char* writefile;


@@ -239,8 +239,6 @@ bro_uint_t bits_per_uid;
 #include "const.bif.netvar_def"
 #include "types.bif.netvar_def"
 #include "event.bif.netvar_def"
-#include "logging.bif.netvar_def"
-#include "input.bif.netvar_def"
 #include "reporter.bif.netvar_def"

 void init_event_handlers()
@@ -305,8 +303,6 @@ void init_net_var()
 	{
 #include "const.bif.netvar_init"
 #include "types.bif.netvar_init"
-#include "logging.bif.netvar_init"
-#include "input.bif.netvar_init"
 #include "reporter.bif.netvar_init"

 	conn_id = internal_type("conn_id")->AsRecordType();


@@ -249,8 +249,6 @@ extern void init_net_var();
 #include "const.bif.netvar_h"
 #include "types.bif.netvar_h"
 #include "event.bif.netvar_h"
-#include "logging.bif.netvar_h"
-#include "input.bif.netvar_h"
 #include "reporter.bif.netvar_h"

 #endif


@@ -7,6 +7,7 @@
 #include "Obj.h"
 #include "Serializer.h"
 #include "File.h"
+#include "plugin/Manager.h"

 Location no_location("<no location>", 0, 0, 0, 0);
 Location start_location("<start uninitialized>", 0, 0, 0, 0);
@@ -92,6 +93,9 @@ int BroObj::suppress_errors = 0;

 BroObj::~BroObj()
 	{
+	if ( notify_plugins )
+		PLUGIN_HOOK_VOID(HOOK_BRO_OBJ_DTOR, HookBroObjDtor(this));
+
 	delete location;
 	}


@@ -92,6 +92,7 @@ public:
 		{
 		ref_cnt = 1;
 		in_ser_cache = false;
+		notify_plugins = false;

 		// A bit of a hack. We'd like to associate location
 		// information with every object created when parsing,
@@ -151,6 +152,9 @@ public:
 	// extend compound objects such as statement lists.
 	virtual void UpdateLocationEndInfo(const Location& end);

+	// Enable notification of plugins when this objects gets destroyed.
+	void NotifyPluginsOnDtor()	{ notify_plugins = true; }
+
 	int RefCnt() const	{ return ref_cnt; }

 	// Helper class to temporarily suppress errors
@@ -181,6 +185,7 @@ private:
 	friend inline void Ref(BroObj* o);
 	friend inline void Unref(BroObj* o);

+	bool notify_plugins;
 	int ref_cnt;

 	// If non-zero, do not print runtime errors. Useful for
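The `notify_plugins` flag added above makes destructor notification opt-in: every `BroObj` stays silent unless some plugin called `NotifyPluginsOnDtor()` on it, so the common case pays only a boolean check per destruction. A self-contained sketch of that pattern, with a counter standing in for the `PLUGIN_HOOK_VOID(HOOK_BRO_OBJ_DTOR, ...)` dispatch (the counter and class name here are illustrative, not Bro's):

```cpp
#include <cassert>

// Stand-in for the plugin hook dispatch; counts how often it fires.
static int hook_calls = 0;

// Sketch of the opt-in destructor-notification pattern: only objects
// explicitly flagged via NotifyPluginsOnDtor() trigger the hook.
class Obj {
public:
	Obj() : notify_plugins(false) {}
	~Obj()
		{
		if ( notify_plugins )
			++hook_calls; // real code: PLUGIN_HOOK_VOID(HOOK_BRO_OBJ_DTOR, ...)
		}
	void NotifyPluginsOnDtor()	{ notify_plugins = true; }
private:
	bool notify_plugins;
};
```

Keeping the flag per-object (rather than checking a global "any plugin wants dtors" switch) lets a plugin track individual objects without imposing hook overhead on the millions of objects it never asked about.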


@@ -1,804 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include <errno.h>
#include <sys/stat.h>
#include "config.h"
#include "util.h"
#include "PktSrc.h"
#include "Hash.h"
#include "Net.h"
#include "Sessions.h"
// ### This needs auto-confing.
#ifdef HAVE_PCAP_INT_H
#include <pcap-int.h>
#endif
PktSrc::PktSrc()
{
interface = readfile = 0;
data = last_data = 0;
memset(&hdr, 0, sizeof(hdr));
hdr_size = 0;
datalink = 0;
netmask = 0xffffff00;
pd = 0;
idle = false;
next_sync_point = 0;
first_timestamp = current_timestamp = next_timestamp = 0.0;
first_wallclock = current_wallclock = 0;
stats.received = stats.dropped = stats.link = 0;
}
PktSrc::~PktSrc()
{
Close();
loop_over_list(program_list, i)
delete program_list[i];
BPF_Program* code;
IterCookie* cookie = filters.InitForIteration();
while ( (code = filters.NextEntry(cookie)) )
delete code;
delete [] interface;
delete [] readfile;
}
void PktSrc::GetFds(int* read, int* write, int* except)
{
if ( pseudo_realtime )
{
// Select would give erroneous results. But we simulate it
// by setting idle accordingly.
idle = CheckPseudoTime() == 0;
return;
}
if ( selectable_fd >= 0 )
*read = selectable_fd;
}
int PktSrc::ExtractNextPacket()
{
// Don't return any packets if processing is suspended (except for the
// very first packet which we need to set up times).
if ( net_is_processing_suspended() && first_timestamp )
{
idle = true;
return 0;
}
data = last_data = pcap_next(pd, &hdr);
if ( data && (hdr.len == 0 || hdr.caplen == 0) )
{
sessions->Weird("empty_pcap_header", &hdr, data);
return 0;
}
if ( data )
next_timestamp = hdr.ts.tv_sec + double(hdr.ts.tv_usec) / 1e6;
if ( pseudo_realtime )
current_wallclock = current_time(true);
if ( ! first_timestamp )
first_timestamp = next_timestamp;
idle = (data == 0);
if ( data )
++stats.received;
// Source has gone dry. If it's a network interface, this just means
// it's timed out. If it's a file, though, then the file has been
// exhausted.
if ( ! data && ! IsLive() )
{
closed = true;
if ( pseudo_realtime && using_communication )
{
if ( remote_trace_sync_interval )
remote_serializer->SendFinalSyncPoint();
else
remote_serializer->Terminate();
}
}
return data != 0;
}
double PktSrc::NextTimestamp(double* local_network_time)
{
if ( ! data && ! ExtractNextPacket() )
return -1.0;
if ( pseudo_realtime )
{
// Delay packet if necessary.
double packet_time = CheckPseudoTime();
if ( packet_time )
return packet_time;
idle = true;
return -1.0;
}
return next_timestamp;
}
void PktSrc::ContinueAfterSuspend()
{
current_wallclock = current_time(true);
}
double PktSrc::CurrentPacketWallClock()
{
// We stop time when we are suspended.
if ( net_is_processing_suspended() )
current_wallclock = current_time(true);
return current_wallclock;
}
double PktSrc::CheckPseudoTime()
{
if ( ! data && ! ExtractNextPacket() )
return 0;
if ( ! current_timestamp )
return bro_start_time;
if ( remote_trace_sync_interval )
{
if ( next_sync_point == 0 || next_timestamp >= next_sync_point )
{
int n = remote_serializer->SendSyncPoint();
next_sync_point = first_timestamp +
n * remote_trace_sync_interval;
remote_serializer->Log(RemoteSerializer::LogInfo,
fmt("stopping at packet %.6f, next sync-point at %.6f",
current_timestamp, next_sync_point));
return 0;
}
}
double pseudo_time = next_timestamp - first_timestamp;
double ct = (current_time(true) - first_wallclock) * pseudo_realtime;
return pseudo_time <= ct ? bro_start_time + pseudo_time : 0;
}
void PktSrc::Process()
{
if ( ! data && ! ExtractNextPacket() )
return;
current_timestamp = next_timestamp;
int pkt_hdr_size = hdr_size;
// Unfortunately some packets on the link might have MPLS labels
// while others don't. That means we need to ask the link-layer if
// labels are in place.
bool have_mpls = false;
int protocol = 0;
switch ( datalink ) {
case DLT_NULL:
{
protocol = (data[3] << 24) + (data[2] << 16) + (data[1] << 8) + data[0];
// From the Wireshark Wiki: "AF_INET6, unfortunately, has
// different values in {NetBSD,OpenBSD,BSD/OS},
// {FreeBSD,DragonFlyBSD}, and {Darwin/Mac OS X}, so an IPv6
// packet might have a link-layer header with 24, 28, or 30
// as the AF_ value." As we may be reading traces captured on
// platforms other than what we're running on, we accept them
// all here.
if ( protocol != AF_INET
&& protocol != AF_INET6
&& protocol != 24
&& protocol != 28
&& protocol != 30 )
{
sessions->Weird("non_ip_packet_in_null_transport", &hdr, data);
data = 0;
return;
}
break;
}
case DLT_EN10MB:
{
// Get protocol being carried from the ethernet frame.
protocol = (data[12] << 8) + data[13];
switch ( protocol )
{
// MPLS carried over the ethernet frame.
case 0x8847:
// Remove the data link layer and denote a
// header size of zero before the IP header.
have_mpls = true;
data += get_link_header_size(datalink);
pkt_hdr_size = 0;
break;
// VLAN carried over the ethernet frame.
case 0x8100:
data += get_link_header_size(datalink);
// Check for MPLS in VLAN.
if ( ((data[2] << 8) + data[3]) == 0x8847 )
have_mpls = true;
data += 4; // Skip the vlan header
pkt_hdr_size = 0;
// Check for 802.1ah (Q-in-Q) containing IP.
// Only do a second layer of vlan tag
// stripping because there is no
// specification that allows for deeper
// nesting.
if ( ((data[2] << 8) + data[3]) == 0x0800 )
data += 4;
break;
// PPPoE carried over the ethernet frame.
case 0x8864:
data += get_link_header_size(datalink);
protocol = (data[6] << 8) + data[7];
data += 8; // Skip the PPPoE session and PPP header
pkt_hdr_size = 0;
if ( protocol != 0x0021 && protocol != 0x0057 )
{
// Neither IPv4 nor IPv6.
sessions->Weird("non_ip_packet_in_pppoe_encapsulation", &hdr, data);
data = 0;
return;
}
break;
}
break;
}
case DLT_PPP_SERIAL:
{
// Get PPP protocol.
protocol = (data[2] << 8) + data[3];
if ( protocol == 0x0281 )
{
// MPLS Unicast. Remove the data link layer and
// denote a header size of zero before the IP header.
have_mpls = true;
data += get_link_header_size(datalink);
pkt_hdr_size = 0;
}
else if ( protocol != 0x0021 && protocol != 0x0057 )
{
// Neither IPv4 nor IPv6.
sessions->Weird("non_ip_packet_in_ppp_encapsulation", &hdr, data);
data = 0;
return;
}
break;
}
}
if ( have_mpls )
{
// Skip the MPLS label stack.
bool end_of_stack = false;
while ( ! end_of_stack )
{
end_of_stack = *(data + 2) & 0x01;
data += 4;
}
}
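The loop just above pops 4-byte MPLS labels off the front of the packet until it sees the bottom-of-stack bit, which lives in bit 0 of each label's third byte. A standalone sketch of that same walk, extracted from the deleted code for illustration (the function name is hypothetical):

```cpp
#include <cassert>

// Skip an MPLS label stack: each label is 4 bytes, and bit 0 of the
// third byte of a label marks the bottom of the stack. Returns a
// pointer to the first byte past the stack (the IP header).
const unsigned char* skip_mpls_labels(const unsigned char* data)
	{
	bool end_of_stack = false;
	while ( ! end_of_stack )
		{
		end_of_stack = *(data + 2) & 0x01;
		data += 4;
		}
	return data;
	}
```

Note the loop consumes the bottom label too (it advances by 4 even on the terminating iteration), matching the deleted code's behavior; it also assumes at least one label is present, just as the caller above does once it has seen an MPLS ethertype.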
if ( pseudo_realtime )
{
current_pseudo = CheckPseudoTime();
net_packet_dispatch(current_pseudo, &hdr, data, pkt_hdr_size, this);
if ( ! first_wallclock )
first_wallclock = current_time(true);
}
else
net_packet_dispatch(current_timestamp, &hdr, data, pkt_hdr_size, this);
data = 0;
}
bool PktSrc::GetCurrentPacket(const struct pcap_pkthdr** arg_hdr,
const u_char** arg_pkt)
{
if ( ! last_data )
return false;
*arg_hdr = &hdr;
*arg_pkt = last_data;
return true;
}
int PktSrc::PrecompileFilter(int index, const char* filter)
{
// Compile filter.
BPF_Program* code = new BPF_Program();
if ( ! code->Compile(pd, filter, netmask, errbuf, sizeof(errbuf)) )
{
delete code;
return 0;
}
// Store it in hash.
HashKey* hash = new HashKey(HashKey(bro_int_t(index)));
BPF_Program* oldcode = filters.Lookup(hash);
if ( oldcode )
delete oldcode;
filters.Insert(hash, code);
delete hash;
return 1;
}
int PktSrc::SetFilter(int index)
{
// We don't want load-level filters for the secondary path.
if ( filter_type == TYPE_FILTER_SECONDARY && index > 0 )
return 1;
HashKey* hash = new HashKey(HashKey(bro_int_t(index)));
BPF_Program* code = filters.Lookup(hash);
delete hash;
if ( ! code )
{
safe_snprintf(errbuf, sizeof(errbuf),
"No precompiled pcap filter for index %d",
index);
return 0;
}
if ( pcap_setfilter(pd, code->GetProgram()) < 0 )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_setfilter(%d): %s",
index, pcap_geterr(pd));
return 0;
}
#ifndef HAVE_LINUX
// Linux doesn't clear counters when resetting filter.
stats.received = stats.dropped = stats.link = 0;
#endif
return 1;
}
void PktSrc::SetHdrSize()
{
int dl = pcap_datalink(pd);
hdr_size = get_link_header_size(dl);
if ( hdr_size < 0 )
{
safe_snprintf(errbuf, sizeof(errbuf),
"unknown data link type 0x%x", dl);
Close();
}
datalink = dl;
}
void PktSrc::Close()
{
if ( pd )
{
pcap_close(pd);
pd = 0;
closed = true;
}
}
void PktSrc::AddSecondaryTablePrograms()
{
BPF_Program* program;
loop_over_list(secondary_path->EventTable(), i)
{
SecondaryEvent* se = secondary_path->EventTable()[i];
program = new BPF_Program();
if ( ! program->Compile(snaplen, datalink, se->Filter(),
netmask, errbuf, sizeof(errbuf)) )
{
delete program;
Close();
return;
}
SecondaryProgram* sp = new SecondaryProgram(program, se);
program_list.append(sp);
}
}
void PktSrc::Statistics(Stats* s)
{
if ( reading_traces )
s->received = s->dropped = s->link = 0;
else
{
struct pcap_stat pstat;
if ( pcap_stats(pd, &pstat) < 0 )
{
reporter->Error("problem getting packet filter statistics: %s",
ErrorMsg());
s->received = s->dropped = s->link = 0;
}
else
{
s->dropped = pstat.ps_drop;
s->link = pstat.ps_recv;
}
}
s->received = stats.received;
if ( pseudo_realtime )
s->dropped = 0;
stats.dropped = s->dropped;
}
PktInterfaceSrc::PktInterfaceSrc(const char* arg_interface, const char* filter,
PktSrc_Filter_Type ft)
: PktSrc()
{
char tmp_errbuf[PCAP_ERRBUF_SIZE];
filter_type = ft;
// Determine interface if not specified.
if ( ! arg_interface && ! (arg_interface = pcap_lookupdev(tmp_errbuf)) )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_lookupdev: %s", tmp_errbuf);
return;
}
interface = copy_string(arg_interface);
// Determine network and netmask.
uint32 net;
if ( pcap_lookupnet(interface, &net, &netmask, tmp_errbuf) < 0 )
{
// ### The lookup can fail if no address is assigned to
// the interface; and libpcap doesn't have any useful notion
// of error codes, just error strings - how bogus - so we
// just kludge around the error :-(.
// sprintf(errbuf, "pcap_lookupnet %s", tmp_errbuf);
// return;
net = 0;
netmask = 0xffffff00;
}
// We use the smallest time-out possible to return almost immediately if
// no packets are available. (We can't use set_nonblocking() as it's
// broken on FreeBSD: even when select() indicates that we can read
// something, we may get nothing if the store buffer hasn't filled up
// yet.)
pd = pcap_open_live(interface, snaplen, 1, 1, tmp_errbuf);
if ( ! pd )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_open_live: %s", tmp_errbuf);
closed = true;
return;
}
// ### This needs autoconf'ing.
#ifdef HAVE_PCAP_INT_H
reporter->Info("pcap bufsize = %d\n", ((struct pcap *) pd)->bufsize);
#endif
#ifdef HAVE_LINUX
if ( pcap_setnonblock(pd, 1, tmp_errbuf) < 0 )
{
safe_snprintf(errbuf, sizeof(errbuf),
"pcap_setnonblock: %s", tmp_errbuf);
pcap_close(pd);
closed = true;
return;
}
#endif
selectable_fd = pcap_fileno(pd);
if ( PrecompileFilter(0, filter) && SetFilter(0) )
{
SetHdrSize();
if ( closed )
// Couldn't get header size.
return;
reporter->Info("listening on %s, capture length %d bytes\n", interface, snaplen);
}
else
closed = true;
}
PktFileSrc::PktFileSrc(const char* arg_readfile, const char* filter,
PktSrc_Filter_Type ft)
: PktSrc()
{
readfile = copy_string(arg_readfile);
filter_type = ft;
pd = pcap_open_offline((char*) readfile, errbuf);
if ( pd && PrecompileFilter(0, filter) && SetFilter(0) )
{
SetHdrSize();
if ( closed )
// Unknown link layer type.
return;
// We don't put file sources into non-blocking mode as
// otherwise we would not be able to identify the EOF.
selectable_fd = fileno(pcap_file(pd));
if ( selectable_fd < 0 )
reporter->InternalError("OS does not support selectable pcap fd");
}
else
closed = true;
}
SecondaryPath::SecondaryPath()
{
filter = 0;
// Glue together the secondary filter, if exists.
Val* secondary_fv = internal_val("secondary_filters");
if ( secondary_fv->AsTableVal()->Size() == 0 )
return;
int did_first = 0;
const TableEntryValPDict* v = secondary_fv->AsTable();
IterCookie* c = v->InitForIteration();
TableEntryVal* tv;
HashKey* h;
while ( (tv = v->NextEntry(h, c)) )
{
// Get the index values.
ListVal* index =
secondary_fv->AsTableVal()->RecoverIndex(h);
const char* str =
index->Index(0)->Ref()->AsString()->CheckString();
if ( ++did_first == 1 )
{
filter = copy_string(str);
}
else
{
if ( strlen(filter) > 0 )
{
char* tmp_f = new char[strlen(str) + strlen(filter) + 32];
if ( strlen(str) == 0 )
sprintf(tmp_f, "%s", filter);
else
sprintf(tmp_f, "(%s) or (%s)", filter, str);
delete [] filter;
filter = tmp_f;
}
}
// Build secondary_path event table item and link it.
SecondaryEvent* se =
new SecondaryEvent(index->Index(0)->Ref()->AsString()->CheckString(),
tv->Value()->AsFunc() );
event_list.append(se);
delete h;
Unref(index);
}
}
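
The loop above folds all secondary filters into one BPF expression by repeatedly wrapping the accumulated string as `(old) or (new)` and skipping empty filters. A minimal standalone sketch of that merging logic (the `JoinFilters` helper name is hypothetical, not part of Bro):

```cpp
#include <string>
#include <vector>

// Hypothetical helper mirroring SecondaryPath's filter merging:
// skip empty filters, take the first non-empty one verbatim, and
// OR each later one into the accumulated expression.
static std::string JoinFilters(const std::vector<std::string>& filters)
	{
	std::string merged;

	for ( const std::string& f : filters )
		{
		if ( f.empty() )
			continue; // mirrors the strlen(str) == 0 check

		if ( merged.empty() )
			merged = f;
		else
			merged = "(" + merged + ") or (" + f + ")";
		}

	return merged;
	}
```

With inputs `{"tcp port 80", "udp"}` this yields `(tcp port 80) or (udp)`, matching the `"(%s) or (%s)"` format string used above.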
SecondaryPath::~SecondaryPath()
{
loop_over_list(event_list, i)
delete event_list[i];
delete [] filter;
}
SecondaryProgram::~SecondaryProgram()
{
delete program;
}
PktDumper::PktDumper(const char* arg_filename, bool arg_append)
{
filename[0] = '\0';
is_error = false;
append = arg_append;
dumper = 0;
open_time = 0.0;
// We need a pcap_t with a reasonable link-layer type. We try to get it
// from the packet sources. If not available, we fall back to Ethernet.
// FIXME: Perhaps we should make this configurable?
int linktype = -1;
if ( pkt_srcs.length() )
linktype = pkt_srcs[0]->LinkType();
if ( linktype < 0 )
linktype = DLT_EN10MB;
pd = pcap_open_dead(linktype, snaplen);
if ( ! pd )
{
Error("error for pcap_open_dead");
return;
}
if ( arg_filename )
Open(arg_filename);
}
bool PktDumper::Open(const char* arg_filename)
{
if ( ! arg_filename && ! *filename )
{
Error("no filename given");
return false;
}
if ( arg_filename )
{
if ( dumper && streq(arg_filename, filename) )
// Already open.
return true;
safe_strncpy(filename, arg_filename, FNBUF_LEN);
}
if ( dumper )
Close();
struct stat s;
int exists = 0;
if ( append )
{
// See if output file already exists (and is non-empty).
exists = stat(filename, &s);
if ( exists < 0 && errno != ENOENT )
{
Error(fmt("can't stat file %s: %s", filename, strerror(errno)));
return false;
}
}
if ( ! append || exists < 0 || s.st_size == 0 )
{
// Open new file.
dumper = pcap_dump_open(pd, filename);
if ( ! dumper )
{
Error(pcap_geterr(pd));
return false;
}
}
else
{
// Old file and we need to append, which, unfortunately,
// is not supported by libpcap. So, we have to hack a
// little bit, knowing that pcap_dumper_t is, in fact,
// a FILE ... :-(
dumper = (pcap_dumper_t*) fopen(filename, "a");
if ( ! dumper )
{
Error(fmt("can't open dump %s: %s", filename, strerror(errno)));
return false;
}
}
open_time = network_time;
is_error = false;
return true;
}
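
The branching in `Open()` reduces to one predicate: write a fresh pcap file (with a new pcap header) unless we are appending to an existing, non-empty file. A small sketch of that decision; the `NeedFreshPcapFile` name is hypothetical:

```cpp
// Hypothetical condensation of the append logic in PktDumper::Open():
// a fresh pcap file is needed unless we are appending to a file
// that already exists and is non-empty.
static bool NeedFreshPcapFile(bool append, bool exists, long size)
	{
	return ! append || ! exists || size == 0;
	}
```

Only when it returns false does the code above fall back to `fopen(filename, "a")`, relying on `pcap_dumper_t` being a `FILE` internally.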
bool PktDumper::Close()
{
if ( dumper )
{
pcap_dump_close(dumper);
dumper = 0;
is_error = false;
}
return true;
}
bool PktDumper::Dump(const struct pcap_pkthdr* hdr, const u_char* pkt)
{
if ( ! dumper )
return false;
if ( ! open_time )
open_time = network_time;
pcap_dump((u_char*) dumper, hdr, pkt);
return true;
}
void PktDumper::Error(const char* errstr)
{
safe_strncpy(errbuf, errstr, sizeof(errbuf));
is_error = true;
}
int get_link_header_size(int dl)
{
switch ( dl ) {
case DLT_NULL:
return 4;
case DLT_EN10MB:
return 14;
case DLT_FDDI:
return 13 + 8; // fddi_header + LLC
#ifdef DLT_LINUX_SLL
case DLT_LINUX_SLL:
return 16;
#endif
case DLT_PPP_SERIAL: // PPP_SERIAL
return 4;
case DLT_RAW:
return 0;
}
return -1;
}
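
The `get_link_header_size()` mapping can be exercised in isolation. A sketch with the `DLT_*` numeric values hard-coded as assumptions (prefixed `MY_` to avoid clashing with `<pcap.h>`; the real values come from libpcap headers and can vary per platform):

```cpp
// Hypothetical standalone version of get_link_header_size(); the
// DLT values below are assumptions modeled on common libpcap headers.
enum {
	MY_DLT_NULL = 0,    // BSD loopback: 4-byte family header
	MY_DLT_EN10MB = 1,  // Ethernet: 14-byte header
	MY_DLT_FDDI = 10,   // FDDI: fddi_header + LLC
	MY_DLT_RAW = 12,    // raw IP: no link-layer header
};

static int link_header_size(int dl)
	{
	switch ( dl ) {
	case MY_DLT_NULL:
		return 4;
	case MY_DLT_EN10MB:
		return 14;
	case MY_DLT_FDDI:
		return 13 + 8;
	case MY_DLT_RAW:
		return 0;
	}

	return -1; // unknown link type
	}
```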
@@ -1,258 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef pktsrc_h
#define pktsrc_h
#include "Dict.h"
#include "Expr.h"
#include "BPF_Program.h"
#include "IOSource.h"
#include "RemoteSerializer.h"
#define BRO_PCAP_ERRBUF_SIZE (PCAP_ERRBUF_SIZE + 256)
extern "C" {
#include <pcap.h>
}
declare(PDict,BPF_Program);
// Whether a PktSrc object is used by the normal filter structure or the
// secondary-path structure.
typedef enum {
TYPE_FILTER_NORMAL, // the normal filter
TYPE_FILTER_SECONDARY, // the secondary-path filter
} PktSrc_Filter_Type;
// {filter,event} tuples conforming the secondary path.
class SecondaryEvent {
public:
SecondaryEvent(const char* arg_filter, Func* arg_event)
{
filter = arg_filter;
event = arg_event;
}
const char* Filter() { return filter; }
Func* Event() { return event; }
private:
const char* filter;
Func* event;
};
declare(PList,SecondaryEvent);
typedef PList(SecondaryEvent) secondary_event_list;
class SecondaryPath {
public:
SecondaryPath();
~SecondaryPath();
secondary_event_list& EventTable() { return event_list; }
const char* Filter() { return filter; }
private:
secondary_event_list event_list;
// OR'ed union of all SecondaryEvent filters
char* filter;
};
// Main secondary-path object.
extern SecondaryPath* secondary_path;
// {program, {filter,event}} tuple table.
class SecondaryProgram {
public:
SecondaryProgram(BPF_Program* arg_program, SecondaryEvent* arg_event)
{
program = arg_program;
event = arg_event;
}
~SecondaryProgram();
BPF_Program* Program() { return program; }
SecondaryEvent* Event() { return event; }
private:
// Associated program.
BPF_Program *program;
// Event that is run in case the program is matched.
SecondaryEvent* event;
};
declare(PList,SecondaryProgram);
typedef PList(SecondaryProgram) secondary_program_list;
class PktSrc : public IOSource {
public:
~PktSrc();
// IOSource interface
bool IsReady();
void GetFds(int* read, int* write, int* except);
double NextTimestamp(double* local_network_time);
void Process();
const char* Tag() { return "PktSrc"; }
const char* ErrorMsg() const { return errbuf; }
void ClearErrorMsg() { *errbuf ='\0'; }
// Returns the packet last processed; false if there is no
// current packet available.
bool GetCurrentPacket(const pcap_pkthdr** hdr, const u_char** pkt);
int HdrSize() const { return hdr_size; }
int DataLink() const { return datalink; }
void ConsumePacket() { data = 0; }
int IsLive() const { return interface != 0; }
pcap_t* PcapHandle() const { return pd; }
int LinkType() const { return pcap_datalink(pd); }
const char* ReadFile() const { return readfile; }
const char* Interface() const { return interface; }
PktSrc_Filter_Type FilterType() const { return filter_type; }
void AddSecondaryTablePrograms();
const secondary_program_list& ProgramTable() const
{ return program_list; }
// Signal packet source that processing was suspended and is now going
// to be continued.
void ContinueAfterSuspend();
// Only valid in pseudo-realtime mode.
double CurrentPacketTimestamp() { return current_pseudo; }
double CurrentPacketWallClock();
struct Stats {
unsigned int received; // pkts received (w/o drops)
unsigned int dropped; // pkts dropped
unsigned int link; // total packets on link
// (not always available)
};
virtual void Statistics(Stats* stats);
// Precompiles a filter and associates the given index with it.
// Returns 1 on success, 0 if a problem occurred.
virtual int PrecompileFilter(int index, const char* filter);
// Activates the filter with the given index.
// Returns 1 on success, 0 if a problem occurred.
virtual int SetFilter(int index);
protected:
PktSrc();
static const int PCAP_TIMEOUT = 20;
void SetHdrSize();
virtual void Close();
// Returns 1 on success, 0 on time-out/gone dry.
virtual int ExtractNextPacket();
// Checks if the current packet has a pseudo-time <= current_time.
// If yes, returns pseudo-time, otherwise 0.
double CheckPseudoTime();
double current_timestamp;
double next_timestamp;
// Only set in pseudo-realtime mode.
double first_timestamp;
double first_wallclock;
double current_wallclock;
double current_pseudo;
struct pcap_pkthdr hdr;
const u_char* data; // contents of current packet
const u_char* last_data; // same, but unaffected by consuming
int hdr_size;
int datalink;
double next_sync_point; // For trace synchronization in pseudo-realtime
char* interface; // nil if not reading from an interface
char* readfile; // nil if not reading from a file
pcap_t* pd;
int selectable_fd;
uint32 netmask;
char errbuf[BRO_PCAP_ERRBUF_SIZE];
Stats stats;
PDict(BPF_Program) filters; // precompiled filters
PktSrc_Filter_Type filter_type; // normal path or secondary path
secondary_program_list program_list;
};
class PktInterfaceSrc : public PktSrc {
public:
PktInterfaceSrc(const char* interface, const char* filter,
PktSrc_Filter_Type ft=TYPE_FILTER_NORMAL);
};
class PktFileSrc : public PktSrc {
public:
PktFileSrc(const char* readfile, const char* filter,
PktSrc_Filter_Type ft=TYPE_FILTER_NORMAL);
};
extern int get_link_header_size(int dl);
class PktDumper {
public:
PktDumper(const char* file = 0, bool append = false);
~PktDumper() { Close(); }
bool Open(const char* file = 0);
bool Close();
bool Dump(const struct pcap_pkthdr* hdr, const u_char* pkt);
pcap_dumper_t* PcapDumper() { return dumper; }
const char* FileName() const { return filename; }
bool IsError() const { return is_error; }
const char* ErrorMsg() const { return errbuf; }
// This heuristic will horribly fail if we're using packets
// with different link layers. (If we can't derive a reasonable value
// from the packet sources, our fall-back is Ethernet.)
int HdrSize() const
{ return get_link_header_size(pcap_datalink(pd)); }
// Network time when dump file was opened.
double OpenTime() const { return open_time; }
private:
void InitPd();
void Error(const char* str);
static const int FNBUF_LEN = 1024;
char filename[FNBUF_LEN];
bool append;
pcap_dumper_t* dumper;
pcap_t* pd;
double open_time;
bool is_error;
char errbuf[BRO_PCAP_ERRBUF_SIZE];
};
#endif
@@ -188,10 +188,11 @@
 #include "File.h"
 #include "Conn.h"
 #include "Reporter.h"
-#include "threading/SerialTypes.h"
-#include "logging/Manager.h"
 #include "IPAddr.h"
 #include "bro_inet_ntop.h"
+#include "iosource/Manager.h"
+#include "logging/Manager.h"
+#include "logging/logging.bif.h"

 extern "C" {
 #include "setsignal.h"
@@ -284,10 +285,10 @@ struct ping_args {
 \
 if ( ! c ) \
 { \
-idle = io->IsIdle();\
+SetIdle(io->IsIdle());\
 return true; \
 } \
-idle = false; \
+SetIdle(false); \
 }

 static const char* msgToStr(int msg)
@@ -533,7 +534,6 @@ RemoteSerializer::RemoteSerializer()
 current_sync_point = 0;
 syncing_times = false;
 io = 0;
-closed = false;
 terminating = false;
 in_sync = 0;
 last_flush = 0;
@@ -558,7 +558,7 @@ RemoteSerializer::~RemoteSerializer()
 delete io;
 }

-void RemoteSerializer::Init()
+void RemoteSerializer::Enable()
 {
 if ( initialized )
 return;
@@ -571,7 +571,7 @@ void RemoteSerializer::Init()
 Fork();

-io_sources.Register(this);
+iosource_mgr->Register(this);

 Log(LogInfo, fmt("communication started, parent pid is %d, child pid is %d", getpid(), child_pid));
 initialized = 1;
@@ -1275,7 +1275,7 @@ bool RemoteSerializer::Listen(const IPAddr& ip, uint16 port, bool expect_ssl,
 return false;

 listening = true;
-closed = false;
+SetClosed(false);

 return true;
 }
@@ -1344,7 +1344,7 @@ bool RemoteSerializer::StopListening()
 return false;

 listening = false;
-closed = ! IsActive();
+SetClosed(! IsActive());

 return true;
 }
@@ -1382,7 +1382,7 @@ double RemoteSerializer::NextTimestamp(double* local_network_time)
 if ( received_logs > 0 )
 {
 // If we processed logs last time, assume there's more.
-idle = false;
+SetIdle(false);
 received_logs = 0;
 return timer_mgr->Time();
 }
@@ -1397,7 +1397,7 @@ double RemoteSerializer::NextTimestamp(double* local_network_time)
 pt = timer_mgr->Time();

 if ( packets.length() )
-idle = false;
+SetIdle(false);

 if ( et >= 0 && (et < pt || pt < 0) )
 return et;
@@ -1452,7 +1452,7 @@ void RemoteSerializer::Process()
 // FIXME: The following chunk of code is copied from
 // net_packet_dispatch(). We should change that function
 // to accept an IOSource instead of the PktSrc.
-network_time = p->time;
+net_update_time(p->time);

 SegmentProfiler(segment_logger, "expiring-timers");
 TimerMgr* tmgr = sessions->LookupTimerMgr(GetCurrentTag());
@@ -1476,7 +1476,7 @@ void RemoteSerializer::Process()
 }

 if ( packets.length() )
-idle = false;
+SetIdle(false);
 }

 void RemoteSerializer::Finish()
@@ -1508,7 +1508,7 @@ bool RemoteSerializer::Poll(bool may_block)
 }

 io->Flush();
-idle = false;
+SetIdle(false);

 switch ( msgstate ) {
 case TYPE:
@@ -1690,7 +1690,7 @@ bool RemoteSerializer::DoMessage()
 case MSG_TERMINATE:
 assert(terminating);

-io_sources.Terminate();
+iosource_mgr->Terminate();
 return true;

 case MSG_REMOTE_PRINT:
@@ -1878,7 +1878,7 @@ void RemoteSerializer::RemovePeer(Peer* peer)
 delete peer->cache_out;
 delete peer;

-closed = ! IsActive();
+SetClosed(! IsActive());

 if ( in_sync == peer )
 in_sync = 0;
@@ -2723,8 +2723,8 @@ bool RemoteSerializer::ProcessLogCreateWriter()
 fmt.EndRead();

-id_val = new EnumVal(id, BifType::Enum::Log::ID);
-writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
+id_val = new EnumVal(id, internal_type("Log::ID")->AsEnumType());
+writer_val = new EnumVal(writer, internal_type("Log::Writer")->AsEnumType());

 if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields,
 true, false, true) )
@@ -2796,8 +2796,8 @@ bool RemoteSerializer::ProcessLogWrite()
 }
 }

-id_val = new EnumVal(id, BifType::Enum::Log::ID);
-writer_val = new EnumVal(writer, BifType::Enum::Log::Writer);
+id_val = new EnumVal(id, internal_type("Log::ID")->AsEnumType());
+writer_val = new EnumVal(writer, internal_type("Log::Writer")->AsEnumType());

 success = log_mgr->Write(id_val, writer_val, path, num_fields, vals);
@@ -2840,7 +2840,7 @@ void RemoteSerializer::GotEvent(const char* name, double time,
 BufferedEvent* e = new BufferedEvent;

 // Our time, not the time when the event was generated.
-e->time = pkt_srcs.length() ?
+e->time = iosource_mgr->GetPktSrcs().size() ?
 time_t(network_time) : time_t(timer_mgr->Time());

 e->src = current_peer->id;
@@ -3085,7 +3085,7 @@ RecordVal* RemoteSerializer::GetPeerVal(PeerID id)
 void RemoteSerializer::ChildDied()
 {
 Log(LogError, "child died");
-closed = true;
+SetClosed(true);
 child_pid = 0;

 // Shut down the main process as well.
@@ -3184,7 +3184,7 @@ void RemoteSerializer::FatalError(const char* msg)
 Log(LogError, msg);
 reporter->Error("%s", msg);

-closed = true;
+SetClosed(true);

 if ( kill(child_pid, SIGQUIT) < 0 )
 reporter->Warning("warning: cannot kill child pid %d, %s", child_pid, strerror(errno));
@@ -4211,6 +4211,7 @@ bool SocketComm::Listen()
 safe_close(fd);
 CloseListenFDs();
 listen_next_try = time(0) + bind_retry_interval;
+freeaddrinfo(res0);
 return false;
 }
@@ -6,7 +6,7 @@
 #include "Dict.h"
 #include "List.h"
 #include "Serializer.h"
-#include "IOSource.h"
+#include "iosource/IOSource.h"
 #include "Stats.h"
 #include "File.h"
 #include "logging/WriterBackend.h"
@@ -22,13 +22,13 @@ namespace threading {
 }

 // This class handles the communication done in Bro's main loop.
-class RemoteSerializer : public Serializer, public IOSource {
+class RemoteSerializer : public Serializer, public iosource::IOSource {
 public:
 RemoteSerializer();
 virtual ~RemoteSerializer();

 // Initialize the remote serializer (calling this will fork).
-void Init();
+void Enable();

 // FIXME: Use SourceID directly (or rename everything to Peer*).
 typedef SourceID PeerID;
@@ -65,11 +65,11 @@ RuleActionAnalyzer::RuleActionAnalyzer(const char* arg_analyzer)
 void RuleActionAnalyzer::PrintDebug()
 {
 if ( ! child_analyzer )
-fprintf(stderr, "|%s|\n", analyzer_mgr->GetComponentName(analyzer));
+fprintf(stderr, "|%s|\n", analyzer_mgr->GetComponentName(analyzer).c_str());
 else
 fprintf(stderr, "|%s:%s|\n",
-analyzer_mgr->GetComponentName(analyzer),
-analyzer_mgr->GetComponentName(child_analyzer));
+analyzer_mgr->GetComponentName(analyzer).c_str(),
+analyzer_mgr->GetComponentName(child_analyzer).c_str());
 }
@@ -19,6 +19,7 @@
 #include "Conn.h"
 #include "Timer.h"
 #include "RemoteSerializer.h"
+#include "iosource/Manager.h"

 Serializer::Serializer(SerializationFormat* arg_format)
 {
@@ -1045,7 +1046,7 @@ EventPlayer::EventPlayer(const char* file)
 Error(fmt("event replayer: cannot open %s", file));

 if ( ReadHeader() )
-io_sources.Register(this);
+iosource_mgr->Register(this);
 }

 EventPlayer::~EventPlayer()
@@ -1085,7 +1086,7 @@ double EventPlayer::NextTimestamp(double* local_network_time)
 {
 UnserialInfo info(this);
 Unserialize(&info);
-closed = io->Eof();
+SetClosed(io->Eof());
 }

 if ( ! ne_time )
@@ -1142,7 +1143,7 @@ bool Packet::Serialize(SerialInfo* info) const
 static BroFile* profiling_output = 0;

 #ifdef DEBUG
-static PktDumper* dump = 0;
+static iosource::PktDumper* dump = 0;
 #endif

 Packet* Packet::Unserialize(UnserialInfo* info)
@@ -1188,7 +1189,7 @@ Packet* Packet::Unserialize(UnserialInfo* info)
 p->hdr = hdr;
 p->pkt = (u_char*) pkt;
 p->tag = tag;
-p->hdr_size = get_link_header_size(p->link_type);
+p->hdr_size = iosource::PktSrc::GetLinkHeaderSize(p->link_type);

 delete [] tag;
@@ -1213,9 +1214,15 @@ Packet* Packet::Unserialize(UnserialInfo* info)
 if ( debug_logger.IsEnabled(DBG_TM) )
 {
 if ( ! dump )
-dump = new PktDumper("tm.pcap");
+dump = iosource_mgr->OpenPktDumper("tm.pcap", true);

-dump->Dump(p->hdr, p->pkt);
+if ( dump )
+{
+iosource::PktDumper::Packet dp;
+dp.hdr = p->hdr;
+dp.data = p->pkt;
+dump->Dump(&dp);
+}
 }
 #endif
@@ -15,7 +15,7 @@
 #include "SerialInfo.h"
 #include "IP.h"
 #include "Timer.h"
-#include "IOSource.h"
+#include "iosource/IOSource.h"
 #include "Reporter.h"

 class SerializationCache;
@@ -350,7 +350,7 @@ public:
 };

 // Plays a file of events back.
-class EventPlayer : public FileSerializer, public IOSource {
+class EventPlayer : public FileSerializer, public iosource::IOSource {
 public:
 EventPlayer(const char* file);
 virtual ~EventPlayer();
@@ -167,7 +167,7 @@ void NetSessions::Done()
 void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
 const u_char* pkt, int hdr_size,
-PktSrc* src_ps)
+iosource::PktSrc* src_ps)
 {
 const struct ip* ip_hdr = 0;
 const u_char* ip_data = 0;
@@ -184,10 +184,7 @@ void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr,
 // Blanket encapsulation
 hdr_size += encap_hdr_size;

-if ( src_ps->FilterType() == TYPE_FILTER_NORMAL )
-NextPacket(t, hdr, pkt, hdr_size);
-else
-NextPacketSecondary(t, hdr, pkt, hdr_size, src_ps);
+NextPacket(t, hdr, pkt, hdr_size);
 }

 void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
@@ -262,53 +259,6 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
 DumpPacket(hdr, pkt);
 }

-void NetSessions::NextPacketSecondary(double /* t */, const struct pcap_pkthdr* hdr,
-const u_char* const pkt, int hdr_size,
-const PktSrc* src_ps)
-{
-SegmentProfiler(segment_logger, "processing-secondary-packet");
-
-++num_packets_processed;
-
-uint32 caplen = hdr->caplen - hdr_size;
-if ( caplen < sizeof(struct ip) )
-{
-Weird("truncated_IP", hdr, pkt);
-return;
-}
-
-const struct ip* ip = (const struct ip*) (pkt + hdr_size);
-if ( ip->ip_v == 4 )
-{
-const secondary_program_list& spt = src_ps->ProgramTable();
-
-loop_over_list(spt, i)
-{
-SecondaryProgram* sp = spt[i];
-if ( ! net_packet_match(sp->Program(), pkt,
-hdr->len, hdr->caplen) )
-continue;
-
-val_list* args = new val_list;
-StringVal* cmd_val =
-new StringVal(sp->Event()->Filter());
-args->append(cmd_val);
-IP_Hdr ip_hdr(ip, false);
-args->append(ip_hdr.BuildPktHdrVal());
-
-// ### Need to queue event here.
-try
-{
-sp->Event()->Event()->Call(args);
-}
-catch ( InterpreterException& e )
-{ /* Already reported. */ }
-
-delete args;
-}
-}
-}
-
 int NetSessions::CheckConnectionTag(Connection* conn)
 {
 if ( current_iosrc->GetCurrentTag() )
@@ -1440,14 +1390,24 @@ void NetSessions::DumpPacket(const struct pcap_pkthdr* hdr,
 return;

 if ( len == 0 )
-pkt_dumper->Dump(hdr, pkt);
+{
+iosource::PktDumper::Packet p;
+p.hdr = hdr;
+p.data = pkt;
+pkt_dumper->Dump(&p);
+}
 else
 {
 struct pcap_pkthdr h = *hdr;
 h.caplen = len;
 if ( h.caplen > hdr->caplen )
 reporter->InternalError("bad modified caplen");
-pkt_dumper->Dump(&h, pkt);
+
+iosource::PktDumper::Packet p;
+p.hdr = &h;
+p.data = pkt;
+pkt_dumper->Dump(&p);
 }
 }
@@ -69,11 +69,11 @@ public:
 ~NetSessions();

 // Main entry point for packet processing. Dispatches the packet
-// either through NextPacket() or NextPacketSecondary(), optionally
-// employing the packet sorter first.
+// either through NextPacket(), optionally employing the packet
+// sorter first.
 void DispatchPacket(double t, const struct pcap_pkthdr* hdr,
 const u_char* const pkt, int hdr_size,
-PktSrc* src_ps);
+iosource::PktSrc* src_ps);

 void Done(); // call to drain events before destructing
@@ -221,10 +221,6 @@ protected:
 void NextPacket(double t, const struct pcap_pkthdr* hdr,
 const u_char* const pkt, int hdr_size);

-void NextPacketSecondary(double t, const struct pcap_pkthdr* hdr,
-const u_char* const pkt, int hdr_size,
-const PktSrc* src_ps);
-
 // Record the given packet (if a dumper is active). If len=0
 // then the whole packet is recorded, otherwise just the first
 // len bytes.
@@ -660,8 +660,13 @@ void Case::Describe(ODesc* d) const
 TraversalCode Case::Traverse(TraversalCallback* cb) const
 {
-TraversalCode tc = cases->Traverse(cb);
-HANDLE_TC_STMT_PRE(tc);
+TraversalCode tc;
+
+if ( cases )
+{
+tc = cases->Traverse(cb);
+HANDLE_TC_STMT_PRE(tc);
+}

 tc = s->Traverse(cb);
 HANDLE_TC_STMT_PRE(tc);
@@ -55,6 +55,7 @@ Tag& Tag::operator=(const Tag& other)
 {
 type = other.type;
 subtype = other.subtype;
+Unref(val);
 val = other.val;

 if ( val )
@@ -449,6 +449,11 @@ BroType* IndexType::YieldType()
 return yield_type;
 }

+const BroType* IndexType::YieldType() const
+{
+return yield_type;
+}
+
 void IndexType::Describe(ODesc* d) const
 {
 BroType::Describe(d);
@@ -742,6 +747,11 @@ BroType* FuncType::YieldType()
 return yield;
 }

+const BroType* FuncType::YieldType() const
+{
+return yield;
+}
+
 int FuncType::MatchesIndex(ListExpr*& index) const
 {
 return check_and_promote_args(index, args) ?
@@ -1371,6 +1381,11 @@ void OpaqueType::Describe(ODesc* d) const
 d->Add(name.c_str());
 }

+void OpaqueType::DescribeReST(ODesc* d, bool roles_only) const
+{
+d->Add(fmt(":bro:type:`%s` of %s", type_name(Tag()), name.c_str()));
+}
+
 IMPLEMENT_SERIAL(OpaqueType, SER_OPAQUE_TYPE);

 bool OpaqueType::DoSerialize(SerialInfo* info) const
@@ -1393,6 +1408,23 @@ bool OpaqueType::DoUnserialize(UnserialInfo* info)
 return true;
 }

+EnumType::EnumType(const string& name)
+: BroType(TYPE_ENUM)
+{
+counter = 0;
+SetName(name);
+}
+
+EnumType::EnumType(EnumType* e)
+: BroType(TYPE_ENUM)
+{
+counter = e->counter;
+SetName(e->GetName());
+
+for ( NameMap::iterator it = e->names.begin(); it != e->names.end(); ++it )
+names[copy_string(it->first)] = it->second;
+}
+
 EnumType::~EnumType()
 {
 for ( NameMap::iterator iter = names.begin(); iter != names.end(); ++iter )
@@ -1449,10 +1481,19 @@ void EnumType::CheckAndAddName(const string& module_name, const char* name,
 }
 else
 {
+// We allow double-definitions if matching exactly. This is so that
+// we can define an enum both in a *.bif and *.bro for avoiding
+// cyclic dependencies.
+if ( id->Name() != make_full_var_name(module_name.c_str(), name)
+|| (id->HasVal() && val != id->ID_Val()->AsEnum()) )
+{
+Unref(id);
+reporter->Error("identifier or enumerator value in enumerated type definition already exists");
+SetError();
+return;
+}
+
 Unref(id);
-reporter->Error("identifier or enumerator value in enumerated type definition already exists");
-SetError();
-return;
 }

 AddNameInternal(module_name, name, val, is_export);
@@ -1473,9 +1514,9 @@ void EnumType::AddNameInternal(const string& module_name, const char* name,
 names[copy_string(fullname.c_str())] = val;
 }

-bro_int_t EnumType::Lookup(const string& module_name, const char* name)
+bro_int_t EnumType::Lookup(const string& module_name, const char* name) const
 {
-NameMap::iterator pos =
+NameMap::const_iterator pos =
 names.find(make_full_var_name(module_name.c_str(), name).c_str());

 if ( pos == names.end() )
@@ -1484,9 +1525,9 @@ bro_int_t EnumType::Lookup(const string& module_name, const char* name)
 return pos->second;
 }

-const char* EnumType::Lookup(bro_int_t value)
+const char* EnumType::Lookup(bro_int_t value) const
 {
-for ( NameMap::iterator iter = names.begin();
+for ( NameMap::const_iterator iter = names.begin();
 iter != names.end(); ++iter )
 if ( iter->second == value )
 return iter->first;
@@ -1494,6 +1535,16 @@ const char* EnumType::Lookup(bro_int_t value)
 return 0;
 }

+EnumType::enum_name_list EnumType::Names() const
+{
+enum_name_list n;
+for ( NameMap::const_iterator iter = names.begin();
+iter != names.end(); ++iter )
+n.push_back(std::make_pair(iter->first, iter->second));
+
+return n;
+}
+
 void EnumType::DescribeReST(ODesc* d, bool roles_only) const
 {
 d->Add(":bro:type:`enum`");
@@ -1644,6 +1695,23 @@ BroType* VectorType::YieldType()
 return yield_type;
 }

+const BroType* VectorType::YieldType() const
+{
+// Work around the fact that we use void internally to mark a vector
+// as being unspecified. When looking at its yield type, we need to
+// return any as that's what other code historically expects for type
+// comparisons.
+if ( IsUnspecifiedVector() )
+{
+BroType* ret = ::base_type(TYPE_ANY);
+Unref(ret); // unref, because this won't be held by anyone.
+assert(ret);
+return ret;
+}
+
+return yield_type;
+}
+
 int VectorType::MatchesIndex(ListExpr*& index) const
 {
 expr_list& el = index->Exprs();
@@ -1742,7 +1810,7 @@ static int is_init_compat(const BroType* t1, const BroType* t2)
 return 0;
 }

-int same_type(const BroType* t1, const BroType* t2, int is_init)
+int same_type(const BroType* t1, const BroType* t2, int is_init, bool match_record_field_names)
 {
 if ( t1 == t2 ||
 t1->Tag() == TYPE_ANY ||
@@ -1798,7 +1866,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 if ( tl1 || tl2 )
 {
-if ( ! tl1 || ! tl2 || ! same_type(tl1, tl2, is_init) )
+if ( ! tl1 || ! tl2 || ! same_type(tl1, tl2, is_init, match_record_field_names) )
 return 0;
 }
@@ -1807,7 +1875,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 if ( y1 || y2 )
 {
-if ( ! y1 || ! y2 || ! same_type(y1, y2, is_init) )
+if ( ! y1 || ! y2 || ! same_type(y1, y2, is_init, match_record_field_names) )
 return 0;
 }
@@ -1825,7 +1893,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 if ( t1->YieldType() || t2->YieldType() )
 {
 if ( ! t1->YieldType() || ! t2->YieldType() ||
-! same_type(t1->YieldType(), t2->YieldType(), is_init) )
+! same_type(t1->YieldType(), t2->YieldType(), is_init, match_record_field_names) )
 return 0;
 }
@@ -1845,8 +1913,8 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 const TypeDecl* td1 = rt1->FieldDecl(i);
 const TypeDecl* td2 = rt2->FieldDecl(i);

-if ( ! streq(td1->id, td2->id) ||
-! same_type(td1->type, td2->type, is_init) )
+if ( (match_record_field_names && ! streq(td1->id, td2->id)) ||
+! same_type(td1->type, td2->type, is_init, match_record_field_names) )
 return 0;
 }
@@ -1862,7 +1930,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 return 0;

 loop_over_list(*tl1, i)
-if ( ! same_type((*tl1)[i], (*tl2)[i], is_init) )
+if ( ! same_type((*tl1)[i], (*tl2)[i], is_init, match_record_field_names) )
 return 0;

 return 1;
@@ -1870,7 +1938,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 case TYPE_VECTOR:
 case TYPE_FILE:
-return same_type(t1->YieldType(), t2->YieldType(), is_init);
+return same_type(t1->YieldType(), t2->YieldType(), is_init, match_record_field_names);

 case TYPE_OPAQUE:
 {
@@ -1880,7 +1948,7 @@ int same_type(const BroType* t1, const BroType* t2, int is_init)
 }

 case TYPE_TYPE:
-return same_type(t1, t2, is_init);
+return same_type(t1, t2, is_init, match_record_field_names);

 case TYPE_UNION:
 reporter->Error("union type in same_type()");
@ -6,6 +6,7 @@
#include <string> #include <string>
#include <set> #include <set>
#include <map> #include <map>
#include <list>
#include "Obj.h" #include "Obj.h"
#include "Attr.h" #include "Attr.h"
@ -334,6 +335,7 @@ public:
TypeList* Indices() const { return indices; } TypeList* Indices() const { return indices; }
const type_list* IndexTypes() const { return indices->Types(); } const type_list* IndexTypes() const { return indices->Types(); }
BroType* YieldType(); BroType* YieldType();
const BroType* YieldType() const;
void Describe(ODesc* d) const; void Describe(ODesc* d) const;
void DescribeReST(ODesc* d, bool roles_only = false) const; void DescribeReST(ODesc* d, bool roles_only = false) const;
@ -396,6 +398,7 @@ public:
RecordType* Args() const { return args; } RecordType* Args() const { return args; }
BroType* YieldType(); BroType* YieldType();
const BroType* YieldType() const;
void SetYieldType(BroType* arg_yield) { yield = arg_yield; } void SetYieldType(BroType* arg_yield) { yield = arg_yield; }
function_flavor Flavor() const { return flavor; } function_flavor Flavor() const { return flavor; }
string FlavorString() const; string FlavorString() const;
@ -531,6 +534,7 @@ public:
const string& Name() const { return name; } const string& Name() const { return name; }
void Describe(ODesc* d) const; void Describe(ODesc* d) const;
void DescribeReST(ODesc* d, bool roles_only = false) const;
protected: protected:
OpaqueType() { } OpaqueType() { }
@ -542,7 +546,10 @@ protected:
class EnumType : public BroType { class EnumType : public BroType {
public: public:
EnumType() : BroType(TYPE_ENUM) { counter = 0; } typedef std::list<std::pair<string, bro_int_t> > enum_name_list;
EnumType(EnumType* e);
EnumType(const string& arg_name);
~EnumType(); ~EnumType();
// The value of this name is next internal counter value, starting // The value of this name is next internal counter value, starting
@ -555,12 +562,18 @@ public:
void AddName(const string& module_name, const char* name, bro_int_t val, bool is_export); void AddName(const string& module_name, const char* name, bro_int_t val, bool is_export);
// -1 indicates not found. // -1 indicates not found.
bro_int_t Lookup(const string& module_name, const char* name); bro_int_t Lookup(const string& module_name, const char* name) const;
const char* Lookup(bro_int_t value); // Returns 0 if not found const char* Lookup(bro_int_t value) const; // Returns 0 if not found
// Returns the list of defined names with their values. The names
// will be fully qualified with their module name.
enum_name_list Names() const;
void DescribeReST(ODesc* d, bool roles_only = false) const; void DescribeReST(ODesc* d, bool roles_only = false) const;
protected: protected:
EnumType() { counter = 0; }
DECLARE_SERIAL(EnumType) DECLARE_SERIAL(EnumType)
void AddNameInternal(const string& module_name, void AddNameInternal(const string& module_name,
@ -586,6 +599,7 @@ public:
VectorType(BroType* t); VectorType(BroType* t);
virtual ~VectorType(); virtual ~VectorType();
BroType* YieldType(); BroType* YieldType();
const BroType* YieldType() const;
int MatchesIndex(ListExpr*& index) const; int MatchesIndex(ListExpr*& index) const;
@ -625,9 +639,10 @@ inline BroType* base_type(TypeTag tag)
// Returns the BRO basic error type. // Returns the BRO basic error type.
inline BroType* error_type() { return base_type(TYPE_ERROR); } inline BroType* error_type() { return base_type(TYPE_ERROR); }
// True if the two types are equivalent. If is_init is true then the // True if the two types are equivalent. If is_init is true then the test is
// test is done in the context of an initialization. // done in the context of an initialization. If match_record_field_names is
extern int same_type(const BroType* t1, const BroType* t2, int is_init=0); // true then for record types the field names have to match, too.
extern int same_type(const BroType* t1, const BroType* t2, int is_init=0, bool match_record_field_names=true);
// True if the two attribute lists are equivalent. // True if the two attribute lists are equivalent.
extern int same_attrs(const Attributes* a1, const Attributes* a2); extern int same_attrs(const Attributes* a1, const Attributes* a2);
@ -465,10 +465,7 @@ void Val::Describe(ODesc* d) const
d->SP(); d->SP();
} }
if ( d->IsReadable() ) ValDescribe(d);
ValDescribe(d);
else
Val::ValDescribe(d);
} }
void Val::DescribeReST(ODesc* d) const void Val::DescribeReST(ODesc* d) const
@ -9,6 +9,7 @@
#include "Serializer.h" #include "Serializer.h"
#include "RemoteSerializer.h" #include "RemoteSerializer.h"
#include "EventRegistry.h" #include "EventRegistry.h"
#include "Traverse.h"
static Val* init_val(Expr* init, const BroType* t, Val* aggr) static Val* init_val(Expr* init, const BroType* t, Val* aggr)
{ {
@ -392,6 +393,34 @@ void begin_func(ID* id, const char* module_name, function_flavor flavor,
} }
} }
class OuterIDBindingFinder : public TraversalCallback {
public:
OuterIDBindingFinder(Scope* s)
: scope(s) { }
virtual TraversalCode PreExpr(const Expr*);
Scope* scope;
vector<const NameExpr*> outer_id_references;
};
TraversalCode OuterIDBindingFinder::PreExpr(const Expr* expr)
{
if ( expr->Tag() != EXPR_NAME )
return TC_CONTINUE;
const NameExpr* e = static_cast<const NameExpr*>(expr);
if ( e->Id()->IsGlobal() )
return TC_CONTINUE;
if ( scope->GetIDs()->Lookup(e->Id()->Name()) )
return TC_CONTINUE;
outer_id_references.push_back(e);
return TC_CONTINUE;
}
void end_func(Stmt* body, attr_list* attrs) void end_func(Stmt* body, attr_list* attrs)
{ {
int frame_size = current_scope()->Length(); int frame_size = current_scope()->Length();
@ -429,6 +458,16 @@ void end_func(Stmt* body, attr_list* attrs)
} }
} }
if ( streq(id->Name(), "anonymous-function") )
{
OuterIDBindingFinder cb(scope);
body->Traverse(&cb);
for ( size_t i = 0; i < cb.outer_id_references.size(); ++i )
cb.outer_id_references[i]->Error(
"referencing outer function IDs not supported");
}
if ( id->HasVal() ) if ( id->HasVal() )
id->ID_Val()->AsFunc()->AddBody(body, inits, frame_size, priority); id->ID_Val()->AsFunc()->AddBody(body, inits, frame_size, priority);
else else
@ -4,6 +4,7 @@
#include "Analyzer.h" #include "Analyzer.h"
#include "Manager.h" #include "Manager.h"
#include "binpac.h"
#include "analyzer/protocol/pia/PIA.h" #include "analyzer/protocol/pia/PIA.h"
#include "../Event.h" #include "../Event.h"
@ -75,7 +76,7 @@ analyzer::ID Analyzer::id_counter = 0;
const char* Analyzer::GetAnalyzerName() const const char* Analyzer::GetAnalyzerName() const
{ {
assert(tag); assert(tag);
return analyzer_mgr->GetComponentName(tag); return analyzer_mgr->GetComponentName(tag).c_str();
} }
void Analyzer::SetAnalyzerTag(const Tag& arg_tag) void Analyzer::SetAnalyzerTag(const Tag& arg_tag)
@ -87,7 +88,7 @@ void Analyzer::SetAnalyzerTag(const Tag& arg_tag)
bool Analyzer::IsAnalyzer(const char* name) bool Analyzer::IsAnalyzer(const char* name)
{ {
assert(tag); assert(tag);
return strcmp(analyzer_mgr->GetComponentName(tag), name) == 0; return strcmp(analyzer_mgr->GetComponentName(tag).c_str(), name) == 0;
} }
// Used in debugging output. // Used in debugging output.
@ -642,12 +643,12 @@ void Analyzer::FlipRoles()
resp_supporters = tmp; resp_supporters = tmp;
} }
void Analyzer::ProtocolConfirmation() void Analyzer::ProtocolConfirmation(Tag arg_tag)
{ {
if ( protocol_confirmed ) if ( protocol_confirmed )
return; return;
EnumVal* tval = tag.AsEnumVal(); EnumVal* tval = arg_tag ? arg_tag.AsEnumVal() : tag.AsEnumVal();
Ref(tval); Ref(tval);
val_list* vl = new val_list; val_list* vl = new val_list;
@ -97,8 +97,8 @@ public:
/** /**
* Constructor. As this version of the constructor does not receive a * Constructor. As this version of the constructor does not receive a
* name or tag, setTag() must be called before the instance can be * name or tag, SetAnalyzerTag() must be called before the instance
* used. * can be used.
* *
* @param conn The connection the analyzer is associated with. * @param conn The connection the analyzer is associated with.
*/ */
@ -471,8 +471,11 @@ public:
* may turn into \c protocol_confirmed event at the script-layer (but * may turn into \c protocol_confirmed event at the script-layer (but
* only once per analyzer for each connection, even if the method is * only once per analyzer for each connection, even if the method is
* called multiple times). * called multiple times).
*
* If tag is given, it overrides the analyzer tag passed to the
* scripting layer; the default is the one of the analyzer itself.
*/ */
virtual void ProtocolConfirmation(); virtual void ProtocolConfirmation(Tag tag = Tag());
/** /**
* Signals Bro's protocol detection that the analyzer has found a * Signals Bro's protocol detection that the analyzer has found a
@ -8,62 +8,29 @@
using namespace analyzer; using namespace analyzer;
Component::Component(const char* arg_name, factory_callback arg_factory, Tag::subtype_t arg_subtype, bool arg_enabled, bool arg_partial) Component::Component(const std::string& name, factory_callback arg_factory, Tag::subtype_t arg_subtype, bool arg_enabled, bool arg_partial)
: plugin::Component(plugin::component::ANALYZER), : plugin::Component(plugin::component::ANALYZER, name),
plugin::TaggedComponent<analyzer::Tag>(arg_subtype) plugin::TaggedComponent<analyzer::Tag>(arg_subtype)
{ {
name = copy_string(arg_name);
canon_name = canonify_name(arg_name);
factory = arg_factory; factory = arg_factory;
enabled = arg_enabled; enabled = arg_enabled;
partial = arg_partial; partial = arg_partial;
}
Component::Component(const Component& other) analyzer_mgr->RegisterComponent(this, "ANALYZER_");
: plugin::Component(Type()),
plugin::TaggedComponent<analyzer::Tag>(other)
{
name = copy_string(other.name);
canon_name = copy_string(other.canon_name);
factory = other.factory;
enabled = other.enabled;
partial = other.partial;
} }
Component::~Component() Component::~Component()
{ {
delete [] name;
delete [] canon_name;
} }
void Component::Describe(ODesc* d) const void Component::DoDescribe(ODesc* d) const
{ {
plugin::Component::Describe(d);
d->Add(name);
d->Add(" (");
if ( factory ) if ( factory )
{ {
d->Add("ANALYZER_"); d->Add("ANALYZER_");
d->Add(canon_name); d->Add(CanonicalName());
d->Add(", "); d->Add(", ");
} }
d->Add(enabled ? "enabled" : "disabled"); d->Add(enabled ? "enabled" : "disabled");
d->Add(")");
}
Component& Component::operator=(const Component& other)
{
plugin::TaggedComponent<analyzer::Tag>::operator=(other);
if ( &other != this )
{
name = copy_string(other.name);
factory = other.factory;
enabled = other.enabled;
partial = other.partial;
}
return *this;
} }
@ -1,7 +1,7 @@
// See the file "COPYING" in the main distribution directory for copyright. // See the file "COPYING" in the main distribution directory for copyright.
#ifndef ANALYZER_PLUGIN_COMPONENT_H #ifndef ANALYZER_COMPONENT_H
#define ANALYZER_PLUGIN_COMPONENT_H #define ANALYZER_COMPONENT_H
#include "Tag.h" #include "Tag.h"
#include "plugin/Component.h" #include "plugin/Component.h"
@ -56,34 +56,13 @@ public:
* connections has generally not seen much testing yet as virtually * connections has generally not seen much testing yet as virtually
* no existing analyzer supports it. * no existing analyzer supports it.
*/ */
Component(const char* name, factory_callback factory, Tag::subtype_t subtype = 0, bool enabled = true, bool partial = false); Component(const std::string& name, factory_callback factory, Tag::subtype_t subtype = 0, bool enabled = true, bool partial = false);
/**
* Copy constructor.
*/
Component(const Component& other);
/** /**
* Destructor. * Destructor.
*/ */
~Component(); ~Component();
/**
* Returns the name of the analyzer. This name is unique across all
* analyzers and used to identify it. The returned name is derived
* from what's passed to the constructor but upper-cased and
* canonified to allow being part of a script-level ID.
*/
virtual const char* Name() const { return name; }
/**
* Returns a canonocalized version of the analyzer's name. The
* returned name is derived from what's passed to the constructor but
* upper-cased and transformed to allow being part of a script-level
* ID.
*/
const char* CanonicalName() const { return canon_name; }
/** /**
* Returns the analyzer's factory function. * Returns the analyzer's factory function.
*/ */
@ -110,17 +89,13 @@ public:
*/ */
void SetEnabled(bool arg_enabled) { enabled = arg_enabled; } void SetEnabled(bool arg_enabled) { enabled = arg_enabled; }
protected:
/** /**
* Generates a human-readable description of the component's main * Overriden from plugin::Component.
* parameters. This goes into the output of \c "bro -NN". */
*/ virtual void DoDescribe(ODesc* d) const;
virtual void Describe(ODesc* d) const;
Component& operator=(const Component& other);
private: private:
const char* name; // The analyzer's name.
const char* canon_name; // The analyzer's canonical name.
factory_callback factory; // The analyzer's factory callback. factory_callback factory; // The analyzer's factory callback.
bool partial; // True if the analyzer supports partial connections. bool partial; // True if the analyzer supports partial connections.
bool enabled; // True if the analyzer is enabled. bool enabled; // True if the analyzer is enabled.
@ -60,7 +60,7 @@ bool Manager::ConnIndex::operator<(const ConnIndex& other) const
} }
Manager::Manager() Manager::Manager()
: plugin::ComponentManager<analyzer::Tag, analyzer::Component>("Analyzer") : plugin::ComponentManager<analyzer::Tag, analyzer::Component>("Analyzer", "Tag")
{ {
} }
@ -86,11 +86,6 @@ Manager::~Manager()
void Manager::InitPreScript() void Manager::InitPreScript()
{ {
std::list<Component*> analyzers = plugin_mgr->Components<Component>();
for ( std::list<Component*>::const_iterator i = analyzers.begin(); i != analyzers.end(); i++ )
RegisterComponent(*i, "ANALYZER_");
// Cache these tags. // Cache these tags.
analyzer_backdoor = GetComponentTag("BACKDOOR"); analyzer_backdoor = GetComponentTag("BACKDOOR");
analyzer_connsize = GetComponentTag("CONNSIZE"); analyzer_connsize = GetComponentTag("CONNSIZE");
@ -109,7 +104,8 @@ void Manager::DumpDebug()
DBG_LOG(DBG_ANALYZER, "Available analyzers after bro_init():"); DBG_LOG(DBG_ANALYZER, "Available analyzers after bro_init():");
list<Component*> all_analyzers = GetComponents(); list<Component*> all_analyzers = GetComponents();
for ( list<Component*>::const_iterator i = all_analyzers.begin(); i != all_analyzers.end(); ++i ) for ( list<Component*>::const_iterator i = all_analyzers.begin(); i != all_analyzers.end(); ++i )
DBG_LOG(DBG_ANALYZER, " %s (%s)", (*i)->Name(), IsEnabled((*i)->Tag()) ? "enabled" : "disabled"); DBG_LOG(DBG_ANALYZER, " %s (%s)", (*i)->Name().c_str(),
IsEnabled((*i)->Tag()) ? "enabled" : "disabled");
DBG_LOG(DBG_ANALYZER, ""); DBG_LOG(DBG_ANALYZER, "");
DBG_LOG(DBG_ANALYZER, "Analyzers by port:"); DBG_LOG(DBG_ANALYZER, "Analyzers by port:");
@ -148,7 +144,7 @@ bool Manager::EnableAnalyzer(Tag tag)
if ( ! p ) if ( ! p )
return false; return false;
DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name()); DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name().c_str());
p->SetEnabled(true); p->SetEnabled(true);
return true; return true;
@ -161,7 +157,7 @@ bool Manager::EnableAnalyzer(EnumVal* val)
if ( ! p ) if ( ! p )
return false; return false;
DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name()); DBG_LOG(DBG_ANALYZER, "Enabling analyzer %s", p->Name().c_str());
p->SetEnabled(true); p->SetEnabled(true);
return true; return true;
@ -174,7 +170,7 @@ bool Manager::DisableAnalyzer(Tag tag)
if ( ! p ) if ( ! p )
return false; return false;
DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name()); DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name().c_str());
p->SetEnabled(false); p->SetEnabled(false);
return true; return true;
@ -187,7 +183,7 @@ bool Manager::DisableAnalyzer(EnumVal* val)
if ( ! p ) if ( ! p )
return false; return false;
DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name()); DBG_LOG(DBG_ANALYZER, "Disabling analyzer %s", p->Name().c_str());
p->SetEnabled(false); p->SetEnabled(false);
return true; return true;
@ -202,6 +198,11 @@ void Manager::DisableAllAnalyzers()
(*i)->SetEnabled(false); (*i)->SetEnabled(false);
} }
analyzer::Tag Manager::GetAnalyzerTag(const char* name)
{
return GetComponentTag(name);
}
bool Manager::IsEnabled(Tag tag) bool Manager::IsEnabled(Tag tag)
{ {
if ( ! tag ) if ( ! tag )
@ -254,7 +255,7 @@ bool Manager::RegisterAnalyzerForPort(Tag tag, TransportProto proto, uint32 port
return false; return false;
#ifdef DEBUG #ifdef DEBUG
const char* name = GetComponentName(tag); const char* name = GetComponentName(tag).c_str();
DBG_LOG(DBG_ANALYZER, "Registering analyzer %s for port %" PRIu32 "/%d", name, port, proto); DBG_LOG(DBG_ANALYZER, "Registering analyzer %s for port %" PRIu32 "/%d", name, port, proto);
#endif #endif
@ -270,7 +271,7 @@ bool Manager::UnregisterAnalyzerForPort(Tag tag, TransportProto proto, uint32 po
return true; // still a "successful" unregistration return true; // still a "successful" unregistration
#ifdef DEBUG #ifdef DEBUG
const char* name = GetComponentName(tag); const char* name = GetComponentName(tag).c_str();
DBG_LOG(DBG_ANALYZER, "Unregistering analyzer %s for port %" PRIu32 "/%d", name, port, proto); DBG_LOG(DBG_ANALYZER, "Unregistering analyzer %s for port %" PRIu32 "/%d", name, port, proto);
#endif #endif
@ -293,7 +294,8 @@ Analyzer* Manager::InstantiateAnalyzer(Tag tag, Connection* conn)
if ( ! c->Factory() ) if ( ! c->Factory() )
{ {
reporter->InternalWarning("analyzer %s cannot be instantiated dynamically", GetComponentName(tag)); reporter->InternalWarning("analyzer %s cannot be instantiated dynamically",
GetComponentName(tag).c_str());
return 0; return 0;
} }
@ -413,7 +415,7 @@ bool Manager::BuildInitialAnalyzerTree(Connection* conn)
root->AddChildAnalyzer(analyzer, false); root->AddChildAnalyzer(analyzer, false);
DBG_ANALYZER_ARGS(conn, "activated %s analyzer due to port %d", DBG_ANALYZER_ARGS(conn, "activated %s analyzer due to port %d",
analyzer_mgr->GetComponentName(*j), resp_port); analyzer_mgr->GetComponentName(*j).c_str(), resp_port);
} }
} }
} }
@ -532,7 +534,7 @@ void Manager::ExpireScheduledAnalyzers()
conns.erase(i); conns.erase(i);
DBG_LOG(DBG_ANALYZER, "Expiring expected analyzer %s for connection %s", DBG_LOG(DBG_ANALYZER, "Expiring expected analyzer %s for connection %s",
analyzer_mgr->GetComponentName(a->analyzer), analyzer_mgr->GetComponentName(a->analyzer).c_str(),
fmt_conn_id(a->conn.orig, 0, a->conn.resp, a->conn.resp_p)); fmt_conn_id(a->conn.orig, 0, a->conn.resp, a->conn.resp_p));
delete a; delete a;
@ -638,7 +640,7 @@ bool Manager::ApplyScheduledAnalyzers(Connection* conn, bool init, TransportLaye
conn->Event(scheduled_analyzer_applied, 0, tag); conn->Event(scheduled_analyzer_applied, 0, tag);
DBG_ANALYZER_ARGS(conn, "activated %s analyzer as scheduled", DBG_ANALYZER_ARGS(conn, "activated %s analyzer as scheduled",
analyzer_mgr->GetComponentName(*it)); analyzer_mgr->GetComponentName(*it).c_str());
} }
return expected.size(); return expected.size();
@ -45,10 +45,6 @@ namespace analyzer {
* sets up their initial analyzer tree, including adding the right \c PIA, * sets up their initial analyzer tree, including adding the right \c PIA,
* respecting well-known ports, and tracking any analyzers specifically * respecting well-known ports, and tracking any analyzers specifically
 * scheduled for individual connections. * scheduled for individual connections.
*
* Note that we keep the public interface of this class free of std::*
* classes. This allows to external analyzer code to potentially use a
* different C++ standard library.
*/ */
class Manager : public plugin::ComponentManager<Tag, Component> { class Manager : public plugin::ComponentManager<Tag, Component> {
public: public:
@ -133,6 +129,14 @@ public:
*/ */
void DisableAllAnalyzers(); void DisableAllAnalyzers();
/**
	 * Returns the tag associated with an analyzer name, or the tag
* associated with an error if no such analyzer exists.
*
* @param name The canonical analyzer name to check.
*/
Tag GetAnalyzerTag(const char* name);
/** /**
* Returns true if an analyzer is enabled. * Returns true if an analyzer is enabled.
* *
@ -1,7 +1,21 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
BRO_PLUGIN_BEGIN(Bro, ARP) namespace plugin {
BRO_PLUGIN_DESCRIPTION("ARP Parsing Code"); namespace Bro_ARP {
BRO_PLUGIN_BIF_FILE(events);
BRO_PLUGIN_END class Plugin : public plugin::Plugin {
public:
plugin::Configuration Configure()
{
plugin::Configuration config;
config.name = "Bro::ARP";
config.description = "ARP Parsing";
return config;
}
} plugin;
}
}
@ -14,7 +14,7 @@ public:
virtual void DeliverPacket(int len, const u_char* data, bool orig, virtual void DeliverPacket(int len, const u_char* data, bool orig,
uint64 seq, const IP_Hdr* ip, int caplen); uint64 seq, const IP_Hdr* ip, int caplen);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new AYIYA_Analyzer(conn); } { return new AYIYA_Analyzer(conn); }
protected: protected:
@ -1,10 +1,25 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
#include "AYIYA.h" #include "AYIYA.h"
BRO_PLUGIN_BEGIN(Bro, AYIYA) namespace plugin {
BRO_PLUGIN_DESCRIPTION("AYIYA Analyzer"); namespace Bro_AYIYA {
BRO_PLUGIN_ANALYZER("AYIYA", ayiya::AYIYA_Analyzer);
BRO_PLUGIN_BIF_FILE(events); class Plugin : public plugin::Plugin {
BRO_PLUGIN_END public:
plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("AYIYA", ::analyzer::ayiya::AYIYA_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::AYIYA";
config.description = "AYIYA Analyzer";
return config;
}
} plugin;
}
}
@ -73,7 +73,7 @@ public:
virtual void Done(); virtual void Done();
void StatTimer(double t, int is_expire); void StatTimer(double t, int is_expire);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new BackDoor_Analyzer(conn); } { return new BackDoor_Analyzer(conn); }
protected: protected:
@ -1,10 +1,25 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
#include "BackDoor.h" #include "BackDoor.h"
BRO_PLUGIN_BEGIN(Bro, BackDoor) namespace plugin {
BRO_PLUGIN_DESCRIPTION("Backdoor Analyzer (deprecated)"); namespace Bro_BackDoor {
BRO_PLUGIN_ANALYZER("BackDoor", backdoor::BackDoor_Analyzer);
BRO_PLUGIN_BIF_FILE(events); class Plugin : public plugin::Plugin {
BRO_PLUGIN_END public:
plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("BackDoor", ::analyzer::backdoor::BackDoor_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::BackDoor";
config.description = "Backdoor Analyzer deprecated";
return config;
}
} plugin;
}
}
@ -19,7 +19,7 @@ public:
virtual void Undelivered(uint64 seq, int len, bool orig); virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig); virtual void EndpointEOF(bool is_orig);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new BitTorrent_Analyzer(conn); } { return new BitTorrent_Analyzer(conn); }
protected: protected:
@ -52,7 +52,7 @@ public:
virtual void Undelivered(uint64 seq, int len, bool orig); virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig); virtual void EndpointEOF(bool is_orig);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new BitTorrentTracker_Analyzer(conn); } { return new BitTorrentTracker_Analyzer(conn); }
protected: protected:
@ -1,12 +1,27 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
#include "BitTorrent.h" #include "BitTorrent.h"
#include "BitTorrentTracker.h" #include "BitTorrentTracker.h"
BRO_PLUGIN_BEGIN(Bro, BitTorrent) namespace plugin {
BRO_PLUGIN_DESCRIPTION("BitTorrent Analyzer"); namespace Bro_BitTorrent {
BRO_PLUGIN_ANALYZER("BitTorrent", bittorrent::BitTorrent_Analyzer);
BRO_PLUGIN_ANALYZER("BitTorrentTracker", bittorrent::BitTorrentTracker_Analyzer); class Plugin : public plugin::Plugin {
BRO_PLUGIN_BIF_FILE(events); public:
BRO_PLUGIN_END plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("BitTorrent", ::analyzer::bittorrent::BitTorrent_Analyzer::Instantiate));
AddComponent(new ::analyzer::Component("BitTorrentTracker", ::analyzer::bittorrent::BitTorrentTracker_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::BitTorrent";
config.description = "BitTorrent Analyzer";
return config;
}
} plugin;
}
}
@ -21,7 +21,7 @@ public:
virtual void UpdateConnVal(RecordVal *conn_val); virtual void UpdateConnVal(RecordVal *conn_val);
virtual void FlipRoles(); virtual void FlipRoles();
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new ConnSize_Analyzer(conn); } { return new ConnSize_Analyzer(conn); }
protected: protected:
@ -1,10 +1,25 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
#include "ConnSize.h" #include "ConnSize.h"
BRO_PLUGIN_BEGIN(Bro, ConnSize) namespace plugin {
BRO_PLUGIN_DESCRIPTION("Connection size analyzer"); namespace Bro_ConnSize {
BRO_PLUGIN_ANALYZER("ConnSize", conn_size::ConnSize_Analyzer);
BRO_PLUGIN_BIF_FILE(events); class Plugin : public plugin::Plugin {
BRO_PLUGIN_END public:
plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("ConnSize", ::analyzer::conn_size::ConnSize_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::ConnSize";
config.description = "Connection size analyzer";
return config;
}
} plugin;
}
}
@ -178,7 +178,7 @@ public:
DCE_RPC_Analyzer(Connection* conn, bool speculative = false); DCE_RPC_Analyzer(Connection* conn, bool speculative = false);
~DCE_RPC_Analyzer(); ~DCE_RPC_Analyzer();
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new DCE_RPC_Analyzer(conn); } { return new DCE_RPC_Analyzer(conn); }
protected: protected:
@ -1,11 +1,26 @@
// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
#include "DCE_RPC.h" #include "DCE_RPC.h"
BRO_PLUGIN_BEGIN(Bro, DCE_RPC) namespace plugin {
BRO_PLUGIN_DESCRIPTION("DCE-RPC analyzer"); namespace Bro_DCE_RPC {
BRO_PLUGIN_ANALYZER("DCE_RPC", dce_rpc::DCE_RPC_Analyzer);
BRO_PLUGIN_SUPPORT_ANALYZER("Contents_DCE_RPC"); class Plugin : public plugin::Plugin {
BRO_PLUGIN_BIF_FILE(events); public:
BRO_PLUGIN_END plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("DCE_RPC", ::analyzer::dce_rpc::DCE_RPC_Analyzer::Instantiate));
AddComponent(new ::analyzer::Component("Contents_DCE_RPC", 0));
plugin::Configuration config;
config.name = "Bro::DCE_RPC";
config.description = "DCE-RPC analyzer";
return config;
}
} plugin;
}
}
@ -16,7 +16,7 @@ public:
virtual void DeliverPacket(int len, const u_char* data, bool orig, virtual void DeliverPacket(int len, const u_char* data, bool orig,
uint64 seq, const IP_Hdr* ip, int caplen); uint64 seq, const IP_Hdr* ip, int caplen);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn) static analyzer::Analyzer* Instantiate(Connection* conn)
{ return new DHCP_Analyzer(conn); } { return new DHCP_Analyzer(conn); }
protected: protected:

// See the file in the main distribution directory for copyright.
#include "plugin/Plugin.h" #include "plugin/Plugin.h"
#include "DHCP.h" #include "DHCP.h"
BRO_PLUGIN_BEGIN(Bro, DHCP) namespace plugin {
BRO_PLUGIN_DESCRIPTION("DHCP analyzer"); namespace Bro_DHCP {
BRO_PLUGIN_ANALYZER("DHCP", dhcp::DHCP_Analyzer);
BRO_PLUGIN_BIF_FILE(events); class Plugin : public plugin::Plugin {
BRO_PLUGIN_END public:
plugin::Configuration Configure()
{
AddComponent(new ::analyzer::Component("DHCP", ::analyzer::dhcp::DHCP_Analyzer::Instantiate));
plugin::Configuration config;
config.name = "Bro::DHCP";
config.description = "DHCP analyzer";
return config;
}
} plugin;
}
}
@ -18,8 +18,8 @@
## ##
event dhcp_discover%(c: connection, msg: dhcp_msg, req_addr: addr, host_name: string%); event dhcp_discover%(c: connection, msg: dhcp_msg, req_addr: addr, host_name: string%);
## Generated for DHCP messages of type *DHCPOFFER* (server to client in response to ## Generated for DHCP messages of type *DHCPOFFER* (server to client in response
-## DHCPDISCOVER with offer of configuration parameters).
+## to DHCPDISCOVER with offer of configuration parameters).
 ##
 ## c: The connection record describing the underlying UDP flow.
 ##
@@ -33,7 +33,8 @@ event dhcp_discover%(c: connection, msg: dhcp_msg, req_addr: addr, host_name: st
 ##
 ## serv_addr: The server address specified by the message.
 ##
-## host_name: The value of the host name option, if specified by the client.
+## host_name: Optional host name value. May differ from the host name requested
+##            from the client.
 ##
 ## .. bro:see:: dhcp_discover dhcp_request dhcp_decline dhcp_ack dhcp_nak
 ##              dhcp_release dhcp_inform
@@ -75,7 +76,7 @@ event dhcp_request%(c: connection, msg: dhcp_msg, req_addr: addr, serv_addr: add
 ##
 ## msg: The parsed type-independent part of the DHCP message.
 ##
-## host_name: The value of the host name option, if specified by the client.
+## host_name: Optional host name value.
 ##
 ## .. bro:see:: dhcp_discover dhcp_offer dhcp_request dhcp_ack dhcp_nak
 ##              dhcp_release dhcp_inform
@@ -101,7 +102,8 @@ event dhcp_decline%(c: connection, msg: dhcp_msg, host_name: string%);
 ##
 ## serv_addr: The server address specified by the message.
 ##
-## host_name: The value of the host name option, if specified by the client.
+## host_name: Optional host name value. May differ from the host name requested
+##            from the client.
 ##
 ## .. bro:see:: dhcp_discover dhcp_offer dhcp_request dhcp_decline dhcp_nak
 ##              dhcp_release dhcp_inform
@@ -116,7 +118,7 @@ event dhcp_ack%(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_li
 ##
 ## msg: The parsed type-independent part of the DHCP message.
 ##
-## host_name: The value of the host name option, if specified by the client.
+## host_name: Optional host name value.
 ##
 ## .. bro:see:: dhcp_discover dhcp_offer dhcp_request dhcp_decline dhcp_ack dhcp_release
 ##              dhcp_inform

@@ -17,7 +17,7 @@ public:
 	virtual void Undelivered(uint64 seq, int len, bool orig);
 	virtual void EndpointEOF(bool is_orig);

-	static Analyzer* InstantiateAnalyzer(Connection* conn)
+	static Analyzer* Instantiate(Connection* conn)
 		{ return new DNP3_Analyzer(conn); }

 private:

@@ -1,10 +1,25 @@
+// See the file in the main distribution directory for copyright.
+
 #include "plugin/Plugin.h"
+
 #include "DNP3.h"

-BRO_PLUGIN_BEGIN(Bro, DNP3)
-	BRO_PLUGIN_DESCRIPTION("DNP3 analyzer");
-	BRO_PLUGIN_ANALYZER("DNP3", dnp3::DNP3_Analyzer);
-	BRO_PLUGIN_BIF_FILE(events);
-BRO_PLUGIN_END
+namespace plugin {
+namespace Bro_DNP3 {
+
+class Plugin : public plugin::Plugin {
+public:
+	plugin::Configuration Configure()
+		{
+		AddComponent(new ::analyzer::Component("DNP3", ::analyzer::dnp3::DNP3_Analyzer::Instantiate));
+
+		plugin::Configuration config;
+		config.name = "Bro::DNP3";
+		config.description = "DNP3 analyzer";
+		return config;
+		}
+} plugin;
+
+}
+}

@@ -692,15 +692,23 @@ int DNS_Interpreter::ParseRR_EDNS(DNS_MsgInfo* msg,
 		data += rdlength;
 		len -= rdlength;
 		}
+	else
+		{ // no data, move on
+		data += rdlength;
+		len -= rdlength;
+		}

 	return 1;
 	}

+void DNS_Interpreter::ExtractOctets(const u_char*& data, int& len,
+				BroString** p)
+	{
+	uint16 dlen = ExtractShort(data, len);
+	dlen = min(len, static_cast<int>(dlen));
+
+	if ( p )
+		*p = new BroString(data, dlen, 0);
+
+	data += dlen;
+	len -= dlen;
+	}
+
 int DNS_Interpreter::ParseRR_TSIG(DNS_MsgInfo* msg,
 		const u_char*& data, int& len, int rdlength,
 		const u_char* msg_start)
@@ -718,24 +726,17 @@ int DNS_Interpreter::ParseRR_TSIG(DNS_MsgInfo* msg,
 	uint32 sign_time_sec = ExtractLong(data, len);
 	unsigned int sign_time_msec = ExtractShort(data, len);
 	unsigned int fudge = ExtractShort(data, len);
-
-	u_char request_MAC[16];
-	memcpy(request_MAC, data, sizeof(request_MAC));
-
-	// Here we adjust the size of the requested MAC + u_int16_t
-	// for length. See RFC 2845, sec 2.3.
-	int n = sizeof(request_MAC) + sizeof(u_int16_t);
-	data += n;
-	len -= n;
+	BroString* request_MAC;
+	ExtractOctets(data, len, &request_MAC);

 	unsigned int orig_id = ExtractShort(data, len);
 	unsigned int rr_error = ExtractShort(data, len);
+	ExtractOctets(data, len, 0);	// Other Data

 	msg->tsig = new TSIG_DATA;
 	msg->tsig->alg_name =
 		new BroString(alg_name, alg_name_end - alg_name, 1);
-	msg->tsig->sig = new BroString(request_MAC, sizeof(request_MAC), 1);
+	msg->tsig->sig = request_MAC;
 	msg->tsig->time_s = sign_time_sec;
 	msg->tsig->time_ms = sign_time_msec;
 	msg->tsig->fudge = fudge;

@@ -180,6 +180,7 @@ protected:
 	uint16 ExtractShort(const u_char*& data, int& len);
 	uint32 ExtractLong(const u_char*& data, int& len);
+	void ExtractOctets(const u_char*& data, int& len, BroString** p);

 	int ParseRR_Name(DNS_MsgInfo* msg,
 			const u_char*& data, int& len, int rdlength,
@@ -267,7 +268,7 @@ public:
 	void ExpireTimer(double t);

-	static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
+	static analyzer::Analyzer* Instantiate(Connection* conn)
 		{ return new DNS_Analyzer(conn); }

 protected:

@@ -1,11 +1,26 @@
+// See the file in the main distribution directory for copyright.
+
 #include "plugin/Plugin.h"
+
 #include "DNS.h"

-BRO_PLUGIN_BEGIN(Bro, DNS)
-	BRO_PLUGIN_DESCRIPTION("DNS analyzer");
-	BRO_PLUGIN_ANALYZER("DNS", dns::DNS_Analyzer);
-	BRO_PLUGIN_SUPPORT_ANALYZER("Contents_DNS");
-	BRO_PLUGIN_BIF_FILE(events);
-BRO_PLUGIN_END
+namespace plugin {
+namespace Bro_DNS {
+
+class Plugin : public plugin::Plugin {
+public:
+	plugin::Configuration Configure()
+		{
+		AddComponent(new ::analyzer::Component("DNS", ::analyzer::dns::DNS_Analyzer::Instantiate));
+		AddComponent(new ::analyzer::Component("Contents_DNS", 0));
+
+		plugin::Configuration config;
+		config.name = "Bro::DNS";
+		config.description = "DNS analyzer";
+		return config;
+		}
+} plugin;
+
+}
+}

Some files were not shown because too many files have changed in this diff.