Merge branch 'master' into topic/jsiwek/brofiler

Conflicts:
	src/main.cc
Jon Siwek, 2012-01-11 10:57:44 -06:00
commit 1181444f37
291 changed files with 17420 additions and 6314 deletions

CHANGES (231 lines changed)

@ -1,4 +1,235 @@
2.0-beta-194 | 2012-01-10 10:44:32 -0800
* Added an option for filtering out URLs before they are turned into
HTTP::Incorrect_File_Type notices. (Seth Hall)
* Fix ref counting bug in BIFs that call internal_type. Addresses
#740. (Jon Siwek)
* Adding back the stats.bro file. (Seth Hall)
2.0-beta-188 | 2012-01-10 09:49:29 -0800
* Change SFTP/SCP log rotators to use 4-digit year in filenames
Fixes #745. (Jon Siwek)
* Adding back the stats.bro file. Addresses #656. (Seth Hall)
2.0-beta-185 | 2012-01-09 18:00:50 -0800
* Tweaks for OpenBSD support. (Jon Siwek)
2.0-beta-181 | 2012-01-08 20:49:04 -0800
* Add SFTP log postprocessor that transfers logs to remote hosts.
Addresses #737. (Jon Siwek)
* Add FAQ entry about disabling NIC offloading features. (Jon Siwek)
* Add a file NEWS with release notes. (Robin Sommer)
2.0-beta-177 | 2012-01-05 15:01:07 -0800
* Replace the --snaplen/-l command line option with a
scripting-layer option called "snaplen" (which can also be
redefined on the command line, e.g. `bro -i eth0 snaplen=65535`).
* Reduce snaplen default from 65535 to old default of 8192. Fixes
#720. (Jon Siwek)
2.0-beta-174 | 2012-01-04 12:47:10 -0800
* SSL improvements. (Seth Hall)
- Added the ssl_session_ticket_handshake event back.
- Fixed a few bugs.
- Removed the SSLv2.cc file since it's not used.
2.0-beta-169 | 2012-01-04 12:44:39 -0800
* Tuning the pretty-printed alarm mails, which now include the
covered time range in the subject. (Robin Sommer)
* Adding top-level "test" target to Makefile. (Robin Sommer)
* Adding SWIG as dependency to INSTALL. (Robin Sommer)
2.0-beta-155 | 2012-01-03 15:42:32 -0800
* Remove dead code related to record type inheritance. (Jon Siwek)
2.0-beta-152 | 2012-01-03 14:51:34 -0800
* Notices now record the transport-layer protocol. (Bernhard Amann)
2.0-beta-150 | 2012-01-03 14:42:45 -0800
* CMake 2.6 top-level 'install' target compat. Fixes #729. (Jon Siwek)
* Minor fixes to test process. Addresses #298.
* Increase timeout interval of communication-related btests. (Jon Siwek)
2.0-beta-145 | 2011-12-19 11:37:15 -0800
* Empty fields are now logged as "(empty)" by default. (Robin
Sommer)
* In log headers, only escape information when necessary. (Robin
Sommer)
2.0-beta-139 | 2011-12-19 07:06:29 -0800
* The hostname notice email extension works now, plus a general
mechanism for adding delayed information to notices. (Seth Hall)
* Fix &default fields in records not being initialized in coerced
assignments. Addresses #722. (Jon Siwek)
* Make log headers include the type of data stored inside a set or
vector ("vector[string]"). (Bernhard Amann)
2.0-beta-126 | 2011-12-18 15:18:05 -0800
* DNS updates. (Seth Hall)
- Fixed some bugs with capturing data in the base DNS script.
- Answers and TTLs are now vectors.
- A warning (dns_reply_seen_after_done) that was being generated
by transaction ID reuse has been fixed.
* SSL updates. (Seth Hall)
- Added is_orig fields to the SSL events and adapted script.
- Added a field named last_alert to the SSL log.
- The x509_certificate function has an is_orig field now instead
of is_server and its position in the argument list has moved.
- A bit of reorganization and cleanup in the core analyzer. (Seth
Hall)
2.0-beta-121 | 2011-12-18 15:10:15 -0800
* Enable warnings for malformed Broxygen xref roles. (Jon Siwek)
* Fix Broxygen confusing scoped IDs at start of line as function
parameter. (Jon Siwek)
* Allow Broxygen markup "##<" for more general use. (Jon Siwek)
2.0-beta-116 | 2011-12-16 02:38:27 -0800
* Cleanup some misc Broxygen css/js stuff. (Jon Siwek)
* Add search box to Broxygen docs. Fixes #726. (Jon Siwek)
* Fixed major bug with cluster synchronization, which was not
working. (Seth Hall)
* Fix missing action in notice policy for looking up GeoIP data.
(Jon Siwek)
* Better persistent state configuration warning messages (fixes
#433). (Jon Siwek)
* Renaming HTTP::SQL_Injection_Attack_Against to
HTTP::SQL_Injection_Victim. (Seth Hall).
* Fixed DPD signatures for IRC. Fixes #311. (Seth Hall)
* Removing Off_Port_Protocol_Found notice. (Seth Hall)
* Teach Broxygen to more generally reference attribute values by name. (Jon Siwek)
* SSH::Interesting_Hostname_Login cleanup. Fixes #664. (Seth Hall)
* Fixed bug that was causing the malware hash registry script to
break. (Seth Hall)
* Remove remnant of libmagic optionality. (Jon Siwek)
2.0-beta-98 | 2011-12-07 08:12:08 -0800
* Adapting test-suite's diff-all so that it expands globs in both
the current and baseline directories. Closes #677. (Robin Sommer)
2.0-beta-97 | 2011-12-06 11:49:29 -0800
* Omit loading local-<node>.bro scripts from base cluster framework.
Addresses #663 (Jon Siwek)
2.0-beta-94 | 2011-12-03 15:57:19 -0800
* Adapting attribute serialization when talking to Broccoli. (Robin
Sommer)
2.0-beta-92 | 2011-12-03 15:56:03 -0800
* Changes to Broxygen master script package index. (Jon Siwek)
- Now only lists packages as those directories in the script hierarchy
that contain an __load__.bro file.
- Script packages (dirs with a __load__.bro file) can now include
a README (in reST format) that will automatically be appended
under the link to a specific package in the master package
index.
2.0-beta-88 | 2011-12-02 17:00:58 -0800
* Teach LogWriterAscii to use BRO_LOG_SUFFIX environment variable.
Addresses #704. (Jon Siwek)
* Fix double-free of DNS_Mgr_Request object. Addresses #661.
* Add a remote_log_peer event which comes with an event_peer record
parameter. Addresses #493. (Jon Siwek)
* Remove example redef of SMTP::entity_excerpt_len from local.bro.
Fixes error emitted when loading local.bro in bare mode. (Jon
Siwek)
* Add missing doc targets to top Makefile; remove old doc/Makefile.
Fixes #705. (Jon Siwek)
* Turn some globals into constants. Addresses #633. (Seth Hall)
* Rearrange packet filter and DPD documentation. (Jon Siwek)
2.0-beta-72 | 2011-11-30 20:16:09 -0800
* Fine-tuning the Sphinx layout to better match www. (Jon Siwek and
Robin Sommer)
2.0-beta-69 | 2011-11-29 16:55:31 -0800
* Fixing ASCII logger to escape the unset-field placeholder if
written out literally. (Robin Sommer)
2.0-beta-68 | 2011-11-29 15:23:12 -0800
* Lots of documentation polishing. (Jon Siwek)
* Teach Broxygen the ".. bro:see::" directive. (Jon Siwek)
* Teach Broxygen :bro:see: role for referencing any identifier in
the Bro domain. (Jon Siwek)
* Teach Broxygen to generate an index of Bro notices. (Jon Siwek)
* Fix order of include directories. (Jon Siwek)
* Catch if logged vectors do not contain only atomic types.
(Bernhard Amann)
2.0-beta-47 | 2011-11-16 08:24:33 -0800
* Catch if logged sets do not contain only atomic types. (Bernhard


@ -1,4 +1,4 @@
-Copyright (c) 1995-2011, The Regents of the University of California
+Copyright (c) 1995-2012, The Regents of the University of California
through the Lawrence Berkeley National Laboratory and the
International Computer Science Institute. All rights reserved.

INSTALL (16 lines changed)

@ -14,10 +14,11 @@ before you begin:
* OpenSSL (headers and libraries)  http://www.openssl.org
-* Libmagic  For identifying file types (e.g., in FTP transfers).
-* Libz      For decompressing HTTP bodies by the HTTP analyzer, and for
-            compressed Bro-to-Bro communication.
+* SWIG      http://www.swig.org
+* Libmagic
+* Libz

Bro can make use of some optional libraries if they are found at
installation time:

@ -27,11 +28,13 @@ installation time:

Bro also needs the following tools, but on most systems they will
already come preinstalled:

+* Bash (For Bro Control).
* BIND8 (headers and libraries)
* Bison (GNU Parser Generator)
* Flex (Fast Lexical Analyzer)
* Perl (Used only during the Bro build process)

Installation
============

@ -64,13 +67,16 @@ except for ``aux/bro-aux`` will also be built and installed by doing

``--disable-*`` options that can be given to the configure script to
turn off unwanted auxiliary projects.

+OpenBSD users, please see our `FAQ
+<http://www.bro-ids.org/documentation/faq.html>`_ if you are having
+problems installing Bro.

Running Bro
===========

Bro is a complex program and it takes a bit of time to get familiar
-with it. A good place for newcomers to start is the
-:doc:`quick start guide <quickstart>`.
+with it. A good place for newcomers to start is the Quickstart Guide
+at http://www.bro-ids.org/documentation/quickstart.bro.html.

For developers that wish to run Bro directly from the ``build/``
directory (i.e., without performing ``make install``), they will have


@ -14,7 +14,7 @@ HAVE_MODULES=git submodule | grep -v cmake >/dev/null
all: configured
	$(MAKE) -C $(BUILD) $@

-install: configured
+install: configured all
	$(MAKE) -C $(BUILD) $@

install-aux: configured

@ -29,6 +29,18 @@ doc: configured

docclean: configured
	$(MAKE) -C $(BUILD) $@

+restdoc: configured
+	$(MAKE) -C $(BUILD) $@
+
+restclean: configured
+	$(MAKE) -C $(BUILD) $@
+
+broxygen: configured
+	$(MAKE) -C $(BUILD) $@
+
+broxygenclean: configured
+	$(MAKE) -C $(BUILD) $@
+
dist:
	@rm -rf $(VERSION_FULL) $(VERSION_FULL).tgz
	@rm -rf $(VERSION_MIN) $(VERSION_MIN).tgz

@ -48,6 +60,9 @@ bindist:

distclean:
	rm -rf $(BUILD)

+test:
+	@(cd testing && make )

configured:
	@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )
	@test -e $(BUILD)/Makefile || ( echo "Error: No build/Makefile found. Did you run configure?" && exit 1 )

NEWS (new file, 63 lines)

@ -0,0 +1,63 @@
Release Notes
=============
This document summarizes the most important changes in the current Bro
release. For a complete list of changes, see the ``CHANGES`` file.
Bro 2.0
-------
As the version number jump suggests, Bro 2.0 is a major upgrade and
lots of things have changed. We have assembled a separate upgrade
guide with the most important changes compared to Bro 1.5 at
http://www.bro-ids.org/documentation/upgrade.bro.html. You can find
the offline version of that document in ``doc/upgrade.rst``.
Compared to the earlier 2.0 Beta version, the major changes in the
final release are:
* The default scripts now come with complete reference
documentation. See
http://www.bro-ids.org/documentation/index.html.
* libz and libmagic are now required dependencies.
* Reduced the snaplen default from 65535 to the old default of 8192. The
large value was introducing performance problems on many
systems.
* Replaced the --snaplen/-l command line option with a
scripting-layer option called "snaplen". The new option can also
be redefined on the command line, e.g. ``bro -i eth0
snaplen=65535``.
* Reintroduced the BRO_LOG_SUFFIX environment variable, which the ASCII
logger now respects to add a suffix to the log files it creates.
* The ASCII logs now include further header information, and
fields set to an empty value are now logged as ``(empty)`` by
default (instead of ``-``, which is already used for fields that
are not set at all).
* Some NOTICES were renamed, and the signatures of some SSL events
have changed.
* bro-cut got some new capabilities:
- If no field names are given on the command line, we now pass
through all fields.
- New options -u/-U for time output in UTC.
- New option -F to give output field separator.
* Broccoli supports more types internally, making it possible to send
complex records.
* Many smaller bug fixes, portability improvements, and general
polishing across all modules.

README (10 lines changed)

@ -4,13 +4,15 @@ Bro Network Security Monitor
Bro is a powerful framework for network analysis and security
monitoring. Please see the INSTALL file for installation instructions
-and pointers for getting started. For more documentation, research
-publications, and community contact information, see Bro's home page:
+and pointers for getting started. NEWS contains release notes for the
+current version, and CHANGES has the complete history of changes.
+Please see COPYING for licensing information.
+
+For more documentation, research publications, and community contact
+information, please see Bro's home page:

    http://www.bro-ids.org

-Please see COPYING for licensing information.
-
On behalf of the Bro Development Team,

    Vern Paxson & Robin Sommer,


@ -1 +1 @@
-2.0-beta-47
+2.0-beta-194

@ -1 +1 @@
-Subproject commit 34d90437403e4129468f89acce0bd1a99813a2f4
+Subproject commit aa1aa85ddcf524ffcfcf9efa5277bfac341871f7

@ -1 +1 @@
-Subproject commit 7ea5837b4ba8403731ca4a9875616c0ab501342f
+Subproject commit 22a2c5249c56b46290f5bde366f69e1eeacbfe0a

@ -1 +1 @@
-Subproject commit d281350dbcc19c24aa6b6d89a4edc08a5c74a790
+Subproject commit 71da6a319f8c2578b87cf1bb6337b9fcc2724a66

@ -1 +1 @@
-Subproject commit ed4d4ce1add51f0e08e6e8d2f5f247c2cbb422da
+Subproject commit 6ae53077c1b389011e4b49f10e3dbd8d3cf2d0eb

@ -1 +1 @@
-Subproject commit 7230a09a8c220d2117e491fdf293bf5c19819b65
+Subproject commit 5350e4652b44ce1fbd9fffe1228d097fb04247cd

cmake (submodule, 2 lines changed)

@ -1 +1 @@
-Subproject commit f0f7958639bb921985c1f58f1186da4b49b5d54d
+Subproject commit ca4ed1a237215765ce9a7f2bc4b57b56958039ef


@ -17,9 +17,6 @@
/* We are on a Linux system */
#cmakedefine HAVE_LINUX

-/* Define if you have the <magic.h> header file. */
-#cmakedefine HAVE_MAGIC_H

/* Define if you have the `mallinfo' function. */
#cmakedefine HAVE_MALLINFO

@ -35,8 +32,8 @@

/* Define if you have the <net/ethernet.h> header file. */
#cmakedefine HAVE_NET_ETHERNET_H

-/* We are on a OpenBSD system */
-#cmakedefine HAVE_OPENBSD
+/* Define if you have the <net/ethertypes.h> header file. */
+#cmakedefine HAVE_NET_ETHERTYPES_H

/* have os-proto.h */
#cmakedefine HAVE_OS_PROTO_H

@ -148,3 +145,10 @@

/* Define u_int8_t */
#cmakedefine u_int8_t @u_int8_t@

+/* OpenBSD's bpf.h may not declare this data link type, but it's supposed to be
+   used consistently for the same purpose on all platforms. */
+#cmakedefine HAVE_DLT_PPP_SERIAL
+#ifndef HAVE_DLT_PPP_SERIAL
+#define DLT_PPP_SERIAL @DLT_PPP_SERIAL@
+#endif

doc/.gitignore (1 line changed)

@ -1 +1,2 @@
html
+*.pyc


@ -51,6 +51,8 @@ add_custom_target(broxygen
COMMAND "${CMAKE_COMMAND}" -E create_symlink
${DOC_OUTPUT_DIR}/html
${CMAKE_BINARY_DIR}/html
+# copy Broccoli API reference into output dir if it exists
+COMMAND test -d ${CMAKE_BINARY_DIR}/aux/broccoli/doc/html && ( rm -rf ${CMAKE_BINARY_DIR}/html/broccoli-api && cp -r ${CMAKE_BINARY_DIR}/aux/broccoli/doc/html ${CMAKE_BINARY_DIR}/html/broccoli-api ) || true
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "[Sphinx] Generating HTML policy script docs"
# SOURCES just adds stuff to IDE projects as a convenience

@ -58,16 +60,16 @@ add_custom_target(broxygen

# The "sphinxclean" target removes just the Sphinx input/output directories
# from the build directory.
-add_custom_target(broxygen-clean
+add_custom_target(broxygenclean
COMMAND "${CMAKE_COMMAND}" -E remove_directory
${DOC_SOURCE_WORKDIR}
COMMAND "${CMAKE_COMMAND}" -E remove_directory
${DOC_OUTPUT_DIR}
VERBATIM)

-add_dependencies(broxygen broxygen-clean restdoc)
+add_dependencies(broxygen broxygenclean restdoc)

add_custom_target(doc)
add_custom_target(docclean)
add_dependencies(doc broxygen)
-add_dependencies(docclean broxygen-clean restclean)
+add_dependencies(docclean broxygenclean restclean)


@ -1,7 +0,0 @@
all:
test -d html || mkdir html
for i in *.rst; do echo "$$i ..."; ./bin/rst2html.py $$i >html/`echo $$i | sed 's/rst$$/html/g'`; done
clean:
rm -rf html


@ -15,8 +15,9 @@ which adds some reST directives and roles that aid in generating useful
index entries and cross-references. Other extensions can be added in
a similar fashion.

-Either the ``make doc`` or ``make broxygen`` can be used to locally
-render the reST files into HTML. Those targets depend on:
+Either the ``make doc`` or ``make broxygen`` targets in the top-level
+Makefile can be used to locally render the reST files into HTML.
+Those targets depend on:

* Python interpreter >= 2.5
* `Sphinx <http://sphinx.pocoo.org/>`_ >= 1.0.1
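A short, hypothetical pre-flight check along these lines can confirm the documented minimums before running the doc targets; it only assumes a working Python and, optionally, an installed Sphinx::

    # Hypothetical sketch: verify the documented doc-build dependencies
    # (Python >= 2.5 and Sphinx >= 1.0.1) before running ``make doc``.
    import sys

    if sys.version_info < (2, 5):
        sys.exit("Python >= 2.5 is required to build the documentation")

    try:
        import sphinx
    except ImportError:
        sys.exit("Sphinx >= 1.0.1 is required (http://sphinx.pocoo.org/)")

    print("Found Python %s and Sphinx %s" %
          (sys.version.split()[0], sphinx.__version__))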

doc/_static/960.css (new file; diff suppressed because one or more lines are too long)

doc/_static/basic.css (new file, 513 lines)

@ -0,0 +1,513 @@
/*
* basic.css
* ~~~~~~~~~
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* -- main layout ----------------------------------------------------------- */
div.clearer {
clear: both;
}
/* -- relbar ---------------------------------------------------------------- */
div.related {
width: 100%;
font-size: 90%;
}
div.related h3 {
display: none;
}
div.related ul {
margin: 0;
padding: 0 0 0 10px;
list-style: none;
}
div.related li {
display: inline;
}
div.related li.right {
float: right;
margin-right: 5px;
}
/* -- sidebar --------------------------------------------------------------- */
div.sphinxsidebarwrapper {
padding: 10px 5px 0 10px;
}
div.sphinxsidebar {
float: left;
width: 230px;
margin-left: -100%;
font-size: 90%;
}
div.sphinxsidebar ul {
list-style: none;
}
div.sphinxsidebar ul ul,
div.sphinxsidebar ul.want-points {
margin-left: 20px;
list-style: square;
}
div.sphinxsidebar ul ul {
margin-top: 0;
margin-bottom: 0;
}
div.sphinxsidebar form {
margin-top: 10px;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
div.sphinxsidebar input[type="text"] {
width: 170px;
}
div.sphinxsidebar input[type="submit"] {
width: 30px;
}
img {
border: 0;
}
/* -- search page ----------------------------------------------------------- */
ul.search {
margin: 10px 0 0 20px;
padding: 0;
}
ul.search li {
padding: 5px 0 5px 20px;
background-image: url(file.png);
background-repeat: no-repeat;
background-position: 0 7px;
}
ul.search li a {
font-weight: bold;
}
ul.search li div.context {
color: #888;
margin: 2px 0 0 30px;
text-align: left;
}
ul.keywordmatches li.goodmatch a {
font-weight: bold;
}
/* -- index page ------------------------------------------------------------ */
table.contentstable {
width: 90%;
}
table.contentstable p.biglink {
line-height: 150%;
}
a.biglink {
font-size: 1.3em;
}
span.linkdescr {
font-style: italic;
padding-top: 5px;
font-size: 90%;
}
/* -- general index --------------------------------------------------------- */
table.indextable {
width: 100%;
}
table.indextable td {
text-align: left;
vertical-align: top;
}
table.indextable dl, table.indextable dd {
margin-top: 0;
margin-bottom: 0;
}
table.indextable tr.pcap {
height: 10px;
}
table.indextable tr.cap {
margin-top: 10px;
background-color: #f2f2f2;
}
img.toggler {
margin-right: 3px;
margin-top: 3px;
cursor: pointer;
}
div.modindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
div.genindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
/* -- general body styles --------------------------------------------------- */
a.headerlink {
visibility: hidden;
}
div.body p.caption {
text-align: inherit;
}
div.body td {
text-align: left;
}
.field-list ul {
padding-left: 1em;
}
.first {
margin-top: 0 !important;
}
p.rubric {
margin-top: 30px;
font-weight: bold;
}
img.align-left, .figure.align-left, object.align-left {
clear: left;
float: left;
margin-right: 1em;
}
img.align-right, .figure.align-right, object.align-right {
clear: right;
float: right;
margin-left: 1em;
}
img.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left;
}
.align-center {
text-align: center;
}
.align-right {
text-align: right;
}
/* -- sidebars -------------------------------------------------------------- */
div.sidebar {
margin: 0 0 0.5em 1em;
border: 1px solid #ddb;
padding: 7px 7px 0 7px;
background-color: #ffe;
width: 40%;
float: right;
}
p.sidebar-title {
font-weight: bold;
}
/* -- topics ---------------------------------------------------------------- */
div.topic {
border: 1px solid #ccc;
padding: 7px 7px 0 7px;
margin: 10px 0 10px 0;
}
p.topic-title {
font-size: 1.1em;
font-weight: bold;
margin-top: 10px;
}
/* -- admonitions ----------------------------------------------------------- */
div.admonition {
margin-top: 10px;
margin-bottom: 10px;
padding: 7px;
}
div.admonition dt {
font-weight: bold;
}
div.admonition dl {
margin-bottom: 0;
}
p.admonition-title {
margin: 0px 10px 5px 0px;
font-weight: bold;
}
div.body p.centered {
text-align: center;
margin-top: 25px;
}
/* -- tables ---------------------------------------------------------------- */
table.field-list td, table.field-list th {
border: 0 !important;
}
table.footnote td, table.footnote th {
border: 0 !important;
}
th {
text-align: left;
padding-right: 5px;
}
table.citation {
border-left: solid 1px gray;
margin-left: 1px;
}
table.citation td {
border-bottom: none;
}
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
list-style: decimal;
}
ol.loweralpha {
list-style: lower-alpha;
}
ol.upperalpha {
list-style: upper-alpha;
}
ol.lowerroman {
list-style: lower-roman;
}
ol.upperroman {
list-style: upper-roman;
}
dd p {
margin-top: 0px;
}
dd ul, dd table {
margin-bottom: 10px;
}
dd {
margin-top: 3px;
margin-bottom: 10px;
margin-left: 30px;
}
dt:target, .highlighted {
background-color: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
}
.field-list ul {
margin: 0;
padding-left: 1em;
}
.field-list p {
margin: 0;
}
.refcount {
color: #060;
}
.optional {
font-size: 1.3em;
}
.versionmodified {
font-style: italic;
}
.system-message {
background-color: #fda;
padding: 5px;
border: 3px solid red;
}
.footnote:target {
background-color: #ffa;
}
.line-block {
display: block;
margin-top: 1em;
margin-bottom: 1em;
}
.line-block .line-block {
margin-top: 0;
margin-bottom: 0;
margin-left: 1.5em;
}
.guilabel, .menuselection {
font-family: sans-serif;
}
.accelerator {
text-decoration: underline;
}
.classifier {
font-style: oblique;
}
abbr, acronym {
border-bottom: dotted 1px;
cursor: help;
}
/* -- code displays --------------------------------------------------------- */
pre {
overflow: auto;
overflow-y: hidden; /* fixes display issues on Chrome browsers */
}
td.linenos pre {
padding: 5px 0px;
border: 0;
background-color: transparent;
color: #aaa;
}
table.highlighttable {
margin-left: 0.5em;
}
table.highlighttable td {
padding: 0 0.5em 0 0.5em;
}
tt.descname {
background-color: transparent;
font-weight: bold;
# font-size: 1.2em;
}
tt.descclassname {
background-color: transparent;
}
tt.xref, a tt {
background-color: transparent;
# font-weight: bold;
}
h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
background-color: transparent;
}
.viewcode-link {
float: right;
}
.viewcode-back {
float: right;
font-family: sans-serif;
}
div.viewcode-block:target {
margin: -1px -10px;
padding: 0 10px;
}
/* -- math display ---------------------------------------------------------- */
img.math {
vertical-align: middle;
}
div.body div.math p {
text-align: center;
}
span.eqno {
float: right;
}
/* -- printout stylesheet --------------------------------------------------- */
@media print {
div.document,
div.documentwrapper,
div.bodywrapper {
margin: 0 !important;
width: 100%;
}
div.sphinxsidebar,
div.related,
div.footer,
#top-link {
display: none;
}
}


@ -1,3 +1,160 @@
-.highlight {
-    background-color: #ffffff;
-}
+a.toc-backref {
+    color: #333;
+}
h1, h2, h3, h4, h5, h6,
h1 a, h2 a, h3 a, h4 a, h5 a, h6 a {
padding:0 0 0px 0;
}
ul {
padding-bottom: 0px;
}
h1 {
font-weight: bold;
font-size: 32px;
line-height:32px;
text-align: center;
padding-top: 3px;
margin-bottom: 30px;
font-family: Palatino,'Palatino Linotype',Georgia,serif;;
color: #000;
border-bottom: 0px;
}
th.field-name
{
white-space:nowrap;
}
h2 {
margin-top: 50px;
padding-bottom: 5px;
margin-bottom: 30px;
border-bottom: 1px solid;
border-color: #aaa;
font-style: normal;
}
div.section h3 {
font-style: normal;
}
h3 {
font-size: 20px;
margin-top: 40px;
margin-bottom: 0px;
font-weight: bold;
font-style: normal;
}
h3.widgettitle {
font-style: normal;
}
h4 {
font-size:18px;
font-style: normal;
margin-bottom: 0em;
margin-top: 40px;
font-style: italic;
}
h5 {
font-size:16px;
}
h6 {
font-size:15px;
}
.toc-backref {
color: #333;
}
.contents ul {
padding-bottom: 1em;
}
dl.namespace {
display: none;
}
dl dt {
font-weight: normal;
}
table.docutils tbody {
margin: 1em 1em 1em 1em;
}
table.docutils td {
padding: 5pt 5pt 5pt 5pt;
font-size: 14px;
border-left: 0;
border-right: 0;
}
dl pre {
font-size: 14px;
}
table.docutils th {
padding: 5pt 5pt 5pt 5pt;
font-size: 14px;
font-style: normal;
border-left: 0;
border-right: 0;
}
table.docutils tr:first-child td {
#border-top: 1px solid #aaa;
}
.download {
font-family:"Courier New", Courier, mono;
font-weight: normal;
}
dt:target, .highlighted {
background-color: #ccc;
}
p {
padding-bottom: 0px;
}
p.last {
margin-bottom: 0px;
}
dl {
padding: 1em 1em 1em 1em;
background: #fffff0;
border: 1px solid #aaa;
}
dl {
margin-bottom: 10px;
}
table.docutils {
background: #fffff0;
border-collapse: collapse;
border: 1px solid #ddd;
}
dl table.docutils {
border: 0;
}
table.docutils dl {
border: 1px dashed #666;
}

doc/_static/broxygen-extra.js (new empty file)

doc/_static/broxygen.css (new file, 437 lines)

@ -0,0 +1,437 @@
/* Automatically generated. Do not edit. */
#bro-main, #bro-standalone-main {
padding: 0 0 0 0;
position:relative;
z-index:1;
}
#bro-main {
margin-bottom: 2em;
}
#bro-standalone-main {
margin-bottom: 0em;
padding-left: 50px;
padding-right: 50px;
}
#bro-outer {
color: #333;
background: #ffffff;
}
#bro-title {
font-weight: bold;
font-size: 32px;
line-height:32px;
text-align: center;
padding-top: 3px;
margin-bottom: 30px;
font-family: Palatino,'Palatino Linotype',Georgia,serif;;
color: #000;
}
.opening:first-letter {
font-size: 24px;
font-weight: bold;
letter-spacing: 0.05em;
}
.opening {
font-size: 17px;
}
.version {
text-align: right;
font-size: 12px;
color: #aaa;
line-height: 0;
height: 0;
}
.git-info-version {
position: relative;
height: 2em;
top: -1em;
color: #ccc;
float: left;
font-size: 12px;
}
.git-info-date {
position: relative;
height: 2em;
top: -1em;
color: #ccc;
float: right;
font-size: 12px;
}
body {
font-family:Arial, Helvetica, sans-serif;
font-size:15px;
line-height:22px;
color: #333;
margin: 0px;
}
h1, h2, h3, h4, h5, h6,
h1 a, h2 a, h3 a, h4 a, h5 a, h6 a {
padding:0 0 20px 0;
font-weight:bold;
text-decoration:none;
}
div.section h3, div.section h4, div.section h5, div.section h6 {
font-style: italic;
}
h1, h2 {
font-size:27px;
letter-spacing:-1px;
}
h3 {
margin-top: 1em;
font-size:18px;
}
h4 {
font-size:16px;
}
h5 {
font-size:15px;
}
h6 {
font-size:12px;
}
p {
padding:0 0 20px 0;
}
hr {
background:none;
height:1px;
line-height:1px;
border:0;
margin:0 0 20px 0;
}
ul, ol {
margin:0 20px 20px 0;
padding-left:40px;
}
ul.simple, ol.simple {
margin:0 0px 0px 0;
}
blockquote {
margin:0 0 0 40px;
}
strong, dfn {
font-weight:bold;
}
em, dfn {
font-style:italic;
}
sup, sub {
line-height:0;
}
pre {
white-space:pre;
}
pre, code, tt {
font-family:"Courier New", Courier, mono;
}
dl {
margin: 0 0 20px 0;
}
dl dt {
font-weight: bold;
}
dd {
margin:0 0 20px 20px;
}
small {
font-size:75%;
}
a:link,
a:visited,
a:active
{
color: #2a85a7;
}
a:hover
{
color:#c24444;
}
h1, h2, h3, h4, h5, h6,
h1 a, h2 a, h3 a, h4 a, h5 a, h6 a
{
color: #333;
}
hr {
border-bottom:1px solid #ddd;
}
pre {
color: #333;
background: #FFFAE2;
padding: 7px 5px 3px 5px;
margin-bottom: 25px;
margin-top: 0px;
}
ul {
padding-bottom: 5px;
}
h1, h2 {
margin-top: 30px;
}
h1 {
margin-bottom: 50px;
margin-bottom: 20px;
padding-bottom: 5px;
border-bottom: 1px solid;
border-color: #aaa;
}
h2 {
font-size: 24px;
}
pre {
-moz-box-shadow:0 0 6px #ddd;
-webkit-box-shadow:0 0 6px #ddd;
box-shadow:0 0 6px #ddd;
}
a {
text-decoration:none;
}
p {
padding-bottom: 15px;
}
p, dd, li {
text-align: justify;
}
li {
margin-bottom: 5px;
}
#footer .widget_links ul a,
#footer .widget_links ol a
{
color: #ddd;
}
#footer .widget_links ul a:hover,
#footer .widget_links ol a:hover
{
color:#c24444;
}
#footer .widget li {
padding-bottom:10px;
}
#footer .widget_links li {
padding-bottom:1px;
}
#footer .widget li:last-child {
padding-bottom:0;
}
#footer .widgettitle {
color: #ddd;
}
.widget {
margin:0 0 40px 0;
}
.widget, .widgettitle {
font-size:12px;
line-height:18px;
}
.widgettitle {
font-weight:bold;
text-transform:uppercase;
padding:0 0 10px 0;
margin:0 0 20px 0;
line-height:100%;
}
.widget UL, .widget OL {
list-style-type:none;
margin:0;
padding:0;
}
.widget p {
padding:0;
}
.widget li {
padding-bottom:10px;
}
.widget a {
text-decoration:none;
}
#bro-main .widgettitle,
{
color: #333;
}
.widget img.left {
padding:5px 10px 10px 0;
}
.widget img.right {
padding:5px 0 10px 10px;
}
.ads .widgettitle {
margin-right:16px;
}
.widget {
margin-left: 1em;
}
.widgettitle {
color: #333;
}
.widgettitle {
border-bottom:1px solid #ddd;
}
.sidebar-toc ul li {
padding-bottom: 0px;
text-align: left;
list-style-type: square;
list-style-position: inside;
padding-left: 1em;
text-indent: -1em;
}
.sidebar-toc ul li li {
margin-left: 1em;
margin-bottom: 0px;
list-style-type: square;
}
.sidebar-toc ul li li a {
font-size: 8pt;
}
.contents {
padding: 10px;
background: #FFFAE2;
margin: 20px;
}
.topic-title {
font-size: 20px;
font-weight: bold;
padding: 0px 0px 5px 0px;
text-align: center;
padding-top: .5em;
}
.contents li {
margin-bottom: 0px;
list-style-type: square;
}
.contents ul ul li {
margin-left: 0px;
padding-left: 0px;
padding-top: 0em;
font-size: 90%;
list-style-type: square;
font-weight: normal;
}
.contents ul ul ul li {
list-style-type: none;
}
.contents ul ul ul ul li {
display:none;
}
.contents ul li {
padding-top: 1em;
list-style-type: none;
font-weight: bold;
}
.contents ul {
margin-left: 0px;
padding-left: 2em;
margin: 0px 0px 0px 0px;
}
.note, .warning, .error {
margin-left: 2em;
margin-right: 2em;
margin-top: 1.5em;
margin-bottom: 1.5em;
padding: 0.5em 1em 0.5em 1em;
overflow: auto;
border-left: solid 3px #aaa;
font-size: 15px;
color: #333;
}
.admonition p {
margin-left: 1em;
}
.admonition-title {
font-size: 16px;
font-weight: bold;
color: #000;
padding-bottom: 0em;
margin-bottom: .5em;
margin-top: 0em;
}


@ -1,309 +0,0 @@
/*
* default.css_t
* ~~~~~~~~~~~~~
*
* Sphinx stylesheet -- default theme.
*
* :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@import url("basic.css");
/* -- page layout ----------------------------------------------------------- */
body {
font-family: {{ theme_bodyfont }};
font-size: 100%;
background-color: {{ theme_footerbgcolor }};
color: #000;
margin: 0;
padding: 0;
}
div.document {
background-color: {{ theme_sidebarbgcolor }};
}
div.documentwrapper {
float: left;
width: 100%;
}
div.bodywrapper {
margin: 0 0 0 {{ theme_sidebarwidth|toint }}px;
}
div.body {
background-color: {{ theme_bgcolor }};
color: {{ theme_textcolor }};
padding: 0 20px 30px 20px;
}
{%- if theme_rightsidebar|tobool %}
div.bodywrapper {
margin: 0 {{ theme_sidebarwidth|toint }}px 0 0;
}
{%- endif %}
div.footer {
color: {{ theme_footertextcolor }};
background-color: {{ theme_footerbgcolor }};
width: 100%;
padding: 9px 0 9px 0;
text-align: center;
font-size: 75%;
}
div.footer a {
color: {{ theme_footertextcolor }};
text-decoration: underline;
}
div.related {
background-color: {{ theme_relbarbgcolor }};
line-height: 30px;
color: {{ theme_relbartextcolor }};
}
div.related a {
color: {{ theme_relbarlinkcolor }};
}
div.sphinxsidebar {
{%- if theme_stickysidebar|tobool %}
top: 30px;
bottom: 0;
margin: 0;
position: fixed;
overflow: auto;
height: auto;
{%- endif %}
{%- if theme_rightsidebar|tobool %}
float: right;
{%- if theme_stickysidebar|tobool %}
right: 0;
{%- endif %}
{%- endif %}
}
{%- if theme_stickysidebar|tobool %}
/* this is nice, but it leads to hidden headings when jumping
to an anchor */
/*
div.related {
position: fixed;
}
div.documentwrapper {
margin-top: 30px;
}
*/
{%- endif %}
div.sphinxsidebar h3 {
font-family: {{ theme_bodyfont }};
color: {{ theme_sidebartextcolor }};
font-size: 1.4em;
font-weight: normal;
margin: 0;
padding: 0;
}
div.sphinxsidebar h3 a {
color: {{ theme_sidebartextcolor }};
}
div.sphinxsidebar h4 {
font-family: {{ theme_bodyfont }};
color: {{ theme_sidebartextcolor }};
font-size: 1.3em;
font-weight: normal;
margin: 5px 0 0 0;
padding: 0;
}
div.sphinxsidebar p {
color: {{ theme_sidebartextcolor }};
}
div.sphinxsidebar p.topless {
margin: 5px 10px 10px 10px;
}
div.sphinxsidebar ul {
margin: 10px;
padding: 0;
color: {{ theme_sidebartextcolor }};
}
div.sphinxsidebar a {
color: {{ theme_sidebarlinkcolor }};
}
div.sphinxsidebar input {
border: 1px solid {{ theme_sidebarlinkcolor }};
font-family: sans-serif;
font-size: 1em;
}
{% if theme_collapsiblesidebar|tobool %}
/* for collapsible sidebar */
div#sidebarbutton {
background-color: {{ theme_sidebarbtncolor }};
}
{% endif %}
/* -- hyperlink styles ------------------------------------------------------ */
a {
color: {{ theme_linkcolor }};
text-decoration: none;
}
a:visited {
color: {{ theme_visitedlinkcolor }};
text-decoration: none;
}
{% if theme_externalrefs|tobool %}
a.external {
text-decoration: none;
border-bottom: 1px dashed {{ theme_linkcolor }};
}
a.external:hover {
text-decoration: none;
border-bottom: none;
}
a.external:visited {
text-decoration: none;
border-bottom: 1px dashed {{ theme_visitedlinkcolor }};
}
{% endif %}
/* -- body styles ----------------------------------------------------------- */
div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
font-family: {{ theme_bodyfont }};
background-color: #ffffff;
font-weight: normal;
color: {{ theme_headtextcolor }};
border-bottom: 1px solid #aaa;
margin: 20px -20px 10px -20px;
padding: 3px 0 3px 10px;
}
div.body h1 {
font-family: {{ theme_headfont }};
text-align: center;
border-bottom: none;
}
div.body h1 { margin-top: 0; font-size: 200%; }
div.body h2 { font-size: 160%; }
div.body h3 { font-size: 140%; }
div.body h4 { font-size: 120%; }
div.body h5 { font-size: 110%; }
div.body h6 { font-size: 100%; }
a.headerlink {
color: {{ theme_headlinkcolor }};
font-size: 0.8em;
padding: 0 4px 0 4px;
text-decoration: none;
}
a.headerlink:hover {
background-color: {{ theme_headlinkcolor }};
color: white;
}
div.admonition p.admonition-title + p {
display: inline;
}
div.admonition p {
margin-bottom: 5px;
}
div.admonition pre {
margin-bottom: 5px;
}
div.admonition ul, div.admonition ol {
margin-bottom: 5px;
}
div.note {
background-color: #eee;
border: 1px solid #ccc;
}
div.seealso {
background-color: #ffc;
border: 1px solid #ff6;
}
div.warning {
background-color: #ffe4e4;
border: 1px solid #f66;
}
p.admonition-title {
display: inline;
}
p.admonition-title:after {
content: ":";
}
pre {
padding: 5px;
background-color: {{ theme_codebgcolor }};
color: {{ theme_codetextcolor }};
line-height: 120%;
border: 1px solid #ac9;
border-left: none;
border-right: none;
}
tt {
background-color: #ecf0f3;
padding: 0 1px 0 1px;
font-size: 0.95em;
}
th {
background-color: #ede;
}
.warning tt {
background: #efc2c2;
}
.note tt {
background: #d6d6d6;
}
.viewcode-back {
font-family: {{ theme_bodyfont }};
}
div.viewcode-block:target {
background-color: #f4debf;
border-top: 1px solid #ac9;
border-bottom: 1px solid #ac9;
}
th.field-name
{
white-space:nowrap;
}


@ -1,3 +0,0 @@
$(document).ready(function() {
$('.docutils.download').removeClass('download');
});

doc/_static/logo-bro.png (new binary file, 11 KiB)

doc/_static/pygments.css (new file, 58 lines)

@ -0,0 +1,58 @@
.hll { background-color: #ffffcc }
.c { color: #aaaaaa; font-style: italic } /* Comment */
.err { color: #F00000; background-color: #F0A0A0 } /* Error */
.k { color: #0000aa } /* Keyword */
.cm { color: #aaaaaa; font-style: italic } /* Comment.Multiline */
.cp { color: #4c8317 } /* Comment.Preproc */
.c1 { color: #aaaaaa; font-style: italic } /* Comment.Single */
.cs { color: #0000aa; font-style: italic } /* Comment.Special */
.gd { color: #aa0000 } /* Generic.Deleted */
.ge { font-style: italic } /* Generic.Emph */
.gr { color: #aa0000 } /* Generic.Error */
.gh { color: #000080; font-weight: bold } /* Generic.Heading */
.gi { color: #00aa00 } /* Generic.Inserted */
.go { color: #888888 } /* Generic.Output */
.gp { color: #555555 } /* Generic.Prompt */
.gs { font-weight: bold } /* Generic.Strong */
.gu { color: #800080; font-weight: bold } /* Generic.Subheading */
.gt { color: #aa0000 } /* Generic.Traceback */
.kc { color: #0000aa } /* Keyword.Constant */
.kd { color: #0000aa } /* Keyword.Declaration */
.kn { color: #0000aa } /* Keyword.Namespace */
.kp { color: #0000aa } /* Keyword.Pseudo */
.kr { color: #0000aa } /* Keyword.Reserved */
.kt { color: #00aaaa } /* Keyword.Type */
.m { color: #009999 } /* Literal.Number */
.s { color: #aa5500 } /* Literal.String */
.na { color: #1e90ff } /* Name.Attribute */
.nb { color: #00aaaa } /* Name.Builtin */
.nc { color: #00aa00; text-decoration: underline } /* Name.Class */
.no { color: #aa0000 } /* Name.Constant */
.nd { color: #888888 } /* Name.Decorator */
.ni { color: #800000; font-weight: bold } /* Name.Entity */
.nf { color: #00aa00 } /* Name.Function */
.nn { color: #00aaaa; text-decoration: underline } /* Name.Namespace */
.nt { color: #1e90ff; font-weight: bold } /* Name.Tag */
.nv { color: #aa0000 } /* Name.Variable */
.ow { color: #0000aa } /* Operator.Word */
.w { color: #bbbbbb } /* Text.Whitespace */
.mf { color: #009999 } /* Literal.Number.Float */
.mh { color: #009999 } /* Literal.Number.Hex */
.mi { color: #009999 } /* Literal.Number.Integer */
.mo { color: #009999 } /* Literal.Number.Oct */
.sb { color: #aa5500 } /* Literal.String.Backtick */
.sc { color: #aa5500 } /* Literal.String.Char */
.sd { color: #aa5500 } /* Literal.String.Doc */
.s2 { color: #aa5500 } /* Literal.String.Double */
.se { color: #aa5500 } /* Literal.String.Escape */
.sh { color: #aa5500 } /* Literal.String.Heredoc */
.si { color: #aa5500 } /* Literal.String.Interpol */
.sx { color: #aa5500 } /* Literal.String.Other */
.sr { color: #009999 } /* Literal.String.Regex */
.s1 { color: #aa5500 } /* Literal.String.Single */
.ss { color: #0000aa } /* Literal.String.Symbol */
.bp { color: #00aaaa } /* Name.Builtin.Pseudo */
.vc { color: #aa0000 } /* Name.Variable.Class */
.vg { color: #aa0000 } /* Name.Variable.Global */
.vi { color: #aa0000 } /* Name.Variable.Instance */
.il { color: #009999 } /* Literal.Number.Integer.Long */


@ -1,64 +0,0 @@
// make literal blocks corresponding to identifier initial values
// hidden by default
$(document).ready(function() {
var showText='(Show Value)';
var hideText='(Hide Value)';
var is_visible = false;
// select field-list tables that come before a literal block
tables = $('.highlight-python').prev('table.docutils.field-list');
tables.find('th.field-name').filter(function(index) {
return $(this).html() == "Default :";
}).next().append('<a href="#" class="toggleLink">'+showText+'</a>');
// hide all literal blocks that follow a field-list table
tables.next('.highlight-python').hide();
// register handler for clicking a "toggle" link
$('a.toggleLink').click(function() {
is_visible = !is_visible;
$(this).html( (!is_visible) ? showText : hideText);
// the link is inside a <table><tbody><tr><td> and the next
// literal block after the table is the literal block that we want
// to show/hide
$(this).parent().parent().parent().parent().next('.highlight-python').slideToggle('fast');
// override default link behavior
return false;
});
});
// make "Private Interface" sections hidden by default
$(document).ready(function() {
var showText='Show Private Interface (for internal use)';
var hideText='Hide Private Interface';
var is_visible = false;
// insert show/hide links
$('#private-interface').children(":first-child").after('<a href="#" class="privateToggle">'+showText+'</a>');
// wrap all sub-sections in a new div that can be hidden/shown
$('#private-interface').children(".section").wrapAll('<div class="private" />');
// hide the given class
$('.private').hide();
// register handler for clicking a "toggle" link
$('a.privateToggle').click(function() {
is_visible = !is_visible;
$(this).html( (!is_visible) ? showText : hideText);
$('.private').slideToggle('fast');
// override default link behavior
return false;
});
});


@ -1,10 +1,113 @@
{% extends "!layout.html" %}

{% block extrahead %}
-<link rel="stylesheet" type="text/css" href="http://www.bro-ids.org/css/bro-ids.css" />
-<link rel="stylesheet" type="text/css" href="http://www.bro-ids.org/css/pygments.css" />
+<link rel="stylesheet" type="text/css" href="{{ pathto('_static/broxygen.css', 1) }}"></script>
+<link rel="stylesheet" type="text/css" href="{{ pathto('_static/960.css', 1) }}"></script>
+<link rel="stylesheet" type="text/css" href="{{ pathto('_static/pygments.css', 1) }}"></script>
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/broxygen-extra.css', 1) }}"></script>
+<script type="text/javascript" src="{{ pathto('_static/download.js', 1) }}"></script>
+<script type="text/javascript" src="{{ pathto('_static/broxygen-extra.js', 1) }}"></script>
{% endblock %}

+{% block header %}
+<iframe src="http://www.bro-ids.org/frames/header-no-logo.html" width="100%" height="100px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
+</iframe>
+{% endblock %}

{% block relbar2 %}{% endblock %}
{% block relbar1 %}{% endblock %}
{% block content %}
<div id="bro-main" class="clearfix">
<div class="container_12">
<div class="grid_9">
<div>
{{ relbar() }}
</div>
<div class="body">
{% block body %}
{% endblock %}
</div>
</div>
<!-- Sidebar -->
<div class="grid_3 omega">
<div>
<img id="logo" src="{{pathto('_static/logo-bro.png', 1)}}" alt="Logo" />
</div>
<br />
<div class="widget sidebar-toc">
<h3 class="widgettitle">
Table of Contents
</h3>
<p>
<!-- <ul id="sidebar-toc"></ul> -->
<ul>{{toc}}</ul>
</p>
</div>
{% if next %}
<div class="widget">
<h3 class="widgettitle">
Next Page
</h3>
<p>
<a href="{{ next.link|e }}">{{ next.title }}</a>
</p>
</div>
{% endif %}
{% if prev %}
<div class="widget">
<h3 class="widgettitle">
Previous Page
</h3>
<p>
<a href="{{ prev.link|e }}">{{ prev.title }}</a>
</p>
</div>
{% endif %}
{%- if pagename != "search" %}
<div id="searchbox" style="display: none" class="widget">
<h3 class="widgettitle">{{ _('Search') }}</h3>
<form class="search" action="{{ pathto('search') }}" method="get">
<input type="text" name="q" />
<input type="submit" value="{{ _('Search') }}" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
{%- endif %}
</div>
</div>
<div class="container_12">
<div class="grid_12 alpha omega">
<div class="center">
<small>
Copyright {{ copyright }}.
Last updated on {{ last_updated }}.
Created using <a href="http://sphinx.pocoo.org/">Sphinx</a> {{ sphinx_version }}.
</small>
</div>
</div>
</div>
</div>
{% endblock %}
{% block footer %}
<iframe src="http://www.bro-ids.org/frames/footer.html" width="100%" height="420px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}


@ -49,6 +49,7 @@ with open(group_list, 'r') as f_group_list:
if not os.path.exists(os.path.dirname(group_file)):
    os.makedirs(os.path.dirname(group_file))

with open(group_file, 'w') as f_group_file:
+   f_group_file.write(":orphan:\n\n")
    title = "Package Index: %s\n" % os.path.dirname(group)
    f_group_file.write(title);
    for n in range(len(title)):


@ -1,62 +0,0 @@
#!/usr/bin/env python
#
# Derived from docutils standard rst2html.py.
#
# $Id: rst2html.py 4564 2006-05-21 20:44:42Z wiemann $
# Author: David Goodger <goodger@python.org>
# Copyright: This module has been placed in the public domain.
#
#
# Extension: we add two dummy directives "code" and "console" to be
# compatible with Bro's web site setup.
try:
import locale
locale.setlocale(locale.LC_ALL, '')
except:
pass
import textwrap
from docutils.core import publish_cmdline, default_description
from docutils import nodes
from docutils.parsers.rst import directives, Directive
from docutils.parsers.rst.directives.body import LineBlock
class Literal(Directive):
#max_line_length = 68
max_line_length = 0
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
has_content = True
def wrapped_content(self):
content = []
if Literal.max_line_length:
for line in self.content:
content += textwrap.wrap(line, Literal.max_line_length, subsequent_indent=" ")
else:
content = self.content
return u'\n'.join(content)
def run(self):
self.assert_has_content()
content = self.wrapped_content()
literal = nodes.literal_block(content, content)
return [literal]
directives.register_directive('code', Literal)
directives.register_directive('console', Literal)
description = ('Generates (X)HTML documents from standalone reStructuredText '
'sources. ' + default_description)
publish_cmdline(writer_name='html', description=description)


@ -0,0 +1 @@
../../../aux/binpac/README


@ -0,0 +1 @@
../../../aux/bro-aux/README


@ -0,0 +1 @@
../../../aux/broccoli/bindings/broccoli-ruby/README


@ -0,0 +1 @@
../../../aux/broccoli/doc/broccoli-manual.rst


@ -24,7 +24,7 @@ sys.path.insert(0, os.path.abspath('sphinx-sources/ext'))
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['bro', 'rst_directive']
+extensions = ['bro', 'rst_directive', 'sphinx.ext.todo', 'adapt-toc']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['sphinx-sources/_templates', 'sphinx-sources/_static']

@ -40,7 +40,7 @@ master_doc = 'index'

# General information about the project.
project = u'Bro'
-copyright = u'2011, The Bro Project'
+copyright = u'2012, The Bro Project'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the

@ -90,44 +90,20 @@ pygments_style = 'sphinx'

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
-html_theme = 'default'
+html_theme = 'basic'
html_last_updated_fmt = '%B %d, %Y'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
-html_theme_options = {
-    "rightsidebar": "true",
-    "stickysidebar": "true",
-    "externalrefs": "false",
-    "footerbgcolor": "#333",
-    "footertextcolor": "#ddd",
-    "sidebarbgcolor": "#ffffff",
-    #"sidebarbtncolor": "",
-    "sidebartextcolor": "#333",
-    "sidebarlinkcolor": "#2a85a7",
-    "relbarbgcolor": "#ffffff",
-    "relbartextcolor": "#333",
-    "relbarlinkcolor": "#2a85a7",
-    "bgcolor": "#ffffff",
-    "textcolor": "#333",
-    "linkcolor": "#2a85a7",
-    "visitedlinkcolor": "#2a85a7",
-    "headbgcolor": "#f0f0f0",
-    "headtextcolor": "#000",
-    "headlinkcolor": "#2a85a7",
-    "codebgcolor": "#FFFAE2",
-    #"codetextcolor": "",
-    "bodyfont": "Arial, Helvetica, sans-serif",
-    "headfont": "Palatino,'Palatino Linotype',Georgia,serif",
-}
+html_theme_options = { }

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
-# "<project> v<release> documentation".
+# "<project> v<release> Documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.

@ -193,6 +169,7 @@ html_sidebars = {

# Output file base name for HTML help builder.
htmlhelp_basename = 'Broxygen'

+html_add_permalinks = None

# -- Options for LaTeX output --------------------------------------------------

@ -232,7 +209,6 @@ latex_documents = [

# If false, no module index is generated.
#latex_domain_indices = True

# -- Options for manual page output --------------------------------------------

# One entry per manual page. List of tuples
('index', 'bro', u'Bro Documentation',
 [u'The Bro Project'], 1)
]

+# -- Options for todo plugin --------------------------------------------
+todo_include_todos=True

doc/ext/adapt-toc.py (new file, 29 lines)

@ -0,0 +1,29 @@
import sys
import re

# Removes the first TOC level, which is just the page title.
def process_html_toc(app, pagename, templatename, context, doctree):
    if not "toc" in context:
        return

    toc = context["toc"]

    lines = toc.strip().split("\n")
    lines = lines[2:-2]

    toc = "\n".join(lines)
    toc = "<ul>" + toc

    context["toc"] = toc

    # print >>sys.stderr, pagename
    # print >>sys.stderr, context["toc"]
    # print >>sys.stderr, "-----"
    # print >>sys.stderr, toc
    # print >>sys.stderr, "===="

def setup(app):
    app.connect('html-page-context', process_html_toc)


@ -257,6 +257,9 @@ class BroDomain(Domain):
            objects[objtype, target],
            objtype + '-' + target,
            contnode, target + ' ' + objtype)
+       else:
+           self.env.warn(fromdocname,
+               'unknown target for ":bro:%s:`%s`"' % (typ, target))

    def get_objects(self):
        for (typ, name), docname in self.data['objects'].iteritems():


@ -28,6 +28,23 @@ Here are some pointers to more information:
Lothar Braun et al. evaluates packet capture performance on
commodity hardware
Are there any gotchas regarding interface configuration for live capture? Or why might I be seeing abnormally large packets much greater than interface MTU?
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Some NICs offload the reassembly of traffic into "superpackets" so that
fewer packets are passed up the stack (e.g. "TCP segmentation offload"
or "generic segmentation offload"). The result is that the capturing
application observes packets much larger than the MTU size of the
interface they were captured from; this can also interfere with the
maximum packet capture length, ``snaplen``, so it's a good idea to
disable an interface's offloading features.
You can use the ``ethtool`` program on Linux to view and disable
offloading features of an interface. See this page for more explicit
directions:
http://securityonion.blogspot.com/2011/10/when-is-full-packet-capture-not-full.html
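For convenience, a small hypothetical wrapper can report and, if requested, turn off the usual offload features via ``ethtool``; it assumes Linux, an installed ``ethtool``, and that the interface name is given on the command line (``eth0`` is only a fallback default)::

    #!/usr/bin/env python
    # Hypothetical helper around ethtool; not part of the Bro distribution.
    # Usage: offload.py <interface> [--disable]
    import subprocess
    import sys

    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"

    # Show the current offload settings (tso = TCP segmentation offload,
    # gso/gro = generic segmentation/receive offload, lro = large receive offload).
    subprocess.call(["ethtool", "-k", iface])

    if "--disable" in sys.argv:
        # Turn the offload features off so captured packets match what was on the wire.
        subprocess.call(["ethtool", "-K", iface,
                         "tso", "off", "gso", "off", "gro", "off", "lro", "off"])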
What does an error message like ``internal error: NB-DNS error`` mean?
-----------------------------------------------------------------------
@ -35,6 +52,19 @@ That often means that DNS is not set up correctly on the system
running Bro. Try verifying from the command line that DNS lookups
work, e.g., ``host www.google.com``.
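If a shell is not handy, the same sanity check can be done from Python (a hypothetical snippet, equivalent in spirit to ``host www.google.com``)::

    # Hypothetical DNS sanity check: if this fails, NB-DNS errors from Bro are
    # most likely a symptom of a broken resolver setup on the host.
    import socket

    try:
        print(socket.gethostbyname("www.google.com"))
    except socket.gaierror as e:
        print("DNS lookup failed: %s" % e)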
I am using OpenBSD and having problems installing Bro?
------------------------------------------------------
One potential issue is that the top-level Makefile may not work with
OpenBSD's default make program, in which case you can either install
the ``gmake`` package and use it instead or first change into the
``build/`` directory before doing either ``make`` or ``make install``
such that the CMake-generated Makefiles are used directly.
Generally, please note that we do not regularly test OpenBSD builds.
We appreciate any patches that improve Bro's support for this
platform.
Usage
=====
@ -42,34 +72,30 @@ Usage
How can I identify backscatter?
-------------------------------

-Identifying backscatter via connections labeled as ``OTH`` is not
-a reliable means to detect backscatter. Use rather the following
-procedure:
-
-* Enable connection history via ``redef record_state_history=T`` to
-  track all control/data packet types in connection logs.
-
-* Backscatter is now visible in terms of connections that never had an
-  initial ``SYN`` but started instead with a ``SYN-ACK`` or ``RST``
-  (though this latter generally is just discarded).
+Identifying backscatter via connections labeled as ``OTH`` is not a reliable
+means to detect backscatter. Backscatter is however visible by interpreting
+the contents of the ``history`` field in the ``conn.log`` file. The basic idea
+is to watch for connections that never had an initial ``SYN`` but started
+instead with a ``SYN-ACK`` or ``RST`` (though this latter generally is just
+discarded). Here are some history fields which provide backscatter examples:
+``hAFf``, ``r``. Refer to the conn protocol analysis scripts to interpret the
+individual character meanings in the history field.
Is there help for understanding Bro's resource consumption?
-----------------------------------------------------------
There are two scripts that collect statistics on resource usage:
``misc/stats.bro`` and ``misc/profiling.bro``. The former is quite
lightweight, while the latter should only be used for debugging.
How can I capture packets as an unprivileged user?
--------------------------------------------------
Normally, unprivileged users cannot capture packets from a network interface,
which means they would not be able to use Bro to read/analyze live traffic.
However, there are operating system specific ways to enable packet capture
permission for non-root users, which is worth doing in the context of using
Bro to monitor live traffic.
With Linux Capabilities
^^^^^^^^^^^^^^^^^^^^^^^

View file

@ -1,8 +1,12 @@
.. Bro documentation master file
=================
Bro Documentation
=================
Guides
------
.. toctree::
:maxdepth: 1
@ -37,7 +41,6 @@ Script Reference
.. toctree::
:maxdepth: 1
scripts/builtins
scripts/bifs
scripts/packages
@ -46,16 +49,29 @@ Script Reference
Other Bro Components
--------------------
The following are snapshots of documentation for components that come
with this version of Bro (|version|). Since they can also be used
independently, see the `download page
<http://bro-ids.org/download/index.html>`_ for documentation of any
current, independent component releases.
.. toctree::
:maxdepth: 1
BinPAC - A protocol parser generator <components/binpac/README>
Broccoli - The Bro Client Communication Library (README) <components/broccoli/README>
Broccoli - User Manual <components/broccoli/broccoli-manual>
Broccoli Python Bindings <components/broccoli-python/README>
Broccoli Ruby Bindings <components/broccoli-ruby/README>
BroControl - Interactive Bro management shell <components/broctl/README>
Bro-Aux - Small auxiliary tools for Bro <components/bro-aux/README>
BTest - A unit testing framework <components/btest/README>
Capstats - Command-line packet statistic tool <components/capstats/README>
PySubnetTree - Python module for CIDR lookups <components/pysubnettree/README>
trace-summary - Script for generating break-downs of network traffic <components/trace-summary/README>
The `Broccoli API Reference <broccoli-api/index.html>`_ may also be of
interest.
Other Indices and References
----------------------------

View file

@ -43,13 +43,14 @@ Basics
======
The data fields that a stream records are defined by a record type
specified when it is created. Let's look at the script generating Bro's
connection summaries as an example,
:doc:`scripts/base/protocols/conn/main`. It defines a record
:bro:type:`Conn::Info` that lists all the fields that go into
``conn.log``, each marked with a ``&log`` attribute indicating that it
is part of the information written out. To write a log record, the
script then passes an instance of :bro:type:`Conn::Info` to the logging
framework's :bro:id:`Log::write` function.
By default, each stream automatically gets a filter named ``default``
that generates the normal output by recording all record fields into a
@ -66,7 +67,7 @@ To create new a new output file for an existing stream, you can add a
new filter. A filter can, e.g., restrict the set of fields being
logged:
.. code:: bro
event bro_init()
{
@ -85,14 +86,15 @@ Note the fields that are set for the filter:
``path``
The filename for the output file, without any extension (which
may be automatically added by the writer). Default path values
are generated by taking the stream's ID and munging it slightly.
:bro:enum:`Conn::LOG` is converted into ``conn``,
:bro:enum:`PacketFilter::LOG` is converted into
``packet_filter``, and :bro:enum:`Notice::POLICY_LOG` is
converted into ``notice_policy``.
``include``
A set limiting the fields to the ones given. The names
correspond to those in the :bro:type:`Conn::Info` record, with
sub-records unrolled by concatenating fields (separated with
dots).
@ -158,10 +160,10 @@ further for example to log information by subnets or even by IP
address. Be careful, however, as it is easy to create many files very
quickly ...
.. sidebar:: A More Generic Path Function
The ``split_log`` method has one drawback: it can be used
only with the :bro:enum:`Conn::LOG` stream as the record type is hardcoded
into its argument list. However, Bro allows a more generic
variant:
@ -201,8 +203,8 @@ Extending
You can add further fields to a log stream by extending the record
type that defines its content. Let's say we want to add a boolean
field ``is_private`` to :bro:type:`Conn::Info` that indicates whether the
originator IP address is part of the :rfc:`1918` space:
.. code:: bro
@ -234,10 +236,10 @@ Notes:
- For extending logs this way, one needs a bit of knowledge about how
the script that creates the log stream is organizing its state
keeping. Most of the standard Bro scripts attach their log state to
the :bro:type:`connection` record where it can then be accessed, just
as the ``c$conn`` above. For example, the HTTP analysis adds a field
``http`` of type :bro:type:`HTTP::Info` to the :bro:type:`connection`
record. See the script reference for more information.
- When extending records as shown above, the new fields must always be
declared either with a ``&default`` value or as ``&optional``.
@ -251,8 +253,8 @@ Sometimes it is helpful to do additional analysis of the information
being logged. For these cases, a stream can specify an event that will
be generated every time a log record is written to it. All of Bro's
default log streams define such an event. For example, the connection
log stream raises the event :bro:id:`Conn::log_conn`. You
could use that for example for flagging when a connection to a
specific destination exceeds a certain duration:
.. code:: bro
@ -279,11 +281,32 @@ real-time.
Rotation
--------
By default, no log rotation occurs, but it's globally controllable for all
filters by redefining the :bro:id:`Log::default_rotation_interval` option:
.. code:: bro
redef Log::default_rotation_interval = 1 hr;
Or specifically for certain :bro:type:`Log::Filter` instances by setting
their ``interv`` field. Here's an example of changing just the
:bro:enum:`Conn::LOG` stream's default filter rotation.
.. code:: bro
event bro_init()
{
local f = Log::get_filter(Conn::LOG, "default");
f$interv = 1 min;
Log::remove_filter(Conn::LOG, "default");
Log::add_filter(Conn::LOG, f);
}
ASCII Writer Configuration
--------------------------
The ASCII writer has a number of options for customizing the format of
its output, see :doc:`scripts/base/frameworks/logging/writers/ascii`.
Adding Streams
==============
@ -321,8 +344,8 @@ example for the ``Foo`` module:
Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo]);
}
You can also add the state to the :bro:type:`connection` record to make
it easily accessible across event handlers:
.. code:: bro
@ -330,7 +353,7 @@ accessible across event handlers:
foo: Info &optional;
}
Now you can use the :bro:id:`Log::write` method to output log records and
save the logged ``Foo::Info`` record into the connection record:
.. code:: bro
@ -343,9 +366,9 @@ save the logged ``Foo::Info`` record into the connection record:
}
See the existing scripts for how to work with such a new connection
field. A simple example is :doc:`scripts/base/protocols/syslog/main`.
When you are developing scripts that add data to the :bro:type:`connection`
record, care must be given to when and how long data is stored.
Normally data saved to the connection record will remain there for the
duration of the connection and from a practical perspective it's not

View file

@ -29,17 +29,18 @@ definitions of what constitutes an attack or even a compromise differ quite a
bit between environments, and activity deemed malicious at one site might be
fully acceptable at another.
Whenever one of Bro's analysis scripts sees something potentially
interesting it flags the situation by calling the :bro:see:`NOTICE`
function and giving it a single :bro:see:`Notice::Info` record. A Notice
has a :bro:see:`Notice::Type`, which reflects the kind of activity that
has been seen, and it is usually also augmented with further context
about the situation.
More information about raising notices can be found in the `Raising Notices`_
section.
Once a notice is raised, it can have any number of actions applied to it by
the :bro:see:`Notice::policy` set which is described in the `Notice Policy`_
section below. Such actions can be to send a mail to the configured
address(es) or to simply ignore the notice. Currently, the following actions
are defined:
@ -52,20 +53,20 @@ are defined:
- Description
* - Notice::ACTION_LOG
- Write the notice to the :bro:see:`Notice::LOG` logging stream.
* - Notice::ACTION_ALARM
- Log into the :bro:see:`Notice::ALARM_LOG` stream which will rotate
hourly and email the contents to the email address or addresses
defined in the :bro:see:`Notice::mail_dest` variable.
* - Notice::ACTION_EMAIL
- Send the notice in an email to the email address or addresses given in
the :bro:see:`Notice::mail_dest` variable.
* - Notice::ACTION_PAGE
- Send an email to the email address or addresses given in the
:bro:see:`Notice::mail_page_dest` variable.
* - Notice::ACTION_NO_SUPPRESS
- This action will disable the built in notice suppression for the
@ -82,15 +83,17 @@ Processing Notices
Notice Policy
*************
The predefined set :bro:see:`Notice::policy` provides the mechanism for
applying actions and other behavior modifications to notices. Each entry
of :bro:see:`Notice::policy` is a record of the type
:bro:see:`Notice::PolicyItem` which defines a condition to be matched
against all raised notices and one or more of a variety of behavior
modifiers. The notice policy is defined by adding any number of
:bro:see:`Notice::PolicyItem` records to the :bro:see:`Notice::policy`
set.
Here's a simple example which tells Bro to send an email for all notices of
type :bro:see:`SSH::Login` if the server is 10.0.0.1:
.. code:: bro
@ -113,11 +116,11 @@ flexibility due to having access to Bro's full programming language.
Predicate Field
^^^^^^^^^^^^^^^
The :bro:see:`Notice::PolicyItem` record type has a field named ``$pred``
which defines the entry's condition in the form of a predicate written
as a Bro function. The function is passed the notice as a
:bro:see:`Notice::Info` record and it returns a boolean value indicating
if the entry is applicable to that particular notice.
.. note::
@ -125,14 +128,14 @@ particular notice.
(``T``) since an implicit false (``F``) value would never be used.
Bro evaluates the predicates of each entry in the order defined by the
``$priority`` field in :bro:see:`Notice::PolicyItem` records. The valid
values are 0-10 with 10 being earliest evaluated. If ``$priority`` is
omitted, the default priority is 5.
Behavior Modification Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There is a set of fields in the :bro:see:`Notice::PolicyItem` record type that
indicate ways that either the notice or notice processing should be modified
if the predicate field (``$pred``) evaluated to true (``T``). Those fields are
explained in more detail in the following table.
@ -146,8 +149,8 @@ explained in more detail in the following table.
- Example
* - ``$action=<Notice::Action>``
- Each :bro:see:`Notice::PolicyItem` can have a single action
applied to the notice with this field.
- ``$action = Notice::ACTION_EMAIL``
* - ``$suppress_for=<interval>``
@ -162,9 +165,9 @@ explained in more detail in the following table.
- This field can be used for modification of the notice policy
evaluation. To stop processing of notice policy items before
evaluating all of them, set this field to ``T`` and make the ``$pred``
field return ``T``. :bro:see:`Notice::PolicyItem` records defined at
a higher priority as defined by the ``$priority`` field will still be
evaluated but those at a lower priority won't.
- ``$halt = T``
@ -186,11 +189,11 @@ Notice Policy Shortcuts
Although the notice framework provides a great deal of flexibility and
configurability there are many times that the full expressiveness isn't needed
and actually becomes a hindrance to achieving results. The framework provides
a default :bro:see:`Notice::policy` suite as a way of giving users the
shortcuts to easily apply many common actions to notices.
These are implemented as sets and tables indexed with a
:bro:see:`Notice::Type` enum value. The following table shows and describes
all of the variables available for shortcut configuration of the notice
framework.
@ -201,40 +204,44 @@ framework.
* - Variable name
- Description
* - :bro:see:`Notice::ignored_types`
- Adding a :bro:see:`Notice::Type` to this set results in the notice
being ignored. It won't have any other action applied to it, not even
:bro:see:`Notice::ACTION_LOG`.
* - :bro:see:`Notice::emailed_types`
- Adding a :bro:see:`Notice::Type` to this set results in
:bro:see:`Notice::ACTION_EMAIL` being applied to the notices of
that type.
* - :bro:see:`Notice::alarmed_types`
- Adding a :bro:see:`Notice::Type` to this set results in
:bro:see:`Notice::ACTION_ALARM` being applied to the notices of
that type.
* - :bro:see:`Notice::not_suppressed_types`
- Adding a :bro:see:`Notice::Type` to this set results in that notice
no longer undergoing the normal notice suppression that would
take place. Be careful when using this in production as it could
result in a dramatic increase in the number of notices being
processed.
* - :bro:see:`Notice::type_suppression_intervals`
- This is a table indexed on :bro:see:`Notice::Type` and yielding an
interval. It can be used as an easy way to extend the default
suppression interval for an entire :bro:see:`Notice::Type`
without having to create a whole :bro:see:`Notice::policy` entry
and setting the ``$suppress_for`` field.
Raising Notices
---------------
A script should raise a notice for any occurrence that a user may want
to be notified about or take action on. For example, whenever the base
SSH analysis script sees an SSH session where it is heuristically
guessed to be a successful login, it raises a Notice of the type
:bro:see:`SSH::Login`. The code in the base SSH analysis script looks
like this:
.. code:: bro
@ -242,10 +249,10 @@ the base SSH analysis script looks like this:
$msg="Heuristically detected successful SSH login.",
$conn=c]);
:bro:see:`NOTICE` is a normal function in the global namespace which
wraps a function within the ``Notice`` namespace. It takes a single
argument of the :bro:see:`Notice::Info` record type. The most common
fields used when raising notices are described in the following table:
.. list-table::
:widths: 32 40
@ -295,9 +302,10 @@ are described in the following table:
* - ``$suppress_for``
- This field can be set if there is a natural suppression interval for
the notice that may be different than the default value. The
value set to this field can also be modified by a user's
:bro:see:`Notice::policy` so the value is not set permanently
and unchangeably.
When writing Bro scripts which raise notices, some thought should be given to
what the notice represents and what data should be provided to give a consumer
@ -325,7 +333,7 @@ The notice framework supports suppression for notices if the author of the
script that is generating the notice has indicated to the notice framework how
to identify notices that are intrinsically the same. Identification of these
"intrinsically duplicate" notices is implemented with an optional field in
:bro:see:`Notice::Info` records named ``$identifier`` which is a simple string.
If the ``$identifier`` and ``$type`` fields are the same for two notices, the
notice framework actually considers them to be the same thing and can use that
information to suppress duplicates for a configurable period of time.
@ -337,12 +345,13 @@ information to suppress duplicates for a configurable period of time.
could be completely legitimate usage if no notices could ever be
considered to be duplicates.
The ``$identifier`` field is typically comprised of several pieces of
data related to the notice that when combined represent a unique
instance of that notice. Here is an example of the script
:doc:`scripts/policy/protocols/ssl/validate-certs` raising a notice
for session negotiations where the certificate or certificate chain did
not validate successfully against the available certificate authority
certificates.
.. code:: bro
@ -369,7 +378,7 @@ it's assumed that the script author who is raising the notice understands the
full problem set and edge cases of the notice which may not be readily
apparent to users. If users don't want the suppression to take place or simply
want a different interval, they can always modify it with the
:bro:see:`Notice::policy`.
Extending Notice Framework

View file

@ -31,19 +31,19 @@ See the `bro downloads page`_ for currently supported/targeted platforms.
* RPM
.. console::
sudo yum localinstall Bro-*.rpm
* DEB
.. console::
sudo gdebi Bro-*.deb
* MacOS Disk Image with Installer
Just open the ``Bro-*.dmg`` and then run the ``.pkg`` installer.
Everything installed by the package will go into ``/opt/bro``.
The primary install prefix for binary packages is ``/opt/bro``.
@ -56,26 +56,32 @@ Building From Source
Required Dependencies
~~~~~~~~~~~~~~~~~~~~~
The following dependencies are required to build Bro:
* RPM/RedHat-based Linux:
.. console::
sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel file-devel
* DEB/Debian-based Linux:
.. console::
sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev libmagic-dev
* FreeBSD
Most required dependencies should come with a minimal FreeBSD install
except for the following.
.. console::
sudo pkg_add -r bash cmake swig bison python
Note that ``bash`` needs to be in ``PATH``, which by default it is
not. The FreeBSD package installs the binary into
``/usr/local/bin``.
* Mac OS X
@ -99,21 +105,21 @@ sending emails.
* RPM/RedHat-based Linux:
.. console::
sudo yum install GeoIP-devel sendmail
* DEB/Debian-based Linux:
.. console::
sudo apt-get install libgeoip-dev sendmail
* Ports-based FreeBSD
.. console::
sudo pkg_add -r GeoIP
sendmail is typically already available.

View file

@ -73,12 +73,14 @@ macro(REST_TARGET srcDir broInput)
elseif (${extension} STREQUAL ".bif.bro")
set(group bifs)
elseif (relDstDir)
set(group ${relDstDir}/index)
# add package index to master package list if not already in it
# and if a __load__.bro exists in the original script directory
list(FIND MASTER_PKG_LIST ${relDstDir} _found)
if (_found EQUAL -1)
if (EXISTS ${CMAKE_SOURCE_DIR}/scripts/${relDstDir}/__load__.bro)
list(APPEND MASTER_PKG_LIST ${relDstDir})
endif ()
endif ()
else ()
set(group "")
@ -137,11 +139,15 @@ file(WRITE ${MASTER_POLICY_INDEX} "${MASTER_POLICY_INDEX_TEXT}")
# policy/packages.rst file
set(MASTER_PKG_INDEX_TEXT "")
foreach (pkg ${MASTER_PKG_LIST})
set(MASTER_PKG_INDEX_TEXT
    "${MASTER_PKG_INDEX_TEXT}\n:doc:`${pkg} <${pkg}/index>`\n")
if (EXISTS ${CMAKE_SOURCE_DIR}/scripts/${pkg}/README)
file(STRINGS ${CMAKE_SOURCE_DIR}/scripts/${pkg}/README pkgreadme)
foreach (line ${pkgreadme})
set(MASTER_PKG_INDEX_TEXT "${MASTER_PKG_INDEX_TEXT}\n   ${line}")
endforeach ()
set(MASTER_PKG_INDEX_TEXT "${MASTER_PKG_INDEX_TEXT}\n")
endif ()
endforeach ()
file(WRITE ${MASTER_PACKAGE_INDEX} "${MASTER_PKG_INDEX_TEXT}")

View file

@ -34,6 +34,7 @@ rest_target(${psd} base/frameworks/dpd/main.bro)
rest_target(${psd} base/frameworks/intel/main.bro)
rest_target(${psd} base/frameworks/logging/main.bro)
rest_target(${psd} base/frameworks/logging/postprocessors/scp.bro)
rest_target(${psd} base/frameworks/logging/postprocessors/sftp.bro)
rest_target(${psd} base/frameworks/logging/writers/ascii.bro)
rest_target(${psd} base/frameworks/metrics/cluster.bro)
rest_target(${psd} base/frameworks/metrics/main.bro)
@ -102,6 +103,7 @@ rest_target(${psd} policy/misc/analysis-groups.bro)
rest_target(${psd} policy/misc/capture-loss.bro)
rest_target(${psd} policy/misc/loaded-scripts.bro)
rest_target(${psd} policy/misc/profiling.bro)
rest_target(${psd} policy/misc/stats.bro)
rest_target(${psd} policy/misc/trim-trace-file.bro)
rest_target(${psd} policy/protocols/conn/known-hosts.bro)
rest_target(${psd} policy/protocols/conn/known-services.bro)

View file

@ -1,6 +1,6 @@
This directory contains scripts and templates that can be used to automate
the generation of Bro script documentation. Several build targets are defined
by CMake and available in the top-level Makefile:
``restdoc``

View file

@ -6,113 +6,630 @@ Types
The Bro scripting language supports the following built-in types.
.. bro:type:: void
An internal Bro type representing an absence of a type. Should
most often be seen as a possible function return type.
.. bro:type:: bool
Reflects a value with one of two meanings: true or false. The two
``bool`` constants are ``T`` and ``F``.
.. bro:type:: int
A numeric type representing a signed integer. An ``int`` constant
is a string of digits preceded by a ``+`` or ``-`` sign, e.g.
``-42`` or ``+5``. When using type inferencing, use care so that the
intended type is inferred, e.g. ``local size_difference = 0`` will
infer :bro:type:`count`, while ``local size_difference = +0``
will infer :bro:type:`int`.
.. bro:type:: count
A numeric type representing an unsigned integer. A ``count``
constant is a string of digits, e.g. ``1234`` or ``0``.
.. bro:type:: counter
An alias to :bro:type:`count`.
.. TODO: is there anything special about this type?
.. bro:type:: double
A numeric type representing a double-precision floating-point
number. Floating-point constants are written as a string of digits
with an optional decimal point, optional scale-factor in scientific
notation, and optional ``+`` or ``-`` sign. Examples are ``-1234``,
``-1234e0``, ``3.14159``, and ``.003e-23``.
.. bro:type:: time
A temporal type representing an absolute time. There is currently
no way to specify a ``time`` constant, but one can use the
:bro:id:`current_time` or :bro:id:`network_time` built-in functions
to assign a value to a ``time``-typed variable.
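For example, a minimal sketch using those built-in functions:
.. code:: bro

   global started: time;

   event bro_init()
       {
       started = current_time();
       print started;
       }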
.. bro:type:: interval
A temporal type representing a relative time. An ``interval``
constant can be written as a numeric constant followed by a time
unit where the time unit is one of ``usec``, ``sec``, ``min``,
``hr``, or ``day`` which respectively represent microseconds,
seconds, minutes, hours, and days. Whitespace between the numeric
constant and time unit is optional. Appending the letter "s" to the
time unit in order to pluralize it is also optional (to no semantic
effect). Examples of ``interval`` constants are ``3.5 min`` and
``3.5mins``. An ``interval`` can also be negated, for example ``-
12 hr`` represents "twelve hours in the past". Intervals also
support addition, subtraction, multiplication, division, and
comparison operations.
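As a small sketch of interval arithmetic and comparison:
.. code:: bro

   event bro_init()
       {
       local timeout = 2.5 mins;
       timeout = timeout + 30 secs;    # arithmetic on intervals
       if ( timeout > 1 min )          # comparison
           print timeout;
       }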
.. bro:type:: string
A type used to hold character-string values which represent text.
String constants are created by enclosing text in double quotes (")
and the backslash character (\) introduces escape sequences.
Note that Bro represents strings internally as a count and vector of
bytes rather than a NUL-terminated byte string (although string
constants are also automatically NUL-terminated). This is because
network traffic can easily introduce NULs into strings either by
nature of an application, inadvertently, or maliciously. And while
NULs are allowed in Bro strings, when present in strings passed as
arguments to many functions, a run-time error can occur as their
presence likely indicates a sort of problem. In that case, the
string will also only be represented to the user as the literal
"<string-with-NUL>" string.
.. bro:type:: pattern
A type representing regular-expression patterns which can be used
for fast text-searching operations. Pattern constants are created
by enclosing text within forward slashes (/) and is the same syntax
as the patterns supported by the `flex lexical analyzer
<http://flex.sourceforge.net/manual/Patterns.html>`_. The speed of
regular expression matching does not depend on the complexity or
size of the patterns. Patterns support two types of matching, exact
and embedded.
In exact matching the ``==`` equality relational operator is used
with one :bro:type:`string` operand and one :bro:type:`pattern`
operand to check whether the full string exactly matches the
pattern. In this case, the ``^`` beginning-of-line and ``$``
end-of-line anchors are redundant since pattern is implicitly
anchored to the beginning and end of the line to facilitate an exact
match. For example::
"foo" == /foo|bar/
yields true, while::
/foo|bar/ == "foobar"
yields false. The ``!=`` operator would yield the negation of ``==``.
In embedded matching the ``in`` operator is again used with one
:bro:type:`string` operand and one :bro:type:`pattern` operand
(which must be on the left-hand side), but tests whether the pattern
appears anywhere within the given string. For example::
/foo|bar/ in "foobar"
yields true, while::
/^oob/ in "foobar"
is false since "oob" does not appear at the start of "foobar". The
``!in`` operator would yield the negation of ``in``.
.. bro:type:: enum
A type allowing the specification of a set of related values that
have no further structure. The only operations allowed on
enumerations are equality comparisons and they do not have
associated values or ordering. An example declaration:
.. code:: bro
type color: enum { Red, White, Blue, };
The last comma after ``Blue`` is optional.
.. bro:type:: timer
.. TODO: is this a type that's exposed to users?
.. bro:type:: port
A type representing transport-level port numbers. Besides TCP and
UDP ports, there is a concept of an ICMP "port" where the source
port is the ICMP message type and the destination port the ICMP
message code. A ``port`` constant is written as an unsigned integer
followed by one of ``/tcp``, ``/udp``, ``/icmp``, or ``/unknown``.
Ports can be compared for equality and also for ordering. When
comparing order across transport-level protocols, ``/unknown`` <
``/tcp`` < ``/udp`` < ``/icmp``, for example ``65535/tcp`` is smaller
than ``0/udp``.
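A brief sketch of port constants and their ordering:
.. code:: bro

   event bro_init()
       {
       local web = 80/tcp;
       local dns = 53/udp;
       if ( web < dns )
           print "every TCP port orders before every UDP port";
       }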
.. bro:type:: addr
A type representing an IP address. Currently, Bro defaults to only
supporting IPv4 addresses unless configured/built with
``--enable-brov6``, in which case, IPv6 addresses are supported.
IPv4 address constants are written in "dotted quad" format,
``A1.A2.A3.A4``, where Ai all lie between 0 and 255.
IPv6 address constants are written as colon-separated hexadecimal form
as described by :rfc:`2373`.
Hostname constants can also be used, but since a hostname can
correspond to multiple IP addresses, the type of such a variable is a
:bro:type:`set` of :bro:type:`addr` elements. For example:
.. code:: bro
local a = www.google.com;
Addresses can be compared for (in)equality using ``==`` and ``!=``.
They can also be masked with ``/`` to produce a :bro:type:`subnet`:
.. code:: bro
local a: addr = 192.168.1.100;
local s: subnet = 192.168.0.0/16;
if ( a/16 == s )
print "true";
And checked for inclusion within a :bro:type:`subnet` using ``in``:
.. code:: bro
local a: addr = 192.168.1.100;
local s: subnet = 192.168.0.0/16;
if ( a in s )
print "true";
.. bro:type:: subnet
A type representing a block of IP addresses in CIDR notation. A
``subnet`` constant is written as an :bro:type:`addr` followed by a
slash (/) and then the network prefix size specified as a decimal
number. For example, ``192.168.0.0/16``.
.. bro:type:: any
Used to bypass strong typing. For example, a function can take an
argument of type ``any`` when it may be of different types.
.. bro:type:: table
An associative array that maps from one set of values to another. The
values being mapped are termed the *index* or *indices* and the
result of the mapping is called the *yield*. Indexing into tables
is very efficient, and internally it is just a single hash table
lookup.
The table declaration syntax is::
table [ type^+ ] of type
where *type^+* is one or more types, separated by commas. For example:
.. code:: bro
global a: table[count] of string;
declares a table indexed by :bro:type:`count` values and yielding
:bro:type:`string` values. The yield type can also be more complex:
.. code:: bro
global a: table[count] of table[addr, port] of string;
which declares a table indexed by :bro:type:`count` and yielding
another :bro:type:`table` which is indexed by an :bro:type:`addr`
and :bro:type:`port` to yield a :bro:type:`string`.
Initialization of tables occurs by enclosing a set of initializers within
braces, for example:
.. code:: bro
global t: table[count] of string = {
[11] = "eleven",
[5] = "five",
};
Accessing table elements is done by enclosing values within square
brackets (``[]``), for example:
.. code:: bro
t[13] = "thirteen";
And membership can be tested with ``in``:
.. code:: bro
if ( 13 in t )
...
Iterate over tables with a ``for`` loop:
.. code:: bro
local t: table[count] of string;
for ( n in t )
...
local services: table[addr, port] of string;
for ( [a, p] in services )
...
Remove individual table elements with ``delete``:
.. code:: bro
delete t[13];
Nothing happens if the element with value ``13`` isn't present in
the table.
Table size can be obtained by placing the table identifier between
vertical pipe (|) characters:
.. code:: bro
|t|
.. bro:type:: set
A set is like a :bro:type:`table`, but it is a collection of indices
that do not map to any yield value. They are declared with the
syntax::
set [ type^+ ]
where *type^+* is one or more types separated by commas.
Sets are initialized by listing elements enclosed by curly braces:
.. code:: bro
global s: set[port] = { 21/tcp, 23/tcp, 80/tcp, 443/tcp };
global s2: set[port, string] = { [21/tcp, "ftp"], [23/tcp, "telnet"] };
The types are explicitly shown in the example above, but they could
have been left to type inference.
Set membership is tested with ``in``:
.. code:: bro
if ( 21/tcp in s )
...
Elements are added with ``add``:
.. code:: bro
add s[22/tcp];
And removed with ``delete``:
.. code:: bro
delete s[21/tcp];
Set size can be obtained by placing the set identifier between
vertical pipe (|) characters:
.. code:: bro
|s|
.. bro:type:: vector
A vector is like a :bro:type:`table`, except it's always indexed by a
:bro:type:`count`. A vector is declared like:
.. code:: bro
global v: vector of string;
And can be initialized with the vector constructor:
.. code:: bro
global v: vector of string = vector("one", "two", "three");
Adding an element to a vector involves accessing/assigning it:
.. code:: bro
v[3] = "four"
Note how the vector indexing is 0-based.
Vector size can be obtained by placing the vector identifier between
vertical pipe (|) characters:
.. code:: bro
|v|
.. bro:type:: record
A ``record`` is a collection of values. Each value has a field name
and a type. Values do not need to have the same type and the types
have no restrictions. An example record type definition:
.. code:: bro
type MyRecordType: record {
c: count;
s: string &optional;
};
Access to a record field uses the dollar sign (``$``) operator:
.. code:: bro
global r: MyRecordType;
r$c = 13;
Record assignment can be done field by field or as a whole like:
.. code:: bro
r = [$c = 13, $s = "thirteen"];
When assigning a whole record value, all fields that are not
:bro:attr:`&optional` and do not have a :bro:attr:`&default` attribute
must be specified.
To test for existence of field that is :bro:attr:`&optional`, use the
``?$`` operator:
.. code:: bro
if ( r?$s )
...
.. bro:type:: file
Bro supports writing to files, but not reading from them. For
example, declare, open, and write to a file and finally close it
like:
.. code:: bro
global f: file = open("myfile");
print f, "hello, world";
close(f);
Writing to files like this for logging usually isn't recommended; for better
logging support see :doc:`/logging`.
.. bro:type:: func
See :bro:type:`function`.
.. bro:type:: function
Function types in Bro are declared using::
function( argument* ): type
where *argument* is a (possibly empty) comma-separated list of
arguments, and *type* is an optional return type. For example:
.. code:: bro
global greeting: function(name: string): string;
Here ``greeting`` is an identifier with a certain function type.
The function body is not defined yet and ``greeting`` could even
have different function body values at different times. To define
a function including a body value, the syntax is like:
.. code:: bro
function greeting(name: string): string
{
return "Hello, " + name;
}
Note that in the definition above, it's not necessary for us to have
done the first (forward) declaration of ``greeting`` as a function
type, but when it is present, the argument list and return type must match
exactly.
Function types don't need to have a name and can be assigned anonymously:
.. code:: bro
greeting = function(name: string): string { return "Hi, " + name; };
And finally, the function can be called like:
.. code:: bro
print greeting("Dave");
.. bro:type:: event
Event handlers are nearly identical in both syntax and semantics to
a :bro:type:`function`, with the two differences being that event
handlers have no return type since they never return a value, and
you cannot call an event handler. Instead of directly calling an
event handler from a script, event handler bodies are executed when
they are invoked by one of three different methods:
- From the event engine
When the event engine detects an event for which you have
defined a corresponding event handler, it queues an event for
that handler. The handler is invoked as soon as the event
engine finishes processing the current packet and flushing the
invocation of other event handlers that were queued first.
- With the ``event`` statement from a script
Immediately queuing invocation of an event handler occurs like:
.. code:: bro
event password_exposed(user, password);
This assumes that ``password_exposed`` was previously declared
as an event handler type with compatible arguments.
- Via the ``schedule`` expression in a script
This delays the invocation of event handlers until some time in
the future. For example:
.. code:: bro
schedule 5 secs { password_exposed(user, password) };
Multiple event handler bodies can be defined for the same event handler
identifier and the body of each will be executed in turn. Ordering
of execution can be influenced with :bro:attr:`&priority`.
Attributes
----------
Attributes occur at the end of type/event declarations and change their
behavior. The syntax is ``&key`` or ``&key=val``, e.g., ``type T:
set[count] &read_expire=5min`` or ``event foo() &priority=-3``. The Bro
scripting language supports the following built-in attributes.
.. bro:attr:: &optional
Allows a record field to be missing. For example the type ``record {
a: addr, b: port &optional }`` could be instantiated both as
singleton ``[$a=127.0.0.1]`` or pair ``[$a=127.0.0.1, $b=80/tcp]``.
.. bro:attr:: &default
Uses a default value for a record field or container elements. For
example, ``table[int] of string &default="foo"`` would create a
table that returns the :bro:type:`string` ``"foo"`` for any
non-existing index.
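A minimal sketch of that behavior:
.. code:: bro

   global t: table[int] of string &default="foo";

   event bro_init()
       {
       # No such index exists, so the &default value is returned.
       print t[42];
       }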
.. bro:attr:: &redef
Allows for redefinition of initial object values. This is typically
used with constants, for example, ``const clever = T &redef;`` would
allow the constant to be redefined at some later point during script
execution.
.. bro:attr:: &rotate_interval
Rotates a file after a specified interval.
.. bro:attr:: &rotate_size
Rotates a file after it has reached a given size in bytes.
.. bro:attr:: &add_func
.. TODO: needs to be documented.
.. bro:attr:: &delete_func
.. TODO: needs to be documented.
.. bro:attr:: &expire_func
Called right before a container element expires.
.. bro:attr:: &read_expire
Specifies a read expiration timeout for container elements. That is,
the element expires after the given amount of time since the last
time it has been read. Note that a write also counts as a read.
.. bro:attr:: &write_expire
Specifies a write expiration timeout for container elements. That
is, the element expires after the given amount of time since the
last time it has been written.
.. bro:attr:: &create_expire
Specifies a creation expiration timeout for container elements. That
is, the element expires after the given amount of time since it has
been inserted into the container, regardless of any reads or writes.
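For example, a sketch of a table whose entries are dropped automatically
five minutes after they were inserted:
.. code:: bro

   global recently_seen: table[addr] of string &create_expire=5min;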
.. bro:attr:: &persistent
Makes a variable persistent, i.e., its value is written to disk (by
default at shutdown time).
.. bro:attr:: &synchronized
Synchronizes variable accesses across nodes. The value of a
``&synchronized`` variable is automatically propagated to all peers
when it changes.
.. bro:attr:: &postprocessor
.. TODO: needs to be documented.
.. bro:attr:: &encrypt
Encrypts files right before writing them to disk.
.. TODO: needs to be documented in more detail.
.. bro:attr:: &match
.. TODO: needs to be documented.
.. bro:attr:: &disable_print_hook
Deprecated. Will be removed.
.. bro:attr:: &raw_output
Opens a file in raw mode, i.e., non-ASCII characters are not
escaped.
.. bro:attr:: &mergeable
Prefers set union to assignment for synchronized state. This
attribute is used in conjunction with :bro:attr:`&synchronized`
container types: when the same container is updated at two peers
with different values, the propagation of the state causes a race
condition, where the last update succeeds. This can cause
inconsistencies and can be avoided by unifying the two sets, rather
than merely overwriting the old value.
.. bro:attr:: &priority
Specifies the execution priority of an event handler. Higher values
are executed before lower ones. The default value is 0.
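A brief sketch:
.. code:: bro

   event bro_init() &priority=10
       {
       print "runs before handlers left at the default priority of 0";
       }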
.. bro:attr:: &group
Groups event handlers such that those in the same group can be
jointly activated or deactivated.
.. bro:attr:: &log
Writes a record field to the associated log stream.
.. bro:attr:: &error_handler
.. TODO: needs documented
.. bro:attr:: (&tracked)
.. TODO: needs documented or removed if it's not used anywhere.

View file

@ -1,19 +0,0 @@
Common Documentation
====================
.. _common_port_analysis_doc:
Port Analysis
-------------
TODO: add some stuff here
.. _common_packet_filter_doc:
Packet Filter
-------------
TODO: add some stuff here
.. note:: Filters are only relevant when dynamic protocol detection (DPD)
is explicitly turned off (Bro release 1.6 enabled DPD by default).

View file

@ -1,5 +1,5 @@
##! This is an example script that demonstrates documentation features.
##! Comments of the form ``##!`` are for the script summary. The contents of
##! these comments are transferred directly into the auto-generated
##! `reStructuredText <http://docutils.sourceforge.net/rst.html>`_
##! (reST) document's summary section.
@ -22,8 +22,8 @@
# field comments, it's necessary to disambiguate the field with # field comments, it's necessary to disambiguate the field with
# which a comment associates: e.g. "##<" can be used on the same line # which a comment associates: e.g. "##<" can be used on the same line
# as a field to signify the comment relates to it and not the # as a field to signify the comment relates to it and not the
# following field. "##<" is not meant for general use, just # following field. "##<" can also be used more generally in any
# record/enum fields. # variable declarations to associate with the last-declared identifier.
# #
# Generally, the auto-doc comments (##) are associated with the # Generally, the auto-doc comments (##) are associated with the
# next declaration/identifier found in the script, but the doc framework # next declaration/identifier found in the script, but the doc framework
@ -151,7 +151,7 @@ export {
const an_option: set[addr, addr, string] &redef; const an_option: set[addr, addr, string] &redef;
# default initialization will be self-documenting # default initialization will be self-documenting
const option_with_init = 0.01 secs &redef; const option_with_init = 0.01 secs &redef; ##< More docs can be added here.
############## state variables ############ ############## state variables ############
# right now, I'm defining this as any global # right now, I'm defining this as any global
@ -183,6 +183,7 @@ export {
## Summarize "an_event" here. ## Summarize "an_event" here.
## Give more details about "an_event" here. ## Give more details about "an_event" here.
## Example::an_event should not be confused as a parameter.
## name: describe the argument here ## name: describe the argument here
global an_event: event(name: string); global an_event: event(name: string);
@ -1,7 +1,7 @@
.. This is a stub doc to which broxygen appends during the build process .. This is a stub doc to which broxygen appends during the build process
Index of All Bro Scripts Index of All Individual Bro Scripts
======================== ===================================
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
@ -10,8 +10,3 @@ script, it supports being loaded in mass as a whole directory for convenience.
Packages/scripts in the ``base/`` directory are all loaded by default, while Packages/scripts in the ``base/`` directory are all loaded by default, while
ones in ``policy/`` provide functionality and customization options that are ones in ``policy/`` provide functionality and customization options that are
more appropriate for users to decide whether they'd like to load it or not. more appropriate for users to decide whether they'd like to load it or not.
.. toctree::
:maxdepth: 1
@ -34,7 +34,7 @@ Let's look at an example signature first:
This signature asks Bro to match the regular expression ``.*root`` on This signature asks Bro to match the regular expression ``.*root`` on
all TCP connections going to port 80. When the signature triggers, Bro all TCP connections going to port 80. When the signature triggers, Bro
will raise an event ``signature_match`` of the form: will raise an event :bro:id:`signature_match` of the form:
.. code:: bro .. code:: bro
@ -45,20 +45,20 @@ triggered the match, ``msg`` is the string specified by the
signature's event statement (``Found root!``), and data is the last signature's event statement (``Found root!``), and data is the last
piece of payload which triggered the pattern match. piece of payload which triggered the pattern match.
To turn such ``signature_match`` events into actual alarms, you can To turn such :bro:id:`signature_match` events into actual alarms, you can
load Bro's ``signature.bro`` script. This script contains a default load Bro's :doc:`/scripts/base/frameworks/signatures/main` script.
event handler that raises ``SensitiveSignature`` :doc:`Notices <notice>` This script contains a default event handler that raises
:bro:enum:`Signatures::Sensitive_Signature` :doc:`Notices <notice>`
(as well as others; see the beginning of the script). (as well as others; see the beginning of the script).
As signatures are independent of Bro's policy scripts, they are put As signatures are independent of Bro's policy scripts, they are put
into their own file(s). There are two ways to specify which files into their own file(s). There are two ways to specify which files
contain signatures: By using the ``-s`` flag when you invoke Bro, or contain signatures: By using the ``-s`` flag when you invoke Bro, or
by extending the Bro variable ``signatures_files`` using the ``+=`` by extending the Bro variable :bro:id:`signature_files` using the ``+=``
operator. If a signature file is given without a path, it is searched operator. If a signature file is given without a path, it is searched
along the normal ``BROPATH``. The default extension of the file name along the normal ``BROPATH``. The default extension of the file name
is ``.sig``, and Bro appends that automatically when necessary.
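For example, loading an additional signature file from a script (rather than with ``-s``) is a one-line redef; the file name here is hypothetical and is located via the normal ``BROPATH``:

.. code:: bro

   redef signature_files += "local-sigs.sig";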
Signature language Signature language
================== ==================
@ -90,7 +90,7 @@ one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``; and
against. The following keywords are defined: against. The following keywords are defined:
``src-ip``/``dst-ip <cmp> <address-list>`` ``src-ip``/``dst-ip <cmp> <address-list>``
Source and destination address, repectively. Addresses can be Source and destination address, respectively. Addresses can be
given as IP addresses or CIDR masks. given as IP addresses or CIDR masks.
``src-port``/``dst-port`` ``<int-list>`` ``src-port``/``dst-port`` ``<int-list>``
@ -126,7 +126,7 @@ CIDR notation for netmasks and is translated into a corresponding
bitmask applied to the packet's value prior to the comparison (similar bitmask applied to the packet's value prior to the comparison (similar
to the optional ``& integer``). to the optional ``& integer``).
Putting all together, this is an example conditiation that is Putting all together, this is an example condition that is
equivalent to ``dst- ip == 1.2.3.4/16, 5.6.7.8/24``: equivalent to ``dst- ip == 1.2.3.4/16, 5.6.7.8/24``:
.. code:: bro-sig .. code:: bro-sig
@ -134,7 +134,7 @@ equivalent to ``dst- ip == 1.2.3.4/16, 5.6.7.8/24``:
header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24 header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24
Internally, the predefined header conditions are in fact just Internally, the predefined header conditions are in fact just
short-cuts and mappend into a generic condition. short-cuts and mapped into a generic condition.
Content Conditions Content Conditions
~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~
@ -265,7 +265,7 @@ Actions define what to do if a signature matches. Currently, there are
two actions defined: two actions defined:
``event <string>`` ``event <string>``
Raises a ``signature_match`` event. The event handler has the Raises a :bro:id:`signature_match` event. The event handler has the
following type: following type:
.. code:: bro .. code:: bro
@ -339,10 +339,10 @@ Things to keep in mind when writing signatures
respectively. Generally, Bro follows `flex's regular expression respectively. Generally, Bro follows `flex's regular expression
syntax syntax
<http://www.gnu.org/software/flex/manual/html_chapter/flex_7.html>`_. <http://www.gnu.org/software/flex/manual/html_chapter/flex_7.html>`_.
See the DPD signatures in ``policy/sigs/dpd.bro`` for some examples See the DPD signatures in ``base/frameworks/dpd/dpd.sig`` for some examples
of fairly complex payload patterns. of fairly complex payload patterns.
* The data argument of the ``signature_match`` handler might not carry * The data argument of the :bro:id:`signature_match` handler might not carry
the full text matched by the regular expression. Bro performs the the full text matched by the regular expression. Bro performs the
matching incrementally as packets come in; when the signature matching incrementally as packets come in; when the signature
eventually fires, it can only pass on the most recent chunk of data. eventually fires, it can only pass on the most recent chunk of data.
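As a rough sketch of consuming these matches directly, a custom handler might look like the following. The field names of :bro:type:`signature_state` (``sig_id``, ``conn``) are assumed from Bro's standard definition; adjust as needed.

.. code:: bro

   event signature_match(state: signature_state, msg: string, data: string)
       {
       # state$sig_id names the signature; state$conn is the triggering connection.
       print fmt("signature %s matched for %s: %s",
                 state$sig_id, state$conn$id$orig_h, msg);
       }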
@ -168,10 +168,6 @@ New Default Settings
are loaded. See ``PacketFilter::all_packets`` for how to revert to old are loaded. See ``PacketFilter::all_packets`` for how to revert to old
behavior. behavior.
- By default, Bro now sets a libpcap snaplen of 65535. Depending on
the OS, this may have performance implications and you can use the
``--snaplen`` option to change the value.
API Changes API Changes
----------- -----------
@ -9,10 +9,10 @@ redef peer_description = Cluster::node;
# Add a cluster prefix. # Add a cluster prefix.
@prefixes += cluster @prefixes += cluster
## If this script isn't found anywhere, the cluster bombs out. # If this script isn't found anywhere, the cluster bombs out.
## Loading the cluster framework requires that a script by this name exists # Loading the cluster framework requires that a script by this name exists
## somewhere in the BROPATH. The only thing in the file should be the # somewhere in the BROPATH. The only thing in the file should be the
## cluster definition in the :bro:id:`Cluster::nodes` variable. # cluster definition in the :bro:id:`Cluster::nodes` variable.
@load cluster-layout @load cluster-layout
@if ( Cluster::node in Cluster::nodes ) @if ( Cluster::node in Cluster::nodes )
@ -28,17 +28,14 @@ redef Communication::listen_port = Cluster::nodes[Cluster::node]$p;
@if ( Cluster::local_node_type() == Cluster::MANAGER ) @if ( Cluster::local_node_type() == Cluster::MANAGER )
@load ./nodes/manager @load ./nodes/manager
@load site/local-manager
@endif @endif
@if ( Cluster::local_node_type() == Cluster::PROXY ) @if ( Cluster::local_node_type() == Cluster::PROXY )
@load ./nodes/proxy @load ./nodes/proxy
@load site/local-proxy
@endif @endif
@if ( Cluster::local_node_type() == Cluster::WORKER ) @if ( Cluster::local_node_type() == Cluster::WORKER )
@load ./nodes/worker @load ./nodes/worker
@load site/local-worker
@endif @endif
@endif @endif
@ -1,21 +1,45 @@
##! A framework for establishing and controlling a cluster of Bro instances.
##! In order to use the cluster framework, a script named
##! ``cluster-layout.bro`` must exist somewhere in Bro's script search path
##! which has a cluster definition of the :bro:id:`Cluster::nodes` variable.
##! The ``CLUSTER_NODE`` environment variable or :bro:id:`Cluster::node`
##! must also be set and the cluster framework loaded as a package like
##! ``@load base/frameworks/cluster``.
@load base/frameworks/control @load base/frameworks/control
module Cluster; module Cluster;
export { export {
## The cluster logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the column fields of the cluster log.
type Info: record { type Info: record {
## The time at which a cluster message was generated.
ts: time; ts: time;
## A message indicating information about the cluster's operation.
message: string; message: string;
} &log; } &log;
## Types of nodes that are allowed to participate in the cluster
## configuration.
type NodeType: enum { type NodeType: enum {
## A dummy node type indicating the local node is not operating
## within a cluster.
NONE, NONE,
## A node type which is allowed to view/manipulate the configuration
## of other nodes in the cluster.
CONTROL, CONTROL,
## A node type responsible for log and policy management.
MANAGER, MANAGER,
## A node type for relaying worker node communication and synchronizing
## worker node state.
PROXY, PROXY,
## The node type doing all the actual traffic analysis.
WORKER, WORKER,
## A node acting as a traffic recorder using the
## `Time Machine <http://tracker.bro-ids.org/time-machine>`_ software.
TIME_MACHINE, TIME_MACHINE,
}; };
@ -49,30 +73,38 @@ export {
## Record type to indicate a node in a cluster. ## Record type to indicate a node in a cluster.
type Node: record { type Node: record {
## Identifies the type of cluster node in this node's configuration.
node_type: NodeType; node_type: NodeType;
## The IP address of the cluster node.
ip: addr; ip: addr;
## The port to which this local node can connect when
## establishing communication.
p: port; p: port;
## Identifier for the interface a worker is sniffing. ## Identifier for the interface a worker is sniffing.
interface: string &optional; interface: string &optional;
## Name of the manager node this node uses. For workers and proxies.
## Manager node this node uses. For workers and proxies.
manager: string &optional; manager: string &optional;
## Proxy node this node uses. For workers and managers. ## Name of the proxy node this node uses. For workers and managers.
proxy: string &optional; proxy: string &optional;
## Worker nodes that this node connects with. For managers and proxies. ## Names of worker nodes that this node connects with.
## For managers and proxies.
workers: set[string] &optional; workers: set[string] &optional;
## Name of a time machine node with which this node connects.
time_machine: string &optional; time_machine: string &optional;
}; };
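A minimal, hypothetical ``cluster-layout.bro`` built from this record might look like the sketch below. Node names, addresses, and ports are placeholders, and it assumes :bro:id:`Cluster::nodes` is a table of :bro:type:`Cluster::Node` indexed by node name, as used throughout the framework.

.. code:: bro

   redef Cluster::nodes = {
       ["manager"]  = [$node_type=Cluster::MANAGER, $ip=10.0.0.1, $p=47761/tcp,
                       $workers=set("worker-1")],
       ["proxy-1"]  = [$node_type=Cluster::PROXY,   $ip=10.0.0.1, $p=47762/tcp,
                       $manager="manager", $workers=set("worker-1")],
       ["worker-1"] = [$node_type=Cluster::WORKER,  $ip=10.0.0.2, $p=47763/tcp,
                       $manager="manager", $proxy="proxy-1", $interface="eth0"],
   };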
## This function can be called at any time to determine if the cluster ## This function can be called at any time to determine if the cluster
## framework is being enabled for this run. ## framework is being enabled for this run.
##
## Returns: True if :bro:id:`Cluster::node` has been set.
global is_enabled: function(): bool; global is_enabled: function(): bool;
## This function can be called at any time to determine what type of ## This function can be called at any time to determine what type of
## cluster node the current Bro instance is going to be acting as. ## cluster node the current Bro instance is going to be acting as.
## If :bro:id:`Cluster::is_enabled` returns false, then ## If :bro:id:`Cluster::is_enabled` returns false, then
## :bro:enum:`Cluster::NONE` is returned. ## :bro:enum:`Cluster::NONE` is returned.
##
## Returns: The :bro:type:`Cluster::NodeType` the calling node acts as.
global local_node_type: function(): NodeType; global local_node_type: function(): NodeType;
## This gives the value for the number of workers currently connected to, ## This gives the value for the number of workers currently connected to,
@ -1,3 +1,7 @@
##! Redefines the options common to all proxy nodes within a Bro cluster.
##! In particular, proxies are not meant to produce logs locally and they
##! do not forward events anywhere; they mainly synchronize state between
##! worker nodes.
@prefixes += cluster-proxy @prefixes += cluster-proxy
@ -1,3 +1,7 @@
##! Redefines some options common to all worker nodes within a Bro cluster.
##! In particular, worker nodes do not produce logs locally; instead they
##! send them off to a manager node for processing.
@prefixes += cluster-worker @prefixes += cluster-worker
## Don't do any local logging. ## Don't do any local logging.
@ -1,3 +1,6 @@
##! This script establishes communication among all nodes in a cluster
##! as defined by :bro:id:`Cluster::nodes`.
@load ./main @load ./main
@load base/frameworks/communication @load base/frameworks/communication
@ -41,7 +44,7 @@ event bro_init() &priority=9
{ {
if ( n$node_type == WORKER && n$proxy == node ) if ( n$node_type == WORKER && n$proxy == node )
Communication::nodes[i] = Communication::nodes[i] =
[$host=n$ip, $connect=F, $class=i, $events=worker2proxy_events]; [$host=n$ip, $connect=F, $class=i, $sync=T, $auth=T, $events=worker2proxy_events];
# accepts connections from the previous one. # accepts connections from the previous one.
# (This is not ideal for setups with many proxies) # (This is not ideal for setups with many proxies)
@ -1,11 +1,13 @@
##! Connect to remote Bro or Broccoli instances to share state and/or transfer ##! Facilitates connecting to remote Bro or Broccoli instances to share state
##! events. ##! and/or transfer events.
@load base/frameworks/packet-filter @load base/frameworks/packet-filter
module Communication; module Communication;
export { export {
## The communication logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Which interface to listen on (0.0.0.0 for any interface). ## Which interface to listen on (0.0.0.0 for any interface).
@ -21,14 +23,25 @@ export {
## compression. ## compression.
global compression_level = 0 &redef; global compression_level = 0 &redef;
## A record type containing the column fields of the communication log.
type Info: record { type Info: record {
## The network time at which a communication event occurred.
ts: time &log; ts: time &log;
## The peer name (if any) for which a communication event is concerned.
peer: string &log &optional; peer: string &log &optional;
## Where the communication event message originated from, that is,
## either from the scripting layer or inside the Bro process.
src_name: string &log &optional; src_name: string &log &optional;
## .. todo:: currently unused.
connected_peer_desc: string &log &optional; connected_peer_desc: string &log &optional;
## .. todo:: currently unused.
connected_peer_addr: addr &log &optional; connected_peer_addr: addr &log &optional;
## .. todo:: currently unused.
connected_peer_port: port &log &optional; connected_peer_port: port &log &optional;
## The severity of the communication event message.
level: string &log &optional; level: string &log &optional;
## A message describing the communication event between Bro or
## Broccoli instances.
message: string &log; message: string &log;
}; };
@ -77,7 +90,7 @@ export {
auth: bool &default = F; auth: bool &default = F;
## If not set, no capture filter is sent. ## If not set, no capture filter is sent.
## If set to "", the default cature filter is sent. ## If set to "", the default capture filter is sent.
capture_filter: string &optional; capture_filter: string &optional;
## Whether to use SSL-based communication. ## Whether to use SSL-based communication.
@ -97,10 +110,24 @@ export {
## to or respond to connections from. ## to or respond to connections from.
global nodes: table[string] of Node &redef; global nodes: table[string] of Node &redef;
## A table of peer nodes for which this node issued a
## :bro:id:`Communication::connect_peer` call but with which a connection
## has not yet been established or with which a connection has been
## closed and is currently in the process of retrying to establish.
## When a connection is successfully established, the peer is removed
## from the table.
global pending_peers: table[peer_id] of Node; global pending_peers: table[peer_id] of Node;
## A table of peer nodes for which this node has an established connection.
## Peers are automatically removed if their connection is closed and
## automatically added back if a connection is re-established later.
global connected_peers: table[peer_id] of Node; global connected_peers: table[peer_id] of Node;
## Connect to nodes[node], independent of its "connect" flag. ## Connect to a node in :bro:id:`Communication::nodes` independent
## of its "connect" flag.
##
## peer: the string used to index a particular node within the
## :bro:id:`Communication::nodes` table.
global connect_peer: function(peer: string); global connect_peer: function(peer: string);
} }
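As a hedged illustration of the table and function above, one could register a peer that is only connected on demand. The host name and port are placeholders, and the ``$p`` field is assumed to be part of :bro:type:`Communication::Node` (only ``host``, ``connect``, ``sync``, ``auth``, and ``capture_filter`` are visible in this excerpt).

.. code:: bro

   redef Communication::nodes += {
       ["collector"] = [$host=192.168.1.50, $p=47757/tcp, $connect=F, $sync=T],
   };

   event bro_init()
       {
       # Establish the connection explicitly, regardless of the "connect" flag.
       Communication::connect_peer("collector");
       }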
@ -130,6 +157,13 @@ event remote_log(level: count, src: count, msg: string)
do_script_log_common(level, src, msg); do_script_log_common(level, src, msg);
} }
# This is a core generated event.
event remote_log_peer(p: event_peer, level: count, src: count, msg: string)
{
local rmsg = fmt("[#%d/%s:%d] %s", p$id, p$host, p$p, msg);
do_script_log_common(level, src, rmsg);
}
function do_script_log(p: event_peer, msg: string) function do_script_log(p: event_peer, msg: string)
{ {
do_script_log_common(REMOTE_LOG_INFO, REMOTE_SRC_SCRIPT, msg); do_script_log_common(REMOTE_LOG_INFO, REMOTE_SRC_SCRIPT, msg);
@ -1,43 +1,30 @@
##! This is a utility script that sends the current values of all &redef'able ##! The control framework provides the foundation for providing "commands"
##! consts to a remote Bro then sends the :bro:id:`configuration_update` event ##! that can be taken remotely at runtime to modify a running Bro instance
##! and terminates processing. ##! or collect information from the running instance.
##!
##! Intended to be used from the command line like this when starting a controller::
##!
##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>]
##!
##! A controllee only needs to load the controllee script in addition
##! to the specific analysis scripts desired. It may also need a node
##! configured as a controller node in the communications nodes configuration::
##!
##! bro <scripts> frameworks/control/controllee
##!
##! To use the framework as a controllee, it only needs to be loaded and
##! the controlled node needs to accept all events in the "Control::" namespace
##! from the host where the control actions will be performed, along with
##! using the "control" class.
module Control; module Control;
export { export {
## This is the address of the host that will be controlled. ## The address of the host that will be controlled.
const host = 0.0.0.0 &redef; const host = 0.0.0.0 &redef;
## This is the port of the host that will be controlled. ## The port of the host that will be controlled.
const host_port = 0/tcp &redef; const host_port = 0/tcp &redef;
## This is the command that is being done. It's typically set on the ## The command that is being done. It's typically set on the
## command line and influences whether this instance starts up as a ## command line.
## controller or controllee.
const cmd = "" &redef; const cmd = "" &redef;
## This can be used by commands that take an argument. ## This can be used by commands that take an argument.
const arg = "" &redef; const arg = "" &redef;
## Events that need to be handled by controllers.
const controller_events = /Control::.*_request/ &redef; const controller_events = /Control::.*_request/ &redef;
## Events that need to be handled by controllees.
const controllee_events = /Control::.*_response/ &redef; const controllee_events = /Control::.*_response/ &redef;
## These are the commands that can be given on the command line for ## The commands that can currently be given on the command line for
## remote control. ## remote control.
const commands: set[string] = { const commands: set[string] = {
"id_value", "id_value",
@ -45,15 +32,15 @@ export {
"net_stats", "net_stats",
"configuration_update", "configuration_update",
"shutdown", "shutdown",
}; } &redef;
## Variable IDs that are to be ignored by the update process. ## Variable IDs that are to be ignored by the update process.
const ignore_ids: set[string] = { const ignore_ids: set[string] = { };
};
## Event for requesting the value of an ID (a variable). ## Event for requesting the value of an ID (a variable).
global id_value_request: event(id: string); global id_value_request: event(id: string);
## Event for returning the value of an ID after an :bro:id:`id_request` event. ## Event for returning the value of an ID after an
## :bro:id:`Control::id_value_request` event.
global id_value_response: event(id: string, val: string); global id_value_response: event(id: string, val: string);
## Requests the current communication status. ## Requests the current communication status.
@ -68,7 +55,8 @@ export {
## Inform the remote Bro instance that its configuration may have been updated.
global configuration_update_request: event(); global configuration_update_request: event();
## This event is a wrapper and alias for the :bro:id:`configuration_update_request` event. ## This event is a wrapper and alias for the
## :bro:id:`Control::configuration_update_request` event.
## This event is also a primary hooking point for the control framework. ## This event is also a primary hooking point for the control framework.
global configuration_update: event(); global configuration_update: event();
## Message in response to a configuration update request. ## Message in response to a configuration update request.
@ -80,15 +80,15 @@ signature irc_server_reply {
tcp-state responder tcp-state responder
} }
signature irc_sig3 { signature irc_server_to_server1 {
ip-proto == tcp ip-proto == tcp
payload /(.*\x0a)*(\x20)*[Ss][Ee][Rr][Vv][Ee][Rr](\x20)+.+\x0a/ payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
} }
signature irc_sig4 { signature irc_server_to_server2 {
ip-proto == tcp ip-proto == tcp
payload /(.*\x0a)*(\x20)*[Ss][Ee][Rr][Vv][Ee][Rr](\x20)+.+\x0a/ payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
requires-reverse-signature irc_sig3 requires-reverse-signature irc_server_to_server1
enable "irc" enable "irc"
} }
@ -7,14 +7,16 @@ module DPD;
redef signature_files += "base/frameworks/dpd/dpd.sig"; redef signature_files += "base/frameworks/dpd/dpd.sig";
export { export {
## Add the DPD logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type defining the columns to log in the DPD logging stream.
type Info: record { type Info: record {
## Timestamp for when protocol analysis failed. ## Timestamp for when protocol analysis failed.
ts: time &log; ts: time &log;
## Connection unique ID. ## Connection unique ID.
uid: string &log; uid: string &log;
## Connection ID. ## Connection ID containing the 4-tuple which identifies endpoints.
id: conn_id &log; id: conn_id &log;
## Transport protocol for the violation. ## Transport protocol for the violation.
proto: transport_proto &log; proto: transport_proto &log;
@ -11,7 +11,7 @@
# user_name # user_name
# file_name # file_name
# file_md5 # file_md5
# x509_cert - DER encoded, not PEM (ascii armored) # x509_md5
# Example tags: # Example tags:
# infrastructure # infrastructure
@ -25,6 +25,7 @@
module Intel; module Intel;
export { export {
## The intel logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
redef enum Notice::Type += { redef enum Notice::Type += {
@ -33,72 +34,117 @@ export {
Detection, Detection,
}; };
## Record type used for logging information from the intelligence framework.
## Primarily for problems or oddities with inserting and querying data.
## This is important since the content of the intelligence framework can
## change quite dramatically during runtime and problems may be introduced
## into the data.
type Info: record { type Info: record {
## The current network time.
ts: time &log; ts: time &log;
## Represents the severity of the message.
## This value should be one of: "info", "warn", "error" ## This value should be one of: "info", "warn", "error"
level: string &log; level: string &log;
## The message.
message: string &log; message: string &log;
}; };
## Record to represent metadata associated with a single piece of
## intelligence.
type MetaData: record { type MetaData: record {
## A description for the data.
desc: string &optional; desc: string &optional;
## A URL where more information may be found about the intelligence.
url: string &optional; url: string &optional;
## The time at which the data was first declared to be intelligence.
first_seen: time &optional; first_seen: time &optional;
## When this data was most recently inserted into the framework.
latest_seen: time &optional; latest_seen: time &optional;
## Arbitrary text tags for the data.
tags: set[string]; tags: set[string];
}; };
## Record to represent a singular piece of intelligence.
type Item: record { type Item: record {
## If the data is an IP address, this holds the address.
ip: addr &optional; ip: addr &optional;
## If the data is textual, this holds the text.
str: string &optional; str: string &optional;
## If the data is numeric, this holds the number.
num: int &optional; num: int &optional;
## The subtype of the data for when either the $str or $num fields are
## given. If one of those fields is given, this field must be present.
subtype: string &optional; subtype: string &optional;
## The next five fields are temporary until a better model for
## attaching metadata to an intelligence item is created.
desc: string &optional; desc: string &optional;
url: string &optional; url: string &optional;
first_seen: time &optional; first_seen: time &optional;
latest_seen: time &optional; latest_seen: time &optional;
tags: set[string]; tags: set[string];
## These single string tags are throw away until pybroccoli supports sets ## These single string tags are throw away until pybroccoli supports sets.
tag1: string &optional; tag1: string &optional;
tag2: string &optional; tag2: string &optional;
tag3: string &optional; tag3: string &optional;
}; };
## Record model used for constructing queries against the intelligence
## framework.
type QueryItem: record { type QueryItem: record {
ip: addr &optional; ## If an IP address is being queried for, this field should be given.
str: string &optional; ip: addr &optional;
num: int &optional; ## If a string is being queried for, this field should be given.
subtype: string &optional; str: string &optional;
## If numeric data is being queried for, this field should be given.
num: int &optional;
## If either a string or number is being queried for, this field should
## indicate the subtype of the data.
subtype: string &optional;
or_tags: set[string] &optional; ## A set of tags where if a single metadata record attached to an item
and_tags: set[string] &optional; ## has any one of the tags defined in this field, it will match.
or_tags: set[string] &optional;
## A set of tags where a single metadata record attached to an item
## must have all of the tags defined in this field.
and_tags: set[string] &optional;
## The predicate can be given when searching for a match. It will ## The predicate can be given when searching for a match. It will
## be tested against every :bro:type:`MetaData` item associated with ## be tested against every :bro:type:`Intel::MetaData` item associated
## the data being matched on. If it returns T a single time, the ## with the data being matched on. If it returns T a single time, the
## matcher will consider that the item has matched. ## matcher will consider that the item has matched. This field can
pred: function(meta: Intel::MetaData): bool &optional; ## be used for constructing arbitrarily complex queries that may not
## be possible with the $or_tags or $and_tags fields.
pred: function(meta: Intel::MetaData): bool &optional;
}; };
## Function to insert data into the intelligence framework.
##
## item: The data item.
##
## Returns: T if the data was successfully inserted into the framework,
## otherwise it returns F.
global insert: function(item: Item): bool; global insert: function(item: Item): bool;
## A wrapper for the :bro:id:`Intel::insert` function. This is primarily
## used as the external API for inserting data into the intelligence
## framework using Broccoli.
global insert_event: event(item: Item); global insert_event: event(item: Item);
## Function for matching data within the intelligence framework.
global matcher: function(item: QueryItem): bool; global matcher: function(item: QueryItem): bool;
type MetaDataStore: table[count] of MetaData;
type DataStore: record {
ip_data: table[addr] of MetaDataStore;
## The first string is the actual value and the second string is the subtype.
string_data: table[string, string] of MetaDataStore;
int_data: table[int, string] of MetaDataStore;
};
global data_store: DataStore;
} }
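A small usage sketch for the insertion API above; the address, description, and tag are made-up values, and only fields shown in the :bro:type:`Intel::Item` record are used.

.. code:: bro

   event bro_init()
       {
       local item: Intel::Item = [$ip=203.0.113.7,
                                  $desc="known scanner",
                                  $tags=set("infrastructure")];
       if ( ! Intel::insert(item) )
           print "intelligence item could not be inserted";
       }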
type MetaDataStore: table[count] of MetaData;
type DataStore: record {
ip_data: table[addr] of MetaDataStore;
# The first string is the actual value and the second string is the subtype.
string_data: table[string, string] of MetaDataStore;
int_data: table[int, string] of MetaDataStore;
};
global data_store: DataStore;
event bro_init() event bro_init()
{ {
Log::create_stream(Intel::LOG, [$columns=Info]); Log::create_stream(Intel::LOG, [$columns=Info]);
@ -1,16 +1,16 @@
##! The Bro logging interface. ##! The Bro logging interface.
##! ##!
##! See :doc:`/logging` for an introduction to Bro's logging framework.
module Log; module Log;
# Log::ID and Log::Writer are defined in bro.init due to circular dependencies. # Log::ID and Log::Writer are defined in types.bif due to circular dependencies.
export { export {
## If true, is local logging is by default enabled for all filters. ## If true, local logging is by default enabled for all filters.
const enable_local_logging = T &redef; const enable_local_logging = T &redef;
## If true, is remote logging is by default enabled for all filters. ## If true, remote logging is by default enabled for all filters.
const enable_remote_logging = T &redef; const enable_remote_logging = T &redef;
## Default writer to use if a filter does not specify ## Default writer to use if a filter does not specify
@ -23,21 +23,24 @@ export {
columns: any; columns: any;
## Event that will be raised once for each log entry. ## Event that will be raised once for each log entry.
## The event receives a single parameter, an instance of type ``columns``.
ev: any &optional; ev: any &optional;
}; };
## Default function for building the path values for log filters if not ## Builds the default path values for log filters if not otherwise
## speficied otherwise by a filter. The default implementation uses ``id`` ## specified by a filter. The default implementation uses *id*
## to derive a name. ## to derive a name.
## ##
## id: The log stream. ## id: The ID associated with the log stream.
##
## path: A suggested path value, which may be either the filter's ## path: A suggested path value, which may be either the filter's
## ``path`` if defined, else a previous result from the function. ## ``path`` if defined, else a previous result from the function.
## If no ``path`` is defined for the filter, then the first call ## If no ``path`` is defined for the filter, then the first call
## to the function will contain an empty string. ## to the function will contain an empty string.
##
## rec: An instance of the streams's ``columns`` type with its ## rec: An instance of the streams's ``columns`` type with its
## fields set to the values to logged. ## fields set to the values to be logged.
## ##
## Returns: The path to be used for the filter. ## Returns: The path to be used for the filter.
global default_path_func: function(id: ID, path: string, rec: any) : string &redef; global default_path_func: function(id: ID, path: string, rec: any) : string &redef;
@ -46,7 +49,7 @@ export {
## Information passed into rotation callback functions. ## Information passed into rotation callback functions.
type RotationInfo: record { type RotationInfo: record {
writer: Writer; ##< Writer. writer: Writer; ##< The :bro:type:`Log::Writer` being used.
fname: string; ##< Full name of the rotated file. fname: string; ##< Full name of the rotated file.
path: string; ##< Original path value. path: string; ##< Original path value.
open: time; ##< Time when opened. open: time; ##< Time when opened.
@ -57,25 +60,26 @@ export {
## Default rotation interval. Zero disables rotation. ## Default rotation interval. Zero disables rotation.
const default_rotation_interval = 0secs &redef; const default_rotation_interval = 0secs &redef;
## Default naming format for timestamps embedded into filenames. Uses a strftime() style. ## Default naming format for timestamps embedded into filenames.
## Uses a ``strftime()`` style.
const default_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef; const default_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef;
## Default shell command to run on rotated files. Empty for none. ## Default shell command to run on rotated files. Empty for none.
const default_rotation_postprocessor_cmd = "" &redef; const default_rotation_postprocessor_cmd = "" &redef;
## Specifies the default postprocessor function per writer type. Entries in this ## Specifies the default postprocessor function per writer type.
## table are initialized by each writer type. ## Entries in this table are initialized by each writer type.
const default_rotation_postprocessors: table[Writer] of function(info: RotationInfo) : bool &redef; const default_rotation_postprocessors: table[Writer] of function(info: RotationInfo) : bool &redef;
## Filter customizing logging. ## A filter type describes how to customize logging streams.
type Filter: record { type Filter: record {
## Descriptive name to reference this filter. ## Descriptive name to reference this filter.
name: string; name: string;
## The writer to use. ## The logging writer implementation to use.
writer: Writer &default=default_writer; writer: Writer &default=default_writer;
## Predicate indicating whether a log entry should be recorded. ## Indicates whether a log entry should be recorded.
## If not given, all entries are recorded. ## If not given, all entries are recorded.
## ##
## rec: An instance of the streams's ``columns`` type with its ## rec: An instance of the streams's ``columns`` type with its
@ -101,13 +105,15 @@ export {
## easy to flood the disk by returning a new string for each ## easy to flood the disk by returning a new string for each
## connection ... ## connection ...
## ##
## id: The log stream. ## id: The ID associated with the log stream.
##
## path: A suggested path value, which may be either the filter's ## path: A suggested path value, which may be either the filter's
## ``path`` if defined, else a previous result from the function. ## ``path`` if defined, else a previous result from the function.
## If no ``path`` is defined for the filter, then the first call ## If no ``path`` is defined for the filter, then the first call
## to the function will contain an empty string. ## to the function will contain an empty string.
##
## rec: An instance of the streams's ``columns`` type with its ## rec: An instance of the streams's ``columns`` type with its
## fields set to the values to logged. ## fields set to the values to be logged.
## ##
## Returns: The path to be used for the filter. ## Returns: The path to be used for the filter.
path_func: function(id: ID, path: string, rec: any): string &optional; path_func: function(id: ID, path: string, rec: any): string &optional;
@ -129,27 +135,183 @@ export {
## Rotation interval. ## Rotation interval.
interv: interval &default=default_rotation_interval; interv: interval &default=default_rotation_interval;
## Callback function to trigger for rotated files. If not set, ## Callback function to trigger for rotated files. If not set, the
## the default comes out of default_rotation_postprocessors. ## default comes out of :bro:id:`Log::default_rotation_postprocessors`.
postprocessor: function(info: RotationInfo) : bool &optional; postprocessor: function(info: RotationInfo) : bool &optional;
}; };
## Sentinel value for indicating that a filter was not found when looked up. ## Sentinel value for indicating that a filter was not found when looked up.
const no_filter: Filter = [$name="<not found>"]; # Sentinel. const no_filter: Filter = [$name="<not found>"];
# TODO: Document. ## Creates a new logging stream with the default filter.
##
## id: The ID enum to be associated with the new logging stream.
##
## stream: A record defining the content that the new stream will log.
##
## Returns: True if a new logging stream was successfully created and
## a default filter added to it.
##
## .. bro:see:: Log::add_default_filter Log::remove_default_filter
global create_stream: function(id: ID, stream: Stream) : bool; global create_stream: function(id: ID, stream: Stream) : bool;
## Enables a previously disabled logging stream. Disabled streams
## will not be written to until they are enabled again. New streams
## are enabled by default.
##
## id: The ID associated with the logging stream to enable.
##
## Returns: True if the stream is re-enabled or was not previously disabled.
##
## .. bro:see:: Log::disable_stream
global enable_stream: function(id: ID) : bool; global enable_stream: function(id: ID) : bool;
## Disables a currently enabled logging stream. Disabled streams
## will not be written to until they are enabled again. New streams
## are enabled by default.
##
## id: The ID associated with the logging stream to disable.
##
## Returns: True if the stream is now disabled or was already disabled.
##
## .. bro:see:: Log::enable_stream
global disable_stream: function(id: ID) : bool; global disable_stream: function(id: ID) : bool;
## Adds a custom filter to an existing logging stream. If a filter
## with a matching ``name`` field already exists for the stream, it
## is removed when the new filter is successfully added.
##
## id: The ID associated with the logging stream to filter.
##
## filter: A record describing the desired logging parameters.
##
## Returns: True if the filter was successfully added, false if
## the filter was not added or the *filter* argument was not
## the correct type.
##
## .. bro:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter
global add_filter: function(id: ID, filter: Filter) : bool; global add_filter: function(id: ID, filter: Filter) : bool;
## Removes a filter from an existing logging stream.
##
## id: The ID associated with the logging stream from which to
## remove a filter.
##
## name: A string to match against the ``name`` field of a
## :bro:type:`Log::Filter` for identification purposes.
##
## Returns: True if the logging stream's filter was removed or
## if no filter associated with *name* was found.
##
## .. bro:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter
global remove_filter: function(id: ID, name: string) : bool; global remove_filter: function(id: ID, name: string) : bool;
global get_filter: function(id: ID, name: string) : Filter; # Returns no_filter if not found.
## Gets a filter associated with an existing logging stream.
##
## id: The ID associated with a logging stream from which to
## obtain one of its filters.
##
## name: A string to match against the ``name`` field of a
## :bro:type:`Log::Filter` for identification purposes.
##
## Returns: A filter attached to the logging stream *id* matching
## *name* or, if no matches are found, the
## :bro:id:`Log::no_filter` sentinel value.
##
## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter
## Log::remove_default_filter
global get_filter: function(id: ID, name: string) : Filter;
## Writes a new log line/entry to a logging stream.
##
## id: The ID associated with a logging stream to be written to.
##
## columns: A record value describing the values of each field/column
## to write to the log stream.
##
## Returns: True if the stream was found and no error occurred in writing
## to it or if the stream was disabled and nothing was written.
## False if the stream was not found, or the *columns*
## argument did not match what the stream was initially defined
## to handle, or one of the stream's filters has an invalid
## ``path_func``.
##
## .. bro:see:: Log::enable_stream Log::disable_stream
global write: function(id: ID, columns: any) : bool; global write: function(id: ID, columns: any) : bool;
## Sets the buffering status for all the writers of a given logging stream.
## A given writer implementation may or may not support buffering and if it
## doesn't then toggling buffering with this function has no effect.
##
## id: The ID associated with a logging stream for which to
## enable/disable buffering.
##
## buffered: Whether to enable or disable log buffering.
##
## Returns: True if buffering status was set, false if the logging stream
## does not exist.
##
## .. bro:see:: Log::flush
global set_buf: function(id: ID, buffered: bool): bool; global set_buf: function(id: ID, buffered: bool): bool;
## Flushes any currently buffered output for all the writers of a given
## logging stream.
##
## id: The ID associated with a logging stream for which to flush buffered
## data.
##
## Returns: True if all writers of a log stream were signalled to flush
## buffered data or if the logging stream is disabled,
## false if the logging stream does not exist.
##
## .. bro:see:: Log::set_buf Log::enable_stream Log::disable_stream
global flush: function(id: ID): bool; global flush: function(id: ID): bool;
## Adds a default :bro:type:`Log::Filter` record with ``name`` field
## set as "default" to a given logging stream.
##
## id: The ID associated with a logging stream for which to add a default
## filter.
##
## Returns: The status of a call to :bro:id:`Log::add_filter` using a
## default :bro:type:`Log::Filter` argument with ``name`` field
## set to "default".
##
## .. bro:see:: Log::add_filter Log::remove_filter
## Log::remove_default_filter
global add_default_filter: function(id: ID) : bool; global add_default_filter: function(id: ID) : bool;
## Removes the :bro:type:`Log::Filter` with ``name`` field equal to
## "default".
##
## id: The ID associated with a logging stream from which to remove the
## default filter.
##
## Returns: The status of a call to :bro:id:`Log::remove_filter` using
## "default" as the argument.
##
## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter
global remove_default_filter: function(id: ID) : bool; global remove_default_filter: function(id: ID) : bool;
## Runs a command given by :bro:id:`Log::default_rotation_postprocessor_cmd`
## on a rotated file. Meant to be called from postprocessor functions
## that are added to :bro:id:`Log::default_rotation_postprocessors`.
##
## info: A record holding meta-information about the log being rotated.
##
## npath: The new path of the file (after already being rotated/processed
## by a writer-specific postprocessor as defined in
## :bro:id:`Log::default_rotation_postprocessors`).
##
## Returns: True when :bro:id:`Log::default_rotation_postprocessor_cmd`
## is empty or the system command given by it has been invoked
## to postprocess a rotated log file.
##
## .. bro:see:: Log::default_rotation_date_format
## Log::default_rotation_postprocessor_cmd
## Log::default_rotation_postprocessors
global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool; global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool;
} }
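Putting the functions above together, a minimal hypothetical stream (module name, fields, and paths are placeholders) would be created, filtered, and written to roughly like this:

.. code:: bro

   module Example;

   export {
       redef enum Log::ID += { LOG };

       type Info: record {
           ts:  time   &log;
           msg: string &log;
       };
   }

   event bro_init()
       {
       Log::create_stream(Example::LOG, [$columns=Info]);

       # Add an extra filter that writes to its own path and rotates hourly.
       Log::add_filter(Example::LOG, [$name="hourly", $path="example-hourly",
                                      $interv=1hr]);

       Log::write(Example::LOG, [$ts=network_time(), $msg="logging is set up"]);
       }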
@ -1 +1,2 @@
@load ./scp @load ./scp
@load ./sftp
@ -1,30 +1,56 @@
##! This script defines a postprocessing function that can be applied ##! This script defines a postprocessing function that can be applied
##! to a logging filter in order to automatically SCP (secure copy) ##! to a logging filter in order to automatically SCP (secure copy)
##! a log stream (or a subset of it) to a remote host at configurable ##! a log stream (or a subset of it) to a remote host at configurable
##! rotation time intervals. ##! rotation time intervals. Generally, to use this functionality
##! you must handle the :bro:id:`bro_init` event and do the following
##! in your handler:
##!
##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path,
##! rotation interval, and set the ``postprocessor`` to
##! :bro:id:`Log::scp_postprocessor`.
##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`.
##! 3) Add a table entry to :bro:id:`Log::scp_destinations` for the filter's
##! writer/path pair which defines a set of :bro:type:`Log::SCPDestination`
##! records.
module Log; module Log;
export { export {
## This postprocessor SCP's the rotated-log to all the remote hosts ## Secure-copies the rotated-log to all the remote hosts
## defined in :bro:id:`Log::scp_destinations` and then deletes ## defined in :bro:id:`Log::scp_destinations` and then deletes
## the local copy of the rotated-log. It's not active when ## the local copy of the rotated-log. It's not active when
## reading from trace files. ## reading from trace files.
##
## info: A record holding meta-information about the log file to be
## postprocessed.
##
## Returns: True if the secure-copy system command was initiated or
## if no destination was configured for the log as described
## by *info*.
global scp_postprocessor: function(info: Log::RotationInfo): bool; global scp_postprocessor: function(info: Log::RotationInfo): bool;
## A container that describes the remote destination for the SCP command ## A container that describes the remote destination for the SCP command
## argument as ``user@host:path``. ## argument as ``user@host:path``.
type SCPDestination: record { type SCPDestination: record {
## The remote user to log in as. A trust mechanism should be
## pre-established.
user: string; user: string;
## The remote host to which to transfer logs.
host: string; host: string;
## The path/directory on the remote host to send logs.
path: string; path: string;
}; };
## A table indexed by a particular log writer and filter path, that yields ## A table indexed by a particular log writer and filter path, that yields
## a set of remote destinations. The :bro:id:`Log::scp_postprocessor`
## function queries this table upon log rotation and performs a secure ## function queries this table upon log rotation and performs a secure
## copy of the rotated-log to each destination in the set. ## copy of the rotated-log to each destination in the set. This
## table can be modified at run-time.
global scp_destinations: table[Writer, string] of set[SCPDestination]; global scp_destinations: table[Writer, string] of set[SCPDestination];
## Default naming format for timestamps embedded into log filenames
## that use the SCP rotator.
const scp_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef;
} }
function scp_postprocessor(info: Log::RotationInfo): bool function scp_postprocessor(info: Log::RotationInfo): bool
@ -34,7 +60,11 @@ function scp_postprocessor(info: Log::RotationInfo): bool
local command = ""; local command = "";
for ( d in scp_destinations[info$writer, info$path] ) for ( d in scp_destinations[info$writer, info$path] )
command += fmt("scp %s %s@%s:%s;", info$fname, d$user, d$host, d$path); {
local dst = fmt("%s/%s.%s.log", d$path, info$path,
strftime(Log::scp_rotation_date_format, info$open));
command += fmt("scp %s %s@%s:%s;", info$fname, d$user, d$host, dst);
}
command += fmt("/bin/rm %s", info$fname); command += fmt("/bin/rm %s", info$fname);
system(command); system(command);
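Following the three steps listed at the top of this script, a hedged example handler might look like the sketch below. It assumes the stock ``Conn::LOG`` stream and the ``Log::WRITER_ASCII`` writer; the user, host, and paths are placeholders. The SFTP postprocessor that follows is configured analogously via :bro:id:`Log::sftp_destinations`.

.. code:: bro

   event bro_init()
       {
       Log::add_filter(Conn::LOG, [$name="conn-scp", $path="conn-scp",
                                   $interv=1hr,
                                   $postprocessor=Log::scp_postprocessor]);

       Log::scp_destinations[Log::WRITER_ASCII, "conn-scp"] =
           set([$user="logarchive", $host="archive.example.com",
                $path="/data/bro-logs"]);
       }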
@ -0,0 +1,73 @@
##! This script defines a postprocessing function that can be applied
##! to a logging filter in order to automatically SFTP
##! a log stream (or a subset of it) to a remote host at configurable
##! rotation time intervals. Generally, to use this functionality
##! you must handle the :bro:id:`bro_init` event and do the following
##! in your handler:
##!
##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path,
##! rotation interval, and set the ``postprocessor`` to
##! :bro:id:`Log::sftp_postprocessor`.
##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`.
##! 3) Add a table entry to :bro:id:`Log::sftp_destinations` for the filter's
##! writer/path pair which defines a set of :bro:type:`Log::SFTPDestination`
##! records.
module Log;
export {
## Securely transfers the rotated-log to all the remote hosts
## defined in :bro:id:`Log::sftp_destinations` and then deletes
## the local copy of the rotated-log. It's not active when
## reading from trace files.
##
## info: A record holding meta-information about the log file to be
## postprocessed.
##
## Returns: True if the SFTP system command was initiated or
## if no destination was configured for the log as described
## by *info*.
global sftp_postprocessor: function(info: Log::RotationInfo): bool;
## A container that describes the remote destination for the SFTP command,
## comprised of the username, host, and path at which to upload the file.
type SFTPDestination: record {
## The remote user to log in as. A trust mechanism should be
## pre-established.
user: string;
## The remote host to which to transfer logs.
host: string;
## The path/directory on the remote host to send logs.
path: string;
};
## A table indexed by a particular log writer and filter path, that yields
## a set of remote destinations. The :bro:id:`Log::sftp_postprocessor`
## function queries this table upon log rotation and performs a secure
## transfer of the rotated-log to each destination in the set. This
## table can be modified at run-time.
global sftp_destinations: table[Writer, string] of set[SFTPDestination];
## Default naming format for timestamps embedded into log filenames
## that use the SFTP rotator.
const sftp_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef;
}
function sftp_postprocessor(info: Log::RotationInfo): bool
{
if ( reading_traces() || [info$writer, info$path] !in sftp_destinations )
return T;
local command = "";
for ( d in sftp_destinations[info$writer, info$path] )
{
local dst = fmt("%s/%s.%s.log", d$path, info$path,
strftime(Log::sftp_rotation_date_format, info$open));
command += fmt("echo put %s %s | sftp -b - %s@%s;", info$fname, dst,
d$user, d$host);
}
command += fmt("/bin/rm %s", info$fname);
system(command);
return T;
}
@ -1,4 +1,5 @@
##! Interface for the ascii log writer. ##! Interface for the ASCII log writer. Redefinable options are available
##! to tweak the output format of ASCII logs.
module LogAscii; module LogAscii;
@ -7,7 +8,8 @@ export {
## into files. This is primarily for debugging purposes. ## into files. This is primarily for debugging purposes.
const output_to_stdout = F &redef; const output_to_stdout = F &redef;
## If true, include a header line with column names. ## If true, include a header line with column names and description
## of the other ASCII logging options that were used.
const include_header = T &redef; const include_header = T &redef;
## Prefix for the header line if included. ## Prefix for the header line if included.
@ -19,8 +21,9 @@ export {
## Separator between set elements. ## Separator between set elements.
const set_separator = "," &redef; const set_separator = "," &redef;
## String to use for empty fields. ## String to use for empty fields. This should be different from
const empty_field = "-" &redef; ## *unset_field* to make the output non-ambigious.
const empty_field = "(empty)" &redef;
## String to use for an unset &optional field. ## String to use for an unset &optional field.
const unset_field = "-" &redef; const unset_field = "-" &redef;
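For instance, a site could tweak these redefinable options in its local configuration along these lines; the values are arbitrary examples:

.. code:: bro

   redef LogAscii::output_to_stdout = T;
   redef LogAscii::set_separator = "|";
   redef LogAscii::empty_field = "EMPTY";
   redef LogAscii::unset_field = "N/A";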
@ -13,11 +13,11 @@
module Metrics; module Metrics;
export { export {
## This value allows a user to decide how large of result groups the ## Allows a user to decide how large of result groups the
## workers should transmit values. ## workers should transmit values for cluster metric aggregation.
const cluster_send_in_groups_of = 50 &redef; const cluster_send_in_groups_of = 50 &redef;
## This is the percent of the full threshold value that needs to be met ## The percent of the full threshold value that needs to be met
## on a single worker for that worker to send the value to its manager in ## on a single worker for that worker to send the value to its manager in
## order for it to request a global view for that value. There is no ## order for it to request a global view for that value. There is no
## requirement that the manager requests a global view for the index ## requirement that the manager requests a global view for the index
@ -25,11 +25,11 @@ export {
## recently. ## recently.
const cluster_request_global_view_percent = 0.1 &redef; const cluster_request_global_view_percent = 0.1 &redef;
## This event is sent by the manager in a cluster to initiate the ## Event sent by the manager in a cluster to initiate the
## collection of metrics values for a filter. ## collection of metrics values for a filter.
global cluster_filter_request: event(uid: string, id: ID, filter_name: string); global cluster_filter_request: event(uid: string, id: ID, filter_name: string);
## This event is sent by nodes that are collecting metrics after receiving ## Event sent by nodes that are collecting metrics after receiving
## a request for the metric filter from the manager. ## a request for the metric filter from the manager.
global cluster_filter_response: event(uid: string, id: ID, filter_name: string, data: MetricTable, done: bool); global cluster_filter_response: event(uid: string, id: ID, filter_name: string, data: MetricTable, done: bool);
@ -40,12 +40,12 @@ export {
global cluster_index_request: event(uid: string, id: ID, filter_name: string, index: Index); global cluster_index_request: event(uid: string, id: ID, filter_name: string, index: Index);
## This event is sent by nodes in response to a ## This event is sent by nodes in response to a
## :bro:id:`cluster_index_request` event. ## :bro:id:`Metrics::cluster_index_request` event.
global cluster_index_response: event(uid: string, id: ID, filter_name: string, index: Index, val: count); global cluster_index_response: event(uid: string, id: ID, filter_name: string, index: Index, val: count);
## This is sent by workers to indicate that they crossed the percent of the ## This is sent by workers to indicate that they crossed the percent of the
## current threshold by the percentage defined globally in ## current threshold by the percentage defined globally in
## :bro:id:`cluster_request_global_view_percent` ## :bro:id:`Metrics::cluster_request_global_view_percent`
global cluster_index_intermediate_response: event(id: Metrics::ID, filter_name: string, index: Metrics::Index, val: count); global cluster_index_intermediate_response: event(id: Metrics::ID, filter_name: string, index: Metrics::Index, val: count);
## This event is scheduled internally on workers to send result chunks. ## This event is scheduled internally on workers to send result chunks.
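For example, a cluster that aggregates very large metric tables might raise the group size and the intermediate-update percentage. A minimal sketch using only the two constants documented above:

    redef Metrics::cluster_send_in_groups_of = 100;
    redef Metrics::cluster_request_global_view_percent = 0.2;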

View file

@ -1,13 +1,16 @@
##! This is the implementation of the metrics framework. ##! The metrics framework provides a way to count and measure data.
@load base/frameworks/notice @load base/frameworks/notice
module Metrics; module Metrics;
export { export {
## The metrics logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Identifiers for metrics to collect.
type ID: enum { type ID: enum {
## Blank placeholder value.
NOTHING, NOTHING,
}; };
@ -15,10 +18,13 @@ export {
## current value to the logging stream. ## current value to the logging stream.
const default_break_interval = 15mins &redef; const default_break_interval = 15mins &redef;
## This is the interval for how often notices will happen after they have ## This is the interval for how often threshold based notices will happen
## already fired. ## after they have already fired.
const renotice_interval = 1hr &redef; const renotice_interval = 1hr &redef;
## Represents a thing which is having metrics collected for it. An instance
## of this record type and a :bro:type:`Metrics::ID` together represent a
## single measurement.
type Index: record { type Index: record {
## Host is the value to which this metric applies. ## Host is the value to which this metric applies.
host: addr &optional; host: addr &optional;
@ -37,17 +43,30 @@ export {
network: subnet &optional; network: subnet &optional;
} &log; } &log;
## The record type that is used for logging metrics.
type Info: record { type Info: record {
## Timestamp at which the metric was "broken".
ts: time &log; ts: time &log;
## What measurement the metric represents.
metric_id: ID &log; metric_id: ID &log;
## The name of the filter being logged. :bro:type:`Metrics::ID` values
## can have multiple filters which represent different perspectives on
## the data so this is necessary to understand the value.
filter_name: string &log; filter_name: string &log;
## What the metric value applies to.
index: Index &log; index: Index &log;
## The simple numeric value of the metric.
value: count &log; value: count &log;
}; };
# TODO: configure a metrics filter logging stream to log the current # TODO: configure a metrics filter logging stream to log the current
# metrics configuration in case someone is looking through # metrics configuration in case someone is looking through
# old logs and the configuration has changed since then. # old logs and the configuration has changed since then.
## Filters define how the data from a metric is aggregated and handled.
## Filters can be used to set how often the measurements are cut or "broken"
## and logged or how the data within them is aggregated. It's also
## possible to disable logging and use filters for thresholding.
type Filter: record { type Filter: record {
## The :bro:type:`Metrics::ID` that this filter applies to. ## The :bro:type:`Metrics::ID` that this filter applies to.
id: ID &optional; id: ID &optional;
@ -62,7 +81,7 @@ export {
aggregation_mask: count &optional; aggregation_mask: count &optional;
## This is essentially a mapping table between addresses and subnets. ## This is essentially a mapping table between addresses and subnets.
aggregation_table: table[subnet] of subnet &optional; aggregation_table: table[subnet] of subnet &optional;
## The interval at which the metric should be "broken" and written ## The interval at which this filter should be "broken" and written
## to the logging stream. The counters are also reset to zero at ## to the logging stream. The counters are also reset to zero at
## this time so any threshold based detection needs to be set to a ## this time so any threshold based detection needs to be set to a
## number that should be expected to happen within this period. ## number that should be expected to happen within this period.
@ -79,7 +98,7 @@ export {
notice_threshold: count &optional; notice_threshold: count &optional;
## A series of thresholds at which to generate notices. ## A series of thresholds at which to generate notices.
notice_thresholds: vector of count &optional; notice_thresholds: vector of count &optional;
## How often this notice should be raised for this metric index. It ## How often this notice should be raised for this filter. It
## will be generated every time it crosses a threshold, but if the ## will be generated every time it crosses a threshold, but if the
## $break_interval is set to 5mins and this is set to 1hr the notice ## $break_interval is set to 5mins and this is set to 1hr the notice
## will only be generated once per hour even if something crosses the ## will only be generated once per hour even if something crosses the
@ -87,15 +106,43 @@ export {
notice_freq: interval &optional; notice_freq: interval &optional;
}; };
## Function to associate a metric filter with a metric ID.
##
## id: The metric ID that the filter should be associated with.
##
## filter: The record representing the filter configuration.
global add_filter: function(id: ID, filter: Filter); global add_filter: function(id: ID, filter: Filter);
## Add data into a :bro:type:`Metrics::ID`. This should be called when
## a script has measured some point value and is ready to increment the
## counters.
##
## id: The metric ID that the data represents.
##
## index: The metric index that the value is to be added to.
##
## increment: How much to increment the counter by.
global add_data: function(id: ID, index: Index, increment: count); global add_data: function(id: ID, index: Index, increment: count);
## Helper function to represent a :bro:type:`Metrics::Index` value as
## a simple string
##
## index: The metric index that is to be converted into a string.
##
## Returns: A string representation of the metric index.
global index2str: function(index: Index): string; global index2str: function(index: Index): string;
# This is the event that is used to "finish" metrics and adapt the metrics ## Event that is used to "finish" metrics and adapt the metrics
# framework for clustered or non-clustered usage. ## framework for clustered or non-clustered usage.
##
## .. note:: This is primarily intended for internal use.
global log_it: event(filter: Filter); global log_it: event(filter: Filter);
## Event to access metrics records as they are passed to the logging framework.
global log_metrics: event(rec: Info); global log_metrics: event(rec: Info);
## Type to store a table of metrics values. Internal use only!
type MetricTable: table[Index] of count &default=0;
} }
redef record Notice::Info += { redef record Notice::Info += {
@ -105,7 +152,6 @@ redef record Notice::Info += {
global metric_filters: table[ID] of vector of Filter = table(); global metric_filters: table[ID] of vector of Filter = table();
global filter_store: table[ID, string] of Filter = table(); global filter_store: table[ID, string] of Filter = table();
type MetricTable: table[Index] of count &default=0;
# This is indexed by metric ID and stream filter name. # This is indexed by metric ID and stream filter name.
global store: table[ID, string] of MetricTable = table() &default=table(); global store: table[ID, string] of MetricTable = table() &default=table();
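A minimal usage sketch of the add_filter/add_data API documented above; the HTTP_REQUESTS identifier and the http_request handler are illustrative assumptions, not part of the framework:

    @load base/frameworks/metrics

    # Hypothetical metric: count HTTP requests per originating host.
    redef enum Metrics::ID += { HTTP_REQUESTS };

    event bro_init()
        {
        # Cut and log the counters every 5 minutes.
        Metrics::add_filter(HTTP_REQUESTS, [$break_interval=5mins]);
        }

    event http_request(c: connection, method: string, original_URI: string,
                       unescaped_URI: string, version: string)
        {
        Metrics::add_data(HTTP_REQUESTS, [$host=c$id$orig_h], 1);
        }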

View file

@ -31,6 +31,7 @@ export {
## Add a helper to the notice policy for looking up GeoIP data. ## Add a helper to the notice policy for looking up GeoIP data.
redef Notice::policy += { redef Notice::policy += {
[$pred(n: Notice::Info) = { return (n$note in Notice::lookup_location_types); }, [$pred(n: Notice::Info) = { return (n$note in Notice::lookup_location_types); },
$action = ACTION_ADD_GEODATA,
$priority = 10], $priority = 10],
}; };
} }
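Usage then only requires listing the notice types that should be geo-located. A sketch, assuming Notice::lookup_location_types is a redefinable set of Notice::Type and using SSH::Login purely as a stand-in:

    # Request GeoIP data for a particular notice type.
    redef Notice::lookup_location_types += { SSH::Login };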

View file

@ -1,3 +1,8 @@
##! Adds a new notice action type which can be used to email notices
##! to the administrators of a particular address space as set by
##! :bro:id:`Site::local_admins` if the notice contains a source
##! or destination address that lies within their space.
@load ../main @load ../main
@load base/utils/site @load base/utils/site
@ -6,8 +11,8 @@ module Notice;
export { export {
redef enum Action += { redef enum Action += {
## Indicate that the generated email should be addressed to the ## Indicate that the generated email should be addressed to the
## appropriate email addresses as found in the ## appropriate email addresses as found by the
## :bro:id:`Site::addr_to_emails` variable based on the relevant ## :bro:id:`Site::get_emails` function based on the relevant
## address or addresses indicated in the notice. ## address or addresses indicated in the notice.
ACTION_EMAIL_ADMIN ACTION_EMAIL_ADMIN
}; };
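With this script loaded, the action is attached through the regular notice policy, which the description above says is extensible per site. A hedged sketch, using SSH::Login as a stand-in notice type and assuming Site::local_admins has been populated:

    # Email the responsible local admins whenever the chosen notice type fires.
    redef Notice::policy += {
        [$pred(n: Notice::Info) = { return n$note == SSH::Login; },
         $action = Notice::ACTION_EMAIL_ADMIN,
         $priority = 5],
    };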

View file

@ -1,3 +1,5 @@
##! Allows configuration of a pager email address to which notices can be sent.
@load ../main @load ../main
module Notice; module Notice;
@ -5,7 +7,7 @@ module Notice;
export { export {
redef enum Action += { redef enum Action += {
## Indicates that the notice should be sent to the pager email address ## Indicates that the notice should be sent to the pager email address
## configured in the :bro:id:`mail_page_dest` variable. ## configured in the :bro:id:`Notice::mail_page_dest` variable.
ACTION_PAGE ACTION_PAGE
}; };
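Configuration is analogous to the other email actions: point the pager address at a mailbox and attach ACTION_PAGE to the notice types of interest via the notice policy. The address itself is set like this (sketch):

    redef Notice::mail_page_dest = "oncall-pager@example.com";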

View file

@ -1,6 +1,6 @@
#! Notice extension that mails out a pretty-printed version of alarm.log ##! Notice extension that mails out a pretty-printed version of alarm.log
#! in regular intervals, formatted for better human readability. If activated, ##! in regular intervals, formatted for better human readability. If activated,
#! that replaces the default summary mail having the raw log output. ##! that replaces the default summary mail having the raw log output.
@load base/frameworks/cluster @load base/frameworks/cluster
@load ../main @load ../main
@ -14,14 +14,17 @@ export {
## Address to send the pretty-printed reports to. Default if not set is ## Address to send the pretty-printed reports to. Default if not set is
## :bro:id:`Notice::mail_dest`. ## :bro:id:`Notice::mail_dest`.
const mail_dest_pretty_printed = "" &redef; const mail_dest_pretty_printed = "" &redef;
## If an address from one of these networks is reported, we mark ## If an address from one of these networks is reported, we mark
## the entry with an addition quote symbol (i.e., ">"). Many MUAs ## the entry with an additional quote symbol (i.e., ">"). Many MUAs
## then highlight such lines differently. ## then highlight such lines differently.
global flag_nets: set[subnet] &redef; global flag_nets: set[subnet] &redef;
## Function that renders a single alarm. Can be overridden. ## Function that renders a single alarm. Can be overridden.
global pretty_print_alarm: function(out: file, n: Info) &redef; global pretty_print_alarm: function(out: file, n: Info) &redef;
## Force generating mail file, even if reading from traces or no mail
## destination is defined. This is mainly for testing.
global force_email_summaries = F &redef;
} }
# We maintain an old-style file recording the pretty-printed alarms. # We maintain an old-style file recording the pretty-printed alarms.
@ -32,6 +35,9 @@ global pp_alarms_open: bool = F;
# Returns True if pretty-printed alarm summaries are activated. # Returns True if pretty-printed alarm summaries are activated.
function want_pp() : bool function want_pp() : bool
{ {
if ( force_email_summaries )
return T;
return (pretty_print_alarms && ! reading_traces() return (pretty_print_alarms && ! reading_traces()
&& (mail_dest != "" || mail_dest_pretty_printed != "")); && (mail_dest != "" || mail_dest_pretty_printed != ""));
} }
@ -44,34 +50,45 @@ function pp_open()
pp_alarms_open = T; pp_alarms_open = T;
pp_alarms = open(pp_alarms_name); pp_alarms = open(pp_alarms_name);
local dest = mail_dest_pretty_printed != "" ? mail_dest_pretty_printed
: mail_dest;
local headers = email_headers("Alarm summary", dest);
write_file(pp_alarms, headers + "\n");
} }
# Closes and mails out the current output file. # Closes and mails out the current output file.
function pp_send() function pp_send(rinfo: Log::RotationInfo)
{ {
if ( ! pp_alarms_open ) if ( ! pp_alarms_open )
return; return;
write_file(pp_alarms, "\n\n--\n[Automatically generated]\n\n"); write_file(pp_alarms, "\n\n--\n[Automatically generated]\n\n");
close(pp_alarms); close(pp_alarms);
system(fmt("/bin/cat %s | %s -t -oi && /bin/rm %s",
pp_alarms_name, sendmail, pp_alarms_name));
pp_alarms_open = F; pp_alarms_open = F;
local from = strftime("%H:%M:%S", rinfo$open);
local to = strftime("%H:%M:%S", rinfo$close);
local subject = fmt("Alarm summary from %s-%s", from, to);
local dest = mail_dest_pretty_printed != "" ? mail_dest_pretty_printed
: mail_dest;
if ( dest == "" )
# No mail destination configured, just leave the file alone. This is mainly for
# testing.
return;
local headers = email_headers(subject, dest);
local header_name = pp_alarms_name + ".tmp";
local header = open(header_name);
write_file(header, headers + "\n");
close(header);
system(fmt("/bin/cat %s %s | %s -t -oi && /bin/rm -f %s %s",
header_name, pp_alarms_name, sendmail, header_name, pp_alarms_name));
} }
# Postprocessor function that triggers the email. # Postprocessor function that triggers the email.
function pp_postprocessor(info: Log::RotationInfo): bool function pp_postprocessor(info: Log::RotationInfo): bool
{ {
if ( want_pp() ) if ( want_pp() )
pp_send(); pp_send(info);
return T; return T;
} }
@ -93,7 +110,7 @@ event notice(n: Notice::Info) &priority=-5
if ( ! want_pp() ) if ( ! want_pp() )
return; return;
if ( ACTION_LOG !in n$actions ) if ( ACTION_ALARM !in n$actions )
return; return;
if ( ! pp_alarms_open ) if ( ! pp_alarms_open )
@ -154,31 +171,25 @@ function pretty_print_alarm(out: file, n: Info)
if ( n?$id ) if ( n?$id )
{ {
orig_p = fmt(":%s", n$id$orig_p); h1 = n$id$orig_h;
resp_p = fmt(":%s", n$id$resp_p); h2 = n$id$resp_h;
who = fmt("%s:%s -> %s:%s", h1, n$id$orig_p, h2, n$id$resp_p);
} }
else if ( n?$src && n?$dst )
if ( n?$src && n?$dst )
{ {
h1 = n$src; h1 = n$src;
h2 = n$dst; h2 = n$dst;
who = fmt("%s%s -> %s%s", h1, orig_p, h2, resp_p); who = fmt("%s -> %s", h1, h2);
if ( n?$uid )
who = fmt("%s (uid %s)", who, n$uid );
} }
else if ( n?$src ) else if ( n?$src )
{ {
local p = "";
if ( n?$p )
p = fmt(":%s", n$p);
h1 = n$src; h1 = n$src;
who = fmt("%s%s", h1, p); who = fmt("%s%s", h1, (n?$p ? fmt(":%s", n$p) : ""));
} }
if ( n?$uid )
who = fmt("%s (uid %s)", who, n$uid );
local flag = (h1 in flag_nets || h2 in flag_nets); local flag = (h1 in flag_nets || h2 in flag_nets);
local line1 = fmt(">%s %D %s %s", (flag ? ">" : " "), network_time(), n$note, who); local line1 = fmt(">%s %D %s %s", (flag ? ">" : " "), network_time(), n$note, who);
@ -191,6 +202,12 @@ function pretty_print_alarm(out: file, n: Info)
return; return;
} }
if ( reading_traces() )
{
do_msg(out, n, line1, line2, line3, h1, "<skipped>", h2, "<skipped>");
return;
}
when ( local h1name = lookup_addr(h1) ) when ( local h1name = lookup_addr(h1) )
{ {
if ( h2 == 0.0.0.0 ) if ( h2 == 0.0.0.0 )
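Enabling the pretty-printed summaries typically only requires pointing them at a mailbox and, optionally, flagging networks of special interest. A sketch using the options defined above (module namespace Notice assumed):

    redef Notice::mail_dest_pretty_printed = "alarms@example.com";
    redef Notice::flag_nets += { 192.168.0.0/16 };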

View file

@ -1,4 +1,6 @@
##! Implements notice functionality across clusters. ##! Implements notice functionality across clusters. Worker nodes
##! will disable notice/alarm logging streams and forward notice
##! events to the manager node for logging/processing.
@load ./main @load ./main
@load base/frameworks/cluster @load base/frameworks/cluster
@ -7,10 +9,15 @@ module Notice;
export { export {
## This is the event used to transport notices on the cluster. ## This is the event used to transport notices on the cluster.
##
## n: The notice information to be sent to the cluster manager for
## further processing.
global cluster_notice: event(n: Notice::Info); global cluster_notice: event(n: Notice::Info);
} }
## Manager can communicate notice suppression to workers.
redef Cluster::manager2worker_events += /Notice::begin_suppression/; redef Cluster::manager2worker_events += /Notice::begin_suppression/;
## Workers need the ability to forward notices to the manager.
redef Cluster::worker2manager_events += /Notice::cluster_notice/; redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER ) @if ( Cluster::local_node_type() != Cluster::MANAGER )

View file

@ -1,8 +1,18 @@
##! Loading this script extends the :bro:enum:`Notice::ACTION_EMAIL` action
##! by appending to the email the hostnames associated with
##! :bro:type:`Notice::Info`'s *src* and *dst* fields as determined by a
##! DNS lookup.
@load ../main @load ../main
module Notice; module Notice;
# This probably doesn't actually work due to the async lookup_addr. # We have to store references to the notices here because the when statement
# clones the frame which doesn't give us access to modify values outside
# of its execution scope. (we get a clone of the notice instead of a
# reference to the original notice)
global tmp_notice_storage: table[string] of Notice::Info &create_expire=max_email_delay+10secs;
event Notice::notice(n: Notice::Info) &priority=10 event Notice::notice(n: Notice::Info) &priority=10
{ {
if ( ! n?$src && ! n?$dst ) if ( ! n?$src && ! n?$dst )
@ -12,21 +22,31 @@ event Notice::notice(n: Notice::Info) &priority=10
if ( ACTION_EMAIL !in n$actions ) if ( ACTION_EMAIL !in n$actions )
return; return;
# I'm not recovering gracefully from the when statements because I want
# the notice framework to detect that something has exceeded the maximum
# allowed email delay and tell the user.
local uid = unique_id("");
tmp_notice_storage[uid] = n;
local output = ""; local output = "";
if ( n?$src ) if ( n?$src )
{ {
add n$email_delay_tokens["hostnames-src"];
when ( local src_name = lookup_addr(n$src) ) when ( local src_name = lookup_addr(n$src) )
{ {
output = string_cat("orig_h/src hostname: ", src_name, "\n"); output = string_cat("orig/src hostname: ", src_name, "\n");
n$email_body_sections[|n$email_body_sections|] = output; tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output;
delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-src"];
} }
} }
if ( n?$dst ) if ( n?$dst )
{ {
add n$email_delay_tokens["hostnames-dst"];
when ( local dst_name = lookup_addr(n$dst) ) when ( local dst_name = lookup_addr(n$dst) )
{ {
output = string_cat("resp_h/dst hostname: ", dst_name, "\n"); output = string_cat("resp/dst hostname: ", dst_name, "\n");
n$email_body_sections[|n$email_body_sections|] = output; tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output;
delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-dst"];
} }
} }
} }

View file

@ -2,8 +2,7 @@
##! are odd or potentially bad. Decisions of the meaning of various notices ##! are odd or potentially bad. Decisions of the meaning of various notices
##! need to be done per site because Bro does not ship with assumptions about ##! need to be done per site because Bro does not ship with assumptions about
##! what is bad activity for sites. More extensive documentation about using ##! what is bad activity for sites. More extensive documentation about using
##! the notice framework can be found in the documentation section of the ##! the notice framework can be found in :doc:`/notice`.
##! http://www.bro-ids.org/ website.
module Notice; module Notice;
@ -21,10 +20,10 @@ export {
## Scripts creating new notices need to redef this enum to add their own ## Scripts creating new notices need to redef this enum to add their own
## specific notice types which would then get used when they call the ## specific notice types which would then get used when they call the
## :bro:id:`NOTICE` function. The convention is to give a general category ## :bro:id:`NOTICE` function. The convention is to give a general category
## along with the specific notice separating words with underscores and using ## along with the specific notice separating words with underscores and
## leading capitals on each word except for abbreviations which are kept in ## using leading capitals on each word except for abbreviations which are
## all capitals. For example, SSH::Login is for heuristically guessed ## kept in all capitals. For example, SSH::Login is for heuristically
## successful SSH logins. ## guessed successful SSH logins.
type Type: enum { type Type: enum {
## Notice reporting a count of how often a notice occurred. ## Notice reporting a count of how often a notice occurred.
Tally, Tally,
@ -49,22 +48,37 @@ export {
}; };
## The notice framework is able to do automatic notice suppression by ## The notice framework is able to do automatic notice suppression by
## utilizing the $identifier field in :bro:type:`Info` records. ## utilizing the $identifier field in :bro:type:`Notice::Info` records.
## Set this to "0secs" to completely disable automated notice suppression. ## Set this to "0secs" to completely disable automated notice suppression.
const default_suppression_interval = 1hrs &redef; const default_suppression_interval = 1hrs &redef;
type Info: record { type Info: record {
## An absolute time indicating when the notice occurred, defaults
## to the current network time.
ts: time &log &optional; ts: time &log &optional;
## A connection UID which uniquely identifies the endpoints
## concerned with the notice.
uid: string &log &optional; uid: string &log &optional;
## A connection 4-tuple identifying the endpoints concerned with the
## notice.
id: conn_id &log &optional; id: conn_id &log &optional;
## These are shorthand ways of giving the uid and id to a notice. The ## A shorthand way of giving the uid and id to a notice. The
## reference to the actual connection will be deleted after applying ## reference to the actual connection will be deleted after applying
## the notice policy. ## the notice policy.
conn: connection &optional; conn: connection &optional;
## A shorthand way of giving the uid and id to a notice. The
## reference to the actual connection will be deleted after applying
## the notice policy.
iconn: icmp_conn &optional; iconn: icmp_conn &optional;
## The :bro:enum:`Notice::Type` of the notice. ## The transport protocol. Filled automatically when either conn, iconn
## or p is specified.
proto: transport_proto &log &optional;
## The :bro:type:`Notice::Type` of the notice.
note: Type &log; note: Type &log;
## The human readable message for the notice. ## The human readable message for the notice.
msg: string &log &optional; msg: string &log &optional;
@ -96,7 +110,13 @@ export {
## expand on notices that are being emailed. The normal way to add text ## expand on notices that are being emailed. The normal way to add text
## is to extend the vector by handling the :bro:id:`Notice::notice` ## is to extend the vector by handling the :bro:id:`Notice::notice`
## event and modifying the notice in place. ## event and modifying the notice in place.
email_body_sections: vector of string &default=vector(); email_body_sections: vector of string &optional;
## Adding a string "token" to this set will cause the notice framework's
## built-in emailing functionality to delay sending the email until
## either the token has been removed or the email has been delayed
## for :bro:id:`Notice::max_email_delay`.
email_delay_tokens: set[string] &optional;
## This field is to be provided when a notice is generated for the ## This field is to be provided when a notice is generated for the
## purpose of deduplicating notices. The identifier string should ## purpose of deduplicating notices. The identifier string should
@ -141,8 +161,9 @@ export {
## This is the record that defines the items that make up the notice policy. ## This is the record that defines the items that make up the notice policy.
type PolicyItem: record { type PolicyItem: record {
## This is the exact positional order in which the :bro:type:`PolicyItem` ## This is the exact positional order in which the
## records are checked. This is set internally by the notice framework. ## :bro:type:`Notice::PolicyItem` records are checked.
## This is set internally by the notice framework.
position: count &log &optional; position: count &log &optional;
## Define the priority for this check. Items are checked in ordered ## Define the priority for this check. Items are checked in ordered
## from highest value (10) to lowest value (0). ## from highest value (10) to lowest value (0).
@ -163,8 +184,8 @@ export {
suppress_for: interval &log &optional; suppress_for: interval &log &optional;
}; };
## This is the where the :bro:id:`Notice::policy` is defined. All notice ## Defines a notice policy that is extensible on a per-site basis.
## processing is done through this variable. ## All notice processing is done through this variable.
const policy: set[PolicyItem] = { const policy: set[PolicyItem] = {
[$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); }, [$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); },
$halt=T, $priority = 9], $halt=T, $priority = 9],
@ -193,8 +214,9 @@ export {
## Local system sendmail program. ## Local system sendmail program.
const sendmail = "/usr/sbin/sendmail" &redef; const sendmail = "/usr/sbin/sendmail" &redef;
## Email address to send notices with the :bro:enum:`ACTION_EMAIL` action ## Email address to send notices with the :bro:enum:`Notice::ACTION_EMAIL`
## or to send bulk alarm logs on rotation with :bro:enum:`ACTION_ALARM`. ## action or to send bulk alarm logs on rotation with
## :bro:enum:`Notice::ACTION_ALARM`.
const mail_dest = "" &redef; const mail_dest = "" &redef;
## Address that emails will be from. ## Address that emails will be from.
@ -203,18 +225,26 @@ export {
const reply_to = "" &redef; const reply_to = "" &redef;
## Text string prefixed to the subject of all emails sent out. ## Text string prefixed to the subject of all emails sent out.
const mail_subject_prefix = "[Bro]" &redef; const mail_subject_prefix = "[Bro]" &redef;
## The maximum amount of time a plugin can delay email from being sent.
const max_email_delay = 15secs &redef;
## A log postprocessing function that implements emailing the contents ## A log postprocessing function that implements emailing the contents
## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`. ## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`.
## The rotated log is removed upon being sent. ## The rotated log is removed upon being sent.
##
## info: A record containing the rotated log file information.
##
## Returns: True.
global log_mailing_postprocessor: function(info: Log::RotationInfo): bool; global log_mailing_postprocessor: function(info: Log::RotationInfo): bool;
## This is the event that is called as the entry point to the ## This is the event that is called as the entry point to the
## notice framework by the global :bro:id:`NOTICE` function. By the time ## notice framework by the global :bro:id:`NOTICE` function. By the time
## this event is generated, default values have already been filled out in ## this event is generated, default values have already been filled out in
## the :bro:type:`Notice::Info` record and synchronous functions in the ## the :bro:type:`Notice::Info` record and synchronous functions in the
## :bro:id:`Notice:sync_functions` have already been called. The notice ## :bro:id:`Notice::sync_functions` have already been called. The notice
## policy has also been applied. ## policy has also been applied.
##
## n: The record containing notice data.
global notice: event(n: Info); global notice: event(n: Info);
## This is a set of functions that provide a synchronous way for scripts ## This is a set of functions that provide a synchronous way for scripts
@ -231,30 +261,55 @@ export {
const sync_functions: set[function(n: Notice::Info)] = set() &redef; const sync_functions: set[function(n: Notice::Info)] = set() &redef;
## This event is generated when a notice begins to be suppressed. ## This event is generated when a notice begins to be suppressed.
##
## n: The record containing notice data regarding the notice type
## about to be suppressed.
global begin_suppression: event(n: Notice::Info); global begin_suppression: event(n: Notice::Info);
## This event is generated on each occurrence of an event being suppressed. ## This event is generated on each occurrence of an event being suppressed.
##
## n: The record containing notice data regarding the notice type
## being suppressed.
global suppressed: event(n: Notice::Info); global suppressed: event(n: Notice::Info);
## This event is generated when a notice stops being suppressed. ## This event is generated when a notice stops being suppressed.
##
## n: The record containing notice data regarding the notice type
## that was being suppressed.
global end_suppression: event(n: Notice::Info); global end_suppression: event(n: Notice::Info);
## Call this function to send a notice in an email. It is already used ## Call this function to send a notice in an email. It is already used
## by default with the built in :bro:enum:`ACTION_EMAIL` and ## by default with the built in :bro:enum:`Notice::ACTION_EMAIL` and
## :bro:enum:`ACTION_PAGE` actions. ## :bro:enum:`Notice::ACTION_PAGE` actions.
##
## n: The record of notice data to email.
##
## dest: The intended recipient of the notice email.
##
## extend: Whether to extend the email using the ``email_body_sections``
## field of *n*.
global email_notice_to: function(n: Info, dest: string, extend: bool); global email_notice_to: function(n: Info, dest: string, extend: bool);
## Constructs mail headers to which an email body can be appended for ## Constructs mail headers to which an email body can be appended for
## sending with sendmail. ## sending with sendmail.
##
## subject_desc: a subject string to use for the mail ## subject_desc: a subject string to use for the mail
##
## dest: recipient string to use for the mail ## dest: recipient string to use for the mail
##
## Returns: a string of mail headers to which an email body can be appended ## Returns: a string of mail headers to which an email body can be appended
global email_headers: function(subject_desc: string, dest: string): string; global email_headers: function(subject_desc: string, dest: string): string;
## This event can be handled to access the :bro:type:`Info` ## This event can be handled to access the :bro:type:`Notice::Info`
## record as it is sent on to the logging framework. ## record as it is sent on to the logging framework.
##
## rec: The record containing notice data before it is logged.
global log_notice: event(rec: Info); global log_notice: event(rec: Info);
## This is an internal wrapper for the global NOTICE function. Please ## This is an internal wrapper for the global :bro:id:`NOTICE` function;
## disregard. ## disregard.
##
## n: The record of notice data.
global internal_NOTICE: function(n: Notice::Info); global internal_NOTICE: function(n: Notice::Info);
} }
@ -347,11 +402,35 @@ function email_headers(subject_desc: string, dest: string): string
return header_text; return header_text;
} }
event delay_sending_email(n: Notice::Info, dest: string, extend: bool)
{
email_notice_to(n, dest, extend);
}
function email_notice_to(n: Notice::Info, dest: string, extend: bool) function email_notice_to(n: Notice::Info, dest: string, extend: bool)
{ {
if ( reading_traces() || dest == "" ) if ( reading_traces() || dest == "" )
return; return;
if ( extend )
{
if ( |n$email_delay_tokens| > 0 )
{
# If we still are within the max_email_delay, keep delaying.
if ( n$ts + max_email_delay > network_time() )
{
schedule 1sec { delay_sending_email(n, dest, extend) };
return;
}
else
{
event reporter_info(network_time(),
fmt("Notice email delay tokens weren't released in time (%s).", n$email_delay_tokens),
"");
}
}
}
local email_text = email_headers(fmt("%s", n$note), dest); local email_text = email_headers(fmt("%s", n$note), dest);
# First off, finish the headers and include the human readable messages # First off, finish the headers and include the human readable messages
@ -377,9 +456,10 @@ function email_notice_to(n: Notice::Info, dest: string, extend: bool)
# Add the extended information if it's requested. # Add the extended information if it's requested.
if ( extend ) if ( extend )
{ {
email_text = string_cat(email_text, "\nEmail Extensions\n");
email_text = string_cat(email_text, "----------------\n");
for ( i in n$email_body_sections ) for ( i in n$email_body_sections )
{ {
email_text = string_cat(email_text, "******************\n");
email_text = string_cat(email_text, n$email_body_sections[i], "\n"); email_text = string_cat(email_text, n$email_body_sections[i], "\n");
} }
} }
@ -410,7 +490,8 @@ event notice(n: Notice::Info) &priority=-5
} }
## This determines if a notice is being suppressed. It is only used ## This determines if a notice is being suppressed. It is only used
## internally as part of the mechanics for the global NOTICE function. ## internally as part of the mechanics for the global :bro:id:`NOTICE`
## function.
function is_being_suppressed(n: Notice::Info): bool function is_being_suppressed(n: Notice::Info): bool
{ {
if ( n?$identifier && [n$note, n$identifier] in suppressing ) if ( n?$identifier && [n$note, n$identifier] in suppressing )
@ -458,8 +539,12 @@ function apply_policy(n: Notice::Info)
n$p = n$id$resp_p; n$p = n$id$resp_p;
} }
if ( n?$p )
n$proto = get_port_transport_proto(n$p);
if ( n?$iconn ) if ( n?$iconn )
{ {
n$proto = icmp;
if ( ! n?$src ) if ( ! n?$src )
n$src = n$iconn$orig_h; n$src = n$iconn$orig_h;
if ( ! n?$dst ) if ( ! n?$dst )
@ -475,6 +560,11 @@ function apply_policy(n: Notice::Info)
if ( ! n?$actions ) if ( ! n?$actions )
n$actions = set(); n$actions = set();
if ( ! n?$email_body_sections )
n$email_body_sections = vector();
if ( ! n?$email_delay_tokens )
n$email_delay_tokens = set();
if ( ! n?$policy_items ) if ( ! n?$policy_items )
n$policy_items = set(); n$policy_items = set();
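Putting the pieces above together, a detection script defines its own Notice::Type following the naming convention described earlier and calls NOTICE with an Info record. The Heuristic module and its Suspicious_Conn type below are illustrative only:

    module Heuristic;

    export {
        redef enum Notice::Type += {
            ## A connection matched a purely illustrative local heuristic.
            Suspicious_Conn,
        };
    }

    event connection_established(c: connection)
        {
        NOTICE([$note=Suspicious_Conn,
                $msg="Connection matched a local heuristic",
                $conn=c,
                $identifier=cat(c$id$orig_h, c$id$resp_h)]);
        }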

View file

@ -1,3 +1,12 @@
##! This script provides a default set of actions to take for "weird activity"
##! events generated from Bro's event engine. Weird activity is defined as
##! unusual or exceptional activity that can indicate malformed connections,
##! traffic that doesn't conform to a particular protocol, malfunctioning
##! or misconfigured hardware, or even an attacker attempting to avoid/confuse
##! a sensor. Without context, it's hard to judge whether a particular
##! category of weird activity is interesting, but this script provides
##! a starting point for the user.
@load base/utils/conn-ids @load base/utils/conn-ids
@load base/utils/site @load base/utils/site
@load ./main @load ./main
@ -5,6 +14,7 @@
module Weird; module Weird;
export { export {
## The weird logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
redef enum Notice::Type += { redef enum Notice::Type += {
@ -12,6 +22,7 @@ export {
Activity, Activity,
}; };
## The record type which contains the column fields of the weird log.
type Info: record { type Info: record {
## The time when the weird occurred. ## The time when the weird occurred.
ts: time &log; ts: time &log;
@ -32,19 +43,32 @@ export {
peer: string &log &optional; peer: string &log &optional;
}; };
## Types of actions that may be taken when handling weird activity events.
type Action: enum { type Action: enum {
## A dummy action indicating the user does not care what internal
## decision is made regarding a given type of weird.
ACTION_UNSPECIFIED, ACTION_UNSPECIFIED,
## No action is to be taken.
ACTION_IGNORE, ACTION_IGNORE,
## Log the weird event every time it occurs.
ACTION_LOG, ACTION_LOG,
## Log the weird event only once.
ACTION_LOG_ONCE, ACTION_LOG_ONCE,
## Log the weird event once per connection.
ACTION_LOG_PER_CONN, ACTION_LOG_PER_CONN,
## Log the weird event once per originator host.
ACTION_LOG_PER_ORIG, ACTION_LOG_PER_ORIG,
## Always generate a notice associated with the weird event.
ACTION_NOTICE, ACTION_NOTICE,
## Generate a notice associated with the weird event only once.
ACTION_NOTICE_ONCE, ACTION_NOTICE_ONCE,
## Generate a notice for the weird event once per connection.
ACTION_NOTICE_PER_CONN, ACTION_NOTICE_PER_CONN,
## Generate a notice for the weird event once per originator host.
ACTION_NOTICE_PER_ORIG, ACTION_NOTICE_PER_ORIG,
}; };
## A table specifying default/recommended actions per weird type.
const actions: table[string] of Action = { const actions: table[string] of Action = {
["unsolicited_SYN_response"] = ACTION_IGNORE, ["unsolicited_SYN_response"] = ACTION_IGNORE,
["above_hole_data_without_any_acks"] = ACTION_LOG, ["above_hole_data_without_any_acks"] = ACTION_LOG,
@ -201,7 +225,7 @@ export {
["fragment_overlap"] = ACTION_LOG_PER_ORIG, ["fragment_overlap"] = ACTION_LOG_PER_ORIG,
["fragment_protocol_inconsistency"] = ACTION_LOG, ["fragment_protocol_inconsistency"] = ACTION_LOG,
["fragment_size_inconsistency"] = ACTION_LOG_PER_ORIG, ["fragment_size_inconsistency"] = ACTION_LOG_PER_ORIG,
## These do indeed happen! # These do indeed happen!
["fragment_with_DF"] = ACTION_LOG, ["fragment_with_DF"] = ACTION_LOG,
["incompletely_captured_fragment"] = ACTION_LOG, ["incompletely_captured_fragment"] = ACTION_LOG,
["bad_IP_checksum"] = ACTION_LOG_PER_ORIG, ["bad_IP_checksum"] = ACTION_LOG_PER_ORIG,
@ -215,8 +239,8 @@ export {
## and weird name into this set. ## and weird name into this set.
const ignore_hosts: set[addr, string] &redef; const ignore_hosts: set[addr, string] &redef;
# But don't ignore these (for the weird file), it's handy keeping ## Don't ignore repeats for weirds in this set. For example,
# track of clustered checksum errors. ## it's handy keeping track of clustered checksum errors.
const weird_do_not_ignore_repeats = { const weird_do_not_ignore_repeats = {
"bad_IP_checksum", "bad_TCP_checksum", "bad_UDP_checksum", "bad_IP_checksum", "bad_TCP_checksum", "bad_UDP_checksum",
"bad_ICMP_checksum", "bad_ICMP_checksum",
@ -237,6 +261,10 @@ export {
## duplicate notices from being raised. ## duplicate notices from being raised.
global did_notice: set[string, string] &create_expire=1day &redef; global did_notice: set[string, string] &create_expire=1day &redef;
## Handlers of this event are invoked once per write to the weird
## logging stream before the data is actually written.
##
## rec: The weird columns about to be logged to the weird stream.
global log_weird: event(rec: Info); global log_weird: event(rec: Info);
} }
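Sites typically adjust the defaults by overriding entries in the actions table or whitelisting noisy hosts, for example (a sketch; assumes both containers are redefinable as their use above suggests):

    # Quiet a weird we do not care about, and ignore one chatty host entirely.
    redef Weird::actions += {
        ["fragment_with_DF"] = Weird::ACTION_IGNORE,
    };
    redef Weird::ignore_hosts += { [192.168.1.10, "bad_TCP_checksum"] };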

View file

@ -9,17 +9,22 @@
module PacketFilter; module PacketFilter;
export { export {
## Add the packet filter logging stream.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Add notice types related to packet filter errors.
redef enum Notice::Type += { redef enum Notice::Type += {
## This notice is generated if a packet filter is unable to be compiled. ## This notice is generated if a packet filter is unable to be compiled.
Compile_Failure, Compile_Failure,
## This notice is generated if a packet filter is unable to be installed. ## This notice is generated if a packet filter fails to install.
Install_Failure, Install_Failure,
}; };
## The record type defining columns to be logged in the packet filter
## logging stream.
type Info: record { type Info: record {
## The time at which the packet filter installation attempt was made.
ts: time &log; ts: time &log;
## This is a string representation of the node that applied this ## This is a string representation of the node that applied this
@ -40,7 +45,7 @@ export {
## By default, Bro will examine all packets. If this is set to false, ## By default, Bro will examine all packets. If this is set to false,
## it will dynamically build a BPF filter that only select protocols ## it will dynamically build a BPF filter that only select protocols
## for which the user has loaded a corresponding analysis script. ## for which the user has loaded a corresponding analysis script.
## The latter used to be the default for Bro versions < 1.6. That has now ## The latter used to be the default for Bro versions < 2.0. That has now
## changed however to enable port-independent protocol analysis. ## changed however to enable port-independent protocol analysis.
const all_packets = T &redef; const all_packets = T &redef;
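For example, a deployment that only cares about the protocols it has analysis scripts loaded for can switch back to the pre-2.0 behavior:

    # Build a BPF filter from the loaded protocol analysis scripts
    # instead of capturing all packets.
    redef PacketFilter::all_packets = F;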

View file

@ -1,4 +1,6 @@
##! This script reports on packet loss from the various packet sources. ##! This script reports on packet loss from the various packet sources.
##! When Bro is reading input from trace files, this script will not
##! report any packet loss statistics.
@load base/frameworks/notice @load base/frameworks/notice
@ -6,7 +8,7 @@ module PacketFilter;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## Bro reported packets dropped by the packet filter. ## Indicates packets were dropped by the packet filter.
Dropped_Packets, Dropped_Packets,
}; };

View file

@ -1,21 +1,36 @@
##! This framework is intended to create an output and filtering path for ##! This framework is intended to create an output and filtering path for
##! internal messages/warnings/errors. It should typically be loaded to ##! internal messages/warnings/errors. It should typically be loaded to
##! avoid Bro spewing internal messages to standard error. ##! avoid Bro spewing internal messages to standard error and instead log
##! them to a file in a standard way. Note that this framework deals with
##! the handling of internally-generated reporter messages; for the
##! interface into actually creating reporter messages from the scripting
##! layer, use the built-in functions in :doc:`/scripts/base/reporter.bif`.
module Reporter; module Reporter;
export { export {
## The reporter logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## An indicator of reporter message severity.
type Level: enum { type Level: enum {
## Informational, not needing specific attention.
INFO, INFO,
## Warning of a potential problem.
WARNING, WARNING,
## A non-fatal error that should be addressed, but doesn't
## terminate program execution.
ERROR ERROR
}; };
## The record type which contains the column fields of the reporter log.
type Info: record { type Info: record {
## The network time at which the reporter event was generated.
ts: time &log; ts: time &log;
## The severity of the reporter message.
level: Level &log; level: Level &log;
## An info/warning/error message that could have either been
## generated from the internal Bro core or at the scripting-layer.
message: string &log; message: string &log;
## This is the location in a Bro script where the message originated. ## This is the location in a Bro script where the message originated.
## Not all reporter messages will have locations in them though. ## Not all reporter messages will have locations in them though.
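Script-layer code generates such entries through the reporter built-in functions referenced above. A small sketch; the exact function name is assumed from reporter.bif:

    event bro_init()
        {
        Reporter::info("site policy loaded");
        }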

View file

@ -1,30 +1,36 @@
##! Script level signature support. ##! Script level signature support. See the
##! :doc:`signature documentation </signatures>` for more information about
##! Bro's signature engine.
@load base/frameworks/notice @load base/frameworks/notice
module Signatures; module Signatures;
export { export {
## Add various signature-related notice types.
redef enum Notice::Type += { redef enum Notice::Type += {
## Generic for alarm-worthy ## Generic notice type for notice-worthy signature matches.
Sensitive_Signature, Sensitive_Signature,
## Host has triggered many signatures on the same host. The number of ## Host has triggered many signatures on the same host. The number of
## signatures is defined by the :bro:id:`vert_scan_thresholds` variable. ## signatures is defined by the
## :bro:id:`Signatures::vert_scan_thresholds` variable.
Multiple_Signatures, Multiple_Signatures,
## Host has triggered the same signature on multiple hosts as defined by the ## Host has triggered the same signature on multiple hosts as defined
## :bro:id:`horiz_scan_thresholds` variable. ## by the :bro:id:`Signatures::horiz_scan_thresholds` variable.
Multiple_Sig_Responders, Multiple_Sig_Responders,
## The same signature has triggered multiple times for a host. The number ## The same signature has triggered multiple times for a host. The
## of times the signature has be trigger is defined by the ## number of times the signature has been triggered is defined by the
## :bro:id:`count_thresholds` variable. To generate this notice, the ## :bro:id:`Signatures::count_thresholds` variable. To generate this
## :bro:enum:`SIG_COUNT_PER_RESP` action must be set for the signature. ## notice, the :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must
## bet set for the signature.
Count_Signature, Count_Signature,
## Summarize the number of times a host triggered a signature. The ## Summarize the number of times a host triggered a signature. The
## interval between summaries is defined by the :bro:id:`summary_interval` ## interval between summaries is defined by the
## variable. ## :bro:id:`Signatures::summary_interval` variable.
Signature_Summary, Signature_Summary,
}; };
## The signature logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## These are the default actions you can apply to signature matches. ## These are the default actions you can apply to signature matches.
@ -39,8 +45,8 @@ export {
SIG_QUIET, SIG_QUIET,
## Generate a notice. ## Generate a notice.
SIG_LOG, SIG_LOG,
## The same as :bro:enum:`SIG_FILE`, but ignore for aggregate/scan ## The same as :bro:enum:`Signatures::SIG_LOG`, but ignore for
## processing. ## aggregate/scan processing.
SIG_FILE_BUT_NO_SCAN, SIG_FILE_BUT_NO_SCAN,
## Generate a notice and set it to be alarmed upon. ## Generate a notice and set it to be alarmed upon.
SIG_ALARM, SIG_ALARM,
@ -49,22 +55,33 @@ export {
## Alarm once and then never again. ## Alarm once and then never again.
SIG_ALARM_ONCE, SIG_ALARM_ONCE,
## Count signatures per responder host and alarm with the ## Count signatures per responder host and alarm with the
## :bro:enum:`Count_Signature` notice if a threshold defined by ## :bro:enum:`Signatures::Count_Signature` notice if a threshold
## :bro:id:`count_thresholds` is reached. ## defined by :bro:id:`Signatures::count_thresholds` is reached.
SIG_COUNT_PER_RESP, SIG_COUNT_PER_RESP,
## Don't alarm, but generate per-orig summary. ## Don't alarm, but generate per-orig summary.
SIG_SUMMARY, SIG_SUMMARY,
}; };
## The record type which contains the column fields of the signature log.
type Info: record { type Info: record {
## The network time at which a signature matching type of event to
## be logged has occurred.
ts: time &log; ts: time &log;
## The host which triggered the signature match event.
src_addr: addr &log &optional; src_addr: addr &log &optional;
## The host port on which the signature-matching activity occurred.
src_port: port &log &optional; src_port: port &log &optional;
## The destination host which was sent the payload that triggered the
## signature match.
dst_addr: addr &log &optional; dst_addr: addr &log &optional;
## The destination host port which was sent the payload that triggered
## the signature match.
dst_port: port &log &optional; dst_port: port &log &optional;
## Notice associated with signature event ## Notice associated with signature event
note: Notice::Type &log; note: Notice::Type &log;
## The name of the signature that matched.
sig_id: string &log &optional; sig_id: string &log &optional;
## A more descriptive message of the signature-matching event.
event_msg: string &log &optional; event_msg: string &log &optional;
## Extracted payload data or extra message. ## Extracted payload data or extra message.
sub_msg: string &log &optional; sub_msg: string &log &optional;
@ -82,22 +99,26 @@ export {
## Signature IDs that should always be ignored. ## Signature IDs that should always be ignored.
const ignored_ids = /NO_DEFAULT_MATCHES/ &redef; const ignored_ids = /NO_DEFAULT_MATCHES/ &redef;
## Alarm if, for a pair [orig, signature], the number of different ## Generate a notice if, for a pair [orig, signature], the number of
## responders has reached one of the thresholds. ## different responders has reached one of the thresholds.
const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef; const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;
## Alarm if, for a pair [orig, resp], the number of different signature ## Generate a notice if, for a pair [orig, resp], the number of different
## matches has reached one of the thresholds. ## signature matches has reached one of the thresholds.
const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef; const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;
## Alarm if a :bro:enum:`SIG_COUNT_PER_RESP` signature is triggered as ## Generate a notice if a :bro:enum:`Signatures::SIG_COUNT_PER_RESP`
## often as given by one of these thresholds. ## signature is triggered as often as given by one of these thresholds.
const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef; const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef;
## The interval between when :bro:id:`Signature_Summary` notices are ## The interval between when :bro:enum:`Signatures::Signature_Summary`
## generated. ## notices are generated.
const summary_interval = 1 day &redef; const summary_interval = 1 day &redef;
## This event can be handled to access/alter data about to be logged
## to the signature logging stream.
##
## rec: The record of signature data about to be logged.
global log_signature: event(rec: Info); global log_signature: event(rec: Info);
} }
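The thresholds and summary cadence above are all redefinable, so a site that finds the defaults too chatty might do something like this (sketch):

    # Only alarm on larger horizontal scans and summarize twice a day.
    redef Signatures::horiz_scan_thresholds = { 25, 100, 500, 1000 };
    redef Signatures::summary_interval = 12hrs;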

View file

@ -1,5 +1,5 @@
##! This script provides the framework for software version detection and ##! This script provides the framework for software version detection and
##! parsing, but doesn't actually do any detection on its own. It relies on ##! parsing but doesn't actually do any detection on its own. It relies on
##! other protocol specific scripts to parse out software from the protocols ##! other protocol specific scripts to parse out software from the protocols
##! that they analyze. The entry point for providing new software detections ##! that they analyze. The entry point for providing new software detections
##! to this framework is through the :bro:id:`Software::found` function. ##! to this framework is through the :bro:id:`Software::found` function.
@ -10,39 +10,44 @@
module Software; module Software;
export { export {
## The software logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Scripts detecting new types of software need to redef this enum to add
## their own specific software types which would then be used when they
## create :bro:type:`Software::Info` records.
type Type: enum { type Type: enum {
## A placeholder type for when the type of software is not known.
UNKNOWN, UNKNOWN,
OPERATING_SYSTEM,
DATABASE_SERVER,
# There are a number of ways to detect printers on the
# network, we just need to codify them in a script and move
# this out of here. It isn't currently used for anything.
PRINTER,
}; };
## A structure to represent the numeric version of software.
type Version: record { type Version: record {
major: count &optional; ##< Major version number ## Major version number
minor: count &optional; ##< Minor version number major: count &optional;
minor2: count &optional; ##< Minor subversion number ## Minor version number
addl: string &optional; ##< Additional version string (e.g. "beta42") minor: count &optional;
## Minor subversion number
minor2: count &optional;
## Additional version string (e.g. "beta42")
addl: string &optional;
} &log; } &log;
## The record type that is used for representing and logging software.
type Info: record { type Info: record {
## The time at which the software was first detected. ## The time at which the software was detected.
ts: time &log; ts: time &log;
## The IP address detected running the software. ## The IP address detected running the software.
host: addr &log; host: addr &log;
## The type of software detected (e.g. WEB_SERVER) ## The type of software detected (e.g. :bro:enum:`HTTP::SERVER`).
software_type: Type &log &default=UNKNOWN; software_type: Type &log &default=UNKNOWN;
## Name of the software (e.g. Apache) ## Name of the software (e.g. Apache).
name: string &log; name: string &log;
## Version of the software ## Version of the software.
version: Version &log; version: Version &log;
## The full unparsed version string found because the version parsing ## The full unparsed version string found because the version parsing
## doesn't work 100% reliably and this acts as a fall back in the logs. ## doesn't always work reliably and this acts as a
## fallback in the logs.
unparsed_version: string &log &optional; unparsed_version: string &log &optional;
## This can indicate that this software being detected should ## This can indicate that this software being detected should
@ -55,37 +60,48 @@ export {
force_log: bool &default=F; force_log: bool &default=F;
}; };
## The hosts whose software should be detected and tracked. ## Hosts whose software should be detected and tracked.
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
const asset_tracking = LOCAL_HOSTS &redef; const asset_tracking = LOCAL_HOSTS &redef;
## Other scripts should call this function when they detect software. ## Other scripts should call this function when they detect software.
## unparsed_version: This is the full string from which the ## unparsed_version: This is the full string from which the
## :bro:type:`Software::Info` was extracted. ## :bro:type:`Software::Info` was extracted.
##
## id: The connection id where the software was discovered.
##
## info: A record representing the software discovered.
##
## Returns: T if the software was logged, F otherwise. ## Returns: T if the software was logged, F otherwise.
global found: function(id: conn_id, info: Software::Info): bool; global found: function(id: conn_id, info: Software::Info): bool;
## This function can take many software version strings and parse them ## Take many common software version strings and parse them
## into a sensible :bro:type:`Software::Version` record. There are ## into a sensible :bro:type:`Software::Version` record. There are
## still many cases where scripts may have to have their own specific ## still many cases where scripts may have to have their own specific
## version parsing though. ## version parsing though.
##
## unparsed_version: The raw version string.
##
## host: The host where the software was discovered.
##
## software_type: The type of software.
##
## Returns: A complete record ready for the :bro:id:`Software::found` function.
global parse: function(unparsed_version: string, global parse: function(unparsed_version: string,
host: addr, host: addr,
software_type: Type): Info; software_type: Type): Info;
## Compare two versions. ## Compare two version records.
##
## Returns: -1 for v1 < v2, 0 for v1 == v2, 1 for v1 > v2. ## Returns: -1 for v1 < v2, 0 for v1 == v2, 1 for v1 > v2.
## If the numerical version numbers match, the addl string ## If the numerical version numbers match, the addl string
## is compared lexicographically. ## is compared lexicographically.
global cmp_versions: function(v1: Version, v2: Version): int; global cmp_versions: function(v1: Version, v2: Version): int;
## This type represents a set of software. It's used by the ## Type to represent a collection of :bro:type:`Software::Info` records.
## :bro:id:`tracked` variable to store all known pieces of software ## It's indexed with the name of a piece of software such as "Firefox"
## for a particular host. It's indexed with the name of a piece of ## and it yields a :bro:type:`Software::Info` record with more information
## software such as "Firefox" and it yields a ## about the software.
## :bro:type:`Software::Info` record with more information about the
## software.
type SoftwareSet: table[string] of Info; type SoftwareSet: table[string] of Info;
## The set of software associated with an address. Data expires from ## The set of software associated with an address. Data expires from

File diff suppressed because it is too large.
@ -1,23 +1,27 @@
##! This script can be used to extract either the originator's data or the ##! This script can be used to extract either the originator's data or the
##! responders data or both. By default nothing is extracted, and in order ##! responder's data or both. By default nothing is extracted, and in order
##! to actually extract data the ``c$extract_orig`` and/or the ##! to actually extract data the ``c$extract_orig`` and/or the
##! ``c$extract_resp`` variable must be set to T. One way to achieve this ##! ``c$extract_resp`` variable must be set to ``T``. One way to achieve this
##! would be to handle the connection_established event elsewhere and set the ##! would be to handle the :bro:id:`connection_established` event elsewhere
##! extract_orig and extract_resp options there. However, there may be trouble ##! and set the ``extract_orig`` and ``extract_resp`` options there.
##! with the timing due the event queue delay. ##! However, there may be trouble with the timing due to event queue delay.
##! This script does not work well in a cluster context unless it has a ##!
##! remotely mounted disk to write the content files to. ##! .. note::
##!
##! This script does not work well in a cluster context unless it has a
##! remotely mounted disk to write the content files to.
@load base/utils/files @load base/utils/files
module Conn; module Conn;
export { export {
## The prefix given to files as they are opened on disk. ## The prefix given to files containing extracted connections as they are
## opened on disk.
const extraction_prefix = "contents" &redef; const extraction_prefix = "contents" &redef;
## If this variable is set to T, then all contents of all files will be ## If this variable is set to ``T``, then all contents of all connections
## extracted. ## will be extracted.
const default_extract = F &redef; const default_extract = F &redef;
} }
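For illustration only, not part of this diff: a rough sketch of the approach the comment block above describes, enabling extraction per connection from a connection_established handler (load path as in the 2.0 script tree).

    @load base/protocols/conn/contents

    # Either extract everything ...
    # redef Conn::default_extract = T;

    # ... or pick connections individually, as the comments above suggest:
    event connection_established(c: connection)
        {
        c$extract_resp = T;    # responder side only; c$extract_orig stays F
        }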


@ -4,7 +4,7 @@
module Conn; module Conn;
export { export {
## Define inactivty timeouts by the service detected being used over ## Define inactivity timeouts based on the service detected being used over
## the connection. ## the connection.
const analyzer_inactivity_timeouts: table[AnalyzerTag] of interval = { const analyzer_inactivity_timeouts: table[AnalyzerTag] of interval = {
# For interactive services, allow longer periods of inactivity. # For interactive services, allow longer periods of inactivity.


@ -1,17 +1,33 @@
##! This script manages the tracking/logging of general information regarding
##! TCP, UDP, and ICMP traffic. For UDP and ICMP, "connections" are to
##! be interpreted using flow semantics (sequence of packets from a source
##! host/port to a destination host/port). Further, ICMP "ports" are to
##! be interpreted with the source port representing the ICMP message type
##! and the destination port the ICMP message code.
@load base/utils/site @load base/utils/site
module Conn; module Conn;
export { export {
## The connection logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains column fields of the connection log.
type Info: record { type Info: record {
## This is the time of the first packet. ## This is the time of the first packet.
ts: time &log; ts: time &log;
## A unique identifier of a connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## The transport layer protocol of the connection.
proto: transport_proto &log; proto: transport_proto &log;
## An identification of an application protocol being sent over ## An identification of an application protocol being sent over
## the connection.
service: string &log &optional; service: string &log &optional;
## How long the connection lasted. For 3-way or 4-way connection
## tear-downs, this will not include the final ACK.
duration: interval &log &optional; duration: interval &log &optional;
## The number of payload bytes the originator sent. For TCP ## The number of payload bytes the originator sent. For TCP
## this is taken from sequence numbers and might be inaccurate ## this is taken from sequence numbers and might be inaccurate
@ -51,8 +67,8 @@ export {
## have been completed prior to the packet loss. ## have been completed prior to the packet loss.
missed_bytes: count &log &default=0; missed_bytes: count &log &default=0;
## Records the state history of (TCP) connections as ## Records the state history of connections as a string of letters.
## a string of letters. ## For TCP connections the meaning of those letters is:
## ##
## ====== ==================================================== ## ====== ====================================================
## Letter Meaning ## Letter Meaning
@ -71,7 +87,8 @@ export {
## originator and lower case then means the responder. ## originator and lower case then means the responder.
## Also, there is compression. We only record one "d" in each direction, ## Also, there is compression. We only record one "d" in each direction,
## for instance. I.e., we just record that data went in that direction. ## for instance. I.e., we just record that data went in that direction.
## This history is not meant to encode how much data that happened to be. ## This history is not meant to encode how much data that happened to
## be.
history: string &log &optional; history: string &log &optional;
## Number of packets the originator sent. ## Number of packets the originator sent.
## Only set if :bro:id:`use_conn_size_analyzer` = T ## Only set if :bro:id:`use_conn_size_analyzer` = T
@ -86,6 +103,8 @@ export {
resp_ip_bytes: count &log &optional; resp_ip_bytes: count &log &optional;
}; };
## Event that can be handled to access the :bro:type:`Conn::Info`
## record as it is sent on to the logging framework.
global log_conn: event(rec: Info); global log_conn: event(rec: Info);
} }
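For illustration only, not part of this diff: a sketch of a consumer of the Conn::log_conn event declared above; the field names are the ones documented in the Info record.

    @load base/protocols/conn

    event Conn::log_conn(rec: Conn::Info)
        {
        # React to each record as it is handed to the logging framework.
        if ( rec?$duration && rec$duration > 1 hr )
            print fmt("long-lived connection %s (%s)", rec$uid, rec$duration);
        }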


@ -4,9 +4,9 @@
module DNS; module DNS;
export { export {
const PTR = 12; const PTR = 12; ##< RR TYPE value for a domain name pointer.
const EDNS = 41; const EDNS = 41; ##< An OPT RR TYPE value described by EDNS.
const ANY = 255; const ANY = 255; ##< A QTYPE value describing a request for all records.
## Mapping of DNS query type codes to human readable string representation. ## Mapping of DNS query type codes to human readable string representation.
const query_types = { const query_types = {
@ -29,50 +29,43 @@ export {
[ANY] = "*", [ANY] = "*",
} &default = function(n: count): string { return fmt("query-%d", n); }; } &default = function(n: count): string { return fmt("query-%d", n); };
const code_types = {
[0] = "X0",
[1] = "Xfmt",
[2] = "Xsrv",
[3] = "Xnam",
[4] = "Ximp",
[5] = "X[",
} &default="?";
## Errors used for non-TSIG/EDNS types. ## Errors used for non-TSIG/EDNS types.
const base_errors = { const base_errors = {
[0] = "NOERROR", ##< No Error [0] = "NOERROR", # No Error
[1] = "FORMERR", ##< Format Error [1] = "FORMERR", # Format Error
[2] = "SERVFAIL", ##< Server Failure [2] = "SERVFAIL", # Server Failure
[3] = "NXDOMAIN", ##< Non-Existent Domain [3] = "NXDOMAIN", # Non-Existent Domain
[4] = "NOTIMP", ##< Not Implemented [4] = "NOTIMP", # Not Implemented
[5] = "REFUSED", ##< Query Refused [5] = "REFUSED", # Query Refused
[6] = "YXDOMAIN", ##< Name Exists when it should not [6] = "YXDOMAIN", # Name Exists when it should not
[7] = "YXRRSET", ##< RR Set Exists when it should not [7] = "YXRRSET", # RR Set Exists when it should not
[8] = "NXRRSet", ##< RR Set that should exist does not [8] = "NXRRSet", # RR Set that should exist does not
[9] = "NOTAUTH", ##< Server Not Authoritative for zone [9] = "NOTAUTH", # Server Not Authoritative for zone
[10] = "NOTZONE", ##< Name not contained in zone [10] = "NOTZONE", # Name not contained in zone
[11] = "unassigned-11", ##< available for assignment [11] = "unassigned-11", # available for assignment
[12] = "unassigned-12", ##< available for assignment [12] = "unassigned-12", # available for assignment
[13] = "unassigned-13", ##< available for assignment [13] = "unassigned-13", # available for assignment
[14] = "unassigned-14", ##< available for assignment [14] = "unassigned-14", # available for assignment
[15] = "unassigned-15", ##< available for assignment [15] = "unassigned-15", # available for assignment
[16] = "BADVERS", ##< for EDNS, collision w/ TSIG [16] = "BADVERS", # for EDNS, collision w/ TSIG
[17] = "BADKEY", ##< Key not recognized [17] = "BADKEY", # Key not recognized
[18] = "BADTIME", ##< Signature out of time window [18] = "BADTIME", # Signature out of time window
[19] = "BADMODE", ##< Bad TKEY Mode [19] = "BADMODE", # Bad TKEY Mode
[20] = "BADNAME", ##< Duplicate key name [20] = "BADNAME", # Duplicate key name
[21] = "BADALG", ##< Algorithm not supported [21] = "BADALG", # Algorithm not supported
[22] = "BADTRUNC", ##< draft-ietf-dnsext-tsig-sha-05.txt [22] = "BADTRUNC", # draft-ietf-dnsext-tsig-sha-05.txt
[3842] = "BADSIG", ##< 16 <= number collision with EDNS(16); [3842] = "BADSIG", # 16 <= number collision with EDNS(16);
##< this is a translation from TSIG(16) # this is a translation from TSIG(16)
} &default = function(n: count): string { return fmt("rcode-%d", n); }; } &default = function(n: count): string { return fmt("rcode-%d", n); };
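For illustration only, not part of this diff: the tables above map numeric codes to names, and the &default function covers anything unassigned, so lookups never fail.

    @load base/protocols/dns

    event bro_init()
        {
        print DNS::base_errors[2];      # "SERVFAIL"
        print DNS::base_errors[4242];   # falls through to "rcode-4242"
        print DNS::query_types[12];     # "PTR"
        }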
# This deciphers EDNS Z field values. ## This deciphers EDNS Z field values.
const edns_zfield = { const edns_zfield = {
[0] = "NOVALUE", # regular entry [0] = "NOVALUE", # regular entry
[32768] = "DNS_SEC_OK", # accepts DNS Sec RRs [32768] = "DNS_SEC_OK", # accepts DNS Sec RRs
} &default="?"; } &default="?";
## Possible values of the CLASS field in resource records or QCLASS field
## in query messages.
const classes = { const classes = {
[1] = "C_INTERNET", [1] = "C_INTERNET",
[2] = "C_CSNET", [2] = "C_CSNET",


@ -1,38 +1,80 @@
##! Base DNS analysis script which tracks and logs DNS queries along with
##! their responses.
@load ./consts @load ./consts
module DNS; module DNS;
export { export {
## The DNS logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the column fields of the DNS log.
type Info: record { type Info: record {
ts: time &log; ## The earliest time at which a DNS protocol message over the
uid: string &log; ## associated connection is observed.
id: conn_id &log; ts: time &log;
proto: transport_proto &log; ## A unique identifier of the connection over which DNS messages
trans_id: count &log &optional; ## are being transferred.
query: string &log &optional; uid: string &log;
qclass: count &log &optional; ## The connection's 4-tuple of endpoint addresses/ports.
qclass_name: string &log &optional; id: conn_id &log;
qtype: count &log &optional; ## The transport layer protocol of the connection.
qtype_name: string &log &optional; proto: transport_proto &log;
rcode: count &log &optional; ## A 16 bit identifier assigned by the program that generated the
rcode_name: string &log &optional; ## DNS query. Also used in responses to match up replies to
QR: bool &log &default=F; ## outstanding queries.
AA: bool &log &default=F; trans_id: count &log &optional;
TC: bool &log &default=F; ## The domain name that is the subject of the DNS query.
RD: bool &log &default=F; query: string &log &optional;
RA: bool &log &default=F; ## The QCLASS value specifying the class of the query.
Z: count &log &default=0; qclass: count &log &optional;
TTL: interval &log &optional; ## A descriptive name for the class of the query.
answers: set[string] &log &optional; qclass_name: string &log &optional;
## A QTYPE value specifying the type of the query.
qtype: count &log &optional;
## A descriptive name for the type of the query.
qtype_name: string &log &optional;
## The response code value in DNS response messages.
rcode: count &log &optional;
## A descriptive name for the response code value.
rcode_name: string &log &optional;
## Whether the message is a query (F) or response (T).
QR: bool &log &default=F;
## The Authoritative Answer bit for response messages specifies that
## the responding name server is an authority for the domain name
## in the question section.
AA: bool &log &default=F;
## The Truncation bit specifies that the message was truncated.
TC: bool &log &default=F;
## The Recursion Desired bit indicates that the name server should
## pursue the query recursively.
RD: bool &log &default=F;
## The Recursion Available bit in a response message indicates if
## the name server supports recursive queries.
RA: bool &log &default=F;
## A reserved field that is currently supposed to be zero in all
## queries and responses.
Z: count &log &default=0;
## The set of resource descriptions in answer of the query.
answers: vector of string &log &optional;
## The caching intervals of the associated RRs described by the
## ``answers`` field.
TTLs: vector of interval &log &optional;
## This value indicates if this request/response pair is ready to be logged. ## This value indicates if this request/response pair is ready to be
## logged.
ready: bool &default=F; ready: bool &default=F;
## The total number of resource records in a reply message's answer
## section.
total_answers: count &optional; total_answers: count &optional;
## The total number of resource records in a reply message's answer,
## authority, and additional sections.
total_replies: count &optional; total_replies: count &optional;
}; };
## A record type which tracks the status of DNS queries for a given
## :bro:type:`connection`.
type State: record { type State: record {
## Indexed by query id, returns Info record corresponding to ## Indexed by query id, returns Info record corresponding to
## query/response which haven't completed yet. ## query/response which haven't completed yet.
@ -44,11 +86,21 @@ export {
finished_answers: set[count] &optional; finished_answers: set[count] &optional;
}; };
## An event that can be handled to access the :bro:type:`DNS::Info`
## record as it is sent to the logging framework.
global log_dns: event(rec: Info); global log_dns: event(rec: Info);
## This is called by the specific dns_*_reply events with a "reply" which ## This is called by the specific dns_*_reply events with a "reply" which
## may not represent the full data available from the resource record, but ## may not represent the full data available from the resource record, but
## it's generally considered a summarization of the response(s). ## it's generally considered a summarization of the response(s).
##
## c: The connection record for which to fill in DNS reply data.
##
## msg: The DNS message header information for the response.
##
## ans: The general information of a RR response.
##
## reply: The specific response information according to RR type/class.
global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string); global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
} }
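For illustration only, not part of this diff: a sketch that consumes the new vector-valued fields through the DNS::log_dns event declared above.

    @load base/protocols/dns

    event DNS::log_dns(rec: DNS::Info)
        {
        if ( ! rec?$answers || ! rec?$TTLs )
            return;

        # answers and TTLs are appended in lock step, so the indices line up.
        for ( i in rec$answers )
            print fmt("%s -> %s (cached for %s)",
                      rec?$query ? rec$query : "<no query>",
                      rec$answers[i], rec$TTLs[i]);
        }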
@ -65,11 +117,11 @@ redef capture_filters += {
["netbios-ns"] = "udp port 137", ["netbios-ns"] = "udp port 137",
}; };
global dns_ports = { 53/udp, 53/tcp, 137/udp, 5353/udp, 5355/udp } &redef; const dns_ports = { 53/udp, 53/tcp, 137/udp, 5353/udp, 5355/udp };
redef dpd_config += { [ANALYZER_DNS] = [$ports = dns_ports] }; redef dpd_config += { [ANALYZER_DNS] = [$ports = dns_ports] };
global dns_udp_ports = { 53/udp, 137/udp, 5353/udp, 5355/udp } &redef; const dns_udp_ports = { 53/udp, 137/udp, 5353/udp, 5355/udp };
global dns_tcp_ports = { 53/tcp } &redef; const dns_tcp_ports = { 53/tcp };
redef dpd_config += { [ANALYZER_DNS_UDP_BINPAC] = [$ports = dns_udp_ports] }; redef dpd_config += { [ANALYZER_DNS_UDP_BINPAC] = [$ports = dns_udp_ports] };
redef dpd_config += { [ANALYZER_DNS_TCP_BINPAC] = [$ports = dns_tcp_ports] }; redef dpd_config += { [ANALYZER_DNS_TCP_BINPAC] = [$ports = dns_tcp_ports] };
@ -102,7 +154,13 @@ function new_session(c: connection, trans_id: count): Info
function set_session(c: connection, msg: dns_msg, is_query: bool) function set_session(c: connection, msg: dns_msg, is_query: bool)
{ {
if ( ! c?$dns_state || msg$id !in c$dns_state$pending ) if ( ! c?$dns_state || msg$id !in c$dns_state$pending )
{
c$dns_state$pending[msg$id] = new_session(c, msg$id); c$dns_state$pending[msg$id] = new_session(c, msg$id);
# Try deleting this transaction id from the set of finished answers.
# Sometimes hosts will reuse ports and transaction ids and this should
# be considered a legitimate scenario (although it is bad practice).
delete c$dns_state$finished_answers[msg$id];
}
c$dns = c$dns_state$pending[msg$id]; c$dns = c$dns_state$pending[msg$id];
@ -134,20 +192,23 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
{ {
set_session(c, msg, F); set_session(c, msg, F);
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;
c$dns$TTL = ans$TTL;
if ( ans$answer_type == DNS_ANS ) if ( ans$answer_type == DNS_ANS )
{ {
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;
if ( msg$id in c$dns_state$finished_answers ) if ( msg$id in c$dns_state$finished_answers )
event conn_weird("dns_reply_seen_after_done", c, ""); event conn_weird("dns_reply_seen_after_done", c, "");
if ( reply != "" ) if ( reply != "" )
{ {
if ( ! c$dns?$answers ) if ( ! c$dns?$answers )
c$dns$answers = set(); c$dns$answers = vector();
add c$dns$answers[reply]; c$dns$answers[|c$dns$answers|] = reply;
if ( ! c$dns?$TTLs )
c$dns$TTLs = vector();
c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
} }
if ( c$dns?$answers && |c$dns$answers| == c$dns$total_answers ) if ( c$dns?$answers && |c$dns$answers| == c$dns$total_answers )
@ -164,7 +225,6 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
if ( c$dns$ready ) if ( c$dns$ready )
{ {
Log::write(DNS::LOG, c$dns); Log::write(DNS::LOG, c$dns);
add c$dns_state$finished_answers[c$dns$trans_id];
# This record is logged and no longer pending. # This record is logged and no longer pending.
delete c$dns_state$pending[c$dns$trans_id]; delete c$dns_state$pending[c$dns$trans_id];
} }


@ -1,4 +1,4 @@
##! File extraction for FTP. ##! File extraction support for FTP.
@load ./main @load ./main
@load base/utils/files @load base/utils/files
@ -6,7 +6,7 @@
module FTP; module FTP;
export { export {
## Pattern of file mime types to extract from FTP entity bodies. ## Pattern of file mime types to extract from FTP transfers.
const extract_file_types = /NO_DEFAULT/ &redef; const extract_file_types = /NO_DEFAULT/ &redef;
## The on-disk prefix for files to be extracted from FTP-data transfers. ## The on-disk prefix for files to be extracted from FTP-data transfers.
@ -14,10 +14,15 @@ export {
} }
redef record Info += { redef record Info += {
## The file handle for the file to be extracted ## On-disk file that the transferred file was extracted to.
extraction_file: file &log &optional; extraction_file: file &log &optional;
## Indicates if the current command/response pair should attempt to
## extract the file if a file was transferred.
extract_file: bool &default=F; extract_file: bool &default=F;
## Internal tracking of the total number of files extracted during this
## session.
num_extracted_files: count &default=0; num_extracted_files: count &default=0;
}; };
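For illustration only, not part of this diff: turning extraction on by redef'ing the pattern declared above; the mime type string and load path are just examples.

    @load base/protocols/ftp/file-extract

    # Extract Windows executables transferred over FTP-data channels.
    redef FTP::extract_file_types = /application\/x-dosexec/;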
@ -33,7 +38,6 @@ event file_transferred(c: connection, prefix: string, descr: string,
if ( extract_file_types in s$mime_type ) if ( extract_file_types in s$mime_type )
{ {
s$extract_file = T; s$extract_file = T;
add s$tags["extracted_file"];
++s$num_extracted_files; ++s$num_extracted_files;
} }
} }


@ -2,10 +2,6 @@
##! along with metadata. For example, if files are transferred, the argument ##! along with metadata. For example, if files are transferred, the argument
##! will take on the full path that the client is at along with the requested ##! will take on the full path that the client is at along with the requested
##! file name. ##! file name.
##!
##! TODO:
##!
##! * Handle encrypted sessions correctly (get an example?)
@load ./utils-commands @load ./utils-commands
@load base/utils/paths @load base/utils/paths
@ -14,38 +10,64 @@
module FTP; module FTP;
export { export {
## The FTP protocol logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## List of commands that should have their command/response pairs logged.
const logged_commands = {
"APPE", "DELE", "RETR", "STOR", "STOU", "ACCT"
} &redef;
## This setting changes if passwords used in FTP sessions are captured or not. ## This setting changes if passwords used in FTP sessions are captured or not.
const default_capture_password = F &redef; const default_capture_password = F &redef;
## User IDs that can be considered "anonymous".
const guest_ids = { "anonymous", "ftp", "guest" } &redef;
type Info: record { type Info: record {
## Time when the command was sent.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## User name for the current FTP session.
user: string &log &default="<unknown>"; user: string &log &default="<unknown>";
## Password for the current FTP session if captured.
password: string &log &optional; password: string &log &optional;
## Command given by the client.
command: string &log &optional; command: string &log &optional;
## Argument for the command if one is given.
arg: string &log &optional; arg: string &log &optional;
## Libmagic "sniffed" file type if the command indicates a file transfer.
mime_type: string &log &optional; mime_type: string &log &optional;
## Libmagic "sniffed" file description if the command indicates a file transfer.
mime_desc: string &log &optional; mime_desc: string &log &optional;
## Size of the file if the command indicates a file transfer.
file_size: count &log &optional; file_size: count &log &optional;
## Reply code from the server in response to the command.
reply_code: count &log &optional; reply_code: count &log &optional;
## Reply message from the server in response to the command.
reply_msg: string &log &optional; reply_msg: string &log &optional;
## Arbitrary tags that may indicate a particular attribute of this command.
tags: set[string] &log &default=set(); tags: set[string] &log &default=set();
## By setting the CWD to '/.', we can indicate that unless something ## Current working directory that this session is in. By making
## the default value '/.', we can indicate that unless something
## more concrete is discovered that the existing but unknown ## more concrete is discovered that the existing but unknown
## directory is ok to use. ## directory is ok to use.
cwd: string &default="/."; cwd: string &default="/.";
## Command that is currently waiting for a response.
cmdarg: CmdArg &optional; cmdarg: CmdArg &optional;
## Queue for commands that have been sent but not yet responded to
## are tracked here.
pending_commands: PendingCmds; pending_commands: PendingCmds;
## This indicates if the session is in active or passive mode. ## Indicates if the session is in active or passive mode.
passive: bool &default=F; passive: bool &default=F;
## This determines if the password will be captured for this request. ## Determines if the password will be captured for this request.
capture_password: bool &default=default_capture_password; capture_password: bool &default=default_capture_password;
}; };
@ -57,21 +79,11 @@ export {
z: count; z: count;
}; };
# TODO: add this back in some form. raise a notice again? ## Parse FTP reply codes into the three constituent single digit values.
#const excessive_filename_len = 250 &redef;
#const excessive_filename_trunc_len = 32 &redef;
## These are user IDs that can be considered "anonymous".
const guest_ids = { "anonymous", "ftp", "guest" } &redef;
## The list of commands that should have their command/response pairs logged.
const logged_commands = {
"APPE", "DELE", "RETR", "STOR", "STOU", "ACCT"
} &redef;
## This function splits FTP reply codes into the three constituent
global parse_ftp_reply_code: function(code: count): ReplyCode; global parse_ftp_reply_code: function(code: count): ReplyCode;
## Event that can be handled to access the :bro:type:`FTP::Info`
## record as it is sent on to the logging framework.
global log_ftp: event(rec: Info); global log_ftp: event(rec: Info);
} }
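For illustration only, not part of this diff: adding to the logged command set and splitting a reply code with the helper declared above; the ReplyCode field names x, y and z are inferred from the record shown earlier in this diff.

    @load base/protocols/ftp

    redef FTP::logged_commands += { "LIST", "MKD" };

    event bro_init()
        {
        local rc = FTP::parse_ftp_reply_code(230);
        print rc$x, rc$y, rc$z;   # 2, 3, 0
        }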


@ -2,14 +2,22 @@ module FTP;
export { export {
type CmdArg: record { type CmdArg: record {
## Time when the command was sent.
ts: time; ts: time;
## Command.
cmd: string &default="<unknown>"; cmd: string &default="<unknown>";
## Argument for the command if one was given.
arg: string &default=""; arg: string &default="";
## Counter to track how many commands have been executed.
seq: count &default=0; seq: count &default=0;
}; };
## Structure for tracking pending commands in the event that the client
## sends a large number of commands before the server has a chance to
## reply.
type PendingCmds: table[count] of CmdArg; type PendingCmds: table[count] of CmdArg;
## Possible response codes for a wide variety of FTP commands.
const cmd_reply_code: set[string, count] = { const cmd_reply_code: set[string, count] = {
# According to RFC 959 # According to RFC 959
["<init>", [120, 220, 421]], ["<init>", [120, 220, 421]],


@ -8,29 +8,24 @@
module HTTP; module HTTP;
export { export {
## Pattern of file mime types to extract from HTTP entity bodies. ## Pattern of file mime types to extract from HTTP response entity bodies.
const extract_file_types = /NO_DEFAULT/ &redef; const extract_file_types = /NO_DEFAULT/ &redef;
## The on-disk prefix for files to be extracted from HTTP entity bodies. ## The on-disk prefix for files to be extracted from HTTP entity bodies.
const extraction_prefix = "http-item" &redef; const extraction_prefix = "http-item" &redef;
redef record Info += { redef record Info += {
## This field can be set per-connection to determine if the entity body ## On-disk file where the response body was extracted to.
## will be extracted. It must be set to T on or before the first
## entity_body_data event.
extracting_file: bool &default=F;
## This is the holder for the file handle as the file is being written
## to disk.
extraction_file: file &log &optional; extraction_file: file &log &optional;
};
redef record State += { ## Indicates if the response body is to be extracted or not. Must be
entity_bodies: count &default=0; ## set before or by the first :bro:id:`http_entity_data` event for the
## content.
extract_file: bool &default=F;
}; };
} }
event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=5 event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=-5
{ {
# Client body extraction is not currently supported in this script. # Client body extraction is not currently supported in this script.
if ( is_orig ) if ( is_orig )
@ -41,8 +36,12 @@ event http_entity_data(c: connection, is_orig: bool, length: count, data: string
if ( c$http?$mime_type && if ( c$http?$mime_type &&
extract_file_types in c$http$mime_type ) extract_file_types in c$http$mime_type )
{ {
c$http$extracting_file = T; c$http$extract_file = T;
local suffix = fmt("%s_%d.dat", is_orig ? "orig" : "resp", ++c$http_state$entity_bodies); }
if ( c$http$extract_file )
{
local suffix = fmt("%s_%d.dat", is_orig ? "orig" : "resp", c$http_state$current_response);
local fname = generate_extraction_filename(extraction_prefix, c, suffix); local fname = generate_extraction_filename(extraction_prefix, c, suffix);
c$http$extraction_file = open(fname); c$http$extraction_file = open(fname);
@ -50,12 +49,12 @@ event http_entity_data(c: connection, is_orig: bool, length: count, data: string
} }
} }
if ( c$http$extracting_file ) if ( c$http?$extraction_file )
print c$http$extraction_file, data; print c$http$extraction_file, data;
} }
event http_end_entity(c: connection, is_orig: bool) event http_end_entity(c: connection, is_orig: bool)
{ {
if ( c$http$extracting_file ) if ( c$http?$extraction_file )
close(c$http$extraction_file); close(c$http$extraction_file);
} }


@ -11,7 +11,8 @@ export {
}; };
redef record Info += { redef record Info += {
## The MD5 sum for a file transferred over HTTP will be stored here. ## MD5 sum for a file transferred over HTTP calculated from the
## response body.
md5: string &log &optional; md5: string &log &optional;
## This value can be set per-transfer to determine per request ## This value can be set per-transfer to determine per request
@ -19,8 +20,8 @@ export {
## set to T at the time of or before the first chunk of body data. ## set to T at the time of or before the first chunk of body data.
calc_md5: bool &default=F; calc_md5: bool &default=F;
## This boolean value indicates if an MD5 sum is currently being ## Indicates if an MD5 sum is being calculated for the current
## calculated for the current file transfer. ## request/response pair.
calculating_md5: bool &default=F; calculating_md5: bool &default=F;
}; };
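For illustration only, not part of this diff: requesting an MD5 for selected transfers by setting calc_md5 early, as the comment above requires; the http_request event signature and load path are assumptions from the standard 2.0 event set.

    @load base/protocols/http/file-hash

    event http_request(c: connection, method: string, original_URI: string,
                       unescaped_URI: string, version: string)
        {
        if ( ! c?$http )
            return;

        # Ask for a response-body MD5 whenever the URI looks like an executable.
        if ( /\.exe/ in unescaped_URI )
            c$http$calc_md5 = T;
        }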


@ -1,5 +1,4 @@
##! This script is involved in the identification of file types in HTTP ##! Identification of file types in HTTP response bodies with file content sniffing.
##! response bodies.
@load base/frameworks/signatures @load base/frameworks/signatures
@load base/frameworks/notice @load base/frameworks/notice
@ -15,30 +14,32 @@ module HTTP;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
# This notice is thrown when the file extension doesn't ## Indicates when the file extension doesn't seem to match the file contents.
# seem to match the file contents.
Incorrect_File_Type, Incorrect_File_Type,
}; };
redef record Info += { redef record Info += {
## This will record the mime_type identified. ## Mime type of response body identified by content sniffing.
mime_type: string &log &optional; mime_type: string &log &optional;
## This indicates that no data of the current file transfer has been ## Indicates that no data of the current file transfer has been
## seen yet. After the first :bro:id:`http_entity_data` event, it ## seen yet. After the first :bro:id:`http_entity_data` event, it
## will be set to T. ## will be set to F.
first_chunk: bool &default=T; first_chunk: bool &default=T;
}; };
redef enum Tags += { ## Mapping between mime types and regular expressions for URLs.
IDENTIFIED_FILE ## The :bro:enum:`HTTP::Incorrect_File_Type` notice is generated if the URL
}; ## doesn't match the pattern associated with the discovered mime type.
# Create regexes that *should* in be in the urls for specifics mime types.
# Notices are thrown if the pattern doesn't match the url for the file type.
const mime_types_extensions: table[string] of pattern = { const mime_types_extensions: table[string] of pattern = {
["application/x-dosexec"] = /\.([eE][xX][eE]|[dD][lL][lL])/, ["application/x-dosexec"] = /\.([eE][xX][eE]|[dD][lL][lL])/,
} &redef; } &redef;
## A pattern for filtering out :bro:enum:`HTTP::Incorrect_File_Type` urls
## that are not noteworthy before a notice is created. Each
## pattern added should match the complete URL (the matched URLs include
## "http://" at the beginning).
const ignored_incorrect_file_type_urls = /^$/ &redef;
} }
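For illustration only, not part of this diff: filtering out a hypothetical internal host whose URLs are expected to trip the mismatch check; per the comment above, the pattern must cover the complete URL including the leading "http://".

    @load base/protocols/http/file-ident

    redef HTTP::ignored_incorrect_file_type_urls =
        /^http:\/\/updates\.example\.com\/.*/;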
event signature_match(state: signature_state, msg: string, data: string) &priority=5 event signature_match(state: signature_state, msg: string, data: string) &priority=5
@ -59,6 +60,10 @@ event signature_match(state: signature_state, msg: string, data: string) &priori
c$http?$uri && mime_types_extensions[msg] !in c$http$uri ) c$http?$uri && mime_types_extensions[msg] !in c$http$uri )
{ {
local url = build_url_http(c$http); local url = build_url_http(c$http);
if ( url == ignored_incorrect_file_type_urls )
return;
local message = fmt("%s %s %s", msg, c$http$method, url); local message = fmt("%s %s %s", msg, c$http$method, url);
NOTICE([$note=Incorrect_File_Type, NOTICE([$note=Incorrect_File_Type,
$msg=message, $msg=message,


@ -1,3 +1,7 @@
##! Implements base functionality for HTTP analysis. The logging model is
##! to log request/response pairs and all relevant metadata together in
##! a single record.
@load base/utils/numbers @load base/utils/numbers
@load base/utils/files @load base/utils/files
@ -8,6 +12,7 @@ export {
## Indicate a type of attack or compromise in the record to be logged. ## Indicate a type of attack or compromise in the record to be logged.
type Tags: enum { type Tags: enum {
## Placeholder.
EMPTY EMPTY
}; };
@ -15,64 +20,69 @@ export {
const default_capture_password = F &redef; const default_capture_password = F &redef;
type Info: record { type Info: record {
ts: time &log; ## Timestamp for when the request happened.
uid: string &log; ts: time &log;
id: conn_id &log; uid: string &log;
## This represents the pipelined depth into the connection of this id: conn_id &log;
## Represents the pipelined depth into the connection of this
## request/response transaction. ## request/response transaction.
trans_depth: count &log; trans_depth: count &log;
## The verb used in the HTTP request (GET, POST, HEAD, etc.). ## Verb used in the HTTP request (GET, POST, HEAD, etc.).
method: string &log &optional; method: string &log &optional;
## The value of the HOST header. ## Value of the HOST header.
host: string &log &optional; host: string &log &optional;
## The URI used in the request. ## URI used in the request.
uri: string &log &optional; uri: string &log &optional;
## The value of the "referer" header. The comment is deliberately ## Value of the "referer" header. The comment is deliberately
## misspelled like the standard declares, but the name used here is ## misspelled like the standard declares, but the name used here is
## "referrer" spelled correctly. ## "referrer" spelled correctly.
referrer: string &log &optional; referrer: string &log &optional;
## The value of the User-Agent header from the client. ## Value of the User-Agent header from the client.
user_agent: string &log &optional; user_agent: string &log &optional;
## The actual uncompressed content size of the data transferred from ## Actual uncompressed content size of the data transferred from
## the client. ## the client.
request_body_len: count &log &default=0; request_body_len: count &log &default=0;
## The actual uncompressed content size of the data transferred from ## Actual uncompressed content size of the data transferred from
## the server. ## the server.
response_body_len: count &log &default=0; response_body_len: count &log &default=0;
## The status code returned by the server. ## Status code returned by the server.
status_code: count &log &optional; status_code: count &log &optional;
## The status message returned by the server. ## Status message returned by the server.
status_msg: string &log &optional; status_msg: string &log &optional;
## The last 1xx informational reply code returned by the server. ## Last seen 1xx informational reply code returned by the server.
info_code: count &log &optional; info_code: count &log &optional;
## The last 1xx informational reply message returned by the server. ## Last seen 1xx informational reply message returned by the server.
info_msg: string &log &optional; info_msg: string &log &optional;
## The filename given in the Content-Disposition header ## Filename given in the Content-Disposition header sent by the server.
## sent by the server.
filename: string &log &optional; filename: string &log &optional;
## This is a set of indicators of various attributes discovered and ## A set of indicators of various attributes discovered and
## related to a particular request/response pair. ## related to a particular request/response pair.
tags: set[Tags] &log; tags: set[Tags] &log;
## The username if basic-auth is performed for the request. ## Username if basic-auth is performed for the request.
username: string &log &optional; username: string &log &optional;
## The password if basic-auth is performed for the request. ## Password if basic-auth is performed for the request.
password: string &log &optional; password: string &log &optional;
## This determines if the password will be captured for this request. ## Determines if the password will be captured for this request.
capture_password: bool &default=default_capture_password; capture_password: bool &default=default_capture_password;
## All of the headers that may indicate if the request was proxied. ## All of the headers that may indicate if the request was proxied.
proxied: set[string] &log &optional; proxied: set[string] &log &optional;
}; };
## Structure to maintain state for an HTTP connection with multiple
## requests and responses.
type State: record { type State: record {
## Pending requests.
pending: table[count] of Info; pending: table[count] of Info;
current_response: count &default=0; ## Current request in the pending queue.
current_request: count &default=0; current_request: count &default=0;
## Current response in the pending queue.
current_response: count &default=0;
}; };
## The list of HTTP headers typically used to indicate a proxied request. ## A list of HTTP headers typically used to indicate proxied requests.
const proxy_headers: set[string] = { const proxy_headers: set[string] = {
"FORWARDED", "FORWARDED",
"X-FORWARDED-FOR", "X-FORWARDED-FOR",
@ -83,6 +93,8 @@ export {
"PROXY-CONNECTION", "PROXY-CONNECTION",
} &redef; } &redef;
## Event that can be handled to access the HTTP record as it is sent on
## to the logging framework.
global log_http: event(rec: Info); global log_http: event(rec: Info);
} }


@ -5,8 +5,31 @@
module HTTP; module HTTP;
export { export {
## Given a string containing a series of key-value pairs separated by "=",
## this function can be used to parse out all of the key names.
##
## data: The raw data, such as a URL or cookie value.
##
## kv_splitter: A regular expression representing the separator between
## key-value pairs.
##
## Returns: A vector of strings containing the keys.
global extract_keys: function(data: string, kv_splitter: pattern): string_vec; global extract_keys: function(data: string, kv_splitter: pattern): string_vec;
## Creates a URL from an :bro:type:`HTTP::Info` record. This should handle
## edge cases such as proxied requests appropriately.
##
## rec: An :bro:type:`HTTP::Info` record.
##
## Returns: A URL, not prefixed by "http://".
global build_url: function(rec: Info): string; global build_url: function(rec: Info): string;
## Creates a URL from an :bro:type:`HTTP::Info` record. This should handle
## edge cases such as proxied requests appropriately.
##
## rec: An :bro:type:`HTTP::Info` record.
##
## Returns: A URL prefixed with "http://".
global build_url_http: function(rec: Info): string; global build_url_http: function(rec: Info): string;
} }
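For illustration only, not part of this diff: exercising the key-extraction helper declared above; the cookie-style input string is made up.

    @load base/protocols/http/utils

    event bro_init()
        {
        local keys = HTTP::extract_keys("lang=en; theme=dark; sid=42", /; */);
        for ( i in keys )
            print keys[i];      # lang, theme, sid
        }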


@ -5,8 +5,9 @@
##! but that connection will actually be between B and C which could be ##! but that connection will actually be between B and C which could be
##! analyzed on a different worker. ##! analyzed on a different worker.
##! ##!
##! Example line from IRC server indicating that the DCC SEND is about to start:
##! PRIVMSG my_nick :^ADCC SEND whateverfile.zip 3640061780 1026 41709^A # Example line from IRC server indicating that the DCC SEND is about to start:
# PRIVMSG my_nick :^ADCC SEND whateverfile.zip 3640061780 1026 41709^A
@load ./main @load ./main
@load base/utils/files @load base/utils/files
@ -14,24 +15,25 @@
module IRC; module IRC;
export { export {
redef enum Tag += { EXTRACTED_FILE };
## Pattern of file mime types to extract from IRC DCC file transfers. ## Pattern of file mime types to extract from IRC DCC file transfers.
const extract_file_types = /NO_DEFAULT/ &redef; const extract_file_types = /NO_DEFAULT/ &redef;
## The on-disk prefix for files to be extracted from IRC DCC file transfers. ## On-disk prefix for files to be extracted from IRC DCC file transfers.
const extraction_prefix = "irc-dcc-item" &redef; const extraction_prefix = "irc-dcc-item" &redef;
redef record Info += { redef record Info += {
dcc_file_name: string &log &optional; ## DCC filename requested.
dcc_file_size: count &log &optional; dcc_file_name: string &log &optional;
dcc_mime_type: string &log &optional; ## Size of the DCC transfer as indicated by the sender.
dcc_file_size: count &log &optional;
## Sniffed mime type of the file.
dcc_mime_type: string &log &optional;
## The file handle for the file to be extracted ## The file handle for the file to be extracted
extraction_file: file &log &optional; extraction_file: file &log &optional;
## A boolean to indicate if the current file transfer should be extraced. ## A boolean to indicate if the current file transfer should be extracted.
extract_file: bool &default=F; extract_file: bool &default=F;
## The count of the number of file that have been extracted during the session. ## The number of files that have been extracted during the session.
num_extracted_files: count &default=0; num_extracted_files: count &default=0;
@ -54,8 +56,10 @@ event file_transferred(c: connection, prefix: string, descr: string,
if ( extract_file_types == irc$dcc_mime_type ) if ( extract_file_types == irc$dcc_mime_type )
{ {
irc$extract_file = T; irc$extract_file = T;
add irc$tags[EXTRACTED_FILE]; }
if ( irc$extract_file )
{
local suffix = fmt("%d.dat", ++irc$num_extracted_files); local suffix = fmt("%d.dat", ++irc$num_extracted_files);
local fname = generate_extraction_filename(extraction_prefix, c, suffix); local fname = generate_extraction_filename(extraction_prefix, c, suffix);
irc$extraction_file = open(fname); irc$extraction_file = open(fname);
@ -76,7 +80,7 @@ event file_transferred(c: connection, prefix: string, descr: string,
Log::write(IRC::LOG, irc); Log::write(IRC::LOG, irc);
irc$command = tmp; irc$command = tmp;
if ( irc$extract_file && irc?$extraction_file ) if ( irc?$extraction_file )
set_contents_file(id, CONTENTS_RESP, irc$extraction_file); set_contents_file(id, CONTENTS_RESP, irc$extraction_file);
# Delete these values in case another DCC transfer # Delete these values in case another DCC transfer


@ -1,36 +1,38 @@
##! This is the script that implements the core IRC analysis support. It only ##! Implements the core IRC analysis support. The logging model is to log
##! logs a very limited subset of the IRC protocol by default. The points ##! IRC commands along with the associated response and some additional
##! that it logs at are NICK commands, USER commands, and JOIN commands. It ##! metadata about the connection if it's available.
##! log various bits of meta data as indicated in the :bro:type:`Info` record
##! along with the command at the command arguments.
module IRC; module IRC;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Tag: enum {
EMPTY
};
type Info: record { type Info: record {
## Timestamp when the command was seen.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## Nick name given for the connection.
nick: string &log &optional; nick: string &log &optional;
## User name given for the connection.
user: string &log &optional; user: string &log &optional;
channels: set[string] &log &optional;
## Command given by the client.
command: string &log &optional; command: string &log &optional;
## Value for the command given by the client.
value: string &log &optional; value: string &log &optional;
## Any additional data for the command.
addl: string &log &optional; addl: string &log &optional;
tags: set[Tag] &log;
}; };
## Event that can be handled to access the IRC record as it is sent on
## to the logging framework.
global irc_log: event(rec: Info); global irc_log: event(rec: Info);
} }
redef record connection += { redef record connection += {
## IRC session information.
irc: Info &optional; irc: Info &optional;
}; };
@ -41,7 +43,7 @@ redef capture_filters += { ["irc-6668"] = "port 6668" };
redef capture_filters += { ["irc-6669"] = "port 6669" }; redef capture_filters += { ["irc-6669"] = "port 6669" };
# DPD configuration. # DPD configuration.
global irc_ports = { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp } &redef; const irc_ports = { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp };
redef dpd_config += { [ANALYZER_IRC] = [$ports = irc_ports] }; redef dpd_config += { [ANALYZER_IRC] = [$ports = irc_ports] };
redef likely_server_ports += { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp }; redef likely_server_ports += { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp };


@ -14,15 +14,17 @@
module SSH; module SSH;
export { export {
## The SSH protocol logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
redef enum Notice::Type += { redef enum Notice::Type += {
## This indicates that a heuristically detected "successful" SSH ## Indicates that a heuristically detected "successful" SSH
## authentication occurred. ## authentication occurred.
Login Login
}; };
type Info: record { type Info: record {
## Time when the SSH connection began.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
@ -34,11 +36,11 @@ export {
## would be set for the opposite situation. ## would be set for the opposite situation.
# TODO: handle local-local and remote-remote better. # TODO: handle local-local and remote-remote better.
direction: Direction &log &optional; direction: Direction &log &optional;
## The software string given by the client. ## Software string given by the client.
client: string &log &optional; client: string &log &optional;
## The software string given by the server. ## Software string given by the server.
server: string &log &optional; server: string &log &optional;
## The amount of data returned from the server. This is currently ## Amount of data returned from the server. This is currently
## the only measure of the success heuristic and it is logged to ## the only measure of the success heuristic and it is logged to
## assist analysts looking at the logs to make their own determination ## assist analysts looking at the logs to make their own determination
## about the success on a case-by-case basis. ## about the success on a case-by-case basis.
@ -48,8 +50,8 @@ export {
done: bool &default=F; done: bool &default=F;
}; };
## The size in bytes at which the SSH connection is presumed to be ## The size in bytes of data sent by the server at which the SSH
## successful. ## connection is presumed to be successful.
const authentication_data_size = 5500 &redef; const authentication_data_size = 5500 &redef;
## If true, we tell the event engine to not look at further data ## If true, we tell the event engine to not look at further data
@ -58,14 +60,16 @@ export {
## kinds of analyses (e.g., tracking connection size). ## kinds of analyses (e.g., tracking connection size).
const skip_processing_after_detection = F &redef; const skip_processing_after_detection = F &redef;
## This event is generated when the heuristic thinks that a login ## Event that is generated when the heuristic thinks that a login
## was successful. ## was successful.
global heuristic_successful_login: event(c: connection); global heuristic_successful_login: event(c: connection);
## This event is generated when the heuristic thinks that a login ## Event that is generated when the heuristic thinks that a login
## failed. ## failed.
global heuristic_failed_login: event(c: connection); global heuristic_failed_login: event(c: connection);
## Event that can be handled to access the :bro:type:`SSH::Info`
## record as it is sent on to the logging framework.
global log_ssh: event(rec: Info); global log_ssh: event(rec: Info);
} }
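For illustration only, not part of this diff: tuning the heuristic threshold and reacting to the events declared above.

    @load base/protocols/ssh

    # Treat smaller server responses as successful logins (default is 5500).
    redef SSH::authentication_data_size = 4000;

    event SSH::heuristic_successful_login(c: connection)
        {
        print fmt("apparent successful SSH login: %s -> %s",
                  c$id$orig_h, c$id$resp_h);
        }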
Some files were not shown because too many files have changed in this diff.