Documentation fixes.

This cleans up most of the warnings from Sphinx (broken :doc: links,
Broxygen role misuses, etc.).  The remaining ones should be harmless,
but not quick to silence.

I found that the README for each component was a copy from the actual
repo, so I turned those into symlinks so they don't get out of date.
Jon Siwek 2013-09-03 15:59:40 -05:00
parent 2392a29b7f
commit db470a637a
32 changed files with 123 additions and 5151 deletions

NEWS (29 changes)

@ -48,7 +48,7 @@ New Functionality
   than global state as it was before.
 - The scripting language now supports a constructing sets, tables,
-  vectors, and records by name:
+  vectors, and records by name::
       type MyRecordType: record {
          c: count;
@ -178,7 +178,7 @@ Changed Functionality
       split_complete()
 - md5_*, sha1_*, sha256_*, and entropy_* have all changed
   their signatures to work with opaque types (see above).
 - Removed a now unused argument from "do_split" helper function.
@ -204,7 +204,7 @@ Changed Functionality
 - What used to be a "redef" of "Notice::policy" now becomes a hook
   implementation. Example:
-  Old:
+  Old::
       redef Notice::policy += {
          [$pred(n: Notice::Info) = {
@ -213,7 +213,7 @@ Changed Functionality
          $action = Notice::ACTION_EMAIL]
          };
-  New:
+  New::
       hook Notice::policy(n: Notice::Info)
          {
@ -225,18 +225,18 @@ Changed Functionality
   handlers for that event, you'll likely just need to change the
   type accordingly. Example:
-  Old:
+  Old::
       event notice(n: Notice::Info) { ... }
-  New:
+  New::
       hook notice(n: Notice::Info) { ... }
 - The notice_policy.log is gone. That's a result of the new notice
   policy setup.
-- Removed the byte_len() and length() bif functions. Use the "|...|"
+- Removed the byte_len() and length() bif functions. Use the ``|...|``
   operator instead.
 - The SSH::Login notice has been superseded by an corresponding
@ -479,8 +479,8 @@ with the new version. The two rules of thumb are:
   if you need help.
 Below we summarize changes from 1.x to 2.x in more detail. This list
-isn't complete, see the :download:`CHANGES <CHANGES>` file in the
-distribution for the full story.
+isn't complete, see the ``CHANGES`` file in the distribution or
+:doc:`here <changes>` for the full story.
 Script Organization
 -------------------
@ -568,8 +568,8 @@ Logging Framework
   endpoint.
 - The new logging framework makes it possible to extend, customize,
-  and filter logs very easily. See the :doc:`logging framework <logging>`
-  for more information on usage.
+  and filter logs very easily. See the :doc:`logging framework
+  </frameworks/logging>` for more information on usage.
 - A common pattern found in the new scripts is to store logging stream
   records for protocols inside the ``connection`` records so that
@ -590,9 +590,10 @@ Logging Framework
 Notice Framework
 ----------------
-The way users interact with "notices" has changed significantly in
-order to make it easier to define a site policy and more extensible
-for adding customized actions. See the :doc:`notice framework <notice>`.
+The way users interact with "notices" has changed significantly in order
+to make it easier to define a site policy and more extensible for adding
+customized actions. See the :doc:`notice framework
+</frameworks/notice>`.
 New Default Settings

@ -1 +1 @@
-Subproject commit f66eea64d1bbcbee0e41621e553260cece2a1e48
+Subproject commit d53d07dafe904db24ee1d022b3f831007f824f87


@ -1,68 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.34-3
======
BinPAC
======
.. rst-class:: opening
BinPAC is a high-level language for describing protocol parsers, from
which it generates C++ code. It is currently maintained and distributed
with the Bro Network Security Monitor distribution; however, the generated
parsers may be used with other programs besides Bro.
Download
--------
You can find the latest BinPAC release for download at
http://www.bro.org/download.
BinPAC's git repository is located at `git://git.bro.org/binpac.git
<git://git.bro.org/binpac.git>`__. You can browse the repository
`here <http://git.bro.org/binpac.git>`__.
This document describes BinPAC |version|. See the ``CHANGES``
file for version history.
Prerequisites
-------------
BinPAC relies on the following libraries and tools, which need to be
installed before you begin:
* Flex (Fast Lexical Analyzer)
Flex is already installed on most systems, so with luck you can
skip having to install it yourself.
* Bison (GNU Parser Generator)
Bison is also already installed on many systems.
* CMake 2.6.3 or greater
CMake is a cross-platform, open-source build system, typically
not installed by default. See http://www.cmake.org for more
information regarding CMake and the installation steps below for
how to use it to build this distribution. CMake generates native
Makefiles that depend on GNU Make by default.
Installation
------------
To build and install into ``/usr/local``::
./configure
cd build
make
make install
This will perform an out-of-source build into the build directory using
the default build options and then install the binpac binary into
``/usr/local/bin``.
You can specify a different installation directory with::
./configure --prefix=<dir>
Run ``./configure --help`` for more options.


@ -0,0 +1 @@
../../../aux/binpac/README


@ -1,70 +0,0 @@
.. -*- mode: rst; -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.26-5
======================
Bro Auxiliary Programs
======================
.. contents::
:Version: |version|
Handy auxiliary programs related to the use of the Bro Network Security
Monitor (http://www.bro.org).
Note that some files that were formerly distributed with Bro as part
of the aux/ tree are now maintained separately. See
http://www.bro.org/download for their download locations.
adtrace
=======
Makefile and source for the adtrace utility. This program is used
in conjunction with the localnetMAC.pl perl script to compute the
network addresses that compose the internal and external nets that
Bro is monitoring. When run by itself, this program just reads a pcap
(tcpdump) file and writes out the src MAC, dst MAC, src IP, and dst
IP for each packet seen in the file. This output is processed by
the localnetMAC.pl script during 'make install'.
devel-tools
===========
A set of scripts used commonly for Bro development.
extract-conn-by-uid:
Extracts a connection from a trace file based
on its UID found in Bro's conn.log
gen-mozilla-ca-list.rb
Generates list of Mozilla SSL root certificates in
a format readable by Bro.
update-changes
A script to maintain the CHANGES and VERSION files.
git-show-fastpath
Show commits to the fastpath branch not yet merged into master.
cpu-bench-with-trace
Run a number of Bro benchmarks on a trace file.
nftools
=======
Utilities for dealing with Bro's custom file format for storing
NetFlow records. nfcollector reads NetFlow data from a socket and
writes it in Bro's format. ftwire2bro reads NetFlow "wire" format
(e.g., as generated by a 'flow-export' directive) and writes it in
Bro's format.
rst
===
Makefile and source for the rst utility. "rst" can be invoked by
a Bro script to terminate an established TCP connection by forging
RST tear-down packets. See terminate_connection() in conn.bro.


@ -0,0 +1 @@
../../../aux/bro-aux/README


@ -1,231 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.54
============================
Python Bindings for Broccoli
============================
.. rst-class:: opening
This Python module provides bindings for Broccoli, Bro's client
communication library. In general, the bindings provide the same
functionality as Broccoli's C API.
.. contents::
Download
--------
You can find the latest Broccoli-Python release for download at
http://www.bro.org/download.
Broccoli-Python's git repository is located at `git://git.bro.org/broccoli-python.git
<git://git.bro.org/broccoli-python.git>`__. You can browse the repository
`here <http://git.bro.org/broccoli-python.git>`__.
This document describes Broccoli-Python |version|. See the ``CHANGES``
file for version history.
Installation
------------
Installation of the Python module is pretty straight-forward. After
Broccoli itself has been installed, it follows the standard installation
process for Python modules::
python setup.py install
Try the following to test the installation. If you do not see any
error message, everything should be fine::
python -c "import broccoli"
Usage
-----
The following examples demonstrate how to send and receive Bro
events in Python.
The main challenge when using Broccoli from Python is dealing with
the data types of Bro event parameters as there is no one-to-one
mapping between Bro's types and Python's types. The Python module
automatically maps between those types which both systems provide
(such as strings) and provides a set of wrapper classes for Bro
types which do not have a direct Python equivalent (such as IP
addresses).
Connecting to Bro
~~~~~~~~~~~~~~~~~
The following code sets up a connection from Python to a remote Bro
instance (or another Broccoli) and provides a connection handle for
further communication::
from broccoli import *
bc = Connection("127.0.0.1:47758")
An ``IOError`` will be raised if the connection cannot be established.
Sending Events
~~~~~~~~~~~~~~
Once you have a connection handle ``bc`` set up as shown above, you can
start sending events::
bc.send("foo", 5, "attack!")
This sends an event called ``foo`` with two parameters, ``5`` and
``attack!``. Broccoli operates asynchronously, i.e., events scheduled
with ``send()`` are not always sent out immediately but might be
queued for later transmission. To ensure that all events get out
(and incoming events are processed, see below), you need to call
``bc.processInput()`` regularly.
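Putting this together, a minimal send-side sketch might look like the
following (the event name ``foo`` and its arguments are the example from
above; the short loop is just one simple way to call ``processInput()``
regularly)::

    import time as _time      # renamed: broccoli exports its own ``time`` wrapper
    from broccoli import *

    bc = Connection("127.0.0.1:47758")   # raises IOError if the peer is unreachable

    bc.send("foo", 5, "attack!")         # queued, not necessarily sent immediately

    # Flush queued events (and process any incoming ones).
    for i in range(10):
        bc.processInput()
        _time.sleep(0.1)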
Data Types
~~~~~~~~~~
In the example above, the types of the event parameters are
automatically derived from the corresponding Python types: the first
parameter (``5``) has the Bro type ``int`` and the second one
(``attack!``) has Bro type ``string``.
For types which do not have a Python equivalent, the ``broccoli``
module provides wrapper classes which have the same names as the
corresponding Bro types. For example, to send an event called ``bar``
with one ``addr`` argument and one ``count`` argument, you can write::
bc.send("bar", addr("192.168.1.1"), count(42))
The following table summarizes the available atomic types and their
usage.
======== =========== ============================
Bro Type Python Type Example
======== =========== ============================
addr                 ``addr("192.168.1.1")``
bool     bool        ``True``
count                ``count(42)``
double   float       ``3.14``
enum                 Type currently not supported
int      int         ``5``
interval             ``interval(60)``
net                  Type currently not supported
port                 ``port("80/tcp")``
string   string      ``"attack!"``
subnet               ``subnet("192.168.1.0/24")``
time                 ``time(1111111111.0)``
======== =========== ============================
The ``broccoli`` module also supports sending Bro records as event
parameters. To send a record, you first define a record type. For
example, a Bro record type::
type my_record: record {
a: int;
b: addr;
c: subnet;
};
turns into Python as::
my_record = record_type("a", "b", "c")
As the example shows, Python only needs to know the attribute names
but not their types. The types are derived automatically in the same
way as discussed above for atomic event parameters.
Now you can instantiate a record instance of the newly defined type
and send it out::
rec = record(my_record)
rec.a = 5
rec.b = addr("192.168.1.1")
rec.c = subnet("192.168.1.0/24")
bc.send("my_event", rec)
.. note:: The Python module does not support nested records at this time.
Receiving Events
~~~~~~~~~~~~~~~~
To receive events, you define a callback function having the same
name as the event and mark it with the ``event`` decorator::
@event
def foo(arg1, arg2):
print arg1, arg2
Once you start calling ``bc.processInput()`` regularly (see above),
each received ``foo`` event will trigger the callback function.
By default, the event's arguments are always passed in with built-in
Python types. For Bro types which do not have a direct Python
equivalent (see table above), a substitute built-in type is used
which corresponds to the type the wrapper class' constructor expects
(see the examples in the table). For example, Bro type ``addr`` is
passed in as a string and Bro type ``time`` is passed in as a float.
Alternatively, you can define a *typed* prototype for the event. If you
do so, arguments will first be type-checked and then passed to the
callback with the specified type (which means instances of the
wrapper classes for non-Python types). Example::
@event(count, addr)
def bar(arg1, arg2):
print arg1, arg2
Here, ``arg1`` will be an instance of the ``count`` wrapper class and
``arg2`` will be an instance of the ``addr`` wrapper class.
Prototyping works similarly with built-in Python types::
@event(int, string)
def foo(arg1, arg2):
print arg1, arg2
In general, the prototype specifies the types in which the callback
wants to receive the arguments. This actually provides support for
simple type casts as some types support conversion into something
different. If for instance the event source sends an event with a
single port argument, ``@event(port)`` will pass the port as an
instance of the ``port`` wrapper class; ``@event(string)`` will pass it
as a string (e.g., ``"80/tcp"``); and ``@event(int)`` will pass it as an
integer without protocol information (e.g., just ``80``). If an
argument cannot be converted into the specified type, a ``TypeError``
will be raised.
To receive an event with a record parameter, the record type first
needs to be defined, as described above. Then the type can be used
with the ``@event`` decorator in the same way as atomic types::
my_record = record_type("a", "b", "c")
@event(my_record)
def my_event(rec):
print rec.a, rec.b, rec.c
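A minimal receive-side sketch combining the pieces above (the typed
``foo`` prototype is the example from earlier; the endless loop is just
one way to call ``processInput()`` regularly)::

    import time as _time      # renamed: broccoli exports its own ``time`` wrapper
    from broccoli import *

    @event(int, string)
    def foo(arg1, arg2):
        # Typed prototype: arg1 arrives as an int, arg2 as a string.
        print arg1, arg2

    bc = Connection("127.0.0.1:47758")

    while True:
        bc.processInput()   # dispatches queued ``foo`` events to the handler
        _time.sleep(1)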
Helper Functions
----------------
The ``broccoli`` module provides one helper function: ``current_time()``
returns the current time as a float which, if necessary, can be
wrapped into a ``time`` parameter (i.e., ``time(current_time())``).
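For example, to attach an explicit timestamp to an outgoing event
(``heartbeat`` is just a hypothetical event name)::

    bc.send("heartbeat", time(current_time()))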
Examples
--------
There are some example scripts in the ``tests/`` subdirectory of the
``broccoli-python`` repository
`here <http://git.bro.org/broccoli-python.git/tree/HEAD:/tests>`_:
- ``broping.py`` is a (simplified) Python version of Broccoli's test program
``broping``. Start Bro with ``broping.bro``.
- ``broping-record.py`` is a Python version of Broccoli's ``broping``
for records. Start Bro with ``broping-record.bro``.
- ``test.py`` is a very ugly but comprehensive regression test and part of
the communication test-suite. Start Bro with ``test.bro``.


@ -0,0 +1 @@
../../../aux/broccoli/bindings/broccoli-python/README


@ -1,67 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 1.54
===============================================
Ruby Bindings for Broccoli
===============================================
.. rst-class:: opening
This is the broccoli-ruby extension for Ruby which provides access
to the Broccoli API. Broccoli is a library for
communicating with the Bro Intrusion Detection System.
Download
========
You can find the latest Broccoli-Ruby release for download at
http://www.bro.org/download.
Broccoli-Ruby's git repository is located at `git://git.bro.org/broccoli-ruby.git
<git://git.bro.org/broccoli-ruby.git>`__. You can browse the repository
`here <http://git.bro.org/broccoli-ruby.git>`__.
This document describes Broccoli-Ruby |version|. See the ``CHANGES``
file for version history.
Installation
============
To install the extension:
1. Make sure that the ``broccoli-config`` binary is in your path.
(``export PATH=/usr/local/bro/bin:$PATH``)
2. Run ``sudo ruby setup.rb``.
To install the extension as a gem (suggested):
1. Install `rubygems <http://rubygems.org>`_.
2. Make sure that the ``broccoli-config`` binary is in your path.
(``export PATH=/usr/local/bro/bin:$PATH``)
3. Run ``sudo gem install rbroccoli``.
Usage
=====
There aren't really any useful docs yet. Your best bet currently is
to read through the examples.
One thing I should mention, however, is that I haven't done any optimization
yet. You may find that code which sends or receives extremely large numbers
of events won't run fast enough and will begin to fall behind the Bro
server. The dns_requests.rb example is a good performance test if your Bro
server is sitting on a network with many DNS lookups.
Contact
=======
If you have a question/comment/patch, see the Bro `contact page
<http://www.bro.org/contact/index.html>`_.


@ -0,0 +1 @@
../../../aux/broccoli/bindings/broccoli-ruby/README


@ -1,141 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 1.92-9
===============================================
Broccoli: The Bro Client Communications Library
===============================================
.. rst-class:: opening
Broccoli is the "Bro client communications library". It allows you
to create client sensors for the Bro intrusion detection system.
Broccoli can speak a good subset of the Bro communication protocol;
in particular, it can receive Bro IDs, send and receive Bro events,
and send and receive event requests to/from peering Bros. You can
currently create and receive values of pure types like integers,
counters, timestamps, IP addresses, port numbers, booleans, and
strings.
Download
--------
You can find the latest Broccoli release for download at
http://www.bro.org/download.
Broccoli's git repository is located at
`git://git.bro.org/broccoli <git://git.bro.org/broccoli>`_. You
can browse the repository `here <http://git.bro.org/broccoli>`_.
This document describes Broccoli |version|. See the ``CHANGES``
file for version history.
Installation
------------
The Broccoli library has been tested on Linux, the BSDs, and Solaris.
A Windows build has not currently been tried but is part of our future
plans. If you succeed in building Broccoli on other platforms, let us
know!
Prerequisites
-------------
Broccoli relies on the following libraries and tools, which need to be
installed before you begin:
Flex (Fast Lexical Analyzer)
Flex is already installed on most systems, so with luck you
can skip having to install it yourself.
Bison (GNU Parser Generator)
This comes with many systems, but if you get errors compiling
parse.y, you will need to install it.
OpenSSL headers and libraries
For encrypted communication. These are likely installed,
though some platforms may require installation of a 'devel'
package for the headers.
CMake 2.6.3 or greater
CMake is a cross-platform, open-source build system, typically
not installed by default. See http://www.cmake.org for more
information regarding CMake and the installation steps below
for how to use it to build this distribution. CMake generates
native Makefiles that depend on GNU Make by default.
Broccoli can also make use of some optional libraries if they are found at
installation time:
Libpcap headers and libraries
Network traffic capture library
Installation
------------
To build and install into ``/usr/local``::
./configure
make
make install
This will perform an out-of-source build into the build directory using the
default build options and then install libraries into ``/usr/local/lib``.
You can specify a different installation directory with::
./configure --prefix=<dir>
Or control the python bindings install destination more precisely with::
./configure --python-install-dir=<dir>
Run ``./configure --help`` for more options.
Further notable configure options:
``--enable-debug``
This one enables lots of debugging output. Be sure to disable
this when using the library in a production environment! The
output could easily end up in undesired places when the stdout
of the program you've instrumented is used in other ways.
``--with-configfile=FILE``
Broccoli can read key/value pairs from a config file. By default
it is located in the etc directory of the installation root
(exception: when using ``--prefix=/usr``, ``/etc`` is used
instead of /usr/etc). The default config file name is
broccoli.conf. Using ``--with-configfile``, you can override the
location and name of the config file.
To use the library in other programs and configure scripts, use the
``broccoli-config`` script. It gives you the necessary configuration flags
and linker flags for your system; see ``--cflags`` and ``--libs``.
The API is contained in broccoli.h and pretty well documented. A few
usage examples can be found in the test directory, in particular, the
``broping`` tool can be used to test event transmission and reception. Have
a look at the policy file ``broping.bro`` for the events that need to be
defined at the peering Bro. Try ``broping -h`` for a look at the available
options.
Broccoli knows two kinds of version numbers: the release version number
(as in "broccoli-x.y.tar.gz", or as shipped with Bro) and the shared
library API version number (as in libbroccoli.so.3.0.0). The former
relates to changes in the tree, the latter to compatibility changes in
the API.
Comments, feedback and patches are appreciated; please check the `Bro
website <http://www.bro.org/community>`_.
Documentation
-------------
Please see the `Broccoli User Manual <./broccoli-manual.html>`_ and
the `Broccoli API Reference <../../broccoli-api/index.html>`_.


@ -0,0 +1 @@
../../../aux/broccoli/README

File diff suppressed because it is too large.


@ -0,0 +1 @@
../../../aux/broccoli/doc/broccoli-manual.rst

File diff suppressed because it is too large.


@ -0,0 +1 @@
../../../aux/broctl/doc/broctl.rst


@ -1,843 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.4-14
============================================
BTest - A Simple Driver for Basic Unit Tests
============================================
.. rst-class:: opening
``btest`` is a simple framework for writing unit tests. Freely
borrowing some ideas from other packages, its main objective is to
provide an easy-to-use, straightforward driver for a suite of
shell-based tests. Each test consists of a set of command lines that
will be executed, and success is determined based on their exit
codes. ``btest`` comes with some additional tools that can be used
within such tests to compare output against a previously established
baseline.
.. contents::
Download
========
You can find the latest BTest release for download at
http://www.bro.org/download.
BTest's git repository is located at `git://git.bro.org/btest.git
<git://git.bro.org/btest.git>`__. You can browse the repository
`here <http://git.bro.org/btest.git>`__.
This document describes BTest |version|. See the ``CHANGES``
file for version history.
Installation
============
Installation is simple and standard::
tar xzvf btest-*.tar.gz
cd btest-*
python setup.py install
This will install a few scripts: ``btest`` is the main driver program,
and there are a number of further helper scripts that we discuss below
(including ``btest-diff``, which is a tool for comparing output to a
previously established baseline).
Writing a Simple Test
=====================
In the simplest case, ``btest`` simply executes a set of command
lines, each of which must be prefixed with ``@TEST-EXEC:``
::
> cat examples/t1
@TEST-EXEC: echo "Foo" | grep -q Foo
@TEST-EXEC: test -d .
> btest examples/t1
examples.t1 ... ok
The test passes as both command lines return success. If one of them
didn't, that would be reported::
> cat examples/t2
@TEST-EXEC: echo "Foo" | grep -q Foo
@TEST-EXEC: test -d DOESNOTEXIST
> btest examples/t2
examples.t2 ... failed
Usually you will just run all tests found in a directory::
> btest examples
examples.t1 ... ok
examples.t2 ... failed
1 test failed
Why do we need the ``@TEST-EXEC:`` prefixes? Because the file
containing the test can simultaneously act as *its input*. Let's
say we want to verify a shell script::
> cat examples/t3.sh
# @TEST-EXEC: sh %INPUT
ls /etc | grep -q passwd
> btest examples/t3.sh
examples.t3 ... ok
Here, ``btest`` is executing (something similar to) ``sh
examples/t3.sh``, and then checks the return value as usual. The
example also shows that the ``@TEST-EXEC`` prefix can appear
anywhere, in particular inside the comment section of another
language.
Now, let's say we want to check the output of a program, making sure
that it matches what we expect. For that, we first add a command
line to the test that produces the output we want to check, and then
run ``btest-diff`` to make sure it matches a previously recorded
baseline. ``btest-diff`` is itself just a script that returns
success if the output is as expected, and failure otherwise. In the
following example, we use an awk script as a fancy way to print all
file names starting with a dot in the user's home directory. We
write that list into a file called ``dots`` and then check whether
its content matches what we know from last time::
> cat examples/t4.awk
# @TEST-EXEC: ls -a $HOME | awk -f %INPUT >dots
# @TEST-EXEC: btest-diff dots
/^\.+/ { print $1 }
Note that each test gets its own little sandbox directory when run,
so by creating a file like ``dots``, you aren't cluttering up
anything.
The first time we run this test, we need to record a baseline::
> btest -U examples/t4.awk
Now, ``btest-diff`` has remembered what the ``dots`` file should
look like::
> btest examples/t4.awk
examples.t4 ... ok
> touch ~/.NEWDOTFILE
> btest examples/t4.awk
examples.t4 ... failed
1 test failed
If we want to see what exactly the unexpected change is that was
introduced to ``dots``, there's a *diff* mode for that::
> btest -d examples/t4.awk
examples.t4 ... failed
% 'btest-diff dots' failed unexpectedly (exit code 1)
% cat .diag
== File ===============================
[... current dots file ...]
== Diff ===============================
--- /Users/robin/work/binpacpp/btest/Baseline/examples.t4/dots
2010-10-28 20:11:11.000000000 -0700
+++ dots 2010-10-28 20:12:30.000000000 -0700
@@ -4,6 +4,7 @@
.CFUserTextEncoding
.DS_Store
.MacOSX
+.NEWDOTFILE
.Rhistory
.Trash
.Xauthority
=======================================
% cat .stderr
[... if any of the commands had printed something to stderr, that would follow here ...]
Once we delete the new file, we are fine again::
> rm ~/.NEWDOTFILE
> btest -d examples/t4.awk
examples.t4 ... ok
That's already the main functionality that the ``btest`` package
provides. In the following, we describe a number of further options
extending/modifying this basic approach.
Reference
=========
Command Line Usage
------------------
``btest`` must be started with a list of tests and/or directories
given on the command line. In the latter case, the default is to
recursively scan the directories and assume all files found to be
tests to perform. It is however possible to exclude certain files by
specifying a suitable `configuration file`_.
``btest`` returns exit code 0 if all tests have successfully passed,
and 1 otherwise.
``btest`` accepts the following options:
-a ALTERNATIVE, --alternative=ALTERNATIVE
Activates an alternative_ configuration defined in the
configuration file. This option can be given multiple times to
run tests with several alternatives. If ``ALTERNATIVE`` is ``-``
that refers to running with the standard setup, which can be used
to run tests both with and without alternatives by giving both.
-b, --brief
Does not output *anything* for tests which pass. If all tests
pass, there will not be any output at all.
-c CONFIG, --config=CONFIG
Specifies an alternative `configuration file`_ to use. If not
specified, the default is to use a file called ``btest.cfg``
if found in the current directory.
-d, --diagnostics
Reports diagnostics for all failed tests. The diagnostics
include the command line that failed, its output to standard
error, and potential additional information recorded by the
command line for diagnostic purposes (see `@TEST-EXEC`_
below). In the case of ``btest-diff``, the latter is the
``diff`` between baseline and actual output.
-D, --diagnostics-all
Reports diagnostics for all tests, including those which pass.
-f DIAGFILE, --file-diagnostics=DIAGFILE
Writes diagnostics for all failed tests into the given file.
If the file already exists, it will be overwritten.
-g GROUPS, --group=GROUPS
Runs only tests assigned to the given test groups, see
`@TEST-GROUP`_. Multiple groups can be given as a
comma-separated list. Specifying ``-`` as a group name selects
all tests that do not belong to any group.
-j [THREADS], --jobs[=THREADS]
Runs up to the given number of tests in parallel. If no number
is given, BTest substitutes the number of available CPU cores
as reported by the OS.
By default, BTest assumes that all tests can be executed
concurrently without further constraints. One can however
ensure serialization of subsets by assigning them to the same
serialization set, see `@TEST-SERIALIZE`_.
-q, --quiet
Suppress information output other than about failed tests.
If all tests pass, there will not be any output at all.
-r, --rerun
Runs only tests that failed last time. After each execution
(except when updating baselines), BTest generates a state file
that records the tests that have failed. Using this option on
the next run then reads that file back in and limits execution
to those tests found in there.
-t, --tmp-keep
Does not delete any temporary files created for running the
tests (including their outputs). By default, the temporary
files for a test will be located in ``.tmp/<test>/``, where
``<test>`` is the relative path of the test file with all slashes
replaced with dots and the file extension removed (e.g., the files
for ``example/t3.sh`` will be in ``.tmp/example.t3``).
-U, --update-baseline
Records a new baseline for all ``btest-diff`` commands found
in any of the specified tests. To do this, all tests are run
as normal except that when ``btest-diff`` is executed, it
does not compute a diff but instead considers the given file
to be authoritative and records it as the version to compare
with in future runs.
-u, --update-interactive
Each time a ``btest-diff`` command fails in any tests that are
run, btest will stop and ask whether or not the user wants to
record a new baseline.
-v, --verbose
Shows all test command lines as they are executed.
-w, --wait
Interactively waits for ``<enter>`` after showing diagnostics
for a test.
-x FILE, --xml=FILE
Records test results in JUnit XML format to the given file.
If the file exists already, it is overwritten.
.. _configuration file:
Configuration
-------------
Specifics of ``btest``'s execution can be tuned with a configuration
file, which by default is ``btest.cfg`` if that's found in the
current directory. It can alternatively be specified with the
``--config`` command line option. The configuration file is
"INI-style", and an example comes with the distribution, see
``btest.cfg.example``. A configuration file has one main section,
``btest``, that defines most options; as well as an optional section
for defining `environment variables`_ and further optional sections
for defining alternatives_.
Note that all paths specified in the configuration file are relative
to ``btest``'s *base directory*. The base directory is either the
one where the configuration file is located if such is given/found,
or the current working directory if not. When setting values for
configuration options, the absolute path to the base directory is
available by using the macro ``%(testbase)s`` (the weird syntax is
due to Python's ``ConfigParser`` module).
Furthermore, all values can use standard "backtick-syntax" to
include the output of external commands (e.g., xyz=`\echo test\`).
Note that the backtick expansion is performed after any ``%(..)``
have already been replaced (including within the backticks).
Options
~~~~~~~
The following options can be set in the ``btest`` section of the
configuration file:
``TestDirs``
A space-separated list of directories to search for tests. If
defined, one doesn't need to specify any tests on the command
line.
``TmpDir``
A directory where to create temporary files when running tests.
By default, this is set to ``%(testbase)s/.tmp``.
``BaselineDir``
A directory where to store the baseline files for ``btest-diff``.
By default, this is set to ``%(testbase)s/Baseline``.
``IgnoreDirs``
A space-separated list of relative directory names to ignore
when scanning test directories recursively. Default is empty.
``IgnoreFiles``
A space-separated list of filename globs matching files to
ignore when scanning given test directories recursively.
Default is empty.
``StateFile``
The name of the state file to record the names of failing tests. Default is
``.btest.failed.dat``.
``Finalizer``
An executable that will be executed each time any test has
successfully run. It runs in the same directory as the test itself
and receives the name of the test as its parameter. The return
value indicates whether the test should indeed be considered
successful. By default, there's no finalizer set.
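For illustration, a finalizer could be a small Python script along these
lines (a sketch only; the pass/fail criterion, whether the test left a
file named ``core`` behind in its sandbox, is just an assumption made for
the example)::

    #!/usr/bin/env python
    # Hypothetical finalizer: fail the test if it left a core dump behind.
    # btest runs this in the test's directory and passes the test's name.
    import os
    import sys

    test_name = sys.argv[1]
    sys.exit(1 if os.path.exists("core") else 0)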
.. _environment variables:
Environment Variables
~~~~~~~~~~~~~~~~~~~~~
A special section ``environment`` defines environment variables that
will be propagated to all tests::
[environment]
CFLAGS=-O3
PATH=%(testbase)s/bin:%(default_path)s
Note how ``PATH`` can be adjusted to include local scripts: the
example above prefixes it with a local ``bin/`` directory inside the
base directory, using the predefined ``default_path`` macro to refer
to the ``PATH`` as it is set by default.
Furthermore, by setting ``PATH`` to include the ``btest``
distribution directory, one could skip the installation of the
``btest`` package.
.. _alternative:
Alternatives
~~~~~~~~~~~~
BTest can run a set of tests with different settings than it would
normally use by specifying an *alternative* configuration. Currently,
three things can be adjusted:
- Further environment variables can be set that will then be
available to all the commands that a test executes.
- *Filters* can modify an input file before a test uses it.
- *Substitutions* can modify command lines executed as part of a
test.
We discuss the three separately in the following. All of them are
defined by adding sections ``[<type>-<name>]`` where ``<type>``
corresponds to the type of adjustment being made and ``<name>`` is the
name of the alternative. Once at least one section is defined for a
name, that alternative can be enabled by BTest's ``--alternative``
flag.
Environment Variables
^^^^^^^^^^^^^^^^^^^^^
An alternative can add further environment variables by defining an
``[environment-<name>]`` section:
[environment-myalternative]
CFLAGS=-O3
Running ``btest`` with ``--alternative=myalternative`` will now make
the ``CFLAGS`` environment variable available to all commands
executed.
.. _filters:
Filters
^^^^^^^
Filters are a transparent way to adapt the input to a specific test
command before it is executed. A filter is defined by adding a section
``[filter-<name>]`` to the configuration file. This section must have
exactly one entry, and the name of that entry is interpreted as the
name of a command whose input is to be filtered. The value of that
entry is the name of a filter script that will be run with two
arguments representing input and output files, respectively. Example::
[filter-myalternative]
cat=%(testbase)s/bin/filter-cat
Once the filter is activated by running ``btest`` with
``--alternative=myalternative``, every time a ``@TEST-EXEC: cat
%INPUT`` is found, ``btest`` will first execute (something similar to)
``%(testbase)s/bin/filter-cat %INPUT out.tmp``, and then subsequently
``cat out.tmp`` (i.e., the original command but with the filtered
output). In the simplest case, the filter could be a no-op in the
form ``cp $1 $2``.
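A filter does not have to be a shell script either. As a sketch, a small
Python filter that upper-cases the input before the filtered command sees
it (the upper-casing is arbitrary and only for illustration) could look
like this; the two command-line arguments are the input and output files
described above::

    #!/usr/bin/env python
    # Hypothetical filter script: argv[1] is the input file, argv[2] the output.
    import sys

    src = open(sys.argv[1])
    dst = open(sys.argv[2], "w")
    for line in src:
        dst.write(line.upper())
    src.close()
    dst.close()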
.. note::
There are a few limitations to the filter concept currently:
* Filters are *always* fed with ``%INPUT`` as their first
argument. We should add a way to filter other files as well.
* Filtered commands are only recognized if they are directly
starting the command line. For example, ``@TEST-EXEC: ls | cat
>outout`` would not trigger the example filter above.
* Filters are only executed for ``@TEST-EXEC``, not for
``@TEST-EXEC-FAIL``.
.. _substitution:
Substitutions
^^^^^^^^^^^^^^
Substitutions are similar to filters, yet they do not adapt the input
but the command line being executed. A substitution is defined by
adding a section ``[substitution-<name>]`` to the configuration file.
For each entry in this section, the entry's name specifies the
command that is to be replaced with something else given as its value.
Example::
[substitution-myalternative]
gcc=gcc -O2
Once the substitution is activated by running ``btest`` with
``--alternative=myalternative``, every time a ``@TEST-EXEC`` executes
``gcc``, that is replaced with ``gcc -O2``. The replacement is simple
string substitution so it works not only with commands but anything
found on the command line; it however only replaces full words, not
subparts of words.
Writing Tests
-------------
``btest`` scans a test file for lines containing keywords that
trigger certain functionality. Currently, the following keywords are
supported:
.. _@TEST-EXEC:
``@TEST-EXEC: <cmdline>``
Executes the given command line and aborts the test if it
returns an error code other than zero. The ``<cmdline>`` is
passed to the shell and thus can be a pipeline and use redirection;
any environment variables specified in ``<cmdline>`` will be
expanded as well.
When running a test, the current working directory for all
command lines will be set to a temporary sandbox (and will be
deleted later).
There are two macros that can be used in ``<cmdline>``:
``%INPUT`` will be replaced with the full pathname of the file defining
the test; and ``%DIR`` will be replaced with the directory where
the test file is located. The latter can be used to reference
further files also located there.
In addition to environment variables defined in the
configuration file, there are further ones that are passed into
the commands:
``TEST_DIAGNOSTICS``
A file where further diagnostic information can be saved
in case a command fails. ``--diagnostics`` will show
this file. (This is also where ``btest-diff`` stores its
diff.)
``TEST_MODE``
This is normally set to ``TEST``, but will be ``UPDATE``
if ``btest`` is run with ``--update-baseline``, or
``UPDATE_INTERACTIVE`` if run with ``--update-interactive``.
``TEST_BASELINE``
The name of a directory where the command can save permanent
information across ``btest`` runs. (This is where
``btest-diff`` stores its baseline in ``UPDATE`` mode.)
``TEST_NAME``
The name of the currently executing test.
``TEST_VERBOSE``
The path of a file where the test can record further
information about its execution that will be included with
btest's ``--verbose`` output. This is for further tracking
the execution of commands and should generally generate
output that follows a line-based structure.
.. note::
If a command returns the special exit code 100, the test is
considered failed, however subsequent test commands are still
run. ``btest-diff`` uses this special exit code to indicate that
no baseline has yet been established.
If a command returns the special exit code 200, the test is
considered failed and all further test executions are aborted.
``@TEST-EXEC-FAIL: <cmdline>``
Like ``@TEST-EXEC``, except that this expects the command to
*fail*, i.e., the test is aborted when the return code is zero.
``@TEST-REQUIRES: <cmdline>``
Defines a condition that must be met for the test to be executed.
The given command line will be run before any of the actual test
commands, and it must return success for the test to continue. If
it does not return success, the rest of the test will be skipped
but doing so will not be considered a failure of the test. This allows
writing conditional tests that may not always make sense to run, depending
on whether external constraints are satisfied or not (say, whether
a particular library is available). Multiple requirements may be
specified and then all must be met for the test to continue.
``@TEST-ALTERNATIVE: <alternative>``
Runs this test only for the given alternative (see alternative_). If
``<alternative>`` is ``default``, the test executes when BTest runs with no
alternative given (which however is the default anyways).
``@TEST-NOT-ALTERNATIVE: <alternative>``
Ignores this test for the given alternative (see alternative_). If
``<alternative>`` is ``default``, the test is ignored if BTest runs with no
alternative given.
``@TEST-COPY-FILE: <file>``
Copy the given file into the test's directory before the test is
run. If ``<file>`` is a relative path, it's interpreted relative
to the BTest's base directory. Environment variables in ``<file>``
will be replaced if enclosed in ``${..}``. This command can be
given multiple times.
``@TEST-START-NEXT``
This is a short-cut for defining multiple test inputs in the
same file, all executing with the same command lines. When
``@TEST-START-NEXT`` is encountered, the test file is initially
considered to end at that point, and all ``@TEST-EXEC-*`` are
run with an ``%INPUT`` truncated accordingly. Afterwards, a
*new* ``%INPUT`` is created with everything *following* the
``@TEST-START-NEXT`` marker, and the *same* commands are run
again (further ``@TEST-EXEC-*`` will be ignored). The effect is
that a single file can actually define two tests, and the
``btest`` output will enumerate them::
> cat examples/t5.sh
# @TEST-EXEC: cat %INPUT | wc -c >output
# @TEST-EXEC: btest-diff output
This is the first test input in this file.
# @TEST-START-NEXT
... and the second.
> ./btest -D examples/t5.sh
examples.t5 ... ok
% cat .diag
== File ===============================
119
[...]
examples.t5-2 ... ok
% cat .diag
== File ===============================
22
[...]
Multiple ``@TEST-START-NEXT`` can be used to create more than
two tests per file.
``@TEST-START-FILE <file>``
This is used to include an additional input file for a test
right inside the test file. All lines following the keyword will
be written into the given file (and removed from the test's
`%INPUT`) until a terminating ``@TEST-END-FILE`` is found.
Example::
> cat examples/t6.sh
# @TEST-EXEC: awk -f %INPUT <foo.dat >output
# @TEST-EXEC: btest-diff output
{ lines += 1; }
END { print lines; }
@TEST-START-FILE foo.dat
1
2
3
@TEST-END-FILE
> btest -D examples/t6.sh
examples.t6 ... ok
% cat .diag
== File ===============================
3
Multiple such files can be defined within a single test.
Note that this is only one way to use further input files.
Another is to store a file in the same directory as the test
itself, making sure it's ignored via ``IgnoreFiles``, and then
refer to it via ``%DIR/<name>``.
.. _@TEST-GROUP:
``@TEST-GROUP: <group>``
Assigns the test to a group of name ``<group>``. By using option
``-g`` one can limit execution to all tests that belong to a given
group (or a set of groups).
.. _@TEST-SERIALIZE:
``@TEST-SERIALIZE: <set>``
When using option ``-j`` to parallelize execution, all tests that
specify the same serialization set are guaranteed to run
sequentially. ``<set>`` is an arbitrary user-chosen string.
Canonifying Diffs
=================
``btest-diff`` has the capability to filter its input through an
additional script before it compares the current version with the
baseline. This can be useful if certain elements in an output are
*expected* to change (e.g., timestamps). The filter can then
remove/replace these with something consistent. To enable such
canonification, set the environment variable
``TEST_DIFF_CANONIFIER`` to a script reading the original version
from stdin and writing the canonified version to stdout. Note that
both baseline and current output are passed through the filter
before their differences are computed.
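As a sketch, such a canonifier could be a small Python script like the
following (the timestamp pattern is only an assumption about what the
output being compared contains)::

    #!/usr/bin/env python
    # Read the output on stdin, replace anything that looks like an epoch
    # timestamp with a fixed placeholder, and write the result to stdout.
    import re
    import sys

    for line in sys.stdin:
        sys.stdout.write(re.sub(r"\d{9,10}\.\d+", "XXXXXXXXXX.XXXXXX", line))

``TEST_DIFF_CANONIFIER`` could then be set, for example, in the
``environment`` section of ``btest.cfg`` (see `environment variables`_
above).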
Running Processes in the Background
===================================
Sometimes processes need to be spawned in the background for a test,
in particular if multiple processes need to cooperate in some fashion.
``btest`` comes with two helper scripts to make life easier in such a
situation:
``btest-bg-run <tag> <cmdline>``
This is a script that runs ``<cmdline>`` in the background, i.e.,
it's like using ``cmdline &`` in a shell script. Test execution
continues immediately with the next command. Note that the spawned
command is *not* run in the current directory, but instead in a
newly created sub-directory called ``<tag>``. This allows
spawning multiple instances of the same process without needing to
worry about conflicting outputs. If you want to access a command's
output later, like with ``btest-diff``, use ``<tag>/foo.log`` to
access it.
``btest-bg-wait [-k] <timeout>``
This script waits for all processes previously spawned via
``btest-bg-run`` to finish. If any of them exits with a non-zero
return code, ``btest-bg-wait`` does so as well, indicating a
failed test. ``<timeout>`` is mandatory and gives the maximum
number of seconds to wait for any of the processes to terminate.
If any process hasn't done so when the timeout expires, it will be
killed and the test is considered to be failed as long as ``-k``
is not given. If ``-k`` is given, pending processes are still
killed but the test continues normally, i.e., non-termination is
not considered a failure in this case. This script also collects
the processes' stdout and stderr outputs for diagnostics output.
Integration with Sphinx
=======================
``btest`` comes with a new directive for the documentation framework
`Sphinx <http://sphinx.pocoo.org>`_. The directive allows writing a
test directly inside a Sphinx document and then including output
from the test's commands in the generated documentation. The same
tests can also be run externally and will catch any changes to the
included content. The following walks through setting this up.
Configuration
-------------
First, you need to tell Sphinx a base directory for the ``btest``
configuration, as well as a directory inside it in which to store the
tests it extracts from the Sphinx documentation. Typically, you'd just
create a new subdirectory ``tests`` in the Sphinx project for the
``btest`` setup and then store the tests in there in, e.g.,
``doc/``::
cd <sphinx-root>
mkdir tests
mkdir tests/doc
Then add the following to your Sphinx ``conf.py``::
extensions += ["btest-sphinx"]
btest_base="tests" # Relative to Sphinx-root.
btest_tests="doc" # Relative to btest_base.
Next, add a finalizer to ``btest.cfg``::
[btest]
...
Finalizer=btest-diff-rst
Finally, create a ``btest.cfg`` in ``tests/`` as usual and add
``doc/`` to the ``TestDirs`` option.
Including a Test into a Sphinx Document
---------------------------------------
The ``btest`` extension provides a new directive to include a test
inside a Sphinx document::
.. btest:: <test-name>
<test content>
Here, ``<test-name>`` is a custom name for the test; it will be
stored in ``btest_tests`` under that name. ``<test content>`` is just
a standard test as you would normally put into one of the
``TestDirs``. Example::
.. btest:: just-a-test
@TEST-EXEC: expr 2 + 2
When you now run Sphinx, it will (1) store the test content into
``tests/doc/just-a-test`` (assuming the above path layout), and (2)
execute the test by running ``btest`` on it. You can then run
``btest`` manually in ``tests/`` as well and it will execute the test
just as it would in a standard setup. If a test fails when Sphinx runs
it, there will be a corresponding error and the diagnostic output will be
included in the document.
By default, nothing else will be included into the generated
documentation, i.e., the above test will just turn into an empty text
block. However, ``btest`` comes with a set of scripts that you can use
to specify content to be included. As a simple example,
``btest-rst-cmd <cmdline>`` will execute a command and (if it
succeeds) include both the command line and the standard output into
the documentation. Example::
.. btest:: another-test
@TEST-EXEC: btest-rst-cmd echo Hello, world!
When running Sphinx, this will render as:
.. code::
# echo Hello, world!
Hello, world!
When running ``btest`` manually in ``tests/``, the ``Finalizer`` we
added to ``btest.cfg`` (see above) compares the generated reST code
with a previously established baseline, just like ``btest-diff`` does
with files. To establish the initial baseline, run ``btest -u``, like
you would with ``btest-diff``.
Scripts
-------
The following Sphinx support scripts come with ``btest``:
``btest-rst-cmd [options] <cmdline>``
By default, this executes ``<cmdline>`` and includes both the
command line itself and its standard output into the generated
documentation. See above for an example.
This script provides the following options:
-c ALTERNATIVE_CMDLINE
Show ``ALTERNATIVE_CMDLINE`` in the generated
documentation instead of the one actually executed. (It
still runs the ``<cmdline>`` given outside the option.)
-d
Do not actually execute ``<cmdline>``; just format it for
the generated documentation and include no further output.
-f FILTER_CMD
Pipe the command line's output through ``FILTER_CMD``
before including. If ``-r`` is given, it filters the
file's content instead of stdout.
-o
Do not include the executed command into the generated
documentation, just its output.
-r FILE
Insert ``FILE`` into output instead of stdout.
``btest-rst-include <file>``
Includes ``<file>`` inside a code block.
``btest-rst-pipe <cmdline>``
Executes ``<cmdline>``, includes its standard output inside a code
block. Note that this script does not include the command line
itself into the code block, just the output.
.. note::
All these scripts can be run directly from the command line to show
the reST code they generate.
.. note::
``btest-rst-cmd`` can do everything the other scripts provide if
you give it the right options. In fact, the other scripts are
provided just for convenience and leverage ``btest-rst-cmd``
internally.
License
=======
btest is open-source under a BSD license.


@ -0,0 +1 @@
../../../aux/btest/README


@ -1,107 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.18
===============================================
capstats - A tool to get some NIC statistics.
===============================================
.. rst-class:: opening
capstats is a small tool to collect statistics on the
current load of a network interface, using either `libpcap
<http://www.tcpdump.org>`_ or the native interface for `Endace's
<http://www.endace.com>`_ DAG cards. It reports statistics per time interval
and/or for the tool's total run-time.
Download
--------
You can find the latest capstats release for download at
http://www.bro.org/download.
Capstats's git repository is located at `git://git.bro.org/capstats.git
<git://git.bro.org/capstats.git>`__. You can browse the repository
`here <http://git.bro.org/capstats.git>`__.
This document describes capstats |version|. See the ``CHANGES``
file for version history.
Output
------
Here's an example with output in one-second intervals until
``CTRL-C`` is hit:
.. console::
>capstats -i nve0 -I 1
1186620936.890567 pkts=12747 kpps=12.6 kbytes=10807 mbps=87.5 nic_pkts=12822 nic_drops=0 u=960 t=11705 i=58 o=24 nonip=0
1186620937.901490 pkts=13558 kpps=13.4 kbytes=11329 mbps=91.8 nic_pkts=13613 nic_drops=0 u=1795 t=24339 i=119 o=52 nonip=0
1186620938.912399 pkts=14771 kpps=14.6 kbytes=13659 mbps=110.7 nic_pkts=14781 nic_drops=0 u=2626 t=38154 i=185 o=111 nonip=0
1186620939.012446 pkts=1332 kpps=13.3 kbytes=1129 mbps=92.6 nic_pkts=1367 nic_drops=0 u=2715 t=39387 i=194 o=112 nonip=0
=== Total
1186620939.012483 pkts=42408 kpps=13.5 kbytes=36925 mbps=96.5 nic_pkts=1 nic_drops=0 u=2715 t=39387 i=194 o=112 nonip=0
Each line starts with a timestamp and the other fields are:
:pkts:
Absolute number of packets seen by ``capstats`` during interval.
:kpps:
Number of packets per second.
:kbytes:
Absolute number of KBytes during interval.
:mbps:
Mbits/sec.
:nic_pkts:
Number of packets as reported by ``libpcap``'s ``pcap_stats()`` (may not match ``pkts``).
:nic_drops:
Number of packet drops as reported by ``libpcap``'s ``pcap_stats()``.
:u:
Number of UDP packets.
:t:
Number of TCP packets.
:i:
Number of ICMP packets.
:nonip:
Number of non-IP packets.
Options
-------
A list of all options::
capstats [Options] -i interface
-i| --interface <interface> Listen on interface
-d| --dag Use native DAG API
-f| --filter <filter> BPF filter
-I| --interval <secs> Stats logging interval
-l| --syslog Use syslog rather than print to stderr
-n| --number <count> Stop after outputting <number> intervals
-N| --select Use select() for live pcap (for testing only)
-p| --payload <n> Verifies that packets' payloads consist
entirely of bytes of the given value.
-q| --quiet <count> Suppress output, exit code indicates >= count
packets received.
-S| --size <size> Verify packets to have given <size>
-s| --snaplen <size> Use pcap snaplen <size>
-v| --version Print version and exit
-w| --write <filename> Write packets to file
Installation
------------
``capstats`` has been tested on Linux, FreeBSD, and MacOS. Please see
the ``INSTALL`` file for installation instructions.


@ -0,0 +1 @@
../../../aux/broctl/aux/capstats/README


@ -1,98 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.19-9
===============================================
PySubnetTree - A Python Module for CIDR Lookups
===============================================
.. rst-class:: opening
The PySubnetTree package provides a Python data structure
``SubnetTree`` which maps subnets given in `CIDR
<http://tools.ietf.org/html/rfc4632>`_ notation (incl.
corresponding IPv6 versions) to Python objects. Lookups are
performed by longest-prefix matching.
Download
--------
You can find the latest PySubnetTree release for download at
http://www.bro.org/download.
PySubnetTree's git repository is located at `git://git.bro.org/pysubnettree.git
<git://git.bro.org/pysubnettree.git>`__. You can browse the repository
`here <http://git.bro.org/pysubnettree.git>`__.
This document describes PySubnetTree |version|. See the ``CHANGES``
file for version history.
Example
-------
A simple example which associates CIDR prefixes with strings::
>>> import SubnetTree
>>> t = SubnetTree.SubnetTree()
>>> t["10.1.0.0/16"] = "Network 1"
>>> t["10.1.42.0/24"] = "Network 1, Subnet 42"
>>> t["10.2.0.0/16"] = "Network 2"
>>> print t["10.1.42.1"]
Network 1, Subnet 42
>>> print t["10.1.43.1"]
Network 1
>>> print "10.1.42.1" in t
True
>>> print "10.1.43.1" in t
True
>>> print "10.20.1.1" in t
False
>>> print t["10.20.1.1"]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "SubnetTree.py", line 67, in __getitem__
def __getitem__(*args): return _SubnetTree.SubnetTree___getitem__(*args)
KeyError: '10.20.1.1'
By default, CIDR prefixes and IP addresses are given as strings.
Alternatively, a ``SubnetTree`` object can be switched into *binary
mode*, in which single addresses are passed in the form of packed
binary strings as, e.g., returned by `socket.inet_aton
<http://docs.python.org/lib/module-socket.html#l2h-3657>`_::
>>> t.get_binary_lookup_mode()
False
>>> t.set_binary_lookup_mode(True)
>>> t.get_binary_lookup_mode()
True
>>> import socket
>>> print t[socket.inet_aton("10.1.42.1")]
Network 1, Subnet 42
A SubnetTree also provides methods ``insert(prefix, object=None)`` for insertion
of prefixes (``object`` can be skipped to use the tree like a set), and
``remove(prefix)`` for removing entries (``remove`` performs an *exact* match
rather than longest-prefix).
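For example, continuing the session above, the tree can also be used as a
plain set of prefixes (a rough sketch; any return values of ``insert`` and
``remove`` are omitted)::

    >>> t.insert("10.3.0.0/16")        # no object given, so the tree acts like a set
    >>> print "10.3.1.1" in t
    True
    >>> t.remove("10.3.0.0/16")        # removal requires an exact prefix match
    >>> print "10.3.1.1" in t
    False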
Internally, the CIDR prefixes of a ``SubnetTree`` are managed by a
Patricia tree data structure and lookups are therefore efficient
even with a large number of prefixes.
PySubnetTree comes with a BSD license.
Prerequisites
-------------
This package requires Python 2.4 or newer.
Installation
------------
Installation is pretty simple::
> python setup.py install

View file

@ -0,0 +1 @@
../../../aux/broctl/aux/pysubnettree/README

View file

@ -1,154 +0,0 @@
.. -*- mode: rst-mode -*-
..
.. Version number is filled in automatically.
.. |version| replace:: 0.8
====================================================
trace-summary - Generating network traffic summaries
====================================================
.. rst-class:: opening
``trace-summary`` is a Python script that generates break-downs of
network traffic, including lists of the top hosts, protocols,
ports, etc. Optionally, it can generate output separately for
incoming vs. outgoing traffic, per subnet, and per time-interval.
Download
--------
You can find the latest trace-summary release for download at
http://www.bro.org/download.
trace-summary's git repository is located at `git://git.bro.org/trace-summary.git
<git://git.bro.org/trace-summary.git>`__. You can browse the repository
`here <http://git.bro.org/trace-summary.git>`__.
This document describes trace-summary |version|. See the ``CHANGES``
file for version history.
Overview
--------
The ``trace-summary`` script reads both packet traces in `libpcap
<http://www.tcpdump.org>`_ format and connection logs produced by the
`Bro <http://www.bro.org>`_ network intrusion detection system
(for the latter, it supports both 1.x and 2.x output formats).
Here are two example outputs in the most basic form (note that IP
addresses are 'anonymized'). The first is from a packet trace and the
second from a Bro connection log::
>== Total === 2005-01-06-14-23-33 - 2005-01-06-15-23-43
- Bytes 918.3m - Payload 846.3m - Pkts 1.8m - Frags 0.9% - MBit/s 1.9 -
Ports | Sources | Destinations | Protocols |
80 33.8% | 131.243.89.214 8.5% | 131.243.89.214 7.7% | 6 76.0% |
22 16.7% | 128.3.2.102 6.2% | 128.3.2.102 5.4% | 17 23.3% |
11001 12.4% | 204.116.120.26 4.8% | 131.243.89.4 4.8% | 1 0.5% |
2049 10.7% | 128.3.161.32 3.6% | 131.243.88.227 3.6% | |
1023 10.6% | 131.243.89.4 3.5% | 204.116.120.26 3.4% | |
993 8.2% | 128.3.164.194 2.7% | 131.243.89.64 3.1% | |
1049 8.1% | 128.3.164.15 2.4% | 128.3.164.229 2.9% | |
524 6.6% | 128.55.82.146 2.4% | 131.243.89.155 2.5% | |
33305 4.5% | 131.243.88.227 2.3% | 128.3.161.32 2.3% | |
1085 3.7% | 131.243.89.155 2.3% | 128.55.82.146 2.1% | |
>== Total === 2005-01-06-14-23-33 - 2005-01-06-15-23-42
- Connections 43.4k - Payload 398.4m -
Ports | Sources | Destinations | Services | Protocols | States |
80 21.7% | 207.240.215.71 3.0% | 239.255.255.253 8.0% | other 51.0% | 17 55.8% | S0 46.2% |
427 13.0% | 131.243.91.71 2.2% | 131.243.91.255 4.0% | http 21.7% | 6 36.4% | SF 30.1% |
443 3.8% | 128.3.161.76 1.7% | 131.243.89.138 2.1% | i-echo 7.3% | 1 7.7% | OTH 7.8% |
138 3.7% | 131.243.90.138 1.6% | 255.255.255.255 1.7% | https 3.8% | | RSTO 5.8% |
515 2.4% | 131.243.88.159 1.6% | 128.3.97.204 1.5% | nb-dgm 3.7% | | SHR 4.4% |
11001 2.3% | 131.243.88.202 1.4% | 131.243.88.107 1.1% | printer 2.4% | | REJ 3.0% |
53 1.9% | 131.243.89.250 1.4% | 117.72.94.10 1.1% | dns 1.9% | | S1 1.0% |
161 1.6% | 131.243.89.80 1.3% | 131.243.88.64 1.1% | snmp 1.6% | | RSTR 0.9% |
137 1.4% | 131.243.90.52 1.3% | 131.243.88.159 1.1% | nb-ns 1.4% | | SH 0.3% |
2222 1.1% | 128.3.161.252 1.2% | 131.243.91.92 1.1% | ntp 1.0% | | RSTRH 0.2% |
Prerequisites
-------------
* This script requires Python 2.4 or newer.
* The `pysubnettree
<http://www.bro.org/documentation/pysubnettree.html>`_ Python
module.
* Eddie Kohler's `ipsumdump <http://www.cs.ucla.edu/~kohler/ipsumdump>`_
if using ``trace-summary`` with packet traces (versus Bro connection logs).
Installation
------------
Simply copy the script into some directory which is in your ``PATH``.
Usage
-----
The general usage is::
trace-summary [options] [input-file]
By default, the script assumes ``input-file`` to be a ``libpcap`` trace
file. If it is a Bro connection log, use ``-c``. If ``input-file`` is
not given, the script reads from stdin. It writes its output to
stdout.
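For example, a typical invocation for a Bro connection log might look like
this (the file names are just placeholders)::

    trace-summary -c conn.log > summary.txt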
Options
~~~~~~~
The most important options are summarized
below. Run ``trace-summary --help`` to see the full list including
some more esoteric ones.
:-c:
Input is a Bro connection log instead of a ``libpcap`` trace
file.
:-b:
Counts all percentages in bytes rather than number of
packets/connections.
:-E <file>:
Gives a file which contains a list of networks to ignore for the
analysis. The file must contain one network per line, where each
network is of the CIDR form ``a.b.c.d/mask`` (including the
corresponding syntax for IPv6 prefixes, e.g., ``1:2:3:4::/64``).
Empty lines and lines starting with a ``#`` are ignored (an example file is shown after this option list).
:-i <duration>:
Creates totals for each time interval of the given length
(default is seconds; add "``m``" for minutes and "``h``" for
hours). Use ``-v`` if you also want to see the breakdowns for
each interval.
:-l <file>:
Generates separate summaries for incoming and outgoing traffic.
``<file>`` is a file which contains a list of networks to be
considered local. Format as for ``-E``.
:-n <n>:
Shows the top *n* entries in each break-down. Default is 10.
:-r:
Resolves hostnames in the output.
:-s <n>:
Gives the sample factor if the input has been sampled.
:-S <n>:
Samples the input with the given factor; less accurate but faster and
saves memory.
:-m:
Skips memory-expensive statistics.
:-v:
Generates full break-downs for each time interval. Requires
``-i``.
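As mentioned in the description of ``-E`` above, a networks file for ``-E`` or
``-l`` might look like this (the prefixes are just placeholders)::

    # Networks to ignore (or to treat as local when used with -l)
    10.0.0.0/8
    192.168.0.0/16
    2001:db8::/32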

View file

@ -0,0 +1 @@
../../../aux/broctl/aux/trace-summary/README

View file

@ -104,7 +104,7 @@ code like this to your ``local.bro``:
} }
Bro's DataSeries writer comes with a few tuning options, see Bro's DataSeries writer comes with a few tuning options, see
:doc:`scripts/base/frameworks/logging/writers/dataseries`. :doc:`/scripts/base/frameworks/logging/writers/dataseries`.
Working with DataSeries Working with DataSeries
======================= =======================

View file

@ -48,7 +48,7 @@ Basics
The data fields that a stream records are defined by a record type The data fields that a stream records are defined by a record type
specified when it is created. Let's look at the script generating Bro's specified when it is created. Let's look at the script generating Bro's
connection summaries as an example, connection summaries as an example,
:doc:`scripts/base/protocols/conn/main`. It defines a record :doc:`/scripts/base/protocols/conn/main`. It defines a record
:bro:type:`Conn::Info` that lists all the fields that go into :bro:type:`Conn::Info` that lists all the fields that go into
``conn.log``, each marked with a ``&log`` attribute indicating that it ``conn.log``, each marked with a ``&log`` attribute indicating that it
is part of the information written out. To write a log record, the is part of the information written out. To write a log record, the
@ -92,8 +92,8 @@ Note the fields that are set for the filter:
are generated by taking the stream's ID and munging it slightly. are generated by taking the stream's ID and munging it slightly.
:bro:enum:`Conn::LOG` is converted into ``conn``, :bro:enum:`Conn::LOG` is converted into ``conn``,
:bro:enum:`PacketFilter::LOG` is converted into :bro:enum:`PacketFilter::LOG` is converted into
``packet_filter``, and :bro:enum:`Notice::POLICY_LOG` is ``packet_filter``, and :bro:enum:`Known::CERTS_LOG` is
converted into ``notice_policy``. converted into ``known_certs``.
``include`` ``include``
A set limiting the fields to the ones given. The names A set limiting the fields to the ones given. The names
@ -309,7 +309,7 @@ ASCII Writer Configuration
-------------------------- --------------------------
The ASCII writer has a number of options for customizing the format of The ASCII writer has a number of options for customizing the format of
its output, see :doc:`scripts/base/frameworks/logging/writers/ascii`. its output, see :doc:`/scripts/base/frameworks/logging/writers/ascii`.
Adding Streams Adding Streams
============== ==============
@ -369,7 +369,7 @@ save the logged ``Foo::Info`` record into the connection record:
} }
See the existing scripts for how to work with such a new connection See the existing scripts for how to work with such a new connection
field. A simple example is :doc:`scripts/base/protocols/syslog/main`. field. A simple example is :doc:`/scripts/base/protocols/syslog/main`.
When you are developing scripts that add data to the :bro:type:`connection` When you are developing scripts that add data to the :bro:type:`connection`
record, care must be given to when and how long data is stored. record, care must be given to when and how long data is stored.

View file

@ -283,7 +283,7 @@ information to suppress duplicates for a configurable period of time.
The ``$identifier`` field is typically comprised of several pieces of The ``$identifier`` field is typically comprised of several pieces of
data related to the notice that when combined represent a unique data related to the notice that when combined represent a unique
instance of that notice. Here is an example of the script instance of that notice. Here is an example of the script
:doc:`scripts/policy/protocols/ssl/validate-certs` raising a notice :doc:`/scripts/policy/protocols/ssl/validate-certs` raising a notice
for session negotiations where the certificate or certificate chain did for session negotiations where the certificate or certificate chain did
not validate successfully against the available certificate authority not validate successfully against the available certificate authority
certificates. certificates.

View file

@ -1,4 +1,6 @@
.. _FAQ: http://www.bro.org/documentation/faq.html
.. _quickstart: .. _quickstart:
================= =================
@ -60,9 +62,8 @@ policy and output the results in ``$PREFIX/logs``.
.. note:: The user starting BroControl needs permission to capture .. note:: The user starting BroControl needs permission to capture
network traffic. If you are not root, you may need to grant further network traffic. If you are not root, you may need to grant further
privileges to the account you're using; see the `FAQ privileges to the account you're using; see the FAQ_. Also, if it
<http://www.bro.org/documentation/faq.html>`_. Also, if it looks looks like Bro is not seeing any traffic, check out the FAQ entry on
like Bro is not seeing any traffic, check out the FAQ entry on
checksum offloading. checksum offloading.
You can leave it running for now, but to stop this Bro instance you would do: You can leave it running for now, but to stop this Bro instance you would do:
@ -196,7 +197,7 @@ the variable's value may not change at run-time, but whose initial value can be
modified via the ``redef`` operator at parse-time. modified via the ``redef`` operator at parse-time.
So let's continue on our path to modify the behavior for the two SSL So let's continue on our path to modify the behavior for the two SSL
and SSH notices. Looking at :doc:`scripts/base/frameworks/notice/main`, and SSH notices. Looking at :doc:`/scripts/base/frameworks/notice/main`,
we see that it advertises: we see that it advertises:
.. code:: bro .. code:: bro
@ -299,7 +300,7 @@ tweak the most basic options. Here's some suggestions on what to explore next:
* Reading the code of scripts that ship with Bro is also a great way to gain * Reading the code of scripts that ship with Bro is also a great way to gain
further understanding of the language and how scripts tend to be further understanding of the language and how scripts tend to be
structured. structured.
* Review the `FAQ <http://www.bro.org/documentation/faq.html>`_. * Review the FAQ_.
* Continue reading below for another mini-tutorial on using Bro as a standalone * Continue reading below for another mini-tutorial on using Bro as a standalone
command-line utility. command-line utility.
@ -326,9 +327,9 @@ that's available.
Bro will output log files into the working directory. Bro will output log files into the working directory.
.. note:: The :doc:`FAQ <faq>` entries about .. note:: The FAQ_ entries about
capturing as an unprivileged user and checksum offloading are particularly capturing as an unprivileged user and checksum offloading are
relevant at this point. particularly relevant at this point.
To use the site-specific ``local.bro`` script, just add it to the To use the site-specific ``local.bro`` script, just add it to the
command-line: command-line:

View file

@ -1,12 +1,12 @@
.. _writing-scripts: .. _writing-scripts:
.. contents::
=================== ===================
Writing Bro Scripts Writing Bro Scripts
=================== ===================
.. contents::
Understanding Bro Scripts Understanding Bro Scripts
========================= =========================
@ -91,9 +91,9 @@ form of an email generated and sent to a pre-configured address.
The workhorse of the script is contained in the event handler for The workhorse of the script is contained in the event handler for
``log_http``. The ``log_http`` event is defined as an event-hook in ``log_http``. The ``log_http`` event is defined as an event-hook in
the :doc:`scripts/base/protocols/http/main.bro` script and allows scripts the :doc:`/scripts/base/protocols/http/main` script and allows scripts
to handle a connection as it is being passed to the logging framework. to handle a connection as it is being passed to the logging framework.
The event handler is passed an :bro:id:`HTTP::Info` data structure The event handler is passed an :bro:type:`HTTP::Info` data structure
which will be referred to as ``rec`` in the body of the event handler. which will be referred to as ``rec`` in the body of the event handler.
An ``if`` statement is used to check for the existence of a data structure An ``if`` statement is used to check for the existence of a data structure
@ -182,7 +182,7 @@ The Connection Record Data Type
=============================== ===============================
Of all the events defined by Bro, an overwhelmingly large number of Of all the events defined by Bro, an overwhelmingly large number of
them are passed the :bro:id:`connection` record data type, in effect, them are passed the :bro:type:`connection` record data type, in effect,
making it the backbone of many scripting solutions. The connection making it the backbone of many scripting solutions. The connection
record itself, as we will see in a moment, is a mass of nested data record itself, as we will see in a moment, is a mass of nested data
types used to track state on a connection through its lifetime. Let's types used to track state on a connection through its lifetime. Let's
@ -217,7 +217,7 @@ for a single connection.
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_02.bro .. btest-include:: ${DOC_ROOT}/scripting/connection_record_02.bro
Again, we start with ``@load``, this time importing the Again, we start with ``@load``, this time importing the
:doc:`scripts/base/protocols/conn` scripts which supply the tracking :doc:`/scripts/base/protocols/conn/index` scripts which supply the tracking
and logging of general information and state of connections. We and logging of general information and state of connections. We
handle the :bro:id:`connection_state_remove` event and simply print handle the :bro:id:`connection_state_remove` event and simply print
the contents of the argument passed to it. For this example we're the contents of the argument passed to it. For this example we're
@ -316,7 +316,7 @@ block that variable is available to any other script through the
naming convention of ``MODULE::variable_name``. naming convention of ``MODULE::variable_name``.
The declaration below is taken from the The declaration below is taken from the
:doc:`scripts/policy/protocols/conn/known-hosts.bro` script and :doc:`/scripts/policy/protocols/conn/known-hosts` script and
declares a variable called ``known_hosts`` as a global set of unique declares a variable called ``known_hosts`` as a global set of unique
IP addresses within the ``Known`` namespace and exports it for use IP addresses within the ``Known`` namespace and exports it for use
outside of the ``Known`` namespace. Were we to want to use the outside of the ``Known`` namespace. Were we to want to use the
@ -348,8 +348,7 @@ constants are used in Bro scripts as containers for configuration
options. For example, the configuration option to log password options. For example, the configuration option to log password
decrypted from HTTP streams is stored in decrypted from HTTP streams is stored in
``HTTP::default_capture_password`` as shown in the stripped down ``HTTP::default_capture_password`` as shown in the stripped down
excerpt from :doc:`scripts/scripts/base/protocols/http/main.bro` excerpt from :doc:`/scripts/base/protocols/http/main` below.
below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro .. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro
:lines: 8-10,19,20,118 :lines: 8-10,19,20,118
@ -806,8 +805,8 @@ together new data types to suit the needs of your situation.
When combined with the ``type`` keyword, ``record`` can generate a When combined with the ``type`` keyword, ``record`` can generate a
composite type. We have, in fact, already encountered a complex composite type. We have, in fact, already encountered a complex
example of the ``record`` data type in the earlier sections, the example of the ``record`` data type in the earlier sections, the
:bro:id:`connection` record passed to many events. Another one, :bro:type:`connection` record passed to many events. Another one,
:bro:id:`Conn::Info`, which corresponds to the fields logged into :bro:type:`Conn::Info`, which corresponds to the fields logged into
``conn.log``, is shown by the excerpt below. ``conn.log``, is shown by the excerpt below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/conn/main.bro .. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/conn/main.bro
@ -919,7 +918,7 @@ desired output with ``print`` and ``fmt`` before attempting to dive
into the Logging Framework. Below is a script that defines a into the Logging Framework. Below is a script that defines a
factorial function to recursively calculate the factorial of an factorial function to recursively calculate the factorial of an
unsigned integer passed as an argument to the function. Using unsigned integer passed as an argument to the function. Using
:bro:id:`print` :bro:id:`fmt` we can ensure that Bro can perform these ``print`` and :bro:id:`fmt` we can ensure that Bro can perform these
calculations correctly as well as get an idea of the answers ourselves. calculations correctly as well as get an idea of the answers ourselves.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro .. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
@ -935,7 +934,7 @@ method and produce a logfile. As we are working within a namespace
and informing an outside entity of workings and data internal to the and informing an outside entity of workings and data internal to the
namespace, we use an ``export`` block. First we need to inform Bro namespace, we use an ``export`` block. First we need to inform Bro
that we are going to be adding another Log Stream by adding a value to that we are going to be adding another Log Stream by adding a value to
the :bro:id:`Log::ID` enumerable. In line 3 of the script, we append the the :bro:type:`Log::ID` enumerable. In line 3 of the script, we append the
value ``LOG`` to the ``Log::ID`` enumerable, however due to this being in value ``LOG`` to the ``Log::ID`` enumerable, however due to this being in
an export block the value appended to ``Log::ID`` is actually an export block the value appended to ``Log::ID`` is actually
``Factor::Log``. Next, we need to define the name and value pairs ``Factor::Log``. Next, we need to define the name and value pairs
@ -1070,9 +1069,9 @@ reporting. With the Notice Framework it's simple to raise a notice
for any behavior that is detected. for any behavior that is detected.
To raise a notice in Bro, you only need to indicate to Bro that you To raise a notice in Bro, you only need to indicate to Bro that you
are provide a specific :bro:id:`Notice::Type` by exporting it and then are providing a specific :bro:type:`Notice::Type` by exporting it and then
make a call to :bro:id:`NOTICE` supplying it with an appropriate make a call to :bro:id:`NOTICE` supplying it with an appropriate
:bro:id:`Notice::Info` record. Often times the call to ``NOTICE`` :bro:type:`Notice::Info` record. Often times the call to ``NOTICE``
includes just the ``Notice::Type`` and a concise message. There are, includes just the ``Notice::Type`` and a concise message. There are,
however, significantly more options available when raising notices as however, significantly more options available when raising notices as
seen in the table below. The only field in the table below whose seen in the table below. The only field in the table below whose
@ -1159,7 +1158,7 @@ themselves. On line 12 the script's ``export`` block adds the value
``Notice::Type`` to indicate to the Bro core that a new type of notice ``Notice::Type`` to indicate to the Bro core that a new type of notice
is being defined. The script then calls ``NOTICE`` and defines the is being defined. The script then calls ``NOTICE`` and defines the
``$note``, ``$msg``, ``$sub`` and ``$conn`` fields of the ``$note``, ``$msg``, ``$sub`` and ``$conn`` fields of the
:bro:id:`Notice::Info` record. Line 39 also includes a ternary if :bro:type:`Notice::Info` record. Line 39 also includes a ternary if
statement that modifies the ``$msg`` text depending on whether the statement that modifies the ``$msg`` text depending on whether the
host is a local address and whether it is the client or the server. host is a local address and whether it is the client or the server.
This use of :bro:id:`fmt` and a ternary operators is a concise way to This use of :bro:id:`fmt` and a ternary operators is a concise way to
@ -1181,25 +1180,25 @@ passing in the ``Notice::Info`` record. The simplest kind of
``Notice::policy`` hooks simply check the value of ``$note`` in the ``Notice::policy`` hooks simply check the value of ``$note`` in the
``Notice::Info`` record being passed into the hook and perform an ``Notice::Info`` record being passed into the hook and perform an
action based on the answer. The hook below adds the action based on the answer. The hook below adds the
:bro:id:`Notice::ACTION_EMAIL` action for the :bro:enum:`Notice::ACTION_EMAIL` action for the
``SSH::Interesting_Hostname_Login`` notice raised in the ``SSH::Interesting_Hostname_Login`` notice raised in the
:doc:`scripts/policy/protocols/ssh/interesting-hostnames.bro` script. :doc:`/scripts/policy/protocols/ssh/interesting-hostnames` script.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_hook_01.bro .. btest-include:: ${DOC_ROOT}/scripting/framework_notice_hook_01.bro
In the example above we've added ``Notice::ACTION_EMAIL`` to the In the example above we've added ``Notice::ACTION_EMAIL`` to the
``n$actions`` set. This set, defined in the Notice Framework scripts, ``n$actions`` set. This set, defined in the Notice Framework scripts,
can only have entries from the :bro:id:`Notice::Action` type, which is can only have entries from the :bro:type:`Notice::Action` type, which is
itself an enumerable that defines the values shown in the table below itself an enumerable that defines the values shown in the table below
along with their corresponding meanings. The along with their corresponding meanings. The
:bro:id:`Notice::ACTION_LOG` action writes the notice to the :bro:enum:`Notice::ACTION_LOG` action writes the notice to the
``Notice::LOG`` logging stream which, in the default configuration, ``Notice::LOG`` logging stream which, in the default configuration,
will write each notice to the ``notice.log`` file and take no further will write each notice to the ``notice.log`` file and take no further
action. The :bro:id:`Notice::ACTION_EMAIL` action will send an email action. The :bro:enum:`Notice::ACTION_EMAIL` action will send an email
to the address or addresses defined in the :bro:id:`Notice::mail_dest` to the address or addresses defined in the :bro:id:`Notice::mail_dest`
variable with the particulars of the notice as the body of the email. variable with the particulars of the notice as the body of the email.
The last action, :bro:id:`Notice::ACTION_ALARM` sends the notice to The last action, :bro:enum:`Notice::ACTION_ALARM` sends the notice to
the :bro:id:`Notice::ALARM_LOG` logging stream which is then rotated the :bro:enum:`Notice::ALARM_LOG` logging stream which is then rotated
hourly and its contents emailed in readable ASCII to the addresses in hourly and its contents emailed in readable ASCII to the addresses in
``Notice::mail_dest``. ``Notice::mail_dest``.
@ -1225,7 +1224,7 @@ Bro.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro .. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
:lines: 59-62 :lines: 59-62
In the :doc:`scripts/policy/protocols/ssl/expiring-certs.bro` script In the :doc:`/scripts/policy/protocols/ssl/expiring-certs` script
which identifies when SSL certificates are set to expire and raises which identifies when SSL certificates are set to expire and raises
notices when it crosses a pre-defined threshold, the call to notices when it crosses a pre-defined threshold, the call to
``NOTICE`` above also sets the ``$identifier`` entry by concatenating ``NOTICE`` above also sets the ``$identifier`` entry by concatenating
@ -1265,7 +1264,7 @@ facilitate these types of decisions, the Notice Framework supports
Notice Policy shortcuts. These shortcuts are implemented through the Notice Policy shortcuts. These shortcuts are implemented through the
means of a group of data structures that map specific, pre-defined means of a group of data structures that map specific, pre-defined
details and actions to the effective name of a notice. Primarily details and actions to the effective name of a notice. Primarily
implemented as a set or table of enumerables of :bro:id:`Notice::Type`, implemented as a set or table of enumerables of :bro:type:`Notice::Type`,
Notice Policy shortcuts can be placed as a single directive in your Notice Policy shortcuts can be placed as a single directive in your
``local.bro`` file as a concise readable configuration. As these ``local.bro`` file as a concise readable configuration. As these
variables are all constants, it bears mentioning that these variables variables are all constants, it bears mentioning that these variables

View file

@ -654,7 +654,7 @@ The Bro scripting language supports the following built-in types.
close(f); close(f);
Writing to files like this for logging usually isn't recommended, for better Writing to files like this for logging usually isn't recommended, for better
logging support see :doc:`/logging`. logging support see :doc:`/frameworks/logging`.
.. bro:type:: function .. bro:type:: function

View file

@ -9,12 +9,12 @@ Script Reference
packages packages
builtins builtins
bifs Built-In Functions (BIFs) <base/bif/index>
scripts scripts
packages packages
internal internal
site/proto-analyzers proto-analyzers
site/file-analyzers file-analyzers

View file

@ -1,6 +1,7 @@
##! The Bro logging interface. ##! The Bro logging interface.
##! ##!
##! See :doc:`/logging` for a introduction to Bro's logging framework. ##! See :doc:`/frameworks/logging` for a introduction to Bro's
##! logging framework.
module Log; module Log;

View file

@ -2,7 +2,7 @@
##! are odd or potentially bad. Decisions of the meaning of various notices ##! are odd or potentially bad. Decisions of the meaning of various notices
##! need to be done per site because Bro does not ship with assumptions about ##! need to be done per site because Bro does not ship with assumptions about
##! what is bad activity for sites. More extensive documentation about using ##! what is bad activity for sites. More extensive documentation about using
##! the notice framework can be found in :doc:`/notice`. ##! the notice framework can be found in :doc:`/frameworks/notice`.
module Notice; module Notice;

View file

@ -1,6 +1,6 @@
##! Script level signature support. See the ##! Script level signature support. See the
##! :doc:`signature documentation </signatures>` for more information about ##! :doc:`signature documentation </frameworks/signatures>` for more
##! Bro's signature engine. ##! information about Bro's signature engine.
@load base/frameworks/notice @load base/frameworks/notice

View file

@ -74,6 +74,9 @@ export {
## Type to store results for multiple reducers. ## Type to store results for multiple reducers.
type Result: table[string] of ResultVal; type Result: table[string] of ResultVal;
## Type to store a table of sumstats results indexed by keys.
type ResultTable: table[Key] of Result;
## SumStats represent an aggregation of reducers along with ## SumStats represent an aggregation of reducers along with
## mechanisms to handle various situations like the epoch ending ## mechanisms to handle various situations like the epoch ending
## or thresholds being crossed. ## or thresholds being crossed.
@ -142,7 +145,7 @@ export {
## Dynamically request a sumstat key. This function should be ## Dynamically request a sumstat key. This function should be
## used sparingly and not as a replacement for the callbacks ## used sparingly and not as a replacement for the callbacks
## from the :bro:see:`SumStat` record. The function is only ## from the :bro:see:`SumStats::SumStat` record. The function is only
## available for use within "when" statements as an asynchronous ## available for use within "when" statements as an asynchronous
## function. ## function.
## ##
@ -162,9 +165,6 @@ export {
global key2str: function(key: SumStats::Key): string; global key2str: function(key: SumStats::Key): string;
} }
# Type to store a table of sumstats results indexed by keys.
type ResultTable: table[Key] of Result;
# The function prototype for plugins to do calculations. # The function prototype for plugins to do calculations.
type ObserveFunc: function(r: Reducer, val: double, data: Observation, rv: ResultVal); type ObserveFunc: function(r: Reducer, val: double, data: Observation, rv: ResultVal);

View file

@ -1,7 +1,5 @@
##! Utilities specific for DHCP processing. ##! Utilities specific for DHCP processing.
@load ./main
module DHCP; module DHCP;
export { export {

View file

@ -2,4 +2,4 @@
@load ./main @load ./main
@load ./mozilla-ca-list @load ./mozilla-ca-list
@load-sigs ./dpd.sig @load-sigs ./dpd.sig

View file

@ -43,9 +43,9 @@ export {
addl_curl_args: string &optional; addl_curl_args: string &optional;
}; };
## Perform an HTTP request according to the :bro:type:`Request` record. ## Perform an HTTP request according to the
## This is an asynchronous function and must be called within a "when" ## :bro:type:`ActiveHTTP::Request` record. This is an asynchronous
## statement. ## function and must be called within a "when" statement.
## ##
## req: A record instance representing all options for an HTTP request. ## req: A record instance representing all options for an HTTP request.
## ##

View file

@ -1,5 +1,6 @@
@load base/frameworks/intel @load base/frameworks/intel
@load ./where-locations @load ./where-locations
@load base/utils/addrs
event http_header(c: connection, is_orig: bool, name: string, value: string) event http_header(c: connection, is_orig: bool, name: string, value: string)
{ {

View file

@ -20,10 +20,12 @@ event dnp3_application_response_header%(c: connection, is_orig: bool, fc: count,
## is_orig: True if this reflects originator-side activity. ## is_orig: True if this reflects originator-side activity.
## obj_type: type of object, which is classified based on an 8-bit group number and an 8-bit variation number ## obj_type: type of object, which is classified based on an 8-bit group number and an 8-bit variation number
## qua_field: qualifier field ## qua_field: qualifier field
## rf_low, rf_high: the structure of the range field depends on the qualified field. In some cases, range field ## rf_low: the structure of the range field depends on the qualifier field.
## contain only one logic part, e.g., number of objects, so only rf_low contains the useful values; in some ## In some cases, the range field contains only one logic part, e.g.,
## cases, range field contain two logic parts, e.g., start index and stop index, so rf_low contains the start ## number of objects, so only *rf_low* contains the useful values.
## index while rf_high contains the stop index ## rf_high: in some cases, the range field contains two logic parts,
## e.g., start index and stop index, so *rf_low* contains the start index
## while *rf_high* contains the stop index.
event dnp3_object_header%(c: connection, is_orig: bool, obj_type: count, qua_field: count, number: count, rf_low: count, rf_high: count%); event dnp3_object_header%(c: connection, is_orig: bool, obj_type: count, qua_field: count, number: count, rf_low: count, rf_high: count%);
## Generated for the prefix before a DNP3 object. The structure and the meaning ## Generated for the prefix before a DNP3 object. The structure and the meaning
@ -48,11 +50,12 @@ event dnp3_object_prefix%(c: connection, is_orig: bool, prefix_value: count%);
## src_addr: the "source" field in the DNP3 Pseudo Link Layer ## src_addr: the "source" field in the DNP3 Pseudo Link Layer
event dnp3_header_block%(c: connection, is_orig: bool, start: count, len: count, ctrl: count, dest_addr: count, src_addr: count%); event dnp3_header_block%(c: connection, is_orig: bool, start: count, len: count, ctrl: count, dest_addr: count, src_addr: count%);
## Generated for a DNP3 "Response_Data_Object". The "Response_Data_Object" contains two ## Generated for a DNP3 "Response_Data_Object".
## parts: object prefix and objects data. In most cases, objects data are defined ## The "Response_Data_Object" contains two parts: object prefix and object
## by new record types. But in a few cases, objects data are directly basic types, ## data. In most cases, object data are defined by new record types. But
## such as int16, or int8; thus we use a additional data_value to record the values ## in a few cases, object data are directly basic types, such as int16 or
## of those object data. ## int8; thus we use an additional data_value to record the values of those
## object data.
## ##
## c: The connection the DNP3 communication is part of. ## c: The connection the DNP3 communication is part of.
## is_orig: True if this reflects originator-side activity. ## is_orig: True if this reflects originator-side activity.

View file

@ -518,8 +518,8 @@ event load_sample%(samples: load_sample_info, CPU: interval, dmem: int%);
## processing. If a signature with an ``event`` action matches, this event is ## processing. If a signature with an ``event`` action matches, this event is
## raised. ## raised.
## ##
## See the :doc:`user manual </signatures>` for more information about Bro's ## See the :doc:`user manual </frameworks/signatures>` for more information
## signature engine. ## about Bro's signature engine.
## ##
## state: Context about the match, including which signatures triggered the ## state: Context about the match, including which signatures triggered the
## event and the connection for which the match was found. ## event and the connection for which the match was found.

View file

@ -23,12 +23,13 @@ module GLOBAL;
## fp: The desired false-positive rate. ## fp: The desired false-positive rate.
## ##
## capacity: the maximum number of elements that guarantees a false-positive ## capacity: the maximum number of elements that guarantees a false-positive
## rate of *fp*. ## rate of *fp*.
## ##
## name: A name that uniquely identifies and seeds the Bloom filter. If empty, ## name: A name that uniquely identifies and seeds the Bloom filter. If empty,
## the filter will use :bro:id:`global_hash_seed` if that's set, and otherwise use ## the filter will use :bro:id:`global_hash_seed` if that's set, and
## a local seed tied to the current Bro process. Only filters with the same seed ## otherwise use a local seed tied to the current Bro process. Only
## can be merged with :bro:id:`bloomfilter_merge` . ## filters with the same seed can be merged with
## :bro:id:`bloomfilter_merge` .
## ##
## Returns: A Bloom filter handle. ## Returns: A Bloom filter handle.
## ##
@ -61,13 +62,14 @@ function bloomfilter_basic_init%(fp: double, capacity: count,
## cells: The number of cells of the underlying bit vector. ## cells: The number of cells of the underlying bit vector.
## ##
## name: A name that uniquely identifies and seeds the Bloom filter. If empty, ## name: A name that uniquely identifies and seeds the Bloom filter. If empty,
## the filter will use :bro:id:`global_hash_seed` if that's set, and otherwise use ## the filter will use :bro:id:`global_hash_seed` if that's set, and
## a local seed tied to the current Bro process. Only filters with the same seed ## otherwise use a local seed tied to the current Bro process. Only
## can be merged with :bro:id:`bloomfilter_merge` . ## filters with the same seed can be merged with
## :bro:id:`bloomfilter_merge` .
## ##
## Returns: A Bloom filter handle. ## Returns: A Bloom filter handle.
## ##
## .. bro:see:: bloom_filter_basic_init bloomfilter_counting_init bloomfilter_add ## .. bro:see:: bloomfilter_basic_init bloomfilter_counting_init bloomfilter_add
## bloomfilter_lookup bloomfilter_clear bloomfilter_merge global_hash_seed ## bloomfilter_lookup bloomfilter_clear bloomfilter_merge global_hash_seed
function bloomfilter_basic_init2%(k: count, cells: count, function bloomfilter_basic_init2%(k: count, cells: count,
name: string &default=""%): opaque of bloomfilter name: string &default=""%): opaque of bloomfilter
@ -94,18 +96,20 @@ function bloomfilter_basic_init2%(k: count, cells: count,
## ##
## k: The number of hash functions to use. ## k: The number of hash functions to use.
## ##
## cells: The number of cells of the underlying counter vector. As there's no ## cells: The number of cells of the underlying counter vector. As there's
## single answer to what's the best parameterization for a counting Bloom filter, ## no single answer to what's the best parameterization for a
## we refer to the Bloom filter literature here for choosing an appropiate value. ## counting Bloom filter, we refer to the Bloom filter literature
## here for choosing an appropriate value.
## ##
## max: The maximum counter value associated with each each element described ## max: The maximum counter value associated with each element
## by *w = ceil(log_2(max))* bits. Each bit in the underlying counter vector ## described by *w = ceil(log_2(max))* bits. Each bit in the underlying
## becomes a cell of size *w* bits. ## counter vector becomes a cell of size *w* bits.
## ##
## name: A name that uniquely identifies and seeds the Bloom filter. If empty, ## name: A name that uniquely identifies and seeds the Bloom filter. If empty,
## the filter will use :bro:id:`global_hash_seed` if that's set, and otherwise use ## the filter will use :bro:id:`global_hash_seed` if that's set, and
## a local seed tied to the current Bro process. Only filters with the same seed ## otherwise use a local seed tied to the current Bro process. Only
## can be merged with :bro:id:`bloomfilter_merge` . ## filters with the same seed can be merged with
## :bro:id:`bloomfilter_merge` .
## ##
## Returns: A Bloom filter handle. ## Returns: A Bloom filter handle.
## ##
@ -193,7 +197,7 @@ function bloomfilter_lookup%(bf: opaque of bloomfilter, x: any%): count
## ##
## bf: The Bloom filter handle. ## bf: The Bloom filter handle.
## ##
## .. bro:see:: bloomfilter_basic_init bloomfilter_counting_init2 ## .. bro:see:: bloomfilter_basic_init bloomfilter_basic_init2
## bloomfilter_counting_init bloomfilter_add bloomfilter_lookup ## bloomfilter_counting_init bloomfilter_add bloomfilter_lookup
## bloomfilter_merge ## bloomfilter_merge
function bloomfilter_clear%(bf: opaque of bloomfilter%): any function bloomfilter_clear%(bf: opaque of bloomfilter%): any

View file

@ -11,7 +11,7 @@ using namespace std;
%%} %%}
## Calculates the Levenshtein distance between the two strings. See `Wikipedia ## Calculates the Levenshtein distance between the two strings. See `Wikipedia
## <http://en.wikipedia.org/wiki/Levenshtein_distance>`_ for more information. ## <http://en.wikipedia.org/wiki/Levenshtein_distance>`__ for more information.
## ##
## s1: The first string. ## s1: The first string.
## ##
@ -840,7 +840,7 @@ function string_to_ascii_hex%(s: string%): string
%} %}
## Uses the Smith-Waterman algorithm to find similar/overlapping substrings. ## Uses the Smith-Waterman algorithm to find similar/overlapping substrings.
## See `Wikipedia <http://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm>`_. ## See `Wikipedia <http://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm>`__.
## ##
## s1: The first string. ## s1: The first string.
## ##