Merge remote-tracking branch 'origin/master' into fastpath

Seth Hall 2011-11-14 16:06:44 -05:00
commit d14349a6f8
150 changed files with 4528 additions and 1114 deletions

1
.gitignore vendored

@@ -1 +1,2 @@
build
tmp

170
CHANGES

@@ -1,4 +1,174 @@
2.0-beta-21 | 2011-11-06 19:27:22 -0800
* Quickstart doc fixes. (Jon Siwek)
2.0-beta-19 | 2011-11-03 17:41:00 -0700
* Fixing packet filter test. (Robin Sommer)
2.0-beta-12 | 2011-11-03 15:21:08 -0700
* No longer write to the PacketFilter::LOG stream if not reading
traffic. (Seth Hall)
2.0-beta-10 | 2011-11-03 15:17:08 -0700
* Notice framework documentation update. (Seth Hall)
* Fixing compiler warnings (addresses #388) (Jon Siwek)
2.0-beta | 2011-10-27 17:46:28 -0700
* Preliminary fix for SSH login detection: we need a counted measure
of payload bytes (not ack tracking and not with the IP header
which is what we have now). (Seth Hall)
* Fixing send_id() problem. We no longer update &redef functions.
Updating code on the fly isn't fully supported. (Robin Sommer)
* Tuning the format of the pretty-printed alarm summaries. (Robin
Sommer)
1.6-dev-1508 | 2011-10-26 17:24:50 -0700
* Updating submodule(s). (Robin Sommer)
1.6-dev-1507 | 2011-10-26 15:10:18 -0700
* Baseline updates. (Robin Sommer)
1.6-dev-1506 | 2011-10-26 14:48:43 -0700
* Updating submodule(s). (Robin Sommer)
1.6-dev-1505 | 2011-10-26 14:43:58 -0700
* A new base script that pretty-prints alarms in the regular
summary. (Robin Sommer)
* Adding a dummy log writer WRITER_NONE that just discards
everything. (Robin Sommer)
1.6-dev-1498 | 2011-10-26 14:30:15 -0700
* Adding instructions to local.bro how to do ACTION_ALARM by
default. (Seth Hall)
1.6-dev-1495 | 2011-10-26 10:15:58 -0500
* Updated unit test baselines. (Seth Hall)
1.6-dev-1491 | 2011-10-25 20:22:56 -0700
* Updating submodule(s). (Robin Sommer)
1.6-dev-1482 | 2011-10-25 19:08:32 -0700
* Fixing bug in the log manager's predicate evaluation. (Robin Sommer)
1.6-dev-1481 | 2011-10-25 18:17:03 -0700
* Fix a problem with DNS servers being logged that aren't actually
servers. (Seth Hall)
* Changed generated root cert DN format for RFC2253 compliance. (Jon
Siwek)
* Removed :bro doc directives from notice documentation. (Seth Hall)
* New notice framework docs. (Seth Hall)
* Adding sub messages to emails. (Seth Hall)
* Adding extra fields to smtp and http to track transaction depth.
(Seth Hall)
* Fix for SSH login detection heuristic. (Seth Hall)
* Removed some fields from http analysis that weren't commonly
needed or were wrong. (Seth Hall)
* Updated/fixed MSIE version parsing in the software framework.
(Seth Hall)
* Update Mozilla trust roots to index certs by subject distinguished
name. (Jon Siwek)
* weird.bro rewrite. (Seth Hall)
* More notice email tuning. (Seth Hall)
* Slightly restructured http file hashing to fix a bug. (Seth Hall)
* Changed the notice name for interesting ssh logins to correctly
reflect semantics of the notice. (Seth Hall)
* Field name change to notice framework. $result -> $action
- $result is renamed to $action to reflect changes to the notice
framework since there is already another result-like field
($suppress_for) and there may be more in the future.
- Slipped in a change to add connection information to notice
emails too. (Seth Hall)
* Small script refinements and documentation updates. (Seth Hall)
* Pass over upgrade guide. (Robin Sommer)
1.6-dev-1430 | 2011-10-21 10:39:09 -0700
* Fixing crash with unknown debug streams. Closes #643. (Robin
Sommer)
* Code to better handle interpreter errors, which can now be turned
into non-fatal runtime errors rather than immediate aborts. (Robin
Sommer).
* Remove old make-src-packages script. (Jon Siwek)
* Fixing a bunch of format strings. Closes #567. (Robin Sommer)
* Cleaning up some distribution files. (Robin Sommer)
* Various test, doc, and installation fixes/tweaks. (Seth Hall, Jon
Siwek and Robin Sommer).
* Various smaller policy fixes and tweaks (Seth Hall).
* Moving docs from web server into distribution. (Robin Sommer)
* Fixing more (small) memory leaks. (Robin Sommer)
* Profiling support for DNS_Mgr and triggers. With
misc/profiling.bro, both now report a line in prof.log with some
counters on usage. (Robin Sommer)
* Fixing DNS memory leaks. Closes #534. (Robin Sommer)
* Fix code for disabling analyzers. Closes #577. (Robin Sommer)
* Changed communication option from listen_encrypted to listen_ssl.
(Seth Hall)
* Modification to the Communication framework API. (Seth Hall)
- Simplified the communication API and made it easier to change
to encrypted connections by not having separate variables to
define encrypted and unencrypted ports.
- Now, to enable listening without configuring nodes just
load the frameworks/communication/listen script.
- If encrypted listening is desired set the following:
redef Communication::listen_encrypted=T;
* Connection compressor now disabled by default. Addresses #559.
(Robin Sommer)
1.6-dev-1372 | 2011-10-06 18:09:17 -0700
* Filtering some potentially high-volume DNS weirds. (Robin Sommer)

28
COPYING

@@ -1,11 +1,12 @@
Copyright (c) 1995-2010, The Regents of the University of California,
through Lawrence Berkeley National Laboratory. All rights reserved.
Copyright (c) 1995-2011, The Regents of the University of California
through the Lawrence Berkeley National Laboratory and the
International Computer Science Institute. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
(1) Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
(1) Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
(2) Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
@@ -29,20 +30,5 @@ CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Note that some files in the Bro distribution carry their own copyright
notices. The above applies to the Bro scripts in policy/ (other than as
noted below) and the source files in src/, other than:
policy/sigs/p0fsyn.osf
src/H3.h
src/OSFinger.cc
src/OSFinger.h
src/bsd-getopt-long.c
src/bsd-getopt-long.h
src/md5.c
src/md5.h
src/patricia.c
src/patricia.h
In addition, other components, such as the build system, may have
separate copyrights.
Note that some files in the distribution may carry their own copyright
notices.

95
INSTALL

@@ -8,54 +8,32 @@ Prerequisites
Bro relies on the following libraries and tools, which need to be installed
before you begin:
* A C/C++ compiler
* CMake 2.6 or greater http://www.cmake.org
* Libpcap headers and libraries
Network traffic capture library
* Libpcap (headers and libraries) http://www.tcpdump.org
* Flex (Fast Lexical Analyzer)
Flex is already installed on most systems, so with luck you can
skip having to install it yourself.
* OpenSSL (headers and libraries) http://www.openssl.org
* Bison (GNU Parser Generator)
This comes with many systems, but if you get errors compiling
parse.y, you will need to install it.
* Perl
Used only during the Bro build process
* sed
Used only during the Bro build process
* BIND8 headers and libraries
These are usually already installed as well.
* OpenSSL headers and libraries
For analysis of SSL certificates by the HTTP analyzer, and
for encrypted Bro-to-Bro communication. These are likely installed,
though some platforms may require installation of a 'devel' package
for the headers.
* CMake 2.6 or greater
CMake is a cross-platform, open-source build system, typically
not installed by default. See http://www.cmake.org for more
information regarding CMake and the installation steps below for
how to use it to build this distribution. CMake generates native
Makefiles that depend on GNU Make by default.
Bro can also make uses of some optional libraries if they are found at
Bro can make use of some optional libraries if they are found at
installation time:
* Libmagic
For identifying file types (e.g., in FTP transfers).
* Libmagic For identifying file types (e.g., in FTP transfers).
* LibGeoIP
For geo-locating IP addresses.
* LibGeoIP For geo-locating IP addresses.
* Libz
For decompressing HTTP bodies by the HTTP analyzer, and for
* Libz For decompressing HTTP bodies by the HTTP analyzer, and for
compressed Bro-to-Bro communication.
Bro also needs the following tools, but on most systems they will
already come preinstalled:
* BIND8 (headers and libraries)
* Bison (GNU Parser Generator)
* Flex (Fast Lexical Analyzer)
* Perl (Used only during the Bro build process)
Installation
============
@@ -65,26 +43,30 @@ To build and install into ``/usr/local/bro``::
> make
> make install
This will perform an out-of-source build into a directory called
``build/``, using default build options. It then installs the Bro binary
into ``/usr/local/bro/bin``. Depending on the Bro package you
downloaded, there may be auxiliary tools and libraries available in the
``aux/`` directory. All of them except for ``aux/bro-aux`` will also be
built and installed by doing ``make install``. To install the programs
that come in the ``aux/bro-aux`` directory, additionally use ``make
install-aux``. There are ``--disable`` options that can be given to the
configure script to turn off unwanted auxiliary projects.
This will first build Bro into a directory inside the distribution
called ``build/``, using default build options. It then installs all
required files into ``/usr/local/bro``, including the Bro binary in
``/usr/local/bro/bin/bro``.
You can specify a different installation directory with::
> ./configure --prefix=<dir>
Note that ``/usr`` and ``/opt/bro`` are standard prefixes for binary
packages to be installed, so those are typically not good choices
unless you are creating such a package.
Note that ``/usr`` and ``/opt/bro`` are the standard prefixes for
binary Bro packages to be installed, so those are typically not good
choices unless you are creating such a package.
Run ``./configure --help`` for more options.
Depending on the Bro package you downloaded, there may be auxiliary
tools and libraries available in the ``aux/`` directory. All of them
except for ``aux/bro-aux`` will also be built and installed by doing
``make install``. To install the programs that come in the
``aux/bro-aux`` directory, use ``make install-aux``. There are
``--disable-*`` options that can be given to the configure script to
turn off unwanted auxiliary projects.
Running Bro
===========
@@ -94,13 +76,14 @@ available here:
http://www.bro-ids.org/documentation/quickstart.html
For developers that wish to run Bro from the the ``build/`` directory
after performing ``make``, but without performing ``make install``, they
will have to first set ``BROPATH`` to look for scripts inside the build
For developers that wish to run Bro directly from the ``build/``
directory (i.e., without performing ``make install``), they will have
to first adjust ``BROPATH`` to look for scripts inside the build
directory. Sourcing either ``build/bro-path-dev.sh`` or
``build/bro-path-dev.csh`` as appropriate for the current shell
accomplishes this and also augments your ``PATH`` so you can use Bro
without qualifying the path to it. e.g.::
accomplishes this and also augments your ``PATH`` so you can use the
Bro binary directly:
> ./configure
> make


@@ -6,6 +6,10 @@
#
BUILD=build
REPO=`basename \`git config --get remote.origin.url\``
VERSION_FULL=$(REPO)-`cat VERSION`
VERSION_MIN=$(REPO)-`cat VERSION`-minimal
HAVE_MODULES=git submodule | grep -v cmake >/dev/null
all: configured
( cd $(BUILD) && make )
@@ -26,7 +30,16 @@ docclean: configured
( cd $(BUILD) && make docclean )
dist:
@./pkg/make-src-packages
@rm -rf $(VERSION_FULL) $(VERSION_FULL).tgz
@rm -rf $(VERSION_MIN) $(VERSION_MIN).tgz
@mkdir $(VERSION_FULL)
@tar --exclude=$(VERSION_FULL)* --exclude=$(VERSION_MIN)* --exclude=.git -cf - . | ( cd $(VERSION_FULL) && tar -xpf - )
@( cd $(VERSION_FULL) && cp -R ../.git . && git reset -q --hard HEAD && git clean -xdfq && rm -rf .git )
@tar -czf $(VERSION_FULL).tgz $(VERSION_FULL) && echo Package: $(VERSION_FULL).tgz && rm -rf $(VERSION_FULL)
@$(HAVE_MODULES) && mkdir $(VERSION_MIN) || exit 0
@$(HAVE_MODULES) && tar --exclude=$(VERSION_FULL)* --exclude=$(VERSION_MIN)* --exclude=.git `git submodule | awk '{print "--exclude="$$2}' | grep -v cmake | tr '\n' ' '` -cf - . | ( cd $(VERSION_MIN) && tar -xpf - ) || exit 0
@$(HAVE_MODULES) && ( cd $(VERSION_MIN) && cp -R ../.git . && git reset -q --hard HEAD && git clean -xdfq && rm -rf .git ) || exit 0
@$(HAVE_MODULES) && tar -czf $(VERSION_MIN).tgz $(VERSION_MIN) && echo Package: $(VERSION_MIN).tgz && rm -rf $(VERSION_MIN) || exit 0
bindist:
@( cd pkg && ( ./make-deb-packages || ./make-mac-packages || \

10
README

@@ -3,11 +3,9 @@ Bro Network Security Monitor
============================
Bro is a powerful framework for network analysis and security
monitoring.
Please see the INSTALL file for installation instructions and pointers
for getting started. For more documentation, research publications, or
community contact information see Bro's home page:
monitoring. Please see the INSTALL file for installation instructions
and pointers for getting started. For more documentation, research
publications, and community contact information, see Bro's home page:
http://www.bro-ids.org
@@ -19,5 +17,3 @@ Vern Paxson & Robin Sommer,
International Computer Science Institute &
Lawrence Berkeley National Laboratory
vern@icir.org / robin@icir.org


@@ -1 +1 @@
1.6-dev-1372
2.0-beta-21

@@ -1 +1 @@
Subproject commit 952724eb355aa7d4ba50623e97a666dd7d1173a8
Subproject commit 777b8a21c4c74e1f62e8b9896b082e8c059b539f

@@ -1 +1 @@
Subproject commit 86c604193543e0fa85f7edeb132436f3d1b33ac7
Subproject commit 906f970df5f708582c7002069b787d5af586b46f

@@ -1 +1 @@
Subproject commit 999a935ccff7c2229cd37d9b117f12dc881f8168
Subproject commit e02e3cc89a3efb3d7ec376154e24835b4b828be8

@@ -1 +1 @@
Subproject commit cf75e03def0f9d6196d2b4d0d2d3a9160a23b112
Subproject commit 288c8568d7aaa38cf7c05833c133a91cbadbfce4

@@ -1 +1 @@
Subproject commit 3c0b0e9a91060a7a453a5d6fb72ed1fd9071fda9
Subproject commit 7230a09a8c220d2117e491fdf293bf5c19819b65

2
cmake

@@ -1 +1 @@
Subproject commit bbf129bd7bd33dfb5641ff0d9242f4b3ebba8e82
Subproject commit 704e255d7ef2faf926836c1c64d16c5b8a02b063

1
doc/.gitignore vendored Normal file

@@ -0,0 +1 @@
html

7
doc/Makefile Normal file

@@ -0,0 +1,7 @@
all:
test -d html || mkdir html
for i in *.rst; do echo "$$i ..."; ./bin/rst2html.py $$i >html/`echo $$i | sed 's/rst$$/html/g'`; done
clean:
rm -rf html


@@ -1 +1,38 @@
TODO
Documentation
=============
This directory contains Bro documentation in reStructured text format
(see http://docutils.sourceforge.net/rst.html).
Please note that for now these files are primarily intended for use on
http://www.bro-ids.org. While the Bro build process will render local
versions into ``build/doc/`` (if docutils is found), the resulting
HTML is very minimalistic and some features are not supported. In
particular, some links will be broken.
Notes for Writing Documentation
-------------------------------
* If you want to refer to a Bro script that's part of the
distribution, use {{'`foo.bro
<{{autodoc_bro_scripts}}/path/to/foo.html>`_'}}. For example,
``{{'{{autodoc_bro_scripts}}/scripts/base/frameworks/notice/main.html}}'}}``.
* If you want to refer to a page on the Bro web site, use the
``docroot`` macro (e.g.,
``{{'href="{{docroot}}/download/index.html"'}}``). Make sure to
include the ``index.html`` for the main pages, just as in the
example.
* If you want to refer to a page inside this directory, use a relative
path with an HTML extension (e.g., ``href="quickstart.html"``).
Guidelines
----------
TODO.

62
doc/bin/rst2html.py Executable file

@@ -0,0 +1,62 @@
#!/usr/bin/env python
#
# Derived from docutils standard rst2html.py.
#
# $Id: rst2html.py 4564 2006-05-21 20:44:42Z wiemann $
# Author: David Goodger <goodger@python.org>
# Copyright: This module has been placed in the public domain.
#
#
# Extension: we add two dummy directives "code" and "console" to be
# compatible with Bro's web site setup.
try:
import locale
locale.setlocale(locale.LC_ALL, '')
except:
pass
import textwrap
from docutils.core import publish_cmdline, default_description
from docutils import nodes
from docutils.parsers.rst import directives, Directive
from docutils.parsers.rst.directives.body import LineBlock
class Literal(Directive):
#max_line_length = 68
max_line_length = 0
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
has_content = True
def wrapped_content(self):
content = []
if Literal.max_line_length:
for line in self.content:
content += textwrap.wrap(line, Literal.max_line_length, subsequent_indent=" ")
else:
content = self.content
return u'\n'.join(content)
def run(self):
self.assert_has_content()
content = self.wrapped_content()
literal = nodes.literal_block(content, content)
return [literal]
directives.register_directive('code', Literal)
directives.register_directive('console', Literal)
description = ('Generates (X)HTML documents from standalone reStructuredText '
'sources. ' + default_description)
publish_cmdline(writer_name='html', description=description)
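The line-wrapping behavior that ``wrapped_content`` enables (when ``max_line_length`` is non-zero) comes straight from the standard library; a minimal standalone sketch of it, with an illustrative input line:

```python
import textwrap

# Wrap a long literal line at 40 columns; continuation lines get the
# same subsequent_indent treatment as in the Literal directive above.
line = "a very long literal line that exceeds the configured maximum line length"
wrapped = textwrap.wrap(line, 40, subsequent_indent="  ")

for w in wrapped:
    print(w)
```

Every emitted line fits within the 40-column limit, and each line after the first carries the two-space continuation indent.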

86
doc/cluster.rst Normal file

@@ -0,0 +1,86 @@
Bro Cluster
===========
Intro
------
Bro is not multithreaded, so once the limitations of a single processor core are reached, the only option currently is to spread the workload across many cores, or even many physical computers. The cluster deployment scenario for Bro is the current solution to build these larger systems. The accompanying tools and scripts provide the structure to easily manage many Bro processes examining packets and doing correlation activities while acting as a singular, cohesive entity.
Architecture
---------------
The figure below illustrates the main components of a Bro cluster.
.. {{git_pull('bro:doc/deployment.png')}}
.. image:: deployment.bro.png
Tap
***
This is a mechanism that splits the packet stream in order to make a copy
available for inspection. Examples include the monitoring port on a switch and
an optical splitter for fiber networks.
Frontend
********
This is a discrete hardware device or on-host technique that will split your traffic into many streams or flows. The Bro binary does not do this job. There are numerous ways to accomplish this task, some of which are described below in `Frontend Options`_.
Manager
*******
This is a Bro process which has two primary jobs. It receives log messages and notices from the rest of the nodes in the cluster using the Bro communications protocol. The result is a single log for each log type instead of many discrete logs that you have to combine later with post-processing. The manager also de-duplicates notices; it is able to do so because it acts as the choke point for notices and for how notices are processed into actions such as emailing, paging, or blocking.
The manager process is started first by BroControl; it only opens its designated port and waits for connections, and it doesn't initiate any connections to the rest of the cluster. Once the workers are started and connect to the manager, logs and notices will start arriving at the manager process from the workers.
Proxy
*****
This is a Bro process which manages synchronized state. Variables can be synchronized across connected Bro processes automatically in Bro and proxies will help the workers by alleviating the need for all of the workers to connect directly to each other.
An example of synchronized state from the scripts that ship with Bro is the full list of "known" hosts and services: hosts or services that have been detected performing full TCP handshakes, or on whose connections an analyzed protocol has been found. If worker A detects host 1.2.3.4 as an active host, it would be beneficial for worker B to know that as well, so worker A shares that information as an insertion to a set <link to set documentation would be good here>, which travels to the cluster's proxy; the proxy then sends that same set insertion to worker B. The result is that worker A and worker B have shared knowledge about the hosts and services that are active on the network being monitored.
The proxy model extends to multiple proxies if necessary for performance reasons; this only adds one additional step for the Bro processes. Each proxy connects to another proxy in a ring, and the workers are shared between them as evenly as possible. When a proxy receives some new bit of state, it shares that with its neighboring proxy, from which it is shared around the ring of proxies and down to all of the workers. From a practical standpoint, there are no established rules of thumb yet for the number of proxies needed for a given number of workers. It is best to start with a single proxy and add more if communication performance problems are found.
Bro processes acting as proxies don't tend to be particularly demanding of CPU or memory, and users frequently run proxy processes on the same physical host as the manager.
Worker
******
This is the Bro process that sniffs network traffic and does protocol analysis on the reassembled traffic streams. Most of the work of an active cluster takes place on the workers and as such, the workers typically represent the bulk of the Bro processes that are running in a cluster. The fastest memory and CPU core speed you can afford is best here since all of the protocol parsing and most analysis will take place here. There are no particular requirements for the disks in workers since almost all logging is done remotely to the manager and very little is normally written to disk.
The rule of thumb we have followed recently is to allocate approximately 1 core for every 80Mbps of traffic that is being analyzed; however, this estimate could be highly specific to the traffic mix. It has generally worked for mixed traffic with many users and servers. For example, if your traffic peaks around 2Gbps (combined) and you want to handle traffic at peak load, you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps estimate works for your traffic, this could be handled by 3 physical hosts dedicated to being workers with each one containing dual 6-core processors.
Once a flow-based load balancer is put into place, this model is also extremely easy to scale, so it's recommended that you estimate the amount of hardware you will need to fully analyze your traffic. If it turns out that you need more, it's relatively easy to increase the size of the cluster in most cases.
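The sizing arithmetic above (one core per roughly 80 Mbps, rounded up) can be sketched as a one-liner; the traffic figure is just the example from the text:

```python
import math

def worker_cores_needed(peak_mbps, mbps_per_core=80):
    """Estimate worker cores from peak traffic, per the rule of thumb above."""
    return math.ceil(peak_mbps / mbps_per_core)

# 2 Gbps combined peak, as in the example above:
print(worker_cores_needed(2048))  # 26
```

Remember that the 80 Mbps-per-core figure is itself only a starting point; measure against your own traffic mix.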
Frontend Options
----------------
There are many options for setting up a frontend flow distributor and in many cases it may even be beneficial to do multiple stages of flow distribution on the network and on the host.
Discrete hardware flow balancers
********************************
cPacket
^^^^^^^
If you are monitoring one or more 10G physical interfaces, the recommended solution is to use either a cFlow or cVu device from cPacket because they are currently being used very successfully at a number of sites. These devices will perform layer-2 load balancing by rewriting the destination ethernet MAC address to cause each packet associated with a particular flow to have the same destination MAC. The packets can then be passed directly to a monitoring host where each worker has a BPF filter to limit its visibility to only that stream of flows, or onward to a commodity switch to split the traffic out to multiple 1G interfaces for the workers. This can ultimately greatly reduce costs since workers can use relatively inexpensive 1G interfaces.
OpenFlow Switches
^^^^^^^^^^^^^^^^^
We are currently exploring the use of OpenFlow based switches to do flow based load balancing directly on the switch which can greatly reduce frontend costs for many users. This document will be updated when we have more information.
On host flow balancing
**********************
PF_RING
^^^^^^^
The PF_RING software for Linux has a “clustering” feature which will do flow based load balancing across a number of processes that are sniffing the same interface. This will allow you to easily take advantage of multiple cores in a single physical host because Bro's main event loop is single threaded and can't natively utilize all of the cores. More information about Bro with PF_RING can be found here: (someone want to write a quick Bro/PF_RING tutorial to link to here? document installing kernel module, libpcap wrapper, building Bro with the --with-pcap configure option)
Netmap
^^^^^^
FreeBSD has an in-progress project named Netmap which will enable flow based load balancing as well. When it becomes viable for real-world use, this document will be updated.
Click! Software Router
^^^^^^^^^^^^^^^^^^^^^^
Click! can be used for flow based load balancing with a simple configuration. (link to an example for the config). This solution is not recommended on Linux due to Bro's PF_RING support, and only as a last resort on other operating systems, since it causes a lot of overhead due to context switching back and forth between kernel and userland several times per packet.

BIN
doc/deployment.png Normal file

Binary file not shown.


102
doc/geoip.rst Normal file

@@ -0,0 +1,102 @@
===========
GeoLocation
===========
.. class:: opening
During the process of creating policy scripts the need may arise
to find the geographic location for an IP address. Bro has support
for the `GeoIP library <http://www.maxmind.com/app/c>`__ at the
policy script level beginning with release 1.3 to account for this
need.
.. contents::
GeoIPLite Database Installation
------------------------------------
A country database for GeoIPLite is included when you do the C API
install, but for Bro, we are using the city database which includes
cities and regions in addition to countries.
`Download <http://www.maxmind.com/app/geolitecity>`__ the geolitecity
binary database and follow the directions to install it.
FreeBSD Quick Install
---------------------
.. console::
pkg_add -r GeoIP
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz
mv GeoLiteCity.dat /usr/local/share/GeoIP/GeoIPCity.dat
# Set your environment correctly before running Bro's configure script
export CFLAGS=-I/usr/local/include
export LDFLAGS=-L/usr/local/lib
CentOS Quick Install
--------------------
.. console::
yum install GeoIP-devel
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz
mkdir -p /var/lib/GeoIP/
mv GeoLiteCity.dat /var/lib/GeoIP/GeoIPCity.dat
# Set your environment correctly before running Bro's configure script
export CFLAGS=-I/usr/local/include
export LDFLAGS=-L/usr/local/lib
Usage
-----
There is a single built-in function that provides the GeoIP
functionality:
.. code:: bro
function lookup_location(a:addr): geo_location
There is also the ``geo_location`` data structure that is returned
from the ``lookup_location`` function:
.. code:: bro
type geo_location: record {
country_code: string;
region: string;
city: string;
latitude: double;
longitude: double;
};
Example
-------
To write a line in a log file for every ftp connection from hosts in
Ohio, this is now very easy:
.. code:: bro
global ftp_location_log: file = open_log_file("ftp-location");
event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool)
{
local client = c$id$orig_h;
local loc = lookup_location(client);
if (loc$region == "OH" && loc$country_code == "US")
{
print ftp_location_log, fmt("FTP Connection from:%s (%s,%s,%s)", client, loc$city, loc$region, loc$country_code);
}
}

50
doc/index.rst Normal file

@@ -0,0 +1,50 @@
Bro Documentation
=================
`Getting Started <{{git('bro:doc/quickstart.rst')}}>`_
A quick introduction to using Bro 2.x.
`Bro 1.5 to 2.0 Upgrade Guide <{{git('bro:doc/upgrade.rst')}}>`_
Guidelines and notes about upgrading from Bro 1.5 to 2.x. Lots of
things have changed, so make sure to read this when upgrading.
`BroControl <{{git('broctl:doc/broctl.rst')}}>`_
An interactive console for managing Bro installations.
`Script Reference <{{autodoc_bro_scripts}}/index.html>`_
A complete reference of all policy scripts shipped with Bro.
`FAQ <{{docroot}}/documentation/faq.html>`_
A list of frequently asked questions.
`How to Report a Problem <{{docroot}}/documentation/reporting-problems.html>`_
Some advice for when you see Bro doing something you believe it
shouldn't.
Frameworks
----------
Bro comes with a number of frameworks, some of which are described in
more detail here:
`Notice <{{git('bro:doc/notice.rst')}}>`_
The notice framework.
`Logging <{{git('bro:doc/logging.rst')}}>`_
Customizing and extending Bro's logging.
`Cluster <{{git('bro:doc/cluster.rst')}}>`_
Setting up a Bro Cluster when a single box can't handle the traffic anymore.
`Signatures <{{git('bro:doc/signatures.rst')}}>`_
Bro has support for traditional NIDS signatures as well.
How-Tos
-------
We also collect more specific How-Tos on specific topics:
`Using GeoIP in Bro scripts <{{git('bro:doc/geoip.rst')}}>`_
Installation and usage of the GeoIP library.

352
doc/logging.rst Normal file

@@ -0,0 +1,352 @@
==========================
Customizing Bro's Logging
==========================
.. class:: opening
Bro comes with a flexible key-value based logging interface that
allows fine-grained control of what gets logged and how it is
logged. This document describes how logging can be customized and
extended.
.. contents::
Terminology
===========
Bro's logging interface is built around three main abstractions:
Log streams
A stream corresponds to a single log. It defines the set of
fields that a log consists of, with their names and types.
Examples are the ``conn`` stream for recording connection summaries,
and the ``http`` stream for recording HTTP activity.
Filters
Each stream has a set of filters attached to it that determine
what information gets written out. By default, each stream has
one default filter that just logs everything directly to disk
with an automatically generated file name. However, further
filters can be added to record only a subset, split a stream
into different outputs, or to even duplicate the log to
multiple outputs. If all filters are removed from a stream,
all output is disabled.
Writers
A writer defines the actual output format for the information
being logged. At the moment, Bro comes with only one type of
writer, which produces tab-separated ASCII files. In the
future we will add further writers, e.g., for binary output and
direct logging into a database.
Basics
======
The data fields that a stream records are defined by a record type
specified when it is created. Let's look at the script generating
Bro's connection summaries as an example,
``base/protocols/conn/main.bro``. It defines a record ``Conn::Info``
that lists all the fields that go into ``conn.log``, each marked with
a ``&log`` attribute indicating that it is part of the information
written out. To write a log record, the script then passes an instance
of ``Conn::Info`` to the logging framework's ``Log::write`` function.
By default, each stream automatically gets a filter named ``default``
that generates the normal output by recording all record fields into a
single output file.
In the following, we summarize ways in which the logging can be
customized. We continue using the connection summaries as our example
to work with.
Filtering
---------
To create a new output file for an existing stream, you can add a
new filter. A filter can, e.g., restrict the set of fields being
logged:
.. code:: bro
event bro_init()
{
# Add a new filter to the Conn::LOG stream that logs only
# timestamp and originator address.
local filter: Log::Filter = [$name="orig-only", $path="origs", $include=set("ts", "id.orig_h")];
Log::add_filter(Conn::LOG, filter);
}
Note the fields that are set for the filter:
``name``
A mandatory name for the filter that can later be used
to manipulate it further.
``path``
The filename for the output file, without any extension (which
may be automatically added by the writer). Default path values
are generated by taking the stream's ID and munging it
slightly. ``Conn::LOG`` is converted into ``conn``,
``PacketFilter::LOG`` is converted into ``packet_filter``, and
``Notice::POLICY_LOG`` is converted into ``notice_policy``.
``include``
A set limiting the fields to the ones given. The names
correspond to those in the ``Conn::LOG`` record, with
sub-records unrolled by concatenating fields (separated with
dots).
Using the code above, you will now get a new log file ``origs.log``
that looks like this::
#separator \x09
#path origs
#fields ts id.orig_h
#types time addr
1128727430.350788 141.42.64.125
1128727435.450898 141.42.64.125
If you want to make this the only log file for the stream, you can
remove the default filter (which, conveniently, has the name
``default``):
.. code:: bro
event bro_init()
{
# Remove the filter called "default".
Log::remove_filter(Conn::LOG, "default");
}
An alternate approach to "turning off" a log is to completely disable
the stream:
.. code:: bro
event bro_init()
{
Log::disable_stream(Conn::LOG);
}
If you want to skip only some fields but keep the rest, there is a
corresponding ``exclude`` filter attribute that you can use instead of
``include`` to list only the ones you are not interested in.
A filter can also determine output paths *dynamically* based on the
record being logged. That allows, e.g., recording local and remote
connections into separate files. To do this, you define a function
that returns the desired path:
.. code:: bro
function split_log(id: Log::ID, path: string, rec: Conn::Info) : string
{
# Return "conn-local" if originator is a local IP, otherwise "conn-remote".
local lr = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
return fmt("%s-%s", path, lr);
}
event bro_init()
{
local filter: Log::Filter = [$name="conn-split", $path_func=split_log, $include=set("ts", "id.orig_h")];
Log::add_filter(Conn::LOG, filter);
}
Running this will now produce two files, ``conn-local.log`` and
``conn-remote.log``, with the corresponding entries. One could extend
this further, for example to log information by subnet or even by
individual IP address. Be careful, however, as it is easy to create
many files very quickly.
.. sidebar:: A More Generic Variant

The ``split_log`` function shown above has one drawback: it can be
used only with the ``Conn::LOG`` stream, as the record type is
hardcoded into its argument list. However, Bro allows a more generic
variant:
.. code:: bro
function split_log(id: Log::ID, path: string, rec: record { id: conn_id; } ) : string
{
return Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
}
This function can be used with all log streams that have records
containing an ``id: conn_id`` field.
While so far we have seen how to customize the columns being logged,
you can also control which records are written out by providing a
predicate that will be called for each log record:
.. code:: bro
function http_only(rec: Conn::Info) : bool
{
# Record only connections with successfully analyzed HTTP traffic
return rec$service == "http";
}
event bro_init()
{
local filter: Log::Filter = [$name="http-only", $path="conn-http", $pred=http_only];
Log::add_filter(Conn::LOG, filter);
}
This will result in a log file ``conn-http.log`` that contains only
traffic detected and analyzed as HTTP.
Extending
---------
You can add further fields to a log stream by extending the record
type that defines its content. Let's say we want to add a boolean
field ``is_private`` to ``Conn::Info`` that indicates whether the
originator IP address is part of the RFC1918 space:
.. code:: bro
# Add a field to the connection log record.
redef record Conn::Info += {
## Indicate if the originator of the connection is part of the
## "private" address space defined in RFC1918.
is_private: bool &default=F &log;
};
Now we need to set the field. A connection's summary is generated at
the time its state is removed from memory. We can add another handler
at that time that sets our field correctly:
.. code:: bro
event connection_state_remove(c: connection)
{
if ( c$id$orig_h in Site::private_address_space )
c$conn$is_private = T;
}
Now ``conn.log`` will show a new field ``is_private`` of type
``bool``.
Notes:
- For extending logs this way, one needs a bit of knowledge about how
the script that creates the log stream organizes its state
keeping. Most of the standard Bro scripts attach their log state to
the ``connection`` record where it can then be accessed, just like
``c$conn`` above. For example, the HTTP analysis adds a field
``http: HTTP::Info`` to the ``connection`` record. See the script
reference for more information.
- When extending records as shown above, the new fields must always be
declared either with a ``&default`` value or as ``&optional``.
Furthermore, you need to add the ``&log`` attribute or otherwise the
field won't appear in the output.
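To sketch the second note, a field can also be declared ``&optional`` instead of carrying a default (``orig_hostname`` is a hypothetical field used only for illustration):

.. code:: bro

    redef record Conn::Info += {
        ## Hypothetical field: stays unset unless a handler assigns it.
        ## &log is still required for it to show up in conn.log.
        orig_hostname: string &optional &log;
    };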
Hooking into the Logging
------------------------
Sometimes it is helpful to do additional analysis of the information
being logged. For these cases, a stream can specify an event that will
be generated every time a log record is written to it. All of Bro's
default log streams define such an event. For example, the connection
log stream raises the event ``Conn::log_conn(rec: Conn::Info)``. You
could use that, for example, to flag when a connection to a specific
destination exceeds a certain duration:
.. code:: bro
redef enum Notice::Type += {
## Indicates that a connection remained established longer
## than 5 minutes.
Long_Conn_Found
};
event Conn::log_conn(rec: Conn::Info)
{
if ( rec$duration > 5mins )
NOTICE([$note=Long_Conn_Found,
$msg=fmt("unusually long conn to %s", rec$id$resp_h),
$id=rec$id]);
}
Often, these events can be an alternative to post-processing Bro logs
externally with Perl scripts: much of what such an external script
would do later offline can instead be done directly inside Bro in
real-time.
Rotation
--------
ASCII Writer Configuration
--------------------------
The ASCII writer has a number of options for customizing the format of
its output; see XXX.bro.
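As a sketch, a couple of these options can be redefined in a local script (the ``LogAscii`` option names below are assumptions based on the shipped ASCII writer script; verify them against your Bro version):

.. code:: bro

    # Assumed option names; check base/frameworks/logging/writers/ascii.bro.
    redef LogAscii::separator = "|";
    redef LogAscii::unset_field = "(unset)";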
Adding Streams
==============
It's easy to create a new log stream for custom scripts. Here's an
example for the ``Foo`` module:
.. code:: bro
module Foo;
export {
# Create an ID for our new stream. By convention, this is
# called "LOG".
redef enum Log::ID += { LOG };
# Define the fields. By convention, the type is called "Info".
type Info: record {
ts: time &log;
id: conn_id &log;
};
# Define a hook event. By convention, this is called
# "log_<stream>".
global log_foo: event(rec: Info);
}
# This event should be handled at a higher priority so that when
# users modify your stream later and they do it at priority 0,
# their code runs after this.
event bro_init() &priority=5
{
# Create the stream. This also adds a default filter automatically.
Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo]);
}
You can also add the state to the ``connection`` record to make it
easily accessible across event handlers:
.. code:: bro
redef record connection += {
foo: Info &optional;
};
Now you can use the ``Log::write`` method to output log records and
save the logged ``Foo::Info`` record into the connection record:
.. code:: bro
event connection_established(c: connection)
{
local rec: Foo::Info = [$ts=network_time(), $id=c$id];
c$foo = rec;
Log::write(Foo::LOG, rec);
}
See the existing scripts for how to work with such a new connection
field. A simple example is ``base/protocols/syslog/main.bro``.
When you are developing scripts that add data to the ``connection``
record, care must be given to when, and for how long, data is stored.
Normally, data saved to the connection record will remain there for
the duration of the connection, but from a practical perspective it's
not uncommon to need to delete that data before the connection ends.
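For example, once a record has been written there may be no reason to keep it for the rest of the connection; a handler for a hypothetical ``foo_done`` event (not one of Bro's shipped events) could log and release the state early:

.. code:: bro

    # Sketch only: foo_done is a made-up event marking the end of the
    # activity being tracked.
    event foo_done(c: connection)
        {
        if ( c?$foo )
            {
            Log::write(Foo::LOG, c$foo);
            # Free the state now instead of waiting for connection teardown.
            delete c$foo;
            }
        }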
Notice Framework
================
.. class:: opening
One of the easiest ways to customize Bro is writing a local notice
policy. Bro can detect a large number of potentially interesting
situations, and the notice policy tells Bro which of them the user wants
acted upon in some manner. In particular, the notice policy can specify
actions to be taken, such as sending an email or compiling regular
alarm emails. This page gives an introduction to writing such a notice
policy.
.. contents::
Overview
--------
Let's start with a little bit of background on Bro's philosophy on reporting
things. Bro ships with a large number of policy scripts which perform a wide
variety of analyses. Most of these scripts monitor for activity which might be
of interest for the user. However, none of these scripts determines the
importance of what it finds itself. Instead, the scripts only flags situations
as *potentially* interesting, leaving it to the local configuration to define
which of them are in fact actionable. This decoupling of detection and
reporting allows Bro to address the different needs that sites have:
definitions of what constitutes an attack or even a compromise differ quite a
bit between environments, and activity deemed malicious at one site might be
fully acceptable at another.
Whenever one of Bro's analysis scripts sees something potentially interesting
it flags the situation by calling the ``NOTICE`` function and giving it a
single ``Notice::Info`` record. A Notice has a ``Notice::Type``, which
reflects the kind of activity that has been seen, and it is usually also
augmented with further context about the situation.
More information about raising notices can be found in the `Raising Notices`_
section.
Once a notice is raised, it can have any number of actions applied to it by
the ``Notice::policy`` set which is described in the `Notice Policy`_
section below. Such actions can be to send a mail to the configured
address(es) or to simply ignore the notice. Currently, the following actions
are defined:
.. list-table::
:widths: 20 80
:header-rows: 1
* - Action
- Description
* - Notice::ACTION_LOG
- Write the notice to the ``Notice::LOG`` logging stream.
* - Notice::ACTION_ALARM
- Log into the ``Notice::ALARM_LOG`` stream which will rotate
hourly and email the contents to the email address or addresses
defined in the ``Notice::mail_dest`` variable.
* - Notice::ACTION_EMAIL
- Send the notice in an email to the email address or addresses given in
the ``Notice::mail_dest`` variable.
* - Notice::ACTION_PAGE
- Send an email to the email address or addresses given in the
``Notice::mail_page_dest`` variable.
* - Notice::ACTION_NO_SUPPRESS
- This action will disable the built-in notice suppression for the
notice. Keep in mind that this action will need to be applied to
every notice that shouldn't be suppressed including each of the future
notices that would have normally been suppressed.
How these notice actions are applied to notices is discussed in the
`Notice Policy`_ and `Notice Policy Shortcuts`_ sections.
Processing Notices
------------------
Notice Policy
*************
The predefined set ``Notice::policy`` provides the mechanism for applying
actions and other behavior modifications to notices. Each entry of
``Notice::policy`` is a record of the type ``Notice::PolicyItem`` which
defines a condition to be matched against all raised notices and one or more
of a variety of behavior modifiers. The notice policy is defined by adding any
number of ``Notice::PolicyItem`` records to the ``Notice::policy`` set.
Here's a simple example which tells Bro to send an email for all notices of
type ``SSH::Login`` if the server is 10.0.0.1:
.. code:: bro
redef Notice::policy += {
[$pred(n: Notice::Info) = {
return n$note == SSH::Login && n$id$resp_h == 10.0.0.1;
},
$action = Notice::ACTION_EMAIL]
};
.. note::
Keep in mind that the semantics of the SSH::Login notice are
such that it is only raised when Bro heuristically detects a successful
login. No apparently failed logins will raise this notice.
While the syntax might look a bit convoluted at first, it provides a lot of
flexibility due to having access to Bro's full programming language.
Predicate Field
^^^^^^^^^^^^^^^
The ``Notice::PolicyItem`` record type has a field named ``$pred`` which
defines the entry's condition in the form of a predicate written as a Bro
function. The function is passed the notice as a ``Notice::Info`` record and
it returns a boolean value indicating if the entry is applicable to that
particular notice.
.. note::
A ``Notice::PolicyItem`` without a predicate is treated as implicitly
true (``T``), since an entry with an implicit false (``F``) predicate
would never be used.
Bro evaluates the predicates of each entry in the order defined by the
``$priority`` field in ``Notice::PolicyItem`` records. The valid values are
0-10 with 10 being earliest evaluated. If ``$priority`` is omitted, the
default priority is 5.
Behavior Modification Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are a set of fields in the ``Notice::PolicyItem`` record type that
indicate ways that either the notice or notice processing should be modified
if the predicate field (``$pred``) evaluated to true (``T``). Those fields are
explained in more detail in the following table.
.. list-table::
:widths: 20 30 20
:header-rows: 1
* - Field
- Description
- Example
* - ``$action=<Notice::Action>``
- Each Notice::PolicyItem can have a single action applied to the notice
with this field.
- ``$action = Notice::ACTION_EMAIL``
* - ``$suppress_for=<interval>``
- This field makes it possible for a user to modify the behavior of the
notice framework's automated suppression of intrinsically similar
notices. More information about the notice framework's automated
suppression can be found in the `Automated Suppression`_ section of
this document.
- ``$suppress_for = 10mins``
* - ``$halt=<bool>``
- This field can be used for modification of the notice policy
evaluation. To stop processing of notice policy items before
evaluating all of them, set this field to ``T`` and make the ``$pred``
field return ``T``. ``Notice::PolicyItem`` records defined at a higher
priority as defined by the ``$priority`` field will still be evaluated
but those at a lower priority won't.
- ``$halt = T``
For example, a single ``Notice::PolicyItem`` can combine an action with
an explicit priority:
.. code:: bro
redef Notice::policy += {
[$pred(n: Notice::Info) = {
return n$note == SSH::Login && n$id$resp_h == 10.0.0.1;
},
$action = Notice::ACTION_EMAIL,
$priority=5]
};
Notice Policy Shortcuts
***********************
Although the notice framework provides a great deal of flexibility and
configurability, there are many times when the full expressiveness isn't
needed and actually becomes a hindrance to achieving results. The
framework provides a default ``Notice::policy`` suite as a way of giving
users shortcuts to easily apply many common actions to notices.
These are implemented as sets and tables indexed with a
``Notice::Type`` enum value. The following table shows and describes
all of the variables available for shortcut configuration of the notice
framework.
.. list-table::
:widths: 32 40
:header-rows: 1
* - Variable name
- Description
* - Notice::ignored_types
- Adding a ``Notice::Type`` to this set results in the notice
being ignored. It won't have any other action applied to it, not even
``Notice::ACTION_LOG``.
* - Notice::emailed_types
- Adding a ``Notice::Type`` to this set results in
``Notice::ACTION_EMAIL`` being applied to the notices of that type.
* - Notice::alarmed_types
- Adding a Notice::Type to this set results in
``Notice::ACTION_ALARM`` being applied to the notices of that type.
* - Notice::not_suppressed_types
- Adding a ``Notice::Type`` to this set results in that notice no longer
undergoing the normal notice suppression that would take place. Be
careful when using this in production; it could result in a dramatic
increase in the number of notices being processed.
* - Notice::type_suppression_intervals
- This is a table indexed on ``Notice::Type`` and yielding an interval.
It can be used as an easy way to extend the default suppression
interval for an entire ``Notice::Type`` without having to create a
whole ``Notice::policy`` entry and setting the ``$suppress_for``
field.
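Putting these shortcuts together, a local policy script might contain entries like the following (the notice types are placeholders picked for illustration):

.. code:: bro

    # Drop these notices entirely, including ACTION_LOG.
    redef Notice::ignored_types += { SSL::Invalid_Server_Cert };

    # Email and alarm on these types, respectively.
    redef Notice::emailed_types += { SSH::Login };
    redef Notice::alarmed_types += { SSH::Password_Guessing };

    # Extend the default suppression interval for one notice type.
    redef Notice::type_suppression_intervals += { [SSH::Login] = 1hr };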
Raising Notices
---------------
A script should raise a notice for any occurrence that a user may want to be
notified about or take action on. For example, whenever the base SSH
analysis script sees an SSH session that it heuristically guesses to be a
successful login, it raises a notice of the type ``SSH::Login``. The code in
the base SSH analysis script looks like this:
.. code:: bro
NOTICE([$note=SSH::Login,
$msg="Heuristically detected successful SSH login.",
$conn=c]);
``NOTICE`` is a normal function in the global namespace which wraps a function
within the ``Notice`` namespace. It takes a single argument of the
``Notice::Info`` record type. The most common fields used when raising notices
are described in the following table:
.. list-table::
:widths: 32 40
:header-rows: 1
* - Field name
- Description
* - ``$note``
- This field is required and is an enum value which represents the
notice type.
* - ``$msg``
- This is a human readable message which is meant to provide more
information about this particular instance of the notice type.
* - ``$sub``
- This is a sub-message which is meant for human readability but will
frequently also be used to contain data meant to be matched with the
``Notice::policy``.
* - ``$conn``
- If a connection record is available when the notice is being raised
and the notice represents some attribute of the connection the
connection record can be given here. Other fields such as ``$id`` and
``$src`` will automatically be populated from this value.
* - ``$id``
- If a conn_id record is available when the notice is being raised and
the notice represents some attribute of the connection, the conn_id
record can be given here. Other fields such as ``$src`` will automatically be
populated from this value.
* - ``$src``
- If the notice represents an attribute of a single host then it's
possible that only this field should be filled out to represent the
host that is being "noticed".
* - ``$n``
- This normally represents a number if the notice has to do with some
number. It's most frequently used for numeric tests in the
``Notice::policy`` for making policy decisions.
* - ``$identifier``
- This represents a unique identifier for this notice. This field is
described in more detail in the `Automated Suppression`_ section.
* - ``$suppress_for``
- This field can be set if there is a natural suppression interval for
the notice that may be different than the default value. The value set
to this field can also be modified by a user's ``Notice::policy`` so
the value is not set permanently and unchangeably.
When writing Bro scripts which raise notices, some thought should be given to
what the notice represents and what data should be provided to give a consumer
of the notice the best information about the notice. If the notice is
representative of many connections and is an attribute of a host (e.g. a
scanning host) it probably makes most sense to fill out the ``$src`` field and
not give a connection or conn_id. If a notice is representative of a
connection attribute (e.g. an apparent SSH login) then it makes sense to fill
out either ``$conn`` or ``$id`` based on the data that is available when the
notice is raised. Using care when inserting data into a notice will make later
analysis easier: include only the data needed to fully represent the
occurrence that raised the notice. For example, if complete connection
information is attached to a notice about an expiring SSL server certificate,
the logs will be very confusing, because the connection that the certificate
was detected on is a side topic to the fact that an expired certificate was
detected. It's possible in many cases that two or more separate notices may
need to be generated. As an example, one could be for the detection of the
expired SSL certificate and another for the client deciding to go ahead with
the connection despite the expired certificate.
Automated Suppression
---------------------
The notice framework supports suppression for notices if the author of the
script that is generating the notice has indicated to the notice framework how
to identify notices that are intrinsically the same. Identification of these
"intrinsically duplicate" notices is implemented with an optional field in
``Notice::Info`` records named ``$identifier`` which is a simple string.
If the ``$identifier`` and ``$type`` fields are the same for two notices, the
notice framework actually considers them to be the same thing and can use that
information to suppress duplicates for a configurable period of time.
.. note::
If the ``$identifier`` is left out of a notice, no notice suppression
takes place due to the framework's inability to identify duplicates. This
could be completely legitimate usage if no notices could ever be
considered to be duplicates.
The ``$identifier`` field is typically comprised of several pieces of data
related to the notice that when combined represent a unique instance of that
notice. Here is an example of the script
``policy/protocols/ssl/validate-certs.bro`` raising a notice for session
negotiations where the certificate or certificate chain did not validate
successfully against the available certificate authority certificates.
.. code:: bro
NOTICE([$note=SSL::Invalid_Server_Cert,
$msg=fmt("SSL certificate validation failed with (%s)", c$ssl$validation_status),
$sub=c$ssl$subject,
$conn=c,
$identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$validation_status,c$ssl$cert_hash)]);
In the above example you can see that the ``$identifier`` field contains a
string that is built from the responder IP address and port, the validation
status message, and the MD5 sum of the server certificate. Those fields in
particular are chosen because different SSL certificates could be seen on any
port of a host, certificates could fail validation for different reasons, and
multiple server certificates could be used on that combination of IP address
and port with the ``server_name`` SSL extension (explaining the addition of
the MD5 sum of the certificate). The result is that if a certificate fails
validation and all four pieces of data match (IP address, port, validation
status, and certificate hash) that particular notice won't be raised again for
the default suppression period.
Setting the ``$identifier`` field is left to those raising notices because
it's assumed that the script author who is raising the notice understands the
full problem set and edge cases of the notice which may not be readily
apparent to users. If users don't want the suppression to take place or simply
want a different interval, they can always modify it with the
``Notice::policy``.
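For instance, a user who wants a longer suppression window for a single notice type could add a ``Notice::policy`` entry along these lines (a sketch; adjust the type and interval to taste):

.. code:: bro

    redef Notice::policy += {
        [$pred(n: Notice::Info) = {
            return n$note == SSL::Invalid_Server_Cert;
        },
        # Suppress duplicates of this type for a day instead of the default.
        $suppress_for = 1day]
    };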
Extending Notice Framework
--------------------------
Adding Custom Notice Actions
****************************
Extending Notice Emails
***********************
Cluster Considerations
----------------------
.. _CMake: http://www.cmake.org
.. _SWIG: http://www.swig.org
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://mxcl.github.com/homebrew
=================
Quick Start Guide
=================
.. class:: opening
The short story for getting Bro up and running in a simple configuration
for analysis of either live traffic from a network interface or a packet
capture trace file.
.. contents::
Installation
============
Bro works on most modern, Unix-based systems and requires no custom
hardware. It can be downloaded in either pre-built binary package or
source code forms.
Pre-Built Binary Release Packages
---------------------------------
See the `downloads page <{{docroot}}/download/index.html>`_ for currently
supported/targeted platforms.
* RPM
.. console::
> sudo yum localinstall Bro-all*.rpm
* DEB
.. console::
> sudo gdebi Bro-all-*.deb
* MacOS Disk Image with Installer
Just open the ``Bro-all-*.dmg`` and then run the ``.pkg`` installer.
Everything installed by the package will go into ``/opt/bro``.
The primary install prefix for binary packages is ``/opt/bro``.
Non-MacOS packages that include BroControl also put variable/runtime
data (e.g. Bro logs) in ``/var/opt/bro``.
Building From Source
--------------------
Required Dependencies
~~~~~~~~~~~~~~~~~~~~~
* RPM/RedHat-based Linux:
.. console::
> sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig
* DEB/Debian-based Linux:
.. console::
> sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig
* FreeBSD
Most required dependencies should come with a minimal FreeBSD install
except for the following.
.. console::
> sudo pkg_add -r cmake swig bison python
* Mac OS X
Snow Leopard (10.6) comes with all required dependencies except for CMake_.
Lion (10.7) comes with all required dependencies except for CMake_ and SWIG_.
Distributions of these dependencies can be obtained from the project websites
linked above, but they're also likely available from your preferred Mac OS X
package management system (e.g. MacPorts_, Fink_, or Homebrew_).
Note that the MacPorts ``swig`` package may not include any specific
language support, so you may need to also install ``swig-ruby`` and
``swig-python``.
Optional Dependencies
~~~~~~~~~~~~~~~~~~~~~
Bro can use libmagic for identifying file types, libGeoIP for geo-locating
IP addresses, libz for (de)compression during analysis and communication,
and sendmail for sending emails.
* RPM/RedHat-based Linux:
.. console::
> sudo yum install zlib-devel file-devel GeoIP-devel sendmail
* DEB/Debian-based Linux:
.. console::
> sudo apt-get install zlib1g-dev libmagic-dev libgeoip-dev sendmail
* Ports-based FreeBSD
.. console::
> sudo pkg_add -r GeoIP
libz, libmagic, and sendmail are typically already available.
* Mac OS X
Vanilla OS X installations don't ship with libmagic or libGeoIP, but
if installed from your preferred package management system (e.g. MacPorts,
Fink, or Homebrew), they should be automatically detected and Bro will compile
against them.
Additional steps may be needed to `get the right GeoIP database
<{{git('bro:doc/geoip.rst')}}>`_.
Compiling Bro Source Code
~~~~~~~~~~~~~~~~~~~~~~~~~
Bro releases are bundled into source packages for convenience and
available from the `downloads page <{{docroot}}/download/index.html>`_.
The latest Bro development versions are obtainable through git repositories
hosted at `git.bro-ids.org <http://git.bro-ids.org>`_. See our `git development
documentation <{{docroot}}/development/process.html>`_ for comprehensive
information on Bro's use of git revision control, but the short story for
downloading the full source code experience for Bro via git is:
.. console::
git clone --recursive git://git.bro-ids.org/bro
.. note:: If you choose to clone the ``bro`` repository non-recursively for
a "minimal Bro experience", be aware that compiling it depends on
BinPAC, which has its own ``binpac`` repository. Either install it
first or initialize/update the cloned ``bro`` repository's
``aux/binpac`` submodule.
See the ``INSTALL`` file included with the source code for more information
on compiling, but this is the typical way to build and install from source
(of course, changing the value of the ``--prefix`` option to point to the
desired root install path):
.. console::
> ./configure --prefix=/desired/install/path
> make
> make install
The default installation prefix is ``/usr/local/bro``, which would typically
require root privileges when doing the ``make install``.
Configure the Run-Time Environment
----------------------------------
Just remember that you may need to adjust your ``PATH`` environment variable
according to the platform/shell/package you're using. For example:
Bourne-Shell Syntax:
.. console::
> export PATH=/usr/local/bro/bin:$PATH
C-Shell Syntax:
.. console::
> setenv PATH /usr/local/bro/bin:$PATH
Or substitute ``/opt/bro/bin`` instead if you installed from a binary package.
Using BroControl
================
BroControl is an interactive shell for easily operating/managing Bro
installations on a single system or even across multiple systems in a
traffic-monitoring cluster.
.. note:: Below, ``$PREFIX`` is used to reference the Bro installation
root directory.
A Minimal Starting Configuration
--------------------------------
These are the basic configuration changes to make for a minimal BroControl installation
that will manage a single Bro instance on the ``localhost``:
1) In ``$PREFIX/etc/node.cfg``, set the right interface to monitor.
2) In ``$PREFIX/etc/networks.cfg``, comment out the default settings and add
the networks that Bro will consider local to the monitored environment.
3) In ``$PREFIX/etc/broctl.cfg``, change the ``MailTo`` email address to a
desired recipient and the ``LogRotationInterval`` to a desired log
archival frequency.
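As a sketch, a minimal standalone ``node.cfg`` might look like this (the interface name ``eth0`` is an assumption; use your system's actual capture interface)::

    # $PREFIX/etc/node.cfg -- single-system monitoring
    [bro]
    type=standalone
    host=localhost
    # Replace with the interface to monitor.
    interface=eth0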
Now start the BroControl shell like:
.. console::
> broctl
Since this is the first-time use of the shell, perform an initial installation
of the BroControl configuration:
.. console::
[BroControl] > install
Then start up a Bro instance:
.. console::
[BroControl] > start
If there are errors while trying to start the Bro instance, you can
view the details with the ``diag`` command. If started successfully,
the Bro instance will begin analyzing traffic according to a default
policy and output the results in ``$PREFIX/logs``.
.. note:: The user starting BroControl needs permission to capture
network traffic. If you are not root, you may need to grant further
privileges to the account you're using; see the `FAQ
<{{docroot}}/documentation/faq.html>`_. Also, if it
looks like Bro is not seeing any traffic, check out the FAQ entry on
checksum offloading.
You can leave it running for now, but to stop this Bro instance you would do:
.. console::
[BroControl] > stop
We also recommend inserting the following entry into ``crontab``:
.. console::
0-59/5 * * * * $PREFIX/bin/broctl cron
This will perform a number of regular housekeeping tasks, including
verifying that the process is still running (and restarting it in
case of any abnormal termination).
Browsing Log Files
------------------
By default, logs are written out in human-readable (ASCII) format and
data is organized into columns (tab-delimited). Logs that are part of
the current rotation interval are accumulated in
``$PREFIX/logs/current/`` (if Bro is not running, the directory will
be empty). For example, the ``http.log`` contains the results of Bro
HTTP protocol analysis. Here are the first few columns of
``http.log``::
# ts uid orig_h orig_p resp_h resp_p
1311627961.8 HSH4uV8KVJg 192.168.1.100 52303 192.150.187.43 80
Logs that deal with analysis of a network protocol will often start like this:
a timestamp, a unique connection identifier (UID), and a connection 4-tuple
(originator host/port and responder host/port). The UID can be used to
identify all logged activity (possibly across multiple log files) associated
with a given connection 4-tuple over its lifetime.
The remaining columns of protocol-specific logs then detail the
protocol-dependent activity that's occurring. E.g. ``http.log``'s next few
columns (shortened for brevity) show a request to the root of the Bro website::
# method host uri referrer user_agent
GET bro-ids.org / - <...>Chrome/12.0.742.122<...>
Some logs are worth explicit mention:
``weird.log``
Contains unusual/exceptional activity that can indicate
malformed connections, traffic that doesn't conform to a particular
protocol, malfunctioning/misconfigured hardware, or even an attacker
attempting to avoid/confuse a sensor. Without context, it is hard to
judge whether this category of activity is interesting, so that
determination is left up to the user to configure.
``notice.log``
Identifies specific activity that Bro recognizes as
potentially interesting, odd, or bad. In Bro-speak, such
activity is called a "notice".
By default, ``BroControl`` regularly takes all the logs from
``$PREFIX/logs/current`` and archives/compresses them to a directory
named by date, e.g. ``$PREFIX/logs/2011-10-06``. The frequency at
which this is done can be configured via the ``LogRotationInterval``
option in ``$PREFIX/etc/broctl.cfg``.
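As a sketch, assuming the interval is given in seconds (check the comments
in ``broctl.cfg`` for the authoritative format), a daily rotation could
look like this::

    # Hypothetical broctl.cfg fragment: rotate logs once per day.
    LogRotationInterval = 86400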
Deployment Customization
------------------------
The goal of most Bro *deployments* may be to send email alarms when a network
event requires human intervention/investigation, but sometimes that conflicts
with Bro's goal as a *distribution* to remain policy and site neutral -- the
events on one network may be less noteworthy than the same events on another.
As a result, deploying Bro can be an iterative process of
updating its policy to take different actions for events that are noticed, and
using its scripting language to programmatically extend traffic analysis
in a precise way.
One of the first steps to take in customizing Bro might be to get familiar
with the notices it can generate by default and either tone down or escalate
the action that's taken when specific ones occur.
Let's say that we've been looking at the ``notice.log`` for a bit and see two
changes we want to make:
1) ``SSL::Invalid_Server_Cert`` (found in the ``note`` column) is one type of
notice that means an SSL connection was established and the server's
certificate couldn't be validated using Bro's default trust roots, but
we want to ignore it.
2) ``SSH::Login`` is a notice type that is triggered when an SSH connection
attempt looks like it may have been successful, and we want email when
that happens, but only for certain servers.
So we've defined *what* we want to do, but need to know *where* to do it.
The answer is to use a script written in the Bro programming language, so
let's do a quick intro to Bro scripting.
Bro Scripts
~~~~~~~~~~~
Bro ships with many pre-written scripts that are highly customizable
to support traffic analysis for your specific environment. By
default, these will be installed into ``$PREFIX/share/bro`` and can be
identified by the use of a ``.bro`` file name extension. These files
should **never** be edited directly as changes will be lost when
upgrading to newer versions of Bro. The exception to this rule is the
directory ``$PREFIX/share/bro/site`` where local site-specific files
can be put without fear of being clobbered later. The other main
script directories under ``$PREFIX/share/bro`` are ``base`` and
``policy``. By default, Bro automatically loads all scripts under
``base`` (unless the ``-b`` command line option is supplied), which
deal either with collecting basic/useful state about network
activities or providing frameworks/utilities that extend Bro's
functionality without any performance cost. Scripts under the
``policy`` directory may be more situational or costly, and so users
must explicitly choose if they want to load them.
The main entry point for the default analysis configuration of a standalone
Bro instance managed by BroControl is the ``$PREFIX/share/bro/site/local.bro``
script. So we'll be adding to that in the following sections, but first
we have to figure out what to add.
Redefining Script Option Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many simple customizations just require you to redefine a variable
from a standard Bro script with your own value, using Bro's ``redef``
operator.
The typical way a standard Bro script advertises tweak-able options to users
is by defining variables with the ``&redef`` attribute and ``const`` qualifier.
A redefinable constant might seem strange, but what it really means is that
the variable's value cannot change at run-time, while its initial value can
still be modified via the ``redef`` operator at parse-time.
So let's continue on our path to modify the behavior for the two SSL
and SSH notices. Looking at
`$PREFIX/share/bro/base/frameworks/notice/main.bro
<{{autodoc_bro_scripts}}/scripts/base/frameworks/notice/main.html>`_,
we see that it advertises:
.. code:: bro
module Notice;
export {
...
## Ignored notice types.
const ignored_types: set[Notice::Type] = {} &redef;
}
That's exactly what we want to do for the SSL notice. So add to ``local.bro``:
.. code:: bro
redef Notice::ignored_types += { SSL::Invalid_Server_Cert };
.. note:: The ``Notice`` namespace scoping is necessary here because the
variable was declared and exported inside the ``Notice`` module, but is
being referenced from outside of it. Variables declared and exported
inside a module do not have to be scoped if referring to them while still
inside the module.
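To make the scoping rule concrete, here is a minimal sketch (the module and
variable names are invented for illustration):

.. code:: bro

    module Example;

    export {
        ## An option advertised to users of this module.
        const watched: set[addr] = {} &redef;
    }

    # Still inside module Example: no scope prefix needed.
    redef watched += { 10.0.0.1 };

    module GLOBAL;

    # Outside the module: the Example:: prefix is required.
    redef Example::watched += { 10.0.0.2 };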
Then go into the BroControl shell to check whether the configuration change
is valid before installing it and then restarting the Bro instance:
.. console::
[BroControl] > check
bro is ok.
[BroControl] > install
removing old policies in /usr/local/bro/spool/policy/site ... done.
removing old policies in /usr/local/bro/spool/policy/auto ... done.
creating policy directories ... done.
installing site policies ... done.
generating standalone-layout.bro ... done.
generating local-networks.bro ... done.
generating broctl-config.bro ... done.
updating nodes ... done.
[BroControl] > restart
stopping bro ...
starting bro ...
Now that the SSL notice is ignored, let's look at how to send an email on
the SSH notice. The notice framework has a similar option called
``emailed_types``, but that can't differentiate between SSH servers, and we
only want email for logins to certain ones. Looking further, the
``PolicyItem`` record and ``policy`` set turn out to be what actually
implement the simple functionality of ``ignored_types`` and
``emailed_types``, and they are extensible: both the condition and the
action taken on notices can be user-defined.
In ``local.bro``, let's add a new ``PolicyItem`` record to the ``policy`` set
that only takes the email action for SSH logins to a defined set of servers:
.. code:: bro
const watched_servers: set[addr] = {
192.168.1.100,
192.168.1.101,
192.168.1.102,
} &redef;
redef Notice::policy += {
[$action = Notice::ACTION_EMAIL,
$pred(n: Notice::Info) =
{
return n$note == SSH::Login && n$id$resp_h in watched_servers;
}
]
};
You'll just have to trust the syntax for now, but what we've done is
first declare our own variable to hold a set of watched addresses,
``watched_servers``, and then add a record to the policy that will generate
an email whenever the predicate function evaluates to true: that is, whenever
the notice type is an SSH login and the responding host stored
inside the ``Info`` record's connection field is in the set of watched servers.
.. note:: Record field access uses the ``$`` character
   instead of a ``.`` as might be expected from other languages, in
   order to avoid ambiguity with the built-in address type's use of ``.``
   in IPv4 dotted-decimal representations.
Remember, to finalize that configuration change perform the ``check``,
``install``, ``restart`` commands in that order inside the BroControl shell.
Next Steps
----------
By this point, we've learned how to set up the most basic Bro instance and
tweak the most basic options. Here's some suggestions on what to explore next:
* We only looked at how to change options declared in the notice framework,
there's many more options to look at in other script packages.
* Look at the scripts in ``$PREFIX/share/bro/policy`` for further ones
you may want to load.
* Reading the code of scripts that ship with Bro is also a great way to gain
understanding of the language and how you can start writing your own custom
analysis.
* Review the `FAQ <{{docroot}}/documentation/faq.html>`_.
* Check out more `documentation <{{docroot}}/documentation/index.html>`_.
* Continue reading below for another mini-tutorial on using Bro as a standalone
command-line utility.
Bro, the Command-Line Utility
=============================
If you prefer not to use BroControl (e.g. don't need its automation and
management features), here's how to directly control Bro for your analysis
activities.
Monitoring Live Traffic
-----------------------
Analyzing live traffic from an interface is simple:
.. console::
> bro -i en0 <list of scripts to load>
``en0`` can be replaced by the interface of your choice and for the list of
scripts, you can just use "all" for now to perform all the default analysis
that's available.
Bro will output log files into the working directory.
.. note:: The `FAQ <{{docroot}}/documentation/faq.html>`_ entries about
capturing as an unprivileged user and checksum offloading are particularly
relevant at this point.
To use the site-specific ``local.bro`` script, just add it to the
command-line:
.. console::
> bro -i en0 local
This will cause Bro to print a warning that the
``Site::local_nets`` variable has not been configured. You can supply this
information on the command line like this (substitute your "local" subnets
for the example subnets):
.. console::
> bro -i en0 local "Site::local_nets += { 1.2.3.0/24, 5.6.7.0/24 }"
Reading Packet Capture (pcap) Files
-----------------------------------
Capturing packets from an interface and writing them to a file can be done
like this:
.. console::
> sudo tcpdump -i en0 -s 0 -w mypackets.trace
Where ``en0`` can be replaced by the correct interface for your system as
shown by e.g. ``ifconfig``. (The ``-s 0`` argument tells it to capture
whole packets; in cases where it's not supported use ``-s 65535`` instead).
After capturing traffic for a while, stop ``tcpdump`` (with ctrl-c)
and tell Bro to perform all the default analysis on the capture:
.. console::
> bro -r mypackets.trace
Bro will output log files into the working directory.
If you are interested in more detection, you can again load the ``local``
script that we include as a suggested configuration:
.. console::
> bro -r mypackets.trace local
Telling Bro Which Scripts to Load
---------------------------------
A command-line invocation of Bro typically looks like:
.. console::
> bro <options> <policies...>
Where the last arguments are the specific policy scripts that this Bro
instance will load. These arguments don't have to include the ``.bro``
file extension, and if the corresponding script resides under the default
installation path, ``$PREFIX/share/bro``, then it requires no path
qualification. Further, a directory of scripts can be specified as
an argument to be loaded as a "package" if it contains a ``__load__.bro``
script that defines the scripts that are part of the package.
This example does all of the base analysis (primarily protocol
logging) and adds SSL certificate validation.
.. console::
> bro -r mypackets.trace protocols/ssl/validate-certs
You might notice that a script you load from the command line uses the
``@load`` directive in the Bro language to declare dependence on other scripts.
This directive is similar to the ``#include`` of C/C++, except the semantics
are "load this script if it hasn't already been loaded".
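For instance, the top of a custom script might look like this (the loaded
paths are only examples of dependencies a script could declare):

.. code:: bro

    # Load the base HTTP scripts; a no-op if they are already loaded.
    @load base/protocols/http

    # A relative path loads a script (or package) next to this one.
    @load ./my-helpers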
.. note:: If one wants Bro to be able to load scripts that live outside the
default directories in Bro's installation root, the ``BROPATH`` environment
variable will need to be extended to include all the directories that need
to be searched for scripts. See the default search path by doing
``bro --help``.
@ -42,6 +42,7 @@ rest_target(${psd} base/frameworks/notice/actions/add-geodata.bro)
rest_target(${psd} base/frameworks/notice/actions/drop.bro)
rest_target(${psd} base/frameworks/notice/actions/email_admin.bro)
rest_target(${psd} base/frameworks/notice/actions/page.bro)
rest_target(${psd} base/frameworks/notice/actions/pp-alarms.bro)
rest_target(${psd} base/frameworks/notice/cluster.bro)
rest_target(${psd} base/frameworks/notice/extend-email/hostnames.bro)
rest_target(${psd} base/frameworks/notice/main.bro)
@ -131,7 +132,6 @@ rest_target(${psd} policy/protocols/ssl/extract-certs-pem.bro)
rest_target(${psd} policy/protocols/ssl/known-certs.bro)
rest_target(${psd} policy/protocols/ssl/validate-certs.bro)
rest_target(${psd} policy/tuning/defaults/packet-fragments.bro)
rest_target(${psd} policy/tuning/defaults/remove-high-volume-notices.bro)
rest_target(${psd} policy/tuning/defaults/warnings.bro)
rest_target(${psd} policy/tuning/track-all-assets.bro)
rest_target(${psd} site/local-manager.bro)
doc/scripts/example.rst (new file, 291 lines)
@ -0,0 +1,291 @@
.. Automatically generated. Do not edit.
example.bro
===========
:download:`Original Source File <example.bro>`
Overview
--------
This is an example script that demonstrates how to document. Comments
of the form ``##!`` are for the script summary. The contents of
these comments are transferred directly into the auto-generated
`reStructuredText <http://docutils.sourceforge.net/rst.html>`_
(reST) document's summary section.
.. tip:: You can embed directives and roles within ``##``-stylized comments.
:Imports: :doc:`policy/frameworks/software/vulnerable </scripts/policy/frameworks/software/vulnerable>`
Summary
~~~~~~~
Options
#######
============================================================================ ======================================
:bro:id:`Example::an_option`: :bro:type:`set` :bro:attr:`&redef` add documentation for "an_option" here
:bro:id:`Example::option_with_init`: :bro:type:`interval` :bro:attr:`&redef`
============================================================================ ======================================
State Variables
###############
=========================================================================== =======================================
:bro:id:`Example::a_var`: :bro:type:`bool` put some documentation for "a_var" here
:bro:id:`Example::var_with_attr`: :bro:type:`count` :bro:attr:`&persistent`
:bro:id:`Example::var_without_explicit_type`: :bro:type:`string`
=========================================================================== =======================================
Types
#####
====================================================== ==========================================================
:bro:type:`Example::SimpleEnum`: :bro:type:`enum` documentation for "SimpleEnum"
goes here.
:bro:type:`Example::SimpleRecord`: :bro:type:`record` general documentation for a type "SimpleRecord"
goes here.
:bro:type:`Example::ComplexRecord`: :bro:type:`record` general documentation for a type "ComplexRecord" goes here
:bro:type:`Example::Info`: :bro:type:`record` An example record to be used with a logging stream.
====================================================== ==========================================================
Events
######
================================================= =============================================================
:bro:id:`Example::an_event`: :bro:type:`event` Summarize "an_event" here.
:bro:id:`Example::log_example`: :bro:type:`event` This is a declaration of an example event that can be used in
logging streams and is raised once for each log entry.
================================================= =============================================================
Functions
#########
=============================================== =======================================
:bro:id:`Example::a_function`: :bro:type:`func` Summarize purpose of "a_function" here.
=============================================== =======================================
Redefinitions
#############
===================================================== ========================================
:bro:type:`Log::ID`: :bro:type:`enum`
:bro:type:`Example::SimpleEnum`: :bro:type:`enum` document the "SimpleEnum" redef here
:bro:type:`Example::SimpleRecord`: :bro:type:`record` document the record extension redef here
===================================================== ========================================
Namespaces
~~~~~~~~~~
.. bro:namespace:: Example
Notices
~~~~~~~
:bro:type:`Notice::Type`
:Type: :bro:type:`enum`
.. bro:enum:: Example::Notice_One Notice::Type
any number of this type of comment
will document "Notice_One"
.. bro:enum:: Example::Notice_Two Notice::Type
any number of this type of comment
will document "Notice_Two"
.. bro:enum:: Example::Notice_Three Notice::Type
.. bro:enum:: Example::Notice_Four Notice::Type
Public Interface
----------------
Options
~~~~~~~
.. bro:id:: Example::an_option
:Type: :bro:type:`set` [:bro:type:`addr`, :bro:type:`addr`, :bro:type:`string`]
:Attributes: :bro:attr:`&redef`
:Default: ``{}``
add documentation for "an_option" here
.. bro:id:: Example::option_with_init
:Type: :bro:type:`interval`
:Attributes: :bro:attr:`&redef`
:Default: ``10.0 msecs``
State Variables
~~~~~~~~~~~~~~~
.. bro:id:: Example::a_var
:Type: :bro:type:`bool`
put some documentation for "a_var" here
.. bro:id:: Example::var_with_attr
:Type: :bro:type:`count`
:Attributes: :bro:attr:`&persistent`
.. bro:id:: Example::var_without_explicit_type
:Type: :bro:type:`string`
:Default: ``"this works"``
Types
~~~~~
.. bro:type:: Example::SimpleEnum
:Type: :bro:type:`enum`
.. bro:enum:: Example::ONE Example::SimpleEnum
and more specific info for "ONE"
can span multiple lines
.. bro:enum:: Example::TWO Example::SimpleEnum
or more info like this for "TWO"
can span multiple lines
.. bro:enum:: Example::THREE Example::SimpleEnum
documentation for "SimpleEnum"
goes here.
.. bro:type:: Example::SimpleRecord
:Type: :bro:type:`record`
field1: :bro:type:`count`
counts something
field2: :bro:type:`bool`
toggles something
general documentation for a type "SimpleRecord"
goes here.
.. bro:type:: Example::ComplexRecord
:Type: :bro:type:`record`
field1: :bro:type:`count`
counts something
field2: :bro:type:`bool`
toggles something
field3: :bro:type:`Example::SimpleRecord`
msg: :bro:type:`string` :bro:attr:`&default` = ``"blah"`` :bro:attr:`&optional`
attributes are self-documenting
general documentation for a type "ComplexRecord" goes here
.. bro:type:: Example::Info
:Type: :bro:type:`record`
ts: :bro:type:`time` :bro:attr:`&log`
uid: :bro:type:`string` :bro:attr:`&log`
status: :bro:type:`count` :bro:attr:`&log` :bro:attr:`&optional`
An example record to be used with a logging stream.
Events
~~~~~~
.. bro:id:: Example::an_event
:Type: :bro:type:`event` (name: :bro:type:`string`)
Summarize "an_event" here.
Give more details about "an_event" here.
:param name: describe the argument here
.. bro:id:: Example::log_example
:Type: :bro:type:`event` (rec: :bro:type:`Example::Info`)
This is a declaration of an example event that can be used in
logging streams and is raised once for each log entry.
Functions
~~~~~~~~~
.. bro:id:: Example::a_function
:Type: :bro:type:`function` (tag: :bro:type:`string`, msg: :bro:type:`string`) : :bro:type:`string`
Summarize purpose of "a_function" here.
Give more details about "a_function" here.
Separating the documentation of the params/return values with
empty comments is optional, but improves readability of script.
:param tag: function arguments can be described
like this
:param msg: another param
:returns: describe the return type here
Redefinitions
~~~~~~~~~~~~~
:bro:type:`Log::ID`
:Type: :bro:type:`enum`
.. bro:enum:: Example::LOG Log::ID
:bro:type:`Example::SimpleEnum`
:Type: :bro:type:`enum`
.. bro:enum:: Example::FOUR Example::SimpleEnum
and some documentation for "FOUR"
.. bro:enum:: Example::FIVE Example::SimpleEnum
also "FIVE" for good measure
document the "SimpleEnum" redef here
:bro:type:`Example::SimpleRecord`
:Type: :bro:type:`record`
field_ext: :bro:type:`string` :bro:attr:`&optional`
document the extending field here
(or here)
document the record extension redef here
Port Analysis
-------------
:ref:`More Information <common_port_analysis_doc>`
SSL::
[ports={
443/tcp,
562/tcp
}]
Packet Filter
-------------
:ref:`More Information <common_packet_filter_doc>`
Filters added::
[ssl] = tcp port 443,
[nntps] = tcp port 562
@ -11,6 +11,8 @@
# Specific scripts can be blacklisted below when e.g. they currently aren't
# parseable or they just aren't meant to be documented.
export LC_ALL=C # Make sorting stable.
blacklist ()
{
if [[ "$blacklist" == "" ]]; then
doc/signatures.rst (new file, 390 lines)
@ -0,0 +1,390 @@
==========
Signatures
==========
.. class:: opening
Bro relies primarily on its extensive scripting language for
defining and analyzing detection policies. In addition, however,
Bro also provides an independent *signature language* for doing
low-level, Snort-style pattern matching. While signatures are
*not* Bro's preferred detection tool, they sometimes come in handy
and are closer to what many people are familiar with from using
other NIDS. This page gives a brief overview on Bro's signatures
and covers some of their technical subtleties.
.. contents::
:depth: 2
Basics
======
Let's look at an example signature first:
.. code:: bro-sig
signature my-first-sig {
ip-proto == tcp
dst-port == 80
payload /.*root/
event "Found root!"
}
This signature asks Bro to match the regular expression ``.*root`` on
all TCP connections going to port 80. When the signature triggers, Bro
will raise an event ``signature_match`` of the form:
.. code:: bro
event signature_match(state: signature_state, msg: string, data: string)
Here, ``state`` contains more information on the connection that
triggered the match, ``msg`` is the string specified by the
signature's event statement (``Found root!``), and data is the last
piece of payload which triggered the pattern match.
To turn such ``signature_match`` events into actual alarms, you can
load Bro's ``signature.bro`` script. This script contains a default
event handler that raises ``SensitiveSignature`` `Notices
<notices.html>`_ (as well as others; see the beginning of the script).
As signatures are independent of Bro's policy scripts, they are put
into their own file(s). There are two ways to specify which files
contain signatures: By using the ``-s`` flag when you invoke Bro, or
by extending the Bro variable ``signatures_files`` using the ``+=``
operator. If a signature file is given without a path, it is searched for
along the normal ``BROPATH``. The default file name extension is
``.sig``, which Bro appends automatically when necessary.
Signature language
==================
Let's look at the format of a signature more closely. Each individual
signature has the format ``signature <id> { <attributes> }``. ``<id>``
is a unique label for the signature. There are two types of
attributes: *conditions* and *actions*. The conditions define when the
signature matches, while the actions declare what to do in the case of
a match. Conditions can be further divided into four types: *header*,
*content*, *dependency*, and *context*. We discuss each of these in more
detail below.
Conditions
----------
Header Conditions
~~~~~~~~~~~~~~~~~
Header conditions limit the applicability of the signature to a subset
of traffic that contains matching packet headers. For TCP, this match
is performed only for the first packet of a connection. For other
protocols, it is done on each individual packet.
There are pre-defined header conditions for some of the most used
header fields. All of them generally have the format ``<keyword> <cmp>
<value-list>``, where ``<keyword>`` names the header field; ``cmp`` is
one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``; and
``<value-list>`` is a list of comma-separated values to compare
against. The following keywords are defined:
``src-ip``/``dst-ip <cmp> <address-list>``
Source and destination address, respectively. Addresses can be
given as IP addresses or CIDR masks.
``src-port``/``dst-port`` ``<int-list>``
Source and destination port, respectively.
``ip-proto tcp|udp|icmp``
IP protocol.
For lists of multiple values, they are sequentially compared against
the corresponding header field. If at least one of the comparisons
evaluates to true, the whole header condition matches (exception: with
``!=``, the header condition only matches if all values differ).
In addition to these pre-defined header keywords, a general header
condition can be defined as
.. code:: bro-sig
header <proto>[<offset>:<size>] [& <integer>] <cmp> <value-list>
This compares the value found at the given position of the packet
header with a list of values. ``offset`` defines the position of the
value within the header of the protocol defined by ``proto`` (which
can be ``ip``, ``tcp``, ``udp`` or ``icmp``). ``size`` specifies the size
of the value in bytes and is either 1, 2, or 4. If the
optional ``& <integer>`` is given, the packet's value is first masked
with the integer before it is compared to the value-list. ``cmp`` is
one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``. ``value-list`` is
a list of comma-separated integers similar to those described above.
The integers within the list may be followed by an additional ``/
mask`` where ``mask`` is a value from 0 to 32. This corresponds to the
CIDR notation for netmasks and is translated into a corresponding
bitmask applied to the packet's value prior to the comparison (similar
to the optional ``& integer``).
Putting it all together, here is an example condition that is
equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``:
.. code:: bro-sig
header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24
Internally, the predefined header conditions are in fact just
shortcuts that are mapped onto such generic conditions.
Content Conditions
~~~~~~~~~~~~~~~~~~
Content conditions are defined by regular expressions. We
differentiate two kinds of content conditions: first, the expression
may be declared with the ``payload`` statement, in which case it is
matched against the raw payload of a connection (for reassembled TCP
streams) or of each packet (for ICMP, UDP, and non-reassembled TCP).
Second, it may be prefixed with an analyzer-specific label, in which
case the expression is matched against the data as extracted by the
corresponding analyzer.
A ``payload`` condition has the form:
.. code:: bro-sig
payload /<regular expression>/
Currently, the following analyzer-specific content conditions are
defined (note that the corresponding analyzer has to be activated by
loading its policy script):
``http-request /<regular expression>/``
The regular expression is matched against decoded URIs of HTTP
requests. Obsolete alias: ``http``.
``http-request-header /<regular expression>/``
The regular expression is matched against client-side HTTP headers.
``http-request-body /<regular expression>/``
The regular expression is matched against client-side bodies of
HTTP requests.
``http-reply-header /<regular expression>/``
The regular expression is matched against server-side HTTP headers.
``http-reply-body /<regular expression>/``
The regular expression is matched against server-side bodies of
HTTP replies.
``ftp /<regular expression>/``
The regular expression is matched against the command line input
of FTP sessions.
``finger /<regular expression>/``
The regular expression is matched against finger requests.
For example, ``http-request /.*(etc/(passwd|shadow))/`` matches any URI
containing either ``etc/passwd`` or ``etc/shadow``. To filter on request
types, e.g. ``GET``, use ``payload /GET /``.
Note that HTTP pipelining (that is, multiple HTTP transactions in a
single TCP connection) has some side effects on signature matches. If
multiple conditions are specified within a single signature, this
signature matches if all conditions are met by any HTTP transaction
(not necessarily always the same!) in a pipelined connection.
Dependency Conditions
~~~~~~~~~~~~~~~~~~~~~
To define dependencies between signatures, there are two conditions:
``requires-signature [!] <id>``
Defines the current signature to match only if the signature given
by ``id`` matches for the same connection. Using ``!`` negates the
condition: The current signature only matches if ``id`` does not
match for the same connection (using this defers the match
decision until the connection terminates).
``requires-reverse-signature [!] <id>``
Similar to ``requires-signature``, but ``id`` has to match for the
opposite direction of the same connection, compared to the current
signature. This allows modeling the notion of requests and
replies.
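As a sketch, a hypothetical pair of signatures could use this to flag a
server reply only when the corresponding client request matched first (the
identifiers and patterns here are invented):

.. code:: bro-sig

    signature my-client-request {
        ip-proto == tcp
        dst-port == 80
        payload /.*admin/
    }

    signature my-server-reply {
        ip-proto == tcp
        src-port == 80
        # Match only if my-client-request matched in the other direction.
        requires-reverse-signature my-client-request
        payload /.*200 OK/
        event "admin page served"
    }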
Context Conditions
~~~~~~~~~~~~~~~~~~
Context conditions pass the match decision on to other components of
Bro. They are only evaluated if all other conditions have already
matched. The following context conditions are defined:
``eval <policy-function>``
The given policy function is called and has to return a boolean
confirming the match. If false is returned, no signature match is
going to be triggered. The function has to be of type ``function
cond(state: signature_state, data: string): bool``. Here,
``data`` may contain the most recent content chunk available at
the time the signature was matched; if no such chunk is available,
``data`` will be the empty string. ``signature_state`` is
defined as follows:
.. code:: bro
type signature_state: record {
id: string; # ID of the signature
conn: connection; # Current connection
is_orig: bool; # True if current endpoint is originator
payload_size: count; # Payload size of the first packet
};
``payload-size <cmp> <integer>``
Compares the integer to the size of the payload of a packet. For
reassembled TCP streams, the integer is compared to the size of
the first in-order payload chunk. Note that the latter is not very
well defined.
``same-ip``
Evaluates to true if the source address of the IP packets equals
its destination address.
``tcp-state <state-list>``
Imposes restrictions on the current TCP state of the connection.
``state-list`` is a comma-separated list of the keywords
``established`` (the three-way handshake has already been
performed), ``originator`` (the current data is sent by the
originator of the connection), and ``responder`` (the current data
is sent by the responder of the connection).
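As a sketch, an ``eval`` condition might defer the final decision to a
script-level function like this (the signature and function names are
invented):

.. code:: bro-sig

    signature my-eval-sig {
        ip-proto == tcp
        dst-port == 80
        payload /.*root/
        # Let a policy function confirm or veto the match.
        eval my_match_check
    }

with the function defined in a Bro script as:

.. code:: bro

    # Confirm the match only for connections originating inside 10.0.0.0/8.
    function my_match_check(state: signature_state, data: string): bool
        {
        return state$conn$id$orig_h in 10.0.0.0/8;
        }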
Actions
-------
Actions define what to do if a signature matches. Currently, there are
two actions defined:
``event <string>``
Raises a ``signature_match`` event. The event handler has the
following type:
.. code:: bro
event signature_match(state: signature_state, msg: string, data: string)
The given string is passed in as ``msg``, and ``data`` is the current
part of the payload that eventually led to the signature
match (this may be empty for signatures without content
conditions).
``enable <string>``
Enables the protocol analyzer ``<string>`` for the matching
connection (``"http"``, ``"ftp"``, etc.). This is used by Bro's
dynamic protocol detection to activate analyzers on the fly.
Things to keep in mind when writing signatures
==============================================
* Each signature is reported at most once for every connection;
  further matches of the same signature are ignored.
* The content conditions perform pattern matching on elements
extracted from an application protocol dialogue. For example, ``http
/.*passwd/`` scans URLs requested within HTTP sessions. The thing to
keep in mind here is that these conditions only perform any matching
when the corresponding application analyzer is actually *active* for
a connection. Note that by default, analyzers are not enabled if the
corresponding Bro script has not been loaded. A good way to
double-check whether an analyzer "sees" a connection is checking its
log file for corresponding entries. If you cannot find the
connection in the analyzer's log, very likely the signature engine
has also not seen any application data.
* As the name indicates, the ``payload`` keyword matches on packet
*payload* only. You cannot use it to match on packet headers; use
the header conditions for that.
* For TCP connections, header conditions are only evaluated for the
*first packet from each endpoint*. If a header condition does not
match the initial packets, the signature will not trigger. Bro
optimizes for the most common application here, which is header
conditions selecting the connections to be examined more closely
with payload statements.
* For UDP and ICMP flows, the payload matching is done on a per-packet
basis; i.e., any content crossing packet boundaries will not be
found. For TCP connections, the matching semantics depend on whether
Bro is *reassembling* the connection (i.e., putting all of a
connection's packets in sequence). By default, Bro reassembles
the first 1K of every TCP connection, which means that within this
window, matches will be found without regard to packet order or
boundaries (i.e., *stream-wise matching*).
* For performance reasons, by default Bro *stops matching* on a
connection after seeing 1K of payload; see the section on options
below for how to change this behaviour. The default was chosen with
Bro's main user of signatures in mind: dynamic protocol detection
works well even when examining just connection heads.
* Regular expressions are implicitly anchored, i.e., they work as if
prefixed with the ``^`` operator. For reassembled TCP connections,
they are anchored at the first byte of the payload *stream*. For all
other connections, they are anchored at the first payload byte of
each packet. To match at arbitrary positions, you can prefix the
regular expression with ``.*``, as done in the examples above.
* To match on non-ASCII characters, Bro's regular expressions support
the ``\x<hex>`` operator. CRs/LFs are not treated specially by the
signature engine and can be matched with ``\r`` and ``\n``,
respectively. Generally, Bro follows `flex's regular expression
syntax
<http://www.gnu.org/software/flex/manual/html_chapter/flex_7.html>`_.
See the DPD signatures in ``policy/sigs/dpd.bro`` for some examples
of fairly complex payload patterns.
* The ``data`` argument of the ``signature_match`` handler might not carry
the full text matched by the regular expression. Bro performs the
matching incrementally as packets come in; when the signature
eventually fires, it can only pass on the most recent chunk of data.
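To illustrate the anchoring semantics, compare these two
(hypothetical) signatures:

.. code:: bro

   # Implicitly anchored: matches only if the reassembled payload
   # stream *begins* with "GET ".
   signature http-get-at-start {
       ip-proto == tcp
       payload /GET /
       event "payload starts with GET"
   }

   # The .* prefix allows a match at any position; \x0d\x0a matches
   # a CRLF anywhere in the data.
   signature passwd-after-crlf {
       ip-proto == tcp
       payload /.*\x0d\x0a.*passwd/
       event "passwd seen after a CRLF"
   }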
Options
=======
The following options control details of Bro's matching process:
``dpd_reassemble_first_packets: bool`` (default: ``T``)
If true, Bro reassembles the beginning of every TCP connection (of
up to ``dpd_buffer_size`` bytes, see below), to facilitate
reliable matching across packet boundaries. If false, only
connections are reassembled for which an application-layer
analyzer gets activated (e.g., by Bro's dynamic protocol
detection).
``dpd_match_only_beginning : bool`` (default: ``T``)
If true, Bro performs packet matching only within the initial
payload window of ``dpd_buffer_size``. If false, it keeps matching
on subsequent payload as well.
``dpd_buffer_size: count`` (default: ``1024``)
Defines the buffer size for the two preceding options. In
addition, this value determines the number of bytes Bro buffers
for each connection in order to activate application analyzers
even after parts of the payload have already passed through. This
is needed by the dynamic protocol detection capability to defer
the decision of which analyzers to use.
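For example, to keep matching beyond the initial window and to
enlarge the buffer, a site script could redefine these options (the
values are illustrative only):

.. code:: bro

   redef dpd_match_only_beginning = F;
   redef dpd_buffer_size = 4096;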
So, how about using Snort signatures with Bro?
==============================================
There was once a script, ``snort2bro``, that automatically converted
Snort signatures into Bro's signature syntax. However, in our
experience this did not turn out to be very useful: by simply reusing
Snort signatures, one cannot benefit from the additional capabilities
that Bro provides; the approaches of the two systems are just too
different. We therefore stopped maintaining the ``snort2bro`` script,
and there are now many newer Snort options which it does not support.
The script is no longer part of the Bro distribution.
doc/upgrade.rst Normal file
@ -0,0 +1,252 @@
=============================
Upgrading From Bro 1.5 to 2.0
=============================
.. class:: opening
This guide details differences between Bro versions 1.5 and 2.0
that may be important for users to know as they work on updating
their Bro deployment/configuration to the later version.
.. contents::
Introduction
============
As the version number jump suggests, Bro 2.0 is a major upgrade and
lots of things have changed. Most importantly, we have rewritten
almost all of Bro's default scripts from scratch, using quite
different structure now and focusing more on operational deployment.
The result is a system that works much better "out of the box", even
without much initial site-specific configuration. The downside is
that 1.x configurations will need to be adapted to work with the new
version. The two rules of thumb are:
(1) If you have written your own Bro scripts
that do not depend on any of the standard scripts formerly
found in ``policy/``, they will most likely just keep working
(although you might want to adapt them to use some of the new
features, like the new logging framework; see below).
(2) If you have custom code that depends on specifics of 1.x
default scripts (including most configuration tuning), that is
unlikely to work with 2.x. We recommend starting with just the
new scripts, and then porting over any customizations
incrementally as necessary (they may be much easier to do now,
or even unnecessary). Send mail to the Bro user mailing list
if you need help.
Below we summarize changes from 1.x to 2.x in more detail. This list
isn't complete; see the `CHANGES <{{git('bro:CHANGES', 'txt')}}>`_ file in the
distribution for the full story.
Default Scripts
===============
Organization
------------
In versions before 2.0, Bro scripts were all maintained in a flat
directory called ``policy/`` in the source tree. This directory is now
renamed to ``scripts/`` and contains major subdirectories ``base/``,
``policy/``, and ``site/``, each of which may also be subdivided
further.
The contents of the new ``scripts/`` directory, like the old flat
``policy/``, still get installed under the ``share/bro``
subdirectory of the installation prefix path, just like in previous
versions. For example, if Bro was compiled like ``./configure
--prefix=/usr/local/bro && make && make install``, then the script
hierarchy can be found in ``/usr/local/bro/share/bro``. The main
subdirectories of that hierarchy are as follows:
- ``base/`` contains all scripts that are loaded by Bro by default
(unless the ``-b`` command line option is used to run Bro in a
minimal configuration). Note that this is a major conceptual change:
rather than not loading anything by default, Bro now uses an
extensive set of default scripts out of the box.
The scripts under this directory generally either accumulate/log
useful state/protocol information for monitored traffic, configure a
default/recommended mode of operation, or provide extra Bro
scripting-layer functionality that has no significant performance cost.
- ``policy/`` contains all scripts that a user will need to explicitly
tell Bro to load. These are scripts that implement
functionality/analysis that not all users may want to use and may have
more significant performance costs. For a new installation, you
should go through these and see what appears useful to load.
- ``site/`` remains a directory that can be used to store locally
developed scripts. It now comes with some preinstalled example
scripts that contain recommended default configurations going beyond
the ``base/`` setup. E.g. ``local.bro`` loads extra scripts from
``policy/`` and does extra tuning. These files can be customized in
place without being overwritten by upgrades/reinstalls, unlike
scripts in other directories.
With version 2.0, the default ``BROPATH`` is set to automatically
search for scripts in ``policy/``, ``site/`` and their parent
directory, but **not** ``base/``. Generally, everything under
``base/`` is loaded automatically, but for users of the ``-b`` option,
it's important to know that loading a script in that directory
requires the extra ``base/`` path qualification. For example, the
following two scripts:
* ``$PREFIX/share/bro/base/protocols/ssl/main.bro``
* ``$PREFIX/share/bro/policy/protocols/ssl/validate-certs.bro``
are referenced from another Bro script like:
.. code:: bro
@load base/protocols/ssl/main
@load protocols/ssl/validate-certs
Notice how ``policy/`` can be omitted as a convenience in the second
case. ``@load`` can now also use relative paths, e.g., ``@load
../main``.
Logging Framework
-----------------
- The logs generated by scripts that ship with Bro are entirely redone
to use a standardized, machine parsable format via the new logging
framework. Generally, the log content has been restructured towards
making it more directly useful to operations. Also, several
analyzers have been significantly extended and thus now log more
information. Take a look at ``ssl.log``.
* A particular format change that may be useful to note is that the
``conn.log`` ``service`` field is derived from DPD instead of
well-known ports (while that was already possible in 1.5, it was
not the default).
* Also, ``conn.log`` now reports the raw number of packets/bytes per
  endpoint.
- The new logging framework makes it possible to extend, customize,
and filter logs very easily. See `the logging framework
<{{git('bro:doc/logging.rst')}}>`_ for more information on usage.
- A common pattern found in the new scripts is to store logging stream
records for protocols inside the ``connection`` records so that
state can be collected until enough is seen to log a coherent unit
of information regarding the activity of that connection. This
state is now frequently seen/accessible in event handlers, for
example, like ``c$<protocol>`` where ``<protocol>`` is replaced by
the name of the protocol. This field is added to the ``connection``
record by ``redef``'ing it in a
``base/protocols/<protocol>/main.bro`` script.
- The logging code has been rewritten internally, with script-level
interface and output backend now clearly separated. While ASCII
logging is still the default, we will add further output types in
the future (binary format, direct database logging).
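As a sketch of the per-protocol state pattern described above (using
HTTP as the example; check ``base/protocols/http/main.bro`` in your
installation for the authoritative definitions):

.. code:: bro

   # The protocol's main.bro extends the connection record with an
   # optional field holding the logging state:
   redef record connection += {
       http: HTTP::Info &optional;
   };

   # Any event handler can then access that state as c$http:
   event http_reply(c: connection, version: string, code: count, reason: string)
       {
       if ( c?$http )
           print c$http;
       }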
Notice Framework
----------------
The way users interact with "notices" has changed significantly in
order to make it easier to define a site policy and more extensible
for adding customized actions. See the `notice framework
<{{git('bro:doc/notice.rst')}}>`_.
New Default Settings
--------------------
- Dynamic Protocol Detection (DPD) is now enabled/loaded by default.
- The default packet filter now examines all packets instead of
dynamically building a filter based on which protocol analysis scripts
are loaded. See ``PacketFilter::all_packets`` for how to revert to old
behavior.
- By default, Bro now sets a libpcap snaplen of 65535. Depending on
the OS, this may have performance implications and you can use the
``--snaplen`` option to change the value.
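For example, to revert to building the capture filter from the loaded
analysis scripts as in 1.x, one can redefine the option mentioned
above:

.. code:: bro

   redef PacketFilter::all_packets = F;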
API Changes
-----------
- The ``@prefixes`` directive works differently now.
Any added prefixes are now searched for and loaded *after* all input
files have been parsed. After all input files are parsed, Bro
searches ``BROPATH`` for prefixed, flattened versions of all of the
parsed input files. For example, if ``lcl`` is in ``@prefixes``, and
``site.bro`` is loaded, then a file named ``lcl.site.bro`` that's in
``BROPATH`` would end up being automatically loaded as well. Packages
work similarly, e.g. loading ``protocols/http`` means a file named
``lcl.protocols.http.bro`` in ``BROPATH`` gets loaded automatically.
- The ``make_addr`` BIF now returns a ``subnet`` instead of an ``addr``.
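A sketch of the new ``@prefixes`` behavior (the prefix and file names
are illustrative):

.. code:: bro

   @prefixes += lcl

   @load site              # lcl.site.bro is also loaded afterwards
                           # if it is found in BROPATH
   @load protocols/http    # likewise loads lcl.protocols.http.bro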
Variable Naming
---------------
- ``Module`` is more widely used for namespacing. E.g. the new
``site.bro`` exports the ``local_nets`` identifier (among other
things) into the ``Site`` module.
- Identifiers may have been renamed to conform to new `scripting
conventions
<{{docroot}}/development/script-conventions.html>`_.
BroControl
==========
BroControl looks much the same as the version that came with Bro 1.x,
but has been cleaned up and streamlined significantly internally.
BroControl has a new ``process`` command to process a trace on disk
offline using a similar configuration to what BroControl installs for
live analysis.
BroControl now has an extensive plugin interface for adding new
commands and options. Note that this is still considered experimental.
We have removed the ``analysis`` command, and BroControl currently
does not send daily alarm summaries anymore (this may be restored
later).
Removed Functionality
=====================
We have removed a bunch of functionality that was rarely used and/or
had not been maintained for a while:
- The ``net`` script data type.
- The ``alarm`` statement; use the notice framework instead.
- Trace rewriting.
- DFA state expiration in regexp engine.
- Active mapping.
- Native DAG support (may come back eventually).
- ClamAV support.
- The connection compressor is now disabled by default, and will
be removed in the future.
Development Infrastructure
==========================
Bro development has moved from SVN to Git for revision control. Users
who want to track the latest Bro developments by checking them out
from the source repositories should see the `development process
<{{docroot}}/development/process.html>`_. Note that all the various
sub-components now reside in their own repositories. However, the
top-level Bro repository includes them as git submodules so it's easy
to check them all out simultaneously.
Bro now uses `CMake <http://www.cmake.org>`_ for its build system so
that is a new required dependency when building from source.
Bro now comes with a growing suite of regression tests in
``testing/``.
@ -1,23 +0,0 @@
#!/bin/sh
SOURCE="$( cd "$( dirname "$0" )" && cd .. && pwd )"
BUILD=${SOURCE}/build
TMP=/tmp/bro-dist.${UID}
BRO_V=`cat ${SOURCE}/VERSION`
BROCCOLI_V=`cat ${SOURCE}/aux/broccoli/VERSION`
BROCTL_V=`cat ${SOURCE}/aux/broctl/VERSION`
( mkdir -p ${BUILD} && rm -rf ${TMP} && mkdir ${TMP} )
cp -R ${SOURCE} ${TMP}/Bro-${BRO_V}
( cd ${TMP} && find . -name .git\* | xargs rm -rf )
( cd ${TMP} && find . -name \*.swp | xargs rm -rf )
( cd ${TMP} && find . -type d -name build | xargs rm -rf )
( cd ${TMP} && tar -czf ${BUILD}/Bro-all-${BRO_V}.tar.gz Bro-${BRO_V} )
( cd ${TMP}/Bro-${BRO_V}/aux && mv broccoli Broccoli-${BROCCOLI_V} && \
tar -czf ${BUILD}/Broccoli-${BROCCOLI_V}.tar.gz Broccoli-${BROCCOLI_V} )
( cd ${TMP}/Bro-${BRO_V}/aux && mv broctl Broctl-${BROCTL_V} && \
tar -czf ${BUILD}/Broctl-${BROCTL_V}.tar.gz Broctl-${BROCTL_V} )
( cd ${TMP}/Bro-${BRO_V}/aux && rm -rf Broctl* Broccoli* )
( cd ${TMP} && tar -czf ${BUILD}/Bro-${BRO_V}.tar.gz Bro-${BRO_V} )
rm -rf ${TMP}
echo "Distribution source tarballs have been compiled in ${BUILD}"
@ -2,6 +2,7 @@ include(InstallPackageConfigFile)
install(DIRECTORY ./ DESTINATION ${BRO_SCRIPT_INSTALL_PATH} FILES_MATCHING
PATTERN "site/local*" EXCLUDE
PATTERN "test-all-policy.bro" EXCLUDE
PATTERN "*.bro"
PATTERN "*.sig"
PATTERN "*.fp"
@ -14,8 +14,8 @@ export {
## Which port to listen on.
const listen_port = 47757/tcp &redef;
## This defines if a listening socket should use encryption.
const listen_encrypted = F &redef;
## This defines if a listening socket should use SSL.
const listen_ssl = F &redef;
## Default compression level. Compression level is 0-9, with 0 = no
## compression.
@ -18,3 +18,6 @@
@if ( Cluster::is_enabled() )
@load ./cluster
@endif
# Load here so that it can check whether clustering is enabled.
@load ./actions/pp-alarms
@ -0,0 +1,233 @@
#! Notice extension that mails out a pretty-printed version of alarm.log
#! in regular intervals, formatted for better human readability. If activated,
#! that replaces the default summary mail having the raw log output.
@load base/frameworks/cluster
@load ../main
module Notice;
export {
## Activate pretty-printed alarm summaries.
const pretty_print_alarms = T &redef;
## Address to send the pretty-printed reports to. Default if not set is
## :bro:id:`Notice::mail_dest`.
const mail_dest_pretty_printed = "" &redef;
## If an address from one of these networks is reported, we mark
## the entry with an additional quote symbol (i.e., ">"). Many MUAs
## then highlight such lines differently.
global flag_nets: set[subnet] &redef;
## Function that renders a single alarm. Can be overridden.
global pretty_print_alarm: function(out: file, n: Info) &redef;
}
# We maintain an old-style file recording the pretty-printed alarms.
const pp_alarms_name = "alarm-mail.txt";
global pp_alarms: file;
global pp_alarms_open: bool = F;
# Returns True if pretty-printed alarm summaries are activated.
function want_pp() : bool
{
return (pretty_print_alarms && ! reading_traces()
&& (mail_dest != "" || mail_dest_pretty_printed != ""));
}
# Opens and initializes the output file.
function pp_open()
{
if ( pp_alarms_open )
return;
pp_alarms_open = T;
pp_alarms = open(pp_alarms_name);
local dest = mail_dest_pretty_printed != "" ? mail_dest_pretty_printed
: mail_dest;
local headers = email_headers("Alarm summary", dest);
write_file(pp_alarms, headers + "\n");
}
# Closes and mails out the current output file.
function pp_send()
{
if ( ! pp_alarms_open )
return;
write_file(pp_alarms, "\n\n--\n[Automatically generated]\n\n");
close(pp_alarms);
system(fmt("/bin/cat %s | %s -t -oi && /bin/rm %s",
pp_alarms_name, sendmail, pp_alarms_name));
pp_alarms_open = F;
}
# Postprocessor function that triggers the email.
function pp_postprocessor(info: Log::RotationInfo): bool
{
if ( want_pp() )
pp_send();
return T;
}
event bro_init()
{
if ( ! want_pp() )
return;
# This replaces the standard non-pretty-printing filter.
Log::add_filter(Notice::ALARM_LOG,
[$name="alarm-mail", $writer=Log::WRITER_NONE,
$interv=Log::default_rotation_interval,
$postprocessor=pp_postprocessor]);
}
event notice(n: Notice::Info) &priority=-5
{
if ( ! want_pp() )
return;
if ( ACTION_LOG !in n$actions )
return;
if ( ! pp_alarms_open )
pp_open();
pretty_print_alarm(pp_alarms, n);
}
function do_msg(out: file, n: Info, line1: string, line2: string, line3: string, host1: addr, name1: string, host2: addr, name2: string)
{
local country = "";
@ifdef ( Notice::ACTION_ADD_GEODATA ) # Make tests happy, cyclic dependency.
if ( n?$remote_location && n$remote_location?$country_code )
country = fmt(" (remote location %s)", n$remote_location$country_code);
@endif
line1 = cat(line1, country);
local resolved = "";
if ( host1 != 0.0.0.0 )
resolved = fmt("%s # %s = %s", resolved, host1, name1);
if ( host2 != 0.0.0.0 )
resolved = fmt("%s %s = %s", resolved, host2, name2);
print out, line1;
print out, line2;
if ( line3 != "" )
print out, line3;
if ( resolved != "" )
print out, resolved;
print out, "";
}
# Default pretty-printer.
function pretty_print_alarm(out: file, n: Info)
{
local pdescr = "";
@if ( Cluster::is_enabled() )
pdescr = "local";
if ( n?$src_peer )
pdescr = n$src_peer?$descr ? n$src_peer$descr : fmt("%s", n$src_peer$host);
pdescr = fmt("<%s> ", pdescr);
@endif
local msg = fmt( "%s%s", pdescr, n$msg);
local who = "";
local h1 = 0.0.0.0;
local h2 = 0.0.0.0;
local orig_p = "";
local resp_p = "";
if ( n?$id )
{
orig_p = fmt(":%s", n$id$orig_p);
resp_p = fmt(":%s", n$id$resp_p);
}
if ( n?$src && n?$dst )
{
h1 = n$src;
h2 = n$dst;
who = fmt("%s%s -> %s%s", h1, orig_p, h2, resp_p);
if ( n?$uid )
who = fmt("%s (uid %s)", who, n$uid );
}
else if ( n?$src )
{
local p = "";
if ( n?$p )
p = fmt(":%s", n$p);
h1 = n$src;
who = fmt("%s%s", h1, p);
}
local flag = (h1 in flag_nets || h2 in flag_nets);
local line1 = fmt(">%s %D %s %s", (flag ? ">" : " "), network_time(), n$note, who);
local line2 = fmt(" %s", msg);
local line3 = n?$sub ? fmt(" %s", n$sub) : "";
if ( h1 == 0.0.0.0 )
{
do_msg(out, n, line1, line2, line3, h1, "", h2, "");
return;
}
when ( local h1name = lookup_addr(h1) )
{
if ( h2 == 0.0.0.0 )
{
do_msg(out, n, line1, line2, line3, h1, h1name, h2, "");
return;
}
when ( local h2name = lookup_addr(h2) )
{
do_msg(out, n, line1, line2, line3, h1, h1name, h2, h2name);
return;
}
timeout 5secs
{
do_msg(out, n, line1, line2, line3, h1, h1name, h2, "(dns timeout)");
return;
}
}
timeout 5secs
{
if ( h2 == 0.0.0.0 )
{
do_msg(out, n, line1, line2, line3, h1, "(dns timeout)", h2, "");
return;
}
when ( local h2name_ = lookup_addr(h2) )
{
do_msg(out, n, line1, line2, line3, h1, "(dns timeout)", h2, h2name_);
return;
}
timeout 5secs
{
do_msg(out, n, line1, line2, line3, h1, "(dns timeout)", h2, "(dns timeout)");
return;
}
}
}
@ -17,25 +17,16 @@ event Notice::notice(n: Notice::Info) &priority=10
{
when ( local src_name = lookup_addr(n$src) )
{
output = cat(output, "orig_h/src: ", src_name, "\n");
}
timeout 5secs
{
output = cat(output, "orig_h/src: <timeout>\n");
output = string_cat("orig_h/src hostname: ", src_name, "\n");
n$email_body_sections[|n$email_body_sections|] = output;
}
}
if ( n?$dst )
{
when ( local dst_name = lookup_addr(n$dst) )
{
output = cat(output, "resp_h/dst: ", dst_name, "\n");
}
timeout 5secs
{
output = cat(output, "resp_h/dst: <timeout>\n");
}
}
if ( output != "" )
output = string_cat("resp_h/dst hostname: ", dst_name, "\n");
n$email_body_sections[|n$email_body_sections|] = output;
}
}
}
@ -148,7 +148,7 @@ export {
## from highest value (10) to lowest value (0).
priority: count &log &default=5;
## An action given to the notice if the predicate return true.
result: Notice::Action &log &default=ACTION_NONE;
action: Notice::Action &log &default=ACTION_NONE;
## The pred (predicate) field is a function that returns a boolean T
## or F value. If the predicate function return true, the action in
## this record is applied to the notice that is given as an argument
@ -169,13 +169,13 @@ export {
[$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); },
$halt=T, $priority = 9],
[$pred(n: Notice::Info) = { return (n$note in Notice::not_suppressed_types); },
$result = ACTION_NO_SUPPRESS,
$action = ACTION_NO_SUPPRESS,
$priority = 9],
[$pred(n: Notice::Info) = { return (n$note in Notice::alarmed_types); },
$result = ACTION_ALARM,
$action = ACTION_ALARM,
$priority = 8],
[$pred(n: Notice::Info) = { return (n$note in Notice::emailed_types); },
$result = ACTION_EMAIL,
$action = ACTION_EMAIL,
$priority = 8],
[$pred(n: Notice::Info) = {
if (n$note in Notice::type_suppression_intervals)
@ -185,9 +185,9 @@ export {
}
return F;
},
$result = ACTION_NONE,
$action = ACTION_NONE,
$priority = 8],
[$result = ACTION_LOG,
[$action = ACTION_LOG,
$priority = 0],
} &redef;
@ -354,8 +354,25 @@ function email_notice_to(n: Notice::Info, dest: string, extend: bool)
local email_text = email_headers(fmt("%s", n$note), dest);
# The notice emails always start off with the human readable message.
email_text = string_cat(email_text, "\n", n$msg, "\n");
# First off, finish the headers and include the human readable messages
# then leave a blank line after the message.
email_text = string_cat(email_text, "\nMessage: ", n$msg);
if ( n?$sub )
email_text = string_cat(email_text, "\nSub-message: ", n$sub);
email_text = string_cat(email_text, "\n\n");
# Next, add information about the connection if it exists.
if ( n?$id )
{
email_text = string_cat(email_text, "Connection: ",
fmt("%s", n$id$orig_h), ":", fmt("%d", n$id$orig_p), " -> ",
fmt("%s", n$id$resp_h), ":", fmt("%d", n$id$resp_p), "\n");
if ( n?$uid )
email_text = string_cat(email_text, "Connection uid: ", n$uid, "\n");
}
else if ( n?$src )
email_text = string_cat(email_text, "Address: ", fmt("%s", n$src), "\n");
# Add the extended information if it's requested.
if ( extend )
@ -466,7 +483,7 @@ function apply_policy(n: Notice::Info)
# If there's no predicate or the predicate returns F.
if ( ! ordered_policy[i]?$pred || ordered_policy[i]$pred(n) )
{
add n$actions[ordered_policy[i]$result];
add n$actions[ordered_policy[i]$action];
add n$policy_items[int_to_count(i)];
# If the predicate matched and there was a suppression interval,
@ -8,223 +8,212 @@ export {
redef enum Log::ID += { LOG };
redef enum Notice::Type += {
## Generic unusual but alarm-worthy activity.
Weird_Activity,
## Generic unusual but notice-worthy weird activity.
Activity,
};
type Info: record {
## The time when the weird occurred.
ts: time &log;
## If a connection is associated with this weird, this will be the
## connection's unique ID.
uid: string &log &optional;
## conn_id for the optional connection.
id: conn_id &log &optional;
msg: string &log;
## The name of the weird that occurred.
name: string &log;
## Additional information accompanying the weird if any.
addl: string &log &optional;
## Indicate if this weird was also turned into a notice.
notice: bool &log &default=F;
## The peer that originated this weird. This is helpful in cluster
## deployments if a particular cluster node is having trouble to help
## identify which node is having trouble.
peer: string &log &optional;
};
type WeirdAction: enum {
WEIRD_UNSPECIFIED, WEIRD_IGNORE, WEIRD_FILE,
WEIRD_NOTICE_ALWAYS, WEIRD_NOTICE_PER_CONN,
WEIRD_NOTICE_PER_ORIG, WEIRD_NOTICE_ONCE,
type Action: enum {
ACTION_UNSPECIFIED,
ACTION_IGNORE,
ACTION_LOG,
ACTION_LOG_ONCE,
ACTION_LOG_PER_CONN,
ACTION_LOG_PER_ORIG,
ACTION_NOTICE,
ACTION_NOTICE_ONCE,
ACTION_NOTICE_PER_CONN,
ACTION_NOTICE_PER_ORIG,
};
# Which of the above actions lead to logging. For internal use.
const notice_actions = {
WEIRD_NOTICE_ALWAYS, WEIRD_NOTICE_PER_CONN,
WEIRD_NOTICE_PER_ORIG, WEIRD_NOTICE_ONCE,
};
const actions: table[string] of Action = {
["unsolicited_SYN_response"] = ACTION_IGNORE,
["above_hole_data_without_any_acks"] = ACTION_LOG,
["active_connection_reuse"] = ACTION_LOG,
["bad_HTTP_reply"] = ACTION_LOG,
["bad_HTTP_version"] = ACTION_LOG,
["bad_ICMP_checksum"] = ACTION_LOG_PER_ORIG,
["bad_ident_port"] = ACTION_LOG,
["bad_ident_reply"] = ACTION_LOG,
["bad_ident_request"] = ACTION_LOG,
["bad_rlogin_prolog"] = ACTION_LOG,
["bad_rsh_prolog"] = ACTION_LOG,
["rsh_text_after_rejected"] = ACTION_LOG,
["bad_RPC"] = ACTION_LOG_PER_ORIG,
["bad_RPC_program"] = ACTION_LOG,
["bad_SYN_ack"] = ACTION_LOG,
["bad_TCP_checksum"] = ACTION_LOG_PER_ORIG,
["bad_UDP_checksum"] = ACTION_LOG_PER_ORIG,
["baroque_SYN"] = ACTION_LOG,
["base64_illegal_encoding"] = ACTION_LOG,
["connection_originator_SYN_ack"] = ACTION_LOG_PER_ORIG,
["corrupt_tcp_options"] = ACTION_LOG_PER_ORIG,
["crud_trailing_HTTP_request"] = ACTION_LOG,
["data_after_reset"] = ACTION_LOG,
["data_before_established"] = ACTION_LOG,
["data_without_SYN_ACK"] = ACTION_LOG,
["DHCP_no_type_option"] = ACTION_LOG,
["DHCP_wrong_msg_type"] = ACTION_LOG,
["DHCP_wrong_op_type"] = ACTION_LOG,
["DNS_AAAA_neg_length"] = ACTION_LOG,
["DNS_Conn_count_too_large"] = ACTION_LOG,
["DNS_NAME_too_long"] = ACTION_LOG,
["DNS_RR_bad_length"] = ACTION_LOG,
["DNS_RR_length_mismatch"] = ACTION_LOG,
["DNS_RR_unknown_type"] = ACTION_LOG,
["DNS_label_forward_compress_offset"] = ACTION_LOG_PER_ORIG,
["DNS_label_len_gt_name_len"] = ACTION_LOG_PER_ORIG,
["DNS_label_len_gt_pkt"] = ACTION_LOG_PER_ORIG,
["DNS_label_too_long"] = ACTION_LOG_PER_ORIG,
["DNS_truncated_RR_rdlength_lt_len"] = ACTION_LOG,
["DNS_truncated_ans_too_short"] = ACTION_LOG,
["DNS_truncated_len_lt_hdr_len"] = ACTION_LOG,
["DNS_truncated_quest_too_short"] = ACTION_LOG,
["dns_changed_number_of_responses"] = ACTION_LOG_PER_ORIG,
["dns_reply_seen_after_done"] = ACTION_LOG_PER_ORIG,
["excessive_data_without_further_acks"] = ACTION_LOG,
["excess_RPC"] = ACTION_LOG_PER_ORIG,
["excessive_RPC_len"] = ACTION_LOG_PER_ORIG,
["FIN_advanced_last_seq"] = ACTION_LOG,
["FIN_after_reset"] = ACTION_IGNORE,
["FIN_storm"] = ACTION_NOTICE_PER_ORIG,
["HTTP_bad_chunk_size"] = ACTION_LOG,
["HTTP_chunked_transfer_for_multipart_message"] = ACTION_LOG,
["HTTP_overlapping_messages"] = ACTION_LOG,
["HTTP_unknown_method"] = ACTION_LOG,
["HTTP_version_mismatch"] = ACTION_LOG,
["ident_request_addendum"] = ACTION_LOG,
["inappropriate_FIN"] = ACTION_LOG,
["inflate_failed"] = ACTION_LOG,
["invalid_irc_global_users_reply"] = ACTION_LOG,
["irc_invalid_command"] = ACTION_LOG,
["irc_invalid_dcc_message_format"] = ACTION_LOG,
["irc_invalid_invite_message_format"] = ACTION_LOG,
["irc_invalid_join_line"] = ACTION_LOG,
["irc_invalid_kick_message_format"] = ACTION_LOG,
["irc_invalid_line"] = ACTION_LOG,
["irc_invalid_mode_message_format"] = ACTION_LOG,
["irc_invalid_names_line"] = ACTION_LOG,
["irc_invalid_njoin_line"] = ACTION_LOG,
["irc_invalid_notice_message_format"] = ACTION_LOG,
["irc_invalid_oper_message_format"] = ACTION_LOG,
["irc_invalid_privmsg_message_format"] = ACTION_LOG,
["irc_invalid_reply_number"] = ACTION_LOG,
["irc_invalid_squery_message_format"] = ACTION_LOG,
["irc_invalid_topic_reply"] = ACTION_LOG,
["irc_invalid_who_line"] = ACTION_LOG,
["irc_invalid_who_message_format"] = ACTION_LOG,
["irc_invalid_whois_channel_line"] = ACTION_LOG,
["irc_invalid_whois_message_format"] = ACTION_LOG,
["irc_invalid_whois_operator_line"] = ACTION_LOG,
["irc_invalid_whois_user_line"] = ACTION_LOG,
["irc_line_size_exceeded"] = ACTION_LOG,
["irc_line_too_short"] = ACTION_LOG,
["irc_too_many_invalid"] = ACTION_LOG,
["line_terminated_with_single_CR"] = ACTION_LOG,
["line_terminated_with_single_LF"] = ACTION_LOG,
["malformed_ssh_identification"] = ACTION_LOG,
["malformed_ssh_version"] = ACTION_LOG,
["matching_undelivered_data"] = ACTION_LOG,
["multiple_HTTP_request_elements"] = ACTION_LOG,
["multiple_RPCs"] = ACTION_LOG_PER_ORIG,
["non_IPv4_packet"] = ACTION_LOG_ONCE,
["NUL_in_line"] = ACTION_LOG,
["originator_RPC_reply"] = ACTION_LOG_PER_ORIG,
["partial_finger_request"] = ACTION_LOG,
["partial_ftp_request"] = ACTION_LOG,
["partial_ident_request"] = ACTION_LOG,
["partial_RPC"] = ACTION_LOG_PER_ORIG,
["partial_RPC_request"] = ACTION_LOG,
["pending_data_when_closed"] = ACTION_LOG,
["pop3_bad_base64_encoding"] = ACTION_LOG,
["pop3_client_command_unknown"] = ACTION_LOG,
["pop3_client_sending_server_commands"] = ACTION_LOG,
["pop3_malformed_auth_plain"] = ACTION_LOG,
["pop3_server_command_unknown"] = ACTION_LOG,
["pop3_server_sending_client_commands"] = ACTION_LOG,
["possible_split_routing"] = ACTION_LOG,
["premature_connection_reuse"] = ACTION_LOG,
["repeated_SYN_reply_wo_ack"] = ACTION_LOG,
["repeated_SYN_with_ack"] = ACTION_LOG,
["responder_RPC_call"] = ACTION_LOG_PER_ORIG,
["rlogin_text_after_rejected"] = ACTION_LOG,
["RPC_rexmit_inconsistency"] = ACTION_LOG,
["RPC_underflow"] = ACTION_LOG,
["RST_storm"] = ACTION_LOG,
["RST_with_data"] = ACTION_LOG,
["simultaneous_open"] = ACTION_LOG_PER_CONN,
["spontaneous_FIN"] = ACTION_IGNORE,
["spontaneous_RST"] = ACTION_IGNORE,
["SMB_parsing_error"] = ACTION_LOG,
["no_smb_session_using_parsesambamsg"] = ACTION_LOG,
["smb_andx_command_failed_to_parse"] = ACTION_LOG,
["transaction_subcmd_missing"] = ACTION_LOG,
["successful_RPC_reply_to_invalid_request"] = ACTION_NOTICE_PER_ORIG,
["SYN_after_close"] = ACTION_LOG,
["SYN_after_partial"] = ACTION_NOTICE_PER_ORIG,
["SYN_after_reset"] = ACTION_LOG,
["SYN_inside_connection"] = ACTION_LOG,
["SYN_seq_jump"] = ACTION_LOG,
["SYN_with_data"] = ACTION_LOG,
["TCP_christmas"] = ACTION_LOG,
["truncated_ARP"] = ACTION_LOG,
["truncated_NTP"] = ACTION_LOG,
["UDP_datagram_length_mismatch"] = ACTION_LOG_PER_ORIG,
["unexpected_client_HTTP_data"] = ACTION_LOG,
["unexpected_multiple_HTTP_requests"] = ACTION_LOG,
["unexpected_server_HTTP_data"] = ACTION_LOG,
["unmatched_HTTP_reply"] = ACTION_LOG,
["unpaired_RPC_response"] = ACTION_LOG,
["window_recision"] = ACTION_LOG,
["double_%_in_URI"] = ACTION_LOG,
["illegal_%_at_end_of_URI"] = ACTION_LOG,
["unescaped_%_in_URI"] = ACTION_LOG,
["unescaped_special_URI_char"] = ACTION_LOG,
["deficit_netbios_hdr_len"] = ACTION_LOG,
["excess_netbios_hdr_len"] = ACTION_LOG,
["netbios_client_session_reply"] = ACTION_LOG,
["netbios_raw_session_msg"] = ACTION_LOG,
["netbios_server_session_request"] = ACTION_LOG,
["unknown_netbios_type"] = ACTION_LOG,
["excessively_large_fragment"] = ACTION_LOG,
["excessively_small_fragment"] = ACTION_LOG_PER_ORIG,
["fragment_inconsistency"] = ACTION_LOG_PER_ORIG,
["fragment_overlap"] = ACTION_LOG_PER_ORIG,
["fragment_protocol_inconsistency"] = ACTION_LOG,
["fragment_size_inconsistency"] = ACTION_LOG_PER_ORIG,
## These do indeed happen!
["fragment_with_DF"] = ACTION_LOG,
["incompletely_captured_fragment"] = ACTION_LOG,
["bad_IP_checksum"] = ACTION_LOG_PER_ORIG,
["bad_TCP_header_len"] = ACTION_LOG,
["internally_truncated_header"] = ACTION_LOG,
["truncated_IP"] = ACTION_LOG,
["truncated_header"] = ACTION_LOG,
} &default=ACTION_LOG &redef;
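Since the table above is declared `&redef`, a deployment can tune per-weird actions from a site script. A minimal sketch in Bro (hedged: it assumes the table is exported as `Weird::actions`, as the surrounding diff suggests):

```bro
# Hypothetical site tuning: silence a noisy weird and rate-limit another.
# Assumes the actions table is exported by the Weird module.
redef Weird::actions += {
    ["spontaneous_FIN"] = Weird::ACTION_IGNORE,
    ["bad_TCP_checksum"] = Weird::ACTION_LOG_PER_ORIG,
};
```

Because the table carries `&default=ACTION_LOG`, any weird name not listed still falls back to plain logging.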
const weird_action: table[string] of WeirdAction = {
# tcp_weird
["above_hole_data_without_any_acks"] = WEIRD_FILE,
["active_connection_reuse"] = WEIRD_FILE,
["bad_HTTP_reply"] = WEIRD_FILE,
["bad_HTTP_version"] = WEIRD_FILE,
["bad_ICMP_checksum"] = WEIRD_FILE,
["bad_ident_port"] = WEIRD_FILE,
["bad_ident_reply"] = WEIRD_FILE,
["bad_ident_request"] = WEIRD_FILE,
["bad_rlogin_prolog"] = WEIRD_FILE,
["bad_rsh_prolog"] = WEIRD_FILE,
["rsh_text_after_rejected"] = WEIRD_FILE,
["bad_RPC"] = WEIRD_NOTICE_PER_ORIG,
["bad_RPC_program"] = WEIRD_FILE,
["bad_SYN_ack"] = WEIRD_FILE,
["bad_TCP_checksum"] = WEIRD_FILE,
["bad_UDP_checksum"] = WEIRD_FILE,
["baroque_SYN"] = WEIRD_FILE,
["base64_illegal_encoding"] = WEIRD_FILE,
["connection_originator_SYN_ack"] = WEIRD_FILE,
["corrupt_tcp_options"] = WEIRD_NOTICE_PER_ORIG,
["crud_trailing_HTTP_request"] = WEIRD_FILE,
["data_after_reset"] = WEIRD_FILE,
["data_before_established"] = WEIRD_FILE,
["data_without_SYN_ACK"] = WEIRD_FILE,
["DHCP_no_type_option"] = WEIRD_FILE,
["DHCP_wrong_msg_type"] = WEIRD_FILE,
["DHCP_wrong_op_type"] = WEIRD_FILE,
["DNS_AAAA_neg_length"] = WEIRD_FILE,
["DNS_Conn_count_too_large"] = WEIRD_FILE,
["DNS_NAME_too_long"] = WEIRD_FILE,
["DNS_RR_bad_length"] = WEIRD_FILE,
["DNS_RR_length_mismatch"] = WEIRD_FILE,
["DNS_RR_unknown_type"] = WEIRD_FILE,
["DNS_label_forward_compress_offset"] = WEIRD_NOTICE_PER_ORIG,
["DNS_label_len_gt_name_len"] = WEIRD_NOTICE_PER_ORIG,
["DNS_label_len_gt_pkt"] = WEIRD_NOTICE_PER_ORIG,
["DNS_label_too_long"] = WEIRD_NOTICE_PER_ORIG,
["DNS_truncated_RR_rdlength_lt_len"] = WEIRD_FILE,
["DNS_truncated_ans_too_short"] = WEIRD_FILE,
["DNS_truncated_len_lt_hdr_len"] = WEIRD_FILE,
["DNS_truncated_quest_too_short"] = WEIRD_FILE,
["dns_changed_number_of_responses"] = WEIRD_NOTICE_PER_ORIG,
["dns_reply_seen_after_done"] = WEIRD_NOTICE_PER_ORIG,
["excessive_data_without_further_acks"] = WEIRD_FILE,
["excess_RPC"] = WEIRD_NOTICE_PER_ORIG,
["excessive_RPC_len"] = WEIRD_NOTICE_PER_ORIG,
["FIN_advanced_last_seq"] = WEIRD_FILE,
["FIN_after_reset"] = WEIRD_IGNORE,
["FIN_storm"] = WEIRD_NOTICE_ALWAYS,
["HTTP_bad_chunk_size"] = WEIRD_FILE,
["HTTP_chunked_transfer_for_multipart_message"] = WEIRD_FILE,
["HTTP_overlapping_messages"] = WEIRD_FILE,
["HTTP_unknown_method"] = WEIRD_FILE,
["HTTP_version_mismatch"] = WEIRD_FILE,
["ident_request_addendum"] = WEIRD_FILE,
["inappropriate_FIN"] = WEIRD_FILE,
["inflate_data_failed"] = WEIRD_FILE,
["inflate_failed"] = WEIRD_FILE,
["invalid_irc_global_users_reply"] = WEIRD_FILE,
["irc_invalid_command"] = WEIRD_FILE,
["irc_invalid_dcc_message_format"] = WEIRD_FILE,
["irc_invalid_invite_message_format"] = WEIRD_FILE,
["irc_invalid_join_line"] = WEIRD_FILE,
["irc_invalid_kick_message_format"] = WEIRD_FILE,
["irc_invalid_line"] = WEIRD_FILE,
["irc_invalid_mode_message_format"] = WEIRD_FILE,
["irc_invalid_names_line"] = WEIRD_FILE,
["irc_invalid_njoin_line"] = WEIRD_FILE,
["irc_invalid_notice_message_format"] = WEIRD_FILE,
["irc_invalid_oper_message_format"] = WEIRD_FILE,
["irc_invalid_privmsg_message_format"] = WEIRD_FILE,
["irc_invalid_reply_number"] = WEIRD_FILE,
["irc_invalid_squery_message_format"] = WEIRD_FILE,
["irc_invalid_topic_reply"] = WEIRD_FILE,
["irc_invalid_who_line"] = WEIRD_FILE,
["irc_invalid_who_message_format"] = WEIRD_FILE,
["irc_invalid_whois_channel_line"] = WEIRD_FILE,
["irc_invalid_whois_message_format"] = WEIRD_FILE,
["irc_invalid_whois_operator_line"] = WEIRD_FILE,
["irc_invalid_whois_user_line"] = WEIRD_FILE,
["irc_line_size_exceeded"] = WEIRD_FILE,
["irc_line_too_short"] = WEIRD_FILE,
["irc_too_many_invalid"] = WEIRD_FILE,
["line_terminated_with_single_CR"] = WEIRD_FILE,
["line_terminated_with_single_LF"] = WEIRD_FILE,
["malformed_ssh_identification"] = WEIRD_FILE,
["malformed_ssh_version"] = WEIRD_FILE,
["matching_undelivered_data"] = WEIRD_FILE,
["multiple_HTTP_request_elements"] = WEIRD_FILE,
["multiple_RPCs"] = WEIRD_NOTICE_PER_ORIG,
["non_IPv4_packet"] = WEIRD_NOTICE_ONCE,
["NUL_in_line"] = WEIRD_FILE,
["originator_RPC_reply"] = WEIRD_NOTICE_PER_ORIG,
["partial_finger_request"] = WEIRD_FILE,
["partial_ftp_request"] = WEIRD_FILE,
["partial_ident_request"] = WEIRD_FILE,
["partial_RPC"] = WEIRD_NOTICE_PER_ORIG,
["partial_RPC_request"] = WEIRD_FILE,
["pending_data_when_closed"] = WEIRD_FILE,
["pop3_bad_base64_encoding"] = WEIRD_FILE,
["pop3_client_command_unknown"] = WEIRD_FILE,
["pop3_client_sending_server_commands"] = WEIRD_FILE,
["pop3_malformed_auth_plain"] = WEIRD_FILE,
["pop3_server_command_unknown"] = WEIRD_FILE,
["pop3_server_sending_client_commands"] = WEIRD_FILE,
["possible_split_routing"] = WEIRD_FILE,
["premature_connection_reuse"] = WEIRD_FILE,
["repeated_SYN_reply_wo_ack"] = WEIRD_FILE,
["repeated_SYN_with_ack"] = WEIRD_FILE,
["responder_RPC_call"] = WEIRD_NOTICE_PER_ORIG,
["rlogin_text_after_rejected"] = WEIRD_FILE,
["RPC_rexmit_inconsistency"] = WEIRD_FILE,
["RPC_underflow"] = WEIRD_FILE,
["RST_storm"] = WEIRD_NOTICE_ALWAYS,
["RST_with_data"] = WEIRD_FILE, # PC's do this
["simultaneous_open"] = WEIRD_NOTICE_PER_CONN,
["spontaneous_FIN"] = WEIRD_IGNORE,
["spontaneous_RST"] = WEIRD_IGNORE,
["SMB_parsing_error"] = WEIRD_FILE,
["no_smb_session_using_parsesambamsg"] = WEIRD_FILE,
["smb_andx_command_failed_to_parse"] = WEIRD_FILE,
["transaction_subcmd_missing"] = WEIRD_FILE,
["SSLv3_data_without_full_handshake"] = WEIRD_FILE,
["unexpected_SSLv3_record"] = WEIRD_FILE,
["successful_RPC_reply_to_invalid_request"] = WEIRD_NOTICE_PER_ORIG,
["SYN_after_close"] = WEIRD_FILE,
["SYN_after_partial"] = WEIRD_NOTICE_PER_ORIG,
["SYN_after_reset"] = WEIRD_FILE,
["SYN_inside_connection"] = WEIRD_FILE,
["SYN_seq_jump"] = WEIRD_FILE,
["SYN_with_data"] = WEIRD_FILE,
["TCP_christmas"] = WEIRD_FILE,
["truncated_ARP"] = WEIRD_FILE,
["truncated_NTP"] = WEIRD_FILE,
["UDP_datagram_length_mismatch"] = WEIRD_NOTICE_PER_ORIG,
["unexpected_client_HTTP_data"] = WEIRD_FILE,
["unexpected_multiple_HTTP_requests"] = WEIRD_FILE,
["unexpected_server_HTTP_data"] = WEIRD_FILE,
["unmatched_HTTP_reply"] = WEIRD_FILE,
["unpaired_RPC_response"] = WEIRD_FILE,
["unsolicited_SYN_response"] = WEIRD_IGNORE,
["window_recision"] = WEIRD_FILE,
["double_%_in_URI"] = WEIRD_FILE,
["illegal_%_at_end_of_URI"] = WEIRD_FILE,
["unescaped_%_in_URI"] = WEIRD_FILE,
["unescaped_special_URI_char"] = WEIRD_FILE,
["UDP_zone_transfer"] = WEIRD_NOTICE_ONCE,
["deficit_netbios_hdr_len"] = WEIRD_FILE,
["excess_netbios_hdr_len"] = WEIRD_FILE,
["netbios_client_session_reply"] = WEIRD_FILE,
["netbios_raw_session_msg"] = WEIRD_FILE,
["netbios_server_session_request"] = WEIRD_FILE,
["unknown_netbios_type"] = WEIRD_FILE,
# flow_weird
["excessively_large_fragment"] = WEIRD_NOTICE_ALWAYS,
# Code Red generates slews ...
["excessively_small_fragment"] = WEIRD_NOTICE_PER_ORIG,
["fragment_inconsistency"] = WEIRD_NOTICE_PER_ORIG,
["fragment_overlap"] = WEIRD_NOTICE_PER_ORIG,
["fragment_protocol_inconsistency"] = WEIRD_NOTICE_ALWAYS,
["fragment_size_inconsistency"] = WEIRD_NOTICE_PER_ORIG,
["fragment_with_DF"] = WEIRD_FILE, # these do indeed happen!
["incompletely_captured_fragment"] = WEIRD_NOTICE_ALWAYS,
# net_weird
["bad_IP_checksum"] = WEIRD_FILE,
["bad_TCP_header_len"] = WEIRD_FILE,
["internally_truncated_header"] = WEIRD_NOTICE_ALWAYS,
["truncated_IP"] = WEIRD_FILE,
["truncated_header"] = WEIRD_FILE,
# generated by policy script
["Land_attack"] = WEIRD_NOTICE_PER_ORIG,
["bad_pm_port"] = WEIRD_NOTICE_PER_ORIG,
["ICMP-unreachable for wrong state"] = WEIRD_NOTICE_PER_ORIG,
} &redef;
# Table that maps weird types to a function that should be called
# to determine the action.
const weird_action_filters:
table[string] of function(c: connection): WeirdAction &redef;
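A filter function lets the chosen action depend on the connection itself. A minimal sketch against the declaration above (hedged: the weird name and the local-vs-remote policy are invented for illustration; `WEIRD_UNSPECIFIED` defers to the static table, matching how the filters are consulted below):

```bro
# Hypothetical: only ignore this weird when the originator is local;
# otherwise fall back to the statically configured action.
redef weird_action_filters += {
    ["data_before_established"] = function(c: connection): WeirdAction
        {
        return Site::is_local_addr(c$id$orig_h)
            ? WEIRD_IGNORE : WEIRD_UNSPECIFIED;
        }
};
```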
const weird_ignore_host: set[addr, string] &redef;
## To completely ignore a specific weird for a host, add the host
## and weird name into this set.
const ignore_hosts: set[addr, string] &redef;
# But don't ignore these (for the weird file); it's handy to keep
# track of clustered checksum errors.
@@ -233,26 +222,45 @@ export {
"bad_ICMP_checksum",
} &redef;
## This table is used to track identifier and name pairs that should be
## temporarily ignored because the problem has already been reported.
## This helps reduce the volume of high-volume weirds by allowing each
## unique weird only once per ``create_expire`` interval.
global weird_ignore: set[string, string] &create_expire=10min &redef;
## A state set that tracks unique weirds by name and identifier to
## reduce duplicate logging. It is deliberately not synchronized
## because synchronization could cause overload during weird storms.
global did_log: set[string, string] &create_expire=1day &redef;
## A state set that tracks unique weirds by name and identifier to
## reduce duplicate notices from being raised.
global did_notice: set[string, string] &create_expire=1day &redef;
global log_weird: event(rec: Info);
}
# id/msg pairs that should be ignored (because the problem has already
# been reported).
global weird_ignore: table[string] of set[string] &write_expire = 10 min;
# These actions limit output: once a weird has been seen, further
# redundant instances do not progress to being logged or noticed.
const limiting_actions = {
ACTION_LOG_ONCE,
ACTION_LOG_PER_CONN,
ACTION_LOG_PER_ORIG,
ACTION_NOTICE_ONCE,
ACTION_NOTICE_PER_CONN,
ACTION_NOTICE_PER_ORIG,
};
# For WEIRD_NOTICE_PER_CONN.
global did_notice_conn: set[addr, port, addr, port, string]
&read_expire = 1 day;
# This is an internal set to track which Weird::Action values lead to notice
# creation.
const notice_actions = {
ACTION_NOTICE,
ACTION_NOTICE_PER_CONN,
ACTION_NOTICE_PER_ORIG,
ACTION_NOTICE_ONCE,
};
# For WEIRD_NOTICE_PER_ORIG.
global did_notice_orig: set[addr, string] &read_expire = 1 day;
# For WEIRD_NOTICE_ONCE.
global did_weird_log: set[string] &read_expire = 1 day;
global did_inconsistency_msg: set[conn_id];
# Used to pass the optional connection into report_weird().
# Used to pass the optional connection into report().
global current_conn: connection;
event bro_init() &priority=5
@@ -260,12 +268,54 @@ event bro_init() &priority=5
Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird]);
}
function report_weird(t: time, name: string, id: string, have_conn: bool,
addl: string, action: WeirdAction, no_log: bool)
function flow_id_string(src: addr, dst: addr): string
{
return fmt("%s -> %s", src, dst);
}
function report(t: time, name: string, identifier: string, have_conn: bool, addl: string)
{
local action = actions[name];
# If this weird is to be ignored, drop out of here very early.
if ( action == ACTION_IGNORE || [name, identifier] in weird_ignore )
return;
if ( action in limiting_actions )
{
if ( action in notice_actions )
{
# Handle notices
if ( have_conn && action == ACTION_NOTICE_PER_ORIG )
identifier = fmt("%s", current_conn$id$orig_h);
else if ( action == ACTION_NOTICE_ONCE )
identifier = "";
# If this weird was already noticed then we're done.
if ( [name, identifier] in did_notice )
return;
add did_notice[name, identifier];
}
else
{
# Handle logging.
if ( have_conn && action == ACTION_LOG_PER_ORIG )
identifier = fmt("%s", current_conn$id$orig_h);
else if ( action == ACTION_LOG_ONCE )
identifier = "";
# If this weird was already logged then we're done.
if ( [name, identifier] in did_log )
return;
add did_log[name, identifier];
}
}
# Create the Weird::Info record.
local info: Info;
info$ts = t;
info$msg = name;
info$name = name;
info$peer = peer_description;
if ( addl != "" )
info$addl = addl;
if ( have_conn )
@@ -274,128 +324,59 @@ function report_weird(t: time, name: string, id: string, have_conn: bool,
info$id = current_conn$id;
}
if ( action == WEIRD_IGNORE ||
(id in weird_ignore && name in weird_ignore[id]) )
return;
if ( action == WEIRD_UNSPECIFIED )
if ( action in notice_actions )
{
if ( name in weird_action && weird_action[name] == WEIRD_IGNORE )
return;
else
{
action = WEIRD_NOTICE_ALWAYS;
info$notice = T;
}
}
if ( action in notice_actions && ! no_log )
{
local n: Notice::Info;
n$note = Weird_Activity;
n$msg = info$msg;
n$note = Activity;
n$msg = info$name;
if ( have_conn )
n$conn = current_conn;
if ( info?$addl )
n$sub = info$addl;
NOTICE(n);
}
else if ( id != "" && name !in weird_do_not_ignore_repeats )
{
if ( id !in weird_ignore )
weird_ignore[id] = set() &mergeable;
add weird_ignore[id][name];
}
# Temporarily ignore identical weirds to reduce volume.
if ( name !in weird_do_not_ignore_repeats )
add weird_ignore[name, identifier];
Log::write(Weird::LOG, info);
}
function report_weird_conn(t: time, name: string, id: string, addl: string,
c: connection)
function report_conn(t: time, name: string, identifier: string, addl: string, c: connection)
{
if ( [c$id$orig_h, name] in weird_ignore_host ||
[c$id$resp_h, name] in weird_ignore_host )
local cid = c$id;
if ( [cid$orig_h, name] in ignore_hosts ||
[cid$resp_h, name] in ignore_hosts )
return;
local no_log = F;
local action = WEIRD_UNSPECIFIED;
if ( name in weird_action )
{
if ( name in weird_action_filters )
action = weird_action_filters[name](c);
if ( action == WEIRD_UNSPECIFIED )
action = weird_action[name];
local cid = c$id;
if ( action == WEIRD_NOTICE_PER_CONN )
{
if ( [cid$orig_h, cid$orig_p, cid$resp_h, cid$resp_p, name] in did_notice_conn )
no_log = T;
else
add did_notice_conn[cid$orig_h, cid$orig_p, cid$resp_h, cid$resp_p, name];
}
else if ( action == WEIRD_NOTICE_PER_ORIG )
{
if ( [c$id$orig_h, name] in did_notice_orig )
no_log = T;
else
add did_notice_orig[c$id$orig_h, name];
}
else if ( action == WEIRD_NOTICE_ONCE )
{
if ( name in did_weird_log )
no_log = T;
else
add did_weird_log[name];
}
}
current_conn = c;
report_weird(t, name, id, T, addl, action, no_log);
report(t, name, identifier, T, addl);
}
function report_weird_orig(t: time, name: string, id: string, orig: addr)
function report_orig(t: time, name: string, identifier: string, orig: addr)
{
local no_log = F;
local action = WEIRD_UNSPECIFIED;
if ( [orig, name] in ignore_hosts )
return;
if ( name in weird_action )
{
action = weird_action[name];
if ( action == WEIRD_NOTICE_PER_ORIG )
{
if ( [orig, name] in did_notice_orig )
no_log = T;
else
add did_notice_orig[orig, name];
}
report(t, name, identifier, F, "");
}
report_weird(t, name, id, F, "", action, no_log);
}
# The following events come from core generated weirds typically.
event conn_weird(name: string, c: connection, addl: string)
{
report_weird_conn(network_time(), name, id_string(c$id), addl, c);
report_conn(network_time(), name, id_string(c$id), addl, c);
}
event flow_weird(name: string, src: addr, dst: addr)
{
report_weird_orig(network_time(), name, fmt("%s -> %s", src, dst), src);
report_orig(network_time(), name, flow_id_string(src, dst), src);
}
event net_weird(name: string)
{
report_weird(network_time(), name, "", F, "", WEIRD_UNSPECIFIED, F);
}
event connection_state_remove(c: connection)
{
delete weird_ignore[id_string(c$id)];
delete did_inconsistency_msg[c$id];
report(network_time(), name, "", F, "");
}
@@ -121,18 +121,22 @@ function parse_mozilla(unparsed_version: string,
if ( 2 in parts )
v = parse(parts[2], host, software_type)$version;
}
else if ( /MSIE 7.*Trident\/4\.0/ in unparsed_version )
else if ( / MSIE / in unparsed_version )
{
software_name = "MSIE";
if ( /Trident\/4\.0/ in unparsed_version )
v = [$major=8,$minor=0];
}
else if ( / MSIE [0-9\.]*b?[0-9]*;/ in unparsed_version )
else if ( /Trident\/5\.0/ in unparsed_version )
v = [$major=9,$minor=0];
else if ( /Trident\/6\.0/ in unparsed_version )
v = [$major=10,$minor=0];
else
{
software_name = "MSIE";
parts = split_all(unparsed_version, /MSIE [0-9\.]*b?[0-9]*/);
parts = split_all(unparsed_version, /MSIE [0-9]{1,2}\.*[0-9]*b?[0-9]*/);
if ( 2 in parts )
v = parse(parts[2], host, software_type)$version;
}
}
else if ( /Version\/.*Safari\// in unparsed_version )
{
software_name = "Safari";
@@ -54,15 +54,12 @@ event http_entity_data(c: connection, is_orig: bool, length: count, data: string
## incorrect anyway.
event content_gap(c: connection, is_orig: bool, seq: count, length: count) &priority=5
{
if ( is_orig || ! c?$http ) return;
if ( is_orig || ! c?$http || ! c$http$calculating_md5 ) return;
set_state(c, F, is_orig);
if ( c$http$calculating_md5 )
{
c$http$calculating_md5 = F;
md5_hash_finish(c$id);
}
}
## When the file finishes downloading, finish the hash and generate a notice.
event http_message_done(c: connection, is_orig: bool, stat: http_message_stat) &priority=-3
@@ -18,6 +18,9 @@ export {
ts: time &log;
uid: string &log;
id: conn_id &log;
## This represents the pipelined depth into the connection of this
## request/response transaction.
trans_depth: count &log;
## The verb used in the HTTP request (GET, POST, HEAD, etc.).
method: string &log &optional;
## The value of the HOST header.
@@ -33,17 +36,9 @@ export {
## The actual uncompressed content size of the data transferred from
## the client.
request_body_len: count &log &default=0;
## This indicates whether or not there was an interruption while the
## request body was being sent.
request_body_interrupted: bool &log &default=F;
## The actual uncompressed content size of the data transferred from
## the server.
response_body_len: count &log &default=0;
## This indicates whether or not there was an interruption while the
## request body was being sent. An interruption could cause hash
## calculation to fail and a number of other problems since the
## analyzer may not be able to get back on track with the connection.
response_body_interrupted: bool &log &default=F;
## The status code returned by the server.
status_code: count &log &optional;
## The status message returned by the server.
@@ -131,6 +126,9 @@ function new_http_session(c: connection): Info
tmp$ts=network_time();
tmp$uid=c$uid;
tmp$id=c$id;
# $current_request is set prior to the Info record creation so we
# can use the value directly here.
tmp$trans_depth = c$http_state$current_request;
return tmp;
}
@@ -253,15 +251,9 @@ event http_message_done(c: connection, is_orig: bool, stat: http_message_stat) &
set_state(c, F, is_orig);
if ( is_orig )
{
c$http$request_body_len = stat$body_length;
c$http$request_body_interrupted = stat$interrupted;
}
else
{
c$http$response_body_len = stat$body_length;
c$http$response_body_interrupted = stat$interrupted;
}
}
event http_message_done(c: connection, is_orig: bool, stat: http_message_stat) &priority = -5
@@ -19,9 +19,9 @@ export {
ts: time &log;
uid: string &log;
id: conn_id &log;
## Internally generated "message id" that ties back to the particular
## message in the SMTP log where this entity was seen.
mid: string &log;
## A count to represent the depth of this message transaction in a
## single connection where multiple messages were transferred.
trans_depth: count &log;
## The filename seen in the Content-Disposition header.
filename: string &log &optional;
## Track how many bytes of the MIME encoded file have been seen.
@@ -90,7 +90,7 @@ function set_session(c: connection, new_entity: bool)
info$ts=network_time();
info$uid=c$uid;
info$id=c$id;
info$mid=c$smtp$mid;
info$trans_depth=c$smtp$trans_depth;
c$smtp$current_entity = info;
++c$smtp_state$mime_level;
@@ -11,10 +11,9 @@ export {
ts: time &log;
uid: string &log;
id: conn_id &log;
## This is an internally generated "message id" that can be used to
## map between SMTP messages and MIME entities in the SMTP entities
## log.
mid: string &log;
## A count indicating how many messages deep into this connection
## this particular message was transferred.
trans_depth: count &log;
helo: string &log &optional;
mailfrom: string &log &optional;
rcptto: set[string] &log &optional;
@@ -98,8 +97,11 @@ function new_smtp_log(c: connection): Info
l$ts=network_time();
l$uid=c$uid;
l$id=c$id;
l$mid=unique_id("@");
if ( c?$smtp_state && c$smtp_state?$helo )
# The messages_transferred count isn't incremented until the message is
# finished so we need to increment the count by 1 here.
l$trans_depth = c$smtp_state$messages_transferred+1;
if ( c$smtp_state?$helo )
l$helo = c$smtp_state$helo;
# The path will always end with the hosts involved in this connection.
@@ -165,7 +167,6 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
msg: string, cont_resp: bool) &priority=-5
{
set_smtp_session(c);
if ( cmd == "." )
{
# Track the number of messages seen in this session.
@@ -103,25 +103,35 @@ function check_ssh_connection(c: connection, done: bool)
return;
# Make sure conn_size_analyzer is active by checking
# resp$num_bytes_ip
# resp$num_bytes_ip. In general it should always be active though.
if ( ! c$resp?$num_bytes_ip )
return;
# If this is still a live connection and the byte count has not
# crossed the threshold, just return and let the rescheduled check happen later.
if ( !done && c$resp$num_bytes_ip < authentication_data_size )
# Remove the IP and TCP header length from the total size.
# TODO: Fix for IPv6. This whole approach also seems to break in some
# cases where there are more header bytes than num_bytes_ip.
local header_bytes = c$resp$num_pkts*32 + c$resp$num_pkts*20;
local server_bytes = c$resp$num_bytes_ip;
if ( server_bytes >= header_bytes )
server_bytes = server_bytes - header_bytes;
else
server_bytes = c$resp$size;
# If this is still a live connection and the byte count has not crossed
# the threshold, just return and let the rescheduled check happen later.
if ( ! done && server_bytes < authentication_data_size )
return;
# Make sure the server has sent back at least 50 bytes to filter out
# hosts that are just port scanning. Nothing is ever logged if the
# server sends back fewer than 50 bytes.
if ( c$resp$num_bytes_ip < 50 )
if ( server_bytes < 50 )
return;
c$ssh$direction = Site::is_local_addr(c$id$orig_h) ? OUTBOUND : INBOUND;
c$ssh$resp_size = c$resp$num_bytes_ip;
c$ssh$resp_size = server_bytes;
if ( c$resp$num_bytes_ip < authentication_data_size )
if ( server_bytes < authentication_data_size )
{
c$ssh$status = "failure";
event SSH::heuristic_failed_login(c);
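The byte arithmetic in the hunk above can be factored into a small helper to make the estimate explicit. A sketch (hedged: the helper name is invented; the 32 + 20 bytes-per-packet overhead figure is the script's own rough assumption and, as its TODO notes, breaks for IPv6 and some edge cases):

```bro
# Hypothetical helper mirroring the estimate above: subtract an assumed
# fixed per-packet header overhead from the IP-level byte count,
# clamping at zero when the estimate exceeds the observed bytes.
function estimated_payload_bytes(num_pkts: count, num_bytes_ip: count): count
    {
    local header_bytes = num_pkts * 32 + num_pkts * 20;
    return num_bytes_ip >= header_bytes ? num_bytes_ip - header_bytes : 0;
    }
```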
@@ -493,40 +493,41 @@ export {
} &default="UNKNOWN";
const x509_errors: table[count] of string = {
[0] = "X509_V_OK",
[1] = "X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT",
[2] = "X509_V_ERR_UNABLE_TO_GET_CRL",
[3] = "X509_V_ERR_UNABLE_TO_DECRYPT_CERT_SIGNATURE",
[4] = "X509_V_ERR_UNABLE_TO_DECRYPT_CRL_SIGNATURE",
[5] = "X509_V_ERR_UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY",
[6] = "X509_V_ERR_CERT_SIGNATURE_FAILURE",
[7] = "X509_V_ERR_CRL_SIGNATURE_FAILURE",
[8] = "X509_V_ERR_CERT_NOT_YET_VALID",
[9] = "X509_V_ERR_CERT_HAS_EXPIRED",
[10] = "X509_V_ERR_CRL_NOT_YET_VALID",
[11] = "X509_V_ERR_CRL_HAS_EXPIRED",
[12] = "X509_V_ERR_ERROR_IN_CERT_NOT_BEFORE_FIELD",
[13] = "X509_V_ERR_ERROR_IN_CERT_NOT_AFTER_FIELD",
[14] = "X509_V_ERR_ERROR_IN_CRL_LAST_UPDATE_FIELD",
[15] = "X509_V_ERR_ERROR_IN_CRL_NEXT_UPDATE_FIELD",
[16] = "X509_V_ERR_OUT_OF_MEM",
[17] = "X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT",
[18] = "X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN",
[19] = "X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY",
[20] = "X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE",
[21] = "X509_V_ERR_CERT_CHAIN_TOO_LONG",
[22] = "X509_V_ERR_CERT_REVOKED",
[23] = "X509_V_ERR_INVALID_CA",
[24] = "X509_V_ERR_PATH_LENGTH_EXCEEDED",
[25] = "X509_V_ERR_INVALID_PURPOSE",
[26] = "X509_V_ERR_CERT_UNTRUSTED",
[27] = "X509_V_ERR_CERT_REJECTED",
[28] = "X509_V_ERR_SUBJECT_ISSUER_MISMATCH",
[29] = "X509_V_ERR_AKID_SKID_MISMATCH",
[30] = "X509_V_ERR_AKID_ISSUER_SERIAL_MISMATCH",
[31] = "X509_V_ERR_KEYUSAGE_NO_CERTSIGN",
[32] = "X509_V_ERR_UNABLE_TO_GET_CRL_ISSUER",
[33] = "X509_V_ERR_UNHANDLED_CRITICAL_EXTENSION"
[0] = "ok",
[1] = "unable to get issuer cert",
[2] = "unable to get crl",
[3] = "unable to decrypt cert signature",
[4] = "unable to decrypt crl signature",
[5] = "unable to decode issuer public key",
[6] = "cert signature failure",
[7] = "crl signature failure",
[8] = "cert not yet valid",
[9] = "cert has expired",
[10] = "crl not yet valid",
[11] = "crl has expired",
[12] = "error in cert not before field",
[13] = "error in cert not after field",
[14] = "error in crl last update field",
[15] = "error in crl next update field",
[16] = "out of mem",
[17] = "depth zero self signed cert",
[18] = "self signed cert in chain",
[19] = "unable to get issuer cert locally",
[20] = "unable to verify leaf signature",
[21] = "cert chain too long",
[22] = "cert revoked",
[23] = "invalid ca",
[24] = "path length exceeded",
[25] = "invalid purpose",
[26] = "cert untrusted",
[27] = "cert rejected",
[28] = "subject issuer mismatch",
[29] = "akid skid mismatch",
[30] = "akid issuer serial mismatch",
[31] = "keyusage no certsign",
[32] = "unable to get crl issuer",
[33] = "unhandled critical extension"
};
}
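The table above is indexed by OpenSSL's numeric verification result, so a script can translate a code into readable text. A minimal sketch (hedged: the `SSL` module prefix and the helper name are assumptions; the table has no `&default`, so unknown codes would be a lookup error):

```bro
# Hypothetical: translate an X.509 verification code to readable text.
# E.g. code 18 yields "self signed cert in chain" per the table above.
function verify_error_string(code: count): string
    {
    return SSL::x509_errors[code];
    }
```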
File diff suppressed because one or more lines are too long
@@ -8,5 +8,5 @@ module Communication;
event bro_init() &priority=-10
{
enable_communication();
listen(listen_interface, listen_port, listen_encrypted);
listen(listen_interface, listen_port, listen_ssl);
}
@@ -65,7 +65,10 @@ function configuration_update_func(p: event_peer)
# We don't want to update non-const globals because that's usually
# where state is stored and those values will frequently be declared
# with &redef so that attributes can be redefined.
if ( t$constant && t$redefinable )
#
# NOTE: functions are currently not fully supported for serialization and hence
# aren't sent.
if ( t$constant && t$redefinable && t$type_name != "func" )
{
send_id(p, id);
++cnt;
@@ -1,9 +1,14 @@
##!
module LoadedScripts;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Name of the loaded script, potentially with spaces included before
## the file name to indicate load depth. The convention is two spaces
## per level of depth.
name: string &log;
};
}
@@ -29,6 +29,8 @@ export {
}
redef record connection += {
## Indicates whether the processing for detecting and logging the
## service for this connection is complete.
known_services_done: bool &default=F;
};
@@ -60,14 +62,15 @@ function known_services_done(c: connection)
c$known_services_done = T;
if ( ! addr_matches_host(id$resp_h, service_tracking) ||
"ftp-data" in c$service ) # don't include ftp data sessions
"ftp-data" in c$service || # don't include ftp data sessions
("DNS" in c$service && c$resp$size == 0) ) # for dns, require that the server talks.
return;
# If no protocol was detected, wait a short
# time before attempting to log in case a protocol is detected
# on another connection.
if ( |c$service| == 0 )
schedule 2mins { log_it(network_time(), id$resp_h, id$resp_p, c$service) };
schedule 5min { log_it(network_time(), id$resp_h, id$resp_p, c$service) };
else
event log_it(network_time(), id$resp_h, id$resp_p, c$service);
}
@@ -1,11 +1,12 @@
##! This script handles core generated connection related "weird" events to
##! push weird information about connections into the weird framework.
##! For live operational deployments, this can frequently cause load issues
##! due to large numbers of these events being passed between nodes.
##! due to large numbers of these events and quite possibly shouldn't be
##! loaded.
@load base/frameworks/notice
module Weird;
module Conn;
export {
redef enum Notice::Type += {
@@ -19,15 +20,12 @@ export {
}
event rexmit_inconsistency(c: connection, t1: string, t2: string)
{
if ( c$id !in did_inconsistency_msg )
{
NOTICE([$note=Retransmission_Inconsistency,
$conn=c,
$msg=fmt("%s rexmit inconsistency (%s) (%s)",
id_string(c$id), t1, t2)]);
add did_inconsistency_msg[c$id];
}
id_string(c$id), t1, t2),
$identifier=fmt("%s", c$id)]);
}
event ack_above_hole(c: connection)
@@ -1,3 +1,8 @@
##! This script adds authoritative and additional responses for the current
##! query to the DNS log. It can cause severe overhead due to the need
##! to generate events for all authoritative and additional responses.
##! This script is not recommended for use on heavily loaded links.
@load base/protocols/dns/main
redef dns_skip_all_auth = F;
@@ -7,12 +12,14 @@ module DNS;
export {
redef record Info += {
## Authoritative responses for the query.
auth: set[string] &log &optional;
## Additional responses for the query.
addl: set[string] &log &optional;
};
}
event do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=4
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=4
{
# The "ready" flag will be set here. This causes the setting from the
# base script to be overridden since the base script will log immediately
@@ -1,14 +1,9 @@
##! Script for detecting strange activity within DNS.
##!
##! Notices raised:
##!
##! * :bro:enum:`DNS::External_Name`
##!
##! A remote host resolves to a local host, but the name is not considered
##! to be within a local zone. :bro:id:`local_zones` variable **must**
##! be set appropriately for this detection.
##! This script detects names that are not within zones considered local
##! but that resolve to addresses considered local.
##! The :bro:id:`Site::local_zones` variable **must** be set appropriately for
##! this detection.
@load base/frameworks/notice/main
@load base/frameworks/notice
@load base/utils/site
module DNS;
@@ -16,8 +11,8 @@ module DNS;
export {
redef enum Notice::Type += {
## Raised when a non-local name is found to be pointing at a local host.
## This only works appropriately when all of your authoritative DNS
## servers are located in your :bro:id:`Site::local_nets`.
## :bro:id:`Site::local_zones` variable **must** be set appropriately
## for this detection.
External_Name,
};
}
@@ -30,11 +25,11 @@ event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priori
# Check for responses from remote hosts that point at local hosts
# but the name is not considered to be within a "local" zone.
if ( Site::is_local_addr(a) && # referring to a local host
!Site::is_local_addr(c$id$resp_h) && # response from an external nameserver
! Site::is_local_name(ans$query) ) # name isn't in a local zone.
{
NOTICE([$note=External_Name,
$msg=fmt("%s is pointing to a local host - %s.", ans$query, a),
$conn=c]);
$conn=c,
$identifier=cat(a,ans$query)]);
}
}
@@ -1,5 +1,7 @@
@load base/frameworks/notice/main
@load base/protocols/ftp/main
##! Detect various potentially bad FTP activities.
@load base/frameworks/notice
@load base/protocols/ftp
module FTP;
@@ -21,6 +23,7 @@ event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) &prior
/[Ee][Xx][Ee][Cc]/ in c$ftp$cmdarg$arg )
{
NOTICE([$note=Site_Exec_Success, $conn=c,
$msg=fmt("%s %s", c$ftp$cmdarg$cmd, c$ftp$cmdarg$arg)]);
$msg=fmt("FTP command: %s %s", c$ftp$cmdarg$cmd, c$ftp$cmdarg$arg),
$identifier=cat(c$id$orig_h, c$id$resp_h, "SITE EXEC")]);
}
}
@@ -1,12 +1,12 @@
##! Software detection with the FTP protocol.
##!
##! TODO:
##!
##! * Detect server software with initial 220 message
##! * Detect client software with password given for anonymous users
##! (e.g. cyberduck@example.net)
@load base/frameworks/software/main
# TODO:
#
# * Detect server software with initial 220 message
# * Detect client software with password given for anonymous users
# (e.g. cyberduck@example.net)
@load base/frameworks/software
module FTP;
@@ -4,10 +4,8 @@
##! documentation for the :doc:`base/protocols/http/file-hash.bro` script to see how to
##! configure which transfers will have hashes calculated.
@load base/frameworks/notice/main
@load base/protocols/http/main
@load base/protocols/http/utils
@load base/protocols/http/file-hash
@load base/frameworks/notice
@load base/protocols/http
export {
redef enum Notice::Type += {


@ -1,4 +1,4 @@
##! SQL injection detection in HTTP.
##! SQL injection attack detection in HTTP.
@load base/frameworks/notice
@load base/frameworks/metrics
@ -8,7 +8,10 @@ module HTTP;
export {
redef enum Notice::Type += {
## Indicates that a host performing SQL injection attacks was detected.
SQL_Injection_Attacker,
## Indicates that a host was seen to have SQL injection attacks against
## it. This is tracked by IP address as opposed to hostname.
SQL_Injection_Attack_Against,
};
@ -49,6 +52,10 @@ export {
event bro_init() &priority=3
{
# Add filters to the metrics so that the metrics framework knows how to
# determine when it looks like an actual attack and how to respond when
# thresholds are crossed.
Metrics::add_filter(SQL_ATTACKER, [$log=F,
$notice_threshold=sqli_requests_threshold,
$break_interval=sqli_requests_interval,


@ -1,7 +1,6 @@
@load base/frameworks/signatures/main
@load base/frameworks/software/main
@load base/protocols/http/main
@load base/protocols/http/utils
@load base/frameworks/signatures
@load base/frameworks/software
@load base/protocols/http
module HTTP;


@ -1,5 +1,6 @@
##! This script takes advantage of a few ways that installed plugin information
##! leaks from web browsers
##! leaks from web browsers.
@load base/protocols/http
@load base/frameworks/software


@ -1,6 +1,6 @@
##! Software identification and extraction for HTTP traffic.
@load base/frameworks/software/main
@load base/frameworks/software
module HTTP;


@ -1,7 +1,6 @@
##! This script extracts and logs variables from the requested URI
@load base/protocols/http/main
@load base/protocols/http/utils
@load base/protocols/http
module HTTP;


@ -38,7 +38,8 @@ export {
const ignore_guessers: table[subnet] of subnet &redef;
## Keeps track of hosts identified as guessing passwords.
global password_guessers: set[addr] &read_expire=guessing_timeout+1hr &synchronized;
global password_guessers: set[addr]
&read_expire=guessing_timeout+1hr &synchronized &redef;
}
event bro_init()


@ -1,8 +1,8 @@
##! This implements all of the additional information and geodata detections
##! for SSH analysis.
@load base/frameworks/notice/main
@load base/protocols/ssh/main
@load base/frameworks/notice
@load base/protocols/ssh
module SSH;
@ -11,17 +11,17 @@ export {
## If an SSH login is seen to or from a "watched" country based on the
## :bro:id:`SSH::watched_countries` variable then this notice will
## be generated.
Login_From_Watched_Country,
Watched_Country_Login,
};
## The set of countries for which you'd like to throw notices upon
## successful login
const watched_countries: set[string] = {"RO"} &redef;
redef record Info += {
## Add geographic data related to the "remote" host of the connection.
remote_location: geo_location &log &optional;
};
## The set of countries for which you'd like to throw notices upon
## successful login
const watched_countries: set[string] = {"RO"} &redef;
}
event SSH::heuristic_successful_login(c: connection) &priority=5
@ -35,8 +35,10 @@ event SSH::heuristic_successful_login(c: connection) &priority=5
if ( location?$country_code && location$country_code in watched_countries )
{
NOTICE([$note=Login_From_Watched_Country,
NOTICE([$note=Watched_Country_Login,
$conn=c,
$msg=fmt("SSH login from watched country: %s", location$country_code)]);
$msg=fmt("SSH login %s watched country: %s",
(c$ssh$direction == OUTBOUND) ? "to" : "from",
location$country_code)]);
}
}


@ -1,15 +1,19 @@
@load base/frameworks/notice/main
##! This script will generate a notice if an apparent SSH login originates
##! or heads to a host with a reverse hostname that looks suspicious. By
##! default, the regular expression to match "interesting" hostnames includes
##! names that are typically used for infrastructure hosts like nameservers,
##! mail servers, web servers and ftp servers.
@load base/frameworks/notice
module SSH;
export {
redef enum Notice::Type += {
## Generated if a login originates from a host matched by the
## Generated if a login originates or responds with a host and the
## reverse hostname lookup resolves to a name matched by the
## :bro:id:`interesting_hostnames` regular expression.
Login_From_Interesting_Hostname,
## Generated if a login goes to a host matched by the
## :bro:id:`interesting_hostnames` regular expression.
Login_To_Interesting_Hostname,
Interesting_Hostname_Login,
};
## Strange/bad host names to see successful SSH logins from or to.
@ -25,27 +29,17 @@ export {
event SSH::heuristic_successful_login(c: connection)
{
# Check to see if this login came from an interesting hostname.
when ( local orig_hostname = lookup_addr(c$id$orig_h) )
for ( host in set(c$id$orig_h, c$id$resp_h) )
{
if ( interesting_hostnames in orig_hostname )
when ( local hostname = lookup_addr(host) )
{
NOTICE([$note=Login_From_Interesting_Hostname,
$conn=c,
$msg=fmt("Interesting login from hostname: %s", orig_hostname),
$sub=orig_hostname]);
if ( interesting_hostnames in hostname )
{
NOTICE([$note=Interesting_Hostname_Login,
$msg=fmt("Interesting login from hostname: %s", hostname),
$sub=hostname, $conn=c]);
}
}
# Check to see if this login went to an interesting hostname.
when ( local resp_hostname = lookup_addr(c$id$resp_h) )
{
if ( interesting_hostnames in resp_hostname )
{
NOTICE([$note=Login_To_Interesting_Hostname,
$conn=c,
$msg=fmt("Interesting login to hostname: %s", resp_hostname),
$sub=resp_hostname]);
}
}
}


@ -1,4 +1,7 @@
@load base/frameworks/software/main
##! This script extracts SSH client and server information from SSH
##! connections and forwards it to the software framework.
@load base/frameworks/software
module SSH;


@ -47,7 +47,8 @@ event bro_init() &priority=5
event x509_certificate(c: connection, cert: X509, is_server: bool, chain_idx: count, chain_len: count, der_cert: string) &priority=3
{
# Make sure this is the server cert and we have a hash for it.
if ( chain_idx == 0 && ! c$ssl?$cert_hash ) return;
if ( chain_idx != 0 || ! c$ssl?$cert_hash )
return;
local host = c$id$resp_h;
if ( [host, c$ssl$cert_hash] !in certs && addr_matches_host(host, cert_tracking) )


@ -1,26 +1,29 @@
##! Perform full certificate chain validation for SSL certificates.
@load base/frameworks/notice/main
@load base/protocols/ssl/main
@load base/frameworks/notice
@load base/protocols/ssl
@load protocols/ssl/cert-hash
module SSL;
export {
redef enum Notice::Type += {
## This notice indicates that the result of validating the certificate
## along with its full certificate chain was invalid.
Invalid_Server_Cert
};
redef record Info += {
## This stores and logs the result of certificate validation for
## this connection.
validation_status: string &log &optional;
};
## MD5 hash values for recently validated certs along with the validation
## status message are kept in this table so avoid constant validation
## status message are kept in this table to avoid constant validation
## every time the same certificate is seen.
global recently_validated_certs: table[string] of string = table()
&read_expire=5mins &synchronized;
&read_expire=5mins &synchronized &redef;
}
event ssl_established(c: connection) &priority=3


@ -1,3 +1,2 @@
@load ./remove-high-volume-notices
@load ./packet-fragments
@load ./warnings


@ -1,10 +0,0 @@
##! This strives to tune out high volume and less useful data
##! from the notice log.
@load base/frameworks/notice
@load base/frameworks/notice/weird
redef Notice::ignored_types += {
## Only allow these to go in the weird log.
Weird::Weird_Activity,
};


@ -20,6 +20,10 @@ redef Software::vulnerable_versions += {
# This adds signatures to detect cleartext forward and reverse windows shells.
redef signature_files += "frameworks/signatures/detect-windows-shells.sig";
# Uncomment the following line to begin receiving (by default hourly) emails
# containing all of your notices.
# redef Notice::policy += { [$action = Notice::ACTION_ALARM, $priority = 0] };
# Load all of the scripts that detect software in various protocols.
@load protocols/http/software
#@load protocols/http/detect-webapps


@ -58,6 +58,5 @@
@load tuning/__load__.bro
@load tuning/defaults/__load__.bro
@load tuning/defaults/packet-fragments.bro
@load tuning/defaults/remove-high-volume-notices.bro
@load tuning/defaults/warnings.bro
@load tuning/track-all-assets.bro


@ -426,19 +426,7 @@ void Analyzer::ForwardPacket(int len, const u_char* data, bool is_orig,
if ( ! (current->finished || current->removing ) )
current->NextPacket(len, data, is_orig, seq, ip, caplen);
else
{
if ( removing )
{
current->Done();
removing = false;
}
// Analyzer has already been disabled so delete it.
DBG_LOG(DBG_DPD, "%s deleted child %s",
fmt_analyzer(this).c_str(), fmt_analyzer(current).c_str());
children.erase(--i);
delete current;
}
DeleteChild(--i);
}
AppendNewChildren();
@ -461,19 +449,7 @@ void Analyzer::ForwardStream(int len, const u_char* data, bool is_orig)
if ( ! (current->finished || current->removing ) )
current->NextStream(len, data, is_orig);
else
{
// Analyzer has already been disabled so delete it.
if ( current->removing )
{
current->Done();
removing = false;
}
DBG_LOG(DBG_DPD, "%s deleted child %s",
fmt_analyzer(this).c_str(), fmt_analyzer(current).c_str());
children.erase(--i);
delete current;
}
DeleteChild(--i);
}
AppendNewChildren();
@ -496,19 +472,7 @@ void Analyzer::ForwardUndelivered(int seq, int len, bool is_orig)
if ( ! (current->finished || current->removing ) )
current->NextUndelivered(seq, len, is_orig);
else
{
if ( current->removing )
{
current->Done();
removing = false;
}
// Analyzer has already been disabled so delete it.
DBG_LOG(DBG_DPD, "%s deleted child %s",
fmt_analyzer(this).c_str(), fmt_analyzer(current).c_str());
children.erase(--i);
delete current;
}
DeleteChild(--i);
}
AppendNewChildren();
@ -528,19 +492,7 @@ void Analyzer::ForwardEndOfData(bool orig)
if ( ! (current->finished || current->removing ) )
current->NextEndOfData(orig);
else
{
if ( current->removing )
{
current->Done();
removing = false;
}
// Analyzer has already been disabled so delete it.
DBG_LOG(DBG_DPD, "%s deleted child %s",
fmt_analyzer(this).c_str(), fmt_analyzer(current).c_str());
children.erase(--i);
delete current;
}
DeleteChild(--i);
}
AppendNewChildren();
@ -606,7 +558,7 @@ void Analyzer::RemoveChildAnalyzer(AnalyzerID id)
LOOP_OVER_CHILDREN(i)
if ( (*i)->id == id && ! ((*i)->finished || (*i)->removing) )
{
DBG_LOG(DBG_DPD, "%s disabled child %s", GetTagName(), id,
DBG_LOG(DBG_DPD, "%s disabling child %s", GetTagName(), id,
fmt_analyzer(this).c_str(), fmt_analyzer(*i).c_str());
// See comment above.
(*i)->removing = true;
@ -657,6 +609,26 @@ Analyzer* Analyzer::FindChild(AnalyzerTag::Tag arg_tag)
return 0;
}
void Analyzer::DeleteChild(analyzer_list::iterator i)
{
Analyzer* child = *i;
// Analyzer must have already been finished or marked for removal.
assert(child->finished || child->removing);
if ( child->removing )
{
child->Done();
child->removing = false;
}
DBG_LOG(DBG_DPD, "%s deleted child %s 3",
fmt_analyzer(this).c_str(), fmt_analyzer(child).c_str());
children.erase(i);
delete child;
}
void Analyzer::AddSupportAnalyzer(SupportAnalyzer* analyzer)
{
if ( HasSupportAnalyzer(analyzer->GetTag(), analyzer->IsOrig()) )
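The hunks above collapse four nearly identical "finish and delete the child" blocks into a single private `DeleteChild()` helper. A minimal sketch of that extract-helper refactoring, using illustrative names rather than Bro's real `Analyzer` API:

```cpp
#include <cassert>
#include <list>
#include <string>

// Parent owns a list of heap-allocated children; several traversal loops
// previously duplicated the same unlink-and-delete block, now one helper.
class Parent {
public:
    ~Parent() { for ( Child* c : children ) delete c; }
    void AddChild(const std::string& name)
        { children.push_back(new Child{name, false}); }
    void MarkForRemoval(const std::string& name)
        {
        for ( Child* c : children )
            if ( c->name == name )
                c->removing = true;
        }
    // Stand-in for the Forward*() loops: delete children marked for removal.
    void Sweep()
        {
        for ( auto i = children.begin(); i != children.end(); )
            {
            if ( (*i)->removing )
                DeleteChild(i++);   // pass old iterator, advance first
            else
                ++i;
            }
        }
    size_t Size() const { return children.size(); }

private:
    struct Child { std::string name; bool removing; };
    std::list<Child*> children;

    void DeleteChild(std::list<Child*>::iterator i)
        {
        Child* child = *i;
        assert(child->removing);   // caller guarantees the child is done
        children.erase(i);         // unlink, then free
        delete child;
        }
};
```

`std::list::erase` invalidates only the erased iterator, which is why the loop can advance `i` before handing the old position to the helper.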


@ -277,6 +277,10 @@ protected:
void AppendNewChildren();
private:
// Internal method to eventually delete a child analyzer that's
// already Done().
void DeleteChild(analyzer_list::iterator i);
AnalyzerTag::Tag tag;
AnalyzerID id;


@ -44,7 +44,7 @@ public:
if ( analyzer )
analyzer->Weird("base64_illegal_encoding", msg);
else
reporter->Error(msg);
reporter->Error("%s", msg);
}
protected:
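The one-character-looking change from `reporter->Error(msg)` to `reporter->Error("%s", msg)` closes a format-string hole: `msg` may carry attacker-influenced text containing `%` conversions. A sketch with a hypothetical printf-style sink standing in for `reporter->Error()`:

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <cstring>

static char last_error[256];

// printf-style sink, like reporter->Error() (hypothetical stand-in).
void Error(const char* fmt, ...)
    {
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(last_error, sizeof(last_error), fmt, ap);
    va_end(ap);
    }

// Safe pattern: route untrusted text through "%s" so any '%' in it is
// treated as literal data. Error(msg) would interpret msg as a format
// string, which is undefined behavior if it contains conversions.
void ReportUntrusted(const char* msg)
    {
    Error("%s", msg);
    }
```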


@ -335,6 +335,7 @@ set(bro_SRCS
LogMgr.cc
LogWriter.cc
LogWriterAscii.cc
LogWriterNone.cc
Login.cc
MIME.cc
NCP.cc


@ -522,7 +522,7 @@ ListVal* CompositeHash::RecoverVals(const HashKey* k) const
}
if ( kp != k_end )
reporter->InternalError("under-ran key in CompositeHash::DescribeKey %ld", k_end - kp);
reporter->InternalError("under-ran key in CompositeHash::DescribeKey %zd", k_end - kp);
return l;
}
@ -644,7 +644,7 @@ const char* CompositeHash::RecoverOneVal(const HashKey* k, const char* kp0,
Func* f = Func::GetFuncPtrByID(*kp);
if ( ! f )
reporter->InternalError("failed to look up unique function id %"PRIu32" in CompositeHash::RecoverOneVal()");
reporter->InternalError("failed to look up unique function id %" PRIu32 " in CompositeHash::RecoverOneVal()", *kp);
pval = new Val(f);


@ -370,6 +370,9 @@ DNS_Mgr::DNS_Mgr(DNS_MgrMode arg_mode)
cache_name = dir = 0;
asyncs_pending = 0;
num_requests = 0;
successful = 0;
failed = 0;
}
DNS_Mgr::~DNS_Mgr()
@ -592,6 +595,8 @@ void DNS_Mgr::Resolve()
}
else
--num_pending;
delete dr;
}
}
@ -847,6 +852,7 @@ const char* DNS_Mgr::LookupAddrInCache(dns_mgr_addr_type addr)
if ( d->Expired() )
{
dns_mgr->addr_mappings.Remove(&h);
delete d;
return 0;
}
@ -866,6 +872,7 @@ TableVal* DNS_Mgr::LookupNameInCache(string name)
{
HashKey h(name.c_str());
dns_mgr->host_mappings.Remove(&h);
delete d;
return 0;
}
@ -948,6 +955,8 @@ void DNS_Mgr::IssueAsyncRequests()
AsyncRequest* req = asyncs_queued.front();
asyncs_queued.pop_front();
++num_requests;
DNS_Mgr_Request* dr;
if ( req->IsAddrReq() )
dr = new DNS_Mgr_Request(req->host);
@ -957,6 +966,7 @@ void DNS_Mgr::IssueAsyncRequests()
if ( ! dr->MakeRequest(nb_dns) )
{
reporter->Warning("can't issue DNS request");
++failed;
req->Timeout();
continue;
}
@ -991,10 +1001,16 @@ void DNS_Mgr::CheckAsyncAddrRequest(dns_mgr_addr_type addr, bool timeout)
{
const char* name = LookupAddrInCache(addr);
if ( name )
{
++successful;
i->second->Resolved(name);
}
else if ( timeout )
{
++failed;
i->second->Timeout();
}
else
return;
@ -1020,12 +1036,16 @@ void DNS_Mgr::CheckAsyncHostRequest(const char* host, bool timeout)
if ( addrs )
{
++successful;
i->second->Resolved(addrs);
Unref(addrs);
}
else if ( timeout )
{
++failed;
i->second->Timeout();
}
else
return;
@ -1038,14 +1058,29 @@ void DNS_Mgr::CheckAsyncHostRequest(const char* host, bool timeout)
}
}
void DNS_Mgr::Flush()
{
DoProcess(false);
IterCookie* cookie = addr_mappings.InitForIteration();
DNS_Mapping* dm;
host_mappings.Clear();
addr_mappings.Clear();
}
void DNS_Mgr::Process()
{
DoProcess(false);
}
void DNS_Mgr::DoProcess(bool flush)
{
while ( asyncs_timeouts.size() > 0 )
{
AsyncRequest* req = asyncs_timeouts.top();
if ( req->time + DNS_TIMEOUT > current_time() )
if ( req->time + DNS_TIMEOUT > current_time() || flush )
break;
if ( req->IsAddrReq() )
@ -1086,6 +1121,8 @@ void DNS_Mgr::Process()
CheckAsyncHostRequest(dr->ReqHost(), true);
IssueAsyncRequests();
delete dr;
}
}
@ -1125,3 +1162,14 @@ int DNS_Mgr::AnswerAvailable(int timeout)
return status;
}
void DNS_Mgr::GetStats(Stats* stats)
{
stats->requests = num_requests;
stats->successful = successful;
stats->failed = failed;
stats->pending = asyncs_pending;
stats->cached_hosts = host_mappings.Length();
stats->cached_addresses = addr_mappings.Length();
}
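The DNS manager changes above follow a simple counter/snapshot pattern: bump a counter at each success and failure site, then copy everything out in one `GetStats()` call. A self-contained sketch (names mirror the diff but the class is illustrative):

```cpp
#include <cassert>

class Resolver {
public:
    struct Stats {
        unsigned long requests;     // counts only async requests
        unsigned long successful;
        unsigned long failed;
    };

    void OnIssue()    { ++num_requests; }  // IssueAsyncRequests()
    void OnResolved() { ++successful; }    // CheckAsync*Request() success
    void OnTimeout()  { ++failed; }        // timeout or send failure

    // Snapshot all counters at once, as DNS_Mgr::GetStats() does.
    void GetStats(Stats* stats) const
        {
        stats->requests = num_requests;
        stats->successful = successful;
        stats->failed = failed;
        }

private:
    unsigned long num_requests = 0;
    unsigned long successful = 0;
    unsigned long failed = 0;
};
```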


@ -49,6 +49,7 @@ public:
virtual ~DNS_Mgr();
bool Init();
void Flush();
// Looks up the address or addresses of the given host, and returns
// a set of addr.
@ -80,6 +81,17 @@ public:
void AsyncLookupAddr(dns_mgr_addr_type host, LookupCallback* callback);
void AsyncLookupName(string name, LookupCallback* callback);
struct Stats {
unsigned long requests; // These count only async requests.
unsigned long successful;
unsigned long failed;
unsigned long pending;
unsigned long cached_hosts;
unsigned long cached_addresses;
};
void GetStats(Stats* stats);
protected:
friend class LookupCallback;
friend class DNS_Mgr_Request;
@ -111,6 +123,9 @@ protected:
void CheckAsyncAddrRequest(dns_mgr_addr_type addr, bool timeout);
void CheckAsyncHostRequest(const char* host, bool timeout);
// Process outstanding requests.
void DoProcess(bool flush);
// IOSource interface.
virtual void GetFds(int* read, int* write, int* except);
virtual double NextTimestamp(double* network_time);
@ -202,6 +217,10 @@ protected:
TimeoutQueue asyncs_timeouts;
int asyncs_pending;
unsigned long num_requests;
unsigned long successful;
unsigned long failed;
};
extern DNS_Mgr* dns_mgr;


@ -797,7 +797,7 @@ int dbg_handle_debug_input()
input_line = (char*) safe_malloc(1024);
input_line[1023] = 0;
// ### Maybe it's not always stdin.
fgets(input_line, 1023, stdin);
input_line = fgets(input_line, 1023, stdin);
#endif
// ### Maybe not stdin; maybe do better cleanup.
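The change above captures `fgets()`'s return value, which silences warn-unused-result diagnostics; a fuller version of the pattern also acts on `NULL`, since on EOF or error the buffer contents are unspecified. A sketch:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Read one line into buf; report EOF/error instead of ignoring it.
bool read_line(FILE* in, char* buf, int cap)
    {
    if ( fgets(buf, cap, in) == NULL )
        return false;                 // EOF or read error: buf is unusable
    buf[strcspn(buf, "\n")] = 0;      // strip trailing newline, if present
    return true;
    }
```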


@ -72,7 +72,7 @@ void DebugLogger::EnableStreams(const char* s)
if ( strcasecmp("verbose", tok) == 0 )
verbose = true;
else
reporter->InternalError("unknown debug stream %s\n", tok);
reporter->FatalError("unknown debug stream %s\n", tok);
}
tok = strtok(0, ",");


@ -296,7 +296,7 @@ void ODesc::AddBytesRaw(const void* bytes, unsigned int n)
if ( ! write_failed )
// Most likely it's a "disk full" so report
// subsequent failures only once.
reporter->Error(fmt("error writing to %s: %s", f->Name(), strerror(errno)));
reporter->Error("error writing to %s: %s", f->Name(), strerror(errno));
write_failed = true;
return;


@ -67,6 +67,19 @@ Dictionary::Dictionary(dict_order ordering, int initial_size)
}
Dictionary::~Dictionary()
{
DeInit();
delete order;
}
void Dictionary::Clear()
{
DeInit();
Init(2);
tbl2 = 0;
}
void Dictionary::DeInit()
{
for ( int i = 0; i < num_buckets; ++i )
if ( tbl[i] )
@ -84,7 +97,6 @@ Dictionary::~Dictionary()
}
delete [] tbl;
delete order;
if ( tbl2 == 0 )
return;
@ -103,7 +115,9 @@ Dictionary::~Dictionary()
delete chain;
}
delete [] tbl2;
tbl2 = 0;
}
void* Dictionary::Lookup(const void* key, int key_size, hash_t hash) const


@ -118,11 +118,15 @@ public:
void MakeRobustCookie(IterCookie* cookie)
{ cookies.append(cookie); }
// Remove all entries.
void Clear();
unsigned int MemoryAllocation() const;
private:
void Init(int size);
void Init2(int size); // initialize second table for resizing
void DeInit();
// Internal version of Insert().
void* Insert(DictEntry* entry, int copy_key);
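The `Dictionary` change above is a classic teardown refactoring: the destructor's body moves into a private `DeInit()` so the new `Clear()` can reuse it and then re-initialize, instead of duplicating the bucket-freeing loop. A hypothetical minimal container showing the shape:

```cpp
#include <cassert>
#include <vector>

class Dict {
public:
    Dict() { Init(2); }
    ~Dict() { DeInit(); }

    void Insert(int v) { entries.push_back(new int(v)); }

    // Remove all entries: same teardown as the destructor, then a fresh Init.
    void Clear()
        {
        DeInit();
        Init(2);
        }

    int Length() const { return (int) entries.size(); }

private:
    std::vector<int*> entries;

    void Init(int /* initial_size */) { entries.clear(); }

    void DeInit()
        {
        for ( int* e : entries )   // shared freeing logic, written once
            delete e;
        entries.clear();
        }
};
```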


@ -42,7 +42,17 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen)
{
val_list* args = new val_list;
args->append(BuildHeader(ip4));
try
{
discard_packet = check_ip->Call(args)->AsBool();
}
catch ( InterpreterException& e )
{
discard_packet = false;
}
delete args;
if ( discard_packet )
@ -90,7 +100,17 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen)
args->append(BuildHeader(ip4));
args->append(BuildHeader(tp, len));
args->append(BuildData(data, th_len, len, caplen));
try
{
discard_packet = check_tcp->Call(args)->AsBool();
}
catch ( InterpreterException& e )
{
discard_packet = false;
}
delete args;
}
}
@ -106,7 +126,17 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen)
args->append(BuildHeader(ip4));
args->append(BuildHeader(up));
args->append(BuildData(data, uh_len, len, caplen));
try
{
discard_packet = check_udp->Call(args)->AsBool();
}
catch ( InterpreterException& e )
{
discard_packet = false;
}
delete args;
}
}
@ -120,7 +150,17 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen)
val_list* args = new val_list;
args->append(BuildHeader(ip4));
args->append(BuildHeader(ih));
try
{
discard_packet = check_icmp->Call(args)->AsBool();
}
catch ( InterpreterException& e )
{
discard_packet = false;
}
delete args;
}
}
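The `try`/`catch` blocks added throughout this commit all follow one pattern: a script-level callback may now throw (`InterpreterException`), so each call site wraps the invocation and falls back to a safe default rather than letting one bad handler take down the packet path. A sketch using standard types in place of Bro's `Func::Call()`:

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>

// Invoke a user-supplied predicate; on exception, return the caller's
// safe default (e.g. discard_packet = false in the Discarder hunks).
bool SafeCall(const std::function<bool()>& pred, bool fallback)
    {
    try
        {
        return pred();
        }
    catch ( const std::exception& )
        {
        // Error was already reported where it was thrown.
        return fallback;
        }
    }
```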


@ -41,7 +41,16 @@ protected:
if ( handler->ErrorHandler() )
reporter->BeginErrorHandler();
try
{
handler->Call(args, no_remote);
}
catch ( InterpreterException& e )
{
// Already reported.
}
if ( obj )
// obj->EventDone();
Unref(obj);


@ -68,6 +68,7 @@ void EventHandler::Call(val_list* vl, bool no_remote)
}
if ( local )
// No try/catch here; we pass exceptions upstream.
Unref(local->Call(vl));
else
{


@ -221,7 +221,9 @@ bool Expr::DoUnserialize(UnserialInfo* info)
tag = BroExprTag(c);
UNSERIALIZE_OPTIONAL(type, BroType::Unserialize(info));
BroType* t = 0;
UNSERIALIZE_OPTIONAL(t, BroType::Unserialize(info));
SetType(t);
return true;
}
@ -3116,8 +3118,9 @@ Val* FieldExpr::Fold(Val* v) const
return def_attr->AttrExpr()->Eval(0);
else
{
Internal("field value missing");
return 0;
reporter->ExprRuntimeError(this, "field value missing");
assert(false);
return 0; // Will never get here, but compiler can't tell.
}
}
@ -3728,6 +3731,7 @@ Val* RecordMatchExpr::Fold(Val* v1, Val* v2) const
}
}
// No try/catch here; we pass exceptions upstream.
Val* pred_val =
match_rec->Lookup(pred_field_index)->AsFunc()->Call(&args);
bool is_zero = pred_val->IsZero();
@ -4218,7 +4222,7 @@ Val* FlattenExpr::Fold(Val* v) const
l->Append(fa->AttrExpr()->Eval(0));
else
Internal("missing field value");
reporter->ExprRuntimeError(this, "missing field value");
}
return l;
@ -4644,7 +4648,7 @@ Val* CallExpr::Eval(Frame* f) const
if ( f )
f->SetCall(this);
ret = func->Call(v, f);
ret = func->Call(v, f); // No try/catch here; we pass exceptions upstream.
if ( f )
f->ClearCall();
// Don't Unref() the arguments, as Func::Call already did that.


@ -149,7 +149,7 @@ BroFile::BroFile(const char* arg_name, const char* arg_access, BroType* arg_t)
t = arg_t ? arg_t : base_type(TYPE_STRING);
if ( ! Open() )
{
reporter->Error(fmt("cannot open %s: %s", name, strerror(errno)));
reporter->Error("cannot open %s: %s", name, strerror(errno));
is_open = 0;
okay_to_manage = 0;
}
@ -285,7 +285,7 @@ FILE* BroFile::BringIntoCache()
if ( ! f )
{
reporter->Error("can't open %s", this);
reporter->Error("can't open %s", name);
f = fopen("/dev/null", "w");
@ -641,7 +641,7 @@ void BroFile::InitEncrypt(const char* keyfile)
if ( ! key )
{
reporter->Error(fmt("can't open key file %s: %s", keyfile, strerror(errno)));
reporter->Error("can't open key file %s: %s", keyfile, strerror(errno));
Close();
return;
}
@ -649,8 +649,8 @@ void BroFile::InitEncrypt(const char* keyfile)
pub_key = PEM_read_PUBKEY(key, 0, 0, 0);
if ( ! pub_key )
{
reporter->Error(fmt("can't read key from %s: %s", keyfile,
ERR_error_string(ERR_get_error(), 0)));
reporter->Error("can't read key from %s: %s", keyfile,
ERR_error_string(ERR_get_error(), 0));
Close();
return;
}
@ -671,8 +671,8 @@ void BroFile::InitEncrypt(const char* keyfile)
if ( ! EVP_SealInit(cipher_ctx, cipher_type, &psecret,
(int*) &secret_len, iv, &pub_key, 1) )
{
reporter->Error(fmt("can't init cipher context for %s: %s", keyfile,
ERR_error_string(ERR_get_error(), 0)));
reporter->Error("can't init cipher context for %s: %s", keyfile,
ERR_error_string(ERR_get_error(), 0));
Close();
return;
}
@ -684,8 +684,8 @@ void BroFile::InitEncrypt(const char* keyfile)
fwrite(secret, ntohl(secret_len), 1, f) &&
fwrite(iv, iv_len, 1, f)) )
{
reporter->Error(fmt("can't write header to log file %s: %s",
name, strerror(errno)));
reporter->Error("can't write header to log file %s: %s",
name, strerror(errno));
Close();
return;
}
@ -709,8 +709,8 @@ void BroFile::FinishEncrypt()
if ( outl && ! fwrite(cipher_buffer, outl, 1, f) )
{
reporter->Error(fmt("write error for %s: %s",
name, strerror(errno)));
reporter->Error("write error for %s: %s",
name, strerror(errno));
return;
}
@ -741,17 +741,17 @@ int BroFile::Write(const char* data, int len)
if ( ! EVP_SealUpdate(cipher_ctx, cipher_buffer, &outl,
(unsigned char*)data, inl) )
{
reporter->Error(fmt("encryption error for %s: %s",
reporter->Error("encryption error for %s: %s",
name,
ERR_error_string(ERR_get_error(), 0)));
ERR_error_string(ERR_get_error(), 0));
Close();
return 0;
}
if ( outl && ! fwrite(cipher_buffer, outl, 1, f) )
{
reporter->Error(fmt("write error for %s: %s",
name, strerror(errno)));
reporter->Error("write error for %s: %s",
name, strerror(errno));
Close();
return 0;
}
@ -798,7 +798,7 @@ void BroFile::UpdateFileSize()
struct stat s;
if ( fstat(fileno(f), &s) < 0 )
{
reporter->Error(fmt("can't stat fd for %s: %s", name, strerror(errno)));
reporter->Error("can't stat fd for %s: %s", name, strerror(errno));
current_size = 0;
return;
}


@ -74,11 +74,11 @@ void File_Analyzer::InitMagic(magic_t* magic, int flags)
*magic = magic_open(flags);
if ( ! *magic )
reporter->Error(fmt("can't init libmagic: %s", magic_error(*magic)));
reporter->Error("can't init libmagic: %s", magic_error(*magic));
else if ( magic_load(*magic, 0) < 0 )
{
reporter->Error(fmt("can't load magic file: %s", magic_error(*magic)));
reporter->Error("can't load magic file: %s", magic_error(*magic));
magic_close(*magic);
*magic = 0;
}


@ -9,6 +9,7 @@
#include "Net.h"
#include "LogWriterAscii.h"
#include "LogWriterNone.h"
// Structure describing a log writer type.
struct LogWriterDefinition {
@ -20,6 +21,7 @@ struct LogWriterDefinition {
// Static table defining all available log writers.
LogWriterDefinition log_writers[] = {
{ BifEnum::Log::WRITER_NONE, "None", 0, LogWriterNone::Instantiate },
{ BifEnum::Log::WRITER_ASCII, "Ascii", 0, LogWriterAscii::Instantiate },
// End marker, don't touch.
@ -888,9 +890,18 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns)
// to log this record.
val_list vl(1);
vl.append(columns->Ref());
int result = 1;
try
{
Val* v = filter->pred->Call(&vl);
int result = v->AsBool();
result = v->AsBool();
Unref(v);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
if ( ! result )
continue;
@ -920,7 +931,17 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns)
vl.append(rec_arg);
Val* v = filter->path_func->Call(&vl);
Val* v = 0;
try
{
v = filter->path_func->Call(&vl);
}
catch ( InterpreterException& e )
{
return false;
}
if ( ! v->Type()->Tag() == TYPE_STRING )
{
@ -1432,8 +1453,8 @@ bool LogMgr::Flush(EnumVal* id)
void LogMgr::Error(LogWriter* writer, const char* msg)
{
reporter->Error(fmt("error with writer for %s: %s",
writer->Path().c_str(), msg));
reporter->Error("error with writer for %s: %s",
writer->Path().c_str(), msg);
}
// Timer which on dispatching rotates the filter.
@ -1569,9 +1590,19 @@ bool LogMgr::FinishedRotation(LogWriter* writer, string new_name, string old_nam
// Call the postprocessor function.
val_list vl(1);
vl.append(info);
int result = 0;
try
{
Val* v = func->Call(&vl);
int result = v->AsBool();
result = v->AsBool();
Unref(v);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
return result;
}

16
src/LogWriterNone.cc Normal file

@ -0,0 +1,16 @@
#include "LogWriterNone.h"
bool LogWriterNone::DoRotate(string rotated_path, double open,
double close, bool terminating)
{
if ( ! FinishedRotation(string("/dev/null"), Path(), open, close, terminating))
{
Error(Fmt("error rotating %s", Path().c_str()));
return false;
}
return true;
}

30
src/LogWriterNone.h Normal file

@ -0,0 +1,30 @@
// See the file "COPYING" in the main distribution directory for copyright.
//
// Dummy log writer that just discards everything (but still pretends to rotate).
#ifndef LOGWRITERNONE_H
#define LOGWRITERNONE_H
#include "LogWriter.h"
class LogWriterNone : public LogWriter {
public:
LogWriterNone() {}
~LogWriterNone() {};
static LogWriter* Instantiate() { return new LogWriterNone; }
protected:
virtual bool DoInit(string path, int num_fields,
const LogField* const * fields) { return true; }
virtual bool DoWrite(int num_fields, const LogField* const * fields,
LogVal** vals) { return true; }
virtual bool DoSetBuf(bool enabled) { return true; }
virtual bool DoRotate(string rotated_path, double open, double close,
bool terminating);
virtual bool DoFlush() { return true; }
virtual void DoFinish() {}
};
#endif
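`LogWriterNone` above is a textbook null object: it satisfies the whole writer interface with successful no-ops, so the log manager can treat it exactly like a real backend and still "rotate" it. A slimmed-down, hypothetical version of the interface:

```cpp
#include <cassert>
#include <string>

// Minimal writer interface (illustrative, not Bro's full LogWriter).
struct Writer {
    virtual ~Writer() { }
    virtual bool DoWrite(const std::string& line) = 0;
    virtual bool DoFlush() = 0;
};

// Null object: every operation succeeds and discards its input.
struct NoneWriter : Writer {
    static Writer* Instantiate() { return new NoneWriter; }
    bool DoWrite(const std::string&) override { return true; }  // discard
    bool DoFlush() override { return true; }
};
```

Registering it in the writer table (as the `WRITER_NONE` entry does) means callers never need a "logging disabled?" branch.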


@ -85,8 +85,8 @@ void OSFingerprint::collide(uint32 id)
if (sig[id].ttl % 32 && sig[id].ttl != 255 && sig[id].ttl % 30)
{
problems=1;
reporter->Warning(fmt("OS fingerprinting: [!] Unusual TTL (%d) for signature '%s %s' (line %d).",
sig[id].ttl,sig[id].os,sig[id].desc,sig[id].line));
reporter->Warning("OS fingerprinting: [!] Unusual TTL (%d) for signature '%s %s' (line %d).",
sig[id].ttl,sig[id].os,sig[id].desc,sig[id].line);
}
for (i=0;i<id;i++)
@ -94,8 +94,8 @@ void OSFingerprint::collide(uint32 id)
if (!strcmp(sig[i].os,sig[id].os) &&
!strcmp(sig[i].desc,sig[id].desc)) {
problems=1;
reporter->Warning(fmt("OS fingerprinting: [!] Duplicate signature name: '%s %s' (line %d and %d).",
sig[i].os,sig[i].desc,sig[i].line,sig[id].line));
reporter->Warning("OS fingerprinting: [!] Duplicate signature name: '%s %s' (line %d and %d).",
sig[i].os,sig[i].desc,sig[i].line,sig[id].line);
}
/* If TTLs are sufficiently away from each other, the risk of
@ -277,10 +277,10 @@ do_const:
if (sig[id].opt[j] ^ sig[i].opt[j]) goto reloop;
problems=1;
reporter->Warning(fmt("OS fingerprinting: [!] Signature '%s %s' (line %d)\n"
reporter->Warning("OS fingerprinting: [!] Signature '%s %s' (line %d)\n"
" is already covered by '%s %s' (line %d).",
sig[id].os,sig[id].desc,sig[id].line,sig[i].os,sig[i].desc,
sig[i].line));
sig[i].line);
reloop:
;


@ -103,7 +103,7 @@ protected:
void Error(const char* msg)
{
reporter->Error(msg);
reporter->Error("%s", msg);
err = true;
}


@ -231,6 +231,8 @@ bool BroObj::DoUnserialize(UnserialInfo* info)
{
DO_UNSERIALIZE(SerialObj);
delete location;
UNSERIALIZE_OPTIONAL(location, Location::Unserialize(info));
return true;
}


@ -200,9 +200,15 @@ void PersistenceSerializer::GotEvent(const char* name, double time,
void PersistenceSerializer::GotFunctionCall(const char* name, double time,
Func* func, val_list* args)
{
try
{
func->Call(args);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
}
void PersistenceSerializer::GotStateAccess(StateAccess* s)
{
s->Replay();


@ -88,7 +88,8 @@ bool LoadPolicyFileText(const char* policy_filename)
// ### This code is not necessarily Unicode safe!
// (probably fine with UTF-8)
pf->filedata = new char[size+1];
fread(pf->filedata, size, 1, f);
if ( fread(pf->filedata, size, 1, f) != 1 )
reporter->InternalError("Failed to fread() file data");
pf->filedata[size] = 0;
fclose(f);
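Same idea as the `fread()` check added above: a short read must not be ignored, or the buffer is used with unspecified contents. A self-contained sketch of the pattern:

```cpp
#include <cassert>
#include <cstdio>

// Read exactly `size` bytes from an open stream into a NUL-terminated
// heap buffer; return 0 on a short read instead of using garbage.
char* read_exact(FILE* f, long size)
    {
    char* buf = new char[size + 1];
    if ( fread(buf, size, 1, f) != 1 )   // the check the commit adds
        {
        delete [] buf;
        return 0;
        }
    buf[size] = 0;
    return buf;
    }
```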


@ -392,7 +392,7 @@ static bool sendToIO(ChunkedIO* io, ChunkedIO::Chunk* c)
{
if ( ! io->Write(c) )
{
reporter->Warning(fmt("can't send chunk: %s", io->Error()));
reporter->Warning("can't send chunk: %s", io->Error());
return false;
}
@ -404,7 +404,7 @@ static bool sendToIO(ChunkedIO* io, char msg_type, RemoteSerializer::PeerID id,
{
if ( ! sendCMsg(io, msg_type, id) )
{
reporter->Warning(fmt("can't send message of type %d: %s", msg_type, io->Error()));
reporter->Warning("can't send message of type %d: %s", msg_type, io->Error());
return false;
}
@ -419,7 +419,7 @@ static bool sendToIO(ChunkedIO* io, char msg_type, RemoteSerializer::PeerID id,
{
if ( ! sendCMsg(io, msg_type, id) )
{
reporter->Warning(fmt("can't send message of type %d: %s", msg_type, io->Error()));
reporter->Warning("can't send message of type %d: %s", msg_type, io->Error());
return false;
}
@ -715,7 +715,7 @@ bool RemoteSerializer::CloseConnection(PeerID id)
Peer* peer = LookupPeer(id, true);
if ( ! peer )
{
reporter->Error(fmt("unknown peer id %d for closing connection", int(id)));
reporter->Error("unknown peer id %d for closing connection", int(id));
return false;
}
@ -750,14 +750,14 @@ bool RemoteSerializer::RequestSync(PeerID id, bool auth)
Peer* peer = LookupPeer(id, true);
if ( ! peer )
{
reporter->Error(fmt("unknown peer id %d for request sync", int(id)));
reporter->Error("unknown peer id %d for request sync", int(id));
return false;
}
if ( peer->phase != Peer::HANDSHAKE )
{
reporter->Error(fmt("can't request sync from peer; wrong phase %d",
peer->phase));
reporter->Error("can't request sync from peer; wrong phase %d",
peer->phase);
return false;
}
@ -777,14 +777,14 @@ bool RemoteSerializer::RequestLogs(PeerID id)
Peer* peer = LookupPeer(id, true);
if ( ! peer )
{
reporter->Error(fmt("unknown peer id %d for request logs", int(id)));
reporter->Error("unknown peer id %d for request logs", int(id));
return false;
}
if ( peer->phase != Peer::HANDSHAKE )
{
reporter->Error(fmt("can't request logs from peer; wrong phase %d",
peer->phase));
reporter->Error("can't request logs from peer; wrong phase %d",
peer->phase);
return false;
}
@ -802,14 +802,14 @@ bool RemoteSerializer::RequestEvents(PeerID id, RE_Matcher* pattern)
Peer* peer = LookupPeer(id, true);
if ( ! peer )
{
reporter->Error(fmt("unknown peer id %d for request events", int(id)));
reporter->Error("unknown peer id %d for request events", int(id));
return false;
}
if ( peer->phase != Peer::HANDSHAKE )
{
reporter->Error(fmt("can't request events from peer; wrong phase %d",
peer->phase));
reporter->Error("can't request events from peer; wrong phase %d",
peer->phase);
return false;
}
@ -869,8 +869,8 @@ bool RemoteSerializer::CompleteHandshake(PeerID id)
if ( p->phase != Peer::HANDSHAKE )
{
reporter->Error(fmt("can't complete handshake; wrong phase %d",
p->phase));
reporter->Error("can't complete handshake; wrong phase %d",
p->phase);
return false;
}
@ -1138,7 +1138,7 @@ bool RemoteSerializer::SendCaptureFilter(PeerID id, const char* filter)
if ( peer->phase != Peer::HANDSHAKE )
{
reporter->Error(fmt("can't send capture filter to peer; wrong phase %d", peer->phase));
reporter->Error("can't send capture filter to peer; wrong phase %d", peer->phase);
return false;
}
@ -1215,8 +1215,8 @@ bool RemoteSerializer::SendCapabilities(Peer* peer)
{
if ( peer->phase != Peer::HANDSHAKE )
{
reporter->Error(fmt("can't send capabilities to peer; wrong phase %d",
peer->phase));
reporter->Error("can't send capabilities to peer; wrong phase %d",
peer->phase);
return false;
}
@ -2809,9 +2809,15 @@ void RemoteSerializer::GotFunctionCall(const char* name, double time,
return;
}
try
{
function->Call(args);
}
catch ( InterpreterException& e )
{ /* Already reported. */ }
}
void RemoteSerializer::GotID(ID* id, Val* val)
{
++stats.ids.in;
@ -3005,8 +3011,8 @@ bool RemoteSerializer::SendCMsgToChild(char msg_type, Peer* peer)
{
if ( ! sendCMsg(io, msg_type, peer ? peer->id : PEER_NONE) )
{
reporter->Warning(fmt("can't send message of type %d: %s",
msg_type, io->Error()));
reporter->Warning("can't send message of type %d: %s",
msg_type, io->Error());
return false;
}
return true;
@ -3085,7 +3091,7 @@ void RemoteSerializer::FatalError(const char* msg)
{
msg = fmt("fatal error, shutting down communication: %s", msg);
Log(LogError, msg);
reporter->Error(msg);
reporter->Error("%s", msg);
closed = true;
kill(child_pid, SIGQUIT);


@ -39,7 +39,7 @@ void Reporter::Info(const char* fmt, ...)
{
va_list ap;
va_start(ap, fmt);
DoLog("", reporter_info, stderr, 0, 0, true, true, fmt, ap);
DoLog("", reporter_info, stderr, 0, 0, true, true, 0, fmt, ap);
va_end(ap);
}
@ -47,7 +47,7 @@ void Reporter::Warning(const char* fmt, ...)
{
va_list ap;
va_start(ap, fmt);
DoLog("warning", reporter_warning, stderr, 0, 0, true, true, fmt, ap);
DoLog("warning", reporter_warning, stderr, 0, 0, true, true, 0, fmt, ap);
va_end(ap);
}
@ -56,7 +56,7 @@ void Reporter::Error(const char* fmt, ...)
++errors;
va_list ap;
va_start(ap, fmt);
DoLog("error", reporter_error, stderr, 0, 0, true, true, fmt, ap);
DoLog("error", reporter_error, stderr, 0, 0, true, true, 0, fmt, ap);
va_end(ap);
}
@ -66,7 +66,7 @@ void Reporter::FatalError(const char* fmt, ...)
va_start(ap, fmt);
// Always log to stderr.
DoLog("fatal error", 0, stderr, 0, 0, true, false, fmt, ap);
DoLog("fatal error", 0, stderr, 0, 0, true, false, 0, fmt, ap);
va_end(ap);
@ -80,7 +80,7 @@ void Reporter::FatalErrorWithCore(const char* fmt, ...)
va_start(ap, fmt);
// Always log to stderr.
DoLog("fatal error", 0, stderr, 0, 0, true, false, fmt, ap);
DoLog("fatal error", 0, stderr, 0, 0, true, false, 0, fmt, ap);
va_end(ap);
@ -88,13 +88,29 @@ void Reporter::FatalErrorWithCore(const char* fmt, ...)
abort();
}
void Reporter::ExprRuntimeError(const Expr* expr, const char* fmt, ...)
{
++errors;
ODesc d;
expr->Describe(&d);
PushLocation(expr->GetLocationInfo());
va_list ap;
va_start(ap, fmt);
DoLog("expression error", reporter_error, stderr, 0, 0, true, true, d.Description(), fmt, ap);
va_end(ap);
PopLocation();
throw InterpreterException();
}
void Reporter::InternalError(const char* fmt, ...)
{
va_list ap;
va_start(ap, fmt);
// Always log to stderr.
DoLog("internal error", 0, stderr, 0, 0, true, false, fmt, ap);
DoLog("internal error", 0, stderr, 0, 0, true, false, 0, fmt, ap);
va_end(ap);
@ -106,7 +122,7 @@ void Reporter::InternalWarning(const char* fmt, ...)
{
va_list ap;
va_start(ap, fmt);
DoLog("internal warning", reporter_warning, stderr, 0, 0, true, true, fmt, ap);
DoLog("internal warning", reporter_warning, stderr, 0, 0, true, true, 0, fmt, ap);
va_end(ap);
}
@ -133,7 +149,7 @@ void Reporter::WeirdHelper(EventHandlerPtr event, Val* conn_val, const char* add
va_list ap;
va_start(ap, fmt_name);
DoLog("weird", event, stderr, 0, vl, false, false, fmt_name, ap);
DoLog("weird", event, stderr, 0, vl, false, false, 0, fmt_name, ap);
va_end(ap);
delete vl;
@ -147,7 +163,7 @@ void Reporter::WeirdFlowHelper(const uint32* orig, const uint32* resp, const cha
va_list ap;
va_start(ap, fmt_name);
DoLog("weird", flow_weird, stderr, 0, vl, false, false, fmt_name, ap);
DoLog("weird", flow_weird, stderr, 0, vl, false, false, 0, fmt_name, ap);
va_end(ap);
delete vl;
@ -173,7 +189,7 @@ void Reporter::Weird(const uint32* orig, const uint32* resp, const char* name)
WeirdFlowHelper(orig, resp, "%s", name);
}
void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out, Connection* conn, val_list* addl, bool location, bool time, const char* fmt, va_list ap)
void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out, Connection* conn, val_list* addl, bool location, bool time, const char* postfix, const char* fmt, va_list ap)
{
static char tmp[512];
@ -235,6 +251,9 @@ void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out, Conne
int n = vsnprintf(buffer, size, fmt, aq);
va_end(aq);
if ( postfix )
n += strlen(postfix) + 10; // Add a bit of slack.
if ( n > -1 && n < size )
// We had enough space.
break;
@ -247,6 +266,11 @@ void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out, Conne
FatalError("out of memory in Reporter");
}
if ( postfix )
// Note, if you change this fmt string, adjust the additional
// buffer size above.
sprintf(buffer + strlen(buffer), " [%s]", postfix);
if ( event && via_events && ! in_error_handler )
{
val_list* vl = new val_list;


@ -14,6 +14,29 @@
class Connection;
class Location;
class Reporter;
// One cannot raise this exception directly; go through the
// Reporter's methods instead.
class ReporterException {
protected:
friend class Reporter;
ReporterException() {}
};
class InterpreterException : public ReporterException {
protected:
friend class Reporter;
InterpreterException() {}
};
// Check printf-style variadic arguments if we can.
#if __GNUC__
#define FMT_ATTR __attribute__((format(printf, 2, 3))) // sic! 1st is "this" I guess.
#else
#define FMT_ATTR
#endif
class Reporter {
public:
@ -22,25 +45,29 @@ public:
// Report an informational message, nothing that needs specific
// attention.
void Info(const char* fmt, ...);
void Info(const char* fmt, ...) FMT_ATTR;
// Report a warning that may indicate a problem.
void Warning(const char* fmt, ...);
void Warning(const char* fmt, ...) FMT_ATTR;
// Report a non-fatal error. Processing proceeds normally after the error
// has been reported.
void Error(const char* fmt, ...);
void Error(const char* fmt, ...) FMT_ATTR;
// Returns the number of errors reported so far.
int Errors() { return errors; }
// Report a fatal error. Bro will terminate after the message has been
// reported.
void FatalError(const char* fmt, ...);
void FatalError(const char* fmt, ...) FMT_ATTR;
// Report a fatal error. Bro will terminate after the message has been
// reported and always generate a core dump.
void FatalErrorWithCore(const char* fmt, ...);
void FatalErrorWithCore(const char* fmt, ...) FMT_ATTR;
// Report a runtime error in evaluating a Bro script expression. This
// function will not return but raise an InterpreterException.
void ExprRuntimeError(const Expr* expr, const char* fmt, ...);
// Report a traffic weirdness, i.e., an unexpected protocol situation
// that may lead to incorrectly processing a connection.
@ -51,15 +78,15 @@ public:
// Syslog a message. This method does nothing if we're running
// offline from a trace.
void Syslog(const char* fmt, ...);
void Syslog(const char* fmt, ...) FMT_ATTR;
// Report about a potential internal problem. Bro will continue
// normally.
void InternalWarning(const char* fmt, ...);
void InternalWarning(const char* fmt, ...) FMT_ATTR;
// Report an internal program error. Bro will terminate with a core
// dump after the message has been reported.
void InternalError(const char* fmt, ...);
void InternalError(const char* fmt, ...) FMT_ATTR;
// Toggle whether non-fatal messages should be reported through the
// scripting layer rather on standard output. Fatal errors are always
@ -87,7 +114,9 @@ public:
void EndErrorHandler() { --in_error_handler; }
private:
void DoLog(const char* prefix, EventHandlerPtr event, FILE* out, Connection* conn, val_list* addl, bool location, bool time, const char* fmt, va_list ap);
void DoLog(const char* prefix, EventHandlerPtr event, FILE* out,
Connection* conn, val_list* addl, bool location, bool time,
const char* postfix, const char* fmt, va_list ap);
// The order of addl, name needs to be like that since fmt_name can
// contain format specifiers.


@ -149,9 +149,19 @@ bool RuleConditionEval::DoMatch(Rule* rule, RuleEndpointState* state,
else
args.append(new StringVal(""));
bool result = false;
try
{
Val* val = id->ID_Val()->AsFunc()->Call(&args);
bool result = val->AsBool();
result = val->AsBool();
Unref(val);
}
catch ( InterpreterException& e )
{
result = false;
}
return result;
}

Some files were not shown because too many files have changed in this diff.