Merge tag 'v2.3' into topic/vladg/sip

Version tag

Conflicts:
	scripts/base/init-default.bro
Vlad Grigorescu 2014-08-22 19:25:43 -04:00
commit f93f2af748
283 changed files with 6843 additions and 3048 deletions

CHANGES
View file

@ -1,4 +1,287 @@
2.3 | 2014-06-16 09:48:25 -0500
* Release 2.3.
2.3-beta-33 | 2014-06-12 11:59:28 -0500
* Documentation improvements/fixes. (Daniel Thayer)
2.3-beta-24 | 2014-06-11 15:35:31 -0500
* Fix SMTP state tracking when server response is missing.
(Robin Sommer)
2.3-beta-22 | 2014-06-11 12:31:38 -0500
* Fix doc/test that broke due to a Bro script change. (Jon Siwek)
* Remove unused --with-libmagic configure option. (Jon Siwek)
2.3-beta-20 | 2014-06-10 18:16:51 -0700
* Fix use-after-free in some cases of reassigning a table index.
Addresses BIT-1202. (Jon Siwek)
2.3-beta-18 | 2014-06-06 13:11:50 -0700
* Add two more SSL events, one triggered for each handshake message
and one triggered for the tls change cipherspec message. (Bernhard
Amann)
* Small SSL bug fix. In case SSL::disable_analyzer_after_detection
was set to false, the ssl_established event would fire after each
data packet once the session is established. (Bernhard Amann)
2.3-beta-16 | 2014-06-06 13:05:44 -0700
* Re-activate notice suppression for expiring certificates.
(Bernhard Amann)
2.3-beta-14 | 2014-06-05 14:43:33 -0700
* Add new TLS extension type numbers from IANA (Bernhard Amann)
* Switch to double hashing for Bloom filters for better performance.
(Matthias Vallentin)
* Bugfix to use the full digest length instead of just one byte for
the Bloom filter's universal hash function. Addresses BIT-1140.
(Matthias Vallentin)
* Make buffer for X509 certificate subjects larger. Addresses
BIT-1195. (Bernhard Amann)
2.3-beta-5 | 2014-05-29 15:34:42 -0500
* Fix misc/load-balancing.bro's reference to
PacketFilter::sampling_filter (Jon Siwek)
2.3-beta-4 | 2014-05-28 14:55:24 -0500
* Fix potential mem leak in remote function/event unserialization.
(Jon Siwek)
* Fix reference counting bug in table coercion expressions (Jon Siwek)
* Fix an "unused value" warning. (Jon Siwek)
* Remove a duplicate unit test baseline dir. (Jon Siwek)
2.3-beta | 2014-05-19 16:36:50 -0500
* Release 2.3-beta
* Clean up OpenSSL data structures on exit. (Bernhard Amann)
* Fixes for OCSP & x509 analysis memory leak issues. (Bernhard Amann)
* Remove remaining references to BROMAGIC (Daniel Thayer)
* Fix typos and formatting in event and BiF documentation (Daniel Thayer)
* Update intel framework plugin for ssl server_name extension API
changes. (Bernhard Amann, Justin Azoff)
* Fix expression errors in SSL/x509 scripts when unparseable data
is in certificate chain. (Bernhard Amann)
2.2-478 | 2014-05-19 15:31:33 -0500
* Change record ctors to only allow record-field-assignment
expressions. (Jon Siwek)
2.2-477 | 2014-05-19 14:13:00 -0500
* Fix X509::Result record's "result" field to be set internally as type int instead of type count. (Bernhard Amann)
* Fix a couple of doc build warnings (Daniel Thayer)
2.2-470 | 2014-05-16 15:16:32 -0700
* Add a new section "Cluster Configuration" to the docs that is
intended as a how-to for configuring a Bro cluster. Most of this
content was moved here from the BroControl doc (which is now
intended as more of a reference guide for more experienced users)
and the load balancing FAQ on the website. (Daniel Thayer)
* Update some doc tests and line numbers (Daniel Thayer)
2.2-457 | 2014-05-16 14:38:31 -0700
* New script policy/protocols/ssl/validate-ocsp.bro that adds OCSP
validation to ssl.log. The work is done by a new bif,
x509_ocsp_verify(). (Bernhard Amann)
* STARTTLS support for POP3 and SMTP. The SSL analyzer takes over
when STARTTLS is seen. smtp.log now logs when a connection switches
to SSL. (Bernhard Amann)
* Replace errors when parsing x509 certs with weirds. (Bernhard
Amann)
* Improved Heartbleed attack/scan detection. (Bernhard Amann)
* Let TLS analyzer fail better when no longer in sync with the data
stream. (Bernhard Amann)
2.2-444 | 2014-05-16 14:10:32 -0500
* Disable all default AppStat plugins except facebook. (Jon Siwek)
* Update for the active http test to force it to use ipv4. (Seth Hall)
2.2-441 | 2014-05-15 11:29:56 -0700
* A new RADIUS analyzer. (Vlad Grigorescu)
It produces a radius.log and generates two events:
event radius_message(c: connection, result: RADIUS::Message);
event radius_attribute(c: connection, attr_type: count, value: string);
2.2-427 | 2014-05-15 13:37:23 -0400
* Fix dynamic SumStats update on clusters (Bernhard Amann)
2.2-425 | 2014-05-08 16:34:44 -0700
* Fix reassembly of data w/ sizes beyond 32-bit capacities. (Jon Siwek)
Reassembly code (e.g. for TCP) now uses int64/uint64 (signedness
is situational) data types in place of int types in order to
support delivering data to analyzers that pass 2GB thresholds.
There's also changes in logic that accompany the change in data
types, e.g. to fix TCP sequence space arithmetic inconsistencies.
Another significant change is in the Analyzer API: the *Packet and
*Undelivered methods now use a uint64 in place of an int for the
relative sequence space offset parameter.
Addresses BIT-348.
* Fixing compiler warnings. (Robin Sommer)
* Update SNMP analyzer's DeliverPacket method signature. (Jon Siwek)
2.2-417 | 2014-05-07 10:59:22 -0500
* Change handling of atypical OpenSSL error case in x509 verification. (Jon Siwek)
* Fix memory leaks in X509 certificate parsing/verification. (Jon Siwek)
* Fix new []/delete mismatch in input::reader::Raw::DoClose(). (Jon Siwek)
* Fix buffer over-reads in file_analysis::Manager::Terminate() (Jon Siwek)
* Fix buffer overflows in IP address masking logic. (Jon Siwek)
That could occur either in taking a zero-length mask on an IPv6 address
(e.g. [fe80::]/0) or a reverse mask of length 128 on any address (e.g.
via the remask_addr BuiltIn Function).
* Fix new []/delete mismatch in ~Base64Converter. (Jon Siwek)
2.2-410 | 2014-05-02 12:49:53 -0500
* Replace an unneeded OPENSSL_malloc call. (Jon Siwek)
2.2-409 | 2014-05-02 12:09:06 -0500
* Clean up and documentation for base SNMP script. (Jon Siwek)
* Update base SNMP script to now produce a snmp.log. (Seth Hall)
* Add DH support to SSL analyzer. When using DHE or DH-Anon, server
key parameters are now available in scriptland. Also add script to
alert on weak certificate keys or weak dh-params. (Bernhard Amann)
* Add a few more ciphers Bro did not know at all so far. (Bernhard Amann)
* Log the chosen curve when using an EC cipher suite in TLS. (Bernhard Amann)
2.2-397 | 2014-05-01 20:29:20 -0700
* Fix reference counting for lookup_ID() usages. (Jon Siwek)
2.2-395 | 2014-05-01 20:25:48 -0700
* Fix missing "irc-dcc-data" service field from IRC DCC connections.
(Jon Siwek)
* Correct a notice for Heartbleed. The notice is thrown correctly,
just the message contained wrong values. (Bernhard Amann)
* Improve/standardize some malloc/realloc return value checks. (Jon
Siwek)
* Improve file analysis manager shutdown/cleanup. (Jon Siwek)
2.2-388 | 2014-04-24 18:38:07 -0700
* Fix decoding of MIME quoted-printable. (Mareq)
2.2-386 | 2014-04-24 18:22:29 -0700
* Do an Intel::ADDR lookup for the host field if we find an IP address
there. (jshlbrd)
2.2-381 | 2014-04-24 17:08:45 -0700
* Add Java version to software framework. (Brian Little)
2.2-379 | 2014-04-24 17:06:21 -0700
* Remove unused Val::attribs member. (Jon Siwek)
2.2-377 | 2014-04-24 16:57:54 -0700
* A larger set of SSL improvements and extensions. Addresses
BIT-1178. (Bernhard Amann)
- Fixes TLS protocol version detection. It should also
bail out correctly on non-TLS connections now.
- Adds support for a few TLS extensions, including
server_name, alpn, and ec-curves.
- Adds support for the heartbeat events.
- Adds a Heartbleed detector script.
- Adds basic support for OCSP stapling.
* Fix parsing of DNS TXT RRs w/ multiple character-strings.
Addresses BIT-1156. (Jon Siwek)
2.2-353 | 2014-04-24 16:12:30 -0700
* Adapt HTTP partial content to cache file analysis IDs. (Jon Siwek)
* Adapt SSL analyzer to generate file analysis handles itself. (Jon
Siwek)
* Adapt more of HTTP analyzer to use cached file analysis IDs. (Jon
Siwek)
* Adapt IRC/FTP analyzers to cache file analysis IDs. (Jon Siwek)
* Refactor regex/signature AcceptingSet data structure and usages.
(Jon Siwek)
* Enforce data size limit when checking files for MIME matches. (Jon
Siwek)
* Refactor file analysis file ID lookup. (Jon Siwek)
2.2-344 | 2014-04-22 20:13:30 -0700
* Refactor various hex escaping code. (Jon Siwek)
2.2-341 | 2014-04-17 18:01:41 -0500
* Fix duplicate DNS log entries. (Robin Sommer)
2.2-341 | 2014-04-17 18:01:01 -0500
* Refactor initialization of ASCII log writer options. (Jon Siwek)
@ -125,7 +408,18 @@
2.2-294 | 2014-03-30 22:08:25 +0200
* TODO: x509 changes. (Bernhard Amann)
* Rework and move X509 certificate processing from the SSL protocol
analyzer to a dedicated file analyzer. This will allow us to
examine X509 certificates from sources other than SSL in the
future. Furthermore, Bro now parses more fields and extensions
from the certificates (e.g. elliptic curve information, subject
alternative names, basic constraints). Certificate validation also
was improved, should be easier to use and exposes information like
the full verified certificate chain. (Bernhard Amann)
This update changes the format of ssl.log and adds a new x509.log
with certificate information. Furthermore all x509 events and
handling functions have changed.
2.2-271 | 2014-03-30 20:25:17 +0200

NEWS
View file

@ -7,14 +7,9 @@ their own ``CHANGES``.)
Bro 2.3
=======
[In progress]
Dependencies
------------
- Bro no longer requires a pre-installed libmagic (because it now
ships its own).
- Libmagic is no longer a dependency.
New Functionality
@ -34,12 +29,44 @@ New Functionality
and "file-mime" gives the MIME type string of content that matches
the magic and an optional strength value for the match. (See also
"Changed Functionality" below for changes due to switching from
using libmagic to such wsignatures.)
using libmagic to such signatures.)
- A new built-in function, "file_magic", can be used to get all file
magic matches and their corresponding strength against a given chunk
of data.
- The SSL analyzer now supports heartbeats as well as a few
extensions, including server_name, alpn, and ec-curves.
- The SSL analyzer comes with a Heartbleed detector script in
protocols/ssl/heartbleed.bro. Note that loading this script changes
the default value of "SSL::disable_analyzer_after_detection" from true
to false to prevent encrypted heartbeats from being ignored.
- StartTLS is now supported for SMTP and POP3.
- The X509 analyzer can now perform OCSP validation.
- Bro now has analyzers for SNMP and RADIUS, which produce corresponding
snmp.log and radius.log output (as well as various events, of course).
- BroControl has a new option "BroPort" which allows a user to specify
the starting port number for Bro.
- BroControl has a new option "StatsLogExpireInterval" which allows a
user to specify when entries in the stats.log file expire.
- BroControl has a new option "PFRINGClusterType" which allows a user
to specify a PF_RING cluster type.
- BroControl now supports PF_RING+DNA. There is also a new option
"PFRINGFirstAppInstance" that allows a user to specify the starting
application instance number for processes running on a DNA cluster.
See the BroControl documentation for more details.
- BroControl now warns a user to run "broctl install" if Bro has
been upgraded or if the broctl or node configuration has changed
since the most recent install.
Changed Functionality
---------------------
@ -57,17 +84,27 @@ Changed Functionality
event x509_extension(c: connection, is_orig: bool, cert: X509, ext: X509_extension_info);
- Bro no longer special-cases SYN/FIN/RST-filtered traces by not
reporting missing data. The old behavior can be reverted by
redef'ing "detect_filtered_trace".
- In addition, there are several new, more specialized events for a
number of x509 extensions.
TODO: Update if we add a detector for filtered traces.
- Generally, all x509 events and handling functions have changed their
signatures.
- X509 certificate verification now returns the complete certificate
chain that was used for verification.
- Bro no longer special-cases SYN/FIN/RST-filtered traces by not
reporting missing data. Instead, if Bro never sees any data segments
for analyzed TCP connections, the new
base/misc/find-filtered-trace.bro script will log a warning in
reporter.log and to stderr. The old behavior can be reverted by
redef'ing "detect_filtered_trace".
- We have removed the packet sorter component.
- Bro no longer uses libmagic to identify file types but instead now
comes with its own signature library (which initially is still
derived from libmagic;s database). This leads to a number of further
derived from libmagic's database). This leads to a number of further
changes with regards to MIME types:
* The second parameter of the "identify_data" built-in function
@ -82,7 +119,7 @@ Changed Functionality
in Bro as magic databases are no longer used/installed.
* Removed "binary" and "octet-stream" mime type detections. They
don' provide any more information than an uninitialized
don't provide any more information than an uninitialized
mime_type field.
* The "fa_file" record now contains a "mime_types" field that
@ -90,6 +127,19 @@ Changed Functionality
(where the "mime_type" field is just a shortcut for the
strongest match).
- dns_TXT_reply() now supports more than one string entry by receiving
a vector of strings.
- BroControl now runs the "exec" and "df" broctl commands only once
per host, instead of once per Bro node. The output of these
commands has been changed slightly to include both the host and
node names.
- Several performance improvements were made. Particular emphasis
was put on the File Analysis system, which generally will now emit
far fewer file handle request events due to protocol analyzers now
caching that information internally.
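Two hedged local.bro sketches for the changed functionality above:

Reverting to the old handling of SYN/FIN/RST-filtered traces:

redef detect_filtered_trace = T;

Handling the new dns_TXT_reply signature, which now receives a vector of
strings (string_vec):

event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec)
    {
    for ( i in strs )
        print fmt("TXT string %d: %s", i, strs[i]);
    }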
Bro 2.2
=======

View file

@ -1 +1 @@
2.2-341
2.3

@ -1 +1 @@
Subproject commit b0877edc68af6ae08face528fc411c8ce21f2e30
Subproject commit ec1e052afd5a8cd3d1d2cbb28fcd688018e379a5

@ -1 +1 @@
Subproject commit 3f86e2d5db2a0c5f2f104b15f359f4b752bb4558
Subproject commit 5721df4f5f6fa84de6257cca6582a28e45831786

@ -1 +1 @@
Subproject commit 04e6a7f591817f060a781f21c12e1afce7eb1e16
Subproject commit c2f5dd2cb7876158fdf9721aebd22567db840db1

@ -1 +1 @@
Subproject commit d99150801b7844e082b5421d1efe4050702d350e
Subproject commit 8a13886f322f3b618832c0ca3976e07f686d14da

@ -1 +1 @@
Subproject commit 4e2ec35917acb883c7d2ab19af487f3863c687ae
Subproject commit 4da1bd24038d4977e655f2b210f34e37f0b73b78

configure
View file

@ -50,7 +50,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-flex=PATH path to flex executable
--with-bison=PATH path to bison executable
--with-perl=PATH path to perl executable
--with-libmagic=PATH path to libmagic install root
Optional Packages in Non-Standard Locations:
--with-geoip=PATH path to the libGeoIP install root
@ -211,9 +210,6 @@ while [ $# -ne 0 ]; do
--with-perl=*)
append_cache_entry PERL_EXECUTABLE PATH $optarg
;;
--with-libmagic=*)
append_cache_entry LibMagic_ROOT_DIR PATH $optarg
;;
--with-geoip=*)
append_cache_entry LibGeoIP_ROOT_DIR PATH $optarg
;;

View file

@ -38,7 +38,6 @@ extensions += ["broxygen"]
bro_binary = os.path.abspath("@CMAKE_SOURCE_DIR@/build/src/bro")
broxygen_cache="@BROXYGEN_CACHE_DIR@"
os.environ["BROPATH"] = "@BROPATH@"
os.environ["BROMAGIC"] = "@BROMAGIC@"
# ----- End of Broxygen configuration. -----
# -- General configuration -----------------------------------------------------

doc/configuration/index.rst (new file)
View file

@ -0,0 +1,263 @@
.. _configuration:
=====================
Cluster Configuration
=====================
.. contents::
A *Bro Cluster* is a set of systems jointly analyzing the traffic of
a network link in a coordinated fashion. You can operate such a setup from
a central manager system easily using BroControl because BroControl
hides much of the complexity of the multi-machine installation.
This section gives examples of how to set up common cluster configurations
using BroControl. For a full reference on BroControl, see the
:doc:`BroControl <../components/broctl/README>` documentation.
Preparing to Set Up a Cluster
=============================
In this document we refer to the user account used to set up the cluster
as the "Bro user". When setting up a cluster the Bro user must be set up
on all hosts, and this user must have ssh access from the manager to all
machines in the cluster, and it must work without being prompted for a
password/passphrase (for example, using ssh public key authentication).
Also, on the worker nodes this user must have access to the target
network interface in promiscuous mode.
Additional storage must be available on all hosts under the same path,
which we will call the cluster's prefix path. We refer to this directory
as ``<prefix>``. If you build Bro from source, then ``<prefix>`` is
the directory specified with the ``--prefix`` configure option,
or ``/usr/local/bro`` by default. The Bro user must be able to either
create this directory or, where it already exists, must have write
permission inside this directory on all hosts.
When trying to decide how to configure the Bro nodes, keep in mind that
there can be multiple Bro instances running on the same host. For example,
it's possible to run a proxy and the manager on the same host. However, it is
recommended to run workers on a different machine than the manager because
workers can consume a lot of CPU resources. The maximum recommended
number of workers to run on a machine should be one or two less than
the number of CPU cores available on that machine. Using a load-balancing
method (such as PF_RING) along with CPU pinning can decrease the load on
the worker machines.
Basic Cluster Configuration
===========================
With all prerequisites in place, perform the following steps to set up
a Bro cluster (do this as the Bro user on the manager host only):
- Edit the BroControl configuration file, ``<prefix>/etc/broctl.cfg``,
and change the value of any BroControl options to be more suitable for
your environment. You will most likely want to change the value of
the ``MailTo`` and ``LogRotationInterval`` options. A complete
reference of all BroControl options can be found in the
:doc:`BroControl <../components/broctl/README>` documentation.
- Edit the BroControl node configuration file, ``<prefix>/etc/node.cfg``
to define where manager, proxies, and workers are to run. For a cluster
configuration, you must comment out (or remove) the standalone node
in that file, and either uncomment or add node entries for each node
in your cluster (manager, proxy, and workers). For example, if you wanted
to run four Bro nodes (two workers, one proxy, and a manager) on a cluster
consisting of three machines, your cluster configuration would look like
this::
[manager]
type=manager
host=10.0.0.10
[proxy-1]
type=proxy
host=10.0.0.10
[worker-1]
type=worker
host=10.0.0.11
interface=eth0
[worker-2]
type=worker
host=10.0.0.12
interface=eth0
For a complete reference of all options that are allowed in the ``node.cfg``
file, see the :doc:`BroControl <../components/broctl/README>` documentation.
- Edit the network configuration file ``<prefix>/etc/networks.cfg``. This
file lists all of the networks which the cluster should consider as local
to the monitored environment.
- Install workers and proxies using BroControl::
> broctl install
- Some tasks need to be run on a regular basis. On the manager node,
insert a line like this into the crontab of the user running the
cluster::
0-59/5 * * * * <prefix>/bin/broctl cron
(Note: if you are editing the system crontab instead of a user's own
crontab, then you need to also specify the user which the command
will be run as. The username must be placed after the time fields
and before the broctl command.)
Note that on some systems (FreeBSD in particular), the default PATH
for cron jobs does not include the directories where bash and python
are installed (the symptoms of this problem would be that "broctl cron"
works when run directly by the user, but does not work from a cron job).
To solve this problem, you would either need to create symlinks
to bash and python in a directory that is in the default PATH for
cron jobs, or specify a new PATH in the crontab.
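For example, a crontab that sets its own PATH might look like the
following (the directories shown are only placeholders for wherever bash
and python are installed on your system)::

PATH=/usr/local/bin:/usr/bin:/bin
0-59/5 * * * * <prefix>/bin/broctl cron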
PF_RING Cluster Configuration
=============================
`PF_RING <http://www.ntop.org/products/pf_ring/>`_ allows speeding up the
packet capture process by installing a new type of socket in Linux systems.
It supports 10Gbit hardware packet filtering using standard network adapters,
and user-space DNA (Direct NIC Access) for fast packet capture/transmission.
Installing PF_RING
^^^^^^^^^^^^^^^^^^
1. Download and install PF_RING for your system following the instructions
`here <http://www.ntop.org/get-started/download/#PF_RING>`_. The following
commands will install the PF_RING libraries and kernel module (replace
the version number 5.6.2 in this example with the version that you
downloaded)::
cd /usr/src
tar xvzf PF_RING-5.6.2.tar.gz
cd PF_RING-5.6.2/userland/lib
./configure --prefix=/opt/pfring
make install
cd ../libpcap
./configure --prefix=/opt/pfring
make install
cd ../tcpdump-4.1.1
./configure --prefix=/opt/pfring
make install
cd ../../kernel
make install
modprobe pf_ring enable_tx_capture=0 min_num_slots=32768
Refer to the documentation for your Linux distribution on how to load the
pf_ring module at boot time. You will need to install the PF_RING
library files and kernel module on all of the workers in your cluster.
2. Download the Bro source code.
3. Configure and install Bro using the following commands::
./configure --with-pcap=/opt/pfring
make
make install
4. Make sure Bro is correctly linked to the PF_RING libpcap libraries::
ldd /usr/local/bro/bin/bro | grep pcap
libpcap.so.1 => /opt/pfring/lib/libpcap.so.1 (0x00007fa6d7d24000)
5. Configure BroControl to use PF_RING (explained below).
6. Run "broctl install" on the manager. This command will install Bro and
all required scripts to the other machines in your cluster.
Using PF_RING
^^^^^^^^^^^^^
In order to use PF_RING, you need to specify the correct configuration
options for your worker nodes in BroControl's node configuration file.
Edit the ``node.cfg`` file and specify ``lb_method=pf_ring`` for each of
your worker nodes. Next, use the ``lb_procs`` node option to specify how
many Bro processes you'd like that worker node to run, and optionally pin
those processes to certain CPU cores with the ``pin_cpus`` option (CPU
numbering starts at zero). The correct ``pin_cpus`` setting to use is
dependent on your CPU architecture (Intel and AMD systems enumerate
processors in different ways). Using the wrong ``pin_cpus`` setting
can cause poor performance. Here is what a worker node entry should
look like when using PF_RING and CPU pinning::
[worker-1]
type=worker
host=10.0.0.50
interface=eth0
lb_method=pf_ring
lb_procs=10
pin_cpus=2,3,4,5,6,7,8,9,10,11
Using PF_RING+DNA with symmetric RSS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You must have a PF_RING+DNA license in order to do this. You can sniff
each packet only once.
1. Load the DNA NIC driver (e.g., ixgbe) on each worker host.
2. Run "ethtool -L dna0 combined 10" (this will establish 10 RSS queues
on your NIC) on each worker host. You must make sure that you set the
number of RSS queues to the same as the number you specify for the
lb_procs option in the node.cfg file.
3. On the manager, configure your worker(s) in node.cfg::
[worker-1]
type=worker
host=10.0.0.50
interface=dna0
lb_method=pf_ring
lb_procs=10
Using PF_RING+DNA with pfdnacluster_master
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You must have a PF_RING+DNA license and a libzero license in order to do
this. You can load balance between multiple applications and sniff the
same packets multiple times with different tools.
1. Load the DNA NIC driver (e.g., ixgbe) on each worker host.
2. Run "ethtool -L dna0 1" (this will establish 1 RSS queue on your NIC)
on each worker host.
3. Run the pfdnacluster_master command on each worker host. For example::
pfdnacluster_master -c 21 -i dna0 -n 10
Make sure that your cluster ID (21 in this example) matches the interface
name you specify in the node.cfg file. Also make sure that the number
of processes you're balancing across (10 in this example) matches
the lb_procs option in the node.cfg file.
4. If you are load balancing to other processes, you can use the
pfringfirstappinstance variable in broctl.cfg to set the first
application instance that Bro should use. For example, if you are running
pfdnacluster_master with "-n 10,4" you would set
pfringfirstappinstance=4. Unfortunately that's still a global setting
in broctl.cfg at the moment but we may change that to something you can
set in node.cfg eventually.
5. On the manager, configure your worker(s) in node.cfg::
[worker-1]
type=worker
host=10.0.0.50
interface=dnacluster:21
lb_method=pf_ring
lb_procs=10

View file

@ -15,6 +15,7 @@ Introduction Section
cluster/index.rst
install/index.rst
quickstart/index.rst
configuration/index.rst
..
@ -30,6 +31,7 @@ Using Bro Section
httpmonitor/index.rst
broids/index.rst
mimestats/index.rst
scripting/index.rst
..
@ -39,7 +41,6 @@ Reference Section
.. toctree::
:maxdepth: 2
scripting/index.rst
frameworks/index.rst
script-reference/index.rst
components/index.rst

View file

@ -25,8 +25,8 @@ BroControl is an interactive shell for easily operating/managing Bro
installations on a single system or even across multiple systems in a
traffic-monitoring cluster. This section explains how to use BroControl
to manage a stand-alone Bro installation. For instructions on how to
configure a Bro cluster, see the documentation for :doc:`BroControl
<../components/broctl/README>`.
configure a Bro cluster, see the :doc:`Cluster Configuration
<../configuration/index>` documentation.
A Minimal Starting Configuration
--------------------------------
@ -234,7 +234,7 @@ is valid before installing it and then restarting the Bro instance:
.. console::
[BroControl] > check
bro is ok.
bro scripts are ok.
[BroControl] > install
removing old policies in /usr/local/bro/spool/policy/site ... done.
removing old policies in /usr/local/bro/spool/policy/auto ... done.
@ -250,15 +250,15 @@ is valid before installing it and then restarting the Bro instance:
Now that the SSL notice is ignored, let's look at how to send an email on
the SSH notice. The notice framework has a similar option called
``emailed_types``, but that can't differentiate between SSH servers and we
only want email for logins to certain ones. Then we come to the ``PolicyItem``
record and ``policy`` set and realize that those are actually what get used
to implement the simple functionality of ``ignored_types`` and
``emailed_types``, but using that would generate email for all SSH servers and
we only want email for logins to certain ones. There is a ``policy`` hook
that is actually what is used to implement the simple functionality of
``ignored_types`` and
``emailed_types``, but it's extensible such that the condition and action taken
on notices can be user-defined.
In ``local.bro``, let's add a new ``PolicyItem`` record to the ``policy`` set
that only takes the email action for SSH logins to a defined set of servers:
In ``local.bro``, let's define a new ``policy`` hook handler body
that takes the email action for SSH logins only for a defined set of servers:
.. code:: bro
@ -276,9 +276,9 @@ that only takes the email action for SSH logins to a defined set of servers:
You'll just have to trust the syntax for now, but what we've done is
first declare our own variable to hold a set of watched addresses,
``watched_servers``; then added a record to the policy that will generate
an email on the condition that the predicate function evaluates to true, which
is whenever the notice type is an SSH login and the responding host stored
``watched_servers``; then added a hook handler body to the policy that will
generate an email whenever the notice type is an SSH login and the responding
host stored
inside the ``Info`` record's connection field is in the set of watched servers.
.. note:: Record field member access is done with the '$' character
@ -426,7 +426,7 @@ Running Bro Without Installing
For developers that wish to run Bro directly from the ``build/``
directory (i.e., without performing ``make install``), they will have
to first adjust ``BROPATH`` and ``BROMAGIC`` to look for scripts and
to first adjust ``BROPATH`` to look for scripts and
additional files inside the build directory. Sourcing either
``build/bro-path-dev.sh`` or ``build/bro-path-dev.csh`` as appropriate
for the current shell accomplishes this and also augments your

View file

@ -2,7 +2,6 @@
@load base/protocols/ssh/
redef Notice::emailed_types += {
SSH::Interesting_Hostname_Login,
SSH::Login
SSH::Interesting_Hostname_Login
};

View file

@ -3,5 +3,4 @@
redef Notice::type_suppression_intervals += {
[SSH::Interesting_Hostname_Login] = 1day,
[SSH::Login] = 12hrs,
};

View file

@ -87,7 +87,7 @@ Up until this point, the script has merely done some basic setup. With the next
the script starts to define instructions to take in a given event.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 38-62
:lines: 38-71
The workhorse of the script is contained in the event handler for
``file_hash``. The :bro:see:`file_hash` event allows scripts to access
@ -95,7 +95,7 @@ the information associated with a file for which Bro's file analysis framework h
generated a hash. The event handler is passed the file itself as ``f``, the type of digest
algorithm used as ``kind`` and the hash generated as ``hash``.
On line 3, an ``if`` statement is used to check for the correct type of hash, in this case
On line 34, an ``if`` statement is used to check for the correct type of hash, in this case
a SHA1 hash. It also checks for a mime type we've defined as being of interest as defined in the
constant ``match_file_types``. The comparison is made against the expression ``f$mime_type``, which uses
the ``$`` dereference operator to check the value ``mime_type`` inside the variable ``f``. Once both
@ -113,22 +113,22 @@ this event continues and upon receipt of the values returned by
the malware was first detected and the detection rate by splitting on a text space
and storing the values returned in a local table variable. In line 12, if the table
returned by ``split1`` has two entries, indicating a successful split, we store the detection
date in ``mhr_first_detected`` and the rate in ``mhr_detect_rate`` on lines 14 and 15 respectively
date in ``mhr_first_detected`` and the rate in ``mhr_detect_rate`` on lines 18 and 14 respectively
using the appropriate conversion functions. From this point on, Bro knows it has seen a file
transmitted which has a hash that has been seen by the Team Cymru Malware Hash Registry, the rest
of the script is dedicated to producing a notice.
On line 17, the detection time is processed into a string representation and stored in
On line 19, the detection time is processed into a string representation and stored in
``readable_first_detected``. The script then compares the detection rate against the
``notice_threshold`` that was defined earlier. If the detection rate is high enough, the script
creates a concise description of the notice on line 22, a possible URL to check the sample against
creates a concise description of the notice on line 20, a possible URL to check the sample against
``virustotal.com``'s database, and makes the call to :bro:id:`NOTICE` to hand the relevant information
off to the Notice framework.
In approximately 25 lines of code, Bro provides an amazing
In approximately a few dozen lines of code, Bro provides an amazing
utility that would be incredibly difficult to implement and deploy
with other products. In truth, claiming that Bro does this in 25
lines is a misdirection; there is a truly massive number of things
with other products. In truth, claiming that Bro does this in such a small
number of lines is a misdirection; there is a truly massive number of things
going on behind-the-scenes in Bro, but it is the inclusion of the
scripting language that gives analysts access to those underlying
layers in a succinct and well defined manner.
@ -657,7 +657,7 @@ using a 20 bit subnet mask.
Because this is a script that doesn't use any kind of network
analysis, we can handle the event :bro:id:`bro_init` which is always
generated by Bro's core upon startup. On lines six and seven, two
generated by Bro's core upon startup. On lines five and six, two
locally scoped vectors are created to hold our lists of subnets and IP
addresses respectively. Then, using a set of nested ``for`` loops, we
iterate over every subnet and every IP address and use an ``if``
@ -760,7 +760,7 @@ string against which it will be tested to be on the right.
In the sample above, two local variables are declared to hold our
sample sentence and regular expression. Our regular expression in
this case will return true if the string contains either the word
``quick`` or the word ``fox``. The ``if`` statement on line six uses
``quick`` or the word ``fox``. The ``if`` statement on line eight uses
embedded matching and the ``in`` operator to check for the existence
of the pattern within the string. If the statement resolves to true,
:bro:id:`split` is called to break the string into separate pieces.
@ -947,7 +947,7 @@ Logging Framework when ``Log::write`` is called. Were there to be
any name value pairs without the ``&log`` attribute, those fields
would simply be ignored during logging but remain available for the
lifespan of the variable. The next step is to create the logging
stream with :bro:id:`Log::create_stream` which takes a Log::ID and a
stream with :bro:id:`Log::create_stream` which takes a ``Log::ID`` and a
record as its arguments. In this example, on line 25, we call the
``Log::create_stream`` method and pass ``Factor::LOG`` and the
``Factor::Info`` record as arguments. From here on out, if we issue
@ -1001,7 +1001,7 @@ filename for the current call to ``Log::write``. The definition for
this function has to take as its parameters a ``Log::ID`` called id, a
string called ``path`` and the appropriate record type for the logs called
``rec``. You can see the definition of ``mod5`` used in this example on
line one conforms to that requirement. The function simply returns
line 38 conforms to that requirement. The function simply returns
``factor-mod5`` if the factorial is divisible evenly by 5, otherwise, it
returns ``factor-non5``. In the additional ``bro_init`` event
handler, we define a locally scoped ``Log::Filter`` and assign it a
@ -1074,7 +1074,8 @@ make a call to :bro:id:`NOTICE` supplying it with an appropriate
:bro:type:`Notice::Info` record. Often times the call to ``NOTICE``
includes just the ``Notice::Type``, and a concise message. There are
however, significantly more options available when raising notices as
seen in the table below. The only field in the table below whose
seen in the definition of :bro:type:`Notice::Info`. The only field in
``Notice::Info`` whose
attributes make it a required field is the ``note`` field. Still,
good manners are always important and including a concise message in
``$msg`` and, where necessary, the contents of the connection record
@ -1086,57 +1087,6 @@ that are commonly included, ``$identifier`` and ``$suppress_for`` are
built around the automated suppression feature of the Notice Framework
which we will cover shortly.
.. todo::
Once the link to ``Notice::Info`` work I think we should take out
the table. That's too easy to get out of date.
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| Field | Type | Attributes | Use |
+=====================+==================================================================+================+========================================+
| ts | time | &log &optional | The time of the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| uid | string | &log &optional | A unique connection ID |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| id | conn_id | &log &optional | A 4-tuple to identify endpoints |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| conn | connection | &optional | Shorthand for the uid and id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| iconn | icmp_conn | &optional | Shorthand for the uid and id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| proto | transport_proto | &log &optional | Transport protocol |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| note | Notice::Type | &log | The Notice::Type of the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| msg | string | &log &optional | Human readable message |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| sub | string | &log &optional | Human readable message |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| src | addr | &log &optional | Source address if no conn_id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| dst | addr | &log &optional | Destination addr if no conn_id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| p | port | &log &optional | Port if no conn_id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| n | count | &log &optional | Count or status code |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| src_peer | event_peer | &log &optional | Peer that raised the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| peer_descr | string | &log &optional | Text description of the src_peer |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| actions | set[Notice::Action] | &log &optional | Actions applied to the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| policy_items | set[count] | &log &optional | Policy items that have been applied |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| email_body_sections | vector | &optional | Body of the email for email notices. |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| email_delay_tokens | set[string] | &optional | Delay functionality for email notices. |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| identifier | string | &optional | A unique string identifier |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| suppress_for | interval | &log &optional | Length of time to suppress a notice. |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
One of the default policy scripts raises a notice when an SSH login
has been heuristically detected and the originating hostname is one
that would raise suspicion. Effectively, the script attempts to
@ -1153,7 +1103,7 @@ possible while staying concise.
While much of the script relates to the actual detection, the parts
specific to the Notice Framework are actually quite interesting in
themselves. On line 18 the script's ``export`` block adds the value
themselves. On line 13 the script's ``export`` block adds the value
``SSH::Interesting_Hostname_Login`` to the enumerable constant
``Notice::Type`` to indicate to the Bro core that a new type of notice
is being defined. The script then calls ``NOTICE`` and defines the
@ -1222,7 +1172,7 @@ from the connection relative to the behavior that has been observed by
Bro.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
:lines: 60-63
:lines: 64-68
In the :doc:`/scripts/policy/protocols/ssl/expiring-certs.bro` script
which identifies when SSL certificates are set to expire and raises
@ -1302,9 +1252,9 @@ in the call to ``NOTICE``.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_01.bro
The Notice Policy shortcut above adds the ``Notice::Types`` of
SSH::Interesting_Hostname_Login and SSH::Login to the
Notice::emailed_types set while the shortcut below alters the length
The Notice Policy shortcut above adds the ``Notice::Type`` of
``SSH::Interesting_Hostname_Login`` to the
``Notice::emailed_types`` set while the shortcut below alters the length
of time for which those notices will be suppressed.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_02.bro

View file

@ -26,20 +26,20 @@ export {
## This option is also available as a per-filter ``$config`` option.
const use_json = F &redef;
## Format of timestamps when writing out JSON. By default, the JSON formatter will
## use double values for timestamps which represent the number of seconds from the
## UNIX epoch.
## Format of timestamps when writing out JSON. By default, the JSON
## formatter will use double values for timestamps which represent the
## number of seconds from the UNIX epoch.
const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;
## If true, include lines with log meta information such as column names
## with types, the values of ASCII logging options that are in use, and
## the time when the file was opened and closed (the latter at the end).
##
##
## If writing in JSON format, this is implicitly disabled.
const include_meta = T &redef;
## Prefix for lines with meta information.
##
##
## This option is also available as a per-filter ``$config`` option.
const meta_prefix = "#" &redef;
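As a hedged usage sketch, JSON output could be enabled from local.bro like
this (LogAscii is the writer's module name; JSON::TS_ISO8601 is assumed to
be one of the JSON::TimestampFormat values alongside the JSON::TS_EPOCH
default shown above):

redef LogAscii::use_json = T;
redef LogAscii::json_timestamps = JSON::TS_ISO8601;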

View file

@ -20,7 +20,8 @@ export {
## category along with the specific notice separating words with
## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example,
## SSH::Login is for heuristically guessed successful SSH logins.
## SSH::Password_Guessing is for hosts that have crossed a threshold of
## heuristically determined failed SSH logins.
type Type: enum {
## Notice reporting a count of how often a notice occurred.
Tally,

View file

@ -185,6 +185,7 @@ export {
["RPC_underflow"] = ACTION_LOG,
["RST_storm"] = ACTION_LOG,
["RST_with_data"] = ACTION_LOG,
["SSL_many_server_names"] = ACTION_LOG,
["simultaneous_open"] = ACTION_LOG_PER_CONN,
["spontaneous_FIN"] = ACTION_IGNORE,
["spontaneous_RST"] = ACTION_IGNORE,

View file

@ -71,7 +71,7 @@ export {
## to be logged has occurred.
ts: time &log;
## A unique identifier of the connection which triggered the
## signature match event
## signature match event.
uid: string &log &optional;
## The host which triggered the signature match event.
src_addr: addr &log &optional;

View file

@ -287,6 +287,13 @@ function parse_mozilla(unparsed_version: string): Description
if ( 2 in parts )
v = parse(parts[2])$version;
}
else if ( / Java\/[0-9]\./ in unparsed_version )
{
software_name = "Java";
parts = split_all(unparsed_version, /Java\/[0-9\._]*/);
if ( 2 in parts )
v = parse(parts[2])$version;
}
return [$version=v, $unparsed_version=unparsed_version, $name=software_name];
}

View file

@ -28,10 +28,6 @@ export {
## values for a sumstat.
global cluster_ss_request: event(uid: string, ss_name: string, cleanup: bool);
# Event sent by nodes that are collecting sumstats after receiving a
# request for the sumstat from the manager.
#global cluster_ss_response: event(uid: string, ss_name: string, data: ResultTable, done: bool, cleanup: bool);
## This event is sent by the manager in a cluster to initiate the
## collection of a single key value from a sumstat. It's typically used
## to get intermediate updates before the break interval triggers to
@ -195,6 +191,19 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
threshold_tracker[ss_name][key] = thold_index;
}
# request-key is a no-op on the workers.
# It only should be called by the manager. Due to the fact that we usually run the same scripts on the
# workers and the manager, it might also be called by the workers, so we just ignore it here.
#
# There is a small chance that people will try running it on events that are just thrown on the workers.
# This does not work at the moment and we cannot throw an error message, because we cannot distinguish it
# from the "script is running it everywhere" case. But - people should notice that they do not get results.
# Not entirely pretty, sorry :(
function request_key(ss_name: string, key: Key): Result
{
return Result();
}
@endif
@ -215,7 +224,6 @@ global stats_keys: table[string] of set[Key] &read_expire=1min
# matches the number of peer nodes that results should be coming from, the
# result is written out and deleted from here.
# Indexed on a uid.
# TODO: add an &expire_func in case not all results are received.
global done_with: table[string] of count &read_expire=1min &default=0;
# This variable is maintained by managers to track intermediate responses as

View file

@ -2812,7 +2812,7 @@ export {
## Result of an X509 certificate chain verification
type Result: record {
## OpenSSL result code
result: count;
result: int;
## Result as string
result_string: string;
## References to the final certificate chain, if verification successful. End-host certificate is first.
@ -2829,6 +2829,24 @@ export {
name: string &optional;
} &log;
}
module RADIUS;
export {
type RADIUS::AttributeList: vector of string;
type RADIUS::Attributes: table[count] of RADIUS::AttributeList;
type RADIUS::Message: record {
## The type of message (Access-Request, Access-Accept, etc.).
code : count;
## The transaction ID.
trans_id : count;
## The "authenticator" string.
authenticator : string;
## Any attributes.
attributes : RADIUS::Attributes &optional;
};
}
module GLOBAL;
@load base/bif/plugins/Bro_SNMP.types.bif

View file

@ -47,6 +47,7 @@
@load base/protocols/irc
@load base/protocols/modbus
@load base/protocols/pop3
@load base/protocols/radius
@load base/protocols/sip
@load base/protocols/snmp
@load base/protocols/smtp

View file

@ -183,7 +183,7 @@ function log_unmatched_msgs(msgs: PendingMessages)
for ( trans_id in msgs )
log_unmatched_msgs_queue(msgs[trans_id]);
msgs = PendingMessages();
clear_table(msgs);
}
function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
@ -382,9 +382,19 @@ event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priori
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, str: string) &priority=5
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec) &priority=5
{
hook DNS::do_reply(c, msg, ans, str);
local txt_strings: string = "";
for ( i in strs )
{
if ( i > 0 )
txt_strings += " ";
txt_strings += fmt("TXT %d %s", |strs[i]|, strs[i]);
}
hook DNS::do_reply(c, msg, ans, txt_strings);
}
event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5

View file

@ -76,7 +76,7 @@ event irc_dcc_message(c: connection, is_orig: bool,
dcc_expected_transfers[address, p] = c$irc;
}
event expected_connection_seen(c: connection, a: Analyzer::Tag) &priority=10
event scheduled_analyzer_applied(c: connection, a: Analyzer::Tag) &priority=10
{
local id = c$id;
if ( [id$resp_h, id$resp_p] in dcc_expected_transfers )

View file

@ -0,0 +1 @@
@load ./main

View file

@ -0,0 +1,231 @@
module RADIUS;
const msg_types: table[count] of string = {
[1] = "Access-Request",
[2] = "Access-Accept",
[3] = "Access-Reject",
[4] = "Accounting-Request",
[5] = "Accounting-Response",
[11] = "Access-Challenge",
[12] = "Status-Server",
[13] = "Status-Client",
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const attr_types: table[count] of string = {
[1] = "User-Name",
[2] = "User-Password",
[3] = "CHAP-Password",
[4] = "NAS-IP-Address",
[5] = "NAS-Port",
[6] = "Service-Type",
[7] = "Framed-Protocol",
[8] = "Framed-IP-Address",
[9] = "Framed-IP-Netmask",
[10] = "Framed-Routing",
[11] = "Filter-Id",
[12] = "Framed-MTU",
[13] = "Framed-Compression",
[14] = "Login-IP-Host",
[15] = "Login-Service",
[16] = "Login-TCP-Port",
[18] = "Reply-Message",
[19] = "Callback-Number",
[20] = "Callback-Id",
[22] = "Framed-Route",
[23] = "Framed-IPX-Network",
[24] = "State",
[25] = "Class",
[26] = "Vendor-Specific",
[27] = "Session-Timeout",
[28] = "Idle-Timeout",
[29] = "Termination-Action",
[30] = "Called-Station-Id",
[31] = "Calling-Station-Id",
[32] = "NAS-Identifier",
[33] = "Proxy-State",
[34] = "Login-LAT-Service",
[35] = "Login-LAT-Node",
[36] = "Login-LAT-Group",
[37] = "Framed-AppleTalk-Link",
[38] = "Framed-AppleTalk-Network",
[39] = "Framed-AppleTalk-Zone",
[40] = "Acct-Status-Type",
[41] = "Acct-Delay-Time",
[42] = "Acct-Input-Octets",
[43] = "Acct-Output-Octets",
[44] = "Acct-Session-Id",
[45] = "Acct-Authentic",
[46] = "Acct-Session-Time",
[47] = "Acct-Input-Packets",
[48] = "Acct-Output-Packets",
[49] = "Acct-Terminate-Cause",
[50] = "Acct-Multi-Session-Id",
[51] = "Acct-Link-Count",
[52] = "Acct-Input-Gigawords",
[53] = "Acct-Output-Gigawords",
[55] = "Event-Timestamp",
[56] = "Egress-VLANID",
[57] = "Ingress-Filters",
[58] = "Egress-VLAN-Name",
[59] = "User-Priority-Table",
[60] = "CHAP-Challenge",
[61] = "NAS-Port-Type",
[62] = "Port-Limit",
[63] = "Login-LAT-Port",
[64] = "Tunnel-Type",
[65] = "Tunnel-Medium-Type",
[66] = "Tunnel-Client-EndPoint",
[67] = "Tunnel-Server-EndPoint",
[68] = "Acct-Tunnel-Connection",
[69] = "Tunnel-Password",
[70] = "ARAP-Password",
[71] = "ARAP-Features",
[72] = "ARAP-Zone-Access",
[73] = "ARAP-Security",
[74] = "ARAP-Security-Data",
[75] = "Password-Retry",
[76] = "Prompt",
[77] = "Connect-Info",
[78] = "Configuration-Token",
[79] = "EAP-Message",
[80] = "Message Authenticator",
[81] = "Tunnel-Private-Group-ID",
[82] = "Tunnel-Assignment-ID",
[83] = "Tunnel-Preference",
[84] = "ARAP-Challenge-Response",
[85] = "Acct-Interim-Interval",
[86] = "Acct-Tunnel-Packets-Lost",
[87] = "NAS-Port-Id",
[88] = "Framed-Pool",
[89] = "CUI",
[90] = "Tunnel-Client-Auth-ID",
[91] = "Tunnel-Server-Auth-ID",
[92] = "NAS-Filter-Rule",
[94] = "Originating-Line-Info",
[95] = "NAS-IPv6-Address",
[96] = "Framed-Interface-Id",
[97] = "Framed-IPv6-Prefix",
[98] = "Login-IPv6-Host",
[99] = "Framed-IPv6-Route",
[100] = "Framed-IPv6-Pool",
[101] = "Error-Cause",
[102] = "EAP-Key-Name",
[103] = "Digest-Response",
[104] = "Digest-Realm",
[105] = "Digest-Nonce",
[106] = "Digest-Response-Auth",
[107] = "Digest-Nextnonce",
[108] = "Digest-Method",
[109] = "Digest-URI",
[110] = "Digest-Qop",
[111] = "Digest-Algorithm",
[112] = "Digest-Entity-Body-Hash",
[113] = "Digest-CNonce",
[114] = "Digest-Nonce-Count",
[115] = "Digest-Username",
[116] = "Digest-Opaque",
[117] = "Digest-Auth-Param",
[118] = "Digest-AKA-Auts",
[119] = "Digest-Domain",
[120] = "Digest-Stale",
[121] = "Digest-HA1",
[122] = "SIP-AOR",
[123] = "Delegated-IPv6-Prefix",
[124] = "MIP6-Feature-Vector",
[125] = "MIP6-Home-Link-Prefix",
[126] = "Operator-Name",
[127] = "Location-Information",
[128] = "Location-Data",
[129] = "Basic-Location-Policy-Rules",
[130] = "Extended-Location-Policy-Rules",
[131] = "Location-Capable",
[132] = "Requested-Location-Info",
[133] = "Framed-Management-Protocol",
[134] = "Management-Transport-Protection",
[135] = "Management-Policy-Id",
[136] = "Management-Privilege-Level",
[137] = "PKM-SS-Cert",
[138] = "PKM-CA-Cert",
[139] = "PKM-Config-Settings",
[140] = "PKM-Cryptosuite-List",
[141] = "PKM-SAID",
[142] = "PKM-SA-Descriptor",
[143] = "PKM-Auth-Key",
[144] = "DS-Lite-Tunnel-Name",
[145] = "Mobile-Node-Identifier",
[146] = "Service-Selection",
[147] = "PMIP6-Home-LMA-IPv6-Address",
[148] = "PMIP6-Visited-LMA-IPv6-Address",
[149] = "PMIP6-Home-LMA-IPv4-Address",
[150] = "PMIP6-Visited-LMA-IPv4-Address",
[151] = "PMIP6-Home-HN-Prefix",
[152] = "PMIP6-Visited-HN-Prefix",
[153] = "PMIP6-Home-Interface-ID",
[154] = "PMIP6-Visited-Interface-ID",
[155] = "PMIP6-Home-IPv4-HoA",
[156] = "PMIP6-Visited-IPv4-HoA",
[157] = "PMIP6-Home-DHCP4-Server-Address",
[158] = "PMIP6-Visited-DHCP4-Server-Address",
[159] = "PMIP6-Home-DHCP6-Server-Address",
[160] = "PMIP6-Visited-DHCP6-Server-Address",
[161] = "PMIP6-Home-IPv4-Gateway",
[162] = "PMIP6-Visited-IPv4-Gateway",
[163] = "EAP-Lower-Layer",
[164] = "GSS-Acceptor-Service-Name",
[165] = "GSS-Acceptor-Host-Name",
[166] = "GSS-Acceptor-Service-Specifics",
[167] = "GSS-Acceptor-Realm-Name",
[168] = "Framed-IPv6-Address",
[169] = "DNS-Server-IPv6-Address",
[170] = "Route-IPv6-Information",
[171] = "Delegated-IPv6-Prefix-Pool",
[172] = "Stateful-IPv6-Address-Pool",
[173] = "IPv6-6rd-Configuration"
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const nas_port_types: table[count] of string = {
[0] = "Async",
[1] = "Sync",
[2] = "ISDN Sync",
[3] = "ISDN Async V.120",
[4] = "ISDN Async V.110",
[5] = "Virtual",
[6] = "PIAFS",
[7] = "HDLC Clear Channel",
[8] = "X.25",
[9] = "X.75",
[10] = "G.3 Fax",
[11] = "SDSL - Symmetric DSL",
[12] = "ADSL-CAP - Asymmetric DSL, Carrierless Amplitude Phase Modulation",
[13] = "ADSL-DMT - Asymmetric DSL, Discrete Multi-Tone",
[14] = "IDSL - ISDN Digital Subscriber Line",
[15] = "Ethernet",
[16] = "xDSL - Digital Subscriber Line of unknown type",
[17] = "Cable",
[18] = "Wireless - Other",
[19] = "Wireless - IEEE 802.11"
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const service_types: table[count] of string = {
[1] = "Login",
[2] = "Framed",
[3] = "Callback Login",
[4] = "Callback Framed",
[5] = "Outbound",
[6] = "Administrative",
[7] = "NAS Prompt",
[8] = "Authenticate Only",
[9] = "Callback NAS Prompt",
[10] = "Call Check",
[11] = "Callback Administrative",
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const framed_protocol_types: table[count] of string = {
[1] = "PPP",
[2] = "SLIP",
[3] = "AppleTalk Remote Access Protocol (ARAP)",
[4] = "Gandalf proprietary SingleLink/MultiLink protocol",
[5] = "Xylogics proprietary IPX/SLIP",
[6] = "X.75 Synchronous"
} &default=function(i: count): string { return fmt("unknown-%d", i); };

View file

@ -0,0 +1,126 @@
##! Implements base functionality for RADIUS analysis. Generates the radius.log file.
module RADIUS;
@load ./consts.bro
@load base/utils/addrs
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the event happened.
ts : time &log;
## Unique ID for the connection.
uid : string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id : conn_id &log;
## The username, if present.
username : string &log &optional;
## MAC address, if present.
mac : string &log &optional;
## Remote IP address, if present.
remote_ip : addr &log &optional;
## Connect info, if present.
connect_info : string &log &optional;
## Successful or failed authentication.
result : string &log &optional;
## Whether this has already been logged and can be ignored.
logged : bool &optional;
};
## The amount of time we wait for an authentication response before
## expiring it.
const expiration_interval = 10secs &redef;
## Logs an authentication attempt if we didn't see a response in time.
##
## t: A table of Info records.
##
## idx: The index of the connection$radius table corresponding to the
## radius authentication about to expire.
##
## Returns: 0secs, which when this function is used as an
## :bro:attr:`&expire_func`, indicates to remove the element at
## *idx* immediately.
global expire: function(t: table[count] of Info, idx: count): interval;
## Event that can be handled to access the RADIUS record as it is sent on
## to the logging framework.
global log_radius: event(rec: Info);
}
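Because expiration_interval is declared with &redef, a deployment can tune how long to wait for a server response; a purely illustrative sketch (the 30-second value is an assumption, not a recommendation):

redef RADIUS::expiration_interval = 30secs;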
redef record connection += {
radius: table[count] of Info &optional &write_expire=expiration_interval &expire_func=expire;
};
const ports = { 1812/udp };
event bro_init() &priority=5
{
Log::create_stream(RADIUS::LOG, [$columns=Info, $ev=log_radius]);
Analyzer::register_for_ports(Analyzer::ANALYZER_RADIUS, ports);
}
event radius_message(c: connection, result: RADIUS::Message)
{
local info: Info;
if ( c?$radius && result$trans_id in c$radius )
info = c$radius[result$trans_id];
else
{
c$radius = table();
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
switch ( RADIUS::msg_types[result$code] ) {
case "Access-Request":
if ( result?$attributes ) {
# User-Name
if ( ! info?$username && 1 in result$attributes )
info$username = result$attributes[1][0];
# Calling-Station-Id (we expect this to be a MAC)
if ( ! info?$mac && 31 in result$attributes )
info$mac = normalize_mac(result$attributes[31][0]);
# Tunnel-Client-EndPoint (useful for VPNs)
if ( ! info?$remote_ip && 66 in result$attributes )
info$remote_ip = to_addr(result$attributes[66][0]);
# Connect-Info
if ( ! info?$connect_info && 77 in result$attributes )
info$connect_info = result$attributes[77][0];
}
break;
case "Access-Accept":
info$result = "success";
break;
case "Access-Reject":
info$result = "failed";
break;
}
if ( info?$result && ! info?$logged )
{
info$logged = T;
Log::write(RADIUS::LOG, info);
}
c$radius[result$trans_id] = info;
}
function expire(t: table[count] of Info, idx: count): interval
{
t[idx]$result = "unknown";
Log::write(RADIUS::LOG, t[idx]);
return 0secs;
}
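For orientation, a hedged sketch of how a local policy script might consume the log_radius event defined above; the event name and record fields come from this script, but the policy inside the handler is hypothetical:

event RADIUS::log_radius(rec: RADIUS::Info)
	{
	# Illustrative only: surface failed authentications that carry a username.
	if ( rec?$result && rec$result == "failed" && rec?$username )
		print fmt("RADIUS authentication failure for %s from %s",
		          rec$username, rec$id$orig_h);
	}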

View file

@ -48,6 +48,6 @@ event bro_init() &priority=5
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( c?$smtp )
if ( c?$smtp && !c$smtp$tls )
c$smtp$fuids[|c$smtp$fuids|] = f$id;
}

View file

@ -50,6 +50,8 @@ export {
## Value of the User-Agent header from the client.
user_agent: string &log &optional;
## Indicates that the connection has switched to using TLS.
tls: bool &log &default=F;
## Indicates if the "Received: from" headers should still be
## processed.
process_received_from: bool &default=T;
@ -140,7 +142,10 @@ function set_smtp_session(c: connection)
function smtp_message(c: connection)
{
if ( c$smtp$has_client_activity )
{
Log::write(SMTP::LOG, c$smtp);
c$smtp = new_smtp_log(c);
}
}
event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &priority=5
@ -148,9 +153,6 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
set_smtp_session(c);
local upper_command = to_upper(command);
if ( upper_command != "QUIT" )
c$smtp$has_client_activity = T;
if ( upper_command == "HELO" || upper_command == "EHLO" )
{
c$smtp_state$helo = arg;
@ -162,12 +164,17 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
if ( ! c$smtp?$rcptto )
c$smtp$rcptto = set();
add c$smtp$rcptto[split1(arg, /:[[:blank:]]*/)[2]];
c$smtp$has_client_activity = T;
}
else if ( upper_command == "MAIL" && /^[fF][rR][oO][mM]:/ in arg )
{
# Flush last message in case we didn't see the server's acknowledgement.
smtp_message(c);
local partially_done = split1(arg, /:[[:blank:]]*/)[2];
c$smtp$mailfrom = split1(partially_done, /[[:blank:]]?/)[1];
c$smtp$has_client_activity = T;
}
}
@ -196,7 +203,6 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
event mime_one_header(c: connection, h: mime_header_rec) &priority=5
{
if ( ! c?$smtp ) return;
c$smtp$has_client_activity = T;
if ( h$name == "MESSAGE-ID" )
c$smtp$msg_id = h$value;
@ -276,6 +282,15 @@ event connection_state_remove(c: connection) &priority=-5
smtp_message(c);
}
event smtp_starttls(c: connection) &priority=5
{
if ( c?$smtp )
{
c$smtp$tls = T;
c$smtp$has_client_activity = T;
}
}
function describe(rec: Info): string
{
if ( rec?$mailfrom && rec?$rcptto )

View file

@ -1,15 +1,182 @@
##! Enables analysis of SNMP datagrams.
##! Enables analysis and logging of SNMP datagrams.
module SNMP;
export {
redef enum Log::ID += { LOG };
## Information tracked per SNMP session.
type Info: record {
## Timestamp of first packet belonging to the SNMP session.
ts: time &log;
## The unique ID for the connection.
uid: string &log;
## The connection's 5-tuple of addresses/ports (ports inherently
## include transport protocol information)
id: conn_id &log;
## The amount of time between the first packet belonging to
## the SNMP session and the latest one seen.
duration: interval &log &default=0secs;
## The version of SNMP being used.
version: string &log;
## The community string of the first SNMP packet associated with
## the session. This is used as part of SNMP's (v1 and v2c)
## administrative/security framework. See :rfc:`1157` or :rfc:`1901`.
community: string &log &optional;
## The number of variable bindings in GetRequest/GetNextRequest PDUs
## seen for the session.
get_requests: count &log &default=0;
## The number of variable bindings in GetBulkRequest PDUs seen for
## the session.
get_bulk_requests: count &log &default=0;
## The number of variable bindings in GetResponse/Response PDUs seen
## for the session.
get_responses: count &log &default=0;
## The number of variable bindings in SetRequest PDUs seen for
## the session.
set_requests: count &log &default=0;
## A system description of the SNMP responder endpoint.
display_string: string &log &optional;
## The time since which the SNMP responder endpoint claims to have
## been up.
up_since: time &log &optional;
};
## Maps an SNMP version integer to a human readable string.
const version_map: table[count] of string = {
[0] = "1",
[1] = "2c",
[3] = "3",
} &redef &default="unknown";
## Event that can be handled to access the SNMP record as it is sent on
## to the logging framework.
global log_snmp: event(rec: Info);
}
const ports = { 161/udp, 162/udp };
redef record connection += {
snmp: SNMP::Info &optional;
};
const ports = { 161/udp, 162/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_SNMP, ports);
Log::create_stream(SNMP::LOG, [$columns=SNMP::Info, $ev=log_snmp]);
}
function init_state(c: connection, h: SNMP::Header): Info
{
if ( ! c?$snmp )
{
c$snmp = Info($ts=network_time(),
$uid=c$uid, $id=c$id,
$version=version_map[h$version]);
}
local s = c$snmp;
if ( ! s?$community )
{
if ( h?$v1 )
s$community = h$v1$community;
else if ( h?$v2 )
s$community = h$v2$community;
}
s$duration = network_time() - s$ts;
return s;
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$snmp )
Log::write(LOG, c$snmp);
}
event snmp_get_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_requests += |pdu$bindings|;
}
event snmp_get_bulk_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::BulkPDU) &priority=5
{
local s = init_state(c, header);
s$get_bulk_requests += |pdu$bindings|;
}
event snmp_get_next_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_requests += |pdu$bindings|;
}
event snmp_response(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_responses += |pdu$bindings|;
for ( i in pdu$bindings )
{
local binding = pdu$bindings[i];
if ( binding$oid == "1.3.6.1.2.1.1.1.0" && binding$value?$octets )
c$snmp$display_string = binding$value$octets;
else if ( binding$oid == "1.3.6.1.2.1.1.3.0" && binding$value?$unsigned )
{
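# sysUpTime (OID 1.3.6.1.2.1.1.3.0) is reported in TimeTicks, i.e.
# hundredths of a second, so dividing by 100 converts it to seconds.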
local up_seconds = binding$value$unsigned / 100.0;
s$up_since = network_time() - double_to_interval(up_seconds);
}
}
}
event snmp_set_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$set_requests += |pdu$bindings|;
}
event snmp_trap(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::TrapPDU) &priority=5
{
init_state(c, header);
}
event snmp_inform_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_trapV2(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_report(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_unknown_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
{
init_state(c, header);
}
event snmp_unknown_scoped_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
{
init_state(c, header);
}
event snmp_encrypted_pdu(c: connection, is_orig: bool, header: SNMP::Header) &priority=5
{
init_state(c, header);
}
#event snmp_unknown_header_version(c: connection, is_orig: bool, version: count) &priority=5
# {
# }
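A hedged usage sketch for the stream defined above; the handler name and record fields are taken from this script, while the policy of singling out non-default community strings is hypothetical:

event SNMP::log_snmp(rec: SNMP::Info)
	{
	# Illustrative only.
	if ( rec?$community && rec$community != "public" )
		print fmt("SNMP session to %s using community \"%s\"",
		          rec$id$resp_h, rec$community);
	}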

View file

@ -15,6 +15,32 @@ export {
[TLSv12] = "TLSv12",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## TLS content types:
const CHANGE_CIPHER_SPEC = 20;
const ALERT = 21;
const HANDSHAKE = 22;
const APPLICATION_DATA = 23;
const HEARTBEAT = 24;
const V2_ERROR = 300;
const V2_CLIENT_HELLO = 301;
const V2_CLIENT_MASTER_KEY = 302;
const V2_SERVER_HELLO = 304;
## TLS Handshake types:
const HELLO_REQUEST = 0;
const CLIENT_HELLO = 1;
const SERVER_HELLO = 2;
const SESSION_TICKET = 4; # RFC 5077
const CERTIFICATE = 11;
const SERVER_KEY_EXCHANGE = 12;
const CERTIFICATE_REQUEST = 13;
const SERVER_HELLO_DONE = 14;
const CERTIFICATE_VERIFY = 15;
const CLIENT_KEY_EXCHANGE = 16;
const FINISHED = 20;
const CERTIFICATE_URL = 21; # RFC 3546
const CERTIFICATE_STATUS = 22; # RFC 3546
## Mapping between numeric codes and human readable strings for alert
## levels.
const alert_levels: table[count] of string = {
@ -83,6 +109,10 @@ export {
[16] = "application_layer_protocol_negotiation",
[17] = "status_request_v2",
[18] = "signed_certificate_timestamp",
[19] = "client_certificate_type",
[20] = "server_certificate_type",
[21] = "padding", # temporary till 2015-03-12
[22] = "encrypt_then_mac", # temporary till 2015-06-05
[35] = "SessionTicket TLS",
[40] = "extended_random",
[13172] = "next_protocol_negotiation",
@ -94,6 +124,49 @@ export {
[65281] = "renegotiation_info"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for SSL/TLS elliptic curves.
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8
const ec_curves: table[count] of string = {
[1] = "sect163k1",
[2] = "sect163r1",
[3] = "sect163r2",
[4] = "sect193r1",
[5] = "sect193r2",
[6] = "sect233k1",
[7] = "sect233r1",
[8] = "sect239k1",
[9] = "sect283k1",
[10] = "sect283r1",
[11] = "sect409k1",
[12] = "sect409r1",
[13] = "sect571k1",
[14] = "sect571r1",
[15] = "secp160k1",
[16] = "secp160r1",
[17] = "secp160r2",
[18] = "secp192k1",
[19] = "secp192r1",
[20] = "secp224k1",
[21] = "secp224r1",
[22] = "secp256k1",
[23] = "secp256r1",
[24] = "secp384r1",
[25] = "secp521r1",
[26] = "brainpoolP256r1",
[27] = "brainpoolP384r1",
[28] = "brainpoolP512r1",
[0xFF01] = "arbitrary_explicit_prime_curves",
[0xFF02] = "arbitrary_explicit_char2_curves"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for SSL/TLS EC point formats.
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-9
const ec_point_formats: table[count] of string = {
[0] = "uncompressed",
[1] = "ansiX962_compressed_prime",
[2] = "ansiX962_compressed_char2"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
# SSLv2
const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080;
const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080;
@ -444,6 +517,10 @@ export {
const TLS_PSK_WITH_AES_256_CCM_8 = 0xC0A9;
const TLS_PSK_DHE_WITH_AES_128_CCM_8 = 0xC0AA;
const TLS_PSK_DHE_WITH_AES_256_CCM_8 = 0xC0AB;
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM = 0xC0AC;
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM = 0xC0AD;
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 = 0xC0AE;
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 = 0xC0AF;
# draft-agl-tls-chacha20poly1305-02
const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC13;
const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC14;
@ -807,6 +884,10 @@ export {
[TLS_PSK_WITH_AES_256_CCM_8] = "TLS_PSK_WITH_AES_256_CCM_8",
[TLS_PSK_DHE_WITH_AES_128_CCM_8] = "TLS_PSK_DHE_WITH_AES_128_CCM_8",
[TLS_PSK_DHE_WITH_AES_256_CCM_8] = "TLS_PSK_DHE_WITH_AES_256_CCM_8",
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM",
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM",
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8",
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8",
[TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
@ -821,42 +902,4 @@ export {
[TLS_EMPTY_RENEGOTIATION_INFO_SCSV] = "TLS_EMPTY_RENEGOTIATION_INFO_SCSV",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between the constants and string values for SSL/TLS errors.
const x509_errors: table[count] of string = {
[0] = "ok",
[1] = "unable to get issuer cert",
[2] = "unable to get crl",
[3] = "unable to decrypt cert signature",
[4] = "unable to decrypt crl signature",
[5] = "unable to decode issuer public key",
[6] = "cert signature failure",
[7] = "crl signature failure",
[8] = "cert not yet valid",
[9] = "cert has expired",
[10] = "crl not yet valid",
[11] = "crl has expired",
[12] = "error in cert not before field",
[13] = "error in cert not after field",
[14] = "error in crl last update field",
[15] = "error in crl next update field",
[16] = "out of mem",
[17] = "depth zero self signed cert",
[18] = "self signed cert in chain",
[19] = "unable to get issuer cert locally",
[20] = "unable to verify leaf signature",
[21] = "cert chain too long",
[22] = "cert revoked",
[23] = "invalid ca",
[24] = "path length exceeded",
[25] = "invalid purpose",
[26] = "cert untrusted",
[27] = "cert rejected",
[28] = "subject issuer mismatch",
[29] = "akid skid mismatch",
[30] = "akid issuer serial mismatch",
[31] = "keyusage no certsign",
[32] = "unable to get crl issuer",
[33] = "unhandled critical extension",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
}

View file

@ -52,22 +52,8 @@ export {
function get_file_handle(c: connection, is_orig: bool): string
{
set_session(c);
local depth: count;
if ( is_orig )
{
depth = c$ssl$client_depth;
++c$ssl$client_depth;
}
else
{
depth = c$ssl$server_depth;
++c$ssl$server_depth;
}
return cat(Analyzer::ANALYZER_SSL, c$start_time, is_orig, id_string(c$id), depth);
# Unused. File handles are generated in the analyzer.
return "";
}
function describe_file(f: fa_file): string
@ -135,13 +121,15 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
event ssl_established(c: connection) &priority=6
{
# update subject and issuer information
if ( c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 )
if ( c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 &&
c$ssl$cert_chain[0]?$x509 )
{
c$ssl$subject = c$ssl$cert_chain[0]$x509$certificate$subject;
c$ssl$issuer = c$ssl$cert_chain[0]$x509$certificate$issuer;
}
if ( c$ssl?$client_cert_chain && |c$ssl$client_cert_chain| > 0 )
if ( c$ssl?$client_cert_chain && |c$ssl$client_cert_chain| > 0 &&
c$ssl$client_cert_chain[0]?$x509 )
{
c$ssl$client_subject = c$ssl$client_cert_chain[0]$x509$certificate$subject;
c$ssl$client_issuer = c$ssl$client_cert_chain[0]$x509$certificate$issuer;

View file

@ -19,6 +19,8 @@ export {
version: string &log &optional;
## SSL/TLS cipher suite that the server chose.
cipher: string &log &optional;
## Elliptic curve the server chose when using ECDH/ECDHE.
curve: string &log &optional;
## Value of the Server Name Indicator SSL/TLS extension. It
## indicates the server name that the client was requesting.
server_name: string &log &optional;
@ -159,12 +161,23 @@ event ssl_server_hello(c: connection, version: count, possible_ts: time, server_
c$ssl$cipher = cipher_desc[cipher];
}
event ssl_extension(c: connection, is_orig: bool, code: count, val: string) &priority=5
event ssl_server_curve(c: connection, curve: count) &priority=5
{
set_session(c);
if ( is_orig && extensions[code] == "server_name" )
c$ssl$server_name = sub_bytes(val, 6, |val|);
c$ssl$curve = ec_curves[curve];
}
event ssl_extension_server_name(c: connection, is_orig: bool, names: string_vec) &priority=5
{
set_session(c);
if ( is_orig && |names| > 0 )
{
c$ssl$server_name = names[0];
if ( |names| > 1 )
event conn_weird("SSL_many_server_names", c, cat(names));
}
}
event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priority=5

View file

@ -1,4 +1,4 @@
##! Functions for parsing and manipulating IP addresses.
##! Functions for parsing and manipulating IP and MAC addresses.
# Regular expressions for matching IP addresses in strings.
const ipv4_addr_regex = /[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}/;
@ -119,3 +119,30 @@ function addr_to_uri(a: addr): string
else
return fmt("[%s]", a);
}
## Given a string, extracts the hex digits and returns a MAC address in
## the format: 00:a0:32:d7:81:8f. If the string doesn't contain 12 or 16 hex
## digits, an empty string is returned.
##
## a: the string to normalize.
##
## Returns: a normalized MAC address, or an empty string in the case of an error.
function normalize_mac(a: string): string
{
local result = to_lower(gsub(a, /[^A-Fa-f0-9]/, ""));
local octets: string_vec;
if ( |result| == 12 )
{
octets = str_split(result, vector(2, 4, 6, 8, 10));
return fmt("%s:%s:%s:%s:%s:%s", octets[1], octets[2], octets[3], octets[4], octets[5], octets[6]);
}
if ( |result| == 16 )
{
octets = str_split(result, vector(2, 4, 6, 8, 10, 12, 14));
return fmt("%s:%s:%s:%s:%s:%s:%s:%s", octets[1], octets[2], octets[3], octets[4], octets[5], octets[6], octets[7], octets[8]);
}
return "";
}
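A few illustrative calls (input values made up) showing the behavior described in the comment above:

print normalize_mac("00-A0-32-D7-81-8F");   # 00:a0:32:d7:81:8f
print normalize_mac("00a0.32d7.818f");      # 00:a0:32:d7:81:8f
print normalize_mac("not a mac");           # "" (neither 12 nor 16 hex digits)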

View file

@ -9,10 +9,16 @@ event http_header(c: connection, is_orig: bool, name: string, value: string)
switch ( name )
{
case "HOST":
Intel::seen([$indicator=value,
$indicator_type=Intel::DOMAIN,
$conn=c,
$where=HTTP::IN_HOST_HEADER]);
if ( is_valid_ip(value) )
Intel::seen([$host=to_addr(value),
$indicator_type=Intel::ADDR,
$conn=c,
$where=HTTP::IN_HOST_HEADER]);
else
Intel::seen([$indicator=value,
$indicator_type=Intel::DOMAIN,
$conn=c,
$where=HTTP::IN_HOST_HEADER]);
break;
case "REFERER":

View file

@ -2,10 +2,9 @@
@load base/protocols/ssl
@load ./where-locations
event ssl_extension(c: connection, is_orig: bool, code: count, val: string)
event ssl_extension_server_name(c: connection, is_orig: bool, names: string_vec)
{
if ( is_orig && SSL::extensions[code] == "server_name" &&
c?$ssl && c$ssl?$server_name )
if ( is_orig && c?$ssl && c$ssl?$server_name )
Intel::seen([$indicator=c$ssl$server_name,
$indicator_type=Intel::DOMAIN,
$conn=c,

View file

@ -1,6 +1,6 @@
@load ./facebook
@load ./gmail
@load ./google
@load ./netflix
@load ./pandora
@load ./youtube
#@load ./gmail
#@load ./google
#@load ./netflix
#@load ./pandora
#@load ./youtube

View file

@ -82,7 +82,7 @@ event bro_init() &priority=5
++lb_proc_track[that_node$ip, that_node$interface];
if ( total_lb_procs > 1 )
{
that_node$lb_filter = PacketFilter::sample_filter(total_lb_procs, this_lb_proc);
that_node$lb_filter = PacketFilter::sampling_filter(total_lb_procs, this_lb_proc);
Communication::nodes[no]$capture_filter = that_node$lb_filter;
}
}

View file

@ -38,27 +38,32 @@ event ssl_established(c: connection) &priority=3
{
# If there are no certificates or we are not interested in the server, just return.
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! addr_matches_host(c$id$resp_h, notify_certs_expiration) )
! addr_matches_host(c$id$resp_h, notify_certs_expiration) ||
! c$ssl$cert_chain[0]?$x509 || ! c$ssl$cert_chain[0]?$sha1 )
return;
local fuid = c$ssl$cert_chain_fuids[0];
local cert = c$ssl$cert_chain[0]$x509$certificate;
local hash = c$ssl$cert_chain[0]$sha1;
if ( cert$not_valid_before > network_time() )
NOTICE([$note=Certificate_Not_Valid_Yet,
$conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s isn't valid until %T", cert$subject, cert$not_valid_before),
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
else if ( cert$not_valid_after < network_time() )
NOTICE([$note=Certificate_Expired,
$conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s expired at %T", cert$subject, cert$not_valid_after),
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
else if ( cert$not_valid_after - notify_when_cert_expiring_in < network_time() )
NOTICE([$note=Certificate_Expires_Soon,
$msg=fmt("Certificate %s is going to expire at %T", cert$subject, cert$not_valid_after),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
}

View file

@ -29,7 +29,8 @@ global extracted_certs: set[string] = set() &read_expire=1hr &redef;
event ssl_established(c: connection) &priority=5
{
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! c$ssl$cert_chain[0]?$x509 )
return;
if ( ! addr_matches_host(c$id$resp_h, extract_certs_pem) )

View file

@ -0,0 +1,238 @@
##! Detect the TLS heartbleed attack. See http://heartbleed.com for more.
@load base/protocols/ssl
@load base/frameworks/notice
module Heartbleed;
export {
redef enum Notice::Type += {
## Indicates that a host performed a heartbleed attack or scan.
SSL_Heartbeat_Attack,
## Indicates that a host performing a heartbleed attack was probably successful.
SSL_Heartbeat_Attack_Success,
## Indicates we saw heartbeat requests with odd length. Probably an attack or scan.
SSL_Heartbeat_Odd_Length,
## Indicates we saw many heartbeat requests without a reply. Might be an attack.
SSL_Heartbeat_Many_Requests
};
}
# Do not disable analyzers after detection - otherwise we will not notice
# encrypted attacks.
redef SSL::disable_analyzer_after_detection=F;
redef record SSL::Info += {
last_originator_heartbeat_request_size: count &optional;
last_responder_heartbeat_request_size: count &optional;
originator_heartbeats: count &default=0;
responder_heartbeats: count &default=0;
# Unencrypted connections - whether an exploit attempt has been detected yet.
heartbleed_detected: bool &default=F;
# Count number of appdata packages and bytes exchanged so far.
enc_appdata_packages: count &default=0;
enc_appdata_bytes: count &default=0;
};
type min_length: record {
cipher: pattern;
min_length: count;
};
global min_lengths: vector of min_length = vector();
global min_lengths_tls11: vector of min_length = vector();
event bro_init()
{
# Minimum length a heartbeat packet must have for different cipher suites.
# Note - TLS 1.1+ and 1.0 have different lengths :(
# This should be all cipher suites usually supported by vulnerable servers.
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA384$/, $min_length=96];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA256$/, $min_length=80];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA$/, $min_length=64];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA256$/, $min_length=80];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA$/, $min_length=64];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_256_CBC_SHA$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_128_CBC_SHA$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_DES_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
min_lengths[|min_lengths|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
min_lengths[|min_lengths|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
min_lengths[|min_lengths|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
}
event ssl_heartbeat(c: connection, is_orig: bool, length: count, heartbeat_type: count, payload_length: count, payload: string)
{
if ( ! c?$ssl )
return;
if ( heartbeat_type == 1 )
{
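# A heartbeat record carries a 1-byte type plus a 2-byte payload length
# (3 bytes), and RFC 6520 requires at least 16 bytes of padding, so a
# claimed payload larger than length - 3 - 16 cannot be legitimate.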
local checklength: count = (length<(3+16)) ? length : (length - 3 - 16);
if ( payload_length > checklength )
{
c$ssl$heartbleed_detected = T;
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack,
$msg=fmt("An TLS heartbleed attack was detected! Record length %d. Payload length %d", length, payload_length),
$conn=c,
$identifier=cat(c$uid, length, payload_length)
]);
}
else if ( is_orig )
{
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack,
$msg=fmt("Heartbeat request before encryption. Probable Scan without exploit attempt. Message length: %d. Payload length: %d", length, payload_length),
$conn=c,
$n=length,
$identifier=cat(c$uid, length)
]);
}
}
if ( heartbeat_type == 2 && c$ssl$heartbleed_detected )
{
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack_Success,
$msg=fmt("An TLS heartbleed attack detected before was probably exploited. Message length: %d. Payload length: %d", length, payload_length),
$conn=c,
$identifier=c$uid
]);
}
}
event ssl_encrypted_heartbeat(c: connection, is_orig: bool, length: count)
{
if ( is_orig )
++c$ssl$originator_heartbeats;
else
++c$ssl$responder_heartbeats;
local duration = network_time() - c$start_time;
if ( c$ssl$enc_appdata_packages == 0 )
NOTICE([$note=SSL_Heartbeat_Attack,
$msg=fmt("Heartbeat before ciphertext. Probable attack or scan. Length: %d, is_orig: %d", length, is_orig),
$conn=c,
$n=length,
$identifier=fmt("%s%s", c$uid, "early")
]);
else if ( duration < 1min )
NOTICE([$note=SSL_Heartbeat_Attack,
$msg=fmt("Heartbeat within first minute. Possible attack or scan. Length: %d, is_orig: %d, time: %s", length, is_orig, duration),
$conn=c,
$n=length,
$identifier=fmt("%s%s", c$uid, "early")
]);
if ( c$ssl$originator_heartbeats > c$ssl$responder_heartbeats + 3 )
NOTICE([$note=SSL_Heartbeat_Many_Requests,
$msg=fmt("More than 3 heartbeat requests without replies from server. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
$conn=c,
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
]);
if ( c$ssl$responder_heartbeats > c$ssl$originator_heartbeats + 3 )
NOTICE([$note=SSL_Heartbeat_Many_Requests,
$msg=fmt("Server sending more heartbeat responses than requests seen. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
$conn=c,
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
]);
if ( is_orig && length < 19 )
NOTICE([$note=SSL_Heartbeat_Odd_Length,
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack or scan. Message length: %d. Cipher: %s. Time: %f", length, c$ssl$cipher, duration),
$conn=c,
$n=length,
$identifier=fmt("%s-weak-%d", c$uid, length)
]);
# Examine request lengths based on used cipher...
local min_length_choice: vector of min_length;
if ( (c$ssl$version == "TLSv11") || (c$ssl$version == "TLSv12") ) # tls 1.1+ have different lengths for CBC
min_length_choice = min_lengths_tls11;
else
min_length_choice = min_lengths;
for ( i in min_length_choice )
{
if ( min_length_choice[i]$cipher in c$ssl$cipher )
{
if ( length < min_length_choice[i]$min_length )
{
NOTICE([$note=SSL_Heartbeat_Odd_Length,
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack. Message length: %d. Required length: %d. Cipher: %s. Cipher match: %s", length, min_length_choice[i]$min_length, c$ssl$cipher, min_length_choice[i]$cipher),
$conn=c,
$n=length,
$identifier=fmt("%s-weak-%d", c$uid, length)
]);
}
break;
}
}
if ( is_orig )
{
if ( c$ssl?$last_responder_heartbeat_request_size )
{
# server originated heartbeat. Ignore & continue
delete c$ssl$last_responder_heartbeat_request_size;
}
else
c$ssl$last_originator_heartbeat_request_size = length;
}
else
{
if ( c$ssl?$last_originator_heartbeat_request_size && c$ssl$last_originator_heartbeat_request_size < length )
{
NOTICE([$note=SSL_Heartbeat_Attack_Success,
$msg=fmt("An encrypted TLS heartbleed attack was probably detected! First packet client record length %d, first packet server record length %d. Time: %f",
c$ssl$last_originator_heartbeat_request_size, length, duration),
$conn=c,
$identifier=c$uid # only throw once per connection
]);
}
else if ( ! c$ssl?$last_originator_heartbeat_request_size )
c$ssl$last_responder_heartbeat_request_size = length;
if ( c$ssl?$last_originator_heartbeat_request_size )
delete c$ssl$last_originator_heartbeat_request_size;
}
}
event ssl_encrypted_data(c: connection, is_orig: bool, content_type: count, length: count)
{
if ( !c?$ssl )
return;
if ( content_type == SSL::HEARTBEAT )
event ssl_encrypted_heartbeat(c, is_orig, length);
else if ( (content_type == SSL::APPLICATION_DATA) && (length > 0) )
{
++c$ssl$enc_appdata_packages;
c$ssl$enc_appdata_bytes += length;
}
}

View file

@ -48,7 +48,8 @@ event bro_init() &priority=5
event ssl_established(c: connection) &priority=3
{
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| < 1 )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| < 1 ||
! c$ssl$cert_chain[0]?$x509 )
return;
local fuid = c$ssl$cert_chain_fuids[0];

View file

@ -39,7 +39,8 @@ function clear_waitlist(digest: string)
event ssl_established(c: connection) &priority=3
{
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! c$ssl$cert_chain[0]?$sha1 )
return;
local digest = c$ssl$cert_chain[0]$sha1;

View file

@ -28,7 +28,8 @@ export {
event ssl_established(c: connection) &priority=3
{
# If there aren't any certs we can't very well do certificate validation.
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! c$ssl$cert_chain[0]?$x509 )
return;
local chain_id = join_string_vec(c$ssl$cert_chain_fuids, ".");
@ -36,7 +37,8 @@ event ssl_established(c: connection) &priority=3
local chain: vector of opaque of x509 = vector();
for ( i in c$ssl$cert_chain )
{
chain[i] = c$ssl$cert_chain[i]$x509$handle;
if ( c$ssl$cert_chain[i]?$x509 )
chain[i] = c$ssl$cert_chain[i]$x509$handle;
}
if ( chain_id in recently_validated_certs )

View file

@ -0,0 +1,66 @@
##! Perform OCSP response validation.
@load base/frameworks/notice
@load base/protocols/ssl
module SSL;
export {
redef enum Notice::Type += {
## This indicates that the OCSP response was not deemed
## to be valid.
Invalid_Ocsp_Response
};
redef record Info += {
## Result of OCSP validation for this connection.
ocsp_status: string &log &optional;
## OCSP response as a string.
ocsp_response: string &optional;
};
}
# MD5 hash values for recently validated chains along with the OCSP validation
# status are kept in this table to avoid constant validation every time the same
# certificate chain is seen.
global recently_ocsp_validated: table[string] of string = table() &read_expire=5mins;
event ssl_stapled_ocsp(c: connection, is_orig: bool, response: string) &priority=3
{
c$ssl$ocsp_response = response;
}
event ssl_established(c: connection) &priority=3
{
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 || !c$ssl?$ocsp_response )
return;
local chain: vector of opaque of x509 = vector();
for ( i in c$ssl$cert_chain )
{
if ( c$ssl$cert_chain[i]?$x509 )
chain[i] = c$ssl$cert_chain[i]$x509$handle;
}
local reply_id = cat(md5_hash(c$ssl$ocsp_response), join_string_vec(c$ssl$cert_chain_fuids, "."));
if ( reply_id in recently_ocsp_validated )
{
c$ssl$ocsp_status = recently_ocsp_validated[reply_id];
return;
}
local result = x509_ocsp_verify(chain, c$ssl$ocsp_response, root_certs);
c$ssl$ocsp_status = result$result_string;
recently_ocsp_validated[reply_id] = result$result_string;
if( result$result_string != "good" )
{
local message = fmt("OCSP response validation failed with (%s)", result$result_string);
NOTICE([$note=Invalid_Ocsp_Response, $msg=message,
$sub=c$ssl$subject, $conn=c,
$identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$ocsp_status)]);
}
}

View file

@ -0,0 +1,92 @@
##! Generate notices when SSL/TLS connections use certificates or DH parameters
##! that have potentially unsafe key lengths.
@load base/protocols/ssl
@load base/frameworks/notice
@load base/utils/directions-and-hosts
module SSL;
export {
redef enum Notice::Type += {
## Indicates that a server is using a potentially unsafe key.
Weak_Key,
};
## The category of hosts you would like to be notified about which are
## using potentially unsafe keys. By default, these notices will be
## suppressed by the notice framework for 1 day after a particular host
## has had a notice generated. Choices are: LOCAL_HOSTS, REMOTE_HOSTS,
## ALL_HOSTS, NO_HOSTS
const notify_weak_keys = LOCAL_HOSTS &redef;
## The minimum key length in bits that is considered safe. Any shorter
## (non-EC) key lengths will trigger the notice.
const notify_minimal_key_length = 1024 &redef;
## Warn if the DH key length is smaller than the certificate key length. This is
## potentially unsafe because the certificate key length then gives a false
## impression of the connection's strength. However, it is very common and cannot
## be avoided in some settings (e.g. with old Java clients).
const notify_dh_length_shorter_cert_length = T &redef;
}
# We check key lengths only for DSA or RSA certificates. For others, we do
# not know what is safe (e.g. EC is safe even with very short key lengths).
event ssl_established(c: connection) &priority=3
{
# If there are no certificates or we are not interested in the server, just return.
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! addr_matches_host(c$id$resp_h, notify_weak_keys) ||
! c$ssl$cert_chain[0]?$x509 )
return;
local fuid = c$ssl$cert_chain_fuids[0];
local cert = c$ssl$cert_chain[0]$x509$certificate;
if ( !cert?$key_type || !cert?$key_length )
return;
if ( cert$key_type != "dsa" && cert$key_type != "rsa" )
return;
local key_length = cert$key_length;
if ( key_length < notify_minimal_key_length )
NOTICE([$note=Weak_Key,
$msg=fmt("Host uses weak certificate with %d bit key", key_length),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$orig_h, c$id$orig_p, key_length)
]);
}
event ssl_dh_server_params(c: connection, p: string, q: string, Ys: string) &priority=3
{
if ( ! addr_matches_host(c$id$resp_h, notify_weak_keys) )
return;
local key_length = |Ys| * 8; # key length in bits
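	# Illustrative: the server's DH public value Ys is typically encoded with
	# the same byte length as the group's prime, so e.g. a 1024-bit group
	# yields a 128-byte Ys (leading zero bytes could shave off a few bits).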
if ( key_length < notify_minimal_key_length )
NOTICE([$note=Weak_Key,
$msg=fmt("Host uses weak DH parameters with %d key bits", key_length),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$orig_h, c$id$orig_p, key_length)
]);
if ( notify_dh_length_shorter_cert_length &&
c?$ssl && c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 && c$ssl$cert_chain[0]?$x509 &&
c$ssl$cert_chain[0]$x509?$certificate && c$ssl$cert_chain[0]$x509$certificate?$key_type &&
(c$ssl$cert_chain[0]$x509$certificate$key_type == "rsa" ||
c$ssl$cert_chain[0]$x509$certificate$key_type == "dsa" ))
{
if ( c$ssl$cert_chain[0]$x509$certificate?$key_length &&
c$ssl$cert_chain[0]$x509$certificate$key_length > key_length )
NOTICE([$note=Weak_Key,
$msg=fmt("DH key length of %d bits is smaller certificate key length of %d bits",
key_length, c$ssl$cert_chain[0]$x509$certificate$key_length),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$orig_h, c$id$orig_p)
]);
}
}

View file

@ -81,3 +81,6 @@
# Detect SHA1 sums in Team Cymru's Malware Hash Registry.
@load frameworks/files/detect-MHR
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
# this might impact performance a bit.
# @load policy/protocols/ssl/heartbleed

View file

@ -85,10 +85,13 @@
@load protocols/ssh/software.bro
@load protocols/ssl/expiring-certs.bro
@load protocols/ssl/extract-certs-pem.bro
@load protocols/ssl/heartbleed.bro
@load protocols/ssl/known-certs.bro
@load protocols/ssl/log-hostcerts-only.bro
#@load protocols/ssl/notary.bro
@load protocols/ssl/validate-certs.bro
@load protocols/ssl/validate-ocsp.bro
@load protocols/ssl/weak-keys.bro
@load tuning/__load__.bro
@load tuning/defaults/__load__.bro
@load tuning/defaults/extracted_file_limits.bro

@ -1 +1 @@
Subproject commit 3b3e189dab3801cd0474dfdd376d9de633cd3766
Subproject commit 7e15efe9d28d46bfa662fcdd1cbb15ce1db285c9

View file

@ -104,7 +104,7 @@ Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string&
Base64Converter::~Base64Converter()
{
if ( base64_table != default_base64_table )
delete base64_table;
delete [] base64_table;
}
int Base64Converter::Decode(int len, const char* data, int* pblen, char** pbuf)

View file

@ -811,6 +811,17 @@ void Connection::Describe(ODesc* d) const
d->NL();
}
void Connection::IDString(ODesc* d) const
{
d->Add(orig_addr);
d->AddRaw(":", 1);
d->Add(ntohs(orig_port));
d->AddRaw(" > ", 3);
d->Add(resp_addr);
d->AddRaw(":", 1);
d->Add(ntohs(resp_port));
}
bool Connection::Serialize(SerialInfo* info) const
{
return SerialObj::Serialize(info);

View file

@ -204,6 +204,7 @@ public:
bool IsPersistent() { return persistent; }
void Describe(ODesc* d) const;
void IDString(ODesc* d) const;
TimerMgr* GetTimerMgr() const;

View file

@ -211,9 +211,10 @@ void DFA_State::Dump(FILE* f, DFA_Machine* m)
if ( accept )
{
for ( int i = 0; i < accept->length(); ++i )
fprintf(f, "%s accept #%d",
i > 0 ? "," : "", int((*accept)[i]));
AcceptingSet::const_iterator it;
for ( it = accept->begin(); it != accept->end(); ++it )
fprintf(f, "%s accept #%d", it == accept->begin() ? "" : ",", *it);
}
fprintf(f, "\n");
@ -285,7 +286,7 @@ unsigned int DFA_State::Size()
{
return sizeof(*this)
+ pad_size(sizeof(DFA_State*) * num_sym)
+ (accept ? pad_size(sizeof(int) * accept->length()) : 0)
+ (accept ? pad_size(sizeof(int) * accept->size()) : 0)
+ (nfa_states ? pad_size(sizeof(NFA_State*) * nfa_states->length()) : 0)
+ (meta_ec ? meta_ec->Size() : 0)
+ (centry ? padded_sizeof(CacheEntry) : 0);
@ -470,33 +471,20 @@ int DFA_Machine::StateSetToDFA_State(NFA_state_list* state_set,
return 0;
AcceptingSet* accept = new AcceptingSet;
for ( int i = 0; i < state_set->length(); ++i )
{
int acc = (*state_set)[i]->Accept();
if ( acc != NO_ACCEPT )
{
int j;
for ( j = 0; j < accept->length(); ++j )
if ( (*accept)[j] == acc )
break;
if ( j >= accept->length() )
// It's not already present.
accept->append(acc);
}
accept->insert(acc);
}
if ( accept->length() == 0 )
if ( accept->empty() )
{
delete accept;
accept = 0;
}
else
{
accept->sort(int_list_cmp);
accept->resize(0);
}
DFA_State* ds = new DFA_State(state_count++, ec, state_set, accept);
d = dfa_state_cache->Insert(ds, hash);

View file

@ -192,6 +192,7 @@ static void parse_function_name(vector<ParseLocationRec>& result,
string fullname = make_full_var_name(current_module.c_str(), s.c_str());
debug_msg("Function %s not defined.\n", fullname.c_str());
plr.type = plrUnknown;
Unref(id);
return;
}
@ -199,6 +200,7 @@ static void parse_function_name(vector<ParseLocationRec>& result,
{
debug_msg("Function %s not declared.\n", id->Name());
plr.type = plrUnknown;
Unref(id);
return;
}
@ -206,6 +208,7 @@ static void parse_function_name(vector<ParseLocationRec>& result,
{
debug_msg("Function %s declared but not defined.\n", id->Name());
plr.type = plrUnknown;
Unref(id);
return;
}
@ -216,9 +219,12 @@ static void parse_function_name(vector<ParseLocationRec>& result,
{
debug_msg("Function %s is a built-in function\n", id->Name());
plr.type = plrUnknown;
Unref(id);
return;
}
Unref(id);
Stmt* body = 0; // the particular body we care about; 0 = all
if ( bodies.size() == 1 )

View file

@ -216,18 +216,32 @@ void ODesc::Indent()
}
}
static const char hex_chars[] = "0123456789abcdef";
static const char* find_first_unprintable(ODesc* d, const char* bytes, unsigned int n)
static bool starts_with(const char* str1, const char* str2, size_t len)
{
if ( d->IsBinary() )
for ( size_t i = 0; i < len; ++i )
if ( str1[i] != str2[i] )
return false;
return true;
}
size_t ODesc::StartsWithEscapeSequence(const char* start, const char* end)
{
if ( escape_sequences.empty() )
return 0;
while ( n-- )
escape_set::const_iterator it;
for ( it = escape_sequences.begin(); it != escape_sequences.end(); ++it )
{
if ( ! isprint(*bytes) )
return bytes;
++bytes;
const string& esc_str = *it;
size_t esc_len = esc_str.length();
if ( start + esc_len > end )
continue;
if ( starts_with(start, esc_str.c_str(), esc_len) )
return esc_len;
}
return 0;
@ -235,21 +249,23 @@ static const char* find_first_unprintable(ODesc* d, const char* bytes, unsigned
pair<const char*, size_t> ODesc::FirstEscapeLoc(const char* bytes, size_t n)
{
pair<const char*, size_t> p(find_first_unprintable(this, bytes, n), 1);
typedef pair<const char*, size_t> escape_pos;
string str(bytes, n);
list<string>::const_iterator it;
for ( it = escape_sequences.begin(); it != escape_sequences.end(); ++it )
if ( IsBinary() )
return escape_pos(0, 0);
for ( size_t i = 0; i < n; ++i )
{
size_t pos = str.find(*it);
if ( pos != string::npos && (p.first == 0 || bytes + pos < p.first) )
{
p.first = bytes + pos;
p.second = it->size();
}
if ( ! isprint(bytes[i]) )
return escape_pos(bytes + i, 1);
size_t len = StartsWithEscapeSequence(bytes + i, bytes + n);
if ( len )
return escape_pos(bytes + i, len);
}
return p;
return escape_pos(0, 0);
}
void ODesc::AddBytes(const void* bytes, unsigned int n)
@ -266,21 +282,11 @@ void ODesc::AddBytes(const void* bytes, unsigned int n)
while ( s < e )
{
pair<const char*, size_t> p = FirstEscapeLoc(s, e - s);
if ( p.first )
{
AddBytesRaw(s, p.first - s);
if ( p.second == 1 )
{
char hex[6] = "\\x00";
hex[2] = hex_chars[((*p.first) & 0xf0) >> 4];
hex[3] = hex_chars[(*p.first) & 0x0f];
AddBytesRaw(hex, 4);
}
else
{
string esc_str = get_escaped_string(string(p.first, p.second), true);
AddBytesRaw(esc_str.c_str(), esc_str.size());
}
get_escaped_string(this, p.first, p.second, true);
s = p.first + p.second;
}
else

View file

@ -4,7 +4,7 @@
#define descriptor_h
#include <stdio.h>
#include <list>
#include <set>
#include <utility>
#include "BroString.h"
@ -54,16 +54,16 @@ public:
void SetFlush(int arg_do_flush) { do_flush = arg_do_flush; }
void EnableEscaping();
void AddEscapeSequence(const char* s) { escape_sequences.push_back(s); }
void AddEscapeSequence(const char* s) { escape_sequences.insert(s); }
void AddEscapeSequence(const char* s, size_t n)
{ escape_sequences.push_back(string(s, n)); }
{ escape_sequences.insert(string(s, n)); }
void AddEscapeSequence(const string & s)
{ escape_sequences.push_back(s); }
void RemoveEscapeSequence(const char* s) { escape_sequences.remove(s); }
{ escape_sequences.insert(s); }
void RemoveEscapeSequence(const char* s) { escape_sequences.erase(s); }
void RemoveEscapeSequence(const char* s, size_t n)
{ escape_sequences.remove(string(s, n)); }
{ escape_sequences.erase(string(s, n)); }
void RemoveEscapeSequence(const string & s)
{ escape_sequences.remove(s); }
{ escape_sequences.erase(s); }
void PushIndent();
void PopIndent();
@ -163,6 +163,15 @@ protected:
*/
pair<const char*, size_t> FirstEscapeLoc(const char* bytes, size_t n);
/**
* @param start start of string to check for starting with an escape
* sequence.
* @param end one byte past the last character in the string.
* @return The number of bytes in the escape sequence that the string
* starts with.
*/
size_t StartsWithEscapeSequence(const char* start, const char* end);
desc_type type;
desc_style style;
@ -171,7 +180,8 @@ protected:
unsigned int size; // size of buffer in bytes
bool escape; // escape unprintable characters in output?
list<string> escape_sequences; // additional sequences of chars to escape
typedef set<string> escape_set;
escape_set escape_sequences; // additional sequences of chars to escape
BroFile* f; // or the file we're using.

View file

@ -39,7 +39,10 @@ FuncType* EventHandler::FType()
if ( id->Type()->Tag() != TYPE_FUNC )
return 0;
return type = id->Type()->AsFuncType();
type = id->Type()->AsFuncType();
Unref(id);
return type;
}
void EventHandler::SetLocalHandler(Func* f)

View file

@ -3398,8 +3398,8 @@ RecordConstructorExpr::RecordConstructorExpr(ListExpr* constructor_list)
if ( IsError() )
return;
// Spin through the list, which should be comprised of
// either record's or record-field-assign, and build up a
// Spin through the list, which should be comprised only of
// record-field-assign expressions, and build up a
// record type to associate with this constructor.
type_decl_list* record_types = new type_decl_list;
@ -3407,34 +3407,18 @@ RecordConstructorExpr::RecordConstructorExpr(ListExpr* constructor_list)
loop_over_list(exprs, i)
{
Expr* e = exprs[i];
BroType* t = e->Type();
if ( e->Tag() == EXPR_FIELD_ASSIGN )
{
FieldAssignExpr* field = (FieldAssignExpr*) e;
BroType* field_type = field->Type()->Ref();
char* field_name = copy_string(field->FieldName());
record_types->append(new TypeDecl(field_type, field_name));
continue;
}
if ( t->Tag() != TYPE_RECORD )
if ( e->Tag() != EXPR_FIELD_ASSIGN )
{
Error("bad type in record constructor", e);
SetError();
continue;
}
// It's a record - add in its fields.
const RecordType* rt = t->AsRecordType();
int n = rt->NumFields();
for ( int j = 0; j < n; ++j )
{
const TypeDecl* td = rt->FieldDecl(j);
record_types->append(new TypeDecl(td->type->Ref(), td->id));
}
FieldAssignExpr* field = (FieldAssignExpr*) e;
BroType* field_type = field->Type()->Ref();
char* field_name = copy_string(field->FieldName());
record_types->append(new TypeDecl(field_type, field_name));
}
SetType(new RecordType(record_types));
@ -4346,7 +4330,7 @@ Val* TableCoerceExpr::Fold(Val* v) const
if ( tv->Size() > 0 )
Internal("coercion of non-empty table/set");
return new TableVal(Type()->Ref()->AsTableType(), tv->Attrs());
return new TableVal(Type()->AsTableType(), tv->Attrs());
}
IMPLEMENT_SERIAL(TableCoerceExpr, SER_TABLE_COERCE_EXPR);

View file

@ -97,9 +97,9 @@ void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt)
// Linux MTU discovery for UDP can do this, for example.
s->Weird("fragment_with_DF", ip);
int offset = ip->FragOffset();
int len = ip->TotalLen();
int hdr_len = ip->HdrLen();
uint16 offset = ip->FragOffset();
uint32 len = ip->TotalLen();
uint16 hdr_len = ip->HdrLen();
if ( len < hdr_len )
{
@ -107,7 +107,7 @@ void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt)
return;
}
int upper_seq = offset + len - hdr_len;
uint64 upper_seq = offset + len - hdr_len;
if ( ! offset )
// Make sure to use the first fragment header's next field.
@ -178,7 +178,7 @@ void FragReassembler::Weird(const char* name) const
}
}
void FragReassembler::Overlap(const u_char* b1, const u_char* b2, int n)
void FragReassembler::Overlap(const u_char* b1, const u_char* b2, uint64 n)
{
if ( memcmp((const void*) b1, (const void*) b2, n) )
Weird("fragment_inconsistency");
@ -231,7 +231,7 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */)
return;
// We have it all. Compute the expected size of the fragment.
int n = proto_hdr_len + frag_size;
uint64 n = proto_hdr_len + frag_size;
// It's possible that we have blocks associated with this fragment
// that exceed this size, if we saw MF fragments (which don't lead

View file

@ -34,14 +34,14 @@ public:
protected:
void BlockInserted(DataBlock* start_block);
void Overlap(const u_char* b1, const u_char* b2, int n);
void Overlap(const u_char* b1, const u_char* b2, uint64 n);
void Weird(const char* name) const;
u_char* proto_hdr;
IP_Hdr* reassembled_pkt;
int proto_hdr_len;
uint16 proto_hdr_len;
NetSessions* s;
int frag_size; // size of fully reassembled fragment
uint64 frag_size; // size of fully reassembled fragment
uint16 next_proto; // first IPv6 fragment header's next proto field
HashKey* key;

View file

@ -475,6 +475,7 @@ BuiltinFunc::BuiltinFunc(built_in_func arg_func, const char* arg_name,
type = id->Type()->Ref();
id->SetVal(new Val(this));
Unref(id);
}
BuiltinFunc::~BuiltinFunc()

View file

@ -1,5 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include <cstdlib>
#include <string>
#include <vector>
#include "IPAddr.h"
@ -45,6 +46,14 @@ HashKey* BuildConnIDHashKey(const ConnID& id)
return new HashKey(&key, sizeof(key));
}
static inline uint32_t bit_mask32(int bottom_bits)
{
if ( bottom_bits >= 32 )
return 0xffffffff;
return (((uint32_t) 1) << bottom_bits) - 1;
}
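// For instance, bit_mask32(8) == 0x000000ff; Mask() below inverts this to
// clear the low-order host bits inside a partially kept 32-bit word.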
void IPAddr::Mask(int top_bits_to_keep)
{
if ( top_bits_to_keep < 0 || top_bits_to_keep > 128 )
@ -53,25 +62,20 @@ void IPAddr::Mask(int top_bits_to_keep)
return;
}
uint32_t tmp[4];
memcpy(tmp, in6.s6_addr, sizeof(in6.s6_addr));
uint32_t mask_bits[4] = { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff };
std::ldiv_t res = std::ldiv(top_bits_to_keep, 32);
int word = 3;
int bits_to_chop = 128 - top_bits_to_keep;
if ( res.quot < 4 )
mask_bits[res.quot] =
htonl(mask_bits[res.quot] & ~bit_mask32(32 - res.rem));
while ( bits_to_chop >= 32 )
{
tmp[word] = 0;
--word;
bits_to_chop -= 32;
}
for ( unsigned int i = res.quot + 1; i < 4; ++i )
mask_bits[i] = 0;
uint32_t w = ntohl(tmp[word]);
w >>= bits_to_chop;
w <<= bits_to_chop;
tmp[word] = htonl(w);
uint32_t* p = reinterpret_cast<uint32_t*>(in6.s6_addr);
memcpy(in6.s6_addr, tmp, sizeof(in6.s6_addr));
for ( unsigned int i = 0; i < 4; ++i )
p[i] &= mask_bits[i];
}
void IPAddr::ReverseMask(int top_bits_to_chop)
@ -82,25 +86,19 @@ void IPAddr::ReverseMask(int top_bits_to_chop)
return;
}
uint32_t tmp[4];
memcpy(tmp, in6.s6_addr, sizeof(in6.s6_addr));
uint32_t mask_bits[4] = { 0, 0, 0, 0 };
std::ldiv_t res = std::ldiv(top_bits_to_chop, 32);
int word = 0;
int bits_to_chop = top_bits_to_chop;
if ( res.quot < 4 )
mask_bits[res.quot] = htonl(bit_mask32(32 - res.rem));
while ( bits_to_chop >= 32 )
{
tmp[word] = 0;
++word;
bits_to_chop -= 32;
}
for ( unsigned int i = res.quot + 1; i < 4; ++i )
mask_bits[i] = 0xffffffff;
uint32_t w = ntohl(tmp[word]);
w <<= bits_to_chop;
w >>= bits_to_chop;
tmp[word] = htonl(w);
uint32_t* p = reinterpret_cast<uint32_t*>(in6.s6_addr);
memcpy(in6.s6_addr, tmp, sizeof(in6.s6_addr));
for ( unsigned int i = 0; i < 4; ++i )
p[i] &= mask_bits[i];
}
void IPAddr::Init(const std::string& s)

View file

@ -3,6 +3,7 @@
#include "config.h"
#include <stdlib.h>
#include <utility>
#include "RE.h"
#include "DFA.h"
@ -266,6 +267,15 @@ void Specific_RE_Matcher::Dump(FILE* f)
dfa->Dump(f);
}
inline void RE_Match_State::AddMatches(const AcceptingSet& as,
MatchPos position)
{
typedef std::pair<AcceptIdx, MatchPos> am_idx;
for ( AcceptingSet::const_iterator it = as.begin(); it != as.end(); ++it )
accepted_matches.insert(am_idx(*it, position));
}
bool RE_Match_State::Match(const u_char* bv, int n,
bool bol, bool eol, bool clear)
{
@ -283,14 +293,9 @@ bool RE_Match_State::Match(const u_char* bv, int n,
current_state = dfa->StartState();
const AcceptingSet* ac = current_state->Accept();
if ( ac )
{
loop_over_list(*ac, i)
{
accepted.append((*ac)[i]);
match_pos.append(0);
}
}
AddMatches(*ac, 0);
}
else if ( clear )
@ -301,7 +306,7 @@ bool RE_Match_State::Match(const u_char* bv, int n,
current_pos = 0;
int old_matches = accepted.length();
size_t old_matches = accepted_matches.size();
int ec;
int m = bol ? n + 1 : n;
@ -324,25 +329,17 @@ bool RE_Match_State::Match(const u_char* bv, int n,
break;
}
if ( next_state->Accept() )
{
const AcceptingSet* ac = next_state->Accept();
loop_over_list(*ac, i)
{
if ( ! accepted.is_member((*ac)[i]) )
{
accepted.append((*ac)[i]);
match_pos.append(current_pos);
}
}
}
const AcceptingSet* ac = next_state->Accept();
if ( ac )
AddMatches(*ac, current_pos);
++current_pos;
current_state = next_state;
}
return accepted.length() != old_matches;
return accepted_matches.size() != old_matches;
}
int Specific_RE_Matcher::LongestMatch(const u_char* bv, int n)
@ -399,7 +396,8 @@ unsigned int Specific_RE_Matcher::MemoryAllocation() const
+ equiv_class.Size() - padded_sizeof(EquivClass)
+ (dfa ? dfa->MemoryAllocation() : 0) // this is ref counted; consider the bytes here?
+ padded_sizeof(*any_ccl)
+ accepted->MemoryAllocation();
+ padded_sizeof(*accepted)
+ accepted->size() * padded_sizeof(AcceptingSet::key_type);
}
RE_Matcher::RE_Matcher()

View file

@ -9,6 +9,9 @@
#include "CCL.h"
#include "EquivClass.h"
#include <set>
#include <map>
#include <ctype.h>
typedef int (*cce_func)(int);
@ -33,7 +36,10 @@ extern int re_lex(void);
extern int clower(int);
extern void synerr(const char str[]);
typedef int_list AcceptingSet;
typedef int AcceptIdx;
typedef std::set<AcceptIdx> AcceptingSet;
typedef uint64 MatchPos;
typedef std::map<AcceptIdx, MatchPos> AcceptingMatchSet;
typedef name_list string_list;
typedef enum { MATCH_ANYWHERE, MATCH_EXACTLY, } match_type;
@ -135,8 +141,8 @@ public:
current_state = 0;
}
const AcceptingSet* Accepted() const { return &accepted; }
const int_list* MatchPositions() const { return &match_pos; }
const AcceptingMatchSet& AcceptedMatches() const
{ return accepted_matches; }
// Returns the number of bytes fed into the matcher so far
int Length() { return current_pos; }
@ -149,16 +155,16 @@ public:
{
current_pos = -1;
current_state = 0;
accepted.clear();
match_pos.clear();
accepted_matches.clear();
}
void AddMatches(const AcceptingSet& as, MatchPos position);
protected:
DFA_Machine* dfa;
int* ecs;
AcceptingSet accepted;
int_list match_pos;
AcceptingMatchSet accepted_matches;
DFA_State* current_state;
int current_pos;
};

View file

@ -7,14 +7,9 @@
#include "Reassem.h"
#include "Serializer.h"
const bool DEBUG_reassem = false;
static const bool DEBUG_reassem = false;
#ifdef DEBUG
int reassem_seen_bytes = 0;
int reassem_copied_bytes = 0;
#endif
DataBlock::DataBlock(const u_char* data, int size, int arg_seq,
DataBlock::DataBlock(const u_char* data, uint64 size, uint64 arg_seq,
DataBlock* arg_prev, DataBlock* arg_next)
{
seq = arg_seq;
@ -23,10 +18,6 @@ DataBlock::DataBlock(const u_char* data, int size, int arg_seq,
memcpy((void*) block, (const void*) data, size);
#ifdef DEBUG
reassem_copied_bytes += size;
#endif
prev = arg_prev;
next = arg_next;
@ -38,9 +29,9 @@ DataBlock::DataBlock(const u_char* data, int size, int arg_seq,
Reassembler::total_size += pad_size(size) + padded_sizeof(DataBlock);
}
unsigned int Reassembler::total_size = 0;
uint64 Reassembler::total_size = 0;
Reassembler::Reassembler(int init_seq, ReassemblerType arg_type)
Reassembler::Reassembler(uint64 init_seq, ReassemblerType arg_type)
{
blocks = last_block = 0;
trim_seq = last_reassem_seq = init_seq;
@ -51,24 +42,20 @@ Reassembler::~Reassembler()
ClearBlocks();
}
void Reassembler::NewBlock(double t, int seq, int len, const u_char* data)
void Reassembler::NewBlock(double t, uint64 seq, uint64 len, const u_char* data)
{
if ( len == 0 )
return;
#ifdef DEBUG
reassem_seen_bytes += len;
#endif
uint64 upper_seq = seq + len;
int upper_seq = seq + len;
if ( seq_delta(upper_seq, trim_seq) <= 0 )
if ( upper_seq <= trim_seq )
// Old data, don't do any work for it.
return;
if ( seq_delta(seq, trim_seq) < 0 )
if ( seq < trim_seq )
{ // Partially old data, just keep the good stuff.
int amount_old = seq_delta(trim_seq, seq);
uint64 amount_old = trim_seq - seq;
data += amount_old;
seq += amount_old;
@ -86,42 +73,42 @@ void Reassembler::NewBlock(double t, int seq, int len, const u_char* data)
BlockInserted(start_block);
}
int Reassembler::TrimToSeq(int seq)
uint64 Reassembler::TrimToSeq(uint64 seq)
{
int num_missing = 0;
uint64 num_missing = 0;
// Do this accounting before looking for Undelivered data,
// since that will alter last_reassem_seq.
if ( blocks )
{
if ( seq_delta(blocks->seq, last_reassem_seq) > 0 )
if ( blocks->seq > last_reassem_seq )
// An initial hole.
num_missing += seq_delta(blocks->seq, last_reassem_seq);
num_missing += blocks->seq - last_reassem_seq;
}
else if ( seq_delta(seq, last_reassem_seq) > 0 )
else if ( seq > last_reassem_seq )
{ // Trimming data we never delivered.
if ( ! blocks )
// We won't have any accounting based on blocks
// for this hole.
num_missing += seq_delta(seq, last_reassem_seq);
num_missing += seq - last_reassem_seq;
}
if ( seq_delta(seq, last_reassem_seq) > 0 )
if ( seq > last_reassem_seq )
{
// We're trimming data we never delivered.
Undelivered(seq);
}
while ( blocks && seq_delta(blocks->upper, seq) <= 0 )
while ( blocks && blocks->upper <= seq )
{
DataBlock* b = blocks->next;
if ( b && seq_delta(b->seq, seq) <= 0 )
if ( b && b->seq <= seq )
{
if ( blocks->upper != b->seq )
num_missing += seq_delta(b->seq, blocks->upper);
num_missing += b->seq - blocks->upper;
}
else
{
@ -129,7 +116,7 @@ int Reassembler::TrimToSeq(int seq)
// Second half of test is for acks of FINs, which
// don't get entered into the sequence space.
if ( blocks->upper != seq && blocks->upper != seq - 1 )
num_missing += seq_delta(seq, blocks->upper);
num_missing += seq - blocks->upper;
}
delete blocks;
@ -150,7 +137,7 @@ int Reassembler::TrimToSeq(int seq)
else
last_block = 0;
if ( seq_delta(seq, trim_seq) > 0 )
if ( seq > trim_seq )
// seq is further ahead in the sequence space.
trim_seq = seq;
@ -169,9 +156,9 @@ void Reassembler::ClearBlocks()
last_block = 0;
}
int Reassembler::TotalSize() const
uint64 Reassembler::TotalSize() const
{
int size = 0;
uint64 size = 0;
for ( DataBlock* b = blocks; b; b = b->next )
size += b->Size();
@ -184,18 +171,18 @@ void Reassembler::Describe(ODesc* d) const
d->Add("reassembler");
}
void Reassembler::Undelivered(int up_to_seq)
void Reassembler::Undelivered(uint64 up_to_seq)
{
// TrimToSeq() expects this.
last_reassem_seq = up_to_seq;
}
DataBlock* Reassembler::AddAndCheck(DataBlock* b, int seq, int upper,
DataBlock* Reassembler::AddAndCheck(DataBlock* b, uint64 seq, uint64 upper,
const u_char* data)
{
if ( DEBUG_reassem )
{
DEBUG_MSG("%.6f Reassembler::AddAndCheck seq=%d, upper=%d\n",
DEBUG_MSG("%.6f Reassembler::AddAndCheck seq=%"PRIu64", upper=%"PRIu64"\n",
network_time, seq, upper);
}
@ -209,10 +196,10 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, int seq, int upper,
// Find the first block that doesn't come completely before the
// new data.
while ( b->next && seq_delta(b->upper, seq) <= 0 )
while ( b->next && b->upper <= seq )
b = b->next;
if ( seq_delta(b->upper, seq) <= 0 )
if ( b->upper <= seq )
{
// b is the last block, and it comes completely before
// the new block.
@ -222,21 +209,20 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, int seq, int upper,
DataBlock* new_b = 0;
if ( seq_delta(upper, b->seq) <= 0 )
if ( upper <= b->seq )
{
// The new block comes completely before b.
new_b = new DataBlock(data, seq_delta(upper, seq), seq,
b->prev, b);
new_b = new DataBlock(data, upper - seq, seq, b->prev, b);
if ( b == blocks )
blocks = new_b;
return new_b;
}
// The blocks overlap, complain.
if ( seq_delta(seq, b->seq) < 0 )
if ( seq < b->seq )
{
// The new block has a prefix that comes before b.
int prefix_len = seq_delta(b->seq, seq);
uint64 prefix_len = b->seq - seq;
new_b = new DataBlock(data, prefix_len, seq, b->prev, b);
if ( b == blocks )
blocks = new_b;
@ -247,11 +233,11 @@ DataBlock* Reassembler::AddAndCheck(DataBlock* b, int seq, int upper,
else
new_b = b;
int overlap_start = seq;
int overlap_offset = seq_delta(overlap_start, b->seq);
int new_b_len = seq_delta(upper, seq);
int b_len = seq_delta(b->upper, overlap_start);
int overlap_len = min(new_b_len, b_len);
uint64 overlap_start = seq;
uint64 overlap_offset = overlap_start - b->seq;
uint64 new_b_len = upper - seq;
uint64 b_len = b->upper - overlap_start;
uint64 overlap_len = min(new_b_len, b_len);
Overlap(&b->block[overlap_offset], data, overlap_len);
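
Illustrative sketch (not part of this commit): throughout the Reassembler changes above, the seq_delta() helper, which compensated for 32-bit sequence numbers that may wrap, is replaced by plain comparisons and subtractions on 64-bit values. A small standalone example of why that is sufficient once sequence numbers are widened:

#include <cassert>
#include <cstdint>

int main()
	{
	// Values chosen to straddle the 32-bit boundary; with 64-bit sequence
	// numbers there is no wraparound to compensate for, so ordering is a
	// plain comparison and gap sizes are a plain subtraction.
	uint64_t trim_seq  = 4294967000ULL;  // everything below this was trimmed
	uint64_t seq       = 4294967500ULL;  // newly arriving data
	uint64_t len       = 100;
	uint64_t upper_seq = seq + len;

	assert(upper_seq > trim_seq);        // new data: keep it
	assert(seq - trim_seq == 500);       // size of the hole before it
	return 0;
	}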

View file

@ -8,16 +8,16 @@
class DataBlock {
public:
DataBlock(const u_char* data, int size, int seq,
DataBlock(const u_char* data, uint64 size, uint64 seq,
DataBlock* prev, DataBlock* next);
~DataBlock();
int Size() const { return upper - seq; }
uint64 Size() const { return upper - seq; }
DataBlock* next; // next block with higher seq #
DataBlock* prev; // previous block with lower seq #
int seq, upper;
uint64 seq, upper;
u_char* block;
};
@ -26,22 +26,22 @@ enum ReassemblerType { REASSEM_IP, REASSEM_TCP };
class Reassembler : public BroObj {
public:
Reassembler(int init_seq, ReassemblerType arg_type);
Reassembler(uint64 init_seq, ReassemblerType arg_type);
virtual ~Reassembler();
void NewBlock(double t, int seq, int len, const u_char* data);
void NewBlock(double t, uint64 seq, uint64 len, const u_char* data);
// Throws away all blocks up to seq. Returns the number of bytes
// that were missing (never delivered), or 0 if all data was in-sequence.
int TrimToSeq(int seq);
uint64 TrimToSeq(uint64 seq);
// Delete all held blocks.
void ClearBlocks();
int HasBlocks() const { return blocks != 0; }
int LastReassemSeq() const { return last_reassem_seq; }
uint64 LastReassemSeq() const { return last_reassem_seq; }
int TotalSize() const; // number of bytes buffered up
uint64 TotalSize() const; // number of bytes buffered up
void Describe(ODesc* d) const;
@ -49,7 +49,7 @@ public:
static Reassembler* Unserialize(UnserialInfo* info);
// Sum over all data buffered in some reassembler.
static unsigned int TotalMemoryAllocation() { return total_size; }
static uint64 TotalMemoryAllocation() { return total_size; }
protected:
Reassembler() { }
@ -58,20 +58,20 @@ protected:
friend class DataBlock;
virtual void Undelivered(int up_to_seq);
virtual void Undelivered(uint64 up_to_seq);
virtual void BlockInserted(DataBlock* b) = 0;
virtual void Overlap(const u_char* b1, const u_char* b2, int n) = 0;
virtual void Overlap(const u_char* b1, const u_char* b2, uint64 n) = 0;
DataBlock* AddAndCheck(DataBlock* b, int seq,
int upper, const u_char* data);
DataBlock* AddAndCheck(DataBlock* b, uint64 seq,
uint64 upper, const u_char* data);
DataBlock* blocks;
DataBlock* last_block;
int last_reassem_seq;
int trim_seq; // how far we've trimmed
uint64 last_reassem_seq;
uint64 trim_seq; // how far we've trimmed
static unsigned int total_size;
static uint64 total_size;
};
inline DataBlock::~DataBlock()

View file

@ -2833,6 +2833,7 @@ void RemoteSerializer::GotEvent(const char* name, double time,
if ( ! current_peer )
{
Error("unserialized event from unknown peer");
delete_vals(args);
return;
}
@ -2882,6 +2883,7 @@ void RemoteSerializer::GotFunctionCall(const char* name, double time,
if ( ! current_peer )
{
Error("unserialized function from unknown peer");
delete_vals(args);
return;
}

View file

@ -594,6 +594,29 @@ RuleFileMagicState* RuleMatcher::InitFileMagic() const
return state;
}
bool RuleMatcher::AllRulePatternsMatched(const Rule* r, MatchPos matchpos,
const AcceptingMatchSet& ams)
{
DBG_LOG(DBG_RULES, "Checking rule: %s", r->id);
// Check whether all patterns of the rule have matched.
loop_over_list(r->patterns, j)
{
if ( ams.find(r->patterns[j]->id) == ams.end() )
return false;
// See if depth is satisfied.
if ( matchpos > r->patterns[j]->offset + r->patterns[j]->depth )
return false;
// FIXME: How to check for offset ??? ###
}
DBG_LOG(DBG_RULES, "All patterns of rule satisfied");
return true;
}
RuleMatcher::MIME_Matches* RuleMatcher::Match(RuleFileMagicState* state,
const u_char* data, uint64 len,
MIME_Matches* rval) const
@ -636,56 +659,39 @@ RuleMatcher::MIME_Matches* RuleMatcher::Match(RuleFileMagicState* state,
DBG_LOG(DBG_RULES, "New pattern match found");
AcceptingSet accepted;
int_list matchpos;
AcceptingMatchSet accepted_matches;
loop_over_list(state->matchers, y)
{
RuleFileMagicState::Matcher* m = state->matchers[y];
const AcceptingSet* ac = m->state->Accepted();
loop_over_list(*ac, k)
{
if ( ! accepted.is_member((*ac)[k]) )
{
accepted.append((*ac)[k]);
matchpos.append((*m->state->MatchPositions())[k]);
}
}
const AcceptingMatchSet& ams = m->state->AcceptedMatches();
accepted_matches.insert(ams.begin(), ams.end());
}
// Find rules for which patterns have matched.
rule_list matched;
set<Rule*> rule_matches;
loop_over_list(accepted, i)
for ( AcceptingMatchSet::const_iterator it = accepted_matches.begin();
it != accepted_matches.end(); ++it )
{
Rule* r = Rule::rule_table[accepted[i] - 1];
AcceptIdx aidx = it->first;
MatchPos mpos = it->second;
DBG_LOG(DBG_RULES, "Checking rule: %v", r->id);
Rule* r = Rule::rule_table[aidx - 1];
loop_over_list(r->patterns, j)
{
if ( ! accepted.is_member(r->patterns[j]->id) )
continue;
if ( (unsigned int) matchpos[i] >
r->patterns[j]->offset + r->patterns[j]->depth )
continue;
DBG_LOG(DBG_RULES, "All patterns of rule satisfied");
}
if ( ! matched.is_member(r) )
matched.append(r);
if ( AllRulePatternsMatched(r, mpos, accepted_matches) )
rule_matches.insert(r);
}
loop_over_list(matched, j)
for ( set<Rule*>::const_iterator it = rule_matches.begin();
it != rule_matches.end(); ++it )
{
Rule* r = matched[j];
Rule* r = *it;
loop_over_list(r->actions, rai)
{
const RuleActionMIME* ram = dynamic_cast<const RuleActionMIME*>(r->actions[rai]);
const RuleActionMIME* ram =
dynamic_cast<const RuleActionMIME*>(r->actions[rai]);
if ( ! ram )
continue;
@ -876,66 +882,40 @@ void RuleMatcher::Match(RuleEndpointState* state, Rule::PatternType type,
DBG_LOG(DBG_RULES, "New pattern match found");
// Build a joined AcceptingSet.
AcceptingSet accepted;
int_list matchpos;
AcceptingMatchSet accepted_matches;
loop_over_list(state->matchers, y)
loop_over_list(state->matchers, y )
{
RuleEndpointState::Matcher* m = state->matchers[y];
const AcceptingSet* ac = m->state->Accepted();
loop_over_list(*ac, k)
{
if ( ! accepted.is_member((*ac)[k]) )
{
accepted.append((*ac)[k]);
matchpos.append((*m->state->MatchPositions())[k]);
}
}
const AcceptingMatchSet& ams = m->state->AcceptedMatches();
accepted_matches.insert(ams.begin(), ams.end());
}
// Determine the rules for which all patterns have matched.
// This code should be fast enough as long as there are only very few
// matched patterns per connection (which is a plausible assumption).
rule_list matched;
// Find rules for which patterns have matched.
set<Rule*> rule_matches;
loop_over_list(accepted, i)
for ( AcceptingMatchSet::const_iterator it = accepted_matches.begin();
it != accepted_matches.end(); ++it )
{
Rule* r = Rule::rule_table[accepted[i] - 1];
AcceptIdx aidx = it->first;
MatchPos mpos = it->second;
DBG_LOG(DBG_RULES, "Checking rule: %s", r->id);
Rule* r = Rule::rule_table[aidx - 1];
// Check whether all patterns of the rule have matched.
loop_over_list(r->patterns, j)
{
if ( ! accepted.is_member(r->patterns[j]->id) )
goto next_pattern;
// See if depth is satisfied.
if ( (unsigned int) matchpos[i] >
r->patterns[j]->offset + r->patterns[j]->depth )
goto next_pattern;
DBG_LOG(DBG_RULES, "All patterns of rule satisfied");
// FIXME: How to check for offset ??? ###
}
// If not already in the list of matching rules, add it.
if ( ! matched.is_member(r) )
matched.append(r);
next_pattern:
continue;
if ( AllRulePatternsMatched(r, mpos, accepted_matches) )
rule_matches.insert(r);
}
// Check which of the matching rules really belong to any of our nodes.
loop_over_list(matched, j)
for ( set<Rule*>::const_iterator it = rule_matches.begin();
it != rule_matches.end(); ++it )
{
Rule* r = matched[j];
Rule* r = *it;
DBG_LOG(DBG_RULES, "Accepted rule: %s", r->id);
@ -1306,7 +1286,10 @@ static Val* get_bro_val(const char* label)
return 0;
}
return id->ID_Val();
Val* rval = id->ID_Val();
Unref(id);
return rval;
}
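
Illustrative sketch (not part of this commit): the AllRulePatternsMatched() helper introduced above requires every pattern ID of a rule to appear in the joined match set, and each match to end within the pattern's offset+depth window. A standalone example of that window test with made-up numbers:

#include <cassert>
#include <cstdint>

int main()
	{
	// Hypothetical pattern constraints: the match must complete within the
	// first (offset + depth) bytes of payload.
	uint64_t offset = 0, depth = 10;

	uint64_t matchpos_ok  = 8;    // ends inside the window
	uint64_t matchpos_bad = 12;   // ends past it

	assert(! (matchpos_ok  > offset + depth));
	assert(   matchpos_bad > offset + depth);
	return 0;
	}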

View file

@ -361,6 +361,9 @@ private:
void DumpStateStats(BroFile* f, RuleHdrTest* hdr_test);
static bool AllRulePatternsMatched(const Rule* r, MatchPos matchpos,
const AcceptingMatchSet& ams);
int RE_level;
bool parse_error;
RuleHdrTest* root;

View file

@ -62,6 +62,7 @@ protected:
extern bool in_debug;
// If no_global is true, don't search in the default "global" namespace.
// This passes ownership of a ref'ed ID to the caller.
extern ID* lookup_ID(const char* name, const char* module,
bool no_global = false, bool same_module_only=false);
extern ID* install_ID(const char* name, const char* module_name,
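
Illustrative sketch (not part of this commit): per the comment added above, lookup_ID() now hands a reference to the caller. A short hypothetical caller, mirroring the pattern the later hunks in this diff apply to internal_val() and friends (not standalone; it assumes the usual Bro headers):

// Hypothetical helper, illustrative only.
Val* fetch_global_val(const char* name)
	{
	ID* id = lookup_ID(name, GLOBAL_MODULE_NAME);

	if ( ! id )
		return 0;

	Val* rval = id->ID_Val();
	Unref(id);   // lookup_ID() passed us a reference; release it
	return rval;
	}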

View file

@ -125,7 +125,7 @@ protected:
// This will be increased whenever there is an incompatible change
// in the data format.
static const uint32 DATA_FORMAT_VERSION = 24;
static const uint32 DATA_FORMAT_VERSION = 25;
ChunkedIO* io;

View file

@ -160,7 +160,7 @@ void ProfileLogger::Log()
file->Write(fmt("%.06f Connections expired due to inactivity: %d\n",
network_time, killed_by_inactivity));
file->Write(fmt("%.06f Total reassembler data: %dK\n", network_time,
file->Write(fmt("%.06f Total reassembler data: %"PRIu64"K\n", network_time,
Reassembler::TotalMemoryAllocation() / 1024));
// Signature engine.

View file

@ -1449,6 +1449,7 @@ void EnumType::CheckAndAddName(const string& module_name, const char* name,
}
else
{
Unref(id);
reporter->Error("identifier or enumerator value in enumerated type definition already exists");
SetError();
return;

View file

@ -32,7 +32,6 @@ Val::Val(Func* f)
val.func_val = f;
::Ref(val.func_val);
type = f->FType()->Ref();
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -49,7 +48,6 @@ Val::Val(BroFile* f)
assert(f->FType()->Tag() == TYPE_STRING);
type = string_file_type->Ref();
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -190,8 +188,6 @@ bool Val::DoSerialize(SerialInfo* info) const
if ( ! type->Serialize(info) )
return false;
SERIALIZE_OPTIONAL(attribs);
switch ( type->InternalType() ) {
case TYPE_INTERNAL_VOID:
info->s->Error("type is void");
@ -251,9 +247,6 @@ bool Val::DoUnserialize(UnserialInfo* info)
if ( ! (type = BroType::Unserialize(info)) )
return false;
UNSERIALIZE_OPTIONAL(attribs,
(RecordVal*) Val::Unserialize(info, TYPE_RECORD));
switch ( type->InternalType() ) {
case TYPE_INTERNAL_VOID:
info->s->Error("type is void");
@ -1478,13 +1471,20 @@ int TableVal::Assign(Val* index, HashKey* k, Val* new_val, Opcode op)
}
TableEntryVal* new_entry_val = new TableEntryVal(new_val);
HashKey k_copy(k->Key(), k->Size(), k->Hash());
TableEntryVal* old_entry_val = AsNonConstTable()->Insert(k, new_entry_val);
// If the dictionary index already existed, the insert may free up the
// memory allocated to the key bytes, so have to assume k is invalid
// from here on out.
delete k;
k = 0;
if ( subnets )
{
if ( ! index )
{
Val* v = RecoverIndex(k);
Val* v = RecoverIndex(&k_copy);
subnets->Insert(v, new_entry_val);
Unref(v);
}
@ -1496,7 +1496,7 @@ int TableVal::Assign(Val* index, HashKey* k, Val* new_val, Opcode op)
{
Val* rec_index = 0;
if ( ! index )
index = rec_index = RecoverIndex(k);
index = rec_index = RecoverIndex(&k_copy);
if ( new_val )
{
@ -1554,7 +1554,6 @@ int TableVal::Assign(Val* index, HashKey* k, Val* new_val, Opcode op)
if ( old_entry_val && attrs && attrs->FindAttr(ATTR_EXPIRE_CREATE) )
new_entry_val->SetExpireAccess(old_entry_val->ExpireAccessTime());
delete k;
if ( old_entry_val )
{
old_entry_val->Unref();
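
Illustrative sketch (not part of this commit): the TableVal::Assign() change above copies the pieces of the hash key it still needs before handing the key to Insert(), because the insert may free the key's storage when the index already existed. A standalone example of that copy-before-insert pattern; the types here are stand-ins, not Bro's:

#include <cassert>
#include <string>

// Stand-in for a container insert that takes ownership of, and may free, the
// key object it is given.
struct Key { std::string bytes; };

static void insert_taking_ownership(Key* k)
	{
	delete k;
	}

int main()
	{
	Key* k = new Key;
	k->bytes = "index-bytes";

	Key k_copy = *k;              // copy what we still need first

	insert_taking_ownership(k);   // k may be freed by the insert
	k = 0;                        // never touch the original again

	assert(k_copy.bytes == "index-bytes");   // the copy stays valid
	return 0;
	}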

View file

@ -80,7 +80,6 @@ public:
{
val.int_val = b;
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -90,7 +89,6 @@ public:
{
val.int_val = bro_int_t(i);
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -100,7 +98,6 @@ public:
{
val.uint_val = bro_uint_t(u);
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -110,7 +107,6 @@ public:
{
val.int_val = i;
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -120,7 +116,6 @@ public:
{
val.uint_val = u;
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -130,7 +125,6 @@ public:
{
val.double_val = d;
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -145,7 +139,6 @@ public:
Val(BroType* t, bool type_type) // Extra arg to differentiate from protected version.
{
type = new TypeType(t->Ref());
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -155,7 +148,6 @@ public:
{
val.int_val = 0;
type = base_type(TYPE_ERROR);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -364,7 +356,6 @@ protected:
{
val.string_val = s;
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -376,7 +367,6 @@ protected:
Val(TypeTag t)
{
type = base_type(t);
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -385,7 +375,6 @@ protected:
Val(BroType* t)
{
type = t->Ref();
attribs = 0;
#ifdef DEBUG
bound_id = 0;
#endif
@ -400,7 +389,6 @@ protected:
BroValUnion val;
BroType* type;
RecordVal* attribs;
#ifdef DEBUG
// For debugging, we keep the name of the ID to which a Val is bound.
@ -944,7 +932,6 @@ public:
{
val.int_val = i;
type = t;
attribs = 0;
}
Val* SizeVal() const { return new Val(val.int_val, TYPE_INT); }

View file

@ -385,6 +385,8 @@ void begin_func(ID* id, const char* module_name, function_flavor flavor,
if ( arg_id && ! arg_id->IsGlobal() )
arg_id->Error("argument name used twice");
Unref(arg_id);
arg_id = install_ID(arg_i->id, module_name, false, false);
arg_id->SetType(arg_i->type->Ref());
}
@ -442,10 +444,13 @@ void end_func(Stmt* body, attr_list* attrs)
Val* internal_val(const char* name)
{
ID* id = lookup_ID(name, GLOBAL_MODULE_NAME);
if ( ! id )
reporter->InternalError("internal variable %s missing", name);
return id->ID_Val();
Val* rval = id->ID_Val();
Unref(id);
return rval;
}
Val* internal_const_val(const char* name)
@ -457,13 +462,17 @@ Val* internal_const_val(const char* name)
if ( ! id->IsConst() )
reporter->InternalError("internal variable %s is not constant", name);
return id->ID_Val();
Val* rval = id->ID_Val();
Unref(id);
return rval;
}
Val* opt_internal_val(const char* name)
{
ID* id = lookup_ID(name, GLOBAL_MODULE_NAME);
return id ? id->ID_Val() : 0;
Val* rval = id ? id->ID_Val() : 0;
Unref(id);
return rval;
}
double opt_internal_double(const char* name)
@ -503,6 +512,8 @@ ListVal* internal_list_val(const char* name)
return 0;
Val* v = id->ID_Val();
Unref(id);
if ( v )
{
if ( v->Type()->Tag() == TYPE_LIST )
@ -528,7 +539,9 @@ BroType* internal_type(const char* name)
if ( ! id )
reporter->InternalError("internal type %s missing", name);
return id->Type();
BroType* rval = id->Type();
Unref(id);
return rval;
}
Func* internal_func(const char* name)

View file

@ -203,7 +203,7 @@ void Analyzer::Done()
finished = true;
}
void Analyzer::NextPacket(int len, const u_char* data, bool is_orig, int seq,
void Analyzer::NextPacket(int len, const u_char* data, bool is_orig, uint64 seq,
const IP_Hdr* ip, int caplen)
{
if ( skip )
@ -250,7 +250,7 @@ void Analyzer::NextStream(int len, const u_char* data, bool is_orig)
}
}
void Analyzer::NextUndelivered(int seq, int len, bool is_orig)
void Analyzer::NextUndelivered(uint64 seq, int len, bool is_orig)
{
if ( skip )
return;
@ -287,7 +287,7 @@ void Analyzer::NextEndOfData(bool is_orig)
}
void Analyzer::ForwardPacket(int len, const u_char* data, bool is_orig,
int seq, const IP_Hdr* ip, int caplen)
uint64 seq, const IP_Hdr* ip, int caplen)
{
if ( output_handler )
output_handler->DeliverPacket(len, data, is_orig, seq,
@ -335,7 +335,7 @@ void Analyzer::ForwardStream(int len, const u_char* data, bool is_orig)
AppendNewChildren();
}
void Analyzer::ForwardUndelivered(int seq, int len, bool is_orig)
void Analyzer::ForwardUndelivered(uint64 seq, int len, bool is_orig)
{
if ( output_handler )
output_handler->Undelivered(seq, len, is_orig);
@ -595,9 +595,9 @@ SupportAnalyzer* Analyzer::FirstSupportAnalyzer(bool orig)
}
void Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig,
int seq, const IP_Hdr* ip, int caplen)
uint64 seq, const IP_Hdr* ip, int caplen)
{
DBG_LOG(DBG_ANALYZER, "%s DeliverPacket(%d, %s, %d, %p, %d) [%s%s]",
DBG_LOG(DBG_ANALYZER, "%s DeliverPacket(%d, %s, %"PRIu64", %p, %d) [%s%s]",
fmt_analyzer(this).c_str(), len, is_orig ? "T" : "F", seq, ip, caplen,
fmt_bytes((const char*) data, min(40, len)), len > 40 ? "..." : "");
}
@ -609,9 +609,9 @@ void Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)
fmt_bytes((const char*) data, min(40, len)), len > 40 ? "..." : "");
}
void Analyzer::Undelivered(int seq, int len, bool is_orig)
void Analyzer::Undelivered(uint64 seq, int len, bool is_orig)
{
DBG_LOG(DBG_ANALYZER, "%s Undelivered(%d, %d, %s)",
DBG_LOG(DBG_ANALYZER, "%s Undelivered(%"PRIu64", %d, %s)",
fmt_analyzer(this).c_str(), seq, len, is_orig ? "T" : "F");
}
@ -793,7 +793,7 @@ SupportAnalyzer* SupportAnalyzer::Sibling(bool only_active) const
}
void SupportAnalyzer::ForwardPacket(int len, const u_char* data, bool is_orig,
int seq, const IP_Hdr* ip, int caplen)
uint64 seq, const IP_Hdr* ip, int caplen)
{
// We do not call parent's method, as we're replacing the functionality.
@ -834,7 +834,7 @@ void SupportAnalyzer::ForwardStream(int len, const u_char* data, bool is_orig)
Parent()->DeliverStream(len, data, is_orig);
}
void SupportAnalyzer::ForwardUndelivered(int seq, int len, bool is_orig)
void SupportAnalyzer::ForwardUndelivered(uint64 seq, int len, bool is_orig)
{
// We do not call parent's method, as we're replacing the functionality.

View file

@ -44,7 +44,7 @@ public:
* Analyzer::DeliverPacket().
*/
virtual void DeliverPacket(int len, const u_char* data,
bool orig, int seq,
bool orig, uint64 seq,
const IP_Hdr* ip, int caplen)
{ }
@ -59,7 +59,7 @@ public:
* Hook for receiving notification of stream gaps. Parameters are the
* same as for Analyzer::Undelivered().
*/
virtual void Undelivered(int seq, int len, bool orig) { }
virtual void Undelivered(uint64 seq, int len, bool orig) { }
};
/**
@ -143,7 +143,7 @@ public:
* @param caplen The packet's capture length, if available.
*/
void NextPacket(int len, const u_char* data, bool is_orig,
int seq = -1, const IP_Hdr* ip = 0, int caplen = 0);
uint64 seq = -1, const IP_Hdr* ip = 0, int caplen = 0);
/**
* Passes stream input to the analyzer for processing. The analyzer
@ -173,7 +173,7 @@ public:
*
* @param is_orig True if this is about originator-side input.
*/
void NextUndelivered(int seq, int len, bool is_orig);
void NextUndelivered(uint64 seq, int len, bool is_orig);
/**
* Reports a message boundary. This is a generic method that can be
@ -195,7 +195,7 @@ public:
* Parameters are the same as for NextPacket().
*/
virtual void ForwardPacket(int len, const u_char* data,
bool orig, int seq,
bool orig, uint64 seq,
const IP_Hdr* ip, int caplen);
/**
@ -212,7 +212,7 @@ public:
*
* Parameters are the same as for NextUndelivered().
*/
virtual void ForwardUndelivered(int seq, int len, bool orig);
virtual void ForwardUndelivered(uint64 seq, int len, bool orig);
/**
* Forwards an end-of-data notification on to all child analyzers.
@ -227,7 +227,7 @@ public:
* Parameters are the same.
*/
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
/**
* Hook for accessing stream input for parsing. This is called by
@ -241,7 +241,7 @@ public:
* NextUndelivered() and can be overridden by derived classes.
* Parameters are the same.
*/
virtual void Undelivered(int seq, int len, bool orig);
virtual void Undelivered(uint64 seq, int len, bool orig);
/**
* Hook for accessing end-of-data notifications. This is called by
@ -749,7 +749,7 @@ public:
* Parameters same as for Analyzer::ForwardPacket.
*/
virtual void ForwardPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
/**
* Passes stream input to the next sibling SupportAnalyzer if any, or
@ -769,7 +769,7 @@ public:
*
* Parameters same as for Analyzer::ForwardPacket.
*/
virtual void ForwardUndelivered(int seq, int len, bool orig);
virtual void ForwardUndelivered(uint64 seq, int len, bool orig);
protected:
friend class Analyzer;

View file

@ -19,14 +19,15 @@ add_subdirectory(ident)
add_subdirectory(interconn)
add_subdirectory(irc)
add_subdirectory(login)
add_subdirectory(modbus)
add_subdirectory(mime)
add_subdirectory(modbus)
add_subdirectory(ncp)
add_subdirectory(netflow)
add_subdirectory(netbios)
add_subdirectory(netflow)
add_subdirectory(ntp)
add_subdirectory(pia)
add_subdirectory(pop3)
add_subdirectory(radius)
add_subdirectory(rpc)
add_subdirectory(sip)
add_subdirectory(snmp)

View file

@ -22,7 +22,7 @@ void AYIYA_Analyzer::Done()
Event(udp_session_done);
}
void AYIYA_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, int seq, const IP_Hdr* ip, int caplen)
void AYIYA_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, uint64 seq, const IP_Hdr* ip, int caplen)
{
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);

View file

@ -12,7 +12,7 @@ public:
virtual void Done();
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new AYIYA_Analyzer(conn); }

View file

@ -46,7 +46,7 @@ void BackDoorEndpoint::FinalCheckForRlogin()
}
}
int BackDoorEndpoint::DataSent(double /* t */, int seq,
int BackDoorEndpoint::DataSent(double /* t */, uint64 seq,
int len, int caplen, const u_char* data,
const IP_Hdr* /* ip */,
const struct tcphdr* /* tp */)
@ -60,8 +60,8 @@ int BackDoorEndpoint::DataSent(double /* t */, int seq,
if ( endp->state == tcp::TCP_ENDPOINT_PARTIAL )
is_partial = 1;
int ack = endp->AckSeq() - endp->StartSeq();
int top_seq = seq + len;
uint64 ack = endp->ToRelativeSeqSpace(endp->AckSeq(), endp->AckWraps());
uint64 top_seq = seq + len;
if ( top_seq <= ack || top_seq <= max_top_seq )
// There is no new data in this packet.
@ -124,7 +124,7 @@ RecordVal* BackDoorEndpoint::BuildStats()
return stats;
}
void BackDoorEndpoint::CheckForRlogin(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForRlogin(uint64 seq, int len, const u_char* data)
{
if ( rlogin_checking_done )
return;
@ -177,7 +177,7 @@ void BackDoorEndpoint::CheckForRlogin(int seq, int len, const u_char* data)
if ( seq < max_top_seq )
{ // trim to just the new data
int delta = max_top_seq - seq;
int64 delta = max_top_seq - seq;
seq += delta;
data += delta;
len -= delta;
@ -255,7 +255,7 @@ void BackDoorEndpoint::RloginSignatureFound(int len)
endp->TCP()->ConnectionEvent(rlogin_signature_found, vl);
}
void BackDoorEndpoint::CheckForTelnet(int /* seq */, int len, const u_char* data)
void BackDoorEndpoint::CheckForTelnet(uint64 /* seq */, int len, const u_char* data)
{
if ( len >= 3 &&
data[0] == TELNET_IAC && IS_TELNET_NEGOTIATION_CMD(data[1]) )
@ -346,7 +346,7 @@ void BackDoorEndpoint::TelnetSignatureFound(int len)
endp->TCP()->ConnectionEvent(telnet_signature_found, vl);
}
void BackDoorEndpoint::CheckForSSH(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForSSH(uint64 seq, int len, const u_char* data)
{
if ( seq == 1 && CheckForString("SSH-", data, len) && len > 4 &&
(data[4] == '1' || data[4] == '2') )
@ -363,8 +363,9 @@ void BackDoorEndpoint::CheckForSSH(int seq, int len, const u_char* data)
if ( seq > max_top_seq )
{ // Estimate number of packets in the sequence gap
int gap = seq - max_top_seq;
num_pkts += int((gap + DEFAULT_MTU - 1) / DEFAULT_MTU);
int64 gap = seq - max_top_seq;
if ( gap > 0 )
num_pkts += uint64((gap + DEFAULT_MTU - 1) / DEFAULT_MTU);
}
++num_pkts;
@ -388,7 +389,7 @@ void BackDoorEndpoint::CheckForSSH(int seq, int len, const u_char* data)
}
}
void BackDoorEndpoint::CheckForRootBackdoor(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForRootBackdoor(uint64 seq, int len, const u_char* data)
{
// Check for root backdoor signature: an initial payload of
// exactly "# ".
@ -397,7 +398,7 @@ void BackDoorEndpoint::CheckForRootBackdoor(int seq, int len, const u_char* data
SignatureFound(root_backdoor_signature_found);
}
void BackDoorEndpoint::CheckForFTP(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForFTP(uint64 seq, int len, const u_char* data)
{
// Check for FTP signature
//
@ -429,7 +430,7 @@ void BackDoorEndpoint::CheckForFTP(int seq, int len, const u_char* data)
SignatureFound(ftp_signature_found);
}
void BackDoorEndpoint::CheckForNapster(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForNapster(uint64 seq, int len, const u_char* data)
{
// Check for Napster signature "GETfoobar" or "SENDfoobar" where
// "foobar" is the Napster handle associated with the request
@ -449,7 +450,7 @@ void BackDoorEndpoint::CheckForNapster(int seq, int len, const u_char* data)
SignatureFound(napster_signature_found);
}
void BackDoorEndpoint::CheckForSMTP(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForSMTP(uint64 seq, int len, const u_char* data)
{
const char* smtp_handshake[] = { "HELO", "EHLO", 0 };
@ -460,7 +461,7 @@ void BackDoorEndpoint::CheckForSMTP(int seq, int len, const u_char* data)
SignatureFound(smtp_signature_found);
}
void BackDoorEndpoint::CheckForIRC(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForIRC(uint64 seq, int len, const u_char* data)
{
if ( seq != 1 || is_partial )
return;
@ -475,7 +476,7 @@ void BackDoorEndpoint::CheckForIRC(int seq, int len, const u_char* data)
SignatureFound(irc_signature_found);
}
void BackDoorEndpoint::CheckForGnutella(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForGnutella(uint64 seq, int len, const u_char* data)
{
// After connecting to the server, the connecting client says:
//
@ -492,13 +493,13 @@ void BackDoorEndpoint::CheckForGnutella(int seq, int len, const u_char* data)
SignatureFound(gnutella_signature_found);
}
void BackDoorEndpoint::CheckForGaoBot(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForGaoBot(uint64 seq, int len, const u_char* data)
{
if ( seq == 1 && CheckForString("220 Bot Server (Win32)", data, len) )
SignatureFound(gaobot_signature_found);
}
void BackDoorEndpoint::CheckForKazaa(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForKazaa(uint64 seq, int len, const u_char* data)
{
// *Some*, though not all, KaZaa connections begin with:
//
@ -565,7 +566,7 @@ int is_absolute_url(const u_char* data, int len)
return *abs_url_sig_pos == '\0';
}
void BackDoorEndpoint::CheckForHTTP(int seq, int len, const u_char* data)
void BackDoorEndpoint::CheckForHTTP(uint64 seq, int len, const u_char* data)
{
// According to the RFC, we should look for
// '<method> SP <url> SP HTTP/<version> CR LF'
@ -629,7 +630,7 @@ void BackDoorEndpoint::CheckForHTTP(int seq, int len, const u_char* data)
}
}
void BackDoorEndpoint::CheckForHTTPProxy(int /* seq */, int len,
void BackDoorEndpoint::CheckForHTTPProxy(uint64 /* seq */, int len,
const u_char* data)
{
// Proxy ONLY accepts absolute URI's: "The absoluteURI form is
@ -713,7 +714,7 @@ void BackDoor_Analyzer::Init()
}
void BackDoor_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig,
int seq, const IP_Hdr* ip, int caplen)
uint64 seq, const IP_Hdr* ip, int caplen)
{
Analyzer::DeliverPacket(len, data, is_orig, seq, ip, caplen);
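
Illustrative sketch (not part of this commit): the BackDoor change above replaces the old 32-bit "ack minus start sequence" arithmetic with endp->ToRelativeSeqSpace(), which works in a 64-bit sequence space. That helper is defined in the TCP endpoint code and is not shown in this diff; the standalone example below only illustrates the idea of combining a 32-bit sequence number with a wrap count so relative positions can be computed without wraparound tricks.

#include <cassert>
#include <cstdint>

// Lift a 32-bit TCP sequence number plus a wrap counter into 64-bit space.
static uint64_t to_full_seq(uint32_t seq, uint32_t wraps)
	{
	return (uint64_t(wraps) << 32) | seq;
	}

int main()
	{
	uint32_t start_seq = 4294967000u;   // ISN close to the 32-bit wrap point
	uint32_t ack_seq   = 500u;          // the ack has already wrapped once
	uint32_t ack_wraps = 1u;

	uint64_t rel_ack = to_full_seq(ack_seq, ack_wraps)
	                   - to_full_seq(start_seq, 0);

	assert(rel_ack == 796);   // 296 bytes up to the wrap + 500 after it
	return 0;
	}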

View file

@ -14,7 +14,7 @@ class BackDoorEndpoint {
public:
BackDoorEndpoint(tcp::TCP_Endpoint* e);
int DataSent(double t, int seq, int len, int caplen, const u_char* data,
int DataSent(double t, uint64 seq, int len, int caplen, const u_char* data,
const IP_Hdr* ip, const struct tcphdr* tp);
RecordVal* BuildStats();
@ -22,23 +22,23 @@ public:
void FinalCheckForRlogin();
protected:
void CheckForRlogin(int seq, int len, const u_char* data);
void CheckForRlogin(uint64 seq, int len, const u_char* data);
void RloginSignatureFound(int len);
void CheckForTelnet(int seq, int len, const u_char* data);
void CheckForTelnet(uint64 seq, int len, const u_char* data);
void TelnetSignatureFound(int len);
void CheckForSSH(int seq, int len, const u_char* data);
void CheckForFTP(int seq, int len, const u_char* data);
void CheckForRootBackdoor(int seq, int len, const u_char* data);
void CheckForNapster(int seq, int len, const u_char* data);
void CheckForGnutella(int seq, int len, const u_char* data);
void CheckForKazaa(int seq, int len, const u_char* data);
void CheckForHTTP(int seq, int len, const u_char* data);
void CheckForHTTPProxy(int seq, int len, const u_char* data);
void CheckForSMTP(int seq, int len, const u_char* data);
void CheckForIRC(int seq, int len, const u_char* data);
void CheckForGaoBot(int seq, int len, const u_char* data);
void CheckForSSH(uint64 seq, int len, const u_char* data);
void CheckForFTP(uint64 seq, int len, const u_char* data);
void CheckForRootBackdoor(uint64 seq, int len, const u_char* data);
void CheckForNapster(uint64 seq, int len, const u_char* data);
void CheckForGnutella(uint64 seq, int len, const u_char* data);
void CheckForKazaa(uint64 seq, int len, const u_char* data);
void CheckForHTTP(uint64 seq, int len, const u_char* data);
void CheckForHTTPProxy(uint64 seq, int len, const u_char* data);
void CheckForSMTP(uint64 seq, int len, const u_char* data);
void CheckForIRC(uint64 seq, int len, const u_char* data);
void CheckForGaoBot(uint64 seq, int len, const u_char* data);
void SignatureFound(EventHandlerPtr e, int do_orig = 0);
@ -48,11 +48,11 @@ protected:
tcp::TCP_Endpoint* endp;
int is_partial;
int max_top_seq;
uint64 max_top_seq;
int rlogin_checking_done;
int rlogin_num_null;
int rlogin_string_separator_pos;
uint64 rlogin_string_separator_pos;
int rlogin_slash_seen;
uint32 num_pkts;
@ -80,7 +80,7 @@ protected:
// We support both packet and stream input, and can be instantiated
// even if the TCP analyzer is not yet reassembling.
virtual void DeliverPacket(int len, const u_char* data, bool is_orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
virtual void DeliverStream(int len, const u_char* data, bool is_orig);
void StatEvent();

View file

@ -68,7 +68,7 @@ void BitTorrent_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
}
}
void BitTorrent_Analyzer::Undelivered(int seq, int len, bool orig)
void BitTorrent_Analyzer::Undelivered(uint64 seq, int len, bool orig)
{
tcp::TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);

View file

@ -16,7 +16,7 @@ public:
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig);
virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)

View file

@ -207,7 +207,7 @@ void BitTorrentTracker_Analyzer::ServerReply(int len, const u_char* data)
}
}
void BitTorrentTracker_Analyzer::Undelivered(int seq, int len, bool orig)
void BitTorrentTracker_Analyzer::Undelivered(uint64 seq, int len, bool orig)
{
tcp::TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);

View file

@ -49,7 +49,7 @@ public:
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig);
virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)

View file

@ -36,7 +36,7 @@ void ConnSize_Analyzer::Done()
Analyzer::Done();
}
void ConnSize_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig, int seq, const IP_Hdr* ip, int caplen)
void ConnSize_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig, uint64 seq, const IP_Hdr* ip, int caplen)
{
Analyzer::DeliverPacket(len, data, is_orig, seq, ip, caplen);

View file

@ -26,7 +26,7 @@ public:
protected:
virtual void DeliverPacket(int len, const u_char* data, bool is_orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
uint64_t orig_bytes;

View file

@ -21,7 +21,7 @@ void DHCP_Analyzer::Done()
}
void DHCP_Analyzer::DeliverPacket(int len, const u_char* data,
bool orig, int seq, const IP_Hdr* ip, int caplen)
bool orig, uint64 seq, const IP_Hdr* ip, int caplen)
{
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
interp->NewData(orig, data, data + len);

View file

@ -14,7 +14,7 @@ public:
virtual void Done();
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new DHCP_Analyzer(conn); }

View file

@ -153,7 +153,7 @@ void DNP3_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
}
}
void DNP3_Analyzer::Undelivered(int seq, int len, bool orig)
void DNP3_Analyzer::Undelivered(uint64 seq, int len, bool orig)
{
TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);
interp->NewGap(orig, len);

View file

@ -14,7 +14,7 @@ public:
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig);
virtual void Undelivered(uint64 seq, int len, bool orig);
virtual void EndpointEOF(bool is_orig);
static Analyzer* InstantiateAnalyzer(Connection* conn)

View file

@ -835,34 +835,61 @@ int DNS_Interpreter::ParseRR_HINFO(DNS_MsgInfo* msg,
return 1;
}
static StringVal* extract_char_string(analyzer::Analyzer* analyzer,
const u_char*& data, int& len, int& rdlen)
{
if ( rdlen <= 0 )
return 0;
uint8 str_size = data[0];
--rdlen;
--len;
++data;
if ( str_size > rdlen )
{
analyzer->Weird("DNS_TXT_char_str_past_rdlen");
return 0;
}
StringVal* rval = new StringVal(str_size,
reinterpret_cast<const char*>(data));
rdlen -= str_size;
len -= str_size;
data += str_size;
return rval;
}
int DNS_Interpreter::ParseRR_TXT(DNS_MsgInfo* msg,
const u_char*& data, int& len, int rdlength,
const u_char* msg_start)
{
int name_len = data[0];
char* name = new char[name_len];
memcpy(name, data+1, name_len);
data += rdlength;
len -= rdlength;
if ( dns_TXT_reply && ! msg->skip_event )
if ( ! dns_TXT_reply || msg->skip_event )
{
val_list* vl = new val_list;
vl->append(analyzer->BuildConnVal());
vl->append(msg->BuildHdrVal());
vl->append(msg->BuildAnswerVal());
vl->append(new StringVal(name_len, name));
analyzer->ConnectionEvent(dns_TXT_reply, vl);
data += rdlength;
len -= rdlength;
return 1;
}
delete [] name;
VectorVal* char_strings = new VectorVal(string_vec);
StringVal* char_string;
return 1;
while ( (char_string = extract_char_string(analyzer, data, len, rdlength)) )
char_strings->Assign(char_strings->Size(), char_string);
val_list* vl = new val_list;
vl->append(analyzer->BuildConnVal());
vl->append(msg->BuildHdrVal());
vl->append(msg->BuildAnswerVal());
vl->append(char_strings);
analyzer->ConnectionEvent(dns_TXT_reply, vl);
return rdlength == 0;
}
void DNS_Interpreter::SendReplyOrRejectEvent(DNS_MsgInfo* msg,
@ -1146,7 +1173,7 @@ void DNS_Analyzer::Done()
}
void DNS_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen)
uint64 seq, const IP_Hdr* ip, int caplen)
{
tcp::TCP_ApplicationAnalyzer::DeliverPacket(len, data, orig, seq, ip, caplen);

View file

@ -258,7 +258,7 @@ public:
~DNS_Analyzer();
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
uint64 seq, const IP_Hdr* ip, int caplen);
virtual void Init();
virtual void Done();

View file

@ -367,7 +367,7 @@ event dns_MX_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string,
##
## ans: The type-independent part of the parsed answer record.
##
## str: The textual information returned by the reply.
## strs: The textual information returned by the reply.
##
## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl
## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply
@ -376,7 +376,7 @@ event dns_MX_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string,
## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply
## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout
## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth
event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, str: string%);
event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec%);
## Generated for DNS replies of type *SRV*. For replies with multiple answers,
## an individual event of the corresponding type is raised for each.
@ -392,11 +392,17 @@ event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, str: string%)
##
## ans: The type-independent part of the parsed answer record.
##
## priority: Priority of the SRV response.
## target: Target of the SRV response -- the canonical hostname of the
## machine providing the service, ending in a dot.
##
## weight: Weight of the SRV response.
## priority: Priority of the SRV response -- the priority of the target
## host, lower value means more preferred.
##
## p: Port of the SRV response.
## weight: Weight of the SRV response -- a relative weight for records
## with the same priority, higher value means more preferred.
##
## p: Port of the SRV response -- the TCP or UDP port on which the
## service is to be found.
##
## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl
## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply
@ -408,8 +414,7 @@ event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, str: string%)
event dns_SRV_reply%(c: connection, msg: dns_msg, ans: dns_answer, target: string, priority: count, weight: count, p: count%);
## Generated on DNS reply resource records when the type of record is not one
## that Bro knows how to parse and generate another more specific specific
## event.
## that Bro knows how to parse and generate another more specific event.
##
## c: The connection, which may be UDP or TCP depending on the type of the
## transport-layer session being analyzed.
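
Illustrative sketch (not part of this commit): the SRV documentation above states that lower priority values are preferred and that, among records of equal priority, higher weights are preferred. A standalone illustration of that ordering with made-up records (a plain sort; RFC 2782's weighted random selection is out of scope here):

#include <algorithm>
#include <cassert>
#include <vector>

struct SrvRR { unsigned priority, weight, port; };

// Lower priority first; within equal priority, higher weight first.
static bool srv_before(const SrvRR& a, const SrvRR& b)
	{
	if ( a.priority != b.priority )
		return a.priority < b.priority;
	return a.weight > b.weight;
	}

int main()
	{
	SrvRR a = { 10, 60, 5060 };
	SrvRR b = { 10, 20, 5061 };
	SrvRR c = {  0,  5, 5062 };

	std::vector<SrvRR> rrs;
	rrs.push_back(a);
	rrs.push_back(b);
	rrs.push_back(c);

	std::sort(rrs.begin(), rrs.end(), srv_before);

	assert(rrs[0].port == 5062);   // priority 0 beats priority 10
	assert(rrs[1].port == 5060);   // equal priority: weight 60 beats 20
	return 0;
	}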

View file

@ -31,12 +31,25 @@ void File_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
if ( buffer_len == BUFFER_SIZE )
Identify();
}
return;
if ( orig )
file_id_orig = file_mgr->DataIn(data, len, GetAnalyzerTag(), Conn(),
orig, file_id_orig);
else
file_id_resp = file_mgr->DataIn(data, len, GetAnalyzerTag(), Conn(),
orig, file_id_resp);
}
void File_Analyzer::Undelivered(int seq, int len, bool orig)
void File_Analyzer::Undelivered(uint64 seq, int len, bool orig)
{
TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);
if ( orig )
file_id_orig = file_mgr->Gap(seq, len, GetAnalyzerTag(), Conn(), orig,
file_id_orig);
else
file_id_resp = file_mgr->Gap(seq, len, GetAnalyzerTag(), Conn(), orig,
file_id_resp);
}
void File_Analyzer::Done()
@ -45,6 +58,16 @@ void File_Analyzer::Done()
if ( buffer_len && buffer_len != BUFFER_SIZE )
Identify();
if ( ! file_id_orig.empty() )
file_mgr->EndOfFile(file_id_orig);
else
file_mgr->EndOfFile(GetAnalyzerTag(), Conn(), true);
if ( ! file_id_resp.empty() )
file_mgr->EndOfFile(file_id_resp);
else
file_mgr->EndOfFile(GetAnalyzerTag(), Conn(), false);
}
void File_Analyzer::Identify()
@ -61,49 +84,3 @@ void File_Analyzer::Identify()
vl->append(new StringVal(match));
ConnectionEvent(file_transferred, vl);
}
IRC_Data::IRC_Data(Connection* conn)
: File_Analyzer("IRC_Data", conn)
{
}
void IRC_Data::Done()
{
File_Analyzer::Done();
file_mgr->EndOfFile(GetAnalyzerTag(), Conn());
}
void IRC_Data::DeliverStream(int len, const u_char* data, bool orig)
{
File_Analyzer::DeliverStream(len, data, orig);
file_mgr->DataIn(data, len, GetAnalyzerTag(), Conn(), orig);
}
void IRC_Data::Undelivered(int seq, int len, bool orig)
{
File_Analyzer::Undelivered(seq, len, orig);
file_mgr->Gap(seq, len, GetAnalyzerTag(), Conn(), orig);
}
FTP_Data::FTP_Data(Connection* conn)
: File_Analyzer("FTP_Data", conn)
{
}
void FTP_Data::Done()
{
File_Analyzer::Done();
file_mgr->EndOfFile(GetAnalyzerTag(), Conn());
}
void FTP_Data::DeliverStream(int len, const u_char* data, bool orig)
{
File_Analyzer::DeliverStream(len, data, orig);
file_mgr->DataIn(data, len, GetAnalyzerTag(), Conn(), orig);
}
void FTP_Data::Undelivered(int seq, int len, bool orig)
{
File_Analyzer::Undelivered(seq, len, orig);
file_mgr->Gap(seq, len, GetAnalyzerTag(), Conn(), orig);
}

View file

@ -17,7 +17,7 @@ public:
virtual void DeliverStream(int len, const u_char* data, bool orig);
void Undelivered(int seq, int len, bool orig);
void Undelivered(uint64 seq, int len, bool orig);
// static analyzer::Analyzer* InstantiateAnalyzer(Connection* conn)
// { return new File_Analyzer(conn); }
@ -28,17 +28,15 @@ protected:
static const int BUFFER_SIZE = 1024;
char buffer[BUFFER_SIZE];
int buffer_len;
string file_id_orig;
string file_id_resp;
};
class IRC_Data : public File_Analyzer {
public:
IRC_Data(Connection* conn);
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig);
IRC_Data(Connection* conn)
: File_Analyzer("IRC_Data", conn)
{ }
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new IRC_Data(conn); }
@ -46,13 +44,9 @@ public:
class FTP_Data : public File_Analyzer {
public:
FTP_Data(Connection* conn);
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
virtual void Undelivered(int seq, int len, bool orig);
FTP_Data(Connection* conn)
: File_Analyzer("FTP_Data", conn)
{ }
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new FTP_Data(conn); }

Some files were not shown because too many files have changed in this diff.