Merge branch 'master' into topic/jsiwek/broxygen

Conflicts:
	testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log
	testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log
Commit b38efa58d0 by Jon Siwek, 2013-10-30 16:20:48 -05:00
117 changed files with 1171 additions and 742 deletions

CHANGES (94 changed lines)

@ -1,4 +1,98 @@
2.2-beta-177 | 2013-10-30 04:54:54 -0700
* Fix thread processing/termination conditions. (Jon Siwek)
2.2-beta-175 | 2013-10-29 09:30:09 -0700
* Return the Dir module to file name tracking instead of inode
tracking to avoid missing files that reuse a formerly seen inode.
(Seth Hall)
* Deprecate Broccoli Ruby bindings and no longer build them by
default; use --enable-ruby to do so. (Jon Siwek)
2.2-beta-167 | 2013-10-29 06:02:38 -0700
* Change percent_lost in capture-loss from a string to a double.
(Vlad Grigorescu)
* New version of the threading queue deadlock fix. (Robin Sommer)
* Updating README with download/git information. (Robin Sommer)
2.2-beta-161 | 2013-10-25 15:48:15 -0700
* Add curl to list of optional dependencies. It's used by the
active-http.bro script. (Daniel Thayer)
* Update test and baseline for a recent doc test fix. (Daniel
Thayer)
2.2-beta-158 | 2013-10-25 15:05:08 -0700
* Updating README with download/git information. (Robin Sommer)
2.2-beta-157 | 2013-10-25 11:11:17 -0700
* Extend the documentation of the SQLite reader/writer framework.
(Bernhard Amann)
* Fix inclusion of wrong example file in scripting tutorial.
Reported by Michael Auger @LM4K. (Bernhard Amann)
* Alternative fix for the threading deadlock issue to avoid potential
performance impact. (Bernhard Amann)
2.2-beta-152 | 2013-10-24 18:16:49 -0700
* Fix for input readers occasionally dead-locking. (Robin Sommer)
2.2-beta-151 | 2013-10-24 16:52:26 -0700
* Updating submodule(s).
2.2-beta-150 | 2013-10-24 16:32:14 -0700
* Change temporary ASCII reader workaround for getline() on
Mavericks to permanent fix. (Bernhard Amann)
2.2-beta-148 | 2013-10-24 14:34:35 -0700
* Add gawk to list of optional packages. (Daniel Thayer)
* Add more script package README files. (Daniel Thayer)
* Add NEWS about new features of BroControl and upgrade info.
(Daniel Thayer)
* Intel framework notes added to NEWS. (Seth Hall)
* Temporary OSX Mavericks libc++ issue workaround for getline()
problem in ASCII reader. (Bernhard Amann)
* Change test of identify_data BIF to ignore charset as it may vary
with libmagic version. (Jon Siwek)
* Ensure that the starting BPF filter is logged on clusters. (Seth
Hall)
* Add UDP support to the checksum offload detection script. (Seth
Hall)
2.2-beta-133 | 2013-10-23 09:50:16 -0700
* Fix record coercion tolerance of optional fields. (Jon Siwek)
* Add NEWS about incompatible local.bro changes, addresses BIT-1047.
(Jon Siwek)
* Fix minor formatting problem in NEWS. (Jon Siwek)
2.2-beta-129 | 2013-10-23 09:47:29 -0700
* Another batch of documentation fixes and updates. (Daniel Thayer)
2.2-beta-114 | 2013-10-18 14:17:57 -0700
* Moving the SQLite examples into separate Bro files to turn them

NEWS (100 changed lines)

@ -10,6 +10,28 @@ Bro 2.2 Beta
New Functionality
-----------------
- A completely overhauled intelligence framework for consuming
external intelligence data. It provides an abstracted mechanism
for feeding data into the framework to be matched against the
data available. It also provides a function named ``Intel::match``
which makes any hits on intelligence data available to the
scripting language.
Using the input framework, the intel framework can load data from
text files. It can also update and add data if changes are
made to the file being monitored. Files to monitor for
intelligence can be provided by redef-ing the
``Intel::read_files`` variable.
The intel framework is cluster-ready. On a cluster, the
manager is the only node that needs to load data from disk;
the cluster support distributes the data across the cluster
automatically.
Scripts are provided at ``policy/frameworks/intel/seen`` that
provide a broad set of data sources to feed into the intel
framework to be matched.
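As a sketch, pointing the framework at an intelligence file via the
``Intel::read_files`` variable would look like this (the file path is
hypothetical):

.. code:: bro

    redef Intel::read_files += {
        "/usr/local/bro/share/intel/my-feed.dat"
    };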
- A new file analysis framework moves most of the processing of file
content from script-land into the core, where it belongs. See
``doc/file-analysis.rst``, or the online documentation, for more
@ -21,23 +43,26 @@ New Functionality
efficiently, now):
- HTTP:
* Identify MIME type of messages.
* Extract messages to disk.
* Compute MD5 for messages.
- SMTP:
* Identify MIME type of messages.
* Extract messages to disk.
* Compute MD5 for messages.
* Provide access to start of entity data.
- FTP data transfers:
* Identify MIME types of data.
* Record to disk.
- IRC DCC transfers: Record to disk.
- Support for analyzing data transfered via HTTP range requests.
- Support for analyzing data transferred via HTTP range requests.
- A binary input reader interfaces the input framework with file
analysis, allowing files on disk to be injected into Bro's
@ -233,6 +258,35 @@ New Functionality
To use CPU pinning, a new per-node option ``pin_cpus`` can be
specified in node.cfg if the OS is either Linux or FreeBSD.
- BroControl now returns useful exit codes. Most BroControl commands
return 0 if everything was OK, and 1 otherwise. However, there are
a few exceptions. The "status" and "top" commands return 0 if all Bro
nodes are running, and 1 if not all nodes are running. The "cron"
command always returns 0 (but it still sends email if there were any
problems). Any command provided by a plugin always returns 0.
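These exit codes make BroControl easy to drive from a wrapper script or
cron job; a minimal sketch (the mail recipient is a placeholder, not part
of BroControl):

.. code:: console

    broctl status
    if [ $? -ne 0 ]; then
        echo "one or more Bro nodes are not running" \
            | mail -s "Bro status alert" admin@example.com
    fi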
- BroControl now has an option "env_vars" to set Bro environment variables.
The value of this option is a comma-separated list of environment variable
assignments (e.g., "VAR1=value, VAR2=another"). The "env_vars" option
can apply to all Bro nodes (by setting it in broctl.cfg), or can be
node-specific (by setting it in node.cfg). Environment variables in
node.cfg have priority over any specified in broctl.cfg.
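An illustrative sketch of both placements, using the example assignments
from above (the node name, host, and library path are hypothetical):

.. code:: console

    # broctl.cfg -- applies to all Bro nodes:
    env_vars = VAR1=value, VAR2=another

    # node.cfg -- applies only to this node, taking priority over broctl.cfg:
    [worker-1]
    type=worker
    host=192.168.1.2
    interface=eth0
    env_vars = LD_LIBRARY_PATH=/opt/custom/lib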
- BroControl now supports load balancing with PF_RING while sniffing
multiple interfaces. Rather than assigning the same PF_RING cluster ID
to all workers on a host, cluster ID assignment is now based on which
interface a worker is sniffing (i.e., all workers on a host that sniff
the same interface will share a cluster ID). This is handled by
BroControl automatically.
- BroControl has several new options: MailConnectionSummary (for
disabling the sending of connection summary report emails),
MailAlarmsInterval (for specifying a different interval to send alarm
summary emails), CompressCmd (if archived log files will be compressed,
this specifies the command that will be used to compress them),
CompressExtension (if archived log files will be compressed, this
specifies the file extension to use).
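A broctl.cfg fragment combining these options might look as follows
(the values shown are illustrative only, not defaults):

.. code:: console

    MailConnectionSummary = 0
    MailAlarmsInterval = 1440
    CompressCmd = gzip
    CompressExtension = gz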
- BroControl comes with its own test-suite now. ``make test`` in
``aux/broctl`` will run it.
@ -243,6 +297,37 @@ most submodules.
Changed Functionality
---------------------
- Previous versions of ``$prefix/share/bro/site/local.bro`` (where
"$prefix" indicates the installation prefix of Bro) aren't compatible
with Bro 2.2. This file won't be overwritten when installing over a
previous Bro installation, to prevent clobbering users' modifications,
but an example of the new version is located in
``$prefix/share/bro/site/local.bro.example``. If no modifications
have been made to the previous local.bro, just copy the new example
version over it; otherwise merge in the differences. For reference,
a common error message when attempting to use an outdated local.bro
looks like::
fatal error in /usr/local/bro/share/bro/policy/frameworks/software/vulnerable.bro, line 41: BroType::AsRecordType (table/record) (set[record { min:record { major:count; minor:count; minor2:count; minor3:count; addl:string; }; max:record { major:count; minor:count; minor2:count; minor3:count; addl:string; }; }])
- The type of ``Software::vulnerable_versions`` changed to allow
more flexibility and range specifications. An example usage:
.. code:: bro
const java_1_6_vuln = Software::VulnerableVersionRange(
$max = Software::Version($major = 1, $minor = 6, $minor2 = 0, $minor3 = 44)
);
const java_1_7_vuln = Software::VulnerableVersionRange(
$min = Software::Version($major = 1, $minor = 7),
$max = Software::Version($major = 1, $minor = 7, $minor2 = 0, $minor3 = 20)
);
redef Software::vulnerable_versions += {
["Java"] = set(java_1_6_vuln, java_1_7_vuln)
};
- The interface to extracting content from application-layer protocols
(including HTTP, SMTP, FTP) has changed significantly due to the
introduction of the new file analysis framework (see above).
@ -328,6 +413,19 @@ Changed Functionality
- We removed the BitTorrent DPD signatures pending further updates to
that analyzer.
- In previous versions of BroControl, running "broctl cron" would create
a file ``$prefix/logs/stats/www`` (where "$prefix" indicates the
installation prefix of Bro). Now, it is created as a directory.
Therefore, if you perform an upgrade install and you're using BroControl,
then you may see an email (generated by "broctl cron") containing an
error message: "error running update-stats". To fix this problem,
either remove that file (it is not needed) or rename it.
- Due to lack of maintenance the Ruby bindings for Broccoli are now
deprecated, and the build process no longer includes them by
default. For the time being, they can still be enabled by
configuring with ``--enable-ruby``, however we plan to remove
Broccoli's Ruby support with the next Bro release.
Bro 2.1
=======

README (10 changed lines)

@ -8,11 +8,21 @@ and pointers for getting started. NEWS contains release notes for the
current version, and CHANGES has the complete history of changes.
Please see COPYING for licensing information.
You can download source and binary releases from:
http://www.bro.org/download
To get the current development version, clone our master git
repository:
git clone --recursive git://git.bro.org/bro
For more documentation, research publications, and community contact
information, please see Bro's home page:
http://www.bro.org
On behalf of the Bro Development Team,
Vern Paxson & Robin Sommer,


@ -1 +1 @@
2.2-beta-114
2.2-beta-177

@ -1 +1 @@
Subproject commit 923994715b34bf3292e402bbe00c00ff77556490
Subproject commit 0f20a50afacb68154b4035b6da63164d154093e4

@ -1 +1 @@
Subproject commit 1496e0319f6fa12bb39362ab0947c82e1d6c669b
Subproject commit d17f99107cc778627a0829f0ae416073bb1e20bb

@ -1 +1 @@
Subproject commit e57ec85a898a077cb3376462cac1f047e9aeaee7
Subproject commit 5cc63348a4c3e54adaf59e5a85bec055025c6c1f

@ -1 +1 @@
Subproject commit e8eda204f418c78cc35102db04602ad2ea94aff8
Subproject commit cea34f6de7fc3b6f01921593797e5f0f197b67a7

@ -1 +1 @@
Subproject commit 056c666cd8534ba3ba88731d985dde3e29206800
Subproject commit cfc8fe7ddf5ba3a9f957d1d5a98e9cfe1e9692ac

configure (vendored, 7 changed lines)

@ -32,12 +32,12 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--enable-perftools force use of Google perftools on non-Linux systems
(automatically on when perftools is present on Linux)
--enable-perftools-debug use Google's perftools for debugging
--enable-ruby build ruby bindings for broccoli (deprecated)
--disable-broccoli don't build or install the Broccoli library
--disable-broctl don't install Broctl
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
--disable-ruby don't try to build ruby bindings for broccoli
--disable-dataseries don't use the optional DataSeries log writer
--disable-elasticsearch don't use the optional ElasticSearch log writer
@ -113,6 +113,7 @@ append_cache_entry INSTALL_BROCTL BOOL true
append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING
append_cache_entry ENABLE_MOBILE_IPV6 BOOL false
append_cache_entry DISABLE_PERFTOOLS BOOL false
append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
# parse arguments
while [ $# -ne 0 ]; do
@ -174,8 +175,8 @@ while [ $# -ne 0 ]; do
--disable-python)
append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
;;
--disable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
--enable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
;;
--disable-dataseries)
append_cache_entry DISABLE_DATASERIES BOOL true


@ -97,6 +97,8 @@ build time:
* LibGeoIP (for geo-locating IP addresses)
* sendmail (enables Bro and BroControl to send mail)
* gawk (enables all features of bro-cut)
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
* ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)
* Ruby executable, library, and headers (for Broccoli Ruby bindings)


@ -214,7 +214,7 @@ take a look at a simple script, stored as
``connection_record_01.bro``, that will output the connection record
for a single connection.
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_02.bro
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_01.bro
Again, we start with ``@load``, this time importing the
:doc:`/scripts/base/protocols/conn/index` scripts which supply the tracking
@ -1222,7 +1222,7 @@ from the connection relative to the behavior that has been observed by
Bro.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
:lines: 59-62
:lines: 60-63
In the :doc:`/scripts/policy/protocols/ssl/expiring-certs` script
which identifies when SSL certificates are set to expire and raises


@ -122,6 +122,7 @@ rest_target(${psd} base/frameworks/notice/extend-email/hostnames.bro)
rest_target(${psd} base/frameworks/notice/main.bro)
rest_target(${psd} base/frameworks/notice/non-cluster.bro)
rest_target(${psd} base/frameworks/notice/weird.bro)
rest_target(${psd} base/frameworks/packet-filter/cluster.bro)
rest_target(${psd} base/frameworks/packet-filter/main.bro)
rest_target(${psd} base/frameworks/packet-filter/netstats.bro)
rest_target(${psd} base/frameworks/packet-filter/utils.bro)


@ -0,0 +1 @@
Support for extracting files with the file analysis framework.


@ -0,0 +1 @@
Support for file hashes with the file analysis framework.


@ -0,0 +1 @@
Support for Unified2 files in the file analysis framework.


@ -120,6 +120,7 @@ export {
## The cluster layout definition. This should be placed into a filter
## named cluster-layout.bro somewhere in the BROPATH. It will be
## automatically loaded if the CLUSTER_NODE environment variable is set.
## Note that BroControl handles all of this automatically.
const nodes: table[string] of Node = {} &redef;
## This is usually supplied on the command line for each instance


@ -15,13 +15,16 @@ export {
## are wildcards.
const listen_interface = 0.0.0.0 &redef;
## Which port to listen on.
## Which port to listen on. Note that BroControl sets this
## automatically.
const listen_port = 47757/tcp &redef;
## This defines if a listening socket should use SSL.
const listen_ssl = F &redef;
## Defines if a listening socket can bind to IPv6 addresses.
##
## Note that this is overridden by the BroControl IPv6Comm option.
const listen_ipv6 = F &redef;
## If :bro:id:`Communication::listen_interface` is a non-global
@ -128,7 +131,8 @@ export {
};
## The table of Bro or Broccoli nodes that Bro will initiate connections
## to or respond to connections from.
## to or respond to connections from. Note that BroControl sets this
## automatically.
global nodes: table[string] of Node &redef;
## A table of peer nodes for which this node issued a


@ -1,6 +1,12 @@
##! Interface for the SQLite input reader.
##! Interface for the SQLite input reader. Redefinable options are available
##! to tweak the input format of the SQLite reader.
##!
##! The defaults are set to match Bro's ASCII output.
##! See :doc:`/frameworks/logging-input-sqlite` for an introduction on how to
##! use the SQLite reader.
##!
##! When using the SQLite reader, you have to specify the SQL query that returns
##! the desired data by setting ``query`` in the ``config`` table. See the
##! introduction mentioned above for an example.
module InputSQLite;
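A minimal sketch of reading an SQLite table into a Bro set with the
``query`` option set in ``config`` (the database path, table name, and
column name are all hypothetical):

.. code:: bro

    global hosts: set[addr] = set();

    type Idx: record {
        host: addr;
    };

    event bro_init()
        {
        Input::add_table([$source="/var/db/hosts", $name="hosts",
                          $idx=Idx, $destination=hosts,
                          $reader=Input::READER_SQLITE,
                          $config=table(["query"] = "select host from machines;")]);
        }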


@ -76,9 +76,16 @@ export {
};
## Default rotation interval. Zero disables rotation.
##
## Note that this is overridden by the BroControl LogRotationInterval
## option.
const default_rotation_interval = 0secs &redef;
## Default alarm summary mail interval. Zero disables alarm summary mails.
## Default alarm summary mail interval. Zero disables alarm summary
## mails.
##
## Note that this is overridden by the BroControl MailAlarmsInterval
## option.
const default_mail_alarms_interval = 0secs &redef;
## Default naming format for timestamps embedded into filenames.


@ -0,0 +1 @@
Support for postprocessors in the logging framework.


@ -1,5 +1,13 @@
##! Interface for the SQLite log writer. Redefinable options are available
##! Interface for the SQLite log writer. Redefinable options are available
##! to tweak the output format of the SQLite reader.
##!
##! See :doc:`/frameworks/logging-input-sqlite` for an introduction on how to
##! use the SQLite log writer.
##!
##! The SQL writer currently supports one writer-specific filter option via
##! ``config``: setting ``tablename`` sets the name of the table that is used
##! or created in the SQLite database. An example for this is given in the
##! introduction mentioned above.
module LogSQLite;
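A minimal sketch of attaching the SQLite writer to a log stream with the
``tablename`` filter option set (the output path and table name are
hypothetical):

.. code:: bro

    event bro_init()
        {
        local filter: Log::Filter = [$name="sqlite", $path="/var/db/conn",
                                     $writer=Log::WRITER_SQLITE,
                                     $config=table(["tablename"] = "conn")];
        Log::add_filter(Conn::LOG, filter);
        }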


@ -0,0 +1,4 @@
The notice framework enables Bro to "notice" things which are odd or
potentially bad, leaving it to the local configuration to define which
of them are actionable. This decoupling of detection and reporting allows
Bro to be customized to the different needs that sites have.


@ -7,12 +7,14 @@ module Notice;
export {
redef enum Action += {
## Drops the address via Drop::drop_address, and generates an alarm.
## Drops the address via Drop::drop_address, and generates an
## alarm.
ACTION_DROP
};
redef record Info += {
## Indicate if the $src IP address was dropped and denied network access.
## Indicate if the $src IP address was dropped and denied
## network access.
dropped: bool &log &default=F;
};
}


@ -6,12 +6,14 @@ module Notice;
export {
redef enum Action += {
## Indicates that the notice should be sent to the pager email address
## configured in the :bro:id:`Notice::mail_page_dest` variable.
## Indicates that the notice should be sent to the pager email
## address configured in the :bro:id:`Notice::mail_page_dest`
## variable.
ACTION_PAGE
};
## Email address to send notices with the :bro:enum:`Notice::ACTION_PAGE` action.
## Email address to send notices with the :bro:enum:`Notice::ACTION_PAGE`
## action.
const mail_page_dest = "" &redef;
}


@ -13,13 +13,15 @@ export {
## Address to send the pretty-printed reports to. Default if not set is
## :bro:id:`Notice::mail_dest`.
##
## Note that this is overridden by the BroControl MailAlarmsTo option.
const mail_dest_pretty_printed = "" &redef;
## If an address from one of these networks is reported, we mark
## the entry with an additional quote symbol (i.e., ">"). Many MUAs
## then highlight such lines differently.
global flag_nets: set[subnet] &redef;
## Function that renders a single alarm. Can be overidden.
## Function that renders a single alarm. Can be overridden.
global pretty_print_alarm: function(out: file, n: Info) &redef;
## Force generating mail file, even if reading from traces or no mail


@ -17,7 +17,7 @@ export {
## Manager can communicate notice suppression to workers.
redef Cluster::manager2worker_events += /Notice::begin_suppression/;
## Workers needs need ability to forward notices to manager.
## Workers need ability to forward notices to manager.
redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER )


@ -1,7 +1,7 @@
##! This is the notice framework which enables Bro to "notice" things which
##! are odd or potentially bad. Decisions of the meaning of various notices
##! need to be done per site because Bro does not ship with assumptions about
##! what is bad activity for sites. More extensive documetation about using
##! what is bad activity for sites. More extensive documentation about using
##! the notice framework can be found in :doc:`/frameworks/notice`.
module Notice;
@ -14,13 +14,13 @@ export {
ALARM_LOG,
};
## Scripts creating new notices need to redef this enum to add their own
## specific notice types which would then get used when they call the
## :bro:id:`NOTICE` function. The convention is to give a general category
## along with the specific notice separating words with underscores and
## using leading capitals on each word except for abbreviations which are
## kept in all capitals. For example, SSH::Login is for heuristically
## guessed successful SSH logins.
## Scripts creating new notices need to redef this enum to add their
## own specific notice types which would then get used when they call
## the :bro:id:`NOTICE` function. The convention is to give a general
## category along with the specific notice separating words with
## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example,
## SSH::Login is for heuristically guessed successful SSH logins.
type Type: enum {
## Notice reporting a count of how often a notice occurred.
Tally,
@ -30,67 +30,72 @@ export {
type Action: enum {
## Indicates that there is no action to be taken.
ACTION_NONE,
## Indicates that the notice should be sent to the notice logging stream.
## Indicates that the notice should be sent to the notice
## logging stream.
ACTION_LOG,
## Indicates that the notice should be sent to the email address(es)
## configured in the :bro:id:`Notice::mail_dest` variable.
## Indicates that the notice should be sent to the email
## address(es) configured in the :bro:id:`Notice::mail_dest`
## variable.
ACTION_EMAIL,
## Indicates that the notice should be alarmed. A readable ASCII
## version of the alarm log is emailed in bulk to the address(es)
## configured in :bro:id:`Notice::mail_dest`.
## Indicates that the notice should be alarmed. A readable
## ASCII version of the alarm log is emailed in bulk to the
## address(es) configured in :bro:id:`Notice::mail_dest`.
ACTION_ALARM,
};
type ActionSet: set[Notice::Action];
## The notice framework is able to do automatic notice supression by
## utilizing the $identifier field in :bro:type:`Notice::Info` records.
## Set this to "0secs" to completely disable automated notice suppression.
## The notice framework is able to do automatic notice suppression by
## utilizing the *identifier* field in :bro:type:`Notice::Info` records.
## Set this to "0secs" to completely disable automated notice
## suppression.
const default_suppression_interval = 1hrs &redef;
type Info: record {
## An absolute time indicating when the notice occurred, defaults
## to the current network time.
## An absolute time indicating when the notice occurred,
## defaults to the current network time.
ts: time &log &optional;
## A connection UID which uniquely identifies the endpoints
## concerned with the notice.
uid: string &log &optional;
## A connection 4-tuple identifying the endpoints concerned with the
## notice.
## A connection 4-tuple identifying the endpoints concerned
## with the notice.
id: conn_id &log &optional;
## A shorthand way of giving the uid and id to a notice. The
## reference to the actual connection will be deleted after applying
## the notice policy.
## reference to the actual connection will be deleted after
## applying the notice policy.
conn: connection &optional;
## A shorthand way of giving the uid and id to a notice. The
## reference to the actual connection will be deleted after applying
## the notice policy.
## reference to the actual connection will be deleted after
## applying the notice policy.
iconn: icmp_conn &optional;
## A file record if the notice is relted to a file. The
## reference to the actual fa_file record will be deleted after applying
## the notice policy.
## A file record if the notice is related to a file. The
## reference to the actual fa_file record will be deleted after
## applying the notice policy.
f: fa_file &optional;
## A file unique ID if this notice is related to a file. If the $f
## field is provided, this will be automatically filled out.
## A file unique ID if this notice is related to a file. If
## the *f* field is provided, this will be automatically filled
## out.
fuid: string &log &optional;
## A mime type if the notice is related to a file. If the $f field
## is provided, this will be automatically filled out.
## A mime type if the notice is related to a file. If the *f*
## field is provided, this will be automatically filled out.
file_mime_type: string &log &optional;
## Frequently files can be "described" to give a bit more context.
## This field will typically be automatically filled out from an
## fa_file record. For example, if a notice was related to a
## file over HTTP, the URL of the request would be shown.
## Frequently files can be "described" to give a bit more
## context. This field will typically be automatically filled
## out from an fa_file record. For example, if a notice was
## related to a file over HTTP, the URL of the request would
## be shown.
file_desc: string &log &optional;
## The transport protocol. Filled automatically when either conn, iconn
## or p is specified.
## The transport protocol. Filled automatically when either
## *conn*, *iconn* or *p* is specified.
proto: transport_proto &log &optional;
## The :bro:type:`Notice::Type` of the notice.
@ -117,38 +122,42 @@ export {
## The actions which have been applied to this notice.
actions: ActionSet &log &default=ActionSet();
## By adding chunks of text into this element, other scripts can
## expand on notices that are being emailed. The normal way to add text
## is to extend the vector by handling the :bro:id:`Notice::notice`
## event and modifying the notice in place.
## By adding chunks of text into this element, other scripts
## can expand on notices that are being emailed. The normal
## way to add text is to extend the vector by handling the
## :bro:id:`Notice::notice` event and modifying the notice in
## place.
email_body_sections: vector of string &optional;
## Adding a string "token" to this set will cause the notice framework's
## built-in emailing functionality to delay sending the email until
## either the token has been removed or the email has been delayed
## for :bro:id:`Notice::max_email_delay`.
## Adding a string "token" to this set will cause the notice
## framework's built-in emailing functionality to delay sending
## the email until either the token has been removed or the
## email has been delayed for :bro:id:`Notice::max_email_delay`.
email_delay_tokens: set[string] &optional;
## This field is to be provided when a notice is generated for the
## purpose of deduplicating notices. The identifier string should
## be unique for a single instance of the notice. This field should be
## filled out in almost all cases when generating notices to define
## when a notice is conceptually a duplicate of a previous notice.
## This field is to be provided when a notice is generated for
## the purpose of deduplicating notices. The identifier string
## should be unique for a single instance of the notice. This
## field should be filled out in almost all cases when
## generating notices to define when a notice is conceptually
## a duplicate of a previous notice.
##
## For example, an SSL certificate that is going to expire soon should
## always have the same identifier no matter the client IP address
## that connected and resulted in the certificate being exposed. In
## this case, the resp_h, resp_p, and hash of the certificate would be
## used to create this value. The hash of the cert is included
## because servers can return multiple certificates on the same port.
## For example, an SSL certificate that is going to expire soon
## should always have the same identifier no matter the client
## IP address that connected and resulted in the certificate
## being exposed. In this case, the resp_h, resp_p, and hash
## of the certificate would be used to create this value. The
## hash of the cert is included because servers can return
## multiple certificates on the same port.
##
## Another example might be a host downloading a file which triggered
## a notice because the MD5 sum of the file it downloaded was known
## by some set of intelligence. In that case, the orig_h (client)
## and MD5 sum would be used in this field to dedup because if the
## same file is downloaded over and over again you really only want to
## know about it a single time. This makes it possible to send those
## notices to email without worrying so much about sending thousands
## Another example might be a host downloading a file which
## triggered a notice because the MD5 sum of the file it
## downloaded was known by some set of intelligence. In that
## case, the orig_h (client) and MD5 sum would be used in this
## field to dedup because if the same file is downloaded over
## and over again you really only want to know about it a
## single time. This makes it possible to send those notices
## to email without worrying so much about sending thousands
## of emails.
identifier: string &optional;
@ -173,17 +182,26 @@ export {
global policy: hook(n: Notice::Info);
## Local system sendmail program.
##
## Note that this is overridden by the BroControl SendMail option.
const sendmail = "/usr/sbin/sendmail" &redef;
## Email address to send notices with the :bro:enum:`Notice::ACTION_EMAIL`
## action or to send bulk alarm logs on rotation with
## :bro:enum:`Notice::ACTION_ALARM`.
## Email address to send notices with the
## :bro:enum:`Notice::ACTION_EMAIL` action or to send bulk alarm logs
## on rotation with :bro:enum:`Notice::ACTION_ALARM`.
##
## Note that this is overridden by the BroControl MailTo option.
const mail_dest = "" &redef;
## Address that emails will be from.
##
## Note that this is overridden by the BroControl MailFrom option.
const mail_from = "Big Brother <bro@localhost>" &redef;
## Reply-to address used in outbound email.
const reply_to = "" &redef;
## Text string prefixed to the subject of all emails sent out.
##
## Note that this is overridden by the BroControl MailSubjectPrefix
## option.
const mail_subject_prefix = "[Bro]" &redef;
## The maximum amount of time a plugin can delay email from being sent.
const max_email_delay = 15secs &redef;
@ -198,9 +216,9 @@ export {
global log_mailing_postprocessor: function(info: Log::RotationInfo): bool;
## This is the event that is called as the entry point to the
## notice framework by the global :bro:id:`NOTICE` function. By the time
## this event is generated, default values have already been filled out in
## the :bro:type:`Notice::Info` record and the notice
## notice framework by the global :bro:id:`NOTICE` function. By the
## time this event is generated, default values have already been
## filled out in the :bro:type:`Notice::Info` record and the notice
## policy has also been applied.
##
## n: The record containing notice data.
@ -217,7 +235,8 @@ export {
## n: The record containing the notice in question.
global is_being_suppressed: function(n: Notice::Info): bool;
## This event is generated on each occurence of an event being suppressed.
## This event is generated on each occurrence of an event being
## suppressed.
##
## n: The record containing notice data regarding the notice type
## being suppressed.
@ -237,18 +256,19 @@ export {
##
## dest: The intended recipient of the notice email.
##
## extend: Whether to extend the email using the ``email_body_sections``
## field of *n*.
## extend: Whether to extend the email using the
## ``email_body_sections`` field of *n*.
global email_notice_to: function(n: Info, dest: string, extend: bool);
## Constructs mail headers to which an email body can be appended for
## sending with sendmail.
##
## subject_desc: a subject string to use for the mail
## subject_desc: a subject string to use for the mail.
##
## dest: recipient string to use for the mail
## dest: recipient string to use for the mail.
##
## Returns: a string of mail headers to which an email body can be appended
## Returns: a string of mail headers to which an email body can be
## appended.
global email_headers: function(subject_desc: string, dest: string): string;
## This event can be handled to access the :bro:type:`Notice::Info`
@ -257,8 +277,8 @@ export {
## rec: The record containing notice data before it is logged.
global log_notice: event(rec: Info);
## This is an internal wrapper for the global :bro:id:`NOTICE` function;
## disregard.
## This is an internal wrapper for the global :bro:id:`NOTICE`
## function; disregard.
##
## n: The record of notice data.
global internal_NOTICE: function(n: Notice::Info);

View file

@ -3,7 +3,7 @@
module GLOBAL;
## This is the entry point in the global namespace for notice framework.
## This is the entry point in the global namespace for the notice framework.
function NOTICE(n: Notice::Info)
{
# Suppress this notice if necessary.

View file

@ -26,8 +26,8 @@ export {
type Info: record {
## The time when the weird occurred.
ts: time &log;
## If a connection is associated with this weird, this will be the
## connection's unique ID.
## If a connection is associated with this weird, this will be
## the connection's unique ID.
uid: string &log &optional;
## conn_id for the optional connection.
id: conn_id &log &optional;
@ -37,16 +37,16 @@ export {
addl: string &log &optional;
## Indicate if this weird was also turned into a notice.
notice: bool &log &default=F;
## The peer that originated this weird. This is helpful in cluster
## deployments if a particular cluster node is having trouble to help
## identify which node is having trouble.
## The peer that originated this weird. This is helpful in
## cluster deployments if a particular cluster node is having
## trouble to help identify which node is having trouble.
peer: string &log &optional;
};
## Types of actions that may be taken when handling weird activity events.
type Action: enum {
## A dummy action indicating the user does not care what internal
## decision is made regarding a given type of weird.
## A dummy action indicating the user does not care what
## internal decision is made regarding a given type of weird.
ACTION_UNSPECIFIED,
## No action is to be taken.
ACTION_IGNORE,
@ -252,16 +252,16 @@ export {
## a unique weird every ``create_expire`` interval.
global weird_ignore: set[string, string] &create_expire=10min &redef;
## A state set which tracks unique weirds solely by the name to reduce
## duplicate logging. This is not synchronized deliberately because it
## could cause overload during storms
## A state set which tracks unique weirds solely by name to reduce
## duplicate logging. This is deliberately not synchronized because it
## could cause overload during storms.
global did_log: set[string, string] &create_expire=1day &redef;
## A state set which tracks unique weirds solely by the name to reduce
## A state set which tracks unique weirds solely by name to reduce
## duplicate notices from being raised.
global did_notice: set[string, string] &create_expire=1day &redef;
## Handlers of this event are invoked one per write to the weird
## Handlers of this event are invoked once per write to the weird
## logging stream before the data is actually written.
##
## rec: The weird columns about to be logged to the weird stream.

View file

@ -1,3 +1,8 @@
@load ./utils
@load ./main
@load ./netstats
@load base/frameworks/cluster
@if ( Cluster::is_enabled() )
@load ./cluster
@endif

View file

@ -0,0 +1,14 @@
module PacketFilter;
event remote_connection_handshake_done(p: event_peer) &priority=3
{
if ( Cluster::local_node_type() == Cluster::WORKER &&
p$descr in Cluster::nodes &&
Cluster::nodes[p$descr]$node_type == Cluster::MANAGER )
{
# This ensures that a packet filter is installed and logged
# after the manager connects to us.
install();
}
}

View file

@ -294,6 +294,7 @@ function install(): bool
# Do an audit log for the packet filter.
local info: Info;
info$ts = network_time();
info$node = peer_description;
# If network_time() is 0.0 we're at init time so use the wall clock.
if ( info$ts == 0.0 )
{

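The fallback in this hunk — substitute the wall clock whenever `network_time()` still reads 0.0 at init time — can be sketched outside Bro roughly as follows (Python for illustration; `audit_timestamp` is a hypothetical name, not part of the script):

```python
import time

def audit_timestamp(network_time):
    # Prefer network time for the audit log entry, but fall back to the
    # wall clock when no packets have been processed yet (network_time()
    # reads 0.0 at init time).
    return network_time if network_time != 0.0 else time.time()
```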
View file

@ -0,0 +1,2 @@
This framework is intended to create an output and filtering path for
internally generated messages/warnings/errors.

View file

@ -0,0 +1,4 @@
The signature framework provides for doing low-level pattern matching. While
signatures are not Bro's preferred detection tool, they sometimes come in
handy and are closer to what many people are familiar with from using
other NIDS.

View file

@ -11,21 +11,23 @@ export {
redef enum Notice::Type += {
## Generic notice type for notice-worthy signature matches.
Sensitive_Signature,
## Host has triggered many signatures on the same host. The number of
## signatures is defined by the
## Host has triggered many signatures on the same host. The
## number of signatures is defined by the
## :bro:id:`Signatures::vert_scan_thresholds` variable.
Multiple_Signatures,
## Host has triggered the same signature on multiple hosts as defined
## by the :bro:id:`Signatures::horiz_scan_thresholds` variable.
## Host has triggered the same signature on multiple hosts as
## defined by the :bro:id:`Signatures::horiz_scan_thresholds`
## variable.
Multiple_Sig_Responders,
## The same signature has triggered multiple times for a host. The
## number of times the signature has been triggered is defined by the
## :bro:id:`Signatures::count_thresholds` variable. To generate this
## notice, the :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must
## bet set for the signature.
## The same signature has triggered multiple times for a host.
## The number of times the signature has been triggered is
## defined by the :bro:id:`Signatures::count_thresholds`
## variable. To generate this notice, the
## :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must be
## set for the signature.
Count_Signature,
## Summarize the number of times a host triggered a signature. The
## interval between summaries is defined by the
## Summarize the number of times a host triggered a signature.
## The interval between summaries is defined by the
## :bro:id:`Signatures::summary_interval` variable.
Signature_Summary,
};
@ -37,11 +39,12 @@ export {
## All of them write the signature record to the logging stream unless
## declared otherwise.
type Action: enum {
## Ignore this signature completely (even for scan detection). Don't
## write to the signatures logging stream.
## Ignore this signature completely (even for scan detection).
## Don't write to the signatures logging stream.
SIG_IGNORE,
## Process through the various aggregate techniques, but don't report
## individually and don't write to the signatures logging stream.
## Process through the various aggregate techniques, but don't
## report individually and don't write to the signatures logging
## stream.
SIG_QUIET,
## Generate a notice.
SIG_LOG,
@ -64,20 +67,21 @@ export {
## The record type which contains the column fields of the signature log.
type Info: record {
## The network time at which a signature matching type of event to
## be logged has occurred.
## The network time at which a signature matching type of event
## to be logged has occurred.
ts: time &log;
## The host which triggered the signature match event.
src_addr: addr &log &optional;
## The host port on which the signature-matching activity occurred.
## The host port on which the signature-matching activity
## occurred.
src_port: port &log &optional;
## The destination host which was sent the payload that triggered the
## signature match.
## The destination host which was sent the payload that
## triggered the signature match.
dst_addr: addr &log &optional;
## The destination host port which was sent the payload that triggered
## the signature match.
## The destination host port which was sent the payload that
## triggered the signature match.
dst_port: port &log &optional;
## Notice associated with signature event
## Notice associated with signature event.
note: Notice::Type &log;
## The name of the signature that matched.
sig_id: string &log &optional;
@ -103,8 +107,8 @@ export {
## different responders has reached one of the thresholds.
const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;
## Generate a notice if, for a pair [orig, resp], the number of different
## signature matches has reached one of the thresholds.
## Generate a notice if, for a pair [orig, resp], the number of
## different signature matches has reached one of the thresholds.
const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;
## Generate a notice if a :bro:enum:`Signatures::SIG_COUNT_PER_RESP`
@ -112,7 +116,7 @@ export {
const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef;
## The interval between when :bro:enum:`Signatures::Signature_Summary`
## notice are generated.
## notices are generated.
const summary_interval = 1 day &redef;
## This event can be handled to access/alter data about to be logged

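Note that the various `*_thresholds` constants above are sets of exact trigger counts rather than minimum values; a notice fires when the running count reaches one of the listed values. The membership test amounts to roughly this (illustrative Python sketch; `crossed_threshold` is a made-up helper, not part of the framework):

```python
def crossed_threshold(count, thresholds=frozenset({5, 10, 50, 100, 500, 1000})):
    # The *_scan_thresholds constants are sets of exact trigger points,
    # not minima: a notice is generated each time the running count
    # reaches one of the listed values.
    return count in thresholds
```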
View file

@ -28,8 +28,8 @@ export {
## values for a sumstat.
global cluster_ss_request: event(uid: string, ss_name: string, cleanup: bool);
## Event sent by nodes that are collecting sumstats after receiving a
## request for the sumstat from the manager.
# Event sent by nodes that are collecting sumstats after receiving a
# request for the sumstat from the manager.
#global cluster_ss_response: event(uid: string, ss_name: string, data: ResultTable, done: bool, cleanup: bool);
## This event is sent by the manager in a cluster to initiate the

View file

@ -0,0 +1 @@
Plugins for the summary statistics framework.

File diff suppressed because it is too large

View file

@ -1,8 +1,8 @@
##! This script loads everything in the base/ script directory. If you want
##! to run Bro without all of these scripts loaded by default, you can use
##! the -b (--bare-mode) command line argument. You can also copy the "@load"
##! lines from this script to your own script to load only the scripts that
##! you actually want.
##! the ``-b`` (``--bare-mode``) command line argument. You can also copy the
##! "@load" lines from this script to your own script to load only the scripts
##! that you actually want.
@load base/utils/site
@load base/utils/active-http

View file

@ -16,6 +16,7 @@ export {
# Keep track of how many bad checksums have been seen.
global bad_ip_checksums = 0;
global bad_tcp_checksums = 0;
global bad_udp_checksums = 0;
# Track to see if this script is done so that messages aren't created multiple times.
global done = F;
@ -28,7 +29,11 @@ event ChecksumOffloading::check()
local pkts_recvd = net_stats()$pkts_recvd;
local bad_ip_checksum_pct = (pkts_recvd != 0) ? (bad_ip_checksums*1.0 / pkts_recvd*1.0) : 0;
local bad_tcp_checksum_pct = (pkts_recvd != 0) ? (bad_tcp_checksums*1.0 / pkts_recvd*1.0) : 0;
if ( bad_ip_checksum_pct > 0.05 || bad_tcp_checksum_pct > 0.05 )
local bad_udp_checksum_pct = (pkts_recvd != 0) ? (bad_udp_checksums*1.0 / pkts_recvd*1.0) : 0;
if ( bad_ip_checksum_pct > 0.05 ||
bad_tcp_checksum_pct > 0.05 ||
bad_udp_checksum_pct > 0.05 )
{
local packet_src = reading_traces() ? "trace file likely has" : "interface is likely receiving";
local bad_checksum_msg = (bad_ip_checksum_pct > 0.0) ? "IP" : "";
@ -38,6 +43,13 @@ event ChecksumOffloading::check()
bad_checksum_msg += " and ";
bad_checksum_msg += "TCP";
}
if ( bad_udp_checksum_pct > 0.0 )
{
if ( |bad_checksum_msg| > 0 )
bad_checksum_msg += " and ";
bad_checksum_msg += "UDP";
}
local message = fmt("Your %s invalid %s checksums, most likely from NIC checksum offloading.", packet_src, bad_checksum_msg);
Reporter::warning(message);
done = T;
@ -65,6 +77,8 @@ event conn_weird(name: string, c: connection, addl: string)
{
if ( name == "bad_TCP_checksum" )
++bad_tcp_checksums;
else if ( name == "bad_UDP_checksum" )
++bad_udp_checksums;
}
event bro_done()

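The message assembly in this hunk — computing per-protocol bad-checksum rates against a 5% threshold and joining the affected protocol names with " and " — reduces to roughly the following (Python sketch of the script's logic; function and parameter names are hypothetical):

```python
def checksum_warning(bad_ip, bad_tcp, bad_udp, pkts_recvd, threshold=0.05):
    # Mimic ChecksumOffloading::check(): return a warning fragment when any
    # protocol's bad-checksum rate exceeds the threshold, else None.
    if pkts_recvd == 0:
        return None
    rates = {"IP": bad_ip / pkts_recvd,
             "TCP": bad_tcp / pkts_recvd,
             "UDP": bad_udp / pkts_recvd}
    if all(r <= threshold for r in rates.values()):
        return None
    # As in the script, name every protocol that saw any bad checksums
    # at all, not only the one that crossed the threshold.
    names = " and ".join(p for p, r in rates.items() if r > 0.0)
    return ("invalid %s checksums, most likely from NIC checksum offloading."
            % names)
```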
View file

@ -32,9 +32,11 @@ export {
## mind that you will probably need to set the *method* field
## to "POST" or "PUT".
client_data: string &optional;
## Arbitrary headers to pass to the server. Some headers
## will be included by libCurl.
# Arbitrary headers to pass to the server. Some headers
# will be included by libCurl.
#custom_headers: table[string] of string &optional;
## Timeout for the request.
max_time: interval &default=default_max_time;
## Additional curl command line arguments. Be very careful

View file

@ -28,7 +28,7 @@ event Dir::monitor_ev(dir: string, last_files: set[string],
callback: function(fname: string),
poll_interval: interval)
{
when ( local result = Exec::run([$cmd=fmt("ls -i -1 \"%s/\"", str_shell_escape(dir))]) )
when ( local result = Exec::run([$cmd=fmt("ls -1 \"%s/\"", str_shell_escape(dir))]) )
{
if ( result$exit_code != 0 )
{
@ -44,10 +44,9 @@ event Dir::monitor_ev(dir: string, last_files: set[string],
for ( i in files )
{
local parts = split1(files[i], / /);
if ( parts[1] !in last_files )
callback(build_path_compressed(dir, parts[2]));
add current_files[parts[1]];
if ( files[i] !in last_files )
callback(build_path_compressed(dir, files[i]));
add current_files[files[i]];
}
schedule poll_interval

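The switch back from inode to plain file-name tracking leaves each poll as a set difference against the previously seen `ls -1` listing. A minimal sketch of one polling pass (Python for illustration; `poll_dir` is a hypothetical stand-in for `Dir::monitor_ev`):

```python
def poll_dir(listing, last_files, callback):
    # One polling pass: invoke callback for names not seen on the
    # previous pass and return the new "last seen" set. `listing` is
    # the list of output lines of `ls -1` on the monitored directory.
    current_files = set()
    for fname in listing:
        if fname not in last_files:
            callback(fname)  # newly appeared file
        current_files.add(fname)
    return current_files
```

Because tracking is by name, a new file that happens to reuse a formerly seen inode is still reported — the case the commit message says inode tracking missed.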
View file

@ -17,7 +17,8 @@ export {
[::1]/128,
} &redef;
## Networks that are considered "local".
## Networks that are considered "local". Note that BroControl sets
## this automatically.
const local_nets: set[subnet] &redef;
## This is used for retrieving the subnet when using multiple entries in

View file

@ -3,6 +3,7 @@
##! and then shutdown.
##!
##! It's intended to be used from the command line like this::
##!
##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>]
@load base/frameworks/control

View file

@ -1,6 +1,6 @@
##! This script enables logging of packet segment data when a protocol
##! parsing violation is encountered. The amount of
##! data from the packet logged is set by the packet_segment_size variable.
##! parsing violation is encountered. The amount of data from the
##! packet logged is set by the :bro:see:`DPD::packet_segment_size` variable.
##! A caveat to logging packet data is that in some cases, the packet may
##! not be the packet that actually caused the protocol violation.
@ -10,8 +10,8 @@ module DPD;
export {
redef record Info += {
## A chunk of the payload the most likely resulted in the protocol
## violation.
## A chunk of the payload that most likely resulted in the
## protocol violation.
packet_segment: string &optional &log;
};

View file

@ -23,10 +23,10 @@ export {
/application\/jar/ |
/video\/mp4/ &redef;
## The malware hash registry runs each malware sample through several A/V engines.
## Team Cymru returns a percentage to indicate how many A/V engines flagged the
## sample as malicious. This threshold allows you to require a minimum detection
## rate.
## The malware hash registry runs each malware sample through several
## A/V engines. Team Cymru returns a percentage to indicate how
## many A/V engines flagged the sample as malicious. This threshold
## allows you to require a minimum detection rate.
const notice_threshold = 10 &redef;
}

View file

@ -1,4 +1,4 @@
# Perform MD5 and SHA1 hashing on all files.
##! Perform MD5 and SHA1 hashing on all files.
event file_new(f: fa_file)
{

View file

@ -18,7 +18,7 @@ export {
do_notice: bool &default=F;
## Restrictions on when notices are created to only create
## them if the do_notice field is T and the notice was
## them if the *do_notice* field is T and the notice was
## seen in the indicated location.
if_in: Intel::Where &optional;
};

View file

@ -0,0 +1 @@
Scripts that send data to the intelligence framework.

View file

@ -8,23 +8,23 @@ export {
const max_bpf_shunts = 100 &redef;
## Call this function to use BPF to shunt a connection (to prevent the
## data packets from reaching Bro). For TCP connections, control packets
## are still allowed through so that Bro can continue logging the connection
## and it can stop shunting once the connection ends.
## data packets from reaching Bro). For TCP connections, control
## packets are still allowed through so that Bro can continue logging
## the connection and it can stop shunting once the connection ends.
global shunt_conn: function(id: conn_id): bool;
## This function will use a BPF expresssion to shunt traffic between
## This function will use a BPF expression to shunt traffic between
## the two hosts given in the `conn_id` so that the traffic is never
## exposed to Bro's traffic processing.
global shunt_host_pair: function(id: conn_id): bool;
## Remove shunting for a host pair given as a `conn_id`. The filter
## is not immediately removed. It waits for the occassional filter
## is not immediately removed. It waits for the occasional filter
## update done by the `PacketFilter` framework.
global unshunt_host_pair: function(id: conn_id): bool;
## Performs the same function as the `unshunt_host_pair` function, but
## it forces an immediate filter update.
## Performs the same function as the :bro:id:`PacketFilter::unshunt_host_pair`
## function, but it forces an immediate filter update.
global force_unshunt_host_pair: function(id: conn_id): bool;
## Retrieve the currently shunted connections.
@ -34,12 +34,13 @@ export {
global current_shunted_host_pairs: function(): set[conn_id];
redef enum Notice::Type += {
## Indicative that :bro:id:`PacketFilter::max_bpf_shunts` connections
## are already being shunted with BPF filters and no more are allowed.
## Indicative that :bro:id:`PacketFilter::max_bpf_shunts`
## connections are already being shunted with BPF filters and
## no more are allowed.
No_More_Conn_Shunts_Available,
## Limitations in BPF make shunting some connections with BPF impossible.
## This notice encompasses those various cases.
## Limitations in BPF make shunting some connections with BPF
## impossible. This notice encompasses those various cases.
Cannot_BPF_Shunt_Conn,
};
}

View file

@ -1,4 +1,4 @@
##! Provides the possibly to define software names that are interesting to
##! Provides the possibility to define software names that are interesting to
##! watch for changes. A notice is generated if software versions change on a
##! host.
@ -9,15 +9,15 @@ module Software;
export {
redef enum Notice::Type += {
## For certain software, a version changing may matter. In that case,
## this notice will be generated. Software that matters if the version
## changes can be configured with the
## For certain software, a version changing may matter. In that
## case, this notice will be generated. Software that matters
## if the version changes can be configured with the
## :bro:id:`Software::interesting_version_changes` variable.
Software_Version_Change,
};
## Some software is more interesting when the version changes and this is
## a set of all software that should raise a notice when a different
## Some software is more interesting when the version changes and this
## is a set of all software that should raise a notice when a different
## version is seen on a host.
const interesting_version_changes: set[string] = { } &redef;
}

View file

@ -1,5 +1,5 @@
##! Provides a variable to define vulnerable versions of software and if a
##! a version of that software as old or older than the defined version a
##! Provides a variable to define vulnerable versions of software and if
##! a version of that software is as old or older than the defined version a
##! notice will be generated.
@load base/frameworks/control
@ -21,7 +21,7 @@ export {
min: Software::Version &optional;
## The maximum vulnerable version. This field is deliberately
## not optional because a maximum vulnerable version must
## always be defined. This assumption may become incorrent
## always be defined. This assumption may become incorrect
## if all future versions of some software are to be considered
## vulnerable. :)
max: Software::Version;

View file

@ -0,0 +1 @@
Integration with Barnyard2.

View file

@ -15,8 +15,8 @@ export {
alert: AlertData &log;
};
## This can convert a Barnyard :bro:type:`Barnyard2::PacketID` value to a
## :bro:type:`conn_id` value in the case that you might need to index
## This can convert a Barnyard :bro:type:`Barnyard2::PacketID` value to
## a :bro:type:`conn_id` value in the case that you might need to index
## into an existing data structure elsewhere within Bro.
global pid2cid: function(p: PacketID): conn_id;
}

View file

@ -11,7 +11,7 @@ export {
generator_id: count; ##< Which generator generated the alert?
signature_revision: count; ##< Sig revision for this id.
classification_id: count; ##< Event classification.
classification: string; ##< Descriptive classification string,
classification: string; ##< Descriptive classification string.
priority_id: count; ##< Event priority.
event_id: count; ##< Event ID.
} &log;

View file

@ -3,8 +3,8 @@
module Intel;
## These are some fields to add extended compatibility between Bro and the Collective
## Intelligence Framework
## These are some fields to add extended compatibility between Bro and the
## Collective Intelligence Framework.
redef record Intel::MetaData += {
## Maps to the Impact field in the Collective Intelligence Framework.
cif_impact: string &optional;

View file

@ -0,0 +1 @@
AppStats collects information about web applications in use on the network.

View file

@ -1,5 +1,5 @@
#! AppStats collects information about web applications in use
#! on the network.
##! AppStats collects information about web applications in use
##! on the network.
@load base/protocols/http
@load base/protocols/ssl

View file

@ -0,0 +1 @@
Plugins for AppStats.

View file

@ -4,7 +4,7 @@
##! the packet capture or it could even be beyond the host. If you are
##! capturing from a switch with a SPAN port, it's very possible that
##! the switch itself could be overloaded and dropping packets.
##! Reported loss is computed in terms of number of "gap events" (ACKs
##! Reported loss is computed in terms of the number of "gap events" (ACKs
##! for a sequence number that's above a gap).
@load base/frameworks/notice
@ -26,7 +26,7 @@ export {
## The time delay between this measurement and the last.
ts_delta: interval &log;
## In the event that there are multiple Bro instances logging
## to the same host, this distinguishes each peer with it's
## to the same host, this distinguishes each peer with its
## individual name.
peer: string &log;
## Number of missed ACKs from the previous measurement interval.
@ -34,7 +34,7 @@ export {
## Total number of ACKs seen in the previous measurement interval.
acks: count &log;
## Percentage of ACKs seen where the data being ACKed wasn't seen.
percent_lost: string &log;
percent_lost: double &log;
};
## The interval at which capture loss reports are created.
@ -43,7 +43,7 @@ export {
## The percentage of missed data that is considered "too much"
## when the :bro:enum:`CaptureLoss::Too_Much_Loss` notice should be
## generated. The value is expressed as a double between 0 and 1 with 1
## being 100%
## being 100%.
const too_much_loss: double = 0.1 &redef;
}
@ -64,7 +64,7 @@ event CaptureLoss::take_measurement(last_ts: time, last_acks: count, last_gaps:
$ts_delta=now-last_ts,
$peer=peer_description,
$acks=acks, $gaps=gaps,
$percent_lost=fmt("%.3f%%", pct_lost)];
$percent_lost=pct_lost];
if ( pct_lost >= too_much_loss*100 )
NOTICE([$note=Too_Much_Loss,

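With `percent_lost` now logged as a double, the measurement is a plain percentage of gap events over ACKs, compared against `too_much_loss` (a fraction, so it is scaled by 100 for the comparison). Roughly, in Python (illustrative only):

```python
def capture_loss(gaps, acks, too_much_loss=0.1):
    # Return (percent_lost, too_lossy): the percentage of ACKs whose
    # ACKed data was never seen, and whether it crosses the notice
    # threshold. too_much_loss is a fraction (0.1 == 10%), as in the
    # script.
    pct_lost = 100.0 * gaps / acks if acks > 0 else 0.0
    return pct_lost, pct_lost >= too_much_loss * 100
```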
View file

@ -0,0 +1 @@
Detect hosts that are running traceroute.

View file

@ -1,7 +1,8 @@
##! This script detects a large number of ICMP Time Exceeded messages heading toward
##! hosts that have sent low TTL packets. It generates a notice when the number of
##! ICMP Time Exceeded messages for a source-destination pair exceeds a
##! threshold.
##! This script detects a large number of ICMP Time Exceeded messages heading
##! toward hosts that have sent low TTL packets. It generates a notice when the
##! number of ICMP Time Exceeded messages for a source-destination pair exceeds
##! a threshold.
@load base/frameworks/sumstats
@load base/frameworks/signatures
@load-sigs ./detect-low-ttls.sig
@ -20,15 +21,16 @@ export {
Detected
};
## By default this script requires that any host detected running traceroutes
## first send low TTL packets (TTL < 10) to the traceroute destination host.
## Changing this this setting to `F` will relax the detection a bit by
## solely relying on ICMP time-exceeded messages to detect traceroute.
## By default this script requires that any host detected running
## traceroutes first send low TTL packets (TTL < 10) to the traceroute
## destination host. Changing this setting to F will relax the
## detection a bit by solely relying on ICMP time-exceeded messages to
## detect traceroute.
const require_low_ttl_packets = T &redef;
## Defines the threshold for ICMP Time Exceeded messages for a src-dst pair.
## This threshold only comes into play after a host is found to be
## sending low ttl packets.
## Defines the threshold for ICMP Time Exceeded messages for a src-dst
## pair. This threshold only comes into play after a host is found to
## be sending low TTL packets.
const icmp_time_exceeded_threshold: double = 3 &redef;
## Interval at which to watch for the
@ -40,7 +42,7 @@ export {
type Info: record {
## Timestamp
ts: time &log;
## Address initiaing the traceroute.
## Address initiating the traceroute.
src: addr &log;
## Destination address of the traceroute.
dst: addr &log;

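The detection itself boils down to counting Time Exceeded messages per source-destination pair against `icmp_time_exceeded_threshold`; sketched in Python (hypothetical helper — the script actually layers this on the SumStats framework, with the optional low-TTL precondition):

```python
from collections import Counter

def detect_traceroute(time_exceeded_msgs, threshold=3.0):
    # Count ICMP Time Exceeded messages per (src, dst) pair and return
    # the pairs that meet the detection threshold.
    counts = Counter(time_exceeded_msgs)
    return {pair for pair, n in counts.items() if n >= threshold}
```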
View file

@ -1,7 +1,7 @@
##! This script provides infrastructure for logging devices for which Bro has been
##! able to determine the MAC address, and it logs them once per day (by default).
##! The log that is output provides an easy way to determine a count of the devices
##! in use on a network per day.
##! This script provides infrastructure for logging devices for which Bro has
##! been able to determine the MAC address, and it logs them once per day (by
##! default). The log that is output provides an easy way to determine a count
##! of the devices in use on a network per day.
##!
##! .. note::
##!
@ -15,7 +15,8 @@ export {
## The known-hosts logging stream identifier.
redef enum Log::ID += { DEVICES_LOG };
## The record type which contains the column fields of the known-devices log.
## The record type which contains the column fields of the known-devices
## log.
type DevicesInfo: record {
## The timestamp at which the host was detected.
ts: time &log;
@ -24,10 +25,10 @@ export {
};
## The set of all known MAC addresses. It can accessed from other
## to add, and check for, addresses seen in use.
##
## We maintain each entry for 24 hours by default so that the existence of
## individual addressed is logged each day.
## scripts to add, and check for, addresses seen in use.
##
## We maintain each entry for 24 hours by default so that the existence
## of individual addresses is logged each day.
global known_devices: set[string] &create_expire=1day &synchronized &redef;
## An event that can be handled to access the :bro:type:`Known::DevicesInfo`

View file

@ -29,9 +29,10 @@ export {
#global confirm_filter_installation: event(success: bool);
redef record Cluster::Node += {
## A BPF filter for load balancing traffic sniffed on a single interface
## across a number of processes. In normal uses, this will be assigned
## dynamically by the manager and installed by the workers.
## A BPF filter for load balancing traffic sniffed on a single
## interface across a number of processes. In normal uses, this
## will be assigned dynamically by the manager and installed by
## the workers.
lb_filter: string &optional;
};
}

View file

@ -7,9 +7,9 @@ export {
redef enum Log::ID += { LOG };
type Info: record {
## Name of the script loaded potentially with spaces included before
## the file name to indicate load depth. The convention is two spaces
## per level of depth.
## Name of the script loaded potentially with spaces included
## before the file name to indicate load depth. The convention
## is two spaces per level of depth.
name: string &log;
};
}

View file

@ -8,7 +8,8 @@ redef profiling_file = open_log_file("prof");
## Set the cheap profiling interval.
redef profiling_interval = 15 secs;
## Set the expensive profiling interval.
## Set the expensive profiling interval (multiple of
## :bro:id:`profiling_interval`).
redef expensive_profiling_multiple = 20;
event bro_init()

View file

@ -1,8 +1,8 @@
##! TCP Scan detection
##!
##! ..Authors: Sheharbano Khattak
##! Seth Hall
##! All the authors of the old scan.bro
##! TCP Scan detection.
# ..Authors: Sheharbano Khattak
# Seth Hall
# All the authors of the old scan.bro
@load base/frameworks/notice
@load base/frameworks/sumstats
@ -13,37 +13,38 @@ module Scan;
export {
redef enum Notice::Type += {
## Address scans detect that a host appears to be scanning some number
## of destinations on a single port. This notice is generated when more
## than :bro:id:`Scan::addr_scan_threshold` unique hosts are seen over
## the previous :bro:id:`Scan::addr_scan_interval` time range.
## Address scans detect that a host appears to be scanning some
## number of destinations on a single port. This notice is
## generated when more than :bro:id:`Scan::addr_scan_threshold`
## unique hosts are seen over the previous
## :bro:id:`Scan::addr_scan_interval` time range.
Address_Scan,
## Port scans detect that an attacking host appears to be scanning a
## single victim host on several ports. This notice is generated when
## an attacking host attempts to connect to
## Port scans detect that an attacking host appears to be
## scanning a single victim host on several ports. This notice
## is generated when an attacking host attempts to connect to
## :bro:id:`Scan::port_scan_threshold`
## unique ports on a single host over the previous
## :bro:id:`Scan::port_scan_interval` time range.
Port_Scan,
};
## Failed connection attempts are tracked over this time interval for the address
## scan detection. A higher interval will detect slower scanners, but may also
## yield more false positives.
## Failed connection attempts are tracked over this time interval for
## the address scan detection. A higher interval will detect slower
## scanners, but may also yield more false positives.
const addr_scan_interval = 5min &redef;
## Failed connection attempts are tracked over this time interval for the port scan
## detection. A higher interval will detect slower scanners, but may also yield
## more false positives.
## Failed connection attempts are tracked over this time interval for
## the port scan detection. A higher interval will detect slower
## scanners, but may also yield more false positives.
const port_scan_interval = 5min &redef;
## The threshold of a unique number of hosts a scanning host has to have failed
## connections with on a single port.
## The threshold of the unique number of hosts a scanning host has to
## have failed connections with on a single port.
const addr_scan_threshold = 25.0 &redef;
## The threshold of a number of unique ports a scanning host has to have failed
## connections with on a single victim host.
## The threshold of the number of unique ports a scanning host has to
## have failed connections with on a single victim host.
const port_scan_threshold = 15.0 &redef;
global Scan::addr_scan_policy: hook(scanner: addr, victim: addr, scanned_port: port);
@ -148,7 +149,7 @@ function is_reverse_failed_conn(c: connection): bool
## Generated for an unsuccessful connection attempt. This
## event is raised when an originator unsuccessfully attempted
## to establish a connection. “Unsuccessful” is defined as at least
## to establish a connection. "Unsuccessful" is defined as at least
## tcp_attempt_delay seconds having elapsed since the originator first sent a
## connection establishment packet to the destination without seeing a reply.
event connection_attempt(c: connection)
@ -160,9 +161,9 @@ event connection_attempt(c: connection)
add_sumstats(c$id, is_reverse_scan);
}
## Generated for a rejected TCP connection. This event is raised when an originator
## attempted to setup a TCP connection but the responder replied with a RST packet
## denying it.
## Generated for a rejected TCP connection. This event is raised when an
## originator attempted to set up a TCP connection but the responder replied with
## a RST packet denying it.
event connection_rejected(c: connection)
{
local is_reverse_scan = F;
@ -173,7 +174,8 @@ event connection_rejected(c: connection)
}
## Generated when an endpoint aborted a TCP connection. The event is raised when
## one endpoint of an *established* TCP connection aborted by sending a RST packet.
## one endpoint of an *established* TCP connection aborts it by sending a RST
## packet.
event connection_reset(c: connection)
{
if ( is_failed_conn(c) )

View file

@ -1,4 +1,5 @@
##! Log memory/packet/lag statistics. Differs from profiling.bro in that this
##! Log memory/packet/lag statistics. Differs from
##! :doc:`/scripts/policy/misc/profiling` in that this
##! is lighter-weight (much less info, and less load to generate).
@load base/frameworks/notice
@ -20,21 +21,23 @@ export {
mem: count &log;
## Number of packets processed since the last stats interval.
pkts_proc: count &log;
## Number of events that been processed since the last stats interval.
## Number of events processed since the last stats interval.
events_proc: count &log;
## Number of events that have been queued since the last stats interval.
## Number of events that have been queued since the last stats
## interval.
events_queued: count &log;
## Lag between the wall clock and packet timestamps if reading live traffic.
## Lag between the wall clock and packet timestamps if reading
## live traffic.
lag: interval &log &optional;
## Number of packets received since the last stats interval if reading
## live traffic.
## Number of packets received since the last stats interval if
## reading live traffic.
pkts_recv: count &log &optional;
## Number of packets dropped since the last stats interval if reading
## live traffic.
## Number of packets dropped since the last stats interval if
## reading live traffic.
pkts_dropped: count &log &optional;
## Number of packets seen on the link since the last stats interval
## if reading live traffic.
## Number of packets seen on the link since the last stats
## interval if reading live traffic.
pkts_link: count &log &optional;
};

View file

@ -1,4 +1,4 @@
##! Deletes the -w tracefile at regular intervals and starts a new file
##! Deletes the ``-w`` tracefile at regular intervals and starts a new file
##! from scratch.
module TrimTraceFile;
@ -8,9 +8,9 @@ export {
const trim_interval = 10 mins &redef;
## This event can be generated externally to this script if on-demand
## tracefile rotation is required with the caveat that the script doesn't
## currently attempt to get back on schedule automatically and the next
## trim will likely won't happen on the
## tracefile rotation is required with the caveat that the script
## doesn't currently attempt to get back on schedule automatically and
## the next trim likely won't happen on the
## :bro:id:`TrimTraceFile::trim_interval`.
global go: event(first_trim: bool);
}

View file

@ -15,8 +15,8 @@ export {
type HostsInfo: record {
## The timestamp at which the host was detected.
ts: time &log;
## The address that was detected originating or responding to a TCP
## connection.
## The address that was detected originating or responding to a
## TCP connection.
host: addr &log;
};

View file

@ -7,7 +7,7 @@ module Known;
export {
redef record DevicesInfo += {
## The value of the DHCP host name option, if seen
## The value of the DHCP host name option, if seen.
dhcp_host_name: string &log &optional;
};
}

View file

@ -10,9 +10,9 @@ module DNS;
export {
redef enum Notice::Type += {
## Raised when a non-local name is found to be pointing at a local host.
## :bro:id:`Site::local_zones` variable **must** be set appropriately
## for this detection.
## Raised when a non-local name is found to be pointing at a
## local host. The :bro:id:`Site::local_zones` variable
## **must** be set appropriately for this detection.
External_Name,
};
}

View file

@ -1,5 +1,5 @@
##! FTP brute-forcing detector, triggering when too many rejected usernames or
##! failed passwords have occured from a single address.
##! FTP brute-forcing detector, triggering when too many rejected usernames or
##! failed passwords have occurred from a single address.
@load base/protocols/ftp
@load base/frameworks/sumstats
@ -10,8 +10,8 @@ module FTP;
export {
redef enum Notice::Type += {
## Indicates a host bruteforcing FTP logins by watching for too many
## rejected usernames or failed passwords.
## Indicates a host bruteforcing FTP logins by watching for too
## many rejected usernames or failed passwords.
Bruteforcing
};

View file

@ -8,10 +8,12 @@ module HTTP;
export {
redef enum Notice::Type += {
## Indicates that a host performing SQL injection attacks was detected.
## Indicates that a host performing SQL injection attacks was
## detected.
SQL_Injection_Attacker,
## Indicates that a host was seen to have SQL injection attacks against
## it. This is tracked by IP address as opposed to hostname.
## Indicates that a host was seen to have SQL injection attacks
## against it. This is tracked by IP address as opposed to
## hostname.
SQL_Injection_Victim,
};
@ -19,9 +21,11 @@ export {
## Indicator of a URI based SQL injection attack.
URI_SQLI,
## Indicator of client body based SQL injection attack. This is
## typically the body content of a POST request. Not implemented yet.
## typically the body content of a POST request. Not implemented
## yet.
POST_SQLI,
## Indicator of a cookie based SQL injection attack. Not implemented yet.
## Indicator of a cookie based SQL injection attack. Not
## implemented yet.
COOKIE_SQLI,
};

View file

@ -8,12 +8,12 @@ module HTTP;
export {
redef record Info += {
## The vector of HTTP header names sent by the client. No header
## values are included here, just the header names.
## The vector of HTTP header names sent by the client. No
## header values are included here, just the header names.
client_header_names: vector of string &log &optional;
## The vector of HTTP header names sent by the server. No header
## values are included here, just the header names.
## The vector of HTTP header names sent by the server. No
## header values are included here, just the header names.
server_header_names: vector of string &log &optional;
};

View file

@ -1,4 +1,4 @@
##! Extracts and logs variables names from cookies sent by clients.
##! Extracts and logs variable names from cookies sent by clients.
@load base/protocols/http/main
@load base/protocols/http/utils

View file

@ -1,4 +1,4 @@
##! Extracts and log variables from the requested URI in the default HTTP
##! Extracts and logs variables from the requested URI in the default HTTP
##! logging stream.
@load base/protocols/http

View file

@ -15,9 +15,9 @@ export {
const track_memmap: Host = ALL_HOSTS &redef;
type MemmapInfo: record {
## Timestamp for the detected register change
## Timestamp for the detected register change.
ts: time &log;
## Unique ID for the connection
## Unique ID for the connection.
uid: string &log;
## Connection ID.
id: conn_id &log;
@ -27,7 +27,8 @@ export {
old_val: count &log;
## The new value stored in the register.
new_val: count &log;
## The time delta between when the 'old_val' and 'new_val' were seen.
## The time delta between when the *old_val* and *new_val* were
## seen.
delta: interval &log;
};
@ -42,8 +43,8 @@ export {
## The memory map of slaves is tracked with this variable.
global device_registers: table[addr] of Registers;
## This event is generated every time a register is seen to be different than
## it was previously seen to be.
## This event is generated every time a register is seen to be different
## than it was previously seen to be.
global changed_register: event(c: connection, register: count, old_val: count, new_val: count, delta: interval);
}

View file

@ -8,8 +8,8 @@ export {
Suspicious_Origination
};
## Places where it's suspicious for mail to originate from represented as
## all-capital, two character country codes (e.x. US). It requires
## Places where it's suspicious for mail to originate from, represented
## as all-capital, two-character country codes (e.g., US). It requires
## libGeoIP support built in.
const suspicious_origination_countries: set[string] = {} &redef;
const suspicious_origination_networks: set[subnet] = {} &redef;

View file

@ -5,7 +5,7 @@
##! TODO:
##!
##! * Find some heuristic to determine if email was sent through
##! a MS Exhange webmail interface as opposed to a desktop client.
##! a MS Exchange webmail interface as opposed to a desktop client.
@load base/frameworks/software/main
@load base/protocols/smtp/main
@ -20,19 +20,19 @@ export {
};
redef record Info += {
## Boolean indicator of if the message was sent through a webmail
## interface.
## Boolean indicator of if the message was sent through a
## webmail interface.
is_webmail: bool &log &default=F;
};
## Assuming that local mail servers are more trustworthy with the headers
## they insert into messages envelopes, this default makes Bro not attempt
## to detect software in inbound message bodies. If mail coming in from
## external addresses gives incorrect data in the Received headers, it
## could populate your SOFTWARE logging stream with incorrect data.
## If you would like to detect mail clients for incoming messages
## (network traffic originating from a non-local address), set this
## variable to EXTERNAL_HOSTS or ALL_HOSTS.
## Assuming that local mail servers are more trustworthy with the
## headers they insert into message envelopes, this default makes Bro
## not attempt to detect software in inbound message bodies. If mail
## coming in from external addresses gives incorrect data in
## the Received headers, it could populate your SOFTWARE logging stream
## with incorrect data. If you would like to detect mail clients for
## incoming messages (network traffic originating from a non-local
## address), set this variable to EXTERNAL_HOSTS or ALL_HOSTS.
const detect_clients_in_messages_from = LOCAL_HOSTS &redef;
## A regular expression to match USER-AGENT-like headers to find if a

View file

@ -11,12 +11,12 @@ module SSH;
export {
redef enum Notice::Type += {
## Indicates that a host has been identified as crossing the
## :bro:id:`SSH::password_guesses_limit` threshold with heuristically
## determined failed logins.
## :bro:id:`SSH::password_guesses_limit` threshold with
## heuristically determined failed logins.
Password_Guessing,
## Indicates that a host previously identified as a "password guesser"
## has now had a heuristically successful login attempt. This is not
## currently implemented.
## Indicates that a host previously identified as a "password
## guesser" has now had a heuristically successful login
## attempt. This is not currently implemented.
Login_By_Password_Guesser,
};
@ -29,8 +29,8 @@ export {
## guessing passwords.
const password_guesses_limit: double = 30 &redef;
## The amount of time to remember presumed non-successful logins to build
## model of a password guesser.
## The amount of time to remember presumed non-successful logins to
## build a model of a password guesser.
const guessing_timeout = 30 mins &redef;
## This value can be used to exclude hosts or entire networks from being

View file

@ -7,14 +7,15 @@ module SSH;
export {
redef enum Notice::Type += {
## If an SSH login is seen to or from a "watched" country based on the
## :bro:id:`SSH::watched_countries` variable then this notice will
## be generated.
## If an SSH login is seen to or from a "watched" country based
## on the :bro:id:`SSH::watched_countries` variable then this
## notice will be generated.
Watched_Country_Login,
};
redef record Info += {
## Add geographic data related to the "remote" host of the connection.
## Add geographic data related to the "remote" host of the
## connection.
remote_location: geo_location &log &optional;
};

View file

@ -10,8 +10,8 @@ module SSH;
export {
redef enum Notice::Type += {
## Generated if a login originates or responds with a host where the
## reverse hostname lookup resolves to a name matched by the
## Generated if a login originates or responds with a host where
## the reverse hostname lookup resolves to a name matched by the
## :bro:id:`SSH::interesting_hostnames` regular expression.
Interesting_Hostname_Login,
};

View file

@ -12,13 +12,14 @@ module SSL;
export {
redef enum Notice::Type += {
## Indicates that a certificate's NotValidAfter date has lapsed and
## the certificate is now invalid.
## Indicates that a certificate's NotValidAfter date has lapsed
## and the certificate is now invalid.
Certificate_Expired,
## Indicates that a certificate is going to expire within
## :bro:id:`SSL::notify_when_cert_expiring_in`.
Certificate_Expires_Soon,
## Indicates that a certificate's NotValidBefore date is future dated.
## Indicates that a certificate's NotValidBefore date is future
## dated.
Certificate_Not_Valid_Yet,
};
@ -29,8 +30,8 @@ export {
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
const notify_certs_expiration = LOCAL_HOSTS &redef;
## The time before a certificate is going to expire that you would like to
## start receiving :bro:enum:`SSL::Certificate_Expires_Soon` notices.
## The time before a certificate is going to expire that you would like
## to start receiving :bro:enum:`SSL::Certificate_Expires_Soon` notices.
const notify_when_cert_expiring_in = 30days &redef;
}

View file

@ -5,8 +5,8 @@
##! .. note::
##!
##! - It doesn't work well on a cluster because each worker will write its
##! own certificate files and no duplicate checking is done across
##! clusters so each node would log each certificate.
##! own certificate files and no duplicate checking is done across the
##! cluster so each node would log each certificate.
##!
@load base/protocols/ssl
@ -18,7 +18,7 @@ module SSL;
export {
## Control if host certificates offered by the defined hosts
## will be written to the PEM certificates file.
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS.
const extract_certs_pem = LOCAL_HOSTS &redef;
}

View file

@ -1,4 +1,5 @@
##! Log information about certificates while attempting to avoid duplicate logging.
##! Log information about certificates while attempting to avoid duplicate
##! logging.
@load base/utils/directions-and-hosts
@load base/protocols/ssl
@ -26,7 +27,7 @@ export {
};
## The certificates whose existence should be logged and tracked.
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS.
const cert_tracking = LOCAL_HOSTS &redef;
## The set of all known certificates to store for preventing duplicate

View file

@ -8,8 +8,9 @@ module SSL;
export {
redef enum Notice::Type += {
## This notice indicates that the result of validating the certificate
## along with it's full certificate chain was invalid.
## This notice indicates that the result of validating the
## certificate along with its full certificate chain was
## invalid.
Invalid_Server_Cert
};
@ -18,9 +19,9 @@ export {
validation_status: string &log &optional;
};
## MD5 hash values for recently validated certs along with the validation
## status message are kept in this table to avoid constant validation
## everytime the same certificate is seen.
## MD5 hash values for recently validated certs along with the
## validation status message are kept in this table to avoid constant
## validation every time the same certificate is seen.
global recently_validated_certs: table[string] of string = table()
&read_expire=5mins &synchronized &redef;
}

View file

@ -0,0 +1 @@
Miscellaneous tuning parameters.

View file

@ -0,0 +1,2 @@
Sets various defaults, and prints warning messages to stdout under
certain conditions.

View file

@ -12,8 +12,8 @@ export {
## If you want to explicitly only send certain :bro:type:`Log::ID`
## streams, add them to this set. If the set remains empty, all will
## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option will remain in
## effect as well.
## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option
## will remain in effect as well.
const send_logs: set[Log::ID] &redef;
}

View file

@ -2720,16 +2720,22 @@ RecordVal* RecordVal::CoerceTo(const RecordType* t, Val* aggr, bool allow_orphan
break;
}
Val* v = Lookup(i);
if ( ! v )
// Check for allowable optional fields is outside the loop, below.
continue;
if ( ar_t->FieldType(t_i)->Tag() == TYPE_RECORD
&& ! same_type(ar_t->FieldType(t_i), Lookup(i)->Type()) )
&& ! same_type(ar_t->FieldType(t_i), v->Type()) )
{
Expr* rhs = new ConstExpr(Lookup(i)->Ref());
Expr* rhs = new ConstExpr(v->Ref());
Expr* e = new RecordCoerceExpr(rhs, ar_t->FieldType(t_i)->AsRecordType());
ar->Assign(t_i, e->Eval(0));
continue;
}
ar->Assign(t_i, Lookup(i)->Ref());
ar->Assign(t_i, v->Ref());
}
for ( i = 0; i < ar_t->NumFields(); ++i )

View file

@ -2,65 +2,93 @@
## Generated for a DNP3 request header.
##
## c: The connection the DNP3 communication is part of.
##
## is_orig: True if this reflects originator-side activity.
##
## fc: function code.
##
event dnp3_application_request_header%(c: connection, is_orig: bool, fc: count%);
## Generated for a DNP3 response header.
##
## c: The connection the DNP3 communication is part of.
##
## is_orig: True if this reflects originator-side activity.
##
## fc: function code.
## iin: internal indication number
##
## iin: internal indication number.
##
event dnp3_application_response_header%(c: connection, is_orig: bool, fc: count, iin: count%);
## Generated for the object header found in both DNP3 requests and responses.
##
## c: The connection the DNP3 communication is part of.
##
## is_orig: True if this reflects originator-side activity.
## obj_type: type of object, which is classified based on an 8-bit group number and an 8-bit variation number
## qua_field: qualifier field
##
## obj_type: type of object, which is classified based on an 8-bit group number
## and an 8-bit variation number.
##
## qua_field: qualifier field.
##
## rf_low: the structure of the range field depends on the qualified field.
## In some cases, range field contains only one logic part, e.g.,
## number of objects, so only *rf_low* contains the useful values.
## rf_high: in some cases, range field contain two logic parts, e.g., start
## index and stop index, so *rf_low* contains the start index while
## In some cases, the range field contains only one logic part, e.g.,
## number of objects, so only *rf_low* contains useful values.
##
## rf_high: in some cases, the range field contains two logic parts, e.g., start
## index and stop index, so *rf_low* contains the start index
## while *rf_high* contains the stop index.
##
event dnp3_object_header%(c: connection, is_orig: bool, obj_type: count, qua_field: count, number: count, rf_low: count, rf_high: count%);
## Generated for the prefix before a DNP3 object. The structure and the meaning
## of the prefix are defined by the qualifier field.
##
## c: The connection the DNP3 communication is part of.
##
## is_orig: True if this reflects originator-side activity.
##
## prefix_value: The prefix.
##
event dnp3_object_prefix%(c: connection, is_orig: bool, prefix_value: count%);
## Generated for an additional header that the DNP3 analyzer passes to the
## script-level. This headers mimics the DNP3 transport-layer yet is only passed
## script-level. This header mimics the DNP3 transport-layer yet is only passed
## once for each sequence of DNP3 records (which are otherwise reassembled and
## treated as a single entity).
##
## c: The connection the DNP3 communication is part of.
##
## is_orig: True if this reflects originator-side activity.
## start: the first two bytes of the DNP3 Pseudo Link Layer; its value is fixed as 0x0564
## len: the "length" field in the DNP3 Pseudo Link Layer
## ctrl: the "control" field in the DNP3 Pseudo Link Layer
## dest_addr: the "destination" field in the DNP3 Pseudo Link Layer
## src_addr: the "source" field in the DNP3 Pseudo Link Layer
##
## start: the first two bytes of the DNP3 Pseudo Link Layer; its value is fixed
## as 0x0564.
##
## len: the "length" field in the DNP3 Pseudo Link Layer.
##
## ctrl: the "control" field in the DNP3 Pseudo Link Layer.
##
## dest_addr: the "destination" field in the DNP3 Pseudo Link Layer.
##
## src_addr: the "source" field in the DNP3 Pseudo Link Layer.
##
event dnp3_header_block%(c: connection, is_orig: bool, start: count, len: count, ctrl: count, dest_addr: count, src_addr: count%);
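The fields this event carries map directly onto the first bytes of the frame. A minimal Python sketch of splitting them out, assuming the conventional DNP3 layout (start bytes 0x05 0x64, one-byte length and control, then little-endian 16-bit addresses) rather than the analyzer's own binpac code:

```python
import struct

def parse_dnp3_link_header(buf):
    """Split out the DNP3 pseudo link layer fields named above.

    Assumed layout: the bytes 0x05 0x64 (start == 0x0564), a 1-byte
    length, a 1-byte control field, then little-endian 16-bit
    destination and source addresses.
    """
    if len(buf) < 8:
        raise ValueError("need at least 8 bytes of link header")
    start = (buf[0] << 8) | buf[1]
    if start != 0x0564:
        raise ValueError("bad start bytes: 0x%04x" % start)
    length, ctrl, dest_addr, src_addr = struct.unpack("<BBHH", buf[2:8])
    return {"start": start, "len": length, "ctrl": ctrl,
            "dest_addr": dest_addr, "src_addr": src_addr}
```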
## Generated for a DNP3 "Response_Data_Object".
## The "Response_Data_Object" contains two parts: object prefix and object
## data. In most cases, objects data are defined by new record types. But
## in a few cases, objects data are directly basic types, such as int16, or
## int8; thus we use a additional data_value to record the values of those
## data. In most cases, object data are defined by new record types. But
## in a few cases, object data are directly basic types, such as int16, or
## int8; thus we use an additional *data_value* to record the values of those
## object data.
##
## c: The connection the DNP3 communication is part of.
##
## is_orig: True if this reflects originator-side activity.
##
## data_value: The value for those objects that carry their information here
## directly.
##
event dnp3_response_data_object%(c: connection, is_orig: bool, data_value: count%);
## Generated for DNP3 attributes.
@ -238,6 +266,6 @@ event dnp3_frozen_analog_input_event_DPwTime%(c: connection, is_orig: bool, flag
event dnp3_file_transport%(c: connection, is_orig: bool, file_handle: count, block_num: count, file_data: string%);
## Debugging event generated by the DNP3 analyzer. The "Debug_Byte" binpac unit
## generates this for unknown "cases". The user can use it to debug the byte string
## to check what cause the malformed network packets.
## generates this for unknown "cases". The user can use it to debug the byte
## string to check what caused the malformed network packets.
event dnp3_debug_byte%(c: connection, is_orig: bool, debug: string%);

View file

@ -116,11 +116,12 @@ static Val* parse_eftp(const char* line)
}
%%}
## Converts a string representation of the FTP PORT command to an ``ftp_port``.
## Converts a string representation of the FTP PORT command to an
## :bro:type:`ftp_port`.
##
## s: The string of the FTP PORT command, e.g., ``"10,0,0,1,4,31"``.
##
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
##
## .. bro:see:: parse_eftp_port parse_ftp_pasv parse_ftp_epsv fmt_ftp_port
function parse_ftp_port%(s: string%): ftp_port
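The conversion the docstring describes is simple: the first four comma-separated bytes form the IPv4 address and the last two form the port (high byte times 256 plus low byte, so ``4,31`` gives 1055). A hypothetical Python sketch of that logic (not the C++ ``parse_port`` helper the function actually calls):

```python
def ftp_port_to_tuple(s):
    """Parse an FTP PORT argument like "10,0,0,1,4,31" into (host, port)."""
    parts = [int(p) for p in s.split(",")]
    if len(parts) != 6 or any(not 0 <= p <= 255 for p in parts):
        raise ValueError("invalid PORT argument: %r" % s)
    host = ".".join(str(p) for p in parts[:4])
    port = parts[4] * 256 + parts[5]  # high byte, low byte of the TCP port
    return host, port
```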
@ -128,14 +129,14 @@ function parse_ftp_port%(s: string%): ftp_port
return parse_port(s->CheckString());
%}
## Converts a string representation of the FTP EPRT command to an ``ftp_port``.
## See `RFC 2428 <http://tools.ietf.org/html/rfc2428>`_.
## The format is ``EPRT<space><d><net-prt><d><net-addr><d><tcp-port><d>``,
## Converts a string representation of the FTP EPRT command (see :rfc:`2428`)
## to an :bro:type:`ftp_port`. The format is
## ``"EPRT<space><d><net-prt><d><net-addr><d><tcp-port><d>"``,
## where ``<d>`` is a delimiter in the ASCII range 33-126 (usually ``|``).
##
## s: The string of the FTP EPRT command, e.g., ``"|1|10.0.0.1|1055|"``.
##
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
##
## .. bro:see:: parse_ftp_port parse_ftp_pasv parse_ftp_epsv fmt_ftp_port
function parse_eftp_port%(s: string%): ftp_port
@ -143,11 +144,11 @@ function parse_eftp_port%(s: string%): ftp_port
return parse_eftp(s->CheckString());
%}
## Converts the result of the FTP PASV command to an ``ftp_port``.
## Converts the result of the FTP PASV command to an :bro:type:`ftp_port`.
##
## str: The string containing the result of the FTP PASV command.
##
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
##
## .. bro:see:: parse_ftp_port parse_eftp_port parse_ftp_epsv fmt_ftp_port
function parse_ftp_pasv%(str: string%): ftp_port
@ -168,14 +169,13 @@ function parse_ftp_pasv%(str: string%): ftp_port
return parse_port(line);
%}
## Converts the result of the FTP EPSV command to an ``ftp_port``.
## See `RFC 2428 <http://tools.ietf.org/html/rfc2428>`_.
## The format is ``<text> (<d><d><d><tcp-port><d>)``, where ``<d>`` is a
## delimiter in the ASCII range 33-126 (usually ``|``).
## Converts the result of the FTP EPSV command (see :rfc:`2428`) to an
## :bro:type:`ftp_port`. The format is ``"<text> (<d><d><d><tcp-port><d>)"``,
## where ``<d>`` is a delimiter in the ASCII range 33-126 (usually ``|``).
##
## str: The string containing the result of the FTP EPSV command.
##
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
##
## .. bro:see:: parse_ftp_port parse_eftp_port parse_ftp_pasv fmt_ftp_port
function parse_ftp_epsv%(str: string%): ftp_port

View file

@ -42,11 +42,11 @@ function skip_http_entity_data%(c: connection, is_orig: bool%): any
##
## .. note::
##
## Unescaping reserved characters may cause loss of information. RFC 2396:
## A URI is always in an "escaped" form, since escaping or unescaping a
## completed URI might change its semantics. Normally, the only time
## escape encodings can safely be made is when the URI is being created
## from its component parts.
## Unescaping reserved characters may cause loss of information.
## :rfc:`2396`: A URI is always in an "escaped" form, since escaping or
## unescaping a completed URI might change its semantics. Normally, the
## only time escape encodings can safely be made is when the URI is
## being created from its component parts.
function unescape_URI%(URI: string%): string
%{
const u_char* line = URI->Bytes();
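The RFC 2396 caveat in the note above can be seen directly with Python's standard library (an illustration only, not Bro's ``unescape_URI``): once an escaped ``%2F`` is decoded, it becomes indistinguishable from a real path-segment separator.

```python
from urllib.parse import unquote

# "%2F" is an escaped "/" inside a single path segment; after
# unescaping, the URI appears to have an extra segment.
escaped = "/docs/a%2Fb/readme"
unescaped = unquote(escaped)
```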

View file

@ -1,7 +1,6 @@
## Generated for client side commands on an RSH connection.
##
## See `RFC 1258 <http://tools.ietf.org/html/rfc1258>`__ for more information
## about the Rlogin/Rsh protocol.
## See :rfc:`1258` for more information about the Rlogin/Rsh protocol.
##
## c: The connection.
##
@ -30,8 +29,7 @@ event rsh_request%(c: connection, client_user: string, server_user: string, line
## Generated for client side commands on an RSH connection.
##
## See `RFC 1258 <http://tools.ietf.org/html/rfc1258>`__ for more information
## about the Rlogin/Rsh protocol.
## See :rfc:`1258` for more information about the Rlogin/Rsh protocol.
##
## c: The connection.
##

View file

@ -1,4 +1,4 @@
## Generated for any modbus message regardless if the particular function
## Generated for any Modbus message regardless if the particular function
## is further supported or not.
##
## c: The connection.
@ -8,7 +8,7 @@
## is_orig: True if the event is raised for the originator side.
event modbus_message%(c: connection, headers: ModbusHeaders, is_orig: bool%);
## Generated for any modbus exception message.
## Generated for any Modbus exception message.
##
## c: The connection.
##
@ -23,7 +23,7 @@ event modbus_exception%(c: connection, headers: ModbusHeaders, code: count%);
##
## headers: The headers for the modbus function.
##
## start_address: The memory address where of the first coil to be read.
## start_address: The memory address of the first coil to be read.
##
## quantity: The number of coils to be read.
event modbus_read_coils_request%(c: connection, headers: ModbusHeaders, start_address: count, quantity: count%);
@ -191,8 +191,8 @@ event modbus_write_multiple_registers_response%(c: connection, headers: ModbusHe
##
## headers: The headers for the modbus function.
##
## .. note: This event is incomplete. The information from the data structure is not
## yet passed through to the event.
## .. note:: This event is incomplete. The information from the data
##    structure is not yet passed through to the event.
event modbus_read_file_record_request%(c: connection, headers: ModbusHeaders%);
## Generated for a Modbus read file record response.
@ -201,8 +201,8 @@ event modbus_read_file_record_request%(c: connection, headers: ModbusHeaders%);
##
## headers: The headers for the modbus function.
##
## .. note: This event is incomplete. The information from the data structure is not
## yet passed through to the event.
## .. note:: This event is incomplete. The information from the data
##    structure is not yet passed through to the event.
event modbus_read_file_record_response%(c: connection, headers: ModbusHeaders%);
## Generated for a Modbus write file record request.
@ -211,8 +211,8 @@ event modbus_read_file_record_response%(c: connection, headers: ModbusHeaders%);
##
## headers: The headers for the modbus function.
##
## .. note: This event is incomplete. The information from the data structure is not
## yet passed through to the event.
## .. note:: This event is incomplete. The information from the data
##    structure is not yet passed through to the event.
event modbus_write_file_record_request%(c: connection, headers: ModbusHeaders%);
## Generated for a Modbus write file record response.
@ -221,8 +221,8 @@ event modbus_write_file_record_request%(c: connection, headers: ModbusHeaders%);
##
## headers: The headers for the modbus function.
##
## .. note: This event is incomplete. The information from the data structure is not
## yet passed through to the event.
## .. note:: This event is incomplete. The information from the data
##    structure is not yet passed through to the event.
event modbus_write_file_record_response%(c: connection, headers: ModbusHeaders%);
## Generated for a Modbus mask write register request.
@ -272,7 +272,8 @@ event modbus_read_write_multiple_registers_request%(c: connection, headers: Modb
##
## headers: The headers for the modbus function.
##
## written_registers: The register values read from the registers specified in the request.
## written_registers: The register values read from the registers specified in
## the request.
event modbus_read_write_multiple_registers_response%(c: connection, headers: ModbusHeaders, written_registers: ModbusRegisters%);
## Generated for a Modbus read FIFO queue request.

View file

@ -3,7 +3,7 @@
## its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the
@ -12,7 +12,7 @@
## is_orig: True if the message was sent by the originator of the connection.
##
## msg_type: The general type of message, as defined in Section 4.3.1 of
## `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__.
## :rfc:`1002`.
##
## data_len: The length of the message's payload.
##
@ -35,7 +35,7 @@ event netbios_session_message%(c: connection, is_orig: bool, msg_type: count, da
## (despite its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the
@ -63,7 +63,7 @@ event netbios_session_request%(c: connection, msg: string%);
## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the
@ -91,7 +91,7 @@ event netbios_session_accepted%(c: connection, msg: string%);
## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the
@ -121,7 +121,7 @@ event netbios_session_rejected%(c: connection, msg: string%);
## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the
@ -154,7 +154,7 @@ event netbios_session_raw_message%(c: connection, is_orig: bool, msg: string%);
## (despite its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the
@ -184,7 +184,7 @@ event netbios_session_ret_arg_resp%(c: connection, msg: string%);
## its name!) the NetBIOS datagram service on UDP port 138.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
## about NetBIOS. :rfc:`1002` describes
## the packet format for NetBIOS over TCP/IP, which Bro parses.
##
## c: The connection, which may be TCP or UDP, depending on the type of the



@ -123,7 +123,7 @@ event ssl_alert%(c: connection, is_orig: bool, level: count, desc: count%);
## an unencrypted handshake, and Bro extracts as much information out of that
## as it can. This event is raised when an SSL/TLS server passes a session
## ticket to the client that can later be used for resuming the session. The
## mechanism is described in :rfc:`4507`
## mechanism is described in :rfc:`4507`.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/Transport_Layer_Security>`__ for
## more information about the SSL/TLS protocol.
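A minimal handler for the session-ticket event described here might look as follows. This is a hedged sketch assuming the standard `ssl_session_ticket_handshake` signature (connection, lifetime hint, raw ticket); the printed output is illustrative only.

```bro
event ssl_session_ticket_handshake(c: connection, ticket_lifetime_hint: count, ticket: string)
	{
	# Sketch: record that the server issued a resumption ticket (RFC 4507).
	print fmt("session ticket from %s, lifetime hint %d seconds",
	          c$id$resp_h, ticket_lifetime_hint);
	}
```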


@ -93,13 +93,12 @@ function get_gap_summary%(%): gap_info
##
## - ``CONTENTS_NONE``: Stop recording the connection's content.
## - ``CONTENTS_ORIG``: Record the data sent by the connection
## originator (often the client).
## originator (often the client).
## - ``CONTENTS_RESP``: Record the data sent by the connection
## responder (often the server).
## responder (often the server).
## - ``CONTENTS_BOTH``: Record the data sent in both directions.
## Results in the two directions being
## intermixed in the file, in the order the
## data was seen by Bro.
## Results in the two directions being intermixed in the file,
## in the order the data was seen by Bro.
##
## f: The file handle of the file to write the contents to.
##
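Putting the flags above to use, one could start recording a connection's originator-side payload when the connection is established. This is a sketch, not part of the commit: the file-naming scheme is hypothetical, and it assumes the `set_contents_file(cid, direction, f)` builtin documented here.

```bro
event connection_established(c: connection)
	{
	# Sketch: record the data sent by the originator of each new connection
	# to a per-connection file. The naming scheme is illustrative only.
	local f = open(fmt("contents.%s.dat", c$uid));
	set_contents_file(c$id, CONTENTS_ORIG, f);
	}
```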

Some files were not shown because too many files have changed in this diff.