Mirror of https://github.com/zeek/zeek.git (synced 2025-10-02 06:38:20 +00:00)
Commit b38efa58d0: Merge branch 'master' into topic/jsiwek/broxygen

Conflicts:
    testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log
    testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log

117 changed files with 1171 additions and 742 deletions
CHANGES (94 lines changed)

@@ -1,4 +1,98 @@

2.2-beta-177 | 2013-10-30 04:54:54 -0700

  * Fix thread processing/termination conditions. (Jon Siwek)

2.2-beta-175 | 2013-10-29 09:30:09 -0700

  * Return the Dir module to file name tracking instead of inode
    tracking to avoid missing files that reuse a formerly seen inode.
    (Seth Hall)

  * Deprecate Broccoli Ruby bindings and no longer build them by
    default; use --enable-ruby to do so. (Jon Siwek)

2.2-beta-167 | 2013-10-29 06:02:38 -0700

  * Change percent_lost in capture-loss from a string to a double.
    (Vlad Grigorescu)

  * New version of the threading queue deadlock fix. (Robin Sommer)

  * Updating README with download/git information. (Robin Sommer)

2.2-beta-161 | 2013-10-25 15:48:15 -0700

  * Add curl to the list of optional dependencies. It's used by the
    active-http.bro script. (Daniel Thayer)

  * Update test and baseline for a recent doc test fix. (Daniel Thayer)

2.2-beta-158 | 2013-10-25 15:05:08 -0700

  * Updating README with download/git information. (Robin Sommer)

2.2-beta-157 | 2013-10-25 11:11:17 -0700

  * Extend the documentation of the SQLite reader/writer framework.
    (Bernhard Amann)

  * Fix inclusion of wrong example file in scripting tutorial.
    Reported by Michael Auger @LM4K. (Bernhard Amann)

  * Alternative fix for the threading deadlock issue to avoid potential
    performance impact. (Bernhard Amann)

2.2-beta-152 | 2013-10-24 18:16:49 -0700

  * Fix for input readers occasionally deadlocking. (Robin Sommer)

2.2-beta-151 | 2013-10-24 16:52:26 -0700

  * Updating submodule(s).

2.2-beta-150 | 2013-10-24 16:32:14 -0700

  * Change temporary ASCII reader workaround for getline() on
    Mavericks to a permanent fix. (Bernhard Amann)

2.2-beta-148 | 2013-10-24 14:34:35 -0700

  * Add gawk to the list of optional packages. (Daniel Thayer)

  * Add more script package README files. (Daniel Thayer)

  * Add NEWS about new features of BroControl and upgrade info.
    (Daniel Thayer)

  * Intel framework notes added to NEWS. (Seth Hall)

  * Temporary OS X Mavericks libc++ workaround for the getline()
    problem in the ASCII reader. (Bernhard Amann)

  * Change test of the identify_data BIF to ignore charset, as it may
    vary with libmagic version. (Jon Siwek)

  * Ensure that the starting BPF filter is logged on clusters. (Seth Hall)

  * Add UDP support to the checksum offload detection script. (Seth Hall)

2.2-beta-133 | 2013-10-23 09:50:16 -0700

  * Fix record coercion tolerance of optional fields. (Jon Siwek)

  * Add NEWS about incompatible local.bro changes, addresses BIT-1047.
    (Jon Siwek)

  * Fix minor formatting problem in NEWS. (Jon Siwek)

2.2-beta-129 | 2013-10-23 09:47:29 -0700

  * Another batch of documentation fixes and updates. (Daniel Thayer)

2.2-beta-114 | 2013-10-18 14:17:57 -0700

  * Moving the SQLite examples into separate Bro files to turn them
NEWS (104 lines changed)

@@ -10,6 +10,28 @@ Bro 2.2 Beta

New Functionality
-----------------

- A completely overhauled intelligence framework for consuming
  external intelligence data. It provides an abstracted mechanism
  for feeding data into the framework to be matched against the
  data available. It also provides a function named ``Intel::match``
  which makes any hits on intelligence data available to the
  scripting language.

  Using the input framework, the intel framework can load data from
  text files. It can also update and add data if changes are
  made to the file being monitored. Files to monitor for
  intelligence can be provided by redef-ing the
  ``Intel::read_files`` variable.

  The intel framework is cluster-ready. On a cluster, the
  manager is the only node that needs to load in data from disk;
  the cluster support will distribute the data across the cluster
  automatically.

  Scripts are provided at ``policy/frameworks/intel/seen`` that
  provide a broad set of sources of data to feed into the intel
  framework to be matched.
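  As a concrete sketch of the ``Intel::read_files`` hook-up described
  above (the feed path below is a made-up placeholder, not a file that
  ships with Bro):

  .. code:: bro

     @load frameworks/intel/seen

     redef Intel::read_files += {
         "/usr/local/bro/share/intel/example-feed.txt",
     };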
- A new file analysis framework moves most of the processing of file
  content from script-land into the core, where it belongs. See
  ``doc/file-analysis.rst``, or the online documentation, for more

@@ -21,23 +43,26 @@ New Functionality

  efficiently, now):

  - HTTP:

    * Identify MIME type of messages.
    * Extract messages to disk.
    * Compute MD5 for messages.

  - SMTP:

    * Identify MIME type of messages.
    * Extract messages to disk.
    * Compute MD5 for messages.
    * Provide access to start of entity data.

  - FTP data transfers:

    * Identify MIME types of data.
    * Record to disk.

  - IRC DCC transfers: Record to disk.

  - Support for analyzing data transferred via HTTP range requests.

- A binary input reader interfaces the input framework with the
  file analysis, allowing to inject files on disk into Bro's

@@ -50,7 +75,7 @@ New Functionality

  information from many independent monitoring points (including
  clusters). It provides a transparent, easy-to-use user interface,
  and can optionally deploy a set of probabilistic data structures for
  memory-efficient operation. The framework is located in
  ``scripts/base/frameworks/sumstats``.

  A number of new applications now ship with Bro that are built on top

@@ -61,7 +86,7 @@ New Functionality

  Bro versions <2.0; it's now back, but quite different).

  * Traceroute detector: ``policy/misc/detect-traceroute.bro``

  * Web application detection/measurement:
    ``policy/misc/app-stats/*``

@@ -233,6 +258,35 @@ New Functionality

  To use CPU pinning, a new per-node option ``pin_cpus`` can be
  specified in node.cfg if the OS is either Linux or FreeBSD.

- BroControl now returns useful exit codes. Most BroControl commands
  return 0 if everything was OK, and 1 otherwise. However, there are
  a few exceptions. The "status" and "top" commands return 0 if all Bro
  nodes are running, and 1 if not all nodes are running. The "cron"
  command always returns 0 (but it still sends email if there were any
  problems). Any command provided by a plugin always returns 0.

- BroControl now has an option "env_vars" to set Bro environment variables.
  The value of this option is a comma-separated list of environment variable
  assignments (e.g., "VAR1=value, VAR2=another"). The "env_vars" option
  can apply to all Bro nodes (by setting it in broctl.cfg), or can be
  node-specific (by setting it in node.cfg). Environment variables in
  node.cfg have priority over any specified in broctl.cfg.

- BroControl now supports load balancing with PF_RING while sniffing
  multiple interfaces. Rather than assigning the same PF_RING cluster ID
  to all workers on a host, cluster ID assignment is now based on which
  interface a worker is sniffing (i.e., all workers on a host that sniff
  the same interface will share a cluster ID). This is handled by
  BroControl automatically.

- BroControl has several new options: MailConnectionSummary (for
  disabling the sending of connection summary report emails),
  MailAlarmsInterval (for specifying a different interval to send alarm
  summary emails), CompressCmd (if archived log files will be compressed,
  this specifies the command that will be used to compress them), and
  CompressExtension (if archived log files will be compressed, this
  specifies the file extension to use).

- BroControl comes with its own test suite now. ``make test`` in
  ``aux/broctl`` will run it.

@@ -243,6 +297,37 @@ most submodules.

Changed Functionality
---------------------

- Previous versions of ``$prefix/share/bro/site/local.bro`` (where
  "$prefix" indicates the installation prefix of Bro) aren't compatible
  with Bro 2.2. This file won't be overwritten when installing over a
  previous Bro installation, to prevent clobbering users' modifications,
  but an example of the new version is located in
  ``$prefix/share/bro/site/local.bro.example``. So if no modification
  has been made to the previous local.bro, just copy the new example
  version over it; otherwise merge in the differences. For reference,
  a common error message when attempting to use an outdated local.bro
  looks like::

    fatal error in /usr/local/bro/share/bro/policy/frameworks/software/vulnerable.bro, line 41: BroType::AsRecordType (table/record) (set[record { min:record { major:count; minor:count; minor2:count; minor3:count; addl:string; }; max:record { major:count; minor:count; minor2:count; minor3:count; addl:string; }; }])

- The type of ``Software::vulnerable_versions`` changed to allow
  more flexibility and range specifications. An example usage:

  .. code:: bro

     const java_1_6_vuln = Software::VulnerableVersionRange(
         $max = Software::Version($major = 1, $minor = 6, $minor2 = 0, $minor3 = 44)
     );

     const java_1_7_vuln = Software::VulnerableVersionRange(
         $min = Software::Version($major = 1, $minor = 7),
         $max = Software::Version($major = 1, $minor = 7, $minor2 = 0, $minor3 = 20)
     );

     redef Software::vulnerable_versions += {
         ["Java"] = set(java_1_6_vuln, java_1_7_vuln)
     };

- The interface to extracting content from application-layer protocols
  (including HTTP, SMTP, FTP) has changed significantly due to the
  introduction of the new file analysis framework (see above).

@@ -328,6 +413,19 @@ Changed Functionality

- We removed the BitTorrent DPD signatures pending further updates to
  that analyzer.

- In previous versions of BroControl, running "broctl cron" would create
  a file ``$prefix/logs/stats/www`` (where "$prefix" indicates the
  installation prefix of Bro). Now, it is created as a directory.
  Therefore, if you perform an upgrade install and you're using BroControl,
  then you may see an email (generated by "broctl cron") containing an
  error message: "error running update-stats". To fix this problem,
  either remove that file (it is not needed) or rename it.

- Due to lack of maintenance, the Ruby bindings for Broccoli are now
  deprecated, and the build process no longer includes them by
  default. For the time being, they can still be enabled by
  configuring with ``--enable-ruby``; however, we plan to remove
  Broccoli's Ruby support with the next Bro release.

Bro 2.1
=======
README (10 lines changed)

@@ -8,11 +8,21 @@ and pointers for getting started. NEWS contains release notes for the

current version, and CHANGES has the complete history of changes.
Please see COPYING for licensing information.

You can download source and binary releases on:

    http://www.bro.org/download

To get the current development version, clone our master git
repository:

    git clone --recursive git://git.bro.org/bro

For more documentation, research publications, and community contact
information, please see Bro's home page:

    http://www.bro.org

On behalf of the Bro Development Team,

Vern Paxson & Robin Sommer,
VERSION (2 lines changed)

@@ -1 +1 @@
-2.2-beta-114
+2.2-beta-177
Submodule updates:

@@ -1 +1 @@
-Subproject commit 923994715b34bf3292e402bbe00c00ff77556490
+Subproject commit 0f20a50afacb68154b4035b6da63164d154093e4

@@ -1 +1 @@
-Subproject commit 1496e0319f6fa12bb39362ab0947c82e1d6c669b
+Subproject commit d17f99107cc778627a0829f0ae416073bb1e20bb

@@ -1 +1 @@
-Subproject commit e57ec85a898a077cb3376462cac1f047e9aeaee7
+Subproject commit 5cc63348a4c3e54adaf59e5a85bec055025c6c1f

@@ -1 +1 @@
-Subproject commit e8eda204f418c78cc35102db04602ad2ea94aff8
+Subproject commit cea34f6de7fc3b6f01921593797e5f0f197b67a7

@@ -1 +1 @@
-Subproject commit 056c666cd8534ba3ba88731d985dde3e29206800
+Subproject commit cfc8fe7ddf5ba3a9f957d1d5a98e9cfe1e9692ac
configure (vendored, 7 lines changed)

@@ -32,12 +32,12 @@ Usage: $0 [OPTION]... [VAR=VALUE]...

    --enable-perftools        force use of Google perftools on non-Linux systems
                              (automatically on when perftools is present on Linux)
    --enable-perftools-debug  use Google's perftools for debugging
+   --enable-ruby             build ruby bindings for broccoli (deprecated)
    --disable-broccoli        don't build or install the Broccoli library
    --disable-broctl          don't install Broctl
    --disable-auxtools        don't build or install auxiliary tools
    --disable-perftools       don't try to build with Google Perftools
    --disable-python          don't try to build python bindings for broccoli
-   --disable-ruby            don't try to build ruby bindings for broccoli
    --disable-dataseries      don't use the optional DataSeries log writer
    --disable-elasticsearch   don't use the optional ElasticSearch log writer

@@ -113,6 +113,7 @@ append_cache_entry INSTALL_BROCTL BOOL true

append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING
append_cache_entry ENABLE_MOBILE_IPV6 BOOL false
append_cache_entry DISABLE_PERFTOOLS BOOL false
+append_cache_entry DISABLE_RUBY_BINDINGS BOOL true

# parse arguments
while [ $# -ne 0 ]; do

@@ -174,8 +175,8 @@ while [ $# -ne 0 ]; do

    --disable-python)
        append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
        ;;
-   --disable-ruby)
-       append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
+   --enable-ruby)
+       append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
        ;;
    --disable-dataseries)
        append_cache_entry DISABLE_DATASERIES BOOL true
@@ -97,6 +97,8 @@ build time:

    * LibGeoIP (for geo-locating IP addresses)
    * sendmail (enables Bro and BroControl to send mail)
+   * gawk (enables all features of bro-cut)
+   * curl (used by a Bro script that implements active HTTP)
    * gperftools (tcmalloc is used to improve memory and CPU usage)
    * ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)
    * Ruby executable, library, and headers (for Broccoli Ruby bindings)
@@ -214,7 +214,7 @@ take a look at a simple script, stored as

``connection_record_01.bro``, that will output the connection record
for a single connection.

-.. btest-include:: ${DOC_ROOT}/scripting/connection_record_02.bro
+.. btest-include:: ${DOC_ROOT}/scripting/connection_record_01.bro

Again, we start with ``@load``, this time importing the
:doc:`/scripts/base/protocols/conn/index` scripts which supply the tracking

@@ -1222,7 +1222,7 @@ from the connection relative to the behavior that has been observed by

Bro.

 .. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
-    :lines: 59-62
+    :lines: 60-63

In the :doc:`/scripts/policy/protocols/ssl/expiring-certs` script
which identifies when SSL certificates are set to expire and raises
@@ -122,6 +122,7 @@ rest_target(${psd} base/frameworks/notice/extend-email/hostnames.bro)

rest_target(${psd} base/frameworks/notice/main.bro)
rest_target(${psd} base/frameworks/notice/non-cluster.bro)
rest_target(${psd} base/frameworks/notice/weird.bro)
+rest_target(${psd} base/frameworks/packet-filter/cluster.bro)
rest_target(${psd} base/frameworks/packet-filter/main.bro)
rest_target(${psd} base/frameworks/packet-filter/netstats.bro)
rest_target(${psd} base/frameworks/packet-filter/utils.bro)
scripts/base/files/extract/README (new file, 1 line)

@@ -0,0 +1 @@
+Support for extracting files with the file analysis framework.

scripts/base/files/hash/README (new file, 1 line)

@@ -0,0 +1 @@
+Support for file hashes with the file analysis framework.

scripts/base/files/unified2/README (new file, 1 line)

@@ -0,0 +1 @@
+Support for Unified2 files in the file analysis framework.
@@ -120,6 +120,7 @@ export {

    ## The cluster layout definition. This should be placed into a file
    ## named cluster-layout.bro somewhere in the BROPATH. It will be
    ## automatically loaded if the CLUSTER_NODE environment variable is set.
+   ## Note that BroControl handles all of this automatically.
    const nodes: table[string] of Node = {} &redef;

    ## This is usually supplied on the command line for each instance
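A minimal cluster-layout.bro sketch for the nodes table above; the node
names, addresses, and ports are hypothetical, and the Node fields used
($node_type, $ip, $p, $interface, $manager, $proxy) are assumed from the
2.2 cluster framework. On a BroControl-managed install this file is
generated automatically, as the added comment notes.

    redef Cluster::nodes = {
        # Manager listens for the rest of the cluster.
        ["manager"]  = [$node_type=Cluster::MANAGER, $ip=192.168.1.1, $p=47761/tcp],
        # Proxy relays state between workers and the manager.
        ["proxy-1"]  = [$node_type=Cluster::PROXY,   $ip=192.168.1.1, $p=47762/tcp,
                        $manager="manager"],
        # Worker sniffs traffic on a local interface.
        ["worker-1"] = [$node_type=Cluster::WORKER,  $ip=192.168.1.2, $p=47763/tcp,
                        $interface="eth0", $manager="manager", $proxy="proxy-1"],
    };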
@@ -15,13 +15,16 @@ export {

    ## are wildcards.
    const listen_interface = 0.0.0.0 &redef;

    ## Which port to listen on. Note that BroControl sets this
    ## automatically.
    const listen_port = 47757/tcp &redef;

    ## This defines if a listening socket should use SSL.
    const listen_ssl = F &redef;

    ## Defines if a listening socket can bind to IPv6 addresses.
    ##
    ## Note that this is overridden by the BroControl IPv6Comm option.
    const listen_ipv6 = F &redef;

    ## If :bro:id:`Communication::listen_interface` is a non-global

@@ -128,7 +131,8 @@ export {

    };

    ## The table of Bro or Broccoli nodes that Bro will initiate connections
    ## to or respond to connections from. Note that BroControl sets this
    ## automatically.
    global nodes: table[string] of Node &redef;

    ## A table of peer nodes for which this node issued a
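A hypothetical sketch of populating Communication::nodes by hand; the
peer name, address, and event pattern are made up, and the $host, $p,
$connect, $events, and $retry fields are assumed from the 2.2
communication framework:

    redef Communication::nodes += {
        # Connect out to a remote peer and forward a custom event to it.
        ["collector"] = [$host=192.168.1.10, $p=47757/tcp, $connect=T,
                         $events=/my_custom_event/, $retry=1min],
    };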
@@ -1,6 +1,12 @@

##! Interface for the SQLite input reader. Redefinable options are available
##! to tweak the input format of the SQLite reader.
##!
##! See :doc:`/frameworks/logging-input-sqlite` for an introduction on how to
##! use the SQLite reader.
##!
##! When using the SQLite reader, you have to specify the SQL query that returns
##! the desired data by setting ``query`` in the ``config`` table. See the
##! introduction mentioned above for an example.

module InputSQLite;
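A sketch of driving the reader with the ``query`` option described above;
the database path, table layout, and record types are hypothetical, and
the Input framework call shape is assumed from Bro 2.2:

    # Index and value types describing the columns the query returns.
    type Idx: record { host: addr; };
    type Val: record { severity: count; };

    global blacklist: table[addr] of Val = table();

    event bro_init()
        {
        # Fill the table from an SQLite database using a custom query.
        Input::add_table([$source="/var/db/intel.sqlite", $name="blacklist",
                          $idx=Idx, $val=Val, $destination=blacklist,
                          $reader=Input::READER_SQLITE,
                          $config=table(["query"] = "select host, severity from blacklist;")]);
        }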
@@ -76,9 +76,16 @@ export {

    };

    ## Default rotation interval. Zero disables rotation.
    ##
    ## Note that this is overridden by the BroControl LogRotationInterval
    ## option.
    const default_rotation_interval = 0secs &redef;

    ## Default alarm summary mail interval. Zero disables alarm summary
    ## mails.
    ##
    ## Note that this is overridden by the BroControl MailAlarmsInterval
    ## option.
    const default_mail_alarms_interval = 0secs &redef;

    ## Default naming format for timestamps embedded into filenames.
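A minimal usage sketch for the two options above in a site policy; the
values are arbitrary, the Log module prefix is assumed, and on a
BroControl-managed install the LogRotationInterval and MailAlarmsInterval
options would override them:

    # Rotate logs hourly and send alarm summary mails once a day.
    redef Log::default_rotation_interval = 1hr;
    redef Log::default_mail_alarms_interval = 1day;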
scripts/base/frameworks/logging/postprocessors/README (new file, 1 line)

@@ -0,0 +1 @@
+Support for postprocessors in the logging framework.
@@ -1,5 +1,13 @@

##! Interface for the SQLite log writer. Redefinable options are available
##! to tweak the output format of the SQLite writer.
##!
##! See :doc:`/frameworks/logging-input-sqlite` for an introduction on how to
##! use the SQLite log writer.
##!
##! The SQL writer currently supports one writer-specific filter option via
##! ``config``: setting ``tablename`` sets the name of the table that is used
##! or created in the SQLite database. An example for this is given in the
##! introduction mentioned above.

module LogSQLite;
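A sketch of passing the ``tablename`` option when adding a log filter;
the filter name, output path, and table name below are hypothetical:

    event bro_init()
        {
        # Write the connection log into an SQLite table named "conn".
        Log::add_filter(Conn::LOG, [$name="sqlite-conn",
                                    $path="/var/db/conn",
                                    $writer=Log::WRITER_SQLITE,
                                    $config=table(["tablename"] = "conn")]);
        }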
scripts/base/frameworks/notice/README (new file, 4 lines)

@@ -0,0 +1,4 @@
+The notice framework enables Bro to "notice" things which are odd or
+potentially bad, leaving it to the local configuration to define which
+of them are actionable. This decoupling of detection and reporting allows
+Bro to be customized to the different needs that sites have.
@@ -7,12 +7,14 @@ module Notice;

export {
    redef enum Action += {
        ## Drops the address via Drop::drop_address, and generates an
        ## alarm.
        ACTION_DROP
    };

    redef record Info += {
        ## Indicate if the $src IP address was dropped and denied
        ## network access.
        dropped: bool &log &default=F;
    };
}
@@ -6,12 +6,14 @@ module Notice;

export {
    redef enum Action += {
        ## Indicates that the notice should be sent to the pager email
        ## address configured in the :bro:id:`Notice::mail_page_dest`
        ## variable.
        ACTION_PAGE
    };

    ## Email address to send notices with the :bro:enum:`Notice::ACTION_PAGE`
    ## action.
    const mail_page_dest = "" &redef;
}
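A sketch of putting ACTION_PAGE to use from a site policy; the
destination address is a placeholder, and SSH::Login is simply the
example notice type that the notice documentation itself uses:

    redef Notice::mail_page_dest = "oncall@example.com";

    hook Notice::policy(n: Notice::Info)
        {
        # Page for heuristically detected SSH logins.
        if ( n$note == SSH::Login )
            add n$actions[Notice::ACTION_PAGE];
        }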
@@ -13,13 +13,15 @@ export {

    ## Address to send the pretty-printed reports to. Default if not set is
    ## :bro:id:`Notice::mail_dest`.
    ##
    ## Note that this is overridden by the BroControl MailAlarmsTo option.
    const mail_dest_pretty_printed = "" &redef;

    ## If an address from one of these networks is reported, we mark
    ## the entry with an additional quote symbol (i.e., ">"). Many MUAs
    ## then highlight such lines differently.
    global flag_nets: set[subnet] &redef;

    ## Function that renders a single alarm. Can be overridden.
    global pretty_print_alarm: function(out: file, n: Info) &redef;

    ## Force generating mail file, even if reading from traces or no mail
@@ -17,7 +17,7 @@ export {

    ## Manager can communicate notice suppression to workers.
    redef Cluster::manager2worker_events += /Notice::begin_suppression/;
    ## Workers need ability to forward notices to manager.
    redef Cluster::worker2manager_events += /Notice::cluster_notice/;

@if ( Cluster::local_node_type() != Cluster::MANAGER )
@@ -1,7 +1,7 @@

##! This is the notice framework which enables Bro to "notice" things which
##! are odd or potentially bad. Decisions of the meaning of various notices
##! need to be done per site because Bro does not ship with assumptions about
##! what is bad activity for sites. More extensive documentation about using
##! the notice framework can be found in :doc:`/frameworks/notice`.

module Notice;

@@ -14,13 +14,13 @@ export {

    ALARM_LOG,
    };

    ## Scripts creating new notices need to redef this enum to add their
    ## own specific notice types which would then get used when they call
    ## the :bro:id:`NOTICE` function. The convention is to give a general
    ## category along with the specific notice separating words with
    ## underscores and using leading capitals on each word except for
    ## abbreviations which are kept in all capitals. For example,
    ## SSH::Login is for heuristically guessed successful SSH logins.
    type Type: enum {
        ## Notice reporting a count of how often a notice occurred.
        Tally,

@@ -30,67 +30,72 @@ export {

    type Action: enum {
        ## Indicates that there is no action to be taken.
        ACTION_NONE,
        ## Indicates that the notice should be sent to the notice
        ## logging stream.
        ACTION_LOG,
        ## Indicates that the notice should be sent to the email
        ## address(es) configured in the :bro:id:`Notice::mail_dest`
        ## variable.
        ACTION_EMAIL,
        ## Indicates that the notice should be alarmed. A readable
        ## ASCII version of the alarm log is emailed in bulk to the
        ## address(es) configured in :bro:id:`Notice::mail_dest`.
        ACTION_ALARM,
    };

    type ActionSet: set[Notice::Action];

    ## The notice framework is able to do automatic notice suppression by
    ## utilizing the *identifier* field in :bro:type:`Notice::Info` records.
    ## Set this to "0secs" to completely disable automated notice
    ## suppression.
    const default_suppression_interval = 1hrs &redef;

    type Info: record {
        ## An absolute time indicating when the notice occurred,
        ## defaults to the current network time.
        ts: time &log &optional;

        ## A connection UID which uniquely identifies the endpoints
        ## concerned with the notice.
        uid: string &log &optional;

        ## A connection 4-tuple identifying the endpoints concerned
        ## with the notice.
        id: conn_id &log &optional;

        ## A shorthand way of giving the uid and id to a notice. The
        ## reference to the actual connection will be deleted after
        ## applying the notice policy.
        conn: connection &optional;
        ## A shorthand way of giving the uid and id to a notice. The
        ## reference to the actual connection will be deleted after
        ## applying the notice policy.
        iconn: icmp_conn &optional;

        ## A file record if the notice is related to a file. The
        ## reference to the actual fa_file record will be deleted after
        ## applying the notice policy.
        f: fa_file &optional;

        ## A file unique ID if this notice is related to a file. If
        ## the *f* field is provided, this will be automatically filled
        ## out.
        fuid: string &log &optional;

        ## A mime type if the notice is related to a file. If the *f*
        ## field is provided, this will be automatically filled out.
        file_mime_type: string &log &optional;

        ## Frequently files can be "described" to give a bit more
        ## context. This field will typically be automatically filled
        ## out from an fa_file record. For example, if a notice was
        ## related to a file over HTTP, the URL of the request would
        ## be shown.
        file_desc: string &log &optional;

        ## The transport protocol. Filled automatically when either
        ## *conn*, *iconn* or *p* is specified.
        proto: transport_proto &log &optional;

        ## The :bro:type:`Notice::Type` of the notice.

@@ -117,38 +122,42 @@ export {

        ## The actions which have been applied to this notice.
        actions: ActionSet &log &default=ActionSet();

        ## By adding chunks of text into this element, other scripts
        ## can expand on notices that are being emailed. The normal
        ## way to add text is to extend the vector by handling the
        ## :bro:id:`Notice::notice` event and modifying the notice in
        ## place.
        email_body_sections: vector of string &optional;

        ## Adding a string "token" to this set will cause the notice
        ## framework's built-in emailing functionality to delay sending
        ## the email until either the token has been removed or the
        ## email has been delayed for :bro:id:`Notice::max_email_delay`.
        email_delay_tokens: set[string] &optional;

        ## This field is to be provided when a notice is generated for
        ## the purpose of deduplicating notices. The identifier string
        ## should be unique for a single instance of the notice. This
        ## field should be filled out in almost all cases when
        ## generating notices to define when a notice is conceptually
        ## a duplicate of a previous notice.
        ##
        ## For example, an SSL certificate that is going to expire soon
        ## should always have the same identifier no matter the client
        ## IP address that connected and resulted in the certificate
        ## being exposed. In this case, the resp_h, resp_p, and hash
        ## of the certificate would be used to create this value. The
        ## hash of the cert is included because servers can return
        ## multiple certificates on the same port.
        ##
        ## Another example might be a host downloading a file which
        ## triggered a notice because the MD5 sum of the file it
        ## downloaded was known by some set of intelligence. In that
        ## case, the orig_h (client) and MD5 sum would be used in this
        ## field to dedup because if the same file is downloaded over
        ## and over again you really only want to know about it a
        ## single time. This makes it possible to send those notices
        ## to email without worrying so much about sending thousands
        ## of emails.
        identifier: string &optional;
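To illustrate how the *identifier* field drives suppression, here is a
hypothetical, self-contained notice; the module and notice type are made
up purely for illustration:

    module MyScript;

    export {
        redef enum Notice::Type += {
            ## A made-up notice type, purely for illustration.
            Example_Notice,
        };
    }

    event connection_established(c: connection)
        {
        # The identifier keys suppression: repeated notices built from the
        # same responder host/port are suppressed for
        # Notice::default_suppression_interval.
        NOTICE([$note=MyScript::Example_Notice,
                $msg="example notice",
                $conn=c,
                $identifier=cat(c$id$resp_h, c$id$resp_p)]);
        }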
@@ -173,17 +182,26 @@ export {

    global policy: hook(n: Notice::Info);

    ## Local system sendmail program.
    ##
    ## Note that this is overridden by the BroControl SendMail option.
    const sendmail = "/usr/sbin/sendmail" &redef;
    ## Email address to send notices with the
    ## :bro:enum:`Notice::ACTION_EMAIL` action or to send bulk alarm logs
    ## on rotation with :bro:enum:`Notice::ACTION_ALARM`.
    ##
    ## Note that this is overridden by the BroControl MailTo option.
    const mail_dest = "" &redef;

    ## Address that emails will be from.
    ##
    ## Note that this is overridden by the BroControl MailFrom option.
    const mail_from = "Big Brother <bro@localhost>" &redef;
    ## Reply-to address used in outbound email.
    const reply_to = "" &redef;
    ## Text string prefixed to the subject of all emails sent out.
    ##
    ## Note that this is overridden by the BroControl MailSubjectPrefix
    ## option.
    const mail_subject_prefix = "[Bro]" &redef;
    ## The maximum amount of time a plugin can delay email from being sent.
    const max_email_delay = 15secs &redef;
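A short sketch of redefining the mail-related options above from a local
site policy; the addresses and prefix are placeholders, and under
BroControl the MailTo, MailFrom, and MailSubjectPrefix options take
precedence:

    # Where notice and alarm mail goes, who it comes from, and its subject tag.
    redef Notice::mail_dest = "secteam@example.com";
    redef Notice::mail_from = "Bro <bro@example.com>";
    redef Notice::mail_subject_prefix = "[Bro Alert]";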
@@ -198,9 +216,9 @@ export {

    global log_mailing_postprocessor: function(info: Log::RotationInfo): bool;

    ## This is the event that is called as the entry point to the
    ## notice framework by the global :bro:id:`NOTICE` function. By the
    ## time this event is generated, default values have already been
    ## filled out in the :bro:type:`Notice::Info` record and the notice
    ## policy has also been applied.
    ##
    ## n: The record containing notice data.

@@ -217,7 +235,8 @@ export {

    ## n: The record containing the notice in question.
    global is_being_suppressed: function(n: Notice::Info): bool;

    ## This event is generated on each occurrence of an event being
    ## suppressed.
    ##
    ## n: The record containing notice data regarding the notice type
    ## being suppressed.

@@ -237,18 +256,19 @@ export {

    ##
    ## dest: The intended recipient of the notice email.
    ##
    ## extend: Whether to extend the email using the
    ## ``email_body_sections`` field of *n*.
    global email_notice_to: function(n: Info, dest: string, extend: bool);

    ## Constructs mail headers to which an email body can be appended for
    ## sending with sendmail.
    ##
    ## subject_desc: a subject string to use for the mail.
    ##
    ## dest: recipient string to use for the mail.
    ##
    ## Returns: a string of mail headers to which an email body can be
    ##          appended.
    global email_headers: function(subject_desc: string, dest: string): string;

    ## This event can be handled to access the :bro:type:`Notice::Info`

@@ -257,8 +277,8 @@ export {

    ## rec: The record containing notice data before it is logged.
    global log_notice: event(rec: Info);

    ## This is an internal wrapper for the global :bro:id:`NOTICE`
    ## function; disregard.
    ##
    ## n: The record of notice data.
    global internal_NOTICE: function(n: Notice::Info);
@@ -3,7 +3,7 @@

module GLOBAL;

## This is the entry point in the global namespace for the notice framework.
function NOTICE(n: Notice::Info)
    {
    # Suppress this notice if necessary.
@@ -26,8 +26,8 @@ export {

    type Info: record {
        ## The time when the weird occurred.
        ts: time &log;
        ## If a connection is associated with this weird, this will be
        ## the connection's unique ID.
        uid: string &log &optional;
        ## conn_id for the optional connection.
        id: conn_id &log &optional;

@@ -37,16 +37,16 @@ export {

        addl: string &log &optional;
        ## Indicate if this weird was also turned into a notice.
        notice: bool &log &default=F;
        ## The peer that originated this weird. This is helpful in
        ## cluster deployments if a particular cluster node is having
        ## trouble to help identify which node is having trouble.
        peer: string &log &optional;
    };

    ## Types of actions that may be taken when handling weird activity events.
    type Action: enum {
        ## A dummy action indicating the user does not care what
        ## internal decision is made regarding a given type of weird.
        ACTION_UNSPECIFIED,
        ## No action is to be taken.
        ACTION_IGNORE,

@@ -252,16 +252,16 @@ export {

    ## a unique weird every ``create_expire`` interval.
    global weird_ignore: set[string, string] &create_expire=10min &redef;

    ## A state set which tracks unique weirds solely by name to reduce
    ## duplicate logging. This is deliberately not synchronized because it
    ## could cause overload during storms.
    global did_log: set[string, string] &create_expire=1day &redef;

    ## A state set which tracks unique weirds solely by name to reduce
    ## duplicate notices from being raised.
    global did_notice: set[string, string] &create_expire=1day &redef;

    ## Handlers of this event are invoked once per write to the weird
    ## logging stream before the data is actually written.
    ##
    ## rec: The weird columns about to be logged to the weird stream.
@@ -1,3 +1,8 @@

@load ./utils
@load ./main
@load ./netstats

+@load base/frameworks/cluster
+@if ( Cluster::is_enabled() )
+@load ./cluster
+@endif
scripts/base/frameworks/packet-filter/cluster.bro (new file, 14 lines)

@@ -0,0 +1,14 @@
+
+module PacketFilter;
+
+event remote_connection_handshake_done(p: event_peer) &priority=3
+    {
+    if ( Cluster::local_node_type() == Cluster::WORKER &&
+         p$descr in Cluster::nodes &&
+         Cluster::nodes[p$descr]$node_type == Cluster::MANAGER )
+        {
+        # This ensures that a packet filter is installed and logged
+        # after the manager connects to us.
+        install();
+        }
+    }
@@ -294,6 +294,7 @@ function install(): bool

    # Do an audit log for the packet filter.
    local info: Info;
    info$ts = network_time();
+   info$node = peer_description;
    # If network_time() is 0.0 we're at init time so use the wall clock.
    if ( info$ts == 0.0 )
        {
scripts/base/frameworks/reporter/README (new file, 2 lines)

@@ -0,0 +1,2 @@
+This framework is intended to create an output and filtering path for
+internally generated messages/warnings/errors.
scripts/base/frameworks/signatures/README (new file, 4 lines)
@@ -0,0 +1,4 @@
+The signature framework provides for doing low-level pattern matching. While
+signatures are not Bro's preferred detection tool, they sometimes come in
+handy and are closer to what many people are familiar with from using
+other NIDS.
@@ -11,21 +11,23 @@ export {
 redef enum Notice::Type += {
 ## Generic notice type for notice-worthy signature matches.
 Sensitive_Signature,
-## Host has triggered many signatures on the same host. The number of
-## signatures is defined by the
+## Host has triggered many signatures on the same host. The
+## number of signatures is defined by the
 ## :bro:id:`Signatures::vert_scan_thresholds` variable.
 Multiple_Signatures,
-## Host has triggered the same signature on multiple hosts as defined
-## by the :bro:id:`Signatures::horiz_scan_thresholds` variable.
+## Host has triggered the same signature on multiple hosts as
+## defined by the :bro:id:`Signatures::horiz_scan_thresholds`
+## variable.
 Multiple_Sig_Responders,
-## The same signature has triggered multiple times for a host. The
-## number of times the signature has been triggered is defined by the
-## :bro:id:`Signatures::count_thresholds` variable. To generate this
-## notice, the :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must
-## bet set for the signature.
+## The same signature has triggered multiple times for a host.
+## The number of times the signature has been triggered is
+## defined by the :bro:id:`Signatures::count_thresholds`
+## variable. To generate this notice, the
+## :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must be
+## set for the signature.
 Count_Signature,
-## Summarize the number of times a host triggered a signature. The
-## interval between summaries is defined by the
+## Summarize the number of times a host triggered a signature.
+## The interval between summaries is defined by the
 ## :bro:id:`Signatures::summary_interval` variable.
 Signature_Summary,
 };
@@ -37,11 +39,12 @@ export {
 ## All of them write the signature record to the logging stream unless
 ## declared otherwise.
 type Action: enum {
-## Ignore this signature completely (even for scan detection). Don't
-## write to the signatures logging stream.
+## Ignore this signature completely (even for scan detection).
+## Don't write to the signatures logging stream.
 SIG_IGNORE,
-## Process through the various aggregate techniques, but don't report
-## individually and don't write to the signatures logging stream.
+## Process through the various aggregate techniques, but don't
+## report individually and don't write to the signatures logging
+## stream.
 SIG_QUIET,
 ## Generate a notice.
 SIG_LOG,
@@ -64,20 +67,21 @@ export {

 ## The record type which contains the column fields of the signature log.
 type Info: record {
-## The network time at which a signature matching type of event to
-## be logged has occurred.
+## The network time at which a signature matching type of event
+## to be logged has occurred.
 ts: time &log;
 ## The host which triggered the signature match event.
 src_addr: addr &log &optional;
-## The host port on which the signature-matching activity occurred.
+## The host port on which the signature-matching activity
+## occurred.
 src_port: port &log &optional;
-## The destination host which was sent the payload that triggered the
-## signature match.
+## The destination host which was sent the payload that
+## triggered the signature match.
 dst_addr: addr &log &optional;
-## The destination host port which was sent the payload that triggered
-## the signature match.
+## The destination host port which was sent the payload that
+## triggered the signature match.
 dst_port: port &log &optional;
-## Notice associated with signature event
+## Notice associated with signature event.
 note: Notice::Type &log;
 ## The name of the signature that matched.
 sig_id: string &log &optional;
@@ -103,8 +107,8 @@ export {
 ## different responders has reached one of the thresholds.
 const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;

-## Generate a notice if, for a pair [orig, resp], the number of different
-## signature matches has reached one of the thresholds.
+## Generate a notice if, for a pair [orig, resp], the number of
+## different signature matches has reached one of the thresholds.
 const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;

 ## Generate a notice if a :bro:enum:`Signatures::SIG_COUNT_PER_RESP`
@@ -112,7 +116,7 @@ export {
 const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef;

 ## The interval between when :bro:enum:`Signatures::Signature_Summary`
-## notice are generated.
+## notices are generated.
 const summary_interval = 1 day &redef;

 ## This event can be handled to access/alter data about to be logged
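
For orientation, the thresholds and interval exported above are &redef-able from local policy; a tuning sketch with purely illustrative values (assumes the base signatures framework is loaded):

    # Example values only: react to horizontal matches a little earlier and
    # emit Signature_Summary notices twice a day.
    redef Signatures::horiz_scan_thresholds = { 3, 10, 50, 100, 500, 1000 };
    redef Signatures::summary_interval = 12 hrs;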
@@ -28,8 +28,8 @@ export {
 ## values for a sumstat.
 global cluster_ss_request: event(uid: string, ss_name: string, cleanup: bool);

-## Event sent by nodes that are collecting sumstats after receiving a
-## request for the sumstat from the manager.
+# Event sent by nodes that are collecting sumstats after receiving a
+# request for the sumstat from the manager.
 #global cluster_ss_response: event(uid: string, ss_name: string, data: ResultTable, done: bool, cleanup: bool);

 ## This event is sent by the manager in a cluster to initiate the
scripts/base/frameworks/sumstats/plugins/README (new file, 1 line)
@@ -0,0 +1 @@
+Plugins for the summary statistics framework.
(File diff suppressed because it is too large.)
@@ -1,8 +1,8 @@
 ##! This script loads everything in the base/ script directory. If you want
 ##! to run Bro without all of these scripts loaded by default, you can use
-##! the -b (--bare-mode) command line argument. You can also copy the "@load"
-##! lines from this script to your own script to load only the scripts that
-##! you actually want.
+##! the ``-b`` (``--bare-mode``) command line argument. You can also copy the
+##! "@load" lines from this script to your own script to load only the scripts
+##! that you actually want.

 @load base/utils/site
 @load base/utils/active-http
@@ -16,6 +16,7 @@ export {
 # Keep track of how many bad checksums have been seen.
 global bad_ip_checksums = 0;
 global bad_tcp_checksums = 0;
+global bad_udp_checksums = 0;

 # Track to see if this script is done so that messages aren't created multiple times.
 global done = F;
@@ -28,7 +29,11 @@ event ChecksumOffloading::check()
 local pkts_recvd = net_stats()$pkts_recvd;
 local bad_ip_checksum_pct = (pkts_recvd != 0) ? (bad_ip_checksums*1.0 / pkts_recvd*1.0) : 0;
 local bad_tcp_checksum_pct = (pkts_recvd != 0) ? (bad_tcp_checksums*1.0 / pkts_recvd*1.0) : 0;
-if ( bad_ip_checksum_pct > 0.05 || bad_tcp_checksum_pct > 0.05 )
+local bad_udp_checksum_pct = (pkts_recvd != 0) ? (bad_udp_checksums*1.0 / pkts_recvd*1.0) : 0;
+
+if ( bad_ip_checksum_pct > 0.05 ||
+     bad_tcp_checksum_pct > 0.05 ||
+     bad_udp_checksum_pct > 0.05 )
     {
     local packet_src = reading_traces() ? "trace file likely has" : "interface is likely receiving";
     local bad_checksum_msg = (bad_ip_checksum_pct > 0.0) ? "IP" : "";
@@ -38,6 +43,13 @@ event ChecksumOffloading::check()
     bad_checksum_msg += " and ";
     bad_checksum_msg += "TCP";
     }
+if ( bad_udp_checksum_pct > 0.0 )
+    {
+    if ( |bad_checksum_msg| > 0 )
+        bad_checksum_msg += " and ";
+    bad_checksum_msg += "UDP";
+    }
+
 local message = fmt("Your %s invalid %s checksums, most likely from NIC checksum offloading.", packet_src, bad_checksum_msg);
 Reporter::warning(message);
 done = T;
@@ -65,6 +77,8 @@ event conn_weird(name: string, c: connection, addl: string)
 {
 if ( name == "bad_TCP_checksum" )
     ++bad_tcp_checksums;
+else if ( name == "bad_UDP_checksum" )
+    ++bad_udp_checksums;
 }

 event bro_done()
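
As a worked example of the check above (numbers are illustrative): with pkts_recvd = 200000 and bad_udp_checksums = 15000, bad_udp_checksum_pct works out to 15000.0 / 200000.0 = 0.075, which exceeds the 0.05 cutoff, so the new UDP branch fires and "UDP" is appended to the warning message.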
@@ -32,9 +32,11 @@ export {
 ## mind that you will probably need to set the *method* field
 ## to "POST" or "PUT".
 client_data: string &optional;
-## Arbitrary headers to pass to the server. Some headers
-## will be included by libCurl.
+# Arbitrary headers to pass to the server. Some headers
+# will be included by libCurl.
 #custom_headers: table[string] of string &optional;

 ## Timeout for the request.
 max_time: interval &default=default_max_time;
 ## Additional curl command line arguments. Be very careful
@@ -28,7 +28,7 @@ event Dir::monitor_ev(dir: string, last_files: set[string],
 callback: function(fname: string),
 poll_interval: interval)
 {
-when ( local result = Exec::run([$cmd=fmt("ls -i -1 \"%s/\"", str_shell_escape(dir))]) )
+when ( local result = Exec::run([$cmd=fmt("ls -1 \"%s/\"", str_shell_escape(dir))]) )
 {
 if ( result$exit_code != 0 )
     {
@@ -44,10 +44,9 @@ event Dir::monitor_ev(dir: string, last_files: set[string],

 for ( i in files )
     {
-    local parts = split1(files[i], / /);
-    if ( parts[1] !in last_files )
-        callback(build_path_compressed(dir, parts[2]));
-    add current_files[parts[1]];
+    if ( files[i] !in last_files )
+        callback(build_path_compressed(dir, files[i]));
+    add current_files[files[i]];
     }

 schedule poll_interval
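
For context, the handler above backs the Dir module's polling entry point; a minimal usage sketch follows. The directory, interval, and the assumption that Dir::monitor wraps monitor_ev with this signature are illustrative, not part of this change:

    @load base/utils/dir

    event bro_init()
        {
        # Print the name of every new file that appears in /tmp/incoming,
        # polling every 30 seconds.
        Dir::monitor("/tmp/incoming",
                     function(fname: string) { print fname; },
                     30 secs);
        }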
@@ -17,7 +17,8 @@ export {
 [::1]/128,
 } &redef;

-## Networks that are considered "local".
+## Networks that are considered "local". Note that BroControl sets
+## this automatically.
 const local_nets: set[subnet] &redef;

 ## This is used for retrieving the subnet when using multiple entries in
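
When BroControl is not managing this setting, sites typically populate it from their own policy; a minimal sketch (the subnets shown are placeholders only):

    # local.bro -- example values only; substitute your own address space.
    redef Site::local_nets += {
        10.0.0.0/8,
        192.168.0.0/16,
    };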
@@ -1,8 +1,9 @@
 ##! This is a utility script that implements the controller interface for the
 ##! control framework. It's intended to be run to control a remote Bro
 ##! and then shutdown.
 ##!
 ##! It's intended to be used from the command line like this::
+##!
 ##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>]

 @load base/frameworks/control
@@ -1,6 +1,6 @@
 ##! This script enables logging of packet segment data when a protocol
-##! parsing violation is encountered. The amount of
-##! data from the packet logged is set by the packet_segment_size variable.
+##! parsing violation is encountered. The amount of data from the
+##! packet logged is set by the :bro:see:`DPD::packet_segment_size` variable.
 ##! A caveat to logging packet data is that in some cases, the packet may
 ##! not be the packet that actually caused the protocol violation.

@@ -10,8 +10,8 @@ module DPD;

 export {
 redef record Info += {
-## A chunk of the payload the most likely resulted in the protocol
-## violation.
+## A chunk of the payload that most likely resulted in the
+## protocol violation.
 packet_segment: string &optional &log;
 };
@@ -23,10 +23,10 @@ export {
 /application\/jar/ |
 /video\/mp4/ &redef;

-## The malware hash registry runs each malware sample through several A/V engines.
-## Team Cymru returns a percentage to indicate how many A/V engines flagged the
-## sample as malicious. This threshold allows you to require a minimum detection
-## rate.
+## The malware hash registry runs each malware sample through several
+## A/V engines. Team Cymru returns a percentage to indicate how
+## many A/V engines flagged the sample as malicious. This threshold
+## allows you to require a minimum detection rate.
 const notice_threshold = 10 &redef;
 }
@@ -1,4 +1,4 @@
-# Perform MD5 and SHA1 hashing on all files.
+##! Perform MD5 and SHA1 hashing on all files.

 event file_new(f: fa_file)
 {
@@ -18,7 +18,7 @@ export {
 do_notice: bool &default=F;

 ## Restrictions on when notices are created to only create
-## them if the do_notice field is T and the notice was
+## them if the *do_notice* field is T and the notice was
 ## seen in the indicated location.
 if_in: Intel::Where &optional;
 };
scripts/policy/frameworks/intel/seen/README (new file, 1 line)
@@ -0,0 +1 @@
+Scripts that send data to the intelligence framework.
@@ -8,23 +8,23 @@ export {
 const max_bpf_shunts = 100 &redef;

 ## Call this function to use BPF to shunt a connection (to prevent the
-## data packets from reaching Bro). For TCP connections, control packets
-## are still allowed through so that Bro can continue logging the connection
-## and it can stop shunting once the connection ends.
+## data packets from reaching Bro). For TCP connections, control
+## packets are still allowed through so that Bro can continue logging
+## the connection and it can stop shunting once the connection ends.
 global shunt_conn: function(id: conn_id): bool;

-## This function will use a BPF expresssion to shunt traffic between
+## This function will use a BPF expression to shunt traffic between
 ## the two hosts given in the `conn_id` so that the traffic is never
 ## exposed to Bro's traffic processing.
 global shunt_host_pair: function(id: conn_id): bool;

 ## Remove shunting for a host pair given as a `conn_id`. The filter
-## is not immediately removed. It waits for the occassional filter
+## is not immediately removed. It waits for the occasional filter
 ## update done by the `PacketFilter` framework.
 global unshunt_host_pair: function(id: conn_id): bool;

-## Performs the same function as the `unshunt_host_pair` function, but
-## it forces an immediate filter update.
+## Performs the same function as the :bro:id:`PacketFilter::unshunt_host_pair`
+## function, but it forces an immediate filter update.
 global force_unshunt_host_pair: function(id: conn_id): bool;

 ## Retrieve the currently shunted connections.
@@ -34,12 +34,13 @@ export {
 global current_shunted_host_pairs: function(): set[conn_id];

 redef enum Notice::Type += {
-## Indicative that :bro:id:`PacketFilter::max_bpf_shunts` connections
-## are already being shunted with BPF filters and no more are allowed.
+## Indicative that :bro:id:`PacketFilter::max_bpf_shunts`
+## connections are already being shunted with BPF filters and
+## no more are allowed.
 No_More_Conn_Shunts_Available,

-## Limitations in BPF make shunting some connections with BPF impossible.
-## This notice encompasses those various cases.
+## Limitations in BPF make shunting some connections with BPF
+## impossible. This notice encompasses those various cases.
 Cannot_BPF_Shunt_Conn,
 };
 }
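
For orientation, a minimal sketch of how a site policy might call the shunt API declared above; the helper name and the decision about when to call it are illustrative, not part of this change:

    # Stop inspecting a connection that local policy has decided is a
    # sanctioned bulk transfer; warn if the shunt could not be installed.
    function stop_inspecting(c: connection)
        {
        if ( ! PacketFilter::shunt_conn(c$id) )
            Reporter::warning(fmt("could not shunt %s -> %s",
                                  c$id$orig_h, c$id$resp_h));
        }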
@@ -1,4 +1,4 @@
-##! Provides the possibly to define software names that are interesting to
+##! Provides the possibility to define software names that are interesting to
 ##! watch for changes. A notice is generated if software versions change on a
 ##! host.

@@ -9,15 +9,15 @@ module Software;

 export {
 redef enum Notice::Type += {
-## For certain software, a version changing may matter. In that case,
-## this notice will be generated. Software that matters if the version
-## changes can be configured with the
+## For certain software, a version changing may matter. In that
+## case, this notice will be generated. Software that matters
+## if the version changes can be configured with the
 ## :bro:id:`Software::interesting_version_changes` variable.
 Software_Version_Change,
 };

-## Some software is more interesting when the version changes and this is
-## a set of all software that should raise a notice when a different
+## Some software is more interesting when the version changes and this
+## is a set of all software that should raise a notice when a different
 ## version is seen on a host.
 const interesting_version_changes: set[string] = { } &redef;
 }
@@ -1,5 +1,5 @@
-##! Provides a variable to define vulnerable versions of software and if a
-##! a version of that software as old or older than the defined version a
+##! Provides a variable to define vulnerable versions of software and if
+##! a version of that software is as old or older than the defined version a
 ##! notice will be generated.

 @load base/frameworks/control
@@ -21,7 +21,7 @@ export {
 min: Software::Version &optional;
 ## The maximum vulnerable version. This field is deliberately
 ## not optional because a maximum vulnerable version must
-## always be defined. This assumption may become incorrent
+## always be defined. This assumption may become incorrect
 ## if all future versions of some software are to be considered
 ## vulnerable. :)
 max: Software::Version;
scripts/policy/integration/barnyard2/README (new file, 1 line)
@@ -0,0 +1 @@
+Integration with Barnyard2.
@@ -15,8 +15,8 @@ export {
 alert: AlertData &log;
 };

-## This can convert a Barnyard :bro:type:`Barnyard2::PacketID` value to a
-## :bro:type:`conn_id` value in the case that you might need to index
+## This can convert a Barnyard :bro:type:`Barnyard2::PacketID` value to
+## a :bro:type:`conn_id` value in the case that you might need to index
 ## into an existing data structure elsewhere within Bro.
 global pid2cid: function(p: PacketID): conn_id;
 }

@@ -11,7 +11,7 @@ export {
 generator_id: count; ##< Which generator generated the alert?
 signature_revision: count; ##< Sig revision for this id.
 classification_id: count; ##< Event classification.
-classification: string; ##< Descriptive classification string,
+classification: string; ##< Descriptive classification string.
 priority_id: count; ##< Event priority.
 event_id: count; ##< Event ID.
 } &log;
@@ -3,8 +3,8 @@

 module Intel;

-## These are some fields to add extended compatibility between Bro and the Collective
-## Intelligence Framework
+## These are some fields to add extended compatibility between Bro and the
+## Collective Intelligence Framework.
 redef record Intel::MetaData += {
 ## Maps to the Impact field in the Collective Intelligence Framework.
 cif_impact: string &optional;
scripts/policy/misc/app-stats/README (new file, 1 line)
@@ -0,0 +1 @@
+AppStats collects information about web applications in use on the network.
@@ -1,5 +1,5 @@
-#! AppStats collects information about web applications in use
-#! on the network.
+##! AppStats collects information about web applications in use
+##! on the network.

 @load base/protocols/http
 @load base/protocols/ssl
scripts/policy/misc/app-stats/plugins/README (new file, 1 line)
@@ -0,0 +1 @@
+Plugins for AppStats.
@@ -4,7 +4,7 @@
 ##! the packet capture or it could even be beyond the host. If you are
 ##! capturing from a switch with a SPAN port, it's very possible that
 ##! the switch itself could be overloaded and dropping packets.
-##! Reported loss is computed in terms of number of "gap events" (ACKs
+##! Reported loss is computed in terms of the number of "gap events" (ACKs
 ##! for a sequence number that's above a gap).

 @load base/frameworks/notice
@@ -26,7 +26,7 @@ export {
 ## The time delay between this measurement and the last.
 ts_delta: interval &log;
 ## In the event that there are multiple Bro instances logging
-## to the same host, this distinguishes each peer with it's
+## to the same host, this distinguishes each peer with its
 ## individual name.
 peer: string &log;
 ## Number of missed ACKs from the previous measurement interval.
@@ -34,7 +34,7 @@ export {
 ## Total number of ACKs seen in the previous measurement interval.
 acks: count &log;
 ## Percentage of ACKs seen where the data being ACKed wasn't seen.
-percent_lost: string &log;
+percent_lost: double &log;
 };

 ## The interval at which capture loss reports are created.
@@ -43,7 +43,7 @@ export {
 ## The percentage of missed data that is considered "too much"
 ## when the :bro:enum:`CaptureLoss::Too_Much_Loss` notice should be
 ## generated. The value is expressed as a double between 0 and 1 with 1
-## being 100%
+## being 100%.
 const too_much_loss: double = 0.1 &redef;
 }

@@ -64,7 +64,7 @@ event CaptureLoss::take_measurement(last_ts: time, last_acks: count, last_gaps:
 $ts_delta=now-last_ts,
 $peer=peer_description,
 $acks=acks, $gaps=gaps,
-$percent_lost=fmt("%.3f%%", pct_lost)];
+$percent_lost=pct_lost];

 if ( pct_lost >= too_much_loss*100 )
 NOTICE([$note=Too_Much_Loss,
scripts/policy/misc/detect-traceroute/README (new file, 1 line)
@@ -0,0 +1 @@
+Detect hosts that are running traceroute.
@@ -1,7 +1,8 @@
-##! This script detects a large number of ICMP Time Exceeded messages heading toward
-##! hosts that have sent low TTL packets. It generates a notice when the number of
-##! ICMP Time Exceeded messages for a source-destination pair exceeds a
-##! threshold.
+##! This script detects a large number of ICMP Time Exceeded messages heading
+##! toward hosts that have sent low TTL packets. It generates a notice when the
+##! number of ICMP Time Exceeded messages for a source-destination pair exceeds
+##! a threshold.

 @load base/frameworks/sumstats
 @load base/frameworks/signatures
 @load-sigs ./detect-low-ttls.sig
@@ -20,15 +21,16 @@ export {
 Detected
 };

-## By default this script requires that any host detected running traceroutes
-## first send low TTL packets (TTL < 10) to the traceroute destination host.
-## Changing this this setting to `F` will relax the detection a bit by
-## solely relying on ICMP time-exceeded messages to detect traceroute.
+## By default this script requires that any host detected running
+## traceroutes first send low TTL packets (TTL < 10) to the traceroute
+## destination host. Changing this setting to F will relax the
+## detection a bit by solely relying on ICMP time-exceeded messages to
+## detect traceroute.
 const require_low_ttl_packets = T &redef;

-## Defines the threshold for ICMP Time Exceeded messages for a src-dst pair.
-## This threshold only comes into play after a host is found to be
-## sending low ttl packets.
+## Defines the threshold for ICMP Time Exceeded messages for a src-dst
+## pair. This threshold only comes into play after a host is found to
+## be sending low TTL packets.
 const icmp_time_exceeded_threshold: double = 3 &redef;

 ## Interval at which to watch for the
@@ -40,7 +42,7 @@ export {
 type Info: record {
 ## Timestamp
 ts: time &log;
-## Address initiaing the traceroute.
+## Address initiating the traceroute.
 src: addr &log;
 ## Destination address of the traceroute.
 dst: addr &log;
@@ -1,7 +1,7 @@
-##! This script provides infrastructure for logging devices for which Bro has been
-##! able to determine the MAC address, and it logs them once per day (by default).
-##! The log that is output provides an easy way to determine a count of the devices
-##! in use on a network per day.
+##! This script provides infrastructure for logging devices for which Bro has
+##! been able to determine the MAC address, and it logs them once per day (by
+##! default). The log that is output provides an easy way to determine a count
+##! of the devices in use on a network per day.
 ##!
 ##! .. note::
 ##!
@@ -15,7 +15,8 @@ export {
 ## The known-hosts logging stream identifier.
 redef enum Log::ID += { DEVICES_LOG };

-## The record type which contains the column fields of the known-devices log.
+## The record type which contains the column fields of the known-devices
+## log.
 type DevicesInfo: record {
 ## The timestamp at which the host was detected.
 ts: time &log;
@@ -24,10 +25,10 @@ export {
 };

 ## The set of all known MAC addresses. It can accessed from other
-## to add, and check for, addresses seen in use.
+## scripts to add, and check for, addresses seen in use.
 ##
-## We maintain each entry for 24 hours by default so that the existence of
-## individual addressed is logged each day.
+## We maintain each entry for 24 hours by default so that the existence
+## of individual addresses is logged each day.
 global known_devices: set[string] &create_expire=1day &synchronized &redef;

 ## An event that can be handled to access the :bro:type:`Known::DevicesInfo`
@@ -29,9 +29,10 @@ export {
 #global confirm_filter_installation: event(success: bool);

 redef record Cluster::Node += {
-## A BPF filter for load balancing traffic sniffed on a single interface
-## across a number of processes. In normal uses, this will be assigned
-## dynamically by the manager and installed by the workers.
+## A BPF filter for load balancing traffic sniffed on a single
+## interface across a number of processes. In normal uses, this
+## will be assigned dynamically by the manager and installed by
+## the workers.
 lb_filter: string &optional;
 };
 }
@@ -7,9 +7,9 @@ export {
 redef enum Log::ID += { LOG };

 type Info: record {
-## Name of the script loaded potentially with spaces included before
-## the file name to indicate load depth. The convention is two spaces
-## per level of depth.
+## Name of the script loaded potentially with spaces included
+## before the file name to indicate load depth. The convention
+## is two spaces per level of depth.
 name: string &log;
 };
 }
@@ -36,4 +36,4 @@ event bro_init() &priority=5
 event bro_script_loaded(path: string, level: count)
 {
 Log::write(LoadedScripts::LOG, [$name=cat(depth[level], compress_path(path))]);
 }
@@ -8,7 +8,8 @@ redef profiling_file = open_log_file("prof");
 ## Set the cheap profiling interval.
 redef profiling_interval = 15 secs;

-## Set the expensive profiling interval.
+## Set the expensive profiling interval (multiple of
+## :bro:id:`profiling_interval`).
 redef expensive_profiling_multiple = 20;

 event bro_init()
@@ -1,8 +1,8 @@
-##! TCP Scan detection
-##!
-##! ..Authors: Sheharbano Khattak
-##! Seth Hall
-##! All the authors of the old scan.bro
+##! TCP Scan detection.
+
+# ..Authors: Sheharbano Khattak
+# Seth Hall
+# All the authors of the old scan.bro

 @load base/frameworks/notice
 @load base/frameworks/sumstats
@@ -13,37 +13,38 @@ module Scan;

 export {
 redef enum Notice::Type += {
-## Address scans detect that a host appears to be scanning some number
-## of destinations on a single port. This notice is generated when more
-## than :bro:id:`Scan::addr_scan_threshold` unique hosts are seen over
-## the previous :bro:id:`Scan::addr_scan_interval` time range.
+## Address scans detect that a host appears to be scanning some
+## number of destinations on a single port. This notice is
+## generated when more than :bro:id:`Scan::addr_scan_threshold`
+## unique hosts are seen over the previous
+## :bro:id:`Scan::addr_scan_interval` time range.
 Address_Scan,

-## Port scans detect that an attacking host appears to be scanning a
-## single victim host on several ports. This notice is generated when
-## an attacking host attempts to connect to
+## Port scans detect that an attacking host appears to be
+## scanning a single victim host on several ports. This notice
+## is generated when an attacking host attempts to connect to
 ## :bro:id:`Scan::port_scan_threshold`
 ## unique ports on a single host over the previous
 ## :bro:id:`Scan::port_scan_interval` time range.
 Port_Scan,
 };

-## Failed connection attempts are tracked over this time interval for the address
-## scan detection. A higher interval will detect slower scanners, but may also
-## yield more false positives.
+## Failed connection attempts are tracked over this time interval for
+## the address scan detection. A higher interval will detect slower
+## scanners, but may also yield more false positives.
 const addr_scan_interval = 5min &redef;

-## Failed connection attempts are tracked over this time interval for the port scan
-## detection. A higher interval will detect slower scanners, but may also yield
-## more false positives.
+## Failed connection attempts are tracked over this time interval for
+## the port scan detection. A higher interval will detect slower
+## scanners, but may also yield more false positives.
 const port_scan_interval = 5min &redef;

-## The threshold of a unique number of hosts a scanning host has to have failed
-## connections with on a single port.
+## The threshold of the unique number of hosts a scanning host has to
+## have failed connections with on a single port.
 const addr_scan_threshold = 25.0 &redef;

-## The threshold of a number of unique ports a scanning host has to have failed
-## connections with on a single victim host.
+## The threshold of the number of unique ports a scanning host has to
+## have failed connections with on a single victim host.
 const port_scan_threshold = 15.0 &redef;

 global Scan::addr_scan_policy: hook(scanner: addr, victim: addr, scanned_port: port);
@@ -148,7 +149,7 @@ function is_reverse_failed_conn(c: connection): bool

 ## Generated for an unsuccessful connection attempt. This
 ## event is raised when an originator unsuccessfully attempted
-## to establish a connection. “Unsuccessful” is defined as at least
+## to establish a connection. "Unsuccessful" is defined as at least
 ## tcp_attempt_delay seconds having elapsed since the originator first sent a
 ## connection establishment packet to the destination without seeing a reply.
 event connection_attempt(c: connection)
@@ -160,9 +161,9 @@ event connection_attempt(c: connection)
 add_sumstats(c$id, is_reverse_scan);
 }

-## Generated for a rejected TCP connection. This event is raised when an originator
-## attempted to setup a TCP connection but the responder replied with a RST packet
-## denying it.
+## Generated for a rejected TCP connection. This event is raised when an
+## originator attempted to setup a TCP connection but the responder replied with
+## a RST packet denying it.
 event connection_rejected(c: connection)
 {
 local is_reverse_scan = F;
@@ -173,7 +174,8 @@ event connection_rejected(c: connection)
 }

 ## Generated when an endpoint aborted a TCP connection. The event is raised when
-## one endpoint of an *established* TCP connection aborted by sending a RST packet.
+## one endpoint of an *established* TCP connection aborted by sending a RST
+## packet.
 event connection_reset(c: connection)
 {
 if ( is_failed_conn(c) )
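
For orientation, the thresholds and intervals exported above can be tuned from local policy; a sketch with illustrative values (assumes the scan script is loaded, for example via "@load misc/scan"):

    # Example values only: look back over 10 minutes and require 50 distinct
    # victims before raising Scan::Address_Scan.
    redef Scan::addr_scan_interval = 10min;
    redef Scan::addr_scan_threshold = 50.0;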
@@ -1,4 +1,5 @@
-##! Log memory/packet/lag statistics. Differs from profiling.bro in that this
+##! Log memory/packet/lag statistics. Differs from
+##! :doc:`/scripts/policy/misc/profiling` in that this
 ##! is lighter-weight (much less info, and less load to generate).

 @load base/frameworks/notice
@@ -20,21 +21,23 @@ export {
 mem: count &log;
 ## Number of packets processed since the last stats interval.
 pkts_proc: count &log;
-## Number of events that been processed since the last stats interval.
+## Number of events processed since the last stats interval.
 events_proc: count &log;
-## Number of events that have been queued since the last stats interval.
+## Number of events that have been queued since the last stats
+## interval.
 events_queued: count &log;

-## Lag between the wall clock and packet timestamps if reading live traffic.
+## Lag between the wall clock and packet timestamps if reading
+## live traffic.
 lag: interval &log &optional;
-## Number of packets received since the last stats interval if reading
-## live traffic.
+## Number of packets received since the last stats interval if
+## reading live traffic.
 pkts_recv: count &log &optional;
-## Number of packets dropped since the last stats interval if reading
-## live traffic.
+## Number of packets dropped since the last stats interval if
+## reading live traffic.
 pkts_dropped: count &log &optional;
-## Number of packets seen on the link since the last stats interval
-## if reading live traffic.
+## Number of packets seen on the link since the last stats
+## interval if reading live traffic.
 pkts_link: count &log &optional;
 };
@@ -1,4 +1,4 @@
-##! Deletes the -w tracefile at regular intervals and starts a new file
+##! Deletes the ``-w`` tracefile at regular intervals and starts a new file
 ##! from scratch.

 module TrimTraceFile;
@@ -8,9 +8,9 @@ export {
 const trim_interval = 10 mins &redef;

 ## This event can be generated externally to this script if on-demand
-## tracefile rotation is required with the caveat that the script doesn't
-## currently attempt to get back on schedule automatically and the next
-## trim will likely won't happen on the
+## tracefile rotation is required with the caveat that the script
+## doesn't currently attempt to get back on schedule automatically and
+## the next trim likely won't happen on the
 ## :bro:id:`TrimTraceFile::trim_interval`.
 global go: event(first_trim: bool);
 }
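
To make the on-demand rotation mentioned above concrete, a minimal sketch of raising the event from another script (the trigger point chosen here is illustrative):

    event bro_init()
        {
        # Rotate the -w trace file right away, outside the regular schedule.
        event TrimTraceFile::go(F);
        }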
@@ -15,8 +15,8 @@ export {
 type HostsInfo: record {
 ## The timestamp at which the host was detected.
 ts: time &log;
-## The address that was detected originating or responding to a TCP
-## connection.
+## The address that was detected originating or responding to a
+## TCP connection.
 host: addr &log;
 };
@@ -7,7 +7,7 @@ module Known;

 export {
 redef record DevicesInfo += {
-## The value of the DHCP host name option, if seen
+## The value of the DHCP host name option, if seen.
 dhcp_host_name: string &log &optional;
 };
 }
@@ -10,9 +10,9 @@ module DNS;

 export {
 redef enum Notice::Type += {
-## Raised when a non-local name is found to be pointing at a local host.
-## :bro:id:`Site::local_zones` variable **must** be set appropriately
-## for this detection.
+## Raised when a non-local name is found to be pointing at a
+## local host. The :bro:id:`Site::local_zones` variable
+## **must** be set appropriately for this detection.
 External_Name,
 };
 }
@@ -1,5 +1,5 @@
 ##! FTP brute-forcing detector, triggering when too many rejected usernames or
-##! failed passwords have occured from a single address.
+##! failed passwords have occurred from a single address.

 @load base/protocols/ftp
 @load base/frameworks/sumstats
@@ -10,8 +10,8 @@ module FTP;

 export {
 redef enum Notice::Type += {
-## Indicates a host bruteforcing FTP logins by watching for too many
-## rejected usernames or failed passwords.
+## Indicates a host bruteforcing FTP logins by watching for too
+## many rejected usernames or failed passwords.
 Bruteforcing
 };
@@ -8,10 +8,12 @@ module HTTP;

 export {
 redef enum Notice::Type += {
-## Indicates that a host performing SQL injection attacks was detected.
+## Indicates that a host performing SQL injection attacks was
+## detected.
 SQL_Injection_Attacker,
-## Indicates that a host was seen to have SQL injection attacks against
-## it. This is tracked by IP address as opposed to hostname.
+## Indicates that a host was seen to have SQL injection attacks
+## against it. This is tracked by IP address as opposed to
+## hostname.
 SQL_Injection_Victim,
 };

@@ -19,9 +21,11 @@ export {
 ## Indicator of a URI based SQL injection attack.
 URI_SQLI,
 ## Indicator of client body based SQL injection attack. This is
-## typically the body content of a POST request. Not implemented yet.
+## typically the body content of a POST request. Not implemented
+## yet.
 POST_SQLI,
-## Indicator of a cookie based SQL injection attack. Not implemented yet.
+## Indicator of a cookie based SQL injection attack. Not
+## implemented yet.
 COOKIE_SQLI,
 };
@@ -8,12 +8,12 @@ module HTTP;

 export {
 redef record Info += {
-## The vector of HTTP header names sent by the client. No header
-## values are included here, just the header names.
+## The vector of HTTP header names sent by the client. No
+## header values are included here, just the header names.
 client_header_names: vector of string &log &optional;

-## The vector of HTTP header names sent by the server. No header
-## values are included here, just the header names.
+## The vector of HTTP header names sent by the server. No
+## header values are included here, just the header names.
 server_header_names: vector of string &log &optional;
 };
@@ -1,4 +1,4 @@
-##! Extracts and logs variables names from cookies sent by clients.
+##! Extracts and logs variable names from cookies sent by clients.

 @load base/protocols/http/main
 @load base/protocols/http/utils

@@ -1,4 +1,4 @@
-##! Extracts and log variables from the requested URI in the default HTTP
+##! Extracts and logs variables from the requested URI in the default HTTP
 ##! logging stream.

 @load base/protocols/http
@@ -15,9 +15,9 @@ export {
 const track_memmap: Host = ALL_HOSTS &redef;

 type MemmapInfo: record {
-## Timestamp for the detected register change
+## Timestamp for the detected register change.
 ts: time &log;
-## Unique ID for the connection
+## Unique ID for the connection.
 uid: string &log;
 ## Connection ID.
 id: conn_id &log;
@@ -27,7 +27,8 @@ export {
 old_val: count &log;
 ## The new value stored in the register.
 new_val: count &log;
-## The time delta between when the 'old_val' and 'new_val' were seen.
+## The time delta between when the *old_val* and *new_val* were
+## seen.
 delta: interval &log;
 };

@@ -42,8 +43,8 @@ export {
 ## The memory map of slaves is tracked with this variable.
 global device_registers: table[addr] of Registers;

-## This event is generated every time a register is seen to be different than
-## it was previously seen to be.
+## This event is generated every time a register is seen to be different
+## than it was previously seen to be.
 global changed_register: event(c: connection, register: count, old_val: count, new_val: count, delta: interval);
 }
@@ -8,8 +8,8 @@ export {
 Suspicious_Origination
 };

-## Places where it's suspicious for mail to originate from represented as
-## all-capital, two character country codes (e.x. US). It requires
+## Places where it's suspicious for mail to originate from represented
+## as all-capital, two character country codes (e.g., US). It requires
 ## libGeoIP support built in.
 const suspicious_origination_countries: set[string] = {} &redef;
 const suspicious_origination_networks: set[subnet] = {} &redef;
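A minimal local-site sketch of how the two options documented above could be populated. It assumes the enclosing module is SMTP (not shown in this excerpt); the country codes and the network are placeholders, not recommendations.

    redef SMTP::suspicious_origination_countries += { "RO", "UA" };
    redef SMTP::suspicious_origination_networks += { 192.0.2.0/24 };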
@@ -5,7 +5,7 @@
 ##! TODO:
 ##!
 ##! * Find some heuristic to determine if email was sent through
-##! a MS Exhange webmail interface as opposed to a desktop client.
+##! a MS Exchange webmail interface as opposed to a desktop client.

 @load base/frameworks/software/main
 @load base/protocols/smtp/main
@@ -20,19 +20,19 @@ export {
 };

 redef record Info += {
-## Boolean indicator of if the message was sent through a webmail
-## interface.
+## Boolean indicator of if the message was sent through a
+## webmail interface.
 is_webmail: bool &log &default=F;
 };

-## Assuming that local mail servers are more trustworthy with the headers
-## they insert into messages envelopes, this default makes Bro not attempt
-## to detect software in inbound message bodies. If mail coming in from
-## external addresses gives incorrect data in the Received headers, it
-## could populate your SOFTWARE logging stream with incorrect data.
-## If you would like to detect mail clients for incoming messages
-## (network traffic originating from a non-local address), set this
-## variable to EXTERNAL_HOSTS or ALL_HOSTS.
+## Assuming that local mail servers are more trustworthy with the
+## headers they insert into message envelopes, this default makes Bro
+## not attempt to detect software in inbound message bodies. If mail
+## coming in from external addresses gives incorrect data in
+## the Received headers, it could populate your SOFTWARE logging stream
+## with incorrect data. If you would like to detect mail clients for
+## incoming messages (network traffic originating from a non-local
+## address), set this variable to EXTERNAL_HOSTS or ALL_HOSTS.
 const detect_clients_in_messages_from = LOCAL_HOSTS &redef;

 ## A regular expression to match USER-AGENT-like headers to find if a
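Following the option documented above, a one-line sketch of the redef a site might use to also fingerprint mail clients on inbound messages (assuming the enclosing module is SMTP, which this excerpt does not show):

    redef SMTP::detect_clients_in_messages_from = ALL_HOSTS;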
@@ -11,12 +11,12 @@ module SSH;
 export {
 redef enum Notice::Type += {
 ## Indicates that a host has been identified as crossing the
-## :bro:id:`SSH::password_guesses_limit` threshold with heuristically
-## determined failed logins.
+## :bro:id:`SSH::password_guesses_limit` threshold with
+## heuristically determined failed logins.
 Password_Guessing,
-## Indicates that a host previously identified as a "password guesser"
-## has now had a heuristically successful login attempt. This is not
-## currently implemented.
+## Indicates that a host previously identified as a "password
+## guesser" has now had a heuristically successful login
+## attempt. This is not currently implemented.
 Login_By_Password_Guesser,
 };

@@ -29,8 +29,8 @@ export {
 ## guessing passwords.
 const password_guesses_limit: double = 30 &redef;

-## The amount of time to remember presumed non-successful logins to build
-## model of a password guesser.
+## The amount of time to remember presumed non-successful logins to
+## build a model of a password guesser.
 const guessing_timeout = 30 mins &redef;

 ## This value can be used to exclude hosts or entire networks from being
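Since the hunk context shows these constants live in module SSH, a minimal tuning sketch for them follows; the values are placeholders, not recommendations.

    redef SSH::password_guesses_limit = 50;
    redef SSH::guessing_timeout = 60 mins;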
@@ -7,14 +7,15 @@ module SSH;

 export {
 redef enum Notice::Type += {
-## If an SSH login is seen to or from a "watched" country based on the
-## :bro:id:`SSH::watched_countries` variable then this notice will
-## be generated.
+## If an SSH login is seen to or from a "watched" country based
+## on the :bro:id:`SSH::watched_countries` variable then this
+## notice will be generated.
 Watched_Country_Login,
 };

 redef record Info += {
-## Add geographic data related to the "remote" host of the connection.
+## Add geographic data related to the "remote" host of the
+## connection.
 remote_location: geo_location &log &optional;
 };
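A usage sketch for the variable referenced above. SSH::watched_countries is assumed here to be a redef-able set of two-letter country-code strings (its declaration is outside this excerpt), and the codes below are placeholders:

    redef SSH::watched_countries += { "KP", "SY" };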
@@ -10,8 +10,8 @@ module SSH;

 export {
 redef enum Notice::Type += {
-## Generated if a login originates or responds with a host where the
-## reverse hostname lookup resolves to a name matched by the
+## Generated if a login originates or responds with a host where
+## the reverse hostname lookup resolves to a name matched by the
 ## :bro:id:`SSH::interesting_hostnames` regular expression.
 Interesting_Hostname_Login,
 };
@@ -12,13 +12,14 @@ module SSL;

 export {
 redef enum Notice::Type += {
-## Indicates that a certificate's NotValidAfter date has lapsed and
-## the certificate is now invalid.
+## Indicates that a certificate's NotValidAfter date has lapsed
+## and the certificate is now invalid.
 Certificate_Expired,
 ## Indicates that a certificate is going to expire within
 ## :bro:id:`SSL::notify_when_cert_expiring_in`.
 Certificate_Expires_Soon,
-## Indicates that a certificate's NotValidBefore date is future dated.
+## Indicates that a certificate's NotValidBefore date is future
+## dated.
 Certificate_Not_Valid_Yet,
 };

@@ -29,8 +30,8 @@ export {
 ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
 const notify_certs_expiration = LOCAL_HOSTS &redef;

-## The time before a certificate is going to expire that you would like to
-## start receiving :bro:enum:`SSL::Certificate_Expires_Soon` notices.
+## The time before a certificate is going to expire that you would like
+## to start receiving :bro:enum:`SSL::Certificate_Expires_Soon` notices.
 const notify_when_cert_expiring_in = 30days &redef;
 }
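Given that the hunk context shows module SSL, a short tuning sketch for the two options above; the values are placeholders.

    redef SSL::notify_certs_expiration = REMOTE_HOSTS;
    redef SSL::notify_when_cert_expiring_in = 14 days;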
@@ -5,8 +5,8 @@
 ##! .. note::
 ##!
 ##! - It doesn't work well on a cluster because each worker will write its
-##! own certificate files and no duplicate checking is done across
-##! clusters so each node would log each certificate.
+##! own certificate files and no duplicate checking is done across the
+##! cluster so each node would log each certificate.
 ##!

 @load base/protocols/ssl
@@ -18,7 +18,7 @@ module SSL;
 export {
 ## Control if host certificates offered by the defined hosts
 ## will be written to the PEM certificates file.
-## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
+## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS.
 const extract_certs_pem = LOCAL_HOSTS &redef;
 }
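For the option documented above (module SSL per the hunk context), a one-line sketch enabling PEM extraction for certificates from all hosts:

    redef SSL::extract_certs_pem = ALL_HOSTS;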
@@ -1,4 +1,5 @@
-##! Log information about certificates while attempting to avoid duplicate logging.
+##! Log information about certificates while attempting to avoid duplicate
+##! logging.

 @load base/utils/directions-and-hosts
 @load base/protocols/ssl
@@ -26,7 +27,7 @@ export {
 };

 ## The certificates whose existence should be logged and tracked.
-## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
+## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS.
 const cert_tracking = LOCAL_HOSTS &redef;

 ## The set of all known certificates to store for preventing duplicate
@@ -35,7 +36,7 @@ export {
 ## in the set is for storing the DER formatted certificate's MD5 hash.
 global certs: set[addr, string] &create_expire=1day &synchronized &redef;

 ## Event that can be handled to access the loggable record as it is sent
 ## on to the logging framework.
 global log_known_certs: event(rec: CertsInfo);
 }
@@ -8,8 +8,9 @@ module SSL;

 export {
 redef enum Notice::Type += {
-## This notice indicates that the result of validating the certificate
-## along with it's full certificate chain was invalid.
+## This notice indicates that the result of validating the
+## certificate along with its full certificate chain was
+## invalid.
 Invalid_Server_Cert
 };

@@ -18,9 +19,9 @@ export {
 validation_status: string &log &optional;
 };

-## MD5 hash values for recently validated certs along with the validation
-## status message are kept in this table to avoid constant validation
-## everytime the same certificate is seen.
+## MD5 hash values for recently validated certs along with the
+## validation status message are kept in this table to avoid constant
+## validation every time the same certificate is seen.
 global recently_validated_certs: table[string] of string = table()
 &read_expire=5mins &synchronized &redef;
 }
scripts/policy/tuning/README (new file)
@@ -0,0 +1 @@
+Miscellaneous tuning parameters.
scripts/policy/tuning/defaults/README (new file)
@@ -0,0 +1,2 @@
+Sets various defaults, and prints warning messages to stdout under
+certain conditions.
@@ -12,8 +12,8 @@ export {

 ## If you want to explicitly only send certain :bro:type:`Log::ID`
 ## streams, add them to this set. If the set remains empty, all will
-## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option will remain in
-## effect as well.
+## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option
+## will remain in effect as well.
 const send_logs: set[Log::ID] &redef;
 }
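As a usage sketch for the option above, a site could restrict ElasticSearch output to a couple of streams; the stream names here are just examples.

    redef LogElasticSearch::send_logs += { Conn::LOG, HTTP::LOG };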
src/Val.cc (12 changes)
@@ -2720,16 +2720,22 @@ RecordVal* RecordVal::CoerceTo(const RecordType* t, Val* aggr, bool allow_orphan
 break;
 }

+Val* v = Lookup(i);
+
+if ( ! v )
+// Check for allowable optional fields is outside the loop, below.
+continue;
+
 if ( ar_t->FieldType(t_i)->Tag() == TYPE_RECORD
-&& ! same_type(ar_t->FieldType(t_i), Lookup(i)->Type()) )
+&& ! same_type(ar_t->FieldType(t_i), v->Type()) )
 {
-Expr* rhs = new ConstExpr(Lookup(i)->Ref());
+Expr* rhs = new ConstExpr(v->Ref());
 Expr* e = new RecordCoerceExpr(rhs, ar_t->FieldType(t_i)->AsRecordType());
 ar->Assign(t_i, e->Eval(0));
 continue;
 }

-ar->Assign(t_i, Lookup(i)->Ref());
+ar->Assign(t_i, v->Ref());
 }

 for ( i = 0; i < ar_t->NumFields(); ++i )
@@ -2,65 +2,93 @@
 ## Generated for a DNP3 request header.
 ##
 ## c: The connection the DNP3 communication is part of.
+##
 ## is_orig: True if this reflects originator-side activity.
+##
 ## fc: function code.
+##
 event dnp3_application_request_header%(c: connection, is_orig: bool, fc: count%);

 ## Generated for a DNP3 response header.
 ##
 ## c: The connection the DNP3 communication is part of.
+##
 ## is_orig: True if this reflects originator-side activity.
+##
 ## fc: function code.
-## iin: internal indication number
+##
+## iin: internal indication number.
+##
 event dnp3_application_response_header%(c: connection, is_orig: bool, fc: count, iin: count%);

 ## Generated for the object header found in both DNP3 requests and responses.
 ##
 ## c: The connection the DNP3 communication is part of.
+##
 ## is_orig: True if this reflects originator-side activity.
-## obj_type: type of object, which is classified based on an 8-bit group number and an 8-bit variation number
-## qua_field: qualifier field
+##
+## obj_type: type of object, which is classified based on an 8-bit group number
+## and an 8-bit variation number.
+##
+## qua_field: qualifier field.
+##
 ## rf_low: the structure of the range field depends on the qualified field.
-## In some cases, range field contains only one logic part, e.g.,
-## number of objects, so only *rf_low* contains the useful values.
-## rf_high: in some cases, range field contain two logic parts, e.g., start
-## index and stop index, so *rf_low* contains the start index while
+## In some cases, the range field contains only one logic part, e.g.,
+## number of objects, so only *rf_low* contains useful values.
+##
+## rf_high: in some cases, the range field contains two logic parts, e.g., start
+## index and stop index, so *rf_low* contains the start index
 ## while *rf_high* contains the stop index.
+##
 event dnp3_object_header%(c: connection, is_orig: bool, obj_type: count, qua_field: count, number: count, rf_low: count, rf_high: count%);

 ## Generated for the prefix before a DNP3 object. The structure and the meaning
 ## of the prefix are defined by the qualifier field.
 ##
 ## c: The connection the DNP3 communication is part of.
+##
 ## is_orig: True if this reflects originator-side activity.
+##
 ## prefix_value: The prefix.
+##
 event dnp3_object_prefix%(c: connection, is_orig: bool, prefix_value: count%);

 ## Generated for an additional header that the DNP3 analyzer passes to the
-## script-level. This headers mimics the DNP3 transport-layer yet is only passed
+## script-level. This header mimics the DNP3 transport-layer yet is only passed
 ## once for each sequence of DNP3 records (which are otherwise reassembled and
 ## treated as a single entity).
 ##
 ## c: The connection the DNP3 communication is part of.
+##
 ## is_orig: True if this reflects originator-side activity.
-## start: the first two bytes of the DNP3 Pseudo Link Layer; its value is fixed as 0x0564
-## len: the "length" field in the DNP3 Pseudo Link Layer
-## ctrl: the "control" field in the DNP3 Pseudo Link Layer
-## dest_addr: the "destination" field in the DNP3 Pseudo Link Layer
-## src_addr: the "source" field in the DNP3 Pseudo Link Layer
+##
+## start: the first two bytes of the DNP3 Pseudo Link Layer; its value is fixed
+## as 0x0564.
+##
+## len: the "length" field in the DNP3 Pseudo Link Layer.
+##
+## ctrl: the "control" field in the DNP3 Pseudo Link Layer.
+##
+## dest_addr: the "destination" field in the DNP3 Pseudo Link Layer.
+##
+## src_addr: the "source" field in the DNP3 Pseudo Link Layer.
+##
 event dnp3_header_block%(c: connection, is_orig: bool, start: count, len: count, ctrl: count, dest_addr: count, src_addr: count%);

 ## Generated for a DNP3 "Response_Data_Object".
 ## The "Response_Data_Object" contains two parts: object prefix and object
-## data. In most cases, objects data are defined by new record types. But
-## in a few cases, objects data are directly basic types, such as int16, or
-## int8; thus we use a additional data_value to record the values of those
+## data. In most cases, object data are defined by new record types. But
+## in a few cases, object data are directly basic types, such as int16, or
+## int8; thus we use an additional *data_value* to record the values of those
 ## object data.
 ##
 ## c: The connection the DNP3 communication is part of.
+##
 ## is_orig: True if this reflects originator-side activity.
+##
 ## data_value: The value for those objects that carry their information here
 ## directly.
+##
 event dnp3_response_data_object%(c: connection, is_orig: bool, data_value: count%);

 ## Generated for DNP3 attributes.
@@ -238,6 +266,6 @@ event dnp3_frozen_analog_input_event_DPwTime%(c: connection, is_orig: bool, flag
 event dnp3_file_transport%(c: connection, is_orig: bool, file_handle: count, block_num: count, file_data: string%);

 ## Debugging event generated by the DNP3 analyzer. The "Debug_Byte" binpac unit
-## generates this for unknown "cases". The user can use it to debug the byte string
-## to check what cause the malformed network packets.
+## generates this for unknown "cases". The user can use it to debug the byte
+## string to check what caused the malformed network packets.
 event dnp3_debug_byte%(c: connection, is_orig: bool, debug: string%);
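The event signatures above can be handled directly from script-land; as a brief sketch, this prints the function code of every DNP3 request (the message formatting is just an example):

    event dnp3_application_request_header(c: connection, is_orig: bool, fc: count)
        {
        print fmt("DNP3 request from %s, function code %d", c$id$orig_h, fc);
        }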
@@ -116,11 +116,12 @@ static Val* parse_eftp(const char* line)
 }
 %%}

-## Converts a string representation of the FTP PORT command to an ``ftp_port``.
+## Converts a string representation of the FTP PORT command to an
+## :bro:type:`ftp_port`.
 ##
 ## s: The string of the FTP PORT command, e.g., ``"10,0,0,1,4,31"``.
 ##
-## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
+## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
 ##
 ## .. bro:see:: parse_eftp_port parse_ftp_pasv parse_ftp_epsv fmt_ftp_port
 function parse_ftp_port%(s: string%): ftp_port
@@ -128,14 +129,14 @@ function parse_ftp_port%(s: string%): ftp_port
 return parse_port(s->CheckString());
 %}

-## Converts a string representation of the FTP EPRT command to an ``ftp_port``.
-## See `RFC 2428 <http://tools.ietf.org/html/rfc2428>`_.
-## The format is ``EPRT<space><d><net-prt><d><net-addr><d><tcp-port><d>``,
+## Converts a string representation of the FTP EPRT command (see :rfc:`2428`)
+## to an :bro:type:`ftp_port`. The format is
+## ``"EPRT<space><d><net-prt><d><net-addr><d><tcp-port><d>"``,
 ## where ``<d>`` is a delimiter in the ASCII range 33-126 (usually ``|``).
 ##
 ## s: The string of the FTP EPRT command, e.g., ``"|1|10.0.0.1|1055|"``.
 ##
-## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
+## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
 ##
 ## .. bro:see:: parse_ftp_port parse_ftp_pasv parse_ftp_epsv fmt_ftp_port
 function parse_eftp_port%(s: string%): ftp_port
@@ -143,11 +144,11 @@ function parse_eftp_port%(s: string%): ftp_port
 return parse_eftp(s->CheckString());
 %}

-## Converts the result of the FTP PASV command to an ``ftp_port``.
+## Converts the result of the FTP PASV command to an :bro:type:`ftp_port`.
 ##
 ## str: The string containing the result of the FTP PASV command.
 ##
-## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
+## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
 ##
 ## .. bro:see:: parse_ftp_port parse_eftp_port parse_ftp_epsv fmt_ftp_port
 function parse_ftp_pasv%(str: string%): ftp_port
@@ -168,14 +169,13 @@ function parse_ftp_pasv%(str: string%): ftp_port
 return parse_port(line);
 %}

-## Converts the result of the FTP EPSV command to an ``ftp_port``.
-## See `RFC 2428 <http://tools.ietf.org/html/rfc2428>`_.
-## The format is ``<text> (<d><d><d><tcp-port><d>)``, where ``<d>`` is a
-## delimiter in the ASCII range 33-126 (usually ``|``).
+## Converts the result of the FTP EPSV command (see :rfc:`2428`) to an
+## :bro:type:`ftp_port`. The format is ``"<text> (<d><d><d><tcp-port><d>)"``,
+## where ``<d>`` is a delimiter in the ASCII range 33-126 (usually ``|``).
 ##
 ## str: The string containing the result of the FTP EPSV command.
 ##
-## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``
+## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]``.
 ##
 ## .. bro:see:: parse_ftp_port parse_eftp_port parse_ftp_pasv fmt_ftp_port
 function parse_ftp_epsv%(str: string%): ftp_port
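A small script-side usage sketch for the conversion functions documented above, using the same example string as the docs:

    event bro_init()
        {
        local p = parse_ftp_port("10,0,0,1,4,31");
        if ( p$valid )
            print fmt("FTP data endpoint: %s on %s", p$h, p$p);
        }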
@@ -42,11 +42,11 @@ function skip_http_entity_data%(c: connection, is_orig: bool%): any
 ##
 ## .. note::
 ##
-## Unescaping reserved characters may cause loss of information. RFC 2396:
-## A URI is always in an "escaped" form, since escaping or unescaping a
-## completed URI might change its semantics. Normally, the only time
-## escape encodings can safely be made is when the URI is being created
-## from its component parts.
+## Unescaping reserved characters may cause loss of information.
+## :rfc:`2396`: A URI is always in an "escaped" form, since escaping or
+## unescaping a completed URI might change its semantics. Normally, the
+## only time escape encodings can safely be made is when the URI is
+## being created from its component parts.
 function unescape_URI%(URI: string%): string
 %{
 const u_char* line = URI->Bytes();
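A quick usage sketch for the function above; the example URI and the expected output are illustrative only.

    event bro_init()
        {
        # Percent-escapes such as %20 are expected to be decoded,
        # e.g. "/download%20area/index.html" -> "/download area/index.html".
        print unescape_URI("/download%20area/index.html");
        }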
@@ -1,7 +1,6 @@
 ## Generated for client side commands on an RSH connection.
 ##
-## See `RFC 1258 <http://tools.ietf.org/html/rfc1258>`__ for more information
-## about the Rlogin/Rsh protocol.
+## See :rfc:`1258` for more information about the Rlogin/Rsh protocol.
 ##
 ## c: The connection.
 ##
@@ -30,8 +29,7 @@ event rsh_request%(c: connection, client_user: string, server_user: string, line

 ## Generated for client side commands on an RSH connection.
 ##
-## See `RFC 1258 <http://tools.ietf.org/html/rfc1258>`__ for more information
-## about the Rlogin/Rsh protocol.
+## See :rfc:`1258` for more information about the Rlogin/Rsh protocol.
 ##
 ## c: The connection.
 ##
@@ -1,4 +1,4 @@
-## Generated for any modbus message regardless if the particular function
+## Generated for any Modbus message regardless if the particular function
 ## is further supported or not.
 ##
 ## c: The connection.
@@ -8,7 +8,7 @@
 ## is_orig: True if the event is raised for the originator side.
 event modbus_message%(c: connection, headers: ModbusHeaders, is_orig: bool%);

-## Generated for any modbus exception message.
+## Generated for any Modbus exception message.
 ##
 ## c: The connection.
 ##
@@ -23,7 +23,7 @@ event modbus_exception%(c: connection, headers: ModbusHeaders, code: count%);
 ##
 ## headers: The headers for the modbus function.
 ##
-## start_address: The memory address where of the first coil to be read.
+## start_address: The memory address of the first coil to be read.
 ##
 ## quantity: The number of coils to be read.
 event modbus_read_coils_request%(c: connection, headers: ModbusHeaders, start_address: count, quantity: count%);
@@ -191,8 +191,8 @@ event modbus_write_multiple_registers_response%(c: connection, headers: ModbusHe
 ##
 ## headers: The headers for the modbus function.
 ##
-## .. note: This event is incomplete. The information from the data structure is not
-## yet passed through to the event.
+## .. note: This event is incomplete. The information from the data structure
+## is not yet passed through to the event.
 event modbus_read_file_record_request%(c: connection, headers: ModbusHeaders%);

 ## Generated for a Modbus read file record response.
@@ -201,8 +201,8 @@ event modbus_read_file_record_request%(c: connection, headers: ModbusHeaders%);
 ##
 ## headers: The headers for the modbus function.
 ##
-## .. note: This event is incomplete. The information from the data structure is not
-## yet passed through to the event.
+## .. note: This event is incomplete. The information from the data structure
+## is not yet passed through to the event.
 event modbus_read_file_record_response%(c: connection, headers: ModbusHeaders%);

 ## Generated for a Modbus write file record request.
@@ -211,8 +211,8 @@ event modbus_read_file_record_response%(c: connection, headers: ModbusHeaders%);
 ##
 ## headers: The headers for the modbus function.
 ##
-## .. note: This event is incomplete. The information from the data structure is not
-## yet passed through to the event.
+## .. note: This event is incomplete. The information from the data structure
+## is not yet passed through to the event.
 event modbus_write_file_record_request%(c: connection, headers: ModbusHeaders%);

 ## Generated for a Modbus write file record response.
@@ -221,8 +221,8 @@ event modbus_write_file_record_request%(c: connection, headers: ModbusHeaders%);
 ##
 ## headers: The headers for the modbus function.
 ##
-## .. note: This event is incomplete. The information from the data structure is not
-## yet passed through to the event.
+## .. note: This event is incomplete. The information from the data structure
+## is not yet passed through to the event.
 event modbus_write_file_record_response%(c: connection, headers: ModbusHeaders%);

 ## Generated for a Modbus mask write register request.
@@ -272,7 +272,8 @@ event modbus_read_write_multiple_registers_request%(c: connection, headers: Modb
 ##
 ## headers: The headers for the modbus function.
 ##
-## written_registers: The register values read from the registers specified in the request.
+## written_registers: The register values read from the registers specified in
+## the request.
 event modbus_read_write_multiple_registers_response%(c: connection, headers: ModbusHeaders, written_registers: ModbusRegisters%);

 ## Generated for a Modbus read FIFO queue request.
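As with the DNP3 events, these Modbus events can be handled directly from a site script; a minimal sketch that logs read-coils requests to stdout follows (the output format is just an example):

    event modbus_read_coils_request(c: connection, headers: ModbusHeaders, start_address: count, quantity: count)
        {
        print fmt("Modbus read coils from %s: start=%d, quantity=%d",
                  c$id$orig_h, start_address, quantity);
        }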
@@ -3,7 +3,7 @@
 ## its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -12,7 +12,7 @@
 ## is_orig: True if the message was sent by the originator of the connection.
 ##
 ## msg_type: The general type of message, as defined in Section 4.3.1 of
-## `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__.
+## :rfc:`1002`.
 ##
 ## data_len: The length of the message's payload.
 ##
@@ -35,7 +35,7 @@ event netbios_session_message%(c: connection, is_orig: bool, msg_type: count, da
 ## (despite its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -63,7 +63,7 @@ event netbios_session_request%(c: connection, msg: string%);
 ## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -91,7 +91,7 @@ event netbios_session_accepted%(c: connection, msg: string%);
 ## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -121,7 +121,7 @@ event netbios_session_rejected%(c: connection, msg: string%);
 ## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -154,7 +154,7 @@ event netbios_session_raw_message%(c: connection, is_orig: bool, msg: string%);
 ## (despite its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -184,7 +184,7 @@ event netbios_session_ret_arg_resp%(c: connection, msg: string%);
 ## its name!) the NetBIOS datagram service on UDP port 138.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/NetBIOS>`__ for more information
-## about NetBIOS. `RFC 1002 <http://tools.ietf.org/html/rfc1002>`__ describes
+## about NetBIOS. :rfc:`1002` describes
 ## the packet format for NetBIOS over TCP/IP, which Bro parses.
 ##
 ## c: The connection, which may be TCP or UDP, depending on the type of the
@@ -123,7 +123,7 @@ event ssl_alert%(c: connection, is_orig: bool, level: count, desc: count%);
 ## an unencrypted handshake, and Bro extracts as much information out of that
 ## as it can. This event is raised when an SSL/TLS server passes a session
 ## ticket to the client that can later be used for resuming the session. The
-## mechanism is described in :rfc:`4507`
+## mechanism is described in :rfc:`4507`.
 ##
 ## See `Wikipedia <http://en.wikipedia.org/wiki/Transport_Layer_Security>`__ for
 ## more information about the SSL/TLS protocol.
@@ -93,13 +93,12 @@ function get_gap_summary%(%): gap_info
 ##
 ## - ``CONTENTS_NONE``: Stop recording the connection's content.
 ## - ``CONTENTS_ORIG``: Record the data sent by the connection
 ## originator (often the client).
 ## - ``CONTENTS_RESP``: Record the data sent by the connection
 ## responder (often the server).
 ## - ``CONTENTS_BOTH``: Record the data sent in both directions.
-## Results in the two directions being
-## intermixed in the file, in the order the
-## data was seen by Bro.
+## Results in the two directions being intermixed in the file,
+## in the order the data was seen by Bro.
 ##
 ## f: The file handle of the file to write the contents to.
 ##
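The enclosing function's name is not visible in this excerpt, but the parameters documented above match the stock set_contents_file BIF; assuming that, a small sketch that records both directions of FTP control connections to per-connection files (the file naming is arbitrary):

    event connection_established(c: connection)
        {
        if ( c$id$resp_p == 21/tcp )
            set_contents_file(c$id, CONTENTS_BOTH, open(fmt("contents.%s.dat", c$uid)));
        }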
Some files were not shown because too many files have changed in this diff.