Merge remote-tracking branch 'origin/topic/jsiwek/file-analysis' into topic/seth/file-analysis-exe-analyzer

Conflicts:
	src/CMakeLists.txt
	src/file_analysis.bif
	src/file_analysis/Info.cc
This commit is contained in:
Seth Hall 2013-03-28 00:21:01 -04:00
commit e0276384e7
318 changed files with 8499 additions and 2109 deletions

CHANGES

@ -1,4 +1,127 @@
2.1-386 | 2013-03-22 12:41:50 -0700
* Added reverse() function to strings.bif. (Yun Zheng Hu)
2.1-384 | 2013-03-22 12:10:14 -0700
* Fix record constructors in table initializer indices. Addresses
#660. (Jon Siwek)
2.1-382 | 2013-03-22 12:01:34 -0700
* Add support for 802.1ah (Q-in-Q). Addresses #641. (Seth Hall)
2.1-380 | 2013-03-18 12:18:10 -0700
* Fix gcc compile warnings in base64 encoder and benchmark reader.
(Bernhard Amann)
2.1-377 | 2013-03-17 17:36:09 -0700
* Fixing potential leak in DNS error case. (Vlad Grigorescu)
2.1-375 | 2013-03-17 13:14:26 -0700
* Add base64 encoding functionality, including new BiFs
encode_base64() and encode_base64_custom(). (Bernhard Amann)
* Replace call to external "openssl" in extract-certs-pem.bro with
the new encode_base64(). (Bernhard Amann)
* Adding a test for extract-certs-pem.pem. (Robin Sommer)
* Renaming Base64Decoder to Base64Converter. (Robin Sommer)
2.1-366 | 2013-03-17 12:35:59 -0700
* Correctly handle DNS lookups for software version ranges. (Seth
Hall)
* Improvements to vulnerable software detection. (Seth Hall)
- Add a DNS based updating method. This needs to be tested
still.
- Vulnerable version ranges are used now instead of only single
versions. This can deal with software with multiple stable
major versions.
* Update software version parsing and comparison to account for a
third numeric subversion. Also, $addl is now compared numerically
if the value is actually numeric. (Seth Hall)
2.1-361 | 2013-03-13 07:18:22 -0700
* Add check for truncated link frames. Addresses #962. (Jacob
Baines)
* Fix large memory allocation in IP fragment reassembly. Addresses
#961. (Jacob Baines)
2.1-357 | 2013-03-08 09:18:35 -0800
* Fix race-condition in table-event test. (Bernhard Amann)
* s/bro-ids.org/bro.org/g. (Robin Sommer)
2.1-353 | 2013-03-07 13:31:37 -0800
* Fix function type-equivalence requiring same parameter names.
Addresses #957. (Jon Siwek)
2.1-351 | 2013-03-07 13:27:29 -0800
* Fix new/delete mismatch. Addresses #958. (Jacob Baines)
* Fix compiler warnings. (Jon Siwek)
2.1-347 | 2013-03-06 16:48:44 -0800
* Remove unused parameter from vector assignment method. (Bernhard Amann)
* Remove the byte_len() and length() bifs. (Bernhard Amann)
2.1-342 | 2013-03-06 15:42:52 -0800
* Moved the Notice::notice event and Notice::policy table to both be
hooks. See documentation and NEWS for information. (Seth Hall).
2.1-338 | 2013-03-06 15:10:43 -0800
* Fix init of local sets/vectors via curly brace initializer lists.
(Jon Siwek)
2.1-336 | 2013-03-06 15:08:06 -0800
* Fix memory leaks resulting from 'when' and 'return when'
statements. Addresses #946. (Jon Siwek)
* Fix three bugs with 'when' and 'return when' statements. Addresses
#946. (Jon Siwek)
2.1-333 | 2013-03-06 14:59:47 -0800
* Add parsing for GTPv1 extension headers and control messages. (Jon Siwek)
This includes:
- A new generic gtpv1_message() event generated for any GTP
message type.
- Specific events for the create/update/delete PDP context
request/response messages.
Addresses #934.
2.1-331 | 2013-03-06 14:54:33 -0800
* Fix possible null pointer dereference in identify_data BIF. Also
centralized libmagic calls for consistent error handling/output.
(Jon Siwek)
* Fix build on OpenBSD 5.2. (Jon Siwek)
2.1-328 | 2013-02-05 01:34:29 -0500
* New script to query the ICSI Certificate Notary

INSTALL

@ -4,7 +4,7 @@
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://mxcl.github.com/homebrew
.. _bro downloads page: http://bro-ids.org/download/index.html
.. _bro downloads page: http://bro.org/download/index.html
==============
Installing Bro
@ -189,15 +189,15 @@ Bro releases are bundled into source packages for convenience and
available from the `bro downloads page`_.
Alternatively, the latest Bro development version can be obtained through git
repositories hosted at `git.bro-ids.org <http://git.bro-ids.org>`_. See
repositories hosted at `git.bro.org <http://git.bro.org>`_. See
our `git development documentation
<http://bro-ids.org/development/process.html>`_ for comprehensive
<http://bro.org/development/process.html>`_ for comprehensive
information on Bro's use of git revision control, but the short story
for downloading the full source code experience for Bro via git is:
.. console::
git clone --recursive git://git.bro-ids.org/bro
git clone --recursive git://git.bro.org/bro
.. note:: If you choose to clone the ``bro`` repository non-recursively for
a "minimal Bro experience", be aware that compiling it depends on
@ -230,7 +230,7 @@ automatically. Finally, use ``make install-aux`` to install some of
the other programs that are in the ``aux/bro-aux`` directory.
OpenBSD users, please see our FAQ at
http://www.bro-ids.org/documentation/faq.html if you are having
http://www.bro.org/documentation/faq.html if you are having
problems installing Bro.
@ -298,7 +298,7 @@ Running Bro
Bro is a complex program and it takes a bit of time to get familiar
with it. A good place for newcomers to start is the Quick Start Guide
at http://www.bro-ids.org/documentation/quickstart.html.
at http://www.bro.org/documentation/quickstart.html.
For developers that wish to run Bro directly from the ``build/``
directory (i.e., without performing ``make install``), they will have

NEWS

@ -67,6 +67,7 @@ Changed Functionality
- md5_*, sha1_*, sha256_*, and entropy_* have all changed
their signatures to work with opaque types (see above).
- Removed a now unused argument from "do_split" helper function.
- "this" is no longer a reserved keyword.
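As a rough sketch of the new opaque-handle style for the hash BiFs
mentioned above (incremental hashing; exact signatures per this
release's strings.bif):

      local h = md5_hash_init();
      md5_hash_update(h, "some data");
      print md5_hash_finish(h);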
@ -81,6 +82,50 @@ Changed Functionality
value can now be set with the new broctl.cfg option
"MailAlarmsInterval".
- We have completely reworked the "notice_policy" mechanism. It now no
longer uses a record of policy items but a "hook", a new language
element that's roughly equivalent to a function with multiple
bodies. The documentation [TODO: insert link] describes how to use
the new notice policy. For existing code, the two main changes are:
- What used to be a "redef" of "Notice::policy" now becomes a hook
implementation. Example:
Old:
redef Notice::policy += {
[$pred(n: Notice::Info) = {
return n$note == SSH::Login && n$id$resp_h == 10.0.0.1;
},
$action = Notice::ACTION_EMAIL]
};
New:
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Login && n$id$resp_h == 10.0.0.1 )
add n$actions[Notice::ACTION_EMAIL];
}
- notice() is now likewise a hook, no longer an event. If you have
handlers for that event, you'll likely just need to change the
type accordingly. Example:
Old:
event notice(n: Notice::Info) { ... }
New:
hook notice(n: Notice::Info) { ... }
- The notice_policy.log is gone. That's a result of the new notice
policy setup.
- Removed the byte_len() and length() bif functions. Use the "|...|"
operator instead.
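For example, a sketch of migrating an old length call:

      Old:
          print byte_len("example");
      New:
          print |"example"|;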
Bro 2.1
-------
@ -247,7 +292,7 @@ Bro 2.0
As the version number jump suggests, Bro 2.0 is a major upgrade and
lots of things have changed. We have assembled a separate upgrade
guide with the most important changes compared to Bro 1.5 at
http://www.bro-ids.org/documentation/upgrade.html. You can find
http://www.bro.org/documentation/upgrade.html. You can find
the offline version of that document in ``doc/upgrade.rst.``.
Compared to the earlier 2.0 Beta version, the major changes in the
@ -255,7 +300,7 @@ final release are:
* The default scripts now come with complete reference
documentation. See
http://www.bro-ids.org/documentation/index.html.
http://www.bro.org/documentation/index.html.
* libz and libmagic are now required dependencies.

README

@ -11,7 +11,7 @@ Please see COPYING for licensing information.
For more documentation, research publications, and community contact
information, please see Bro's home page:
http://www.bro-ids.org
http://www.bro.org
On behalf of the Bro Development Team,


@ -1 +1 @@
2.1-328
2.1-386

@ -1 +1 @@
Subproject commit 2fd9086c9dc0e76f6ff1ae04a60cbbce60507aab
Subproject commit 72d121ade5a37df83d3252646de51cb77ce69a89

@ -1 +1 @@
Subproject commit bea556198b69d30d64c0cf1b594e6de71176df6f
Subproject commit 70681007546aad6e5648494e882b71adb9165105

@ -1 +1 @@
Subproject commit c1ba9b44c4815c61c54c968f462ec5b0865e5990
Subproject commit e64204fec55759c614a276c1933bbff2069a63db

@ -1 +1 @@
Subproject commit 2bf6b37177b895329173acac2bb98f38a8783bc1
Subproject commit 2b35d0331366865fbf0119919cc9692d55c4538c

@ -1 +1 @@
Subproject commit ba0700fe448895b654b90d50f389f6f1341234cb
Subproject commit d5b8df42cb9c398142e02d4bf8ede835fd0227f4

cmake

@ -1 +1 @@
Subproject commit 14537f56d66b18ab9d5024f798caf4d1f356fc67
Subproject commit 94e72a3075bb0b9550ad05758963afda394bfb2c


@ -10,7 +10,7 @@
{% endblock %}
{% block header %}
<iframe src="http://www.bro-ids.org/frames/header-no-logo.html" width="100%" height="100px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
<iframe src="http://www.bro.org/frames/header-no-logo.html" width="100%" height="100px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}
@ -108,6 +108,6 @@
{% endblock %}
{% block footer %}
<iframe src="http://www.bro-ids.org/frames/footer.html" width="100%" height="420px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
<iframe src="http://www.bro.org/frames/footer.html" width="100%" height="420px" frameborder="0" marginheight="0" scrolling="no" marginwidth="0">
</iframe>
{% endblock %}


@ -53,7 +53,7 @@ Other Bro Components
The following are snapshots of documentation for components that come
with this version of Bro (|version|). Since they can also be used
independently, see the `download page
<http://bro-ids.org/download/index.html>`_ for documentation of any
<http://bro.org/download/index.html>`_ for documentation of any
current, independent component releases.
.. toctree::


@ -6,7 +6,7 @@ Notice Framework
One of the easiest ways to customize Bro is writing a local notice
policy. Bro can detect a large number of potentially interesting
situations, and the notice policy tells which of them the user wants to be
situations, and the notice policy hook determines which of them the user wants to be
acted upon in some manner. In particular, the notice policy can specify
actions to be taken, such as sending an email or compiling regular
alarm emails. This page gives an introduction into writing such a notice
@ -24,8 +24,8 @@ of interest for the user. However, none of these scripts determines the
importance of what it finds itself. Instead, the scripts only flag situations
as *potentially* interesting, leaving it to the local configuration to define
which of them are in fact actionable. This decoupling of detection and
reporting allows Bro to address the different needs that sites have:
definitions of what constitutes an attack or even a compromise differ quite a
reporting allows Bro to address the different needs that sites have.
Definitions of what constitutes an attack or even a compromise differ quite a
bit between environments, and activity deemed malicious at one site might be
fully acceptable at another.
@ -40,7 +40,7 @@ More information about raising notices can be found in the `Raising Notices`_
section.
Once a notice is raised, it can have any number of actions applied to it by
the :bro:see:`Notice::policy` set which is described in the `Notice Policy`_
writing :bro:see:`Notice::policy` hooks, which are described in the `Notice Policy`_
section below. Such actions can be to send a mail to the configured
address(es) or to simply ignore the notice. Currently, the following actions
are defined:
@ -68,12 +68,6 @@ are defined:
- Send an email to the email address or addresses given in the
:bro:see:`Notice::mail_page_dest` variable.
* - Notice::ACTION_NO_SUPPRESS
- This action will disable the built in notice suppression for the
notice. Keep in mind that this action will need to be applied to
every notice that shouldn't be suppressed including each of the future
notices that would have normally been suppressed.
How these notice actions are applied to notices is discussed in the
`Notice Policy`_ and `Notice Policy Shortcuts`_ sections.
@ -83,26 +77,24 @@ Processing Notices
Notice Policy
*************
The predefined set :bro:see:`Notice::policy` provides the mechanism for
applying actions and other behavior modifications to notices. Each entry
of :bro:see:`Notice::policy` is a record of the type
:bro:see:`Notice::PolicyItem` which defines a condition to be matched
against all raised notices and one or more of a variety of behavior
modifiers. The notice policy is defined by adding any number of
:bro:see:`Notice::PolicyItem` records to the :bro:see:`Notice::policy`
set.
The hook :bro:see:`Notice::policy` provides the mechanism for applying
actions and generally modifying the notice before it's sent onward to
the action plugins. Hooks can be thought of as multi-bodied functions
and using them looks very similar to handling events. The difference
is that they don't go through the event queue like events. Users should
directly make modifications to the :bro:see:`Notice::Info` record
given as the argument to the hook.
Here's a simple example which tells Bro to send an email for all notices of
type :bro:see:`SSH::Login` if the server is 10.0.0.1:
.. code:: bro
redef Notice::policy += {
[$pred(n: Notice::Info) = {
return n$note == SSH::Login && n$id$resp_h == 10.0.0.1;
},
$action = Notice::ACTION_EMAIL]
};
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Login && n$id$resp_h == 10.0.0.1 )
add n$actions[Notice::ACTION_EMAIL];
}
.. note::
@ -110,78 +102,21 @@ type :bro:see:`SSH::Login` if the server is 10.0.0.1:
such that it is only raised when Bro heuristically detects a successful
login. No apparently failed logins will raise this notice.
While the syntax might look a bit convoluted at first, it provides a lot of
flexibility due to having access to Bro's full programming language.
Predicate Field
^^^^^^^^^^^^^^^
The :bro:see:`Notice::PolicyItem` record type has a field named ``$pred``
which defines the entry's condition in the form of a predicate written
as a Bro function. The function is passed the notice as a
:bro:see:`Notice::Info` record and it returns a boolean value indicating
if the entry is applicable to that particular notice.
.. note::
The lack of a predicate in a ``Notice::PolicyItem`` is implicitly true
(``T``) since an implicit false (``F``) value would never be used.
Bro evaluates the predicates of each entry in the order defined by the
``$priority`` field in :bro:see:`Notice::PolicyItem` records. The valid
values are 0-10 with 10 being earliest evaluated. If ``$priority`` is
omitted, the default priority is 5.
Behavior Modification Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are a set of fields in the :bro:see:`Notice::PolicyItem` record type that
indicate ways that either the notice or notice processing should be modified
if the predicate field (``$pred``) evaluated to true (``T``). Those fields are
explained in more detail in the following table.
.. list-table::
:widths: 20 30 20
:header-rows: 1
* - Field
- Description
- Example
* - ``$action=<Notice::Action>``
- Each :bro:see:`Notice::PolicyItem` can have a single action
applied to the notice with this field.
- ``$action = Notice::ACTION_EMAIL``
* - ``$suppress_for=<interval>``
- This field makes it possible for a user to modify the behavior of the
notice framework's automated suppression of intrinsically similar
notices. More information about the notice framework's automated
suppression can be found in the `Automated Suppression`_ section of
this document.
- ``$suppress_for = 10mins``
* - ``$halt=<bool>``
- This field can be used for modification of the notice policy
evaluation. To stop processing of notice policy items before
evaluating all of them, set this field to ``T`` and make the ``$pred``
field return ``T``. :bro:see:`Notice::PolicyItem` records defined at
a higher priority as defined by the ``$priority`` field will still be
evaluated but those at a lower priority won't.
- ``$halt = T``
Hooks can also have priorities applied to order their execution like events
with a default priority of 0. Greater values are executed first. Setting
a hook body to run before default hook bodies might look like this:
.. code:: bro
redef Notice::policy += {
[$pred(n: Notice::Info) = {
return n$note == SSH::Login && n$id$resp_h == 10.0.0.1;
},
$action = Notice::ACTION_EMAIL,
$priority=5]
};
hook Notice::policy(n: Notice::Info) &priority=5
{
if ( n$note == SSH::Login && n$id$resp_h == 10.0.0.1 )
add n$actions[Notice::ACTION_EMAIL];
}
Hooks can also abort later hook bodies with the ``break`` keyword. This
is primarily useful if one wants to completely preempt processing by
lower priority :bro:see:`Notice::policy` hooks.
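A minimal sketch (the notice type and priority here are only examples):

.. code:: bro

    hook Notice::policy(n: Notice::Info) &priority=10
        {
        # Hypothetically preempt all lower priority policy bodies
        # for this notice type.
        if ( n$note == SSH::Login )
            break;
        }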
Notice Policy Shortcuts
***********************
@ -189,7 +124,7 @@ Notice Policy Shortcuts
Although the notice framework provides a great deal of flexibility and
configurability there are many times that the full expressiveness isn't needed
and actually becomes a hindrance to achieving results. The framework provides
a default :bro:see:`Notice::policy` suite as a way of giving users the
a default :bro:see:`Notice::policy` hook body as a way of giving users the
shortcuts to easily apply many common actions to notices.
These are implemented as sets and tables indexed with a
@ -377,19 +312,45 @@ Setting the ``$identifier`` field is left to those raising notices because
it's assumed that the script author who is raising the notice understands the
full problem set and edge cases of the notice which may not be readily
apparent to users. If users don't want the suppression to take place or simply
want a different interval, they can always modify it with the
:bro:see:`Notice::policy`.
want a different interval, they can set a notice's suppression
interval to ``0secs`` or delete the value from the ``$identifier`` field in
a :bro:see:`Notice::policy` hook.
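For instance, a sketch of disabling suppression for a hypothetical
notice type by zeroing its suppression interval:

.. code:: bro

    hook Notice::policy(n: Notice::Info)
        {
        # Hypothetical: never suppress notices of this type.
        if ( n$note == SSH::Login )
            n$suppress_for = 0secs;
        }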
Extending Notice Framework
--------------------------
Adding Custom Notice Actions
****************************
There are currently a couple of mechanisms for extending the notice framework
and adding new capabilities.
Extending Notice Emails
***********************
If there is extra information that you would like to add to emails, you can
do so by writing :bro:see:`Notice::policy` hooks.
There is a field in the :bro:see:`Notice::Info` record named
``$email_body_sections`` which will be included verbatim when email is being
sent. An example of including some information from an HTTP request is
included below.
.. code:: bro
hook Notice::policy(n: Notice::Info)
{
if ( n?$conn && n$conn?$http && n$conn$http?$host )
n$email_body_sections[|email_body_sections|] = fmt("HTTP host header: %s", n$conn$http$host);
}
Cluster Considerations
----------------------
As a user/developer of Bro, the main cluster concern with the notice framework
is understanding what runs where. When a notice is generated on a worker, the
worker checks to see if the notice should be suppressed based on information
locally maintained in the worker process. If it's not being
suppressed, the worker forwards the notice directly to the manager and does no more
local processing. The manager then runs the :bro:see:`Notice::policy` hook and
executes all of the actions determined to be run.


@ -111,7 +111,7 @@ protocol-dependent activity that's occurring. E.g. ``http.log``'s next few
columns (shortened for brevity) show a request to the root of Bro website::
# method host uri referrer user_agent
GET bro-ids.org / - <...>Chrome/12.0.742.122<...>
GET bro.org / - <...>Chrome/12.0.742.122<...>
Some logs are worth explicit mention:


@ -19,7 +19,7 @@ Reporting Problems
Generally, when you encounter a problem with Bro, the best thing to do
is opening a new ticket in `Bro's issue tracker
<http://tracker.bro-ids.org/>`__ and include information on how to
<http://tracker.bro.org/>`__ and include information on how to
reproduce the issue. Ideally, your ticket should come with the
following:


@ -37,6 +37,7 @@ rest_target(${psd} base/frameworks/file-analysis/main.bro)
rest_target(${psd} base/frameworks/input/main.bro)
rest_target(${psd} base/frameworks/input/readers/ascii.bro)
rest_target(${psd} base/frameworks/input/readers/benchmark.bro)
rest_target(${psd} base/frameworks/input/readers/binary.bro)
rest_target(${psd} base/frameworks/input/readers/raw.bro)
rest_target(${psd} base/frameworks/intel/cluster.bro)
rest_target(${psd} base/frameworks/intel/input.bro)
@ -59,6 +60,7 @@ rest_target(${psd} base/frameworks/notice/actions/pp-alarms.bro)
rest_target(${psd} base/frameworks/notice/cluster.bro)
rest_target(${psd} base/frameworks/notice/extend-email/hostnames.bro)
rest_target(${psd} base/frameworks/notice/main.bro)
rest_target(${psd} base/frameworks/notice/non-cluster.bro)
rest_target(${psd} base/frameworks/notice/weird.bro)
rest_target(${psd} base/frameworks/packet-filter/main.bro)
rest_target(${psd} base/frameworks/packet-filter/netstats.bro)
@ -73,21 +75,25 @@ rest_target(${psd} base/protocols/conn/main.bro)
rest_target(${psd} base/protocols/conn/polling.bro)
rest_target(${psd} base/protocols/dns/consts.bro)
rest_target(${psd} base/protocols/dns/main.bro)
rest_target(${psd} base/protocols/ftp/file-analysis.bro)
rest_target(${psd} base/protocols/ftp/file-extract.bro)
rest_target(${psd} base/protocols/ftp/gridftp.bro)
rest_target(${psd} base/protocols/ftp/main.bro)
rest_target(${psd} base/protocols/ftp/utils-commands.bro)
rest_target(${psd} base/protocols/http/file-analysis.bro)
rest_target(${psd} base/protocols/http/file-extract.bro)
rest_target(${psd} base/protocols/http/file-hash.bro)
rest_target(${psd} base/protocols/http/file-ident.bro)
rest_target(${psd} base/protocols/http/main.bro)
rest_target(${psd} base/protocols/http/utils.bro)
rest_target(${psd} base/protocols/irc/dcc-send.bro)
rest_target(${psd} base/protocols/irc/file-analysis.bro)
rest_target(${psd} base/protocols/irc/main.bro)
rest_target(${psd} base/protocols/modbus/consts.bro)
rest_target(${psd} base/protocols/modbus/main.bro)
rest_target(${psd} base/protocols/smtp/entities-excerpt.bro)
rest_target(${psd} base/protocols/smtp/entities.bro)
rest_target(${psd} base/protocols/smtp/file-analysis.bro)
rest_target(${psd} base/protocols/smtp/main.bro)
rest_target(${psd} base/protocols/socks/consts.bro)
rest_target(${psd} base/protocols/socks/main.bro)


@ -254,7 +254,7 @@ Variable Naming
- Identifiers may have been renamed to conform to new `scripting
conventions
<http://www.bro-ids.org/development/script-conventions.html>`_
<http://www.bro.org/development/script-conventions.html>`_
BroControl
@ -296,7 +296,7 @@ Development Infrastructure
Bro development has moved from using SVN to Git for revision control.
Users that want to use the latest Bro development snapshot by checking it out
from the source repositories should see the `development process
<http://www.bro-ids.org/development/process.html>`_. Note that all the various
<http://www.bro.org/development/process.html>`_. Note that all the various
sub-components now reside in their own repositories. However, the
top-level Bro repository includes them as git submodules so it's easy
to check them all out simultaneously.


@ -39,7 +39,7 @@ export {
## The node type doing all the actual traffic analysis.
WORKER,
## A node acting as a traffic recorder using the
## `Time Machine <http://tracker.bro-ids.org/time-machine>`_ software.
## `Time Machine <http://tracker.bro.org/time-machine>`_ software.
TIME_MACHINE,
};


@ -18,23 +18,20 @@ export {
const default_reassembly_buffer_size: count = 1024*1024 &redef;
## The default buffer size used for storing the beginning of files.
# TODO: what's a reasonable default?
const default_bof_buffer_size: count = 256 &redef;
const default_bof_buffer_size: count = 1024 &redef;
## The default amount of time file analysis will wait for new file data
## before giving up.
## TODO: what's a reasonable default?
#const default_timeout_interval: interval = 2 mins &redef;
const default_timeout_interval: interval = 10 sec &redef;
const default_timeout_interval: interval = 2 mins &redef;
## The default amount of data that a user is allowed to extract
## from a file to an event with the
## :bro:see:`FileAnalysis::ACTION_DATA_EVENT` action.
## TODO: what's a reasonable default?
const default_data_event_len: count = 1024*1024 &redef;
# Needed a forward declaration for event parameters...
type Info: record {};
type ActionArgs: record {
act: Action;
extract_filename: string &optional;
chunk_event: event(info: Info, data: string, off: count) &optional;
stream_event: event(info: Info, data: string) &optional;
};
type ActionResults: record {
@ -52,15 +49,16 @@ export {
## from a container file as part of the analysis.
parent_file_id: string &log &optional;
## The network protocol over which the file was transferred.
protocol: string &log &optional;
## An identification of the source of the file data. E.g. it may be
## a network protocol over which it was transferred, or a local file
## path which was read, or some other input source.
source: string &log &optional;
## The set of connections over which the file was transferred,
## indicated by UID strings.
conn_uids: set[string] &log &optional;
## The set of connections over which the file was transferred,
## indicated by 5-tuples.
conn_ids: set[conn_id] &optional;
## The set of connections over which the file was transferred.
conns: table[conn_id] of connection &optional;
## The time at which the last activity for the file was seen.
last_active: time &log;
## Number of bytes provided to the file analysis engine for the file.
seen_bytes: count &log &default=0;
@ -82,18 +80,100 @@ export {
## the analysis engine will wait before giving up on it.
timeout_interval: interval &log &default=default_timeout_interval;
## The number of bytes at the beginning of a file to save for later
## inspection in *bof_buffer* field of
## :bro:see:`FileAnalysis::ActionResults`.
bof_buffer_size: count &log &default=default_bof_buffer_size;
## The content of the beginning of a file up to *bof_buffer_size* bytes.
## This is also the buffer that's used for file/mime type detection.
bof_buffer: string &optional;
## An initial guess at file type.
file_type: string &log &optional;
## An initial guess at mime type.
mime_type: string &log &optional;
## Actions that have been added to the analysis of this file.
actions: vector of Action &default=vector();
## The corresponding arguments supplied to each element of *actions*.
action_args: vector of ActionArgs &default=vector();
## Some actions may directly yield results in this record.
action_results: ActionResults;
## Not meant to be modified directly by scripts.
actions: table[ActionArgs] of ActionResults;
} &redef;
## TODO: document
global policy: hook(trig: Trigger, info: Info);
const disable: table[AnalyzerTag] of bool = table() &redef;
# TODO: wrapper functions for BiFs ?
## Event that can be handled to access the Info record as it is sent on
## to the logging framework.
global log_file_analysis: event(rec: Info);
## The salt concatenated to unique file handle strings generated by
## :bro:see:`FileAnalysis::handle_callbacks` before hashing them
## into a file id (the *file_id* field of :bro:see:`FileAnalysis::Info`).
## Provided to help mitigate the possibility of manipulating parts of
## network connections that factor into the file handle in order to
## generate two handles that would hash to the same file id.
const salt = "I recommend changing this." &redef;
}
event bro_init() &priority=5
{
Log::create_stream(FileAnalysis::LOG,
[$columns=Info, $ev=log_file_analysis]);
}
redef record FileAnalysis::Info += {
conn_uids: set[string] &log &optional;
actions_taken: set[Action] &log &optional;
extracted_files: set[string] &log &optional;
md5: string &log &optional;
sha1: string &log &optional;
sha256: string &log &optional;
};
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=-10
{
if ( trig != FileAnalysis::TRIGGER_EOF &&
trig != FileAnalysis::TRIGGER_DONE ) return;
info$conn_uids = set();
if ( info?$conns )
for ( cid in info$conns )
add info$conn_uids[info$conns[cid]$uid];
info$actions_taken = set();
info$extracted_files = set();
for ( act in info$actions )
{
add info$actions_taken[act$act];
local result: FileAnalysis::ActionResults = info$actions[act];
switch ( act$act ) {
case FileAnalysis::ACTION_EXTRACT:
add info$extracted_files[act$extract_filename];
break;
case FileAnalysis::ACTION_MD5:
if ( result?$md5 )
info$md5 = result$md5;
break;
case FileAnalysis::ACTION_SHA1:
if ( result?$sha1 )
info$sha1 = result$sha1;
break;
case FileAnalysis::ACTION_SHA256:
if ( result?$sha256 )
info$sha256 = result$sha256;
break;
case FileAnalysis::ACTION_DATA_EVENT:
# no direct result
break;
}
}
Log::write(FileAnalysis::LOG, info);
}


@ -2,4 +2,5 @@
@load ./readers/ascii
@load ./readers/raw
@load ./readers/benchmark
@load ./readers/binary


@ -0,0 +1,8 @@
##! Interface for the binary input reader.
module InputBinary;
export {
## Size of data chunks to read from the input file at a time.
const chunk_size = 1024 &redef;
}
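A site could tune the new reader by redefining the constant above from
a local script, e.g. (a sketch; 4096 is an arbitrary value):

    # e.g. in local.bro
    redef InputBinary::chunk_size = 4096;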


@ -17,6 +17,8 @@
@if ( Cluster::is_enabled() )
@load ./cluster
@else
@load ./non-cluster
@endif
# Load here so that it can check whether clustering is enabled.


@ -27,18 +27,17 @@ export {
## Notice types which should have the "remote" location looked up.
## If GeoIP support is not built in, this does nothing.
const lookup_location_types: set[Notice::Type] = {} &redef;
## Add a helper to the notice policy for looking up GeoIP data.
redef Notice::policy += {
[$pred(n: Notice::Info) = { return (n$note in Notice::lookup_location_types); },
$action = ACTION_ADD_GEODATA,
$priority = 10],
};
}
hook policy(n: Notice::Info) &priority=10
{
if ( n$note in Notice::lookup_location_types )
add n$actions[ACTION_ADD_GEODATA];
}
# This is handled at a high priority in case other notice handlers
# want to use the data.
event notice(n: Notice::Info) &priority=10
hook notice(n: Notice::Info) &priority=10
{
if ( ACTION_ADD_GEODATA in n$actions &&
|Site::local_nets| > 0 &&


@ -17,20 +17,13 @@ export {
};
}
# This is a little awkward because we want to inject drop along with the
# synchronous functions.
event bro_init()
hook notice(n: Notice::Info)
{
local drop_func = function(n: Notice::Info)
if ( ACTION_DROP in n$actions )
{
if ( ACTION_DROP in n$actions )
{
#local drop = React::drop_address(n$src, "");
#local addl = drop?$sub ? fmt(" %s", drop$sub) : "";
#n$dropped = drop$note != Drop::AddressDropIgnored;
#n$msg += fmt(" [%s%s]", drop$note, addl);
}
};
add Notice::sync_functions[drop_func];
#local drop = React::drop_address(n$src, "");
#local addl = drop?$sub ? fmt(" %s", drop$sub) : "";
#n$dropped = drop$note != Drop::AddressDropIgnored;
#n$msg += fmt(" [%s%s]", drop$note, addl);
}
}


@ -18,7 +18,7 @@ export {
};
}
event notice(n: Notice::Info) &priority=-5
hook notice(n: Notice::Info) &priority=-5
{
if ( |Site::local_admins| > 0 &&
ACTION_EMAIL_ADMIN in n$actions )


@ -15,7 +15,7 @@ export {
const mail_page_dest = "" &redef;
}
event notice(n: Notice::Info) &priority=-5
hook notice(n: Notice::Info) &priority=-5
{
if ( ACTION_PAGE in n$actions )
email_notice_to(n, mail_page_dest, F);


@ -105,7 +105,7 @@ event bro_init()
$postprocessor=pp_postprocessor]);
}
event notice(n: Notice::Info) &priority=-5
hook notice(n: Notice::Info) &priority=-5
{
if ( ! want_pp() )
return;


@ -21,30 +21,10 @@ redef Cluster::manager2worker_events += /Notice::begin_suppression/;
redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER )
# The notice policy is completely handled by the manager and shouldn't be
# done by workers or proxies to save time for packet processing.
redef Notice::policy = table();
event Notice::begin_suppression(n: Notice::Info)
{
suppressing[n$note, n$identifier] = n;
}
event Notice::notice(n: Notice::Info)
{
# Send the locally generated notice on to the manager.
event Notice::cluster_notice(n);
}
event bro_init() &priority=-3
{
# Workers and proxies need to disable the notice streams because notice
# events are forwarded directly instead of being logged remotely.
Log::disable_stream(Notice::LOG);
Log::disable_stream(Notice::POLICY_LOG);
Log::disable_stream(Notice::ALARM_LOG);
}
@endif
@if ( Cluster::local_node_type() == Cluster::MANAGER )
@ -54,3 +34,19 @@ event Notice::cluster_notice(n: Notice::Info)
NOTICE(n);
}
@endif
module GLOBAL;
## This is the entry point in the global namespace for the notice framework.
function NOTICE(n: Notice::Info)
{
# Suppress this notice if necessary.
if ( Notice::is_being_suppressed(n) )
return;
if ( Cluster::local_node_type() == Cluster::MANAGER )
Notice::internal_NOTICE(n);
else
# For non-managers, send the notice on to the manager.
event Notice::cluster_notice(n);
}


@ -13,7 +13,7 @@ module Notice;
# reference to the original notice)
global tmp_notice_storage: table[string] of Notice::Info &create_expire=max_email_delay+10secs;
-event Notice::notice(n: Notice::Info) &priority=10
+hook notice(n: Notice::Info) &priority=10
{
if ( ! n?$src && ! n?$dst )
return;


@ -10,9 +10,6 @@ export {
redef enum Log::ID += {
## This is the primary logging stream for notices.
LOG,
## This is the notice policy auditing log. It records what the current
## notice policy is at Bro init time.
POLICY_LOG,
## This is the alarm stream.
ALARM_LOG,
};
@ -42,9 +39,6 @@ export {
## version of the alarm log is emailed in bulk to the address(es)
## configured in :bro:id:`Notice::mail_dest`.
ACTION_ALARM,
## Indicates that the notice should not be supressed by the normal
## duplicate notice suppression that the notice framework does.
ACTION_NO_SUPPRESS,
};
## The notice framework is able to do automatic notice suppression by
@ -102,10 +96,6 @@ export {
## The actions which have been applied to this notice.
actions: set[Notice::Action] &log &optional;
## These are policy items that returned T and applied their action
## to the notice.
policy_items: set[count] &log &optional;
## By adding chunks of text into this element, other scripts can
## expand on notices that are being emailed. The normal way to add text
## is to extend the vector by handling the :bro:id:`Notice::notice`
@ -142,9 +132,8 @@ export {
identifier: string &optional;
## This field indicates the length of time that this
## unique notice should be suppressed. This field is automatically
## filled out and should not be written to by any other script.
suppress_for: interval &log &optional;
## unique notice should be suppressed.
suppress_for: interval &log &default=default_suppression_interval;
};
## Ignored notice types.
@ -159,58 +148,8 @@ export {
## intervals for entire notice types.
const type_suppression_intervals: table[Notice::Type] of interval = {} &redef;
## This is the record that defines the items that make up the notice policy.
type PolicyItem: record {
## This is the exact positional order in which the
## :bro:type:`Notice::PolicyItem` records are checked.
## This is set internally by the notice framework.
position: count &log &optional;
## Define the priority for this check. Items are checked in ordered
## from highest value (10) to lowest value (0).
priority: count &log &default=5;
## An action given to the notice if the predicate returns true.
action: Notice::Action &log &default=ACTION_NONE;
## The pred (predicate) field is a function that returns a boolean T
## or F value. If the predicate function returns true, the action in
## this record is applied to the notice that is given as an argument
## to the predicate function. If no predicate is supplied, it's
## assumed that the PolicyItem always applies.
pred: function(n: Notice::Info): bool &log &optional;
## Indicates this item should terminate policy processing if the
## predicate returns T.
halt: bool &log &default=F;
## This defines the length of time that this particular notice should
## be suppressed.
suppress_for: interval &log &optional;
};
## Defines a notice policy that is extensible on a per-site basis.
## All notice processing is done through this variable.
const policy: set[PolicyItem] = {
[$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); },
$halt=T, $priority = 9],
[$pred(n: Notice::Info) = { return (n$note in Notice::not_suppressed_types); },
$action = ACTION_NO_SUPPRESS,
$priority = 9],
[$pred(n: Notice::Info) = { return (n$note in Notice::alarmed_types); },
$action = ACTION_ALARM,
$priority = 8],
[$pred(n: Notice::Info) = { return (n$note in Notice::emailed_types); },
$action = ACTION_EMAIL,
$priority = 8],
[$pred(n: Notice::Info) = {
if (n$note in Notice::type_suppression_intervals)
{
n$suppress_for=Notice::type_suppression_intervals[n$note];
return T;
}
return F;
},
$action = ACTION_NONE,
$priority = 8],
[$action = ACTION_LOG,
$priority = 0],
} &redef;
## The hook to modify notice handling.
global policy: hook(n: Notice::Info);
## Local system sendmail program.
const sendmail = "/usr/sbin/sendmail" &redef;
@ -240,25 +179,11 @@ export {
## This is the event that is called as the entry point to the
## notice framework by the global :bro:id:`NOTICE` function. By the time
## this event is generated, default values have already been filled out in
## the :bro:type:`Notice::Info` record and synchronous functions in the
## :bro:id:`Notice::sync_functions` have already been called. The notice
## the :bro:type:`Notice::Info` record and the notice
## policy has also been applied.
##
## n: The record containing notice data.
global notice: event(n: Info);
## This is a set of functions that provide a synchronous way for scripts
## extending the notice framework to run before the normal event based
## notice pathway that most of the notice framework takes. This is helpful
## in cases where an action against a notice needs to happen immediately
## and can't wait the short time for the event to bubble up to the top of
## the event queue. An example is the IP address dropping script that
## can block IP addresses that have notices generated because it
## needs to operate closer to real time than the event queue allows it to.
## Normally the event based extension model using the
## :bro:id:`Notice::notice` event will work fine if there aren't harder
## real time constraints.
const sync_functions: set[function(n: Notice::Info)] = set() &redef;
global notice: hook(n: Info);
## This event is generated when a notice begins to be suppressed.
##
@ -266,6 +191,11 @@ export {
## about to be suppressed.
global begin_suppression: event(n: Notice::Info);
## A function to determine if an event is supposed to be suppressed.
##
## n: The record containing the notice in question.
global is_being_suppressed: function(n: Notice::Info): bool;
## This event is generated on each occurrence of an event being suppressed.
##
## n: The record containing notice data regarding the notice type
@ -338,10 +268,6 @@ global suppressing: table[Type, string] of Notice::Info = {}
&create_expire=0secs
&expire_func=per_notice_suppression_interval;
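The `suppressing` table above is keyed by notice type and identifier, with each entry expiring after that notice's `suppress_for` interval. A rough Python sketch of that behavior (all names here are hypothetical illustrations, not Bro API):

```python
import time

# Hypothetical sketch: entries keyed by (note_type, identifier) expire
# after the notice's suppress_for interval, mirroring the
# &create_expire/&expire_func combination used above.
class SuppressionTable:
    def __init__(self):
        self._entries = {}  # (note, identifier) -> expiry timestamp

    def begin_suppression(self, note, identifier, suppress_for):
        self._entries[(note, identifier)] = time.time() + suppress_for

    def is_being_suppressed(self, note, identifier):
        expiry = self._entries.get((note, identifier))
        if expiry is None:
            return False
        if time.time() >= expiry:
            # Entry expired: suppression is over for this notice.
            del self._entries[(note, identifier)]
            return False
        return True

table = SuppressionTable()
table.begin_suppression("SSH::Login", "1.2.3.4", suppress_for=3600)
```

A duplicate notice with the same (type, identifier) pair arriving within the hour would then be dropped, while a different identifier passes through.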
# This is an internal variable used to store the notice policy ordered by
# priority.
global ordered_policy: vector of PolicyItem = vector();
function log_mailing_postprocessor(info: Log::RotationInfo): bool
{
if ( ! reading_traces() && mail_dest != "" )
@ -424,9 +350,7 @@ function email_notice_to(n: Notice::Info, dest: string, extend: bool)
}
else
{
-		event reporter_info(network_time(),
-			fmt("Notice email delay tokens weren't released in time (%s).", n$email_delay_tokens),
-			"");
+		Reporter::info(fmt("Notice email delay tokens weren't released in time (%s).", n$email_delay_tokens));
}
}
}
@ -468,7 +392,26 @@ function email_notice_to(n: Notice::Info, dest: string, extend: bool)
piped_exec(fmt("%s -t -oi", sendmail), email_text);
}
-event notice(n: Notice::Info) &priority=-5
+hook Notice::policy(n: Notice::Info) &priority=10
{
if ( n$note in Notice::ignored_types )
break;
if ( n$note in Notice::not_suppressed_types )
n$suppress_for=0secs;
if ( n$note in Notice::alarmed_types )
add n$actions[ACTION_ALARM];
if ( n$note in Notice::emailed_types )
add n$actions[ACTION_EMAIL];
if ( n$note in Notice::type_suppression_intervals )
n$suppress_for=Notice::type_suppression_intervals[n$note];
# Logging is a default action. It can be removed in a later hook if desired.
add n$actions[ACTION_LOG];
}
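The `Notice::policy` hook above replaces the old `PolicyItem` table: handlers run from highest to lowest `&priority`, and a handler may `break` to stop processing entirely (as the `ignored_types` check does). A minimal Python sketch of that dispatch model (assumed semantics, hypothetical names):

```python
# Sketch: handlers run highest-priority first; returning False acts
# like Bro's `break` in a hook and stops further processing.
class Hook:
    def __init__(self):
        self._handlers = []  # list of (priority, fn)

    def register(self, fn, priority=0):
        self._handlers.append((priority, fn))
        self._handlers.sort(key=lambda h: -h[0])

    def invoke(self, notice):
        for _, fn in self._handlers:
            if fn(notice) is False:
                return False  # a handler vetoed the notice
        return True

policy = Hook()
ignored = {"Weird::Activity"}

def drop_ignored(n):
    if n["note"] in ignored:
        return False  # terminate processing, like `break`

def default_log(n):
    # Logging is the default action; an even lower-priority handler
    # could still remove it afterwards.
    n.setdefault("actions", set()).add("ACTION_LOG")

policy.register(drop_ignored, priority=10)
policy.register(default_log, priority=0)

n = {"note": "SSH::Login"}
policy.invoke(n)
```

Ordering by priority is what lets the ignore check run before the default `ACTION_LOG` is ever added.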
hook Notice::notice(n: Notice::Info) &priority=-5
{
if ( ACTION_EMAIL in n$actions )
email_notice_to(n, mail_dest, T);
@ -480,7 +423,6 @@ event notice(n: Notice::Info) &priority=-5
# Normally suppress further notices like this one unless directed not to.
# n$identifier *must* be specified for suppression to function at all.
if ( n?$identifier &&
ACTION_NO_SUPPRESS !in n$actions &&
[n$note, n$identifier] !in suppressing &&
n$suppress_for != 0secs )
{
@ -565,27 +507,8 @@ function apply_policy(n: Notice::Info)
if ( ! n?$email_delay_tokens )
n$email_delay_tokens = set();
if ( ! n?$policy_items )
n$policy_items = set();
for ( i in ordered_policy )
{
# If there's no predicate or the predicate returns F.
if ( ! ordered_policy[i]?$pred || ordered_policy[i]$pred(n) )
{
add n$actions[ordered_policy[i]$action];
add n$policy_items[int_to_count(i)];
# If the predicate matched and there was a suppression interval,
# apply it to the notice now.
if ( ordered_policy[i]?$suppress_for )
n$suppress_for = ordered_policy[i]$suppress_for;
# If the policy item wants to halt policy processing, do it now!
if ( ordered_policy[i]$halt )
break;
}
}
# Apply the hook based policy.
hook Notice::policy(n);
# Apply the suppression time after applying the policy so that policy
# items can give custom suppression intervals. If there is no
@ -602,61 +525,15 @@ function apply_policy(n: Notice::Info)
delete n$iconn;
}
# Create the ordered notice policy automatically which will be used at runtime
# for prioritized matching of the notice policy.
event bro_init() &priority=10
{
# Create the policy log here because it's only written to in this handler.
Log::create_stream(Notice::POLICY_LOG, [$columns=PolicyItem]);
local tmp: table[count] of set[PolicyItem] = table();
for ( pi in policy )
{
if ( pi$priority < 0 || pi$priority > 10 )
Reporter::fatal("All Notice::PolicyItem priorities must be within 0 and 10");
if ( pi$priority !in tmp )
tmp[pi$priority] = set();
add tmp[pi$priority][pi];
}
local rev_count = vector(10,9,8,7,6,5,4,3,2,1,0);
for ( i in rev_count )
{
local j = rev_count[i];
if ( j in tmp )
{
for ( pi in tmp[j] )
{
pi$position = |ordered_policy|;
ordered_policy[|ordered_policy|] = pi;
Log::write(Notice::POLICY_LOG, pi);
}
}
}
}
function internal_NOTICE(n: Notice::Info)
{
# Suppress this notice if necessary.
if ( is_being_suppressed(n) )
return;
# Fill out fields that might be empty and do the policy processing.
apply_policy(n);
# Run the synchronous functions with the notice.
for ( func in sync_functions )
func(n);
# Generate the notice event with the notice.
-	event Notice::notice(n);
+	hook Notice::notice(n);
}
module GLOBAL;
## This is the entry point in the global namespace for the notice framework.
function NOTICE(n: Notice::Info)
{
Notice::internal_NOTICE(n);
}
global NOTICE: function(n: Notice::Info);


@ -0,0 +1,14 @@
@load ./main
module GLOBAL;
## This is the entry point in the global namespace for the notice framework.
function NOTICE(n: Notice::Info)
{
# Suppress this notice if necessary.
if ( Notice::is_being_suppressed(n) )
return;
Notice::internal_NOTICE(n);
}


@ -161,7 +161,7 @@ event signature_match(state: signature_state, msg: string, data: string)
return;
# Trim the matched data down to something reasonable
-	if ( byte_len(data) > 140 )
+	if ( |data| > 140 )
data = fmt("%s...", sub_bytes(data, 0, 140));
local src_addr: addr;
@ -259,8 +259,8 @@ event signature_match(state: signature_state, msg: string, data: string)
add vert_table[orig, resp][sig_id];
-	local hcount = length(horiz_table[orig, sig_id]);
-	local vcount = length(vert_table[orig, resp]);
+	local hcount = |horiz_table[orig, sig_id]|;
+	local vcount = |vert_table[orig, resp]|;
if ( hcount in horiz_scan_thresholds && hcount != last_hthresh[orig] )
{


@ -29,6 +29,8 @@ export {
minor: count &optional;
## Minor subversion number
minor2: count &optional;
## Minor update number
minor3: count &optional;
## Additional version string (e.g. "beta42")
addl: string &optional;
} &log;
@ -146,10 +148,10 @@ function parse(unparsed_version: string): Description
if ( /^[\/\-\._v\(]/ in sv )
sv = strip(sub(version_parts[2], /^\(?[\/\-\._v\(]/, ""));
local version_numbers = split_n(sv, /[\-\._,\[\(\{ ]/, F, 3);
if ( 4 in version_numbers && version_numbers[4] != "" )
v$addl = strip(version_numbers[4]);
if ( 5 in version_numbers && version_numbers[5] != "" )
v$addl = strip(version_numbers[5]);
else if ( 3 in version_parts && version_parts[3] != "" &&
version_parts[3] != ")" )
version_parts[3] != ")" )
{
if ( /^[[:blank:]]*\([a-zA-Z0-9\-\._[:blank:]]*\)/ in version_parts[3] )
{
@ -178,6 +180,8 @@ function parse(unparsed_version: string): Description
}
}
if ( 4 in version_numbers && version_numbers[4] != "" )
v$minor3 = extract_count(version_numbers[4]);
if ( 3 in version_numbers && version_numbers[3] != "" )
v$minor2 = extract_count(version_numbers[3]);
if ( 2 in version_numbers && version_numbers[2] != "" )
@ -332,8 +336,25 @@ function cmp_versions(v1: Version, v2: Version): int
return v1?$minor2 ? 1 : -1;
}
if ( v1?$minor3 && v2?$minor3 )
{
if ( v1$minor3 < v2$minor3 )
return -1;
if ( v1$minor3 > v2$minor3 )
return 1;
}
else
{
if ( !v1?$minor3 && !v2?$minor3 )
{ }
else
return v1?$minor3 ? 1 : -1;
}
if ( v1?$addl && v2?$addl )
{
return strcmp(v1$addl, v2$addl);
}
else
{
if ( !v1?$addl && !v2?$addl )
@ -341,6 +362,9 @@ function cmp_versions(v1: Version, v2: Version): int
else
return v1?$addl ? 1 : -1;
}
# A catcher return that should never be reached...hopefully
return 0;
}
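The comparison logic in `cmp_versions` above can be sketched in Python: each optional component compares numerically when both sides have it, an absent component loses to a present one, and `addl` falls back to a strcmp-style string comparison. (A sketch of the shown logic, not a drop-in replacement.)

```python
# Python sketch of cmp_versions: returns -1, 0, or 1. Absent
# components are represented as missing dict keys (Bro's &optional).
def cmp_versions(v1, v2):
    for key in ("major", "minor", "minor2", "minor3"):
        a, b = v1.get(key), v2.get(key)
        if a is not None and b is not None:
            if a != b:
                return -1 if a < b else 1
        elif (a is None) != (b is None):
            # A version with the component present sorts higher.
            return 1 if a is not None else -1
    a, b = v1.get("addl"), v2.get("addl")
    if a is not None and b is not None:
        return (a > b) - (a < b)  # strcmp-style string comparison
    if (a is None) != (b is None):
        return 1 if a is not None else -1
    return 0  # versions are equal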
function software_endpoint_name(id: conn_id, host: addr): string
@ -351,10 +375,11 @@ function software_endpoint_name(id: conn_id, host: addr): string
# Convert a version into a string "a.b.c-x".
function software_fmt_version(v: Version): string
{
-	return fmt("%d.%d.%d%s",
-	           v?$major ? v$major : 0,
-	           v?$minor ? v$minor : 0,
-	           v?$minor2 ? v$minor2 : 0,
+	return fmt("%s%s%s%s%s",
+	           v?$major ? fmt("%d", v$major) : "0",
+	           v?$minor ? fmt(".%d", v$minor) : "",
+	           v?$minor2 ? fmt(".%d", v$minor2) : "",
+	           v?$minor3 ? fmt(".%d", v$minor3) : "",
v?$addl ? fmt("-%s", v$addl) : "");
}
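The updated `software_fmt_version` above renders only the components that are actually present, so a two-part version no longer pads out to `a.b.0`. A Python sketch of the same formatting:

```python
# Python sketch of software_fmt_version: missing dict keys stand in
# for Bro's unset &optional fields.
def software_fmt_version(v):
    return "%s%s%s%s%s" % (
        "%d" % v["major"] if "major" in v else "0",
        ".%d" % v["minor"] if "minor" in v else "",
        ".%d" % v["minor2"] if "minor2" in v else "",
        ".%d" % v["minor3"] if "minor3" in v else "",
        "-%s" % v["addl"] if "addl" in v else "")
```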


@ -88,10 +88,10 @@ redef dpd_config += { [ANALYZER_AYIYA] = [$ports = ayiya_ports] };
const teredo_ports = { 3544/udp };
redef dpd_config += { [ANALYZER_TEREDO] = [$ports = teredo_ports] };
-const gtpv1u_ports = { 2152/udp };
-redef dpd_config += { [ANALYZER_GTPV1] = [$ports = gtpv1u_ports] };
+const gtpv1_ports = { 2152/udp, 2123/udp };
+redef dpd_config += { [ANALYZER_GTPV1] = [$ports = gtpv1_ports] };
-redef likely_server_ports += { ayiya_ports, teredo_ports, gtpv1u_ports };
+redef likely_server_ports += { ayiya_ports, teredo_ports, gtpv1_ports };
event bro_init() &priority=5
{


@ -300,7 +300,7 @@ type connection: record {
## one protocol analyzer is able to parse the same data. If so, all will
## be recorded. Also note that the recorded services are independent of any
## transport-level protocols.
service: set[string];
service: set[string];
addl: string; ##< Deprecated.
hot: count; ##< Deprecated.
history: string; ##< State history of connections. See *history* in :bro:see:`Conn::Info`.
@ -1488,6 +1488,146 @@ type gtpv1_hdr: record {
next_type: count &optional;
};
type gtp_cause: count;
type gtp_imsi: count;
type gtp_teardown_ind: bool;
type gtp_nsapi: count;
type gtp_recovery: count;
type gtp_teid1: count;
type gtp_teid_control_plane: count;
type gtp_charging_id: count;
type gtp_charging_gateway_addr: addr;
type gtp_trace_reference: count;
type gtp_trace_type: count;
type gtp_tft: string;
type gtp_trigger_id: string;
type gtp_omc_id: string;
type gtp_reordering_required: bool;
type gtp_proto_config_options: string;
type gtp_charging_characteristics: count;
type gtp_selection_mode: count;
type gtp_access_point_name: string;
type gtp_msisdn: string;
type gtp_gsn_addr: record {
## If the GSN Address information element has length 4 or 16, then this
## field is set to be the informational element's value interpreted as
## an IPv4 or IPv6 address, respectively.
ip: addr &optional;
## This field is set if it's not an IPv4 or IPv6 address.
other: string &optional;
};
type gtp_end_user_addr: record {
pdp_type_org: count;
pdp_type_num: count;
## Set if the End User Address information element is IPv4/IPv6.
pdp_ip: addr &optional;
## Set if the End User Address information element isn't IPv4/IPv6.
pdp_other_addr: string &optional;
};
type gtp_rai: record {
mcc: count;
mnc: count;
lac: count;
rac: count;
};
type gtp_qos_profile: record {
priority: count;
data: string;
};
type gtp_private_extension: record {
id: count;
value: string;
};
type gtp_create_pdp_ctx_request_elements: record {
imsi: gtp_imsi &optional;
rai: gtp_rai &optional;
recovery: gtp_recovery &optional;
select_mode: gtp_selection_mode &optional;
data1: gtp_teid1;
cp: gtp_teid_control_plane &optional;
nsapi: gtp_nsapi;
linked_nsapi: gtp_nsapi &optional;
charge_character: gtp_charging_characteristics &optional;
trace_ref: gtp_trace_reference &optional;
trace_type: gtp_trace_type &optional;
end_user_addr: gtp_end_user_addr &optional;
ap_name: gtp_access_point_name &optional;
opts: gtp_proto_config_options &optional;
signal_addr: gtp_gsn_addr;
user_addr: gtp_gsn_addr;
msisdn: gtp_msisdn &optional;
qos_prof: gtp_qos_profile;
tft: gtp_tft &optional;
trigger_id: gtp_trigger_id &optional;
omc_id: gtp_omc_id &optional;
ext: gtp_private_extension &optional;
};
type gtp_create_pdp_ctx_response_elements: record {
cause: gtp_cause;
reorder_req: gtp_reordering_required &optional;
recovery: gtp_recovery &optional;
data1: gtp_teid1 &optional;
cp: gtp_teid_control_plane &optional;
charging_id: gtp_charging_id &optional;
end_user_addr: gtp_end_user_addr &optional;
opts: gtp_proto_config_options &optional;
cp_addr: gtp_gsn_addr &optional;
user_addr: gtp_gsn_addr &optional;
qos_prof: gtp_qos_profile &optional;
charge_gateway: gtp_charging_gateway_addr &optional;
ext: gtp_private_extension &optional;
};
type gtp_update_pdp_ctx_request_elements: record {
imsi: gtp_imsi &optional;
rai: gtp_rai &optional;
recovery: gtp_recovery &optional;
data1: gtp_teid1;
cp: gtp_teid_control_plane &optional;
nsapi: gtp_nsapi;
trace_ref: gtp_trace_reference &optional;
trace_type: gtp_trace_type &optional;
cp_addr: gtp_gsn_addr;
user_addr: gtp_gsn_addr;
qos_prof: gtp_qos_profile;
tft: gtp_tft &optional;
trigger_id: gtp_trigger_id &optional;
omc_id: gtp_omc_id &optional;
ext: gtp_private_extension &optional;
end_user_addr: gtp_end_user_addr &optional;
};
type gtp_update_pdp_ctx_response_elements: record {
cause: gtp_cause;
recovery: gtp_recovery &optional;
data1: gtp_teid1 &optional;
cp: gtp_teid_control_plane &optional;
charging_id: gtp_charging_id &optional;
cp_addr: gtp_gsn_addr &optional;
user_addr: gtp_gsn_addr &optional;
qos_prof: gtp_qos_profile &optional;
charge_gateway: gtp_charging_gateway_addr &optional;
ext: gtp_private_extension &optional;
};
type gtp_delete_pdp_ctx_request_elements: record {
teardown_ind: gtp_teardown_ind &optional;
nsapi: gtp_nsapi;
ext: gtp_private_extension &optional;
};
type gtp_delete_pdp_ctx_response_elements: record {
cause: gtp_cause;
ext: gtp_private_extension &optional;
};
## Definition of "secondary filters". A secondary filter is a BPF filter given as
## an index in this table. For each such filter, the corresponding event is raised for
## all matching packets.


@ -1,4 +1,5 @@
@load ./utils-commands
@load ./main
@load ./file-analysis
@load ./file-extract
@load ./gridftp


@ -0,0 +1,50 @@
@load ./main
@load base/utils/conn-ids
@load base/frameworks/file-analysis/main
module FTP;
export {
## Determines whether the default :bro:see:`get_file_handle` handler
## is used to return file handles to the file analysis framework.
## Redefine to true in order to provide a custom handler which overrides
## the default for FTP.
const disable_default_file_handle_provider: bool = F &redef;
## Default file handle provider for FTP.
function get_file_handle(c: connection, is_orig: bool): string
{
if ( [c$id$resp_h, c$id$resp_p] !in ftp_data_expected ) return "";
local info: FTP::Info = ftp_data_expected[c$id$resp_h, c$id$resp_p];
local rval = fmt("%s %s %s", ANALYZER_FTP_DATA, c$start_time,
id_string(c$id));
if ( info$passive )
# FTP client initiates data channel.
if ( is_orig )
# Don't care about FTP client data.
return "";
else
# Do care about FTP server data.
return rval;
else
# FTP server initiates data channel.
if ( is_orig )
# Do care about FTP server data.
return rval;
else
# Don't care about FTP client data.
return "";
}
}
module GLOBAL;
event get_file_handle(tag: AnalyzerTag, c: connection, is_orig: bool)
{
if ( tag != ANALYZER_FTP_DATA ) return;
if ( FTP::disable_default_file_handle_provider ) return;
return_file_handle(FTP::get_file_handle(c, is_orig));
}
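The handle returned by `get_file_handle` only has to be a string that stays stable across all chunks of one logical file. A rough Python sketch of how the FTP handler above composes it from the analyzer tag, connection start time, and connection endpoints (hypothetical helper names, simplified from the Bro code):

```python
# Sketch: build a stable file-handle string for an FTP data channel.
def id_string(cid):
    # Render the connection 4-tuple, analogous to conn-ids' id_string().
    return "%s:%d > %s:%d" % (cid["orig_h"], cid["orig_p"],
                              cid["resp_h"], cid["resp_p"])

def ftp_data_file_handle(start_time, cid):
    return "ANALYZER_FTP_DATA %s %s" % (start_time, id_string(cid))

handle = ftp_data_file_handle(1364500000.0,
    {"orig_h": "10.0.0.1", "orig_p": 34567,
     "resp_h": "10.0.0.2", "resp_p": 20})
```

Two data chunks belonging to the same transfer produce the same handle, so the file analysis framework can reassemble them into one file.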


@ -13,54 +13,96 @@ export {
const extraction_prefix = "ftp-item" &redef;
}
global extract_count: count = 0;
redef record Info += {
## On disk file where it was extracted to.
-	extraction_file: file &log &optional;
+	extraction_file: string &log &optional;
## Indicates if the current command/response pair should attempt to
## extract the file if a file was transferred.
extract_file: bool &default=F;
## Internal tracking of the total number of files extracted during this
## session.
num_extracted_files: count &default=0;
};
-event file_transferred(c: connection, prefix: string, descr: string,
-                       mime_type: string) &priority=3
+hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
+	&priority=5
{
local id = c$id;
if ( [id$resp_h, id$resp_p] !in ftp_data_expected )
return;
if ( trig != FileAnalysis::TRIGGER_NEW ) return;
if ( ! info?$source ) return;
if ( info$source != "FTP_DATA" ) return;
if ( ! info?$conns ) return;
local s = ftp_data_expected[id$resp_h, id$resp_p];
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
local extracting: bool = F;
if ( extract_file_types in s$mime_type )
for ( cid in info$conns )
{
s$extract_file = T;
++s$num_extracted_files;
local c: connection = info$conns[cid];
if ( [cid$resp_h, cid$resp_p] !in ftp_data_expected ) next;
local s = ftp_data_expected[cid$resp_h, cid$resp_p];
if ( ! s$extract_file ) next;
if ( ! extracting )
{
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
extracting = T;
++extract_count;
}
}
}
-event file_transferred(c: connection, prefix: string, descr: string,
-                       mime_type: string) &priority=-4
+hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
+	&priority=5
{
local id = c$id;
if ( [id$resp_h, id$resp_p] !in ftp_data_expected )
return;
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$mime_type ) return;
if ( ! info?$source ) return;
if ( info$source != "FTP_DATA" ) return;
if ( extract_file_types !in info$mime_type ) return;
local s = ftp_data_expected[id$resp_h, id$resp_p];
for ( act in info$actions )
if ( act$act == FileAnalysis::ACTION_EXTRACT ) return;
if ( s$extract_file )
{
local suffix = fmt("%d.dat", s$num_extracted_files);
local fname = generate_extraction_filename(extraction_prefix, c, suffix);
s$extraction_file = open(fname);
if ( s$passive )
set_contents_file(id, CONTENTS_RESP, s$extraction_file);
else
set_contents_file(id, CONTENTS_ORIG, s$extraction_file);
}
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
++extract_count;
FileAnalysis::add_action(info$file_id, [$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=-5
{
if ( trig != FileAnalysis::TRIGGER_EOF &&
trig != FileAnalysis::TRIGGER_DONE ) return;
if ( ! info?$source ) return;
if ( info$source != "FTP_DATA" ) return;
for ( act in info$actions )
if ( act$act == FileAnalysis::ACTION_EXTRACT )
{
local s: FTP::Info;
s$ts = network_time();
s$tags = set();
s$user = "<ftp-data>";
s$extraction_file = act$extract_filename;
if ( info?$conns )
for ( cid in info$conns )
{
s$uid = info$conns[cid]$uid;
s$id = cid;
break;
}
Log::write(FTP::LOG, s);
}
}
event log_ftp(rec: Info) &priority=-10


@ -16,7 +16,8 @@ export {
## List of commands that should have their command/response pairs logged.
	const logged_commands = {
-		"APPE", "DELE", "RETR", "STOR", "STOU", "ACCT"
+		"APPE", "DELE", "RETR", "STOR", "STOU", "ACCT", "PORT", "PASV", "EPRT",
+		"EPSV"
} &redef;
## This setting changes if passwords used in FTP sessions are captured or not.
@ -25,6 +26,18 @@ export {
## User IDs that can be considered "anonymous".
const guest_ids = { "anonymous", "ftp", "ftpuser", "guest" } &redef;
## The expected endpoints of an FTP data channel.
type ExpectedDataChannel: record {
## Whether PASV mode is toggled for control channel.
passive: bool &log;
## The host that will be initiating the data connection.
orig_h: addr &log;
## The host that will be accepting the data connection.
resp_h: addr &log;
## The port at which the acceptor is listening for the data connection.
resp_p: port &log;
};
type Info: record {
## Time when the command was sent.
ts: time &log;
@ -55,6 +68,9 @@ export {
## Arbitrary tags that may indicate a particular attribute of this command.
tags: set[string] &log &default=set();
## Expected FTP data channel.
data_channel: ExpectedDataChannel &log &optional;
## Current working directory that this session is in. By making
## the default value '/.', we can indicate that unless something
## more concrete is discovered that the existing but unknown
@ -103,7 +119,7 @@ redef dpd_config += { [ANALYZER_FTP] = [$ports = ports] };
redef likely_server_ports += { 21/tcp, 2811/tcp };
# Establish the variable for tracking expected connections.
-global ftp_data_expected: table[addr, port] of Info &create_expire=5mins;
+global ftp_data_expected: table[addr, port] of Info &read_expire=5mins;
event bro_init() &priority=5
{
@ -190,8 +206,19 @@ function ftp_message(s: Info)
delete s$mime_type;
delete s$mime_desc;
delete s$file_size;
# Same with data channel.
delete s$data_channel;
# Tags are cleared every time too.
delete s$tags;
s$tags = set();
}
function add_expected_data_channel(s: Info, chan: ExpectedDataChannel)
{
s$passive = chan$passive;
s$data_channel = chan;
ftp_data_expected[chan$resp_h, chan$resp_p] = s;
expect_connection(chan$orig_h, chan$resp_h, chan$resp_p, ANALYZER_FTP_DATA,
5mins);
}
event ftp_request(c: connection, command: string, arg: string) &priority=5
@ -226,9 +253,8 @@ event ftp_request(c: connection, command: string, arg: string) &priority=5
if ( data$valid )
{
c$ftp$passive=F;
-		ftp_data_expected[data$h, data$p] = c$ftp;
-		expect_connection(id$resp_h, data$h, data$p, ANALYZER_FILE, 5mins);
+		add_expected_data_channel(c$ftp, [$passive=F, $orig_h=id$resp_h,
+		                                  $resp_h=data$h, $resp_p=data$p]);
}
else
{
@ -280,8 +306,8 @@ event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) &prior
if ( code == 229 && data$h == [::] )
data$h = id$resp_h;
-	ftp_data_expected[data$h, data$p] = c$ftp;
-	expect_connection(id$orig_h, data$h, data$p, ANALYZER_FILE, 5mins);
+	add_expected_data_channel(c$ftp, [$passive=T, $orig_h=id$orig_h,
+	                                  $resp_h=data$h, $resp_p=data$p]);
}
else
{
@ -331,12 +357,9 @@ event file_transferred(c: connection, prefix: string, descr: string,
}
}
-event file_transferred(c: connection, prefix: string, descr: string,
-                       mime_type: string) &priority=-5
+event connection_state_remove(c: connection) &priority=-5
{
-	local id = c$id;
-	if ( [id$resp_h, id$resp_p] in ftp_data_expected )
-		delete ftp_data_expected[id$resp_h, id$resp_p];
+	delete ftp_data_expected[c$id$resp_h, c$id$resp_p];
}
# Use state remove event to cover connections terminated by RST.


@ -1,5 +1,6 @@
@load ./main
@load ./utils
@load ./file-analysis
@load ./file-ident
@load ./file-hash
@load ./file-extract


@ -0,0 +1,36 @@
@load ./main
@load ./utils
@load base/utils/conn-ids
@load base/frameworks/file-analysis/main
module HTTP;
export {
## Determines whether the default :bro:see:`get_file_handle` handler
## is used to return file handles to the file analysis framework.
## Redefine to true in order to provide a custom handler which overrides
## the default for HTTP.
const disable_default_file_handle_provider: bool = F &redef;
## Default file handle provider for HTTP.
function get_file_handle(c: connection, is_orig: bool): string
{
if ( ! c?$http ) return "";
if ( c$http$range_request )
return fmt("%s %s %s %s", ANALYZER_HTTP, is_orig, c$id$orig_h,
build_url(c$http));
return fmt("%s %s %s %s %s", ANALYZER_HTTP, c$start_time, is_orig,
c$http$trans_depth, id_string(c$id));
}
}
module GLOBAL;
event get_file_handle(tag: AnalyzerTag, c: connection, is_orig: bool)
{
if ( tag != ANALYZER_HTTP ) return;
if ( HTTP::disable_default_file_handle_provider ) return;
return_file_handle(HTTP::get_file_handle(c, is_orig));
}


@ -2,8 +2,7 @@
##! the message body from the server can be extracted with this script.
@load ./main
@load ./file-ident
@load base/utils/files
@load ./file-analysis
module HTTP;
@ -16,45 +15,77 @@ export {
redef record Info += {
## On-disk file where the response body was extracted to.
-	extraction_file: file &log &optional;
+	extraction_file: string &log &optional;
## Indicates if the response body is to be extracted or not. Must be
-	## set before or by the first :bro:id:`http_entity_data` event for the
-	## content.
+	## set before or by the first :bro:enum:`FileAnalysis::TRIGGER_NEW`
+	## for the file content.
extract_file: bool &default=F;
};
}
event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=-5
{
# Client body extraction is not currently supported in this script.
if ( is_orig )
return;
global extract_count: count = 0;
if ( c$http$first_chunk )
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$mime_type ) return;
if ( ! info?$source ) return;
if ( info$source != "HTTP" ) return;
if ( extract_file_types !in info$mime_type ) return;
for ( act in info$actions )
if ( act$act == FileAnalysis::ACTION_EXTRACT ) return;
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
++extract_count;
FileAnalysis::add_action(info$file_id, [$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
if ( c$http?$mime_type &&
extract_file_types in c$http$mime_type )
{
c$http$extract_file = T;
}
local c: connection = info$conns[cid];
if ( ! c?$http ) next;
c$http$extraction_file = fname;
}
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_NEW ) return;
if ( ! info?$source ) return;
if ( info$source != "HTTP" ) return;
if ( ! info?$conns ) return;
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
local extracting: bool = F;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( ! c?$http ) next;
if ( c$http$extract_file )
{
local suffix = fmt("%s_%d.dat", is_orig ? "orig" : "resp", c$http_state$current_response);
local fname = generate_extraction_filename(extraction_prefix, c, suffix);
if ( ! extracting )
{
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
extracting = T;
++extract_count;
}
c$http$extraction_file = open(fname);
enable_raw_output(c$http$extraction_file);
c$http$extraction_file = fname;
}
}
if ( c$http?$extraction_file )
print c$http$extraction_file, data;
}
event http_end_entity(c: connection, is_orig: bool)
{
if ( c$http?$extraction_file )
close(c$http$extraction_file);
}
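The extraction hooks above derive a unique on-disk filename from a prefix, the file ID, and a global counter (`fmt("%s-%s-%d.dat", ...)` plus `++extract_count`). A minimal Python sketch of that naming scheme; `make_extraction_filename` is an illustrative name, not part of Bro:

```python
# Sketch of the extraction-filename scheme used above:
# "<prefix>-<file_id>-<count>.dat", with a module-level counter so that
# successive extractions of the same file ID get distinct names.

extract_count = 0  # mirrors the script's global extract_count

def make_extraction_filename(prefix: str, file_id: str) -> str:
    """Return a unique filename and advance the counter."""
    global extract_count
    fname = "%s-%s-%d.dat" % (prefix, file_id, extract_count)
    extract_count += 1
    return fname
```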


@@ -1,15 +1,11 @@
##! Calculate hashes for HTTP body transfers.
@load ./file-ident
@load ./main
@load ./file-analysis
module HTTP;
export {
redef enum Notice::Type += {
## Indicates that an MD5 sum was calculated for an HTTP response body.
MD5,
};
redef record Info += {
## MD5 sum for a file transferred over HTTP calculated from the
## response body.
@@ -19,10 +15,6 @@ export {
## if a file should have an MD5 sum generated. It must be
## set to T at the time of or before the first chunk of body data.
calc_md5: bool &default=F;
## Indicates if an MD5 sum is being calculated for the current
## request/response pair.
md5_handle: opaque of md5 &optional;
};
## Generate MD5 sums for these filetypes.
@@ -31,62 +23,67 @@ export {
&redef;
}
## Initialize and calculate the hash.
event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=5
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( is_orig || ! c?$http ) return;
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$mime_type ) return;
if ( ! info?$source ) return;
if ( info$source != "HTTP" ) return;
if ( c$http$first_chunk )
if ( generate_md5 in info$mime_type )
FileAnalysis::add_action(info$file_id, [$act=FileAnalysis::ACTION_MD5]);
else if ( info?$conns )
{
if ( c$http$calc_md5 ||
(c$http?$mime_type && generate_md5 in c$http$mime_type) )
for ( cid in info$conns )
{
c$http$md5_handle = md5_hash_init();
local c: connection = info$conns[cid];
if ( ! c?$http ) next;
if ( c$http$calc_md5 )
{
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_MD5]);
return;
}
}
}
if ( c$http?$md5_handle )
md5_hash_update(c$http$md5_handle, data);
}
## In the event of a content gap during a file transfer, delete the state for
## the MD5 sum calculation and stop calculating the MD5 since it would be
## incorrect anyway.
event content_gap(c: connection, is_orig: bool, seq: count, length: count) &priority=5
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( is_orig || ! c?$http || ! c$http?$md5_handle ) return;
if ( trig != FileAnalysis::TRIGGER_DONE &&
trig != FileAnalysis::TRIGGER_EOF ) return;
if ( ! info?$source ) return;
if ( info$source != "HTTP" ) return;
if ( ! info?$conns ) return;
set_state(c, F, is_orig);
md5_hash_finish(c$http$md5_handle); # Ignore return value.
delete c$http$md5_handle;
}
local act: FileAnalysis::ActionArgs = [$act=FileAnalysis::ACTION_MD5];
## When the file finishes downloading, finish the hash and generate a notice.
event http_message_done(c: connection, is_orig: bool, stat: http_message_stat) &priority=-3
{
if ( is_orig || ! c?$http ) return;
if ( act !in info$actions ) return;
if ( c$http?$md5_handle )
local result = info$actions[act];
if ( ! result?$md5 ) return;
for ( cid in info$conns )
{
local url = build_url_http(c$http);
c$http$md5 = md5_hash_finish(c$http$md5_handle);
delete c$http$md5_handle;
local c: connection = info$conns[cid];
NOTICE([$note=MD5, $msg=fmt("%s %s %s", c$id$orig_h, c$http$md5, url),
$sub=c$http$md5, $conn=c]);
if ( ! c?$http ) next;
c$http$md5 = result$md5;
}
}
event connection_state_remove(c: connection) &priority=-5
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( c?$http_state &&
c$http_state$current_response in c$http_state$pending &&
c$http_state$pending[c$http_state$current_response]?$md5_handle )
{
# The MD5 sum isn't going to be saved anywhere since the entire
# body wouldn't have been seen anyway and we'd just be giving an
# incorrect MD5 sum.
md5_hash_finish(c$http$md5_handle);
delete c$http$md5_handle;
}
if ( trig != FileAnalysis::TRIGGER_GAP ) return;
if ( ! info?$source ) return;
if ( info$source != "HTTP" ) return;
FileAnalysis::remove_action(info$file_id, [$act=FileAnalysis::ACTION_MD5]);
}
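The removed handlers above follow the classic incremental-digest pattern: `md5_hash_init()` once, `md5_hash_update()` per body chunk, `md5_hash_finish()` at the end, and abandonment of the digest when a content gap makes it meaningless. A hedged Python equivalent using the standard `hashlib` (the gap is modeled as a `None` chunk, an assumption of this sketch):

```python
import hashlib

# Incremental hashing in the style of md5_hash_init/update/finish:
# feed each body chunk as it arrives, abandon the digest on a gap,
# since a digest over a stream with holes would be wrong anyway.

def md5_over_chunks(chunks):
    """Return the hex MD5 of a chunk stream, or None if a gap was seen."""
    h = hashlib.md5()
    for chunk in chunks:
        if chunk is None:        # None stands in for a content gap
            return None
        h.update(chunk)
    return h.hexdigest()
```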


@@ -1,15 +1,9 @@
##! Identification of file types in HTTP response bodies with file content sniffing.
@load base/frameworks/signatures
@load base/frameworks/notice
@load ./main
@load ./utils
# Add the magic number signatures to the core signature set.
@load-sigs ./file-ident.sig
# Ignore the signatures used to match files
redef Signatures::ignored_ids += /^matchfile-/;
@load ./file-analysis
module HTTP;
@@ -22,11 +16,6 @@ export {
redef record Info += {
## Mime type of response body identified by content sniffing.
mime_type: string &log &optional;
## Indicates that no data of the current file transfer has been
## seen yet. After the first :bro:id:`http_entity_data` event, it
## will be set to F.
first_chunk: bool &default=T;
};
## Mapping between mime types and regular expressions for URLs
@@ -43,43 +32,34 @@ export {
const ignored_incorrect_file_type_urls = /^$/ &redef;
}
event signature_match(state: signature_state, msg: string, data: string) &priority=5
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
# Only signatures matching file types are dealt with here.
if ( /^matchfile-/ !in state$sig_id ) return;
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$mime_type ) return;
if ( ! info?$source ) return;
if ( info$source != "HTTP" ) return;
if ( ! info?$conns ) return;
local c = state$conn;
set_state(c, F, F);
# Not much point in any of this if we don't know about the HTTP session.
if ( ! c?$http ) return;
# Set the mime type that was detected.
c$http$mime_type = msg;
if ( msg in mime_types_extensions &&
c$http?$uri && mime_types_extensions[msg] !in c$http$uri )
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( ! c?$http ) next;
c$http$mime_type = info$mime_type;
if ( info$mime_type !in mime_types_extensions ) next;
if ( ! c$http?$uri ) next;
if ( mime_types_extensions[info$mime_type] in c$http$uri ) next;
local url = build_url_http(c$http);
if ( url == ignored_incorrect_file_type_urls )
return;
if ( url == ignored_incorrect_file_type_urls ) next;
local message = fmt("%s %s %s", msg, c$http$method, url);
local message = fmt("%s %s %s", info$mime_type, c$http$method, url);
NOTICE([$note=Incorrect_File_Type,
$msg=message,
$conn=c]);
}
}
event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=5
{
if ( c$http$first_chunk && ! c$http?$mime_type )
c$http$mime_type = split1(identify_data(data, T), /;/)[1];
}
event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=-10
{
if ( c$http$first_chunk )
c$http$first_chunk=F;
}


@@ -1,144 +0,0 @@
# These signatures are used as a replacement for libmagic. The signature
# name needs to start with "matchfile" and the "event" directive takes
# the mime type of the file matched by the http-reply-body pattern.
#
# Signatures from: http://www.garykessler.net/library/file_sigs.html
signature matchfile-exe {
http-reply-body /\x4D\x5A/
event "application/x-dosexec"
}
signature matchfile-elf {
http-reply-body /\x7F\x45\x4C\x46/
event "application/x-executable"
}
signature matchfile-script {
# This is meant to match the interpreter declaration at the top of many
# interpreted scripts.
http-reply-body /\#\![[:blank:]]?\//
event "application/x-script"
}
signature matchfile-wmv {
http-reply-body /\x30\x26\xB2\x75\x8E\x66\xCF\x11\xA6\xD9\x00\xAA\x00\x62\xCE\x6C/
event "video/x-ms-wmv"
}
signature matchfile-flv {
http-reply-body /\x46\x4C\x56\x01/
event "video/x-flv"
}
signature matchfile-swf {
http-reply-body /[\x46\x43]\x57\x53/
event "application/x-shockwave-flash"
}
signature matchfile-jar {
http-reply-body /\x5F\x27\xA8\x89/
event "application/java-archive"
}
signature matchfile-class {
http-reply-body /\xCA\xFE\xBA\xBE/
event "application/java-byte-code"
}
signature matchfile-msoffice-2007 {
# MS Office 2007 XML documents
http-reply-body /\x50\x4B\x03\x04\x14\x00\x06\x00/
event "application/msoffice"
}
signature matchfile-msoffice {
# Older MS Office files
http-reply-body /\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1/
event "application/msoffice"
}
signature matchfile-rtf {
http-reply-body /\x7B\x5C\x72\x74\x66\x31/
event "application/rtf"
}
signature matchfile-lnk {
http-reply-body /\x4C\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x00\x00\x46/
event "application/x-ms-shortcut"
}
signature matchfile-torrent {
http-reply-body /\x64\x38\x3A\x61\x6E\x6E\x6F\x75\x6E\x63\x65/
event "application/x-bittorrent"
}
signature matchfile-pdf {
http-reply-body /\x25\x50\x44\x46/
event "application/pdf"
}
signature matchfile-html {
http-reply-body /<[hH][tT][mM][lL]/
event "text/html"
}
signature matchfile-html2 {
http-reply-body /<![dD][oO][cC][tT][yY][pP][eE][[:blank:]][hH][tT][mM][lL]/
event "text/html"
}
signature matchfile-xml {
http-reply-body /<\??[xX][mM][lL]/
event "text/xml"
}
signature matchfile-gif {
http-reply-body /\x47\x49\x46\x38[\x37\x39]\x61/
event "image/gif"
}
signature matchfile-jpg {
http-reply-body /\xFF\xD8\xFF[\xDB\xE0\xE1\xE2\xE3\xE8]..[\x4A\x45\x53][\x46\x78\x50][\x49\x69][\x46\x66]/
event "image/jpeg"
}
signature matchfile-tiff {
http-reply-body /\x4D\x4D\x00[\x2A\x2B]/
event "image/tiff"
}
signature matchfile-png {
http-reply-body /\x89\x50\x4e\x47/
event "image/png"
}
signature matchfile-zip {
http-reply-body /\x50\x4B\x03\x04/
event "application/zip"
}
signature matchfile-bzip {
http-reply-body /\x42\x5A\x68/
event "application/bzip2"
}
signature matchfile-gzip {
http-reply-body /\x1F\x8B\x08/
event "application/x-gzip"
}
signature matchfile-cab {
http-reply-body /\x4D\x53\x43\x46/
event "application/vnd.ms-cab-compressed"
}
signature matchfile-rar {
http-reply-body /\x52\x61\x72\x21\x1A\x07\x00/
event "application/x-rar-compressed"
}
signature matchfile-7z {
http-reply-body /\x37\x7A\xBC\xAF\x27\x1C/
event "application/x-7z-compressed"
}
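The signatures above replace libmagic with fixed magic-byte patterns matched against the start of the HTTP reply body. A simplified Python sketch of that idea as a prefix table; this is a subset for illustration only, since several of the real signatures (e.g. GIF's `GIF8[79]a`, JPEG's character classes) need regex matching that a plain prefix cannot express:

```python
# A few of the magic-number checks above, reduced to literal prefixes.

MAGIC = [
    (b"MZ", "application/x-dosexec"),
    (b"\x7fELF", "application/x-executable"),
    (b"%PDF", "application/pdf"),
    (b"\x89PNG", "image/png"),
    (b"PK\x03\x04", "application/zip"),
    (b"\x1f\x8b\x08", "application/x-gzip"),
    (b"Rar!\x1a\x07\x00", "application/x-rar-compressed"),
]

def sniff_mime(body: bytes) -> str:
    """Return the mime type of the first matching signature, else ""."""
    for magic, mime in MAGIC:
        if body.startswith(magic):
            return mime
    return ""
```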


@@ -71,6 +71,10 @@ export {
## All of the headers that may indicate if the request was proxied.
proxied: set[string] &log &optional;
## Indicates if this request can assume 206 partial content in
## response.
range_request: bool &default=F;
};
## Structure to maintain state for an HTTP connection with multiple
@@ -236,6 +240,9 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
# The split is done to remove the occasional port value that shows up here.
c$http$host = split1(value, /:/)[1];
else if ( name == "RANGE" )
c$http$range_request = T;
else if ( name == "USER-AGENT" )
c$http$user_agent = value;


@@ -1,2 +1,3 @@
@load ./main
@load ./dcc-send
@load ./file-analysis


@@ -30,67 +30,144 @@ export {
dcc_mime_type: string &log &optional;
## The file handle for the file to be extracted
extraction_file: file &log &optional;
extraction_file: string &log &optional;
## A boolean to indicate if the current file transfer should be extracted.
extract_file: bool &default=F;
## The count of the number of file that have been extracted during the session.
num_extracted_files: count &default=0;
};
}
global dcc_expected_transfers: table[addr, port] of Info = table();
global dcc_expected_transfers: table[addr, port] of Info &read_expire=5mins;
event file_transferred(c: connection, prefix: string, descr: string,
mime_type: string) &priority=3
global extract_count: count = 0;
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
local id = c$id;
if ( [id$resp_h, id$resp_p] !in dcc_expected_transfers )
return;
if ( trig != FileAnalysis::TRIGGER_NEW ) return;
if ( ! info?$source ) return;
if ( info$source != "IRC_DATA" ) return;
if ( ! info?$conns ) return;
local irc = dcc_expected_transfers[id$resp_h, id$resp_p];
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
local extracting: bool = F;
irc$dcc_mime_type = split1(mime_type, /;/)[1];
if ( extract_file_types == irc$dcc_mime_type )
for ( cid in info$conns )
{
irc$extract_file = T;
}
local c: connection = info$conns[cid];
if ( irc$extract_file )
{
local suffix = fmt("%d.dat", ++irc$num_extracted_files);
local fname = generate_extraction_filename(extraction_prefix, c, suffix);
irc$extraction_file = open(fname);
if ( [cid$resp_h, cid$resp_p] !in dcc_expected_transfers ) next;
local s = dcc_expected_transfers[cid$resp_h, cid$resp_p];
if ( ! s$extract_file ) next;
if ( ! extracting )
{
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
extracting = T;
++extract_count;
}
s$extraction_file = fname;
}
}
event file_transferred(c: connection, prefix: string, descr: string,
mime_type: string) &priority=-4
function set_dcc_mime(info: FileAnalysis::Info)
{
local id = c$id;
if ( [id$resp_h, id$resp_p] !in dcc_expected_transfers )
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( [cid$resp_h, cid$resp_p] !in dcc_expected_transfers ) next;
local s = dcc_expected_transfers[cid$resp_h, cid$resp_p];
s$dcc_mime_type = info$mime_type;
}
}
function set_dcc_extraction_file(info: FileAnalysis::Info, filename: string)
{
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( [cid$resp_h, cid$resp_p] !in dcc_expected_transfers ) next;
local s = dcc_expected_transfers[cid$resp_h, cid$resp_p];
s$extraction_file = filename;
}
}
function log_dcc(info: FileAnalysis::Info)
{
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( [cid$resp_h, cid$resp_p] !in dcc_expected_transfers ) next;
local irc = dcc_expected_transfers[cid$resp_h, cid$resp_p];
local tmp = irc$command;
irc$command = "DCC";
Log::write(IRC::LOG, irc);
irc$command = tmp;
# Delete these values in case another DCC transfer
# happens during the IRC session.
delete irc$extract_file;
delete irc$extraction_file;
delete irc$dcc_file_name;
delete irc$dcc_file_size;
delete irc$dcc_mime_type;
return;
}
}
local irc = dcc_expected_transfers[id$resp_h, id$resp_p];
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$mime_type ) return;
if ( ! info?$source ) return;
if ( info$source != "IRC_DATA" ) return;
local tmp = irc$command;
irc$command = "DCC";
Log::write(IRC::LOG, irc);
irc$command = tmp;
set_dcc_mime(info);
if ( irc?$extraction_file )
set_contents_file(id, CONTENTS_RESP, irc$extraction_file);
if ( extract_file_types !in info$mime_type ) return;
# Delete these values in case another DCC transfer
# happens during the IRC session.
delete irc$extract_file;
delete irc$extraction_file;
delete irc$dcc_file_name;
delete irc$dcc_file_size;
delete irc$dcc_mime_type;
delete dcc_expected_transfers[id$resp_h, id$resp_p];
for ( act in info$actions )
if ( act$act == FileAnalysis::ACTION_EXTRACT ) return;
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
++extract_count;
FileAnalysis::add_action(info$file_id, [$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
set_dcc_extraction_file(info, fname);
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=-5
{
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$source ) return;
if ( info$source != "IRC_DATA" ) return;
log_dcc(info);
}
event irc_dcc_message(c: connection, is_orig: bool,
@@ -100,11 +177,11 @@ event irc_dcc_message(c: connection, is_orig: bool,
{
set_session(c);
if ( dcc_type != "SEND" )
return;
return;
c$irc$dcc_file_name = argument;
c$irc$dcc_file_size = size;
local p = count_to_port(dest_port, tcp);
expect_connection(to_addr("0.0.0.0"), address, p, ANALYZER_FILE, 5 min);
expect_connection(to_addr("0.0.0.0"), address, p, ANALYZER_IRC_DATA, 5 min);
dcc_expected_transfers[address, p] = c$irc;
}
@@ -114,3 +191,8 @@ event expected_connection_seen(c: connection, a: count) &priority=10
if ( [id$resp_h, id$resp_p] in dcc_expected_transfers )
add c$service["irc-dcc-data"];
}
event connection_state_remove(c: connection) &priority=-5
{
delete dcc_expected_transfers[c$id$resp_h, c$id$resp_p];
}


@@ -0,0 +1,30 @@
@load ./dcc-send.bro
@load base/utils/conn-ids
@load base/frameworks/file-analysis/main
module IRC;
export {
## Determines whether the default :bro:see:`get_file_handle` handler
## is used to return file handles to the file analysis framework.
## Redefine to true in order to provide a custom handler which overrides
## the default for IRC.
const disable_default_file_handle_provider: bool = F &redef;
## Default file handle provider for IRC.
function get_file_handle(c: connection, is_orig: bool): string
{
if ( is_orig ) return "";
return fmt("%s %s %s", ANALYZER_IRC_DATA, c$start_time,
id_string(c$id));
}
}
module GLOBAL;
event get_file_handle(tag: AnalyzerTag, c: connection, is_orig: bool)
{
if ( tag != ANALYZER_IRC_DATA ) return;
if ( IRC::disable_default_file_handle_provider ) return;
return_file_handle(IRC::get_file_handle(c, is_orig));
}
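The new `file-analysis.bro` above shows the framework's handle-provider pattern: the global `get_file_handle` event dispatches on analyzer tag, and the protocol's provider returns a string that uniquely identifies the file stream. A minimal Python sketch of that dispatch; the registry and dict-based connection record are illustrative assumptions, not framework API:

```python
# Per-protocol file-handle dispatch: each analyzer registers a provider;
# the framework asks the matching one for a handle string built from the
# analyzer tag, connection start time, and connection id.

providers = {}  # analyzer tag -> provider function

def register(tag, fn):
    providers[tag] = fn

def irc_handle(conn, is_orig):
    # Mirrors IRC::get_file_handle: only the responder side carries the file.
    if is_orig:
        return ""
    return "IRC_DATA %s %s" % (conn["start_time"], conn["id"])

register("IRC_DATA", irc_handle)

def get_file_handle(tag, conn, is_orig):
    """Look up the provider for this analyzer tag and ask it for a handle."""
    fn = providers.get(tag)
    return fn(conn, is_orig) if fn else ""
```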


@@ -1,3 +1,4 @@
@load ./main
@load ./entities
@load ./entities-excerpt
@load ./file-analysis

View file

@@ -9,44 +9,41 @@ export {
redef record SMTP::EntityInfo += {
## The entity body excerpt.
excerpt: string &log &default="";
## Internal tracking to know how much of the body should be included
## in the excerpt.
excerpt_len: count &optional;
};
## This is the default value for how much of the entity body should be
## included for all MIME entities.
const default_entity_excerpt_len = 0 &redef;
## This table defines how much of various entity bodies should be
## included in excerpts.
const entity_excerpt_len: table[string] of count = {}
&redef
&default = default_entity_excerpt_len;
}
event mime_segment_data(c: connection, length: count, data: string) &priority=-1
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( ! c?$smtp ) return;
if ( trig != FileAnalysis::TRIGGER_NEW ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
if ( c$smtp$current_entity$content_len == 0 )
c$smtp$current_entity$excerpt_len = entity_excerpt_len[c$smtp$current_entity$mime_type];
if ( default_entity_excerpt_len > info$bof_buffer_size )
info$bof_buffer_size = default_entity_excerpt_len;
}
event mime_segment_data(c: connection, length: count, data: string) &priority=-2
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( ! c?$smtp ) return;
if ( trig != FileAnalysis::TRIGGER_BOF_BUFFER ) return;
if ( ! info?$bof_buffer ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
if ( ! info?$conns ) return;
local ent = c$smtp$current_entity;
if ( ent$content_len < ent$excerpt_len )
for ( cid in info$conns )
{
if ( ent$content_len + length < ent$excerpt_len )
ent$excerpt = cat(ent$excerpt, data);
else
{
local x_bytes = ent$excerpt_len - ent$content_len;
ent$excerpt = cat(ent$excerpt, sub_bytes(data, 1, x_bytes));
}
local c: connection = info$conns[cid];
if ( ! c?$smtp ) next;
if ( default_entity_excerpt_len > 0 )
c$smtp$current_entity$excerpt =
info$bof_buffer[0:default_entity_excerpt_len];
}
}
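The rewritten excerpt logic above no longer accumulates chunks by hand: it grows the framework's beginning-of-file buffer to at least the excerpt length on `TRIGGER_NEW`, then slices the excerpt out of `bof_buffer` once the buffer fills. A hedged Python sketch of those two steps (function names are illustrative):

```python
# Two-step excerpt capture mirroring the hooks above: first make sure the
# beginning-of-file buffer is large enough, then slice the excerpt from it.

def required_bof_size(current: int, excerpt_len: int) -> int:
    """TRIGGER_NEW step: grow the buffer if the excerpt needs more bytes."""
    return max(current, excerpt_len)

def excerpt_from_bof(bof_buffer: bytes, excerpt_len: int) -> bytes:
    """TRIGGER_BOF_BUFFER step: take up to excerpt_len bytes."""
    return bof_buffer[:excerpt_len]
```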


@@ -7,11 +7,6 @@
module SMTP;
export {
redef enum Notice::Type += {
## Indicates that an MD5 sum was calculated for a MIME message.
MD5,
};
redef enum Log::ID += { ENTITIES_LOG };
type EntityInfo: record {
@@ -34,15 +29,12 @@ export {
## Optionally calculate the file's MD5 sum. Must be set prior to the
## first data chunk being seen in an event.
calc_md5: bool &default=F;
## This boolean value indicates if an MD5 sum is being calculated
## for the current file transfer.
md5_handle: opaque of md5 &optional;
## Optionally write the file to disk. Must be set prior to first
## data chunk being seen in an event.
extract_file: bool &default=F;
## Store the file handle here for the file currently being extracted.
extraction_file: file &log &optional;
extraction_file: string &log &optional;
};
redef record Info += {
@@ -51,9 +43,6 @@ export {
};
redef record State += {
## Store a count of the number of files that have been transferred in
## a conversation to create unique file names on disk.
num_extracted_files: count &default=0;
## Track the number of MIME encoded files transferred during a session.
mime_level: count &default=0;
};
@@ -77,6 +66,8 @@ export {
global log_mime: event(rec: EntityInfo);
}
global extract_count: count = 0;
event bro_init() &priority=5
{
Log::create_stream(SMTP::ENTITIES_LOG, [$columns=EntityInfo, $ev=log_mime]);
@@ -104,70 +95,151 @@ event mime_begin_entity(c: connection) &priority=10
set_session(c, T);
}
# This has priority -10 because other handlers need to know the current
# content_len before it's updated by this handler.
event mime_segment_data(c: connection, length: count, data: string) &priority=-10
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( ! c?$smtp ) return;
if ( trig != FileAnalysis::TRIGGER_NEW ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
if ( ! info?$conns ) return;
c$smtp$current_entity$content_len = c$smtp$current_entity$content_len + length;
}
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
local extracting: bool = F;
event mime_segment_data(c: connection, length: count, data: string) &priority=7
{
if ( ! c?$smtp ) return;
if ( c$smtp$current_entity$content_len == 0 )
c$smtp$current_entity$mime_type = split1(identify_data(data, T), /;/)[1];
}
event mime_segment_data(c: connection, length: count, data: string) &priority=-5
{
if ( ! c?$smtp ) return;
if ( c$smtp$current_entity$content_len == 0 )
for ( cid in info$conns )
{
local entity = c$smtp$current_entity;
if ( generate_md5 in entity$mime_type && ! never_calc_md5 )
entity$calc_md5 = T;
local c: connection = info$conns[cid];
if ( entity$calc_md5 )
entity$md5_handle = md5_hash_init();
}
if ( ! c?$smtp ) next;
if ( c$smtp$current_entity?$md5_handle )
md5_hash_update(entity$md5_handle, data);
}
if ( c$smtp$current_entity$extract_file )
{
if ( ! extracting )
{
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
extracting = T;
++extract_count;
}
## In the event of a content gap during the MIME transfer, delete the state for
## the MD5 sum calculation and stop calculating the MD5 since it would be
## incorrect anyway.
event content_gap(c: connection, is_orig: bool, seq: count, length: count) &priority=5
{
if ( is_orig || ! c?$smtp || ! c$smtp?$current_entity ) return;
c$smtp$current_entity$extraction_file = fname;
}
local entity = c$smtp$current_entity;
if ( entity?$md5_handle )
{
md5_hash_finish(entity$md5_handle);
delete entity$md5_handle;
if ( c$smtp$current_entity$calc_md5 )
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_MD5]);
}
}
event mime_end_entity(c: connection) &priority=-3
{
# TODO: this check is only due to a bug in mime_end_entity that
# causes the event to be generated twice for the same real event.
if ( ! c?$smtp || ! c$smtp?$current_entity )
function check_extract_by_type(info: FileAnalysis::Info)
{
if ( extract_file_types !in info$mime_type ) return;
for ( act in info$actions )
if ( act$act == FileAnalysis::ACTION_EXTRACT ) return;
local fname: string = fmt("%s-%s-%d.dat", extraction_prefix, info$file_id,
extract_count);
++extract_count;
FileAnalysis::add_action(info$file_id, [$act=FileAnalysis::ACTION_EXTRACT,
$extract_filename=fname]);
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( ! c?$smtp ) next;
c$smtp$current_entity$extraction_file = fname;
}
}
function check_md5_by_type(info: FileAnalysis::Info)
{
if ( never_calc_md5 ) return;
if ( generate_md5 !in info$mime_type ) return;
FileAnalysis::add_action(info$file_id, [$act=FileAnalysis::ACTION_MD5]);
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_TYPE ) return;
if ( ! info?$mime_type ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
if ( info?$conns )
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( ! c?$smtp ) next;
c$smtp$current_entity$mime_type = info$mime_type;
}
check_extract_by_type(info);
check_md5_by_type(info);
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_GAP ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( ! c?$smtp ) next;
if ( ! c$smtp?$current_entity ) next;
FileAnalysis::remove_action(info$file_id,
[$act=FileAnalysis::ACTION_MD5]);
}
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_EOF &&
trig != FileAnalysis::TRIGGER_DONE ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
local c: connection = info$conns[cid];
if ( ! c?$smtp ) next;
if ( ! c$smtp?$current_entity ) next;
# Only log if there was some content.
if ( info$seen_bytes == 0 ) next;
local act: FileAnalysis::ActionArgs = [$act=FileAnalysis::ACTION_MD5];
if ( act in info$actions )
{
local result = info$actions[act];
if ( result?$md5 )
c$smtp$current_entity$md5 = result$md5;
}
c$smtp$current_entity$content_len = info$seen_bytes;
Log::write(SMTP::ENTITIES_LOG, c$smtp$current_entity);
delete c$smtp$current_entity;
return;
local entity = c$smtp$current_entity;
if ( entity?$md5_handle )
{
entity$md5 = md5_hash_finish(entity$md5_handle);
delete entity$md5_handle;
NOTICE([$note=MD5, $msg=fmt("Calculated a hash for a MIME entity from %s", c$id$orig_h),
$sub=entity$md5, $conn=c]);
}
}
@@ -183,62 +255,3 @@ event mime_one_header(c: connection, h: mime_header_rec)
/[nN][aA][mM][eE][:blank:]*=/ in h$value )
c$smtp$current_entity$filename = extract_filename_from_content_disposition(h$value);
}
event mime_end_entity(c: connection) &priority=-5
{
if ( ! c?$smtp ) return;
# This check and the delete below are just to cope with a bug where
# mime_end_entity can be generated multiple times for the same event.
if ( ! c$smtp?$current_entity )
return;
# Only log if there was some content.
if ( c$smtp$current_entity$content_len > 0 )
Log::write(SMTP::ENTITIES_LOG, c$smtp$current_entity);
delete c$smtp$current_entity;
}
event mime_segment_data(c: connection, length: count, data: string) &priority=5
{
if ( ! c?$smtp ) return;
if ( extract_file_types in c$smtp$current_entity$mime_type )
c$smtp$current_entity$extract_file = T;
}
event mime_segment_data(c: connection, length: count, data: string) &priority=3
{
if ( ! c?$smtp ) return;
if ( c$smtp$current_entity$extract_file &&
c$smtp$current_entity$content_len == 0 )
{
local suffix = fmt("%d.dat", ++c$smtp_state$num_extracted_files);
local fname = generate_extraction_filename(extraction_prefix, c, suffix);
c$smtp$current_entity$extraction_file = open(fname);
enable_raw_output(c$smtp$current_entity$extraction_file);
}
}
event mime_segment_data(c: connection, length: count, data: string) &priority=-5
{
if ( ! c?$smtp ) return;
if ( c$smtp$current_entity$extract_file && c$smtp$current_entity?$extraction_file )
print c$smtp$current_entity$extraction_file, data;
}
event mime_end_entity(c: connection) &priority=-3
{
if ( ! c?$smtp ) return;
# TODO: this check is only due to a bug in mime_end_entity that
# causes the event to be generated twice for the same real event.
if ( ! c$smtp?$current_entity )
return;
if ( c$smtp$current_entity?$extraction_file )
close(c$smtp$current_entity$extraction_file);
}


@@ -0,0 +1,32 @@
@load ./main
@load ./entities
@load base/utils/conn-ids
@load base/frameworks/file-analysis/main
module SMTP;
export {
## Determines whether the default :bro:see:`get_file_handle` handler
## is used to return file handles to the file analysis framework.
## Redefine to true in order to provide a custom handler which overrides
## the default for SMTP.
const disable_default_file_handle_provider: bool = F &redef;
## Default file handle provider for SMTP.
function get_file_handle(c: connection, is_orig: bool): string
{
if ( ! c?$smtp ) return "";
return fmt("%s %s %s %s", ANALYZER_SMTP, c$start_time,
c$smtp$trans_depth, c$smtp_state$mime_level);
}
}
module GLOBAL;
event get_file_handle(tag: AnalyzerTag, c: connection, is_orig: bool)
{
if ( tag != ANALYZER_SMTP ) return;
if ( SMTP::disable_default_file_handle_provider ) return;
return_file_handle(SMTP::get_file_handle(c, is_orig));
}


@@ -67,11 +67,6 @@ export {
## (especially with large file transfers).
const disable_analyzer_after_detection = T &redef;
## The openssl command line utility. If it's in the path the default
## value will work, otherwise a full path string can be supplied for the
## utility.
const openssl_util = "openssl" &redef;
## The maximum amount of time a script can delay records from being logged.
const max_log_delay = 15secs &redef;


@@ -27,7 +27,7 @@ function compress_path(dir: string): string
const cdup_sep = /((\/)*([^\/]|\\\/)+)?((\/)+\.\.(\/)*)/;
local parts = split_n(dir, cdup_sep, T, 1);
if ( length(parts) > 1 )
if ( |parts| > 1 )
{
# reaching a point with two parent dir references back-to-back means
# we don't know about anything higher in the tree to pop off


@@ -6,7 +6,7 @@
## characters.
function is_string_binary(s: string): bool
{
return byte_len(gsub(s, /[\x00-\x7f]/, "")) * 100 / |s| >= 25;
return |gsub(s, /[\x00-\x7f]/, "")| * 100 / |s| >= 25;
}
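`is_string_binary` strips the ASCII range and asks whether what remains is at least 25% of the data. A byte-level Python equivalent (the empty-input guard is an addition of this sketch; the Bro version would divide by zero there):

```python
# Treat data as binary when at least 25% of its bytes fall outside the
# ASCII range 0x00-0x7f, mirroring is_string_binary above.

def is_string_binary(s: bytes) -> bool:
    if not s:
        return False  # guard added in this sketch: avoid division by zero
    non_ascii = sum(1 for b in s if b > 0x7f)
    return non_ascii * 100 // len(s) >= 25
```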
## Joins a set of string together, with elements delimited by a constant string.


@@ -1,15 +1,34 @@
@load base/frameworks/intel
@load base/protocols/smtp/file-analysis
@load base/utils/urls
@load ./where-locations
event mime_segment_data(c: connection, length: count, data: string) &priority=3
event intel_mime_data(info: FileAnalysis::Info, data: string)
{
local urls = find_all_urls_without_scheme(data);
for ( url in urls )
if ( ! info?$conns ) return;
for ( cid in info$conns )
{
Intel::seen([$str=url,
$str_type=Intel::URL,
$conn=c,
$where=SMTP::IN_MESSAGE]);
local c: connection = info$conns[cid];
local urls = find_all_urls_without_scheme(data);
for ( url in urls )
{
Intel::seen([$str=url,
$str_type=Intel::URL,
$conn=c,
$where=SMTP::IN_MESSAGE]);
}
}
}
hook FileAnalysis::policy(trig: FileAnalysis::Trigger, info: FileAnalysis::Info)
&priority=5
{
if ( trig != FileAnalysis::TRIGGER_NEW ) return;
if ( ! info?$source ) return;
if ( info$source != "SMTP" ) return;
FileAnalysis::add_action(info$file_id,
[$act=FileAnalysis::ACTION_DATA_EVENT,
$stream_event=intel_mime_data]);
}


@@ -2,6 +2,7 @@
##! a version of that software as old or older than the defined version a
##! notice will be generated.
@load base/frameworks/control
@load base/frameworks/notice
@load base/frameworks/software
@@ -13,17 +14,126 @@ export {
Vulnerable_Version,
};
type VulnerableVersionRange: record {
## The minimal version of a vulnerable version range. This
## field can be undefined if all previous versions of a piece
## of software are vulnerable.
min: Software::Version &optional;
## The maximum vulnerable version. This field is deliberately
## not optional because a maximum vulnerable version must
## always be defined. This assumption may become incorrect
## if all future versions of some software are to be considered
## vulnerable. :)
max: Software::Version;
};
## The DNS zone where runtime vulnerable software updates will
## be loaded from.
const vulnerable_versions_update_endpoint = "" &redef;
## The interval at which vulnerable versions should grab updates
## over DNS.
const vulnerable_versions_update_interval = 1hr &redef;
## This is a table of software versions indexed by the name of the
## software and yielding the latest version that is vulnerable.
const vulnerable_versions: table[string] of Version &redef;
## software and a set of version ranges that are declared to be
## vulnerable for that software.
const vulnerable_versions: table[string] of set[VulnerableVersionRange] = table() &redef;
}
global internal_vulnerable_versions: table[string] of set[VulnerableVersionRange] = table();
event Control::configuration_update()
{
internal_vulnerable_versions = table();
# Copy the const vulnerable versions into the global modifiable one.
for ( sw in vulnerable_versions )
internal_vulnerable_versions[sw] = vulnerable_versions[sw];
}
function decode_vulnerable_version_range(vuln_sw: string): VulnerableVersionRange
{
# Create a $max field with a placeholder value only because the
# $max field is not optional.
local vvr: Software::VulnerableVersionRange = [$max=[$major=0]];
if ( /max=/ !in vuln_sw )
{
Reporter::warning(fmt("The vulnerable software detection script encountered a version range without the required max value: %s", vuln_sw));
return vvr;
}
local versions = split1(vuln_sw, /\x09/);
for ( i in versions )
{
local field_and_ver = split1(versions[i], /=/);
if ( |field_and_ver| != 2 )
return vvr; #failure!
local ver = Software::parse(field_and_ver[2])$version;
if ( field_and_ver[1] == "min" )
vvr$min = ver;
else if ( field_and_ver[1] == "max" )
vvr$max = ver;
}
return vvr;
}
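For illustration, the tab-separated `min=`/`max=` record decoding above can be sketched in Python (a simplification, not Bro API: versions become plain int tuples, and a missing `max` raises instead of warning):

```python
def decode_vulnerable_version_range(record):
    """Parse a record like 'min=1.6.0\tmax=1.7.2' into a range dict.

    Mirrors the Bro logic above: 'max' is required, 'min' is optional
    (absent means all earlier versions are vulnerable).
    """
    vvr = {"min": None, "max": None}
    if "max=" not in record:
        raise ValueError("vulnerable version range has no max value: %r" % record)
    for part in record.split("\t"):
        field, _, ver = part.partition("=")
        if field in ("min", "max"):
            # Keep versions as tuples of ints for easy comparison.
            vvr[field] = tuple(int(x) for x in ver.split(".") if x.isdigit())
    return vvr
```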
event grab_vulnerable_versions(i: count)
{
if ( vulnerable_versions_update_endpoint == "" )
{
# Reschedule this event in case the user updates the setting at runtime.
schedule vulnerable_versions_update_interval { grab_vulnerable_versions(1) };
return;
}
when ( local result = lookup_hostname_txt(cat(i,".",vulnerable_versions_update_endpoint)) )
{
local parts = split1(result, /\x09/);
if ( |parts| != 2 ) #failure or end of list!
{
schedule vulnerable_versions_update_interval { grab_vulnerable_versions(1) };
return;
}
local sw = parts[1];
local vvr = decode_vulnerable_version_range(parts[2]);
if ( sw !in internal_vulnerable_versions )
internal_vulnerable_versions[sw] = set();
add internal_vulnerable_versions[sw][vvr];
event grab_vulnerable_versions(i+1);
}
timeout 5secs
{
# In case a lookup fails, try starting over in one minute.
schedule 1min { grab_vulnerable_versions(1) };
}
}
event bro_init()
{
event grab_vulnerable_versions(1);
}
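The update scheme above walks numbered TXT records (`1.<zone>`, `2.<zone>`, ...) until a lookup fails. A minimal Python sketch, with DNS lookups simulated by a dict (the zone name and record contents are hypothetical examples):

```python
# Sketch of the DNS-based update walk; lookups are simulated by a dict.
FAKE_ZONE = {
    "1.vulnerable.example.com": "Flash\tmax=10.2.153",
    "2.vulnerable.example.com": "Java\tmin=1.6.0\tmax=1.6.0",
}

def grab_vulnerable_versions(zone, lookup_txt):
    """Walk 1.<zone>, 2.<zone>, ... until a record is missing."""
    results = []
    i = 1
    while True:
        txt = lookup_txt("%d.%s" % (i, zone))
        if txt is None:
            break  # end of list (or lookup failure): stop and retry later
        sw, _, vrange = txt.partition("\t")
        results.append((sw, vrange))
        i += 1
    return results
```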
event log_software(rec: Info)
{
if ( rec$name in vulnerable_versions &&
cmp_versions(rec$version, vulnerable_versions[rec$name]) <= 0 )
if ( rec$name !in internal_vulnerable_versions )
return;
for ( version_range in internal_vulnerable_versions[rec$name] )
{
NOTICE([$note=Vulnerable_Version, $src=rec$host,
$msg=fmt("A vulnerable version of software was detected: %s", software_fmt(rec))]);
if ( cmp_versions(rec$version, version_range$max) <= 0 &&
(!version_range?$min || cmp_versions(rec$version, version_range$min) >= 0) )
{
# The software is inside a vulnerable version range.
NOTICE([$note=Vulnerable_Version, $src=rec$host,
$msg=fmt("%s is running %s which is vulnerable.", rec$host, software_fmt(rec)),
$sub=software_fmt(rec)]);
}
}
}
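The range test inside `log_software` above reduces to: vulnerable iff the version is at most `max` and, when a `min` is defined, at least `min`. A Python sketch of that check (versions as tuples compared lexicographically, standing in for `cmp_versions`):

```python
def in_vulnerable_range(version, vrange):
    """True if version falls inside the range: at most 'max', and at
    least 'min' when a minimum is defined (a missing 'min' means all
    earlier versions are vulnerable)."""
    if version > vrange["max"]:
        return False
    if vrange.get("min") is not None and version < vrange["min"]:
        return False
    return True
```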

View file

@ -32,7 +32,7 @@ event log_http(rec: HTTP::Info)
{
# Data is returned as "<dateFirstDetected> <detectionRate>"
local MHR_answer = split1(MHR_result, / /);
if ( length(MHR_answer) == 2 && to_count(MHR_answer[2]) >= MHR_threshold )
if ( |MHR_answer| == 2 && to_count(MHR_answer[2]) >= MHR_threshold )
{
local url = HTTP::build_url_http(rec);
local message = fmt("%s %s %s", rec$id$orig_h, rec$md5, url);

View file

@ -8,10 +8,6 @@
##! own certificate files and no duplicate checking is done across
##! clusters so each node would log each certificate.
##!
##! - If there is a certificate input based vulnerability found in the
##! openssl command line utility, you could be in trouble because this
##! script uses that utility to convert from DER to PEM certificates.
##!
@load base/protocols/ssl
@load base/utils/directions-and-hosts
@ -35,6 +31,7 @@ event ssl_established(c: connection) &priority=5
{
if ( ! c$ssl?$cert )
return;
if ( ! addr_matches_host(c$id$resp_h, extract_certs_pem) )
return;
@ -43,7 +40,24 @@ event ssl_established(c: connection) &priority=5
return;
add extracted_certs[c$ssl$cert_hash];
local side = Site::is_local_addr(c$id$resp_h) ? "local" : "remote";
local cmd = fmt("%s x509 -inform DER -outform PEM >> certs-%s.pem", openssl_util, side);
piped_exec(cmd, c$ssl$cert);
local filename = Site::is_local_addr(c$id$resp_h) ? "certs-local.pem" : "certs-remote.pem";
local outfile = open_for_append(filename);
print outfile, "-----BEGIN CERTIFICATE-----";
# Encode to base64 and wrap at 50 characters per line; otherwise openssl won't accept it later.
local lines = split_all(encode_base64(c$ssl$cert), /.{50}/);
local i = 1;
for ( line in lines )
{
if ( |lines[i]| > 0 )
{
print outfile, lines[i];
}
i += 1;
}
print outfile, "-----END CERTIFICATE-----";
print outfile, "";
close(outfile);
}
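The PEM writing above (base64 body split into fixed-width lines between BEGIN/END markers) can be sketched in Python; the helper name is illustrative, and note the script wraps at 50 columns where classic PEM uses 64:

```python
import base64

def der_to_pem(der_bytes, width=50):
    """Wrap a DER certificate as PEM, splitting the base64 body into
    fixed-width lines the way the script above does."""
    b64 = base64.b64encode(der_bytes).decode("ascii")
    lines = [b64[i:i + width] for i in range(0, len(b64), width)]
    return "\n".join(["-----BEGIN CERTIFICATE-----"] + lines +
                     ["-----END CERTIFICATE-----", ""])
```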

View file

@ -14,13 +14,6 @@
# information.
@load frameworks/software/vulnerable
# Example vulnerable software. This needs to be updated and maintained over
# time as new vulnerabilities are discovered.
redef Software::vulnerable_versions += {
["Flash"] = [$major=10,$minor=2,$minor2=153,$addl="1"],
["Java"] = [$major=1,$minor=6,$minor2=0,$addl="22"],
};
# Detect software changing (e.g. attacker installing hacked SSHD).
@load frameworks/software/version-changes

View file

@ -150,6 +150,10 @@ const Analyzer::Config Analyzer::analyzer_configs[] = {
{ AnalyzerTag::File, "FILE", File_Analyzer::InstantiateAnalyzer,
File_Analyzer::Available, 0, false },
{ AnalyzerTag::IRC_Data, "IRC_DATA", IRC_Data::InstantiateAnalyzer,
IRC_Data::Available, 0, false },
{ AnalyzerTag::FTP_Data, "FTP_DATA", FTP_Data::InstantiateAnalyzer,
FTP_Data::Available, 0, false },
{ AnalyzerTag::Backdoor, "BACKDOOR",
BackDoor_Analyzer::InstantiateAnalyzer,
BackDoor_Analyzer::Available, 0, false },

View file

@ -41,7 +41,7 @@ namespace AnalyzerTag {
GTPv1,
// Other
File, Backdoor, InterConn, SteppingStone, TCPStats,
File, IRC_Data, FTP_Data, Backdoor, InterConn, SteppingStone, TCPStats,
ConnSize,
// Support-analyzers

View file

@ -1,10 +1,48 @@
#include "config.h"
#include "Base64.h"
#include <math.h>
int Base64Decoder::default_base64_table[256];
const string Base64Decoder::default_alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
int Base64Converter::default_base64_table[256];
const string Base64Converter::default_alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
int* Base64Decoder::InitBase64Table(const string& alphabet)
void Base64Converter::Encode(int len, const unsigned char* data, int* pblen, char** pbuf)
{
int blen;
char *buf;
if ( ! pbuf )
reporter->InternalError("nil pointer to encoding result buffer");
if ( *pbuf && (*pblen % 4 != 0) )
reporter->InternalError("Base64 encode buffer not a multiple of 4");
if ( *pbuf )
{
buf = *pbuf;
blen = *pblen;
}
else
{
blen = (int)(4 * ceil((double)len / 3));
*pbuf = buf = new char[blen];
*pblen = blen;
}
for ( int i = 0, j = 0; (i < len) && ( j < blen ); )
{
uint32_t bit32 = data[i++] << 16;
bit32 += (i++ < len ? data[i-1] : 0) << 8;
bit32 += i++ < len ? data[i-1] : 0;
buf[j++] = alphabet[(bit32 >> 18) & 0x3f];
buf[j++] = alphabet[(bit32 >> 12) & 0x3f];
buf[j++] = (i == (len+2)) ? '=' : alphabet[(bit32 >> 6) & 0x3f];
buf[j++] = (i >= (len+1)) ? '=' : alphabet[bit32 & 0x3f];
}
}
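The encoder above allocates its output as `4 * ceil(len / 3)` bytes and emits `=` padding for a trailing 1- or 2-byte group. A quick Python check of that length formula and padding count against the stdlib encoder:

```python
import base64
import math

def encoded_length(n):
    """Output size the C++ encoder allocates: 4 * ceil(n / 3)."""
    return 4 * math.ceil(n / 3)

def padding_count(n):
    """Number of '=' pad characters standard base64 appends."""
    return (3 - n % 3) % 3
```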
int* Base64Converter::InitBase64Table(const string& alphabet)
{
assert(alphabet.size() == 64);
@ -44,26 +82,42 @@ int* Base64Decoder::InitBase64Table(const string& alphabet)
return base64_table;
}
Base64Decoder::Base64Decoder(Analyzer* arg_analyzer, const string& alphabet)
Base64Converter::Base64Converter(Analyzer* arg_analyzer, const string& arg_alphabet)
{
base64_table = InitBase64Table(alphabet.size() ? alphabet : default_alphabet);
if ( arg_alphabet.size() > 0 )
{
assert(arg_alphabet.size() == 64);
alphabet = arg_alphabet;
}
else
{
alphabet = default_alphabet;
}
base64_table = 0;
base64_group_next = 0;
base64_padding = base64_after_padding = 0;
errored = 0;
analyzer = arg_analyzer;
}
Base64Decoder::~Base64Decoder()
Base64Converter::~Base64Converter()
{
if ( base64_table != default_base64_table )
delete base64_table;
}
int Base64Decoder::Decode(int len, const char* data, int* pblen, char** pbuf)
int Base64Converter::Decode(int len, const char* data, int* pblen, char** pbuf)
{
int blen;
char* buf;
// Initialize the table on the first call to Decode().
if ( ! base64_table )
base64_table = InitBase64Table(alphabet);
if ( ! pbuf )
reporter->InternalError("nil pointer to decoding result buffer");
@ -145,7 +199,7 @@ int Base64Decoder::Decode(int len, const char* data, int* pblen, char** pbuf)
return dlen;
}
int Base64Decoder::Done(int* pblen, char** pbuf)
int Base64Converter::Done(int* pblen, char** pbuf)
{
const char* padding = "===";
@ -177,7 +231,7 @@ BroString* decode_base64(const BroString* s, const BroString* a)
int rlen2, rlen = buf_len;
char* rbuf2, *rbuf = new char[rlen];
Base64Decoder dec(0, a ? a->CheckString() : "");
Base64Converter dec(0, a ? a->CheckString() : "");
if ( dec.Decode(s->Len(), (const char*) s->Bytes(), &rlen, &rbuf) == -1 )
goto err;
@ -195,3 +249,21 @@ err:
delete [] rbuf;
return 0;
}
BroString* encode_base64(const BroString* s, const BroString* a)
{
if ( a && a->Len() != 64 )
{
reporter->Error("base64 alphabet is not 64 characters: %s",
a->CheckString());
return 0;
}
char* outbuf = 0;
int outlen = 0;
Base64Converter enc(0, a ? a->CheckString() : "");
enc.Encode(s->Len(), (const unsigned char*) s->Bytes(), &outlen, &outbuf);
return new BroString(1, (u_char*)outbuf, outlen);
}

View file

@ -10,14 +10,13 @@
#include "Analyzer.h"
// Maybe we should have a base class for generic decoders?
class Base64Decoder {
class Base64Converter {
public:
// <analyzer> is used for error reporting, and it should be zero when
// the decoder is called by the built-in function decode_base64().
// the decoder is called by the built-in function decode_base64() or encode_base64().
// Empty alphabet indicates the default base64 alphabet.
Base64Decoder(Analyzer* analyzer, const string& alphabet = "");
~Base64Decoder();
Base64Converter(Analyzer* analyzer, const string& alphabet = "");
~Base64Converter();
// A note on Decode():
//
@ -30,6 +29,7 @@ public:
// is not enough output buffer space.
int Decode(int len, const char* data, int* blen, char** buf);
void Encode(int len, const unsigned char* data, int* blen, char** buf);
int Done(int* pblen, char** pbuf);
int HasData() const { return base64_group_next != 0; }
@ -51,19 +51,22 @@ protected:
char error_msg[256];
protected:
static const string default_alphabet;
string alphabet;
static int* InitBase64Table(const string& alphabet);
static int default_base64_table[256];
char base64_group[4];
int base64_group_next;
int base64_padding;
int base64_after_padding;
int* base64_table;
int errored; // if true, we encountered an error - skip further processing
Analyzer* analyzer;
int* base64_table;
static int* InitBase64Table(const string& alphabet);
static int default_base64_table[256];
static const string default_alphabet;
};
BroString* decode_base64(const BroString* s, const BroString* a = 0);
BroString* encode_base64(const BroString* s, const BroString* a = 0);
#endif /* base64_h */

View file

@ -369,7 +369,7 @@ VectorVal* BroString:: VecToPolicy(Vec* vec)
BroString* string = (*vec)[i];
StringVal* val = new StringVal(string->Len(),
(const char*) string->Bytes());
result->Assign(i+1, val, 0);
result->Assign(i+1, val);
}
return result;

View file

@ -451,14 +451,18 @@ set(bro_SRCS
input/readers/Ascii.cc
input/readers/Raw.cc
input/readers/Benchmark.cc
input/readers/Binary.cc
file_analysis/Manager.cc
file_analysis/Info.cc
file_analysis/InfoTimer.cc
file_analysis/PendingFile.cc
file_analysis/FileID.h
file_analysis/Action.h
file_analysis/ActionSet.cc
file_analysis/Extract.cc
file_analysis/Hash.cc
file_analysis/DataEvent.cc
file_analysis/analyzers/PE.cc
nb_dns.c

View file

@ -856,7 +856,7 @@ const char* CompositeHash::RecoverOneVal(const HashKey* k, const char* kp0,
if ( have_val )
kp1 = RecoverOneVal(k, kp1, k_end, vt->YieldType(), value,
false);
vv->Assign(index, value, 0);
vv->Assign(index, value);
}
pval = vv;

View file

@ -763,7 +763,7 @@ int dbg_handle_debug_input()
Frame* curr_frame = g_frame_stack.back();
const BroFunc* func = curr_frame->GetFunction();
if ( func )
current_module = func->GetID()->ModuleName();
current_module = extract_module_name(func->Name());
else
current_module = GLOBAL_MODULE_NAME;

View file

@ -6,6 +6,7 @@
#include "Func.h"
#include "NetVar.h"
#include "Trigger.h"
#include "file_analysis/Manager.h"
EventMgr mgr;
@ -124,6 +125,8 @@ void EventMgr::Drain()
// processing, we ensure that it's done at a regular basis by checking
// them here.
Trigger::EvaluatePending();
file_mgr->EventDrainDone();
}
void EventMgr::Describe(ODesc* d) const

View file

@ -485,7 +485,7 @@ Val* UnaryExpr::Eval(Frame* f) const
for ( unsigned int i = 0; i < v_op->Size(); ++i )
{
Val* v_i = v_op->Lookup(i);
result->Assign(i, v_i ? Fold(v_i) : 0, this);
result->Assign(i, v_i ? Fold(v_i) : 0);
}
Unref(v);
@ -625,10 +625,9 @@ Val* BinaryExpr::Eval(Frame* f) const
if ( v_op1->Lookup(i) && v_op2->Lookup(i) )
v_result->Assign(i,
Fold(v_op1->Lookup(i),
v_op2->Lookup(i)),
this);
v_op2->Lookup(i)));
else
v_result->Assign(i, 0, this);
v_result->Assign(i, 0);
// SetError("undefined element in vector operation");
}
@ -648,10 +647,9 @@ Val* BinaryExpr::Eval(Frame* f) const
if ( vv_i )
v_result->Assign(i,
is_vec1 ?
Fold(vv_i, v2) : Fold(v1, vv_i),
this);
Fold(vv_i, v2) : Fold(v1, vv_i));
else
v_result->Assign(i, 0, this);
v_result->Assign(i, 0);
// SetError("Undefined element in vector operation");
}
@ -1049,10 +1047,10 @@ Val* IncrExpr::Eval(Frame* f) const
if ( elt )
{
Val* new_elt = DoSingleEval(f, elt);
v_vec->Assign(i, new_elt, this, OP_INCR);
v_vec->Assign(i, new_elt, OP_INCR);
}
else
v_vec->Assign(i, 0, this, OP_INCR);
v_vec->Assign(i, 0, OP_INCR);
}
op->Assign(f, v_vec, OP_INCR);
}
@ -1919,7 +1917,7 @@ Val* BoolExpr::Eval(Frame* f) const
result = new VectorVal(Type()->AsVectorType());
result->Resize(vector_v->Size());
result->AssignRepeat(0, result->Size(),
scalar_v, this);
scalar_v);
}
else
result = vector_v->Ref()->AsVectorVal();
@ -1957,10 +1955,10 @@ Val* BoolExpr::Eval(Frame* f) const
(! op1->IsZero() && ! op2->IsZero()) :
(! op1->IsZero() || ! op2->IsZero());
result->Assign(i, new Val(local_result, TYPE_BOOL), this);
result->Assign(i, new Val(local_result, TYPE_BOOL));
}
else
result->Assign(i, 0, this);
result->Assign(i, 0);
}
Unref(v1);
@ -2334,10 +2332,9 @@ Val* CondExpr::Eval(Frame* f) const
if ( local_cond )
result->Assign(i,
local_cond->IsZero() ?
b->Lookup(i) : a->Lookup(i),
this);
b->Lookup(i) : a->Lookup(i));
else
result->Assign(i, 0, this);
result->Assign(i, 0);
}
return result;
@ -2507,15 +2504,27 @@ bool AssignExpr::TypeCheck(attr_list* attrs)
attr_copy->append((*attrs)[i]);
}
op2 = new TableConstructorExpr(op2->AsListExpr(), attr_copy);
if ( op1->Type()->IsSet() )
op2 = new SetConstructorExpr(op2->AsListExpr(), attr_copy);
else
op2 = new TableConstructorExpr(op2->AsListExpr(), attr_copy);
return true;
}
if ( bt1 == TYPE_VECTOR && bt2 == bt1 &&
op2->Type()->AsVectorType()->IsUnspecifiedVector() )
if ( bt1 == TYPE_VECTOR )
{
op2 = new VectorCoerceExpr(op2, op1->Type()->AsVectorType());
return true;
if ( bt2 == bt1 && op2->Type()->AsVectorType()->IsUnspecifiedVector() )
{
op2 = new VectorCoerceExpr(op2, op1->Type()->AsVectorType());
return true;
}
if ( op2->Tag() == EXPR_LIST )
{
op2 = new VectorConstructorExpr(op2->AsListExpr());
return true;
}
}
if ( op1->Type()->Tag() == TYPE_RECORD &&
@ -2961,7 +2970,7 @@ Val* IndexExpr::Eval(Frame* f) const
for ( unsigned int i = 0; i < v_v2->Size(); ++i )
{
if ( v_v2->Lookup(i)->AsBool() )
v_result->Assign(v_result->Size() + 1, v_v1->Lookup(i), this);
v_result->Assign(v_result->Size() + 1, v_v1->Lookup(i));
}
}
else
@ -2971,7 +2980,7 @@ Val* IndexExpr::Eval(Frame* f) const
// Probably only do this if *all* are negative.
v_result->Resize(v_v2->Size());
for ( unsigned int i = 0; i < v_v2->Size(); ++i )
v_result->Assign(i, v_v1->Lookup(v_v2->Lookup(i)->CoerceToInt()), this);
v_result->Assign(i, v_v1->Lookup(v_v2->Lookup(i)->CoerceToInt()));
}
}
else
@ -3048,7 +3057,7 @@ void IndexExpr::Assign(Frame* f, Val* v, Opcode op)
switch ( v1->Type()->Tag() ) {
case TYPE_VECTOR:
if ( ! v1->AsVectorVal()->Assign(v2, v, this, op) )
if ( ! v1->AsVectorVal()->Assign(v2, v, op) )
Internal("assignment failed");
break;
@ -3620,7 +3629,7 @@ Val* VectorConstructorExpr::Eval(Frame* f) const
{
Expr* e = exprs[i];
Val* v = e->Eval(f);
if ( ! vec->Assign(i, v, e) )
if ( ! vec->Assign(i, v) )
{
Error(fmt("type mismatch at index %d", i), e);
return 0;
@ -3644,7 +3653,7 @@ Val* VectorConstructorExpr::InitVal(const BroType* t, Val* aggr) const
Expr* e = exprs[i];
Val* v = check_and_promote(e->Eval(0), t->YieldType(), 1);
if ( ! v || ! vec->Assign(i, v, e) )
if ( ! v || ! vec->Assign(i, v) )
{
Error(fmt("initialization type mismatch at index %d", i), e);
return 0;
@ -3865,9 +3874,9 @@ Val* ArithCoerceExpr::Fold(Val* v) const
{
Val* elt = vv->Lookup(i);
if ( elt )
result->Assign(i, FoldSingleVal(elt, t), this);
result->Assign(i, FoldSingleVal(elt, t));
else
result->Assign(i, 0, this);
result->Assign(i, 0);
}
return result;
@ -4639,12 +4648,16 @@ Val* CallExpr::Eval(Frame* f) const
{
const ::Func* func = func_val->AsFunc();
calling_expr = this;
const CallExpr* current_call = f ? f->GetCall() : 0;
if ( f )
f->SetCall(this);
ret = func->Call(v, f); // No try/catch here; we pass exceptions upstream.
if ( f )
f->ClearCall();
f->SetCall(current_call);
// Don't Unref() the arguments, as Func::Call already did that.
delete v;
@ -4971,14 +4984,22 @@ Val* ListExpr::InitVal(const BroType* t, Val* aggr) const
{
ListVal* v = new ListVal(TYPE_ANY);
const type_list* tl = type->AsTypeList()->Types();
if ( exprs.length() != tl->length() )
{
Error("index mismatch", t);
return 0;
}
loop_over_list(exprs, i)
{
Val* vi = exprs[i]->InitVal(t, 0);
Val* vi = exprs[i]->InitVal((*tl)[i], 0);
if ( ! vi )
{
Unref(v);
return 0;
}
v->Append(vi);
}
return v;
@ -5042,7 +5063,7 @@ Val* ListExpr::InitVal(const BroType* t, Val* aggr) const
Expr* e = exprs[i];
check_and_promote_expr(e, vec->Type()->AsVectorType()->YieldType());
Val* v = e->Eval(0);
if ( ! vec->Assign(i, v, e) )
if ( ! vec->Assign(i, v) )
{
e->Error(fmt("type mismatch at index %d", i));
return 0;

View file

@ -3,34 +3,24 @@
#include "file_analysis/Manager.h"
#include "FileAnalyzer.h"
#include "Reporter.h"
#include "util.h"
magic_t File_Analyzer::magic = 0;
magic_t File_Analyzer::magic_mime = 0;
File_Analyzer::File_Analyzer(Connection* conn)
: TCP_ApplicationAnalyzer(AnalyzerTag::File, conn)
File_Analyzer::File_Analyzer(AnalyzerTag::Tag tag, Connection* conn)
: TCP_ApplicationAnalyzer(tag, conn)
{
buffer_len = 0;
if ( ! magic )
{
InitMagic(&magic, MAGIC_NONE);
InitMagic(&magic_mime, MAGIC_MIME);
}
char op[256], rp[256];
modp_ulitoa10(ntohs(conn->OrigPort()), op);
modp_ulitoa10(ntohs(conn->RespPort()), rp);
unique_file = "TCPFile " + conn->OrigAddr().AsString() + ":" + op + "->" +
conn->RespAddr().AsString() + ":" + rp;
bro_init_magic(&magic, MAGIC_NONE);
bro_init_magic(&magic_mime, MAGIC_MIME);
}
void File_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{
TCP_ApplicationAnalyzer::DeliverStream(len, data, orig);
file_mgr->DataIn(unique_file, data, len, Conn());
int n = min(len, BUFFER_SIZE - buffer_len);
if ( n )
@ -47,16 +37,12 @@ void File_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
void File_Analyzer::Undelivered(int seq, int len, bool orig)
{
TCP_ApplicationAnalyzer::Undelivered(seq, len, orig);
file_mgr->Gap(unique_file, seq, len);
}
void File_Analyzer::Done()
{
TCP_ApplicationAnalyzer::Done();
file_mgr->EndOfFile(unique_file, Conn());
if ( buffer_len && buffer_len != BUFFER_SIZE )
Identify();
}
@ -67,10 +53,10 @@ void File_Analyzer::Identify()
const char* mime = 0;
if ( magic )
descr = magic_buffer(magic, buffer, buffer_len);
descr = bro_magic_buffer(magic, buffer, buffer_len);
if ( magic_mime )
mime = magic_buffer(magic_mime, buffer, buffer_len);
mime = bro_magic_buffer(magic_mime, buffer, buffer_len);
val_list* vl = new val_list;
vl->append(BuildConnVal());
@ -80,17 +66,48 @@ void File_Analyzer::Identify()
ConnectionEvent(file_transferred, vl);
}
void File_Analyzer::InitMagic(magic_t* magic, int flags)
IRC_Data::IRC_Data(Connection* conn)
: File_Analyzer(AnalyzerTag::IRC_Data, conn)
{
*magic = magic_open(flags);
if ( ! *magic )
reporter->Error("can't init libmagic: %s", magic_error(*magic));
else if ( magic_load(*magic, 0) < 0 )
{
reporter->Error("can't load magic file: %s", magic_error(*magic));
magic_close(*magic);
*magic = 0;
}
}
void IRC_Data::Done()
{
File_Analyzer::Done();
file_mgr->EndOfFile(GetTag(), Conn());
}
void IRC_Data::DeliverStream(int len, const u_char* data, bool orig)
{
File_Analyzer::DeliverStream(len, data, orig);
file_mgr->DataIn(data, len, GetTag(), Conn(), orig);
}
void IRC_Data::Undelivered(int seq, int len, bool orig)
{
File_Analyzer::Undelivered(seq, len, orig);
file_mgr->Gap(seq, len, GetTag(), Conn(), orig);
}
FTP_Data::FTP_Data(Connection* conn)
: File_Analyzer(AnalyzerTag::FTP_Data, conn)
{
}
void FTP_Data::Done()
{
File_Analyzer::Done();
file_mgr->EndOfFile(GetTag(), Conn());
}
void FTP_Data::DeliverStream(int len, const u_char* data, bool orig)
{
File_Analyzer::DeliverStream(len, data, orig);
file_mgr->DataIn(data, len, GetTag(), Conn(), orig);
}
void FTP_Data::Undelivered(int seq, int len, bool orig)
{
File_Analyzer::Undelivered(seq, len, orig);
file_mgr->Gap(seq, len, GetTag(), Conn(), orig);
}

View file

@ -10,7 +10,7 @@
class File_Analyzer : public TCP_ApplicationAnalyzer {
public:
File_Analyzer(Connection* conn);
File_Analyzer(AnalyzerTag::Tag tag, Connection* conn);
virtual void Done();
@ -19,7 +19,7 @@ public:
void Undelivered(int seq, int len, bool orig);
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new File_Analyzer(conn); }
{ return new File_Analyzer(AnalyzerTag::File, conn); }
static bool Available() { return file_transferred; }
@ -32,12 +32,42 @@ protected:
char buffer[BUFFER_SIZE];
int buffer_len;
static void InitMagic(magic_t* magic, int flags);
static magic_t magic;
static magic_t magic_mime;
};
string unique_file;
class IRC_Data : public File_Analyzer {
public:
IRC_Data(Connection* conn);
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
void Undelivered(int seq, int len, bool orig);
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new IRC_Data(conn); }
static bool Available() { return true; }
};
class FTP_Data : public File_Analyzer {
public:
FTP_Data(Connection* conn);
virtual void Done();
virtual void DeliverStream(int len, const u_char* data, bool orig);
void Undelivered(int seq, int len, bool orig);
static Analyzer* InstantiateAnalyzer(Connection* conn)
{ return new FTP_Data(conn); }
static bool Available() { return true; }
};
#endif

View file

@ -100,6 +100,13 @@ void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt)
int offset = ip->FragOffset();
int len = ip->TotalLen();
int hdr_len = ip->HdrLen();
if ( len < hdr_len )
{
s->Weird("fragment_protocol_inconsistency", ip);
return;
}
int upper_seq = offset + len - hdr_len;
if ( ! offset )

View file

@ -87,8 +87,11 @@ Frame* Frame::Clone()
void Frame::SetTrigger(Trigger* arg_trigger)
{
ClearTrigger();
if ( arg_trigger )
Ref(arg_trigger);
trigger = arg_trigger;
}

View file

@ -54,13 +54,13 @@ bool did_builtin_init = false;
vector<Func*> Func::unique_ids;
Func::Func() : scope(0), id(0), return_value(0)
Func::Func() : scope(0), type(0)
{
unique_id = unique_ids.size();
unique_ids.push_back(this);
}
Func::Func(Kind arg_kind) : scope(0), kind(arg_kind), id(0), return_value(0)
Func::Func(Kind arg_kind) : scope(0), kind(arg_kind), type(0)
{
unique_id = unique_ids.size();
unique_ids.push_back(this);
@ -68,6 +68,7 @@ Func::Func(Kind arg_kind) : scope(0), kind(arg_kind), id(0), return_value(0)
Func::~Func()
{
Unref(type);
}
void Func::AddBody(Stmt* /* new_body */, id_list* /* new_inits */,
@ -129,6 +130,12 @@ bool Func::DoSerialize(SerialInfo* info) const
if ( ! SERIALIZE(char(kind) ) )
return false;
if ( ! type->Serialize(info) )
return false;
if ( ! SERIALIZE(Name()) )
return false;
// We don't serialize scope as only global functions are considered here
// anyway.
return true;
@ -160,12 +167,25 @@ bool Func::DoUnserialize(UnserialInfo* info)
return false;
kind = (Kind) c;
type = BroType::Unserialize(info);
if ( ! type )
return false;
const char* n;
if ( ! UNSERIALIZE_STR(&n, 0) )
return false;
name = n;
delete [] n;
return true;
}
void Func::DescribeDebug(ODesc* d, const val_list* args) const
{
id->Describe(d);
d->Add(Name());
RecordType* func_args = FType()->Args();
if ( args )
@ -196,21 +216,6 @@ void Func::DescribeDebug(ODesc* d, const val_list* args) const
}
}
void Func::SetID(ID *arg_id)
{
id = arg_id;
return_value =
new ID(string(string(id->Name()) + "_returnvalue").c_str(),
SCOPE_FUNCTION, false);
return_value->SetType(FType()->YieldType()->Ref());
}
ID* Func::GetReturnValueID() const
{
return return_value;
}
TraversalCode Func::Traverse(TraversalCallback* cb) const
{
// FIXME: Make a fake scope for builtins?
@ -226,12 +231,6 @@ TraversalCode Func::Traverse(TraversalCallback* cb) const
tc = scope->Traverse(cb);
HANDLE_TC_STMT_PRE(tc);
if ( GetReturnValueID() )
{
tc = GetReturnValueID()->Traverse(cb);
HANDLE_TC_STMT_PRE(tc);
}
for ( unsigned int i = 0; i < bodies.size(); ++i )
{
tc = bodies[i].stmts->Traverse(cb);
@ -249,7 +248,8 @@ BroFunc::BroFunc(ID* arg_id, Stmt* arg_body, id_list* aggr_inits,
int arg_frame_size, int priority)
: Func(BRO_FUNC)
{
id = arg_id;
name = arg_id->Name();
type = arg_id->Type()->Ref();
frame_size = arg_frame_size;
if ( arg_body )
@ -263,7 +263,6 @@ BroFunc::BroFunc(ID* arg_id, Stmt* arg_body, id_list* aggr_inits,
BroFunc::~BroFunc()
{
Unref(id);
for ( unsigned int i = 0; i < bodies.size(); ++i )
Unref(bodies[i].stmts);
}
@ -378,7 +377,8 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const
(flow != FLOW_RETURN /* we fell off the end */ ||
! result /* explicit return with no result */) &&
! f->HasDelayed() )
reporter->Warning("non-void function returns without a value: %s", id->Name());
reporter->Warning("non-void function returns without a value: %s",
Name());
if ( result && g_trace_state.DoTrace() )
{
@ -421,8 +421,7 @@ void BroFunc::AddBody(Stmt* new_body, id_list* new_inits, int new_frame_size,
void BroFunc::Describe(ODesc* d) const
{
if ( id )
id->Describe(d);
d->Add(Name());
d->NL();
d->AddCount(frame_size);
@ -450,14 +449,14 @@ IMPLEMENT_SERIAL(BroFunc, SER_BRO_FUNC);
bool BroFunc::DoSerialize(SerialInfo* info) const
{
DO_SERIALIZE(SER_BRO_FUNC, Func);
return id->Serialize(info) && SERIALIZE(frame_size);
return SERIALIZE(frame_size);
}
bool BroFunc::DoUnserialize(UnserialInfo* info)
{
DO_UNSERIALIZE(Func);
id = ID::Unserialize(info);
return id && UNSERIALIZE(&frame_size);
return UNSERIALIZE(&frame_size);
}
BuiltinFunc::BuiltinFunc(built_in_func arg_func, const char* arg_name,
@ -465,15 +464,16 @@ BuiltinFunc::BuiltinFunc(built_in_func arg_func, const char* arg_name,
: Func(BUILTIN_FUNC)
{
func = arg_func;
name = copy_string(make_full_var_name(GLOBAL_MODULE_NAME, arg_name).c_str());
name = make_full_var_name(GLOBAL_MODULE_NAME, arg_name);
is_pure = arg_is_pure;
id = lookup_ID(name, GLOBAL_MODULE_NAME, false);
ID* id = lookup_ID(Name(), GLOBAL_MODULE_NAME, false);
if ( ! id )
reporter->InternalError("built-in function %s missing", name);
reporter->InternalError("built-in function %s missing", Name());
if ( id->HasVal() )
reporter->InternalError("built-in function %s multiply defined", name);
reporter->InternalError("built-in function %s multiply defined", Name());
type = id->Type()->Ref();
id->SetVal(new Val(this));
}
@ -491,7 +491,7 @@ Val* BuiltinFunc::Call(val_list* args, Frame* parent) const
#ifdef PROFILE_BRO_FUNCTIONS
DEBUG_MSG("Function: %s\n", Name());
#endif
SegmentProfiler(segment_logger, name);
SegmentProfiler(segment_logger, Name());
if ( sample_logger )
sample_logger->FunctionSeen(this);
@ -522,8 +522,7 @@ Val* BuiltinFunc::Call(val_list* args, Frame* parent) const
void BuiltinFunc::Describe(ODesc* d) const
{
if ( id )
id->Describe(d);
d->Add(Name());
d->AddCount(is_pure);
}
@ -532,16 +531,13 @@ IMPLEMENT_SERIAL(BuiltinFunc, SER_BUILTIN_FUNC);
bool BuiltinFunc::DoSerialize(SerialInfo* info) const
{
DO_SERIALIZE(SER_BUILTIN_FUNC, Func);
// We ignore the ID. Func::Serialize() will rebind us anyway.
return SERIALIZE(name);
return true;
}
bool BuiltinFunc::DoUnserialize(UnserialInfo* info)
{
DO_UNSERIALIZE(Func);
id = 0;
return UNSERIALIZE_STR(&name, 0);
return true;
}
void builtin_error(const char* msg, BroObj* arg)

View file

@ -47,15 +47,11 @@ public:
virtual void SetScope(Scope* newscope) { scope = newscope; }
virtual Scope* GetScope() const { return scope; }
virtual FuncType* FType() const
{
return (FuncType*) id->Type()->AsFuncType();
}
virtual FuncType* FType() const { return type->AsFuncType(); }
Kind GetKind() const { return kind; }
const ID* GetID() const { return id; }
void SetID(ID *arg_id);
const char* Name() const { return name.c_str(); }
virtual void Describe(ODesc* d) const = 0;
virtual void DescribeDebug(ODesc* d, const val_list* args) const;
@ -64,7 +60,6 @@ public:
bool Serialize(SerialInfo* info) const;
static Func* Unserialize(UnserialInfo* info);
ID* GetReturnValueID() const;
virtual TraversalCode Traverse(TraversalCallback* cb) const;
uint32 GetUniqueFuncID() const { return unique_id; }
@ -79,8 +74,8 @@ protected:
vector<Body> bodies;
Scope* scope;
Kind kind;
ID* id;
ID* return_value;
BroType* type;
string name;
uint32 unique_id;
static vector<Func*> unique_ids;
};
@ -119,18 +114,16 @@ public:
int IsPure() const;
Val* Call(val_list* args, Frame* parent) const;
const char* Name() const { return name; }
built_in_func TheFunc() const { return func; }
void Describe(ODesc* d) const;
protected:
BuiltinFunc() { func = 0; name = 0; is_pure = 0; }
BuiltinFunc() { func = 0; is_pure = 0; }
DECLARE_SERIAL(BuiltinFunc);
built_in_func func;
const char* name;
int is_pure;
};

View file

@ -12,6 +12,7 @@
#include "HTTP.h"
#include "Event.h"
#include "MIME.h"
#include "file_analysis/Manager.h"
const bool DEBUG_http = false;
@ -41,9 +42,13 @@ HTTP_Entity::HTTP_Entity(HTTP_Message *arg_message, MIME_Entity* parent_entity,
expect_data_length = 0;
body_length = 0;
header_length = 0;
deliver_body = (http_entity_data != 0);
deliver_body = true;
encoding = IDENTITY;
zip = 0;
is_partial_content = false;
offset = 0;
instance_length = -1; // unspecified
send_size = true;
}
void HTTP_Entity::EndOfData()
@ -233,6 +238,11 @@ int HTTP_Entity::Undelivered(int64_t len)
if ( end_of_data && in_header )
return 0;
file_mgr->Gap(body_length, len,
http_message->MyHTTP_Analyzer()->GetTag(),
http_message->MyHTTP_Analyzer()->Conn(),
http_message->IsOrig());
if ( chunked_transfer_state != NON_CHUNKED_TRANSFER )
{
if ( chunked_transfer_state == EXPECT_CHUNK_DATA &&
@ -277,6 +287,38 @@ void HTTP_Entity::SubmitData(int len, const char* buf)
{
if ( deliver_body )
MIME_Entity::SubmitData(len, buf);
if ( send_size && ( encoding == GZIP || encoding == DEFLATE ) )
// Auto-decompress in DeliverBody invalidates sizes derived from headers
send_size = false;
if ( is_partial_content )
{
if ( send_size && instance_length > 0 )
file_mgr->SetSize(instance_length,
http_message->MyHTTP_Analyzer()->GetTag(),
http_message->MyHTTP_Analyzer()->Conn(),
http_message->IsOrig());
file_mgr->DataIn(reinterpret_cast<const u_char*>(buf), len, offset,
http_message->MyHTTP_Analyzer()->GetTag(),
http_message->MyHTTP_Analyzer()->Conn(),
http_message->IsOrig());
offset += len;
}
else
{
if ( send_size && content_length > 0 )
file_mgr->SetSize(content_length,
http_message->MyHTTP_Analyzer()->GetTag(),
http_message->MyHTTP_Analyzer()->Conn(),
http_message->IsOrig());
file_mgr->DataIn(reinterpret_cast<const u_char*>(buf), len,
http_message->MyHTTP_Analyzer()->GetTag(),
http_message->MyHTTP_Analyzer()->Conn(),
http_message->IsOrig());
}
send_size = false;
}
void HTTP_Entity::SetPlainDelivery(int64_t length)
@@ -307,9 +349,7 @@ void HTTP_Entity::SubmitHeader(MIME_Header* h)
}
// Figure out content-length for HTTP 206 Partial Content response
// that uses multipart/byteranges content-type.
else if ( strcasecmp_n(h->get_name(), "content-range") == 0 && Parent() &&
Parent()->MIMEContentType() == CONTENT_TYPE_MULTIPART &&
else if ( strcasecmp_n(h->get_name(), "content-range") == 0 &&
http_message->MyHTTP_Analyzer()->HTTP_ReplyCode() == 206 )
{
data_chunk_t vt = h->get_value_token();
@@ -333,7 +373,7 @@ void HTTP_Entity::SubmitHeader(MIME_Header* h)
}
string byte_range_resp_spec = byte_range.substr(0, p);
string instance_length = byte_range.substr(p + 1);
string instance_length_str = byte_range.substr(p + 1);
p = byte_range_resp_spec.find("-");
if ( p == string::npos )
@@ -348,7 +388,7 @@ void HTTP_Entity::SubmitHeader(MIME_Header* h)
if ( DEBUG_http )
DEBUG_MSG("Parsed Content-Range: %s %s-%s/%s\n", byte_unit.c_str(),
first_byte_pos.c_str(), last_byte_pos.c_str(),
instance_length.c_str());
instance_length_str.c_str());
int64_t f, l;
atoi_n(first_byte_pos.size(), first_byte_pos.c_str(), 0, 10, f);
@@ -359,7 +399,18 @@ void HTTP_Entity::SubmitHeader(MIME_Header* h)
DEBUG_MSG("Content-Range length = %"PRId64"\n", len);
if ( len > 0 )
{
if ( instance_length_str != "*" )
{
if ( ! atoi_n(instance_length_str.size(),
instance_length_str.c_str(), 0, 10,
instance_length) )
instance_length = 0;
}
is_partial_content = true;
offset = f;
content_length = len;
}
else
{
http_message->Weird("HTTP_non_positive_content_range");
@@ -512,6 +563,11 @@ void HTTP_Message::Done(const int interrupted, const char* detail)
// DEBUG_MSG("%.6f HTTP message done.\n", network_time);
top_level->EndOfData();
if ( is_orig || MyHTTP_Analyzer()->HTTP_ReplyCode() != 206 )
// multipart/byteranges may span multiple connections
file_mgr->EndOfFile(MyHTTP_Analyzer()->GetTag(),
MyHTTP_Analyzer()->Conn(), is_orig);
if ( http_message_done )
{
val_list* vl = new val_list;
@@ -586,6 +642,9 @@ void HTTP_Message::EndEntity(MIME_Entity* entity)
// SubmitAllHeaders (through EndOfData).
if ( entity == top_level )
Done();
else if ( is_orig || MyHTTP_Analyzer()->HTTP_ReplyCode() != 206 )
file_mgr->EndOfFile(MyHTTP_Analyzer()->GetTag(),
MyHTTP_Analyzer()->Conn(), is_orig);
}
void HTTP_Message::SubmitHeader(MIME_Header* h)
@@ -641,9 +700,6 @@ void HTTP_Message::SubmitData(int len, const char* buf)
int HTTP_Message::RequestBuffer(int* plen, char** pbuf)
{
if ( ! http_entity_data )
return 0;
if ( ! data_buffer )
if ( ! InitBuffer(mime_segment_length) )
return 0;
@@ -846,6 +902,13 @@ void HTTP_Analyzer::Done()
Unref(unanswered_requests.front());
unanswered_requests.pop();
}
file_mgr->EndOfFile(GetTag(), Conn(), true);
/* TODO: this might be nice to have, but reply code is cleared by now.
if ( HTTP_ReplyCode() != 206 )
// multipart/byteranges may span multiple connections
file_mgr->EndOfFile(GetTag(), Conn(), false);
*/
}
void HTTP_Analyzer::DeliverStream(int len, const u_char* data, bool is_orig)


@@ -55,6 +55,10 @@ protected:
int deliver_body;
enum { IDENTITY, GZIP, COMPRESS, DEFLATE } encoding;
ZIP_Analyzer* zip;
bool is_partial_content;
uint64_t offset;
int64_t instance_length; // total length indicated by content-range
bool send_size; // whether to send size indication to FAF
MIME_Entity* NewChildEntity() { return new HTTP_Entity(http_message, this, 1); }


@@ -829,7 +829,7 @@ VectorVal* ICMP_Analyzer::BuildNDOptionsVal(int caplen, const u_char* data)
data += length;
caplen -= length;
vv->Assign(vv->Size(), rv, 0);
vv->Assign(vv->Size(), rv);
}
return vv;


@@ -63,7 +63,7 @@ static VectorVal* BuildOptionsVal(const u_char* data, int len)
len -= opt->ip6o_len + off;
}
vv->Assign(vv->Size(), rv, 0);
vv->Assign(vv->Size(), rv);
}
return vv;
@@ -626,7 +626,7 @@ VectorVal* IPv6_Hdr_Chain::BuildVal() const
reporter->InternalError("IPv6_Hdr_Chain bad header %d", type);
break;
}
rval->Assign(rval->Size(), ext_hdr, 0);
rval->Assign(rval->Size(), ext_hdr);
}
return rval;


@@ -5,6 +5,7 @@
#include "Event.h"
#include "Reporter.h"
#include "digest.h"
#include "file_analysis/Manager.h"
// Here are a few things to do:
//
@@ -810,7 +811,7 @@ void MIME_Entity::StartDecodeBase64()
if ( base64_decoder )
reporter->InternalError("previous Base64 decoder not released!");
base64_decoder = new Base64Decoder(message->GetAnalyzer());
base64_decoder = new Base64Converter(message->GetAnalyzer());
}
void MIME_Entity::FinishDecodeBase64()
@@ -1019,6 +1020,8 @@ void MIME_Mail::Done()
}
MIME_Message::Done();
file_mgr->EndOfFile(analyzer->GetTag(), analyzer->Conn());
}
MIME_Mail::~MIME_Mail()
@@ -1030,6 +1033,7 @@ MIME_Mail::~MIME_Mail()
void MIME_Mail::BeginEntity(MIME_Entity* /* entity */)
{
cur_entity_len = 0;
if ( mime_begin_entity )
{
val_list* vl = new val_list;
@@ -1065,6 +1069,8 @@ void MIME_Mail::EndEntity(MIME_Entity* /* entity */)
vl->append(analyzer->BuildConnVal());
analyzer->ConnectionEvent(mime_end_entity, vl);
}
file_mgr->EndOfFile(analyzer->GetTag(), analyzer->Conn());
}
void MIME_Mail::SubmitHeader(MIME_Header* h)
@@ -1122,6 +1128,11 @@ void MIME_Mail::SubmitData(int len, const char* buf)
analyzer->ConnectionEvent(mime_segment_data, vl);
}
// is_orig param not available, doesn't matter as long as it's consistent
file_mgr->DataIn(reinterpret_cast<const u_char*>(buf), len,
analyzer->GetTag(), analyzer->Conn(), false);
cur_entity_len += len;
buffer_start = (buf + len) - (char*)data_buffer->Bytes();
}
@@ -1193,6 +1204,12 @@ void MIME_Mail::SubmitEvent(int event_type, const char* detail)
}
}
void MIME_Mail::Undelivered(int len)
{
// is_orig param not available, doesn't matter as long as it's consistent
file_mgr->Gap(cur_entity_len, len, analyzer->GetTag(), analyzer->Conn(),
false);
}
int strcasecmp_n(data_chunk_t s, const char* t)
{


@@ -131,7 +131,7 @@ protected:
int GetDataBuffer();
void DataOctet(char ch);
void DataOctets(int len, const char* data);
void SubmitData(int len, const char* buf);
virtual void SubmitData(int len, const char* buf);
virtual void SubmitHeader(MIME_Header* h);
// Submit all headers in member "headers".
@@ -163,7 +163,7 @@ protected:
MIME_Entity* parent;
MIME_Entity* current_child_entity;
Base64Decoder* base64_decoder;
Base64Converter* base64_decoder;
int data_buf_length;
char* data_buf_data;
@@ -238,6 +238,7 @@ public:
int RequestBuffer(int* plen, char** pbuf);
void SubmitAllData();
void SubmitEvent(int event_type, const char* detail);
void Undelivered(int len);
protected:
int min_overlap_length;
@@ -252,6 +253,8 @@ protected:
vector<const BroString*> all_content;
BroString* data_buffer;
uint64 cur_entity_len;
};


@@ -599,7 +599,7 @@ RecordVal* NFS_Interp::nfs3_readdir_reply(bool isplus, const u_char*& buf,
entry->Assign(4, nfs3_post_op_fh(buf,n));
}
entries->Assign(pos, entry, 0);
entries->Assign(pos, entry);
pos++;
}


@@ -5,7 +5,6 @@
#include "Var.h"
#include "NetVar.h"
RecordType* gtpv1_hdr_type;
RecordType* conn_id;
RecordType* endpoint;
RecordType* endpoint_stats;
@@ -311,7 +310,6 @@ void init_net_var()
#include "reporter.bif.netvar_init"
#include "file_analysis.bif.netvar_init"
gtpv1_hdr_type = internal_type("gtpv1_hdr")->AsRecordType();
conn_id = internal_type("conn_id")->AsRecordType();
endpoint = internal_type("endpoint")->AsRecordType();
endpoint_stats = internal_type("endpoint_stats")->AsRecordType();


@@ -8,7 +8,6 @@
#include "EventRegistry.h"
#include "Stats.h"
extern RecordType* gtpv1_hdr_type;
extern RecordType* conn_id;
extern RecordType* endpoint;
extern RecordType* endpoint_stats;


@@ -36,7 +36,7 @@ public:
u_char key[MD5_DIGEST_LENGTH],
u_char result[MD5_DIGEST_LENGTH]);
MD5Val() : HashVal(new OpaqueType("md5")) { }
MD5Val() : HashVal(new OpaqueType("md5")) { Unref(Type()); }
protected:
friend class Val;
@@ -55,7 +55,7 @@ class SHA1Val : public HashVal {
public:
static void digest(val_list& vlist, u_char result[SHA_DIGEST_LENGTH]);
SHA1Val() : HashVal(new OpaqueType("sha1")) { }
SHA1Val() : HashVal(new OpaqueType("sha1")) { Unref(Type()); }
protected:
friend class Val;
@@ -74,7 +74,7 @@ class SHA256Val : public HashVal {
public:
static void digest(val_list& vlist, u_char result[SHA256_DIGEST_LENGTH]);
SHA256Val() : HashVal(new OpaqueType("sha256")) { }
SHA256Val() : HashVal(new OpaqueType("sha256")) { Unref(Type()); }
protected:
friend class Val;


@@ -231,6 +231,15 @@ void PktSrc::Process()
data += get_link_header_size(datalink);
data += 4; // Skip the vlan header
pkt_hdr_size = 0;
// Check for 802.1ah (Q-in-Q) containing IP.
// Only do a second layer of vlan tag
// stripping because there is no
// specification that allows for deeper
// nesting.
if ( ((data[2] << 8) + data[3]) == 0x0800 )
data += 4;
break;
// PPPoE carried over the ethernet frame.


@@ -496,7 +496,7 @@ static RE_Matcher* matcher_merge(const RE_Matcher* re1, const RE_Matcher* re2,
safe_snprintf(merge_text, n, "(%s)%s(%s)", text1, merge_op, text2);
RE_Matcher* merge = new RE_Matcher(merge_text);
delete merge_text;
delete [] merge_text;
merge->Compile();


@@ -85,9 +85,13 @@ void SMTP_Analyzer::Undelivered(int seq, int len, bool is_orig)
Unexpected(is_orig, "content gap", buf_len, buf);
if ( state == SMTP_IN_DATA )
{
// Record the SMTP data gap and terminate the
// ongoing mail transaction.
if ( mail )
mail->Undelivered(len);
EndData();
}
if ( line_after_gap )
{


@@ -155,7 +155,7 @@ SerialObj* SerialObj::Unserialize(UnserialInfo* info, SerialType type)
else
{
// Broccoli compatibility mode with 32bit pids.
uint32 tmp;
uint32 tmp = 0;
result = UNSERIALIZE(&full_obj) && UNSERIALIZE(&tmp);
pid = tmp;
}


@@ -223,6 +223,12 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr,
// we look to see if what we have is consistent with an
// IPv4 packet. If not, it's either ARP or IPv6 or weird.
if ( hdr_size > static_cast<int>(hdr->caplen) )
{
Weird("truncated_link_frame", hdr, pkt);
return;
}
uint32 caplen = hdr->caplen - hdr_size;
if ( caplen < sizeof(struct ip) )
{


@@ -96,12 +96,12 @@ VectorVal* BroSubstring::VecToPolicy(Vec* vec)
align_val->Assign(0, new StringVal(new BroString(*align.string)));
align_val->Assign(1, new Val(align.index, TYPE_COUNT));
aligns->Assign(j+1, align_val, 0);
aligns->Assign(j+1, align_val);
}
st_val->Assign(1, aligns);
st_val->Assign(2, new Val(bst->IsNewAlignment(), TYPE_BOOL));
result->Assign(i+1, st_val, 0);
result->Assign(i+1, st_val);
}
}


@@ -371,7 +371,7 @@ void StateAccess::Replay()
CheckOld("index assign", target.id, op1.val, op3,
v->AsVectorVal()->Lookup(index));
v->AsVectorVal()->Assign(index, op2 ? op2->Ref() : 0, 0);
v->AsVectorVal()->Assign(index, op2 ? op2->Ref() : 0);
}
else
@@ -421,7 +421,7 @@ void StateAccess::Replay()
Val* lookup_op1 = v->AsVectorVal()->Lookup(index);
int delta = lookup_op1->CoerceToInt() + amount;
Val* new_val = new Val(delta, t);
v->AsVectorVal()->Assign(index, new_val, 0);
v->AsVectorVal()->Assign(index, new_val);
}
else
@@ -926,17 +926,22 @@ void NotifierRegistry::Register(ID* id, NotifierRegistry::Notifier* notifier)
DBG_LOG(DBG_NOTIFIERS, "registering ID %s for notifier %s",
id->Name(), notifier->Name());
Attr* attr = new Attr(ATTR_TRACKED);
if ( id->Attrs() )
id->Attrs()->AddAttr(new Attr(ATTR_TRACKED));
{
if ( ! id->Attrs()->FindAttr(ATTR_TRACKED) )
id->Attrs()->AddAttr(attr);
}
else
{
attr_list* a = new attr_list;
Attr* attr = new Attr(ATTR_TRACKED);
a->append(attr);
id->SetAttrs(new Attributes(a, id->Type(), false));
Unref(attr);
}
Unref(attr);
NotifierMap::iterator i = ids.find(id->Name());
if ( i != ids.end() )
@@ -967,7 +972,9 @@ void NotifierRegistry::Unregister(ID* id, NotifierRegistry::Notifier* notifier)
if ( i == ids.end() )
return;
Attr* attr = id->Attrs()->FindAttr(ATTR_TRACKED);
id->Attrs()->RemoveAttr(ATTR_TRACKED);
Unref(attr);
NotifierSet* s = i->second;
s->erase(notifier);


@@ -338,7 +338,7 @@ SampleLogger::~SampleLogger()
void SampleLogger::FunctionSeen(const Func* func)
{
load_samples->Assign(new StringVal(func->GetID()->Name()), 0);
load_samples->Assign(new StringVal(func->Name()), 0);
}
void SampleLogger::LocationSeen(const Location* loc)


@@ -23,6 +23,7 @@ enum TimerType {
TIMER_CONN_STATUS_UPDATE,
TIMER_DNS_EXPIRE,
TIMER_FILE_ANALYSIS_INACTIVITY,
TIMER_FILE_ANALYSIS_DRAIN,
TIMER_FRAG,
TIMER_INCREMENTAL_SEND,
TIMER_INCREMENTAL_WRITE,


@@ -242,6 +242,7 @@ bool Trigger::Eval()
trigger->Cache(frame->GetCall(), v);
trigger->Release();
frame->ClearTrigger();
}
Unref(v);
@@ -330,6 +331,7 @@ void Trigger::Timeout()
#endif
trigger->Cache(frame->GetCall(), v);
trigger->Release();
frame->ClearTrigger();
}
Unref(v);
@@ -424,6 +426,12 @@ Val* Trigger::Lookup(const CallExpr* expr)
return (i != cache.end()) ? i->second : 0;
}
void Trigger::Disable()
{
UnregisterAll();
disabled = true;
}
const char* Trigger::Name() const
{
assert(location);


@@ -49,7 +49,7 @@ public:
// Disable this trigger completely. Needed because Unref'ing the trigger
// may not immediately delete it as other references may still exist.
void Disable() { disabled = true; }
void Disable();
virtual void Describe(ODesc* d) const { d->Add("<trigger>"); }
@@ -79,7 +79,6 @@ private:
friend class TriggerTimer;
void Init();
void DeleteTrigger();
void Register(ID* id);
void Register(Val* val);
void UnregisterAll();


@@ -186,7 +186,7 @@ public:
if ( conns )
{
for ( size_t i = 0; i < conns->size(); ++i )
vv->Assign(i, (*conns)[i].GetRecordVal(), 0);
vv->Assign(i, (*conns)[i].GetRecordVal());
}
return vv;

Some files were not shown because too many files have changed in this diff Show more