Merge branch 'master' into topic/seth/metrics-merge

Seth Hall 2012-10-22 10:06:02 -04:00
commit 1200d04f81
226 changed files with 2822 additions and 386 deletions

CHANGES

@ -1,4 +1,140 @@
2.1-84 | 2012-10-19 15:12:56 -0700
* Added a BiF strptime() to wrap the corresponding C function. (Seth
Hall)
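
A minimal usage sketch of the new BiF (the format string and date
literal are only illustrative; strptime() takes the format first, then
the string to parse, and returns a time value):

    event bro_init()
        {
        local t = strptime("%Y-%m-%d %H:%M:%S", "2012-10-19 15:12:56");
        print t;
        }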
2.1-82 | 2012-10-19 15:05:40 -0700
* Add IPv6 support to signature header conditions. (Jon Siwek)
- "src-ip" and "dst-ip" conditions can now use IPv6 addresses/subnets.
They must be written in colon-hexadecimal representation and enclosed
in square brackets (e.g. [fe80::1]). Addresses #774.
- "icmp6" is now a valid protocol for use with "ip-proto" and "header"
conditions. This allows signatures to be written that can match
against ICMPv6 payloads. Addresses #880.
- "ip6" is now a valid protocol for use with the "header" condition.
(also the "ip-proto" condition, but it results in a no-op in that
case since signatures apply only to the inner-most IP packet when
packets are tunneled). This allows signatures to match specifically
against IPv6 packets (whereas "ip" only matches against IPv4 packets).
- "ip-proto" conditions can now match against IPv6 packets. Before,
IPv6 packets were just silently ignored which meant DPD based on
signatures did not function for IPv6 -- protocol analyzers would only
get attached to a connection over IPv6 based on the well-known ports
set in the "dpd_config" table.
2.1-80 | 2012-10-19 14:48:42 -0700
* Change how "gridftp" gets added to service field of connection
records. In addition to checking for a finished SSL handshake over
an FTP connection, it now also requires that the SSL handshake
occurs after the FTP client requested AUTH GSSAPI, more
specifically identifying the characteristics of GridFTP control
channels. Addresses #891. (Jon Siwek)
* Allow faster rebuilds in certain cases. Previously, when
rebuilding with a different "--prefix" or "--scriptdir", all Bro
source files were recompiled. With this change, only util.cc is
recompiled. (Daniel Thayer)
2.1-76 | 2012-10-12 10:32:39 -0700
* Add support for recognizing GridFTP connections as an extension to
the standard FTP analyzer. (Jon Siwek)
This is enabled by default and includes:
- An analyzer for the GSI mechanism of the GSSAPI FTP AUTH method. GSI
authentication involves an encoded TLS/SSL handshake over the
FTP control session. For FTP sessions that attempt GSI
authentication, the *service* field of the connection log will
include "gridftp" (as well as also "ftp" and "ssl").
- Add an example of a GridFTP data channel detection script. It
relies on the heuristics that GridFTP data channels commonly
default to SSL mutual authentication with a NULL bulk cipher
and that they usually transfer large datasets (the script's
default threshold is 1 GB). The script also defaults to
skip_further_processing() after detection to try to save
cycles analyzing the large, benign connection.
For identified GridFTP data channels, the *service* field of
the connection log will include "gridftp-data".
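
A hedged sketch of tuning the detection from a site's local scripts
(the threshold value below is arbitrary), using the script's redef-able
constants and its detection event:

    redef GridFTP::size_threshold = 2147483648;  # only flag transfers over ~2 GB
    redef GridFTP::skip_data = F;                 # keep analyzing detected channels

    event GridFTP::data_channel_detected(c: connection)
        {
        print fmt("GridFTP data channel detected: %s", c$uid);
        }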
* Add *client_subject* and *client_issuer_subject* as &log'd fields
to SSL::Info record. Also add *client_cert* and
*client_cert_chain* fields to track client cert chain. (Jon Siwek)
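
For example, a minimal sketch of consuming the new fields through the
SSL log event (presence checks are needed since the fields are
&optional):

    event SSL::log_ssl(rec: SSL::Info)
        {
        if ( rec?$client_subject )
            print fmt("client cert subject: %s", rec$client_subject);
        }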
* Add a script in base/protocols/conn/polling that generalizes the
process of polling a connection for interesting features. The
GridFTP data channel detection script depends on it to monitor
bytes transferred. (Jon Siwek)
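
A hedged sketch of using the new API directly (the byte threshold and
intervals are arbitrary): the callback returns the next polling
interval, or a negative interval to stop polling:

    function bytes_callback(c: connection, cnt: count): interval
        {
        if ( c$orig$size + c$resp$size > 1048576 )
            return -1sec;  # stop polling once ~1 MB has been transferred
        return 5secs;      # otherwise check again in five seconds
        }

    event connection_established(c: connection)
        {
        ConnPolling::watch(c, bytes_callback, 0, 5secs);
        }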
2.1-68 | 2012-10-12 09:46:41 -0700
* Rename the Input Framework's update_finished event to end_of_data.
It will now not only fire after table-reads have been completed,
but also after the last event of a whole-file-read (or
whole-db-read, etc.). (Bernhard Amann)
* Fix for DNS log problem when a DNS response is seen with 0 RRs.
(Seth Hall)
2.1-64 | 2012-10-12 09:36:41 -0700
* Teach --disable-dataseries/--disable-elasticsearch to ./configure.
Addresses #877. (Jon Siwek)
* Add --with-curl option to ./configure. Addresses #877. (Jon Siwek)
2.1-61 | 2012-10-12 09:32:48 -0700
* Fix bug in the input framework: the config table did not work.
(Bernhard Amann)
2.1-58 | 2012-10-08 10:10:09 -0700
* Fix a problem with non-manager cluster nodes applying
Notice::policy. This could, for example, result in duplicate
emails being sent if Notice::emailed_types is redef'd in local.bro
(or any script that gets loaded on all cluster nodes). (Jon Siwek)
2.1-56 | 2012-10-03 16:04:52 -0700
* Add general FAQ entry about upgrading Bro. (Jon Siwek)
2.1-53 | 2012-10-03 16:00:40 -0700
* Add new Tunnel::delay_teredo_confirmation option that indicates
that the Teredo analyzer should wait until it sees both sides of a
connection using a valid Teredo encapsulation before issuing a
protocol_confirmation. Default is on. Addresses #890. (Jon Siwek)
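
Sites preferring the previous, more aggressive behavior can, for
example, turn the option off in local.bro:

    redef Tunnel::delay_teredo_confirmation = F;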
2.1-50 | 2012-10-02 12:06:08 -0700
* Fix a typing issue that prevented the ElasticSearch timeout from
working. (Matthias Vallentin)
* Use second granularity for ElasticSearch timeouts. (Matthias
Vallentin)
* Fix compile issues with older versions of libcurl, which don't
offer *_MS timeout constants. (Matthias Vallentin)
2.1-47 | 2012-10-02 11:59:29 -0700
* Fix for the input framework: BroStrings were constructed without a
final \0, which makes them unusable by basically all internal
functions (like to_count). (Bernhard Amann)
* Remove deprecated script functionality (see NEWS for details).
(Daniel Thayer)
2.1-39 | 2012-09-29 14:09:16 -0700
* Reliability adjustments to istate tests with network


@ -120,7 +120,8 @@ find_package(Lintel)
find_package(DataSeries)
find_package(LibXML2)
if (LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
if (NOT DISABLE_DATASERIES AND
LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
set(USE_DATASERIES true)
include_directories(BEFORE ${Lintel_INCLUDE_DIR})
include_directories(BEFORE ${DataSeries_INCLUDE_DIR})
@ -132,13 +133,13 @@ endif()
set(USE_ELASTICSEARCH false)
set(USE_CURL false)
find_package(CURL)
find_package(LibCURL)
if (CURL_FOUND)
if (NOT DISABLE_ELASTICSEARCH AND LIBCURL_FOUND)
set(USE_ELASTICSEARCH true)
set(USE_CURL true)
include_directories(BEFORE ${CURL_INCLUDE_DIR})
list(APPEND OPTLIBS ${CURL_LIBRARIES})
include_directories(BEFORE ${LibCURL_INCLUDE_DIR})
list(APPEND OPTLIBS ${LibCURL_LIBRARIES})
endif()
if (ENABLE_PERFTOOLS_DEBUG)

NEWS

@ -13,7 +13,9 @@ Bro 2.2
New Functionality
~~~~~~~~~~~~~~~~~
- TODO: Update.
- GridFTP support. TODO: Extend.
- ssl.log now also records the subject client and issuer certificates.
Changed Functionality
~~~~~~~~~~~~~~~~~~~~~
@ -28,8 +30,14 @@ Changed Functionality
make_connection_persistent(), generate_idmef(),
split_complete()
- Removed a now unused argument from "do_split" helper function.
- "this" is no longer a reserved keyword.
- The Input Framework's update_finished event has been renamed to
end_of_data. It will now not only fire after table-reads have been
completed, but also after the last event of a whole-file-read (or
whole-db-read, etc.).
Bro 2.1
-------


@ -1 +1 @@
2.1-39
2.1-84

@ -1 +1 @@
Subproject commit a93ef1373512c661ffcd0d0a61bd19b96667e0d5
Subproject commit 74e6a5401c4228d5293c0e309283f43c389e7c12

@ -1 +1 @@
Subproject commit 6748ec3a96d582a977cd9114ef19c76fe75c57ff
Subproject commit 01bb93cb23f31a98fb400584e8d2f2fbe8a589ef

@ -1 +1 @@
Subproject commit ebfa4de45a839e58aec200e7e4bad33eaab4f1ed
Subproject commit 907210ce1470724fb386f939cc1b10a4caa2ae39

@ -1 +1 @@
Subproject commit b0e3c0d84643878c135dcb8a9774ed78147dd648
Subproject commit fd0e7e0b0cf50131efaf536a5683266cfe169455

cmake

@ -1 +1 @@
Subproject commit 125f9a5fa851381d0350efa41a4d14f27be263a2
Subproject commit 14537f56d66b18ab9d5024f798caf4d1f356fc67

configure

@ -38,6 +38,8 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
--disable-ruby don't try to build ruby bindings for broccoli
--disable-dataseries don't use the optional DataSeries log writer
--disable-elasticsearch don't use the optional ElasticSearch log writer
Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root
@ -61,6 +63,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-swig=PATH path to SWIG executable
--with-dataseries=PATH path to DataSeries and Lintel libraries
--with-xml2=PATH path to libxml2 installation (for DataSeries)
--with-curl=PATH path to libcurl install root (for ElasticSearch)
Packaging Options (for developers):
--binary-package toggle special logic for binary packaging
@ -174,6 +177,12 @@ while [ $# -ne 0 ]; do
--disable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
;;
--disable-dataseries)
append_cache_entry DISABLE_DATASERIES BOOL true
;;
--disable-elasticsearch)
append_cache_entry DISABLE_ELASTICSEARCH BOOL true
;;
--with-openssl=*)
append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
;;
@ -234,6 +243,9 @@ while [ $# -ne 0 ]; do
--with-xml2=*)
append_cache_entry LibXML2_ROOT_DIR PATH $optarg
;;
--with-curl=*)
append_cache_entry LibCURL_ROOT_DIR PATH $optarg
;;
--binary-package)
append_cache_entry BINARY_PACKAGING_MODE BOOL true
;;


@ -12,6 +12,43 @@ Frequently Asked Questions
Installation and Configuration
==============================
How do I upgrade to a new version of Bro?
-----------------------------------------
There are two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over.
Re-Use Previous Install Prefix
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you choose to configure and install Bro with the same prefix
directory as before, local customization and configuration to files in
``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten
(``$prefix`` indicating the root of where Bro was installed). Also, logs
generated at run-time won't be touched by the upgrade. (But making
a backup of local changes before proceeding is still recommended.)
After upgrading, remember to check ``$prefix/share/bro/site`` and
``$prefix/etc`` for ``.example`` files, which indicate the
distribution's version of the file differs from the local one, which may
include local changes. Review the differences, and make adjustments
as necessary (for differences that aren't the result of a local change,
use the new version's).
Pick a New Install Prefix
^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to install the newer version in a different prefix
directory than before, you can just copy local customization and
configuration files from ``$prefix/share/bro/site`` and ``$prefix/etc``
to the new location (``$prefix`` indicating the root of where Bro was
originally installed). Make sure to review the files for differences
before copying and make adjustments as necessary (for differences that
aren't the result of a local change, use the new version's). Of
particular note, the copied version of ``$prefix/etc/broctl.cfg`` is
likely to need changes to the ``SpoolDir`` and ``LogDir`` settings.
How can I tune my operating system for best capture performance?
----------------------------------------------------------------


@ -98,12 +98,12 @@ been completed. Because of this, it is, for example, possible to call
will remain queued until the first read has been completed.
Once the input framework finishes reading from a data source, it fires
the ``update_finished`` event. Once this event has been received all data
the ``end_of_data`` event. Once this event has been received all data
from the input file is available in the table.
.. code:: bro
event Input::update_finished(name: string, source: string) {
event Input::end_of_data(name: string, source: string) {
# now all data is in the table
print blacklist;
}
@ -129,7 +129,7 @@ deal with changing data files.
The first, very basic method is an explicit refresh of an input stream. When
an input stream is open, the function ``force_update`` can be called. This
will trigger a complete refresh of the table; any changed elements from the
file will be updated. After the update is finished the ``update_finished``
file will be updated. After the update is finished the ``end_of_data``
event will be raised.
In our example the call would look like:
@ -142,7 +142,7 @@ The input framework also supports two automatic refresh modes. The first mode
continually checks if a file has been changed. If the file has been changed, it
is re-read and the data in the Bro table is updated to reflect the current
state. Each time a change has been detected and all the new data has been
read into the table, the ``update_finished`` event is raised.
read into the table, the ``end_of_data`` event is raised.
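
For example, a minimal sketch (the index/value record types and the
file name are hypothetical) that re-reads the file on every change and
reacts to the event:

.. code:: bro

    type Idx: record { ip: addr; };
    type Val: record { reason: string; };
    global blacklist: table[addr] of Val = table();

    event bro_init() {
        Input::add_table([$source="blacklist.file", $name="blacklist",
                          $idx=Idx, $val=Val, $destination=blacklist,
                          $mode=Input::REREAD]);
    }

    event Input::end_of_data(name: string, source: string) {
        # At this point all data from the changed file has been read.
        print blacklist;
    }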
The second mode is a streaming mode. This mode assumes that the source data
file is an append-only file to which new data is continually appended. Bro
@ -150,7 +150,7 @@ continually checks for new data at the end of the file and will add the new
data to the table. If newer lines in the file have the same index as previous
lines, they will overwrite the values in the output table. Because of the
nature of streaming reads (data is continually added to the table),
the ``update_finished`` event is never raised when using streaming reads.
the ``end_of_data`` event is never raised when using streaming reads.
The reading mode can be selected by setting the ``mode`` option of the
add_table call. Valid values are ``MANUAL`` (the default), ``REREAD``


@ -65,9 +65,11 @@ rest_target(${psd} base/frameworks/tunnels/main.bro)
rest_target(${psd} base/protocols/conn/contents.bro)
rest_target(${psd} base/protocols/conn/inactivity.bro)
rest_target(${psd} base/protocols/conn/main.bro)
rest_target(${psd} base/protocols/conn/polling.bro)
rest_target(${psd} base/protocols/dns/consts.bro)
rest_target(${psd} base/protocols/dns/main.bro)
rest_target(${psd} base/protocols/ftp/file-extract.bro)
rest_target(${psd} base/protocols/ftp/gridftp.bro)
rest_target(${psd} base/protocols/ftp/main.bro)
rest_target(${psd} base/protocols/ftp/utils-commands.bro)
rest_target(${psd} base/protocols/http/file-extract.bro)


@ -83,9 +83,8 @@ Header Conditions
~~~~~~~~~~~~~~~~~
Header conditions limit the applicability of the signature to a subset
of traffic that contains matching packet headers. For TCP, this match
is performed only for the first packet of a connection. For other
protocols, it is done on each individual packet.
of traffic that contains matching packet headers. This type of matching
is performed only for the first packet of a connection.
There are pre-defined header conditions for some of the most used
header fields. All of them generally have the format ``<keyword> <cmp>
@ -95,14 +94,22 @@ one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``; and
against. The following keywords are defined:
``src-ip``/``dst-ip <cmp> <address-list>``
Source and destination address, respectively. Addresses can be
given as IP addresses or CIDR masks.
Source and destination address, respectively. Addresses can be given
as IPv4 or IPv6 addresses or CIDR masks. For IPv6 addresses/masks
the colon-hexadecimal representation of the address must be enclosed
in square brackets (e.g. ``[fe80::1]`` or ``[fe80::0]/16``).
``src-port``/``dst-port`` ``<int-list>``
``src-port``/``dst-port <cmp> <int-list>``
Source and destination port, respectively.
``ip-proto tcp|udp|icmp``
IP protocol.
``ip-proto <cmp> tcp|udp|icmp|icmp6|ip|ip6``
IPv4 header's Protocol field or the Next Header field of the final
IPv6 header (i.e. either Next Header field in the fixed IPv6 header
if no extension headers are present or that field from the last
extension header in the chain). Note that the IP-in-IP forms of
tunneling are automatically decapsulated by default and signatures
apply to only the inner-most packet, so specifying ``ip`` or ``ip6``
is a no-op.
For lists of multiple values, they are sequentially compared against
the corresponding header field. If at least one of the comparisons
@ -116,20 +123,22 @@ condition can be defined either as
header <proto>[<offset>:<size>] [& <integer>] <cmp> <value-list>
This compares the value found at the given position of the packet
header with a list of values. ``offset`` defines the position of the
value within the header of the protocol defined by ``proto`` (which
can be ``ip``, ``tcp``, ``udp`` or ``icmp``). ``size`` is either 1, 2,
or 4 and specifies the value to have a size of this many bytes. If the
optional ``& <integer>`` is given, the packet's value is first masked
with the integer before it is compared to the value-list. ``cmp`` is
one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``. ``value-list`` is
a list of comma-separated integers similar to those described above.
The integers within the list may be followed by an additional ``/
mask`` where ``mask`` is a value from 0 to 32. This corresponds to the
CIDR notation for netmasks and is translated into a corresponding
bitmask applied to the packet's value prior to the comparison (similar
to the optional ``& integer``).
This compares the value found at the given position of the packet header
with a list of values. ``offset`` defines the position of the value
within the header of the protocol defined by ``proto`` (which can be
``ip``, ``ip6``, ``tcp``, ``udp``, ``icmp`` or ``icmp6``). ``size`` is
either 1, 2, or 4 and specifies the value to have a size of this many
bytes. If the optional ``& <integer>`` is given, the packet's value is
first masked with the integer before it is compared to the value-list.
``cmp`` is one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``.
``value-list`` is a list of comma-separated integers similar to those
described above. The integers within the list may be followed by an
additional ``/ mask`` where ``mask`` is a value from 0 to 32. This
corresponds to the CIDR notation for netmasks and is translated into a
corresponding bitmask applied to the packet's value prior to the
comparison (similar to the optional ``& integer``). IPv6 address values
are not allowed in the value-list, though you can still inspect any 1,
2, or 4 byte section of an IPv6 header using this keyword.
Putting it all together, this is an example condition that is
equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``:
@ -138,8 +147,8 @@ equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``:
header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24
Internally, the predefined header conditions are in fact just
short-cuts and mapped into a generic condition.
Note that the analogous example for IPv6 isn't currently possible since
4 bytes is the max width of a value that can be compared.
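
The predefined ``src-ip``/``dst-ip`` and ``ip-proto`` conditions do,
however, accept IPv6 values directly. A rough sketch (the signature
name and event message are arbitrary):

    signature ipv6-icmp6-example {
        ip-proto == icmp6
        src-ip == [fe80::0]/16
        event "ICMPv6 from a link-local source"
    }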
Content Conditions
~~~~~~~~~~~~~~~~~~


@ -114,7 +114,8 @@ export {
## description: `TableDescription` record describing the source.
global add_event: function(description: Input::EventDescription) : bool;
## Remove an input stream. Returns true on success and false if the named stream was not found.
## Remove an input stream. Returns true on success and false if the named stream was
## not found.
##
## id: string value identifying the stream to be removed
global remove: function(id: string) : bool;
@ -125,8 +126,9 @@ export {
## id: string value identifying the stream
global force_update: function(id: string) : bool;
## Event that is called, when the update of a specific source is finished
global update_finished: event(name: string, source:string);
## Event that is called when the end of a data source has been reached, including
## after an update.
global end_of_data: event(name: string, source:string);
}
@load base/input.bif


@ -26,8 +26,10 @@ export {
## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout.
## This is not working!
## The time before an ElasticSearch transfer will timeout. Note that
## the fractional part of the timeout will be ignored. In particular, time
## specifications less than a second result in a timeout value of 0, which
## means "no timeout."
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before


@ -23,7 +23,7 @@ redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER )
# The notice policy is completely handled by the manager and shouldn't be
# done by workers or proxies to save time for packet processing.
event bro_init() &priority=-11
event bro_init() &priority=11
{
Notice::policy = table();
}


@ -2784,6 +2784,14 @@ export {
## to have a valid Teredo encapsulation.
const yielding_teredo_decapsulation = T &redef;
## With this set, the Teredo analyzer waits until it sees both sides
## of a connection using a valid Teredo encapsulation before issuing
## a :bro:see:`protocol_confirmation`. If it's false, the first
## occurrence of a packet with valid Teredo encapsulation causes a
## confirmation. Both cases are still subject to effects of
## :bro:see:`Tunnel::yielding_teredo_decapsulation`.
const delay_teredo_confirmation = T &redef;
## How often to cleanup internal state for inactive IP tunnels.
const ip_tunnel_timeout = 24hrs &redef;
} # end export


@ -1,3 +1,4 @@
@load ./main
@load ./contents
@load ./inactivity
@load ./polling


@ -0,0 +1,49 @@
##! Implements a generic way to poll connections looking for certain features
##! (e.g. monitor bytes transferred). The specific feature of a connection
##! to look for, the polling interval, and the code to execute if the feature
##! is found are all controlled by user-defined callback functions.
module ConnPolling;
export {
## Starts monitoring a given connection.
##
## c: The connection to watch.
##
## callback: A callback function that takes as arguments the monitored
## *connection*, and a counter *cnt* that increments each time the
## callback is called. It returns an interval indicating how long
## in the future to schedule an event which will call the
## callback. A negative return interval causes polling to stop.
##
## cnt: The initial value of a counter which gets passed to *callback*.
##
## i: The initial interval at which to schedule the next callback.
## May be ``0secs`` to poll right away.
global watch: function(c: connection,
callback: function(c: connection, cnt: count): interval,
cnt: count, i: interval);
}
event ConnPolling::check(c: connection,
callback: function(c: connection, cnt: count): interval,
cnt: count)
{
if ( ! connection_exists(c$id) )
return;
lookup_connection(c$id); # updates the conn val
local next_interval = callback(c, cnt);
if ( next_interval < 0secs )
return;
watch(c, callback, cnt + 1, next_interval);
}
function watch(c: connection,
callback: function(c: connection, cnt: count): interval,
cnt: count, i: interval)
{
schedule i { ConnPolling::check(c, callback, cnt) };
}


@ -59,13 +59,15 @@ export {
## The caching intervals of the associated RRs described by the
## ``answers`` field.
TTLs: vector of interval &log &optional;
## The DNS query was rejected by the server.
rejected: bool &log &default=F;
## This value indicates if this request/response pair is ready to be
## logged.
ready: bool &default=F;
## The total number of resource records in a reply message's answer
## section.
total_answers: count &optional;
total_answers: count &default=0;
## The total number of resource records in a reply message's answer,
## authority, and additional sections.
total_replies: count &optional;
@ -186,10 +188,13 @@ function set_session(c: connection, msg: dns_msg, is_query: bool)
}
}
event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5
{
set_session(c, msg, is_orig);
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
{
set_session(c, msg, F);
if ( ans$answer_type == DNS_ANS )
{
c$dns$AA = msg$AA;
@ -209,7 +214,8 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
}
if ( c$dns?$answers && |c$dns$answers| == c$dns$total_answers )
if ( c$dns?$answers && c$dns?$total_answers &&
|c$dns$answers| == c$dns$total_answers )
{
add c$dns_state$finished_answers[c$dns$trans_id];
# Indicate this request/reply pair is ready to be logged.
@ -230,8 +236,6 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
{
set_session(c, msg, T);
c$dns$RD = msg$RD;
c$dns$TC = msg$TC;
c$dns$qclass = qclass;
@ -321,11 +325,9 @@ event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
#
# }
event dns_rejected(c: connection, msg: dns_msg,
query: string, qtype: count, qclass: count) &priority=5
event dns_rejected(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
{
set_session(c, msg, F);
c$dns$rejected = T;
}
event connection_state_remove(c: connection) &priority=-5


@ -1,3 +1,4 @@
@load ./utils-commands
@load ./main
@load ./file-extract
@load ./file-extract
@load ./gridftp


@ -0,0 +1,121 @@
##! A detection script for GridFTP data and control channels.
##!
##! GridFTP control channels are identified by FTP control channels
##! that successfully negotiate the GSSAPI method of an AUTH request
##! and for which the exchange involved an encoded TLS/SSL handshake,
##! indicating the GSI mechanism for GSSAPI was used. This analysis
##! is all supported internally; this script simply adds the "gridftp"
##! label to the *service* field of the control channel's
##! :bro:type:`connection` record.
##!
##! GridFTP data channels are identified by a heuristic that relies on
##! the fact that default settings for GridFTP clients typically
##! mutually authenticate the data channel with TLS/SSL and negotiate a
##! NULL bulk cipher (no encryption). Connections with those
##! attributes are then polled for two minutes with decreasing frequency
##! to check if the transfer sizes are large enough to indicate a
##! GridFTP data channel that would be undesirable to analyze further
##! (e.g. stop TCP reassembly). A side effect is that true connection
##! sizes are not logged, but with the benefit of saving CPU cycles that
##! otherwise go to analyzing the large (and likely benign) connections.
@load ./main
@load base/protocols/conn
@load base/protocols/ssl
@load base/frameworks/notice
module GridFTP;
export {
## Number of bytes transferred before guessing a connection is a
## GridFTP data channel.
const size_threshold = 1073741824 &redef;
## Max number of times to check whether a connection's size exceeds the
## :bro:see:`GridFTP::size_threshold`.
const max_poll_count = 15 &redef;
## Whether to skip further processing of the GridFTP data channel once
## detected, which may help performance.
const skip_data = T &redef;
## Base amount of time between checking whether a GridFTP data connection
## has transferred more than :bro:see:`GridFTP::size_threshold` bytes.
const poll_interval = 1sec &redef;
## The amount of time the base :bro:see:`GridFTP::poll_interval` is
## increased by each poll interval. Can be used to make more frequent
## checks at the start of a connection and gradually slow down.
const poll_interval_increase = 1sec &redef;
## Raised when a GridFTP data channel is detected.
##
## c: The connection pertaining to the GridFTP data channel.
global data_channel_detected: event(c: connection);
## The initial criteria used to determine whether to start polling
## the connection for the :bro:see:`GridFTP::size_threshold` to have
## been exceeded. This is called in a :bro:see:`ssl_established` event
## handler and by default looks for both a client and server certificate
## and for a NULL bulk cipher. One way in which this function could be
## redefined is to make it also consider client/server certificate issuer
## subjects.
##
## c: The connection which may possibly be a GridFTP data channel.
##
## Returns: true if the connection should be further polled for an
## exceeded :bro:see:`GridFTP::size_threshold`, else false.
const data_channel_initial_criteria: function(c: connection): bool &redef;
}
redef record FTP::Info += {
last_auth_requested: string &optional;
};
event ftp_request(c: connection, command: string, arg: string) &priority=4
{
if ( command == "AUTH" && c?$ftp )
c$ftp$last_auth_requested = arg;
}
function size_callback(c: connection, cnt: count): interval
{
if ( c$orig$size > size_threshold || c$resp$size > size_threshold )
{
add c$service["gridftp-data"];
event GridFTP::data_channel_detected(c);
if ( skip_data )
skip_further_processing(c$id);
return -1sec;
}
if ( cnt >= max_poll_count )
return -1sec;
return poll_interval + poll_interval_increase * cnt;
}
event ssl_established(c: connection) &priority=5
{
# If an FTP client requests AUTH GSSAPI and later an SSL handshake
# finishes, it's likely a GridFTP control channel, so add service label.
if ( c?$ftp && c$ftp?$last_auth_requested &&
/GSSAPI/ in c$ftp$last_auth_requested )
add c$service["gridftp"];
}
function data_channel_initial_criteria(c: connection): bool
{
return ( c?$ssl && c$ssl?$client_subject && c$ssl?$subject &&
c$ssl?$cipher && /WITH_NULL/ in c$ssl$cipher );
}
event ssl_established(c: connection) &priority=-3
{
# By default GridFTP data channels do mutual authentication and
# negotiate a cipher suite with a NULL bulk cipher.
if ( data_channel_initial_criteria(c) )
ConnPolling::watch(c, size_callback, 0, 0secs);
}


@ -96,11 +96,11 @@ redef record connection += {
};
# Configure DPD
const ports = { 21/tcp } &redef;
redef capture_filters += { ["ftp"] = "port 21" };
const ports = { 21/tcp, 2811/tcp } &redef; # 2811/tcp is GridFTP.
redef capture_filters += { ["ftp"] = "port 21 and port 2811" };
redef dpd_config += { [ANALYZER_FTP] = [$ports = ports] };
redef likely_server_ports += { 21/tcp };
redef likely_server_ports += { 21/tcp, 2811/tcp };
# Establish the variable for tracking expected connections.
global ftp_data_expected: table[addr, port] of Info &create_expire=5mins;


@ -19,7 +19,7 @@ export {
version: string &log &optional;
## SSL/TLS cipher suite that the server chose.
cipher: string &log &optional;
## Value of the Server Name Indicator SSL/TLS extension. It
## Value of the Server Name Indicator SSL/TLS extension. It
## indicates the server name that the client was requesting.
server_name: string &log &optional;
## Session ID offered by the client for session resumption.
@ -30,37 +30,48 @@ export {
issuer_subject: string &log &optional;
## NotValidBefore field value from the server certificate.
not_valid_before: time &log &optional;
## NotValidAfter field value from the serve certificate.
## NotValidAfter field value from the server certificate.
not_valid_after: time &log &optional;
## Last alert that was seen during the connection.
last_alert: string &log &optional;
## Subject of the X.509 certificate offered by the client.
client_subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the client.
client_issuer_subject: string &log &optional;
## Full binary server certificate stored in DER format.
cert: string &optional;
## Chain of certificates offered by the server to validate its
## Chain of certificates offered by the server to validate its
## complete signing chain.
cert_chain: vector of string &optional;
## Full binary client certificate stored in DER format.
client_cert: string &optional;
## Chain of certificates offered by the client to validate its
## complete signing chain.
client_cert_chain: vector of string &optional;
## The analyzer ID used for the analyzer instance attached
## to each connection. It is not used for logging since it's a
## meaningless arbitrary number.
analyzer_id: count &optional;
};
## The default root CA bundle. By loading the
## mozilla-ca-list.bro script it will be set to Mozilla's root CA list.
const root_certs: table[string] of string = {} &redef;
## If true, detach the SSL analyzer from the connection to prevent
## continuing to process encrypted traffic. Helps with performance
## (especially with large file transfers).
const disable_analyzer_after_detection = T &redef;
## The openssl command line utility. If it's in the path the default
## value will work, otherwise a full path string can be supplied for the
## utility.
const openssl_util = "openssl" &redef;
## Event that can be handled to access the SSL
## record as it is sent on to the logging framework.
global log_ssl: event(rec: Info);
@ -107,7 +118,8 @@ redef likely_server_ports += {
function set_session(c: connection)
{
if ( ! c?$ssl )
c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector()];
c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector(),
$client_cert_chain=vector()];
}
function finish(c: connection)
@ -141,23 +153,40 @@ event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: coun
# We aren't doing anything with client certificates yet.
if ( is_orig )
return;
if ( chain_idx == 0 )
{
# Save the primary cert.
c$ssl$cert = der_cert;
if ( chain_idx == 0 )
{
# Save the primary cert.
c$ssl$client_cert = der_cert;
# Also save other certificate information about the primary cert.
c$ssl$subject = cert$subject;
c$ssl$issuer_subject = cert$issuer;
c$ssl$not_valid_before = cert$not_valid_before;
c$ssl$not_valid_after = cert$not_valid_after;
# Also save other certificate information about the primary cert.
c$ssl$client_subject = cert$subject;
c$ssl$client_issuer_subject = cert$issuer;
}
else
{
# Otherwise, add it to the cert validation chain.
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = der_cert;
}
}
else
{
# Otherwise, add it to the cert validation chain.
c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert;
if ( chain_idx == 0 )
{
# Save the primary cert.
c$ssl$cert = der_cert;
# Also save other certificate information about the primary cert.
c$ssl$subject = cert$subject;
c$ssl$issuer_subject = cert$issuer;
c$ssl$not_valid_before = cert$not_valid_before;
c$ssl$not_valid_after = cert$not_valid_after;
}
else
{
# Otherwise, add it to the cert validation chain.
c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert;
}
}
}


@ -171,6 +171,7 @@ const Analyzer::Config Analyzer::analyzer_configs[] = {
{ AnalyzerTag::Contents_SMB, "CONTENTS_SMB", 0, 0, 0, false },
{ AnalyzerTag::Contents_RPC, "CONTENTS_RPC", 0, 0, 0, false },
{ AnalyzerTag::Contents_NFS, "CONTENTS_NFS", 0, 0, 0, false },
{ AnalyzerTag::FTP_ADAT, "FTP_ADAT", 0, 0, 0, false },
};
AnalyzerTimer::~AnalyzerTimer()


@ -46,6 +46,7 @@ namespace AnalyzerTag {
Contents, ContentLine, NVT, Zip, Contents_DNS, Contents_NCP,
Contents_NetbiosSSN, Contents_Rlogin, Contents_Rsh,
Contents_DCE_RPC, Contents_SMB, Contents_RPC, Contents_NFS,
FTP_ADAT,
// End-marker.
LastAnalyzer
};


@ -4,6 +4,7 @@ include_directories(BEFORE
)
configure_file(version.c.in ${CMAKE_CURRENT_BINARY_DIR}/version.c)
configure_file(util-config.h.in ${CMAKE_CURRENT_BINARY_DIR}/util-config.h)
# This creates a custom command to transform a bison output file (inFile)
# into outFile in order to avoid symbol conflicts:
@ -444,10 +445,6 @@ set(bro_SRCS
collect_headers(bro_HEADERS ${bro_SRCS})
add_definitions(-DBRO_SCRIPT_INSTALL_PATH="${BRO_SCRIPT_INSTALL_PATH}")
add_definitions(-DBRO_SCRIPT_SOURCE_PATH="${BRO_SCRIPT_SOURCE_PATH}")
add_definitions(-DBRO_BUILD_PATH="${CMAKE_CURRENT_BINARY_DIR}")
add_executable(bro ${bro_SRCS} ${bro_HEADERS})
target_link_libraries(bro ${brodeps} ${CMAKE_THREAD_LIBS_INIT})


@ -8,6 +8,8 @@
#include "FTP.h"
#include "NVT.h"
#include "Event.h"
#include "SSL.h"
#include "Base64.h"
FTP_Analyzer::FTP_Analyzer(Connection* conn)
: TCP_ApplicationAnalyzer(AnalyzerTag::FTP, conn)
@ -44,6 +46,14 @@ void FTP_Analyzer::Done()
Weird("partial_ftp_request");
}
static uint32 get_reply_code(int len, const char* line)
{
if ( len >= 3 && isdigit(line[0]) && isdigit(line[1]) && isdigit(line[2]) )
return (line[0] - '0') * 100 + (line[1] - '0') * 10 + (line[2] - '0');
else
return 0;
}
void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
{
TCP_ApplicationAnalyzer::DeliverStream(length, data, orig);
@ -93,16 +103,7 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
}
else
{
uint32 reply_code;
if ( length >= 3 &&
isdigit(line[0]) && isdigit(line[1]) && isdigit(line[2]) )
{
reply_code = (line[0] - '0') * 100 +
(line[1] - '0') * 10 +
(line[2] - '0');
}
else
reply_code = 0;
uint32 reply_code = get_reply_code(length, line);
int cont_resp;
@ -143,19 +144,22 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
else
line = end_of_line;
if ( auth_requested.size() > 0 &&
(reply_code == 234 || reply_code == 335) )
// Server accepted AUTH requested,
// which means that very likely we
// won't be able to parse the rest
// of the session, and thus we stop
// here.
SetSkip(true);
cont_resp = 0;
}
}
if ( reply_code == 334 && auth_requested.size() > 0 &&
auth_requested == "GSSAPI" )
{
// Server wants to proceed with an ADAT exchange and we
// know how to analyze the GSI mechanism, so attach analyzer
// to look for that.
SSL_Analyzer* ssl = new SSL_Analyzer(Conn());
ssl->AddSupportAnalyzer(new FTP_ADAT_Analyzer(Conn(), true));
ssl->AddSupportAnalyzer(new FTP_ADAT_Analyzer(Conn(), false));
AddChildAnalyzer(ssl);
}
vl->append(new Val(reply_code, TYPE_COUNT));
vl->append(new StringVal(end_of_line - line, line));
vl->append(new Val(cont_resp, TYPE_BOOL));
@ -164,5 +168,140 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig)
}
ConnectionEvent(f, vl);
ForwardStream(length, data, orig);
}
void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig)
{
// Don't know how to parse anything but the ADAT exchanges of GSI GSSAPI,
// which is basically just TLS/SSL.
if ( Parent()->GetTag() != AnalyzerTag::SSL )
{
Parent()->Remove();
return;
}
bool done = false;
const char* line = (const char*) data;
const char* end_of_line = line + len;
BroString* decoded_adat = 0;
if ( orig )
{
int cmd_len;
const char* cmd;
line = skip_whitespace(line, end_of_line);
get_word(len, line, cmd_len, cmd);
if ( strncmp(cmd, "ADAT", cmd_len) == 0 )
{
line = skip_whitespace(line + cmd_len, end_of_line);
StringVal encoded(end_of_line - line, line);
decoded_adat = decode_base64(encoded.AsString());
if ( first_token )
{
// RFC 2743 section 3.1 specifies a framing format for tokens
// that includes an identifier for the mechanism type. The
// framing is supposed to be required for the initial context
// token, but GSI doesn't do that and starts right in on a
// TLS/SSL handshake, so look for that to identify it.
const u_char* msg = decoded_adat->Bytes();
int msg_len = decoded_adat->Len();
// Just check that it looks like a viable TLS/SSL handshake
// record from the first byte (content type of 0x16) and
// that the fourth and fifth bytes indicating the length of
// the record match the length of the decoded data.
if ( msg_len < 5 || msg[0] != 0x16 ||
msg_len - 5 != ntohs(*((uint16*)(msg + 3))) )
{
// Doesn't look like TLS/SSL, so done analyzing.
done = true;
delete decoded_adat;
decoded_adat = 0;
}
}
first_token = false;
}
else if ( strncmp(cmd, "AUTH", cmd_len) == 0 )
// Security state will be reset by a reissued AUTH.
done = true;
}
else
{
uint32 reply_code = get_reply_code(len, line);
switch ( reply_code ) {
case 232:
case 234:
// Indicates security data exchange is complete, but nothing
// more to decode in replies.
done = true;
break;
case 235:
// Security data exchange complete, but may have more to decode
// in the reply (same format as 334 and 335).
done = true;
// Fall-through.
case 334:
case 335:
// Security data exchange still in progress, and there could be data
// to decode in the reply.
line += 3;
if ( len > 3 && line[0] == '-' )
line++;
line = skip_whitespace(line, end_of_line);
if ( end_of_line - line >= 5 && strncmp(line, "ADAT=", 5) == 0 )
{
line += 5;
StringVal encoded(end_of_line - line, line);
decoded_adat = decode_base64(encoded.AsString());
}
break;
case 421:
case 431:
case 500:
case 501:
case 503:
case 535:
// Server isn't going to accept named security mechanism.
// Client has to restart back at the AUTH.
done = true;
break;
case 631:
case 632:
case 633:
// If the server is sending protected replies, the security
// data exchange must have already succeeded. It does have
// encoded data in the reply, but 632 and 633 are also encrypted.
done = true;
break;
default:
break;
}
}
if ( decoded_adat )
{
ForwardStream(decoded_adat->Len(), decoded_adat->Bytes(), orig);
delete decoded_adat;
}
if ( done )
Parent()->Remove();
}


@ -30,4 +30,26 @@ protected:
string auth_requested; // AUTH method requested
};
/**
* Analyzes security data of ADAT exchanges over FTP control session (RFC 2228).
* Currently only the GSI mechanism of GSSAPI AUTH method is understood.
* The ADAT exchange for GSI is base64 encoded TLS/SSL handshake tokens. This
* analyzer just decodes the tokens and passes them on to the parent, which must
* be an SSL analyzer instance.
*/
class FTP_ADAT_Analyzer : public SupportAnalyzer {
public:
FTP_ADAT_Analyzer(Connection* conn, bool arg_orig)
: SupportAnalyzer(AnalyzerTag::FTP_ADAT, conn, arg_orig),
first_token(true) { }
void DeliverStream(int len, const u_char* data, bool orig);
protected:
// Used by the client-side analyzer to tell if it needs to peek at the
// initial context token and do sanity checking (i.e. does it look like
// a TLS/SSL handshake token).
bool first_token;
};
#endif


@ -342,6 +342,21 @@ public:
return memcmp(&addr1.in6, &addr2.in6, sizeof(in6_addr)) < 0;
}
friend bool operator<=(const IPAddr& addr1, const IPAddr& addr2)
{
return addr1 < addr2 || addr1 == addr2;
}
friend bool operator>=(const IPAddr& addr1, const IPAddr& addr2)
{
return ! ( addr1 < addr2 );
}
friend bool operator>(const IPAddr& addr1, const IPAddr& addr2)
{
return ! ( addr1 <= addr2 );
}
/** Converts the address into the type used internally by the
* inter-thread communication.
*/
@ -583,6 +598,11 @@ public:
return net1.Prefix() == net2.Prefix() && net1.Length() == net2.Length();
}
friend bool operator!=(const IPPrefix& net1, const IPPrefix& net2)
{
return ! (net1 == net2);
}
/**
* Comparison operator IP prefixes. This defines a well-defined order for
* IP prefix. However, the order does not necessarily corresponding to their
@ -600,6 +620,21 @@ public:
return false;
}
friend bool operator<=(const IPPrefix& net1, const IPPrefix& net2)
{
return net1 < net2 || net1 == net2;
}
friend bool operator>=(const IPPrefix& net1, const IPPrefix& net2)
{
return ! (net1 < net2 );
}
friend bool operator>(const IPPrefix& net1, const IPPrefix& net2)
{
return ! ( net1 <= net2 );
}
private:
IPAddr prefix; // We store it as an address with the non-prefix bits masked out via Mask().
uint8_t length; // The bit length of the prefix relative to full IPv6 addr.


@ -1,4 +1,5 @@
#include <algorithm>
#include <functional>
#include "config.h"
@ -41,6 +42,23 @@ RuleHdrTest::RuleHdrTest(Prot arg_prot, uint32 arg_offset, uint32 arg_size,
level = 0;
}
RuleHdrTest::RuleHdrTest(Prot arg_prot, Comp arg_comp, vector<IPPrefix> arg_v)
{
prot = arg_prot;
offset = 0;
size = 0;
comp = arg_comp;
vals = new maskedvalue_list;
prefix_vals = arg_v;
sibling = 0;
child = 0;
pattern_rules = 0;
pure_rules = 0;
ruleset = new IntSet;
id = ++idcounter;
level = 0;
}
Val* RuleMatcher::BuildRuleStateValue(const Rule* rule,
const RuleEndpointState* state) const
{
@ -63,6 +81,8 @@ RuleHdrTest::RuleHdrTest(RuleHdrTest& h)
loop_over_list(*h.vals, i)
vals->append(new MaskedValue(*(*h.vals)[i]));
prefix_vals = h.prefix_vals;
for ( int j = 0; j < Rule::TYPES; ++j )
{
loop_over_list(h.psets[j], k)
@ -114,6 +134,10 @@ bool RuleHdrTest::operator==(const RuleHdrTest& h)
(*vals)[i]->mask != (*h.vals)[i]->mask )
return false;
for ( size_t i = 0; i < prefix_vals.size(); ++i )
if ( ! (prefix_vals[i] == h.prefix_vals[i]) )
return false;
return true;
}
@ -129,6 +153,9 @@ void RuleHdrTest::PrintDebug()
fprintf(stderr, " 0x%08x/0x%08x",
(*vals)[i]->val, (*vals)[i]->mask);
for ( size_t i = 0; i < prefix_vals.size(); ++i )
fprintf(stderr, " %s", prefix_vals[i].AsString().c_str());
fprintf(stderr, "\n");
}
@ -410,29 +437,129 @@ static inline uint32 getval(const u_char* data, int size)
}
// A line which can be inserted into the macros below for debugging
// fprintf(stderr, "%.06f %08x & %08x %s %08x\n", network_time, v, (mvals)[i]->mask, #op, (mvals)[i]->val);
// Evaluate a value list (matches if at least one value matches).
#define DO_MATCH_OR( mvals, v, op ) \
{ \
loop_over_list((mvals), i) \
{ \
if ( ((v) & (mvals)[i]->mask) op (mvals)[i]->val ) \
goto match; \
} \
goto no_match; \
template <typename FuncT>
static inline bool match_or(const maskedvalue_list& mvals, uint32 v, FuncT comp)
{
loop_over_list(mvals, i)
{
if ( comp(v & mvals[i]->mask, mvals[i]->val) )
return true;
}
return false;
}
// Evaluate a prefix list (matches if at least one value matches).
template <typename FuncT>
static inline bool match_or(const vector<IPPrefix>& prefixes, const IPAddr& a,
FuncT comp)
{
for ( size_t i = 0; i < prefixes.size(); ++i )
{
IPAddr masked(a);
masked.Mask(prefixes[i].LengthIPv6());
if ( comp(masked, prefixes[i].Prefix()) )
return true;
}
return false;
}
// Evaluate a value list (doesn't match if any value matches).
#define DO_MATCH_NOT_AND( mvals, v, op ) \
{ \
loop_over_list((mvals), i) \
{ \
if ( ((v) & (mvals)[i]->mask) op (mvals)[i]->val ) \
goto no_match; \
} \
goto match; \
template <typename FuncT>
static inline bool match_not_and(const maskedvalue_list& mvals, uint32 v,
FuncT comp)
{
loop_over_list(mvals, i)
{
if ( comp(v & mvals[i]->mask, mvals[i]->val) )
return false;
}
return true;
}
// Evaluate a prefix list (doesn't match if any value matches).
template <typename FuncT>
static inline bool match_not_and(const vector<IPPrefix>& prefixes,
const IPAddr& a, FuncT comp)
{
for ( size_t i = 0; i < prefixes.size(); ++i )
{
IPAddr masked(a);
masked.Mask(prefixes[i].LengthIPv6());
if ( comp(masked, prefixes[i].Prefix()) )
return false;
}
return true;
}
static inline bool compare(const maskedvalue_list& mvals, uint32 v,
RuleHdrTest::Comp comp)
{
switch ( comp ) {
case RuleHdrTest::EQ:
return match_or(mvals, v, std::equal_to<uint32>());
break;
case RuleHdrTest::NE:
return match_not_and(mvals, v, std::equal_to<uint32>());
break;
case RuleHdrTest::LT:
return match_or(mvals, v, std::less<uint32>());
break;
case RuleHdrTest::GT:
return match_or(mvals, v, std::greater<uint32>());
break;
case RuleHdrTest::LE:
return match_or(mvals, v, std::less_equal<uint32>());
break;
case RuleHdrTest::GE:
return match_or(mvals, v, std::greater_equal<uint32>());
break;
default:
reporter->InternalError("unknown comparison type");
break;
}
return false;
}
static inline bool compare(const vector<IPPrefix>& prefixes, const IPAddr& a,
RuleHdrTest::Comp comp)
{
switch ( comp ) {
case RuleHdrTest::EQ:
return match_or(prefixes, a, std::equal_to<IPAddr>());
break;
case RuleHdrTest::NE:
return match_not_and(prefixes, a, std::equal_to<IPAddr>());
break;
case RuleHdrTest::LT:
return match_or(prefixes, a, std::less<IPAddr>());
break;
case RuleHdrTest::GT:
return match_or(prefixes, a, std::greater<IPAddr>());
break;
case RuleHdrTest::LE:
return match_or(prefixes, a, std::less_equal<IPAddr>());
break;
case RuleHdrTest::GE:
return match_or(prefixes, a, std::greater_equal<IPAddr>());
break;
default:
reporter->InternalError("unknown comparison type");
break;
}
return false;
}
RuleEndpointState* RuleMatcher::InitEndpoint(Analyzer* analyzer,
@ -492,66 +619,54 @@ RuleEndpointState* RuleMatcher::InitEndpoint(Analyzer* analyzer,
if ( ip )
{
// Get start of transport layer.
const u_char* transport = ip->Payload();
// Descend the RuleHdrTest tree further.
for ( RuleHdrTest* h = hdr_test->child; h;
h = h->sibling )
{
const u_char* data;
bool match = false;
// Evaluate the header test.
switch ( h->prot ) {
case RuleHdrTest::NEXT:
match = compare(*h->vals, ip->NextProto(), h->comp);
break;
case RuleHdrTest::IP:
data = (const u_char*) ip->IP4_Hdr();
if ( ! ip->IP4_Hdr() )
continue;
match = compare(*h->vals, getval((const u_char*)ip->IP4_Hdr() + h->offset, h->size), h->comp);
break;
case RuleHdrTest::IPv6:
if ( ! ip->IP6_Hdr() )
continue;
match = compare(*h->vals, getval((const u_char*)ip->IP6_Hdr() + h->offset, h->size), h->comp);
break;
case RuleHdrTest::ICMP:
case RuleHdrTest::ICMPv6:
case RuleHdrTest::TCP:
case RuleHdrTest::UDP:
data = transport;
match = compare(*h->vals, getval(ip->Payload() + h->offset, h->size), h->comp);
break;
case RuleHdrTest::IPSrc:
match = compare(h->prefix_vals, ip->IPHeaderSrcAddr(), h->comp);
break;
case RuleHdrTest::IPDst:
match = compare(h->prefix_vals, ip->IPHeaderDstAddr(), h->comp);
break;
default:
data = 0;
reporter->InternalError("unknown protocol");
break;
}
// ### data can be nil here if it's an
// IPv6 packet and we're doing an IP test.
if ( ! data )
continue;
// Sorry for the hidden gotos :-)
switch ( h->comp ) {
case RuleHdrTest::EQ:
DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), ==);
case RuleHdrTest::NE:
DO_MATCH_NOT_AND(*h->vals, getval(data + h->offset, h->size), ==);
case RuleHdrTest::LT:
DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), <);
case RuleHdrTest::GT:
DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), >);
case RuleHdrTest::LE:
DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), <=);
case RuleHdrTest::GE:
DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), >=);
default:
reporter->InternalError("unknown comparision type");
}
no_match:
continue;
match:
tests.append(h);
if ( match )
tests.append(h);
}
}
}
@ -1028,7 +1143,7 @@ void RuleMatcher::DumpStateStats(BroFile* f, RuleHdrTest* hdr_test)
Rule* r = Rule::rule_table[set->ids[k] - 1];
f->Write(fmt("%s ", r->ID()));
}
f->Write("\n");
}
}
@ -1050,8 +1165,11 @@ static Val* get_bro_val(const char* label)
}
// Converts an atomic Val and appends it to the list
static bool val_to_maskedval(Val* v, maskedvalue_list* append_to)
// Converts an atomic Val and appends it to the list. For subnet types,
// if the prefix_vector param isn't null, appending to that is preferred
// over appending to the masked val list.
static bool val_to_maskedval(Val* v, maskedvalue_list* append_to,
vector<IPPrefix>* prefix_vector)
{
MaskedValue* mval = new MaskedValue;
@ -1071,29 +1189,37 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to)
case TYPE_SUBNET:
{
const uint32* n;
uint32 m[4];
v->AsSubNet().Prefix().GetBytes(&n);
v->AsSubNetVal()->Mask().CopyIPv6(m);
for ( unsigned int i = 0; i < 4; ++i )
m[i] = ntohl(m[i]);
bool is_v4_mask = m[0] == 0xffffffff &&
m[1] == m[0] && m[2] == m[0];
if ( v->AsSubNet().Prefix().GetFamily() == IPv4 &&
is_v4_mask )
if ( prefix_vector )
{
mval->val = ntohl(*n);
mval->mask = m[3];
prefix_vector->push_back(v->AsSubNet());
delete mval;
return true;
}
else
{
rules_error("IPv6 subnets not supported");
mval->val = 0;
mval->mask = 0;
const uint32* n;
uint32 m[4];
v->AsSubNet().Prefix().GetBytes(&n);
v->AsSubNetVal()->Mask().CopyIPv6(m);
for ( unsigned int i = 0; i < 4; ++i )
m[i] = ntohl(m[i]);
bool is_v4_mask = m[0] == 0xffffffff &&
m[1] == m[0] && m[2] == m[0];
if ( v->AsSubNet().Prefix().GetFamily() == IPv4 && is_v4_mask )
{
mval->val = ntohl(*n);
mval->mask = m[3];
}
else
{
rules_error("IPv6 subnets not supported");
mval->val = 0;
mval->mask = 0;
}
}
}
break;
@ -1108,7 +1234,8 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to)
return true;
}
void id_to_maskedvallist(const char* id, maskedvalue_list* append_to)
void id_to_maskedvallist(const char* id, maskedvalue_list* append_to,
vector<IPPrefix>* prefix_vector)
{
Val* v = get_bro_val(id);
if ( ! v )
@ -1118,7 +1245,7 @@ void id_to_maskedvallist(const char* id, maskedvalue_list* append_to)
{
val_list* vals = v->AsTableVal()->ConvertToPureList()->Vals();
loop_over_list(*vals, i )
if ( ! val_to_maskedval((*vals)[i], append_to) )
if ( ! val_to_maskedval((*vals)[i], append_to, prefix_vector) )
{
delete_vals(vals);
return;
@ -1128,7 +1255,7 @@ void id_to_maskedvallist(const char* id, maskedvalue_list* append_to)
}
else
val_to_maskedval(v, append_to);
val_to_maskedval(v, append_to, prefix_vector);
}
char* id_to_str(const char* id)


@ -2,7 +2,9 @@
#define sigs_h
#include <limits.h>
#include <vector>
#include "IPAddr.h"
#include "BroString.h"
#include "List.h"
#include "RE.h"
@ -59,17 +61,19 @@ declare(PList, BroString);
typedef PList(BroString) bstr_list;
// Get values from Bro's script-level variables.
extern void id_to_maskedvallist(const char* id, maskedvalue_list* append_to);
extern void id_to_maskedvallist(const char* id, maskedvalue_list* append_to,
vector<IPPrefix>* prefix_vector = 0);
extern char* id_to_str(const char* id);
extern uint32 id_to_uint(const char* id);
class RuleHdrTest {
public:
enum Comp { LE, GE, LT, GT, EQ, NE };
enum Prot { NOPROT, IP, ICMP, TCP, UDP };
enum Prot { NOPROT, IP, IPv6, ICMP, ICMPv6, TCP, UDP, NEXT, IPSrc, IPDst };
RuleHdrTest(Prot arg_prot, uint32 arg_offset, uint32 arg_size,
Comp arg_comp, maskedvalue_list* arg_vals);
RuleHdrTest(Prot arg_prot, Comp arg_comp, vector<IPPrefix> arg_v);
~RuleHdrTest();
void PrintDebug();
@ -86,6 +90,7 @@ private:
Prot prot;
Comp comp;
maskedvalue_list* vals;
vector<IPPrefix> prefix_vals; // for use with IPSrc/IPDst comparisons
uint32 offset;
uint32 size;


@ -138,6 +138,11 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
{
Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen);
if ( orig )
valid_orig = false;
else
valid_resp = false;
TeredoEncapsulation te(this);
if ( ! te.Parse(data, len) )
@ -150,7 +155,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
if ( e && e->Depth() >= BifConst::Tunnel::max_depth )
{
Weird("tunnel_depth");
Weird("tunnel_depth", true);
return;
}
@ -162,7 +167,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
if ( inner->NextProto() == IPPROTO_NONE && inner->PayloadLen() == 0 )
// Teredo bubbles having data after IPv6 header isn't strictly a
// violation, but a little weird.
Weird("Teredo_bubble_with_payload");
Weird("Teredo_bubble_with_payload", true);
else
{
delete inner;
@ -173,6 +178,11 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
if ( rslt == 0 || rslt > 0 )
{
if ( orig )
valid_orig = true;
else
valid_resp = true;
if ( BifConst::Tunnel::yielding_teredo_decapsulation &&
! ProtocolConfirmed() )
{
@ -193,7 +203,7 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
}
if ( ! sibling_has_confirmed )
ProtocolConfirmation();
Confirm();
else
{
delete inner;
@ -201,10 +211,8 @@ void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig,
}
}
else
{
// Aggressively decapsulate anything with valid Teredo encapsulation
ProtocolConfirmation();
}
// Aggressively decapsulate anything with valid Teredo encapsulation.
Confirm();
}
else


@ -6,7 +6,8 @@
class Teredo_Analyzer : public Analyzer {
public:
Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn)
Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn),
valid_orig(false), valid_resp(false)
{}
virtual ~Teredo_Analyzer()
@ -26,18 +27,34 @@ public:
/**
* Emits a weird only if the analyzer has previously been able to
* decapsulate a Teredo packet since otherwise the weirds could happen
* frequently enough to be less than helpful.
* decapsulate a Teredo packet in both directions or if *force* param is
* set, since otherwise the weirds could happen frequently enough to be less
* than helpful. The *force* param is meant for cases where just one side
* has a valid encapsulation and so the weird would be informative.
*/
void Weird(const char* name) const
void Weird(const char* name, bool force = false) const
{
if ( ProtocolConfirmed() )
if ( ProtocolConfirmed() || force )
reporter->Weird(Conn(), name);
}
/**
* If the delayed confirmation option is set, then a valid encapsulation
* seen from both end points is required before confirming.
*/
void Confirm()
{
if ( ! BifConst::Tunnel::delay_teredo_confirmation ||
( valid_orig && valid_resp ) )
ProtocolConfirmation();
}
protected:
friend class AnalyzerTimer;
void ExpireTimer(double t);
bool valid_orig;
bool valid_resp;
};
class TeredoEncapsulation {


@ -11,6 +11,7 @@
#include <cmath>
#include <sys/stat.h>
#include <cstdio>
#include <time.h>
#include "digest.h"
#include "Reporter.h"
@ -2615,15 +2616,15 @@ function to_double%(str: string%): double
%{
const char* s = str->CheckString();
char* end_s;
double d = strtod(s, &end_s);
if ( s[0] == '\0' || end_s[0] != '\0' )
{
{
builtin_error("bad conversion to double", @ARG@[0]);
d = 0;
}
}
return new Val(d, TYPE_DOUBLE);
%}
@ -3285,6 +3286,31 @@ function strftime%(fmt: string, d: time%) : string
return new StringVal(buffer);
%}
## Parse a textual representation of a date/time value into a ``time`` type value.
##
## fmt: The format string used to parse the following *d* argument. See ``man strftime``
## for the syntax.
##
## d: The string representing the time.
##
## Returns: The time value calculated from parsing *d* with *fmt*.
function strptime%(fmt: string, d: string%) : time
%{
const time_t timeval = time_t(NULL);
struct tm t = *localtime(&timeval);
if ( strptime(d->CheckString(), fmt->CheckString(), &t) == NULL )
{
reporter->Warning("strptime conversion failed: fmt:%s d:%s", fmt->CheckString(), d->CheckString());
return new Val(0.0, TYPE_TIME);
}
double ret = mktime(&t);
return new Val(ret, TYPE_TIME);
%}
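As a script-level usage sketch for the new BiF (the format strings below are illustrative; the second call mirrors the fmt/d pair in the reporter.log baseline added later in this diff):
# Illustrative only: parse a date string into a time value with the new BiF.
event bro_init()
	{
	print strptime("%Y-%m-%d", "2012-10-19");  # parsed timestamp
	print strptime("%m", "1980-10-24");        # mismatch: reporter warning, returns 0.0
	}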
# ===========================================================================
#
# Network Type Processing

View file

@ -16,6 +16,7 @@ const Tunnel::enable_ip: bool;
const Tunnel::enable_ayiya: bool;
const Tunnel::enable_teredo: bool;
const Tunnel::yielding_teredo_decapsulation: bool;
const Tunnel::delay_teredo_confirmation: bool;
const Tunnel::ip_tunnel_timeout: interval;
const Threading::heartbeat_interval: interval;
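A minimal script-level sketch for the new Teredo constant declared above, assuming its script-land counterpart in init-bare.bro is &redef like the other Tunnel options:
# Sketch, assuming Tunnel::delay_teredo_confirmation is &redef at the script
# level: require a clean Teredo decapsulation in both directions before the
# analyzer calls ProtocolConfirmation() (see Teredo_Analyzer::Confirm() above);
# F restores the old behavior of confirming on the first successful decapsulation.
redef Tunnel::delay_teredo_confirmation = T;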

View file

@ -196,7 +196,7 @@ Manager::TableStream::~TableStream()
Manager::Manager()
{
update_finished = internal_handler("Input::update_finished");
end_of_data = internal_handler("Input::end_of_data");
}
Manager::~Manager()
@ -322,20 +322,10 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
Unref(mode);
Val* config = description->LookupWithDefault(rtype->FieldOffset("config"));
ReaderFrontend* reader_obj = new ReaderFrontend(*rinfo, reader);
assert(reader_obj);
info->reader = reader_obj;
info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
info->name = name;
info->config = config->AsTableVal(); // ref'd by LookupWithDefault
info->info = rinfo;
Ref(description);
info->description = description;
{
// create config mapping in ReaderInfo. Has to be done before the construction of reader_obj.
HashKey* k;
IterCookie* c = info->config->AsTable()->InitForIteration();
@ -345,13 +335,26 @@ bool Manager::CreateStream(Stream* info, RecordVal* description)
ListVal* index = info->config->RecoverIndex(k);
string key = index->Index(0)->AsString()->CheckString();
string value = v->Value()->AsString()->CheckString();
info->info->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str())));
rinfo->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str())));
Unref(index);
delete k;
}
}
ReaderFrontend* reader_obj = new ReaderFrontend(*rinfo, reader);
assert(reader_obj);
info->reader = reader_obj;
info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault
info->name = name;
info->info = rinfo;
Ref(description);
info->description = description;
DBG_LOG(DBG_INPUT, "Successfully created new input stream %s",
name.c_str());
@ -1169,8 +1172,12 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
DBG_LOG(DBG_INPUT, "Got EndCurrentSend stream %s", i->name.c_str());
#endif
if ( i->stream_type == EVENT_STREAM ) // nothing to do..
if ( i->stream_type == EVENT_STREAM )
{
// just signal the end of the data source
SendEndOfData(i);
return;
}
assert(i->stream_type == TABLE_STREAM);
TableStream* stream = (TableStream*) i;
@ -1251,12 +1258,29 @@ void Manager::EndCurrentSend(ReaderFrontend* reader)
stream->currDict->SetDeleteFunc(input_hash_delete_func);
#ifdef DEBUG
DBG_LOG(DBG_INPUT, "EndCurrentSend complete for stream %s, queueing update_finished event",
DBG_LOG(DBG_INPUT, "EndCurrentSend complete for stream %s",
i->name.c_str());
#endif
// Send event that the current update is indeed finished.
SendEvent(update_finished, 2, new StringVal(i->name.c_str()), new StringVal(i->info->source));
SendEndOfData(i);
}
void Manager::SendEndOfData(ReaderFrontend* reader)
{
Stream *i = FindStream(reader);
if ( i == 0 )
{
reporter->InternalError("Unknown reader in SendEndOfData");
return;
}
SendEndOfData(i);
}
void Manager::SendEndOfData(const Stream *i)
{
SendEvent(end_of_data, 2, new StringVal(i->name.c_str()), new StringVal(i->info->source));
}
void Manager::Put(ReaderFrontend* reader, Value* *vals)
@ -2007,7 +2031,7 @@ Val* Manager::ValueToVal(const Value* val, BroType* request_type)
case TYPE_STRING:
{
BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 0);
BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 1);
return new StringVal(s);
}

View file

@ -89,6 +89,7 @@ protected:
friend class EndCurrentSendMessage;
friend class ReaderClosedMessage;
friend class DisableMessage;
friend class EndOfDataMessage;
// For readers to write to input stream in direct mode (reporting
// new/deleted values directly). Functions take ownership of
@ -96,6 +97,9 @@ protected:
void Put(ReaderFrontend* reader, threading::Value* *vals);
void Clear(ReaderFrontend* reader);
bool Delete(ReaderFrontend* reader, threading::Value* *vals);
// Trigger sending the End-of-Data event when the input source has
// finished reading. Only for use in direct mode.
void SendEndOfData(ReaderFrontend* reader);
// For readers to write to input stream in indirect mode (manager is
// monitoring new/deleted values). Functions take ownership of
@ -119,7 +123,7 @@ protected:
// main thread. This makes sure all data that has been queued for a
// stream is still received.
bool RemoveStreamContinuation(ReaderFrontend* reader);
/**
* Deletes an existing input stream.
*
@ -154,7 +158,6 @@ private:
// equivalent in threading cannot be used, because we have to support
// different types from the log framework
bool IsCompatibleType(BroType* t, bool atomic_only=false);
// Check if a record is made up of compatible types and return a list
// of all fields that are in the record in order. Recursively unrolls
// records
@ -164,6 +167,9 @@ private:
void SendEvent(EventHandlerPtr ev, const int numvals, ...);
void SendEvent(EventHandlerPtr ev, list<Val*> events);
// Implementation of SendEndOfData (send end_of_data event).
void SendEndOfData(const Stream *i);
// Call predicate function and return result.
bool CallPred(Func* pred_func, const int numvals, ...);
@ -200,7 +206,7 @@ private:
map<ReaderFrontend*, Stream*> readers;
EventHandlerPtr update_finished;
EventHandlerPtr end_of_data;
};

View file

@ -108,6 +108,20 @@ public:
private:
};
class EndOfDataMessage : public threading::OutputMessage<ReaderFrontend> {
public:
EndOfDataMessage(ReaderFrontend* reader)
: threading::OutputMessage<ReaderFrontend>("EndOfData", reader) {}
virtual bool Process()
{
input_mgr->SendEndOfData(Object());
return true;
}
private:
};
class ReaderClosedMessage : public threading::OutputMessage<ReaderFrontend> {
public:
ReaderClosedMessage(ReaderFrontend* reader)
@ -183,6 +197,11 @@ void ReaderBackend::EndCurrentSend()
SendOut(new EndCurrentSendMessage(frontend));
}
void ReaderBackend::EndOfData()
{
SendOut(new EndOfDataMessage(frontend));
}
void ReaderBackend::SendEntry(Value* *vals)
{
SendOut(new SendEntryMessage(frontend, vals));

View file

@ -281,6 +281,16 @@ protected:
*/
void Clear();
/**
* Method telling the manager that we finished reading the current
* data source. Will trigger an end_of_data event.
*
* Note: When using SendEntry (tracking mode), this is triggered
* automatically by EndCurrentSend(). Only call this when not using the
* tracking mode; otherwise the event will be sent twice.
*/
void EndOfData();
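At the script level, the end_of_data event this triggers carries the stream name and data source, matching the SendEvent() call in Manager.cc above; a minimal handler sketch (the body is illustrative):
# Illustrative handler for the renamed event.
event Input::end_of_data(name: string, source: string)
	{
	print fmt("input stream %s finished reading %s", name, source);
	}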
// Content-sending-functions (tracking mode): Only changed lines are propagated.
/**

View file

@ -48,7 +48,7 @@ ElasticSearch::ElasticSearch(WriterFrontend* frontend) : WriterBackend(frontend)
last_send = current_time();
failing = false;
transfer_timeout = BifConst::LogElasticSearch::transfer_timeout * 1000;
transfer_timeout = static_cast<long>(BifConst::LogElasticSearch::transfer_timeout);
curl_handle = HTTPSetup();
}
@ -373,8 +373,8 @@ bool ElasticSearch::HTTPSend(CURL *handle)
// Some timeout options. These will need more attention later.
curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT_MS, transfer_timeout);
curl_easy_setopt(handle, CURLOPT_TIMEOUT_MS, transfer_timeout*2);
curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, transfer_timeout);
curl_easy_setopt(handle, CURLOPT_TIMEOUT, transfer_timeout);
curl_easy_setopt(handle, CURLOPT_DNS_CACHE_TIMEOUT, 60*60);
CURLcode return_code = curl_easy_perform(handle);

View file

@ -68,7 +68,7 @@ private:
string path;
string index_prefix;
uint64 transfer_timeout;
long transfer_timeout;
bool failing;
uint64 batch_size;

View file

@ -1,13 +1,30 @@
%{
#include <stdio.h>
#include <netinet/in.h>
#include <vector>
#include "config.h"
#include "RuleMatcher.h"
#include "Reporter.h"
#include "IPAddr.h"
#include "net_util.h"
extern void begin_PS();
extern void end_PS();
Rule* current_rule = 0;
const char* current_rule_file = 0;
// Convert an IPv4 netmask (e.g. 0xffffff00) into its CIDR prefix length.
static uint8_t mask_to_len(uint32_t mask)
{
if ( mask == 0xffffffff )
return 32;
uint32_t x = ~mask + 1;
uint8_t len;
for ( len = 0; len < 32 && (! (x & (1 << len))); ++len );
return 32 - len;
}
%}
%token TOK_COMP
@ -21,6 +38,7 @@ const char* current_rule_file = 0;
%token TOK_IDENT
%token TOK_INT
%token TOK_IP
%token TOK_IP6
%token TOK_IP_OPTIONS
%token TOK_IP_OPTION_SYM
%token TOK_IP_PROTO
@ -49,7 +67,9 @@ const char* current_rule_file = 0;
%type <hdr_test> hdr_expr
%type <range> range rangeopt
%type <vallist> value_list
%type <prefix_val_list> prefix_value_list
%type <mval> TOK_IP value
%type <prefixval> TOK_IP6 prefix_value
%type <prot> TOK_PROT
%type <ptype> TOK_PATTERN_TYPE
@ -57,6 +77,8 @@ const char* current_rule_file = 0;
Rule* rule;
RuleHdrTest* hdr_test;
maskedvalue_list* vallist;
vector<IPPrefix>* prefix_val_list;
IPPrefix* prefixval;
bool bl;
int val;
@ -91,11 +113,11 @@ rule_attr_list:
;
rule_attr:
TOK_DST_IP TOK_COMP value_list
TOK_DST_IP TOK_COMP prefix_value_list
{
current_rule->AddHdrTest(new RuleHdrTest(
RuleHdrTest::IP, 16, 4,
(RuleHdrTest::Comp) $2, $3));
RuleHdrTest::IPDst,
(RuleHdrTest::Comp) $2, *($3)));
}
| TOK_DST_PORT TOK_COMP value_list
@ -123,10 +145,14 @@ rule_attr:
{
int proto = 0;
switch ( $3 ) {
case RuleHdrTest::ICMP: proto = 1; break;
case RuleHdrTest::ICMP: proto = IPPROTO_ICMP; break;
case RuleHdrTest::ICMPv6: proto = IPPROTO_ICMPV6; break;
// signature matching against outer packet headers of IP-in-IP
// tunneling not supported, so do a no-op there
case RuleHdrTest::IP: proto = 0; break;
case RuleHdrTest::TCP: proto = 6; break;
case RuleHdrTest::UDP: proto = 17; break;
case RuleHdrTest::IPv6: proto = 0; break;
case RuleHdrTest::TCP: proto = IPPROTO_TCP; break;
case RuleHdrTest::UDP: proto = IPPROTO_UDP; break;
default:
rules_error("internal_error: unknown protocol");
}
@ -140,16 +166,20 @@ rule_attr:
val->mask = 0xffffffff;
vallist->append(val);
// offset & size params are dummies, actual next proto value in
// header is retrieved dynamically via IP_Hdr::NextProto()
current_rule->AddHdrTest(new RuleHdrTest(
RuleHdrTest::IP, 9, 1,
RuleHdrTest::NEXT, 0, 0,
(RuleHdrTest::Comp) $2, vallist));
}
}
| TOK_IP_PROTO TOK_COMP value_list
{
// offset & size params are dummies, actual next proto value in
// header is retrieved dynamically via IP_Hdr::NextProto()
current_rule->AddHdrTest(new RuleHdrTest(
RuleHdrTest::IP, 9, 1,
RuleHdrTest::NEXT, 0, 0,
(RuleHdrTest::Comp) $2, $3));
}
@ -193,11 +223,11 @@ rule_attr:
| TOK_SAME_IP
{ current_rule->AddCondition(new RuleConditionSameIP()); }
| TOK_SRC_IP TOK_COMP value_list
| TOK_SRC_IP TOK_COMP prefix_value_list
{
current_rule->AddHdrTest(new RuleHdrTest(
RuleHdrTest::IP, 12, 4,
(RuleHdrTest::Comp) $2, $3));
RuleHdrTest::IPSrc,
(RuleHdrTest::Comp) $2, *($3)));
}
| TOK_SRC_PORT TOK_COMP value_list
@ -254,6 +284,38 @@ value_list:
}
;
prefix_value_list:
prefix_value_list ',' prefix_value
{
$$ = $1;
$$->push_back(*($3));
}
| prefix_value_list ',' TOK_IDENT
{
$$ = $1;
id_to_maskedvallist($3, 0, $1);
}
| prefix_value
{
$$ = new vector<IPPrefix>();
$$->push_back(*($1));
}
| TOK_IDENT
{
$$ = new vector<IPPrefix>();
id_to_maskedvallist($1, 0, $$);
}
;
prefix_value:
TOK_IP
{
$$ = new IPPrefix(IPAddr(IPv4, &($1.val), IPAddr::Host),
mask_to_len($1.mask));
}
| TOK_IP6
;
value:
TOK_INT
{ $$.val = $1; $$.mask = 0xffffffff; }

View file

@ -1,24 +1,38 @@
%{
typedef unsigned int uint32;
#include <string.h>
#include <string>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include "RuleMatcher.h"
#include "IPAddr.h"
#include "util.h"
#include "rule-parse.h"
int rules_line_number = 0;
static string extract_ipv6(string s)
{
if ( s.substr(0, 3) == "[0x" )
s = s.substr(3, s.find("]") - 3);
else
s = s.substr(1, s.find("]") - 1);
return s;
}
%}
%x PS
OWS [ \t]*
WS [ \t]+
D [0-9]+
H [0-9a-fA-F]+
HEX {H}
STRING \"([^\n\"]|\\\")*\"
ID ([0-9a-zA-Z_-]+::)*[0-9a-zA-Z_-]+
IP6 ("["({HEX}:){7}{HEX}"]")|("["0x{HEX}({HEX}|:)*"::"({HEX}|:)*"]")|("["({HEX}|:)*"::"({HEX}|:)*"]")|("["({HEX}|:)*"::"({HEX}|:)*({D}"."){3}{D}"]")
RE \/(\\\/)?([^/]|[^\\]\\\/)*\/
META \.[^ \t]+{WS}[^\n]+
PID ([0-9a-zA-Z_-]|"::")+
@ -34,6 +48,19 @@ PID ([0-9a-zA-Z_-]|"::")+
\n ++rules_line_number;
}
{IP6} {
rules_lval.prefixval = new IPPrefix(IPAddr(extract_ipv6(yytext)), 128);
return TOK_IP6;
}
{IP6}{OWS}"/"{OWS}{D} {
char* l = strchr(yytext, '/');
*l++ = '\0';
int len = atoi(l);
rules_lval.prefixval = new IPPrefix(IPAddr(extract_ipv6(yytext)), len);
return TOK_IP6;
}
[!\]\[{}&:,] return rules_text[0];
"<=" { rules_lval.val = RuleHdrTest::LE; return TOK_COMP; }
@ -45,7 +72,9 @@ PID ([0-9a-zA-Z_-]|"::")+
"!=" { rules_lval.val = RuleHdrTest::NE; return TOK_COMP; }
ip { rules_lval.val = RuleHdrTest::IP; return TOK_PROT; }
ip6 { rules_lval.val = RuleHdrTest::IPv6; return TOK_PROT; }
icmp { rules_lval.val = RuleHdrTest::ICMP; return TOK_PROT; }
icmp6 { rules_lval.val = RuleHdrTest::ICMPv6; return TOK_PROT; }
tcp { rules_lval.val = RuleHdrTest::TCP; return TOK_PROT; }
udp { rules_lval.val = RuleHdrTest::UDP; return TOK_PROT; }
@ -123,7 +152,7 @@ http { rules_lval.val = Rule::HTTP_REQUEST; return TOK_PATTERN_TYPE; }
ftp { rules_lval.val = Rule::FTP; return TOK_PATTERN_TYPE; }
finger { rules_lval.val = Rule::FINGER; return TOK_PATTERN_TYPE; }
{D}("."{D}){3}"/"{D} {
{D}("."{D}){3}{OWS}"/"{OWS}{D} {
char* s = strchr(yytext, '/');
*s++ = '\0';

3
src/util-config.h.in Normal file
View file

@ -0,0 +1,3 @@
#define BRO_SCRIPT_INSTALL_PATH "@BRO_SCRIPT_INSTALL_PATH@"
#define BRO_SCRIPT_SOURCE_PATH "@BRO_SCRIPT_SOURCE_PATH@"
#define BRO_BUILD_PATH "@CMAKE_CURRENT_BINARY_DIR@"

View file

@ -1,6 +1,7 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include "config.h"
#include "util-config.h"
#ifdef TIME_WITH_SYS_TIME
# include <sys/time.h>

View file

@ -0,0 +1,2 @@
1350604800.0
0.0

View file

@ -0,0 +1,10 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path reporter
#open 2012-10-19-06-06-36
#fields ts level message location
#types time enum string string
0.000000 Reporter::WARNING strptime conversion failed: fmt:%m d:1980-10-24 (empty)
#close 2012-10-19-06-06-36

View file

@ -3,9 +3,9 @@
#empty_field (empty)
#unset_field -
#path dns
#open 2012-03-07-01-37-58
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval]
1331084278.438444 UWkUyAuUGXf 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51850 2607:f740:b::f93 53 udp 3903 txtpadding_323.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000
1331084293.592245 arKYeMETxOg 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51851 2607:f740:b::f93 53 udp 40849 txtpadding_3230.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000
#close 2012-03-07-01-38-18
#open 2012-10-05-17-47-27
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool
1331084278.438444 UWkUyAuUGXf 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51850 2607:f740:b::f93 53 udp 3903 txtpadding_323.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000 F
1331084293.592245 arKYeMETxOg 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51851 2607:f740:b::f93 53 udp 40849 txtpadding_3230.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000 F
#close 2012-10-05-17-47-27

View file

@ -3,38 +3,38 @@
#empty_field (empty)
#unset_field -
#path packet_filter
#open 2012-07-27-19-14-29
#open 2012-10-08-16-16-08
#fields ts node filter init success
#types time string string bool bool
1343416469.508262 - ip or not ip T T
#close 2012-07-27-19-14-29
1349712968.812610 - ip or not ip T T
#close 2012-10-08-16-16-08
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path packet_filter
#open 2012-07-27-19-14-29
#open 2012-10-08-16-16-09
#fields ts node filter init success
#types time string string bool bool
1343416469.888870 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (tcp port 1080)) or (udp and port 5355)) or (tcp port 22)) or (tcp port 995)) or (port 21)) or (tcp port 25 or tcp port 587)) or (port 6667)) or (tcp port 614)) or (tcp port 990)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T
#close 2012-07-27-19-14-29
1349712969.042094 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (tcp port 1080)) or (udp and port 5355)) or (tcp port 995)) or (tcp port 22)) or (port 21 and port 2811)) or (tcp port 25 or tcp port 587)) or (tcp port 614)) or (tcp port 990)) or (port 6667)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T
#close 2012-10-08-16-16-09
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path packet_filter
#open 2012-07-27-19-14-30
#open 2012-10-08-16-16-09
#fields ts node filter init success
#types time string string bool bool
1343416470.252918 - port 42 T T
#close 2012-07-27-19-14-30
1349712969.270826 - port 42 T T
#close 2012-10-08-16-16-09
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path packet_filter
#open 2012-07-27-19-14-30
#open 2012-10-08-16-16-09
#fields ts node filter init success
#types time string string bool bool
1343416470.614962 - port 56730 T T
#close 2012-07-27-19-14-30
1349712969.499878 - port 56730 T T
#close 2012-10-08-16-16-09

View file

@ -1,15 +0,0 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path dpd
#open 2009-11-18-17-59-51
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto analyzer failure_reason
#types time string addr port addr port enum string string
1258567191.486869 UWkUyAuUGXf 192.168.1.105 57696 192.168.1.1 53 udp TEREDO Teredo payload length [c\x1d\x81\x80\x00\x01\x00\x02\x00\x02\x00\x00\x04amch\x0equestionmarket\x03com\x00\x00\x01\x00...]
1258578181.516140 nQcgTWjvg4c 192.168.1.104 64838 192.168.1.1 53 udp TEREDO Teredo payload length [h\xfd\x81\x80\x00\x01\x00\x02\x00\x03\x00\x02\x08football\x02uk\x07reuters\x03com\x00\x00\x01\x00...]
1258579063.784919 j4u32Pc5bif 192.168.1.104 55778 192.168.1.1 53 udp TEREDO Teredo payload length [j\x12\x81\x80\x00\x01\x00\x02\x00\x04\x00\x00\x08fastflip\x0agooglelabs\x03com\x00\x00\x01\x00...]
1258581768.898165 TEfuqmmG4bh 192.168.1.104 50798 192.168.1.1 53 udp TEREDO Teredo payload length [o\xe3\x81\x80\x00\x01\x00\x02\x00\x04\x00\x04\x03www\x0fnashuatelegraph\x03com\x00\x00\x01\x00...]
1258584478.989528 FrJExwHcSal 192.168.1.104 64963 192.168.1.1 53 udp TEREDO Teredo payload length [e\xbd\x81\x80\x00\x01\x00\x08\x00\x06\x00\x06\x08wellness\x05blogs\x04time\x03com\x00\x00\x01\x00...]
1258600683.934672 5OKnoww6xl4 192.168.1.103 59838 192.168.1.1 53 udp TEREDO Teredo payload length [h\xf0\x81\x80\x00\x01\x00\x01\x00\x02\x00\x00\x06update\x0csanasecurity\x03com\x00\x00\x01\x00...]
#close 2009-11-19-03-18-03

View file

@ -0,0 +1,10 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path known_services
#open 2012-10-02-20-10-05
#fields ts host port_num port_proto service
#types time addr port enum table[string]
1258567191.405770 192.168.1.1 53 udp TEREDO
#close 2012-10-02-20-10-05

View file

@ -22,7 +22,7 @@
1210953052.202579 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 udp teredo 8.928880 129 48 SF - 0 Dd 2 185 1 76 (empty)
1210953060.829233 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 udp teredo 13.293994 2359 11243 SF - 0 Dd 12 2695 13 11607 (empty)
1210953058.933954 iE6yhOq3SF 0.0.0.0 68 255.255.255.255 67 udp - - - - S0 - 0 D 1 328 0 0 (empty)
1210953052.324629 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 udp teredo - - - SHR - 0 d 0 0 1 137 (empty)
1210953052.324629 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 udp - - - - SHR - 0 d 0 0 1 137 (empty)
1210953046.591933 UWkUyAuUGXf 192.168.2.16 138 192.168.2.255 138 udp - 28.448321 416 0 S0 - 0 D 2 472 0 0 (empty)
1210953052.324629 FrJExwHcSal fe80::8000:f227:bec8:61af 134 fe80::8000:ffff:ffff:fffd 133 icmp - - - - OTH - 0 - 1 88 0 0 TEfuqmmG4bh
1210953060.829303 qCaWGmzFtM5 2001:0:4137:9e50:8000:f12a:b9c8:2815 128 2001:4860:0:2001::68 129 icmp - 0.463615 4 4 OTH - 0 - 1 52 1 52 GSxOnSLghOa,nQcgTWjvg4c

View file

@ -9,7 +9,7 @@
1340127577.354166 FrJExwHcSal 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 tcp http 0.052829 1675 10467 S1 - 0 ShADad 10 2279 12 11191 j4u32Pc5bif
1340127577.336558 UWkUyAuUGXf 192.168.2.16 3797 65.55.158.80 3544 udp teredo 0.010291 129 52 SF - 0 Dd 2 185 1 80 (empty)
1340127577.341510 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 udp teredo 0.065485 2367 11243 SF - 0 Dd 12 2703 13 11607 (empty)
1340127577.339015 k6kgXLOoSKl 192.168.2.16 3797 65.55.158.81 3544 udp teredo - - - SHR - 0 d 0 0 1 137 (empty)
1340127577.339015 k6kgXLOoSKl 192.168.2.16 3797 65.55.158.81 3544 udp - - - - SHR - 0 d 0 0 1 137 (empty)
1340127577.339015 nQcgTWjvg4c fe80::8000:f227:bec8:61af 134 fe80::8000:ffff:ffff:fffd 133 icmp - - - - OTH - 0 - 1 88 0 0 k6kgXLOoSKl
1340127577.343969 TEfuqmmG4bh 2001:0:4137:9e50:8000:f12a:b9c8:2815 128 2001:4860:0:2001::68 129 icmp - 0.007778 4 4 OTH - 0 - 1 52 1 52 UWkUyAuUGXf,j4u32Pc5bif
1340127577.336558 arKYeMETxOg fe80::8000:ffff:ffff:fffd 133 ff02::2 134 icmp - - - - OTH - 0 - 1 64 0 0 UWkUyAuUGXf

View file

@ -3,9 +3,9 @@
#empty_field (empty)
#unset_field -
#path weird
#open 2012-06-19-17-39-37
#open 2012-10-02-16-53-03
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1340127577.341510 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 Teredo_bubble_with_payload - F bro
1340127577.346849 UWkUyAuUGXf 192.168.2.16 3797 65.55.158.80 3544 Teredo_bubble_with_payload - F bro
1340127577.349292 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 Teredo_bubble_with_payload - F bro
#close 2012-06-19-17-39-37
#close 2012-10-02-16-53-03

View file

@ -77,6 +77,7 @@ scripts/base/init-default.bro
scripts/base/protocols/conn/./main.bro
scripts/base/protocols/conn/./contents.bro
scripts/base/protocols/conn/./inactivity.bro
scripts/base/protocols/conn/./polling.bro
scripts/base/protocols/dns/__load__.bro
scripts/base/protocols/dns/./consts.bro
scripts/base/protocols/dns/./main.bro
@ -84,6 +85,11 @@ scripts/base/init-default.bro
scripts/base/protocols/ftp/./utils-commands.bro
scripts/base/protocols/ftp/./main.bro
scripts/base/protocols/ftp/./file-extract.bro
scripts/base/protocols/ftp/./gridftp.bro
scripts/base/protocols/ssl/__load__.bro
scripts/base/protocols/ssl/./consts.bro
scripts/base/protocols/ssl/./main.bro
scripts/base/protocols/ssl/./mozilla-ca-list.bro
scripts/base/protocols/http/__load__.bro
scripts/base/protocols/http/./main.bro
scripts/base/protocols/http/./utils.bro
@ -102,10 +108,6 @@ scripts/base/init-default.bro
scripts/base/protocols/socks/./main.bro
scripts/base/protocols/ssh/__load__.bro
scripts/base/protocols/ssh/./main.bro
scripts/base/protocols/ssl/__load__.bro
scripts/base/protocols/ssl/./consts.bro
scripts/base/protocols/ssl/./main.bro
scripts/base/protocols/ssl/./mozilla-ca-list.bro
scripts/base/protocols/syslog/__load__.bro
scripts/base/protocols/syslog/./consts.bro
scripts/base/protocols/syslog/./main.bro

View file

@ -1,5 +1,5 @@
{
[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={
[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, ns=4242, sc={
2,
4,
1,
@ -12,3 +12,4 @@ BB
}, vc=[10, 20, 30], ve=[]]
}
4242

View file

@ -4,13 +4,6 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
@ -23,13 +16,6 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
@ -42,13 +28,6 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
@ -61,13 +40,6 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
@ -80,13 +52,6 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
@ -99,13 +64,6 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
@ -118,16 +76,10 @@ print outfile, A::description;
print outfile, A::tpe;
print outfile, A::i;
print outfile, A::b;
try = try + 1;
if (7 == try)
{
close(outfile);
terminate();
}
}, config={
}]
Input::EVENT_NEW
7
T
End-of-data

View file

@ -0,0 +1,7 @@
new_connection, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp]
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 0
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 1
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 2
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 3
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 4
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 5

View file

@ -0,0 +1,4 @@
new_connection, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp]
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 0
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 1
callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 2

View file

@ -0,0 +1,10 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path dns
#open 2012-10-05-15-59-39
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool
1349445121.080922 UWkUyAuUGXf 10.0.0.64 49204 146.186.163.66 53 udp 17323 psu.edu 1 C_INTERNET 28 AAAA 0 NOERROR F F T F 0 - - F
#close 2012-10-05-15-59-39

View file

@ -0,0 +1,11 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open 2012-10-05-21-45-15
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
1348168976.274919 UWkUyAuUGXf 192.168.57.103 60108 192.168.57.101 2811 tcp ssl,ftp,gridftp 0.294743 4491 6659 SF - 0 ShAdDaFf 22 5643 21 7759 (empty)
1348168976.546371 arKYeMETxOg 192.168.57.103 35391 192.168.57.101 55968 tcp ssl,gridftp-data 0.011938 2135 3196 S1 - 0 ShADad 8 2559 6 3516 (empty)
#close 2012-10-05-21-45-15

View file

@ -0,0 +1,10 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path notice
#open 2012-10-05-21-45-15
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network
#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double addr string subnet
1348168976.558309 arKYeMETxOg 192.168.57.103 35391 192.168.57.101 55968 tcp GridFTP::Data_Channel GridFTP data channel over threshold 2 bytes - 192.168.57.103 192.168.57.101 55968 - bro Notice::ACTION_LOG 6 3600.000000 F - - - - - - - -
#close 2012-10-05-21-45-15

View file

@ -0,0 +1,11 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path ssl
#open 2012-10-05-21-45-15
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version cipher server_name session_id subject issuer_subject not_valid_before not_valid_after last_alert client_subject client_issuer_subject
#types time string addr port addr port string string string string string string time time string string string
1348168976.508038 UWkUyAuUGXf 192.168.57.103 60108 192.168.57.101 2811 TLSv10 TLS_RSA_WITH_AES_256_CBC_SHA - - CN=host/alpha,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=Globus Simple CA,OU=simpleCA-alpha,OU=GlobusTest,O=Grid 1348161979.000000 1379697979.000000 - CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid
1348168976.551422 arKYeMETxOg 192.168.57.103 35391 192.168.57.101 55968 TLSv10 TLS_RSA_WITH_NULL_SHA - - CN=932373381,CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid 1348168676.000000 1348206441.000000 - CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid
#close 2012-10-05-21-45-15

View file

@ -3,8 +3,8 @@
#empty_field (empty)
#unset_field -
#path ssl
#open 2012-04-27-14-53-12
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version cipher server_name session_id subject issuer_subject not_valid_before not_valid_after last_alert
#types time string addr port addr port string string string string string string time time string
1335538392.319381 UWkUyAuUGXf 192.168.1.105 62045 74.125.224.79 443 TLSv10 TLS_ECDHE_RSA_WITH_RC4_128_SHA ssl.gstatic.com - CN=*.gstatic.com,O=Google Inc,L=Mountain View,ST=California,C=US CN=Google Internet Authority,O=Google Inc,C=US 1334102677.000000 1365639277.000000 -
#close 2012-04-27-14-53-16
#open 2012-10-08-16-18-56
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version cipher server_name session_id subject issuer_subject not_valid_before not_valid_after last_alert client_subject client_issuer_subject
#types time string addr port addr port string string string string string string time time string string string
1335538392.319381 UWkUyAuUGXf 192.168.1.105 62045 74.125.224.79 443 TLSv10 TLS_ECDHE_RSA_WITH_RC4_128_SHA ssl.gstatic.com - CN=*.gstatic.com,O=Google Inc,L=Mountain View,ST=California,C=US CN=Google Internet Authority,O=Google Inc,C=US 1334102677.000000 1365639277.000000 - - -
#close 2012-10-08-16-18-56

View file

@ -3,8 +3,8 @@
#empty_field (empty)
#unset_field -
#path dns
#open 1999-06-28-23-40-27
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs auth addl
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] table[string] table[string]
930613226.529070 UWkUyAuUGXf 212.180.42.100 25000 131.243.64.3 53 tcp 34798 - - - - - 0 NOERROR F F F T 0 4.3.2.1 31337.000000 - -
#close 1999-06-28-23-40-27
#open 2012-10-05-17-47-40
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected auth addl
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool table[string] table[string]
930613226.518174 UWkUyAuUGXf 212.180.42.100 25000 131.243.64.3 53 tcp 34798 - - - - - 0 NOERROR F F F T 0 4.3.2.1 31337.000000 F - -
#close 2012-10-05-17-47-40

View file

@ -0,0 +1,79 @@
dpd_config, {
}
signature_match [orig_h=141.142.220.235, orig_p=50003/tcp, resp_h=199.233.217.249, resp_p=21/tcp] - matched my_ftp_client
ftp_reply 199.233.217.249:21 - 220 ftp.NetBSD.org FTP server (NetBSD-ftpd 20100320) ready.
ftp_request 141.142.220.235:50003 - USER anonymous
ftp_reply 199.233.217.249:21 - 331 Guest login ok, type your name as password.
signature_match [orig_h=141.142.220.235, orig_p=50003/tcp, resp_h=199.233.217.249, resp_p=21/tcp] - matched my_ftp_server
ftp_request 141.142.220.235:50003 - PASS test
ftp_reply 199.233.217.249:21 - 230
ftp_reply 199.233.217.249:21 - 0 The NetBSD Project FTP Server located in Redwood City, CA, USA
ftp_reply 199.233.217.249:21 - 0 1 Gbps connectivity courtesy of , ,
ftp_reply 199.233.217.249:21 - 0 Internet Systems Consortium WELCOME! /( )`
ftp_reply 199.233.217.249:21 - 0 \ \___ / |
ftp_reply 199.233.217.249:21 - 0 +--- Currently Supported Platforms ----+ /- _ `-/ '
ftp_reply 199.233.217.249:21 - 0 | acorn[26,32], algor, alpha, amd64, | (/\/ \ \ /\
ftp_reply 199.233.217.249:21 - 0 | amiga[,ppc], arc, atari, bebox, | / / | ` \
ftp_reply 199.233.217.249:21 - 0 | cats, cesfic, cobalt, dreamcast, | O O ) / |
ftp_reply 199.233.217.249:21 - 0 | evb[arm,mips,ppc,sh3], hp[300,700], | `-^--'`< '
ftp_reply 199.233.217.249:21 - 0 | hpc[arm,mips,sh], i386, | (_.) _ ) /
ftp_reply 199.233.217.249:21 - 0 | ibmnws, iyonix, luna68k, | .___/` /
ftp_reply 199.233.217.249:21 - 0 | mac[m68k,ppc], mipsco, mmeye, | `-----' /
ftp_reply 199.233.217.249:21 - 0 | mvme[m68k,ppc], netwinders, | <----. __ / __ \
ftp_reply 199.233.217.249:21 - 0 | news[m68k,mips], next68k, ofppc, | <----|====O)))==) \) /====
ftp_reply 199.233.217.249:21 - 0 | playstation2, pmax, prep, sandpoint, | <----' `--' `.__,' \
ftp_reply 199.233.217.249:21 - 0 | sbmips, sgimips, shark, sparc[,64], | | |
ftp_reply 199.233.217.249:21 - 0 | sun[2,3], vax, x68k, xen | \ /
ftp_reply 199.233.217.249:21 - 0 +--------------------------------------+ ______( (_ / \_____
ftp_reply 199.233.217.249:21 - 0 See our website at http://www.NetBSD.org/ ,' ,-----' | \
ftp_reply 199.233.217.249:21 - 0 We log all FTP transfers and commands. `--{__________) (FL) \/
ftp_reply 199.233.217.249:21 - 0 230-
ftp_reply 199.233.217.249:21 - 0 EXPORT NOTICE
ftp_reply 199.233.217.249:21 - 0
ftp_reply 199.233.217.249:21 - 0 Please note that portions of this FTP site contain cryptographic
ftp_reply 199.233.217.249:21 - 0 software controlled under the Export Administration Regulations (EAR).
ftp_reply 199.233.217.249:21 - 0
ftp_reply 199.233.217.249:21 - 0 None of this software may be downloaded or otherwise exported or
ftp_reply 199.233.217.249:21 - 0 re-exported into (or to a national or resident of) Cuba, Iran, Libya,
ftp_reply 199.233.217.249:21 - 0 Sudan, North Korea, Syria or any other country to which the U.S. has
ftp_reply 199.233.217.249:21 - 0 embargoed goods.
ftp_reply 199.233.217.249:21 - 0
ftp_reply 199.233.217.249:21 - 0 By downloading or using said software, you are agreeing to the
ftp_reply 199.233.217.249:21 - 0 foregoing and you are representing and warranting that you are not
ftp_reply 199.233.217.249:21 - 0 located in, under the control of, or a national or resident of any
ftp_reply 199.233.217.249:21 - 0 such country or on any such list.
ftp_reply 199.233.217.249:21 - 230 Guest login ok, access restrictions apply.
ftp_request 141.142.220.235:50003 - SYST
ftp_reply 199.233.217.249:21 - 215 UNIX Type: L8 Version: NetBSD-ftpd 20100320
ftp_request 141.142.220.235:50003 - PASV
ftp_reply 199.233.217.249:21 - 227 Entering Passive Mode (199,233,217,249,221,90)
ftp_request 141.142.220.235:50003 - LIST
ftp_reply 199.233.217.249:21 - 150 Opening ASCII mode data connection for '/bin/ls'.
ftp_reply 199.233.217.249:21 - 226 Transfer complete.
ftp_request 141.142.220.235:50003 - TYPE I
ftp_reply 199.233.217.249:21 - 200 Type set to I.
ftp_request 141.142.220.235:50003 - PASV
ftp_reply 199.233.217.249:21 - 227 Entering Passive Mode (199,233,217,249,221,91)
ftp_request 141.142.220.235:50003 - RETR robots.txt
ftp_reply 199.233.217.249:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes).
ftp_reply 199.233.217.249:21 - 226 Transfer complete.
ftp_request 141.142.220.235:50003 - TYPE A
ftp_reply 199.233.217.249:21 - 200 Type set to A.
ftp_request 141.142.220.235:50003 - PORT 141,142,220,235,131,46
ftp_reply 199.233.217.249:21 - 200 PORT command successful.
ftp_request 141.142.220.235:50003 - LIST
ftp_reply 199.233.217.249:21 - 150 Opening ASCII mode data connection for '/bin/ls'.
ftp_reply 199.233.217.249:21 - 226 Transfer complete.
ftp_request 141.142.220.235:50003 - TYPE I
ftp_reply 199.233.217.249:21 - 200 Type set to I.
ftp_request 141.142.220.235:50003 - PORT 141,142,220,235,147,203
ftp_reply 199.233.217.249:21 - 200 PORT command successful.
ftp_request 141.142.220.235:50003 - RETR robots.txt
ftp_reply 199.233.217.249:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes).
ftp_reply 199.233.217.249:21 - 226 Transfer complete.
ftp_request 141.142.220.235:50003 - QUIT
ftp_reply 199.233.217.249:21 - 221
ftp_reply 199.233.217.249:21 - 0 Data traffic for this session was 154 bytes in 2 files.
ftp_reply 199.233.217.249:21 - 0 Total traffic for this session was 4037 bytes in 4 transfers.
ftp_reply 199.233.217.249:21 - 221 Thank you for using the FTP service on ftp.NetBSD.org.

View file

@ -0,0 +1,100 @@
dpd_config, {
}
signature_match [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] - matched my_ftp_client
ftp_reply [2001:470:4867:99::21]:21 - 220 ftp.NetBSD.org FTP server (NetBSD-ftpd 20100320) ready.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - USER anonymous
ftp_reply [2001:470:4867:99::21]:21 - 331 Guest login ok, type your name as password.
signature_match [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] - matched my_ftp_server
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - PASS test
ftp_reply [2001:470:4867:99::21]:21 - 230
ftp_reply [2001:470:4867:99::21]:21 - 0 The NetBSD Project FTP Server located in Redwood City, CA, USA
ftp_reply [2001:470:4867:99::21]:21 - 0 1 Gbps connectivity courtesy of , ,
ftp_reply [2001:470:4867:99::21]:21 - 0 Internet Systems Consortium WELCOME! /( )`
ftp_reply [2001:470:4867:99::21]:21 - 0 \ \___ / |
ftp_reply [2001:470:4867:99::21]:21 - 0 +--- Currently Supported Platforms ----+ /- _ `-/ '
ftp_reply [2001:470:4867:99::21]:21 - 0 | acorn[26,32], algor, alpha, amd64, | (/\/ \ \ /\
ftp_reply [2001:470:4867:99::21]:21 - 0 | amiga[,ppc], arc, atari, bebox, | / / | ` \
ftp_reply [2001:470:4867:99::21]:21 - 0 | cats, cesfic, cobalt, dreamcast, | O O ) / |
ftp_reply [2001:470:4867:99::21]:21 - 0 | evb[arm,mips,ppc,sh3], hp[300,700], | `-^--'`< '
ftp_reply [2001:470:4867:99::21]:21 - 0 | hpc[arm,mips,sh], i386, | (_.) _ ) /
ftp_reply [2001:470:4867:99::21]:21 - 0 | ibmnws, iyonix, luna68k, | .___/` /
ftp_reply [2001:470:4867:99::21]:21 - 0 | mac[m68k,ppc], mipsco, mmeye, | `-----' /
ftp_reply [2001:470:4867:99::21]:21 - 0 | mvme[m68k,ppc], netwinders, | <----. __ / __ \
ftp_reply [2001:470:4867:99::21]:21 - 0 | news[m68k,mips], next68k, ofppc, | <----|====O)))==) \) /====
ftp_reply [2001:470:4867:99::21]:21 - 0 | playstation2, pmax, prep, sandpoint, | <----' `--' `.__,' \
ftp_reply [2001:470:4867:99::21]:21 - 0 | sbmips, sgimips, shark, sparc[,64], | | |
ftp_reply [2001:470:4867:99::21]:21 - 0 | sun[2,3], vax, x68k, xen | \ /
ftp_reply [2001:470:4867:99::21]:21 - 0 +--------------------------------------+ ______( (_ / \_____
ftp_reply [2001:470:4867:99::21]:21 - 0 See our website at http://www.NetBSD.org/ ,' ,-----' | \
ftp_reply [2001:470:4867:99::21]:21 - 0 We log all FTP transfers and commands. `--{__________) (FL) \/
ftp_reply [2001:470:4867:99::21]:21 - 0 230-
ftp_reply [2001:470:4867:99::21]:21 - 0 EXPORT NOTICE
ftp_reply [2001:470:4867:99::21]:21 - 0
ftp_reply [2001:470:4867:99::21]:21 - 0 Please note that portions of this FTP site contain cryptographic
ftp_reply [2001:470:4867:99::21]:21 - 0 software controlled under the Export Administration Regulations (EAR).
ftp_reply [2001:470:4867:99::21]:21 - 0
ftp_reply [2001:470:4867:99::21]:21 - 0 None of this software may be downloaded or otherwise exported or
ftp_reply [2001:470:4867:99::21]:21 - 0 re-exported into (or to a national or resident of) Cuba, Iran, Libya,
ftp_reply [2001:470:4867:99::21]:21 - 0 Sudan, North Korea, Syria or any other country to which the U.S. has
ftp_reply [2001:470:4867:99::21]:21 - 0 embargoed goods.
ftp_reply [2001:470:4867:99::21]:21 - 0
ftp_reply [2001:470:4867:99::21]:21 - 0 By downloading or using said software, you are agreeing to the
ftp_reply [2001:470:4867:99::21]:21 - 0 foregoing and you are representing and warranting that you are not
ftp_reply [2001:470:4867:99::21]:21 - 0 located in, under the control of, or a national or resident of any
ftp_reply [2001:470:4867:99::21]:21 - 0 such country or on any such list.
ftp_reply [2001:470:4867:99::21]:21 - 230 Guest login ok, access restrictions apply.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - SYST
ftp_reply [2001:470:4867:99::21]:21 - 215 UNIX Type: L8 Version: NetBSD-ftpd 20100320
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - FEAT
ftp_reply [2001:470:4867:99::21]:21 - 211 Features supported
ftp_reply [2001:470:4867:99::21]:21 - 0 MDTM
ftp_reply [2001:470:4867:99::21]:21 - 0 MLST Type*;Size*;Modify*;Perm*;Unique*;
ftp_reply [2001:470:4867:99::21]:21 - 0 REST STREAM
ftp_reply [2001:470:4867:99::21]:21 - 0 SIZE
ftp_reply [2001:470:4867:99::21]:21 - 0 TVFS
ftp_reply [2001:470:4867:99::21]:21 - 211 End
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - PWD
ftp_reply [2001:470:4867:99::21]:21 - 257 "/" is the current directory.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPSV
ftp_reply [2001:470:4867:99::21]:21 - 229 Entering Extended Passive Mode (|||57086|)
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - LIST
ftp_reply [2001:470:4867:99::21]:21 - 150 Opening ASCII mode data connection for '/bin/ls'.
ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPSV
ftp_reply [2001:470:4867:99::21]:21 - 229 Entering Extended Passive Mode (|||57087|)
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - NLST
ftp_reply [2001:470:4867:99::21]:21 - 150 Opening ASCII mode data connection for 'file list'.
ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - TYPE I
ftp_reply [2001:470:4867:99::21]:21 - 200 Type set to I.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - SIZE robots.txt
ftp_reply [2001:470:4867:99::21]:21 - 213 77
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPSV
ftp_reply [2001:470:4867:99::21]:21 - 229 Entering Extended Passive Mode (|||57088|)
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - RETR robots.txt
ftp_reply [2001:470:4867:99::21]:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes).
ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - MDTM robots.txt
ftp_reply [2001:470:4867:99::21]:21 - 213 20090816112038
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - SIZE robots.txt
ftp_reply [2001:470:4867:99::21]:21 - 213 77
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPRT |2|2001:470:1f11:81f:c999:d94:aa7c:2e3e|49189|
ftp_reply [2001:470:4867:99::21]:21 - 200 EPRT command successful.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - RETR robots.txt
ftp_reply [2001:470:4867:99::21]:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes).
ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - MDTM robots.txt
ftp_reply [2001:470:4867:99::21]:21 - 213 20090816112038
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - TYPE A
ftp_reply [2001:470:4867:99::21]:21 - 200 Type set to A.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPRT |2|2001:470:1f11:81f:c999:d94:aa7c:2e3e|49190|
ftp_reply [2001:470:4867:99::21]:21 - 200 EPRT command successful.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - LIST
ftp_reply [2001:470:4867:99::21]:21 - 150 Opening ASCII mode data connection for '/bin/ls'.
ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete.
ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - QUIT
ftp_reply [2001:470:4867:99::21]:21 - 221
ftp_reply [2001:470:4867:99::21]:21 - 0 Data traffic for this session was 154 bytes in 2 files.
ftp_reply [2001:470:4867:99::21]:21 - 0 Total traffic for this session was 4512 bytes in 5 transfers.
ftp_reply [2001:470:4867:99::21]:21 - 221 Thank you for using the FTP service on ftp.NetBSD.org.

View file

@ -0,0 +1,3 @@
dpd_config, {
}

View file

@ -0,0 +1,3 @@
dpd_config, {
}

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq-list

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne-list

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq-list

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne-list

View file

@ -0,0 +1 @@
signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq-list

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne-list

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq-list

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne-list

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne

View file

@ -0,0 +1 @@
signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-eq

Some files were not shown because too many files have changed in this diff.