Merge remote-tracking branch 'origin/master' into fastpath

Bernhard Amann 2014-02-26 14:17:11 -08:00
commit 89bc959cb0
50 changed files with 719 additions and 278 deletions

103
CHANGES
View file

@ -1,4 +1,107 @@
2.2-184 | 2014-02-24 07:28:18 -0800
* New TLS constants from
https://tools.ietf.org/html/draft-bmoeller-tls-downgrade-scsv-01.
(Bernhard Amann)
2.2-180 | 2014-02-20 17:29:14 -0800
* New SSL alert descriptions from
https://tools.ietf.org/html/draft-ietf-tls-applayerprotoneg-04.
(Bernhard Amann)
* Update SQLite. (Bernhard Amann)
2.2-177 | 2014-02-20 17:27:46 -0800
* Update to libmagic version 5.17. Addresses BIT-1136. (Jon Siwek)
2.2-174 | 2014-02-14 12:07:04 -0800
* Support for MPLS over VLAN. (Chris Kanich)
2.2-173 | 2014-02-14 10:50:15 -0800
* Fix misidentification of SOCKS traffic that in particular seemed
to happen a lot with DCE/RPC traffic. (Vlad Grigorescu)
2.2-170 | 2014-02-13 16:42:07 -0800
* Refactor DNS script's state management to improve performance.
(Jon Siwek)
* Revert "Expanding the HTTP methods used in the signature to detect
HTTP traffic." (Robin Sommer)
2.2-167 | 2014-02-12 20:17:39 -0800
* Increase timeouts of some unit tests. (Jon Siwek)
* Fix memory leak in modbus analyzer. Would happen if there's a
'modbus_read_fifo_queue_response' event handler. (Jon Siwek)
* Add channel_id TLS extension number. This number is not IANA
defined, but we see it being actively used. (Bernhard Amann)
* Test baseline updates for DNS change. (Robin Sommer)
2.2-158 | 2014-02-09 23:45:39 -0500
* Change dns.log to include only standard DNS queries. (Jon Siwek)
* Improve DNS analysis. (Jon Siwek)
- Fix parsing of empty question sections (when QDCOUNT == 0). In this
case, the DNS parser would extract two 2-byte fields for use in either
"dns_query_reply" or "dns_rejected" events (dependent on value of
RCODE) as qclass and qtype parameters. This is not correct, because
such fields don't actually exist in the DNS message format when
QDCOUNT is 0. As a result, these events are no longer raised when
there's an empty question section. Scripts that depend on checking
for an empty question section can do that in the "dns_message" event.
- Add a new "dns_unknown_reply" event, for when Bro does not know how
to fully parse a particular resource record type. This helps fix a
problem in the default DNS scripts where the logic to complete
request-reply pair matching doesn't work because it's waiting on more
RR events to complete the reply. i.e. it expects ANCOUNT number of
dns_*_reply events and will wait until it gets that many before
completing a request-reply pair and logging it to dns.log. This could
cause bogus replies to match a previous request if they happen to
share a DNS transaction ID. (Jon Siwek)
- The previous method of matching queries with replies was still
unreliable in cases where the reply contains no answers. The new code
also takes extra measures to avoid pending state growing too large in
cases where the condition to match a query with a corresponding reply is
never met, but yet DNS messages continue to be exchanged over the same
connection 5-tuple (preventing cleanup of the pending state). (Jon Siwek)
* Updates to httpmonitor and mimestats documentation. (Jeannette Dopheide)
* Updates to Logs and Cluster documentation (Jeannette Dopheide)
2.2-147 | 2014-02-07 08:06:53 -0800
* Fix x509-extension test sometimes failing. (Bernhard Amann)
2.2-144 | 2014-02-06 20:31:18 -0800
* Fixing bug in POP3 analyzer. With certain input the analyzer could
end up trying to write to non-writable memory. (Robin Sommer)
2.2-140 | 2014-02-06 17:58:04 -0800
* Fixing memory leaks in input framework. (Robin Sommer)
* Add script to detect filtered TCP traces. Addresses BIT-1119. (Jon
Siwek)
2.2-137 | 2014-02-04 09:09:55 -0800
* Minor unified2 script documentation fix. (Jon Siwek)
2.2-135 | 2014-01-31 11:09:36 -0800
* Added some grammar and spelling corrections to Installation and

View file

@ -56,7 +56,7 @@ set(LIBMAGIC_LIB_DIR ${LIBMAGIC_PREFIX}/lib)
set(LIBMAGIC_LIBRARY ${LIBMAGIC_LIB_DIR}/libmagic.a)
ExternalProject_Add(libmagic
PREFIX ${LIBMAGIC_PREFIX}
URL ${CMAKE_CURRENT_SOURCE_DIR}/src/3rdparty/file-5.16.tar.gz
URL ${CMAKE_CURRENT_SOURCE_DIR}/src/3rdparty/file-5.17.tar.gz
CONFIGURE_COMMAND ./configure --enable-static --disable-shared
--prefix=${LIBMAGIC_PREFIX}
--includedir=${LIBMAGIC_INCLUDE_DIR}

View file

@ -1 +1 @@
2.2-135
2.2-184

@ -1 +1 @@
Subproject commit f7229b4c6ff0208eaa082273074652e8ab5a17c5
Subproject commit 66793ec3c602439e235bee705b654aefb7ac8dec

@ -1 +1 @@
Subproject commit 23ff11bf0edbad2c6f1acbeb3f9a029ff4b61785
Subproject commit c3a65f13063291ffcfd6d05c09d7724c02e9a40d

View file

@ -16,18 +16,18 @@ In the following sections, we present a few examples of common uses of
Bro as an IDS.
------------------------------------------------
Detecting an FTP Bruteforce attack and notifying
Detecting an FTP Brute-force Attack and Notifying
------------------------------------------------
For the purpose of this exercise, we define FTP bruteforcing as too many
For the purpose of this exercise, we define FTP brute-forcing as too many
rejected usernames and passwords occurring from a single address. We
start by defining a threshold for the number of attempts and a
monitoring interval in minutes as well as a new notice type.
start by defining a threshold for the number of attempts, a monitoring
interval (in minutes), and a new notice type.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
:lines: 9-25
Now, using the ftp_reply event, we check for error codes from the `500
Using the ftp_reply event, we check for error codes from the `500
series <http://en.wikipedia.org/wiki/List_of_FTP_server_return_codes>`_
for the "USER" and "PASS" commands, representing rejected usernames or
passwords. For this, we can use the :bro:see:`FTP::parse_ftp_reply_code`
@ -38,9 +38,9 @@ function to break down the reply code and check if the first digit is a
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
:lines: 52-60
Next, we use the SumStats framework to raise a notice of the attack of
the attack when the number of failed attempts exceeds the specified
threshold during the measuring interval.
Next, we use the SumStats framework to raise a notice of the attack when
the number of failed attempts exceeds the specified threshold during the
measuring interval.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
:lines: 28-50
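To make the mechanics concrete, here is a rough, illustrative sketch of the
observation side (the stream name "ftp.failed_auth" is made up; the shipped
detect-bruteforcing.bro differs in its details)::

    event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool)
        {
        if ( ! c?$ftp || ! c$ftp?$cmdarg )
            return;

        local cmd = c$ftp$cmdarg$cmd;

        # A 5xx reply to USER or PASS is a rejected login attempt.
        if ( (cmd == "USER" || cmd == "PASS") &&
             FTP::parse_ftp_reply_code(code)$x == 5 )
            SumStats::observe("ftp.failed_auth",
                              [$host=c$id$orig_h],
                              [$str=cat(c$id$resp_h)]);
        }

The SumStats side then counts these observations per originating host over the
measurement interval and raises the notice once the threshold is crossed.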
@ -56,14 +56,14 @@ Below is the final code for our script.
As a final note, the :doc:`detect-bruteforcing.bro
</scripts/policy/protocols/ftp/detect-bruteforcing.bro>` script above is
include with Bro out of the box, so you only need to load it at startup
to instruct Bro to detect and notify of FTP bruteforce attacks.
included with Bro out of the box. Use this feature by loading this script
during startup.
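In a default installation one plausible way to do that is a single line in
local.bro (the path mirrors the script's location in the policy tree)::

    @load protocols/ftp/detect-bruteforcing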
-------------
Other Attacks
-------------
Detecting SQL Injection attacks
Detecting SQL Injection Attacks
-------------------------------
Checking files against known malware hashes
@ -76,5 +76,4 @@ list of known malware hashes. Bro simplifies this task by offering a
:doc:`detect-MHR.bro </scripts/policy/frameworks/files/detect-MHR.bro>`
script that creates and compares hashes against the `Malware Hash
Registry <https://www.team-cymru.org/Services/MHR/>`_ maintained by Team
Cymru. You only need to load this script along with your other scripts
at startup time.
Cymru. Use this feature by loading this script during startup.
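As with the FTP example above, a plausible one-liner for local.bro in a
default installation would be::

    @load frameworks/files/detect-MHR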

View file

@ -6,7 +6,13 @@ Setting up a Bro Cluster
Intro
------
Bro is not multithreaded, so once the limitations of a single processor core are reached, the only option currently is to spread the workload across many cores or even many physical computers. The cluster deployment scenario for Bro is the current solution to build these larger systems. The accompanying tools and scripts provide the structure to easily manage many Bro processes examining packets and doing correlation activities but acting as a singular, cohesive entity.
Bro is not multithreaded, so once the limitations of a single processor core
are reached the only option currently is to spread the workload across many
cores, or even many physical computers. The cluster deployment scenario for
Bro is the current solution to build these larger systems. The accompanying
tools and scripts provide the structure to easily manage many Bro processes
examining packets and doing correlation activities but acting as a singular,
cohesive entity.
Architecture
---------------
@ -17,42 +23,98 @@ The figure below illustrates the main components of a Bro cluster.
Tap
***
This is a mechanism that splits the packet stream in order to make a copy
available for inspection. Examples include the monitoring port on a switch and
an optical splitter for fiber networks.
The tap is a mechanism that splits the packet stream in order to make a copy
available for inspection. Examples include the monitoring port on a switch
and an optical splitter on fiber networks.
Frontend
********
This is a discrete hardware device or on-host technique that will split your traffic into many streams or flows. The Bro binary does not do this job. There are numerous ways to accomplish this task, some of which are described below in `Frontend Options`_.
The frontend is a discrete hardware device or on-host technique that splits
traffic into many streams or flows. The Bro binary does not do this job.
There are numerous ways to accomplish this task, some of which are described
below in `Frontend Options`_.
Manager
*******
This is a Bro process which has two primary jobs. It receives log messages and notices from the rest of the nodes in the cluster using the Bro communications protocol. The result is that you will end up with single logs for each log instead of many discrete logs that you have to later combine in some manner with post processing. The manager also takes the opportunity to de-duplicate notices and it has the ability to do so since it's acting as the choke point for notices and how notices might be processed into actions such as emailing, paging, or blocking.
The manager is a Bro process that has two primary jobs. It receives log
messages and notices from the rest of the nodes in the cluster using the Bro
communications protocol. The result is a single log instead of many
discrete logs that you have to combine in some manner with post-processing.
The manager also takes the opportunity to de-duplicate notices, and it has the
ability to do so since it's acting as the choke point for notices and how notices
might be processed into actions (e.g., emailing, paging, or blocking).
The manager process is started first by BroControl and it only opens its designated port and waits for connections, it doesn't initiate any connections to the rest of the cluster. Once the workers are started and connect to the manager, logs and notices will start arriving to the manager process from the workers.
The manager process is started first by BroControl and it only opens its
designated port and waits for connections, it doesn't initiate any
connections to the rest of the cluster. Once the workers are started and
connect to the manager, logs and notices will start arriving to the manager
process from the workers.
Proxy
*****
This is a Bro process which manages synchronized state. Variables can be synchronized across connected Bro processes automatically in Bro and proxies will help the workers by alleviating the need for all of the workers to connect directly to each other.
The proxy is a Bro process that manages synchronized state. Variables can
be synchronized across connected Bro processes automatically. Proxies help
the workers by alleviating the need for all of the workers to connect
directly to each other.
Examples of synchronized state from the scripts that ship with Bro are things such as the full list of “known” hosts and services which are hosts or services which have been detected as performing full TCP handshakes or an analyzed protocol has been found on the connection. If worker A detects host 1.2.3.4 as an active host, it would be beneficial for worker B to know that as well so worker A shares that information as an insertion to a set <link to set documentation would be good here> which travels to the cluster's proxy and the proxy then sends that same set insertion to worker B. The result is that worker A and worker B have shared knowledge about host and services that are active on the network being monitored.
Examples of synchronized state from the scripts that ship with Bro include
the full list of “known” hosts and services (which are hosts or services
identified as performing full TCP handshakes, or on which an analyzed
protocol has been found). If worker A detects host 1.2.3.4 as an active host,
it would be beneficial for worker B to know that as well. So worker A shares
that information as an insertion to a set
<link to set documentation would be good here> which travels to the cluster's
proxy and the proxy sends that same set insertion to worker B. The result
is that worker A and worker B have shared knowledge about the hosts and services
that are active on the network being monitored.
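At the script level this kind of shared state is just a container declared with
the &synchronized attribute; a minimal, hypothetical sketch (the variable and
event choice are illustrative, not taken from the shipped scripts)::

    global known_active_hosts: set[addr] &synchronized;

    event connection_established(c: connection)
        {
        # An insertion made on worker A propagates via the proxy to worker B.
        add known_active_hosts[c$id$orig_h];
        }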
The proxy model extends to having multiple proxies as well if necessary for performance reasons, it only adds one additional step for the Bro processes. Each proxy connects to another proxy in a ring and the workers are shared between them as evenly as possible. When a proxy receives some new bit of state, it will share that with its proxy which is then shared around the ring of proxies and down to all of the workers. From a practical standpoint, there are no rules of thumb established yet for the number of proxies necessary for the number of workers they are serving. Best is to start with a single proxy and add more if communication performance problems are found.
The proxy model extends to having multiple proxies when necessary for
performance reasons. It only adds one additional step for the Bro processes.
Each proxy connects to another proxy in a ring and the workers are shared
between them as evenly as possible. When a proxy receives some new bit of
state it will share that with its proxy, which is then shared around the
ring of proxies, and down to all of the workers. From a practical standpoint,
there are no rules of thumb established for the number of proxies
necessary for the number of workers they are serving. It is best to start
with a single proxy and add more if communication performance problems are
found.
Bro processes acting as proxies don't tend to be extremely intense to CPU or memory and users frequently run proxy processes on the same physical host as the manager.
Bro processes acting as proxies don't tend to be extremely hard on CPU
or memory and users frequently run proxy processes on the same physical
host as the manager.
Worker
******
This is the Bro process that sniffs network traffic and does protocol analysis on the reassembled traffic streams. Most of the work of an active cluster takes place on the workers and as such, the workers typically represent the bulk of the Bro processes that are running in a cluster. The fastest memory and CPU core speed you can afford is best here since all of the protocol parsing and most analysis will take place here. There are no particular requirements for the disks in workers since almost all logging is done remotely to the manager and very little is normally written to disk.
The worker is the Bro process that sniffs network traffic and does protocol
analysis on the reassembled traffic streams. Most of the work of an active
cluster takes place on the workers and as such, the workers typically
represent the bulk of the Bro processes that are running in a cluster.
The fastest memory and CPU core speed you can afford is recommended
since all of the protocol parsing and most analysis will take place here.
There are no particular requirements for the disks in workers since almost all
logging is done remotely to the manager, and normally very little is written
to disk.
The rule of thumb we have followed recently is to allocate approximately 1 core for every 80Mbps of traffic that is being analyzed, however this estimate could be extremely traffic mix specific. It has generally worked for mixed traffic with many users and servers. For example, if your traffic peaks around 2Gbps (combined) and you want to handle traffic at peak load, you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps estimate works for your traffic, this could be handled by 3 physical hosts dedicated to being workers with each one containing dual 6-core processors.
The rule of thumb we have followed recently is to allocate approximately 1
core for every 80Mbps of traffic that is being analyzed. However, this
estimate could be extremely traffic mix-specific. It has generally worked
for mixed traffic with many users and servers. For example, if your traffic
peaks around 2Gbps (combined) and you want to handle traffic at peak load,
you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps
estimate works for your traffic, this could be handled by 3 physical hosts
dedicated to being workers with each one containing dual 6-core processors.
Once a flow based load balancer is put into place this model is extremely easy to scale as well so it's recommended that you guess at the amount of hardware you will need to fully analyze your traffic. If it turns out that you need more, it's relatively easy to increase the size of the cluster in most cases.
Once a flow-based load balancer is put into place this model is extremely
easy to scale. It is recommended that you estimate the amount of
hardware you will need to fully analyze your traffic. If more is needed it's
relatively easy to increase the size of the cluster in most cases.
Frontend Options
----------------
There are many options for setting up a frontend flow distributor and in many cases it may even be beneficial to do multiple stages of flow distribution on the network and on the host.
There are many options for setting up a frontend flow distributor. In many
cases it is beneficial to do multiple stages of flow distribution
on the network and on the host.
Discrete hardware flow balancers
********************************
@ -60,12 +122,24 @@ Discrete hardware flow balancers
cPacket
^^^^^^^
If you are monitoring one or more 10G physical interfaces, the recommended solution is to use either a cFlow or cVu device from cPacket because they are currently being used very successfully at a number of sites. These devices will perform layer-2 load balancing by rewriting the destination ethernet MAC address to cause each packet associated with a particular flow to have the same destination MAC. The packets can then be passed directly to a monitoring host where each worker has a BPF filter to limit its visibility to only that stream of flows or onward to a commodity switch to split the traffic out to multiple 1G interfaces for the workers. This can ultimately greatly reduce costs since workers can use relatively inexpensive 1G interfaces.
If you are monitoring one or more 10G physical interfaces, the recommended
solution is to use either a cFlow or cVu device from cPacket because they
are used successfully at a number of sites. These devices will perform
layer-2 load balancing by rewriting the destination Ethernet MAC address
to cause each packet associated with a particular flow to have the same
destination MAC. The packets can then be passed directly to a monitoring
host where each worker has a BPF filter to limit its visibility to only that
stream of flows, or onward to a commodity switch to split the traffic out to
multiple 1G interfaces for the workers. This greatly reduces
costs since workers can use relatively inexpensive 1G interfaces.
OpenFlow Switches
^^^^^^^^^^^^^^^^^
We are currently exploring the use of OpenFlow based switches to do flow based load balancing directly on the switch which can greatly reduce frontend costs for many users. This document will be updated when we have more information.
We are currently exploring the use of OpenFlow based switches to do flow-based
load balancing directly on the switch, which greatly reduces frontend
costs for many users. This document will be updated when we have more
information.
On host flow balancing
**********************
@ -73,14 +147,27 @@ On host flow balancing
PF_RING
^^^^^^^
The PF_RING software for Linux has a “clustering” feature which will do flow based load balancing across a number of processes that are sniffing the same interface. This will allow you to easily take advantage of multiple cores in a single physical host because Bro's main event loop is single threaded and can't natively utilize all of the cores. More information about Bro with PF_RING can be found here: (someone want to write a quick Bro/PF_RING tutorial to link to here? document installing kernel module, libpcap wrapper, building Bro with the --with-pcap configure option)
The PF_RING software for Linux has a “clustering” feature which will do
flow-based load balancing across a number of processes that are sniffing the
same interface. This allows you to easily take advantage of multiple
cores in a single physical host because Bro's main event loop is single
threaded and can't natively utilize all of the cores. More information about
Bro with PF_RING can be found here: (someone want to write a quick Bro/PF_RING
tutorial to link to here? document installing kernel module, libpcap
wrapper, building Bro with the --with-pcap configure option)
Netmap
^^^^^^
FreeBSD has an in-progress project named Netmap which will enable flow based load balancing as well. When it becomes viable for real world use, this document will be updated.
FreeBSD has an in-progress project named Netmap which will enable flow-based
load balancing as well. When it becomes viable for real world use, this
document will be updated.
Click! Software Router
^^^^^^^^^^^^^^^^^^^^^^
Click! can be used for flow based load balancing with a simple configuration. (link to an example for the config). This solution is not recommended on Linux due to Bro's PF_RING support and only as a last resort on other operating systems since it causes a lot of overhead due to context switching back and forth between kernel and userland several times per packet.
Click! can be used for flow based load balancing with a simple configuration.
(link to an example for the config). This solution is not recommended on
Linux due to Bro's PF_RING support and only as a last resort on other
operating systems since it causes a lot of overhead due to context switching
back and forth between kernel and userland several times per packet.

View file

@ -10,7 +10,7 @@ http.log file. This file can then be used for analysis and auditing
purposes.
In the sections below we briefly explain the structure of the http.log
file. Then, we show you how to perform basic HTTP traffic monitoring and
file, then we show you how to perform basic HTTP traffic monitoring and
analysis tasks with Bro. Some of these ideas and techniques can later be
applied to monitor different protocols in a similar way.
@ -40,11 +40,10 @@ request to the root of Bro website::
Network administrators and security engineers, for instance, can use the
information in this log to understand the HTTP activity on the network
and troubleshoot network problems or search for anomalous activities. At
this point, we would like to stress out the fact that there is no just
one right way to perform analysis; it will depend on the expertise of
the person doing the analysis and the specific details of the task to
accomplish.
and troubleshoot network problems or search for anomalous activities. We must
stress that there is no single right way to perform an analysis. It will
depend on the expertise of the person performing the analysis and the
specific details of the task.
For more information about how to handle the HTTP protocol in Bro,
including a complete list of the fields available in http.log, go to
@ -58,15 +57,15 @@ Detecting a Proxy Server
A proxy server is a device on your network configured to request a
service on behalf of a third system; one of the most common examples is
a Web proxy server. A client without Internet access connects to the
proxy and requests a Web page; the proxy then sends the request to the
actual Web server, receives the response and passes it to the original
proxy and requests a web page; the proxy sends the request to the web
server, receives the response, and passes it to the original
client.
Proxies were conceived to help manage a network and provide better
encapsulation. By themselves, proxies are not a security threat, but a
encapsulation. Proxies by themselves are not a security threat, but a
misconfigured or unauthorized proxy can allow others, either inside or
outside the network, to access any Web site and even conduct malicious
activities anonymously using the network resources.
outside the network, to access any web site and even conduct malicious
activities anonymously using the network's resources.
What Proxy Server traffic looks like
-------------------------------------

View file

@ -24,17 +24,17 @@ Working with Log Files
Generally, all of Bro's log files are produced by a corresponding
script that defines their individual structure. However, as each log
file flows through the Logging Framework, there share a set of
file flows through the Logging Framework, they share a set of
structural similarities. Without breaking into the scripting aspect of
Bro here, a bird's eye view of how the log files are produced would
progress as follows. The script's author defines the kinds of data,
Bro here, a bird's eye view of how the log files are produced
progresses as follows. The script's author defines the kinds of data,
such as the originating IP address or the duration of a connection,
which will make up the fields (i.e., columns) of the log file. The
author then decides what network activity should generate a single log
file entry (i.e., one line); that could, e.g., be a connection having
been completed or an HTTP ``GET`` method being issued by an
file entry (i.e., one line). For example, this could be a connection
having been completed or an HTTP ``GET`` request being issued by an
originator. When these behaviors are observed during operation, the
data is passed to the Logging Framework which, in turn, adds the entry
data is passed to the Logging Framework which adds the entry
to the appropriate log file.
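The same flow can be sketched with a tiny, hypothetical stream (the module,
stream, and field names here are invented for illustration)::

    module Example;

    export {
        redef enum Log::ID += { LOG };

        # The columns that make up each log entry.
        type Info: record {
            ts:       time     &log;
            orig_h:   addr     &log;
            duration: interval &log &optional;
        };
    }

    event bro_init()
        {
        Log::create_stream(Example::LOG, [$columns=Info]);
        }

    # One log entry (line) per completed connection.
    event connection_state_remove(c: connection)
        {
        Log::write(Example::LOG, [$ts=network_time(),
                                  $orig_h=c$id$orig_h,
                                  $duration=c$duration]);
        }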
As the fields of the log entries can be further customized by the
@ -57,7 +57,7 @@ data, the string ``(empty)`` as the indicator for an empty field and
the ``-`` character as the indicator for a field that hasn't been set.
The timestamp for when the file was created is included under
``#open``. The header then goes on to detail the fields being listed
in the file and the data types of those fields in ``#fields`` and
in the file and the data types of those fields, in ``#fields`` and
``#types``, respectively. These two entries are often the two most
significant points of interest as they detail not only the field names
but the data types used. When navigating through the different log
@ -66,12 +66,12 @@ definitions readily available saves the user some mental leg work. The
field names are also a key resource for using the :ref:`bro-cut
<bro-cut>` utility included with Bro, see below.
Next to the header follows the main content; in this example we see 7
Next to the header follows the main content. In this example we see 7
connections with their key properties, such as originator and
responder IP addresses (note how Bro transparely handles both IPv4 and
IPv6), transport-layer ports, application-layer services - the
``service`` field is filled ias Bro determines a specific protocol to
be in use, independent of the connection's ports - payload size, and
responder IP addresses (note how Bro transparently handles both IPv4 and
IPv6), transport-layer ports, application-layer services (the
``service`` field is filled in as Bro determines a specific protocol to
be in use, independent of the connection's ports), payload size, and
more. See :bro:type:`Conn::Info` for a description of all fields.
In addition to ``conn.log``, Bro generates many further logs by
@ -87,8 +87,8 @@ default, including:
A log of FTP session-level activity.
``files.log``
Summaries of files transfered over the network. This information
is aggregrated from different protocols, including HTTP, FTP, and
Summaries of files transferred over the network. This information
is aggregated from different protocols, including HTTP, FTP, and
SMTP.
``http.log``
@ -106,7 +106,7 @@ default, including:
``weird.log``
A log of unexpected protocol-level activity. Whenever Bro's
protocol analysis encounters a situation it would not expect
(e.g., an RFC violation) is logs it in this file. Note that in
(e.g., an RFC violation) it logs it in this file. Note that in
practice, real-world networks tend to exhibit a large number of
such "crud" that is usually not worth following up on.
@ -120,7 +120,7 @@ Using ``bro-cut``
The ``bro-cut`` utility can be used in place of other tools to build
terminal commands that remain flexible and accurate independent of
possible changes to log file itself. It accomplishes this by parsing
possible changes to the log file itself. It accomplishes this by parsing
the header in each file and allowing the user to refer to the specific
columnar data available (in contrast to tools like ``awk`` that
require the user to refer to fields referenced by their position).
@ -131,7 +131,7 @@ from a ``conn.log``:
@TEST-EXEC: btest-rst-cmd -n 10 "cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h duration"
The correspding ``awk`` command would look like this:
The corresponding ``awk`` command will look like this:
.. btest:: using_bro
@ -185,8 +185,8 @@ Working with Timestamps
``bro-cut`` accepts the flag ``-d`` to convert the epoch time values
in the log files to human-readable format. The following command
includes the human readable time stamp, the unique identifier and the
HTTP ``Host`` and HTTP ``URI`` as extracted from the ``http.log``
includes the human readable time stamp, the unique identifier, the
HTTP ``Host``, and HTTP ``URI`` as extracted from the ``http.log``
file:
.. btest:: using_bro
@ -218,7 +218,7 @@ See ``man strftime`` for more options for the format string.
Using UIDs
----------
While Bro can do signature based analysis, its primary focus is on
While Bro can do signature-based analysis, its primary focus is on
behavioral detection which alters the practice of log review from
"reactionary review" to a process a little more akin to a hunting
trip. A common progression of review includes correlating a session
@ -254,12 +254,13 @@ network.
-----------------------
Common Log Files
-----------------------
As a monitoring tool, Bro records a detailed view of the traffic inspected and the events generated in
a series of relevant log files. These files can later be reviewed for monitoring, auditing and troubleshooting
purposes.
As a monitoring tool, Bro records a detailed view of the traffic inspected
and the events generated in a series of relevant log files. These files can
later be reviewed for monitoring, auditing and troubleshooting purposes.
In this section we present a brief explanation of the most commonly used log files generated by Bro including links
to descriptions of some of the fields for each log type.
In this section we present a brief explanation of the most commonly used log
files generated by Bro including links to descriptions of some of the fields
for each log type.
+-----------------+---------------------------------------+------------------------------+
| Log File | Description | Field Descriptions |

View file

@ -6,19 +6,19 @@ MIME Type Statistics
====================
Files are constantly transmitted over HTTP on regular networks. These
files belong to a specific category (i.e., executable, text, image,
etc.) identified by a `Multipurpose Internet Mail Extension (MIME)
files belong to a specific category (e.g., executable, text, image)
identified by a `Multipurpose Internet Mail Extension (MIME)
<http://en.wikipedia.org/wiki/MIME>`_. Although MIME was originally
developed to identify the type of non-text attachments on email, it is
also used by Web browser to identify the type of files transmitted and
also used by a web browser to identify the type of files transmitted and
present them accordingly.
In this tutorial, we will show how to use the Sumstats Framework to
collect some statistics information based on MIME types, specifically
In this tutorial, we will demonstrate how to use the Sumstats Framework
to collect statistical information based on MIME types; specifically,
the total number of occurrences, size in bytes, and number of unique
hosts transmitting files over HTTP per each type. For instructions about
hosts transmitting files over HTTP per each type. For instructions on
extracting and creating a local copy of these files, visit :ref:`this
<http-monitor>` tutorial instead.
tutorial <http-monitor>`.
------------------------------------------------
MIME Statistics with Sumstats
@ -30,31 +30,31 @@ Observations, where the event is observed and fed into the framework.
(ii) Reducers, where observations are collected and measured. (iii)
Sumstats, where the main functionality is implemented.
So, we start by defining our observation along with a record to store
all statistics values and an observation interval. We are conducting our
observation on the :bro:see:`HTTP::log_http` event and we are interested
in the MIME type, size of the file ("response_body_len") and the
We start by defining our observation along with a record to store
all statistical values and an observation interval. We are conducting our
observation on the :bro:see:`HTTP::log_http` event and are interested
in the MIME type, size of the file ("response_body_len"), and the
originator host ("orig_h"). We use the MIME type as our key and create
observers for the other two values.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
:lines: 6-29, 54-64
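For orientation, the observation step boils down to roughly the following
sketch (the stream names are illustrative; see the included script for the
real definitions)::

    event HTTP::log_http(rec: HTTP::Info)
        {
        if ( ! rec?$resp_mime_types )
            return;

        local mtype = rec$resp_mime_types[0];

        # Key on the MIME type; observe the response size and the originator.
        SumStats::observe("mime.bytes", [$str=mtype], [$num=rec$response_body_len]);
        SumStats::observe("mime.hosts", [$str=mtype], [$str=cat(rec$id$orig_h)]);
        }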
Next, we create the reducers. The first one will accumulate file sizes
and the second one will make sure we only store a host ID once. Below is
Next, we create the reducers. The first will accumulate file sizes
and the second will make sure we only store a host ID once. Below is
the partial code from a :bro:see:`bro_init` handler.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
:lines: 34-37
In our final step, we create the SumStats where we check for the
observation interval and once it expires, we populate the record
observation interval. Once it expires, we populate the record
(defined above) with all the relevant data and write it to a log.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
:lines: 38-51
Putting everything together we end up with the following final code for
After putting the three pieces together we end up with the following final code for
our script.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro

View file

@ -1,5 +1,5 @@
@load base/protocols/conn
@load base/protocols/dns
@load base/protocols/http
event connection_state_remove(c: connection)
{

View file

@ -232,7 +232,7 @@ overly populated.
.. btest:: connection-record-01
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_01.bro
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_01.bro
As you can see from the output, the connection record is something of
a jumble when printed on its own. Regularly taking a peek at a
@ -248,9 +248,9 @@ originating host is referenced by ``c$id$orig_h`` which if given a
narrative relates to ``orig_h`` which is a member of ``id`` which is
a member of the data structure referred to as ``c`` that was passed
into the event handler." Given that the responder port
(``c$id$resp_p``) is ``53/tcp``, it's likely that Bro's base DNS scripts
(``c$id$resp_p``) is ``80/tcp``, it's likely that Bro's base HTTP scripts
can further populate the connection record. Let's load the
``base/protocols/dns`` scripts and check the output of our script.
``base/protocols/http`` scripts and check the output of our script.
Bro uses the dollar sign as its field delimiter and a direct
correlation exists between the output of the connection record and the
@ -262,16 +262,16 @@ brackets, which would correspond to the ``$``-delimiter in a Bro script.
.. btest:: connection-record-02
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_02.bro
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_02.bro
The addition of the ``base/protocols/dns`` scripts populates the
``dns=[]`` member of the connection record. While Bro is doing a
The addition of the ``base/protocols/http`` scripts populates the
``http=[]`` member of the connection record. While Bro is doing a
massive amount of work in the background, it is in what is commonly
called "scriptland" that details are being refined and decisions
being made. Were we to continue running in "bare mode" we could slowly
keep adding infrastructure through ``@load`` statements. For example,
were we to ``@load base/frameworks/logging``, Bro would generate a
``conn.log`` and ``dns.log`` for us in the current working directory.
``conn.log`` and ``http.log`` for us in the current working directory.
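In bare mode that incremental build-up is just a couple of ``@load`` lines plus
a handler, roughly like the connection_record example scripts referenced above::

    @load base/protocols/http
    @load base/frameworks/logging

    event connection_state_remove(c: connection)
        {
        # With the HTTP scripts loaded, c$http is populated for HTTP connections.
        print c;
        }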
As mentioned above, including the appropriate ``@load`` statements is
not only good practice, but can also help to indicate which
functionalities are being used in a script. Take a second to run the

View file

@ -60,3 +60,4 @@
@load base/misc/find-checksum-offloading
@load base/misc/find-filtered-trace

View file

@ -0,0 +1,49 @@
##! Discovers trace files that contain TCP traffic consisting only of
##! control packets (e.g. it's been filtered to contain only SYN/FIN/RST
##! packets and no content). On finding such a trace, a warning is
##! emitted that suggests toggling the :bro:see:`detect_filtered_trace`
##! option may be desired if the user does not want Bro to report
##! missing TCP segments.
module FilteredTraceDetection;
export {
## Flag to enable filtered trace file detection and warning message.
global enable: bool = T &redef;
}
global saw_tcp_conn_with_data: bool = F;
global saw_a_tcp_conn: bool = F;
event connection_state_remove(c: connection)
{
if ( ! reading_traces() )
return;
if ( ! enable )
return;
if ( saw_tcp_conn_with_data )
return;
if ( ! is_tcp_port(c$id$orig_p) )
return;
saw_a_tcp_conn = T;
if ( /[Dd]/ in c$history )
saw_tcp_conn_with_data = T;
}
event bro_done()
{
if ( ! enable )
return;
if ( ! saw_a_tcp_conn )
return;
if ( ! saw_tcp_conn_with_data )
Reporter::warning("The analyzed trace file was determined to contain only TCP control packets, which may indicate it's been pre-filtered. By default, Bro reports the missing segments for this type of trace, but the 'detect_filtered_trace' option may be toggled if that's not desired.");
}
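# Example usage from a separate local/site script (illustrative, not part of
# this file): follow the warning's suggestion by redef'ing the option it names,
# so Bro stops reporting the missing TCP segments for such traces.
redef detect_filtered_trace = T;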

View file

@ -63,15 +63,17 @@ export {
## The DNS query was rejected by the server.
rejected: bool &log &default=F;
## This value indicates if this request/response pair is ready
## to be logged.
ready: bool &default=F;
## The total number of resource records in a reply message's
## answer section.
total_answers: count &optional;
## The total number of resource records in a reply message's
## answer, authority, and additional sections.
total_replies: count &optional;
## Whether the full DNS query has been seen.
saw_query: bool &default=F;
## Whether the full DNS reply has been seen.
saw_reply: bool &default=F;
};
## An event that can be handled to access the :bro:type:`DNS::Info`
@ -90,7 +92,7 @@ export {
## ans: The general information of a RR response.
##
## reply: The specific response information according to RR type/class.
global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
global do_reply: hook(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
## A hook that is called whenever a session is being set.
## This can be used if additional initialization logic needs to happen
@ -103,17 +105,37 @@ export {
## is_query: Indicator for if this is being called for a query or a response.
global set_session: hook(c: connection, msg: dns_msg, is_query: bool);
## Yields a queue of :bro:see:`DNS::Info` objects for a given
## DNS message query/transaction ID.
type PendingMessages: table[count] of Queue::Queue;
## The amount of time that DNS queries or replies for a given
## query/transaction ID are allowed to be queued while waiting for
## a matching reply or query.
const pending_msg_expiry_interval = 2min &redef;
## Give up trying to match pending DNS queries or replies for a given
## query/transaction ID once this number of unmatched queries or replies
## is reached (this shouldn't happen unless either the DNS server/resolver
## is broken, Bro is not seeing all the DNS traffic, or an AXFR query
## response is ongoing).
const max_pending_msgs = 50 &redef;
## Give up trying to match pending DNS queries or replies across all
## query/transaction IDs once there is at least one unmatched query or
## reply across this number of different query IDs.
const max_pending_query_ids = 50 &redef;
## A record type which tracks the status of DNS queries for a given
## :bro:type:`connection`.
type State: record {
## Indexed by query id, returns Info record corresponding to
## query/response which haven't completed yet.
pending: table[count] of Queue::Queue;
## queries that haven't been matched with a response yet.
pending_queries: PendingMessages;
## This is the list of DNS responses that have completed based
## on the number of responses declared and the number received.
## The contents of the set are transaction IDs.
finished_answers: set[count];
## Indexed by query id, returns Info record corresponding to
## replies that haven't been matched with a query yet.
pending_replies: PendingMessages;
};
}
@ -143,6 +165,67 @@ function new_session(c: connection, trans_id: count): Info
return info;
}
function log_unmatched_msgs_queue(q: Queue::Queue)
{
local infos: vector of Info;
Queue::get_vector(q, infos);
for ( i in infos )
{
event flow_weird("dns_unmatched_msg",
infos[i]$id$orig_h, infos[i]$id$resp_h);
Log::write(DNS::LOG, infos[i]);
}
}
function log_unmatched_msgs(msgs: PendingMessages)
{
for ( trans_id in msgs )
{
log_unmatched_msgs_queue(msgs[trans_id]);
delete msgs[trans_id];
}
}
function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
{
if ( id !in msgs )
{
if ( |msgs| > max_pending_query_ids )
{
event flow_weird("dns_unmatched_query_id_quantity",
msg$id$orig_h, msg$id$resp_h);
# Throw away all unmatched on assumption they'll never be matched.
log_unmatched_msgs(msgs);
}
msgs[id] = Queue::init();
}
else
{
if ( Queue::len(msgs[id]) > max_pending_msgs )
{
event flow_weird("dns_unmatched_msg_quantity",
msg$id$orig_h, msg$id$resp_h);
log_unmatched_msgs_queue(msgs[id]);
# Throw away all unmatched on assumption they'll never be matched.
msgs[id] = Queue::init();
}
}
Queue::put(msgs[id], msg);
}
function pop_msg(msgs: PendingMessages, id: count): Info
{
local rval: Info = Queue::get(msgs[id]);
if ( Queue::len(msgs[id]) == 0 )
delete msgs[id];
return rval;
}
hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
{
if ( ! c?$dns_state )
@ -151,29 +234,39 @@ hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
c$dns_state = state;
}
if ( msg$id !in c$dns_state$pending )
c$dns_state$pending[msg$id] = Queue::init();
local info: Info;
# If this is either a query or this is the reply but
# no Info records are in the queue (we missed the query?)
# we need to create an Info record and put it in the queue.
if ( is_query ||
Queue::len(c$dns_state$pending[msg$id]) == 0 )
{
info = new_session(c, msg$id);
Queue::put(c$dns_state$pending[msg$id], info);
}
if ( is_query )
# If this is a query, assign the newly created info variable
# so that the world looks correct to anything else handling
# this query.
c$dns = info;
{
if ( msg$id in c$dns_state$pending_replies &&
Queue::len(c$dns_state$pending_replies[msg$id]) > 0 )
{
# Match this DNS query w/ what's at head of pending reply queue.
c$dns = pop_msg(c$dns_state$pending_replies, msg$id);
}
else
{
# Create a new DNS session and put it in the query queue so
# we can wait for a matching reply.
c$dns = new_session(c, msg$id);
enqueue_new_msg(c$dns_state$pending_queries, msg$id, c$dns);
}
}
else
# Peek at the next item in the queue for this trans_id and
# assign it to c$dns since this is a response.
c$dns = Queue::peek(c$dns_state$pending[msg$id]);
{
if ( msg$id in c$dns_state$pending_queries &&
Queue::len(c$dns_state$pending_queries[msg$id]) > 0 )
{
# Match this DNS reply w/ what's at head of pending query queue.
c$dns = pop_msg(c$dns_state$pending_queries, msg$id);
}
else
{
# Create a new DNS session and put it in the reply queue so
# we can wait for a matching query.
c$dns = new_session(c, msg$id);
event conn_weird("dns_unmatched_reply", c, "");
enqueue_new_msg(c$dns_state$pending_replies, msg$id, c$dns);
}
}
if ( ! is_query )
{
@ -183,36 +276,36 @@ hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
if ( ! c$dns?$total_answers )
c$dns$total_answers = msg$num_answers;
if ( c$dns?$total_replies &&
c$dns$total_replies != msg$num_answers + msg$num_addl + msg$num_auth )
{
event conn_weird("dns_changed_number_of_responses", c,
fmt("The declared number of responses changed from %d to %d",
c$dns$total_replies,
msg$num_answers + msg$num_addl + msg$num_auth));
}
else
{
# Store the total number of responses expected from the first reply.
if ( ! c$dns?$total_replies )
c$dns$total_replies = msg$num_answers + msg$num_addl + msg$num_auth;
}
if ( msg$rcode != 0 && msg$num_queries == 0 )
c$dns$rejected = T;
}
}
event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5
{
hook set_session(c, msg, is_orig);
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
hook set_session(c, msg, ! msg$QR);
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
{
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
if ( ! msg$QR )
# This is weird: the inquirer must also be providing answers in
# the request, which is not what we want to track.
return;
if ( ans$answer_type == DNS_ANS )
{
if ( ! c?$dns )
{
event conn_weird("dns_unmatched_reply", c, "");
hook set_session(c, msg, F);
}
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;
@ -226,29 +319,35 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
c$dns$TTLs = vector();
c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
}
if ( c$dns?$answers && c$dns?$total_answers &&
|c$dns$answers| == c$dns$total_answers )
{
# Indicate this request/reply pair is ready to be logged.
c$dns$ready = T;
}
}
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=-5
event dns_end(c: connection, msg: dns_msg) &priority=5
{
if ( c$dns$ready )
if ( ! c?$dns )
return;
if ( msg$QR )
c$dns$saw_reply = T;
else
c$dns$saw_query = T;
}
event dns_end(c: connection, msg: dns_msg) &priority=-5
{
if ( c?$dns && c$dns$saw_reply && c$dns$saw_query )
{
Log::write(DNS::LOG, c$dns);
# This record is logged and no longer pending.
Queue::get(c$dns_state$pending[c$dns$trans_id]);
delete c$dns;
}
}
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
{
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
c$dns$RD = msg$RD;
c$dns$TC = msg$TC;
c$dns$qclass = qclass;
@ -265,60 +364,66 @@ event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qcla
c$dns$query = query;
}
event dns_unknown_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
{
hook DNS::do_reply(c, msg, ans, fmt("<unknown type=%s>", ans$qtype));
}
event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
{
event DNS::do_reply(c, msg, ans, fmt("%s", a));
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, str: string) &priority=5
{
event DNS::do_reply(c, msg, ans, str);
hook DNS::do_reply(c, msg, ans, str);
}
event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
{
event DNS::do_reply(c, msg, ans, fmt("%s", a));
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_A6_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
{
event DNS::do_reply(c, msg, ans, fmt("%s", a));
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_NS_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_CNAME_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_MX_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string,
preference: count) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_PTR_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_SOA_reply(c: connection, msg: dns_msg, ans: dns_answer, soa: dns_soa) &priority=5
{
event DNS::do_reply(c, msg, ans, soa$mname);
hook DNS::do_reply(c, msg, ans, soa$mname);
}
event dns_WKS_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
{
event DNS::do_reply(c, msg, ans, "");
hook DNS::do_reply(c, msg, ans, "");
}
event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
{
event DNS::do_reply(c, msg, ans, "");
hook DNS::do_reply(c, msg, ans, "");
}
# TODO: figure out how to handle these
@ -339,7 +444,8 @@ event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
event dns_rejected(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
{
c$dns$rejected = T;
if ( c?$dns )
c$dns$rejected = T;
}
event connection_state_remove(c: connection) &priority=-5
@ -347,16 +453,8 @@ event connection_state_remove(c: connection) &priority=-5
if ( ! c?$dns_state )
return;
# If Bro is expiring state, we should go ahead and log all unlogged
# request/response pairs now.
for ( trans_id in c$dns_state$pending )
{
local infos: vector of Info;
Queue::get_vector(c$dns_state$pending[trans_id], infos);
for ( i in infos )
{
Log::write(DNS::LOG, infos[i]);
}
}
# If Bro is expiring state, we should go ahead and log all unmatched
# queries and replies now.
log_unmatched_msgs(c$dns_state$pending_queries);
log_unmatched_msgs(c$dns_state$pending_replies);
}

View file

@ -19,12 +19,16 @@ export {
};
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=4
hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
{
# The "ready" flag will be set here. This causes the setting from the
# base script to be overridden since the base script will log immediately
# after all of the ANS replies have been seen.
c$dns$ready=F;
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
if ( ! msg$QR )
# This is weird: the inquirer must also be providing answers in
# the request, which is not what we want to track.
return;
if ( ans$answer_type == DNS_AUTH )
{
@ -38,11 +42,4 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
c$dns$addl = set();
add c$dns$addl[reply];
}
if ( c$dns?$answers && c$dns?$auth && c$dns?$addl &&
c$dns$total_replies == |c$dns$answers| + |c$dns$auth| + |c$dns$addl| )
{
# *Now* all replies desired have been seen.
c$dns$ready = T;
}
}

@ -1 +1 @@
Subproject commit 92674c57455cb71de5a2be6f482570e10be46aa6
Subproject commit e96d95a130a572b611fe70b3c3ede2b4727aaa22

View file

@ -229,12 +229,21 @@ void PktSrc::Process()
{
// MPLS carried over the ethernet frame.
case 0x8847:
// Remove the data link layer and denote a
// header size of zero before the IP header.
have_mpls = true;
data += get_link_header_size(datalink);
pkt_hdr_size = 0;
break;
// VLAN carried over the ethernet frame.
case 0x8100:
data += get_link_header_size(datalink);
// Check for MPLS in VLAN.
if ( ((data[2] << 8) + data[3]) == 0x8847 )
have_mpls = true;
data += 4; // Skip the vlan header
pkt_hdr_size = 0;
@ -274,8 +283,13 @@ void PktSrc::Process()
protocol = (data[2] << 8) + data[3];
if ( protocol == 0x0281 )
// MPLS Unicast
{
// MPLS Unicast. Remove the data link layer and
// denote a header size of zero before the IP header.
have_mpls = true;
data += get_link_header_size(datalink);
pkt_hdr_size = 0;
}
else if ( protocol != 0x0021 && protocol != 0x0057 )
{
@ -290,12 +304,6 @@ void PktSrc::Process()
if ( have_mpls )
{
// Remove the data link layer
data += get_link_header_size(datalink);
// Denote a header size of zero before the IP header
pkt_hdr_size = 0;
// Skip the MPLS label stack.
bool end_of_stack = false;

View file

@ -137,18 +137,6 @@ int DNS_Interpreter::ParseQuestions(DNS_MsgInfo* msg,
{
int n = msg->qdcount;
if ( n == 0 )
{
// Generate event here because we won't go into ParseQuestion.
EventHandlerPtr dns_event =
msg->rcode == DNS_CODE_OK ?
dns_query_reply : dns_rejected;
BroString* question_name = new BroString("<no query>");
SendReplyOrRejectEvent(msg, dns_event, data, len, question_name);
return 1;
}
while ( n > 0 && ParseQuestion(msg, data, len, msg_start) )
--n;
return n == 0;
@ -299,6 +287,16 @@ int DNS_Interpreter::ParseAnswer(DNS_MsgInfo* msg,
break;
default:
if ( dns_unknown_reply && ! msg->skip_event )
{
val_list* vl = new val_list;
vl->append(analyzer->BuildConnVal());
vl->append(msg->BuildHdrVal());
vl->append(msg->BuildAnswerVal());
analyzer->ConnectionEvent(dns_unknown_reply, vl);
}
analyzer->Weird("DNS_RR_unknown_type");
data += rdlength;
len -= rdlength;

View file

@ -50,7 +50,7 @@ event dns_message%(c: connection, is_orig: bool, msg: dns_msg, len: count%);
event dns_request%(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count%);
## Generated for DNS replies that reject a query. This event is raised if a DNS
## reply either indicates failure via its status code or does not pass on any
## reply indicates failure because it does not pass on any
## answers to a query. Note that all of the event's parameters are parsed out of
## the reply; there's no stateful correlation with the query.
##
@ -78,7 +78,7 @@ event dns_request%(c: connection, msg: dns_msg, query: string, qtype: count, qcl
## dns_skip_all_addl dns_skip_all_auth dns_skip_auth
event dns_rejected%(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count%);
## Generated for DNS replies with an *ok* status code but no question section.
## Generated for each entry in the Question section of a DNS reply.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/Domain_Name_System>`__ for more
## information about the DNS protocol. Bro analyzes both UDP and TCP DNS
@ -401,6 +401,22 @@ event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, str: string%)
## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth
event dns_SRV_reply%(c: connection, msg: dns_msg, ans: dns_answer%);
## Generated on DNS reply resource records when the type of record is not one
## that Bro knows how to parse and generate another more specific
## event.
##
## c: The connection, which may be UDP or TCP depending on the type of the
## transport-layer session being analyzed.
##
## msg: The parsed DNS message header.
##
## ans: The type-independent part of the parsed answer record.
##
## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl
## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply
## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_SRV_reply dns_end
event dns_unknown_reply%(c: connection, msg: dns_msg, ans: dns_answer%);
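# Illustrative handler sketch (not shipped with Bro): a script-level consumer of
# the new event could simply report which RR types were not parsed further.
event dns_unknown_reply(c: connection, msg: dns_msg, ans: dns_answer)
	{
	print fmt("unparsed DNS RR of type %d for query %s", ans$qtype, ans$query);
	}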
## Generated for DNS replies of type *EDNS*. For replies with multiple answers,
## an individual event of the corresponding type is raised for each.
##

View file

@ -192,14 +192,13 @@ void POP3_Analyzer::ProcessRequest(int length, const char* line)
case AUTH_CRAM_MD5:
{ // Format: "user<space>password-hash"
char* s;
char* str = (char*) decoded->CheckString();
const char* s;
const char* str = (char*) decoded->CheckString();
for ( s = str; *s && *s != '\t' && *s != ' '; ++s )
;
*s = '\0';
user = str;
user = std::string(str, s);
password = "";
break;

View file

@@ -62,6 +62,14 @@ refine connection SOCKS_Conn += {
if ( ${request.reserved} != 0 )
{
bro_analyzer()->ProtocolViolation(fmt("invalid value in reserved field: %d", ${request.reserved}));
bro_analyzer()->SetSkip(true);
return false;
}
if ( (${request.command} == 0) || (${request.command} > 3) )
{
bro_analyzer()->ProtocolViolation(fmt("undefined value in command field: %d", ${request.command}));
bro_analyzer()->SetSkip(true);
return false;
}

View file

@@ -397,7 +397,9 @@ bool Manager::CreateEventStream(RecordVal* fval)
string stream_name = name_val->AsString()->CheckString();
Unref(name_val);
RecordType *fields = fval->Lookup("fields", true)->AsType()->AsTypeType()->Type()->AsRecordType();
Val* fields_val = fval->Lookup("fields", true);
RecordType *fields = fields_val->AsType()->AsTypeType()->Type()->AsRecordType();
Unref(fields_val);
Val *want_record = fval->Lookup("want_record", true);
@@ -548,13 +550,17 @@ bool Manager::CreateTableStream(RecordVal* fval)
Val* pred = fval->Lookup("pred", true);
RecordType *idx = fval->Lookup("idx", true)->AsType()->AsTypeType()->Type()->AsRecordType();
Val* idx_val = fval->Lookup("idx", true);
RecordType *idx = idx_val->AsType()->AsTypeType()->Type()->AsRecordType();
Unref(idx_val);
RecordType *val = 0;
if ( fval->Lookup("val", true) != 0 )
Val* val_val = fval->Lookup("val", true);
if ( val_val )
{
val = fval->Lookup("val", true)->AsType()->AsTypeType()->Type()->AsRecordType();
Unref(val); // The lookupwithdefault in the if-clause ref'ed val.
val = val_val->AsType()->AsTypeType()->Type()->AsRecordType();
Unref(val_val);
}
TableVal *dst = fval->Lookup("destination", true)->AsTableVal();
@@ -729,7 +735,7 @@ bool Manager::CreateTableStream(RecordVal* fval)
stream->pred = pred ? pred->AsFunc() : 0;
stream->num_idx_fields = idxfields;
stream->num_val_fields = valfields;
stream->tab = dst->AsTableVal();
stream->tab = dst->AsTableVal(); // ref'd by lookupwithdefault
stream->rtype = val ? val->AsRecordType() : 0;
stream->itype = idx->AsRecordType();
stream->event = event ? event_registry->Lookup(event->Name()) : 0;
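The hunks above only change how CreateTableStream extracts the "idx", "val" and "destination" fields; the script-level interface is unchanged. For context, a minimal sketch of a table stream definition that exercises those fields, assuming the usual Input::add_table call and a hypothetical input file hosts.data:

@load base/frameworks/input

type Idx: record {
	host: addr;
};

type Val: record {
	status: string;
};

global hosts: table[addr] of Val = table();

event bro_init()
	{
	# Columns of the (hypothetical) hosts.data file must match Idx and Val.
	Input::add_table([$source="hosts.data", $name="hosts",
	                  $idx=Idx, $val=Val, $destination=hosts]);
	}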

View file

@@ -0,0 +1,12 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open 2014-02-14-20-04-20
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
1371685686.536606 CXWv6p3arKYeMETxOg 65.65.65.65 19244 65.65.65.65 80 tcp - - - - OTH - 0 D 1 257 0 0 (empty)
1371686961.156859 CjhGID4nQcgTWjvg4c 65.65.65.65 32828 65.65.65.65 80 tcp - - - - OTH - 0 d 0 0 1 1500 (empty)
1371686961.479321 CCvvfg3TEfuqmmG4bh 65.65.65.65 61193 65.65.65.65 80 tcp - - - - OTH - 0 D 1 710 0 0 (empty)
#close 2014-02-14-20-04-20

View file

@@ -3,7 +3,7 @@
#empty_field (empty)
#unset_field -
#path loaded_scripts
#open 2013-10-30-16-52-28
#open 2014-01-31-22-54-38
#fields name
#types string
scripts/base/init-bare.bro
@@ -220,5 +220,6 @@ scripts/base/init-default.bro
scripts/base/files/unified2/__load__.bro
scripts/base/files/unified2/main.bro
scripts/base/misc/find-checksum-offloading.bro
scripts/base/misc/find-filtered-trace.bro
scripts/policy/misc/loaded-scripts.bro
#close 2013-10-30-16-52-28
#close 2014-01-31-22-54-38

View file

@@ -4,10 +4,10 @@
:linenos:
:emphasize-lines: 1,1
# bro -b -r dns-session.trace connection_record_01.bro
[id=[orig_h=212.180.42.100, orig_p=25000/tcp, resp_h=131.243.64.3, resp_p=53/tcp], orig=[size=29, state=5, num_pkts=6, num_bytes_ip=273, flow_label=0], resp=[size=44, state=5, num_pkts=5, num_bytes_ip=248, flow_label=0], start_time=930613226.067666, duration=0.709643, service={
# bro -b -r http/get.trace connection_record_01.bro
[id=[orig_h=141.142.228.5, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp], orig=[size=136, state=5, num_pkts=7, num_bytes_ip=512, flow_label=0], resp=[size=5007, state=5, num_pkts=7, num_bytes_ip=5379, flow_label=0], start_time=1362692526.869344, duration=0.211484, service={
}, addl=, hot=0, history=ShADadFf, uid=CXWv6p3arKYeMETxOg, tunnel=<uninitialized>, conn=[ts=930613226.067666, uid=CXWv6p3arKYeMETxOg, id=[orig_h=212.180.42.100, orig_p=25000/tcp, resp_h=131.243.64.3, resp_p=53/tcp], proto=tcp, service=<uninitialized>, duration=0.709643, orig_bytes=29, resp_bytes=44, conn_state=SF, local_orig=<uninitialized>, missed_bytes=0, history=ShADadFf, orig_pkts=6, orig_ip_bytes=273, resp_pkts=5, resp_ip_bytes=248, tunnel_parents={
}, addl=, hot=0, history=ShADadFf, uid=CXWv6p3arKYeMETxOg, tunnel=<uninitialized>, conn=[ts=1362692526.869344, uid=CXWv6p3arKYeMETxOg, id=[orig_h=141.142.228.5, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp], proto=tcp, service=<uninitialized>, duration=0.211484, orig_bytes=136, resp_bytes=5007, conn_state=SF, local_orig=<uninitialized>, missed_bytes=0, history=ShADadFf, orig_pkts=7, orig_ip_bytes=512, resp_pkts=7, resp_ip_bytes=5379, tunnel_parents={
}], extract_orig=F, extract_resp=F]

View file

@@ -4,16 +4,14 @@
:linenos:
:emphasize-lines: 1,1
# bro -b -r dns-session.trace connection_record_02.bro
[id=[orig_h=212.180.42.100, orig_p=25000/tcp, resp_h=131.243.64.3, resp_p=53/tcp], orig=[size=29, state=5, num_pkts=6, num_bytes_ip=273, flow_label=0], resp=[size=44, state=5, num_pkts=5, num_bytes_ip=248, flow_label=0], start_time=930613226.067666, duration=0.709643, service={
# bro -b -r http/get.trace connection_record_02.bro
[id=[orig_h=141.142.228.5, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp], orig=[size=136, state=5, num_pkts=7, num_bytes_ip=512, flow_label=0], resp=[size=5007, state=5, num_pkts=7, num_bytes_ip=5379, flow_label=0], start_time=1362692526.869344, duration=0.211484, service={
}, addl=, hot=0, history=ShADadFf, uid=CXWv6p3arKYeMETxOg, tunnel=<uninitialized>, conn=[ts=930613226.067666, uid=CXWv6p3arKYeMETxOg, id=[orig_h=212.180.42.100, orig_p=25000/tcp, resp_h=131.243.64.3, resp_p=53/tcp], proto=tcp, service=<uninitialized>, duration=0.709643, orig_bytes=29, resp_bytes=44, conn_state=SF, local_orig=<uninitialized>, missed_bytes=0, history=ShADadFf, orig_pkts=6, orig_ip_bytes=273, resp_pkts=5, resp_ip_bytes=248, tunnel_parents={
}, addl=, hot=0, history=ShADadFf, uid=CXWv6p3arKYeMETxOg, tunnel=<uninitialized>, conn=[ts=1362692526.869344, uid=CXWv6p3arKYeMETxOg, id=[orig_h=141.142.228.5, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp], proto=tcp, service=<uninitialized>, duration=0.211484, orig_bytes=136, resp_bytes=5007, conn_state=SF, local_orig=<uninitialized>, missed_bytes=0, history=ShADadFf, orig_pkts=7, orig_ip_bytes=512, resp_pkts=7, resp_ip_bytes=5379, tunnel_parents={
}], extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=[pending={
[34798] = [initialized=T, vals={
}], extract_orig=F, extract_resp=F, http=[ts=1362692526.939527, uid=CXWv6p3arKYeMETxOg, id=[orig_h=141.142.228.5, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp], trans_depth=1, method=GET, host=bro.org, uri=/download/CHANGES.bro-aux.txt, referrer=<uninitialized>, user_agent=Wget/1.14 (darwin12.2.0), request_body_len=0, response_body_len=4705, status_code=200, status_msg=OK, info_code=<uninitialized>, info_msg=<uninitialized>, filename=<uninitialized>, tags={
}, settings=[max_len=<uninitialized>], top=1, bottom=1, size=0]
}, finished_answers={
}, username=<uninitialized>, password=<uninitialized>, capture_password=F, proxied=<uninitialized>, range_request=F, orig_fuids=<uninitialized>, orig_mime_types=<uninitialized>, resp_fuids=[FakNcS1Jfe01uljb3], resp_mime_types=[text/plain], current_entity=<uninitialized>, orig_mime_depth=1, resp_mime_depth=1], http_state=[pending={
}]]
}, current_request=1, current_response=1]]

View file

@@ -3,7 +3,7 @@
connection_record_02.bro
@load base/protocols/conn
@load base/protocols/dns
@load base/protocols/http
event connection_state_remove(c: connection)
{

View file

@@ -20,8 +20,8 @@
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
1300475167.096535 CXWv6p3arKYeMETxOg 141.142.220.202 5353 224.0.0.251 5353 udp dns - - - S0 - 0 D 1 73 0 0 (empty)
1300475167.097012 CjhGID4nQcgTWjvg4c fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp - - - - S0 - 0 D 1 199 0 0 (empty)
1300475167.099816 CCvvfg3TEfuqmmG4bh 141.142.220.50 5353 224.0.0.251 5353 udp - - - - S0 - 0 D 1 179 0 0 (empty)
1300475167.097012 CjhGID4nQcgTWjvg4c fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp dns - - - S0 - 0 D 1 199 0 0 (empty)
1300475167.099816 CCvvfg3TEfuqmmG4bh 141.142.220.50 5353 224.0.0.251 5353 udp dns - - - S0 - 0 D 1 179 0 0 (empty)
1300475168.853899 CPbrpk1qSsw6ESzHV4 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF - 0 Dd 1 66 1 117 (empty)
1300475168.854378 C6pKV8GSxOnSLghOa 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF - 0 Dd 1 80 1 127 (empty)
1300475168.854837 CIPOse170MGiRM1Qf4 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF - 0 Dd 1 66 1 211 (empty)

View file

@@ -54,8 +54,8 @@
# Extent, type='conn'
ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
1300475167096535 CXWv6p3arKYeMETxOg 141.142.220.202 5353 224.0.0.251 5353 udp dns 0 0 0 S0 F 0 D 1 73 0 0
1300475167097012 CjhGID4nQcgTWjvg4c fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0 0 0 S0 F 0 D 1 199 0 0
1300475167099816 CCvvfg3TEfuqmmG4bh 141.142.220.50 5353 224.0.0.251 5353 udp 0 0 0 S0 F 0 D 1 179 0 0
1300475167097012 CjhGID4nQcgTWjvg4c fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp dns 0 0 0 S0 F 0 D 1 199 0 0
1300475167099816 CCvvfg3TEfuqmmG4bh 141.142.220.50 5353 224.0.0.251 5353 udp dns 0 0 0 S0 F 0 D 1 179 0 0
1300475168853899 CPbrpk1qSsw6ESzHV4 141.142.220.118 43927 141.142.2.2 53 udp dns 435 38 89 SF F 0 Dd 1 66 1 117
1300475168854378 C6pKV8GSxOnSLghOa 141.142.220.118 37676 141.142.2.2 53 udp dns 420 52 99 SF F 0 Dd 1 80 1 127
1300475168854837 CIPOse170MGiRM1Qf4 141.142.220.118 40526 141.142.2.2 53 udp dns 391 38 183 SF F 0 Dd 1 66 1 211

View file

@@ -54,8 +54,8 @@
# Extent, type='conn'
ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
1300475167.096535 CXWv6p3arKYeMETxOg 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0
1300475167.097012 CjhGID4nQcgTWjvg4c fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0
1300475167.099816 CCvvfg3TEfuqmmG4bh 141.142.220.50 5353 224.0.0.251 5353 udp 0.000000 0 0 S0 F 0 D 1 179 0 0
1300475167.097012 CjhGID4nQcgTWjvg4c fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp dns 0.000000 0 0 S0 F 0 D 1 199 0 0
1300475167.099816 CCvvfg3TEfuqmmG4bh 141.142.220.50 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 179 0 0
1300475168.853899 CPbrpk1qSsw6ESzHV4 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF F 0 Dd 1 66 1 117
1300475168.854378 C6pKV8GSxOnSLghOa 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF F 0 Dd 1 80 1 127
1300475168.854837 CIPOse170MGiRM1Qf4 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211

View file

@@ -1,6 +1,6 @@
1300475167.09653|CXWv6p3arKYeMETxOg|141.142.220.202|5353|224.0.0.251|5353|udp|dns||||S0||0|D|1|73|0|0|(empty)
1300475167.09701|CjhGID4nQcgTWjvg4c|fe80::217:f2ff:fed7:cf65|5353|ff02::fb|5353|udp|||||S0||0|D|1|199|0|0|(empty)
1300475167.09982|CCvvfg3TEfuqmmG4bh|141.142.220.50|5353|224.0.0.251|5353|udp|||||S0||0|D|1|179|0|0|(empty)
1300475167.09701|CjhGID4nQcgTWjvg4c|fe80::217:f2ff:fed7:cf65|5353|ff02::fb|5353|udp|dns||||S0||0|D|1|199|0|0|(empty)
1300475167.09982|CCvvfg3TEfuqmmG4bh|141.142.220.50|5353|224.0.0.251|5353|udp|dns||||S0||0|D|1|179|0|0|(empty)
1300475168.652|CsRx2w45OKnoww6xl4|141.142.220.118|35634|208.80.152.2|80|tcp||0.0613288879394531|463|350|OTH||0|DdA|2|567|1|402|(empty)
1300475168.72401|CRJuHdVW0XPVINV8a|141.142.220.118|48649|208.80.152.118|80|tcp|http|0.1199049949646|525|232|S1||0|ShADad|4|741|3|396|(empty)
1300475168.8539|CPbrpk1qSsw6ESzHV4|141.142.220.118|43927|141.142.2.2|53|udp|dns|0.000435113906860352|38|89|SF||0|Dd|1|66|1|117|(empty)

View file

@@ -0,0 +1 @@
1389719059.311687 warning in /Users/jsiwek/Projects/bro/bro/scripts/base/misc/find-filtered-trace.bro, line 48: The analyzed trace file was determined to contain only TCP control packets, which may indicate it's been pre-filtered. By default, Bro reports the missing segments for this type of trace, but the 'detect_filtered_trace' option may be toggled if that's not desired.

View file

@@ -3,8 +3,8 @@
#empty_field (empty)
#unset_field -
#path dns
#open 2013-08-26-19-04-08
#open 2014-01-28-14-55-04
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool
1359565680.761790 CXWv6p3arKYeMETxOg 192.168.6.10 53209 192.168.129.36 53 udp 41477 paypal.com 1 C_INTERNET 48 DNSKEY 0 NOERROR F F T F 1 - - F
#close 2013-08-26-19-04-08
1359565680.761790 CXWv6p3arKYeMETxOg 192.168.6.10 53209 192.168.129.36 53 udp 41477 paypal.com 1 C_INTERNET 48 DNSKEY 0 NOERROR F F T T 1 <unknown type=48>,<unknown type=48>,<unknown type=46>,<unknown type=46> 455.000000,455.000000,455.000000,455.000000 F
#close 2014-01-28-14-55-04

View file

@@ -3,9 +3,9 @@
#empty_field (empty)
#unset_field -
#path dns
#open 2013-08-26-19-04-08
#open 2014-01-28-14-58-56
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool
1363716396.798072 CXWv6p3arKYeMETxOg 55.247.223.174 27285 222.195.43.124 53 udp 21140 www.cmu.edu 1 C_INTERNET 1 A 0 NOERROR T F F F 1 www-cmu.andrew.cmu.edu,www-cmu-2.andrew.cmu.edu,128.2.10.163,www-cmu.andrew.cmu.edu 86400.000000,5.000000,21600.000000,86400.000000 F
1363716396.798374 CXWv6p3arKYeMETxOg 55.247.223.174 27285 222.195.43.124 53 udp 21140 - - - - - 0 NOERROR T F F F 0 www-cmu-2.andrew.cmu.edu,128.2.10.163 5.000000,21600.000000 F
#close 2013-08-26-19-04-08
1363716396.798072 CXWv6p3arKYeMETxOg 55.247.223.174 27285 222.195.43.124 53 udp 21140 www.cmu.edu 1 C_INTERNET 1 A 0 NOERROR T F F F 1 www-cmu.andrew.cmu.edu,<unknown type=46>,www-cmu-2.andrew.cmu.edu,128.2.10.163 86400.000000,86400.000000,5.000000,21600.000000 F
1363716396.798374 CXWv6p3arKYeMETxOg 55.247.223.174 27285 222.195.43.124 53 udp 21140 - - - - - 0 NOERROR T F F F 0 www-cmu.andrew.cmu.edu,<unknown type=46>,www-cmu-2.andrew.cmu.edu,128.2.10.163 86400.000000,86400.000000,5.000000,21600.000000 F
#close 2014-01-28-14-58-56

View file

@@ -3,9 +3,10 @@
#empty_field (empty)
#unset_field -
#path weird
#open 2013-08-26-19-36-33
#open 2014-02-13-20-36-35
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer
#types time string addr port addr port string string bool string
1363716396.798286 CXWv6p3arKYeMETxOg 55.247.223.174 27285 222.195.43.124 53 DNS_RR_unknown_type - F bro
1363716396.798374 CXWv6p3arKYeMETxOg 55.247.223.174 27285 222.195.43.124 53 dns_unmatched_reply - F bro
#close 2013-08-26-19-36-33
1363716396.798374 - - - - - dns_unmatched_msg - F bro
#close 2014-02-13-20-36-35

View file

@@ -1,10 +0,0 @@
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path dns
#open 2013-08-26-19-04-37
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected auth addl
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool table[string] table[string]
930613226.518174 CXWv6p3arKYeMETxOg 212.180.42.100 25000 131.243.64.3 53 tcp 34798 - - - - - 0 NOERROR F F F T 0 4.3.2.1 31337.000000 F - -
#close 2013-08-26-19-04-37

Binary file not shown.

Binary file not shown.

View file

@@ -10,7 +10,7 @@ event bro_init()
print identify_data(a, T);
# PNG image
local b = "\x89\x50\x4e\x47\x0d\x0a\x1a\x0a";
local b = "\x89\x50\x4e\x47\x0d\x0a\x1a\x0a\x00";
print identify_data(b, F);
print identify_data(b, T);
}

View file

@@ -0,0 +1,63 @@
# Needs perftools support.
#
# @TEST-GROUP: leaks
#
# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks
#
# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local btest-bg-run bro bro -b -m -r $TRACES/wikipedia.trace %INPUT
# @TEST-EXEC: btest-bg-wait 15
@load base/frameworks/input
redef exit_only_after_terminate = T;
global c: count = 0;
type OneLine: record {
s: string;
};
event line(description: Input::EventDescription, tpe: Input::Event, s: string)
{
print "1", "Line";
}
event InputRaw::process_finished(name: string, source:string, exit_code:count, signal_exit:bool)
{
Input::remove(name);
print "2", name;
}
function run(): count
{
Input::add_event([$name=unique_id(""),
$source=fmt("%s |", "date"),
$reader=Input::READER_RAW,
$mode=Input::STREAM,
$fields=OneLine,
$ev=line,
$want_record=F]);
return 1;
}
event do()
{
run();
}
event do_term() {
terminate();
}
event bro_init() {
schedule 1sec {
do()
};
schedule 3sec {
do_term()
};
}

View file

@@ -0,0 +1,2 @@
# @TEST-EXEC: bro -C -r $TRACES/mpls-in-vlan.trace
# @TEST-EXEC: btest-diff conn.log

View file

@@ -1 +1 @@
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_01.bro
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_01.bro

View file

@@ -1 +1 @@
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_02.bro
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_02.bro

View file

@@ -3,7 +3,7 @@
connection_record_02.bro
@load base/protocols/conn
@load base/protocols/dns
@load base/protocols/http
event connection_state_remove(c: connection)
{

View file

@@ -0,0 +1,4 @@
# @TEST-EXEC: bro -r $TRACES/http/bro.org-filtered.pcap >out1 2>&1
# @TEST-EXEC: bro -r $TRACES/http/bro.org-filtered.pcap "FilteredTraceDetection::enable=F" >out2 2>&1
# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff out1
# @TEST-EXEC: btest-diff out2
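The FilteredTraceDetection::enable toggle used on the command line above can equally be set from a script; a one-line sketch, assuming the option name exactly as it appears in that command:

# Disable the pre-filtered-trace heuristic for this run.
redef FilteredTraceDetection::enable = F;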

View file

@@ -1,4 +0,0 @@
# @TEST-EXEC: bro -r $TRACES/dns-session.trace %INPUT
# @TEST-EXEC: btest-diff dns.log
@load protocols/dns/auth-addl

View file

@@ -0,0 +1,4 @@
# @TEST-EXEC: bro -r $TRACES/dns-inverse-query.trace %INPUT
# @TEST-EXEC: test ! -e dns.log
@load protocols/dns/auth-addl