Merge remote-tracking branch 'origin/master' into topic/matthias/bloom-filter

Matthias Vallentin 2013-07-22 22:26:15 +02:00
commit 69a7dd03bc
229 changed files with 7840 additions and 2802 deletions

CHANGES

@ -1,4 +1,169 @@
2.1-824 | 2013-07-22 14:25:14 -0400
* Fixed a scriptland state issue that manifested especially badly on proxies. (Seth Hall)
* Another test fix. (Robin Sommer)
* Canonifying the output of core.print-bpf-filters. (Robin Sommer)
2.1-820 | 2013-07-18 12:30:04 -0700
* Extending external canonifier to remove fractional values from
capture_loss.log. (Robin Sommer)
* Canonifying internal order for plugins and their components to
make it deterministic. (Robin Sommer)
* Small raw reader tweaks that got left out earlier. (Robin Sommer)
2.1-814 | 2013-07-15 18:18:20 -0700
* Fixing raw reader crash when accessing nonexistent file, and
memory leak when reading from file. Addresses #1038. (Bernhard
Amann)
2.1-811 | 2013-07-14 08:01:54 -0700
* Bump sqlite to 3.7.17. (Bernhard Amann)
* Small test fixes. (Seth Hall)
* Fix a bug where the same analyzer tag was reused for two different
analyzers. (Seth Hall)
* Moved DPD signatures into script-specific directories. Left out
the BitTorrent signatures pending further updates to that
analyzer. (Seth Hall)
2.1-802 | 2013-07-10 10:55:14 -0700
* Const adjustment for methods. (Jon Siwek)
2.1-798 | 2013-07-08 13:05:37 -0700
* Rewrite of the packet filter framework. (Seth Hall)
This includes:
- Plugin interface for adding filtering mechanisms.
- Integrated the packet filter framework with the analyzer
framework to retrieve well-known ports from there.
- Support for BPF-based load balancing (IPv4 and IPv6). This will
tie in with upcoming BroControl support for configuring this.
- Support for BPF-based connection sampling.
- Support for "shunting" traffic with BPF filters.
- Replaced PacketFilter::all_packets with
PacketFilter::enable_auto_protocol_capture_filters.
2.1-784 | 2013-07-04 22:28:48 -0400
* Add a call to lookup_connection in SSH scripts to update connval. (Seth Hall)
* Updating submodule(s). (Robin Sommer)
2.1-782 | 2013-07-03 17:00:39 -0700
* Remove the SSL log queueing mechanism that was included with the
log delay mechanism. (Seth Hall)
2.1-780 | 2013-07-03 16:46:26 -0700
* Rewrite of the RAW input reader for improved robustness and new
features. (Bernhard Amann) This includes:
- Send "end_of_data" event for all kind of streams.
- Send "process_finished" event with exit code of child
process at process termination.
- Expose name of input stream to readers.
- Better error handling.
- New "force_kill" option which SIGKILLs processes on reader termination.
- Supports reading from stdout and stderr simultaneously.
- Support sending data to stdin of child process.
- Streaming reads from external commands work without blocking.
2.1-762 | 2013-07-03 16:33:22 -0700
* Fix to correct support for TLS 1.2. Addresses #1020. (Seth Hall,
with help from Rafal Lesniak).
2.1-760 | 2013-07-03 16:31:36 -0700
* Teach broxygen to generate protocol analyzer plugin reference.
(Jon Siwek)
* Adding 'const' to a number of C++ methods. (Jon Siwek)
2.1-757 | 2013-07-03 16:28:10 -0700
* Fix redef of table index from clearing table.
`redef foo["x"] = 1` now acts like `redef foo += { ["x"] = 1 }`
instead of `redef foo = { ["x"] = 1 }`.
Addresses #1013. (Jon Siwek)
2.1-755 | 2013-07-03 16:22:43 -0700
* Add a general file analysis overview/how-to document. (Jon Siwek)
* Improve file analysis doxygen comments. (Jon Siwek)
* Improve tracking of HTTP file extraction. http.log now has files
taken from request and response bodies in different fields for
each, and can now track multiple files per body. That is, the
"extraction_file" field is now "extracted_request_files" and
"extracted_response_files". Addresses #988. (Jon Siwek)
* Fix HTTP multipart body file analysis. Each part now gets assigned
a different file handle/id. (Jon Siwek)
* Remove logging of analyzers field of FileAnalysis::Info. (Jon
Siwek)
* Remove extraction counter in default file extraction scripts. (Jon
Siwek)
* Remove FileAnalysis::postpone_timeout.
FileAnalysis::set_timeout_interval can now perform same function.
(Jon Siwek)
* Make default get_file_handle handlers &priority=5 so they're
easier to override. (Jon Siwek)
* Add input interface to forward data for file analysis. The new
Input::add_analysis function is used to automatically forward
input data on to the file analysis framework. (Jon Siwek)
* File analysis framework interface simplifications. (Jon Siwek)
- Remove script-layer data input interface (will be managed directly
by input framework later).
- Only track files internally by file id hash. Chance of collision
too small to justify also tracking unique file string.
2.1-741 | 2013-06-07 17:28:50 -0700
* Fixing typo that could cause an assertion to falsely trigger.
(Robin Sommer)
2.1-740 | 2013-06-07 16:37:32 -0700
* Fix for CMake 2.6.x. (Robin Sommer)
2.1-738 | 2013-06-07 08:38:13 -0700
* Remove invalid free on non-allocated pointer in hash function
object. Addresses #1018. (Matthias Vallentin)
2.1-736 | 2013-06-06 10:05:20 -0700
* New "magic constants" @DIR and @FILENAME that expand to the

NEWS

@ -73,10 +73,12 @@ New Functionality
script file name without path, respectively. (Jon Siwek)
- The new file analysis framework moves most of the processing of file
content from script-land into the core, where it belongs. Much of
this is an internal change, the framework comes with the following
user-visible functionality (some of that was already available
before, but done differently):
content from script-land into the core, where it belongs. See
doc/file-analysis.rst for more information.
Much of this is an internal change, but the framework also comes
with the following user-visible functionality (some of that was
already available before, but done differently):
[TODO: This will probably change with further script updates.]
@ -102,6 +104,10 @@ New Functionality
- IRC DCC transfers: Record to disk.
- New packet filter framework supports BPF-based load-balancing,
shunting, and sampling; plus plugin support to customize filters
dynamically.
Changed Functionality
~~~~~~~~~~~~~~~~~~~~~
@ -180,6 +186,12 @@ Changed Functionality
- The SSH::Login notice has been superseded by a corresponding
intelligence framework observation (SSH::SUCCESSFUL_LOGIN).
- PacketFilter::all_packets has been replaced with
PacketFilter::enable_auto_protocol_capture_filters.
- We removed the BitTorrent DPD signatures pending further updates to
that analyzer.
Bro 2.1
-------


@ -1 +1 @@
2.1-736
2.1-824

@ -1 +1 @@
Subproject commit a1aaa1608ef08761a211b1e251449d796ba5e4a0
Subproject commit 0cd102805e73343cab3f9fd4a76552e13940dad9

@ -1 +1 @@
Subproject commit d5b8df42cb9c398142e02d4bf8ede835fd0227f4
Subproject commit ce366206e3407e534a786ad572c342e9f9fef26b


@ -82,7 +82,8 @@ class BroGeneric(ObjectDescription):
objects = self.env.domaindata['bro']['objects']
key = (self.objtype, name)
if key in objects:
if ( key in objects and self.objtype != "id" and
self.objtype != "type" ):
self.env.warn(self.env.docname,
'duplicate description of %s %s, ' %
(self.objtype, name) +
@ -150,6 +151,12 @@ class BroEnum(BroGeneric):
#self.indexnode['entries'].append(('single', indextext,
# targetname, targetname))
m = sig.split()
if len(m) < 2:
self.env.warn(self.env.docname,
"bro:enum directive missing argument(s)")
return
if m[1] == "Notice::Type":
if 'notices' not in self.env.domaindata['bro']:
self.env.domaindata['bro']['notices'] = []

doc/file-analysis.rst

@ -0,0 +1,184 @@
=============
File Analysis
=============
.. rst-class:: opening
In the past, writing Bro scripts to analyze file content could be
cumbersome because the content would be presented in different ways,
via events, at the script-layer depending on which network protocol
was involved in the file transfer. Scripts written to analyze files over one protocol
file transfer. Scripts written to analyze files over one protocol
would have to be copied and modified to fit other protocols. The
file analysis framework (FAF) instead provides a generalized
presentation of file-related information. The information regarding
the protocol involved in transporting a file over the network is
still available, but it no longer has to dictate how one organizes
their scripting logic to handle it. A goal of the FAF is to
provide analysis specifically for files that is analogous to the
analysis Bro provides for network connections.
.. contents::
File Lifecycle Events
=====================
The key events that may occur during the lifetime of a file are:
:bro:see:`file_new`, :bro:see:`file_over_new_connection`,
:bro:see:`file_timeout`, :bro:see:`file_gap`, and
:bro:see:`file_state_remove`. Handling any of these events provides
some information about the file such as which network
:bro:see:`connection` and protocol are transporting the file, how many
bytes have been transferred so far, and its MIME type.
.. code:: bro
event connection_state_remove(c: connection)
{
print "connection_state_remove";
print c$uid;
print c$id;
for ( s in c$service )
print s;
}
event file_state_remove(f: fa_file)
{
print "file_state_remove";
print f$id;
for ( cid in f$conns )
{
print f$conns[cid]$uid;
print cid;
}
print f$source;
}
This example might give output like::
file_state_remove
Cx92a0ym5R8
REs2LQfVW2j
[orig_h=10.0.0.7, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp]
HTTP
connection_state_remove
REs2LQfVW2j
[orig_h=10.0.0.7, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp]
HTTP
This doesn't perform any interesting analysis yet, but does highlight
the similarity between analysis of connections and files. Connections
are identified by the usual 5-tuple or a convenient UID string, while
files are identified just by a string in the same format as the
connection UID. So there are unique ways to identify both files and
connections, and a file holds references to the connection (or
connections) that transported it.
Adding Analysis
===============
There are builtin file analyzers which can be attached to files. Once
attached, they start receiving the contents of the file as Bro extracts
it from an ongoing network connection. What they do with the file
contents is up to the particular file analyzer implementation, but
they'll typically either report further information about the file via
events (e.g. :bro:see:`FileAnalysis::ANALYZER_MD5` will report the
file's MD5 checksum via :bro:see:`file_hash` once calculated) or they'll
have some side effect (e.g. :bro:see:`FileAnalysis::ANALYZER_EXTRACT`
will write the contents of the file out to the local file system).
In the future there may be file analyzers that automatically attach to
files based on heuristics, similar to the Dynamic Protocol Detection
(DPD) framework for connections, but many will always require an
explicit attachment decision:
.. code:: bro
event file_new(f: fa_file)
{
print "new file", f$id;
if ( f?$mime_type && f$mime_type == "text/plain" )
FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_MD5]);
}
event file_hash(f: fa_file, kind: string, hash: string)
{
print "file_hash", f$id, kind, hash;
}
This script calculates MD5s for all plain text files and might give
output::
new file, Cx92a0ym5R8
file_hash, Cx92a0ym5R8, md5, 397168fd09991a0e712254df7bc639ac
Some file analyzers might have tunable parameters that need to be
specified in the call to :bro:see:`FileAnalysis::add_analyzer`:
.. code:: bro
event file_new(f: fa_file)
{
FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_EXTRACT,
$extract_filename="./myfile"]);
}
In this case, the file extraction analyzer doesn't generate any further
events, but does have the side effect of writing out the file contents
to the local file system at the specified location of ``./myfile``. Of
course, for a network with more than a single file being transferred,
it's probably preferable to specify a different extraction path for
each file, unlike in this example.
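As a minimal sketch of that idea (hypothetical, not part of the original
document), the extraction path can be derived from the file's unique
identifier so that concurrent files don't overwrite each other:

.. code:: bro

    event file_new(f: fa_file)
        {
        # f$id is unique per file, so every file gets its own output path.
        local fname = fmt("./extract-%s.dat", f$id);
        FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_EXTRACT,
                                       $extract_filename=fname]);
        }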
Regardless of which file analyzers end up acting on a file, general
information about the file (e.g. size, time of last data transferred,
MIME type, etc.) is logged in ``file_analysis.log``.
Input Framework Integration
===========================
The FAF comes with a simple way to integrate with the :doc:`Input
Framework <input>`, so that Bro can analyze files from external sources
in the same way it analyzes files that it sees coming over traffic from
a network interface it's monitoring. It only requires a call to
:bro:see:`Input::add_analysis`:
.. code:: bro
redef exit_only_after_terminate = T;
event file_new(f: fa_file)
{
print "new file", f$id;
FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_MD5]);
}
event file_state_remove(f: fa_file)
{
Input::remove(f$source);
terminate();
}
event file_hash(f: fa_file, kind: string, hash: string)
{
print "file_hash", f$id, kind, hash;
}
event bro_init()
{
local source: string = "./myfile";
Input::add_analysis([$source=source, $name=source]);
}
Note that the "source" field of :bro:see:`fa_file` corresponds to the
"name" field of :bro:see:`Input::AnalysisDescription` since that is what
the input framework uses to uniquely identify an input stream.
The output of the above script may be::
new file, G1fS2xthS4l
file_hash, G1fS2xthS4l, md5, 54098b367d2e87b078671fad4afb9dbb
Nothing that special, but it at least verifies the MD5 file analyzer
saw all the bytes of the input file and calculated the checksum
correctly!


@ -25,6 +25,7 @@ Frameworks
notice
logging
input
file-analysis
cluster
signatures
@ -45,7 +46,7 @@ Script Reference
scripts/packages
scripts/index
scripts/builtins
scripts/bifs
scripts/proto-analyzers
Other Bro Components
--------------------


@ -15,11 +15,11 @@ endif ()
#
# srcDir: the directory which contains broInput
# broInput: the file name of a bro policy script, any path prefix of this
# argument will be used to derive what path under policy/ the generated
# argument will be used to derive what path under scripts/ the generated
# documentation will be placed.
# group: optional name of group that the script documentation will belong to.
# If this is not given, .bif files automatically get their own group or
# the group is automatically by any path portion of the broInput argument.
# If this is not given, the group is automatically set to any path portion
# of the broInput argument.
#
# In addition to adding the makefile target, several CMake variables are set:
#
@ -64,8 +64,6 @@ macro(REST_TARGET srcDir broInput)
if (NOT "${ARGN}" STREQUAL "")
set(group ${ARGN})
elseif (${broInput} MATCHES "\\.bif\\.bro$")
set(group bifs)
elseif (relDstDir)
set(group ${relDstDir}/index)
# add package index to master package list if not already in it
@ -126,6 +124,29 @@ endmacro(REST_TARGET)
# Schedule Bro scripts for which to generate documentation.
include(DocSourcesList.cmake)
# This reST target is independent of a particular Bro script...
add_custom_command(OUTPUT proto-analyzers.rst
# delete any leftover state from previous bro runs
COMMAND "${CMAKE_COMMAND}"
ARGS -E remove_directory .state
# generate the reST documentation using bro
COMMAND BROPATH=${BROPATH}:${srcDir} BROMAGIC=${CMAKE_SOURCE_DIR}/magic ${CMAKE_BINARY_DIR}/src/bro
ARGS -b -Z base/init-bare.bro || (rm -rf .state *.log *.rst && exit 1)
# move generated doc into a new directory tree that
# defines the final structure of documents
COMMAND "${CMAKE_COMMAND}"
ARGS -E make_directory ${dstDir}
COMMAND "${CMAKE_COMMAND}"
ARGS -E copy proto-analyzers.rst ${dstDir}
# clean up the build directory
COMMAND rm
ARGS -rf .state *.log *.rst
DEPENDS bro
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMENT "[Bro] Generating reST docs for proto-analyzers.rst"
)
list(APPEND ALL_REST_OUTPUTS proto-analyzers.rst)
# create temporary list of all docs to include in the master policy/index file
file(WRITE ${MASTER_POLICY_INDEX} "${MASTER_POLICY_INDEX_TEXT}")


@ -34,6 +34,7 @@ rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_DNS.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_FTP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_FTP.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_File.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_FileHash.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Finger.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_GTPv1.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Gnutella.events.bif.bro)
@ -111,6 +112,7 @@ rest_target(${psd} base/frameworks/notice/non-cluster.bro)
rest_target(${psd} base/frameworks/notice/weird.bro)
rest_target(${psd} base/frameworks/packet-filter/main.bro)
rest_target(${psd} base/frameworks/packet-filter/netstats.bro)
rest_target(${psd} base/frameworks/packet-filter/utils.bro)
rest_target(${psd} base/frameworks/reporter/main.bro)
rest_target(${psd} base/frameworks/signatures/main.bro)
rest_target(${psd} base/frameworks/software/main.bro)
@ -189,6 +191,7 @@ rest_target(${psd} policy/frameworks/intel/smtp-url-extraction.bro)
rest_target(${psd} policy/frameworks/intel/smtp.bro)
rest_target(${psd} policy/frameworks/intel/ssl.bro)
rest_target(${psd} policy/frameworks/intel/where-locations.bro)
rest_target(${psd} policy/frameworks/packet-filter/shunt.bro)
rest_target(${psd} policy/frameworks/software/version-changes.bro)
rest_target(${psd} policy/frameworks/software/vulnerable.bro)
rest_target(${psd} policy/integration/barnyard2/main.bro)
@ -197,6 +200,7 @@ rest_target(${psd} policy/integration/collective-intel/main.bro)
rest_target(${psd} policy/misc/app-metrics.bro)
rest_target(${psd} policy/misc/capture-loss.bro)
rest_target(${psd} policy/misc/detect-traceroute/main.bro)
rest_target(${psd} policy/misc/load-balancing.bro)
rest_target(${psd} policy/misc/loaded-scripts.bro)
rest_target(${psd} policy/misc/profiling.bro)
rest_target(${psd} policy/misc/scan.bro)


@ -1,5 +0,0 @@
.. This is a stub doc to which broxygen appends during the build process
Built-In Functions (BIFs)
=========================


@ -9,9 +9,9 @@
##! :bro:enum:`Analyzer::ANALYZER_HTTP`. These tags are defined internally by
##! the analyzers themselves, and documented in their analyzer-specific
##! description along with the events that they generate.
##!
##! .. todo:: The ``ANALYZER_*`` values are in fact not yet documented; we
##! need to add that to Broxygen.
@load base/frameworks/packet-filter/utils
module Analyzer;
export {
@ -100,6 +100,20 @@ export {
global schedule_analyzer: function(orig: addr, resp: addr, resp_p: port,
analyzer: Analyzer::Tag, tout: interval) : bool;
## Automatically creates a BPF filter for the specified protocol based
## on the data supplied for the protocol through the
## :bro:see:`Analyzer::register_for_ports` function.
##
## tag: The analyzer tag.
##
## Returns: BPF filter string.
global analyzer_to_bpf: function(tag: Analyzer::Tag): string;
## Create a BPF filter which matches all of the ports defined
## by the various protocol analysis scripts as "registered ports"
## for the protocol.
global get_bpf: function(): string;
## A set of analyzers to disable by default at startup. The default set
## contains legacy analyzers that are no longer supported.
global disabled_analyzers: set[Analyzer::Tag] = {
@ -179,3 +193,25 @@ function schedule_analyzer(orig: addr, resp: addr, resp_p: port,
return __schedule_analyzer(orig, resp, resp_p, analyzer, tout);
}
function analyzer_to_bpf(tag: Analyzer::Tag): string
{
# Return an empty string if an undefined analyzer was given.
if ( tag !in ports )
return "";
local output = "";
for ( p in ports[tag] )
output = PacketFilter::combine_filters(output, "or", PacketFilter::port_to_bpf(p));
return output;
}
function get_bpf(): string
{
local output = "";
for ( tag in ports )
{
output = PacketFilter::combine_filters(output, "or", analyzer_to_bpf(tag));
}
return output;
}
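A hypothetical usage sketch (not part of this commit) for the two new
functions, printing the automatically generated BPF expressions:

event bro_init()
    {
    # BPF expression covering just HTTP's registered ports.
    print Analyzer::analyzer_to_bpf(Analyzer::ANALYZER_HTTP);
    # BPF expression covering the registered ports of all analyzers.
    print Analyzer::get_bpf();
    }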


@ -216,12 +216,9 @@ function setup_peer(p: event_peer, node: Node)
request_remote_events(p, node$events);
}
if ( node?$capture_filter )
if ( node?$capture_filter && node$capture_filter != "" )
{
local filter = node$capture_filter;
if ( filter == "" )
filter = PacketFilter::default_filter;
do_script_log(p, fmt("sending capture_filter: %s", filter));
send_capture_filter(p, filter);
}


@ -1,212 +0,0 @@
# Signatures to initiate dynamic protocol detection.
signature dpd_ftp_client {
ip-proto == tcp
payload /(|.*[\n\r]) *[uU][sS][eE][rR] /
tcp-state originator
}
# Match for server greeting (220, 120) and for login or passwd
# required (230, 331).
signature dpd_ftp_server {
ip-proto == tcp
payload /[\n\r ]*(120|220)[^0-9].*[\n\r] *(230|331)[^0-9]/
tcp-state responder
requires-reverse-signature dpd_ftp_client
enable "ftp"
}
signature dpd_http_client {
ip-proto == tcp
payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
tcp-state originator
}
signature dpd_http_server {
ip-proto == tcp
payload /^HTTP\/[0-9]/
tcp-state responder
requires-reverse-signature dpd_http_client
enable "http"
}
signature dpd_bittorrenttracker_client {
ip-proto == tcp
payload /^.*\/announce\?.*info_hash/
tcp-state originator
}
signature dpd_bittorrenttracker_server {
ip-proto == tcp
payload /^HTTP\/[0-9]/
tcp-state responder
requires-reverse-signature dpd_bittorrenttracker_client
enable "bittorrenttracker"
}
signature dpd_bittorrent_peer1 {
ip-proto == tcp
payload /^\x13BitTorrent protocol/
tcp-state originator
}
signature dpd_bittorrent_peer2 {
ip-proto == tcp
payload /^\x13BitTorrent protocol/
tcp-state responder
requires-reverse-signature dpd_bittorrent_peer1
enable "bittorrent"
}
signature irc_client1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Uu][Ss][Ee][Rr] +.+[\n\r]+ *[Nn][Ii][Cc][Kk] +.*[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_client2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Nn][Ii][Cc][Kk] +.+[\r\n]+ *[Uu][Ss][Ee][Rr] +.+[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_server_reply {
ip-proto == tcp
payload /^(|.*[\n\r])(:[^ \n\r]+ )?[0-9][0-9][0-9] /
tcp-state responder
}
signature irc_server_to_server1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
}
signature irc_server_to_server2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
requires-reverse-signature irc_server_to_server1
enable "irc"
}
signature dpd_smtp_client {
ip-proto == tcp
payload /(|.*[\n\r])[[:space:]]*([hH][eE][lL][oO]|[eE][hH][lL][oO])/
requires-reverse-signature dpd_smtp_server
enable "smtp"
tcp-state originator
}
signature dpd_smtp_server {
ip-proto == tcp
payload /^[[:space:]]*220[[:space:]-]/
tcp-state responder
}
signature dpd_ssh_client {
ip-proto == tcp
payload /^[sS][sS][hH]-/
requires-reverse-signature dpd_ssh_server
enable "ssh"
tcp-state originator
}
signature dpd_ssh_server {
ip-proto == tcp
payload /^[sS][sS][hH]-/
tcp-state responder
}
signature dpd_pop3_server {
ip-proto == tcp
payload /^\+OK/
requires-reverse-signature dpd_pop3_client
enable "pop3"
tcp-state responder
}
signature dpd_pop3_client {
ip-proto == tcp
payload /(|.*[\r\n])[[:space:]]*([uU][sS][eE][rR][[:space:]]|[aA][pP][oO][pP][[:space:]]|[cC][aA][pP][aA]|[aA][uU][tT][hH])/
tcp-state originator
}
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
}
signature dpd_ssl_client {
ip-proto == tcp
# Client hello.
payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
tcp-state originator
}
signature dpd_ayiya {
ip-proto = udp
payload /^..\x11\x29/
enable "ayiya"
}
signature dpd_teredo {
ip-proto = udp
payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
enable "teredo"
}
signature dpd_socks4_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state originator
}
signature dpd_socks4_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state responder
enable "socks"
}
signature dpd_socks4_reverse_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state responder
}
signature dpd_socks4_reverse_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_reverse_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state originator
enable "socks"
}
signature dpd_socks5_client {
ip-proto == tcp
# Watch for a few authentication methods to reduce false positives.
payload /^\x05.[\x00\x01\x02]/
tcp-state originator
}
signature dpd_socks5_server {
ip-proto == tcp
requires-reverse-signature dpd_socks5_client
# Watch for a single authentication method to be chosen by the server or
# the server to indicate that no authentication is required.
payload /^\x05(\x00|\x01[\x00\x01\x02])/
tcp-state responder
enable "socks"
}


@ -3,8 +3,6 @@
module DPD;
@load-sigs ./dpd.sig
export {
## Add the DPD logging stream identifier.
redef enum Log::ID += { LOG };


@ -15,18 +15,20 @@ export {
## A structure which represents a desired type of file analysis.
type AnalyzerArgs: record {
## The type of analysis.
tag: Analyzer;
tag: FileAnalysis::Tag;
## The local filename to which to write an extracted file. Must be
## set when *tag* is :bro:see:`FileAnalysis::ANALYZER_EXTRACT`.
extract_filename: string &optional;
## An event which will be generated for all new file contents,
## chunk-wise.
## chunk-wise. Used when *tag* is
## :bro:see:`FileAnalysis::ANALYZER_DATA_EVENT`.
chunk_event: event(f: fa_file, data: string, off: count) &optional;
## An event which will be generated for all new file contents,
## stream-wise.
## stream-wise. Used when *tag* is
## :bro:see:`FileAnalysis::ANALYZER_DATA_EVENT`.
stream_event: event(f: fa_file, data: string) &optional;
} &redef;
@ -87,7 +89,7 @@ export {
conn_uids: set[string] &log;
## A set of analysis types done during the file analysis.
analyzers: set[Analyzer] &log;
analyzers: set[FileAnalysis::Tag];
## Local filenames of extracted files.
extracted_files: set[string] &log;
@ -120,7 +122,9 @@ export {
## Sets the *timeout_interval* field of :bro:see:`fa_file`, which is
## used to determine the length of inactivity that is allowed for a file
## before internal state related to it is cleaned up.
## before internal state related to it is cleaned up. When used within a
## :bro:see:`file_timeout` handler, the analysis will delay timing out
## again for the period specified by *t*.
##
## f: the file.
##
@ -130,18 +134,6 @@ export {
## for the *id* isn't currently active.
global set_timeout_interval: function(f: fa_file, t: interval): bool;
## Postpones the timeout of file analysis for a given file.
## When used within a :bro:see:`file_timeout` handler, the analysis
## will delay timing out for the period of time indicated by
## the *timeout_interval* field of :bro:see:`fa_file`, which can be set
## with :bro:see:`FileAnalysis::set_timeout_interval`.
##
## f: the file.
##
## Returns: true if the timeout will be postponed, or false if analysis
## for the *id* isn't currently active.
global postpone_timeout: function(f: fa_file): bool;
## Adds an analyzer to the analysis of a given file.
##
## f: the file.
@ -171,58 +163,6 @@ export {
## rest of its contents, or false if analysis for the *id*
## isn't currently active.
global stop: function(f: fa_file): bool;
## Sends a sequential stream of data in for file analysis.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## data: bytestring contents of the file to analyze.
global data_stream: function(source: string, data: string);
## Sends a non-sequential chunk of data in for file analysis.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## data: bytestring contents of the file to analyze.
##
## offset: the offset within the file that this chunk starts.
global data_chunk: function(source: string, data: string, offset: count);
## Signals a content gap in the file bytestream.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## offset: the offset within the file that this gap starts.
##
## len: the number of bytes that are missing.
global gap: function(source: string, offset: count, len: count);
## Signals the total size of a file.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## size: the number of bytes that comprise the full file.
global set_size: function(source: string, size: count);
## Signals the end of a file.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
global eof: function(source: string);
}
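A hypothetical sketch (not part of this commit): with postpone_timeout()
removed, the same effect comes from calling set_timeout_interval() inside
a file_timeout handler, reusing the file's own *timeout_interval* field:

event file_timeout(f: fa_file)
    {
    FileAnalysis::set_timeout_interval(f, f$timeout_interval);
    }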
redef record fa_file += {
@ -259,11 +199,6 @@ function set_timeout_interval(f: fa_file, t: interval): bool
return __set_timeout_interval(f$id, t);
}
function postpone_timeout(f: fa_file): bool
{
return __postpone_timeout(f$id);
}
function add_analyzer(f: fa_file, args: AnalyzerArgs): bool
{
if ( ! __add_analyzer(f$id, args) ) return F;
@ -287,31 +222,6 @@ function stop(f: fa_file): bool
return __stop(f$id);
}
function data_stream(source: string, data: string)
{
__data_stream(source, data);
}
function data_chunk(source: string, data: string, offset: count)
{
__data_chunk(source, data, offset);
}
function gap(source: string, offset: count, len: count)
{
__gap(source, offset, len);
}
function set_size(source: string, size: count)
{
__set_size(source, size);
}
function eof(source: string)
{
__eof(source);
}
event bro_init() &priority=5
{
Log::create_stream(FileAnalysis::LOG,


@ -122,6 +122,34 @@ export {
config: table[string] of string &default=table();
};
## A file analysis input stream type used to forward input data to the
## file analysis framework.
type AnalysisDescription: record {
## String that allows the reader to find the source.
## For `READER_ASCII`, this is the filename.
source: string;
## Reader to use for this stream. Compatible readers must be
## able to accept a filter of a single string type (i.e.
## they read a byte stream).
reader: Reader &default=Input::READER_BINARY;
## Read mode to use for this stream.
mode: Mode &default=default_mode;
## Descriptive name that uniquely identifies the input source.
## Can be used to remove a stream at a later time.
## This will also be used for the unique *source* field of
## :bro:see:`fa_file`. Most of the time, the best choice for this
## field will be the same value as the *source* field.
name: string;
## A key/value table that will be passed on to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
## Create a new table input from a given source. Returns true on success.
##
## description: `TableDescription` record describing the source.
@ -132,6 +160,14 @@ export {
## description: `TableDescription` record describing the source.
global add_event: function(description: Input::EventDescription) : bool;
## Create a new file analysis input from a given source. Data read from
## the source is automatically forwarded to the file analysis framework.
##
## description: An `AnalysisDescription` record describing the source.
##
## Returns: true on success.
global add_analysis: function(description: Input::AnalysisDescription) : bool;
## Remove an input stream. Returns true on success and false if the named stream was
## not found.
##
@ -164,6 +200,11 @@ function add_event(description: Input::EventDescription) : bool
return __create_event_stream(description);
}
function add_analysis(description: Input::AnalysisDescription) : bool
{
return __create_analysis_stream(description);
}
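A hypothetical usage sketch (not part of this commit), mirroring the
example in doc/file-analysis.rst:

event bro_init()
    {
    # Forward a local file's contents into the file analysis framework.
    Input::add_analysis([$source="./myfile", $name="./myfile"]);
    }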
function remove(id: string) : bool
{
return __remove_stream(id);


@ -6,4 +6,12 @@ export {
## Separator between input records.
## Please note that the separator has to be exactly one character long.
const record_separator = "\n" &redef;
## Event that is called when a process created by the raw reader exits.
##
## name: name of the input stream.
## source: source of the input stream.
## exit_code: exit code of the program, or the number of the signal that
##            forced the program to exit.
## signal_exit: false when the program exited normally, true when it was
##              forced to exit by a signal.
global process_finished: event(name: string, source: string, exit_code: count, signal_exit: bool);
}
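A hypothetical handler sketch (not part of this commit); it assumes the
reader's module name is InputRaw, making the event InputRaw::process_finished:

event InputRaw::process_finished(name: string, source: string,
                                 exit_code: count, signal_exit: bool)
    {
    print fmt("input stream %s (%s) exited: code=%d, by_signal=%s",
              name, source, exit_code, signal_exit);
    }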


@ -1,2 +1,3 @@
@load ./utils
@load ./main
@load ./netstats


@ -1,10 +1,12 @@
##! This script supports how Bro sets its BPF capture filter. By default
##! Bro sets an unrestricted filter that allows all traffic. If a filter
##! Bro sets a capture filter that allows all traffic. If a filter
##! is set on the command line, that filter takes precedence over the default
##! open filter and all filters defined in Bro scripts with the
##! :bro:id:`capture_filters` and :bro:id:`restrict_filters` variables.
@load base/frameworks/notice
@load base/frameworks/analyzer
@load ./utils
module PacketFilter;
@ -14,11 +16,14 @@ export {
## Add notice types related to packet filter errors.
redef enum Notice::Type += {
## This notice is generated if a packet filter is unable to be compiled.
## This notice is generated if a packet filter cannot be compiled.
Compile_Failure,
## This notice is generated if a packet filter fails to install.
## Generated if a packet filter fails to install.
Install_Failure,
## Generated when a filter takes too long to compile.
Too_Long_To_Compile_Filter
};
## The record type defining columns to be logged in the packet filter
@ -42,83 +47,248 @@ export {
success: bool &log &default=T;
};
## By default, Bro will examine all packets. If this is set to false,
## it will dynamically build a BPF filter that only selects protocols
## for which the user has loaded a corresponding analysis script.
## The latter used to be the default for Bro versions < 2.0. That has
## now changed, however, to enable port-independent protocol analysis.
const all_packets = T &redef;
## The BPF filter that is used by default to define what traffic should
## be captured. Filters defined in :bro:id:`restrict_filters` will still
## be applied to reduce the captured traffic.
const default_capture_filter = "ip or not ip" &redef;
## Filter string which is unconditionally or'ed to the beginning of every
## dynamically built filter.
const unrestricted_filter = "" &redef;
## Filter string which is unconditionally and'ed to the beginning of every
## dynamically built filter. This is mostly used when a custom filter is being
## used but MPLS or VLAN tags are on the traffic.
const restricted_filter = "" &redef;
## The maximum amount of time that you'd like to allow for BPF filters to compile.
## If this time is exceeded, compensation measures may be taken by the framework
## to reduce the filter size. This threshold being crossed also results in
## the :bro:see:`PacketFilter::Too_Long_To_Compile_Filter` notice.
const max_filter_compile_time = 100msec &redef;
## Install a BPF filter to exclude some traffic. The filter should positively
## match what is to be excluded; it will be wrapped in a "not".
##
## filter_id: An arbitrary string that can be used to identify
## the filter.
##
## filter: A BPF expression of traffic that should be excluded.
##
## Returns: A boolean value to indicate if the filter was successfully
## installed or not.
global exclude: function(filter_id: string, filter: string): bool;
## Install a temporary filter to exclude traffic which should not be passed
## through the BPF filter. The filter should match the traffic you don't want
## to see (it will be wrapped in a "not" condition).
##
## filter_id: An arbitrary string that can be used to identify
## the filter.
##
## filter: A BPF expression of traffic that should be excluded.
##
## span: The duration for which this filter should be put in place.
##
## Returns: A boolean value to indicate if the filter was successfully
## installed or not.
global exclude_for: function(filter_id: string, filter: string, span: interval): bool;
## Call this function to build and install a new dynamically built
## packet filter.
global install: function();
global install: function(): bool;
## A data structure to represent filter generating plugins.
type FilterPlugin: record {
## A function that is directly called when generating the complete filter.
func : function();
};
## API function to register a new plugin for dynamic restriction filters.
global register_filter_plugin: function(fp: FilterPlugin);
## Enables the old filtering approach of "only watch common ports for
## analyzed protocols".
##
## Unless you know what you are doing, leave this set to F.
const enable_auto_protocol_capture_filters = F &redef;
## This is where the default packet filter is stored and it should not
## normally be modified by users.
global default_filter = "<not set yet>";
global current_filter = "<not set yet>";
}
global dynamic_restrict_filters: table[string] of string = {};
# Track if a filter is currently building so functions that would ultimately
# install a filter immediately can still be used but they won't try to build or
# install the filter.
global currently_building = F;
# Internal tracking of whether the filter being built has possibly been changed.
global filter_changed = F;
global filter_plugins: set[FilterPlugin] = {};
redef enum PcapFilterID += {
DefaultPcapFilter,
FilterTester,
};
function combine_filters(lfilter: string, rfilter: string, op: string): string
function test_filter(filter: string): bool
{
if ( lfilter == "" && rfilter == "" )
return "";
else if ( lfilter == "" )
return rfilter;
else if ( rfilter == "" )
return lfilter;
else
return fmt("(%s) %s (%s)", lfilter, op, rfilter);
if ( ! precompile_pcap_filter(FilterTester, filter) )
{
# The given filter was invalid
# TODO: generate a notice.
return F;
}
return T;
}
function build_default_filter(): string
# This tracks any changes for filtering mechanisms that play along nicely
# and set filter_changed to T.
event filter_change_tracking()
{
if ( filter_changed )
install();
schedule 5min { filter_change_tracking() };
}
event bro_init() &priority=5
{
Log::create_stream(PacketFilter::LOG, [$columns=Info]);
# Preverify the capture and restrict filters to give more granular failure messages.
for ( id in capture_filters )
{
if ( ! test_filter(capture_filters[id]) )
Reporter::fatal(fmt("Invalid capture_filter named '%s' - '%s'", id, capture_filters[id]));
}
for ( id in restrict_filters )
{
if ( ! test_filter(restrict_filters[id]) )
Reporter::fatal(fmt("Invalid restrict filter named '%s' - '%s'", id, restrict_filters[id]));
}
}
event bro_init() &priority=-5
{
install();
event filter_change_tracking();
}
function register_filter_plugin(fp: FilterPlugin)
{
add filter_plugins[fp];
}
event remove_dynamic_filter(filter_id: string)
{
if ( filter_id in dynamic_restrict_filters )
{
delete dynamic_restrict_filters[filter_id];
install();
}
}
function exclude(filter_id: string, filter: string): bool
{
if ( ! test_filter(filter) )
return F;
dynamic_restrict_filters[filter_id] = filter;
install();
return T;
}
function exclude_for(filter_id: string, filter: string, span: interval): bool
{
if ( exclude(filter_id, filter) )
{
schedule span { remove_dynamic_filter(filter_id) };
return T;
}
return F;
}
function build(): string
{
if ( cmd_line_bpf_filter != "" )
# Return what the user specified on the command line.
return cmd_line_bpf_filter;
if ( all_packets )
# Return an "always true" filter.
return "ip or not ip";
currently_building = T;
# Build filter dynamically.
# Generate all of the plugin-based filters.
for ( plugin in filter_plugins )
{
plugin$func();
}
# First the capture_filter.
local cfilter = "";
for ( id in capture_filters )
cfilter = combine_filters(cfilter, capture_filters[id], "or");
if ( |capture_filters| == 0 && ! enable_auto_protocol_capture_filters )
cfilter = default_capture_filter;
# Then the restrict_filter.
for ( id in capture_filters )
cfilter = combine_filters(cfilter, "or", capture_filters[id]);
if ( enable_auto_protocol_capture_filters )
cfilter = combine_filters(cfilter, "or", Analyzer::get_bpf());
# Apply the restriction filters.
local rfilter = "";
for ( id in restrict_filters )
rfilter = combine_filters(rfilter, restrict_filters[id], "and");
rfilter = combine_filters(rfilter, "and", restrict_filters[id]);
# Apply the dynamic restriction filters.
for ( filt in dynamic_restrict_filters )
rfilter = combine_filters(rfilter, "and", string_cat("not (", dynamic_restrict_filters[filt], ")"));
# Finally, join them into one filter.
local filter = combine_filters(rfilter, cfilter, "and");
if ( unrestricted_filter != "" )
filter = combine_filters(unrestricted_filter, filter, "or");
local filter = combine_filters(cfilter, "and", rfilter);
if ( unrestricted_filter != "" )
filter = combine_filters(unrestricted_filter, "or", filter);
if ( restricted_filter != "" )
filter = combine_filters(restricted_filter, "and", filter);
currently_building = F;
return filter;
}
function install()
function install(): bool
{
default_filter = build_default_filter();
if ( currently_building )
return F;
if ( ! precompile_pcap_filter(DefaultPcapFilter, default_filter) )
local tmp_filter = build();
# No need to proceed if the filter hasn't changed.
if ( tmp_filter == current_filter )
return F;
local ts = current_time();
if ( ! precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
{
NOTICE([$note=Compile_Failure,
$msg=fmt("Compiling packet filter failed"),
$sub=default_filter]);
Reporter::fatal(fmt("Bad pcap filter '%s'", default_filter));
$sub=tmp_filter]);
if ( network_time() == 0.0 )
Reporter::fatal(fmt("Bad pcap filter '%s'", tmp_filter));
else
Reporter::warning(fmt("Bad pcap filter '%s'", tmp_filter));
}
local diff = current_time()-ts;
if ( diff > max_filter_compile_time )
NOTICE([$note=Too_Long_To_Compile_Filter,
$msg=fmt("A BPF filter is taking longer than %0.1f seconds to compile", diff)]);
# Set it to the current filter since it passed precompiling.
current_filter = tmp_filter;
# Do an audit log for the packet filter.
local info: Info;
@ -129,7 +299,7 @@ function install()
info$ts = current_time();
info$init = T;
}
info$filter = default_filter;
info$filter = current_filter;
if ( ! install_pcap_filter(DefaultPcapFilter) )
{
@ -137,15 +307,13 @@ function install()
info$success = F;
NOTICE([$note=Install_Failure,
$msg=fmt("Installing packet filter failed"),
$sub=default_filter]);
$sub=current_filter]);
}
if ( reading_live_traffic() || reading_traces() )
Log::write(PacketFilter::LOG, info);
}
event bro_init() &priority=10
{
Log::create_stream(PacketFilter::LOG, [$columns=Info]);
PacketFilter::install();
# Update the filter change tracking
filter_changed = F;
return T;
}
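A hypothetical usage sketch (not part of this commit) for the new dynamic
exclusion API, shunting a noisy host's traffic for ten minutes:

event bro_init()
    {
    # The filter matches what to exclude; the framework wraps it in "not".
    PacketFilter::exclude_for("noisy-host", "host 192.0.2.1", 10min);
    }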


@ -13,7 +13,7 @@ export {
};
## This is the interval between individual statistics collection.
const stats_collection_interval = 10secs;
const stats_collection_interval = 5min;
}
event net_stats_update(last_stat: NetStats)


@ -0,0 +1,58 @@
module PacketFilter;
export {
## Takes a :bro:type:`port` and returns a BPF expression which will
## match the port.
##
## p: The port.
##
## Returns: A valid BPF filter string for matching the port.
global port_to_bpf: function(p: port): string;
## Create a BPF filter to sample IPv4 and IPv6 traffic.
##
## num_parts: The number of parts the traffic should be split into.
##
## this_part: The part of the traffic this filter will accept. 0-based.
global sampling_filter: function(num_parts: count, this_part: count): string;
## Combines two valid BPF filter strings with a string based operator
## to form a new filter.
##
## lfilter: Filter which will go on the left side.
##
## op: Operation being applied (typically "or" or "and").
##
## rfilter: Filter which will go on the right side.
##
## Returns: A new string representing the two filters combined with
## the operator. Either filter being an empty string will
## still result in a valid filter.
global combine_filters: function(lfilter: string, op: string, rfilter: string): string;
}
function port_to_bpf(p: port): string
{
local tp = get_port_transport_proto(p);
return cat(tp, " and ", fmt("port %d", p));
}
function combine_filters(lfilter: string, op: string, rfilter: string): string
{
if ( lfilter == "" && rfilter == "" )
return "";
else if ( lfilter == "" )
return rfilter;
else if ( rfilter == "" )
return lfilter;
else
return fmt("(%s) %s (%s)", lfilter, op, rfilter);
}
function sampling_filter(num_parts: count, this_part: count): string
{
local v4_filter = fmt("ip and ((ip[14:2]+ip[18:2]) - (%d*((ip[14:2]+ip[18:2])/%d)) == %d)", num_parts, num_parts, this_part);
# TODO: this is probably a fairly suboptimal filter, but it should work for now.
local v6_filter = fmt("ip6 and ((ip6[22:2]+ip6[38:2]) - (%d*((ip6[22:2]+ip6[38:2])/%d)) == %d)", num_parts, num_parts, this_part);
return combine_filters(v4_filter, "or", v6_filter);
}
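A hypothetical usage sketch (not part of this commit), combining the new
helpers into a per-worker load-balancing filter:

event bro_init()
    {
    # Accept the first of two traffic partitions, restricted to SSH's port.
    local lb = PacketFilter::sampling_filter(2, 0);
    local ssh = PacketFilter::port_to_bpf(22/tcp);  # "tcp and port 22"
    print PacketFilter::combine_filters(lb, "and", ssh);
    }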


@ -222,17 +222,6 @@ type endpoint_stats: record {
endian_type: count;
};
## A unique analyzer instance ID. Each time Bro instantiates a protocol
## analyzer for a connection, it assigns it a unique ID that can be used
## to reference that instance.
##
## .. bro:see:: Analyzer::name Analyzer::disable_analyzer protocol_confirmation
## protocol_violation
##
## .. todo:: While we declare an alias for the type here, the events/functions
## still use ``count``. That should be changed.
type AnalyzerID: count;
module Tunnel;
export {
## Records the identity of an encapsulating parent of a tunneled connection.
@ -777,19 +766,6 @@ global signature_files = "" &add_func = add_signature_file;
## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``.
const passive_fingerprint_file = "base/misc/p0f.fp" &redef;
# todo::testing to see if I can remove these without causing problems.
#const ftp = 21/tcp;
#const ssh = 22/tcp;
#const telnet = 23/tcp;
#const smtp = 25/tcp;
#const domain = 53/tcp; # note, doesn't include UDP version
#const gopher = 70/tcp;
#const finger = 79/tcp;
#const http = 80/tcp;
#const ident = 113/tcp;
#const bgp = 179/tcp;
#const rlogin = 513/tcp;
# TCP values for :bro:see:`endpoint` *state* field.
# todo::these should go into an enum to make them autodoc'able.
const TCP_INACTIVE = 0; ##< Endpoint is still inactive.
@ -3065,12 +3041,12 @@ module GLOBAL;
## Number of bytes per packet to capture from live interfaces.
const snaplen = 8192 &redef;
# Load BiFs defined by plugins.
@load base/bif/plugins
# Load these frameworks here because they use fairly deep integration with
# BiFs and script-land defined types.
@load base/frameworks/logging
@load base/frameworks/input
@load base/frameworks/analyzer
@load base/frameworks/file-analysis
# Load BiFs defined by plugins.
@load base/bif/plugins


@ -41,10 +41,12 @@
@load base/protocols/http
@load base/protocols/irc
@load base/protocols/modbus
@load base/protocols/pop3
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@load base/protocols/ssl
@load base/protocols/syslog
@load base/protocols/tunnels
@load base/misc/find-checksum-offloading


@ -122,14 +122,6 @@ redef record connection += {
dns_state: State &optional;
};
# DPD configuration.
redef capture_filters += {
["dns"] = "port 53",
["mdns"] = "udp and port 5353",
["llmns"] = "udp and port 5355",
["netbios-ns"] = "udp port 137",
};
const ports = { 53/udp, 53/tcp, 137/udp, 5353/udp, 5355/udp };
redef likely_server_ports += { ports };
@ -215,6 +207,11 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
{
if ( ans$answer_type == DNS_ANS )
{
if ( ! c?$dns )
{
event conn_weird("dns_unmatched_reply", c, "");
hook set_session(c, msg, F);
}
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;


@ -3,3 +3,5 @@
@load ./file-analysis
@load ./file-extract
@load ./gridftp
@load-sigs ./dpd.sig


@ -0,0 +1,15 @@
signature dpd_ftp_client {
ip-proto == tcp
payload /(|.*[\n\r]) *[uU][sS][eE][rR] /
tcp-state originator
}
# Match for server greeting (220, 120) and for login or passwd
# required (230, 331).
signature dpd_ftp_server {
ip-proto == tcp
payload /[\n\r ]*(120|220)[^0-9].*[\n\r] *(230|331)[^0-9]/
tcp-state responder
requires-reverse-signature dpd_ftp_client
enable "ftp"
}


@ -41,6 +41,7 @@ function get_file_handle(c: connection, is_orig: bool): string
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_FTP_DATA ) return;
set_file_handle(FTP::get_file_handle(c, is_orig));


@ -13,8 +13,6 @@ export {
const extraction_prefix = "ftp-item" &redef;
}
global extract_count: count = 0;
redef record Info += {
## On disk file where it was extracted to.
extraction_file: string &log &optional;
@ -26,8 +24,7 @@ redef record Info += {
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}


@ -110,21 +110,18 @@ redef record connection += {
ftp_data_reuse: bool &default=F;
};
# Configure DPD
redef capture_filters += { ["ftp"] = "port 21 and port 2811" };
const ports = { 21/tcp, 2811/tcp };
redef likely_server_ports += { ports };
# Establish the variable for tracking expected connections.
global ftp_data_expected: table[addr, port] of Info &read_expire=5mins;
event bro_init() &priority=5
{
Log::create_stream(FTP::LOG, [$columns=Info, $ev=log_ftp]);
Analyzer::register_for_ports(Analyzer::ANALYZER_FTP, ports);
}
# Establish the variable for tracking expected connections.
global ftp_data_expected: table[addr, port] of Info &read_expire=5mins;
## A set of commands where the argument can be expected to refer
## to a file or directory.
const file_cmds = {


@ -4,3 +4,5 @@
@load ./file-ident
@load ./file-hash
@load ./file-extract
@load-sigs ./dpd.sig


@ -0,0 +1,13 @@
signature dpd_http_client {
ip-proto == tcp
payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
tcp-state originator
}
signature dpd_http_server {
ip-proto == tcp
payload /^HTTP\/[0-9]/
tcp-state responder
requires-reverse-signature dpd_http_client
enable "http"
}


@ -6,25 +6,48 @@
module HTTP;
export {
redef record HTTP::Info += {
## Number of MIME entities in the HTTP request message body so far.
request_mime_level: count &default=0;
## Number of MIME entities in the HTTP response message body so far.
response_mime_level: count &default=0;
};
## Default file handle provider for HTTP.
global get_file_handle: function(c: connection, is_orig: bool): string;
}
event http_begin_entity(c: connection, is_orig: bool) &priority=5
{
if ( ! c?$http )
return;
if ( is_orig )
++c$http$request_mime_level;
else
++c$http$response_mime_level;
}
function get_file_handle(c: connection, is_orig: bool): string
{
if ( ! c?$http ) return "";
local mime_level: count =
is_orig ? c$http$request_mime_level : c$http$response_mime_level;
local mime_level_str: string = mime_level > 1 ? cat(mime_level) : "";
if ( c$http$range_request )
return cat(Analyzer::ANALYZER_HTTP, " ", is_orig, " ", c$id$orig_h, " ",
build_url(c$http));
return cat(Analyzer::ANALYZER_HTTP, " ", c$start_time, " ", is_orig, " ",
c$http$trans_depth, " ", id_string(c$id));
c$http$trans_depth, mime_level_str, " ", id_string(c$id));
}
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_HTTP ) return;
set_file_handle(HTTP::get_file_handle(c, is_orig));


@ -14,8 +14,11 @@ export {
const extraction_prefix = "http-item" &redef;
redef record Info += {
## On-disk file where the response body was extracted to.
extraction_file: string &log &optional;
## On-disk location where files in request body were extracted.
extracted_request_files: vector of string &log &optional;
## On-disk location where files in response body were extracted.
extracted_response_files: vector of string &log &optional;
## Indicates if the response body is to be extracted or not. Must be
## set before or by the first :bro:see:`file_new` for the file content.
@ -23,15 +26,28 @@ export {
};
}
global extract_count: count = 0;
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}
function add_extraction_file(c: connection, is_orig: bool, fn: string)
{
if ( is_orig )
{
if ( ! c$http?$extracted_request_files )
c$http$extracted_request_files = vector();
c$http$extracted_request_files[|c$http$extracted_request_files|] = fn;
}
else
{
if ( ! c$http?$extracted_response_files )
c$http$extracted_response_files = vector();
c$http$extracted_response_files[|c$http$extracted_response_files|] = fn;
}
}
event file_new(f: fa_file) &priority=5
{
if ( ! f?$source ) return;
@ -51,7 +67,7 @@ event file_new(f: fa_file) &priority=5
{
c = f$conns[cid];
if ( ! c?$http ) next;
c$http$extraction_file = fname;
add_extraction_file(c, f$is_orig, fname);
}
return;
@ -79,6 +95,6 @@ event file_new(f: fa_file) &priority=5
{
c = f$conns[cid];
if ( ! c?$http ) next;
c$http$extraction_file = fname;
add_extraction_file(c, f$is_orig, fname);
}
}


@ -123,19 +123,12 @@ redef record connection += {
http_state: State &optional;
};
# DPD configuration.
redef capture_filters += {
["http"] = "tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888)"
};
const ports = {
80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3128/tcp,
8000/tcp, 8080/tcp, 8888/tcp,
};
redef likely_server_ports += { ports };
# Initialize the HTTP logging stream and ports.
event bro_init() &priority=5
{


@ -1,3 +1,5 @@
@load ./main
@load ./dcc-send
@load ./file-analysis
@load-sigs ./dpd.sig


@ -39,8 +39,6 @@ export {
global dcc_expected_transfers: table[addr, port] of Info &read_expire=5mins;
global extract_count: count = 0;
function set_dcc_mime(f: fa_file)
{
if ( ! f?$conns ) return;
@ -75,8 +73,7 @@ function set_dcc_extraction_file(f: fa_file, filename: string)
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}
@ -188,5 +185,6 @@ event expected_connection_seen(c: connection, a: Analyzer::Tag) &priority=10
event connection_state_remove(c: connection) &priority=-5
{
if ( [c$id$resp_h, c$id$resp_p] in dcc_expected_transfers )
delete dcc_expected_transfers[c$id$resp_h, c$id$resp_p];
}


@ -0,0 +1,33 @@
signature irc_client1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Uu][Ss][Ee][Rr] +.+[\n\r]+ *[Nn][Ii][Cc][Kk] +.*[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_client2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Nn][Ii][Cc][Kk] +.+[\r\n]+ *[Uu][Ss][Ee][Rr] +.+[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_server_reply {
ip-proto == tcp
payload /^(|.*[\n\r])(:[^ \n\r]+ )?[0-9][0-9][0-9] /
tcp-state responder
}
signature irc_server_to_server1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
}
signature irc_server_to_server2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
requires-reverse-signature irc_server_to_server1
enable "irc"
}


@ -18,6 +18,7 @@ function get_file_handle(c: connection, is_orig: bool): string
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_IRC_DATA ) return;
set_file_handle(IRC::get_file_handle(c, is_orig));


@ -38,13 +38,6 @@ redef record connection += {
irc: Info &optional;
};
# Some common IRC ports.
redef capture_filters += { ["irc-6666"] = "port 6666" };
redef capture_filters += { ["irc-6667"] = "port 6667" };
redef capture_filters += { ["irc-6668"] = "port 6668" };
redef capture_filters += { ["irc-6669"] = "port 6669" };
# DPD configuration.
const ports = { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp };
redef likely_server_ports += { ports };


@ -29,9 +29,6 @@ redef record connection += {
modbus: Info &optional;
};
# Configure DPD and the packet filter.
redef capture_filters += { ["modbus"] = "tcp port 502" };
const ports = { 502/tcp };
redef likely_server_ports += { ports };


@ -0,0 +1,2 @@
@load-sigs ./dpd.sig


@ -0,0 +1,13 @@
signature dpd_pop3_server {
ip-proto == tcp
payload /^\+OK/
requires-reverse-signature dpd_pop3_client
enable "pop3"
tcp-state responder
}
signature dpd_pop3_client {
ip-proto == tcp
payload /(|.*[\r\n])[[:space:]]*([uU][sS][eE][rR][[:space:]]|[aA][pP][oO][pP][[:space:]]|[cC][aA][pP][aA]|[aA][uU][tT][hH])/
tcp-state originator
}
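
# Illustrative example: a client command such as "USER alice\r\n" matches
# dpd_pop3_client, and a server greeting beginning with "+OK" matches
# dpd_pop3_server; the "pop3" analyzer is enabled only after both
# directions have matched (requires-reverse-signature).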


@ -2,3 +2,5 @@
@load ./entities
@load ./entities-excerpt
@load ./file-analysis
@load-sigs ./dpd.sig


@ -0,0 +1,13 @@
signature dpd_smtp_client {
ip-proto == tcp
payload /(|.*[\n\r])[[:space:]]*([hH][eE][lL][oO]|[eE][hH][lL][oO])/
requires-reverse-signature dpd_smtp_server
enable "smtp"
tcp-state originator
}
signature dpd_smtp_server {
ip-proto == tcp
payload /^[[:space:]]*220[[:space:]-]/
tcp-state responder
}
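
# Illustrative example: a server banner such as "220 mx.example.org ESMTP"
# matches dpd_smtp_server, and a client opening with "EHLO client.example.org"
# matches dpd_smtp_client, together enabling the "smtp" analyzer for the
# connection.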


@ -66,8 +66,6 @@ export {
global log_mime: event(rec: EntityInfo);
}
global extract_count: count = 0;
event bro_init() &priority=5
{
Log::create_stream(SMTP::ENTITIES_LOG, [$columns=EntityInfo, $ev=log_mime]);
@ -90,8 +88,7 @@ function set_session(c: connection, new_entity: bool)
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}
@ -127,7 +124,6 @@ event file_new(f: fa_file) &priority=5
[$tag=FileAnalysis::ANALYZER_EXTRACT,
$extract_filename=fname]);
extracting = T;
++extract_count;
}
c$smtp$current_entity$extraction_file = fname;


@ -20,6 +20,7 @@ function get_file_handle(c: connection, is_orig: bool): string
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_SMTP ) return;
set_file_handle(SMTP::get_file_handle(c, is_orig));


@ -81,9 +81,6 @@ redef record connection += {
smtp_state: State &optional;
};
# Configure DPD
redef capture_filters += { ["smtp"] = "tcp port 25 or tcp port 587" };
const ports = { 25/tcp, 587/tcp };
redef likely_server_ports += { ports };


@ -1,2 +1,4 @@
@load ./consts
@load ./main
@load-sigs ./dpd.sig


@ -0,0 +1,48 @@
signature dpd_socks4_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state originator
}
signature dpd_socks4_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state responder
enable "socks"
}
signature dpd_socks4_reverse_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state responder
}
signature dpd_socks4_reverse_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_reverse_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state originator
enable "socks"
}
signature dpd_socks5_client {
ip-proto == tcp
# Watch for a few authentication methods to reduce false positives.
payload /^\x05.[\x00\x01\x02]/
tcp-state originator
}
signature dpd_socks5_server {
ip-proto == tcp
requires-reverse-signature dpd_socks5_client
# Watch for a single authentication method to be chosen by the server, or
# for the server to indicate that no authentication is required.
payload /^\x05(\x00|\x01[\x00\x01\x02])/
tcp-state responder
enable "socks"
}
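
# Illustrative example: a SOCKS5 client greeting of the raw bytes
# \x05\x01\x00 (version 5, one method offered, "no authentication")
# matches dpd_socks5_client, and a server choice of \x05\x00 matches
# dpd_socks5_server, enabling the "socks" analyzer.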


@ -47,10 +47,6 @@ redef record connection += {
socks: SOCKS::Info &optional;
};
# Configure DPD
redef capture_filters += { ["socks"] = "tcp port 1080" };
redef likely_server_ports += { 1080/tcp };
function set_session(c: connection, version: count)
{
if ( ! c?$socks )


@ -1 +1,3 @@
@load ./main
@load-sigs ./dpd.sig


@ -0,0 +1,13 @@
signature dpd_ssh_client {
ip-proto == tcp
payload /^[sS][sS][hH]-/
requires-reverse-signature dpd_ssh_server
enable "ssh"
tcp-state originator
}
signature dpd_ssh_server {
ip-proto == tcp
payload /^[sS][sS][hH]-/
tcp-state responder
}
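
# Illustrative example: both endpoints open with a version banner such as
# "SSH-2.0-OpenSSH_6.2", so the same /^[sS][sS][hH]-/ pattern applies to
# each direction; "ssh" is enabled only once the responder's banner
# confirms the originator's.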


@ -70,17 +70,13 @@ export {
global log_ssh: event(rec: Info);
}
# Configure DPD and the packet filter
const ports = { 22/tcp };
redef capture_filters += { ["ssh"] = "tcp port 22" };
redef likely_server_ports += { ports };
redef record connection += {
ssh: Info &optional;
};
const ports = { 22/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SSH::LOG, [$columns=Info, $ev=log_ssh]);
@ -178,6 +174,7 @@ event ssh_watcher(c: connection)
if ( ! connection_exists(id) )
return;
lookup_connection(c$id);
check_ssh_connection(c, F);
if ( ! c$ssh$done )
schedule +15secs { ssh_watcher(c) };


@ -1,3 +1,5 @@
@load ./consts
@load ./main
@load ./mozilla-ca-list
@load-sigs ./dpd.sig


@ -0,0 +1,15 @@
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
}
signature dpd_ssl_client {
ip-proto == tcp
# Client hello.
payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
tcp-state originator
}
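
# Illustrative example: a TLS 1.0 ClientHello record begins with the bytes
# \x16\x03\x01 (handshake record, version 3.1), a two-byte record length,
# and the \x01 ClientHello handshake type, matching the first alternative
# of dpd_ssl_client above.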


@ -94,35 +94,12 @@ redef record Info += {
delay_tokens: set[string] &optional;
};
redef capture_filters += {
["ssl"] = "tcp port 443",
["nntps"] = "tcp port 563",
["imap4-ssl"] = "tcp port 585",
["sshell"] = "tcp port 614",
["ldaps"] = "tcp port 636",
["ftps-data"] = "tcp port 989",
["ftps"] = "tcp port 990",
["telnets"] = "tcp port 992",
["imaps"] = "tcp port 993",
["ircs"] = "tcp port 994",
["pop3s"] = "tcp port 995",
["xmpps"] = "tcp port 5223",
};
const ports = {
443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp,
989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp
} &redef;
};
redef likely_server_ports += { ports };
# A queue that buffers log records.
global log_delay_queue: table[count] of Info;
# The top queue index where records are added.
global log_delay_queue_head = 0;
# The bottom queue index that points to the next record to be flushed.
global log_delay_queue_tail = 0;
event bro_init() &priority=5
{
Log::create_stream(SSL::LOG, [$columns=Info, $ev=log_ssl]);
@ -138,26 +115,17 @@ function set_session(c: connection)
function delay_log(info: Info, token: string)
{
if ( ! info?$delay_tokens )
info$delay_tokens = set();
add info$delay_tokens[token];
log_delay_queue[log_delay_queue_head] = info;
++log_delay_queue_head;
}
function undelay_log(info: Info, token: string)
{
if ( token in info$delay_tokens )
if ( info?$delay_tokens && token in info$delay_tokens )
delete info$delay_tokens[token];
}
global log_record: function(info: Info);
event delay_logging(info: Info)
{
log_record(info);
}
function log_record(info: Info)
{
if ( ! info?$delay_tokens || |info$delay_tokens| == 0 )
@ -166,26 +134,14 @@ function log_record(info: Info)
}
else
{
for ( unused_index in log_delay_queue )
when ( |info$delay_tokens| == 0 )
{
if ( log_delay_queue_head == log_delay_queue_tail )
return;
if ( |log_delay_queue[log_delay_queue_tail]$delay_tokens| > 0 )
{
if ( info$ts + max_log_delay > network_time() )
{
schedule 1sec { delay_logging(info) };
return;
log_record(info);
}
else
timeout SSL::max_log_delay
{
Reporter::info(fmt("SSL delay tokens not released in time (%s)",
info$delay_tokens));
}
}
Log::write(SSL::LOG, log_delay_queue[log_delay_queue_tail]);
delete log_delay_queue[log_delay_queue_tail];
++log_delay_queue_tail;
Reporter::info(fmt("SSL delay tokens not released in time (%s tokens remaining)",
|info$delay_tokens|));
}
}
}
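
# Usage sketch (hypothetical token name): another script can hold a record
# back until its own asynchronous work completes:
#
#   delay_log(info, "my-lookup");    # keep the record out of ssl.log for now
#   # ... do the asynchronous work ...
#   undelay_log(info, "my-lookup");  # release it; log_record() then writes it
#
# If a token is never released, the when/timeout above emits a reporter
# message once SSL::max_log_delay elapses.
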
@ -295,15 +251,3 @@ event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
if ( c?$ssl )
finish(c);
}
event bro_done()
{
if ( |log_delay_queue| == 0 )
return;
for ( unused_index in log_delay_queue )
{
Log::write(SSL::LOG, log_delay_queue[log_delay_queue_tail]);
delete log_delay_queue[log_delay_queue_tail];
++log_delay_queue_tail;
}
}


@ -26,15 +26,13 @@ export {
};
}
redef capture_filters += { ["syslog"] = "port 514" };
const ports = { 514/udp };
redef likely_server_ports += { ports };
redef record connection += {
syslog: Info &optional;
};
const ports = { 514/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(Syslog::LOG, [$columns=Info]);


@ -0,0 +1 @@
@load-sigs ./dpd.sig


@ -0,0 +1,14 @@
# Provide DPD signatures for tunneling protocols that otherwise
# wouldn't be detected at all.
signature dpd_ayiya {
ip-proto == udp
payload /^..\x11\x29/
enable "ayiya"
}
signature dpd_teredo {
ip-proto == udp
payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
enable "teredo"
}
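
# Illustrative example (per RFC 4380): Teredo packets commonly begin with
# the origin indication \x00\x00 or the authentication indicator \x00\x01,
# or directly with an IPv6 header whose first byte is 0x60-0x6f; each case
# is covered by one alternative of dpd_teredo's payload pattern.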


@ -0,0 +1,169 @@
@load base/frameworks/notice
@load base/frameworks/packet-filter
module PacketFilter;
export {
## The maximum number of BPF-based shunts that Bro is allowed to perform.
const max_bpf_shunts = 100 &redef;
## Call this function to use BPF to shunt a connection (to prevent the
## data packets from reaching Bro). For TCP connections, control packets
## are still allowed through so that Bro can continue logging the connection
## and it can stop shunting once the connection ends.
global shunt_conn: function(id: conn_id): bool;
## This function will use a BPF expression to shunt traffic between
## the two hosts given in the `conn_id` so that the traffic is never
## exposed to Bro's traffic processing.
global shunt_host_pair: function(id: conn_id): bool;
## Remove shunting for a host pair given as a `conn_id`. The filter
## is not removed immediately; it waits for the occasional filter
## update done by the `PacketFilter` framework.
global unshunt_host_pair: function(id: conn_id): bool;
## Performs the same task as `unshunt_host_pair`, but forces an
## immediate filter update.
global force_unshunt_host_pair: function(id: conn_id): bool;
## Retrieve the currently shunted connections.
global current_shunted_conns: function(): set[conn_id];
## Retrieve the currently shunted host pairs.
global current_shunted_host_pairs: function(): set[conn_id];
redef enum Notice::Type += {
## Indicates that :bro:id:`max_bpf_shunts` connections are already
## being shunted with BPF filters and no more are allowed.
No_More_Conn_Shunts_Available,
## Limitations in BPF make it impossible to shunt some connections.
## This notice covers those cases.
Cannot_BPF_Shunt_Conn,
};
}
global shunted_conns: set[conn_id];
global shunted_host_pairs: set[conn_id];
function shunt_filters()
{
# NOTE: this could wrongly match if a connection happens with the ports reversed.
local tcp_filter = "";
local udp_filter = "";
for ( id in shunted_conns )
{
local prot = get_port_transport_proto(id$resp_p);
local filt = fmt("host %s and port %d and host %s and port %d", id$orig_h, id$orig_p, id$resp_h, id$resp_p);
if ( prot == udp )
udp_filter = combine_filters(udp_filter, "or", filt);
else if ( prot == tcp )
tcp_filter = combine_filters(tcp_filter, "or", filt);
}
if ( tcp_filter != "" )
tcp_filter = combine_filters("tcp and tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) == 0", "and", tcp_filter);
local conn_shunt_filter = combine_filters(tcp_filter, "or", udp_filter);
local hp_shunt_filter = "";
for ( id in shunted_host_pairs )
hp_shunt_filter = combine_filters(hp_shunt_filter, "or", fmt("host %s and host %s", id$orig_h, id$resp_h));
local filter = combine_filters(conn_shunt_filter, "or", hp_shunt_filter);
if ( filter != "" )
PacketFilter::exclude("shunt_filters", filter);
}
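
# Illustrative result (hypothetical endpoints): with a single shunted TCP
# connection 10.0.0.1:49152 <-> 10.0.0.2:80, the exclusion installed above
# is roughly:
#
#   tcp and tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) == 0
#   and (host 10.0.0.1 and port 49152 and host 10.0.0.2 and port 80)
#
# i.e. data packets bypass Bro while TCP control packets still arrive, so
# the connection can be logged and the shunt removed once it ends.
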
event bro_init() &priority=5
{
register_filter_plugin([
$func()={ return shunt_filters(); }
]);
}
function current_shunted_conns(): set[conn_id]
{
return shunted_conns;
}
function current_shunted_host_pairs(): set[conn_id]
{
return shunted_host_pairs;
}
function reached_max_shunts(): bool
{
if ( |shunted_conns| + |shunted_host_pairs| >= max_bpf_shunts )
{
NOTICE([$note=No_More_Conn_Shunts_Available,
$msg=fmt("%d BPF shunts are in place and no more will be added until space clears.", max_bpf_shunts)]);
return T;
}
else
return F;
}
function shunt_host_pair(id: conn_id): bool
{
PacketFilter::filter_changed = T;
if ( reached_max_shunts() )
return F;
add shunted_host_pairs[id];
install();
return T;
}
function unshunt_host_pair(id: conn_id): bool
{
PacketFilter::filter_changed = T;
if ( id in shunted_host_pairs )
{
delete shunted_host_pairs[id];
return T;
}
else
return F;
}
function force_unshunt_host_pair(id: conn_id): bool
{
if ( unshunt_host_pair(id) )
{
install();
return T;
}
else
return F;
}
function shunt_conn(id: conn_id): bool
{
if ( is_v6_addr(id$orig_h) )
{
NOTICE([$note=Cannot_BPF_Shunt_Conn,
$msg="IPv6 connections can't be shunted with BPF due to limitations in BPF",
$sub="ipv6_conn",
$id=id, $identifier=cat(id)]);
return F;
}
if ( reached_max_shunts() )
return F;
PacketFilter::filter_changed = T;
add shunted_conns[id];
install();
return T;
}
event connection_state_remove(c: connection) &priority=-5
{
# Don't rebuild the filter right away because the packet filter framework
# will check every few minutes and update the filter if things have changed.
if ( c$id in shunted_conns )
delete shunted_conns[c$id];
}
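
# Usage sketch (assumes a site-defined heuristic): a policy script could
# shunt a connection once it decides the payload is no longer interesting:
#
#   event connection_established(c: connection)
#       {
#       if ( my_bulk_transfer_heuristic(c) )  # hypothetical helper
#           PacketFilter::shunt_conn(c$id);
#       }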


@ -0,0 +1,132 @@
##! This script implements the "Bro side" of several load balancing
##! approaches for Bro clusters.
@load base/frameworks/cluster
@load base/frameworks/packet-filter
module LoadBalancing;
export {
type Method: enum {
## Apply BPF filters to each worker in a way that causes them to
## automatically flow balance traffic between them.
AUTO_BPF,
## Load balance traffic across the workers by making each one apply
## a restrict filter to only listen to a single MAC address. This
## is a somewhat common deployment option for sites doing network
## based load balancing with MAC address rewriting and passing the
## traffic to a single interface. Multiple MAC addresses will show
## up on the same interface and need filtered to a single address.
#MAC_ADDR_BPF,
};
## Defines the method of load balancing to use.
const method = AUTO_BPF &redef;
# Configure the cluster framework to enable the load balancing filter configuration.
#global send_filter: event(for_node: string, filter: string);
#global confirm_filter_installation: event(success: bool);
redef record Cluster::Node += {
## A BPF filter for load balancing traffic sniffed on a single interface
## across a number of processes. In normal use, this will be assigned
## dynamically by the manager and installed by the workers.
lb_filter: string &optional;
};
}
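
# Illustrative outcome (hypothetical cluster): with three workers sniffing
# the same interface of one host, the manager logic below assigns each
# worker PacketFilter::sample_filter(3, i) for i in 0..2 so that, in
# principle, each packet is handled by exactly one worker.
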
#redef Cluster::manager2worker_events += /LoadBalancing::send_filter/;
#redef Cluster::worker2manager_events += /LoadBalancing::confirm_filter_installation/;
@if ( Cluster::is_enabled() )
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event bro_init() &priority=5
{
if ( method != AUTO_BPF )
return;
local worker_ip_interface: table[addr, string] of count = table();
for ( n in Cluster::nodes )
{
local this_node = Cluster::nodes[n];
# Only workers!
if ( this_node$node_type != Cluster::WORKER ||
! this_node?$interface )
next;
if ( [this_node$ip, this_node$interface] !in worker_ip_interface )
worker_ip_interface[this_node$ip, this_node$interface] = 0;
++worker_ip_interface[this_node$ip, this_node$interface];
}
# Now that we've counted up how many processes are running on an interface,
# let's create the filters for each worker.
local lb_proc_track: table[addr, string] of count = table();
for ( no in Cluster::nodes )
{
local that_node = Cluster::nodes[no];
if ( that_node$node_type == Cluster::WORKER &&
that_node?$interface && [that_node$ip, that_node$interface] in worker_ip_interface )
{
if ( [that_node$ip, that_node$interface] !in lb_proc_track )
lb_proc_track[that_node$ip, that_node$interface] = 0;
local this_lb_proc = lb_proc_track[that_node$ip, that_node$interface];
local total_lb_procs = worker_ip_interface[that_node$ip, that_node$interface];
++lb_proc_track[that_node$ip, that_node$interface];
if ( total_lb_procs > 1 )
{
that_node$lb_filter = PacketFilter::sample_filter(total_lb_procs, this_lb_proc);
Communication::nodes[no]$capture_filter = that_node$lb_filter;
}
}
}
}
#event remote_connection_established(p: event_peer) &priority=-5
# {
# if ( is_remote_event() )
# return;
#
# local for_node = p$descr;
# # Send the filter to the peer.
# if ( for_node in Cluster::nodes &&
# Cluster::nodes[for_node]?$lb_filter )
# {
# local filter = Cluster::nodes[for_node]$lb_filter;
# event LoadBalancing::send_filter(for_node, filter);
# }
# }
#event LoadBalancing::confirm_filter_installation(success: bool)
# {
# # This doesn't really matter yet since we aren't getting back a meaningful success response.
# }
@endif
@if ( Cluster::local_node_type() == Cluster::WORKER )
#event LoadBalancing::send_filter(for_node: string, filter: string)
event remote_capture_filter(p: event_peer, filter: string)
{
#if ( for_node !in Cluster::nodes )
# return;
#
#if ( Cluster::node == for_node )
# {
restrict_filters["lb_filter"] = filter;
PacketFilter::install();
#event LoadBalancing::confirm_filter_installation(T);
# }
}
@endif
@endif


@ -3,7 +3,9 @@
## This normally isn't used because of the default open packet filter
## but we set it anyway in case the user is using a packet filter.
redef capture_filters += { ["frag"] = "(ip[6:2] & 0x3fff != 0) and tcp" };
## Note: This was removed because the default model now is to have a wide
## open packet filter.
#redef capture_filters += { ["frag"] = "(ip[6:2] & 0x3fff != 0) and tcp" };
## Shorten the fragment timeout from never expiring to expiring fragments after
## five minutes.


@ -24,6 +24,7 @@
@load frameworks/intel/smtp.bro
@load frameworks/intel/ssl.bro
@load frameworks/intel/where-locations.bro
@load frameworks/packet-filter/shunt.bro
@load frameworks/software/version-changes.bro
@load frameworks/software/vulnerable.bro
@load integration/barnyard2/__load__.bro
@ -35,6 +36,7 @@
@load misc/capture-loss.bro
@load misc/detect-traceroute/__load__.bro
@load misc/detect-traceroute/main.bro
@load misc/load-balancing.bro
@load misc/loaded-scripts.bro
@load misc/profiling.bro
@load misc/scan.bro

src/3rdparty/sqlite3.c (vendored): diff suppressed because it is too large.

src/3rdparty/sqlite3.h (vendored):

@ -107,9 +107,9 @@ extern "C" {
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION "3.7.16.2"
#define SQLITE_VERSION_NUMBER 3007016
#define SQLITE_SOURCE_ID "2013-04-12 11:52:43 cbea02d93865ce0e06789db95fd9168ebac970c7"
#define SQLITE_VERSION "3.7.17"
#define SQLITE_VERSION_NUMBER 3007017
#define SQLITE_SOURCE_ID "2013-05-20 00:56:22 118a3b35693b134d56ebd780123b7fd6f1497668"
/*
** CAPI3REF: Run-Time Library Version Numbers
@ -425,6 +425,8 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_FORMAT 24 /* Auxiliary database format error */
#define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */
#define SQLITE_NOTADB 26 /* File opened that is not a database file */
#define SQLITE_NOTICE 27 /* Notifications from sqlite3_log() */
#define SQLITE_WARNING 28 /* Warnings from sqlite3_log() */
#define SQLITE_ROW 100 /* sqlite3_step() has another row ready */
#define SQLITE_DONE 101 /* sqlite3_step() has finished executing */
/* end-of-error-codes */
@ -475,6 +477,7 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_IOERR_SHMMAP (SQLITE_IOERR | (21<<8))
#define SQLITE_IOERR_SEEK (SQLITE_IOERR | (22<<8))
#define SQLITE_IOERR_DELETE_NOENT (SQLITE_IOERR | (23<<8))
#define SQLITE_IOERR_MMAP (SQLITE_IOERR | (24<<8))
#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8))
#define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8))
#define SQLITE_CANTOPEN_NOTEMPDIR (SQLITE_CANTOPEN | (1<<8))
@ -494,6 +497,8 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_CONSTRAINT_TRIGGER (SQLITE_CONSTRAINT | (7<<8))
#define SQLITE_CONSTRAINT_UNIQUE (SQLITE_CONSTRAINT | (8<<8))
#define SQLITE_CONSTRAINT_VTAB (SQLITE_CONSTRAINT | (9<<8))
#define SQLITE_NOTICE_RECOVER_WAL (SQLITE_NOTICE | (1<<8))
#define SQLITE_NOTICE_RECOVER_ROLLBACK (SQLITE_NOTICE | (2<<8))
/*
** CAPI3REF: Flags For File Open Operations
@ -733,6 +738,9 @@ struct sqlite3_io_methods {
void (*xShmBarrier)(sqlite3_file*);
int (*xShmUnmap)(sqlite3_file*, int deleteFlag);
/* Methods above are valid for version 2 */
int (*xFetch)(sqlite3_file*, sqlite3_int64 iOfst, int iAmt, void **pp);
int (*xUnfetch)(sqlite3_file*, sqlite3_int64 iOfst, void *p);
/* Methods above are valid for version 3 */
/* Additional methods may be added in future releases */
};
@ -869,7 +877,8 @@ struct sqlite3_io_methods {
** it is able to override built-in [PRAGMA] statements.
**
** <li>[[SQLITE_FCNTL_BUSYHANDLER]]
** ^This file-control may be invoked by SQLite on the database file handle
** ^The [SQLITE_FCNTL_BUSYHANDLER]
** file-control may be invoked by SQLite on the database file handle
** shortly after it is opened in order to provide a custom VFS with access
** to the connections busy-handler callback. The argument is of type (void **)
** - an array of two (void *) values. The first (void *) actually points
@ -880,13 +889,24 @@ struct sqlite3_io_methods {
** current operation.
**
** <li>[[SQLITE_FCNTL_TEMPFILENAME]]
** ^Application can invoke this file-control to have SQLite generate a
** ^Application can invoke the [SQLITE_FCNTL_TEMPFILENAME] file-control
** to have SQLite generate a
** temporary filename using the same algorithm that is followed to generate
** temporary filenames for TEMP tables and other internal uses. The
** argument should be a char** which will be filled with the filename
** written into memory obtained from [sqlite3_malloc()]. The caller should
** invoke [sqlite3_free()] on the result to avoid a memory leak.
**
** <li>[[SQLITE_FCNTL_MMAP_SIZE]]
** The [SQLITE_FCNTL_MMAP_SIZE] file control is used to query or set the
** maximum number of bytes that will be used for memory-mapped I/O.
** The argument is a pointer to a value of type sqlite3_int64 that
** is an advisory maximum number of bytes in the file to memory map. The
** pointer is overwritten with the old value. The limit is not changed if
** the value originally pointed to is negative, and so the current limit
** can be queried by passing in a pointer to a negative number. This
** file-control is used internally to implement [PRAGMA mmap_size].
**
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE 1
@ -905,6 +925,7 @@ struct sqlite3_io_methods {
#define SQLITE_FCNTL_PRAGMA 14
#define SQLITE_FCNTL_BUSYHANDLER 15
#define SQLITE_FCNTL_TEMPFILENAME 16
#define SQLITE_FCNTL_MMAP_SIZE 18
/*
** CAPI3REF: Mutex Handle
@ -1571,7 +1592,9 @@ struct sqlite3_mem_methods {
** page cache implementation into that object.)^ </dd>
**
** [[SQLITE_CONFIG_LOG]] <dt>SQLITE_CONFIG_LOG</dt>
** <dd> ^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a
** <dd> The SQLITE_CONFIG_LOG option is used to configure the SQLite
** global [error log].
** (^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a
** function with a call signature of void(*)(void*,int,const char*),
** and a pointer to void. ^If the function pointer is not NULL, it is
** invoked by [sqlite3_log()] to process each logging event. ^If the
@ -1617,12 +1640,12 @@ struct sqlite3_mem_methods {
** <dt>SQLITE_CONFIG_PCACHE and SQLITE_CONFIG_GETPCACHE
** <dd> These options are obsolete and should not be used by new code.
** They are retained for backwards compatibility but are now no-ops.
** </dl>
** </dd>
**
** [[SQLITE_CONFIG_SQLLOG]]
** <dt>SQLITE_CONFIG_SQLLOG
** <dd>This option is only available if sqlite is compiled with the
** SQLITE_ENABLE_SQLLOG pre-processor macro defined. The first argument should
** [SQLITE_ENABLE_SQLLOG] pre-processor macro defined. The first argument should
** be a pointer to a function of type void(*)(void*,sqlite3*,const char*, int).
** The second should be of type (void*). The callback is invoked by the library
** in three separate circumstances, identified by the value passed as the
@ -1632,7 +1655,23 @@ struct sqlite3_mem_methods {
** fourth parameter is 1, then the SQL statement that the third parameter
** points to has just been executed. Or, if the fourth parameter is 2, then
** the connection being passed as the second parameter is being closed. The
** third parameter is passed NULL in this case.
** third parameter is passed NULL in this case. An example of using this
** configuration option can be seen in the "test_sqllog.c" source file in
** the canonical SQLite source tree.</dd>
**
** [[SQLITE_CONFIG_MMAP_SIZE]]
** <dt>SQLITE_CONFIG_MMAP_SIZE
** <dd>SQLITE_CONFIG_MMAP_SIZE takes two 64-bit integer (sqlite3_int64) values
** that are the default mmap size limit (the default setting for
** [PRAGMA mmap_size]) and the maximum allowed mmap size limit.
** The default setting can be overridden by each database connection using
** either the [PRAGMA mmap_size] command, or by using the
** [SQLITE_FCNTL_MMAP_SIZE] file control. The maximum allowed mmap size
** cannot be changed at run-time. Nor may the maximum allowed mmap size
** exceed the compile-time maximum mmap size set by the
** [SQLITE_MAX_MMAP_SIZE] compile-time option.
** If either argument to this option is negative, then that argument is
** changed to its compile-time default.
** </dl>
*/
#define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */
@ -1656,6 +1695,7 @@ struct sqlite3_mem_methods {
#define SQLITE_CONFIG_GETPCACHE2 19 /* sqlite3_pcache_methods2* */
#define SQLITE_CONFIG_COVERING_INDEX_SCAN 20 /* int */
#define SQLITE_CONFIG_SQLLOG 21 /* xSqllog, void* */
#define SQLITE_CONFIG_MMAP_SIZE 22 /* sqlite3_int64, sqlite3_int64 */
/*
** CAPI3REF: Database Connection Configuration Options
@ -2489,6 +2529,9 @@ SQLITE_API int sqlite3_set_authorizer(
** as each triggered subprogram is entered. The callbacks for triggers
** contain a UTF-8 SQL comment that identifies the trigger.)^
**
** The [SQLITE_TRACE_SIZE_LIMIT] compile-time option can be used to limit
** the length of [bound parameter] expansion in the output of sqlite3_trace().
**
** ^The callback function registered by sqlite3_profile() is invoked
** as each SQL statement finishes. ^The profile callback contains
** the original statement text and an estimate of wall-clock time
@ -3027,7 +3070,8 @@ SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal);
** <li>
** ^If the database schema changes, instead of returning [SQLITE_SCHEMA] as it
** always used to do, [sqlite3_step()] will automatically recompile the SQL
** statement and try to run it again.
** statement and try to run it again. As many as [SQLITE_MAX_SCHEMA_RETRY]
** retries will occur before sqlite3_step() gives up and returns an error.
** </li>
**
** <li>
@ -3231,6 +3275,9 @@ typedef struct sqlite3_context sqlite3_context;
** parameter [SQLITE_LIMIT_VARIABLE_NUMBER] (default value: 999).
**
** ^The third argument is the value to bind to the parameter.
** ^If the third parameter to sqlite3_bind_text() or sqlite3_bind_text16()
** or sqlite3_bind_blob() is a NULL pointer then the fourth parameter
** is ignored and the end result is the same as sqlite3_bind_null().
**
** ^(In those routines that have a fourth argument, its value is the
** number of bytes in the parameter. To be clear: the value is the
@ -4187,7 +4234,7 @@ SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(voi
** the content before returning.
**
** The typedef is necessary to work around problems in certain
** C++ compilers. See ticket #2191.
** C++ compilers.
*/
typedef void (*sqlite3_destructor_type)(void*);
#define SQLITE_STATIC ((sqlite3_destructor_type)0)
@ -4986,11 +5033,20 @@ SQLITE_API int sqlite3_table_column_metadata(
** ^This interface loads an SQLite extension library from the named file.
**
** ^The sqlite3_load_extension() interface attempts to load an
** SQLite extension library contained in the file zFile.
** [SQLite extension] library contained in the file zFile. If
** the file cannot be loaded directly, attempts are made to load
** with various operating-system specific extensions added.
** So for example, if "samplelib" cannot be loaded, then names like
** "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might
** be tried also.
**
** ^The entry point is zProc.
** ^zProc may be 0, in which case the name of the entry point
** defaults to "sqlite3_extension_init".
** ^(zProc may be 0, in which case SQLite will try to come up with an
** entry point name on its own. It first tries "sqlite3_extension_init".
** If that does not work, it constructs a name "sqlite3_X_init" where the
** X consists of the lower-case equivalent of all ASCII alphabetic
** characters in the filename from the last "/" to the first following
** "." and omitting any initial "lib".)^
** ^The sqlite3_load_extension() interface returns
** [SQLITE_OK] on success and [SQLITE_ERROR] if something goes wrong.
** ^If an error occurs and pzErrMsg is not 0, then the
@ -5016,11 +5072,11 @@ SQLITE_API int sqlite3_load_extension(
** CAPI3REF: Enable Or Disable Extension Loading
**
** ^So as not to open security holes in older applications that are
** unprepared to deal with extension loading, and as a means of disabling
** extension loading while evaluating user-entered SQL, the following API
** unprepared to deal with [extension loading], and as a means of disabling
** [extension loading] while evaluating user-entered SQL, the following API
** is provided to turn the [sqlite3_load_extension()] mechanism on and off.
**
** ^Extension loading is off by default. See ticket #1863.
** ^Extension loading is off by default.
** ^Call the sqlite3_enable_load_extension() routine with onoff==1
** to turn extension loading on and call it with onoff==0 to turn
** it back off again.
@ -5032,7 +5088,7 @@ SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff);
**
** ^This interface causes the xEntryPoint() function to be invoked for
** each new [database connection] that is created. The idea here is that
** xEntryPoint() is the entry point for a statically linked SQLite extension
** xEntryPoint() is the entry point for a statically linked [SQLite extension]
** that is to be automatically loaded into all new database connections.
**
** ^(Even though the function prototype shows that xEntryPoint() takes
@ -6812,10 +6868,25 @@ SQLITE_API int sqlite3_unlock_notify(
SQLITE_API int sqlite3_stricmp(const char *, const char *);
SQLITE_API int sqlite3_strnicmp(const char *, const char *, int);
/*
** CAPI3REF: String Globbing
*
** ^The [sqlite3_strglob(P,X)] interface returns zero if string X matches
** the glob pattern P, and it returns non-zero if string X does not match
** the glob pattern P. ^The definition of glob pattern matching used in
** [sqlite3_strglob(P,X)] is the same as for the "X GLOB P" operator in the
** SQL dialect used by SQLite. ^The sqlite3_strglob(P,X) function is case
** sensitive.
**
** Note that this routine returns zero on a match and non-zero if the strings
** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()].
*/
SQLITE_API int sqlite3_strglob(const char *zGlob, const char *zStr);
/*
** CAPI3REF: Error Logging Interface
**
** ^The [sqlite3_log()] interface writes a message into the error log
** ^The [sqlite3_log()] interface writes a message into the [error log]
** established by the [SQLITE_CONFIG_LOG] option to [sqlite3_config()].
** ^If logging is enabled, the zFormat string and subsequent arguments are
** used with [sqlite3_snprintf()] to generate the final output string.


@ -8,6 +8,9 @@
#include "BroDoc.h"
#include "BroDocObj.h"
#include "util.h"
#include "plugin/Manager.h"
#include "analyzer/Manager.h"
#include "analyzer/Component.h"
BroDoc::BroDoc(const std::string& rel, const std::string& abs)
{
@ -164,84 +167,77 @@ void BroDoc::SetPacketFilter(const std::string& s)
packet_filter.clear();
}
void BroDoc::AddPortAnalysis(const std::string& analyzer,
const std::string& ports)
{
std::string reST_string = analyzer + "::\n" + ports + "\n\n";
port_analysis.push_back(reST_string);
}
void BroDoc::WriteDocFile() const
{
WriteToDoc(".. Automatically generated. Do not edit.\n\n");
WriteToDoc(reST_file, ".. Automatically generated. Do not edit.\n\n");
WriteToDoc(":tocdepth: 3\n\n");
WriteToDoc(reST_file, ":tocdepth: 3\n\n");
WriteSectionHeading(doc_title.c_str(), '=');
WriteSectionHeading(reST_file, doc_title.c_str(), '=');
WriteStringList(".. bro:namespace:: %s\n", modules);
WriteStringList(reST_file, ".. bro:namespace:: %s\n", modules);
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
// WriteSectionHeading("Overview", '-');
WriteStringList("%s\n", summary);
// WriteSectionHeading(reST_file, "Overview", '-');
WriteStringList(reST_file, "%s\n", summary);
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
if ( ! modules.empty() )
{
WriteToDoc(":Namespace%s: ", (modules.size() > 1 ? "s" : ""));
// WriteStringList(":bro:namespace:`%s`", modules);
WriteStringList("``%s``, ", "``%s``", modules);
WriteToDoc("\n");
WriteToDoc(reST_file, ":Namespace%s: ", (modules.size() > 1 ? "s" : ""));
// WriteStringList(reST_file, ":bro:namespace:`%s`", modules);
WriteStringList(reST_file, "``%s``, ", "``%s``", modules);
WriteToDoc(reST_file, "\n");
}
if ( ! imports.empty() )
{
WriteToDoc(":Imports: ");
WriteToDoc(reST_file, ":Imports: ");
std::list<std::string>::const_iterator it;
for ( it = imports.begin(); it != imports.end(); ++it )
{
if ( it != imports.begin() )
WriteToDoc(", ");
WriteToDoc(reST_file, ", ");
string pretty(*it);
size_t pos = pretty.find("/index");
if ( pos != std::string::npos && pos + 6 == pretty.size() )
pretty = pretty.substr(0, pos);
WriteToDoc(":doc:`%s </scripts/%s>`", pretty.c_str(), it->c_str());
WriteToDoc(reST_file, ":doc:`%s </scripts/%s>`", pretty.c_str(), it->c_str());
}
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
}
WriteToDoc(":Source File: :download:`%s`\n",
WriteToDoc(reST_file, ":Source File: :download:`%s`\n",
downloadable_filename.c_str());
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
WriteInterface("Summary", '~', '#', true, true);
if ( ! notices.empty() )
WriteBroDocObjList(notices, "Notices", '#');
WriteBroDocObjList(reST_file, notices, "Notices", '#');
if ( port_analysis.size() || packet_filter.size() )
WriteSectionHeading("Configuration Changes", '#');
WriteSectionHeading(reST_file, "Configuration Changes", '#');
if ( ! port_analysis.empty() )
{
WriteSectionHeading("Port Analysis", '^');
WriteToDoc("Loading this script makes the following changes to "
WriteSectionHeading(reST_file, "Port Analysis", '^');
WriteToDoc(reST_file, "Loading this script makes the following changes to "
":bro:see:`dpd_config`.\n\n");
WriteStringList("%s, ", "%s", port_analysis);
WriteStringList(reST_file, "%s, ", "%s", port_analysis);
}
if ( ! packet_filter.empty() )
{
WriteSectionHeading("Packet Filter", '^');
WriteToDoc("Loading this script makes the following changes to "
WriteSectionHeading(reST_file, "Packet Filter", '^');
WriteToDoc(reST_file, "Loading this script makes the following changes to "
":bro:see:`capture_filters`.\n\n");
WriteToDoc("Filters added::\n\n");
WriteToDoc("%s\n", packet_filter.c_str());
WriteToDoc(reST_file, "Filters added::\n\n");
WriteToDoc(reST_file, "%s\n", packet_filter.c_str());
}
WriteInterface("Detailed Interface", '~', '#', true, false);
@ -267,23 +263,23 @@ void BroDoc::WriteDocFile() const
void BroDoc::WriteInterface(const char* heading, char underline,
char sub, bool isPublic, bool isShort) const
{
WriteSectionHeading(heading, underline);
WriteBroDocObjList(options, isPublic, "Options", sub, isShort);
WriteBroDocObjList(constants, isPublic, "Constants", sub, isShort);
WriteBroDocObjList(state_vars, isPublic, "State Variables", sub, isShort);
WriteBroDocObjList(types, isPublic, "Types", sub, isShort);
WriteBroDocObjList(events, isPublic, "Events", sub, isShort);
WriteBroDocObjList(hooks, isPublic, "Hooks", sub, isShort);
WriteBroDocObjList(functions, isPublic, "Functions", sub, isShort);
WriteBroDocObjList(redefs, isPublic, "Redefinitions", sub, isShort);
WriteSectionHeading(reST_file, heading, underline);
WriteBroDocObjList(reST_file, options, isPublic, "Options", sub, isShort);
WriteBroDocObjList(reST_file, constants, isPublic, "Constants", sub, isShort);
WriteBroDocObjList(reST_file, state_vars, isPublic, "State Variables", sub, isShort);
WriteBroDocObjList(reST_file, types, isPublic, "Types", sub, isShort);
WriteBroDocObjList(reST_file, events, isPublic, "Events", sub, isShort);
WriteBroDocObjList(reST_file, hooks, isPublic, "Hooks", sub, isShort);
WriteBroDocObjList(reST_file, functions, isPublic, "Functions", sub, isShort);
WriteBroDocObjList(reST_file, redefs, isPublic, "Redefinitions", sub, isShort);
}
void BroDoc::WriteStringList(const char* format, const char* last_format,
const std::list<std::string>& l) const
void BroDoc::WriteStringList(FILE* f, const char* format, const char* last_format,
const std::list<std::string>& l)
{
if ( l.empty() )
{
WriteToDoc("\n");
WriteToDoc(f, "\n");
return;
}
@ -292,12 +288,12 @@ void BroDoc::WriteStringList(const char* format, const char* last_format,
last--;
for ( it = l.begin(); it != last; ++it )
WriteToDoc(format, it->c_str());
WriteToDoc(f, format, it->c_str());
WriteToDoc(last_format, last->c_str());
WriteToDoc(f, last_format, last->c_str());
}
void BroDoc::WriteBroDocObjTable(const BroDocObjList& l) const
void BroDoc::WriteBroDocObjTable(FILE* f, const BroDocObjList& l)
{
int max_id_col = 0;
int max_com_col = 0;
@ -317,38 +313,38 @@ void BroDoc::WriteBroDocObjTable(const BroDocObjList& l) const
}
// Start table.
WriteRepeatedChar('=', max_id_col);
WriteToDoc(" ");
WriteRepeatedChar(f, '=', max_id_col);
WriteToDoc(f, " ");
if ( max_com_col == 0 )
WriteToDoc("=");
WriteToDoc(f, "=");
else
WriteRepeatedChar('=', max_com_col);
WriteRepeatedChar(f, '=', max_com_col);
WriteToDoc("\n");
WriteToDoc(f, "\n");
for ( it = l.begin(); it != l.end(); ++it )
{
if ( it != l.begin() )
WriteToDoc("\n\n");
(*it)->WriteReSTCompact(reST_file, max_id_col);
WriteToDoc(f, "\n\n");
(*it)->WriteReSTCompact(f, max_id_col);
}
// End table.
WriteToDoc("\n");
WriteRepeatedChar('=', max_id_col);
WriteToDoc(" ");
WriteToDoc(f, "\n");
WriteRepeatedChar(f, '=', max_id_col);
WriteToDoc(f, " ");
if ( max_com_col == 0 )
WriteToDoc("=");
WriteToDoc(f, "=");
else
WriteRepeatedChar('=', max_com_col);
WriteRepeatedChar(f, '=', max_com_col);
WriteToDoc("\n\n");
WriteToDoc(f, "\n\n");
}
void BroDoc::WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
const char* heading, char underline, bool isShort) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjList& l, bool wantPublic,
const char* heading, char underline, bool isShort)
{
if ( l.empty() )
return;
@ -366,7 +362,7 @@ void BroDoc::WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
if ( it == l.end() )
return;
WriteSectionHeading(heading, underline);
WriteSectionHeading(f, heading, underline);
BroDocObjList filtered_list;
@ -377,13 +373,13 @@ void BroDoc::WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
}
if ( isShort )
WriteBroDocObjTable(filtered_list);
WriteBroDocObjTable(f, filtered_list);
else
WriteBroDocObjList(filtered_list);
WriteBroDocObjList(f, filtered_list);
}
void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline, bool isShort) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline, bool isShort)
{
BroDocObjMap::const_iterator it;
BroDocObjList l;
@ -391,24 +387,24 @@ void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, bool wantPublic,
for ( it = m.begin(); it != m.end(); ++it )
l.push_back(it->second);
WriteBroDocObjList(l, wantPublic, heading, underline, isShort);
WriteBroDocObjList(f, l, wantPublic, heading, underline, isShort);
}
void BroDoc::WriteBroDocObjList(const BroDocObjList& l, const char* heading,
char underline) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjList& l, const char* heading,
char underline)
{
WriteSectionHeading(heading, underline);
WriteBroDocObjList(l);
WriteSectionHeading(f, heading, underline);
WriteBroDocObjList(f, l);
}
void BroDoc::WriteBroDocObjList(const BroDocObjList& l) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjList& l)
{
for ( BroDocObjList::const_iterator it = l.begin(); it != l.end(); ++it )
(*it)->WriteReST(reST_file);
(*it)->WriteReST(f);
}
void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, const char* heading,
char underline) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjMap& m, const char* heading,
char underline)
{
BroDocObjMap::const_iterator it;
BroDocObjList l;
@ -416,28 +412,28 @@ void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, const char* heading,
for ( it = m.begin(); it != m.end(); ++it )
l.push_back(it->second);
WriteBroDocObjList(l, heading, underline);
WriteBroDocObjList(f, l, heading, underline);
}
void BroDoc::WriteToDoc(const char* format, ...) const
void BroDoc::WriteToDoc(FILE* f, const char* format, ...)
{
va_list argp;
va_start(argp, format);
vfprintf(reST_file, format, argp);
vfprintf(f, format, argp);
va_end(argp);
}
void BroDoc::WriteSectionHeading(const char* heading, char underline) const
void BroDoc::WriteSectionHeading(FILE* f, const char* heading, char underline)
{
WriteToDoc("%s\n", heading);
WriteRepeatedChar(underline, strlen(heading));
WriteToDoc("\n");
WriteToDoc(f, "%s\n", heading);
WriteRepeatedChar(f, underline, strlen(heading));
WriteToDoc(f, "\n");
}
void BroDoc::WriteRepeatedChar(char c, size_t n) const
void BroDoc::WriteRepeatedChar(FILE* f, char c, size_t n)
{
for ( size_t i = 0; i < n; ++i )
WriteToDoc("%c", c);
WriteToDoc(f, "%c", c);
}
void BroDoc::FreeBroDocObjPtrList(BroDocObjList& l)
@ -459,3 +455,143 @@ void BroDoc::AddFunction(BroDocObj* o)
else
functions[o->Name()]->Combine(o);
}
static void WritePluginSectionHeading(FILE* f, const plugin::Plugin* p)
{
string name = p->Name();
fprintf(f, "%s\n", name.c_str());
for ( size_t i = 0; i < name.size(); ++i )
fprintf(f, "-");
fprintf(f, "\n\n");
fprintf(f, "%s\n\n", p->Description());
}
static void WriteAnalyzerComponent(FILE* f, const analyzer::Component* c)
{
EnumType* atag = analyzer_mgr->GetTagEnumType();
string tag = fmt("ANALYZER_%s", c->CanonicalName());
if ( atag->Lookup("Analyzer", tag.c_str()) < 0 )
reporter->InternalError("missing analyzer tag for %s", tag.c_str());
fprintf(f, ":bro:enum:`Analyzer::%s`\n\n", tag.c_str());
}
static void WritePluginComponents(FILE* f, const plugin::Plugin* p)
{
plugin::Plugin::component_list components = p->Components();
plugin::Plugin::component_list::const_iterator it;
fprintf(f, "Components\n");
fprintf(f, "++++++++++\n\n");
for ( it = components.begin(); it != components.end(); ++it )
{
switch ( (*it)->Type() ) {
case plugin::component::ANALYZER:
WriteAnalyzerComponent(f,
dynamic_cast<const analyzer::Component*>(*it));
break;
case plugin::component::READER:
reporter->InternalError("docs for READER component unimplemented");
case plugin::component::WRITER:
reporter->InternalError("docs for WRITER component unimplemented");
default:
reporter->InternalError("docs for unknown component unimplemented");
}
}
}
static void WritePluginBifItems(FILE* f, const plugin::Plugin* p,
plugin::BifItem::Type t, const string& heading)
{
plugin::Plugin::bif_item_list bifitems = p->BifItems();
plugin::Plugin::bif_item_list::iterator it = bifitems.begin();
while ( it != bifitems.end() )
{
if ( it->GetType() != t )
it = bifitems.erase(it);
else
++it;
}
if ( bifitems.empty() )
return;
fprintf(f, "%s\n", heading.c_str());
for ( size_t i = 0; i < heading.size(); ++i )
fprintf(f, "+");
fprintf(f, "\n\n");
for ( it = bifitems.begin(); it != bifitems.end(); ++it )
{
BroDocObj* o = doc_ids[it->GetID()];
if ( o )
o->WriteReST(f);
else
reporter->Warning("No docs for ID: %s\n", it->GetID());
}
}
static void WriteAnalyzerTagDefn(FILE* f, EnumType* e)
{
e = new CommentedEnumType(e);
e->SetTypeID(copy_string("Analyzer::Tag"));
ID* dummy_id = new ID(copy_string("Analyzer::Tag"), SCOPE_GLOBAL, true);
dummy_id->SetType(e);
dummy_id->MakeType();
list<string>* r = new list<string>();
r->push_back("Unique identifiers for protocol analyzers.");
BroDocObj bdo(dummy_id, r, true);
bdo.WriteReST(f);
}
static bool IsAnalyzerPlugin(const plugin::Plugin* p)
{
plugin::Plugin::component_list components = p->Components();
plugin::Plugin::component_list::const_iterator it;
for ( it = components.begin(); it != components.end(); ++it )
if ( (*it)->Type() != plugin::component::ANALYZER )
return false;
return true;
}
void CreateProtoAnalyzerDoc(const char* filename)
{
FILE* f = fopen(filename, "w");
fprintf(f, "Protocol Analyzer Reference\n");
fprintf(f, "===========================\n\n");
WriteAnalyzerTagDefn(f, analyzer_mgr->GetTagEnumType());
plugin::Manager::plugin_list plugins = plugin_mgr->Plugins();
plugin::Manager::plugin_list::const_iterator it;
for ( it = plugins.begin(); it != plugins.end(); ++it )
{
if ( ! IsAnalyzerPlugin(*it) )
continue;
WritePluginSectionHeading(f, *it);
WritePluginComponents(f, *it);
WritePluginBifItems(f, *it, plugin::BifItem::CONSTANT,
"Options/Constants");
WritePluginBifItems(f, *it, plugin::BifItem::GLOBAL, "Globals");
WritePluginBifItems(f, *it, plugin::BifItem::TYPE, "Types");
WritePluginBifItems(f, *it, plugin::BifItem::EVENT, "Events");
WritePluginBifItems(f, *it, plugin::BifItem::FUNCTION, "Functions");
}
fclose(f);
}


@ -81,15 +81,6 @@ public:
*/
void SetPacketFilter(const std::string& s);
/**
* Schedules documentation of a given set of ports being associated
* with a particular analyzer as a result of the current script
* being loaded -- the way the "dpd_config" table is changed.
* @param analyzer An analyzer that changed the "dpd_config" table.
* @param ports The set of ports assigned to the analyzer in table.
*/
void AddPortAnalysis(const std::string& analyzer, const std::string& ports);
/**
* Schedules documentation of a script option. An option is
* defined as any variable in the script that is declared 'const'
@ -242,7 +233,115 @@ public:
return reST_filename.c_str();
}
protected:
typedef std::list<const BroDocObj*> BroDocObjList;
typedef std::map<std::string, BroDocObj*> BroDocObjMap;
/**
* Writes out a table of BroDocObjs to the reST document
* @param f The file to write to.
* @param l A list of BroDocObj pointers
*/
static void WriteBroDocObjTable(FILE* f, const BroDocObjList& l);
/**
* Writes out given number of characters to reST document
* @param f The file to write to.
* @param c the character to write
* @param n the number of characters to write
*/
static void WriteRepeatedChar(FILE* f, char c, size_t n);
/**
* A wrapper to fprintf() that writes to the given reST document file.
* @param f The file to write to.
* @param format A printf style format string.
*/
static void WriteToDoc(FILE* f, const char* format, ...);
/**
* Writes out a list of strings to the reST document.
* If the list is empty, prints a newline character.
* @param f The file to write to.
* @param format A printf style format string for elements of the list
* except for the last one in the list
* @param last_format A printf style format string to use for the last
* element of the list
* @param l A reference to a list of strings
*/
static void WriteStringList(FILE* f, const char* format, const char* last_format,
const std::list<std::string>& l);
/**
* @see WriteStringList(FILE* f, const char*, const char*,
* const std::list<std::string>&>)
*/
static void WriteStringList(FILE* f, const char* format,
const std::list<std::string>& l){
WriteStringList(f, format, format, l);
}
/**
* Writes out a list of BroDocObj objects to the reST document
* @param f The file to write to.
* @param l A list of BroDocObj pointers
* @param wantPublic If true, filter out objects that are not declared
* in the global scope. If false, filter out those that are in
* the global scope.
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
* @param isShort Whether to write the full documentation or a "short"
* version (a single sentence)
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjList& l, bool wantPublic,
const char* heading, char underline,
bool isShort);
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document
* @see WriteBroDocObjList(FILE* f, const BroDocObjList&, bool, const char*, char,
bool)
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline,
bool isShort);
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjList& l, const char* heading,
char underline);
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjList& l);
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document
* @see WriteBroDocObjList(FILE* f, const BroDocObjList&, const char*, char)
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjMap& m, const char* heading,
char underline);
/**
* Writes out a reST section heading
* @param f The file to write to.
* @param heading The title of the heading to create
* @param underline The character to use to underline the section title
* within the reST document
*/
static void WriteSectionHeading(FILE* f, const char* heading, char underline);
private:
FILE* reST_file;
std::string reST_filename;
std::string source_filename; // points to the basename of source file
@ -255,9 +354,6 @@ protected:
std::list<std::string> imports;
std::list<std::string> port_analysis;
typedef std::list<const BroDocObj*> BroDocObjList;
typedef std::map<std::string, BroDocObj*> BroDocObjMap;
BroDocObjList options;
BroDocObjList constants;
BroDocObjList state_vars;
@ -272,107 +368,6 @@ protected:
BroDocObjList all;
/**
* Writes out a list of strings to the reST document.
* If the list is empty, prints a newline character.
* @param format A printf style format string for elements of the list
* except for the last one in the list
* @param last_format A printf style format string to use for the last
* element of the list
* @param l A reference to a list of strings
*/
void WriteStringList(const char* format, const char* last_format,
const std::list<std::string>& l) const;
/**
* @see WriteStringList(const char*, const char*,
* const std::list<std::string>&>)
*/
void WriteStringList(const char* format,
const std::list<std::string>& l) const
{
WriteStringList(format, format, l);
}
/**
* Writes out a table of BroDocObjs to the reST document
* @param l A list of BroDocObj pointers
*/
void WriteBroDocObjTable(const BroDocObjList& l) const;
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
* @param wantPublic If true, filter out objects that are not declared
* in the global scope. If false, filter out those that are in
* the global scope.
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
* @param isShort Whether to write the full documentation or a "short"
* version (a single sentence)
*/
void WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
const char* heading, char underline,
bool isShort) const;
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document
* @see WriteBroDocObjList(const BroDocObjList&, bool, const char*, char,
bool)
*/
void WriteBroDocObjList(const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline,
bool isShort) const;
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
*/
void WriteBroDocObjList(const BroDocObjList& l, const char* heading,
char underline) const;
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
*/
void WriteBroDocObjList(const BroDocObjList& l) const;
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document
* @see WriteBroDocObjList(const BroDocObjList&, const char*, char)
*/
void WriteBroDocObjList(const BroDocObjMap& m, const char* heading,
char underline) const;
/**
* A wrapper to fprintf() that always uses the reST document
* for the FILE* argument.
* @param format A printf style format string.
*/
void WriteToDoc(const char* format, ...) const;
/**
* Writes out a reST section heading
* @param heading The title of the heading to create
* @param underline The character to use to underline the section title
* within the reST document
*/
void WriteSectionHeading(const char* heading, char underline) const;
/**
* Writes out given number of characters to reST document
* @param c the character to write
* @param n the number of characters to write
*/
void WriteRepeatedChar(char c, size_t n) const;
/**
* Writes out the reST for either the script's public or private interface
* @param heading The title of the interfaces section heading
@ -387,7 +382,6 @@ protected:
*/
void WriteInterface(const char* heading, char underline, char subunderline,
bool isPublic, bool isShort) const;
private:
/**
* Frees memory allocated to BroDocObj's objects in a given list.
@ -413,4 +407,10 @@ private:
};
};
/**
* Writes out plugin index documentation for all analyzer plugins.
* @param filename the name of the file to write.
*/
void CreateProtoAnalyzerDoc(const char* filename);
#endif


@ -4,6 +4,8 @@
#include "ID.h"
#include "BroDocObj.h"
map<string, BroDocObj*> doc_ids = map<string, BroDocObj*>();
BroDocObj* BroDocObj::last = 0;
BroDocObj::BroDocObj(const ID* id, std::list<std::string>*& reST,
@ -16,6 +18,7 @@ BroDocObj::BroDocObj(const ID* id, std::list<std::string>*& reST,
is_fake_id = is_fake;
use_role = 0;
FormulateShortDesc();
doc_ids[id->Name()] = this;
}
BroDocObj::~BroDocObj()


@ -4,6 +4,7 @@
#include <cstdio>
#include <string>
#include <list>
#include <map>
#include "ID.h"
@ -134,4 +135,9 @@ protected:
private:
};
/**
* Map identifiers to their broxygen documentation objects.
*/
extern map<string, BroDocObj*> doc_ids;
#endif


@ -114,7 +114,6 @@ set(BIF_SRCS
logging.bif
input.bif
event.bif
file_analysis.bif
const.bif
types.bif
strings.bif
@ -150,6 +149,7 @@ set(bro_SUBDIR_LIBS CACHE INTERNAL "subdir libraries" FORCE)
set(bro_PLUGIN_LIBS CACHE INTERNAL "plugin libraries" FORCE)
add_subdirectory(analyzer)
add_subdirectory(file_analysis)
set(bro_SUBDIRS
${bro_SUBDIR_LIBS}
@ -359,21 +359,12 @@ set(bro_SRCS
input/readers/Binary.cc
input/readers/SQLite.cc
file_analysis/Manager.cc
file_analysis/File.cc
file_analysis/FileTimer.cc
file_analysis/FileID.h
file_analysis/Analyzer.h
file_analysis/AnalyzerSet.cc
file_analysis/Extract.cc
file_analysis/Hash.cc
file_analysis/DataEvent.cc
3rdparty/sqlite3.c
plugin/Component.cc
plugin/Manager.cc
plugin/Plugin.cc
plugin/Macros.h
nb_dns.c
digest.h
@ -395,7 +386,8 @@ set(BRO_EXE bro
CACHE STRING "Bro executable binary" FORCE)
# Target to create all the autogenerated files.
add_custom_target(generate_outputs DEPENDS ${bro_ALL_GENERATED_OUTPUTS})
add_custom_target(generate_outputs)
add_dependencies(generate_outputs ${bro_ALL_GENERATED_OUTPUTS})
# Build __load__.bro files for plugins/*.bif.bro.
bro_bif_create_loader(bif_loader_plugins ${CMAKE_BINARY_DIR}/scripts/base/bif/plugins)


@ -553,14 +553,12 @@ void builtin_error(const char* msg, BroObj* arg)
#include "input.bif.func_h"
#include "reporter.bif.func_h"
#include "strings.bif.func_h"
#include "file_analysis.bif.func_h"
#include "bro.bif.func_def"
#include "logging.bif.func_def"
#include "input.bif.func_def"
#include "reporter.bif.func_def"
#include "strings.bif.func_def"
#include "file_analysis.bif.func_def"
void init_builtin_funcs()
{
@ -575,7 +573,6 @@ void init_builtin_funcs()
#include "input.bif.func_init"
#include "reporter.bif.func_init"
#include "strings.bif.func_init"
#include "file_analysis.bif.func_init"
did_builtin_init = true;
}


@ -250,7 +250,6 @@ OpaqueType* bloomfilter_type;
#include "logging.bif.netvar_def"
#include "input.bif.netvar_def"
#include "reporter.bif.netvar_def"
#include "file_analysis.bif.netvar_def"
void init_event_handlers()
{
@ -319,7 +318,6 @@ void init_net_var()
#include "logging.bif.netvar_init"
#include "input.bif.netvar_init"
#include "reporter.bif.netvar_init"
#include "file_analysis.bif.netvar_init"
conn_id = internal_type("conn_id")->AsRecordType();
endpoint = internal_type("endpoint")->AsRecordType();
@ -261,6 +261,5 @@ extern void init_net_var();
#include "logging.bif.netvar_h"
#include "input.bif.netvar_h"
#include "reporter.bif.netvar_h"
#include "file_analysis.bif.netvar_h"
#endif
@ -1334,6 +1334,16 @@ EnumType::EnumType(const string& arg_name)
counter = 0;
}
EnumType::EnumType(EnumType* e)
: BroType(TYPE_ENUM)
{
name = e->name;
counter = e->counter;
for ( NameMap::iterator it = e->names.begin(); it != e->names.end(); ++it )
names[copy_string(it->first)] = it->second;
}
EnumType::~EnumType()
{
for ( NameMap::iterator iter = names.begin(); iter != names.end(); ++iter )
@ -523,6 +523,7 @@ protected:
class EnumType : public BroType {
public:
EnumType(const string& arg_name);
EnumType(EnumType* e);
~EnumType();
// The value of this name is next internal counter value, starting
@ -567,6 +568,7 @@ protected:
class CommentedEnumType: public EnumType {
public:
CommentedEnumType(const string& arg_name) : EnumType(arg_name) {}
CommentedEnumType(EnumType* e) : EnumType(e) {}
~CommentedEnumType();
void DescribeReST(ODesc* d) const;
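A short usage sketch of the new copy constructors; assume et is an existing EnumType* obtained elsewhere (e.g. during parsing):

EnumType* clone = new EnumType(et);                       // independent copy of name, counter, and name map
CommentedEnumType* commented = new CommentedEnumType(et); // same, as the broxygen variant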
@ -156,6 +156,12 @@ static void make_var(ID* id, BroType* t, init_class c, Expr* init,
if ( do_init )
{
if ( c == INIT_NONE && dt == VAR_REDEF && t->IsTable() &&
init && init->Tag() == EXPR_ASSIGN )
// e.g. 'redef foo["x"] = 1' is missing an init class, but the
// intention clearly isn't to overwrite entire existing table val.
c = INIT_EXTRA;
if ( (c == INIT_EXTRA && id->FindAttr(ATTR_ADD_FUNC)) ||
(c == INIT_REMOVE && id->FindAttr(ATTR_DEL_FUNC)) )
// Just apply the function.
@ -4,26 +4,12 @@
#include "Manager.h"
#include "../Desc.h"
#include "../util.h"
using namespace analyzer;
Tag::type_t Component::type_counter = 0;
static const char* canonify_name(const char* name)
{
unsigned int len = strlen(name);
char* nname = new char[len + 1];
for ( unsigned int i = 0; i < len; i++ )
{
char c = isalnum(name[i]) ? name[i] : '_';
nname[i] = toupper(c);
}
nname[len] = '\0';
return nname;
}
Component::Component(const char* arg_name, factory_callback arg_factory, Tag::subtype_t arg_subtype, bool arg_enabled, bool arg_partial)
: plugin::Component(plugin::component::ANALYZER)
{
@ -58,7 +44,7 @@ analyzer::Tag Component::Tag() const
return tag;
}
void Component::Describe(ODesc* d)
void Component::Describe(ODesc* d) const
{
plugin::Component::Describe(d);
d->Add(name);
@ -23,7 +23,6 @@ class Analyzer;
*/
class Component : public plugin::Component {
public:
typedef bool (*available_callback)();
typedef Analyzer* (*factory_callback)(Connection* conn);
/**
@ -73,7 +72,7 @@ public:
* from what's passed to the constructor but upper-cased and
* canonified to allow being part of a script-level ID.
*/
const char* Name() const { return name; }
virtual const char* Name() const { return name; }
/**
* Returns a canonicalized version of the analyzer's name. The
@ -120,7 +119,7 @@ public:
* Generates a human-readable description of the component's main
* parameters. This goes into the output of \c "bro -NN".
*/
virtual void Describe(ODesc* d);
virtual void Describe(ODesc* d) const;
Component& operator=(const Component& other);
@ -24,7 +24,7 @@ Tag::Tag(type_t arg_type, subtype_t arg_subtype)
Tag::Tag(EnumVal* arg_val)
{
assert(val);
assert(arg_val);
val = arg_val;
Ref(val);
@ -8,6 +8,11 @@
class EnumVal;
namespace file_analysis {
class Manager;
class Component;
}
namespace analyzer {
class Manager;
@ -24,7 +29,7 @@ class Component;
* subtype form an analyzer "tag". Each unique tag corresponds to a single
* "analyzer" from the user's perspective. At the script layer, these tags
* are mapped into enums of type \c Analyzer::Tag. Internally, the
* analyzer::Mangager maintains the mapping of tag to analyzer (and it also
* analyzer::Manager maintains the mapping of tag to analyzer (and it also
* assigns them their main types), and analyzer::Component creates new
* tags.
*
@ -121,9 +126,11 @@ public:
protected:
friend class analyzer::Manager;
friend class analyzer::Component;
friend class file_analysis::Manager;
friend class file_analysis::Component;
/**
* Constructor. Note
* Constructor.
*
* @param type The main type. Note that the \a analyzer::Manager
* manages the value space internally, so no one else should assign
@ -22,7 +22,7 @@ static RecordType* bittorrent_benc_value;
static TableType* bittorrent_benc_dir;
BitTorrentTracker_Analyzer::BitTorrentTracker_Analyzer(Connection* c)
: tcp::TCP_ApplicationAnalyzer("BITTORRENT", c)
: tcp::TCP_ApplicationAnalyzer("BITTORRENTTRACKER", c)
{
if ( ! bt_tracker_headers )
{
@ -7,6 +7,6 @@
BRO_PLUGIN_BEGIN(Bro, BitTorrent)
BRO_PLUGIN_DESCRIPTION("BitTorrent Analyzer");
BRO_PLUGIN_ANALYZER("BitTorrent", bittorrent::BitTorrent_Analyzer);
BRO_PLUGIN_ANALYZER("BitTorrentTracker", bittorrent::BitTorrent_Analyzer);
BRO_PLUGIN_ANALYZER("BitTorrentTracker", bittorrent::BitTorrentTracker_Analyzer);
BRO_PLUGIN_BIF_FILE(events);
BRO_PLUGIN_END
@ -693,7 +693,7 @@ refine connection SSL_Conn += {
head2 : uint8) : int
%{
if ( head0 >= 20 && head0 <= 23 &&
head1 == 0x03 && head2 < 0x03 )
head1 == 0x03 && head2 <= 0x03 )
// This is most probably SSL version 3.
return (head1 << 8) | head2;
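The two bytes are the record-layer protocol version (head1 = major, head2 = minor): 0x0300 is SSLv3, 0x0301 TLS 1.0, 0x0302 TLS 1.1, and 0x0303 TLS 1.2, so the previous strict '<' comparison rejected TLS 1.2 records. A standalone sketch of the corrected predicate (not the analyzer's actual helper):

static bool plausible_ssl_tls_version(uint8 head1, uint8 head2)
	{
	// Major version 0x03 covers SSLv3 through TLS 1.2 (minor 0x00-0x03).
	return head1 == 0x03 && head2 <= 0x03;
	}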
@ -23,5 +23,3 @@ const Tunnel::delay_gtp_confirmation: bool;
const Tunnel::ip_tunnel_timeout: interval;
const Threading::heartbeat_interval: interval;
const FileAnalysis::salt: string;
@ -920,7 +920,7 @@ event file_over_new_connection%(f: fa_file, c: connection%);
## f: The file.
##
## .. bro:see:: file_new file_over_new_connection file_gap file_state_remove
## default_file_timeout_interval FileAnalysis::postpone_timeout
## default_file_timeout_interval FileAnalysis::set_timeout_interval
## FileAnalysis::set_timeout_interval
event file_timeout%(f: fa_file%);
@ -942,19 +942,6 @@ event file_gap%(f: fa_file, offset: count, len: count%);
## .. bro:see:: file_new file_over_new_connection file_timeout file_gap
event file_state_remove%(f: fa_file%);
## This event is generated each time file analysis generates a digest of the
## file contents.
##
## f: The file.
##
## kind: The type of digest algorithm.
##
## hash: The result of the hashing.
##
## .. bro:see:: FileAnalysis::add_analyzer FileAnalysis::ANALYZER_MD5
## FileAnalysis::ANALYZER_SHA1 FileAnalysis::ANALYZER_SHA256
event file_hash%(f: fa_file, kind: string, hash: string%);
## Generated when an internal DNS lookup produces the same result as last time.
## Bro keeps an internal DNS cache for host names and IP addresses it has
## already resolved. This event is generated when a subsequent lookup returns
@ -1,127 +0,0 @@
##! Internal functions and types used by the logging framework.
module FileAnalysis;
%%{
#include "file_analysis/Manager.h"
%%}
type AnalyzerArgs: record;
## An enumeration of various file analysis actions that can be taken.
enum Analyzer %{
## Extract a file to local filesystem
ANALYZER_EXTRACT,
## Calculate an MD5 digest of the file's contents.
ANALYZER_MD5,
## Calculate an SHA1 digest of the file's contents.
ANALYZER_SHA1,
## Calculate an SHA256 digest of the file's contents.
ANALYZER_SHA256,
## Deliver the file contents to the script-layer in an event.
ANALYZER_DATA_EVENT,
%}
## :bro:see:`FileAnalysis::postpone_timeout`.
function FileAnalysis::__postpone_timeout%(file_id: string%): bool
%{
using file_analysis::FileID;
bool result = file_mgr->PostponeTimeout(FileID(file_id->CheckString()));
return new Val(result, TYPE_BOOL);
%}
## :bro:see:`FileAnalysis::set_timeout_interval`.
function FileAnalysis::__set_timeout_interval%(file_id: string, t: interval%): bool
%{
using file_analysis::FileID;
bool result = file_mgr->SetTimeoutInterval(FileID(file_id->CheckString()),
t);
return new Val(result, TYPE_BOOL);
%}
## :bro:see:`FileAnalysis::add_analyzer`.
function FileAnalysis::__add_analyzer%(file_id: string, args: any%): bool
%{
using file_analysis::FileID;
using BifType::Record::FileAnalysis::AnalyzerArgs;
RecordVal* rv = args->AsRecordVal()->CoerceTo(AnalyzerArgs);
bool result = file_mgr->AddAnalyzer(FileID(file_id->CheckString()), rv);
Unref(rv);
return new Val(result, TYPE_BOOL);
%}
## :bro:see:`FileAnalysis::remove_analyzer`.
function FileAnalysis::__remove_analyzer%(file_id: string, args: any%): bool
%{
using file_analysis::FileID;
using BifType::Record::FileAnalysis::AnalyzerArgs;
RecordVal* rv = args->AsRecordVal()->CoerceTo(AnalyzerArgs);
bool result = file_mgr->RemoveAnalyzer(FileID(file_id->CheckString()), rv);
Unref(rv);
return new Val(result, TYPE_BOOL);
%}
## :bro:see:`FileAnalysis::stop`.
function FileAnalysis::__stop%(file_id: string%): bool
%{
using file_analysis::FileID;
bool result = file_mgr->IgnoreFile(FileID(file_id->CheckString()));
return new Val(result, TYPE_BOOL);
%}
## :bro:see:`FileAnalysis::data_stream`.
function FileAnalysis::__data_stream%(source: string, data: string%): any
%{
file_mgr->DataIn(data->Bytes(), data->Len(), source->CheckString());
return 0;
%}
## :bro:see:`FileAnalysis::data_chunk`.
function FileAnalysis::__data_chunk%(source: string, data: string,
offset: count%): any
%{
file_mgr->DataIn(data->Bytes(), data->Len(), offset, source->CheckString());
return 0;
%}
## :bro:see:`FileAnalysis::gap`.
function FileAnalysis::__gap%(source: string, offset: count, len: count%): any
%{
file_mgr->Gap(offset, len, source->CheckString());
return 0;
%}
## :bro:see:`FileAnalysis::set_size`.
function FileAnalysis::__set_size%(source: string, size: count%): any
%{
file_mgr->SetSize(size, source->CheckString());
return 0;
%}
## :bro:see:`FileAnalysis::eof`.
function FileAnalysis::__eof%(source: string%): any
%{
file_mgr->EndOfFile(source->CheckString());
return 0;
%}
module GLOBAL;
## For use within a :bro:see:`get_file_handle` handler to set a unique
## identifier to associate with the current input to the file analysis
## framework. Using an empty string for the handle signifies that the
## input will be ignored/discarded.
##
## handle: A string that uniquely identifies a file.
##
## .. bro:see:: get_file_handle
function set_file_handle%(handle: string%): any
%{
file_mgr->SetHandle(handle->CheckString());
return 0;
%}
@ -5,10 +5,13 @@
#include "Val.h"
#include "NetVar.h"
#include "analyzer/Tag.h"
#include "file_analysis/file_analysis.bif.h"
namespace file_analysis {
typedef BifEnum::FileAnalysis::Analyzer FA_Tag;
typedef int FA_Tag;
class File;
@ -17,6 +20,11 @@ class File;
*/
class Analyzer {
public:
/**
* Destructor. Nothing special about it. Virtual since we definitely expect
* to delete instances of derived classes via pointers to this class.
*/
virtual ~Analyzer()
{
DBG_LOG(DBG_FILE_ANALYSIS, "Destroy file analyzer %d", tag);
@ -24,7 +32,10 @@ public:
}
/**
* Subclasses may override this to receive file data non-sequentially.
* Subclasses may override this method to receive file data non-sequentially.
* @param data points to start of a chunk of file data.
* @param len length in bytes of the chunk of data pointed to by \a data.
* @param offset the byte offset within full file that data chunk starts.
* @return true if the analyzer is still in a valid state to continue
* receiving data/events or false if it's essentially "done".
*/
@ -32,7 +43,9 @@ public:
{ return true; }
/**
* Subclasses may override this to receive file sequentially.
* Subclasses may override this method to receive file sequentially.
* @param data points to start of the next chunk of file data.
* @param len length in bytes of the chunk of data pointed to by \a data.
* @return true if the analyzer is still in a valid state to continue
* receiving data/events or false if it's essentially "done".
*/
@ -40,7 +53,7 @@ public:
{ return true; }
/**
* Subclasses may override this to specifically handle an EOF signal,
* Subclasses may override this method to specifically handle an EOF signal,
* which means no more data is going to be incoming and the analyzer
* may be deleted/cleaned up soon.
* @return true if the analyzer is still in a valid state to continue
@ -50,7 +63,10 @@ public:
{ return true; }
/**
* Subclasses may override this to handle missing data in a file stream.
* Subclasses may override this method to handle missing data in a file.
* @param offset the byte offset within full file at which the missing
* data chunk occurs.
* @param len the number of missing bytes.
* @return true if the analyzer is still in a valid state to continue
* receiving data/events or false if it's essentially "done".
*/
@ -73,17 +89,25 @@ public:
File* GetFile() const { return file; }
/**
* Retrieves an analyzer tag field from full analyzer argument record.
* @param args an \c AnalyzerArgs (script-layer type) value.
* @return the analyzer tag equivalent of the 'tag' field from the
* AnalyzerArgs value \a args.
* \c AnalyzerArgs value \a args.
*/
static FA_Tag ArgsTag(const RecordVal* args)
{
using BifType::Record::FileAnalysis::AnalyzerArgs;
return static_cast<FA_Tag>(
args->Lookup(AnalyzerArgs->FieldOffset("tag"))->AsEnum());
return args->Lookup(AnalyzerArgs->FieldOffset("tag"))->AsEnum();
}
protected:
/**
* Constructor. Only derived classes are meant to be instantiated.
* @param arg_args an \c AnalyzerArgs (script-layer type) value specifying
* tunable options, if any, related to a particular analyzer type.
* @param arg_file the file to which the analyzer is being attached.
*/
Analyzer(RecordVal* arg_args, File* arg_file)
: tag(file_analysis::Analyzer::ArgsTag(arg_args)),
args(arg_args->Ref()->AsRecordVal()),
@ -91,13 +115,11 @@ protected:
{}
private:
FA_Tag tag;
RecordVal* args;
File* file;
};
typedef file_analysis::Analyzer* (*AnalyzerInstantiator)(RecordVal* args,
File* file);
FA_Tag tag; /**< The particular analyzer type of the analyzer instance. */
RecordVal* args; /**< \c AnalyzerArgs val gives tunable analyzer params. */
File* file; /**< The file to which the analyzer is attached. */
};
} // namespace file_analysis
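To make the interface above concrete, a minimal hypothetical subclass that counts delivered bytes and reports itself "done" past a fixed limit; the class, the limit, and its registration are invented for illustration:

class ByteCounter : public file_analysis::Analyzer {
public:
	static file_analysis::Analyzer* Instantiate(RecordVal* args,
	                                            file_analysis::File* file)
		{ return new ByteCounter(args, file); }

	virtual bool DeliverStream(const u_char* data, uint64 len)
		{
		seen += len;
		return seen <= 10 * 1024 * 1024; // report "done" after 10MB
		}

protected:
	ByteCounter(RecordVal* args, file_analysis::File* file)
		: file_analysis::Analyzer(args, file), seen(0) { }

private:
	uint64 seen;
};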
@ -3,21 +3,10 @@
#include "AnalyzerSet.h"
#include "File.h"
#include "Analyzer.h"
#include "Extract.h"
#include "DataEvent.h"
#include "Hash.h"
#include "Manager.h"
using namespace file_analysis;
// keep in order w/ declared enum values in file_analysis.bif
static AnalyzerInstantiator analyzer_factory[] = {
file_analysis::Extract::Instantiate,
file_analysis::MD5::Instantiate,
file_analysis::SHA1::Instantiate,
file_analysis::SHA256::Instantiate,
file_analysis::DataEvent::Instantiate,
};
static void analyzer_del_func(void* v)
{
delete (file_analysis::Analyzer*) v;
@ -154,14 +143,13 @@ HashKey* AnalyzerSet::GetKey(const RecordVal* args) const
file_analysis::Analyzer* AnalyzerSet::InstantiateAnalyzer(RecordVal* args) const
{
file_analysis::Analyzer* a =
analyzer_factory[file_analysis::Analyzer::ArgsTag(args)](args, file);
FA_Tag tag = file_analysis::Analyzer::ArgsTag(args);
file_analysis::Analyzer* a = file_mgr->InstantiateAnalyzer(tag, args, file);
if ( ! a )
{
DBG_LOG(DBG_FILE_ANALYSIS, "Instantiate analyzer %d failed for file id",
" %s", file_analysis::Analyzer::ArgsTag(args),
file->GetID().c_str());
reporter->Error("Failed file analyzer %s instantiation for file id %s",
file_mgr->GetAnalyzerName(tag), file->GetID().c_str());
return 0;
}
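The body of Manager::InstantiateAnalyzer is not shown in this diff; as a rough sketch, the lookup it presumably performs maps the tag to a registered component and runs that component's factory (Tag().Type() and Factory() come from the Component interface added further below, the rest is assumption):

file_analysis::Analyzer* instantiate_by_tag(
	const std::list<file_analysis::Component*>& components,
	FA_Tag tag, RecordVal* args, file_analysis::File* file)
	{
	std::list<file_analysis::Component*>::const_iterator i;

	for ( i = components.begin(); i != components.end(); ++i )
		if ( (*i)->Tag().Type() == tag )
			return (*i)->Factory()(args, file); // run the registered factory

	return 0; // unknown tag; caller reports the error
	}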
@ -16,67 +16,144 @@ class File;
declare(PDict,Analyzer);
/**
* A set of file analysis analyzers indexed by AnalyzerArgs. Allows queueing
* of addition/removals so that those modifications can happen at well-defined
* times (e.g. to make sure a loop iterator isn't invalidated).
* A set of file analysis analyzers indexed by an \c AnalyzerArgs (script-layer
* type) value. Allows queueing of additions/removals so that those
* modifications can happen at well-defined times (e.g. to make sure a loop
* iterator isn't invalidated).
*/
class AnalyzerSet {
public:
/**
* Constructor. Nothing special.
* @param arg_file the file to which all analyzers in the set are attached.
*/
AnalyzerSet(File* arg_file);
/**
* Destructor. Any queued analyzer additions/removals are aborted and
* will not occur.
*/
~AnalyzerSet();
/**
* Attach an analyzer to #file immediately.
* @param args an \c AnalyzerArgs value which specifies an analyzer.
* @return true if analyzer was instantiated/attached, else false.
*/
bool Add(RecordVal* args);
/**
* Queue the attachment of an analyzer to #file.
* @param args an \c AnalyzerArgs value which specifies an analyzer.
* @return true if analyzer was able to be instantiated, else false.
*/
bool QueueAdd(RecordVal* args);
/**
* Remove an analyzer from #file immediately.
* @param args an \c AnalyzerArgs value which specifies an analyzer.
* @return false if analyzer didn't exist and so wasn't removed, else true.
*/
bool Remove(const RecordVal* args);
/**
* Queue the removal of an analyzer from #file.
* @param args an \c AnalyzerArgs value which specifies an analyzer.
* @return true if analyzer exists at time of call, else false.
*/
bool QueueRemove(const RecordVal* args);
/**
* Perform all queued modifications to the currently active analyzers.
* Perform all queued modifications to the current analyzer set.
*/
void DrainModifications();
/**
* Prepare the analyzer set to be iterated over.
* @see Dictionary#InitForIteration
* @return an iterator that may be used to loop over analyzers in the set.
*/
IterCookie* InitForIteration() const
{ return analyzer_map.InitForIteration(); }
/**
* Get next entry in the analyzer set.
* @see Dictionary#NextEntry
* @param c a set iterator.
* @return the next analyzer in the set, or a null pointer if none are
* left (in that case the cookie is also deleted).
*/
file_analysis::Analyzer* NextEntry(IterCookie* c)
{ return analyzer_map.NextEntry(c); }
protected:
/**
* Get a hash key which represents an analyzer instance.
* @param args an \c AnalyzerArgs value which specifies an analyzer.
* @return the hash key calculated from \a args
*/
HashKey* GetKey(const RecordVal* args) const;
/**
* Create an instance of a file analyzer.
* @param args an \c AnalyzerArgs value which specifies an analyzer.
* @return a new file analyzer instance.
*/
file_analysis::Analyzer* InstantiateAnalyzer(RecordVal* args) const;
/**
* Insert an analyzer instance in to the set.
* @param a an analyzer instance.
* @param key the hash key which represents the analyzer's \c AnalyzerArgs.
*/
void Insert(file_analysis::Analyzer* a, HashKey* key);
/**
* Remove an analyzer instance from the set.
* @param tag enumerator which specifies the type of analyzer to remove,
* just used for debugging messages.
* @param key the hash key which represents the analyzer's \c AnalyzerArgs.
*/
bool Remove(FA_Tag tag, HashKey* key);
private:
File* file;
File* file; /**< The file which owns the set. */
CompositeHash* analyzer_hash; /**< AnalyzerArgs hashes. */
PDict(file_analysis::Analyzer) analyzer_map; /**< Indexed by AnalyzerArgs. */
/**
* Abstract base class for analyzer set modifications.
*/
class Modification {
public:
virtual ~Modification() {}
/**
* Perform the modification on an analyzer set.
* @param set the analyzer set on which the modification will happen.
* @return true if the modification altered \a set.
*/
virtual bool Perform(AnalyzerSet* set) = 0;
/**
* Don't perform the modification on the analyzer set and clean up.
*/
virtual void Abort() = 0;
};
/**
* Represents a request to add an analyzer to an analyzer set.
*/
class AddMod : public Modification {
public:
/**
* Construct request which can add an analyzer to an analyzer set.
* @param arg_a an analyzer instance to add to an analyzer set.
* @param arg_key hash key representing the analyzer's \c AnalyzerArgs.
*/
AddMod(file_analysis::Analyzer* arg_a, HashKey* arg_key)
: Modification(), a(arg_a), key(arg_key) {}
virtual ~AddMod() {}
@ -88,8 +165,16 @@ private:
HashKey* key;
};
/**
* Represents a request to remove an analyzer from an analyzer set.
*/
class RemoveMod : public Modification {
public:
/**
* Construct request which can remove an analyzer from an analyzer set.
* @param arg_tag the tag of the analyzer type to remove from the set.
* @param arg_key hash key representing the analyzer's \c AnalyzerArgs.
*/
RemoveMod(FA_Tag arg_tag, HashKey* arg_key)
: Modification(), tag(arg_tag), key(arg_key) {}
virtual ~RemoveMod() {}
@ -102,7 +187,7 @@ private:
};
typedef queue<Modification*> ModQueue;
ModQueue mod_queue;
ModQueue mod_queue; /**< A queue of analyzer addition/removal requests. */
};
} // namespace file_analysis
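A usage sketch of the queueing contract documented above: removals requested mid-iteration are deferred so the PDict iterator stays valid, then applied in one batch. Args() is assumed to be the analyzer's accessor for its AnalyzerArgs value, and the EOF-driven loop is illustrative only:

void prune_finished_analyzers(file_analysis::AnalyzerSet& set)
	{
	IterCookie* c = set.InitForIteration();
	file_analysis::Analyzer* a;

	while ( (a = set.NextEntry(c)) )
		if ( ! a->EndOfFile() )         // deliver EOF; false means "done"
			set.QueueRemove(a->Args()); // deferred; safe while iterating

	set.DrainModifications();           // now apply the queued removals
	}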
@ -0,0 +1,22 @@
include(BroSubdir)
include_directories(BEFORE
${CMAKE_CURRENT_SOURCE_DIR}
${CMAKE_CURRENT_BINARY_DIR}
)
add_subdirectory(analyzer)
set(file_analysis_SRCS
Manager.cc
File.cc
FileTimer.cc
Analyzer.h
AnalyzerSet.cc
Component.cc
)
bif_target(file_analysis.bif)
bro_add_subdir_library(file_analysis ${file_analysis_SRCS} ${BIF_OUTPUT_CC})
add_dependencies(bro_file_analysis generate_outputs)
@ -0,0 +1,69 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include "Component.h"
#include "Manager.h"
#include "../Desc.h"
#include "../util.h"
using namespace file_analysis;
analyzer::Tag::type_t Component::type_counter = 0;
Component::Component(const char* arg_name, factory_callback arg_factory,
analyzer::Tag::subtype_t arg_subtype)
: plugin::Component(plugin::component::FILE_ANALYZER)
{
name = copy_string(arg_name);
canon_name = canonify_name(arg_name);
factory = arg_factory;
tag = analyzer::Tag(++type_counter, arg_subtype);
}
Component::Component(const Component& other)
: plugin::Component(Type())
{
name = copy_string(other.name);
canon_name = copy_string(other.canon_name);
factory = other.factory;
tag = other.tag;
}
Component::~Component()
{
delete [] name;
delete [] canon_name;
}
analyzer::Tag Component::Tag() const
{
return tag;
}
void Component::Describe(ODesc* d) const
{
plugin::Component::Describe(d);
d->Add(name);
d->Add(" (");
if ( factory )
{
d->Add("ANALYZER_");
d->Add(canon_name);
}
d->Add(")");
}
Component& Component::operator=(const Component& other)
{
if ( &other != this )
{
name = copy_string(other.name);
canon_name = copy_string(other.canon_name);
factory = other.factory;
tag = other.tag;
}
return *this;
}
@ -0,0 +1,109 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef FILE_ANALYZER_PLUGIN_COMPONENT_H
#define FILE_ANALYZER_PLUGIN_COMPONENT_H
#include "analyzer/Tag.h"
#include "plugin/Component.h"
#include "Val.h"
#include "../config.h"
#include "../util.h"
namespace file_analysis {
class File;
class Analyzer;
/**
* Component description for plugins providing file analyzers.
*
* A plugin can provide a specific file analyzer by registering this
* analyzer component, describing the analyzer.
*/
class Component : public plugin::Component {
public:
typedef Analyzer* (*factory_callback)(RecordVal* args, File* file);
/**
* Constructor.
*
* @param name The name of the provided analyzer. This name is used
* across the system to identify the analyzer, e.g., when calling
* file_analysis::Manager::InstantiateAnalyzer with a name.
*
* @param factory A factory function to instantiate instances of the
* analyzer's class, which must be derived directly or indirectly
* from file_analysis::Analyzer. This is typically a static \c
* Instantiate() method inside the class that just allocates and
* returns a new instance.
*
* @param subtype A subtype associated with this component that
* further distinguishes it. The subtype will be integrated into
* the analyzer::Tag that the manager associates with this analyzer,
* and analyzer instances can accordingly access it via analyzer::Tag().
* If not used, leave at zero.
*/
Component(const char* name, factory_callback factory,
analyzer::Tag::subtype_t subtype = 0);
/**
* Copy constructor.
*/
Component(const Component& other);
/**
* Destructor.
*/
~Component();
/**
* Returns the name of the analyzer. This name is unique across all
* analyzers and used to identify it. The returned name is derived
* from what's passed to the constructor but upper-cased and
* canonified to allow being part of a script-level ID.
*/
virtual const char* Name() const { return name; }
/**
* Returns a canonicalized version of the analyzer's name. The
* returned name is derived from what's passed to the constructor but
* upper-cased and transformed to allow being part of a script-level
* ID.
*/
const char* CanonicalName() const { return canon_name; }
/**
* Returns the analyzer's factory function.
*/
factory_callback Factory() const { return factory; }
/**
* Returns the analyzer's tag. Note that this is automatically
* generated for each new Component, and hence unique across all of
* them.
*/
analyzer::Tag Tag() const;
/**
* Generates a human-readable description of the component's main
* parameters. This goes into the output of \c "bro -NN".
*/
virtual void Describe(ODesc* d) const;
Component& operator=(const Component& other);
private:
const char* name; // The analyzer's name.
const char* canon_name; // The analyzer's canonical name.
factory_callback factory; // The analyzer's factory callback.
analyzer::Tag tag; // The automatically assigned analyzer tag.
// Global counter used to generate unique tags.
static analyzer::Tag::type_t type_counter;
};
}
#endif
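As a minimal sketch, constructing a component for the MD5 analyzer, whose Instantiate() factory appears elsewhere in this diff; the hand-off to the plugin registration machinery is omitted since that API is not part of this hunk:

file_analysis::Component* md5_component =
	new file_analysis::Component("MD5", file_analysis::MD5::Instantiate);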
@ -1,36 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef FILE_ANALYSIS_DATAEVENT_H
#define FILE_ANALYSIS_DATAEVENT_H
#include <string>
#include "Val.h"
#include "File.h"
#include "Analyzer.h"
namespace file_analysis {
/**
* An analyzer to send file data to script-layer events.
*/
class DataEvent : public file_analysis::Analyzer {
public:
virtual bool DeliverChunk(const u_char* data, uint64 len, uint64 offset);
virtual bool DeliverStream(const u_char* data, uint64 len);
static file_analysis::Analyzer* Instantiate(RecordVal* args, File* file);
protected:
DataEvent(RecordVal* args, File* file,
EventHandlerPtr ce, EventHandlerPtr se);
private:
EventHandlerPtr chunk_event;
EventHandlerPtr stream_event;
};
} // namespace file_analysis
#endif
@ -1,35 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef FILE_ANALYSIS_EXTRACT_H
#define FILE_ANALYSIS_EXTRACT_H
#include <string>
#include "Val.h"
#include "File.h"
#include "Analyzer.h"
namespace file_analysis {
/**
* An analyzer to extract files to disk.
*/
class Extract : public file_analysis::Analyzer {
public:
virtual ~Extract();
virtual bool DeliverChunk(const u_char* data, uint64 len, uint64 offset);
static file_analysis::Analyzer* Instantiate(RecordVal* args, File* file);
protected:
Extract(RecordVal* args, File* file, const string& arg_filename);
private:
string filename;
int fd;
};
} // namespace file_analysis
#endif
@ -1,11 +1,9 @@
// See the file "COPYING" in the main distribution directory for copyright.
#include <string>
#include <openssl/md5.h>
#include "File.h"
#include "FileTimer.h"
#include "FileID.h"
#include "Analyzer.h"
#include "Manager.h"
#include "Reporter.h"
@ -53,8 +51,6 @@ int File::bof_buffer_size_idx = -1;
int File::bof_buffer_idx = -1;
int File::mime_type_idx = -1;
string File::salt;
void File::StaticInit()
{
if ( id_idx != -1 )
@ -74,42 +70,27 @@ void File::StaticInit()
bof_buffer_size_idx = Idx("bof_buffer_size");
bof_buffer_idx = Idx("bof_buffer");
mime_type_idx = Idx("mime_type");
salt = BifConst::FileAnalysis::salt->CheckString();
}
File::File(const string& unique, Connection* conn, analyzer::Tag tag,
File::File(const string& file_id, Connection* conn, analyzer::Tag tag,
bool is_orig)
: id(""), unique(unique), val(0), postpone_timeout(false),
first_chunk(true), missed_bof(false), need_reassembly(false), done(false),
analyzers(this)
: id(file_id), val(0), postpone_timeout(false), first_chunk(true),
missed_bof(false), need_reassembly(false), done(false), analyzers(this)
{
StaticInit();
char tmp[20];
uint64 hash[2];
string msg(unique + salt);
MD5(reinterpret_cast<const u_char*>(msg.data()), msg.size(),
reinterpret_cast<u_char*>(hash));
uitoa_n(hash[0], tmp, sizeof(tmp), 62);
DBG_LOG(DBG_FILE_ANALYSIS, "Creating new File object %s (%s)", tmp,
unique.c_str());
DBG_LOG(DBG_FILE_ANALYSIS, "Creating new File object %s", file_id.c_str());
val = new RecordVal(fa_file_type);
val->Assign(id_idx, new StringVal(tmp));
id = FileID(tmp);
val->Assign(id_idx, new StringVal(file_id.c_str()));
if ( conn )
{
// add source, connection, is_orig fields
val->Assign(source_idx, new StringVal(analyzer_mgr->GetAnalyzerName(tag)));
SetSource(analyzer_mgr->GetAnalyzerName(tag));
val->Assign(is_orig_idx, new Val(is_orig, TYPE_BOOL));
UpdateConnectionFields(conn);
}
else
// use the unique file handle as source
val->Assign(source_idx, new StringVal(unique.c_str()));
UpdateLastActivityTime();
}
@ -189,6 +170,18 @@ int File::Idx(const string& field)
return rval;
}
string File::GetSource() const
{
Val* v = val->Lookup(source_idx);
return v ? v->AsString()->CheckString() : string();
}
void File::SetSource(const string& source)
{
val->Assign(source_idx, new StringVal(source.c_str()));
}
double File::GetTimeoutInterval() const
{
return LookupFieldDefaultInterval(timeout_interval_idx);
@ -425,7 +418,7 @@ void File::Gap(uint64 offset, uint64 len)
bool File::FileEventAvailable(EventHandlerPtr h)
{
return h && ! file_mgr->IsIgnored(unique);
return h && ! file_mgr->IsIgnored(id);
}
void File::FileEvent(EventHandlerPtr h)
@ -9,7 +9,6 @@
#include "Conn.h"
#include "Val.h"
#include "AnalyzerSet.h"
#include "FileID.h"
#include "BroString.h"
namespace file_analysis {
@ -19,13 +18,30 @@ namespace file_analysis {
*/
class File {
public:
/**
* Destructor. Nothing fancy, releases a reference to the wrapped
* \c fa_file value.
*/
~File();
/**
* @return the #val record.
* @return the wrapped \c fa_file record value, #val.
*/
RecordVal* GetVal() const { return val; }
/**
* @return the value of the "source" field from #val record or an empty
* string if it's not initialized.
*/
string GetSource() const;
/**
* Set the "source" field from #val record to \a source.
* @param source the new value of the "source" field.
*/
void SetSource(const string& source);
/**
* @return value (seconds) of the "timeout_interval" field from #val record.
*/
@ -33,18 +49,14 @@ public:
/**
* Set the "timeout_interval" field from #val record to \a interval seconds.
* @param interval the new value of the "timeout_interval" field.
*/
void SetTimeoutInterval(double interval);
/**
* @return value of the "id" field from #val record.
*/
FileID GetID() const { return id; }
/**
* @return the string which uniquely identifies the file.
*/
string GetUnique() const { return unique; }
string GetID() const { return id; }
/**
* @return value of "last_active" field in #val record;
@ -58,13 +70,15 @@ public:
/**
* Set "total_bytes" field of #val record to \a size.
* @param size the new value of the "total_bytes" field.
*/
void SetTotalBytes(uint64 size);
/**
* Compares "seen_bytes" field to "total_bytes" field of #val record
* and returns true if the comparison indicates the full file was seen.
* If "total_bytes" hasn't been set yet, it returns false.
* Compares "seen_bytes" field to "total_bytes" field of #val record to
* determine if the full file has been seen.
* @return false if "total_bytes" hasn't been set yet or "seen_bytes" is
* less than it, else true.
*/
bool IsComplete() const;
@ -78,23 +92,30 @@ public:
/**
* Queues attaching an analyzer. Only one analyzer per type can be attached
* at a time unless the arguments differ.
* @param args an \c AnalyzerArgs value representing a file analyzer.
* @return false if analyzer can't be instantiated, else true.
*/
bool AddAnalyzer(RecordVal* args);
/**
* Queues removal of an analyzer.
* @param args an \c AnalyzerArgs value representing a file analyzer.
* @return true if analyzer was active at time of call, else false.
*/
bool RemoveAnalyzer(const RecordVal* args);
/**
* Pass in non-sequential data and deliver to attached analyzers.
* @param data pointer to start of a chunk of file data.
* @param len number of bytes in the data chunk.
* @param offset number of bytes from start of file at which chunk occurs.
*/
void DataIn(const u_char* data, uint64 len, uint64 offset);
/**
* Pass in sequential data and deliver to attached analyzers.
* @param data pointer to start of a chunk of file data.
* @param len number of bytes in the data chunk.
*/
void DataIn(const u_char* data, uint64 len);
@ -105,10 +126,13 @@ public:
/**
* Inform attached analyzers about a gap in file stream.
* @param offset number of bytes in to file at which missing chunk starts.
* @param len length in bytes of the missing chunk of file data.
*/
void Gap(uint64 offset, uint64 len);
/**
* @param h pointer to an event handler.
* @return true if event has a handler and the file isn't ignored.
*/
bool FileEventAvailable(EventHandlerPtr h);
@ -116,11 +140,14 @@ public:
/**
* Raises an event related to the file's life-cycle; the only parameter
* to that event is the \c fa_file record.
* @param h pointer to an event handler.
*/
void FileEvent(EventHandlerPtr h);
/**
* Raises an event related to the file's life-cycle.
* @param h pointer to an event handler.
* @param vl list of argument values to pass to event call.
*/
void FileEvent(EventHandlerPtr h, val_list* vl);
@ -129,35 +156,51 @@ protected:
/**
* Constructor; only file_analysis::Manager should be creating these.
* @param file_id an identifier string for the file in pretty hash form
* (similar to connection uids).
* @param conn a network connection over which the file is transferred.
* @param tag the network protocol over which the file is transferred.
* @param is_orig true if the file is being transferred from the originator
* of the connection to the responder. False indicates the other
* direction.
*/
File(const string& unique, Connection* conn = 0,
File(const string& file_id, Connection* conn = 0,
analyzer::Tag tag = analyzer::Tag::Error, bool is_orig = false);
/**
* Updates the "conn_ids" and "conn_uids" fields in #val record with the
* \c conn_id and UID taken from \a conn.
* @param conn the connection over which a part of the file has been seen.
*/
void UpdateConnectionFields(Connection* conn);
/**
* Increment a byte count field of #val record by \a size.
* @param size number of bytes by which to increment.
* @param field_idx the index of the field in \c fa_file to increment.
*/
void IncrementByteCount(uint64 size, int field_idx);
/**
* Wrapper to RecordVal::LookupWithDefault for the field in #val at index
* \a idx which automatically unrefs the Val and returns a converted value.
* @param idx the index of a field of type "count" in \c fa_file.
* @return the value of the field, which may be its &default.
*/
uint64 LookupFieldDefaultCount(int idx) const;
/**
* Wrapper to RecordVal::LookupWithDefault for the field in #val at index
* \a idx which automatically unrefs the Val and returns a converted value.
* @param idx the index of a field of type "interval" in \c fa_file.
* @return the value of the field, which may be its &default.
*/
double LookupFieldDefaultInterval(int idx) const;
/**
* Buffers incoming data at the beginning of a file.
* @param data pointer to a data chunk to buffer.
* @param len number of bytes in the data chunk.
* @return true if buffering is still required, else false
*/
bool BufferBOF(const u_char* data, uint64 len);
@ -170,11 +213,15 @@ protected:
/**
* Does mime type detection and assigns type (if available) to \c mime_type
* field in #val.
* @param data pointer to a chunk of file data.
* @param len number of bytes in the data chunk.
* @return whether mime type was available.
*/
bool DetectMIME(const u_char* data, uint64 len);
/**
* Lookup a record field index/offset by name.
* @param field_name the name of the \c fa_file record field.
* @return the field offset in #val record corresponding to \a field_name.
*/
static int Idx(const string& field_name);
@ -185,15 +232,14 @@ protected:
static void StaticInit();
private:
FileID id; /**< A pretty hash that likely identifies file */
string unique; /**< A string that uniquely identifies file */
string id; /**< A pretty hash that likely identifies file */
RecordVal* val; /**< \c fa_file from script layer. */
bool postpone_timeout; /**< Whether postponing timeout is requested. */
bool first_chunk; /**< Track first non-linear chunk. */
bool missed_bof; /**< Flags that we missed start of file. */
bool need_reassembly; /**< Whether file stream reassembly is needed. */
bool done; /**< If this object is about to be deleted. */
AnalyzerSet analyzers;
AnalyzerSet analyzers; /**< The set of attached file analyzers. */
struct BOF_Buffer {
BOF_Buffer() : full(false), replayed(false), size(0) {}
@ -206,8 +252,6 @@ private:
BroString::CVec chunks;
} bof_buffer; /**< Beginning of file buffer. */
static string salt;
static int id_idx;
static int parent_id_idx;
static int source_idx;
@ -1,34 +0,0 @@
// See the file "COPYING" in the main distribution directory for copyright.
#ifndef FILE_ANALYSIS_FILEID_H
#define FILE_ANALYSIS_FILEID_H
namespace file_analysis {
/**
* A simple string wrapper class to help enforce some type safety between
* methods of FileAnalysis::Manager, some of which use a unique string to
* identify files, and others which use a pretty hash (the FileID) to identify
* files. A FileID is primarily used in methods which interface with the
* script-layer, while the unique strings are used for methods which interface
* with protocol analyzers or anything that sends data to the file analysis
* framework.
*/
struct FileID {
string id;
explicit FileID(const string arg_id) : id(arg_id) {}
FileID(const FileID& other) : id(other.id) {}
const char* c_str() const { return id.c_str(); }
bool operator==(const FileID& rhs) const { return id == rhs.id; }
bool operator<(const FileID& rhs) const { return id < rhs.id; }
FileID& operator=(const FileID& rhs) { id = rhs.id; return *this; }
FileID& operator=(const string& rhs) { id = rhs; return *this; }
};
} // namespace file_analysis
#endif
@ -5,7 +5,7 @@
using namespace file_analysis;
FileTimer::FileTimer(double t, const FileID& id, double interval)
FileTimer::FileTimer(double t, const string& id, double interval)
: Timer(t + interval, TIMER_FILE_ANALYSIS_INACTIVITY), file_id(id)
{
DBG_LOG(DBG_FILE_ANALYSIS, "New %f second timeout timer for %s",
@ -5,7 +5,6 @@
#include <string>
#include "Timer.h"
#include "FileID.h"
namespace file_analysis {
@ -14,16 +13,25 @@ namespace file_analysis {
*/
class FileTimer : public Timer {
public:
FileTimer(double t, const FileID& id, double interval);
/**
* Constructor, nothing interesting about it.
* @param t unix time at which the timer should start ticking.
* @param id the file identifier which will be checked for inactivity.
* @param interval amount of time after \a t to check for inactivity.
*/
FileTimer(double t, const string& id, double interval);
/**
* Check inactivity of file_analysis::File corresponding to #file_id,
* reschedule if active, else call file_analysis::Manager::Timeout.
* @param t current unix time
* @param is_expire true if all pending timers are being expired.
*/
void Dispatch(double t, int is_expire);
private:
FileID file_id;
string file_id;
};
} // namespace file_analysis
Some files were not shown because too many files have changed in this diff Show more