Merge remote-tracking branch 'origin/master' into topic/seth/file-analysis-exe-analyzer

Conflicts:
	src/CMakeLists.txt
	src/binpac_bro.h
	src/event.bif
	src/file_analysis.bif
	src/file_analysis/AnalyzerSet.cc
Seth Hall 2013-07-27 00:04:40 -04:00
commit 998cedb3b8
670 changed files with 35868 additions and 15013 deletions

10262 CHANGES

File diff suppressed because it is too large.


@ -61,7 +61,10 @@ distclean:
rm -rf $(BUILD)
test:
@(cd testing && make )
@( cd testing && make )
test-all: test
test -d aux/broctl && ( cd aux/broctl && make test )
configured:
@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )

50 NEWS

@ -46,6 +46,19 @@ New Functionality
have changed their signatures to work with opaque types rather
than global state as it was before.
- The scripting language now supports constructing sets, tables,
vectors, and records by name:
type MyRecordType: record {
c: count;
s: string &optional;
};
global r: MyRecordType = record($c = 7);
type MySet: set[MyRecordType];
global s = MySet([$c=1], [$c=2]);
- Strings now support the subscript operator to extract individual
characters and substrings (e.g., s[4], s[1,5]). The index expression
can take up to two indices for the start and end index of the
@ -55,11 +68,17 @@ New Functionality
global foo: function(s: string, t: string &default="abc", u: count &default=0);
- Scripts can now use two new "magic constants" @DIR and @FILENAME
that expand to the directory path of the current script and just the
script file name without path, respectively. (Jon Siwek)
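  For example, a script can report its own location (a minimal sketch):

      print fmt("%s loaded from %s", @FILENAME, @DIR);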
- The new file analysis framework moves most of the processing of file
content from script-land into the core, where it belongs. Much of
this is an internal change, the framework comes with the following
user-visible functionality (some of that was already available
before, but done differently):
content from script-land into the core, where it belongs. See
doc/file-analysis.rst for more information.
Much of this is an internal change, but the framework also comes
with the following user-visible functionality (some of that was
already available before, but done differently):
[TODO: This will probably change with further script updates.]
@ -85,6 +104,23 @@ New Functionality
- IRC DCC transfers: Record to disk.
- New packet filter framework supports BPF-based load-balancing,
shunting, and sampling; plus plugin support to customize filters
dynamically.
- Bro now provides Bloom filters of two kinds: basic Bloom filters
supporting membership tests, and counting Bloom filters that track
the frequency of elements. The corresponding functions are:
bloomfilter_basic_init(fp: double, capacity: count, name: string &default=""): opaque of bloomfilter
bloomfilter_counting_init(k: count, cells: count, max: count, name: string &default=""): opaque of bloomfilter
bloomfilter_add(bf: opaque of bloomfilter, x: any)
bloomfilter_lookup(bf: opaque of bloomfilter, x: any): count
bloomfilter_merge(bf1: opaque of bloomfilter, bf2: opaque of bloomfilter): opaque of bloomfilter
bloomfilter_clear(bf: opaque of bloomfilter)
See <INSERT LINK> for full documentation.
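  A minimal usage sketch based on the signatures above:

      event bro_init()
          {
          local bf = bloomfilter_basic_init(0.01, 100000);
          bloomfilter_add(bf, "www.example.com");
          print bloomfilter_lookup(bf, "www.example.com");  # prints 1
          }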
Changed Functionality
~~~~~~~~~~~~~~~~~~~~~
@ -163,6 +199,12 @@ Changed Functionality
- The SSH::Login notice has been superseded by a corresponding
intelligence framework observation (SSH::SUCCESSFUL_LOGIN).
- PacketFilter::all_packets has been replaced with
PacketFilter::enable_auto_protocol_capture_filters.
- We removed the BitTorrent DPD signatures pending further updates to
that analyzer.
Bro 2.1
-------


@ -1 +1 @@
2.1-641
2.1-888

@ -1 +1 @@
Subproject commit f86a3169b8d49189d264cbc1a7507260cd9ff51d
Subproject commit 896ddedde55c48ec2163577fc258b49c418abb3e

@ -1 +1 @@
Subproject commit 18c454981e3b0903811c541f6e728d4ef6cee2c5
Subproject commit a9942558c7d3dfd80148b8aaded64c82ade3d117

@ -1 +1 @@
Subproject commit 8955807b0f4151f5f6aca2e68d353b9b341d9f86
Subproject commit 889f9c65944ceac20ad9230efc39d33e6e1221c3

@ -1 +1 @@
Subproject commit 3ea30f7a146343f054a3846a61ee5c67259b2de2
Subproject commit 0cd102805e73343cab3f9fd4a76552e13940dad9

@ -1 +1 @@
Subproject commit d5b8df42cb9c398142e02d4bf8ede835fd0227f4
Subproject commit ce366206e3407e534a786ad572c342e9f9fef26b


@ -12,7 +12,7 @@
broPolicies=${BRO_SCRIPT_SOURCE_PATH}:${BRO_SCRIPT_SOURCE_PATH}/policy:${BRO_SCRIPT_SOURCE_PATH}/site
broGenPolicies=${CMAKE_BINARY_DIR}/src
broGenPolicies=${CMAKE_BINARY_DIR}/scripts
installedPolicies=${BRO_SCRIPT_INSTALL_PATH}:${BRO_SCRIPT_INSTALL_PATH}/site

2 cmake

@ -1 +1 @@
Subproject commit e1a7fd00a0a66d6831a239fe84f5fcfaa54e2c35
Subproject commit 026639f8368e56742c0cb5d9fb390ea64e60ec50


@ -82,7 +82,8 @@ class BroGeneric(ObjectDescription):
objects = self.env.domaindata['bro']['objects']
key = (self.objtype, name)
if key in objects:
if ( key in objects and self.objtype != "id" and
self.objtype != "type" ):
self.env.warn(self.env.docname,
'duplicate description of %s %s, ' %
(self.objtype, name) +
@ -150,6 +151,12 @@ class BroEnum(BroGeneric):
#self.indexnode['entries'].append(('single', indextext,
# targetname, targetname))
m = sig.split()
if len(m) < 2:
self.env.warn(self.env.docname,
"bro:enum directive missing argument(s)")
return
if m[1] == "Notice::Type":
if 'notices' not in self.env.domaindata['bro']:
self.env.domaindata['bro']['notices'] = []

184 doc/file-analysis.rst Normal file

@ -0,0 +1,184 @@
=============
File Analysis
=============
.. rst-class:: opening
In the past, writing Bro scripts with the intent of analyzing file
content could be cumbersome because the content would be
presented in different ways, via events, at the
script-layer depending on which network protocol was involved in the
file transfer. Scripts written to analyze files over one protocol
would have to be copied and modified to fit other protocols. The
file analysis framework (FAF) instead provides a generalized
presentation of file-related information. The information regarding
the protocol involved in transporting a file over the network is
still available, but it no longer has to dictate how one organizes
their scripting logic to handle it. A goal of the FAF is to
provide analysis specifically for files that is analogous to the
analysis Bro provides for network connections.
.. contents::
File Lifecycle Events
=====================
The key events that may occur during the lifetime of a file are:
:bro:see:`file_new`, :bro:see:`file_over_new_connection`,
:bro:see:`file_timeout`, :bro:see:`file_gap`, and
:bro:see:`file_state_remove`. Handling any of these events provides
some information about the file such as which network
:bro:see:`connection` and protocol are transporting the file, how many
bytes have been transferred so far, and its MIME type.
.. code:: bro
event connection_state_remove(c: connection)
{
print "connection_state_remove";
print c$uid;
print c$id;
for ( s in c$service )
print s;
}
event file_state_remove(f: fa_file)
{
print "file_state_remove";
print f$id;
for ( cid in f$conns )
{
print f$conns[cid]$uid;
print cid;
}
print f$source;
}
might give output like::
file_state_remove
Cx92a0ym5R8
REs2LQfVW2j
[orig_h=10.0.0.7, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp]
HTTP
connection_state_remove
REs2LQfVW2j
[orig_h=10.0.0.7, orig_p=59856/tcp, resp_h=192.150.187.43, resp_p=80/tcp]
HTTP
This doesn't perform any interesting analysis yet, but does highlight
the similarity between analysis of connections and files. Connections
are identified by the usual 5-tuple or a convenient UID string while
files are identified just by a string of the same format as the
connection UID. So there are unique ways to identify both files and
connections, and files hold references to the connection (or
connections) that transported them.
Adding Analysis
===============
There are builtin file analyzers which can be attached to files. Once
attached, they start receiving the contents of the file as Bro extracts
it from an ongoing network connection. What they do with the file
contents is up to the particular file analyzer implementation, but
they'll typically either report further information about the file via
events (e.g. :bro:see:`FileAnalysis::ANALYZER_MD5` will report the
file's MD5 checksum via :bro:see:`file_hash` once calculated) or they'll
have some side effect (e.g. :bro:see:`FileAnalysis::ANALYZER_EXTRACT`
will write the contents of the file out to the local file system).
In the future there may be file analyzers that automatically attach to
files based on heuristics, similar to the Dynamic Protocol Detection
(DPD) framework for connections, but many will always require an
explicit attachment decision:
.. code:: bro
event file_new(f: fa_file)
{
print "new file", f$id;
if ( f?$mime_type && f$mime_type == "text/plain" )
FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_MD5]);
}
event file_hash(f: fa_file, kind: string, hash: string)
{
print "file_hash", f$id, kind, hash;
}
this script calculates MD5s for all plain text files and might give
output::
new file, Cx92a0ym5R8
file_hash, Cx92a0ym5R8, md5, 397168fd09991a0e712254df7bc639ac
Some file analyzers might have tunable parameters that need to be
specified in the call to :bro:see:`FileAnalysis::add_analyzer`:
.. code:: bro
event file_new(f: fa_file)
{
FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_EXTRACT,
$extract_filename="./myfile"]);
}
In this case, the file extraction analyzer doesn't generate any further
events, but does have the side effect of writing out the file contents
to the local file system at the specified location of ``./myfile``. Of
course, for a network with more than a single file being transferred,
it's probably preferable to specify a different extraction path for each
file, unlike this example.
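For example, a sketch that instead derives a unique output path from
the file's identifier (the ``f$id`` field shown earlier):

.. code:: bro

    event file_new(f: fa_file)
        {
        FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_EXTRACT,
                                       $extract_filename=fmt("./extract-%s", f$id)]);
        }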
Regardless of which file analyzers end up acting on a file, general
information about the file (e.g. size, time of last data transferred,
MIME type, etc.) is logged in ``file_analysis.log``.
Input Framework Integration
===========================
The FAF comes with a simple way to integrate with the :doc:`Input
Framework <input>`, so that Bro can analyze files from external sources
in the same way it analyzes files that it sees coming over traffic from
a network interface it's monitoring. It only requires a call to
:bro:see:`Input::add_analysis`:
.. code:: bro
redef exit_only_after_terminate = T;
event file_new(f: fa_file)
{
print "new file", f$id;
FileAnalysis::add_analyzer(f, [$tag=FileAnalysis::ANALYZER_MD5]);
}
event file_state_remove(f: fa_file)
{
Input::remove(f$source);
terminate();
}
event file_hash(f: fa_file, kind: string, hash: string)
{
print "file_hash", f$id, kind, hash;
}
event bro_init()
{
local source: string = "./myfile";
Input::add_analysis([$source=source, $name=source]);
}
Note that the "source" field of :bro:see:`fa_file` corresponds to the
"name" field of :bro:see:`Input::AnalysisDescription` since that is what
the input framework uses to uniquely identify an input stream.
The output of the above script may be::
new file, G1fS2xthS4l
file_hash, G1fS2xthS4l, md5, 54098b367d2e87b078671fad4afb9dbb
Nothing that special, but it at least verifies the MD5 file analyzer
saw all the bytes of the input file and calculated the checksum
correctly!


@ -25,6 +25,7 @@ Frameworks
notice
logging
input
file-analysis
cluster
signatures
@ -45,7 +46,7 @@ Script Reference
scripts/packages
scripts/index
scripts/builtins
scripts/bifs
scripts/proto-analyzers
Other Bro Components
--------------------


@ -89,8 +89,7 @@ Note the fields that are set for the filter:
are generated by taking the stream's ID and munging it slightly.
:bro:enum:`Conn::LOG` is converted into ``conn``,
:bro:enum:`PacketFilter::LOG` is converted into
``packet_filter``, and :bro:enum:`Notice::POLICY_LOG` is
converted into ``notice_policy``.
``packet_filter``.
``include``
A set limiting the fields to the ones given. The names


@ -86,21 +86,21 @@ directly make modifications to the :bro:see:`Notice::Info` record
given as the argument to the hook.
Here's a simple example which tells Bro to send an email for all notices of
type :bro:see:`SSH::Login` if the server is 10.0.0.1:
type :bro:see:`SSH::Password_Guessing` if the server is 10.0.0.1:
.. code:: bro
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Login && n$id$resp_h == 10.0.0.1 )
if ( n$note == SSH::Password_Guessing && n$id$resp_h == 10.0.0.1 )
add n$actions[Notice::ACTION_EMAIL];
}
.. note::
Keep in mind that the semantics of the SSH::Login notice are
such that it is only raised when Bro heuristically detects a successful
login. No apparently failed logins will raise this notice.
Keep in mind that the semantics of the SSH::Password_Guessing notice are
such that it is only raised when Bro heuristically detects a failed
login.
Hooks can also have priorities applied to order their execution like events
with a default priority of 0. Greater values are executed first. Setting
@ -110,7 +110,7 @@ a hook body to run before default hook bodies might look like this:
hook Notice::policy(n: Notice::Info) &priority=5
{
if ( n$note == SSH::Login && n$id$resp_h == 10.0.0.1 )
if ( n$note == SSH::Password_Guessing && n$id$resp_h == 10.0.0.1 )
add n$actions[Notice::ACTION_EMAIL];
}
@ -173,16 +173,16 @@ Raising Notices
A script should raise a notice for any occurrence that a user may want
to be notified about or take action on. For example, whenever the base
SSH analysis script sees an SSH session where it is heuristically
guessed to be a successful login, it raises a Notice of the type
:bro:see:`SSH::Login`. The code in the base SSH analysis script looks
like this:
SSH analysis script sees enough failed logins to a given host, it
raises a notice of the type :bro:see:`SSH::Password_Guessing`. The code
in the base SSH analysis script which raises the notice looks like this:
.. code:: bro
NOTICE([$note=SSH::Login,
$msg="Heuristically detected successful SSH login.",
$conn=c]);
NOTICE([$note=Password_Guessing,
$msg=fmt("%s appears to be guessing SSH passwords (seen in %d connections).", key$host, r$num),
$src=key$host,
$identifier=cat(key$host)]);
:bro:see:`NOTICE` is a normal function in the global namespace which
wraps a function within the ``Notice`` namespace. It takes a single


@ -15,11 +15,11 @@ endif ()
#
# srcDir: the directory which contains broInput
# broInput: the file name of a bro policy script, any path prefix of this
# argument will be used to derive what path under policy/ the generated
# argument will be used to derive what path under scripts/ the generated
# documentation will be placed.
# group: optional name of group that the script documentation will belong to.
# If this is not given, .bif files automatically get their own group or
# the group is automatically by any path portion of the broInput argument.
# If this is not given, the group is automatically set to any path portion
# of the broInput argument.
#
# In addition to adding the makefile target, several CMake variables are set:
#
@ -45,12 +45,6 @@ macro(REST_TARGET srcDir broInput)
set(sumTextSrc ${absSrcPath})
set(ogSourceFile ${absSrcPath})
if (${extension} STREQUAL ".bif.bro")
set(ogSourceFile ${BIF_SRC_DIR}/${basename})
# the summary text is taken at configure time, but .bif.bro files
# may not have been generated yet, so read .bif file instead
set(sumTextSrc ${ogSourceFile})
endif ()
if (NOT relDstDir)
set(docName "${basename}")
@ -70,8 +64,6 @@ macro(REST_TARGET srcDir broInput)
if (NOT "${ARGN}" STREQUAL "")
set(group ${ARGN})
elseif (${extension} STREQUAL ".bif.bro")
set(group bifs)
elseif (relDstDir)
set(group ${relDstDir}/index)
# add package index to master package list if not already in it
@ -107,7 +99,7 @@ macro(REST_TARGET srcDir broInput)
COMMAND "${CMAKE_COMMAND}"
ARGS -E remove_directory .state
# generate the reST documentation using bro
COMMAND BROPATH=${BROPATH}:${srcDir} ${CMAKE_BINARY_DIR}/src/bro
COMMAND BROPATH=${BROPATH}:${srcDir} BROMAGIC=${CMAKE_SOURCE_DIR}/magic ${CMAKE_BINARY_DIR}/src/bro
ARGS -b -Z ${broInput} || (rm -rf .state *.log *.rst && exit 1)
# move generated doc into a new directory tree that
# defines the final structure of documents
@ -132,6 +124,29 @@ endmacro(REST_TARGET)
# Schedule Bro scripts for which to generate documentation.
include(DocSourcesList.cmake)
# This reST target is independent of a particular Bro script...
add_custom_command(OUTPUT proto-analyzers.rst
# delete any leftover state from previous bro runs
COMMAND "${CMAKE_COMMAND}"
ARGS -E remove_directory .state
# generate the reST documentation using bro
COMMAND BROPATH=${BROPATH}:${srcDir} BROMAGIC=${CMAKE_SOURCE_DIR}/magic ${CMAKE_BINARY_DIR}/src/bro
ARGS -b -Z base/init-bare.bro || (rm -rf .state *.log *.rst && exit 1)
# move generated doc into a new directory tree that
# defines the final structure of documents
COMMAND "${CMAKE_COMMAND}"
ARGS -E make_directory ${dstDir}
COMMAND "${CMAKE_COMMAND}"
ARGS -E copy proto-analyzers.rst ${dstDir}
# clean up the build directory
COMMAND rm
ARGS -rf .state *.log *.rst
DEPENDS bro
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMENT "[Bro] Generating reST docs for proto-analyzers.rst"
)
list(APPEND ALL_REST_OUTPUTS proto-analyzers.rst)
# create temporary list of all docs to include in the master policy/index file
file(WRITE ${MASTER_POLICY_INDEX} "${MASTER_POLICY_INDEX_TEXT}")


@ -16,15 +16,65 @@ rest_target(${CMAKE_CURRENT_SOURCE_DIR} example.bro internal)
rest_target(${psd} base/init-default.bro internal)
rest_target(${psd} base/init-bare.bro internal)
rest_target(${CMAKE_BINARY_DIR}/src base/bro.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/const.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/event.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/file_analysis.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/input.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/logging.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/reporter.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/strings.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/src base/types.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/analyzer.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/bloom-filter.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/bro.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/const.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/event.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/file_analysis.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/input.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/logging.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_ARP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_AYIYA.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_BackDoor.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_BitTorrent.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_ConnSize.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_DCE_RPC.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_DHCP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_DNS.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_FTP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_FTP.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_File.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_FileHash.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Finger.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_GTPv1.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Gnutella.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_HTTP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_HTTP.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_ICMP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_IRC.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Ident.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_InterConn.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Login.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Login.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_MIME.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Modbus.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_NCP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_NTP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_NetBIOS.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_NetBIOS.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_NetFlow.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_PIA.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_POP3.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_RPC.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SMB.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SMTP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SMTP.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SOCKS.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SSH.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SSL.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SSL.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_SteppingStone.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Syslog.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_TCP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_TCP.functions.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_Teredo.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_UDP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/plugins/Bro_ZIP.events.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/reporter.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/strings.bif.bro)
rest_target(${CMAKE_BINARY_DIR}/scripts base/bif/types.bif.bro)
rest_target(${psd} base/frameworks/analyzer/main.bro)
rest_target(${psd} base/frameworks/cluster/main.bro)
rest_target(${psd} base/frameworks/cluster/nodes/manager.bro)
rest_target(${psd} base/frameworks/cluster/nodes/proxy.bro)
@ -63,6 +113,7 @@ rest_target(${psd} base/frameworks/notice/non-cluster.bro)
rest_target(${psd} base/frameworks/notice/weird.bro)
rest_target(${psd} base/frameworks/packet-filter/main.bro)
rest_target(${psd} base/frameworks/packet-filter/netstats.bro)
rest_target(${psd} base/frameworks/packet-filter/utils.bro)
rest_target(${psd} base/frameworks/reporter/main.bro)
rest_target(${psd} base/frameworks/signatures/main.bro)
rest_target(${psd} base/frameworks/software/main.bro)
@ -141,15 +192,16 @@ rest_target(${psd} policy/frameworks/intel/smtp-url-extraction.bro)
rest_target(${psd} policy/frameworks/intel/smtp.bro)
rest_target(${psd} policy/frameworks/intel/ssl.bro)
rest_target(${psd} policy/frameworks/intel/where-locations.bro)
rest_target(${psd} policy/frameworks/packet-filter/shunt.bro)
rest_target(${psd} policy/frameworks/software/version-changes.bro)
rest_target(${psd} policy/frameworks/software/vulnerable.bro)
rest_target(${psd} policy/integration/barnyard2/main.bro)
rest_target(${psd} policy/integration/barnyard2/types.bro)
rest_target(${psd} policy/integration/collective-intel/main.bro)
rest_target(${psd} policy/misc/analysis-groups.bro)
rest_target(${psd} policy/misc/app-metrics.bro)
rest_target(${psd} policy/misc/capture-loss.bro)
rest_target(${psd} policy/misc/detect-traceroute/main.bro)
rest_target(${psd} policy/misc/load-balancing.bro)
rest_target(${psd} policy/misc/loaded-scripts.bro)
rest_target(${psd} policy/misc/profiling.bro)
rest_target(${psd} policy/misc/scan.bro)


@ -1,5 +0,0 @@
.. This is a stub doc to which broxygen appends during the build process
Built-In Functions (BIFs)
=========================


@ -246,6 +246,31 @@ The Bro scripting language supports the following built-in types.
[5] = "five",
};
A table constructor (equivalent to above example) can also be used
to create a table:
.. code:: bro
global t2: table[count] of string = table(
[11] = "eleven",
[5] = "five"
);
Table constructors can also be explicitly named by a type, which is
useful when a more complex index type could otherwise be
ambiguous:
.. code:: bro
type MyRec: record {
a: count &optional;
b: count;
};
type MyTable: table[MyRec] of string;
global t3 = MyTable([[$b=5]] = "b5", [[$b=7]] = "b7");
Accessing table elements is provided by enclosing values within square
brackets (``[]``), for example:
@ -308,6 +333,28 @@ The Bro scripting language supports the following built-in types.
The types are explicitly shown in the example above, but they could
have been left to type inference.
A set constructor (equivalent to above example) can also be used to
create a set:
.. code:: bro
global s3: set[port] = set(21/tcp, 23/tcp, 80/tcp, 443/tcp);
Set constructors can also be explicitly named by a type, which is
useful when a more complex index type could otherwise be
ambiguous:
.. code:: bro
type MyRec: record {
a: count &optional;
b: count;
};
type MySet: set[MyRec];
global s4 = MySet([$b=1], [$b=2]);
Set membership is tested with ``in``:
.. code:: bro
@ -349,6 +396,21 @@ The Bro scripting language supports the following built-in types.
global v: vector of string = vector("one", "two", "three");
Vector constructors can also be explicitly named by a type, which
is useful when a more complex yield type could otherwise be
ambiguous.
.. code:: bro
type MyRec: record {
a: count &optional;
b: count;
};
type MyVec: vector of MyRec;
global v2 = MyVec([$b=1], [$b=2], [$b=3]);
Adding an element to a vector involves accessing/assigning it:
.. code:: bro
@ -402,6 +464,44 @@ The Bro scripting language supports the following built-in types.
if ( r?$s )
...
Records can also be created using a constructor syntax:
.. code:: bro
global r2: MyRecordType = record($c = 7);
And the constructor can be explicitly named by type, too, which
is arguably more readable code:
.. code:: bro
global r3 = MyRecordType($c = 42);
.. bro:type:: opaque
A data type whose actual representation/implementation is
intentionally hidden, but whose values may be passed to certain
functions that can actually access the internal/hidden resources.
Opaque types are differentiated from each other by qualifying them
like ``opaque of md5`` or ``opaque of sha1``. Any valid identifier
can be used as the type qualifier.
An example use of this type is the set of built-in functions which
perform hashing:
.. code:: bro
local handle: opaque of md5 = md5_hash_init();
md5_hash_update(handle, "test");
md5_hash_update(handle, "testing");
print md5_hash_finish(handle);
Here the opaque type is used to provide a handle to a particular
resource which is calculating an MD5 checksum incrementally over
time, but the details of that resource aren't relevant; it's only
necessary to have a handle as a way of identifying it and
distinguishing it from other such resources.
.. bro:type:: file
Bro supports writing to files, but not reading from them. For


@ -54,11 +54,11 @@ global example_ports = {
443/tcp, 562/tcp,
} &redef;
# redefinitions of "dpd_config" are self-documenting and
# go into the generated doc's "Port Analysis" section
redef dpd_config += {
[ANALYZER_SSL] = [$ports = example_ports]
};
event bro_init()
{
Analyzer::register_for_ports(Analyzer::ANALYZER_SSL, example_ports);
}
# redefinitions of "Notice::Type" are self-documenting, but
# more information can be supplied in two different ways


@ -67,12 +67,12 @@ sourcedir=${thisdir}/../..
echo "$statictext" > $outfile
bifs=`( cd ${sourcedir}/src && find . -name \*\.bif | sort )`
bifs=`( cd ${sourcedir}/build/scripts/base && find . -name \*\.bif.bro | sort )`
for file in $bifs
do
f=${file:2}.bro
echo "rest_target(\${CMAKE_BINARY_DIR}/src base/$f)" >> $outfile
f=${file:2}
echo "rest_target(\${CMAKE_BINARY_DIR}/scripts base/$f)" >> $outfile
done
scriptfiles=`( cd ${sourcedir}/scripts && find . -name \*\.bro | sort )`


@ -0,0 +1 @@
@load ./main


@ -0,0 +1,217 @@
##! Framework for managing Bro's protocol analyzers.
##!
##! The analyzer framework allows one to dynamically enable or disable analyzers, as
##! well as to manage the well-known ports which automatically activate a
##! particular analyzer for new connections.
##!
##! Protocol analyzers are identified by unique tags of type
##! :bro:type:`Analyzer::Tag`, such as :bro:enum:`Analyzer::ANALYZER_HTTP`.
##! These tags are defined internally by
##! the analyzers themselves, and documented in their analyzer-specific
##! description along with the events that they generate.
@load base/frameworks/packet-filter/utils
module Analyzer;
export {
## If true, all available analyzers are initially disabled at startup. One
## can then selectively enable them with
## :bro:id:`Analyzer::enable_analyzer`.
global disable_all = F &redef;
## Enables an analyzer. Once enabled, the analyzer may be used for analysis
## of future connections as decided by Bro's dynamic protocol detection.
##
## tag: The tag of the analyzer to enable.
##
## Returns: True if the analyzer was successfully enabled.
global enable_analyzer: function(tag: Analyzer::Tag) : bool;
## Disables an analyzer. Once disabled, the analyzer will not be used
## further for analysis of future connections.
##
## tag: The tag of the analyzer to disable.
##
## Returns: True if the analyzer was successfully disabled.
global disable_analyzer: function(tag: Analyzer::Tag) : bool;
## Registers a set of well-known ports for an analyzer. If a future
## connection on one of these ports is seen, the analyzer will be
## automatically assigned to parsing it. The function *adds* to all ports
## already registered; it doesn't replace them.
##
## tag: The tag of the analyzer.
##
## ports: The set of well-known ports to associate with the analyzer.
##
## Returns: True if the ports were successfully registered.
global register_for_ports: function(tag: Analyzer::Tag, ports: set[port]) : bool;
## Registers an individual well-known port for an analyzer. If a future
## connection on this port is seen, the analyzer will be automatically
## assigned to parsing it. The function *adds* to all ports already
## registered; it doesn't replace them.
##
## tag: The tag of the analyzer.
##
## p: The well-known port to associate with the analyzer.
##
## Returns: True if the port was successfully registered.
global register_for_port: function(tag: Analyzer::Tag, p: port) : bool;
## Returns a set of all well-known ports currently registered for a
## specific analyzer.
##
## tag: The tag of the analyzer.
##
## Returns: The set of ports.
global registered_ports: function(tag: Analyzer::Tag) : set[port];
## Returns a table of all ports-to-analyzer mappings currently registered.
##
## Returns: A table mapping each analyzer to the set of ports
## registered for it.
global all_registered_ports: function() : table[Analyzer::Tag] of set[port];
## Translates an analyzer type to a string with the analyzer's name.
##
## tag: The analyzer tag.
##
## Returns: The analyzer name corresponding to the tag.
global name: function(tag: Analyzer::Tag) : string;
## Schedules an analyzer for a future connection originating from a given IP
## address and port.
##
## orig: The IP address originating a connection in the future.
## 0.0.0.0 can be used as a wildcard to match any originator address.
##
## resp: The IP address responding to a connection from *orig*.
##
## resp_p: The destination port at *resp*.
##
## analyzer: The analyzer ID.
##
## tout: A timeout interval after which the scheduling request will be
## discarded if the connection has not yet been seen.
##
## Returns: True if successful.
global schedule_analyzer: function(orig: addr, resp: addr, resp_p: port,
analyzer: Analyzer::Tag, tout: interval) : bool;
## Automatically creates a BPF filter for the specified protocol based
## on the data supplied for the protocol through the
## :bro:see:`Analyzer::register_for_ports` function.
##
## tag: The analyzer tag.
##
## Returns: BPF filter string.
global analyzer_to_bpf: function(tag: Analyzer::Tag): string;
## Create a BPF filter which matches all of the ports defined
## by the various protocol analysis scripts as "registered ports"
## for the protocol.
global get_bpf: function(): string;
## A set of analyzers to disable by default at startup. The default set
## contains legacy analyzers that are no longer supported.
global disabled_analyzers: set[Analyzer::Tag] = {
ANALYZER_INTERCONN,
ANALYZER_STEPPINGSTONE,
ANALYZER_BACKDOOR,
ANALYZER_TCPSTATS,
} &redef;
}
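# Usage sketch (the port here is hypothetical, not part of this script):
# a site could register an extra well-known port for an analyzer at startup.
#
#     event bro_init()
#         {
#         Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, set(8080/tcp));
#         }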
@load base/bif/analyzer.bif
global ports: table[Analyzer::Tag] of set[port];
event bro_init() &priority=5
{
if ( disable_all )
__disable_all_analyzers();
for ( a in disabled_analyzers )
disable_analyzer(a);
}
function enable_analyzer(tag: Analyzer::Tag) : bool
{
return __enable_analyzer(tag);
}
function disable_analyzer(tag: Analyzer::Tag) : bool
{
return __disable_analyzer(tag);
}
function register_for_ports(tag: Analyzer::Tag, ports: set[port]) : bool
{
local rc = T;
for ( p in ports )
{
if ( ! register_for_port(tag, p) )
rc = F;
}
return rc;
}
function register_for_port(tag: Analyzer::Tag, p: port) : bool
{
if ( ! __register_for_port(tag, p) )
return F;
if ( tag !in ports )
ports[tag] = set();
add ports[tag][p];
return T;
}
function registered_ports(tag: Analyzer::Tag) : set[port]
{
return tag in ports ? ports[tag] : set();
}
function all_registered_ports(): table[Analyzer::Tag] of set[port]
{
return ports;
}
function name(atype: Analyzer::Tag) : string
{
return __name(atype);
}
function schedule_analyzer(orig: addr, resp: addr, resp_p: port,
analyzer: Analyzer::Tag, tout: interval) : bool
{
return __schedule_analyzer(orig, resp, resp_p, analyzer, tout);
}
function analyzer_to_bpf(tag: Analyzer::Tag): string
{
# Return an empty string if an undefined analyzer was given.
if ( tag !in ports )
return "";
local output = "";
for ( p in ports[tag] )
output = PacketFilter::combine_filters(output, "or", PacketFilter::port_to_bpf(p));
return output;
}
function get_bpf(): string
{
local output = "";
for ( tag in ports )
{
output = PacketFilter::combine_filters(output, "or", analyzer_to_bpf(tag));
}
return output;
}


@ -216,12 +216,9 @@ function setup_peer(p: event_peer, node: Node)
request_remote_events(p, node$events);
}
if ( node?$capture_filter )
if ( node?$capture_filter && node$capture_filter != "" )
{
local filter = node$capture_filter;
if ( filter == "" )
filter = PacketFilter::default_filter;
do_script_log(p, fmt("sending capture_filter: %s", filter));
send_capture_filter(p, filter);
}


@ -1,212 +0,0 @@
# Signatures to initiate dynamic protocol detection.
signature dpd_ftp_client {
ip-proto == tcp
payload /(|.*[\n\r]) *[uU][sS][eE][rR] /
tcp-state originator
}
# Match for server greeting (220, 120) and for login or passwd
# required (230, 331).
signature dpd_ftp_server {
ip-proto == tcp
payload /[\n\r ]*(120|220)[^0-9].*[\n\r] *(230|331)[^0-9]/
tcp-state responder
requires-reverse-signature dpd_ftp_client
enable "ftp"
}
signature dpd_http_client {
ip-proto == tcp
payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
tcp-state originator
}
signature dpd_http_server {
ip-proto == tcp
payload /^HTTP\/[0-9]/
tcp-state responder
requires-reverse-signature dpd_http_client
enable "http"
}
signature dpd_bittorrenttracker_client {
ip-proto == tcp
payload /^.*\/announce\?.*info_hash/
tcp-state originator
}
signature dpd_bittorrenttracker_server {
ip-proto == tcp
payload /^HTTP\/[0-9]/
tcp-state responder
requires-reverse-signature dpd_bittorrenttracker_client
enable "bittorrenttracker"
}
signature dpd_bittorrent_peer1 {
ip-proto == tcp
payload /^\x13BitTorrent protocol/
tcp-state originator
}
signature dpd_bittorrent_peer2 {
ip-proto == tcp
payload /^\x13BitTorrent protocol/
tcp-state responder
requires-reverse-signature dpd_bittorrent_peer1
enable "bittorrent"
}
signature irc_client1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Uu][Ss][Ee][Rr] +.+[\n\r]+ *[Nn][Ii][Cc][Kk] +.*[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_client2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Nn][Ii][Cc][Kk] +.+[\r\n]+ *[Uu][Ss][Ee][Rr] +.+[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_server_reply {
ip-proto == tcp
payload /^(|.*[\n\r])(:[^ \n\r]+ )?[0-9][0-9][0-9] /
tcp-state responder
}
signature irc_server_to_server1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
}
signature irc_server_to_server2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
requires-reverse-signature irc_server_to_server1
enable "irc"
}
signature dpd_smtp_client {
ip-proto == tcp
payload /(|.*[\n\r])[[:space:]]*([hH][eE][lL][oO]|[eE][hH][lL][oO])/
requires-reverse-signature dpd_smtp_server
enable "smtp"
tcp-state originator
}
signature dpd_smtp_server {
ip-proto == tcp
payload /^[[:space:]]*220[[:space:]-]/
tcp-state responder
}
signature dpd_ssh_client {
ip-proto == tcp
payload /^[sS][sS][hH]-/
requires-reverse-signature dpd_ssh_server
enable "ssh"
tcp-state originator
}
signature dpd_ssh_server {
ip-proto == tcp
payload /^[sS][sS][hH]-/
tcp-state responder
}
signature dpd_pop3_server {
ip-proto == tcp
payload /^\+OK/
requires-reverse-signature dpd_pop3_client
enable "pop3"
tcp-state responder
}
signature dpd_pop3_client {
ip-proto == tcp
payload /(|.*[\r\n])[[:space:]]*([uU][sS][eE][rR][[:space:]]|[aA][pP][oO][pP][[:space:]]|[cC][aA][pP][aA]|[aA][uU][tT][hH])/
tcp-state originator
}
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
}
signature dpd_ssl_client {
ip-proto == tcp
# Client hello.
payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
tcp-state originator
}
signature dpd_ayiya {
ip-proto = udp
payload /^..\x11\x29/
enable "ayiya"
}
signature dpd_teredo {
ip-proto = udp
payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
enable "teredo"
}
signature dpd_socks4_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state originator
}
signature dpd_socks4_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state responder
enable "socks"
}
signature dpd_socks4_reverse_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state responder
}
signature dpd_socks4_reverse_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_reverse_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state originator
enable "socks"
}
signature dpd_socks5_client {
ip-proto == tcp
# Watch for a few authentication methods to reduce false positives.
payload /^\x05.[\x00\x01\x02]/
tcp-state originator
}
signature dpd_socks5_server {
ip-proto == tcp
requires-reverse-signature dpd_socks5_client
# Watch for a single authentication method to be chosen by the server or
# the server to indicate that no authentication is required.
payload /^\x05(\x00|\x01[\x00\x01\x02])/
tcp-state responder
enable "socks"
}


@ -3,8 +3,6 @@
module DPD;
@load-sigs ./dpd.sig
export {
## Add the DPD logging stream identifier.
redef enum Log::ID += { LOG };
@ -41,22 +39,11 @@ redef record connection += {
event bro_init() &priority=5
{
Log::create_stream(DPD::LOG, [$columns=Info]);
# Populate the internal DPD analysis variable.
for ( a in dpd_config )
{
for ( p in dpd_config[a]$ports )
{
if ( p !in dpd_analyzer_ports )
dpd_analyzer_ports[p] = set();
add dpd_analyzer_ports[p][a];
}
}
}
event protocol_confirmation(c: connection, atype: count, aid: count) &priority=10
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=10
{
local analyzer = analyzer_name(atype);
local analyzer = Analyzer::name(atype);
if ( fmt("-%s",analyzer) in c$service )
delete c$service[fmt("-%s", analyzer)];
@ -64,10 +51,10 @@ event protocol_confirmation(c: connection, atype: count, aid: count) &priority=1
add c$service[analyzer];
}
event protocol_violation(c: connection, atype: count, aid: count,
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
reason: string) &priority=10
{
local analyzer = analyzer_name(atype);
local analyzer = Analyzer::name(atype);
# If the service hasn't been confirmed yet, don't generate a log message
# for the protocol violation.
if ( analyzer !in c$service )
@ -86,7 +73,7 @@ event protocol_violation(c: connection, atype: count, aid: count,
c$dpd = info;
}
event protocol_violation(c: connection, atype: count, aid: count, reason: string) &priority=5
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count, reason: string) &priority=5
{
if ( !c?$dpd || aid in c$dpd$disabled_aids )
return;
@ -100,7 +87,7 @@ event protocol_violation(c: connection, atype: count, aid: count, reason: string
add c$dpd$disabled_aids[aid];
}
event protocol_violation(c: connection, atype: count, aid: count,
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
reason: string) &priority=-5
{
if ( c?$dpd )


@ -1,7 +1,7 @@
##! An interface for driving the analysis of files, possibly independent of
##! any network protocol over which they're transported.
@load base/file_analysis.bif
@load base/bif/file_analysis.bif
@load base/frameworks/logging
module FileAnalysis;
@ -15,18 +15,20 @@ export {
## A structure which represents a desired type of file analysis.
type AnalyzerArgs: record {
## The type of analysis.
tag: Analyzer;
tag: FileAnalysis::Tag;
## The local filename to which to write an extracted file. Must be
## set when *tag* is :bro:see:`FileAnalysis::ANALYZER_EXTRACT`.
extract_filename: string &optional;
## An event which will be generated for all new file contents,
## chunk-wise.
## chunk-wise. Used when *tag* is
## :bro:see:`FileAnalysis::ANALYZER_DATA_EVENT`.
chunk_event: event(f: fa_file, data: string, off: count) &optional;
## An event which will be generated for all new file contents,
## stream-wise.
## stream-wise. Used when *tag* is
## :bro:see:`FileAnalysis::ANALYZER_DATA_EVENT`.
stream_event: event(f: fa_file, data: string) &optional;
} &redef;
@ -87,7 +89,7 @@ export {
conn_uids: set[string] &log;
## A set of analysis types done during the file analysis.
analyzers: set[Analyzer] &log;
analyzers: set[FileAnalysis::Tag];
## Local filenames of extracted files.
extracted_files: set[string] &log;
@ -104,7 +106,7 @@ export {
## A table that can be used to disable file analysis completely for
## any files transferred over given network protocol analyzers.
const disable: table[AnalyzerTag] of bool = table() &redef;
const disable: table[Analyzer::Tag] of bool = table() &redef;
## Event that can be handled to access the Info record as it is sent on
## to the logging framework.
@ -120,7 +122,9 @@ export {
## Sets the *timeout_interval* field of :bro:see:`fa_file`, which is
## used to determine the length of inactivity that is allowed for a file
## before internal state related to it is cleaned up.
## before internal state related to it is cleaned up. When used within a
## :bro:see:`file_timeout` handler, the analysis will delay timing out
## again for the period specified by *t*.
##
## f: the file.
##
@ -130,18 +134,6 @@ export {
## for the *id* isn't currently active.
global set_timeout_interval: function(f: fa_file, t: interval): bool;
## Postpones the timeout of file analysis for a given file.
## When used within a :bro:see:`file_timeout` handler, the analysis
## will delay timing out for the period of time indicated by
## the *timeout_interval* field of :bro:see:`fa_file`, which can be set
## with :bro:see:`FileAnalysis::set_timeout_interval`.
##
## f: the file.
##
## Returns: true if the timeout will be postponed, or false if analysis
## for the *id* isn't currently active.
global postpone_timeout: function(f: fa_file): bool;
## Adds an analyzer to the analysis of a given file.
##
## f: the file.
@ -171,58 +163,6 @@ export {
## rest of its contents, or false if analysis for the *id*
## isn't currently active.
global stop: function(f: fa_file): bool;
## Sends a sequential stream of data in for file analysis.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## data: bytestring contents of the file to analyze.
global data_stream: function(source: string, data: string);
## Sends a non-sequential chunk of data in for file analysis.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## data: bytestring contents of the file to analyze.
##
## offset: the offset within the file that this chunk starts.
global data_chunk: function(source: string, data: string, offset: count);
## Signals a content gap in the file bytestream.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## offset: the offset within the file that this gap starts.
##
## len: the number of bytes that are missing.
global gap: function(source: string, offset: count, len: count);
## Signals the total size of a file.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
##
## size: the number of bytes that comprise the full file.
global set_size: function(source: string, size: count);
## Signals the end of a file.
## Meant for use when providing external file analysis input (e.g.
## from the input framework).
##
## source: a string that uniquely identifies the logical file that the
## data is a part of and describes its source.
global eof: function(source: string);
}
redef record fa_file += {
@ -259,11 +199,6 @@ function set_timeout_interval(f: fa_file, t: interval): bool
return __set_timeout_interval(f$id, t);
}
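# Usage sketch (hypothetical values): give plain text files a longer
# inactivity timeout as soon as they are seen.
#
#     event file_new(f: fa_file)
#         {
#         if ( f?$mime_type && f$mime_type == "text/plain" )
#             FileAnalysis::set_timeout_interval(f, 5min);
#         }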
function postpone_timeout(f: fa_file): bool
{
return __postpone_timeout(f$id);
}
function add_analyzer(f: fa_file, args: AnalyzerArgs): bool
{
if ( ! __add_analyzer(f$id, args) ) return F;
@ -287,31 +222,6 @@ function stop(f: fa_file): bool
return __stop(f$id);
}
function data_stream(source: string, data: string)
{
__data_stream(source, data);
}
function data_chunk(source: string, data: string, offset: count)
{
__data_chunk(source, data, offset);
}
function gap(source: string, offset: count, len: count)
{
__gap(source, offset, len);
}
function set_size(source: string, size: count)
{
__set_size(source, size);
}
function eof(source: string)
{
__eof(source);
}
event bro_init() &priority=5
{
Log::create_stream(FileAnalysis::LOG,


@ -122,6 +122,34 @@ export {
config: table[string] of string &default=table();
};
## A file analysis input stream type used to forward input data to the
## file analysis framework.
type AnalysisDescription: record {
## String that allows the reader to find the source.
## For `READER_ASCII`, this is the filename.
source: string;
## Reader to use for this stream. Compatible readers must be
## able to accept a filter of a single string type (i.e.
## they read a byte stream).
reader: Reader &default=Input::READER_BINARY;
## Read mode to use for this stream.
mode: Mode &default=default_mode;
## Descriptive name that uniquely identifies the input source.
## Can be used to remove a stream at a later time.
## This will also be used for the unique *source* field of
## :bro:see:`fa_file`. Most of the time, the best choice for this
## field will be the same value as the *source* field.
name: string;
## A key/value table that will be passed on to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
## Create a new table input from a given source. Returns true on success.
##
## description: `TableDescription` record describing the source.
@ -132,6 +160,14 @@ export {
## description: `TableDescription` record describing the source.
global add_event: function(description: Input::EventDescription) : bool;
## Create a new file analysis input from a given source. Data read from
## the source is automatically forwarded to the file analysis framework.
##
## description: A record describing the source.
##
## Returns: true on success.
global add_analysis: function(description: Input::AnalysisDescription) : bool;
## Remove an input stream. Returns true on success and false if the named stream was
## not found.
##
@ -149,7 +185,7 @@ export {
global end_of_data: event(name: string, source:string);
}
@load base/input.bif
@load base/bif/input.bif
module Input;
@ -164,6 +200,11 @@ function add_event(description: Input::EventDescription) : bool
return __create_event_stream(description);
}
function add_analysis(description: Input::AnalysisDescription) : bool
{
return __create_analysis_stream(description);
}
function remove(id: string) : bool
{
return __remove_stream(id);


@ -6,4 +6,12 @@ export {
## Separator between input records.
## Please note that the separator has to be exactly one character long
const record_separator = "\n" &redef;
## Event that is called when a process created by the raw reader exits.
##
## name: name of the input stream
## source: source of the input stream
## exit_code: exit code of the program, or number of the signal that forced the program to exit
## signal_exit: false when program exited normally, true when program was forced to exit by a signal
global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool);
}
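# Handler sketch (assumes this reader's module name is InputRaw; the print
# is purely illustrative):
#
#     event InputRaw::process_finished(name: string, source: string,
#                                      exit_code: count, signal_exit: bool)
#         {
#         print fmt("raw input %s (%s) exited with code %d", name, source, exit_code);
#         }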


@ -195,7 +195,7 @@ export {
##
## Returns: True if a new stream was successfully removed.
##
## .. bro:see:: Log:create_stream
## .. bro:see:: Log::create_stream
global remove_stream: function(id: ID) : bool;
## Enables a previously disabled logging stream. Disabled streams
@ -366,7 +366,7 @@ export {
# We keep a script-level copy of all filters so that we can manipulate them.
global filters: table[ID, string] of Filter;
@load base/logging.bif # Needs Filter and Stream defined.
@load base/bif/logging.bif # Needs Filter and Stream defined.
module Log;


@ -431,9 +431,6 @@ hook Notice::notice(n: Notice::Info) &priority=-5
}
}
## This determines if a notice is being suppressed. It is only used
## internally as part of the mechanics for the global :bro:id:`NOTICE`
## function.
function is_being_suppressed(n: Notice::Info): bool
{
if ( n?$identifier && [n$note, n$identifier] in suppressing )


@ -1,2 +1,3 @@
@load ./utils
@load ./main
@load ./netstats


@ -1,10 +1,12 @@
##! This script supports how Bro sets its BPF capture filter. By default
##! Bro sets an unrestricted filter that allows all traffic. If a filter
##! Bro sets a capture filter that allows all traffic. If a filter
##! is set on the command line, that filter takes precedence over the default
##! open filter and all filters defined in Bro scripts with the
##! :bro:id:`capture_filters` and :bro:id:`restrict_filters` variables.
@load base/frameworks/notice
@load base/frameworks/analyzer
@load ./utils
module PacketFilter;
@ -14,11 +16,14 @@ export {
## Add notice types related to packet filter errors.
redef enum Notice::Type += {
## This notice is generated if a packet filter is unable to be compiled.
## This notice is generated if a packet filter cannot be compiled.
Compile_Failure,
## This notice is generated if a packet filter fails to install.
## Generated if a packet filter fails to install.
Install_Failure,
## Generated when a filter takes too long to compile.
Too_Long_To_Compile_Filter
};
## The record type defining columns to be logged in the packet filter
@ -42,83 +47,248 @@ export {
success: bool &log &default=T;
};
## By default, Bro will examine all packets. If this is set to false,
## it will dynamically build a BPF filter that selects only those protocols
## for which the user has loaded a corresponding analysis script.
## The latter used to be the default for Bro versions < 2.0, but that
## has since changed to enable port-independent protocol analysis.
const all_packets = T &redef;
## The BPF filter that is used by default to define what traffic should
## be captured. Filters defined in :bro:id:`restrict_filters` will still
## be applied to reduce the captured traffic.
const default_capture_filter = "ip or not ip" &redef;
## Filter string which is unconditionally or'ed to the beginning of every
## dynamically built filter.
const unrestricted_filter = "" &redef;
## Filter string which is unconditionally and'ed to the beginning of every
## dynamically built filter. This is mostly used when a custom filter is being
## used but MPLS or VLAN tags are on the traffic.
const restricted_filter = "" &redef;
## The maximum amount of time that you'd like to allow for BPF filters to compile.
## If this time is exceeded, compensation measures may be taken by the framework
## to reduce the filter size. This threshold being crossed also results in
## the :bro:see:`PacketFilter::Too_Long_To_Compile_Filter` notice.
const max_filter_compile_time = 100msec &redef;
## Install a BPF filter to exclude some traffic. The filter should positively
## match what is to be excluded; it will be wrapped in a "not".
##
## filter_id: An arbitrary string that can be used to identify
## the filter.
##
## filter: A BPF expression of traffic that should be excluded.
##
## Returns: A boolean value to indicate if the filter was successfully
## installed or not.
global exclude: function(filter_id: string, filter: string): bool;
## Install a temporary filter for traffic which should not be passed through
## the BPF filter. The filter should match the traffic you don't want
## to see (it will be wrapped in a "not" condition).
##
## filter_id: An arbitrary string that can be used to identify
## the filter.
##
## filter: A BPF expression of traffic that should be excluded.
##
## span: The duration for which this filter should be put in place.
##
## Returns: A boolean value to indicate if the filter was successfully
## installed or not.
global exclude_for: function(filter_id: string, filter: string, span: interval): bool;
## Call this function to build and install a new dynamically built
## packet filter.
global install: function();
global install: function(): bool;
## A data structure to represent filter generating plugins.
type FilterPlugin: record {
## A function that is directly called when generating the complete filter.
func : function();
};
## API function to register a new plugin for dynamic restriction filters.
global register_filter_plugin: function(fp: FilterPlugin);
## Enables the old filtering approach of "only watch common ports for
## analyzed protocols".
##
## Unless you know what you are doing, leave this set to F.
const enable_auto_protocol_capture_filters = F &redef;
## This is where the default packet filter is stored and it should not
## normally be modified by users.
global default_filter = "<not set yet>";
global current_filter = "<not set yet>";
}
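# Usage sketch (hypothetical host and duration): temporarily stop capturing
# traffic from a noisy backup server.
#
#     event bro_init()
#         {
#         PacketFilter::exclude_for("backup-host", "host 10.0.0.99", 1hr);
#         }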
global dynamic_restrict_filters: table[string] of string = {};
# Track if a filter is currently building so functions that would ultimately
# install a filter immediately can still be used but they won't try to build or
# install the filter.
global currently_building = F;
# Internal tracking of whether the filter being built has possibly been changed.
global filter_changed = F;
global filter_plugins: set[FilterPlugin] = {};
redef enum PcapFilterID += {
DefaultPcapFilter,
FilterTester,
};
function test_filter(filter: string): bool
{
if ( lfilter == "" && rfilter == "" )
return "";
else if ( lfilter == "" )
return rfilter;
else if ( rfilter == "" )
return lfilter;
else
return fmt("(%s) %s (%s)", lfilter, op, rfilter);
if ( ! precompile_pcap_filter(FilterTester, filter) )
{
# The given filter was invalid
# TODO: generate a notice.
return F;
}
return T;
}
# This tracks any changes for filtering mechanisms that play along nicely
# and set filter_changed to T.
event filter_change_tracking()
{
if ( filter_changed )
install();
schedule 5min { filter_change_tracking() };
}
event bro_init() &priority=5
{
Log::create_stream(PacketFilter::LOG, [$columns=Info]);
# Preverify the capture and restrict filters to give more granular failure messages.
for ( id in capture_filters )
{
if ( ! test_filter(capture_filters[id]) )
Reporter::fatal(fmt("Invalid capture_filter named '%s' - '%s'", id, capture_filters[id]));
}
for ( id in restrict_filters )
{
if ( ! test_filter(restrict_filters[id]) )
Reporter::fatal(fmt("Invalid restrict filter named '%s' - '%s'", id, restrict_filters[id]));
}
}
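For instance, hypothetical entries like the following are what these startup checks validate:

redef capture_filters += { ["dns-only"] = "port 53" };
redef restrict_filters += { ["not-ssh"] = "not port 22" };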
event bro_init() &priority=-5
{
install();
event filter_change_tracking();
}
function register_filter_plugin(fp: FilterPlugin)
{
add filter_plugins[fp];
}
event remove_dynamic_filter(filter_id: string)
{
if ( filter_id in dynamic_restrict_filters )
{
delete dynamic_restrict_filters[filter_id];
install();
}
}
function exclude(filter_id: string, filter: string): bool
{
if ( ! test_filter(filter) )
return F;
dynamic_restrict_filters[filter_id] = filter;
install();
return T;
}
function exclude_for(filter_id: string, filter: string, span: interval): bool
{
if ( exclude(filter_id, filter) )
{
schedule span { remove_dynamic_filter(filter_id) };
return T;
}
return F;
}
function build(): string
{
if ( cmd_line_bpf_filter != "" )
# Return what the user specified on the command line;
return cmd_line_bpf_filter;
currently_building = T;
# Build filter dynamically.
# Generate all of the plugin based filters.
for ( plugin in filter_plugins )
{
plugin$func();
}
# First the capture_filter.
local cfilter = "";
if ( |capture_filters| == 0 && ! enable_auto_protocol_capture_filters )
cfilter = default_capture_filter;
for ( id in capture_filters )
cfilter = combine_filters(cfilter, "or", capture_filters[id]);
if ( enable_auto_protocol_capture_filters )
cfilter = combine_filters(cfilter, "or", Analyzer::get_bpf());
# Apply the restriction filters.
local rfilter = "";
for ( id in restrict_filters )
rfilter = combine_filters(rfilter, "and", restrict_filters[id]);
# Apply the dynamic restriction filters.
for ( filt in dynamic_restrict_filters )
rfilter = combine_filters(rfilter, "and", string_cat("not (", dynamic_restrict_filters[filt], ")"));
# Finally, join them into one filter.
local filter = combine_filters(cfilter, "and", rfilter);
if ( unrestricted_filter != "" )
filter = combine_filters(unrestricted_filter, "or", filter);
if ( restricted_filter != "" )
filter = combine_filters(restricted_filter, "and", filter);
currently_building = F;
return filter;
}
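As a worked example (all filter values hypothetical): with one capture filter "port 53", one restrict filter "net 10.0.0.0/8", and one dynamic exclusion "host 10.0.0.5", build() returns:

(port 53) and ((net 10.0.0.0/8) and (not (host 10.0.0.5)))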
function install(): bool
{
if ( currently_building )
return F;
local tmp_filter = build();
# No need to proceed if the filter hasn't changed.
if ( tmp_filter == current_filter )
return F;
local ts = current_time();
if ( ! precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
{
NOTICE([$note=Compile_Failure,
$msg=fmt("Compiling packet filter failed"),
$sub=tmp_filter]);
if ( network_time() == 0.0 )
Reporter::fatal(fmt("Bad pcap filter '%s'", tmp_filter));
else
Reporter::warning(fmt("Bad pcap filter '%s'", tmp_filter));
}
local diff = current_time()-ts;
if ( diff > max_filter_compile_time )
NOTICE([$note=Too_Long_To_Compile_Filter,
$msg=fmt("A BPF filter is taking longer than %0.1f seconds to compile", diff)]);
# Set it to the current filter if it passed precompiling
current_filter = tmp_filter;
# Do an audit log for the packet filter.
local info: Info;
@ -129,7 +299,7 @@ function install()
info$ts = current_time();
info$init = T;
}
info$filter = current_filter;
if ( ! install_pcap_filter(DefaultPcapFilter) )
{
@ -137,15 +307,13 @@ function install()
info$success = F;
NOTICE([$note=Install_Failure,
$msg=fmt("Installing packet filter failed"),
$sub=current_filter]);
}
if ( reading_live_traffic() || reading_traces() )
Log::write(PacketFilter::LOG, info);
}
# Update the filter change tracking
filter_changed = F;
return T;
}

@ -13,7 +13,7 @@ export {
};
## This is the interval between individual statistics collection.
const stats_collection_interval = 5min;
}
event net_stats_update(last_stat: NetStats)

@ -0,0 +1,58 @@
module PacketFilter;
export {
## Takes a :bro:type:`port` and returns a BPF expression which will
## match the port.
##
## p: The port.
##
## Returns: A valid BPF filter string for matching the port.
global port_to_bpf: function(p: port): string;
## Create a BPF filter to sample IPv4 and IPv6 traffic.
##
## num_parts: The number of parts the traffic should be split into.
##
## this_part: The part of the traffic this filter will accept. 0-based.
global sampling_filter: function(num_parts: count, this_part: count): string;
## Combines two valid BPF filter strings with a string based operator
## to form a new filter.
##
## lfilter: Filter which will go on the left side.
##
## op: Operation being applied (typically "or" or "and").
##
## rfilter: Filter which will go on the right side.
##
## Returns: A new string representing the two filters combined with
## the operator. Either filter being an empty string will
## still result in a valid filter.
global combine_filters: function(lfilter: string, op: string, rfilter: string): string;
}
function port_to_bpf(p: port): string
{
local tp = get_port_transport_proto(p);
return cat(tp, " and ", fmt("port %d", p));
}
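For example (illustrative calls):

# port_to_bpf(53/udp)  returns "udp and port 53"
# port_to_bpf(443/tcp) returns "tcp and port 443"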
function combine_filters(lfilter: string, op: string, rfilter: string): string
{
if ( lfilter == "" && rfilter == "" )
return "";
else if ( lfilter == "" )
return rfilter;
else if ( rfilter == "" )
return lfilter;
else
return fmt("(%s) %s (%s)", lfilter, op, rfilter);
}
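Illustrative behavior, including the empty-operand pass-through:

# combine_filters("port 53", "or", "port 5353") returns "(port 53) or (port 5353)"
# combine_filters("", "and", "port 53")         returns "port 53"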
function sampling_filter(num_parts: count, this_part: count): string
{
local v4_filter = fmt("ip and ((ip[14:2]+ip[18:2]) - (%d*((ip[14:2]+ip[18:2])/%d)) == %d)", num_parts, num_parts, this_part);
# TODO: this is probably a fairly suboptimal filter, but it should work for now.
local v6_filter = fmt("ip6 and ((ip6[22:2]+ip6[38:2]) - (%d*((ip6[22:2]+ip6[38:2])/%d)) == %d)", num_parts, num_parts, this_part);
return combine_filters(v4_filter, "or", v6_filter);
}
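Classic BPF has no modulus operator, so x mod n is expressed as x - n*(x/n); ip[14:2] and ip[18:2] are the low 16 bits of the IPv4 source and destination addresses, which keeps both directions of a host pair in the same part. As a hypothetical two-way split, sampling_filter(2, 0) produces this IPv4 component:

ip and ((ip[14:2]+ip[18:2]) - (2*((ip[14:2]+ip[18:2])/2)) == 0)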

@ -9,7 +9,7 @@
##! Note that this framework deals with the handling of internally generated
##! reporter messages; for the interface into actually creating reporter
##! messages from the scripting layer, use the built-in functions in
##! :doc:`/scripts/base/bif/reporter.bif`.
module Reporter;

@ -99,7 +99,7 @@ export {
reducers: set[Reducer];
## Provide a function to calculate a value from the
## :bro:see:`SumStats::Result` structure which will be used
## for thresholding.
## This is required if a $threshold value is given.
threshold_val: function(key: SumStats::Key, result: SumStats::Result): count &optional;

@ -16,7 +16,8 @@ export {
redef record ResultVal += {
## This is the queue where elements are maintained. Use the
## :bro:see:`SumStats::get_last` function to get a vector of
## the current element values.
last_elements: Queue::Queue &optional;
};

@ -83,19 +83,17 @@ export {
}
const ayiya_ports = { 5072/udp };
const teredo_ports = { 3544/udp };
const gtpv1_ports = { 2152/udp, 2123/udp };
redef likely_server_ports += { ayiya_ports, teredo_ports, gtpv1_ports };
event bro_init() &priority=5
{
Log::create_stream(Tunnel::LOG, [$columns=Info]);
Analyzer::register_for_ports(Analyzer::ANALYZER_AYIYA, ayiya_ports);
Analyzer::register_for_ports(Analyzer::ANALYZER_TEREDO, teredo_ports);
Analyzer::register_for_ports(Analyzer::ANALYZER_GTPV1, gtpv1_ports);
}
function register_all(ecv: EncapsulatingConnVector)

@ -1,5 +1,5 @@
@load base/bif/const.bif.bro
@load base/bif/types.bif
# Type declarations
@ -222,17 +222,6 @@ type endpoint_stats: record {
endian_type: count;
};
## A unique analyzer instance ID. Each time Bro instantiates a protocol
## analyzer for a connection, it assigns it a unique ID that can be used
## to reference that instance.
##
## .. bro:see:: analyzer_name disable_analyzer protocol_confirmation
## protocol_violation
##
## .. todo::While we declare an alias for the type here, the events/functions still
## use ``count``. That should be changed.
type AnalyzerID: count;
module Tunnel;
export {
## Records the identity of an encapsulating parent of a tunneled connection.
@ -713,9 +702,10 @@ type entropy_test_result: record {
};
# Prototypes of Bro built-in functions.
@load base/bif/strings.bif
@load base/bif/bro.bif
@load base/bif/reporter.bif
@load base/bif/bloom-filter.bif
## Deprecated. This is superseded by the new logging framework.
global log_file_name: function(tag: string): string &redef;
@ -777,19 +767,6 @@ global signature_files = "" &add_func = add_signature_file;
## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``.
const passive_fingerprint_file = "base/misc/p0f.fp" &redef;
# todo::testing to see if I can remove these without causing problems.
#const ftp = 21/tcp;
#const ssh = 22/tcp;
#const telnet = 23/tcp;
#const smtp = 25/tcp;
#const domain = 53/tcp; # note, doesn't include UDP version
#const gopher = 70/tcp;
#const finger = 79/tcp;
#const http = 80/tcp;
#const ident = 113/tcp;
#const bgp = 179/tcp;
#const rlogin = 513/tcp;
# TCP values for :bro:see:`endpoint` *state* field.
# todo::these should go into an enum to make them autodoc'able.
const TCP_INACTIVE = 0; ##< Endpoint is still inactive.
@ -2798,7 +2775,7 @@ export {
}
module GLOBAL;
@load base/bif/event.bif
## BPF filter the user has set via the -f command line options. Empty if none.
const cmd_line_bpf_filter = "" &redef;
@ -2988,34 +2965,11 @@ const remote_trace_sync_peers = 0 &redef;
## consistency check.
const remote_check_sync_consistency = F &redef;
## Analyzer tags. The core automatically defines constants
## ``ANALYZER_<analyzer-name>*``, e.g., ``ANALYZER_HTTP``.
##
## .. bro:see:: dpd_config
##
## .. todo::We should autodoc these automatically generated constants.
type AnalyzerTag: count;
## Set of ports activating a particular protocol analysis.
##
## .. bro:see:: dpd_config
type dpd_protocol_config: record {
ports: set[port] &optional; ##< Set of ports.
};
## Port configuration for Bro's "dynamic protocol detection". Protocol
## analyzers can be activated via either well-known ports or content analysis.
## This table defines the ports.
##
## .. bro:see:: dpd_reassemble_first_packets dpd_buffer_size
## dpd_match_only_beginning dpd_ignore_ports
const dpd_config: table[AnalyzerTag] of dpd_protocol_config = {} &redef;
## Reassemble the beginning of all TCP connections before doing
## signature-matching. Enabling this provides more accurate matching at the
## expense of CPU cycles.
##
## .. bro:see:: dpd_buffer_size
## dpd_match_only_beginning dpd_ignore_ports
##
## .. note:: Despite the name, this option affects *all* signature matching, not
@ -3030,24 +2984,24 @@ const dpd_reassemble_first_packets = T &redef;
## activated afterwards. Then only analyzers that can deal with partial
## connections will be able to analyze the session.
##
## .. bro:see:: dpd_reassemble_first_packets dpd_match_only_beginning
## dpd_ignore_ports
const dpd_buffer_size = 1024 &redef;
## If true, stops signature matching if dpd_buffer_size has been reached.
##
## .. bro:see:: dpd_reassemble_first_packets dpd_buffer_size
## dpd_ignore_ports
##
## .. note:: Despite the name, this option affects *all* signature matching, not
## only signatures used for dynamic protocol detection.
const dpd_match_only_beginning = T &redef;
## If true, don't consider any ports for deciding which protocol analyzer to
## use.
##
## .. bro:see:: dpd_reassemble_first_packets dpd_buffer_size
## dpd_match_only_beginning
const dpd_ignore_ports = F &redef;
## Ports which the core considers being likely used by servers. For ports in
@ -3055,13 +3009,6 @@ const dpd_ignore_ports = F &redef;
## connection if it misses the initial handshake.
const likely_server_ports: set[port] &redef;
## Deprecated. Set of all ports for which we know an analyzer, built by
## :doc:`/scripts/base/frameworks/dpd/main`.
##
## .. todo::This should be defined by :doc:`/scripts/base/frameworks/dpd/main`
## itself if we still need it.
global dpd_analyzer_ports: table[port] of set[AnalyzerTag];
## Per-incident timer managers are drained after this amount of inactivity.
const timer_mgr_inactivity_timeout = 1 min &redef;
@ -3170,10 +3117,14 @@ module GLOBAL;
## Number of bytes per packet to capture from live interfaces.
const snaplen = 8192 &redef;
# Load BiFs defined by plugins.
@load base/bif/plugins
# Load these frameworks here because they use fairly deep integration with
# BiFs and script-land defined types.
@load base/frameworks/logging
@load base/frameworks/input
@load base/frameworks/analyzer
@load base/frameworks/file-analysis
@load base/bif

@ -22,6 +22,7 @@
# loaded in base/init-bare.bro
#@load base/frameworks/logging
@load base/frameworks/notice
@load base/frameworks/analyzer
@load base/frameworks/dpd
@load base/frameworks/signatures
@load base/frameworks/packet-filter
@ -40,11 +41,13 @@
@load base/protocols/http
@load base/protocols/irc
@load base/protocols/modbus
@load base/protocols/pop3
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@load base/protocols/ssl
@load base/protocols/syslog
@load base/protocols/tunnels
@load base/files/pe

@ -6,9 +6,9 @@ module Conn;
export {
## Define inactivity timeouts by the service detected being used over
## the connection.
const analyzer_inactivity_timeouts: table[Analyzer::Tag] of interval = {
# For interactive services, allow longer periods of inactivity.
[[Analyzer::ANALYZER_SSH, Analyzer::ANALYZER_FTP]] = 1 hrs,
} &redef;
## Define inactivity timeouts based on common protocol ports.
@ -18,7 +18,7 @@ export {
}
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count)
{
if ( atype in analyzer_inactivity_timeouts )
set_inactivity_timeout(c$id, analyzer_inactivity_timeouts[atype]);

@ -1,6 +1,7 @@
##! Base DNS analysis script which tracks and logs DNS queries along with
##! their responses.
@load base/utils/queue
@load ./consts
module DNS;
@ -73,19 +74,6 @@ export {
total_replies: count &optional;
};
## A record type which tracks the status of DNS queries for a given
## :bro:type:`connection`.
type State: record {
## Indexed by query id, returns Info record corresponding to
## query/response which haven't completed yet.
pending: table[count] of Info &optional;
## This is the list of DNS responses that have completed based on the
## number of responses declared and the number received. The contents
## of the set are transaction IDs.
finished_answers: set[count] &optional;
};
## An event that can be handled to access the :bro:type:`DNS::Info`
## record as it is sent to the logging framework.
global log_dns: event(rec: Info);
@ -102,46 +90,49 @@ export {
##
## reply: The specific response information according to RR type/class.
global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
## A hook that is called whenever a session is being set.
## This can be used if additional initialization logic needs to happen
## when creating a new session value.
##
## c: The connection involved in the new session
##
## msg: The DNS message header information.
##
## is_query: Indicator of whether this is being called for a query or a response.
global set_session: hook(c: connection, msg: dns_msg, is_query: bool);
## A record type which tracks the status of DNS queries for a given
## :bro:type:`connection`.
type State: record {
## Indexed by query id, returns Info record corresponding to
## query/response which haven't completed yet.
pending: table[count] of Queue::Queue;
## This is the list of DNS responses that have completed based on the
## number of responses declared and the number received. The contents
## of the set are transaction IDs.
finished_answers: set[count];
};
}
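As a hypothetical sketch, another script could handle the set_session hook declared above to run additional per-session logic (the print statement is purely illustrative):

hook DNS::set_session(c: connection, msg: dns_msg, is_query: bool)
{
if ( is_query )
print fmt("DNS query on %s, trans_id=%d", id_string(c$id), msg$id);
}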
redef record connection += {
dns: Info &optional;
dns_state: State &optional;
};
const ports = { 53/udp, 53/tcp, 137/udp, 5353/udp, 5355/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(DNS::LOG, [$columns=Info, $ev=log_dns]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DNS, ports);
}
function new_session(c: connection, trans_id: count): Info
{
local info: Info;
info$ts = network_time();
info$id = c$id;
@ -151,18 +142,37 @@ function new_session(c: connection, trans_id: count): Info
return info;
}
hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
{
if ( ! c?$dns_state )
{
local state: State;
c$dns_state = state;
}
if ( msg$id !in c$dns_state$pending )
c$dns_state$pending[msg$id] = Queue::init();
local info: Info;
# If this is a query, or it is a reply but no Info records are in
# the queue (we missed the query?), we need to create an Info record
# and put it in the queue.
if ( is_query ||
Queue::len(c$dns_state$pending[msg$id]) == 0 )
{
info = new_session(c, msg$id);
Queue::put(c$dns_state$pending[msg$id], info);
}
if ( is_query )
# If this is a query, assign the newly created info variable
# so that the world looks correct to anything else handling
# this query.
c$dns = info;
else
# Peek at the next item in the queue for this trans_id and
# assign it to c$dns since this is a response.
c$dns = Queue::peek(c$dns_state$pending[msg$id]);
if ( ! is_query )
{
@ -190,19 +200,21 @@ function set_session(c: connection, msg: dns_msg, is_query: bool)
event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5
{
hook set_session(c, msg, is_orig);
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
{
if ( ans$answer_type == DNS_ANS )
{
if ( ! c?$dns )
{
event conn_weird("dns_unmatched_reply", c, "");
hook set_session(c, msg, F);
}
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;
if ( msg$id in c$dns_state$finished_answers )
event conn_weird("dns_reply_seen_after_done", c, "");
if ( reply != "" )
{
if ( ! c$dns?$answers )
@ -217,7 +229,6 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
if ( c$dns?$answers && c$dns?$total_answers &&
|c$dns$answers| == c$dns$total_answers )
{
add c$dns_state$finished_answers[c$dns$trans_id];
# Indicate this request/reply pair is ready to be logged.
c$dns$ready = T;
}
@ -230,7 +241,7 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
{
Log::write(DNS::LOG, c$dns);
# This record is logged and no longer pending.
delete c$dns_state$pending[c$dns$trans_id];
Queue::get(c$dns_state$pending[c$dns$trans_id]);
delete c$dns;
}
}
@ -243,6 +254,7 @@ event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qcla
c$dns$qclass_name = classes[qclass];
c$dns$qtype = qtype;
c$dns$qtype_name = query_types[qtype];
c$dns$Z = msg$Z;
# Decode netbios name queries
# Note: I'm ignoring the name type for now. Not sure if this should be
@ -250,8 +262,6 @@ event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qcla
if ( c$id$resp_p == 137/udp )
query = decode_netbios_name(query);
c$dns$query = query;
}
event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
@ -339,6 +349,13 @@ event connection_state_remove(c: connection) &priority=-5
# If Bro is expiring state, we should go ahead and log all unlogged
# request/response pairs now.
for ( trans_id in c$dns_state$pending )
{
local infos: vector of Info;
Queue::get_vector(c$dns_state$pending[trans_id], infos);
for ( i in infos )
{
Log::write(DNS::LOG, infos[i]);
}
}
}

@ -3,3 +3,5 @@
@load ./file-analysis
@load ./file-extract
@load ./gridftp
@load-sigs ./dpd.sig

@ -0,0 +1,15 @@
signature dpd_ftp_client {
ip-proto == tcp
payload /(|.*[\n\r]) *[uU][sS][eE][rR] /
tcp-state originator
}
# Match for server greeting (220, 120) and for login or passwd
# required (230, 331).
signature dpd_ftp_server {
ip-proto == tcp
payload /[\n\r ]*(120|220)[^0-9].*[\n\r] *(230|331)[^0-9]/
tcp-state responder
requires-reverse-signature dpd_ftp_client
enable "ftp"
}

@ -11,7 +11,7 @@ export {
function get_handle_string(c: connection): string
{
return cat(Analyzer::ANALYZER_FTP_DATA, " ", c$start_time, " ", id_string(c$id));
}
function get_file_handle(c: connection, is_orig: bool): string
@ -40,8 +40,9 @@ function get_file_handle(c: connection, is_orig: bool): string
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_FTP_DATA ) return;
set_file_handle(FTP::get_file_handle(c, is_orig));
}

@ -13,8 +13,6 @@ export {
const extraction_prefix = "ftp-item" &redef;
}
redef record Info += {
## On disk file where it was extracted to.
extraction_file: string &log &optional;
@ -26,8 +24,7 @@ redef record Info += {
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}

@ -110,21 +110,18 @@ redef record connection += {
ftp_data_reuse: bool &default=F;
};
const ports = { 21/tcp, 2811/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(FTP::LOG, [$columns=Info, $ev=log_ftp]);
Analyzer::register_for_ports(Analyzer::ANALYZER_FTP, ports);
}
# Establish the variable for tracking expected connections.
global ftp_data_expected: table[addr, port] of Info &read_expire=5mins;
## A set of commands where the argument can be expected to refer
## to a file or directory.
const file_cmds = {
@ -221,7 +218,7 @@ function add_expected_data_channel(s: Info, chan: ExpectedDataChannel)
s$passive = chan$passive;
s$data_channel = chan;
ftp_data_expected[chan$resp_h, chan$resp_p] = s;
Analyzer::schedule_analyzer(chan$orig_h, chan$resp_h, chan$resp_p, Analyzer::ANALYZER_FTP_DATA,
5mins);
}
@ -338,7 +335,7 @@ event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) &prior
}
}
event scheduled_analyzer_applied(c: connection, a: Analyzer::Tag) &priority=10
{
local id = c$id;
if ( [id$resp_h, id$resp_p] in ftp_data_expected )

@ -4,3 +4,5 @@
@load ./file-ident
@load ./file-hash
@load ./file-extract
@load-sigs ./dpd.sig

@ -0,0 +1,13 @@
signature dpd_http_client {
ip-proto == tcp
payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
tcp-state originator
}
signature dpd_http_server {
ip-proto == tcp
payload /^HTTP\/[0-9]/
tcp-state responder
requires-reverse-signature dpd_http_client
enable "http"
}

@ -6,26 +6,49 @@
module HTTP;
export {
redef record HTTP::Info += {
## Number of MIME entities in the HTTP request message body so far.
request_mime_level: count &default=0;
## Number of MIME entities in the HTTP response message body so far.
response_mime_level: count &default=0;
};
## Default file handle provider for HTTP.
global get_file_handle: function(c: connection, is_orig: bool): string;
}
event http_begin_entity(c: connection, is_orig: bool) &priority=5
{
if ( ! c?$http )
return;
if ( is_orig )
++c$http$request_mime_level;
else
++c$http$response_mime_level;
}
function get_file_handle(c: connection, is_orig: bool): string
{
if ( ! c?$http ) return "";
local mime_level: count =
is_orig ? c$http$request_mime_level : c$http$response_mime_level;
local mime_level_str: string = mime_level > 1 ? cat(mime_level) : "";
if ( c$http$range_request )
return cat(Analyzer::ANALYZER_HTTP, " ", is_orig, " ", c$id$orig_h, " ",
build_url(c$http));
return cat(Analyzer::ANALYZER_HTTP, " ", c$start_time, " ", is_orig, " ",
c$http$trans_depth, mime_level_str, " ", id_string(c$id));
}
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_HTTP ) return;
set_file_handle(HTTP::get_file_handle(c, is_orig));
}

@ -14,8 +14,11 @@ export {
const extraction_prefix = "http-item" &redef;
redef record Info += {
## On-disk location where files in request body were extracted.
extracted_request_files: vector of string &log &optional;
## On-disk location where files in response body were extracted.
extracted_response_files: vector of string &log &optional;
## Indicates if the response body is to be extracted or not. Must be
## set before or by the first :bro:see:`file_new` for the file content.
@ -23,15 +26,28 @@ export {
};
}
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}
function add_extraction_file(c: connection, is_orig: bool, fn: string)
{
if ( is_orig )
{
if ( ! c$http?$extracted_request_files )
c$http$extracted_request_files = vector();
c$http$extracted_request_files[|c$http$extracted_request_files|] = fn;
}
else
{
if ( ! c$http?$extracted_response_files )
c$http$extracted_response_files = vector();
c$http$extracted_response_files[|c$http$extracted_response_files|] = fn;
}
}
event file_new(f: fa_file) &priority=5
{
if ( ! f?$source ) return;
@ -51,7 +67,7 @@ event file_new(f: fa_file) &priority=5
{
c = f$conns[cid];
if ( ! c?$http ) next;
add_extraction_file(c, f$is_orig, fname);
}
return;
@ -79,6 +95,6 @@ event file_new(f: fa_file) &priority=5
{
c = f$conns[cid];
if ( ! c?$http ) next;
add_extraction_file(c, f$is_orig, fname);
}
}

@ -123,28 +123,18 @@ redef record connection += {
http_state: State &optional;
};
const ports = {
80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3128/tcp,
8000/tcp, 8080/tcp, 8888/tcp,
};
redef likely_server_ports += { ports };
# Initialize the HTTP logging stream and ports.
event bro_init() &priority=5
{
Log::create_stream(HTTP::LOG, [$columns=Info, $ev=log_http]);
Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, ports);
}
function code_in_range(c: count, min: count, max: count) : bool
{

@ -1,3 +1,5 @@
@load ./main
@load ./dcc-send
@load ./file-analysis
@load-sigs ./dpd.sig

@ -39,8 +39,6 @@ export {
global dcc_expected_transfers: table[addr, port] of Info &read_expire=5mins;
function set_dcc_mime(f: fa_file)
{
if ( ! f?$conns ) return;
@ -75,8 +73,7 @@ function set_dcc_extraction_file(f: fa_file, filename: string)
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}
@ -175,11 +172,11 @@ event irc_dcc_message(c: connection, is_orig: bool,
c$irc$dcc_file_name = argument;
c$irc$dcc_file_size = size;
local p = count_to_port(dest_port, tcp);
expect_connection(to_addr("0.0.0.0"), address, p, ANALYZER_IRC_DATA, 5 min);
Analyzer::schedule_analyzer(0.0.0.0, address, p, Analyzer::ANALYZER_IRC_DATA, 5 min);
dcc_expected_transfers[address, p] = c$irc;
}
event expected_connection_seen(c: connection, a: Analyzer::Tag) &priority=10
{
local id = c$id;
if ( [id$resp_h, id$resp_p] in dcc_expected_transfers )
@ -188,5 +185,6 @@ event expected_connection_seen(c: connection, a: count) &priority=10
event connection_state_remove(c: connection) &priority=-5
{
if ( [c$id$resp_h, c$id$resp_p] in dcc_expected_transfers )
delete dcc_expected_transfers[c$id$resp_h, c$id$resp_p];
}

@ -0,0 +1,33 @@
signature irc_client1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Uu][Ss][Ee][Rr] +.+[\n\r]+ *[Nn][Ii][Cc][Kk] +.*[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_client2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Nn][Ii][Cc][Kk] +.+[\r\n]+ *[Uu][Ss][Ee][Rr] +.+[\r\n]/
requires-reverse-signature irc_server_reply
tcp-state originator
enable "irc"
}
signature irc_server_reply {
ip-proto == tcp
payload /^(|.*[\n\r])(:[^ \n\r]+ )?[0-9][0-9][0-9] /
tcp-state responder
}
signature irc_server_to_server1 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
}
signature irc_server_to_server2 {
ip-proto == tcp
payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/
requires-reverse-signature irc_server_to_server1
enable "irc"
}

@ -12,13 +12,14 @@ export {
function get_file_handle(c: connection, is_orig: bool): string
{
if ( is_orig ) return "";
return cat(Analyzer::ANALYZER_IRC_DATA, " ", c$start_time, " ", id_string(c$id));
}
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_IRC_DATA ) return;
set_file_handle(IRC::get_file_handle(c, is_orig));
}

@ -38,21 +38,13 @@ redef record connection += {
irc: Info &optional;
};
const ports = { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(IRC::LOG, [$columns=Info, $ev=irc_log]);
Analyzer::register_for_ports(Analyzer::ANALYZER_IRC, ports);
}
function new_session(c: connection): Info

@ -29,14 +29,13 @@ redef record connection += {
modbus: Info &optional;
};
const ports = { 502/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(Modbus::LOG, [$columns=Info, $ev=log_modbus]);
Analyzer::register_for_ports(Analyzer::ANALYZER_MODBUS, ports);
}
event modbus_message(c: connection, headers: ModbusHeaders, is_orig: bool) &priority=5

@ -0,0 +1,2 @@
@load-sigs ./dpd.sig

@ -0,0 +1,13 @@
signature dpd_pop3_server {
ip-proto == tcp
payload /^\+OK/
requires-reverse-signature dpd_pop3_client
enable "pop3"
tcp-state responder
}
signature dpd_pop3_client {
ip-proto == tcp
payload /(|.*[\r\n])[[:space:]]*([uU][sS][eE][rR][[:space:]]|[aA][pP][oO][pP][[:space:]]|[cC][aA][pP][aA]|[aA][uU][tT][hH])/
tcp-state originator
}

@ -2,3 +2,5 @@
@load ./entities
@load ./entities-excerpt
@load ./file-analysis
@load-sigs ./dpd.sig

@ -0,0 +1,13 @@
signature dpd_smtp_client {
ip-proto == tcp
payload /(|.*[\n\r])[[:space:]]*([hH][eE][lL][oO]|[eE][hH][lL][oO])/
requires-reverse-signature dpd_smtp_server
enable "smtp"
tcp-state originator
}
signature dpd_smtp_server {
ip-proto == tcp
payload /^[[:space:]]*220[[:space:]-]/
tcp-state responder
}

@ -66,8 +66,6 @@ export {
global log_mime: event(rec: EntityInfo);
}
event bro_init() &priority=5
{
Log::create_stream(SMTP::ENTITIES_LOG, [$columns=EntityInfo, $ev=log_mime]);
@ -90,8 +88,7 @@ function set_session(c: connection, new_entity: bool)
function get_extraction_name(f: fa_file): string
{
local r = fmt("%s-%s-%d.dat", extraction_prefix, f$id, extract_count);
++extract_count;
local r = fmt("%s-%s.dat", extraction_prefix, f$id);
return r;
}
@ -127,7 +124,6 @@ event file_new(f: fa_file) &priority=5
[$tag=FileAnalysis::ANALYZER_EXTRACT,
$extract_filename=fname]);
extracting = T;
}
c$smtp$current_entity$extraction_file = fname;

@ -13,14 +13,15 @@ export {
function get_file_handle(c: connection, is_orig: bool): string
{
if ( ! c?$smtp ) return "";
return cat(Analyzer::ANALYZER_SMTP, " ", c$start_time, " ", c$smtp$trans_depth, " ",
c$smtp_state$mime_level);
}
module GLOBAL;
event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
&priority=5
{
if ( tag != Analyzer::ANALYZER_SMTP ) return;
set_file_handle(SMTP::get_file_handle(c, is_orig));
}

@ -74,9 +74,6 @@ export {
const mail_path_capture = ALL_HOSTS &redef;
global log_smtp: event(rec: Info);
}
redef record connection += {
@ -84,15 +81,13 @@ redef record connection += {
smtp_state: State &optional;
};
const ports = { 25/tcp, 587/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SMTP::LOG, [$columns=SMTP::Info, $ev=log_smtp]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SMTP, ports);
}
function find_address_in_smtp_header(header: string): string

@ -1,2 +1,4 @@
@load ./consts
@load ./main
@load-sigs ./dpd.sig

@ -0,0 +1,48 @@
signature dpd_socks4_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state originator
}
signature dpd_socks4_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state responder
enable "socks"
}
signature dpd_socks4_reverse_client {
ip-proto == tcp
# '32' is a rather arbitrary max length for the user name.
payload /^\x04[\x01\x02].{0,32}\x00/
tcp-state responder
}
signature dpd_socks4_reverse_server {
ip-proto == tcp
requires-reverse-signature dpd_socks4_reverse_client
payload /^\x00[\x5a\x5b\x5c\x5d]/
tcp-state originator
enable "socks"
}
signature dpd_socks5_client {
ip-proto == tcp
# Watch for a few authentication methods to reduce false positives.
payload /^\x05.[\x00\x01\x02]/
tcp-state originator
}
signature dpd_socks5_server {
ip-proto == tcp
requires-reverse-signature dpd_socks5_client
# Watch for a single authentication method to be chosen by the server or
# the server to indicate that no authentication is required.
payload /^\x05(\x00|\x01[\x00\x01\x02])/
tcp-state responder
enable "socks"
}

@ -34,20 +34,19 @@ export {
global log_socks: event(rec: Info);
}
const ports = { 1080/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SOCKS::LOG, [$columns=Info, $ev=log_socks]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SOCKS, ports);
}
redef record connection += {
socks: SOCKS::Info &optional;
};
function set_session(c: connection, version: count)
{
if ( ! c?$socks )

@ -1 +1,3 @@
@load ./main
@load-sigs ./dpd.sig

@ -0,0 +1,13 @@
signature dpd_ssh_client {
ip-proto == tcp
payload /^[sS][sS][hH]-/
requires-reverse-signature dpd_ssh_server
enable "ssh"
tcp-state originator
}
signature dpd_ssh_server {
ip-proto == tcp
payload /^[sS][sS][hH]-/
tcp-state responder
}

@ -70,19 +70,17 @@ export {
global log_ssh: event(rec: Info);
}
redef record connection += {
ssh: Info &optional;
};
const ports = { 22/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SSH::LOG, [$columns=Info, $ev=log_ssh]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SSH, ports);
}
function set_session(c: connection)
@ -116,7 +114,7 @@ function check_ssh_connection(c: connection, done: bool)
# Responder must have sent fewer than 40 packets.
c$resp$num_pkts < 40 &&
# If there was a content gap we can't reliably do this heuristic.
c?$conn && c$conn$missed_bytes == 0)# &&
c?$conn && c$conn$missed_bytes == 0 )# &&
# Only "normal" connections can count.
#c$conn?$conn_state && c$conn$conn_state in valid_states )
{
@ -176,6 +174,7 @@ event ssh_watcher(c: connection)
if ( ! connection_exists(id) )
return;
lookup_connection(c$id);
check_ssh_connection(c, F);
if ( ! c$ssh$done )
schedule +15secs { ssh_watcher(c) };

@ -1,3 +1,5 @@
@load ./consts
@load ./main
@load ./mozilla-ca-list
@load-sigs ./dpd.sig

@ -0,0 +1,15 @@
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
}
signature dpd_ssl_client {
ip-proto == tcp
# Client hello.
payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
tcp-state originator
}

@ -94,46 +94,17 @@ redef record Info += {
delay_tokens: set[string] &optional;
};
const ports = {
443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp,
989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp
};
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SSL::LOG, [$columns=Info, $ev=log_ssl]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SSL, ports);
}
function set_session(c: connection)
{
@ -144,26 +115,17 @@ function set_session(c: connection)
function delay_log(info: Info, token: string)
{
if ( ! info?$delay_tokens )
info$delay_tokens = set();
add info$delay_tokens[token];
}
function undelay_log(info: Info, token: string)
{
if ( info?$delay_tokens && token in info$delay_tokens )
delete info$delay_tokens[token];
}
global log_record: function(info: Info);
function log_record(info: Info)
{
if ( ! info?$delay_tokens || |info$delay_tokens| == 0 )
@ -172,26 +134,14 @@ function log_record(info: Info)
}
else
{
when ( |info$delay_tokens| == 0 )
{
log_record(info);
}
timeout SSL::max_log_delay
{
Reporter::info(fmt("SSL delay tokens not released in time (%s)",
info$delay_tokens));
}
}
Log::write(SSL::LOG, log_delay_queue[log_delay_queue_tail]);
delete log_delay_queue[log_delay_queue_tail];
++log_delay_queue_tail;
Reporter::info(fmt("SSL delay tokens not released in time (%s tokens remaining)",
|info$delay_tokens|));
}
}
}
@ -288,28 +238,16 @@ event ssl_established(c: connection) &priority=-5
finish(c);
}
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
{
# Check for the existence of the c$ssl record.
if ( c?$ssl && analyzer_name(atype) == "SSL" )
if ( c?$ssl && atype == Analyzer::ANALYZER_SSL )
c$ssl$analyzer_id = aid;
}
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
reason: string) &priority=5
{
if ( c?$ssl )
finish(c);
}

@ -26,19 +26,17 @@ export {
};
}
redef capture_filters += { ["syslog"] = "port 514" };
const ports = { 514/udp } &redef;
redef dpd_config += { [ANALYZER_SYSLOG_BINPAC] = [$ports = ports] };
redef likely_server_ports += { 514/udp };
redef record connection += {
syslog: Info &optional;
};
const ports = { 514/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(Syslog::LOG, [$columns=Info]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SYSLOG, ports);
}
event syslog_message(c: connection, facility: count, severity: count, msg: string) &priority=5

@ -0,0 +1 @@
@load-sigs ./dpd.sig

@ -0,0 +1,14 @@
# Provide DPD signatures for tunneling protocols that otherwise
# wouldn't be detected at all.
signature dpd_ayiya {
ip-proto == udp
payload /^..\x11\x29/
enable "ayiya"
}
signature dpd_teredo {
ip-proto == udp
payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/
enable "teredo"
}

@ -16,25 +16,32 @@ export {
## Initialize a queue record structure.
##
## s: A record which configures the queue.
##
## Returns: An opaque queue record.
global init: function(s: Settings &default=[]): Queue;
## Put a value onto the beginning of a queue.
##
## q: The queue to put the value into.
##
## val: The value to insert into the queue.
global put: function(q: Queue, val: any);
## Get a value from the end of a queue.
##
## q: The queue to get the value from.
##
## Returns: The value gotten from the queue.
global get: function(q: Queue): any;
## Peek at the value at the end of the queue without removing it.
##
## q: The queue to get the value from.
##
## Returns: The value at the end of the queue.
global peek: function(q: Queue): any;
## Merge two queues together. If any settings are applied
## to the queues, the settings from q1 are used for the new
## merged queue.
@ -103,6 +110,11 @@ function get(q: Queue): any
return ret;
}
function peek(q: Queue): any
{
return q$vals[q$bottom];
}
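A minimal usage sketch of this queue API (the values are illustrative):

event bro_init()
{
local q = Queue::init();
Queue::put(q, "a");
Queue::put(q, "b");
print Queue::peek(q); # prints "a", leaving it in the queue
print Queue::get(q);  # prints "a" and removes it
}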
function merge(q1: Queue, q2: Queue): Queue
{
local ret = init(q1$settings);

@ -21,22 +21,22 @@ export {
type dir: enum { NONE, INCOMING, OUTGOING, BOTH };
const valids: table[Analyzer::Tag, addr, port] of dir = {
# A couple of ports commonly used for benign HTTP servers.
# For now we want to see everything.
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 81/tcp] = OUTGOING,
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 82/tcp] = OUTGOING,
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 83/tcp] = OUTGOING,
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 88/tcp] = OUTGOING,
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 8001/tcp] = OUTGOING,
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 8090/tcp] = OUTGOING,
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 8081/tcp] = OUTGOING,
#
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 6346/tcp] = BOTH, # Gnutella
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 6347/tcp] = BOTH, # Gnutella
# [Analyzer::ANALYZER_HTTP, 0.0.0.0, 6348/tcp] = BOTH, # Gnutella
} &redef;
# Set of analyzers for which we suppress Server_Found notices
@ -44,8 +44,8 @@ export {
# log files, this also saves memory because for these we don't
# need to remember which servers we already have reported, which
# for some can be a lot.
const suppress_servers: set [Analyzer::Tag] = {
# Analyzer::ANALYZER_HTTP
} &redef;
# We consider a connection to use a protocol X if the analyzer for X
@ -60,7 +60,7 @@ export {
# Entry point for other analyzers to report that they recognized
# a certain (sub-)protocol.
global found_protocol: function(c: connection, analyzer: Analyzer::Tag,
protocol: string);
# Table keeping reported (server, port, analyzer) tuples (and their
@ -70,7 +70,7 @@ export {
}
# Table that tracks currently active dynamic analyzers per connection.
global conns: table[conn_id] of set[Analyzer::Tag];
# Table of reports by other analyzers about the protocol used in a connection.
global protocols: table[conn_id] of set[string];
@ -80,7 +80,7 @@ type protocol : record {
sub: string; # "sub-protocols" reported by other sources
};
function get_protocol(c: connection, a: Analyzer::Tag) : protocol
{
local str = "";
if ( c$id in protocols )
@ -89,7 +89,7 @@ function get_protocol(c: connection, a: count) : protocol
str = |str| > 0 ? fmt("%s/%s", str, p) : p;
}
return [$a=Analyzer::name(a), $sub=str];
}
function fmt_protocol(p: protocol) : string
@ -97,7 +97,7 @@ function fmt_protocol(p: protocol) : string
return p$sub != "" ? fmt("%s (via %s)", p$sub, p$a) : p$a;
}
function do_notice(c: connection, a: Analyzer::Tag, d: dir)
{
if ( d == BOTH )
return;
@ -113,7 +113,7 @@ function do_notice(c: connection, a: count, d: dir)
NOTICE([$note=Protocol_Found,
$msg=fmt("%s %s on port %s", id_string(c$id), s, c$id$resp_p),
$sub=s, $conn=c]);
# We report multiple Server_Found's per host if we find a new
# sub-protocol.
@ -129,7 +129,7 @@ function do_notice(c: connection, a: count, d: dir)
NOTICE([$note=Server_Found,
$msg=fmt("%s: %s server on port %s%s", c$id$resp_h, s,
c$id$resp_p, (known ? " (update)" : "")),
$p=c$id$resp_p, $sub=s, $conn=c, $src=c$id$resp_h]);
if ( ! known )
servers[c$id$resp_h, c$id$resp_p, p$a] = set();
@ -194,10 +194,10 @@ event connection_state_remove(c: connection)
report_protocols(c);
}
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count)
{
# Don't report anything running on a well-known port.
if ( c$id$resp_p in Analyzer::registered_ports(atype) )
return;
if ( c$id in conns )
@ -214,11 +214,10 @@ event protocol_confirmation(c: connection, atype: count, aid: count)
}
}
function found_protocol(c: connection, atype: Analyzer::Tag, protocol: string)
{
# Don't report anything running on a well-known port.
if ( analyzer in dpd_config &&
c$id$resp_p in dpd_config[analyzer]$ports )
if ( c$id$resp_p in Analyzer::registered_ports(atype) )
return;
if ( c$id !in protocols )

@ -20,7 +20,7 @@ export {
}
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
reason: string) &priority=4
{
if ( ! c?$dpd ) return;

@ -0,0 +1,169 @@
@load base/frameworks/notice
@load base/frameworks/packet-filter
module PacketFilter;
export {
## The maximum number of BPF based shunts that Bro is allowed to perform.
const max_bpf_shunts = 100 &redef;
## Call this function to use BPF to shunt a connection (to prevent the
## data packets from reaching Bro). For TCP connections, control packets
## are still allowed through so that Bro can continue logging the connection
## and it can stop shunting once the connection ends.
global shunt_conn: function(id: conn_id): bool;
## This function will use a BPF expression to shunt traffic between
## the two hosts given in the `conn_id` so that the traffic is never
## exposed to Bro's traffic processing.
global shunt_host_pair: function(id: conn_id): bool;
## Remove shunting for a host pair given as a `conn_id`. The filter
## is not immediately removed. It waits for the occasional filter
## update done by the `PacketFilter` framework.
global unshunt_host_pair: function(id: conn_id): bool;
## Performs the same function as the `unshunt_host_pair` function, but
## it forces an immediate filter update.
global force_unshunt_host_pair: function(id: conn_id): bool;
## Retrieve the currently shunted connections.
global current_shunted_conns: function(): set[conn_id];
## Retrieve the currently shunted host pairs.
global current_shunted_host_pairs: function(): set[conn_id];
redef enum Notice::Type += {
## Indicates that :bro:id:`max_bpf_shunts` connections are already
## being shunted with BPF filters and no more are allowed.
No_More_Conn_Shunts_Available,
## Limitations in BPF make shunting some connections with BPF impossible.
## This notice encompasses those various cases.
Cannot_BPF_Shunt_Conn,
};
}
global shunted_conns: set[conn_id];
global shunted_host_pairs: set[conn_id];
function shunt_filters()
{
# NOTE: this could wrongly match if a connection happens with the ports reversed.
local tcp_filter = "";
local udp_filter = "";
for ( id in shunted_conns )
{
local prot = get_port_transport_proto(id$resp_p);
local filt = fmt("host %s and port %d and host %s and port %d", id$orig_h, id$orig_p, id$resp_h, id$resp_p);
if ( prot == udp )
udp_filter = combine_filters(udp_filter, "and", filt);
else if ( prot == tcp )
tcp_filter = combine_filters(tcp_filter, "and", filt);
}
if ( tcp_filter != "" )
tcp_filter = combine_filters("tcp and tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) == 0", "and", tcp_filter);
local conn_shunt_filter = combine_filters(tcp_filter, "and", udp_filter);
local hp_shunt_filter = "";
for ( id in shunted_host_pairs )
hp_shunt_filter = combine_filters(hp_shunt_filter, "and", fmt("host %s and host %s", id$orig_h, id$resp_h));
local filter = combine_filters(conn_shunt_filter, "and", hp_shunt_filter);
if ( filter != "" )
PacketFilter::exclude("shunt_filters", filter);
}
event bro_init() &priority=5
{
register_filter_plugin([
$func()={ return shunt_filters(); }
]);
}
function current_shunted_conns(): set[conn_id]
{
return shunted_conns;
}
function current_shunted_host_pairs(): set[conn_id]
{
return shunted_host_pairs;
}
function reached_max_shunts(): bool
{
if ( |shunted_conns| + |shunted_host_pairs| > max_bpf_shunts )
{
NOTICE([$note=No_More_Conn_Shunts_Available,
$msg=fmt("%d BPF shunts are in place and no more will be added until space clears.", max_bpf_shunts)]);
return T;
}
else
return F;
}
function shunt_host_pair(id: conn_id): bool
{
PacketFilter::filter_changed = T;
if ( reached_max_shunts() )
return F;
add shunted_host_pairs[id];
install();
return T;
}
function unshunt_host_pair(id: conn_id): bool
{
PacketFilter::filter_changed = T;
if ( id in shunted_host_pairs )
{
delete shunted_host_pairs[id];
return T;
}
else
return F;
}
function force_unshunt_host_pair(id: conn_id): bool
{
if ( unshunt_host_pair(id) )
{
install();
return T;
}
else
return F;
}
function shunt_conn(id: conn_id): bool
{
if ( is_v6_addr(id$orig_h) )
{
NOTICE([$note=Cannot_BPF_Shunt_Conn,
$msg="IPv6 connections can't be shunted with BPF due to limitations in BPF",
$sub="ipv6_conn",
$id=id, $identifier=cat(id)]);
return F;
}
if ( reached_max_shunts() )
return F;
PacketFilter::filter_changed = T;
add shunted_conns[id];
install();
return T;
}
event connection_state_remove(c: connection) &priority=-5
{
# Don't rebuild the filter right away because the packet filter framework
# will check every few minutes and update the filter if things have changed.
if ( c$id in shunted_conns )
delete shunted_conns[c$id];
}
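A usage sketch for the new shunting API follows; the triggering condition is
hypothetical, and any script-level detection could make the same call:

    @load frameworks/packet-filter/shunt

    event connection_established(c: connection)
        {
        # Assume local policy has decided this connection is bulk and
        # uninteresting; stop its data packets from reaching Bro.
        if ( ! PacketFilter::shunt_conn(c$id) )
            print fmt("unable to shunt %s", cat(c$id));
        }

Note that shunt_conn() refuses IPv6 connections and anything beyond
max_bpf_shunts, raising the corresponding notices instead of installing
a filter.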

View file

@ -1,31 +0,0 @@
##! This script gives the capability to selectively enable and disable event
##! groups at runtime. No events will be raised for any member of a
##! disabled event group.
module AnalysisGroups;
export {
## By default, all event groups are enabled; any group added to this
## table will be disabled.
const disabled: set[string] &redef;
}
# Set to remember all groups which were disabled by the last update.
global currently_disabled: set[string];
# This is the event that the control framework uses when it needs to indicate
# that an update control action happened.
event Control::configuration_update()
{
# Re-enable groups that should no longer be disabled.
for ( g in currently_disabled )
if ( g !in disabled )
enable_event_group(g);
# Disable those which are not already disabled.
for ( g in disabled )
if ( g !in currently_disabled )
disable_event_group(g);
currently_disabled = copy(disabled);
}

View file

@ -32,8 +32,8 @@ export {
const icmp_time_exceeded_threshold = 3 &redef;
## Interval at which to watch for the
## :bro:id:`ICMPTimeExceeded::icmp_time_exceeded_threshold` variable to be crossed.
## At the end of each interval the counter is reset.
## :bro:id:`Traceroute::icmp_time_exceeded_threshold` variable to be
## crossed. At the end of each interval the counter is reset.
const icmp_time_exceeded_interval = 3min &redef;
## The log record for the traceroute log.

View file

@ -0,0 +1,132 @@
##! This script implements the "Bro side" of several load balancing
##! approaches for Bro clusters.
@load base/frameworks/cluster
@load base/frameworks/packet-filter
module LoadBalancing;
export {
type Method: enum {
## Apply BPF filters to each worker in a way that causes them to
## automatically flow balance traffic between them.
AUTO_BPF,
## Load balance traffic across the workers by making each one apply
## a restrict filter to only listen to a single MAC address. This
## is a somewhat common deployment option for sites doing network-based
## load balancing with MAC address rewriting and passing the traffic to
## a single interface. Multiple MAC addresses will show up on the same
## interface and need to be filtered down to a single address.
#MAC_ADDR_BPF,
};
## Defines the method of load balancing to use.
const method = AUTO_BPF &redef;
# Configure the cluster framework to enable the load balancing filter configuration.
#global send_filter: event(for_node: string, filter: string);
#global confirm_filter_installation: event(success: bool);
redef record Cluster::Node += {
## A BPF filter for load balancing traffic sniffed on a single interface
## across a number of processes. In normal use, this will be assigned
## dynamically by the manager and installed by the workers.
lb_filter: string &optional;
};
}
#redef Cluster::manager2worker_events += /LoadBalancing::send_filter/;
#redef Cluster::worker2manager_events += /LoadBalancing::confirm_filter_installation/;
@if ( Cluster::is_enabled() )
@if ( Cluster::local_node_type() == Cluster::MANAGER )
event bro_init() &priority=5
{
if ( method != AUTO_BPF )
return;
local worker_ip_interface: table[addr, string] of count = table();
for ( n in Cluster::nodes )
{
local this_node = Cluster::nodes[n];
# Only workers!
if ( this_node$node_type != Cluster::WORKER ||
! this_node?$interface )
next;
if ( [this_node$ip, this_node$interface] !in worker_ip_interface )
worker_ip_interface[this_node$ip, this_node$interface] = 0;
++worker_ip_interface[this_node$ip, this_node$interface];
}
# Now that we've counted how many processes are running on each interface,
# create the filter for each worker.
local lb_proc_track: table[addr, string] of count = table();
for ( no in Cluster::nodes )
{
local that_node = Cluster::nodes[no];
if ( that_node$node_type == Cluster::WORKER &&
that_node?$interface && [that_node$ip, that_node$interface] in worker_ip_interface )
{
if ( [that_node$ip, that_node$interface] !in lb_proc_track )
lb_proc_track[that_node$ip, that_node$interface] = 0;
local this_lb_proc = lb_proc_track[that_node$ip, that_node$interface];
local total_lb_procs = worker_ip_interface[that_node$ip, that_node$interface];
++lb_proc_track[that_node$ip, that_node$interface];
if ( total_lb_procs > 1 )
{
that_node$lb_filter = PacketFilter::sample_filter(total_lb_procs, this_lb_proc);
Communication::nodes[no]$capture_filter = that_node$lb_filter;
}
}
}
}
#event remote_connection_established(p: event_peer) &priority=-5
# {
# if ( is_remote_event() )
# return;
#
# local for_node = p$descr;
# # Send the filter to the peer.
# if ( for_node in Cluster::nodes &&
# Cluster::nodes[for_node]?$lb_filter )
# {
# local filter = Cluster::nodes[for_node]$lb_filter;
# event LoadBalancing::send_filter(for_node, filter);
# }
# }
#event LoadBalancing::confirm_filter_installation(success: bool)
# {
# # This doesn't really matter yet since we aren't getting back a meaningful success response.
# }
@endif
@if ( Cluster::local_node_type() == Cluster::WORKER )
#event LoadBalancing::send_filter(for_node: string, filter: string)
event remote_capture_filter(p: event_peer, filter: string)
{
#if ( for_node !in Cluster::nodes )
# return;
#
#if ( Cluster::node == for_node )
# {
restrict_filters["lb_filter"] = filter;
PacketFilter::install();
#event LoadBalancing::confirm_filter_installation(T);
# }
}
@endif
@endif
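Enabling this in a cluster amounts to a one-line redef; AUTO_BPF is already
the default, so the redef below is shown only for explicitness:

    @load misc/load-balancing
    redef LoadBalancing::method = LoadBalancing::AUTO_BPF;

With, say, two workers configured on the same (ip, interface) pair, the
manager hands each of them a PacketFilter::sample_filter(2, n) slice, so
together they cover the interface's traffic without overlap.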

View file

@ -13,17 +13,18 @@ module Scan;
export {
redef enum Notice::Type += {
## Address scans detect that a host appears to be scanning some number of
## destinations on a single port. This notice is generated when more than
## :bro:id:`addr_scan_threshold` unique hosts are seen over the previous
## :bro:id:`addr_scan_interval` time range.
## Address scans detect that a host appears to be scanning some number
## of destinations on a single port. This notice is generated when more
## than :bro:id:`Scan::addr_scan_threshold` unique hosts are seen over
## the previous :bro:id:`Scan::addr_scan_interval` time range.
Address_Scan,
## Port scans detect that an attacking host appears to be scanning a
## single victim host on several ports. This notice is generated when
## an attacking host attempts to connect to :bro:id:`port_scan_threshold`
## an attacking host attempts to connect to
## :bro:id:`Scan::port_scan_threshold`
## unique ports on a single host over the previous
## :bro:id:`port_scan_interval` time range.
## :bro:id:`Scan::port_scan_interval` time range.
Port_Scan,
};
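The thresholds and intervals referenced above are tunable; a sketch of
adjusting them (this assumes the constants carry &redef as is conventional,
and the values are illustrative; exact defaults and types are defined in
scan.bro):

    @load misc/scan
    redef Scan::addr_scan_threshold = 50.0;
    redef Scan::port_scan_interval = 10min;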

View file

@ -87,7 +87,7 @@ function known_services_done(c: connection)
event log_it(network_time(), id$resp_h, id$resp_p, c$service);
}
event protocol_confirmation(c: connection, atype: count, aid: count) &priority=-5
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=-5
{
known_services_done(c);
}

View file

@ -3,7 +3,9 @@
## This normally isn't used because of the default open packet filter
## but we set it anyway in case the user is using a packet filter.
redef capture_filters += { ["frag"] = "(ip[6:2] & 0x3fff != 0) and tcp" };
## Note: This was removed because the default model now is a wide-open
## packet filter.
#redef capture_filters += { ["frag"] = "(ip[6:2] & 0x3fff != 0) and tcp" };
## Shorten the fragment timeout from never expiring to expiring fragments after
## five minutes.

View file

@ -24,6 +24,7 @@
@load frameworks/intel/smtp.bro
@load frameworks/intel/ssl.bro
@load frameworks/intel/where-locations.bro
@load frameworks/packet-filter/shunt.bro
@load frameworks/software/version-changes.bro
@load frameworks/software/vulnerable.bro
@load integration/barnyard2/__load__.bro
@ -31,11 +32,11 @@
@load integration/barnyard2/types.bro
@load integration/collective-intel/__load__.bro
@load integration/collective-intel/main.bro
@load misc/analysis-groups.bro
@load misc/app-metrics.bro
@load misc/capture-loss.bro
@load misc/detect-traceroute/__load__.bro
@load misc/detect-traceroute/main.bro
@load misc/load-balancing.bro
@load misc/loaded-scripts.bro
@load misc/profiling.bro
@load misc/scan.bro

3101
src/3rdparty/sqlite3.c vendored

File diff suppressed because it is too large

109
src/3rdparty/sqlite3.h vendored
View file

@ -107,9 +107,9 @@ extern "C" {
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION "3.7.16.2"
#define SQLITE_VERSION_NUMBER 3007016
#define SQLITE_SOURCE_ID "2013-04-12 11:52:43 cbea02d93865ce0e06789db95fd9168ebac970c7"
#define SQLITE_VERSION "3.7.17"
#define SQLITE_VERSION_NUMBER 3007017
#define SQLITE_SOURCE_ID "2013-05-20 00:56:22 118a3b35693b134d56ebd780123b7fd6f1497668"
/*
** CAPI3REF: Run-Time Library Version Numbers
@ -425,6 +425,8 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_FORMAT 24 /* Auxiliary database format error */
#define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */
#define SQLITE_NOTADB 26 /* File opened that is not a database file */
#define SQLITE_NOTICE 27 /* Notifications from sqlite3_log() */
#define SQLITE_WARNING 28 /* Warnings from sqlite3_log() */
#define SQLITE_ROW 100 /* sqlite3_step() has another row ready */
#define SQLITE_DONE 101 /* sqlite3_step() has finished executing */
/* end-of-error-codes */
@ -475,6 +477,7 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_IOERR_SHMMAP (SQLITE_IOERR | (21<<8))
#define SQLITE_IOERR_SEEK (SQLITE_IOERR | (22<<8))
#define SQLITE_IOERR_DELETE_NOENT (SQLITE_IOERR | (23<<8))
#define SQLITE_IOERR_MMAP (SQLITE_IOERR | (24<<8))
#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8))
#define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8))
#define SQLITE_CANTOPEN_NOTEMPDIR (SQLITE_CANTOPEN | (1<<8))
@ -494,6 +497,8 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_CONSTRAINT_TRIGGER (SQLITE_CONSTRAINT | (7<<8))
#define SQLITE_CONSTRAINT_UNIQUE (SQLITE_CONSTRAINT | (8<<8))
#define SQLITE_CONSTRAINT_VTAB (SQLITE_CONSTRAINT | (9<<8))
#define SQLITE_NOTICE_RECOVER_WAL (SQLITE_NOTICE | (1<<8))
#define SQLITE_NOTICE_RECOVER_ROLLBACK (SQLITE_NOTICE | (2<<8))
/*
** CAPI3REF: Flags For File Open Operations
@ -733,6 +738,9 @@ struct sqlite3_io_methods {
void (*xShmBarrier)(sqlite3_file*);
int (*xShmUnmap)(sqlite3_file*, int deleteFlag);
/* Methods above are valid for version 2 */
int (*xFetch)(sqlite3_file*, sqlite3_int64 iOfst, int iAmt, void **pp);
int (*xUnfetch)(sqlite3_file*, sqlite3_int64 iOfst, void *p);
/* Methods above are valid for version 3 */
/* Additional methods may be added in future releases */
};
@ -869,7 +877,8 @@ struct sqlite3_io_methods {
** it is able to override built-in [PRAGMA] statements.
**
** <li>[[SQLITE_FCNTL_BUSYHANDLER]]
** ^This file-control may be invoked by SQLite on the database file handle
** ^The [SQLITE_FCNTL_BUSYHANDLER]
** file-control may be invoked by SQLite on the database file handle
** shortly after it is opened in order to provide a custom VFS with access
** to the connections busy-handler callback. The argument is of type (void **)
** - an array of two (void *) values. The first (void *) actually points
@ -880,13 +889,24 @@ struct sqlite3_io_methods {
** current operation.
**
** <li>[[SQLITE_FCNTL_TEMPFILENAME]]
** ^Application can invoke this file-control to have SQLite generate a
** ^Application can invoke the [SQLITE_FCNTL_TEMPFILENAME] file-control
** to have SQLite generate a
** temporary filename using the same algorithm that is followed to generate
** temporary filenames for TEMP tables and other internal uses. The
** argument should be a char** which will be filled with the filename
** written into memory obtained from [sqlite3_malloc()]. The caller should
** invoke [sqlite3_free()] on the result to avoid a memory leak.
**
** <li>[[SQLITE_FCNTL_MMAP_SIZE]]
** The [SQLITE_FCNTL_MMAP_SIZE] file control is used to query or set the
** maximum number of bytes that will be used for memory-mapped I/O.
** The argument is a pointer to a value of type sqlite3_int64 that
** is an advisory maximum number of bytes in the file to memory map. The
** pointer is overwritten with the old value. The limit is not changed if
** the value originally pointed to is negative, and so the current limit
** can be queried by passing in a pointer to a negative number. This
** file-control is used internally to implement [PRAGMA mmap_size].
**
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE 1
@ -905,6 +925,7 @@ struct sqlite3_io_methods {
#define SQLITE_FCNTL_PRAGMA 14
#define SQLITE_FCNTL_BUSYHANDLER 15
#define SQLITE_FCNTL_TEMPFILENAME 16
#define SQLITE_FCNTL_MMAP_SIZE 18
/*
** CAPI3REF: Mutex Handle
@ -1571,7 +1592,9 @@ struct sqlite3_mem_methods {
** page cache implementation into that object.)^ </dd>
**
** [[SQLITE_CONFIG_LOG]] <dt>SQLITE_CONFIG_LOG</dt>
** <dd> ^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a
** <dd> The SQLITE_CONFIG_LOG option is used to configure the SQLite
** global [error log].
** (^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a
** function with a call signature of void(*)(void*,int,const char*),
** and a pointer to void. ^If the function pointer is not NULL, it is
** invoked by [sqlite3_log()] to process each logging event. ^If the
@ -1617,12 +1640,12 @@ struct sqlite3_mem_methods {
** <dt>SQLITE_CONFIG_PCACHE and SQLITE_CONFIG_GETPCACHE
** <dd> These options are obsolete and should not be used by new code.
** They are retained for backwards compatibility but are now no-ops.
** </dl>
** </dd>
**
** [[SQLITE_CONFIG_SQLLOG]]
** <dt>SQLITE_CONFIG_SQLLOG
** <dd>This option is only available if sqlite is compiled with the
** SQLITE_ENABLE_SQLLOG pre-processor macro defined. The first argument should
** [SQLITE_ENABLE_SQLLOG] pre-processor macro defined. The first argument should
** be a pointer to a function of type void(*)(void*,sqlite3*,const char*, int).
** The second should be of type (void*). The callback is invoked by the library
** in three separate circumstances, identified by the value passed as the
@ -1632,7 +1655,23 @@ struct sqlite3_mem_methods {
** fourth parameter is 1, then the SQL statement that the third parameter
** points to has just been executed. Or, if the fourth parameter is 2, then
** the connection being passed as the second parameter is being closed. The
** third parameter is passed NULL In this case.
** third parameter is passed NULL In this case. An example of using this
** configuration option can be seen in the "test_sqllog.c" source file in
** the canonical SQLite source tree.</dd>
**
** [[SQLITE_CONFIG_MMAP_SIZE]]
** <dt>SQLITE_CONFIG_MMAP_SIZE
** <dd>SQLITE_CONFIG_MMAP_SIZE takes two 64-bit integer (sqlite3_int64) values
** that are the default mmap size limit (the default setting for
** [PRAGMA mmap_size]) and the maximum allowed mmap size limit.
** The default setting can be overridden by each database connection using
** either the [PRAGMA mmap_size] command, or by using the
** [SQLITE_FCNTL_MMAP_SIZE] file control. The maximum allowed mmap size
** cannot be changed at run-time. Nor may the maximum allowed mmap size
** exceed the compile-time maximum mmap size set by the
** [SQLITE_MAX_MMAP_SIZE] compile-time option.
** If either argument to this option is negative, then that argument is
** changed to its compile-time default.
** </dl>
*/
#define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */
@ -1656,6 +1695,7 @@ struct sqlite3_mem_methods {
#define SQLITE_CONFIG_GETPCACHE2 19 /* sqlite3_pcache_methods2* */
#define SQLITE_CONFIG_COVERING_INDEX_SCAN 20 /* int */
#define SQLITE_CONFIG_SQLLOG 21 /* xSqllog, void* */
#define SQLITE_CONFIG_MMAP_SIZE 22 /* sqlite3_int64, sqlite3_int64 */
/*
** CAPI3REF: Database Connection Configuration Options
@ -2489,6 +2529,9 @@ SQLITE_API int sqlite3_set_authorizer(
** as each triggered subprogram is entered. The callbacks for triggers
** contain a UTF-8 SQL comment that identifies the trigger.)^
**
** The [SQLITE_TRACE_SIZE_LIMIT] compile-time option can be used to limit
** the length of [bound parameter] expansion in the output of sqlite3_trace().
**
** ^The callback function registered by sqlite3_profile() is invoked
** as each SQL statement finishes. ^The profile callback contains
** the original statement text and an estimate of wall-clock time
@ -3027,7 +3070,8 @@ SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal);
** <li>
** ^If the database schema changes, instead of returning [SQLITE_SCHEMA] as it
** always used to do, [sqlite3_step()] will automatically recompile the SQL
** statement and try to run it again.
** statement and try to run it again. As many as [SQLITE_MAX_SCHEMA_RETRY]
** retries will occur before sqlite3_step() gives up and returns an error.
** </li>
**
** <li>
@ -3231,6 +3275,9 @@ typedef struct sqlite3_context sqlite3_context;
** parameter [SQLITE_LIMIT_VARIABLE_NUMBER] (default value: 999).
**
** ^The third argument is the value to bind to the parameter.
** ^If the third parameter to sqlite3_bind_text() or sqlite3_bind_text16()
** or sqlite3_bind_blob() is a NULL pointer then the fourth parameter
** is ignored and the end result is the same as sqlite3_bind_null().
**
** ^(In those routines that have a fourth argument, its value is the
** number of bytes in the parameter. To be clear: the value is the
@ -4187,7 +4234,7 @@ SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(voi
** the content before returning.
**
** The typedef is necessary to work around problems in certain
** C++ compilers. See ticket #2191.
** C++ compilers.
*/
typedef void (*sqlite3_destructor_type)(void*);
#define SQLITE_STATIC ((sqlite3_destructor_type)0)
@ -4986,11 +5033,20 @@ SQLITE_API int sqlite3_table_column_metadata(
** ^This interface loads an SQLite extension library from the named file.
**
** ^The sqlite3_load_extension() interface attempts to load an
** SQLite extension library contained in the file zFile.
** [SQLite extension] library contained in the file zFile. If
** the file cannot be loaded directly, attempts are made to load
** with various operating-system specific extensions added.
** So for example, if "samplelib" cannot be loaded, then names like
** "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might
** be tried also.
**
** ^The entry point is zProc.
** ^zProc may be 0, in which case the name of the entry point
** defaults to "sqlite3_extension_init".
** ^(zProc may be 0, in which case SQLite will try to come up with an
** entry point name on its own. It first tries "sqlite3_extension_init".
** If that does not work, it constructs a name "sqlite3_X_init" where the
** X is consists of the lower-case equivalent of all ASCII alphabetic
** characters in the filename from the last "/" to the first following
** "." and omitting any initial "lib".)^
** ^The sqlite3_load_extension() interface returns
** [SQLITE_OK] on success and [SQLITE_ERROR] if something goes wrong.
** ^If an error occurs and pzErrMsg is not 0, then the
@ -5016,11 +5072,11 @@ SQLITE_API int sqlite3_load_extension(
** CAPI3REF: Enable Or Disable Extension Loading
**
** ^So as not to open security holes in older applications that are
** unprepared to deal with extension loading, and as a means of disabling
** extension loading while evaluating user-entered SQL, the following API
** unprepared to deal with [extension loading], and as a means of disabling
** [extension loading] while evaluating user-entered SQL, the following API
** is provided to turn the [sqlite3_load_extension()] mechanism on and off.
**
** ^Extension loading is off by default. See ticket #1863.
** ^Extension loading is off by default.
** ^Call the sqlite3_enable_load_extension() routine with onoff==1
** to turn extension loading on and call it with onoff==0 to turn
** it back off again.
@ -5032,7 +5088,7 @@ SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff);
**
** ^This interface causes the xEntryPoint() function to be invoked for
** each new [database connection] that is created. The idea here is that
** xEntryPoint() is the entry point for a statically linked SQLite extension
** xEntryPoint() is the entry point for a statically linked [SQLite extension]
** that is to be automatically loaded into all new database connections.
**
** ^(Even though the function prototype shows that xEntryPoint() takes
@ -6812,10 +6868,25 @@ SQLITE_API int sqlite3_unlock_notify(
SQLITE_API int sqlite3_stricmp(const char *, const char *);
SQLITE_API int sqlite3_strnicmp(const char *, const char *, int);
/*
** CAPI3REF: String Globbing
*
** ^The [sqlite3_strglob(P,X)] interface returns zero if string X matches
** the glob pattern P, and it returns non-zero if string X does not match
** the glob pattern P. ^The definition of glob pattern matching used in
** [sqlite3_strglob(P,X)] is the same as for the "X GLOB P" operator in the
** SQL dialect used by SQLite. ^The sqlite3_strglob(P,X) function is case
** sensitive.
**
** Note that this routine returns zero on a match and non-zero if the strings
** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()].
*/
SQLITE_API int sqlite3_strglob(const char *zGlob, const char *zStr);
/*
** CAPI3REF: Error Logging Interface
**
** ^The [sqlite3_log()] interface writes a message into the error log
** ^The [sqlite3_log()] interface writes a message into the [error log]
** established by the [SQLITE_CONFIG_LOG] option to [sqlite3_config()].
** ^If logging is enabled, the zFormat string and subsequent arguments are
** used with [sqlite3_snprintf()] to generate the final output string.

View file

@ -1,403 +0,0 @@
// Main analyzer interface.
#ifndef ANALYZER_H
#define ANALYZER_H
#include <list>
#include "AnalyzerTags.h"
#include "Conn.h"
#include "Obj.h"
class DPM;
class PIA;
class Analyzer;
typedef list<Analyzer*> analyzer_list;
typedef void (Analyzer::*analyzer_timer_func)(double t);
// FIXME: This is a copy of ConnectionTimer, which we may eventually be
// able to get rid of.
class AnalyzerTimer : public Timer {
public:
AnalyzerTimer(Analyzer* arg_analyzer, analyzer_timer_func arg_timer,
double arg_t, int arg_do_expire, TimerType arg_type)
: Timer(arg_t, arg_type)
{ Init(arg_analyzer, arg_timer, arg_do_expire); }
virtual ~AnalyzerTimer();
void Dispatch(double t, int is_expire);
protected:
AnalyzerTimer() {}
void Init(Analyzer* analyzer, analyzer_timer_func timer, int do_expire);
Analyzer* analyzer;
analyzer_timer_func timer;
int do_expire;
};
// Main analyzer interface.
//
// Each analyzer is part of a tree, having a parent analyzer and an
// arbitrary number of child analyzers. Each analyzer also has a list of
// *support analyzers*. All its input first passes through this list of
// support analyzers, which can perform arbitrary preprocessing. Support
// analyzers share the same interface as regular analyzers, except that
// they are unidirectional, i.e., they see only one side of a connection.
//
// When overriding any of these methods, always make sure to call the
// base-class version first.
class SupportAnalyzer;
class OutputHandler;
class Analyzer {
public:
Analyzer(AnalyzerTag::Tag tag, Connection* conn);
virtual ~Analyzer();
virtual void Init();
virtual void Done();
// Pass data to the analyzer (it's automatically passed through its
// support analyzers first). We have packet-wise and stream-wise
// interfaces. For the packet-interface, some analyzers may require
// more information than others, so IP/caplen and seq may or may
// not be set.
void NextPacket(int len, const u_char* data, bool orig,
int seq = -1, const IP_Hdr* ip = 0, int caplen = 0);
void NextStream(int len, const u_char* data, bool is_orig);
// Used for data that can't be delivered (e.g., due to a previous
// sequence hole/gap).
void NextUndelivered(int seq, int len, bool is_orig);
// Report message boundary. (See EndOfData() below.)
void NextEndOfData(bool orig);
// Pass data on to all child analyzer(s). For SupportAnalyzers (see
// below), this is overridden to pass it on to the next sibling (or
// finally to the parent, if it's the last support analyzer).
//
// If we have an associated OutputHandler (see below), the data is
// additionally passed to that, too. For SupportAnalyzers, it is *only*
// delivered to the OutputHandler.
virtual void ForwardPacket(int len, const u_char* data,
bool orig, int seq,
const IP_Hdr* ip, int caplen);
virtual void ForwardStream(int len, const u_char* data, bool orig);
virtual void ForwardUndelivered(int seq, int len, bool orig);
// Report a message boundary to all child analyzers
virtual void ForwardEndOfData(bool orig);
AnalyzerID GetID() const { return id; }
Connection* Conn() const { return conn; }
// An OutputHandler can be used to get access to data extracted by this
// analyzer (i.e., all data which is passed to
// Forward{Packet,Stream,Undelivered}). We take the ownership of
// the handler.
class OutputHandler {
public:
virtual ~OutputHandler() { }
virtual void DeliverPacket(int len, const u_char* data,
bool orig, int seq,
const IP_Hdr* ip, int caplen)
{ }
virtual void DeliverStream(int len, const u_char* data,
bool orig) { }
virtual void Undelivered(int seq, int len, bool orig) { }
};
OutputHandler* GetOutputHandler() const { return output_handler; }
void SetOutputHandler(OutputHandler* handler)
{ output_handler = handler; }
// If an analyzer was triggered by a signature match, this returns the
// signature; nil otherwise.
const Rule* Signature() const { return signature; }
void SetSignature(const Rule* sig) { signature = sig; }
void SetSkip(bool do_skip) { skip = do_skip; }
bool Skipping() const { return skip; }
bool IsFinished() const { return finished; }
AnalyzerTag::Tag GetTag() const { return tag; }
const char* GetTagName() const;
static AnalyzerTag::Tag GetTag(const char* tag);
static const char* GetTagName(AnalyzerTag::Tag tag);
static bool IsAvailable(AnalyzerTag::Tag tag)
{ return analyzer_configs[tag].available(); }
// Management of the tree.
//
// We immediately discard an added analyzer if there's already a child
// of the same type.
void AddChildAnalyzer(Analyzer* analyzer)
{ AddChildAnalyzer(analyzer, true); }
Analyzer* AddChildAnalyzer(AnalyzerTag::Tag tag);
void RemoveChildAnalyzer(Analyzer* analyzer);
void RemoveChildAnalyzer(AnalyzerID id);
bool HasChildAnalyzer(AnalyzerTag::Tag tag);
// Recursive; returns nil if not found.
Analyzer* FindChild(AnalyzerID id);
// Recursive; returns first found, or nil.
Analyzer* FindChild(AnalyzerTag::Tag tag);
const analyzer_list& GetChildren() { return children; }
Analyzer* Parent() const { return parent; }
void SetParent(Analyzer* p) { parent = p; }
// Remove this child analyzer from the parent's list.
void Remove() { assert(parent); parent->RemoveChildAnalyzer(this); }
// Management of support analyzers. Support analyzers are associated
// with a direction, and will only see data in the corresponding flow.
//
// We immediately discard an added analyzer if there's already a child
// of the same type for the same direction.
// Adds to tail of list.
void AddSupportAnalyzer(SupportAnalyzer* analyzer);
void RemoveSupportAnalyzer(SupportAnalyzer* analyzer);
// These are the methods where the analyzer actually gets its input.
// Each analyzer has only to implement the schemes it supports.
// Packet-wise (or more generally chunk-wise) input. "data" points
// to the payload that the analyzer is supposed to examine. If it's
// part of a full packet, "ip" points to its IP header. An analyzer
// may or may not need to be given the full packet (and its caplen)
// as well.
virtual void DeliverPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
// Stream-wise payload input.
virtual void DeliverStream(int len, const u_char* data, bool orig);
// If a parent analyzer can't turn a sequence of packets into a stream
// (e.g., due to holes), it can pass the remaining data through this
// method to the child.
virtual void Undelivered(int seq, int len, bool orig);
// Report a message boundary. This is a generic method that can be used
// by specific Analyzers if all data of a message has been delivered,
// e.g., to report that HTTP body has been delivered completely by the
// HTTP analyzer before it starts with the next body. EndOfData() is
// automatically generated by the analyzer's Done() method.
virtual void EndOfData(bool is_orig);
// Occasionally we may find during analysis that we got the direction
// of the connection wrong. In these cases, this method is called
// to swap state if necessary. This will not happen after payload
// has already been passed on, so most analyzers don't need to care.
virtual void FlipRoles();
// Feedback about protocol conformance, to be called by the
// analyzer's processing. The methods raise the corresponding
// protocol_confirmation and protocol_violation events.
// Report that we believe we're parsing the right protocol. This
// should be called as early as possible during a connection's
// life-time. The protocol_confirmation event is only raised once per
// analyzer, even if the method is called multiple times.
virtual void ProtocolConfirmation();
// Return whether the analyzer previously called ProtocolConfirmation()
// at least once before.
bool ProtocolConfirmed() const
{ return protocol_confirmed; }
// Report that we found a significant protocol violation which might
// indicate that the analyzed data is in fact not the expected
// protocol. The protocol_violation event is raised once per call to
// this method so that the script-level may build up some notion of
// how "severely" protocol semantics are violated.
virtual void ProtocolViolation(const char* reason,
const char* data = 0, int len = 0);
virtual unsigned int MemoryAllocation() const;
// Called whenever the connection value needs to be updated. By
// default, this method will be called for each analyzer in the tree.
// Analyzers can use this method to attach additional data to the
// connections. A call to BuildConnVal will in turn trigger a call to
// UpdateConnVal.
virtual void UpdateConnVal(RecordVal *conn_val);
// The following methods are proxies: calls are directly forwarded
// to the connection instance. These are for convenience only,
// allowing us to reuse more of the old analyzer code unchanged.
RecordVal* BuildConnVal()
{ return conn->BuildConnVal(); }
void Event(EventHandlerPtr f, const char* name = 0)
{ conn->Event(f, this, name); }
void Event(EventHandlerPtr f, Val* v1, Val* v2 = 0)
{ conn->Event(f, this, v1, v2); }
void ConnectionEvent(EventHandlerPtr f, val_list* vl)
{ conn->ConnectionEvent(f, this, vl); }
void Weird(const char* name, const char* addl = "")
{ conn->Weird(name, addl); }
// Factory function to instantiate new analyzers.
static Analyzer* InstantiateAnalyzer(AnalyzerTag::Tag tag, Connection* c);
protected:
friend class DPM;
friend class Connection;
friend class AnalyzerTimer;
friend class TCP_ApplicationAnalyzer;
Analyzer() { }
// Associates a connection with this analyzer. Must be called if
// we're using the default ctor.
void SetConnection(Connection* c) { conn = c; }
// Creates the given timer to expire at time t. If do_expire
// is true, then the timer is also evaluated when Bro terminates,
// otherwise not.
void AddTimer(analyzer_timer_func timer, double t, int do_expire,
TimerType type);
void RemoveTimer(Timer* t);
void CancelTimers();
bool HasSupportAnalyzer(AnalyzerTag::Tag tag, bool orig);
void AddChildAnalyzer(Analyzer* analyzer, bool init);
void InitChildren();
void AppendNewChildren();
private:
// Internal method to eventually delete a child analyzer that's
// already Done().
void DeleteChild(analyzer_list::iterator i);
AnalyzerTag::Tag tag;
AnalyzerID id;
Connection* conn;
Analyzer* parent;
const Rule* signature;
OutputHandler* output_handler;
analyzer_list children;
SupportAnalyzer* orig_supporters;
SupportAnalyzer* resp_supporters;
analyzer_list new_children;
bool protocol_confirmed;
timer_list timers;
bool timers_canceled;
bool skip;
bool finished;
bool removing;
static AnalyzerID id_counter;
typedef bool (*available_callback)();
typedef Analyzer* (*factory_callback)(Connection* conn);
typedef bool (*match_callback)(Connection*);
struct Config {
AnalyzerTag::Tag tag;
const char* name;
factory_callback factory;
available_callback available;
match_callback match;
bool partial;
};
// Table of analyzers.
static const Config analyzer_configs[];
};
#define ADD_ANALYZER_TIMER(timer, t, do_expire, type) \
AddTimer(analyzer_timer_func(timer), (t), (do_expire), (type))
#define LOOP_OVER_CHILDREN(var) \
for ( analyzer_list::iterator var = children.begin(); \
var != children.end(); var++ )
#define LOOP_OVER_CONST_CHILDREN(var) \
for ( analyzer_list::const_iterator var = children.begin(); \
var != children.end(); var++ )
#define LOOP_OVER_GIVEN_CHILDREN(var, the_kids) \
for ( analyzer_list::iterator var = the_kids.begin(); \
var != the_kids.end(); var++ )
#define LOOP_OVER_GIVEN_CONST_CHILDREN(var, the_kids) \
for ( analyzer_list::const_iterator var = the_kids.begin(); \
var != the_kids.end(); var++ )
class SupportAnalyzer : public Analyzer {
public:
SupportAnalyzer(AnalyzerTag::Tag tag, Connection* conn, bool arg_orig)
: Analyzer(tag, conn) { orig = arg_orig; sibling = 0; }
virtual ~SupportAnalyzer() {}
bool IsOrig() const { return orig; }
virtual void ForwardPacket(int len, const u_char* data, bool orig,
int seq, const IP_Hdr* ip, int caplen);
virtual void ForwardStream(int len, const u_char* data, bool orig);
virtual void ForwardUndelivered(int seq, int len, bool orig);
SupportAnalyzer* Sibling() const { return sibling; }
protected:
friend class Analyzer;
SupportAnalyzer() { }
private:
bool orig;
// Points to next support analyzer in chain. The list is managed by
// parent analyzer.
SupportAnalyzer* sibling;
};
class TransportLayerAnalyzer : public Analyzer {
public:
TransportLayerAnalyzer(AnalyzerTag::Tag tag, Connection* conn)
: Analyzer(tag, conn) { pia = 0; }
virtual void Done();
virtual bool IsReuse(double t, const u_char* pkt) = 0;
virtual void SetContentsFile(unsigned int direction, BroFile* f);
virtual BroFile* GetContentsFile(unsigned int direction) const;
void SetPIA(PIA* arg_PIA) { pia = arg_PIA; }
PIA* GetPIA() const { return pia; }
// Raises packet_contents event.
void PacketContents(const u_char* data, int len);
protected:
TransportLayerAnalyzer() { }
private:
PIA* pia;
};
#endif

View file

@ -1,57 +0,0 @@
#ifndef ANALYZERTAGS_H
#define ANALYZERTAGS_H
// Each kind of analyzer gets a tag. When adding an analyzer here, also adapt
// the table of analyzers in Analyzer.cc.
//
// Using a namespace here is kind of a hack: ideally this would be in "class
// Analyzer {...}". But then we'd have circular dependencies across the header
// files.
#include "util.h"
typedef uint32 AnalyzerID;
namespace AnalyzerTag {
enum Tag {
Error = 0, // used as error code
// Analyzer in charge of protocol detection.
PIA_TCP, PIA_UDP,
// Transport-layer analyzers.
ICMP, TCP, UDP,
// Application-layer analyzers (hand-written).
BitTorrent, BitTorrentTracker,
DCE_RPC, DNS, Finger, FTP, Gnutella, HTTP, Ident, IRC,
Login, NCP, NetbiosSSN, NFS, NTP, POP3, Portmapper, Rlogin,
RPC, Rsh, SMB, SMTP, SSH,
Telnet,
// Application-layer analyzers, binpac-generated.
DHCP_BINPAC, DNS_TCP_BINPAC, DNS_UDP_BINPAC,
HTTP_BINPAC, SSL, SYSLOG_BINPAC,
Modbus,
// Decapsulation analyzers.
AYIYA,
SOCKS,
Teredo,
GTPv1,
// Other
File, IRC_Data, FTP_Data, Backdoor, InterConn, SteppingStone, TCPStats,
ConnSize,
// Support-analyzers
Contents, ContentLine, NVT, Zip, Contents_DNS, Contents_NCP,
Contents_NetbiosSSN, Contents_Rlogin, Contents_Rsh,
Contents_DCE_RPC, Contents_SMB, Contents_RPC, Contents_NFS,
FTP_ADAT,
// End-marker.
LastAnalyzer
};
};
#endif
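The hard-coded tag list above gives way to the analyzer framework's
dynamically built Analyzer::Tag enum, whose ANALYZER_* names are generated
per component (see the BroDoc changes below). A script-land sketch, assuming
the HTTP analyzer's canonical tag name:

    event bro_init()
        {
        # Register a non-standard port with the HTTP analyzer.
        Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, set(8080/tcp));
        }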

View file

@ -82,9 +82,7 @@ int* Base64Converter::InitBase64Table(const string& alphabet)
return base64_table;
}
Base64Converter::Base64Converter(Analyzer* arg_analyzer, const string& arg_alphabet)
Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string& arg_alphabet)
{
if ( arg_alphabet.size() > 0 )
{

View file

@ -7,7 +7,8 @@
#include "util.h"
#include "BroString.h"
#include "Analyzer.h"
#include "Reporter.h"
#include "analyzer/Analyzer.h"
// Maybe we should have a base class for generic decoders?
class Base64Converter {
@ -15,7 +16,7 @@ public:
// <analyzer> is used for error reporting, and it should be zero when
// the decoder is called by the built-in function decode_base64() or encode_base64().
// Empty alphabet indicates the default base64 alphabet.
Base64Converter(Analyzer* analyzer, const string& alphabet = "");
Base64Converter(analyzer::Analyzer* analyzer, const string& alphabet = "");
~Base64Converter();
// A note on Decode():
@ -62,7 +63,7 @@ protected:
int base64_after_padding;
int* base64_table;
int errored; // if true, we encountered an error - skip further processing
Analyzer* analyzer;
analyzer::Analyzer* analyzer;
};

View file

@ -8,6 +8,9 @@
#include "BroDoc.h"
#include "BroDocObj.h"
#include "util.h"
#include "plugin/Manager.h"
#include "analyzer/Manager.h"
#include "analyzer/Component.h"
BroDoc::BroDoc(const std::string& rel, const std::string& abs)
{
@ -35,12 +38,14 @@ BroDoc::BroDoc(const std::string& rel, const std::string& abs)
downloadable_filename = source_filename;
#if 0
size_t ext_pos = downloadable_filename.find(".bif.bro");
if ( std::string::npos != ext_pos )
downloadable_filename.erase(ext_pos + 4);
#endif
reST_filename = doc_title;
ext_pos = reST_filename.find(".bro");
size_t ext_pos = reST_filename.find(".bro");
if ( std::string::npos == ext_pos )
reST_filename += ".rst";
@ -162,84 +167,77 @@ void BroDoc::SetPacketFilter(const std::string& s)
packet_filter.clear();
}
void BroDoc::AddPortAnalysis(const std::string& analyzer,
const std::string& ports)
{
std::string reST_string = analyzer + "::\n" + ports + "\n\n";
port_analysis.push_back(reST_string);
}
void BroDoc::WriteDocFile() const
{
WriteToDoc(".. Automatically generated. Do not edit.\n\n");
WriteToDoc(reST_file, ".. Automatically generated. Do not edit.\n\n");
WriteToDoc(":tocdepth: 3\n\n");
WriteToDoc(reST_file, ":tocdepth: 3\n\n");
WriteSectionHeading(doc_title.c_str(), '=');
WriteSectionHeading(reST_file, doc_title.c_str(), '=');
WriteStringList(".. bro:namespace:: %s\n", modules);
WriteStringList(reST_file, ".. bro:namespace:: %s\n", modules);
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
// WriteSectionHeading("Overview", '-');
WriteStringList("%s\n", summary);
// WriteSectionHeading(reST_file, "Overview", '-');
WriteStringList(reST_file, "%s\n", summary);
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
if ( ! modules.empty() )
{
WriteToDoc(":Namespace%s: ", (modules.size() > 1 ? "s" : ""));
// WriteStringList(":bro:namespace:`%s`", modules);
WriteStringList("``%s``, ", "``%s``", modules);
WriteToDoc("\n");
WriteToDoc(reST_file, ":Namespace%s: ", (modules.size() > 1 ? "s" : ""));
// WriteStringList(reST_file, ":bro:namespace:`%s`", modules);
WriteStringList(reST_file, "``%s``, ", "``%s``", modules);
WriteToDoc(reST_file, "\n");
}
if ( ! imports.empty() )
{
WriteToDoc(":Imports: ");
WriteToDoc(reST_file, ":Imports: ");
std::list<std::string>::const_iterator it;
for ( it = imports.begin(); it != imports.end(); ++it )
{
if ( it != imports.begin() )
WriteToDoc(", ");
WriteToDoc(reST_file, ", ");
string pretty(*it);
size_t pos = pretty.find("/index");
if ( pos != std::string::npos && pos + 6 == pretty.size() )
pretty = pretty.substr(0, pos);
WriteToDoc(":doc:`%s </scripts/%s>`", pretty.c_str(), it->c_str());
WriteToDoc(reST_file, ":doc:`%s </scripts/%s>`", pretty.c_str(), it->c_str());
}
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
}
WriteToDoc(":Source File: :download:`%s`\n",
WriteToDoc(reST_file, ":Source File: :download:`%s`\n",
downloadable_filename.c_str());
WriteToDoc("\n");
WriteToDoc(reST_file, "\n");
WriteInterface("Summary", '~', '#', true, true);
if ( ! notices.empty() )
WriteBroDocObjList(notices, "Notices", '#');
WriteBroDocObjList(reST_file, notices, "Notices", '#');
if ( port_analysis.size() || packet_filter.size() )
WriteSectionHeading("Configuration Changes", '#');
WriteSectionHeading(reST_file, "Configuration Changes", '#');
if ( ! port_analysis.empty() )
{
WriteSectionHeading("Port Analysis", '^');
WriteToDoc("Loading this script makes the following changes to "
WriteSectionHeading(reST_file, "Port Analysis", '^');
WriteToDoc(reST_file, "Loading this script makes the following changes to "
":bro:see:`dpd_config`.\n\n");
WriteStringList("%s, ", "%s", port_analysis);
WriteStringList(reST_file, "%s, ", "%s", port_analysis);
}
if ( ! packet_filter.empty() )
{
WriteSectionHeading("Packet Filter", '^');
WriteToDoc("Loading this script makes the following changes to "
WriteSectionHeading(reST_file, "Packet Filter", '^');
WriteToDoc(reST_file, "Loading this script makes the following changes to "
":bro:see:`capture_filters`.\n\n");
WriteToDoc("Filters added::\n\n");
WriteToDoc("%s\n", packet_filter.c_str());
WriteToDoc(reST_file, "Filters added::\n\n");
WriteToDoc(reST_file, "%s\n", packet_filter.c_str());
}
WriteInterface("Detailed Interface", '~', '#', true, false);
@ -265,23 +263,23 @@ void BroDoc::WriteDocFile() const
void BroDoc::WriteInterface(const char* heading, char underline,
char sub, bool isPublic, bool isShort) const
{
WriteSectionHeading(heading, underline);
WriteBroDocObjList(options, isPublic, "Options", sub, isShort);
WriteBroDocObjList(constants, isPublic, "Constants", sub, isShort);
WriteBroDocObjList(state_vars, isPublic, "State Variables", sub, isShort);
WriteBroDocObjList(types, isPublic, "Types", sub, isShort);
WriteBroDocObjList(events, isPublic, "Events", sub, isShort);
WriteBroDocObjList(hooks, isPublic, "Hooks", sub, isShort);
WriteBroDocObjList(functions, isPublic, "Functions", sub, isShort);
WriteBroDocObjList(redefs, isPublic, "Redefinitions", sub, isShort);
WriteSectionHeading(reST_file, heading, underline);
WriteBroDocObjList(reST_file, options, isPublic, "Options", sub, isShort);
WriteBroDocObjList(reST_file, constants, isPublic, "Constants", sub, isShort);
WriteBroDocObjList(reST_file, state_vars, isPublic, "State Variables", sub, isShort);
WriteBroDocObjList(reST_file, types, isPublic, "Types", sub, isShort);
WriteBroDocObjList(reST_file, events, isPublic, "Events", sub, isShort);
WriteBroDocObjList(reST_file, hooks, isPublic, "Hooks", sub, isShort);
WriteBroDocObjList(reST_file, functions, isPublic, "Functions", sub, isShort);
WriteBroDocObjList(reST_file, redefs, isPublic, "Redefinitions", sub, isShort);
}
void BroDoc::WriteStringList(const char* format, const char* last_format,
const std::list<std::string>& l) const
void BroDoc::WriteStringList(FILE* f, const char* format, const char* last_format,
const std::list<std::string>& l)
{
if ( l.empty() )
{
WriteToDoc("\n");
WriteToDoc(f, "\n");
return;
}
@ -290,12 +288,12 @@ void BroDoc::WriteStringList(const char* format, const char* last_format,
last--;
for ( it = l.begin(); it != last; ++it )
WriteToDoc(format, it->c_str());
WriteToDoc(f, format, it->c_str());
WriteToDoc(last_format, last->c_str());
WriteToDoc(f, last_format, last->c_str());
}
void BroDoc::WriteBroDocObjTable(const BroDocObjList& l) const
void BroDoc::WriteBroDocObjTable(FILE* f, const BroDocObjList& l)
{
int max_id_col = 0;
int max_com_col = 0;
@ -315,38 +313,38 @@ void BroDoc::WriteBroDocObjTable(const BroDocObjList& l) const
}
// Start table.
WriteRepeatedChar('=', max_id_col);
WriteToDoc(" ");
WriteRepeatedChar(f, '=', max_id_col);
WriteToDoc(f, " ");
if ( max_com_col == 0 )
WriteToDoc("=");
WriteToDoc(f, "=");
else
WriteRepeatedChar('=', max_com_col);
WriteRepeatedChar(f, '=', max_com_col);
WriteToDoc("\n");
WriteToDoc(f, "\n");
for ( it = l.begin(); it != l.end(); ++it )
{
if ( it != l.begin() )
WriteToDoc("\n\n");
(*it)->WriteReSTCompact(reST_file, max_id_col);
WriteToDoc(f, "\n\n");
(*it)->WriteReSTCompact(f, max_id_col);
}
// End table.
WriteToDoc("\n");
WriteRepeatedChar('=', max_id_col);
WriteToDoc(" ");
WriteToDoc(f, "\n");
WriteRepeatedChar(f, '=', max_id_col);
WriteToDoc(f, " ");
if ( max_com_col == 0 )
WriteToDoc("=");
WriteToDoc(f, "=");
else
WriteRepeatedChar('=', max_com_col);
WriteRepeatedChar(f, '=', max_com_col);
WriteToDoc("\n\n");
WriteToDoc(f, "\n\n");
}
void BroDoc::WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
const char* heading, char underline, bool isShort) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjList& l, bool wantPublic,
const char* heading, char underline, bool isShort)
{
if ( l.empty() )
return;
@ -364,7 +362,7 @@ void BroDoc::WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
if ( it == l.end() )
return;
WriteSectionHeading(heading, underline);
WriteSectionHeading(f, heading, underline);
BroDocObjList filtered_list;
@ -375,13 +373,13 @@ void BroDoc::WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
}
if ( isShort )
WriteBroDocObjTable(filtered_list);
WriteBroDocObjTable(f, filtered_list);
else
WriteBroDocObjList(filtered_list);
WriteBroDocObjList(f, filtered_list);
}
void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline, bool isShort) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline, bool isShort)
{
BroDocObjMap::const_iterator it;
BroDocObjList l;
@ -389,24 +387,24 @@ void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, bool wantPublic,
for ( it = m.begin(); it != m.end(); ++it )
l.push_back(it->second);
WriteBroDocObjList(l, wantPublic, heading, underline, isShort);
WriteBroDocObjList(f, l, wantPublic, heading, underline, isShort);
}
void BroDoc::WriteBroDocObjList(const BroDocObjList& l, const char* heading,
char underline) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjList& l, const char* heading,
char underline)
{
WriteSectionHeading(heading, underline);
WriteBroDocObjList(l);
WriteSectionHeading(f, heading, underline);
WriteBroDocObjList(f, l);
}
void BroDoc::WriteBroDocObjList(const BroDocObjList& l) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjList& l)
{
for ( BroDocObjList::const_iterator it = l.begin(); it != l.end(); ++it )
(*it)->WriteReST(reST_file);
(*it)->WriteReST(f);
}
void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, const char* heading,
char underline) const
void BroDoc::WriteBroDocObjList(FILE* f, const BroDocObjMap& m, const char* heading,
char underline)
{
BroDocObjMap::const_iterator it;
BroDocObjList l;
@ -414,28 +412,28 @@ void BroDoc::WriteBroDocObjList(const BroDocObjMap& m, const char* heading,
for ( it = m.begin(); it != m.end(); ++it )
l.push_back(it->second);
WriteBroDocObjList(l, heading, underline);
WriteBroDocObjList(f, l, heading, underline);
}
void BroDoc::WriteToDoc(const char* format, ...) const
void BroDoc::WriteToDoc(FILE* f, const char* format, ...)
{
va_list argp;
va_start(argp, format);
vfprintf(reST_file, format, argp);
vfprintf(f, format, argp);
va_end(argp);
}
void BroDoc::WriteSectionHeading(const char* heading, char underline) const
void BroDoc::WriteSectionHeading(FILE* f, const char* heading, char underline)
{
WriteToDoc("%s\n", heading);
WriteRepeatedChar(underline, strlen(heading));
WriteToDoc("\n");
WriteToDoc(f, "%s\n", heading);
WriteRepeatedChar(f, underline, strlen(heading));
WriteToDoc(f, "\n");
}
void BroDoc::WriteRepeatedChar(char c, size_t n) const
void BroDoc::WriteRepeatedChar(FILE* f, char c, size_t n)
{
for ( size_t i = 0; i < n; ++i )
WriteToDoc("%c", c);
WriteToDoc(f, "%c", c);
}
void BroDoc::FreeBroDocObjPtrList(BroDocObjList& l)
@ -457,3 +455,143 @@ void BroDoc::AddFunction(BroDocObj* o)
else
functions[o->Name()]->Combine(o);
}
static void WritePluginSectionHeading(FILE* f, const plugin::Plugin* p)
{
string name = p->Name();
fprintf(f, "%s\n", name.c_str());
for ( size_t i = 0; i < name.size(); ++i )
fprintf(f, "-");
fprintf(f, "\n\n");
fprintf(f, "%s\n\n", p->Description());
}
static void WriteAnalyzerComponent(FILE* f, const analyzer::Component* c)
{
EnumType* atag = analyzer_mgr->GetTagEnumType();
string tag = fmt("ANALYZER_%s", c->CanonicalName());
if ( atag->Lookup("Analyzer", tag.c_str()) < 0 )
reporter->InternalError("missing analyzer tag for %s", tag.c_str());
fprintf(f, ":bro:enum:`Analyzer::%s`\n\n", tag.c_str());
}
static void WritePluginComponents(FILE* f, const plugin::Plugin* p)
{
plugin::Plugin::component_list components = p->Components();
plugin::Plugin::component_list::const_iterator it;
fprintf(f, "Components\n");
fprintf(f, "++++++++++\n\n");
for ( it = components.begin(); it != components.end(); ++it )
{
switch ( (*it)->Type() ) {
case plugin::component::ANALYZER:
WriteAnalyzerComponent(f,
dynamic_cast<const analyzer::Component*>(*it));
break;
case plugin::component::READER:
reporter->InternalError("docs for READER component unimplemented");
case plugin::component::WRITER:
reporter->InternalError("docs for WRITER component unimplemented");
default:
reporter->InternalError("docs for unknown component unimplemented");
}
}
}
static void WritePluginBifItems(FILE* f, const plugin::Plugin* p,
plugin::BifItem::Type t, const string& heading)
{
plugin::Plugin::bif_item_list bifitems = p->BifItems();
plugin::Plugin::bif_item_list::iterator it = bifitems.begin();
while ( it != bifitems.end() )
{
if ( it->GetType() != t )
it = bifitems.erase(it);
else
++it;
}
if ( bifitems.empty() )
return;
fprintf(f, "%s\n", heading.c_str());
for ( size_t i = 0; i < heading.size(); ++i )
fprintf(f, "+");
fprintf(f, "\n\n");
for ( it = bifitems.begin(); it != bifitems.end(); ++it )
{
BroDocObj* o = doc_ids[it->GetID()];
if ( o )
o->WriteReST(f);
else
reporter->Warning("No docs for ID: %s\n", it->GetID());
}
}
static void WriteAnalyzerTagDefn(FILE* f, EnumType* e)
{
e = new CommentedEnumType(e);
e->SetTypeID(copy_string("Analyzer::Tag"));
ID* dummy_id = new ID(copy_string("Analyzer::Tag"), SCOPE_GLOBAL, true);
dummy_id->SetType(e);
dummy_id->MakeType();
list<string>* r = new list<string>();
r->push_back("Unique identifiers for protocol analyzers.");
BroDocObj bdo(dummy_id, r, true);
bdo.WriteReST(f);
}
static bool IsAnalyzerPlugin(const plugin::Plugin* p)
{
plugin::Plugin::component_list components = p->Components();
plugin::Plugin::component_list::const_iterator it;
for ( it = components.begin(); it != components.end(); ++it )
if ( (*it)->Type() != plugin::component::ANALYZER )
return false;
return true;
}
void CreateProtoAnalyzerDoc(const char* filename)
{
FILE* f = fopen(filename, "w");
fprintf(f, "Protocol Analyzer Reference\n");
fprintf(f, "===========================\n\n");
WriteAnalyzerTagDefn(f, analyzer_mgr->GetTagEnumType());
plugin::Manager::plugin_list plugins = plugin_mgr->Plugins();
plugin::Manager::plugin_list::const_iterator it;
for ( it = plugins.begin(); it != plugins.end(); ++it )
{
if ( ! IsAnalyzerPlugin(*it) )
continue;
WritePluginSectionHeading(f, *it);
WritePluginComponents(f, *it);
WritePluginBifItems(f, *it, plugin::BifItem::CONSTANT,
"Options/Constants");
WritePluginBifItems(f, *it, plugin::BifItem::GLOBAL, "Globals");
WritePluginBifItems(f, *it, plugin::BifItem::TYPE, "Types");
WritePluginBifItems(f, *it, plugin::BifItem::EVENT, "Events");
WritePluginBifItems(f, *it, plugin::BifItem::FUNCTION, "Functions");
}
fclose(f);
}

View file

@ -81,15 +81,6 @@ public:
*/
void SetPacketFilter(const std::string& s);
/**
* Schedules documentation of a given set of ports being associated
* with a particular analyzer as a result of the current script
* being loaded -- the way the "dpd_config" table is changed.
* @param analyzer An analyzer that changed the "dpd_config" table.
* @param ports The set of ports assigned to the analyzer in table.
*/
void AddPortAnalysis(const std::string& analyzer, const std::string& ports);
/**
* Schedules documentation of a script option. An option is
* defined as any variable in the script that is declared 'const'
@ -242,7 +233,115 @@ public:
return reST_filename.c_str();
}
protected:
typedef std::list<const BroDocObj*> BroDocObjList;
typedef std::map<std::string, BroDocObj*> BroDocObjMap;
/**
* Writes out a table of BroDocObj's to the reST document
* @param f The file to write to.
* @param l A list of BroDocObj pointers
*/
static void WriteBroDocObjTable(FILE* f, const BroDocObjList& l);
/**
* Writes out given number of characters to reST document
* @param f The file to write to.
* @param c the character to write
* @param n the number of characters to write
*/
static void WriteRepeatedChar(FILE* f, char c, size_t n);
/**
* A wrapper to fprintf() that writes to the given file.
* @param f The file to write to.
* @param format A printf style format string.
*/
static void WriteToDoc(FILE* f, const char* format, ...);
/**
* Writes out a list of strings to the reST document.
* If the list is empty, prints a newline character.
* @param f The file to write to.
* @param format A printf style format string for elements of the list
* except for the last one in the list
* @param last_format A printf style format string to use for the last
* element of the list
* @param l A reference to a list of strings
*/
static void WriteStringList(FILE* f, const char* format, const char* last_format,
const std::list<std::string>& l);
/**
* @see WriteStringList(FILE* f, const char*, const char*,
* const std::list<std::string>&)
*/
static void WriteStringList(FILE* f, const char* format,
const std::list<std::string>& l){
WriteStringList(f, format, format, l);
}
/**
* Writes out a list of BroDocObj objects to the reST document
* @param f The file to write to.
* @param l A list of BroDocObj pointers
* @param wantPublic If true, filter out objects that are not declared
* in the global scope. If false, filter out those that are in
* the global scope.
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
* @param isShort Whether to write the full documentation or a "short"
* version (a single sentence)
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjList& l, bool wantPublic,
const char* heading, char underline,
bool isShort);
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document
* @see WriteBroDocObjList(FILE* f, const BroDocObjList&, bool, const char*, char,
bool)
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline,
bool isShort);
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjList& l, const char* heading,
char underline);
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjList& l);
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document
* @see WriteBroDocObjList(FILE* f, const BroDocObjList&, const char*, char)
*/
static void WriteBroDocObjList(FILE* f, const BroDocObjMap& m, const char* heading,
char underline);
/**
* Writes out a reST section heading
* @param f The file to write to.
* @param heading The title of the heading to create
* @param underline The character to use to underline the section title
* within the reST document
*/
static void WriteSectionHeading(FILE* f, const char* heading, char underline);
private:
FILE* reST_file;
std::string reST_filename;
std::string source_filename; // points to the basename of source file
@ -255,9 +354,6 @@ protected:
std::list<std::string> imports;
std::list<std::string> port_analysis;
typedef std::list<const BroDocObj*> BroDocObjList;
typedef std::map<std::string, BroDocObj*> BroDocObjMap;
BroDocObjList options;
BroDocObjList constants;
BroDocObjList state_vars;
@ -272,107 +368,6 @@ protected:
BroDocObjList all;
/**
* Writes out a list of strings to the reST document.
* If the list is empty, prints a newline character.
* @param format A printf style format string for elements of the list
* except for the last one in the list
* @param last_format A printf style format string to use for the last
* element of the list
* @param l A reference to a list of strings
*/
void WriteStringList(const char* format, const char* last_format,
const std::list<std::string>& l) const;
/**
* @see WriteStringList(const char*, const char*,
* const std::list<std::string>&)
*/
void WriteStringList(const char* format,
const std::list<std::string>& l) const
{
WriteStringList(format, format, l);
}
/**
* Writes out a table of BroDocObj objects to the reST document.
* @param l A list of BroDocObj pointers
*/
void WriteBroDocObjTable(const BroDocObjList& l) const;
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
* @param wantPublic If true, filter out objects that are not declared
* in the global scope. If false, filter out those that are in
* the global scope.
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
* @param isShort Whether to write the full documentation or a "short"
* version (a single sentence)
*/
void WriteBroDocObjList(const BroDocObjList& l, bool wantPublic,
const char* heading, char underline,
bool isShort) const;
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document.
* @see WriteBroDocObjList(const BroDocObjList&, bool, const char*, char,
* bool)
*/
void WriteBroDocObjList(const BroDocObjMap& m, bool wantPublic,
const char* heading, char underline,
bool isShort) const;
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
* @param heading The title of the section to create in the reST doc.
* @param underline The character to use to underline the reST
* section heading.
*/
void WriteBroDocObjList(const BroDocObjList& l, const char* heading,
char underline) const;
/**
* Writes out a list of BroDocObj objects to the reST document
* @param l A list of BroDocObj pointers
*/
void WriteBroDocObjList(const BroDocObjList& l) const;
/**
* Wraps the BroDocObjMap into a BroDocObjList and then writes that list
* to the reST document.
* @see WriteBroDocObjList(const BroDocObjList&, const char*, char)
*/
void WriteBroDocObjList(const BroDocObjMap& m, const char* heading,
char underline) const;
/**
* A wrapper to fprintf() that always uses the reST document
* for the FILE* argument.
* @param format A printf style format string.
*/
void WriteToDoc(const char* format, ...) const;
/**
* Writes out a reST section heading
* @param heading The title of the heading to create
* @param underline The character to use to underline the section title
* within the reST document
*/
void WriteSectionHeading(const char* heading, char underline) const;
/**
* Writes out a given number of characters to the reST document.
* @param c The character to write.
* @param n The number of characters to write.
*/
void WriteRepeatedChar(char c, size_t n) const;
/**
* Writes out the reST for either the script's public or private interface
* @param heading The title of the interface section heading
@ -387,7 +382,6 @@ protected:
*/
void WriteInterface(const char* heading, char underline, char subunderline,
bool isPublic, bool isShort) const;
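The two flags select which half of the interface is emitted and at what depth of detail; hypothetical calls, with heading characters chosen arbitrarily:

// Full documentation of globally scoped identifiers:
WriteInterface("Public Interface", '-', '~', true, false);
// One-sentence summaries of module-private identifiers:
WriteInterface("Private Interface", '-', '~', false, true);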
private:
/**
* Frees memory allocated to BroDocObj objects in a given list.
@ -413,4 +407,10 @@ private:
};
};
/**
* Writes out plugin index documentation for all analyzer plugins.
* @param filename The name of the file to write.
*/
void CreateProtoAnalyzerDoc(const char* filename);
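A driver would presumably invoke this once per documentation run; a hypothetical example (the file name is made up):

CreateProtoAnalyzerDoc("proto-analyzers.rst");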
#endif
View file
@ -4,6 +4,8 @@
#include "ID.h"
#include "BroDocObj.h"
map<string, BroDocObj*> doc_ids = map<string, BroDocObj*>();
BroDocObj* BroDocObj::last = 0;
BroDocObj::BroDocObj(const ID* id, std::list<std::string>*& reST,
@ -16,6 +18,7 @@ BroDocObj::BroDocObj(const ID* id, std::list<std::string>*& reST,
is_fake_id = is_fake;
use_role = 0;
FormulateShortDesc();
doc_ids[id->Name()] = this;
}
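Registering each object in doc_ids as a constructor side effect lets later passes resolve an identifier to its documentation by name. A hypothetical lookup (the identifier and the WriteReST() call are illustrative):

map<string, BroDocObj*>::const_iterator it = doc_ids.find("Notice::policy");
if ( it != doc_ids.end() )
	it->second->WriteReST(stdout); // assumed output method, for illustration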
BroDocObj::~BroDocObj()
View file
@ -4,6 +4,7 @@
#include <cstdio>
#include <string>
#include <list>
#include <map>
#include "ID.h"
@ -134,4 +135,9 @@ protected:
private:
};
/**
* Map identifiers to their broxygen documentation objects.
*/
extern map<string, BroDocObj*> doc_ids;
#endif
Some files were not shown because too many files have changed in this diff.