Merge remote-tracking branch 'origin/master' into topic/robin/radius-merge

Conflicts:
	scripts/base/init-default.bro
Robin Sommer 2014-05-15 11:10:11 -07:00
commit ebc8ebf5f9
504 changed files with 17125 additions and 5384 deletions

3
.gitmodules vendored

@@ -16,9 +16,6 @@
 [submodule "cmake"]
 	path = cmake
 	url = git://git.bro.org/cmake
-[submodule "magic"]
-	path = magic
-	url = git://git.bro.org/bromagic
 [submodule "src/3rdparty"]
 	path = src/3rdparty
 	url = git://git.bro.org/bro-3rdparty

498
CHANGES

@@ -1,4 +1,502 @@
2.2-427 | 2014-05-15 13:37:23 -0400
* Fix dynamic SumStats update on clusters (Bernhard Amann)
2.2-425 | 2014-05-08 16:34:44 -0700
* Fix reassembly of data w/ sizes beyond 32-bit capacities. (Jon Siwek)
Reassembly code (e.g. for TCP) now uses int64/uint64 (signedness
is situational) data types in place of int types in order to
support delivering data to analyzers that pass 2GB thresholds.
There are also changes in logic that accompany the change in data
types, e.g. to fix TCP sequence space arithmetic inconsistencies.
Another significant change is in the Analyzer API: the *Packet and
*Undelivered methods now use a uint64 in place of an int for the
relative sequence space offset parameter.
Addresses BIT-348.
* Fixing compiler warnings. (Robin Sommer)
* Update SNMP analyzer's DeliverPacket method signature. (Jon Siwek)
2.2-417 | 2014-05-07 10:59:22 -0500
* Change handling of atypical OpenSSL error case in x509 verification. (Jon Siwek)
* Fix memory leaks in X509 certificate parsing/verification. (Jon Siwek)
* Fix new []/delete mismatch in input::reader::Raw::DoClose(). (Jon Siwek)
* Fix buffer over-reads in file_analysis::Manager::Terminate() (Jon Siwek)
* Fix buffer overflows in IP address masking logic. (Jon Siwek)
That could occur either in taking a zero-length mask on an IPv6 address
(e.g. [fe80::]/0) or a reverse mask of length 128 on any address (e.g.
via the remask_addr BuiltIn Function).
* Fix new []/delete mismatch in ~Base64Converter. (Jon Siwek)
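The masking built-in functions involved can be exercised from script-land; a minimal sketch of the zero-bit edge case alongside ordinary usage (the addresses are illustrative, and this assumes the standard mask_addr BiF):

```bro
event bro_init()
	{
	# A zero-bit mask on an IPv6 address -- one of the over-read cases:
	print mask_addr([fe80::1], 0);       # yields the subnet ::/0
	# Ordinary usage for comparison:
	print mask_addr(192.168.1.100, 24);  # yields 192.168.1.0/24
	}
```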
2.2-410 | 2014-05-02 12:49:53 -0500
* Replace an unneeded OPENSSL_malloc call. (Jon Siwek)
2.2-409 | 2014-05-02 12:09:06 -0500
* Clean up and documentation for base SNMP script. (Jon Siwek)
* Update base SNMP script to now produce a snmp.log. (Seth Hall)
* Add DH support to SSL analyzer. When using DHE or DH-Anon, server
key parameters are now available in scriptland. Also add script to
alert on weak certificate keys or weak dh-params. (Bernhard Amann)
* Add a few more ciphers Bro did not know at all so far. (Bernhard Amann)
* Log chosen curve when using ec cipher suite in TLS. (Bernhard Amann)
2.2-397 | 2014-05-01 20:29:20 -0700
* Fix reference counting for lookup_ID() usages. (Jon Siwek)
2.2-395 | 2014-05-01 20:25:48 -0700
* Fix missing "irc-dcc-data" service field from IRC DCC connections.
(Jon Siwek)
* Correct a notice for heartbleed. The notice is thrown correctly,
just the message contained wrong values. (Bernhard Amann)
* Improve/standardize some malloc/realloc return value checks. (Jon
Siwek)
* Improve file analysis manager shutdown/cleanup. (Jon Siwek)
2.2-388 | 2014-04-24 18:38:07 -0700
* Fix decoding of MIME quoted-printable. (Mareq)
2.2-386 | 2014-04-24 18:22:29 -0700
* Do an Intel::ADDR lookup for the host field if we find an IP
address there. (jshlbrd)
2.2-381 | 2014-04-24 17:08:45 -0700
* Add Java version to software framework. (Brian Little)
2.2-379 | 2014-04-24 17:06:21 -0700
* Remove unused Val::attribs member. (Jon Siwek)
2.2-377 | 2014-04-24 16:57:54 -0700
* A larger set of SSL improvements and extensions. Addresses
BIT-1178. (Bernhard Amann)
- Fixes TLS protocol version detection. It should also
bail out correctly on non-TLS connections now.
- Adds support for a few TLS extensions, including
server_name, alpn, and ec-curves.
- Adds support for the heartbeat events.
- Add Heartbleed detector script.
- Adds basic support for OCSP stapling.
* Fix parsing of DNS TXT RRs w/ multiple character-strings.
Addresses BIT-1156. (Jon Siwek)
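After this fix, a handler receives every character-string of the TXT RR; a sketch (assuming the post-change dns_TXT_reply signature, which delivers the strings as a string_vec):

```bro
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec)
	{
	# Each element is one character-string from the TXT record.
	for ( i in strs )
		print fmt("TXT[%d] of %s: %s", i, ans$query, strs[i]);
	}
```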
2.2-353 | 2014-04-24 16:12:30 -0700
* Adapt HTTP partial content to cache file analysis IDs. (Jon Siwek)
* Adapt SSL analyzer to generate file analysis handles itself. (Jon
Siwek)
* Adapt more of HTTP analyzer to use cached file analysis IDs. (Jon
Siwek)
* Adapt IRC/FTP analyzers to cache file analysis IDs. (Jon Siwek)
* Refactor regex/signature AcceptingSet data structure and usages.
(Jon Siwek)
* Enforce data size limit when checking files for MIME matches. (Jon
Siwek)
* Refactor file analysis file ID lookup. (Jon Siwek)
2.2-344 | 2014-04-22 20:13:30 -0700
* Refactor various hex escaping code. (Jon Siwek)
2.2-341 | 2014-04-17 18:01:41 -0500
* Fix duplicate DNS log entries. (Robin Sommer)
2.2-341 | 2014-04-17 18:01:01 -0500
* Refactor initialization of ASCII log writer options. (Jon Siwek)
* Fix a memory leak in ASCII log writer. (Jon Siwek)
2.2-338 | 2014-04-17 17:48:17 -0500
* Disable input/logging threads setting their names on every
heartbeat. (Jon Siwek)
* Fix bug when clearing Bloom filter contents. Reported by
@colonelxc. (Matthias Vallentin)
2.2-335 | 2014-04-10 15:04:57 -0700
* Small logic fix for main SSL script. (Bernhard Amann)
* Update DPD signatures for detecting TLS 1.2. (Bernhard Amann)
* Remove unused data member of SMTP_Analyzer to silence a Coverity
warning. (Jon Siwek)
* Fix missing @load dependencies in some scripts. Also update the
unit test which is supposed to catch such errors. (Jon Siwek)
2.2-326 | 2014-04-08 15:21:51 -0700
* Add SNMP datagram parsing support. This supports parsing of SNMPv1
(RFC 1157), SNMPv2 (RFC 1901/3416), and SNMPv3 (RFC 3412). An
event is raised for each SNMP PDU type, though there are not
currently any event handlers for them and no default snmp.log
either. However, the simple presence of SNMP is now visible in
the conn.log service field and known_services.log. (Jon Siwek)
2.2-319 | 2014-04-03 15:53:25 -0700
* Improve __load__.bro creation for .bif.bro stubs. (Jon Siwek)
2.2-317 | 2014-04-03 10:51:31 -0400
* Add a uid field to the signatures.log. Addresses BIT-1171
(Anthony Verez)
2.2-315 | 2014-04-01 16:50:01 -0700
* Change logging's "#types" description of sets to "set". Addresses
BIT-1163 (Bernhard Amann)
2.2-313 | 2014-04-01 16:40:19 -0700
* Fix a couple of nits reported by Coverity. (Jon Siwek)
* Fix potential memory leak in IP frag reassembly reported by
Coverity. (Jon Siwek)
2.2-310 | 2014-03-31 18:52:22 -0700
* Fix memory leak and unchecked dynamic cast reported by Coverity.
(Jon Siwek)
* Fix potential memory leak in x509 parser reported by Coverity.
(Bernhard Amann)
2.2-304 | 2014-03-30 23:05:54 +0200
* Replace libmagic w/ Bro signatures for file MIME type
identification. Addresses BIT-1143. (Jon Siwek)
Includes:
- libmagic is no longer used at all. All MIME type detection is
done through new Bro signatures, and there's no longer a means
to get verbose file type descriptions. The majority of the
default file magic signatures are derived from the default magic
database of libmagic ~5.17.
- File magic signatures consist of two new constructs in the
signature rule parsing grammar: "file-magic" gives a regular
expression to match against, and "file-mime" gives the MIME type
string of content that matches the magic and an optional strength
value for the match.
- Modified signature/rule syntax for identifiers: they can no
longer start with a '-', which made for ambiguous syntax when
doing negative strength values in "file-mime". Also brought
syntax for Bro script identifiers in line with reality (they
can't start with numbers or include '-' at all).
- A new built-in function, "file_magic", can be used to get all
file magic matches and their corresponding strength against a
given chunk of data.
- The second parameter of the "identify_data" built-in function
can no longer be used to get verbose file type descriptions,
though it can still be used to get the strongest matching file
magic signature.
- The "file_transferred" event's "descr" parameter no longer
contains verbose file type descriptions.
- The BROMAGIC environment variable no longer changes any behavior
in Bro as magic databases are no longer used/installed.
- Removed "binary" and "octet-stream" mime type detections. They
don't provide any more information than an uninitialized
mime_type field which implicitly means no magic signature
matches and so the media type is unknown to Bro.
- The "fa_file" record now contains a "mime_types" field that
contains all magic signatures that matched the file content
(where the "mime_type" field is just a shortcut for the
strongest match).
- Reverted back to minimum requirement of CMake 2.6.3 from 2.8.0.
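As an illustration of the two new constructs, a file magic signature looks like the following sketch (the pattern, MIME string, and strength value here are illustrative examples, not the shipped definitions):

```bro
signature file-magic-example {
	# "file-magic" gives the content regex to match against.
	file-magic /^%PDF-/
	# "file-mime" names the resulting MIME type, with an optional strength.
	file-mime "application/pdf", 80
}
```

Script-land can then query matches directly via the new "file_magic" built-in function, which returns all matching signatures with their strengths, while the fa_file record's "mime_type" field keeps just the strongest match.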
* The logic for adding file ids to {orig,resp}_fuids fields of the
http.log incorrectly depended on the state of
{orig,resp}_mime_types fields, so sometimes not all file ids
associated w/ the session were logged. (Jon Siwek)
* Fix MHR script's use of fa_file$mime_type before checking if it's
initialized. (Jon Siwek)
2.2-294 | 2014-03-30 22:08:25 +0200
* Rework and move X509 certificate processing from the SSL protocol
analyzer to a dedicated file analyzer. This will allow us to
examine X509 certificates from sources other than SSL in the
future. Furthermore, Bro now parses more fields and extensions
from the certificates (e.g. elliptic curve information, subject
alternative names, basic constraints). Certificate validation also
was improved, should be easier to use and exposes information like
the full verified certificate chain. (Bernhard Amann)
This update changes the format of ssl.log and adds a new x509.log
with certificate information. Furthermore all x509 events and
handling functions have changed.
2.2-271 | 2014-03-30 20:25:17 +0200
* Add unit tests covering vector/set/table ctors/inits. (Jon Siwek)
* Fix parsing of "local" named table constructors. (Jon Siwek)
* Improve type checking of records. Addresses BIT-1159. (Jon Siwek)
2.2-267 | 2014-03-30 20:21:43 +0200
* Improve documentation of Bro clusters. Addresses BIT-1160.
(Daniel Thayer)
2.2-263 | 2014-03-30 20:19:05 +0200
* Don't include locations into serialization when cloning values.
(Robin Sommer)
2.2-262 | 2014-03-30 20:12:47 +0200
* Refactor SerializationFormat::EndWrite and ChunkedIO::Chunk memory
management. (Jon Siwek)
* Improve SerializationFormat's write buffer growth strategy. (Jon
Siwek)
* Add --parse-only option to exit after parsing scripts. May be
useful for syntax-checking tools. (Jon Siwek)
2.2-256 | 2014-03-30 19:57:28 +0200
* For the summary statistics framework, change all &create_expire
attributes to &read_expire in the cluster part. (Bernhard Amann)
2.2-254 | 2014-03-30 19:55:22 +0200
* Update instructions on how to build Bro docs. (Daniel Thayer)
2.2-251 | 2014-03-28 08:37:37 -0400
* Quick fix to the ElasticSearch writer. (Seth Hall)
2.2-250 | 2014-03-19 17:20:55 -0400
* Improve performance of MHR script by reducing cloned Vals in
a "when" scope. (Jon Siwek)
2.2-248 | 2014-03-19 14:47:40 -0400
* Make SumStats work incrementally and non-blocking in non-cluster
mode, but force it to operate by blocking if Bro is shutting
down. (Seth Hall)
2.2-244 | 2014-03-17 08:24:17 -0700
* Fix compile error on FreeBSD caused by wrong include file order.
(Bernhard Amann)
2.2-240 | 2014-03-14 10:23:54 -0700
* Derive results of DNS lookups from input when in BRO_DNS_FAKE
mode. Addresses BIT-1134. (Jon Siwek)
* Fixing a few cases of undefined behaviour introduced by recent
formatter work.
* Fixing compiler error. (Robin Sommer)
* Fixing (very unlikely) double delete in HTTP analyzer when
decapsulating CONNECTs. (Robin Sommer)
2.2-235 | 2014-03-13 16:21:19 -0700
* The Ascii writer has a new option LogAscii::use_json for writing
out logs as JSON. (Seth Hall)
* Ascii input reader now supports all config options as per-input
stream "config" values. (Seth Hall)
* Refactored formatters and updated the writers a bit. (Seth
Hall)
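Enabling the new JSON output is a one-line redef, e.g. in local.bro:

```bro
# Write all ASCII logs as JSON objects, one per line.
redef LogAscii::use_json = T;
```

The analogous Ascii input reader options can now be set per input stream through the stream's "config" table rather than only globally.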
2.2-229 | 2014-03-13 14:58:30 -0700
* Refactoring analyzer manager code to reuse
ApplyScheduledAnalyzers(). (Robin Sommer)
2.2-228 | 2014-03-13 14:25:53 -0700
* Teach async DNS lookup builtin-functions about BRO_DNS_FAKE.
Addresses BIT-1134. (Jon Siwek)
* Enable fake DNS mode for test suites.
* Improve analysis of TCP SYN/SYN-ACK reversal situations. (Jon
Siwek)
- Since it's just the handshake packets out of order, they're no
longer treated as partial connections, which some protocol analyzers
immediately refuse to look at.
- The TCP_Reassembler "is_orig" state failed to change, which led to
protocol analyzers sometimes using the wrong value for that.
- Add a unit test which exercises the Connection::FlipRoles() code
path (i.e. the SYN/SYN-ACK reversal situation).
Addresses BIT-1148.
* Fix bug in Connection::FlipRoles. It didn't swap address values
right and also didn't consider that analyzers might be scheduled
for the new connection tuple. Reported by Kevin McMahon. Addresses
BIT-1148. (Jon Siwek)
2.2-221 | 2014-03-12 17:23:18 -0700
* Teach configure script --enable-jemalloc, --with-jemalloc.
Addresses BIT-1128. (Jon Siwek)
2.2-218 | 2014-03-12 17:19:45 -0700
* Improve DBG_LOG macro (perf. improvement for --enable-debug mode).
(Jon Siwek)
* Silences some documentation warnings from Sphinx. (Jon Siwek)
2.2-215 | 2014-03-10 11:10:15 -0700
* Fix non-deterministic logging of unmatched DNS msgs. Addresses
BIT-1153 (Jon Siwek)
2.2-213 | 2014-03-09 08:57:37 -0700
* No longer accidentally attempting to parse NBSTAT RRs as SRV RRs
in DNS analyzer. (Seth Hall)
* Fix DNS SRV responses and a small issue with NBNS queries and
label length. (Seth Hall)
- DNS SRV responses never had the code written to actually
generate the dns_SRV_reply event. Adding this required
extending the event a bit to add extra information. SRV responses
now appear in the dns.log file correctly.
- Fixed an issue where some Microsoft NetBIOS Name Service lookups
would exceed the max label length for DNS and cause an incorrect
"DNS_label_too_long" weird.
2.2-210 | 2014-03-06 22:52:36 -0500
* Improve SSL logging so that connections are logged even when the
ssl_established event is not generated as well as other small SSL
fixes. (Bernhard Amann)
2.2-206 | 2014-03-03 16:52:28 -0800
* HTTP CONNECT proxy support. The HTTP analyzer now supports
handling HTTP CONNECT proxies. (Seth Hall)
* Expanding the HTTP methods used in the DPD signature to detect
HTTP traffic. (Seth Hall)
* Fixing removal of support analyzers. (Robin Sommer)
2.2-199 | 2014-03-03 16:34:20 -0800
* Allow iterating over bif functions with result type vector of any.
This changes the internal type that is used to signal that a
vector is unspecified from any to void. Addresses BIT-1144
(Bernhard Amann)
2.2-197 | 2014-02-28 15:36:58 -0800
* Remove test code. (Robin Sommer)
2.2-194 | 2014-02-28 14:50:53 -0800
* Remove packet sorter. Addresses BIT-700. (Bernhard Amann)
2.2-192 | 2014-02-28 09:46:43 -0800
* Update Mozilla root bundle. (Bernhard Amann)
2.2-190 | 2014-02-27 07:34:44 -0800
* Adjust timings of a few leak tests. (Bernhard Amann)
2.2-187 | 2014-02-25 07:24:42 -0800
* More Google TLS extensions that are being actively used. (Bernhard
Amann)
* Remove unused, and potentially unsafe, function
ListVal::IncludedInString. (Bernhard Amann)
2.2-184 | 2014-02-24 07:28:18 -0800
* New TLS constants from
https://tools.ietf.org/html/draft-bmoeller-tls-downgrade-scsv-01.
(Bernhard Amann)
2.2-180 | 2014-02-20 17:29:14 -0800
* New SSL alert descriptions from
https://tools.ietf.org/html/draft-ietf-tls-applayerprotoneg-04.
(Bernhard Amann)
* Update SQLite. (Bernhard Amann)
2.2-177 | 2014-02-20 17:27:46 -0800
* Update to libmagic version 5.17. Addresses BIT-1136. (Jon Siwek)
2.2-174 | 2014-02-14 12:07:04 -0800
* Support for MPLS over VLAN. (Chris Kanich)
2.2-173 | 2014-02-14 10:50:15 -0800
* Fix misidentification of SOCKS traffic that in particular seemed


@@ -1,5 +1,5 @@
 project(Bro C CXX)
-cmake_minimum_required(VERSION 2.8.0 FATAL_ERROR)
+cmake_minimum_required(VERSION 2.6.3 FATAL_ERROR)
 include(cmake/CommonCMakeConfig.cmake)
 ########################################################################
@@ -16,17 +16,12 @@ endif ()
 get_filename_component(BRO_SCRIPT_INSTALL_PATH ${BRO_SCRIPT_INSTALL_PATH}
                        ABSOLUTE)
-set(BRO_MAGIC_INSTALL_PATH ${BRO_ROOT_DIR}/share/bro/magic)
-set(BRO_MAGIC_SOURCE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/magic/database)
 configure_file(bro-path-dev.in ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev)
 file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.sh
      "export BROPATH=`${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
-     "export BROMAGIC=\"${BRO_MAGIC_SOURCE_PATH}\"\n"
      "export PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
 file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.csh
      "setenv BROPATH `${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
-     "setenv BROMAGIC \"${BRO_MAGIC_SOURCE_PATH}\"\n"
      "setenv PATH \"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
 file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/VERSION" VERSION LIMIT_COUNT 1)
@@ -39,32 +34,6 @@ set(VERSION_MAJ_MIN "${VERSION_MAJOR}.${VERSION_MINOR}")
 ########################################################################
 ## Dependency Configuration
-include(ExternalProject)
-# LOG_* options to ExternalProject_Add appear in CMake 2.8.3. If
-# available, using them hides external project configure/build output.
-if("${CMAKE_VERSION}" VERSION_GREATER 2.8.2)
-    set(EXTERNAL_PROJECT_LOG_OPTIONS
-        LOG_DOWNLOAD 1 LOG_UPDATE 1 LOG_CONFIGURE 1 LOG_BUILD 1 LOG_INSTALL 1)
-else()
-    set(EXTERNAL_PROJECT_LOG_OPTIONS)
-endif()
-set(LIBMAGIC_PREFIX ${CMAKE_CURRENT_BINARY_DIR}/libmagic-prefix)
-set(LIBMAGIC_INCLUDE_DIR ${LIBMAGIC_PREFIX}/include)
-set(LIBMAGIC_LIB_DIR ${LIBMAGIC_PREFIX}/lib)
-set(LIBMAGIC_LIBRARY ${LIBMAGIC_LIB_DIR}/libmagic.a)
-ExternalProject_Add(libmagic
-    PREFIX ${LIBMAGIC_PREFIX}
-    URL ${CMAKE_CURRENT_SOURCE_DIR}/src/3rdparty/file-5.16.tar.gz
-    CONFIGURE_COMMAND ./configure --enable-static --disable-shared
-                      --prefix=${LIBMAGIC_PREFIX}
-                      --includedir=${LIBMAGIC_INCLUDE_DIR}
-                      --libdir=${LIBMAGIC_LIB_DIR}
-    BUILD_IN_SOURCE 1
-    ${EXTERNAL_PROJECT_LOG_OPTIONS}
-)
 include(FindRequiredPackage)
 # Check cache value first to avoid displaying "Found sed" messages everytime
@@ -91,6 +60,10 @@ if (NOT BinPAC_ROOT_DIR AND
 endif ()
 FindRequiredPackage(BinPAC)
+if (ENABLE_JEMALLOC)
+    find_package(JeMalloc)
+endif ()
 if (MISSING_PREREQS)
     foreach (prereq ${MISSING_PREREQ_DESCS})
         message(SEND_ERROR ${prereq})
@@ -103,8 +76,8 @@ include_directories(BEFORE
     ${OpenSSL_INCLUDE_DIR}
     ${BIND_INCLUDE_DIR}
     ${BinPAC_INCLUDE_DIR}
-    ${LIBMAGIC_INCLUDE_DIR}
     ${ZLIB_INCLUDE_DIR}
+    ${JEMALLOC_INCLUDE_DIR}
 )
 # Optional Dependencies
@@ -182,8 +155,8 @@ set(brodeps
     ${PCAP_LIBRARY}
     ${OpenSSL_LIBRARIES}
     ${BIND_LIBRARY}
-    ${LIBMAGIC_LIBRARY}
     ${ZLIB_LIBRARY}
+    ${JEMALLOC_LIBRARIES}
     ${OPTLIBS}
 )
@@ -220,10 +193,6 @@ CheckOptionalBuildSources(aux/broctl Broctl INSTALL_BROCTL)
 CheckOptionalBuildSources(aux/bro-aux Bro-Aux INSTALL_AUX_TOOLS)
 CheckOptionalBuildSources(aux/broccoli Broccoli INSTALL_BROCCOLI)
-install(DIRECTORY ./magic/database/
-        DESTINATION ${BRO_MAGIC_INSTALL_PATH}
-)
 ########################################################################
 ## Packaging Setup
@@ -268,6 +237,7 @@ message(
     "\ngperftools found: ${HAVE_PERFTOOLS}"
     "\n        tcmalloc: ${USE_PERFTOOLS_TCMALLOC}"
     "\n       debugging: ${USE_PERFTOOLS_DEBUG}"
+    "\njemalloc: ${ENABLE_JEMALLOC}"
     "\ncURL: ${USE_CURL}"
     "\n"
     "\nDataSeries: ${USE_DATASERIES}"

56
NEWS

@@ -15,7 +15,7 @@ Dependencies
 - Bro no longer requires a pre-installed libmagic (because it now
   ships its own).
-- Compiling from source now needs a CMake version >= 2.8.0.
+- Libmagic is no longer a dependency.
 New Functionality
 -----------------
@@ -25,6 +25,27 @@ New Functionality
   parsing past the GRE header in between the delivery and payload IP
   packets.
+- The DNS analyzer now actually generates the dns_SRV_reply() event.
+  It had been documented before, yet was never raised.
+- Bro now uses "file magic signatures" to identify file types. These
+  are defined via two new constructs in the signature rule parsing
+  grammar: "file-magic" gives a regular expression to match against,
+  and "file-mime" gives the MIME type string of content that matches
+  the magic and an optional strength value for the match. (See also
+  "Changed Functionality" below for changes due to switching from
+  using libmagic to such signatures.)
+- A new built-in function, "file_magic", can be used to get all file
+  magic matches and their corresponding strength against a given chunk
+  of data.
+- The SSL analyzer now has support for heartbeats as well as for a few
+  extensions, including server_name, alpn, and ec-curves.
+- The SSL analyzer comes with a Heartbleed detector script in
+  protocols/ssl/heartbleed.bro.
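Enabling the detector is just a matter of loading that script, e.g. from local.bro:

```bro
# Load the Heartbleed detector script that ships with this release.
@load protocols/ssl/heartbleed
```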
 Changed Functionality
 ---------------------
@@ -41,12 +62,45 @@ Changed Functionality
   event x509_extension(c: connection, is_orig: bool, cert: X509, ext: X509_extension_info);
+- Generally, all x509 events and handling functions have changed their
+  signatures.
 - Bro no longer special-cases SYN/FIN/RST-filtered traces by not
   reporting missing data. The old behavior can be reverted by
   redef'ing "detect_filtered_trace".
   TODO: Update if we add a detector for filtered traces.
+- We have removed the packet sorter component.
+- Bro no longer uses libmagic to identify file types but instead now
+  comes with its own signature library (which initially is still
+  derived from libmagic's database). This leads to a number of further
+  changes with regards to MIME types:
+  * The second parameter of the "identify_data" built-in function
+    can no longer be used to get verbose file type descriptions,
+    though it can still be used to get the strongest matching file
+    magic signature.
+  * The "file_transferred" event's "descr" parameter no longer
+    contains verbose file type descriptions.
+  * The BROMAGIC environment variable no longer changes any behavior
+    in Bro as magic databases are no longer used/installed.
+  * Removed "binary" and "octet-stream" mime type detections. They
+    don't provide any more information than an uninitialized
+    mime_type field.
+  * The "fa_file" record now contains a "mime_types" field that
+    contains all magic signatures that matched the file content
+    (where the "mime_type" field is just a shortcut for the
+    strongest match).
+- dns_TXT_reply() now supports more than one string entry by receiving
+  a vector of strings.
Bro 2.2 Bro 2.2
======= =======


@@ -1 +1 @@
-2.2-173
+2.2-427

@@ -1 +1 @@
-Subproject commit 54b321009b750268526419bdbd841f421c839313
+Subproject commit b0877edc68af6ae08face528fc411c8ce21f2e30

@@ -1 +1 @@
-Subproject commit ebf9c0d88ae8230845b91f15755156f93ff21aa8
+Subproject commit 6dfc648d22d234d2ba4b1cb0fc74cda2eb023d1e

@@ -1 +1 @@
-Subproject commit 52ba12128e0673a09cbc7a68b8485f5d19030633
+Subproject commit 561ccdd6edec4ac5540f3d5565aefb59e7510634

@@ -1 +1 @@
-Subproject commit 66793ec3c602439e235bee705b654aefb7ac8dec
+Subproject commit c44ec9c13d87b8589d6f1549b9c523130fcc2a39

@@ -1 +1 @@
-Subproject commit c3a65f13063291ffcfd6d05c09d7724c02e9a40d
+Subproject commit 4e2ec35917acb883c7d2ab19af487f3863c687ae

2
cmake

@@ -1 +1 @@
-Subproject commit e7a46cb82ee10aa522c4d88115baf10181277d20
+Subproject commit 0f301aa08a970150195a2ea5b3ed43d2d98b35b3

10
configure vendored

@@ -32,6 +32,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
    --enable-perftools       force use of Google perftools on non-Linux systems
                             (automatically on when perftools is present on Linux)
    --enable-perftools-debug use Google's perftools for debugging
+   --enable-jemalloc        link against jemalloc
    --enable-ruby            build ruby bindings for broccoli (deprecated)
    --disable-broccoli       don't build or install the Broccoli library
    --disable-broctl         don't install Broctl
@@ -54,6 +55,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
  Optional Packages in Non-Standard Locations:
    --with-geoip=PATH        path to the libGeoIP install root
    --with-perftools=PATH    path to Google Perftools install root
+   --with-jemalloc=PATH     path to jemalloc install root
    --with-python=PATH       path to Python interpreter
    --with-python-lib=PATH   path to libpython
    --with-python-inc=PATH   path to Python headers
@@ -105,6 +107,7 @@ append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
 append_cache_entry ENABLE_DEBUG BOOL false
 append_cache_entry ENABLE_PERFTOOLS BOOL false
 append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
+append_cache_entry ENABLE_JEMALLOC BOOL false
 append_cache_entry BinPAC_SKIP_INSTALL BOOL true
 append_cache_entry BUILD_SHARED_LIBS BOOL true
 append_cache_entry INSTALL_AUX_TOOLS BOOL true
@@ -160,6 +163,9 @@ while [ $# -ne 0 ]; do
            append_cache_entry ENABLE_PERFTOOLS BOOL true
            append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL true
            ;;
+        --enable-jemalloc)
+            append_cache_entry ENABLE_JEMALLOC BOOL true
+            ;;
        --disable-broccoli)
            append_cache_entry INSTALL_BROCCOLI BOOL false
            ;;
@@ -214,6 +220,10 @@ while [ $# -ne 0 ]; do
        --with-perftools=*)
            append_cache_entry GooglePerftools_ROOT_DIR PATH $optarg
            ;;
+        --with-jemalloc=*)
+            append_cache_entry JEMALLOC_ROOT_DIR PATH $optarg
+            append_cache_entry ENABLE_JEMALLOC BOOL true
+            ;;
        --with-python=*)
            append_cache_entry PYTHON_EXECUTABLE PATH $optarg
            ;;


@@ -14,8 +14,6 @@ if (NOT ${retval} EQUAL 0)
     message(FATAL_ERROR "Problem setting BROPATH")
 endif ()
-set(BROMAGIC ${BRO_MAGIC_SOURCE_PATH})
 # Configure the Sphinx config file (expand variables CMake might know about).
 configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in
                ${CMAKE_CURRENT_BINARY_DIR}/conf.py
@@ -34,7 +32,6 @@ add_custom_target(sphinxdoc
     ${CMAKE_CURRENT_SOURCE_DIR}/ ${SPHINX_INPUT_DIR}
     # Use Bro/Broxygen to dynamically generate reST for all Bro scripts.
     COMMAND BROPATH=${BROPATH}
-            BROMAGIC=${BROMAGIC}
             ${CMAKE_BINARY_DIR}/src/bro
             -X ${CMAKE_CURRENT_BINARY_DIR}/broxygen.conf
             broxygen >/dev/null


@@ -10,7 +10,7 @@ common/general documentation, style sheets, JavaScript, etc. The Sphinx
 config file is produced from ``conf.py.in``, and can be edited to change
 various Sphinx options.
-There is also a custom Sphinx domain implemented in ``source/ext/bro.py``
+There is also a custom Sphinx domain implemented in ``ext/bro.py``
 which adds some reST directives and roles that aid in generating useful
 index entries and cross-references. Other extensions can be added in
 a similar fashion.
@@ -19,7 +19,8 @@ The ``make doc`` target in the top-level Makefile can be used to locally
 render the reST files into HTML. That target depends on:
 * Python interpreter >= 2.5
-* `Sphinx <http://sphinx.pocoo.org/>`_ >= 1.0.1
+* `Sphinx <http://sphinx-doc.org/>`_ >= 1.0.1
+* Doxygen (required only for building the Broccoli API doc)
 After completion, HTML documentation is symlinked in ``build/html``.


@@ -15,9 +15,9 @@ conditions specific to your particular case.
 In the following sections, we present a few examples of common uses of
 Bro as an IDS.
-------------------------------------------------
+-------------------------------------------------
 Detecting an FTP Brute-force Attack and Notifying
-------------------------------------------------
+-------------------------------------------------
 For the purpose of this exercise, we define FTP brute-forcing as too many
 rejected usernames and passwords occurring from a single address. We


@@ -1,18 +1,19 @@
========================
Bro Cluster Architecture
========================
Bro is not multithreaded, so once the limitations of a single processor core
are reached the only option currently is to spread the workload across many
cores, or even many physical computers. The cluster deployment scenario for
Bro is the current solution to build these larger systems. The tools and
scripts that accompany Bro provide the structure to easily manage many Bro
processes examining packets and doing correlation activities but acting as
a singular, cohesive entity. This document describes the Bro cluster
architecture. For information on how to configure a Bro cluster,
see the documentation for
:doc:`BroControl <../components/broctl/README>`.
Architecture Architecture
--------------- ---------------
@@ -41,11 +42,11 @@ messages and notices from the rest of the nodes in the cluster using the Bro
communications protocol. The result is a single log instead of many
discrete logs that you have to combine in some manner with post-processing.
The manager also takes the opportunity to de-duplicate notices, and it has the
ability to do so since it's acting as the choke point for notices and how
notices might be processed into actions (e.g., emailing, paging, or blocking).
The manager process is started first by BroControl and it only opens its
designated port and waits for connections; it doesn't initiate any
connections to the rest of the cluster. Once the workers are started and
connect to the manager, logs and notices will start arriving to the manager
process from the workers.
@@ -58,12 +59,11 @@ the workers by alleviating the need for all of the workers to connect
directly to each other.
Examples of synchronized state from the scripts that ship with Bro include
the full list of "known" hosts and services (which are hosts or services
identified as performing full TCP handshakes) or the set of protocols that
have been identified on a connection. If worker A detects host 1.2.3.4 as
an active host, it would be beneficial for worker B to know that as well.
So worker A shares that information as an insertion to a set which travels
to the cluster's proxy, and the proxy sends that same set insertion to
worker B. The result is that worker A and worker B have shared knowledge
about hosts and services that are active on the network being monitored.
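A minimal sketch of this sharing mechanism using Bro's ``&synchronized`` attribute; the variable name and event choice here are illustrative, not taken from the shipped scripts:

```bro
# Hypothetical example: a set whose insertions propagate through the
# proxy to every other cluster node.
global active_hosts: set[addr] &synchronized;

event connection_established(c: connection)
	{
	# When this worker sees a new active host, the insertion below is
	# forwarded to the proxy, which relays it to the other workers.
	if ( c$id$orig_h !in active_hosts )
		add active_hosts[c$id$orig_h];
	}
```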
@@ -79,7 +79,7 @@ necessary for the number of workers they are serving. It is best to start
with a single proxy and add more if communication performance problems are
found.
Bro processes acting as proxies don't tend to be extremely hard on CPU
or memory and users frequently run proxy processes on the same physical
host as the manager.
@@ -106,7 +106,7 @@ dedicated to being workers with each one containing dual 6-core processors.
Once a flow-based load balancer is put into place this model is extremely
easy to scale. It is recommended that you estimate the amount of
hardware you will need to fully analyze your traffic. If more is needed it's
relatively easy to increase the size of the cluster in most cases.
Frontend Options
@@ -147,14 +147,13 @@ On host flow balancing
PF_RING
^^^^^^^
The PF_RING software for Linux has a "clustering" feature which will do
flow-based load balancing across a number of processes that are sniffing the
same interface. This allows you to easily take advantage of multiple
cores in a single physical host because Bro's main event loop is single
threaded and can't natively utilize all of the cores. If you want to use
PF_RING, see the documentation on `how to configure Bro with PF_RING
<http://bro.org/documentation/load-balancing.html>`_.
Netmap
^^^^^^
@@ -167,7 +166,7 @@ Click! Software Router
^^^^^^^^^^^^^^^^^^^^^^
Click! can be used for flow based load balancing with a simple configuration.
This solution is not recommended on
Linux due to Bro's PF_RING support and only as a last resort on other
operating systems since it causes a lot of overhead due to context switching
back and forth between kernel and userland several times per packet.
@@ -64,8 +64,8 @@ expect that signature file in the same directory as the Bro script. The
default extension of the file name is ``.sig``, and Bro appends that
automatically when necessary.
Signature Language for Network Traffic
======================================
Let's look at the format of a signature more closely. Each individual
signature has the format ``signature <id> { <attributes> }``. ``<id>``
@@ -286,6 +286,44 @@ two actions defined:
connection (``"http"``, ``"ftp"``, etc.). This is used by Bro's
dynamic protocol detection to activate analyzers on the fly.
Signature Language for File Content
===================================
The signature framework can also be used to identify MIME types of files
irrespective of the network protocol/connection over which the file is
transferred. A special type of signature can be written for this
purpose and will be used automatically by the :doc:`Files Framework
<file-analysis>` or by Bro scripts that use the :bro:see:`file_magic`
built-in function.
Conditions
----------
File signatures use a single type of content condition in the form of a
regular expression:
``file-magic /<regular expression>/``
This is analogous to the ``payload`` content condition for the network
traffic signature language described above. The difference is that
``payload`` signatures are applied to payloads of network connections,
but ``file-magic`` can be applied to any arbitrary data; it does not
have to be tied to a network protocol/connection.
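As a sketch of that second use case, :bro:see:`file_magic` can be applied to any string; assuming the 2.3-style interface, it returns a vector of matches, each carrying a MIME type string and a strength value:

```bro
event bro_init()
	{
	# Apply the loaded file-magic signatures to an arbitrary byte string.
	local matches = file_magic("%PDF-1.4 example content");

	for ( i in matches )
		print matches[i]$mime, matches[i]$strength;
	}
```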
Actions
-------
Upon matching a chunk of data, file signatures use the following action
to get information about that data's MIME type:
``file-mime <string> [, <integer>]``
The arguments include the MIME type string associated with the file
magic regular expression and an optional "strength" as a signed integer.
Since multiple file magic signatures may match against a given chunk of
data, the strength value may be used to help choose a "winner". Higher
values are considered stronger.
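Putting the condition and action together, a complete file signature might look like the following; the PDF example is illustrative, not one of the shipped signatures:

```bro
signature file-pdf {
	file-magic /^%PDF-/
	file-mime "application/pdf", 100
}
```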
Things to keep in mind when writing signatures
==============================================
@@ -12,11 +12,14 @@ Introduction Section
:maxdepth: 2
intro/index.rst
cluster/index.rst
install/index.rst
quickstart/index.rst
..
.. _using-bro:
Using Bro Section
=================
@@ -27,7 +30,6 @@ Using Bro Section
httpmonitor/index.rst
broids/index.rst
mimestats/index.rst
..
@@ -1,16 +1,21 @@
.. _upgrade-guidelines:
==============
How to Upgrade
==============
If you're doing an upgrade install (rather than a fresh install),
there are two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over. Regardless of which approach you choose,
if you are using BroControl, then after upgrading Bro you will need to
run "broctl check" (to verify that your new configuration is OK)
and "broctl install" to complete the upgrade process.
In the following we summarize general guidelines for upgrading; see
the :ref:`release-notes` for version-specific information.
Reusing Previous Install Prefix
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -35,7 +35,7 @@ before you begin:
To build Bro from source, the following additional dependencies are required:
* CMake 2.6.3 or greater (http://www.cmake.org)
* Make
* C/C++ compiler
* SWIG (http://www.swig.org)
@@ -80,7 +80,7 @@ that ``bash`` and ``python`` are in your ``PATH``):
Distributions of these dependencies can likely be obtained from your
preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
or Homebrew_). Specifically for MacPorts, the ``cmake``, ``swig``,
and ``swig-python`` packages provide the required dependencies.
Optional Dependencies
@@ -184,6 +184,11 @@ OpenBSD users, please see our `FAQ
<http://www.bro.org/documentation/faq.html>`_ if you are having
problems installing Bro.
Finally, if you want to build the Bro documentation (not required, because
all of the documentation for the latest Bro release is available on the
Bro web site), there are instructions in ``doc/README`` in the source
distribution.
Configure the Run-Time Environment
==================================
@@ -12,8 +12,10 @@ Quick Start Guide
Bro works on most modern, Unix-based systems and requires no custom
hardware. It can be downloaded in either pre-built binary package or
source code forms. See :ref:`installing-bro` for instructions on how to
install Bro.
In the examples below, ``$PREFIX`` is used to reference the Bro
installation root directory, which by default is ``/usr/local/bro`` if
you install from source.
Managing Bro with BroControl
@@ -21,7 +23,10 @@ Managing Bro with BroControl
BroControl is an interactive shell for easily operating/managing Bro
installations on a single system or even across multiple systems in a
traffic-monitoring cluster. This section explains how to use BroControl
to manage a stand-alone Bro installation. For instructions on how to
configure a Bro cluster, see the documentation for :doc:`BroControl
<../components/broctl/README>`.
A Minimal Starting Configuration
--------------------------------
@@ -292,9 +297,10 @@ tweak the most basic options. Here's some suggestions on what to explore next:
* We only looked at how to change options declared in the notice framework,
there's many more options to look at in other script packages.
* Continue reading with :ref:`Using Bro <using-bro>` chapter which goes
into more depth on working with Bro; then look at
:ref:`writing-scripts` for learning how to start writing your own
scripts.
* Look at the scripts in ``$PREFIX/share/bro/policy`` for further ones
you may want to load; you can browse their documentation at the
:ref:`overview of script packages <script-packages>`.
@@ -345,13 +345,13 @@ keyword. Unlike globals, constants can only be set or altered at
parse time if the ``&redef`` attribute has been used. Afterwards (in
runtime) the constants are unalterable. In most cases, re-definable
constants are used in Bro scripts as containers for configuration
options. For example, the configuration option to log passwords
decrypted from HTTP streams is stored in
:bro:see:`HTTP::default_capture_password` as shown in the stripped down
excerpt from :doc:`/scripts/base/protocols/http/main.bro` below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro
:lines: 9-11,20-22,121
Because the constant was declared with the ``&redef`` attribute, if we
needed to turn this option on globally, we could do so by adding the
magic
@@ -1 +0,0 @@
Subproject commit 99c6b89230e2b9b0e781c42b0b9412d2ab4e14b2
@@ -0,0 +1 @@
Support for X509 certificates with the file analysis framework.
@@ -0,0 +1 @@
@load ./main
@@ -0,0 +1,77 @@
@load base/frameworks/files
@load base/files/hash

module X509;

export {
	redef enum Log::ID += { LOG };

	type Info: record {
		## Current timestamp.
		ts: time &log;
		## File id of this certificate.
		id: string &log;
		## Basic information about the certificate.
		certificate: X509::Certificate &log;
		## The opaque wrapping the certificate. Mainly used
		## for the verify operations.
		handle: opaque of x509;
		## All extensions that were encountered in the certificate.
		extensions: vector of X509::Extension &default=vector();
		## Subject alternative name extension of the certificate.
		san: X509::SubjectAlternativeName &optional &log;
		## Basic constraints extension of the certificate.
		basic_constraints: X509::BasicConstraints &optional &log;
	};

	## Event for accessing logged records.
	global log_x509: event(rec: Info);
}

event bro_init() &priority=5
	{
	Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509]);
	}

redef record Files::Info += {
	## Information about X509 certificates. This is used to keep
	## certificate information until all events have been received.
	x509: X509::Info &optional;
};

event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=5
	{
	f$info$x509 = [$ts=f$info$ts, $id=f$id, $certificate=cert, $handle=cert_ref];
	}

event x509_extension(f: fa_file, ext: X509::Extension) &priority=5
	{
	if ( f$info?$x509 )
		f$info$x509$extensions[|f$info$x509$extensions|] = ext;
	}

event x509_ext_basic_constraints(f: fa_file, ext: X509::BasicConstraints) &priority=5
	{
	if ( f$info?$x509 )
		f$info$x509$basic_constraints = ext;
	}

event x509_ext_subject_alternative_name(f: fa_file, ext: X509::SubjectAlternativeName) &priority=5
	{
	if ( f$info?$x509 )
		f$info$x509$san = ext;
	}

event file_state_remove(f: fa_file) &priority=5
	{
	if ( ! f$info?$x509 )
		return;

	Log::write(LOG, f$info$x509);
	}
@@ -1 +1,2 @@
@load ./main.bro
@load ./magic
@@ -0,0 +1,2 @@
@load-sigs ./general
@load-sigs ./libmagic
@@ -0,0 +1,11 @@
# General purpose file magic signatures.

signature file-plaintext {
	file-magic /([[:print:][:space:]]{10})/
	file-mime "text/plain", -20
}

signature file-tar {
	file-magic /([[:print:]\x00]){100}(([[:digit:]\x00\x20]){8}){3}/
	file-mime "application/x-tar", 150
}
File diff suppressed because it is too large
@@ -41,15 +41,15 @@ export {
## If this file was transferred over a network
## connection this should show the host or hosts that
## the data sourced from.
tx_hosts: set[addr] &default=addr_set() &log;
## If this file was transferred over a network
## connection this should show the host or hosts that
## the data traveled to.
rx_hosts: set[addr] &default=addr_set() &log;
## Connection UIDs over which the file was transferred.
conn_uids: set[string] &default=string_set() &log;
## An identification of the source of the file data. E.g. it
## may be a network protocol over which it was transferred, or a
@@ -63,12 +63,13 @@ export {
depth: count &default=0 &log;
## A set of analysis types done during the file analysis.
analyzers: set[string] &default=string_set() &log;
## A mime type provided by the strongest file magic signature
## match against the *bof_buffer* field of :bro:see:`fa_file`,
## or in the cases where no buffering of the beginning of file
## occurs, an initial guess of the mime type based on the first
## data seen.
mime_type: string &log &optional;
## A filename for the file if one is available from the source
##! ``config``: setting ``tsv`` to the string ``T`` turns the output into ##! ``config``: setting ``tsv`` to the string ``T`` turns the output into
##! "tab-separated-value" mode where only a single header row with the column ##! "tab-separated-value" mode where only a single header row with the column
##! names is printed out as meta information, with no "# fields" prepended; no ##! names is printed out as meta information, with no "# fields" prepended; no
##! other meta data gets included in that mode. ##! other meta data gets included in that mode.
##! ##!
##! Example filter using this:: ##! Example filter using this::
##! ##!
##! local my_filter: Log::Filter = [$name = "my-filter", $writer = Log::WRITER_ASCII, $config = table(["tsv"] = "T")]; ##! local my_filter: Log::Filter = [$name = "my-filter", $writer = Log::WRITER_ASCII, $config = table(["tsv"] = "T")];
##! ##!
module LogAscii; module LogAscii;
@@ -17,27 +17,51 @@ module LogAscii;
export {
	## If true, output everything to stdout rather than
	## into files. This is primarily for debugging purposes.
	##
	## This option is also available as a per-filter ``$config`` option.
	const output_to_stdout = F &redef;

	## If true, the default will be to write logs in a JSON format.
	##
	## This option is also available as a per-filter ``$config`` option.
	const use_json = F &redef;

	## Format of timestamps when writing out JSON. By default, the JSON
	## formatter will use double values for timestamps which represent the
	## number of seconds from the UNIX epoch.
	const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;

	## If true, include lines with log meta information such as column names
	## with types, the values of ASCII logging options that are in use, and
	## the time when the file was opened and closed (the latter at the end).
	##
	## If writing in JSON format, this is implicitly disabled.
	const include_meta = T &redef;

	## Prefix for lines with meta information.
	##
	## This option is also available as a per-filter ``$config`` option.
	const meta_prefix = "#" &redef;

	## Separator between fields.
	##
	## This option is also available as a per-filter ``$config`` option.
	const separator = Log::separator &redef;

	## Separator between set elements.
	##
	## This option is also available as a per-filter ``$config`` option.
	const set_separator = Log::set_separator &redef;

	## String to use for empty fields. This should be different from
	## *unset_field* to make the output unambiguous.
	##
	## This option is also available as a per-filter ``$config`` option.
	const empty_field = Log::empty_field &redef;

	## String to use for an unset &optional field.
	##
	## This option is also available as a per-filter ``$config`` option.
	const unset_field = Log::unset_field &redef;
}
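For example, a site could switch the ASCII writer to JSON output along these lines, e.g. in ``local.bro`` (``JSON::TS_ISO8601`` is assumed here to be one of the ``JSON::TimestampFormat`` values alongside the ``JSON::TS_EPOCH`` default shown above):

```bro
# Emit all ASCII logs as JSON, with ISO 8601 timestamps instead of
# epoch seconds.
redef LogAscii::use_json = T;
redef LogAscii::json_timestamps = JSON::TS_ISO8601;
```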
@@ -206,6 +206,38 @@ export {
## The maximum amount of time a plugin can delay email from being sent.
const max_email_delay = 15secs &redef;

## Contains a portion of :bro:see:`fa_file` that's also contained in
## :bro:see:`Notice::Info`.
type FileInfo: record {
	fuid: string;            ##< File UID.
	desc: string;            ##< File description from e.g.
	                         ##< :bro:see:`Files::describe`.
	mime: string &optional;  ##< Strongest mime type match for file.
	cid: conn_id &optional;  ##< Connection tuple over which file is sent.
	cuid: string &optional;  ##< Connection UID over which file is sent.
};

## Creates a record containing a subset of a full :bro:see:`fa_file` record.
##
## f: record containing metadata about a file.
##
## Returns: record containing a subset of fields copied from *f*.
global create_file_info: function(f: fa_file): Notice::FileInfo;

## Populates file-related fields in a notice info record.
##
## f: record containing metadata about a file.
##
## n: a notice record that needs file-related fields populated.
global populate_file_info: function(f: fa_file, n: Notice::Info);

## Populates file-related fields in a notice info record.
##
## fi: record containing metadata about a file.
##
## n: a notice record that needs file-related fields populated.
global populate_file_info2: function(fi: Notice::FileInfo, n: Notice::Info);

## A log postprocessing function that implements emailing the contents
## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`.
## The rotated log is removed upon being sent.
@@ -493,6 +525,42 @@ function execute_with_notice(cmd: string, n: Notice::Info)
#system_env(cmd, tags);
}
function create_file_info(f: fa_file): Notice::FileInfo
	{
	local fi: Notice::FileInfo = Notice::FileInfo($fuid = f$id,
	                                              $desc = Files::describe(f));

	if ( f?$mime_type )
		fi$mime = f$mime_type;

	if ( f?$conns && |f$conns| == 1 )
		for ( id in f$conns )
			{
			fi$cid = id;
			fi$cuid = f$conns[id]$uid;
			}

	return fi;
	}

function populate_file_info(f: fa_file, n: Notice::Info)
	{
	populate_file_info2(create_file_info(f), n);
	}

function populate_file_info2(fi: Notice::FileInfo, n: Notice::Info)
	{
	if ( ! n?$fuid )
		n$fuid = fi$fuid;

	if ( ! n?$file_mime_type && fi?$mime )
		n$file_mime_type = fi$mime;

	n$file_desc = fi$desc;
	n$id = fi$cid;
	n$uid = fi$cuid;
	}
# This is run synchronously as a function before all of the other
# notice related functions and events. It also modifies the
# :bro:type:`Notice::Info` record in place.
@@ -503,21 +571,7 @@ function apply_policy(n: Notice::Info)
n$ts = network_time();
if ( n?$f )
	populate_file_info(n$f, n);
if ( n?$conn )
	{
@@ -185,6 +185,7 @@ export {
["RPC_underflow"] = ACTION_LOG,
["RST_storm"] = ACTION_LOG,
["RST_with_data"] = ACTION_LOG,
["SSL_many_server_names"] = ACTION_LOG,
["simultaneous_open"] = ACTION_LOG_PER_CONN,
["spontaneous_FIN"] = ACTION_IGNORE,
["spontaneous_RST"] = ACTION_IGNORE,
@@ -70,6 +70,9 @@ export {
## The network time at which a signature matching type of event
## to be logged has occurred.
ts: time &log;
## A unique identifier of the connection which triggered the
## signature match event.
uid: string &log &optional;
## The host which triggered the signature match event.
src_addr: addr &log &optional;
## The host port on which the signature-matching activity
@@ -167,7 +170,7 @@ event signature_match(state: signature_state, msg: string, data: string)
# Trim the matched data down to something reasonable
if ( |data| > 140 )
	data = fmt("%s...", sub_bytes(data, 0, 140));
local src_addr: addr;
local src_port: port;
local dst_addr: addr;
@@ -192,6 +195,7 @@ event signature_match(state: signature_state, msg: string, data: string)
{
local info: Info = [$ts=network_time(),
                    $note=Sensitive_Signature,
                    $uid=state$conn$uid,
                    $src_addr=src_addr,
                    $src_port=src_port,
                    $dst_addr=dst_addr,
@@ -212,11 +216,11 @@ event signature_match(state: signature_state, msg: string, data: string)
if ( ++count_per_resp[dst,sig_id] in count_thresholds )
	{
	NOTICE([$note=Count_Signature, $conn=state$conn,
	        $msg=msg,
	        $n=count_per_resp[dst,sig_id],
	        $sub=fmt("%d matches of signature %s on host %s",
	                 count_per_resp[dst,sig_id],
	                 sig_id, dst)]);
	}
}
@@ -290,16 +294,16 @@ event signature_match(state: signature_state, msg: string, data: string)
orig, vcount, resp);
Log::write(Signatures::LOG,
           [$ts=network_time(),
            $note=Multiple_Signatures,
            $src_addr=orig,
            $dst_addr=resp, $sig_id=sig_id, $sig_count=vcount,
            $event_msg=fmt("%s different signatures triggered", vcount),
            $sub_msg=vert_scan_msg]);
NOTICE([$note=Multiple_Signatures, $src=orig, $dst=resp,
        $msg=fmt("%s different signatures triggered", vcount),
        $n=vcount, $sub=vert_scan_msg]);
last_vthresh[orig] = vcount;
}


@@ -287,6 +287,13 @@ function parse_mozilla(unparsed_version: string): Description
		if ( 2 in parts )
			v = parse(parts[2])$version;
		}
	else if ( / Java\/[0-9]\./ in unparsed_version )
		{
		software_name = "Java";
		parts = split_all(unparsed_version, /Java\/[0-9\._]*/);
		if ( 2 in parts )
			v = parse(parts[2])$version;
		}

	return [$version=v, $unparsed_version=unparsed_version, $name=software_name];
	}


@@ -28,10 +28,6 @@ export {
	## values for a sumstat.
	global cluster_ss_request: event(uid: string, ss_name: string, cleanup: bool);

-	# Event sent by nodes that are collecting sumstats after receiving a
-	# request for the sumstat from the manager.
-	#global cluster_ss_response: event(uid: string, ss_name: string, data: ResultTable, done: bool, cleanup: bool);

	## This event is sent by the manager in a cluster to initiate the
	## collection of a single key value from a sumstat. It's typically used
	## to get intermediate updates before the break interval triggers to
@@ -62,7 +58,7 @@ export {
# Add events to the cluster framework to make this work.
redef Cluster::manager2worker_events += /SumStats::cluster_(ss_request|get_result|threshold_crossed)/;
redef Cluster::manager2worker_events += /SumStats::(get_a_key)/;
-redef Cluster::worker2manager_events += /SumStats::cluster_(ss_response|send_result|key_intermediate_response)/;
+redef Cluster::worker2manager_events += /SumStats::cluster_(send_result|key_intermediate_response)/;
redef Cluster::worker2manager_events += /SumStats::(send_a_key|send_no_key)/;

@if ( Cluster::local_node_type() != Cluster::MANAGER )
@@ -74,7 +70,7 @@ global recent_global_view_keys: table[string, Key] of count &create_expire=1min
# Result tables indexed on a uid that are currently being sent to the
# manager.
-global sending_results: table[string] of ResultTable = table() &create_expire=1min;
+global sending_results: table[string] of ResultTable = table() &read_expire=1min;

# This is done on all non-manager node types in the event that a sumstat is
# being collected somewhere other than a worker.
@@ -144,7 +140,7 @@ event SumStats::cluster_ss_request(uid: string, ss_name: string, cleanup: bool)
	sending_results[uid] = (ss_name in result_store) ? result_store[ss_name] : table();

	# Lookup the actual sumstats and reset it, the reference to the data
	# currently stored will be maintained internally from the
	# sending_results table.
	if ( cleanup && ss_name in stats_store )
		reset(stats_store[ss_name]);
@@ -159,7 +155,7 @@ event SumStats::cluster_get_result(uid: string, ss_name: string, key: Key, clean
	if ( uid in sending_results && key in sending_results[uid] )
		{
		# Note: copy is needed to compensate for a serialization caching issue. This
		# should be changed to something else later.
		event SumStats::cluster_send_result(uid, ss_name, key, copy(sending_results[uid][key]), cleanup);
		delete sending_results[uid][key];
		}
@@ -170,12 +166,12 @@ event SumStats::cluster_get_result(uid: string, ss_name: string, key: Key, clean
			event SumStats::cluster_send_result(uid, ss_name, key, table(), cleanup);
			}
		}
	else
		{
		if ( ss_name in result_store && key in result_store[ss_name] )
			{
			# Note: copy is needed to compensate for a serialization caching issue. This
			# should be changed to something else later.
			event SumStats::cluster_send_result(uid, ss_name, key, copy(result_store[ss_name][key]), cleanup);
			}
		else
@@ -195,6 +191,19 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
	threshold_tracker[ss_name][key] = thold_index;
	}
# request-key is a no-op on the workers.
# It should only be called by the manager. Since we usually run the same scripts
# on the workers and the manager, it might also be called by the workers, so we
# just ignore it here.
#
# There is a small chance that people will try running it on events that are
# only raised on the workers. This does not work at the moment and we cannot
# throw an error message, because we cannot distinguish it from the "script is
# running it everywhere" case. But people should notice that they do not get
# results. Not entirely pretty, sorry :(
function request_key(ss_name: string, key: Key): Result
	{
	return Result();
	}
@endif

@@ -203,7 +212,7 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
# This variable is maintained by manager nodes as they collect and aggregate
# results.
# Index on a uid.
-global stats_keys: table[string] of set[Key] &create_expire=1min
+global stats_keys: table[string] of set[Key] &read_expire=1min
	&expire_func=function(s: table[string] of set[Key], idx: string): interval
		{
		Reporter::warning(fmt("SumStat key request for the %s SumStat uid took longer than 1 minute and was automatically cancelled.", idx));
@@ -215,17 +224,16 @@ global stats_keys: table[string] of set[Key] &create_expire=1min
# matches the number of peer nodes that results should be coming from, the
# result is written out and deleted from here.
# Indexed on a uid.
-# TODO: add an &expire_func in case not all results are received.
-global done_with: table[string] of count &create_expire=1min &default=0;
+global done_with: table[string] of count &read_expire=1min &default=0;

# This variable is maintained by managers to track intermediate responses as
# they are getting a global view for a certain key.
# Indexed on a uid.
-global key_requests: table[string] of Result &create_expire=1min;
+global key_requests: table[string] of Result &read_expire=1min;

# Store uids for dynamic requests here to avoid cleanup on the uid.
# (This needs to be done differently!)
-global dynamic_requests: set[string] &create_expire=1min;
+global dynamic_requests: set[string] &read_expire=1min;

# This variable is maintained by managers to prevent overwhelming communication due
# to too many intermediate updates. Each sumstat is tracked separately so that
@@ -414,7 +422,7 @@ event SumStats::cluster_send_result(uid: string, ss_name: string, key: Key, resu
	# Mark that a worker is done.
	if ( uid !in done_with )
		done_with[uid] = 0;

	#print fmt("MANAGER: got a result for %s %s from %s", uid, key, get_event_peer()$descr);
	++done_with[uid];


@@ -2,23 +2,59 @@
module SumStats;

event SumStats::process_epoch_result(ss: SumStat, now: time, data: ResultTable)
	{
	# TODO: is this the right processing group size?
	local i = 50;

	for ( key in data )
		{
		ss$epoch_result(now, key, data[key]);
		delete data[key];

		if ( |data| == 0 )
			{
			if ( ss?$epoch_finished )
				ss$epoch_finished(now);

			# Now that no data is left we can finish.
			return;
			}

		i = i - 1;
		if ( i == 0 )
			{
			# TODO: is this the right interval?
			schedule 0.01 secs { process_epoch_result(ss, now, data) };
			break;
			}
		}
	}

event SumStats::finish_epoch(ss: SumStat)
	{
	if ( ss$name in result_store )
		{
-		local now = network_time();
		if ( ss?$epoch_result )
			{
			local data = result_store[ss$name];
-			# TODO: don't block here.
-			for ( key in data )
-				ss$epoch_result(now, key, data[key]);
+			local now = network_time();
+
+			if ( bro_is_terminating() )
+				{
+				for ( key in data )
+					ss$epoch_result(now, key, data[key]);
+
+				if ( ss?$epoch_finished )
+					ss$epoch_finished(now);
+				}
+			else
+				{
+				event SumStats::process_epoch_result(ss, now, data);
+				}
			}

-		if ( ss?$epoch_finished )
-			ss$epoch_finished(now);
+		# We can reset here because we know that the reference
+		# to the data will be maintained by the process_epoch_result
+		# event.
		reset(ss);
		}
	}
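The new process_epoch_result event drains the result table in batches of 50 and reschedules itself, so epoch_result callbacks no longer block in one long loop. The control flow can be sketched outside Bro roughly like this (Python for illustration; the emit/finished/schedule callbacks stand in for Bro's event delivery and timer-based rescheduling):

```python
def process_epoch_result(data, emit, finished, schedule, batch=50):
    """Deliver up to `batch` results per pass, then reschedule the rest."""
    for _ in range(batch):
        if not data:
            break
        key, result = data.popitem()
        emit(key, result)   # corresponds to ss$epoch_result(now, key, ...)

    if data:
        # More work left: yield control and continue in a later pass.
        schedule(lambda: process_epoch_result(data, emit, finished,
                                              schedule, batch))
    else:
        finished()          # corresponds to ss$epoch_finished(now)
```

With 120 queued results and a batch size of 50, this takes three passes before finished() fires.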


@@ -39,6 +39,14 @@ type count_set: set[count];
## directly and then remove this alias.
type index_vec: vector of count;

## A vector of any, used by some builtin functions to store a list of varying
## types.
##
## .. todo:: We need this type definition only for declaring builtin functions
##    via ``bifcl``. We should extend ``bifcl`` to understand composite types
##    directly and then remove this alias.
type any_vec: vector of any;

## A vector of strings.
##
## .. todo:: We need this type definition only for declaring builtin functions
@@ -46,6 +54,13 @@ type index_vec: vector of count;
## directly and then remove this alias.
type string_vec: vector of string;

## A vector of x509 opaques.
##
## .. todo:: We need this type definition only for declaring builtin functions
##    via ``bifcl``. We should extend ``bifcl`` to understand composite types
##    directly and then remove this alias.
type x509_opaque_vector: vector of opaque of x509;

## A vector of addresses.
##
## .. todo:: We need this type definition only for declaring builtin functions
@@ -60,6 +75,23 @@ type addr_vec: vector of addr;
## directly and then remove this alias.
type table_string_of_string: table[string] of string;

## A structure indicating a MIME type and strength of a match against
## file magic signatures.
##
## :bro:see:`file_magic`
type mime_match: record {
	strength: int;  ##< How strongly the signature matched. Used for
	                ##< prioritization when multiple file magic signatures
	                ##< match.
	mime: string;   ##< The MIME type of the file magic signature match.
};

## A vector of file magic signature matches, ordered by strength of
## the signature, strongest first.
##
## :bro:see:`file_magic`
type mime_matches: vector of mime_match;

## A connection's transport-layer protocol. Note that Bro uses the term
## "connection" broadly, using flow semantics for ICMP and UDP.
type transport_proto: enum {
@@ -371,10 +403,15 @@ type fa_file: record {
	## This is also the buffer that's used for file/mime type detection.
	bof_buffer: string &optional;

-	## A mime type provided by libmagic against the *bof_buffer*, or
-	## in the cases where no buffering of the beginning of file occurs,
-	## an initial guess of the mime type based on the first data seen.
+	## The mime type of the strongest file magic signature match against
+	## the data chunk in *bof_buffer*, or in the cases where no buffering
+	## of the beginning of file occurs, an initial guess of the mime type
+	## based on the first data seen.
	mime_type: string &optional;

	## All mime types that matched file magic signatures against the data
	## chunk in *bof_buffer*, in order of their strength value.
	mime_types: mime_matches &optional;
} &redef;
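A small sketch of how the mime_match/mime_matches pair relates to the reworked fa_file fields: the strongest match becomes the single mime_type, while the whole strength-ordered list lands in mime_types (Python for illustration; rank_matches is a hypothetical helper, not a Bro function):

```python
from typing import NamedTuple

class MimeMatch(NamedTuple):
    strength: int  # how strongly the signature matched
    mime: str      # MIME type of the matching file magic signature

def rank_matches(matches):
    """Return (mime_type, mime_types): strongest signature match first."""
    ordered = sorted(matches, key=lambda m: m.strength, reverse=True)
    mime_type = ordered[0].mime if ordered else None
    return mime_type, ordered
```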
## Fields of a SYN packet.
@@ -1028,13 +1065,6 @@ const rpc_timeout = 24 sec &redef;
## means "forever", which resists evasion, but can lead to state accrual.
const frag_timeout = 0.0 sec &redef;

-## Time window for reordering packets. This is used for dealing with timestamp
-## discrepancy between multiple packet sources.
-##
-## .. note:: Setting this can have a major performance impact as now packets
-##    need to be potentially copied and buffered.
-const packet_sort_window = 0 usecs &redef;

## If positive, indicates the encapsulation header size that should
## be skipped. This applies to all packets.
const encap_hdr_size = 0 &redef;
@@ -2420,29 +2450,6 @@ global dns_skip_all_addl = T &redef;
## traffic and do not process it. Set to 0 to turn off this functionality.
global dns_max_queries = 5;

-## An X509 certificate.
-##
-## .. bro:see:: x509_certificate
-type X509: record {
-	version: count;          ##< Version number.
-	serial: string;          ##< Serial number.
-	subject: string;         ##< Subject.
-	issuer: string;          ##< Issuer.
-	not_valid_before: time;  ##< Timestamp before when certificate is not valid.
-	not_valid_after: time;   ##< Timestamp after when certificate is not valid.
-};

-## An X509 extension.
-##
-## .. bro:see:: x509_extension
-type X509_extension_info: record {
-	name: string;                  ##< Long name of extension; oid if name not known.
-	short_name: string &optional;  ##< Short name of extension if known.
-	oid: string;                   ##< Oid of extension.
-	critical: bool;                ##< True if extension is critical.
-	value: string;                 ##< Extension content parsed to string for known extensions. Raw data otherwise.
-};

## HTTP session statistics.
##
## .. bro:see:: http_stats
@@ -2764,6 +2771,55 @@ export {
	};
}
module X509;
export {
type Certificate: record {
version: count; ##< Version number.
serial: string; ##< Serial number.
subject: string; ##< Subject.
issuer: string; ##< Issuer.
not_valid_before: time; ##< Timestamp before when certificate is not valid.
not_valid_after: time; ##< Timestamp after when certificate is not valid.
key_alg: string; ##< Name of the key algorithm
sig_alg: string; ##< Name of the signature algorithm
key_type: string &optional; ##< Key type, if key parseable by openssl (either rsa, dsa or ec)
key_length: count &optional; ##< Key length in bits
exponent: string &optional; ##< Exponent, if RSA-certificate
curve: string &optional; ##< Curve, if EC-certificate
} &log;
type Extension: record {
name: string; ##< Long name of extension. oid if name not known
short_name: string &optional; ##< Short name of extension if known
oid: string; ##< Oid of extension
critical: bool; ##< True if extension is critical
value: string; ##< Extension content parsed to string for known extensions. Raw data otherwise.
};
type BasicConstraints: record {
ca: bool; ##< CA flag set?
path_len: count &optional; ##< Maximum path length
} &log;
type SubjectAlternativeName: record {
dns: string_vec &optional &log; ##< List of DNS entries in SAN
uri: string_vec &optional &log; ##< List of URI entries in SAN
email: string_vec &optional &log; ##< List of email entries in SAN
ip: addr_vec &optional &log; ##< List of IP entries in SAN
other_fields: bool; ##< True if the certificate contained other, not recognized or parsed name fields
};
## Result of an X509 certificate chain verification
type Result: record {
## OpenSSL result code
result: count;
## Result as string
result_string: string;
## References to the final certificate chain, if verification successful. End-host certificate is first.
chain_certs: vector of opaque of x509 &optional;
};
}
module SOCKS;
export {
	## This record is for a SOCKS client or server to provide either a
@@ -2793,6 +2849,130 @@ export {
}

module GLOBAL;
@load base/bif/plugins/Bro_SNMP.types.bif
module SNMP;
export {
## The top-level message data structure of an SNMPv1 datagram, not
## including the PDU data. See :rfc:`1157`.
type SNMP::HeaderV1: record {
community: string;
};
## The top-level message data structure of an SNMPv2 datagram, not
## including the PDU data. See :rfc:`1901`.
type SNMP::HeaderV2: record {
community: string;
};
## The ``ScopedPduData`` data structure of an SNMPv3 datagram, not
## including the PDU data (i.e. just the "context" fields).
## See :rfc:`3412`.
type SNMP::ScopedPDU_Context: record {
engine_id: string;
name: string;
};
## The top-level message data structure of an SNMPv3 datagram, not
## including the PDU data. See :rfc:`3412`.
type SNMP::HeaderV3: record {
id: count;
max_size: count;
flags: count;
auth_flag: bool;
priv_flag: bool;
reportable_flag: bool;
security_model: count;
security_params: string;
pdu_context: SNMP::ScopedPDU_Context &optional;
};
## A generic SNMP header data structure that may include data from
## any version of SNMP. The value of the ``version`` field
## determines what header field is initialized.
type SNMP::Header: record {
version: count;
v1: SNMP::HeaderV1 &optional; ##< Set when ``version`` is 0.
v2: SNMP::HeaderV2 &optional; ##< Set when ``version`` is 1.
v3: SNMP::HeaderV3 &optional; ##< Set when ``version`` is 3.
};
## A generic SNMP object value, that may include any of the
## valid ``ObjectSyntax`` values from :rfc:`1155` or :rfc:`3416`.
## The value is decoded whenever possible and assigned to
## the appropriate field, which can be determined from the value
## of the ``tag`` field. For tags that can't be mapped to an
## appropriate type, the ``octets`` field holds the BER encoded
## ASN.1 content if there is any (though ``octets`` may also
## be used for other tags such as OCTET STRINGS or Opaque). Null
## values will only have their corresponding tag value set.
type SNMP::ObjectValue: record {
tag: count;
oid: string &optional;
signed: int &optional;
unsigned: count &optional;
address: addr &optional;
octets: string &optional;
};
# These aren't an enum because it's easier to type fields as count.
# That way we don't have to deal with type conversion, and it doesn't
# mislead one into thinking these are the only valid tag values (it's
# just the set of known tags).
const SNMP::OBJ_INTEGER_TAG : count = 0x02; ##< Signed 64-bit integer.
const SNMP::OBJ_OCTETSTRING_TAG : count = 0x04; ##< An octet string.
const SNMP::OBJ_UNSPECIFIED_TAG : count = 0x05; ##< A NULL value.
const SNMP::OBJ_OID_TAG : count = 0x06; ##< An Object Identifier.
const SNMP::OBJ_IPADDRESS_TAG : count = 0x40; ##< An IP address.
const SNMP::OBJ_COUNTER32_TAG : count = 0x41; ##< Unsigned 32-bit integer.
const SNMP::OBJ_UNSIGNED32_TAG : count = 0x42; ##< Unsigned 32-bit integer.
const SNMP::OBJ_TIMETICKS_TAG : count = 0x43; ##< Unsigned 32-bit integer.
const SNMP::OBJ_OPAQUE_TAG : count = 0x44; ##< An octet string.
const SNMP::OBJ_COUNTER64_TAG : count = 0x46; ##< Unsigned 64-bit integer.
const SNMP::OBJ_NOSUCHOBJECT_TAG : count = 0x80; ##< A NULL value.
const SNMP::OBJ_NOSUCHINSTANCE_TAG: count = 0x81; ##< A NULL value.
const SNMP::OBJ_ENDOFMIBVIEW_TAG : count = 0x82; ##< A NULL value.
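As the ObjectValue documentation above describes, each known tag maps to exactly one decoded field. A sketch of that dispatch (Python for illustration; object_value_field is a hypothetical helper, not part of Bro):

```python
# Known SNMP ObjectSyntax tags, mirroring the constants above.
OBJ_INTEGER_TAG     = 0x02  # -> signed
OBJ_OCTETSTRING_TAG = 0x04  # -> octets
OBJ_OID_TAG         = 0x06  # -> oid
OBJ_IPADDRESS_TAG   = 0x40  # -> address
OBJ_COUNTER32_TAG   = 0x41  # -> unsigned
OBJ_UNSIGNED32_TAG  = 0x42  # -> unsigned
OBJ_TIMETICKS_TAG   = 0x43  # -> unsigned
OBJ_OPAQUE_TAG      = 0x44  # -> octets
OBJ_COUNTER64_TAG   = 0x46  # -> unsigned

def object_value_field(tag):
    """Pick the ObjectValue field a decoded value would land in.

    Returns None for NULL-like or unknown tags, where only `tag`
    itself (and possibly raw BER `octets`) would be set."""
    if tag == OBJ_INTEGER_TAG:
        return "signed"
    if tag in (OBJ_OCTETSTRING_TAG, OBJ_OPAQUE_TAG):
        return "octets"
    if tag == OBJ_OID_TAG:
        return "oid"
    if tag == OBJ_IPADDRESS_TAG:
        return "address"
    if tag in (OBJ_COUNTER32_TAG, OBJ_UNSIGNED32_TAG,
               OBJ_TIMETICKS_TAG, OBJ_COUNTER64_TAG):
        return "unsigned"
    return None
```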
## The ``VarBind`` data structure from either :rfc:`1157` or
## :rfc:`3416`, which maps an Object Identifier to a value.
type SNMP::Binding: record {
oid: string;
value: SNMP::ObjectValue;
};
## A ``VarBindList`` data structure from either :rfc:`1157` or :rfc:`3416`.
## A sequence of :bro:see:`SNMP::Binding`, which maps OIDs to values.
type SNMP::Bindings: vector of SNMP::Binding;
## A ``PDU`` data structure from either :rfc:`1157` or :rfc:`3416`.
type SNMP::PDU: record {
request_id: int;
error_status: int;
error_index: int;
bindings: SNMP::Bindings;
};
## A ``Trap-PDU`` data structure from :rfc:`1157`.
type SNMP::TrapPDU: record {
enterprise: string;
agent: addr;
generic_trap: int;
specific_trap: int;
time_stamp: count;
bindings: SNMP::Bindings;
};
## A ``BulkPDU`` data structure from :rfc:`3416`.
type SNMP::BulkPDU: record {
request_id: int;
non_repeaters: count;
max_repititions: count;
bindings: SNMP::Bindings;
};
}
module GLOBAL;
@load base/bif/event.bif

## BPF filter the user has set via the -f command line options. Empty if none.
@@ -3074,6 +3254,24 @@ const record_all_packets = F &redef;
## .. bro:see:: conn_stats
const ignore_keep_alive_rexmit = F &redef;
module JSON;
export {
type TimestampFormat: enum {
## Timestamps will be formatted as UNIX epoch doubles. This is
## the format in which Bro typically writes out timestamps.
TS_EPOCH,
## Timestamps will be formatted as unsigned integers that
## represent the number of milliseconds since the UNIX
## epoch.
TS_MILLIS,
## Timestamps will be formatted in the ISO8601 DateTime format.
## Subseconds are also included which isn't actually part of the
## standard but most consumers that parse ISO8601 seem to be able
## to cope with that.
TS_ISO8601,
};
}
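The three TimestampFormat options amount to the following renderings of a UNIX timestamp (a Python sketch of the formats, not Bro's writer code; format_ts is a hypothetical helper):

```python
from datetime import datetime, timezone

def format_ts(ts: float, fmt: str) -> str:
    """Render a UNIX timestamp in one of the TimestampFormat styles."""
    if fmt == "TS_EPOCH":
        return repr(ts)             # epoch seconds as a double
    if fmt == "TS_MILLIS":
        return str(int(ts * 1000))  # unsigned ms since the epoch
    if fmt == "TS_ISO8601":
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        # Subseconds included, as described above.
        return dt.strftime("%Y-%m-%dT%H:%M:%S.%f") + "Z"
    raise ValueError(fmt)
```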
module Tunnel;
export {
	## The maximum depth of a tunnel to decapsulate until giving up.


@@ -48,6 +48,7 @@
@load base/protocols/modbus
@load base/protocols/pop3
@load base/protocols/radius
@load base/protocols/snmp
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@@ -58,7 +59,7 @@
@load base/files/hash
@load base/files/extract
@load base/files/unified2
@load base/files/x509
@load base/misc/find-checksum-offloading
@load base/misc/find-filtered-trace


@@ -181,10 +181,9 @@ function log_unmatched_msgs_queue(q: Queue::Queue)
function log_unmatched_msgs(msgs: PendingMessages)
	{
	for ( trans_id in msgs )
-		{
		log_unmatched_msgs_queue(msgs[trans_id]);
-		delete msgs[trans_id];
-		}
+
+	clear_table(msgs);
	}
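The change to log_unmatched_msgs drops the per-key delete inside the loop in favor of one clear_table() afterwards, i.e. it avoids mutating the container while iterating over it. The same pattern in miniature (Python; flush_pending is illustrative only):

```python
def flush_pending(pending, log):
    """Log every queued message list, then clear the container in one step.

    Deleting entries inside the iteration would mutate the dict mid-loop
    (a RuntimeError in Python); draining afterwards is safe and simpler.
    """
    for trans_id in pending:
        log(pending[trans_id])
    pending.clear()
```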
function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
@@ -360,7 +359,15 @@ event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qcla
	# Note: I'm ignoring the name type for now. Not sure if this should be
	# worked into the query/response in some fashion.
	if ( c$id$resp_p == 137/udp )
		{
		query = decode_netbios_name(query);

		if ( c$dns$qtype_name == "SRV" )
			{
			# The SRV RFC reused the ID that was already in use for NetBios
			# Status RRs, so if this is the NetBios Name Service we name it
			# correctly.
			c$dns$qtype_name = "NBSTAT";
			}
		}

	c$dns$query = query;
	}
@@ -375,9 +382,19 @@ event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priori
	hook DNS::do_reply(c, msg, ans, fmt("%s", a));
	}

-event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, str: string) &priority=5
+event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec) &priority=5
	{
-	hook DNS::do_reply(c, msg, ans, str);
+	local txt_strings: string = "";
+
+	for ( i in strs )
+		{
+		if ( i > 0 )
+			txt_strings += " ";
+
+		txt_strings += fmt("TXT %d %s", |strs[i]|, strs[i]);
+		}
+
+	hook DNS::do_reply(c, msg, ans, txt_strings);
	}
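With the new string_vec signature, dns_TXT_reply concatenates each TXT character-string with a length-prefixed "TXT <len> <data>" annotation. The joining step boils down to (Python sketch; format_txt_strings is illustrative):

```python
def format_txt_strings(strs):
    """Join multiple DNS TXT character-strings into one annotated string,
    mirroring the length-prefixed "TXT <len> <data>" rendering above."""
    return " ".join("TXT %d %s" % (len(s), s) for s in strs)
```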
event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
@@ -421,9 +438,9 @@ event dns_WKS_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
	hook DNS::do_reply(c, msg, ans, "");
	}

-event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
+event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer, target: string, priority: count, weight: count, p: count) &priority=5
	{
-	hook DNS::do_reply(c, msg, ans, "");
+	hook DNS::do_reply(c, msg, ans, target);
	}

# TODO: figure out how to handle these


@@ -1,6 +1,8 @@
# List of HTTP methods pulled from:
# http://annevankesteren.nl/2007/10/http-methods
signature dpd_http_client {
	ip-proto == tcp
-	payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
+	payload /^[[:space:]]*(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK|VERSION-CONTROL|REPORT|CHECKOUT|CHECKIN|UNCHECKOUT|MKWORKSPACE|UPDATE|LABEL|MERGE|BASELINE-CONTROL|MKACTIVITY|ORDERPATCH|ACL|PATCH|SEARCH|BCOPY|BDELETE|BMOVE|BPROPFIND|BPROPPATCH|NOTIFY|POLL|SUBSCRIBE|UNSUBSCRIBE|X-MS-ENUMATTS|RPC_OUT_DATA|RPC_IN_DATA)[[:space:]]*/
	tcp-state originator
}
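The broadened payload pattern should now also trigger on WebDAV, RPC-over-HTTP, and similar clients. A quick sanity check of the matching idea, using a trimmed subset of the method list (Python re for illustration; this is not the signature engine itself):

```python
import re

# Trimmed-down version of the dpd_http_client payload pattern above
# (illustrative subset of the full method list).
HTTP_CLIENT = re.compile(
    r"^\s*(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT"
    r"|PROPFIND|REPORT|PATCH)\s")

def looks_like_http_request(payload: bytes) -> bool:
    return HTTP_CLIENT.match(payload.decode("latin-1")) is not None
```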


@@ -72,7 +72,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
	if ( f$is_orig )
		{
-		if ( ! c$http?$orig_mime_types )
+		if ( ! c$http?$orig_fuids )
			c$http$orig_fuids = string_vec(f$id);
		else
			c$http$orig_fuids[|c$http$orig_fuids|] = f$id;
@@ -87,7 +87,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
		}
	else
		{
-		if ( ! c$http?$resp_mime_types )
+		if ( ! c$http?$resp_fuids )
			c$http$resp_fuids = string_vec(f$id);
		else
			c$http$resp_fuids[|c$http$resp_fuids|] = f$id;


@@ -4,6 +4,7 @@
@load base/utils/numbers
@load base/utils/files
@load base/frameworks/tunnels

module HTTP;
@@ -217,6 +218,17 @@ event http_reply(c: connection, version: string, code: count, reason: string) &p
		c$http$info_code = code;
		c$http$info_msg = reason;
		}
if ( c$http?$method && c$http$method == "CONNECT" && code == 200 )
{
# Copy this conn_id and set the orig_p to zero because in the case of CONNECT
# proxies there will be potentially many source ports since a new proxy connection
# is established for each proxied connection. We treat this as a singular
# "tunnel".
local tid = copy(c$id);
tid$orig_p = 0/tcp;
Tunnel::register([$cid=tid, $tunnel_type=Tunnel::HTTP]);
}
	}
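The CONNECT handling above folds the many per-proxied-connection source ports into one tunnel by zeroing orig_p before registering. That normalization step in miniature (Python; ConnID and tunnel_id are illustrative stand-ins for conn_id and the copy/assign above):

```python
from typing import NamedTuple

class ConnID(NamedTuple):
    orig_h: str
    orig_p: int
    resp_h: str
    resp_p: int

def tunnel_id(cid: ConnID) -> ConnID:
    """Zero the originator port so every proxied connection through the
    same CONNECT proxy maps to a single tunnel identifier."""
    return cid._replace(orig_p=0)
```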
event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5


@@ -76,7 +76,7 @@ event irc_dcc_message(c: connection, is_orig: bool,
	dcc_expected_transfers[address, p] = c$irc;
	}

-event expected_connection_seen(c: connection, a: Analyzer::Tag) &priority=10
+event scheduled_analyzer_applied(c: connection, a: Analyzer::Tag) &priority=10
	{
	local id = c$id;
	if ( [id$resp_h, id$resp_p] in dcc_expected_transfers )


@@ -0,0 +1 @@
Support for Simple Network Management Protocol (SNMP) analysis.


@@ -0,0 +1 @@
@load ./main


@@ -0,0 +1,182 @@
##! Enables analysis and logging of SNMP datagrams.
module SNMP;
export {
redef enum Log::ID += { LOG };
## Information tracked per SNMP session.
type Info: record {
## Timestamp of first packet belonging to the SNMP session.
ts: time &log;
## The unique ID for the connection.
uid: string &log;
## The connection's 5-tuple of addresses/ports (ports inherently
## include transport protocol information).
id: conn_id &log;
## The amount of time between the first packet belonging to
## the SNMP session and the latest one seen.
duration: interval &log &default=0secs;
## The version of SNMP being used.
version: string &log;
## The community string of the first SNMP packet associated with
## the session. This is used as part of SNMP's (v1 and v2c)
## administrative/security framework. See :rfc:`1157` or :rfc:`1901`.
community: string &log &optional;
## The number of variable bindings in GetRequest/GetNextRequest PDUs
## seen for the session.
get_requests: count &log &default=0;
## The number of variable bindings in GetBulkRequest PDUs seen for
## the session.
get_bulk_requests: count &log &default=0;
## The number of variable bindings in GetResponse/Response PDUs seen
## for the session.
get_responses: count &log &default=0;
## The number of variable bindings in SetRequest PDUs seen for
## the session.
set_requests: count &log &default=0;
## A system description of the SNMP responder endpoint.
display_string: string &log &optional;
## The time at which the SNMP responder endpoint claims it's been
## up since.
up_since: time &log &optional;
};
## Maps an SNMP version integer to a human readable string.
const version_map: table[count] of string = {
[0] = "1",
[1] = "2c",
[3] = "3",
} &redef &default="unknown";
## Event that can be handled to access the SNMP record as it is sent on
## to the logging framework.
global log_snmp: event(rec: Info);
}
redef record connection += {
snmp: SNMP::Info &optional;
};
const ports = { 161/udp, 162/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_SNMP, ports);
Log::create_stream(SNMP::LOG, [$columns=SNMP::Info, $ev=log_snmp]);
}
function init_state(c: connection, h: SNMP::Header): Info
{
if ( ! c?$snmp )
{
c$snmp = Info($ts=network_time(),
$uid=c$uid, $id=c$id,
$version=version_map[h$version]);
}
local s = c$snmp;
if ( ! s?$community )
{
if ( h?$v1 )
s$community = h$v1$community;
else if ( h?$v2 )
s$community = h$v2$community;
}
s$duration = network_time() - s$ts;
return s;
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$snmp )
Log::write(LOG, c$snmp);
}
event snmp_get_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_requests += |pdu$bindings|;
}
event snmp_get_bulk_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::BulkPDU) &priority=5
{
local s = init_state(c, header);
s$get_bulk_requests += |pdu$bindings|;
}
event snmp_get_next_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_requests += |pdu$bindings|;
}
event snmp_response(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_responses += |pdu$bindings|;
for ( i in pdu$bindings )
{
local binding = pdu$bindings[i];
if ( binding$oid == "1.3.6.1.2.1.1.1.0" && binding$value?$octets )
c$snmp$display_string = binding$value$octets;
else if ( binding$oid == "1.3.6.1.2.1.1.3.0" && binding$value?$unsigned )
{
local up_seconds = binding$value$unsigned / 100.0;
s$up_since = network_time() - double_to_interval(up_seconds);
}
}
}
event snmp_set_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$set_requests += |pdu$bindings|;
}
event snmp_trap(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::TrapPDU) &priority=5
{
init_state(c, header);
}
event snmp_inform_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_trapV2(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_report(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_unknown_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
{
init_state(c, header);
}
event snmp_unknown_scoped_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
{
init_state(c, header);
}
event snmp_encrypted_pdu(c: connection, is_orig: bool, header: SNMP::Header) &priority=5
{
init_state(c, header);
}
#event snmp_unknown_header_version(c: connection, is_orig: bool, version: count) &priority=5
# {
# }
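The `snmp_response` handler above derives `up_since` from the sysUpTime OID (1.3.6.1.2.1.1.3.0), whose value is TimeTicks in hundredths of a second, by subtracting that interval from the current network time. A minimal Python sketch of the same arithmetic (the function name is illustrative, not part of the Bro scripts):

```python
def up_since(network_time: float, sysuptime_ticks: int) -> float:
    """Convert SNMP TimeTicks (1/100 of a second) into the absolute
    time the responder claims it has been up since."""
    up_seconds = sysuptime_ticks / 100.0
    return network_time - up_seconds

# A device reporting 8640000 ticks (exactly one day) booted 86400 s ago.
assert up_since(1400000000.0, 8640000) == 1400000000.0 - 86400.0
```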


@@ -1,5 +1,6 @@
 @load ./consts
 @load ./main
 @load ./mozilla-ca-list
+@load ./files
 @load-sigs ./dpd.sig


@@ -14,15 +14,15 @@ export {
		[TLSv11] = "TLSv11",
		[TLSv12] = "TLSv12",
	} &default=function(i: count):string { return fmt("unknown-%d", i); };

	## Mapping between numeric codes and human readable strings for alert
	## levels.
	const alert_levels: table[count] of string = {
		[1] = "warning",
		[2] = "fatal",
	} &default=function(i: count):string { return fmt("unknown-%d", i); };

	## Mapping between numeric codes and human readable strings for alert
	## descriptions.
	const alert_descriptions: table[count] of string = {
		[0] = "close_notify",
@@ -47,6 +47,7 @@ export {
		[70] = "protocol_version",
		[71] = "insufficient_security",
		[80] = "internal_error",
+		[86] = "inappropriate_fallback",
		[90] = "user_canceled",
		[100] = "no_renegotiation",
		[110] = "unsupported_extension",
@@ -55,8 +56,9 @@ export {
		[113] = "bad_certificate_status_response",
		[114] = "bad_certificate_hash_value",
		[115] = "unknown_psk_identity",
+		[120] = "no_application_protocol",
	} &default=function(i: count):string { return fmt("unknown-%d", i); };

	## Mapping between numeric codes and human readable strings for SSL/TLS
	## extensions.
	# More information can be found here:
@@ -87,9 +89,54 @@ export {
		[13175] = "origin_bound_certificates",
		[13180] = "encrypted_client_certificates",
		[30031] = "channel_id",
+		[30032] = "channel_id_new",
+		[35655] = "padding",
		[65281] = "renegotiation_info"
	} &default=function(i: count):string { return fmt("unknown-%d", i); };

+	## Mapping between numeric codes and human readable string for SSL/TLS elliptic curves.
+	# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8
+	const ec_curves: table[count] of string = {
+		[1] = "sect163k1",
+		[2] = "sect163r1",
+		[3] = "sect163r2",
+		[4] = "sect193r1",
+		[5] = "sect193r2",
+		[6] = "sect233k1",
+		[7] = "sect233r1",
+		[8] = "sect239k1",
+		[9] = "sect283k1",
+		[10] = "sect283r1",
+		[11] = "sect409k1",
+		[12] = "sect409r1",
+		[13] = "sect571k1",
+		[14] = "sect571r1",
+		[15] = "secp160k1",
+		[16] = "secp160r1",
+		[17] = "secp160r2",
+		[18] = "secp192k1",
+		[19] = "secp192r1",
+		[20] = "secp224k1",
+		[21] = "secp224r1",
+		[22] = "secp256k1",
+		[23] = "secp256r1",
+		[24] = "secp384r1",
+		[25] = "secp521r1",
+		[26] = "brainpoolP256r1",
+		[27] = "brainpoolP384r1",
+		[28] = "brainpoolP512r1",
+		[0xFF01] = "arbitrary_explicit_prime_curves",
+		[0xFF02] = "arbitrary_explicit_char2_curves"
+	} &default=function(i: count):string { return fmt("unknown-%d", i); };
+
+	## Mapping between numeric codes and human readable string for SSL/TLS EC point formats.
+	# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-9
+	const ec_point_formats: table[count] of string = {
+		[0] = "uncompressed",
+		[1] = "ansiX962_compressed_prime",
+		[2] = "ansiX962_compressed_char2"
+	} &default=function(i: count):string { return fmt("unknown-%d", i); };
+
	# SSLv2
	const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080;
	const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080;
@@ -263,6 +310,8 @@ export {
	const TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C3;
	const TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C4;
	const TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C5;
+	# draft-bmoeller-tls-downgrade-scsv-01
+	const TLS_FALLBACK_SCSV = 0x5600;
	# RFC 4492
	const TLS_ECDH_ECDSA_WITH_NULL_SHA = 0xC001;
	const TLS_ECDH_ECDSA_WITH_RC4_128_SHA = 0xC002;
@@ -438,6 +487,10 @@ export {
	const TLS_PSK_WITH_AES_256_CCM_8 = 0xC0A9;
	const TLS_PSK_DHE_WITH_AES_128_CCM_8 = 0xC0AA;
	const TLS_PSK_DHE_WITH_AES_256_CCM_8 = 0xC0AB;
+	const TLS_ECDHE_ECDSA_WITH_AES_128_CCM = 0xC0AC;
+	const TLS_ECDHE_ECDSA_WITH_AES_256_CCM = 0xC0AD;
+	const TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 = 0xC0AE;
+	const TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 = 0xC0AF;
	# draft-agl-tls-chacha20poly1305-02
	const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC13;
	const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC14;
@@ -452,8 +505,8 @@ export {
	const SSL_RSA_WITH_DES_CBC_MD5 = 0xFF82;
	const SSL_RSA_WITH_3DES_EDE_CBC_MD5 = 0xFF83;
	const TLS_EMPTY_RENEGOTIATION_INFO_SCSV = 0x00FF;

	## This is a table of all known cipher specs. It can be used for
	## detecting unknown ciphers and for converting the cipher spec
	## constants into a human readable format.
	const cipher_desc: table[count] of string = {
@@ -629,6 +682,7 @@ export {
	[TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256",
	[TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256",
	[TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256",
+	[TLS_FALLBACK_SCSV] = "TLS_FALLBACK_SCSV",
	[TLS_ECDH_ECDSA_WITH_NULL_SHA] = "TLS_ECDH_ECDSA_WITH_NULL_SHA",
	[TLS_ECDH_ECDSA_WITH_RC4_128_SHA] = "TLS_ECDH_ECDSA_WITH_RC4_128_SHA",
	[TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA] = "TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA",
@@ -800,6 +854,10 @@ export {
	[TLS_PSK_WITH_AES_256_CCM_8] = "TLS_PSK_WITH_AES_256_CCM_8",
	[TLS_PSK_DHE_WITH_AES_128_CCM_8] = "TLS_PSK_DHE_WITH_AES_128_CCM_8",
	[TLS_PSK_DHE_WITH_AES_256_CCM_8] = "TLS_PSK_DHE_WITH_AES_256_CCM_8",
+	[TLS_ECDHE_ECDSA_WITH_AES_128_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM",
+	[TLS_ECDHE_ECDSA_WITH_AES_256_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM",
+	[TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8",
+	[TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8",
	[TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
	[TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
	[TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
@@ -813,43 +871,5 @@ export {
	[SSL_RSA_WITH_3DES_EDE_CBC_MD5] = "SSL_RSA_WITH_3DES_EDE_CBC_MD5",
	[TLS_EMPTY_RENEGOTIATION_INFO_SCSV] = "TLS_EMPTY_RENEGOTIATION_INFO_SCSV",
	} &default=function(i: count):string { return fmt("unknown-%d", i); };
-
-	## Mapping between the constants and string values for SSL/TLS errors.
-	const x509_errors: table[count] of string = {
-		[0] = "ok",
-		[1] = "unable to get issuer cert",
-		[2] = "unable to get crl",
-		[3] = "unable to decrypt cert signature",
-		[4] = "unable to decrypt crl signature",
-		[5] = "unable to decode issuer public key",
-		[6] = "cert signature failure",
-		[7] = "crl signature failure",
-		[8] = "cert not yet valid",
-		[9] = "cert has expired",
-		[10] = "crl not yet valid",
-		[11] = "crl has expired",
-		[12] = "error in cert not before field",
-		[13] = "error in cert not after field",
-		[14] = "error in crl last update field",
-		[15] = "error in crl next update field",
-		[16] = "out of mem",
-		[17] = "depth zero self signed cert",
-		[18] = "self signed cert in chain",
-		[19] = "unable to get issuer cert locally",
-		[20] = "unable to verify leaf signature",
-		[21] = "cert chain too long",
-		[22] = "cert revoked",
-		[23] = "invalid ca",
-		[24] = "path length exceeded",
-		[25] = "invalid purpose",
-		[26] = "cert untrusted",
-		[27] = "cert rejected",
-		[28] = "subject issuer mismatch",
-		[29] = "akid skid mismatch",
-		[30] = "akid issuer serial mismatch",
-		[31] = "keyusage no certsign",
-		[32] = "unable to get crl issuer",
-		[33] = "unhandled critical extension",
-	} &default=function(i: count):string { return fmt("unknown-%d", i); };
}
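Each of the tables above carries a `&default=function(i: count) ...` attribute, so lookups of unlisted codes synthesize an "unknown-N" label instead of failing. The equivalent lookup behavior in Python (illustrative only, not part of the Bro scripts):

```python
def describe(table: dict, code: int) -> str:
    """Mimic Bro's &default=function(i): return fmt("unknown-%d", i)."""
    return table.get(code, f"unknown-{code}")

ec_point_formats = {0: "uncompressed",
                    1: "ansiX962_compressed_prime",
                    2: "ansiX962_compressed_char2"}

assert describe(ec_point_formats, 1) == "ansiX962_compressed_prime"
assert describe(ec_point_formats, 7) == "unknown-7"
```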


@@ -1,7 +1,7 @@
 signature dpd_ssl_server {
	ip-proto == tcp
	# Server hello.
-	payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
+	payload /^(\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
	requires-reverse-signature dpd_ssl_client
	enable "ssl"
	tcp-state responder
@@ -10,6 +10,6 @@ signature dpd_ssl_server {
 signature dpd_ssl_client {
	ip-proto == tcp
	# Client hello.
-	payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
+	payload /^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]|...?\x01[\x00\x03][\x00\x01\x02\x03]).*/
	tcp-state originator
 }
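The change widens both DPD signatures to also accept TLS 1.2 version bytes (`\x03\x03`). As a quick sanity check, the client-hello pattern can be exercised against synthetic record headers by translating it into a Python bytes regex (the sample bytes are illustrative, not captured traffic):

```python
import re

# Bro signature translated to a Python bytes regex; DOTALL so '.' matches any byte.
CLIENT_HELLO = re.compile(
    rb"^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]"
    rb"|...?\x01[\x00\x03][\x00\x01\x02\x03]).*",
    re.DOTALL,
)

# Synthetic TLS 1.2 ClientHello: record header + handshake header + client version.
tls12 = b"\x16\x03\x01\x00\x2e" + b"\x01\x00\x00\x2a" + b"\x03\x03" + b"\x00" * 42
assert CLIENT_HELLO.match(tls12)

# SSLv3 version bytes still match.
sslv3 = b"\x16\x03\x00\x00\x2e" + b"\x01\x00\x00\x2a" + b"\x03\x00" + b"\x00" * 42
assert CLIENT_HELLO.match(sslv3)
```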


@@ -0,0 +1,135 @@
@load ./main
@load base/utils/conn-ids
@load base/frameworks/files
@load base/files/x509
module SSL;
export {
redef record Info += {
## Chain of certificates offered by the server to validate its
## complete signing chain.
cert_chain: vector of Files::Info &optional;
## An ordered vector of all certificate file unique IDs for the
## certificates offered by the server.
cert_chain_fuids: vector of string &optional &log;
## Chain of certificates offered by the client to validate its
## complete signing chain.
client_cert_chain: vector of Files::Info &optional;
## An ordered vector of all certificate file unique IDs for the
## certificates offered by the client.
client_cert_chain_fuids: vector of string &optional &log;
## Subject of the X.509 certificate offered by the server.
subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the
## server.
issuer: string &log &optional;
## Subject of the X.509 certificate offered by the client.
client_subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the
## client.
client_issuer: string &log &optional;
## Current number of certificates seen from either side. Used
## to create file handles.
server_depth: count &default=0;
client_depth: count &default=0;
};
## Default file handle provider for SSL.
global get_file_handle: function(c: connection, is_orig: bool): string;
## Default file describer for SSL.
global describe_file: function(f: fa_file): string;
}
function get_file_handle(c: connection, is_orig: bool): string
{
# Unused. File handles are generated in the analyzer.
return "";
}
function describe_file(f: fa_file): string
{
if ( f$source != "SSL" || ! f?$info || ! f$info?$x509 || ! f$info$x509?$certificate )
return "";
# It is difficult to reliably describe a certificate - especially since
# we do not know when this function is called (hence, if the data structures
# are already populated).
#
# Just return a bit of our connection information and hope that that is good enough.
for ( cid in f$conns )
{
if ( f$conns[cid]?$ssl )
{
local c = f$conns[cid];
return cat(c$id$resp_h, ":", c$id$resp_p);
}
}
return cat("Serial: ", f$info$x509$certificate$serial, " Subject: ",
f$info$x509$certificate$subject, " Issuer: ",
f$info$x509$certificate$issuer);
}
event bro_init() &priority=5
{
Files::register_protocol(Analyzer::ANALYZER_SSL,
[$get_file_handle = SSL::get_file_handle,
$describe = SSL::describe_file]);
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( ! c?$ssl )
return;
if ( ! c$ssl?$cert_chain )
{
c$ssl$cert_chain = vector();
c$ssl$client_cert_chain = vector();
c$ssl$cert_chain_fuids = string_vec();
c$ssl$client_cert_chain_fuids = string_vec();
}
if ( is_orig )
{
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = f$info;
c$ssl$client_cert_chain_fuids[|c$ssl$client_cert_chain_fuids|] = f$id;
}
else
{
c$ssl$cert_chain[|c$ssl$cert_chain|] = f$info;
c$ssl$cert_chain_fuids[|c$ssl$cert_chain_fuids|] = f$id;
}
Files::add_analyzer(f, Files::ANALYZER_X509);
# always calculate hashes. They are not necessary for base scripts
# but very useful for identification, and required for policy scripts
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
event ssl_established(c: connection) &priority=6
{
# update subject and issuer information
if ( c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 )
{
c$ssl$subject = c$ssl$cert_chain[0]$x509$certificate$subject;
c$ssl$issuer = c$ssl$cert_chain[0]$x509$certificate$issuer;
}
if ( c$ssl?$client_cert_chain && |c$ssl$client_cert_chain| > 0 )
{
c$ssl$client_subject = c$ssl$client_cert_chain[0]$x509$certificate$subject;
c$ssl$client_issuer = c$ssl$client_cert_chain[0]$x509$certificate$issuer;
}
}


@@ -19,45 +19,28 @@ export {
		version: string &log &optional;
		## SSL/TLS cipher suite that the server chose.
		cipher: string &log &optional;
+		## Elliptic curve the server chose when using ECDH/ECDHE.
+		curve: string &log &optional;
		## Value of the Server Name Indicator SSL/TLS extension. It
		## indicates the server name that the client was requesting.
		server_name: string &log &optional;
		## Session ID offered by the client for session resumption.
		session_id: string &log &optional;
-		## Subject of the X.509 certificate offered by the server.
-		subject: string &log &optional;
-		## Subject of the signer of the X.509 certificate offered by the
-		## server.
-		issuer_subject: string &log &optional;
-		## NotValidBefore field value from the server certificate.
-		not_valid_before: time &log &optional;
-		## NotValidAfter field value from the server certificate.
-		not_valid_after: time &log &optional;
		## Last alert that was seen during the connection.
		last_alert: string &log &optional;
-		## Subject of the X.509 certificate offered by the client.
-		client_subject: string &log &optional;
-		## Subject of the signer of the X.509 certificate offered by the
-		## client.
-		client_issuer_subject: string &log &optional;
-		## Full binary server certificate stored in DER format.
-		cert: string &optional;
-		## Chain of certificates offered by the server to validate its
-		## complete signing chain.
-		cert_chain: vector of string &optional;
-		## Full binary client certificate stored in DER format.
-		client_cert: string &optional;
-		## Chain of certificates offered by the client to validate its
-		## complete signing chain.
-		client_cert_chain: vector of string &optional;
		## The analyzer ID used for the analyzer instance attached
		## to each connection. It is not used for logging since it's a
		## meaningless arbitrary number.
		analyzer_id: count &optional;
+		## Flag to indicate if this SSL session has been established
+		## successfully, or if it was aborted during the handshake.
+		established: bool &log &default=F;
+		## Flag to indicate if this record already has been logged, to
+		## prevent duplicates.
+		logged: bool &default=F;
	};

	## The default root CA bundle. By default, the mozilla-ca-list.bro
@@ -108,8 +91,7 @@ event bro_init() &priority=5
 function set_session(c: connection)
	{
	if ( ! c?$ssl )
-		c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector(),
-		         $client_cert_chain=vector()];
+		c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id];
	}

 function delay_log(info: Info, token: string)
@@ -127,9 +109,13 @@ function undelay_log(info: Info, token: string)
 function log_record(info: Info)
	{
+	if ( info$logged )
+		return;
+
	if ( ! info?$delay_tokens || |info$delay_tokens| == 0 )
		{
		Log::write(SSL::LOG, info);
+		info$logged = T;
		}
	else
		{
@@ -146,11 +132,16 @@ function log_record(info: Info)
		}
	}

-function finish(c: connection)
+# remove_analyzer flag is used to prevent disabling analyzer for finished
+# connections.
+function finish(c: connection, remove_analyzer: bool)
	{
	log_record(c$ssl);
-	if ( disable_analyzer_after_detection && c?$ssl && c$ssl?$analyzer_id )
+	if ( remove_analyzer && disable_analyzer_after_detection && c?$ssl && c$ssl?$analyzer_id )
+		{
		disable_analyzer(c$id, c$ssl$analyzer_id);
+		delete c$ssl$analyzer_id;
+		}
	}

 event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) &priority=5
@@ -170,55 +161,23 @@ event ssl_server_hello(c: connection, version: count, possible_ts: time, server_
	c$ssl$cipher = cipher_desc[cipher];
	}

-event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=5
+event ssl_server_curve(c: connection, curve: count) &priority=5
	{
	set_session(c);

-	# We aren't doing anything with client certificates yet.
-	if ( is_orig )
-		{
-		if ( chain_idx == 0 )
-			{
-			# Save the primary cert.
-			c$ssl$client_cert = der_cert;
-			# Also save other certificate information about the primary cert.
-			c$ssl$client_subject = cert$subject;
-			c$ssl$client_issuer_subject = cert$issuer;
-			}
-		else
-			{
-			# Otherwise, add it to the cert validation chain.
-			c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = der_cert;
-			}
-		}
-	else
-		{
-		if ( chain_idx == 0 )
-			{
-			# Save the primary cert.
-			c$ssl$cert = der_cert;
-			# Also save other certificate information about the primary cert.
-			c$ssl$subject = cert$subject;
-			c$ssl$issuer_subject = cert$issuer;
-			c$ssl$not_valid_before = cert$not_valid_before;
-			c$ssl$not_valid_after = cert$not_valid_after;
-			}
-		else
-			{
-			# Otherwise, add it to the cert validation chain.
-			c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert;
-			}
-		}
+	c$ssl$curve = ec_curves[curve];
	}

-event ssl_extension(c: connection, is_orig: bool, code: count, val: string) &priority=5
+event ssl_extension_server_name(c: connection, is_orig: bool, names: string_vec) &priority=5
	{
	set_session(c);

-	if ( is_orig && extensions[code] == "server_name" )
-		c$ssl$server_name = sub_bytes(val, 6, |val|);
+	if ( is_orig && |names| > 0 )
+		{
+		c$ssl$server_name = names[0];
+		if ( |names| > 1 )
+			event conn_weird("SSL_many_server_names", c, cat(names));
+		}
	}
event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priority=5 event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priority=5
@@ -228,26 +187,36 @@ event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priori
	c$ssl$last_alert = alert_descriptions[desc];
	}

-event ssl_established(c: connection) &priority=5
+event ssl_established(c: connection) &priority=7
	{
	set_session(c);
+	c$ssl$established = T;
	}

 event ssl_established(c: connection) &priority=-5
	{
-	finish(c);
+	finish(c, T);
+	}
+
+event connection_state_remove(c: connection) &priority=-5
+	{
+	if ( c?$ssl )
+		# called in case a SSL connection that has not been established terminates
+		finish(c, F);
	}

 event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
	{
-	# Check by checking for existence of c$ssl record.
-	if ( c?$ssl && atype == Analyzer::ANALYZER_SSL )
+	if ( atype == Analyzer::ANALYZER_SSL )
+		{
+		set_session(c);
		c$ssl$analyzer_id = aid;
+		}
	}

 event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
                          reason: string) &priority=5
	{
	if ( c?$ssl )
-		finish(c);
+		finish(c, T);
	}

File diff suppressed because one or more lines are too long


@@ -35,28 +35,37 @@ export {
	const notice_threshold = 10 &redef;
 }

-event file_hash(f: fa_file, kind: string, hash: string)
+function do_mhr_lookup(hash: string, fi: Notice::FileInfo)
	{
-	if ( kind=="sha1" && match_file_types in f$mime_type )
-		{
-		local hash_domain = fmt("%s.malware.hash.cymru.com", hash);
-		when ( local MHR_result = lookup_hostname_txt(hash_domain) )
-			{
-			# Data is returned as "<dateFirstDetected> <detectionRate>"
-			local MHR_answer = split1(MHR_result, / /);
-			if ( |MHR_answer| == 2 )
-				{
-				local mhr_first_detected = double_to_time(to_double(MHR_answer[1]));
-				local mhr_detect_rate = to_count(MHR_answer[2]);
-				local readable_first_detected = strftime("%Y-%m-%d %H:%M:%S", mhr_first_detected);
-				if ( mhr_detect_rate >= notice_threshold )
-					{
-					local message = fmt("Malware Hash Registry Detection rate: %d%% Last seen: %s", mhr_detect_rate, readable_first_detected);
-					local virustotal_url = fmt(match_sub_url, hash);
-					NOTICE([$note=Match, $msg=message, $sub=virustotal_url, $f=f]);
-					}
-				}
-			}
-		}
+	local hash_domain = fmt("%s.malware.hash.cymru.com", hash);
+	when ( local MHR_result = lookup_hostname_txt(hash_domain) )
+		{
+		# Data is returned as "<dateFirstDetected> <detectionRate>"
+		local MHR_answer = split1(MHR_result, / /);
+		if ( |MHR_answer| == 2 )
+			{
+			local mhr_detect_rate = to_count(MHR_answer[2]);
+			if ( mhr_detect_rate >= notice_threshold )
+				{
+				local mhr_first_detected = double_to_time(to_double(MHR_answer[1]));
+				local readable_first_detected = strftime("%Y-%m-%d %H:%M:%S", mhr_first_detected);
+				local message = fmt("Malware Hash Registry Detection rate: %d%% Last seen: %s", mhr_detect_rate, readable_first_detected);
+				local virustotal_url = fmt(match_sub_url, hash);
+				# We don't have the full fa_file record here in order to
+				# avoid the "when" statement cloning it (expensive!).
+				local n: Notice::Info = Notice::Info($note=Match, $msg=message, $sub=virustotal_url);
+				Notice::populate_file_info2(fi, n);
+				NOTICE(n);
+				}
+			}
+		}
	}
+
+event file_hash(f: fa_file, kind: string, hash: string)
+	{
+	if ( kind == "sha1" && f?$mime_type && match_file_types in f$mime_type )
+		do_mhr_lookup(hash, Notice::create_file_info(f));
+	}
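The Team Cymru MHR TXT answer arrives as a single string, "&lt;dateFirstDetected&gt; &lt;detectionRate&gt;" (epoch seconds and a percentage), which the script splits with `split1` and converts with `to_double`/`to_count`. A rough Python sketch of that parsing step (function name and the sample answer are illustrative, not from real traffic):

```python
NOTICE_THRESHOLD = 10  # mirrors the script's notice_threshold default

def parse_mhr_answer(txt: str):
    """Split a '<dateFirstDetected> <detectionRate>' TXT answer into
    (first_detected_epoch, detection_rate), or None if malformed."""
    parts = txt.split(" ", 1)  # like Bro's split1: at most one split
    if len(parts) != 2:
        return None
    return float(parts[0]), int(parts[1])

answer = parse_mhr_answer("1398471862 79")
assert answer == (1398471862.0, 79)
assert answer[1] >= NOTICE_THRESHOLD  # would raise a Match notice
```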


@@ -6,4 +6,5 @@
 @load ./http-url
 @load ./ssl
 @load ./smtp
 @load ./smtp-url-extraction
+@load ./x509


@@ -8,11 +8,17 @@ event http_header(c: connection, is_orig: bool, name: string, value: string)
	{
	switch ( name )
		{
		case "HOST":
-		Intel::seen([$indicator=value,
-		             $indicator_type=Intel::DOMAIN,
-		             $conn=c,
-		             $where=HTTP::IN_HOST_HEADER]);
+		if ( is_valid_ip(value) )
+			Intel::seen([$host=to_addr(value),
+			             $indicator_type=Intel::ADDR,
+			             $conn=c,
+			             $where=HTTP::IN_HOST_HEADER]);
+		else
+			Intel::seen([$indicator=value,
+			             $indicator_type=Intel::DOMAIN,
+			             $conn=c,
+			             $where=HTTP::IN_HOST_HEADER]);
		break;

		case "REFERER":


@@ -2,27 +2,6 @@
 @load base/protocols/ssl
 @load ./where-locations

-event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string)
-	{
-	if ( chain_idx == 0 )
-		{
-		if ( /emailAddress=/ in cert$subject )
-			{
-			local email = sub(cert$subject, /^.*emailAddress=/, "");
-			email = sub(email, /,.*$/, "");
-			Intel::seen([$indicator=email,
-			             $indicator_type=Intel::EMAIL,
-			             $conn=c,
-			             $where=(is_orig ? SSL::IN_CLIENT_CERT : SSL::IN_SERVER_CERT)]);
-			}
-		Intel::seen([$indicator=sha1_hash(der_cert),
-		             $indicator_type=Intel::CERT_HASH,
-		             $conn=c,
-		             $where=(is_orig ? SSL::IN_CLIENT_CERT : SSL::IN_SERVER_CERT)]);
-		}
-	}

 event ssl_extension(c: connection, is_orig: bool, code: count, val: string)
	{
	if ( is_orig && SSL::extensions[code] == "server_name" &&


@@ -21,9 +21,8 @@ export {
		SMTP::IN_REPLY_TO,
		SMTP::IN_X_ORIGINATING_IP_HEADER,
		SMTP::IN_MESSAGE,
-		SSL::IN_SERVER_CERT,
-		SSL::IN_CLIENT_CERT,
		SSL::IN_SERVER_NAME,
		SMTP::IN_HEADER,
+		X509::IN_CERT,
	};
 }


@@ -0,0 +1,16 @@
@load base/frameworks/intel
@load base/files/x509
@load ./where-locations
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
{
if ( /emailAddress=/ in cert$subject )
{
local email = sub(cert$subject, /^.*emailAddress=/, "");
email = sub(email, /,.*$/, "");
Intel::seen([$indicator=email,
$indicator_type=Intel::EMAIL,
$f=f,
$where=X509::IN_CERT]);
}
}


@@ -1,22 +0,0 @@
##! Calculate MD5 sums for server DER formatted certificates.
@load base/protocols/ssl
module SSL;
export {
redef record Info += {
## MD5 sum of the raw server certificate.
cert_hash: string &log &optional;
};
}
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=4
{
# We aren't tracking client certificates yet and we are also only tracking
# the primary cert. Watch that this came from an SSL analyzed session too.
if ( is_orig || chain_idx != 0 || ! c?$ssl )
return;
c$ssl$cert_hash = md5_hash(der_cert);
}


@@ -3,11 +3,10 @@
 ##! certificate.

 @load base/protocols/ssl
+@load base/files/x509
 @load base/frameworks/notice
 @load base/utils/directions-and-hosts
-@load protocols/ssl/cert-hash

 module SSL;

 export {
@ -35,30 +34,31 @@ export {
const notify_when_cert_expiring_in = 30days &redef; const notify_when_cert_expiring_in = 30days &redef;
} }
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3 event ssl_established(c: connection) &priority=3
{ {
# If this isn't the host cert or we aren't interested in the server, just return. # If there are no certificates or we are not interested in the server, just return.
if ( is_orig || if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
chain_idx != 0 ||
! c$ssl?$cert_hash ||
! addr_matches_host(c$id$resp_h, notify_certs_expiration) ) ! addr_matches_host(c$id$resp_h, notify_certs_expiration) )
return; return;
local fuid = c$ssl$cert_chain_fuids[0];
local cert = c$ssl$cert_chain[0]$x509$certificate;
if ( cert$not_valid_before > network_time() ) if ( cert$not_valid_before > network_time() )
NOTICE([$note=Certificate_Not_Valid_Yet, NOTICE([$note=Certificate_Not_Valid_Yet,
$conn=c, $suppress_for=1day, $conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s isn't valid until %T", cert$subject, cert$not_valid_before), $msg=fmt("Certificate %s isn't valid until %T", cert$subject, cert$not_valid_before),
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]); $fuid=fuid]);
else if ( cert$not_valid_after < network_time() ) else if ( cert$not_valid_after < network_time() )
NOTICE([$note=Certificate_Expired, NOTICE([$note=Certificate_Expired,
$conn=c, $suppress_for=1day, $conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s expired at %T", cert$subject, cert$not_valid_after), $msg=fmt("Certificate %s expired at %T", cert$subject, cert$not_valid_after),
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]); $fuid=fuid]);
else if ( cert$not_valid_after - notify_when_cert_expiring_in < network_time() ) else if ( cert$not_valid_after - notify_when_cert_expiring_in < network_time() )
NOTICE([$note=Certificate_Expires_Soon, NOTICE([$note=Certificate_Expires_Soon,
$msg=fmt("Certificate %s is going to expire at %T", cert$subject, cert$not_valid_after), $msg=fmt("Certificate %s is going to expire at %T", cert$subject, cert$not_valid_after),
$conn=c, $suppress_for=1day, $conn=c, $suppress_for=1day,
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]); $fuid=fuid]);
} }

View file

@ -10,8 +10,8 @@
##! ##!
@load base/protocols/ssl @load base/protocols/ssl
@load base/files/x509
@load base/utils/directions-and-hosts @load base/utils/directions-and-hosts
@load protocols/ssl/cert-hash
module SSL; module SSL;
@ -23,41 +23,31 @@ export {
} }
# This is an internally maintained variable to prevent relogging of # This is an internally maintained variable to prevent relogging of
# certificates that have already been seen. It is indexed on an md5 sum of # certificates that have already been seen. It is indexed on a SHA1 sum of
# the certificate. # the certificate.
global extracted_certs: set[string] = set() &read_expire=1hr &redef; global extracted_certs: set[string] = set() &read_expire=1hr &redef;
event ssl_established(c: connection) &priority=5 event ssl_established(c: connection) &priority=5
{ {
if ( ! c$ssl?$cert ) if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
return; return;
if ( ! addr_matches_host(c$id$resp_h, extract_certs_pem) ) if ( ! addr_matches_host(c$id$resp_h, extract_certs_pem) )
return; return;
if ( c$ssl$cert_hash in extracted_certs ) local hash = c$ssl$cert_chain[0]$sha1;
local cert = c$ssl$cert_chain[0]$x509$handle;
if ( hash in extracted_certs )
# If we already extracted this cert, don't do it again. # If we already extracted this cert, don't do it again.
return; return;
add extracted_certs[c$ssl$cert_hash]; add extracted_certs[hash];
local filename = Site::is_local_addr(c$id$resp_h) ? "certs-local.pem" : "certs-remote.pem"; local filename = Site::is_local_addr(c$id$resp_h) ? "certs-local.pem" : "certs-remote.pem";
local outfile = open_for_append(filename); local outfile = open_for_append(filename);
enable_raw_output(outfile);
print outfile, "-----BEGIN CERTIFICATE-----"; print outfile, x509_get_certificate_string(cert, T);
# Encode to base64 and format to fit 50 lines. Otherwise openssl won't like it later.
local lines = split_all(encode_base64(c$ssl$cert), /.{50}/);
local i = 1;
for ( line in lines )
{
if ( |lines[i]| > 0 )
{
print outfile, lines[i];
}
i+=1;
}
print outfile, "-----END CERTIFICATE-----";
print outfile, "";
close(outfile); close(outfile);
} }

View file

@ -0,0 +1,121 @@
##! Detect the TLS heartbleed attack. See http://heartbleed.com for more.
@load base/protocols/ssl
@load base/frameworks/notice
module Heartbleed;
export {
redef enum Notice::Type += {
## Indicates that a host is performing a heartbleed attack.
SSL_Heartbeat_Attack,
## Indicates that a host performing a heartbleed attack was probably successful.
SSL_Heartbeat_Attack_Success,
## Indicates we saw heartbeat requests with odd length. Probably an attack.
SSL_Heartbeat_Odd_Length,
## Indicates we saw many heartbeat requests without a reply. Might be an attack.
SSL_Heartbeat_Many_Requests
};
}
# Do not disable analyzers after detection - otherwise we will not notice
# encrypted attacks.
redef SSL::disable_analyzer_after_detection=F;
redef record SSL::Info += {
last_originator_heartbeat_request_size: count &optional;
last_responder_heartbeat_request_size: count &optional;
originator_heartbeats: count &default=0;
responder_heartbeats: count &default=0;
heartbleed_detected: bool &default=F;
};
event ssl_heartbeat(c: connection, is_orig: bool, length: count, heartbeat_type: count, payload_length: count, payload: string)
{
if ( heartbeat_type == 1 )
{
local checklength: count = (length<(3+16)) ? length : (length - 3 - 16);
if ( payload_length > checklength )
{
c$ssl$heartbleed_detected = T;
NOTICE([$note=SSL_Heartbeat_Attack,
$msg=fmt("A TLS heartbleed attack was detected! Record length %d, payload length %d", length, payload_length),
$conn=c,
$identifier=cat(c$uid, length, payload_length)
]);
}
}
if ( heartbeat_type == 2 && c$ssl$heartbleed_detected )
{
NOTICE([$note=SSL_Heartbeat_Attack_Success,
$msg=fmt("A TLS heartbleed attack detected earlier was probably exploited. Transmitted payload length in first packet: %d", payload_length),
$conn=c,
$identifier=c$uid
]);
}
}
event ssl_encrypted_heartbeat(c: connection, is_orig: bool, length: count)
{
if ( is_orig )
++c$ssl$originator_heartbeats;
else
++c$ssl$responder_heartbeats;
if ( c$ssl$originator_heartbeats > c$ssl$responder_heartbeats + 3 )
NOTICE([$note=SSL_Heartbeat_Many_Requests,
$msg=fmt("Seeing more than 3 heartbeat requests without replies from server. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
$conn=c,
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
]);
if ( c$ssl$responder_heartbeats > c$ssl$originator_heartbeats + 3 )
NOTICE([$note=SSL_Heartbeat_Many_Requests,
$msg=fmt("Server is sending more heartbeat responses than requests were seen. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
$conn=c,
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
]);
if ( is_orig && length < 19 )
NOTICE([$note=SSL_Heartbeat_Odd_Length,
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack. Message length: %d", length),
$conn=c,
$n=length,
$identifier=cat(c$uid, length)
]);
if ( is_orig )
{
if ( c$ssl?$last_responder_heartbeat_request_size )
{
# server originated heartbeat. Ignore & continue
delete c$ssl$last_responder_heartbeat_request_size;
}
else
c$ssl$last_originator_heartbeat_request_size = length;
}
else
{
if ( c$ssl?$last_originator_heartbeat_request_size && c$ssl$last_originator_heartbeat_request_size < length )
{
NOTICE([$note=SSL_Heartbeat_Attack_Success,
$msg=fmt("An encrypted TLS heartbleed attack was probably detected! First packet client record length %d, first packet server record length %d",
c$ssl$last_originator_heartbeat_request_size, length),
$conn=c,
$identifier=c$uid # only throw once per connection
]);
}
else if ( ! c$ssl?$last_originator_heartbeat_request_size )
c$ssl$last_responder_heartbeat_request_size = length;
if ( c$ssl?$last_originator_heartbeat_request_size )
delete c$ssl$last_originator_heartbeat_request_size;
}
}

View file

@ -3,7 +3,7 @@
@load base/utils/directions-and-hosts @load base/utils/directions-and-hosts
@load base/protocols/ssl @load base/protocols/ssl
@load protocols/ssl/cert-hash @load base/files/x509
module Known; module Known;
@ -31,9 +31,9 @@ export {
const cert_tracking = LOCAL_HOSTS &redef; const cert_tracking = LOCAL_HOSTS &redef;
## The set of all known certificates to store for preventing duplicate ## The set of all known certificates to store for preventing duplicate
## logging. It can also be used from other scripts to ## logging. It can also be used from other scripts to
## inspect if a certificate has been seen in use. The string value ## inspect if a certificate has been seen in use. The string value
## in the set is for storing the DER formatted certificate's MD5 hash. ## in the set is for storing the DER formatted certificate's SHA1 hash.
global certs: set[addr, string] &create_expire=1day &synchronized &redef; global certs: set[addr, string] &create_expire=1day &synchronized &redef;
## Event that can be handled to access the loggable record as it is sent ## Event that can be handled to access the loggable record as it is sent
@ -46,16 +46,27 @@ event bro_init() &priority=5
Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs]); Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs]);
} }
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3 event ssl_established(c: connection) &priority=3
{ {
# Make sure this is the server cert and we have a hash for it. if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| < 1 )
if ( is_orig || chain_idx != 0 || ! c$ssl?$cert_hash )
return; return;
local host = c$id$resp_h; local fuid = c$ssl$cert_chain_fuids[0];
if ( [host, c$ssl$cert_hash] !in certs && addr_matches_host(host, cert_tracking) )
if ( ! c$ssl$cert_chain[0]?$sha1 )
{ {
add certs[host, c$ssl$cert_hash]; Reporter::error(fmt("Certificate with fuid %s did not contain sha1 hash when checking for known certs. Aborting",
fuid));
return;
}
local hash = c$ssl$cert_chain[0]$sha1;
local cert = c$ssl$cert_chain[0]$x509$certificate;
local host = c$id$resp_h;
if ( [host, hash] !in certs && addr_matches_host(host, cert_tracking) )
{
add certs[host, hash];
Log::write(Known::CERTS_LOG, [$ts=network_time(), $host=host, Log::write(Known::CERTS_LOG, [$ts=network_time(), $host=host,
$port_num=c$id$resp_p, $subject=cert$subject, $port_num=c$id$resp_p, $subject=cert$subject,
$issuer_subject=cert$issuer, $issuer_subject=cert$issuer,

View file

@ -0,0 +1,68 @@
##! When this script is loaded, only the host certificates (client and server)
##! will be logged to x509.log. Logging of all other certificates will be suppressed.
@load base/protocols/ssl
@load base/files/x509
module X509;
export {
redef record Info += {
# Logging is suppressed if field is set to F
logcert: bool &default=T;
};
}
# We need both the Info and the fa_file record modified.
# The only instant when we have both, the connection and the
# file available without having to loop is in the file_over_new_connection
# event.
# When that event is raised, the x509 record in f$info (which is the only
# record the logging framework gets) is not yet available. So - we
# have to do this two times, sorry.
# Alternatively, we could place it into Files::Info first - but we would
# still have to copy it.
redef record fa_file += {
logcert: bool &default=T;
};
function host_certs_only(rec: X509::Info): bool
{
return rec$logcert;
}
event bro_init() &priority=2
{
local f = Log::get_filter(X509::LOG, "default");
Log::remove_filter(X509::LOG, "default"); # disable default logging
f$pred=host_certs_only; # and add our predicate
Log::add_filter(X509::LOG, f);
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=2
{
if ( ! c?$ssl )
return;
local chain: vector of string;
if ( is_orig )
chain = c$ssl$client_cert_chain_fuids;
else
chain = c$ssl$cert_chain_fuids;
if ( |chain| == 0 )
{
Reporter::warning(fmt("Certificate not in chain? (fuid %s)", f$id));
return;
}
# Check if this is the host certificate
if ( f$id != chain[0] )
f$logcert=F;
}
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=2
{
f$info$x509$logcert = f$logcert; # info record available, copy information.
}

View file

@ -16,7 +16,6 @@ export {
} }
redef record SSL::Info += { redef record SSL::Info += {
sha1: string &log &optional;
notary: Response &log &optional; notary: Response &log &optional;
}; };
@ -38,14 +37,12 @@ function clear_waitlist(digest: string)
} }
} }
event x509_certificate(c: connection, is_orig: bool, cert: X509, event ssl_established(c: connection) &priority=3
chain_idx: count, chain_len: count, der_cert: string)
{ {
if ( is_orig || chain_idx != 0 || ! c?$ssl ) if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
return; return;
local digest = sha1_hash(der_cert); local digest = c$ssl$cert_chain[0]$sha1;
c$ssl$sha1 = digest;
if ( digest in notary_cache ) if ( digest in notary_cache )
{ {

View file

@ -2,7 +2,6 @@
@load base/frameworks/notice @load base/frameworks/notice
@load base/protocols/ssl @load base/protocols/ssl
@load protocols/ssl/cert-hash
module SSL; module SSL;
@ -19,9 +18,9 @@ export {
validation_status: string &log &optional; validation_status: string &log &optional;
}; };
## MD5 hash values for recently validated certs along with the ## MD5 hash values for recently validated chains along with the
## validation status message are kept in this table to avoid constant ## validation status message are kept in this table to avoid constant
## validation every time the same certificate is seen. ## validation every time the same certificate chain is seen.
global recently_validated_certs: table[string] of string = table() global recently_validated_certs: table[string] of string = table()
&read_expire=5mins &synchronized &redef; &read_expire=5mins &synchronized &redef;
} }
@ -29,18 +28,26 @@ export {
event ssl_established(c: connection) &priority=3 event ssl_established(c: connection) &priority=3
{ {
# If there aren't any certs we can't very well do certificate validation. # If there aren't any certs we can't very well do certificate validation.
if ( ! c$ssl?$cert || ! c$ssl?$cert_chain ) if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
return; return;
if ( c$ssl?$cert_hash && c$ssl$cert_hash in recently_validated_certs ) local chain_id = join_string_vec(c$ssl$cert_chain_fuids, ".");
local chain: vector of opaque of x509 = vector();
for ( i in c$ssl$cert_chain )
{ {
c$ssl$validation_status = recently_validated_certs[c$ssl$cert_hash]; chain[i] = c$ssl$cert_chain[i]$x509$handle;
}
if ( chain_id in recently_validated_certs )
{
c$ssl$validation_status = recently_validated_certs[chain_id];
} }
else else
{ {
local result = x509_verify(c$ssl$cert, c$ssl$cert_chain, root_certs); local result = x509_verify(chain, root_certs);
c$ssl$validation_status = x509_err2str(result); c$ssl$validation_status = result$result_string;
recently_validated_certs[c$ssl$cert_hash] = c$ssl$validation_status; recently_validated_certs[chain_id] = result$result_string;
} }
if ( c$ssl$validation_status != "ok" ) if ( c$ssl$validation_status != "ok" )
@ -48,7 +55,7 @@ event ssl_established(c: connection) &priority=3
local message = fmt("SSL certificate validation failed with (%s)", c$ssl$validation_status); local message = fmt("SSL certificate validation failed with (%s)", c$ssl$validation_status);
NOTICE([$note=Invalid_Server_Cert, $msg=message, NOTICE([$note=Invalid_Server_Cert, $msg=message,
$sub=c$ssl$subject, $conn=c, $sub=c$ssl$subject, $conn=c,
$identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$validation_status,c$ssl$cert_hash)]); $identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$validation_status)]);
} }
} }

View file

@ -0,0 +1,91 @@
##! Generate notices when SSL/TLS connections use certificates or DH parameters
##! that have potentially unsafe key lengths.
@load base/protocols/ssl
@load base/frameworks/notice
@load base/utils/directions-and-hosts
module SSL;
export {
redef enum Notice::Type += {
## Indicates that a server is using a potentially unsafe key.
Weak_Key,
};
## The category of hosts you would like to be notified about which are
## using potentially unsafe keys. By default, these notices will be suppressed
## by the notice framework for 1 day after a particular host has had a notice
## generated. Choices are: LOCAL_HOSTS, REMOTE_HOSTS,
## ALL_HOSTS, NO_HOSTS
const notify_weak_keys = LOCAL_HOSTS &redef;
## The minimal key length in bits that is considered to be safe. Any shorter
## (non-EC) key lengths will trigger the notice.
const notify_minimal_key_length = 1024 &redef;
## Warn if the DH key length is smaller than the certificate key length. This is
## potentially unsafe because it gives a wrong impression of safety due to the
## certificate key length. However, it is very common and cannot be avoided in some
## settings (e.g. with old Java clients).
const notify_dh_length_shorter_cert_length = T &redef;
}
# We check key lengths only for DSA or RSA certificates. For others, we do
# not know what is safe (e.g. EC is safe even with very short key lengths).
event ssl_established(c: connection) &priority=3
{
# If there are no certificates or we are not interested in the server, just return.
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! addr_matches_host(c$id$resp_h, notify_weak_keys) )
return;
local fuid = c$ssl$cert_chain_fuids[0];
local cert = c$ssl$cert_chain[0]$x509$certificate;
if ( !cert?$key_type || !cert?$key_length )
return;
if ( cert$key_type != "dsa" && cert$key_type != "rsa" )
return;
local key_length = cert$key_length;
if ( key_length < notify_minimal_key_length )
NOTICE([$note=Weak_Key,
$msg=fmt("Host uses weak certificate with %d bit key", key_length),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$orig_h, c$id$orig_p, key_length)
]);
}
event ssl_dh_server_params(c: connection, p: string, q: string, Ys: string) &priority=3
{
if ( ! addr_matches_host(c$id$resp_h, notify_weak_keys) )
return;
local key_length = |Ys| * 8; # key length in bits
if ( key_length < notify_minimal_key_length )
NOTICE([$note=Weak_Key,
$msg=fmt("Host uses weak DH parameters with %d key bits", key_length),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$orig_h, c$id$orig_p, key_length)
]);
if ( notify_dh_length_shorter_cert_length &&
c?$ssl && c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 && c$ssl$cert_chain[0]?$x509 &&
c$ssl$cert_chain[0]$x509?$certificate && c$ssl$cert_chain[0]$x509$certificate?$key_type &&
(c$ssl$cert_chain[0]$x509$certificate$key_type == "rsa" ||
c$ssl$cert_chain[0]$x509$certificate$key_type == "dsa" ))
{
if ( c$ssl$cert_chain[0]$x509$certificate?$key_length &&
c$ssl$cert_chain[0]$x509$certificate$key_length > key_length )
NOTICE([$note=Weak_Key,
$msg=fmt("DH key length of %d bits is smaller than the certificate key length of %d bits",
key_length, c$ssl$cert_chain[0]$x509$certificate$key_length),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$orig_h, c$id$orig_p)
]);
}
}

View file

@ -0,0 +1,4 @@
##! Loading this script will cause all logs to be written
##! out as JSON by default.
redef LogAscii::use_json=T;

View file

@ -55,6 +55,9 @@
# This script enables SSL/TLS certificate validation. # This script enables SSL/TLS certificate validation.
@load protocols/ssl/validate-certs @load protocols/ssl/validate-certs
# This script prevents the logging of SSL CA certificates in x509.log
@load protocols/ssl/log-hostcerts-only
# Uncomment the following line to check each SSL certificate hash against the ICSI # Uncomment the following line to check each SSL certificate hash against the ICSI
# certificate notary service; see http://notary.icsi.berkeley.edu . # certificate notary service; see http://notary.icsi.berkeley.edu .
# @load protocols/ssl/notary # @load protocols/ssl/notary
@ -78,3 +81,6 @@
# Detect SHA1 sums in Team Cymru's Malware Hash Registry. # Detect SHA1 sums in Team Cymru's Malware Hash Registry.
@load frameworks/files/detect-MHR @load frameworks/files/detect-MHR
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
# this might impact performance a bit.
# @load policy/protocols/ssl/heartbleed

View file

@ -26,6 +26,7 @@
@load frameworks/intel/seen/smtp.bro @load frameworks/intel/seen/smtp.bro
@load frameworks/intel/seen/ssl.bro @load frameworks/intel/seen/ssl.bro
@load frameworks/intel/seen/where-locations.bro @load frameworks/intel/seen/where-locations.bro
@load frameworks/intel/seen/x509.bro
@load frameworks/files/detect-MHR.bro @load frameworks/files/detect-MHR.bro
@load frameworks/files/hash-all-files.bro @load frameworks/files/hash-all-files.bro
@load frameworks/packet-filter/shunt.bro @load frameworks/packet-filter/shunt.bro
@ -82,17 +83,20 @@
@load protocols/ssh/geo-data.bro @load protocols/ssh/geo-data.bro
@load protocols/ssh/interesting-hostnames.bro @load protocols/ssh/interesting-hostnames.bro
@load protocols/ssh/software.bro @load protocols/ssh/software.bro
@load protocols/ssl/cert-hash.bro
@load protocols/ssl/expiring-certs.bro @load protocols/ssl/expiring-certs.bro
@load protocols/ssl/extract-certs-pem.bro @load protocols/ssl/extract-certs-pem.bro
@load protocols/ssl/heartbleed.bro
@load protocols/ssl/known-certs.bro @load protocols/ssl/known-certs.bro
@load protocols/ssl/log-hostcerts-only.bro
#@load protocols/ssl/notary.bro #@load protocols/ssl/notary.bro
@load protocols/ssl/validate-certs.bro @load protocols/ssl/validate-certs.bro
@load protocols/ssl/weak-keys.bro
@load tuning/__load__.bro @load tuning/__load__.bro
@load tuning/defaults/__load__.bro @load tuning/defaults/__load__.bro
@load tuning/defaults/extracted_file_limits.bro @load tuning/defaults/extracted_file_limits.bro
@load tuning/defaults/packet-fragments.bro @load tuning/defaults/packet-fragments.bro
@load tuning/defaults/warnings.bro @load tuning/defaults/warnings.bro
@load tuning/json-logs.bro
@load tuning/logs-to-elasticsearch.bro @load tuning/logs-to-elasticsearch.bro
@load tuning/track-all-assets.bro @load tuning/track-all-assets.bro

@ -1 +1 @@
Subproject commit 42a4c9694a2b2677b050fbb7cbae26bc5ec4605a Subproject commit 3b3e189dab3801cd0474dfdd376d9de633cd3766

View file

@ -104,7 +104,7 @@ Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string&
Base64Converter::~Base64Converter() Base64Converter::~Base64Converter()
{ {
if ( base64_table != default_base64_table ) if ( base64_table != default_base64_table )
delete base64_table; delete [] base64_table;
} }
int Base64Converter::Decode(int len, const char* data, int* pblen, char** pbuf) int Base64Converter::Decode(int len, const char* data, int* pblen, char** pbuf)

View file

@ -9,6 +9,9 @@ set(bro_ALL_GENERATED_OUTPUTS CACHE INTERNAL "automatically generated files" FO
# This collects bif inputs that we'll load automatically. # This collects bif inputs that we'll load automatically.
set(bro_AUTO_BIFS CACHE INTERNAL "BIFs for automatic inclusion" FORCE) set(bro_AUTO_BIFS CACHE INTERNAL "BIFs for automatic inclusion" FORCE)
set(bro_BASE_BIF_SCRIPTS CACHE INTERNAL "Bro script stubs for BIFs in base distribution of Bro" FORCE)
set(bro_PLUGIN_BIF_SCRIPTS CACHE INTERNAL "Bro script stubs for BIFs in Bro plugins" FORCE)
# If TRUE, use CMake's object libraries for sub-directories instead of # If TRUE, use CMake's object libraries for sub-directories instead of
# static libraries. This requires CMake >= 2.8.8. # static libraries. This requires CMake >= 2.8.8.
set(bro_HAVE_OBJECT_LIBRARIES FALSE) set(bro_HAVE_OBJECT_LIBRARIES FALSE)
@ -293,7 +296,6 @@ set(bro_SRCS
OpaqueVal.cc OpaqueVal.cc
OSFinger.cc OSFinger.cc
PacketFilter.cc PacketFilter.cc
PacketSort.cc
PersistenceSerializer.cc PersistenceSerializer.cc
PktSrc.cc PktSrc.cc
PolicyFile.cc PolicyFile.cc
@ -336,11 +338,13 @@ set(bro_SRCS
strsep.c strsep.c
modp_numtoa.c modp_numtoa.c
threading/AsciiFormatter.cc
threading/BasicThread.cc threading/BasicThread.cc
threading/Formatter.cc
threading/Manager.cc threading/Manager.cc
threading/MsgThread.cc threading/MsgThread.cc
threading/SerialTypes.cc threading/SerialTypes.cc
threading/formatters/Ascii.cc
threading/formatters/JSON.cc
logging/Manager.cc logging/Manager.cc
logging/WriterBackend.cc logging/WriterBackend.cc
@ -388,9 +392,6 @@ install(TARGETS bro DESTINATION bin)
set(BRO_EXE bro set(BRO_EXE bro
CACHE STRING "Bro executable binary" FORCE) CACHE STRING "Bro executable binary" FORCE)
# External libmagic project must be built before bro.
add_dependencies(bro libmagic)
# Target to create all the autogenerated files. # Target to create all the autogenerated files.
add_custom_target(generate_outputs_stage1) add_custom_target(generate_outputs_stage1)
add_dependencies(generate_outputs_stage1 ${bro_ALL_GENERATED_OUTPUTS}) add_dependencies(generate_outputs_stage1 ${bro_ALL_GENERATED_OUTPUTS})
@ -404,12 +405,12 @@ add_custom_target(generate_outputs)
add_dependencies(generate_outputs generate_outputs_stage2) add_dependencies(generate_outputs generate_outputs_stage2)
# Build __load__.bro files for standard *.bif.bro. # Build __load__.bro files for standard *.bif.bro.
bro_bif_create_loader(bif_loader ${CMAKE_BINARY_DIR}/scripts/base/bif) bro_bif_create_loader(bif_loader "${bro_BASE_BIF_SCRIPTS}")
add_dependencies(bif_loader ${bro_SUBDIRS}) add_dependencies(bif_loader ${bro_SUBDIRS})
add_dependencies(bro bif_loader) add_dependencies(bro bif_loader)
# Build __load__.bro files for plugins/*.bif.bro. # Build __load__.bro files for plugins/*.bif.bro.
bro_bif_create_loader(bif_loader_plugins ${CMAKE_BINARY_DIR}/scripts/base/bif/plugins) bro_bif_create_loader(bif_loader_plugins "${bro_PLUGIN_BIF_SCRIPTS}")
add_dependencies(bif_loader_plugins ${bro_SUBDIRS}) add_dependencies(bif_loader_plugins ${bro_SUBDIRS})
add_dependencies(bro bif_loader_plugins) add_dependencies(bro bif_loader_plugins)

View file

@ -127,12 +127,7 @@ ChunkedIOFd::~ChunkedIOFd()
delete [] read_buffer; delete [] read_buffer;
delete [] write_buffer; delete [] write_buffer;
safe_close(fd); safe_close(fd);
delete partial;
if ( partial )
{
delete [] partial->data;
delete partial;
}
} }
bool ChunkedIOFd::Write(Chunk* chunk) bool ChunkedIOFd::Write(Chunk* chunk)
@ -169,10 +164,9 @@ bool ChunkedIOFd::Write(Chunk* chunk)
while ( left ) while ( left )
{ {
Chunk* part = new Chunk; uint32 sz = min<uint32>(BUFFER_SIZE - sizeof(uint32), left);
Chunk* part = new Chunk(new char[sz], sz);
part->len = min<uint32>(BUFFER_SIZE - sizeof(uint32), left);
part->data = new char[part->len];
memcpy(part->data, p, part->len); memcpy(part->data, p, part->len);
left -= part->len; left -= part->len;
p += part->len; p += part->len;
@ -181,9 +175,7 @@ bool ChunkedIOFd::Write(Chunk* chunk)
return false; return false;
} }
delete [] chunk->data;
delete chunk; delete chunk;
return true; return true;
} }
@ -239,7 +231,6 @@ bool ChunkedIOFd::PutIntoWriteBuffer(Chunk* chunk)
memcpy(write_buffer + write_len, chunk->data, len); memcpy(write_buffer + write_len, chunk->data, len);
write_len += len; write_len += len;
delete [] chunk->data;
delete chunk; delete chunk;
if ( network_time - last_flush > 0.005 ) if ( network_time - last_flush > 0.005 )
@ -362,9 +353,7 @@ ChunkedIO::Chunk* ChunkedIOFd::ExtractChunk()
read_pos += sizeof(uint32); read_pos += sizeof(uint32);
Chunk* chunk = new Chunk; Chunk* chunk = new Chunk(new char[real_len], len);
chunk->len = len;
chunk->data = new char[real_len];
memcpy(chunk->data, read_buffer + read_pos, real_len); memcpy(chunk->data, read_buffer + read_pos, real_len);
read_pos += real_len; read_pos += real_len;
@ -375,17 +364,13 @@ ChunkedIO::Chunk* ChunkedIOFd::ExtractChunk()
ChunkedIO::Chunk* ChunkedIOFd::ConcatChunks(Chunk* c1, Chunk* c2) ChunkedIO::Chunk* ChunkedIOFd::ConcatChunks(Chunk* c1, Chunk* c2)
{ {
Chunk* c = new Chunk; uint32 sz = c1->len + c2->len;
Chunk* c = new Chunk(new char[sz], sz);
c->len = c1->len + c2->len;
c->data = new char[c->len];
memcpy(c->data, c1->data, c1->len); memcpy(c->data, c1->data, c1->len);
memcpy(c->data + c1->len, c2->data, c2->len); memcpy(c->data + c1->len, c2->data, c2->len);
delete [] c1->data;
delete c1; delete c1;
delete [] c2->data;
delete c2; delete c2;
return c; return c;
@ -627,7 +612,6 @@ void ChunkedIOFd::Clear()
while ( pending_head ) while ( pending_head )
{ {
ChunkQueue* next = pending_head->next; ChunkQueue* next = pending_head->next;
delete [] pending_head->chunk->data;
delete pending_head->chunk; delete pending_head->chunk;
delete pending_head; delete pending_head;
pending_head = next; pending_head = next;
@ -946,7 +930,6 @@ bool ChunkedIOSSL::Flush()
--stats.pending; --stats.pending;
delete q; delete q;
delete [] c->data;
delete c; delete c;
write_state = LEN; write_state = LEN;
@ -1063,7 +1046,10 @@ bool ChunkedIOSSL::Read(Chunk** chunk, bool mayblock)
} }
if ( ! read_chunk->data ) if ( ! read_chunk->data )
{
read_chunk->data = new char[read_chunk->len]; read_chunk->data = new char[read_chunk->len];
read_chunk->free_func = Chunk::free_func_delete;
}
if ( ! ReadData(read_chunk->data, read_chunk->len, &error) ) if ( ! ReadData(read_chunk->data, read_chunk->len, &error) )
return ! error; return ! error;
@ -1123,7 +1109,6 @@ void ChunkedIOSSL::Clear()
while ( write_head ) while ( write_head )
{ {
Queue* next = write_head->next; Queue* next = write_head->next;
delete [] write_head->chunk->data;
delete write_head->chunk; delete write_head->chunk;
delete write_head; delete write_head;
write_head = next; write_head = next;
@ -1231,12 +1216,13 @@ bool CompressedChunkedIO::Read(Chunk** chunk, bool may_block)
return false; return false;
} }
delete [] (*chunk)->data; (*chunk)->free_func((*chunk)->data);
uncompressed_bytes_read += uncompressed_len; uncompressed_bytes_read += uncompressed_len;
(*chunk)->len = uncompressed_len; (*chunk)->len = uncompressed_len;
(*chunk)->data = uncompressed; (*chunk)->data = uncompressed;
(*chunk)->free_func = Chunk::free_func_delete;
return true; return true;
} }
@ -1280,8 +1266,9 @@ bool CompressedChunkedIO::Write(Chunk* chunk)
memcpy(compressed, chunk->data, chunk->len); memcpy(compressed, chunk->data, chunk->len);
*(uint32*) (compressed + chunk->len) = 0; // uncompressed_length *(uint32*) (compressed + chunk->len) = 0; // uncompressed_length
delete [] chunk->data; chunk->free_func(chunk->data);
chunk->data = compressed; chunk->data = compressed;
chunk->free_func = Chunk::free_func_delete;
chunk->len += 4; chunk->len += 4;
DBG_LOG(DBG_CHUNKEDIO, "zlib write pass-through: size=%d", chunk->len); DBG_LOG(DBG_CHUNKEDIO, "zlib write pass-through: size=%d", chunk->len);
@ -1322,8 +1309,9 @@ bool CompressedChunkedIO::Write(Chunk* chunk)
*(uint32*) zout.next_out = original_size; // uncompressed_length *(uint32*) zout.next_out = original_size; // uncompressed_length
delete [] chunk->data; chunk->free_func(chunk->data);
chunk->data = compressed; chunk->data = compressed;
chunk->free_func = Chunk::free_func_delete;
chunk->len = chunk->len =
((char*) zout.next_out - compressed) + sizeof(uint32); ((char*) zout.next_out - compressed) + sizeof(uint32);

View file

@ -11,7 +11,7 @@
#ifdef NEED_KRB5_H #ifdef NEED_KRB5_H
# include <krb5.h> # include <krb5.h>
#endif #endif
#include <openssl/ssl.h> #include <openssl/ssl.h>
#include <openssl/err.h> #include <openssl/err.h>
@ -26,10 +26,28 @@ public:
ChunkedIO(); ChunkedIO();
virtual ~ChunkedIO() { } virtual ~ChunkedIO() { }
typedef struct { struct Chunk {
typedef void (*FreeFunc)(char*);
static void free_func_free(char* data) { free(data); }
static void free_func_delete(char* data) { delete [] data; }
Chunk()
: data(), len(), free_func(free_func_delete)
{ }
// Takes ownership of data.
Chunk(char* arg_data, uint32 arg_len,
FreeFunc arg_ff = free_func_delete)
: data(arg_data), len(arg_len), free_func(arg_ff)
{ }
~Chunk()
{ free_func(data); }
char* data; char* data;
uint32 len; uint32 len;
} Chunk; FreeFunc free_func;
};
// Initialization before any I/O operation is performed. Returns false // Initialization before any I/O operation is performed. Returns false
// on any form of error. // on any form of error.


@ -15,6 +15,7 @@
#include "binpac.h" #include "binpac.h"
#include "TunnelEncapsulation.h" #include "TunnelEncapsulation.h"
#include "analyzer/Analyzer.h" #include "analyzer/Analyzer.h"
#include "analyzer/Manager.h"
void ConnectionTimer::Init(Connection* arg_conn, timer_func arg_timer, void ConnectionTimer::Init(Connection* arg_conn, timer_func arg_timer,
int arg_do_expire) int arg_do_expire)
@ -722,8 +723,8 @@ TimerMgr* Connection::GetTimerMgr() const
void Connection::FlipRoles() void Connection::FlipRoles()
{ {
IPAddr tmp_addr = resp_addr; IPAddr tmp_addr = resp_addr;
orig_addr = resp_addr; resp_addr = orig_addr;
resp_addr = tmp_addr; orig_addr = tmp_addr;
uint32 tmp_port = resp_port; uint32 tmp_port = resp_port;
resp_port = orig_port; resp_port = orig_port;
@ -742,6 +743,8 @@ void Connection::FlipRoles()
if ( root_analyzer ) if ( root_analyzer )
root_analyzer->FlipRoles(); root_analyzer->FlipRoles();
analyzer_mgr->ApplyScheduledAnalyzers(this);
} }
unsigned int Connection::MemoryAllocation() const unsigned int Connection::MemoryAllocation() const
@ -808,6 +811,17 @@ void Connection::Describe(ODesc* d) const
d->NL(); d->NL();
} }
void Connection::IDString(ODesc* d) const
{
d->Add(orig_addr);
d->AddRaw(":", 1);
d->Add(ntohs(orig_port));
d->AddRaw(" > ", 3);
d->Add(resp_addr);
d->AddRaw(":", 1);
d->Add(ntohs(resp_port));
}
bool Connection::Serialize(SerialInfo* info) const bool Connection::Serialize(SerialInfo* info) const
{ {
return SerialObj::Serialize(info); return SerialObj::Serialize(info);


@ -204,6 +204,7 @@ public:
bool IsPersistent() { return persistent; } bool IsPersistent() { return persistent; }
void Describe(ODesc* d) const; void Describe(ODesc* d) const;
void IDString(ODesc* d) const;
TimerMgr* GetTimerMgr() const; TimerMgr* GetTimerMgr() const;


@ -211,9 +211,10 @@ void DFA_State::Dump(FILE* f, DFA_Machine* m)
if ( accept ) if ( accept )
{ {
for ( int i = 0; i < accept->length(); ++i ) AcceptingSet::const_iterator it;
fprintf(f, "%s accept #%d",
i > 0 ? "," : "", int((*accept)[i])); for ( it = accept->begin(); it != accept->end(); ++it )
fprintf(f, "%s accept #%d", it == accept->begin() ? "" : ",", *it);
} }
fprintf(f, "\n"); fprintf(f, "\n");
@ -285,7 +286,7 @@ unsigned int DFA_State::Size()
{ {
return sizeof(*this) return sizeof(*this)
+ pad_size(sizeof(DFA_State*) * num_sym) + pad_size(sizeof(DFA_State*) * num_sym)
+ (accept ? pad_size(sizeof(int) * accept->length()) : 0) + (accept ? pad_size(sizeof(int) * accept->size()) : 0)
+ (nfa_states ? pad_size(sizeof(NFA_State*) * nfa_states->length()) : 0) + (nfa_states ? pad_size(sizeof(NFA_State*) * nfa_states->length()) : 0)
+ (meta_ec ? meta_ec->Size() : 0) + (meta_ec ? meta_ec->Size() : 0)
+ (centry ? padded_sizeof(CacheEntry) : 0); + (centry ? padded_sizeof(CacheEntry) : 0);
@ -470,33 +471,20 @@ int DFA_Machine::StateSetToDFA_State(NFA_state_list* state_set,
return 0; return 0;
AcceptingSet* accept = new AcceptingSet; AcceptingSet* accept = new AcceptingSet;
for ( int i = 0; i < state_set->length(); ++i ) for ( int i = 0; i < state_set->length(); ++i )
{ {
int acc = (*state_set)[i]->Accept(); int acc = (*state_set)[i]->Accept();
if ( acc != NO_ACCEPT ) if ( acc != NO_ACCEPT )
{ accept->insert(acc);
int j;
for ( j = 0; j < accept->length(); ++j )
if ( (*accept)[j] == acc )
break;
if ( j >= accept->length() )
// It's not already present.
accept->append(acc);
}
} }
if ( accept->length() == 0 ) if ( accept->empty() )
{ {
delete accept; delete accept;
accept = 0; accept = 0;
} }
else
{
accept->sort(int_list_cmp);
accept->resize(0);
}
DFA_State* ds = new DFA_State(state_count++, ec, state_set, accept); DFA_State* ds = new DFA_State(state_count++, ec, state_set, accept);
d = dfa_state_cache->Insert(ds, hash); d = dfa_state_cache->Insert(ds, hash);
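The change above swaps a hand-rolled append-if-absent loop plus explicit sort for `std::set`, which keeps accept IDs unique and ordered by construction. A standalone sketch of the same idea (the function name and parameters are illustrative, not from the patch):

```cpp
#include <cassert>
#include <set>

typedef std::set<int> AcceptingSet;

// std::set::insert() silently ignores duplicates and iteration order is
// already sorted, replacing the old manual dedup loop and sort() call.
AcceptingSet collect_accepts(const int* accepts, int n, int no_accept) {
    AcceptingSet result;
    for ( int i = 0; i < n; ++i )
        if ( accepts[i] != no_accept )
            result.insert(accepts[i]);
    return result;
}
```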


@ -2,6 +2,7 @@
#include "config.h" #include "config.h"
#include <openssl/md5.h>
#include <sys/types.h> #include <sys/types.h>
#include <sys/socket.h> #include <sys/socket.h>
#ifdef TIME_WITH_SYS_TIME #ifdef TIME_WITH_SYS_TIME
@ -385,7 +386,6 @@ DNS_Mgr::DNS_Mgr(DNS_MgrMode arg_mode)
dns_mapping_altered = 0; dns_mapping_altered = 0;
dm_rec = 0; dm_rec = 0;
dns_fake_count = 0;
cache_name = dir = 0; cache_name = dir = 0;
@ -443,6 +443,33 @@ bool DNS_Mgr::Init()
return true; return true;
} }
static TableVal* fake_name_lookup_result(const char* name)
{
uint32 hash[4];
MD5(reinterpret_cast<const u_char*>(name), strlen(name),
reinterpret_cast<u_char*>(hash));
ListVal* hv = new ListVal(TYPE_ADDR);
hv->Append(new AddrVal(hash));
TableVal* tv = hv->ConvertToSet();
Unref(hv);
return tv;
}
static const char* fake_text_lookup_result(const char* name)
{
static char tmp[32 + 256];
snprintf(tmp, sizeof(tmp), "fake_text_lookup_result_%s", name);
return tmp;
}
static const char* fake_addr_lookup_result(const IPAddr& addr)
{
static char tmp[128];
snprintf(tmp, sizeof(tmp), "fake_addr_lookup_result_%s",
addr.AsString().c_str());
return tmp;
}
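The helpers above make DNS_FAKE answers deterministic: the fake result is derived from a hash of the query name (MD5 in the patch) rather than a global counter, so the same name always resolves to the same fake value regardless of query order. A dependency-free sketch of the idea, substituting FNV-1a for the patch's MD5 and using a made-up address format:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>

// FNV-1a stands in for MD5 here purely to avoid an OpenSSL dependency.
static uint32_t fnv1a(const char* s) {
    uint32_t h = 2166136261u;
    for ( ; *s; ++s ) {
        h ^= (unsigned char) *s;
        h *= 16777619u;
    }
    return h;
}

// Hypothetical stand-in for fake_name_lookup_result(): derive a stable
// pseudo-address from the hash of the name.
std::string fake_addr_for(const char* name) {
    uint32_t h = fnv1a(name);
    char buf[32];
    snprintf(buf, sizeof(buf), "10.%u.%u.%u",
             (h >> 16) & 0xff, (h >> 8) & 0xff, h & 0xff);
    return std::string(buf);
}
```

Determinism is the point: repeated lookups of one name agree, and distinct names almost always map to distinct fake addresses.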
TableVal* DNS_Mgr::LookupHost(const char* name) TableVal* DNS_Mgr::LookupHost(const char* name)
{ {
if ( ! nb_dns ) if ( ! nb_dns )
@ -452,11 +479,7 @@ TableVal* DNS_Mgr::LookupHost(const char* name)
Init(); Init();
if ( mode == DNS_FAKE ) if ( mode == DNS_FAKE )
{ return fake_name_lookup_result(name);
ListVal* hv = new ListVal(TYPE_ADDR);
hv->Append(new AddrVal(uint32(++dns_fake_count)));
return hv->ConvertToSet();
}
if ( mode != DNS_PRIME ) if ( mode != DNS_PRIME )
{ {
@ -960,7 +983,7 @@ const char* DNS_Mgr::LookupAddrInCache(const IPAddr& addr)
return d->names ? d->names[0] : "<\?\?\?>"; return d->names ? d->names[0] : "<\?\?\?>";
} }
TableVal* DNS_Mgr::LookupNameInCache(string name) TableVal* DNS_Mgr::LookupNameInCache(const string& name)
{ {
HostMap::iterator it = host_mappings.find(name); HostMap::iterator it = host_mappings.find(name);
if ( it == host_mappings.end() ) if ( it == host_mappings.end() )
@ -990,7 +1013,7 @@ TableVal* DNS_Mgr::LookupNameInCache(string name)
return tv6; return tv6;
} }
const char* DNS_Mgr::LookupTextInCache(string name) const char* DNS_Mgr::LookupTextInCache(const string& name)
{ {
TextMap::iterator it = text_mappings.find(name); TextMap::iterator it = text_mappings.find(name);
if ( it == text_mappings.end() ) if ( it == text_mappings.end() )
@ -1010,17 +1033,37 @@ const char* DNS_Mgr::LookupTextInCache(string name)
return d->names ? d->names[0] : "<\?\?\?>"; return d->names ? d->names[0] : "<\?\?\?>";
} }
static void resolve_lookup_cb(DNS_Mgr::LookupCallback* callback,
TableVal* result)
{
callback->Resolved(result);
Unref(result);
delete callback;
}
static void resolve_lookup_cb(DNS_Mgr::LookupCallback* callback,
const char* result)
{
callback->Resolved(result);
delete callback;
}
void DNS_Mgr::AsyncLookupAddr(const IPAddr& host, LookupCallback* callback) void DNS_Mgr::AsyncLookupAddr(const IPAddr& host, LookupCallback* callback)
{ {
if ( ! did_init ) if ( ! did_init )
Init(); Init();
if ( mode == DNS_FAKE )
{
resolve_lookup_cb(callback, fake_addr_lookup_result(host));
return;
}
// Do we already know the answer? // Do we already know the answer?
const char* name = LookupAddrInCache(host); const char* name = LookupAddrInCache(host);
if ( name ) if ( name )
{ {
callback->Resolved(name); resolve_lookup_cb(callback, name);
delete callback;
return; return;
} }
@ -1044,18 +1087,22 @@ void DNS_Mgr::AsyncLookupAddr(const IPAddr& host, LookupCallback* callback)
IssueAsyncRequests(); IssueAsyncRequests();
} }
void DNS_Mgr::AsyncLookupName(string name, LookupCallback* callback) void DNS_Mgr::AsyncLookupName(const string& name, LookupCallback* callback)
{ {
if ( ! did_init ) if ( ! did_init )
Init(); Init();
if ( mode == DNS_FAKE )
{
resolve_lookup_cb(callback, fake_name_lookup_result(name.c_str()));
return;
}
// Do we already know the answer? // Do we already know the answer?
TableVal* addrs = LookupNameInCache(name); TableVal* addrs = LookupNameInCache(name);
if ( addrs ) if ( addrs )
{ {
callback->Resolved(addrs); resolve_lookup_cb(callback, addrs);
Unref(addrs);
delete callback;
return; return;
} }
@ -1079,13 +1126,25 @@ void DNS_Mgr::AsyncLookupName(string name, LookupCallback* callback)
IssueAsyncRequests(); IssueAsyncRequests();
} }
void DNS_Mgr::AsyncLookupNameText(string name, LookupCallback* callback) void DNS_Mgr::AsyncLookupNameText(const string& name, LookupCallback* callback)
{ {
if ( ! did_init ) if ( ! did_init )
Init(); Init();
if ( mode == DNS_FAKE )
{
resolve_lookup_cb(callback, fake_text_lookup_result(name.c_str()));
return;
}
// Do we already know the answer? // Do we already know the answer?
TableVal* addrs; const char* txt = LookupTextInCache(name);
if ( txt )
{
resolve_lookup_cb(callback, txt);
return;
}
AsyncRequest* req = 0; AsyncRequest* req = 0;


@ -62,8 +62,8 @@ public:
int Save(); int Save();
const char* LookupAddrInCache(const IPAddr& addr); const char* LookupAddrInCache(const IPAddr& addr);
TableVal* LookupNameInCache(string name); TableVal* LookupNameInCache(const string& name);
const char* LookupTextInCache(string name); const char* LookupTextInCache(const string& name);
// Support for async lookups. // Support for async lookups.
class LookupCallback { class LookupCallback {
@ -77,8 +77,8 @@ public:
}; };
void AsyncLookupAddr(const IPAddr& host, LookupCallback* callback); void AsyncLookupAddr(const IPAddr& host, LookupCallback* callback);
void AsyncLookupName(string name, LookupCallback* callback); void AsyncLookupName(const string& name, LookupCallback* callback);
void AsyncLookupNameText(string name, LookupCallback* callback); void AsyncLookupNameText(const string& name, LookupCallback* callback);
struct Stats { struct Stats {
unsigned long requests; // These count only async requests. unsigned long requests; // These count only async requests.
@ -163,8 +163,6 @@ protected:
RecordType* dm_rec; RecordType* dm_rec;
int dns_fake_count; // used to generate unique fake replies
typedef list<LookupCallback*> CallbackList; typedef list<LookupCallback*> CallbackList;
struct AsyncRequest { struct AsyncRequest {


@ -192,6 +192,7 @@ static void parse_function_name(vector<ParseLocationRec>& result,
string fullname = make_full_var_name(current_module.c_str(), s.c_str()); string fullname = make_full_var_name(current_module.c_str(), s.c_str());
debug_msg("Function %s not defined.\n", fullname.c_str()); debug_msg("Function %s not defined.\n", fullname.c_str());
plr.type = plrUnknown; plr.type = plrUnknown;
Unref(id);
return; return;
} }
@ -199,6 +200,7 @@ static void parse_function_name(vector<ParseLocationRec>& result,
{ {
debug_msg("Function %s not declared.\n", id->Name()); debug_msg("Function %s not declared.\n", id->Name());
plr.type = plrUnknown; plr.type = plrUnknown;
Unref(id);
return; return;
} }
@ -206,6 +208,7 @@ static void parse_function_name(vector<ParseLocationRec>& result,
{ {
debug_msg("Function %s declared but not defined.\n", id->Name()); debug_msg("Function %s declared but not defined.\n", id->Name());
plr.type = plrUnknown; plr.type = plrUnknown;
Unref(id);
return; return;
} }
@ -216,9 +219,12 @@ static void parse_function_name(vector<ParseLocationRec>& result,
{ {
debug_msg("Function %s is a built-in function\n", id->Name()); debug_msg("Function %s is a built-in function\n", id->Name());
plr.type = plrUnknown; plr.type = plrUnknown;
Unref(id);
return; return;
} }
Unref(id);
Stmt* body = 0; // the particular body we care about; 0 = all Stmt* body = 0; // the particular body we care about; 0 = all
if ( bodies.size() == 1 ) if ( bodies.size() == 1 )


@ -33,10 +33,12 @@ enum DebugStream {
NUM_DBGS // Has to be last NUM_DBGS // Has to be last
}; };
#define DBG_LOG(args...) debug_logger.Log(args) #define DBG_LOG(stream, args...) \
#define DBG_LOG_VERBOSE(args...) \ if ( debug_logger.IsEnabled(stream) ) \
if ( debug_logger.IsVerbose() ) \ debug_logger.Log(stream, args)
debug_logger.Log(args) #define DBG_LOG_VERBOSE(stream, args...) \
if ( debug_logger.IsVerbose() && debug_logger.IsEnabled(stream) ) \
debug_logger.Log(stream, args)
#define DBG_PUSH(stream) debug_logger.PushIndent(stream) #define DBG_PUSH(stream) debug_logger.PushIndent(stream)
#define DBG_POP(stream) debug_logger.PopIndent(stream) #define DBG_POP(stream) debug_logger.PopIndent(stream)
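The macro change above adds a per-stream `IsEnabled()` guard so disabled streams skip the `Log()` call entirely. A self-contained sketch of the guarded-logging pattern; the toy `DebugLogger` here is an assumption, and the sketch wraps the guard in `do { } while (0)` (the patch's macros use a bare `if`, which a following `else` can bind to):

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>

// Minimal stand-in for the real DebugLogger: a per-stream enable flag
// plus a counter so we can observe whether Log() actually ran.
struct DebugLogger {
    bool enabled[2];
    int count;
    DebugLogger() : count(0) { enabled[0] = true; enabled[1] = false; }
    bool IsEnabled(int stream) const { return enabled[stream]; }
    void Log(int /* stream */, const char* fmt, ...) {
        char buf[128];
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(buf, sizeof(buf), fmt, ap);
        va_end(ap);
        ++count;
    }
};

static DebugLogger debug_logger;

// Guard first, log second: disabled streams pay only for the flag check.
#define DBG_LOG(stream, ...) \
    do { \
        if ( debug_logger.IsEnabled(stream) ) \
            debug_logger.Log(stream, __VA_ARGS__); \
    } while ( 0 )
```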


@ -216,18 +216,32 @@ void ODesc::Indent()
} }
} }
static const char hex_chars[] = "0123456789abcdef"; static bool starts_with(const char* str1, const char* str2, size_t len)
static const char* find_first_unprintable(ODesc* d, const char* bytes, unsigned int n)
{ {
if ( d->IsBinary() ) for ( size_t i = 0; i < len; ++i )
if ( str1[i] != str2[i] )
return false;
return true;
}
size_t ODesc::StartsWithEscapeSequence(const char* start, const char* end)
{
if ( escape_sequences.empty() )
return 0; return 0;
while ( n-- ) escape_set::const_iterator it;
for ( it = escape_sequences.begin(); it != escape_sequences.end(); ++it )
{ {
if ( ! isprint(*bytes) ) const string& esc_str = *it;
return bytes; size_t esc_len = esc_str.length();
++bytes;
if ( start + esc_len > end )
continue;
if ( starts_with(start, esc_str.c_str(), esc_len) )
return esc_len;
} }
return 0; return 0;
@ -235,21 +249,23 @@ static const char* find_first_unprintable(ODesc* d, const char* bytes, unsigned
pair<const char*, size_t> ODesc::FirstEscapeLoc(const char* bytes, size_t n) pair<const char*, size_t> ODesc::FirstEscapeLoc(const char* bytes, size_t n)
{ {
pair<const char*, size_t> p(find_first_unprintable(this, bytes, n), 1); typedef pair<const char*, size_t> escape_pos;
string str(bytes, n); if ( IsBinary() )
list<string>::const_iterator it; return escape_pos(0, 0);
for ( it = escape_sequences.begin(); it != escape_sequences.end(); ++it )
for ( size_t i = 0; i < n; ++i )
{ {
size_t pos = str.find(*it); if ( ! isprint(bytes[i]) )
if ( pos != string::npos && (p.first == 0 || bytes + pos < p.first) ) return escape_pos(bytes + i, 1);
{
p.first = bytes + pos; size_t len = StartsWithEscapeSequence(bytes + i, bytes + n);
p.second = it->size();
} if ( len )
return escape_pos(bytes + i, len);
} }
return p; return escape_pos(0, 0);
} }
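The rewritten `FirstEscapeLoc` above walks the buffer once, returning the position and length of the first thing that needs escaping: an unprintable byte (length 1) or the start of a registered escape sequence. A standalone sketch under simplified names (free functions instead of `ODesc` methods):

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <set>
#include <string>
#include <utility>

typedef std::set<std::string> escape_set;

// Length of the registered escape sequence that [start, end) begins
// with, or 0 if none matches (mirrors StartsWithEscapeSequence()).
static size_t starts_with_escape(const escape_set& seqs,
                                 const char* start, const char* end) {
    for ( escape_set::const_iterator it = seqs.begin();
          it != seqs.end(); ++it ) {
        size_t len = it->size();
        if ( start + len <= end && it->compare(0, len, start, len) == 0 )
            return len;
    }
    return 0;
}

// Single pass: first unprintable byte wins with length 1, else the
// first position starting a registered sequence wins with its length.
std::pair<const char*, size_t>
first_escape_loc(const escape_set& seqs, const char* bytes, size_t n) {
    for ( size_t i = 0; i < n; ++i ) {
        if ( ! isprint((unsigned char) bytes[i]) )
            return std::make_pair(bytes + i, (size_t) 1);

        size_t len = starts_with_escape(seqs, bytes + i, bytes + n);
        if ( len )
            return std::make_pair(bytes + i, len);
    }
    return std::make_pair((const char*) 0, (size_t) 0);
}
```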
void ODesc::AddBytes(const void* bytes, unsigned int n) void ODesc::AddBytes(const void* bytes, unsigned int n)
@ -266,21 +282,11 @@ void ODesc::AddBytes(const void* bytes, unsigned int n)
while ( s < e ) while ( s < e )
{ {
pair<const char*, size_t> p = FirstEscapeLoc(s, e - s); pair<const char*, size_t> p = FirstEscapeLoc(s, e - s);
if ( p.first ) if ( p.first )
{ {
AddBytesRaw(s, p.first - s); AddBytesRaw(s, p.first - s);
if ( p.second == 1 ) get_escaped_string(this, p.first, p.second, true);
{
char hex[6] = "\\x00";
hex[2] = hex_chars[((*p.first) & 0xf0) >> 4];
hex[3] = hex_chars[(*p.first) & 0x0f];
AddBytesRaw(hex, 4);
}
else
{
string esc_str = get_escaped_string(string(p.first, p.second), true);
AddBytesRaw(esc_str.c_str(), esc_str.size());
}
s = p.first + p.second; s = p.first + p.second;
} }
else else


@ -4,7 +4,7 @@
#define descriptor_h #define descriptor_h
#include <stdio.h> #include <stdio.h>
#include <list> #include <set>
#include <utility> #include <utility>
#include "BroString.h" #include "BroString.h"
@ -54,16 +54,16 @@ public:
void SetFlush(int arg_do_flush) { do_flush = arg_do_flush; } void SetFlush(int arg_do_flush) { do_flush = arg_do_flush; }
void EnableEscaping(); void EnableEscaping();
void AddEscapeSequence(const char* s) { escape_sequences.push_back(s); } void AddEscapeSequence(const char* s) { escape_sequences.insert(s); }
void AddEscapeSequence(const char* s, size_t n) void AddEscapeSequence(const char* s, size_t n)
{ escape_sequences.push_back(string(s, n)); } { escape_sequences.insert(string(s, n)); }
void AddEscapeSequence(const string & s) void AddEscapeSequence(const string & s)
{ escape_sequences.push_back(s); } { escape_sequences.insert(s); }
void RemoveEscapeSequence(const char* s) { escape_sequences.remove(s); } void RemoveEscapeSequence(const char* s) { escape_sequences.erase(s); }
void RemoveEscapeSequence(const char* s, size_t n) void RemoveEscapeSequence(const char* s, size_t n)
{ escape_sequences.remove(string(s, n)); } { escape_sequences.erase(string(s, n)); }
void RemoveEscapeSequence(const string & s) void RemoveEscapeSequence(const string & s)
{ escape_sequences.remove(s); } { escape_sequences.erase(s); }
void PushIndent(); void PushIndent();
void PopIndent(); void PopIndent();
@ -163,6 +163,15 @@ protected:
*/ */
pair<const char*, size_t> FirstEscapeLoc(const char* bytes, size_t n); pair<const char*, size_t> FirstEscapeLoc(const char* bytes, size_t n);
/**
* @param start start of string to check for starting with an espace
* sequence.
* @param end one byte past the last character in the string.
* @return The number of bytes in the escape sequence that the string
* starts with.
*/
size_t StartsWithEscapeSequence(const char* start, const char* end);
desc_type type; desc_type type;
desc_style style; desc_style style;
@ -171,7 +180,8 @@ protected:
unsigned int size; // size of buffer in bytes unsigned int size; // size of buffer in bytes
bool escape; // escape unprintable characters in output? bool escape; // escape unprintable characters in output?
list<string> escape_sequences; // additional sequences of chars to escape typedef set<string> escape_set;
escape_set escape_sequences; // additional sequences of chars to escape
BroFile* f; // or the file we're using. BroFile* f; // or the file we're using.


@ -39,7 +39,10 @@ FuncType* EventHandler::FType()
if ( id->Type()->Tag() != TYPE_FUNC ) if ( id->Type()->Tag() != TYPE_FUNC )
return 0; return 0;
return type = id->Type()->AsFuncType(); type = id->Type()->AsFuncType();
Unref(id);
return type;
} }
void EventHandler::SetLocalHandler(Func* f) void EventHandler::SetLocalHandler(Func* f)


@ -3392,22 +3392,12 @@ bool HasFieldExpr::DoUnserialize(UnserialInfo* info)
return UNSERIALIZE(&not_used) && UNSERIALIZE_STR(&field_name, 0) && UNSERIALIZE(&field); return UNSERIALIZE(&not_used) && UNSERIALIZE_STR(&field_name, 0) && UNSERIALIZE(&field);
} }
RecordConstructorExpr::RecordConstructorExpr(ListExpr* constructor_list, RecordConstructorExpr::RecordConstructorExpr(ListExpr* constructor_list)
BroType* arg_type)
: UnaryExpr(EXPR_RECORD_CONSTRUCTOR, constructor_list) : UnaryExpr(EXPR_RECORD_CONSTRUCTOR, constructor_list)
{ {
ctor_type = 0;
if ( IsError() ) if ( IsError() )
return; return;
if ( arg_type && arg_type->Tag() != TYPE_RECORD )
{
Error("bad record constructor type", arg_type);
SetError();
return;
}
// Spin through the list, which should be comprised of // Spin through the list, which should be comprised of
// either record's or record-field-assign, and build up a // either record's or record-field-assign, and build up a
// record type to associate with this constructor. // record type to associate with this constructor.
@ -3447,17 +3437,11 @@ RecordConstructorExpr::RecordConstructorExpr(ListExpr* constructor_list,
} }
} }
ctor_type = new RecordType(record_types); SetType(new RecordType(record_types));
if ( arg_type )
SetType(arg_type->Ref());
else
SetType(ctor_type->Ref());
} }
RecordConstructorExpr::~RecordConstructorExpr() RecordConstructorExpr::~RecordConstructorExpr()
{ {
Unref(ctor_type);
} }
Val* RecordConstructorExpr::InitVal(const BroType* t, Val* aggr) const Val* RecordConstructorExpr::InitVal(const BroType* t, Val* aggr) const
@ -3483,7 +3467,7 @@ Val* RecordConstructorExpr::InitVal(const BroType* t, Val* aggr) const
Val* RecordConstructorExpr::Fold(Val* v) const Val* RecordConstructorExpr::Fold(Val* v) const
{ {
ListVal* lv = v->AsListVal(); ListVal* lv = v->AsListVal();
RecordType* rt = ctor_type->AsRecordType(); RecordType* rt = type->AsRecordType();
if ( lv->Length() != rt->NumFields() ) if ( lv->Length() != rt->NumFields() )
Internal("inconsistency evaluating record constructor"); Internal("inconsistency evaluating record constructor");
@ -3493,19 +3477,6 @@ Val* RecordConstructorExpr::Fold(Val* v) const
for ( int i = 0; i < lv->Length(); ++i ) for ( int i = 0; i < lv->Length(); ++i )
rv->Assign(i, lv->Index(i)->Ref()); rv->Assign(i, lv->Index(i)->Ref());
if ( ! same_type(rt, type) )
{
RecordVal* new_val = rv->CoerceTo(type->AsRecordType());
if ( new_val )
{
Unref(rv);
rv = new_val;
}
else
Internal("record constructor coercion failed");
}
return rv; return rv;
} }
@ -3521,16 +3492,12 @@ IMPLEMENT_SERIAL(RecordConstructorExpr, SER_RECORD_CONSTRUCTOR_EXPR);
bool RecordConstructorExpr::DoSerialize(SerialInfo* info) const bool RecordConstructorExpr::DoSerialize(SerialInfo* info) const
{ {
DO_SERIALIZE(SER_RECORD_CONSTRUCTOR_EXPR, UnaryExpr); DO_SERIALIZE(SER_RECORD_CONSTRUCTOR_EXPR, UnaryExpr);
SERIALIZE_OPTIONAL(ctor_type);
return true; return true;
} }
bool RecordConstructorExpr::DoUnserialize(UnserialInfo* info) bool RecordConstructorExpr::DoUnserialize(UnserialInfo* info)
{ {
DO_UNSERIALIZE(UnaryExpr); DO_UNSERIALIZE(UnaryExpr);
BroType* t = 0;
UNSERIALIZE_OPTIONAL(t, RecordType::Unserialize(info));
ctor_type = t->AsRecordType();
return true; return true;
} }
@ -3819,7 +3786,9 @@ VectorConstructorExpr::VectorConstructorExpr(ListExpr* constructor_list,
if ( constructor_list->Exprs().length() == 0 ) if ( constructor_list->Exprs().length() == 0 )
{ {
// vector(). // vector().
SetType(new ::VectorType(base_type(TYPE_ANY))); // By default, assign VOID type here. A vector with
// void type set is seen as an unspecified vector.
SetType(new ::VectorType(base_type(TYPE_VOID)));
return; return;
} }
@ -4212,6 +4181,26 @@ RecordCoerceExpr::~RecordCoerceExpr()
delete [] map; delete [] map;
} }
Val* RecordCoerceExpr::InitVal(const BroType* t, Val* aggr) const
{
Val* v = Eval(0);
if ( v )
{
RecordVal* rv = v->AsRecordVal();
RecordVal* ar = rv->CoerceTo(t->AsRecordType(), aggr);
if ( ar )
{
Unref(rv);
return ar;
}
}
Error("bad record initializer");
return 0;
}
Val* RecordCoerceExpr::Fold(Val* v) const Val* RecordCoerceExpr::Fold(Val* v) const
{ {
RecordVal* val = new RecordVal(Type()->AsRecordType()); RecordVal* val = new RecordVal(Type()->AsRecordType());
@ -4236,6 +4225,13 @@ Val* RecordCoerceExpr::Fold(Val* v) const
assert(rhs || Type()->AsRecordType()->FieldDecl(i)->FindAttr(ATTR_OPTIONAL)); assert(rhs || Type()->AsRecordType()->FieldDecl(i)->FindAttr(ATTR_OPTIONAL));
if ( ! rhs )
{
// Optional field is missing.
val->Assign(i, 0);
continue;
}
BroType* rhs_type = rhs->Type(); BroType* rhs_type = rhs->Type();
RecordType* val_type = val->Type()->AsRecordType(); RecordType* val_type = val->Type()->AsRecordType();
BroType* field_type = val_type->FieldType(i); BroType* field_type = val_type->FieldType(i);

View file

@ -753,7 +753,7 @@ protected:
class RecordConstructorExpr : public UnaryExpr { class RecordConstructorExpr : public UnaryExpr {
public: public:
RecordConstructorExpr(ListExpr* constructor_list, BroType* arg_type = 0); RecordConstructorExpr(ListExpr* constructor_list);
~RecordConstructorExpr(); ~RecordConstructorExpr();
protected: protected:
@ -766,8 +766,6 @@ protected:
void ExprDescribe(ODesc* d) const; void ExprDescribe(ODesc* d) const;
DECLARE_SERIAL(RecordConstructorExpr); DECLARE_SERIAL(RecordConstructorExpr);
RecordType* ctor_type; // type inferred from the ctor expression list args
}; };
class TableConstructorExpr : public UnaryExpr { class TableConstructorExpr : public UnaryExpr {
@ -878,6 +876,7 @@ protected:
friend class Expr; friend class Expr;
RecordCoerceExpr() { map = 0; } RecordCoerceExpr() { map = 0; }
Val* InitVal(const BroType* t, Val* aggr) const;
Val* Fold(Val* v) const; Val* Fold(Val* v) const;
DECLARE_SERIAL(RecordCoerceExpr); DECLARE_SERIAL(RecordCoerceExpr);


@ -97,9 +97,9 @@ void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt)
// Linux MTU discovery for UDP can do this, for example. // Linux MTU discovery for UDP can do this, for example.
s->Weird("fragment_with_DF", ip); s->Weird("fragment_with_DF", ip);
int offset = ip->FragOffset(); uint16 offset = ip->FragOffset();
int len = ip->TotalLen(); uint32 len = ip->TotalLen();
int hdr_len = ip->HdrLen(); uint16 hdr_len = ip->HdrLen();
if ( len < hdr_len ) if ( len < hdr_len )
{ {
@ -107,7 +107,7 @@ void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt)
return; return;
} }
int upper_seq = offset + len - hdr_len; uint64 upper_seq = offset + len - hdr_len;
if ( ! offset ) if ( ! offset )
// Make sure to use the first fragment header's next field. // Make sure to use the first fragment header's next field.
@ -178,7 +178,7 @@ void FragReassembler::Weird(const char* name) const
} }
} }
void FragReassembler::Overlap(const u_char* b1, const u_char* b2, int n) void FragReassembler::Overlap(const u_char* b1, const u_char* b2, uint64 n)
{ {
if ( memcmp((const void*) b1, (const void*) b2, n) ) if ( memcmp((const void*) b1, (const void*) b2, n) )
Weird("fragment_inconsistency"); Weird("fragment_inconsistency");
@ -231,7 +231,7 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */)
return; return;
// We have it all. Compute the expected size of the fragment. // We have it all. Compute the expected size of the fragment.
int n = proto_hdr_len + frag_size; uint64 n = proto_hdr_len + frag_size;
// It's possible that we have blocks associated with this fragment // It's possible that we have blocks associated with this fragment
// that exceed this size, if we saw MF fragments (which don't lead // that exceed this size, if we saw MF fragments (which don't lead
@ -260,6 +260,7 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */)
reporter->InternalWarning("bad fragment reassembly"); reporter->InternalWarning("bad fragment reassembly");
DeleteTimer(); DeleteTimer();
Expire(network_time); Expire(network_time);
delete [] pkt_start;
return; return;
} }


@ -34,14 +34,14 @@ public:
protected: protected:
void BlockInserted(DataBlock* start_block); void BlockInserted(DataBlock* start_block);
void Overlap(const u_char* b1, const u_char* b2, int n); void Overlap(const u_char* b1, const u_char* b2, uint64 n);
void Weird(const char* name) const; void Weird(const char* name) const;
u_char* proto_hdr; u_char* proto_hdr;
IP_Hdr* reassembled_pkt; IP_Hdr* reassembled_pkt;
int proto_hdr_len; uint16 proto_hdr_len;
NetSessions* s; NetSessions* s;
int frag_size; // size of fully reassembled fragment uint64 frag_size; // size of fully reassembled fragment
uint16 next_proto; // first IPv6 fragment header's next proto field uint16 next_proto; // first IPv6 fragment header's next proto field
HashKey* key; HashKey* key;


@ -475,6 +475,7 @@ BuiltinFunc::BuiltinFunc(built_in_func arg_func, const char* arg_name,
type = id->Type()->Ref(); type = id->Type()->Ref();
id->SetVal(new Val(this)); id->SetVal(new Val(this));
Unref(id);
} }
BuiltinFunc::~BuiltinFunc() BuiltinFunc::~BuiltinFunc()


@ -1,5 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright. // See the file "COPYING" in the main distribution directory for copyright.
#include <cstdlib>
#include <string> #include <string>
#include <vector> #include <vector>
#include "IPAddr.h" #include "IPAddr.h"
@ -45,6 +46,14 @@ HashKey* BuildConnIDHashKey(const ConnID& id)
return new HashKey(&key, sizeof(key)); return new HashKey(&key, sizeof(key));
} }
static inline uint32_t bit_mask32(int bottom_bits)
{
if ( bottom_bits >= 32 )
return 0xffffffff;
return (((uint32_t) 1) << bottom_bits) - 1;
}
void IPAddr::Mask(int top_bits_to_keep) void IPAddr::Mask(int top_bits_to_keep)
{ {
if ( top_bits_to_keep < 0 || top_bits_to_keep > 128 ) if ( top_bits_to_keep < 0 || top_bits_to_keep > 128 )
@ -53,25 +62,20 @@ void IPAddr::Mask(int top_bits_to_keep)
return; return;
} }
uint32_t tmp[4]; uint32_t mask_bits[4] = { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff };
memcpy(tmp, in6.s6_addr, sizeof(in6.s6_addr)); std::ldiv_t res = std::ldiv(top_bits_to_keep, 32);
int word = 3; if ( res.quot < 4 )
int bits_to_chop = 128 - top_bits_to_keep; mask_bits[res.quot] =
htonl(mask_bits[res.quot] & ~bit_mask32(32 - res.rem));
while ( bits_to_chop >= 32 ) for ( unsigned int i = res.quot + 1; i < 4; ++i )
{ mask_bits[i] = 0;
tmp[word] = 0;
--word;
bits_to_chop -= 32;
}
uint32_t w = ntohl(tmp[word]); uint32_t* p = reinterpret_cast<uint32_t*>(in6.s6_addr);
w >>= bits_to_chop;
w <<= bits_to_chop;
tmp[word] = htonl(w);
memcpy(in6.s6_addr, tmp, sizeof(in6.s6_addr)); for ( unsigned int i = 0; i < 4; ++i )
p[i] &= mask_bits[i];
} }
void IPAddr::ReverseMask(int top_bits_to_chop) void IPAddr::ReverseMask(int top_bits_to_chop)
@ -82,25 +86,19 @@ void IPAddr::ReverseMask(int top_bits_to_chop)
return; return;
} }
uint32_t tmp[4]; uint32_t mask_bits[4] = { 0, 0, 0, 0 };
memcpy(tmp, in6.s6_addr, sizeof(in6.s6_addr)); std::ldiv_t res = std::ldiv(top_bits_to_chop, 32);
int word = 0; if ( res.quot < 4 )
int bits_to_chop = top_bits_to_chop; mask_bits[res.quot] = htonl(bit_mask32(32 - res.rem));
while ( bits_to_chop >= 32 ) for ( unsigned int i = res.quot + 1; i < 4; ++i )
{ mask_bits[i] = 0xffffffff;
tmp[word] = 0;
++word;
bits_to_chop -= 32;
}
uint32_t w = ntohl(tmp[word]); uint32_t* p = reinterpret_cast<uint32_t*>(in6.s6_addr);
w <<= bits_to_chop;
w >>= bits_to_chop;
tmp[word] = htonl(w);
memcpy(in6.s6_addr, tmp, sizeof(in6.s6_addr)); for ( unsigned int i = 0; i < 4; ++i )
p[i] &= mask_bits[i];
} }
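The rewritten masking code above uses `std::ldiv` to split the bit count into a 32-bit word index (`quot`) and a within-word bit count (`rem`), with `bit_mask32` guarding against the undefined shift-by-32. A host-order sketch of the `Mask()` word computation (byte-order conversion omitted; `mask_words` is an illustrative name, not the patch's API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// The >= 32 guard avoids shifting a 32-bit value by 32 bits, which is
// undefined behavior in C/C++.
static inline uint32_t bit_mask32(int bottom_bits) {
    if ( bottom_bits >= 32 )
        return 0xffffffff;
    return (((uint32_t) 1) << bottom_bits) - 1;
}

// Keep the top `top_bits_to_keep` bits of a 128-bit value stored as four
// 32-bit words, word 0 most significant: the boundary word keeps only
// its top `rem` bits, and every word past it is zeroed.
void mask_words(uint32_t words[4], int top_bits_to_keep) {
    uint32_t mask_bits[4] =
        { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff };
    std::ldiv_t res = std::ldiv(top_bits_to_keep, 32);

    if ( res.quot < 4 )
        mask_bits[res.quot] &= ~bit_mask32(32 - res.rem);

    for ( int i = res.quot + 1; i < 4; ++i )
        mask_bits[i] = 0;

    for ( int i = 0; i < 4; ++i )
        words[i] &= mask_bits[i];
}
```

For example, keeping 24 bits leaves word 0 masked to `0xffffff00` and zeroes words 1 through 3; keeping 128 bits leaves everything intact.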
void IPAddr::Init(const std::string& s) void IPAddr::Init(const std::string& s)


@ -27,7 +27,6 @@
 #include "Reporter.h"
 #include "Net.h"
 #include "Anon.h"
-#include "PacketSort.h"
 #include "Serializer.h"
 #include "PacketDumper.h"

@@ -58,8 +57,6 @@ double bro_start_network_time;	// timestamp of first packet
 double last_watchdog_proc_time = 0.0;	// value of above during last watchdog
 bool terminating = false;	// whether we're done reading and finishing up

-PacketSortGlobalPQ* packet_sorter = 0;
-
 const struct pcap_pkthdr* current_hdr = 0;
 const u_char* current_pkt = 0;
 int current_dispatched = 0;

@@ -286,9 +283,6 @@ void net_init(name_list& interfaces, name_list& readfiles,
 	init_ip_addr_anonymizers();

-	if ( packet_sort_window > 0 )
-		packet_sorter = new PacketSortGlobalPQ();
-
 	sessions = new NetSessions();

 	if ( do_watchdog )

@@ -313,14 +307,12 @@ void expire_timers(PktSrc* src_ps)
 void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 			const u_char* pkt, int hdr_size,
-			PktSrc* src_ps, PacketSortElement* pkt_elem)
+			PktSrc* src_ps)
 	{
 	if ( ! bro_start_network_time )
 		bro_start_network_time = t;

-	TimerMgr* tmgr =
-		src_ps ? sessions->LookupTimerMgr(src_ps->GetCurrentTag())
-			: timer_mgr;
+	TimerMgr* tmgr = sessions->LookupTimerMgr(src_ps->GetCurrentTag());

 	// network_time never goes back.
 	network_time = tmgr->Time() < t ? t : tmgr->Time();

@@ -351,7 +343,7 @@ void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 			}
 		}

-	sessions->DispatchPacket(t, hdr, pkt, hdr_size, src_ps, pkt_elem);
+	sessions->DispatchPacket(t, hdr, pkt, hdr_size, src_ps);
 	mgr.Drain();

 	if ( sp )

@@ -367,62 +359,11 @@ void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 	current_pktsrc = 0;
 	}

-int process_packet_sorter(double latest_packet_time)
-	{
-	if ( ! packet_sorter )
-		return 0;
-
-	double min_t = latest_packet_time - packet_sort_window;
-
-	int num_pkts_dispatched = 0;
-	PacketSortElement* pkt_elem;
-
-	// Dispatch packets in the packet_sorter until timestamp min_t.
-	// It's possible that zero or multiple packets are dispatched.
-	while ( (pkt_elem = packet_sorter->RemoveMin(min_t)) != 0 )
-		{
-		net_packet_dispatch(pkt_elem->TimeStamp(),
-			pkt_elem->Hdr(), pkt_elem->Pkt(),
-			pkt_elem->HdrSize(), pkt_elem->Src(),
-			pkt_elem);
-		++num_pkts_dispatched;
-		delete pkt_elem;
-		}
-
-	return num_pkts_dispatched;
-	}
-
-void net_packet_arrival(double t, const struct pcap_pkthdr* hdr,
-			const u_char* pkt, int hdr_size,
-			PktSrc* src_ps)
-	{
-	if ( packet_sorter )
-		{
-		// Note that when we enable packet sorter, there will
-		// be a small window between the time packet arrives
-		// to Bro and when it is processed ("dispatched"). We
-		// define network_time to be the latest timestamp for
-		// packets *dispatched* so far (usually that's the
-		// timestamp of the current packet).
-
-		// Add the packet to the packet_sorter.
-		packet_sorter->Add(
-			new PacketSortElement(src_ps, t, hdr, pkt, hdr_size));
-
-		// Do we have any packets to dispatch from packet_sorter?
-		process_packet_sorter(t);
-		}
-	else
-		// Otherwise we dispatch the packet immediately
-		net_packet_dispatch(t, hdr, pkt, hdr_size, src_ps, 0);
-	}
-
 void net_run()
 	{
 	set_processing_status("RUNNING", "net_run");

 	while ( io_sources.Size() ||
-		(packet_sorter && ! packet_sorter->Empty()) ||
 		(BifConst::exit_only_after_terminate && ! terminating) )
 		{
 		double ts;

@@ -445,14 +386,12 @@ void net_run()
 			current_iosrc = src;

 			if ( src )
-				src->Process();	// which will call net_packet_arrival()
+				src->Process();	// which will call net_packet_dispatch()

 			else if ( reading_live && ! pseudo_realtime)
 				{ // live but no source is currently active
 				double ct = current_time();
-				if ( packet_sorter && ! packet_sorter->Empty() )
-					process_packet_sorter(ct);
-				else if ( ! net_is_processing_suspended() )
+				if ( ! net_is_processing_suspended() )
 					{
 					// Take advantage of the lull to get up to
 					// date on timers and events.

@@ -462,15 +401,6 @@ void net_run()
 					}
 				}

-			else if ( packet_sorter && ! packet_sorter->Empty() )
-				{
-				// We are no longer reading live; done with all the
-				// sources.
-				// Drain packets remaining in the packet sorter.
-				process_packet_sorter(
-					network_time + packet_sort_window + 1000000);
-				}
-
 			else if ( (have_pending_timers || using_communication) &&
 				  ! pseudo_realtime )
 				{

@@ -581,7 +511,6 @@ void net_delete()
 	set_processing_status("TERMINATING", "net_delete");

 	delete sessions;
-	delete packet_sorter;

 	for ( int i = 0; i < NUM_ADDR_ANONYMIZATION_METHODS; ++i )
 		delete ip_anonymizer[i];


@@ -20,7 +20,7 @@ extern void net_run();
 extern void net_get_final_stats();
 extern void net_finish(int drain_events);
 extern void net_delete();	// Reclaim all memory, etc.
-extern void net_packet_arrival(double t, const struct pcap_pkthdr* hdr,
+extern void net_packet_dispatch(double t, const struct pcap_pkthdr* hdr,
 			const u_char* pkt, int hdr_size,
 			PktSrc* src_ps);
 extern int net_packet_match(BPF_Program* fp, const u_char* pkt,

@@ -20,6 +20,8 @@ TableType* string_set;
 TableType* string_array;
 TableType* count_set;
 VectorType* string_vec;
+VectorType* mime_matches;
+RecordType* mime_match;

 int watchdog_interval;

@@ -47,9 +49,6 @@ int tcp_max_initial_window;
 int tcp_max_above_hole_without_any_acks;
 int tcp_excessive_data_without_further_acks;

-RecordType* x509_type;
-RecordType* x509_extension_type;
-
 RecordType* socks_address;

 double non_analyzed_lifetime;

@@ -156,8 +155,6 @@ int table_incremental_step;
 RecordType* packet_type;

-double packet_sort_window;
-
 double connection_status_update_interval;

 StringVal* state_dir;

@@ -332,6 +329,8 @@ void init_net_var()
 	string_set = internal_type("string_set")->AsTableType();
 	string_array = internal_type("string_array")->AsTableType();
 	string_vec = internal_type("string_vec")->AsVectorType();
+	mime_match = internal_type("mime_match")->AsRecordType();
+	mime_matches = internal_type("mime_matches")->AsVectorType();

 	ignore_checksums = opt_internal_int("ignore_checksums");
 	partial_connection_ok = opt_internal_int("partial_connection_ok");

@@ -356,9 +355,6 @@ void init_net_var()
 	tcp_excessive_data_without_further_acks =
 		opt_internal_int("tcp_excessive_data_without_further_acks");

-	x509_type = internal_type("X509")->AsRecordType();
-	x509_extension_type = internal_type("X509_extension_info")->AsRecordType();
-
 	socks_address = internal_type("SOCKS::Address")->AsRecordType();

 	non_analyzed_lifetime = opt_internal_double("non_analyzed_lifetime");

@@ -481,8 +477,6 @@ void init_net_var()
 	packet_type = internal_type("packet")->AsRecordType();

-	packet_sort_window = opt_internal_double("packet_sort_window");
-
 	orig_addr_anonymization = opt_internal_int("orig_addr_anonymization");
 	resp_addr_anonymization = opt_internal_int("resp_addr_anonymization");
 	other_addr_anonymization = opt_internal_int("other_addr_anonymization");


@@ -23,6 +23,8 @@ extern TableType* string_set;
 extern TableType* string_array;
 extern TableType* count_set;
 extern VectorType* string_vec;
+extern VectorType* mime_matches;
+extern RecordType* mime_match;

 extern int watchdog_interval;

@@ -50,9 +52,6 @@ extern int tcp_max_initial_window;
 extern int tcp_max_above_hole_without_any_acks;
 extern int tcp_excessive_data_without_further_acks;

-extern RecordType* x509_type;
-extern RecordType* x509_extension_type;
-
 extern RecordType* socks_address;

 extern double non_analyzed_lifetime;

@@ -159,8 +158,6 @@ extern int table_incremental_step;
 extern RecordType* packet_type;

-extern double packet_sort_window;
-
 extern int orig_addr_anonymization, resp_addr_anonymization;
 extern int other_addr_anonymization;
 extern TableVal* preserve_orig_addr;


@@ -1,364 +0,0 @@
#include "IP.h"
#include "PacketSort.h"

const bool DEBUG_packetsort = false;

PacketSortElement::PacketSortElement(PktSrc* arg_src,
		double arg_timestamp, const struct pcap_pkthdr* arg_hdr,
		const u_char* arg_pkt, int arg_hdr_size)
	{
	src = arg_src;
	timestamp = arg_timestamp;
	hdr = *arg_hdr;
	hdr_size = arg_hdr_size;

	pkt = new u_char[hdr.caplen];
	memcpy(pkt, arg_pkt, hdr.caplen);

	is_tcp = 0;
	ip_hdr = 0;
	tcp_flags = 0;
	endp = 0;
	payload_length = 0;
	key = 0;

	// Now check if it is a "parsable" TCP packet.
	uint32 caplen = hdr.caplen;
	uint32 tcp_offset;

	if ( caplen >= sizeof(struct ip) + hdr_size )
		{
		const struct ip* ip = (const struct ip*) (pkt + hdr_size);

		if ( ip->ip_v == 4 )
			ip_hdr = new IP_Hdr(ip, false);
		else if ( ip->ip_v == 6 && (caplen >= sizeof(struct ip6_hdr) + hdr_size) )
			ip_hdr = new IP_Hdr((const struct ip6_hdr*) ip, false, caplen - hdr_size);
		else
			// Weird will be generated later in NetSessions::NextPacket.
			return;

		if ( ip_hdr->NextProto() == IPPROTO_TCP &&
		     // Note: can't sort fragmented packets
		     ( ! ip_hdr->IsFragment() ) )
			{
			tcp_offset = hdr_size + ip_hdr->HdrLen();
			if ( caplen >= tcp_offset + sizeof(struct tcphdr) )
				{
				const struct tcphdr* tp = (const struct tcphdr*)
							  (pkt + tcp_offset);

				id.src_addr = ip_hdr->SrcAddr();
				id.dst_addr = ip_hdr->DstAddr();
				id.src_port = tp->th_sport;
				id.dst_port = tp->th_dport;
				id.is_one_way = 0;

				endp = addr_port_canon_lt(id.src_addr,
							  id.src_port,
							  id.dst_addr,
							  id.dst_port) ? 0 : 1;

				seq[endp] = ntohl(tp->th_seq);

				if ( tp->th_flags & TH_ACK )
					seq[1-endp] = ntohl(tp->th_ack);
				else
					seq[1-endp] = 0;

				tcp_flags = tp->th_flags;

				// DEBUG_MSG("%.6f: %u, %u\n", timestamp, seq[0], seq[1]);

				payload_length = ip_hdr->PayloadLen() - tp->th_off * 4;

				key = BuildConnIDHashKey(id);

				is_tcp = 1;
				}
			}
		}

	if ( DEBUG_packetsort && ! is_tcp )
		DEBUG_MSG("%.6f non-TCP packet\n", timestamp);
	}

PacketSortElement::~PacketSortElement()
	{
	delete [] pkt;
	delete ip_hdr;
	delete key;
	}

int PacketSortPQ::Timestamp_Cmp(PacketSortElement* a, PacketSortElement* b)
	{
	double d = a->timestamp - b->timestamp;

	if ( d > 0 ) return 1;
	else if ( d < 0 ) return -1;
	else return 0;
	}

int PacketSortPQ::UpdatePQ(PacketSortElement* prev_e, PacketSortElement* new_e)
	{
	int index = prev_e->pq_index[pq_level];

	new_e->pq_index[pq_level] = index;
	pq[index] = new_e;

	if ( Cmp(prev_e, new_e) > 0 )
		return FixUp(new_e, index);
	else
		{
		FixDown(new_e, index);
		return index == 0;
		}
	}

int PacketSortPQ::AddToPQ(PacketSortElement* new_e)
	{
	int index = pq.size();

	new_e->pq_index[pq_level] = index;
	pq.push_back(new_e);

	return FixUp(new_e, index);
	}

int PacketSortPQ::RemoveFromPQ(PacketSortElement* prev_e)
	{
	if ( pq.size() > 1 )
		{
		PacketSortElement* new_e = pq[pq.size() - 1];
		pq.pop_back();
		return UpdatePQ(prev_e, new_e);
		}
	else
		{
		pq.pop_back();
		return 1;
		}
	}

void PacketSortPQ::Assign(int k, PacketSortElement* e)
	{
	pq[k] = e;
	e->pq_index[pq_level] = k;
	}

PacketSortConnPQ::~PacketSortConnPQ()
	{
	// Delete elements only in ConnPQ (not in GlobalPQ) to avoid
	// double delete.
	for ( int i = 0; i < (int) pq.size(); ++i )
		{
		delete pq[i];
		pq[i] = 0;
		}
	}

int PacketSortConnPQ::Cmp(PacketSortElement* a, PacketSortElement* b)
	{
	// Note: here we do not distinguish between packets without
	// an ACK and packets with seq/ack of 0. The latter will be sorted
	// only by their timestamps.
	if ( a->seq[0] && b->seq[0] && a->seq[0] != b->seq[0] )
		return (a->seq[0] > b->seq[0]) ? 1 : -1;
	else if ( a->seq[1] && b->seq[1] && a->seq[1] != b->seq[1] )
		return (a->seq[1] > b->seq[1]) ? 1 : -1;
	else
		return Timestamp_Cmp(a, b);
	}

int PacketSortPQ::FixUp(PacketSortElement* e, int k)
	{
	if ( k == 0 )
		{
		Assign(0, e);
		return 1;
		}

	int parent = (k-1) / 2;
	if ( Cmp(pq[parent], e) > 0 )
		{
		Assign(k, pq[parent]);
		return FixUp(e, parent);
		}
	else
		{
		Assign(k, e);
		return 0;
		}
	}

void PacketSortPQ::FixDown(PacketSortElement* e, int k)
	{
	uint32 kid = k * 2 + 1;

	if ( kid >= pq.size() )
		{
		Assign(k, e);
		return;
		}

	if ( kid + 1 < pq.size() && Cmp(pq[kid], pq[kid+1]) > 0 )
		++kid;

	if ( Cmp(e, pq[kid]) > 0 )
		{
		Assign(k, pq[kid]);
		FixDown(e, kid);
		}
	else
		Assign(k, e);
	}

int PacketSortConnPQ::Add(PacketSortElement* e)
	{
#if 0
	int endp = e->endp;
	uint32 end_seq = e->seq[endp] + e->payload_length;
	int p = 1 - endp;

	if ( (e->tcp_flags & TH_RST) && ! (e->tcp_flags & TH_ACK) )
		{
		DEBUG_MSG("%.6f %c: %u -> %u\n",
			  e->TimeStamp(), (p == endp) ? 'S' : 'A',
			  e->seq[p], next_seq[p]);
		e->seq[p] = next_seq[p];
		}

	if ( end_seq > next_seq[endp] )
		next_seq[endp] = end_seq;
#endif

	return AddToPQ(e);
	}

void PacketSortConnPQ::UpdateDeliveredSeq(int endp, int seq, int len, int ack)
	{
	if ( delivered_seq[endp] == 0 || delivered_seq[endp] == seq )
		delivered_seq[endp] = seq + len;

	if ( ack > delivered_seq[1 - endp] )
		delivered_seq[endp] = ack;
	}

bool PacketSortConnPQ::IsContentGapSafe(PacketSortElement* e)
	{
	int ack = e->seq[1 - e->endp];
	return ack <= delivered_seq[1 - e->endp];
	}

int PacketSortConnPQ::Remove(PacketSortElement* e)
	{
	int ret = RemoveFromPQ(e);
	UpdateDeliveredSeq(e->endp, e->seq[e->endp], e->payload_length,
			   e->seq[1 - e->endp]);
	return ret;
	}

static void DeleteConnPQ(void* p)
	{
	delete (PacketSortConnPQ*) p;
	}

PacketSortGlobalPQ::PacketSortGlobalPQ()
	{
	pq_level = GLOBAL_PQ;
	conn_pq_table.SetDeleteFunc(DeleteConnPQ);
	}

PacketSortGlobalPQ::~PacketSortGlobalPQ()
	{
	// Destruction of PacketSortConnPQ will delete all conn_pq's.
	}

int PacketSortGlobalPQ::Add(PacketSortElement* e)
	{
	if ( e->is_tcp )
		{
		// TCP packets are sorted by sequence numbers.
		PacketSortConnPQ* conn_pq = FindConnPQ(e);
		PacketSortElement* prev_min = conn_pq->Min();

		if ( conn_pq->Add(e) )
			{
			ASSERT(conn_pq->Min() != prev_min);
			if ( prev_min )
				return UpdatePQ(prev_min, e);
			else
				return AddToPQ(e);
			}
		else
			{
			ASSERT(conn_pq->Min() == prev_min);
			return 0;
			}
		}
	else
		return AddToPQ(e);
	}

PacketSortElement* PacketSortGlobalPQ::RemoveMin(double timestamp)
	{
	PacketSortElement* e = Min();

	if ( ! e )
		return 0;

	if ( e->is_tcp )
		{
		PacketSortConnPQ* conn_pq = FindConnPQ(e);

#if 0
		// Note: the content gap safety check does not work
		// because we remove the state for a connection once
		// it has no packet in the priority queue.

		// Do not deliver e if it arrives later than timestamp,
		// and is not content-gap-safe.
		if ( e->timestamp > timestamp &&
		     ! conn_pq->IsContentGapSafe(e) )
			return 0;
#else
		if ( e->timestamp > timestamp )
			return 0;
#endif

		conn_pq->Remove(e);
		PacketSortElement* new_e = conn_pq->Min();

		if ( new_e )
			UpdatePQ(e, new_e);
		else
			{
			RemoveFromPQ(e);
			conn_pq_table.Remove(e->key);
			delete conn_pq;
			}
		}
	else
		RemoveFromPQ(e);

	return e;
	}

PacketSortConnPQ* PacketSortGlobalPQ::FindConnPQ(PacketSortElement* e)
	{
	if ( ! e->is_tcp )
		reporter->InternalError("cannot find a connection for an invalid id");

	PacketSortConnPQ* pq = (PacketSortConnPQ*) conn_pq_table.Lookup(e->key);
	if ( ! pq )
		{
		pq = new PacketSortConnPQ();
		conn_pq_table.Insert(e->key, pq);
		}

	return pq;
	}


@@ -1,132 +0,0 @@
#ifndef packetsort_h
#define packetsort_h

// Timestamps can be imprecise and even inconsistent among packets
// from different sources. This class tries to guess a "correct"
// order by looking at TCP sequence numbers.
//
// In particular, it tries to eliminate "false" content gaps.

#include "Dict.h"
#include "Conn.h"

enum {
	CONN_PQ,
	GLOBAL_PQ,
	NUM_OF_PQ_LEVEL,
};

class PktSrc;

class PacketSortElement {
public:
	PacketSortElement(PktSrc* src, double timestamp,
			  const struct pcap_pkthdr* hdr,
			  const u_char* pkt, int hdr_size);
	~PacketSortElement();

	PktSrc* Src() const	{ return src; }
	double TimeStamp() const	{ return timestamp; }
	const struct pcap_pkthdr* Hdr() const	{ return &hdr; }
	const u_char* Pkt() const	{ return pkt; }
	int HdrSize() const	{ return hdr_size; }
	const IP_Hdr* IPHdr() const	{ return ip_hdr; }

protected:
	PktSrc* src;
	double timestamp;
	struct pcap_pkthdr hdr;
	u_char* pkt;
	int hdr_size;

	IP_Hdr* ip_hdr;
	int is_tcp;
	ConnID id;
	uint32 seq[2];	// indexed by endpoint
	int tcp_flags;
	int endp;	// 0 or 1
	int payload_length;
	HashKey* key;

	int pq_index[NUM_OF_PQ_LEVEL];

	friend class PacketSortPQ;
	friend class PacketSortConnPQ;
	friend class PacketSortGlobalPQ;
};

class PacketSortPQ {
public:
	PacketSortPQ()
		{ pq_level = -1; }
	virtual ~PacketSortPQ() {}

	PacketSortElement* Min() const	{ return (pq.size() > 0) ? pq[0] : 0; }

protected:
	virtual int Cmp(PacketSortElement* a, PacketSortElement* b) = 0;
	int Timestamp_Cmp(PacketSortElement* a, PacketSortElement* b);

	int UpdatePQ(PacketSortElement* prev_e, PacketSortElement* new_e);
	int AddToPQ(PacketSortElement* e);
	int RemoveFromPQ(PacketSortElement* e);

	void Assign(int k, PacketSortElement* e);
	int FixUp(PacketSortElement* e, int k);
	void FixDown(PacketSortElement* e, int k);

	vector<PacketSortElement*> pq;
	int pq_level;
};

// Sort by sequence numbers within a connection.
class PacketSortConnPQ : public PacketSortPQ {
public:
	PacketSortConnPQ()
		{
		pq_level = CONN_PQ;
		delivered_seq[0] = delivered_seq[1] = 0;
		}
	~PacketSortConnPQ();

	int Add(PacketSortElement* e);
	int Remove(PacketSortElement* e);
	bool IsContentGapSafe(PacketSortElement* e);

protected:
	int Cmp(PacketSortElement* a, PacketSortElement* b);
	void UpdateDeliveredSeq(int endp, int seq, int len, int ack);

	int delivered_seq[2];
};

declare(PDict, PacketSortConnPQ);

// Sort by timestamps.
class PacketSortGlobalPQ : public PacketSortPQ {
public:
	PacketSortGlobalPQ();
	~PacketSortGlobalPQ();

	int Add(PacketSortElement* e);
	int Empty() const	{ return conn_pq_table.Length() == 0; }

	// Returns the next packet to dispatch if it arrives earlier than the
	// given timestamp, otherwise returns 0.
	// The packet, if to be returned, is also removed from the
	// priority queue.
	PacketSortElement* RemoveMin(double timestamp);

protected:
	int Cmp(PacketSortElement* a, PacketSortElement* b)
		{ return Timestamp_Cmp(a, b); }
	PacketSortConnPQ* FindConnPQ(PacketSortElement* e);

	PDict(PacketSortConnPQ) conn_pq_table;
};

#endif


@@ -229,12 +229,21 @@ void PktSrc::Process()
 		{
 		// MPLS carried over the ethernet frame.
 		case 0x8847:
+			// Remove the data link layer and denote a
+			// header size of zero before the IP header.
 			have_mpls = true;
+			data += get_link_header_size(datalink);
+			pkt_hdr_size = 0;
 			break;

 		// VLAN carried over the ethernet frame.
 		case 0x8100:
 			data += get_link_header_size(datalink);
+
+			// Check for MPLS in VLAN.
+			if ( ((data[2] << 8) + data[3]) == 0x8847 )
+				have_mpls = true;
+
 			data += 4; // Skip the vlan header
 			pkt_hdr_size = 0;

@@ -274,8 +283,13 @@ void PktSrc::Process()
 		protocol = (data[2] << 8) + data[3];

 		if ( protocol == 0x0281 )
-			// MPLS Unicast
+			{
+			// MPLS Unicast. Remove the data link layer and
+			// denote a header size of zero before the IP header.
 			have_mpls = true;
+			data += get_link_header_size(datalink);
+			pkt_hdr_size = 0;
+			}

 		else if ( protocol != 0x0021 && protocol != 0x0057 )
 			{

@@ -290,12 +304,6 @@ void PktSrc::Process()

 	if ( have_mpls )
 		{
-		// Remove the data link layer
-		data += get_link_header_size(datalink);
-
-		// Denote a header size of zero before the IP header
-		pkt_hdr_size = 0;
-
 		// Skip the MPLS label stack.
 		bool end_of_stack = false;

@@ -309,13 +317,13 @@ void PktSrc::Process()
 	if ( pseudo_realtime )
 		{
 		current_pseudo = CheckPseudoTime();
-		net_packet_arrival(current_pseudo, &hdr, data, pkt_hdr_size, this);
+		net_packet_dispatch(current_pseudo, &hdr, data, pkt_hdr_size, this);
 		if ( ! first_wallclock )
 			first_wallclock = current_time(true);
 		}

 	else
-		net_packet_arrival(current_timestamp, &hdr, data, pkt_hdr_size, this);
+		net_packet_dispatch(current_timestamp, &hdr, data, pkt_hdr_size, this);

 	data = 0;
 	}
Some files were not shown because too many files have changed in this diff.